\begin{document}
\author{Ha Thu Nguyen}
\address{DPMMS, CMS, Wilberforce Road, Cambridge, CB3 0WB, England UK.}
\email{[email protected]}
\thanks{Keywords: symmetric groups, modular representation theory, integral design theory}
\thanks{Mathematics Subject Classification (2010): Primary 20C30; Secondary 20E48, 20G40}
\begin{abstract}
We aim to construct an element satisfying Hemmer's combinatorial criterion for $H^1(\mathfrak{S}_n, S^\lambda)$ to be non-vanishing. In the process, we discover an unexpected and surprising link between the combinatorial theory of integral designs and the representation theory of the symmetric groups.\end{abstract}
\maketitle
\section{Introduction}\label{intro}
It is well known that cohomology groups largely control representation theory. However, these groups are very difficult to compute and very few are known explicitly. For a Specht module $S^\lambda$ of the symmetric group $\mathfrak{S}_n$, where $\lambda$ is a partition of $n$, the cohomology $H^i(\mathfrak{S}_n,S^\lambda)$ is known only in degree $i = 0$. Further known results about first cohomology groups concern mostly Specht modules corresponding to hook partitions or two-part partitions (Weber~\cite{Weber}). However, the proofs for these involve powerful and complicated algebraic group machinery. Recently, David Hemmer~\cite{Hemmer12} proposed a method that allows one (in principle, although it is difficult in practice) to check whether $H^1(\mathfrak{S}_n,S^\lambda)$ is trivial or not by purely combinatorial means. While Hemmer's criterion for non-zero $H^1(\mathfrak{S}_n, S^\lambda)$ provides some understanding of these groups, there is currently no effective method for determining when the criterion is fulfilled. Even in the cases where it is known that $H^1(\mathfrak{S}_n, S^\lambda)$ is non-zero, so that the criterion holds, proving directly that the criterion does indeed hold has been difficult.
In this paper, we aim to construct an element $u$ of the permutation module $M^\lambda$ of the symmetric group $\mathfrak{S}_n$, that satisfies the necessary and sufficient conditions found by Hemmer~\cite{Hemmer12} for the cohomology group $H^1(\mathfrak{S}_n, S^\lambda)$ to be non-zero. In constructing this element for $2$-part partitions, we discover an unexpected and surprising link between the combinatorial theory of integral designs and the representation theory of symmetric groups. A similar, although much more complex, procedure should be sufficient to construct the desired element of Hemmer's criterion for partitions with three or more non-zero parts. There is much hope that this will lead to a classification of partitions labelling the Specht modules $S^\lambda$ such that $H^1(\mathfrak{S}_n,S^\lambda)$ is non-zero, and this is a possible source of future research.
The outline of the paper is as follows. In section $2$, we introduce Hemmer's criterion and collect some relevant known results that will be used throughout. In section $3$, we give a complementary argument to apply the criterion to do some computations and extend Theorems $5.8$ and $5.11$ in Hemmer~\cite{Hemmer12}. This argument highlights the fact that it is difficult to apply Hemmer's method in practice and that one needs a novel approach to tackle first degree cohomology groups of Specht modules in a purely combinatorial way. One such approach is to use the theory of integral designs, as shown in the second half of section $3$ and in section $4$. Finally, we collect in the conclusion some general conjectures posed by Hemmer~\cite{Hemmer12} that could perhaps be attacked by our new approach.
\section{Hemmer's criterion}
In this section, we describe Hemmer's criterion for non-zero $H^1(\mathfrak{S}_n, S^\lambda)$. Let us first introduce some notation and known results that will be used throughout.
A \emph{partition} of $n$, denoted $\lambda \vdash n$, is a non-increasing string of non-negative integers $\lambda=(\lambda_1,\lambda_2, \dots, \lambda_r)$ summing to $n$. Write $[\lambda]$ for the \emph{Young diagram} of $\lambda$, i.e.\ $[\lambda] = \{ (i,j) \in \mathbb{N}^2 \mid j \leq \lambda_i \}$. We adopt the convention of drawing the Young diagram by taking the $i$-axis to run from top to bottom of the page, and the $j$-axis from left to right, and placing a \text{\sffamily \large{x}} at each node. For example, the partition $(5, 3, 1)$ has the following diagram:
\[
\gyoung(:\text{\sffamily \large{x}}:\text{\sffamily \large{x}}:\text{\sffamily \large{x}}:\text{\sffamily \large{x}}:\text{\sffamily \large{x}},:\text{\sffamily \large{x}}:\text{\sffamily \large{x}}:\text{\sffamily \large{x}},:\text{\sffamily \large{x}})
\]
A $\lambda$-\emph{tableau} is an assignment of $\{1,2, \dots, n \}$ to the nodes in $[\lambda]$. For example,
\[
\gyoung(:1:3:2:5:4,:8:7:9,:6)
\]
\noindent
is a $(5, 3, 1 )$-tableau.
The symmetric group acts naturally on the set of $\lambda$-tableaux. For a tableau $t$, its row stabiliser $R_t$ is the subgroup of $\mathfrak{S}_n$ fixing the rows of $t$ setwise. Say $t$ and $s$ are row equivalent if $t = \pi s$ for some $\pi \in R_s$. An equivalence class is called a $\lambda$-\emph{tabloid}, and the class of $t$ is denoted by $\{ t \}$. We shall depict $\{t\}$ by drawing lines between the rows of $t$. For instance,
\[
\youngtabloid(12345,678,9)
\]
\noindent
is a $(5, 3, 1)$-tabloid.
For $R$ a commutative ring, the \emph{permutation module $M^\lambda_R$} is the free $R$-module with basis the set of $\lambda$-tabloids. We drop the subscript $R$ when there is no confusion. If $\lambda=(\lambda_1,\lambda_2, \dots, \lambda_r)$, there is a corresponding \emph{Young subgroup} $\mathfrak{S}_\lambda \cong \mathfrak{S}_{\lambda_1} \times \mathfrak{S}_{\lambda_2} \times \dots \times \mathfrak{S}_{\lambda_r} \leq \mathfrak{S}_n$. The stabiliser of a $\lambda$-tabloid $\{ t \} $ is clearly a conjugate of $\mathfrak{S}_\lambda$ in $\mathfrak{S}_n$, and $\mathfrak{S}_n$ acts transitively on $\lambda$-tabloids, so we have
\[
M^\lambda_R \cong \operatorname{Ind}_{\mathfrak{S}_\lambda}^{\mathfrak{S}_n}R.
\]
Since $M^\lambda$ is a transitive permutation module, it has a one-dimensional fixed-point space under the action of $\mathfrak{S}_n$. Let $f_\lambda \in M^\lambda$ denote the sum of all the $\lambda$-tabloids, so $f_\lambda$ spans this fixed subspace.
Note that the definition of $M^\lambda$ as the permutation module of $\mathfrak{S}_n$ on a Young subgroup does not require $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_r$ so $M^\lambda$ is also defined for compositions $\lambda$ i.e.\ $\lambda=(\lambda_1, \lambda_2, \dots, \lambda_r)$ such that $\sum_{i=1}^r \lambda_i =n$.
\begin{Example}
\[
n=10, \lambda = (4, 5, 0, 1) \text{ and } M^{(4, 5, 0, 1)} \cong M^{(5, 4, 1)}.
\]
\end{Example}
The Specht module $S^{\lambda}$ is defined explicitly as the submodule of $M^{\lambda}$ spanned by certain linear combinations of tabloids, called polytabloids. In characteristic zero the Specht modules $\{S^{\lambda} \mid \lambda \vdash n\}$ give a complete set of non-isomorphic simple $\mathfrak{S}_n$-modules. James~\cite{James78} gave an important alternative description of $S^\lambda$ inside $M^\lambda$ as the intersection of the kernels of certain homomorphisms from $M^\lambda$ to other permutation modules.
Let $\lambda=(\lambda_1,\lambda_2, \dots, \lambda_r) \vdash n$, and let $\nu=(\lambda_1,\lambda_2, \dots, \lambda_{i-1}, \lambda_{i} + \lambda_{i+1} -v, v, \lambda_{i+2}, \dots )$. Define the module homomorphism $\psi_{i,v} : M^\lambda \to M^\nu$ by
\begin{align*}
\psi_{i,v}(\{t\}) =& \sum \{ \{t_1\} \ |\ \{t_1\} \ \text{agrees with}\ \{t\} \ \text{on all rows except rows}\ i \ \text{and}\ i+1,\\
&\text{and row} \ i+1 \ \text{of}\ \{t_1\} \ \text{is a subset of size}\ v \ \text{in row} \ i+1 \ \text{of} \ \{t\}\}.
\end{align*}
\begin{Theorem}[\textbf{Kernel Intersection Theorem}, James~\cite{James78}] \label{kernel}
Suppose $\lambda \vdash d$ has $r$ non-zero parts. Then
\[
S^\lambda = \bigcap_{i=1}^{r-1} \bigcap_{v=0}^{\lambda_i-1} \ker (\psi_{i,v}) \subseteq M^\lambda.
\]
\end{Theorem}
So given a linear combination of tabloids $u \in M^\lambda$, this gives an explicit test for whether $u\in S^\lambda$.
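James' kernel test is directly computable for two-part partitions, where a tabloid is determined by its second row. The following sketch (our own illustration, not part of the paper's argument; the dictionary representation of tabloids is ours) implements $\psi_{1,v}$ and checks that the difference of two $(4,1)$-tabloids lies in $\ker\psi_{1,0}$, the only kernel occurring in the theorem for $\lambda = (4,1)$, and hence in $S^{(4,1)}$.

```python
from itertools import combinations
from collections import defaultdict

def psi(u, v):
    """Apply psi_{1,v} to u, a Z-linear combination of two-part tabloids
    represented as a dict: frozenset (the second row) -> coefficient."""
    out = defaultdict(int)
    for row2, coeff in u.items():
        for sub in combinations(sorted(row2), v):
            out[frozenset(sub)] += coeff
    return dict(out)

# lambda = (4, 1), n = 5: a tabloid is determined by the singleton in
# its second row.  For u = {2} - {1}, psi_{1,0}(u) = 0, so u lies in
# the single kernel of the Kernel Intersection Theorem, i.e. in S^(4,1).
u = {frozenset({2}): 1, frozenset({1}): -1}
assert all(c == 0 for c in psi(u, 0).values())
```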
\begin{Theorem}[James~\cite{James78}]\label{H0}
Given $t \in \mathbb{Z}$, let $l_p(t)$ be the smallest non-negative integer satisfying $t < p^{l_p(t)}$. The space of invariants $\operatorname{Hom}_{k\mathfrak{S}_d}(k,S^\lambda)=H^0(\mathfrak{S}_d,S^\lambda)$ is zero unless $\lambda_i \equiv -1 \pmod {p^{l_p(\lambda_{i+1})}}$ for all $i$ such that $\lambda_{i+1} \neq 0$, in which case it is one-dimensional.
\end{Theorem}
Generalizing James' work, Hemmer proved the following.
\begin{Theorem}[Hemmer~\cite{Hemmer12}]\label{ext} Let $p>2$ and $\lambda \vdash d$. Then $\operatorname{Ext}^1(k, S^\lambda) \neq 0$ if and only if there exists $u \in M^\lambda$ with the following properties:
(i) For each $\psi_{i,v}: M^\lambda \to M^\nu$ appearing in Theorem~\ref{kernel}, $\psi_{i,v}(u)$ is a multiple of $f_\nu$, at least one of which is a non-zero multiple.
(ii) There does not exist a scalar $a \neq 0$ such that all the $\psi_{i,v}(af_\lambda-u)$ are zero.
\noindent
If so then the subspace spanned by $S^\lambda$ and $u$ is a submodule that is a non-split extension of $k$ by $S^\lambda$.
\end{Theorem}
\begin{Remark}
We can replace $M^\lambda$ by the Young module $Y^\lambda$ --- the unique indecomposable direct summand of $M^\lambda$ containing the Specht module $S^\lambda$.
\end{Remark}
In the same paper, Hemmer~\cite{Hemmer12} proposed the following problem:
\begin{Problem}\label{non-vanishing hom}
It is known by an argument of Andersen (Proposition $5.2.4$~\cite{Hem09}) that for $\lambda \neq (d)$, if $H^0(\mathfrak{S}_d,S^\lambda) \neq 0$ then $H^1(\mathfrak{S}_d, S^\lambda) \neq 0$. For each such $\lambda$ (given by Theorem~\ref{H0}) construct an element $u \in M^\lambda$ as in Theorem~\ref{ext}.
\end{Problem}
\begin{Remark}
We note that for $\lambda$ such that $H^0(\mathfrak{S}_d,S^\lambda) \neq 0$, condition (ii) in \textbf{Theorem~\ref{ext}} is implied by condition (i). Hence, one natural approach to \textbf{Problem~\ref{non-vanishing hom}} is the following.
\end{Remark}
\begin{planA}
Find a $u \in M^\lambda$ such that all the $\psi_{i,v}(u)$ vanish except for precisely one value of $(i, v)$.
\end{planA}
Finally, many of the problems involving Specht modules that arise depend upon whether or not the prime characteristic $p$ divides certain binomial coefficients. We collect several relevant lemmas below.
\begin{Lemma}[James~\cite{James78}, Lemma 22.4]\label{lem224}
Assume that
\begin{align*}
a &= a_0 + a_1p + \dots + a_rp^r \quad (0 \leq a_i < p),\\
b &= b_0 + b_1p + \dots + b_rp^r \quad (0 \leq b_i < p).
\end{align*}
\noindent
Then $\binom{a}{b} \equiv \binom{a_0}{b_0}\binom{a_1}{b_1}\dots \binom{a_r}{b_r}$ $\pmod{p}$. In particular, $p$ divides $\binom{a}{b}$ if and only if $a_i < b_i$ for some $i$.
\end{Lemma}
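Lemma~\ref{lem224} is easy to check by machine. A minimal sketch (our own illustration, not part of the paper's argument), computing $\binom{a}{b} \bmod p$ digit by digit and comparing with direct computation:

```python
from math import comb

def binom_mod_p(a, b, p):
    """binom(a, b) mod p via Lucas' theorem: multiply the binomial
    coefficients of the base-p digits of a and b."""
    r = 1
    while a or b:
        a, ad = divmod(a, p)
        b, bd = divmod(b, p)
        r = r * comb(ad, bd) % p   # comb(ad, bd) = 0 when ad < bd
    return r

for a in range(60):
    for b in range(60):
        assert binom_mod_p(a, b, 5) == comb(a, b) % 5
```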
\begin{Corollary}[James~\cite{James78}, Corollary 22.5]\label{cor225}
Assume $a \geq b \geq 1$. Then all the binomial coefficients $\binom{a}{b}$, $\binom{a-1}{b-1}, \dots, \binom{a-b+1}{1}$ are divisible by $p$ if and only if
\[
a-b \equiv -1 \pmod{p^{l_p(b)}}.
\]
\end{Corollary}
\begin{Lemma}[Kummer~\cite{Kummer}, p.116]\label{lemkum}
The highest power of a prime $p$ that divides the binomial coefficient $\binom{x+y}{x}$ is equal to the number of carries that occur when the integers $x$ and $y$ are added in $p$-ary notation.
\end{Lemma}
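Kummer's lemma, too, is easy to test numerically; the following sketch (ours, illustrative only) compares the carry count with the $p$-adic valuation:

```python
from math import comb

def vp(m, p):
    """p-adic valuation of a positive integer m."""
    k = 0
    while m % p == 0:
        m //= p
        k += 1
    return k

def carries(x, y, p):
    """Number of carries when x and y are added in base p."""
    c = total = 0
    while x or y or c:
        c = 1 if x % p + y % p + c >= p else 0
        total += c
        x //= p
        y //= p
    return total

for x in range(1, 40):
    for y in range(1, 40):
        assert carries(x, y, 3) == vp(comb(x + y, x), 3)
```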
\section{New results and introduction to design theory}
\subsection{Two-part partitions and integral designs.}
In this section, we specialize to $2$-part partitions. In this case, $\lambda$-tabloids are determined by the second row.
\begin{Example} We can represent
\[
\youngtabloid(12345,678)
\]
simply by $\overline{6\ 7 \ 8}$.
\end{Example}
In~\cite{Hemmer12}, Hemmer followed \textbf{Plan A} to find explicit elements $u$ for $\lambda = (p^a, p^a)$ and $\lambda = (p^b-1,p^a)$ for $b >a$. We will give a complementary argument to shed light on why this natural approach is particularly suitable for the values of $\lambda$ chosen by Hemmer: they belong to a subclass of the class of $2$-part partitions where the second part is a $p$-power. Our argument also extends Theorems $5.8$ and $5.11$ in Hemmer~\cite{Hemmer12}.
\begin{Theorem}\label{2-p-power}
Let $k$ have characteristic $p \geq 3$. Then
(1) $H^1(\mathfrak{S}_d, S^{(rp^n,\ p^n)}) \neq 0$ for any $n\geq 1$, $r \geq1$, $p \nmid (r+1)$ and $p \nmid r$.
(2) $H^1(\mathfrak{S}_d, S^{(a,\ p^n)}) \neq 0$ for any $a \geq p^n$ and $a\equiv -1 \pmod{p^{n+1}}$.
\end{Theorem}
\begin{proof}
Suppose that, for a partition $(a,b)$ in the class of partitions considered in parts $(1)$ and $(2)$ of the theorem, there is an explicit $u \in M^{(a, b)}$ such that for each map
\begin{align*}
\psi_{1,v}: &M^{(a,\ b)} \to M^{(a+b-v,\ v)}\\
& \overline{i_1 \dots i_b} \mapsto \sum_{\{j_1,\dots,j_v\} \in [b]^{(v)}}\overline{i_{j_1} \dots i_{j_v}}
\end{align*}
with $0 \leq v \leq b-1$, we have $\psi_{1,v}(u)=\lambda_vf_v$, where $\lambda_v \neq 0$ for some $v$. Here $f_v=f_{(a+b-v,\ v)}$, $[b] = \{1, 2, \dots, b \}$, and for a set $X$, $X^{(v)}$ denotes the set of all subsets of $X$ of size $v$.
Let $u = \sum{\lambda_{i_1...i_b}}\overline{i_1\dots i_b}$, where the sum is over all the subsets of size $b$ of $\{1,2,\dots, a+b\}$. Then for a fixed $v$, $1\leq v \leq b-1$,
\[
\psi_{1,v}\left(\sum{\lambda_{i_1...i_b}}\overline{i_1\dots i_b}\right) = \sum{\lambda_{i_1...i_b}}(\overline{i_1 \dots i_v} + \dots) = \lambda_v(\overline{1 \dots v} + \dots) = \lambda_v f_v,
\]
where both sums are over all the $b$-sets of $\{1, 2, \dots, a+b\}$. Counting the total number of tabloids involved on both sides, i.e. equating the sum of coefficients on both sides, we have
\[
\binom{a+b}{v}\lambda_v = \binom{b}{v} \sum{\lambda_{i_1...i_b}}.
\]
Replacing $v$ by $b-v$ gives
\[
\binom{a+b}{b-v}\lambda_{b-v} = \binom{b}{b-v} \sum{\lambda_{i_1...i_b}}
\]
\begin{align*}
&\Rightarrow \frac{(a+b)(a+b-1)\dots (a+v+1)}{(b-v)!} \lambda_{b-v}=\frac{b(b-1) \dots (b-v+1)}{v!}\sum{\lambda_{i_1...i_b}}\\
&\Rightarrow (a+b)(a+b-1)\dots (a+v+1) v! \lambda_{b-v} = b(b-1) \dots (b-v+1) (b-v)! \sum{\lambda_{i_1...i_b}}\\
&\Rightarrow (a+b)(a+b-1)\dots (a+v+1) \lambda_{b-v} = b(b-1) \dots (v+1) \sum{\lambda_{i_1...i_b}} \label{star} \tag{$\star$}
\end{align*}
\underline{\emph{Case $1$: $a = r p^n$, $b = p^n$, and $p \nmid r$.}}
\noindent
Recall that for an integer $m$, $v_p(m)$ is the greatest integer $t$ such that $p^t | m$.
\noindent
\underline{Claim:} For p an odd prime,
\[
v_p((r+1)p^n-i) = v_p(p^n-i) \quad \forall 1\leq i \leq p^n-1.
\]
\noindent
\underline{Proof of claim:}
This follows from a well-known property of the non-archimedean valuation $v_p$:
\[
v_p(x) \neq v_p(y) \Rightarrow v_p(x+y) = \min \{ v_p(x),v_p(y)\}.
\]
Indeed, $(r+1)p^n-i = rp^n + (p^n-i)$, and $v_p(rp^n) = n$ since $p \nmid r$, while $v_p(p^n-i) \leq n-1$ for $1\leq i \leq p^n-1$.
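The claim can also be confirmed numerically; a quick sketch (our own, with small illustrative parameters):

```python
def vp(m, p):
    """p-adic valuation of a positive integer m."""
    k = 0
    while m % p == 0:
        m //= p
        k += 1
    return k

# v_p((r+1)p^n - i) = v_p(p^n - i) for 1 <= i <= p^n - 1 and p not
# dividing r: (r+1)p^n - i = rp^n + (p^n - i), and v_p(rp^n) = n
# exceeds v_p(p^n - i) <= n - 1, so the smaller valuation wins.
p, n = 3, 2
for r in (1, 2, 4, 5, 7):
    for i in range(1, p**n):
        assert vp((r + 1) * p**n - i, p) == vp(p**n - i, p)
```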
Next, we work over $\mathbb{Z}$, so that all the coefficients $\lambda_{i_1\dots i_b}$ are integers. From the above claim and equation~\eqref{star}, $v_p(\lambda_{b-v}) = v_p(\sum{\lambda_{i_1...i_b}})$ for all $1\leq v \leq b-1$, i.e.\ $v_p(\sum{\lambda_{i_1...i_b}}) = v_p(\lambda_1) = v_p(\lambda_2) = \dots = v_p(\lambda_{b-1}) $.
It is also easy to see from~\eqref{star} that $\lambda_{b-v} \equiv \sum{\lambda_{i_1...i_b}}\pmod {p}$ for all $1\leq v \leq b-1$, i.e.\ $\sum{\lambda_{i_1...i_b}} \equiv \lambda_1 \equiv \lambda_2 \equiv \dots \equiv \lambda_{b-1} \pmod{p}$ i.e.\ $\sum{\lambda_{i_1...i_b}} = \lambda_1 = \lambda_2 = \dots = \lambda_{b-1}$ in $k$. We believe these necessary conditions are also sufficient: it is likely that we could find a $u$ for each of the values of $\sum{\lambda_{i_1...i_b}}$ mod $p$. Here we give $u$ in the case $\sum{\lambda_{i_1...i_b}} \equiv 0 \pmod p$: let $u = \sum_{i=0}^{p^n-1}(i+1)v_i$, where
\[
v_i = \sum\{\{t\} \in M^{(rp^n,p^n)} \mid \text{exactly } i \text{ of } \{1, 2, \dots, p^n-1\} \text{ lie in row two of } \{t\}\},
\]
\noindent
for $0 \leq i \leq p^n-1$.
An argument similar to Hemmer's~\cite{Hemmer12}, together with Lemma~\ref{lem224} and Corollary~\ref{cor225}, shows that $\psi_{1,0}(u)=c \cdot \emptyset \text{ where }p \nmid c$, and $\psi_{1,i}(u) = 0$ for all $i \geq 1$:
$\bullet$ Counting the number of tabloids that appear in the sum defining $v_i$, there are $\binom{p^n - 1}{i}$ choices for the row two entries from $\{1, 2, \dots, p^n -1\}$ and $\binom{rp^n +1}{p^n - i}$ for the remaining entries from $\{p^n, p^n +1 ,\dots, (r+1)p^n \}$. Hence, $\psi_{1, 0}(v_i) = \binom{p^n - 1}{i} \binom{rp^n+1}{p^n-i} \emptyset$. By Lemma ~\ref{lem224}, $\psi_{1, 0}(v_i) = 0$ for all $i \notin \{0, p^n -1\}$. Thus, $\psi_{1, 0}(u) = \psi_{1, 0}(v_0) + p^n\psi_{1,0}(v_{p^n - 1}) = \binom{p^n - 1}{0}\binom{rp^n+1}{p^n} \emptyset = c \cdot \emptyset$ for some $c$ with $p \nmid c$.
$\bullet$
For $1 \leq t \leq s < p^n$, the coefficient of
\[
\overline{1, 2, 3, \dots , t, p^n, p^n +1, \dots p^n +s -t -1} \in M^{((r+1)p^n - s, s)}
\]
\noindent
in $\psi_{1, s}(v_i)$ is $\binom{p^n - 1 -t}{i - t}\binom{rp^n -s +t +1}{p^n -s +t -i}$. This follows from counting the number of tabloids in the sum defining $v_i$ that contribute to the coefficient when we evaluate $\psi_{1, s}(v_i)$: such a tabloid must have $\{1, 2, \dots, t\}$ in the second row, so there are $\binom{p^n - 1- t}{i -t}$ choices for the remaining entries from $\{1, 2, \dots, p^n -1 \}$ and $\binom{rp^n -s + t + 1}{p^n - s + t- i}$ choices for the remaining entries from $\{p^n, p^n+1, \dots, (r+1)p^n\}$.
\indent
Now let $A_{s, t}$ be the coefficient of
\[
\overline{1, 2, 3, \dots , t, p^n, p^n +1, \dots p^n +s -t -1} \in M^{((r+1)p^n - s, s)}
\]
\noindent
in $\psi_{1, s}(u)$. Then
\[
A_{s, t} = \sum_{m =t}^{p^n -1}(m+1)\binom{p^n -1 -t}{m-t}\binom{rp^n -s +t +1}{(r-1)p^n + m+1}.
\]
\noindent
\underline{Claim}
$\bullet$ For $1 \leq s < p^n$, $A_{s, s-1} \equiv 0 \pmod p$.
$\bullet$ For $1 \leq t \leq s < p^n$, we have
\[
A_{s, t} - A_{s, t-1} = \binom{(r+1)p^n - s- 1}{rp^n -1} \equiv 0 \pmod p.
\]
\underline{Proof of claim:}
When $t = s-1$, the second binomial coefficient in each term of the sum defining $A_{s, s-1}$ is $\binom{rp^n}{(r-1)p^n + m + 1}$, which is congruent to zero modulo $p$ by Lemma~\ref{lem224} except in the last term $m = p^n -1$, where the factor $m+1 = p^n$ vanishes modulo $p$.
To prove the second bullet point, apply the identity $\binom{rp^n -s +t +1}{(r-1)p^n + m + 1} = \binom{rp^n -s +t}{(r-1)p^n + m}+\binom{rp^n -s +t}{(r-1)p^n + m + 1}$ to the second coefficient in the defining sum. Expand out and collect terms to obtain:
\begin{align*}
A_{s, t} &= (t+1)\binom{p^n - t -1}{0} \binom{rp^n -s +t}{(r-1)p^n +t}\\
&+ \sum_{w =t}^{p^n -2}\left[ (w+1) \binom{p^n - t -1}{w - t} + (w + 2)\binom{p^n -t -1}{w - t +1}\right]\binom{rp^n - s + t}{(r-1)p^n + w + 1}\\
&+ p^n\binom{p^n - t- 1}{p^n -t -1}\binom{rp^n -s + t}{rp^n}.
\end{align*}
\noindent
Finally, replace each $(w + 1)\binom{p^n -t -1}{w - t} + (w + 2)\binom{p^n -t -1}{w - t +1}$ in the above equation by
\[
(w + 1)\binom{p^n -t}{w -t +1} + \binom{p^n - t - 1}{w -t +1}
\]
\noindent
and subtract off
\[
A_{s, t-1} = \sum_{w = t-1}^{p^n -1}(w + 1)\binom{p^n -t}{w -t +1}\binom{rp^n -s + t}{(r-1)p^n + w +1}
\]
\noindent
to obtain
\[
A_{s, t} - A_{s, t-1} = \sum_{w = t}^{p^n - 1}\binom{p^n - t -1}{w - t}\binom{rp^n -s +t}{(r-1)p^n + w} = \binom{(r+1)p^n - s- 1}{rp^n -1}.
\]
The last equality follows from the identity
\[
\sum_{k}\binom{L}{M + k}\binom{S}{N +k} = \binom{L+S}{L -M +N},
\]
\noindent
for $L \geq 0$ and integers $M, N$; see \cite{concrete}, (2.53). Here we take $L = p^n -t -1$, $S = rp^n -s + t$, $k = w$, $M = -t$, and $N = (r-1)p^n$ (the variables of the identity are capitalized to avoid clashing with $s$ and $n$ above).
By Lemma~\ref{lemkum}, $A_{s, t} - A_{s, t-1} \equiv 0 \pmod p$: expanding $rp^{n} -1$ in $p$-ary notation, all the digits corresponding to $p^0, p, \dots, p^{n-1}$ are $p -1$ and some digit of $p^n- s $ corresponding to $p^0, p, \dots, p^{n-1}$ is non-zero as $1 \leq s <p^n$. Thus adding them together in $p$-ary notation will always result in at least one carry. Thus, the claim is true.
Finally, $\psi_{1,p^n-1}(f_{(rp^n,\ p^n)}) = \binom{rp^n+1}{1}f_{(rp^n+1,\ p^n-1)}$, so condition (ii) of Theorem~\ref{ext} is satisfied too, and we are done by symmetry (cf.\ Remark $5.3$ in~\cite{Hemmer12}).
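The computations in Case 1 can be sanity-checked by machine in the smallest instance $p = 3$, $n = 1$, $r = 1$, i.e.\ $\lambda = (3,3)$. The sketch below (our own illustration, not part of the proof) confirms that $\psi_{1,0}(u) \neq 0$ and $\psi_{1,s}(u) = 0$ in $k$ for $s \geq 1$:

```python
from itertools import combinations
from collections import defaultdict

p, n, r = 3, 1, 1                    # smallest illustrative case
a, b = r * p**n, p**n                # lambda = (3, 3): second rows are
points = range(1, a + b + 1)         # 3-subsets of {1, ..., 6}

# u = sum_i (i+1) v_i, where v_i is the sum of the tabloids having
# exactly i elements of {1, ..., p^n - 1} in the second row.
small = set(range(1, p**n))
u = {frozenset(s): len(set(s) & small) + 1
     for s in combinations(points, b)}

def psi(u, v):
    out = defaultdict(int)
    for row2, coeff in u.items():
        for sub in combinations(sorted(row2), v):
            out[frozenset(sub)] += coeff
    return out

assert psi(u, 0)[frozenset()] % p != 0       # psi_{1,0}(u) = c * empty tabloid, p does not divide c
for s in range(1, b):                        # psi_{1,s}(u) vanishes in k
    assert all(coeff % p == 0 for coeff in psi(u, s).values())
```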
Later we will see that the necessary conditions for $u$ for a general $2$-part partition turn out to be sufficient over $\mathbb{Z}$.
\underline{\emph{Case $2$: $b = p^n$, $a\equiv -1 \pmod{p^{n+1}}$.}}
\noindent
Since $a\equiv -1 \pmod{p^{l_p(b)}}$ and $b = p^n$, we have
$a = (p-1) + (p-1)p + \dots + (p-1) p^n + a_{n+1}p^{n+1} + \dots$, with $0 \leq a_m < p$ for $m \geq n+1$.
\noindent
Thus,
\begin{align*}
v_p(a+2) &= v_p(1),\\
v_p(a+3) &= v_p(2), \\
&\ \ \vdots \\
v_p(a+b) &= v_p(b-1).
\end{align*}
\noindent
Therefore, \eqref{star} implies
\[
v_p(a+v+1) + v_p(\lambda_{b-v}) = v_p(b) + v_p(\sum{\lambda_{i_1...i_b}}).
\]
\noindent
Since $1 \leq v \leq b-1$ and $a+1 \equiv 0 \pmod{p^{n+1}}$, $v_p(a + 1 + v) = v_p(v)$ so we have
\[
v_p(v) + v_p(\lambda_{b-v}) = v_p(b) + v_p(\sum{\lambda_{i_1...i_b}}). \label{2star} \tag{$\star$ $\star$}
\]
\noindent
Now as $ b = p^n$, $v_p(v) < v_p(b)$ for all $1 \leq v \leq b-1$. Thus, for~\eqref{2star} to hold, $v_p(\lambda_{b-v}) > 0$, i.e.\ $\lambda_{b-v} = 0$ in $k$ for all $v$. This means that there is \emph{no} non-trivial fixed point in $\operatorname{im}\psi_{1,b-v}$ for $1 \leq v \leq b-1$. Hence, if we can choose a $u$ such that $\psi_{1,v} (u) = \lambda_v f_v$ for $0 \leq v \leq b-1$, then $\psi_{1,v}(u)$ must vanish for all $1\leq v\leq b-1$.
Note that this is not the case if $b$ is not a $p$-power: if $b = p^r \tilde{b}$ where $\tilde{b} \geq 2$ and $ p \nmid \tilde{b}$, then a similar argument shows that there is no non-trivial fixed point in any of the images of
\begin{align*}
&\psi_{1,b-1}, \psi_{1,b-2}, \dots, \psi_{1, b-p^r+1},\\
&\psi_{1,b-p^r-1}, \psi_{1, b-p^r-2}, \dots, \psi_{1, b-2p^r+1},\\
&\dots\\
&\psi_{1,b-(\tilde{b}-1)p^r -1},\psi_{1,b-(\tilde{b}-1)p^r -2}, \dots, \psi_{1,b-\tilde{b}p^r +1}.
\end{align*}
\noindent
However, for $1 \leq k \leq \tilde{b}-1$ with $p \nmid k$ we have $v_p(kp^r) = r = v_p(b)$, so the images of $\psi_{1,p^r}, \psi_{1,2p^r},$ $\dots, \psi_{1,(\tilde{b}-1)p^r}$ may contain a non-trivial fixed point.
\noindent
This is the reason why \textbf{Plan A} is particularly practicable if $b$ is a $p$-power. Our candidate $u$ when $b = p^n$ is
\[
u = \sum \{\{t \} \in M^{(a,\ p^n)} \mid 1, 2, \dots, p^n \ \text{appear in the first row of } \{t\}\}.
\]
\noindent
Then $\psi_{1,0}(u) = \binom{a}{p^n} \overline{\emptyset}$ and $\psi_{1,v} (u)= \binom{a-v}{p^n-v} \cdot m$ for $1\leq v \leq p^n-1$, where $m \in M^{(a+p^n-v,\ v)}$ is the sum of the tabloids whose second row avoids $\{1, 2, \dots, p^n\}$. Since $a \equiv -1 \pmod{p^{n+1}}$, $p \nmid \binom{a}{p^n}$ and $p \mid \binom{a-v}{p^n-v}$ for $1\leq v \leq p^n-1$ by Lemma~\ref{lem224} and Corollary~\ref{cor225}. We note that $H^0(\mathfrak{S}_{a+p^n},S^{(a,\ p^n)}) \neq 0$ by Theorem~\ref{H0}, so $u$ satisfies the conditions of \textbf{Theorem~\ref{ext}} and we are done.
\end{proof}
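The element $u$ of Case 2 admits the same kind of machine check; the sketch below (our own, with the illustrative choice $p = 3$, $n = 1$, $a = 8$, so that $a \equiv -1 \pmod 9$) confirms that $\psi_{1,0}(u) \neq 0$ and $\psi_{1,v}(u) = 0$ in $k$ for $v \geq 1$:

```python
from itertools import combinations
from collections import defaultdict

p, n = 3, 1
b = p**n
a = 8                                 # a >= p^n and a = -1 mod p^(n+1)

# u = sum of the tabloids with 1, ..., p^n in the FIRST row, i.e. whose
# second row is a b-subset of {p^n + 1, ..., a + b}.
u = {frozenset(s): 1
     for s in combinations(range(b + 1, a + b + 1), b)}

def psi(u, v):
    out = defaultdict(int)
    for row2, coeff in u.items():
        for sub in combinations(sorted(row2), v):
            out[frozenset(sub)] += coeff
    return out

assert psi(u, 0)[frozenset()] % p != 0   # binom(8, 3) = 56, not divisible by 3
for v in range(1, b):                    # binom(a-v, p^n-v) = 0 mod p
    assert all(coeff % p == 0 for coeff in psi(u, v).values())
```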
It can be seen that \textbf{Plan A} does not seem to work for general $2$-part partitions. To progress further, we need to have a different approach. If we work over $\mathbb{Z}$ instead of $k$, the necessary conditions for $\lambda_v \neq 0$ turn out to be sufficient conditions as well. In fact, they arise from fundamental and until recently poorly understood combinatorial structures called \textbf{\emph{integral designs}} (or $t$-designs as in Dembowski~\cite{Dembowski}), and the $u$ we want over $\mathbb{Z}$ turns out to be an $(a+b,b,\lambda_0,\dots, \lambda_{b-1})$-design.
\begin{Definition}
Given integers $t, v, l, \lambda_0, \dots, \lambda_t$, where $v\geq 1$ and $0 \leq t, l \leq v$, let $V= \{1, 2, \dots, v\}$, $X = \{ x \mid x\subseteq V\}$, and $V_l = \{x \mid x\subseteq V, |x| = l \} = [v]^{(l)}$. The elements of $X$ are called \emph{blocks} and those in $V_l$ are called \emph{$l$-blocks} or \emph{blocks of size $l$}. An \emph{integral $(v, l, \lambda_0, \dots, \lambda_t)$-design} associates integral multiplicities $c(x)$ to $l$-blocks $x$ and zero to all other blocks such that
\[
\hat{c}(y) := \sum_{x\supseteq y} c(x)=\lambda_s, \quad \text{if}\quad |y| = s \leq t.
\]
If all the parameters $\lambda_i$ are zero, it is called a \emph{null design}.
\end{Definition}
\begin{Theorem}[Graver-Jurkat~\cite{GJ73}, Wilson~\cite{W73}]\label{design}
There exists an integral $(v, l, \lambda_0, \lambda_1,$ $\dots, \lambda_t)$-design if and only if $\lambda_{s+1} = \frac{l-s}{v-s}\lambda_s$ for $0\leq s<t$.
\end{Theorem}
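For a first example, the complete design $e_l$ (every $l$-block with multiplicity one) has $\hat{c}(y) = \binom{v-s}{l-s}$ for $|y| = s$, and these parameters satisfy the recursion of the theorem. A small sketch verifying this numerically (our own illustration, not from the cited sources):

```python
from fractions import Fraction
from itertools import combinations

v, l, t = 7, 3, 3                      # illustrative parameters
V = range(1, v + 1)
blocks = [frozenset(x) for x in combinations(V, l)]   # c = e_l

def lam(s):
    """hat{c}(y) for |y| = s, checking it is constant on s-blocks."""
    vals = {sum(1 for x in blocks if x >= frozenset(y))
            for y in combinations(V, s)}
    assert len(vals) == 1
    return vals.pop()

# lambda_{s+1} = (l-s)/(v-s) * lambda_s, as in the theorem
for s in range(t):
    assert Fraction(lam(s + 1)) == Fraction(l - s, v - s) * lam(s)
```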
Graver-Jurkat developed a method for constructing a design with prescribed parameters $\lambda_0, \dots, \lambda_t$ satisfying the conditions of Theorem~\ref{design}. We will outline their construction below.
Fixing $t, v$, and $l$, the set of all integral $(v, l, \lambda_0, \dots, \lambda_t)$-designs forms a module $C_{tl}(v)$ over $\mathbb{Z}$, and the null designs form a submodule $N_{tl}(v)$. Let $G=(G_{xy})_{x, y \in X}$ be the inclusion matrix on $X$, i.e.
\[
G_{xy} =
\begin{cases}
1, &\text{if $x \subseteq y;$}\\
0, &\text{otherwise.}
\end{cases}
\]
\noindent
The transform $\hat{c} = Gc$ is defined by
\[
\hat{c}(u) = (Gc)(u) = \sum_{x \supseteq u}c(x).
\]
By partitioning $X$ into blocks of subsets of size $0 \leq i \leq v$, the vectors $c$ and $d = \hat{c}$ split into blocks $c_l$ (the restriction of $c$ to $V_l$) and $d_t$ (the restriction of $d$ to $V_t$). Furthermore, the matrix $G$ or $G(v)$ splits into blocks $G_{tl}$ or $G_{tl}(v)$ such that
\[
d_t = \sum_{l=0}^{v}G_{tl}c_l, \quad \text{for} \ 0\leq t \leq v.
\]
\begin{Lemma}[Graver-Jurkat~\cite{GJ73}]\label{matrixId}
For $s \leq h \leq l$,
\[
\binom{l-s}{h-s}G_{sl} = G_{sh}G_{hl}.
\]
\end{Lemma}
\begin{Theorem}[Graver-Jurkat~\cite{GJ73}]\label{fundamental}
Let $0 \leq t \leq l \leq v$. Suppose $c \in \mathbb{Z}^{X}$ has constant block size $l$, i.e.\ $c(x) = 0$ for all $x$ with $|x| \neq l$,
and $(\hat{c})_{t} = \lambda_t e_t$, where $e_t$ is the vector which has component one for each $t$-block. Then $c$ is an integral $(v, l, \lambda_0, \dots, \lambda_t)$-design. Furthermore,
\[
\lambda_s = \frac{\binom{v-s}{t-s}}{\binom{l-s}{t-s}}\lambda_t, \quad \text{for} \ s = 0,1, \dots, t.
\]
\end{Theorem}
\begin{Theorem}
\[
\operatorname{rank} G_{tl}(v) =
\begin{cases}
&\binom{v}{t}, \quad \text{when} \ t\leq l \leq v - t,\\
& \binom{v}{l}, \quad \text{when} \ v - t\leq l \leq v, \ t \leq l.
\end{cases}
\]
\end{Theorem}
\begin{Corollary}\label{dim}
$\dim N_{tl}(v) = \binom{v}{l} - \binom{v}{t}$ and $\dim C_{tl}(v) = \dim N_{tl}(v) + 1$ for $t \leq l \leq v-t$.
\end{Corollary}
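The rank formula can be tested for small parameters by exact Gaussian elimination over $\mathbb{Q}$; a sketch (our own illustration) checking $\operatorname{rank} G_{2,3}(6) = \binom{6}{2} = 15$:

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def rank(M):
    """Rank of an integer matrix over Q, by exact Gaussian elimination."""
    M = [[Fraction(e) for e in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

v, t, l = 6, 2, 3                       # t <= l <= v - t
rows = [frozenset(x) for x in combinations(range(v), t)]
cols = [frozenset(x) for x in combinations(range(v), l)]
G = [[1 if x <= y else 0 for y in cols] for x in rows]   # inclusion matrix
assert rank(G) == comb(v, t)
```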
\begin{Definition}[Graver-Jurkat~\cite{GJ73}]\label{support}
The \emph{support} of an element $c \in \mathbb{Z}^{X}$ is the collection of elements of $X$ which have non-zero multiplicities:
\[
\supp(c) = \{x | c(x) \neq 0 \}.
\]
\noindent
The \emph{foundation} of an element $c \in \mathbb{Z}^{X}$ is the union of the blocks in its support:
\[
\found(c) = \bigcup_{x: c(x) \neq 0} x.
\]
\end{Definition}
\noindent
\begin{Lemma}[Graver-Jurkat~\cite{GJ73}]\label{foundationSize}
If $c \in C_{tl}(v) \setminus N_{tl}(v)$, then $\found(c) = V$. On the other hand, if $c \in N_{tl}(v)$ is non-zero, then $|\found(c)| \geq t+l+1$.
\end{Lemma}
\begin{Definition}[Graver-Jurkat~\cite{GJ73}]\label{convolution}
The \emph{convolution} $\star$ on $\mathbb{Z}^X$ is defined as follows:
\[
(c \star d)(z) = \sum_{x+y = z} c(x)d(y),
\]
\noindent
where $x, y, z \in X$, $c, d \in \mathbb{Z}^X$, and $+$ is the Boolean sum or symmetric difference.
\end{Definition}
\begin{Convention}
$N_{-1l}(v) = \{c | c \in \mathbb{Z}^X \ \text{and} \ c(x) = 0 \ \text{whenever} \ |x| \neq l \}$.
\end{Convention}
\begin{Definition}
For $x \in X$, let $\delta_{x} \in \mathbb{Z}_{+}^{X}$ be the indicator function of $x$, i.e.
\[
\delta_x (y) =
\begin{cases}
&1, \quad \text{if}\ y = x,\\
&0, \quad \text{otherwise.}
\end{cases}
\]
\noindent
If $x \subseteq V$, define the \emph{extension of $c$ by $x$} by $c \star \delta_x$.
\end{Definition}
\begin{Definition}
If $q$ and $r$ are distinct points in $V$, let $d_{qr} = \delta_{\{q\}} - \delta_{\{r\}}$. Define the \emph{suspension of $c$ by $q$ and $r$} by $c \star d_{qr}$.
\end{Definition}
\begin{Theorem}[Graver-Jurkat~\cite{GJ72}]
If $c \in C_{t,l_1}(v)$, $d \in C_{t, l_2}(v)$ and $|x \cap y|$ is fixed whenever $c(x)d(y) \neq 0$, then $c \star d \in C_{t, l_3}(v)$, where $l_3 = l_1 + l_2 - 2|x \cap y|$.
\end{Theorem}
\begin{Theorem}[Graver-Jurkat~\cite{GJ73}]\label{null-convolution}
Let $-1 \leq t \leq l$, $-1 \leq s \leq h$, $0 \leq h, l \leq v$, $u = \found(c)$ and $w = \found(d)$. If $c \in N_{tl}(v)$, $d \in N_{sh}(v)$ and $u \cap w = \emptyset $, then $c \star d \in N_{t+s+1,\ l+h}(v).$
\end{Theorem}
\begin{Corollary}[Graver-Jurkat~\cite{GJ73}]\label{suspension}
If $c \in N_{tl}(v)$, $|x| = h$ and $\found(c) \cap x =\emptyset$, then $c \star \delta_x \in N_{t,l+h}(v)$. Also, if $\found(c) \cap \{q, r \} = \emptyset$, then $c \star d_{qr} \in N_{t+1,l+1}(v)$.
\end{Corollary}
\begin{Definition}
Let $W = \{1, 2, \dots, v-1 \}$ and $Y = \{x \mid x \subseteq W \}$ where $v >1$. Define $\phi : \mathbb{Z}^X \to \mathbb{Z}^Y$ by $(\phi c)(x) = c(x + v)$, where $x \in Y$ and $v$ denotes $\{v\}$.
\end{Definition}
\begin{Lemma}[Graver-Jurkat~\cite{GJ73}]\label{map}
$\phi$ commutes with $G$:
\[
G(v-1)\phi = \phi G(v),
\]
and
\begin{align*}
\phi(C_{tl}(v)) &\subseteq C_{t-1,l-1}(v-1),\\
\phi(N_{tl}(v)) &= N_{t-1,l-1}(v-1).
\end{align*}
\end{Lemma}
\begin{Definition}[Graver-Jurkat~\cite{GJ73}]\label{pod}
Assume that $t < l < v-t$. Let $u$ be any $(l+t+1)$-subset of $V$. Label $2t+2$ of the points in $u$ as $p_0, p_1, \dots, p_t, q_0, q_1, \dots, q_t$ and let $x$ be the set of the $l - t - 1$ remaining points in $u$. Then $d$ is a \emph{$t,l$-pod} if
\[
d = d_{p_0q_0} \star \dots \star d_{p_t q_t} \star \delta_x.
\]
\end{Definition}
\begin{Theorem}[Graver-Jurkat~\cite{GJ73}]\label{nullBasis}
For $0 \leq t, l \leq v$ and $v \geq 1$, $N_{tl}(v)$ has a module basis consisting of designs with foundation size $l + t + 1$; in fact it has a module basis consisting of $t,l$-pods.
\end{Theorem}
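That a pod is a null design is quick to verify directly; the sketch below (our own illustration) builds a $1,2$-pod $d_{p_0q_0} \star d_{p_1q_1}$ on disjoint points (here $l-t-1 = 0$, so the extension set $x$ is empty) and checks that every block has size $2$ and that $\hat{c}$ vanishes on all blocks of size at most $1$:

```python
from collections import defaultdict

def convolve(c, d):
    """Convolution with respect to the Boolean sum (symmetric difference)."""
    out = defaultdict(int)
    for x, cx in c.items():
        for y, dy in d.items():
            out[x ^ y] += cx * dy          # frozenset ^ = symmetric diff
    return dict(out)

def d_qr(q, r):
    return {frozenset({q}): 1, frozenset({r}): -1}

pod = convolve(d_qr(1, 2), d_qr(3, 4))     # a 1,2-pod

assert all(len(x) == 2 for x in pod)       # constant block size 2
assert sum(pod.values()) == 0              # hat{c}(emptyset) = 0
for y in range(1, 7):                      # hat{c}({y}) = 0 for all y
    assert sum(c for x, c in pod.items() if y in x) == 0
```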
\begin{Theorem}[Graver-Jurkat~\cite{GJ73}]\label{surject}
Let $0 \leq t < l < v-t$. Then $G_{t+1,l}(N_{t,l}) = N_{t,t+1}$.
\end{Theorem}
\begin{Theorem}[Graver-Jurkat~\cite{GJ73}, Wilson~\cite{W73}]\label{mainDesign}
Let $t, v, l, \lambda_0, \dots, \lambda_t$ be integers where $v \geq 1$ and $0\leq t, l \leq v$. There exists an integral $(v, l, \lambda_0,\dots, \lambda_t)$-design if and only if
\[
\lambda_{s+1} = \frac{l-s}{v-s}\lambda_s,\quad \text{for} \ 0\leq s <t.
\]
\end{Theorem}
\begin{proof}
Necessity for $t \leq l$ follows from Theorem~\ref{fundamental}. If $t > l$, then $\lambda_s = 0$ by definition for $l < s \leq t$.
We will prove sufficiency by induction on $t$. If $t = 0$, the set of conditions is vacuous and $\lambda_0\delta_x$ is a $(v, l, \lambda_0)$-design for $|x| = l$. Assume that the conditions are sufficient for some $t \geq 0$, and that $\lambda_0, \dots, \lambda_{t+1}$ satisfy the conditions. Then there exists an integral $(v, l, \lambda_0, \dots, \lambda_t)$-design $c'$. We would like to alter $c'$ so as to make it an integral $(v, l, \lambda_0, \dots, \lambda_{t+1})$-design.
If $l < t$ or $l > v-t$, then $C_{tl}(v)$ is a one-dimensional space spanned by $e_l$. In this case, $c' = \alpha e_l$, and hence is a $(v, l, \lambda_0, \dots, \lambda_t, \lambda_{t+1}')$-design. We need only check that $\lambda_{t+1} = \lambda_{t+1}'$, which is a straightforward computation.
Now we turn to the case $t < l < v-t$. We have
\[
G_{tl}c_l' = \lambda_t e_t. \label{one} \tag{1}
\]
By Lemma~\ref{matrixId},
\[
G_{tl} = \frac{1}{l-t}G_{t,t+1}G_{t+1,l},
\]
\noindent
and one may easily compute that
\[
e_t = \frac{1}{v-t}G_{t,t+1}e_{t+1}.
\]
\noindent
Substituting these into \eqref{one} yields
\[
G_{t,t+1}G_{t+1,l}c_l' = G_{t,t+1}\frac{l-t}{v-t}\lambda_t e_{t+1} = G_{t,t+1}\lambda_{t+1}e_{t+1}.
\]
\noindent
Let $d'$ be the extension by zero of $G_{t+1,l}c_l' - \lambda_{t+1}e_{t+1}$; it is clear that $d' \in N_{t,t+1}$. By Theorem~\ref{surject}, there exists $d \in N_{tl}$ such that $G_{t+1,l}d_l = d_{t+1}'$. Finally, if $c = c' -d$, then $c$ has constant block size $l$ and
\[
G_{t+1,l}c_l = G_{t+1,l}c_l' - d_{t+1}' = \lambda_{t+1}e_{t+1}.
\]
\noindent
It follows from Theorem~\ref{fundamental} that $c$ is a $(v, l, \lambda_0, \dots, \lambda_t, \lambda_{t+1})$-design.
\end{proof}
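To illustrate Theorem~\ref{mainDesign} concretely, the following Python sketch (an illustration with parameters chosen here, not part of the proof) finds a small integral design by brute force: for $v=4$, $l=2$, $t=1$ the condition forces $\lambda_1 = \tfrac{1}{2}\lambda_0$, and taking $\lambda_0 = 2$, $\lambda_1 = 1$ an integral $(4,2,2,1)$-design exists.

```python
from itertools import combinations, product

# parameters chosen for illustration: an integral (v, l, lambda_0, lambda_1)-design
v, l = 4, 2
lam = [2, 1]  # lambda_1 = (l - 0)/(v - 0) * lambda_0, as the theorem requires

blocks = list(combinations(range(1, v + 1), l))

def is_design(c):
    # lambda_0 is the total multiplicity of all blocks
    if sum(c) != lam[0]:
        return False
    # lambda_1: every point is covered with total multiplicity lambda_1
    return all(sum(ci for ci, blk in zip(c, blocks) if x in blk) == lam[1]
               for x in range(1, v + 1))

solution = next(c for c in product((-1, 0, 1), repeat=len(blocks)) if is_design(c))
print(dict(zip(blocks, solution)))
```

Any such solution assigns integer multiplicities to the $2$-subsets of $\{1,2,3,4\}$ so that the total multiplicity is $\lambda_0$ and every point is covered exactly $\lambda_1$ times.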
Turning our attention to the theory of Specht module cohomology, we can now state and prove our main theorem for $2$-part partitions:
\begin{Theorem}\label{main}
Over an algebraically closed field $k$ of odd characteristic, we can construct a $u \in M^{(a, b)}$ such that $\psi_{1,v}(u)=\lambda_vf_v$ for each of the maps
\begin{align*}
\psi_{1,v}: &M^{(a, b)} \to M^{(a+b-v, v)}\\
& \overline{i_1 \dots i_b} \mapsto \sum_{\{j_1,\dots,j_v\} \in [b]^{(v)}}\overline{i_{j_1} \dots {i_{j_v}}},
\end{align*}
where $0 \leq v \leq b-1$, with $\lambda_v \neq 0$ for some $v$. Here $f_v=f_{(a+b-v, v)}$, $[b]=\{1, 2, \dots, b\}$, and for a set $X$, $X^{(v)}$ denotes the set of all subsets of $X$ of size $v$.
Therefore, \textbf{Problem \ref{non-vanishing hom}} can be solved in the case of $2$-part partitions.
\end{Theorem}
\begin{proof}
Note that such a $u$ is equivalent to an integral $(a+b, b, \lambda_0, \dots, \lambda_{b-1})$-design whose parameters $\lambda_i$ satisfy the sufficient conditions in Theorem~\ref{mainDesign} \emph{and} $\lambda_i \not\equiv 0 \pmod{p}$ for some $i$, since we are working over $k$. How do we find such $\lambda_i$? We need $\lambda_{s+1} = \frac{b-s}{a+b-s}\lambda_s$ for all $0\leq s < b-1$, i.e.
\begin{align*}
\lambda_1 &= \frac{b}{a+b}\lambda_0,\\
\lambda_2 &= \frac{b-1}{a+b-1}\frac{b}{a+b}\lambda_0,\\
&\dots
\end{align*}
\begin{align*}
\lambda_{s+1} &=\frac{b-s}{a+b-s} \dots \frac{b}{a+b}\lambda_0 \\
&=\frac{b!}{(b-s-1)!}\frac{(a+b-s-1)!}{(a+b)!}\lambda_0\\
&=\frac{\binom{a+b-s-1}{a}}{\binom{a+b}{a}}\lambda_0,\\
&\dots
\end{align*}
\noindent
Let $d = \min\{ v_p(\binom{a+b-s}{a})\}$, where the minimum is taken over $s \in \{0, 1, \dots, b-1\}$, and put $\lambda_s = p^{-d}\binom{a+b-s}{a}$; each $\lambda_s$ is then a fixed multiple of $\binom{a+b-s}{a}$, so the recursion above is satisfied.
Note that our choice of $d$ ensures that all the $\lambda_s$ are integers. Construct an integral $(a+b, b, \lambda_0,\dots, \lambda_{b-1})$-design $c$ as in Theorem~\ref{mainDesign} and let
\[
u = \sum_{y \in M^{(a,b)}}c(y)y.
\]
\noindent
Then $\psi_{1,v}(u) = \lambda_v f_v$ by Theorem~\ref{mainDesign}, and $\lambda_{s_0} \not\equiv 0 \pmod{p}$ for any $s_0$ attaining the minimum $d$, by our choice of the $\lambda_s$.
\end{proof}
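As a sanity check (an illustration, not part of the proof), the following Python sketch computes parameters proportional to the closed form $\lambda_s = \binom{a+b-s}{a}\lambda_0/\binom{a+b}{a}$ derived above, divides out the minimal power of $p$, and verifies both the exact recursion and that some $\lambda_s$ is a unit modulo $p$.

```python
from math import comb

def valuation(n, p):
    """p-adic valuation v_p(n), for n > 0."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def design_parameters(a, b, p):
    # lambda_s is proportional to C(a+b-s, a) for 0 <= s <= b-1;
    # divide out the minimal power of p so that some lambda_s is a unit mod p
    binoms = [comb(a + b - s, a) for s in range(b)]
    d = min(valuation(x, p) for x in binoms)
    lams = [x // p**d for x in binoms]
    # the exact recursion lambda_{s+1} = (b-s)/(a+b-s) * lambda_s still holds
    for s in range(b - 1):
        assert lams[s + 1] * (a + b - s) == (b - s) * lams[s]
    assert any(l % p != 0 for l in lams)  # some lambda_s is a unit mod p
    return lams

print(design_parameters(3, 2, 3))  # → [10, 4]
```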
\section{Three-part partitions and beyond.}
The next natural step would be to generalize Graver-Jurkat's method to solve our problem for arbitrary partitions. For $3$-part partitions $\lambda= (a,b,c)$, $\lambda$-tabloids are determined by the second and the third rows.
\noindent
\emph{Example.} Let $\overline{s}$ denote a set of size $s$. Then an element of $M^{(a, b, c)}$ can be represented as $\frac{\ \overline{b}\ }{\ \overline{c}\ }$.
We have
\begin{align*}
\psi_{1,v}: M^{(a, b, c)} &\mapsto M^{(a+b-v, v, c)}\\
\frac{\ \overline{b}\ }{\ \overline{c}\ } &\mapsto \sum_{\overline{v} \subseteq \overline{b}}\frac{\ \overline{v}\ }{\ \overline{c}\ }
\end{align*}
\noindent
for $0 \leq v \leq b-1$, and
\begin{align*}
\psi_{2,w}: M^{(a,b,c)} &\mapsto M^{(a,b+c-w,w)}\\
\frac{\ \overline{b}\ }{\ \overline{c}\ } &\mapsto \sum_{\overline{w} \subseteq \overline{c}} \frac{\ \overline{b} \cup (\overline{c}\smallsetminus \overline{w})\ }{\ \overline{w}\ }
\end{align*}
\noindent
for $0 \leq w \leq c-1$.
We want to find a $u = \sum_{\overline{b}, \overline{c}}\alpha(\overline{b},\overline{c})\frac{\ \overline{b}\ }{\ \overline{c}\ }$ such that $\psi_{1,v}(u) = \lambda_{v}^1 f_{a+b-v, v,c}$ and $\psi_{2,w}(u) = \lambda_{w}^2 f_{a,b+c-w,w}$, where $\alpha(\overline{b}, \overline{c}) \in k$, and the $\lambda_v^1$ and $\lambda_{w}^2$ are not all zero for $0 \leq v \leq b-1$ and $0 \leq w \leq c-1$.
Note that if we fix the third row of the tabloids involved in $u$, then we have again an integral $(a+b, b, \lambda^1_0, \dots, \lambda^1_{b-1})$-design. Furthermore, if instead we fix the first row of those tabloids, we have an integral $(b+c, c, \lambda^2_0, \dots, \lambda^2_{c-1})$-design. Therefore, by Corollary~\ref{dim},
\begin{align*}
\text{dim}_{\mathbb{Z}}\bigcap_{v = 0}^{b-1}\psi_{1,v}^{-1}(\mathbb{Z}f_{(a+b-v, v, c)}) &= \binom{a+b+c}{c} \left( \binom{a+b}{b} - \binom{a+b}{b-1}+1\right),\\
\text{dim}_{\mathbb{Z}}\bigcap_{w=0}^{c-1}\psi_{2,w}^{-1}(\mathbb{Z}f_{(a, b+c -w, w)}) &= \binom{a+b+c}{a} \left( \binom{b+c}{c} - \binom{b+c}{c-1}+1\right).
\end{align*}
\noindent
Clearly, we have the required $u$ if the system of integral designs coming from the $\psi_{1,v}$'s overlaps non-trivially with those coming from the $\psi_{2,w}$'s, i.e.
\[
\Big(\bigcap_{v = 0}^{b-1}\psi_{1,v}^{-1}(\mathbb{Z}f_{(a+b-v, v, c)})\Big) \cap \Big(\bigcap_{w=0}^{c-1}\psi_{2,w}^{-1}(\mathbb{Z}f_{(a, b+c -w, w)})\Big) \text{ strictly contains } S^{(a, b, c)}.
\]
This will be true if
\[
\text{dim}_{\mathbb{Z}} S^{(a, b, c)} < \text{dim}_{\mathbb{Z}}\bigcap_{v = 0}^{b-1}\psi_{1,v}^{-1}(k) + \text{dim}_{\mathbb{Z}}\bigcap_{w=0}^{c-1}\psi_{2,w}^{-1}(k) - \text{dim}_{\mathbb{Z}} M^{(a, b ,c)}.
\]
\begin{Theorem}\label{3part}
We have
\[
\dim_{\mathbb{Z}} S^{(a, b, 1)} < \dim_{\mathbb{Z}}\bigcap_{v = 0}^{b-1}\psi_{1,v}^{-1}(k) + \dim_{\mathbb{Z}}\psi_{2,0}^{-1}(k) - \dim_{\mathbb{Z}} M^{(a, b ,1)},
\]
i.e.\ \textbf{Problem~\ref{non-vanishing hom}} can be solved for partitions of the form $(a,b,1)$.\end{Theorem}
\begin{proof}
Let $d = a+b+c$. By a theorem of Frame, Robinson and Thrall~\cite{FRT},
\begin{align*}
&\text{dim}_{\mathbb{Z}}S^{(a, b, c)} = \frac{d!}{\prod (\text{hook lengths in } (a, b, c))} \\
&= \frac{d!}{(a+2)\dots (a-c+3)(a-c+1)\dots (a-b+2)(a-b)! (b+1)\dots (b-c+2)(b-c)! c!}\\
&=\frac{d!(a-c+2)(a-b+1)(b-c+1)}{(a+2)!(b+1)!c!}.
\end{align*}
Also, by $4.2$ in James~\cite{James78},
\[
\text{dim}_{\mathbb{Z}} M^{(a, b, c)} = \frac{d!}{a!b!c!}.
\]
We have
\begin{align*}
&\frac{d!(a-c+2)(a-b+1)(b-c+1)}{(a+2)!(b+1)!c!} + \frac{d!}{a!b!c!} \\
&< \binom{d}{c} \left( \binom{a+b}{b} - \binom{a+b}{b-1}+1\right) + \binom{d}{a} \left( \binom{b+c}{c} - \binom{b+c}{c-1}+1\right)\\
\iff &\frac{d!(a-c+2)(a-b+1)(b-c+1)}{(a+2)!(b+1)!c!} + \frac{d!}{a!b!c!}+\binom{d}{c}\binom{a+b}{b-1}+\binom{d}{a}\binom{b+c}{c-1}\\
&<\binom{d}{c}\binom{a+b}{b}+\binom{d}{a}\binom{b+c}{c}+\binom{d}{c}+\binom{d}{a}\\
\iff &\frac{d!(a-c+2)(a-b+1)(b-c+1)}{(a+2)!(b+1)!c!} + \frac{d!}{a!b!c!} +\frac{d!}{c!(a+b)!}\frac{(a+b)!}{(b-1)!(a+1)!} + \frac{d!}{a!(b+c)!}\frac{(b+c)!}{(c-1)!(b+1)!}\\
&<\frac{d!}{c!(a+b)!}\frac{(a+b)!}{b!a!}+\frac{d!}{a!(b+c)!}\frac{(b+c)!}{b!c!}+\frac{d!}{c!(a+b)!}+\frac{d!}{a!(b+c)!}\\
\iff & \frac{1}{c!(a+1)!(b-1)!}+\frac{1}{a!(c-1)!(b+1)!}+\frac{(a-c+2)(a-b+1)(b-c+1)}{(a+2)!(b+1)!c!}\\
<&\frac{1}{a!b!c!}+\frac{1}{c!(a+b)!}+\frac{1}{a!(b+c)!}\\
\iff &\frac{1}{(a+1)!(b-1)!}+\frac{c}{a!(b+1)!}+\frac{(a-c+2)(a-b+1)(b-c+1)}{(a+2)!(b+1)!}\\
<&\frac{1}{a!b!} +\frac{1}{(a+b)!}+\frac{c!}{a!(b+c)!}\\
\iff &\frac{b}{a+1}+\frac{c}{b+1}+\frac{(a-c+2)(a-b+1)(b-c+1)}{(a+2)(a+1)(b+1)}
< 1 +\frac{a!b!}{(a+b)!}+\frac{c!b!}{(b+c)!} \label{dagger}\tag{$\dagger$}
\end{align*}
\noindent
If $c =1$, then \eqref{dagger} becomes
\[
\frac{b}{a+1}+\frac{1}{b+1}+\frac{(a+1)(a-b+1)b}{(a+2)(a+1)(b+1)} < 1 + \frac{a!b!}{(a+b)!}+\frac{b!}{(b+1)!}.
\]
This is equivalent to
\[
\frac{(a+1)(a-b+1)b}{(a+2)(a+1)(b+1)} < \frac{a-b+1}{a+1} + \frac{a!b!}{(a+b)!},
\]
\noindent
which is clearly true since $\frac{(a+1)b}{(a+2)(b+1)} < 1$, so we are done.
\end{proof}
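The final inequality in the proof can also be checked directly with exact rational arithmetic; the following Python sketch (an independent numerical illustration, with the search range chosen here) verifies the specialisation of \eqref{dagger} to $c=1$ for all $a \geq b \geq 1$ in a sample range.

```python
from fractions import Fraction as F
from math import factorial

def dagger_c1(a, b):
    """Left- and right-hand sides of (dagger) specialised to c = 1."""
    lhs = (F(b, a + 1) + F(1, b + 1)
           + F((a + 1) * (a - b + 1) * b, (a + 2) * (a + 1) * (b + 1)))
    rhs = 1 + F(factorial(a) * factorial(b), factorial(a + b)) + F(1, b + 1)
    return lhs, rhs

# strict inequality for every partition (a, b, 1) with a >= b >= 1 in the range
assert all(dagger_c1(a, b)[0] < dagger_c1(a, b)[1]
           for a in range(1, 25) for b in range(1, a + 1))
print("inequality (dagger) with c = 1 verified on the sample range")
```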
\section{Conclusion}
In this paper, we aimed to construct the desired element $u$ of the permutation module $M^\lambda$ of the symmetric group $\mathfrak{S}_n$, satisfying the necessary and sufficient conditions found by Hemmer~\cite{Hemmer12} for the cohomology group $H^1(\mathfrak{S}_n, S^\lambda)$ to be non-zero by specializing first to two-part partitions. This leads to an unexpected and surprising connection of combinatorial $t$-design theory to the representation theory of the symmetric groups.
The next natural step would be to generalize the method of Graver-Jurkat to solve our problem for arbitrary partitions, where now instead of one integral design we have a system of integral designs. For example, for $3$-part partitions $(a, b, c)$, we have seen that fixing the third row of the tabloids involved in $u$ gives an $(a+b,b,\lambda_0,\dots,\lambda_{b-1})$-design for $\psi_{1, v}$, while fixing the first row of the tabloids involved in $u$ gives a $(b + c, c, \alpha_0, \dots, \alpha_{c-1})$-design for $\psi_{2,w}$. Note that constructing a $u$ for $s$-part partitions from a $u$ for $(s-1)$-part partitions satisfying the conditions of \textbf{\emph{Problem 1}} is equivalent to constructing a map
\[
\Theta: H^1(\mathfrak{S}_n, S^\lambda) \to H^1(\mathfrak{S}_{n+a}, S^{(a, \lambda_1,...,\lambda_{s-1})}),
\]
\noindent
where $\lambda = (\lambda_1, \dots, \lambda_{s-1}) \vdash n$ and $a \equiv -1 \pmod{p^{l_p(\lambda_1)}}$.
Thus it appears that this approach can be used to tackle the case $i = 1$ in the following problem posed by Hemmer~\cite{Hemmer12}:
\begin{Problem}
Does the isomorphism $H^i(\mathfrak{S}_n, S^\lambda) \cong H^i(\mathfrak{S}_{n+a}, S^{(a,\lambda_1,...,\lambda_{s-1})})$ hold for $i > 0$?
\end{Problem}
Alternatively, one could investigate the space $\bigcap_{v=0}^{\lambda_i-1} \psi^{-1}(k)$, recast Graver and Jurkat's construction in terms of tabloids, and mimic James's formulation of the Specht modules as a kernel intersection. If this is successful, we will then be able to find all $u$ satisfying condition (i) in \textbf{Theorem~\ref{ext}}. Combining this with Weber's work~\cite{Weber}, we will be able to determine when $H^1(\mathfrak{S}_n, S^\lambda)\neq 0$ for all partitions with at least five parts.
Another problem that could perhaps be tackled using this approach is
\begin{Problem}[Hemmer~\cite{Hemmer12}] Let $\lambda \vdash n$ and let $p > 2$, where for a partition $\lambda = (\lambda_1, \lambda_2, \dots )$ we define $p\lambda = (p\lambda_1, p\lambda_2, \dots) \vdash pn$. Then there is an isomorphism
\[
H^1(\mathfrak{S}_{pn}, S^{p\lambda}) \cong H^1(\mathfrak{S}_{p^2n},S^{p^2\lambda})\ (*).
\]
Suppose $H^1(\mathfrak{S}_{pn}, S^{p\lambda})\neq 0$ and suppose one has constructed $u \in M^{p\lambda}$ satisfying Theorem~\ref{ext}; describe a general method to construct $\tilde{u} \in M^{p^2\lambda}$ corresponding to an element of $H^1(\mathfrak{S}_{p^2n}, S^{p^2\lambda})$ and realizing the isomorphism $(*)$.
\end{Problem}
\end{document} |
\begin{document}
\title{\bf ON THE CONNECTION BETWEEN THE LIOUVILLE EQUATION AND THE SCHRÖDINGER EQUATION}
\maketitle
{\bf Abstract}
We derive a classical Schrödinger type equation from the phase space Liouville equation valid
for an arbitrary nonlinear potential $V(x)$.
The derivation is based on a Wigner type Fourier transform of the classical phase space
probability distribution, which depends on a small constant $\alpha$ with dimension
of action, that will be characteristic of the microscopic phenomena.
In order to obtain the Schrödinger type equation, two requirements are necessary: 1) It is assumed
that the appropriately defined classical probability
amplitude $\Psi(x,t)$ can be expanded in a complete set of functions $\Phi _n(x)$ defined
in the configuration space; 2) the classical phase space distribution $W(x,p,t)$ obeys the Liouville
equation and is a real function of the position, the momentum and the time.
We show that for $\alpha$ equal to Planck's constant $\hbar$, the evolution equation for the {\em classical}
probability amplitude $\Psi(x,t)$ is identical to the Schrödinger equation. The wave particle duality principle,
however, is not introduced in our paper.
{\em Key words}: Foundations of quantum mechanics; Wigner phase-space distributions; Quantum electrodynamics;
Vacuum fluctuations; Stochastic electrodynamics
PACS: 03.65.Ta; 12.20.-m; 42.50.Lc
\section{Introduction}
In 1932 E. Wigner published an important paper {\cite{wigner,moyal}} where he
introduced what is called today {\em the Wigner's quasi probability function}, or simply
{\em Wigner's phase space function}. We shall denote it by $Q(x,p,t)$ and it is connected
with the Schrödinger wave equation solutions $\psi(x,t)$ by the expression:
\begin{equation}\label{wf}
Q(x,p,t)=\frac{1}{\pi\hbar}\int \psi ^*(x-y,t) \psi(x+y,t) e^{-\frac{2ipy}{\hbar}}dy,
\end{equation}
where $\hbar$ is Planck's constant. We recall that Wigner's intention was to obtain a quantum mechanical description of
the classical phase space phenomena.
It was stated later, by Wigner and O'Connell {\cite{wignerprova}}, that the definition (\ref{wf})
is the only one that satisfies a set of conditions that are expected on physical grounds.
Notice, however, that the Planck's constant $\hbar$ enters in the above expression in a combination that was not completely
clarified by Wigner and other authors [1-7].
We know that Planck's constant $\hbar$ is related to the {\em electromagnetic fluctuation phenomena} characteristic of
Quantum Electrodynamics \cite{dalibard} and Classical Stochastic Electrodynamics [9-12]. However, for simplicity, these electromagnetic
fluctuations will not be considered in our paper.
We would like to mention that, if applied to an arbitrary solution $\psi(x,t)$ of the Schrödinger wave equation, the
expression (\ref{wf}) may lead to {\em negative}, or even {\em complex}, phase space probability densities, suggesting that
many solutions of the Schrödinger wave equation cannot be associated with genuine phase space probability
distributions, that is, the relation (\ref{wf}) is problematic.
Simple examples are the excited states of the harmonic oscillator \cite{franca}.
Here, following França and Marshall \cite{franca}, we shall consider that these excited states
are simply useful mathematical functions. We shall clarify this point later.
It is also well known \cite{wigner,woong} that $Q(x,p,t)$ defined in (\ref{wf}) propagates in time according
to the quantum Liouville equation
\begin{equation}\label{aliouville}
\frac{\partial Q}{\partial t}=-\frac{p}{M}\frac{\partial Q}{\partial x}
+\frac{\partial V}{\partial x}\frac{\partial Q}{\partial p}+
\frac{
\left(
\frac{\hbar}{2i}
\right)^2
}{3!}
\frac{\partial ^3 V}{\partial x^3}\frac{\partial ^3 Q}{\partial p^3}
+
\frac{
\left(
\frac{\hbar}{2i}
\right)^4
}{5!}
\frac{\partial ^5 V}{\partial x^5}\frac{\partial ^5 Q}{\partial p^5}+...,
\end{equation}
provided that $\psi(x,t)$ satisfies the Schrödinger wave equation. Here
$F(x)=-\partial V/\partial x$ is the arbitrary nonlinear force acting on the particle with mass $M$.
The equation (\ref{aliouville}) shows
that the definition (\ref{wf}), and the Schrödinger equation, do not lead to the classical
Liouville equation. We recall that, according to the correspondence principle, one should
necessarily obtain the Liouville equation if the mass $M$ is large enough. Therefore, the relationship between the
equation (\ref{wf}), the Schrödinger equation and the Liouville equation is also problematic.
In our paper, starting with the classical Liouville equation, we shall see the mathematical and the physical conditions
to connect it with an equation which is formally identical to the Schrödinger equation for a {\em classical} probability
amplitude $\Psi(x,t)$ defined in the configuration space.
By means of the theorem established in Section 2,
we show that the connection between the Liouville equation and the Schrödinger type equation is valid for an arbitrary
potential $V(x)$. This connection was presented in reference
\cite{manfredi}, restricted to the case where the potential $V(x)$ is quadratic in the variable $x$.
Our presentation is organized as follows. In Section 2 we give the mathematical background
necessary to connect the classical phase space probability distribution to the classical
configuration space probability amplitude $\Psi(x,t)$.
For clarity's sake, we shall denote by $\psi(x,t)$ a solution of the Schrödinger wave equation and by $\Psi(x,t)$ the classical
probability amplitude.
The direct relation between the Liouville equation for an arbitrary potential $V(x)$,
and the classical Schrödinger type equation is presented in Section 3. A brief discussion and our conclusions are
presented in the final section.
\section{Mathematical background}
Our starting point is the Liouville equation for the probability distribution in phase
space, denoted by $W(x,p,t)$. The Liouville equation is such that
\begin{equation}\label{liouvillea}
\frac{\partial W}{\partial t}+\frac{p}{M}\frac{\partial W}{\partial x}+
F(x)\frac{\partial W}{\partial p}=0.
\end{equation}
The nonlinear force acting on the particle with mass $M$ will be denoted by $F(x)=-\partial V(x)/\partial x$ and $V(x)$
is an arbitrary continuous function.
The solutions of the equation (\ref{liouvillea}) are obtained in conjunction with the solutions
of the Newton equation
\begin{equation}\label{newton}
M\frac{d^2x}{dt^2}=F(x),
\end{equation}
and the classical positive distribution function, $W_0(x_0,p_0)$, associated with the initial conditions $x_0$ and
$p_0$, namely
\begin{equation}\label{wzero}
W(x,p,t)=\int dx_0 \int dp_0 W_0(x_0,p_0)\delta (x-x_c(t))\delta (p-p_c(t)).
\end{equation}
Here $x_c(t)$ is the classical trajectory and $p_c(t)=M \dot{x}_c(t)$. Notice that $x_c(t)$, and $p_c(t)$,
depend on $x_0$ and $p_0$.
Therefore, $W(x,p,t)$ is always real and positive, and the classical probability amplitude
$\Psi(x,t)$ is such that
\begin{equation}\label{psi2}
|\Psi(x,t)|^2=\int^{\infty}_{-\infty}dpW(x,p,t) .
\end{equation}
The above definition is not enough to determine the complex amplitude $\Psi(x,t)$. To achieve
this goal we shall use the Liouville equation (\ref{liouvillea}).
The Liouville equation for $W(x,p,t)$ has a close relation with a classical Schrödinger type
equation
for $\Psi(x,t)$, as we shall see in the next section.
In order to clearly explain this relationship we shall establish a useful theorem
based on a convenient Fourier transform of $W(x,p,t)$ similar to that used in equation (\ref{wf}).
\underline{{\em Theorem}}:
Consider the Fourier transform defined by
\begin{equation}\label{twa}
T[W](x,y,t)=\int _{-\infty}^{\infty} W(x,p,t)e^{\frac{2ipy}{\alpha}}dp,
\end{equation}
where $\alpha$ is a {\em small} constant with dimension of action.
The mathematical definition (\ref{twa}) has a physical meaning only when $y\approx 0$ because in this case, the equation
(\ref{twa}) becomes identical to (\ref{psi2}) and gives us the probability distribution in the configuration space.
Mathematically, we can write $T[W](x,y=0,t)=|\Psi(x,t)|^2$. We shall show that $T[W](x,y,t)$
can always be expressed in the form
\begin{equation}\label{psipsi}
T[W](x,y,t)=\Psi ^*(x-y,t)\Psi(x+y,t),
\end{equation}
provided that $W(x,p,t)$ is a {\em real function} of its variables.
\underline{{\em Proof}}:
Consider any set of functions $\Phi _n(x)$, that satisfy the completeness relation
\begin{equation}\label{completeza}
\sum _{n}\Phi^*_n(x)\Phi _n(y)=\delta(x-y),
\end{equation}
and are orthogonal to each other. An example of these functions is the set
\begin{equation}\label{funcaophi}
\Phi _n(x)=\frac{1}{\sqrt{2b}}e^{\frac{in\pi x}{b}} ,
\end{equation}
where $n=0,\pm 1,\pm 2,\pm 3,...$ and $b$ is a positive constant ($-b < x < b$). Other sets
of functions $\Phi _n(x)$ can be used without loss of generality \cite{moyal,franca}.
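As a small numerical illustration (with the half-width $b$ and the index range chosen here), one can check the orthonormality of the functions $\Phi_n$ in (\ref{funcaophi}) by quadrature; since the grid covers whole periods, the trapezoidal rule is essentially exact:

```python
import numpy as np

b = 1.0                           # half-width of the interval (-b, b)
x = np.linspace(-b, b, 20001)
ns = range(-3, 4)

def phi(n):
    return np.exp(1j * n * np.pi * x / b) / np.sqrt(2 * b)

def integrate(f):
    # composite trapezoidal rule on the uniform grid x
    return (f.sum() - 0.5 * (f[0] + f[-1])) * (x[1] - x[0])

# Gram matrix G_{mn} = int_{-b}^{b} phi_m^*(x) phi_n(x) dx
G = np.array([[integrate(np.conj(phi(m)) * phi(n)) for n in ns] for m in ns])
assert np.allclose(G, np.eye(len(ns)), atol=1e-9)
print("orthonormality of the Fourier basis verified")
```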
Therefore, any complex probability amplitude $\Psi(x,t)$ can be expressed as
\begin{equation}\label{7a}
\Psi(x,t)=\sum _n a_n (t)\Phi _n(x) ,
\end{equation}
where $a_n(t)$ are the coefficients of the expansion. Different sets of coefficients $a_n(t)$ will give different
probability amplitudes $\Psi(x,t)$. One can show, from the orthogonality properties of the functions $\Phi _n(x)$,
that
\begin{equation}\label{aene}
a_n(0)=\int dx \Phi^*_n(x)\Psi(x,0) .
\end{equation}
Let $W_{mn}(x,p)$ be phase-space functions defined by \cite{moyal,franca}
\begin{equation}\label{wmn}
W_{mn}(x,p)=\frac{1}{\pi\alpha}\int dy \Phi^*_m(x-y)\Phi _n(x+y)e^{-\frac{2ipy}{\alpha}},
\end{equation}
where $\alpha$ is the same constant introduced in (\ref{twa}).
This is a convenient mathematical definition because the functions $W_{mn}(x,p)$ have
useful properties as we shall see in what follows. Notice that the functions $W_{mn}(x,p)$
can be {\em positive, negative or even complex functions} \cite{moyal,franca}.
The functions $W_{mn}(x,p)$ constitute a complete and orthogonal set of functions in
phase space.
The completeness can be verified as follows:
$$
\sum _{m}\sum _{n}W^*_{mn}(x,p)W_{mn}(y,q)=
$$
$$
=\frac{1}{(\pi\alpha)^2}\int d\xi\int d\eta e^{-\frac{2i}{\alpha}(q\eta-p\xi)}
\sum _{m}\Phi _m(x-\xi)\Phi^*_m(y-\eta)
\sum _{n}\Phi _n(y+\eta)\Phi^*_n(x+\xi)
$$
\begin{equation}\label{completa}
=\frac{1}{\pi\alpha}\delta(x-y)\delta(q-p).
\end{equation}
In the last equality we used the fact that the functions $\Phi _n(x)$ form a complete set, together with the Fourier integral representation of the Dirac delta function.
A verification of the orthogonality requirement can be obtained in an
analogous way and leads us to the following relation \cite{moyal,franca}:
\begin{equation}\label{ortogonal}
\int dx\int dp W^*_{mn}(x,p)W_{rs}(x,p)=\frac{1}{\pi\alpha}\delta _{mr}\delta _{ns}.
\end{equation}
Therefore, it is always possible to put the classical phase-space distribution $W(x,p,t)$ in the form
\begin{equation}\label{wexpand}
W(x,p,t)=\sum _{m}\sum _{n}C_{mn}(t)W_{mn}(x,p) ,
\end{equation}
where the coefficients $C_{mn}(t)$ are only functions of the time $t$ \cite{franca}.
By substituting the explicit form of $W_{mn}(x,p)$ in the last equation, we obtain the formal expression
\begin{equation}\label{wmnb}
W(x,p,t)=\frac{1}{\pi\alpha}\int dy \sum _{m}\sum _{n}
C_{mn}(t)\Phi^*_m(x-y)\Phi _n(x+y)e^{-\frac{2ipy}{\alpha}} .
\end{equation}
Since $W(x,p,t)$ is a {\em real} function we have
\begin{equation}\label{wreal}
W(x,p,t)=W^*(x,p,t) .
\end{equation}
Using the relation (\ref{wreal}) we get the following equality:
$$
\int dy \sum _{m}\sum _{n}
C_{mn}(t)\Phi^*_m(x-y)\Phi _n(x+y)e^{-\frac{2ipy}{\alpha}}=
$$
\begin{equation}\label{igualdade}
=\int dy \sum _{m}\sum _{n}
C^*_{mn}(t)\Phi _m(x-y)\Phi^*_n(x+y)e^{\frac{2ipy}{\alpha}} .
\end{equation}
Considering the change of variables $y=-\xi$, and exchanging the dummy indices $m$ and $n$,
we can write (\ref{igualdade}) in the form
$$
\int dy \sum _{m}\sum _{n}
C_{mn}(t)\Phi^*_m(x-y)\Phi _n(x+y)e^{-\frac{2ipy}{\alpha}}=
$$
\begin{equation}\label{eq15}
=\int d\xi \sum _{m}\sum _{n}
C^*_{nm}(t)\Phi^*_m(x-\xi)\Phi _n(x+\xi)e^{-\frac{2ip\xi}{\alpha}}.
\end{equation}
From this equation we conclude that
\begin{equation}\label{cautadj}
C_{mn}(t)=C^*_{nm}(t) .
\end{equation}
Therefore, the elements $C_{mn}(t)$ can be written in the following form:
\begin{equation}\label{coeficientes}
C_{mn}(t)=a^*_m(t)a_n(t) ,
\end{equation}
where the functions $a_n(t)$ are only functions of the time $t$ \cite{franca}. In order to make the connection with
the probability amplitude $\Psi(x,t)$ we shall assume that the functions $a_n(t)$ are the same functions introduced above
in the equation (\ref{7a}).
Using (\ref{coeficientes}), (\ref{wmnb}) and (\ref{twa}) we obtain
$$
\int dp W(x,p,t)e^{\frac{2ipy}{\alpha}}=
$$
$$
=\frac{1}{\pi\alpha}\int dy'
\left[
\sum _{m}a_{m}^{*}(t)\Phi _{m}^{*}(x-y')
\right]
\left[
\sum _{n}a_{n}(t)\Phi _{n}(x+y')
\right]
\int dp e^{\frac{2ip}{\alpha}(y-y')}
$$
\begin{equation}\label{fouriertransf}
=
\left[
\sum _{m}a_{m}^{*}(t)\Phi _{m}^{*}(x-y)
\right]
\left[
\sum _{n}a_{n}(t)\Phi _{n}(x+y)
\right] .
\end{equation}
So, according to the definition (\ref{twa}) and the expression (\ref{7a}) for the
classical probability amplitude, we get
\begin{equation}\label{psipsi2}
T[W](x,y,t) \equiv \int^{\infty}_{-\infty} dp W(x,p,t)e^{\frac{2ipy}{\alpha}}=
\Psi^*(x-y,t)\Psi(x+y,t) ,
\end{equation}
thus completing the demonstration of the theorem.
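As a concrete illustration of the theorem (a single Gaussian example, not a proof), take the real amplitude $\Psi(x)=(\pi\sigma^2)^{-1/4}e^{-x^2/2\sigma^2}$, whose associated phase-space distribution is the Gaussian $W(x,p)=\frac{1}{\pi\alpha}e^{-x^2/\sigma^2}e^{-\sigma^2p^2/\alpha^2}$; the factorisation (\ref{psipsi2}) can then be checked numerically:

```python
import numpy as np

alpha, sigma = 1.0, 0.7

def Psi(x):
    # real Gaussian probability amplitude
    return (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

def W(x, p):
    # corresponding Gaussian phase-space distribution
    return np.exp(-x**2 / sigma**2 - sigma**2 * p**2 / alpha**2) / (np.pi * alpha)

p = np.linspace(-40.0, 40.0, 80001)
dp = p[1] - p[0]

def T(x, y):
    # T[W](x, y) = int W(x, p) exp(2ipy/alpha) dp, by direct quadrature
    f = W(x, p) * np.exp(2j * p * y / alpha)
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dp

for x0, y0 in [(0.0, 0.0), (0.3, 0.1), (-0.5, 0.2)]:
    assert abs(T(x0, y0) - Psi(x0 - y0) * Psi(x0 + y0)) < 1e-10
print("T[W](x, y) = Psi(x - y) Psi(x + y) verified at sample points")
```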
\section{Considerations concerning the Liouville equation and its connection with the
Schrödinger equation}
Phase space distribution functions, as those defined in (\ref{wmn}),
provide a framework for a reformulation of non-relativistic quantum mechanics (QM)
in terms of classical concepts \cite{wigner,moyal,franca}.
Moreover, it is a widespread belief that the connection between the
Liouville equation and the Schrödinger type equation is only possible for quadratic potentials in the $x$ variable
($V(x,t)=a(t)x^2+b(t)x+c(t)$, where $a(t)$, $b(t)$ and $c(t)$ are arbitrary functions of
time \cite{manfredi}). We shall explain, in this section, that the potential energy $V(x)$
can be an arbitrary function of $x$. In order to achieve this goal we shall use a procedure
discussed before by L. S. Olavo \cite{olavo} and K. Dechoum, H. M. França and C. P. Malta
\cite{francab}. We shall see that the theorem established in the previous section is a proof
of a working hypothesis (see (\ref{psipsi})) used in these and many other works \cite{olavo,francab}. The first work
quoted in the reference \cite{francab} treats the Stern-Gerlach phenomenon.
As before, $W(x,p,t)$ is the classical phase space probability density associated
with some physical system (see the equations (\ref{liouvillea}) and (\ref{newton})). We
shall assume that $W(x,p,t)$ is normalized so that
\begin{equation}\label{norma}
\int dx\int dp W(x,p,t)=1 .
\end{equation}
The classical configuration space probability density will be denoted by $P(x,t)$ given by
\begin{equation}\label{probconf}
P(x,t)=\int dp W(x,p,t)\equiv |\Psi(x,t)|^2 ,
\end{equation}
where $\Psi(x,t)$ is the classical probability amplitude (see (\ref{psi2})).
We are interested in obtaining a dynamical differential equation for the classical
probability amplitude $\Psi(x,t)$ based on the fact that $W(x,p,t)$ obeys the classical
Liouville equation (\ref{liouvillea}).
To obtain this equation, we shall use the Fourier transform
\begin{equation}\label{wtiu1}
T[W](x,y,t)\equiv\int^{\infty}_{-\infty} W(x,p,t)e^{\frac{2ipy}{\alpha}}dp ,
\end{equation}
defined previously in the equation (\ref{twa}).
We recall that the Fourier transform $T[W](x,y,t)$ is a complex function which has
a physical meaning only in the limit $|y|\rightarrow 0$, that is,
\begin{equation}\label{qx}
T[W](x,y=0,t)=|\Psi(x,t)|^2 .
\end{equation}
Therefore, we shall consider the definition (\ref{wtiu1}) only for very small values of $y$.
Our goal is to obtain the differential equation for $\Psi(x,t)$, from the
differential equation for $T[W](x,y,t)$, valid when $|y|$ is very small.
Our first step is to consider the equation
\begin{equation}\label{dwtiudt1}
\frac{\partial}{\partial t}T[W](x,y,t)=\int^{\infty}_{-\infty} \frac{\partial W}{\partial t}
e^{\frac{2ipy}{\alpha}}dp .
\end{equation}
Using the Liouville equation (\ref{liouvillea}) we obtain
\begin{equation}\label{dwtiudt2}
\frac{\partial T[W]}{\partial t}=-\int^{\infty}_{-\infty}
\left[
\frac{p}{M}\frac{\partial W}{\partial x}+F(x)\frac{\partial W}{\partial p}
\right]
e^{\frac{2ipy}{\alpha}}dp .
\end{equation}
One can show that the integral
$$
I=-\int^{\infty}_{-\infty}F(x)\frac{\partial W}{\partial p}e^{\frac{2ipy}{\alpha}}dp
$$
is such that
\begin{equation}\label{I}
I=F(x)\frac{2iy}{\alpha}\int^{\infty}_{-\infty} W(x,p,t)e^{\frac{2ipy}{\alpha}}dp .
\end{equation}
Here we have assumed that
\begin{equation}\label{limite1}
\lim _{|p|\rightarrow \infty}W(x,p,t)=0.
\end{equation}
We also see that
\begin{equation}\label{pop}
pe^{\frac{2ipy}{\alpha}}=-i\frac{\alpha}{2}\frac{\partial}{\partial y}e^{\frac{2ipy}{\alpha}}.
\end{equation}
Substituting (\ref{I}) and (\ref{pop}) in (\ref{dwtiudt2}) we get
\begin{equation}\label{dwtiudt4}
i\alpha\frac{\partial}{\partial t}T[W](x,y,t)=
\left[
\frac{(-i\alpha)^2}{2M}\frac{\partial ^2}{\partial y\partial x}
-2yF(x)
\right]
T[W](x,y,t) .
\end{equation}
Notice that, in accordance with the theorem proved in Section 2,
$T[W](x,y,t)=\Psi^*(x-y,t)\Psi(x+y,t)$.
It is convenient to use new variables, namely $s=x-y$ and $r=x+y$, so that the equation
(\ref{dwtiudt4}) can be written as
$$
i\alpha \frac{\partial}{\partial t}[\Psi^*(s,t)\Psi(r,t)]=
$$
\begin{equation}\label{dpsipsidt1}
\left[
\frac{(-i\alpha) ^2}{2M}
\left(
\frac{\partial ^2}{\partial r^2}-\frac{\partial ^2}{\partial s^2}
\right)
-(r-s)F
\left(
\frac{r+s}{2}
\right)
\right]\Psi^*(s,t)\Psi(r,t) .
\end{equation}
According to (\ref{qx}), we want an equation for $\Psi(x,t)=[\Psi(r,t)]_{y\rightarrow 0}$, and the
equivalent equation for $\Psi^*(x,t)=[\Psi^*(s,t)]_{y\rightarrow 0}$. Consequently we shall consider that
the points $r$ and $s$ are arbitrarily close so that
\begin{equation}\label{tvm}
-(r-s)F
\left(
\frac{r+s}{2}
\right)
= -\int _{s}^{r}F(\xi)d\xi = V(r)-V(s) ,
\end{equation}
in accordance with the mean value theorem. Substituting (\ref{tvm}) into (\ref{dpsipsidt1})
we get the equations
\begin{equation}\label{schrod1}
i \alpha \frac{\partial}{\partial t}\Psi^*(s,t)=
\left[
\frac{-1}{2M}
\left(
-i \alpha \frac{\partial}{\partial s}
\right)^2 -V(s)
\right]\Psi^*(s,t) ,
\end{equation}
and
\begin{equation}\label{schrod2}
i \alpha \frac{\partial}{\partial t}\Psi(r,t)=
\left[
\frac{1}{2M}
\left(
-i \alpha \frac{\partial}{\partial r}
\right)^2 + V(r)
\right]\Psi(r,t) .
\end{equation}
These two equations are, in fact, the same differential equation, namely
\begin{equation}\label{schrod4}
i \alpha \frac{\partial}{\partial t}\Psi(x,t)=
\left[
\frac{1}{2M}
\left(
-i \alpha \frac{\partial}{\partial x}
\right)^2 + V(x)
\right]\Psi(x,t) ,
\end{equation}
which is formally identical to the Schrödinger wave equation.
Notice, however, that $\Psi(x,t)$ is a {\em classical} probability amplitude, which is conceptually different
from the Schrödinger {\em wave} function $\psi(x,t)$. The classical probability amplitudes $\Psi(x,t)$ are just
mathematical objects \cite{leggett2}.
The wave-particle duality principle was not used and the small characteristic constant $\alpha$ is, up to this point,
arbitrary. This constant can be determined from the observation of several phenomena within the microscopic domain [10-12].
Therefore, we can conclude that $\alpha = \hbar$, where $\hbar$ is the universal Planck's constant.
It is relevant to observe that if we write
\begin{equation}\label{psiene}
\Psi(x,t)=\Phi _n(x)e^{-i\frac{\epsilon _nt}{\alpha}} ,
\end{equation}
where $\epsilon _n$ are constants with dimension of energy, the equation (\ref{schrod4}) leads to
\begin{equation}\label{schrodautoval}
\left[
\frac{1}{2M}
\left(
-i \alpha \frac{\partial}{\partial x}
\right)^2 +V(x)
\right]\Phi _n(x)
=\epsilon _n\Phi _n(x) .
\end{equation}
This is a familiar eigenfunction equation. Its solutions $\Phi _n(x)$ constitute
a complete set of orthogonal functions that can be obtained for different potentials $V(x)$. This set
of functions $\Phi _n(x)$ can also be used in our previous equations (\ref{7a}) and (\ref{wmn})
(see reference \cite{franca}), showing the consistency between Sections 2 and 3.
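As an illustrative numerical check (with $\alpha = M = 1$ and the harmonic potential $V(x)=x^2/2$ chosen here for definiteness), discretising (\ref{schrodautoval}) by finite differences reproduces the familiar harmonic-oscillator eigenvalues $\epsilon _n = \alpha(n+\tfrac12)$:

```python
import numpy as np

alpha, M = 1.0, 1.0
N, L = 1000, 8.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]

# finite-difference Hamiltonian -(alpha^2/2M) d^2/dx^2 + V(x), with V(x) = x^2/2
main = alpha**2 / (M * h**2) + 0.5 * x**2
off = -alpha**2 / (2 * M * h**2) * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eps = np.linalg.eigvalsh(H)[:4]
print(eps)  # close to alpha * (n + 1/2) = 0.5, 1.5, 2.5, 3.5
assert np.allclose(eps, alpha * (np.arange(4) + 0.5), atol=1e-2)
```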
We would like to call the reader's attention to the fact that the above mathematical treatment can be extended to the three-dimensional case \cite{francab} without additional difficulty.
\section{Discussion}
The connection between the classical Liouville equation for a probability distribution in phase space and the
Schrödinger equation was already established for quadratic potentials \cite{wigner,manfredi}. The extension of the formalism to a
{\em generic potential} $V(x)$ presented here puts all the previous related works \cite{olavo,francab} on solid mathematical ground.
We get the Schrödinger type equation (\ref{schrod4}), and also Born's statistical interpretation of
$|\Psi(x,t)|^2$, by considering the Liouville equation, and
the equations (\ref{twa}), (\ref{dwtiudt4}) and (\ref{dpsipsidt1}) only in the limit $y \rightarrow 0$. In this context
we were able to use the mean value theorem (see equation (\ref{tvm})) to separate (\ref{dpsipsidt1}) in two equivalent equations,
formally identical to the Schrödinger equation. Since $y$ has to be small, we understand why the inverse transform
(equation (\ref{wf})), utilized by Wigner, does not guarantee that the phase space function $Q(x,p,t)$ is a true (positive)
probability distribution.
The association of the constant $\alpha$ with Planck's constant $\hbar$
is natural and inevitable to any reader familiar with the Schrödinger equation
used in the microscopic world. As we said above, the explicit
numerical value of $\alpha$ (or $\hbar$) can be determined by testing the validity of the Schrödinger type equation in
various phenomena of the microscopic domain \cite{merzbacher}. The connection of the Schrödinger momentum operator
$-i\hbar \partial/\partial x$ with the zero-point vacuum electromagnetic radiation is also interesting. This fact is
explored by P. W. Milonni in the references \cite{milonni1,milonni2}.
According to A. J. Leggett \cite{leggett2}, ``despite the spectacular success of quantum mechanics over the last 80 years
in explaining phenomena observed at atomic and subatomic level, the conceptual status of the theory is still a topic
of lively controversy''. He seems to believe that quantum mechanics is nothing more than a very good mathematical
``recipe'' \cite{leggett}.
We agree with the above statements. According to the connection between the Liouville equation (\ref{liouvillea}) and
the Schrödinger type equation (\ref{schrod4}), presented here, one can conclude that the classical probability amplitude
$\Psi(x,t)$ is not associated with a genuine de Broglie wave. As a matter of fact, the existence of de Broglie waves was
questioned recently by Sulcs, Gilbert and Osborne \cite{sulcs} in their analysis of the famous experiments \cite{arndt} on
the interference of massive particles such as $C_{60}$ molecules (fullerenes).
{\bf Acknowledgment}
We thank Professor Coraci P. Malta for a critical reading of the manuscript and Alencar
J. Faria, José Edmar A. Ribeiro and Gerson Gregório Gomes for valuable comments.
This work was supported in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq - Brazil.
\end{document} |
\begin{document}
\title{A Morawetz inequality for gravity-capillary water waves at low Bond number}
\author[]{Thomas Alazard, Mihaela Ifrim and Daniel Tataru}
\begin{abstract}
This paper is devoted to the 2D gravity-capillary water waves equations in their Hamiltonian formulation, addressing the general question of proving Morawetz inequalities. We continue the analysis initiated in our previous work, where we have established local energy decay
estimates for gravity waves. Here we add surface tension and prove a stronger estimate
with a local regularity gain, akin to the smoothing effect for dispersive equations.
Our main result holds globally in time and applies to genuinely nonlinear waves, since we are
only assuming some very mild uniform Sobolev bounds for the solutions.
Furthermore, it is uniform both in the infinite depth limit and the zero surface tension limit.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}\label{s:Introduction}
\subsection{The water-wave equations}
A classical topic in the mathematical theory of hydrodynamics
concerns the propagation of water waves.
The problem consists in studying the evolution of the free surface
separating air from an incompressible perfect fluid, together with the evolution of the velocity field
inside the fluid domain.
We assume that the free surface $\Sigma(t)$ is a graph and that
the fluid domain $\Omega(t)$ has a flat bottom, so that
\begin{align*}
\Omega(t)&=\{ (x,y) \in \mathbb{R} \times\mathbb{R} \, \arrowvert\, -h<y<\eta(t,x) \},\\
\Sigma(t) &= \{ (x,y) \in \mathbb{R} \times\mathbb{R} \, \arrowvert\, \, y=\eta(t,x) \},
\end{align*}
where $h$ is the depth and $\eta$ is an unknown (called the free surface elevation). We assume that the velocity field $v\colon \Omega\rightarrow \mathbb{R}^2$ is irrotational, so that $v=\nabla_{x,y}\phi$ for some potential
$\phi\colon\Omega\rightarrow \mathbb{R}$ satisfying
\begin{equation}\label{t5}
\begin{aligned}
&\Delta_{x,y}\phi=0\quad\text{in }\Omega,\\
&\partial_{t} \phi +\frac{1}{2} \left\vert \nabla_{x,y}\phi\right\vert^2 +P +g y = 0 \quad\text{in }\Omega,\\
&\partial_y\phi =0 \quad \text{on }y=-h,
\end{aligned}
\end{equation}
where
$P\colon \Omega\rightarrow\mathbb{R}$ is the pressure,
$g>0$ is the acceleration of gravity,
and $\Delta_{x,y}=\partial_x^2+\partial_y^2$. Partial differentiations will often be
denoted by suffixes so that $\phi_x=\partial_x\phi$ and $\phi_y=\partial_y \phi$.
The
water-wave problem is described
by two equations that hold on the free surface:
firstly an equation describing the time evolution of $\Sigma$:
\begin{equation}\label{eta}
\partial_{t} \eta = \sqrt{1+\eta_x^2}\, \phi_n \arrowvert_{y=\eta}=\phi_y(t,x,\eta(t,x))-
\eta_x(t,x)\phi_x(t,x,\eta(t,x)),
\end{equation}
and secondly an equation for the balance of forces at the free surface:
\begin{equation}\label{pressure}
P\arrowvert_{y=\eta}=-\kappa H(\eta),
\end{equation}
where $\kappa$ is the coefficient of surface tension
and $H(\eta)$ is the curvature given by
\begin{equation*}
H(\eta)=\partial_x\left( \frac{\eta_x}{\sqrt{1+\eta_x^2}}\right).
\end{equation*}
We begin by recalling the main features of the water wave problem.
\begin{itemize}
\item \textbf{Hamiltonian system.}
Since $\phi$ is a harmonic function with Neumann boundary condition on the
bottom, it is fully determined by
its trace on $\Sigma$. Set
$$
\psi(t,x)=\phi(t,x,\eta(t,x)).
$$
Zakharov discovered that $\eta$ and $\psi$ are canonical variables.
Namely, he gave the following Hamiltonian formulation
of the water-wave equations (\cite{Zakharov1968,Zakharov1998}):
$$
\frac{\partial \eta}{\partial t}=\frac{\delta \mathcal{H}}{\delta \psi},\quad
\frac{\partial \psi}{\partial t}=-\frac{\delta \mathcal{H}}{\delta \eta},
$$
where $\mathcal{H}$ is the energy, which reads
\begin{equation}\label{Hamiltonian}
\mathcal{H}=\frac{g}{2}\int_\mathbb R \eta^2\, d x+\kappa \int_\mathbb R \left(\sqrt{1+\eta_x^2}-1\right)\, d x
+\frac{1}{2}\iint_{\Omega(t)}\left\vert \nabla_{x,y}\phi \right\vert^2\, dydx.
\end{equation}
The Hamiltonian is the sum of the gravitational potential energy,
a surface energy due to stretching of the surface and the kinetic energy.
One can give more explicit evolution equations
by introducing the
Dirichlet to Neumann operator associated to the fluid domain $\Omega(t)$, defined by
$$
G(\eta)\psi=\sqrt{1+\eta_x^2}\, \phi_{n \arrowvert_{y=\eta}}=(\phi_y-\eta_x \phi_x)_{\arrowvert_{y=\eta}}.
$$
Then (see \cite{LannesLivre}),
with the above notations, the water-wave system reads
\begin{equation}\label{systemT}
\left\{
\begin{aligned}
&\partial_t \eta=G(\eta)\psi \\
&\partial_t \psi+g\eta +\frac{1}{2} \psi_x^2-\frac{1}{2} \frac{(G(\eta)\psi+\eta_x\psi_x)^2}{1+\eta_x^2}-\kappa H(\eta)=0.
\end{aligned}
\right.
\end{equation}
Of course the Hamiltonian is conserved along the flow. Another conservation law that is essential in this paper is the conservation of the horizontal momentum,
\begin{equation} \label{momentum}
\mathcal M = \int_\mathbb R \eta \psi_x \, dx.
\end{equation}
From a Hamiltonian perspective, this can be seen as arising via Noether's theorem as
the generator for the horizontal translations, which commute with the water wave flow.
\item \textbf{Scaling invariance.}
Another symmetry is given by the scaling invariance which holds
in the infinite depth case (that is when $h=\infty$) when either $g = 0$ or $\kappa = 0$.
If $\kappa = 0$ and $\psi$ and $\eta$ are solutions of the gravity water waves equations \eqref{systemT}, then $\psi_\lambda$ and $\eta_\lambda$ defined by
\[
\psi_\lambda(t,x)=\lambda^{-3/2} \psi (\sqrt{\lambda}t,\lambda x),\quad
\eta_\lambda(t,x)=\lambda^{-1} \eta(\sqrt{\lambda} t,\lambda x),
\]
solve the same system of equations. The (homogeneous)
Sobolev spaces invariant by this scaling
correspond to $\eta$ in $\dot{H}^{3/2}(\mathbb{R})$ and $\psi$ in $\dot{H}^2(\mathbb{R})$.
On the other hand if $g = 0$ and $\psi$ and $\eta$ are solutions of
the capillary water waves equations \eqref{systemT}, then
$\psi_\lambda$ and $\eta_\lambda$ defined by
\[
\psi_\lambda(t,x)=\lambda^{-\frac12} \psi (\lambda^\frac32 t,\lambda x),\quad
\eta_\lambda(t,x)=\lambda^{-1} \eta(\lambda^\frac32 t,\lambda x),
\]
solve the same system of equations. The (homogeneous) Sobolev spaces invariant by this scaling correspond to $\eta$ in $\dot{H}^{3/2}(\mathbb{R})$ and $\psi$ in $\dot{H}^1(\mathbb{R})$.
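This invariance can be checked directly at the level of the norms; the following short verification (an immediate computation, not spelled out above) uses the scaling identity $\|f(\lambda\,\cdot)\|_{\dot H^s(\mathbb{R})}=\lambda^{s-\frac12}\|f\|_{\dot H^s(\mathbb{R})}$: at any fixed time,

```latex
\|\eta_\lambda\|_{\dot H^{s}}
=\lambda^{-1}\,\lambda^{s-\frac12}\|\eta\|_{\dot H^{s}}
=\lambda^{s-\frac32}\|\eta\|_{\dot H^{s}},
\qquad
\|\psi_\lambda\|_{\dot H^{\sigma}}
=\lambda^{-\frac12}\,\lambda^{\sigma-\frac12}\|\psi\|_{\dot H^{\sigma}}
=\lambda^{\sigma-1}\|\psi\|_{\dot H^{\sigma}},
```

so both norms are invariant precisely when $s=3/2$ and $\sigma=1$, in agreement with the spaces listed above.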
\item
This is a \textbf{quasi-linear} system of \textbf{nonlocal} equations.
As a result, even the study of the Cauchy problem for smooth data is highly nontrivial.
The literature on this topic is extensive by now,
starting with the works of Nalimov~\cite{Nalimov} and
Yosihara~\cite{Yosihara}, who proved existence and uniqueness in Sobolev spaces under a smallness assumption. Without a smallness assumption on the data, the well-posedness of the Cauchy problem
was first proved by Wu~\cite{Wu97, Wu99} without surface tension and
by Beyer-G\"unther in~\cite{BG} in the case with surface tension.
Several extensions of these results were obtained
by various methods and many authors. We begin by quoting recent results for gravity-capillary waves.
For the local in time Cauchy problem
we refer to \cite{Agrawal-LWP,ABZ1,AmMa,CMSW,CS,IguchiCPDE,LannesKelvin,LannesLivre,MingWang-LWP,RoTz,Schweizer,ScWa,ShZe},
see also~\cite{ITc} and \cite{IoPuc,X-Wang-capillary} for global existence results
for small enough initial data which are localized,
and \cite{CCFGS-surface-tension,FeIoLi-Duke-2016} for results about splash singularities
for large enough initial data.
Let us recall that the Cauchy problem for the
gravity-capillary water-wave equations is locally
well-posed in suitable function spaces which are $3/2$ derivatives more
regular than the scaling invariance, e.g.\ when initially
\[
\eta \in H^{s+\frac{1}{2}}(\mathbb{R}), \qquad \psi \in H^{s}(\mathbb{R}), \qquad s >\frac 5 2.
\]
Actually, some better results hold using Strichartz estimates (see \cite{dPN1,dPN2,Nguyen-AIHPANL-2017}).
There are also many recent results for the equations
without surface tension, and we refer
the reader to the papers~\cite{Ai1,Ai2,AD-global,CCFGGS-arxiv,CS2,dePoyferre-ARMA,GMS,GuTi,H-GIT,HIT,IT,IoPu,Wu09,Wu10}.
\item \textbf{Dispersive equation.}
Consider the linearized water-wave equations:
$$
\left\{
\begin{aligned}
& \partial_t \eta=G(0)\psi=\left\vert D\right\vert\tanh(h\left\vert D\right\vert) \psi, \qquad (\left\vert D\right\vert=\sqrt{-\partial_x^2})\\
& \partial_t \psi+g\eta-\kappa \partial_{x}^2\eta=0.
\end{aligned}
\right.
$$
Then the dispersion relation reads
$\omega^2= k \tanh(h k) (g+\kappa k^2)$, which
shows that water waves are dispersive waves.
The dispersive properties have been studied for many different problems,
including the global in time existence results alluded to before and
also various problems about Strichartz estimates,
local smoothing effect, control theory
or the study of solitary waves (see e.g.\ \cite{Ai1,Ai2,ABZ1,CHS,dPN2,dPN1,IT2018solitary,Zhu1,Zhu2}).
\end{itemize}
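For later reference, it is instructive to record the two limiting regimes of this dispersion relation. In the infinite depth limit $\tanh(hk)\to1$, so that

```latex
\omega^2 \approx g k+\kappa k^3 \qquad (hk\gg 1),
```

hence gravity dominates at frequencies $k\ll\sqrt{g/\kappa}$ while surface tension dominates for $k\gg\sqrt{g/\kappa}$; this is precisely the frequency threshold $\lambda_0=\sqrt{g/\kappa}$ which plays a key role in the sequel. In the shallow regime $hk\ll1$ one has instead $\omega^2\approx ghk^2$, i.e.\ nondispersive propagation at speed $\sqrt{gh}$ to leading order.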
\subsection{Morawetz estimates}
Despite intensive research on dispersive or Hamiltonian equations,
it is fair to say that many natural questions concerning
the dynamics of water waves are mostly open.
Among these, we have initiated in our previous work \cite{AIT}
the study of Morawetz estimates for water waves.
In this paragraph, we introduce this problem within the general framework of Hamiltonian equations.
Consider a Hamiltonian system of the form
$$
\frac{\partial \eta}{\partial t}=\frac{\delta \mathcal{H}}{\delta \psi},\quad
\frac{\partial \psi}{\partial t}=-\frac{\delta \mathcal{H}}{\delta \eta}\quad\text{with}\quad
\mathcal{H}=\int e\, d x,
$$
where the density of energy
$e$ depends only on $\eta$ and $\psi$.
This setting includes the water-wave equations, the
Klein-Gordon equation, the Schr\"odinger equation, the Korteweg-de Vries equation, etc.
For the sake of simplicity, let us compare the following linear equations:
\begin{itemize}
\item (GWW) the gravity water-wave equation $\partial_t^2 u+\left\vert D_x\right\vert u=0$;
\item (CWW) the capillary water-wave equation $\partial_t^2 u+\left\vert D_x\right\vert^3 u=0$;
\item (KG) the Klein-Gordon equation $\square u+u=0$;
\item (S) the Schr\"odinger equation $i\partial_t u+\Delta u=0$.
\end{itemize}
All of them can be written in the above Hamiltonian form with
\begin{align*}
&e_{\text{GWW}}=\eta^2+ (\left\vert D_x\right\vert^{1/2} \psi)^2,\\
&e_{\text{CWW}}=(\left\vert D_x\right\vert\eta)^2+ (\left\vert D_x\right\vert^{1/2} \psi)^2,\\
&e_{\text{KG}}=(\langle D_x\rangle^{1/2}\eta)^2+ (\langle D_x\rangle^{1/2}\psi)^2,\\
& e_{\text{S}}=(\left\vert D_x\right\vert \eta)^2+(\left\vert D_x\right\vert \psi)^2.
\end{align*}
For such Hamiltonian systems, one can deduce from Noether's theorem and
symmetries of the equations some conserved quantities.
For instance, the invariance by translation in time implies that
the energy $\mathcal{H}$ is conserved, while the invariance by spatial translation implies that
the momentum $\mathcal{M}=\int \eta\psi_x \, d x$ is conserved.
Then, a Morawetz estimate is an estimate of the local energy in terms of a quantity which scales
like the momentum.
More precisely, given a time $T>0$ and a compactly supported function $\chi=\chi(x)\ge 0$, one seeks an estimate
of the form
$$
\int_0^T \int_\mathbb{R} \chi (x)e(t,x) \,dx dt \le C(T) \sup_{t\in [0,T]}\left\Vert \eta(t)\right\Vert_{H^s(\mathbb{R})}
\left\Vert \psi(t)\right\Vert_{H^{1-s}(\mathbb{R})},
$$
for some real number $s$ chosen in a balanced
way depending on the given equation. If the constant
$C(T)$ does not depend on time $T$, then
we say that the estimate is global in time. The study of the latter estimates was
introduced in Morawetz's paper \cite{Morawetz1968} in the context of the nonlinear Klein-Gordon
equation.
The study of Morawetz estimates is interesting for both linear and nonlinear equations. We begin by discussing the linear phenomena.
For the Klein-Gordon equation,
the local energy density $e_{\text{KG}}$ measures the same
regularity as the momentum density $\eta\psi_x$ so that only the global
in time estimate is interesting. The latter result expresses the fact that
the localized energy is globally integrable in time,
and hence has been called a {\em local energy decay} result.
Morawetz estimates have also been proved for the Schr\"odinger equation. Here
the natural energy density $e_{\text{S}}$ measures a higher
regularity than the momentum density $\eta \psi_x$; for this reason the result
is meaningful even locally in time and the resulting (local in time) Morawetz estimates
have been originally called \emph{local smoothing estimates}, see~\cite{CS-ls,Vega,PlVe}.
The same phenomenon appears also for the KdV equation.
In fact, the general study of Morawetz estimates has had a long
history, which is too extensive to try to describe here.
For further recent references we refer the reader to \cite{MT} for the wave equation, \cite{MMT} for the Schr\"odinger equation,
\cite{OR} for fractional dispersive equations. These estimates are very robust and also hold for nonlinear problems, which makes them useful in the study of
the Cauchy problem for nonlinear equations (see e.g.\, \cite{CKSTT,KPVinvent,PlVe}).
We are now ready to discuss Morawetz estimates for water waves.
One of our motivations to initiate their study in \cite{AIT}
is that this problem exhibits interesting new features.
Firstly, since the problem is nonlocal, it is
already difficult to obtain a global in time estimate for the linearized equations.
The second and key observation is that, even if the equation is \emph {quasilinear}, one can prove such global in time estimates for the nonlinear equations, assuming some very mild smallness assumption on the solution.
Given a compactly supported bump function $\chi=\chi(x)$,
we want to estimate the local energy
\[
\int_0^T \!\!\!\int_\mathbb{R}\chi(x-x_0) ( g \eta^2 + \kappa \eta_x^2) \, dx dt
+ \int_0^T \!\!\!\int_\mathbb{R}\int_{-h}^{\eta(t,x)} \chi(x-x_0)
\left\vert \nabla_{x,y}\phi \right\vert^2 \, dydx dt,
\]
uniformly in time $T$ and space location $x_0$, assuming only a uniform bound
on the size of the solutions. In our previous paper \cite{AIT},
we have studied this problem for gravity water waves (that is for $\kappa=0$). There the momentum is not controlled by the Hamiltonian energy, and as a result
we have the opposite phenomenon to local smoothing, namely a loss
of $1/4$ derivative in the local energy. This brings
substantial difficulties in the low frequency analysis,
in particular in order to prove a global in time estimate.
In addition, we also took into account the effect of the bottom,
which generates an extra difficulty in the analysis of
the low frequency component.
In this article we assume that $\kappa\ge 0$ so that
one can both study gravity water waves and gravity-capillary water waves
(for $\kappa>0$). Furthermore,
we seek to prove Morawetz estimates uniformly with respect
to surface tension $\kappa$ as $\kappa \to 0$, and also uniformly with respect to the depth $h$ as $h \to \infty$. In this context we will impose a smallness condition on the Bond number.
\subsection{Function spaces}
As explained above, the goal of the present
paper is to be able to take into account surface tension effects,
i.e.\ the high frequency local smoothing, while still
allowing for the presence of a bottom which yields substantial difficulties at
low frequencies. In addition, one key point is that
our main result holds provided that some mild Sobolev norms
of the solutions remain small enough uniformly in time. Here
we introduce the functional setting which will allow us to do so.
Precisely, in this subsection we introduce three spaces:
a space $E^0$ associated to the energy,
a space $E^{\frac14}$ associated to
the momentum, and a uniform in time control norm $\left\Vert \cdot\right\Vert_{X^\kappa}$
which respects the scaling invariance.
The above energy $\mathcal{H}$ (Hamiltonian) corresponds to the energy space for $(\eta,\psi)$,
\[
E^0 = \left( g^{-\frac12} L^2(\mathbb{R}) \cap \kappa^{-\frac12} \dot H^1(\mathbb R) \right) \times \dot{H}^{\frac12}_h(\mathbb{R}),
\]
with the depth-dependent space $\dot{H}^{\frac12}_h(\mathbb{R})$ defined as
\[
\dot{H}^{\frac12}_h(\mathbb{R}) = \dot H^\frac12(\mathbb{R}) + h^{-\frac12} \dot H^1(\mathbb{R}).
\]
We already observe here the two interesting frequency thresholds in this problem, namely
$h^{-1}$, determined by the depth, and $\sqrt{g/\kappa}$, determined by the balance between gravity
and capillarity. The dimensionless connection between these two scales is described by the Bond number,
\[
{\mathbf B} = \frac{\kappa}{gh^2}.
\]
A key assumption in the present work is that
\[
{\mathbf B} \ll 1.
\]
This is further discussed in the comments following our main result.
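For orientation only (these physical considerations play no role in the proofs), the Bond number compares the depth with the capillary length:

```latex
{\mathbf B}=\frac{\kappa}{g h^2}=\left(\frac{\ell_c}{h}\right)^2,
\qquad \ell_c:=\sqrt{\kappa/g},
```

so ${\mathbf B}\ll1$ simply asks that the depth be large compared to $\ell_c$; for water, $\ell_c$ is of the order of a few millimeters.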
Similarly, in order to measure the momentum, we use the space
$E^{\frac14}$, which is the $h$-adapted linear $H^{\frac14}$-type norm for $(\eta,\psi)$ (which corresponds to the momentum),
\begin{equation}\label{e14}
\begin{aligned}
&E^{\frac14} := \left( g^{-\frac14} H^{\frac14}_h(\mathbb{R}) \cap \kappa^{-\frac14} H^{\frac34}_h(\mathbb{R})\right) \times \left( g^\frac14 \dot H^{\frac34}_h(\mathbb{R}) + \kappa ^\frac14 \dot H^{\frac14}_h(\mathbb{R}) \right), \\
\end{aligned}
\end{equation}
with
\[
H^{s}_h(\mathbb{R}) := \dot H^s(\mathbb{R}) \cap h^{s} L^2(\mathbb{R}), \qquad \dot H^{s}_h(\mathbb{R}) = \dot{H}^s (\mathbb{R})+ h^{s-1} \dot H^1(\mathbb{R}),
\]
so that in particular we have
\[
|\mathcal M| \lesssim \|(\eta,\psi)\|_{E^\frac14}^2.
\]
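The last bound follows from a frequency-by-frequency Cauchy-Schwarz argument; schematically, ignoring the low-frequency $h$-corrections, one writes

```latex
|\mathcal M|=\left|\int_\mathbb{R}\eta\,\psi_x\,dx\right|
\leq \big\| |D|^{\frac14}\eta\big\|_{L^2}\,\big\| |D|^{-\frac14}\psi_x\big\|_{L^2}
\lesssim \|\eta\|_{\dot H^{\frac14}}\,\|\psi\|_{\dot H^{\frac34}},
```

and the $g^{-\frac14}$ and $g^{\frac14}$ weights in the two factors of $E^{\frac14}$ cancel in the product.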
We remark that there is some freedom here in choosing the space $E^{\frac14}$; the one above is
not as in the previous paper \cite{AIT}, adapted to
gravity waves, but is instead changed above the frequency threshold $\lambda_0 = \sqrt{g/\kappa}$ and adapted to capillary waves at high frequency.
For our uniform a-priori bounds for the solutions, we begin by recalling the set-up in \cite{AIT}
for the pure gravity waves. There we were able to use
a scale invariant norm, which corresponds to the following Sobolev bounds:
\[
\eta \in H^\frac32_h(\mathbb{R}), \qquad \nabla \phi\arrowvert_{y=\eta} \in g^\frac12 H^1_h(\mathbb{R})
\cap \kappa^{\frac12} L^2(\mathbb{R}).
\]
Based on this, we have introduced the homogeneous norm $X_0$ defined by
\[
X_0 := L^\infty_t H^{\frac{3}{2}}_{h} \times g^{-\frac{1}{2}}L^\infty_t H^1_h,
\]
and used it to define the uniform control norm $X$ by
\[
\|(\eta,\psi)\|_{X} := \left\Vert P_{\leq h^{-1}}
(\eta,\nabla \psi)\right\Vert_{X_0}+
\sum_{\lambda > h^{-1}}
\|P_\lambda (\eta,\psi)\|_{X_0} .
\]
Here we use a standard Littlewood-Paley decomposition beginning at frequency $1/h$,
\[
1 = P_{<1/h} + \sum_{1/h < \lambda \in 2^{\mathbb{Z}}} P_\lambda.
\]
The uniform control norm we use in this paper, denoted by $X^\kappa$,
matches the above scenario at low frequency, but requires some strengthening at high
frequency. The threshold frequency in this context is
\[
\lambda_0 = \sqrt{g/\kappa} \gg 1,
\]
and describes the transition from gravity to capillary waves. Then we will complement the $X$
bound with a stronger bound for $\eta$ in the higher frequency range, setting
\[
X_1 := \left(\frac{g}{\kappa}\right)^\frac14 L^\infty_t H^{2}.
\]
One can verify that this matches the first component of $X_0$ exactly at frequency $\left\vertmbda_0$.
Then our uniform control norm $X^{\kappa}$ will be
\[
\|(\eta,\psi)\|_{X^\kappa} := \left\Vert
(\eta,\nabla \psi)\right\Vert_{X}+\| \eta\|_{X_1}.
\]
Based on the expression \eqref{Hamiltonian} for the
energy, we introduce the following notations for the local energy. Fix an
arbitrary compactly supported nonnegative function $\chi$.
Then, the local energy centered around a point $x_0$ is
\[
\begin{split}
\| (\eta,\psi)\|_{LE^\kappa_{x_0}}^2 := & \ g \int_0^T \int_\mathbb{R}\chi(x-x_0) \eta^2\,dx dt + \kappa \int_0^T \int_\mathbb{R} \chi(x-x_0)\eta_{x}^2\, d x dt
\\ & \ +\int_0^T \int_\mathbb{R}\int_{-h}^{\eta(t,x)} \chi(x-x_0) \left\vert \nabla_{x,y}\phi \right\vert^2\, dy dx dt.
\end{split}
\]
It is also of interest to take the supremum over $x_0$,
\begin{equation}\label{defiLEkappa}
\| (\eta,\psi)\|_{LE^\kappa}^2 := \sup_{x_0\in \mathbb{R}} \| (\eta,\psi)\|_{LE^\kappa_{x_0}}^2.
\end{equation}
\subsection{The main result}
Our main Morawetz estimate for gravity-capillary water waves is as follows:
\begin{theorem}[Local energy decay for gravity-capillary waves]\label{ThmG1}
Let $s>5/2$ and $C > 0$. There exist $\varepsilon_0$ and $C_{0}$ such
that the following result holds. For all $T\in (0,+\infty)$, all
$h\in [1,+\infty)$, all $g\in (0,+\infty)$, all $\kappa\in (0,+\infty)$
with $\kappa \ll g$ and all solutions $(\eta,\psi)\in
C^0([0,T];H^{s}(\mathbb R)\times H^s(\mathbb R))$ of the water-wave system
\eqref{systemT} satisfying
\begin{equation}\label{uniform}
\| (\eta,\psi)\|_{X^\kappa} \leq \varepsilon_0
\end{equation}
the following estimate holds
\begin{equation}\label{nMg}
\begin{aligned}
\| (\eta,\psi)\|_{LE^\kappa}^2 \leq C_0 ( \| (\eta, \psi) (0) \|^2_{E^{\frac14}}+ \| (\eta, \psi) (T) \|^2_{E^{\frac14}}).
\end{aligned}
\end{equation}
\end{theorem}
We continue with several remarks concerning the choices of parameters/norms in the theorem.
\begin{remark}[Uniformity]
One key feature of our result is
that it is global in time (uniform in $T$) and uniform in $h \geq 1$ and $\kappa \ll g$. In particular our estimate is uniform in the infinite depth limit and the zero surface tension limit.
\end{remark}
\begin{remark}[Time scaling]
Another feature of our result is that the statement of Theorem~\ref{ThmG1} is invariant with respect to the following scaling law (a time-associated scaling):
\[
\begin{aligned}
(\eta (t,x), \psi(t,x)) &\rightarrow (\eta (\lambda t,x), \lambda \psi(\lambda t,x))\\
(g,\kappa,h) &\rightarrow (\lambda^2 g, \lambda^2 \kappa, h).
\end{aligned}
\]
This implies that the value of $g$ and $\kappa$ separately are not
important but their ratio is. By scaling one could simply set
$g = 1$ in all the proofs. We do not do that in order to improve the
readability of the article.
\end{remark}
\begin{remark}[Window size]
Once we have the local energy decay bounds for a unit window size, we also trivially have them
for any higher window size $R > 1$, with a constant of $R$ on the right in the estimate.
\end{remark}
\begin{remark} [Spatial scaling]
One can also rescale the spatial coordinates $x \to \left\vertmbda x$, and correspondingly
\[
\begin{aligned}
(\eta (t,x), \psi(t,x)) &\rightarrow ( \lambda^{-1}\eta (t, \lambda x), \lambda^{-\frac32} \psi(t, \lambda x))\\
(g,\kappa,h,R) &\rightarrow (g, \lambda^{-2} \kappa, \lambda^{-1} h, \lambda^{-1} R).
\end{aligned}
\]
Then, combining this with the previous remark, we can restate our hypothesis $\kappa/g \ll 1 \leq h^2$ as a constraint on the Bond number, ${\mathbf B} \ll 1$, together with a window size restriction $\kappa/g \lesssim R^2 \lesssim h^2$.
\end{remark}
\begin{remark}
As already explained, the uniform control norms in \eqref{uniform} are below the current local well-posedness threshold for this problem, and are instead closer to what one might view as the
critical, scale invariant norms for this problem. The dependence on $h$
is natural as spatial scaling will also alter the depth $h$. In the infinite depth limit one recovers exactly the homogeneous Sobolev norms at low frequency.
We also note that, by Sobolev embeddings, our smallness assumption
guarantees that
\[
|\eta| \lesssim \varepsilon_0 h, \qquad |\eta_x | \lesssim \varepsilon_0.
\]
\end{remark}
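The first bound is obtained, for instance, by summing Bernstein's inequality $\|P_\lambda\eta\|_{L^\infty}\lesssim\lambda^{\frac12}\|P_\lambda\eta\|_{L^2}$ over the dyadic pieces; the following is only a sketch, using the $H^{\frac32}_h$ component of the control norm:

```latex
\|\eta\|_{L^\infty}
\lesssim \sum_{\lambda\leq h^{-1}}\lambda^{\frac12}\,\|\eta\|_{L^2}
+\sum_{\lambda> h^{-1}}\lambda^{\frac12}\,\lambda^{-\frac32}\,\|\eta\|_{\dot H^{\frac32}}
\lesssim h^{-\frac12}\cdot \varepsilon_0 h^{\frac32}+h\cdot\varepsilon_0
\lesssim \varepsilon_0 h.
```

The bound on $\eta_x$ is similar, using in addition the $X_1$ component of the norm at frequencies above $\lambda_0$.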
Our previous work \cite{AIT} on gravity waves followed
the same principles as in Morawetz's original paper (\cite{Morawetz1968}), proving
the results using the multiplier method, based on the momentum conservation law. One difficulty
we encountered there was due to the nonlocality of the problem, which made it
far from obvious what the ``correct'' momentum density is. Our solution in \cite{AIT} was to
work in parallel with two distinct momentum densities.
Some of the difficulties in \cite{AIT} were connected to low frequencies,
due both to the fact that the equations are nonlocal, and that they have quadratic nonlinearities.
A key to defeating these difficulties was to shift some of the analysis
from Eulerian coordinates to holomorphic coordinates; this is because the latter provide a better setting to understand the fine bilinear and multilinear structure of the equations.
Here we are able to reuse the low frequency part of the analysis from \cite{AIT}. On the other
hand, we instead encounter additional high frequency issues arising from the surface tension contributions.
As it turns out, these are also best dealt with in the set-up of holomorphic coordinates.
\subsection{Plan of the paper}
The key idea in our proof of the Morawetz estimate is to think of the energy as the flux for the momentum. Unfortunately this is not as simple as it sounds, as the momentum density is not a uniquely defined object, and the ``obvious'' choice does not yield the full energy as its flux, not even at leading order. To address this matter, in the next section we review density-flux pairs for the momentum. The standard density $\eta\psi_x$ implicit in \eqref{momentum} only allows one to control the local potential energy, while for the local kinetic energy we introduce
an alternate density and the associated flux.
The two density-flux relations for the momentum are exploited in Section~\ref{s:nonlinear}.
There the proof of the Morawetz inequality is reduced to three main technical lemmas.
However, proving these lemmas in the standard Eulerian setting seems nearly impossible; instead,
our strategy is to first switch to holomorphic coordinates.
We proceed as follows. In Section~\ref{ss:laplace} we review the holomorphic (conformal) coordinates
and relate the two sets of variables between Eulerian and holomorphic formulation. In particular
we show that the fixed energy type norms in the paper admit equivalent formulations in the two settings. In this we largely follow \cite{HIT}, \cite{H-GIT} and \cite{AIT}, though several
new results are also needed. In Section~\ref{s:switch} we discuss the correspondence between the
local energy norms in the two settings, as well as several other related matters.
The aim of the last section of the paper is to prove the three main technical lemmas.
This is done in two steps. First we obtain an equivalent formulation in the holomorphic setting, and then we use multilinear analysis methods to prove the desired estimates.
\section{Conservation of momentum and local conservation laws}\label{s:Cl}
A classical result is that the momentum, which is the following quantity,
\[
\mathcal{M}(t)=\int_\mathbb{R}\int_{-h}^{\eta(t,x)}\phi_x (t,x,y)\, dy dx,
\]
is a conserved quantity:
$$
\frac{d}{dt} \mathcal{M}=0.
$$
This comes from the invariance with respect to horizontal translation (see Benjamin
and Olver \cite{BO} for a systematic study of the symmetries of
the water-wave equations).
We exploit later the conservation of the momentum through the use of
density/flux pairs $(I,S)$. By definition, these are pairs such that
\begin{equation}\label{density}
\mathcal{M}=\int I \, dx ,
\end{equation}
and such that one has the conservation law
\begin{equation}\label{cl}
\partial_t I+\partial_x S=0.
\end{equation}
One can use these pairs by means of Morawetz's multiplier method.
Consider a function $m=m(x)$ which is positive and increasing.
Multiplying the identity \eqref{cl} by $m$ and integrating
over $[0,T]\times \mathbb{R}$, then integrating by parts, yields
\[
\iint_{[0,T]\times \mathbb{R}} S(t,x) m_x \, dxdt =\int_\mathbb{R} m(x)I(T,x)\, dx-\int_\mathbb{R} m(x)I(0,x)\, dx.
\]
The key point is that, since the slope $m_x$ is nonnegative, the latter identity is favorable provided
that $S$ is non-negative.
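A typical choice, recorded here for orientation, is to take $m$ to be a bounded antiderivative of the cutoff, so that the slope is exactly the local energy weight:

```latex
m(x):=\int_{-\infty}^{x}\chi(s-x_0)\,ds,
\qquad m_x=\chi(\cdot-x_0),
\qquad 0\leq m\leq \|\chi\|_{L^1},
```

with which the identity above bounds the localized integral of $S$ by quantities which, like the momentum, are evaluated only at the times $t=0$ and $t=T$.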
There are three pairs $(I,S)$ that play a role in our work.
These pairs have already
been discussed in \cite[\S 2]{AIT}. Here we keep the same densities;
however, the associated fluxes will acquire an extra term due to
the surface tension term in the equations.
\begin{lemma}
The expression
\[
I_1(t,x)=\int_{-h}^{\eta(t,x)} \phi_x(t,x,y) \, dy,
\]
is a density for the momentum, with associated density flux
\[
S_1(t,x)\mathrel{:=} -\int_{-h}^{\eta(t,x)}\partial_t \phi\, dy -\frac{g}{2}\eta^2
+ \kappa\left(1-\frac{1}{\sqrt{1+\eta_x^2}}\right)
+\frac{1}{2} \int_{-h}^{\eta(t,x)}(\phi_x^2-\phi_y^2)\, dy.
\]
\end{lemma}
\begin{proof}
With minor changes this repeats the computation in \cite{AIT}.
Given
a function $f=f(t,x,y)$, we use the notation
$\evaluate{f}$ to denote the function
$$
\evaluate{f}(t,x)=f(t,x,\eta(t,x)).
$$
Then we have
$$
\partial_t I_1=\partial_t \int_{-h}^{\eta}\phi_x\, dy=(\partial_t \eta)\evaluate{\phi_x}+\int_{-h}^\eta \partial_t \phi_x \, dy.
$$
It follows from the kinematic equation for $\eta$ and
the Bernoulli equation for $\phi$ that
$$
\partial_t I_1=(\evaluate{\phi_y}-\eta_x\evaluate{\phi_x})\evaluate{\phi_x}
-\int_{-h}^{\eta}\partial_x \left(\frac{1}{2} \left\vert\nabla_{x,y}\phi\right\vert^2+ P\right)\, dy,
$$
so
$$
\partial_t I_1=\evaluate{\phi_y}\evaluate{\phi_x}+\frac{1}{2} \eta_x\evaluate{\phi_y^2}
-\frac{1}{2}\eta_x\evaluate{\phi_x}^2+\eta_x \evaluate{P}
-\partial_x \int_{-h}^{\eta}\left(\frac{1}{2} \left\vert\nabla_{x,y}\phi\right\vert^2+ P\right) \, dy.
$$
Using the Bernoulli equation again, together with the
equation for the pressure at the free surface,
we find that
$$
\partial_t I_1=\evaluate{\phi_y}\evaluate{\phi_x}+\frac{1}{2} \eta_x\evaluate{\phi_y^2}
-\frac{1}{2}\eta_x\evaluate{\phi_x}^2
-\kappa \eta_x \partial_x\left(\frac{\eta_x}{\sqrt{1+\eta_x^2}}\right)
+\partial_x \int_{-h}^{\eta}\left(\partial_t \phi+gy\right) \, dy.
$$
Since
$$
-\kappa \eta_x \partial_x\left(\frac{\eta_x}{\sqrt{1+\eta_x^2}}\right)
=\kappa\partial_x \frac{1}{\sqrt{1+\eta_x^2}}=\kappa \partial_x\left(\frac{1}{\sqrt{1+\eta_x^2}}-1\right),
$$
and since $\partial_x \int_{-h}^{\eta}gy\, dy=\partial_x(g\eta^2/2)$, to complete the proof, it is sufficient to verify that
$$
\evaluate{\phi_y}\evaluate{\phi_x}+\frac{1}{2} \eta_x\evaluate{\phi_y^2}
-\frac{1}{2}\eta_x\evaluate{\phi_x}^2=\frac{1}{2}\partial_x \int_{-h}^{\eta}(\phi_y^2-\phi_x^2)\, dy.
$$
This in turn follows from the fact that
$$
\int_{-h}^\eta(\phi_y\phi_{yx}-\phi_x\phi_{xx})\, dy=
\int_{-h}^\eta(\phi_y\phi_{yx}+\phi_x\phi_{yy})\, dy=
\int_{-h}^\eta\partial_y (\phi_x\phi_y)\, dy=\evaluate{\phi_x}\evaluate{\phi_y},
$$
where we used $\phi_{xx}=-\phi_{yy}$ and the solid wall boundary condition $\phi_y(t,x,-h)=0$.
\end{proof}
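The pointwise surface tension identity used in the proof above, $-\kappa \eta_x \partial_x\big(\eta_x/\sqrt{1+\eta_x^2}\big) = \kappa\partial_x\big(1/\sqrt{1+\eta_x^2}\big)$, can be verified symbolically; a minimal sympy check:

```python
import sympy as sp

x = sp.symbols('x')
kappa = sp.symbols('kappa', positive=True)
eta = sp.Function('eta')(x)
ex = sp.diff(eta, x)                                  # eta_x

lhs = -kappa * ex * sp.diff(ex / sp.sqrt(1 + ex**2), x)
rhs = kappa * sp.diff(1 / sp.sqrt(1 + ex**2), x)
residual = sp.simplify(lhs - rhs)                     # expect 0
```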
The above density-flux pair will not
be directly useful because it has a linear component in it.
However, we will use it as a springboard
for the next two density-flux pairs.
\begin{lemma}\label{l:I2}
The expression
\[
I_2(t,x)=\eta(t,x)\psi_x(t,x)
\]
is a density for the momentum, with associated density flux
\[
S_2(t,x)\mathrel{:=} -\eta\psi_t-\frac{g}{2}\eta^2+ \kappa\left(1- \frac{1}{\sqrt{1+\eta_x^2}}\right)+
\frac{1}{2} \int_{-h}^{\eta(t,x)}(\phi_x^2-\phi_y^2)\, dy.
\]
\end{lemma}
\begin{proof}
The proof is identical to that of Lemma~2.3 in \cite{AIT}.
\end{proof}
To define the third pair we recall from \cite{AIT} two auxiliary functions defined inside the fluid domain.
Firstly we introduce the stream function~$q$,
which is the harmonic conjugate of $\phi$:
\begin{equation}\label{defi:qconj}
\left\{
\begin{aligned}
& q_x = -\phi_y , \quad \text{in }-h<y<\eta(t,x),\\
& q_y = \phi_x, \qquad \text{in }-h<y<\eta(t,x),\\
& q(t,x,-h) = 0.
\end{aligned}
\right.
\end{equation}
We also introduce the harmonic extension $\theta$ of $\eta$ with Dirichlet boundary condition on the bottom:
\begin{equation}\label{defi:thetainitial}
\left\{
\begin{aligned}
& \Delta_{x,y}\theta=0\quad \text{in }-h<y<\eta(t,x),\\
&\theta(t,x,\eta(t,x))=\eta(t,x),\\
&\theta(t,x,-h) = 0.
\end{aligned}
\right.
\end{equation}
Now the following lemma contains the key density/flux pair for the momentum.
\begin{lemma}\label{l:I3}
The expression
\[
I_3(t,x)= \int_{-h}^\eta
\nabla \theta(t,x,y) \cdot\nabla q(t,x,y) \, dy
\]
is a density for the momentum, with associated density flux
\[
S_3(t,x)\mathrel{:=} -\frac{g}{2}\eta^2 - \int_{-h}^{\eta(t,x)} \theta_y \phi_t \, dy
+ \kappa\left(1 - \frac{1}{\sqrt{1+\eta_x^2}}\right)
+ \int_{-h}^{\eta(t,x)} \Big(\frac{1}{2} (\phi_x^2 - \phi_y^2)+ \theta_t \phi_y \Big)\, dy.
\]
\end{lemma}
\begin{proof}
The proof is identical to that of Lemma~2.4 in \cite{AIT}.
\end{proof}
In our analysis we will not only need the evolution equations restricted to the free boundary,
but also the evolution of $\theta$ and $\phi$ within the fluid domain.
To describe that, we introduce the operators
$H_D$ and $H_N$, which act
on functions on the free surface
$\{y=\eta(t,x)\}$ and produce
their harmonic extension inside
the fluid domain with zero Dirichlet, respectively Neumann
boundary condition\footnote{These two operators coincide in the infinite depth setting.} on the bottom $\{y=-h\}$.
With these notations, we have
\[
\theta = H_D(\eta), \qquad \phi= H_N(\psi).
\]
Recall that, given a function $f=f(t,x,y)$, we set
$\evaluate{f}(t,x):=f(t,x,\eta(t,x))$.
\begin{lemma}
The function $\phi_t$ is harmonic within the fluid domain,
with Neumann boundary condition on the bottom, and satisfies
\begin{equation}\label{phit}
\phi_t = -g H_N(\eta) - H_N\left(\evaluate{|\nabla \phi|^2}\right) + \kappa H_N( \partial_x(\eta_x/\sqrt{1+\eta_x^2}) ).
\end{equation}
The function $\theta_t$ is harmonic within the fluid domain,
with Dirichlet boundary condition on the bottom, and satisfies
\begin{equation}\label{thetat}
\theta_t = \phi_y - H_D\left(\evaluate{\nabla \theta \cdot\nabla \phi}\right).
\end{equation}
\end{lemma}
\begin{proof}
The equation for $\phi$ follows directly from the Bernoulli equation. The equation for $\theta$
is as in \cite[Lemma~2.5]{AIT}.
\end{proof}
\section{Local energy decay for gravity-capillary waves} \label{s:nonlinear}
In this section we prove our main result in Theorem~\ref{ThmG1}, modulo three results (see Lemmas~\ref{l:fixed-t},~\ref{l:kappa-diff},~\ref{l:kappa-3}), whose proofs are in turn relegated to the last section of the paper.
We begin with a computation similar to \cite{AIT}.
We use the density-flux pairs $(I_2,S_2)$ and $(I_3,S_3)$ introduced in the previous section. Given $\sigma\in (0,1)$ to be chosen later on, we set
\[
\mI_m^\sigma(t) = \int_\mathbb R m(x)( \sigma I_2(t,x) + (1-\sigma) I_3(t,x))\, dx .
\]
Since $\partial_t I_j+\partial_x S_j=0$, by integrating by parts,
we have
\[
\partial_t \mI_m^\sigma(t) = \int_\mathbb R m_x ( \sigma S_2(t,x) + (1-\sigma) S_3(t,x)) \, dx.
\]
Consequently, to prove Theorem~\ref{ThmG1}, it is sufficient
to establish the following estimates:
\begin{description}
\item[(i)] Fixed time bounds,
\begin{equation}\label{I2-bound}
\left| \int_\mathbb R m(x) I_2 \, dx \right| \lesssim
\| \eta\|_{ g^{-\frac14}H^{\frac14}_h \cap \kappa^{-\frac14} H^{\frac34}_h}
\| \psi_x\|_{g^{\frac14}H^{-\frac14}_h + \kappa^{\frac14}H^{-\frac34}_h},
\end{equation}
\begin{equation}\label{I3-bound}
\left| \int_\mathbb R m(x) I_3 \, dx \right|
\lesssim \| \eta\|_{g^{-\frac14}H^{\frac14}_h \cap \kappa^{-\frac14}H^{\frac34}_h}
\| \psi_x\|_{g^{\frac14}H^{-\frac14}_h+\kappa^{\frac14}H^{-\frac34}_h}.
\end{equation}
\item[(ii)] Time integrated bound; the goal is to prove that there exist
$\sigma \in (0,1)$ and $c < 1$ such that
\begin{equation} \label{S23-bound}
\int_{0}^T \int_\mathbb R m_x ( \sigma S_2(t) + (1-\sigma) S_3(t))\, dx dt \gtrsim
\| (\eta,\psi)\|_{LE^\kappa_{0}}^2 - c \| (\eta,\psi)\|_{LE^\kappa}^2,
\end{equation}
where the local norms are as defined in~\eqref{defiLEkappa}.
\end{description}
We now successively discuss the two sets of bounds above.
\textbf{(i) The fixed time bounds \eqref{I2-bound}-\eqref{I3-bound}.} These are similar to \cite{AIT}, except that in this case one needs to take into account the threshold frequency $\lambda_{0}=\sqrt{g/\kappa}$.
Since $I_2=\eta \psi_x$, by a duality argument,
to prove the bound in \eqref{I2-bound}
it suffices to show that
\begin{equation}
\label{algebra}
\Vert m\eta \Vert_{g^{-\frac{1}{4}}H^\frac{1}{4}_h\cap \kappa^{-\frac{1}{4}}H^{\frac{3}{4}}_h } \lesssim \Vert \eta \Vert_{g^{-\frac{1}{4}}H^\frac{1}{4}_h\cap \kappa^{-\frac{1}{4}}H^{\frac{3}{4}}_h }.
\end{equation}
The $H_h^\frac14$ bound was already proved in \cite{AIT}, so it remains to establish the $H^\frac34_h$ bound. For this we use
a paradifferential decomposition of the product $m\eta$:
\[
m\eta =T_{m}\eta +T_{\eta}m +\Pi (m,\eta),
\]
and estimate each term separately. Here $T_{m}\eta$ represents the multiplication between the low frequencies of $m$ and the high frequencies of $\eta$, and $\Pi(m, \eta)$ is the \emph{high-high} interaction operator.
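The decomposition above can be illustrated numerically with sharp dyadic Fourier cutoffs. The sketch below uses one simple convention in which the high-high term collects the diagonal blocks, so that the three pieces reconstruct the product exactly; the functions $m$ and $\eta$ are illustrative choices on a periodic grid.

```python
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
m = np.tanh(np.sin(x))                         # illustrative weight
eta = np.cos(3 * x) + 0.5 * np.sin(17 * x)     # illustrative profile

k = np.abs(np.fft.fftfreq(N, d=1.0 / N))       # |frequency| per Fourier mode
jmax = int(np.log2(N)) + 1                     # dyadic blocks cover |k| <= N/2

def block(f, j):
    """Sharp Littlewood-Paley piece: |k| in [2^(j-1), 2^j) for j >= 1,
    and the mean (|k| < 1) for j = 0."""
    mask = (k < 1) if j == 0 else ((k >= 2 ** (j - 1)) & (k < 2 ** j))
    return np.real(np.fft.ifft(np.fft.fft(f) * mask))

Bm = [block(m, j) for j in range(jmax)]
Be = [block(eta, j) for j in range(jmax)]

T_m_eta = sum(sum(Bm[:j]) * Be[j] for j in range(1, jmax))   # low-high
T_eta_m = sum(sum(Be[:j]) * Bm[j] for j in range(1, jmax))   # high-low
Pi = sum(Bm[j] * Be[j] for j in range(jmax))                 # diagonal (high-high)
recon_err = float(np.max(np.abs(m * eta - (T_m_eta + T_eta_m + Pi))))
```

Because the sharp cutoffs partition the spectrum, the splitting reconstructs $m\eta$ to machine precision.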
We begin with the first term and note that since we are
estimating the high frequencies in $\eta$, we do not have to
deal with the $\lambda_0$ frequency threshold.
\[
\begin{aligned}
\Vert T_{m}\eta\Vert^2_{\dot{H}^{\frac{3}{4}} }&\lesssim \sum_{\lambda} \lambda^{\frac{3}{2}} \Vert m_{<\lambda}\eta_{\lambda}\Vert_{L^2}^2\lesssim \sum_{\lambda}\lambda^{\frac{3}{2}}\Vert m_{< \lambda}\Vert^2_{L^{\infty}} \Vert \eta_{\lambda}\Vert_{L^2}^2 \lesssim \Vert m\Vert^2_{L^{\infty}}\sum_{\lambda} \lambda ^{\frac{3}{2}}\Vert \eta_{\lambda}\Vert_{L^2}^2 \\ &\lesssim \Vert m\Vert ^2_{L^{\infty}}\Vert \eta \Vert^2_{\dot{H}^{\frac{3}{4}}}.
\end{aligned}
\]
For the second term, we need to bound \eqref{algebra} only at frequencies $\lambda$ with $ \lambda_0 <\lambda $, as the case $\lambda_0 \geq \lambda $ follows as in \cite{AIT}. For this we rewrite $T_{\eta}m$ as
\[
T_{\eta}m= \sum_{\lambda } \eta_{<\lambda} m_{\lambda},
\]
and then, since we only sum over the frequencies above $\lambda_0$, we get
\[
\Vert T_{\eta}m \Vert^2_{\dot{H}^{\frac{3}{4}}} \lesssim \sum_{\lambda_0<\lambda}\Vert \eta_{<\lambda}m_{\lambda} \Vert^2_{\dot{H}^{\frac{3}{4}}},
\]
and for each term we estimate using Plancherel and Bernstein's inequalities
\[
\begin{aligned}
\sum_{ \lambda_0< \lambda } \Vert \eta_{<\lambda } m_{\lambda} \Vert^2_{\dot{H}^{\frac{3}{4}}} &\lesssim \sum_{\lambda_{0}<\lambda } \lambda ^{\frac{3}{2}}\Vert \eta_{<\lambda} m_{\lambda}\Vert_{L^2}^2\\
&\lesssim \sum_{\lambda_{0}<\lambda } \lambda ^{\frac{3}{2}}\Vert \eta_{<\lambda} \Vert^2_{L^4} \Vert m_{\lambda}\Vert_{L^4}^2 \\
&\lesssim \sum_{\lambda_{0}<\lambda } \lambda ^{\frac{3}{2}}\Vert \eta \Vert^2_{\dot{H}^{\frac{1}{4}}} \lambda^{\frac{3}{2}}\Vert m_{\lambda}\Vert_{L^1}^2 \\
&\lesssim \sum_{\lambda_{0}<\lambda } \lambda^{-1} \Vert \eta \Vert^2_{\dot{H}^{\frac{1}{4}}} \Vert m_{xx}\Vert_{L^1}^2.
\end{aligned}
\]
The summation over $\lambda$ is trivial. Finally, the bound for the remaining high-high term is obtained in a similar fashion.
To obtain the second
bound in \eqref{I3-bound}, we begin by transforming $I_3$. Firstly, by definition
of $I_3$ and $q$, we have
\[
\int_\mathbb R m I_3 \, dx =\iint_{\Omega(t)} m(\theta_{y} \phi_x -\theta_{x} \phi_y)\, dy dx = \iint_{\Omega(t)} m\left( \partial_y (\theta \phi_x) -\partial_{x}(\theta \phi_y)\right)\, dydx.
\]
Now we have
\[
\iint_{\Omega(t)} m \partial_y (\theta \phi_x)\, dydx=\int_\mathbb R m (\theta \phi_x)\arrowvert_{y=\eta}\, dx.
\]
On the other hand, integrating by parts in $x$, we get
\[
\iint_{\Omega(t)} m \partial_{x}(\theta \phi_y)\, dydx=
-\iint_{\Omega(t)} m_x \theta \phi_{y}\, dy dx-\int_\mathbb R \eta_x m (\theta \phi_y)\arrowvert_{y=\eta}\, dx.
\]
Consequently,
\[
\int_\mathbb R m I_3 \, dx=\int_\mathbb R m (\theta \phi_x+\eta_x\theta\phi_y)\arrowvert_{y=\eta}\, dx
+\iint_{\Omega(t)} m_x \theta \phi_{y}\, dy dx.
\]
Now, by definition of $\theta$ one has $\theta\arrowvert_{y=\eta}=\eta$.
Since $\phi_y=-q_x$ and since $(\phi_x+\eta_x\phi_y)\arrowvert_{y=\eta}=\psi_x$, we end up with
\[
\int_\mathbb R m I_3 \, dx=\int_\mathbb R mI_2\, dx-\iint_{\Omega(t)}m_x \theta q_x \, dydx.
\]
It remains to estimate the second part. This is a more delicate bound, which requires
the use of holomorphic coordinates and is postponed for the last section of the paper. We state
the desired bound as follows:
\begin{lemma}\label{l:fixed-t}
The following fixed-time estimate holds:
\begin{equation} \label{fixed-t}
\left | \iint_{\Omega} m_x \theta q_{x}\, dy dx \right | \lesssim
\| \eta\|_{g^{-\frac{1}{4}}H^{\frac14}_h \cap \kappa^{-\frac{1}{4}}H^{\frac34}_h} \| \psi_x\|_{ g^{\frac{1}{4}}H^{-\frac14}_h+ \kappa^{\frac{1}{4}}H^{-\frac34}_h}.
\end{equation}
\end{lemma}
\textbf{ (ii) The time integrated bound \eqref{S23-bound}.}
We take $\sigma <1/2$, but close to
$1/2$. Using the expressions in Lemmas~\ref{l:I2} and~\ref{l:I3}
as well as the relations \eqref{phit} and \eqref{thetat}
we write the integral in \eqref{S23-bound} as a combination of two
leading order terms plus error terms
\[
\int_{0}^T \int_\mathbb R m_x ( \sigma S_2(t) + (1-\sigma) S_3(t))\, dx dt = LE_{\psi} + g LE_\eta
+ \kappa LE_\eta^\kappa
+ Err_1 + g Err_2 + Err_3,
\]
where
\[
LE_\psi := \frac12 \int_0^T \iint_{\Omega(t)} m_x [ \sigma(\phi_x^2 - \phi_y^2) + (1-\sigma) |\nabla \phi|^2] \, dx dy dt
\]
\[
LE_\eta := \int_0^T \left(\frac{\sigma}{2} \int_\mathbb R m_x \eta^2 \, dx - (1-\sigma) \iint_{\Omega(t)} m_x \theta_y ( \theta-H_N(\eta))\, dx dy\right) dt ,
\]
\[
LE_\eta^\kappa = \int_0^T \!\! \left(\int_\mathbb R m_x \left( 1 - \frac{1}{\sqrt{1+\eta_x^2}}
- \sigma \eta H(\eta) \right) dx- (1-\sigma)
\iint_{\Omega(t)}\!\! m_x \theta_y H_N(H(\eta)) dy dx \!\right)\! dt ,
\]
and finally
\[
\begin{aligned}
&Err_1 := \sigma \int_0^T \int_\mathbb R m_x \eta \mathcal{N}(\eta) \psi \, dx dt ,\\
&Err_2 := \frac{1-\sigma}{2} \int_0^T \iint_{\Omega(t)} m_x
\theta_y H_N(|\nabla \phi|^2) \, dx dy dt ,\\
&Err_3 := \frac{1-\sigma}{2}
\int_0^T \iint_{\Omega(t)} m_x \phi_y H_D(\nabla \theta \cdot \nabla \phi)\, dx dy dt .
\end{aligned}
\]
The terms which do not involve the surface tension have already been estimated
in \cite{AIT}. We recall the outcome here:
\begin{proposition}[\cite{AIT}]
The following estimates hold:
(i) Positivity estimates:
\begin{equation}
LE_\psi+ LE_\eta \gtrsim \| (\eta,\psi)\|_{LE^\kappa_{0}}^2 - c \| (\eta,\psi)\|_{LE^\kappa}^2.
\end{equation}
(ii) Error bounds:
\begin{equation}
|Err_1| + |Err_2 | \lesssim \varepsilon LE(\eta,\psi).
\end{equation}
(iii) Normal form correction in holomorphic coordinates:
\begin{equation}
| Err_3 | \lesssim \varepsilon \left( LE(\eta,\psi) + \| \eta(0)\|_{H^{\frac14}_h} \| \psi_x(0)\|_{H^{-\frac14}_h}
+ \| \eta(T)\|_{H^{\frac14}_h} \| \psi_x(T)\|_{H^{-\frac14}_h}\right).
\end{equation}
\end{proposition}
We recall that the bound for $Err_3$ is more complex because, rather than estimating it directly,
in \cite{AIT} we use a normal form correction to deal with the bulk of $Err_3$, and estimate
directly only the ensuing remainder terms. Fortunately the normal form correction only
uses the $\eta$ equation, and thus does not involve at all the surface tension.
Thus, in what follows our remaining task is to estimate the contribution of $LE_\eta^\kappa$,
which we describe in the following
\begin{proposition}\label{p:kappa}
The following estimate holds:
\begin{equation}
LE_\eta^\kappa \gtrsim \int_0^T\!\! \int_\mathbb R m_x \eta_x^2 \, dx\, dt - \varepsilon LE^\kappa.
\end{equation}
\end{proposition}
For the rest of the section we consider the main steps in the proof of the above proposition.
We consider the three terms separately. For the first one there is nothing to do.
For the remaining two we recall that
\begin{equation*}
H(\eta)=\partial_x\left( \frac{\eta_x}{\sqrt{1+\eta_x^2}}\right).
\end{equation*}
The second term is easier. Integrating by parts we obtain
\[
\sigma \int_{0}^T \int_\mathbb R m_x \left( \frac{\eta_x^2}{ \sqrt{1+\eta_x^2}} + 1 - \frac{1}{\sqrt{1+\eta_x^2}}\right)
+ m_{xx} \frac{\eta \eta_x}{ \sqrt{1+\eta_x^2}}\, dx dt .
\]
The first term gives the positive contribution
\[
c \int_0^T \int_\mathbb R m_x \eta_x^2 dx dt, \qquad c \approx \frac32 \sigma,
\]
while the second is lower order and can be controlled by Cauchy-Schwarz using
the gravity part of the local energy, provided that $\kappa \ll g$.
This condition is invariant with respect to pure time scaling, but
not with respect to space-time scaling. This implies that even if this condition
is not satisfied, we still have local energy decay but with a window size
larger than $1$ (depending on the ratio $\kappa/g$), provided that $h^2 g \gg \kappa$.
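The constant $c \approx \frac{3}{2}\sigma$ above comes from the small-slope Taylor expansion of the integrand $\eta_x^2/\sqrt{1+\eta_x^2} + 1 - 1/\sqrt{1+\eta_x^2}$; this can be checked symbolically:

```python
import sympy as sp

s = sp.symbols('s')                    # s stands for the slope eta_x
expr = s**2 / sp.sqrt(1 + s**2) + 1 - 1 / sp.sqrt(1 + s**2)
expansion = sp.series(expr, s, 0, 5).removeO()   # (3/2) s^2 - (7/8) s^4
leading = expansion.coeff(s, 2)                  # expect 3/2
```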
The more difficult term is the last one, involving $H_N(H(\eta))$, namely
\[
I^\kappa = - \int_0^T \iint_{\Omega(t)} m_x \theta_y H_N(H(\eta))\, dy dx dt.
\]
The difficulty here is that,
even though $H(\eta)$ is an exact derivative as a function of $x$, this property
is lost when taking its harmonic extension since the domain itself is not flat.
Thus in any natural expansion of $H_{N}(H(\eta))$ (e.g. in holomorphic coordinates
where this is easier to see) there are quadratic (and also higher order)
terms where no cancellation occurs in the
high $\times$ high $\to$ low terms, making it impossible
to factor out one derivative.
One can think of $I^\kappa$ as consisting of a leading order quadratic
part in $\eta$ plus higher order
terms. We expect the higher order terms to be perturbative
because of our smallness condition,
but not the quadratic term. Because of this,
it will help to identify precisely the quadratic term.
On the top we have, neglecting the quadratic and higher order terms,
\[
H(\eta) \approx \eta_{xx} \approx \theta_{xx},
\]
so one might think of replacing $H(\eta)$ with $\theta_{xx}$ modulo cubic and higher order
terms.
This is not entirely correct since $\theta_{xx}$ satisfies a Dirichlet boundary condition on the bottom,
and not the Neumann boundary condition
which we need. Nevertheless, we will still make this
substitution, and pay the price of switching the boundary conditions. Precisely, we write
\begin{equation}\label{H-decomp}
H_N(H(\eta)) = \theta_{xx} + ( H_{N}(\theta_{xx}) - \theta_{xx}) + H_N(H(\eta)- \theta_{xx})
\end{equation}
and estimate separately the contribution of each term.
The contribution $I^\kappa_1$ of the first term in \eqref{H-decomp} to $I^\kappa$ is
easily described using the relation $\theta_{xx} = -\theta_{yy}$,
\begin{equation}\label{Ik1}
I^\kappa_1 =
\int_0^T \iint_{\Omega(t)} m_x \theta_y \theta_{yy}\, dy dx dt =
\int_0^T \int_\mathbb R m_x {\theta_y^2}\arrowvert_{y= \eta(t,x)}\, dx dt,
\end{equation}
which has the right sign.
It remains to estimate the integrals
\[
I^\kappa_2 = - \int_0^T \iint_{\Omega(t)} m_x \theta_y(H_{N}(\theta_{xx}) - \theta_{xx})\, dy dx dt,
\]
and
\[
I^\kappa_3 = - \int_0^T \iint_{\Omega(t)} m_x \theta_y H_N(H(\eta)- \theta_{xx}) \, dy dx dt.
\]
For these two integrals we will prove the following lemmas:
\begin{lemma}\label{l:kappa-diff}
The integral $I^\kappa_2$ is estimated by
\begin{equation}
| I^\kappa_{2}| \lesssim h^{-2} \| \eta \|_{LE}^2.
\end{equation}
\end{lemma}
\begin{lemma}\label{l:kappa-3}
The expression $I_{3}^{\kappa}$ is estimated by
\begin{equation}
| I^{\kappa}_{3} | \lesssim \varepsilon \Big( \| \eta_x \|_{LE_{0}}^2 + \frac{g}{\kappa} \| \eta \|_{LE}^2\Big).
\end{equation}
\end{lemma}
Given the relation \eqref{Ik1} and the last two lemmas, the desired
result in Proposition~\ref{p:kappa} follows. The two lemmas above are
most readily proved by switching to holomorphic coordinates. In the
next two sections we recall how the transition to holomorphic
coordinates works, following \cite{HIT}, \cite{ITc}, \cite{H-GIT} and
\cite{AIT}. Finally, in the last section of the paper we prove
Lemmas~\ref{l:kappa-diff},~\ref{l:kappa-3}.
\section{Holomorphic coordinates}
\label{ss:laplace}
\subsection{Harmonic functions in the canonical domain}
We begin by discussing two classes
of harmonic functions in the horizontal strip \(S = \mathbb{R}\times(-h,0)\).
Given a function $f=f(\alpha)$ defined on the top,
consider its harmonic extension with
homogeneous Neumann boundary condition on the bottom,
\begin{equation}\label{MixedBCProblem}
\begin{cases}
\Delta u = 0\qquad \text{in}\ \ S \\[1ex]
u(\alpha,0) = f\\[1ex]
\partial_\beta u(\alpha,-h) = 0.
\end{cases}
\end{equation}
It can be written in the form
\begin{equation}\label{PN}
u(\alpha,\beta)= P_N(\beta, D)f(\alpha) := \frac{1}{{2\pi}}\int p_N(\xi,\beta)\hat
f(\xi)e^{i\alpha\xi}\, d\xi,
\end{equation}
where $p_N$ is a Fourier multiplier with symbol
\[
p_N(\xi,\beta) = \frac{\cosh((\beta+h)\xi)}{\cosh(h\xi)}.
\]
We will make use of the Dirichlet to Neumann map $\mathcal{D}_N$, defined by
$$
\mathcal{D}_N f=\partial_\beta u(\cdot, 0),
$$
as well as the Tilbert transform, defined by
\begin{equation}\label{def:Tilbert}
\mathcal T_h f(\alpha) =
-\frac{1}{2h}\lim\limits_{\varepsilon\downarrow0}\int_{|\alpha-\alpha'|>\varepsilon}
\operatorname{cosech}\left(\frac\pi{2h}(\alpha-\alpha')\right)f(\alpha')\, d\alpha'.
\end{equation}
Then the Tilbert transform is the Fourier multiplier
\[
\mathcal T_h = -i\tanh(hD).
\]
Notice that it
takes real-valued functions to real-valued functions. The
inverse Tilbert transform is denoted by \(\mathcal T_h^{-1}\); \emph{a priori}
this is defined modulo constants.
It follows that the Dirichlet to Neumann map can be written under the form
\[
\mathcal{D}_N f=\mathcal T_h \partial_{\alpha} f.
\]
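Both $\mathcal T_h$ and $\mathcal D_N$ are Fourier multipliers and are easy to realize spectrally. The sketch below uses a $2\pi$-periodic grid as a surrogate for the real line and checks the action on a single cosine mode, where $\mathcal T_h\cos(k\alpha)=\tanh(hk)\sin(k\alpha)$ and $\mathcal D_N\cos(k\alpha)=k\tanh(hk)\cos(k\alpha)$; the grid size and depth are illustrative.

```python
import numpy as np

N, h = 512, 2.0
alpha = 2 * np.pi * np.arange(N) / N
xi = np.fft.fftfreq(N, d=1.0 / N)              # integer frequencies

def tilbert(f):
    """Tilbert transform: Fourier multiplier -i tanh(h xi)."""
    return np.real(np.fft.ifft(-1j * np.tanh(h * xi) * np.fft.fft(f)))

def dtn(f):
    """Dirichlet-to-Neumann map D_N = T_h d/dalpha, symbol xi tanh(h xi)."""
    return np.real(np.fft.ifft(xi * np.tanh(h * xi) * np.fft.fft(f)))

k0 = 5
f = np.cos(k0 * alpha)
err_til = float(np.max(np.abs(tilbert(f) - np.tanh(h * k0) * np.sin(k0 * alpha))))
err_dtn = float(np.max(np.abs(dtn(f) - k0 * np.tanh(h * k0) * f)))
```

Note that the multiplier $-i\tanh(h\xi)$ is odd and imaginary, so the transform indeed maps real functions to real functions.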
We now consider a similar problem
with the homogeneous Dirichlet boundary condition on the bottom
\begin{equation}\label{DirichletProblem}
\begin{cases}
\Delta v = 0\qquad \text{in}\ \ S \\[1ex]
v(\alpha,0) = g \\[1ex]
v(\alpha,-h) = 0.
\end{cases}
\end{equation}
Then
\[
v(\alpha,\beta)=P_D(\beta, D)g(\alpha) := \frac{1}{{2\pi}}\int p_D(\xi,\beta)\hat
g(\xi)e^{i\alpha\xi}\, d\xi,
\]
where
\[
p_D(\xi,\beta) = \frac{\sinh((\beta+h)\xi)}{\sinh(h\xi)}.
\]
The Dirichlet to Neumann map $\mathcal{D}_D$ for this problem is given by
\[
\partial_\beta v(\alpha ,0)=\mathcal{D}_D g=-\mathcal T_h^{-1}\partial_{\alpha} g.
\]
The solution to~\eqref{MixedBCProblem} is related
to the one of~\eqref{DirichletProblem} by means of harmonic conjugates.
Namely, given a real-valued solution $u$ to \eqref{MixedBCProblem},
we consider its
harmonic conjugate $v$, i.e.,
satisfying the Cauchy-Riemann equations
\[
\left\{
\begin{aligned}
&u_\alpha = v_\beta \\
&u_\beta = -v_\alpha \\
&\partial_\beta u(\alpha,-h)=0.
\end{aligned}
\right.
\]
Then $v$ is a solution to \eqref{DirichletProblem} provided that
the Dirichlet data $g$ for $v$ on the top is determined by the
Dirichlet data $f$ for $u$ on the top via the relation
\[
g = -\mathcal T_h f.
\]
Conversely, given $v$, there is
a corresponding harmonic conjugate~$u$ (which is uniquely
determined modulo real constants).
\subsection{Holomorphic functions in the canonical domain}
Here we consider the real algebra of
holomorphic functions $w$ in the canonical domain
$S\mathrel{:=}\{\alpha+i\beta\ :\ \alpha\in\mathbb{R},~-h\leq \beta\leq 0\}$,
which are real on the bottom $\{\mathbb R-ih\}$. Notice that such
functions are uniquely determined by their values on the top $\{\beta=0\}$,
and can be expressed
as
\[
w = u+ iv,
\]
where $u$ and $v$ are harmonic conjugate functions satisfying
the equations \eqref{MixedBCProblem}, respectively \eqref{DirichletProblem}.
Hereafter, by definition, we will call
functions on the real line holomorphic if they
are the restriction to the real line of holomorphic functions in the
canonical domain $S$ which are real on the bottom $\{\mathbb R-ih\}$.
Put another way, they are functions $w\colon \mathbb{R}\rightarrow \mathbb{C}$ such
that there is a holomorphic function,
still denoted by $w\colon S\rightarrow \mathbb{C}$, which satisfies
\[
\operatorname{Im} w = - \mathcal T_h \operatorname{Re} w
\]
on the top.
The complex conjugates of holomorphic functions are called antiholomorphic.
\subsection{Holomorphic coordinates and water waves}
Recall that $\Omega(t)$ denotes the fluid domain
at a given time $t\ge 0$, in Eulerian coordinates.
In this section we
recall following \cite{HIT,ITc} (see also \cite{dy-zak,H-GIT})
how to rewrite the water-wave problem
in holomorphic coordinates.
We introduce holomorphic coordinates
$z=\alpha + i\beta$, thanks to conformal maps
\[
Z\colon S \rightarrow \Omega(t),
\]
which map the top to the top and the bottom to the bottom.
Such a conformal transformation exists by the
Riemann mapping theorem. Notice that these maps
are uniquely defined up to horizontal translations in $S$ and that,
restricted to the real axis, this provides a
parametrization for the water surface $\Gamma$.
We set
$$
W:=Z-\alpha,
$$
so that $W = 0$ if the fluid surface is flat, i.e., $\eta = 0$.
Because of the
boundary condition on the bottom of the fluid domain,
the function $W$ is holomorphic when $\alpha \in \mathbb{R}$.
Moving to the velocity
potential $\phi$, we consider its
harmonic conjugate~$q$
and then the function $Q:=\phi +i q$,
taken in holomorphic coordinates, is the
holomorphic counterpart of $\phi$.
Here $q$ is exactly the stream function, see \cite{AIT}.
With these notations, the water-wave problem can be recast as
an evolution system for $(W, Q)$,
within the space of holomorphic functions
defined on the surface (again, we refer the reader
to \cite{HIT,ITc,dy-zak,H-GIT} for the details of the computations).
Here we recall the equations:
\begin{equation}\label{FullSystem-re}
\hspace{-.1in}
\begin{cases}
W_t + F(1+W_\alpha) = 0
\\
Q_t + FQ_\alpha - g\mathcal T_h[W] \! +\! {\mathbf P}_h \!
\left[\dfrac{|Q_\alpha|^2}{J}\right] \!
+ \! i\kappa {\mathbf P}_h\! \left[ \dfrac{W_{\alpha \alpha}}{J^{1/2}(1+W_{\alpha})} - \dfrac{\bar{W}_{\alpha \alpha}}{J^{1/2}(1+\bar{W}_{\alpha})} \right] \! = 0,
\end{cases}
\end{equation}
where
\[
J = |1+W_\alpha|^2, \qquad F = {\mathbf P}_h\left[\frac{Q_\alpha-\bar
Q_\alpha}{J}\right].
\]
Here ${\mathbf P}_h$ represents the orthogonal projection on the space of
holomorphic functions with respect to the inner product
in the Hilbert space $\mathfrak{H}_h$ introduced in
\cite{H-GIT}. This has the form
\[
\langle u, v\rangle_{\mathfrak{H}_h} := \int \left( \mathcal T_h \operatorname{Re} u \cdot \mathcal T_h \operatorname{Re} v + \operatorname{Im} u \cdot \operatorname{Im} v \right) \, d\alpha,
\]
and coincides with the $L^2$ inner product in the
infinite depth case.
Written in terms of the real and imaginary parts of $u$, the
projection ${\mathbf P}_h$ takes the form
\begin{equation}\label{def:P}
{\mathbf P}_h u = \frac12 \left[(1 - i \mathcal T_h) \operatorname{Re} u + i (1+ i\mathcal T_h^{-1}) \operatorname{Im} u \right].
\end{equation}
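The projection formula above can be tested numerically: functions with $\operatorname{Im} w = -\mathcal T_h \operatorname{Re} w$ should be fixed by ${\mathbf P}_h$, while their complex conjugates (the antiholomorphic functions) should be annihilated. A sketch on a periodic grid used as a surrogate for the real line, with the zero mode excluded from $\mathcal T_h^{-1}$ since the inverse is only defined modulo constants:

```python
import numpy as np

N, h = 512, 2.0
alpha = 2 * np.pi * np.arange(N) / N
xi = np.fft.fftfreq(N, d=1.0 / N)

T = -1j * np.tanh(h * xi)               # symbol of the Tilbert transform
Tinv = np.zeros_like(T)
Tinv[xi != 0] = 1.0 / T[xi != 0]        # inverse, defined modulo constants

def mult(f, symbol):                    # apply a Fourier multiplier
    return np.fft.ifft(symbol * np.fft.fft(f))

def P_h(u):
    """P_h u = (1/2)[(1 - i T_h) Re u + i (1 + i T_h^{-1}) Im u]."""
    re, im = np.real(u), np.imag(u)
    return 0.5 * ((re - 1j * np.real(mult(re, T)))
                  + (1j * im - np.real(mult(im, Tinv))))

r = np.cos(3 * alpha) + 0.2 * np.sin(7 * alpha)      # zero-mean real part
w = r - 1j * np.real(mult(r, T))                     # Im w = -T_h Re w
err_hol = float(np.max(np.abs(P_h(w) - w)))          # P_h fixes w
err_anti = float(np.max(np.abs(P_h(np.conj(w)))))    # P_h kills conj(w)
```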
Since all the functions in the system \eqref{FullSystem-re} are
holomorphic, it follows that these relations also hold in the
full strip $S$ for the holomorphic extensions of each term.
We also remark that in the finite depth case there is an additional
gauge freedom in the above form of the equations, in that $\operatorname{Re} F$ is a priori
only uniquely determined up to constants. This corresponds
to the similar degree of freedom in the choice of the conformal coordinates,
and will be discussed in the last subsection.
A very useful function in the holomorphic setting is
\[
R = \frac{Q_\alpha}{1+W_\alpha},
\]
which represents the ``good variable" in this setting, and
corresponds to the Eulerian function
\[
R = \phi_x + i \phi_y.
\]
We also remark that the function $\theta$ introduced in the previous section is described in holomorphic coordinates by
\[
\theta =\operatorname{Im} W.
\]
Also related to $W$, we will use the auxiliary holomorphic function
\[
Y = \frac{W_\alpha}{1+W_\alpha}.
\]
Another important auxiliary function here is the advection velocity
\[
b = \operatorname{Re} F,
\]
which represents the velocity of the particles on the fluid surface
in the holomorphic setting.
It is also interesting to provide the form of the conservation
laws in holomorphic coordinates.
We begin with the energy (Hamiltonian), which has the form
\[
\mathcal H = \frac g2 \int |\operatorname{Im} W|^2 (1+\operatorname{Re} W_\alpha ) \, d\alpha
- \frac14\langle Q,\mathcal T_h^{-1}[Q_\alpha]\rangle_{\mathfrak{H}_h} .
\]
The momentum on the other hand has the form
\[
\mathcal M = \frac12 \langle W, \mathcal T_h^{-1} Q_\alpha \rangle_{\mathfrak{H}_h}
= \int _{\mathbb{R}}\mathcal T_h \operatorname{Re} W \cdot \operatorname{Re} Q_\alpha \, d\alpha = \int _{\mathbb{R}} \operatorname{Im} W \cdot \operatorname{Re} Q_\alpha \, d\alpha .
\]
\subsection{Uniform bounds for the conformal map}
In order to freely switch computations between the Eulerian and holomorphic setting
it is very useful to verify that our Eulerian uniform smallness assumption
for the functions $(\eta,\nabla \phi\arrowvert_{y=\eta})$ also has an identical
interpretation in the holomorphic setting for the
functions $(\operatorname{Im} W,R)$. Our main result is as follows:
\begin{theorem}\label{t:equiv}
Assume that the smallness condition \eqref{uniform} holds. Then we have
\begin{equation}\label{X-kappa-hol}
\| (\operatorname{Im} W,R)\|_{X^\kappa} \lesssim \varepsilon.
\end{equation}
\end{theorem}
This result is in effect an equivalence between the two bounds. We state and prove
only this half because that is all that is needed here.
\begin{proof}
The similar result for the $X$ space corresponding to pure gravity waves was proved in \cite{AIT},
so we only need to add the $X_1$ component of the $X^\kappa$ norm.
We first recall some of the set-up in \cite{AIT}, and then return to $X_1$.
The $X$ norm is described in \cite{AIT} using the language
of frequency envelopes. We define a \emph{frequency envelope}
for $(\eta,\nabla \phi_{|y = \eta})$ in $X$ to be any positive sequence
\[
\left\{ c_{\lambda}\, : \, \quad h^{-1} < \lambda \in 2^\mathbb{Z} \right\}
\]
with the following two properties:
\begin{enumerate}
\item Dyadic bound from above,
\[
\| P_\lambda (\eta,\nabla \phi_{|y = \eta})\|_{X_0} \leq c_\lambda .
\]
\item Slowly varying,
\[
\frac{c_\lambda}{c_\mu} \leq \max
\left\{ \left(\frac{\lambda}{\mu}\right)^\delta, \left(\frac{\mu}{\lambda}\right)^\delta \right\} .
\]
\end{enumerate}
Here $\delta \ll 1$ is a small universal constant.
Among all such frequency envelopes there exists a \emph{minimal frequency envelope}.
In particular, this envelope has the property that
\[
\| (\eta,\nabla \phi_{|y = \eta})\|_{X} \approx \| c\|_{\ell^1}.
\]
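The minimal envelope admits a standard explicit construction, $c_\lambda = \sup_\mu \min(\lambda/\mu,\,\mu/\lambda)^\delta\, a_\mu$, where $a_\mu$ denote the dyadic norms. A numerical sketch with illustrative data (the value of $\delta$ and the sample norms are assumptions for the demonstration):

```python
import numpy as np

delta = 0.1
lam = 2.0 ** np.arange(12)                     # dyadic frequencies
a = np.exp(-np.arange(12) / 2.0)               # sample dyadic norms ...
a[5] = 1.0                                     # ... with a bump

# Minimal slowly varying envelope dominating a:
#   c_lam = sup_mu  min(lam/mu, mu/lam)^delta * a_mu
ratio = np.minimum.outer(lam, lam) / np.maximum.outer(lam, lam)
c = np.max(ratio ** delta * a[None, :], axis=1)

# Envelope properties: domination and slow variation with exponent delta.
dominates = bool(np.all(c >= a - 1e-12))
M = (np.maximum.outer(lam, lam) / np.minimum.outer(lam, lam)) ** delta
slowly_varying = bool(np.all(c[:, None] / c[None, :] <= M + 1e-12))
```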
We set the notations as follows:
\begin{definition} \label{def-fe}
By $\{c_\lambda\}_{\lambda \geq 1/h}$ we denote the minimal frequency
envelope for $(\eta,\nabla \phi\arrowvert_{y=\eta})$ in $X_0$.
We call $\{c_\lambda\}$ the \emph{control frequency envelope}.
\end{definition}
Since in solving the Laplace equation on the strip, solutions
at depth $\beta$ are localized at frequencies $\leq \left\vertmbda$
where $\left\vertmbda \approx |\beta|^{-1}$, we will also use the notation
\[
c_\beta = c_\left\vertmbda, \qquad \left\vertmbda \approx
|\beta|^{-1}.
\]
This determines $c_\beta$ up to a $1+O(\varepsilon)$ constant,
which suffices for our purposes.
Using these notations, in \cite{AIT} we were able to prove
a stronger version of the above theorem
for the $X$ norm, and show that one can transfer the control envelope
for $(\eta,\nabla \phi_{|y=\eta})$ to their counterpart $(\operatorname{Im} W,R)$
in the holomorphic coordinates.
\begin{proposition}\label{p:control-equiv}
Assume the smallness condition \eqref{uniform}, and let $\{c_\lambda\}$
be the control envelope as above. Then we have
\begin{equation}\label{X-fe-hol}
\| P_{\lambda} (\operatorname{Im} W,R)\|_{X_0} \lesssim c_\lambda .
\end{equation}
\end{proposition}
As noted in \cite{AIT}, as a consequence of this proposition we can further
extend the range of the frequency envelope estimates:
\begin{remark}
The $X$ control envelope $\left\lbrace c_\lambda \right\rbrace$
is also a frequency envelope for
\begin{itemize}
\item $(\operatorname{Im} W, R)$ in $X_0$.
\item $W_\alpha$ in $H^{\frac12}_h$ and $L^\infty$.
\item $Y$ in $H^{\frac12}_h$.
\end{itemize}
\end{remark}
We remark that this in particular implies, by Bernstein's inequality,
the pointwise bound
\begin{equation}\label{W-ppoint}
\| W_\alpha\|_{L^\infty} \lesssim \varepsilon_0.
\end{equation}
This in turn implies that the Jacobian matrix for the change
of coordinates stays close to the identity.
We now turn our attention to the $X_1$ component of the $X^\kappa$ norm. We
have the additional information that
\begin{equation}
\| \eta_{xx}\|_{L^2} \lesssim \varepsilon \left(\frac{g}{\kappa}\right)^\frac14 ,
\end{equation}
and we need to show that
\begin{equation}\label{want-X1}
\| W_{\alpha\alpha}\|_{L^2} \lesssim \varepsilon \left(\frac{g}{\kappa}\right)^\frac14 .
\end{equation}
We begin by computing
\[
\eta_{xx} = J^{-\frac12} \partial_\alpha( J^{-\frac12} \operatorname{Im} W_\alpha),
\]
therefore
\[
\| \eta_{xx} \|_{L^2} \approx \| \partial_\alpha( J^{-\frac12} \operatorname{Im} W_\alpha) \|_{L^2} .
\]
Thus we have
\[
\| \operatorname{Im} W_{\alpha\alpha}\|_{L^2} \lesssim \|\eta_{xx}\|_{L^2}
+ \| W_\alpha \|_{L^\infty} \| W_{\alpha\alpha}\|_{L^2}
\lesssim \varepsilon \left(\frac{g}{\kappa}\right)^\frac14
+ \varepsilon \| W_{\alpha\alpha}\|_{L^2}.
\]
On the other hand, the real and imaginary parts of $W_{\alpha\alpha}$
have the same regularity at frequency
$> h^{-1}$; more precisely, we can estimate
\[
\| \operatorname{Re} W_{\alpha\alpha}\|_{L^2} \lesssim \|\operatorname{Im} W_{\alpha\alpha}\|_{L^2} + h^{-1} \| \operatorname{Im} W_\alpha\|_{L^2}
\lesssim
\|\operatorname{Im} W_{\alpha\alpha}\|_{L^2} + h^{-\frac12} \varepsilon,
\]
where the $L^2$ bound for $\operatorname{Im} W_\alpha$ comes from the $X$ norm.
Combining the last two bounds we get
\[
\| W_{\alpha\alpha}\|_{L^2}\lesssim \varepsilon \left(\left(\frac{g}{\kappa}\right)^\frac14 + h^{-\frac12}\right).
\]
Then \eqref{want-X1} follows from the relation $h^2 g \gtrsim \kappa$,
which says that the Bond number
stays bounded.
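Indeed, the condition $h^2 g \gtrsim \kappa$ can be rewritten as $h^{-2} \lesssim g/\kappa$, which after taking fourth roots yields
\[
h^{-\frac12} \lesssim \left(\frac{g}{\kappa}\right)^\frac14,
\]
so the $h^{-\frac12}$ term in the last display is absorbed into the right hand side of \eqref{want-X1}.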
\end{proof}
\subsection{Fixed time bounds at the level of the momentum}
Our objective here is to relate the Eulerian norms of $(\eta,\psi)$
at the momentum level in $E^\frac14$ (see \eqref{e14})
to their counterpart in the holomorphic setting
for $(W,R)$. Precisely, we have:
\begin{lemma} \label{l:equiv-M}
Assume that the condition \eqref{uniform} holds.
Then we have the estimate
\begin{equation}
\| (\operatorname{Im} W,\partial_\alpha^{-1} \operatorname{Im} R) \|_{E^\frac14} \approx \| (\eta,\psi)\|_{E^\frac14}.
\end{equation}
\end{lemma}
\begin{proof}
Recalling that $\eta = \operatorname{Im} W$, for the first part of
the equivalence we are bounding the same function but
in different coordinates. As the change of coordinates is bi-Lipschitz,
the $L^2$ and $\dot H^1$ norms are equivalent, and, by interpolation, all
intermediate norms.
For the second part of the equivalence we use the relation $\psi =\operatorname{Re} Q$.
By the same reasoning as above, we can switch coordinates to get
\[
\| \psi \|_{ g^\frac14 \dot H^{\frac34}_h(\mathbb{R}) + \kappa^\frac14 \dot H^{\frac14}_h(\mathbb{R})}
\approx \| \operatorname{Re} Q \|_{ g^\frac14 \dot H^{\frac34}_h(\mathbb{R}) + \kappa^\frac14 \dot H^{\frac14}_h(\mathbb{R})},
\]
where the first norm is relative to the Eulerian coordinate $x$
and the second norm is relative to the holomorphic coordinate $\alpha$.
It remains to relate the latter to the corresponding norm of $\partial_\alpha^{-1} R$.
Differentiating, we need to show that
\[
\| Q_\alpha \|_{ g^{\frac14} H^{-\frac14}_h(\mathbb{R}) + \kappa^\frac14 H^{-\frac34}_h(\mathbb{R})}
\approx \| R \|_{ g^{\frac14} H^{-\frac14}_h(\mathbb{R}) + \kappa^\frac14 H^{-\frac34}_h(\mathbb{R})}.
\]
But here we can use the relation
\[
Q_\alpha = R(1+W_\alpha),
\]
along with the multiplicative bound
\begin{equation}\label{algebra1}
\| f R \|_{g^\frac14 H^{-\frac14}_h(\mathbb{R}) + \kappa^\frac14 H^{-\frac34}_h(\mathbb{R})}
\lesssim (\| f\|_{L^\infty} + (\kappa/g)^\frac14 \| f_\alpha \|_{L^2})
\| R \|_{g^\frac14 H^{-\frac14}_h(\mathbb{R}) + \kappa^\frac14 H^{-\frac34}_h(\mathbb{R})},
\end{equation}
applied with $f = W_\alpha$ and then in the other direction with $f = Y$.
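To make the reverse direction explicit: since $Y = \frac{W_\alpha}{1+W_\alpha}$ we have $\frac{1}{1+W_\alpha} = 1 - Y$, so
\[
R = Q_\alpha (1-Y) = Q_\alpha - Y Q_\alpha,
\]
and the bound for $R$ in terms of $Q_\alpha$ follows by applying \eqref{algebra1} with $f = Y$ and with $Q_\alpha$ in place of $R$.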
It remains to prove \eqref{algebra1}. By duality we rephrase this as
\begin{equation}\label{algebra1-dual}
\| f R \|_{g^{-\frac14} H^{\frac14}_h(\mathbb{R}) \cap \kappa^{-\frac14} H^{\frac34}_h(\mathbb{R})}
\lesssim (\| f\|_{L^\infty} + (\kappa/g)^\frac14 \| f_\alpha \|_{L^2})
\| R \|_{g^{-\frac14}H^{\frac14}_h(\mathbb{R}) \cap \kappa^{-\frac14} H^{\frac34}_h(\mathbb{R})},
\end{equation}
which we approach in the same way as in the earlier proof of \eqref{algebra}.
In the paraproduct decomposition the terms $T_f R$ and $\Pi(f,R)$ are easy to estimate,
using only the $L^\infty$ bound for $f$. The term $T_R f$ is more interesting.
At fixed frequency $\lambda$ we estimate in $L^2$ the product
\[
f_\lambda R_{< \lambda}.
\]
We split into two cases:
a) $\lambda < \lambda_0 = (g/\kappa)^\frac12$. Here we write
\[
\| f_\lambda R_{< \lambda} \|_{L^2} \lesssim \| f_\lambda \|_{L^4}
\| R_{<\lambda}\|_{L^4} \lesssim (\kappa/g)^{-\frac18} (\| f\|_{L^\infty}
+ (\kappa/g)^\frac14 \| f_\alpha \|_{L^2})
\| R \|_{g^{-\frac14} H^{\frac14}_h(\mathbb{R})},
\]
which suffices.
b) $\lambda > \lambda_0 = (g/\kappa)^\frac12$. Then we estimate
\[
\| f_\lambda R_{< \lambda} \|_{L^2} \lesssim \| f_\lambda \|_{L^2}
\| R_{<\lambda}\|_{L^\infty} \lesssim \lambda^{-1} (\kappa/g)^{-\frac18} [ (\kappa/g)^\frac14
\| f_\alpha \|_{L^2}] \| R \|_{g^{-\frac14} H^{\frac14}_h(\mathbb{R})\cap \kappa^{-\frac14} H^{\frac34}_h(\mathbb{R})},
\]
which again suffices.
\end{proof}
\subsection{ Vertical strips in Eulerian vs holomorphic coordinates.}
In our main result, the local energy functionals are defined using vertical strips
in Eulerian coordinates. On the other hand, for the multilinear
analysis in our error estimates in the last section, it would be easier to use vertical
strips in holomorphic coordinates. To switch from one to
the other we need to estimate the horizontal drift between the two strips in depth.
As the conformal map is bi-Lipschitz, it suffices to compare the
centers of the two strips. It is more convenient to do this in the reverse order,
and compare the Eulerian image of the holomorphic vertical section with
the Eulerian vertical section. This analysis was carried out in \cite{AIT},
and we recall the result here:
\begin{proposition} \label{p:switch-strips}
Let $(x_0, \eta(x_0)) = Z(\alpha_0,0)$, respectively $(\alpha_0,0)$
be the coordinates of a point on the free surface in Eulerian,
respectively holomorphic coordinates.
Assume that \eqref{uniform} holds, and let $\{c_\lambda\}$
be the control frequency envelope in Definition~\ref{def-fe}.
Then we have the uniform bounds:
\begin{equation}
|\operatorname{Re} Z( \alpha_0, \beta) - x_0 + \beta \operatorname{Im} W_{\alpha}(\alpha_0,\beta)|
\lesssim c_\lambda, \qquad |\beta| \approx \lambda^{-1}.
\end{equation}
\end{proposition}
As a corollary, we see that the distance between the two
strip centers grows at most linearly:
\begin{corollary}
Under the same assumptions as in the above proposition we have
\begin{equation}
|\operatorname{Re} Z( \alpha_0, \beta) - x_0| \lesssim \varepsilon_0 |\beta|.
\end{equation}
\end{corollary}
\subsection{ The horizontal gauge invariance}
Here we briefly discuss the gauge freedom due to the fact that $\operatorname{Re} F$
is a-priori only uniquely determined up to constants. In the infinite
depth case this gauge freedom is removed by making the assumption $F
\in L^2$. In the finite depth case (see \cite{H-GIT}) it is instead
removed, somewhat arbitrarily, by setting $F(\alpha = -\infty) = 0$.
In the present paper no choice is necessary for our main result,
as well as for most of the proof. However, such a choice was made
for convenience in \cite{AIT}, whose results we also apply here.
Thus we briefly recall it.
Assume first that we have a finite depth.
We start with a point $x_0 \in \mathbb R$ where our local energy estimate
is centered. Then we resolve the gauge invariance
with respect to horizontal translations by setting $\alpha(x_0) = x_0$,
which corresponds to setting $ \operatorname{Re} W(x_0)= 0$.
In dynamical terms, this implies that the real part of $F$ is
uniquely determined by
\[
0 = \operatorname{Re} W_t(x_0) = \operatorname{Re} ( F(1+W_\alpha))(x_0),
\]
which yields
\[
\operatorname{Re} F(x_0) = \operatorname{Im} F(x_0) \frac{\operatorname{Im} W_\alpha(x_0)}{1+ \operatorname{Re} W_\alpha(x_0)}.
\]
\]
In the infinite depth case, the canonical choice for $F$ is the one
vanishing at infinity. This corresponds to a moving location in the
$\alpha$ variable. We can still rectify this following the finite
depth model, at the expense of introducing a constant component in
both $\operatorname{Re} W$ and in $F$. We will follow this convention in the paper,
in order to ensure that our infinite depth computation is an exact
limit of the finite depth case.
\section{Local energy bounds in holomorphic coordinates}
\label{s:switch}
\subsection{Notations}
We begin by transferring the
local energy bounds to the holomorphic setting. Recall that
in the Eulerian setting, they are equivalently defined as
\[
\|(\eta,\psi)\|_{LE^\kappa} := g^\frac12 \|\eta\|_{LE^0} + \kappa^\frac12 \|\eta_x\|_{LE^0}
+ \| \nabla \phi\|_{LE^{-\frac12}},
\]
where
\[
\|\eta\|_{LE^0} := \sup_{x_0 \in \mathbb R} \| \eta\|_{L^2(S(x_0))},
\qquad
\| \nabla \phi\|_{LE^{-\frac12}} := \sup_{x_0 \in \mathbb R} \| \nabla \phi\|_{L^2(\mathbf S(x_0))}.
\]
Here $S(x_0)$, respectively $\mathbf S(x_0)$ represent the Eulerian strips
\[
S(x_0) := [0,T] \times [x_0-1,x_0+1], \qquad
\mathbf S(x_0) := S(x_0)\times [-h,0].
\]
In holomorphic coordinates the
functions $\eta$ and $\nabla \phi$ are given by $\operatorname{Im} W$ and $R$.
Thus we seek to replace the above local energy norm with
\[
\|(W,R)\|_{LE^\kappa} := g^\frac12 \|\operatorname{Im} W\|_{LE^0} + \kappa^\frac12 \|\operatorname{Im} W_\alpha\|_{LE^0} + \| R \|_{LE^{-\frac12}},
\]
with
with
\[
\|\operatorname{Im} W\|_{LE^0} := \sup_{x_0 \in \mathbb R} \| \operatorname{Im} W \|_{L^2(S_h(x_0))},
\qquad
\| R \|_{LE^{-\frac12}} :=
\sup_{x_0 \in \mathbb R} \| R \|_{L^2(\mathbf S_h(x_0))}.
\]
Here $S_h(x_0)$ and $\mathbf S_h(x_0)$ represent the holomorphic strips given by
\[
S_h(x_0) := \{(t,\alpha): t \in [0,T], \ \alpha \in [\alpha_0-1,\alpha_0+1]\}, \qquad
\mathbf S_h(x_0) := S_h(x_0) \times [-h,0],
\]
where $\alpha_0 = \alpha_0(t,x_0)$ represents the holomorphic
coordinate of $x_0$, which in general will depend on $t$.
We call attention to the fact that,
while the strips $S_h(x_0)$ on the top roughly correspond
to the image of $S(x_0)$ in holomorphic coordinates,
this is not the case for the strips $\mathbf S_h(x_0)$
relative to $\mathbf S(x_0)$. In depth, there may be a horizontal drift,
which is estimated by means of Proposition~\ref{p:switch-strips}.
We can now state the following equivalence.
\begin{proposition}\label{p:equiv}
Assuming the uniform bound \eqref{uniform}, we have the equivalence:
\begin{equation}\label{equiv}
\|(\eta,\psi)\|_{LE^\kappa} \approx \|(W,R)\|_{LE^\kappa}.
\end{equation}
\end{proposition}
\begin{proof}
Here the correspondence between the $LE^0$ norms of $\eta$ and $\operatorname{Im} W$
is straightforward due to the bi-Lipschitz property of the conformal map.
However, the correspondence between the $LE^{-\frac12}$ norms of $\nabla \phi$ and $R$
is less obvious, and was studied in detail in \cite{AIT}.
Moving on to the $LE^0$ norms of $\eta_x$ and $\operatorname{Im} W_\alpha$, we have
\[
\eta_x = J^{-\frac12} \operatorname{Im} W_\alpha.
\]
Since $J= 1+O(\varepsilon)$ and the correspondence between
the two sets of coordinates is bi-Lipschitz, it immediately follows that
$\| \eta_x \|_{LE^0} \approx \| \operatorname{Im} W_\alpha \|_{LE^0}$.
\end{proof}
One difference between the norms for $\operatorname{Im} W$ and for $R$ is that they
are expressed in terms of the size of the function on the top,
respectively in depth. For the purpose of multilinear estimates later
on we will need access to both types of norms.
Since the local energy norms are defined using the unit spatial
scale, in order to describe the behavior of functions in these spaces
we will differentiate between high frequencies and low frequencies. We
begin with functions on the top:
a) \textbf{ High frequency characterization on top.} Here we will use local norms
on the top, for which we will use the abbreviated notation
\[
\| u \|_{L^2_t H^s_{loc}} := \sup_{x_0 \in \mathbb R } \| u\|_{L^2_t H^s_{\alpha}([\alpha_0-1, \alpha_0+1])},
\]
where again $\alpha_0 = \alpha_0(x_0,t)$.
b) \textbf{ Low frequency characterization on top.} Here we will use local norms
on the top to describe the frequency $\lambda$
or $\leq \lambda$ part of functions, where $\lambda < 1$ is a dyadic frequency. By the
uncertainty principle such bounds should be uniform on the $\lambda^{-1}$
spatial scale. Then it is natural to use the following norms:
\[
\| u \|_{L^2_t L^\infty_{loc}(B_\lambda)} := \sup_{x_0 \in \mathbb R } \| u\|_{L^2_t L^\infty_{\alpha}(B_\lambda(x_0))},
\]
where
\[
B_\lambda(x_0) := \{ \alpha \in \mathbb R: \ |\alpha-\alpha_0| \lesssim \lambda^{-1} \}.
\]
We remark that the local norms in $a)$ correspond
exactly to the $B_{\lambda}(x_0)$ norms with $\lambda =1$.
Next we consider functions in the strip which are harmonic extensions
of functions on the top. To measure them we will use function spaces as follows:
a1) \textbf{ High frequency characterization in strip.}
Here we will use local norms on regions with depth at most $1$,
for which we will use the abbreviated notation
\[
\| u \|_{L^2_t X_{loc}(A_1)} := \sup_{x_0 \in \mathbb R } \| u\|_{L^2_t X(A_1(x_0))},
\]
where $X$ will represent various Sobolev norms and
\[
A_1(x_0) := \{(\alpha,\beta): |\beta| \lesssim 1,\ |\alpha-\alpha_0| \lesssim 1 \}.
\]
b1) \textbf{ Low frequency characterization in strip.} Here a
frequency $\lambda < 1$ is associated
with depths $|\beta| \approx \lambda^{-1}$.
Thus, we define the regions
\[
A_{\lambda}(x_0) = \{ (\alpha,\beta): |\beta| \approx \lambda^{-1},\ |\alpha-\alpha_0|
\lesssim \lambda^{-1} \}, \qquad \lambda < 1,
\]
as well as
\[
\begin{aligned}
&\mathbf{B}_1(x_0) := \{ (\alpha,\beta); \ |\alpha - \alpha_0| \leq 1, \ \beta \in [-1,0] \} ,\\
&\mathbf{B}_\lambda (x_0)
:= \{ (\alpha,\beta); \ |\alpha - \alpha_0| \leq \lambda^{-1}, \ \beta \in [-\lambda^{-1},0] \},
\mbox{ for } \lambda < 1.
\end{aligned}
\]
In these regions we use the uniform norms,
\[
\| u \|_{L^2_t L^\infty_{loc}(A_\lambda)} := \sup_{x_0 \in \mathbb R }
\| u\|_{L^2_t L^\infty_{\alpha,\beta}(A_\lambda(x_0))},
\]
and similarly for $\mathbf{B}_1$ and $\mathbf{B}_\lambda$.
\subsection{Multipliers and Bernstein's inequality in uniform norms}
Here we recall the results of \cite{AIT} describing how multipliers
act on the uniform spaces defined above. We will work
with a multiplier $M_{\lambda}(D)$ associated to a dyadic frequency $\lambda$.
In order to be able to use the bounds in several circumstances,
we make a weak assumption on their (Lipschitz)
symbols $m_{\lambda}(\xi)$:
\begin{equation}
\label{m-symbol}
\begin{aligned}
|m_{\lambda}(\xi)| \lesssim \ (1+\lambda^{-1}|\xi|)^{-3}, \
\mbox{ and }\
| \partial_\xi^{k+1} m_{\lambda}(\xi) | \lesssim c_k \ |\xi|^{-k}(1+\lambda^{-1}|\xi|)^{-4}.
\end{aligned}
\end{equation}
Examples of such symbols include
\begin{itemize}
\item Littlewood-Paley localization operators $P_{\lambda}$, $P_{\leq \lambda}$.
\item The multipliers $p_D(\beta, D)$ and $p_N(\beta, D)$ in subsection~\ref{ss:laplace}
with $|\beta| \approx \lambda^{-1}$.
\end{itemize}
We will separately consider high frequencies, where
we work with the spaces $L^2_t L^p_{loc}$,
and low frequencies, where we work with the spaces
$L^2_t L^p_{loc}(B_\lambda)$ associated with a dyadic
frequency $1/h \leq \lambda \leq 1$.
\textbf{A. High frequencies.}
Here we consider a dyadic high frequency $\lambda \geq 1$,
and seek to understand how multipliers $M_{\lambda}(D)$
associated to frequency $\lambda$ act on the spaces $L^2_t L^p_{loc}$.
\begin{lemma}\label{l:bern-loc-hi}
Let $\lambda \geq 1$ and $1 \leq p \leq q \leq \infty$. Then
\begin{equation}
\| M_{\lambda}(D) \|_{L^2_t L^p_{loc} \to
L^2_t L^q_{loc}} \lesssim \lambda^{\frac1p-\frac1q}.
\end{equation}
\end{lemma}
\textbf{B. Low frequencies.}
Here we consider two dyadic low frequencies $1/h \leq \lambda_1, \lambda_2 \leq 1$,
and seek to understand how multipliers $M_{\lambda_2}(D)$
associated to frequency $\lambda_2$
act on the spaces $L^2_t L^p_{loc}(B_{\lambda_1})$.
For such multipliers we have:
\begin{lemma}\label{l:bern-loc}
Let $1/h \leq \lambda_1,\lambda_2 \leq 1$ and $1 \leq p \leq q \leq \infty$.
a) Assume that $\lambda_1 \leq \lambda_2$. Then
\begin{equation}
\| M_{\lambda_2}(D) \|_{L^2_t L^p_{loc}(B_{\lambda_1}) \to
L^2_t L^q_{loc}(B_{\lambda_1})} \lesssim \lambda_2^{\frac1p-\frac1q}.
\end{equation}
b) Assume that $ \lambda_2 \leq \lambda_1$. Then
\begin{equation}
\| M_{\lambda_2}(D) \|_{L^2_t L^p_{loc}(B_{\lambda_1}) \to
L^2_t L^q_{loc}(B_{\lambda_2})} \lesssim \lambda_1^{\frac1p} \lambda_2^{-\frac1q}.
\end{equation}
\end{lemma}
We remark that part (a) is nothing but the classical Bernstein
inequality in disguise, as the multiplier $M_{\lambda_2}$ does not mix
$\lambda_1^{-1}$ intervals. Part (b) is the more interesting one,
where the $\lambda_1^{-1}$ intervals are mixed.
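For instance, taking $p = 2$, $q = \infty$ and $\lambda_1 = 1$ in part (b), any multiplier $M_{\lambda_2}(D)$ as above satisfies
\[
\| M_{\lambda_2}(D) u \|_{L^2_t L^\infty_{loc}(B_{\lambda_2})} \lesssim \| u \|_{L^2_t L^2_{loc}},
\]
which is the pattern behind the low frequency bounds \eqref{W-LE-top} and \eqref{Wa-LE-top} below.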
\subsection{Bounds for \texorpdfstring{$\eta = \operatorname{Im} W$}{} and for
\texorpdfstring{$\eta_x = J^{-\frac12} \operatorname{Im} W_\alpha$.}{}}
Here we have the straightforward equivalence
\begin{equation}\label{W-eta-le}
\| \eta \|_{LE^0} \approx \| \operatorname{Im} W\|_{LE^0},
\qquad \| \eta_x \|_{LE^0} \approx \| \operatorname{Im} W_\alpha\|_{LE^0},
\end{equation}
as $\eta$ and $\operatorname{Im} W$ are one and the same function up to a bi-Lipschitz
change of coordinates. We begin with
a bound from \cite{AIT} for the low frequencies of $\operatorname{Im} W$ on the top:
\begin{lemma}\label{l:W-LE-top}
For each dyadic frequency $1/h \leq \lambda < 1$ we have
\begin{equation}\label{W-LE-top}
\|\operatorname{Im} W_{\leq \lambda} \|_{L^2_t L^\infty_{loc}(B_\lambda)} \lesssim \| \operatorname{Im} W\|_{LE^0}.
\end{equation}
\end{lemma}
Here one may also replace $\operatorname{Im} W$ by $W_\alpha$,
\begin{equation}\label{Wa-LE-top}
\|W_{\alpha,\leq \lambda} \|_{L^2_t L^\infty_{loc}(B_\lambda)} \lesssim \|W_\alpha\|_{LE^0}.
\end{equation}
Since
\[
\|W_\alpha\|_{LE^0} \lesssim \| \operatorname{Im} W_\alpha\|_{LE^0} + h^{-1} \| \operatorname{Im} W\|_{LE^0},
\]
we can also estimate the same expression in terms of $\operatorname{Im} W$,
\begin{equation}\label{Wa-LE-top+}
\|W_{\alpha,\leq \lambda} \|_{L^2_t L^\infty_{loc}(B_\lambda)} \lesssim \lambda \|\operatorname{Im} W\|_{LE^0}.
\end{equation}
On the other hand, for nonlinear estimates, we also need bounds in depth,
precisely over the regions $A_{\lambda}(x_0)$. There, by \cite{AIT}, we have
\begin{lemma}\label{l:theta-LE-loc}
For each dyadic frequency $\lambda < 1$ we have
\begin{equation}\label{theta-LE-loc}
\|\operatorname{Im} W \|_{L^2_t L^\infty_{loc}(A_\lambda)}
+\lambda^{-1}\|W_{\alpha} \|_{L^2_t L^\infty_{loc}(A_\lambda)} \lesssim \| \operatorname{Im} W\|_{LE^0}.
\end{equation}
\end{lemma}
We will also need a mild high frequency bound:
\begin{lemma}\label{l:theta-LE-hi}
The following estimate holds:
\begin{equation}\label{theta-LE-hi}
\|\operatorname{Im} W \|_{L^2_t L^2_{loc}(A_1)} + \|\beta W_{\alpha} \|_{L^2_t L^\infty_{loc}(A_1)}
\lesssim \| \operatorname{Im} W\|_{LE^0}.
\end{equation}
\end{lemma}
\begin{proof}
It suffices to prove a fixed $\beta$ bound,
\[
\|\operatorname{Im} W(\beta) \|_{L^2_t L^2_{loc}(B_1)}
+ \|\beta W_{\alpha}(\beta) \|_{L^2_t L^\infty_{loc}(B_1)} \lesssim \| \operatorname{Im} W\|_{LE^0}, \qquad \beta \in (0,1).
\]
But both $\operatorname{Im} W(\beta)$ and $\beta W_{\alpha}(\beta)$ are defined in terms
of $\operatorname{Im} W$ via zero order multipliers localized at frequency $\leq \beta^{-1}$.
Hence the above estimate easily follows.
\end{proof}
\subsection{ Estimates for \texorpdfstring{$F(W_\alpha)$}{}}
Here we consider a function $F$ which is holomorphic in a
neighbourhood of $0$, with $F(0) = 0$ and prove
local energy bounds for the auxiliary holomorphic function $F(W_\alpha)$.
\begin{lemma}[ Holomorphic Moser estimate]
\label{l:Y}
Assume that $\|W_\alpha\|_{ L^\infty} \ll 1$. Then
a) For $\lambda > 1$ we have
\begin{equation}\label{Yhi}
\| F(W_\alpha)_\lambda \|_{L^2_t L^2_{loc}} \lesssim \lambda \|W\|_{L_t^2 L^2_{loc}}.
\end{equation}
b) For $\lambda \leq 1$ we have
\begin{equation}\label{Ylo}
\| F(W_\alpha)_\lambda \|_{L^2_t L_{\alpha}^\infty(B_\lambda(x_0))} \lesssim \lambda\|W\|_{L_t^2 L^2_{loc}}.
\end{equation}
\end{lemma}
In both cases, one should think of the implicit
constant as depending on $\|W_\alpha\|_{L^\infty}$.
We note that both estimates follow directly from
Lemma~\ref{l:theta-LE-loc} if $F(z) = z$. However
to switch to an arbitrary $F$ one would seem
to need some Moser type inequalities,
which unfortunately do not work in negative Sobolev spaces.
The key observation is that in both of these
estimates it is critical that $W_\alpha$ is holomorphic, and $F(W_\alpha)$ is an analytic function
of $W_\alpha$. This lemma was proved in \cite{AIT} for the expression
\[
Y = \frac{W_\alpha}{1+W_\alpha},
\]
but the proof is identical in the more general case considered here.
\section{ The error estimates}
The aim of this section is to use holomorphic coordinates in order to prove
Lemmas~\ref{l:fixed-t},~\ref{l:kappa-diff},~\ref{l:kappa-3}, which for convenience we recall below.
\begin{lemma}\label{l:fixed-t-re}
The following fixed time estimate holds:
\begin{equation} \label{fixed-t-re}
\left | \iint_{\Omega(t)} m_x(x-x_0) \theta q_{x}\, dy dx \right | \lesssim
\| \eta\|_{g^{-\frac{1}{4}}H^{\frac14}_h \cap \kappa^{-\frac{1}{4}}H^{\frac34}_h} \| \psi_x\|_{ g^{\frac{1}{4}}H^{-\frac14}_h+ \kappa^{\frac{1}{4}}H^{-\frac34}_h}.
\end{equation}
\end{lemma}
\begin{lemma}\label{l:kappa-diff-re}
The integral $I^\kappa_2$ is estimated by
\begin{equation}
| I^\kappa_{2}| \lesssim h^{-2} \| \eta \|_{LE^0}^2 .
\end{equation}
\end{lemma}
\begin{lemma}\label{l:kappa-3-re}
The expression $I_{3}^{\kappa}$ is estimated by
\begin{equation}
| I^{\kappa}_{3} | \lesssim \varepsilon \Big( \| \eta_x \|_{LE^{0}}^2 + \frac{g}{\kappa} \| \eta \|_{LE^{\kappa}}^2 \Big).
\end{equation}
\end{lemma}
We begin by expressing the quantities in these lemmas using holomorphic coordinates.
We first recall that
\[
\theta = \operatorname{Im} W, \qquad q_x = \operatorname{Im} R,
\]
and by the chain rule we have
\[
\theta_x = \operatorname{Im} \left(\frac{W_\alpha}{1+W_\alpha}\right), \qquad \theta_y =
\operatorname{Re} \left(\frac{W_\alpha}{1+W_\alpha}\right).
\]
A second use of chain rule yields
\begin{equation}
\label{second}
\theta_{xx} = \operatorname{Im} \left[ \frac{1}{1+W_\alpha}
\partial_\alpha \left(\frac{W_\alpha}{1+W_\alpha}\right) \right]
= -\frac12 \partial_\alpha \operatorname{Im} \frac{1}{(1+W_\alpha)^2}.
\end{equation}
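The last equality can be checked directly: since
\[
\partial_\alpha \left(\frac{W_\alpha}{1+W_\alpha}\right) = \frac{W_{\alpha\alpha}}{(1+W_\alpha)^2},
\qquad
-\frac12 \partial_\alpha \frac{1}{(1+W_\alpha)^2} = \frac{W_{\alpha\alpha}}{(1+W_\alpha)^3},
\]
both sides equal $\operatorname{Im} \left[ W_{\alpha\alpha}(1+W_\alpha)^{-3} \right]$.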
Finally, $H(\eta)$ is expressed as
\[
H (\eta) = - \frac{i}{1+ W_\alpha} \partial_\alpha (J^{-\frac12}(1+ W_\alpha)).
\]
We recall that the local energy norms are easily transferred,
see Proposition~\ref{p:equiv}:
\[
\| \eta \|_{LE^{\kappa}} \approx \| \operatorname{Im} W \|_{LE^{\kappa}},
\qquad \| \eta_x \|_{LE^{\kappa}} \approx \| \operatorname{Im} W_\alpha \|_{LE^{\kappa}},
\]
while, by Theorem~\ref{t:equiv}, for the uniform bound we have
\begin{equation}\label{l:transfer-hf}
\| \operatorname{Im} W\|_{X^{\kappa}} \lesssim \| \eta \|_{X^{\kappa}}.
\end{equation}
The last item we need to take into account in switching coordinates is that
the image of the vertical strip in Eulerian coordinates is still
a strip $S_{hol}(t)$ in the holomorphic setting with $O(1)$ horizontal size,
but centered around $\alpha_0(t,\beta)$, where
\[
|\alpha_0(t,\beta) - \alpha_0(t,0) | \lesssim \varepsilon |\beta|.
\]
This is from Proposition~\ref{p:switch-strips}.
\subsection{ Proof of Lemma~\ref{l:fixed-t-re}}
We begin by rewriting our integral in holomorphic coordinates,
\[
I_{t,x_0} = \iint_S J m_x(x-x_0) \operatorname{Im} W \operatorname{Im} R \, d\alpha d\beta.
\]
In view of the norm equivalence in Lemma~\ref{l:equiv-M},
for this integral we need to prove the bound
\begin{equation} \label{fixed-t-hol}
| I_{t,x_0} | \lesssim \| \operatorname{Im} W \|_{g^{-\frac{1}{4}}H^{\frac14}_h \cap \kappa^{-\frac{1}{4}}H^{\frac34}_h} \| \operatorname{Im} R \|_{ g^{\frac{1}{4}}H^{-\frac14}_h+ \kappa^{\frac{1}{4}}H^{-\frac34}_h},
\end{equation}
where the norms on the right are taken on the top.
Here we recall that $m_x$ is a bounded,
Lipschitz bump function with support
in the strip $S_{hol}(t)$. This is all we will use concerning $m_x$.
The strip $S_{hol}(t)$ is contained in the dyadic union
\[
S_{hol}(t) \subset A_1(x_0) \cup \bigcup_{ h^{-1} < \lambda < 1} A_\lambda(x_0).
\]
Correspondingly we split the integral as
\[
I_{t,x_0} = I_1 + \sum_{ h^{-1} < \lambda < 1} I_\lambda.
\]
For $I_\lambda$ we directly estimate
\[
|I_\lambda| \lesssim \lambda^{-1} \| \operatorname{Im} W\|_{L^\infty(A_\lambda(x_0))}
\| \operatorname{Im} R\|_{L^\infty(A_\lambda(x_0))}.
\]
\]
For the pointwise bounds we recall that
$\operatorname{Im} W(\beta) = P_N(\beta,D) \operatorname{Im} W $, and similarly for $\operatorname{Im} R$,
where, for $\beta \approx \lambda^{-1}$,
the multiplier $P_N(\beta,D)$ selects the frequencies
$\leq \lambda$. Hence, harmlessly allowing
rapidly decaying tails in our Littlewood-Paley truncations,
we obtain using Bernstein's inequality
\[
\begin{split}
\sum_{\lambda < 1} |I_\lambda| \lesssim & \ \sum_{\lambda < 1} \lambda^{-1}
\| \operatorname{Im} W_{\leq \lambda}\|_{L^\infty} \| \operatorname{Im} R_{\leq \lambda}\|_{L^\infty}
\\
\lesssim & \ \sum_{\lambda < 1}
\lambda^{-1} \sum_{\mu < \lambda} \mu^\frac12 \| \operatorname{Im} W_{\mu}\|_{L^2}
\sum_{\nu < \lambda} \nu^\frac12 \| \operatorname{Im} R_{\nu}\|_{L^2}
\\
\lesssim & \ \sum_{\mu,\nu < 1}
\min\left\{ (\mu/\nu)^\frac12, (\nu/\mu)^\frac12 \right\}
\| \operatorname{Im} W_{\mu}\|_{L^2} \| \operatorname{Im} R_{\nu}\|_{L^2}
\\
\lesssim & \ \| \operatorname{Im} W_{<1}\|_{ H_h^\frac14}
\| \operatorname{Im} R_{\leq 1}\|_{H_h^{-\frac14}}
\\
\lesssim &
\| \operatorname{Im} W \|_{g^{-\frac{1}{4}}H^{\frac14}_h
\cap \kappa^{-\frac{1}{4}}H^{\frac34}_h}
\| \operatorname{Im} R \|_{ g^{\frac{1}{4}}H^{-\frac14}_h+ \kappa^{\frac{1}{4}}H^{-\frac34}_h},
\end{split}
\]
where the last step accounts
for the rapidly decaying tails in the frequency localizations.
It remains to consider $I_1$, for which
it suffices to estimate at fixed $\beta \in [0,1]$
(the norms for $\operatorname{Im} W$ and $\operatorname{Im} R$ at depth $\beta$ are easily estimated by the similar norms on the top):
\[
\begin{split}
\left|\int_\mathbb R J m_x \operatorname{Im} W \operatorname{Im} R \, d\alpha \right| \lesssim
\|J m_x \operatorname{Im} W \|_{g^{-\frac{1}{4}}H^{\frac14}_h \cap \kappa^{-\frac{1}{4}}H^{\frac34}_h} \| \operatorname{Im} R \|_{ g^{\frac{1}{4}}H^{-\frac14}_h+ \kappa^{\frac{1}{4}}H^{-\frac34}_h},
\end{split}
\]
where for the first factor we further estimate
as in the proof of \eqref{algebra},
\[
\begin{split}
\|J m_x \operatorname{Im} W \|_{g^{-\frac{1}{4}}H^{\frac14}_h \cap \kappa^{-\frac{1}{4}}H^{\frac34}_h} \lesssim & \ \| m_x\|_{W^{1,1}} \|J \operatorname{Im} W \|_{g^{-\frac{1}{4}}H^{\frac14}_h \cap \kappa^{-\frac{1}{4}}H^{\frac34}_h}
\\ \lesssim & \ (\|J \|_{L^\infty} + (\kappa/g)^\frac14 \| J_\alpha\|_{L^2}) \|\operatorname{Im} W \|_{g^{-\frac{1}{4}}H^{\frac14}_h \cap \kappa^{-\frac{1}{4}}H^{\frac34}_h},
\end{split}
\]
and for the $L^2$ norm of $J_\alpha$ we use our a-priori $X^\kappa$ bound
given by Theorem~\ref{t:equiv} to get
\[
(\kappa/g)^\frac14 \| J_\alpha\|_{L^2} \lesssim \varepsilon.
\]
\subsection{ Proof of Lemma~\ref{l:kappa-diff-re}}
Taking into account the above properties, we bound the integral $I_2^{\kappa}$ by
\[
|I^\kappa_2| \lesssim \int_0^T \iint_{S_{hol}(t)} |\theta_y| |H_{N}(\theta_{xx}) - \theta_{xx}|\, d\alpha d\beta
dt .
\]
As in \cite{AIT}, we split the integration
region vertically into dyadic pieces, which are contained
in the regions $A_1(x_0)$, respectively $A_{\left\vertmbda}(x_0)$
with $h^{-1} < \left\vertmbda < 1$ dyadic, and all of which
are contained in $\mathbf{B}_{1/h}(x_0)$. We also take
advantage of the fact that the second factor is smooth on the $h$ scale and
vanishes on the top in order to insert
a $\beta$ factor. Then we estimate
\[
\begin{split}
|I^\kappa_2| \lesssim & \ \int_0^T \iint_{A_1(x_0)}
|\beta \theta_y| d\alpha d \beta \sup_{A_1(x_0)}
|\beta^{-1} (H_{N}(\theta_{xx}) - \theta_{xx})| \\
& \hspace*{1cm} + \sum_{h^{-1} < \lambda < 1} \lambda^{-1} \sup_{A_\lambda(x_0)}
|\beta \theta_y| \sup_{A_\lambda(x_0)}
|\beta^{-1} (H_{N}(\theta_{xx}) - \theta_{xx})|\, dt
\\
\lesssim & \left(\! \|\beta \theta_y\|_{L^2_t L^\infty(A_1(x_0))} +\! \sum_{h^{-1} < \lambda < 1} \lambda^{-1}
\|\beta \theta_y\|_{L^2_t L^\infty(A_\lambda(x_0))}\! \right)\! \| \beta^{-1} (H_{N}(\theta_{xx}) - \theta_{xx}) \|_{L^2_t L^\infty( \mathbf{B}_{1/h}(x_0)) }.
\end{split}
\]
Then it suffices to prove the following bounds:
\begin{equation}\label{est-thetab1}
\| \beta \theta_y \|_{L^2_t L^2(A_1(x_0))} \lesssim \| \operatorname{Im} W\|_{LE^{0}},
\end{equation}
\begin{equation}\label{est-thetab}
\| \beta \theta_y \|_{L^2_t L^\infty(A_\lambda(x_0))} \lesssim \| \operatorname{Im} W\|_{LE^{0}},
\end{equation}
respectively
\begin{equation}\label{estDN}
\| \beta^{-1} (H_{N}(\theta_{xx}) - \theta_{xx}) \|_{L^2_t L^\infty( \mathbf{B}_{1/h}(x_0)) } \lesssim h^{-3} \| \operatorname{Im} W\|_{LE^{0}}.
\end{equation}
Given these three bounds, the conclusion of the Lemma easily follows. It remains to prove \eqref{est-thetab1},
\eqref{est-thetab}, respectively \eqref{estDN}.
{\bf Proof of \eqref{est-thetab1}, \eqref{est-thetab}:}
These bounds are direct consequences of Lemma~\ref{l:theta-LE-hi},
respectively Lemma~\ref{l:theta-LE-loc}.
{\bf Proof of \eqref{estDN}:} Here we are subtracting the Dirichlet
and Neumann extensions of a given function. This we already had to do in
\cite{AIT}, where the idea was that the only contributions come from
very low frequencies $\leq h^{-1}$,
\[
H_N(\theta_{xx}) - \theta_{xx} \approx P_{<1/h} \theta_{xx}.
\]
If instead of $\theta_{xx}$ we had its principal part $\operatorname{Im} W_{\alpha \alpha}$ then the argument would
be identical to \cite{AIT}, gaining two extra $1/h$ factors from the derivatives.
The challenge here is to show that we can bound the very low frequencies of $\theta_{xx}$
in a similar fashion. But given the expression \eqref{second} for $\theta_{xx}$,
this is also a direct consequence of Lemma~\ref{l:Y}, applied to the function
\[
F(W_\alpha) = \frac{1}{(1+W_\alpha)^2} -1.
\]
\subsection{ Proof of Lemma~\ref{l:kappa-3-re}}
As before we bound $I_3^{\kappa}$ as
\[
|I^\kappa_3| \lesssim \int_0^T \iint_{S_{hol}(t)} | \theta_y |
|H_N(H(\eta)- \theta_{xx})| \, d\alpha d\beta dt .
\]
We write on the top
\[
\begin{split}
H(\eta) - \theta_{xx} = &\ \operatorname{Im} \left[ \frac{1}{1+ W_\alpha} \partial_\alpha (J^{-\frac12}(1+ W_\alpha))
- \frac{1}{1+W_\alpha} \partial_\alpha \left(\frac{W_\alpha}{1+W_\alpha}\right) \right]
\\
= &\ \operatorname{Im} \left[ \frac{W_{\alpha \alpha}}{2}
\left(\frac{1}{(1+ W_\alpha)^\frac32 (1+\bar W_\alpha)^\frac12} - \frac{2}{(1+ W_\alpha)^3}\right) \right],
\end{split}
\]
where the linear part cancels and we are left with
a sum of expressions of the form
\[
\operatorname{Im} [ \partial_\alpha F_1(W_\alpha) G_1(\bar W_\alpha) ], \qquad \operatorname{Im} [ \partial_\alpha F_2(W_\alpha)],
\]
where the subscript indicates the minimum degree
of homogeneity. The first type of expression is the
worst, as it allows all dyadic frequency interactions,
whereas the second involves products of
holomorphic functions which preclude high-high to low
interactions. Since $F_1(W_\alpha)$ and $ G_1(\bar W_\alpha) $
have the same regularity as $W_\alpha$ and
$\bar W_\alpha$, to streamline the computation we simply replace
them by $W_\alpha$ and $\bar W_\alpha$. Hence we
end up having to bound trilinear expressions of the form
\begin{equation}\label{I3-main}
I = \int_0^T \iint_{S_{hol}(t)}|W_\alpha| |H_N( W_{\alpha \alpha} \bar W_\alpha)|\, d\alpha d\beta dt .
\end{equation}
To estimate this expression we use again a
dyadic decomposition with respect to depth.
We consider dyadic $\beta$ regions
$|\beta| \approx \lambda^{-1}$ associated to a frequency
$\lambda \in (0,h^{-1}]$. But here we need to separate
into three cases, depending on how $\lambda$ compares to $1$
and also to $\lambda_0 = \sqrt{g/\kappa} > 1$.
{\bf Case 1, $\lambda \geq \lambda_0$.}
The harmonic extension at depth $\lambda$ is a multiplier which
selects frequencies $\leq \lambda$, with exponentially decaying tails.
Hence we need to estimate an integral of the form
\begin{equation}\label{I3-high}
I_\lambda = \int_0^T \iint_S 1_{A_1} 1_{ |\beta| \approx \lambda^{-1}}
|W_{\alpha, <\lambda}| |P_{< \lambda} ( W_{\alpha \alpha} \bar W_\alpha)|\, d\alpha d\beta dt.
\end{equation}
Then we use two local energies for the $W_\alpha$ factors and one
a priori bound $W_{\alpha \alpha} \in \varepsilon (g/\kappa)^\frac14 L^2$
arising from the $X_1$ norm plus Bernstein's inequality to bound this by
\[
\begin{split}
I_\lambda \lesssim & \ \lambda^{-1} \| W_\alpha\|_{LE^0} \| P_{< \lambda}
( W_{\alpha \alpha} \bar W_\alpha)\|_{LE^0}
\\
\lesssim & \ \lambda^{-1} \| W_\alpha\|_{LE^0} \lambda^{\frac12}
\| W_{\alpha \alpha}\|_{L^2} \| W_\alpha\|_{LE^0}
\\
\lesssim & \ \varepsilon \lambda^{-1} \lambda^\frac12
(g/\kappa)^\frac14 \| W_\alpha\|_{LE^{0}}^2,
\end{split}
\]
where the dyadic $\lambda$ summation is trivial for $\lambda > \lambda_0$.
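Explicitly, since $\lambda_0 = \sqrt{g/\kappa}$ gives $\lambda_0^{-\frac12} (g/\kappa)^\frac14 = 1$, the dyadic sum is dominated by its first term,
\[
\sum_{\lambda \geq \lambda_0} \varepsilon \lambda^{-\frac12} (g/\kappa)^\frac14 \| W_\alpha\|_{LE^{0}}^2
\lesssim \varepsilon \lambda_0^{-\frac12} (g/\kappa)^\frac14 \| W_\alpha\|_{LE^{0}}^2 = \varepsilon \| W_\alpha\|_{LE^{0}}^2.
\]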
{\bf Case 2, $1 \leq \lambda < \lambda_0$.}
Here we still need to estimate an integral of the form \eqref{I3-high} but we balance norms differently.
Precisely, for the first $W_\alpha$ factor we use the local energy
norm for $\operatorname{Im} W$ instead, and otherwise
follow the same steps as before.
Then we bound the integral in \eqref{I3-high} by
\[
I_\lambda \lesssim \varepsilon \lambda^{-1} \lambda \lambda^\frac12
(g/\kappa)^\frac14
\|\operatorname{Im} W\|_{LE^{0}} \| W_\alpha\|_{LE^0}.
\]
Again the dyadic $\lambda$ summation for $\lambda < \lambda_0$
is straightforward, and we conclude by the Cauchy-Schwarz inequality.
{\bf Case 3, $\lambda < 1$}. Here the corresponding
part of the integral \eqref{I3-main} is localized in the intersection of the strip
$S^\lambda_h(x_0)$ with $A_\lambda$, and we estimate it using $L^\infty$ norms in $A_{\lambda}$,
by
\[
\begin{split}
I_\lambda = & \ \lambda^{-1} \int_0^T \sup_{A_\lambda(x_0)} |W_\alpha|
\sup_{A_\lambda(x_0)} |H_N( W_{\alpha \alpha} \bar W_\alpha)|\, dt
\\
\lesssim & \ \lambda^{-1} \| W_{\alpha}\|_{L^2_t L^\infty(A_\lambda)}
\| H_N ( W_{\alpha \alpha} \bar W_\alpha) \|_{L^2_t L^\infty(A_\lambda)}.
\end{split}
\]
For the first factor we use the local energy of $W$,
\[
\| W_{\alpha}\|_{L^2_t L^\infty(A_\lambda)} \lesssim \lambda \| \operatorname{Im} W\|_{LE^{0}}.
\]
For the bilinear factor we recall again that the harmonic extension
at depth $\beta \approx \lambda^{-1}$ is a multiplier selecting frequencies
$\leq \lambda$, see \eqref{PN}. Then we use the a priori $L^2$
bound for $W_{\alpha \alpha}$ and the local energy of $W_\alpha$,
and apply the Bernstein inequality in Lemma~\ref{l:bern-loc}:
\[
\begin{split}
\| P_{< \lambda} ( W_{\alpha \alpha} \bar W_\alpha) \|_{L^2_t L^\infty(B_\lambda)}
\lesssim & \ \lambda \| W_{\alpha \alpha} \bar W_\alpha \|_{L^2_t L^1_{loc}(B_\lambda)}
\lesssim \lambda \| W_{\alpha \alpha}\|_{L^\infty_t L^2} \| \bar W_\alpha \|_{L^2_t L^2_{loc}(B_\lambda)}
\\
\lesssim & \ \lambda \varepsilon (g/\kappa)^\frac14 \lambda^{-\frac12} \| W_\alpha\|_{LE^{0}}.
\end{split}
\]
Overall we obtain the same outcome as in Case 2.
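Indeed, combining the three bounds above we obtain
\[
I_\lambda \lesssim \lambda^{-1} \cdot \lambda \|\operatorname{Im} W\|_{LE^{0}} \cdot \varepsilon \lambda^{\frac12} (g/\kappa)^\frac14 \| W_\alpha\|_{LE^{0}}
= \varepsilon \lambda^{\frac12} (g/\kappa)^\frac14 \|\operatorname{Im} W\|_{LE^{0}} \| W_\alpha\|_{LE^{0}},
\]
which coincides with the Case 2 bound, so the dyadic summation and the conclusion are as before.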
\noindent\textbf{Thomas Alazard}\\
\noindent CNRS and CMLA, \'Ecole Normale Sup{\'e}rieure de Paris-Saclay, Cachan, France
\noindent\textbf{Mihaela Ifrim}\\
\noindent Department of Mathematics, University of
Wisconsin, Madison, USA
\noindent\textbf{Daniel Tataru}\\
\noindent Department of Mathematics, University of California, Berkeley, USA
\end{document} |
\begin{document}
\title{Values of certain $L$-series in positive characteristic\footnote{Keywords: $L$-functions in positive characteristic, Drinfeld modular forms, function fields of positive characteristic. AMS Classification 11F52, 14G25, 14L05.}}
\maketitle
\begin{small}
\noindent\textbf{Abstract.} We introduce a family of $L$-series specialising to both
$L$-series associated to certain Dirichlet characters over $\mathbb{F}_q[\theta]$ and to integral values of Carlitz-Goss zeta function associated to $\mathbb{F}_q[\theta]$. We prove, with the use of the theory of deformations of vectorial modular forms, a formula for their value at $1$, as well as some arithmetic properties of values at other positive integers.
\end{small}
\section{Introduction, results}
Let $q=p^e$ be a power of a
prime number $p$ with $e>0$ an integer, let $\mathbb{F}_q$ be the finite field with $q$ elements.
We consider, for an indeterminate $\theta$, the polynomial ring $A=\mathbb{F}_q[\theta]$ and its fraction field $K=\mathbb{F}_q(\theta)$. On $K$, we consider the absolute value
$|\cdot|$ defined by $|a|=q^{\deg_\theta a}$, $a$ being in $K$, so that
$|\theta| = q$. Let $K_\infty :=\mathbb{F}_q((1/\theta))$ be the
completion of $K$ for this absolute value, let $K_\infty^{\text{alg}}$ be an
algebraic closure of $K_\infty$, let $\mathbb{C}_\infty$ be the completion of
$K_\infty^{\text{alg}}$ for the unique extension of $|\cdot|$ to $K_\infty^{\text{alg}}$, and let $K^{\text{sep}}$ be the separable closure of $K$
in $K_\infty^{\text{alg}}$.
We consider an element $t$ of $\mathbb{C}_\infty$. We have the ``evaluation at $t$'' ring homomorphism
$$\chi_t:A\rightarrow \mathbb{F}_q[t]$$
defined by $\chi_t(a)=a(t)$. In other words, $\chi_t(a)$ is the polynomial obtained by substituting $\theta$ by $t$ in $a(\theta)$. For example,
$\chi_t(1)=1$ and $\chi_t(\theta)=t$. If we choose $t\in\mathbb{F}_q^{\text{alg}}$ then $\chi_t$ factors through a Dirichlet character modulo the ideal generated by the minimal polynomial of $t$
in $A$.
We can also consider $t$ as an indeterminate;
for $\alpha>0$ an integer, we then have a formal series
$$L(\chi_t,\alpha)=\sum_{a\in A^+}\chi_t(a)a^{-\alpha}=\prod_{\mathfrak{p}}(1-\chi_t(\mathfrak{p})\mathfrak{p}^{-\alpha})^{-1},$$
where $A^+$ denotes the set of monic polynomials of $A$ and where the Euler product runs over the monic irreducible
polynomials of $A$; the series turns out to be well defined in $K_\infty[[t]]$.
This formal series
converges for $\log_q|t|<\alpha$, $\log_q$ being the logarithm in base $q$. In this paper, we are
interested in the function $L(\chi_t,\alpha)$ of the variable $t\in\mathbb{C}_\infty$, for a fixed positive integer $\alpha$.
We give some relevant examples of values of these series.
If $t=\theta$ and $\alpha>1$, then $L(\chi_\theta,\alpha)$ converges to the value of Carlitz-Goss' zeta series at $\alpha-1$:
$$L(\chi_\theta,\alpha)=\zeta(\alpha-1)=\sum_{a\in A^+}a^{1-\alpha}.$$
If, on the other hand, we consider $t\in\mathbb{F}_q^{\text{alg}}$, then, for $\alpha>0$,
$L(\chi_t,\alpha)$ converges to the value at $\alpha$ of the $L$-series
associated to a Dirichlet character, see Goss' book \cite{Go2}.
We will need some classical functions related to Carlitz's module. First of all, its {\em exponential function} $e_{{{\text{Car}}}}$ defined, for all $\eta\in \mathbb{C}_\infty$, by the sum of the convergent series:
\begin{equation}\label{exponential}
e_{{\text{Car}}}(\eta)=\sum_{n\ge 0}\frac{\eta^{q^n}}{d_n},
\end{equation}
where $d_0:=1$ and $d_i:=[i][i-1]^q\cdots[1]^{q^{i-1}}$, with $[i]=\theta^{q^i}-\theta$ if $i>0$.
We choose once and for all a fundamental period $\widetilde{\pi}$ of $e_{{{\text{Car}}}}$.
It is possible to show that $\widetilde{\pi}$ is equal, up to the choice of a $(q-1)$-th root of $-\theta$,
to the (value of the) convergent infinite product:
$$\widetilde{\pi}:=\theta(-\theta)^{\frac{1}{q-1}}\prod_{i=1}^\infty(1-\theta^{1-q^i})^{-1}\in (-\theta)^{\frac{1}{q-1}}K_\infty.$$
Next, we need the following series of $K^{\text{sep}}[[t]]$:
\begin{equation}\label{scarlitzseries}
s_{{{\text{Car}}}}(t):=\sum_{i=0}^\infty e_{{\text{Car}}}\left(\frac{\widetilde{\pi}}{\theta^{i+1}}\right)t^i=\sum_{n=0}^\infty\frac{\widetilde{\pi}^{q^n}}{d_n(\theta^{q^n}-t)},
\end{equation} converging for $|t|<q$.
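The second expression for $s_{{\text{Car}}}$ follows from the first by expanding each term via (\ref{exponential}) and summing a geometric series in $t$:
\[
\sum_{i\geq0}t^i\sum_{n\geq0}\frac{\widetilde{\pi}^{q^n}}{d_n\theta^{(i+1)q^n}}
=\sum_{n\geq0}\frac{\widetilde{\pi}^{q^n}}{d_n\theta^{q^n}}\sum_{i\geq0}\left(\frac{t}{\theta^{q^n}}\right)^i
=\sum_{n\geq0}\frac{\widetilde{\pi}^{q^n}}{d_n(\theta^{q^n}-t)},
\]
the inner geometric series converging when $|t|<|\theta^{q^n}|=q^{q^n}$; the most restrictive constraint, at $n=0$, gives the domain $|t|<q$.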
We shall prove:
\begin{Theorem}\label{corollairezeta11}
The following identity holds:
$$L(\chi_t,1)=-\frac{\widetilde{\pi}}{(t-\theta) s_{{{\text{Car}}}}}.$$
\end{Theorem}
The function $s_{{\text{Car}}}$ is the {\em canonical rigid analytic trivialisation} of the so-called {\em Carlitz's motive}, a property leading to three important facts that we now recall. First of all, $s_{{\text{Car}}}$ generates the one-dimensional $\mathbb{F}_q(t)$-vector space of solutions of the $\tau$-difference equation
\begin{equation}\label{scarlitz}
\tau X =(t-\theta)X,
\end{equation} in the fraction field of the ring of formal power series in $t$ converging for
$|t|\leq 1$,
where $\tau:\mathbb{C}_\infty((t))\rightarrow\mathbb{C}_\infty((t))$ is the operator defined by $\tau\sum c_it^i=\sum c_i^qt^i$.
Second, $\boldsymbol{\Omega}:=(\tau s_{{\text{Car}}})^{-1}$ is entire holomorphic on $\mathbb{C}_\infty$ with the only zeros at
$t=\theta^q,\theta^{q^2},\ldots$ and
third, we have the limit $\lim_{t\rightarrow\theta}\boldsymbol{\Omega}(t)=-\frac{1}{\widetilde{\pi}}$.
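The third fact can be checked directly: $\boldsymbol{\Omega}^{-1}=\tau s_{{\text{Car}}}=(t-\theta)s_{{\text{Car}}}$, and in the series defining $s_{{\text{Car}}}$ only the term $n=0$ has a pole at $t=\theta$, so that
\[
\lim_{t\rightarrow\theta}\boldsymbol{\Omega}(t)^{-1}=\lim_{t\rightarrow\theta}\,(t-\theta)\,\frac{\widetilde{\pi}}{\theta-t}=-\widetilde{\pi}.
\]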
We refer to \cite{Bourbaki} for a description of the main properties of
the functions $s_{{\text{Car}}},\boldsymbol{\Omega}$, or to the papers \cite{An, ABP,Pa}, where $\boldsymbol{\Omega}$ was originally introduced and studied.
The function $\boldsymbol{\Omega}$ being entire, the identity $L(\chi_t,1)=-\widetilde{\pi}\boldsymbol{\Omega}(t)$ provides an entire analytic continuation of $L(\chi_t,1)$, beyond its domain of convergence, in the variable $t$, with the only zeros at the points $t=\theta^q,\theta^{q^2},\ldots$ Also, it is well known (see \cite{ABP})
that $\boldsymbol{\Omega}$ has the following infinite product expansion:
$$\boldsymbol{\Omega}(t)=-\theta^{-1}(-\theta)^{-\frac{1}{q-1}}\prod_{i=1}^\infty(1-t\theta^{-q^i})\in(-\theta)^{\frac{1}{q-1}}K_\infty[[t]],$$ so that the product $\widetilde{\pi}\boldsymbol{\Omega}$ no longer depends on a choice of $(-\theta)^{1/(q-1)}$. We deduce that the
function $L(\chi_t,1)$, apart from its Euler product, also has the following product expansion, converging this time for all $t\in\mathbb{C}_\infty$:
\begin{equation}\label{infiniteprod}
L(\chi_t,1)=\prod_{i=1}^\infty\frac{1-\frac{t}{\theta^{q^i}}}{1-\frac{\theta}{\theta^{q^i}}}=\prod_{i=1}^\infty\left(1-\frac{t-\theta}{[i]}\right).\end{equation}
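The second equality in (\ref{infiniteprod}) is the elementary rearrangement
\[
\frac{1-t\theta^{-q^i}}{1-\theta\cdot\theta^{-q^i}}=\frac{\theta^{q^i}-t}{\theta^{q^i}-\theta}=1-\frac{t-\theta}{[i]}.
\]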
It follows, from the above product expansion, that
\begin{equation}\label{limit1}
\lim_{t\to\theta}L(\chi_t,1)=1.\end{equation} This value coincides
with $\zeta(0)$. Moreover, we obtain from (\ref{infiniteprod}) that $\lim_{t\to\theta^{q^k}}L(\chi_t,1)=0$ for
$k>0$. This corresponds to the value at $1-q^k$ of $\zeta$. Indeed, we know more generally that $\zeta(s(q-1))=0$ for negative values of $s$. We see here a sign of the existence of functional equations for $\zeta$; see Goss' paper \cite{Go25} on ``zeta phenomenology''. Indeed, in Theorem \ref{corollairezeta11}, the function $\tau s_{{\text{Car}}}$ plays a role analogous to that of Euler's gamma function in the classical functional equation of Riemann's zeta function; its poles provide the trivial zeroes of
Carlitz-Goss' $\zeta$ at the points $1-q^k$, $k>0$ and this is the first known interplay between certain
values of $\zeta$ at positive and negative multiples of $q-1$ (\footnote{This function was used in an essential way
in \cite{ABP} in the study of the algebraic relations between values of the ``geometric" gamma function
of Thakur.}).
The above observations also agree with the results of Goss in \cite{Go3}, where he considers, for $\beta$ an integer,
an analytic continuation of the function $\alpha\mapsto L(\chi_t^\beta,\alpha)$
with
$$L(\chi_t^\beta,\alpha)=\sum_{a\in A^+}\chi_t(a)^\beta a^{-\alpha}$$ to the space $\mathbb{S}_\infty=\mathbb{C}_\infty^\times\times \mathbb{Z}_p$ via the map $\alpha\mapsto((1/\theta)^{-\alpha},\alpha)\in\mathbb{S}_\infty$.
It is interesting to notice that Theorem \ref{corollairezeta11} directly implies the classical formulas for the values $$\zeta(q^k-1)=\sum_{a\in A^+}a^{1-q^k}=(-1)^k\frac{\widetilde{\pi}^{q^k-1}}{[k][k-1]\cdots[1]}$$ of Carlitz-Goss' zeta value at $q^k-1$ for $k>0$. This follows easily from the computation of the limit $t\rightarrow\theta$ in the
formula $\tau^kL(\chi_t,1)=-\widetilde{\pi}^{q^k}/(\tau^k(t-\theta)s_{{{\text{Car}}}})$, observing that
\begin{equation}\label{formula11}
\tau^k((t-\theta) s_{{\text{Car}}})=(t-\theta^{q^k})\cdots(t-\theta^q)(t-\theta) s_{{\text{Car}}}\end{equation} by (\ref{scarlitz}).
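In detail: $\tau^k$ applied to $L(\chi_t,1)$ yields $\sum_{a\in A^+}\chi_t(a)a^{-q^k}$, whose value at $t=\theta$ is $\zeta(q^k-1)$, while on the other side, using $\lim_{t\rightarrow\theta}(t-\theta)s_{{\text{Car}}}=-\widetilde{\pi}$ and $\theta-\theta^{q^i}=-[i]$,
\[
\lim_{t\rightarrow\theta}\frac{-\widetilde{\pi}^{q^k}}{(t-\theta^{q^k})\cdots(t-\theta^q)\,(t-\theta)s_{{\text{Car}}}}
=\frac{-\widetilde{\pi}^{q^k}}{(-1)^k[k]\cdots[1]\,(-\widetilde{\pi})}
=(-1)^k\frac{\widetilde{\pi}^{q^k-1}}{[k]\cdots[1]}.
\]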
Also,
if $t=\xi\in\mathbb{F}_q^\text{alg}$, Theorem \ref{corollairezeta11} implies that $L(\chi_\xi,1)$, the value of an $L$-function associated to a Dirichlet character,
is a multiple of $\widetilde{\pi}$ by an algebraic element of $\mathbb{C}_\infty$. More precisely, we get the
following result.
\begin{Corollaire}\label{fq}
For $\xi\in\mathbb{F}_{q}^{\text{alg}}$ with $\xi^{q^r}=\xi$ ($r>0$), we have
$$L(\chi_\xi,1)=-\frac{\widetilde{\pi}}{\rho_\xi},$$ where
$\rho_\xi\neq0$ is the root $(\tau s_{{\text{Car}}})(\xi)\in\mathbb{C}_\infty$ of the polynomial equation
$$X^{q^r-1}=(\xi-\theta^{q^{r}})\cdots(\xi-\theta^q).$$
\end{Corollaire}
The corollary above follows from Theorem \ref{corollairezeta11} with $t=\xi$, applying (\ref{formula11}) with $k=r$,
simply noticing that $(\tau^r(\tau s_{{\text{Car}}}))(\xi)=((\tau s_{{\text{Car}}})(\xi))^{q^r}$.
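In more detail, applying $\tau^r$ to the $\tau$-difference equation $\tau s_{{\text{Car}}}=(t-\theta)s_{{\text{Car}}}$ and evaluating at $t=\xi$ yields
\[
\rho_\xi^{q^r}=(\tau^r(\tau s_{{\text{Car}}}))(\xi)=(\xi-\theta^{q^{r}})\cdots(\xi-\theta^q)\,(\tau s_{{\text{Car}}})(\xi)=(\xi-\theta^{q^{r}})\cdots(\xi-\theta^q)\,\rho_\xi,
\]
and dividing by $\rho_\xi\neq0$ gives the stated polynomial equation.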
Some of the above consequences are also covered by the so-called Anderson's {\em log-algebraic power series} identities for {\em twisted harmonic sums}, see \cite{Anbis,Anter,Lutes}, see also \cite{Dam}. One of the new features
of the present work is to highlight the fact that several such results on values at $1$ of $L$-series in positive
characteristic belong to the special family described in Theorem \ref{corollairezeta11}.
We mention in passing that the series $\zeta(\alpha),L(\chi_t,\alpha)$ can be viewed as special cases of the (often divergent) multivariable formal series:
$$\mathcal{L}(\chi_{t_1},\ldots,\chi_{t_r};\alpha_1,\ldots,\alpha_r)=\sum_{a\in A^+}\chi_{t_1}^{-\alpha_1}(a)\cdots\chi_{t_r}^{-\alpha_r}(a).$$ For $r=1$ we get $\mathcal{L}(\chi_\theta,\alpha)=\zeta(\alpha)$ and
for $r=2$ we get $\mathcal{L}(\chi_t,\chi_\theta;-1,\alpha)=L(\chi_t,\alpha)$. These series may also be of some interest
in the arithmetic theory of function fields of positive characteristic.
For other positive values of $\alpha\equiv 1\pmod{q-1}$, we have a result on $L(\chi_t,\alpha)$
at once more general and less precise.
\begin{Theorem}\label{generalalpha}
Let $\alpha$ be a positive integer such that $\alpha\equiv1\pmod{q-1}$. There exists a non-zero element
$\lambda_\alpha\in\mathbb{F}_q(t,\theta)$ such that
$$L(\chi_t,\alpha)=\lambda_\alpha\frac{\widetilde{\pi}^\alpha}{(t-\theta)s_{{\text{Car}}}}.$$
\end{Theorem}
Theorem \ref{corollairezeta11} implies that $\lambda_1=-1$. On the other hand, again by (\ref{formula11}), we have the formula
$$\tau^k\lambda_\alpha=(t-\theta^{q^k})\cdots(t-\theta^q)\lambda_{q^k\alpha}.$$
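This relation comes from comparing the two expressions for $L(\chi_t,q^k\alpha)$ obtained from Theorem \ref{generalalpha}, once directly and once by applying $\tau^k$ coefficientwise and using (\ref{formula11}):
\[
\frac{(\tau^k\lambda_\alpha)\,\widetilde{\pi}^{q^k\alpha}}{(t-\theta^{q^k})\cdots(t-\theta)s_{{\text{Car}}}}
=\tau^kL(\chi_t,\alpha)=L(\chi_t,q^k\alpha)
=\frac{\lambda_{q^k\alpha}\,\widetilde{\pi}^{q^k\alpha}}{(t-\theta)s_{{\text{Car}}}}.
\]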
Apart from this, the explicit computation of the $\lambda_\alpha$'s is difficult and very little is known
on these coefficients which could encode, we hope, an interesting generalisation in $\mathbb{F}_q(t,\theta)$ of the theory of Bernoulli-Carlitz numbers.
\noindent\emph{Transcendence properties.} For $t$ varying in $K^{\text{alg}}$, the algebraic closure of $K$ in $\mathbb{C}_\infty$, our results can be used to obtain transcendence properties
of values $L(\chi_{t},1)$ (or more generally, of values $L(\chi_t,\alpha)$ with $\alpha\equiv 1\pmod{q-1}$). Already, we notice by Corollary \ref{fq} that
if $t$ belongs to $\mathbb{F}_q^{\text{alg}}$, then $L(\chi_t,1)$ is transcendental by the well-known transcendence of
$\widetilde{\pi}$. Moreover, the use of the functions
$$f_{t}(X)=\prod_{i>0}(1-t/X^{q^i}),\quad |X|>1,$$
satisfying the functional equation $f_t(X)=(1-t/X^{q})f_t(X^q)$, the identity
$L(\chi_t,1)=f_t(\theta)/f_\theta(\theta)$ and Mahler's method for transcendence as
in \cite{mahler}, allow one to show that for all algebraic $t$, non-zero and not of the form $\theta^{q^i}$ for $i\in\mathbb{Z}$, $L(\chi_t,1)$ and
$\widetilde{\pi}$ are algebraically independent. A precise necessary and
sufficient condition on $t_1,\ldots,t_s$ algebraic for the algebraic independence of
$\boldsymbol{\Omega}(t_1),\ldots,\boldsymbol{\Omega}(t_s)$ can be given, from which one deduces the
corresponding condition for
$L(\chi_{t_1},1),\ldots,L(\chi_{t_s},1)$. These facts will be described in more detail in another work.
The proofs of Theorems \ref{corollairezeta11} and \ref{generalalpha} that we propose rely on certain properties
of {\em deformations of vectorial modular forms} (see Section \ref{dvmf}). In fact, Theorem \ref{corollairezeta11} is a corollary of
an identity involving such functions that we describe now (Theorem \ref{firsttheorem} below) and Theorem
\ref{generalalpha} will be obtained from a simple modification of the techniques introduced to prove Theorem
\ref{firsttheorem}.
\noindent\emph{A fundamental identity for deformations of vectorial modular forms.} To present Theorem \ref{firsttheorem}, we need to introduce more tools. Let $\Omega$ be the rigid analytic
space $\mathbb{C}_\infty\setminus K_\infty$.
For $z\in\Omega$, we denote by $\Lambda_z$ the $A$-module $A+zA$, free of rank $2$.
The evaluation at $\zeta\in \mathbb{C}_\infty$ of the {\em exponential function} $e_{\Lambda_z}$ associated to the lattice $\Lambda_z$ is given by the series
\begin{equation}\label{ellipticexponential}
e_{\Lambda_z}(\eta)=\sum_{i=0}^\infty\alpha_i(z)\eta^{q^i},
\end{equation}
for functions $\alpha_i:\Omega\rightarrow \mathbb{C}_\infty$ with $\alpha_0=1$. We recall that for $i>0$, $\alpha_i$ is a
{\em Drinfeld modular form} of weight $q^i-1$ and type $0$ in the sense of Gekeler, \cite{Ge}.
We also recall from \cite{archiv} the series:
\begin{eqnarray*}
\boldsymbol{s}_1(z,t)&=&\sum_{i=0}^\infty\frac{\alpha_i(z)z^{q^i}}{\theta^{q^i}-t},\\
\boldsymbol{s}_2(z,t)&=&\sum_{i=0}^\infty\frac{\alpha_i(z)}{\theta^{q^{i}}-t},
\end{eqnarray*} which converge for $|t|<q$ and define two
functions $\Omega\rightarrow \mathbb{C}_\infty[[t]]$ with the series in the image converging for
$|t|<q$. We point out that for a fixed choice of
$z\in\Omega$,
the matrix function ${}^t(\boldsymbol{s}_1(z,t),\boldsymbol{s}_2(z,t))$ is the {\em canonical rigid analytic trivialisation} of the {\em $t$-motive associated}
to the lattice $\Lambda_z$ discussed in \cite{Bourbaki}.
We set, for $i=1,2$:
$$\boldsymbol{d}_i(z,t):=\widetilde{\pi}s_{{\text{Car}}}(t)^{-1}\boldsymbol{s}_i(z,t),$$
remembering that in the notations of \cite{archiv}, we have $\boldsymbol{d}_2=\boldsymbol{d}$.
The advantage of using these functions instead of the $\boldsymbol{s}_i$'s, is that
$\boldsymbol{d}_2$ has a {\em $u$-expansion} defined over $\mathbb{F}_q[t,\theta]$ (see Proposition \ref{prarchiv}); moreover,
it can
be proved that for all $z\in\Omega$, $\boldsymbol{d}_1(z,t)$ and $\boldsymbol{d}_2(z,t)$ are entire functions
of the variable $t\in\mathbb{C}_\infty$.
On the other hand, both the series
$$\boldsymbol{e}_1(z,t)=\sideset{}{'}\sum_{c,d\in A}\frac{\chi_t(c)}{cz+d},\quad \boldsymbol{e}_2(z,t)=\sideset{}{'}\sum_{c,d\in A}\frac{\chi_t(d)}{cz+d}$$
converge for $(z,t)\in\Omega\times \mathbb{C}_\infty$ with $|t|\leq 1$, where the dash $'$ denotes a sum avoiding the pair $(c,d)=(0,0)$. The series $\boldsymbol{e}_1,\boldsymbol{e}_2$ then define
functions $\Omega\rightarrow \mathbb{C}_\infty[[t]]$ such that all the series in the images converge for $|t|\leq 1$.
Let $\mathbf{Hol}(\Omega)$ be the ring of holomorphic functions $\Omega\rightarrow\mathbb{C}_\infty$, over which the Frobenius $\mathbb{F}_q$-linear map $\tau$ is well defined: if $f\in\mathbf{Hol}(\Omega)$, then $\tau f=f^q$. We consider the
unique $\mathbb{F}_q((t))$-linear extension of $\tau$:
$$\mathbf{Hol}(\Omega)\otimes_{\mathbb{F}_q}\mathbb{F}_q((t))\rightarrow\mathbf{Hol}(\Omega)\otimes_{\mathbb{F}_q}\mathbb{F}_q((t)),$$
again denoted by $\tau$.
We shall prove the fundamental theorem:
\begin{Theorem}\label{firsttheorem}
The following identities hold for $z\in\Omega,t\in\mathbb{C}_\infty$ such that $|t|\leq 1$:
\begin{eqnarray}\label{eqP101}
L(\chi_t,1)^{-1}\boldsymbol{e}_1(z,t)&=&-(t-\theta)s_{{\text{Car}}}(t)(\tau\boldsymbol{d}_2)(z,t) h(z),\\
L(\chi_t,1)^{-1}\boldsymbol{e}_2(z,t)&=&(t-\theta)s_{{\text{Car}}}(t)(\tau\boldsymbol{d}_1)(z,t) h(z)\nonumber.\end{eqnarray}
\end{Theorem}
In the statement of the theorem, $h$ is the opposite of the unique normalised Drinfeld cusp form of weight $q+1$ and type $1$ for $\Gamma=\mathbf{GL}_2(A)$
as in \cite{Ge}. It can be observed that both right-hand sides in (\ref{eqP101}) are well defined for $t\in\mathbb{C}_\infty$ with $|t|<q^q$ so that
these equalities provide analytic continuations of the functions $\boldsymbol{e}_1,\boldsymbol{e}_2$ in terms of
the variable $t$.
Let us write $\mathcal{E}=L(\chi_t,1)^{-1}(\boldsymbol{e}_1,\boldsymbol{e}_2)$ and $\mathcal{F}=\binom{\boldsymbol{d}_1}{\boldsymbol{d}_2}$, and
let us consider the representation $\rho_t:\mathbf{GL}_2(A)\rightarrow\mathbf{GL}_2(\mathbb{F}_q[t])$ defined by
$$\rho_t(\gamma)=\sqm{\chi_t(a)}{\chi_t(b)}{\chi_t(c)}{\chi_t(d)},$$ for $\gamma=\sqm{a}{b}{c}{d}\in\mathbf{GL}_2(A)$.
Then, for any such choice of $\gamma$ we have the functional equations:
\begin{eqnarray*}\mathcal{F}(\gamma(z))&=&(cz+d)^{-1}\rho_t(\gamma)\cdot\mathcal{F}(z),\\
{}^t\mathcal{E}(\gamma(z))&=&(cz+d)\;{}^t\rho_t^{-1}(\gamma)\cdot{}^t\mathcal{E}(z).\end{eqnarray*}
This immediately places the functions ${}^t\mathcal{E}$ and $\mathcal{F}$ in the framework of
{\em deformations of vectorial modular forms}, a topic that will be developed in Section \ref{dvmf}.
Let $B_1$ be the set of $t\in\mathbb{C}_\infty$ such that $|t|\leq1$.
We will make use of a remarkable sequence $\mathcal{G}=(\mathcal{G}_k)_{k\in\mathbb{Z}}$ of functions
$\Omega\times B_1\rightarrow\mathbb{C}_\infty$ defined by the scalar
product (with component-wise action of $\tau$):
$$\mathcal{G}_k=\mathcal{G}_{1,0,k}=(\tau^k\mathcal{E})\cdot\mathcal{F},$$
such that, for $k\geq 0$, $\mathcal{G}_k$ (\footnote{We will also adopt the notation $\mathcal{E}_{1,0}=\mathcal{E}$ and $\mathcal{G}_k=\mathcal{G}_{1,0,k}$.}) turns out to be an element of $M_{q^k-1,0}\otimes\mathbb{F}_q[t,\theta]$ where $M_{w,m}$ denotes the $\mathbb{C}_\infty$-vector space of Drinfeld modular forms
of weight $w$ and type $m$. In fact, only the terms $\mathcal{G}_0,\mathcal{G}_1$ need to be examined to prove
our Theorem. Once their explicit computation is accomplished (see Proposition \ref{interpretationvk}),
the proof of Theorem \ref{firsttheorem} is only a matter of solving a non-homogeneous system in
two equations and two indeterminates $\boldsymbol{e}_1,\boldsymbol{e}_2$. Furthermore, Theorem \ref{corollairezeta11}
will be deduced with the computation of a limit in the identity $\boldsymbol{e}_1=-L(\chi_t,1)\tau (s_{{\text{Car}}}\boldsymbol{d}_2) h$
and a similar path will be followed for Theorem \ref{generalalpha}, by using this time the more general
sequences $\mathcal{G}_{\alpha,0,k}=(\tau^k\mathcal{E}_{\alpha,0})\cdot\mathcal{F}$ defined later.
We end this introduction by pointing out that the sequence
$\mathcal{G}$ has itself several interesting features. For example, the functions $\mathcal{G}_k$ already appeared
in \cite{archiv} (they are denoted by $g_k^\star$ there) as the coefficients
of the ``cocycle terms $\boldsymbol{L}_\gamma$'' of the functional equations of the {\em deformations of Drinfeld quasi-modular forms $\tau^k\boldsymbol{E}$} introduced there.
It is also interesting to notice that the {\em deformation of Legendre's identity} (\ref{dethPsi}) that we quote here
(proved in \cite{archiv}):
$$h^{-1}(\tau s_{{\text{Car}}})^{-1}=\boldsymbol{d}_1(\tau\boldsymbol{d}_2)-\boldsymbol{d}_2(\tau\boldsymbol{d}_1)$$
can be deduced from Theorem \ref{firsttheorem} by using the fact that $\mathcal{G}_0$ equals $-1$, property obtained in
Proposition \ref{interpretationvk}.
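Indeed, substituting the identities of Theorem \ref{firsttheorem} into $\mathcal{G}_0=L(\chi_t,1)^{-1}(\boldsymbol{e}_1\boldsymbol{d}_1+\boldsymbol{e}_2\boldsymbol{d}_2)=-1$ gives
\[
(t-\theta)s_{{\text{Car}}}\,h\left(\boldsymbol{d}_1(\tau\boldsymbol{d}_2)-\boldsymbol{d}_2(\tau\boldsymbol{d}_1)\right)=1,
\]
which is the deformation of Legendre's identity, since $(t-\theta)s_{{\text{Car}}}=\tau s_{{\text{Car}}}$.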
Moreover, it can be proved, with the help of the theory of {\em $\tau$-linear recurrent sequences} and {\em $\tau$-linearised recurrent sequences} (they will not be described here), that for $k\geq 0$, the function $\mathcal{G}_k(z,\theta)$, well defined, is equal to the {\em ortho-Eisenstein} series $g_k(z)$, and that $\mathcal{G}_k(z,\theta^{q^k})$, also well defined, is equal to the {\em para-Eisenstein} series $m_k(z)$, in the
notations and the terminology of \cite{Gek2}. Hence, the sequence $\mathcal{G}$ provides an interesting tool also in the study of both these kinds of functions. This program will be however pursued in another paper.
\noindent\emph{Acknowledgements.} The author is thankful to Vincent Bosser for a careful reading of a previous version of the manuscript, and Vincent Bosser, David Goss and Matt Papanikolas
for fruitful discussions about the topics of the present paper.
\section{Vectorial modular forms and their deformations\label{dvmf}}
We denote by $J_\gamma$ the factor of automorphy $(\gamma,z)\mapsto cz+d$, if $\gamma=\sqm{a}{b}{c}{d}$.
We will write, for $z\in\Omega$, $u=u(z)=1/e_{{\text{Car}}}(\widetilde{\pi}z)$; this is the local
parameter at infinity of $\Gamma\backslash\Omega$. For all $w,m$, we have an embedding of
$M_{w,m}$ in $\mathbb{C}_\infty[[u]]$ associating to every Drinfeld modular form its $u$-expansion, see \cite{Ge};
we will often identify modular forms with their $u$-expansions.
In all the following, $t$ will be considered either as a new indeterminate or as a parameter varying in $\mathbb{C}_\infty$, and we will freely switch from formal series to functions.
We will say that a series $\sum_{i\geq i_0}c_iu^i$ (with the coefficients $c_i$ in some field extension of $\mathbb{F}_q(t,\theta)$) is {\em normalised}, if $c_{i_0}=1$.
We will also say that the series is {\em of type} $m\in\mathbb{Z}/(q-1)\mathbb{Z}$ if $i\not\equiv m\pmod{q-1}$ implies $c_i=0$.
This definition is obviously compatible with the notion of {\em type} of a Drinfeld modular form already mentioned in the introduction, see \cite{Ge}.
Vectorial modular forms are a classical topic of investigation in the theory of modular forms of one complex
variable. The following definition is a simple adaptation of the notion of vectorial modular form for $\mathbf{SL}_2(\mathbb{Z})$ investigated in the works of Knopp and Mason \cite{KM,Mas}, which are the most relevant for us.
Let $\rho:\Gamma\rightarrow\mathbf{GL}_s(\mathbb{C}_\infty)$ be a representation of $\Gamma$.
\begin{Definition}
{\em A {\em vectorial modular form of weight $w$, type $m$ and dimension $s$ associated to $\rho$} is a
rigid holomorphic function $f:\Omega\rightarrow\mathbf{Mat}_{s\times 1}(\mathbb{C}_\infty)$ with the following two properties. Firstly, for all
$\gamma\in\Gamma$,
$$f(\gamma(z))=\det(\gamma)^{-m}J_\gamma^w\rho(\gamma)\cdot f(z).$$ Secondly, the
vectorial function $f={}^t(f_1,\ldots,f_s)$ is {\em tempered} at infinity in the following way. There exists
$\nu\in\mathbb{Z}$ such that for all $i\in\{1,\ldots,s\}$, the following limit holds: $$\lim_{|z|=|z|_i\rightarrow\infty}u(z)^\nu f_i(z)=0$$ ($|z|_i$ denotes, for $z\in \mathbb{C}_\infty$,
the infimum $\inf_{a\in K_\infty}\{|z-a|\}$).}
\end{Definition}
We denote by $\mathcal{M}^s_{w,m}(\rho)$ the $\mathbb{C}_\infty$-vector space generated by these functions.
We further write, for $s=1$, $M^!_{w,m}=\mathcal{M}^1_{w,m}(\boldsymbol{1})$ with $\rho=\boldsymbol{1}$ the constant representation. This is just the $\mathbb{C}_\infty$-vector space (of infinite dimension)
generated by the rigid holomorphic functions $f:\Omega\rightarrow\mathbb{C}_\infty$ satisfying, for all $z\in\Omega$ and
$\gamma\in\Gamma$,
$f(\gamma(z))=\det(\gamma)^{-m}J_\gamma^wf(z)$, and which are meromorphic at infinity.
It can be proved that this vector space is generated by the functions $h^{-i}m_i$, where
$m_i$ is a Drinfeld modular form of weight $w+i(q+1)$ and type $i$.
Next, we briefly discuss simple formulas expressing Eisenstein series for $\Gamma$ as scalar products
of two vectorial modular forms (\footnote{Similar identities hold in the classical theory of
$\mathbf{SL}_2(\mathbb{Z})$ but will not be discussed here.}); these formulas suggest the theory we develop here.
Let $g_k$ be the normalisation of the Eisenstein series of weight $q^k-1$
as in \cite{Ge}. We have, for $k>0$ (see loc. cit.):
$$
g_k(z)=(-1)^{k+1}\widetilde{\pi}^{1-q^k}[k]\cdots[1]\left(z\sideset{}{'}\sum_{c,d\in A}\frac{c}{(cz+d)^{q^k}}+\sideset{}{'}\sum_{c,d\in A}\frac{d}{(cz+d)^{q^k}}\right).
$$
The identity above can then be rewritten in a more compact form as a scalar product:
\begin{equation}\label{makeup}g_k(z)=-\zeta(q^k-1)^{-1}\mathcal{E}_k'(z)\cdot\mathcal{F}'(z),\end{equation}
where $\mathcal{E}_k'(z)$ is the convergent
series $\sum_{c,d\in A}'(cz+d)^{-q^k}(c,d)$, defining a holomorphic map $\Omega\rightarrow \mathbf{Mat}_{1\times 2}(\mathbb{C}_\infty)$ (it is well defined also
when $k=0$, but only as a conditionally convergent series), and
$\mathcal{F}'(z)$ is the map $\binom{z}{1}:\Omega\rightarrow \mathbf{Mat}_{2\times 1}(\mathbb{C}_\infty)$ (\footnote{After this discussion we will no longer use the notations $\mathcal{E}_k'$ and $\mathcal{F}'$.}).
It is not difficult to see that ${}^t\mathcal{E}_k'$ and $\mathcal{F}'$ are {vectorial modular forms}.
More precisely, if $\rho$ is the identity map, we have ${}^t\mathcal{E}_k'\in\mathcal{M}^2_{q^k,0}({}^t\rho^{-1})$
and $\mathcal{F}'\in\mathcal{M}^2_{-1,0}(\rho)$ (\footnote{More generally, one speaks about matrix modular forms
associated to left and right actions of two representations of $\Gamma$ (or $\mathbf{SL}_2(\mathbb{Z})$). Then,
$\mathcal{E}_k'$ is a row matrix modular form associated to the right action of ${}^t\rho^{-1}$.}).
We are going to provide variants of the identities
(\ref{makeup}) depending on the variable $t\in\mathbb{C}_\infty$ in such a way that they will become arithmetically rich. For this, we need
to introduce the notion of {\em deformation of vectorial modular form}.
In the rest of the paper, we will only use deformations of vectorial modular forms of a certain type.
For the sake of simplicity, we will confine the presentation to the class of functions that we strictly need.
This does not immediately allow one to see how vectorial modular forms arise as special values of such deformations (for specific choices of
$t\in\mathbb{C}_\infty$).
However, the reader should be aware that this, in an obvious extension of the theory presented
here, is of course possible.
The subring
of formal series in $\mathbb{C}_\infty[[t]]$ converging for all $t$ such that $|t|\leq 1$ will be denoted by $\mathbb{T}$.
It is endowed with the $\mathbb{F}_q[t]$-linear automorphism $\tau$ acting on formal series as follows:
$$\tau\sum_ic_it^i=\sum_ic_i^qt^i.$$
We will work with certain functions $f:\Omega\times B_1\rightarrow \mathbb{C}_\infty$ with
the property that for all $z\in\Omega$,
$f(z,t)$ can be identified with a series of $\mathbb{T}$ converging for all $t_0\in B_1$ to $f(z,t_0)$. For such functions we will then also write
$f(z)$ to stress the dependence on $z\in\Omega$ when we want to consider them as functions $\Omega\rightarrow\mathbb{T}$.
Sometimes, we will not specify the variables $z,t$ and just write $f$ instead of $f(z,t)$ or $f(z)$ to lighten our formulas
just as we did in some parts of the introduction. Moreover,
$z$ will denote a variable in $\Omega$ all along the paper.
With $\mathbf{Hol}(\Omega)$ we denote the ring of rigid holomorphic functions on $\Omega$.
Let us denote by
$\mathcal{R}$ the integral domain whose elements are the formal series $f=\sum_{i\geq0}f_it^i$,
such that
\begin{enumerate}
\item For all $i$, $f_i$ is a map $\Omega\rightarrow \mathbb{C}_\infty$ belonging to $\mathbf{Hol}(\Omega)$.
\item For all $z\in\Omega$, $\sum_{i\geq0}f_i(z)t^i$ is an element of $\mathbb{T}$.
\end{enumerate}
The ring $\mathcal{R}$ is
endowed with the injective endomorphism $\tau$ acting on formal series as follows:
$$\tau\sum_{i\geq0}f_i(z)t^i=\sum_{i\geq0}f_i(z)^qt^i.$$
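\noindent\emph{Example.} The following observation, implicit in the proofs below, may help the reader: if $f\in\mathbf{Hol}(\Omega)$ is viewed as an element of $\mathcal{R}$ (with $f_0=f$ and $f_i=0$ for $i>0$), then $\tau f=f^q$. In particular, for $\gamma\in\Gamma$ with factor of automorphy $J_\gamma$, we have, for $w\in\mathbb{Z}$ and $f\in\mathcal{R}$:
$$\tau(J_\gamma^{w}f)=J_\gamma^{wq}\,\tau f.$$
This rule accounts for the changes of weight under $\tau$ occurring in the lemmas of this section.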
\subsection{Deformations of vectorial modular forms.}
Let us consider a representation
\begin{equation}\label{rho}
\rho:\Gamma\rightarrow \mathbf{GL}_s(\mathbb{F}_q((t))).
\end{equation}
We assume that the determinant representation $\det(\rho)$ is the
$\mu$-th power of the determinant character, for some $\mu\in\mathbb{Z}/(q-1)\mathbb{Z}$.
\begin{Definition}
{\em A {\em deformation of vectorial modular form} of weight $w$, dimension $s$ and type $m$ associated with a representation $\rho$ as in
(\ref{rho}) is a column matrix $\mathcal{F}={}^t(f_1,\ldots,f_s)\in\mathbf{Mat}_{s\times 1}(\mathcal{R})$ such that
the following two properties hold. Firstly, considering $\mathcal{F}$ as a map $\Omega\rightarrow\mathbf{Mat}_{s\times 1}(\mathbb{T})$ we have,
for all $\gamma\in\Gamma$,
$$\mathcal{F}(\gamma(z))=J_\gamma^w\det(\gamma)^{-m}\rho(\gamma)\cdot\mathcal{F}(z).$$
Secondly, the entries of $\mathcal{F}$ are {\em tempered}: there exists $\nu\in\mathbb{Z}$ such that, for all $t\in B_1$ and $i\in\{1,\ldots,s\}$,
$\lim_{|z|=|z|_i\rightarrow\infty}u(z)^\nu f_i(z)=0$.}
\end{Definition}
The set of deformations of vectorial modular forms of weight $w$, dimension $s$ and type $m$ associated to a representation $\rho$
is a $\mathbb{T}$-module that we will denote by $\mathcal{M}^{s}_{w,m}(\rho)$ (we use the same notations as
for vectorial modular forms).
If $s=1$ and if $\rho=\boldsymbol{1}$ is the constant map, then $\mathcal{M}^1_{w,m}(\boldsymbol{1})$ is the space $M^!_{w,m}\otimes\mathbb{T}$. It is easy to see that we can endow the space
$\mathcal{M}^2(\rho)=\oplus_{w,m}\mathcal{M}^2_{w,m}(\rho)$ with the structure of a graded
$M^!\otimes\mathbb{T}$-module, where $M^!=\oplus_{w',m'}M^!_{w',m'}$.
\begin{Lemme}\label{twist} Let $k$ be a non-negative integer.
If $\mathcal{F}$ is in $\mathcal{M}^{s}_{w,m}(\rho)$, then $\tau^k\mathcal{F}\in\mathcal{M}^{s}_{wq^k,m}(\rho)$.
If we
choose nonnegative integers $k_1,\ldots,k_s$, then
$$\det(\tau^{k_1}\mathcal{F},\ldots,\tau^{k_s}\mathcal{F})\in M^!_{w(q^{k_1}+\cdots+q^{k_s}),sm-\mu}\otimes\mathbb{T}.$$
In particular, $$W_\tau(\mathcal{F})=\det(\tau^0\mathcal{F},\ldots,\tau^{s-1}\mathcal{F})\in M^!_{w(1+q+q^2+\cdots+q^{s-1}),sm-\mu}\otimes\mathbb{T}.$$
\end{Lemme}
\noindent\emph{Proof.} From the definition, and for all $k'\in\mathbb{Z}$,
$$(\tau^{k'}\mathcal{F})(\gamma(z))=J_\gamma^{wq^{k'}}\det(\gamma)^{-m}\rho(\gamma)(\tau^{k'}\mathcal{F})(z)$$
because $\tau(\rho(\gamma))=\rho(\gamma)$. Moreover, $\tau$ is an endomorphism of $\mathcal{R}$ and, since $\mathcal{F}$ is tempered, it
is obvious that $\tau^k\mathcal{F}$ is tempered as well.
Now define the matrix function:
$$\mathbf{M}_{k_1,\ldots,k_s}=(\tau^{k_1}\mathcal{F},\ldots,\tau^{k_s}\mathcal{F}).$$
By the first part of the lemma we have, for $\gamma\in\mathbf{GL}_2(A)$:
$$\mathbf{M}_{k_1,\ldots,k_s}(\gamma(z))=\det(\gamma)^{-m}\rho(\gamma)\cdot \mathbf{M}_{k_1,\ldots,k_s}(z)\cdot\mathbf{Diag}(J_\gamma^{wq^{k_1}},\cdots,
J_\gamma^{wq^{k_s}}),$$
and we conclude the proof by taking determinants of both sides. \CVD
\begin{Lemme}\label{wronsk} Let
us consider $\mathcal{F}$ in $\mathcal{M}^{s}_{w,m}(\rho)$ and let $\mathcal{E}$ be such that ${}^t\mathcal{E}$ is in $\mathcal{M}^{s}_{w',m'}({}^t\rho^{-1})$. Let us denote by $\mathcal{G}_k$ the scalar product $(\tau^k\mathcal{E})\cdot\mathcal{F}$. Then, for nonnegative $k$, $$\mathcal{G}_k\in M^!_{w'q^k+w,m+m'}\otimes\mathbb{T}.$$
Furthermore, we have:
$$\tau^k\mathcal{G}_{-k}\in M^!_{w'+wq^k,m+m'}\otimes\mathbb{T}.$$
\end{Lemme}
\noindent\emph{Proof.} By Lemma \ref{twist},
$\tau^k({}^t\mathcal{E})$ is in $\mathcal{M}^s_{w'q^k,m'}({}^t\rho^{-1})$ and
$\tau^k\mathcal{F}$ is in $\mathcal{M}^s_{wq^k,m}(\rho)$.
Let
$\gamma$ be in $\mathbf{GL}_2(A)$. We have, after transposition, and for all $k\in\mathbb{Z}$,
$$(\tau^{k}\mathcal{E})(\gamma(z))=J_\gamma^{w'q^{k}}\det(\gamma)^{-m'}(\tau^{k}\mathcal{E})(z)\cdot\rho^{-1}(\gamma),$$
and, since $\tau^k\mathcal{G}_{-k}=\mathcal{E}\cdot(\tau^k\mathcal{F})$,
$$(\tau^{k}\mathcal{F})(\gamma(z))=J_\gamma^{wq^{k}}\det(\gamma)^{-m}\rho(\gamma)\cdot(\tau^{k}\mathcal{F})(z).$$
Hence, for $k\geq 0$,
$$\mathcal{G}_{k}(\gamma(z))=J_\gamma^{w'q^{k}+w}\det(\gamma)^{-m-m'}\mathcal{G}_{k}(z),$$
and
$$(\tau^k\mathcal{G}_{-k})(\gamma(z))=J_\gamma^{w'+wq^{k}}\det(\gamma)^{-m-m'}(\tau^k\mathcal{G}_{-k})(z).$$
On the other hand, $\mathcal{G}_k$ and $\tau^k\mathcal{G}_{-k}$ are tempered for all $k\geq 0$, from which we can conclude.\CVD
From now on, we will use the representation $\rho=\rho_{t}$ and the transpose of its inverse.
\subsection{The function $\mathcal{F}$}
The function of the title of this subsection is the vector-valued function
$\binom{\boldsymbol{d}_1}{\boldsymbol{d}_2}$. In the next proposition, which collects the properties of $\mathcal{F}$ of interest to us, we write $g$ for the unique normalised Drinfeld modular form of weight $q-1$ and type $0$ for $\Gamma$
(proportional to an Eisenstein series), and $\Delta$ for the cusp form $-h^{q-1}$.
\begin{Proposition}\label{prarchiv}
We have the following four properties for $\mathcal{F}$ and the $\boldsymbol{d}_i$'s.
\begin{enumerate}
\item We have
$\mathcal{F}\in\mathcal{M}^2_{-1,0}(\rho_{t})$.
\item The functions $\boldsymbol{d}_1,\boldsymbol{d}_2$ span the $\mathbb{F}_q(t)$-vector space of dimension
$2$ of solutions of the following $\tau$-linear difference equation:
\begin{equation}\label{equexpansion}
X=(t-\theta^q)\Delta\tau^2X+g\tau X,
\end{equation}
in a suitable existentially closed inversive
field containing $\mathcal{R}$.
\item Let us consider the matrix function:
$$\Psi (z,t):=\sqm{\boldsymbol{d}_1(z,t)}{\boldsymbol{d}_2(z,t)}{(\tau\boldsymbol{d}_1)(z,t)}{(\tau\boldsymbol{d}_2)(z,t)}.$$
For all $z\in\Omega$ and $t$ with $|t|<q$:
\begin{equation}\label{dethPsi}
\det(\Psi)=(t-\theta)^{-1}h(z)^{-1}s_{{\text{Car}}}(t)^{-1}.
\end{equation}
\item We have the series expansion
\begin{equation}\label{uexpd2}
\boldsymbol{d}_2 =\sum_{i\geq 0}c_i(t)u^{(q-1)i}\in1+u^{q-1}\mathbb{F}_q[t,\theta][[u^{q-1}]],
\end{equation}
convergent for $t,u$ sufficiently close to $(0,0)$.
\end{enumerate}
\end{Proposition}
\noindent\emph{Proof.} All the properties but one follow immediately from the results of
\cite{archiv} where some of them are stated in slightly different, although equivalent formulations.
The only property we have to justify here is that $\mathcal{F}$ is tempered. In view of (\ref{uexpd2}),
we are led to check that there exists $\nu\in\mathbb{Z}$ such that $u(z)^\nu\boldsymbol{d}_1\rightarrow0$
for $z\in\Omega$ such that $|z|=|z|_i\rightarrow\infty$. For this, we have the following lemma,
which concludes the proof of the Proposition.
\begin{Lemme}\label{basiclimits}
The following limits hold, for all $t\in\mathbb{C}_\infty$ such that $|t|\leq 1$:
$$\lim_{|z|=|z|_i\rightarrow\infty}u(z)\boldsymbol{d}_1(z,t)=0,\quad \lim_{|z|=|z|_i\rightarrow\infty}u(z)(\tau\boldsymbol{d}_1)(z,t)=1.$$
\end{Lemme}
\noindent\emph{Proof.} We recall from \cite{archiv} the series expansion
$$\boldsymbol{d}_1(z)=\frac{\widetilde{\pi}}{s_{{{\text{Car}}}}(t)}\boldsymbol{s}_2(z)=\frac{\widetilde{\pi}}{s_{{{\text{Car}}}}(t)}\sum_{n\geq0}e_{\Lambda_z}\left(\frac{z}{\theta^{n+1}}\right)t^n,$$
converging for all $t$ such that $|t|<q$ and for all $z\in\Omega$.
By a simple modification of the proof of \cite[Lemma 5.9 p. 286]{gekeler:compositio}, we have
$$\lim_{|z|_i=|z|\to\infty}u(z)t^ne_{\Lambda_z}(z/\theta^{n+1})^q=0$$ uniformly in $n>0$, for all $t$ such that
$|t|\leq 1$ (in fact, even if $|t|\leq q$).
Moreover, it is easy to show that
$$\lim_{|z|_i=|z|\to\infty} u(z)e_{\Lambda_z}(z/\theta)^q=\widetilde{\pi}^{-q}\lim_{|z|_i=|z|\to\infty}e^{q}_{{{\text{Car}}}}(\widetilde{\pi}z/\theta)/e_{{{\text{Car}}}}(\widetilde{\pi}z)=1,$$ which gives the second limit, from which we deduce the first limit as well.\CVD
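\noindent\emph{Remark.} As a consistency check (assuming, as one verifies directly from the definition of $\rho_t$, that $\det(\rho_t)$ is the first power of the determinant character, so that $\mu=1$), Lemma \ref{twist} applied to $\mathcal{F}\in\mathcal{M}^2_{-1,0}(\rho_{t})$ gives
$$W_\tau(\mathcal{F})=\det(\Psi)\in M^!_{-1-q,-1}\otimes\mathbb{T},$$
in agreement with (\ref{dethPsi}): the factor $(t-\theta)^{-1}s_{{\text{Car}}}(t)^{-1}$ does not depend on $z$, while $h(z)^{-1}$ has weight $-(q+1)$ and type $-1$, since $h\in M_{q+1,1}$.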
\subsection{Structure of $\mathcal{M}^2$}
Let us denote by $\mathcal{F}^*$ the function $\binom{-\boldsymbol{d}_2}{\boldsymbol{d}_1}$,
which is easily seen to be an element of $\mathcal{M}^2_{-1,-1}({}^t\rho_t^{-1})$.
In this subsection we give some information on the structure of the spaces
$\mathcal{M}^2_{w,m}({}^t\rho_t^{-1})$.
\begin{Proposition}\label{embedding}
We have
$$\mathcal{M}^2_{w,m}({}^t\rho^{-1}_t)=(M^!_{w+1,m+1}\otimes\mathbb{T})\mathcal{F}^*\oplus
(M^!_{w+q,m+1}\otimes\mathbb{T})(\tau\mathcal{F}^*).$$ More precisely,
for all $\mathcal{E}$ with ${}^t\mathcal{E}\in\mathcal{M}^2_{w,m}({}^t\rho^{-1}_t)$ we have
$${}^t\mathcal{E}=(\tau s_{{\text{Car}}})h((\tau\mathcal{G}_{-1})\mathcal{F}^*-\mathcal{G}_{0}(\tau\mathcal{F}^*)),$$
where we have written $\mathcal{G}_k=(\tau^k({}^t\mathcal{E}))\cdot\mathcal{F}$ for all $k\in\mathbb{Z}$.
\end{Proposition}
The first part of the proposition is equivalent to the equality
$$\mathcal{M}^2_{w,m}(\rho_t)=(M^!_{w+1,m}\otimes\mathbb{T})\mathcal{F}\oplus
(M^!_{w+q,m}\otimes\mathbb{T})(\tau\mathcal{F}).$$ In the rest of this paper, we will only use the equality
for $\mathcal{M}^2_{w,m}({}^t\rho^{-1}_t)$.
\noindent\emph{Proof of Proposition \ref{embedding}.} Let us temporarily write $\mathcal{M}'$ for $(M^!_{w+1,m+1}\otimes\mathbb{T})\mathcal{F}^*+
(M^!_{w+q,m+1}\otimes\mathbb{T})(\tau\mathcal{F}^*)$. It is easy to show, thanks to the results of
\cite{archiv}, that the sum is direct. Indeed, in loc. cit. it is proved that
$\boldsymbol{d}_2$, $\tau\boldsymbol{d}_2$, $g$ and $h$ are algebraically independent over $\mathbb{C}_\infty((t))$.
Moreover, $\mathcal{M}'$ clearly embeds in $\mathcal{M}^2_{w,m}({}^t\rho^{-1}_t)$. It remains
to show the opposite inclusion.
By Proposition \ref{prarchiv}, the matrix $M=(\mathcal{F},\tau^{-1}\mathcal{F})$ is invertible.
From (\ref{dethPsi}) we deduce that
$$\tau M^{-1}=(t-\theta)s_{{\text{Car}}} h\sqm{-\boldsymbol{d}_2}{\boldsymbol{d}_1}{\tau\boldsymbol{d}_2}{-\tau\boldsymbol{d}_1}.$$
Let $\mathcal{E}$ be such that ${}^t\mathcal{E}\in\mathcal{M}^2_{w,m}({}^t\rho^{-1}_t)$.
Thanks to the above expression for $\tau M^{-1}$, the identity:
$$\binom{\mathcal{E}}{\tau\mathcal{E}}\cdot\mathcal{F}=\binom{\mathcal{G}_0}{\mathcal{G}_1},$$ the product of a $2\times 2$ matrix with a column matrix (which is just the definition of $\mathcal{G}_0,\mathcal{G}_1$),
yields the formulas:
\begin{eqnarray}
\mathcal{E}& =&(\mathcal{G}_0,\tau^{-1}\mathcal{G}_1)\cdot M^{-1}\nonumber\\
&=&(t-\theta^{1/q})h^{1/q}(\tau^{-1}s_{{\text{Car}}})(\mathcal{G}_0,\tau^{-1}\mathcal{G}_1)\cdot
\left(\begin{array}{ll}-\tau^{-1}\boldsymbol{d}_2&\tau^{-1}\boldsymbol{d}_1\\ \boldsymbol{d}_2&-\boldsymbol{d}_1\end{array}\right)\nonumber\\
&=&(t-\theta^{1/q})h^{1/q}(\tau^{-1}s_{{\text{Car}}})(\tau^{-1}\mathcal{G}_1\boldsymbol{d}_2 -\mathcal{G}_0\tau^{-1}\boldsymbol{d}_2,-\tau^{-1}\mathcal{G}_1\boldsymbol{d}_1 +\mathcal{G}_0\tau^{-1}\boldsymbol{d}_1).\label{partial1}
\end{eqnarray}
Now, we observe that we have, from the second part of Proposition \ref{prarchiv} and for all $k\in\mathbb{Z}$,
\begin{equation}\nonumber
\mathcal{G}_k=g\tau\mathcal{G}_{k-1}+\Delta(t-\theta^q)\tau^2\mathcal{G}_{k-2}.
\end{equation}
Applying this formula for $k=1$ we obtain
\begin{equation}\label{taudifference}
\tau\mathcal{G}_{-1}=\frac{\tau^{-1}\mathcal{G}_{1}-g^{1/q}\mathcal{G}_{0}}{(t-\theta)\Delta^{1/q}},
\end{equation}
and by using again Part two of Proposition \ref{prarchiv}, we eliminate $\tau^{-1}\mathcal{G}_{1}$
and $\tau^{-1}\boldsymbol{d}_i$:
$$(\tau^{-1}\mathcal{G}_{1})\boldsymbol{d}_i -\mathcal{G}_{0}\tau^{-1}\boldsymbol{d}_i=\Delta^{1/q}(t-\theta)((\tau\mathcal{G}_{-1})\boldsymbol{d}_i-\mathcal{G}_{0}(\tau\boldsymbol{d}_i)),\quad i=1,2.$$
Substituting into (\ref{partial1}), and using $\Delta^{1/q}h^{1/q}=-h$ and $(t-\theta^{1/q})\tau^{-1}s_{{{\text{Car}}}}=s_{{\text{Car}}}$,
we get the formula:
\begin{equation}\label{formula1}
\mathcal{E}=(t-\theta)s_{{\text{Car}}} h(-(\tau\mathcal{G}_{-1})\boldsymbol{d}_2+\mathcal{G}_{0}(\tau\boldsymbol{d}_2),(\tau\mathcal{G}_{-1})\boldsymbol{d}_1-\mathcal{G}_{0}(\tau\boldsymbol{d}_1)).
\end{equation}
By Lemma \ref{wronsk}, we have $\mathcal{G}_{0}\in M^!_{w-1,m}\otimes\mathbb{T},(\tau\mathcal{G}_{-1})\in M^!_{w-q,m}\otimes\mathbb{T}$ and the proposition follows from the fact that $h\in M_{q+1,1}$.\CVD
\noindent\emph{Remark.} Let us choose any $\mathcal{E}=(f_1,f_2)$ such that
${}^t\mathcal{E}\in\mathcal{M}^{2}_{w,m}({}^t\rho_t^{-1})$.
Proposition \ref{embedding} implies that there exists $\mu\in\mathbb{Z}$ such that
$$h^\mu f_1\in\mathbb{M}^\dag_{\mu(q+1)+w,1,\mu+m},$$
where $\mathbb{M}^\dag_{\alpha,\beta,m}$ is a sub-module of {\em almost $A$-quasi-modular forms}
as introduced in \cite{archiv}.
\subsection{Deformations of vectorial Eisenstein and Poincar\'e series.\label{poincare}}
The aim of this subsection is to construct non-trivial elements
of $\mathcal{M}^2_{w,m}({}^t\rho_t^{-1})$.
Following Gekeler \cite[Section 3]{Ge}, we recall that for all $\alpha>0$ there exists a polynomial $G_\alpha(u)\in \mathbb{C}_\infty[u]$, called the
{\em $\alpha$-th Goss polynomial},
such that, for all $z\in\Omega$, $G_{\alpha}(u(z))$ equals the sum of the convergent series
$$\widetilde{\pi}^{-\alpha}\sum_{a\in A}\frac{1}{(z+a)^\alpha}.$$
Several properties of these polynomials are collected in \cite[Proposition (3.4)]{Ge}. Here, we will need that
for all $\alpha$, $G_\alpha$ is of type $\alpha$ as a formal series of $\mathbb{C}_\infty[[u]]$. Namely:
$$G_\alpha(\lambda u)=\lambda^\alpha G_\alpha(u),\quad \text{ for all }\lambda\in\mathbb{F}_q^\times.$$
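\noindent\emph{Example.} By \cite[Proposition (3.4)]{Ge} we also have $G_\alpha(u)=u^\alpha$ for $1\leq\alpha\leq q$; in particular, $G_1(u)=u$. This special case will be used in the proof of Theorem \ref{corollairezeta11} below, through the expansion
$$\sum_{c\in A^+}\ol{c}\,G_1(u_c)=\sum_{c\in A^+}\ol{c}\,u_c=u+o(u).$$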
We also recall, for $a\in A$, the function
$$u_a(z):=u(az)=e_{{\text{Car}}}(\widetilde{\pi}az)^{-1}=u^{|a|}f_a(u)^{-1}=u^{|a|}+\cdots\in A[[u]],$$
where $f_a\in A[[u]]$ is the {\em $a$-th inverse cyclotomic polynomial} defined in \cite[(4.6)]{Ge}. Obviously,
we have
$$u_{\lambda a}=\lambda^{-1}u_a\quad \text{ for all }\lambda\in\mathbb{F}_q^\times.$$
We will use the following lemma.
\begin{Lemme}\label{primolemma}
Let $\alpha$ be a positive integer such that $\alpha\equiv 1\pmod{q-1}$.
We have, for all $t\in \mathbb{C}_\infty$ such that $|t|\leq1$ and $z\in\Omega$, convergence of the series below, and equality:
$$\sideset{}{'}\sum_{c,d\in A}\frac{\ol{c}}{(cz+d)^{\alpha}}=-\widetilde{\pi}^\alpha\sum_{c\in A^+}\ol{c}G_{\alpha}(u_c(z)),$$ from which it follows that the series in the left-hand side is not identically zero.
\end{Lemme}
\noindent\emph{Proof.} Convergence is ensured by Lemma \ref{interpret} (or Proposition \ref{mainproppoincare})
and the elementary properties of Goss' polynomials. Note that the series on the right-hand side
even converges for all $t\in\mathbb{C}_\infty$.
We then compute:
\begin{eqnarray*}
\sideset{}{'}\sum_{c,d}\frac{\ol{c}}{(cz+d)^\alpha}&=&\sum_{c\neq0}\ol{c}\sum_{d\in A}\frac{1}{(cz+d)^\alpha}\\
&=&\widetilde{\pi}^\alpha\sum_{c\neq 0}\ol{c}\sum_{d\in A}\frac{1}{(c\widetilde{\pi}z+d\widetilde{\pi})^\alpha}\\
&=&\widetilde{\pi}^\alpha\sum_{c\neq 0}\ol{c}G_\alpha(u_c)\\
&=&\widetilde{\pi}^\alpha\sum_{c\in A^+}\ol{c}G_\alpha(u_c)\sum_{\lambda\in\mathbb{F}_q^\times}\lambda^{1-\alpha}\\
&=&-\widetilde{\pi}^\alpha\sum_{c\in A^+}\ol{c}G_\alpha(u_c).
\end{eqnarray*} The non-vanishing of the series comes from the non-vanishing contribution of the
term $G_{\alpha}(u)$ in the last series. Indeed, the order of vanishing at $u=0$ of the right-hand side is equal to
$\min_{c\in A^+}\{\text{ord}_{u=0}G_\alpha(u_c)\}=\text{ord}_{u=0}G_\alpha(u)$, which is
$<\infty$.
\CVD
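\noindent\emph{Remark.} The hypothesis $\alpha\equiv1\pmod{q-1}$ enters through the character sum appearing in the computation above: for $\lambda\in\mathbb{F}_q^\times$, the term indexed by $\lambda c$ contributes the factor $\lambda^{1-\alpha}$, and
$$\sum_{\lambda\in\mathbb{F}_q^\times}\lambda^{1-\alpha}=\begin{cases}-1&\text{if }q-1\text{ divides }\alpha-1,\\ 0&\text{otherwise.}\end{cases}$$
For other classes of $\alpha$ modulo $q-1$, the same computation would thus yield the zero series.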
Following \cite{Ge}, we consider the subgroup $$H=\left\{\sqm{*}{*}{0}{1}\right\}$$ of $\Gamma=\mathbf{GL}_{2}(A)$ and its left action on $\Gamma$.
For $\delta=\sqm{a}{b}{c}{d}\in\Gamma$, the map $\delta\mapsto(c,d)$ induces a bijection
between the orbit set $H\backslash\Gamma$ and the set of $(c,d)\in A^2$ with $c,d$ relatively prime.
We consider the factor of automorphy $$\mu_{\alpha,m}(\delta,z)=\det(\delta)^{-m}J_\delta^\alpha,$$ where $m$ and $\alpha$ are
positive integers (at the same time, $m$ will also determine a type, that is, a class modulo $q-1$).
Let $V_1 (\delta)$ be the row matrix $(\ol{c},\ol{d}).$
It is easy to show that the row matrix $$\mu_{\alpha,m}(\delta,z)^{-1}u^m(\delta(z))V_1 (\delta)$$ only depends on the class of $\delta\in H\backslash\Gamma$,
so that we can consider the following expression:
$$\mathcal{E}_{\alpha,m}(z)=\sum_{\delta\in H\backslash\Gamma}\mu_{\alpha,m}(\delta,z)^{-1}u^m(\delta(z))V_1 (\delta),$$
which is a row matrix whose two entries are formal series.
Let $\mathcal{V}$ be the set of functions $\Omega\rightarrow\mathbf{Mat}_{1\times 2}(\mathbb{C}_\infty[[t]]).$
We introduce, for $\alpha,m$ integers, $f\in \mathcal{V}$ and $\gamma\in\Gamma$, the Petersson slash operator:
$$f|_{\alpha,m}\gamma=\det(\gamma)^{m}(cz+d)^{-\alpha}f(\gamma(z))\cdot\rho_{t}(\gamma).$$
This will be used in the next proposition, where we denote by $\log^+_q(x)$ the maximum of $0$ and $\log_q(x)$, the logarithm in base $q$ of $x>0$. We point out that we will not apply this proposition in full generality.
Indeed, in this paper we essentially consider the case $m=0$. The proposition is presented
in this generality for the sake of completeness.
\begin{Proposition}\label{mainproppoincare} Let $\alpha,m$ be non-negative integers with $\alpha\geq 2m+1$, and write $r(\alpha,m)=\alpha-2m-1$. We have the following properties.
\begin{enumerate}
\item For $\gamma\in\Gamma$, the map $f\mapsto f|_{\alpha,m}\gamma$ induces a permutation of the subset of $\mathcal{V}$:
$$\mathcal{S}=\{\mu_{\alpha,m}(\delta,z)^{-1}u^m(\delta(z))V_1 (\delta);\delta\in H\backslash\Gamma\}.$$
\item If $t\in \mathbb{C}_\infty$ and $\alpha,m$ are chosen so that $r(\alpha,m)>\log^+_q|t|$, then the components of $\mathcal{E}_{\alpha,m}(z,t)$ are series of functions
of $z\in\Omega$ which converge absolutely and uniformly on every compact subset of $\Omega$ to holomorphic functions.
\item If $|t|\leq 1$, then the components of $\mathcal{E}_{\alpha,m}(z,t)$ converge absolutely and uniformly on every compact subset of $\Omega$
also if $\alpha-2m>0$.
\item For any choice of $\alpha,m,t$ subject to the convergence conditions above, the function ${}^t\mathcal{E}_{\alpha,m}(z,t)$
belongs to the space $\mathcal{M}^{2}_{\alpha,m}({}^t\rho_{t}^{-1})$.
\item If $\alpha-1\not\equiv 2m\pmod{(q-1)}$, the matrix function $\mathcal{E}_{\alpha,m}(z,t)$ is identically zero.
\item If $\alpha-1\equiv 2m\pmod{(q-1)}$ and $\alpha\geq (q+1)m+1$, so that $\mathcal{E}_{\alpha,m}$ converges, then $\mathcal{E}_{\alpha,m}$ is not identically zero in its domain of convergence.
\end{enumerate}
\end{Proposition}
\noindent\emph{Proof.}
\noindent\emph{1.} We choose $\delta\in H\backslash \Gamma$ corresponding
to a pair $(c,d)\in A^2$ with $c,d$ relatively prime, and set $f_\delta=\mu_{\alpha,m}(\delta,z)^{-1}u^m(\delta(z))V_1 (\delta)\in\mathcal{S}$.
We have
\begin{eqnarray*}
f_\delta(\gamma(z))&=&\mu_{\alpha,m}(\delta,\gamma(z))^{-1}u^m(\delta(\gamma(z)))V_1 (\delta)\\
&=&\mu_{\alpha,m}(\gamma,z)\mu_{\alpha,m}(\delta\gamma,z)^{-1}u^m(\delta\gamma(z))V_1 (\delta),\\
&=&\mu_{\alpha,m}(\gamma,z)\mu_{\alpha,m}(\delta\gamma,z)^{-1}u^m(\delta\gamma(z))V_1 (\delta\gamma)\cdot\rho_{t}(\gamma)^{-1},\\
&=&\mu_{\alpha,m}(\gamma,z)\mu_{\alpha,m}(\delta',z)^{-1}u^m(\delta'(z))V_1 (\delta')\cdot\rho_{t}(\gamma)^{-1},\\
&=&\mu_{\alpha,m}(\gamma,z)f_{\delta'}\cdot\rho_{t}(\gamma)^{-1},
\end{eqnarray*}
with $\delta'=\delta\gamma$ and $f_{\delta'}=\mu_{\alpha,m}(\delta',z)^{-1}u^m(\delta'(z))V_1 (\delta')$, from which part 1 of the
proposition follows.
\noindent\emph{2.}
Convergence and holomorphy are ensured by simple modifications of \cite[(5.5)]{Ge}, or by the arguments in \cite[Chapter 10]{GePu}.
More precisely, let us choose $s\in\{0,1\}$ and look at the component at the place $2-s$ $$\mathcal{E}_s(z,t)=\sum_{\delta\in H\backslash\Gamma}\mu_{\alpha,m}(\delta,z)^{-1}u(\delta(z))^m\ol{c^sd^{1-s}}$$ of the vector
series $\mathcal{E}_{\alpha,m}$. Writing $\alpha=n(q-1)+2m+l'$ with $n$ a non-negative integer and $l'\geq 1$ we see, following Gerritzen and van der Put, \cite[pp. 304-305]{GePu} and taking into account the inequality
$|u(\delta(z))|\leq|cz+d|^2/|z|_i$, that
the term
of the series $\mathcal{E}_s$:
$$\mu_{\alpha,m}(\delta,z)^{-1}u^m(\delta(z))\ol{c^sd^{1-s}}=(cz+d)^{-n(q-1)-l'-2m}u(\delta(z))^m\chi_t(c^sd^{1-s})$$
(where $\delta$ corresponds to $(c,d)$)
has absolute value bounded from above by
$$|z|_i^{-m}\left|\frac{\ol{c^sd^{1-s}}}{(cz+d)^{n(q-1)+l'}}\right|.$$
Applying the first part of the proposition,
to check convergence we may freely replace $z$ by $z+a$ with $a\in A$; hence we may assume, without loss of generality, that $|z|=|z|_i$.
We verify that either $\lambda=\deg_\theta z\in\mathbb{Q}\setminus\mathbb{Z}$,
or $\lambda\in\mathbb{Z}$, in which case, for all $\rho\in K_\infty$ with $|\rho|=|z|$, we have $|z-\rho|=|z|$.
In both cases, for all $c,d$, $|cz+d|=\max\{|cz|,|d|\}$.
Then, the series defining $\mathcal{E}_s$ can be decomposed as follows:
$$\mathcal{E}_s=\left(\sideset{}{'}\sum_{|cz|<|d|}+\sideset{}{'}\sum_{|cz|\geq|d|}\right)\mu_{\alpha,m}(\delta,z)^{-1}u^m(\delta(z))\ol{c^sd^{1-s}}.$$
We now look for upper bounds for the absolute values of the terms of the series above separating the two cases in a way similar to that of Gerritzen and van der Put
in loc. cit.
Assume first that $|cz|<|d|$, that is, $\deg_\theta c+\lambda< \deg_\theta d$. Then
$$\left|\frac{\ol{c^sd^{1-s}}}{(cz+d)^{n(q-1)+l'}}\right|\leq \kappa \max\{1,|t|\}^{\deg_\theta d}|d|^{-n(q-1)-l'}\leq \kappa q^{\deg_\theta d(\log^+_q|t|-n(q-1)-l')},$$
where $\kappa$ is a constant depending on $\lambda$, and the corresponding sub-series
converges with the imposed conditions on the parameters, because $\log^+_q|t|-n(q-1)-l'<0$.
If on the other hand $|cz|\geq|d|$, that is, $\deg_\theta c+\lambda\geq\deg_\theta d,$ then
$$\left|\frac{\ol{c^sd^{1-s}}}{(cz+d)^{n(q-1)+l'}}\right|\leq\kappa' \max\{1,|t|\}^{\deg_\theta d}|c|^{-n(q-1)-l'}\leq\kappa' q^{\deg_\theta c(\log^+_q|t|-n(q-1)-l')},$$
with a constant $\kappa'$ depending on $\lambda$, again because $\log^+_q|t|-n(q-1)-l'<0$. This completes the proof of the second part of the Proposition.
\noindent\emph{3.} This property can be deduced from the proof of the second part because if $|t|\leq 1$, then $|\chi_t(c^sd^{1-s})|\leq 1$.
\noindent\emph{4.} The property is obvious by the first part of the proposition, because
$\mathcal{E}_{\alpha,m}=\sum_{f\in\mathcal{S}}f$, and because the functions are obviously
tempered thanks to the estimates we used in the proof of Part two.
\noindent\emph{5.}
We consider $\gamma=\mathbf{Diag}(1,\lambda)$ with $\lambda\in\mathbb{F}_q^\times$; the
corresponding homography, multiplication by $\lambda^{-1}$, is equal to that defined by $\mathbf{Diag}(\lambda^{-1},1)$. Hence, we have:
\begin{eqnarray*}
\mathcal{E}_{\alpha,m}(\gamma(z))&=&\lambda^{\alpha-m}\mathcal{E}_{\alpha,m}(z)\cdot\mathbf{Diag}(1,\lambda^{-1})\\
&=&\lambda^{m}\mathcal{E}_{\alpha,m}(z)\cdot\mathbf{Diag}(\lambda,1),
\end{eqnarray*}
from which it follows that $\mathcal{E}_{\alpha,m}$ is identically zero if $\alpha-1\not\equiv 2m\pmod{q-1}$.
\noindent\emph{6.} If $m=0$ and $\alpha=1$, we simply appeal to Lemma \ref{primolemma}.
Assume now that either $m>0$ or $\alpha>1$. Then $\mathcal{E}_{\alpha,m}$ converges
at $t=\theta$ and:
\begin{eqnarray*}
z\mathcal{E}_1(z,\theta)+\mathcal{E}_0(z,\theta)&=&\sum_{\delta\in H\backslash\Gamma}\det(\delta)^{m}(cz+d)^{1-\alpha}u(\delta(z))^m\\
&=&P_{\alpha-1,m},
\end{eqnarray*}
where $P_{\alpha-1,m}\in M_{\alpha-1,m}$ is the Poincar\'e series of weight $\alpha-1$ and type $m$ so that
\cite[Proposition 10.5.2]{GePu} suffices for our purposes.\CVD
Let $\alpha,m$ be non-negative integers such that $\alpha-2m>1$ and $\alpha-1\equiv 2m\pmod{(q-1)}$. We have constructed functions:
\begin{eqnarray*}
\mathcal{E}_{\alpha,m}:\Omega&\rightarrow&\mathbf{Mat}_{1\times 2}(\mathcal{R}),\\
\mathcal{F}:\Omega&\rightarrow&\mathbf{Mat}_{2\times 1}(\mathcal{R}),
\end{eqnarray*}
and
${}^t\mathcal{E}_{\alpha,m}\in\mathcal{M}^{2}_{\alpha,m}({}^t\rho_{t}^{-1})$, $\mathcal{F}\in\mathcal{M}^{2}_{-1,0}(\rho_{t})$. Therefore,
after Lemma \ref{wronsk}, the functions $$\mathcal{G}_{\alpha,m,k}=(\tau^k\mathcal{E}_{\alpha,m})\cdot\mathcal{F}=\mathcal{E}_{q^k\alpha,m}\cdot\mathcal{F}:\Omega\rightarrow\mathbb{T}$$ satisfy $\mathcal{G}_{\alpha,m,k}\in M^!_{q^k\alpha-1,m}\otimes\mathbb{T}$.
\noindent\emph{A special case.}
After Proposition \ref{mainproppoincare},
if $\alpha>0$ and $\alpha\equiv 1\pmod{q-1}$, then $\mathcal{E}_{\alpha,0}\neq0$. We call these series {\em deformations of vectorial Eisenstein series}.
\begin{Lemme}\label{interpret} With $\alpha>0$ such that $\alpha\equiv 1\pmod{q-1}$, the following identity holds
for all $t\in\mathbb{C}_\infty$ such that $|t|\leq 1$:
$$\mathcal{E}_{\alpha,0}(z,t)=L(\chi_t,\alpha)^{-1}\sideset{}{'}\sum_{c,d}(cz+d)^{-\alpha} V_1(c,d),$$
and $\mathcal{E}_{\alpha,0}$ is not identically zero.
\end{Lemme}
\noindent\emph{Proof.}
We recall the notation $$V_1(c,d)=(\ol{c},\ol{d})\in\mathbf{Mat}_{1\times 2}(\mathbb{F}_q[t]).$$
We have
\begin{eqnarray*}
\sideset{}{'}\sum_{c,d}(cz+d)^{-\alpha} V_1(c,d)&=& \sum_{(c',d')=1}\sum_{a\in A^+}a^{-\alpha}(c'z+d')^{-\alpha} V_1(ac',ad')\\
&=&L(\chi_t,\alpha)\mathcal{E}_{\alpha,0}(z,t),
\end{eqnarray*}
where the first sum runs over the pairs in $A^2$ distinct from $(0,0)$, while the second sum runs over the pairs $(c',d')$ of relatively prime
elements of $A^2$.
Convergence features are easy to deduce from Proposition \ref{mainproppoincare}.
Indeed, we have convergence if $\log_q^+|t|<r(\alpha,0)=\alpha-1$, that is, $\max\{1,|t|\}< q^{\alpha-1}$, if $\alpha>1$,
and we have convergence, for $\alpha=1$, for $|t|\leq1$. In all cases, convergence holds for $|t|\leq1$.
Non-vanishing of the function also follows from Proposition \ref{mainproppoincare}.
\CVD
\section{Proof of the Theorems\label{proofs}}
We will need two auxiliary lemmas.
\begin{Lemme}\label{lemmelimit1} Let $\alpha>0$ be an integer such that $\alpha\equiv1\pmod{q-1}$. For all $t\in \mathbb{C}_\infty$ such that $|t|\leq1$, we have
$$\lim_{|z|_i=|z|\to\infty}\boldsymbol{d}_1(z)\sideset{}{'}\sum_{c,d}\frac{\ol{c}}{(cz+d)^\alpha}=0.$$
\end{Lemme}
\noindent\emph{Proof.} By Lemma \ref{basiclimits}, we have that
$\lim_{|z|=|z|_i\rightarrow\infty}f(z)\boldsymbol{d}_1(z,t)=0$ for all $t\in B_1$ and for all $f$ of the form
$f(z)=\sum_{n=1}^\infty c_nu(z)^n$, with $c_n\in\mathbb{C}_\infty$, locally convergent at $u=0$.
But by Lemma \ref{primolemma}, $\sum_{c,d}'\frac{\ol{c}}{(cz+d)^\alpha}$ is equal, for
$|z|_i$ big enough, to the sum of the series
$f(z)=-\widetilde{\pi}^\alpha\sum_{c\in A^+}\ol{c}G_{\alpha}(u_c(z))$ which is of the
form $-\widetilde{\pi}^\alpha u^{\alpha}+o(u^{\alpha})$,
and the lemma follows.\CVD
\begin{Lemme}\label{lemmelimit2} Let $\alpha>0$ be an integer such that $\alpha\equiv1\pmod{q-1}$. For all $t\in \mathbb{C}_\infty$ such that $|t|\leq1$, we have
$$\lim_{|z|_i=|z|\to\infty}\sideset{}{'}\sum_{c,d}\frac{\ol{d}}{(cz+d)^\alpha}=-L(\chi_t,\alpha).$$
\end{Lemme}
\noindent\emph{Proof.} It suffices to
show that
$$\lim_{|z|_i=|z|\to\infty}\sum_{c\neq0}\sum_{d\in A}\frac{\ol{d}}{(cz+d)^{\alpha}}=0.$$
Assuming that $z'\in\Omega$ is such that $|z'|=|z'|_i$, we see that for all $d\in A$, $|z'+d|\geq |z'|_i$.
Now, consider $c\in A\setminus\{0\}$ and $z'=cz$ with $|z|=|z|_i$. Then,
$|cz+d|\geq|cz|_i=|cz|$, so that, for $|t|\leq 1$,
$$
\left|\frac{\chi_t(d)}{(cz+d)^{\alpha}}\right|\leq|cz|^{-\alpha}.$$
This implies that
$$\left|\sum_{c\neq0}\sum_{d\in A}\frac{\ol{d}}{(cz+d)^{\alpha}}\right|\leq|z|^{-\alpha},$$
from which the Lemma follows.\CVD
The next step is to prove the following proposition.
\begin{Proposition}\label{interpretationvk}
For all $\alpha>0$ with $\alpha\equiv 1\pmod{q-1}$, we have $\mathcal{G}_{\alpha,0,0}\in M_{\alpha-1,0}\otimes\mathbb{T}$ and the limit $\lim_{|z|=|z|_i\rightarrow\infty}\mathcal{G}_{\alpha,0,0}=-1$.
Moreover, if $\alpha\leq q(q-1)$,
then:
$$\mathcal{G}_{\alpha,0,0}=-E_{\alpha-1},$$
where $E_{\alpha-1}$ is the normalised Eisenstein series of weight $\alpha-1$ for $\Gamma$.
\end{Proposition}
\noindent\emph{Proof.} Let us write:
$$F_\alpha(z,t):=\boldsymbol{d}_1(z)\sideset{}{'}\sum_{c,d}\frac{\ol{c}}{(cz+d)^\alpha}+\boldsymbol{d}_2(z)\sideset{}{'}\sum_{c,d}\frac{\ol{d}}{(cz+d)^\alpha},$$
a series that converges for all $(z,t)\in\Omega\times\mathbb{C}_\infty$ with $|t|\leq 1$.
By Lemma \ref{interpret}, we have
$$F_\alpha(z,t)=L(\chi_t,\alpha)\mathcal{E}_{\alpha,0}(z,t)\cdot\mathcal{F}(z,t)=L(\chi_t,\alpha)\mathcal{G}_{\alpha,0,0},$$ so that $F_\alpha\in M^!_{\alpha-1,0}\otimes\mathbb{T}$.
By (\ref{uexpd2}), we verify that for all $t$ with $|t|\leq1$,
$\lim_{|z|_i=|z|\to\infty}\boldsymbol{d}_2(z)=1$. From Lemmas \ref{lemmelimit1} and \ref{lemmelimit2},
$$\lim_{|z|_i=|z|\to\infty}F_\alpha(z,t)=-L(\chi_t,\alpha).$$
Therefore, for all $t$ such that $|t|\leq 1$, $F_\alpha(z,t)$ converges to a holomorphic function
on $\Omega$ with a $u$-expansion holomorphic at infinity.
In particular, $F_\alpha(z,t)$ defines a family of modular forms in $M_{\alpha-1,0}\otimes\mathbb{T}$.
Since for the selected values of $\alpha$, $M_{\alpha-1,0}=\langle E_{\alpha-1}\rangle$,
we obtain that $F_\alpha=-L(\chi_t,\alpha)E_{\alpha-1}$.\CVD
\noindent\emph{Proof of Theorem \ref{firsttheorem}.}
Let us consider, for given $\alpha>0$, the form $\mathcal{E}=\mathcal{E}_{\alpha,0}$ and the scalar products $\mathcal{G}_{\alpha,0,k}=(\tau^k\mathcal{E})\cdot\mathcal{F}$.
The general computation of $\mathcal{G}_0=\mathcal{G}_{\alpha,0,0}$
and $\tau\mathcal{G}_{-1}=\tau\mathcal{G}_{\alpha,0,-1}$ as in Proposition \ref{embedding}
is difficult, but for $\alpha=1$ we can apply
Proposition \ref{interpretationvk}. We have $\mathcal{G}_{1,0,0}=-1$ and $\mathcal{G}_{1,0,1}=
\mathcal{G}_{q,0,0}=-g=-E_{q-1}$. Therefore, $\mathcal{G}_{1,0,-1}=0$ by (\ref{taudifference}) and Theorem \ref{firsttheorem} follows.\CVD
\noindent\emph{Proof of Theorem \ref{corollairezeta11}.}
Lemma \ref{primolemma} and Proposition \ref{embedding} imply that
\begin{equation}\label{Lchit}
L(\chi_t,\alpha)=\frac{\widetilde{\pi}^\alpha\sum_{c\in A^+}\ol{c}G_{\alpha}(u_c)}{(\tau s_{{\text{Car}}}) h((\tau\mathcal{G}_{\alpha,0,-1})\boldsymbol{d}_2-\mathcal{G}_{\alpha,0,0}(\tau \boldsymbol{d}_2))},\end{equation} so that
in particular, the numerator and the denominator of the fraction are proportional to each other.
For $\alpha=1$ we can replace, by the above discussion, $\mathcal{G}_{\alpha,0,-1}=0$ and
$\mathcal{G}_{\alpha,0,0}=-1$ and
we get, thanks to the fact that $h=-u+o(u)$ and $\sum_{c\in A^+}\chi_t(c)u_c=u+o(u)$,
$$L(\chi_t,\alpha)=\frac{\widetilde{\pi}\sum_{c\in A^+}\ol{c}u_c}{(\tau (s_{{\text{Car}}}\boldsymbol{d}_2)) h}=-\frac{\widetilde{\pi}}{\tau s_{{\text{Car}}}}+o(1),$$
from which we deduce Theorem \ref{corollairezeta11} and even some additional information, namely, the formula:
$$(\tau \boldsymbol{d}_2) h=-\sum_{c\in A^+}\ol{c}u_c.$$\CVD
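\noindent\emph{Remark.} The last formula can be checked on the leading terms of the $u$-expansions: by (\ref{uexpd2}) we have $\tau\boldsymbol{d}_2=1+o(1)$, so that the left-hand side is $(1+o(1))(-u+o(u))=-u+o(u)$, in agreement with $-\sum_{c\in A^+}\ol{c}u_c=-u+o(u)$.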
\noindent\emph{Proof of Theorem \ref{generalalpha}.} For general $\alpha$, we know by (\ref{Lchit}) that there exists $\lambda_\alpha\in\mathbb{L}$ ($\mathbb{L}$ being the fraction field of $\mathbb{T}$) such that
\begin{equation}\label{secondnewformula}
\sum_{c\in A^+}\ol{c}G_{\alpha}(u_c)=\lambda_\alpha h((\tau\mathcal{G}_{\alpha,0,-1})\boldsymbol{d}_2-\mathcal{G}_{\alpha,0,0}(\tau \boldsymbol{d}_2)).\end{equation} Since
$L(\chi_t,\alpha)=\lambda_\alpha\widetilde{\pi}^\alpha/(\tau s_{{\text{Car}}})$, it remains to show that $\lambda_\alpha$
belongs to $\mathbb{F}_q(t,\theta)$.
Let us write $f$ for the series $\sum_{c\in A^+}\ol{c}G_{\alpha}(u_c)$,
$\phi$ for $\lambda_\alpha h\tau\mathcal{G}_{\alpha,0,-1}$ and $\psi$ for $-\lambda_\alpha h\mathcal{G}_{\alpha,0,0}$, so that
(\ref{secondnewformula}) becomes:
$$f=\phi\boldsymbol{d}_2+\psi\tau \boldsymbol{d}_2.$$ Proposition \ref{embedding} then tells us that $\phi\in M^!_{\alpha+1,1}\otimes\mathbb{L}$ and $\psi\in M^!_{\alpha+q,1}\otimes\mathbb{L}$.
Let $L$ be an algebraically closed
field containing $\mathbb{L}$, hence containing also $\mathbb{F}_q(t,\theta)$.
Since, for any choice of $w,m$, $M^!_{w,m}$ embeds in $\mathbb{C}_\infty((u))$, and since this space has a basis with $u$-expansions defined over $K$, the group $\mathbf{Aut}(L/\mathbb{F}_q(t,\theta))$
acts on $M^!_{w,m}\otimes L$ through the coefficients of the $u$-expansions. Let $\sigma$ be an element of $\mathbf{Aut}(L/\mathbb{F}_q(t,\theta))$ and,
for $\mu\in M^!_{w,m}\otimes L$, let us denote by $\mu^\sigma\in M^!_{w,m}\otimes L$
the form obtained by applying $\sigma$ to every coefficient of the $u$-expansion of $\mu$.
Since $f,\boldsymbol{d}_2$ and $\tau\boldsymbol{d}_2$ are defined over $\mathbb{F}_q[t,\theta]$,
we get
$f=\phi^\sigma\boldsymbol{d}_2+\psi^\sigma\tau \boldsymbol{d}_2$, so that
$$(\phi-\phi^\sigma)\boldsymbol{d}_2+(\psi-\psi^\sigma)\tau\boldsymbol{d}_2=0.$$
By Proposition \ref{embedding}, it is impossible that $\phi^\sigma\neq\phi$ or $\psi^\sigma\neq\psi$. Hence, for all $\sigma$, $\phi^\sigma=\phi$ and $\psi^\sigma=\psi$.
This means that the $u$-expansions of $\phi,\psi$
are both defined over $\mathbb{F}_q(t^{1/q^s},\theta^{1/q^s})$ for some $s\geq 0$. By the fact that
$\mathcal{G}_{\alpha,0,0}=-1+o(1)$ (this follows from the first part of Proposition \ref{interpretationvk}), we get that $\lambda_\alpha\in\mathbb{F}_q(t^{1/q^s},\theta^{1/q^s})$.
We have proven that $\lambda_\alpha=\widetilde{\pi}^{-\alpha}L(\chi_t,\alpha)(t-\theta)s_{{\text{Car}}}\in\mathbb{F}_q(t^{1/q^s},\theta^{1/q^s})$. But we already know that $L(\chi_t,\alpha)\in K_\infty[[t]]$, $s_{{\text{Car}}}\in K^{\text{sep}}[[t]]$,
and $\widetilde{\pi}\in K_\infty^{\text{sep}}$ (the separable closure of $K_\infty$). Therefore, $$\lambda_\alpha\in\mathbb{F}_q(t^{1/q^s},\theta^{1/q^s})\cap K_\infty^{\text{sep}}((t))=\mathbb{F}_q(t,\theta).$$ \CVD
\noindent\emph{Remark.} Proposition \ref{interpretationvk} tells us that $\phi\in M_{\alpha+1,1}\otimes\mathbb{L}$, which is a more precise statement than just saying that
$\phi\in M^!_{\alpha+1,1}\otimes\mathbb{L}$, following Proposition \ref{embedding}. We also have $\psi=(f-\phi\boldsymbol{d}_2)/\tau\boldsymbol{d}_2$. Since $\boldsymbol{d}_2=1+\ldots$ has $u$-expansion
in $\mathbb{F}_q[t,\theta][[u]]$ by the fourth part of Proposition \ref{prarchiv}, the same property holds for
$\tau\boldsymbol{d}_2$ and $\psi$ also belongs to $M_{\alpha+q,1}\otimes\mathbb{L}$.
We observe the
additional information that both $\mathcal{G}_{\alpha,0,0}$ and $\tau\mathcal{G}_{\alpha,0,-1}$
are defined over $\mathbb{F}_q(t^{1/q^s},\theta^{1/q^s})$; this also follows from Proposition \ref{embedding}.
\end{document} |
\begin{document}
\title{Character varieties for real forms}
\begin{abstract}
Let $\Gamma$ be a finitely generated group and $G$ a real form of $\slnc$. We propose a definition for the $G$-character variety of $\Gamma$ as a subset of the $\slnc$-character variety of $\Gamma$. We consider two anti-holomorphic involutions of the $\slnc$-character variety and show that an irreducible representation with character fixed by one of them is conjugate to a representation taking values in a real form of $\slnc$. We study in detail an example: the $\sl3c$, $\su21$ and $\mathrm{SU}(3)$ character varieties of the free product $\z3z3$.
\end{abstract}
\section{Introduction}
Character varieties of finitely generated groups have been widely studied and used, whether from the point of view of algebraic geometry or the one of geometric structures and topology. Given a finitely generated group $\Gamma$, and a complex algebraic reductive group $G$, the $G$-character variety of $\Gamma$ is defined as the GIT quotient
\[\mathcal{X}_G(\Gamma) = \mathrm{Hom}(\Gamma, G) // G .\]
It is an algebraic set that accounts for representations of $\Gamma$ with values in $G$, up to conjugation by an element of $G$.
See the articles of Sikora \cite{sikora_character_2012} and Heusener \cite{heusener_slnc_2016} for a detailed exposition of the construction.
Whenever $\Gamma$ has a geometric meaning, for example when it is the fundamental group of a manifold, the character variety reflects its geometric properties. For $\mathrm{SL}_2(\mathbb{C})$-character varieties, we can cite for example the construction of the $A$-polynomial for knot complements, as detailed in the articles \cite{cooper_plane_1994} of Cooper, Culler, Gillet, Long and Shalen, and \cite{cooper_representation_1998} of Cooper and Long, or the considerations related to volume and the number of cusps of a hyperbolic manifold, as well as ideal points of character varieties treated by Morgan and Shalen in \cite{morgan_shalen}, Culler and Shalen in \cite{culler_shalen} and the book of Shalen \cite{shalen_representations_2002}. On the other hand, $\mathrm{SL}_2(\mathbb{C})$-character varieties of compact surface groups are endowed with the Atiyah-Bott-Goldman symplectic structure (see for example \cite{goldman_complex-symplectic_2004}).
In the construction of character varieties, we consider an algebraic quotient $\mathrm{Hom}(\Gamma, G) // G$ where $G$ acts by conjugation. The existence of this quotient as an algebraic set is ensured by Geometric Invariant Theory (as detailed for example in the article of Sikora \cite{sikora_character_2012}); it is not well defined for a general algebraic group, nor over a field that is not algebraically closed. Besides that, for the compact form $\mathrm{SU}(n)$, the classical quotient $\mathrm{Hom}(\Gamma, \mathrm{SU}(n))/\mathrm{SU}(n)$, taken in the sense of topological spaces, is well defined and Hausdorff. See the article \cite{florentino_topology_2009} of Florentino and Lawton for a detailed exposition. This quotient, which Procesi and Schwarz show in their article \cite{procesi_inequalities_1985} to be a semi-algebraic set, can be embedded in the $\slnc$-character variety; we give a proof of this last fact in Section \ref{section_chi_sun}. Similar quotients for other groups have been studied by Parreau in \cite{parreau_espaces_2011}, in which she studies completely reducible representations, and in \cite{parreau_compactification_2012}, where she compactifies the space of conjugacy classes of semi-simple representations taking values in noncompact semisimple connected real Lie groups with finite center.
It is then natural to try to construct an object similar to a character variety for groups $G$ which are not in the cases stated above, for example real forms of $\slnc$. For the real forms of $\mathrm{SL}_2(\mathbb{C})$, Goldman studies, in his article \cite{goldman_topological_1988}, the real points of the character variety of the rank two free group $F_2$ and shows that they correspond to representations taking values either in $\mathrm{SU}(2)$ or $\mathrm{SL}_2(\mathbb{R})$, which are the real forms of $\mathrm{SL}_2(\mathbb{C})$.
Inspired by this last approach, we will consider $\slnc$-character varieties and will try to identify the points coming from a representation taking values in a real form of $\slnc$.
For a finitely generated group $\Gamma$, we introduce two involutions $\Phi_1$ and $\Phi_2$ of the $\slnc$-character variety of $\Gamma$ induced respectively by the involutions $A \mapsto \con{A}$ and $A \mapsto \!^{t}\con{A}^{-1}$ of $\slnc$. We show the following theorem:
\begin{thm}\label{main_thm}
Let $x$ be a point of the $\slnc$-character variety of $\Gamma$ corresponding to an irreducible representation $\rho$. If $x$ is a fixed point for $\Phi_1$, then $\rho$ is conjugate to a representation taking values in $\slnr$ or $\mathrm{SL}_{n/2}(\mathbb{H})$. If $x$ is a fixed point for $\Phi_2$, then $\rho$ is conjugate to a representation taking values in a unitary group $\mathrm{SU}(p,q)$ with $p+q = n$.
\end{thm}
In the second section of this article, we recall the definition of $\slnc$-character varieties with some generalities and examples that will be studied further. In the third section, we recall some generalities on real forms of $\slnc$, we propose a definition for "character varieties for a real form" as a subset of the $\slnc$-character variety, and we prove Theorem \ref{main_thm} by combining Propositions \ref{prop_invol_traces} and \ref{prop_traces_reelles}, in order to locate those character varieties among the fixed points of the involutions $\Phi_1$ and $\Phi_2$. Finally, in Section 4, we study in detail the $\mathrm{SU}(3)$ and $\su21$-character varieties of the free product $\z3z3$. This particular character variety has an interesting geometric meaning since it contains the holonomy representations of two \CR {} uniformizations: the one for the Figure Eight knot complement given by Deraux and Falbel in \cite{falbel} and the one for the Whitehead link complement given by Parker and Will in \cite{ParkerWill}.
\paragraph{Acknowledgements :} The author would like to thank his advisors Antonin Guilloux and Martin Deraux, as well as Pierre Will, Elisha Falbel and Julien Marché for many discussions about this article. He would also like to thank Maxime Wolff, Joan Porti, Michael Heusener, Cyril Demarche and the PhD students of IMJ-PRG for helping him to clarify many points of the paper.
\section{$\mathrm{SL}_n(\mathbb{C})$-character varieties}
Let $\Gamma$ be a finitely generated group and $n$ a positive integer. We are going to consider representations of $\Gamma$, up to conjugation, taking values in $\slnc$ and in its real forms. In order to study these representations, the most suitable object to consider is the $\slnc$-character variety.
\subsection{Definition of character varieties}
We give here a definition of the $\slnc$-character variety and recall some useful properties.
Character varieties of finitely generated groups have been widely studied, for example in the case of $\mathrm{SL}_2(\mathbb{C})$ by Culler and Shalen in \cite{culler_shalen}. For a detailed exposition of the general results that we state, we refer to the article \cite{sikora_character_2012} of Sikora or the first sections of the article \cite{heusener_slnc_2016} of Heusener.
\begin{defn}
The $\slnc$-representation variety of $\Gamma$ is the set $\mathrm{Hom}(\Gamma , \mathrm{SL}_n(\mathbb{C}))$.
\end{defn}
\begin{rem}
$\mathrm{Hom}(\Gamma , \slnc )$ is an algebraic set, not necessarily irreducible. If $\Gamma$ is generated by $s_1, \dots , s_k$, an element of $\mathrm{Hom}(\Gamma , \slnc )$ is given by $(S_1, \dots , S_k) \in (\mathcal{M}_n(\mathbb{C}))^k \simeq \mathbb{C}^{kn^2}$ satisfying the equations $\det(S_i) = 1$ for $1 \leq i \leq k$ and, for each relation in the group, the $n^2$ equations in the coefficients associated to an equality $S_{i_1}^{\alpha_1} \cdots S_{i_l}^{\alpha_l} = \mathrm{Id}$. Since all these equations are polynomial, they define an algebraic set, possibly with many irreducible components. Finally, a different choice of generators yields an isomorphic algebraic set.
\end{rem}
\begin{defn}
The group $\slnc$ acts by conjugation on $\mathrm{Hom}(\Gamma , \slnc )$. The $\slnc$-character variety is the algebraic quotient $\mathrm{Hom}(\Gamma , \slnc ) // \slnc$ by this action. We denote this algebraic set by $\mathcal{X}_{\slnc}(\Gamma)$.
\end{defn}
\begin{rem}\label{git+fonctoriel}
The existence of this quotient is guaranteed by the Geometric Invariant Theory (GIT), and it is due to the fact that the group $\slnc$ is reductive. By construction, the ring of functions of $\mathcal{X}_{\slnc}(\Gamma)$ is exactly the ring of invariant functions of $\mathrm{Hom}(\Gamma , \slnc )$.
Moreover, the quotient map is functorial. In particular, if we have a surjective homomorphism $\tilde{\Gamma} \rightarrow \Gamma$, we obtain an injection $\mathrm{Hom}(\Gamma, \slnc) \hookrightarrow \mathrm{Hom}(\tilde{\Gamma}, \slnc)$, which induces an injection $\mathcal{X}_{\slnc}(\Gamma) \hookrightarrow \mathcal{X}_{\slnc}(\tilde{\Gamma})$. See the article \cite{heusener_slnc_2016} of Heusener for a detailed exposition.
\end{rem}
The following result, due to Procesi (Theorems 1.3 and 3.3 in \cite{procesi_invariant_1976}), tells us that it is enough to understand the trace functions in order to understand the whole ring of invariant functions, and that this ring of functions is generated by finitely many trace functions.
\begin{thm}
The ring of invariant functions of $\mathrm{Hom}(\Gamma , \slnc )$ is generated by the trace functions $\tau_\gamma : \rho \mapsto \mathrm{tr} (\rho(\gamma))$, for $\gamma$ in a finite subset $\{ \gamma_1, \dots , \gamma_k \}$ of $\Gamma$. Consequently, $\mathcal{X}_{\slnc}(\Gamma)$ is isomorphic, as an algebraic set, to the image of $(\tau_{\gamma_1}, \dots ,\tau_{\gamma_k}) : \mathrm{Hom}(\Gamma , \slnc ) \rightarrow \mathbb{C}^k$.
\end{thm}
Character varieties are strongly related to characters of representations. Let us briefly recall their definition:
\begin{defn}
Let $\rho \in \mathrm{Hom}(\Gamma , \slnc )$. The \emph{character of $\rho$} is the function $\chi_\rho : \Gamma \rightarrow \mathbb{C}$ given by $\chi_\rho(g) = \mathrm{tr}(\rho(g))$.
\end{defn}
\begin{rem}
We have a projection map $\mathrm{Hom}(\Gamma , \slnc ) \rightarrow \mathcal{X}_{\slnc}(\Gamma)$. Two representations $\rho , \rho' \in \mathrm{Hom}(\Gamma , \slnc )$ have the same image if and only if $\chi_\rho = \chi_{\rho'}$. This explains the name "character variety" for $\mathcal{X}_{\slnc}(\Gamma)$. We will sometimes abusively identify the image of a representation $\rho$ in the character variety with its character $\chi_\rho$.
\end{rem}
Semi-simple representations are representations constructed as direct sums of irreducible representations. We will use the following statement when dealing with irreducible representations.
\begin{thm} \label{thm_semisimple_char} (Theorem 1.28 of \cite{lubotzky_varieties_1985})
Let $\rho , \rho' \in \mathrm{Hom}(\Gamma , \slnc )$ be two semi-simple representations. Then $\chi_\rho = \chi_{\rho'}$ if and only if $\rho$ and $\rho'$ are conjugate.
\end{thm}
\subsection{Some $\mathrm{SL}_2(\mathbb{C})$ and $\sl3c$-character varieties}
We consider here two $\sl3c$-character varieties that we will study further: the character variety of the free group of rank two $F_2$ and the one of the fundamental group of the Figure Eight knot complement. We will also recall a classic result describing the $\mathrm{SL}_2(\mathbb{C})$-character variety of $F_2$.
\subsubsection{The free group of rank 2}
We denote here by $s$ and $t$ two generators of the free group of rank two $F_2$, so $F_2 = \langle s,t \rangle$. We will use the character varieties $\mathcal{X}_{\mathrm{SL}_2(\mathbb{C})} (F_2)$ and $\mathcal{X}_{\mathrm{\sl3c}} (F_2)$.
Consider first the following theorem, that describes the $\mathrm{SL}_2(\mathbb{C})$-character variety of $F_2$. A detailed proof can be found in the article of Goldman \cite{goldman_exposition_2004}.
\begin{thm}[Fricke-Klein-Vogt]
The character variety $\mathcal{X}_{\mathrm{SL}_2(\mathbb{C})} (F_2)$ is isomorphic to $\mathbb{C}^3$, which is the image of $\mathrm{Hom}(F_2 , \mathrm{SL}_2(\mathbb{C}) )$ by the trace functions of the elements $s,t$ and $st$.
\end{thm}
\begin{rem}
Thanks to the theorem above, we know that it is possible to write the trace of the image of $st^{-1}$ in terms of the traces of the images of $s,t$ and $st$ for any representation $\rho : F_2 \rightarrow \mathrm{SL}_2(\mathbb{C})$. Denoting by $S$ and $T$ the respective images of $s$ and $t$, the traces of the four elements are related by the \emph{trace equation}:
\[\mathrm{tr}(S) \mathrm{tr}(T) = \mathrm{tr}(ST) + \mathrm{tr}(ST^{-1}). \]
\end{rem}
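The trace equation follows from the Cayley-Hamilton identity $T + T^{-1} = \mathrm{tr}(T)\,\mathrm{Id}$, valid for $T \in \mathrm{SL}_2(\mathbb{C})$: multiplying by $S$ and taking traces gives the relation. As a numerical sanity check (not part of the paper; the helper names are ours), the following sketch samples random matrices in $\mathrm{SL}_2(\mathbb{C})$ and verifies the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl2():
    # random complex 2x2 matrix rescaled to determinant 1
    while True:
        A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
        d = np.linalg.det(A)
        if abs(d) > 1e-8:
            return A / np.sqrt(d)

for _ in range(100):
    S, T = random_sl2(), random_sl2()
    lhs = np.trace(S) * np.trace(T)
    rhs = np.trace(S @ T) + np.trace(S @ np.linalg.inv(T))
    assert abs(lhs - rhs) < 1e-6  # tr(S)tr(T) = tr(ST) + tr(ST^{-1})
```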
On the other hand, in his article \cite{lawton_generators_2007}, Lawton describes the $\sl3c$-character variety of $F_2$. He obtains the following result:
\begin{thm}
$\mathcal{X}_{\mathrm{\sl3c}} (F_2)$ is isomorphic to the algebraic set $V$ of $\mathbb{C}^9$, which is the image of $\mathrm{Hom}(F_2 , \sl3c )$ by the trace functions of the elements $s,t,st,st^{-1}$, of their inverses $s^{-1},t^{-1},t^{-1}s^{-1},ts^{-1}$, and of the commutator $[s,t]$.
Furthermore, there exist two polynomials $P,Q \in \mathbb{C}[X_1 , \dots , X_8]$ such that $(x_1, \dots , x_9) \in V$ if and only if $x_9^2 - Q(x_1 , \dots , x_8) x_9 + P(x_1, \dots , x_8) = 0$.
\end{thm}
\begin{rem}
The polynomials $P$ and $Q$ are explicit: we can find them in the article of Lawton \cite{lawton_generators_2007} or in the survey of Will \cite{will_generateurs}. By denoting $\Delta = Q^2 - 4P$, the algebraic set $V$ is a double cover of $\mathbb{C}^8$, ramified over the zero level set of $\Delta$. Furthermore, the two roots of $X_9^2 - Q(x_1 , \dots , x_8) X_9 + P(x_1, \dots , x_8)$, as a polynomial in $X_9$, are given by the traces of the commutators $[s,t]$ and $[t,s] = [s,t]^{-1}$.
\end{rem}
\subsubsection{The Figure Eight knot complement}\label{sous_sect_char_sl3c_m8}
We state briefly some results about the $\sl3c$-character variety of the Figure Eight knot complement. It is one of the very few $\sl3c$-character varieties of three-manifolds studied exhaustively. We will come back to it in Subsection \ref{sous_sect_descr_chi_z3z3}.
The results were obtained independently by Falbel, Guilloux, Koseleff, Rouillier and Thistlethwaite in \cite{character_sl3c}, and by Heusener, Muñoz and Porti in \cite{heusener_sl3c-character_2015}. Denoting by $\Gamma_8$ the fundamental group of the Figure Eight knot complement, they describe the character variety $\mathcal{X}_{\mathrm{\sl3c}} (\Gamma_8)$.
Theorem 1.2 of \cite{heusener_sl3c-character_2015} can be stated in the following way:
\begin{thm}
The character variety $\mathcal{X}_{\mathrm{\sl3c}} (\Gamma_8)$ has five irreducible components: $X_{\mathrm{TR}}$, $X_{\mathrm{PR}}$, $R_1$, $R_2$, $R_3$. Furthermore:
\begin{enumerate}
\item The component $X_{\mathrm{TR}}$ only contains characters of completely reducible representations.
\item The component $X_{\mathrm{PR}}$ only contains characters of reducible representations.
\item The components $R_1$, $R_2$, $R_3$ contain the characters of irreducible representations.
\end{enumerate}
\end{thm}
\begin{rem}
We take here the notation $R_1$, $R_2$, $R_3$ given in \cite{character_sl3c}. These components are denoted respectively by $V_0$, $V_1$ and $V_2$ in \cite{heusener_sl3c-character_2015}.
\end{rem}
\begin{rem}
\begin{itemize}
\item The component $R_1$ contains the class of the geometric representation, obtained as the holonomy representation $\Gamma_8 \rightarrow \mathrm{PSL}_2(\mathbb{C})$ of the complete hyperbolic structure of the Figure Eight knot followed by the irreducible representation $\mathrm{PSL}_2(\mathbb{C}) \rightarrow \sl3c$.
\item The component $R_3$ is obtained from $R_2$ by a pre-composition with an outer automorphism of $\Gamma_8$. These components contain the representations $\rho_2$ and $\rho_3$ with values in $\su21$ given by Falbel in \cite{falbel_spherical}. The representation $\rho_2$ is the holonomy representation of the \CR {} uniformization of the Figure Eight knot complement given by Deraux and Falbel in \cite{falbel}.
\end{itemize}
\end{rem}
Besides determining the irreducible components $R_1$, $R_2$ and $R_3$, Falbel, Guilloux, Koseleff, Rouillier and Thistlethwaite give parameters, in Section 5 of their article \cite{character_sl3c}, for explicit representations corresponding to the points of the character variety.
\section{Character varieties for real forms}
We are going to be interested in representations of a finitely generated group $\Gamma$ taking values in some real forms of $\slnc$ up to conjugacy. We will focus on the real forms $\mathrm{SU}(3)$ and $\su21$ of $\sl3c$ in the detailed example that we will consider further. In order to study the representations up to conjugacy, we will consider the $\slnc$-character variety and will try to figure out the locus of representations taking values in real forms. When $n=2$, the problem was treated by Morgan and Shalen in \cite{morgan_shalen} and by Goldman in his article \cite{goldman_topological_1988}.
\subsection{Real forms and definition}
Let us first recall the classification of the real forms of $\slnc$. For a detailed exposition of the results that we state, see the book of Helgason \cite{helgason_geometric_2008}. Recall that a real form of a complex Lie group $G_\mathbb{C}$ is a real Lie group $G_\mathbb{R}$ such that $G_\mathbb{C} = \mathbb{C} \otimes_\mathbb{R} G_\mathbb{R}$.
The real forms of $\slnc$ belong to three families: the real groups $\slnr$, the unitary groups $\mathrm{SU}(p,q)$ and the quaternion groups $\mathrm{SL}_{n/2}(\mathbb{H})$. We give the definitions of the two last families in order to fix the notation.
\begin{defn}
Let $n \in \mathbb{N}$ and $p,q \in \mathbb{N}$ such that $n = p+q$. Denote by $I_{p,q}$ the block matrix:
\[I_{p,q} = \begin{pmatrix}
I_{p} & 0 \\
0 & -I_{q}
\end{pmatrix}\]
We define the group $\mathrm{SU}(p,q)$ as follows:
\[\mathrm{SU}(p,q) = \{M \in \slnc \mid {}^t\!\con{M} I_{p,q} M = I_{p,q} \} .\]
It is a real Lie group, which is a real form of $\slnc$.
\end{defn}
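Membership in $\mathrm{SU}(p,q)$ can be tested directly from the defining equation ${}^t\con{M} I_{p,q} M = I_{p,q}$. A minimal numerical illustration (numpy; the helper names are ours, not from the paper), using a hyperbolic rotation that preserves a form of signature $(1,1)$:

```python
import numpy as np

def I_pq(p, q):
    # the diagonal matrix of the Hermitian form of signature (p, q)
    return np.diag([1.0] * p + [-1.0] * q)

def in_su_pq(M, p, q, tol=1e-8):
    J = I_pq(p, q)
    return (np.allclose(M.conj().T @ J @ M, J, atol=tol)
            and abs(np.linalg.det(M) - 1) < tol)

# a hyperbolic rotation: preserves x1*y1 - x2*y2, with determinant cosh^2 - sinh^2 = 1
t = 0.7
M = np.array([[np.cosh(t), np.sinh(t)], [np.sinh(t), np.cosh(t)]])
assert in_su_pq(M, 1, 1)
```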
\begin{defn}
Let $n \in \mathbb{N}$. Denote by $J_{2n}$ the block matrix:
\[J_{2n} = \begin{pmatrix}
0 & I_n \\
-I_n & 0
\end{pmatrix}\]
We define the group $\mathrm{SL}_{n}(\mathbb{H})$, also denoted $\mathrm{SU}^*(n)$, as follows:
\[\mathrm{SL}_{n}(\mathbb{H}) = \{M \in \mathrm{SL}_{2n}(\mathbb{C}) \mid \con{M}^{-1} J_{2n} M = J_{2n} \} .\]
It is a real Lie group, which is a real form of $\mathrm{SL}_{2n}(\mathbb{C})$.
\end{defn}
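For $n = 1$ the defining equation recovers the unit quaternions: a quaternion $a+bi+cj+dk$ of norm $1$ embeds in $\mathrm{SL}_2(\mathbb{C})$ as $\begin{pmatrix} a+bi & c+di \\ -c+di & a-bi \end{pmatrix}$. A hedged numerical check of this (numpy; helper names ours):

```python
import numpy as np

J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])

def in_sl1_H(M, tol=1e-8):
    # M in SL_1(H) = SU*(1): conj(M)^{-1} J_2 M = J_2 and det M = 1
    return (np.allclose(np.linalg.inv(M.conj()) @ J2 @ M, J2, atol=tol)
            and abs(np.linalg.det(M) - 1) < tol)

# the unit quaternion (1 + i + j + k)/2 embedded in SL_2(C)
a, b, c, d = 0.5, 0.5, 0.5, 0.5
M = np.array([[a + 1j * b, c + 1j * d], [-c + 1j * d, a - 1j * b]])
assert in_sl1_H(M)  # det M = a^2 + b^2 + c^2 + d^2 = 1
```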
In order to study representations taking values in real forms, we consider the following definition of character variety for a real form:
\begin{defn}
Let $G$ be a real form of $\slnc$. Let $\Gamma$ be a finitely generated group. We define the $G$-character variety of $\Gamma$ to be the image of the map $\mathrm{Hom}(\Gamma , G) \rightarrow \mathcal{X}_{\mathrm{SL}_n(\mathbb{C})}(\Gamma)$. In this way,
\[\mathcal{X}_G(\Gamma) = \{ \chi \in \mathcal{X}_{\mathrm{SL}_n(\mathbb{C})}(\Gamma) \mid \exists \rho \in \mathrm{Hom}(\Gamma, G) , \chi = \chi_\rho \}.\]
\end{defn}
\begin{rem}
The set $\mathcal{X}_G(\Gamma)$ given by this definition is a subset of a complex algebraic set, but it is, a priori, neither a real nor a complex algebraic set. It is the image of a real algebraic set by a polynomial map, and hence a semi-algebraic set. The definition might seem strange when compared to the one for the $\slnc$-character variety. This is due to the fact that the real forms of $\slnc$ are real algebraic groups but not complex algebraic groups, and that the algebraic construction and the GIT quotient do not work properly when the field is not algebraically closed. Nevertheless, when considering the compact real form $\mathrm{SU}(n)$, it is possible to define an $\mathrm{SU}(n)$-character variety by considering a topological quotient. We will show, in the next section, that this topological quotient is homeomorphic to the $\mathrm{SU}(n)$-character variety as defined above.
\end{rem}
\subsection{The character variety $\mathcal{X}_{\mathrm{SU}(n)}(\Gamma)$ as a topological quotient} \label{section_chi_sun}
Let $n$ be a positive integer. We are going to show that the topological quotient $\mathrm{Hom}(\Gamma, \mathrm{SU}(n))/\mathrm{SU}(n)$, where $\mathrm{SU}(n)$ acts by conjugation, is naturally homeomorphic to the character variety $\mathcal{X}_{\mathrm{SU}(n)}(\Gamma)$.
Let us notice first that a map between these two sets is well defined.
Indeed,
since two representations taking values in $\mathrm{SU}(n)$ which are conjugate in $\mathrm{SU}(n)$ are also conjugate in $\slnc$, the natural map $\mathrm{Hom}(\Gamma, \mathrm{SU}(n)) \rightarrow \mathcal{X}_{\slnc}(\Gamma)$ factors through the quotient $\mathrm{Hom}(\Gamma, \mathrm{SU}(n)) / \mathrm{SU}(n)$.
We are going to show the following proposition:
\begin{prop} \label{prop_chi_sun}
The map $\mathrm{Hom}(\Gamma,\mathrm{SU}(n)) / \mathrm{SU}(n) \rightarrow \mathcal{X}_{\mathrm{SU}(n)}(\Gamma)$ is a homeomorphism.
\end{prop}
\begin{proof}
We consider $\mathcal{X}_{\mathrm{SU}(n)}(\Gamma)$ as a subset of $\mathcal{X}_{\slnc}(\Gamma) \subset \mathbb{C}^m$, endowed with the usual topology of $\mathbb{C}^m$. By definition, we know that the map $\mathrm{Hom}(\Gamma,\mathrm{SU}(n)) / \mathrm{SU}(n) \rightarrow \mathcal{X}_{\mathrm{SU}(n)}(\Gamma)$ is continuous and surjective. Since a continuous bijection from a compact space to a Hausdorff space is a homeomorphism, it is enough to show that the map is injective. We want to show that if $\rho_1, \rho_2 \in \mathrm{Hom}(\Gamma, \mathrm{SU}(n))$ are representations such that $\chi_{\rho_1} = \chi_{\rho_2}$, then $\rho_1$ and $\rho_2$ are conjugate in $\mathrm{SU}(n)$. This is the statement of Lemma \ref{lemme_chi_sun}, which we prove below.
\end{proof}
In order to prove Proposition \ref{prop_chi_sun}, we first show the following lemma, which seems standard but for which we did not find a reference.
\begin{lemme}\label{lemme_con_slnc_con_sun}
Let $\rho_1,\rho_2 \in \mathrm{Hom}(\Gamma, \mathrm{SU}(n))$. If they are conjugate in $\slnc$, then they are conjugate in $\mathrm{SU}(n)$.
\end{lemme}
\begin{proof}
Let us deal first with the irreducible case, and treat the general case after that.\\
\emph{First case: The representations $\rho_1$ and $\rho_2$ are irreducible.}
Let $G\in \slnc$ be such that $\rho_2 = G\rho_1G^{-1}$.
Let $J$ be the matrix of the Hermitian form $\sum_{i=1}^{n} \con{x_i}y_i$, which is preserved by the images of $\rho_1$ and $\rho_2$. Since $\rho_2 = G\rho_1G^{-1}$, we know that the image of $\rho_1$ also preserves the form $\!^{t}\con{G}JG$. But $\rho_1$ is irreducible: its image then preserves a unique Hermitian form up to a real scalar. We deduce that $J = \lambda \!^{t}\con{G}JG$ with $\lambda \in \mathbb{R}$.
Since $J$ is positive definite, we have $\lambda > 0$, and, by replacing $G$ by $\sqrt{\lambda} G$, we get $J = \!^{t}\con{G}JG$, i.e. $G \in \mathrm{SU}(n)$.
\emph{General case.}
Recall first that every representation $\rho \in \mathrm{Hom}(\Gamma, \mathrm{SU}(n))$ is semi-simple: the orthogonal complement of a stable subspace is again stable, so every stable subspace admits a stable complement.
Let $G \in \slnc$ be such that $\rho_2 = G\rho_1G^{-1}$. Since $\rho_1$ takes values in $\mathrm{SU}(n)$, we know that it is semi-simple. It can then be written as a direct sum of irreducible representations: $\rho_1 = \rho_1^{(1)} \oplus \dots \oplus \rho_1^{(m)}$, acting on stable subspaces $E_1, \dots ,E_m$ of $\mathbb{C}^n$, in such a way that $\rho_1^{(i)}$ acts irreducibly on $E_i$ and the $E_i$ are pairwise orthogonal. The same applies to $\rho_2$, which admits as stable subspaces $GE_1, \dots , GE_m$. The direct sum $GE_1 \oplus \dots \oplus GE_m$ is then orthogonal. Hence, there exists $U_0 \in \mathrm{SU}(n)$ such that, for all $i \in \{1,\dots , m \}$, we have $U_0G E_i = E_i$. After possibly conjugating $\rho_2$ by $U_0$, we may suppose that for all $i \in \{1,\dots , m \}$ we have $GE_i = E_i$. The linear map $G$ can then be written $G_1 \oplus \dots \oplus G_m$, acting on $E_1 \oplus \cdots \oplus E_m$. For each $i\in \{1,\dots , m \}$, since $\rho_1^{(i)}$ and $G_i\rho_1^{(i)}G_i^{-1}$ are unitary and irreducible on $E_i$, they are conjugate in $\mathrm{SU}(E_i)$ by the first case. We can then replace $G_i$ by $G'_i \in \mathrm{SU}(E_i)$. Setting $G' = G'_1 \oplus \cdots \oplus G'_m$, we have $G' \in \mathrm{SU}(n)$ and $\rho_2 = G'\rho_1G'^{-1}$.
\end{proof}
Thanks to Lemma \ref{lemme_con_slnc_con_sun}, we can show the following lemma, which finishes the proof of Proposition \ref{prop_chi_sun} and ensures that $\mathrm{Hom}(\Gamma, \mathrm{SU}(n))/\mathrm{SU}(n)$ and $\mathcal{X}_{\mathrm{SU}(n)}(\Gamma)$ are homeomorphic.
\begin{lemme}\label{lemme_chi_sun}
Let $\rho_1, \rho_2 \in \mathrm{Hom}(\Gamma, \mathrm{SU}(n))$ such that $\chi_{\rho_1} = \chi_{\rho_2}$. Then $\rho_1$ and $\rho_2$ are conjugate in $\mathrm{SU}(n)$.
\end{lemme}
\begin{proof}
Since $\rho_1$ and $\rho_2$ take values in $\mathrm{SU}(n)$, they are semi-simple. By Theorem \ref{thm_semisimple_char}, since $\chi_{\rho_1} = \chi_{\rho_2}$ and $\rho_1$ and $\rho_2$ are semi-simple, we know that $\rho_1$ and $\rho_2$ are conjugate in $\slnc$. We deduce, thanks to Lemma \ref{lemme_con_slnc_con_sun}, that $\rho_1$ and $\rho_2$ are conjugate in $\mathrm{SU}(n)$.
\end{proof}
\subsection{ Anti-holomorphic involutions and irreducible representations}
In this section, we find the locus of character varieties for the real forms of $\slnc$ inside the $\slnc$-character variety $\mathcal{X}_{\slnc}(\Gamma)$. Before focusing on irreducible representations, we show the following proposition, which ensures that two character varieties for two different unitary real forms intersect only in points which correspond to reducible representations.
\begin{prop} \label{prop_inter_charvar_reelle}
Let $n \in \mathbb{N}$, and $p ,p',q,q' \in \mathbb{N}$ such that $p+q = p'+q' = n$, with $p \neq p'$ and $p \neq q'$. Let $\rho \in \mathrm{Hom}(\Gamma, \slnc)$ be such that $\chi_\rho \in \mathcal{X}_{\mathrm{SU}(p,q)}(\Gamma) \cap \mathcal{X}_{\mathrm{SU}(p',q')}(\Gamma)$. Then $\rho$ is a reducible representation.
\end{prop}
\begin{proof}
Suppose that $\rho$ is irreducible. It is then, up to conjugacy, the only representation with character $\chi_\rho$. Since $\chi_\rho \in \mathcal{X}_{\mathrm{SU}(p,q)}(\Gamma)$, we can suppose that $\rho$ takes values in $\mathrm{SU}(p,q)$. Then, for every $g \in \Gamma$, we have ${}^t\!\con{\rho(g)} I_{p,q} \rho(g) = I_{p,q}$. On the other hand, since $\chi_\rho \in \mathcal{X}_{\mathrm{SU}(p',q')}(\Gamma)$, the representation $\rho$ is conjugate to a representation taking values in $\mathrm{SU}(p',q')$. Hence there exists a matrix $I'_{p',q'}$, conjugate to $I_{p',q'}$, such that, for every $g \in \Gamma$, we have ${}^t\!\con{\rho(g)} I'_{p',q'} \rho(g) = I'_{p',q'}$. We deduce that, for every $g \in \Gamma$,
\[I'_{p',q'} \rho(g)(I'_{p',q'})^{-1} = {}^t\!\con{\rho(g)}^{-1} = I_{p,q} \rho(g)( I_{p,q})^{-1} .\]
The matrix $(I_{p,q})^{-1} I'_{p',q'}$ commutes with the whole image of $\rho$. Since $\rho$ is irreducible, it is a scalar matrix. We deduce that $I_{p,q}$ has either the same signature as $I_{p',q'}$, or the opposite one. Hence $(p',q')=(p,q)$ or $(p',q')=(q,p)$.
\end{proof}
From now on, we will restrict ourselves to irreducible representations and will consider two anti-holomorphic involutions of the representation variety, which induce anti-holomorphic involutions on the character variety.
We will denote by $\phi_1$ and $\phi_2$ the two anti-holomorphic automorphisms of the group $\slnc$ given by $\phi_1(A) = \con{A}$ and $\phi_2(A) = \con{{}^t\!A^{-1}}$. These two involutions induce anti-holomorphic involutions $\Phi_1$ and $\Phi_2$ on the representation variety $\mathrm{Hom}(\Gamma, \slnc)$, in such a way that, for a representation $\rho$, we have $\Phi_1(\rho) = \phi_1 \circ \rho$ and $\Phi_2(\rho) = \phi_2 \circ \rho$.
\begin{prop}
The involutions $\Phi_1$ and $\Phi_2$ induce as well anti-holomorphic involutions on the character variety $\mathcal{X}_{\mathrm{\slnc}} (\Gamma)$.
\end{prop}
\begin{proof}
For $\rho \in \mathrm{Hom}(\Gamma, \slnc)$ and $g \in \Gamma$ we have $\mathrm{tr}(\Phi_1(\rho)(g)) = \con{ \mathrm{tr}(\rho(g)) }$ and $\mathrm{tr}(\Phi_2(\rho)(g)) = \con{ \mathrm{tr}(\rho(g^{-1})) }$. Hence $\chi_\rho(\Phi_1(g)) = \con{\chi_\rho(g)}$ and $\chi_\rho(\Phi_2(g)) = \con{\chi_\rho(g^{-1})}$. Let $g_1,\dots , g_m \in \Gamma$ such that $\mathcal{X}_{\mathrm{\slnc}} (\Gamma)$ is isomorphic to the image of $\psi : \mathrm{Hom}(\Gamma,\slnc) \rightarrow \mathbb{C}^{2m}$ given by
\[\psi(\rho) = ( \chi_\rho (g_1), \dots , \chi_\rho(g_m), \chi_\rho (g_1^{-1}) , \dots , \chi_\rho (g_m^{-1}) ).\]
If $\psi(\rho) = (z_1, \dots , z_m , w_1 , \dots , w_m) \in \mathbb{C}^{2m}$, then
$ \psi(\Phi_1(\rho)) = (\con{z_1}, \dots , \con{z_m} , \con{w_1} , \dots , \con{w_m}) $
and
$ \psi(\Phi_2(\rho)) = (\con{w_1} , \dots , \con{w_m} , \con{z_1}, \dots , \con{z_m})$.
The anti-holomorphic involutions
\[(z_1, \dots , z_m , w_1 , \dots , w_m) \mapsto (\con{z_1}, \dots , \con{z_m} , \con{w_1} , \dots , \con{w_m})\]
and
\[(z_1, \dots , z_m , w_1 , \dots , w_m) \mapsto (\con{w_1} , \dots , \con{w_m} , \con{z_1}, \dots , \con{z_m})\]
are hence well defined on $\mathcal{X}_{\slnc} (\Gamma)$ and induced by $\Phi_1$ and $\Phi_2$ respectively.
\end{proof}
We will still denote these involutions on $\mathcal{X}_{\slnc} (\Gamma)$ by $\Phi_1$ and $\Phi_2$.
We will denote by $\mathrm{Fix}(\Phi_1)$ and $\mathrm{Fix}(\Phi_2)$ the sets of points of $\mathcal{X}_{\slnc} (\Gamma)$ fixed by $\Phi_1$ and $\Phi_2$ respectively.
The following remark ensures that character varieties for real forms are contained in the set of fixed points of $\Phi_1$ and $\Phi_2$ in $\mathcal{X}_{\slnc}(\Gamma)$:
\begin{rem}
If $\rho \in \mathrm{Hom}(\Gamma, \slnc)$ is conjugate to a representation taking values in $\mathrm{SL}_n(\mathbb{R})$, then $\chi_\rho \in \mathrm{Fix}(\Phi_1)$.
Furthermore, if $\rho$ is conjugate to a representation taking values in $\mathrm{SL}_{n/2}(\mathbb{H})$, then $\chi_\rho \in \mathrm{Fix}(\Phi_1)$, since a matrix $A \in \mathrm{SL}_{n/2}(\mathbb{H})$ is conjugate to $\con{A}$.
On the other hand, if $\rho$ is conjugate to a representation taking values in $\mathrm{SU}(p,q)$, then $\chi_\rho \in \mathrm{Fix}(\Phi_2)$. Indeed, if $A$ is a unitary matrix, then it is conjugate to $\con{{}^{t}\!A^{-1}}$.
In this way, $\mathcal{X}_{\slnr} (\Gamma) \subset \mathrm{Fix}(\Phi_1)$, $\mathcal{X}_{\mathrm{SL}_{n/2}(\mathbb{H})} (\Gamma) \subset \mathrm{Fix}(\Phi_1)$ and $\mathcal{X}_{\mathrm{SU}(p,q)} (\Gamma) \subset \mathrm{Fix}(\Phi_2)$.
\end{rem}
From now on, we work in the reverse direction. We will show that an irreducible representation with character in $\mathrm{Fix}(\Phi_1)$ or $\mathrm{Fix}(\Phi_2)$ is conjugate to a representation taking values in a real form of $\slnc$. Let us begin with the case of $\mathrm{Fix}(\Phi_2)$, which corresponds to unitary groups. The result is given in the following proposition:
\begin{prop}\label{prop_invol_traces}
Let $\rho \in \mathrm{Hom}(\Gamma, \slnc)$ be an irreducible representation such that $\chi_\rho \in \mathrm{Fix}(\Phi_2)$. Then there exist $p,q \in \mathbb{N}$ with $n=p+q$ such that $\rho$ is conjugate to a representation taking values in $\mathrm{SU}(p,q)$.
\end{prop}
\begin{proof}
We know that $\chi_\rho \in \mathrm{Fix}(\Phi_2)$, so the representations $\rho$ and $\Phi_2(\rho)$ have the same character. Since $\rho$ is irreducible, $\rho$ and $\Phi_2(\rho)$ are conjugate. Then there exists $P \in \mathrm{GL}_n(\mathbb{C})$ such that, for every $g \in \Gamma$, we have $P\rho(g)P^{-1} = \con{{}^t\!\rho(g)^{-1}}$. By considering the inverse, conjugating and transposing, we obtain, for every $g \in \Gamma$, that \[\con{{}^t\!P^{-1}}\con{{}^t\!\rho(g)^{-1}}\con{{}^t\!P} = \rho(g).\]
By replacing $\con{{}^t\!\rho(g)^{-1}}$ in the expression, we deduce that
\[(P^{-1}\con{{}^t\!P})^{-1} \rho(g) (P^{-1}\con{{}^t\!P}) = \rho(g).\]
The matrix $P^{-1}\con{{}^t\!P}$ commutes with the whole image of $\rho$. But $\rho$ is irreducible, so there exists $\lambda \in \mathbb{C}$ such that $P^{-1}\con{{}^t\!P} = \lambda \mathrm{Id}$. By taking determinants, we see that $|\lambda| = 1$. Up to multiplying $P$ by a square root of $\lambda$, we may suppose that $\lambda = 1$. We then have $P = \con{{}^t\!P}$, which means that $P$ is a Hermitian matrix.
We thus have a Hermitian matrix $P$ such that, for every $g \in \Gamma$, $\con{{}^t\!\rho(g)} P \rho(g) = P$. The representation $\rho$ then takes values in the unitary group of $P$.
Denoting by $(p,q)$ the signature of $P$, the representation $\rho$ is then conjugate to a representation taking values in $\mathrm{SU}(p,q)$.
\end{proof}
\begin{rem}
\begin{enumerate}
\item When $n=3$, the only possibilities are $\mathrm{SU}(3)$ and $\su21$.
\item When $n = 2$, the involutions $\Phi_1$ and $\Phi_2$ are equal: we recover the result shown by Morgan and Shalen in \cite{morgan_shalen} (Proposition III.1.1) and by Goldman in \cite{goldman_topological_1988} (Theorem 4.3): an irreducible representation with real character is conjugate either to a representation with values in $\mathrm{SU}(2)$ or to a representation with values in $\mathrm{SU}(1,1)$ (appearing as $\mathrm{SL}_2(\mathbb{R})$ for Morgan and Shalen and as $\mathrm{SO}(2,1)$ for Goldman).
\end{enumerate}
\end{rem}
Let us now consider the case of $\mathrm{Fix}(\Phi_1)$, which corresponds to representations taking values in $\mathrm{SL}_n(\mathbb{R})$ or $\mathrm{SL}_{n/2}(\mathbb{H})$. The result is given in the following proposition:
\begin{prop}\label{prop_traces_reelles}
Let $\rho \in \mathrm{Hom}(\Gamma, \slnc)$ be an irreducible representation such that $\chi_\rho \in \mathrm{Fix}(\Phi_1)$. Then $\rho$ is conjugate either to a representation taking values in $\mathrm{SL}_n(\mathbb{R})$, or to a representation taking values in $\mathrm{SL}_{n/2}(\mathbb{H})$ (when $n$ is even).
\end{prop}
We are going to give a proof of this statement inspired by that of Proposition \ref{prop_invol_traces}. An alternative proof can be obtained by adapting the argument given by Morgan and Shalen in the third part of their article \cite{morgan_shalen} for the $\mathrm{SL}_2(\mathbb{C})$ case.
\begin{lemme}\label{lemme_p_pconj}
Let $P \in \slnc$ be such that $\con{P} P = \mathrm{Id}$. Then there exists $Q \in \mathrm{GL}_n(\mathbb{C})$ such that $P = \con{Q} Q^{-1}$.
\end{lemme}
This fact is an immediate consequence of Hilbert's Theorem 90, which says that $H^1(\mathrm{Gal}(\mathbb{C} / \mathbb{R}) , \slnc)$ is trivial. We give here an elementary proof.
\begin{proof}
We seek $Q$ of the form $Q_{\alpha} = \alpha \mathrm{Id} + \con{\alpha} \con{P}$. These matrices trivially satisfy $Q_{\alpha}P = \con{Q_{\alpha}}$. It then suffices to find $\alpha \in \mathbb{C}$ such that $\det (Q_{\alpha}) \neq 0$. But $\det (Q_{\alpha}) = \con{\alpha}^n \det (\con{P} + \frac{\alpha}{\con{\alpha}} \mathrm{Id})$, so any $\alpha$ such that $-\frac{\alpha}{\con{\alpha}}$ is not an eigenvalue of $\con{P}$ works. For such an $\alpha$ we have $P = Q_{\alpha}^{-1}\con{Q_{\alpha}} = \con{R}\,R^{-1}$ with $R = \con{Q_{\alpha}}^{-1}$, as required.
\end{proof}
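As a quick numerical sanity check of this construction (not part of the argument), the following plain-Python sketch builds a $2\times 2$ matrix $P$ with $\con{P}P = \mathrm{Id}$ and verifies that $Q_\alpha = \alpha\,\mathrm{Id} + \con{\alpha}\,\con{P}$ satisfies $Q_\alpha P = \con{Q_\alpha}$; the sample matrix is arbitrary.

```python
# Numerical check (2x2 case) of the construction in the proof:
# if conj(P) P = Id, then Q_a = a*Id + conj(a)*conj(P) satisfies Q_a P = conj(Q_a).

def mul(A, B):  # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj(A):
    return [[z.conjugate() for z in row] for row in A]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def err(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(2) for j in range(2))

# Any invertible Q0 gives P = conj(Q0) Q0^{-1} with conj(P) P = Id.
Q0 = [[1 + 1j, 2], [0.5j, 1 - 1j]]
P = mul(conj(Q0), inv(Q0))

I = [[1, 0], [0, 1]]
assert err(mul(conj(P), P), I) < 1e-12  # conj(P) P = Id by construction

# Pick alpha with det(Q_alpha) != 0 (a generic alpha works).
for alpha in (1, 1j, 2 + 1j):
    Q = [[alpha * I[i][j] + alpha.conjugate() * conj(P)[i][j] for j in range(2)]
         for i in range(2)]
    if abs(det(Q)) > 1e-8:
        break

assert err(mul(Q, P), conj(Q)) < 1e-9  # Q_alpha P = conj(Q_alpha)
```

The identity $Q_\alpha P = \con{Q_\alpha}$ holds for every $\alpha$; the loop only ensures the invertibility required by the lemma.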
\begin{lemme}\label{lemme_p_pconj_bis}
Let $P \in \mathrm{SL}_{2m}(\mathbb{C})$ such that $\con{P} P = - \mathrm{Id}$. Then there exists $Q \in \mathrm{GL}_{2m}(\mathbb{C})$ such that $P = \con{Q} J_{2m} Q^{-1}$.
\end{lemme}
\begin{proof}
We seek $Q$ of the form $Q_{\alpha} = \alpha \mathrm{Id} - \con{\alpha} J_{2m} \con{P}$. These matrices trivially satisfy $Q_{\alpha}P = \alpha P + \con{\alpha} J_{2m} = J_{2m} \con{Q_\alpha}$. It then suffices to find $\alpha \in \mathbb{C}$ such that $\det (Q_{\alpha}) \neq 0$. But $\det (Q_{\alpha}) = \con{\alpha}^{2m} \det (\frac{\alpha}{\con{\alpha}} \mathrm{Id} - J_{2m} \con{P})$, so any $\alpha$ such that $\frac{\alpha}{\con{\alpha}}$ is not an eigenvalue of $ J_{2m} \con{P}$ works. For such an $\alpha$ we have $P = Q_{\alpha}^{-1} J_{2m} \con{Q_{\alpha}} = \con{R}\, J_{2m}\, R^{-1}$ with $R = \con{Q_{\alpha}}^{-1}$, as required.
\end{proof}
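Again as a sanity check (not part of the proof), one can verify numerically, in the case $m=1$ with $J_2 = \left(\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\right)$, that a matrix of the form $Q_\alpha = \alpha\,\mathrm{Id} - \con{\alpha}\,J_2\con{P}$ satisfies the identity $Q_\alpha P = J_2\con{Q_\alpha}$ used above; the sign convention in the sketch is the one that makes this identity hold.

```python
# Numerical check (m=1, so 2x2) of the identity Q_a P = J conj(Q_a)
# used in the proof, with J the standard real matrix satisfying J^2 = -Id.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj(A):
    return [[z.conjugate() for z in row] for row in A]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def err(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(2) for j in range(2))

I = [[1, 0], [0, 1]]
J = [[0, -1], [1, 0]]                 # J real, J^2 = -Id

# Any invertible Q0 gives P = conj(Q0) J Q0^{-1} with conj(P) P = -Id.
Q0 = [[1 + 2j, 1], [1j, 3 - 1j]]
P = mul(mul(conj(Q0), J), inv(Q0))
assert err(mul(conj(P), P), [[-1, 0], [0, -1]]) < 1e-12

# Q_a = a*Id - conj(a)*J*conj(P), so that Q_a P = a*P + conj(a)*J = J conj(Q_a).
JP = mul(J, conj(P))
for alpha in (1, 1j, 2 + 1j):
    Q = [[alpha * I[i][j] - alpha.conjugate() * JP[i][j] for j in range(2)]
         for i in range(2)]
    if abs(det(Q)) > 1e-8:
        break

assert err(mul(Q, P), mul(J, conj(Q))) < 1e-9
```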
\begin{proof}[Proof of Proposition \ref{prop_traces_reelles}]
We know that $\chi_\rho \in \mathrm{Fix}(\Phi_1)$, so the representations $\rho$ and $\Phi_1(\rho)$ have the same character. Since $\rho$ is irreducible, $\rho$ and $\Phi_1(\rho)$ are conjugate. Hence, there exists $P \in \slnc$ such that, for all $g \in \Gamma$, we have $P\rho(g)P^{-1} = \con{\rho(g)}$. By taking complex conjugates, we obtain $\con{P}\con{\rho(g)} \con{P^{-1}} = \rho(g)$. By replacing $\con{\rho(g)}$ in the expression, we deduce that for all $g \in \Gamma$:
\[ (\con{P} P) \rho(g) (\con{P} P)^{-1} = \rho(g).\]
The matrix $\con{P} P$ commutes with the whole image of $\rho$. But $\rho$ is irreducible, so there exists $\lambda \in \mathbb{C}$ such that $\con{P} P = \lambda \mathrm{Id}$. In particular, $P$ and $\con{P}$ commute, so, by conjugating the equality above, we obtain $\lambda \in \mathbb{R}$. Furthermore, by taking determinants, we have $\lambda^n = 1$, hence $\lambda = \pm 1$ and $P \con{P} = \pm \mathrm{Id}$. We have two cases:
\emph{First case: $P \con{P} = \mathrm{Id}$. }
By Lemma \ref{lemme_p_pconj}, there exists $Q \in \slnc$ such that $P = \con{Q}Q^{-1}$.
We deduce that for all $g \in \Gamma$:
\[\con{Q}Q^{-1} \rho(g) Q \con{Q^{-1}}= \con{\rho(g)}\]
\[Q^{-1} \rho(g) Q = \con{Q^{-1}} \con{\rho(g)} \con{Q} \]
This means that the representation $Q^{-1} \rho Q$ takes values in $\mathrm{SL}_n(\mathbb{R})$.
\emph{Second case: $P \con{P} = -\mathrm{Id}$.}
By taking the determinant, we see that this case can only happen if $n$ is even. Let $m = \frac{n}{2}$.
By Lemma \ref{lemme_p_pconj_bis}, there exists $Q \in \mathrm{GL}_{2m}(\mathbb{C})$ such that $P = \con{Q} J_{2m} Q^{-1}$.
We deduce that, for all $g \in \Gamma$:
\begin{eqnarray*}
\con{Q} J_{2m} Q^{-1} \rho(g) Q J_{2m}^{-1} \con{Q^{-1}} &=& \con{\rho(g)}
\\
\con{\rho(g)^{-1}} \con{Q} J_{2m} Q^{-1} \rho(g) Q J_{2m}^{-1} \con{Q^{-1}} &=& \mathrm{Id}
\\
(\con{Q^{-1} \rho(g) Q})^{-1} J_{2m} Q^{-1} \rho(g) Q J_{2m}^{-1} &=& \mathrm{Id}
\\
(\con{Q^{-1} \rho(g) Q})^{-1} J_{2m} Q^{-1} \rho(g) Q &=& J_{2m}.
\end{eqnarray*}
This means that the representation $Q^{-1} \rho Q$ takes values in $\mathrm{SL}_m(\mathbb{H})$.
\end{proof}
With the propositions above, we have shown that an irreducible representation with character in $\mathrm{Fix}(\Phi_1)$ or $\mathrm{Fix}(\Phi_2)$ is conjugate to a representation taking values in a real form of $\slnc$. By combining Propositions \ref{prop_invol_traces} and \ref{prop_traces_reelles} we immediately obtain a proof of Theorem \ref{main_thm}.
\section{A detailed example: the free product $\z3z3$}
We are going to study in detail the character varieties $\mathcal{X}_{\su21}(\z3z3)$ and $\mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$.
We will begin by studying the character variety $\mathcal{X}_{\sl3c}(\z3z3)$ inside the variety $\mathcal{X}_{\sl3c}(F_2)$ given by Lawton in \cite{lawton_generators_2007}. We will then focus on the fixed-point set $\mathrm{Fix}(\Phi_2)$ of the involution $\Phi_2$, which will give us the two character varieties with values in real forms, and we will finally describe them in detail and find the slices parametrized by Parker and Will in \cite{ParkerWill} and by Falbel, Guilloux, Koseleff, Rouillier and Thistlethwaite in \cite{character_sl3c}.
\subsection{The character variety $\mathcal{X}_{\sl3c}(\z3z3)$}
In this section, we will study the character variety $\mathcal{X}_{\sl3c}(\z3z3)$. First, notice that $\z3z3$ is a quotient of the free group of rank two $F_2$. Thanks to Remark \ref{git+fonctoriel}, we are going to identify $\mathcal{X}_{\sl3c}(\z3z3)$ with a subset of $\mathcal{X}_{\sl3c}(F_2) \subset \mathbb{C}^9$. Let us begin with some elementary remarks on order-3 elements of $\sl3c$.
\begin{rem}
\begin{itemize}
\item If $S \in \sl3c$, then the characteristic polynomial of $S$ is $\chi_S = X^3 - \mathrm{tr}(S)X^2 + \mathrm{tr}(S^{-1})X -1$.
\item If $S \in \sl3c$ is of order 3, then $S^3 - \mathrm{Id} = 0$. Hence the matrix $S$ is diagonalizable and its eigenvalues are cube roots of unity. We will denote these cube roots by $1$, $\omega$ and $\omega^2$.
\end{itemize}
\end{rem}
The following elementary lemma will be useful to separate the irreducible components of $\mathcal{X}_{\sl3c}(\z3z3)$.
\begin{lemme}\label{lemme_ordre3_sl3c}
Let $S \in \sl3c$. The following assertions are equivalent:
\begin{enumerate}
\item $S^3 = \mathrm{Id}$
\item One of the following cases holds:
\begin{enumerate}
\item There exists $i \in \{0,1,2\}$ such that $S = \omega^i \mathrm{Id}$.
\item $\mathrm{tr}(S) = \mathrm{tr}(S^{-1}) = 0$.
\end{enumerate}
\end{enumerate}
\end{lemme}
\begin{proof} \hspace{1cm}
\begin{itemize}
\item[$(a) \Rightarrow (1)$]: Trivial
\item[$(b) \Rightarrow (1)$]: In this case, $\chi_S = X^3 - 1$. By the Cayley--Hamilton theorem, we have $S^3 - \mathrm{Id} = 0$.
\item[$(1) \Rightarrow (2)$]: If $S^3 = \mathrm{Id}$, then $S$ is diagonalizable and its eigenvalues are cube roots of unity. If $S$ has a triple eigenvalue, we are in case $(a)$. If not, since $\det(S) = 1$, the three eigenvalues are distinct and equal to $1,\omega,\omega^2$. We deduce that $\mathrm{tr}(S) = \mathrm{tr}(S^{-1}) = 1 + \omega + \omega^2 = 0 $.
\end{itemize}
\end{proof}
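For illustration (a numerical sketch, not part of the text), both directions of the lemma can be checked on explicit matrices: the cyclic permutation matrix has both traces zero and cubes to the identity, while the trace computation in case $(b)$ reduces to $1+\omega+\omega^2=0$.

```python
import cmath

# The cyclic permutation matrix S satisfies tr(S) = tr(S^{-1}) = 0 and det(S) = 1;
# by the lemma it must satisfy S^3 = Id. We check this directly.

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

S = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]
S2 = mul(S, S)            # S^2 = S^{-1} for a matrix of order 3
S3 = mul(S2, S)

assert trace(S) == 0 and trace(S2) == 0
assert all(S3[i][j] == (1 if i == j else 0) for i in range(3) for j in range(3))

# Conversely, a non-scalar order-3 matrix has eigenvalues 1, w, w^2, hence trace 0:
w = cmath.exp(2j * cmath.pi / 3)
assert abs(1 + w + w ** 2) < 1e-12
```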
We can now identify the irreducible components of
$\mathcal{X}_{\sl3c}(\z3z3)$, thanks to the following proposition:
\begin{prop}
The algebraic set $\mathcal{X}_{\sl3c}(\z3z3)$ has 16 irreducible components: 15 isolated points and an irreducible component $X_0$ of complex dimension 4.
\end{prop}
\begin{proof}
Consider $\mathcal{X}_{\sl3c}(\z3z3) \subset \mathcal{X}_{\sl3c}(F_2) \subset \mathbb{C}^9$, as the image of $\mathrm{Hom}(\z3z3 , \sl3c )$ by the trace maps of the elements $s,t,st,st^{-1} , s^{-1},t^{-1},t^{-1}s^{-1},ts^{-1}$, and of the commutator $[s,t]$. Let $\rho \in \mathrm{Hom}(\z3z3 , \sl3c )$. Write $S = \rho(s)$ and $T = \rho(t)$. By Lemma \ref{lemme_ordre3_sl3c}, either $S$ or $T$ is a scalar matrix, or $\mathrm{tr}(S)=\mathrm{tr}(S^{-1})=\mathrm{tr}(T)=\mathrm{tr}(T^{-1})=0$. Let us deal with these two cases separately.\\
\emph{First case: $S$ or $T$ is a scalar matrix.}
Suppose, for example, that $S = \omega^i\mathrm{Id}$ with $i \in \{0,1,2\}$. Since $T$ is of finite order and hence diagonalizable, the representation is totally reducible, and it is conjugate either to a representation of the form \[S=\omega^i\mathrm{Id} \hspace{2cm} T = \omega^j\mathrm{Id}\] with $i,j \in \{0,1,2\}$, or to a representation given by \[S=\omega^i\mathrm{Id} \hspace{2cm} T = \begin{pmatrix}
\omega^2 & 0 &0 \\
0 & \omega &0 \\
0 & 0 &1
\end{pmatrix}
.\]
Considering the symmetries, we obtain 15 points of the character variety, classified by the traces of $S$ and $T$ in the following way (where $i,j \in \{0,1,2\}$):
\[
\begin{array}{|c||c|c|c|}
\hline
\mathrm{tr}(S) & 3\omega^i & 0 & 3\omega^i \\
\hline
\mathrm{tr}(T) & 3\omega^j & 3\omega^j & 0\\
\hline
\end{array}
\]
Since the traces of $S$ and $T$ separate these 15 points from one another, and since both traces vanish in the second case below, these points are isolated in $\mathcal{X}_{\sl3c}(\z3z3)$.
\emph{Second case: $\mathrm{tr}(S)=\mathrm{tr}(S^{-1})=\mathrm{tr}(T)=\mathrm{tr}(T^{-1})=0$.}
By Lemma \ref{lemme_ordre3_sl3c}, all the points of $\mathcal{X}_{\sl3c}(F_2)$ satisfying this condition are in $\mathcal{X}_{\sl3c}(\z3z3)$. Denote by $z = \mathrm{tr}(ST)$, $z' = \mathrm{tr}((ST)^{-1})$, $w = \mathrm{tr}(ST^{-1})$, $w' = \mathrm{tr}(TS^{-1})$ and $x = \mathrm{tr}([S,T])$.
The equation defining $\mathcal{X}_{\sl3c}(F_2) \subset \mathbb{C}^9$ becomes:
\[x^2 - (zz'+ww' - 3)x + (zz'ww' + z^3 + z'^3 +w^3 + w'^3 - 6zz' - 6ww' +9) = 0\]
This polynomial is irreducible. Indeed, if it were not, it would be equal to a product of two polynomials of degree 1 in $x$. By replacing $z'$, $w$ and $w'$ by $0$, we would obtain a factorization of the form $x^2 + 3x +z^3+9 = (x-R_1(z))(x-R_2(z))$, with $R_1(z)R_2(z) = z^3 + 9$ and $R_1(z)+R_2(z) = -3$. By considering the degrees of the polynomials $R_1$ and $R_2$ we easily obtain a contradiction.
Since this polynomial is irreducible, its zero set $X_0$ is an irreducible component of $\mathcal{X}_{\sl3c}(\z3z3)$. Furthermore, it can be embedded into $\mathbb{C}^5$ and it is a ramified double cover of $\mathbb{C}^4$.
\end{proof}
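The displayed equation can be tested numerically (a sanity check under the stated trace conditions, not part of the proof): take two explicit order-3 matrices in $\mathrm{SL}_3(\mathbb{C})$ with vanishing traces, compute the coordinates $z,z',w,w',x$, and evaluate the polynomial; the conjugating matrix $G$ below is arbitrary.

```python
import cmath

# Check the defining equation of X_0 on an explicit pair of order-3 matrices:
# S cyclic, T a conjugate of diag(1, w, w^2); both have tr = tr(inverse) = 0.

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

w = cmath.exp(2j * cmath.pi / 3)
S = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
D = [[1, 0, 0], [0, w, 0], [0, 0, w ** 2]]

G = [[1, 1, 0], [0, 1, 1], [1, 0, 2]]            # det(G) = 3
Ginv = [[2 / 3, -2 / 3, 1 / 3],                  # adjugate of G divided by det
        [1 / 3, 2 / 3, -1 / 3],
        [-1 / 3, 1 / 3, 1 / 3]]
T = mul(mul(G, D), Ginv)

S2, T2 = mul(S, S), mul(T, T)                    # inverses, since S, T have order 3
z   = trace(mul(S, T))                           # tr(ST)
zp  = trace(mul(T2, S2))                         # tr((ST)^{-1})
wc  = trace(mul(S, T2))                          # tr(S T^{-1})
wcp = trace(mul(T, S2))                          # tr(T S^{-1})
x   = trace(mul(mul(S, T), mul(S2, T2)))         # tr([S,T])

lhs = (x ** 2 - (z * zp + wc * wcp - 3) * x
       + (z * zp * wc * wcp + z ** 3 + zp ** 3 + wc ** 3 + wcp ** 3
          - 6 * z * zp - 6 * wc * wcp + 9))
assert abs(lhs) < 1e-9
```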
\subsection{Reducible representations in the component $X_0 \subset \mathcal{X}_{\sl3c}(\z3z3)$}
In order to complete the description of the character variety $\mathcal{X}_{\sl3c}(\z3z3)$, we are going to identify the points corresponding to reducible representations. The 15 isolated points of the algebraic set come from totally reducible representations; it remains to determine the points of the component $X_0$ corresponding to reducible representations.
\begin{notat}
We consider here $X_0 \subset \mathbb{C}^5$, with coordinates $(z,z',w,w',x)$ corresponding to the traces of the images of $(st,(st)^{-1}, st^{-1}, ts^{-1}, [s,t])$ respectively.
We denote by $X_0^{\mathrm{red}}$ the image of the reducible representations in $X_0$.
\end{notat}
\begin{rem}
If the coordinates $(z,z',w,w',x)$ correspond to a reducible representation, then $\Delta(z,z',w,w') = 0$, where $\Delta$ denotes the discriminant of the polynomial $X^2 - Q(z,z',w,w')X + P(z,z',w,w')$ whose roots are the traces of the images of $[s,t]$ and $[t,s]$. Indeed, for a reducible representation the two commutators $[s,t]$ and $[t,s]$ have the same trace, so this polynomial has a double root equal to that trace.
\end{rem}
We are going to show that the locus of characters of reducible representations is a union of 9 complex lines, which meet with triple intersections at six points corresponding to totally reducible representations. Before giving the proof, let us fix notation for these lines.
\begin{notat}
For $i,j \in \{0,1,2\}$, let \[L^{(i,j)} = \{(z,z',w,w',x) \in X_0 \mid \omega^i z = \omega^{-i} z' ; \omega^j w = \omega^{-j} w' ; \omega^i z + \omega^j w = 3\}. \]
Each $L^{(i,j)}$ is a complex line parametrized by the coordinate $z$ (or $w$), and these lines intersect with triple intersections at the six points of coordinates $(z,w) = (0,3\omega^j)$ and $(z,w) = (3\omega^i,0)$, where $i,j \in \{0,1,2\}$.
\end{notat}
With this notation, we can state in a simpler way the proposition describing the points of $X_0$ corresponding to reducible representations.
\begin{prop}\label{prop_rep_red_z3z3}
The points of $X_0$ corresponding to reducible representations are exactly those in the lines $L^{(i,j)}$. In other words, we have
\[X_0^{\mathrm{red}} = \bigcup_{i,j \in \{0,1,2\}} L^{(i,j)}.\]
\end{prop}
\begin{proof}
We are going to show a double inclusion. Let us first show that \[\displaystyle X_0^{\mathrm{red}} \subset \bigcup_{i,j \in \{0,1,2\}} L^{(i,j)}.\]
Let $\rho \in \mathrm{Hom}(\z3z3 , \sl3c)$ be a reducible representation such that $\chi_\rho \in X_0$. Let $S = \rho(s)$ and $T = \rho(t)$. Since the representation is reducible, we can suppose, after conjugating $\rho$, that
\[
S = \omega^i \begin{pmatrix}
S' & \\
& 1
\end{pmatrix}
\text{ and }
T = \omega^j \begin{pmatrix}
T' & \\
& 1
\end{pmatrix}
\]
where $i,j \in \{0,1,2\}$, and $S', T' \in \mathrm{SL}_2(\mathbb{C})$ are of order 3 (and trace $-1$). Notice that it is enough to show that, when $i=j=0$, we have $\chi_\rho \in L^{(0,0)}$; the other cases follow by symmetry.
Let us consider this case, with $i=j=0$. Since $S'T' \in \mathrm{SL}_2(\mathbb{C})$, we have $\mathrm{tr}(S'T') = \mathrm{tr}((S'T')^{-1})$, hence $\mathrm{tr}(ST) = \mathrm{tr}((ST)^{-1})$ and $z=z'$. Similarly, we know that $w=w'$. Furthermore, the trace equation in $\mathrm{SL}_2(\mathbb{C})$ gives
$\mathrm{tr}(S')\mathrm{tr}(T') = \mathrm{tr}(S'T') + \mathrm{tr}(S'T'^{-1})$.
Hence $(-1)^2 = (z-1) + (w-1)$, i.e. $z+w=3$. We obtain finally that $\chi_\rho \in L^{(0,0)}$.
Let us now show the other inclusion. To do so, it is enough to show that all the points of $L^{(0,0)}$ are images of representations given by
\[
S = \begin{pmatrix}
S' & \\
& 1
\end{pmatrix}
\text{ and }
T = \begin{pmatrix}
T' & \\
& 1
\end{pmatrix}
\]
with $S',T' \in \mathrm{SL}_2(\mathbb{C})$, and recover the points of the other lines $L^{(i,j)}$ by considering $(\omega^i S, \omega^j T)$.
Since the image of every reducible representation of this form satisfies $z=z'$, $w=w'$ and $z+w=3$, we have to show that any $z\in \mathbb{C}$ can be written as $1 + \mathrm{tr}(S'T')$ with $S',T' \in \mathrm{SL}_2(\mathbb{C})$ of order 3 and trace $-1$. Fix $z \in \mathbb{C}$. Since the $\mathrm{SL}_2(\mathbb{C})$-character variety of $F_2$ is isomorphic to $\mathbb{C}^3$ by the trace maps of two generators and their product, there exist matrices $S', T' \in \mathrm{SL}_2(\mathbb{C})$ such that $(\mathrm{tr}(S'),\mathrm{tr}(T'),\mathrm{tr}(S'T')) = (-1,-1,z-1)$. In this case, the two matrices $S'$ and $T'$ have trace $-1$ and hence order 3, and we have $z = 1 + \mathrm{tr}(S'T')$.
\end{proof}
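The $\mathrm{SL}_2(\mathbb{C})$ trace identity used in the proof, $\mathrm{tr}(S')\mathrm{tr}(T') = \mathrm{tr}(S'T') + \mathrm{tr}(S'T'^{-1})$, can be checked on arbitrary determinant-one matrices; the following is a quick sketch (sample matrices arbitrary), not part of the argument.

```python
# Check tr(A) tr(B) = tr(AB) + tr(A B^{-1}) for A, B in SL_2.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def inv_sl2(B):  # for det(B) = 1, the inverse is the adjugate
    return [[B[1][1], -B[0][1]], [-B[1][0], B[0][0]]]

A = [[1, 2], [3, 7]]                 # det = 7 - 6 = 1
B = [[1 + 1j, 1], [1, 1 - 1j]]       # det = 2 - 1 = 1

lhs = trace(A) * trace(B)
rhs = trace(mul(A, B)) + trace(mul(A, inv_sl2(B)))
assert abs(lhs - rhs) < 1e-12
```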
\begin{rem}
The lines $L^{(i,j)}$ intersect with triple intersections at the six points of coordinates $(z,w) = (3\omega^i,0)$ and $(z,w) = (0,3\omega^i)$ with $i \in \{0,1,2\}$. The corresponding representations are exactly the totally reducible ones, where $S$ and $T$ are diagonal with eigenvalues $(1,\omega,\omega^2)$.
\end{rem}
\subsection{The fixed points of the involution $\Phi_2$}
We are going to describe here the character varieties $\mathcal{X}_{\su21}(\z3z3)$ and $\mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$ as fixed points of the involution $\Phi_2$ of $\mathcal{X}_{\sl3c}(\z3z3)$. In this technical subsection, we will choose coordinates and find equations that describe the fixed points of $\Phi_2$. We will identify the characters corresponding to reducible representations as lying in an arrangement of 9 lines, and show that the ones corresponding to irreducible representations are in a smooth manifold of real dimension 4. We will describe the set obtained in this way in Subsection \ref{sous_sect_descr_chi_z3z3}.
\begin{rem}
Notice first that the 15 isolated points of $\mathcal{X}_{\sl3c}(\z3z3)$ come from totally reducible representations taking values in $\su21$ and $\mathrm{SU}(3)$. Hence they are in $\mathcal{X}_{\su21}(\z3z3) \cap \mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$, and so in $\mathrm{Fix}(\Phi_2)$.
\end{rem}
From now on, we will only consider the points of $\mathrm{Fix}(\Phi_2) \cap X_0$. Recall that we have identified $X_0$ with $\{(z,z',w,w',x) \in \mathbb{C}^5 \mid x^2 -Q(z,z',w,w')x + P(z,z',w,w') = 0\}$ by considering the trace maps of $st, (st)^{-1} , st^{-1} , ts^{-1}$ and $[s,t]$.
\begin{rem}\label{rem_fix_x0}
If $(z,z',w,w',x) \in \mathrm{Fix}(\Phi_2) \cap X_0$, then $z' = \con{z}$ and $w' = \con{w}$. In this case, the polynomials $P$ and $Q$, that we will denote by $P(z,w)$ and $Q(z,w)$, take real values. Furthermore, we can write the discriminant of $X^2 -Q(z,w)X + P(z,w)$ as \[\Delta(z,w) = f(z) + f(w) - 2|z|^2|w|^2 +27,\]
where $f(z) = |z|^4 - 8\Re(z^3) + 18|z|^2 -27$ is the function described by Goldman in \cite{goldman}, which is nonzero at the traces of regular elements of $\su21$ (positive for loxodromic elements, negative for elliptic elements). At a point of $\mathrm{Fix}(\Phi_2) \cap X_0$, the two roots of $X^2 -Q(z,w)X + P(z,w)$ are the traces of the images of the commutators $[s,t]$ and $[t,s]$. Since these commutators are inverse to each other, and since we are in $\mathrm{Fix}(\Phi_2)$, the two roots are complex conjugates, which is equivalent to $\Delta(z,w) \leq 0$.
\end{rem}
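The identity $\Delta(z,w) = Q(z,w)^2 - 4P(z,w) = f(z)+f(w)-2|z|^2|w|^2+27$ can be confirmed numerically, with $Q(z,w) = |z|^2+|w|^2-3$ and $P(z,w) = 2\Re(z^3)+2\Re(w^3)+|z|^2|w|^2-6|z|^2-6|w|^2+9$ (the specializations of the coefficients to $z'=\con z$, $w'=\con w$). A sketch, with arbitrary sample points:

```python
# Check Delta(z, w) = Q(z,w)^2 - 4 P(z,w) = f(z) + f(w) - 2|z|^2|w|^2 + 27,
# where f is Goldman's function f(z) = |z|^4 - 8 Re(z^3) + 18 |z|^2 - 27.

def f(z):
    return abs(z) ** 4 - 8 * (z ** 3).real + 18 * abs(z) ** 2 - 27

def Q(z, w):
    return abs(z) ** 2 + abs(w) ** 2 - 3

def P(z, w):
    return (2 * (z ** 3).real + 2 * (w ** 3).real + abs(z) ** 2 * abs(w) ** 2
            - 6 * abs(z) ** 2 - 6 * abs(w) ** 2 + 9)

samples = [(1 + 2j, -0.5 + 1j), (3 + 0j, 0j), (2 - 1j, 1j), (0.3 + 0.7j, -2 - 2j)]
for z, w in samples:
    delta = Q(z, w) ** 2 - 4 * P(z, w)
    assert abs(delta - (f(z) + f(w) - 2 * abs(z) ** 2 * abs(w) ** 2 + 27)) < 1e-9
```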
\begin{prop} We have:
\[ \mathrm{Fix}(\Phi_2) \cap X_0 = \{ (z,z',w,w',x) \in X_0 \mid z' = \con{z} , w' = \con{w} , \Delta(z,w) \leq 0 \}\]
\end{prop}
\begin{proof}
We are going to show a double inclusion. The first one is given by Remark \ref{rem_fix_x0}; let us show the second.
Let $z,w \in \mathbb{C}$ such that $\Delta(z,w) \leq 0$. Let $x$ be a root of $X^2 - Q(z,w)X + P(z,w)$. Since $\Delta(z,w) \leq 0$, the other root of the polynomial is $\con{x}$. We know that $(z,\con{z},w,\con{w},x) \in X_0$; we want to show that $(z,\con{z},w,\con{w},x) \in \mathrm{Fix}(\Phi_2)$. Let $\rho \in \mathrm{Hom}(\Gamma, \sl3c)$ be a semi-simple representation with image in $X_0$ equal to $(z,\con{z},w,\con{w},x)$. It is enough to prove that for all $\gamma \in \Gamma$ we have $\mathrm{tr}(\rho(\gamma)) = \con{\mathrm{tr}(\rho(\gamma)^{-1})}$. But the representation ${}^t\!\con{\rho}^{-1}$ has image $(\con{z},z,\con{w},w,\con{x})$ in $X_0$. We deduce that the representations $\rho$ and ${}^t\!\con{\rho}^{-1}$ are semi-simple and have the same character. Hence they are conjugate and for all $\gamma \in \Gamma$ we have $\mathrm{tr}(\rho(\gamma)) = \con{\mathrm{tr}(\rho(\gamma)^{-1})}$, and so $(z,\con{z},w,\con{w},x) \in \mathrm{Fix}(\Phi_2)$.
\end{proof}
\begin{notat}
From now on, we will consider $\mathrm{Fix}(\Phi_2) \cap X_0$ as $\{(z,w,x) \in \mathbb{C}^3 \mid \Delta(z,w) \leq 0 , x^2 - Q(z,w)x + P(z,w) =0\}$. The projection onto the first two coordinates is a double cover of $\{(z,w) \in \mathbb{C}^2 \mid \Delta(z,w) \leq 0 \}$ away from the level set $\Delta(z,w) = 0$, over which points have a unique pre-image.
\end{notat}
We are going to identify the points corresponding to reducible representations, and then show that outside these points the $\su21$- and $\mathrm{SU}(3)$-character varieties are smooth manifolds.
Let us begin by identifying the points of $X_0 \cap \mathrm{Fix}(\Phi_2)$ corresponding to reducible representations with the coordinates $(z,w)$.
\begin{rem}
Let $(z,w,x) \in X_0 \cap \mathrm{Fix}(\Phi_2)$. Let $\rho \in \mathrm{Hom}(\z3z3 , \sl3c)$ have coordinates $(z,\con{z},w,\con{w},x)$. The following assertions are equivalent:
\begin{enumerate}
\item $\rho$ is reducible.
\item There exist $i,j \in \{0,1,2\}$ such that $\omega^i z$ and $\omega^jw$ are real and $\omega^i z + \omega^jw = 3$.
\end{enumerate}
In this case $\Delta(z,w) = 0$.
\end{rem}
\begin{proof}
This is an immediate consequence of Proposition \ref{prop_rep_red_z3z3} and of the fact that we are in $\mathrm{Fix}(\Phi_2)$, so that, in the coordinates $(z,z',w,w',x)\in \mathbb{C}^5$, we have $z'=\con{z}$ and $w' = \con{w}$. Finally, a direct computation shows that $\Delta(z,3-z) = 0$ for real $z$; since $\Delta$ is invariant under $(z,w) \mapsto (\omega^i z, \omega^j w)$, in the setting of the equivalence we indeed have $\Delta(z,w) = 0$.
\end{proof}
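The final claim, $\Delta(z, 3-z) = 0$ for real $z$ (and its twists by cube roots of unity), is easy to confirm numerically, again using the explicit formulas for $P$ and $Q$ on the locus $z' = \con z$, $w' = \con w$; a sketch:

```python
import cmath

# Check that Delta(r, 3 - r) = 0 for real r, and that the same holds after
# twisting by cube roots of unity, since Delta only involves |z|, |w|, Re(z^3), Re(w^3).

def Q(z, w):
    return abs(z) ** 2 + abs(w) ** 2 - 3

def P(z, w):
    return (2 * (z ** 3).real + 2 * (w ** 3).real + abs(z) ** 2 * abs(w) ** 2
            - 6 * abs(z) ** 2 - 6 * abs(w) ** 2 + 9)

def delta(z, w):
    return Q(z, w) ** 2 - 4 * P(z, w)

omega = cmath.exp(2j * cmath.pi / 3)
for r in (-2.0, 0.0, 1.0, 1.5, 3.0, 4.7):
    assert abs(delta(r, 3 - r)) < 1e-9
    assert abs(delta(omega * r, omega ** 2 * (3 - r))) < 1e-9
```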
Let us show now that the points corresponding to irreducible representations form a smooth manifold.
\begin{prop}\label{prop_rep_irr_lisses}
Outside the points corresponding to reducible representations, the set $X_0 \cap \mathrm{Fix}(\Phi_2)$ is a submanifold of $\mathbb{C}^3$ of real dimension 4.
\end{prop}
\begin{proof}
Recall that we defined $X_0 \cap \mathrm{Fix}(\Phi_2)$ as:
\[\{ (z,w,x) \in \mathbb{C}^3 \mid x^2 -Q(z,w)x+P(z,w) =0 , \Delta(z,w) \leq 0 \} \]
where
\[ Q(z,w) =|z|^2 + |w|^2 - 3 \]
\[ P(z,w) = 2\Re(z^3) + 2\Re(w^3) + |z|^2|w|^2 - 6|z|^2 -6|w|^2 + 9.\]
We can hence rewrite
$X_0 \cap \mathrm{Fix}(\Phi_2)$ as:
\[\{ (z,w,x) \in \mathbb{C}^3 \mid x + \con{x} = Q(z,w) , x\con{x} = P(z,w) \} . \]
Consider the functions $f_1 , f_2 : \mathbb{C}^3 \rightarrow \mathbb{R}$ given by
$f_1(z,w,x) = Q(z,w) - ( x + \con{x})$ and $f_2(z,w,x) = P(z,w) - x \con{x}$, and then $f = (f_1,f_2) : \mathbb{C}^3 \rightarrow \mathbb{R}^2$. With this notation, $X_0 \cap \mathrm{Fix}(\Phi_2) = f^{-1}(\{0\})$. We are going to show that outside the points corresponding to reducible representations, $f$ is a submersion, i.e.\ that $\mathrm{d}f$ is of rank 2.
Let $(z_0,w_0,x_0) \in X_0 \cap \mathrm{Fix}(\Phi_2)$.
Notice first that \[\frac{\partial f}{\partial x}(z_0,w_0,x_0) = \begin{pmatrix}
-1 \\ -\con{x_0}
\end{pmatrix} \text{ and } \frac{\partial f}{\partial \con{x}} = \begin{pmatrix}
-1 \\ -x_0
\end{pmatrix},\]
hence $\mathrm{d}f$ is always of rank at least $1$ and, if $x_0 \notin \mathbb{R}$, the map $f$ is a submersion at $(z_0,w_0,x_0)$.
Suppose now that $\mathrm{d}f (z_0,w_0,x_0)$ is of rank $1$. In particular, $x_0 \in \mathbb{R}$. We want to show that in this case, the point $(z_0,w_0,x_0)$ corresponds to a reducible representation.
We have \[z_0\frac{\partial f}{\partial z}(z_0,w_0,x_0) = \begin{pmatrix}
|z_0|^2 \\ 3z_0^3 + |z_0|^2|w_0|^2 - 6|z_0|^2
\end{pmatrix}
\text{ and }\]
\[\con{z_0}\frac{\partial f}{\partial \con{z}}(z_0,w_0,x_0) = \begin{pmatrix}
|z_0|^2 \\ 3\con{z_0}^3 + |z_0|^2|w_0|^2 - 6|z_0|^2
\end{pmatrix},\]
and, since the two vectors are linearly dependent, we have $z_0^3 \in \mathbb{R}$. In the same way, $w_0^3 \in \mathbb{R}$. Hence there exist $r_1,r_2 \in \mathbb{R}$ and $i,j \in \{0,1,2\}$ such that $z_0 = \omega^i r_1$ and $w_0 = \omega^j r_2 $.
By Proposition \ref{prop_rep_red_z3z3}, it is enough to show that $r_1 + r_2 = 3$ in order to finish the proof. We consider two cases:
\emph{First case: $r_1$ or $r_2$ is zero.}
Suppose, for example, that $r_2 = 0$. In this case, since $f(0,0,x_0) \neq (0,0)$, we have $r_1 \neq 0$. On the one hand, we have
$2x_0 = Q(z_0,0) = r_1^2 - 3$. On the other hand, since $z_0\frac{\partial f}{\partial z}(z_0,w_0,x_0)$ and $\frac{\partial f}{\partial x}(z_0,w_0,x_0)$ are linearly dependent, we have $x_0 = 3(r_1-2)$. We deduce that $6(r_1-2) = 2x_0 = r_1^2 - 3$, hence $r_1^2 - 6r_1 + 9 = 0$ and $r_1 = 3$.
\emph{Second case: $r_1 , r_2 \neq 0$.} We know that the following vectors are collinear:
\[z_0\frac{\partial f}{\partial z}(z_0,w_0,x_0) =
\begin{pmatrix}
|z_0|^2 \\ 3z_0^3 + |z_0|^2|w_0|^2 - 6|z_0|^2 \end{pmatrix} = r_1^2 \begin{pmatrix}
1 \\ 3r_1 + r_2^2 -6 \end{pmatrix} \]
\[w_0\frac{\partial f}{\partial w}(z_0,w_0,x_0) =\begin{pmatrix}
|w_0|^2 \\ 3w_0^3 + |w_0|^2|z_0|^2 - 6|w_0|^2 \end{pmatrix} = r_2^2 \begin{pmatrix}
1 \\ 3r_2 + r_1^2 -6 \end{pmatrix}.\]
We deduce that $3r_1 + r_2^2 -6 = 3r_2 + r_1^2 -6$, that is, $3(r_1 - r_2) = (r_1 - r_2)(r_1 + r_2)$. If $r_1 \neq r_2$, this gives $r_1 + r_2 = 3$, as desired. If $r_1 = r_2$, then $2x_0 = Q(z_0,w_0) = 2r_1^2 - 6$ gives $x_0 = r_1^2 - 3$ and, since $z_0\frac{\partial f}{\partial z}(z_0,w_0,x_0)$ and $\frac{\partial f}{\partial x}(z_0,w_0,x_0)$ are collinear, $x_0 = r_1^2 + 3r_1 - 6$. Hence $r_1 = 1$ and $x_0 = -2$; but then $x_0\con{x_0} = 4$ while $P(z_0,w_0) = 2$, contradicting $f_2(z_0,w_0,x_0) = 0$. This case therefore does not occur, which completes the proof.
\end{proof}
\subsection{Description of $\mathcal{X}_{\su21}(\z3z3)$ and $\mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$} \label{sous_sect_descr_chi_z3z3}
We are going to describe here the character varieties $\mathcal{X}_{\su21}(\z3z3)$ and $\mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$. To do so, we will study $\mathrm{Fix}(\Phi_2)$ in detail, verify that it is the union of the two character varieties, and check that their intersection corresponds to reducible representations. We finally consider two slices of $\mathrm{Fix}(\Phi_2)$, which were studied respectively by Parker and Will in \cite{ParkerWill} and by Falbel, Guilloux, Koseleff, Rouillier and Thistlethwaite in \cite{character_sl3c}.
First, consider the $15$ isolated points of $\mathcal{X}_{\sl3c}(\z3z3)$, which are all in $\mathrm{Fix}(\Phi_2)$. They correspond to totally reducible representations. Since a matrix of order 3 is conjugate to a matrix in $\su21$ and to a matrix in $\mathrm{SU}(3)$, we have the following remark:
\begin{rem}
The points of $\mathrm{Fix}(\Phi_2)$ corresponding to totally reducible representations are all in $\mathcal{X}_{\su21}(\z3z3) \cap \mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$.
\end{rem}
It remains to consider the representations of $X_0 \cap \mathrm{Fix}(\Phi_2)$. Proposition \ref{prop_inter_charvar_reelle} ensures that the points corresponding to irreducible representations lie in exactly one of the character varieties $\mathcal{X}_{\su21}(\z3z3)$ and $\mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$. For the points of $X_0$ corresponding to reducible representations, we slightly modify the proof of Proposition \ref{prop_rep_red_z3z3} to obtain the following remark:
\begin{rem}\label{rem_tot_red_z3z3}
The points of $\mathrm{Fix}(\Phi_2) \cap X_0$ corresponding to reducible representations are in $\mathcal{X}_{\su21}(\z3z3)$. Only some of them are in $\mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$.
\end{rem}
\begin{proof}
A reducible representation $\rho$ with character in $X_0$ is conjugate to a representation given by
\[
S = \omega^i \begin{pmatrix}
S' & \\
& 1
\end{pmatrix}
\text{ and }
T = \omega^j \begin{pmatrix}
T' & \\
& 1
\end{pmatrix}
\]
with $i,j \in \{0,1,2\}$, and $S', T' \in \mathrm{SL}_2(\mathbb{C})$ of order 3 (and trace $-1$). Since $\chi_\rho \in \mathrm{Fix}(\Phi_2)$, the representation $\rho' : \z3z3 \rightarrow \mathrm{SL}_2(\mathbb{C})$ given by $\rho' (s) = S'$ and $\rho' (t) = T'$ has its character in $\mathrm{Fix}(\Phi_2) \subset \mathcal{X}_{\mathrm{SL}_2(\mathbb{C})}(\z3z3)$. If $\rho$ is totally reducible, then, as noted above, $\chi_\rho \in \mathcal{X}_{\su21}(\z3z3) \cap \mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$. If not, $\rho'$ is irreducible and, by Proposition \ref{prop_invol_traces}, possibly after conjugating $\rho'$, we have $S',T' \in \mathrm{SU}(2)$ or $S',T' \in \mathrm{SU}(1,1)$. If $S',T' \in \mathrm{SU}(2)$, then $S,T \in \su21 \cap \mathrm{SU}(3)$. If, on the other hand, $S',T' \in \mathrm{SU}(1,1)$, then $S,T \in \su21$. It remains to see that the second case does occur. The point $(4,-1,7) \in X_0 \cap \mathrm{Fix}(\Phi_2)$ corresponds to a reducible representation containing an element of trace $4$, so it cannot take values in $\mathrm{SU}(3)$.
\end{proof}
Furthermore, since an irreducible representation cannot take values in both $\su21$ and $\mathrm{SU}(3)$ at the same time, we obtain the following proposition:
\begin{prop}We have
\[\mathrm{Fix}(\Phi_2) = \mathcal{X}_{\su21}(\z3z3) \cup \mathcal{X}_{\mathrm{SU}(3)}(\z3z3).\]
The subsets $\mathcal{X}_{\su21}(\z3z3)$ and $\mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$ are non-empty and intersect only at points corresponding to reducible representations.
\end{prop}
Finally, we draw some slices of $\mathrm{Fix}(\Phi_2)$, obtained by projecting onto the coordinates $(z,w)$ and then restricting to a slice of the form $z=z_0$ or $w = w_0$.
Recall that the projection onto the coordinates $(z,w)$ is a double cover outside the level set $\Delta(z,w) = 0$, where points have a unique pre-image.
We draw, in a plane of the form $(z,w_0)$, the curve $\Delta(z,w_0)=0$, and then identify the regions contained in $\mathcal{X}_{\su21}(\z3z3)$ and those contained in $\mathcal{X}_{\mathrm{SU}(3)}(\z3z3)$.
\subsubsection{The Parker-Will slice}
In their article \cite{ParkerWill}, Parker and Will give an explicit parametrization of the representations of $\z3z3 = \langle s,t \rangle$ taking values in $\su21$ such that the image of $st$ is unipotent. This corresponds exactly to the representations for which the trace of the image of $st$ equals $3$. They form a family of representations of the fundamental group of the Whitehead link complement containing the holonomy representation of a \CR {} uniformization of the manifold. This particular representation has coordinates $(z,w,x) = (3,3,\frac{15 + 3i\sqrt{15}}{2})$. We show this slice in figure \ref{fig_tranche_parker_will}. We see three lobes corresponding to representations taking values in $\su21$, which intersect at a singular point with coordinate $z=0$, corresponding to a totally reducible representation of coordinates $(z,w,x)= (0,3,3)$. Going back to the coordinates $(z,w,x)$ on $X_0 \cap \mathrm{Fix}(\Phi_2)$, the representations of this slice form, topologically, three spheres touching at a single point.
\begin{figure}
\caption{The Parker-Will slice of $\mathrm{Fix}(\Phi_2)$.}
\label{fig_tranche_parker_will}
\end{figure}
\subsubsection{The Thistlethwaite slice}
In the last section of their article \cite{character_sl3c}, Falbel, Guilloux, Koseleff, Roullier and Thistlethwaite give an explicit parametrization of representations lifting the irreducible components $R_1$, $R_2$ and $R_3$ of $\mathcal{X}_{\sl3c}(\Gamma_8)$, as we saw in Subsection \ref{sous_sect_char_sl3c_m8}. They also give necessary and sufficient conditions for a representation to take values in $\su21$ or $\mathrm{SU}(3)$: they therefore parametrize lifts of the intersections of $R_1$ and $R_2$ with $\mathcal{X}_{\su21}(\Gamma_8)$ and $\mathcal{X}_{\mathrm{SU}(3)}(\Gamma_8)$.
Recall that the fundamental group of the figure eight knot complement has the following presentation:
\[\Gamma_8 = \langle g_1, g_2 , g_3 \mid g_2 = [g_3 , g_1^{-1}] , g_1 g_2 = g_2 g_3 \rangle \]
As noticed by Deraux in \cite{deraux_uniformizations} and by Parker and Will in \cite{ParkerWill}, if $G_1$, $G_2$ and $G_3$ are the images of $g_1,g_2$ and $g_3$ respectively under a representation with character in $R_2$, then $(G_1G_2)^{3}=(G_1^2G_2)^3 = G_2^4 = \mathrm{Id}$. Setting $T = (G_1G_2)^{-1}$ and $S = G_1^2G_2$, we obtain two elements of $\sl3c$ of order 3 which generate the image of the representation, since $G_1 = ST$, $G_3 = TS$ and $G_2 = (TST)^{-1} = (TST)^{3}$. Hence we can consider $R_2 \subset \mathcal{X}_{\sl3c}(\z3z3)$: this component corresponds to the slice of coordinate $w = 1$, since $TST$ has order 4 if and only if $\mathrm{tr}(TST) = \mathrm{tr}(ST^{2}) = \mathrm{tr}(ST^{-1}) = 1$. We show this slice in figure \ref{fig_tranche_fgkrt}. It has three regions of representations taking values in $\su21$ and one region of representations taking values in $\mathrm{SU}(3)$. They intersect at three singular points, corresponding to reducible representations.
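For completeness, the identities $G_1 = ST$, $G_3 = TS$ and $G_2 = (TST)^{-1}$ can be verified directly (a short computation which we include here):
\[
ST = (G_1^{2}G_2)(G_1G_2)^{-1} = G_1^{2}G_2G_2^{-1}G_1^{-1} = G_1,
\qquad
TS = (G_1G_2)^{-1}(G_1^{2}G_2) = G_2^{-1}G_1G_2 = G_3,
\]
where the last equality is the relation $g_1g_2 = g_2g_3$ rewritten as $G_3 = G_2^{-1}G_1G_2$; moreover $TST = T(ST) = (G_1G_2)^{-1}G_1 = G_2^{-1}G_1^{-1}G_1 = G_2^{-1}$, so that $G_2 = (TST)^{-1}$.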
Going back to the coordinates $(z,x)$ on the slice $w=1$ of $X_0 \cap \mathrm{Fix}(\Phi_2)$, these regions are the images of four topological spheres which intersect at three points.
\begin{figure}
\caption{The slice of Falbel, Guilloux, Koseleff, Roullier and Thistlethwaite of $\mathrm{Fix}(\Phi_2)$.}
\label{fig_tranche_fgkrt}
\end{figure}
\subsubsection{Other remarkable slices}
Finally, to complete the picture, we describe three more slices of $X_0 \cap \mathrm{Fix}(\Phi_2)$. Recall that, by Proposition \ref{prop_rep_irr_lisses}, a slice of the form $w=w_0$ can only have singular points if $w_0^3 \in \mathbb{R}$. On the one hand, in figure \ref{fig_tranches_z3z3_legende}, we see the slices $w = 3.5$ and $w=3.5 + 0.1 i$. In each one there are three regions corresponding to irreducible representations taking values in $\su21$, which, in the slice $w = 3.5$, intersect at three points corresponding to reducible representations. There are no points corresponding to representations with values in $\mathrm{SU}(3)$.
\begin{figure}
\caption{The slice $w_0= 3.5$. There are three singular points.}
\caption{The slice $w_0 = 3.5 + 0.1i$. The region is smooth.}
\caption{The slices $w_0= 3.5$ and $w_0 = 3.5 + 0.1i$.}
\label{fig_tranches_z3z3_legende}
\end{figure}
On the other hand, in figure \ref{fig_tranche_z3z3_1_legende}, we see the slice $w= 1 + 0.1i$: there are three regions corresponding to irreducible representations taking values in $\su21$ and one region corresponding to representations taking values in $\mathrm{SU}(3)$.
\begin{figure}
\caption{The slice $w_0 = 1 + 0.1i$. The region is smooth.}
\label{fig_tranche_z3z3_1_legende}
\end{figure}
\end{document} |
\begin{document}
\title{On Deligne's functorial Riemann-Roch theorem in positive characteristic}
\author{\\Quan XU\\Universit\'{e} Paul Sabatier}\maketitle
\footnote{Email to: [email protected]}
\begin{abstract}
In this note, we prove a variant of the functorial Deligne-Riemann-Roch theorem in positive characteristic, based on
ideas appearing in Pink and R\"ossler's proof of the Adams-Riemann-Roch theorem in positive characteristic (see \cite{Pi}).
Their method, which is valid in any positive characteristic and completely different from the
classical proof, allows us to prove the functorial Deligne-Riemann-Roch theorem in a much easier and more direct way.
Our result is also partially compatible with Mumford's isomorphism.
\end{abstract}
\section{Introduction}
In \cite{Pi}, Richard Pink and Damian R\"{o}ssler proved the following version of the Adams-Riemann-Roch theorem:
Let $f:X\rightarrow Y$ be projective and smooth of relative dimension $r$, where
$Y$ is a quasi-compact scheme of characteristic $p>0$ and carries an ample invertible sheaf. Then the following equality
$$\psi^{p}(R^{\bullet}f_{*}(E))=R^{\bullet}f_{*}(\theta ^{p}(\Omega_{f})^{-1}\otimes\psi^{p}(E))~~~~~(*)$$
holds in $K_{0}(Y)[\frac{1}{p}]:=K_{0}(Y)\otimes_{\mathbb{Z}}\mathbb{Z}[\frac{1}{p}].$
The symbols in the previous equality are explained briefly as follows:

(a) The symbol $\psi^{p}$ is the $p$-th Adams operation and $\theta ^{p}$ is the $p$-th Bott class operation (see Sect. 3.2 and 3.3);

(b) For a vector bundle $E$, $R^{\bullet}f_{*}(E)=\sum_{i\geq 0}(-1)^{i}R^{i}f_{*}(E)$, where $R^{i}f_{*}$ is the $i$-th higher direct image functor of the push-forward $f_{*}$ (see Sect. 2.1);

(c) For a quasi-compact scheme $Y$, $K_{0}(Y)$ is the Grothendieck group of locally free coherent sheaves of $\mathcal{O}_{Y}$-modules (see Sect. 2.1);

(d) The symbol $\Omega_{f}$ denotes the sheaf of relative differentials of the morphism $f$. When $f$ is a smooth and projective morphism, $\Omega_{f}$ is a locally free sheaf (see \cite{Hart}).
In the general case, i.e., for a projective local complete intersection morphism $f$ (see \cite{FL}, Pag. 86) with no restriction on the characteristic, $p$ can be replaced by any positive integer $k\geq2$ in the equality (*),
and $\Omega_{f}$ is replaced by the cotangent complex of the morphism $f$ (see \cite{Lit}). The equality then holds in
$K_{0}(Y)\otimes_{\mathbb{Z}}\mathbb{Z}[\frac{1}{k}]$. The classical proof of the Adams-Riemann-Roch theorem for a projective local complete intersection morphism
consists in verifying that the theorem holds for closed immersions, for projections, and for their composition; moreover,
the deformation to the normal cone is used (see \cite{Man} or \cite{FL}). However, in characteristic $p$, both the decomposition
of the projective morphism and the deformation are completely avoided in the proof of the $p$-th Adams-Riemann-Roch theorem, i.e., the equality $(*)$, when $f$ is a projective and smooth morphism. In this case, Pink and R\"ossler construct an explicit representative
of the $p$-th Bott element $\theta ^{p}(E)=\tau(E):=\text{Sym}(E)/\mathcal{J}_{E}$ in $K_{0}(Z)$ for any locally free coherent sheaf $E$ over a quasi-compact scheme $Z$ of characteristic
$p$, where $\text{Sym}(E)$ is the symmetric algebra of $E$ and $\mathcal{J}_{E}$ is the graded sheaf of ideals of $\text{Sym}(E)$ locally generated
by the sections $e^{p}$ of $\text{Sym}(E)$, for all sections $e$ of $E$ (see Sect. 3.2).
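To illustrate this construction, consider the simplest case (our example, not spelled out above): if $E = L$ is a line bundle, then $\mathrm{Sym}(L) = \bigoplus_{i\geq 0} L^{\otimes i}$ and $\mathcal{J}_{L}$ is the ideal $\bigoplus_{i\geq p} L^{\otimes i}$ generated by the $p$-th powers of sections, so
\[
\tau(L) = \mathrm{Sym}(L)/\mathcal{J}_{L} \cong \bigoplus_{i=0}^{p-1} L^{\otimes i},
\]
whose class in $K_{0}(Z)$ is $1 + [L] + \cdots + [L]^{p-1}$, in accordance with the usual formula for the Bott class of a line bundle.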
Furthermore, they prove an isomorphism of $\mathcal{O}_{X}$-modules
$$I/ I^{2}\cong \Omega_{f},$$
and an isomorphism of graded $\mathcal{O}_{X}$-algebras
$$\tau(I/ I^{2})\cong Gr(F^{*}F_{*}\mathcal{O}_{X})$$
(the notation will be explained in Section \ref{section ARR char p}). These isomorphisms play an essential role in their proof
of the Adams-Riemann-Roch theorem in positive characteristic, and they are also crucial for our result.
When $f:C\rightarrow S$ is a smooth family of proper curves, Deligne proved the following functorial Riemann-Roch theorem (see \cite{Del}, Theorem 9.9):
There exists a unique, up to sign, functorial isomorphism of line bundles
\begin{align}
(\det &Rf_{*}L)^{\otimes 18 }\notag\\
&\cong (\det Rf_{*}\mathcal{O})^{\otimes 18 }\otimes (\det Rf_{*}(L^{\otimes 2}\otimes
\omega^{-1}))^{\otimes 6}\otimes (\det Rf_{*}(L\otimes \omega^{-1}))^{\otimes (-6)}.\notag
\end{align}
The statement above is not Deligne's original statement, but the essence is the same. We will explain it in Thm. \ref{Del,func} and Rem. \ref{explai}.
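As a consistency check (ours, not part of Deligne's argument), one may verify that both sides of this isomorphism have the same first Chern class, as predicted by the Grothendieck-Riemann-Roch theorem. For a line bundle $M$ on $C$ with $m = c_{1}(M)$, and with $k = c_{1}(\omega)$, Grothendieck-Riemann-Roch gives $c_{1}(\det Rf_{*}M)= f_{*}\big(\tfrac{m^{2}}{2}-\tfrac{mk}{2}+\tfrac{k^{2}}{12}\big)$. Writing $a = c_{1}(L)$, the left-hand side contributes $f_{*}\big(9a^{2}-9ak+\tfrac{3}{2}k^{2}\big)$, while the right-hand side contributes
\[
f_{*}\Big(\tfrac{3}{2}k^{2}+\big(12a^{2}-18ak+\tfrac{13}{2}k^{2}\big)-\big(3a^{2}-9ak+\tfrac{13}{2}k^{2}\big)\Big)
= f_{*}\big(9a^{2}-9ak+\tfrac{3}{2}k^{2}\big),
\]
since $c_{1}(L^{\otimes 2}\otimes\omega^{-1}) = 2a-k$ and $c_{1}(L\otimes\omega^{-1}) = a-k$. The two sides therefore agree, as they must.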
In this note, our strategy is to establish a similar isomorphism in positive characteristic for a line bundle. More precisely, we will provide an isomorphism between $(\det Rf_{*}L)^{\otimes p^{4}}$ and a tensor product of terms of the form $(\det Rf_{*}(L^{\otimes l}\otimes \omega ^{\otimes n}))^{\otimes m}$, for any line bundle $L$ and certain integers $l, n, m$, in any characteristic $p>0$, using only
a property of the Deligne pairing from \cite{Del} together with some ideas from the proof of the Adams-Riemann-Roch theorem in positive characteristic
given in \cite{Pi}. More importantly, the isomorphism is stable under base change. This is our main result (see Theorem \ref{functor}).
A byproduct of our result is a partial compatibility with Mumford's isomorphism. We will give a brief introduction to Mumford's isomorphism and verify the
compatibility (see Cor. \ref{mfi}).
Furthermore, we also compare Deligne's functorial Riemann-Roch theorem with our result.
In characteristic $p=2$, our result coincides exactly with Deligne's theorem; when the characteristic is an odd prime, our theorem is only an analogue of Deligne's.
The likely reason is that our setting is specific to characteristic $p$, whereas Deligne's work in \cite{Del} is independent of the characteristic.
\section{Preliminaries}
Throughout this note, whenever a scheme is mentioned, we assume that it is quasi-compact, unless stated otherwise.
\subsection{Grothendieck groups and the virtual category}
Let $X$ be a scheme. We denote by $\mathcal{I}$ the category of coherent sheaves on $X$ and
by $\mathcal{L}$ its full subcategory of locally free
sheaves. Furthermore, let $\mathbb{Z}[\mathcal{I}]$ (respectively $\mathbb{Z}[\mathcal{L}]$) denote the free abelian group generated by the isomorphism classes $[\mathcal{F}]$ of objects $\mathcal{F}$ in the category
$\mathcal{I}$ (respectively $\mathcal{L}$).
For each exact sequence $0\rightarrow \mathcal{F}_{1}\rightarrow\mathcal{F}_{2}\rightarrow\mathcal{F}_{3}\rightarrow 0$
of sheaves in $\mathcal{I}$ (respectively $\mathcal{L}$), we form the element $[\mathcal{F}_{2}]-[\mathcal{F}_{1}]-[\mathcal{F}_{3}]$ and consider the subgroup $\mathcal{J}$ (respectively $\mathcal{J}_{1}$) generated by such elements
in $\mathbb{Z}[\mathcal{I}]$ (respectively $\mathbb{Z}[\mathcal{L}]$).
\begin{defi}\label{b. 1}
We define $K_{0}(X):=\mathbb{Z}[\mathcal{L}]/\mathcal{J}_{1}$ and $K_{0}^{'}(X):=\mathbb{Z}[\mathcal{I}]/\mathcal{J}$. \\
Usually, $K_{0}(X)$ and $K_{0}^{'}(X)$ are called the Grothendieck groups of locally free sheaves and coherent sheaves on the scheme $X$, respectively.
\end{defi}
The following are basic facts about Grothendieck groups:
(1) The tensor product of $\mathcal{O}_{X}$-modules makes the group $K_{0}(X)$ into a commutative unitary ring and the inverse image of locally free sheaves under
any morphism of schemes $X^{'}\rightarrow X$ induces a morphism of unitary rings $K_{0}(X)\rightarrow K_{0}(X^{'})$ (see \cite{Man}, \S 1);
(2) The obvious group morphism $K_{0}(X)\rightarrow K_{0}^{'}(X)$ is an isomorphism if $X$ is regular and carries an ample invertible sheaf (see \cite{Man}, Th. I.9);
(3) Let $f:X\rightarrow Y$ be a projective local complete intersection morphism of schemes (a morphism $f
: X\rightarrow Y$ is called a local complete intersection morphism if $f$ is a composition of morphisms $X\rightarrow \text{P} \rightarrow Y$, where the first morphism is a regular embedding and the second is a smooth morphism; see \cite{Ful} or \cite{FL}), and suppose $Y$ carries an ample invertible sheaf. There is
a unique group morphism $\text{R}^{\bullet}f_{*}:K_{0}(X)\rightarrow K_{0}(Y)$ which sends the class of a locally free coherent sheaf $E$ on $X$ to the class of the strictly perfect complex (strictly perfect complexes will be defined in Sect. 2.2) $\text{R}^{\bullet}f_{*}E$ in $K_{0}(Y)$, where $\text{R}^{\bullet}f_{*}E$ is defined
to be $\sum_{i\geq 0}(-1)^{i}\text{R}^{i}f_{*}E$ and $\text{R}^{i}f_{*}E$ is viewed as an element of $K_{0}(Y)$ (see \cite{Berth}, IV, 2.12).
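As a basic illustration of (3) (our example), let $k$ be a field and let $f:\mathbb{P}^{1}_{k}\rightarrow \mathrm{Spec}\, k$ be the structure morphism. For $E = \mathcal{O}(n)$ with $n\geq 0$ we have $\text{R}^{0}f_{*}\mathcal{O}(n)\cong k^{n+1}$ and $\text{R}^{1}f_{*}\mathcal{O}(n)=0$, so
\[
\text{R}^{\bullet}f_{*}[\mathcal{O}(n)] = n+1 \in K_{0}(\mathrm{Spec}\, k)\cong \mathbb{Z},
\]
which is just the Euler characteristic $\chi(\mathbb{P}^{1}_{k},\mathcal{O}(n))$.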
In \cite{Del}, Deligne defined a categorical refinement of the Grothendieck group. In order to define it, we
need to review some material from the theory of exact categories. First, let us recall the definitions of additive and abelian categories.
\begin{defi}
An additive category is a category $\mathcal{A}$ in which $\text{Hom}(A,B)$ is an abelian group for all objects $A, B$, composition of arrows is bilinear, and
$\mathcal{A}$ has (finite) direct sums and a zero object. An abelian category is an additive category in which every arrow $f$ has a kernel, co-kernel, image and co-image, and the canonical
map $\text{coim}(f)\rightarrow \text{im}(f)$ is an isomorphism.
\end{defi}
Let $\mathcal{A}$ be an additive category. A short sequence in $\mathcal{A}$ is a pair of
composable morphisms $L\rightarrow M\rightarrow N$ such that $L\rightarrow M $ is a kernel for $M\rightarrow N$
and $M\rightarrow N$ is a cokernel for $L \rightarrow M$. Homomorphisms of short sequences are defined in the obvious way as commutative diagrams.
\begin{defi}
An exact category is an additive category $\mathcal{A}$ together with a choice $S$ of
a class of short sequences, called short exact sequences, closed under isomorphisms and satisfying the axioms below. A short exact sequence is displayed as
$L \rightarrowtail M \twoheadrightarrow N$, where $L \rightarrowtail M$ is called an admissible monomorphism and $M\twoheadrightarrow N$ is
called an admissible epimorphism. The axioms are the following:
(1) The identity morphism of the zero object is an admissible monomorphism
and an admissible epimorphism.
(2) The class of admissible monomorphisms is closed under composition and under
cobase change by push-out along arbitrary morphisms, i.e., given any admissible monomorphism $L\rightarrowtail M$ and any morphism $L\rightarrow L^{'}$, their push-out $M^{'}$ exists and the induced morphism $L^{'} \rightarrow M^{'}$ is again an admissible
monomorphism.
$$\xymatrix{L\ar @{>->}[r]\ar[d]&M\ar@ {-->}[d]\\
L^{'}\ar@{>-->}[r]& M^{'}}$$
(3) Dually, the class of admissible epimorphisms is closed under composition
and under base change by pull-back along arbitrary morphisms, i.e., given any
admissible epimorphism $M\twoheadrightarrow
N$ and any morphism $N^{'}\rightarrow N$, their pull-back
$M^{'}$ exists and the induced morphism $M^{'}\rightarrow N^{'}$ is again an admissible
epimorphism.
$$\xymatrix{M^{'}\ar @{-->}[r]\ar @{-->>}[d]&M\ar @{->>}[d]\\
N^{'}\ar[r]& N
}$$
\end{defi}
\begin{exam}
Any abelian category is an exact category in an evident way. Any additive category can be made into an exact category in at least one way, by taking $S$ to be the family
of split exact sequences.
\end{exam}
In order to give the definition of the virtual category, we shall need the notion of a groupoid, and especially that of a Picard category (see \cite{Del}, \S 4).
\begin{defi}
A groupoid is a (small) category in which all morphisms are invertible.
\end{defi}
This means there is a set $B$ of objects, usually called the base, and a set $G$ of morphisms, usually called the arrows. One
says that $G$ is a groupoid over $B$ and writes $\xymatrix{G\ar@<+.7ex> [r]\ar@<-.7ex>[r]& B}$ or just $G$ when the base is understood.
We can be much more explicit about the structure of a groupoid. To begin with, each arrow has an associated source object and associated
target object. This means that there are two maps $$s, t: G\rightarrow B$$ called the source and the target, respectively. Since a groupoid
is a category, there is a multiplication of arrows $$m: G\times_{B} G\rightarrow G$$ where $G\times_{B} G$ fits into the pull-back
square: $$\xymatrix{G\times_{B} G\ar[d]\ar[r]& G\ar[d]^{s}\\
G\ar[r]^{t}&B }$$
More explicitly, $$G\times_{B} G=\{(h,g)\in G\times G\lvert s(h)=t(g)\}=(s\times t)^{-1}(\Delta_{B}).$$
This is just to say that we can only compose arrows when the target of the first and source of the second agree. This multiplication preserves
sources and targets: $$ s(hg)=s(g), t(hg)=t(h),$$ and is associative:
$$k(hg)=(kh)g.$$
For each object $x\in B$, there is an identity arrow, written $1_{x}\in G$ and this association defines an injection $$ \mathbf{1}: B\hookrightarrow G.$$\\
For each arrow $g\in G$, there is an inverse arrow, written $g^{-1}\in G$, and this defines a bijection $$\iota : G\rightarrow G.$$\\
These identities and inverses satisfy the usual properties. Namely, identities work as expected: $$1_{t(g)}g=g=g1_{s(g)},$$
inversion swaps sources and targets: $$s(g^{-1})=t(g),~t(g^{-1})=s(g),$$
and inverses work as expected with respect to the identities: $$g^{-1}g=1_{s(g)},~ gg^{-1}=1_{t(g)}.$$
Thus we have a set of maps between $B$ and $G$ as follows:$$\xymatrix{ B\ar @{.>}[r]&G\circlearrowleft\iota\ar@<+.9ex> [l]^{s}\ar@<-.9ex>[l]_{t}}$$
\begin{exam}Any set $X$ can be viewed as a groupoid over itself, where the only arrows are identities. This is the trivial groupoid, or unit groupoid,
and is simply written $X$. The source and target maps are the identity map $\text{id}_{X}$, and multiplication is only defined between a point and itself: $$xx=x.$$
\end{exam}
\begin{exam} Any set $X$ gives rise to the pair groupoid of $X$. The base is $X$, and the set of arrows is $X\times X \rightrightarrows X.$ The source and target
maps are the first and second projection maps. Multiplication is defined as follows: $(x,x^{'})(x^{'},x^{''})=(x,x^{''}).$
\end{exam}
\begin{defi}
A Picard category is a groupoid $P$ together with the following extra structure:\\
1. A functor $+ : P\times P\rightarrow P.$\\
2. An isomorphism of functors: $\xymatrix{&P\times P \times P\ar[dl]_{+\times Id}\ar[dr]^{Id\times +}&\\
P\times P\ar[dr]^{+}& &P\times P\ar[dl]_{+}\\
& P & }$
$ \sigma_{x,y,z}: (x+y)+z\simeq x+(y+z).$\\
3. A natural transformation $\tau_{x,y}: x+y\simeq y+x$ commuting with $+$.\\
4. For all $x\in P$, the functor $P\rightarrow P$ by $y\mapsto x+y$ is an equivalence.\\
5. Pentagon Axiom: The following diagram commutes\\
$$\xymatrix{ & (x+y)+(z+w)\ar[ddl]^{\sigma_{x,y,z+w}}& \\
& & \\
x+(y+(z+w)) & & ((x+y)+z)+w\ar[uul]_{\sigma_{x+y,z,w}}\ar[dd]_{\sigma_{x,y,z}}\\
& &\\
x+((y+z)+w)\ar[uu]_{\sigma_{y,z,w}}& &(x+(y+z))+w.\ar[ll]^{\sigma_{x,y+z,w}} }$$
6. $\tau_{x,x}=\mathrm{id}$ for all $x\in P$.\\
7. For all $x,y\in P$, $\tau_{x,y}\tau_{y,x}=\mathrm{id}$.\\
8. Hexagon Axiom: The following diagram commutes:\\
$$\xymatrix{ & x+(y+z)\ar[r]^{\tau}& x+(z+y)\\
(x+y)+z\ar[ur]^{\sigma}& & & (x+z)+y\ar[ul]_{\sigma}\\
& z+(x+y)\ar[ul]^{\tau}\ar[r]^{\sigma}& (z+x)+y\ar[ur]^{\tau}& }.$$
\end{defi}
\begin{exam}\label{lbd}
Let $X$ be a scheme. We denote by $\mathscr{P}_{X}$ the category of graded invertible
$\mathcal{O}_{X}$-modules. An object of $\mathscr{P}_{X}$ is a pair $(L,\alpha)$ where $L$ is an invertible $\mathcal{O}_{X}$-module and $\alpha$
is a continuous function: $$\alpha: X\rightarrow \mathbb{Z}.$$
A homomorphism $h:(L,\alpha)\rightarrow (M,\beta)$ is a homomorphism of $\mathcal{O}_{X}$-modules such that for each $x\in X$ we have:
$$\alpha(x)\neq\beta(x)\Rightarrow h_{x}=0.$$
We denote by $\mathscr{P}is_{X}$ the subcategory of $\mathscr{P}_{X}$ whose morphisms are all the isomorphisms.
The tensor product of two objects in $\mathscr{P}_{X}$ is given by: $$(L,\alpha)\otimes(M,\beta)=(L\otimes M, \alpha+\beta).$$
For each pair of objects $(L,\alpha), (M,\beta)$ in $\mathscr{P}_{X}$ we have an isomorphism:
$$\xymatrix{\psi_{(L,\alpha), (M,\beta)}: (L,\alpha)\otimes(M,\beta)\ar[r]^(.60){\sim}&(M,\beta)\otimes(L,\alpha) }$$
defined as follows: If $l\in L_{x}$ and $m\in M_{x}$ then
$$\psi(l\otimes m)=(-1)^{\alpha(x)\beta(x)}\cdot m\otimes l.$$\\
Clearly: $$\psi_{(M,\beta),(L,\alpha)}\cdot\psi_{(L,\alpha), (M,\beta)}=1_{(L,\alpha)\otimes(M,\beta)}$$\\
We denote by $1$ the object $(\mathcal{O}_{X},0)$. A right inverse of an object $(L,\alpha)$ in $\mathscr{P}_{X}$ will be an object $(L^{'},\alpha^{'})$
together with an isomorphism $$\xymatrix{\delta: (L,\alpha)\otimes(L^{'},\alpha^{'})\ar[r]^(.70){\sim}& 1}$$\\
Of course $\alpha^{'}=-\alpha$.
A right inverse will be considered as a left inverse via:
$$\xymatrix{\delta: (L^{'},\alpha^{'})\otimes(L,\alpha)\ar[r]_(.55){\sim}^(.55){\psi}& (L,\alpha)\otimes(L^{'},\alpha^{'})\ar[r]_(.70){\sim}^(.70){\delta}&1 }.$$
Further verification then shows that $\mathscr{P}is_{X}$ is a Picard category.
\end{exam}
After defining the exact category, we can give the definition of Deligne's virtual category.
By an admissible filtration in an exact category we mean a finite sequence of admissible monomorphisms $0=A^{0}\rightarrowtail A^{1}\rightarrowtail
\cdots \rightarrowtail A^{n} = C.$
\begin{defi}\label{vircon}(see \cite{Del}, Pag. 115)
The virtual category $V(\mathcal{C})$ of an exact category $\mathcal{C}$ is
a Picard category, together with a functor $\{~\} : (\mathcal{C}, iso)\rightarrow V(\mathcal{C})$ (Here, the
first category is the subcategory of $\mathcal{C}$ consisting of the same objects and the
morphisms are the isomorphisms of $\mathcal{C}$.), with the following universal property:\\
Suppose we have a functor $[~] : (\mathcal{C}, iso)\rightarrow P$ where $P$ is a Picard category,
satisfying
(a) Additivity on exact sequences, i.e., for an exact sequence $A \rightarrow B \rightarrow C$ (where $A\rightarrow B$ is an admissible monomorphism and $B\rightarrow C$ is an admissible epimorphism),
we have an isomorphism $[B]\cong[A]+[C]$, functorial with respect to isomorphisms of exact sequences.
(b) A zero-object of $\mathcal{C}$ is isomorphically mapped to a zero-object in $P$ (by (4) in the definition of a Picard category, a
unit object, also called a zero-object, exists; see \cite{Del}, \S 4.1).
(c) The additivity on exact sequences is compatible with admissible filtrations, i.e., for an admissible filtration $C\supset B\supset A\supset 0$,
the diagram of isomorphisms from (a)
$$\xymatrix{[C]\ar[rr]\ar[d]& & [A]+ [C/A]\ar[d]\\
[B]+[C/B]\ar[rr] & & [A]+[B/A]+[C/B] }$$
is commutative.
(d) If $f: A\rightarrow B$ is an isomorphism and $\Sigma$ is the exact sequence $0\rightarrow A\rightarrow B$ (resp. $A\rightarrow B\rightarrow 0$), then $[f]$ (resp. $[f]^{-1}$)
is the composition
$$\xymatrix{[A]\ar[r]_(.35){\Sigma}&[0]+[B]\ar[r]_(.6){(b)}&[B] }$$
$$(\text{resp. } \xymatrix{[B]\ar[r]_(.35){\Sigma}&[A]+[0]\ar[r]_(.6){(b)}&[A]})$$
where (b) in the diagram above means that the morphism is from (b).\\
Then the conclusion is that the functor $[~] :(\mathcal{C}, iso) \rightarrow P$ factors uniquely up to
unique isomorphism through $(\mathcal{C}, iso)\rightarrow V(\mathcal{C})$.
\end{defi}
Roughly speaking, for an exact category $\mathcal{C}$, $V(\mathcal{C})$ is a universal Picard category equipped with a functor $[~]$ satisfying the properties above. In practice, the functor $[~]$ can usually be taken to be the determinant functor that we
define in the next subsection.
\begin{rem}\label{v,k}
In \cite{Del}, Deligne also provided a topological definition of the virtual category of a small exact category.
The category of virtual objects of $\mathcal{C}$, $V(\mathcal{C})$, is the following: objects
are loops in $\mathcal{B}Q\mathcal{C}$ around a fixed base-point, and morphisms are homotopy classes of homotopies
of loops. Here $\mathcal{B}Q\mathcal{C}$ is the geometric realization of the Quillen Q-construction of $\mathcal{C}$.
The addition is the usual addition of loops. This construction is the fundamental groupoid of the loop
space $\Omega\mathcal{B}Q\mathcal{C}$ of $\mathcal{B}Q\mathcal{C}$. By the description above and Quillen's definition of $K$-theory (see \cite{Quill}), the group of isomorphism classes of objects
of the virtual category is the usual Grothendieck group $K_{0}(X)$ of the category of vector bundles on $X$ (see \cite{Del}, Pag. 114).
\end{rem}
\subsection{The determinant functor}\label{sdf}
In this subsection, we consider the determinant functor, mainly following \cite{Kund}. In \cite{Kund}, the determinant functor is defined in several settings,
but the case we are most interested in is the determinant functor from a subcategory of the derived category to the subcategory $\mathscr{P}is_{X}$ of the category $\mathscr{P}_{X}$ of graded line bundles.
In the following, we denote by $\mathscr{C}_{X}$ the category of finite locally free $\mathcal{O}_{X}$-modules on a scheme $X$.
\begin{defi}
If $F\in \text{ob}(\mathscr{C}_{X})$, we define: $\det ^{*}(F)=(\wedge^{max}F, \text{rank}\,F)$ \\ (where $(\wedge^{max}F)_{x}=\wedge^{\text{rank}F_{x}}F_{x}$).
\end{defi}
For every short exact sequence of objects in $\mathscr{C}_{X}$
$$\xymatrix{0\ar[r]&F^{'}\ar[r]^{\alpha}& F\ar[r]^{\beta}&F^{''}\ar[r]&0 }$$ we have an isomorphism,
$$\xymatrix{i^{*}(\alpha,\beta): \det^{*}F^{'}\otimes \det^{*}F^{''}\ar[r]^(.70){\sim}&\det^{*} F },$$ such that locally,
$$i^{*}(\alpha,\beta)((e_{1}\wedge\ldots\wedge e_{l})\otimes(\beta f_{1}\wedge\ldots\beta f_{s}))=\alpha e_{1}\wedge\ldots\alpha e_{l}\wedge f_{1}\wedge\ldots f_{s}$$
for $e_{i}\in\Gamma (U,F^{'})$ and $f_{j}\in\Gamma (U,F^{''})$.
\begin{defi}
If $F^{i}$ is an indexed object of $\mathscr{C}_{X}$ we define:
\[
\det(F^{i})=
\begin{cases}
\det^{*}(F^{i}) &\text{if $i$ even};\\
\det^{*}(F^{i})^{-1} &\text{if $i$ odd}.
\end{cases}
\]
If $$\xymatrix{ 0\ar[r]& F^{i^{'}}\ar[r]^{\alpha^{i}}&F^{i}\ar[r]^{\beta^{i}}&F^{i^{''}}\ar[r]&0} $$
is an indexed short exact sequence of objects in $\mathscr{C}_{X}$, we define
\[
i(\alpha^{i},\beta^{i})=
\begin{cases}
i^{*}(\alpha^{i},\beta^{i})&\text{if $i$ even};\\
i^{*}(\alpha^{i},\beta^{i})^{-1} &\text{if $i$ odd}.
\end{cases}
\]
Usually, for an object $F$ in $\mathscr{C}_{X}$, we view it as an object indexed by $0$, i.e., $\det(F)=\det^{*}(F)$.
\end{defi}
We also denote by $\mathscr{C}^{\cdot}_{X}$ the category of bounded complexes of objects of $\mathscr{C}_{X}$ over a scheme $X$.
\begin{defi}
If $F^{\cdot}$ is an object of $\mathscr{C}^{\cdot}_{X}$, we define $$\det(F^{\cdot})=\cdots\otimes\det(F^{i+1})\otimes\det(F^{i})\otimes\det(F^{i-1})\otimes\cdots$$
Furthermore, if $$\xymatrix{ 0\ar[r]& F^{\cdot'}\ar[r]^{\alpha}&F^{\cdot}\ar[r]^{\beta}&F^{\cdot''}\ar[r]&0} $$
is a short exact sequence of objects in $\mathscr{C}^{\cdot}_{X}$ we define
$$\xymatrix{i(\alpha,\beta):\det(F^{\cdot'})\otimes \det(F^{\cdot''})\ar[r]^(.70){\sim}& \det(F^{\cdot}) }$$
to be the composite:
$$\det(F^{\cdot'})\otimes \det(F^{\cdot''})=\cdots\otimes\det(F^{i'})\otimes\det(F^{i-1'})\otimes\cdots$$
$$\xymatrix{\otimes\det(F^{i''})\otimes\det(F^{i-1''})\otimes\cdots\ar[r]^(.55){\sim}&\cdots\otimes\det(F^{i'})\otimes\det(F^{i''}) }$$
$$\xymatrix @C=0.7in{\otimes\det(F^{i-1'})\otimes\det(F^{i-1''})\otimes\cdots\ar[r]^(.65){\otimes_{i}i(\alpha^{i},\beta^{i})}_(.65){\sim}&\cdots\otimes\det(F^{i}) }$$
$$\otimes\det(F^{i-1})\otimes\cdots=\det(F^{\cdot}) .$$
\end{defi}
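For example (our illustration), if $F^{\cdot}$ is a two-term complex concentrated in degrees $0$ and $1$, the definition reduces to
\[
\det(F^{\cdot}) = \det{}^{*}(F^{0})\otimes \det{}^{*}(F^{1})^{-1};
\]
in particular, if the differential $F^{0}\rightarrow F^{1}$ is an isomorphism $\varphi$, then $\det(\varphi)$ provides a trivialization $\det(F^{\cdot})\cong 1$, reflecting the fact that the determinant of an acyclic complex is canonically trivial.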
In \cite{Kund}, it is proved that there is one and, up to canonical isomorphism, only one determinant functor $(f,i)$ from $\mathscr{C}is_{X}$ (resp. $\mathscr{C}^{\cdot}is_{X}$) to $\mathscr{P}is_{X}$, which we write $(\det, i)$,
where $\mathscr{C}is_{X}$ (resp. $\mathscr{C}^{\cdot}is_{X}$) is the category with the same objects as $\mathscr{C}_{X}$ (resp. $\mathscr{C}^{\cdot}_{X}$) and with
morphisms all isomorphisms (resp. quasi-isomorphisms). To avoid repetition, we do not give the definitions of the determinant functor from $\mathscr{C}is_{X}$ (resp. $\mathscr{C}^{\cdot}is_{X}$) to $\mathscr{P}is_{X}$, since they are completely similar
to the following definition of the extended functor. For the precise definitions and proofs, see \cite{Kund}, Pag. 21-30.
In order to extend the determinant functor to the derived category as in \cite{Kund}, we need to recall the definitions of perfect and strictly perfect complexes.
In [11], a perfect complex $\mathcal{F}^{\cdot}$ on a scheme $X$ means a complex of $\mathcal{O}_{X}$-modules (not necessarily quasi-coherent) such
that locally on $X$ there exist a bounded complex $\mathcal{G}^{\cdot}$ of finite free $\mathcal{O}_{X}$-modules and a quasi-isomorphism
$$\mathcal{G}^{\cdot}\rightarrow\mathcal{F}^{\cdot}\mid_{U}$$ for any open subset $U$ of a covering of $X$. A strictly perfect complex $\mathcal{F}^{\cdot}$ on a scheme $X$
is a bounded complex of locally free $\mathcal{O}_{X}$-modules of finite type. In other words, a perfect complex is locally quasi-isomorphic to a strictly perfect complex.
We denote by $\text{Parf}_{X}$ the full subcategory of $\text{D(Mod}X)$ whose objects are perfect complexes and denote by $\text{Parf-is}_{X}$ the subcategory of
$\text{D(Mod}X)$ whose objects are perfect complexes and morphisms are only quasi-isomorphisms.
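A standard example (our illustration, not taken from \cite{Kund}): on $X = \mathrm{Spec}\, k[x]$, the structure sheaf of the closed point, $\mathcal{O}_{X}/(x)$, is not locally free, but it is a perfect complex, since it is quasi-isomorphic to the strictly perfect complex
\[
0\longrightarrow \mathcal{O}_{X}\xrightarrow{\ \cdot x\ } \mathcal{O}_{X}\longrightarrow 0
\]
concentrated in degrees $-1$ and $0$.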
\begin{defi}\label{ex,de}(see \cite{Kund}, Pag. 40)
An extended determinant functor $(f, i)$ from $\text{Parf-is}$ to $\mathscr{P}is$ consists of the following data:
I) For every scheme $X$, a functor
$$f_{X}: \text{Parf-is}_{X}\rightarrow\mathscr{P}is_{X}$$ such that $f_{X}(0)=1$.
II) For every short exact sequence of complexes $$ \xymatrix{ 0\ar[r]& F\ar[r]^{\alpha}&G\ar[r]^{\beta}&H\ar[r]&0} $$ in $\text{Parf-is}_{X}$,
we have an isomorphism:
$$i_{X}(\alpha,\beta):\xymatrix{f_{X}(F)\otimes f_{X}(H)\ar[r]^(.65){\sim}&f_{X}(G)}$$ such that for the particular short exact sequences
$$\xymatrix {0\ar[r]&H\ar@{=}[r]&H\ar[r]&0\ar[r]&0}$$
and
$$\xymatrix {0\ar[r]&0\ar[r]&H\ar@{=}[r]&H\ar[r]&0}$$
we have: $i_{X}(1,0)=i_{X}(0,1)=1_{f_{X}(H)}$.\\
We require that:
i) Given an isomorphism of short exact sequences of complexes
$$\xymatrix{0\ar[r]&F\ar[r]^{\alpha}\ar[d]^{u}&G\ar[r]^{\beta}\ar[d]^{v}&H\ar[r]\ar[d]^{w}&0 \\
0\ar[r]&F^{'}\ar[r]^{\alpha ^{'}}&G^{'}\ar[r]^{\beta ^{'}}&H^{'}\ar[r]&0 }$$
the diagram
$$\xymatrix{f_{X}(F)\otimes f_{X}(H)\ar[r]^(.65){i_{X}(\alpha,\beta)}_(.65){\sim}\ar[d]^{f(u)\otimes f_{X}(w)}_{\wr}&f(G)\ar[d]^{f_{X}(v)}_{\wr}\\
f(F^{'})\otimes f(H^{'})\ar[r]_(.65){i_{X}(\alpha^{'},\beta^{'})}^(.65){\sim} & f(G^{'}) }$$
commutes.
ii) Given an exact sequence of short exact sequences of complexes, i.e., a commutative diagram
$$\xymatrix{ & 0\ar[d]&0\ar[d]&0 \ar[d]\\
0\ar[r]&F\ar[r]^{\alpha}\ar[d]^{u}&G \ar[r]^{\beta}\ar[d]^{u^{'}}&H\ar[r]\ar[d]^{u^{''}}&0 \\
0\ar[r]&F^{'}\ar[r]^{\alpha^{'}}\ar[d]^{v}&G^{'}\ar[r]^{\beta^{'}}\ar[d]^{v^{'}}&H^{'}\ar[r]\ar[d]^{v ^{''}}&0 \\
0\ar[r]&F^{''}\ar[r]^{\alpha^{''}}\ar[d]&G^{''} \ar[r]^{\beta^{''}}\ar[d]&H^{''}\ar[r]\ar[d]&0 \\
& 0 &0 &0 }$$
the diagram:
$$\xymatrix{f_{X}(F)\otimes f_{X}(H)\otimes f_{X}(F^{''})\otimes f_{X}(H^{''})\ar[rrr]^(.62){i_{X}(\alpha,\beta)\otimes i_{X}(\alpha^{''},\beta^{''})}_(.62){\sim}\ar[d]^{i_{X}(u,v)\otimes i_{X}(u^{''},v^{''})}_{\wr}& & &f_{X}(G)\otimes f_{X}(G^{''})\ar[d]^{i_{X}(u^{'}, v^{'})}_{\wr}\\
f_{X}(F^{'})\otimes f_{X}(H^{'})\ar[rrr]_(.65){i_{X}(\alpha^{'},\beta^{'})}^(.65){\sim} & & &f_{X}(G^{'}) }$$
commutes.
iii) $f$ and $i$ commute with base change. More precisely, this means: \\
For every morphism of schemes
$$g: X\rightarrow Y$$
we have an isomorphism
$$\eta(g): \xymatrix{f_{X}\cdot \text{L}g^{*}\ar[r]^{\sim}&g^{*}f_{Y}}$$ such that for every short exact sequence of complexes
$$\xymatrix{0\ar[r]&F^{\cdot}\ar[r]^{u}&G^{\cdot}\ar[r]^{v}&H^{\cdot}\ar[r]&0}$$
the diagram:
$$\xymatrix{f_{X}(\text{L}g^{*}F^{\cdot})\otimes f_{X}(\text{L}g^{*}H^{\cdot})\ar[d]^{\eta \cdot\eta}_{\wr}\ar[rr]^(.60){i_{X}(\text{L}g^{*}(u,v))}_(.60){\sim}&& f_{X}(\text{L}g^{*}G^{\cdot})\ar[d]^{\eta}_{\wr}\\
g^{*}f_{Y}(F^{\cdot})\otimes g^{*}f_{Y}(H^{\cdot})\ar[rr]^{g^{*}i_{Y}(u,v)}_{\sim}& &g^{*}f_{Y}(G^{\cdot})
}$$
commutes, where $\text{L}g^{*}$ denotes the left derived functor of $g^{*}$; it exists on the category whose objects are short exact sequences of
complexes in $\text{Mod}(Y)$ and whose morphisms are triples of morphisms
making the resulting diagram (as in i), but with vertical arrows that are not isomorphisms in general) commute (see \cite{Kund}, Prop. 3). Moreover, if
$$\xymatrix{X\ar[r]^{g}&Y\ar[r]^{h}&Z}$$ are two consecutive morphisms, the diagram:
$$\xymatrix{ f_{X}(\text{L}g^{*}\text{L}h^{*})\ar[r]^{\eta(g)}_{\sim}\ar[d]^{f_{X}(\theta)}_{\wr}& g^{*}f_{Y}\text{L}h^{*}\ar[r]^{g^{*}\eta(h)}_{\sim}&g^{*}h^{*}f_{Z}\ar[d]_{\wr}\\
f_{X}(\text{L}(h\cdot g)^{*})\ar[rr]^{\sim}& &(h\cdot g)^{*}f_{Z}
}$$
commutes where $\theta$ is the canonical isomorphism
$$\theta:\xymatrix{ \text{L}g^{*}\text{L}h^{*}\ar[r]^{\sim}&\text{L}(h\cdot g)^{*}},$$
iv) On finite complexes of locally free $\mathcal{O}_{X}$-modules,
$$f=\det\quad\text{and}\quad i=\text{i},$$
the ordinary determinant functor and its canonical isomorphism for short exact sequences.
\end{defi}
\begin{theo}\label{ucd}
There is one, and up to canonical isomorphism only one, extended determinant functor $(f,\text{i})$, which we will write $(\det,\text{i})$.
\end{theo}
\begin{proof}
See \cite{Kund}, Theorem 2, Pag. 42.
\end{proof}
The theorem above implies that the functor $(\det, i)$ has the same compatibilities as the ordinary $\det^{*}$. In particular:
a) If each term $\mathcal{F}^{n}$ of a perfect complex $\mathcal{F}^{\cdot}$ is itself perfect, i.e., locally admits a finite free resolution, then $$\det(\mathcal{F}^{\cdot})\cong \otimes_{n}\text{det}^{*}(\mathcal{F}^{n})^{(-1)^{n}}.$$
b) If the cohomology sheaves $H^{n}(\mathcal{F}^{\cdot})$ of the complex are perfect (we denote by $\text{Parf}^{0}\subset\text{Parf}$ the full subcategory of such complexes),
then $$\det(\mathcal{F}^{\cdot})\cong \otimes_{n}\text{det}^{*}(H^{n}(\mathcal{F}^{\cdot}))^{(-1)^{n}}.$$
From the previous theorem and a) and b), we have the following corollary:
\begin{coro}
Let $$\xymatrix{\ar[r]&\mathcal{F}_{1}^{\cdot}\ar[r]^{u}&\mathcal{F}_{2}^{\cdot}\ar[r]^{v}&\mathcal{F}_{3}^{\cdot}\ar[r]^{w}&T\mathcal{F}_{1}^{\cdot}\ar[r]& }$$
be a triangle in $\text{Parf}_{X}$ such that $\mathcal{F}_{i}^{\cdot}\in\text{Parf}^{0}_{X}$. Then there is a unique isomorphism
$$\xymatrix{i_{X}(u,v,w):\det(\mathcal{F}_{1}^{\cdot})\otimes\det(\mathcal{F}_{3}^{\cdot})\ar[r]^(.70){\sim}&\det(\mathcal{F}_{2}^{\cdot}) }$$
which is functorial with respect to such triangles, i.e., the extended functor can be defined for triangles instead of short exact sequences of complexes.
\end{coro}
\begin{proof}
See \cite{Kund}, Page 43.
\end{proof}
\begin{rem}\label{pcd}
The previous corollary says that the extended determinant functor can be defined on triangles in $\text{Parf}$ and satisfies the analogues of properties i) to iv) for short exact sequences, provided the objects of the triangles
lie in $\text{Parf}^{0}$. For a vector bundle $E$, if $Rf_{*}E$ is a strictly perfect complex for a suitable morphism, where $Rf_{*}$ is viewed as the right derived functor of $f_{*}$, then
the properties of the extended determinant functor are valid for this strictly perfect complex.
\end{rem}
To conclude this section, we put the determinant functor, the virtual category, and the Picard category together to make the following definition.
\begin{defi}\label{vdk}
For a scheme $X$, we denote by $Vect(X)$ the exact category of vector bundles over $X$ and by $V(X): =V(Vect(X))$ (resp. $V(Y)$) the virtual category of vector bundles on $X$ (resp. $Y$).
Let $f:X\rightarrow Y$ be a smooth and projective morphism and suppose that $Y$ carries an ample invertible sheaf. Then there is an induced functor from $V(X)$ to the Picard category $\mathscr{P}is_{Y}$ (the definition of $\mathscr{P}is_{Y}$ is given in Example \ref{lbd}), denoted
by $\det Rf_{*}$, which is defined as follows:
By Theorem \ref{ucd}, we have a unique functor $\det: \text{Parf-is}_{Y} \rightarrow \mathscr{P}is_{Y}$. Any vector bundle $E$ in the exact category $Vect(X)$ can be viewed as a perfect complex $E^{\cdot}$ with the term $E$ in
degree $0$ and $0$ in every other degree.
For any perfect complex $E^{\cdot}$ of $\mathcal{O}_{X}$-modules and the morphism $f$ above, $Rf_{*}E^{\cdot}$ is still a perfect complex of $\mathcal{O}_{Y}$-modules (see \cite{Berth}, IV, 2.12). Therefore, composing with $\det$, we obtain a functor
$\det Rf_{*} :(Vect(X), iso)\rightarrow\mathscr{P}is_{Y}$, where $(Vect(X), iso)$ is the category with the same objects as $Vect(X)$ and with only the isomorphisms as morphisms.
By the definition of the extended determinant functor $\det$, it can be verified that $\det Rf_{*}$ satisfies the same conditions from a) to d) with $[~]$ in Def. \ref{vircon}.
By the universality of the virtual category $V(X)$, the functor $\det Rf_{*}$ factors, uniquely up to unique isomorphism, through $(Vect(X), iso)\rightarrow V(X)$. More precisely, we have the following diagram:
$$\xymatrix{(Vect(X), iso)\ar[d]\ar[r]^(.60){Rf_{*}}&\text{Parf-is}_{Y}\ar[d]^{\det}\\
V(X)\ar@{.>}[r]^{\det Rf_{*}}&\mathscr{P}is_{Y}
}$$
In this way, we obtain a functor
$V(X)\rightarrow\mathscr{P}is_{Y}$, which is still denoted by $\det Rf_{*}$.
\end{defi}
\section{The Adams-Riemann-Roch Theorem}
\subsection{The Frobenius morphism}
As mentioned in the introduction, for the $p$-th Adams-Riemann-Roch theorem in characteristic $p>0$, the Frobenius morphism for schemes of characteristic $p$ plays a key role in the proof of \cite{Pi}.
For more information about Frobenius morphisms, see the electronic lectures of Professor Lars Kindler or Qing Liu's book \cite{Liu}. We say that a scheme $X$ is of characteristic $p$ if $p \mathcal{O}_{X}=0$.
(1) For every $\mathbb{F}_{p}$-algebra $A$, we have the classical Frobenius endomorphism
$$Frob_{A}: A\rightarrow A,\qquad a\mapsto a^{p}.$$
Hence, for every affine $\mathbb{F}_{p}$-scheme $X=\text{Spec}~A$, we have an affine Frobenius morphism $$F_{X}:X\rightarrow X.$$
We also see that $F_{X}$ is the identity morphism on the underlying topological space $\text{Sp}(X)$, because for any prime ideal $\mathfrak{p}$ of $A$, $a^{p}\in\mathfrak{p}$ implies $a\in\mathfrak{p}$. We denote the corresponding morphism of sheaves
by $Frob_{\mathcal{O}_{X}}$, so that for every open set $U$ of $X$ we have $Frob_{\mathcal{O}_{X}}(U)= Frob_{\mathcal{O}_{X}(U)}:\mathcal{O}_{X}(U)\rightarrow\mathcal{O}_{X}(U)$. To be precise, the affine Frobenius morphism
is the morphism $F_{X}:=(Id_{\text{Sp}(X)},Frob_{\mathcal{O}_{X}})$.
For any $\mathbb{F}_{p}$-algebra homomorphism $f:A\rightarrow B$, the following diagram commutes
$$\xymatrix{A\ar[d]^{f}\ar[r]^{Frob_{A}}&A\ar[d]^{f}\\
B\ar[r]^{Frob_{B}}&B }$$
Therefore, we can define the absolute Frobenius morphism in the general case.
\begin{defi}
Let $X$ be an $\mathbb{F}_{p}$-scheme. The $\textbf{absolute Frobenius}$ morphism on $X$, denoted by $F_{X}$, is the morphism $$X\longrightarrow X$$ such that for every open affine subset $U$ of $X$, the restriction $F_{X}|_{U}:U\longrightarrow U$ is the affine Frobenius morphism of $U$. Hence we have the Frobenius morphism
$F_{X}=(Id _{\text{Sp}(X)},\text{Frob}_{\mathcal{O}_{X}})$ for a general $\mathbb{F}_{p}$-scheme.
\end{defi}
Let $S$ be an $\mathbb{F}_{p}$-scheme and $X$ an $S$-scheme. It is easy to verify that the following diagram commutes
$$\xymatrix{X\ar[d]_{f}\ar[r]^{F_{X}}&X\ar[d]^{f}\\
S\ar[r]^{F_{S}}&S }$$
where $f$ is the structure morphism.
From the diagram above, we find that $F_{X}$ is in general not an $S$-morphism. But we can obtain an $S$-morphism from the diagram above by introducing the relative Frobenius morphism.
\begin{defi}\label{rel,fr}
For any morphism $f:X\longrightarrow S$ of $\mathbb{F}_{p}$-schemes, the following diagram commutes
$$\xymatrix{X\ar@/_/[ddr]_{f}\ar@{.>}[dr]|{F_{X/S}}\ar@/^/[rrd]^{F_{X}}& & \\
& X^{'}\ar[r]_{W_{X/S}}\ar[d]^{f^{'}}&X\ar[d]^{f} \\
&S\ar[r]^{F_{S}}&S
}~~~~~(1)$$
where $X^{'}$ is the fiber product of the morphism $f:X\longrightarrow S$ by the base extension $F_{S}: S\longrightarrow S$. The morphism $F_{X/S}$, which exists by the universal property of the fiber product of schemes, is called the $\textbf{relative Frobenius}$ morphism of the morphism $f$.
\end{defi}
\begin{exam}
Let $A$ be an $\mathbb{F}_{p}$-algebra, $S=\text{Spec}~A$ an affine scheme, $f(x)=\sum_{k=0}^{n}f_{k}x^{k}$ a polynomial in $A[x]$ and $X=\text{Spec}~A[x]/(f(x))$ an affine $S$-scheme. It is easy to verify that $X^{'}\cong \text{Spec}~A[x]/(f^{'}(x))$, where $f^{'}(x)=\sum_{k=0}^{n}f_{k}^{p}x^{k}$. Therefore
we have a corresponding commutative diagram
$$\xymatrix{A[x]/(f(x))& & \\
& A[x]/(f^{'}(x))\ar[lu]|{\widetilde{F}_{X/S}}&A[x]/(f(x))\ar[l]^{\widetilde{W}_{X/S}}\ar@/_/[llu]_{Frob_{A[x]/(f(x))}} \\
&A\ar@/^/[uul]^{\tilde{f}}\ar[u]_{\tilde{f^{'}}}&A\ar[l]^{Frob_{A}}\ar[u]_{\tilde{f}}
}~~~~~(2)$$
where $\tilde{f}$ (resp. $\tilde{f}^{'}$) sends every $a\in A$ to its class $\bar{a}$ in $A[x]/(f(x))$ (resp. $A[x]/(f^{'}(x))$), $\widetilde{W}_{X/S}$ sends the class of the monomial $ax$ to the class of the monomial $a^{p}x$, and $\widetilde{F}_{X/S}$ sends the class of the monomial $ax$ to the class of $ax^{p}$ in $A[x]/(f(x))$.
\end{exam}
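As a direct check (our own verification, not taken from the references), diagram (2) commutes on the class of a monomial $ax$; recall that the ring maps compose in the order opposite to the scheme morphisms, so $F_{X}=W_{X/S}\circ F_{X/S}$ corresponds to $\widetilde{F}_{X/S}\circ\widetilde{W}_{X/S}$:
$$ax\ \overset{\widetilde{W}_{X/S}}{\longmapsto}\ a^{p}x\ \overset{\widetilde{F}_{X/S}}{\longmapsto}\ a^{p}x^{p}=(ax)^{p}=Frob_{A[x]/(f(x))}(ax).$$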
In order to prove the next useful lemma, we recall the notion of \'{e}tale coordinates:
(\'{E}tale coordinates) Let $f:X\rightarrow S$ be a smooth morphism and $q$ a point of $X$. Since $\Omega_{X/S}^{1}$ is locally free, there are an open affine neighborhood $U$ of the point $q$ and $x_{1},\ldots,x_{n}\in\mathcal{O}(U)$ such that $x_{i}|_{q}=0$ for all $i$ and such that $dx_{1},\ldots,dx_{n}$ generate
$\Omega_{X/S}^{1}|_{U}$ (use Nakayama's lemma). These sections define an $S$-morphism $h: U\rightarrow \mathbb{A}_{S}^{n}$. This morphism is \'{e}tale
by construction:
Because of smoothness we have the exact sequence of $\mathcal{O}(U)$-modules
$$0\longrightarrow h^{*}\Omega_{Z/S}^{1}\longrightarrow\Omega_{U/S}^{1}\longrightarrow\Omega_{U/Z}^{1}\longrightarrow 0$$
where $Z:=\mathbb{A}_{S}^{n}$ for brevity. By construction $\Omega_{U/Z}^{1}=0$, so $h:U\rightarrow Z$ is smooth and unramified, hence \'{e}tale. The sections $x_{1},\ldots,x_{n}$ are called \'{e}tale coordinates around $q$.
\begin{defi} Let $X,S$ be schemes and $f:X\rightarrow S$ a morphism of schemes. The morphism $f$ is said to be a universal homeomorphism if for every scheme $T$ and every morphism of schemes $g:T\rightarrow S$, the corresponding base change of $f$ is a homeomorphism.
\end{defi}
\begin{exam}\label{u,h}
It is not difficult to check that the relative Frobenius morphism and the absolute Frobenius morphism are both universal homeomorphisms, since one verifies from their definitions that they are homeomorphisms on topological spaces (in fact, they are identities on topological spaces) and remain so after any base change.
\end{exam}
Based on the notations in Definition \ref{rel,fr}, we can state and prove the following important lemma.
\begin{lem}\label{l,f} Let $S$ be a scheme of positive characteristic (say $p$) and $f:X\rightarrow S$ a smooth morphism of pure relative dimension $n$. Then the relative Frobenius $F_{X/S}$ is finite and flat, and the $\mathcal{O}_{X^{'}}$-algebra $(F_{X/S})_{*}\mathcal{O}_{X}$ is locally free of rank $p^{n}$. In particular, if $f$ is \'{e}tale, then $F_{X/S}$ is an isomorphism.
\end{lem}
\begin{proof} We first show that $F_{X/S}$ is an isomorphism when $f$ is \'{e}tale, i.e., smooth of relative dimension $0$. In the diagram (1) (we keep the notation of Definition \ref{rel,fr} throughout the proof), the morphism $f^{'}$ is
\'{e}tale and $f=f^{'}\circ F_{X/S}$ is \'{e}tale, so $F_{X/S}$ is \'{e}tale (by the properties of base change and composition of \'{e}tale morphisms). Moreover, in the diagram (1), $W_{X/S}$ and $F_{X}$ induce the identity on topological spaces and $F_{X}=W_{X/S}\circ F_{X/S}$, so the relative Frobenius morphism also induces the identity on topological spaces.
In addition, $F_{X/S}$ is an open immersion: in [SGA I, 5.1] it is proved that a morphism of finite type is an open immersion if and only if it is \'{e}tale and radicial. Radicial means universally injective (by analogy with universally closed morphisms): injective, and still injective after any base extension. Since a universal homeomorphism is equivalently an integral, surjective and universally injective morphism,
and Example \ref{u,h} shows that $F_{X/S}$ is a universal homeomorphism, $F_{X/S}$ is radicial.
Being both a homeomorphism and an open immersion, $F_{X/S}$ is an isomorphism.
For $Z=\mathbb{A}^{n}_{S}$ and the projection $f: Z \rightarrow S$, we have a Cartesian diagram
$$\xymatrix{Z\ar@/_/[ddr]_{f}\ar@{.>}[dr]|{F_{Z/S}}\ar@/^/[rrd]^{F_{Z}}& & \\
& Z^{'}\ar[r]_{W_{Z/S}}\ar[d]^{f^{'}}&Z\ar[d]^{f} \\
&S\ar[r]^{F_{S}}&S
}$$
as in Definition \ref{rel,fr} and the topological space $\text{Sp}(Z)=\text{Sp}(Z^{'})$. By the definition of $Z$, we have $\mathcal{O}_{Z}=\mathcal{O}_{S}[t_{1},...,t_{n}]$. So the sheaf morphism $F^{\sharp}_{Z/S}:\mathcal{O}_{Z^{'}}\rightarrow (F_{Z/S})_{*}{\mathcal{O}}_{Z}$ is the map
$$F^{\sharp}_{Z/S}:\mathcal{O}_{S}[t_{1},...,t_{n}]\rightarrow \mathcal{O}_{S}[t_{1},...,t_{n}]$$
$$t_{i}\mapsto t_{i}^{p}.$$
Hence the monomials $\prod_{i=1}^{n}t_{i}^{k_{i}}$, with $k_{i}$ integers such that $0\leq k_{i}\leq p-1$, form a basis of $(F_{Z/S})_{*}{\mathcal{O}}_{Z}$ over $\mathcal{O}_{Z^{'}}$. Therefore $F_{Z/S}$ is indeed finite locally free of rank $p^{n}$, hence also flat.
For the general case, we can assume by the smoothness of $f$ that locally on $X$, we have the factorization $\xymatrix{X\ar[r]^{h}& Z\ar[r]^{g}& S}$, where $g$ is a projection $Z=\mathbb{A}^{n}_{S}\rightarrow S$ and $h$ is \'{e}tale. We have the following diagram
$$\xymatrix{X\ar[rr]^{F_{X/S}}\ar[dr]^{F_{X/Z}}\ar[dd]_{h}& &X^{'}=X\times_{F_{S}}S\ar[r]\ar[dd]&X\ar[dd]^{h}\\
&X\times_{F_{Z}}Z\ar[ld]\ar[ur]^{\psi}& & \\
Z\ar[rr]^{F_{Z/S}}\ar[drr]_{g}& &Z^{'}=S\times_{F_{S}}Z\ar[r]\ar[d]&Z\ar[d]^{g}\\
& & S\ar[r]^{F_{S}}& S } (3)$$
In the diagram $(3)$, the right-most part (involving the fiber products $X^{'}$ and $Z^{'}$) is Cartesian. Meanwhile, we have $F_{X/S}=\psi\circ F_{X/Z}$. By the first part, $F_{X/Z}$ is an isomorphism because $h$ is \'{e}tale, and $\psi$ is finite locally free of rank $p^{n}$, being the base
change of $F_{Z/S}$, which is finite locally free of rank $p^{n}$ for $Z=\mathbb{A}^{n}_{S}\rightarrow S$ by the previous step.
Therefore, $F_{X/S}$ is finite locally free of rank $p^{n}$, hence also flat.
\end{proof}
\begin{exam} Let $X$, $S$ be affine, say $X=\text{Spec}~B$, $S=\text{Spec}~A$, where $A$ is of characteristic $p>0$. Then $X^{'}=\text{Spec}(B\otimes_{F_{A}}A)$, i.e., $ab\otimes 1=b\otimes a^{p}$, and the relative Frobenius $X\rightarrow X^{'}$ is given by $b\otimes a\mapsto ab^{p}$.
If $B=A[t]$, then $A[t]\otimes_{F_{A}}A\cong A[t]$, $W_{X/S}$ is given by $at\mapsto a^{p}t$ and $F_{X/S}$ by $at\mapsto at^{p}$ for $a\in A$ ($W_{X/S}$ is the morphism from $X^{'}$ to $X$ as in Definition \ref{rel,fr}). Hence the image of $F_{A[t]/A}$ is $A[t^{p}]\subseteq A[t]$, and $A[t]$ is freely generated by $t^{i}$, $i=0,\ldots,p-1$, over $A[t^{p}]$.
\end{exam}
\subsection{The construction of the Bott element}
\begin{defi}\label{ad,2}
For any integer $k\geq 2$, the symbol $\theta^{k}$ refers to an operation, which associates an element of $K_{0}(X)$ to any locally free coherent sheaf on a quasi-compact scheme $X$. It satisfies the following three properties:
(1) For any invertible sheaf $L$ over $X$, we have
$$\theta^{k}(L)=1+L+\cdots +L^{\otimes k-1};$$
(2) For any short exact sequence $0\longrightarrow E^{'}\longrightarrow E\longrightarrow E^{''}\longrightarrow 0$ of locally free coherent
sheaves on $X$ we have
$$\theta^{k}(E)=\theta^{k}(E^{'})\otimes\theta^{k} (E^{''});$$
(3) For any morphism of schemes $g : X^{'}\longrightarrow X$ and any locally free coherent sheaf $E$ over $X$ we have
$$g^{*}(\theta^{k}(E))=\theta^{k}(g^{*}(E)).$$
If $E$ is a locally free coherent sheaf on a quasi-compact scheme $X$, then the element $\theta^{k}(E)$ is often called the $k$-th Bott element.
\end{defi}
\begin{pro}
The operation $\theta^{k}$ satisfying the three properties above exists and is unique.
\end{pro}
\begin{proof}
See \cite{Man}, Lemma 16.2, Subsection 16, or SGA, VII.
\end{proof}
On a quasi-compact scheme of characteristic $p$, Pink and R\"{o}ssler constructed an explicit representative of the $p$-th Bott element (see \cite{Pi}, Sect. 2).
We recall the construction:
Let $p$ be a prime number and $Z$ a scheme of characteristic $p$. Let $E$ be a locally free coherent sheaf on $Z$. For any integer $k\geq 0$, let
$\text{Sym}^{k}(E)$ denote the $k$-th symmetric power of $E$. Then
$$\text{Sym}(E):=\bigoplus _{k\geq 0}\text{Sym}^{k}(E)$$ is a quasi-coherent graded $\mathcal{O}_{Z}$-algebra, called the symmetric algebra of $E$. Let
$\mathcal{J}_{E}$ denote the graded sheaf of ideals of $\text{Sym}(E)$ that is locally generated by the sections $e^{p}$ of $\text{Sym}^{p}(E)$ for
all sections $e$ of $E$, and set $$\tau(E):=\text{Sym}(E)/\mathcal{J}_{E}.$$
Locally this construction means the following. Consider an open subset $U\subset Z$ such that $E|_{U}$ is free, and
choose a basis $e_{1},\ldots, e_{r}$. Then $\text{Sym}(E)|_{U}$ is the polynomial algebra over $\mathcal{O}_{Z}$ in the variables $e_{1},\ldots,e_{r}$.
Since $Z$ has characteristic $p$, for any open subset $V\subset U$ and any sections $a_{1},\ldots,a_{r} \in \mathcal{O}_{Z}(V)$ we have
$$(a_{1}e_{1}+\ldots +a_{r}e_{r})^{p}=a_{1}^{p}e_{1}^{p}+\ldots +a_{r}^{p}e_{r}^{p}.$$
It follows that $\mathcal{J}_{E}|_{U}$ is the sheaf of ideals of $\text{Sym}(E)|_{U}$ generated by $e_{1}^{p},\ldots,e_{r}^{p}$. Clearly, that description is independent of
the choice of basis and compatible with localization; hence it can be used as an equivalent definition of $\mathcal{J}_{E}$ and $\tau(E)$.
The local description also implies that $\tau(E)|_{U}$ is free over $\mathcal{O}_{Z}|_{U}$ with basis the images of the monomials $e_{1}^{i_{1}}\cdots e_{r}^{i_{r}}$ for all choices of exponents $0\leq i_{j}< p$.
It can be shown that $\tau(E)$ satisfies the defining properties of the $p$-th Bott element. In other words, we have the following proposition (see \cite{Pi}, Prop. 2.6).
\begin{pro}\label{btp}
For any locally free coherent sheaf $E$ on a quasi-compact scheme $Z$ of characteristic $p>0$, we have $\tau(E)=\theta ^{p}(E)$ in $K_{0}(Z)$.
\end{pro}
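As a sanity check (our own verification, not taken from \cite{Pi}), for an invertible sheaf $L$ the construction $\tau$ recovers property (1) of Definition \ref{ad,2} directly: here $\text{Sym}^{k}(L)=L^{\otimes k}$ and $\mathcal{J}_{L}$ is the ideal generated by the sections $e^{p}$, so
$$\text{Sym}(L)=\bigoplus_{k\geq 0}L^{\otimes k},\qquad \mathcal{J}_{L}=\bigoplus_{k\geq p}L^{\otimes k},$$
and hence
$$\tau(L)=\mathcal{O}_{Z}\oplus L\oplus\cdots\oplus L^{\otimes (p-1)}=\theta^{p}(L)\quad\text{in}~K_{0}(Z).$$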
\subsection{The Adams Riemann Roch Theorem in positive characteristic}\label{section ARR char p}
In order to state the $p$-th Adams-Riemann-Roch theorem in positive characteristic from \cite{Pi}, we first give the hypotheses of the theorem. We assume that $f:X\rightarrow Y$ is a projective local complete intersection morphism and that $Y$ is a quasi-compact scheme
carrying an ample invertible sheaf. Furthermore, we make the supplementary
hypothesis that $f$ is smooth and that $Y$ is a scheme of
characteristic $p>0$. Let the sheaf of relative differentials $\Omega_{f}$ of $f$ have rank $r$, a locally constant function on $X$.
Based on the condition above, we draw the diagram again, i.e., the commutative diagram
$$\xymatrix{X\ar@/_/[ddr]_{f}\ar@{.>}[dr]|{F_{X/Y}}\ar@/^/[rrd]^{F_{X}}& & \\
& X^{'}\ar[r]_{J}\ar[d]^{f^{'}}&X\ar[d]^{f} \\
&Y\ar[r]^{F_{Y}}&Y
}~~~~~(4)$$
where $ F_{X}$ and $F_{Y}$ are the absolute Frobenius morphisms and the square is Cartesian. We also denote the relative Frobenius morphism of the morphism $f$ by $F:=F_{X/Y}$ for
simplicity. We will use these notations in the propositions and proofs until the end of the note.
Since the pull-back $F^{*}$ is adjoint to $F_{*}$ (see \cite{Hart}, Page 110), there is a natural morphism of $\mathcal{O}_{X}$-algebras $F^{*}F_{*}\mathcal{O}_{X}\rightarrow\mathcal{O}_{X}$. Let $I$
be the kernel of this natural morphism, which is by definition a sheaf of ideals of $F^{*}F_{*}\mathcal{O}_{X}$. In \cite{Pi}, the following definition is made:
$$Gr(F^{*}F_{*}\mathcal{O}_{X}):=\bigoplus_ {k\geq 0}I^{k}/ I^{k+1},$$
the associated graded sheaf of $\mathcal{O}_{X}$-algebras. Let $\Omega_{f}$ be the sheaf of relative differentials of $f$.
Pink and R\"{o}ssler also proved the following key proposition, which is used to prove the $p$-th Adams-Riemann-Roch theorem in positive characteristic (see \cite{Pi}, Prop. 3.2).
\begin{pro}\label{gci}
There is a natural isomorphism of $\mathcal{O}_{X}$-modules
$$ I/ I^{2}\cong \Omega_{f}$$
and a natural isomorphism of graded $\mathcal{O}_{X}$-algebras
$$\tau(I/ I^{2})\cong Gr(F^{*}F_{*}\mathcal{O}_{X}).$$
\end{pro}
According to the proposition above, there are isomorphisms $$Gr(F^{*}F_{*}\mathcal{O}_{X})\cong\tau(I/ I^{2})\cong \tau(\Omega_{f}).$$
Moreover, Proposition \ref{btp} also
implies $\tau(\Omega_{f})= \theta ^{p}(\Omega_{f})$ in $K_{0}(X)$. In the Grothendieck group we have $[Gr(F^{*}F_{*}\mathcal{O}_{X})]= [F^{*}F_{*}\mathcal{O}_{X}]$, so the equality $[F^{*}F_{*}\mathcal{O}_{X}]= \theta ^{p}(\Omega_{f})$ holds in $K_{0}(X)$.
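As a consistency check (ours, not from \cite{Pi}), both sides have the same rank when $f$ is smooth of relative dimension $n$: by Lemma \ref{l,f}, $F$ is finite locally free of rank $p^{n}$, while the monomial basis $e_{1}^{i_{1}}\cdots e_{n}^{i_{n}}$, $0\leq i_{j}<p$, of $\tau(E)$ described above gives
$$\operatorname{rk} F^{*}F_{*}\mathcal{O}_{X}=p^{n},\qquad \operatorname{rk}\theta^{p}(\Omega_{f})=\operatorname{rk}\tau(\Omega_{f})=p^{\operatorname{rk}\Omega_{f}}=p^{n}.$$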
In the case of characteristic $p>0$, the Adams operation $\psi^{p}$ can be described explicitly. We will give the relevant proposition after recalling the definition of the Adams operations.
\begin{defi}\label{ad,1}
Let $X$ be a quasi-compact scheme. For any integer $k\geq 2$, the $k$-th Adams operation is the functorial endomorphism $\psi^{k}$ of the unitary ring $K_{0}(X)$ uniquely determined by the following two conditions:
(1) $\psi^{k}f^{*}=f^{*}\psi^{k}$ for any morphism of Noetherian schemes $f:X\longrightarrow Y$.
(2) For any invertible sheaf $L$ over $X$, $\psi^{k}(L)=L^{\otimes k}$.
\end{defi}
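For illustration (a standard computation, included here only as an example): since $\psi^{k}$ is a ring endomorphism, condition (2) gives, for a direct sum of invertible sheaves,
$$\psi^{k}(L_{1}\oplus L_{2})=[L_{1}^{\otimes k}]+[L_{2}^{\otimes k}]\quad\text{in}~K_{0}(X).$$
For instance, $\psi^{2}(E)=[\text{Sym}^{2}E]-[\Lambda^{2}E]$ for any vector bundle $E$, as one checks on a sum of two line bundles:
$$\text{Sym}^{2}(L_{1}\oplus L_{2})-\Lambda^{2}(L_{1}\oplus L_{2})=(L_{1}^{\otimes 2}+L_{1}\otimes L_{2}+L_{2}^{\otimes 2})-L_{1}\otimes L_{2}=L_{1}^{\otimes 2}+L_{2}^{\otimes 2},$$
and in general by the splitting principle.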
A more interesting proposition related to the Frobenius morphism and the Adams operation is the following:
\begin{pro}\label{frebad}
For a scheme $Z$ of characteristic $p>0$ with absolute Frobenius morphism $F_{Z}: Z\rightarrow Z$, the pullback $F_{Z}^{*}: K_{0}(Z)\rightarrow K_{0}(Z)$ is exactly the $p$-th Adams operation $\psi^{p}$.
\end{pro}
\begin{proof}
This is a well-known fact (see \cite{Bern 1}, Pag. 64, Proposition 2.15), which is also a consequence of the splitting principle (see \cite{Man}, Par. 5).
\end{proof}
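For an invertible sheaf this can be seen directly (a sketch of the standard argument, using conditions (1) and (2) of Definition \ref{ad,1}): if $L$ is given by transition functions $g_{ij}$ on an open cover of $Z$, then, since $F_{Z}$ is the identity on the underlying topological space and raises functions to their $p$-th powers, the pullback $F_{Z}^{*}L$ is given by the transition functions $g_{ij}^{p}$, so
$$F_{Z}^{*}L\cong L^{\otimes p}=\psi^{p}(L).$$
The general case then follows by the splitting principle.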
\begin{theo}
The Adams-Riemann-Roch theorem holds under the assumptions given at the beginning of this subsection, i.e., for any locally free coherent sheaf $E$ on $X$ the equality
$$\psi^{p}(R^{\bullet}f_{*}(E))=R^{\bullet}f_{*}(\theta ^{p}(\Omega_{f})^{-1}\otimes\psi^{p}(E)) $$
holds in $K_{0}(Y)[\frac{1}{p}]:=K_{0}(Y)\otimes_{\mathbb{Z}}\mathbb{Z}[\frac{1}{p}]$.
\end{theo}
\begin{proof} See \cite{Pi}, page 1074.
\end{proof}
\begin{rem}
Under the hypotheses given at the beginning of this subsection, the equality $$\psi^{k}(R^{\bullet}f_{*}(E))=R^{\bullet}f_{*}(\theta ^{k}(\Omega_{f})^{-1}\otimes\psi^{k}(E))$$ also holds
in $K_{0}(Y)\otimes \mathbb{Z}[\frac{1}{k}]$ for any integer $k\geq 2$, not only for $k=p$ (see \cite{FL}, V, Th. 7.6), but its proof is more complicated than that of \cite{Pi}.
\end{rem}
\section{A functorial Riemann-Roch theorem in positive characteristic}
\subsection{The Deligne pairing}\label{sdp}
Before stating Deligne's functorial Riemann-Roch theorem, it is necessary to introduce the Deligne pairing, which appeared in [3] for the first time and was extended
to a more general situation by S. Zhang (see [8]). We shall only need the basic definition here.
Let $g: S^{'}\rightarrow S$ be a finite and flat morphism. Then $g_{*}\mathcal{O}_{S^{'}}$ is a locally free sheaf of constant rank (say $d$).
We have a natural morphism of $\mathcal{O}_{S}$-modules $$g_{*}\mathcal{O}_{S^{'}}\rightarrow End_{\mathcal{O}_{S}}(g_{*}\mathcal{O}_{S^{'}})$$ given by multiplication.
Composing with the determinant, we obtain a morphism $$\text{N}: g_{*}\mathcal{O}_{S^{'}}\rightarrow \mathcal{O}_{S}.$$
Generally, we have the following definition (see \cite{Del}, Sect. 7.):
\begin{defi}\label{d,n}
Let $g: S^{'}\rightarrow S$ be finite and flat. For any invertible sheaf $L$ over $S^{'}$, with $L^{*}$
the subsheaf of invertible sections of $L$, its norm $\text{N}_{S^{'}/S}(L)$ is defined to be an invertible sheaf $N$ over $S$ equipped with a morphism of sheaves $\text{N}_{S^{'}/S}:
g_{*}L^{*}\rightarrow N^{*}$ satisfying $\text{N}_{S^{'}/S}(u\ell)=\text{N}(u)\text{N}_{S^{'}/S}(\ell)$ for all local sections $u$ of $g_{*}\mathcal{O}_{S^{'}}$
and $\ell$ of $g_{*}L^{*}$.
In other words, the norm morphism induces a norm functor $\text{N}_{g}$ from the category of line bundles over $S^{'}$ to the category of line bundles over
$S$ together with a collection of homomorphisms $\text{N}_{g}^{L}: g_{*}L\rightarrow \text{N}_{g}(L)$ of sheaves of sets, for all line bundles $L$
over $S^{'}$, functorial under isomorphisms of line bundles over $S^{'}$, sending local generating sections over $S^{'}$ to the local generating
sections over $S$ and such that the equality
$\text{N}_{g}^{L}(xl)=\text{N}(x)\text{N}_{g}^{L}(l)$ holds for all local sections $x$ of $g_{*}\mathcal{O}_{S^{'}}$ and $l$ of $g_{*}L$. Moreover, the functor
$\text{N}_{g}$ together with the collection of the $\text{N}_{g}^{L}$ is unique up to unique isomorphism.
The norm functor is a special case of the trace of a torsor for a commutative group scheme under a finite flat morphism (see \cite{Dir}, expos\'e XVII, 6.3.26).
Instead of $\text{N}_{g}$, we also write $\text{N}_{S^{'}/S}$ for the norm functor when the morphism is clear in the specific context. We list the basic properties of the norm functor as follows:
\begin{pro}
The norm functor has the following properties:
(1) The functor $\text{N}_{S^{'}/S}$ is compatible with any base change $Y\rightarrow S$;
(2) If $L_{1}$ and $L_{2}$ are two line bundles on $S^{'}$, there is a natural isomorphism
$$\text{N}_{S^{'}/S}(L_{1}\otimes_{\mathcal{O}_{S^{'}}} L_{2})\cong \text{N}_{S^{'}/S}(L_{1})\otimes_{\mathcal{O}_{S}} \text{N}_{S^{'}/S}(L_{2});$$
(3) If $S_{1}\rightarrow S_{2}\rightarrow S_{3}$ are finite and flat morphisms, there is a natural isomorphism
$$\text{N}_{S_{1}/S_{3}}\cong \text{N}_{S_{2}/S_{3}}\circ\text{N}_{S_{1}/S_{2}};$$
(4) There is a functorial isomorphism $$\text{N}_{S^{'}/S}(L)\cong \text{Hom}_{\mathcal{O}_{S}}(det_{\mathcal{O}_{S}} g_{*}\mathcal{O}_{S^{'}}, det_{\mathcal{O}_{S}} g_{*}L).$$
\end{pro}
\begin{proof}
See \cite{Dir}, expos\'e XVII, 6.3.26 for (1), (2), and (3). See \cite{Dir}, expos\'e XVIII, 1.3.17. for (4).
\end{proof}
From the properties above, the pair $(N,\text{N}_{S^{'}/S})$ is unique up to unique isomorphism.
\end{defi}
For any locally free coherent sheaves $F_{0}$ and $F_{1}$ of the same rank over $S^{'}$ and the morphism $g: S^{'}\rightarrow S$ as in the definition of the norm, we have a canonical isomorphism
$$\det(g_{*}F_{0}-g_{*}F_{1})=\text{N}_{S^{'}/S}\det(F_{0}-F_{1}),~~~~~(a) $$
$$ \text{i.e.,}~~~~~~~\det g_{*}F_{0}\otimes(\det g_{*}F_{1})^{-1}=\text{N}_{S^{'}/S}(\det F_{0}\otimes (\det F_{1})^{-1} ),$$
by viewing $F_{0}$ and $F_{1}$ as virtual objects as in Def. \ref{vdk}. This isomorphism is compatible with localization over $S$ and is characterized by the fact that, for any isomorphism $u:F_{1}\rightarrow F_{0}$, the corresponding trivializations of the two sides are identified.
Such an isomorphism $u$ exists locally on $S$, and the isomorphism (a) does not depend on the choice of $u$, because for an automorphism $v$ of $F_{1}$ we have $\det (v, g_{*}F_{1})=\text{N}_{S^{'}/S}\det (v, F_{1})$.
For more links between the functor $\det$ and the norm functor, see \cite{Del}, Pag. 146-147.
After giving the definition of the norm, we can define the Deligne pairing as follows:
\begin{defi}\label{d,p.n}(see \cite{Del}, \S 6.1)
Let $f:X\rightarrow S$ be a proper, flat morphism of purely relative dimension $1$.
Let $L,M$ be two line bundles on $X$. Then $\langle L,M\rangle$ is defined to be the $\mathcal{O}_{S}$-module
which is generated, locally for the Zariski topology on $S$, by the symbols $\langle\ell,m\rangle$ for sections $\ell, m$ of $L,M$ respectively, with the following relations
$$\langle\ell,gm\rangle=g(\text{div}(\ell))\langle\ell,m\rangle$$
$$\langle g\ell,m\rangle=g(\text{div}(m))\langle\ell,m\rangle$$
where $g$ is a non-zero section of $\mathcal{O}_{X}$ and $g(\text{div}(\ell))$, $g(\text{div}(m))$ are interpreted as norms: for a relative Cartier divisor $D$ on $X$, i.e., such that $D\rightarrow S$ is finite and flat in our case (see \cite{Kat}, Chapt. 1, \S 1.2 about relative Cartier divisors), we put
$g(D):=\text{N}_{D/S}(g)$; then we have $g(D_{1}+D_{2})=g(D_{1})\cdot g(D_{2})$. If $\text{div}(\ell)=D_{1}-D_{2}$, we put
$g(\text{div}(\ell))=g(D_{1})\cdot g(D_{2})^{-1}$. One checks that this is independent of the choice of $D_{1}, D_{2}$. Moreover, the $\mathcal{O}_{S}$-module
$\langle L,M\rangle$ is in fact a line bundle on $S$.
\end{defi}
For $L=\mathcal{O}(D)$ and the canonical section $1$ of $\mathcal{O}(D)$, we have $\langle 1,fm\rangle=\text{N}_{D/S}(f)\cdot\langle 1,m\rangle$ for any non-zero section $f$ of $\mathcal{O}_{X}$ and any section $m$ of the line bundle $M$ on $X$.
Let $g:X\rightarrow S$ be proper, flat and purely of relative dimension $1$ and let $D$ be a relative Cartier divisor of $g$. For any invertible sheaf $M$ over $X$, we have
$$\xymatrix{\langle\mathcal{O}(D), M\rangle \ar[r]^(.45){\sim}& \text{N}_{D/S}(M)} : ~\langle 1,m\rangle\mapsto\text{N}_{D/S}(m).$$
In other words, for any invertible sheaf $L$ over $X$ and for any section $\ell$ of $L$, which is not a zero divisor in every fiber, we have \\
$$\xymatrix{\langle L,M\rangle \ar[r]^(.40){\sim}&\text{N}_{\text{div}(\ell)/S}(M) } : ~\langle \ell,m\rangle\mapsto\text{N}_{\text{div}(\ell)/S}(m).$$
From Definition \ref{d,p.n}, we have bi-multiplicative isomorphisms:
$$\langle L_{1}\otimes L_{2}, M\rangle\cong\langle L_{1},M\rangle\otimes\langle L_{2},M\rangle$$
$$\langle L,M_{1}\otimes M_{2} \rangle\cong\langle L,M_{1}\rangle\otimes\langle L,M_{2}\rangle$$
and a symmetry isomorphism $$\langle L,M\rangle\cong\langle M,L\rangle;$$ when $L=M$, the symmetry isomorphism is given by multiplication by $(-1)^{\text{deg}L}$
(see SGA4, XVIII, 1.3.16.6).
\subsection{ A functorial Riemann-Roch theorem in positive characteristic}
In this subsection, as in Section 2, the Picard category of graded line bundles will still be denoted by $\mathscr{P}is_{X}$ and the virtual category of the exact category of vector bundles by $V(X)$, for any scheme $X$.
Any vector bundle $E$ from an exact category of vector bundles is viewed as a complex concentrated in degree $0$, and $Rf_{*}E$ is again a complex for a given morphism $f$.
About the Deligne pairing, the most important proposition we will use is the following:
\begin{pro}\label{d,p}
Let $f:X\rightarrow S$ be proper, flat and purely of relative dimension $1$. Let $E_{0}$ and $E_{1}$ be locally free coherent sheaves of the same rank everywhere over $X$, and let
$F_{0}$ and $F_{1}$ satisfy the same condition. Then we have the following isomorphism
$$\langle\det(E_{0}-E_{1}),\det(F_{0}-F_{1})\rangle\cong\det Rf_{*}((E_{0}-E_{1})\otimes(F_{0}-F_{1})).$$
\end{pro}
\begin{proof} The key point is to verify that
the Deligne pairing and $\det Rf_{*}$ are both compatible with additivity, and that their local trivializations are simultaneously identified with
the corresponding norm functor. This is Construction 7.2 of [3]; for the precise proof, see [3], pp. 147-149.
\end{proof}
\begin{coro}\label{trs}
In particular, we have a canonical isomorphism $\det Rf_{*}((H_{0}-H_{1})^{\otimes l}\otimes H)\cong\mathcal{O}_{S}$ whenever $l\geq 3$ and the ranks
rk$H_{0}$ = rk$H_{1}$, for any vector bundles $H_{0}, H_{1}, H$ over $X$ and $f$ as in the proposition; this isomorphism is stable under base change.
\end{coro}
\begin{proof}
It suffices to prove the conclusion for $l=3$; the case $l>3$ then follows by the same argument. We apply Proposition \ref{d,p} to $E_{0}=H_{0}\otimes H$, $E_{1}=H_{1}\otimes H$, and $F_{0}=H_{0}^{\otimes 2}+H_{1}^{\otimes 2},$
$F_{1}=2(H_{0}\otimes H_{1})$. \\
Then we have the following:
\begin{align}
(E_{0}-E_{1})\otimes(F_{0}-F_{1})&\cong H\otimes(H_{0}-H_{1})\otimes(H_{0}-H_{1})^{\otimes 2}\notag\\
&\cong (H_{0}-H_{1})^{\otimes 3}\otimes H.\notag
\end{align}
Because rk$H_{0}$ = rk$H_{1}$, we immediately have the equalities of ranks rk$E_{0}$ = rk$E_{1}$ and rk$F_{0}$ = rk$F_{1}$.
Meanwhile, notice that
\begin{align}
\det(F_{0}-F_{1})&\cong(\det(H_{0}))^{\otimes 2rk(H_{0})}\otimes(\det(H_{1}))^{\otimes 2rk(H_{1})}\notag\\
&\otimes((\det(H_{0}))^{\otimes -rk(H_{0})}\otimes(\det(H_{1}))^{\otimes -rk(H_{1})})^{2}\notag
&\cong\mathcal{O}_{X}.\notag
\end{align}
By the bi-multiplicativity of the Deligne pairing (see the statements after Definition \ref{d,p.n}, or \cite{Del}, \S 6.6.2), the trivial computation
$\langle\mathcal{O}_{X},L\rangle\cong\langle\mathcal{O}_{X}\otimes\mathcal{O}_{X},L\rangle\cong\langle\mathcal{O}_{X},L\rangle\otimes\langle\mathcal{O}_{X},L\rangle$ shows that
$\langle\mathcal{O}_{X},L\rangle\cong\mathcal{O}_{S}$ for any line bundle $L$ over $X$.
Now, we can obtain the corollary by
\begin{align}
\det Rf_{*}((H_{0}-H_{1})^{\otimes 3}\otimes H)&\cong\det Rf_{*}((E_{0}-E_{1})\otimes (F_{0}-F_{1}))\notag\\
&\cong\langle\det(F_{0}-F_{1}),\det(E_{0}-E_{1})\rangle\notag\\
&\cong\langle\mathcal{O}_{X},\det(E_{0}-E_{1})\rangle\notag\\
&\cong \mathcal{O}_{S}.\notag
\end{align}
For any morphism $g :S^{'}\rightarrow S$, we have the fiber product under base change:
$$\xymatrix{X^{'}\ar[r]^{g^{'}}\ar[d]_{f^{'}}&X\ar[d]^{f}\\
S^{'}\ar[r]^{g}&S}$$
Furthermore, properness and flatness are stable under base change (see \cite{Liu}, Chapt. 3), so the morphism $f^{'}$ is proper and flat of relative dimension $1$.
For any vector bundle $F$ over $X$, we have the isomorphisms $$g^{*}(\text{det}_{S}(Rf_{*}F))\cong\text{det}_{S^{'}}(\text{L}g^{*}(Rf_{*}F))\cong\text{det}_{S^{'}}(Rf^{'}_{*}(g^{'*}F))$$ (these isomorphisms will be proved in (II) of Theorem \ref{functor}).
Taking $F=(H_{0}-H_{1})^{\otimes l}\otimes H$, the isomorphism above becomes $$ \text{det}_{S^{'}}(Rf^{'}_{*}(g^{'*}F))\cong g^{*}(\text{det}_{S}(Rf_{*}F))\cong g^{*}(\mathcal{O}_{S})\cong \mathcal{O}_{S^{'}}.$$ This completes the proof.
\end{proof}
Before proving our theorem, we state Deligne's functorial Riemann-Roch theorem for later comparison. His theorem holds in any characteristic.
\begin{theo}\label{Del,func}(Deligne)
Suppose that a morphism $f:X\rightarrow S$ is proper and smooth of relative dimension $1$, with geometrically connected fibers. For any line bundle $L$ in $V(X)$, there exists an isomorphism of line bundles, unique up to sign,
$$(\det Rf_{*}(L))^{\otimes 12}\cong \langle\omega, \omega\rangle\otimes\langle L, \omega ^{-1}\otimes L\rangle^{\otimes 6}$$ where $\omega:= \Omega_{f}$ is the sheaf of relative differentials of the morphism $f$.
\end{theo}
\begin{proof}
See \cite{Del}, Pag. 170, Theorem. 9.8.
\end{proof}
\begin{rem}\label{explai}
According to the property of Deligne pairing
(Prop. \ref{d,p}), if $u$ and $v$ are virtual vector bundles of rank $0$ over $X$, then there is a canonical isomorphism:
$$\langle\det u,\det v\rangle\cong\det Rf_{*}(u\otimes v).$$
In particular, let $u$ be $L-\mathcal{O}$ and $v$ be $M-\mathcal{O}$. Then we have
\begin{align}
\langle L, M\rangle&\cong\langle \det (L -\mathcal{O}), \det(M-\mathcal{O})\rangle\notag\\
&\cong \det Rf_{*}((L -\mathcal{O})\otimes (M-\mathcal{O}))\notag\\
&\cong\det Rf_{*}(L\otimes M) \otimes(\det Rf_{*}L)^{-1}\otimes (\det Rf_{*}M)^{-1} \otimes\det Rf_{*}(\mathcal{O})\notag
\end{align}
for line bundles $L, M$ and the trivial bundle $\mathcal{O}$ over $X$.
Similarly, we have the isomorphism
\begin{align} \langle L&, ~\omega^{-1} \otimes L \rangle \notag\\
&\cong\det Rf_{*}(L^{2}\otimes \omega^{-1} ) \otimes (\det Rf_{*}L)^{-1}\otimes (\det Rf_{*}(L\otimes\omega^{-1}))^{-1}\otimes \det Rf_{*}(\mathcal{O}).\notag
\end{align}
Moreover, by Mumford's isomorphism (see \cite{Mum}, \S 5), i.e.,
$(\det Rf_{*}\mathcal{O})^{\otimes 12 }\cong \langle\omega,\omega\rangle$, the equivalent expression of Deligne's isomorphism is the following:
\begin{align}
(\det &Rf_{*}L)^{\otimes 18 }\notag\\
&\cong (\det Rf_{*}\mathcal{O})^{\otimes 18 }\otimes (\det Rf_{*}(L^{\otimes 2}\otimes
\omega^{-1}))^{\otimes 6}\otimes (\det Rf_{*}(L\otimes \omega^{-1}))^{\otimes (-6)}.\notag
\end{align}
This is the statement appearing in the introduction.
\end{rem}
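The exponent bookkeeping behind this equivalent expression can be checked mechanically. The following Python sketch is an editorial verification aid, not part of the original argument: it treats each determinant line bundle as a formal symbol, so that tensor products and duals become integer sums, and confirms that Deligne's $\otimes 12$ isomorphism, together with Mumford's isomorphism and the expansion of $\langle L,\omega^{-1}\otimes L\rangle$ given above, yields exactly the $\otimes 18$ expression.

```python
from collections import Counter

# Formal exponents: each determinant line bundle is a free generator.
#   'L'   = det Rf_* L                 'O'  = det Rf_* O
#   'L2w' = det Rf_*(L^2 (x) w^{-1})   'Lw' = det Rf_*(L (x) w^{-1})
# Expansion of the pairing <L, w^{-1} (x) L> from the remark:
pairing = Counter({'L2w': 1, 'L': -1, 'Lw': -1, 'O': 1})
# Mumford's isomorphism: <w, w> = 12 * O
mumford = Counter({'O': 12})

# Deligne: 12*L = <w,w> + 6*<L, w^{-1} (x) L>   (written additively)
rhs = Counter(mumford)
rhs.update({sym: 6 * c for sym, c in pairing.items()})  # update keeps negatives

diff = Counter({'L': 12})
diff.subtract(rhs)
# Rearranged, this reads 18*L = 18*O + 6*L2w - 6*Lw, the (x)18 expression.
assert diff == Counter({'L': 18, 'O': -18, 'L2w': -6, 'Lw': 6})
print("tensor-12 statement is equivalent to the tensor-18 expression")
```

Note that `Counter.update` and `Counter.subtract` are used instead of `+`/`-`, since the latter discard non-positive counts.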
With these preparations complete, our main result is as follows:
\begin{theo}\label{functor}
Let $f:X\rightarrow S$ be projective and smooth of relative dimension $1$, where $S$ is a quasi-compact scheme of characteristic $p>0$ and carries an ample invertible sheaf.
Let $L$ be a line bundle over $X$ and let $\omega:= \Omega_{f}$ be the sheaf of relative differentials of $f$. Then we have
\begin{align}
(I)~(\det Rf_{*}L)^{\otimes p^{4}}\cong&(\det Rf_{*}L^{\otimes p})^{\otimes~3p^{2}-3p+1}\otimes_{k=1}^{p-1}(\det Rf_{*}(L^{\otimes p}\otimes \omega^{k}))^{\otimes k+1-3p}\notag\\
&\otimes_{k=0}^{p-2}(\det Rf_{*}(L^{\otimes p}\otimes \omega^{p+k}))^{\otimes p-1-k}.\notag
\end{align}
In particular, for $p=2$ we have
\begin{align}
(\det Rf_{*}L)^{\otimes 16}\cong&(\det Rf_{*}(L^{\otimes 2}))^{\otimes 7}\otimes(\det Rf_{*}(\omega\otimes(L^{\otimes 2})))^{\otimes~(-4)}\notag\\
&\otimes\det Rf_{*}(\omega^{2}\otimes(L^{\otimes 2})).\notag
\end{align}
(II) The isomorphism in (I) is stable under base change, i.e., for any flat base extension $g :S^{'}\rightarrow S$, we form the fiber product:
$$\xymatrix{X^{'}\ar[r]^{g^{'}}\ar[d]_{f^{'}}&X\ar[d]^{f}\\
S^{'}\ar[r]^{g}&S }$$
Then there are canonical isomorphisms over $S^{'}$:
$$\xymatrix{g^{*}((\text{det}_{S}(Rf_{*}(L)))^{\otimes p^{4}})\ar[rr]^{\cong}\ar[d]_{\cong}& &(\det_{S^{'}} Rf^{'}_{*}g^{'*}L)^{\otimes p^{4}}\ar[d]^{\cong}\\
B\ar[rr]^{\cong}& &A}$$
\begin{align}
A:= &(\det Rf^{'}_{*}g^{'*}L^{\otimes p})^{\otimes~3p^{2}-3p+1}\otimes_{k=1}^{p-1}(\det Rf^{'}_{*}(g^{'*}L^{\otimes p}\otimes \omega^{'k}))^{\otimes k+1-3p}\notag\\
&\otimes_{k=0}^{p-2}(\det Rf^{'}_{*}(g^{'*}L^{\otimes p}\otimes \omega^{'p+k}))^{\otimes p-1-k}\notag
\end{align}
\begin{align}
B:= &g^{*}((\det Rf_{*}L^{\otimes p})^{\otimes~3p^{2}-3p+1}\otimes_{k=1}^{p-1}(\det Rf_{*}(L^{\otimes p}\otimes \omega^{k}))^{\otimes k+1-3p}\notag\\
&\otimes_{k=0}^{p-2}(\det Rf_{*}(L^{\otimes p}\otimes \omega^{p+k}))^{\otimes p-1-k})\notag
\end{align}
where $\omega^{'}=\Omega_{f^{'}}$ is the sheaf of relative differentials of the morphism $f^{'}$.
More precisely, the pull-back of the isomorphism in (I) for the morphism $f: X\rightarrow S$ coincides with the isomorphism in (I) for the morphism $f^{'}: X^{'}\rightarrow S^{'}$.
\end{theo}
According to our Definition \ref{vdk}, there is an induced functor from the virtual category $V(X)$ to the Picard category $\mathscr{P}is_{S}$. The isomorphisms in (I) and (II) can be viewed as
isomorphisms of line bundles, since we did not write out the degrees of the graded line bundles; they come, however, from isomorphisms of graded line bundles in the category $\mathscr{P}is_{S}$.
Indeed, two objects $(L,l)$ and $(M,m)$ in the category $\mathscr{P}is_{S}$ are isomorphic if and only if $L\cong M$ and $l=m$. We will apply ideas from the proof of the $p$-th Adams-Riemann-Roch theorem in characteristic $p>0$ to our proof. To some extent, our theorem can be viewed as a variant of Deligne's functorial Riemann-Roch theorem: both express
$(\det Rf_{*}L)^{\otimes k}$ as a tensor product of powers of $\det Rf_{*}L^{\otimes l}$, $\det Rf_{*}\omega^{\otimes m}$ and
$\det Rf_{*}\mathcal{O}$ for suitable $k, l, m$.
\begin{proof}
Firstly, for any prime number $p$, we have
\begin{align}
&(\det Rf_{*}L)^{\otimes p^{4}}\notag\\
&\cong F^{*}_{S}(\det (Rf_{*}L))^{\otimes p^{3}}\notag\\
&\cong(\det \text{L}F^{*}_{S}(Rf_{*}L))^{\otimes p^{3}}\\
&\cong(\det Rf^{'}_{*}(J^{*}L))^{\otimes p^{3}}\\
&\cong\det Rf^{'}_{*}(p^{3}J^{*}L+(F_{*}\mathcal{O}_{X}-p)^{\otimes 3}\otimes J^{*}L)\\
&\cong\det Rf^{'}_{*}(p^{3}J^{*}L+((F_{*}\mathcal{O}_{X})^{\otimes 3}-3p(F_{*}\mathcal{O}_{X})^{\otimes 2}+3p^{2}(F_{*}\mathcal{O}_{X}))\otimes J^{*}L\notag\\
&~~- p^{3}J^{*}L)\\
&\cong\det Rf^{'}_{*}((F_{*}\mathcal{O}_{X})\otimes(p^{2}+p(p-F_{*}\mathcal{O}_{X})+(p-F_{*}\mathcal{O}_{X})^{2})
\otimes J^{*}L)\\
&\cong\det Rf_{*}(F^{*}(p^{2}+p(p-F_{*}\mathcal{O}_{X})+(p-F_{*}\mathcal{O}_{X})^{2})\otimes F^{*}_{X}L)\\
&\cong\det Rf_{*}((p^{2}+p(p-F^{*}F_{*}\mathcal{O}_{X})+(p-F^{*}F_{*}\mathcal{O}_{X})^{2})\otimes L^{\otimes p})\\
&\cong(\det Rf_{*}L^{\otimes p})^{\otimes 3p^{2}}\otimes(\det Rf_{*}(F^{*}F_{*}\mathcal{O}_{X}\otimes L^{\otimes p})))^{\otimes (-3p)}\notag\\
&~~\otimes\det Rf_{*}((F^{*}F_{*}\mathcal{O}_{X})^{\otimes 2}\otimes L^{\otimes p})\\
&\cong(\det Rf_{*}L^{\otimes p})^{\otimes~3p^{2}-3p+1}\otimes_{k=1}^{p-1}(\det Rf_{*}(L^{\otimes p}\otimes \omega^{k}))^{\otimes k+1-3p}\notag\\
&\otimes_{k=0}^{p-2}(\det Rf_{*}(L^{\otimes p}\otimes \omega^{p+k}))^{\otimes p-1-k}.
\end{align}
These isomorphisms are consequences of the properties of $\det$ and of isomorphisms appearing in the $p$-th Adams-Riemann-Roch theorem in characteristic $p>0$.
We explain them one by one, using again the following diagram and notation.
$$\xymatrix{X\ar@/_/[ddr]_{f}\ar@{.>}[dr]|{F_{X/S}}\ar @/^/[rrd]^{F_{X}}& & \\
& X^{'}\ar[r]_{J}\ar[d]^{f^{'}}&X\ar[d]^{f}\\
&S\ar[r]^{F_{S}}&S
}$$
We will continue to use $F$ to denote the relative Frobenius instead of $F_{X/S}$ for simplicity.
Firstly, by the definition of the extended determinant functor and the property of the absolute Frobenius morphism $F^{*}_{S}$ (by Prop. \ref{frebad}), $(\det (Rf_{*}L))^{\otimes p^{3}}$ is a
line bundle and we have the isomorphism $(\det Rf_{*}L)^{\otimes p^{4}}\cong F^{*}_{S}(\det (Rf_{*}L))^{\otimes p^{3}}$.
Moreover, (1) follows from the fact that the pull-back commutes with the determinant functor by the property iii) of the extended determinant functor, where $\text{L}F^{*}_{S}$ is the left
derived functor of the functor $F^{*}_{S}$.
We get (2) because cohomology commutes with flat base change, i.e., \\$\text{L}F^{*}_{S}\cdot Rf_{*}\cong Rf^{'}_{*}\cdot\text{L}J^{*}$ (see \cite{Berth}, IV, Prop. 3.1.1). Because $L$ is a line bundle,
$\text{L}J^{*}L$ is the same as $J^{*}L$.
In (3), we introduce the new term $(F_{*}\mathcal{O}_{X}-p)^{\otimes 3}\otimes J^{*}L$. By Lemma \ref{l,f}, $F_{*}\mathcal{O}_{X}$ is locally free of rank $p^{r}$, where $r$ is the relative dimension
of the morphism $f$. Because $f$ is smooth of relative dimension $1$ in our setting, $F_{*}\mathcal{O}_{X} -p$ is a virtual vector bundle of rank $0$.
According to Corollary \ref{trs},
$\det Rf^{'}_{*}((F_{*}\mathcal{O}_{X}-p)^{\otimes 3}\otimes J^{*}L)$ is trivial.
After that, (4) is an expansion of (3), and (5) is a recombination of (4), obtained by factoring out $F_{*}\mathcal{O}_{X}$ and grouping the remaining terms into powers of $p-F_{*}\mathcal{O}_{X}$.
(6) follows directly from the projection formula (see \cite{Berth}, III, Prop. 3.7) and the fact that $F^{*}_{X}=F^{*}J^{*}$. We know that $F_{X}^{*}$ has the same property as $\psi^{p}$ in characteristic $p>0$ by Prop. \ref{frebad}, i.e., $F_{X}^{*}(L)=L^{\otimes p}$.
By the functoriality of the functor $\det$, we combine terms and obtain (7).
In the $p$-th Adams-Riemann-Roch theorem in characteristic $p>0$, we have the isomorphism $F^{*}F_{*}\mathcal{O}_{X}\cong \theta^{p}(\omega)$ (see Prop. \ref{gci} and some statements before Def. \ref{ad,1}).
As in the Grothendieck group, we have the equality $F^{*}F_{*}\mathcal{O}_{X}=\tau(\omega)=1+\omega+\omega^{2}+\cdots+\omega^{p-1}$ in the virtual category $V(X)$.
Substituting the above equality into (7) and simplifying, we have
\begin{align}
&(\det Rf_{*}L)^{\otimes p^{4}}\notag\\
&\cong F^{*}_{S}(\det Rf_{*}L)^{\otimes p^{3}}\cong(\det \text{L}F^{*}_{S}(Rf_{*}L))^{\otimes p^{3}}\notag\\
&\cong(\det Rf_{*}L^{\otimes p})^{\otimes 3p^{2}}\otimes(\det Rf_{*}(F^{*}F_{*}\mathcal{O}_{X}\otimes L^{\otimes p}))^{\otimes -3p}\notag\\
&~~\otimes\det Rf_{*}((F^{*}F_{*}\mathcal{O}_{X})^{\otimes 2}\otimes L^{\otimes p})\notag\\
&\cong(\det Rf_{*}L^{\otimes p})^{\otimes~3p^{2}-3p+1}\otimes_{k=1}^{p-1}(\det Rf_{*}(L^{\otimes p}\otimes \omega^{k}))^{\otimes k+1-3p}\notag\\
&\otimes_{k=0}^{p-2}(\det Rf_{*}(L^{\otimes p}\otimes \omega^{p+k}))^{\otimes p-1-k}.\notag
\end{align}
These are isomorphisms (8) and (9), which finishes the proof of the isomorphism in (I). In particular, for $p=2$, a direct computation gives
\begin{align}
(\det Rf_{*}L)^{\otimes 16}\cong&(\det Rf_{*}(L^{\otimes 2}))^{\otimes 7}\otimes(\det Rf_{*}(\omega\otimes L^{\otimes 2}))^{\otimes~(-4)}\notag\\
&\otimes\det Rf_{*}(\omega^{2}\otimes(L^{\otimes 2})).\notag
\end{align}
For (II), recall the well-known facts that smoothness is stable under base change (see \cite{Hart}, Chap. III, section 10) and that projectivity is
stable under flat base change (see \cite{Liu}, 6.3.2). Hence $f^{'} :X^{'}\rightarrow S^{'}$ is projective and smooth of relative dimension
$1$. Furthermore, for any line bundle $L$ on $X$, $Rf_{*}(L)$ is a strictly perfect complex in $\text{Parf}^{0}$ (see (3) of section 2.1 and Rem. \ref{pcd}). Then we have $$g^{*}(\text{det}_{S}(Rf_{*}(L)))\cong\text{det}_{S^{'}}(\text{L}g^{*}(Rf_{*}(L)))\cong\text{det}_{S^{'}}(Rf^{'}_{*}(g^{'*}L)).$$
The first isomorphism is from iii) of the definition of the extended determinant functor. The second isomorphism is from the base-change formula (see \cite{Berth}, IV, Prop. 3.1.1), i.e., $\text{L}g^{*}Rf_{*}\cong Rf^{'}_{*}\text{L}g^{'*}$.
Because $L$ is a line bundle, $\text{L}g^{'*}L$ is the same as $g^{'*}L$, which proves the horizontal isomorphism of the diagram in (II).
The left vertical isomorphism in the diagram is obvious: pulling back an isomorphism gives an isomorphism between the pull-backs of its two sides. The pull-back of the right-hand side of the isomorphism is just $B$.
The isomorphism $B\cong A$ results from expanding $B$ further. The pull-back of the right-hand side of the isomorphism in (I) is the pull-back of $\det Rf_{*}L^{\otimes p}$, $\det Rf_{*}(L^{\otimes p}\otimes \omega^{k})$ and
$\det Rf_{*}(L^{\otimes p}\otimes \omega^{p+k})$, respectively. As in the proof of the horizontal isomorphism, for any vector bundle $F$ on $X$ we have
$$g^{*}(\text{det}_{S}(Rf_{*}(F)))\cong\text{det}_{S^{'}}(Rf^{'}_{*}(g^{'*}F)).$$
Furthermore, these pull-backs are
$$g^{*}(\det Rf_{*}L^{\otimes p})\cong\text{det}_{S^{'}}(Rf^{'}_{*}(g^{'*}L^{\otimes p}));$$
\begin{align}
g^{*}(\det Rf_{*}(L^{\otimes p}\otimes \omega^{k}))&\cong\text{det}_{S^{'}}(Rf^{'}_{*}(g^{'*}(L^{\otimes p}\otimes\omega^{k})))\notag\\
&\cong\text{det}_{S^{'}}(Rf^{'}_{*}(g^{'*}L^{\otimes p}\otimes\omega^{'k}));\notag
\end{align}
\begin{align}
g^{*}(\det Rf_{*}(L^{\otimes p}\otimes \omega^{p+k}))&\cong\text{det}_{S^{'}}(Rf^{'}_{*}(g^{'*}(L^{\otimes p}\otimes\omega^{p+k})))\notag\\
&\cong\text{det}_{S^{'}}(Rf^{'}_{*}(g^{'*}L^{\otimes p}\otimes\omega^{'p+k})).\notag
\end{align}
In the last two pull-backs, we use the fact that the sheaf of relative differentials is stable under base change, i.e., $\omega^{'}\cong g^{'*}(\omega)$.
Putting these pull-backs together yields exactly $A$.
Meanwhile, since the morphism $f^{'} :X^{'}\rightarrow S^{'}$ satisfies the same conditions as the morphism $f$,
we have the isomorphism in (I) for $g^{'*}L$, i.e.,
\begin{align}
(\det Rf^{'}_{*}&g^{'*}L)^{\otimes p^{4}}\notag\\
&\cong(\det Rf^{'}_{*}g^{'*}L^{\otimes p})^{\otimes~3p^{2}-3p+1}\otimes_{k=1}^{p-1}(\det Rf^{'}_{*}(g^{'*}L^{\otimes p}\otimes \omega^{'k}))^{\otimes k+1-3p}\notag\\
&\otimes_{k=0}^{p-2}(\det Rf^{'}_{*}(g^{'*}L^{\otimes p}\otimes \omega^{'p+k}))^{\otimes p-1-k}.\notag
\end{align}
The right-hand side is again $A$, which verifies the compatibility under base change.
\end{proof}
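The passage from (7) to (8) and (9) amounts to collecting the coefficients of $\omega^{m}$ in $3p^{2}-3p\,\tau(\omega)+\tau(\omega)^{2}$, where $\tau(\omega)=1+\omega+\cdots+\omega^{p-1}$. As a sanity check on this bookkeeping, the following Python sketch (an editorial aid, not part of the proof) confirms that these coefficients reproduce the exponents in (I), including the displayed $p=2$ special case.

```python
def rhs_exponents(p):
    """Exponent of det Rf_*(L^{(x)p} (x) w^j) in isomorphism (I),
    keyed by the power j of w (j = 0, ..., 2p-2)."""
    exps = {0: 3*p*p - 3*p + 1}
    for k in range(1, p):          # terms L^{(x)p} (x) w^k,  k = 1..p-1
        exps[k] = k + 1 - 3*p
    for k in range(p - 1):         # terms L^{(x)p} (x) w^{p+k}, k = 0..p-2
        exps[p + k] = p - 1 - k
    return exps

def expansion(p):
    """Coefficient of w^m in 3p^2 - 3p*tau(w) + tau(w)^2, where
    tau(w) = 1 + w + ... + w^{p-1} represents F^*F_* O_X."""
    coeffs = [0] * (2*p - 1)
    coeffs[0] += 3*p*p
    for i in range(p):             # subtract 3p * tau(w)
        coeffs[i] -= 3*p
    for i in range(p):             # add tau(w)^2
        for j in range(p):
            coeffs[i + j] += 1
    return {m: c for m, c in enumerate(coeffs) if c != 0}

for p in (2, 3, 5, 7, 11):
    assert expansion(p) == rhs_exponents(p)
# The displayed p = 2 case: exponents 7, -4, 1 on L^2, w (x) L^2, w^2 (x) L^2.
assert rhs_exponents(2) == {0: 7, 1: -4, 2: 1}
print("exponents in (I) verified")
```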
\begin{rem}
Our theorem is not a consequence of the Adams-Riemann-Roch theorem in $K$-theory: it is formulated in terms of the virtual category and the Picard category, which is what makes it functorial.
Compared with the general setting, in which the properties of the Deligne pairing also apply and similar results can be obtained, our proof is not complicated. In \cite{Den}, Eriksson defined the Adams operation and the Bott class on the virtual category and proved the Adams-Riemann-Roch theorem in the localized Picard category;
this first requires defining the localized virtual category and the localized Picard category, and these definitions and proofs cannot be stated clearly in just a few pages.
In our theorem, the Adams operation and the Bott class on the virtual category are unnecessary. We emphasize instead the merits of positive characteristic, which is one of our motivations.
\end{rem}
In \cite{Mum}, Mumford gave an isomorphism which is now called Mumford's isomorphism. We state it as follows:
Let $f:C\rightarrow S$ be a flat,
local complete intersection, generically smooth, proper morphism with geometrically connected fibers of dimension $1$, where $S$ is any connected normal Noetherian locally factorial scheme.
We denote $\det Rf_{*}\omega^{\otimes n}$ by $\lambda_{n}$, where $\omega=\omega_{C/S}$ is the relative dualizing sheaf, which is canonically isomorphic to the sheaf of relative differentials of the morphism $f$.
Then Mumford's isomorphism is $\det Rf_{*}(\omega^{n})\cong(\det Rf_{*}(\omega))^{6n^{2}-6n+1}$; its original version is stronger than the present expression.
From our theorem, we can recover a version of Mumford's isomorphism:
under our conditions, i.e., $f$ a smooth and projective morphism of quasi-compact schemes with geometrically connected fibers of dimension $1$, the isomorphism
$\det Rf_{*}(\omega^{n})\cong(\det Rf_{*}(\omega))^{6n^{2}-6n+1}$, i.e., $\lambda_{n}\cong\lambda_{1}^{6n^{2}-6n+1}$ in this notation, still holds.
\begin{coro}\label{mfi}
Suppose $p=2$ and let $f$ be as in the theorem. Then Deligne's Riemann-Roch theorem in \cite{Del} and Mumford's isomorphism both follow.
\end{coro}
\begin{proof}
On the one hand, we have the isomorphism (I) of the previous theorem in any characteristic $p>0$; we view it as an isomorphism of graded
line bundles, even though we do not write out the corresponding degrees of the graded line bundles.
On the other hand, we have Serre duality. For any vector bundle $F$ on $X$, the Grothendieck-Serre duality $R\underline{Hom}(Rf_{*}F, \mathcal{O}) [-1]\cong Rf_{*}\underline{Hom}(F, \omega)$, where $\underline{Hom}$ is the hom functor for complexes of sheaves, gives an isomorphism of graded line bundles
between $\det Rf_{*}F$ and $\det Rf_{*}\underline{Hom}(F, \omega)$ (see \cite{Del}, p. 150). As before, we denote $\det Rf_{*}\omega^{\otimes n}$ by $\lambda_{n}$; in particular, we denote $\det Rf_{*}\mathcal{O}$ by $\lambda_{0}$.
Firstly, taking $F$ to be the trivial bundle in the duality isomorphism above gives $\det Rf_{*}\mathcal{O} \cong \det Rf_{*}\omega$, i.e., $\lambda_{0}\cong\lambda_{1}$.
Furthermore, let $F$ be a line bundle $L$ and we have
\begin{align}
\det Rf_{*}&((L-\mathcal{O})\otimes (\omega\otimes L^{-1}- \mathcal{O}))\notag\\
&\cong \det Rf_{*}\omega \otimes (\det Rf_{*}L)^{-1} \otimes (\det Rf_{*}(\omega \otimes L^{-1}))^{-1}\otimes
\det Rf_{*}\mathcal{O}\notag\\
&\cong(\det Rf_{*}(L-\mathcal{O}))^{\otimes (-2)}.\notag
\end{align}
By the property of the Deligne pairing (Prop. \ref{d,p}), we have
\begin{align}
(\det Rf_{*}(L-\mathcal{O}))^{\otimes 2}&\cong\det Rf_{*}(-(L-\mathcal{O})\otimes (\omega\otimes L^{-1}- \mathcal{O}))\notag\\
&\cong\det Rf_{*}((L-\mathcal{O})\otimes ( \mathcal{O}-\omega\otimes L^{-1}))\notag\\
&\cong \langle L, \omega ^{-1}\otimes L\rangle.
\end{align}
Raising both sides of (10) to the $6$th tensor power, we have $(\det Rf_{*}(L-\mathcal{O}))^{\otimes 12}\cong \langle L, \omega ^{-1}\otimes L\rangle^{\otimes 6}$.
Meanwhile, we consider the Deligne pairing $\langle \omega, \omega \rangle$. By the property of the Deligne pairing (Prop. \ref{d,p}), we have the following isomorphism:
\begin{align}
\langle \omega, \omega\rangle& \cong \langle\det ( \omega-\mathcal{O}), \det(\omega-\mathcal{O})\rangle \notag\\
&\cong \det Rf_{*} (( \omega-\mathcal{O})\otimes (\omega-\mathcal{O}))\notag\\
&\cong \det Rf_{*}(\omega^{2})\otimes ( \det Rf_{*} \omega)^{\otimes (-2)}\otimes\det Rf_{*} \mathcal{O}.\notag
\end{align}
Taking $L$ to be the trivial bundle and $p=2$ in our theorem, we have
\begin{align}
(\det Rf_{*}\mathcal{O})^{\otimes 2^{4}}&\cong(\det Rf_{*}\mathcal{O})^{\otimes~3\cdot 2^{2}-3\cdot 2+1}\otimes(\det Rf_{*}\omega)^{\otimes~2-3\cdot 2}\notag\\
&~~\otimes\det Rf_{*}(\omega^{2})\notag
\end{align}
i.e., $\lambda_{0}^{16}\cong \lambda_{0}^{7}\otimes \lambda_{1}^{-4}\otimes\lambda_{2}$. Since $\lambda_{0}\cong\lambda_{1}$, this gives $\lambda_{2}\cong \lambda_{1}^{13}$, and
hence $\langle \omega, \omega\rangle\cong\lambda_{2}\otimes \lambda_{1}^{(-2)}\otimes\lambda_{0}\cong\lambda_{0}^{12}$.
By the isomorphism $(\det Rf_{*}(L-\mathcal{O}))^{\otimes 12}\cong \langle L, \omega ^{-1}\otimes L\rangle^{\otimes 6}$, we then have the standard statement $$(\det Rf_{*}L)^{\otimes 12}\cong (\det Rf_{*}\mathcal{O})^{\otimes12}\otimes\langle L, \omega ^{-1}\otimes L\rangle^{\otimes 6}\cong\langle \omega, \omega\rangle\otimes\langle L, \omega ^{-1}\otimes L\rangle^{\otimes 6},$$
which completely coincides with Deligne's statement in \cite{Del}.
If we use the property of the Deligne pairing (Prop. \ref{d,p}) again, as in Rem. \ref{explai}, the right-hand side of the isomorphism in (10) is
\begin{align} \langle L&, ~\omega^{-1} \otimes L \rangle \notag\\
&\cong\det Rf_{*}(L^{2}\otimes \omega^{-1} ) \otimes (\det Rf_{*}L)^{-1}\otimes (\det Rf_{*}(L\otimes\omega^{-1}))^{-1}\otimes \det Rf_{*}(\mathcal{O}).\notag
\end{align}
Then the isomorphism (10) becomes
\begin{align}
(\det& Rf_{*}L)^{\otimes 2}\otimes (\det Rf_{*}\mathcal{O})^{\otimes (-2)}\\
&\cong\det Rf_{*}(L^{2}\otimes \omega^{-1} ) \otimes (\det Rf_{*}L)^{-1}\otimes (\det Rf_{*}(L\otimes\omega^{-1}))^{-1}\otimes \det Rf_{*}(\mathcal{O}).\notag
\end{align}
Let $L$ be the $n$-th power $\omega^{n}$ of the sheaf of differentials $\omega$ in (11). Then (11) becomes $\lambda_{n}^{2}\otimes \lambda_{0}^{\otimes(-2)}\cong\lambda_{2n-1}\otimes\lambda_{n}^{(-1)}\otimes \lambda_{n-1}^{\otimes (-1)}\otimes\lambda_{0}.$
In our proof of Deligne's Riemann-Roch theorem, we already have the isomorphism $\lambda_{2}\cong \lambda_{1}^{13}$, which is just Mumford's isomorphism for $n=2$.
For general $n$ in Mumford's isomorphism, let $L$ be the bundle $\omega^{n}$ in our theorem. This gives
\begin{align}
(\det Rf_{*}\omega ^{n})^{\otimes 16}&\cong(\det Rf_{*}\omega ^{2n})^{\otimes~7}\otimes(\det Rf_{*}(\omega\otimes(\omega ^{2n})))^{\otimes(-4)}\notag\\
&~~\otimes\det Rf_{*}(\omega^{2}\otimes(\omega ^{2n}))\notag
\end{align}
i.e., $\lambda_{2n+2}\cong\lambda_{n}^{16}\otimes\lambda_{2n}^{(-7)}\otimes\lambda_{2n+1}^{4}$. Combined with the isomorphism $\lambda_{n}^{2}\otimes \lambda_{0}^{\otimes(-2)}\cong\lambda_{2n-1}\otimes\lambda_{n}^{(-1)}\otimes \lambda_{n-1}^{\otimes (-1)}\otimes\lambda_{0}$,
induction then yields exactly Mumford's isomorphism
$\lambda_{n}\cong\lambda_{1}^{\otimes (6n^{2}-6n+1)}$ for $p=2$.
\end{proof}
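The exponent arithmetic in this induction can also be checked numerically. Writing $\lambda_{n}\cong\lambda_{1}^{\otimes e(n)}$ with $e(n)=6n^{2}-6n+1$, the two relations used above must hold identically in $n$; the following Python sketch (an editorial verification, not part of the proof) confirms this.

```python
def e(n):
    """Exponent in Mumford's isomorphism: lambda_n = lambda_1^{e(n)}."""
    return 6*n*n - 6*n + 1

assert e(0) == e(1) == 1      # lambda_0 = lambda_1 (from Serre duality)
assert e(2) == 13             # lambda_2 = lambda_1^{13}, as derived above

for n in range(1, 200):
    # relation (11) with L = w^n:
    #   lambda_n^2 * lambda_0^{-2}
    #     = lambda_{2n-1} * lambda_n^{-1} * lambda_{n-1}^{-1} * lambda_0
    assert 2*e(n) - 2*e(0) == e(2*n - 1) - e(n) - e(n - 1) + e(0)
    # p = 2 theorem with L = w^n:
    #   lambda_{2n+2} = lambda_n^{16} * lambda_{2n}^{-7} * lambda_{2n+1}^4
    assert e(2*n + 2) == 16*e(n) - 7*e(2*n) + 4*e(2*n + 1)
print("e(n) = 6n^2 - 6n + 1 satisfies both recursions")
```

Together with the base cases, the two recursions determine $e(n)$ for odd and even indices respectively, which is the content of the induction.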
\begin{rem}
The original proof of Mumford's isomorphism (see \cite{Mum}, pp. 99-110) is a calculation using Grothendieck-Riemann-Roch
together with facts about the Picard group of the moduli functor of stable curves. In our corollary, it arises as a special case of our theorem,
by taking the line bundle to be the trivial bundle and applying Serre duality. For any prime number $p>2$, we have an analogous expression to Mumford's isomorphism, which is explained as follows.
Let $L$ be the trivial bundle in our theorem. This gives the isomorphism
\begin{align}
\lambda_{0}^{p^{4}} \cong & \lambda_{0}^{3p^{2}-3p+1} \otimes\lambda_{1}^{2-3p} \otimes\lambda_{2}^{3-3p}\otimes \ldots\otimes \lambda_{p-1}^{p-3p}\otimes\notag\\
& \lambda_{p}^{p-1}\otimes \lambda_{p+1}^{p-2}\otimes\ldots \otimes\lambda_{2p-2}.\notag
\end{align}
Given the isomorphism $\lambda_{0}\cong\lambda_{1}$, we have
\begin{align}
\lambda_{2p-2}\cong &\lambda_{1}^{p^{4}-3p^{2}+6p-3}\otimes\lambda_{2}^{3p-3}\otimes \ldots\otimes \lambda_{p-1}^{3p-p}\otimes\notag\\
& \lambda_{p}^{1-p}\otimes \lambda_{p+1}^{2-p}\otimes\ldots \otimes\lambda_{2p-3}^{\otimes(-2)}.\notag
\end{align}
More generally, let $L$ be $\omega^{n}$ in our theorem. Then we have
\begin{align}
\lambda_{n}^{p^{4}} \cong & \lambda_{np}^{3p^{2}-3p+1} \otimes\lambda_{np+1}^{2-3p} \otimes\lambda_{np+2}^{3-3p}\otimes \ldots\otimes \lambda_{np+p-1}^{p-3p}\otimes\notag\\
& \lambda_{np+p}^{p-1}\otimes \lambda_{np+p+1}^{p-2}\otimes\ldots \otimes\lambda_{np+2p-2}.\notag
\end{align}
\end{rem}
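The $\lambda_{1}$-exponent $p^{4}-3p^{2}+6p-3$ claimed above for $\lambda_{2p-2}$ can be checked by solving the first displayed relation for $\lambda_{2p-2}$ using $\lambda_{0}\cong\lambda_{1}$. The following Python sketch (an editorial aid, not part of the remark) verifies the exponent of $\lambda_{1}$ only, for several primes, and recovers $\lambda_{2}\cong\lambda_{1}^{13}$ at $p=2$.

```python
def rhs_exponents(p):
    """Exponent e_m of lambda_m on the right-hand side of (I) with L = O."""
    exps = {0: 3*p*p - 3*p + 1}
    for k in range(1, p):
        exps[k] = k + 1 - 3*p
    for k in range(p - 1):
        exps[p + k] = p - 1 - k
    return exps

for p in (2, 3, 5, 7, 11, 13):
    e = rhs_exponents(p)
    assert e[2*p - 2] == 1        # lambda_{2p-2} appears with exponent 1
    # Solving lambda_0^{p^4} = prod_m lambda_m^{e_m} for lambda_{2p-2}
    # and using lambda_0 = lambda_1, the exponent of lambda_1 is
    # p^4 - e_0 - e_1, which should match the claimed p^4 - 3p^2 + 6p - 3.
    assert p**4 - e[0] - e[1] == p**4 - 3*p*p + 6*p - 3

# At p = 2 there are no middle terms, so this recovers lambda_2 = lambda_1^{13}.
assert 2**4 - rhs_exponents(2)[0] - rhs_exponents(2)[1] == 13
print("lambda_1 exponent of lambda_{2p-2} verified")
```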
\end{document} |
\begin{document}
\title{Entanglement Detection Using Majorization Uncertainty Bounds}
\author{M. Hossein \surname{Partovi}}
\email[Electronic address:\,\,]{[email protected]}
\affiliation{Department of Physics and Astronomy, California State
University, Sacramento, California 95819-6041}
\date{\today}
\begin{abstract}
Entanglement detection criteria are developed within the framework of the majorization formulation of uncertainty. The primary results are two theorems asserting linear and nonlinear separability criteria based on majorization relations, the violation of which would imply entanglement. Corollaries to these theorems yield infinite sets of scalar entanglement detection criteria based on quasi-entropic measures of disorder. Examples are analyzed to probe the efficacy of the derived criteria in detecting the entanglement of bipartite Werner states. Characteristics of the majorization relation as a comparator of disorder uniquely suited to information-theoretical applications are emphasized throughout.
\end{abstract}
\pacs{03.65.Ud, 03.67.Mn, 03.65.Ca}
\maketitle
\section{Introduction}
Quantum measurements in general have indeterminate outcomes, with the results commonly expressed as a vector of probabilities corresponding to the set of outcomes. In the case of noncommuting observables, the joint indeterminacy of their measurement outcomes has an inviolable lower bound, in stark contrast to classical expectations. This was discovered by Heisenberg in his desire to advance the physical understanding of the newly discovered matrix mechanics by relating the unavoidable disturbances caused by the act of measurement to the fundamental commutation relations of quantum dynamics \cite{HEI}. Heisenberg's arguments relied on the statistical spread of the measured values of the observables to quantify uncertainty. This gave rise to the variance formulation of the uncertainty principle, which remains a powerful source of intuition on the structure and spectral properties of microscopic systems. With the prospect of quantum computing and the development of quantum information theory in recent decades, on the other hand, the need for a measure of uncertainty that can better capture its information-theoretical aspects, especially in dealing with noncanonical observables, has inspired new formulations. Among these are the entropic measure developed in the eighties, and the majorization formulation proposed recently. Uncertainty relations resulting from these formulations have found application to quantum cryptography, information locking, and entanglement detection, in addition to providing uncertainty limits \cite{SUR,I}.
In this paper we develop applications of the majorization formulation of uncertainty introduced in Ref. \cite{I} to the problem of entanglement detection. Deciding whether a given quantum state is entangled is a central problem of quantum information theory and known to be computationally intractable in general \cite{GUR}. As a result, computationally tractable necessary conditions for separability, which provide a partial solution to this problem, have been the subject of active research in recent years. Among these, the Peres-Horodecki positive partial transpose criterion actually provides necessary and sufficient separability conditions for $2\otimes2$ and $2\otimes3$ dimensional systems, and necessary conditions otherwise. Other notable results are the reduction and global versus local disorder criteria, both necessary conditions in general \cite{sep}. An observable that has non-negative expectation values for all separable states and negative ones for a subset of entangled states provides an operational method of entanglement detection and is known as an entanglement witness \cite{HOR}. It has also long been known that uncertainty relations can serve a similar purpose by providing inequalities that must be satisfied by separable states and if violated signal entanglement \cite{GIO,GAM}.
Here we extend the majorization formulation of uncertainty developed in Ref. \cite{I} to the problem of entanglement detection. As discussed in that paper (hereafter referred to as Paper I) and in the following, majorization as a comparator of uncertainty is qualitatively different from, and stronger than, scalar measures of uncertainty. Consequently, the entanglement detection results developed here represent a qualitative strengthening of the existing variance and entropic results. This will be evident, among other things, from the fact that they yield, as corollaries, an infinite class of entanglement detectors based on scalar measures.
As mentioned above, results of quantum measurements are in general probability vectors deduced from counter statistics, and an information theoretical formulation of uncertainty is normally based on an order of uncertainty defined on such vectors (see Paper I). Thus a scalar measure of uncertainty is commonly a real, non-negative function defined on probability vectors whose value serves to define the uncertainty in question. For example, the Shannon entropy function is the measure of uncertainty for the standard entropic formulation of uncertainty \cite{DEU}. By contrast, majorization provides a partial order of uncertainty on probability vectors that is in general more stringent, and fundamentally stronger, than a scalar measure \cite{maj}. Note that, unlike scalar measures, majorization does not assign a quantitative measure of uncertainty to probability vectors, and as a partial order may find a pair of vectors to be incomparable.
A characterization of majorization that clarifies the foregoing statements can be attained by considering the quasi-entropic set of measures. These were defined in Paper I as the set of concave, symmetric functions defined on probability vectors, and include the Shannon, Tsallis, and (a subfamily of) R\'{e}nyi entropies as special cases \cite{ftn0}. It is important to realize that the uncertainty order determined by one member of the quasi-entropic set for a given pair of probability vectors may contradict that given by another. While additional considerations may justify the use of, e.g., Shannon entropy in preference to the others, the foregoing observation clearly indicates the relative nature of the uncertainty order given by a specific measure, and immediately raises the following question: are there pairs of vectors for which all quasi-entropic measures determine the same uncertainty order? The answer is yes, and the common determination of the quasi-entropic set in such cases defines the uncertainty order given by majorization. What about the cases where there are conflicting determinations by the members of the quasi-entropic set? The majorization relation defines such pairs as incomparable, whence the ``partial'' nature of the order defined by majorization. We may therefore consider the majorization order to be equivalent to the collective order determination of the entire quasi-entropic set (see \S IV). This characterization of the majorization relation clearly shows its standing vis-\`{a}-vis the scalar measures of uncertainty. More practical definitions of the majorization relation will be considered in \S II.
The rest of this paper is organized as follows. In \S II we review elements of the majorization formulation of uncertainty needed for our work and establish the notation. In \S III we present the central results of this paper on entanglement detection, and in \S IV we derive entire classes of entanglement detectors for quasi-entropic measures as corollaries to the theorems of \S III. Concluding remarks are presented in \S V. Details of certain mathematical proofs are given in the Appendix.
\section{Majorization formulation of uncertainty}
This section presents a review of the basic elements of majorization theory and the formulation of the uncertainty principle based on it. It follows the treatment given in Paper I.
\subsection{Majorization}
The basic element of our formulation is the majorization protocol for comparing the degree of uncertainty, or disorder, among probability vectors, i.e., sequences of non-negative numbers summing to unity \cite{maj}. By definition, ${\lambda}^{1}$ is no less uncertain than ${\lambda}^{2}$ if ${\lambda}^{1}$ equals a mixture of the permutations of ${\lambda}^{2}$. Then ${\lambda}^{1}$ is said to be majorized by ${\lambda}^{2}$ and written ${\lambda}^{1}\prec{\lambda}^{2}$. An equivalent definition that fleshes out the details of the foregoing is based on the vector ${\lambda}^{\downarrow}$ which is obtained from $\lambda$ by arranging the components of the latter in nonincreasing order. Then, ${\lambda}^{1}\prec{\lambda}^{2}$ if ${\sum}_{i=1}^{j} {\lambda}^{1\downarrow}_{i} \leq {\sum}_{i=1}^{j} {\lambda}^{2\downarrow}_{i}$ for $j=1,2,\ldots,d-1$, where $d$ is the larger of the two dimensions and trailing zeros are added where needed. As stated earlier, the majorization relation is a \textit{partial} order, i.e., not every two vectors are comparable under majorization. As suggested in \S I, two vectors are found to be incomparable when the difference in their degrees of uncertainty does not rise to the level required by majorization. As evident from the second definition given above, majorization requires the satisfaction of $N-1$ inequalities if the number of non-zero components of the majorizing vector is $N$. This accounts for the strength of the majorization relation as compared to scalar measures of disorder. Indeed, as alluded to in \S I, for any quasi-entropic function $F(\lambda)$, ${\lambda}^{1} \prec {\lambda}^{2}$ implies $F({\lambda}^{1}) \geq
F({\lambda}^{2})$, but not conversely. On the other hand, if for \textit{every} quasi-entropic function $F(\lambda)$ we have
$F({\lambda}^{1}) \geq F({\lambda}^{2})$, then ${\lambda}^{1} \prec {\lambda}^{2}$ \cite{maj}.
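For concreteness, the partial-sum test just described can be sketched in a few lines of code (an illustration of ours, not part of the formal development; the function name \texttt{majorizes} is our own):

```python
import numpy as np

def majorizes(p, q, tol=1e-12):
    """True if q is majorized by p (q "no less uncertain" than p).
    Both vectors are padded with trailing zeros to a common length,
    sorted nonincreasingly, and compared through their first d-1
    partial sums."""
    d = max(len(p), len(q))
    p = np.sort(np.pad(np.asarray(p, float), (0, d - len(p))))[::-1]
    q = np.sort(np.pad(np.asarray(q, float), (0, d - len(q))))[::-1]
    return bool(np.all(np.cumsum(q)[:-1] <= np.cumsum(p)[:-1] + tol))
```

Note that \texttt{majorizes(p, q)} and \texttt{majorizes(q, p)} can both be false, reflecting the partial nature of the order; e.g., $(0.6,0.2,0.2)$ and $(0.5,0.5,0)$ are incomparable.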
To establish uncertainty bounds, we need to characterize the greatest lower bound, or \textit{infimum}, and the least upper bound, or \textit{supremum}, of a set of probability vectors \cite{HP3}. The \textit{infimum} is defined as the vector that is majorized by every element of the set and in turn majorizes any vector with that property. The \textit{supremum} is similarly defined. We will briefly outline the construction of the \textit{infimum} here and refer the reader to Paper I for further details.
Given a set of probability vectors $\{ {\lambda}^{a} {\}}_{a=1}^{N}$, consider the vector ${\mu}^{inf}$ defined by
\begin{align}
{\mu}_{0}^{inf}=0&,\,\,\,{\mu}_{j}^{inf}=\min \big ( {\sum}_{i=1}^{j} {\lambda}^{1\downarrow}_{i},{\sum}_{i=1}^{j} {\lambda}^{2\downarrow}_{i},\ldots, \nonumber \\
&{\sum}_{i=1}^{j}{\lambda}^{N\downarrow}_{i} \big ), \,\,\, 1 \leq j \leq {d}_{max}, \label{1}
\end{align}
where ${d}_{max}$ is the largest dimension found in the set. The infimum is then given by
\begin{equation}
{\lambda}^{inf}_{i}={[\inf({\lambda}^{1},{\lambda}^{2}, \ldots, {\lambda}^{N})]}_{i}={\mu}_{i}^{inf}-{\mu}_{i-1}^{inf}, \label{2}
\end{equation}
where $1 \leq i \leq {d}_{max}$. The construction of the supremum starts with ${\mu}^{sup}$ in parallel with Eq.~(\ref{1}), but with ``max'' replacing ``min,'' and may require further steps detailed in Paper I.
It is worth noting here that the infimum (supremum) of a pair of probability vectors will in general be more (less) disordered than either. Important special cases are (i) one of the two majorizes the other, in which case the latter is the infimum and the former the supremum of the two, and (ii) the two are equal, in which case either is both the infimum and the supremum. Furthermore, a useful qualitative rule is that the more ``different'' are two probability vectors, the further will the infimum or supremum of the two be from at least one of them. Finally, we note that while the infimum or supremum of a set of probability vectors always exists, it need not be a member of the set.
\subsection{Measurement and uncertainty}
A generalized measurement may be defined by a set of positive operators $\{ {\hat{\mathrm{E}}}_{\alpha} \}$ called measurement elements and subject to the completeness condition ${\sum}_{\alpha}{\hat{\mathrm{E}}}_{\alpha} = \hat{\mathbbm{1}}$. The
probability that outcome $\alpha$ turns up in a measurement of the state $\hat{\rho}$ is given by the Born rule
$\mathscr{P}_{\alpha}(\hat{\rho})=\textrm{tr} [\hat{\mathrm{E}}_{\alpha} \hat{\rho} ]$. A generalized measurement can always be regarded as the restriction, to the system being measured, of a more basic type, namely a \textit{projective} measurement performed on a suitably enlarged system \cite{NCH}. A projective measurement is usually associated with an observable of the system represented by a self-adjoint operator $\hat{M}$, and entails a partitioning of the spectrum of $\hat{M}$ into a collection of subsets $\{ {b}_{\alpha}^{M} \}$ called measurement \textit{bins}. We call a projective measurement \textit{maximal} if each bin consists of a single point of the spectrum of the measured observable.
Having assembled the necessary concepts, we can now characterize uncertainty by means of majorization relations in a natural manner. To start, we define the probability vector $\mathscr{P}^{X}(\hat{\rho})$ resulting from a measurement $\mathrm{X}$ on a state $\hat{\rho}$ to be \textit{uncertain} if it is majorized by $\mathcal{I}=(1,0,\ldots,0)$ but not equal to it. As such, $\mathscr{P}^{X}(\hat{\rho})$ is said to be strictly
majorized by $\mathcal{I}$ and written $\mathscr{P}^{X}(\hat{\rho})\prec \prec \mathcal{I}$. Similarly, given a pair of measurements $\mathrm{X}$ and $\mathrm{Y}$ on a state $\hat{\rho}$, we say $\mathscr{P}^{\mathrm{X}}(\hat{\rho})$ is more uncertain, equivalently more disordered, than $\mathscr{P}^{\mathrm{Y}}(\hat{\rho})$ if $\mathscr{P}^{\mathrm{X}}(\hat{\rho}) \prec \mathscr{P}^{\mathrm{Y}}(\hat{\rho})$. Further, we define the joint uncertainty of a pair of measurements $\mathrm{X}$ and $\mathrm{Y}$ to be the outer product $\mathscr{P}^{\mathrm{X}}\otimes\mathscr{P}^{\mathrm{Y}}$, i.e., $\mathscr{P}_{\alpha
\beta}^{\mathrm{X} \oplus \mathrm{Y}}=\mathscr{P}^{\mathrm{X}}_{\alpha} \mathscr{P}^{\mathrm{Y}}_{\beta}$. Since
$H(\mathscr{P}^{\mathrm{X}}\otimes \mathscr{P}^{\mathrm{Y}})=H(\mathscr{P}^{\mathrm{X}})+H(\mathscr{P}^{\mathrm{Y}})$, where $H(\cdot)$ is the Shannon entropy function, this definition is seen to be consistent with its entropic counterpart. As stated earlier, $\mathscr{P}^{\mathrm{X}} \prec \mathscr{P}^{\mathrm{Y}}$ implies $H(\mathscr{P}^{\mathrm{X}}) \geq H(\mathscr{P}^{\mathrm{Y}})$ but not conversely. These definitions naturally extend to an arbitrary number of states and measurements.
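The joint-uncertainty definition is easily checked to be consistent with Shannon additivity; a minimal sketch (ours):

```python
import numpy as np

def joint(px, py):
    """Joint uncertainty of two measurements: the outer product
    P_{ab} = P^X_a * P^Y_b, flattened into a single probability vector."""
    return np.outer(px, py).ravel()

def shannon(p):
    """Shannon entropy in bits, ignoring zero components."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

so that \texttt{shannon(joint(px, py))} equals \texttt{shannon(px) + shannon(py)}, the entropic counterpart of the definition.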
We are now in a position to state the majorization statement of the uncertainty principle established in Paper I:
``The joint results of a set of generalized measurements of a given state are no less uncertain than a probability vector that depends on the measurement set but not the state, and is itself uncertain unless the measurement elements have a common eigenstate."
In symbols,
\begin{equation}
\mathscr{P}^{\mathrm{X}}(\hat{\rho})\otimes \mathscr{P}^{\mathrm{Y}}(\hat{\rho}) \otimes \ldots \otimes \mathscr{P}^{\mathrm{Z}}(\hat{\rho}) \prec
{\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}} \prec \prec \mathcal{I}, \label{3}
\end{equation}
where
\begin{equation}
{\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}}=
{\sup}_{\hat{\rho}} [\mathscr{P}^{\mathrm{X}}(\hat{\rho})\otimes \mathscr{P}^{\mathrm{Y}}(\hat{\rho}) \otimes \ldots \otimes
\mathscr{P}^{\mathrm{Z}}(\hat{\rho})] , \label{4}
\end{equation}
unless the measurement elements $\{ \hat{\mathrm{E}}^{\mathrm{X}}, \hat{\mathrm{E}}^{\mathrm{Y}}, \ldots, \hat{\mathrm{E}}^{\mathrm{Z}} \}$ have a common eigenstate in which case ${\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}} = \mathcal{I}$.
Note that ${\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}}$ is the majorization uncertainty bound for the measurement set considered.
\subsection{Majorization uncertainty bounds}
As stated above, the uncertainty bound ${\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}}$ given by Eq.~(\ref{4}) depends on the measurement set but not the state of the system. As the supremum of all possible measurement outcomes, it is the probability vector that sets the irreducible lower bound to uncertainty for the set. As such, it is the counterpart of the variance product or entropic lower bound in the traditional formulations of the uncertainty principle. Unlike the latter, however, ${\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}}$ is in general not realizable or even approachable by any state of the system. Therefore, there is in general no such thing as a ``minimum uncertainty state'' within the majorization framework. This is a consequence of the fact, mentioned earlier, that the infimum or supremum of a set of vectors need not be a member of the set. An obvious special case is the trivial example of zero uncertainty for which ${\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}}= \mathcal{I}$, signaling the existence of a common eigenstate for the measurement elements.
Intuitively, we expect that mixing states can only increase their uncertainty, as is known to be the case for scalar measures of uncertainty. In the case of majorization, this expectation is manifested in the property that the uncertainty bound ${\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}}$ is in general realized on the class of pure states. Indeed, for the two and three mutually unbiased observables on a two-dimensional Hilbert space considered in Paper I, we found that the required maxima for the components of ${\mu}^{sup}$ that serve to define the uncertainty bound are reached on pure states. More specifically, as outlined in \S IIA above and detailed in \S IIB, IVA and IVB of Paper I, the calculation of ${\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}}$ involves a component-wise determination of ${\mu}^{sup}$ by a series of maximizations over all possible density matrices $\hat{\rho}$. The property in question then guarantees that the maximization process can be limited to density matrices representing pure states only. We will prove this assertion in general, as well as for the class of separable states, in the Appendix.
\section{Entanglement Detection}
As discussed earlier, an entanglement detector can be effective in deciding whether a given density matrix is separable by providing a condition that is satisfied by all separable states and, if violated, signals entanglement. A large body of entanglement detection strategies, primarily based on scalar conditions, has been developed in recent years, including strategies based on variance and entropic type uncertainty relations; see Refs. \cite{sep,HOR,GIO,GAM}. Here we shall develop majorization conditions for entanglement detection based on the formulation of measurement uncertainty given in Paper I and outlined in \S II above. In particular, the entanglement condition given in Theorem 1 below is linear and amenable to experimental implementation, so it can be formulated as an entanglement witness. Nonlinear detectors developed below, on the other hand, rely on majorization-based uncertainty relations. We will also introduce the notion of subsystem disorder and a sharpened version of the Nielsen-Kempe \cite{nik} separability condition as a nonlinear entanglement detector.
As majorization relations, our results in general entail multiple inequalities whose number will grow with the uncertainty levels involved. As discussed in \S IIA, this is an important feature of the majorization relation as comparator of disorder, one that sets it apart from scalar conditions and provides for the refinement needed in comparing highly disordered vectors. As will be seen in \S IV, this property has the consequence that each majorization condition yields a scalar condition for the entire quasi-entropic class of uncertainty measures.
\subsection{Linear detectors}
Our detection strategy is based on the intuitive expectation that the measurement uncertainty bound for the class of separable states of a multipartite system must be majorized by the corresponding bound for all states, and that this hierarchy can be exploited for entanglement detection. This is the majorization rendition of the strategy often used to derive separability conditions \cite{GIO,GAM}. We will first consider the case of one generalized measurement resulting in a linear detection condition.
\textbf{Theorem 1.} Let the results of the generalized measurement ($\mathrm{X}, \{ \hat{\mathrm{E}}_{\alpha}^{X} \}$) on a multipartite state ${\hat{\rho}}^{ABC \dots F}$ of parties $(A, B, C, \dots, F)$ defined on a finite-dimensional Hilbert space be bounded by
\begin{equation}
{\mathscr{P}}_{sup}^{\mathrm{X}}={\sup}_{{\hat{\rho}}^{ABC \dots F}}{\mathscr{P}}^{\mathrm{X}}({\hat{\rho}}^{ABC \dots F}), \label{5}
\end{equation}
and in the case of a separable state ${\hat{\rho}}^{ABC \dots F}_{sep}$ by
\begin{equation}
{\mathscr{P}}_{sep;sup}^{\mathrm{X}}={\sup}_{{\hat{\rho}}^{ABC \dots F}_{sep}}{\mathscr{P}}^{\mathrm{X}}({\hat{\rho}}^{ABC \dots F}_{sep}). \label{6}
\end{equation}
Then, (i) the suprema in the foregoing pair of equations may be taken over the class of pure and pure, product states, respectively, and (ii) given an arbitrary state ${\hat{\sigma}}^{ABC \dots F}$, the condition ${\mathscr{P}}^{\mathrm{X}}({\hat{\sigma}}^{ABC \dots F}) \nprec {\mathscr{P}}_{sep;sup}^{\mathrm{X}}$ implies that ${\hat{\sigma}}^{ABC \dots F}$ is entangled \cite{noteent}.
Part (i) of Theorem 1 is the majorization version of the familiar result that uncertainty bounds are realized on pure states and is proved in the Appendix \cite{noteext}. Part (ii) is the statement that the bound in Eq.~(\ref{6}) is a necessary condition for separability and its violation signals the existence of entanglement. As noted above, Theorem 1 provides an operational method of entanglement detection and can be reformulated as a set of entanglement witnesses.
Clearly, the efficacy of Theorem 1 in detecting entanglement depends on the choice of the measurement ${\mathrm{X}}$ for a given quantum state. This suggests looking for the optimum measurement, in parallel with the optimization problem for entanglement witnesses \cite{LKCH}. Note that here the choice of the generalized measurement ${\mathrm{X}}$ affects both ${\mathscr{P}}^{\mathrm{X}}({\hat{\sigma}}^{ABC \dots F})$ and ${\mathscr{P}}_{sep;sup}^{\mathrm{X}}$, thus making the task of finding a fully optimized solution rather difficult. It is therefore fortunate that we can achieve a partial optimization on the basis of Theorem 3 of Paper I, the relevant content of which we can state as follows:
\textbf{Lemma.} Under the conditions of Theorem 1 and with ${\mathrm{X}}$ restricted to rank 1 measurements, we have
\begin{equation}
{\mathscr{P}}^{\mathrm{X}}({\hat{\sigma}}^{ABC \dots F}) \prec {\mathscr{P}}^{{\mathrm{X}}^{\star}}({\hat{\sigma}}^{ABC \dots F})={\lambda}({\hat{\sigma}}^{ABC \dots F}), \label{7}
\end{equation}
where ${\mathrm{X}}^{\star}$ is a maximal projective measurement whose elements are the set of rank 1 orthogonal projection operators onto the eigenvectors of ${\hat{\sigma}}^{ABC \dots F}$ and ${\lambda}({\hat{\sigma}}^{ABC \dots F})$ is the corresponding spectrum.
It should be noted here that while the choice of ${{\mathrm{X}}^{\star}}$ succeeds in optimizing ${\mathscr{P}}^{\mathrm{X}}({\hat{\sigma}}^{ABC \dots F})$ which appears in part (ii) of Theorem 1, its effect on ${\mathscr{P}}_{sep;sup}^{\mathrm{X}}$, the other object appearing therein, is unknown and may result in a poor overall choice. Moreover, any degeneracy in the spectrum of ${\hat{\sigma}}^{ABC \dots F}$ renders ${{\mathrm{X}}^{\star}}$ nonunique and subject to further optimization. These features will be at play in the example considered below where, the foregoing caveats notwithstanding, the resulting detector turns out to be fully optimal.
The example in question is the Werner state ${\hat{\rho}}^{wer}_{d}(q)$ for a bipartite system of two $d$-level parties $A$ and $B$ defined on a $d^2$-dimensional Hilbert space \cite{wer,pitr}
\begin{equation}
{\hat{\rho}}^{wer}_{d}(q)=\frac{1}{d^2}({1-q})\hat{{\mathbbm{1}}} + q \mid{\mathfrak{B}}_{1} \rangle \langle {\mathfrak{B}}_{1}\mid, \label{8}
\end{equation}
where
\begin{equation}
\mid{\mathfrak{B}}_{1} \rangle \ =\frac{1}{\sqrt{d}} {\sum}_{j=0}^{d-1} \mid A,j \rangle \otimes \mid B,j \rangle. \label{9}
\end{equation}
Here $\{ \mid A,j \rangle \}$ and $\{ \mid B,j \rangle \}$ are orthonormal bases for subsystems $A$ and $B$, respectively. Also, here and elsewhere we use the symbol $\hat{{\mathbbm{1}}}$ to denote the identity operator of dimension appropriate to the context. Note that $\mid{\mathfrak{B}}_{1} \rangle$ is the generalization of the two-qubit Bell state $(\mid 00 \rangle + \mid 11 \rangle )/\sqrt{2}$. As such, it is totally symmetric, as well as maximally entangled in the sense that its marginal states are maximally disordered and equal to $\hat{{\mathbbm{1}}}/d$. An important fact to be used in the following is that ${\hat{\rho}}^{wer}_{d}(q)$ is separable if and only if $q \leq (1+d)^{-1}$ \cite{pitr}.
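The stated properties of $\mid{\mathfrak{B}}_{1}\rangle$ are easy to verify numerically (our own sketch, assuming the computational bases $\{\mid A,j\rangle\}$ and $\{\mid B,j\rangle\}$):

```python
import numpy as np

def bell_state(d):
    """|B1> = (1/sqrt(d)) * sum_j |A,j>|B,j>, the generalized
    Bell state of Eq. (9)."""
    psi = np.zeros(d * d)
    psi[::d + 1] = 1.0 / np.sqrt(d)   # component |j,j> lives at index j*d + j
    return psi

def marginal_A(psi, d):
    """Reduced state of subsystem A, rho_A = Tr_B |psi><psi|."""
    m = psi.reshape(d, d)             # psi_{jk} = coefficient of |A,j>|B,k>
    return m @ m.conj().T
```

For every $d$, \texttt{marginal\_A(bell\_state(d), d)} returns $\hat{\mathbbm{1}}/d$, confirming that $\mid{\mathfrak{B}}_{1}\rangle$ is maximally entangled in the stated sense.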
In order to probe the separability of ${\hat{\rho}}^{wer}_{d}(q)$ by means of Theorem 1, we must first choose a measurement. We will use the above Lemma to guide our choice of the putative optimum measurement ${\mathrm{X}}^{\star}$. The Lemma requires that the measurement elements be equal to the orthogonal projections onto the eigenvectors of ${\hat{\rho}}^{wer}_{d}(q)$. However, a simple calculation shows that the spectrum of ${\hat{\rho}}^{wer}_{d}(q)$ is degenerate, consisting of a simple eigenvalue equal to $q+{d}^{-2}(1-q)$ and a $({d}^{2}-1)$-fold degenerate eigenvalue equal to ${d}^{-2}(1-q)$. A natural choice of basis for the degenerate subspace is a set of $d^2-1$ generalized Bell states which, together with $\mid{\mathfrak{B}}_{1} \rangle $ of Eq.~(\ref{9}), constitute the orthonormal eigenbasis ${\{\mid{\mathfrak{B}}_{\alpha} \rangle \}}_{\alpha=1}^{d^2}$ for ${\hat{\rho}}^{wer}_{d}(q)$. This generalized Bell basis is thus characterized by the fact that each $\mid{\mathfrak{B}}_{\alpha} \rangle$ is maximally entangled and possesses equal Schmidt coefficients.
The measurement elements of the optimal measurement ${\mathrm{X}}^{\star}$ are thus given by ${\hat{\mathrm{E}}}_{\alpha}^{{X}^{\star}}= \mid {\mathfrak{B}}_{\alpha} \rangle \langle {\mathfrak{B}}_{\alpha} \mid $, $\alpha=1, 2, \dots, d^2$. With ${\mathrm{X}}^{\star}$ so defined, we readily find
\begin{align}
{\mathscr{P}}^{{\mathrm{X}}^{\star}}[{\hat{\rho}}^{wer}_{d}(q)]=&[q+{d}^{-2}(1-q), {d}^{-2}(1-q), \nonumber \\
&{d}^{-2}(1-q), \ldots, {d}^{-2}(1-q)], \label{10}
\end{align}
which corresponds to the spectrum of ${\hat{\rho}}^{wer}_{d}(q)$ as expected. Note that the Lemma guarantees that any other rank 1 measurement in place of ${\mathrm{X}}^{\star}$ would produce a more disordered probability vector on the right-hand side of Eq.~(\ref{10}).
The next step is the calculation of ${\mathscr{P}}_{sep;sup}^{{\mathrm{X}}^{\star}}$ of Theorem 1, which is the least disordered probability vector that can result from the measurement of ${\mathrm{X}}^{\star}$ on a pure, product state of two $d$-level subsystems. As can be seen in Eq.~(\ref{A7}) of the Appendix, this calculation requires finding the maximum overlap of every measurement element $\mid {\mathfrak{B}}_{\alpha} \rangle \langle {\mathfrak{B}}_{\alpha} \mid $ with the class of pure, product states. This overlap is given by the square of the largest Schmidt coefficient of the respective Bell state $\mid {\mathfrak{B}}_{\alpha}\rangle$ \cite{MBetal}. Since all Schmidt coefficients are equal to $1/\sqrt{d}$ for every Bell state $\mid {\mathfrak{B}}_{\alpha}\rangle$, we can conclude that
\begin{equation}
{\mathscr{P}}_{sep;sup}^{{\mathrm{X}}^{\star}}=(1/d, 1/d, \ldots, 1/d,0,0, \ldots,0). \label{11}
\end{equation}
We are now in a position to apply the entanglement criterion of Theorem 1, which reads ${\mathscr{P}}^{\mathrm{{X}^{\star}}}[{\hat{\rho}}^{wer}_{d}(q)] \nprec {\mathscr{P}}_{sep;sup}^{\mathrm{{X}^{\star}}}$ in this instance \cite{rem}. Using the information in Eqs.~(\ref{10}) and (\ref{11}), we readily see that this criterion translates to the single inequality $q+{d}^{-2}(1-q) > 1/d$, or $q > 1/(1+d)$, which is the necessary and sufficient condition for the inseparability of the Werner state ${\hat{\rho}}^{wer}_{d}(q)$. Thus every entangled Werner state of two $d$-level systems is detected by the measurement ${\mathrm{X}}^{\star}$ defined above.
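The threshold just derived can be checked numerically; the sketch below (ours) builds $\mathscr{P}^{{\mathrm{X}}^{\star}}$ from Eq.~(\ref{10}) and the bound from Eq.~(\ref{11}), and tests the majorization criterion through partial sums:

```python
import numpy as np

def werner_detected(d, q, tol=1e-12):
    """Theorem 1 criterion for the d-level Werner state: True when the
    measured vector of Eq. (10) is NOT majorized by the bound of Eq. (11)."""
    p = np.full(d * d, (1 - q) / d**2)   # (d^2-1)-fold degenerate eigenvalue
    p[0] += q                            # simple eigenvalue q + (1-q)/d^2
    bound = np.concatenate((np.full(d, 1.0 / d), np.zeros(d * d - d)))
    # violation of any partial-sum inequality signals entanglement
    return bool(np.any(np.cumsum(np.sort(p)[::-1]) >
                       np.cumsum(np.sort(bound)[::-1]) + tol))
```

One finds that \texttt{werner\_detected(d, q)} is true exactly for $q > 1/(1+d)$, reproducing the necessary and sufficient condition quoted above.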
While the efficacy of the generalized Bell states for detecting entanglement in Werner states is well established \cite{sep,HOR,GIO,GAM}, we note its natural emergence from the general majorization results of this subsection. It should also be noted that Theorem 1 and the foregoing analysis can be extended to multipartite states.
\subsection{Nonlinear detectors}
As an example of nonlinear entanglement detectors, we will derive majorization conditions for separability based on uncertainty relations. The method we will follow relies on the degeneracy properties of projective measurements on a single system versus the products of such measurements on a multipartite system \cite{GIO, GAM}. For example, with measurement ${\mathrm{X}}^{A}$ on ${\hat{\rho}}^{A}$ and ${\mathrm{X}}^{B}$ on ${\hat{\rho}}^{B}$ and each having a simple spectrum, the product projective measurement ${\mathrm{X}}^{A} \otimes {\mathrm{X}}^{B}$ on the bipartite system ${\hat{\rho}}^{AB}$ may be degenerate. A specific case is ${{\hat{\sigma}}_{x}}^{A}\otimes {{\hat{\sigma}}_{x}}^{B}$, whose spectrum $(+1/4,+1/4,-1/4,-1/4)$ is doubly degenerate, versus ${{\hat{\sigma}}_{x}}^{A}$ or ${{\hat{\sigma}}_{x}}^{B}$ each of which has the simple spectrum $(+1/2,-1/2)$. Note that here we follow common practice in defining ${{\hat{\sigma}}_{x}}^{A}\otimes {{\hat{\sigma}}_{x}}^{B}$ to be the projective measurement whose two elements project into the subspaces corresponding to the eigenvalues $+1/4$ and $-1/4$ (and not the rank 1 projective measurement that resolves the degeneracies by virtue of using all four product elements). Consequently, it may happen that ${\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}$ and ${\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B}$ have a common eigenstate while ${\mathrm{X}}^{A}$ and ${\mathrm{Y}}^{A}$ do not, and that the said common eigenstate is an entangled pure state. In such cases, the corresponding probability vectors will reflect the stated differences and may be capable of detecting entanglement as in the case of linear detectors.
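This degeneracy structure, and the existence of an entangled common eigenstate, can be verified directly (our sketch, with spin components carrying eigenvalues $\pm 1/2$ as in the text):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2    # spin-1/2 x component, spectrum (+1/2, -1/2)
sz = np.array([[1, 0], [0, -1]]) / 2   # spin-1/2 z component

XX = np.kron(sx, sx)                   # product observable on two qubits
ZZ = np.kron(sz, sz)

# The Bell state (|00> + |11>)/sqrt(2): a simultaneous eigenstate of XX and ZZ
# (eigenvalue +1/4 for each), although sx and sz share no eigenstate.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
```

Both \texttt{XX} and \texttt{ZZ} have the doubly degenerate spectrum $(+1/4,+1/4,-1/4,-1/4)$, while each single-qubit factor has a simple spectrum.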
Consider two product projective measurements ${\mathrm{X}}^{A} \otimes {\mathrm{X}}^{B}$ and ${\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B}$ performed on the bipartite state ${\hat{\rho}}^{AB}$. The measurement results are then given by
\begin{align}
{\mathscr{P}}^{{\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}}({\hat{\rho}}^{AB})= \textrm{tr}&[{\hat{\Pi}}^{{\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}} {\hat{\rho}}^{AB} ], \nonumber \\ {\mathscr{P}}^{{\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B}}({\hat{\rho}}^{AB})= \textrm{tr}&[{\hat{\Pi}}^{{\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B}} {\hat{\rho}}^{AB} ], \nonumber \\
\mathscr{P}^{ ({\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}) \oplus ({\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B})}({\hat{\rho}}^{AB}) &=\mathscr{P}^{ {\mathrm{X}}^{A} \otimes {\mathrm{X}}^{B} }({\hat{\rho}}^{AB})
\nonumber \\
\otimes \mathscr{P}^{{\mathrm{Y}}^{A} \otimes {\mathrm{Y}}^{B} }({\hat{\rho}}^{AB}), \label{12}
\end{align}
where ${\hat{\Pi}}^{{\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}}$ and ${\hat{\Pi}}^{{\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B}}$, both projection operators but not necessarily rank 1, represent the measurement elements of the two product measurements. Note the majorization definition of joint uncertainty, given earlier, at work on the last line of Eq.~(\ref{12}).
Under these conditions, we have the following general result.
\textbf{Theorem 2}. The results of measurements ${\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}$ and ${\mathrm{Y}}^{A}\otimes
{\mathrm{Y}}^{B}$ on a separable state ${\hat{\rho}}^{AB}_{sep}$ satisfy
\begin{equation}
\mathscr{P}^{ ({\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}) \oplus ({\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B})}({\hat{\rho}}^{AB}_{sep}) \prec
\mathscr{P}_{sup}^{ {\mathrm{X}}\oplus {\mathrm{Y}}}, \label{13}
\end{equation}
where $\mathscr{P}_{sup}^{ {\mathrm{X}}\oplus {\mathrm{Y}}}$ is defined in Eq.~(\ref{4}), hence the condition $\mathscr{P}^{ ({\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}) \oplus ({\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B})}({\hat{\sigma}}^{AB}) \nprec \mathscr{P}_{sup}^{ {\mathrm{X}}\oplus {\mathrm{Y}}}$ implies that the state ${\hat{\sigma}}^{AB}$ is entangled.
To establish Theorem 2, consider product states of the form ${\hat{\rho}}^{A} \otimes {\hat{\rho}}^{B}$. Then Lemma 1 of Ref.~\cite{GAM} asserts that
\begin{align}
\mathscr{P}^{ {\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B} }({\hat{\rho}}^{A} \otimes {\hat{\rho}}^{B}) &\prec \mathscr{P}^{ {\mathrm{X}}^{A}}({\hat{\rho}}^{A}), \nonumber \\
\mathscr{P}^{ {\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B} }({\hat{\rho}}^{A} \otimes {\hat{\rho}}^{B}) &\prec \mathscr{P}^{ {\mathrm{Y}}^{A}}({\hat{\rho}}^{A}). \label{14}
\end{align}
Using these relations and Eq.~(\ref{12}), we find \cite{crs}
\begin{equation}
\mathscr{P}^{ ({\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}) \oplus ({\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B})}({\hat{\rho}}^{A} \otimes {\hat{\rho}}^{B}) \prec
\mathscr{P}^{ {\mathrm{X}}^{A}}({\hat{\rho}}^{A})\otimes \mathscr{P}^{ {\mathrm{Y}}^{A}}({\hat{\rho}}^{A}). \label{15}
\end{equation}
Since the right-hand side of Eq.~(\ref{15}) is by definition majorized by $\mathscr{P}_{sup}^{ {\mathrm{X}}\oplus
{\mathrm{Y}}}$, we conclude that
\begin{equation}
\mathscr{P}^{ ({\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}) \oplus ({\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B})}({\hat{\rho}}^{A} \otimes {\hat{\rho}}^{B}) \prec \mathscr{P}_{sup}^{ {\mathrm{X}}\oplus {\mathrm{Y}}}. \label{16}
\end{equation}
At this point we note that $\mathscr{P}^{ ({\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B}) \oplus ({\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B})}_{sep;sup}$, which by definition majorizes all probability vectors that can appear on the left-hand side of Eq.~(\ref{13}), can be found among pure, product states \cite{noteext}. This assertion is established in Eq.~(\ref{A8}) et seq. of the Appendix. Consequently, the product state ${\hat{\rho}}^{A} \otimes {\hat{\rho}}^{B}$ in Eq.~(\ref{16}) may be replaced by any separable state ${\hat{\rho}}^{AB}_{sep}$, thereby establishing Eq.~(\ref{13}) and Theorem 2.
We note in passing that, because the measurements considered in Theorem 2 are in general not rank 1, the Lemma which we used earlier for detector optimization cannot be applied here.
To illustrate Theorem 2, we will consider the case of three mutually unbiased observables measured on bipartite states of two-level systems, as in \S IVB of Paper I. In effect, this amounts to measuring products of the three spin components of a pair of spin-1/2 systems (or qubits). For this case, Eq.~(\ref{16}) reads
\begin{equation}
\mathscr{P}^{ ({{\sigma}_{x}}^{A}\otimes {{\sigma}_{x}}^{B}) \oplus
({{\sigma}_{y}}^{A}\otimes {{\sigma}_{y}}^{B})\oplus ({{\sigma}_{z}}^{A}\otimes {{\sigma}_{z}}^{B})}({\hat{\rho}}^{AB}_{sep})\prec \mathscr{P}_{sup}^{
{\sigma}_{x}\oplus {\sigma}_{y}\oplus {\sigma}_{z}}, \label{17}
\end{equation}
where ${\hat{\rho}}^{AB}_{sep}$ is any separable state of two qubits. Thus a violation of this relation by a two-qubit state implies that it is entangled.
For the set of states ${\hat{\rho}}^{AB}$ to be probed by Theorem 2, we will consider the Werner family of two-qubit states
\begin{equation}
{\hat{\rho}}^{wer}(q)=\frac{1}{4}({1-q}){\mathbbm{1}}+ q \mid{\mathfrak{B}}_{1} \rangle \langle {\mathfrak{B}}_{1} \mid, \label{18}
\end{equation}
which is known to be entangled if and only if $q >1/3$. Here $0 \leq q \leq 1$ and $\mid{\mathfrak{B}}_{1} \rangle =(\mid 00 \rangle + \mid 11 \rangle )/\sqrt{2}$ is the totally symmetric Bell state of Eq.~(\ref{9}) for the present case.
A calculation of the probability vector for this measurement gives $(1 \pm q)(1 \pm q)(1 \pm q)/8$ for the $8$ components of $\mathscr{P}^{({{\sigma}_{x}}^{A}\otimes {{\sigma}_{x}}^{B}) \oplus ({{\sigma}_{y}}^{A}\otimes {{\sigma}_{y}}^{B})\oplus ({{\sigma}_{z}}^{A}\otimes {{\sigma}_{z}}^{B})}[{\hat{\rho}}^{wer}(q)]$. Arranged in nonincreasing order, these are ${(1+q)}^{3}/8$, then ${(1+q)}^{2}{(1-q)}/8$ and ${(1+q)}{(1-q)}^{2}/8$, each three-fold degenerate, and finally ${(1-q)}^{3}/8$. The supremum $\mathscr{P}_{sup}^{{\sigma}_{x}\oplus {\sigma}_{y}\oplus {\sigma}_{z}}$, on the other hand, was calculated in \S IVB of Paper I, where it was found to be
\begin{align}
\mathscr{P}_{sup}^{{\sigma}_{x}\oplus {\sigma}_{y}\oplus {\sigma}_{z}}=\frac{1}{8}\big[{(1+1/\sqrt{3})}^{3},\, 2{(1+1/\sqrt{2})}^{2} \nonumber \\
-{(1+1/\sqrt{3})}^{3},\, 4-{(1+1/\sqrt{2})}^{2},\, 4-{(1+1/\sqrt{2})}^{2}, \nonumber \\
0,\,0,\,0,\,0 \big]. \label{19}
\end{align}
According to Theorem 2, ${\hat{\rho}}^{wer}(q)$ violates the separability condition of Eq.~(\ref{13}) if $\mathscr{P}_{sup}^{
{\sigma}_{x}\oplus {\sigma}_{y}\oplus {\sigma}_{z}}$ fails to majorize $\mathscr{P}^{
({{\sigma}_{x}}^{A}\otimes {{\sigma}_{x}}^{B}) \oplus ({{\sigma}_{y}}^{A}\otimes {{\sigma}_{y}}^{B})\oplus
({{\sigma}_{z}}^{A}\otimes {{\sigma}_{z}}^{B})}[{\hat{\rho}}^{wer}(q)]$, which occurs for $q > 1/\sqrt{3}=0.577$ in this example. The entanglement in ${\hat{\rho}}^{wer}(q)$ is thus detected by Theorem 2 for $q > 0.577$, and missed for the range $0.333 < q \leq 0.577$. By comparison, a similar method based on the Shannon entropy in Ref. \cite{GIO} detects entanglement in ${\hat{\rho}}^{wer}(q)$ for $q >0.65$. Reference \cite{GAM}, using the same method but relying on the family of Tsallis entropies, matches our condition $q > 0.577$ by means of a numerical calculation that searches over the Tsallis family for optimum performance.
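The threshold $q=1/\sqrt{3}$ can be verified numerically. The following sketch is our own illustration, not part of the text (the helper names are ours): it builds the measured probability vector for the Werner state and the supremum vector of Eq.~(\ref{19}), then tests the majorization relation on either side of the threshold.

```python
import numpy as np
from itertools import product

def majorizes(mu, lam):
    # True if lam ≺ mu: partial sums of the sorted components of mu dominate
    mu, lam = np.sort(mu)[::-1], np.sort(lam)[::-1]
    return np.all(np.cumsum(mu) >= np.cumsum(lam) - 1e-12)

def werner_probs(q):
    # the 8 components (1±q)(1±q)(1±q)/8 of the measured probability vector
    return np.array([(1 + s1*q) * (1 + s2*q) * (1 + s3*q) / 8
                     for s1, s2, s3 in product([1, -1], repeat=3)])

# supremum vector of Eq. (19)
a = (1 + 1/np.sqrt(3))**3
b = (1 + 1/np.sqrt(2))**2
p_sup = np.array([a, 2*b - a, 4 - b, 4 - b, 0, 0, 0, 0]) / 8

print(majorizes(p_sup, werner_probs(0.5)))   # True: below threshold, no violation
print(majorizes(p_sup, werner_probs(0.6)))   # False: above 1/sqrt(3) ≈ 0.577, entanglement detected
```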
We end this subsection by a brief description of a sharpened version of the Nielsen-Kempe theorem as a nonlinear entanglement detector \cite{HP3}. This celebrated theorem is an elegant separability condition based on majorization relations. It asserts that the spectrum of a separable bipartite state is majorized by each of its marginal spectra. By extension, this theorem guarantees that the spectra of all possible subsystems of a separable multipartite system must majorize its global spectrum \cite{nik}. Our version of this theorem employs the notion of the infimum of a set of probability vectors discussed in \S IIA, and is based on the observation that if a probability vector is majorized by each member of a set of probability vectors, then it must also be majorized by the infimum of that set.
Consider the multipartite state ${\hat{\rho}}^{ABC \dots F}$ together with all its subsystem states ${\{ {\hat{\rho}}^{{X}_{a}} \}}_{a=1}^{f}$ and subsystem spectra ${\{ {\lambda}^{{X}_{a}} \}}_{a=1}^{f}$, where ${\{ {{X}_{a}} \}}_{a=1}^{f}$ represent all proper subsets of the set of parties $(A, B, C, \dots, F)$. Then the infimum of the subsystem spectra, ${\Lambda}^{ABC \dots F}={\inf}[{\lambda}^{{X}_{1}},{\lambda}^{{X}_{2}}, \ldots, {\lambda}^{{X}_{f}}]$, embodies the \textit{subsystem disorder} of the state ${\hat{\rho}}^{ABC \dots F}$, as expressed in the following theorem.
\textbf{Theorem 3} (Nielsen-Kempe). A multipartite state is entangled if its system disorder fails to exceed its subsystem disorder, i.e., ${\hat{\rho}}^{ABC \dots F}$ is entangled if ${\lambda}^{ABC \dots F}\nprec {\Lambda}^{ABC \dots F}$.
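As a sanity check of Theorem 3, consider the two-qubit Werner state of Eq.~(\ref{18}): its global spectrum is $\{(1+3q)/4$ and $(1-q)/4$ threefold$\}$, while both marginals are maximally mixed, so the majorization condition fails exactly for $q>1/3$. The following is our own numerical sketch, not from the text (the helper names are ours).

```python
import numpy as np

def majorized_by(lam, mu):
    # True if lam ≺ mu; shorter vectors are padded with zeros
    n = max(len(lam), len(mu))
    lam = np.sort(np.pad(lam, (0, n - len(lam))))[::-1]
    mu = np.sort(np.pad(mu, (0, n - len(mu))))[::-1]
    return np.all(np.cumsum(lam) <= np.cumsum(mu) + 1e-12)

def werner_spectrum(q):
    # eigenvalues of (1-q)/4 * identity + q |B1><B1|
    return np.array([(1 + 3*q) / 4] + 3 * [(1 - q) / 4])

marginal = np.array([0.5, 0.5])   # both reduced states are maximally mixed,
                                  # so the infimum of the marginal spectra is (1/2, 1/2)
for q in (0.2, 0.5):
    print(q, majorized_by(werner_spectrum(q), marginal))
# q = 0.2: True (test inconclusive); q = 0.5: False, so the state is detected as entangled
```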
\section{Quasi-Entropic Detectors}
The majorization based theorems of the previous section have direct corollaries that yield scalar entanglement detectors for the entire class of \textit{Schur-concave} measures. A Schur-concave function $G$ is defined by the property that ${\lambda}^{1} \prec {\lambda}^{2}$ implies $G({\lambda}^{1}) \geq G({\lambda}^{2})$ \cite{ftn1}. Equivalently, Schur-concave functions are characterized by being monotonic with respect to the majorization relation. They include all functions $G(\cdot)$ that are concave and symmetric with respect to the arguments, a subset which we have defined as quasi-entropic. An important subclass of quasi-entropic functions is obtained if we restrict $G$ to have the following trace structure:
\begin{equation}
G(\lambda)=\mathrm{tr}[g(\lambda)]= {\sum}_{\alpha} g({\lambda}_{\alpha}), \label{20}
\end{equation}
where $g(\cdot)$ is a concave function of a single variable and $\lambda$ is treated as a diagonal matrix for the purpose of calculating the trace. Note that $G(\cdot)$ as constructed in Eq.~(\ref{20}) is manifestly symmetric and, as a sum of concave functions, it is also concave. We shall refer to this class of functions, which include the Shannon and Tsallis entropies, as \textit{trace type} quasi-entropic. An important fact regarding this class of measures is that if, for a given pair of probability vectors $\lambda$ and $\mu$, we have $G({\lambda})>G({\mu})$ for every trace type quasi-entropic measure $G$, then $\lambda \prec \mu$.
The standard entropic measure of uncertainty \cite{DEU}, which is based on the Shannon entropy function, is trace type quasi-entropic and corresponds to the choice $g(x)=H(x)=-x \ln(x)$ in Eq.~(\ref{20}). An example of an information theoretically relevant measure that is quasi-entropic but not trace type is the R\'{e}nyi subfamily of entropies of order less than one \cite{ftn2}. Accordingly, although the scalar entanglement detectors that follow from the theorems of \S III actually hold for the entire class of Schur-concave measures, we will continue to focus on the quasi-entropic class as the most suitable for information theoretical applications.
Scalar entanglement detectors based on Shannon, Tsallis, and R\'{e}nyi entropies are already well known, albeit with detection bounds that may differ from those reported here \cite{GIO,GAM,sep}. Our primary purpose here is to emphasize the natural and categorical manner in which majorization based entanglement detectors yield entire classes of scalar detectors.
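The Schur-concavity that drives these scalar detectors is easy to check numerically. The following minimal sketch of ours (not from the text) uses two trace-type choices of $g$ as in Eq.~(\ref{20}) and verifies monotonicity on a majorized pair of probability vectors.

```python
import numpy as np

def majorizes(mu, lam):
    # True if lam ≺ mu (partial sums of the sorted components of mu dominate)
    mu, lam = np.sort(mu)[::-1], np.sort(lam)[::-1]
    return np.all(np.cumsum(mu) >= np.cumsum(lam) - 1e-12)

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))          # g(x) = -x ln x

def tsallis(p, r):
    return np.sum(p - p**r) / (r - 1)      # g(x) = (x - x^r)/(r - 1)

lam1 = np.array([0.4, 0.3, 0.2, 0.1])      # more disordered
lam2 = np.array([0.7, 0.2, 0.1, 0.0])      # less disordered: lam1 ≺ lam2

print(majorizes(lam2, lam1))                  # True
print(shannon(lam1) >= shannon(lam2))         # True: Schur-concavity
print(tsallis(lam1, 2) >= tsallis(lam2, 2))   # True
```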
With $G(\cdot)$ a quasi-entropic function, the aforementioned monotonicity property implies the following corollaries to Theorems 1-3.\\
\noindent
\textbf{Corollary 1}. Under the conditions of Theorem 1, the multipartite state ${\hat{\sigma}}^{ABC \dots F}$ is entangled if
\begin{equation}
G[{\mathscr{P}}^{\mathrm{X}}({\hat{\sigma}}^{ABC \dots F})] < G[{\mathscr{P}}_{sep;sup}^{\mathrm{X}}]. \label{21}
\end{equation}
\textbf{Corollary 2}. Under the conditions of Theorem 2, the bipartite state ${\hat{\sigma}}^{AB}$ is entangled if
\begin{equation}
G[\mathscr{P}^{ ({\mathrm{X}}^{A}\otimes {\mathrm{X}}^{B})}({\hat{\sigma}}^{AB})]+ G[\mathscr{P}^{ ({\mathrm{Y}}^{A}\otimes {\mathrm{Y}}^{B})} ({\hat{\sigma}}^{AB})] < G[\mathscr{P}_{sup}^{ {\mathrm{X}}\oplus {\mathrm{Y}}}]. \label{22}
\end{equation}
\textbf{Corollary 3}. Under the conditions of Theorem 3, the multipartite state ${\hat{\sigma}}^{ABC \dots F}$ is entangled if
\begin{equation}
G[{\lambda}^{ABC \dots F}] < G[{\Lambda}^{ABC \dots F}]. \label{23}
\end{equation}
As an application of the above corollaries, we will consider the entanglement detection threshold for the bipartite Werner state considered in Eq.~(\ref{8}) et seq. Using Corollary 1 together with the Tsallis entropy function, we find that ${\hat{\rho}}^{wer}_{d}(q)$ is entangled if
\begin{equation}
{S}^{tsa}_{r}\big[ {\mathscr{P}}^{{\mathrm{X}}^{\star}}[{\hat{\rho}}^{wer}_{d}(q)] \big ] < {S}^{tsa}_{r}\big[ {\mathscr{P}}_{sep;sup}^{{\mathrm{X}}^{\star}} \big ], \label{24}
\end{equation}
where ${S}^{tsa}_{r}(\cdot)$ is the Tsallis entropy of order $r$, and the two probability vectors appearing in Eq.~(\ref{24}) are those in Eqs.~(\ref{10}) and (\ref{11}). The Tsallis entropy function ${S}^{tsa}_{r}(\cdot)$ corresponds to the choice $g(\lambda)= (\lambda-{\lambda}^{r})/(r-1)$, $1<r<\infty$, in Eq.~(\ref{20}). The Tsallis entropy of order 1 is defined by continuity and equals the Shannon entropy \cite{tsa}.
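The continuity of the Tsallis family at $r=1$ can be checked directly; in this sketch of ours (not part of the text), the trace-type form with $g(\lambda)=(\lambda-\lambda^r)/(r-1)$ approaches the Shannon entropy as $r\to 1$.

```python
import numpy as np

def tsallis(p, r):
    # trace-type form, g(x) = (x - x^r)/(r - 1)
    return np.sum(p - p**r) / (r - 1)

def shannon(p):
    return -np.sum(p * np.log(p))

p = np.array([0.5, 0.3, 0.2])
for r in (1.1, 1.01, 1.001):
    print(r, tsallis(p, r) - shannon(p))   # difference shrinks as r -> 1
```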
A straightforward calculation turns Eq.~(\ref{24}) into
\begin{align}
&\frac{1-{[q+(1-q)/{d}^{2}]}^{r}-({d}^{2}-1){[(1-q)/{d}^{2}]}^{r}}{r-1} \nonumber \\
< &\frac{1-{d}^{1-r}}{r-1}. \label{25}
\end{align}
For each value of $r$, Eq.~(\ref{25}) yields an entanglement detection threshold value for $q$ which depends on $d$, and for a fixed value of the latter, decreases with increasing $r$. This corresponds to improving detection performance with increasing $r$, a behavior which is known for $d$ equal to 2 and 3 \cite{GAM}. Figure \ref{fig1} shows a plot of the threshold value $q$ versus the dimension $d$ for four different values of the order $r$, respectively increasing from top to bottom. The improvement in entanglement detection for fixed $d$ and increasing $r$, as well as fixed $r$ and increasing $d$, is clearly in evidence in Fig.~\ref{fig1}.
\begin{figure}
\caption{Entanglement detection threshold $q$ for a bipartite Werner state versus dimension $d$ of each party, using Corollary 1 and the Tsallis entropy. The order of the Tsallis entropy equals, from top to bottom respectively, 1 (red), 2 (magenta), 5 (turquoise), and $\infty$ (blue). The top graph corresponds to the Shannon entropy and the bottom one reproduces the known entanglement threshold for the state.}
\label{fig1}
\end{figure}
The weakest performance obtains for $r=1$, top graph in Fig.~\ref{fig1}, and can be found by taking the limit of Eq.~(\ref{25}) as $r \rightarrow 1$. The result is $H[q+(1-q)/{d}^{2}]+ ({d}^{2}-1)H[(1-q)/{d}^{2}] < \ln(d)$, which corresponds to the choice of Shannon entropy function $H(\cdot)$ together with the set of generalized Bell states for the measurement of $d \otimes d$ Werner states \cite{ftn3}. The best performance obtains for $r \rightarrow \infty$ in Eq.~(\ref{25}), which simplifies to $q > 1/(1+d)$, the known entanglement threshold for the two-qudit Werner state. This limiting case corresponds to the bottom graph in Fig.~\ref{fig1}.
The foregoing discussion of how well Corollary 1 does in detecting entanglement notwithstanding, the aim of this section is not to promote quasi-entropic detectors given in Corollaries 1-3 per se, but rather to emphasize the reach of the majorization results of \S III whence they are inherited. In addition, the above example highlights the fact that, while the entanglement detection bounds given in Corollaries 1-3 are optimal for Schur-concave functions as a class, their performance need not be optimal for individual members of the class, a fact that was emphasized in paper I as well.
\section{Concluding Remarks}
In this paper we have developed entanglement detection criteria within the majorization formulation of uncertainty presented in paper I. These are inherently stronger than similar scalar conditions in the sense that they are equivalent to and imply infinite classes of such scalar criteria. Majorization relations in effect envelope the set of scalar measures of disorder based on Schur-concave functions, a huge set that includes the information-theoretically relevant class of quasi-entropic measures as a subset. This enveloping property elucidates the exceptional effectiveness of majorization relations in dealing with problems of quantum information theory.
Entanglement detection criteria that can be experimentally implemented are especially useful in studies involving entangled microsystems, hence the importance of linear detectors and entanglement witnesses. As already mentioned, Theorem 1 can be formulated as an entanglement witness inasmuch as it is a linear majorization condition on measurement results. What may not be so obvious is that in principle the spectrum of a state is also experimentally accessible on the basis of the Lemma of \S IIIA. That Lemma states that the spectrum of a quantum state is the supremum of probability vectors that may be deduced from the counter statistics of all possible rank 1 generalized measurements performed on the state. In principle, this provides an experimental method for estimating the spectrum, although a high precision determination may require a search over a large number of possible measurements.
There is already an extensive literature on entanglement detection with many important results. With the notable exception of the Nielsen-Kempe theorem, these results are not formulated within the majorization framework. The present contribution in part serves to demonstrate the effectiveness of the majorization formulation of some of the existing methods. One can reasonably expect a sharpening of the results of some of the other entanglement detection strategies when reformulated within the majorization framework. It is also not unreasonable to expect useful, albeit computationally intractable, necessary and sufficient separability criteria to emerge from such formulations.
\begin{acknowledgments}
This work was in part supported by a grant from California State University, Sacramento.
\end{acknowledgments}
\appendix*
\section{Majorization Bounds Are Attained on Pure States}
Our task here is to establish that majorization uncertainty bounds can be reached on pure states. We will prove this for two important cases, first for the uncertainty bound on all states of the system, and second on the class of separable states. To avoid unnecessary clutter and technical distractions, we will limit the number of generalized measurements to three and the underlying Hilbert spaces to finite dimensions in the following treatment.
Suppose generalized measurements ($\mathrm{X}, \{ \hat{\mathrm{E}}_{\alpha}^{X} \}$), ($\mathrm{Y}, \{ \hat{\mathrm{E}}_{\beta}^{Y} \}$), and ($\mathrm{Z}, \{ \hat{\mathrm{E}}_{\gamma}^{Z} \}$) are performed on a state $\hat{\rho}$ that is supported on a finite-dimensional Hilbert space. What we will show below is that the search for the components of ${\mu}^{sup}$ which serve to define ${\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \mathrm{Z}}$ can be limited to pure states.
By definition,
\begin{equation}
{\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \mathrm{Z}}={\sup}_{\hat{\rho}}[{\mathscr{P}}^{\mathrm{X} \oplus \mathrm{Y} \oplus \mathrm{Z}}(\hat{\rho})], \label{A1}
\end{equation}
where
\begin{equation}
{\mathscr{P}}_{\alpha \beta \gamma}^{\mathrm{X} \oplus \mathrm{Y} \oplus \mathrm{Z}}(\hat{\rho})=\textrm{tr}({\hat{\mathrm{E}}}_{\alpha}^{X} \hat{\rho}) \textrm{tr}({\hat{\mathrm{E}}}_{\beta}^{Y} \hat{\rho}) \textrm{tr}({\hat{\mathrm{E}}}_{\gamma}^{Z} \hat{\rho}). \label{A2}
\end{equation}
We recall from \S IIA above and \S IIB, IVA and IVB of Paper I that the supremum defined by Eqs.~(\ref{A1}) and (\ref{A2}) is found by calculating ${\mu}^{sup}_{i}$ for $i=1,2,\ldots$ (${\mu}^{sup}_{0}=0$ by definition). Furthermore, ${\mu}^{sup}_{1}$ is the maximum value of a single component of ${\mathscr{P}}^{\mathrm{X} \oplus \mathrm{Y} \oplus \mathrm{Z}}(\hat{\rho})$ as $\hat{\rho}$ is varied, ${\mu}^{sup}_{2}$ is the maximum value of the sum of two (different) components of ${\mathscr{P}}^{\mathrm{X} \oplus \mathrm{Y} \oplus \mathrm{Z}}(\hat{\rho})$ as $\hat{\rho}$ is varied, and ${\mu}^{sup}_{i}$ is the maximum value of the sum of $i$ (different) components of ${\mathscr{P}}^{\mathrm{X} \oplus \mathrm{Y} \oplus \mathrm{Z}}(\hat{\rho})$ as $\hat{\rho}$ is varied. It is thus sufficient to prove that the said maximum for ${\mu}^{sup}_{i}$ is in fact realized on a pure state.
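As a concrete illustration of this pure-state reduction (our own numerical sketch, not part of the proof), the first component ${\mu}^{sup}_{1}$ for the qubit measurements $({\sigma}_{x},{\sigma}_{y},{\sigma}_{z})$ can be estimated by a direct search over pure states: for a pure qubit state with Bloch vector $(a,b,c)$, the `$+1$' outcome probabilities are $(1+a)/2$, $(1+b)/2$, $(1+c)/2$, and maximizing their product over the unit sphere reproduces the leading entry ${(1+1/\sqrt{3})}^{3}/8$ of Eq.~(\ref{19}).

```python
import numpy as np

rng = np.random.default_rng(1)
# sample pure qubit states via Bloch vectors uniformly distributed on the unit sphere
v = rng.standard_normal((200000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
# product of the three '+1' outcome probabilities (1+a)/2, (1+b)/2, (1+c)/2
probs = np.prod((1 + v) / 2, axis=1)
best = probs.max()
exact = (1 + 1 / np.sqrt(3))**3 / 8    # leading entry of Eq. (19)
print(best, exact)                      # best approaches exact from below
```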
Let $\hat{\rho}={\sum}_{a}\, {\lambda}_{a}(\hat{\rho}) \mid {\psi}_{a} \rangle \langle {\psi}_{a} \mid$ be the principal ensemble representation for $\hat{\rho}$. Then the above maximization process defines ${\mu}^{sup}_{i}$ as the maximum of
\begin{align}
{\sum}_{a,b,c}{\lambda}_{a}(\hat{\rho}){\lambda}_{b}(\hat{\rho}){\lambda}_{c}(\hat{\rho}) \big[ \langle {\psi}_{a} \mid {\hat{\mathrm{E}}}_{{\alpha}_{1}}^{X}\mid {\psi}_{a} \rangle \langle {\psi}_{b} \mid {\hat{\mathrm{E}}}_{{\beta}_{1}}^{Y}\mid {\psi}_{b} \rangle \nonumber \\ \times \langle {\psi}_{c} \mid {\hat{\mathrm{E}}}_{{\gamma}_{1}}^{Z}\mid {\psi}_{c} \rangle
+ \langle {\psi}_{a} \mid {\hat{\mathrm{E}}}_{{\alpha}_{2}}^{X}\mid {\psi}_{a} \rangle \langle {\psi}_{b} \mid {\hat{\mathrm{E}}}_{{\beta}_{2}}^{Y}\mid {\psi}_{b} \rangle \nonumber \\ \times \langle {\psi}_{c} \mid {\hat{\mathrm{E}}}_{{\gamma}_{2}}^{Z}\mid {\psi}_{c} \rangle
+ \ldots \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \nonumber \\
+ \langle {\psi}_{a} \mid {\hat{\mathrm{E}}}_{{\alpha}_{i}}^{X}\mid {\psi}_{a} \rangle \langle {\psi}_{b} \mid {\hat{\mathrm{E}}}_{{\beta}_{i}}^{Y}\mid {\psi}_{b} \rangle \langle {\psi}_{c} \mid {\hat{\mathrm{E}}}_{{\gamma}_{i}}^{Z}\mid {\psi}_{c} \rangle \big] \,\,\,\,\,\, \nonumber \\
+\,\, {\xi}_{i} \big(1-{\sum}_{a} {\lambda}_{a}(\hat{\rho})\big)+{\sum}_{a,b} {\eta}_{i,ab} \big({\delta}_{ab}-\langle {\psi}_{b}\mid {\psi}_{a} \rangle\big), \label{A3}
\end{align}
as the state vectors $\{\mid {\psi}_{a}\rangle \}$ which are the eigenstates of $\hat{\rho}$, the probabilities $\{ {\lambda}_{a}(\hat{\rho}) \}$ which constitute the spectrum of $\hat{\rho}$, and the Lagrange multipliers ${\xi}_{i}$ and ${\eta}_{i,ba}$ which serve to enforce the structure of ${\sum}_{a}\, {\lambda}_{a}(\hat{\rho}) \mid {\psi}_{a} \rangle \langle {\psi}_{a} \mid$ as an orthogonal ensemble are varied. The symbol ${\delta}_{ab}$ in the above expression is the Kronecker delta, and the three sets of indices $\{ {({\alpha}, {\beta}, {\gamma})}_{1}, {({\alpha}, {\beta}, {\gamma})}_{2}, \dots, {({\alpha}, {\beta}, {\gamma})}_{i} \}$ are understood to be pairwise distinct, i.e., the indices ${\alpha}_{1}, {\alpha}_{2}, \ldots, {\alpha}_{i}$ are all different, and similarly for the other two sets. Note also that since $\langle {\psi}_{a}\mid {\psi}_{b} \rangle$ is in general a Hermitian matrix (in the indices $a$ and $b$), the Lagrange multipliers ${\eta}_{i,ba}$ may also be taken to constitute a Hermitian matrix.
A variation with respect to $\langle {\psi}_{a} \mid$ gives
\begin{equation}
{\lambda}_{a}(\hat{\rho}) \big[ {\hat{\mathcal{E}}}_{i}^{X}+ {\hat{\mathcal{E}}}_{i}^{Y}+{\hat{\mathcal{E}}}_{i}^{Z} \big]\mid {\psi}_{a} \rangle ={\sum}_{b} {\eta}_{i,ba} \mid {\psi}_{b} \rangle, \label{A4}
\end{equation}
where
\begin{equation}
{\hat{\mathcal{E}}}_{i}^{X}={\sum}_{k=1}^{i} \mathscr{P}^{\mathrm{Y}}_{{\beta}_{k}}(\hat{\rho})\mathscr{P}^{\mathrm{Z}}_{{\gamma}_{k}}(\hat{\rho})
{\hat{\mathrm{E}}}_{{\alpha}_{k}}^{X}, \label{A5}
\end{equation}
and ${\hat{\mathcal{E}}}_{i}^{Z}$ and ${\hat{\mathcal{E}}}_{i}^{Y}$ are defined similarly. Note that the three operators introduced in Eqs.~(\ref{A4}) and (\ref{A5}) are Hermitian and positive.
Let ${\hat{\mathcal{E}}}_{i}={\hat{\mathcal{E}}}_{i}^{X}+ {\hat{\mathcal{E}}}_{i}^{Y}+{\hat{\mathcal{E}}}_{i}^{Z}$ and ${\mathcal{E}}_{i,aa}=\langle {\psi}_{a} \mid {\hat{\mathcal{E}}}_{i} \mid {\psi}_{a} \rangle$. Then the maximum sought in (\ref{A3}), namely ${\mu}^{sup}_{i}$, is given by ${\sum}_{a}{\lambda}_{a}(\hat{\rho}){\mathcal{E}}_{i,aa}/3=\textrm{tr}[{\hat{\mathcal{E}}}_{i}\hat{\rho}]/3$. A key fact at this juncture is that the diagonal elements ${\mathcal{E}}_{i,aa}$ do not depend on $a$ and are all equal. In other words, each pure state $\mid {\psi}_{a} \rangle$ in the ensemble contributes equally to ${\mu}^{sup}_{i}$. This property follows from a variation of (\ref{A3}) with respect to ${\lambda}_{a}(\hat{\rho})$, which leads to
\begin{equation}
{\xi}_{i}=\langle {\psi}_{a} \mid {\hat{\mathcal{E}}}_{i}^{X}+ {\hat{\mathcal{E}}}_{i}^{Y}+{\hat{\mathcal{E}}}_{i}^{Z} \mid {\psi}_{a} \rangle={\mathcal{E}}_{i,aa}. \label{A6}
\end{equation}
Clearly then, each pure state $\mid {\psi}_{a} \rangle$ in the ensemble must realize the same maximum ${\mu}^{sup}_{i}$ as the entire ensemble $\hat{\rho}$. We conclude therefore that the sought maximum can be found among the pure states of the system.
What if we are looking for ${\mu}^{sup}_{i}$ but with $\hat{\rho}$ limited to separable states? It turns out that here too the search can be limited to pure states, which would be pure product states in this instance. To establish this result, we will modify the foregoing analysis by stipulating that $\hat{\rho}={\sum}_{a}\, {q}_{a} \mid {\phi}^{A}_{a} \rangle \langle {\phi}^{A}_{a} \mid \otimes \mid {\phi}^{B}_{a} \rangle \langle {\phi}^{B}_{a} \mid$, which represents a separable, bipartite state of parties $A$ and $B$. Then the corresponding ${\mu}^{sup}_{sep,i}$ is the maximum of
\begin{align}
{\sum}_{a,b,c}{q}_{a}{q}_{b}{q}_{c} \big[
\langle {\phi}^{B}_{a}\mid \otimes \langle {\phi}^{A}_{a} \mid {\hat{\mathrm{E}}}_{{\alpha}_{1}}^{X}\mid {\phi}^{A}_{a} \rangle \otimes \mid {\phi}^{B}_{a} \rangle \nonumber \\
\times \langle {\phi}^{B}_{b}\mid \otimes \langle {\phi}^{A}_{b} \mid {\hat{\mathrm{E}}}_{{\beta}_{1}}^{Y}\mid {\phi}^{A}_{b} \rangle \otimes \mid {\phi}^{B}_{b} \rangle \nonumber \\
\times \langle {\phi}^{B}_{c}\mid \otimes \langle {\phi}^{A}_{c} \mid {\hat{\mathrm{E}}}_{{\gamma}_{1}}^{Z}\mid {\phi}^{A}_{c} \rangle \otimes \mid {\phi}^{B}_{c} \rangle \nonumber \\
+ \langle {\phi}^{B}_{a}\mid \otimes \langle {\phi}^{A}_{a} \mid {\hat{\mathrm{E}}}_{{\alpha}_{2}}^{X}\mid {\phi}^{A}_{a} \rangle \otimes \mid {\phi}^{B}_{a} \rangle \nonumber \\
\times \langle {\phi}^{B}_{b}\mid \otimes \langle {\phi}^{A}_{b} \mid {\hat{\mathrm{E}}}_{{\beta}_{2}}^{Y}\mid {\phi}^{A}_{b} \rangle \otimes \mid {\phi}^{B}_{b} \rangle \nonumber \\
\times \langle {\phi}^{B}_{c}\mid \otimes \langle {\phi}^{A}_{c} \mid {\hat{\mathrm{E}}}_{{\gamma}_{2}}^{Z}\mid {\phi}^{A}_{c} \rangle \otimes \mid {\phi}^{B}_{c} \rangle \nonumber \\
+ \ldots \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\, \nonumber \\
+ \langle {\phi}^{B}_{a}\mid \otimes \langle {\phi}^{A}_{a} \mid {\hat{\mathrm{E}}}_{{\alpha}_{i}}^{X}\mid {\phi}^{A}_{a} \rangle \otimes \mid {\phi}^{B}_{a} \rangle \nonumber \\
\times \langle {\phi}^{B}_{b}\mid \otimes \langle {\phi}^{A}_{b} \mid {\hat{\mathrm{E}}}_{{\beta}_{i}}^{Y}\mid {\phi}^{A}_{b} \rangle \otimes \mid {\phi}^{B}_{b} \rangle \nonumber \\
\times \langle {\phi}^{B}_{c}\mid \otimes \langle {\phi}^{A}_{c} \mid {\hat{\mathrm{E}}}_{{\gamma}_{i}}^{Z}\mid {\phi}^{A}_{c} \rangle \otimes \mid {\phi}^{B}_{c} \rangle \nonumber \\
+ {\sum}_{a} {\eta}_{i,a}^{A} \big[ 1-\langle {\phi}^{A}_{a}\mid {\phi}^{A}_{a} \rangle \big]
+{\sum}_{a} {\eta}_{i,a}^{B} \big[ 1-\langle {\phi}^{B}_{a}\mid {\phi}^{B}_{a} \rangle \big] \nonumber \\
+ {\xi}_{i} \big(1-{\sum}_{a} {q}_{a}\big), \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \label{A7}
\end{align}
as the state vectors $ \{ \mid {\phi}^{A}_{a} \rangle \}$, $\{ \mid{\phi}^{B}_{a} \rangle \}$, the probabilities $\{ {q}_{a} \}$, and the Lagrange multipliers ${\eta}_{i,a}^{A}$, ${\eta}_{i,a}^{B}$, and ${\xi}_{i}$ which serve to enforce normalization conditions and probability conservation for the ensemble are varied.
Variations with respect to $ \langle {\phi}^{A}_{a} \mid$ and $ \langle {\phi}^{B}_{a} \mid$ now give
\begin{align}
{q}_{a} \big[ \langle {\phi}^{B}_{a}\mid ({\hat{\mathcal{E}}}_{i}^{X}+ {\hat{\mathcal{E}}}_{i}^{Y}+{\hat{\mathcal{E}}}_{i}^{Z}) \mid {\phi}^{B}_{a} \rangle \big] \mid {\phi}^{A}_{a} \rangle &={\eta}_{i,a}^{A} \mid {\phi}^{A}_{a} \rangle \nonumber \\
{q}_{a} \big[ \langle {\phi}^{A}_{a}\mid ({\hat{\mathcal{E}}}_{i}^{X}+ {\hat{\mathcal{E}}}_{i}^{Y}+{\hat{\mathcal{E}}}_{i}^{Z}) \mid {\phi}^{A}_{a} \rangle \big] \mid {\phi}^{B}_{a} \rangle &={\eta}_{i,a}^{B} \mid {\phi}^{B}_{a} \rangle , \label{A8}
\end{align}
where $({\hat{\mathcal{E}}}_{i}^{X},{\hat{\mathcal{E}}}_{i}^{Y},{\hat{\mathcal{E}}}_{i}^{Z})$ are defined as in Eq.~(\ref{A5}) et seq.
Equations (\ref{A8}) directly imply that ${\eta}_{i,a}^{A}={\eta}_{i,a}^{B}$, and as in the general case above, the maximum sought in (\ref{A7}), namely ${\mu}^{sup}_{sep,i}$, is found to equal ${\sum}_{a}{q}_{a} {\mathcal{E}}_{i,aa}/3=\textrm{tr}[{\hat{\mathcal{E}}}_{i}\hat{\rho}]/3$, where ${\hat{\mathcal{E}}}_{i}={\hat{\mathcal{E}}}_{i}^{X}+ {\hat{\mathcal{E}}}_{i}^{Y}+{\hat{\mathcal{E}}}_{i}^{Z}$. Furthermore, a variation with respect to the probabilities $\{ {q}_{a} \}$ shows the equality of the matrix elements ${\mathcal{E}}_{i,aa}$ just as in the general case above, whereupon we learn that each pure product state in the ensemble makes the same contribution to ${\mu}^{sup}_{sep,i}$. In other words, the separable density matrix which maximizes (\ref{A7}) may be taken to be a pure product state \cite{noteext}.
It should be clear from the above arguments that the results hold for any number of generalized measurements as well as any number of parties in the multipartite state. In summary, then, we have found that the joint majorization uncertainty bound, ${\mathscr{P}}_{sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}}$, for a set of generalized measurements performed on an arbitrary quantum state supported on a finite-dimensional Hilbert space can be found on the class of pure states of the system, and in the case of separable states, ${\mathscr{P}}_{sep;sup}^{\mathrm{X} \oplus \mathrm{Y} \oplus \ldots \oplus \mathrm{Z}}$ can be found on the class of pure product states of the system.
\end{document} |
\begin{document}
\title{Free energy of directed polymers in random environment in $1+1$-dimension at high temperature}
\author{Makoto Nakashima \footnote{[email protected], Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya, Japan } }
\date{}
\pagestyle{myheadings}
\markboth{Makoto Nakashima}{On the universality of $1+1$ DPRE}
\maketitle
\sloppy
\begin{abstract}
We consider the free energy $F(\beta)$ of directed polymers in random environment in $1+1$-dimension. It is known that $F(\beta)$ is of order $-\beta^4$ as $\beta\to 0$ \cite{AleYil,Lac,Wat}. In this paper, we will prove that under a certain condition on the potential, \begin{align*}
\lim_{\beta\to 0}\frac{F(\beta)}{\beta^4}=\lim_{T\to\infty}\frac{1}{T}P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T)\right] =-\frac{1}{6},
\end{align*}
where $\{\mathcal{Z}_\beta(t,x):t\geq 0,x\in\mathbb{R}\}$ is the unique mild solution to the stochastic heat equation \begin{align*}
\frac{\partial}{\partial t}\mathcal{Z}=\frac{1}{2}\Delta \mathcal{Z}+\beta \mathcal{Z}{\dot{\mathcal W}},\ \ \lim_{t\to 0}\mathcal{Z}(t,x)dx=\delta_{0}(dx),
\end{align*}
where $\mathcal{W}$ is a time-space white noise
and \begin{align*}
\mathcal{Z}_\beta(t)=\int_\mathbb{R}\mathcal{Z}_\beta(t,x)dx.
\end{align*}
\end{abstract}
{\bf AMS 2010 Subject Classification:} 82D60, 82C44.
{\bf Key words:} Directed polymers, Free energy, Universality, Continuum directed polymer.
We denote by $(\Omega, {\cal F},P )$ a probability space. We denote by $P[X]$ the expectation of a random variable $X$ with respect to $P$. Let $\mathbb{N}_0=\{0,1,2,\cdots\}$, $\mathbb{N}=\{1,2,3,\cdots\}$, and $\mathbb{Z}=\{0,\pm 1,\pm 2,\cdots\}$.
Let $C_{x_1,\cdots,x_p}$ or $C(x_1,\cdots,x_p)$ be a non-random constant which depends only on the parameters $x_1,\cdots,x_p$.
\section{Introduction and main result}
Directed polymers in random environment were introduced by Henley and Huse in the physics literature to study the effect of impurities in a medium on a polymer chain \cite{HenHus}. In particular, the random medium is given by i.i.d.\,time-space random variables and the shape of the polymer is realized as the time-space path of a walk whose law is given by a Gibbs measure with inverse temperature $\beta\geq 0$; that is, a time-space trajectory $s$ up to time $N$ appears as a realization of a polymer with probability \begin{align*}
\mu_{\beta,N}(s)=\frac{1}{Z_{\beta,N}}\exp\left(\beta H_N(s)\right)P_S^0(S_{[0,N]}=s), \ \ \ \ s\in \left(\mathbb{Z}^{d}\right)^{N+1},
\end{align*}
where $H_N(s)$ is the Hamiltonian of the trajectory $s$, $(S,P_S^x)$ is the simple random walk on $\mathbb{Z}^d$ starting from $x\in\mathbb{Z}^d$, $S_{[0,N]}=(S_0,S_1,\cdots,S_N)\in \left(\mathbb{Z}^{d}\right)^{N+1}$, and $Z_{\beta,N}$ is the normalizing constant, called the quenched partition function.
There exists $\beta_1$ such that if $\beta<\beta_1$, then the effects of the random environment are weak, and if $\beta>\beta_1$, then the environment has a meaningful influence. This phase transition is characterized by the uniform integrability of the normalized partition functions. Also, we have another phase transition characterized by the non-triviality of the free energy, i.e.\ there exists $\beta_2$ such that if $\beta<\beta_2$, then the free energy is trivial and if $\beta>\beta_2$, then the free energy is non-trivial. The former phase transition is referred to as the weak versus strong disorder phase transition and the latter as the strong versus very strong disorder phase transition. We have some known results on the phase transitions: $\beta_1=\beta_2=0$ when $d=1,2$ \cite{ComVar,Lac} and $\beta_2\geq \beta_1>0$ when $d\geq 3$ \cite{Bol,ComShiYos}. In particular, the best lower bound on $\beta_1$ is obtained by Birkner et al.\ using size-biased directed polymers and the random walk pinning model \cite{Bir2,BirGreden2,Nak3}.
There has been a lot of progress on the $\mathbb{Z}^d$-lattice model over the past three decades \cite{Bol,CarHu,ComShiYos,ComShiYos2,CarHu2,ComVar,Lac,BerLac2}. Recently, the KPZ universality class conjecture for the $d=1$ case has attracted much attention and was confirmed for certain environments \cite{Sep,GeoSep,ComNgu}. The recent progress is reviewed in \cite{Com}.
\subsection{Model and main result}
To define the model precisely, we introduce some random variables.
\begin{itemize}
\item (Random environment) Let $\{\eta(n,x):(n,x)\in\mathbb{N}\times \mathbb{Z}^d\}$ be $\mathbb{R}$-valued i.i.d.\,random variables with $\lambda(\beta)=\log Q[\exp\left(\beta \eta(n,x)\right)]\in\mathbb{R}$ for any $\beta\in\mathbb{R}$, where $Q$ is the law of the $\eta$'s.
\item (Simple random walk) Let $(S,P_S^x)$ be a simple random walk on $\mathbb{Z}^d$ starting from $x\in\mathbb{Z}^d$. We write $P_S=P_S^0$ for simplicity.
\end{itemize}
Then, the Hamiltonian $H_N(s)$ is given by \begin{align*}
H_N(s)=H_N(s,\eta)=\sum_{k=1}^N \eta(k,s_k),\ \ \ s=(s_0,\cdots,s_N)\in \left(\mathbb{Z}^d\right)^{N+1},
\end{align*}
and \begin{align*}
Z_{\beta,N}=Z_{\beta,N}(\eta)=P_S\left[\exp\left(\beta \sum_{k=1}^N \eta(k,S_k)\right)\right].
\end{align*}
It is clear that \begin{align*}
Q\left[Z_{\beta,N}(\eta)\right]=\exp\left(N\lambda(\beta)\right)
\end{align*}
for any $\beta\in\mathbb{R}$.
The normalized partition function is defined by \begin{align}
W_{\beta,N}(\eta)&=\frac{Z_{\beta,N}(\eta)}{Q\left[Z_{\beta,N}(\eta)\right]}\notag\\
&=Z_{\beta,N}(\eta)\exp\left(-N\lambda(\beta)\right)\notag\\
&=P_S\left[\exp\left(\beta H_N(S)-N\lambda(\beta)\right)\right]\notag\\
&=P_S\left[\prod_{k=1}^N{\zeta}_{k,S_k}(\beta,\eta)\right],\label{partfn}
\end{align}
where we write for each $(n,x)\in\mathbb{N}\times \mathbb{Z}^d$\begin{align*}
{\zeta}_{n,x}(\beta,\eta)=\exp\left(\beta \eta(n,x)-\lambda(\beta)\right).
\end{align*}
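As a quick numerical illustration (our own sketch, not part of the paper; the helper names are ours), the normalization $Q[W_{\beta,N}]=1$ can be checked by Monte Carlo in $d=1$ with a standard Gaussian environment, for which $\lambda(\beta)=\beta^2/2$, approximating the $P_S$-expectation by sampled paths.

```python
import numpy as np

rng = np.random.default_rng(0)

def W_hat(beta, N, n_paths=2000):
    # one environment realization eta(k, x), k = 1..N, |x| <= N (Gaussian, lambda(beta) = beta^2/2);
    # the P_S-expectation is approximated by n_paths sampled simple-random-walk paths
    eta = rng.standard_normal((N, 2 * N + 1))      # eta[k-1, x + N]
    steps = rng.choice([-1, 1], size=(n_paths, N))
    sites = np.cumsum(steps, axis=1)               # S_1..S_N for each path
    H = eta[np.arange(N), sites + N].sum(axis=1)
    return np.exp(beta * H - N * beta**2 / 2).mean()

# Q[W_{beta,N}] = 1, so the average over many environments should be close to 1
vals = [W_hat(0.2, 20) for _ in range(300)]
print(np.mean(vals))    # ≈ 1
```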
Then, the following limit exists $Q$-a.s.\,and in $L^1(Q)$ \cite{ComShiYos,ComYos}:
\begin{align}
F(\beta)&=\lim_{N\to \infty}\frac{1}{N}\log W_{\beta,N}(\eta)\notag\\
&=\lim_{N\to\infty}\frac{1}{N}Q\left[\log W_{\beta,N}(\eta)\right]\notag\\
&=\sup_{N\geq 1}\frac{1}{N}Q\left[\log W_{\beta,N}(\eta)\right].\label{free}
\end{align}
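The finite-$N$ quantity $\frac{1}{N}Q\left[\log W_{\beta,N}\right]$ appearing in (\ref{free}) can also be estimated numerically. The following is an illustrative sketch of ours (not from the paper; the helper names are ours) for $d=1$ with a standard Gaussian environment, enumerating all $2^N$ paths exactly so that only the environment is sampled; by Jensen's inequality the estimate should be strictly negative.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
beta, N = 0.5, 10
lam = beta**2 / 2                                   # lambda(beta) = beta^2/2 for Gaussian eta
steps = np.array(list(product([-1, 1], repeat=N)))  # all 2^N simple-random-walk step sequences
sites = np.cumsum(steps, axis=1)                    # S_1..S_N for each path

def log_W(eta):
    # exact P_S-average over all 2^N paths for one environment realization
    H = eta[np.arange(N), sites + N].sum(axis=1)    # eta[k-1, x + N] is eta(k, x)
    return np.log(np.mean(np.exp(beta * H - N * lam)))

samples = [log_W(rng.standard_normal((N, 2 * N + 1))) for _ in range(300)]
est = np.mean(samples) / N      # estimates (1/N) Q[log W_{beta,N}]
print(est)                      # strictly negative, consistent with F(beta) < 0 for d = 1
```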
The limit $F(\beta)$ is a non-random constant and is called the quenched free energy. Jensen's inequality implies that \begin{align*}
F(\beta)\leq \lim_{N\to\infty}\frac{1}{N}\log Q\left[W_{\beta,N}(\eta)\right]=0.
\end{align*}
It is known that $F(\beta)<0$ if $\beta\neq 0$ when $d=1,2$ \cite{ComVar,Lac} and $F(\beta)=0$ for sufficiently small $|\beta|$ when $d\geq 3$. Recently, the asymptotics of $F(\beta)$ at high temperature ($\beta\to 0$) have been studied:\begin{align*}
&F(\beta)\asymp -\beta^{4},\ \ \text{if }d=1
\intertext{\cite{Lac, Wat, AleYil} and}
&\log |F(\beta)|\sim -\frac{\pi}{\beta^2},\ \ \text{if }d=2
\end{align*}
\cite{Lac,BerLac2}.
In particular, it is conjectured that when $d=1$, \begin{align*}
\lim_{\beta\to 0}\frac{1}{\beta^4}F(\beta)=-\frac{1}{24},
\end{align*}
where $\frac{1}{24}$ appears in the literature on the stochastic heat equation and the KPZ equation \cite{BerGia,AmiCorQua}.
Our main result answers this conjecture in some sense.
\begin{thm}\label{main}
Suppose $d=1$. We assume the following concentration inequality:
there exist $\gamma\geq 1$ and $C_1,C_2\in(0,\infty)$ such that for any $n\in\mathbb{N}$ and any convex, $1$-Lipschitz function $f:\mathbb{R}^n\to \mathbb{R}$,
\begin{align}Q\left(|f(\omega_1,\cdots,\omega_n)-Q[f(\omega_1,\cdots,\omega_n)]|\geq t\right)\leq C_1\exp\left(-C_2t^{\gamma}\right),\label{A}
\end{align}
where $1$-Lipschitz means $|f(x)-f(y)|\leq |x-y|$ for any $x,y\in\mathbb{R}^n$, and $\omega_1,\cdots,\omega_n$ are i.i.d.~random variables with the marginal law $Q(\eta(n,x)\in dy)$.
Then, we have \begin{align*}
\lim_{\beta\to 0}\frac{1}{\beta^4}F(\beta)=-\frac{1}{6}.
\end{align*}
\end{thm}
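As a concrete illustration (not from the paper), the Euclidean norm $f(x)=|x|$ is convex and $1$-Lipschitz, so it falls within the scope of (\ref{A}); a quick numerical sanity check of the Lipschitz property, with arbitrary dimensions and sample sizes:

```python
import math
import random

def f(x):
    # Euclidean norm: a convex, 1-Lipschitz function from R^n to R
    return math.sqrt(sum(v * v for v in x))

def lipschitz_ratio(x, y):
    # |f(x)-f(y)| / |x-y|, which must be <= 1 for a 1-Lipschitz f
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return abs(f(x) - f(y)) / dist

random.seed(1)
pairs = [([random.gauss(0, 1) for _ in range(10)],
          [random.gauss(0, 1) for _ in range(10)]) for _ in range(100)]
ratios = [lipschitz_ratio(x, y) for x, y in pairs]
```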
The constant $-\frac{1}{6}$ appears as the limit of the free energy of the continuum directed polymers (see Lemma \ref{freeenecdp}):
\begin{align*}
F_\mathcal{Z}(\sqrt{2})=\lim_{T\to\infty}\frac{1}{T}P_\mathcal{Z}\left[\log \int_{\mathbb{R}}\mathcal{Z}_{\sqrt{2}}^x(T,y)dx\right]=-\frac{1}{6},
\end{align*}
where $\mathcal{Z}^x_{\beta}(t,y)$ is the unique mild solution to the stochastic heat equation \begin{align*}
\partial_t \mathcal{Z}=\frac{1}{2}\Delta \mathcal{Z}+\beta\mathcal{Z}\dot{\mathcal{W}},
\end{align*}
with the initial condition $\displaystyle \lim_{t\to 0 }\mathcal{Z}(t,y)dy=\delta_x(dy)$, where $\mathcal{W}$ is a time-space white noise and $P_\mathcal{Z}$ is the law of $\mathcal{Z}_\beta^x$.
We write \begin{align*}
\mathcal{Z}_{\beta}^x(t)=\int_\mathbb{R}\mathcal{Z}_{\beta}^x(t,y)dy
\end{align*}
and $\mathcal{Z}_\beta(t)=\mathcal{Z}_\beta^0(t)$ for simplicity. At first sight, $-\frac{1}{6}$ differs from the value $-\frac{1}{24}$ in the conjecture. However, they are related by \begin{align*}
-\frac{1}{6}=-\frac{(\sqrt{2})^4}{24},
\end{align*}
where the factor $\sqrt{2}$ comes from the periodicity of the simple random walk. Thus, the conjecture is essentially true.
\begin{rem}
Assumption (\ref{A}) is given in \cite{CarTonTor} for the pinning model. Under this assumption, $\{\eta(n,x):n\in\mathbb{N},x\in\mathbb{Z}\}$ satisfies a good concentration property (see Lemma \ref{tal}). It is known that the following distributions satisfy (\ref{A}).
\begin{enumerate}
\item If $\eta(n,x)$ is bounded, then (\ref{A}) holds with $\gamma=2$ \cite[Corollary 4.10]{Led}.
\item If the law of $\eta(n,x)$ satisfies a log-Sobolev inequality (for example, the Gaussian distribution), then (\ref{A}) holds with $\gamma=2$ \cite[Theorem 5.3, Corollary 5.7]{Led}.
\item If the law of $\eta(n,x)$ has probability density $c_\gamma \exp\left(-|x|^\gamma\right)$ with $\gamma\in [1,2]$, then (\ref{A}) holds \cite[Proposition 4.18, Proposition 4.19]{Led}.
\end{enumerate}
\end{rem}
\subsection{Organization of this paper} This paper is structured as follows:
\begin{itemize}
\item We first give the strategy of the proof of our main result in section \ref{2}.
\item Section \ref{3} is devoted to proving the statements of section \ref{2} related to discrete directed polymers.
\item Section \ref{4} is devoted to proving the statement of section \ref{2} related to continuum directed polymers.
\end{itemize}
\section{Proof of Theorem \ref{main}}\label{2}
In this section, we give the proof of Theorem \ref{main}.
\subsection{Proof of limit inferior}
The idea is simple. Alberts, Khanin, and Quastel proved the following limit theorem.
\begin{thm}\label{thmref}$($\cite{AlbKhaQua}$)$ Suppose $d=1$. Let $\{\beta_n:n\geq 1\}$ be an $\mathbb{R}$-valued sequence with $\beta_n\to 0$ and let $r>0$. Then,
the sequence $\displaystyle \{W_{r\beta_n,\lfloor T\beta_n^{-4}\rfloor}(\eta):n\geq 1\}$ is $L^2$-bounded and converges in distribution to a random variable $\mathcal{Z}_{r\sqrt{2}}(T)$ for each $T>0$.
\end{thm}
Combining this with (\ref{free}), we have that \begin{align*}
\frac{1}{\lfloor T\beta_n^{-4}\rfloor}Q\left[\log W_{\beta_n,\lfloor T\beta_n^{-4}\rfloor}(\eta)\right]\leq F(\beta_n)
\end{align*}for any $n\geq 1$ and $T>0$, i.e. \begin{align}
\frac{\beta_n^{-4}}{\lfloor T\beta_n^{-4}\rfloor}Q\left[\log W_{\beta_n,\lfloor T\beta_n^{-4}\rfloor}(\eta)\right]\leq \frac{1}{\beta_n^{4}}F(\beta_n).\label{inf}
\end{align}
Thus, if $ \log W_{\beta_n,\lfloor T\beta_n^{-4}\rfloor}(\eta)$ is uniformly integrable, then we have that \begin{align}
\frac{1}{T}P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T)\right]\leq \varliminf_{n\to \infty}\frac{1}{\beta_n^{4}}F(\beta_n).\label{inf2}
\end{align}
Taking the limit in $T$, we have that \begin{align}
\lim_{T\to \infty}\frac{1}{T}P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T)\right]\leq \varliminf_{n\to \infty}\frac{1}{\beta_n^{4}}F(\beta_n).\label{limcdp}
\end{align}
Therefore, it is enough to show the following lemmas.
\begin{lem}\label{lpbdd}
Suppose $d=1$ and assume (\ref{A}).
Then, for each $T>0$,\begin{align*}
\log W_{\beta_n,\lfloor T\beta_n^{-4}\rfloor }(\eta) \text{ is uniformly integrable.}
\end{align*}
\end{lem}
\begin{lem}\label{freeenecdp}
We have the limit \begin{align*}
F_\mathcal{Z}(\sqrt{2})=\lim_{T\to \infty}\frac{1}{T}P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T)\right]=\sup_{T>0}\frac{1}{T}P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T)\right]=-\frac{1}{6}.
\end{align*}
\end{lem}
In general, we should take $n=\lfloor \beta_n^{-4}\rfloor$. However, we may consider the case \begin{align*} \beta_n=n^{-\frac{1}{4}}\end{align*}
without loss of generality.
\subsection{Proof of limit superior}
We use a coarse graining argument to prove the limit superior.
We divide $\mathbb{Z}$ into blocks of size of order $n^{1/2}$: for $y\in \mathbb{Z}$, we set \begin{align*}
B_y^n=\left[(2y-1)\lfloor n^{1/2}\rfloor+y,(2y+1)\lfloor n^{1/2}\rfloor+y\right].
\end{align*}
For each $\ell\in \mathbb{N}$, we denote by $B_y^{n}(\ell)$ the set of lattice sites $z\in B_y^n$ such that \begin{align*}
z-\ell\in 2\mathbb{Z},
\end{align*}
that is, the set of sites in $B_y^n$ which can be reached by the random walk $(S,P_S)$ at time $\ell$.
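The blocks $B_y^n$ tile $\mathbb{Z}$: the right endpoint of $B_y^n$ is $(2y+1)\lfloor n^{1/2}\rfloor+y$, and the left endpoint of $B_{y+1}^n$ is the next integer. A small check of this bookkeeping, with hypothetical values of $n$:

```python
def block(y, n):
    # B_y^n as a Python range of lattice sites (right endpoint inclusive)
    m = int(n ** 0.5)
    return range((2 * y - 1) * m + y, (2 * y + 1) * m + y + 1)

def tiles(n, ymax):
    # concatenating B_{-ymax},...,B_{ymax} must give consecutive integers
    sites = [z for y in range(-ymax, ymax + 1) for z in block(y, n)]
    return sites == list(range(sites[0], sites[-1] + 1))
```

Each block contains $2\lfloor n^{1/2}\rfloor+1$ sites, of which roughly half have the parity reachable at a given time.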
We will give an idea of the proof.
It is clear by Jensen's inequality that for each $\theta \in (0,1)$, $T\in\mathbb{N}$, and $N\in\mathbb{N}$,
\begin{align}
\begin{array}{ll}&\displaystyle \frac{1}{NTn}Q\left[\log W_{\beta_n,NTn}(\eta)\right]\\
&\displaystyle =\frac{1}{\theta NTn}Q\left[\log W^\theta_{\beta_n,NTn}(\eta)\right]\\
&\displaystyle \leq \frac{1}{\theta NTn}\log Q\left[W^\theta_{\beta_n,NTn}(\eta)\right].\end{array}\label{thetaineq}
\end{align}
We will take the limit superior of both sides as $N\to\infty$, $n\to \infty$, $T\to \infty$, and then $\theta\to 0$, in this order. Then, it is clear that \begin{align*}
\varlimsup_{n\to \infty}\frac{1}{\beta_n^4}F(\beta_n)\leq \varlimsup_{\theta\to 0}\varlimsup_{T\to \infty}\varlimsup_{n\to \infty}\varlimsup_{N\to\infty} \frac{1}{\theta NTn}\log Q\left[W^\theta_{\beta_n,NTn}(\eta)\right].
\end{align*}
We would like to estimate the right-hand side.
For $\theta \in (0,1)$, we have that \begin{align*}
Q\left[W^\theta_{\beta_n,NTn}(\eta)\right]\leq \sum_{Z}Q\left[\hat{W}_{\beta_n,NTn}^\theta(\eta,Z)\right],
\end{align*}
where for $Z=(z_1,\cdots,z_N)\in\mathbb{Z}^N$\begin{align*}
&\hat{W}_{\beta_n,NTn}(\eta,Z)\\
&=P_S\left[\prod_{i=1}^{NTn}\zeta_{i,S_i}(\beta_n,\eta):S_{\ell Tn}\in B_{z_\ell}^{n}(\ell Tn),\ 1\leq \ell\leq N\right],
\end{align*}
and we have used the fact that $(a+b)^\theta\leq a^\theta+b^\theta$ for $a,b\geq 0$ and $\theta\in (0,1)$. Then, we have from the Markov property that \begin{align}
Q\left[W^\theta_{\beta_n,NTn}(\eta)\right]&\leq \left(\sum_{z\in \mathbb{Z}}Q\left[\max_{x\in B_0^n(0)}P_S^x\left[\prod_{i=1}^{Tn}\zeta_{i,S_i}(\beta_n,\eta):S_{Tn}\in B_z^n\right]^\theta\right]\right)^N.\label{maxtheta}
\end{align}
Combining (\ref{thetaineq}) and (\ref{maxtheta}), we have that \begin{align*}
\frac{1}{\beta_n^4}F(\beta_n)\leq \frac{1}{\theta T}\log \sum_{z\in \mathbb{Z}}Q\left[\max_{x\in B_0^n(0)}P_S^x\left[\prod_{i=1}^{Tn}\zeta_{i,S_i}(\beta_n,\eta):S_{Tn}\in B_z^n\right]^\theta\right].
\end{align*}
Here, we have the following lemmas:
\begin{lem}\label{maxsup}
We have that \begin{align*}
\lim_{n\to\infty}Q\left[\max_{x\in B_0^n(0)}P_S^x\left[\prod_{i=1}^{Tn}\zeta_{i,S_i}(\beta_n,\eta)\right]^\theta\right]=P_\mathcal{Z}\left[\sup_{x\in [-1,1]}\left(\mathcal{Z}_{\sqrt{2}}^x(T)\right)^\theta\right].
\end{align*}
\end{lem}
\begin{lem}\label{partition1}
There exists a set $I^{(\theta)}(T)\subset \mathbb{Z}$ with $\sharp I^{(\theta)}(T)\asymp T^2$ such that for some constants $C_1>0$ and $C_2>0$, \begin{align*}
\sum_{z\in I^{(\theta)}(T)^c}Q\left[\max_{x\in B_0^n(0)}P_S^x\left[\prod_{i=1}^{Tn}\zeta_{i,S_i}(\beta_n,\eta):S_{Tn}\in B_z^n\right]^\theta\right]\leq C_1\exp(-C_2T^2)
\end{align*}
for any $n\geq 1$.
\end{lem}
Then, we have that \begin{align*}
\varlimsup_{n\to \infty}\frac{1}{\beta_n^4}F(\beta_n)\leq \frac{1}{\theta T}\log \left(C_1\exp(-C_2T^2)+C_3T^2 P_\mathcal{Z}\left[\sup_{x\in [-1,1]}\left(\mathcal{Z}_{\sqrt{2}}^x(T)\right)^\theta\right]\right).
\end{align*}
The following result gives us an upper bound for the limit superior:
\begin{lem}\label{ttheta}
We have that
\begin{align*}
\varlimsup_{\theta\to 0}\varlimsup_{T\to \infty}\frac{1}{T\theta}\log P_\mathcal{Z}\left[\sup_{x\in [-1,1]}\left(\mathcal{Z}_{\sqrt{2}}^x(T)\right)^\theta\right]\leq F_\mathcal{Z}(\sqrt{2}).
\end{align*}
\end{lem}
\begin{rem}
(\ref{A}) is not assumed in the lemmas in this subsection. Thus, we find that the limit superior in Theorem \ref{main} holds for a general environment.
\end{rem}
In the rest of the paper, we will prove the above lemmas.
\section{Proof of Lemma \ref{lpbdd}, Lemma \ref{maxsup}, and Lemma \ref{partition1}}\label{3}
\subsection{Proof of Lemma \ref{lpbdd}}
To prove Lemma \ref{lpbdd}, we use the following concentration inequality.
\begin{lem}\label{tal}(\cite[Lemma 3.3, Proposition 3.4]{CarTonTor})
Assume (\ref{A}). Then, for any $m\in\mathbb{N}$ and any differentiable convex function $f:\mathbb{R}^m\to \mathbb{R}$, we have that \begin{align*}
Q\left(f(\eta)<a-t\right)Q\left( f(\eta)>a, \left|\nabla f(\eta)\right|\leq c \right)\leq C_1'\exp\left(-\frac{\left(\frac{t}{c}\right)^\gamma}{C_2'}\right),\ \ a\in\mathbb{R},\ t,c\in(0,\infty),
\end{align*}
where $\eta=\{\eta_1,\cdots,\eta_m\}$ are i.i.d.~random variables with the marginal law $Q(\eta_e\in dx)$ and $|\nabla f(\eta)|=\displaystyle \sqrt{\sum_{i=1}^m \left|\frac{\partial }{\partial \eta_i}f(\eta)\right|^2}$.
\end{lem}
In the proof of Lemma \ref{lpbdd}, we take $\mathbb{R}^m$ to be $\mathbb{R}^{E_n}$ with $E_n=\{1,\cdots,Tn\}\times \{-Tn,\cdots,Tn\}$, which contains all lattice sites the simple random walk can reach up to time $Tn$.
\begin{proof}[Proof of Lemma \ref{lpbdd}] When we regard $W_{\beta_n,Tn}(\eta)$ as a function of $\{\eta(i,x):(i,x)\in E_n\}$, $\log W_{\beta_n,Tn}(\eta)$ is differentiable and convex. Indeed, we have that \begin{align*}
\frac{\partial }{\partial \eta(i,x)}\log W_{\beta_n,Tn}(\eta)=\frac{1}{W_{\beta_n,Tn}(\eta)}P_S\left[\beta_n \prod_{k=1}^{Tn}\zeta_{k,S_k}(\beta_n,\eta): S_i=x \right]
\end{align*}
and, for $s\in [0,1]$, $\eta=\{\eta(i,x):(i,x)\in E_n\}$, and $\eta'=\{\eta'(i,x):(i,x)\in E_n\}$,\begin{align*}
W_{\beta_n,Tn}(s\eta+(1-s)\eta')&=P_S\left[\prod_{k=1}^{Tn}\zeta_{k,S_k}(\beta_n,s\eta+(1-s)\eta')\right]\\
&\leq W_{\beta_n,Tn}(\eta)^sW_{\beta_n,Tn}(\eta')^{1-s}
\end{align*}
by H\"older's inequality. Thus, we can apply Lemma \ref{tal} to $\log W_{\beta_n,Tn}(\eta)$. We have \begin{align*}
\left|\nabla \log W_{\beta_n,Tn}(\eta)\right|^2&=\beta_n^2\sum_{(i,x)\in E_n}\left(\frac{1}{W_{\beta_n,Tn}(\eta)}P_S\left[\prod_{k=1}^{Tn}\zeta_{k,S_k}(\beta_n,\eta):S_i=x\right]\right)^2\\
&=\beta_n^2\sum_{(i,x)\in E_n}\mu_{\beta_n,Tn}^{\eta}(S_i=x)^2\\
&=\beta_n^2\left(\mu_{\beta_n,Tn}^{\eta}\right)^{\otimes 2}\left[\sharp\{1\leq i\leq Tn:S_i=S_i'\}\right],
\end{align*}
where $\mu_{\beta,n}^\eta$ is the probability measure on simple random walk paths defined by \begin{align*}
\mu_{\beta,n}^\eta(s)=\frac{1}{W_{\beta,n}(\eta)}\exp\left(\beta \sum_{i=1}^n\eta(i,s_i)-n\lambda(\beta)\right)P_S(S_{[0,n]}=s),\ \ \ s=(s_0,s_1,\cdots,s_n)\in\mathbb{Z}^{n+1},
\end{align*}
$\left(\mu_{\beta_n,Tn}^{\eta}\right)^{\otimes 2}$ is the product probability measure of $\mu_{\beta_n,Tn}^{\eta}$ with itself, and $S$ and $S'$ are two independent directed paths with the law $\mu_{\beta_n,Tn}^{\eta}$.
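The identity above, equating $|\nabla \log W|^2$ with $\beta^2$ times the expected overlap $\sharp\{1\leq i\leq Tn:S_i=S_i'\}$ of two independent polymer paths, can be checked numerically for a tiny system. The sketch below (our own illustration, assuming a given small environment) computes both sides by enumerating simple random walk paths:

```python
import math
from itertools import product

def polymer_paths_and_weights(eta, beta, N):
    # Gibbs weights mu(s) proportional to exp(beta * sum_i eta(i, s_i))
    # over the 2^N simple random walk paths started at the origin
    paths, weights = [], []
    for steps in product([-1, 1], repeat=N):
        s, path = 0, []
        for step in steps:
            s += step
            path.append(s)
        w = math.exp(beta * sum(eta[(i + 1, x)] for i, x in enumerate(path)))
        paths.append(path)
        weights.append(w)
    Z = sum(weights)
    return paths, [w / Z for w in weights]

def grad_sq_via_occupation(eta, beta, N):
    # beta^2 * sum_{(i,x)} mu(S_i = x)^2, i.e. |grad log W|^2
    paths, mu = polymer_paths_and_weights(eta, beta, N)
    occ = {}
    for path, m in zip(paths, mu):
        for i, x in enumerate(path):
            occ[(i, x)] = occ.get((i, x), 0.0) + m
    return beta ** 2 * sum(v * v for v in occ.values())

def grad_sq_via_overlap(eta, beta, N):
    # beta^2 * E_{mu x mu}[ #{i : S_i = S_i'} ] over independent path pairs
    paths, mu = polymer_paths_and_weights(eta, beta, N)
    total = 0.0
    for p1, m1 in zip(paths, mu):
        for p2, m2 in zip(paths, mu):
            overlap = sum(1 for a, b in zip(p1, p2) if a == b)
            total += m1 * m2 * overlap
    return beta ** 2 * total
```

The two quantities agree exactly, since $\sum_{(i,x)}\mu(S_i=x)^2=(\mu\otimes\mu)[\sharp\{i:S_i=S_i'\}]$.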
We write \begin{align*}
L_n(s,s')=\sharp\{1\leq i\leq n:s_i=s_i'\}
\end{align*}
for $s=(s_1,\cdots,s_n)$ and $s'=(s_1',\cdots,s_n')\in \mathbb{Z}^n$.
We define the event $A_n$ on the environment by \begin{align*}
&A_n\\
&=\left\{\eta: W_{\beta_n,Tn}(\eta)\geq \frac{1}{2}Q\left[W_{\beta_n,Tn}(\eta)\right],\ \beta_n^2\left(\mu_{\beta_n,Tn}^{\eta}\right)^{\otimes 2}\left[L_{Tn}(S,S')\right]\leq C_4\right\}
\end{align*}
for some $C_4>0$ which we will take large enough. We claim that for $C_4>0$ large enough, there exists a constant $\delta>0$ such that \begin{align}
Q(A_n)>\delta\label{lbdd}
\end{align}
for all $n\geq 1$. If (\ref{lbdd}) holds, then Lemma \ref{lpbdd} follows.
Indeed, applying Lemma \ref{tal}, we have that \begin{align*}
Q(\log W_{\beta_n,Tn}(\eta)\leq -\log 2 -u)\leq Q(A_n)^{-1}C_1'\exp\left(-\frac{\left(\frac{u}{C_1'}\right)^\gamma}{C_2'}\right).
\end{align*}
Thus, we find the $L^2$-boundedness of $\log W_{\beta_n,Tn}(\eta)$ and hence its uniform integrability.
We will complete the proof of Lemma \ref{lpbdd} by showing (\ref{lbdd}). We observe that \begin{align*}
&Q(\eta\in A_n)\\
&=Q\left(\left\{\eta: W_{\beta_n,Tn}(\eta)\geq \frac{1}{2}Q\left[W_{\beta_n,Tn}(\eta)\right]\right\}\right)\\
&\ \ \ -Q\left(\left\{\eta: W_{\beta_n,Tn}(\eta)\geq \frac{1}{2}Q\left[W_{\beta_n,Tn}(\eta)\right],\ \ \beta_n^2\left(\mu_{\beta_n,Tn}^{\eta}\right)^{\otimes 2}\left[L_{Tn}(S,S')\right]> C_4\right\} \right)\\
&\geq Q\left(\left\{\eta: W_{\beta_n,Tn}(\eta)\geq \frac{1}{2}Q\left[W_{\beta_n,Tn}(\eta)\right]\right\}\right)\\
&\ \ \ -Q\left(\left\{\eta: P_{S,S'}\left[\beta_n^2L_{Tn}(S,S')\prod_{i=1}^{Tn}\zeta_{i,S_i}(\beta_n,\eta)\zeta_{i,S_i'}(\beta_n,\eta)\right]> \frac{C_4}{4}\right\} \right)\\
&\geq Q\left(\left\{\eta: W_{\beta_n,Tn}(\eta)\geq \frac{1}{2}Q\left[W_{\beta_n,Tn}(\eta)\right]\right\}\right)\\
&\ \ \ -\frac{4}{C_4}Q\left[P_{S,S'}\left[\beta_{n}^2L_{Tn}(S,S')\prod_{i=1}^{Tn}\zeta_{i,S_i}(\beta_n,\eta)\zeta_{i,S_i'}(\beta_n,\eta)\right]\right],
\end{align*}
where $(S',P_{S'})$ is the simple random walk on $\mathbb{Z}$ starting from the origin and $P_{S,S'}$ is the product measure of $P_S$ and $P_{S'}$.
The Paley-Zygmund inequality yields that \begin{align*}
Q\left(\left\{\eta: W_{\beta_n,Tn}(\eta)\geq \frac{1}{2}Q\left[W_{\beta_n,Tn}(\eta)\right]\right\}\right)&\geq \frac{1}{4}\frac{\left(Q\left[W_{\beta_n,Tn}(\eta)\right]\right)^2}{Q\left[W_{\beta_n,Tn}(\eta)^2\right]}\\
&=\frac{1}{4}\frac{1}{Q\left[W_{\beta_n,Tn}(\eta)^2\right]}.
\end{align*}
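The form of the Paley-Zygmund inequality used here, $Q(X\geq \tfrac{1}{2}Q[X])\geq \tfrac{1}{4}Q[X]^2/Q[X^2]$ for $X\geq 0$, is easy to sanity-check on a simple discrete distribution (a hypothetical example, not the polymer partition function):

```python
def paley_zygmund_check(values, probs, theta=0.5):
    # returns (P(X >= theta*E[X]), (1-theta)^2 * E[X]^2 / E[X^2]);
    # the first entry must dominate the second for nonnegative X
    ex = sum(v * p for v, p in zip(values, probs))
    ex2 = sum(v * v * p for v, p in zip(values, probs))
    tail = sum(p for v, p in zip(values, probs) if v >= theta * ex)
    return tail, (1 - theta) ** 2 * ex ** 2 / ex2
```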
Also, we have that \begin{align*}
&Q\left[P_{S,S'}\left[\beta_{n}^2L_{Tn}(S,S')\prod_{i=1}^{Tn}\zeta_{i,S_i}(\beta_n,\eta)\zeta_{i,S_i'}(\beta_n,\eta)\right]\right]\\
&=P_{S,S'}\left[\beta_n^2L_{Tn}(S,S')\exp\left(\left(\lambda(2\beta_n)-2\lambda(\beta_n)\right)L_{Tn}(S,S')\right)\right]\\
&\leq P_{S,S'}\left[\exp\left(2\left(\lambda(2\beta_n)-2\lambda(\beta_n)\right)L_{Tn}(S,S')\right)\right]\\
\intertext{and}
&Q\left[W_{\beta_n,Tn}(\eta)^2\right]\\
&=Q\left[P_{S,S'}\left[\prod_{i=1}^{Tn}\zeta_{i,S_i}(\beta_n,\eta)\zeta_{i,S_i'}(\beta_n,\eta)\right]\right]\\
&=P_{S,S'}\left[\exp\left(\left(\lambda(2\beta_n)-2\lambda(\beta_n)\right)L_{Tn}(S,S')\right)\right].
\end{align*}
Since \begin{align*}
\frac{\lambda(2\beta_n)-2\lambda(\beta_n)}{\beta_n^2}\to \lambda''(0)=1\\
\intertext{and}
\frac{\lambda(2r\beta_n)-2\lambda(r\beta_n)}{\lambda(2\beta_n)-2\lambda(\beta_n)}\to r^2
\end{align*}
as $n\to \infty$ for $r>0$, there exists a constant $r>0$ such that \begin{align*}
&P_{S,S'}\left[\exp\left(2\left(\lambda(2\beta_n)-2\lambda(\beta_n)\right)L_{Tn}(S,S')\right)\right]\\
&\leq P_{S,S'}\left[\exp\left(\left(\lambda(2r\beta_n)-2\lambda(r\beta_n)\right)L_{Tn}(S,S')\right)\right]\\
&\leq Q\left[W_{r\beta_n,Tn}(\eta)^2\right]
\end{align*}
for any $n$ large enough. The $L^2$-boundedness of $W_{r\beta_n,Tn }(\eta)$ (see Theorem \ref{thmref}) implies that there exist $C_5>0$ and $C_6>0$ such that \begin{align*}
&Q\left[W_{\beta_n,Tn}(\eta)^2\right]\leq C_5\\
\intertext{and}
&Q\left[P_{S,S'}\left[\beta_{n}^2L_{Tn}(S,S')\prod_{i=1}^{Tn}\zeta_{i,S_i}(\beta_n,\eta)\zeta_{i,S_i'}(\beta_n,\eta)\right]\right]\leq C_6.
\end{align*}
We conclude that \begin{align*}
Q(\eta\in A_n)\geq \frac{1}{4C_{5}}-\frac{4C_{6}}{C_4}
\end{align*}
and we obtain (\ref{lbdd}) by taking $C_4>0$ large enough.
\end{proof}
\subsection{Proof of Lemma \ref{maxsup}}
Since the finite dimensional distributions $\displaystyle \left\{W_{\beta_n,Tn}^{x_in^{1/2}}(\eta):1\leq i\leq m\right\}$ for $x_1n^{1/2},\cdots,x_mn^{1/2}\in B_0^n(0)$ converge to
$\displaystyle \left\{\mathcal{Z}_{\sqrt{2}}^{x_i}(T):1\leq i\leq m\right\}$ (see \cite[Section 6.2]{AlbKhaQua}), the tightness of $\{W_{\beta_n,Tn}^{xn^{1/2}}(\eta):x\in [-1,1]\}$ in $C[-1,1]$ and the $L^p$-boundedness of $ \max_{x\in[-1,1]}W_{\beta_n,Tn}^{xn^{1/2}}(\eta)^\theta$ for $p>1$ imply Lemma \ref{maxsup}.
We will use the Garsia-Rodemich-Rumsey lemma \cite[Lemma A.3.1]{Nua} several times in the proof of the limit superior.
\begin{lem}\label{Gar}
Let $\phi:[0,\infty)\to [0,\infty)$ and $\Psi:[0,\infty)\to [0,\infty)$ be continuous and strictly increasing functions satisfying\begin{align*}
\phi(0)=\Psi(0)=0,\ \lim_{t\to \infty}\Psi(t)=\infty.
\end{align*}
Let $f:\mathbb{R}^d\to \mathbb{R}$ be a continuous function. Provided \begin{align*}
\Gamma=\int_{B_r(x)}\int_{B_r(x)}\Psi\left(\frac{|f(t)-f(s)|}{\phi(|t-s|)}\right)dsdt<\infty,
\end{align*}
where $B_r(x)$ is an open ball in $\mathbb{R}^d$ centered at $x$ with radius $r$, then for all $s,t\in B_r(x)$,\begin{align*}
|f(t)-f(s)|\leq 8\int_0^{2|t-s|}\Psi^{-1}\left(\frac{4^{d+1}\Gamma}{\lambda_d u^{2d}}\right)\phi(du),
\end{align*}
where $\lambda_d$ is a universal constant depending only on $d$.
\end{lem}
Applying Lemma \ref{Gar} with $\Psi(x)=|x|^p$ and $\phi(u)=u^q$ for $p\geq 1$, $q>0$, and $pq>2d$, we have that \begin{align}
|f(t)-f(s)|\leq \frac{2^{\frac{2}{p}+q+3}}{\lambda_d^{\frac{1}{p}}\left(q-\frac{2d}{p}\right)}|t-s|^{q-\frac{2d}{p}}\left(\int_{B_1(x)}\int_{B_1(x)}\left(\frac{|f(t)-f(s)|}{|t-s|^{q}}\right)^pdsdt\right)^{\frac{1}{p}}\label{Garpol}
\end{align}
for $t,s\in B_1(x)$.
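For completeness, (\ref{Garpol}) follows by substituting $\Psi^{-1}(v)=v^{1/p}$ and $\phi(du)=qu^{q-1}\,du$ into Lemma \ref{Gar} with $r=1$; assuming additionally $q\leq 1$ (as in the later application, where $q=\frac{2\theta}{3}$):

```latex
8\int_0^{2|t-s|}\left(\frac{4^{d+1}\Gamma}{\lambda_d u^{2d}}\right)^{\frac{1}{p}}qu^{q-1}\,du
=\frac{8q}{q-\frac{2d}{p}}\left(\frac{4^{d+1}}{\lambda_d}\right)^{\frac{1}{p}}
\left(2|t-s|\right)^{q-\frac{2d}{p}}\Gamma^{\frac{1}{p}}
\leq \frac{2^{\frac{2}{p}+q+3}}{\lambda_d^{\frac{1}{p}}\left(q-\frac{2d}{p}\right)}
|t-s|^{q-\frac{2d}{p}}\,\Gamma^{\frac{1}{p}},
```

since $8q\cdot 2^{q-\frac{2d}{p}}\cdot 4^{\frac{d+1}{p}}\leq 2^{\frac{2}{p}+q+3}$ when $q\leq 1$.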
We set \begin{align*}
&W_{\beta,n}^x(\eta)=P_S^x\left[\prod_{k=1}^{n}\zeta_{k,S_k}(\beta,\eta)\right]\\
\intertext{and}
&W_{\beta,n}^{x}(\eta,y)=P_S^x\left[\prod_{k=1}^n\zeta_{k,S_k}(\beta,\eta):S_n=y\right]
\end{align*}
for $x,y\in\mathbb{Z}$.
For $-1\leq u\leq 1$, we define \begin{align*}
f_{n,\theta}(u)=\begin{cases}
\left(W_{\beta_n,Tn}^{un^{1/2}}(\eta)\right)^\theta,\ \ \ \ &un^{1/2}\in B_0^{n}(0)\\
\text{linear interpolation,}\ &\text{otherwise}.
\end{cases}
\end{align*}
Then, we have that \begin{align*}
\max_{x\in B_0^n(0)}\left(W_{\beta_n,Tn}^x(\eta)\right)^\theta\leq \left(W_{\beta_n,Tn}(\eta)\right)^\theta+C_{p,q}B_{p,q,n,\theta},
\end{align*}
where \begin{align*}
B_{p,q,n,\theta}=\left(\int_{-1}^1\int_{-1}^1\left(\frac{|f_{n,\theta}(t)-f_{n,\theta}(s)|}{|t-s|^{q}}\right)^pdsdt\right)^{\frac{1}{p}}.
\end{align*}
We will show that for some $p\geq 1$ and $q>0$ with $pq>2$, there exist $C_{p,T,\theta}>0$ and $\eta_{p,\theta}$ with $\eta_{p,\theta}-pq>-1$ such that \begin{align}
&Q\left[|f_{n,\theta}(t)-f_{n,\theta}(s)|^p\right]\leq C_{p,T,\theta}|t-s|^{\eta_{p,\theta}},\ \ -1\leq s,t\leq 1.\label{contcri}
\end{align}
(\ref{contcri}) gives us the tightness of $\{W_{\beta_n,Tn}^{xn^{1/2}}(\eta):x\in [-1,1]\}$ in $C[-1,1]$ and the $L^p$-boundedness of $ \max_{x\in[-1,1]}W_{\beta_n,Tn}^{xn^{1/2}}(\eta)^\theta$ for $p>1$, and therefore Lemma \ref{maxsup} follows.
\begin{proof}[Proof of (\ref{contcri})]
We remark that \begin{align*}
|f_{n,\theta}(t)-f_{n,\theta}(s)|\leq \left|W_{\beta_n,Tn}^{tn^{1/2}}(\eta)-W_{\beta_n,Tn}^{sn^{1/2}}(\eta)\right|^{\theta},
\end{align*}
where we have used that $(x+y)^\theta \leq x^\theta+y^\theta$ for $x\geq 0$ and $y\geq 0$. First, we will estimate \begin{align*}
Q\left[\left|W_{\beta_n,Tn}^{x}(\eta)-W_{\beta_n,Tn}^{y}(\eta)\right|^2\right]
\end{align*}
for $x,y\in B_0^n(0)$.
When we define i.i.d.\,random variables by\begin{align*}
e_n(k,x)=\exp\left(\beta_n\eta(k,x)-\lambda(\beta_n)\right)-1,\ \ \ (k,x)\in\mathbb{N}\times \mathbb{Z},
\end{align*}
we find that \begin{align*}
Q[e_n(k,x)]=0\ \ \text{and }\ \frac{Q\left[e_n(k,x)^2\right]}{\beta_n^2}=\frac{\exp\left(\lambda(2\beta_n)-2\lambda(\beta_n)\right)-1}{\beta_n^2}\to 1.
\end{align*}
Then, we can write \begin{align*}
W_{\beta_n,Tn}^{x}(\eta)&=P_S^x\left[\prod_{i=1}^{Tn}\left(1+e_n(i,S_i)\right)\right]\\
&=1+\sum_{k=1}^{Tn} \sum_{1\leq i_1<\cdots <i_k\leq Tn}\sum_{\mathbf{x}\in\mathbb{Z}^k}\prod_{j=1}^kp_{i_j-i_{j-1}}(x_j-x_{j-1})e_n(i_j,x_j)\\
&=\sum_{k=0}^{Tn}\Theta^{(k)}(x),
\end{align*}
where $p_n(y)=P_S(S_n=y)$ for $(n,y)\in\mathbb{N}\times \mathbb{Z}$, $i_0=0$, $x_0=x$, $\mathbf{x}=(x_1,\cdots,x_k)$, and
\begin{align*}
\Theta^{(k)}(x)=\begin{cases}
\displaystyle 1,\ \ \ &k=0\\
\displaystyle \sum_{1\leq i_1<\cdots <i_k\leq Tn}\sum_{\mathbf{x}\in\mathbb{Z}^k}\prod_{j=1}^kp_{i_j-i_{j-1}}(x_j-x_{j-1})e_n(i_j,x_j), \ \ &k\geq 1.
\end{cases}
\end{align*}
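The discrete chaos decomposition $W^x=\sum_k\Theta^{(k)}(x)$ is just the expansion of the product $\prod_i(1+e_n(i,S_i))$, and for a tiny system one can check numerically that the chaos sum reproduces the path-enumeration definition. A sketch with hypothetical small parameters:

```python
import math
from itertools import product, combinations

def W_paths(e, N):
    # P_S[ prod_i (1 + e(i, S_i)) ] by enumerating SRW paths from 0
    total = 0.0
    for steps in product([-1, 1], repeat=N):
        s, w = 0, 1.0
        for i, step in enumerate(steps, start=1):
            s += step
            w *= 1.0 + e.get((i, s), 0.0)
        total += w
    return total / 2 ** N

def srw_kernel(n, y, cache={}):
    # p_n(y) = P(S_n = y), memoized via the one-step recursion
    if n == 0:
        return 1.0 if y == 0 else 0.0
    if (n, y) not in cache:
        cache[(n, y)] = 0.5 * (srw_kernel(n - 1, y - 1) + srw_kernel(n - 1, y + 1))
    return cache[(n, y)]

def W_chaos(e, N):
    # 1 + sum over k, times i_1 < ... < i_k and sites x_1, ..., x_k of
    # prod_j p_{i_j - i_{j-1}}(x_j - x_{j-1}) e(i_j, x_j)
    total = 1.0
    sites = sorted(e)  # only sites with nonzero e-values contribute
    for k in range(1, N + 1):
        for combo in combinations(sites, k):
            times = [t for t, _ in combo]
            if times != sorted(set(times)):
                continue  # times must be strictly increasing
            w, (t0, x0) = 1.0, (0, 0)
            for t, x in combo:
                w *= srw_kernel(t - t0, x - x0) * e[(t, x)]
                t0, x0 = t, x
            total += w
    return total
```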
Then, it is easy to see that \begin{align*}
&Q\left[\Theta^{(k)}(x)\right]=0,\ \ k\geq 1\\
\intertext{and}
&Q\left[\Theta^{(k)}(x)\Theta^{(\ell)}(y)\right]=0,\ \ k\not=\ell,\ \ x,y\in\mathbb{Z}.
\end{align*}
Thus, we have that \begin{align*}
&Q\left[\left|W_{\beta_n,Tn}^{x}(\eta)-W_{\beta_n,Tn}^{y}(\eta)\right|^2\right]\\
&=\sum_{k=1}^{Tn}Q\left[\left(\Theta^{(k)}(x)-\Theta^{(k)}(y)\right)^2\right]\\
&=\sum_{k=1}^{Tn}\left(Q[e_{n}(0,0)^2]\right)^k\sum_{1\leq i_1<\cdots <i_k\leq Tn} \sum_{\mathbf{x}\in\mathbb{Z}^k}(p_{i_1}(x_1-x)-p_{i_1}(x_1-y))^2\prod_{j=2}^kp_{i_j-i_{j-1}}(x_j-x_{j-1})^2.
\end{align*}
Since we know that for $k\geq 1$,\begin{align*}
\frac{1}{n^{k/2}}\sum_{1\leq i_1<\cdots <i_k\leq Tn} \sum_{\mathbf{x}\in\mathbb{Z}^k}\prod_{j=1}^kp_{i_j-i_{j-1}}(x_j-x_{j-1})^2\leq \frac{C_7^kT^{k/2}}{\Gamma\left(\frac{k}{2}+1\right)},
\end{align*}
where $\Gamma(s)$ is the Gamma function at $s>0$ \cite[Section 3.4 and Lemma A.1]{AlbKhaQua}, we have that
\begin{align*}
Q\left[\left(\Theta^{(k)}(x)-\Theta^{(k)}(y)\right)^2\right]&\leq Q[e_n(0,0)^2]\frac{C_7^{k-1}T^{\frac{k-1}{2}}}{\Gamma\left(\frac{k-1}{2}+1\right)}\sum_{1\leq i\leq Tn}\sum_{z\in\mathbb{Z}}(p_{i}(z-x)-p_i(z-y))^2\\
&=Q[e_n(0,0)^2]\frac{C_7^{k-1}T^{\frac{k-1}{2}}}{\Gamma\left(\frac{k-1}{2}+1\right)}\sum_{1\leq i\leq Tn}2(p_{2i}(0)-p_{2i}(x-y)).
\end{align*}
Since we know that there exists a constant $c>0$ such that for all $n\geq 1$ and $x\in\mathbb{Z}$, $y\in 2\mathbb{Z}$,\begin{align}
|p_{n}(x+y)-p_{n}(x)|&\leq \frac{c|x|}{{n^{2}}}+2\left(\frac{1}{\sqrt{4\pi n}}\right)\left|\exp\left(-\frac{(x+y)^2}{4n}\right)-\exp\left(-\frac{x^2}{4n}\right)\right|\label{LLT}
\end{align}
(see \cite[Theorem 2.3.6]{LawLim}), we have that
\begin{align}
Q\left[\left(\Theta^{(k)}(x)-\Theta^{(k)}(y)\right)^2\right]&\leq C\left|\frac{x-y}{\sqrt{n}}\right|\frac{C_7^{k-1}T^{\frac{k-1}{2}}}{\Gamma\left(\frac{k-1}{2}+1\right)}\label{theta2}
\end{align}
and
\begin{align*}
Q\left[\left|W_{\beta_n,Tn}^{x}(\eta)-W_{\beta_n,Tn}^{y}(\eta)\right|^2\right]\leq C_{1,T}\left|\frac{x-y}{\sqrt{n}}\right|
\end{align*}
for $x,y\in B_0^n(0)$, where \begin{align*}
C_{1,T}=C\sum_{k\geq 1}\frac{C_7^{k-1}T^{\frac{k-1}{2}}}{\Gamma(\frac{k-1}{2}+1)}
\end{align*}
satisfies \begin{align}
\varlimsup_{T\to\infty} \frac{1}{T}\log C_{1,T}\leq C_{8}.\label{C3}
\end{align}
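The local limit theorem behind (\ref{LLT}) can be checked numerically: for the simple random walk, $p_n(x)=\binom{n}{(n+x)/2}2^{-n}$ (zero unless $x$ and $n$ share parity) is close to twice the $N(0,n)$ density, the factor $2$ accounting for periodicity. This standard fact is stated here only as a sanity check:

```python
import math

def p_exact(n, x):
    # P(S_n = x) for SRW: binomial; zero unless x and n have the same parity
    if (n + x) % 2 or abs(x) > n:
        return 0.0
    return math.comb(n, (n + x) // 2) / 2 ** n

def p_gauss(n, x):
    # local CLT approximation: 2 * density of N(0, n) at x (parity factor 2)
    return 2.0 / math.sqrt(2 * math.pi * n) * math.exp(-x * x / (2 * n))
```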
Now, we would like to estimate
\begin{align*}
Q\left[\left|W_{\beta_n,Tn}^{tn^{1/2}}(\eta)-W_{\beta_n,Tn}^{sn^{1/2}}(\eta)\right|^p\right]
\end{align*}
for $p\geq 2$ and $s,t\in [-1,1]$ with $sn^{1/2}, tn^{1/2}\in B_0^n(0)$.
The hypercontractivity established in \cite[Proposition 3.11, Proposition 3.12, and Proposition 3.17]{MosOdoOle} allows us to estimate this quantity.
Indeed, \begin{align*}
&Q\left[\left|W_{\beta_n,Tn}^{tn^{1/2}}(\eta)-W_{\beta_n,Tn}^{sn^{1/2}}(\eta)\right|^p\right]^{1/p}\\
&=Q\left[\left|\sum_{k=1}^{Tn}\left(\Theta^{(k)}(tn^{1/2})-\Theta^{(k)}(sn^{1/2})\right)\right|^p\right]^{1/p}\\
&\leq \sum_{k=1}^{Tn}Q\left[\left|\Theta^{(k)}(tn^{1/2})-\Theta^{(k)}(sn^{1/2})\right|^p\right]^{1/p}\\
&\leq \sum_{k=1}^{Tn}\kappa_p^k\,Q\left[\left(\Theta^{(k)}(tn^{1/2})-\Theta^{(k)}(sn^{1/2})\right)^2\right]^{1/2},
\end{align*}
where $\displaystyle \kappa_p=2\sqrt{p-1}\sup_{n\geq 1}\frac{Q[e_n(0,0)^p]^{1/p}}{Q[e_n(0,0)^2]^{1/2}}<\infty$. The constant $\kappa_p$ is finite since \begin{align*}
\lim_{n\to \infty}\frac{1}{\beta_n}Q\left[|e_n(0,0)|^p\right]^{1/p}=Q\left[|\eta(0,0)|^p\right]^{1/p}.
\end{align*}
We obtain from (\ref{theta2}) that \begin{align*}
Q\left[\left|W_{\beta_n,Tn}^{tn^{1/2}}(\eta)-W_{\beta_n,Tn}^{sn^{1/2}}(\eta)\right|^p\right]&\leq C|t-s|^{\frac{p}{2}}\left(\sum_{k\geq 1}\kappa_p^k\left(\frac{C_7^{k-1}T^{\frac{k-1}{2}}}{\Gamma(\frac{k-1}{2}+1)}\right)^{1/2}\right)^p\\
&\leq C_{p,T}|t-s|^{\frac{p}{2}}.
\end{align*}
Thus, we find that for $p\geq \frac{2}{\theta}$, (\ref{contcri}) holds with \begin{align*}
\eta_{p,\theta}=\frac{p\theta}{2}.
\end{align*}
Therefore, the proof is completed when we take $p=\displaystyle \frac{5}{\theta}$ and $q=\displaystyle \frac{2\theta}{3}$.
\end{proof}
\subsection{Proof of Lemma \ref{partition1}}
The idea is the same as in the proof of Lemma \ref{maxsup}.
\begin{proof}[Proof of Lemma \ref{partition1}] We set \begin{align*}
&W_{\beta,n}^{x}(\eta,A)=\sum_{y\in A}P_S^x\left[\prod_{k=1}^n\zeta_{k,S_k}(\beta,\eta):S_n=y\right]
\end{align*}
for $A\subset \mathbb{Z}$.
Then, we know that \begin{align*}
&\left|\left(W_{\beta_n,Tn}^x(\eta,B_z^n)\right)^\theta-\left(W_{\beta_n,Tn}^y(\eta,B_z^n)\right)^\theta\right|\\
&\leq \left|W_{\beta_n,Tn}^x(\eta,B_z^n)-W_{\beta_n,Tn}^y(\eta,B_z^n)\right|^\theta.
\end{align*}
By the same argument as in the proof of Lemma \ref{maxsup}, we have that \begin{align*}
&Q\left[\max_{x\in B_0^n(0)}\left(\sum_{w\in B_z^n}W_{\beta_n,Tn}^x(\eta,w)\right)^\theta\right]\\
&\leq Q\left[\left(W_{\beta_n,Tn}(\eta,B_z^n)\right)^\theta\right]+C_{p,q}B_{p,q,n,\theta,z,T}\\
&\leq \left(\sum_{y\in B_z^n}p_{Tn}(y)\right)^{\theta}+C_{p,q}B_{p,q,n,\theta,z,T},
\end{align*}
where \begin{align*}
B^p_{p,q,n,\theta,z,T}=\int_{-1}^1\int_{-1}^1 \frac{Q\left[\left|W_{\beta_n,Tn}^{tn^{1/2}}(\eta,B_z^n)-W_{\beta_n,Tn}^{sn^{1/2}}(\eta,B_z^n)\right|^{p\theta}\right]}{|t-s|^{pq}}dsdt.
\end{align*}
We write \begin{align*}
&W_{\beta_n,Tn}^{x}(\eta,w)\\
&=P_{S}^x\left[\prod_{i=1}^{Tn}(1+e_n({i,S_i})):S_{Tn}=w\right]\\
&=p_{Tn}(w-x)\\
&\ \ +\sum_{k=1}^{Tn}\sum_{1\leq i_1<\cdots<i_k\leq Tn}\sum_{{\bf x}\in \mathbb{Z}^k}\left(\prod_{j=1}^k p_{i_j-i_{j-1}}(x_j-x_{j-1})e_n(i_{j},x_j)\right)p_{Tn -i_{k}}(w-x_k)\\
&=\sum_{k=0}^{Tn}\Theta^{(k)}(x,w),
\end{align*}
where \begin{align*}
&\Theta^{(k)}(x,w)\\
&=\begin{cases}
\displaystyle p_{Tn}(w-x),\ \ &k=0\\
\displaystyle \sum_{1\leq i_1<\cdots<i_k\leq Tn}\sum_{{\bf x}\in \mathbb{Z}^k}\left(\prod_{j=1}^k p_{i_j-i_{j-1}}(x_j-x_{j-1})e_n(i_{j},x_j)\right)p_{Tn -i_{k}}(w-x_k),\ \ &k\geq 1.
\end{cases}
\end{align*}
Then, we have that \begin{align*}
&Q\left[\Theta^{(k)}(x,w)\right]=0,\ \ \ k\geq 1\\
&Q\left[\Theta^{(k)}(x,y)\Theta^{(\ell)}(z,w)\right]=0,\ \ k\not=\ell.
\end{align*}
Hence,
\begin{align*}
Q\left[\left(\sum_{w\in B_z^n}(\Theta^{(0)}(x,w)-\Theta^{(0)}(y,w))\right)^2\right]& =\left(\sum_{w\in B_z^n}\left(p_{Tn}(w-x)-p_{Tn}(w-y)\right)\right)^2.
\end{align*}
\end{align*}
(\ref{LLT}) implies that \begin{align*}
Q\left[\left(\sum_{w\in B_z^n}(\Theta^{(0)}(tn^{1/2},w)-\Theta^{(0)}(sn^{1/2},w))\right)^2\right]& \leq C_{2,T}|t-s|,\ \ \ t,s\in[-1,1],
\end{align*}
where $C_{2,T}\to 0 $ as $T\to \infty$.
Also, we have that for $k\geq 1$\begin{align*}
&Q\left[\left(\sum_{w\in B_z^n}(\Theta^{(k)}(x,w)-\Theta^{(k)}(y,w))\right)^2\right]\\
&=Q[e_{n}(0,0)^2]^k\sum_{1\leq i_1<\cdots<i_k\leq Tn}\sum_{{\bf x}\in\mathbb{Z}^k}(p_{i_1}(x_1-x)-p_{i_1}(x_1-y))^2\left(\prod_{j=2}^kp_{i_{j}-i_{j-1}}(x_j-x_{j-1})^2\right)\left(\sum_{w\in B_{z}^n}p_{Tn-i_k}(w-x_{k})\right)^2\\
&\leq Q[e_{n}(0,0)^2]^k\sum_{1\leq i_1<\cdots<i_k\leq Tn}\sum_{{\bf x}\in\mathbb{Z}^k}(p_{i_1}(x_1-x)-p_{i_1}(x_1-y))^2\prod_{j=2}^kp_{i_{j}-i_{j-1}}(x_j-x_{j-1})^2\\
&\leq C\frac{|x-y|}{n^{1/2}}\frac{C_7^{k-1}T^{\frac{k-1}{2}}}{\Gamma\left(\frac{k-1}{2}+1\right)}
\end{align*}
as in the proof of Lemma \ref{maxsup}.
By H\"older's inequality, we obtain that for $p=\frac{5}{\theta}$,\begin{align*}
&Q\left[\left|W_{\beta_n,Tn}^{tn^{1/2}}(\eta,B_z^n)-W_{\beta_n,Tn}^{sn^{1/2}}(\eta,B_z^n)\right|^{p\theta}\right]\\
&\leq Q\left[\left|W_{\beta_n,Tn}^{tn^{1/2}}(\eta,B_z^n)-W_{\beta_n,Tn}^{sn^{1/2}}(\eta,B_z^n)\right|^{\frac{9}{2}}\left|W_{\beta_n,Tn}^{tn^{1/2}}(\eta,B_z^n)+W_{\beta_n,Tn}^{sn^{1/2}}(\eta,B_z^n)\right|^{\frac{1}{2}}\right]\\
&\leq Q\left[\left|W_{\beta_n,Tn}^{tn^{1/2}}(\eta,B_z^n)-W_{\beta_n,Tn}^{sn^{1/2}}(\eta,B_z^n)\right|^{9}\right]^{\frac{1}{2}}Q\left[W_{\beta_n,Tn}^{tn^{1/2}}(\eta,B_z^n)+W_{\beta_n,Tn}^{sn^{1/2}}(\eta,B_z^n)\right]^{\frac{1}{2}}\\
&\leq C_{3,T}|t-s|^{9/4}\left(\sum_{w\in B_z^n}(p_{Tn}(w-tn^{1/2})+p_{Tn}(w-sn^{1/2}))\right)^\frac{1}{2},
\end{align*}
where we have used the hypercontractivity as in the proof of Lemma \ref{maxsup}, $C_{3,T}$ is independent of the choice of $z$, and \begin{align*}
\varlimsup_{T\to\infty}\frac{1}{T}\log C_{3,T}\leq C<\infty.
\end{align*}
Also, we know that
\begin{align*}
\sum_{w\in B_z^n}p_{Tn}(w-x)\leq C\exp\left(-\frac{z^2}{T}\right)
\end{align*}
for $x\in B_0^n(0)$. Thus, if we take $I^{(\theta)}(T)\subset\mathbb{Z}$ with $\sharp I^{(\theta)}(T)\asymp T^2$, $p=\frac{5}{\theta}$, and $q=\frac{\theta}{2}$, then there exist $C_1>0$ and $C_2>0$ such that \begin{align*}
\sum_{z\in I^{(\theta)}(T)^c} \left(\left(\sum_{y\in B_z^n}p_{Tn}(y)\right)^{\theta}+C_{p,q}B_{p,q,n,\theta,z,T}\right)\leq C_1\exp\left(-C_2T^2\right).
\end{align*}
\end{proof}
\section{Continuum directed polymers}\label{4}
To prove Lemma \ref{freeenecdp} and Lemma \ref{ttheta}, we recall some properties of continuum directed polymers.
\subsection{Continuum directed polymers}
The mild solution to the stochastic heat equation \begin{align*}
\partial_t \mathcal{Z}=\frac{1}{2}\Delta \mathcal{Z}+\beta \mathcal{Z} \dot{\mathcal{W}},\ \ \lim_{t\searrow 0}\mathcal{Z}(t,y)=\delta_x(y)
\end{align*}
has the following representation via its Wiener chaos expansion:\begin{align*}
\mathcal{Z}^x_\beta(T,w)&=\rho_T(x,w)\\
&+\sum_{n\geq 1}\beta^n\int_{\Delta_n(T)}\int_{\mathbb{R}^n}\left(\prod_{i=1}^n\rho_{t_i-t_{i-1}}(x_i-x_{i-1})\right)\rho_{T-t_n}(w-x_n)\mathcal{W}(dt_1dx_1)\cdots \mathcal{W}(dt_ndx_n),
\end{align*}
where we set $x_0=x$ and $t_0=0$,\begin{align*}
\rho_t(x,w)=\rho_t(x-w)=\frac{1}{\sqrt{2\pi t}}\exp\left(-\frac{(x-w)^2}{2t}\right),\ \ \ t>0,\ x,w\in\mathbb{R},
\end{align*}
and \begin{align*}
\Delta_n(T)=\{(t_1,\cdots,t_n):0< t_1< \cdots< t_n\leq T\}.
\end{align*}
Also, we define the four-parameter field by \begin{align*}
\mathcal{Z}_\beta(s,x;t,y)&=\rho_{t-s}(x,y)\\
&\hspace{-3em}+\sum_{n\geq 1}\beta^n\int_{\Delta_n(s,t)}\int_{\mathbb{R}^n}\left(\prod_{i=1}^n\rho_{t_i-t_{i-1}}(x_i-x_{i-1})\right)\rho_{t-t_n}(y-x_n)\mathcal{W}(dt_1dx_1)\cdots \mathcal{W}(dt_ndx_n),
\end{align*}
for $ 0\leq s<t<\infty,\ \ x,y\in\mathbb{R}$, where we set $t_0=s$, $x_0=x$, and \begin{align*}
\Delta_n(s,t)=\{(t_1,\cdots,t_n):s< t_1< \cdots< t_n\leq t\}.
\end{align*}
Also, we define \begin{align*}
\mathcal{Z}_{\beta}^{(s,x)}(t)=\int_{\mathbb{R}}\mathcal{Z}_{\beta}(s,x;t,y)dy, \ \ \text{for }0\leq s<t<\infty,\ \ x\in\mathbb{R}.
\end{align*}
Then, we have the following fact \cite[Theorem 3.1]{AlbKhaQua2}:
\begin{thm}\label{cdpprop}There exists a version of the field $\mathcal{Z}_\beta(s,x;t,y)$ which is jointly continuous in all four variables and has the following properties:
\begin{enumerate}[(i)]
\item $P_\mathcal{Z}\left[\mathcal{Z}_\beta(s,x;t,y)\right]=\rho_{t-s}(y-x)$.
\item ({\bf Stationarity}): $\displaystyle \mathcal{Z}_\beta(s,x;t,y)\stackrel{d}{=}\mathcal{Z}_\beta(s+u_0,x+z_0;t+u_0,y+z_0)$ for any $u_0\geq 0$ and $z_0\in\mathbb{R}$.
\item ({\bf Scaling}): $\displaystyle \mathcal{Z}_\beta(r^2s,rx;r^2t,ry)\stackrel{d}{=}\frac{1}{r}\mathcal{Z}_{\beta\sqrt{r}}(s,x;t,y)$.
\item ({\bf Positivity}): With probability one, $\mathcal{Z}_\beta(s,x;t,y)$ is strictly positive for all tuples $(s,x;t,y)$ with $0\leq s<t$.
\item The law of $\displaystyle \frac{\mathcal{Z}_\beta(s,x;t,y)}{\rho_{t-s}(y-x)}$ does not depend on $x$ or $y$.
\item ({\bf Independence}): For any finite collection of disjoint time intervals $\{(s_i,t_i]\}_{i=1}^n$ and any $x_i,y_i\in\mathbb{R}$, the random variables $\{\mathcal{Z}_\beta(s_i,x_i;t_i,y_i)\}_{i=1}^n$ are mutually independent.
\item ({\bf Chapman--Kolmogorov equations}): With probability one, for all $0\leq s<r<t$ and $x,y\in\mathbb{R}$,\begin{align*}
\mathcal{Z}_\beta({s,x;t,y})=\int_{\mathbb{R}}\mathcal{Z}_\beta(s,x;r,z)\mathcal{Z}_\beta(r,z;t,y)dz.
\end{align*}
\end{enumerate}
\end{thm}
The following is a corollary of \cite[Theorem 1.1]{AmiCorQua}.
\begin{thm}\label{limitcdp}
$\displaystyle \frac{1}{T}\log \mathcal{Z}_1(T,0)$ converges to $\displaystyle -\frac{1}{4!}$ in probability as $T\to \infty$.
\end{thm}
Also, the following result was obtained by Moreno \cite{Mor}:
\begin{cor}\label{cor}
For any $\beta\geq 0$ and $p\geq 1$, $\left(\mathcal{Z}_{\beta}(t)\right)^{-1}\in L^p$.
\end{cor}
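In view of the scaling property (iii) of Theorem \ref{cdpprop}, Theorem \ref{limitcdp} also determines the limiting constant at $\beta=\sqrt{2}$; the following computation is a sketch of this reduction (taking $r=2$ in (iii)):
\begin{align*}
\mathcal{Z}_{\sqrt{2}}(T,0)=\mathcal{Z}_{\sqrt{2}}(0,0;T,0)\stackrel{d}{=}2\mathcal{Z}_{1}(0,0;4T,0)=2\mathcal{Z}_{1}(4T,0),
\end{align*}
so that
\begin{align*}
\frac{1}{T}\log \mathcal{Z}_{\sqrt{2}}(T,0)\stackrel{d}{=}\frac{\log 2}{T}+4\cdot\frac{1}{4T}\log \mathcal{Z}_{1}(4T,0)\longrightarrow 4\cdot\left(-\frac{1}{4!}\right)=-\frac{1}{6}
\end{align*}
in probability as $T\to\infty$. This explains the constant $-\frac{1}{6}$ appearing below.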
\subsection{Proof of Lemma \ref{ttheta}}
We first show a weaker statement:
\begin{lem}\label{wttheta}
We have that \begin{align*}
\varlimsup_{\theta\to 0}\varlimsup_{T\to\infty}\frac{1}{T\theta}\log P_\mathcal{Z}\left[\left(\mathcal{Z}_{\sqrt{2}}(T)\right)^\theta\right]\leq F_\mathcal{Z}(\sqrt{2}).
\end{align*}
\end{lem}
\begin{proof} We will show that there exists a $K>0$ such that \begin{align}
P_\mathcal{Z}\left[\exp\left(\theta\left(\log \mathcal{Z}_{\sqrt{2}}(T)-P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T)\right]\right)\right)\right]\leq \exp\left(\frac{T\theta^2K}{1-|\theta|}\right)\label{thetaz}
\end{align}
for $|\theta|\in(0,1)$.
For fixed $T\in\mathbb{N}$, we define the $\sigma$-fields\begin{align*}
&\mathcal{F}_i(T)=\sigma\left[\mathcal{W}(t,x):0\leq t\leq i,x\in\mathbb{R}\right]\\
&\tilde{\mathcal{F}}_i(T)=\sigma\left[\mathcal{W}(t,x): t\notin \left[i-1,i\right],x\in\mathbb{R}\right].
\end{align*}
Then, we write \begin{align*}
\log \mathcal{Z}_{\sqrt{2}}(T)-P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T)\right]=\sum_{i=1}^{T}V_{i}^T,
\end{align*}
where \begin{align*}
V_i^T=P_\mathcal{Z}\left[\left.\log \mathcal{Z}_{\sqrt{2}}(T)\right|\mathcal{F}_i(T)\right]-P_\mathcal{Z}\left[\left.\log \mathcal{Z}_{\sqrt{2}}(T)\right|\mathcal{F}_{i-1}(T)\right]
\end{align*}
are martingale differences.
Here, we introduce new random variables \begin{align*}
\hat{\mathcal{Z}}_{\sqrt{2}}(i,T)&=P_\mathcal{Z}\left[\left.\mathcal{Z}_{\sqrt{2}}(T)\right|\tilde{\mathcal{F}}_i(T)\right]\\
&=\int_{\mathbb{R}^2}\mathcal{Z}_{\sqrt{2}}\left(i-1,x\right)\rho_{1}(x,y)\mathcal{Z}_{\sqrt{2}}^{(i,y)}\left(T\right)dxdy.
\end{align*}
Since it is clear that \begin{align*}
P_\mathcal{Z}\left[\left.\log \hat{\mathcal{Z}}_{\sqrt{2}}(i,T)\right|\mathcal{F}_{i-1}(T)\right]=P_\mathcal{Z}\left[\left.\log \hat{\mathcal{Z}}_{\sqrt{2}}(i,T)\right|\mathcal{F}_{i}(T)\right],
\end{align*}
we have \begin{align*}
V_i^T=P_\mathcal{Z}\left[\left.\log \frac{\mathcal{Z}_{\sqrt{2}}(T)}{\hat{\mathcal{Z}}_{\sqrt{2}}(i,T)}\right|\mathcal{F}_{i}(T)\right]-P_\mathcal{Z}\left[\left.\log \frac{\mathcal{Z}_{\sqrt{2}}(T)}{\hat{\mathcal{Z}}_{\sqrt{2}}(i,T)}\right|\mathcal{F}_{i-1}(T)\right].
\end{align*}
Also, we consider a new probability measure on $\mathbb{R}^2$ given by \begin{align*}
&\mu^{(i)}_T\left(x,y\right)dxdy\\
&=\frac{1}{\hat{\mathcal{Z}}_{\sqrt{2}}(i,T)}\mathcal{Z}_{\sqrt{2}}\left(i-1,x\right)\rho_{1}(x,y)\mathcal{Z}_{\sqrt{2}}^{(i,y)}\left(T\right)dxdy.
\end{align*}
Then, it is clear that \begin{align*}
\frac{\mathcal{Z}_{\sqrt{2}}(T)}{\hat{\mathcal{Z}}_{\sqrt{2}}(i,T)}=\int_{\mathbb{R}^2}\frac{\mathcal{Z}_{\sqrt{2}}\left(i-1,x;i,y\right)}{\rho_{1}(x,y)} \mu_T^{(i)}(x,y)dxdy,
\end{align*}
and Jensen's inequality implies from Theorem \ref{cdpprop} (ii) and (iv) that \begin{align*}
0\leq -P_\mathcal{Z}\left[\left.\log \frac{\mathcal{Z}_{\sqrt{2}}(T)}{\hat{\mathcal{Z}}_{\sqrt{2}}(i,T)}\right|\mathcal{F}_{i-1}(T)\right]&\leq -P_\mathcal{Z}\left[\log \frac{\mathcal{Z}_{\sqrt{2}}\left(0,0;1,0\right)}{\rho_{1}(0)}\right]\\
&\leq C_9,
\end{align*}
where we have used that \begin{align*}
-P_\mathcal{Z}\left[\log \frac{\mathcal{Z}_{\sqrt{2}}\left(0,0;1,0\right)}{\rho_{1}(0)}\right]\leq C_9
\end{align*}
(see Corollary \ref{cor}).
Thus, we have from Jensen's inequality that \begin{align*}
P_\mathcal{Z}\left[\left.\exp\left(V_{i}^T\right)\right|\mathcal{F}_{i-1}(T)\right]\leq e^{C_9} P_\mathcal{Z}\left[\left.P_\mathcal{Z}\left[\left.\frac{\mathcal{Z}_{\sqrt{2}}(T)}{\hat{\mathcal{Z}}_{\sqrt{2}}{(i,T)}}\right|\tilde{\mathcal{F}}_{i}(T)\right]\right|\mathcal{F}_{i-1}(T)\right]=e^{C_9}.
\end{align*}
Also, Jensen's inequality implies that \begin{align*}
&P_\mathcal{Z}\left[\left.\exp\left(-V_{i}^T\right)\right|\mathcal{F}_{i-1}(T)\right]\leq P_\mathcal{Z}\left[\left.P_\mathcal{Z}\left[\left.\frac{\hat{\mathcal{Z}}_{\sqrt{2}}(i,T)}{{\mathcal{Z}}_{\sqrt{2}}{(T)}}\right|\tilde{\mathcal{F}}_{i}(T)\right]\right|\mathcal{F}_{i-1}(T)\right]\\
&\leq P_\mathcal{Z}\left[\left.P_\mathcal{Z}\left[\left.\int_{\mathbb{R}^2}\left(\frac{\mathcal{Z}_{\sqrt{2}}\left(i-1,x;i,y\right)}{\rho_{1}(x,y)}\right)^{-1}\mu^{(i)}_T(x,y)dxdy\right|\tilde{\mathcal{F}}_{i}(T)\right]\right|\mathcal{F}_{i-1}(T)\right]\leq C_{10},
\end{align*}
where we have used that \begin{align*}
P_\mathcal{Z}\left[\left(\frac{\mathcal{Z}_{\sqrt{2}}\left(0,x;1,y\right)}{\rho_{1}(x,y)}\right)^{-1}\right]\leq C_{10}.
\end{align*}
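For the reader's convenience, we sketch how these conditional estimates enter \cite[Theorem 2.1]{LiuWat}: iterating the tower property gives
\begin{align*}
P_\mathcal{Z}\left[\exp\left(\sum_{i=1}^{n}V_i^T\right)\right]=P_\mathcal{Z}\left[\exp\left(\sum_{i=1}^{n-1}V_i^T\right)P_\mathcal{Z}\left[\left.\exp\left(V_n^T\right)\right|\mathcal{F}_{n-1}(T)\right]\right]\leq e^{K}\,P_\mathcal{Z}\left[\exp\left(\sum_{i=1}^{n-1}V_i^T\right)\right],
\end{align*}
where $e^{K}$ denotes the conditional bound obtained above; by induction the exponential moments of $\pm\sum_i V_i^T$ grow at most exponentially in the number of increments, and interpolating in $|\theta|<1$ is exactly what produces a bound of the form (\ref{thetaz}).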
Thus, we have verified the conditions of \cite[Theorem 2.1]{LiuWat}, and hence we have proved (\ref{thetaz}).
\end{proof}
The above proof remains valid when we replace $\mathcal{Z}_{\sqrt{2}}(T)$ by $\mathcal{Z}_{\sqrt{2}}(T,0)$. Therefore, we have the following corollary of (\ref{thetaz}).
\begin{cor}\label{L1con}
We have
\begin{align*}
&\lim_{T\to \infty}\frac{1}{T}P_\mathcal{Z}\left[\left|\log \mathcal{Z}_{\sqrt{2}}(T)-P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T)\right]\right|\right]=0
\intertext{and}
&\lim_{T\to \infty}\frac{1}{T}P_\mathcal{Z}\left[\left|\log \mathcal{Z}_{\sqrt{2}}(T,0)-P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T,0)\right]\right|\right]=0.
\end{align*}
In particular, we have \begin{align*}
\lim_{T\to \infty}\frac{1}{T}P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T,0)\right]=-\frac{1}{6}.
\end{align*}
\end{cor}
\begin{proof}[Proof of Lemma \ref{ttheta}]
The proof is similar to the proofs of Lemma \ref{maxsup} and Lemma \ref{partition1}.
Also, we will often use the equations in the Appendix to compute integrals of functions of heat kernels.
We write \begin{align*}
&\mathcal{Z}_{\sqrt{2}}^x(T)=\int_{\mathbb{R}}\mathcal{Z}_{\sqrt{2}}^x(1,w)\mathcal{Z}_{\sqrt{2}}^{(1,w)}(T)dw\\
&=\int_{A(T)}\mathcal{Z}_{\sqrt{2}}^x(1,w)\mathcal{Z}_{\sqrt{2}}^{(1,w)}(T)dw\\
&+\int_{A(T)^c}\mathcal{Z}_{\sqrt{2}}^x(1,w)\mathcal{Z}_{\sqrt{2}}^{(1,w)}(T)dw\\
&=:I_1(T,x)+I_2(T,x),
\end{align*}
where $A(T)=[-a(T),a(T)]$ is a segment whose length is of order $T^3$. Hereafter, we will estimate $I_1(T,x)$ and $I_2(T,x)$.
We will show in the lemmas below that \begin{align*}
&\varlimsup_{\theta\to 0}\varlimsup_{T\to \infty}\frac{1}{\theta T}\log P_\mathcal{Z}\left[\sup_{x\in [-1,1]}I_1(T,x)^\theta\right]\leq \lim_{T\to \infty}\frac{1}{T}P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T)\right]
\intertext{and}
&\varlimsup_{T\to \infty}\frac{1}{ T}\log P_\mathcal{Z}\left[\sup_{x\in [-1,1]}I_2(T,x)^\theta\right]= -\infty.
\end{align*}
Combining these completes the proof.
\end{proof}
\begin{lem}\label{zc}
We have that
\begin{align*}
\varlimsup_{\theta\to 0}\varlimsup_{T\to \infty}\frac{1}{\theta T}\log P_\mathcal{Z}\left[\sup_{x\in [-1,1]}I_1(T,x)^\theta\right]\leq \lim_{T\to \infty}\frac{1}{T}P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T)\right].
\end{align*}
\end{lem}
\begin{lem}\label{zt}
We have that for any $\theta \in (0,1)$\begin{align*}
\varlimsup_{T\to \infty}\frac{1}{ T}\log P_\mathcal{Z}\left[\sup_{x\in [-1,1]}I_2(T,x)^\theta\right]= -\infty.
\end{align*}
\end{lem}
\begin{proof}[Proof of Lemma \ref{zt}]
It is easy to see from Theorem \ref{cdpprop} (i) that \begin{align*}
\varlimsup_{T\to \infty}\frac{1}{T}\log P_\mathcal{Z}\left[ I_2(T,0)^\theta\right]\leq \varlimsup_{T\to \infty}\frac{\theta}{T}\log \int_{A(T)^c}\rho_1(0,w)dw=-\infty.
\end{align*}
Thus, it is enough to show that \begin{align*}
\varlimsup_{T\to \infty}\frac{1}{T}\log P_\mathcal{Z}\left[ \sup_{x,y\in [-1,1]}\left|I_2(T,x)^\theta-I_2(T,y)^\theta\right|\right]=-\infty.
\end{align*}
Applying (\ref{Garpol}) to the continuous function $I_2(T,y)^\theta$ with $d=1$, $x=0$, we obtain \begin{align*}
&P_\mathcal{Z}\left[\sup_{y\in[-1,1]}\left|I_2(T,y)^\theta-I_2(T,0)^\theta\right|\right]\\
&\leq C_{p,q}\left(\int_{-1}^1\int_{-1}^1\frac{P_\mathcal{Z}\left[\left|I_2(T,s)-I_2(T,t)\right|^{\theta p}\right]}{|t-s|^{pq}}dsdt\right)^{\frac{1}{p}}
\end{align*}
for some $p>1$, $q>0$ with $pq>2$.
Thus, we will show that for $\theta \in (0,1)$, there exist $p\geq 1$ and $q>0$ with $pq>2$ such that
\begin{align}
\varlimsup_{T\to \infty}\frac{1}{T}\log \left(\int_{-1}^1\int_{-1}^1\frac{P_\mathcal{Z}\left[\left|I_2(T,s)-I_2(T,t)\right|^{\theta p}\right]}{|t-s|^{pq}}dsdt\right)^{\frac{1}{p}}=-\infty.\label{i2t}
\end{align}
We remark that $I_2(T,x)$ has the following Wiener chaos representation:
\begin{align*}
I_2(T,x)&=\int_{A(T)^c}\rho_1(w-x)dw\\
&+\sum_{k\geq 1}2^{\frac{k}{2}}\int_{\Delta_{k}(T)}\int_{\mathbb{R}^k}\rho^{(k,T)}(x;{\bf t},{\bf x})\mathcal{W}(dt_1,dx_1)\cdots \mathcal{W}(dt_k,dx_k)\\
&=\sum_{k\geq 0} 2^{\frac{k}{2}}J^{(k)}(T,x),
\end{align*}
where \begin{align*}
&\rho^{(1,T)}(x;t,x_1)\\
&=\begin{cases}
\displaystyle \int_{A(T)^c}\rho_1(x,w)\rho_{t-1}(w,x_1)dw,\ \ \ \ &\text{for }1\leq t\leq T\\
\displaystyle \rho_{t}(x,x_1)\int_{A(T)^c}\rho_{1-t}(x_{1},w)dw,\ \ \ \ &\text{for }0<t\leq 1,
\end{cases}
\end{align*}
and
\begin{align*}
&\rho^{(k,T)}(x;{\bf t},{\bf x})\\
&=\begin{cases}
\displaystyle \int_{A(T)^c}\rho_1(x,w)\rho_{t_1-1}(w,x_1)\prod_{i=2}^{k}\rho_{t_i-t_{i-1}}(x_{i-1},x_i)dw,\ \ \\
\hspace{17em}\text{for }1\leq t_1<\cdots<t_k\leq T\\
\displaystyle \rho_{t_1}(x,x_1)\prod_{\begin{smallmatrix}i=2,\\ i\neq\ell+1\end{smallmatrix}}^{k}\rho_{t_i-t_{i-1}}(x_{i-1},x_i)\int_{A(T)^c}\rho_{1-t_{\ell}}(x_\ell,w)\rho_{t_{\ell+1}-1}(w,x_{\ell+1})dw,\ \ \\
\hspace{12em} \text{for }0<t_1<\cdots<t_\ell\leq 1<t_{\ell+1}<\cdots<t_k\leq T.
\end{cases}
\end{align*}
We will estimate \begin{align*}
P_\mathcal{Z}\left[|J^{(k)}(T,x)-J^{(k)}(T,y)|^2\right]
\end{align*}
for $k\geq 0$.
It is easy to see that \begin{align*}
|J^{(0)}(T,x)-J^{(0)}(T,y)|&\leq \int_{a(T)}^\infty \left|\rho_1(x-w)-\rho_1(y-w)\right|dw\\
&+\int^{-a(T)}_{-\infty} \left|\rho_1(x-w)-\rho_1(y-w)\right|dw\\
&\leq |x-y|\int_{a(T)-1}^\infty \frac{4}{\sqrt{2\pi}}w\exp\left(-\frac{w^2}{2}\right)dw\\
&=\frac{4}{\sqrt{2\pi}}|x-y|\exp\left(-\frac{(a(T)-1)^2}{2}\right).
\end{align*}
Also, we have \begin{align}
&P_\mathcal{Z}\left[|J^{(1)}(T,x)-J^{(1)}(T,y)|^2\right]\notag\\
&=\int_0^T\int_{\mathbb{R}}\left(\rho^{(1,T)}(x;t,x_1)-\rho^{(1,T)}(y;t,x_1)\right)^2dtdx_1\notag\\
&\stackrel{(\ref{hconv})}{=}\int_1^Tdt\iint_{(A(T)^c)^2}(\rho_1(x,w)-\rho_1(y,w))(\rho_1(x,w')-\rho_1(y,w'))\rho_{2(t-1)}(w,w')dwdw'\notag\\
&+\int_0^1dt\int_\mathbb{R}dx_1(\rho_t(x,x_1)-\rho_t(y,x_1))^2\iint_{(A(T)^c)^2}\rho_{1-t}(x_1,w)\rho_{1-t}(x_1,w')dwdw'\notag\\
&=: M^{(1)}(T)+M^{(2)}(T)\notag\\
&\stackrel{\textrm{H\"older}}{\leq} \int_1^Tdt\left(\int_{A(T)^c}dw(\rho_1(x,w)-\rho_1(y,w))^2\int_{\mathbb{R}}dw'\rho_{2(t-1)}(w,w')^2\right)\notag\\
&+\int_0^1dt\int_\mathbb{R}dx_1\int_{A(T)^c}dw(\rho_t(x,x_1)-\rho_t(y,x_1))^2\rho_{1-t}(x_1,w)^2\notag\\
&\stackrel{\text{H\"older}, (\ref{hprod}), (\ref{hconv})}{\leq} \int_{1}^{T}dt\frac{1}{2\sqrt{2\pi(t-1) }}\left(\int_{\mathbb{R}}dw(\rho_1(x,w)-\rho_1(y,w))^2\right)^{1/2}\notag\\
&\hspace{6em}\times \left(\int_{A(T)^c}2\rho_1(x,w)^2dw+\int_{A(T)^c}2\rho_1(y,w)^2dw\right)^{1/2}\notag\\
&+\int_0^1\frac{dt}{2\sqrt{\pi (1-t)}}\int_{A(T)^c}dw\left(\rho_{2t}(0)\rho_{\frac{1}{2}}(x,w)+\rho_{2t}(0)\rho_{\frac{1}{2}}(y,w)-2\rho_{2t}(x,y)\rho_{\frac{1}{2}}\left(\frac{x+y}{2},w\right)\right)\notag\\
&\hspace{-1em}\stackrel{(\ref{hsqare}),(\ref{hconv})}{\leq} C|x-y|\exp\left(-C'a^2(T)\right)\notag\\
&+\int_0^1\frac{dt}{2\sqrt{\pi (1-t)}}\int_{A(T)^c}dw(\rho_{2t}(0)-\rho_{2t}(x,y))(\rho_{\frac{1}{2}}(x,w)+\rho_{\frac{1}{2}}(y,w))\notag\\
&+\int_0^1\frac{dt}{2\sqrt{\pi (1-t)}}\int_{A(T)^c}dw\rho_{2t}(x,y)\left(\rho_{\frac{1}{2}}(x,w)+\rho_{\frac{1}{2}}(y,w)-2\rho_{\frac{1}{2}}\left(\frac{x+y}{2},w\right)\right)\notag\\
&\stackrel{(\ref{hest})}{\leq} C|x-y|\exp\left(-C'a^2(T)\right).\notag
\end{align}
Also, we have
\begin{align*}
&P_\mathcal{Z}\left[|J^{(2)}(T,x)-J^{(2)}(T,y)|^2\right]=\int_{\Delta_2(T)}\int_{\mathbb{R}^2}\left(\rho^{(2,T)}(x;{\bf t},{\bf x})-\rho^{(2,T)}(y;{\bf t},{\bf x})\right)^2d{\bf t}d{\bf x}\\
&= \int_{D_{2}(1,T)}\int_{\mathbb{R}^2}\left(\int_{A(T)^c}(\rho_1(x,w)-\rho_1(y,w))\rho_{t_1-1}(w,x_1)dw\right)^2\rho_{t_2-t_{1}}(x_{1},x_{2})^2d{\bf t}d{\bf x}\\
&+\int_{0<t_1<t_2\leq 1}\int_{\mathbb{R}^2}(\rho_{t_1}(x,x_1)-\rho_{t_1}(y,x_1))^2\rho_{t_2-t_{1}}(x_{1},x_{2})^2 \left(\int_{A(T)^c}\rho_{1-t_{2}}(x_{2},w)dw\right)^2d{\bf t}d{\bf x}\\
&+\int_{0<t_1\leq 1<t_{2}\leq T}\int_{\mathbb{R}^2}(\rho_{t_1}(x,x_1)-\rho_{t_1}(y,x_1))^2 \left(\int_{A(T)^c}\rho_{1-t_{1}}(x_{1},w)\rho_{t_{2}-1}(w,x_{2})dw\right)^2d{\bf t}d{\bf x}\\
&\leq \int_{1}^Tdt_1\frac{\sqrt{T-t_1}}{\sqrt{\pi}}\int_{A(T)^c}\int_{A(T)^c}\left(\rho_1(x,w)-\rho_1(y,w)\right)(\rho_1(x,w')-\rho_1(y,w'))\rho_{2(t_1-1)}(w,w')dwdw'\\
&+\int_{0<t_1<t_2\leq 1}\int_{\mathbb{R}}(\rho_{t_1}(x,x_1)-\rho_{t_1}(y,x_1))^2\frac{2\sqrt{\pi (1-t_1)}}{2\sqrt{\pi(t_2-t_1)}2\sqrt{\pi(1-t_2)}}\left(\int_{A(T)^c}\rho_{1-t_1}(x_1,w)dw\right)^2d{\bf t}dx_1\\
&+\int_{0<t_1\leq 1<t_{2}\leq T}\int_{\mathbb{R}^2}(\rho_{t_1}(x,x_1)-\rho_{t_1}(y,x_1))^2 \\
&\hspace{4em}\times \left(\iint_{(A(T)^c)^2}\rho_{1-t_{1}}(x_{1},w)\rho_{1-t_1}(x_1,w')\rho_{2(t_2-1)}(w,w')dwdw'\right)d{\bf t}d{\bf x}\\
&\leq \frac{\sqrt{T-1}}{\sqrt{\pi}}M^{(1)}(T)+\frac{\sqrt{\pi}}{2}M^{(2)}(T)+\frac{\sqrt{T-1}}{\sqrt{\pi}}M^{(2)}(T).
\end{align*}
For $k\geq 3$,\begin{align*}
&P_\mathcal{Z}\left[|J^{(k)}(T,x)-J^{(k)}(T,y)|^2\right]=\int_{\Delta_k(T)}\int_{\mathbb{R}^k}\left(\rho^{(k,T)}(x;{\bf t},{\bf x})-\rho^{(k,T)}(y;{\bf t},{\bf x})\right)^2d{\bf t}d{\bf x}\\
&= \int_{D_{k}(1,T)}\int_{\mathbb{R}^k}\left(\int_{A(T)^c}(\rho_1(x,w)-\rho_1(y,w))\rho_{t_1-1}(w,x_1)dw\right)^2\prod_{i=2}^{k}\rho_{t_i-t_{i-1}}(x_{i-1},x_{i})^2d{\bf t}d{\bf x}\\
&+\int_{0<t_1\leq 1<t_{2}<\cdots<t_k\leq T}\int_{\mathbb{R}^k}(\rho_{t_1}(x,x_1)-\rho_{t_1}(y,x_1))^2\prod_{i=3}^{k}\rho_{t_i-t_{i-1}}(x_{i-1},x_{i})^2 \\
&\hspace{10em}\times \left(\int_{A(T)^c}\rho_{1-t_{1}}(x_{1},w)\rho_{t_{2}-1}(w,x_{2})dw\right)^2d{\bf t}d{\bf x}\\
&+\sum_{\ell=2}^{k-1}\int_{0<t_1<\cdots<t_\ell\leq 1<t_{\ell+1}<\cdots<t_k\leq T}\int_{\mathbb{R}^k}(\rho_{t_1}(x,x_1)-\rho_{t_1}(y,x_1))^2\prod_{\begin{smallmatrix}i=2,\\ i\neq\ell+1\end{smallmatrix}}^{k}\rho_{t_i-t_{i-1}}(x_{i-1},x_{i})^2 \\
&\hspace{10em}\times \left(\int_{A(T)^c}\rho_{1-t_{\ell}}(x_{\ell},w)\rho_{t_{\ell+1}-1}(w,x_{\ell+1})dw\right)^2d{\bf t}d{\bf x}\\
&\stackrel{(\ref{hconv}),(\ref{hfull}),(\ref{hfull2})}{=}\frac{1}{2^{k-1}\Gamma(\frac{k+1}{2})}\int^{T}_1{(T-t_1)^{\frac{k-1}{2}}}\\
&\hspace{5em}\times \iint_{(A(T)^c)^2}(\rho_1(x,w)-\rho_1(y,w))(\rho_1(x,w')-\rho_1(y,w'))\rho_{2(t_1-1)}(w,w')dwdw'dt_1\\
&+\int_0^1dt_1\int_\mathbb{R}dx_1(\rho_{t_1}(x,x_1)-\rho_{t_1}(y,x_1))^2\\
&\hspace{5em}\times \int_1^Tdt_2\frac{(T-t_2)^{\frac{k-2}{2}}}{2^{k-2}\Gamma\left(\frac{k}{2}\right)}\left(\int_{A(T)^c}\int_{A(T)^c}\rho_{1-t_{1}}(x_{1},w)\rho_{1-t_1}(x_1,w')\rho_{2t_{2}-2}(w,w')dwdw'\right)\\
&+\sum_{\ell=2}^{k-1}\frac{1}{2^{k-2}\Gamma\left(\frac{k-\ell+1}{2}\right)\Gamma\left(\frac{\ell-1}{2}\right)}\int_{0}^1dt_1\int_{t_1}^1dt_\ell (t_\ell-t_1)^{\frac{\ell-3}{2}}\int_{\mathbb{R}}dx_1(\rho_{t_1}(x,x_1)-\rho_{t_1}(y,x_1))^2\rho_{\frac{t_\ell-t_1}{2}}(x_1,x_\ell)\\
&\hspace{4em}\times \int_1^Tdt_{\ell+1}\iint_{(A(T)^c)^2}{(T-t_{\ell+1})^{\frac{k-\ell-1}{2}}} \rho_{1-t_\ell}(w-x_\ell)\rho_{1-t_\ell}(w'-x_\ell)\rho_{2t_{\ell+1}-2}(w,w')dwdw'\\
&\leq \frac{(T-1)^{\frac{k-1}{2}}}{2^{k-1}\Gamma\left(\frac{k+1}{2}\right)}M^{(1)}(T)+\frac{(T-1)^{\frac{k-1}{2}}}{2^{k-1}\Gamma\left(\frac{k+1}{2}\right)}M^{(2)}(T)\\
&+\sum_{\ell=2}^{k-1}\frac{(T-1)^{\frac{k-\ell}{2}}}{2^{k-1}\Gamma\left(\frac{k-\ell+2}{2}\right)}\int_0^1dt_1\int_{t_1}^1dt_\ell \frac{(t_\ell-t_1)^{\frac{\ell-3}{2}}}{\Gamma\left(\frac{\ell-1}{2}\right)}\\
&\hspace{4em}\times\iint_{\mathbb{R}^2}dx_1dx_\ell(\rho_{t_1}(x,x_1)-\rho_{t_1}(y,x_1))^2\rho_{\frac{t_\ell-t_1}{2}}(x_1,x_\ell)\left(\int_{A(T)^c}\rho_{1-t_{\ell}}(x_{\ell},w)dw\right)^2\\
&\leq \frac{(T-1)^{\frac{k-1}{2}}}{2^{k-1}\Gamma\left(\frac{k+1}{2}\right)}M^{(1)}(T)+\frac{(T-1)^{\frac{k-1}{2}}}{2^{k-1}\Gamma\left(\frac{k+1}{2}\right)}M^{(2)}(T)\\
&+\sum_{\ell=2}^{k-1}\frac{(T-1)^{\frac{k-\ell}{2}}}{2^{k-1}\Gamma\left(\frac{k-\ell+2}{2}\right)}\int_0^1dt_1(1-t_1)^{\frac{1}{2}}\int_{t_1}^1dt_\ell \frac{(t_\ell-t_1)^{\frac{\ell-3}{2}}}{\sqrt{1-t_\ell}\Gamma\left(\frac{\ell-1}{2}\right)}\\
&\hspace{5em}\times \int_{\mathbb{R}}dx_1(\rho_{t_1}(x,x_1)-\rho_{t_1}(y,x_1))^2\left(\int_{A(T)^c}\rho_{1-t_{1}}(x_{1},w)dw\right)^2\\
&\leq \frac{(T-1)^{\frac{k-1}{2}}}{2^{k-1}\Gamma\left(\frac{k+1}{2}\right)}M^{(1)}(T)+\frac{(T-1)^{\frac{k-1}{2}}}{2^{k-1}\Gamma\left(\frac{k+1}{2}\right)}M^{(2)}(T)\\
&+\sum_{\ell=2}^{k-1}\frac{\sqrt{\pi}(T-1)^{\frac{k-\ell}{2}}}{2^{k-1}\Gamma\left(\frac{k-\ell+2}{2}\right)\Gamma\left(\frac{\ell}{2}\right)}M^{(2)}(T)\\
&\leq C\frac{T^{\frac{k-1}{2}}}{\Gamma\left(\frac{k-2}{2}\right)}(M^{(1)}(T)+M^{(2)}(T)).
\end{align*}
Thus, we have that \begin{align*}
P_\mathcal{Z}\left[|J^{(k)}(T,x)-J^{(k)}(T,y)|^2\right]\leq \frac{CT^{\frac{k-1}{2}}}{2^k\Gamma(\frac{k-2}{2})}P_\mathcal{Z}\left[|J^{(1)}(T,x)-J^{(1)}(T,y)|^2\right],
\end{align*}
where $C$ is a constant independent of $k$.
By hypercontractivity of Wiener chaos \cite[Theorem 5.10]{Jan}, we have that for $p\geq 2$\begin{align*}
P_\mathcal{Z}\left[|I_2(T,x)-I_2(T,y)|^p\right]^{1/p}&\leq \sum_{k\geq 0}P_\mathcal{Z}\left[|J^{(k)}(T,x)-J^{(k)}(T,y)|^p\right]^{1/p}\\
&\leq \sum_{k\geq 0}(p-1)^{k/2}P_\mathcal{Z}\left[|J^{(k)}(T,x)-J^{(k)}(T,y)|^2\right]^{1/2}\\
&\leq C_p|x-y|^{1/2}\exp\left(-C'a(T)\right).
\end{align*}
Thus, (\ref{i2t}) holds with $p=\frac{10}{\theta}$ and $q=\frac{\theta}{4}$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{zc}]
It is clear that
\begin{align*}
\sup_{x\in [-1,1]}|I_1(T,x)|^\theta&\leq \sup_{x\in [-1,1],w\in A(T)}\left|\frac{\mathcal{Z}_{\sqrt{2}}^x(1,w)}{\rho_L(w-1)+\rho_L(w+1)}\right|^\theta\\
&\hspace{2em}\times \left( \int_{\mathbb{R}}\left(\rho_L(u-1)+\rho_L(u+1)\right)\mathcal{Z}_{\sqrt{2}}^{(1,u)}(T)du\right)^\theta,
\end{align*}
where $L\in \mathbb{N}$ will be chosen large later. Thus, we have that \begin{align*}
P_\mathcal{Z}\left[\sup_{x\in[-1,1]}|I_1(T,x)|^\theta\right]&\leq P_\mathcal{Z}\left[\sup_{x\in [-1,1],w\in A(T)}\left|\frac{\mathcal{Z}_{\sqrt{2}}^x(1,w)}{\rho_L(w-1)+\rho_L(w+1)}\right|^\theta\right]\\
&\times P_\mathcal{Z}\left[ \left( \int_{\mathbb{R}}\left(\rho_L(u-1)+\rho_L(u+1)\right)\mathcal{Z}_{\sqrt{2}}^{(1,u)}(T)du\right)^\theta\right].
\end{align*}
If there exists a constant $C>0$ such that \begin{align}
P_\mathcal{Z}\left[\sup_{x\in [-1,1],w\in [2k-1,2k+1]}\left|\frac{\mathcal{Z}_{\sqrt{2}}^x(1,w)}{\rho_L(w-1)+\rho_L(w+1)}\right|\right]\leq C\label{boundedat}
\end{align}
for $k\in \mathbb{Z}$, then we have \begin{align*}
&P_\mathcal{Z}\left[\sup_{x\in[-1,1]}|I_1(T,x)|^\theta\right]\\
&\leq C^\theta a(T)P_\mathcal{Z}\left[ \left( \int_{\mathbb{R}}\left(\rho_L(u-1)+\rho_L(u+1)\right)\mathcal{Z}_{\sqrt{2}}^{(1,u)}(T)du\right)^\theta\right]
\end{align*}
and therefore \begin{align*}
&\varlimsup_{\theta\to 0}\varlimsup_{T\to\infty}\frac{1}{T\theta}\log P_\mathcal{Z}\left[\sup_{x\in[-1,1]}|I_1(T,x)|^\theta\right]\\
&\leq \varlimsup_{\theta\to 0}\varlimsup_{T\to\infty}\frac{1}{T\theta}\log P_\mathcal{Z}\left[ \left( \int_{\mathbb{R}}\left(\rho_L(u-1)+\rho_L(u+1)\right)\mathcal{Z}_{\sqrt{2}}^{(1,u)}(T)du\right)^\theta\right].
\end{align*}
Also, we know that \begin{align*}
&P_\mathcal{Z}\left[ \left( \int_{\mathbb{R}}\rho_L(u-1)\mathcal{Z}_{\sqrt{2}}^{(1,u)}(T)du\right)^\theta\right]\\
&=P_\mathcal{Z}\left[ \left( \mathcal{Z}_{\sqrt{2}}^1(T+L)\frac{1}{\displaystyle \int_{\mathbb{R}}\frac{\mathcal{Z}_{\sqrt{2}}^1(L,u)}{\rho_L(u-1)}\nu^{(1,L)}(u)du}\right)^\theta\right]\\
&\leq P_\mathcal{Z}\left[ \left( \mathcal{Z}_{\sqrt{2}}^1(T+L)\right)^{\frac{\theta}{1-\theta}}\right]^{1-\theta}P_\mathcal{Z}\left[\frac{1}{\displaystyle \int_{\mathbb{R}}\frac{\mathcal{Z}_{\sqrt{2}}^1(L,u)}{\rho_L(u-1)}\nu^{(1,L)}(u)du}\right]^\theta\\
&\leq P_\mathcal{Z}\left[ \left( \mathcal{Z}_{\sqrt{2}}^1(T+L)\right)^{\frac{\theta}{1-\theta}}\right]^{1-\theta}P_\mathcal{Z}\left[\frac{{\rho_L(u-1)}}{{\mathcal{Z}_{\sqrt{2}}^1(L,u)}}\right]^\theta\\
&\leq CP_\mathcal{Z}\left[ \left( \mathcal{Z}_{\sqrt{2}}^1(T+L)\right)^{\frac{\theta}{1-\theta}}\right]^{1-\theta},
\end{align*}
where $\nu^{(1,L)}(u)$ is the probability density function on $\mathbb{R}$ given by \begin{align*}
\nu^{(1,L)}(u)=\frac{1}{\displaystyle \int_{\mathbb{R}}\rho_L(u'-1)\mathcal{Z}_{\sqrt{2}}^{(1,u')}(T)du'}\rho_L(u-1)\mathcal{Z}_{\sqrt{2}}^{(1,u)}(T).
\end{align*}
Then, we have from Lemma \ref{wttheta} that \begin{align*}
&\varlimsup_{\theta\to 0}\varlimsup_{T\to \infty}\frac{1}{T\theta}\log P_\mathcal{Z}\left[ \left( \int_{\mathbb{R}}\rho_L(u-1)\mathcal{Z}_{\sqrt{2}}^{(1,u)}(T)du\right)^\theta\right]\\
&\leq F_\mathcal{Z}(\sqrt{2}),
\end{align*}
so the proof of Lemma \ref{zc} will be complete once we establish (\ref{boundedat}).
We now prove (\ref{boundedat}).
We consider the function on $[-1,1]\times \mathbb{R}$ given by
\begin{align*}
f(x,w)=\frac{\mathcal{Z}_{\sqrt{2}}^x(1,w)}{\rho_L(w-1)+\rho_L(w+1)}.
\end{align*}
Then, we have from Theorem \ref{cdpprop} (i) that \begin{align*}
P_\mathcal{Z}\left[f(x,w)\right]= \frac{\rho_1(x,w)}{\rho_L(w-1)+\rho_L(w+1)}\leq C_L.
\end{align*}
Also, if $w\geq w'\geq 1$, \begin{align*}
|f(x,w)-f(x',w')|\leq &\left|\frac{\mathcal{Z}_{\sqrt{2}}^x(1,w)-\mathcal{Z}_{\sqrt{2}}^{x'}(1,w')}{\rho_L(w-1)+\rho_L(w+1)}\right|\\
&+\mathcal{Z}_{\sqrt{2}}^{x'}(1,w')\left|\frac{1}{\rho_L(w-1)+\rho_L(w+1)}-\frac{1}{\rho_L(w'-1)+\rho_L(w'+1)}\right|\\
\leq & \left|\frac{\mathcal{Z}_{\sqrt{2}}^x(1,w)-\mathcal{Z}_{\sqrt{2}}^{x'}(1,w')}{\rho_L(w-1)+\rho_L(w+1)}\right|\\
&+\frac{|w-w'|^2}{L}\frac{\mathcal{Z}_{\sqrt{2}}^{x'}(1,w')\rho_L(w+1)}{\rho_L(w'-1)^2}.
\end{align*}
We can treat the case $w,w'\leq -1$ in the same manner, and if $w,w'\in [-1,1]$, then it is clear that \begin{align*}
|f(x,w)-f(x',w')|&\leq C(L)\left(\left|{\mathcal{Z}_{\sqrt{2}}^x(1,w)-\mathcal{Z}_{\sqrt{2}}^{x'}(1,w')}\right|+|w-w'|^2\right).
\end{align*}
Thus, if we show that for some $p\geq 1$ and $q>0$ with $pq>4$ there exists $\eta_p>pq-2$ such that \begin{align}
&P_\mathcal{Z}\left[\left|{\mathcal{Z}_{\sqrt{2}}^x(1,w)-\mathcal{Z}_{\sqrt{2}}^{x'}(1,w')}\right|^p\right]\notag\\
&\leq C(L)\left(|x-x'|^{\eta_p}+|w-w'|^{\eta_p}\right)\left(\rho_L(|w|\vee |w'|-1)^p+\rho_L(|w|\vee |w'|+1)^p\right)\label{1timesup}
\intertext{and}
&P_\mathcal{Z}\left[\mathcal{Z}_{\sqrt{2}}^{x'}(1,w')^p\right]\leq C(L)\left(\rho_L(|w|\vee |w'|-1)^p+\rho_L(|w|\vee |w'|+1)^p\right),\label{1timesp}
\end{align}
then we can apply Lemma \ref{Gar} with $d=2$ to $f(x,w)$ and obtain (\ref{boundedat}).
$\mathcal{Z}_{\sqrt{2}}^x(1,w)$ has the Wiener chaos representation \begin{align*}
\mathcal{Z}_{\sqrt{2}}^x(1,w)&=\rho_1(w-x)+\sum_{k\geq 1}\sqrt{2}^k\int_{D_k(1)}\int_{\mathbb{R}^k} \rho^{(k)}(x,w;{\bf t},{\bf x})\mathcal{W}(dt_1dx_1)\cdots \mathcal{W}(dt_kdx_k)\\
&=\sum_{k\geq 0}\sqrt{2}^k{K^{(k)}(x,w)},
\end{align*}
where \begin{align*}
\rho^{(k)}(x,w;{\bf t},{\bf x})=\rho_{t_1}(x_1-x)\prod_{i=2}^k\rho_{t_i-t_{i-1}}(x_i-x_{i-1})\rho_{1-t_k}(w-x_k).
\end{align*}
Then, we have from (\ref{hfull}) that \begin{align*}
P_\mathcal{Z}\left[\left(K^{(k)}(x,w)\right)^2\right]=\frac{1}{2^{k+1}\Gamma\left(\frac{k+1}{2}\right)}\exp\left(-(x-w)^2\right),
\end{align*}
and hypercontractivity implies that \begin{align*}
&P_\mathcal{Z}\left[\mathcal{Z}_{\sqrt{2}}^{x'}(1,w')^p\right]\leq \left(\sum_{k\geq 0}(p-1)^{\frac{k}{2}}\left(\frac{1}{2^{k+1}\Gamma\left(\frac{k+1}{2}\right)}\exp\left(-(x'-w')^2\right)\right)^{1/2}\right)^p\\
&\leq C(p)\exp\left(-\frac{p(x'-w')^2}{2}\right).
\end{align*}
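The $\Gamma$-factors in this and the surrounding estimates come from a Dirichlet-type integral over the simplex; for the reader's convenience we record it (with the conventions $t_0=0$ and $t_{k+1}=1$):
\begin{align*}
\int_{D_k(1)}\prod_{i=1}^{k+1}(t_i-t_{i-1})^{-\frac{1}{2}}\,dt_1\cdots dt_k=\frac{\Gamma\left(\frac{1}{2}\right)^{k+1}}{\Gamma\left(\frac{k+1}{2}\right)}=\frac{\pi^{\frac{k+1}{2}}}{\Gamma\left(\frac{k+1}{2}\right)}.
\end{align*}
Combined with the identity $\rho_t(z)^2=\frac{1}{2\sqrt{\pi t}}\rho_{t/2}(z)$, this produces the prefactor $\frac{1}{2^{k+1}\Gamma\left(\frac{k+1}{2}\right)}$ in the second-moment identity above.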
Then, we have that for $k\geq 2$\begin{align*}
&P_\mathcal{Z}\left[\left|K^{(k)}(x,y)-K^{(k)}(x',y')\right|^2\right]\\
&=\int_{D_k(1)}\int_{\mathbb{R}^k}\left(\rho_{{t_1}}(x,x_1)\rho_{1-t_k}(x_k,y)-\rho_{t_1}(x',x_1)\rho_{1-t_k}(x_k,y')\right)^2\prod_{i=1}^{k-1}\rho_{t_{i+1}-t_i}(x_{i},x_{i+1})^2d{\bf x}_kd{\bf t}_k\\
&=\frac{1}{2^{k-1}\Gamma\left(\frac{k-1}{2}\right)}\int_{0}^1\int_{s}^1(t-s)^{\frac{k-3}{2}}\left(\rho_{2s}(0)\rho_{2({1-t})}(0)\rho_{\frac{1}{2}}(x,y)+\rho_{2s}(0)\rho_{2({1-t})}(0)\rho_{\frac{1}{2}}(x',y')\right.\\
&\hspace{10em}\left.-2\rho_{2s}(x,x')\rho_{2(1-t)}(y,y')\rho_{\frac{1}{2}}\left(\frac{x+x'}{2},\frac{y+y'}{2}\right)\right)dtds\\
&=\frac{1}{2^{k+1}\Gamma\left(\frac{k+1}{2}\right)}\left(\rho_{\frac{1}{2}}(x,y)+\rho_{\frac{1}{2}}(x',y')-2\rho_{\frac{1}{2}}\left(\frac{x+x'}{2},\frac{y+y'}{2}\right)\right)\\
&+\frac{2}{2^{k-1}\Gamma\left(\frac{k-1}{2}\right)}\int_0^1\int_s^1(t-s)^{\frac{k-3}{2}}\rho_{\frac{1}{2}}\left(\frac{x+x'}{2},\frac{y+y'}{2}\right)\rho_{2s}(0)\left(\rho_{2({1-t})}(0)-\rho_{2(1-t)}(y,y')\right)dtds\\
&+\frac{2}{2^{k-1}\Gamma\left(\frac{k-1}{2}\right)}\int_0^1\int_s^1(t-s)^{\frac{k-3}{2}}\rho_{\frac{1}{2}}\left(\frac{x+x'}{2},\frac{y+y'}{2}\right)\rho_{2(1-t)}(y,y')\left(\rho_{2s}(0)-\rho_{2s}(x,x')\right)dtds\\
&\leq \frac{C(|x-x'|+|y-y'|)}{2^{k-1}\Gamma\left(\frac{k-1}{2}\right)}\exp\left(-\frac{(|y|\vee |y'|-1)^2 }{2}\right).
\end{align*}
Also, we can estimate that \begin{align*}
P_\mathcal{Z}\left[\left|K^{(0)}(x,y)-K^{(0)}(x',y')\right|^2\right]\leq C(|x-x'|^2+|y-y'|^2)\exp\left(-(|y|\vee |y'|-1)^2\right)
\end{align*}
and \begin{align*}
&P_\mathcal{Z}\left[\left|K^{(1)}(x,y)-K^{(1)}(x',y')\right|^2\right]\\
&\leq \int_0^1\left(\rho_{2s}(0)\rho_{2(1-s)}(0)\left(\rho_{\frac{1}{2}}(x,y)+\rho_{\frac{1}{2}}(x',y')\right)\right.\\
&\hspace{5em}
\left.-2\rho_{2s}(x,x')\rho_{2(1-s)}(y,y')\rho_{\frac{1}{2}}\left(\frac{x+x'}{2},\frac{y+y'}{2}\right)\right)ds\\
&\leq {C(|x-x'|+|y-y'|)}\exp\left(-\frac{(|y|\vee |y'|-1)^2 }{2}\right).
\end{align*}
Then, hypercontractivity implies that \begin{align*}
&P_\mathcal{Z}\left[\left|\mathcal{Z}_{\sqrt{2}}^x(1,w)-\mathcal{Z}_{\sqrt{2}}^{x'}(1,w')\right|^p\right]\\
&\leq \left(|x-x'|^{\frac{p}{2}}+|w-w'|^{\frac{p}{2}}\right)\left(\sum_{k\geq 0}(p-1)^{\frac{k}{2}}\left(\frac{C}{2^{k-1}\Gamma\left(\frac{k-1}{2}\right)}\exp\left(-\frac{2(w^2+w'^2)}{L}\right)\right)^{1/2}\right)^p\\
&\leq C(p)(|x-x'|^{\frac{p}{2}}+|w-w'|^{\frac{p}{2}})\exp\left(-\frac{p(w^2+w'^2)}{L}\right)
\end{align*}
for $L$ large enough.
Thus, we have confirmed (\ref{1timesup}) and (\ref{1timesp}), which completes the proof of Lemma \ref{zc}, and hence that of Lemma \ref{ttheta}.
\end{proof}
Finally, we need to prove the free energy $F_\mathcal{Z}(\sqrt{2})=-\displaystyle \frac{1}{6}$. The proof is a modification of the proof of Lemma \ref{ttheta}.
\begin{proof}[Proof of Lemma \ref{freeenecdp}]
It is easy to see that for $a'(T)\in [0,\infty)$\begin{align*}
P_\mathcal{Z}\left[\mathcal{Z}_{\sqrt{2}}(T)^\theta\right]&\leq \sum_{k=-a'(T)}^{a'(T)}P_\mathcal{Z}\left[\left(\int_{2k-1}^{2k+1}\mathcal{Z}_{\sqrt{2}}(T,x)dx\right)^\theta\right]\\
&+P_\mathcal{Z}\left[\int_{-\infty}^{-a'(T)}\mathcal{Z}_{\sqrt{2}}(T,x)dx\right]^\theta+P_\mathcal{Z}\left[\int^{\infty}_{a'(T)}\mathcal{Z}_{\sqrt{2}}(T,x)dx\right]^\theta.
\end{align*}
If $\lim_{T\to \infty}\frac{a'(T)}{T^3}>0$, then \begin{align*}
\varlimsup_{T\to\infty}\frac{1}{T}\log \left(P_\mathcal{Z}\left[\int_{-\infty}^{-a'(T)}\mathcal{Z}_{\sqrt{2}}(T,x)dx\right]^\theta+P_\mathcal{Z}\left[\int^{\infty}_{a'(T)}\mathcal{Z}_{\sqrt{2}}(T,x)dx\right]^\theta\right)=-\infty.
\end{align*}
We denote \begin{align*}
\exp\left(A_{\sqrt{2}}(T,x)\right)=\frac{\mathcal{Z}_{\sqrt{2}}(T,x)}{\rho_T(x)},\ \ T>0,\ \ x\in\mathbb{R}.
\end{align*}
Then, we find that \begin{align*}
P_\mathcal{Z}\left[\left(\int_{2k-1}^{2k+1}\mathcal{Z}_{\sqrt{2}}(T,x)dx\right)^\theta\right]&=P_\mathcal{Z}\left[\left(\int_{-1}^1\exp\left(A_{\sqrt{2}}(T,x)\right)\rho_T(x+2k)dx\right)^\theta\right]\\
&\leq P_\mathcal{Z}\left[\sup_{x\in[-1,1]}\exp\left(\theta A_{\sqrt{2}}(T,x)\right)\right]\int_{-1}^1\rho_{T}(x+2k)^\theta dx.
\end{align*}
Since \begin{align*}
\sum_{k=-\infty}^\infty \int_{-1}^1\rho_{T}(x+2k)^\theta dx<\infty,
\end{align*}
it is enough to show that \begin{align*}
\varlimsup_{\theta\to 0}\varlimsup_{T\to\infty}\frac{1}{T\theta}\log P_\mathcal{Z}\left[\sup_{x\in[-1,1]}\exp\left(\theta A_{\sqrt{2}}(T,x)\right)\right]\leq -\frac{1}{6}.
\end{align*}
When we consider the time reversal, it is enough to show that \begin{align*}
\varlimsup_{\theta\to 0}\varlimsup_{T\to\infty}\frac{1}{T\theta}\log P_\mathcal{Z}\left[\sup_{x\in[-1,1]}\mathcal{Z}^{x}_{\sqrt{2}}(T,x)^\theta\right]\leq -\frac{1}{6}.
\end{align*}
We know
\begin{align*}
\mathcal{Z}^{x}_{\sqrt{2}}(T,x)&=\int_{A(T)}\mathcal{Z}^x_{\sqrt{2}}(1,w)\mathcal{Z}_{\sqrt{2}}(1,w;T,0)dw\\
&+\int_{A(T)^c}\mathcal{Z}^x_{\sqrt{2}}(1,w)\mathcal{Z}_{\sqrt{2}}(1,w;T,0)dw\\
&=I'_1(T,x)+I_2'(T,x)
\end{align*}
in a similar manner to the proof of Lemma \ref{ttheta}. Then, we find that \begin{align*}
P_\mathcal{Z}\left[\left(\mathcal{Z}^{x}_{\sqrt{2}}(T,x)\right)^\theta\right]\leq P_\mathcal{Z}\left[\sup_{x\in[-1,1]}|I_1'(T,x)|^\theta\right]+P_\mathcal{Z}\left[\sup_{x\in[-1,1]}|I_2'(T,x)|^\theta\right]
\end{align*}
and we obtain by using the same argument as in the proof of Lemma \ref{ttheta} that \begin{align*}
\varlimsup_{\theta\to 0}\varlimsup_{T\to\infty}\frac{1}{T\theta}\log P_\mathcal{Z}\left[\sup_{x\in[-1,1]}|I_1'(T,x)|^\theta\right]&\leq \varlimsup_{T\to \infty}\frac{1}{T}P_\mathcal{Z}\left[\log \mathcal{Z}_{\sqrt{2}}(T,0)\right]=-\frac{1}{6}
\intertext{and}
\varlimsup_{\theta\to 0}\varlimsup_{T\to\infty}\frac{1}{T\theta}\log P_\mathcal{Z}\left[\sup_{x\in[-1,1]}|I_2'(T,x)|^\theta\right]&=-\infty.
\end{align*}
\end{proof}
\begin{appendix}
\section{Some formulas for the heat kernel}
Here, we collect some formulas used in the calculations in the proofs. We set \begin{align*}
\rho_t(x-y)=\rho_t(x,y)=\frac{1}{\sqrt{2\pi t}}\exp\left(-\frac{(y-x)^2}{2t}\right)
\end{align*}
for $x,y\in \mathbb{R}$ and $t>0$. Then, we have that for $k\geq 1$
\begin{align}
&\rho_t(x)^2=\frac{1}{2\sqrt{\pi t}}\rho_{\frac{t}{2}}(x),\label{hsqare}\\
&\rho_{t}(x,w)\rho_t(y,w)=\rho_{2t}(x,y)\rho_{\frac{t}{2}}\left(\frac{x+y}{2},w\right),\label{hprod}\\
&\int_\mathbb{R}\rho_s(x,y)\rho_t(y,z)dy=\rho_{t+s}(x,z),\label{hconv}\\
&\int_{\mathbb{R}^k}\rho_{t_1-t_0}(x,x_1)^2\prod_{i=1}^{k-1}\rho_{(t_{i+1}-t_i)}{(x_{i},x_{i+1})}^2\rho_{t_{k+1}-t_k}(x_k,y)^2d{\bf x}_k\notag\\
&=\frac{1}{2^{k+1}\pi^{\frac{k+1}{2}}}\rho_{\frac{t_{k+1}-t_0}{2}}(x,y)\prod_{i=0}^{k}\frac{1}{\sqrt{t_{i+1}-t_i}},\label{hsqconv}\\
&\int_{t_0}^{t_2}\int_\mathbb{R}\rho_{t_1-t_0}(x_0,x_1)^2dx_1dt_1=\frac{\sqrt{t_2-t_0}}{\sqrt{\pi}},\label{h2}\\
&\int_{D_k(t_0,t_{k+1})}\int_{\mathbb{R}^k}\rho_{t_1-t_0}(x,x_1)^2\prod_{i=1}^{k-1}\rho_{(t_{i+1}-t_i)}(x_{i},x_{i+1})^2\rho_{t_{k+1}-t_k}(x_k,y)^2d{\bf x}_kd{\bf t}_k\notag\\
&=\frac{(t_{k+1}-t_0)^{\frac{k-1}{2}}}{2^{k+1}\Gamma\left(\frac{k+1}{2}\right)}\rho_{\frac{t_{k+1}-t_0}{2}}(x,y),\label{hfull}\\
&\int_{D_k(s,t)}\int_{\mathbb{R}^k}\rho_{t_1-t_0}(x,x_1)^2\prod_{i=1}^{k-1}\rho_{(t_{i+1}-t_i)}(x_{i},x_{i+1})^2d{\bf x}_kd{\bf t}_k\notag\\
&=\frac{(t-s)^{\frac{k}{2}}}{2^{k}\Gamma\left(\frac{k+2}{2}\right)},\label{hfull2}\\
&\int_0^t \left(\rho_s(0)-\rho_s(x)\right)ds\leq \frac{|x|}{2\sqrt{\pi}}\int_{\frac{|x|^2}{2t}}^\infty\frac{1}{u^{\frac{3}{2}}}\left(1-\exp\left(-u\right)\right)du\notag\\
&\hspace{5em}\leq \frac{|x|}{2\sqrt{\pi}}\int_0^\infty u^{-\frac{3}{2}}(1\wedge u)du,\label{hest}
\end{align}
where ${\bf x}_k=(x_1,\cdots,x_k)\in\mathbb{R}^k$, ${\bf t}_k=(t_1,\cdots,t_k)\in [0,\infty)^k$, and \begin{align*}
D_k(s,t)=\{{\bf t}_k\in [0,\infty)^k:s\leq t_1<\cdots<t_k<t\}.
\end{align*}
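The first three identities can be checked numerically. A minimal sketch in plain Python (the convolution \eqref{hconv} is approximated by a Riemann sum; the test points are arbitrary):

```python
import math

def rho(t, x):
    # heat kernel rho_t(x) = (2*pi*t)^(-1/2) * exp(-x^2/(2t))
    return math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)

t, x, y, w = 0.7, 0.3, -1.1, 0.4

# (hsqare): rho_t(x)^2 = rho_{t/2}(x) / (2 sqrt(pi t))
assert abs(rho(t, x) ** 2 - rho(t / 2, x) / (2 * math.sqrt(math.pi * t))) < 1e-12

# (hprod): rho_t(x,w) rho_t(y,w) = rho_{2t}(x,y) rho_{t/2}((x+y)/2, w)
lhs = rho(t, x - w) * rho(t, y - w)
rhs = rho(2 * t, x - y) * rho(t / 2, (x + y) / 2 - w)
assert abs(lhs - rhs) < 1e-12

# (hconv): Chapman-Kolmogorov, approximated by a Riemann sum on [-15, 15]
s, z, h = 0.5, 0.2, 1e-3
conv = sum(rho(s, u * h - x) * rho(t, z - u * h) for u in range(-15000, 15001)) * h
assert abs(conv - rho(s + t, z - x)) < 1e-6
```

Since the integrands are smooth and Gaussian-decaying, the Riemann sum for \eqref{hconv} converges very quickly in the step size.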
\end{appendix}
\iffalse{
We consider directed polymers in random environment.
Directed polymers in random environment are described in terms of polymers and a random environment.
\begin{itemize}
\item \textit{Polymers}: Let $(S, P_S^x)$ be a simple random walk on $\mathbb{Z}^d$ starting from $x$. In particular, we write $P_S^0=P_S$.
\item \textit{Random environment}: Let $\{\eta(n,x):n\in\mathbb{N},x\in\mathbb{Z}^d\}$ be i.i.d.\,$\mathbb{R}$-valued random variables which are independent of $S$, and we denote by $Q$ the law of $\eta$. We assume $Q[\exp(\beta\eta(n,x))]=\exp(\lambda(\beta))<\infty$ for any $\beta\in\mathbb{R}$.
\end{itemize}
The normalized partition function of directed polymers in random environment at $\beta\geq 0$ is given by \begin{align*}
W_{N}^\beta(\eta)&=P_S\left[ \exp\left( \sum_{i=1}^N\beta\eta(i,S_i)-N\lambda(\beta) \right) \right]\\
&=P_S\left[\zeta_N(\beta,S)\right],
\end{align*}
where \begin{align*}
\zeta_N(\beta,S)=\prod_{i=1}^N\exp\left(\beta\eta(i,S_i)-\lambda(\beta)\right).
\end{align*}
Then, we define the polymer measure $\mu_N^\beta$ by
\begin{align*}
\mu_N^\beta(dS)=\frac{1}{W_N^\beta(\eta)}\exp\left(\beta \sum_{i=1}^N \eta(i,S_i)-N\lambda(\beta)\right)P(dS).
\end{align*}
Bolthausen proved the martingale property of $\{W_N^\beta(\eta):N\geq 0\}$ and the following zero-one law:
\begin{align*}
Q\left(W_\infty^\beta(\eta)=0\right)\in \{0,1\},
\end{align*}
where
\begin{align*}
W_{\infty}^\beta(\eta)=\lim_{N\to \infty}W_N^\beta(\eta)
\end{align*}
$Q$-a.s. \cite{Bol}.
Then, Comets and Yoshida proved the existence of the following phase transition \cite{ComYos}: there exists $\beta_1\geq 0$ such that \begin{align*}
\beta<\beta_1 &\Rightarrow W_\infty^\beta(\eta)>0,\ Q\text{-a.s.}\ \textit{(weak disorder)}\\
\beta>\beta_1 &\Rightarrow W_\infty^\beta(\eta)=0,\ Q\text{-a.s.}\ \textit{(strong disorder)}.
\end{align*}
In \cite{AlbZho,Bol,ComYos,ImbSpe,SonZho}, it is shown that the polymer chain behaves diffusively in the weak disorder phase.
Also, the free energy plays an important role in the study of the behavior of polymer chains. It is defined by
\begin{align*}
\psi(\beta)=\lim_{N\to \infty}\frac{1}{N}\log W_N^\beta(\eta)=\lim_{N\to \infty}\frac{1}{N}Q\left[\log W_N^\beta(\eta)\right]\ \ Q\text{-a.s.}
\end{align*}
The limit is known to exist and to be a constant $Q$-a.s. \cite{CarHu,ComShiYos2}.
Jensen's inequality gives an upper bound on the free energy: \begin{align*}
\psi(\beta)=\lim_{N\to\infty}\frac{1}{N}Q\left[\log W_N^\beta(\eta)\right]\leq \lim_{N\to \infty}\frac{1}{N}\log Q\left[W_N^\beta(\eta)\right]=0.
\end{align*}
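The equality $Q[W_N^\beta(\eta)]=1$ behind this bound can be verified by exact enumeration on a small system. A sketch assuming, for computability, a $\pm 1$ coin-flip environment (so $\lambda(\beta)=\log\cosh\beta$) in place of a general one, with $d=1$ and $N=3$:

```python
import math
from itertools import product

beta, N = 0.8, 3
lam = math.log(math.cosh(beta))  # lambda(beta) for a fair +-1 environment

# space-time sites a nearest-neighbour walk of length N can visit
sites = [(i, x) for i in range(1, N + 1) for x in range(-i, i + 1, 2)]
paths = list(product([-1, 1], repeat=N))  # the 2^N paths, each of probability 2^-N

def W(env):
    # normalized partition function W_N^beta for one fixed environment
    total = 0.0
    for steps in paths:
        pos, energy = 0, 0.0
        for i, s in enumerate(steps, start=1):
            pos += s
            energy += beta * env[(i, pos)] - lam
        total += math.exp(energy)
    return total / len(paths)

# averaging W over all equally likely environments recovers Q[W_N] = 1
avg = sum(W(dict(zip(sites, vals)))
          for vals in product([-1, 1], repeat=len(sites))) / 2 ** len(sites)
assert abs(avg - 1.0) < 1e-9
```

The identity holds because the sites $(i,S_i)$ visited along a single path are distinct, so the expectation factorizes and each factor equals one.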
Comets and Yoshida gave another phase transition \cite{ComYos}: there exists $\beta_2\geq 0$ such that \begin{align*}
\beta<\beta_2 &\Rightarrow \psi(\beta)=0,\\
\beta>\beta_2 &\Rightarrow \psi(\beta)<0\ \ \textit{(very strong disorder)}.
\end{align*}
Thus, it is clear that $0\leq \beta_1\leq \beta_2$. In particular, we know that \begin{align*}
& \beta_1=\beta_2=0\ \ \text{when $d=1,2$,}\\
& 0<\beta_1\leq\beta_2\ \ \text{when $d\geq 3$}
\end{align*}
\cite{AleYil,Bol,CarHu,ComShiYos,ComShiYos2,ComVar,ComYos,Lac,Wat}.
Moreover, it is believed that \begin{align}
\beta_1=\beta_2\label{corr}
\end{align}
for any dimension.
The author gave a strategy to prove (\ref{corr}) in \cite[Section IV, B]{Nak}.
Suppose $\beta>\beta_1$. To prove $\psi(\beta)<0$, it is sufficient to find non-negative\footnote{The strategy is a little different from the one in \cite{Nak}. However, one can check with some calculation that it still works.} random variables $\{A_N(\beta):N\geq 1\}$ such that \begin{enumerate}[(i)]
\item\label{i} $A_N\in \sigma[\eta(n,x):1\leq n\leq N,x\in\mathbb{Z}^d]={\cal F}_N$.
\item\label{ii} $Q[A_N(\beta)]$ is bounded.
\item\label{iii} $A_N(\beta)$ grows exponentially in $P_SQ_S^\beta$-probability,
\end{enumerate}
where $P_SQ_S^\beta$ is a probability measure whose Radon-Nikodym derivative with respect to $QP_S$ conditioned on ${\cal G}_N$ is given by \begin{align*}
\left.\frac{dP_SQ_S^\beta}{dQP_S}\right|_{{\cal G}_N}=\zeta_N(\beta,S),
\end{align*}
where ${\cal G}_N=\displaystyle\sigma[\eta(n,x),S_n:1\leq n\leq N,x\in\mathbb{Z}^d]$ is the filtration generated by the environment and the random walk up to time $N$.
Now, we take $A_N(\beta)$ to be a normalized partition function of directed polymers in random environment at $\gamma\geq 0$, i.e.\begin{align*}
A_N(\beta)=W_N^\gamma.
\end{align*}
Then, $A_N(\beta)$ satisfies (\ref{i}) and (\ref{ii}).
We assume the following:
{\bf Assumption }\begin{align*}
\eta(i,x)\stackrel{\text{d}}{\sim}N(0,1).\end{align*}
This assumption gives us a nice change-of-measure property.
We have by a similar argument to \cite{Bir2} that the size-biased law of $W_N^\gamma$, i.e.\,the law of $W_N^{\gamma}$ under $P_SQ_S^\beta$, is the same as the law of\begin{align*}
\hat{W}_N^{\beta,\gamma}&=P_{S'}\left[ \exp\left( \sum_{i=1}^N \gamma\left( \eta(i,S'_i)+\beta{\bf 1}\{S'_i=S_i\} \right)-N\lambda(\gamma) \right) \right]\\
&=P_{S'}\left[ \exp\left( \sum_{i=1}^N \gamma \eta(i,S_i')+\beta\gamma L_N(S,S') -N\lambda(\gamma) \right) \right]
\end{align*}
under $P_S{Q}$,
where $\{S'_i:i\geq 0\}$ is a simple random walk on $\mathbb{Z}^d$ independent of $S$ and $\eta$.
Indeed, we have for a bounded continuous function $f:\mathbb{R}_+\to \mathbb{R}$ \begin{align*}
P_SQ_S^\beta\left[f(W_N^\gamma)\right]&=P_SQ\left[\zeta_N(\beta, S) f(W_N^\gamma) \right]\\
&=P_S\left[Q\left[f\left(\sum_{y_1,\cdots,y_N}\prod_{i=1}^Np(y_{i-1},y_{i})\times \prod_{i=1}^N \exp\left(\gamma \eta (i,y_i)-\lambda(\gamma)\right)\right)\zeta_N(\beta,S) \right] \right]\\
&=P_S\left[Q\left[f\left(\sum_{y_1,\cdots,y_N}\prod_{i=1}^Np(y_{i-1},y_{i})\times \prod_{i=1}^N \exp\left(\gamma \eta (i,y_i)+\beta\gamma {\bf 1}\{y_i=S_i\}-\lambda(\gamma)\right)\right)\right] \right]\\
&=P_S\left[Q\left[f\left(\hat{W}_N^{\beta,\gamma}\right)\right]\right],
\end{align*}
where $p(x,y)$ is the transition probability of the simple random walk on $\mathbb{Z}^d$ and we have used in the third line that, for fixed $S$, \begin{align*}
\eta(i,x)\ \text{under }Q_S^\beta\stackrel{d}{=}\eta(i,x)+\beta{\bf 1}\{S_i=x\}\ \text{under }Q.
\end{align*}
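For a single Gaussian coordinate, this distributional identity is the tilting formula $E[f(Z)e^{\beta Z-\beta^2/2}]=E[f(Z+\beta)]$ for $Z\sim N(0,1)$. A quick numerical sanity check with an arbitrary quadratic test function, using a Riemann sum:

```python
import math

beta, h = 0.7, 1e-3

def phi(z):
    # standard normal density
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def f(z):
    # arbitrary test function, used only for this check
    return z * z + 3 * z

grid = [i * h for i in range(-12000, 12001)]
# E[f(Z) exp(beta*Z - beta^2/2)]  versus  E[f(Z + beta)]
tilted = sum(f(z) * math.exp(beta * z - beta ** 2 / 2) * phi(z) for z in grid) * h
shifted = sum(f(z + beta) * phi(z) for z in grid) * h
assert abs(tilted - shifted) < 1e-6
# both agree with the closed form E[(Z+beta)^2 + 3(Z+beta)] = 1 + beta^2 + 3*beta
assert abs(shifted - (1 + beta ** 2 + 3 * beta)) < 1e-6
```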
Thus, it is enough to find $\gamma$ such that \begin{align}
\varliminf_{N\to \infty}\frac{1}{N}\log \hat{W}^{\beta,\gamma}_N>0,\,\ \ P_SQ\text{-a.s.}\label{bia}
\end{align}
We define \begin{align*}
\hat{W}_{N,x}^{\beta,\gamma}=P_{S'}\left[ \zeta_N(\gamma,S')e^{\beta\gamma L_N(S,S')} : S'_{N+1}=x \right],
\end{align*}
where $L_N(S,S')$ is the collision local time of the simple random walks $S$ and $S'$ up to time $N$, that is, \begin{align*}
L_N(S,S')=\sum_{i=1}^N{\bf 1}\{S_i=S_i'\}.
\end{align*}
\begin{lem}\label{loc}If $W_\infty^\gamma=0$ $Q$-almost surely, then
\begin{align*}
\sum_{N\geq 1}\hat{W}_{N-1,S_N}^{\gamma,\gamma}=\infty\ \ \ \ P_SQ\text{-a.s.}
\end{align*}
\end{lem}
\begin{proof}
Lemma 9 in \cite{Bir2} yields that if the DPRE is in the strong disorder phase, then \begin{align*}
{W}_{N}^{\gamma}\to \infty\ \ \text{stochastically under $P_SQ_S^\gamma$}
\intertext{and therefore}
\hat{W}_N^{\gamma,\gamma}\to \infty\ \ \text{in $P_SQ$-probability.}
\end{align*}
It implies that \begin{align*}
\varlimsup_{N\to \infty}\hat{W}_N^{\gamma,\gamma}=\infty,\ \ P_SQ\text{-almost surely.}
\end{align*}
It is easy to see that
$\displaystyle \hat{W}_{N}^{\gamma,\gamma}$ is a submartingale with respect to ${\cal F}_N$, $P_S$-almost surely. Indeed, \begin{align*}
&Q\left[\left.\hat{W}_N^{\gamma,\gamma}\right|{\cal F}_{N-1}\right]\\
&=\sum_{x\in\mathbb{Z}^d}P_{S'}\left[\zeta_{N-1}(\gamma,S')e^{\gamma^2L_{N-1}(S,S')}:S'_{N-1}=x\right]\\
&\hspace{3em}\times P_{S'}^xQ\left[\exp\left(\gamma\eta(N,S'_1)-\frac{\gamma^2}{2}+\gamma^2{\bf 1}\{S'_1=S_N\}\right)\right]\\
&=\hat{W}_{N-1,S_N}^{\gamma,\gamma}\left(e^{\gamma^2}-1\right)+\hat{W}_{N-1}^{\gamma,\gamma}.
\end{align*}
Proposition \ref{dive} implies that \begin{align*}
P_S\left(\sum_{N\geq 1}\hat{W}_{N-1,S_N}^{\gamma,\gamma}=\infty,\ Q\text{-almost surely}\right)=1.
\end{align*}
\end{proof}
The following proposition is a modification of Exercise 4.3.4 in \cite{Dur}.
\begin{prop}\label{dive}
Let $X_n$ and $Y_n$ be positive, integrable and adapted to ${\cal F}_n$. Suppose $E\left[X_{n+1}|{\cal F}_n\right]\leq X_n+Y_n$. Then, we have that \begin{align*}
\left\{\sum_{n\geq 1}Y_n<\infty\right\} \stackrel{\text{a.s.}}{\subset }\left\{\lim_{n\to \infty}X_n\ \text{exists and is finite}\right\}.
\end{align*}
\end{prop}
\begin{proof}
We remark that \begin{align*}
Z_n=X_n-\sum_{m=1}^{n-1}Y_m
\end{align*}
is a supermartingale.
Let $N=\inf\left\{ k:\sum_{m=1}^kY_m>M \right\}$ for $M>0$. Then, \begin{align*}
Z_{n\wedge N}
\end{align*}is also a supermartingale and $Z_{n\wedge N}\geq -M$ a.s., so the positive supermartingale $Z_{n\wedge N}+M$ converges a.s. In particular, $Z_\infty$ exists a.s.\,on $\{N=\infty\}$. That is, $X_\infty$ exists on $\displaystyle \left\{\sum_{m=1}^\infty Y_m\leq M\right\}$. Since $M>0$ is arbitrary, we obtain the conclusion.
\end{proof}
We have from the subadditive ergodic theorem that the limit \begin{align*}
\Psi(\beta,\gamma)=\lim_{N\to \infty}\frac{1}{N}\log \hat{W}_{N-1,S_N}^{\beta,\gamma}
\end{align*} exists and is non-random $P_SQ$-almost surely. Moreover, we know that \begin{align*}
\Psi(\beta,\gamma)&=\lim_{N\to \infty}\frac{1}{N}P_SQ\left[\log \hat{W}_{N-1,S_N}^{\beta,\gamma}\right]\\
&=\sup_{N\geq 1}\frac{1}{N}P_SQ\left[\log \hat{W}_{N-1,S_N}^{\beta,\gamma}\right].
\end{align*}
It is obvious that \begin{align*}
\Psi(\beta,\gamma)\leq \varliminf_{N\to \infty}\frac{1}{N}\log \hat{W}_{N}^{\beta,\gamma},\ \ P_SQ\text{-a.s.}
\end{align*}
Thus, it is enough to show that for any $\beta>\beta_1$, there exists $\beta_1<\gamma<\beta$ such that \begin{align*}
\Psi(\beta,\gamma)>0.
\end{align*}
Let $\beta_1<\gamma<\beta$ and $r\geq 0$. We define \begin{align*}
\Theta(\beta,\gamma,r)=\exp\left(\gamma{\eta}(0,0)-\frac{\gamma^2}{2}+\beta\gamma\right)\sum_{N\geq 1}e^{-rN}\hat{W}_{N-1,S_N}^{\beta,\gamma}.
\end{align*}
By Lemma \ref{loc}, \begin{align*}
\Theta(\gamma,\gamma,0)=\infty , \ P_SQ\text{-almost surely.}
\end{align*}
If there exists $r>0$ such that \begin{align*}
\Theta(\beta,\gamma,r)=\infty,\ P_SQ\text{-almost surely},
\end{align*}
then we will have \begin{align*}
\Psi(\beta,\gamma)\geq r
\end{align*}
and we can complete the proof of (\ref{corr}).
It is easy to see that \begin{align*}
\exp\left(\gamma{\eta}(0,0)-\frac{\gamma^2}{2}+\beta\gamma\right)\hat{W}_{N-1,S_N}^{\beta,\gamma}&=e^{\beta\gamma}P_{S'}\left[\tilde{\zeta}_N(\gamma,S')e^{\beta\gamma L_{N-1}(S,S')}:S'_N=S_N\right]\\
&\hspace{-10em}=P_{S'}\left[ \tilde{\zeta}_N(\gamma,S')\prod_{k=0}^{N-1}\left(1+\left(e^{\beta\gamma}-1\right){\bf 1}\{S_k=S_k'\}\right): S_N'=S_N \right]\\
&\hspace{-10em}=P_{S'}\left[ \tilde{\zeta}_N(\gamma,S')\sum_{k=1}^{N}\sum_{0= j_0<\cdots<j_{k}=N}\left(e^{\beta\gamma}-1\right)^k:S_{j_i}'=S_{j_i},i=0,\cdots,k \right]\\
&\hspace{-10em}=\sum_{k=1}^{N}\left(e^{\beta\gamma}-1\right)^k\sum_{0= j_0<\cdots<j_{k}=N}P_{S'}\left[ \tilde{\zeta}_N(\gamma,S'):S_{j_i}'=S_{j_i},\ i=0,\cdots,k \right],
\end{align*}
where \begin{align*}
\tilde{\zeta}_N(\gamma,S')=\prod_{i=0}^{N-1}\exp\left( \gamma\eta(i,S_i')-\frac{\gamma^2}{2} \right).
\end{align*}
Then, we have that \begin{align*}
\Theta(\beta,\gamma,r)&=\sum_{N\geq 1}\sum_{k=1}^{N-1}\sum_{0= j_0<\cdots <j_{k}=N}\left(e^{\beta\gamma}-1\right)^ke^{-rj_k}\prod_{i=1}^{k}{W}_{j_i,S_{j_i}}^{j_{i-1},S_{j_{i-1}}}(\gamma)\\
&=\sum_{k\geq 1}\left(e^{\beta\gamma}-1\right)^k\sum_{\begin{smallmatrix}1\leq \ell_1,\cdots,\ell_k<\infty \end{smallmatrix}}\prod_{i=1}^ke^{-r \ell_i}{W}_{j_i,S_{j_i}}^{j_{i-1},S_{j_{i-1}}}(\gamma),
\end{align*}
where $\ell_i=j_i-j_{i-1}$ for $i=1,\cdots,k$ in the second line and
\begin{align*}
W^{j_{i-1},S_{j_{i-1}}}_{j_i,S_{j_i}}(\gamma)=P_{S'}^{S_{j_{i-1}}}\left[ \exp\left(\gamma \sum_{m=0}^{\ell_i-1}\eta(j_{i-1}+m,S'_{m})-\frac{\ell_{i}\gamma^2}{2} \right):S_{\ell_i}'=S_{j_i} \right].
\end{align*}
Let \begin{align*}
F_k(\gamma,r,S)=\sum_{\begin{smallmatrix}1\leq \ell_1,\cdots,\ell_k<\infty \end{smallmatrix}}\prod_{i=1}^ke^{-r \ell_i}{W}_{j_i,S_{j_i}}^{j_{i-1},S_{j_{i-1}}}(\gamma).
\end{align*}
{\bf Conjecture}
The limit \begin{align*}
\Phi(\gamma,r)=\lim_{k\to \infty}\frac{1}{k}\log F_k(\gamma,r,S)
\end{align*}
exists $P_SQ$-almost surely and is non-random. Moreover, $\Phi(\gamma,r)$ is continuous in $r\geq 0$.
If the conjecture is true, then we can prove (\ref{corr}).
\iffalse{
We will give two examples of $\gamma$ for which (\ref{bia}) seems, at a glance, to be true.
\begin{enumerate}[(i)]
\item $0<\gamma<\beta_1'$: Since it is known that $\beta_1\leq (\beta_1')^2$, we can take $\gamma<\beta_1'$ such that $\beta\gamma>\beta_1'$. Then, Theorem \ref{main1} implies that \begin{align*}
Q[W_N^{\beta,\gamma}|Y]\text{ grows exponentially }P_Y\text{-a.s.}
\end{align*}and $W_N^{0,\gamma}$ is in the weak disorder phase, so we may expect that $W_N^{\beta,\gamma}$ grows exponentially $Q$-almost surely. Actually, this expectation is correct when the underlying graph is a $d$-ary tree.
We consider branching random walks on $\mathbb{R}$.
Let $\mathbb{T}$ be the $d$-ary tree and $\{S_v:v\in\mathbb{T}\}$ be the positions of particles indexed by the vertices of the tree. We denote by $\{\eta_v:v\in \mathbb{T}\}$ i.i.d.\,random variables with $Q[\exp(-\beta \eta)]=\exp(\lambda(\beta))<\infty$ for $\beta\geq 0$ which describe the increments of the particles. Then, it is known that \begin{align*}
W_n=\frac{1}{d^n}\sum_{|v|=n}\exp\left( -\beta S_v-n\lambda(\beta) \right)
\end{align*}
is a positive martingale. Also, we can regard $W_n$ as the partition function of directed polymers in random environment on the $d$-ary tree as follows:\begin{align*}
W_n=E_X\left[\exp\left( -\sum_{i=1}^n\beta\eta(i,X_i)-n\lambda(\beta) \right)\right],
\end{align*}
where $\{X_n:n\geq 1\}$ is the directed simple random walk on $\mathbb{T}$ starting from the origin.
In \cite{BufPatPul}, the critical point $\beta_1'=\beta_2'$ of the phase transition is given when $d=2$ and $\eta$ has an exponential distribution with mean $\lambda$, that is, $\lambda (\beta)=\frac{\lambda}{\lambda+\beta}$: $\beta_1'$ is the unique solution to\begin{align*}
\log \frac{2\lambda}{\lambda+\beta}=-\frac{\beta}{\lambda+\beta}.
\end{align*}
\end{enumerate}
On the other hand,
}\fi
}\fi
{\bf Acknowledgement:}
This research was supported by JSPS Grant-in-Aid for Young Scientists (B) 26800051.
\end{document} |
\begin{document}
\title{Nested distance for stagewise-independent processes}
\begin{abstract}
We prove that, for two discrete-time stagewise-independent processes with a stagewise metric,
the nested distance is equal to the sum of the Wasserstein distances
between the marginal distributions of each stage.
\end{abstract}
\tableofcontents
\section{Introduction}
A usual approach when solving a multi-stage stochastic programming problem is to approximate the underlying probability distribution by a \emph{scenario tree}.
Once we obtain this approximation, the resulting problem becomes a deterministic optimization problem with an extremely large number of variables and constraints.
Most of the standard algorithms for solving it depend on the convexity properties of the objective function and constraints:
for example the Cutting Plane (or L-Shaped), Trust Region, and Bundle methods, for the two-stage case;
and Nested Cutting Plane, Progressive Hedging and SDDP, for the multi-stage case,
see~\cite{birge2011introduction,ski2003stochastic}.
Although all those methods are very effective,
in practice their performance depends heavily on the size of the scenario tree.
Ideally, we seek the best approximation with the least number of scenarios,
since a large number of scenarios can severely impact the computational time.
There are two main available techniques for the scenario generation:
those based on sampling methods (like Monte Carlo) and those based on optimal scenario generation using probability metrics.
The latter is the subject of this article.
In particular, the widely used SDDP method~\cite{pereira1991multi} depends on an additional hypothesis about the underlying uncertainty:
the \emph{stagewise independence property}~\cite{shapiro2011analysis}.
We postpone a precise definition to section~\ref{sec:swi},
but we note that such a property is crucial for the significant computational cost reduction of the SDDP algorithm.
Our aim here is to show that, analogously, stagewise independence
allows for a similar reduction of the computation time in a related problem regarding the optimal scenario generation technique.
\subsection{Optimal scenario generation}
Optimal scenario generation aims to approximate,
in a reasonable sense for stochastic optimization,
a general probability distribution by a discrete one with a fixed number of support points.
There are several probability metrics that could be used as objective functions for the optimal probability discretization:
an extensive list with 49 examples can be found in~\cite[\textsection 14]{DezaDeza201410}.
Among all those probability metrics, the Wasserstein distance stands out as a convenient one, \cite{pflug2001scenario,heitsch2003scenario,dupavcova2003scenario},
since under some mild regularity conditions it provides an upper bound
on the absolute difference between the optimal values
of a two-stage stochastic optimization problem with distinct probability distributions~\cite[section 2 -- page 242]{romisch1991stability}.
A generalization of the Wasserstein distance for the multi-stage case
which has an analogous bound for the difference between optimal values is the Nested Distance
developed in~\cite{pflug2009version,pflug2012distance,pflug2014multistage}.
The standard algorithm for evaluating it is based on a dynamic programming problem~\cite{KovacevicP15},
whose intermediate subproblems are similar to (conditional) Wasserstein distances.
However, this is prohibitive for general distributions,
since the number of intermediate subproblems grows with the number of scenarios,
which is generally exponential on the number of stages.
For practical applications, optimal scenario generation using the Nested Distance is therefore very limited.
In this paper we show that, for stagewise independent distributions,
it is possible to reduce dramatically the computational burden of evaluating the Nested Distance.
Actually, we obtain a stronger result relating the Nested and Wasserstein distances:
We prove that the Nested distance is equal to the sum of the Wasserstein distances
between the marginal distributions of each stage.
In particular, the number of subproblems required for evaluating the Nested Distance is now equal to the number of stages,
and each subproblem can be solved independently and very effectively by calculating the Wasserstein distances.
This result supports a new scenario reduction method that preserves the stagewise independence property,
which was compared to the standard Monte Carlo approach in~\cite[\textsection Appendix A]{shapiro2015guidelines}.
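As an illustration of the statement, the right-hand side of the identity is cheap to compute. A minimal sketch with made-up stage marginals (the stagewise metric is taken to be $|\cdot|$ on $\mathbb{R}$, and $W_1$ is evaluated through the one-dimensional CDF formula $W_1=\int|F-G|$):

```python
def wasserstein1(xs, ps, ys, qs):
    """W_1 between two discrete distributions on the real line,
    via the CDF formula W_1 = integral of |F - G|."""
    events = sorted(set(xs) | set(ys))
    F = G = w1 = 0.0
    for a, b in zip(events, events[1:]):
        F += sum(p for x, p in zip(xs, ps) if x == a)
        G += sum(q for y, q in zip(ys, qs) if y == a)
        w1 += abs(F - G) * (b - a)
    return w1

# two 3-stage stagewise-independent processes, given by their stage marginals
stagesP = [([0.0, 1.0], [0.5, 0.5]), ([0.0, 2.0], [0.5, 0.5]), ([1.0], [1.0])]
stagesQ = [([0.0, 1.0], [0.5, 0.5]), ([1.0], [1.0]), ([0.0, 2.0], [0.5, 0.5])]

# by the result above, the nested distance reduces to the sum of the
# per-stage Wasserstein distances between the marginals
nd = sum(wasserstein1(x, p, y, q)
         for (x, p), (y, q) in zip(stagesP, stagesQ))
# here the first stage contributes 0 and each of the other two contributes 1
assert abs(nd - 2.0) < 1e-12
```

This replaces a dynamic program over the whole scenario tree by one independent $W_1$ computation per stage.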
\subsection{Organization}
In this paper, we focus on the case of evaluating the nested distance between discrete-time, discrete-state stochastic processes.
This is not very restrictive, since in most cases the best we can do is to produce very large samples and compute the nested distance from them, due to the complexity of the nested distance formula.
In section~\ref{sec:w_nd}, we will review the definitions of the Wasserstein distance and the Nested distance.
We also present the usual tree representation of discrete-time stochastic process
as a motivation for a matrix representation of the linear problems defining both distances.
Then, in the following section, we recall the definition of stagewise independence for processes,
and observe how this assumption simplifies the trees corresponding to them.
This suggests an analog simplification for the Nested Distance picture,
which we will prove correct in section~\ref{sec:3stage} in the fundamental 3-stage setting.
Finally, in section~\ref{sec:multistage},
we recall the different equivalent linear programming formulations of the Nested distance.
Then, we define the subtree distance, which will be,
along with the intuition developed in the 3-stage case,
the fundamental tool for proving our result.
We thank professor Tito Homem-de-Mello, Universidad Adolfo Ibanez, and Erlon C. Finard, Federal University of Santa Catarina, for the enlightening discussions at the XIV International Conference on Stochastic Programming, which encouraged us to write this paper.
We would like to show our gratitude to Joari P. da Costa, Brazilian Electrical System Operator (ONS), for the assistance and comments that greatly improved the manuscript.
We are also grateful to Alberto S. Kligerman, ONS, for the opportunity to conduct this research.
\input wasserstein_nd
\input swi
\input 3stage
\input multistage
\input example
\end{document} |
\begin{document}
\thispagestyle{empty}
\title{Dynamical Cocycles with Values in the Artin Braid Group}
\SBIMSMark{1997/5}{April 1997}{}
\NI{\bf Abstract:}
By considering the way an $n$-tuple of points in the 2-disk are linked
together under iteration of an orientation preserving diffeomorphism,
we
construct a dynamical cocycle with values in the Artin braid group. We
study
the asymptotic properties of this cocycle and derive a series of
topological
invariants for the diffeomorphism which enjoy rich properties.
\section{Introduction}
When one is concerned with the study of a group of automorphisms on a
probability space $(X,{\cal B}, \mu)$, the knowledge of some cocycle
associated to this group is useful. Indeed, cocycles carry into a
simple, well understood, target group $G$ the main dynamical features.
By using subadditive functions on $G$, for instance left-invariant
metrics, one is able to define asymptotic invariants with relevant
dynamical properties. As an illustration, see the Oseledec theory of
Lyapunov exponents (\cite{Oseledec}).
In this paper, we start by recalling some basic definitions related
to cocycles in general and we state some nice asymptotic properties
when a subadditive function on the target group is given. Namely, we
show the existence of asymptotic invariants and give conditions for
their topological invariance (section $2$).
\NI Section $3$ deals with the study of orientation preserving
$C^1$-diffeomorphisms of the $2$-disk which preserve a non-atomic
measure. Given an $n$-tuple of distinct points in the disk, we
construct on the group of diffeomorphisms we are considering a
cocycle with values in the Artin braid group $B_n$. Indeed, we show
that given a diffeomorphism there is a well-defined way to associate a
braid to an $n$-tuple of orbits. This construction generalizes a
well-known construction for periodic orbits.
\NI Despite the relative complexity of the Artin braid group, there
are naturally defined subadditive functions on $B_n$. Therefore, we can
consider the asymptotic invariants associated to these cocycles. These
invariants turn out to be topological invariants that we can relate to
other well-known quantities such as the Calabi invariant and the
topological entropy (section 4). In this last case, we generalize a
lower bound on the entropy due to Bowen (see \cite{Bowen}).
\section{Cocycles}
\subsection {Asymptotic behaviour}
Let $(X,{\cal B}, \mu)$ be a probability space,
$Aut (X, \mu)$, the group of all its automorphisms, i.e. invertible
measure preserving transformations and $G$ a topological group (with
identity element $e$).
For any subgroup $\Gamma$ of $Aut(X, \mu)$, we say that a measurable
map
$\alpha \,:\, X\times \Gamma \to G$
is a {\it cocycle} of the dynamical system $(X, {\cal B}, \mu, \Gamma)$
with values in $G$ if for all $\gamma_1$ and $\gamma_2$ in $\Gamma$ and
for $\mu$-a.e. $x$ in $X$:
$$ \alpha(x, \gamma_1\gamma_2)\,=\, \alpha (x,
\gamma_1)\,\alpha(\gamma_1 x, \gamma_2).$$
\NI A continuous map $\theta : G\to \R^+$, which satisfies, for all
$g_1$ and $g_2$ in $G$:
$$\theta (g_1g_2)\,\,\leq \,\,\theta (g_1) \,+\, \theta(g_2)$$
is called a {\it subadditive function} on $G$.
\NI For example, if the group $G$ is equipped with a right (or left)
invariant metric $d$, then it is clear that the map:
$$\begin{array}{rcl}
G & \to & \R^+\\
g & \mapsto & d(g , e)
\end{array}$$
is a subadditive function.
The following Theorem is a straightforward application of the
subadditive ergodic theorem (see for instance \cite {Walter}).
\begin{thm}
Let $\alpha$ be a cocycle of the dynamical system $(X, {\cal B}, \mu,
\Gamma)$ with values in a group $G$ and $\theta$ a subadditive
function on $G$.
Assume that there exists an automorphism $\gamma$ in $\Gamma$ such
that the map:
$$\begin{array}{rcl}
X & \to & \R^+\\
x & \mapsto & \theta(\alpha (x, \gamma))
\end{array}$$
is integrable. Then:
\begin{enumerate}
\item For $\mu$-a.e. $x$ in $X$, the sequence ${{1}\over{n}}
\theta(\alpha (x, \gamma^n))$ converges, as $n$ goes to $+\infty$, to
a limit denoted by $\Theta(x, \gamma, \alpha)$.
\item The map:
$$\begin{array}{rcl}
X & \to & \R^+\\
x & \mapsto & \Theta(x, \gamma, \alpha)
\end{array}$$
is integrable and we have :
$$\lim_{n\to +\infty} {{1}\over{n}}\int \theta(\alpha(x, \gamma^n))d\mu
\,\,=\,\,
\inf_{n\geq 1} {{1}\over{n}}\int \theta(\alpha(x,\gamma^n))d\mu
\,\,=\,\,
\int \Theta(x, \gamma, \alpha) d\mu.$$
\end{enumerate}
\end{thm}
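The almost-sure convergence in the theorem is the measure-theoretic counterpart of Fekete's lemma for deterministic subadditive sequences: if $a_{m+n}\leq a_m+a_n$, then $a_n/n$ converges to $\inf_n a_n/n$. A small numerical illustration with the subadditive sequence $a_n=n+\sqrt{n}$ (chosen only for this sketch):

```python
import math

def a(n):
    # subadditive: a(m+n) <= a(m) + a(n), since sqrt(m+n) <= sqrt(m) + sqrt(n)
    return n + math.sqrt(n)

ratios = [a(n) / n for n in range(1, 100001)]

# the ratios 1 + 1/sqrt(n) decrease monotonically, so the limit (here 1)
# coincides with the infimum over n, as in the subadditive ergodic theorem
assert all(r1 >= r2 for r1, r2 in zip(ratios, ratios[1:]))
assert min(ratios) == ratios[-1]
assert abs(ratios[-1] - 1.0) < 4e-3
```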
\NI We denote by $\Theta_\mu(\gamma, \alpha)$ the quantity $\int
\Theta(x, \gamma, \alpha) d\mu$.
\subsection {Invariance property}
Let $\alpha$ and $\beta$ be two cocycles of the dynamical system $(X,
{\cal B}, \mu, \Gamma)$ with value in a group $G$. We say that
$\alpha$ and $\beta$ are {\it cohomologous} if there exists a
measurable function $\phi:\, X \to G$ such that for all $\gamma$ in
$\Gamma$ and for $\mu$-a.e. $x$ in $X$:
$$ \beta(x, \gamma)\,=\, \phi(x)\,\alpha(x, \gamma)\,
(\phi(\gamma\,x))^{-1}.$$
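Note that, by iterating the cocycle identity, the coboundary relation automatically extends from a single automorphism to its powers; we spell out this routine verification since it is the form used in the proof of Theorem 2:

```latex
$$\beta(x, \gamma^2) \,=\, \beta(x, \gamma)\,\beta(\gamma x, \gamma)
\,=\, \phi(x)\,\alpha(x, \gamma)\,\alpha(\gamma x, \gamma)\,
(\phi(\gamma^2 x))^{-1}
\,=\, \phi(x)\,\alpha(x, \gamma^2)\,(\phi(\gamma^2 x))^{-1},$$
```

and, by induction on $n$, $\beta(x, \gamma^n) = \phi(x)\,\alpha(x,\gamma^n)\,(\phi(\gamma^n x))^{-1}$ for all $n\geq 0$.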
Consider two probability spaces $(X_1, {\cal B}_1, \mu_1)$ and
$(X_2, {\cal B}_2, \mu_2)$ and an isomorphism $h: X_1 \to X_2$
(i.e. an invertible transformation which carries the measure $\mu_1$
onto the measure $\mu_2$). Let $\Gamma_1$ be a subgroup of $Aut(X_1,
\mu_1)$ and $\Gamma_2\,=\,h\Gamma_1 h^{-1}$. Given a cocycle
$\alpha_1$ on $(X_1, {\cal B}_1, \mu_1, \Gamma_1)$, we denote by
$h\circ\alpha_1$ the cocycle defined on $(X_2, {\cal B}_2, \mu_2,
\Gamma_2 )$ by:
$$
h\circ\alpha_1(x_2, \gamma_2)\,=\, \alpha_1(h^{-1}(x_2), h^{-1}\gamma_2
h),
$$
for all $\gamma_2 \in \Gamma_2$ and for all $x_2\in X_2$.
Given two dynamical systems $(X_1, {\cal B}_1, \mu_1, \Gamma_1)$ and
$(X_2, {\cal B}_2, \mu_2, \Gamma_2)$ and given two cocycles $\alpha_1$
and $\alpha_2$ defined respectively on the first and the second system
and with values in the same group $G$, we say that the two cocycles
$\alpha_1$ and $\alpha_2$ are {\it weakly equivalent} if there exists
an isomorphism $h : X_1 \to X_2$ which satisfies:
\begin {enumerate}
\item $\Gamma_2 \,=\, h\Gamma_1 h^{-1}$,
\item $\alpha_2$ and $h\circ \alpha_1$ are cohomologous.
\end{enumerate}
\begin{thm}
{Let $\alpha_1$ and $\alpha_2$ be two cocycles defined on the
dynamical systems $(X_1, {\cal B}_1, \mu_1, \Gamma_1)$ and $(X_2,
{\cal B}_2, \mu_2, \Gamma_2)$ respectively and with values in a group
equipped with a subadditive function $\theta$. Assume that $\alpha_1$
and $\alpha_2$ are weakly equivalent through an isomorphism $h$.
\NI If $\gamma_1\in\,\Gamma_1$ and $\gamma_2 = h\gamma_1 h^{-1}$
satisfy that both maps:
$$x_1 \mapsto \theta\,(\alpha_1(x_1, \gamma_1))\,\,\,\,\,\,\,\,\,\,{\it
and}\,\,\,\,\,\,\,\,\,\,x_2 \mapsto \theta\,(\alpha_2(x_2, \gamma_2))$$
are integrable respectively on $X_1$ and $X_2$, then:
$${\Theta}_{\mu_1}(\gamma_1, \alpha_1)\,=\, {\Theta}_{\mu_2}(\gamma_2,
\alpha_2).$$
}\end{thm}
\NI Before proceeding to the proof of the theorem, let us recall a
basic lemma in ergodic theory:
\begin{lem} Let $(X, {\cal B}, \mu)$ be a probability space, $\gamma$
an automorphism on $X$, $G$ a group, $\theta$ a subadditive function
on $G$ and $\phi: X\to G$ a measurable map.
Then, for $\mu$-a.e. $x$ in $X$:
$$
\liminf_{n\to +\infty} \theta(\phi(\gamma^n x)) \,<\,\,+\infty.$$
\end{lem}
\begin{pf} Let $N$ be a positive integer and
$${\cal E}_N\,\,=\,\,\{x\in X\,\,\vert\,\, \theta(\phi(x))\leq N\}.$$
\NI Since the map $\phi$ is measurable, the set ${\cal E}_N$ is
measurable and has positive measure for $N$ large enough. From the
Poincar\'e recurrence theorem (see for instance \cite{Walter}) we know
that, for $\mu$-a.e. $x$ in ${\cal E}_N$, the orbit $\gamma^n(x)$
visits ${\cal E}_N$ infinitely often and thus:
$$\liminf_{n\to +\infty} \theta(\phi(\gamma^n x)) \,\leq\,\,N.$$
\NI Since the union $\cup_{N\geq0} {\cal E}_N$ is a set of
$\mu$-measure 1 in $X$, the lemma is proved.
\end{pf}
\NI {\bf Proof of Theorem 2:} The weak equivalence of the cocycles
$\alpha_2$ and $\alpha_1$ leads to the existence of some measurable
function $\phi : X_2 \to G$ such that, for all $n \geq 0$, for all
$\gamma_2$ in $\Gamma_2$ and for $\mu_2$-a.e. $x_2$ in $X_2$:
$$
\alpha_2(x_2, \gamma_2^n) \,=\, \phi(x_2)\,h\circ\alpha_1(x_2,
\gamma_2^n)\,(\phi(\gamma_2^nx_2))^{-1}.$$
\NI The subadditivity property of the map $\theta : G\to \R^+$ gives
us:
$$
\vert \theta(\alpha_2(x_2, \gamma_2^n))\,-\, \theta(h\circ\alpha_1(x_2,
\gamma_2^n))\vert \,\,\leq\,\,
\theta(\phi(x_2))\,+\, \theta(\phi(\gamma_2^n x_2)^{-1}).
$$
\NI From Lemma 1, it follows that for all $\gamma_2$ in $\Gamma_2$, and
for $\mu_2$-a.e. $x_2$ in $X_2$:
$$
\liminf_{n\to+\infty}\vert \theta(\alpha_2(x_2, \gamma_2^n))\,-\,
\theta(h\circ\alpha_1(x_2, \gamma_2^n) )\vert \,<\, +\infty.$$
\NI Thus:
$$
\liminf_{n\to+\infty}\vert {{1}\over{n}}\theta(\alpha_2(x_2,
\gamma_2^n))\,-\,{{1}\over{n}} \theta(h\circ\alpha_1(x_2, \gamma_2^n)
)\vert \,=\,0.
$$
Since the map $\theta\,(\alpha_i(\,.\,, \gamma_i))$ is integrable for
$i=1$ and $i=2$, it follows from Theorem 1 that for $\mu_i$-a.e. $x$
in $X_i$:
$$
\lim_{n\to +\infty}{{1}\over{n}} \theta\,(\alpha_i(x, \gamma_i^n)) =
\Theta (x, \gamma_i, \alpha_i).
$$
Recalling that the definition of $h\circ\alpha_1(x_2, \gamma_2^n)$ is
$\alpha_1(h^{-1}(x_2), h^{-1}\gamma_2^n h)$, we get that, for
$\mu_2$-a.e. $x_2$ in $X_2$:
$$
\Theta(h^{-1}(x_2), \gamma_1, \alpha_1)\,\,=\,\, \Theta(x_2, \gamma_2,
\alpha_2).$$
By integrating this equality with respect to $\mu_2$, we get:
$${\Theta}_{\mu_1}(\gamma_1, \alpha_1) \,\,=\,\,
{\Theta}_{\mu_2}(\gamma_2, \alpha_2).$$
\section{Braids and Dynamics}
\subsection{The Artin braid group}
For a given integer $n>0$, the Artin braid group $B_n$ is the group
defined by the generators $\sigma_1, \dots , \sigma_{n-1}$ and the
relations:
$$\begin{array}{rcll}
\sigma_i\sigma_j & = & \sigma_j\sigma_i, & \vert i-j\vert \geq
2,\,\,\,\,\, 1\leq i, j\leq n-1\\
\sigma_i\sigma_{i+1}\sigma_i & = & \sigma_{i+1}\sigma_i\sigma_{i+1}, &
1\leq i\leq n-2.
\end{array}
$$
An element in this group is called a {\it braid}.
\NI A geometrical representation of the Artin braid group is given by
the following construction.
\NI Let $\a^2$ denote the closed unit disk in $\R^2$ centered at the
origin and $\d^2$ its interior. Let $Q_n = \{q_1, \dots , q_n\}$ be a
set of $n$ distinct points in $\d^2$ equidistributed on a diameter of
$\d^2$. We denote by $\d_n$ the $n$-punctured disk ${\a^2} \setminus
Q_n$.
\NI We define a collection of $n$ arcs in the cylinder $ {\d^2} \times
[0, 1]$:
$$\Gamma_i = \{(\gamma_i(t), t) \,,\,t\in\,[0,1]\}, \,\,\,\,\,\,\,
1\leq i \leq n$$
joining the points in $Q_n \times \{ 0\}$ to the points in $Q_n \times
\{1\}$ and such that $\gamma_i(t) \neq \gamma_j(t)$ for $i\neq j$.
\NI We call $\Gamma = \cup_{i=1,\dots , n}\Gamma_i$ a {\it geometrical
braid} and say that two geometrical braids are equivalent if there
exists a continuous deformation from one to the other through
geometrical braids. The set of equivalence classes is the Artin braid
group. The composition law
is given by concatenation as shown in Figure 1 and a generator
$\sigma_i$ corresponds to the geometrical braid shown in Figure 2.
\begin{figure}
\caption{The concatenation of two geometrical braids}
\end{figure}
\begin{figure}
\caption{Representation of the $i$-th generator $\sigma_i$ of the Artin
braid group}
\end{figure}
\NI
Let $\H$ be the subset of homeomorphisms from $\a^2$ onto itself which
are the identity map in a neighborhood of the boundary of the disk.
The subgroup of elements which leave the set $Q_n$ globally invariant is
denoted by ${\rm Homeo}({\a^2}, \partial {\a^2}, Q_n)$. The following
theorem, due to J. Birman, shows the relation between braids and
dynamics.
\begin{thm}[\cite{Birman}]{The Artin braid group $B_n$ is isomorphic to
the group of automorphisms of $\pi_1({\d_n})$ which are induced by
elements of ${\rm Homeo}({\a^2}, \partial {\a^2}, Q_n)$, that is to say
the group of isotopy classes of ${\rm Homeo}({\a^2}, \partial {\a^2},
Q_n)$.}
\end{thm}
\NI This isomorphism ${\cal I}$ can be described as follows. Let
$\{x_1, \dots , x_n\}$ be a basis for the free group $\pi_1({\d_n})$,
where $x_i$, for $1\leq i\leq n$, is represented by a simple loop which
encloses the puncture $q_i$ (see Figure 3). A generator
$\sigma_i$ of $B_n$ induces on $\pi_1({\d_n})$ the following
automorphism ${\cal I}(\sigma_i)$ :
$${\cal I}(\sigma_i)\,\left \{ \begin{array}{lcl}
x_i & \mapsto & x_i\, x_{i+1}\, {x_i}^{-1}\\
x_{i+1} & \mapsto & x_i\\
x_j & \mapsto & x_j \,\,\,\,\,\,{\rm if}\,\, \,\,j\,\neq\,
i\,,\, i+1.
\end{array} \right.$$
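The automorphisms ${\cal I}(\sigma_i)$ can be checked mechanically. The following self-contained Python sketch (our own illustration; the encoding of free-group words as tuples of signed integers is an assumption of convenience, not from the paper) implements the right action on freely reduced words and verifies the braid relation $\sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2$ on the generators of $\pi_1({\d_n})$:

```python
def reduce_word(w):
    """Freely reduce a word; letters are nonzero ints, -a is the inverse of a."""
    out = []
    for a in w:
        if out and out[-1] == -a:
            out.pop()          # cancel adjacent x x^{-1}
        else:
            out.append(a)
    return tuple(out)

def act_sigma(w, i, inverse=False):
    """Right action of sigma_i (or its inverse) on a reduced free-group word."""
    if not inverse:
        # sigma_i : x_i -> x_i x_{i+1} x_i^{-1},  x_{i+1} -> x_i
        images = {i: (i, i + 1, -i), i + 1: (i,)}
    else:
        # sigma_i^{-1} : x_i -> x_{i+1},  x_{i+1} -> x_{i+1}^{-1} x_i x_{i+1}
        images = {i: (i + 1,), i + 1: (-(i + 1), i, i + 1)}
    out = []
    for a in w:
        img = images.get(abs(a), (abs(a),))
        out.extend(img if a > 0 else tuple(-b for b in reversed(img)))
    return reduce_word(out)

def act_braid(w, braid):
    """braid: list of (i, inverse) pairs, applied left to right (right action)."""
    for i, inv in braid:
        w = act_sigma(w, i, inv)
    return w

# Check sigma_1 sigma_2 sigma_1 = sigma_2 sigma_1 sigma_2 on x_1, x_2, x_3.
lhs = [(1, False), (2, False), (1, False)]
rhs = [(2, False), (1, False), (2, False)]
for g in (1, 2, 3):
    assert act_braid((g,), lhs) == act_braid((g,), rhs)
print("braid relation holds on the generators")
```

The same routine can be used to test the far-commutation relations $\sigma_i\sigma_j = \sigma_j\sigma_i$ for $\vert i-j\vert\geq 2$.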
\NI The action of the Artin braid group on $\pi_1({\d_n})$ is a right
action. We denote by $wb$ the image of $w\in \pi_1({\d_n})$ under the
automorphism induced by the braid $b$.
\begin{figure}
\caption{The generators of \protect{$\pi_1({\d_n})$}}
\end{figure}
We now introduce two standard subadditive functions on the Artin
braid group. Given a group presented by a finite number of
generators and relations and $g$ an element of the group, we denote
by $L(g)$ the minimal length of the word $g$ written with the
generators and their inverses.
We can define two subadditive functions on $B_n$ by measuring lengths
of words either in the Artin braid group or in the fundamental group of
the punctured disk $\d_n$. More precisely, given an element $b$ in
$B_n$, the first subadditive function $\theta_1$ is defined by:
$$\theta_1(b) \,\,= \,\, L(b).$$
Notice that by setting $d(b, c)\,=\, L(bc^{-1})$ for all $b$ and $c$ in
$B_n$, we define a right invariant metric $d$ on $B_n$.
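Right invariance of $d$ is a one-line check from the definition (and the triangle inequality follows from the subadditivity of $L$): for all $a$, $b$ and $c$ in $B_n$,

```latex
$$d(ba, ca)\,=\, L\big(ba\,(ca)^{-1}\big)\,=\, L\big(b\,a\,a^{-1}c^{-1}\big)
\,=\, L(bc^{-1})\,=\, d(b, c).$$
```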
\NI
The second subadditive function $\theta_2$ is defined, for all $b$ in
$B_n$, by:
$$ \theta_2(b)\,\,=\,\, \sup_{i=1,\dots, n}\log L(x_ib),$$
where the $x_i$'s are the generators of the fundamental group of $\d_n$
defined as above.
\NI It is plain that the two subadditive functions are related as
follows:
$$ \theta_2(b)\,\,\leq \,\,(\log 3)\theta_1(b),$$
for all $b$ in $B_n$.
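The constant $\log 3$ can be traced as follows (a short argument we supply for the reader): $\sigma_i^{\pm 1}$ sends each generator $x_j$ to a word of length at most $3$, namely $x_ix_{i+1}x_i^{-1}$, $x_i$, $x_{i+1}^{-1}x_ix_{i+1}$, $x_{i+1}$ or $x_j$. Hence, writing $b$ with $\theta_1(b) = L(b)$ letters and applying the generators one at a time:

```latex
$$L(x_i b)\,\,\leq\,\, 3^{\,L(b)}, \qquad {\rm so} \qquad
\theta_2(b)\,=\,\sup_{i=1,\dots,n}\log L(x_i b)\,\,\leq\,\,(\log 3)\,\theta_1(b).$$
```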
\NI In the sequel, we shall focus on a particular subgroup of the Artin
braid group, that we now define. Let $b$ be an element in $B_n$
represented by a geometrical braid:
$$\Gamma = \cup_{i=1,\dots,n}\{(\gamma_i(t), t) \,,\,t\in\,[0,1]\}
\subset {\d^2} \times [0, 1].$$
We say that $b$ is a {\it pure braid} if for all $i\,=\,1,\dots,n$,
$\gamma_i(0)\,=\,\gamma_i(1)$.
\NI We denote by ${\cal F}_n$ the set of all pure braids in $B_n$ (see
for instance \cite{Zieschang}).
\NI {\bf Remark:} The pure braid group can be interpreted as follows.
Consider a braid $b$ in ${\cal F}_n$, and $\Gamma = \cup_{i=1,\dots ,
n} \{(\gamma_i(t), t), t\in [0, 1]\}$ a geometrical braid whose
equivalence class is $b$. We can associate to $\Gamma$ the loop
$(\gamma_1(t), \dots ,\gamma_n(t))_{t\in [0, 1]}$ in the space ${{\bf
D^2}\times\dots\times {\bf D^2} }\setminus \Delta$, where $\Delta$
is the {\it generalized diagonal }:
$$ \Delta \,\,=\,\, \{(p_1, \dots, p_n) \in {{\bf
D^2}\times\dots\times {\bf D^2} },\,\vert \,\,\,\,\exists\, i\neq j
\,\,\, {\rm and}\,\,\, p_i = p_j\}.
$$
\NI In other words, there exists an isomorphism $\cal J$ between the
pure braid group ${\cal F}_n$ and the fundamental group
$\pi_1({{\bf D^2}\times\dots\times {\bf D^2} }\setminus \Delta)$.
\subsection{A cocycle with values in the Artin braid group }
\NI Let $\phi$ be an element in $\H$, and let $P_n\,=\,(p_1,\dots,
p_n)$ be an $n$-tuple of pairwise distinct points in $\d^2$. The
following procedure describes a natural way to associate a pure braid
in ${\cal F}_n$ to the data $\phi$ and $P_n$ (see Figure 4).
\begin{enumerate}
\item For $1\leq i\leq n$, we join the point $(q_i, 0)$ to the
point $(p_i,{1\over3} )$ in the cylinder $ {\d^2} \times [0, {1 \over
3}]$ with a segment.
\item For $1\leq i\leq n$, we join the point $(p_i, {1\over3} )$ to
the point $(\phi(p_i),{2\over3} )$ in the cylinder $ {\d^2} \times [{1
\over 3}, {2\over 3}]$ with the arc $(\phi_{(3t-1)}(p_i), t)_{t\in [{1
\over 3}, {2\over 3} ]}$, where $(\phi_\tau)_{\tau\in [0, 1]}$ is an
isotopy from the identity to $\phi$ in $\H$.
\item For $1\leq i\leq n$, we join the point $(\phi(p_i), {2\over3}
)$ to the point $(q_i, 1)$ in the cylinder $ {\d^2} \times [{2 \over
3}, 1]$ with a segment.
\end{enumerate}
\NI The concatenation of this sequence of arcs provides a geometrical
braid. The equivalence class of this geometrical braid is a braid in
${\cal F}_n$ that we denote by $\beta (P_n; \phi)$.
\begin{figure}
\caption{Construction of $\beta(P_n; \phi)$}
\end{figure}
\NI It is clear that the above procedure is well defined if and only if
for all $1\leq i<j\leq n$:
\begin{enumerate}
\item[{(i)}] the segment joining the points $(q_i, 0)$ and $(p_i,
{1\over3})$ does not intersect the segment joining the points $(q_j,
0)$ and $(p_j,{1\over3} )$, and
\item[{ (ii)}] the segment joining the points $(\phi(p_i),{2\over3} )$
and $(q_i, 1)$ does not intersect the segment joining the points
$(\phi(p_j),{2\over3} )$ and $(q_j, 1)$.
\end{enumerate}
\NI For $1\leq i\neq j\leq n$, consider in $\R^2\times\dots\times \R^2$
the codimension 2 plane $ P_{i,j}$ of points $(z_1, \dots, z_n)$ such
that $z_i=z_j$. Let $H_{i,j}$ be the hyperplane which contains $P_{i,
j}$ and the point $(q_1, \dots , q_n)$. The plane $P_{i, j}$ splits
the hyperplane $H_{i, j}$ into two components; we denote by $H^0_{i, j}$
the closure of the component which does not contain the point $(q_1,
\dots,q_n)$. Let $\Omega^{2n}$ be the open dense subset of $\dn$
defined by:
$$\Omega^{2n}\,\,=\,\, {\dn} \setminus \bigcup_{1\leq i \neq j \leq n}
H^0_{i, j}\cap {\dn}.$$
\NI We can reformulate conditions (i) and (ii) as follows : the braid
$\beta (P_n; \phi)$ is defined if and only if both $P_n$ and
$(\phi,\dots,\phi) (P_n)$ belong to $\Omega^{2n}$.
\NI {\bf Remark 1:} This procedure is locally constant where it is
defined. More precisely, if $\beta (P_n; \phi)$ is defined and if
$P'_n$ is close enough to $P_n$ in ${\dn}\setminus \Delta$, then
$\beta (P'_n; \phi)$ is also defined and equal to $\beta (P_n; \phi)$.
\NI {\bf Remark 2:} Since the set $\H$ is contractible, this procedure
does not depend on the isotopy $\phi_t$.
\NI In order to construct a cocycle with values in the Artin braid
group, we also need to define the invariant measures we are going to
consider. We say that an $n$-tuple of probability measures
$\lambda_1 , \dots , \lambda_n$ on $\a^2$ is {\it coherent} if the
subset $\Omega^{2n}
$ has measure 1 with respect to the product measure
$\lambda_1\times\dots\times\lambda_n$. Notice that this is the case
when the measures $\lambda_i$ are non atomic.
\begin{lem} Let $\phi$ be a map in $\H$ which preserves the coherent
probability measures $\lambda_1,\dots , \lambda_n$. Then, the map :
$$
\begin{array}{rll}
{\dn} & \longrightarrow & {\cal F}_n\\
P_n & \longmapsto & \beta (P_n; \phi)
\end{array}
$$
is measurable with respect to the product measure $\lambda_1 \times
\dots \times \lambda_n$.
\end{lem}
\begin{pf} The map is continuous on $\Omega^{2n}$ which has measure 1
with respect to the product measure
$\lambda_1\times\dots\times\lambda_n$.
\end{pf}
There is a natural injection $j$ from the set of maps of $\a^2$ into
itself to the set of maps of $\dn$ into itself. Namely:
$$j(\phi)\,=\,(\phi,\dots,\phi).$$
\NI Let $\lambda_1,\dots ,\lambda_n$ be coherent probability measures
and $\HU$ the set of maps in $\H$ which preserve the measures
$\lambda_1,\dots ,\lambda_n$.
\NI We call $\Gamma$ the image of $\HU$ by $j$ in $Aut({\dn},
\lambda_1\times\dots\times\lambda_n)$.
\NI The following proposition is straightforward:
\begin{prop} The map:
$$
\begin{array}{rll}
{\dn}\times \Gamma & \longrightarrow & {\cal F}_n\\
(P_n,j(\phi) ) & \longmapsto & \beta(P_n; \phi)
\end{array}
$$
is a cocycle of the dynamical system $({\dn}, {\cal B}({\dn}),
\lambda_1\times \dots\times \lambda_n, \Gamma)$ with values in the
group ${\cal F}_n$ (here ${\cal B}(\dn)$ stands for the Borel
$\sigma$-algebra of $\dn$).
\end{prop}
\subsection{Asymptotic limits and invariance}
Let $\D$ denote the set of $C^1$-diffeomorphisms from $\a^2$ into
itself which are the identity in a neighborhood of the boundary. The
following result is fundamental for our purpose.
\begin{prop} If the map $\phi$ is in $\D$ then there exists a positive
number $K(\phi, n)$ such that for all $P_n$ where $\beta (P_n;
\phi)$ is defined:
$$ \theta_1(\beta(P_n; \phi)) \leq K(\phi, n).$$
\end{prop}
\begin{pf} From the isomorphism ${\cal J}$ between ${\cal F}_n$ and
$\pi_1({{\bf D^2}\times\dots\times {\bf D^2} }\setminus \Delta)$ (see
section 3.1), we know that when the braid $\beta(P_n; \phi)$ is
defined, it can be seen as a homotopy class of a loop in ${{\bf
D^2}\times\dots\times {\bf D^2} }\setminus \Delta$. The fundamental
group $\pi_1({{\bf D^2}\times\dots\times {\bf D^2} }\setminus \Delta)$
possesses a finite set of generators $(e_k)$. These generators can be
expressed with the generators $\sigma_i$ of the braid group $B_n$
through the isomorphism ${\cal J}$. It follows that the proposition
will be proved once we prove that the length of the word $
\beta(P_n; \phi)$ (seen as a homotopy class written with the system of
generators $(e_k)$) is uniformly bounded when both $P_n$ and $(\phi,
\dots ,\phi)(P_n)$ are in $\Omega^{2n}$.
\NI Consider the blowing-up set ${\cal K}$ of the generalized diagonal
$\Delta$ in $\dn$. More precisely, ${\cal K}$ is the compact set of
points:
$$
(p_1, \dots, p_i,\dots ,p_n; p'_1,\dots, p'_j, \dots, p'_n;
\Delta_{1,1}, \dots , \Delta_{i, j}, \dots , \Delta_{n,n})
$$
where, for all $1\leq i, j\leq n$, $\Delta_{i,j}$ is a line
containing $p_i$ and $p'_j$. Obviously, if $p_i \neq p'_j$, then the
line $\Delta_{i, j}$ is uniquely defined and thus ${\dn} \setminus
\Delta$ is naturally embedded in ${\cal K}$ as an open dense set.
\NI A map $\phi$ in $\H$ yields a map $j(\phi)=(\phi,\dots,\phi)$
defined and continuous on $\dn$. Its restriction to ${\bf
D^2}\times\dots\times {\bf D^2}$ leaves $\Delta$ globally invariant. We
claim that if the map $\phi$ is a $C^1$-diffeomorphism, then it can be
extended to a continuous map on ${\cal K}$. This is done as follows:
whenever for some $1\leq i, j\leq n$, we have $p_i = p'_j$, then a
line $\Delta_{i, j}$ containing $p_i=p'_j$ is mapped to the line
$d\phi(p_i)(\Delta_{i, j})$.
\NI Furthermore, if the map $\phi$ is in $\D$ we can choose an isotopy
$\phi_t$ from the identity to $\phi$ in $\D$. Thus, the map $\Psi$:
$$\begin{array}{rrll}
\Psi:& [0, 1] \times {\cal K} & \longrightarrow & {\cal K}\\
& (t, p_1, \dots , p_n) & \longmapsto & (\phi_t(p_1), \dots ,
\phi_t(p_n))
\end{array}$$
is continuous.
\NI Let ${\cal K}^0$ be the universal cover of ${\cal K}$, $\pi: {\cal
K}^0 \to {\cal K}$ the standard projection, and $\Psi^0 : [0, 1] \times
{\cal K} \to {\cal K}^0$ a lift of the map $\Psi$ ($\Psi= \pi
\circ\Psi^0$). The system of generators $(e_k)$ can be chosen so that
the projection $\pi$ restricted to a fundamental domain of ${\cal
K}^0$ is a homeomorphism onto $\Omega^{2n}$. Since ${\cal K}$ is
compact $\Psi^0([0,1]\times {\cal K})$ is also compact and consequently
covers a bounded number $k(\phi, n)$ of fundamental domains of ${\cal
K}^0$. If both
$(p_1, \dots , p_n)$ and $(\phi(p_1), \dots ,\phi(p_n))$ are in
$\Omega^{2n}$, the choice of the system of generators implies that
the length of the word $
\beta((p_1, \dots, p_n); \phi)$ written with the system of generators
$(e_k)$ is also uniformly bounded by $k(\phi, n)$. This achieves the
proof of the proposition.
\end{pf}
\begin{lem} Let $\phi$ be a map in $\D$ which preserves the coherent
probability measures $\lambda_1,\dots , \lambda_n$. Then, for $i=1$
and $i=2$ the map:
$$
\begin{array}{rll}
{\dn} & \longrightarrow & \R^+\\
P_n & \longmapsto & \theta_i(\beta (P_n; \phi))
\end{array}
$$
is integrable with respect to $\lambda_1\times\dots\times \lambda_n$.
\end{lem}
\begin{pf} The integrability is a direct consequence of Proposition 2
and of the fact that $\theta_2(b) \leq (\log 3) \theta_1(b)$ for all
$b$ in $B_n$.
\end{pf}
\NI By applying Theorem 1 to the cocycle $\beta$ and using Lemma 3, we
immediately get:
\begin{cor} Let $\phi$ be an element in $\D$ which preserves the
coherent probability measures $\lambda_1,\dots , \lambda_n$. Then for
$i=1$ and $i=2$:
\begin{enumerate}
\item For $\lambda_1\times\dots \times \lambda_n$ a.e. $P_n$ in $\dn$
the quantity ${{1}\over {N}}\theta_i(\beta(P_n; \phi^N))$ converges
when $N$ goes to $+\infty$ to a limit $\Theta^{(i)}(P_n; \phi)$.
\item
The map
$$
\begin{array}{rcl}
{\dn} & \longrightarrow & \R^+\\
P_n & \longmapsto & \Theta^{(i)}(P_n; \phi)
\end{array}
$$
is integrable on $\dn$ with respect to $\lambda_1\times \dots\times
\lambda_n$.
\end {enumerate}
\end{cor}
\NI We denote by ${\Theta}^{(i)}_{\lambda_1, \dots ,\lambda_n}(\phi)$
the integral of this function.
\begin{cor} Let $\lambda_1, \dots , \lambda_n$ and $\mu_1 , \dots ,
\mu_n$ be two sets of coherent probability measures on $\d^2$, and let
$\phi_1$ and $\phi_2$ be two elements in $\D$ which preserve the
probability measures $\lambda_1, \dots , \lambda_n$ and $\mu_1 , \dots
, \mu_n$ respectively.
\NI
Assume there exists a map $h$ in $\H$ such that :
\begin {enumerate}
\item $ h\circ \phi_1 \,=\, \phi_2\circ h$,
\item $h\ast\lambda_j\,=\, \mu_j$, for $j=1,\dots, n$.
\end{enumerate}
Then for $i=1$ and $i=2$:
$$
{\Theta}^{(i)}_ {\lambda_1, \dots , \lambda_n} (\phi_1) \,\,=\,\,
{\Theta}^{(i)}_{\mu_1 , \dots , \mu_n} (\phi_2).
$$
\end{cor}
\begin{pf} Consider the cocycle $\alpha_1$:
$$
\begin{array}{rll}
{\dn} \times j(\HU) & \longrightarrow & {\cal F}_n\\
(P_n, j(\phi_1)) & \longmapsto & \beta(P_n; \phi_1)
\end{array}
$$
and the cocycle $\alpha_2$:
$$
\begin{array}{rll}
{\dn} \times j(\K) & \longrightarrow & {\cal F}_n\\
(P_n, j(\phi_2)) & \longmapsto & \beta(P_n; \phi_2)
\end{array}
$$
\NI Thanks to Theorem 2, the corollary will be proved once we show that
$\alpha_1$ and $\alpha_2$ are weakly equivalent.
Consider the homeomorphism $j(h)$ of $\dn$ into itself. It is clear
that $j(h)\ast \lambda_1\times\dots\times \lambda_n =
\mu_1\times\dots\times \mu_n$ and that:
$$
j({\K})\,=\, j(h)j({\HU})(j(h))^{-1}.
$$
Thus it remains to prove that $\alpha_2$ and $j(h)\circ\alpha_1$ are
cohomologous.
\NI For $\lambda_1\times\dots\times\lambda_n$ a.e. $P_n$ in $\dn$, we
have:
$$
\beta(P_n; \phi_2) \,=\, \beta(P_n; h^{-1})\,\,\beta (j(h^{-1})(P_n);
\phi_1)\,\,\beta (j(\phi_1\circ h^{-1})(P_n); h).
$$
This reads:
$$
\alpha_2(P_n, j(\phi_2))\,=
\,\beta(P_n; h^{-1})\,\,j(h)\circ \alpha_1(P_n, j(\phi_2))\,\,\beta
(j(\phi_1\circ h^{-1})(P_n); h).
$$
Since the map $h^{-1}$ is in $\H$ we know from Lemma 2 that the map:
$$
\begin{array}{rll}
{\dn} & \longrightarrow & {\cal F}_n\\
P_n & \longmapsto & \beta(P_n; h^{-1})
\end{array}
$$
is measurable. This shows that $\alpha_2$ and $j(h)\circ \alpha_1$ are
cohomologous.
\end{pf}
\section{A discussion about these invariants}
\subsection {The fixed points case}
In the particular case when we consider a set of fixed points
$P_n^0=(p_1^0, \dots , p_n^0)$ of $\phi\in\D$, we get, for all $N\geq
0$:
$$
\beta (P_n^0; \phi^N)\,\,=\,\,\beta (P_n^0; \phi)^N,
$$
and consequently, for $i=1$ and $i=2$, we have:
$$
\Theta^{(i)}(P_n^0; \phi)\,\,=\,\, \theta_i(\beta (P_n^0; \phi)).
$$
For the set of Dirac measures $\delta_{p^0_1}, \dots , \delta_{p^0_n}$,
which are coherent, the invariant numbers
$\Theta^{(i)}_{\delta_{p^0_1}, \dots , \delta_{p^0_n}}$ have a clear
meaning:
\begin{itemize}
\item The invariant number $\Theta^{(1)}_{\delta_{p^0_1}, \dots ,
\delta_{p^0_n}}$ is the minimum number of generators $\sigma_i$ which
are necessary to write the word
$\beta (P_n^0; \phi)$. In the particular case of a pair of fixed points
$(p^0_1, p^0_2)$, it is the absolute value of the linking number of
these two fixed points.
\item The map $\phi$ induces a map $\phi_\star$ on the first homotopy
group of the punctured disk ${\a^2}\setminus P_n^0$. The invariant
$\Theta^{(2)}_{\delta_{p^0_1}, \dots , \delta_{p^0_n}}$ is the growth
rate of the map $\phi_\star$. It has been shown by R. Bowen
\cite{Bowen} that this growth rate is a lower bound of the topological
entropy $h(\phi)$ of the map $\phi$ \cite{AKM}.
The assumption on the map $\phi$ to be a diffeomorphism is required in
order to get a continuous map acting on the compact surface
$\hat{\d_n}$ obtained from ${\d_n}={\a^2}\setminus \{q_1, \dots ,q_n\}$
after blowing up the points $q_1, \dots ,q_n$ with the circles of
directions.
\end{itemize}
\subsection {The general case}
Let $\phi$ be a map in $\D$. From Proposition 2, there exists a
positive number $K(\phi, n)$ such that for all $P_n$ where $\beta
(P_n; \phi)$ is defined:
$$
{{1}\over{\log 3}}\theta_2(\beta(P_n; \phi))
\leq\theta_1(\beta(P_n; \phi)) \leq K(\phi, n).
$$
Let ${\cal M}_n(\phi)$ be the set of $n$-tuples of $\phi$-invariant,
coherent probability measures, and let $(\lambda_1,\dots , \lambda_n)$
be in ${\cal M}_n(\phi)$. By integrating with respect to
$\lambda_1\times\dots\times\lambda_n$, we get:
$$
{{1}\over{\log 3}}\Theta^{(2)}_{\lambda_1,\dots , \lambda_n} (\phi)
\leq\Theta^{(1)}_{\lambda_1,\dots , \lambda_n} (\phi) \leq K(\phi, n).
$$
It follows that the quantities:
$$
\Theta^{(i)}_{ n}(\phi)\,=\, \sup_{(\lambda_1,\dots , \lambda_n)\in
{{\cal M}_n}(\phi)}\Theta^{(i)}_ {\lambda_1,\dots , \lambda_n} (\phi),
$$
are positive real numbers which are, by construction, topological
invariants. More precisely, for any pair of maps $\phi_1$ and $\phi_2$
in $\D$ which are conjugate by a map in $\H$, we have, for $i=1$,
$i=2$ and for all $n\geq 2$:
$$
\Theta^{(i)}_{ n}(\phi_1)\,=\,\Theta^{(i)}_{ n}(\phi_2).
$$
It also results from the definitions that, for $i=1$ and $i=2$, the
sequences $n\mapsto \Theta^{(i)}_{ n}(\phi)$ are non-decreasing.
\NI In the next two paragraphs, we give estimates of these invariants
that generalize the fixed points case.
\subsubsection{The Calabi invariant as a lower bound for the sequence
$(\Theta^{(1)}_{ n}(\phi))_n$ }
Let $\phi$ be a map in $\H$ and
consider an isotopy $\phi_t$ from the identity to $\phi$ in $\H$. To a
pair of distinct points $p_1$ and $p_2$ in $\d^2$ , we can associate a
real number $Ang_\phi(p_1, p_2)$ which is the angular variation of the
vector $\overrightarrow{ \phi_t(p_1)\phi_t(p_2)}$ when $t $ goes from 0
to 1 (when normalizing to 1 the angular variation of a vector
making one loop on the unit circle in the direct direction).
Since the set $\H$ is contractible, it is clear that the map $(p_1,
p_2) \mapsto Ang_\phi(p_1, p_2)$ does not depend on the choice of the
isotopy.
If $\phi $ is in $\D$, using arguments similar to the ones we used in
the proof of Proposition 2, it is easy to check (see \cite{GG}) that
the function $Ang_\phi$ is bounded where it is defined, that is to say
on ${\d^2}\times {\d^2} \setminus \Delta$. Let $(\lambda_1,
\lambda_2)$ be a pair of $\phi$-invariant, coherent probability
measures. The function $Ang_\phi$ is integrable on ${\a^2}\times
{\a^2} $ with respect to $\lambda_1\times \lambda_2$ and the {\it
Calabi invariant} of the map $\phi$ with respect to the pair
$(\lambda_1, \lambda_2)$ is defined by the following integral:
$$ {\cal C}_{\lambda_1, \lambda_2}(\phi)\,=\, \int\!\int_{{\a^2}\times
{\a^2} } Ang_\phi (p_1, p_2)d\lambda_1(p_1)d\lambda_2(p_2).$$
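As a numerical sketch of the angular variation (our own illustrative example, with an assumed radial twist map rather than one from the paper): for $\phi(r,\theta)=(r,\theta+v(r))$ with $v(1)=0$ and the isotopy $\phi_t(r,\theta)=(r,\theta+tv(r))$, placing $p_1$ at the center (fixed by every $\phi_t$) gives $Ang_\phi(p_1,p_2)=v(r_2)/2\pi$. (Strictly, membership in $\H$ requires $v$ to vanish on a neighborhood of $r=1$; we ignore this for the numerics.) The code recovers this by tracking the direction of the connecting vector:

```python
import math

# Radial twist map phi(r, theta) = (r, theta + v(r)), v(1) = 0, with the
# straight isotopy phi_t(r, theta) = (r, theta + t*v(r)).  Ang_phi(p1, p2)
# is the total angular variation of the vector from phi_t(p1) to phi_t(p2),
# normalized so one full positive loop counts as 1.

v = lambda r: 2 * math.pi * (1 - r)   # assumed twist profile, vanishing at r = 1

def phi_t(p, t):
    r = math.hypot(*p)
    a = math.atan2(p[1], p[0]) + t * v(r)
    return (r * math.cos(a), r * math.sin(a))

def ang(p1, p2, steps=10_000):
    """Winding of the vector phi_t(p1) -> phi_t(p2) for t in [0, 1]."""
    total, prev = 0.0, None
    for k in range(steps + 1):
        t = k / steps
        q1, q2 = phi_t(p1, t), phi_t(p2, t)
        a = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
        if prev is not None:
            d = a - prev
            d -= 2 * math.pi * round(d / (2 * math.pi))  # unwrap to (-pi, pi]
            total += d
        prev = a
    return total / (2 * math.pi)

# p1 at the center, p2 at radius 0.5: the connecting vector turns by t*v(0.5),
# so Ang = v(0.5)/(2*pi).
p1, p2 = (0.0, 0.0), (0.5, 0.0)
print(ang(p1, p2))   # close to v(0.5)/(2*pi) = 0.5
```

Averaging `ang` against a product of invariant measures then approximates the Calabi invariant for this twist map.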
\NI {\bf Remark:} In \cite{Calabi}, E.~Calabi defines a series of
invariant numbers associated to symplectic diffeomorphisms on
symplectic manifolds. In the particular case of area preserving maps of
the 2-disk, only one of these invariant quantities is not trivial. In
\cite{GG}, it is shown that this invariant fits with our definition in
the particular case when the invariant measures $\lambda_i$ are all
equal to the area.
\begin{prop} Let $\phi$ be a map in $\D$ which preserves a pair of
coherent probability measures $(\lambda_1, \lambda_2)$. Then:
$$
\vert{\cal C}_{\lambda_1, \lambda_2}(\phi)\vert\,\leq \,
{\Theta}^{(1)}_{\lambda_1, \lambda_2} (\phi).
$$
Consequently, for any map $\phi$ in $\D$, and for all $n\geq 2$:
$$
\sup_{(\lambda_1, \lambda_2)\in {\cal{M}}_2(\phi)}
\vert{\cal C}_{\lambda_1, \lambda_2}(\phi)\vert
\,\leq \, {\Theta}^{(1)}_2 (\phi) \,\leq \, {\Theta}^{(1)}_n (\phi).
$$
\end{prop}
\begin{pf}
In order to compute the Calabi invariant of $\phi$ with respect to
$(\lambda_1, \lambda_2)$, we can use the Birkhoff ergodic theorem. The
corresponding Birkhoff sums:
$$
Ang_\phi (p_1, p_2) \,+\, Ang_\phi(\phi(p_1), \phi(p_2))\,+\,\dots +\,
Ang_\phi (\phi^{N-1}(p_1), \phi^{N-1}(p_2)),
$$
are equal to:
$$
Ang_{\phi^N}(p_1, p_2).
$$
It follows that for $\lambda_1\times \lambda_2$ almost every point
$(p_1, p_2)$ the following limit:
$$
\tilde{Ang}_\phi(p_1, p_2) \,=\, \lim_{N\to +\infty} {{1}\over{N}}
Ang_{\phi^N}(p_1, p_2),
$$
exists and is integrable. Furthermore:
$$
\int\!\int_{{\a^2}\times {\a^2} }\tilde{ Ang}_\phi (p_1,
p_2)d\lambda_1(p_1)d\lambda_2(p_2)\,=\, \int\!\int_{{\a^2}\times {\a^2} }
Ang_\phi (p_1, p_2)d\lambda_1(p_1)d\lambda_2(p_2).$$
It is clear that for all $N\geq 0$ and for every pair of distinct points
$p$ and $q$ in $\d^2$, we have the following estimate:
$$
\vert L(\beta(p, q; \phi^N))\,- \vert Ang_{\phi^N}(p, q) \vert\,\vert
\leq 3.
$$
Dividing by $N$ and letting $N$ go to $\infty$, we get for
$\lambda_1\times \lambda_2$ a.e. $(p_1, p_2)$:
$$
\vert {\tilde{Ang}}_{\phi}(p_1, p_2)\vert \,\,=\,\, \Theta^{(1)}(p_1,
p_2; \phi).
$$
The integration gives:
$$
\vert{\cal C}_{\lambda_1, \lambda_2}(\phi)\vert\,\leq \,
\int\!\int_{{\a^2}\times {\a^2} }\vert\tilde{ Ang}_\phi (p_1, p_2)\vert
d\lambda_1(p_1)d\lambda_2(p_2) \,\leq \,
{\Theta}^{(1)}_{\lambda_1, \lambda_2} (\phi).
$$
\end{pf}
\subsubsection{The topological entropy as an upper bound for the sequence
$(\Theta^{(2)}_{ n}(\phi))_n$ }
Let ${{\rm Diff}^\infty({\a^2}, \partial {\a^2})}$ denote the subset of
elements in $\D$ which are $C^\infty$-diffeomor\-phisms. The following
theorem can be seen as an extension of the Bowen result (\cite{Bowen})
when we relax the hypothesis of invariance of finite sets.
\begin{thm}
Let $\phi$ be an element in ${{\rm Diff}^\infty({\a^2}, \partial
{\a^2})} $ with entropy $h(\phi)$ and $\lambda_1, \dots , \lambda_n$ a
set of coherent, $\phi$-invariant probability measures. Then, for
$\lambda_1\times\dots\times\lambda_n$-a.e. $P_n$ in $\dn$, we have:
$$
\Theta^{(2)}(P_n; \phi)\,\,\leq \,\, h(\phi),
$$
and consequently, for all $n\geq 2$:
$${\Theta}^{(2)}_{\lambda_1, \dots ,\lambda_n}(\phi)
\,\,\leq\,\, h(\phi).$$
\end{thm}
\begin{pf}
Let $P_n=(p_1, \dots ,p_n)$ be a point in $\Omega^{2n}$ and
let $h_{P_n}$ be an element in $\D$ with the following properties:
\begin{enumerate}
\item For each $1\leq i\leq n$, $h_{P_n}$ maps $q_i$ on $p_i$.
\item There exists an isotopy $(h_{P_n; t})_{t\in [0, 1]}$ from the
identity to $h_{P_n}$ such that, for each $1\leq i\leq n$, the arc
$\{(h_{P_n; t}(q_i), t)\,,\,\, t\in [0, 1]\}$ coincides with the segment
joining $(q_i, 0)$ to $(p_i, 1)$ in ${\d^2} \times [0, 1]$.
\item The map
$$\begin{array}{rcl}
\Omega^{2n} & \longrightarrow & \D\\
P_n & \longmapsto & h_{P_n}
\end{array}$$
is continuous when $\D$ is endowed with the $C^1$ topology.
\end{enumerate}
\NI Let $\phi$ be in ${{\rm Diff}^\infty({\a^2}, \partial {\a^2})} $
and assume that for $N\geq 0$, $P_n$ and $j(\phi^N)(P_n)$ are in
$\Omega^{2n}$. From the above construction, it follows that the map:
$$
\Psi^{(N)}_{P_n}\,\,=\,\, h^{-1}_{j(\phi^N)(P_n)} \circ \phi^N \circ
h_{P_n}$$ leaves the points $q_1, \dots , q_n$ invariant and that its
restriction to $\d_n$ induces, through the Birman isomorphism (Theorem
3), the braid $\beta(P_n; \phi^N).$
\NI Let $P_n$ be such that the limit $\Theta^{(2)}(P_n; \phi)$ exists
(which is true for a $\lambda_1 \times \dots \times \lambda_n$ full
measure set of points). It follows that, for every $\epsilon >0$ and
for $N$ large enough, we have:
$$
\Theta^{(2)}(P_n; \phi) \,-\epsilon \leq{{1}\over{N}}
\theta_2(\beta(P_n; \phi^N))$$
\NI which reads:
$$(*)\,\,\, e^{(\Theta^{(2)}(P_n; \phi) \,-\epsilon)N} \leq \sup_{i=1,
\dots ,n}L(x_i\beta(P_n; \phi^N))$$
\NI It is standard that there exists a constant $c(n)>0$ such that for
any differentiable loop $\tau$ in $\hat{\d_n}$:
$$L([\tau])\,\,\leq \,\, c(n)l(\tau) $$
where $l$ stands for the Euclidean length and $[-]$ is the homotopy
class in $\hat{\d_n}$. Combined with (*), this argument gives:
$$
{{1}\over{c(n)}}e^{(\Theta^{(2)}(P_n; \phi) \,-\epsilon)N} \leq
\sup_{i=1, \dots ,n}l(\Psi^{(N)}_{P_n}(x_i)).
$$
That is to say:
$$
{{1}\over{c(n)}}e^{(\Theta^{(2)}(P_n; \phi) \,-\epsilon)N} \leq
\sup_{i=1, \dots ,n}l( h^{-1}_{j(\phi^N)(P_n)} \circ \phi^N \circ
h_{P_n}(x_i))
$$
and consequently:
$$
(**)\,\,\,\, {{1}\over{c(n)}}e^{(\Theta^{(2)}(P_n; \phi) \,-\epsilon)N}
\leq\Vert h^{-1}_{j(\phi^N)(P_n)}\Vert_{1} \sup_{i=1, \dots ,n}l(
\phi^N ( h_{P_n}(x_i))),$$
\NI where $\Vert-\Vert_{1}$ stands for the $C^1$ norm.
\NI Assume from now on that the point $P_n$ is a recurrent point of
the map $j(\phi)$ (which is true for a $\lambda_1 \times \dots \times
\lambda_n$ full measure set of points) and let $\nu(N)$ be a sequence
of integers so that:
$$
\lim_{N\to \infty}j(\phi^{\nu(N)})(P_n) \,\, =\,\, P_n.
$$
From (**) we deduce that there exists a subsequence $\hat\nu(N)$ of
$\nu(N)$ such that for $N$ big enough, and for some $1\leq i_0\leq n$,
we have:
$$
{{1}\over{c(n)K}}e^{(\Theta^{(2)}(P_n; \phi) \,-\epsilon)N}
\,\,\leq \,\, l(\phi^N(h_{P_n}(x_{i_0}))),$$
where $K$ is an upper bound of $\Vert h^{-1}_{P_n}\Vert_1$.
\NI In conclusion, the Euclidean length of the loop $ h_{P_n}(x_{i_0})$
grows exponentially under iteration of the map $\phi$, with a growth
rate of at least $\Theta^{(2)}(P_n; \phi) \,-\epsilon$. Using a
result by Y. Yomdin (see \cite{Yomdim} and \cite{Gromov}), in the case
of $C^\infty$ maps,
we know that this provides a lower estimate of the topological entropy.
Namely, if $\phi$ is in $ {{\rm Diff}^\infty({\a^2}, \partial
{\a^2})}$, we get:
$$\Theta^{(2)}(P_n; \phi) \,-\epsilon\,\,\leq\,\, h(\phi).$$
Since this is true for any $\epsilon >0$, this yields:
$$
\Theta^{(2)}(P_n; \phi)\,\,\leq \,\, h(\phi),
$$
and by integrating:
$${\Theta}^{(2)}_ {\lambda_1\times\dots\times\lambda_n}(\phi)
\,\,\leq\,\, h(\phi).$$
\end{pf}
\vskip 2cm
\NI{\bf ACKNOWLEDGEMENTS:} It is a pleasure for the authors to thank E.
Ghys and D. Sullivan for very helpful comments and suggestions. Most of
this work was carried out during a visit of J.M. Gambaudo to the
Institute for Mathematical Sciences. He is very grateful to the I.M.S.
for this invitation.
\vskip 2cm
\NI{\bf J.-M. Gambaudo} - {\sc Institut Non Lin\'eaire de Nice,} U.M.R.
du C.N.R.S 129, 1361 route des Lucioles, 06560 Valbonne, France.
\NI{\bf E. E. P\'ecou} - {\sc Institute for Mathematical Sciences,}
SUNY at Stony Brook, Stony Brook, NY 11794, U.S.A.
\end{document} |
\begin{document}
\author{W.\ De Baere}
\email{[email protected]}
\affiliation{Unit for Subatomic and Radiation Physics,
Laboratory for Theoretical
Physics, State University of Ghent, Proeftuinstraat 86, B--9000 Ghent,
Belgium}
\newcommand{\ket}[1]{\left|#1 \right>}
\newcommand{\mv}[1]{\left< #1 \right>}
\newcommand{\one}{\mbox{{\sf 1}\hspace{-0.20em}{\rm l}}}
{\large
\title{Renninger's Thought Experiment: Implications for Quantum
Ontology and for Quantum Mechanics' Interpretation}
\begin{abstract}
It is argued that the conclusions obtained by Renninger
({\em Z.\ Physik} {\bf 136}, 251 (1953)), by means of an
interferometer thought experiment, have important
implications for a number of still ongoing discussions
about quantum mechanics (QM). To these belong the
ontology underlying QM, Bohr's complementarity principle,
the significance of QM's wave function, the ``elements of
reality'' introduced by Einstein, Podolsky and Rosen
(EPR), and Bohm's version of QM (BQM). A slightly
extended setup is used to make a physical prediction at
variance with the mathematical prediction of QM. An
english translation of Renninger's paper, which was
originally published in german language, follows the
present paper. This should facilitate access to that
remarkable, apparently overlooked and forgotten, paper.
\end{abstract}
\vspace*{.3cm}
\noindent
\keywords{quantum ontology, quantum mechanics,
interpretation}
\pacs{03.65.-W, 03.65.Bz}
\maketitle
\section{Some historical notes}
Some 80 years ago the main equations of QM were
formulated, with a subsequent overwhelming success of their
predictive power; see e.g.\ M.\ Jammer \cite{jammer66}.
In contrast, however, the significance or interpretation of QM
is still the subject of intense debate. The proposed
interpretations range between two extremes. At one
extreme are the Copenhagen--like interpretations
\cite{stappciqm}; at the other are the
``realistic'' interpretations. Whereas the former are
concerned mainly with relationships between measurement
outcomes and carefully avoid ontological statements, the
latter consider observations made by human observers as
properties {\em possessed} by really existing objects. Each
of these interpretations has its range of supporters,
and each claims to give an acceptable explanation of what
QM is really about.
An extensive historical survey is given in M.\ Jammer's
classic book ``The Philosophy of Quantum Mechanics -- The
Interpretations of Quantum Mechanics in Historical
Perspective'' \cite{jammer74}. However, in the literature
the question is rarely addressed whether there exist
empirical data or thought experiments which could rule
out some of these interpretations. More precisely, the
issue is whether there exist unavoidable ontological
truths which should, therefore, be part of {\em any}
acceptable interpretation. The answer -- maybe unexpected
and surprising -- is that such a truth exists. Indeed,
in 1953 M.\ Renninger published a paper ``Zum
Wellen--Korpuskel--Dualismus'' (``On Wave--Particle
Duality'') in {\em Zeitschrift f\"ur Physik}
\cite{renninger} in which an interferometer thought
experiment played a central role. The basic result of
Renninger was that, independently of any theory, physical
reality at the quantum level consists of extended objects
which {\em at the same time}, i.e.\ in the same
experiment, have a wavelike {\em and} a particle--like
behaviour. Because this conclusion rests purely on
empirical facts, Renninger argues that it is compelling
and unavoidable. It should, therefore, be part of any
reasonable interpretation of QM. Yet even today, more
than 50 years after Renninger's paper, this is not the
case. How can this be, how is it possible that such a
fundamental truth has been overlooked and apparently
completely forgotten? One exception is M.\ Jammer's book
\cite{jammer74} where Renninger's paper is mentioned (p.\
494), together with some reactions by A.\ Einstein, M.\
Born, and P.\ Jordan. According to M.\ Jammer it ``caused
quite a stir'' among the experts in quantum physics.
It is interesting to recall that this picture of physical
reality was introduced by L.\ de Broglie as early
as the inception of the quantum formalism itself. It was
supported by e.g.\ A.\ Einstein and E.\ Schr\"odinger,
and later on used by D.\ Bohm in his attempt to set up an
alternative formulation of QM. For this reason we will
call this model the de Broglie--Einstein--Bohm (dBEB)
model. So, basically it was L.\ de Broglie who introduced
the idea that in reality, a quantum system should be
considered as an extended structure having at the same
time wave--like and particle--like properties. These
particle--like properties should then be characteristics
of a more localized region within the extended
phenomenon. In the dBEB model, in some way the localized
region -- or the ``particle'' -- is guided by the more
extended structure. In Bohm's version of QM, a so--called
``quantum potential'' is introduced to do this job. Now,
again invoking Renninger's ingenious analysis, the idea
of such a guidance is supported convincingly when the
extended structure moves through a system, such as a
Mach--Zehnder interferometer (MZI). Indeed, depending on
the wave properties of (various components of) that
structure as it moves through MZI, the subsequent
observation may occur in one of physically separated
detectors, and is the result of the interference of
different real waves (see Section 2).
From the absence of references to Renninger's paper in later,
more recent, works on the interpretation of QM and, in
particular, on the ``reality of quantum waves'', it may
be concluded that this work has gone entirely unnoticed
by the English-- and French--speaking physics community.
Indeed, it is never cited, neither by e.g.\ de Broglie
when -- after early criticism by W.\ Pauli -- he took up
again his idea, nor by Bohm while developing (together
with J.P.\ Vigier) his alternative approach to QM, which
was precisely based on de Broglie's model of a quantum
system. As a result, the fundamental importance of
Renninger's conclusions -- the existence of a causally
influenceable ontological reality, in particular of the
reality of the so--called ``empty wave'' (EW) -- have
been overlooked and forgotten also by present day
physicists working in the field of the foundations of QM.
However, as the present author is convinced of the
fundamental importance of Renninger's penetrating
analysis of wave--particle duality underlying QM, an
English translation has been made of the original German
version. It is hoped that in this way many investigators
in the field of the foundations of QM may reconsider or
revisit their ideas with respect to quantum ontology, the
significance of the quantum formalism, and other related
quantum issues (see also Section 3).
More recently, the following investigations were carried
out to prove the reality of EWs in the sense of
observably influencing other physical systems. First,
from 1980 on, there were studies by Selleri
\cite{selleriewfop82,selleriewanf82,selleriewlnc83},
Croca \cite{croca87} and others \cite{crocafopl90}. In
some of these proposals it was maintained that there was
an observable difference between QM and a theory based
from the outset on the dBEB model of reality. However,
experimental results obtained by L.\ Mandel and coworkers
\cite{wang91} ``\ldots clearly contradict what is
expected on the basis of the de Broglie guided--wave
theory, but are in good agreement with the predictions of
standard quantum theory\ldots''. In a subsequent comment,
P.\ Holland and J.-P.\ Vigier \cite{holland91} replied
that Mandel's results did not invalidate the dBEB model
of reality itself. Finally, more recently there was an
attempt by L.\ Hardy \cite{hardyew} to prove the
observable reality of EWs, which was criticized in
various papers
\cite{pagonisew92,griffithsew93,zukowskiew93,dbew},
followed by replies by Hardy
\cite{hardyewonpag92,hardyewonzuk93}.
The purpose of the present paper is threefold. First, in
Section 2 we recall the basic results obtained by M.\
Renninger already in 1953 \cite{renninger} about the
ontology underlying QM, and emphasize {\em the
unavoidable character} of Renninger's conclusions. Next,
in Section 3, we give a brief survey of the implications
for some present day quantum issues. Finally, by
considering an alternative setup in Section 4, we extend
and complete Renninger's argumentation by showing that
empty waves not only are causally influenceable (as proven
in \cite{renninger}), but are themselves able to
observably influence other physical systems. The main
difference with Hardy's argumentation is that we get our
conclusion {\em without the introduction of any specific
supplementary hypothesis} in order to ascertain the path
along which the ``particle'' possibly moves.
\section{Renninger's argumentation: main results}
Renninger's setup is essentially a Mach--Zehnder
interferometer which we present here as in Fig.\ 1, in
which the same notations as in Renninger's original work
are used.
A source in path $1$ prepares single quantum systems
$\cal S$ (photons in \cite{renninger}) which move towards
a lossless 50--50 beam splitter situated at location 2.
From location 2, two paths lead to mirrors $S_A,S_B$, which
reflect the beams arriving along paths 6 and 7 towards a second beam
splitter at location 3. Finally, detectors $D_1,D_2$ are
located in outgoing paths 4, 5. As a first essential
point, Renninger remarks that in his argumentation only
{\em empirical} facts are considered, and that, hence,
his conclusions are {\em independent} of any existing
theory used to explain these empirical facts. The
empirical facts considered by Renninger are then the
following ones:
\begin{figure}
\caption{Renninger's interference set--up}
\end{figure}
\begin{itemize}
\item[a.]
If detectors are placed in paths 6-8 or 7-9, then
detection will occur in only one path at the same time,
never in both together.
This proves the {\em particle--like} aspect of the
phenomenon of the passage of the quantum system through
MZI.
\item[b.]
For equal path lengths 6-8 and 7-9, to each single
quantum system moving through MZI there will correspond
an observation with certainty in $D_1$, while nothing
will be observed in $D_2$.
This may be interpreted as the result of interference of
waves simultaneously moving along paths 6--8 and 7--9,
and should be considered as evidence for the {\em
wave-like} aspect of the phenomenon {\em in the same
setup}.
This has been verified by Grangier {\em et al.}
\cite{aspect861} for light.
\item[c.]
If a transparent ``half--wave'' plate is inserted at a
specific location in one of the paths 6-8 or 7-9 at
appropriate times (i.e.\ before the system $\cal S$ has
passed that location), then observation behind a beam
splitter at 3, will now occur with certainty in detector
$D_2$.
The fact that it does not matter in which path the
half--wave plate is inserted, is according to Renninger
empirical evidence for the simultaneous motion -- or
existence -- of physical realities in both paths. Because
of a.\ above, Renninger speaks of an ``empty wave''
moving along one path, and of another wave containing the
``particle'' along the other path. In order to
distinguish the two waves in what follows, we will call the latter
the ``full wave'' (FW), i.e.\ the one responsible for the
transfer of particle--like properties. If necessary, we
will add an index $\cal S$, e.g.\ $\text{EW}_{\cal S}$,
$\text{FW}_{\cal S}$.
\item[d.]
The result of the action in c., i.e.\ steering detection
from $D_1$ to $D_2$, may be suspended by inserting on a
later instant another -- or even the same! -- half--wave
plate in the same or in the other path. And, if the paths
are long enough, then the previous action may be repeated
an arbitrary number of times, each time in such a way
that it will be {\em causally predictable}, i.e.\ with
certainty, in which detector $D_1$ or $D_2$ observation
will take place.
This is further evidence for the reality of both EW and FW.
\end{itemize}
For the subtleties of the argumentation, we refer to
Renninger's paper \cite{renninger} or to its English
translation.
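The empirical points above can be illustrated with a minimal amplitude model. The sketch below is a toy calculation whose conventions are our own assumptions (chosen to be consistent with the equations of Section 5): the beam-splitter and mirror matrices, and the mapping of the two output ports to $D_1$ and $D_2$, are not fixed by the text. Under these assumptions it reproduces points b.\ and c.: with equal path lengths every detection occurs in $D_1$, and an ideal half--wave plate (phase $\pi$) in either arm steers the detection to $D_2$.

```python
import numpy as np

# Toy amplitude model of the MZI of Fig. 1. Conventions (our assumptions,
# consistent with Section 5): a lossless 50-50 beam splitter maps
# |A> -> (|A> + i|B>)/sqrt(2), |B> -> (|B> + i|A>)/sqrt(2); a mirror maps
# |A> -> i|B>, |B> -> i|A>. Output index 0 is taken to feed D_1, index 1 D_2.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)
MIRROR = np.array([[0, 1j],
                   [1j, 0]])

def detection_probs(plate_phase=0.0):
    """Detector probabilities (D_1, D_2) for a single system; plate_phase
    is an extra phase inserted in one arm (pi for an ideal half-wave
    plate; by symmetry the choice of arm does not matter)."""
    plate = np.diag([1.0, np.exp(1j * plate_phase)])
    psi = np.array([1.0, 0.0])            # single system entering at location 2
    psi = BS @ MIRROR @ plate @ BS @ psi  # BS_2, then plate, mirrors, BS_3
    return np.abs(psi) ** 2

print(detection_probs(0.0))     # point b.: certain detection in D_1
print(detection_probs(np.pi))   # point c.: the plate steers it to D_2
```

The deterministic switching between detectors is exactly the causal steering Renninger emphasizes in point d.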
\section{Implications for various issues in
quantum mechanics}
\subsection{The nature of quantum systems}
It follows from Renninger's thought experiment that
ontologically a quantum system is an extended structure
consisting of realities which, under appropriate
circumstances (such as after the passage through a beam
splitter), may move along paths largely separated in
space. Hence, in considering physical processes it is
reasonable to make the hypothesis that, ultimately, it
is the properties of the {\em entire} structure that
bring about a definite localized measurement outcome.
One such property is, e.g., the phase {\em difference}
between $\text{EW}_{\cal S}$ and $\text{FW}_{\cal S}$.
What Renninger has shown also is that, in one and the
same single experiment, {\em both} $\text{EW}_{\cal S}$
and $\text{FW}_{\cal S}$ may be influenced {\em in a
causal way} by placing another system ${\cal S}'$, in
particular $\text{FW}_{\lambda/2}$, in either path 6--8
or 7--9. Here the causal character of the influence
refers to the certainty of predictions about observations
after a subsequent interaction with the second beam
splitter in the location 3.
Also, it might be the case that both ontological
components of a quantum system $\cal S$ -- i.e.\
$\text{EW}_{\cal S}$ and $\text{FW}_{\cal S}$ -- may be
influenced either separately or both at the same time,
again in a causal way in the case of a
$\lambda/2$--system. And furthermore, the possibility
should be envisaged that, conversely, both realities
themselves have the ability to influence observably the
realities of other physical systems. In particular this
would imply that $\text{EW}_{{\cal S}'}$ associated with
one system ${\cal S}'$ should be able to influence
$\text{EW}_{\cal S}$ associated with another system $\cal
S$, changing in this way the wave guiding $\cal S$, which
finally gives rise to a localized observation. This
possibility is discussed in Section 4.
So, Renninger's thought experiment has revealed that
ontologically -- and independent of any physical theory
-- quantum systems should be considered {\em at the same
time} as consisting of an {\em extended structure} with
wavelike properties, {\em and} a more localized region
within this structure which is able to exchange with
other systems, properties which are characterized by
means of particle--like variables such as energy $E$,
momentum $p$, spin, etc. Basically, Renninger's work
confirms -- on empirical grounds -- the validity of the
dBEB picture of physical reality.
\subsection{Bohr's complementarity}
According to Bohr's complementarity principle, the
behaviour of a quantum system in a particular process is
either wave--like or particle--like. In essence, it is
the whole setup, including the measurement apparatus,
which determines whether a quantum system behaves as a
particle or as a wave, but never in both ways together.
However, Renninger's interferometer thought experiment
clearly shows that the passage of {\em one single}
quantum system through the simplest version of a MZI
reveals both aspects at the same time: a detector placed
in either of the paths 6--8 or 7--9 reveals its
particle--like behaviour, while the causal effect of the
insertion of a $\lambda/2$ plate in either path -- and
the subsequent observation in either $D_1$ or $D_2$ --
should be interpreted as evidence of its wave--like
behaviour.
It is interesting to note here that Renninger's
empirically based conclusion predates by 40 years similar
conclusions obtained by e.g.\ Ghose and D.\ Home
\cite{ghosehome93,ghosehome96}.
\subsection{The significance of the wave function}
Like EPR, it was not Renninger's aim to criticize QM, but
only ``\ldots to point to some very precise conclusions,
which follow merely from purely experimental physical
aspects, without any previous knowledge of the
mathematical quantum formalism\ldots''. Once these
conclusions have been arrived at, Renninger is, however, very
clear about the significance of QM's mathematical
formalism: ``Of course one is free to speak of the wave
as a pure ``probability''--wave. But one should be aware
of the fact that this probability wave propagates in
space and time in a continuous way, and in such a way that it
can be influenced in a finite region of space -- and only
there! -- and also at that time! --, with an unambiguous
observable physical effect!''
So, by Renninger's result the meaning of the QM
wavefunction $\psi$ is very clear, both ontologically and
mathematically: ontologically $\psi$ represents a real,
causally influenceable wave, and mathematically it
satisfies the deterministic Schr\"odinger equation. The
important significance of Renninger's analysis is to have
revealed, in a compelling and unavoidable way, the
existence of a deeper lying layer of reality, the causal
and quantitative behaviour of which is mathematically
described by the standard quantum formalism. Therefore,
it is fair to say that Renninger's results should be part
of any acceptable interpretation of QM, and that it is
unreasonable to discard quantum ontology, and look at QM
only in a pure mathematical way. Hence, it is incorrect
to claim that the meaning of $\psi$ is nothing more than
a mathematical {\em function} which enables one to
predict future statistical results for given initial
conditions. Yet, this viewpoint has been defended from
the early days of QM (e.g.\ by advocates of Bohr,
Heisenberg and others), thereby neglecting Renninger's
findings, right up to now (see e.g.\ the provocative
paper by Peres and Fuchs, ``Quantum theory needs no
`Interpretation''' \cite{fuchsper}), probably out of
unawareness of Renninger's basic work.
Therefore, I think that Renninger's conclusions answer
many of today's issues with respect to the significance
of the wave function. And, had the significance of
Renninger's work been appreciated properly in past and
recent times, it would have influenced significantly many
other papers. In fact, the overwhelming amount of
relevant literature would have been reduced considerably.
Therefore, it is pointless to make a selection from among
the huge list of available references -- anyone should
make his own selection, and judge in what sense the
papers' statements should be adjusted by taking into
account Renninger's 1953 analysis.
\subsection{The issue of Einstein locality}
In his paper Renninger strongly argues in favour of the
validity of the locality principle (``Einstein
locality'') underlying Einstein's successful relativity
theories. In his discussion on a possible alternative for
the ontological reality of the EW, he states that the
only alternative for explaining the wavelike behaviour of
a quantum system, e.g.\ in a MZI setup, would be the
introduction of a ``normal electromagnetic wave'' with
the unavoidable consequence that ``\ldots at the moment
of absorption the wave would contract with superluminal
speed, and moreover through closed walls. {\em Such
assumption would be completely unacceptable}'' [emphasis
by the present author]. Of course, at present one could
object that Renninger's analysis predates by about a
decade Bell's investigations \cite{bell64} of the EPR
issue, and that according to the present majority view,
Bell's theorem ``proves'' a nonlocality property of QM.
However, in recent and past work (see e.g.\
\cite{dbfop051}) we have argued strongly in favour of the
validity of Einstein locality, and concluded that the
breakdown of counterfactual definiteness at the level of
individual quantum processes would be a far more
reasonable explanation for the violation of Bell's
inequality by QM, and for resolving all other so--called
contradictory results.
\subsection{EPR's elements of physical reality}
In his paper, Renninger also reports about Einstein's
interest in his analysis. In particular, EPR's notion of
``elements of physical reality'' is mentioned as having
inspired Renninger's own definition of ``physical
reality'' (see the Abstract of \cite{renninger}): ``The
notion `physical reality' should be understood such that,
when this physical reality is considered in a particular
space at a particular time, it should be experimentally
possible to influence this reality in such a way that
future results of experiments show unambiguously that
this reality has been causally influenced by the
experimental act in this space and at that time.''
Here one may remark that, as in the EPR paper,
there is a clear relationship
between predictions for later observations in some
detector, and formerly existing, causally influenceable,
realities. Because of this relationship one may identify
Renninger's realities with EPR's definition of ``elements
of physical reality''. In this sense, the realities
corresponding with EWs may be considered as EPR
``elements of physical reality'', of which one might
reasonably expect, as claimed by EPR, that they have a
representation in the physical theory. However, according
to Renninger, this is not the case with the present QM
formalism: ``\ldots the proven reality of the wave
associated with the single particle, which quantum
mechanics, \ldots, is unable to account for \ldots''.
\section{On Bohm's Version of Quantum Mechanics}
Because Renninger's analysis confirms on empirical
grounds the dBEB picture of reality at the quantum level,
one might be tempted to conclude that Bohm's
reformulation of QM in terms of the notion of ``quantum
potential'' supersedes the standard quantum formalism.
However, one of Bohm's intentions was to present a causal
quantitative formalism for the description of {\em
single} quantum processes, and to get back immediately --
almost {\em by construction} -- the statistical
predictions of QM.
As is well known \cite{hollandbook}, the price to be
paid was that the quantum potential should be
interpreted as resulting from an instantaneous
action--at--a--distance between quantum systems, and with
an intensity which is independent of the mutual distance
between these systems. Here the term ``quantum system''
should be understood as that localized part of the
extended structure which has the characteristic to
exchange particle--like properties such as $E,p,$ etc.\
with other systems.
However, it is precisely because of the conflict between
the explicit nonlocality property of Bohm's quantum
potential and the empirical validity of locality in
actual physical processes, that BQM cannot be considered
as a valid quantitative scheme for {\em individual}
quantum processes. Recalling that QM in general makes
deterministic predictions for the statistics of
measurement outcomes, the minimum requirement for any
attempt to reproduce these predictions in terms of a {\em
local} theory for {\em individual} processes should be
the introduction of extra or supplementary variables in
the theory. This means that BQM cannot be considered as
such a valid alternative hidden--variable theory (HVT) for QM. At best, it is only
a {\em reformulation} of QM in terms of another
mathematical quantity -- the quantum potential -- which
should be considered only from a mathematical point of
view, i.e.\ it may not be considered as a faithful
representation of the underlying physical reality laid
bare by Renninger.
Probably, this was one reason for Einstein to consider
Bohm's solution as ``too cheap'' (see e.g.\
\cite{bohrsolvay}). Therefore, an acceptable alternative
would be a theory based on the dBEB picture in which the
validity of locality is retained. However, not only does
such a formalism not yet exist, but there is not even a
starting point for setting up such a formalism (see, however,
\cite{hesspnas01b}).
\section{Another possible physical consequence of the
$\text{d}$BEB picture of reality}
\subsection{Physical argumentation}
In his work, on page 9, Renninger addresses the following
question: ``What happens to the wave devoid of energy of
a photon after its absorption? When it is absorbed for
example in (6), and when in addition the detectors in (4)
and (5) are removed, what happens then with the wave in
$B$? Does it move further towards infinity, or does it
disappear at the moment of absorption? Of course this
question cannot be answered in principle. The former
assumption appears to be the more natural one, because it
avoids the conclusion to the existence of influences
which propagate with infinite speed also through closed
walls, a conclusion which within the physical world is
inconceivable. In any case, such influences would not be
associated with transport of energy.''
Although Renninger is of the opinion that ``\ldots this
question cannot be answered in principle\ldots'', we yet
propose a slightly modified setup in which this issue
might be clarified. In fact, we follow Renninger's own
proposal that the dBEB model of reality can be used as
``a valuable aid for the visual comprehension of
elementary processes and for making exact prognoses about
the outcome of experiments.'' Because the modified setup
is almost as simple and elementary as Renninger's setup,
we have some confidence in the correctness of our
predictions for the new setup. These predictions
now concern observable effects of the EW reality on other
physical systems and, as in Renninger's analysis, only
physical arguments are invoked.
In this section we elaborate on the ontological picture
resulting from Renninger's argumentation. If the
behaviour of a quantum system is determined by its
ontological constitution -- consisting of {\em both} wave
and particle characteristics -- one should be able to
influence this behaviour either by changing the wave
characteristics, or by changing the particle
characteristics. The first possibility has been evidenced
by Renninger's point c.\ in Section 2, i.e.\ the
insertion of a {\em material} system -- a half--wave
plate -- in either path. Hence, in this case it is the
component $\text{FW}_{\lambda/2}$ which provokes the
observable influence. The question then naturally arises,
whether the wavelike properties of the components
$\text{EW}_{\cal S}$ or $\text{FW}_{\cal S}$ of $\cal S$
may also be influenced by the empty wave component of
another system, e.g.\ the component $\text{EW}_{{\cal
S}'}$ of quantum system ${\cal S}'$.
This implies that one should be able to observe the
effect of superposing the realities $\text{EW}_{{\cal
S}'}$ with $\text{EW}_{\cal S}$ or $\text{FW}_{\cal S}$.
To this end we supplement Renninger's setup in a minimal
way by adding 1) another source of systems ${\cal S}'$,
2) a third beam splitter $\text{BS}_{2'}$ in location 2',
and 3) by making mirror $S_B$ removable (see Fig.\ 2).
Systems ${\cal S}'$ then move along direction 1' towards
$BS_{2'}$. The outgoing arms are $A'$, which is in line
with path 9, and $B'$, which contains a detector $D_{2'}$.
For given masses $m_{\cal S}$ and $m_{{\cal S}'}$ we
choose the velocities $v_{\cal S}$ and $v_{{\cal S}'}$
such that the frequencies of the waves associated with
$\cal S$ and with ${\cal S}'$ are the same.
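The frequency-matching condition can be made concrete with a small numerical sketch. We assume here (this is our reading; the text does not fix the convention) that the relevant frequency is the de Broglie frequency $\omega = E/\hbar$ with nonrelativistic kinetic energy $E = mv^2/2$, so that equal frequencies require $m_{\cal S} v_{\cal S}^2 = m_{{\cal S}'} v_{{\cal S}'}^2$.

```python
import math

# Hypothetical illustration of the frequency-matching condition, assuming
# the de Broglie frequency omega = E/hbar with E = m v^2 / 2 (the paper
# does not spell out this convention).
def matched_velocity(m_s, v_s, m_sp):
    """Velocity of S' that gives it the same wave frequency as S,
    from m_s * v_s**2 == m_sp * v_sp**2."""
    return v_s * math.sqrt(m_s / m_sp)

# Example: if S' is four times as massive as S, it must move at half
# the speed of S for the two frequencies to coincide.
print(matched_velocity(1.0, 100.0, 4.0))  # 50.0
```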
\begin{figure}
\caption{Modified
interference set--up}
\end{figure}
We are now interested in those cases where ${\cal S}'$
is observed in $D_{2'}$. Then with certainty
$\text{EW}_{{\cal S}'}$ is moving towards $S_B$. We
assume further that the dimensions of the arrangement
allow the following: after $\text{FW}_{\cal S}$ or
$\text{EW}_{\cal S}$ has passed $S_B$, $S_B$ is removed
so that at a later instant $\text{EW}_{{\cal S}'}$ can
pass freely, move further along path 9 and, finally,
catch up with either $\text{FW}_{\cal S}$ or $\text{EW}_{\cal
S}$, which likewise moved along path 7--9.
From Renninger's analysis we know that steering the
future observation of $\cal S$ -- to $D_1$ or to $D_2$ --
cannot take place by installing a half--wave plate at
positions where $\text{EW}_{\cal S}$ or $\text{FW}_{\cal
S}$ has already passed. For the same reason one may
assume that, after $\text{EW}_{\cal S}$ or
$\text{FW}_{\cal S}$ has passed $S_B$, the removal of
$S_B$ does not have any influence -- or steering -- on
the future course of the processes in the MZI, in
particular it has no influence on the detector where
observation of $\cal S$ will take place. Assume then that
at such an appropriate instant the mirror $S_B$ is
removed, and that the various path lengths and velocities
of both $\cal S$ and ${\cal S}'$ are chosen in such a way
that $\text{EW}_{{\cal S}'}$ and the component of $\cal
S$ moving along path 9 (i.e.\ either $\text{EW}_{\cal S}$
or $\text{FW}_{\cal S}$), will superpose {\em just
before} entering $\text{BS}_3$ at 3 (see Fig.\ 3,
where only the last stage before entering $BS_3$ is
shown). As a result of this superposition the wave
properties of $\cal S$, in particular the phase, will
have been changed.
\begin{figure}
\caption{Interaction
between $\cal S$ and $\text{EW}_{{\cal S}'}$}
\end{figure}
Then it might legitimately be expected that, as a result
of this interaction, the coherence between
$\text{EW}_{\cal S}$ and $\text{FW}_{\cal S}$ has been
disturbed, and that the final superposition of these real
waves within $BS_3$ will no longer give rise to
observations {\em in detector $D_1$ only}. Observation of
$\cal S$ in $D_2$ should then be considered as evidence
for the ability of empty waves to influence other
physical systems, i.e.\ either another empty wave or a
material system, in a directly observable way. As a final
remark, we note that in the above reasoning we have
used only Renninger's conclusion of the reality of empty
waves, supplemented by the reasonable
assumption that physical realities may be changed by
interactions among them.
\subsection{Quantum mechanical predictions}
Let us now look at how QM describes the processes in this
minimally extended MZI setup. The systems leaving the
sources in paths 1 and 1' may be assumed independent, and
their state is the direct product $\ket{1'}\ket{1}$.
Passage of ${\cal S}'$ through $BS_{2'}$ and of $\cal S$
through $BS_2$ results in the evolution:
\begin{eqnarray}
\ket{1'}\ket{1} &\stackrel{BS_{2'},BS_2}{\longrightarrow}
\frac{1}{\sqrt{2}}(\ket{A'}e^{ik'z_{A'}}
+i\ket{B'}e^{ik'z_{B'}})
\nonumber\\
&\times\frac{1}{\sqrt{2}}
(\ket{A}e^{ikz_{A}}+i\ket{B}e^{ikz_{B}}), \label{eendoorbs}
\end{eqnarray}
where $\ket{A}$ represents the state along a horizontal
path, $\ket{B}$ the state along a vertical path, etc.
Now, because we will be interested in observations in
$D_1$ and $D_2$ for equal path lengths 6--8 and 7--9, we
may replace already at this stage the lengths $z_A$ and
$z_B$ by $z$. In a similar way we replace $z_{A'}$ and
$z_{B'}$ by $z'$. This gives rise to common exponential
factors $e^{ikz}$ and $e^{ik'z'}$. Reflection at the
mirrors amounts to a phase shift of the states by
$\frac{\pi}{2}$, and the QM state transforms further to
\begin{equation}
\stackrel{S_A,S_B}{\longrightarrow}
\frac{1}{\sqrt{2}}(\ket{A'}+i\ket{B'})e^{ik'z'}
\frac{1}{\sqrt{2}}(i\ket{B}-\ket{A})e^{ikz}.
\label{eendoorbs1}
\end{equation}
At this moment, observation in detector $D_{2'}$ is
recorded, and only those cases where ${\cal S}'$ is
observed in $D_{2'}$ are retained. This subensemble is
described quantum mechanically by the state
\begin{equation}
\ket{\Psi_{12}}= \frac{1}{2} \ket{A'}e^{ik'z'}
(i\ket{B}-\ket{A})e^{ikz}.
\end{equation}
It follows that the systems $\cal S$ in this subensemble
are still described by the QM state
\begin{equation}
\ket{\Psi_{\cal S}}=\frac{1}{2}(i\ket{B}-\ket{A})e^{ikz}.
\end{equation}
Finally, rewriting this state and letting it pass through
$BS_3$, we obtain
\begin{equation}
\ket{\Psi_{\cal S}}= i\frac{1}{2}(\ket{B}+i\ket{A}) e^{ikz}
\stackrel{BS_3}{\longrightarrow}
-\frac{1}{\sqrt{2}}\ket{A}e^{ikz},
\end{equation}
so that QM predicts that all systems $\cal S$ will still
be observed in detector $D_1$. Clearly this reflects the
fact that the QM formalism predicts that EWs cannot
influence other quantum systems in an observable way.
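This last beam splitter step can be checked explicitly. Adopting the convention implicit in (\ref{eendoorbs}) -- transmission leaves a state unchanged, while reflection interchanges $\ket{A}$ and $\ket{B}$ and contributes a factor $i$ -- and omitting the common factor $e^{ikz}$, one finds
\[
i\frac{1}{2}(\ket{B}+i\ket{A})
\stackrel{BS_3}{\longrightarrow}
i\frac{1}{2}\cdot\frac{1}{\sqrt{2}}
\bigl[(\ket{B}+i\ket{A})+i(\ket{A}+i\ket{B})\bigr]
=i\frac{1}{2}\cdot\frac{1}{\sqrt{2}}\cdot 2i\ket{A}
=-\frac{1}{\sqrt{2}}\ket{A},
\]
confirming that in the unperturbed setup $\cal S$ is always observed in $D_1$.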
So, observations in $D_2$ would clearly be caused by the
influence of $\text{EW}_{{\cal S}'}$ on system $\cal S$
itself. This, then, would be conclusive evidence for the
possibility of EWs to influence in an {\em observable}
way other physical systems.
\section{Conclusions}
In this work we have reviewed Renninger's penetrating
analysis leading to his empirical proof of the reality of
quantum waves, in particular of the empty wave. We have
argued -- or, rather, called attention to Renninger's
opinion -- that these results have fundamental
ontological significance. If de Broglie should be
credited for the idea, then Renninger should certainly be
credited for the empirical proof of its validity. We also
discussed briefly the impact on some still ongoing issues
in the foundations of QM. In particular, the dBEB
ontological picture of reality should be part of any
acceptable interpretation of QM, implying
that many of these interpretations should be revised.
Next, we have proposed a slightly modified setup in
which, possibly, the real EW should influence -- instead
of being influenced by -- another quantum system in an
observable way. This influence should manifest itself by
an observation in detector $D_2$, whereas QM still
predicts no observation in that detector.
If it should turn out that QM gives the wrong prediction,
then a new formal scheme would be required to give a
more faithful description of the EW reality -- a
description which QM is unable to give. Tentatively, this
could be realized by means of a {\em local} HVT having
the general characteristics described in \cite{dbfop051}.
\end{document} |
\begin{document}
\title{Efficient Algorithms for Sparse Moment Problems without Separation}
\begin{abstract}
We consider the sparse moment problem of learning a $k$-spike mixture in high-dimensional space from its noisy moment information in any dimension.
We measure the accuracy of the learned mixture using transportation distance. Previous algorithms either rely on separation assumptions, use more recovery moments, or run in (super-)exponential time.
Our algorithm for the 1-dimensional problem (also called the sparse Hausdorff moment problem) is a robust version of the classic Prony's method,
and our contribution mainly lies in the analysis.
We adopt a global and much tighter analysis than previous work (which analyzes the perturbation of the intermediate results of Prony's method). A useful technical ingredient is a connection between
the linear system defined by the Vandermonde matrix and the Schur polynomial, which allows us to provide tight perturbation bound independent of the separation and may be useful in other contexts.
To tackle the high dimensional problem, we first solve the 2-dimensional problem
by extending the 1-dimensional algorithm and analysis to complex numbers.
Our algorithm for the high dimensional case determines the coordinates of each spike by aligning a 1-d projection of the mixture onto a random vector with a set of 2-d projections
of the mixture. Our results have applications to learning topic models and Gaussian mixtures, implying improved sample complexity or running time over prior work.
\end{abstract}
\section{Introduction}
\subsection{Background}
We study the problem of learning a statistical mixture of $k$ discrete distributions over a common discrete domain $[d] = \{1,2,\cdots,d\}$ from noisy measurements of its first few moments.
In particular, we consider a mixture $\boldsymbol{\vartheta}$ which is a $k$-spike distribution supported on the $(d-1)$-dimensional simplex
$\Delta_{d-1}=\{\boldsymbol{x}=(x_1,\cdots, x_d)\in \mathbb{R}^d \mid \sum_{i=1}^d x_i=1,\, x_i\geq 0 \,\,\forall i\in [d]\}$. Each point in $\Delta_{d-1}$ represents a discrete distribution
over $[d]$.
\topic{The sparse Hausdorff moment problem}
We first define the sparse moment problem for $d=2$, where the mixture is a $k$-spike distribution supported on $\Delta_1\cong[0,1]$.
We call the model {\em the $k$-coin model} (i.e., a mixture of $k$ Bernoulli distributions).
Denote the underlying mixture as $\boldsymbol{\vartheta}=(\boldsymbol{\alpha},\boldsymbol{w})$.
Here, $\boldsymbol{\alpha}=\{\alpha_1,\alpha_2,\cdots,\alpha_k\}$
and $\boldsymbol{w}=\{w_1,w_2,\cdots,w_k\}\in \Delta_{k-1}$,
where $\alpha_i\in [0,1]$
specifies the $i$th Bernoulli distribution
and $w_i\in [0,1]$ is the probability of mixture component $\alpha_i$.
For the mixture $\boldsymbol{\vartheta}=(\boldsymbol{\alpha},\boldsymbol{w})$,
the $t$-th moment is defined as $$M_{t}(\boldsymbol{\vartheta})=\int_{[0,1]} \alpha^t \boldsymbol{\vartheta}(\mathrm{d} \alpha) =\sum_{i=1}^k w_i \alpha_i^{t}.$$
Our goal is to learn the unknown parameters $(\boldsymbol{\alpha},\boldsymbol{w})$ of the mixture $\boldsymbol{\vartheta}$ given the first $K$
noisy moment values $M'_{t}$ with $|M'_{t}-M_{t}(\boldsymbol{\vartheta})|\leq \xi$ for $1\leq t\leq K$.
The problem is also known as the sparse {\em Hausdorff moment} problem
or the $k$-coin model
\cite{schmudgen2017moment,li2015learning,gordon2020sparse}.
We call $K$ the {\em moment number} and $\xi$ the moment precision/accuracy; both are important parameters of the problem.
We measure the quality of our estimation $\widetilde{\mix}$
in terms of the transportation distance between probability
distributions, i.e., $\tran(\widetilde{\mix}, \boldsymbol{\vartheta})\leq O(\epsilon)$.
Using transportation distance as the metric is advantageous for several reasons:
(1) if the desired accuracy $\epsilon$ is much smaller than the minimum separation $\zeta=\min_{i\neq j}|\alpha_i-\alpha_j|$, the estimation $\widetilde{\mix}$ must be a per-spike recovery, since it must contain a spike that is sufficiently close to an original spike in $\boldsymbol{\vartheta}$ (given that the weight of the spike is lower bounded);
(2) if the desired accuracy $\epsilon$ is larger than $\zeta$ or the minimum weight $w_{\min}$, we are allowed to
confuse two spikes that are very close, or miss a spike with very small weight, thus potentially avoiding the inverse dependency on $\zeta$ and $w_{\min}$, which is otherwise unavoidable if we must recover every spike.
Generally, for this problem, we are interested in understanding
the relations and trade-offs among the moment number $K$, the moment accuracy/precision $\xi$,
and the accuracy $\epsilon$ (i.e., transportation distance at most $\epsilon$) of our estimation
$\widetilde{\mix}$, and in designing efficient algorithms for recovering $\boldsymbol{\vartheta}$ to the desired accuracy.
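For concreteness: in the discrete setting at hand, the transportation distance admits the standard min-cost coupling formulation (the formal definition used in the paper is in Section~\ref{sec:pre}). For two $k$-spike mixtures $\boldsymbol{\vartheta}=(\boldsymbol{\alpha},\boldsymbol{w})$ and $\boldsymbol{\vartheta}'=(\boldsymbol{\alpha}',\boldsymbol{w}')$ on $[0,1]$,
$$\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')=\min_{\pi\geq 0}\Bigl\{\sum_{i,j=1}^{k} \pi_{ij}\,|\alpha_i-\alpha'_j| \;:\; \sum_{j} \pi_{ij}=w_i,\;\; \sum_{i} \pi_{ij}=w'_j\Bigr\},$$
so a small transportation distance means the weight of each spike can be routed a short distance to the spikes of the other mixture.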
\topic{The sparse moment problem in higher dimensions}
For the higher dimensional case,
suppose the underlying mixture is
$\boldsymbol{\vartheta}=(\boldsymbol{\alpha},\boldsymbol{w})$, where $\boldsymbol{\alpha}=\{\boldsymbol{\alpha}_1,\ldots, \boldsymbol{\alpha}_k\}$ ($\boldsymbol{\alpha}_i\in \Delta_{d-1}$, i.e., $d$-dimensional points in a bounded domain) and
$\boldsymbol{w}=\{w_1,\ldots, w_k\}\in \Delta_{k-1}$.
For $\boldsymbol{\alpha}_i=(\alpha_{i,1},\ldots, \alpha_{i,d})$ and $\mathbf{t}=(t_1,\cdots,t_d)\in \mathbb{Z}_+^d$, we denote $\boldsymbol{\alpha}_i^{\mathbf{t}}=\alpha_{i,1}^{t_1}\cdots \alpha_{i,d}^{t_d}$, and
the $\mathbf{t}$-moment of $\boldsymbol{\vartheta}$ is $$M_{\mathbf{t}}(\boldsymbol{\vartheta})=\int_{\Delta_{d-1}} \boldsymbol{\alpha}^\mathbf{t} \boldsymbol{\vartheta}(\mathrm{d} \boldsymbol{\alpha})
=\sum_{i=1}^k w_i \boldsymbol{\alpha}_i^{\mathbf{t}}.$$ We are given noisy access to the $\mathbf{t}$-moments
$M_{\mathbf{t}}(\boldsymbol{\vartheta})$ for $\|\mathbf{t}\|_1\leq K$. Here we also call $K$ the {\em moment number}.
Since there are $K^{O(d)}$ such moments, when $d$ is large we also consider the case
where we have noisy access to the moments of the projection of $\boldsymbol{\vartheta}$ onto a lower dimensional subspace.
Such noisy moments can often be obtained by an affine transformation of the samples in
concrete applications such as topic models.
Also motivated by topic models, we measure the accuracy of our estimation $\widetilde{\mix}$
in terms of the $L_1$-transportation distance between probability
distributions (see the formal definition in Section~\ref{sec:pre}), which is a more stringent metric than $L_2$-transportation.
For example, the distance between the two discrete distributions $(1/d,\ldots,1/d)$ and $(2/d,\ldots,2/d, 0, \ldots, 0)$ is $1$ in the $L_1$ metric of $\mathbb{R}^d$ but only $\sqrt{1/d}$ in the $L_2$ metric. These two distributions should be regarded as very different in a topic model, so $L_1$ is the more appropriate distance measure for our setting.
The problems we study are noisy versions of the classical {\em moment problems}~\cite{schmudgen2017moment,lasserre2009moments} and have applications
to a variety of unsupervised learning scenarios, including learning topic models \cite{Hof99,BNJ03,PRTV97}, learning Gaussian mixtures \cite{wu2020optimal}, collaborative filtering \cite{HP99,kleinberg2008using}, learning mixture of product distributions \cite{FeldmanOS05,gordon2021source} and causal inference \cite{gordon2021identifying}.
For example, one important application of the above model is learning topic models.
We are given a corpus of documents. We adopt the popular
“bag of words” model and take each document as an unordered multiset of words.
The assumption is that there is a small number $k$
of “pure” topics, where each topic is a distribution over the underlying vocabulary of $d$ words. A $K$-word document (i.e., a string in $[d]^K$) is generated by first selecting a
topic $p\in\Delta_{d-1}$ from the mixture $\boldsymbol{\vartheta}$, and then sampling
$K$ words from this topic.
$K$ is called {\em the snapshot number} and we call such a sample a {\em $K$-snapshot} of $p$.
For all $\|\mathbf{t}\|_1\leq K$, we can obtain noisy estimates of $M_{\mathbf{t}}(\boldsymbol{\vartheta})$ or $M_{\mathbf{t}}(\proj{B}(\boldsymbol{\vartheta}))$
(the projection of $\boldsymbol{\vartheta}$ onto a lower dimensional subspace $B$) using
the $K$-snapshots (i.e., the documents in the corpus).
Our goal here is to recover the mixture $\boldsymbol{\vartheta}$. We will discuss the application
to topic models in Section~\ref{sec:topic} and also to Gaussian mixture in Section~\ref{sec:Gaussian}.
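To make the snapshot-to-moment step concrete, here is a minimal sketch for the 1-dimensional $k$-coin model (the function names and the naive estimator are our illustration, not the paper's algorithm): it estimates $M_t$ as the empirical probability that the first $t$ flips of a document are all heads, which is an unbiased estimate of $\sum_i w_i\alpha_i^t$.

```python
import random

def sample_snapshots(alphas, weights, K, n, rng):
    """Draw n documents: each picks coin alpha_i with probability w_i,
    then records K independent flips of that coin."""
    docs = []
    for _ in range(n):
        r, acc = rng.random(), 0.0
        coin = alphas[-1]
        for a, w in zip(alphas, weights):
            acc += w
            if r < acc:
                coin = a
                break
        docs.append([rng.random() < coin for _ in range(K)])
    return docs

def empirical_moment(docs, t):
    """Estimate M_t = sum_i w_i * alpha_i**t as Pr[first t flips are all heads]."""
    return sum(all(doc[:t]) for doc in docs) / len(docs)
```

With $n$ documents this estimator has precision $\xi=O(1/\sqrt{n})$ with high probability, which is how moment precision translates into sample complexity in the discussion below.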
The mixture model is indeed a special case of the mixture models of $k$ product distributions \cite{FeldmanOS05,gordon2021source}, which is further a special case of the more general problem of the mixture models of Bayesian networks \cite{gordon2021identifying}.
\subsection{Previous Work and Connections}
We first discuss prior work on the $k$-coin model ($\boldsymbol{\vartheta}$ is supported on $[0,1]$).
Rabani et al. \cite{rabani2014learning}
showed that recovering a $k$-spike mixture
requires at least the first $K=2k-1$ moments in the worst case
(even without any noise).
Moreover, they showed that for the topic learning problem,
if one uses $K$-snapshot samples with $K=c(2k-1)$ (for any constant $c\geq 1$)
and wishes to achieve an accuracy of
$O(1/k)$ in terms of transportation distance,
$e^{\Omega(K)}$ samples are required
(or equivalently, the moment accuracy should be at most $e^{-\Omega(K)}$).
On the positive side,
they solved the problem using sample complexity
$\max\{(1/\zeta)^{O(k)}, (k/\epsilon)^{O(k^2)}\}$
(or moment accuracy $\min\{\zeta^{O(k)}, (\epsilon/k)^{O(k^2)}\}$),
where $\zeta$ is a lower bound on the minimum separation. In fact, their algorithm is a variant of the classic Prony's method,
and the (post-sampling) running time is $\poly(k)$, where the bottleneck is solving a special convex quadratic programming instance.
Li et al. \cite{li2015learning} provided a linear programming based algorithm
that requires moment accuracy $(\epsilon/k)^{O(k)}$ (or sample complexity
$(k/\epsilon)^{O(k)}$), matching the lower bound in \cite{rabani2014learning}.
However, after computing the noisy moments, the running time of their algorithm
is super-exponential $k^{O(k^2)}$.
Motivated by a problem in
population genetics that reduces to the $k$-coin mixture problem,
Kim et al. \cite{Kim2019HowMS} analyzed the matrix pencil method,
which requires moment accuracy $\zeta^{O(k)} \cdot w_{\min}^{2} \cdot \epsilon$.
The matrix pencil method requires solving a generalized eigenvalue problem,
which needs $O(k^3)$ time.
Wu and Yang \cite{wu2020optimal} studied the same problem in the context of learning Gaussian mixtures. In fact, they showed that learning 1-d Gaussian mixture models with the same variance can be reduced to learning the $k$-coin model. Their algorithm achieves the optimal dependency on
the moment accuracy without any separation assumption.
Their recovery algorithm is based on SDP and runs in
time $O(k^{6.5})$.
Recently, Gordon et al. \cite{gordon2020sparse} showed that
under the separation assumption,
it is possible to recover all $k$ spikes up to accuracy $\epsilon$, using
the first $2k$ moments with accuracy $\zeta^{O(k)} \cdot w_{\min} \cdot \epsilon$. Their algorithm is also a variant of Prony's method and runs in time $O(k^{2+o(1)})$.
We summarize the prior results in Table~\ref{tab:1dim-result}.
For the high-dimensional case,
especially when $d$ is large, it is costly to obtain all moments
$M_{\mathbf{t}}(\boldsymbol{\vartheta})$ for all $\mathbf{t}$ with $\|\mathbf{t}\|_1\leq K$.
Indeed, most previous work on topic modeling solved the problem using noisy moment information
of the projection of $\boldsymbol{\vartheta}$ onto some lower dimensional subspaces.
As different previous works used different projection methods, it is difficult to
compare the accuracy of their moment estimations as we did in the 1-dimensional case.
Hence, we directly state the sample complexity result in the context of topic modeling.
Rabani et al. \cite{rabani2014learning} showed that
it is possible to produce an estimate $\widetilde{\mix}$ of the original mixture $\boldsymbol{\vartheta}$
such that $\tran(\boldsymbol{\vartheta}, \widetilde{\mix})\leq \epsilon$,
using $\poly(n,k, 1/\epsilon)$ 1- and 2-snapshot samples, and $(k/\epsilon)^{O(k^2)} \cdot (w_{\min} \zeta)^{-O(k^2)}$
$K$-snapshot samples, under minimal separation assumptions.
Li et al. \cite{li2015learning} provided
an LP-based learning algorithm which
uses almost the same number of samples (with a slightly worse polynomial for 2-snapshot samples), but
without requiring any separation assumptions.
The number of the $(2k-1)$-snapshot samples used in both
\cite{rabani2014learning,li2015learning} is $(k/\epsilon)^{O(k^2)}$,
while that of the lower bound (for 1-d) is only $e^{\Omega(k)}$.
Recently, Gordon et al. \cite{gordon2020sparse} showed that
under the minimal separation assumption,
the sample complexity can be reduced to $(k/\epsilon)^{O(k)} \cdot w_{\min}^{-O(1)} \cdot \zeta^{-O(k)}$.
We summarize the prior results in Table~\ref{tab:ndim-result}.
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
Reference & Moment Number $(K)$ & Moment Accuracy $(\xi)$ & Running Time & Separation\\
\hline
\cite{rabani2014learning} & $2k-1$ & $\min\{\zeta^{O(k)}, (\epsilon/k)^{O(k^2)}\}$ & $\textrm{poly}(k)$ & Required \\
\hline
\cite{li2015learning} & $2k-1$ & $(\epsilon/k)^{O(k)}$ & $(k/\epsilon)^{O(k^2)}$ & No Need\\
\hline
\cite{Kim2019HowMS} & $2k-1$ & $\zeta^{O(k)} \cdot w_{\min}^{2} \cdot \epsilon$ & $O(k^3)$ & Required\\
\hline
\cite{wu2020optimal} & $2k-1$ & $(\epsilon/k)^{O(k)}$ & $O(k^{6.5})$ & No Need\\
\hline
\cite{gordon2020sparse} & $2k$ & $\zeta^{O(k)} \cdot w_{\min} \cdot \epsilon$ & $O(k^{2+o(1)})$ & Required\\
\hline
Theorem~\ref{thm:1dim-kspikecoin} & $2k-1$ & $(\epsilon/k)^{O(k)}$ & $O(k^{2})$ & No Need\\
\hline
\hline
\end{tabular}
\caption{Algorithms for the $k$-coin problem.
The last column indicates whether the algorithm needs
the separation assumption.
In particular, the algorithms that require the separation assumption
need to know $\zeta$ and $w_{\min}$, where $\zeta$ is the minimum separation, i.e.,
$\zeta\leq \min_{i\ne j}|\alpha_i-\alpha_j|\leq 1/k$, and $w_{\min}\leq \min_i w_i$.
}
\label{tab:1dim-result}
\end{table}
\subsection{Our Contributions}
\noindent
{\bf The $k$-Coin model.}
We first state our result for the 1-dimensional problem.
\begin{theorem}
\label{thm:1dim-kspikecoin}
(The $k$-Coin model)
Let $\boldsymbol{\vartheta}$ be an arbitrary $k$-spike distribution over $[0,1]$.
Suppose we have noisy moments $M'_{t}$ such that
$|M_t(\boldsymbol{\vartheta})-M'_{t}|\leq (\epsilon/k)^{O(k)}$ for $0\leq t \leq 2k-1$.
We can obtain a mixture $\widetilde{\mix}$ such that $\tran(\widetilde{\mix}, \boldsymbol{\vartheta})\leq O(\epsilon)$ and
our algorithm only uses $O(k^{2})$ arithmetic operations.
\end{theorem}
\noindent
{\bf Our techniques:}
Our algorithm for the $k$-coin model is also based on the classic Prony's algorithm \cite{prony1795}.
Suppose the true mixture is $\boldsymbol{\vartheta} = (\boldsymbol{\alpha}, \boldsymbol{w})$
where $\boldsymbol{\alpha} = [\alpha_1, \cdots, \alpha_k]^{\top}$ and $\boldsymbol{w} = [w_1, \cdots, w_k]^{\top}$.
In Prony's algorithm, if we know the moment vector $M$ exactly,
all $\alpha_i$s can be recovered from the roots
of a polynomial whose coefficients are the entries of the eigenvector $\boldsymbol{c}$
(corresponding to eigenvalue 0)
of a Hankel matrix (see e.g., the matrix ${\mathcal H}_{k+1}$ in \cite{gordon2020sparse}).
However, if we only know the noisy moment vector $M'$, we only get a perturbation of the original Hankel matrix.
Recent analyses of Prony's method \cite{rabani2014learning,gordon2020sparse} aim to bound the error of recovery from the perturbed Hankel matrix.
If $\alpha_i$ and $\alpha_j$ ($i\ne j$) are very close, or some $w_i$ is very small, the second smallest eigenvalue of the Hankel matrix is also very close to 0; hence the eigenvector $\boldsymbol{c}$ corresponding to eigenvalue 0 is extremely sensitive to small perturbations (see, e.g., \cite{stewart1990matrix}).
Hence, recent analyses of Prony's method \cite{rabani2014learning,gordon2020sparse}
require that $\alpha_i$ and $\alpha_j$ be separated by at least $\zeta$ and that the minimum $w_i$ be lower bounded by $w_{\min}$ to ensure certain stability of the eigenvector (hence of the coefficients of the polynomial), and the corresponding sample complexity becomes unbounded as $\zeta$ or $w_{\min}$ approaches 0.
Our algorithm (Algorithm~\ref{alg:dim1}) can be seen as a robust version of Prony's algorithm. Instead of computing the smallest eigenvector of the perturbed Hankel matrix, we solve a ridge regression to obtain a vector $\widehat{\vecc}$, which plays a similar role as $\boldsymbol{c}$ (Line 2 in Algorithm~\ref{alg:dim1}). However, we do not show $\widehat{\vecc}$ is close to $\boldsymbol{c}$ (in fact, they can be very different). Instead, we adopt a more global analysis to show that the moment vector of the estimated mixture is close to the true moment vector $M$, and by a moment-transportation inequality
(see Section~\ref{app:momenttransineq}),
we can guarantee the quality of our solution in terms of transportation distance.
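For intuition, here is a minimal noiseless sketch of the classic Prony pipeline specialized to $k=2$, with the linear algebra written out explicitly (this is our illustration of the textbook method, not the robust algorithm of Algorithm~\ref{alg:dim1}):

```python
import math

def prony_two_spikes(M):
    """Exact Prony recovery of a 2-spike mixture on [0, 1] from moments M = [M0, M1, M2, M3].

    Locations are the roots of the monic annihilating polynomial x^2 + c1*x + c0
    determined by M[t+2] + c1*M[t+1] + c0*M[t] = 0 for t = 0, 1."""
    det = M[1] * M[1] - M[0] * M[2]              # 2x2 Hankel determinant
    c1 = (M[3] * M[0] - M[2] * M[1]) / det
    c0 = (M[2] * M[2] - M[1] * M[3]) / det
    disc = math.sqrt(c1 * c1 - 4 * c0)
    a1, a2 = (-c1 - disc) / 2, (-c1 + disc) / 2  # spike locations (roots)
    # Weights from the Vandermonde system: w1 + w2 = M0, w1*a1 + w2*a2 = M1.
    w2 = (M[1] - a1 * M[0]) / (a2 - a1)
    return (a1, a2), (M[0] - w2, w2)
```

With noisy moments, the divisions by `det` and `a2 - a1` are exactly where the dependence on the separation and minimum weight enters, which is the issue our ridge-regression step and global analysis are designed to avoid.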
Another technical challenge lies in bounding the error in recovering the weight vector $\boldsymbol{w}$. In the noiseless case, the weight is simply
the solution of $V_{\boldsymbol{\alpha}} \boldsymbol{w} =M$, where $V_{\boldsymbol{\alpha}}$ is a Vandermonde matrix
(see Equation \eqref{eq:matrixdef}).
It is known that Vandermonde matrices tend to be badly ill-conditioned \cite{gautschi1987lower,pan2016bad,Moitra15} (with a large dependency on the inverse of the minimal separation). Hence, using standard condition number-based analysis, slight perturbations of $V_{\boldsymbol{\alpha}}$ and $M$ may result in an unbounded error.
\footnote{
Note that for the Vandermonde matrix $V=(a^j_i)_{0\leq i<k,1\leq j\leq k}$, its determinant
is $\det(V)=(-1)^{k(k-1)/2}\prod_{p<q} (a_p - a_q)$.
Hence, $\det(V)^{-1}$ depends inversely on the separation.
}
However, we show that, interestingly, in our case we can bound the error independently of the separation (Lemma~\ref{lm:step3}) via a connection between Vandermonde linear systems and Schur polynomials (Lemma~\ref{lm:dim1-step3-matrix}).
Our techniques may be useful in analyzing the perturbation of Vandermonde linear systems
in other contexts as well.
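The separation dependence of any analysis that inverts the Vandermonde matrix directly is easy to see numerically; the toy computation below (ours, for illustration) evaluates $|\det V|=\prod_{p<q}|a_q-a_p|$ and shows it collapsing as two nodes merge:

```python
from itertools import combinations

def vandermonde_abs_det(nodes):
    """|det V| for the square Vandermonde matrix on the given nodes,
    computed as the product of all pairwise differences |a_q - a_p|, p < q."""
    prod = 1.0
    for a, b in combinations(nodes, 2):
        prod *= abs(b - a)
    return prod
```

For example, nodes $\{0.1, 0.5, 0.9\}$ give $0.128$, while moving two nodes within $10^{-6}$ of each other drives the determinant below $10^{-6}$; the point of Lemma~\ref{lm:step3} is that the recovery error can nevertheless be bounded without this factor.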
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Reference & Snapshot Number $(K)$ & Sample Complexity for $K$-Snapshots & Separation \\
\hline
\cite{rabani2014learning} & $2k-1$ & $(k/\epsilon)^{O(k^2)} \cdot (w_{\min} \zeta)^{-O(k^2)}$ & Required \\
\hline
\cite{li2015learning} & $2k-1$ & $(k/\epsilon)^{O(k^2)}$ & No Need\\
\hline
\cite{gordon2020sparse} & $2k$ & $(k/\epsilon)^{O(k)} \cdot w_{\min}^{-O(1)} \cdot \zeta^{-O(k)}$ & Required\\
\hline
Theorem~\ref{thm:highdimtopic} & $2k-1$ & $(k/\epsilon)^{O(k)}$ & No Need\\
\hline
\end{tabular}
\caption{Algorithms for recovering the mixture over $\Delta_{d-1}$.
All algorithms require $\poly(1/\epsilon, d, k)$ many 1- and 2-snapshots.
The last column indicates whether the algorithm needs
the minimum separation assumption.
}
\label{tab:ndim-result}
\end{table}
\noindent
{\bf Higher Dimensions.}
We now state our result for the high dimensional case.
Here, we assume we have noisy access to the moments
of the linear projections $\proj{R}(\boldsymbol{\vartheta})$ of $\boldsymbol{\vartheta}$ onto lines or 2-d planes
(one may refer to the formal definition of $\proj{R}(\boldsymbol{\vartheta})$
in Section~\ref{subsec:algoforhigherdim}).
\begin{theorem}
\label{thm:highdim-kspike}
Let $\boldsymbol{\vartheta}$ be an arbitrary $k$-spike mixture supported in $\Delta_{d-1}$. Suppose we can access noisy $\mathbf{t}$-moments of
$\proj{R}(\boldsymbol{\vartheta})$, with precision $(\epsilon/(dk))^{O(k)}$,
where $\proj{R}(\boldsymbol{\vartheta})$ is the projected measure obtained by
applying a linear transformation $R$ (with $\|R\|_\infty\leq 1$) to project $\boldsymbol{\vartheta}$ onto any $h$-dimensional subspace we choose,
for all $\|\mathbf{t}\|_1\leq K=2k-1$ and $h=1,2$
(i.e., lines and 2-d planes).
We can construct a $k$-spike mixture $\widetilde{\mix}$ such that $\tran(\widetilde{\mix}, \boldsymbol{\vartheta})\leq O(\epsilon)$ using only $O(dk^3)$ arithmetic operations, with high probability.
\end{theorem}
\noindent
{\bf Applications to Learning Topic Models and Gaussian Mixtures.}
Our result for the high dimensional case
can be easily translated into an algorithm for topic modeling.
We first apply the dimension reduction developed in \cite{li2015learning}
to find a special subspace of dimension $O(k)$, then apply our Algorithm~\ref{alg:dim3} to the projection of $\boldsymbol{\vartheta}$ on this subspace.
See Section~\ref{sec:topic} and Theorem~\ref{thm:highdimtopic}.
We obtain, for the first time, the worst-case optimal sample complexity for $K$-snapshots in the high dimensional case, improving previous work
\cite{rabani2014learning,li2015learning,gordon2020sparse} and matching the lower bound even for the 1-dimensional case
\cite{rabani2014learning}.
A comparison between our results and prior results can be found in Table~\ref{tab:ndim-result}.
We also study the problem of learning the parameters of Gaussian mixtures.
We assume all Gaussian components share a variance parameter, following the setting studied in \cite{wu2020optimal}.
For the 1-dimensional setting, we can leverage our algorithm for the $k$-coin model.
Our algorithm achieves the same sample complexity as in \cite{wu2020optimal}, but with an $O(k^2)$ post-sampling running time, improving over the $O(k^{6.5})$-time
SDP-based algorithm developed in \cite{wu2020optimal}.
See Theorem~\ref{thm:1dim-Gaussian}.
For the high dimensional setting,
we can use the dimension reduction technique in \cite{li2015learning} or
\cite{doss2020optimal} to reduce the dimension $d$ to $O(k)$.
The dimension reduction part is not a bottleneck and we assume $d=O(k)$ for the
following discussion.
We show in Section~\ref{sec:highdim-Gaussian} that we can utilize our algorithm for the high dimensional sparse moment problem to obtain an algorithm without any separation assumption with sample complexity $(k/\epsilon)^{O(k)}$. Note that the algorithm in \cite{wu2020optimal} requires a sample size of $(k/\epsilon)^{O(k)} \cdot w_{\min}^{-O(1)} \cdot \zeta^{-O(k)}$, which depends on the separation parameter $\zeta$ between the Gaussian components.
Recently, Doss et al. \cite{doss2020optimal} removed the separation assumption
and achieved the optimal sample complexity $(k/\epsilon)^{O(k)}$.
Compared with \cite{doss2020optimal}, the sample complexity of our algorithm is the same, but our running time is substantially better:
during the sampling phase, the algorithm in \cite{doss2020optimal} requires $O(n^{5/4}\poly(k))$ time, where $n=(k/\epsilon)^{O(k)}$ is the number of samples
(for each sample, they need to update $n^{1/4}\poly(k)$ numbers, since
their algorithm requires $n^{1/4}$ 1-d projections),
while our algorithm needs only $O(n\poly(k))$ time
(we need only one 1-d projection and $d=O(k)$ 2-d projections).
The post-sampling running time of the algorithm in \cite{doss2020optimal}
is exponential, $(k/\epsilon)^{O(k)}$, while our algorithm
runs in polynomial time $\poly(k)$.
For more details, see Section~\ref{sec:Gaussian}.
Finally, we argue that although the sample complexity is exponential (for the above problems), improving the post-sampling running time is also very important. During the sampling phase, we only need to keep track of the first few moments (e.g., using basic operations such as counting or adding numbers), hence the sampling phase can be easily distributed, streamed, or implemented in inexpensive computing devices. Our mixture recovery algorithm only requires the moment information (without storing the samples) and runs in time polynomial in $k$ (not the sample size).
Moreover, one may well have other means of measuring the moments to the desired accuracy (e.g., via longer documents, prior knowledge, or existing samples).
Exploiting other settings and applications is an interesting direction for future research.
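As a small illustration of this point (our own sketch, not part of the recovery algorithm), the sampling phase only needs to maintain $K+1$ running sums, so it can be streamed or merged across machines by adding counters:

```python
class MomentSketch:
    """Running estimates of the first K raw moments of a stream of samples in [0, 1]."""

    def __init__(self, K):
        self.K, self.n = K, 0
        self.sums = [0.0] * (K + 1)

    def update(self, x):
        # O(K) time and memory per sample; no samples are stored.
        self.n += 1
        p = 1.0
        for t in range(self.K + 1):
            self.sums[t] += p
            p *= x

    def moments(self):
        return [s / self.n for s in self.sums]
```

The recovery algorithm then consumes only the $K+1$ numbers returned by `moments()`, in time polynomial in $k$ rather than in the sample size.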
\noindent
{\bf Our techniques:}
We first solve the 2-dimensional problem.
This is done by simply extending the previous 1-dimensional algorithm and
its analysis to complex numbers. The real and imaginary parts of a complex location can be used to represent the two dimensions, respectively.
While most ideas are similar to the 1-dimensional case,
the analysis requires extra care in various places. In particular,
it requires us to extend the moment-transportation inequality to complex numbers
(Appendix~\ref{app:momenttransineq}).
Our algorithm for the high dimensional case uses
the algorithms for 1- and 2-dimensional cases as subroutines.
We first pick a random unit vector $\boldsymbol{r}$
and learn the projection $\widetilde{\pmix}$ of $\boldsymbol{\vartheta}$ onto $\boldsymbol{r}$ using the
1-dimensional algorithm.
It is not difficult to show that for two spikes that are far apart, with reasonable probability,
their projections on $\boldsymbol{r}$ are also far apart.
Hence, the remaining task is to recover the coordinates (in $\mathbb{R}^d$) of each spike in $\widetilde{\pmix}$.
Then, for each dimension $i\in [d]$, we learn the 2-d projection $\widetilde{\mix}_i$ of $\boldsymbol{\vartheta}$ onto the 2-dimensional subspace spanned by $\boldsymbol{r}$ and the $i$th coordinate axis.
By assembling the projected measures $\widetilde{\pmix}$ and $\widetilde{\mix}_i$, we show that we can recover the $i$th coordinates of all spikes.
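Schematically, and ignoring noise, weights, and the probabilistic separation argument (all names below are ours), the assembly step matches each spike of a 2-d projection to a spike of the 1-d projection through their shared $\boldsymbol{r}$-coordinate:

```python
def assemble_coordinates(proj_spikes, planar_spikes_per_dim):
    """proj_spikes[j]: projection r . alpha_j of spike j onto the random direction r.
    planar_spikes_per_dim[i]: unordered (r . alpha, alpha_i) pairs recovered from the
    2-d projection onto the plane spanned by r and the i-th coordinate axis."""
    k, d = len(proj_spikes), len(planar_spikes_per_dim)
    spikes = [[0.0] * d for _ in range(k)]
    for i, pairs in enumerate(planar_spikes_per_dim):
        for p, coord in pairs:
            # Align by the shared r-coordinate; well defined when the 1-d
            # projections of distinct spikes are separated.
            j = min(range(k), key=lambda jj: abs(proj_spikes[jj] - p))
            spikes[j][i] = coord
    return spikes
```

Each 2-d recovery is only determined up to a permutation of its spikes; the shared 1-d projection fixes a consistent ordering across all $d$ planes.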
We remark that the improvement of our algorithm over prior work can be attributed to the usage of the 2-dimensional projections.
Note that all prior works \cite{rabani2014learning,li2015learning,wu2020optimal,gordon2020sparse,doss2020optimal} recover the high dimensional distribution by somehow ``assembling'' many
1-dimensional projections.
In particular, if one projects $\boldsymbol{\vartheta}$ to $O(k)$ directions such as in \cite{rabani2014learning,wu2020optimal,gordon2020sparse},
all existing techniques require the separation assumption for the recovery.
\footnote{
It is possible to convert an algorithm with separation assumption
to an algorithm without separation assumption by merging close-by spikes and removing
spikes with small weight. However, the resulting sample complexity is not optimal.
For example, the sample complexity $(k/\epsilon)^{O(k)} \cdot w_{\min}^{-O(1)} \cdot \zeta^{-O(k)}$ in \cite{gordon2020sparse} can be converted to one without separation with sample complexity $(k/\epsilon)^{O(k^2)}$. We omit the details.
}
Other works without a separation condition,
such as \cite{li2015learning,doss2020optimal},
require projecting $\boldsymbol{\vartheta}$ onto a net of exponentially many directions.
We hope this idea of using 2-d projections
is useful in solving other high dimensional recovery problems.
\eat{
\noindent
{\bf A Moment-transportation Inequality.}
Another crucial technical tool for all above algorithms is a moment-transportation inequality which connects the transportation distance and moment distance for two $k$-spike mixtures.
With this inequality, we only need to focus on efficiently reconstructing a mixture $\boldsymbol{\vartheta}'$ such that $\Mdis_K(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')$ is small enough, which enables a more global and much tighter analysis. This is unlike the previous analysis (\cite{rabani2014learning,gordon2020sparse}) in which we need to analyze the perturbation of each intermediate result and each component and weight.
\begin{theorem}
\label{thm:maininequality}
Let $\boldsymbol{\vartheta},\boldsymbol{\vartheta}'$ be two $k$-spike mixtures in $\mathrm{Spike}(\Delta_{d-1}, \Delta_{k-1})$, and
$K=2k-1$. Then, the following inequality holds
$$\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')\leq O(\mathrm{poly}(k, d)\Mdis_K(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')^{\frac{1}{2k-1}}).$$
\end{theorem}
The high level idea of the proof of the inequality is as follows.
Suppose two mixtures $\boldsymbol{\vartheta}$ and $\boldsymbol{\vartheta}'$ have a small moment distance.
First, we apply Kantorovich-Rubinstein duality theorem to $\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')$.
So, we need to show that for any 1-Lipschitz function $f$, $\int f \,\mathrm{d} (\boldsymbol{\vartheta}-\boldsymbol{\vartheta}')$ is small.
Since the support of $\boldsymbol{\vartheta}-\boldsymbol{\vartheta}'$ has at most $2k$ points
we can replace $f$ by a degree $2k-1$ polynomial due to the simple fact that
a polynomial of degree at most $2k-1$ can interpolate any function values at $2k$ discrete points.
However, if we want to interpolate $f$ exactly at those $2k$ points,
the coefficient of the polynomial can be unbounded
(in particular, it depends on the minimum separation of the points).
Indeed, this can be seen from the Lagrangian interpolation formula:
$f(x) := \sum_{j=1}^{n} f(\alpha_j) \ell_j(x)$
where $\ell_j$ is the Lagrange basis polynomial
$
\ell_j(x) := \prod_{1\le m\le n, m\neq j} (x-\alpha_m)/(\alpha_j - \alpha_m).
$
This motivates us to study a ``robust'' version of the polynomial interpolation problem,
which may be interesting in its own right.
We show that for any $f$ that is 1-Lipschitz over $n$ points $\alpha_1,\cdots,\alpha_n$ in $[-1,1]$,
there is a polynomial
with bounded height (a.k.a. largest coefficient)
which can approximately interpolate $f$ at those $n$ points (see Lemma~\ref{lm:decomposition}).
We show that it is possible to modify the values of some $f(\alpha_i)$ slightly, so that
the interpolating polynomial has a bounded height independent of the separation.
Interestingly, we leverage Newton's interpolation formula to design the modification of $f(\alpha_j)$
to ensure that 2nd order difference $F(\alpha_{i}, \alpha_{i+1})$ vanishes and does not contribute to the height of interpolating polynomial if $\alpha_i$ and $\alpha_{i+1}$ are very close to each other.
}
\subsection{Other Related Work}
Learning statistical mixture models has been studied extensively for the past two decades.
A central problem
in this area is learning a mixture of high-dimensional
Gaussians, even robustly \cite{Das99,DS00,AK01,VW02,KSV05,AM05,FOS06,BV08,KMV10,BS10,MV10,liu2021settling,bakshi2020robustly,wu2020optimal,doss2020optimal}.
Many other structured mixture models have also been studied (see e.g., \cite{KMRRSS94,CryanGG02,BGK04,MR05,DHKS05,
FeldmanOS05,KSV05,CR08a,DaskalakisDS12,liu2018efficiently}).
Our problem is closely related to topic
models, which have also been studied extensively recently~\cite{AGM12,AFHKL12,AHK12,arora2018learning}.
Most works make
certain assumptions on the structure of the mixture, such as
separation of the pure topics, a Dirichlet prior, the existence of anchor words, or certain rank conditions.
Some assumptions (such as those in \cite{AGM12,AFHKL12}) allow one to use documents of constant length
(independent of the number of pure topics $k$ and the desired accuracy $\varepsilon$).
The problem we study can be seen as a sparse version of the classic {\em moment problem},
in which the goal is to invert the mapping that takes a measure to its sequence of moments
\cite{schmudgen2017moment,lasserre2009moments}. When the measure is supported on a finite interval, the problem is known as the Hausdorff moment problem.
Our problem is also related to the super-resolution problem in which each measurement takes the form
$
v_\ell=\int e^{2\pi i \ell t} \,\mathrm{d} \boldsymbol{\vartheta}(t)+\xi_\ell
$
where $\xi_\ell$ is the noise of the measurement. We can observe the first few $v_\ell$ for $|\ell|\leq K$, where $K$ is called the {\em cutoff frequency}, and the goal is to recover the original signal $\boldsymbol{\vartheta}$.
This estimation problem has a long history. The noiseless case can be solved by
Prony's method \cite{prony1795}, the ESPRIT algorithm \cite{roy1989esprit}, or the matrix pencil method \cite{hua1990matrix}, provided $K\geq k$ (i.e., we have at least $2k+1$ measurements). The noisy case has
also been studied extensively, and a central goal is to understand the relations between the cutoff frequency, the size of the measurement noise, and the minimum separation \cite{donoho1992superresolution,candes2014towards,candes2013super,Moitra15,huang2015super,chen2016fourier}. Various properties, such as the
condition number of the Vandermonde matrix, also play essential roles in this line of study \cite{Moitra15}. The relation between the Vandermonde matrix and Schur polynomials is also exploited in \cite{chen2016fourier}.
\section{Preliminaries}\label{sec:pre}
We are given a statistical mixture $\boldsymbol{\vartheta}$ of $k$ discrete distributions
over $[d]=\{1,2,\cdots,d\}$.
Each discrete distribution $\boldsymbol{\alpha}_i$ can be regarded as a point in the $(d-1)$-simplex
$\Delta_{d-1}=\{\boldsymbol{x}=(x_1,\cdots, x_d)\in \mathbb{R}^d \mid \sum_{i=1}^d x_i=1, x_i\geq 0 \,\,\forall i\in [d]\}$.
We use $\boldsymbol{\vartheta}=(\boldsymbol{\alpha},\boldsymbol{w})$ to represent the mixture, where
$\boldsymbol{\alpha}=\{\boldsymbol{\alpha}_1,\boldsymbol{\alpha}_2,\cdots,\boldsymbol{\alpha}_k\}\subset \Delta_{d-1}$ are the locations of the spikes
and $\boldsymbol{w}=\{w_1,w_2,\cdots,w_k\}\in \Delta_{k-1}$,
in which $w_i$ is the probability of $\boldsymbol{\alpha}_i$.
Since the dimension of $\Delta_{d-1}$ is $d-1$, we say the {\em dimension} of the problem is $d-1$.
Moreover, we use $\mathrm{Spike}(\Delta_{d-1},\Delta_{k-1})$
to denote the set of all such mixtures where $\Delta_{d-1}$ indicates
the domain of $\boldsymbol{\alpha}_i$ and $\Delta_{k-1}$ indicates the domain of $\boldsymbol{w}$.
Our algorithms may produce negative or complex weights as intermediate results,
so we further denote $\Sigma^{\mathbb{R}}_{d-1}=\{\boldsymbol{x}=(x_1,\cdots, x_d)\in \mathbb{R}^d \mid \sum_{i=1}^d x_i=1\}$
and
$\Sigma^{\mathbb{C}}_{d-1}=\{\boldsymbol{x}=(x_1,\cdots, x_d)\in \mathbb{C}^d \mid \sum_{i=1}^d x_i=1\}$.
For the 1-dimensional case (which is called the $k$-coin problem or
sparse Hausdorff moment problem),
we have
$\boldsymbol{\vartheta} = (\boldsymbol{\alpha},\boldsymbol{w}) \in \mathrm{Spike}(\Delta_1, \Delta_{k-1})$,
where
$\boldsymbol{\alpha}=\{\alpha_1,\alpha_2,\cdots,\alpha_k\}$ is a set of $k$ discrete points in $[0,1]$.
In this scenario, for each $t\in \mathbb{N}$, we denote the $t$th moment as
$$
M_{t}(\boldsymbol{\vartheta})=\int_{[0,1]} \alpha^t \,\boldsymbol{\vartheta}(\mathrm{d} \alpha) =\sum_{i=1}^k w_i \alpha_i^{t}.
$$
For the higher-dimensional case, we use
$\mathbf{t}=(t_1,t_2,\cdots,t_d)\in \mathbb{Z}_+^d$ with $\|\mathbf{t}\|_1 \leq K\,(=2k-1)$
to denote a multi-index.
In addition, we denote
$\boldsymbol{\alpha}_i^{\mathbf{t}}=\alpha_{i,1}^{t_1}\alpha_{i,2}^{t_2}\cdots\alpha_{i,d}^{t_d}$ for every discrete distribution
$\boldsymbol{\alpha}_i=(\alpha_{i,1},\alpha_{i,2},\cdots,\alpha_{i,d})\in \mathbb{R}^d$.
The moment vector of $\boldsymbol{\vartheta}$ is then defined as
\begin{align}
\label{eq:highdimmomvector}
M(\boldsymbol{\vartheta})=\left\{M_{\mathbf{t}}(\boldsymbol{\vartheta})\right\}_{\|\mathbf{t}\|_1\leq K}
\text{ where }
M_{\mathbf{t}}(\boldsymbol{\vartheta})=\int_{\Delta_{d-1}} \boldsymbol{\alpha}^\mathbf{t} \,\boldsymbol{\vartheta}(\mathrm{d} \boldsymbol{\alpha}) =\sum_{i=1}^k w_i \boldsymbol{\alpha}_i^{\mathbf{t}}.
\end{align}
We define the {\em moment distance} between two mixtures $\boldsymbol{\vartheta}$ and $\boldsymbol{\vartheta}'$
as the $L_{\infty}$ norm of the difference between the corresponding moment vectors,
$$
\Mdis_K(\boldsymbol{\vartheta},\boldsymbol{\vartheta}'):=\max_{\|\mathbf{t}\|_1\leq K} |M_\mathbf{t}(\boldsymbol{\vartheta})-M_\mathbf{t}(\boldsymbol{\vartheta}')|.
$$
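As a concrete illustration of these definitions, the moments and the moment distance of a 1-dimensional $k$-spike mixture can be computed directly (a minimal sketch; the function names `moments` and `moment_distance` are ours, not from the paper):

```python
def moments(alphas, weights, K):
    """Moment vector M_0, ..., M_K of the k-spike mixture
    sum_i weights[i] * delta(alphas[i]) on [0, 1]."""
    return [sum(w * a ** t for a, w in zip(alphas, weights)) for t in range(K + 1)]

def moment_distance(M1, M2):
    """Moment distance Mdis_K: the L_infinity gap between two moment vectors."""
    return max(abs(m1 - m2) for m1, m2 in zip(M1, M2))
```

For example, the mixture with spikes at $0.2$ and $0.8$, each of weight $0.5$, has moment vector $[1, 0.5, 0.34, 0.26]$ for $K=3$.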
\topic{Transportation Distance}
Recall that for any two probability measures $P$ and $Q$ defined over $\mathbb{R}^d$, the $L_1$-transportation distance $\tran(P,Q)$ is defined as
\begin{align}
\label{eq:transprimal}
\tran(P,Q):= \inf \left\{\int \|x-y\|_1 \,\mathrm{d}\mu(x,y) : \mu\in M(P,Q) \right\},
\end{align}
where $M(P,Q)$ is the set of all joint distributions (also called couplings) on $\mathbb{R}^d \times \mathbb{R}^d$ with marginals $P$ and $Q$.
The transportation distance is also called the Rubinstein distance, the Wasserstein distance,
or the earth mover's distance in the literature.
Let $1\text{-}\mathsf{Lip}$ be the set of 1-Lipschitz functions on $\mathbb{R}^d$, i.e.,
$1\text{-}\mathsf{Lip}:=\{f: \mathbb{R}^d\rightarrow \mathbb{R} \mid |f(x)-f(y)|\leq \|x-y\|_1 \text{ for any }x,y\in \mathbb{R}^d\}$.
We need the following important theorem by Kantorovich and Rubinstein (see e.g., \cite{dudley2002real}),
which states the dual formulation of transportation distance:
\begin{align}
\label{eq:trans}
\tran(P,Q)=\sup\left\{\int f \,\mathrm{d}(P-Q): f\in 1\text{-}\mathsf{Lip}\right\}.
\end{align}
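In one dimension, the transportation distance between two discrete distributions has a simple closed form: it is the integral of the absolute difference of the two CDFs. A minimal sketch (the function name `transport_1d` and the dictionary representation are ours):

```python
def transport_1d(p, q):
    """L1-transportation (Wasserstein-1) distance between two discrete
    distributions on the real line, given as {location: weight} dicts.
    In 1-d, tran(P, Q) equals the integral of |F_P(x) - F_Q(x)| dx."""
    points = sorted(set(p) | set(q))
    dist, cdf_gap = 0.0, 0.0
    for left, right in zip(points, points[1:]):
        cdf_gap += p.get(left, 0.0) - q.get(left, 0.0)  # F_P - F_Q on [left, right)
        dist += abs(cdf_gap) * (right - left)
    return dist
```

For instance, moving a unit spike from $0$ to $1$ costs exactly $1$.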
\section{An Efficient Algorithm for the $k$-Coin Problem}
\label{sec:1d-recover}
In this section, we study the $k$-coin model,
i.e., $\boldsymbol{\vartheta}$ is a $k$-spike distribution over $[0,1]$.
We present an efficient algorithm that reconstructs the mixture from the first $K=2k-1$ noisy moments.
We denote by $\boldsymbol{\vartheta}:=(\boldsymbol{\alpha}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_1,\Delta_{k-1})$ the ground truth mixture, where $\boldsymbol{\alpha} = [\alpha_1, \cdots, \alpha_k]^{\top} \in \Delta_1^k$ and $\boldsymbol{w} = [w_1, \cdots, w_k]^{\top} \in \Delta_{k-1}$.
Let $M(\boldsymbol{\vartheta}) = [M_0(\boldsymbol{\vartheta}), \cdots, M_{2k-1}(\boldsymbol{\vartheta})]^{\top}$ be the ground truth moment vector containing moments of degree at most $K = 2k-1$.
We denote the noisy moment vector by $M' = [M_0', \cdots, M_{2k-1}']^{\top}$, where the error is bounded by $\|M' -M(\boldsymbol{\vartheta})\|_{\infty} \leq \xi$. We further assume $M'_0=M_0(\boldsymbol{\vartheta}) = 1$ (because it is a distribution) and that the noise satisfies $\xi \leq 2^{-\Omega(k)}$
(this is necessary in light of the lower bound in \cite{rabani2014learning} or Lemma~\ref{lm:inequality1d}).
For $M = [M_0, \cdots, M_{2k-1}]^{\top}$ and $\boldsymbol{\alpha} = [\alpha_1, \cdots, \alpha_k]^{\top}$, we denote
\begin{align}\label{eq:matrixdef}
A_{M} := \begin{bmatrix}
M_{0} & M_{1} & \cdots & M_{k-1} \\
M_{1} & M_{2} & \cdots & M_{k} \\
\vdots & \vdots & \ddots & \vdots \\
M_{k-1} & M_{k} & \cdots & M_{2k-2}
\end{bmatrix}, \quad
b_{M} := \begin{bmatrix}
M_{k} \\
M_{k+1} \\
\vdots \\
M_{2k-1}
\end{bmatrix}, \quad
V_{\boldsymbol{\alpha}} := \begin{bmatrix}
\alpha_1^0 & \alpha_2^0 & \cdots & \alpha_k^0 \\
\alpha_1^1 & \alpha_2^1 & \cdots & \alpha_k^1 \\
\vdots & \vdots & \ddots & \vdots \\
\alpha_1^{2k-1} & \alpha_2^{2k-1} & \cdots & \alpha_k^{2k-1}
\end{bmatrix}.
\end{align}
Note that $A_M$ is a Hankel matrix and $V_{\boldsymbol{\alpha}}$ is a Vandermonde matrix.
\begin{algorithm}[!ht]
\caption{Reconstruction Algorithm in 1-Dimension}\label{alg:dim1}
\begin{algorithmic}[1]
\Function{OneDimension}{$k, M', \xi$}
\State $\widehat{\vecc} \leftarrow \arg \min_{\boldsymbol{x} \mathrm{i}n \mathbb{R}^k} \| A_{M'} \boldsymbol{x} + b_{M'} \|_2^2 + \xi^2 \|\boldsymbol{x}\|_2^2$ \mathbb{C}omment{$\widehat{\vecc} = [\widehat{c}_0, \cdots, \widehat{c}_{k-1}]^{\top} \mathrm{i}n \mathbb{R}^k$}
\State $\widehat{\veca} \leftarrow \textrm{roots}(\sum_{i=0}^{k-1} \widehat{c}_i x^i + x^k)$ \mathbb{C}omment{$\widehat{\veca} = [\widehat{\alpha}_1, \cdots, \widehat{\alpha}_k]^{\top} \mathrm{i}n \mathbb{C}^k$}
\State $\overline{\veca} \leftarrow \textrm{project}_{\Delta_1}(\widehat{\veca})$
\mathbb{C}omment{$\overline{\veca} = [\overline{\alpha}_1, \cdots, \overline{\alpha}_{k}]^{\top} \mathrm{i}n \Delta_1^k$}
\State $\widetilde{\veca} \leftarrow \overline{\veca} + \textrm{Noise}(\xi)$
\mathbb{C}omment{$\widetilde{\veca} = [\widetilde{\alpha}_1, \cdots, \widetilde{\alpha}_{k}]^{\top} \mathrm{i}n \Delta_1^k$}
\State $\widehat{\vecw} \xleftarrow{O(1)-\textrm{approx}} \arg \min_{\boldsymbol{x} \mathrm{i}n \mathbb{R}^k} \|V_{\widetilde{\veca}} \boldsymbol{x} - M'\|_2^2$
\mathbb{C}omment{$\widehat{\vecw} = [\widehat{w}_1, \cdots, \widehat{w}_{k}]^{\top} \mathrm{i}n \mathbb{R}^k$}
\State $\widetilde{\vecw} \leftarrow \widehat{\vecw} / (\sum_{i=1}^{k} \widehat{w}_i)$
\mathbb{C}omment{$\widetilde{\vecw} = [\widetilde{w}_1, \cdots, \widetilde{w}_{k}]^{\top} \mathrm{i}n \Sigma_{k-1}$}
\State $\widetilde{\mix} \leftarrow (\widetilde{\veca}, \widetilde{\vecw})$ \mathbb{C}omment{$\widetilde{\mix} \mathrm{i}n \mathrm{Spike}(\Delta_1, \Sigma_{k-1})$}
\State $\widecheck{\vecw} \leftarrow \arg \min_{\boldsymbol{x} \mathrm{i}n \Delta_{k-1}} \tran(\widetilde{\mix}, (\widetilde{\veca}, \boldsymbol{x}))$
\State report $\widecheck{\mix} \leftarrow (\widetilde{\veca}, \widecheck{\vecw})$ \mathbb{C}omment{$\widecheck{\mix} \mathrm{i}n \mathrm{Spike}(\Delta_1, \Delta_{k-1})$}
\EndFunction
\end{algorithmic}
\end{algorithm}
Our algorithm is a variant of Prony's algorithm.
The pseudocode can be found in Algorithm~\ref{alg:dim1}. The algorithm takes as input the number of spikes $k$, the noisy moment vector $M'$ with $\|M' - M(\boldsymbol{\vartheta})\|_{\infty}\leq \xi$, and the moment accuracy $\xi$.
We describe our algorithm as follows.
Let $\boldsymbol{c} = [c_0, \cdots, c_{k-1}]^{\top} \in \mathbb{R}^k$ with $\prod_{i=1}^{k} (x - \alpha_i) = \sum_{i=0}^{k-1} c_i x^i + x^k$ be the characteristic vector of the locations $\alpha_i$.
We first perform a ridge regression to obtain $\widehat{\vecc}$ in Line 2.
Note that $A_{M(\boldsymbol{\vartheta})} \boldsymbol{c} + b_{M(\boldsymbol{\vartheta})} = \mathbf{0}$ (see Lemma~\ref{lm:dim1-char-poly}).
Hence, $\widehat{\vecc}$ serves a similar role as $\boldsymbol{c}$
(note that $\widehat{\vecc}$ is not necessarily close to $\boldsymbol{c}$ without the separation assumption).
From Line 3 to Line 5, we aim to obtain estimates of the positions of the spikes, i.e., the $\alpha_i$s.
We first compute the roots of the polynomial $\sum_{i=0}^{k-1} \widehat{c}_i x^i + x^k$.
Note that, without any separation assumption, some of the roots may be complex.
\footnote{Gordon et al.~\cite{gordon2020sparse} prove that all roots
are real and separated under the minimal separation assumption.
}
Hence, we need to project the solutions back to $\Delta_1$ and
inject small noise, ensuring that all values are distinct and still in $\Delta_1$.
We note that any noise of size at most $\xi$ suffices.
After recovering the positions of the spikes, we aim to recover the corresponding weights $\widetilde{\vecw}$
via the linear regression defined by the Vandermonde matrix $V_{\widetilde{\veca}}$ (Line 6).
We normalize the weight vector in Line 7.
We note that $\widetilde{\vecw}$ may still have some negative components. In Line 9, we
find the closest $k$-spike distribution in $\mathrm{Spike}(\Delta_1, \Delta_{k-1})$, which is our final output.
The details for implementing the above steps in $O(k^2)$ time can be found in Appendix~\ref{subsec:1d-alg-detail}.
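Lines 2--7 of the algorithm can be sketched in code as follows. This is a simplified illustration of ours, not the paper's $O(k^2)$ implementation: it uses dense linear algebra, a deterministic perturbation in place of $\textrm{Noise}(\xi)$, and omits the final transportation-distance projection of Line 9.

```python
import numpy as np

def one_dimension(k, M_noisy, xi):
    """Prony-style sketch of Lines 2-7: recover k spike positions in [0, 1]
    and weights from noisy moments M'_0, ..., M'_{2k-1}."""
    M = np.asarray(M_noisy, dtype=float)
    A = np.array([[M[i + j] for j in range(k)] for i in range(k)])  # Hankel A_{M'}
    b = M[k:2 * k]                                                  # b_{M'}
    # Line 2: ridge regression  argmin ||A x + b||_2^2 + xi^2 ||x||_2^2
    c = np.linalg.solve(A.T @ A + xi ** 2 * np.eye(k), -A.T @ b)
    # Line 3: roots of x^k + c_{k-1} x^{k-1} + ... + c_0 (possibly complex)
    roots = np.roots(np.concatenate(([1.0], c[::-1])))
    # Line 4: project onto [0, 1] (real part, clipped)
    alphas = np.clip(roots.real, 0.0, 1.0)
    # Line 5: perturbation of size < xi to keep the positions distinct
    alphas = np.clip(alphas + xi * np.arange(k) / k, 0.0, 1.0)
    # Line 6: least squares against the Vandermonde matrix
    V = np.array([[a ** t for a in alphas] for t in range(2 * k)])
    w = np.linalg.lstsq(V, M, rcond=None)[0]
    # Line 7: normalize the weights (entries may still be negative)
    return alphas, w / w.sum()
```

On exact moments of a well-separated mixture (e.g., spikes at $0.2$ and $0.8$ with equal weights) this already recovers the parameters to high accuracy.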
\subsection{Error Analysis}
Now, we start to bound the reconstruction error of the algorithm.
The following lemma is well known (see e.g., \cite{chihara2011introduction,gordon2020sparse}). We provide a proof in the appendix
for completeness.
\begin{lemma}\label{lm:dim1-char-poly}
Let $\boldsymbol{\vartheta} = (\boldsymbol{\alpha}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_1, \Delta_{k-1})$ where $\boldsymbol{\alpha} = [\alpha_1, \cdots, \alpha_k]^{\top}$ and $\boldsymbol{w} = [w_1, \cdots, w_k]^{\top}$.
Let $\boldsymbol{c} = [c_0, \cdots, c_{k-1}]^{\top} \in \mathbb{R}^k$ such that $\prod_{i=1}^{k} (x - \alpha_i) = \sum_{i=0}^{k-1} c_i x^i + x^k$.
For all $i \geq 0$, the following equation holds
$\sum_{j=0}^{k-1} M_{i+j}(\boldsymbol{\vartheta}) c_j + M_{i+k}(\boldsymbol{\vartheta}) = 0.$
In matrix form, the equation can be written as follows:
$
A_{M(\boldsymbol{\vartheta})} \boldsymbol{c} + b_{M(\boldsymbol{\vartheta})} = \mathbf{0}.
$
\end{lemma}
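A quick numerical sanity check of this lemma (a sketch of ours; the 3-spike mixture is an arbitrary example):

```python
import numpy as np

# An arbitrary 3-spike mixture on [0, 1]
alphas = np.array([0.2, 0.5, 0.9])
weights = np.array([0.3, 0.3, 0.4])
k = len(alphas)

# Moments M_0, ..., M_{2k-1}, the Hankel matrix A_M, and the vector b_M
M = np.array([(weights * alphas ** t).sum() for t in range(2 * k)])
A = np.array([[M[i + j] for j in range(k)] for i in range(k)])
b = M[k:2 * k]

# np.poly returns the monic coefficients of prod_i (x - alpha_i) in
# decreasing order; reversing the tail gives c = [c_0, ..., c_{k-1}].
c = np.poly(alphas)[1:][::-1]

assert np.allclose(A @ c + b, 0.0)  # A_M c + b_M = 0
```

The identity holds because the $i$th component equals $\sum_l w_l \alpha_l^i \, p(\alpha_l)$ for $p(x) = \prod_i (x-\alpha_i)$, which vanishes at every spike.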
Next, the following lemma shows that the intermediate result $\widehat{\vecc}$ is a good approximate solution of $A_{M(\boldsymbol{\vartheta})} \boldsymbol{x} + b_{M(\boldsymbol{\vartheta})} = \boldsymbol{0}$ with a small norm:
\begin{lemma} \label{lm:dim1-step1}
Let $\boldsymbol{c} = [c_0, \cdots, c_{k-1}]^{\top} \in \mathbb{R}^k$ such that $\prod_{i=1}^{k} (x - \alpha_i) = \sum_{i=0}^{k-1} c_i x^i + x^k$.
Suppose $\|M' - M(\boldsymbol{\vartheta})\|_{\infty}\leq \xi$.
Let $\widehat{\vecc} = [\widehat{c}_0, \cdots, \widehat{c}_{k-1}]^{\top} \in \mathbb{R}^k$ be the intermediate result (Line 2) in Algorithm~\ref{alg:dim1}. Then, $\|\boldsymbol{c}\|_1 \leq 2^k$, $\|\widehat{\vecc}\|_1 \leq 2^{O(k)}$ and $\|A_{M(\boldsymbol{\vartheta})} \widehat{\vecc} + b_{M(\boldsymbol{\vartheta})}\|_\infty \leq 2^{O(k)} \cdot \xi$.
\end{lemma}
\begin{proof}
From Vieta's formulas (see e.g., \cite{barbeau2003polynomials}), we have
$c_i = \sum_{S \in \binom{[k]}{k-i}} \prod_{j\in S} (-\alpha_j).$
Thus,
$$\|\boldsymbol{c}\|_1 = \sum_{i=0}^{k-1} |c_i| \leq \sum_{S \in 2^{[k]}} \prod_{j\in S} |\alpha_j| = \prod_{i=1}^{k} (1+|\alpha_i|) \leq 2^k,$$
where the last inequality holds because $|\alpha_i| \leq 1$ for all $i$.
From Lemma \ref{lm:dim1-char-poly}, we can see that $\|A_{M(\boldsymbol{\vartheta})} \boldsymbol{c} + b_{M(\boldsymbol{\vartheta})}\|_{\infty} = 0$.
Therefore,
\begin{align*}
\|A_{M'} \boldsymbol{c} + b_{M'}\|_\infty &\leq \|A_{M(\boldsymbol{\vartheta})} \boldsymbol{c} + b_{M(\boldsymbol{\vartheta})}\|_{\infty} + \|A_{M'} \boldsymbol{c} - A_{M(\boldsymbol{\vartheta})} \boldsymbol{c}\|_{\infty} + \|b_{M'} - b_{M(\boldsymbol{\vartheta})}\|_{\infty} \\
&\leq \|A_{M(\boldsymbol{\vartheta})} \boldsymbol{c} + b_{M(\boldsymbol{\vartheta})}\|_{\infty} + \|(A_{M'} - A_{M(\boldsymbol{\vartheta})}) \boldsymbol{c}\|_{\infty} + \|b_{M'} - b_{M(\boldsymbol{\vartheta})}\|_{\infty} \\
&\leq \|A_{M(\boldsymbol{\vartheta})} \boldsymbol{c} + b_{M(\boldsymbol{\vartheta})}\|_{\infty} + \|A_{M'} - A_{M(\boldsymbol{\vartheta})}\|_{\infty} \|\boldsymbol{c}\|_1 + \|b_{M'} - b_{M(\boldsymbol{\vartheta})}\|_{\infty} \\
&\leq \|A_{M(\boldsymbol{\vartheta})} \boldsymbol{c} + b_{M(\boldsymbol{\vartheta})}\|_{\infty} + \|M' - M(\boldsymbol{\vartheta})\|_{\infty} (\|\boldsymbol{c}\|_1 + 1) \\
&\leq 2^{O(k)} \cdot \xi.
\end{align*}
The fourth inequality holds since $(A_{M'} - A_{M(\boldsymbol{\vartheta})})_{i,j}=(M'-M(\boldsymbol{\vartheta}))_{i+j}$ and $(b_{M'}-b_{M(\boldsymbol{\vartheta})})_i = (M'-M(\boldsymbol{\vartheta}))_{i+k}$.
From the definition of $\widehat{\vecc}$, we can see that
\begin{align*}
\|A_{M'} \widehat{\vecc} + b_{M'}\|_2^2 + \xi^2 \|\widehat{\vecc}\|_2^2
&\leq \|A_{M'} \boldsymbol{c} + b_{M'}\|_2^2 + \xi^2 \|\boldsymbol{c}\|_2^2 \\
&\leq k \|A_{M'} \boldsymbol{c} + b_{M'}\|_\infty^2 + \xi^2 \|\boldsymbol{c}\|_1^2 \\
&\leq 2^{O(k)} \cdot \xi^2.
\end{align*}
The second inequality holds since $\|\boldsymbol{x}\|_2 \leq \|\boldsymbol{x}\|_1$ and $\|\boldsymbol{x}\|_2 \leq \sqrt{k} \|\boldsymbol{x}\|_{\infty}$ for any vector $\boldsymbol{x} \in \mathbb{R}^k$.
Now, we can directly see that
\begin{align*}
\|A_{M'} \widehat{\vecc} + b_{M'}\|_\infty &\leq \|A_{M'} \widehat{\vecc} + b_{M'}\|_2 \leq 2^{O(k)} \cdot \xi, \\
\|\widehat{\vecc}\|_1 &\leq \sqrt{k} \|\widehat{\vecc}\|_2 \leq 2^{O(k)}.
\end{align*}
Finally, we can bound $\|A_{M(\boldsymbol{\vartheta})} \widehat{\vecc} + b_{M(\boldsymbol{\vartheta})}\|_\infty$
as follows:
\begin{align*}
\|A_{M(\boldsymbol{\vartheta})} \widehat{\vecc} + b_{M(\boldsymbol{\vartheta})}\|_\infty &\leq \|A_{M'} \widehat{\vecc} + b_{M'}\|_\infty + \|A_{M'} \widehat{\vecc} - A_{M(\boldsymbol{\vartheta})} \widehat{\vecc}\|_{\infty} + \|b_{M'} - b_{M(\boldsymbol{\vartheta})}\|_{\infty} \\
&\leq \|A_{M'} \widehat{\vecc} + b_{M'}\|_\infty + \|A_{M'} - A_{M(\boldsymbol{\vartheta})}\|_{\infty} \| \widehat{\vecc}\|_1 + \|b_{M'} - b_{M(\boldsymbol{\vartheta})}\|_{\infty} \\
&\leq \|A_{M'} \widehat{\vecc} + b_{M'}\|_\infty + \|M' - M(\boldsymbol{\vartheta})\|_{\infty} (\|\widehat{\vecc}\|_1 + 1) \\
&\leq 2^{O(k)} \cdot \xi.
\end{align*}
\end{proof}
Using this result, we can show that $\widetilde{\veca}$ in Line 5 is a good estimate of the ground truth spikes $\boldsymbol{\alpha}$.
In particular, the following lemma shows that every ground truth spike $\alpha_i \in \boldsymbol{\alpha}$ has a close spike $\widetilde{\alpha}_j \in \widetilde{\veca}$, where the closeness depends on the weight $w_i$ of the spike.
We note that this property tolerates permutations and small spike weights, enabling us to analyze mixtures without separation.
\begin{lemma} \label{lm:dim1-step2-1}
Let $\widehat{\veca} = [\widehat{\alpha}_1, \cdots, \widehat{\alpha}_{k}]^{\top}\in \mathbb{C}^k$ be the intermediate result (Line 3) in Algorithm~\ref{alg:dim1}.
Then, the following inequality holds:
\begin{align*}
\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\alpha_i - \widehat{\alpha}_j|^2 \leq 2^{O(k)} \cdot \xi.
\end{align*}
\end{lemma}
\begin{proof}
Since $\widehat{\vecc}$ and $\boldsymbol{\alpha}$ are real, and the $\widehat{\alpha}_j$ are exactly the roots of $\sum_{j=0}^{k-1} \widehat{c}_j x^j + x^k$,
\begin{align*}
\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\alpha_i - \widehat{\alpha}_j|^2
= \sum_{i=1}^{k} w_i \left|\prod_{j=1}^{k} (\alpha_i - \widehat{\alpha}_j)\right|^2
= \sum_{i=1}^{k} w_i \left|\sum_{j=0}^{k-1} \widehat{c}_j \alpha_i^j + \alpha_i^k\right|^2
= \sum_{i=1}^{k} w_i \left(\sum_{j=0}^{k-1} \widehat{c}_j \alpha_i^j + \alpha_i^k\right)^2.
\end{align*}
Hence, we can see that
\begin{align*}
\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\alpha_i - \widehat{\alpha}_j|^2
&= \sum_{i=1}^{k} w_i \left(\sum_{p=0}^{k-1}\sum_{q=0}^{k-1}\widehat{c}_p \widehat{c}_q \alpha_i^{p+q} + 2\sum_{p=0}^{k-1} \widehat{c}_p \alpha_i^{p+k} + \alpha_i^{2k} \right) \\
&= \sum_{p=0}^{k-1}\sum_{q=0}^{k-1}\widehat{c}_p \widehat{c}_q \sum_{i=1}^{k}w_i \alpha_i^{p+q} + 2\sum_{p=0}^{k-1}\widehat{c}_p \sum_{i=1}^{k} w_i \alpha_i^{p+k} + \sum_{i=1}^k w_i \alpha_i^{2k} \\
&= \sum_{p=0}^{k-1} \widehat{c}_p \left(\sum_{q=0}^{k-1} M_{p+q}(\boldsymbol{\vartheta}) \widehat{c}_q + M_{p+k}(\boldsymbol{\vartheta})\right) + \sum_{p=0}^{k-1} \widehat{c}_p M_{p+k}(\boldsymbol{\vartheta}) + M_{2k}(\boldsymbol{\vartheta}) \\
&= \sum_{p=0}^{k-1} \widehat{c}_p \Bigl(A_{M(\boldsymbol{\vartheta})} \widehat{\vecc} + b_{M(\boldsymbol{\vartheta})}\Bigr)_p + \sum_{p=0}^{k-1} \widehat{c}_p M_{p+k}(\boldsymbol{\vartheta}) + M_{2k}(\boldsymbol{\vartheta}).
\end{align*}
According to Lemma~\ref{lm:dim1-char-poly}, $M_{p+k}(\boldsymbol{\vartheta}) = - \sum_{q=0}^{k-1} M_{p+q}(\boldsymbol{\vartheta}) c_q$, so
\begin{align*}
\sum_{p=0}^{k-1} \widehat{c}_p M_{p+k}(\boldsymbol{\vartheta}) + M_{2k}(\boldsymbol{\vartheta}) &= \sum_{p=0}^{k-1} \widehat{c}_p \left(- \sum_{q=0}^{k-1} M_{p+q}(\boldsymbol{\vartheta}) c_q\right) + \left(- \sum_{q=0}^{k-1} M_{k+q}(\boldsymbol{\vartheta}) c_q \right) \\
&= -\sum_{q=0}^{k-1} c_q \left(\sum_{p=0}^{k-1} M_{p+q}(\boldsymbol{\vartheta}) \widehat{c}_p + M_{k+q}(\boldsymbol{\vartheta})\right) \\
&= -\sum_{q=0}^{k-1} c_q \Bigl(A_{M(\boldsymbol{\vartheta})} \widehat{\vecc} + b_{M(\boldsymbol{\vartheta})}\Bigr)_q.
\end{align*}
Combining the above equations, we finally get
\begin{align*}
\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\alpha_i - \widehat{\alpha}_j|^2 &= \sum_{i=0}^{k-1} (\widehat{c}_i - c_i) \Bigl(A_{M(\boldsymbol{\vartheta})} \widehat{\vecc} + b_{M(\boldsymbol{\vartheta})}\Bigr)_i \\
&\leq (\|\widehat{\vecc}\|_1 + \|\boldsymbol{c}\|_1) \|A_{M(\boldsymbol{\vartheta})} \widehat{\vecc} + b_{M(\boldsymbol{\vartheta})}\|_{\infty} \\
&\leq (2^k + 2^{O(k)}) \cdot 2^{O(k)}\cdot \xi & (\textrm{Lemma}~\ref{lm:dim1-step1})\\
&\leq 2^{O(k)} \cdot \xi.
\end{align*}
\end{proof}
We then give the following simple property, which allows us to upper bound the impact of the injected noise. The proof can be found in the appendix.
\begin{lemma} \label{lm:injected-noise}
Let $b>0$ be some constant.
Let $a_1, \cdots, a_k \in \mathbb{R}$ and $a_1', \cdots, a_k' \in \mathbb{R}$ be such that $a_i \in [0, b]$ and $|a_i' - a_i|\leq \xi$ for all $i$. If $\xi \leq k^{-1}$, then
\begin{align*}
\prod_{i=1}^k a_i' \leq \prod_{i=1}^{k} a_i + O(b^k \cdot k\xi).
\end{align*}
\end{lemma}
With this lemma, we are able to show that the projection (Line 4) and the injected noise (Line 5) do not introduce much extra error into our estimates of the positions of the ground truth spikes.
\begin{lemma} \label{lm:dim1-step2}
Let $\widetilde{\veca} = [\widetilde{\alpha}_1, \cdots, \widetilde{\alpha}_{k}]^{\top} \in \mathbb{R}^k$ be the intermediate result (Line 5) in Algorithm~\ref{alg:dim1}. Then,
\begin{align*}
\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\alpha_i - \widetilde{\alpha}_j|^2 \leq 2^{O(k)} \cdot \xi.
\end{align*}
\end{lemma}
\begin{proof}
Let $\overline{\veca} = [\overline{\alpha}_1, \cdots, \overline{\alpha}_{k}]^{\top} \in \mathbb{R}^k$ be the projected positions (Line 4). Since the projected domain $\Delta_1$ is convex,
$|\alpha_i - \overline{\alpha}_j| \leq |\alpha_i - \widehat{\alpha}_j|$ holds. Thus,
\begin{align*}
\prod_{j=1}^{k} |\alpha_i - \overline{\alpha}_j|^2 \leq \prod_{j=1}^{k} |\alpha_i - \widehat{\alpha}_j|^2.
\end{align*}
Recall that $\widetilde{\alpha}_j$ is obtained from $\overline{\alpha}_j$ by adding an independent noise
of size at most $\xi$. We have $|\alpha_i - \widetilde{\alpha}_j| \leq |\alpha_i - \overline{\alpha}_j| + \xi$.
Applying Lemma~\ref{lm:injected-noise} with $a_j = |\alpha_i - \overline{\alpha}_j|^2$ and $a_j' = |\alpha_i - \widetilde{\alpha}_j|^2$ (so that $b = 4$ and $|a_j' - a_j| \leq O(\xi)$, since $|\alpha_i - \widetilde{\alpha}_j|\leq |\alpha_i| + |\widetilde{\alpha}_j| \leq 2$), we can conclude that
\begin{align*}
\prod_{j=1}^{k} |\alpha_i - \widetilde{\alpha}_j|^2 \leq \prod_{j=1}^{k} |\alpha_i - \overline{\alpha}_j|^2 + 2^{O(k)} \cdot k\xi.
\end{align*}
Combining the two inequalities with Lemma~\ref{lm:dim1-step2-1}, we get the desired inequality:
$$
\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\alpha_i - \widetilde{\alpha}_j|^2 \leq \sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\alpha_i - \widehat{\alpha}_j|^2 + 2^{O(k)} \cdot k\xi \leq 2^{O(k)} \cdot \xi.
$$
\end{proof}
Now, we start to bound the error in Line 6, which is a linear regression
defined by the Vandermonde matrix.
We first introduce the Schur polynomial, which has a strong connection to the Vandermonde matrix.
\begin{definition}[Schur polynomial]
Given a partition $\lambda = (\lambda_1, \cdots, \lambda_k)$ with $\lambda_1 \geq \cdots \geq \lambda_k \geq 0$, the Schur polynomial $s_{\lambda}(x_1, x_2, \cdots, x_k)$ is defined as the ratio
$$
s_{\lambda}(x_1, x_2, \cdots, x_k) =
\left|\begin{matrix}
x_1^{\lambda_1+k-1} & x_2^{\lambda_1+k-1} & \cdots & x_k^{\lambda_1+k-1} \\
x_1^{\lambda_2+k-2} & x_2^{\lambda_2+k-2} & \cdots & x_k^{\lambda_2+k-2} \\
\vdots & \vdots & \ddots & \vdots \\
x_1^{\lambda_k} & x_2^{\lambda_k} & \cdots & x_k^{\lambda_k}
\end{matrix}\right| \cdot \left|\begin{matrix}
x_1^{k-1} & x_2^{k-1} & \cdots & x_k^{k-1} \\
\vdots & \vdots & \ddots & \vdots \\
x_1^1 & x_2^1 & \cdots & x_k^1 \\
1 & 1 & \cdots & 1
\end{matrix}\right|^{-1},
$$
where the rows of both determinants are in decreasing order of exponents.
\end{definition}
A Schur polynomial is a symmetric function because the numerator and denominator are
both {\em alternating} (i.e., they change sign if we swap two variables), and it is a polynomial since every alternating polynomial is divisible by the Vandermonde determinant
$\prod_{i<j}(x_i-x_j)$.
\footnote{It is easy to see that if a polynomial $p(x_1,\ldots, x_n)$ is alternating,
it must contain $(x_i-x_j)$ as a factor for every pair $i<j$.}
The following classical result (see, e.g., \cite{10.5555/2124415}) gives an alternative way to calculate the Schur polynomial.
It plays a crucial role in bounding the error of a linear system
defined by the Vandermonde matrix (Lemma~\ref{lm:step3}).
\begin{theorem} (e.g. \cite{10.5555/2124415})
Let $\textrm{SSYT}(\lambda)$ be the set of all semistandard Young tableaux of shape $\lambda = (\lambda_1, \cdots, \lambda_k)$ with entries in $[k]$. Then, the Schur polynomial $s_{\lambda}(x_1, x_2, \cdots, x_k)$ can also be computed as follows:
\begin{align*}
s_{\lambda}(x_1, x_2, \cdots, x_k) = \sum_{T\mathrm{i}n \textrm{SSYT}(\lambda)} \prod_{i=1}^{k} x_i^{T_i}.
\end{align*}
where $T_i$ counts the occurrences of the number $i$ in $T$.
\end{theorem}
In a semistandard Young tableau, we
allow the same number to appear more than once (or not at all), and we require that the entries weakly increase along each row and strictly increase down each column.
As an example, for $\lambda = (2, 1, 0)$ and $k = 3$, there are eight semistandard Young tableaux, and the corresponding Schur polynomial is
\begin{align*}
s_{(2, 1, 0)}(x_1, x_2, x_3) = x_1^2x_2 + x_1^2x_3 + x_1x_2^2 + 2x_1x_2x_3 + x_1x_3^2 + x_2^2x_3 + x_2x_3^2.
\end{align*}
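The determinant and SSYT expressions can be checked against each other numerically. The sketch below (our own illustration, not code from the paper) evaluates $s_{(2,1,0)}$ both ways, with the rows of both determinants in decreasing order of exponents:

```python
import numpy as np

def schur_210_det(x1, x2, x3):
    """s_{(2,1,0)} as a ratio of determinants; row exponents decrease top to bottom."""
    num = np.array([[x ** 4 for x in (x1, x2, x3)],   # lambda_1 + k - 1 = 4
                    [x ** 2 for x in (x1, x2, x3)],   # lambda_2 + k - 2 = 2
                    [x ** 0 for x in (x1, x2, x3)]])  # lambda_3 = 0
    den = np.array([[x ** 2 for x in (x1, x2, x3)],   # Vandermonde rows 2, 1, 0
                    [x ** 1 for x in (x1, x2, x3)],
                    [x ** 0 for x in (x1, x2, x3)]])
    return np.linalg.det(num) / np.linalg.det(den)

def schur_210_ssyt(x1, x2, x3):
    """The same polynomial, expanded over the eight semistandard Young tableaux."""
    return (x1**2*x2 + x1**2*x3 + x1*x2**2 + 2*x1*x2*x3
            + x1*x3**2 + x2**2*x3 + x2*x3**2)
```

For example, both evaluate to $280$ at $(x_1,x_2,x_3)=(2,3,5)$.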
Considering the special case where $\lambda = (j-k+1, 1, 1, \cdots, 1)$, this theorem implies the following result. For completeness, we also present a self-contained proof of the lemma in the appendix.
\begin{lemma} \label{lm:dim1-step3-matrix}
For all $j \geq k$ and distinct numbers $a_1, \cdots, a_k \in \mathbb{R}$ (or $a_1, \cdots, a_k \in \mathbb{C}$),
$$\left|\begin{matrix}
a_1^j & a_2^j & \cdots & a_k^j \\
a_1^{k-1} & a_2^{k-1} & \cdots & a_k^{k-1} \\
\vdots & \vdots & \ddots & \vdots \\
a_1^1 & a_2^1 & \cdots & a_k^1
\end{matrix}\right| \cdot \left|\begin{matrix}
a_1^{k-1} & a_2^{k-1} & \cdots & a_k^{k-1} \\
\vdots & \vdots & \ddots & \vdots \\
a_1^1 & a_2^1 & \cdots & a_k^1 \\
1 & 1 & \cdots & 1
\end{matrix}\right|^{-1} = \prod_{i=1}^{k} a_i \cdot \sum_{s \in (k)^{j-k}} \prod_{i=1}^{k} a_i^{s_i}, $$
where the rows of both determinants are in decreasing order of exponents and $(a)^{b} \subset \{0,\cdots,b\}^a$ denotes the set of all vectors $s$ with $\sum_{i=1}^{a} s_i = b$.
\end{lemma}
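A numerical check of this identity for $k=3$, $j=4$ (our own example values; the rows of both determinants are taken in decreasing order of exponents, and for $j-k=1$ the inner sum is simply $a_1+a_2+a_3$):

```python
import numpy as np

a = np.array([0.2, 0.5, 0.9])  # distinct example values
k, j = 3, 4

num = np.array([a ** j, a ** 2, a ** 1])  # row exponents j, k-1, ..., 1
den = np.array([a ** 2, a ** 1, a ** 0])  # row exponents k-1, ..., 1, 0
lhs = np.linalg.det(num) / np.linalg.det(den)

# prod_i a_i times the sum over (k)^{j-k}; for j - k = 1 this is a_1 + a_2 + a_3
rhs = np.prod(a) * a.sum()

assert abs(lhs - rhs) < 1e-9
```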
Using this result, we can show that the reconstructed moment vector $V_{\widetilde{\veca}}\widehat{\vecw}$ is close to the ground truth moment vector $M(\boldsymbol{\vartheta})$.
The next lemma shows that the discrete mixture with support $\widetilde{\veca}$ can approximate a single spike distribution at $\alpha$ well in terms of the moments, provided some spike in $\widetilde{\veca}$ is close to $\alpha$.
\begin{lemma} \label{lm:step3}
Let $M(\alpha)$ be the vector
$[\alpha^0, \alpha^1, \cdots, \alpha^{2k-1}]^{\top}$ and let $F$ be either $\mathbb{R}$ or $\mathbb{C}$.
Let $\alpha \in F$ with $|\alpha| \leq 1$, and let $\widetilde{\veca} = [\widetilde{\alpha}_1, \cdots, \widetilde{\alpha}_{k}]^{\top}\in F^k$ with
all $\widetilde{\alpha}_j$ distinct and $\|\widetilde{\veca}\|_{\infty} \leq 1$. Then we have
$$\min_{\boldsymbol{x}\in F^k}\|V_{\widetilde{\veca}}\boldsymbol{x} - M(\alpha)\|_2 \leq 2^{O(k)} \prod_{j=1}^{k} |\alpha - \widetilde{\alpha}_j|.$$
\end{lemma}
\begin{proof}
Let $\Delta \widetilde{\alpha}_j = \widetilde{\alpha}_j - \alpha$.
Consider $\boldsymbol{x}^* = [x_1^*, \cdots, x_k^*]^{\top}$ such that
\begin{align*}
\begin{bmatrix}
1 & 1 & \cdots & 1 \\
\Delta \widetilde{\alpha}_1^1 & \Delta \widetilde{\alpha}_2^1 & \cdots & \Delta \widetilde{\alpha}_{k}^1 \\
\vdots & \vdots & \ddots & \vdots \\
\Delta \widetilde{\alpha}_1^{k-1} & \Delta \widetilde{\alpha}_2^{k-1} & \cdots & \Delta \widetilde{\alpha}_{k}^{k-1}
\end{bmatrix}
\begin{bmatrix}
x_1^* \\
x_2^* \\
\vdots \\
x_k^* \\
\end{bmatrix} =
\begin{bmatrix}
1 \\
0 \\
\vdots \\
0 \\
\end{bmatrix}.
\end{align*}
We see that for each $j$,
\begin{align*}
(V_{\widetilde{\veca}}\boldsymbol{x}^* - M(\alpha))_j &= \sum_{t=1}^{k} x^*_t \widetilde{\alpha}_t^j - \alpha^j
= \sum_{t=1}^{k} x^*_t \sum_{p=0}^{j} \binom{j}{p} \Delta \widetilde{\alpha}_t^p \alpha^{j-p} - \alpha^j \\
&= \sum_{p=1}^{j} \binom{j}{p} \alpha^{j-p} \sum_{t=1}^{k} x_t^* \Delta \widetilde{\alpha}_t^p.
\end{align*}
When $1 \leq j < k$, we clearly have $\sum_{t=1}^{k} x_t^* \Delta \widetilde{\alpha}_t^j = 0.$
Now, we bound $\sum_{t=1}^{k} x_t^* \Delta \widetilde{\alpha}_t^j$ for $j\geq k$.
Using the relation between the inverse of a matrix
and its adjugate, we have, for example,
\begin{align*}
x_1^* = \left|\begin{matrix}
\Delta \widetilde{\alpha}_2^1 & \cdots & \Delta \widetilde{\alpha}_{k}^1 \\
\vdots & \ddots & \vdots \\
\Delta \widetilde{\alpha}_2^{k-1} & \cdots & \Delta \widetilde{\alpha}_{k}^{k-1}
\end{matrix}\right| \cdot \left|\begin{matrix}
1 & 1 & \cdots & 1 \\
\Delta \widetilde{\alpha}_1^1 & \Delta \widetilde{\alpha}_2^1 & \cdots & \Delta \widetilde{\alpha}_{k}^1 \\
\vdots & \vdots & \ddots & \vdots \\
\Delta \widetilde{\alpha}_1^{k-1} & \Delta \widetilde{\alpha}_2^{k-1} & \cdots & \Delta \widetilde{\alpha}_{k}^{k-1}
\end{matrix}\right|^{-1}.
\end{align*}
Thus, for $j \geq k$, we have that
\begin{align*}
\sum_{t=1}^{k} x_t^* \Delta \widetilde{\alpha}_t^j
&=
\left|\begin{matrix}
\Delta \widetilde{\alpha}_1^{j} & \Delta \widetilde{\alpha}_2^{j} & \cdots & \Delta \widetilde{\alpha}_{k}^{j} \\
\Delta \widetilde{\alpha}_1^1 & \Delta \widetilde{\alpha}_2^1 & \cdots & \Delta \widetilde{\alpha}_{k}^1 \\
\vdots & \vdots & \ddots & \vdots \\
\Delta \widetilde{\alpha}_1^{k-1} & \Delta \widetilde{\alpha}_2^{k-1} & \cdots & \Delta \widetilde{\alpha}_{k}^{k-1}
\end{matrix}\right| \cdot \left|\begin{matrix}
1 & 1 & \cdots & 1 \\
\Delta \widetilde{\alpha}_1^1 & \Delta \widetilde{\alpha}_2^1 & \cdots & \Delta \widetilde{\alpha}_{k}^1 \\
\vdots & \vdots & \ddots & \vdots \\
\Delta \widetilde{\alpha}_1^{k-1} & \Delta \widetilde{\alpha}_2^{k-1} & \cdots & \Delta \widetilde{\alpha}_{k}^{k-1}
\end{matrix}\right|^{-1}\\
&= \prod_{t=1}^{k} \Delta \widetilde{\alpha}_t \cdot \sum_{s \in (k)^{j-k}} \prod_{t=1}^{k} \Delta \widetilde{\alpha}_t^{s_t}
\end{align*}
where the last equation holds due to Lemma~\ref{lm:dim1-step3-matrix}.
Since $|\Delta \widetilde{\alpha}_t| \leq |\alpha|+|\widetilde{\alpha}_t| \leq 2$ and $(k)^{j-k}$ has at most $\binom{j}{k}$ terms, we have
\begin{align*}
\left|\sum_{t=1}^{k} x_t^* \Delta \widetilde{\alpha}_t^j\right| \leq 2^{k} \cdot 2^{j-k} \prod_{t=1}^{k} |\Delta \widetilde{\alpha}_t|.
\end{align*}
Plugging in this result, we can see that
\begin{align*}
|(V_{\widetilde{\veca}}\boldsymbol{x}^* - M(\alpha))_j| &= \left|\sum_{p=1}^{j} \binom{j}{p} \alpha^{j-p} \sum_{t=1}^{k} x_t^* \Delta \widetilde{\alpha}_t^p\right|
\leq \sum_{p=k}^{j} \binom{j}{p} \cdot 2^{k} \cdot 2^{p-k} \prod_{t=1}^{k} |\Delta \widetilde{\alpha}_t| \\ &\leq 2^{k} \cdot 2^{2j} \prod_{t=1}^{k} |\Delta \widetilde{\alpha}_t|.
\end{align*}
Therefore, we have that
\begin{align*}
\min_{\boldsymbol{x}\in F^k}\|V_{\widetilde{\veca}}\boldsymbol{x} - M(\alpha)\|_2 &\leq \|V_{\widetilde{\veca}}\boldsymbol{x}^* - M(\alpha)\|_2
\leq \sqrt{\sum_{j=k}^{2k-1} \left(2^{k} \cdot 2^{2j} \prod_{t=1}^{k} |\Delta \widetilde{\alpha}_t|\right)^2}
\leq 2^{O(k)} \prod_{j=1}^{k} |\alpha - \widetilde{\alpha}_j|.
\end{align*}
\end{proof}
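As a numerical illustration of the lemma (a sketch only, assuming NumPy; $2^{4k}$ below is one concrete instantiation of the $2^{O(k)}$ factor, not the sharp constant), we can compare the least-squares error against the product bound:

```python
import numpy as np

# Numerical illustration of the bound: the least-squares error of
# approximating the moment vector M(alpha) in the column span of
# V_{alpha_tilde} is at most 2^{O(k)} * prod_j |alpha - alpha_tilde_j|.
k = 4
alpha = 0.37
alpha_tilde = np.array([0.35, 0.55, 0.12, 0.80])   # distinct, in [0, 1]

powers = np.arange(2 * k)                          # degrees 0, ..., 2k-1
V = alpha_tilde[None, :] ** powers[:, None]        # (2k, k) Vandermonde-type
M = alpha ** powers                                # target moment vector

residual = np.linalg.lstsq(V, M, rcond=None)[1]    # squared l2 residual
err = float(np.sqrt(residual[0]))

bound = 2.0 ** (4 * k) * np.prod(np.abs(alpha - alpha_tilde))
print(err, bound)
```

Since one of the spikes ($0.35$) is close to $\alpha = 0.37$, the product on the right-hand side, and hence the achievable error, is small.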
Lemma \ref{lm:dim1-step2} shows that every spike of the ground truth $\boldsymbol{\vartheta}$ has a close spike among the recovered positions $\widetilde{\veca}$, and thus can be well approximated according to the above lemma. Therefore, the mixture of the discrete distributions can also be well approximated over the support $\widetilde{\veca}$. As the following lemma shows, solving the linear regression in Line 6 finds a weight vector $\widehat{\vecw}$ whose corresponding moment vector $V_{\widetilde{\veca}}\widehat{\vecw}$ is close to the ground truth $M(\boldsymbol{\vartheta})$.
\begin{lemma}\label{lm:dim1-step4-pre}
Let $\widehat{\vecw} = [\widehat{w}_1, \cdots, \widehat{w}_{k}]^{\top} \in \mathbb{R}^k$ be the intermediate result (Line 6) in Algorithm~\ref{alg:dim1}. Then,
$$\|V_{\widetilde{\veca}}\widehat{\vecw} - M(\boldsymbol{\vartheta})\|_{\infty} \leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.$$
\end{lemma}
\begin{proof}
Firstly, by triangle inequality, we can see
\begin{align*}
\|V_{\widetilde{\veca}}\widehat{\vecw} - M(\boldsymbol{\vartheta})\|_2 &\leq \|V_{\widetilde{\veca}}\widehat{\vecw} - M'\|_2 + \|M(\boldsymbol{\vartheta}) - M'\|_2 \\
&\leq O(1) \cdot \min_{\boldsymbol{x} \in \mathbb{R}^k} \|V_{\widetilde{\veca}} \boldsymbol{x} - M'\|_2 + \|M(\boldsymbol{\vartheta}) - M'\|_2 \\
&\leq O(1) \cdot \min_{\boldsymbol{x} \in \mathbb{R}^k} \|V_{\widetilde{\veca}} \boldsymbol{x} - M(\boldsymbol{\vartheta})\|_2 + O(1) \cdot \|M(\boldsymbol{\vartheta}) - M'\|_2
\end{align*}
where the first and third inequalities hold due to the triangle inequality, and the second inequality holds since $\|V_{\widetilde{\veca}} \widehat{\vecw} - M'\|_2^2 \leq O(1) \cdot \min_{\boldsymbol{x} \in \mathbb{R}^k} \|V_{\widetilde{\veca}} \boldsymbol{x} - M'\|_2^2$
(we only find an $O(1)$-approximation in this step).
For the first term, we can see that
\begin{align*}
\min_{\boldsymbol{x} \in \mathbb{R}^k} \|V_{\widetilde{\veca}} \boldsymbol{x} - M(\boldsymbol{\vartheta})\|_2 &= \min_{\boldsymbol{x}_1, \cdots, \boldsymbol{x}_k \in \mathbb{R}^k} \left\|\sum_{i=1}^{k}w_i V_{\widetilde{\veca}}\boldsymbol{x}_i - \sum_{i=1}^{k}w_i M(\alpha_i)\right\|_2 \\
&\leq \sum_{i=1}^{k} w_i \min_{\boldsymbol{x} \in \mathbb{R}^k} \|V_{\widetilde{\veca}}\boldsymbol{x} - M(\alpha_i)\|_2 \\
&\leq 2^{O(k)} \sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\alpha_i - \widetilde{\alpha}_j| & (\text{Lemma~\ref{lm:step3}}) \\
&\leq 2^{O(k)} \sqrt{\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\alpha_i - \widetilde{\alpha}_j|^2} & (\text{AM-QM Inequality}, \boldsymbol{w} \in \Delta_{k-1}) \\
&\leq 2^{O(k)} \cdot \sqrt{\xi}. & (\text{Lemma~\ref{lm:dim1-step2}})
\end{align*}
For the second term, $\|M(\boldsymbol{\vartheta}) - M'\|_2 \leq \sqrt{k} \|M(\boldsymbol{\vartheta}) - M'\|_{\infty} \leq \sqrt{k} \cdot \xi$.
As a result, $$\|V_{\widetilde{\veca}}\widehat{\vecw} - M(\boldsymbol{\vartheta})\|_{\infty} \leq \|V_{\widetilde{\veca}}\widehat{\vecw} - M(\boldsymbol{\vartheta})\|_2 \leq O(1) \cdot 2^{O(k)} \cdot \sqrt{\xi} + O(1) \cdot \sqrt{k} \cdot \xi \leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.$$
\end{proof}
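The regression step itself is elementary. The following sketch (our own illustrative code, not the paper's implementation) recovers the weights from noisy moments by exact least squares, which is in particular an $O(1)$-approximation as the analysis requires, and then normalizes as in Line 7:

```python
import numpy as np

# Illustrative sketch of Lines 6-7 of the 1-dimensional algorithm: given a
# recovered support alpha_tilde and noisy moments M', solve
# min_x ||V_{alpha_tilde} x - M'||_2 for the weights, then normalize.
rng = np.random.default_rng(0)
k = 3
alpha = np.array([0.1, 0.5, 0.9])      # true support
w = np.array([0.2, 0.3, 0.5])          # true weights
alpha_tilde = alpha + 1e-4             # recovered support, slightly off

powers = np.arange(2 * k)
M_true = (alpha[None, :] ** powers[:, None]) @ w
xi = 1e-6
M_noisy = M_true + rng.uniform(-xi, xi, size=2 * k)  # ||M' - M||_inf <= xi

V = alpha_tilde[None, :] ** powers[:, None]
w_hat = np.linalg.lstsq(V, M_noisy, rcond=None)[0]   # Line 6: regression
w_tilde = w_hat / w_hat.sum()                        # Line 7: normalization

print(np.abs(w_tilde - w).max())
```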
The moment-transportation inequality (Lemma~\ref{lm:inequality1d})
requires its input to be a signed measure of total mass $1$.
So, we normalize the recovered weights in Line 7.
The following lemma shows the moment vector would not change too much after normalization.
\begin{lemma}\label{lm:dim1-step4}
Let $\widetilde{\mix} = (\widetilde{\veca}, \widetilde{\vecw})$ be the intermediate result
(Line 8) in Algorithm~\ref{alg:dim1}.
Then, we have
$$\Mdis_K(\widetilde{\mix}, \boldsymbol{\vartheta}) \leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.$$
\end{lemma}
\begin{proof}
According to the definition of $V_{\widetilde{\veca}}$ and $M(\boldsymbol{\vartheta})$, we have $(V_{\widetilde{\veca}}\widehat{\vecw} - M(\boldsymbol{\vartheta}))_1 = \sum_{i=1}^{k} \widehat{w}_i - 1$.
Therefore, $|\sum_{i=1}^{k} \widehat{w}_i - 1| \leq |(V_{\widetilde{\veca}}\widehat{\vecw} - M(\boldsymbol{\vartheta}))_1| \leq \|V_{\widetilde{\veca}}\widehat{\vecw} - M(\boldsymbol{\vartheta})\|_{\infty} \leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.$
Note that $\|\widetilde{\vecw} - \widehat{\vecw}\|_{1} = \|\widetilde{\vecw}\|_1 - \|\widehat{\vecw}\|_{1} = |(\sum_{i=1}^{k} \widehat{w}_i)^{-1} - 1|$. For $2^{O(k)} \cdot \xi^{\frac{1}{2}} \leq 1/2$, we can conclude that $\|\widetilde{\vecw} - \widehat{\vecw}\|_{1} = |(\sum_{i=1}^{k} \widehat{w}_i)^{-1} - 1| \leq 2 |\sum_{i=1}^{k} \widehat{w}_i - 1| \leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.$
Moreover,
\begin{align*}
\Mdis_K(\widetilde{\mix}, \boldsymbol{\vartheta})&=\|M(\widetilde{\mix}) - M(\boldsymbol{\vartheta})\|_\infty = \|V_{\widetilde{\veca}} \widetilde{\vecw} - M(\boldsymbol{\vartheta})\|_{\infty} \\
&\leq \|V_{\widetilde{\veca}} \widehat{\vecw} - M(\boldsymbol{\vartheta})\|_{\infty} + \|V_{\widetilde{\veca}} \widetilde{\vecw} - V_{\widetilde{\veca}} \widehat{\vecw}\|_{\infty} \\
&\leq \|V_{\widetilde{\veca}} \widehat{\vecw} - M(\boldsymbol{\vartheta})\|_{\infty} + \|V_{\widetilde{\veca}}\|_\infty \|\widetilde{\vecw} - \widehat{\vecw}\|_1 \\
&\leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.
\end{align*}
where the first inequality holds due to the triangle inequality and the third holds since $\|V_{\widetilde{\veca}}\|_{\infty} \leq 1$.
\end{proof}
Moreover, the weights can still be negative after normalization. To find a reconstruction in the original space, we want to find a mixture of discrete distributions in $\mathrm{Spike}(\Delta_1, \Delta_{k-1})$ that is close to $\widetilde{\mix}$. However, this step can have a large impact on the moments. Thus, we directly estimate its influence in terms of the transportation distance instead of the moment distance. We note that the transportation distance in the next lemma
is defined for possibly non-probability measures (see Equation \eqref{eq:signedtrans}
in Appendix~\ref{app:transfornonprob} for details).
\begin{lemma}\label{lm:step5}
Let $\mathcal{C}$ be a compact convex set in $\mathbb{R}^d$ and let $\widetilde{\mix} = (\widetilde{\veca}, \widetilde{\vecw}) \in \mathrm{Spike}(\mathcal{C}, \Sigma_{k-1})$ be a $k$-spike distribution over the support $\mathcal{C}$.
Then, we have
$$\min_{\widecheck{\mix} \in \mathrm{Spike}(\widetilde{\veca}, \Delta_{k-1})} \tran(\widetilde{\mix}, \widecheck{\mix}) \leq 2 \min_{\overline{\mix} \in \mathrm{Spike}(\mathcal{C}, \Delta_{k-1})} \tran(\widetilde{\mix}, \overline{\mix}).$$
Note that the minimization on the left is over support $\widetilde{\veca}$.
\end{lemma}
The proof of the above lemma can be found in Appendix~\ref{subsec:missingproofsec3}.
Finally, we are ready to bound the transportation distance error of the reconstructed $k$-spike distribution:
\begin{lemma}\label{lm:dim1-reconstruction}
Let $\widecheck{\mix} = (\widetilde{\veca}, \widecheck{\vecw})$ be the final result
(Line 10) in Algorithm~\ref{alg:dim1}.
Then, we have
$$\tran(\widecheck{\mix}, \boldsymbol{\vartheta}) \leq O(k\xi^{\frac{1}{4k-2}}).$$
\end{lemma}
\begin{proof}
Combining Lemma \ref{lm:inequality1d} and Lemma \ref{lm:dim1-step4}, we can see
$$\tran(\widetilde{\mix}, \boldsymbol{\vartheta}) \leq O(k \cdot (2^{O(k)} \cdot \xi^{\frac{1}{2}})^{\frac{1}{2k-1}}) \leq O(k \xi^{\frac{1}{4k-2}}).$$
Moreover, from Lemma \ref{lm:step5}, we have
\begin{align*}
\tran(\widetilde{\mix}, \widecheck{\mix}) \leq 2 \min_{\overline{\mix} \in \mathrm{Spike}(\Delta_1, \Delta_{k-1})} \tran(\widetilde{\mix}, \overline{\mix}) \leq 2 \tran(\widetilde{\mix}, \boldsymbol{\vartheta}) \leq O(k \xi^{\frac{1}{4k-2}}).
\end{align*}
Finally, by triangle inequality, $\tran(\widecheck{\mix}, \boldsymbol{\vartheta}) \leq \tran(\widecheck{\mix}, \widetilde{\mix}) + \tran(\widetilde{\mix}, \boldsymbol{\vartheta}) \leq O(k \xi^{\frac{1}{4k-2}}).$
\end{proof}
By choosing $\xi = (\epsilon/k)^{O(k)}$, the previous lemma directly implies Theorem \ref{thm:1dim-kspikecoin}.
\section{Efficient Algorithms for Higher Dimensions}
\label{sec:high-dim-recover}
In this section, we solve the problem in higher dimensions.
We first present the algorithm for two dimensions.
The algorithms for higher dimensions use the 1- and 2-dimensional algorithms as subroutines.
\subsection{An Efficient Algorithm for the 2-Dimensional Problem}
We can generalize the 1-dimensional algorithm described in Section~\ref{sec:1d-recover} to two dimensions.
Let $\boldsymbol{\vartheta}:=(\boldsymbol{\alpha}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_2,\Delta_{k-1})$
be the underlying mixture where
$\boldsymbol{\alpha} = [\boldsymbol{\alpha}_1, \cdots, \boldsymbol{\alpha}_k]^{\top}$ for $\boldsymbol{\alpha}_{i} = (\alpha_{i,1}, \alpha_{i,2})$ and $\boldsymbol{w} = [w_1, \cdots, w_k]^{\top} \in \Delta_{k-1}$.
The true moments can be computed according to $M_{i,j}(\boldsymbol{\vartheta}) = \sum_{t=1}^k w_t \alpha_{t,1}^i \alpha_{t,2}^j$.
The input consists of the noisy moments $M'_{i,j}$ for $0\leq i, j$ with $i+j \leq 2k-1$ such that $|M'_{i,j} - M_{i,j}(\boldsymbol{\vartheta})| \leq \xi$. We further assume that $M'_{0,0}=M_{0,0}(\boldsymbol{\vartheta}) = 1$ and $\xi \leq 2^{-\Omega(k)}$.
The key idea is very simple:
a distribution supported in $\mathbb{R}^2$ can be mapped to a distribution supported in the complex plane $\mathbb{C}$. In particular, we define the complex set $\Delta_{\mathbb{C}} = \{a+b{\mathfrak{i}} \mid (a,b) \in \Delta_2\}$.
Moreover, we denote $\boldsymbol{\beta} = [\beta_{1}, \cdots, \beta_{k}]^{\top} := [\alpha_{1,1} + \alpha_{1,2} {\mathfrak{i}}, \cdots, \alpha_{k,1} + \alpha_{k,2} {\mathfrak{i}}]^{\top} \in \Delta_{\mathbb{C}}^k$, and define $\boldsymbol{\phi} := (\boldsymbol{\beta}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_{\mathbb{C}}^k, \Delta_{k-1})$ to be the complex mixture corresponding to $\boldsymbol{\vartheta}$.
The corresponding moments of $\boldsymbol{\phi}$ can thus be defined as $G_{i,j}(\boldsymbol{\phi}) = \sum_{t=1}^{k} w_t (\beta_t^{\dagger})^i \beta_t^j$, where $\beta_t^{\dagger}$ is the complex conjugate of $\beta_t$.
In fact $G_{i,j}$ can be computed from $M_{i,j}$ (see Lemma~\ref{lm:dim2-complex-corr}).
Therefore, our task reduces to recovering the complex mixture $\boldsymbol{\phi}$ given the noisy moments $G'_{i,j}$.
The algorithm and the proof of the following theorem are very similar to that in Section~\ref{sec:1d-recover}.
Handling complex numbers requires several minor changes and we present the full details in Appendix~\ref{sec:2d-recover}.
For the analysis, we need to generalize the moment-transportation inequality to complex domain as well (see Appendix~\ref{subsec:2dmoment}).
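The real-to-complex correspondence is easy to verify numerically. The following sketch (illustrative only, assuming NumPy) checks the $i=0$ case of the relation between $G_{i,j}$ and the real moments $M_{p,q}$ via the binomial expansion of $(a+b\mathfrak{i})^j$:

```python
import numpy as np
from math import comb

# Sketch of the real-to-complex correspondence: a spike (a, b) in Delta_2
# maps to beta = a + b*i, and the complex moments
# G_{i,j} = sum_t w_t conj(beta_t)^i beta_t^j are linear combinations of the
# real moments M_{p,q} = sum_t w_t a_t^p b_t^q. We verify the i = 0 case:
# G_{0,j} = sum_p C(j,p) i^p M_{j-p,p}, by expanding (a + b*i)^j.
a = np.array([0.2, 0.5, 0.1])
b = np.array([0.3, 0.1, 0.6])
w = np.array([0.5, 0.2, 0.3])
beta = a + 1j * b

j = 3
G_0j = np.sum(w * beta ** j)                              # complex moment
M = lambda p, q: np.sum(w * a ** p * b ** q)              # real moments
G_from_M = sum(comb(j, p) * (1j ** p) * M(j - p, p) for p in range(j + 1))
print(abs(G_0j - G_from_M))
```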
\begin{theorem}
\label{thm:2dim-kspikecoin}
(The 2-dimensional Problem)
Let $\boldsymbol{\vartheta}\in \mathrm{Spike}(\Delta_2,\Delta_{k-1})$.
Suppose we have noisy moments $M'_{i,j}$ ($0\leq i, j, i+j \leq 2k-1$) up to precision $(\epsilon/k)^{O(k)}$.
We can obtain a mixture $\widetilde{\mix}$ such that $\tran(\widetilde{\mix}, \boldsymbol{\vartheta})\leq O(\epsilon)$ and
our algorithm runs in time $O(k^3)$.
\end{theorem}
\subsection{An Efficient Algorithm for $d> 3$}
\label{subsec:algoforhigherdim}
Now, we present our algorithm for higher dimensions.
Let $\boldsymbol{\vartheta}:=(\boldsymbol{\alpha}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_{d-1}, \Delta_{k-1})$
be the underlying mixture where $\boldsymbol{\alpha} = [\boldsymbol{\alpha}_1, \cdots, \boldsymbol{\alpha}_k]^{\top} \in \mathbb{R}^{k\times d}$ such that $\boldsymbol{\alpha}_i = [\alpha_{i,1}, \cdots, \alpha_{i,d}]^{\top}$ and $\boldsymbol{w} = [w_1, \cdots, w_k]^{\top}$. We denote by $\boldsymbol{\alpha}_{:,i} = [\alpha_{1,i}, \cdots, \alpha_{k,i}]$ the vector of the $i$th coordinates of the
spikes, so that $\boldsymbol{\alpha} = [\boldsymbol{\alpha}_{:,1}, \cdots, \boldsymbol{\alpha}_{:,d}]$.
Since the number of different moments is exponential in $d$,
we consider the setting in which one can
access the noisy moments of some linear maps of $\boldsymbol{\vartheta}$ onto lower dimensional subspaces, as in previous work \cite{rabani2014learning,li2015learning}
(such noisy moments can be easily obtained in applications such as topic modeling, see Section~\ref{sec:topic}, or learning Gaussian mixtures, see
Section~\ref{sec:Gaussian}).
For a mixture $\boldsymbol{\vartheta} = (\boldsymbol{\alpha}, \boldsymbol{w}) \in \mathrm{Spike}(\mathbb{R}^d, \Delta_{k-1})$ and a vector $\boldsymbol{r} \in \mathbb{R}^d$, we denote by $\proj{\boldsymbol{r}}(\boldsymbol{\vartheta}) := (\boldsymbol{\alpha}\boldsymbol{r}, \boldsymbol{w}) \in \mathrm{Spike}(\mathbb{R}, \Delta_{k-1})$ the projected distribution of $\boldsymbol{\vartheta}$ along the vector $\boldsymbol{r}$. For a matrix $R = [\boldsymbol{r}_1, \cdots, \boldsymbol{r}_p]\in \mathbb{R}^{d \times p}$, we also denote by $\proj{R}(\boldsymbol{\vartheta}) := (\boldsymbol{\alpha} R, \boldsymbol{w}) \in \mathrm{Spike}(\mathbb{R}^p, \Delta_{k-1})$ the projected distribution of $\boldsymbol{\vartheta}$ along the multiple directions $\boldsymbol{r}_1, \cdots, \boldsymbol{r}_p$. In this section, we consider mappings to 1-dimensional and 2-dimensional subspaces, which suffice for our purposes.
We assume the moment information is revealed by a noisy moment oracle. A moment oracle corresponding to a $k$-spike distribution $\boldsymbol{\vartheta}$ is a function $M'(\cdot)$ that takes a matrix (or a vector) $R$ with $\|R\|_{\infty}\leq 1$ and generates a noisy vector $M'(R)$ such that $\|M'(R) - M(\proj{R}(\boldsymbol{\vartheta}))\|_{\infty} \leq \xi$ and $M'_{\mathbf{0}}(R) = M_{\mathbf{0}}(\proj{R}(\boldsymbol{\vartheta})) = 1$, where $\xi \leq 2^{-\Omega(k)}$. Recall that the moment vector $M$ is defined in \eqref{eq:highdimmomvector}.
Let $e_t\in \mathbb{R}^d$ be the unit vector that takes value $1$ in the $t$-th coordinate and $0$ in the remaining coordinates. Denote $\mathbf{1_d} = (1, \cdots, 1) \in \mathbb{R}^d$. Let $\mathbb{S}_{d-1}$ be the unit sphere in $\mathbb{R}^d$.
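For concreteness, a noisy moment oracle for 1-dimensional projections can be sketched as follows (the function name and interface are ours, chosen for illustration; in applications the noisy moments would come from data rather than from the hidden mixture):

```python
import numpy as np

# A hypothetical noisy moment oracle: given the hidden mixture (alpha, w)
# and a direction r, return the moment vector of the projected mixture
# [sum_t w_t (alpha_t . r)^j]_{j=0}^{2k-1}, perturbed by at most xi per
# entry, with the 0-th moment kept exact as assumed in the text.
def noisy_moment_oracle(alpha, w, r, xi, rng):
    proj = alpha @ r                    # projected spike locations, shape (k,)
    k = len(w)
    powers = np.arange(2 * k)
    M = (proj[None, :] ** powers[:, None]) @ w
    noise = rng.uniform(-xi, xi, size=M.shape)
    noise[0] = 0.0                      # M'_0 = M_0 = 1 by assumption
    return M + noise

rng = np.random.default_rng(1)
alpha = np.array([[0.2, 0.3, 0.5], [0.6, 0.1, 0.3]])   # 2 spikes in Delta_2
w = np.array([0.4, 0.6])
r = np.array([0.5, -0.3, 0.8])
Mp = noisy_moment_oracle(alpha, w, r, xi=1e-9, rng=rng)
print(Mp[:3])
```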
\begin{algorithm}[!ht]
\caption{Reconstruction Algorithm in High Dimension} \label{alg:dim3}
\begin{algorithmic}[1]
\Function{HighDimension}{$k, M'(\cdot), \xi$}
\State $\boldsymbol{r} \leftarrow \mathbb{S}_{d-1}$
\State $\widetilde{\pmix}'\leftarrow \textrm{OneDimension}(k, M'(\frac{\boldsymbol{r}+\mathbf{1}}{4}), \xi)$ \Comment{$\widetilde{\pmix}' = (\widetilde{\vecy}', \widetilde{\vecw}') \in \mathrm{Spike}(\Delta_1, \Delta_{k-1})$}
\State $\widetilde{\pmix} \leftarrow([4\widetilde{\vecy}'-\mathbf{1}_d], \widetilde{\vecw}')$
\Comment{$\widetilde{\pmix} = (\widetilde{\vecy}, \widetilde{\vecw})$}
\For {$t\in[1, d]$}
\State $\widehat{\pmix}_t' \leftarrow \textrm{TwoDimension}(k, M'([\frac{\boldsymbol{r}+\mathbf{1}}{4}, \frac{e_t}{2}]), \xi)$
\Comment{$\widehat{\pmix}_t' = ([\widehat{\vecy}_{:,t}', \widehat{\veca}_{:,t}'], \widehat{\vecw}_{:,t}')\in \mathrm{Spike}(\Delta_2, \Delta_{k-1})$}
\State $\widehat{\pmix}_t \leftarrow ([4\widehat{\vecy}_{:,t}'-\mathbf{1}_d, 2\widehat{\veca}_{:,t}'], \widehat{\vecw}_{:,t}')$
\Comment{$\widehat{\pmix}_t = ([\widehat{\vecy}_{:,t}, \widehat{\veca}_{:,t}], \widehat{\vecw}_{:,t})$}
\For {$j \in [1,k]$}
\State $s^*_{j,t} \leftarrow \arg \min_{s : \widehat{w}_{s,t} \geq \frac{\sqrt{\epsilon}}{k}} |\widetilde{y}_j - \widehat{y}_{s,t}|$
\State $\widetilde{\alpha}_{j,t} \leftarrow \widehat{\alpha}_{s_{j,t}^*,t}$
\EndFor
\State $\widetilde{\veca}_{:,t} \leftarrow [\widetilde{\alpha}_{1,t}, \cdots, \widetilde{\alpha}_{k,t}]^{\top}$
\Comment{$\widetilde{\veca}_{:,t} \in \mathbb{R}^{k}$}
\EndFor
\State $\widetilde{\veca} \leftarrow [\widetilde{\veca}_{:,1}, \cdots, \widetilde{\veca}_{:,d}]^{\top}$
\Comment{$\widetilde{\veca} = [\widetilde{\veca}_{1}, \cdots, \widetilde{\veca}_{k}]\in \mathbb{R}^{d\times k}$}
\State $\widetilde{\mix} \leftarrow (\widetilde{\veca}, \widetilde{\vecw})$
\Comment{$\widetilde{\mix} \in \mathrm{Spike}(\mathbb{R}^d, \Delta_{k-1})$}
\State $\widecheck{\veca} \leftarrow \textrm{project}_{\Delta_{d-1}}(\widetilde{\veca})$
\Comment{$\widecheck{\veca} = [\widecheck{\veca}_{1}, \cdots, \widecheck{\veca}_{k}]\in \Delta_{d-1}^{k}$}
\State report $\widecheck{\mix} \leftarrow (\widecheck{\veca}, \widetilde{\vecw})$
\Comment{$\widecheck{\mix} \in \mathrm{Spike}(\Delta_{d-1}, \Delta_{k-1})$}
\EndFunction
\end{algorithmic}
\end{algorithm}
The pseudocode can be found in Algorithm \ref{alg:dim3}. The input of the algorithm consists of the number of spikes $k$, the moment error bound $\xi$ and the noisy moment oracle $M'(\cdot)$ mentioned above. Now, we describe the implementation details of our algorithm.
We first generate a random vector $\boldsymbol{r}$ in Line 2 by sampling from the unit sphere $\mathbb{S}_{d-1} = \{\boldsymbol{r} \in \mathbb{R}^d \mid \|\boldsymbol{r}\|_2 = 1\}$. Note that by a probabilistic argument, the distances between spikes are roughly preserved after the projection along $\boldsymbol{r}$ (see Lemma \ref{lm:prob-proj}). Then, we aim to use the 1-dimensional algorithm to recover $\proj{\boldsymbol{r}}(\boldsymbol{\vartheta})$.
However, the support of $\proj{\boldsymbol{r}}(\boldsymbol{\vartheta})$ is contained in $[-1,1]$ rather than $[0, 1]$.
In Line 3, we apply the 1-dimensional algorithm (Algorithm~\ref{alg:dim1})
to the noisy moments of the shifted and scaled map $\proj{(\boldsymbol{r}+\mathbf{1}_d)/4}(\boldsymbol{\vartheta})$, whose support lies in $[0,1]$. Then we scale and shift back to obtain the result $\widetilde{\pmix}$ in Line 4. Intuitively, $\widetilde{\pmix}$ is close to $\proj{\boldsymbol{r}}(\boldsymbol{\vartheta})$.
Now, we recover the coordinates of the spikes.
In particular, we would like to find, for each spike of $\widetilde{\pmix}$, the coordinates of the corresponding location in $\Delta_{d-1}$. This is done by considering each dimension separately. For each coordinate $t\in [d]$, we run the 2-dimensional algorithm (Algorithm~\ref{alg:dim2}) to recover the projection
$\proj{[\boldsymbol{r}, e_t]}(\boldsymbol{\vartheta})$, which is the linear map of $\boldsymbol{\vartheta}$ onto the subspace spanned by the vector $\boldsymbol{r}$ and the axis $e_t$.
However, $[\boldsymbol{r}, e_t]^{\top}\boldsymbol{\alpha}_i$ lies in $[-1,1]\times[0,1]$. Again, in Line 6, we apply the algorithm to the noisy moments of a shifted and scaled projection $\proj{B_t}(\boldsymbol{\vartheta})$, where $B_t=[\frac{\boldsymbol{r}+\mathbf{1}_d}{4}, \frac{e_t}{2}]$, and scale the result back in Line 7.
The result is denoted by $\widehat{\pmix}_{t}$.
Next, we assemble the 1-d projection $\widetilde{\pmix}$ and the 2-d projections $\widehat{\pmix}_{t}, t\in [d]$.
Due to the noise, $\widehat{\pmix}_{t}$ and $\widetilde{\pmix}$ may have different supports along $\boldsymbol{r}$.
So, in Lines 8 to 12, we assign the coordinates of $\widetilde{\pmix}$ according to the closest spike of $\widehat{\pmix}_{t}$ with a relatively large weight.
Due to the error, the support of the reconstructed distribution $\widetilde{\mix}$ may not lie in $\Delta_{d-1}$.
Thus, the final step in Line 16 is to project each individual spike back to $\Delta_{d-1}$.
We note that the bottleneck of this algorithm is Line 6, where we solve the 2-dimensional problem $d$ times. Hence, the total running time of the algorithm is $O(d k^3)$.
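The assembly step in Lines 8 to 12 can be sketched as follows (illustrative code with hypothetical variable names; the threshold $\sqrt{\epsilon}/k$ filters out spurious low-weight spikes before matching):

```python
import numpy as np

# Sketch of the assembly step (Lines 8-12): for each spike y_tilde[j] of the
# 1-d reconstruction, pick among the spikes of the 2-d reconstruction for
# coordinate t (pairs (y_hat[s], a_hat[s]) with weight w_hat[s]) the one of
# weight >= sqrt(eps)/k whose first coordinate is closest, and read off its
# t-th coordinate a_hat[s].
def assemble_coordinate(y_tilde, y_hat, a_hat, w_hat, eps, k):
    alpha_t = np.empty(len(y_tilde))
    heavy = w_hat >= np.sqrt(eps) / k          # ignore low-weight spikes
    for j, y in enumerate(y_tilde):
        s = np.argmin(np.abs(y - y_hat[heavy]))
        alpha_t[j] = a_hat[heavy][s]
    return alpha_t

# Toy instance: a noisy 2-d reconstruction of two spikes plus one spurious
# low-weight spike; the assembly should ignore the spurious one.
y_tilde = np.array([0.30, 0.70])
y_hat = np.array([0.31, 0.69, 0.302])
a_hat = np.array([0.10, 0.90, 0.55])
w_hat = np.array([0.5, 0.49, 0.01])
out = assemble_coordinate(y_tilde, y_hat, a_hat, w_hat, eps=0.04, k=2)
print(out)
```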
\subsection{Error Analysis}
First, we show the random vector $\boldsymbol{r}$ has a critical property that two distant spikes are still distant after the projection with high probability.
This result is standard and similar statements are known in the literature (e.g., Lemma 37 in \cite{wu2020optimal}).
\begin{lemma}\label{lm:prob-proj}
Let $\mathrm{Supp}=(\boldsymbol{\alpha}_1, \cdots, \boldsymbol{\alpha}_k)\subset\Delta_{d-1}$ with $d > 3$.
For every constant $0 < \eta < 1$ and a random vector $\boldsymbol{r} \leftarrow \mathbb{S}_{d-1}$, with probability at least $1-\eta$, for all $\boldsymbol{\alpha}_i,\boldsymbol{\alpha}_j\in \mathrm{Supp}$, we have that
$$
|\boldsymbol{r}^{\top} (\boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j)| \geq \frac{\eta}{k^2d}\|\boldsymbol{\alpha}_i-\boldsymbol{\alpha}_j\|_1.
$$
\end{lemma}
\begin{proof}
Let $0 < \epsilon < 1$ be some constant depending on $k$ and $d$ which we fix later. Assume $\boldsymbol{r} = [r_1, \cdots, r_d]^{\top}$. For a fixed pair $(\boldsymbol{\alpha}_i, \boldsymbol{\alpha}_j)$, we have $$\Pr_{\boldsymbol{r} \sim \mathbb{S}_{d-1}} [|\boldsymbol{r}^{\top}(\boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j)| < \epsilon\|\boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j\|_2] = \Pr_{\boldsymbol{r} \sim \mathbb{S}_{d-1}}[|r_1| < \epsilon].$$
Suppose $\boldsymbol{r}$ is obtained by first sampling a vector from
unit ball $\mathbb{B}_d$ and then
normalizing it to the unit sphere $\mathbb{S}_{d-1}$.
We can see that
\begin{align*}
\Pr_{\boldsymbol{r} \sim \mathbb{S}_{d-1}}[|r_1| < \epsilon] \leq \Pr_{\boldsymbol{r} \sim \mathbb{B}_{d}}[|r_1| < \epsilon]
\end{align*}
where the right-hand side equals the probability that the first coordinate of $\boldsymbol{r}$ lies in
the slice $[-\epsilon, \epsilon]$.
Let $V_d$ be the volume of the $d$-dimensional unit ball.
We have
\begin{align*}
\Pr_{\boldsymbol{r} \sim \mathbb{B}_{d}}[|r_1| < \epsilon] &= \frac{\int_{-\epsilon}^{\epsilon} V_{d-1} (1-x^2)^{\frac{d-1}{2}}\mathrm{d} x}{V_d}
\leq \epsilon\sqrt{d}
\end{align*}
where the last inequality holds since $V_d / V_{d-1} \geq 2/\sqrt{d}$.
By union bound over all pairs $(\boldsymbol{\alpha}_i, \boldsymbol{\alpha}_j)$, we can see that
the failure probability can be bounded by
$$
\Pr_{\boldsymbol{r} \sim \mathbb{S}_{d-1}} [\exists \boldsymbol{\alpha}_i, \boldsymbol{\alpha}_j \in \mathrm{Supp} : |\boldsymbol{r}^{\top}(\boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j)| < \epsilon\|\boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j\|_2] < k^2 \epsilon\sqrt{d}.
$$
Notice that $\|\boldsymbol{\alpha}\|_1 \leq \sqrt{d}\|\boldsymbol{\alpha}\|_2$ for any $\boldsymbol{\alpha} \in \mathbb{R}^d$, and
let $\epsilon = \frac{\eta}{k^2\sqrt{d}}$. We conclude that
$$\Pr_{\boldsymbol{r} \sim \mathbb{S}_{d-1}} [\forall \boldsymbol{\alpha}_i, \boldsymbol{\alpha}_j \in \mathrm{Supp} : |\boldsymbol{r}^{\top}(\boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j)| \geq \frac{\eta}{k^2d}\|\boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j\|_1] > 1-\eta.$$
\end{proof}
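A quick Monte Carlo experiment (a sketch, not a proof; the instance below is randomly generated) illustrates the lemma: for random spikes in $\Delta_{d-1}$, the fraction of sampled directions violating the separation bound stays below $\eta$:

```python
import numpy as np

# Monte Carlo sanity check of the separation property: for a random unit
# vector r, |r . (alpha_i - alpha_j)| >= (eta / (k^2 d)) * ||alpha_i -
# alpha_j||_1 should hold for all pairs with probability at least 1 - eta.
rng = np.random.default_rng(2)
d, k, eta = 6, 4, 0.5
alpha = rng.dirichlet(np.ones(d), size=k)      # k random spikes in Delta_{d-1}

failures = 0
trials = 2000
for _ in range(trials):
    r = rng.normal(size=d)
    r /= np.linalg.norm(r)                     # uniform on the unit sphere
    ok = all(
        abs(r @ (alpha[i] - alpha[j]))
        >= eta / (k ** 2 * d) * np.abs(alpha[i] - alpha[j]).sum()
        for i in range(k) for j in range(i + 1, k)
    )
    failures += not ok
print(failures / trials)
```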
Next, we show that we can reconstruct the higher-dimensional mixture $\boldsymbol{\vartheta}$ from its approximate 1-d projection $\widetilde{\pmix} \approx \proj{\boldsymbol{r}}(\boldsymbol{\vartheta})$ by assigning each spike $(\widetilde{y}_i, \widetilde{w}_i)$ of the 1-dimensional distribution a location in $\mathbb{R}^d$.
At a high level,
consider a clustering of the spikes of $\proj{\boldsymbol{r}}(\boldsymbol{\vartheta})$. The spikes in one cluster still form a cluster in the original $\boldsymbol{\vartheta}$ by Lemma~\ref{lm:prob-proj},
since the projection approximately preserves the distances.
Thus, assigning each cluster a location in $\mathbb{R}^d$ produces a good estimate of $\boldsymbol{\vartheta}$.
The following lemma formally proves that we can reconstruct the original mixture using 2-dimension projections.
\begin{lemma}\label{lm:dimn-step3}
Let $\widetilde{\mix} \in \mathrm{Spike}(\mathbb{R}^d, \Delta_{k-1})$ be the intermediate result (Line 15) in Algorithm~\ref{alg:dim3}.
Let $\epsilon$ be the smallest number such that $\tran(\widetilde{\pmix}, \proj{\boldsymbol{r}}(\boldsymbol{\vartheta})) \leq \epsilon$ and $\tran(\widehat{\pmix}_t, \proj{[\boldsymbol{r}, e_t]}(\boldsymbol{\vartheta})) \leq \epsilon$ for all $t$.
If Lemma \ref{lm:prob-proj} holds for $\boldsymbol{\vartheta}$,
we have that
$$\tran(\widetilde{\mix}, \boldsymbol{\vartheta}) \leq O(k^3d^2\sqrt{\epsilon}).$$
\end{lemma}
\begin{proof}
Partition the spikes of $\boldsymbol{\vartheta}$ into clusters $C_1, \cdots, C_m$ such that
\begin{itemize}
\item $|\boldsymbol{r}^{\top} \boldsymbol{\alpha}_i - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_j| \leq k\sqrt{\epsilon}$ if $\boldsymbol{\alpha}_i$ and $\boldsymbol{\alpha}_j$ are in the same cluster $C_p$.
\item $|\boldsymbol{r}^{\top} \boldsymbol{\alpha}_i - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_j| > \sqrt{\epsilon}$ if $\boldsymbol{\alpha}_i$ and $\boldsymbol{\alpha}_j$ are in different clusters.
\end{itemize}
We define the distance between a spike $\widetilde{y}_j$ and a cluster $C_p$ to be $\min_{i\in C_p} |\widetilde{y}_j - \boldsymbol{r}^{\top}\boldsymbol{\alpha}_i|$. Clearly, there exists at most one cluster $C_p$ whose distance to the spike is less than $\frac{\sqrt{\epsilon}}{2}$. In this case, $|\widetilde{y}_j - \boldsymbol{r}^{\top}\boldsymbol{\alpha}_i| > \frac{\sqrt{\epsilon}}{2}$ holds for all $i \notin C_p$.
Let $\widetilde{C}_p$ be the set of all indices $j$ such that the distance between the spike $\widetilde{y}_j$ and the cluster $C_p$ is less than $\frac{\sqrt{\epsilon}}{2}$.
By this definition, the distance between the spikes in $\widetilde{C}_p$ and the spikes not in $C_p$ is at least $\frac{\sqrt{\epsilon}}{2}$.
By the performance guarantee of our 1-dimensional algorithm, the total transportation distance between $\widetilde{\pmix}$ and $\proj{\boldsymbol{r}}(\boldsymbol{\vartheta})$ is no more than $\epsilon$.
Hence, if we couple $C_p$ and $\widetilde{C}_p$ together,
we can see that
$$\Big|\sum_{i \in C_p} w_i - \sum_{j \in \widetilde{C}_p} \widetilde{w}_j\Big| < 2\sqrt{\epsilon}.$$
Recall that the transportation distance $\tran(\boldsymbol{\vartheta}, \widetilde{\mix})$ can also be defined as the optimum of the following program over joint distributions $\mu$ (i.e., couplings):
\begin{align*}
\textrm{ min } & \sum_{i=1}^{k}\sum_{j=1}^{k} \mu_{i,j} \|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1 \\
\textrm{ s.t. } & w_i = \sum_{j=1}^{k} \mu_{i,j}, \widetilde{w}_j = \sum_{i=1}^{k} \mu_{i,j}, \mu \geq \mathbf{0}.
\end{align*}
Intuitively, a near-optimal way to generate such a distribution is to first couple spikes within the same cluster with as much weight as possible, and then couple spikes across different clusters.
Formally, consider a two-stage procedure that constructs $\mu$.
In the first stage, increase $\mu_{i,j}$ as much as possible without violating the constraints, over all pairs $(i,j)$ with $i \in C_p$ and $j \in \widetilde{C}_p$ for some common $p$.
In the second stage, fill in the remaining $\mu_{i,j}$ so as to satisfy all constraints.
Since the constraint set is convex, the $C_p$ are disjoint, and the $\widetilde{C}_p$ are disjoint, this two-stage process ensures
\begin{align*}
\sum_{i \in C_p} \sum_{j \in \widetilde{C}_p} \mu_{i,j} = \min \Big\{ \sum_{i \in C_p} w_i, \sum_{j \in \widetilde{C}_p} \widetilde{w}_j\Big\}.
\end{align*}
As a result, the coupling distribution satisfies
\begin{align*}
\sum_{i \in C_p} \sum_{j \notin \widetilde{C}_p} \mu_{i,j} \leq \Big|\sum_{i \in C_p} w_i - \sum_{j \in \widetilde{C}_p} \widetilde{w}_j\Big| \leq 2\sqrt{\epsilon}.
\end{align*}
Now we show that $\mu$ is a good coupling for $\tran(\boldsymbol{\vartheta}, \widetilde{\mix})$.
Since $\{C_p\}$ is a partition of $[1, k]$, we have
\begin{align*}
\tran(\boldsymbol{\vartheta}, \widetilde{\mix}) &\leq \sum_{i=1}^{k}\sum_{j=1}^{k} \mu_{i,j} \|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1 = \sum_{p} \sum_{i \in C_p} \sum_{j=1}^{k} \mu_{i,j} \|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1.
\end{align*}
Next, we bound the RHS.
If $\sum_{i\in C_p} \sum_{j=1}^{k} \mu_{i,j} \leq 2k\sqrt{\epsilon}$ for some specific $p$, we directly get $$\sum_{i \in C_p} \sum_{j=1}^{k} \mu_{i,j} \|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1 \leq 2k\sqrt{\epsilon}$$ since $\|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1 \leq 1$ always holds. Otherwise, we have $\sum_{i\in C_p} w_i = \sum_{i\in C_p} \sum_{j=1}^{k} \mu_{i,j} > 2k\sqrt{\epsilon}$. In this case, we decompose the last summation as
\begin{align*}
\sum_{i \in C_p} \sum_{j=1}^{k} \mu_{i,j} \|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1 &= \sum_{i \in C_p} \sum_{j \notin \widetilde{C}_p} \mu_{i,j} \|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1 + \sum_{i \in C_p} \sum_{j \in \widetilde{C}_p} \mu_{i,j} \|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1.
\end{align*}
For the first term, we have $\sum_{i \in C_p} \sum_{j \notin \widetilde{C}_p} \mu_{i,j} \leq 2\sqrt{\epsilon}$ as shown above, and $\|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1 \leq 1$ by definition. Thus the first term is bounded by $2\sqrt{\epsilon}$. For the second term, we have $\sum_{i \in C_p} \sum_{j \in \widetilde{C}_p} \mu_{i,j} \leq 1$ by the defining property of a coupling distribution, and the following lemma shows that $\|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1$ is small. We defer its proof for readability.
\begin{lemma}\label{lm:dimn-step3.5}
If $\sum_{i\in C_p} w_i > 2k\sqrt{\epsilon}$,
then $\|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1 \leq O(k^3d^2\sqrt{\epsilon})$ for all $i \in C_p$ and $j \in \widetilde{C}_p$.
\end{lemma}
As a result,
\begin{align*}
\sum_{i \in C_p} \sum_{j=1}^{k} \mu_{i,j} \|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1 \leq O(k^3d^2 \sqrt{\epsilon}).
\end{align*}
Hence, we can conclude that
$
\tran(\boldsymbol{\vartheta}, \widetilde{\mix}) \leq O(k^3d^2 \sqrt{\epsilon}).
$
\end{proof}
\begin{proofof}{Lemma~\ref{lm:dimn-step3.5}}
We prove the statement by showing that $\widetilde{\veca}_j$ and $\boldsymbol{\alpha}_i$ are close in each dimension $t$.
First, for each $t$, we claim that there must exist some spike $(\widehat{y}_{s,t}, \widehat{\alpha}_{s,t})$ of weight $\widehat{w}_{s,t}$ in $\widehat{\pmix}_{t}$ such that $\widehat{w}_{s,t} \geq \sqrt{\epsilon}$ and $|\widehat{y}_{s,t} - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i'}| + |\widehat{\alpha}_{s,t} - \alpha_{i',t}| \leq \sqrt{\epsilon}$ for some $i' \in C_p$.
Suppose not, i.e., each spike in $\widehat{\pmix}_{t}$ either has weight less than $\sqrt{\epsilon}$ or is at distance more than $\sqrt{\epsilon}$ from every spike in $C_p$. Then, beyond the first $k\sqrt{\epsilon}$ units, each unit of weight in $C_p$ suffers a transportation cost of at least $\sqrt{\epsilon}$ (spikes of low weight do not suffice to cover $C_p$, while all other spikes are far away). Recall that the total weight in $C_p$ is more than $2k\sqrt{\epsilon}$. So in this case, the transportation distance satisfies $\tran(\widehat{\pmix}_t, \proj{[\boldsymbol{r}, e_t]}(\boldsymbol{\vartheta})) > (2k\sqrt{\epsilon} - k\sqrt{\epsilon}) \cdot \sqrt{\epsilon} = k\epsilon$, which contradicts our choice of $\epsilon$. Since $|\widetilde{y}_j - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i''}| \leq \frac{\sqrt{\epsilon}}{2}$ for some $i'' \in C_p$ by our construction of $\widetilde{C}_p$, and $|\boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i'} - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i''}| \leq k\sqrt{\epsilon}$ by the clustering, the triangle inequality gives
$$|\widetilde{y}_j - \widehat{y}_{s,t}| \leq |\widehat{y}_{s,t} - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i'}| + |\boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i'} - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i''}| + |\widetilde{y}_j - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i''}| \leq (1.5 + k)\sqrt{\epsilon}.$$ Since $s$ satisfies the condition in Line 9 of Algorithm~\ref{alg:dim3}, the minimization there ensures $|\widetilde{y}_{j} - \widehat{y}_{s^*_{j,t},t}| \leq (1.5 + k)\sqrt{\epsilon}.$
Next, we claim that there must also exist some spike $(\boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i'''}, \alpha_{i''', t})$ in $\proj{[\boldsymbol{r}, e_t]}(\boldsymbol{\vartheta})$ such that $|\boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i'''} - \widehat{y}_{s^*_{j,t},t}| + |\alpha_{i''', t} - \widehat{\alpha}_{s^*_{j,t}, t}| \leq 2\sqrt{\epsilon}$. If not,
then the spike $(\widehat{y}_{s^*_{j,t},t}, \widehat{\alpha}_{s^*_{j,t}, t})$, which has weight at least $\sqrt{\epsilon}$, is far from every spike of $\proj{[\boldsymbol{r}, e_t]}(\boldsymbol{\vartheta})$. This forces $\tran(\widehat{\pmix}_t, \proj{[\boldsymbol{r}, e_t]}(\boldsymbol{\vartheta})) \geq \sqrt{\epsilon} \cdot 2\sqrt{\epsilon} = 2\epsilon$, which contradicts our choice of $\epsilon$.
Moreover, $|\widetilde{y}_j - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i''''}| \leq \frac{\sqrt{\epsilon}}{2}$ for some $i'''' \in C_p$ by our construction of $\widetilde{C}_p$, and $|\boldsymbol{r}^{\top} \boldsymbol{\alpha}_i - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i''''}| \leq k\sqrt{\epsilon}$ for each specific $i \in C_p$ by the clustering.
Recall that we have already proved $|\widetilde{y}_{j} - \widehat{y}_{s^*_{j,t},t}| \leq (1.5 + k)\sqrt{\epsilon}.$
By the triangle inequality, we have
\begin{align*}
&\quad\ |\boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i} - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i'''}| + |\alpha_{i''', t} - \widehat{\alpha}_{s^*_{j,t}, t}| \\
&\leq |\boldsymbol{r}^{\top} \boldsymbol{\alpha}_i - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i''''}| + |\widetilde{y}_j - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i''''}| + |\widetilde{y}_{j} - \widehat{y}_{s^*_{j,t},t}| + |\boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i'''} - \widehat{y}_{s^*_{j,t},t}| + |\alpha_{i''', t} - \widehat{\alpha}_{s^*_{j,t}, t}| \\
&\leq (4 + 2k)\sqrt{\epsilon}.
\end{align*}
According to Lemma \ref{lm:prob-proj}, we have $|\boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i} - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i'''}| \geq \frac{\eta}{k^2 d} \|\boldsymbol{\alpha}_{i} - \boldsymbol{\alpha}_{i'''} \|_1 \geq \frac{\eta}{k^2 d} |\alpha_{i,t} - \alpha_{i''',t}|$. Thus,
\begin{align*}
|\widetilde{\alpha}_{j,t} - \alpha_{i,t}| &= |\widehat{\alpha}_{s^*_{j,t}, t} - \alpha_{i,t}| \\
&\leq |\alpha_{i,t} - \alpha_{i''', t}| + |\alpha_{i''',t} - \widehat{\alpha}_{s^*_{j,t},t}| \\
&\leq |\alpha_{i,t} - \alpha_{i''', t}| + \frac{k^2d}{\eta}|\alpha_{i''',t} - \widehat{\alpha}_{s^*_{j,t},t}| \\
&\leq \frac{k^2d}{\eta} (|\boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i} - \boldsymbol{r}^{\top} \boldsymbol{\alpha}_{i'''}| + |\alpha_{i''', t} - \widehat{\alpha}_{s^*_{j,t}, t}|) \\
&\leq \frac{k^2d}{\eta} \cdot (4 + 2k)\sqrt{\epsilon} \leq O(k^3d \sqrt{\epsilon})
\end{align*}
where the equality is due to the assignment $\widetilde{\alpha}_{j,t} \leftarrow \widehat{\alpha}_{s_{j,t}^*,t}$ in Line 10 of Algorithm~\ref{alg:dim3}, and the second inequality is due to $\eta < 1$.
Summing over $t$, we conclude that
\begin{align*}
\|\widetilde{\veca}_j - \boldsymbol{\alpha}_i\|_1 \leq \sum_{t=1}^{d} |\widetilde{\alpha}_{j,t} - \alpha_{i,t}| \leq O(k^3d^2 \sqrt{\epsilon}).
\end{align*}
\end{proofof}
Note that we can bound $\epsilon$ using Lemma \ref{lm:dim1-reconstruction} and Lemma \ref{lm:dim2-reconstruction}.
As a result, we can provide a performance guarantee for our algorithm in
the high-dimensional case.
\begin{theorem}\label{thm:dimn-reconstruction}
Let $\widecheck{\mix} = (\widecheck{\veca}, \widetilde{\vecw})$ be the final result
(Line 17) in Algorithm~\ref{alg:dim3}.
Then with probability at least $1-\eta$ for any fixed constant $0 < \eta < 1$,
$$\tran(\widecheck{\mix}, \boldsymbol{\vartheta}) \leq (kd \xi^{\frac{1}{k}})^{O(1)}.$$
\end{theorem}
\begin{proof}
Since $\widetilde{\pmix}'$ is generated by the one-dimensional algorithm, Lemma \ref{lm:dim1-reconstruction} gives $$\tran(\widetilde{\pmix}', \proj{\frac{\boldsymbol{r}+\mathbf{1}}{4}}(\boldsymbol{\vartheta})) \leq O(k \xi^{\frac{1}{4k-2}}).$$
Note that
$\proj{\frac{\boldsymbol{r}+\mathbf{1}}{4}}(\boldsymbol{\vartheta})$ can be transformed to $\proj{\boldsymbol{r}}(\boldsymbol{\vartheta})$ by transforming the coordinates by $\alpha \rightarrow 4\alpha - 1$. We have
$$
\tran(\widetilde{\mix}, \proj{\boldsymbol{r}}(\boldsymbol{\vartheta})) \leq 4\tran(\widetilde{\pmix}', \proj{\frac{\boldsymbol{r}+\mathbf{1}}{4}}(\boldsymbol{\vartheta})) \leq O(k \xi^{\frac{1}{4k-2}}).
$$
Similarly, according to Lemma \ref{lm:dim2-reconstruction}, we can see
$
\tran(\widehat{\pmix}_t, \proj{[\boldsymbol{r}, e_t]}(\boldsymbol{\vartheta})) \leq O(k \xi^{\frac{1}{4k-2}}).
$
Thus, from the definition of $\epsilon$, we have $\epsilon \leq O(k \xi^{\frac{1}{4k-2}})$. With Lemma \ref{lm:dimn-step3}, we have $$\tran(\widetilde{\mix}, \boldsymbol{\vartheta}) \leq O(k^3d^2\sqrt{\epsilon}) \leq (kd \xi^{\frac{1}{k}})^{O(1)}.$$
Note that $\Delta_{d-1}$ is a convex set and each $\widecheck{\veca}_j$ is the projection of $\widetilde{\veca}_j$ onto this set. Thus, $\|\boldsymbol{\alpha}_i - \widecheck{\veca}_j\|_1 \leq \|\boldsymbol{\alpha}_i - \widetilde{\veca}_j\|_1$ holds for all $\boldsymbol{\alpha}_i \in \Delta_{d-1}$ and all $j$. Hence, the projection onto $\Delta_{d-1}$ does not increase the transportation distance. As a result,
$
\tran(\widetilde{\mix}, \widecheck{\mix}) \leq \tran(\widetilde{\mix}, \boldsymbol{\vartheta}).
$
Finally, by triangle inequality, we have
$$\tran(\widecheck{\mix}, \boldsymbol{\vartheta}) \leq \tran(\widetilde{\mix}, \boldsymbol{\vartheta}) + \tran(\widecheck{\mix}, \widetilde{\mix}) \leq (kd \xi^{\frac{1}{k}})^{O(1)}.$$
\end{proof}
By choosing $\xi = (\epsilon/(dk))^{O(k)}$, the previous theorem directly implies Theorem \ref{thm:highdim-kspike}.
\section{Applications to Topic Models}
\label{sec:topic}
In this section, we discuss the application of our results to topic models.
Here, the underlying vocabulary consists of $d$ words.
We are given a corpus of documents. We adopt the popular
“bag of words” model and take each document as an unordered multiset of words.
The assumption is that there is a small number $k$
of “pure” topics, where each topic is a distribution over $[d]$.
A $K$-word document (i.e., a string in $[d]^K$) is generated by first selecting a
topic $\boldsymbol{\alpha}_i\in\Delta_{d-1}$ from the mixture $\boldsymbol{\vartheta}$, and then sampling
$K$ words i.i.d. according to $\boldsymbol{\alpha}_{i}$.
Here, $K$ is referred to as {\em the snapshot number} and we call such a sample a {\em $K$-snapshot} of $\boldsymbol{\vartheta}$.
Our goal here is to recover $\boldsymbol{\vartheta}\in \mathrm{Spike}(\Delta_{d-1},\Delta_{k-1})$,
which is a discrete distribution over $k$ pure topics.
Again, we measure the accuracy in terms of $L_1$-transportation distance.
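The generative process can be sketched as follows; the function name and the list-based representation of topics are hypothetical stand-ins:

```python
import random

def sample_snapshot(weights, topics, K, rng=random):
    """Draw one K-snapshot: pick a pure topic i ~ weights, then
    draw K words i.i.d. from that topic's distribution over [d]."""
    i = rng.choices(range(len(weights)), weights=weights)[0]
    d = len(topics[i])
    return [rng.choices(range(d), weights=topics[i])[0] for _ in range(K)]
```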
\subsection{An Algorithm for $d=O(k)$}
\label{subsec:lowdimtopic}
In this section, we first directly handle the high dimensional case using Algorithm~\ref{alg:dim3}.
We will perform dimension reduction when $d\gg k$, in Section~\ref{subsec:larged}.
\begin{lemma}
\label{lm:lowdimtopic}
There is an algorithm that can learn an arbitrary $k$-spike mixture
of discrete distributions supported in $\Delta_{d-1}$
for any $d$ within $L_1$ transportation distance $\epsilon$ with probability at least $0.99$ using $(kd/\epsilon)^{O(k)}$ many $(2k-1)$-snapshots.
\end{lemma}
The following lemma is a high-dimensional version of the empirical-error bound (see e.g. \cite{rabani2014learning}, \cite{li2015learning}, \cite{gordon2020sparse}).
\begin{lemma}
\label{lm:dim-sampling}
For a matrix $R = [\boldsymbol{r}_{1}, \cdots, \boldsymbol{r}_{p}]\in \mathbb{R}^{d \times p}$ with $\|R\|_{\infty}\leq 1$,
using $2^{O(kp)}\cdot\xi^{-2}\cdot\log(1/\eta)$ samples, we can obtain transformed empirical moments $M'(R)$ satisfying $$\max_{|\mathbf{t}|\leq K}|M'_{\mathbf{t}}(R) - M_{\mathbf{t}}(\proj{R}(\boldsymbol{\vartheta}))| \leq \xi$$
with probability at least $1 - \eta$.
\end{lemma}
\begin{proof}
Assume the words of a document are $x_1, \cdots, x_K$.
If the document is from topic $i$,
then $\Pr[x_j = c] = \alpha_{i,c}$ for all $j$ and $c$.
For each word $x_j$, we generate an independent random vector $\boldsymbol{\beta}_j = [\beta_{1,j}, \cdots, \beta_{p,j}]^{\top}\in \{0,1\}^p$ such that $\mathbb{E}[\beta_{c,j}] = r_{c, x_j}$. For $\mathbf{t} = (t_1, \cdots, t_p)$ with $|\mathbf{t}|\leq K$, we define the observable $h_{\mathbf{t}} = \mathbf{1}[\forall c : \sum_{j=1}^{K} \beta_{c,j} = t_c]$.
Note that $\mathbb{E}[h_{\mathbf{t}}]$ is the normalized histogram of the projected distribution $\proj{R}(\boldsymbol{\vartheta})$. Hence, from the relation between the normalized histogram and standard moments, we can conclude that
\begin{align*}
M(\proj{R}(\boldsymbol{\vartheta})) = \left(\bigotimes_{t=1}^{p} \textrm{Pas}\right) h
\end{align*}
where $\textrm{Pas} \in \mathbb{R}^{(K+1)\times (K+1)}$ with $\textrm{Pas}_{i,j} = [j\geq i] \binom{j}{i} \binom{K+1}{i}$ and $\otimes$ is the tensor product.
Since $\|\textrm{Pas}\| \leq 2^{O(k)}$, we have $\left\|\bigotimes_{t=1}^{p} \textrm{Pas}\right\| \leq 2^{O(kp)}$.
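For intuition, the one-dimensional analogue of this histogram-to-moment conversion follows from the binomial identity $\mathbb{E}\big[\binom{X}{i}\big] = \binom{K}{i}\mu^i$ for $X \sim \mathrm{Binomial}(K, \mu)$; the matrix $\textrm{Pas}$ above packages a closely related linear map. A sketch (the normalization here is the standard one, not necessarily the paper's exact matrix):

```python
from math import comb

def moments_from_histogram(h, K):
    """Recover the raw moments mu^i, i = 0..K, of a coin bias from the
    exact histogram h[t] = Pr[Binomial(K, mu) = t], using the identity
    E[C(X, i)] = C(K, i) * mu^i."""
    return [sum(comb(t, i) * h[t] for t in range(K + 1)) / comb(K, i)
            for i in range(K + 1)]
```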
Let $\widehat{h}_{\mathbf{t}}$ be the empirical average of $h_{\mathbf{t}}$.
Since $h_{\mathbf{t}}$ is a Bernoulli variable, from Hoeffding's inequality, we have
Since $h_{\mathbf{t}}$ is a Bernoulli variable, Hoeffding's inequality gives
$\Pr[|\widehat{h}_{\mathbf{t}} - \mathbb{E}[h_{\mathbf{t}}]| \geq \epsilon] \leq 2 \exp(-2\epsilon^2 s)$ for $s$ samples.
By applying union bound on $\mathbf{t}$, we can see that using $s>2^{O(kp)}\cdot\xi^{-2}\cdot\log(1/\eta)$ samples, we have $|\widehat{h}_{\mathbf{t}} - \mathbb{E}[h_{\mathbf{t}}]| < \xi \cdot 2^{-\Omega(kp)}$ with probability at least $1-\eta$.
Moreover, if we calculate $M'$ from $\widehat{h}$ using the relationship between $M$ and $h$, this leads to
$\max_{|\mathbf{t}|\leq K}|M'_{\mathbf{t}}(R) - M_{\mathbf{t}}(\proj{R}(\boldsymbol{\vartheta}))| \leq \xi$.
\end{proof}
\begin{proofof}{Lemma~\ref{lm:lowdimtopic}}
Let $\xi = (\epsilon/kd)^{O(k)}$. From Theorem~\ref{thm:dimn-reconstruction}, the recovered
$k$-spike distribution $\widecheck{\mix}$ satisfies $\tran(\widecheck{\mix}, \boldsymbol{\vartheta}) \leq (kd\xi^{\frac{1}{k}})^{O(1)} = O(\epsilon)$.
According to Lemma \ref{lm:dim-sampling}, we can construct such a noisy moment oracle $M'(\cdot)$ using $(kd/\epsilon)^{O(k)}$ samples with high probability.
\end{proofof}
\subsection{Dimension Reduction When $d\gg k^{\Omega(1)}$}
\label{subsec:larged}
In topic modeling, the number of words is typically much larger than the number of pure topics
(i.e., $d\gg k$).
In this case, we face another challenge to recover the mixture.
First, there are $\binom{d}{1}+\binom{d}{2}+\cdots+\binom{d}{k}=O(d^k)$ many different moments.
Obtaining all of these empirical moments to sufficient accuracy would require a huge number of $(2k-1)$-snapshot samples.
So if $d\gg k$, we reduce the dimension from $d$ to $O(k)$.
Now, we prove the following theorem. It
improves on the results in \cite{rabani2014learning,li2015learning}, which use
more than $(k/\epsilon)^{O(k^2)}$ $(2k-1)$-snapshots,
and on \cite{gordon2020sparse}, which uses $(k/\epsilon)^{O(k)} \cdot (w_{\min}\zeta^k)^{-O(1)}$
$2k$-snapshots and requires a minimum-separation assumption
(see Table~\ref{tab:ndim-result}).
\begin{theorem}\label{thm:highdimtopic}
There is an algorithm that can learn an arbitrary $k$-spike mixture supported in $\Delta_{d-1}$
for any $d$ within $L_1$ transportation distance $O(\epsilon)$ with probability at least $0.99$, using
$\poly(d,k,\frac{1}{\epsilon})$ many $1$- and $2$-snapshots and $(k/\epsilon)^{O(k)}$ many $(2k-1)$-snapshots.
\end{theorem}
Let $\boldsymbol{\vartheta}$ be the target $k$-spike mixture in $\Delta_{d-1}$ ($d\gg k$).
Suppose there is a learning algorithm ${\mathcal A}$ satisfying the assumption in the theorem.
We show how to apply ${\mathcal A}$ to the projection of $\boldsymbol{\vartheta}$ to a subspace of dimension at most $k$ in $\mathbb{R}^d$.
However, an arbitrary subspace of dimension at most $k$ does not suffice, not even the one spanned by the $k$ spikes of $\boldsymbol{\vartheta}$.
This is because $L_1$ distance is not rotationally invariant and not preserved under projection.
For example, the $L_1$ distance between $(1/d,\ldots,1/d)$ and $(2/d,\ldots,2/d, 0, \ldots, 0)$ is 1 in $\mathbb{R}^d$ but only $\sqrt{1/d}$ in the line spanned by the two points.
Hence, an accurate estimation of the projected measure $\boldsymbol{\vartheta}_B=\proj{B}(\boldsymbol{\vartheta})$ may not translate to
an accurate estimation in the ambient space $\mathbb{R}^d$.
Here, we can use the dimension-reduction method developed in \cite{li2015learning}, which shows that
it suffices to project
the mixture to a special subspace $B$,
such that a unit $L_1$-ball in $B$ ($L_1$ measured in $\mathbb{R}^d$) is close to being an
$\widetilde{O}(d^{-1/2})$ $L_2$-ball
($\widetilde{O}$ hides factors depending only on $k$ and $\epsilon$, but not on $d$).
\begin{lemma}
\label{lm:dimensionreduction}
(\cite{li2015learning}) There is an algorithm that requires $\poly(d,k,\frac{1}{\epsilon})$
many 1- and 2-snapshots to construct a subspace $B$ ($\dim(B)\leq k$)
with the following useful properties:
\begin{enumerate}
\item Suppose
$\{b_1,b_2,\cdots,b_m\}$ is an orthonormal basis of $B$ where $m=\dim(B)\leq k$
(such a basis can be produced by the above algorithm as well).
For any $i\in [m]$, $\|b_i\|_{\infty}\leq O(k^{3/2}\epsilon^{-2} d^{-1/2})$.
\item
Suppose we can learn an approximation $\widetilde{\mix}_B$, supported on $\sspan(B)$, of
the projected measure $\boldsymbol{\vartheta}_B=\Pi_{\sspan(B)}(\boldsymbol{\vartheta})$ such that $\tran(\boldsymbol{\vartheta}_B, \widetilde{\mix}_B)\leq \epsilon_1=\poly(1/k,\epsilon)$ (here $L_1$ is measured in $\mathbb{R}^d$, not in the subspace)
using $N_1(d)$, $N_2(d)$ and $N_{K}(d)$ 1-, 2-, and $K$-snapshot samples.
Then there is an algorithm for learning a mixture $\widetilde{\mix}$ such that $\tran(\boldsymbol{\vartheta}, \widetilde{\mix})\leq \epsilon$ using
$O(N_1(d/\epsilon)+d\log d/\epsilon^3)$,
$O(N_2(d/\epsilon)+k^4 d^{3}\log n/\epsilon^6)$ and $O(N_{K}(d/\epsilon))$ 1-, 2-, and
$K$-snapshot samples respectively.
\end{enumerate}
\end{lemma}
\begin{proofof}{Theorem~\ref{thm:highdimtopic}}
In light of Lemma~\ref{lm:dimensionreduction}, we only need to show how to learn the projection $\boldsymbol{\vartheta}_B=\Pi_{\sspan(B)}(\boldsymbol{\vartheta})$
using the algorithm developed in Section~\ref{subsec:lowdimtopic}.
First, we show how to translate a $K$-snapshot from $\boldsymbol{\vartheta}$ into a $K$-snapshot
from a new $k$-dimensional mixture $\boldsymbol{\vartheta}_{\mathrm{new}}$ (then we apply the algorithm in Section~\ref{subsec:lowdimtopic} for dimension $k$).
Let $L=\sum_{i=1}^m \|b_i\|_{\infty}$.
So $L\leq O(k^{5/2}\epsilon^{-2} d^{-1/2})$.
Let $f:[-L,L]\rightarrow [0,1]$ be defined as $f(x)=\frac{x}{2mL}+\frac{1}{2m}$.
For each word in the original $K$-snapshot, say $i\in[d]$, we translate it to $j$ with probability
$q_{i,j}=f(b_{j,i})$ for each $j\in [m]$, where $b_{j,i}$ denotes the $i$th coordinate of $b_j$,
and translate it into $m+1$ with probability $q_{i,m+1}=1-\sum_{j=1}^m q_{i,j}$. Since $\sum_{j=1}^m f(b_{j,i})= \frac{1}{2mL}\sum_{j=1}^m b_{j,i}+\frac{1}{2} \in [0,1]$, we know that $q_{i,1},q_{i,2},\cdots,q_{i,m+1}\in [0,1]$ and $\sum_{j=1}^{m+1} q_{i,j}=1$. So this defines a valid mixture $\boldsymbol{\vartheta}_{\mathrm{new}}$ with $k$ spikes, together with its $K$-snapshots.
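The translation probabilities can be assembled as follows; the list-of-lists representation of the basis $\{b_j\}$ and the function name are illustrative assumptions:

```python
def word_translation_probs(b, L):
    """q[i][j]: probability of translating original word i into new word j,
    using f(x) = x/(2mL) + 1/(2m); the last column is the leftover mass."""
    m, d = len(b), len(b[0])
    q = [[b[j][i] / (2 * m * L) + 1 / (2 * m) for j in range(m)]
         for i in range(d)]
    for row in q:
        row.append(1.0 - sum(row))  # translate to symbol m+1 with leftover prob
    return q
```

Since each $|b_{j,i}| \leq L$, every entry lands in $[0,1]$ and each row sums to one, so the rows are valid distributions.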
Then we apply our learning algorithm ${\mathcal A}$ for dimension $k$ to obtain some $\boldsymbol{\vartheta}_{\mathrm{new}}'$ such that
$\tran(\boldsymbol{\vartheta}_{\mathrm{new}},\boldsymbol{\vartheta}_{\mathrm{new}}')\leq \epsilon_1=\poly(1/k,\epsilon)$
with $\poly(k,1/\epsilon_1)=\poly(k,1/\epsilon)$ many $(2k-1)$-snapshots.
Next we can see that $\boldsymbol{\vartheta}_{\mathrm{new}}$ (in $\mathbb{R}^k$) and $\boldsymbol{\vartheta}_{B}$ (in $\Span(B)$)
are related by an affine transform.
For a spike $\alpha_i\in\mathrm{Supp}(\boldsymbol{\vartheta})$, one can see that $\Pi_B(\alpha_i)=\sum_{j=1}^m a_{i,j}b_j$,
where $a_{i,j}=\langle\alpha_i,b_j\rangle$.
Each $\alpha_{i}$ produces a new spike $\beta_i$ in $\boldsymbol{\vartheta}_{\mathrm{new}}$, and the mapping is as follows:
for each $j\in [m]$,
we have
$$
\beta_{i,j}=\sum_{t=1}^d \alpha_{i,t}f(b_{j,t})=
\sum_{t=1}^d \alpha_{i,t}(\frac{b_{j,t}}{2mL}+\frac{1}{2m})=\frac{a_{i,j}}{2mL}+\frac{1}{2m}
$$
($\beta_{i,j}$ denotes the $j$th coordinate of $\beta_i$).
So $a_{i,j}=2mL(\beta_{i,j}-\frac{1}{2m})$ and $g(x)=2mLx-L$ is the affine transformation.
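As a quick sanity check, $g$ is the exact inverse of the translation map $f$:

```python
def f(x, m, L):
    # word-translation map f(x) = x/(2mL) + 1/(2m)
    return x / (2 * m * L) + 1 / (2 * m)

def g(x, m, L):
    # affine transformation g(x) = 2mLx - L recovering the coefficient a_{i,j}
    return 2 * m * L * x - L
```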
Now we can translate $\boldsymbol{\vartheta}_{\mathrm{new}}'$ into a mixture in $\Span(B)$ by applying $g(\cdot)$ to each coordinate of each $\beta_i\in \mathrm{Supp}(\boldsymbol{\vartheta}_{\mathrm{new}}')$, obtaining an estimation of $\boldsymbol{\vartheta}_B$, say $\boldsymbol{\vartheta}_B'$.
It remains to show $\tran(\boldsymbol{\vartheta}_B',\boldsymbol{\vartheta}_B)\leq\epsilon_1$.
Let $\tran_B$ denote the transportation distance in $\Span(B)$ (where, for $a=\sum_{j=1}^m a_{j}b_j$ and $c=\sum_{j=1}^m c_{j}b_j$, we set $\tran_B(a,c)=\sum_{j=1}^m |a_{j}-c_{j}|$). Then $\tran_B(\boldsymbol{\vartheta}_B',\boldsymbol{\vartheta}_B)\leq 2mL\tran(\boldsymbol{\vartheta}_{\mathrm{new}}',\boldsymbol{\vartheta}_{\mathrm{new}})=\poly(\epsilon,\frac{1}{k})/\sqrt{d}$.
Hence, we have
$$
\tran(\boldsymbol{\vartheta}_B',\boldsymbol{\vartheta}_B)\leq d \max_i \|b_i\|_\infty\,\tran_B(\boldsymbol{\vartheta}_B',\boldsymbol{\vartheta}_B)\leq \epsilon_1,
$$
where the first inequality holds since
$\tran(a,c)=\|\sum_{j=1}^m (a_{j}-c_{j})b_j\|_1 \leq d\max_i \|b_i\|_\infty \sum_{j=1}^m |a_{j}-c_{j}|$ for any two points $a,c\in \Span(B)$.
\end{proofof}
\section{Applications to Gaussian Mixture Learning}
\label{sec:Gaussian}
In this section, we show how to leverage our results
for sparse moment problem to obtain improved algorithms for learning Gaussian mixtures.
We consider the following setting
studied in \cite{wu2020optimal, doss2020optimal}.
A $k$-Gaussian mixture in $\mathbb{R}^d$ can be parameterised as $\boldsymbol{\vartheta}_N = (\boldsymbol{\alpha}, \boldsymbol{w}, \Sigma)$. Here, $\boldsymbol{\alpha} = \{\boldsymbol{\alpha}_1, \boldsymbol{\alpha}_2, \cdots, \boldsymbol{\alpha}_k\}$ and $\boldsymbol{w} = \{w_1, w_2, \cdots, w_k\} \in \Delta_{k-1}$, where $\boldsymbol{\alpha}_i \in \mathbb{R}^d$ specifies the mean of the $i$th component and $w_i \in [0, 1]$ is the corresponding weight. $\Sigma \in \mathbb{R}^{d\times d}$ is the common covariance matrix of all $k$ mixture components, and we assume that $\Sigma$ is known in advance.
We further assume $\|\boldsymbol{\alpha}_i\|_2 \leq 1$ and the maximal eigenvalue $\|\Sigma\|_2$ is bounded by a constant.
For $k$-Gaussian mixture $\boldsymbol{\vartheta}_N = (\boldsymbol{\alpha}, \boldsymbol{w}, \Sigma)$, each observation is distributed as
\begin{align*}
\boldsymbol{\vartheta}_N \sim \sum_{i=1}^{k} w_i N(\boldsymbol{\alpha}_i, \Sigma).
\end{align*}
We consider the parameter learning problem, that is, to learn the parameter $\boldsymbol{\alpha}$ and $\boldsymbol{w}$ given known covariance matrix $\Sigma$ and a set of i.i.d. samples from $\boldsymbol{\vartheta}_N$. The model is also called {\em Gaussian location mixture model} \cite{wu2020optimal, doss2020optimal}.
\footnote{
Wu and Yang \cite{wu2020optimal} also studied the problem with unknown $\Sigma$. We leave it as a future direction.
}
\subsection{Efficient Algorithm for $d=1$}
In the 1-dimensional case, we denote the known variance by $\sigma$
which is upper bounded by some constant.
As shown in \cite{wu2020optimal}, the moments of a Gaussian mixture have a close connection with the moments of the corresponding discrete distribution $\boldsymbol{\vartheta} = (\boldsymbol{\alpha}, \boldsymbol{w})$. For $x \sim N(\mu, 1)$, we have ${\mathbb{E}}[H_t(x)] = \mu^t$, where the Hermite polynomial $H_t(x)$ is defined as
\begin{align}
\label{eq:hermite}
H_t(x) = \sum_{i=0}^{t} h_{t,i} x^i = t!\sum_{j=0}^{\lfloor \frac{t}{2} \rfloor} \frac{(-1/2)^j}{j!(t - 2j)!} x^{t-2j}.
\end{align}
For 1-dimensional Gaussian mixture $\boldsymbol{\vartheta}_N = (\boldsymbol{\alpha}, \boldsymbol{w}, \sigma)$ with variance $\sigma$, the $t$th moment of the discrete distribution $\boldsymbol{\vartheta} = (\boldsymbol{\alpha}, \boldsymbol{w})$ satisfies
\begin{align*}
M_t(\boldsymbol{\vartheta}) = {\mathbb{E}}_{x \sim \boldsymbol{\vartheta}_N}\left[\sum_{i=0}^{t} h_{t,i} \sigma^{t-i} x^i\right].
\end{align*}
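This identity can be checked exactly with the standard recursion $m_i = \mu m_{i-1} + (i-1)\sigma^2 m_{i-2}$ for the raw moments of $N(\mu,\sigma^2)$, reading $\sigma$ as the noise scale (standard deviation); the helper names below are our own:

```python
from math import factorial

def hermite_coeffs(t):
    """Coefficients h[i] of x^i in the probabilists' Hermite polynomial H_t,
    from H_t(x) = t! * sum_j (-1/2)^j / (j! (t-2j)!) x^(t-2j)."""
    h = [0.0] * (t + 1)
    for j in range(t // 2 + 1):
        h[t - 2 * j] = factorial(t) * (-0.5) ** j / (factorial(j) * factorial(t - 2 * j))
    return h

def gaussian_raw_moments(mu, sigma, t):
    """E[x^i] for x ~ N(mu, sigma^2), i = 0..t, via Stein's recursion."""
    m = [1.0, mu]
    for i in range(2, t + 1):
        m.append(mu * m[i - 1] + (i - 1) * sigma ** 2 * m[i - 2])
    return m[: t + 1]

def denoised_moment(mu, sigma, t):
    """E[sum_i h_{t,i} sigma^(t-i) x^i] -- equals mu^t by the identity above."""
    h = hermite_coeffs(t)
    m = gaussian_raw_moments(mu, sigma, t)
    return sum(h[i] * sigma ** (t - i) * m[i] for i in range(t + 1))
```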
The following lemma guarantees the performance of estimating the moment by sampling.
\begin{lemma}{(Lemma 5 of \cite{wu2020optimal}, restated)}
\label{lm:var-Gauss}
Let $x_1, \cdots, x_n \sim \boldsymbol{\vartheta}_N$ be $n$ independent samples.
$$\widetilde{M}_t(\boldsymbol{\vartheta}) = \frac{1}{n} \sum_{j=1}^{n} \left[ \sum_{i=0}^{t} h_{t,i} \sigma^{t-i} x_j^i \right]$$
is an unbiased estimator for $M_{t}(\boldsymbol{\vartheta})$, and the variance of this estimator can be bounded by
$$\operatorname{Var}[{\widetilde{M}_t(\boldsymbol{\vartheta})}] \leq \frac{1}{n} (\sigma t)^{O(t)}.$$
\end{lemma}
Thus we can compute the moments of the original discrete distribution and use our Algorithm \ref{alg:dim1} to recover the parameters of $\boldsymbol{\vartheta}$. More concretely, we can replace the last two lines (an SDP) in Algorithm 2 of \cite{wu2020optimal} by our Algorithm \ref{alg:dim1}, and obtain the following theorem.
The post-sampling time is improved from $O(k^{6.5})$
in \cite{wu2020optimal} to $O(k^2)$.
\begin{theorem}
\label{thm:1dim-Gaussian}
Let $\boldsymbol{\vartheta}_N$ be an arbitrary $k$-Gaussian mixture over $\mathbb{R}$ with means $\alpha_1, \cdots, \alpha_k$ and known variance $\sigma$ bounded by some constant.
There is an algorithm that can learn the parameters $\boldsymbol{\vartheta}=(\boldsymbol{\alpha}, \boldsymbol{w})$ within transportation distance $O(\epsilon)$, with probability at least $0.99$, using $(k/\epsilon)^{O(k)}$ samples from the mixture $\boldsymbol{\vartheta}_N$. Moreover, once we obtain the estimation of the moments of $\boldsymbol{\vartheta}_N$,
our algorithm only uses $O(k^2)$ arithmetic operations.
\end{theorem}
\begin{proof}
Let $c$ be a constant to be determined.
With $n = (k/\widetilde{\mu}silon)^{ck}$ samples, the variance of empirical moment can be bounded by
\begin{align*}
\operatorname{Var}[M_t'(\boldsymbol{\vartheta})] \leq \frac{1}{(k/\epsilon)^{ck}} \cdot k^{O(k)} \leq (\epsilon/k)^{\Omega(k)}
\end{align*}
where the first inequality holds due to Lemma \ref{lm:var-Gauss} and the fact that $\sigma$ is bounded by a constant, and the second inequality holds by selecting a large enough constant $c>0$.
According to Chebyshev's inequality, for each $t$, with probability at least $1 - 0.01 k^{-2}$,
\begin{align*}
|M'_{t}(\boldsymbol{\vartheta}) - M_{t}(\boldsymbol{\vartheta})| \leq 10k \cdot (\epsilon/k)^{\Omega(k)} \leq (\epsilon/k)^{\Omega(k)}.
\end{align*}
By taking union bound, the probability that the above inequality holds for all $0 \leq t \leq 2k - 1$ is greater than $0.99$. We can conclude the result by applying Theorem \ref{thm:1dim-kspikecoin} directly.
\end{proof}
\subsection{Efficient Algorithm for $d>1$}
\label{sec:highdim-Gaussian}
For higher dimensional Gaussian mixture,
we can reduce the problem to learning the discrete mixture $\boldsymbol{\vartheta} = (\boldsymbol{\alpha}, \boldsymbol{w})$,
and leverage Algorithm \ref{alg:dim3} to solve the problem.
Assume the locations of all Gaussians are in the unit ball.
The error can be bounded easily according to Theorem \ref{thm:highdim-kspike}.
Firstly, as shown in \cite{doss2020optimal}, one can use SVD to reduce a $d$-dimensional problem to a $k$-dimensional problem using $\poly(1/\epsilon, d, k)$ samples. Thus, we only need to consider settings with $d \leq k$.
Similar to the 1-d case, we can transform the problem of learning Gaussian mixture $\boldsymbol{\vartheta}_N = (\boldsymbol{\alpha}, \boldsymbol{w}, \Sigma)$
to the problem of learning the discrete mixture $\boldsymbol{\vartheta} = (\boldsymbol{\alpha}, \boldsymbol{w})$.
In particular, as we show below, we can estimate the moments of the projection of $\boldsymbol{\vartheta}$ from the samples. After obtaining the noisy moment information, we can
apply Algorithm \ref{alg:dim3} to recover the parameter $(\boldsymbol{\alpha}, \boldsymbol{w})$.
We mention that we need to modify Algorithm \ref{alg:dim3} slightly:
we change the domain from $\Delta_{d-1}$ to the unit ball.
We note this does not affect any part of the proof, since Theorem \ref{thm:highdim-kspike} only requires the projected space to be a convex domain.
The remaining task is to show how to estimate the projected moments
for 1-d and 2-d projections.
To estimate the moments of 1-dimensional projection, we can use the estimator in the last section.
In particular, for the projected measure $\proj{\boldsymbol{r}}(\boldsymbol{\vartheta})$ where $\boldsymbol{r}$ is an arbitrary unit vector, the $t$-th moment can be computed by
\begin{align*}
M_{t}(\proj{\boldsymbol{r}}(\boldsymbol{\vartheta})) = {\mathbb{E}}_{\boldsymbol{x} \sim \boldsymbol{\vartheta}_N} \left [\sum_{i=0}^{t} h_{t,i} (\boldsymbol{r}^{\top} \Sigma \boldsymbol{r})^{t-i} (\boldsymbol{r}^{\top} \boldsymbol{x})^i \right].
\end{align*}
We can use sample average to estimate $M_{t}(\proj{\boldsymbol{r}}(\boldsymbol{\vartheta}))$,
and we denote the estimation as $\widetilde{M}_{t}(\proj{\boldsymbol{r}}(\boldsymbol{\vartheta}))$.
Now, we consider the problem of estimating the moments
for 2-dimensional projection along $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$.
First, we compute the vector $\boldsymbol{r}_2' = \boldsymbol{r}_2 - \frac{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_2}{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_1} \boldsymbol{r}_1$.
By the Gram–Schmidt process,
$\boldsymbol{r}_2'$ is $\Sigma$-orthogonal to $\boldsymbol{r}_1$, i.e.,
$\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_2' = 0$.
Moreover, $\boldsymbol{r}_1^{\top} \boldsymbol{x}$ and $\boldsymbol{r}_2'^{\top} \boldsymbol{x}$ are independent Gaussian random variables for $\boldsymbol{x} \sim N(0, \Sigma)$.
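The $\Sigma$-orthogonalization step can be sketched directly; the nested-list representation of $\Sigma$ and the function name are illustrative assumptions:

```python
def sigma_orthogonalize(r1, r2, Sigma):
    """One Gram-Schmidt step in the Sigma inner product:
    returns r2' = r2 - (r1^T Sigma r2 / r1^T Sigma r1) * r1,
    so that r1^T Sigma r2' = 0."""
    quad = lambda u, v: sum(u[a] * Sigma[a][b] * v[b]
                            for a in range(len(u)) for b in range(len(v)))
    c = quad(r1, r2) / quad(r1, r1)
    return [r2[a] - c * r1[a] for a in range(len(r2))]
```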
Note that for independent Gaussian variables $x_1 \sim N(\mu_1, 1)$ and $x_2 \sim N(\mu_2, 1)$, we have ${\mathbb{E}}[H_{t_1}(x_1) H_{t_2}(x_2)] = {\mathbb{E}}[H_{t_1}(x_1)] {\mathbb{E}}[H_{t_2}(x_2)] = \mu_1^{t_1} \mu_2^{t_2}$,
where $H$ is the Hermite polynomial defined in \eqref{eq:hermite}.
As a result, the projected moments along $\boldsymbol{r}_1$ and $\boldsymbol{r}_2'$ can be computed by
\begin{align*}
M_{(t_1, t_2)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2']}(\boldsymbol{\vartheta})) = {\mathbb{E}}_{\boldsymbol{x} \sim \boldsymbol{\vartheta}_N} \left [ \sum_{i_1=0}^{t_1} \sum_{i_2=0}^{t_2} h_{t_1,i_1} h_{t_2,i_2} (\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_1)^{t_1-i_1} (\boldsymbol{r}_2'^{\top} \Sigma \boldsymbol{r}_2')^{t_2-i_2} (\boldsymbol{r}_1^{\top} \boldsymbol{x})^{i_1} (\boldsymbol{r}_2'^{\top} \boldsymbol{x})^{i_2} \right].
\end{align*}
Moreover, the projected moments along $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ can be computed by
\begin{align*}
M_{(t_1, t_2)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2]}(\boldsymbol{\vartheta})) = \sum_{i=0}^{t_2} \binom{t_2}{i} \left(\frac{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_2}{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_1}\right)^{i} M_{(t_1+i,t_2-i)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2']}(\boldsymbol{\vartheta}))
\end{align*}
since $\boldsymbol{r}_2^{\top} \boldsymbol{x} = \boldsymbol{r}_2'^{\top} \boldsymbol{x} + \frac{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_2}{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_1} \boldsymbol{r}_1^{\top} \boldsymbol{x}$, which implies $(\boldsymbol{r}_2^{\top} \boldsymbol{x})^{t_2} = \sum_{i=0}^{t_2} \binom{t_2}{i} \left(\frac{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_2}{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_1}\right)^{i} (\boldsymbol{r}_1^{\top} \boldsymbol{x})^{i} (\boldsymbol{r}_2'^{\top} \boldsymbol{x})^{t_2-i}$.
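The binomial expansion used here can be checked numerically; in the sketch below (plain Python, hypothetical names), `a`, `b`, `c` stand for $\boldsymbol{r}_1^{\top}\boldsymbol{x}$, $\boldsymbol{r}_2'^{\top}\boldsymbol{x}$ and the ratio $\boldsymbol{r}_1^{\top}\Sigma\boldsymbol{r}_2 / \boldsymbol{r}_1^{\top}\Sigma\boldsymbol{r}_1$:

```python
from math import comb

# Check of the identity (b + c*a)^{t2} = sum_i C(t2, i) c^i a^i b^{t2-i},
# which converts moments along r2' into moments along r2.
def expand(a, b, c, t2):
    return sum(comb(t2, i) * c**i * a**i * b**(t2 - i)
               for i in range(t2 + 1))
```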
We use $\widetilde{M}_{(t_1, t_2)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2]}(\boldsymbol{\vartheta}))$
to denote the sample average estimation of
$M_{(t_1, t_2)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2]}(\boldsymbol{\vartheta}))$.
We are ready to state the performance guarantee:
\begin{theorem}
\label{thm:ndim-Gaussian}
The algorithm can learn an arbitrary $d$-dimensional $k$-spike Gaussian mixture within transportation distance $O(\widetilde{\mu}s)$ given the covariance matrix $\Sigma$ using $(k/\widetilde{\mu}s)^{O(k)} + \poly(1/\widetilde{\mu}s, d, k)$ samples.
\end{theorem}
\begin{proof}
We first consider the problem in dimension $d \leq k$.
The only thing we need to show is that the above estimators
$\widetilde{M}_{t}(\proj{\boldsymbol{r}}(\boldsymbol{\vartheta}))$ and $\widetilde{M}_{(t_1, t_2)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2]}(\boldsymbol{\vartheta}))$
have sufficient accuracy with
high probability using $n = (k /\widetilde{\mu}s)^{\Theta(k)}$ samples.
Clearly, both estimators are unbiased.
So it is sufficient to bound the variance of the estimators.
For 1-dimensional projected moments, we can use the same argument as Theorem \ref{thm:1dim-Gaussian}. In particular, we can show that with probability at least $0.999$, for all $0 \leq t \leq 2k - 1$,
\begin{align*}
|\widetilde{M}_{t}(\proj{\boldsymbol{r}}(\boldsymbol{\vartheta})) - M_{t}(\proj{\boldsymbol{r}}(\boldsymbol{\vartheta}))| \leq (\widetilde{\mu}s/k)^{\Omega(k)}.
\end{align*}
For 2-dimensional projected moments, we first bound the variance of the estimator along $\boldsymbol{r}_1$ and $\boldsymbol{r}_2'$. With the same argument as Lemma \ref{lm:var-Gauss}, we have
\begin{align*}
\operatorname{Var}[\widetilde{M}_{(t_1, t_2)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2']}(\boldsymbol{\vartheta}))] \leq (\widetilde{\mu}s / k)^{\Omega(k)}
\end{align*}
for all $t_1 + t_2 \leq 2k$ with probability at least $0.999$.
Moreover, since $\boldsymbol{r}_1$ is chosen uniformly at random over a sphere of radius $\Theta(1)$ in Line 6 of Algorithm \ref{alg:dim3}, we have $\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_1 \geq \Omega(\frac{\mu}{k^2} \|\boldsymbol{r}_1\|_2^2 \|\Sigma\|_2)$ with probability at least $1 - \mu$. In this case, we have $$\frac{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_2}{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_1} \leq \frac{\|\boldsymbol{r}_1\|_2 \|\Sigma\|_2 \|\boldsymbol{r}_2\|_2}{\Omega(\frac{\mu}{k^2} \|\boldsymbol{r}_1\|_2^2 \|\Sigma\|_2)} \leq O(k^2/ \mu)$$
where the last inequality holds because $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ are of norm $\Theta(1)$. Since $\sqrt{\operatorname{Var}[x + y]} \leq \sqrt{\operatorname{Var}[x]} + \sqrt{\operatorname{Var}[y]}$ for any random variables $x$ and $y$,
\begin{align*}
\sqrt{\operatorname{Var}[\widetilde{M}_{(t_1, t_2)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2]}(\boldsymbol{\vartheta}))]} \leq \sum_{i=0}^{t_2} \binom{t_2}{i} \left(\frac{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_2}{\boldsymbol{r}_1^{\top} \Sigma \boldsymbol{r}_1}\right)^{i} \sqrt{\operatorname{Var}[\widetilde{M}_{(t_1+i, t_2-i)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2']}(\boldsymbol{\vartheta}))]}.
\end{align*}
As a result, by choosing $\mu = 0.001$, we have
\begin{align*}
\operatorname{Var}[\widetilde{M}_{(t_1, t_2)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2]}(\boldsymbol{\vartheta}))] \leq (\widetilde{\mu}s /k)^{\Omega(k)}
\end{align*}
holds for all $\boldsymbol{r}_2$ with probability at least $0.999$.
Conditioned on this event, by Chebyshev's inequality and a union bound, with probability at least $1 - 0.001d$, for all $0 \leq t_1, t_2 \leq 2k - 1$,
\begin{align*}
|\widetilde{M}_{(t_1, t_2)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2]}(\boldsymbol{\vartheta})) - M_{(t_1, t_2)}(\proj{[\boldsymbol{r}_1, \boldsymbol{r}_2]}(\boldsymbol{\vartheta}))| \leq (\widetilde{\mu}s / k)^{\Omega(k)}.
\end{align*}
Recall that the algorithm uses a 1-d projection for one $\boldsymbol{r}$, 2-d projections for one $\boldsymbol{r}_1$ paired with $d$ different choices of $\boldsymbol{r}_2$, and the high-probability statements hold for the random choices of $\boldsymbol{r}$ and $\boldsymbol{r}_1$. Hence, by a union bound over all these events, the projected moment estimations have sufficient accuracy with probability at least $0.995$.
If the dimension $d>k$,
we can apply the dimension reduction in \cite{doss2020optimal},
which requires an extra sample complexity of $\poly(1/\widetilde{\mu}s, d, k)$. The total sample complexity is $(k/\widetilde{\mu}s)^{O(k)} + \poly(1/\widetilde{\mu}s, d, k)$.
\end{proof}
Finally, we discuss the running time during the sampling phase and post-sampling
phase.
The dimension reduction requires $\poly(1/\widetilde{\mu}s, d, k)$ samples and
$O(d^3)$ time \cite{doss2020optimal}.
For the recovery problem in dimension $k$, each $1$-d or $2$-d moment oracle can be computed in $O(nk^3)$ time, where $n=(k/\widetilde{\mu}s)^{O(k)}$ is the number of samples. Since we only require $O(k)$ moment oracles, the sampling time can be bounded by $O(n k^4)$.
This improves the $O(n^{5/4}\poly(k))$ sampling time in \cite{doss2020optimal}
(their algorithm requires 1-d projections along $O(n^{1/4})$ many directions).
For post-sampling running time, our Algorithm \ref{alg:dim3}
runs in time $O(k^4)$ (since $d\leq k$ by dimension reduction).
This improves the $O(n^{1/2}\poly(k))$ post-sampling time in \cite{doss2020optimal}.
\section{Concluding Remarks}
We provide efficient and easy-to-implement algorithms for learning the $k$-mixture over discrete distributions from noisy moments.
We measure the accuracy in terms of transportation distance, and our analysis is independent of the minimal separation.
The techniques used in our analysis for the 1-dimensional case may be useful in the perturbative analysis of other problems involving the Hankel matrix or the Vandermonde matrix (such as super-resolution).
Our problem is a special case of learning mixtures of product distributions \cite{gordon2021source}, which is in turn a special case of the general problem of learning mixtures of Bayesian networks \cite{gordon2021identifying}, which has important applications in causal inference. In fact, the algorithms in \cite{gordon2021source,gordon2021identifying} used the algorithm for the sparse Hausdorff moment problem as a subroutine. It would be interesting to see whether our techniques can be used to improve the learning algorithms for these problems.
\section{Transportation Distance for Non-probability Measures}
\label{app:transfornonprob}
The dual formulation \eqref{eq:trans} can be generalized to non-probability measures.
Specifically, for two signed measures $P$ and $Q$ defined over $\mathbb{R}^d$ with $\int P \,\mathrm{d} x = \int Q \,\mathrm{d} x = 1$, we define their transportation distance as \eqref{eq:trans}.
The equivalent primal formulation is as follows:
Define measures $(P-Q)^+$ and $(Q-P)^+$ in which $(P-Q)^+(x) = \max\{P(x) - Q(x), 0\}$. According to the dual form \eqref{eq:trans}, it is easy to check that $\tran(P, Q) = \tran((P-Q)^+, (Q-P)^+)$. Note that both $(P-Q)^+$ and $(Q-P)^+$ are non-negative. As a result, we obtain another definition of the generalized transportation distance:
\begin{align}
\label{eq:signedtrans}
\tran(P, Q) = \inf \left \{ \int \|x - y\|_1 \,\mathrm{d} \mu(x, y) : \mu \in M((P-Q)^+, (Q-P)^+) \right \}.
\end{align}
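In one dimension, \eqref{eq:signedtrans} can be computed for discrete (possibly signed) spike measures by integrating the absolute difference of the cumulative distribution functions. A small sketch (plain Python, hypothetical name, not from the paper):

```python
# W1 distance (l1 cost on the line) between two discrete signed measures
# with equal total mass, via  W1 = integral of |F_P - F_Q|;
# spikes are given as (position, weight) pairs.
def tran_1d(spikes_p, spikes_q):
    diff = {}
    for x, w in spikes_p:
        diff[x] = diff.get(x, 0.0) + w
    for x, w in spikes_q:
        diff[x] = diff.get(x, 0.0) - w
    xs = sorted(diff)
    total, cdf = 0.0, 0.0
    for x0, x1 in zip(xs, xs[1:]):
        cdf += diff[x0]                 # CDF of the difference measure on [x0, x1)
        total += abs(cdf) * (x1 - x0)
    return total
```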
We also need to define the transportation distance over
complex measures.
Denote $\mathbb{B}_{\mathbb{C}} = \{\alpha \in \mathbb{C} : |\alpha|\leq 1\}$.
We consider a complex measure $P$ such that $\int_{\mathbb{C}} \mathrm{d}P(x) = 1$. Denote the real and imaginary components of a complex measure $P$ as $P^{\textrm{r}}$ and $P^{\textrm{i}}$ respectively; concretely, $P(x) = P^{\textrm{r}}(x) + {\mathfrak{i}} P^{\textrm{i}}(x)$. We generalize the transportation distance to complex weights as
\begin{align}
\label{eq:complextrans}
\tran(P, Q) = \tran(P^{\textrm{r}}, Q^{\textrm{r}}) + \tran(P^{\textrm{i}}, Q^{\textrm{i}})
\end{align}
By this definition, we can easily see that
\begin{align*}
\tran(P, Q) &= \sup\left\{\int f^{\textrm{r}} \,\mathrm{d}(P^{\textrm{r}}-Q^{\textrm{r}}): f^{\textrm{r}}\in 1\text{-}\mathsf{Lip}\right\} + \sup\left\{\int f^{\textrm{i}} \,\mathrm{d}(P^{\textrm{i}}-Q^{\textrm{i}}): f^{\textrm{i}}\in 1\text{-}\mathsf{Lip}\right\} \\
&= \sup \left \{ \textrm{real} \left(\int (f^{\textrm{r}} - {\mathfrak{i}} f^{\textrm{i}})\,\mathrm{d} (P - Q) \right ) : f^{\textrm{r}}, f^{\textrm{i}} \in 1\text{-}\mathsf{Lip}\right\} \\
&\leq 2\sup \left \{ \int f\,\mathrm{d} (P - Q) : f \in 1\text{-}\mathsf{Lip}_{\mathbb{C}}\right\}
\end{align*}
where $1\text{-}\mathsf{Lip}_{\mathbb{C}}:=\{f: \mathbb{C}\rightarrow \mathbb{C} \mid |f(x)-f(y)|\leq |x-y| \text{ for any }x,y\in \mathbb{C}\}$ is the set of complex 1-Lipschitz functions on $\mathbb{C}$. The last inequality holds since we can take $f(x) = (f^{\textrm{r}}(x) - {\mathfrak{i}} f^{\textrm{i}}(x)) / 2$.
\section{Moment-Transportation Inequalities}
\label{app:momenttransineq}
Our analysis needs to bound the transportation distance
by a function of moment distance.
We first present a moment-transportation inequality
in one dimension (Lemma~\ref{lm:inequality1d}),
and then generalize the result to complex numbers and higher dimensions.
The first such inequality in one dimension was obtained
in \cite[Lemma 5.4]{rabani2014learning} (with worse parameters).
Lemma~\ref{lm:inequality1d} was first proved by
Wu and Yang \cite{wu2020optimal} in the context of learning Gaussian mixtures.
Here, we provide a proof in our notation for completeness.
\subsection{A 1-dimensional Moment-Transportation Inequality}
\label{subsec:1dmoment}
We allow more general mixtures with spikes in $[-1,1]$ and negative weights.
This slight generalization will be useful since some intermediate coordinates and weights may be negative. Recall that $\Sigma_{k-1}=\{\boldsymbol{w}=(w_1,\cdots, w_k)\in \mathbb{R}^k \mid \sum_{i=1}^k w_i=1\}$.
\begin{lemma}
\label{lm:inequality1d} \cite{wu2020optimal}
For any two mixtures with $k$ components $\boldsymbol{\vartheta},\boldsymbol{\vartheta}' \in \mathrm{Spike}([-1, 1],\Sigma_{k-1})$, it holds that
$$
\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')\leq O(k\Mdis_K(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')^{\frac{1}{2k-1}}).
$$
\end{lemma}
We begin with some notations.
Let $\mathrm{Supp} = \mathrm{Supp}(\boldsymbol{\vartheta}) \cup \mathrm{Supp}(\boldsymbol{\vartheta}')$ and let $n=|\mathrm{Supp}|$ ($n\leq 2k$).
Arrange the points
in $\mathrm{Supp}$ as $\boldsymbol{\alpha}=(\alpha_1,\cdots,\alpha_n)$ where $\alpha_1< \cdots <\alpha_n$.
\begin{definition}
Let $\boldsymbol{\alpha}=(\alpha_1,\cdots,\alpha_n)$ such that $-1\leq \alpha_1< \cdots< \alpha_n\leq 1$.
Let ${\mathcal P}_{\boldsymbol{\alpha}}$ denote the set of polynomials of degree at most $n-1$ and 1-Lipschitz over
the discrete points in $\boldsymbol{\alpha}$, i.e.,
$$
{\mathcal P}_{\boldsymbol{\alpha}}=\{f\mid \deg f\leq n-1,\, f(\alpha_1)=0; \, |f(\alpha_i)-f(\alpha_j)|\leq|\alpha_i-\alpha_j|\,\, \forall i,j\in [n]\}.
$$
\end{definition}
Note that we do not require $f$ to be 1-Lipschitz over the entire interval $[-1,1]$.
We make the following simple observation.
\begin{observation}
\label{ob:simple}
$
\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')=\sup\left\{\int f \,\mathrm{d}(\boldsymbol{\vartheta}-\boldsymbol{\vartheta}'): f\in 1\text{-}\mathsf{Lip}\right\}=\sup\left\{\int p \,\mathrm{d}(\boldsymbol{\vartheta}-\boldsymbol{\vartheta}'): p\in {\mathcal P}_{\boldsymbol{\alpha}}\right\}
$
\end{observation}
We can view ${\mathcal P}_{\boldsymbol{\alpha}}$ as a subset in the linear space of all polynomials of degree at most $n-1$.
In fact, ${\mathcal P}_{\boldsymbol{\alpha}}$ is a convex polytope
(a convex combination of two polynomials being 1-Lipschitz over $\boldsymbol{\alpha}$
is also a polynomial that is 1-Lipschitz over $\boldsymbol{\alpha}$).
The {\em height} of a polynomial $\sum_i c_i x^i$ is the maximum absolute coefficient $\max_i |c_i|$.
As we will see shortly, the height of a polynomial in ${\mathcal P}_{\boldsymbol{\alpha}}$ is related to the required moment accuracy, and we need the height to be upper bounded by a value independent of the minimum separation
$\zeta$. However, a polynomial in ${\mathcal P}_{\boldsymbol{\alpha}}$ may have very large height (depending on the inverse of the minimum
separation $1/|\alpha_{i+1}-\alpha_i|$).
This can be seen from the Lagrangian interpolation formula:
$p(x) := \sum_{j=1}^{n} p(\alpha_j) \ell_j(x)$
where $\ell_j$ is the Lagrange basis polynomial
$
\ell_j(x) := \prod_{1\le m\le n, m\neq j} (x-\alpha_m)/(\alpha_j-\alpha_m).
$
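The blow-up can be observed numerically: interpolating the 1-Lipschitz values $(0,\varepsilon,0)$ on the nodes $(0,\varepsilon,2\varepsilon)$ gives a leading coefficient of order $1/\varepsilon$. A small sketch (plain Python, hypothetical names; coefficients listed from low to high degree):

```python
# Coefficients of the Lagrange interpolant through (xs, ys),
# built by expanding each basis polynomial prod_{m != j} (x - x_m).
def interp_coeffs(xs, ys):
    n = len(xs)
    coeffs = [0.0] * n
    for j in range(n):
        basis = [1.0]
        denom = 1.0
        for m in range(n):
            if m == j:
                continue
            basis = [0.0] + basis               # multiply by x
            for i in range(len(basis) - 1):
                basis[i] -= xs[m] * basis[i + 1]  # subtract x_m * (old poly)
            denom *= xs[j] - xs[m]
        for i in range(n):
            coeffs[i] += ys[j] * basis[i] / denom
    return coeffs
```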
To remedy this, one can in fact show that for any polynomial in ${\mathcal P}_{\boldsymbol{\alpha}}$, there exists an approximating polynomial whose height can be bounded.
\begin{lemma}
\label{lm:decomposition}
For any polynomial $p(x)\in {\mathcal P}_{\boldsymbol{\alpha}}$ and $\eta>0$,
there is a polynomial $p_{\eta}(x)$ of degree at most $n-1$ such that the following properties hold:
\begin{enumerate}
\item
$|p_{\eta}(\alpha_i)-p(\alpha_i)|\leq 2\eta$, for any $i\in [n]$.
\item
The height of $p_{\eta}$ is at most $n2^{2n-2}(\frac{n}{\eta})^{n-2}$.
\end{enumerate}
\end{lemma}
Before proving Lemma~\ref{lm:decomposition}, we show Lemma~\ref{lm:inequality1d} can be easily derived from
Observation~\ref{ob:simple}
and Lemma~\ref{lm:decomposition}.
\begin{proofof}{Lemma~\ref{lm:inequality1d}}
Fix any constant $\eta>0$.
Let $\xi=\Mdis_K(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')$.
\begin{align*}
\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')&=\sup_{f\in 1\text{-}\mathsf{Lip}} \int f \,\mathrm{d}(\boldsymbol{\vartheta}-\boldsymbol{\vartheta}')
=\sup_{p\in {\mathcal P}_{\boldsymbol{\alpha}}}\int p \,\mathrm{d}(\boldsymbol{\vartheta}-\boldsymbol{\vartheta}') &
(\text{Observation~\ref{ob:simple}})\\
&\leq 4\eta+\sup_{p\in {\mathcal P}_{\boldsymbol{\alpha}}} \int p_{\eta} \,\mathrm{d}(\boldsymbol{\vartheta}-\boldsymbol{\vartheta}') & (\text{Lemma~\ref{lm:decomposition}})\\
&\leq 4\eta+\sup_{p\in {\mathcal P}_{\boldsymbol{\alpha}}} \sum_{i=1}^{n-1} n2^{2n-2}\left(\frac{n}{\eta}\right)^{n-2}\Mdis_K(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')
& (\text{Lemma~\ref{lm:decomposition}})\\
&\leq 4\eta+2^{4k}(2k)^{2k}\frac{\xi}{\eta^{2k-2}} & (n\leq 2k)
\end{align*}
By choosing $\eta=16k\xi^{\frac{1}{2k-1}}$,
we conclude that $\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')\leq O(k\xi^{\frac{1}{2k-1}})$.
\end{proofof}
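As a sanity check on the final step (this calculation is not spelled out in the proof), the choice $\eta=16k\xi^{\frac{1}{2k-1}}$ indeed balances the two terms:

```latex
% Setting \eta^{2k-1} \asymp 2^{4k}(2k)^{2k}\xi balances the two terms, and
\big(2^{4k}(2k)^{2k}\big)^{\frac{1}{2k-1}}
  = 2^{\frac{4k}{2k-1}}\cdot (2k)^{\frac{2k}{2k-1}}
  \leq 2^{4}\cdot 2\cdot(2k) = O(k),
% using 4k/(2k-1) \le 4 and (2k)^{1/(2k-1)} \le 2 for k \ge 1.
% Hence \eta = 16k\,\xi^{\frac{1}{2k-1}} gives
% 4\eta + 2^{4k}(2k)^{2k}\,\xi/\eta^{2k-2} = O\big(k\,\xi^{\frac{1}{2k-1}}\big).
```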
Next, we prove Lemma~\ref{lm:decomposition}.
We introduce the following set ${\mathcal F}_{\boldsymbol{\alpha}}$ of special polynomials.
They are in fact the extreme points of the polytope ${\mathcal P}_{\boldsymbol{\alpha}}$.
\begin{definition}
Let $\boldsymbol{\alpha}=(\alpha_1, \cdots, \alpha_n)$ such that $-1\leq \alpha_1<\cdots<\alpha_n\leq 1$.
Let
$$
{\mathcal F}_{\boldsymbol{\alpha}}=\{p\mid \deg p\leq n-1,\, p(\alpha_1)=0; \,|p(\alpha_{i+1})-p(\alpha_i)|=|\alpha_{i+1}-\alpha_i|\,\, \forall i\in [n-1]\}.
$$
\end{definition}
It is easy to see that ${\mathcal F}_{\boldsymbol{\alpha}}\subset {\mathcal P}_{\boldsymbol{\alpha}}$ and $|{\mathcal F}_{\boldsymbol{\alpha}}|=2^{n-1}$.
Now, we modify each polynomial
$p\in {\mathcal F}_{\boldsymbol{\alpha}}$ slightly, as follows.
Consider the intervals $[\alpha_1,\alpha_2], \cdots, [\alpha_{n-1},\alpha_n]$.
If $\alpha_i-\alpha_{i-1}\leq \eta/n$, we say the interval $[\alpha_{i-1}, \alpha_i]$ is
a {\em small interval}.
Otherwise, it is {\em large}.
We first merge all consecutive small intervals.
For each resulting interval, we merge it with the large interval to its right.
Note that we never merge two large intervals together.
Let $S=\{\alpha_{i_1}=\alpha_1, \alpha_{i_2}, \cdots, \alpha_{i_m}=\alpha_n\}\subseteq \boldsymbol{\alpha}$
be the endpoints of the current intervals.
It is easy to see that the distance between any two points in $S$
is at least $\eta/n$ (since each current interval contains exactly one original large interval).
Define a continuous piecewise linear function $L:[\alpha_1,\alpha_n]\rightarrow \mathbb{R}$ as follows:
(1)
$L(\alpha_1)=p(\alpha_1)=0$.
(2)
The breaking points are the points in $S$;
(3)
Each linear piece of $L$ has slope either $1$ or $-1$;
for two consecutive breaking points $\alpha_{i_j}, \alpha_{i_{j+1}}\in S$,
if $p(\alpha_{i_j})>p(\alpha_{i_{j+1}})$, the slope of the corresponding piece is $-1$.
Otherwise, it is $1$.
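A minimal sketch of the merging step (plain Python, hypothetical name): the breakpoint set $S$ consists of $\alpha_1$, the right endpoint of every large gap, and $\alpha_n$.

```python
# Breakpoints S: runs of small gaps (<= eta/n) are merged and attached to
# the large gap on their right; S keeps alpha_1, the right endpoint of each
# large gap, and alpha_n.  Assumes alpha is sorted increasingly.
def breakpoints(alpha, eta):
    n = len(alpha)
    S = [alpha[0]]
    for i in range(1, n):
        if alpha[i] - alpha[i - 1] > eta / n:   # large gap ends here
            S.append(alpha[i])
    if S[-1] != alpha[-1]:                      # trailing run of small gaps
        S.append(alpha[-1])
    return S
```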
\begin{lemma}
\label{lm:close}
For any $\alpha_i\in \boldsymbol{\alpha}$, $|L(\alpha_i)-p(\alpha_i)|\leq 2\eta$.
\end{lemma}
\begin{proof}
We prove inductively that $|L(\alpha_i)-p(\alpha_i)|\leq 2i\eta/n$.
The base case ($i=1$) is trivial.
Suppose the claim holds for all values at most
$i$; we now show it holds for $i+1$.
There are two cases.
If $\alpha_{i+1}-\alpha_{i}\leq \eta/n$,
we have
\begin{align*}
|L(\alpha_{i+1})-p(\alpha_{i+1})|& \leq |L(\alpha_{i+1})-L(\alpha_{i})|+|L(\alpha_{i})-p(\alpha_{i})|+|p(\alpha_{i})-p(\alpha_{i+1})| \\
&= 2|\alpha_{i+1}-\alpha_{i}|+|L(\alpha_{i})-p(\alpha_{i})|\leq 2(i+1)\eta/n.
\end{align*}
If $\alpha_{i+1}-\alpha_{i}> \eta/n$, we have $\alpha_{i+1}\in S$ (by the merge procedure).
Suppose $\alpha_j$ ($j\leq i$) is the point in $S$ right before $\alpha_{i+1}$.
We can see that (by the definition of $p$)
$$
|p(\alpha_j)-p(\alpha_{i+1})| \in |\alpha_{i+1}-\alpha_{i}|\pm |\alpha_{i}-\alpha_{j}| \subseteq |\alpha_{i+1}-\alpha_{i}|\pm (i-j)\eta/n.
$$
From this, we can see that
\begin{align*}
|L(\alpha_{i+1})-p(\alpha_{i+1})|& \leq |L(\alpha_{i+1})-L(\alpha_{j})+ p(\alpha_{j})-p(\alpha_{i+1})|+|L(\alpha_{j})-p(\alpha_{j})| \\
&\leq 2(i+1-j)\eta/n +2j\eta/n\leq 2(i+1)\eta/n.
\end{align*}
The second inequality holds since $L(\alpha_{i+1})-L(\alpha_{j})=\alpha_{i+1}-\alpha_j$ if $p(\alpha_{i+1})>p(\alpha_j)$, and
$L(\alpha_{i+1})-L(\alpha_{j})=\alpha_{j}-\alpha_{i+1}$ if $p(\alpha_{i+1})\leq p(\alpha_j)$.
\end{proof}
Let $T_\eta(p)$ be the polynomial of degree $\leq n-1$ that interpolates
$L$ at the points of $\boldsymbol{\alpha}$ (i.e., $T_{\eta}(p)(\alpha_i)=L(\alpha_i)$ for all $i\in [n]$).
$T_\eta(p)$ is unique by this definition, and
by Lemma~\ref{lm:close}, $|T_\eta(p)(\alpha_i)-p(\alpha_i)|\leq 2i\cdot\frac{\eta}{n}\leq 2\eta$.
Now, we show the height of $T_\eta(p)$ can be bounded, independent of the
minimum separation $|\alpha_{i+1}-\alpha_i|$, due to its special structure.
\begin{lemma}
\label{lm:robustpolynomial}
For any polynomial $p(x) \in {\mathcal F}_{\boldsymbol{\alpha}}$, define the
polynomial $T_\eta(p)$ as above.
Suppose $T_\eta(p)=\sum_{i=0}^{n-1} c_i x^i$.
We have $|c_t|\leq n2^{2n-2}(\frac{n}{\eta})^{n-2}$ for $t\in [n-1]$.
\end{lemma}
\begin{proof}
The key of the proof is Newton's interpolation formula, which we briefly review here
(see e.g., \cite{hamming2012numerical}).
Let $F[\alpha_1,\cdots,\alpha_i]$ be the $i$th order difference of $T_\eta(p)$, which can be defined recursively:
$$
F[\alpha_t,\alpha_{t+1}] = \frac{T_{\eta}(p)(\alpha_{t+1}) - T_{\eta}(p)(\alpha_{t})}{\alpha_{t+1}-\alpha_t}, \cdots,
$$
$$
F[\alpha_t,\cdots,\alpha_{t+i}] = \frac{F[\alpha_{t+1},\cdots,\alpha_{t+i}] - F[\alpha_{t},\cdots,\alpha_{t+i-1}]}{\alpha_{t+i}-\alpha_{t}}.
$$
Then, by Newton's interpolation formula, we can write that
$$
T_\eta(p)(x)=F[\alpha_1,\alpha_2](x-\alpha_1)+F[\alpha_1,\alpha_2,\alpha_3](x-\alpha_1)(x-\alpha_2)+\cdots+F[\alpha_1,\cdots,\alpha_n]\prod_{i=1}^{n-1}(x-\alpha_i).
$$
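Newton's divided differences and the Newton form can be sketched as follows (plain Python, hypothetical names; not tied to the paper's implementation):

```python
# Leading column F[x_1], F[x_1,x_2], ..., F[x_1,...,x_n] of the
# divided-difference table for the data (xs, ys).
def divided_differences(xs, ys):
    table = list(ys)
    out = [table[0]]
    n = len(xs)
    for order in range(1, n):
        table = [(table[i + 1] - table[i]) / (xs[i + order] - xs[i])
                 for i in range(n - order)]
        out.append(table[0])
    return out

# Evaluate the Newton form  sum_j diffs[j] * prod_{i<j} (x - xs[i]).
def newton_eval(xs, diffs, x):
    val, prod = 0.0, 1.0
    for j, d in enumerate(diffs):
        val += d * prod
        prod *= x - xs[j]
    return val
```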
By the definition of $T_\eta(p)$,
we know that every $2$nd order difference of $T_\eta(p)$ is either $1$ or $-1$.
Now, we show inductively that the absolute value of
the $i$th order difference is at most $2^{i-2}(\frac{n}{\eta})^{i-2}$ for any $i=3,\cdots,n$.
The base case is simple:
$F[\alpha_t,\alpha_{t+1}]=\pm 1$.
Now, we prove it for $i\geq 3$.
We distinguish two cases.
\begin{enumerate}
\item
If $\alpha_{t+i-1}-\alpha_t\leq \eta/n$,
all $\alpha_t, \alpha_{t+1}, \cdots, \alpha_{t+i-1}$ must belong to the same segment of $L$
(since all intervals in this range are small and thus merged together).
Therefore, we can see that
$F[\alpha_t,\alpha_{t+1}]=F[\alpha_{t+1},\alpha_{t+2}]=\cdots=F[\alpha_{t+1},\alpha_{t+i-1}]$,
from which we can see any 3rd order difference (hence the 4th, 5th, up to the $i$th)
in this interval is zero.
\item
Suppose $\alpha_{t+i-1}-\alpha_t>\eta/n$.
By the induction hypothesis, we have that
\begin{align*}
|F[\alpha_t,\cdots,\alpha_{t+i-1}]|&=\left|\frac{F[\alpha_t,\cdots,\alpha_{t+i-2}]-F[\alpha_{t+1},\cdots,\alpha_{t+i-1}]}{\alpha_{t+i-1}-\alpha_t}\right| \\
&\leq 2\cdot2^{i-3}\left(\frac{n}{\eta}\right)^{i-3}\cdot \frac{n}{\eta}
\leq 2^{i-2}\left(\frac{n}{\eta}\right)^{i-2}.
\end{align*}
\end{enumerate}
Also, since $\alpha_j\in [-1,1]$, the absolute value of the coefficient of $x^t$ in
$\prod_{j=1}^i(x-\alpha_j)$ is less than $2^n$.
So, finally, we have $|c_t|\leq n2^{2n-2}(\frac{n}{\eta})^{n-2}$.
\end{proof}
\begin{proofof}{Lemma~\ref{lm:decomposition}}
Consider any polynomial $p(x) \in {\mathcal P}_{\boldsymbol{\alpha}}$.
Let $g_i=(p(\alpha_{i+1})-p(\alpha_i))/(\alpha_{i+1}-\alpha_i)$ for $i\in[n-1]$.
So $-1\leq g_i\leq 1$, implying that the point $\boldsymbol{g}=(g_1,\cdots,g_{n-1})$ is in the $(n-1)$-dimensional hypercube.
Hence, by Carath\'eodory's theorem, there are $\lambda_1,\cdots,\lambda_{n}\geq 0$, $\lambda_1+\cdots+\lambda_{n}=1$,
such that $\boldsymbol{g}=\sum_{i=1}^{n}\lambda_i \boldsymbol{q}_i$ where $\boldsymbol{q}_1,\cdots,\boldsymbol{q}_{n}$ are $n$ vertices of the hypercube,
i.e., $\boldsymbol{q}_i\in \{-1,+1\}^{n-1}$.
For each $\boldsymbol{q}_i$, we define $p_i(x)$ to be the polynomial of degree at most $n-1$
with
$$
p_i(\alpha_1)=0,\;
p_i(\alpha_m)=\sum_{j=1}^{m-1} q_{i,j}(\alpha_{j+1}-\alpha_j),
$$
where $q_{i,j}$ is the $j$th coordinate of $\boldsymbol{q}_i$.
We can see $p_i\in {\mathcal F}_{\boldsymbol{\alpha}}$.
It is easy to verify that
$
p(x)=\sum_{i=1}^{n} \lambda_i p_i(x),
$
since both the LHS and RHS take the same values at $\{\alpha_1,\cdots, \alpha_n\}$.
For $\eta>0$, we define $p_{\eta}(x)$ as $p_{\eta}=\sum_{i=1}^{n} \lambda_i T_\eta(p_i)$.
We know $|T_\eta(p_i)(\alpha_j)-p_i(\alpha_j)|\leq 2\eta$ for all $i\in [n]$ and $j\in [n]$
and the height of each $T_\eta(p_i)$ is at most $n2^{2n-2}(\frac{n}{\eta})^{n-2}$.
So, $p_{\eta}(x)$ satisfies the properties of the lemma.
\end{proofof}
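The Carath\'eodory step above is constructive for the hypercube. A minimal sketch (plain Python, hypothetical name; real case only, the complex case uses vertices in $\{\pm1,\pm{\mathfrak{i}}\}$ analogously) via the standard sorted-threshold construction:

```python
# Write g in [-1,1]^m as a convex combination of at most m+1 vertices
# of {-1,+1}^m.  Map to [0,1]^m and slice by sorted thresholds: each slab
# (lo, hi] contributes weight hi - lo to the vertex {t_j >= hi}.
def hypercube_decomposition(g):
    t = [(gi + 1.0) / 2.0 for gi in g]
    thresholds = sorted(set(t + [0.0, 1.0]), reverse=True)
    combo = []                              # list of (weight, vertex) pairs
    for hi, lo in zip(thresholds, thresholds[1:]):
        vertex = [1 if tj >= hi else -1 for tj in t]
        combo.append((hi - lo, vertex))
    return combo
```

Correctness check for coordinate $j$: the slabs with $\mathrm{hi} \le t_j$ contribute $+t_j$ and the rest $-(1-t_j)$, summing to $2t_j-1=g_j$.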
\subsection{Moment-Transportation Inequality over Complex Numbers}
\label{subsec:2dmoment}
In this subsection, we extend Lemma~\ref{lm:inequality1d} to complex numbers.
Not only do we allow the mixture components to be complex numbers,
but the mixture weights $w_i$ can also be complex.
Recall in the complex domain,
the definition of transportation distance is extended according to \eqref{eq:complextrans}.
We first see that $n$ complex numbers in $\mathbb{C}$ can be clustered with the following guarantee.
\begin{observation}
For $\alpha_1, \alpha_2, \cdots, \alpha_n$ where $\alpha_i \in \mathbb{C}$ and any constant $\eta>0$, we can partition these $n$ points into clusters such that:
\begin{enumerate}
\item
$|\alpha_i - \alpha_j| < \eta$ if $\alpha_i$ and $\alpha_j$ are in the same cluster.
\item
$|\alpha_i - \alpha_j| > \eta / n$ if $\alpha_i$ and $\alpha_j$ are in different clusters.
\end{enumerate}
\end{observation}
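One way to realize the observation (a sketch, not necessarily the intended construction) is single-linkage clustering at radius $\eta/n$: within a cluster, chains of at most $n-1$ links of length $\leq \eta/n$ give diameter $<\eta$, while distinct clusters are separated by more than $\eta/n$.

```python
# Single-linkage clustering at radius eta/n over points in R or C,
# via a small union-find; returns clusters as lists of indices.
def cluster(points, eta):
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if abs(points[i] - points[j]) <= eta / n:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```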
Let $\mathrm{Supp} = \mathrm{Supp}(\boldsymbol{\vartheta}) \cup \mathrm{Supp}(\boldsymbol{\vartheta}')$ and let $n = |\mathrm{Supp}|$ $(n \leq 2k)$. Arrange the points in $\mathrm{Supp}$ as $\boldsymbol{\alpha} = (\alpha_1, \cdots, \alpha_n)$ such that each cluster lies in a contiguous segment. In other words, if $\alpha_i$ and $\alpha_j$ lie in the same cluster for indexes $i < j$, then $\alpha_{i'}$ and $\alpha_{j'}$ also lie in the same cluster for all $i \leq i' < j' \leq j$; if $\alpha_i$ and $\alpha_j$ lie in different clusters for indexes $i < j$, then $\alpha_{i'}$ and $\alpha_{j'}$ also lie in different clusters for all $i' \leq i < j \leq j'$.
\begin{definition}
Suppose $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \cdots, \alpha_n) \in \mathbb{B}_{\mathbb{C}}^n$.
Let ${\mathcal P}_{\boldsymbol{\alpha}}$ be the set of polynomials of degree at most $n-1$ that are 1-Lipschitz over $\boldsymbol{\alpha}$, i.e.,
$$
{\mathcal P}_{\boldsymbol{\alpha}}=\{P\mid \deg P\leq n-1,\, P(\alpha_1)=0; \, |P(\alpha_i)-P(\alpha_j)|\leq |\alpha_i-\alpha_j|\,\, \forall i,j\in [n]\}.
$$
\end{definition}
We have a similar observation.
\begin{observation}
\label{ob:ge-simple}
$
\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')\leq 2\sup\left\{\int f \,\mathrm{d}(\boldsymbol{\vartheta}-\boldsymbol{\vartheta}'): f\in 1\text{-}\mathsf{Lip}_{\mathbb{C}}\right\}=2\sup\left\{\int p \,\mathrm{d}(\boldsymbol{\vartheta}-\boldsymbol{\vartheta}'): p\in {\mathcal P}_{\boldsymbol{\alpha}}\right\}
$
\end{observation}
Similar to the real case, we only need to focus on the extreme points of ${\mathcal P}_{\boldsymbol{\alpha}}$.
\begin{definition}
Let $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \cdots, \alpha_n) \in \mathbb{B}_{\mathbb{C}}^n$.
Let
$$
{\mathcal F}_{\boldsymbol{\alpha}}=\left\{p\,\middle|\, \deg p\leq n-1,\, p(\alpha_1)=0; \, \frac{p(\alpha_{i+1})-p(\alpha_i)}{\alpha_{i+1}-\alpha_i}\in\{\pm 1, \pm {\mathfrak{i}}\}\,\, \forall i\in [n-1]\right\}.
$$
It is easy to see that $|{\mathcal F}_{\boldsymbol{\alpha}}|=4^{n-1}$.
\end{definition}
Again, we modify each polynomial $p \in {\mathcal F}_{\boldsymbol{\alpha}}$ slightly.
We define a degree-$(n-1)$ polynomial $T_{\eta}(p) : \mathbb{C} \rightarrow \mathbb{C}$ by assigning values to all $\alpha_i$ for $i \in [n]$ as follows:
\begin{enumerate}
\item If $\alpha_i$ is one of the first two points in its cluster, assign $T_{\eta}(p)(\alpha_i) = p(\alpha_i)$.
\item
Otherwise, let $\alpha_j, \alpha_{j+1}$ be the first two points of the cluster containing $\alpha_i$, and assign $T_{\eta}(p)(\alpha_i)$ to be the linear interpolation of $T_{\eta}(p)(\alpha_j)$ and $T_{\eta}(p)(\alpha_{j+1})$. Concretely, $$T_{\eta}(p)(\alpha_i) = \frac{p(\alpha_{j+1}) - p(\alpha_j)}{\alpha_{j+1} - \alpha_j}(\alpha_i - \alpha_j) + p(\alpha_j).$$
\end{enumerate}
Since we have fixed the values of the degree-$(n-1)$ polynomial $T_{\eta}(p)$ at $n$ points, we can see that $T_{\eta}(p)$ is uniquely determined.
The following lemma shows $T_{\eta}(p)$ is close to $p$ over the points in $\boldsymbol{\alpha}$.
\begin{lemma}
\label{lm:complex-eta-close} $|T_{\eta}(p)(\alpha_i) - p(\alpha_i)| \leq 2\eta$
for each $\alpha_i\in \boldsymbol{\alpha}$.
\end{lemma}
\begin{proof}
We only need to prove the case where $\alpha_i$ is not one of the first two points in its cluster. In that case,
\begin{align*}
|T_{\eta}(p)(\alpha_i) - p(\alpha_i)| &\leq \left|\frac{p(\alpha_{j+1}) - p(\alpha_j)}{\alpha_{j+1} - \alpha_j}(\alpha_i - \alpha_j) + p(\alpha_j) - p(\alpha_i) \right| \\
&\leq \left|\frac{p(\alpha_{j+1}) - p(\alpha_j)}{\alpha_{j+1} - \alpha_j}(\alpha_i - \alpha_j)\right| + \left|p(\alpha_j) - p(\alpha_i)\right| \\
&\leq 2|\alpha_i - \alpha_j| \leq 2\eta
\end{align*}
where the third inequality holds since $|p(\alpha_{j+1}) - p(\alpha_j)| = |\alpha_{j+1} - \alpha_j|$ and the last inequality holds because $\alpha_i$ and $\alpha_j$ are in the same cluster.
\end{proof}
\begin{lemma}
\label{lm:add-term}
Let $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \cdots, \alpha_n) \in \mathbb{B}_{\mathbb{C}}^n$. Suppose $T_\eta(p)=\sum_{i=0}^{n-1} c_i x^i$. We have $|c_t| \leq n2^{2n-2}(\frac{n}{\eta})^{n-2}$.
\end{lemma}
\begin{proof}
Again, we use Newton's interpolation formula.
Let $F[\alpha_1, \cdots, \alpha_i]$ be the $i$th order difference of $T_{\eta}(p)$.
By the definition of $T_{\eta}(p)$, we know that every 2nd order difference of $T_{\eta}(p)$ is in $\{\pm 1, \pm {\mathfrak{i}}\}$. Now, we show inductively that the absolute value of the $i$th order difference is at most $(2n/\eta)^{i-2}$ for any $i=3, \cdots, n$. We distinguish two cases:
\begin{enumerate}
\item
If $|\alpha_{t+i}-\alpha_t| < \eta/n$, then all $\alpha_t, \alpha_{t+1}, \cdots, \alpha_{t+i}$ lie in the same cluster (according to the assigned order). Therefore, we can see that $F[\alpha_t,\alpha_{t+1}] = F[\alpha_{t+1},\alpha_{t+2}] = \cdots = F[\alpha_{t+i-1}, \alpha_{t+i}]$, from which any 3rd order difference (hence the 4th, 5th, up to the $i$th) in this interval is zero.
\item Otherwise, $|\alpha_{t+i}-\alpha_t|>\eta/n$.
By the induction hypothesis, we have that
$$
|F[\alpha_t,\cdots,\alpha_{t+i}]|=\left|\frac{F[\alpha_{t+1},\cdots,\alpha_{t+i}] - F[\alpha_{t},\cdots,\alpha_{t+i-1}]}{\alpha_{t+i}-\alpha_{t}}\right|\leq 2 \cdot (2n/\eta)^{i-3} \cdot (n/\eta) \leq (2n/\eta)^{i-2}
$$
\end{enumerate}
Also, since $|\alpha_j|\leq 1$, the absolute value of the coefficient of $x^t$ in
$\prod_{j=1}^i(x-\alpha_j)$ is less than $2^n$.
So, finally, we have $|c_t|\leq n2^{2n-2}(\frac{n}{\eta})^{n-2}$.
\end{proof}
\begin{lemma}
\label{lm:ge-decomposition}
For any polynomial $p\in {\mathcal P}_{\boldsymbol{\alpha}}$ and $\eta>0$, there is a polynomial $p_\eta(x)$ of degree at most $n-1$ such that the following properties hold:
\begin{enumerate}
\item
$|p_{\eta}(\alpha_i)-p(\alpha_i)|\leq 2\eta$, for any $i\in [n]$.
\item
The height of $p_\eta$ is at most $n2^{2n-2}(\frac{n}{\eta})^{n-2}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For every polynomial $p(x)\in {\mathcal P}_{\boldsymbol{\alpha}}$,
let $g_i=\frac{p(\alpha_{i+1})-p(\alpha_i)}{\alpha_{i+1}-\alpha_i}$ for $i\in[n-1]$.
So $|g_i|\leq 1$, which means the point $\boldsymbol{g}=(g_1,\cdots,g_{n-1})$ is in the $(n-1)$-dimensional
complex hypercube (which has $4^{n-1}$ vertices).
Then, we apply exactly the same argument as in the proof of Lemma~\ref{lm:decomposition}.
\end{proof}
\begin{theorem}
\label{thm:complex-moment-inequality}
For any two mixtures with $k$ components $\boldsymbol{\vartheta},\boldsymbol{\vartheta}' \in \mathrm{Spike}(\mathbb{B}_{\mathbb{C}},\Sigma^{\mathbb{C}}_{k-1})$, it holds that
$$
\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')\leq O(k\Mdis_K(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')^{\frac{1}{2k-1}}).
$$
\end{theorem}
\begin{proof}
Fix any constant $\eta > 0$ and let $\xi = \Mdis_K(\boldsymbol{\vartheta}, \boldsymbol{\vartheta}')$. Then,
\begin{align*}
\tran(\boldsymbol{\vartheta}, \boldsymbol{\vartheta}') &\leq 2\sup_{f\in 1\text{-}\mathsf{Lip}_{\mathbb{C}}} \int f \mathrm{d} (\boldsymbol{\vartheta} - \boldsymbol{\vartheta}') = 2\sup_{p\in {\mathcal P}_{\boldsymbol{\alpha}}} \int p \mathrm{d}(\boldsymbol{\vartheta} - \boldsymbol{\vartheta}') \\
&\leq 2\left(4\eta + \sup_{p\in {\mathcal P}_{\boldsymbol{\alpha}}} \int p_{\eta} \mathrm{d}(\boldsymbol{\vartheta} - \boldsymbol{\vartheta}')\right) & (\textrm{Lemma}~\ref{lm:ge-decomposition}) \\
&\leq 2\left(4\eta + \sup_{p\in {\mathcal P}_{\boldsymbol{\alpha}}} \sum_{t=0}^{2k-1} n2^{2n-2}\left(\frac{n}{\eta}\right)^{n-2} \Mdis_K(\boldsymbol{\vartheta}, \boldsymbol{\vartheta}')\right) & (\textrm{Lemma}~\ref{lm:ge-decomposition}) \\
&\leq 2\left(4\eta+2^{4k}(2k)^{2k}\frac{\xi}{\eta^{2k-2}}\right). & (n\leq 2k)
\end{align*}
\end{align*}
By choosing $\eta=16k\xi^{\frac{1}{2k-1}}$,
we conclude that $\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')\leq O(k\xi^{\frac{1}{2k-1}})$.
\end{proof}
\subsection{Moment-Transportation Inequality in Higher Dimensions}
We generalize our proof for one dimension to higher dimensions.
Lemma \ref{lm:prob-proj} shows that there exists a vector $\boldsymbol{r}$ such that
the distances between spikes are still lower bounded (up to factors depending on $k$ and $d$) after projection. This enables us to extend Lemma~\ref{lm:inequality1d} to high dimensions.
\begin{theorem}
Let $\boldsymbol{\vartheta},\boldsymbol{\vartheta}'$ be two $k$-spike mixtures in $\mathrm{Spike}(\Delta_{d-1}, \Delta_{k-1})$, and
$K=2k-1$. Then, the following inequality holds
$$\tran(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')\leq O(k^3d\Mdis_K(\boldsymbol{\vartheta},\boldsymbol{\vartheta}')^{\frac{1}{2k-1}}).$$
\end{theorem}
\begin{proof}
Apply the argument in Lemma \ref{lm:prob-proj} to $\mathrm{Supp} = \mathrm{Supp}(\boldsymbol{\vartheta}) \cup \mathrm{Supp}(\boldsymbol{\vartheta}')$. There exist $\boldsymbol{r} \in \mathbb{S}_{d-1}$ and a constant $c$ such that $|\boldsymbol{r}^{\top}(\boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j)| \geq \frac{c}{k^2d} \|\boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j\|_1$ for all $\boldsymbol{\alpha}_i, \boldsymbol{\alpha}_j \in \mathrm{Supp}$. Since $|\boldsymbol{r}^{\top}\boldsymbol{\alpha}_i| \leq \|\boldsymbol{r}\|_{\infty} \|\boldsymbol{\alpha}_i\|_1 \leq 1$, both $\proj{\boldsymbol{r}}(\boldsymbol{\vartheta})$ and $\proj{\boldsymbol{r}}(\boldsymbol{\vartheta}')$ are in $\mathrm{Spike}(\Delta_1, \Delta_{k-1})$. In this case,
\begin{align*}
\tran(\boldsymbol{\vartheta}, \boldsymbol{\vartheta}') &= \inf \left\{\int \|\boldsymbol{x} - \boldsymbol{y}\|_1 \mathrm{d} \mu(\boldsymbol{x},\boldsymbol{y}) : \mu\in M(\boldsymbol{\vartheta},\boldsymbol{\vartheta}') \right\} & (\textrm{Definition}) \\
&\leq \inf \left\{ \int \frac{k^2d}{c}|\boldsymbol{r}^{\top}(\boldsymbol{x} - \boldsymbol{y})| \mathrm{d} \mu(\boldsymbol{x},\boldsymbol{y}) : \mu\in M(\boldsymbol{\vartheta},\boldsymbol{\vartheta}') \right\} & (\textrm{Lemma }\ref{lm:prob-proj}) \\
&= \frac{k^2d}{c} \tran(\proj{\boldsymbol{r}}(\boldsymbol{\vartheta}), \proj{\boldsymbol{r}}(\boldsymbol{\vartheta}')) \\
&\leq O(k^3d \Mdis_K(\boldsymbol{\vartheta}, \boldsymbol{\vartheta}')^{\frac{1}{2k-1}}). & (\textrm{Lemma }\ref{lm:inequality1d})
\end{align*}
\end{proof}
\section{Missing Details from Section~\ref{sec:1d-recover}}
\subsection{Implementation Details of Algorithm~\ref{alg:dim1}}
\label{subsec:1d-alg-detail}
In this subsection, we show how to implement Algorithm~\ref{alg:dim1} in $O(k^2)$ time.
We first perform a ridge regression to obtain $\widehat{\vecc}$ in Line 2.
The explicit solution of this ridge regression is
$\widehat{\vecc} = (A_{M'}^{\top} A_{M'} + \xi^{2} I)^{-1} A_{M'}^{\top} b_{M'}$.
Since $A_{M'}$ is a Hankel (hence symmetric) matrix, $\widehat{\vecc} = (A_{M'} - {\mathfrak{i}} \xi I)^{-1}(A_{M'} + {\mathfrak{i}} \xi I)^{-1}A_{M'} b_{M'}$ holds.
Note that $\boldsymbol{x} = (A + \lambda I)^{-1}b$ is a single step of the inverse power method, which can be computed in $O(k^2)$ time when $A$ is a Hankel matrix (see \cite{Xu2008AFS}). Hence, Line 2 can be implemented in $O(k^2)$ arithmetic operations.
In Line 3, we find the roots of the polynomial $\sum_{i=0}^{k-1} \widehat{c}_i x^i + x^k$.
We can use the algorithm of Neff et al. \cite{Neff1996AnEA}
to find the roots with $\xi$-additive noise in $O(k^{1+o(1)} \cdot \log \log (1/\xi))$ arithmetic operations.
Line 6 is a linear regression defined by the Vandermonde matrix $V_{\boldsymbol{\alpha}}$. We can use the recent algorithm developed in \cite{Meyer2022FastRF} to find a constant-factor multiplicative approximation, which uses $O(k^{1+o(1)})$ arithmetic operations. Note that a constant approximation suffices for our purpose.
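To make the pipeline concrete, the moment-to-parameter steps of Lines 2, 3 and 6 can be sketched in a few lines of \texttt{numpy} on exact moments. This is only a toy illustration with a hypothetical mixture: the ridge regression, the root finder of Neff et al., and the fast Vandermonde regression are replaced here by dense $O(k^3)$ primitives.

```python
import numpy as np

# Hypothetical 3-spike mixture on [0, 1] with exact (noiseless) moments.
alpha = np.array([0.1, 0.4, 0.8])
w = np.array([0.2, 0.5, 0.3])
k = 3
M = np.array([np.sum(w * alpha**i) for i in range(2 * k)])   # M_0, ..., M_{2k-1}

# Line 2 (noiseless stand-in for the ridge regression): solve A_M c = -b_M,
# where A_M is the k x k Hankel moment matrix.
A = np.array([[M[i + j] for j in range(k)] for i in range(k)])
b = np.array([M[i + k] for i in range(k)])
c = np.linalg.solve(A, -b)

# Line 3: roots of x^k + c_{k-1} x^{k-1} + ... + c_0 (highest coefficient first).
roots = np.sort(np.roots(np.concatenate(([1.0], c[::-1]))).real)

# Line 6: Vandermonde regression for the weights, V^T w = (M_0, ..., M_{k-1}).
V = np.vander(roots, k, increasing=True).T   # row i holds roots**i
w_hat = np.linalg.lstsq(V, M[:k], rcond=None)[0]
```

With noiseless moments the recovered `roots` and `w_hat` coincide with the ground-truth `alpha` and `w` up to floating-point error.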
In Line 9, we find the $k$-spike distribution in $\mathrm{Spike}(\Delta_1, \Delta_{k-1})$ closest to $\tilde{\boldsymbol{\vartheta}}$.
To achieve $O(k^2)$ running time, we restrict the spike distribution to have support $\widetilde{\veca}$. In this case, the optimization problem is equivalent to finding a transportation plan from $(\widetilde{\veca}, \widetilde{\vecw}^{-})$ to $(\widetilde{\veca}, \widetilde{\vecw}^{+})$, where $\widetilde{\vecw}^{-}$ and $\widetilde{\vecw}^{+}$ are the negative and positive components of $\widetilde{\vecw}$ respectively; concretely, $\widetilde{w}^{-}_i = \max\{0, -\widetilde{w}_i\}$ and $\widetilde{w}^{+}_i = \max\{0, \widetilde{w}_i\}$.
Note that the points are in one dimension.
Using the classical algorithm in \cite{Aggarwal1992EfficientMC}, we can solve this transportation problem in $O(k^{1+o(1)})$ arithmetic operations.
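One-dimensional structure is what makes these transportation computations cheap: for two spike distributions of equal total mass on a line, $\tran$ equals the area between their CDFs. A minimal \texttt{numpy} sketch with hypothetical inputs (exact for step CDFs, since the integrand is constant between consecutive support points):

```python
import numpy as np

def tran_1d(a1, w1, a2, w2):
    # W1 distance between two equal-mass spike distributions on the line,
    # via the CDF formula  tran = integral of |F_1(x) - F_2(x)| dx.
    xs = np.sort(np.concatenate([a1, a2]))
    F1 = lambda x: np.sum(w1[a1 <= x])
    F2 = lambda x: np.sum(w2[a2 <= x])
    # Sum the constant CDF gap over each interval of the merged support.
    return sum(abs(F1(x) - F2(x)) * (y - x) for x, y in zip(xs[:-1], xs[1:]))

a = np.array([0.1, 0.5, 0.9]); w = np.array([0.3, 0.4, 0.3])
b = np.array([0.2, 0.5, 0.9]); v = np.array([0.3, 0.4, 0.3])
# Only the mass 0.3 at 0.1 must move (to 0.2), so the cost is 0.3 * 0.1 = 0.03.
cost = tran_1d(a, w, b, v)
```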
\subsection{Missing Proofs}
\label{subsec:missingproofsec3}
\noindent
\textbf{Lemma~\ref{lm:dim1-char-poly}} (restated)
\emph{
Let $\boldsymbol{\vartheta} = (\boldsymbol{\alpha}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_1, \Delta_{k-1})$ where $\boldsymbol{\alpha} = [\alpha_1, \cdots, \alpha_k]^{\top}$ and $\boldsymbol{w} = [w_1, \cdots, w_k]^{\top}$.
Let $\boldsymbol{c} = [c_0, \cdots, c_{k-1}]^{\top} \in \mathbb{R}^k$ such that $\prod_{i=1}^{k} (x - \alpha_i) = \sum_{i=0}^{k-1} c_i x^i + x^k$.
For all $i \geq 0$, the following equation holds
$\sum_{j=0}^{k-1} M_{i+j}(\boldsymbol{\vartheta}) c_j + M_{i+k}(\boldsymbol{\vartheta}) = 0.$
In matrix form, the equation can be written as follows:
$$
A_{M(\boldsymbol{\vartheta})} \boldsymbol{c} + b_{M(\boldsymbol{\vartheta})} = \mathbf{0}.
$$
}
\begin{proof}
\begin{align*}
\sum_{j=0}^{k-1} M_{i+j}(\boldsymbol{\vartheta}) c_j + M_{i+k}(\boldsymbol{\vartheta})
&= \sum_{j=0}^{k-1} \sum_{t=1}^k w_t \alpha_t^{i+j} c_j + \sum_{t=1}^k w_t \alpha_t^{i+k}
= \sum_{t=1}^k w_t \alpha_t^{i} \left(\sum_{j=0}^{k-1} c_j \alpha_t^j + \alpha_t^k\right) \\
&= \sum_{t=1}^k w_t \alpha_t^{i} \prod_{j=1}^{k} (\alpha_t - \alpha_j) = 0.
\end{align*}
The $i$th row of $A_{M(\boldsymbol{\vartheta})} \boldsymbol{c} + b_{M(\boldsymbol{\vartheta})}$ is $\sum_{j=0}^{k-1} M_{i+j}(\boldsymbol{\vartheta}) c_j + M_{i+k}(\boldsymbol{\vartheta})$, hence
the matrix form.
\end{proof}
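The identity is easy to check numerically on a small hypothetical random instance (this check is not part of the algorithm):

```python
import numpy as np

# Hypothetical random k-spike instance on [0, 1].
rng = np.random.default_rng(0)
k = 4
alpha = np.sort(rng.uniform(0.0, 1.0, size=k))
w = rng.dirichlet(np.ones(k))

# Moments M_i = sum_t w_t alpha_t^i for i = 0, ..., 3k - 1.
M = np.array([np.sum(w * alpha**i) for i in range(3 * k)])

# Coefficients of prod_i (x - alpha_i): numpy returns [1, c_{k-1}, ..., c_0].
c = np.poly(alpha)[1:][::-1]          # reorder to [c_0, ..., c_{k-1}]

for i in range(2 * k):
    residual = sum(M[i + j] * c[j] for j in range(k)) + M[i + k]
    assert abs(residual) < 1e-9       # sum_j M_{i+j} c_j + M_{i+k} = 0
```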
\noindent
\textbf{Lemma~\ref{lm:injected-noise}} (restated)
\emph{
Let $b>0$ be some constant.
For $a_1, \cdots, a_k \mathrm{i}n \mathbb{R}$ and $a_1', \cdots, a_k' \mathrm{i}n \mathbb{R}$ such that $a_i \mathrm{i}n [0, b]$ and $|a_i' - a_i|\leq \xi$ holds for all $i$. If $\xi \leq n^{-1}$, then,
\begin{align*}
\prod_{i=1}^k a_i' \leq \prod_{i=1}^{k} a_i + O(b^k \cdot k\xi).
\end{align*}
}
\begin{proof}
\begin{align*}
\prod_{i=1}^k a_i' - \prod_{i=1}^{k} a_i &= \sum_{S\subseteq [k], S\neq \emptyset} \prod_{i \mathrm{i}n S} a_i \prod_{i\not\mathrm{i}n S} (a_i' - a_i)\ \\
&\leq \sum_{t=1}^{k} \sum_{|S|=t} \prod_{i \mathrm{i}n S} a_i \prod_{i\not\mathrm{i}n S} (a_i' - a_i) \\
&\leq b^k \sum_{t=1}^{k} \binom{k}{t} \xi^{k} \leq b^k \sum_{t=1}^{k} \frac{(k\xi)^{k}}{t!} \\
&\leq b^k (e^{k\xi} - 1) = O(b^k \cdot k\xi).
\end{align*}
The third inequality holds because $\binom{k}{t} \leq \frac{k^t}{t!}$. The last inequality holds since $e^{x} = \sum_{i>0} \frac{x^i}{i!}$ and the last equality is due to the fact that $e^{f(x)} = 1 + O(f(x))$ if $f(x) = O(1)$.
\end{proof}
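A quick numeric spot-check of the bound, with hypothetical values of $b$, $k$ and $\xi$, and the $O(\cdot)$ instantiated by the pre-asymptotic quantity $b^k(e^{k\xi}-1)$ appearing in the proof:

```python
import numpy as np

# Hypothetical instance: a_i in [0, b], |a_i' - a_i| <= xi, with xi <= 1/k.
rng = np.random.default_rng(1)
k, b, xi = 6, 2.0, 1e-3
a = rng.uniform(0.0, b, size=k)
a_prime = a + rng.uniform(-xi, xi, size=k)

lhs = np.prod(a_prime)
# Pre-asymptotic form of the bound from the proof: prod a_i + b^k (e^{k xi} - 1).
bound = np.prod(a) + b**k * (np.exp(k * xi) - 1.0)
assert lhs <= bound
```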
\noindent
\textbf{Lemma~\ref{lm:dim1-step3-matrix}} (restated)
\emph{
For all $j \geq k$ and distinct numbers $a_1, \cdots, a_k \in \mathbb{R}$ (or $a_1, \cdots, a_k \in \mathbb{C}$),
$$\left|\begin{matrix}
a_1^j & a_2^j & \cdots & a_k^j \\
a_1^1 & a_2^1 & \cdots & a_k^1 \\
\vdots & \vdots & \ddots & \vdots \\
a_1^{k-1} & a_2^{k-1} & \cdots & a_k^{k-1}
\end{matrix}\right| \cdot \left|\begin{matrix}
1 & 1 & \cdots & 1 \\
a_1^1 & a_2^1 & \cdots & a_k^1 \\
\vdots & \vdots & \ddots & \vdots \\
a_1^{k-1} & a_2^{k-1} & \cdots & a_k^{k-1}
\end{matrix}\right|^{-1} = \prod_{i=1}^{k} a_i \cdot \sum_{\boldsymbol{s}\geq \mathbf{0},\, s_1+\cdots+s_k=j-k} \prod_{i=1}^{k} a_i^{s_i} $$
}
\begin{proof}
We provide a self-contained proof here for completeness.
By the Vandermonde determinant formula, we have
\begin{align*}
\left|\begin{matrix}
1 & 1 & \cdots & 1 \\
a_1^1 & a_2^1 & \cdots & a_k^1 \\
\vdots & \vdots & \ddots & \vdots \\
a_1^{k-1} & a_2^{k-1} & \cdots & a_k^{k-1}
\end{matrix}\right| &= (-1)^{k(k-1)/2}\prod_{p<q} (a_p - a_q).
\end{align*}
Expanding along the first row, we have
\begin{align*}
\left|\begin{matrix}
a_1^j & a_2^j & \cdots & a_k^j \\
a_1^1 & a_2^1 & \cdots & a_k^1 \\
\vdots & \vdots & \ddots & \vdots \\
a_1^{k-1} & a_2^{k-1} & \cdots & a_k^{k-1}
\end{matrix}\right| &= \sum_{i=1}^{k} (-1)^{i+1} a_i^j
\left|\begin{matrix}
a_1^1 & \cdots & a_{i-1}^1 & a_{i+1}^1 & \cdots & a_k^1 \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
a_1^{k-1} & \cdots & a_{i-1}^{k-1} & a_{i+1}^{k-1} & \cdots & a_k^{k-1} \\
\end{matrix}\right| \\
&= \sum_{i=1}^{k} (-1)^{i+1} a_i^j (-1)^{(k-1)(k-2)/2} \prod_{t\neq i} a_t \prod_{p<q,p\neq i, q\neq i} (a_p - a_q) \\
&= (-1)^{k(k-1)/2} \cdot (-1)^{k+1}
\sum_{i=1}^{k}a_i^j (-1)^{i+1} \prod_{t\neq i} a_t \prod_{p<q,p\neq i, q\neq i} (a_p - a_q)
\end{align*}
Thus,
\begin{align*}
\text{LHS}(j) = \sum_{i=1}^{k} a_i^{j} \cdot \prod_{t\neq i} \frac{a_t }{a_i - a_t}
\end{align*}
Let $L(z) = \sum_{j\geq k} \text{LHS}(j) z^j $ and $R(z) = \sum_{j \geq k} \text{RHS}(j) z^j$ be the generating functions of LHS and RHS corresponding to $j$ respectively. Then,
\begin{align*}
L(z) &= \sum_{i=1}^{k} \left(\sum_{j\geq k} a_i^j z^j \right) \prod_{t\neq i} \frac{a_t }{a_i - a_t} = z^k \sum_{i=1}^{k} \frac{a_i^k } {1 - a_i z} \prod_{t\neq i} \frac{a_t }{a_i - a_t} \\
R(z) &= \prod_{i=1}^{k} \left( \sum_{j \geq 1} a_i^j z^j \right) = z^k \prod_{t=1}^{k} \frac{a_t}{1 - a_t z}
\end{align*}
Consider the quotient between two functions,
\begin{align*}
\frac{L(z)}{R(z)} = \sum_{i=1}^{k} a_i^{k-1}\prod_{t\neq i} \frac{1 - a_t z}{a_i - a_t}.
\end{align*}
For $z = a_p^{-1}$ with $p \in [k]$, every additive term with $i \neq p$ vanishes, since its product contains the factor $1 - a_p a_p^{-1} = 0$. As a result, we can see that
\begin{align*}
\frac{L(a_p^{-1})}{R(a_p^{-1})} = a_p^{k-1} \prod_{t\neq p} \frac{1 - a_t a_p^{-1}}{a_p - a_t} = 1.
\end{align*}
Since $L(z)/R(z)$ is a polynomial in $z$ of degree at most $k-1$ that equals $1$ at the $k$ distinct points $a_1^{-1}, \cdots, a_k^{-1}$, it is identically $1$. Hence
$$\frac{L(z)}{R(z)} = 1 \Rightarrow \text{LHS} = \text{RHS},$$
which directly proves the statement.
\end{proof}
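The identity can be spot-checked numerically; below is a small hypothetical instance with $k=3$ and $j=5$, writing the right-hand side as $\prod_i a_i$ times the complete homogeneous symmetric polynomial of degree $j-k$:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
k, j = 3, 5
a = rng.uniform(0.2, 1.0, size=k)      # distinct points (a.s. under the uniform draw)

# Left-hand side: ratio of the two determinants in the lemma
# (first matrix has top row a^j, second has top row all ones).
top = np.vstack([a**j] + [a**i for i in range(1, k)])
bot = np.vstack([a**i for i in range(k)])
lhs = np.linalg.det(top) / np.linalg.det(bot)

# Right-hand side: prod_i a_i times h_{j-k}(a), summed over exponent
# vectors s >= 0 with s_1 + ... + s_k = j - k.
h = sum(np.prod(a**np.array(s))
        for s in product(range(j - k + 1), repeat=k) if sum(s) == j - k)
assert abs(lhs - np.prod(a) * h) < 1e-9
```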
\noindent
\textbf{Lemma~\ref{lm:step5}} (restated)
\emph{
Let $\mathcal{C}$ be a convex set and let $\widetilde{\mix} = (\widetilde{\veca}, \widetilde{\vecw}) \in \mathrm{Spike}(\mathcal{C}, \Sigma_{k-1})$ be a $k$-spike distribution over support $\mathcal{C}$.
Then, we have
$$\min_{\widecheck{\mix} \in \mathrm{Spike}(\widetilde{\veca}, \Delta_{k-1})} \tran(\widetilde{\mix}, \widecheck{\mix}) \leq 2 \min_{\overline{\mix} \in \mathrm{Spike}(\mathcal{C}, \Delta_{k-1})} \tran(\widetilde{\mix}, \overline{\mix}),$$
where the minimization on the left is taken over all $k$-spike distributions with support $\widetilde{\veca}$.
}
\begin{proof}
Consider any $\overline{\mix} = (\overline{\veca}, \widebar{\vecw}) \in \mathrm{Spike}(\mathcal{C}, \Delta_{k-1})$ such that $\overline{\veca} = [\overline{\alpha}_1, \cdots, \overline{\alpha}_k]^{\top}$ and $\widebar{\vecw} = [\widebar{w}_1, \cdots, \widebar{w}_k]^{\top}$. Let $\boldsymbol{\vartheta}' = \arg \min_{\boldsymbol{\vartheta}' \in \mathrm{Spike}(\widetilde{\veca}, \Sigma_{k-1})} \tran(\boldsymbol{\vartheta}', \overline{\mix})$. We have $\tran(\boldsymbol{\vartheta}', \overline{\mix}) \leq \tran(\widetilde{\mix}, \overline{\mix})$. By the triangle inequality, $\tran(\widetilde{\mix}, \boldsymbol{\vartheta}') \leq \tran(\widetilde{\mix}, \overline{\mix}) + \tran(\overline{\mix}, \boldsymbol{\vartheta}') \leq 2\tran(\widetilde{\mix}, \overline{\mix})$. Now, we show $\boldsymbol{\vartheta}' \in \mathrm{Spike}(\widetilde{\veca}, \Delta_{k-1})$, i.e., the weights of the spikes in the distribution with support $\widetilde{\veca}$ closest to $\overline{\mix}$ are all non-negative.
Towards a contradiction, assume the optimal distribution $\boldsymbol{\vartheta}'$ contains at least one negative spike; that is, letting $(\boldsymbol{\vartheta}')^- = (\boldsymbol{\alpha}, \max\{-\boldsymbol{w}, \mathbf{0}\})$ denote the negative spikes of $\boldsymbol{\vartheta}'$, we have $(\boldsymbol{\vartheta}')^- \neq \mathbf{0}$. In this case, since $\overline{\mix}$ is a probability distribution, $\tran(\overline{\mix}, \boldsymbol{\vartheta}') = \tran(\overline{\mix} + (\boldsymbol{\vartheta}')^-, (\boldsymbol{\vartheta}')^+)$, where $(\boldsymbol{\vartheta}')^+ = (\boldsymbol{\alpha}, \max\{\boldsymbol{w}, \mathbf{0}\})$ is the positive spikes of $\boldsymbol{\vartheta}'$.
Let $\mu'$ be the optimal matching distribution corresponding to $\tran(\overline{\mix} + (\boldsymbol{\vartheta}')^-, (\boldsymbol{\vartheta}')^+)$. From its definition, $\mu'$ is non-negative.
Eliminate from $\mu'$ the terms related to $(\boldsymbol{\vartheta}')^-$, and
denote the result as $\mu''$. In other words, $\mu''$ only consists of the components corresponding to $\overline{\mix} \times (\boldsymbol{\vartheta}')^+$. Let $\boldsymbol{\vartheta}''$ be the marginal distribution of $\mu''$ other than $\overline{\mix}$. In this case, $\mu''$ is a valid matching distribution for $\tran(\overline{\mix}, \boldsymbol{\vartheta}'')$. Moreover, $\mu''$ incurs a smaller transportation cost than $\mu'$, since the cost of the eliminated terms is non-negative. Hence, $\tran(\overline{\mix}, \boldsymbol{\vartheta}'') < \tran(\overline{\mix}, \boldsymbol{\vartheta}')$, which is a contradiction.
As a result, $\boldsymbol{\vartheta}' \in \mathrm{Spike}(\widetilde{\veca}, \Delta_{k-1})$. Therefore, $\min_{\widecheck{\mix} \in \mathrm{Spike}(\boldsymbol{\alpha}, \Delta_{k-1})} \tran(\widetilde{\mix}, \widecheck{\mix}) \leq \tran(\widetilde{\mix}, \boldsymbol{\vartheta}')$. By taking the minimization over $\overline{\mix}$, we can conclude that
\begin{align*}
\min_{\widecheck{\mix} \in \mathrm{Spike}(\boldsymbol{\alpha}, \Delta_{k-1})} \tran(\widetilde{\mix}, \widecheck{\mix}) \leq 2 \min_{\overline{\mix} \in \mathrm{Spike}(\mathcal{C}, \Delta_{k-1})} \tran(\widetilde{\mix}, \overline{\mix}).
\end{align*}
\end{proof}
\section{An Algorithm for the 2-dimensional Problem}
\label{sec:2d-recover}
We can generalize the one-dimensional algorithm described in Section~\ref{sec:1d-recover} to two dimensions.
Let $\boldsymbol{\vartheta}:=(\boldsymbol{\alpha}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_2,\Delta_{k-1})$
be the underlying mixture
where $\boldsymbol{\alpha} = \{\boldsymbol{\alpha}_1, \cdots, \boldsymbol{\alpha}_k\}$ for $\boldsymbol{\alpha}_{i} = (\alpha_{i,1}, \alpha_{i,2})$ and $\boldsymbol{w} = [w_1, \cdots, w_k]^{\top} \mathrm{i}n \Delta_{k-1}$.
The true moments can be computed according to $M_{i,j}(\boldsymbol{\vartheta}) = \sum_{t=1}^k w_t \alpha_{t,1}^i \alpha_{t,2}^j$.
The input is the noisy moments $M'_{i,j}$ for $0\leq i, j$ with $i+j \leq 2k-1$, such that $|M'_{i,j} - M_{i,j}(\boldsymbol{\vartheta})| \leq \xi$. We further assume that $M'_{0,0}=M_{0,0}(\boldsymbol{\vartheta}) = 1$ and $\xi \leq 2^{-\Omega(k)}$.
The key idea is very simple:
a distribution supported in $\mathbb{R}^2$ can be mapped to a distribution supported in the complex plane $\mathbb{C}$. In particular, we define the complex set $\Delta_{\mathbb{C}} = \{a+b{\mathfrak{i}} \mid (a,b) \in \Delta_2\}$.
Moreover, we denote $\boldsymbol{\beta} = [\beta_{1}, \cdots, \beta_{k}]^{\top} := [\alpha_{1,1} + \alpha_{1,2} {\mathfrak{i}}, \cdots, \alpha_{k,1} + \alpha_{k,2} {\mathfrak{i}}]^{\top} \in \Delta_{\mathbb{C}}^k$, and define $\boldsymbol{\phi} := (\boldsymbol{\beta}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_{\mathbb{C}}, \Delta_{k-1})$ to be the complex mixture corresponding to $\boldsymbol{\vartheta}$.
The corresponding moments of $\boldsymbol{\phi}$ can thus be defined as $G_{i,j}(\boldsymbol{\phi}) = \sum_{t=1}^{k} w_t (\beta_t^{\dagger})^i \beta_t^j$.
For $G = [G_{i,j}]^{\top}_{0 \leq i \leq k; 0 \leq j \leq k-1}$, denote
\begin{align*}
A_G := \begin{bmatrix}
G_{0,0} & G_{0,1} & \cdots & G_{0,k-1} \\
G_{1,0} & G_{1,1} & \cdots & G_{1,k-1} \\
\vdots & \vdots & \mathrm{d}dots & \vdots \\
G_{k-1,0} & G_{k-1,1} & \cdots & G_{k-1,k-1}
\end{bmatrix},
b_G := \begin{bmatrix}
G_{0,k} \\
G_{1,k} \\
\vdots \\
G_{k-1,k}
\end{bmatrix},
M_G := \begin{bmatrix}
G_{0,0} \\
G_{0,1} \\
\vdots \\
G_{0,k-1}
\end{bmatrix}.
\end{align*}
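The three matrices can be assembled directly from the definition of $G_{i,j}$; the following \texttt{numpy} sketch uses a hypothetical complex mixture, and its final assertion previews the identity $A_G \boldsymbol{c} + b_G = \mathbf{0}$ established in Lemma~\ref{lm:dim2-char-poly} below.

```python
import numpy as np

# Hypothetical spikes in Delta_C (each (Re, Im) pair lies in Delta_2).
beta = np.array([0.2 + 0.1j, 0.5 + 0.3j, 0.1 + 0.6j])
w = np.array([0.3, 0.5, 0.2])
k = len(beta)

# G[i, j] = sum_t w_t conj(beta_t)^i beta_t^j, for 0 <= i, j <= k.
G = np.array([[np.sum(w * np.conj(beta)**i * beta**j) for j in range(k + 1)]
              for i in range(k + 1)])
A_G, b_G, M_G = G[:k, :k], G[:k, k], G[0, :k]

# Sanity check: with c the low-order coefficients of prod_i (x - beta_i),
# A_G c + b_G vanishes (characteristic-polynomial identity).
c = np.poly(beta)[1:][::-1]          # reorder to [c_0, ..., c_{k-1}]
assert np.allclose(A_G @ c + b_G, 0)
```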
\begin{algorithm}[!ht]
\caption{Reconstruction Algorithm in 2-Dimension}\label{alg:dim2}
\begin{algorithmic}[1]
\Function{TwoDimension}{$k, M', \xi$}
\State $G' \leftarrow [G'_{i,j} := \sum_{p=0}^{i}\sum_{q=0}^{j} \binom{i}{p}\binom{j}{q} (-{\mathfrak{i}})^{i-p} {\mathfrak{i}}^{j-q} M'_{p+q,i+j-p-q}]^{\top}_{0 \leq i \leq k; 0 \leq j \leq k-1}$
\State $\widehat{\vecc} \leftarrow \arg \min_{\boldsymbol{x} \in \mathbb{C}^k} \| A_{G'} \boldsymbol{x} + b_{G'} \|_2^2 + \xi^2 \|\boldsymbol{x}\|_2^2$ \Comment{$\widehat{\vecc} = [\widehat{c}_0, \cdots, \widehat{c}_{k-1}]^{\top} \in \mathbb{C}^k$}
\State $\widehat{\vecb} \leftarrow \textrm{roots}(\sum_{i=0}^{k-1} \widehat{c}_i x^i + x^k)$ \Comment{$\widehat{\vecb} = [\widehat{\beta}_1, \cdots, \widehat{\beta}_k]^{\top} \in \mathbb{C}^k$}
\State $\widebar{\vecb} \leftarrow \textrm{project}_{\Delta_{\mathbb{C}}}(\widehat{\vecb})$
\Comment{$\widebar{\vecb} = [\widebar{\beta}_1, \cdots, \widebar{\beta}_{k}]^{\top} \in \Delta_{\mathbb{C}}^k$}
\State $\widetilde{\vecb} \leftarrow \widebar{\vecb} + \textrm{Noise}(\xi)$
\Comment{$\widetilde{\vecb} = [\widetilde{\beta}_1, \cdots, \widetilde{\beta}_{k}]^{\top} \in \Delta_{\mathbb{C}}^k$}
\State $\widehat{\vecw} \leftarrow \arg \min_{\boldsymbol{x} \in \mathbb{C}^k} \|V_{\widetilde{\vecb}} \boldsymbol{x} - M_{G'}\|_2^2$
\Comment{$\widehat{\vecw} = [\widehat{w}_1, \cdots, \widehat{w}_{k}]^{\top} \in \mathbb{C}^k$}
\State $\widetilde{\vecw} \leftarrow \widehat{\vecw} / (\sum_{i=1}^{k} \widehat{w}_i)$
\Comment{$\widetilde{\vecw} = [\widetilde{w}_1, \cdots, \widetilde{w}_{k}]^{\top} \in \Sigma_{k-1}^{\mathbb{C}}$}
\State $\widetilde{\pmix} \leftarrow (\widetilde{\vecb}, \widetilde{\vecw})$ \Comment{$\widetilde{\pmix} \in \mathrm{Spike}(\Delta_{\mathbb{C}}, \Sigma_{k-1}^{\mathbb{C}})$}
\State $\widetilde{\mix} \leftarrow ([\textrm{real}(\widetilde{\vecb}), \textrm{imag}(\widetilde{\vecb})], \textrm{real}(\widetilde{\vecw}))$ \Comment{$\widetilde{\mix} \in \mathrm{Spike}(\Delta_2, \Sigma_{k-1})$}
\State $\widecheck{\vecw} \leftarrow \arg \min_{\boldsymbol{x} \in \Delta_{k-1}} \tran(\widetilde{\mix}, (\widetilde{\veca}, \boldsymbol{x}))$
\State report $\widecheck{\mix} \leftarrow (\widetilde{\veca}, \widecheck{\vecw})$ \Comment{$\widecheck{\mix} \in \mathrm{Spike}(\Delta_2, \Delta_{k-1})$}
\EndFunction
\end{algorithmic}
\end{algorithm}
The pseudocode can be found in Algorithm~\ref{alg:dim2}. The algorithm takes the number of spikes $k$ and the error bound $\xi$, and reconstructs the original spike distribution from the empirical moments $M'$.
Now, we describe the implementation details of our algorithm.
We first calculate the empirical complete moments $G'$ of $\boldsymbol{\phi}$ in Line 2. Since this computation follows the relationship between $G(\boldsymbol{\phi})$ and $M(\boldsymbol{\vartheta})$ (see Lemma \ref{lm:dim2-complex-corr}), $G'$ is an estimate of the ground truth $G(\boldsymbol{\phi})$. This step can be implemented in $O(k^3)$ arithmetic operations since it is a two-dimensional convolution.
Then, we perform a ridge regression to obtain $\widehat{\vecc}$ in Line 3. We note that $A_{G(\boldsymbol{\phi})}\boldsymbol{c} + b_{G(\boldsymbol{\phi})} = \mathbf{0}$ (see Lemma~\ref{lm:dim2-char-poly}). Hence, $\widehat{\vecc}$ can be seen as an approximation of $\boldsymbol{c}$. The explicit solution of this ridge regression is $\widehat{\vecc} = (A^{\top}_{G'} A_{G'} + \xi^2 I)^{-1} A^{\top}_{G'} b_{G'}$, which can be computed in $O(k^3)$ time.
From Line 4 to Line 6, we aim to estimate the positions of the spikes of the complex correspondence, i.e., the $\beta_i$s. Similar to the one-dimensional case, we find the roots of the polynomial $\sum_{i=0}^{k-1} \widehat{c}_i x^i + x^k$. Note that the roots we find may lie outside $\Delta_{\mathbb{C}}$, which is the support of the ground truth. Thus, we use the description of $\Delta_2$ to project the solutions back to $\Delta_{\mathbb{C}}$, and inject small noise to ensure that all values are distinct and $\widetilde{\vecb}$ is still in $\Delta_{\mathbb{C}}$. Any noise of size at most $\xi$ suffices here. We note that, by the definition of the complex correspondence, the realized spikes in the original space are $[\textrm{real}(\widetilde{\vecb}), \textrm{imag}(\widetilde{\vecb})]$. For implementation, this step can be done using a numerical root-finding algorithm that tolerates $\xi$-additive noise, in $O(k^{1+o(1)} \cdot \log \log (1/\xi))$ arithmetic operations.
After that, we aim to recover the weights of the spikes. Line 7 is a linear regression defined by the Vandermonde matrix $V_{\widetilde{\vecb}}$. Since $\widetilde{\vecb}$ may contain complex numbers, we carry out the calculation over the complex field. Again, we will apply the moment inequality to bound the error of the recovered parameters. Hence, we normalize $\widehat{\vecw}$ and get $\widetilde{\vecw}$ in Line 8. Note that, by our definition of the transportation distance, the real and imaginary components are considered separately. Hence, we can discard the imaginary parts of $\widetilde{\vecw}$ and reconstruct the real $k$-spike distribution in Line 10. Using linear regression, this step can be done in $O(k^3)$ time.
It remains to deal with the negative weights of $\widetilde{\vecw}$. We find a close $k$-spike distribution in $\mathrm{Spike}(\Delta_2, \Delta_{k-1})$ in Line 11. The optimization problem is equivalent to finding a transportation plan from $(\widetilde{\veca}, \widetilde{\vecw}^{-})$ to $(\widetilde{\veca}, \widetilde{\vecw}^{+})$, where $\widetilde{\vecw}^{-}$ and $\widetilde{\vecw}^{+}$ are the negative and positive components of $\widetilde{\vecw}$ respectively, i.e., $\widetilde{w}^{-}_i = \max\{0, -\widetilde{w}_i\}$ and $\widetilde{w}^{+}_i = \max\{0, \widetilde{w}_i\}$. This transportation problem can be solved using standard network flow techniques, which takes $O(k^3)$ time.
We note the noise satisfies $\xi \leq 2^{-\Omega(k)}$.
Hence, the whole algorithm requires $O(k^3)$ arithmetic operations.
\subsection{Error Analysis}
We now bound the reconstruction error of the algorithm. The following lemma presents the relationship between the moments of the original $k$-spike distribution and its complex correspondence.
\begin{lemma}\label{lm:dim2-complex-corr}
Let $\boldsymbol{\vartheta} = (\boldsymbol{\alpha}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_2, \Delta_{k-1})$ and let $\boldsymbol{\phi} = (\boldsymbol{\beta}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_{\mathbb{C}}, \Delta_{k-1})$ be its complex correspondence. Then, the complete moments satisfy
\begin{align*}
G_{i,j} (\boldsymbol{\phi}) &= \sum_{p=0}^{i}\sum_{q=0}^{j} \binom{i}{p}\binom{j}{q} (-{\mathfrak{i}})^{i-p} {\mathfrak{i}}^{j-q} M_{p+q,i+j-p-q}(\boldsymbol{\vartheta}).
\end{align*}
\end{lemma}
\begin{proof}
According to the definition,
\begin{align*}
G_{i,j} (\boldsymbol{\phi}) &= \sum_{t=1}^{k} w_t (\beta_t^{\dagger})^i \beta_t^j
= \sum_{t=1}^k w_t (\alpha_{t,1} - \alpha_{t,2} {\mathfrak{i}})^i (\alpha_{t,1} + \alpha_{t,2} {\mathfrak{i}})^j \\
&= \sum_{t=1}^k w_t \sum_{p=0}^i \binom{i}{p} \alpha_{t,1}^p \alpha_{t,2}^{i-p} (-{\mathfrak{i}})^{i-p} \sum_{q=0}^j \binom{j}{q} \alpha_{t,1}^q \alpha_{t,2}^{j-q} {\mathfrak{i}}^{j-q} \\
&= \sum_{p=0}^{i}\sum_{q=0}^{j} \binom{i}{p}\binom{j}{q} (-{\mathfrak{i}})^{i-p} {\mathfrak{i}}^{j-q} M_{p+q,i+j-p-q}(\boldsymbol{\vartheta}).
\end{align*}
\end{proof}
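The correspondence can be verified numerically on a small hypothetical mixture:

```python
import numpy as np
from math import comb

# Hypothetical mixture in Spike(Delta_2, Delta_{k-1}) and its complex correspondence.
alpha = np.array([[0.2, 0.1], [0.5, 0.3], [0.1, 0.6]])
w = np.array([0.3, 0.5, 0.2])
beta = alpha[:, 0] + 1j * alpha[:, 1]

M = lambda p, q: np.sum(w * alpha[:, 0]**p * alpha[:, 1]**q)   # M_{p,q}(theta)
G = lambda i, j: np.sum(w * np.conj(beta)**i * beta**j)        # G_{i,j}(phi)

for i in range(4):
    for j in range(4):
        rhs = sum(comb(i, p) * comb(j, q) * (-1j)**(i - p) * (1j)**(j - q)
                  * M(p + q, i + j - p - q)
                  for p in range(i + 1) for q in range(j + 1))
        assert abs(G(i, j) - rhs) < 1e-9
```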
The following lemma is a complex extension of Lemma \ref{lm:dim1-char-poly}.
\begin{lemma}\label{lm:dim2-char-poly}
Let $\boldsymbol{\phi} = (\boldsymbol{\beta}, \boldsymbol{w}) \in \mathrm{Spike}(\Delta_{\mathbb{C}}, \Delta_{k-1})$ where $\boldsymbol{\beta} = [\beta_1, \cdots, \beta_k]^{\top}$ and $\boldsymbol{w} = [w_1, \cdots, w_k]^{\top}$.
Let $\boldsymbol{c} = [c_0, \cdots, c_{k-1}]^{\top} \in \mathbb{C}^k$ such that $\prod_{i=1}^{k} (x - \beta_i) = \sum_{i=0}^{k-1} c_i x^i + x^k$.
For all $i \geq 0$, the following equations hold:
$\sum_{j=0}^{k-1} G_{i,j}(\boldsymbol{\phi}) c_j + G_{i,k}(\boldsymbol{\phi}) = \sum_{j=0}^{k-1} G_{j,i}(\boldsymbol{\phi}) c_j^{\dagger} + G_{k,i}(\boldsymbol{\phi}) = 0.$
In matrix form, the first equation can be written as follows:
$$
A_{G(\boldsymbol{\phi})} \boldsymbol{c} + b_{G(\boldsymbol{\phi})} = \mathbf{0}.
$$
\end{lemma}
\begin{proof}
\begin{align*}
\sum_{j=0}^{k-1} G_{i,j}(\boldsymbol{\phi}) c_j + G_{i,k}(\boldsymbol{\phi})
&= \sum_{j=0}^{k-1} \sum_{t=1}^k w_t (\beta_t^\dagger)^i\beta_t^{j} c_j + \sum_{t=1}^k w_t (\beta_t^\dagger)^i \beta_t^{k} \\
&= \sum_{t=1}^k w_t (\beta_t^\dagger)^i \left(\sum_{j=0}^{k-1} c_j \beta_t^j + \beta_t^k\right) \\
&= \sum_{t=1}^k w_t (\beta_t^\dagger)^i \prod_{j=1}^{k} (\beta_t - \beta_j) = 0,
\end{align*}
\begin{align*}
\sum_{j=0}^{k-1} G_{j,i}(\boldsymbol{\phi}) c_j^{\dagger} + G_{k,i}(\boldsymbol{\phi}) &= \sum_{j=0}^{k-1} \sum_{t=1}^k w_t (\beta_t^\dagger)^j \beta_t^{i} c_j^{\dagger} + \sum_{t=1}^k w_t (\beta_t^\dagger)^k \beta_t^{i} \\
&= \left(\sum_{j=0}^{k-1} \sum_{t=1}^k w_t (\beta_t^\dagger)^i \beta_t^{j} c_j + \sum_{t=1}^k w_t (\beta_t^\dagger)^i \beta_t^{k}\right)^{\dagger} = 0.
\end{align*}
\end{proof}
Similar to Lemma \ref{lm:dim1-step1}, we present some useful properties of $\widehat{\vecc}$.
\begin{lemma} \label{lm:dim2-step1}
Let $\widehat{\vecc} = [\widehat{c}_0, \cdots, \widehat{c}_{k-1}]^{\top} \in \mathbb{C}^k$ be the intermediate result (Line 3) in the algorithm. Then, $\|G' - G(\boldsymbol{\phi})\|_{\infty} \leq 2^{2k} \cdot \xi$, $\|\boldsymbol{c}\|_1 \leq 2^k$, $\|\widehat{\vecc}\|_1 \leq 2^{O(k)}$ and $\|A_{G(\boldsymbol{\phi})} \widehat{\vecc} + b_{G(\boldsymbol{\phi})}\|_\infty \leq 2^{O(k)} \cdot \xi$.
\end{lemma}
\begin{proof}
From Vieta's formulas, we have
$c_i = \sum_{S \in \binom{[k]}{k-i}} \prod_{j\in S} (-\beta_j).$
Thus,
$$\|\boldsymbol{c}\|_1 = \sum_{i=0}^{k-1} |c_i| \leq \sum_{S \subseteq [k]} \prod_{j\in S} |\beta_j| = \prod_{i=1}^{k} (1+|\beta_i|) \leq 2^k,$$
where the last inequality holds because $|\beta_i| = \sqrt{\alpha_{i,1}^2 + \alpha_{i,2}^2} \leq 1$ for all $i$.
According to Lemma \ref{lm:dim2-complex-corr},
\begin{align*}
|G'_{i,j} - G_{i,j}(\boldsymbol{\phi})| &\leq \sum_{p=0}^{i}\sum_{q=0}^{j} \left|\binom{i}{p}\binom{j}{q} (-{\mathfrak{i}})^{i-p} {\mathfrak{i}}^{j-q} \right| \|M' - M(\boldsymbol{\vartheta})\|_{\infty} \leq 2^{i+j} \cdot \xi.
\end{align*}
This shows that $\|G' - G(\boldsymbol{\phi})\|_{\infty} \leq 2^{2k} \cdot \xi$.
From Lemma \ref{lm:dim2-char-poly}, we can see that $\|A_{G(\boldsymbol{\phi})} \boldsymbol{c} + b_{G(\boldsymbol{\phi})}\|_{\infty} = 0$.
Therefore,
\begin{align*}
\|A_{G'} \boldsymbol{c} + b_{G'}\|_\infty &\leq \|A_{G(\boldsymbol{\phi})} \boldsymbol{c} + b_{G(\boldsymbol{\phi})}\|_{\infty} + \|A_{G'} \boldsymbol{c} - A_{G(\boldsymbol{\phi})} \boldsymbol{c}\|_{\infty} + \|b_{G'} - b_{G(\boldsymbol{\phi})}\|_{\infty} \\
&\leq \|A_{G(\boldsymbol{\phi})} \boldsymbol{c} + b_{G(\boldsymbol{\phi})}\|_{\infty} + \|(A_{G'} - A_{G(\boldsymbol{\phi})}) \boldsymbol{c}\|_{\infty} + \|b_{G'} - b_{G(\boldsymbol{\phi})}\|_{\infty} \\
&\leq \|A_{G(\boldsymbol{\phi})} \boldsymbol{c} + b_{G(\boldsymbol{\phi})}\|_{\infty} + \|A_{G'} - A_{G(\boldsymbol{\phi})}\|_{\infty} \|\boldsymbol{c}\|_1 + \|b_{G'} - b_{G(\boldsymbol{\phi})}\|_{\infty} \\
&\leq \|A_{G(\boldsymbol{\phi})} \boldsymbol{c} + b_{G(\boldsymbol{\phi})}\|_{\infty} + \|G' - G(\boldsymbol{\phi})\|_{\infty} (\|\boldsymbol{c}\|_1 + 1) \\
&\leq 2^{O(k)} \cdot \xi.
\end{align*}
The fourth inequality holds since $(A_{G'} - A_{G(\boldsymbol{\phi})})_{i,j}=(G'-G(\boldsymbol{\phi}))_{i,j}$ and $(b_{G'}-b_{G(\boldsymbol{\phi})})_i = (G'-G(\boldsymbol{\phi}))_{i,k}$.
From the definition of $\widehat{\vecc}$, we can see that
\begin{align*}
\|A_{G'} \widehat{\vecc} + b_{G'}\|_2^2 + \xi^2 \|\widehat{\vecc}\|_2^2
&\leq \|A_{G'} \boldsymbol{c} + b_{G'}\|_2^2 + \xi^2 \|\boldsymbol{c}\|_2^2 \\
&\leq k \|A_{G'} \boldsymbol{c} + b_{G'}\|_\infty^2 + \xi^2 \|\boldsymbol{c}\|_1^2 \\
&\leq 2^{O(k)} \cdot \xi^2.
\end{align*}
The second inequality holds since $\|\boldsymbol{x}\|_2 \leq \|\boldsymbol{x}\|_1$ and $\|\boldsymbol{x}\|_2 \leq \sqrt{k} \|\boldsymbol{x}\|_{\infty}$ hold for any vector $\boldsymbol{x} \in \mathbb{C}^k$.
Now, we can directly see that
\begin{align*}
\|A_{G'} \widehat{\vecc} + b_{G'}\|_\infty &\leq \|A_{G'} \widehat{\vecc} + b_{G'}\|_2 \leq 2^{O(k)} \cdot \xi, \\
\|\widehat{\vecc}\|_1 &\leq \sqrt{k} \|\widehat{\vecc}\|_2 \leq 2^{O(k)}.
\end{align*}
Finally, we can bound $\|A_{G(\boldsymbol{\phi})} \widehat{\vecc} + b_{G(\boldsymbol{\phi})}\|_\infty$
as follows:
\begin{align*}
\|A_{G(\boldsymbol{\phi})} \widehat{\vecc} + b_{G(\boldsymbol{\phi})}\|_\infty &\leq \|A_{G'} \widehat{\vecc} + b_{G'}\|_\infty + \|A_{G'} \widehat{\vecc} - A_{G(\boldsymbol{\phi})} \widehat{\vecc}\|_{\infty} + \|b_{G'} - b_{G(\boldsymbol{\phi})}\|_{\infty} \\
&\leq \|A_{G'} \widehat{\vecc} + b_{G'}\|_\infty + \|A_{G'} - A_{G(\boldsymbol{\phi})}\|_{\infty} \| \widehat{\vecc}\|_1 + \|b_{G'} - b_{G(\boldsymbol{\phi})}\|_{\infty} \\
&\leq \|A_{G'} \widehat{\vecc} + b_{G'}\|_\infty + \|G' - G(\boldsymbol{\phi})\|_{\infty} (\|\widehat{\vecc}\|_1 + 1) \\
&\leq 2^{O(k)} \cdot \xi.
\end{align*}
This finishes the proof of the lemma.
\end{proof}
We then show that the estimated locations $\widehat{\vecb}$ obtained in Line 4 are close to the ground truth $\boldsymbol{\beta}$.
\begin{lemma} \label{lm:dim2-step2-1}
Let $\widehat{\vecb} = [\widehat{\beta}_1, \cdots, \widehat{\beta}_{k}]^{\top}\in \mathbb{C}^k$ be the intermediate result (Line 4) in Algorithm~\ref{alg:dim2}. Then, the following inequality holds:
\begin{align*}
\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\beta_i - \widehat{\beta}_j|^2 \leq 2^{O(k)} \cdot \xi.
\end{align*}
\end{lemma}
\begin{proof}
Consider
\begin{align*}
&\quad\ \sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\beta_i - \widehat{\beta}_j|^2
= \sum_{i=1}^{k} w_i \left|\prod_{j=1}^{k} (\beta_i - \widehat{\beta}_j)\right|^2
= \sum_{i=1}^{k} w_i \left|\sum_{j=0}^{k-1} \widehat{c}_j \beta_i^j + \beta_i^k\right|^2 \\
&= \sum_{i=1}^{k} w_i \left(\sum_{j=0}^{k-1} \widehat{c}_j \beta_i^j + \beta_i^k\right)^{\dagger}\left(\sum_{j=0}^{k-1} \widehat{c}_j \beta_i^j + \beta_i^k\right) \\
&= \sum_{i=1}^{k} w_i \left(\sum_{p=0}^{k-1}\sum_{q=0}^{k-1}\widehat{c}_p^{\dagger} \widehat{c}_q (\beta_i^{\dagger})^p \beta_i^q + \sum_{p=0}^{k-1} \widehat{c}_p^{\dagger} (\beta_i^{\dagger})^p \beta_i^{k} + \sum_{p=0}^{k-1} \widehat{c}_p \beta_i^{p} (\beta_i^{\dagger})^k + (\beta_i^{\dagger})^k \beta_i^{k} \right) \\
&= \sum_{p=0}^{k-1}\sum_{q=0}^{k-1}\widehat{c}_p^{\dagger} \widehat{c}_q \sum_{i=1}^{k}w_i (\beta_i^{\dagger})^p \beta_i^q + \sum_{p=0}^{k-1}\widehat{c}_p^{\dagger} \sum_{i=1}^{k} w_i (\beta_i^{\dagger})^p \beta_i^{k} + \sum_{p=0}^{k-1}\widehat{c}_p \sum_{i=1}^{k} w_i \beta_i^{p} (\beta_i^{\dagger})^k + \sum_{i=1}^k w_i (\beta_i^{\dagger})^k \beta_i^{k} \\
&= \sum_{p=0}^{k-1} \widehat{c}_p^{\dagger} \left(\sum_{q=0}^{k-1} G_{p,q}(\boldsymbol{\phi}) \widehat{c}_q + G_{p,k}(\boldsymbol{\phi})\right) + \sum_{p=0}^{k-1} \widehat{c}_p G_{k,p}(\boldsymbol{\phi}) + G_{k,k}(\boldsymbol{\phi}) \\
&= \sum_{p=0}^{k-1} \widehat{c}_p^{\dagger} (A_{G(\boldsymbol{\phi})} \widehat{\vecc} + b_{G(\boldsymbol{\phi})})_p + \sum_{p=0}^{k-1} \widehat{c}_p G_{k,p}(\boldsymbol{\phi}) + G_{k,k}(\boldsymbol{\phi}).
\end{align*}
According to Lemma \ref{lm:dim2-char-poly}, $G_{k,p}(\boldsymbol{\phi}) = - \sum_{q=0}^{k-1} G_{q,p}(\boldsymbol{\phi}) c_q^{\dagger}$, so
\begin{align*}
\sum_{p=0}^{k-1} \widehat{c}_p G_{k,p}(\boldsymbol{\phi}) + G_{k,k}(\boldsymbol{\phi}) &= \sum_{p=0}^{k-1} \widehat{c}_p \left(- \sum_{q=0}^{k-1} G_{q,p}(\boldsymbol{\phi}) c_q^{\dagger}\right) + \left(- \sum_{q=0}^{k-1} G_{q,k}(\boldsymbol{\phi}) c_q^{\dagger} \right) \\
&= -\sum_{q=0}^{k-1} c_q^{\dagger} \left(\sum_{p=0}^{k-1} G_{q,p}(\boldsymbol{\phi}) \widehat{c}_p + G_{q,k}(\boldsymbol{\phi})\right) \\
&= -\sum_{q=0}^{k-1} c_q^{\dagger} (A_{G(\boldsymbol{\phi})} \widehat{\vecc} + b_{G(\boldsymbol{\phi})})_q.
\end{align*}
Therefore,
\begin{align*}
\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\beta_i - \widehat{\beta}_j|^2 &= \sum_{i=0}^{k-1} (\widehat{c}_i^{\dagger} - c_i^{\dagger}) (A_{G(\boldsymbol{\phi})} \widehat{\vecc} + b_{G(\boldsymbol{\phi})})_i \\
&\leq (\|\widehat{\vecc}\|_1 + \|\boldsymbol{c}\|_1) \|A_{G(\boldsymbol{\phi})} \widehat{\vecc} + b_{G(\boldsymbol{\phi})}\|_{\infty} \\
&\leq (2^k + 2^{O(k)}) \cdot 2^{O(k)}\cdot \xi & (\text{Lemma~\ref{lm:dim2-step1}})\\
&\leq 2^{O(k)} \cdot \xi.
\end{align*}
\end{proof}
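The first chain of equalities in the proof rests on the identity $\prod_{j}(\beta_i - \widehat{\beta}_j) = \beta_i^k + \sum_{j=0}^{k-1} \widehat{c}_j \beta_i^j$, i.e., evaluating the monic polynomial with roots $\widehat{\beta}_j$ at $\beta_i$. The snippet below is an illustrative numerical check of ours with random data, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4
beta = rng.standard_normal(k) + 1j * rng.standard_normal(k)      # true spikes
beta_hat = rng.standard_normal(k) + 1j * rng.standard_normal(k)  # recovered spikes
w = rng.random(k)
w /= w.sum()                                                     # mixing weights

# Coefficients of the monic polynomial prod_j (z - beta_hat_j), in increasing
# degree; the leading coefficient 1 sits at index k, so we keep the first k.
c_hat = np.polynomial.polynomial.polyfromroots(beta_hat)[:k]

lhs = sum(w[i] * np.prod(np.abs(beta[i] - beta_hat)) ** 2 for i in range(k))
rhs = sum(w[i] * abs(beta[i] ** k + sum(c_hat[j] * beta[i] ** j for j in range(k))) ** 2
          for i in range(k))
```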
Similar to Lemma \ref{lm:dim1-step2}, the following lemma shows that the error can still be bounded after projection and noise injection.
\begin{lemma} \label{lm:dim2-step2}
Let $\widetilde{\vecb} = [\widetilde{\beta}_1, \cdots, \widetilde{\beta}_{k}]^{\top}\in \mathbb{C}^k$ be the intermediate result (Line 6) in Algorithm~\ref{alg:dim2}. Then,
\begin{align*}
\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\beta_i - \widetilde{\beta}_j|^2 \leq 2^{O(k)} \cdot \xi.
\end{align*}
\end{lemma}
\begin{proof}
Let $\widebar{\vecb} = [\widebar{\beta}_1, \cdots, \widebar{\beta}_{k}]^{\top} \in \mathbb{C}^k$ be the set of projections (Line 5). Since $\Delta_2$ is convex, $\Delta_{\mathbb{C}}$ is also convex. Since $\beta_i \in \Delta_{\mathbb{C}}$ and $\widebar{\beta}_j$ is the projection of $\widehat{\beta}_j$ onto $\Delta_{\mathbb{C}}$, we have $|\beta_i - \widebar{\beta}_j| \leq |\beta_i - \widehat{\beta}_j|$. Thus,
\begin{align*}
\prod_{j=1}^{k} |\beta_i - \widebar{\beta}_j|^2 \leq \prod_{j=1}^{k} |\beta_i - \widehat{\beta}_j|^2.
\end{align*}
Recall that $\widetilde{\beta}_j$ is obtained from $\widebar{\beta}_j$ by adding noise of size at most $\xi$, so $|\beta_i - \widetilde{\beta}_j| \leq |\beta_i - \widebar{\beta}_j| + \xi$.
Apply Lemma~\ref{lm:injected-noise}, regarding $|\beta_i - \widetilde{\beta}_j|$ as $a_j$ and $|\beta_i - \widebar{\beta}_j|$ as $a_j'$. Since $|\beta_i - \widetilde{\beta}_j|\leq |\beta_i| + |\widetilde{\beta}_j| \leq 2$, we can conclude that
\begin{align*}
\prod_{j=1}^{k} |\beta_i - \widetilde{\beta}_j|^2 \leq \prod_{j=1}^{k} |\beta_i - \widebar{\beta}_j|^2 + O(2^k \cdot k\xi).
\end{align*}
Combining two inequalities,
\begin{align*}
\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\beta_i - \widetilde{\beta}_j|^2 &\leq \sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\beta_i - \widehat{\beta}_j|^2 + O(2^k \cdot k\xi) \leq 2^{O(k)} \cdot \xi.
\end{align*}
\end{proof}
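The projection step above uses only firm nonexpansiveness: projecting onto a convex set cannot move a point farther from any member of that set. A minimal illustrative sketch, using the closed unit disk as a stand-in for $\Delta_{\mathbb{C}}$ (this choice of set is ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)

def proj_disk(z: complex) -> complex:
    """Project z onto the closed unit disk, a convex subset of C."""
    return z if abs(z) <= 1.0 else z / abs(z)

beta = 0.3 + 0.4j                                   # a point inside the convex set
beta_hat = 2.0 * (rng.standard_normal(50) + 1j * rng.standard_normal(50))
beta_bar = np.array([proj_disk(z) for z in beta_hat])

# Projection is a contraction toward points of the set.
contracts = np.all(np.abs(beta - beta_bar) <= np.abs(beta - beta_hat) + 1e-12)
```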
Similar to Lemma \ref{lm:dim1-step4-pre}, we can bound the error of approximating the ground truth over the recovered spikes.
\begin{lemma}\label{lm:dim2-step4-pre}
Let $\widehat{\vecw} = [\widehat{w}_1, \cdots, \widehat{w}_{k}]^{\top} \in \mathbb{C}^k$ be the intermediate result (Line 7) in Algorithm~\ref{alg:dim2}. Then,
$$\|V_{\widetilde{\vecb}}\widehat{\vecw} - M(\boldsymbol{\phi})\|_{\infty} \leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.$$
\end{lemma}
\begin{proof}
Firstly,
\begin{align*}
\|V_{\widetilde{\vecb}}\widehat{\vecw} - M(\boldsymbol{\phi})\|_2 &\leq \|V_{\widetilde{\vecb}}\widehat{\vecw} - M_{G'}\|_2 + \|M(\boldsymbol{\phi}) - M_{G'}\|_2 \\
&\leq \min_{\boldsymbol{x} \in \mathbb{C}^k} \|V_{\widetilde{\vecb}} \boldsymbol{x} - M_{G'}\|_2 + \|M(\boldsymbol{\phi}) - M_{G'}\|_2 \\
&\leq \min_{\boldsymbol{x} \in \mathbb{C}^k} \|V_{\widetilde{\vecb}} \boldsymbol{x} - M(\boldsymbol{\phi})\|_2 + 2\|M(\boldsymbol{\phi}) - M_{G'}\|_2
\end{align*}
where the first and third inequalities hold by the triangle inequality, and the second inequality holds since $\widehat{\vecw}$ is a least-squares solution, i.e., $\|V_{\widetilde{\vecb}} \widehat{\vecw} - M_{G'}\|_2 \leq \min_{\boldsymbol{x} \in \mathbb{C}^k} \|V_{\widetilde{\vecb}} \boldsymbol{x} - M_{G'}\|_2$.
For the first term, we can see that
\begin{align*}
\min_{\boldsymbol{x} \in \mathbb{C}^k} \|V_{\widetilde{\vecb}} \boldsymbol{x} - M(\boldsymbol{\phi})\|_2 &= \min_{\boldsymbol{x}_1, \cdots, \boldsymbol{x}_k \in \mathbb{C}^k} \left\|\sum_{i=1}^{k}w_i V_{\widetilde{\vecb}}\boldsymbol{x}_i - \sum_{i=1}^{k}w_i M(\beta_i)\right\|_2 \\
&\leq \sum_{i=1}^{k} w_i \min_{\boldsymbol{x} \in \mathbb{C}^k} \|V_{\widetilde{\vecb}}\boldsymbol{x} - M(\beta_i)\|_2 \\
&\leq 2^{O(k)} \sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\beta_i - \widetilde{\beta}_j| & (\text{Lemma~\ref{lm:step3}}) \\
&\leq 2^{O(k)} \sqrt{\sum_{i=1}^{k} w_i \prod_{j=1}^{k} |\beta_i - \widetilde{\beta}_j|^2} & (\text{AM-QM Inequality}, \boldsymbol{w} \in \Delta_{k-1}) \\
&\leq 2^{O(k)} \cdot \sqrt{\xi}. & (\text{Lemma~\ref{lm:dim2-step2}})
\end{align*}
For the second term,
\begin{align*}
\|M(\boldsymbol{\phi}) - M_{G'}\|_2 &\leq \sqrt{k} \|M(\boldsymbol{\phi}) - M_{G'}\|_{\infty} \\
&\leq \sqrt{k} \|G' - G(\boldsymbol{\phi})\|_{\infty} \\
&\leq 2^{O(k)} \cdot \xi. & (\text{Lemma~\ref{lm:dim2-step1}})
\end{align*}
As a result, $$\|V_{\widetilde{\vecb}}\widehat{\vecw} - M(\boldsymbol{\phi})\|_{\infty} \leq \|V_{\widetilde{\vecb}}\widehat{\vecw} - M(\boldsymbol{\phi})\|_2 \leq 2^{O(k)} \cdot \sqrt{\xi} + 2^{O(k)} \cdot \xi \leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.$$
\end{proof}
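The least-squares optimality invoked in the proof can be checked numerically. The snippet below is an illustrative sketch of ours; the Vandermonde-type matrix and the moment vector are random stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)
k, K = 4, 8
beta_tilde = rng.standard_normal(k) + 1j * rng.standard_normal(k)   # recovered spikes
# Vandermonde-type matrix: V[t, j] = beta_tilde[j] ** t for t = 0, ..., K-1.
V = np.vander(beta_tilde, N=K, increasing=True).T
M = rng.standard_normal(K) + 1j * rng.standard_normal(K)            # empirical moments

# Complex least-squares solve, as in Line 7 of the algorithm.
w_hat = np.linalg.lstsq(V, M, rcond=None)[0]
residual = np.linalg.norm(V @ w_hat - M)

# A competing (random) coefficient vector can only do worse.
x = rng.standard_normal(k) + 1j * rng.standard_normal(k)
```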
Similar to Lemma \ref{lm:dim1-step4}, we can prove that $\widetilde{\pmix}$ is also a good estimation of $\boldsymbol{\phi}$.
\begin{lemma}\label{lm:dim2-step4}
Let $\widetilde{\pmix} = (\widetilde{\vecb}, \widetilde{\vecw})$ be the intermediate result (Line 10) in Algorithm~\ref{alg:dim2}. Then,
$$\Mdis_K(\widetilde{\pmix}, \boldsymbol{\phi}) \leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.$$
\end{lemma}
\begin{proof}
According to the definitions of $V_{\widetilde{\vecb}}$ and $M(\boldsymbol{\phi})$, we have $(V_{\widetilde{\vecb}}\widehat{\vecw} - M(\boldsymbol{\phi}))_1 = \sum_{i=1}^{k} \widehat{w}_i - 1$.
Therefore, $|\sum_{i=1}^{k} \widehat{w}_i - 1| \leq \|V_{\widetilde{\vecb}}\widehat{\vecw} - M(\boldsymbol{\phi})\|_{\infty} \leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.$
Note that $\|\widetilde{\vecw} - \widehat{\vecw}\|_{1} = |(\sum_{i=1}^{k} \widehat{w}_i)^{-1} - 1| \cdot \|\widehat{\vecw}\|_1$. Provided $2^{O(k)} \cdot \xi^{\frac{1}{2}} \leq 1/2$, we can conclude that $\|\widetilde{\vecw} - \widehat{\vecw}\|_{1} \leq 2 |\sum_{i=1}^{k} \widehat{w}_i - 1| \leq 2^{O(k)} \cdot \xi^{\frac{1}{2}}.$
Thus,
\begin{align*}
\Mdis_K(\widetilde{\pmix}, \boldsymbol{\phi})&=\|M(\widetilde{\pmix}) - M(\boldsymbol{\phi})\|_\infty = \|V_{\widetilde{\vecb}} \widetilde{\vecw} - M(\boldsymbol{\phi})\|_{\infty} \\
&\leq \|V_{\widetilde{\vecb}} \widehat{\vecw} - M(\boldsymbol{\phi})\|_{\infty} + \|V_{\widetilde{\vecb}}\|_\infty \|\widetilde{\vecw} - \widehat{\vecw}\|_1 \\
&\leq 2^{O(k)} \cdot \xi^{\frac{1}{2}},
\end{align*}
where the first inequality holds by the triangle inequality.
\end{proof}
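The renormalization step behind the bound on $\|\widetilde{\vecw} - \widehat{\vecw}\|_1$ is the rescaling $\widetilde{\vecw} = \widehat{\vecw}/\sum_i \widehat{w}_i$. The following small computation is an illustration of ours, assuming nonnegative weights whose sum is close to one.

```python
import numpy as np

w_hat = np.array([0.30, 0.25, 0.28, 0.21])   # nonnegative weights, sum s = 1.04
s = w_hat.sum()
w_tilde = w_hat / s                           # renormalized weights, summing to 1

# The ell_1 perturbation caused by renormalization.
l1_change = np.abs(w_tilde - w_hat).sum()
```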
Now everything is in place to show the final bound.
\begin{lemma}\label{lm:dim2-reconstruction}
Let $\widecheck{\mix} = (\widetilde{\veca}, \widecheck{\vecw})$ be the final result
(Line 11) in Algorithm~\ref{alg:dim2}.
Then, we have
$$\tran(\widecheck{\mix}, \boldsymbol{\vartheta}) \leq O(k\xi^{\frac{1}{4k-2}}).$$
\end{lemma}
\begin{proof}
Applying Theorem \ref{thm:complex-moment-inequality} to the bound of Lemma \ref{lm:dim2-step4}, we have
$$\tran(\widetilde{\pmix}, \boldsymbol{\phi}) \leq O(k \cdot (2^{O(k)} \cdot \xi^{\frac{1}{2}})^{\frac{1}{2k-1}}) \leq O(k \xi^{\frac{1}{4k-2}}).$$
Denote by $\widetilde{\pmix}' = (\widetilde{\vecb}, \textrm{real}(\widetilde{\vecw}))$ the result of discarding all imaginary components from $\widetilde{\pmix}$. By the definition of the transportation distance for complex weights, the imaginary components are independent of the real components. Moreover, $\boldsymbol{\phi}$ has no imaginary components. Thus, discarding all imaginary components from $\widetilde{\pmix}$ cannot increase the transportation distance to $\boldsymbol{\phi}$. That is,
\begin{align*}
\tran(\widetilde{\pmix}', \boldsymbol{\phi}) \leq \tran(\widetilde{\pmix}, \boldsymbol{\phi}).
\end{align*}
Moreover, for any $\beta_i, \beta_j \in \mathbb{C}$, we have
\begin{align*}
\quad \|[\textrm{real}(\beta_i), \textrm{imag}(\beta_i)] - [\textrm{real}(\beta_j), \textrm{imag}(\beta_j)]\|_1
&\leq 2 \|[\textrm{real}(\beta_i), \textrm{imag}(\beta_i)] - [\textrm{real}(\beta_j), \textrm{imag}(\beta_j)]\|_2 \\
&= 2|\beta_i - \beta_j|.
\end{align*}
This shows that the $\ell_1$ distance over $\mathbb{R}^2$ is at most twice the corresponding modulus distance over $\mathbb{C}$. As a result, writing $\widetilde{\mix}$ for $\widetilde{\pmix}'$ viewed as a mixture over $\mathbb{R}^2$,
\begin{align*}
\tran(\widetilde{\mix}, \boldsymbol{\vartheta}) \leq 2\tran(\widetilde{\pmix}', \boldsymbol{\phi}) \leq O(k \xi^{\frac{1}{4k-2}}).
\end{align*}
Moreover, from Lemma~\ref{lm:step5}, we have
\begin{align*}
\tran(\widetilde{\mix}, \widecheck{\mix}) \leq 2 \min_{\overline{\mix} \in \mathrm{Spike}(\Delta_{2}, \Delta_{k-1})} \tran(\widetilde{\mix}, \overline{\mix}) \leq 2 \tran(\widetilde{\mix}, \boldsymbol{\vartheta}) = O(k \xi^{\frac{1}{4k-2}}).
\end{align*}
Finally, by the triangle inequality, $\tran(\widecheck{\mix}, \boldsymbol{\vartheta}) \leq \tran(\widecheck{\mix}, \widetilde{\mix}) + \tran(\widetilde{\mix}, \boldsymbol{\vartheta}) \leq O(k \xi^{\frac{1}{4k-2}}).$
\end{proof}
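The comparison between the planar $\ell_1$ metric and the complex modulus used in the proof is elementary; the following quick numerical check is ours, purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
z1 = rng.standard_normal(100) + 1j * rng.standard_normal(100)
z2 = rng.standard_normal(100) + 1j * rng.standard_normal(100)

d = z1 - z2
l1_plane = np.abs(d.real) + np.abs(d.imag)   # ell_1 distance between [Re, Im] pairs
modulus = np.abs(d)                           # modulus distance in C
```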
\end{document} |
\begin{document}
\title{Regularity for a special case of two-phase Hele-Shaw flow via parabolic integro-differential equations}
\author{Farhan Abedin}
\author{Russell W. Schwab}
\address{Department of Mathematics\\
Michigan State University\\
619 Red Cedar Road \\
East Lansing, MI 48824}
\varepsilonmail{[email protected], [email protected]}
\begin{abstract}
We establish that the $C^{1,\gamma}$ regularity theory for translation invariant fractional order parabolic integro-differential equations (via Krylov-Safonov estimates) gives an improvement of regularity mechanism for solutions to a special case of a two-phase free boundary flow related to Hele-Shaw. The special case is due to both a graph assumption on the free boundary of the flow and an assumption that the free boundary is $C^{1,\textnormal{Dini}}$ in space. The free boundary then must immediately become $C^{1,\gamma}$ for a universal $\gamma$ depending upon the Dini modulus of the gradient of the graph. These results also apply to one-phase problems of the same type.
\end{abstract}
\date{\today,\ arXiv ver 2}
\thanks{R. Schwab acknowledges partial support from the NSF with DMS-1665285. F. Abedin acknowledges support from the AMS and the Simons Foundation with an AMS--Simons Travel Grant. }
\keywords{Global Comparison Property, Integro-differential Operators, Dirichlet-to-Neumann, Free Boundaries, Hele-Shaw, Fully Nonlinear Equations, Viscosity Solutions, Krylov-Safonov}
\subjclass[2010]{
35B51,
35R09,
35R35,
45K05,
47G20,
49L25,
60J75,
76D27,
76S05
}
\maketitle
\markboth{Hele-Shaw Parabolic Regularization}{Hele-Shaw Parabolic Regularization}
\section{Introduction}\label{sec:introduction}
\setcounter{equation}{0}
This paper has two goals. The first is to give a precise characterization of the integro-differential operators that can be used to represent the solution of some free boundary flows with both one and two phases, of what we call Hele-Shaw type. We give a characterization that is precise enough to determine whether or not existing integro-differential results apply to this setting. The second goal is to show that, indeed, a new regularization mechanism resulting from parabolic integro-differential theory is applicable. This will show that solutions that are $C^{1,\textnormal{Dini}}$ must immediately become $C^{1,\gamma}$ regular. We note that there is an earlier and stronger regularization mechanism for the one-phase Hele-Shaw flow by Choi-Jerison-Kim \cite{ChoiJerisonKim-2007RegHSLipInitialAJM} which shows that Lipschitz solutions with a dimensionally small Lipschitz norm must be $C^{1}$ regular and hence classical. We want to emphasize that in our context, both one and two phase problems are treated under the exact same methods. For simplicity and technical reasons, we focus on the case in which the free boundary is the graph of a time dependent function on $\mathbb R^n$, $n\geq 2$.
These free boundary problems are the time dependent evolution of the zero level set of a function $U: \mathbb R^{n+1} \times [0,T] \to \mathbb R$ that satisfies the following equation, with $V$ representing the normal velocity on $\partial\{U(\cdot,t)>0\}$, and $G$ a prescribed balance law. Here $A_1$ and $A_2$ are two (possibly different) elliptic constant coefficient diffusion matrices that dictate the equations:
\begin{align}\label{eqIN:HSMain}
\begin{cases}
\textnormal{Tr}(A_1 D^2U)=0\ &\text{in}\ \{U(\cdot,t)>0\}\\
\textnormal{Tr}(A_2 D^2U)=0\ &\text{in}\ \{U(\cdot,t)<0\}\\
U(\cdot,t)=1\ &\text{on}\ \{x_{n+1}=0\}\\
U(\cdot,t)=-1\ &\text{on}\ \{x_{n+1}=L\}\\
V=G(\partial^+_\nu U,\partial^-_\nu U)\ &\text{on}\ \partial\{U(\cdot,t)>0\}.
\end{cases}
\end{align}
Without loss of generality, we take $A_1=\textnormal{Id}$ (which can be obtained by an orthogonal change of coordinates). The prescribed values for $U$ at $x_{n+1}=0$ and $x_{n+1}=L$ can be thought of as an ambient background pressure for $U$, and the free boundary, $\{U=0\}$, will be located in between.
As mentioned above, this work treats the special case of the free boundary problem in which the boundary of the positivity set can be given as the graph of a function over $\mathbb R^n$. To this end,
we will use the notation, $D_f$, as
\begin{align*}
D_f=\{(x,x_{n+1})\in\mathbb R^{n+1}\ :\ 0<x_{n+1}<f(x)\},
\end{align*}
and in our context, we will assume that for some $f:\mathbb R^n\times[0,T]\to\mathbb R$,
\begin{align*}
\{ U(\cdot,t)>0 \} = D_{f(\cdot,t)}
\end{align*}
and
\begin{align*}
\partial\{ U(\cdot,t)>0\} = \textnormal{graph}(f(\cdot, t)).
\end{align*}
The main technical part of our work is centered on the properties of the (fully nonlinear) operator we call $I$, which is defined for the one-phase problem as
\begin{align}\label{eqIN:BulkEqForHSOperator}
\begin{cases}
\Delta U_f=0\ &\text{in}\ D_f\\
U_f=1\ &\text{on}\ \mathbb R^n\times\{0\}\\
U_f=0\ &\text{on}\ \Gamma_f=\textnormal{graph}(f),
\end{cases}
\end{align}
and $I$ is the map,
\begin{align}\label{eqIN:defHSOperator}
I(f,x) = \partial_\nu U_f(x,f(x)).
\end{align}
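For intuition (this numerical example is ours, not from the paper): when the free boundary is flat, $f \equiv h$, the solution is $U_f(x,x_{n+1}) = 1 - x_{n+1}/h$, so $I(f,x) = 1/h$ with the normal taken pointing into $D_f$. A minimal finite-difference sketch recovering this value:

```python
import numpy as np

# Flat free boundary f = h: U_f depends on the vertical variable only, so the
# Laplace problem reduces to u'' = 0 on (0, h) with u(0) = 1, u(h) = 0.
h = 0.5
n = 64                                   # interior grid points
dz = h / (n + 1)

# Standard second-difference discretization A u = rhs.
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
rhs = np.zeros(n)
rhs[0] = -1.0                            # carries the boundary value u(0) = 1

u = np.linalg.solve(A, rhs)

# One-sided difference for the normal derivative at the free boundary z = h,
# with the normal pointing back into D_f; the exact value is 1 / h = 2.
I_flat = (u[-1] - 0.0) / dz
```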
We note that the map $I$ does not depend on $t$, and it is a fully nonlinear function of $f$ (in the sense that it does not have a divergence structure, and it fails linearity in the highest order terms acting on $f$ -- in fact, it fails linearity in all terms). Here, $I$ can be thought of as a nonlinear Dirichlet-to-Neumann operator, but one that tracks how a particular solution depends on the boundary. This type of operator is not at all new, and we will briefly comment on its rather long history later on, in Section \ref{sec:BackgroundLiterature}.
It turns out (probably not surprisingly) that the key features of (\ref{eqIN:HSMain}) are entirely determined by the properties of the mapping, $I$. To this end, we will define a two phase version of this operator via the positive and negative sets,
\begin{align}\label{eqIN:DefOfSetsDfPlusMinus}
&D_f^+ = \{ (x,x_{n+1}) \ :\ 0<x_{n+1}<f(x) \},\\
& D_f^- = \{ (x,x_{n+1})\ :\ f(x)<x_{n+1}<L \},
\end{align}
with the equation, (recall we take $A_1=\textnormal{Id}$)
\begin{align}\label{eqIN:TwoPhaseBulk}
\begin{cases}
\Delta U_f = 0\ &\text{in}\ D_f^+\\
\textnormal{Tr}(A_2 D^2 U_f) = 0\ &\text{in}\ D_f^-\\
U_f = 0\ &\text{on}\ \Gamma_f\\
U_f=1\ &\text{on}\ \{x_{n+1}=0\}\\
U_f=-1\ &\text{on}\ \{x_{n+1}=L\}.
\end{cases}
\end{align}
We define the respective normal derivatives to the positive and negative sets:
\begin{align}
&\text{for}\ X_0\in\Gamma_f,\ \text{and}\ \nu(X_0)\ \text{the unit normal vector to $\Gamma_f$, pointing into the set}\ D^+_f,\nonumber\\
&\partial^+_\nu U(X_0):=\lim_{t\to0}\frac{U(X_0+t\nu(X_0))-U(X_0)}{t}\ \ \text{and}\ \ \partial^-_\nu U(X_0)=-\lim_{t\to0}\frac{U(X_0-t\nu(X_0))-U(X_0)}{t}.\label{eqIN:DefOfPosNegNormalDeriv}
\end{align}
With these, we can define the operator, $H$, as
\begin{align}\label{eqIN:DefOfH}
H(f,x):= G(I^+(f,x),I^-(f,x))\cdot \sqrt{1+\abs{\nabla f}^2},
\varepsilonnd{align}
where
\begin{align}\label{eqIN:DefIPLusAndMinus}
I^+(f,x):= \partial_\nu^+ U_f(x,f(x)),\ \ \text{and}\ \
I^-(f,x):= \partial_\nu^- U_f(x,f(x)).
\end{align}
The standard ellipticity assumption on $G$ is the following:
\begin{align}\label{eqIN:GEllipticity}
G\ \text{is Lipschitz and}\ \ \lambda\leq \frac{\partial}{\partial a} G(a,b)\leq \Lambda,\ \
\lambda\leq -\frac{\partial}{\partial b} G(a,b)\leq \Lambda.
\end{align}
A canonical example of $G$ for the two-phase problem is $G(a,b)=a-b$, whereas a one-phase problem will simply be given by $G(a,b)=\tilde G(a)$, and the problem often referred to as one-phase Hele-Shaw flow is $G(a,b)=a$ (we note that the name ``Hele-Shaw'' has multiple meanings, depending upon the literature involved; both instances can be seen in Saffman-Taylor \cite{SaffamnTaylor-1958PorousMedAndHeleShaw}).
In a previous work, \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal}, it was shown that under the graph assumption, the flow (\ref{eqIN:HSMain}) is equivalent in the sense of viscosity solutions for free boundary problems to viscosity solutions of the nonlinear, nonlocal, parabolic equation for $f$
\begin{align}\label{eqIN:HeleShawIntDiffParabolic}
\begin{cases}
\partial_t f = G(I^+(f), I^-(f))\cdot\sqrt{1+\abs{\nabla f}^2}\ &\text{in}\ \mathbb R^n\times [0,T],\\
f(\cdot,0) = f_0\ &\text{on}\ \mathbb R^n\times\{0\}.
\end{cases}
\end{align}
We remark that a viscosity solution for the respective equations (\ref{eqIN:HSMain}) and (\ref{eqIN:HeleShawIntDiffParabolic}) (they are different objects) will exist whenever the free boundary (or in this case, $f$) is uniformly continuous, i.e. in very low regularity conditions.
In this paper, we explore a higher regularity regime, already assuming the existence of a classical solution of (\ref{eqIN:HSMain}). Whenever $f$ remains in a particular convex subset of $C^{1,\textnormal{Dini}}$ (the set of $C^1$ functions whose gradients enjoy a Dini modulus), we will show that the operator $H$ takes a precise form as an integro-differential operator. This convex set is denoted by $\mathcal K(\delta,L,m,\rho)$ and is defined via
\begin{align*}
C^{1,\textnormal{Dini}}_\rho(\mathbb R^n) = \{ f:\mathbb R^n\to\mathbb R\ |\ \nabla f\in L^\infty\ \text{and is Dini continuous with modulus}\ \rho \},
\end{align*}
\begin{align}\label{eqIN:DefOfSetK}
\mathcal K(\delta,L,m,\rho) = \{ f\in C^{1,\textnormal{Dini}}_\rho\ :\ \delta<f<L-\delta,\ \abs{\nabla f}\leq m \}.
\end{align}
We note that the extra requirement $\delta<f<L-\delta$ is simply that the free boundary remains away from the fixed boundary where the pressure conditions are imposed.
The first theorem gives the integro-differential structure of $H$, and the details of which ellipticity class it falls into.
\begin{theorem}\label{thm:StructureOfHMain}
Assume that $G$ satisfies (\ref{eqIN:GEllipticity}) and $H$ is the operator defined by (\ref{eqIN:DefOfH}), using the equation, (\ref{eqIN:TwoPhaseBulk}).
\begin{enumerate}[(i)]
\item
For each fixed $\delta$, $L$, $m$, $\rho$ that define the set $\mathcal K$ in (\ref{eqIN:DefOfSetK}), there exists a collection $\{a^{ij}, c^{ij}, b^{ij}, K^{ij}\}\subset \mathbb R\times\mathbb R\times\mathbb R^n\times \textnormal{Borel}(\mathbb R^n\setminus\{0\})$ (depending upon $\delta$, $L$, $m$, $\rho$), so that
\begin{align*}
\forall\ f\in \mathcal K(\delta,L,m,\rho),\ \ \
H(f,x) = \min_i\max_j\left(
a^{ij}+c^{ij}f(x) + b^{ij}\cdot\nabla f(x) + \int_{\mathbb R^n}\delta_y f(x)K^{ij}(y)dy
\right),
\end{align*}
where for an $r_0$ depending upon $\delta$, $L$, $m$, we use the notation,
\begin{align}\label{eqIN:DelhFNotation}
\delta_yf(x)= f(x+y)-f(x)-{\mathbbm{1}}_{B_{r_0}}(y)\nabla f(x)\cdot y.
\end{align}
\item
Furthermore, there exists $R_0$ and $C$, depending on $\delta$, $L$, $m$, $\rho$, so that for all $i,j$,
\begin{align*}
\forall\ y\in\mathbb R^n,\ \ \
C^{-1}\abs{y}^{-n-1}{\mathbbm{1}}_{B_{R_0}}(y)\leq K^{ij}(y)\leq C\abs{y}^{-n-1},
\end{align*}
and
\begin{align*}
\sup_{0<r<r_0}\abs{b^{ij} - \int_{B_{r_0}\setminus B_r}yK^{ij}(y)dy}\leq C.
\end{align*}
The value of $r_0$ in (\ref{eqIN:DelhFNotation}) depends on $R_0$.
\end{enumerate}
\end{theorem}
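The compensator ${\mathbbm{1}}_{B_{r_0}}(y)\nabla f(x)\cdot y$ in (\ref{eqIN:DelhFNotation}) makes $\delta_y f(x) = O(\abs{y}^2)$ as $y\to 0$ for smooth $f$, which is what renders the pairing with a kernel of order $\abs{y}^{-n-1}$ integrable near the origin. The following one-dimensional check is an illustration of ours, not part of the paper.

```python
import numpy as np

f = np.cos                           # a smooth test function
fprime = lambda x: -np.sin(x)        # its derivative
x0 = 0.7

ys = np.logspace(-5, -1, 50)         # small increments y -> 0
# Compensated increment delta_y f(x0) = f(x0 + y) - f(x0) - f'(x0) * y.
delta = f(x0 + ys) - f(x0) - fprime(x0) * ys
# Second-order Taylor remainder: the ratio stays bounded (-> |f''(x0)| / 2).
ratio = np.abs(delta) / ys ** 2
```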
The second result of this paper is to use the above result, plus recent results for parabolic integro-differential equations that include (\ref{eqIN:HeleShawIntDiffParabolic}), thanks to part (ii) of Theorem \ref{thm:StructureOfHMain}, to deduce regularity for the resulting free boundary (in this case, the set $\Gamma_f=\textnormal{graph}(f(\cdot,t))$). This is the content of our second main result.
\begin{theorem}\label{thm:FBRegularity}
There exist universal constants, $C>0$ and $\gamma\in(0,1)$, depending upon $\delta$, $L$, $m$, and $\rho$, which define $\mathcal K$ in (\ref{eqIN:DefOfSetK}) so that
if $f$ solves (\ref{eqIN:HeleShawIntDiffParabolic}) and for all $t\in[0,T]$, $f(\cdot,t)\in\mathcal K(\delta,L, m, \rho)$, then $f\in C^{1,\gamma}(\mathbb R^n\times[\frac{T}{2},T])$, and
\begin{align*}
\norm{f}_{C^{1,\gamma}(\mathbb R^n\times[\frac{T}{2},T])}\leq \frac{C(\delta,L,m,\rho)(1 + T)}{T^\gamma}\norm{f(\cdot,0)}_{C^{0,1}}.
\end{align*}
In particular, under the \emph{assumption} that for all $t\in[0,T]$, $\partial\{U(\cdot,t)>0\}=\textnormal{graph}(f(\cdot,t))$, \emph{and} for all $t\in[0,T]$, $f\in\mathcal K(\delta,L,m,\rho)$, we conclude that $\partial\{U>0\}$ is a $C^{1,\gamma}$ hypersurface in space and time.
\end{theorem}
\begin{rem}
It is important to note the strange presentation of the $C^{1,\gamma}$ estimate in Theorem \ref{thm:FBRegularity} with only $\norm{f(\cdot,0)}_{C^{0,1}}$ on the right hand side. We emphasize that we have \emph{not} proved that Lipschitz free boundaries become $C^{1,\gamma}$, due to the constant, $C(\delta,L,m,\rho)$. As the reader will see in Section \ref{sec:KrylovSafonovForHS}, the constant $C$ depends in a complicated way on the parameters, $\delta$, $L$, $m$, $\rho$, as all of these impact the boundary behavior of the Green's function for the elliptic equations in (\ref{eqIN:HSMain}), which in turn changes the estimates in Theorem \ref{thm:StructureOfHMain}, and hence the resulting parabolic estimates in Section \ref{sec:KrylovSafonovForHS}. Nevertheless, once one knows that $f\in\mathcal K(\delta,L,m,\rho)$ for some fixed choice of $\delta$, $L$, $m$, $\rho$, subsequently decreasing the Lipschitz norm of $f(\cdot,0)$ would decrease the $C^{1,\gamma}$ norm of the solution at later times. However, since the parameters $\delta$, $L$, $m$, $\rho$ give an upper bound on the quantity, for each $t$, $\norm{f(\cdot,t)}_{C^{1,\textnormal{Dini}}_\rho}$, a reasonable interpretation of the result is rather given as
\begin{align*}
\norm{f}_{C^{1,\gamma}(\mathbb R^n\times[\frac{T}{2},T])}\leq \frac{C(\delta,L,m,\rho)(1+T)}{T^\gamma}\sup_{t\in[0,T]}\norm{f(\cdot,t)}_{C^{1,\textnormal{Dini}}_\rho}.
\end{align*}
\end{rem}
We note that the work \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal} established the equivalence between free boundary viscosity solutions of some Hele-Shaw type evolutions, like (\ref{eqIN:HSMain}), and viscosity solutions of fractional integro-differential parabolic equations in (\ref{eqIN:HeleShawIntDiffParabolic}). However, the results in \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal} focused on this equivalence at the level of viscosity solutions and low regularity properties, and they stopped short of addressing the question of a \emph{regularization} phenomenon that may occur in a slightly higher regularity regime. As shown in the current paper, one needs to obtain much more precise information about the integro-differential operators appearing in, for instance, Theorem \ref{thm:StructureOfHMain} in order to utilize recent tools from the realm of integro-differential equations to investigate how this equation regularizes. Furthermore, obtaining the estimate as in Theorem \ref{thm:StructureOfHMain} required a slightly different approach than the one pursued in \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal}, instead invoking a finite dimensional approximation technique from \cite{GuSc-2019MinMaxEuclideanNATMA}. This can be seen in Sections \ref{sec:LevyMeasureEstimate} and \ref{sec:DriftEstimates}.
\section{Some Historical Background and Related Results}\label{sec:BackgroundLiterature}
Basically, (\ref{eqIN:HSMain}) is a two-phase Hele-Shaw type problem without surface tension and neglecting the effects of gravity. For our purposes, we are interested in (\ref{eqIN:HSMain}) for mathematical reasons to uncover some of its structural properties and to explore the possibility of regularizing effects. Thus, we do not comment much on the model's physical origins. The fact that (\ref{eqIN:HSMain}) governs a two-phase situation is important for us to demonstrate that these techniques work for both one and two phase problems of a certain type.
In the following discussion, we attempt to focus on results most closely related to (\ref{eqIN:HSMain}), and we note that a more extended discussion can be found in the works \cite{ChangLaraGuillen-2016FreeBondaryHeleShawNonlocalEqsArXiv} and \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal}.
\subsection{Hele-Shaw type free boundary problems without gravity.}
In most of the existing literature, (\ref{eqIN:HSMain}) is studied in its one-phase form, where the set $\{U<0\}$ is ignored by simply dictating that the velocity condition is $V=G(\partial^+_\nu U^+)$.
Some of the earliest works for short time existence and uniqueness are \cite{ElliottJanovsky-1981VariationalApproachHeleShaw} and \cite{EscherSimonett-1997ClassicalSolutionsHeleShaw-SIAM}, where a type of variational problem is studied in \cite{ElliottJanovsky-1981VariationalApproachHeleShaw} and a classical solution (for short time) is produced in \cite{EscherSimonett-1997ClassicalSolutionsHeleShaw-SIAM}. For the one-phase problem, under a smoothness and convexity assumption, \cite{DaskaLee-2005AllTimeSmoothSolHeleShawStefan-CPDE} gives global in time smooth solutions. Viscosity solutions for the one-phase version of (\ref{eqIN:HSMain}) are defined and shown to exist and be unique in \cite{Kim-2003UniquenessAndExistenceHeleShawStefanARMA}, which follows the approach first developed in \cite{Caffarelli-1988HarnackApproachFBPart3PISA} for the stationary two-phase problem and subsequently used in \cite{AthanaCaffarelliSalsa-1996RegFBParabolicPhaseTransitionACTA} for the two-phase Stefan problem. A follow-up modification of the definition of viscosity solutions for (\ref{eqIN:HSMain}) was given in \cite[Section 9]{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal}. Of course, for our results, we are assuming already the existence of a classical solution, and so none of the definitions of viscosity solutions for (\ref{eqIN:HSMain}) are invoked here. (However, we do invoke viscosity solutions for the function $f$, as they are useful even when studying smooth solutions, such as in investigating the equation for discrete spatial derivatives of solutions. But the notion of solution for $f$ is entirely different from that of $U_f$.)
Moving on to issues of regularity, beyond the smooth initial data case in \cite{EscherSimonett-1997ClassicalSolutionsHeleShaw-SIAM}, and the convex case in \cite{DaskaLee-2005AllTimeSmoothSolHeleShawStefan-CPDE}, there are a number of works. All of the following works apply to the one-phase problem. With some assumptions on the quantity $\abs{U_t}/\abs{DU}$, \cite{Kim-2006RegularityFBOnePhaseHeleShaw-JDE} showed a Lipschitz free boundary becomes $C^1$ in space-time with a modulus, and long time regularity, involving propagation of a Lipschitz modulus, was obtained in \cite{Kim-2006LongTimeRegularitySolutionsHeleShaw-NonlinAnalysis}. Subsequently, the extra condition on the space-time non-degeneracy in \cite{Kim-2006RegularityFBOnePhaseHeleShaw-JDE} was removed in the work of \cite{ChoiJerisonKim-2007RegHSLipInitialAJM}, where under a dimensional small Lipschitz condition on the initial free boundary, Lipschitz free boundaries must be $C^1$ in space-time and hence classical. This was then followed up by the work \cite{ChoiJerisonKim-2009LocalRegularizationOnePhaseINDIANA} where more precise results can be proved when the solution starts from a global Lipschitz graph. In this context, it is fair to say that our results are the extension of \cite{ChoiJerisonKim-2009LocalRegularizationOnePhaseINDIANA} to the two-phase case, but with paying the extra price of requiring $C^{1,\textnormal{Dini}}$ regularity of the initial graph instead of being only Lipschitz. There is another regularity result for the one-phase Hele-Shaw problem in \cite{ChangLaraGuillen-2016FreeBondaryHeleShawNonlocalEqsArXiv} that follows more the strategy of \cite{DeSilva-2011FBProblemWithRHS} and \cite{Savin-2007SmallPerturbationCPDE}, instead of \cite{Caffarelli-1987HarnackInqualityApproachFBPart1RevMatIbero}, \cite{Caffarelli-1989HarnackForFBFlatAreLipCPAM}, \cite{ChoiJerisonKim-2007RegHSLipInitialAJM}, \cite{Kim-2006RegularityFBOnePhaseHeleShaw-JDE}. 
In \cite{ChangLaraGuillen-2016FreeBondaryHeleShawNonlocalEqsArXiv} the approach to regularity for the one-phase Hele-Shaw invoked parabolic regularity theory for fractional equations, but in that context the regularity theory applied to a blow-up limit of the solutions under a flatness condition in space-time, which resulted in a local $C^{1,\gamma}$ space-time regularity for the solution. Thus already \cite{ChangLaraGuillen-2016FreeBondaryHeleShawNonlocalEqsArXiv} foreshadowed the type of strategy that we have pursued in Theorem \ref{thm:FBRegularity}.
\subsection{The nonlinear Dirichlet-to-Neumann mapping}
In this paper, it is reasonable to call the operator, $I$, defined in (\ref{eqIN:BulkEqForHSOperator}) and (\ref{eqIN:defHSOperator}), a nonlinear version of the classical Dirichlet-to-Neumann mapping. In this case it records how a particular harmonic function depends on the shape of the domain. This operator $I$ and the resulting mapping $H$ defined in (\ref{eqIN:DefOfH}) are key components in our analysis, and were also among the main ingredients in the previous work \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal}. Such operators are not new, and they have a relatively long history of study, particularly in some water wave equations (in fact, the authors in \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal} were unaware of this long history). Although the map, $I$, appearing in (\ref{eqIN:defHSOperator}), is not exactly the operator appearing in earlier works, it is very similar. Most of the earlier versions are a slight variant on the following: given two functions, $h:\mathbb R^n\to\mathbb R$ and $\psi:\mathbb R^n\to\mathbb R$, $U_{h,\psi}$ is the unique, bounded, harmonic function satisfying
\begin{align*}
\begin{cases}
\Delta U_{h,\psi}=0\ &\text{in}\ \{(x,x_{n+1})\in\mathbb R^{n+1}\ :\ x_{n+1}<h(x)\}\\
U_{h,\psi}(x,x_{n+1}) = \psi(x)\ &\text{on}\ \textnormal{graph}(h)
\end{cases}
\end{align*}
and the Dirichlet-to-Neumann operator is
\begin{align*}
[\tilde G(h)\psi](x):= \partial_\nu U_{h,\psi}(x,h(x))\sqrt{1+\abs{\nabla h(x)}^2}.
\end{align*}
We note that in the literature this operator is usually denoted $G(h)\psi$, but we use $\tilde G(h)\psi$ to avoid a conflict with our use of ``$G$'' in (\ref{eqIN:HSMain}), which is entirely different. The reader should note that in this context $\psi$ frequently does not depend on $x_{n+1}$; indeed, $\tilde G(h)$ is typically applied to such vertically constant boundary data. Sometimes, instead of taking $U$ to be defined in the subgraph of $h$, there may be other boundary conditions, such as, for example when $h>1$, a no flux condition $\partial_\nu U_{h,\psi}|_{\{x_{n+1}=0\}}=0$, or a fixed bottom boundary with a nontrivial shape. For the purposes of discussion, the equation in the subgraph of $h$ will suffice. The use of the map, $\tilde G(h)$, appears to go back to \cite{Zakharov-1968StabilityPeriodicWaves} and then \cite{CraigSulem-1993NumericalSimulationGravityWaves-JCompPhys}. The operator $\tilde G(h)$ was revisited in \cite{NichollsReitich-2001NewApproachDtoNAnalyticity} for the sake of improving computational tractability for various problems like (\ref{eqIN:HSMain}) that may involve interfaces moving via a normal derivative. The work \cite{Lannes-2005WellPoseWaterWave-JAMS} investigates the mapping and boundedness properties of $\tilde G(h)$ on various Sobolev spaces for proving well-posedness of water wave equations, and also gives a very detailed description of the usage of $\tilde G$ in earlier works on water waves; we refer to \cite{Lannes-2005WellPoseWaterWave-JAMS} for more discussion of the history of $\tilde G$ in water wave results. The subsequent article \cite{AlazardBurqZuily-2014CauchyProblemGravityWaterWaves-Inventiones} showed that a more careful analysis of $\tilde G$ could give improved conditions for well-posedness in gravity water waves.
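In the flat case $h\equiv 0$, the operator $\tilde G(0)$ can be computed explicitly (a standard observation, recorded here for orientation, with $\nu$ the outward normal): the bounded harmonic extension of $\psi$ to the lower half-space is $e^{x_{n+1}\abs{D}}\psi$, so

```latex
\begin{align*}
[\tilde G(0)\psi](x)
=\partial_{x_{n+1}}\big(e^{x_{n+1}\abs{D}}\psi\big)(x)\Big|_{x_{n+1}=0}
=\abs{D}\psi(x)=(-\Delta)^{1/2}\psi(x).
\end{align*}
```

Thus a flat interface reduces these problems to an operator of order one, consistent with the $1/2$-heat equation linearizations mentioned below.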
$\tilde G$ recently played a central role in \cite{NguyenPausader-2020ParadifferentialWellPoseMuskatARMA} for well-posedness of the Muskat problem and in \cite{Alazard-2020ConvexityAndHeleShaw-ArXiv}, \cite{AlazardMeunierSmets-2019LyapounovCauchyProbHeleShaw-ArXiv} for well-posedness of the one-phase Hele-Shaw equation with gravity as well as to deduce results related to Lyapunov functionals for the solution.
\subsection{Hele-Shaw type free boundary problems with gravity -- Muskat type problems}
A pair of free boundary problems that are closely related to (\ref{eqIN:HSMain}), but pose their own set of additional challenges, are those also called Hele-Shaw and Muskat problems. They can be cast as both one and two phase problems, and they govern the free surface between two fluids of different density and possibly different viscosity. We note that in both, gravity is taken into account, and this changes the nature of the equation a bit away from (\ref{eqIN:HSMain}); also the pressure is not required to be constant along the free boundary. There is a large amount of literature on this class of problems, and we focus on the ones most closely related to (\ref{eqIN:HSMain}). A feature that links the Muskat problem to the one considered in this paper is the possibility of rewriting the original problem in $n+1$ space dimensions as a problem in $n$ dimensions that directly governs the free surface itself, via a nonlinear equation that is inherently integro-differential in nature and which linearizes to the fractional heat equation of order $1/2$. Already the reformulation of the problem in terms of integro-differential equations goes back to \cite{Ambrose-2004WellPoseHeleShawWithoutSurfaceTensionEurJAppMath}, \cite{CaflischOrellanaSiegel-1990LocalizedApproxMethodVorticalFLows-SIMA}, \cite{CaflischHowisonSiegel-2004GlobalExistenceSingularSolMuskatProblem-CPAM}, with global existence of solutions with small data in \cite{CaflischHowisonSiegel-2004GlobalExistenceSingularSolMuskatProblem-CPAM} and short time existence of solutions with large data in an appropriate Sobolev space in \cite{Ambrose-2004WellPoseHeleShawWithoutSurfaceTensionEurJAppMath}. This method of writing the Muskat problem as an equation for the free surface directly continues in \cite{CordobaGancedo-2007ContourDynamics-CMP}; the result is an integro-differential type equation \emph{for the gradient} of the free surface function, which for a 2-dimensional interface reads
\begin{align}\label{eqLIT:MuskatFPrime}
\partial_t f = \frac{\rho_2-\rho_1}{4\pi}\int_{\mathbb R^2}\frac{(\nabla f(x,t)-\nabla f(x-y,t))\cdot y}{(\abs{y}^2+[f(x,t)-f(x-y,t)]^2)^{3/2}}dy.
\end{align}
This formulation was then used to show that near a stable solution that is sufficiently regular, the equation linearizes to the 1/2-heat equation, and \cite{CordobaGancedo-2007ContourDynamics-CMP} further showed existence of solutions in this region (see a few more comments about linearization in Section \ref{sec:Commentary}). It was subsequently used to produce many well-posedness and regularity results, both short time and global time, a few of which are: \cite{ConstantinCordobaGancedoStrain-2013GlobalExistenceMuskat-JEMS}, \cite{ConstantinGancedoShvydkoyVicol-2017Global2DMuskatRegularity-AIHP}, \cite{CordobaCordobaGancedo-2011InterfaceHeleShawMuskat-AnnalsMath}.
There are (at least) two other variants on studying the Muskat problem as an equation for the free surface alone, and the ones that are very close in spirit to our work are, on the one hand, \cite{Cameron-2019WellPosedTwoDimMuskat-APDE}, \cite{Cameron-2020WellPose3DMuskatMediumSlope-ArXiv}, \cite{CordobaGancedo-2009MaxPrincipleForMuskat-CMP}, and on the other hand, \cite{AlazardMeunierSmets-2019LyapounovCauchyProbHeleShaw-ArXiv}, \cite{NguyenPausader-2020ParadifferentialWellPoseMuskatARMA}. In \cite{CordobaGancedo-2009MaxPrincipleForMuskat-CMP}, equation (\ref{eqLIT:MuskatFPrime}) was rewritten as a fully nonlinear integro-differential equation on $f$ itself, instead of $\partial_x f$, which is given in 1-d as
\begin{align}\label{eqLITMuskatForF}
\partial_t f = \int_\mathbb R \frac{f(y,t)-f(x,t)-(y-x)\partial_x f(x,t)}{(y-x)^2 + (f(y,t)-f(x,t))^2}dy,
\end{align}
which is an equation of the form,
\begin{align*}
\partial_t f = \int_\mathbb R \delta_y f(x,t)K_f(y,t)dy,
\end{align*}
where $K_f\geq0$ is a kernel that depends on $f$ and has the same structure as what we provide in Theorem \ref{thm:StructureOfHMain} above. The integro-differential equation for $f$ (as opposed to $\partial_x f$) played a role in \cite{CordobaGancedo-2009MaxPrincipleForMuskat-CMP} to show non-expansion of the Lipschitz norm of solutions with nice enough data. The integro-differential nature of the Muskat problem was subsequently utilized in \cite{Cameron-2019WellPosedTwoDimMuskat-APDE}, \cite{Cameron-2020WellPose3DMuskatMediumSlope-ArXiv} to study well-posedness for Lipschitz data as well as establish regularizing effects from (\ref{eqLITMuskatForF}). Thus, in spirit, our work combined with \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal} is very close to \cite{Cameron-2019WellPosedTwoDimMuskat-APDE}, \cite{Cameron-2020WellPose3DMuskatMediumSlope-ArXiv}. The other variation closely related to our work is to utilize the equation for $f$ given by the operator, $\tilde G(f)$ shown above, and this is used in \cite{AlazardMeunierSmets-2019LyapounovCauchyProbHeleShaw-ArXiv} for one-phase Hele-Shaw with gravity and \cite{NguyenPausader-2020ParadifferentialWellPoseMuskatARMA} for both the one and two phase Muskat problem. The analogy is easiest to see for the one phase problem, and in both \cite{AlazardMeunierSmets-2019LyapounovCauchyProbHeleShaw-ArXiv} and \cite{NguyenPausader-2020ParadifferentialWellPoseMuskatARMA} it is established that if the graph of $f$ gives the free surface, then $f$ can be completely characterized by the flow
\begin{align}\label{eqLIT:DtoNMuskat}
\partial_t f = \tilde G(f)f \ \ \ \text{on}\ \mathbb R^n\times[0,T].
\end{align}
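As a quick illustration of the comparison-principle structure in (\ref{eqLITMuskatForF}) (a numerical sketch with a hypothetical profile, not taken from the cited works): at an interior maximum of $f$ the numerator $f(y,t)-f(x,t)-(y-x)\partial_x f(x,t)$ is non-positive while the denominator is positive, so the right-hand side is non-positive there, consistent with the non-expansion of the norms mentioned above.

```python
import numpy as np
from scipy.integrate import quad

f  = lambda x: np.exp(-x**2)            # hypothetical profile, maximum at x = 0
fx = lambda x: -2.0*x*np.exp(-x**2)     # its derivative

def muskat_rhs(x, R=200.0):
    """Right-hand side of the 1-d Muskat equation for f, truncated to |y - x| < R."""
    def integrand(y):
        num = f(y) - f(x) - (y - x)*fx(x)
        den = (y - x)**2 + (f(y) - f(x))**2
        return num/den
    # split at y = x, where the singularity of the integrand is removable
    left,  _ = quad(integrand, x - R, x, limit=200)
    right, _ = quad(integrand, x, x + R, limit=200)
    return left + right

# at the maximum, f(y) - f(0) <= 0 for every y, so the interface velocity there is <= 0
assert muskat_rhs(0.0) < 0.0
```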
At least for the Hele-Shaw type flow we study in (\ref{eqIN:HSMain}), the first result showing that weak solutions (viscosity solutions) of (\ref{eqIN:HSMain}) are equivalent to the flow governed by the Dirichlet-to-Neumann operator acting on $f$, as above in (\ref{eqLIT:DtoNMuskat}) (in our context, this is $H$ in (\ref{eqIN:DefOfH}) and (\ref{eqIN:HeleShawIntDiffParabolic})), appears to be \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal}. The reduction to the equation for the free surface is not surprising, as a similar (and more complicated) reduction to a system for the free surface in water waves has been known since \cite{CraigSulem-1993NumericalSimulationGravityWaves-JCompPhys} (also appearing in \cite{AlazardBurqZuily-2014CauchyProblemGravityWaterWaves-Inventiones}, \cite{Lannes-2005WellPoseWaterWave-JAMS}, among others)-- the novelty in \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal} was that the reduction holds for viscosity solutions, which may not be classical. In \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal} it was shown that under the graph assumption, the notion of viscosity free boundary solution for $U$ is equivalent to that of viscosity solution of the equation for $f$, which is (\ref{eqIN:HeleShawIntDiffParabolic}). Furthermore, global in time existence and uniqueness for (\ref{eqIN:HeleShawIntDiffParabolic})-- or well-posedness-- holds, and it can be used to construct solutions to (\ref{eqIN:HSMain}), as well as to show that a modulus of continuity for the initial interface is preserved for all time.
Subsequently, both \cite{AlazardMeunierSmets-2019LyapounovCauchyProbHeleShaw-ArXiv} and \cite{NguyenPausader-2020ParadifferentialWellPoseMuskatARMA} showed that, for respectively the one-phase Hele-Shaw problem with gravity and the Muskat problem, the equation (\ref{eqLIT:DtoNMuskat}) is equivalent to solving the original free boundary problem, and that this equation is globally in time well posed in $H^s(\mathbb R^n)$ for $s>1+\frac{n}{2}$, regardless of the size of the initial data in $H^s$. Thus the work in \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal} and our work here are again very closely related to \cite{AlazardMeunierSmets-2019LyapounovCauchyProbHeleShaw-ArXiv}, \cite{NguyenPausader-2020ParadifferentialWellPoseMuskatARMA}, through the direct use of (\ref{eqLIT:DtoNMuskat}). There is an important difference to note, however: the results in \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal} and our results here exploit the fact that $H$ enjoys the global comparison property (see Definition \ref{defFD:GCP}) and the structure provided by Theorem \ref{thm:StructureOfHMain}, whereas in \cite{AlazardMeunierSmets-2019LyapounovCauchyProbHeleShaw-ArXiv}, \cite{NguyenPausader-2020ParadifferentialWellPoseMuskatARMA} the analysis is derived from the properties of $\tilde G$ as a mapping on $H^s$.
\subsection{Parabolic integro-differential equations}
For the sake of presentation, in the context of this paper, the parabolic integro-differential equations that we utilize are of the form
\begin{align}\label{eqLIT:ParabolicIntDiff}
\partial_t f = b(x)\cdot \nabla f + \int_{\mathbb R^n} \delta_hf(x,t) K(x,h)dh,
\end{align}
with $\delta_h f(x) = f(x+h)-f(x)-{\mathbbm{1}}_{B_{r_0}}(h)\nabla f(x)\cdot h$,
and their nonlinear counterparts given as those in Theorem \ref{thm:StructureOfHMain}. Here, $b$ is a bounded vector field, and $K\geq0$. The main issue for our work is the possibility that solutions of (\ref{eqLIT:ParabolicIntDiff}) enjoy some sort of extra regularity when $K$ has better behavior than simply being non-negative. Are solutions to (\ref{eqLIT:ParabolicIntDiff}) H\"older continuous in some way that still allows for rough coefficients? Are they $C^{1,\alpha}$? We note that as written, (\ref{eqLIT:ParabolicIntDiff}), is an equation in non-divergence form, and in the literature, the theory that addresses these questions commonly carries the name Krylov-Safonov results, which comes from the result for local, second order parabolic equations \cite{KrSa-1980PropertyParabolicEqMeasurable} (in the divergence case, they usually carry the name De Giorgi - Nash - Moser). These questions pertaining to (\ref{eqLIT:ParabolicIntDiff}) have gathered considerable attention in the past 20 or so years, and most of the works relevant to our study find their origins in either \cite{BaLe-2002Harnack} or \cite{BaLe-2002TransitionProb}, followed by a combination of \cite{CaSi-09RegularityIntegroDiff} and \cite{Silv-2006Holder}. Examples of the works on parabolic equations that are close to our needs include \cite{ChDa-2012RegNonlocalParabolicCalcVar}, \cite{ChangLaraDavila-2014RegNonlocalParabolicII-JDE}, \cite{ChangLaraDavila-2016HolderNonlocalParabolicDriftJDE}, \cite{SchwabSilvestre-2014RegularityIntDiffVeryIrregKernelsAPDE}, \cite{Serra-2015RegularityNonlocalParabolicRoughKernels-CalcVar}, \cite{Silvestre-2011RegularityHJE}, \cite{Silvestre-2014RegularityParabolicICM}. We note there are many references for elliptic problems and problems involving existence and uniqueness of viscosity solutions which are not mentioned above.
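For concreteness, in one dimension the model kernel $K(h)=\pi^{-1}\abs{h}^{-2}$ turns the integral term of (\ref{eqLIT:ParabolicIntDiff}) into exactly $-(-\Delta)^{1/2}$. The following small numerical sketch (illustrative only; the test function and truncation are our choices) checks this against the Fourier symbol $\abs{\xi}$ on $f=\cos$, for which $-(-\Delta)^{1/2}\cos=-\cos$:

```python
import numpy as np

def frac_lap_half(f, fp, fpp, x, R=50.0, N=200001):
    """Approximate -(-Delta)^{1/2} f(x) = (1/pi) * int_{|h|<R} delta_h f(x)/h^2 dh."""
    h = np.linspace(-R, R, N)                   # symmetric grid around h = 0
    vals = np.empty_like(h)
    m = np.abs(h) > 1e-9                        # exclude the (removable) singularity
    hm = h[m]
    # delta_h f(x) = f(x+h) - f(x) - 1_{|h|<1} f'(x) h
    vals[m] = (f(x + hm) - f(x) - (np.abs(hm) < 1.0)*fp(x)*hm) / hm**2
    vals[~m] = fpp(x)/2.0                       # limit of the integrand as h -> 0
    dx = h[1] - h[0]                            # trapezoid rule on the uniform grid
    return (vals.sum() - 0.5*(vals[0] + vals[-1])) * dx / np.pi

# for f = cos at x = 0 the exact value is -1; the tail |h| > R contributes O(1/R)
val = frac_lap_half(np.cos, lambda t: -np.sin(t), lambda t: -np.cos(t), 0.0)
assert abs(val + 1.0) < 0.05
```

The same computation with a kernel that is merely comparable to $\abs{h}^{-2}$ illustrates the "rough coefficient" setting of the Krylov-Safonov theory discussed above.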
A common feature of most of the parabolic works listed above is that they arose from interest in the probabilistic implications and analytical properties of equations like (\ref{eqLIT:ParabolicIntDiff}) as fundamental mathematical objects in their own right. A typical and frequently mentioned application among the nonlinear works is their relationship to optimal control and differential games. There has also been interest in utilizing equations like (\ref{eqLIT:ParabolicIntDiff}) in situations which are not necessarily originally posed as integro-differential equations, as we do in this work for (\ref{eqIN:HeleShawIntDiffParabolic}). In our case, we find that (\ref{eqIN:HeleShawIntDiffParabolic}) coincidentally landed within the scope of existing results, as the reader may see in Sections \ref{sec:BackgroundParabolic} and \ref{sec:KrylovSafonovForHS}. This is not always the case, and sometimes the intended application of the integro-differential theory has led to new advances in the integro-differential field. One recent occurrence of this is the application of integro-differential techniques to the Boltzmann equation. For the homogeneous Boltzmann equation, new integro-differential results were first produced in \cite{SchwabSilvestre-2014RegularityIntDiffVeryIrregKernelsAPDE} and subsequently applied in \cite{Silvestre2016NewRegularizationBoltzCMP} (as mentioned in \cite[Section 1B]{SchwabSilvestre-2014RegularityIntDiffVeryIrregKernelsAPDE}). Even more advanced techniques were required for the inhomogeneous Boltzmann equation, and one can see the evolution of the integro-differential theory in \cite{ImbertSilvestre-2020WeakHarnackBoltzmann-JEMS}, which was followed by \cite{ImbertSilvestre-2018SchauderForKineticIntegralEq-ArXiV}, \cite{ImbertSilvestre-2019GlobalRegularityBoltzmannWithoutCutOff-ArXiv}, \cite{ImbertSilvestre-2020RegularityBoltzmannConditionalMacro-ArXiv}.
\section{Notation and Assumptions}
We will collect some notation here.
\begin{itemize}
\item $n$ is the dimension of the free boundary hypersurface, with $n\geq 2$.
\item $X=(x,x_{n+1})\in\mathbb R^{n+1}$.
\item $B_r(x)\subset\mathbb R^n$ and $B^{n+1}_r(X)\subset\mathbb R^{n+1}$. When the context is clear, the superscript may be dropped.
\item $d(x,y)$ is the distance between $x$ and $y$, $d(x,E)$ is the distance between $x$ and a set $E$, and may be abbreviated $d(x)$ when $d(x,E)$ is understood for a particular $E$.
\item $\nu_f(X)$ is the unit normal vector to the boundary at $X\in\Gamma_f$, often abbreviated without the subscript.
\item $I^+$ and $I^-$ are the respective normal derivatives from the positive and negative phases of $U_f$, defined using (\ref{eqIN:TwoPhaseBulk}), (\ref{eqIN:DefOfPosNegNormalDeriv}), and (\ref{eqIN:DefIPLusAndMinus}).
\item $C^{1,\textnormal{Dini}}_\rho$ is a Banach space, as in Stein \cite[Chapter VI, Cor 2.2.3 and Exercise 4.6]{Stei-71} (also see \varepsilonqref{eqIN:DefOfSetK} as well as $X_\rho$ in Definition \ref{defFD:XrhoSpace}).
\item $X_\rho$, see Definition \ref{defFD:XrhoSpace} and Remark \ref{remFD:ModulusIsAlsoInXRho}, cf. \cite[Chapter VI, Cor 2.2.3 and Exercise 4.6]{Stei-71}.
\item $C^0(\mathbb R^N)$ is the space of continuous functions on $\mathbb R^N$.
\item $C^0_b(\mathbb R^N)$ is the Banach space of continuous bounded functions with the norm $\norm{\cdot}_{L^\infty}$.
\item $C^{1,\alpha}_b(\mathbb R^N)$ is the space of functions that are bounded with bounded derivatives, with the derivatives $\alpha$-H\"older continuous.
\item $D_f=\{(x,x_{n+1})\ :\ 0< x_{n+1}<f(x) \}=D^+_f$, and $D^-_f=\{(x,x_{n+1})\ :\ f(x)<x_{n+1}<L \}$.
\item $\Gamma_f=\textnormal{graph}(f)=\{ (x,x_{n+1})\ :\ x_{n+1}=f(x) \}$.
\item $d\sigma_f$ the surface measure on $\Gamma_f$, often abbreviated without the subscript as $d\sigma$.
\item $G_f$ the Green's function in $D_f$ for the operator in \varepsilonqref{eqIN:TwoPhaseBulk}.
\item $P_f$ the Poisson kernel for $D_f$ on $\Gamma_f$.
\item $\mathcal K(\delta, L, m, \rho)$, see (\ref{eqIN:DefOfSetK}).
\item $\delta_h f(x) = f(x+h)-f(x)-{\mathbbm{1}}_{B_{r_0}}(h) \nabla f(x)\cdot h$
\varepsilonnd{itemize}
\section{Background results on Green's Functions and Parabolic Equations}\label{sec:BackgroundTools}
This section has two subsections, collecting respectively background results related to Green's functions for equations in Dini domains and background results for fractional parabolic equations.
\subsection{Boundary behavior of Green's functions}\label{sec:GreenFunction}
We utilize results about the boundary behavior of Green's functions for equations with Dini coefficients in domains that have $C^{1,\textnormal{Dini}}$ boundaries, for a Dini modulus, $\omega$. We will use the shorthand $d(x) := \text{dist}(x,\partial \Omega)$ for $x \in \Omega$. The main way in which we use the boundary behavior of the Green's function is to deduce the boundary behavior of the Poisson kernel as well as that of solutions that may vanish on a portion of the boundary. The study of the boundary behavior of Green's functions is a well developed topic, and none of what we present here is new. The results in either Theorem \ref{thm:GreenBoundaryBehavior} or Proposition \ref{propGF:PoissonKernel} reside in the literature in various combinations of \cite{Bogdan-2000SharpEstGreenLipDomJMAA}, \cite{Cho-2006GreensFunctionPotentialAnal}, \cite{Zhao-1984UniformBoundednessConditionalGaugeCMP}, among other references.
\begin{theorem}\label{thm:GreenBoundaryBehavior}
If $G_f$ is the Green's function for the domain $D_f$, then there exist positive constants $C_1$, $C_2$, and $R_0$, that depend upon the Dini modulus of $\nabla f$ and other universal parameters, so that for all $x,y\in D_f$ with $\abs{x-y}\leq R_0$,
\begin{equation}\label{GlobalGreenEstimate}
C_1\min\left\{ \frac{d(x)d(y)}{\abs{x-y}^{n+1}}, \frac{1}{4\abs{x-y}^{n-1}} \right\}
\leq G_f(x,y)
\leq C_2\min\left\{ \frac{d(x)d(y)}{\abs{x-y}^{n+1}}, \frac{1}{4\abs{x-y}^{n-1}} \right\}.
\end{equation}
\end{theorem}
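As a consistency check (for orientation only; this is not how the theorem is proved), the two-sided bound above can be verified by hand when the domain is the half-space $\{x_{n+1}>0\}\subset\mathbb R^{n+1}$ with $n\geq 2$. Writing $\bar y$ for the reflection of $y$ across the boundary, so that $\abs{x-\bar y}^2=\abs{x-y}^2+4\,d(x)d(y)$, the explicit Green's function satisfies

```latex
\begin{align*}
G(x,y) = c_n\Big(\abs{x-y}^{1-n} - \abs{x-\bar y}^{1-n}\Big)
= c_n\abs{x-y}^{1-n}\Big(1-(1+s)^{\frac{1-n}{2}}\Big),
\qquad s:=\frac{4\,d(x)d(y)}{\abs{x-y}^{2}},
\end{align*}
```

and the elementary bounds $1-(1+s)^{\frac{1-n}{2}}\asymp\min\{1,s\}$ for $s>0$ give exactly $G(x,y)\asymp\min\big\{d(x)d(y)\abs{x-y}^{-n-1},\ \abs{x-y}^{1-n}\big\}$.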
The essential ingredient in the proof of Theorem \ref{thm:GreenBoundaryBehavior} is the following Lemma \ref{lemGF:lineargrowth} on the growth of solutions away from their zero set. Before stating this result, we need a few definitions. Denote by $[x_0,z_0]$ the closed line segment with endpoints $x_0,z_0 \in \Omega$, and denote by $\mathcal A_{2r}(x_0)$ the annulus $B_{2r}(x_0) \setminus B_r(x_0)$.
\begin{definition}
A domain $\Omega \subset \mathbb R^{n+1}$ satisfies the uniform interior ball condition with radius $\rho_0$ if for every $\xi \in \partial \Omega$, there exists an open ball $B$ of radius $\rho_0$ such that $B \subset \Omega$ and $\overline{B} \cap \partial \Omega = \{\xi\}$.
\end{definition}
Observe that since $\delta \leq f\leq L-\delta$ and $\nabla f$ has a Dini modulus of continuity $\rho$, there exists a $C^{1,\text{Dini}}$ map $T_f : \overline{D_f} \rightarrow \mathbb R^n \times [0,L]$ satisfying
\begin{equation}\label{flatteningtransformation}
\begin{cases}
T_f(D_f) = \mathbb R^n \times [0,L], \\
T_f(\Gamma_f) = \left\{x_{n+1} = L \right\}, \\
T_f(\left\{x_{n+1} = 0 \right\}) = \left\{x_{n+1} = 0 \right\}.
\end{cases}
\end{equation}
Consequently, the function $V_f := U_f\circ T_f^{-1}$ satisfies an equation of the form $L_A V_f = -\text{div}(A(y) \nabla V_f(y)) = 0$ on $\mathbb R^n \times [0,L]$, where the coefficients $A(\cdot)$ satisfy $0 < \lambda \mathbb{I}_{n+1} \leq A \leq \Lambda \mathbb{I}_{n+1}$ with $\lambda, \Lambda$ depending on $\delta, L, m$, and are Dini continuous on $\mathbb R^n \times [0,L]$ up to the boundary with a modulus of continuity $\omega$. Thus, for the purposes of the next lemma, we will only consider a domain $\Omega \subset \mathbb R^{n+1}$ which satisfies the uniform interior ball condition with radius $\rho_0$ and a solution to a uniformly elliptic equation in divergence form on $\Omega$ with coefficients having a Dini modulus of continuity $\omega$.
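One concrete choice of such a flattening map (a sketch for orientation; this explicit formula is our illustration, and any map with the properties above works) is the vertical rescaling

```latex
\begin{align*}
T_f(x,x_{n+1}) = \Big(x,\ \frac{L\,x_{n+1}}{f(x)}\Big),
\qquad
T_f^{-1}(y,y_{n+1}) = \Big(y,\ \frac{f(y)\,y_{n+1}}{L}\Big).
\end{align*}
```

Since $\delta\leq f\leq L-\delta$ and $\nabla f$ has Dini modulus $\rho$, both $T_f$ and $T_f^{-1}$ are $C^{1,\textnormal{Dini}}$ up to the boundary, and the standard change of variables $A=\big(\abs{\det DT_f}^{-1}\,DT_f\,DT_f^{\top}\big)\circ T_f^{-1}$ produces coefficients with the stated ellipticity and Dini continuity.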
\begin{lemma}\label{lemGF:lineargrowth} Suppose $\Omega \subset \mathbb R^{n+1}$ satisfies the uniform interior ball condition with radius $\rho_0$. Let $u\in C^2(\Omega) \cap C(\overline{\Omega})$ be non-negative and satisfy
$$
\begin{cases}
L_A u = 0 \quad \text{in } \Omega,\\
u = 0 \quad \text{on } \Gamma \subset \partial \Omega.
\end{cases}
$$
Then there exist positive constants $C = C(n,\lambda,\Lambda)$ and $r_0 = r_0(n,\omega,\lambda,\Lambda) \leq \frac{\rho_0}{2}$ such that for all balls $B_{2r}(x_0) \subset \Omega$ with $\overline{B_{2r}(x_0)} \cap \Gamma \neq \emptyset$ and $r \leq r_0$, we have the estimate
\begin{equation}\label{lineargrowthatboundary}
u(x) \geq \frac{C}{r} u(x_0) d(x) + o(d(x)) \qquad \text{ for all } x \in [x_0,z_0] \cap \mathcal A_{2r}(x_0), \ z_0 \in \overline{B_{2r}(x_0)} \cap \Gamma.
\end{equation}
\end{lemma}
Let us state some useful consequences of Lemma \ref{lemGF:lineargrowth} and Theorem \ref{thm:GreenBoundaryBehavior}. First, notice that Lemma \ref{lemGF:lineargrowth} implies the following uniform linear growth of $U_f$ away from $\Gamma_f$.
\begin{lemma}\label{lemGF:LinearGrowthFromGammaF}
There exists a constant $C>0$ that depends on $\delta$, $L$, $m$, $\rho$, so that for all $f\in\mathcal K(\delta,L,m,\rho)$, for $U_f$ defined in (\ref{eqIN:BulkEqForHSOperator}), and for all $Y\in\Gamma_f$,
\begin{align*}
\frac{s}{C}\leq U_f(Y-se_{n+1})\leq Cs,\ \ \text{and}\ \
\frac{s}{C}\leq U_f(Y+s\nu_f(Y))\leq Cs.
\end{align*}
(Recall $\nu_f$ is the inward normal to $D_f$.)
\end{lemma}
Theorem \ref{thm:GreenBoundaryBehavior} also induces the following behavior on the Poisson kernel.
\begin{proposition}\label{propGF:PoissonKernel}
If $f\in \mathcal K(\delta, L, m, \rho)$ and $P_f$ is the Poisson kernel for the domain $D_f$, then there exist constants $C_1$, $C_2$, $C_3$ and $R_0$, that depend upon $\delta, L, m, \rho$ and other universal parameters, so that for all $X\in D_f$, $Y\in\Gamma_f$, with $\abs{X-Y}\leq R_0$,
\begin{align*}
C_1 \frac{d(X)}{\abs{X-Y}^{n+1}}
\leq P_f(X,Y)
\leq C_2 \frac{d(X)}{\abs{X-Y}^{n+1}}.
\end{align*}
Furthermore, there exists an exponent, $\alpha\in(0,1]$, depending on $\delta, L, m, \rho$ and universal parameters, so that for $X\in\Gamma_f$ and with $R>R_0$,
\begin{equation}\label{eqGF:DecayOnMassOfPoissonKernel}
\int_{\Gamma_f\setminus B_{R}} P_f(X+s\nu(X),Y)d\sigma_f(Y)\leq \frac{C_3 s}{R^\alpha}.
\end{equation}
\end{proposition}
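Again for orientation (a standard computation, not needed in the sequel), both estimates are explicit in the model half-space case: the Poisson kernel of $\{x_{n+1}>0\}\subset\mathbb R^{n+1}$ is $P(X,Y)=c_n\,d(X)\abs{X-Y}^{-n-1}$, exactly of the stated form, and for $X$ on the boundary the decay estimate holds with $\alpha=1$, since

```latex
\begin{align*}
\int_{\{Y\in\partial\mathbb R^{n+1}_{+}\ :\ \abs{Y-X}>R\}} \frac{c_n\,s}{\abs{X+se_{n+1}-Y}^{n+1}}\,d\sigma(Y)
\ \leq\ C s\int_{R}^{\infty}\frac{t^{n-1}}{t^{n+1}}\,dt\ =\ \frac{Cs}{R}.
\end{align*}
```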
For technical reasons, we also need a slight variation on Proposition \ref{propGF:PoissonKernel}, which is related to conditions necessary to invoke results from the earlier work \cite{GuSc-2019MinMaxEuclideanNATMA} that we state here in Theorem \ref{thmFD:StructureOfJFromGSEucSpace}.
\begin{lemma}\label{lemGF:DecayAtInfinityForTechnicalReasons}
There exist constants $c_0$, $C>0$ and $\alpha\in(0,1]$, depending on $\delta$, $L$, $m$, $\rho$, so that if $f\in C^{1,\textnormal{Dini}}_\rho(B_{2R}(0))$, $\delta\leq f\leq L-\delta$, and $\abs{\nabla f}\leq m$, then for $X\in B_R\cap \Gamma_f$ and $0<s<c_0$,
\begin{align*}
\int_{\Gamma_f\setminus B_{2R}}P_f(X+s\nu(X),Y)d\sigma_f(Y) \leq \frac{Cs}{R^\alpha}.
\end{align*}
\end{lemma}
\begin{rem}
It is worth noting that based purely on the Lipschitz constant of $f$, one would obtain the same estimate as in Lemma \ref{lemGF:DecayAtInfinityForTechnicalReasons}, but with the upper bound $C\frac{s^\alpha}{R^\alpha}$. The Dini condition in $B_{2R}$ is what allows one to obtain $s$, instead of $s^\alpha$, in the estimate.
\end{rem}
For the convenience of the reader, we have provided proofs of the above results in the Appendix. See \cite{Cho-2006GreensFunctionPotentialAnal} for a parabolic version of related results.
\subsection{Background results on regularity for integro-differential equations}\label{sec:BackgroundParabolic}
For our purposes, we will invoke results for parabolic integro-differential equations that originate mainly in Chang Lara - Davila \cite{ChangLaraDavila-2016HolderNonlocalParabolicDriftJDE} and Silvestre \cite{Silvestre-2011RegularityHJE}.
Following \cite{Chan-2012NonlocalDriftArxiv} and \cite{ChangLaraDavila-2016HolderNonlocalParabolicDriftJDE}, we consider fully nonlinear parabolic equations whose linear versions are $(\partial_t - L_{K,b} )u$, where for $u : \mathbb R^n \times \mathbb{R} \rightarrow \mathbb{R}$,
\begin{equation}\label{eqPara:linearoperators}
L_{K,b} u(x,t) := b(x,t)\cdot\nabla u(x,t) + \int_{\mathbb R^n}\delta_h u(x,t)K(x,t,h) \ dh,
\end{equation}
where $b(x,t) \in \mathbb{R}^n$ is a bounded vector field and $\delta_hu(x,t) := u(x+h,t)-u(x,t)- {\mathbbm{1}}_{B_{r_0}}(h)\nabla u(x,t)\cdot h$. For any $r \in (0,r_0)$, consider the rescaled function
$$u_r(x,t) := \frac{1}{r} u(rx, rt).$$
A direct calculation shows that if $u$ satisfies the equation $(\partial_t - L_{K,b})u(x,t) = \varphi(x,t)$, then $u_r$ satisfies the equation $(\partial _t - L_{K_r,b_r})u_r(x,t) = \varphi_r(x,t)$, where
$$K_r(x,t,h) := r^{n+1} K(rx,rt, rh), \quad b_r(x,t) := b(rx,rt) - \int_{B_{r_0} \backslash B_r} h K(rx,rt,h) \ dh, \quad \varphi_r(x,t) = \varphi(rx,rt).$$
Based on this scaling behavior, we are led to consider the following class of linear operators.
\begin{definition}[cf. Section 2 of \cite{ChangLaraDavila-2016HolderNonlocalParabolicDriftJDE}]\label{defPara:scaleinvariantclass} Given a positive number $\Lambda$, the class $\mathcal L_{\Lambda}$ is the collection of linear operators of the form $L_{K,b}$ as in \eqref{eqPara:linearoperators} with $K$ and $b$ satisfying the properties
\begin{align*}
&\mathrm{(i)} \
\Lambda^{-1} \abs{h}^{-n-1}\leq K(x,t,h)\leq \Lambda\abs{h}^{-n-1} \qquad \text{for all } (x,t,h) \in \mathbb R^n\times[0,T]\times\mathbb R^n, \\
&\mathrm{(ii)} \ \sup_{0 < \rho < 1, \ (x,t) \in \mathbb R^n\times[0,T]} \abs{b(x,t)-\int_{B_{r_0} \setminus B_{\rho}} hK(x,t,h)dh}\leq \Lambda.
\end{align*}
\end{definition}
Let us show that if $L_{K,b} \in \mathcal L_{\Lambda}$ then $L_{K_r,b_r}\in\mathcal L_{\Lambda}$ for all $r \in (0,1)$; for this computation we take $r_0 = 1$ and suppress the dependence on $t$. The bounds (i) on the kernels are immediate: for the upper bound, we have
$$K_r(x,h) = r^{n+1} K(rx, rh) \leq r^{n+1} \Lambda |rh|^{-n-1} = \Lambda |h|^{-n-1},$$
while for the lower bound, we have
$$K_r(x,h) = r^{n+1} K(rx, rh) \geq \Lambda^{-1} r^{n+1} |rh|^{-n-1} = \Lambda^{-1} |h|^{-n-1}.$$
To show that $b_r, K_r$ satisfy (ii), let $\rho \in (0,1)$ and $x \in \mathbb R^n$ be arbitrary. Then
\begin{align*}
\abs{b_r(x)-\int_{B_1 \setminus B_{\rho}} hK_r(x,h)dh} & = \abs{b(rx) - \int_{B_1 \setminus B_r} hK(rx,h)dh - \int_{B_1 \setminus B_{\rho}} hr^{n+1}K(rx,rh)dh} \\
& = \abs{b(rx) - \int_{B_1 \setminus B_r} hK(rx,h)dh - \int_{B_r \setminus B_{\rho r}} h K(rx,h)dh} \\
& = \abs{ b(rx) - \int_{B_1 \setminus B_{\rho r}} hK(rx,h)dh} \leq \Lambda.
\end{align*}
Consequently, $L_{K_r,b_r} \in \mathcal L_{\Lambda}$.
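The change of variables in the second equality above can also be checked numerically (a sketch in one dimension, with a hypothetical kernel satisfying the ellipticity bounds; the specific $K$, $r$, and $\rho$ are our choices):

```python
import numpy as np
from scipy.integrate import quad

# hypothetical 1-d kernel within the ellipticity bounds: 0.5 <= u^2 K(u) <= 1.5
K = lambda u: (1.0 + 0.5*np.sin(u)) / u**2

r, rho = 0.3, 0.4   # arbitrary scale and inner radius

# int_{rho < |h| < 1} h * r^2 K(rh) dh   (n = 1, so the scaling factor is r^{n+1} = r^2)
lhs = sum(quad(lambda h: h * r**2 * K(r*h), a, b)[0]
          for a, b in [(-1.0, -rho), (rho, 1.0)])

# int_{rho*r < |u| < r} u K(u) du, i.e. after the substitution u = r h
rhs = sum(quad(lambda u: u * K(u), a, b)[0]
          for a, b in [(-r, -rho*r), (rho*r, r)])

assert abs(lhs - rhs) < 1e-6   # the two annular integrals agree
```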
The class $\mathcal L_{\Lambda}$ gives rise to the extremal operators
\begin{align}\label{eqPara:ExtremalOperators}
\mathcal M^+_{\mathcal L_{\Lambda}}(u) = \sup_{L\in\mathcal L_{\Lambda}}L(u), \quad \mathcal M^-_{\mathcal L_{\Lambda}}(u) = \inf_{L\in \mathcal L_{\Lambda}}L(u).
\end{align}
These operators are typically used to characterize differences of a given nonlocal operator, say $J$, in which one would require
\begin{align}\label{eqPara:ExtremalInequalities}
\mathcal M^-_{\mathcal L_{\Lambda}}(u-v)\leq J(u)-J(v)\leq \mathcal M^+_{\mathcal L_{\Lambda}}(u-v),
\end{align}
where one can change the operators by changing the set of functionals included in $\mathcal L_{\Lambda}$. This is what is known as determining an ``ellipticity'' class for $J$.
By the scale invariance of $\mathcal L_{\Lambda}$ we know that $\mathcal M^{\pm}_{\mathcal L_{\Lambda}}(u_r)(x) = \mathcal M^{\pm}_{\mathcal L_{\Lambda}}(u)(rx)$.
The cylinders corresponding to the maximal operators $\mathcal M^{\pm}_{\mathcal L_{\Lambda}}$ are
$$Q_r = (-r,0] \times B_r(0), \qquad Q_r(t_0,x_0) = (t_0 - r, t_0] \times B_r(x_0).$$
\begin{definition} The function $u$ is a viscosity supersolution of the equation
$$\partial_t u - \mathcal M^{-}_{\mathcal L_{\Lambda}}u = \varphi$$
if for all $\varepsilon > 0$ and all $\psi : \mathbb R \times \mathbb R^n \to \mathbb R$ left-differentiable in $t$, twice pointwise differentiable in $x$, and satisfying $\psi(t,x) \leq u(t,x)$ with equality at $(t_0,x_0)$, the function $v_{\varepsilon}$ defined as
$$
v_{\varepsilon}(t,x) =
\begin{cases}
\psi(t,x) \quad \text{if } (t,x) \in Q_{\varepsilon}(t_0,x_0), \\
u(t,x) \quad \text{otherwise}
\end{cases}
$$
satisfies the inequality
$$\partial_t v_{\varepsilon}(t_0,x_0) - \mathcal M^{-}_{\mathcal L_{\Lambda}}v_{\varepsilon}(t_0,x_0) \geq \varphi(t_0,x_0).$$
The corresponding definition of a viscosity subsolution is obtained by considering a function $\psi$ satisfying $\psi(t,x) \geq u(t,x)$ with equality at $(t_0,x_0)$, and requiring
\begin{align*}
\partial_t v_{\varepsilon}(t_0,x_0) - \mathcal M^{-}_{\mathcal L_{\Lambda}}v_{\varepsilon}(t_0,x_0) \leq \varphi(t_0,x_0).
\end{align*}
The same definitions hold for $\partial_t u -\mathcal M^+_{\mathcal L_{\Lambda}}u=\varphi$.
\end{definition}
The main regularity result that we need is stated below, and can be found in \cite{ChangLaraDavila-2016HolderNonlocalParabolicDriftJDE}; see also \cite{SchwabSilvestre-2014RegularityIntDiffVeryIrregKernelsAPDE, Silvestre-2011RegularityHJE, Silvestre-2014RegularityParabolicICM}.
\begin{proposition}[H\"older Estimate, Section 7 of \cite{ChangLaraDavila-2016HolderNonlocalParabolicDriftJDE}]\label{propPara:holderestimate} Suppose $u$ is bounded in $\mathbb{R}^n \times [0,t_0]$ and satisfies in the viscosity sense
\begin{equation}\label{eqPara:subandsuper}
\begin{cases}
\partial_t u - \mathcal M_{\mathcal L_{\Lambda}}^+u \leq A \\
\partial_t u - \mathcal M_{\mathcal L_{\Lambda}}^-u \geq -A
\end{cases}
\end{equation}
in $Q_{t_0}(t_0,x_0)$ for some constant $A > 0$. Then there exist constants $C>0$ and $\gamma\in(0,1)$, depending only on $n$ and $\Lambda$, such that
$$
||u||_{C^{\gamma}(Q_{\frac{t_0}{2}}(t_0,x_0))} \leq \frac{C}{t_0^{\gamma}} \left(||u||_{L^{\infty}(\mathbb{R}^n \times [0,t_0])} + t_0 A \right).
$$
\end{proposition}
\begin{rem}
The equations in (\ref{eqPara:subandsuper}) simply say that $u$ is a subsolution of $\partial_t u - \mathcal M^+_{\mathcal L_{\Lambda}} u = A$ and a supersolution of $\partial_t u - \mathcal M^-_{\mathcal L_{\Lambda}} u = -A$.
\end{rem}
Proposition \ref{propPara:holderestimate} differs slightly from \cite{ChangLaraDavila-2016HolderNonlocalParabolicDriftJDE} in that it accommodates the cylinder $Q_{t_0}(t_0,x_0)$ rather than the standard cylinder $Q_1$, and from \cite{Silvestre-2011RegularityHJE} in that it includes a non-zero right-hand side, $A$. We therefore make a small comment here as to the appearance of the term $t_0A$ in the conclusion of the estimate. Indeed, this is simply a result of rescaling the equation. As in \cite{ChangLaraDavila-2016HolderNonlocalParabolicDriftJDE}, we already know that Proposition \ref{propPara:holderestimate} holds for $u$ that are bounded in $\mathbb R^n \times [-1,0]$ and satisfy \eqref{eqPara:subandsuper} in $Q_1$; in this case, the $C^{\gamma}$ estimate holds on $Q_{\frac{1}{2}}$. Let us now show what happens for arbitrary $t_0 > 0$ and $x_0 \in \mathbb R^n$.
Let $u$ be as in the statement of Proposition \ref{propPara:holderestimate} and define $\tilde{u}(t,x) := u((t_0, x_0) + t_0(t,x))$. Notice that if $(t,x) \in Q_r$, then $(t_0, x_0) + t_0(t,x) \in Q_{t_0 r}(t_0,x_0)$ for all $r \in [0,1]$. By the translation and scaling invariance properties of the operators $\partial_t - \mathcal M^{\pm}_{\mathcal L_{\Lambda}}$, we thus have
$$\partial_t \tilde{u} - \mathcal M^+_{\mathcal L_{\Lambda}} \tilde{u} = t_0 (\partial_t u - \mathcal M^+_{\mathcal L_{\Lambda}}u) \leq t_0 A \quad \text{ and } \quad \partial_t \tilde{u} - \mathcal M^-_{\mathcal L_{\Lambda}}\tilde{u} = t_0(\partial_t u - \mathcal M^-_{\mathcal L_{\Lambda}}u) \geq -t_0A \quad \text{ in } Q_1.$$
On the other hand, we also have $||\tilde{u}||_{L^{\infty}(\mathbb R^n \times [-1,0])} = ||u||_{L^{\infty}(\mathbb R^n \times [0,t_0])}$ and for all $(t,x) \in Q_{\frac{1}{2}}$,
$$\frac{|\tilde{u}(t,x) - \tilde{u}(0,0)|}{|(t,x)|^{\gamma}} = \frac{|u((t_0,x_0) + t_0(t,x)) - u(t_0,x_0)|}{|(t,x)|^{\gamma}} = \frac{t_0^{\gamma} |u((t_0,x_0) + t_0(t,x)) - u(t_0,x_0)|}{|(t_0,x_0) + t_0(t,x) - (t_0,x_0)|^{\gamma}}.$$
Consequently, $||\tilde{u}||_{C^{\gamma}(Q_{\frac{1}{2}})} = t_0^{\gamma} ||u||_{C^{\gamma}(Q_{\frac{t_0}{2}}(t_0,x_0))}$. The conclusion follows by applying to $\tilde{u}$ the version of Proposition \ref{propPara:holderestimate} for functions that are bounded in $\mathbb R^n \times [-1,0]$ and satisfy \eqref{eqPara:subandsuper} in $Q_1$, and then rewriting the resulting $C^{\gamma}$ estimate in terms of $u$.
\section{A Finite Dimensional Approximation For $I$}\label{sec:FiniteDim}
An important note for this section: we will take $N$ to be an arbitrary dimension, and we will look generically at operators on $C^{1,\textnormal{Dini}}(\mathbb R^N)$. The application to equation (\ref{eqIN:HSMain}) will be for $N=n$ (as $f:\mathbb R^n\to\mathbb R$).
Here we will record some tools that were developed in \cite{GuSc-2019MinMaxEuclideanNATMA} and \cite{GuSc-2019MinMaxNonlocalTOAPPEAR} to investigate the structure of operators that enjoy what we call a global comparison property (see Definition \ref{defFD:GCP}, below). The point of these tools is to build linear mappings that can be used to ``linearize'' the nonlinear operator, $I$, through the min-max procedure apparent in Theorem \ref{thm:StructureOfHMain}, or more precisely, to reconstruct $I$ from a min-max of a special family of linear operators.
The linear mappings we build to achieve a min-max for $I$ are limits of linear mappings that are differentials of maps with similar properties for a family of simpler operators that can be used to approximate $I$. The advantage of the approximations constructed in \cite{GuSc-2019MinMaxEuclideanNATMA} and \cite{GuSc-2019MinMaxNonlocalTOAPPEAR} is that they are operators with the same domain as $I$ but enjoy the property of having finite rank (with the rank going to infinity as the approximates converge to the original). In this regard, even though the original operator and approximating operators are nonlinear, the approximates behave as Lipschitz operators on a high, but finite dimensional space, and are hence differentiable almost everywhere. This differentiability makes the min-max procedure straightforward, and it is then passed through the limit back to the original operator, $I$. The basis for our finite dimensional approximation to $I$ is the Whitney extension for a family of discrete and finite subsets of $\mathbb R^n$, whose union is dense in $\mathbb R^n$. The reason for doing this is that we can restrict the functions to be identically zero outside of a finite set, and naturally, the collection of these functions is a finite dimensional vector space. Thus, Lipschitz operators on those functions will be differentiable almost everywhere, and as mentioned this is one of the main points of \cite{GuSc-2019MinMaxEuclideanNATMA} to represent $I$ as a min-max over linear operators.
\subsection{The Whitney Extension}
Here we just list some of the main properties of the Whitney extension constructed in \cite{GuSc-2019MinMaxEuclideanNATMA}. It is a variant of the construction in Stein \cite{Stei-71}, where in \cite{GuSc-2019MinMaxEuclideanNATMA} it is designed to preserve the grid structure of $2^{-m}\mathbb Z^N$. We refer the reader to \cite[Section 4]{GuSc-2019MinMaxEuclideanNATMA} for complete details.
\begin{definition}\label{defFD:GridSetGm}
For each $m\in\mathbb N$, the finite set, $G_m$, is defined as
\begin{align*}
G_m=2^{-m}\mathbb Z^N.
\end{align*}
We will call $h_m$ the grid size, defined as $h_m=2^{-m}$.
\end{definition}
We note that in \cite[Section 4]{GuSc-2019MinMaxEuclideanNATMA}, the sets for the Whitney extension were constructed as a particular disjoint cube decomposition that covers $\mathbb R^N\setminus h_m\mathbb Z^N$ and was shown to be invariant under translations of $G_m$ by any vector in $G_m$. For each $m$, we will index these sets by $k\in\mathbb N$, and we will call them $Q_{m,k}$. See \cite[Section 4]{GuSc-2019MinMaxEuclideanNATMA} for the precise details of $Q_{m,k}$ and $\phi_{m,k}$. Here we record these results.
\begin{lemma}[Lemma 4.3 in \cite{GuSc-2019MinMaxEuclideanNATMA}]\label{LemFD:Cubesmk}
For every $m\in\mathbb N$, there exists a collection of cubes $\{Q_{m,k}\}_k$ such that
\begin{enumerate}
\item The cubes $\{Q_{m,k}\}_k$ have pairwise disjoint interiors.
\item The cubes $\{Q_{m,k}\}_k$ cover $\mathbb{R}^N \setminus G_m$.
\item There exists a universal pair of constants, $c_1$, $c_2$, so that
\begin{align*}
c_1 \textnormal{diam}(Q_{m,k})\leq \textnormal{dist}(Q_{m,k},G_m) \leq c_2 \textnormal{diam}(Q_{m,k}).
\end{align*}
\item For every $h \in G_m$, there is a bijection $\sigma_h:\mathbb{N}\to\mathbb{N}$ such that $Q_{m,k}+h = Q_{m,\sigma_h k}$ for every $k\in\mathbb{N}$.
\end{enumerate}
\end{lemma}
\begin{rem}\label{remFD:CubeParametersmk}
Just for clarity, we make explicit for the reader: the parameter, $m\in\mathbb N$, is used for the grid size, $2^{-m}\mathbb Z^N$, and the parameter, $k\in\mathbb N$, in $Q_{m,k}$, etc. is the index resulting from a cube decomposition of $\mathbb R^N\setminus G_m$.
\end{rem}
\begin{rem}\label{remark:maximum number of overlapping cubes}
In what follows, given a cube $Q$, we shall denote by $Q^*$ the cube with the same center as $Q$ but whose sides are increased by a factor of $9/8$. Observe that for every $m$ and $k$, we have $Q_{m,k}^* \subset \mathbb{R}^N\setminus 2^{2-m}\mathbb{Z}^N$, and that any given $x$ lies in at most some number $C(N)$ of the cubes $Q_{m,k}^*$.
\end{rem}
\begin{proposition}[Proposition 4.6 in \cite{GuSc-2019MinMaxEuclideanNATMA}]\label{propFD:ParitionOfUnitymk}
For every $m$, there is a family of functions $\phi_{m,k}(x)$, with $k\in\mathbb N$, such that
\begin{enumerate}
\item $0\leq \phi_{m,k}(x)\leq 1$ for every $k$ and $\phi_{m,k} \equiv 0$ outside $Q_{m,k}^*$ (using the notation in Remark \ref{remark:maximum number of overlapping cubes}).
\item $\sum_k \phi_{m,k}(x) =1 $ for every $x \in \mathbb{R}^N\setminus G_m$.
\item There is a constant $C$, independent of $m$ and $k$, such that
\begin{align*}
|\nabla\phi_{m,k}(x)| \leq \frac{C}{\textnormal{diam}(Q_{m,k})}.
\end{align*}
\item For every $z \in G_m$, we have
\begin{align*}
\phi_{m,k}(x-z) = \phi_{m,\sigma_zk}(x),\;\;\forall\;k,\;x,
\end{align*}
where $\sigma_z$ are the bijections introduced in Lemma \ref{LemFD:Cubesmk}.
\end{enumerate}
\end{proposition}
We will call $\{\phi_{m,k}\}$ the corresponding partition of unity for $\{Q_{m,k}\}$ that is appropriate for the Whitney extension. As in \cite[Section 4]{GuSc-2019MinMaxEuclideanNATMA}, we use the following finite difference operator to construct approximate Taylor polynomials for the Whitney extension. Denote by $\nabla_m^1u(x)$ the unique vector that satisfies, for $x\in G_m$ and $j=1,\dots,N$,
\begin{align}\label{eqFD:DiscreteGradient}
\nabla_m^1u(x)\cdot e_j = \frac{1}{2h_m}(u(x+h_me_j)-u(x-h_me_j)).
\end{align}
Note that this exploits the fact that $x\pm h_me_j\in G_m$ if $x \in G_m$.
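For concreteness, the centered difference (\ref{eqFD:DiscreteGradient}) can be sketched in a few lines; the sketch below takes $N=2$ and an affine test function of our own choosing, on which centered differences are exact (a quick check of the normalization $\frac{1}{2h_m}$):

```python
import numpy as np

# Sketch of the discrete gradient: nabla_m^1 u(x) . e_j is the centered
# difference of u over the grid G_m = 2^{-m} Z^N, with grid size h_m = 2^{-m}.
def discrete_gradient(u, x, m, N=2):
    h = 2.0 ** (-m)                        # h_m
    g = np.zeros(N)
    for j in range(N):
        e = np.zeros(N)
        e[j] = 1.0
        g[j] = (u(x + h * e) - u(x - h * e)) / (2.0 * h)
    return g

u = lambda x: 3.0 * x[0] - 2.0 * x[1]      # affine test function (illustrative)
g = discrete_gradient(u, np.array([0.25, -0.5]), m=3)
# centered differences recover the gradient of an affine function exactly
```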
In order to define the polynomials that will be used to build the Whitney extension, we need some notation for the centers of cubes and closest points in $G_m$.
\begin{definition}\label{defFD:CubeCentersAndClosestPoints}
For each $m$ and $k$, we will call $y_{m,k}$ the center of the cube $Q_{m,k}$, and $\hat y_{m,k}$ will denote the unique element of $G_m$ so that
\begin{align*}
d(y_{m,k},G_m)=\abs{y_{m,k}-\hat y_{m,k}}.
\end{align*}
\end{definition}
For $f:\mathbb R^N\to\mathbb R$, we can now define a polynomial used to approximate it:
\begin{definition}\label{defFD:PolymonialForWhitney}
Using the discrete gradient, $\nabla_m^1 f$ in (\ref{eqFD:DiscreteGradient}), we define a first order polynomial depending on $f$, $m$, $k$, as
\begin{align*}
\text{for}\ x\in Q_{m,k},\ \ P^1_{f,k}(x)=f(\hat y_{m,k}) + \nabla_m^1 f(\hat y_{m,k})\cdot(x-\hat y_{m,k}).
\end{align*}
Given any $f$, we denote the $m$-level truncation, $\tilde f_m$, as
\begin{align*}
\tilde f_m = f{\mathbbm{1}}_{B_{2^m}}.
\end{align*}
\end{definition}
With all of these ingredients in hand, we can define the Whitney extensions that we will use.
\begin{definition}\label{defFD:WhitneyExtension}
Using the notation of Definition \ref{defFD:PolymonialForWhitney}, and the partition of unity, $(\phi_{m,k})$, in Proposition \ref{propFD:ParitionOfUnitymk},
the zero order Whitney extension is
\begin{align*}
E^0_m(f,x)=
\begin{cases}
\tilde f_m(x)\ &\text{if}\ x\in G_m,\\
\sum_{k\in\mathbb N}\tilde f_m(\hat y_{m,k})\phi_{m,k}(x)\ &\text{if}\ x\not\in G_m,
\end{cases}
\end{align*}
and the first order Whitney extension is
\begin{align*}
E^1_m(f,x)=
\begin{cases}
\tilde f_m(x)\ &\text{if}\ x\in G_m,\\
\sum_{k\in\mathbb N} P^1_{\tilde f_m,k}(x)\phi_{m,k}(x)\ &\text{if}\ x\not\in G_m.
\end{cases}
\end{align*}
\end{definition}
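As a caricature of the zero order extension, the one-dimensional sketch below uses piecewise-linear interpolation in place of the cube decomposition and partition of unity; it is only meant to convey the flavor of $E^0_m$, namely that grid values are reproduced exactly and the Lipschitz constant on the grid is not increased:

```python
import numpy as np

# One-dimensional caricature of E^0_m: extend values prescribed on a finite
# window of the grid G_m = 2^{-m} Z to all of R by piecewise-linear
# interpolation (np.interp clamps to the edge values outside the window).
def extend_zero_order(grid_vals, m):
    h = 2.0 ** (-m)
    xs = h * np.arange(len(grid_vals))    # a finite window of G_m
    return lambda x: np.interp(x, xs, grid_vals)

m = 2
vals = np.array([0.0, 1.0, 0.5, 2.0])
E = extend_zero_order(vals, m)
assert np.isclose(E(2 * 2.0 ** (-m)), 0.5)   # grid values reproduced
```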
\subsection{The finite dimensional approximation}
As mentioned above, we give an approximation procedure and min-max formula for generic operators acting on convex subsets of $C^{1,\textnormal{Dini}}(\mathbb R^N)$. We will call these operators $J:X_\rho\to C^0(\mathbb R^N)$, where the Banach space $X_\rho$ appears below, in Definition \ref{defFD:XrhoSpace}. Our particular interest is the eventual application of this material to the operator $I$ defined in (\ref{eqIN:BulkEqForHSOperator}) and (\ref{eqIN:defHSOperator}).
The spaces that are used for the domain of the operators, $J$, are given here.
\begin{definition}\label{defFD:XrhoSpace}
\begin{align*}
X_{\rho}= \left\{ f\in C^{1,\textnormal{Dini}}(\mathbb R^N) \ :\ \exists\ C_f,\ \textnormal{s.t.}\ \abs{\nabla f(x)-\nabla f(y)}\leq C_f\rho(\abs{x-y}) \ \textnormal{ for all } x,y \in \mathbb R^N\right\}.
\end{align*}
\begin{align*}
X_{\rho,x}=\left\{ f\in X_\rho \ :\ \exists\ C_f,\ \textnormal{s.t.}\ \abs{f(y)-f(x)}\leq C_f\abs{y-x}\rho(\abs{y-x}) \ \textnormal{ for all } y \in \mathbb R^N \right\}.
\end{align*}
\end{definition}
We note that $X_\rho$ is a Banach space with the usual norm on $C^1$ combined with the additional $\textnormal{Dini}$ semi-norm
\begin{align*}
[\nabla f]_\rho = \inf\left\{C \ :\ \abs{\nabla f(x)-\nabla f(y)}\leq C\rho(\abs{x-y}) \ \textnormal{ for all } x,y\in\mathbb R^N\right\},
\end{align*}
see \cite[Chapter VI, Cor 2.2.3 and Exercise 4.6]{Stei-71}. Furthermore, $X_{\rho,x}$ is the subspace of $X_\rho$ consisting of those functions vanishing with a rate at $x$.
\begin{rem}\label{remFD:ModulusIsAlsoInXRho}
We note that $f\in X_\rho$ if and only if
\begin{align*}
\forall\ x,y\in\mathbb R^N,\ \ \abs{f(x+y)-f(x)-\nabla f(x)\cdot y}\leq C_f\abs{y}\rho(\abs{y}).
\end{align*}
Without loss of generality, $\rho$ can be chosen so that $\tilde \rho(y)=\norm{f}_{L^\infty}\abs{y}\rho(\abs{y})$ satisfies $\tilde \rho\in X_\rho$. This means that whenever $f\in \mathcal K(\delta,L,m,\rho)$, we have that $\psi(y)=\delta+ \norm{f}_{L^\infty}\abs{y}\rho(\abs{y})$ satisfies $\psi\in\mathcal K(\delta,L,m,\rho)$.
\end{rem}
The first step in making operators with finite rank is to restrict input functions to the finite set, $G_m$. So, we define the restriction operator,
\begin{align}\label{eqFD:DefOfTmRestrictionOperator}
T_m: C^0(\mathbb R^N)\to \mathbb R^{G_m},\ \ \ T_m f := f|_{G_m}.
\end{align}
Thus, we can use the restriction operator to create a projection of $X_\rho$ onto a finite dimensional subspace of functions depending only on their values on $G_m$:
\begin{align}\label{eqFD:DefOfProjectionOperatorOntoGm}
\pi_m = E^1_m\circ T_m : X_\rho \to X_\rho.
\end{align}
One of the reasons for using the Whitney extension to define $E^1_m$ is that operators such as $\pi_m$ will be Lipschitz, with a norm that is independent of $G_m$.
\begin{theorem}[Stein Chapter VI result 4.6 \cite{Stei-71}]\label{thmFD:ProjectionIsLinearAndBounded}
$E^0_m$ is linear, and if $g$ is Lipschitz on $G_m$, then $E^0_mg$ is Lipschitz on $\mathbb R^N$ with the same Lipschitz constant. Furthermore,
$\pi_m$ is linear and, for a constant, $C>0$, that depends only on dimension, for all $f\in X_\rho$,
\begin{align*}
\norm{\pi_m f}_{X_\rho}\leq C\norm{f}_{X_\rho}.
\end{align*}
\end{theorem}
On top of the boundedness of $\pi_m$, we have intentionally constructed the sets $G_m$, the cubes $\{Q_{m,k}\}_k$, and the partition functions $\phi_{m,k}$, to respect translations over $G_m$.
\begin{definition}\label{defFD:TranslationOperator}
For $f:\mathbb R^N\to \mathbb R$, and $z\in\mathbb R^N$, we define the translation operator $\tau_z$ as
\begin{align*}
\tau_zf(x) = f(x+z).
\end{align*}
\end{definition}
In particular, property (4) of Proposition \ref{propFD:ParitionOfUnitymk} gives the following translation invariance of $\pi_m$.
\begin{lemma}[Proposition 4.14 of \cite{GuSc-2019MinMaxEuclideanNATMA}]\label{lemFD:Pi-mTranslationInvariance}
If $f:\mathbb R^N\to\mathbb R$, $z\in G_m$, fixed, and $\tau_z$ in Definition \ref{defFD:TranslationOperator}, then
\begin{align*}
\pi_m( \tau_z f) = \tau_z\left(\pi_m f\right),\ \ \text{and}\ \ E^0_m\circ T_m(\tau_z f) = \tau_z \left(E^0_m\circ T_m f\right).
\end{align*}
\end{lemma}
With these nice facts about the projection operator, $\pi_m$, in hand, we can define our approximating operators for $J$; these approximates have finite rank.
\begin{definition}\label{defFD:DefOfFDApprox}
Given $J$ that is a Lipschitz mapping of $X_\rho\to C^0_b(\mathbb R^N)$, the finite dimensional approximation, $J^m$, is defined as
\begin{align}\label{eqFD:DefOfFdApprox}
J^m:= E^0_m\circ T_m\circ J \circ E^1_m\circ T_m= E^0_m\circ T_m\circ J\circ \pi_m,
\end{align}
where $E^0_m$ and $E^1_m$ appear in Definition \ref{defFD:WhitneyExtension}, $T_m$ is defined in (\ref{eqFD:DefOfTmRestrictionOperator}), and $\pi_m$ is defined in (\ref{eqFD:DefOfProjectionOperatorOntoGm}).
\end{definition}
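To make the composition concrete, here is a toy one-dimensional version of (\ref{eqFD:DefOfFdApprox}), with piecewise-linear interpolation on a finite grid window standing in for the Whitney extensions, and a second-difference operator standing in for $J$; all of these choices are illustrative, not the operators of the paper:

```python
import numpy as np

# Toy J^m = E^0_m . T_m . J . pi_m in 1D: restrict to a grid, apply J to the
# interpolant, restrict again, and interpolate the result.
def make_Jm(J, m, R=4.0):
    h = 2.0 ** (-m)
    xs = np.arange(-R, R + h / 2, h)              # finite window of the grid
    def Jm(f):
        pf = lambda x: np.interp(x, xs, f(xs))    # caricature of pi_m f
        vals = np.array([J(pf, x) for x in xs])   # T_m (J pi_m f)
        return lambda x: np.interp(x, xs, vals)   # caricature of E^0_m
    return Jm

# illustrative translation-invariant nonlocal operator: a second difference
J = lambda f, x: f(x + 0.5) - 2.0 * f(x) + f(x - 0.5)

Jm = make_Jm(J, m=4)
g = Jm(lambda x: np.asarray(x) ** 2)
# away from the window's edge, J^m reproduces J exactly on this quadratic
assert abs(g(0.0) - 0.5) < 1e-9
```

The output of `Jm` depends only on the finitely many grid values of its input, which is the sense in which $J^m$ has finite rank.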
Below, we will see that the $J^m$ are Lipschitz maps. It will also matter in which way $J^m\to J$; for our purposes, it is enough that these approximate operators converge pointwise to $J$ over $X_\rho$, in the following sense.
\begin{proposition}[Corollary 5.20 of \cite{GuSc-2019MinMaxEuclideanNATMA}]\label{propFD:ConvergenceOFApprox}
For all $f\in X_\rho$, for each $R>0$,
\begin{align*}
\lim_{m\to\infty}\norm{J^m(f)-J(f)}_{L^\infty(B_R)}=0.
\end{align*}
\end{proposition}
A property that was observed in \cite{GuSc-2019MinMaxNonlocalTOAPPEAR} and also used in \cite{GuSc-2019MinMaxEuclideanNATMA} is the ``almost'' preservation of ordering by the projections, $\pi_m$. Although ordering is, in general, not preserved, on functions that are regular enough there is a quantifiable error term. We record this here because it plays a fundamental role later on, in Section \ref{sec:ProofOfStructureThm}, to preserve certain estimates. In particular, we eventually focus on the fact that our operators have an extra structure called the global comparison property (see Definition \ref{defFD:GCP}); whenever $J$ enjoys the global comparison property, $J^m$ almost enjoys it as well, up to a quantifiable error term over a large enough subspace of $X_\rho$. The main ingredient to this end is the following lemma.
\begin{lemma}[Lemma 4.17 of \cite{GuSc-2019MinMaxEuclideanNATMA}]\label{lemFD:RemainderForNonNegPi}
If $w\in C^{1,\alpha}(\mathbb R^N)$, $x_0\in G_m$, $w\geq0$, $w(x_0)=0$, then there exists a function, $R_{\alpha,m,w,x_0}\in C^{1,\alpha/2}(\mathbb R^N)$ with $R_{\alpha,m,w,x_0}(x_0)=0$,
\begin{align*}
\forall\ x\in\mathbb R^N,\ \pi_mw(x)+R_{\alpha,m,w,x_0}(x)\geq 0,
\ \ \ \textnormal{and}\ \ \
\norm{R_{\alpha,m,w,x_0}}_{C^{1,\alpha/2}}\leq C h_m^\beta\norm{w}_{C^{1,\alpha}},
\end{align*}
where $\beta\in(0,1)$ depends upon $\alpha$.
\end{lemma}
\begin{rem}\label{remFD:JAlsoLipOnC1Al}
If $J:X_\rho\to C^0_b$ is Lipschitz, then for any modulus $\omega$ such that $\omega\leq\rho$, $J$ is also a Lipschitz mapping on $X_\omega$. In particular, for all $\alpha\in(0,1)$, such a $J$ is a Lipschitz mapping on $C^{1,\alpha}_b(\mathbb R^N)$.
\end{rem}
\subsection{A subset of ``supporting'' linear operators, $\mathcal D_J$}
The main reason for using the approximating operators, $J^m$, is that as maps that have finite rank, they are effectively maps on a finite dimensional space and hence are differentiable at almost every $f\in X_\rho$. Furthermore, this almost everywhere differentiability endows them with a natural min-max structure. It turns out that taking limits of ``linearizations'' of $J^m$ produces a rich enough family to construct a min-max representation for the original $J$. That is the purpose of this subsection.
First, we introduce some notation for the sets of ``supporting'' differentials of maps on $X_\rho$. The first such set is simply the collection of limits of derivatives of a map that is differentiable almost everywhere.
\begin{definition}[Differential Set Almost Everywhere]
If $J$ is differentiable a.e. in $X_\rho$, we call the differential set,
\begin{align*}
\mathcal D J = \textnormal{c.h.} \{ L=\lim_k DJ[f_k;\cdot]\ :\ f_k\to f\ \text{and}\ J\ \text{is differentiable at each}\ f_k \},
\end{align*}
where we used the abbreviation ``\textnormal{c.h.}'' to denote the convex hull.
Here $DJ[f;\cdot]$ is the derivative of $J$ at $f$.
\end{definition}
This is used to build a weaker notion of ``differential'' set that we will use later, which is the limits of all derivatives of approximating operators.
\begin{definition}[Weak Differential Set]\label{defFD:DJLimitingDifferential}
For $J:X_\rho\to C^0_b(\mathbb R^N)$, we can define a weak differential set as the following:
\begin{align}\label{eqFD:DJLimitingDifferential}
\mathcal D_J = \textnormal{c.h.} \{ L\ :\ \exists m_k,\ L_{m_k}\in \mathcal D J^{m_k}\ \text{s.t.}\ \forall\ f\in X_\rho,\
\lim_{k\to\infty} L_{m_k}(f,\cdot)= L(f, \cdot) \},
\end{align}
where we used the abbreviation ``\textnormal{c.h.}'' to denote the convex hull. Here, $J^{m_k}$ are the approximating operators for $J$ that are given in Definition \ref{defFD:DefOfFDApprox}.
\end{definition}
\begin{lemma}\label{lemFD:TransInvariant}
If $J: X_\rho\to C^0_b(\mathbb R^N)$ is Lipschitz and translation invariant, then so are $J^m$, and all $L\in\mathcal D J^m$ are bounded in operator norm by the Lipschitz norm of $J^m$.
\end{lemma}
\begin{proof}[Main idea of proof of Lemma \ref{lemFD:TransInvariant}]
We do not give all the details here, but simply comment on a few points. First of all, the Lipschitz nature of $J^m$ is evident from that of $J$ and Theorem \ref{thmFD:ProjectionIsLinearAndBounded} (translation invariance is not used here). Furthermore, as $J^m$ is a Lipschitz function on a finite dimensional space, we see that all $L\in\mathcal D J^m$ must be realized as limits of derivatives of $J^m$. However, it is easily checked that the operator norm of any differential is bounded by the Lipschitz norm of the original operator, hence the claim about $L\in \mathcal D J^m$. Finally, we need to address the translation invariance of $J^m$ and $L$. This follows immediately from the translation invariance properties of the projection and extension operators listed in Lemma \ref{lemFD:Pi-mTranslationInvariance}. This translation invariance will also be inherited by any derivative of $J^m$, and hence by each $L\in \mathcal D J^m$.
\end{proof}
The reason that the set, $\mathcal D_J$, is useful for our purposes is that it gives a sort of ``maximal'' mean value inequality, which is just a variant on the usual mean value theorem (cf. Lebourg's Theorem in \cite{Clarke-1990OptimizationNonsmoothAnalysisSIAMreprint}).
\begin{lemma}[Lemma 5.2 and Remark 5.4 of \cite{GuSc-2019MinMaxEuclideanNATMA}]\label{lemFD:MVmaxProperty}
If $\mathcal K$ is a convex subset of $X_\rho$ and $J: \mathcal K\to C^0_b(\mathbb R^N)$ is Lipschitz, then
\begin{align*}
\forall\ f,g\in\mathcal K,\ \ J(f) - J(g)\leq \max_{L\in \mathcal D_J} L(f-g),
\end{align*}
where $\mathcal D_J$ is from Definition \ref{defFD:DJLimitingDifferential}.
\end{lemma}
\begin{proof}[Sketch of Lemma \ref{lemFD:MVmaxProperty}]
We note more careful details are given in \cite[Section 5]{GuSc-2019MinMaxEuclideanNATMA}, and so we just give the main idea. Given $f,g\in\mathcal K$, the usual Mean Value theorem of Lebourg \cite{Clarke-1990OptimizationNonsmoothAnalysisSIAMreprint} shows that there exists $t\in[0,1]$ and $z=tf+(1-t)g$ with the property that there is at least one $L\in \mathcal D_J(z)$ (the differential only at $z$) with the property that
\begin{align*}
J(f)-J(g)= L(f-g).
\end{align*}
Hence taking the maximum gives the result. The actual result requires slightly more detail in the invocation of Lebourg's mean value theorem, which is presented in \cite[Section 5]{GuSc-2019MinMaxEuclideanNATMA}.
\end{proof}
From this mean value inequality, a generic min-max formula for $J$ becomes immediate.
\begin{corollary}\label{corFD:GenericMinMaxForJ}
Given a convex subset $\mathcal K\subset X_\rho$, and $J:\mathcal K\to C^0_b(\mathbb R^N)$ that is Lipschitz, $J$ can be realized in the following way:
\begin{align*}
\forall\ f\in \mathcal K,\ \
J(f,x) = \min_{g\in \mathcal K}\max_{L\in \mathcal D_J} J(g,x) + L(f-g,x),
\end{align*}
where $\mathcal D_J$ is from Definition \ref{defFD:DJLimitingDifferential}.
\end{corollary}
\begin{proof}[Proof of Corollary \ref{corFD:GenericMinMaxForJ}]
For $f,g\in \mathcal K$, we can utilize Lemma \ref{lemFD:MVmaxProperty}, and then taking the minimum over all $g\in \mathcal K$ yields the claim.
\end{proof}
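The content of the corollary is visible already in the simplest finite dimensional example (purely illustrative): for $J(x)=\abs{x}$ on $\mathbb R$, the differential set is $[-1,1]$, and the min-max over supporting lines recovers $J$.

```python
import numpy as np

# min over g of max over L in D_J of J(g) + L*(x - g), for J(x) = |x| on R,
# where D_J = [-1, 1] (the convex hull of the two derivatives +/- 1).
Ls = np.linspace(-1.0, 1.0, 201)   # discretization of D_J
gs = np.linspace(-5.0, 5.0, 401)   # candidate base points g

def minmax(x):
    inner = np.abs(gs)[:, None] + Ls[None, :] * (x - gs[:, None])
    return inner.max(axis=1).min()

for x in (-2.0, 0.0, 1.5):
    # the inner max gives |g| + |x - g| >= |x|, with equality at g = x,
    # so the min-max reproduces |x|
    assert abs(minmax(x) - abs(x)) < 1e-6
```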
The next result needs a feature we call the global comparison property.
\begin{definition}\label{defFD:GCP}
We say that $J:X_\rho\to C^0(\mathbb R^N)$ obeys the global comparison property (GCP) provided that for all $f,g\in X_\rho$ and $x_0$ such that $f\leq g$ and $f(x_0)= g(x_0)$, $J$ satisfies $J(f,x_0)\leq J(g,x_0)$.
\end{definition}
In the case that $J$ enjoys the GCP, more can be said. This is one of the main results from \cite{GuSc-2019MinMaxEuclideanNATMA} and \cite{GuSc-2019MinMaxNonlocalTOAPPEAR}.
\begin{theorem}[Theorem 1.11 in \cite{GuSc-2019MinMaxEuclideanNATMA}, Theorem 1.6 in \cite{GuSc-2019MinMaxNonlocalTOAPPEAR}]\label{thmFD:StructureOfJFromGSEucSpace}
If $\mathcal K$ is a convex subset of $X_\rho$ and $J:\mathcal K\to C^0_b(\mathbb R^N)$ is such that
\begin{enumerate}[(i)]
\item $J$ is Lipschitz,
\item $J$ is translation invariant,
\item $J$ enjoys the GCP,
\item there exists a modulus, $\omega$, with $\lim_{R\to\infty}\omega(R)=0$ and
\begin{align}\label{eqFD:ExtraModulusConditionOutsideBR}
\forall\ f,g\in \mathcal K,\ \text{with}\ f\equiv g\ \text{in}\ B_{2R},\
\norm{J(f)-J(g)}_{L^\infty(B_R)}\leq \omega(R)\norm{f-g}_{L^\infty(\mathbb R^N)},
\end{align}
\end{enumerate}
then for each $L\in \mathcal D_J$, there exist the following parameters, independent of $x$:
\begin{align*}
c_L\in\mathbb R,\ b_L\in \mathbb R^N,\ \mu_L\in\textnormal{measures}(\mathbb R^N\setminus \{0\}),
\end{align*}
such that for all $f$,
\begin{align*}
L(f,x) = c_Lf(x) + b_L\cdot \nabla f(x) + \int_{\mathbb R^N}\delta_h f(x)\mu_L(dh),
\end{align*}
and $J$ can be represented as
\begin{align*}
\forall\ f\in \mathcal K,\ \ J(f,x) = \min_{g\in\mathcal K}\max_{L\in \mathcal D_J} J(g,x) + L(f-g,x).
\end{align*}
Here, for some appropriate, fixed, $r_0$, depending upon $J$, we use the notation
\begin{align*}
\delta_h f(x) = f(x+h)-f(x)-{\mathbbm{1}}_{B_{r_0}}(h) \nabla f(x)\cdot h.
\end{align*}
Furthermore, for a universal $C>0$, we have
\begin{align*}
\sup_{L\in\mathcal D_J} \left\{ \abs{c_L} + \abs{b_L} + \int_{\mathbb R^N}\min\{\abs{h}\rho(\abs{h}),1\}\mu_L(dh)\right\}
\leq C\norm{J}_{Lip,\ X_\rho\to C^0_b}.
\end{align*}
\end{theorem}
\begin{rem}
Generically, $r_0$ can be taken as $r_0=1$, allowing for a change to each of the corresponding $b_L$, but in our context, it is more natural to choose $r_0$ depending on $J$.
\end{rem}
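For orientation, we record one familiar (purely illustrative) instance of the Lévy form above: taking $c_L=0$, $b_L=0$, and the rotationally symmetric measure $\mu_L(dh)=\abs{h}^{-N-1}dh$ produces, up to a dimensional constant, the fractional Laplacian of order one, and the Dini condition on $\rho$ is exactly what makes the integrability bound of the theorem finite for this measure:

```latex
L(f,x)=\int_{\mathbb R^N}\delta_h f(x)\,\frac{dh}{\abs{h}^{N+1}}
   =-c_N(-\Delta)^{1/2}f(x),
\qquad
\int_{\mathbb R^N}\frac{\min\{\abs{h}\rho(\abs{h}),1\}}{\abs{h}^{N+1}}\,dh
   \leq C_N\left(\int_0^1\frac{\rho(s)}{s}\,ds+1\right)<\infty.
```

Near the origin one uses $\min\{\abs{h}\rho(\abs{h}),1\}\leq\abs{h}\rho(\abs{h})$ and passes to polar coordinates; away from the origin the tail $\abs{h}^{-N-1}$ is integrable on its own.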
\begin{proof}[Comments on the proof of Theorem \ref{thmFD:StructureOfJFromGSEucSpace}]
As the way Theorem \ref{thmFD:StructureOfJFromGSEucSpace} is stated does not match exactly the statements of those in \cite[Theorem 1.11]{GuSc-2019MinMaxEuclideanNATMA} or \cite[Theorem 1.6]{GuSc-2019MinMaxNonlocalTOAPPEAR}, some comments are in order. The point is that we explicitly show that the min-max representation for $J$ uses the set of linear mappings, $\mathcal D_J$, which is not made explicit in the theorems in \cite{GuSc-2019MinMaxEuclideanNATMA}, \cite{GuSc-2019MinMaxNonlocalTOAPPEAR}. This is purely a matter of presentation.
By Lemma \ref{lemFD:TransInvariant}, we know that since $J$ is translation invariant, all $L\in \mathcal D J^m$ are translation invariant as well. Taking this fact in hand, and combining it with the analysis that appears in \cite[Section 3]{GuSc-2019MinMaxEuclideanNATMA}, in particular \cite[Lemma 3.9]{GuSc-2019MinMaxEuclideanNATMA}, we see that all $L\in\mathcal D J^m$ have the form claimed here in Theorem \ref{thmFD:StructureOfJFromGSEucSpace}. The passage from operators in $\mathcal D J^m$ to $\mathcal D_J$ and the preservation of their structure follows in the same way as in \cite[Section 5]{GuSc-2019MinMaxEuclideanNATMA}. We note that the structure imparted on $L\in \mathcal D J^m$ by the fact that $L$ is translation invariant and enjoys the GCP allows us to remove any requirement of \cite[Assumption 1.4]{GuSc-2019MinMaxEuclideanNATMA} as it pertains to the arguments in \cite[Section 5]{GuSc-2019MinMaxEuclideanNATMA}.
\end{proof}
\begin{rem}
A curious reader may notice that in \cite{GuSc-2019MinMaxEuclideanNATMA}, all of Theorems 1.9, 1.10, and 1.11 apply to the $J$ that we study herein. The most relevant two are Theorems 1.10 and 1.11 in \cite{GuSc-2019MinMaxEuclideanNATMA}, and in particular as here $J$ is translation invariant, Theorem 1.10 in \cite{GuSc-2019MinMaxEuclideanNATMA} is much simpler in that there is no requirement for (\ref{eqFD:ExtraModulusConditionOutsideBR}) as we do above. The reason Theorem 1.10 in \cite{GuSc-2019MinMaxEuclideanNATMA} does not suit us here is subtle, and is based on the fact that we will subsequently require a non-degeneracy property of all of the $L$ used to reconstruct $J$ as a min-max. In our case this will result from using the approximations $J^m$ as above, and to describe the limits of $L_m\in\mathcal D J^m$, we need an extra condition to get some compactness on the nonlocal terms, which is the use of (\ref{eqFD:ExtraModulusConditionOutsideBR}). The type of non-degeneracy we will need for $L$ will be apparent in Section \ref{sec:ProofOfStructureThm}, and we will add some further discussion later.
\end{rem}
\section{Lipschitz Property of $I$ and $H$}\label{sec:ILipschitz}
First, we will show that for each fixed choice of parameters, $\delta$, $L$, $m$, $\rho$, $I$ is a Lipschitz mapping from $\mathcal K(\delta,L,m,\rho)$ to $C^{0}_b(\mathbb R^n)$. The main properties of $H$ are deduced from the more basic operator, $I$, which we study first. Then, later in the section, we will show how the same results follow for $H$.
\subsection{The analysis for the operator, $I$}
Because $H$ is defined as a function of two operators that take the form, (\ref{eqIN:defHSOperator}), the key result in proving $H$ is Lipschitz is to prove that $I$ as in (\ref{eqIN:defHSOperator}) is Lipschitz.
\begin{proposition}\label{propLIP:BigILipOnKRho}
If $I$ is the operator defined via (\ref{eqIN:BulkEqForHSOperator}) and (\ref{eqIN:defHSOperator}), then for each $\delta$, $L$, $m$, $\rho$ fixed, $I$ is a Lipschitz mapping,
\begin{align*}
I: \mathcal K(\delta,L,m,\rho)\to C^0_b(\mathbb R^n),
\end{align*}
and the Lipschitz norm of $I$ depends upon all of $\delta$, $L$, $m$, $\rho$.
\end{proposition}
Because of the definition of $I^+$ and $I^-$ using (\ref{eqIN:TwoPhaseBulk}) and (\ref{eqIN:DefIPLusAndMinus}), we see that all of the arguments in the domain $D^+_f$ for the operator $I$ (which is, by definition, $I^+$) have direct analogs for the operator $I^-$ and the domain $D_f^-$. Thus, we state the following as a corollary of the techniques that prove Proposition \ref{propLIP:BigILipOnKRho}, but we do not provide a proof.
\begin{corollary}\label{corLIP:IMinusIsLipschitz}
The operator, $I^-$, defined in (\ref{eqIN:TwoPhaseBulk}) and (\ref{eqIN:DefIPLusAndMinus}), has the same Lipschitz property as $I$ in Proposition \ref{propLIP:BigILipOnKRho}.
\end{corollary}
Before we can establish Proposition \ref{propLIP:BigILipOnKRho}, we give some more basic results.
\begin{lemma}\label{lemLIP:LittleILipEstPart1}
For $R_0$ as in Theorem \ref{thm:GreenBoundaryBehavior}, there exists a universal $C>0$ and $\alpha\in(0,1]$, so that if $\psi\geq 0$, $\psi(0)=0$, $\psi(y)\leq c\abs{y}\rho(\abs{y})$, $f\in\mathcal K(\delta,L,m,\rho)$, and $f+\psi\in\mathcal K(\delta,L,m,\rho)$, then for $\nu=\nu_f=\nu_{f+\psi}$ and $X_0=(0,f(0))=(0,(f+\psi)(0))$, with $U_f$, $U_{f+\psi}$ as in (\ref{eqIN:BulkEqForHSOperator}),
\begin{align*}
&\frac{1}{C}\left(
\int_{\Gamma_f\cap B^{n+1}_{R_0}(X_0)} \psi(y) \abs{Y-X_0}^{-n-1} dY
\right)\\
&\ \ \ \ \leq \partial_\nu U_{f+\psi}(X_0)- \partial_\nu U_f(X_0)\\
&\ \ \ \ \ \ \ \ \leq C\left(
R_0^{-\alpha}\norm{\psi}_{L^\infty(\mathbb R^n\setminus B_{R_0})}
+ \int_{\Gamma_f\cap B^{n+1}_{R_0}(X_0)} \psi(y) \abs{Y-X_0}^{-n-1} dY
\right).
\end{align*}
Recall, $B_R\subset\mathbb R^n$ and $B^{n+1}_R(X_0)\subset\mathbb R^{n+1}$.
\end{lemma}
\begin{corollary}\label{corLIP:CorOfLipEstPart1}
With $f$ and $\psi$ as in Lemma \ref{lemLIP:LittleILipEstPart1},
\begin{align*}
&\frac{1}{C}\left(
\int_{B_{R_0}} \psi(y) \abs{y}^{-n-1} dy
\right)\\
&\ \ \ \ \leq \partial_\nu U_{f+\psi}(X_0)- \partial_\nu U_f(X_0)\\
&\ \ \ \ \ \ \ \ \leq C\left(
R_0^{-\alpha}\norm{\psi}_{L^\infty(\mathbb R^n\setminus B_{R_0})}
+ \int_{B_{R_0}} \psi(y) \abs{y}^{-n-1} dy
\right),
\end{align*}
where the integration occurs over $\mathbb R^n$ instead of $\Gamma_f$.
\end{corollary}
\begin{rem}
The exponent, $\alpha$, in Lemma \ref{lemLIP:LittleILipEstPart1} and Corollary \ref{corLIP:CorOfLipEstPart1} is the same exponent that appears in the second part of Proposition \ref{propGF:PoissonKernel}, from (\ref{eqGF:DecayOnMassOfPoissonKernel}).
\end{rem}
First we note how the corollary follows from Lemma \ref{lemLIP:LittleILipEstPart1}.
\begin{proof}[Proof of Corollary \ref{corLIP:CorOfLipEstPart1}]
Because $\Gamma_f$ is a $C^{1,\textnormal{Dini}}$ graph, we know that, up to a constant (depending only on the Lipschitz norm of $f$),
\begin{align*}
&\frac{1}{C}\int_{\Gamma_f\cap \left( B_{R_0}(X_0) \right)} \psi(y)\abs{X_0-Y}^{-n-1}dY\\
&\leq \int_{B_{R_0}(0)} \psi(h)\abs{h}^{-n-1}dh\\
&\leq C\int_{\Gamma_f\cap \left( B_{R_0}(X_0) \right)} \psi(y)\abs{X_0-Y}^{-n-1}dY,
\end{align*}
and we emphasize that the first and third integrals occur on the set $\Gamma_f$, whereas the second integral is over a subset of $\mathbb R^n$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemLIP:LittleILipEstPart1}]
This lemma uses, via the fact that $\psi\geq 0$, a sort of ``semigroup'' property of $U_f$ (recall $U_f$, $U_{f+\psi}$ are as in (\ref{eqIN:BulkEqForHSOperator})). In particular, since $D_f\subset D_{f+\psi}$, we can decompose $U_{f+\psi}$ in $D_f$ as
\begin{align*}
U_{f+\psi} = U_f+W,
\end{align*}
where $W$ is the unique solution of
\begin{align*}
\begin{cases}
\Delta W = 0\ &\text{in}\ D_f\\
W=0\ &\text{on}\ \{x_{n+1}=0\}\\
W=U_{f+\psi}|_{\Gamma_f}\ &\text{on}\ \Gamma_f.
\end{cases}
\end{align*}
We can invoke the linear growth of $U_{f+\psi}$ away from $\Gamma_{f+\psi}$ given in Lemma \ref{lemGF:LinearGrowthFromGammaF} to see that
\begin{align}
\forall\ Y=(y,y_{n+1})=(y,f(y))\in\Gamma_f,\ \ \frac{\psi(y)}{C}
\leq U_{f+\psi}(Y)\leq C\psi(y). \label{eqLIP:LittleIEstLinearGrowthUf}
\end{align}
Now, we can fix $0<s\ll1$ and use the Poisson kernel, $P_f$, to evaluate $U_{f+\psi}(X_0+s\nu(X_0))$ (and we recall that $X_0=(0,f(0))$).
We first show the details of the next argument as they pertain to the lower bound; the upper bound follows analogously, invoking the upper bound on $U_{f+\psi}(Y)$ given previously. We will also use the boundary behavior of $P_f$ given in Proposition \ref{propGF:PoissonKernel} (the lower bound in $B_{R_0}(X_0)$ here, and the upper bound for the analogous upper bound argument on $U_{f+\psi}$).
Thus, we can estimate:
\begin{align}
&U_{f+\psi}\left(X_0+s\nu_f(X_0)\right) \nonumber \\
&= U_f(X_0+s\nu_f(X_0)) + W(X_0+s\nu_f(X_0)) \nonumber \\
&= U_f(X_0+s\nu_f(X_0))
+ \int_{\Gamma_f} U_{f+\psi}|_{\Gamma_f}(Y) P_f\left(X_0+s\nu_f(X_0), Y \right)dY \nonumber \\
&\geq U_f(X_0+s\nu_f(X_0))
+ \int_{\Gamma_f} \frac{\psi(y)}{C}P_f\left(X_0+s\nu_f(X_0), Y \right)dY \label{eqLIP:RefLine1InLittleIEstPart1} \\
&\geq U_f(X_0+s\nu_f(X_0))
+ \int_{\Gamma_f\cap B^{n+1}_{R_0} (X_0)} \tilde C s\psi(y)\abs{X_0-Y}^{-n-1} dY, \nonumber
\end{align}
where in the second to last line, we invoke the estimate of Lemma \ref{lemGF:LinearGrowthFromGammaF} as in (\ref{eqLIP:LittleIEstLinearGrowthUf}).
(We have used $\nu_f$ as the inward unit normal to $D_f$, and we recall the notation $Y=(y,y_{n+1})$, as well as $R_0$ originating in Proposition \ref{propGF:PoissonKernel}.) We note the use of the assumption that $\psi(y)\leq c\abs{y}\rho(\abs{y})$ in order that the following integral is well defined:
\begin{align*}
\int_{\Gamma_f\cap B^{n+1}_{R_0}(X_0)}\psi(y)\abs{X_0-Y}^{-n-1}dY.
\end{align*}
Thus, since $U_{f+\psi}(0,f(0))=0=U_f(0,f(0))$, as well as $\nu_{f+\psi}(X_0)=\nu_f(X_0)$ (as $\nabla (f+\psi)(0)=\nabla f(0)$), we see that by rearranging terms, dividing by $s$, and taking $s\to0$ (with an abuse of the use of the constant, $C$),
\begin{align*}
\partial_\nu U_{f+\psi}(X_0) - \partial_\nu U_f(X_0)\geq
\frac{1}{C}\left( \int_{\Gamma_f\cap B^{n+1}_{R_0}(X_0)} \psi(y)\abs{X_0-Y}^{-n-1} dY
\right).
\end{align*}
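For the reader's convenience, we expand the limiting step just performed (using only quantities already defined above): since $U_{f+\psi}(X_0)=U_f(X_0)=0$, the previous chain of inequalities gives
\begin{align*}
\frac{U_{f+\psi}(X_0+s\nu_f(X_0))-U_{f+\psi}(X_0)}{s}
\geq \frac{U_f(X_0+s\nu_f(X_0))-U_f(X_0)}{s}
+ \tilde C\int_{\Gamma_f\cap B^{n+1}_{R_0}(X_0)}\psi(y)\abs{X_0-Y}^{-n-1}dY,
\end{align*}
and sending $s\to0$ produces the normal derivatives on both sides.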
Now, we mention the minor modification to obtain the upper bound. Working just as above, we can start at the upper bound analog of line (\ref{eqLIP:RefLine1InLittleIEstPart1}), and then we invoke Proposition \ref{propGF:PoissonKernel}, both the pointwise estimates in $B_{R_0}$ and the integral estimate in $B_{R_0}^C$ in (\ref{eqGF:DecayOnMassOfPoissonKernel}). This yields:
\begin{align*}
&U_{f+\psi}\left(X_0+s\nu_f(X_0)\right)\\
&\leq U_f(X_0+s\nu_f(X_0))
+ \int_{\Gamma_f} \frac{\psi(y)}{C}P_f\left(X_0+s\nu_f(X_0), Y \right)dY\\
&\leq U_f(X_0+s\nu_f(X_0))
+ \int_{\Gamma_f\cap B^{n+1}_{R_0}(X_0)} \tilde C s\psi(y)\abs{X_0-Y}^{-n-1} dY\\
&\ \ \ \ \ \ \ \ \ \ + \int_{\Gamma_f\setminus B^{n+1}_{R_0}(X_0)} \norm{\psi}_{L^\infty(\mathbb R^n\setminus B_{R_0})} P_f(X_0+s\nu,Y)dY\\
&\leq U_f(X_0+s\nu_f(X_0))
+ \int_{\Gamma_f\cap B^{n+1}_{R_0}(X_0)} \tilde C s\psi(y)\abs{X_0-Y}^{-n-1} dY
+ \frac{Cs\norm{\psi}_{L^\infty(\mathbb R^n\setminus B_{R_0})}}{R_0^\alpha}.
\end{align*}
The upper bound concludes as the lower bound, and this finishes the proof of the lemma.
\end{proof}
\begin{lemma}\label{lemLIP:LittleILipEstPart2}
There exists a universal $C>0$ and $\varepsilon_2>0$ so that if $\psi(0)=0$, $\abs{\nabla\psi}\leq \varepsilon_2$, $f\in\mathcal K(\delta,L,m,\rho)$, and $f+\psi\in\mathcal K(\delta,L,m,\rho)$, then for $X_0=(0,f(0))$,
\begin{align*}
\abs{\partial_{\nu_{f+\psi}}U_{f+\psi}(X_0) - \partial_{\nu_f}U_f(X_0)}
\leq C\abs{\nabla\psi(0)} + C\varepsilon_2\norm{\psi}_{L^{\infty}}.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemLIP:LittleILipEstPart2}]
The main part of this proof is to use a rotation to reduce to the case of Lemma \ref{lemLIP:LittleILipEstPart1}. Let $\mathcal R$ be the unique rotation that satisfies
\begin{align*}
\mathcal R(\nu_{f+\psi}(X_0))=\nu_f(X_0)
\end{align*}
and leaves
\begin{align*}
\left(\textnormal{span}\{\nu_{f+\psi}(X_0), \nu_{f}(X_0)\}\right)^{\perp}
\end{align*}
unchanged. Then we can define, for a yet to be chosen cutoff function, $\eta$, the transformation $T$,
\begin{align*}
T:\mathbb R^{n+1}\to\mathbb R^{n+1},\ \ T(X) = X_0 + \eta(\abs{X-X_0})\mathcal R (X-X_0) + (1-\eta(\abs{X-X_0}))(X-X_0).
\end{align*}
We compose this mapping with $U_{f+\psi}$ to define an auxiliary function,
\begin{align*}
V(X) = (U_{f+\psi}\circ T^{-1})(X).
\end{align*}
If the parameter, $\varepsilon_2$, in the assumption of the lemma is not too large (depending upon the Lipschitz bound on $f$, which is $m$), the transformation induces a new domain whose top boundary is still a graph. Let $g$ be the unique function which defines the transformed domain, i.e.
\begin{align*}
TD_{f+\psi} = D_g.
\end{align*}
By construction, we have $\nu_f(X_0)=\nu_g(X_0)$.
On top of the previous restriction on $\varepsilon_2$, we can choose it smaller so that $\norm{\nabla g}_{L^\infty}\leq 2m$. This means that we can also make a choice of $\eta$ so that
\begin{align*}
T\Gamma_{f+\psi}=\Gamma_g,\ \ \text{and}\ \ g\in\mathcal K(\delta/2, L+\delta/2, 2m, \tilde \rho),
\end{align*}
where the new modulus, $\tilde\rho$, is simply $\tilde\rho(s)=\rho(Cs)$, for a universal $C$. Finally, we will enforce that $\eta$ satisfies
\begin{align}\label{eqLIP:LittleIEstPart2SizeOfEtaAndRotation}
\eta\equiv1\ \text{in}\ [0,r_0],\ \ \text{and}\ \ r_0=c\norm{\mathcal R-\textnormal{Id}}\leq c\abs{\nabla \psi(0)},
\end{align}
which is possible if $\varepsilon_2$ is small enough, depending upon $\delta$, $L$, $m$, $\rho$.
We remark that these restrictions on $\varepsilon_2$ and the choice of $\eta$ will be such that the function $g$ satisfies
\begin{align}\label{eqLIP:LittleIEstPart2GisCloseToF}
\abs{f(x)-g(x)}\leq C\varepsilon_2\norm{\psi}_{L^\infty}\abs{x}\tilde \rho(\abs{x}),
\end{align}
as by assumption, $\abs{\nabla\psi(0)}\leq \varepsilon_2$.
We will use three steps to estimate
\begin{align*}
\abs{ \partial_{\nu_{f+\psi} } U_{f+\psi}(X_0)-\partial_{\nu_f} U_f(X_0) },
\end{align*}
using the two additional auxiliary functions, $V$ and $U_g$. We emphasize that $V$ is not harmonic in all of $D_g$.
\underline{Step 1:}
\begin{align}\label{eqLIP:LittleIEstPart2Step1Goal}
\partial_{\nu_{f+\psi}} U_{f+\psi}(X_0) =\partial_{\nu_g} V(X_0) .
\end{align}
\underline{Step 2:}
\begin{align}\label{eqLIP:LittleIEstPart2Step2Goal}
\abs{ \partial_{\nu_g}V(X_0) - \partial_{\nu_g} U_g(X_0) } \leq C\abs{\nabla \psi(0)}.
\end{align}
\underline{Step 3:}
\begin{align}\label{eqLIP:LittleIEstPart2Step3Goal}
\abs{ \partial_{\nu_g} U_g(X_0) - \partial_{\nu_f}U_f(X_0) } \leq C\varepsilon_2\norm{\psi}_{L^\infty} .
\end{align}
Step 1 follows by a direct calculation, by the definition of $T$ and the fact that $\nu_g=\mathcal R\nu_{f+\psi}$.
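For completeness, we sketch this direct calculation (using only the definitions above): near $X_0$ we have $\eta\equiv1$, so $T(X)=X_0+\mathcal R(X-X_0)$, $T^{-1}(X)=X_0+\mathcal R^{-1}(X-X_0)$, and $V=U_{f+\psi}\circ T^{-1}$; the chain rule then gives
\begin{align*}
\partial_{\nu_g}V(X_0) = \nabla U_{f+\psi}(X_0)\cdot \mathcal R^{-1}\nu_g(X_0)
= \nabla U_{f+\psi}(X_0)\cdot \nu_{f+\psi}(X_0)
= \partial_{\nu_{f+\psi}}U_{f+\psi}(X_0),
\end{align*}
since $\mathcal R^{-1}\nu_g(X_0)=\nu_{f+\psi}(X_0)$.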
Next, to establish Step 2, we will use the fact that once $\eta$ is chosen, depending only on $\varepsilon_2$ and the collection $\delta$, $L$, $m$, $\rho$, if $\eta\equiv 1$ on the interval $[0,r_0]$, then $V$ is harmonic in $B_{r_0}(X_0)\cap D_g$. We can then compare the respective normal derivatives of $V$ and $U_g$ using a global Lipschitz estimate combined with the comparison principle. Indeed, both $V$ and $U_g$ enjoy global Lipschitz estimates, for some $C$ that depends only on $\delta$, $L$, $m$, $\rho$,
\begin{align*}
\norm{\nabla V}_{L^\infty(D_g)},\ \norm{\nabla U_g}_{L^\infty(D_g)}\leq C.
\end{align*}
Since on the upper part of $\partial (B_{r_0}(X_0)\cap D_g)$, we have
\begin{align*}
V=U_g\equiv 0\ \ \text{on}\ \ B_{r_0}(X_0)\cap\Gamma_g,
\end{align*}
it follows from the Lipschitz estimates and (\ref{eqLIP:LittleIEstPart2SizeOfEtaAndRotation}) that
\begin{align*}
\norm{V-U_g}_{L^\infty(\partial(B_{r_0}(X_0)\cap D_g))}\leq Cr_0\leq Cc\abs{\nabla \psi(0)}.
\end{align*}
Since the function $V-U_g$ is harmonic in $B_{r_0}(X_0)\cap D_g$ we can use linearly growing barriers for $C^{1,\textnormal{Dini}}$ domains to deduce that for $s>0$ and small enough,
\begin{align*}
&\abs{V(X_0+s\nu)-U_g(X_0+s\nu)}\leq Cs\norm{V-U_g}_{L^\infty(B_{r_0}(X_0)\cap D_g)}\\
&\leq Cs\norm{V-U_g}_{L^\infty(\partial(B_{r_0}(X_0)\cap D_g))}\leq s\tilde C r_0\leq s\tilde C \abs{\nabla\psi(0)}.
\end{align*}
This establishes Step 2 after dividing by $s$ and taking $s\to 0$. (Note, these are the same type of barriers from Lemma \ref{lemGF:3.2InGW}, and they can be combined with a transformation that flattens $D_g$.)
Now we finish with Step 3.
We can break up the estimate into two separate parts, for which we define the functions $g_1$ and $g_2$ as
\begin{align*}
g_1=\min\{g,f\},\ \ \text{and}\ \ g_2=\max\{g,f\}.
\end{align*}
Notice that $\nabla f(0)=\nabla g(0)=\nabla g_1(0)=\nabla g_2(0)$, and so we will denote $\nu=\nu_f(X_0)=\nu_g(X_0)$.
By construction and the comparison principle (since $D_{g_1}\subset D_g\subset D_{g_2}$), it follows that
\begin{align*}
U_{g_1}(X_0+s\nu) - U_f(X_0+s\nu)
\leq U_g(X_0+s\nu) - U_f(X_0+s\nu)
\leq U_{g_2}(X_0 + s\nu) - U_f(X_0+s\nu).
\end{align*}
The key improvement from this construction is that by the $C^{1,\textnormal{Dini}}$ property of $f$, and $g$, owing to (\ref{eqLIP:LittleIEstPart2GisCloseToF}),
\begin{align*}
0\leq g_2(y)-f(y)\leq C\varepsilon_2\norm{\psi}_{L^\infty}\abs{y}\tilde\rho(\abs{y}),
\end{align*}
and as noted in the assumptions, we know that the function $\abs{y}\tilde\rho(\abs{y})$ is actually in $X_\rho$. This is useful because $g_2-f$ will be Lipschitz, but may only enjoy a one sided modulus.
First, we will demonstrate the upper bound that comes from $U_{g_2}$.
Defining the function, $\tilde \psi$, as
\begin{align*}
\tilde\psi(y)=C\varepsilon_2\norm{\psi}_{L^\infty}\abs{y}\tilde\rho(\abs{y}),
\end{align*}
we see that $\tilde\psi$ satisfies the assumptions of Lemma \ref{lemLIP:LittleILipEstPart1} (recall that we have defined the modulus so that $\abs{y}\rho(\abs{y})$ is an element of $X_\rho$). Thus we have that
\begin{align*}
0&\leq \partial_\nu U_{g}(X_0)-\partial_\nu U_f(X_0)\\
&\leq \partial_\nu U_{f+(g_2-f)}(X_0)-\partial_\nu U_f(X_0)\\
&\leq \partial_\nu U_{f+\tilde\psi}(X_0) - \partial_\nu U_f(X_0) \\
&\leq C\left( R_0^{-\alpha}\norm{\tilde\psi}_{L^\infty}
+ \int_{\Gamma_f\cap B^{n+1}_{R_0}(X_0)} \tilde\psi(y)\abs{X_0-Y}^{-n-1}dY \right)\\
& \leq C\left( R_0^{-\alpha}C\varepsilon_2\norm{\psi}_{L^\infty}
+ C\varepsilon_2\norm{\psi}_{L^\infty}\int_{\Gamma_f\cap B^{n+1}_{R_0}(X_0)} \tilde\rho(\abs{y})\abs{X_0-Y}^{-n}dY \right)\\
&\leq \tilde C\varepsilon_2 \norm{\psi}_{L^\infty}.
\end{align*}
The lower bound follows similarly, but we instead use the inequality
\begin{align*}
f-\tilde\psi=f-C\varepsilon_2\norm{\psi}_{L^\infty}\abs{y}\tilde\rho(\abs{y})\leq g_1\leq f,
\end{align*}
so that
\begin{align*}
0\leq \partial_\nu U_f- \partial_\nu U_{g_1}
\leq \partial_\nu U_f- \partial_\nu U_{f-\tilde\psi}.
\end{align*}
Thus, we can invoke Lemma \ref{lemLIP:LittleILipEstPart1} with $f$ replaced by $f-\tilde\psi$ and $\psi$ replaced by $\tilde\psi$, so that $f+\psi$ becomes $f$. The rest of the calculation is the same. This completes Step 3 and the proof of the lemma.
\end{proof}
Because the operator, $I$, is translation invariant, it is useful to define an auxiliary operator, fixed at $x=0$.
\begin{definition}\label{defLIP:LittleI}
The functional, $i$, is defined as
\begin{align*}
i: \mathcal K(\delta,L,m,\rho)\to \mathbb R,\ \ i(f):= I(f,0),
\end{align*}
and analogously, using (\ref{eqIN:TwoPhaseBulk}) and (\ref{eqIN:DefIPLusAndMinus}), we have
\begin{align*}
i^+(f)=i(f)=I^+(f,0)=I(f,0),\ \ \text{and}\ \ i^-(f)=I^-(f,0).
\end{align*}
\end{definition}
\begin{lemma}\label{lemLIP:IAddAConstant}
There exists a constant, $C$, depending upon $\delta$, $L$, $m$, $\rho$, so that if $0<\varepsilon<\delta/2$ is a constant and $f\in\mathcal K(\delta,L,m,\rho)$, then
\begin{align*}
i^+(f)-C\varepsilon\leq i^+(f+\varepsilon)\leq i^+(f)
\end{align*}
and
\begin{align*}
i^-(f)+C\varepsilon \geq i^-(f+\varepsilon) \geq i^-(f).
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemLIP:IAddAConstant}]
We note that the restriction on $\varepsilon$ is simply to keep both $i^+(f+\varepsilon)$ and $i^-(f+\varepsilon)$ well-defined. If we were working with $i^+$ only, no restriction on $\varepsilon$ would be necessary. Furthermore, we will only establish the inequalities as they pertain to $i^+$; the corresponding pair of inequalities for $i^-$ is analogous.
We first translate the function, $U_{f+\varepsilon}$, down so that it vanishes on $\Gamma_f$. To this end, we define
\begin{align*}
V(X):= U_{f+\varepsilon}(x,x_{n+1}+\varepsilon),
\end{align*}
so that $V$ is defined in $D^+_f$, and $V=0$ on $\Gamma_f$. As $U_{f+\varepsilon}\leq 1$, we see that $V\leq 1$ on $\{x_{n+1} = 0\}$. This and the comparison principle imply that $V\leq U_f$ in $D_f$, and hence,
\begin{align*}
\partial_\nu V(X_0)\leq \partial_\nu U_f(X_0).
\end{align*}
But $\partial_\nu V(0,f(0)) = \partial_\nu U_{f+\varepsilon}(0, f(0)+\varepsilon)=i^+(f+\varepsilon)$. This establishes the second inequality.
For the first inequality, we note that $U_{f+\varepsilon}$ enjoys a uniform Lipschitz estimate depending on $\delta$, $L$, $m$, $\rho$. Thus, there is a universal $C$ so that, in particular,
\begin{align*}
\text{on}\ \{x_{n+1}=0\},\ \ 1-C\varepsilon\leq V\leq 1.
\end{align*}
Thus $0\leq (U_f-V)\leq C\varepsilon$ everywhere in $D_f$. Again, by the universal Lipschitz estimate, we see that
\begin{align*}
0\leq \partial_\nu (U_f-V)(0,f(0))\leq C\varepsilon.
\end{align*}
Hence, this shows that
\begin{align*}
i^+(f)-i^+(f+\varepsilon)\leq C\varepsilon,
\end{align*}
which gives the first inequality of the lemma.
\end{proof}
Although not used until the next subsection, it will be worthwhile to record a result about $i$ which is an immediate consequence of Corollary \ref{corLIP:CorOfLipEstPart1}.
\begin{lemma}\label{lemLIP:LittleIPluMinusStrictIncreasing}
If $f$ and $\psi$ are functions as in Lemma \ref{lemLIP:LittleILipEstPart1}, then for the same constants as in Corollary \ref{corLIP:CorOfLipEstPart1},
\begin{align*}
\frac{1}{C}\left( \int_{B_{R_0}}\psi(y)\abs{y}^{-n-1}dy \right)
\leq i^+(f+\psi)-i^+(f)
\leq C\left( R_0^{-\alpha}\norm{\psi}_{L^\infty(\mathbb R^n\setminus B_{R_0})} + \int_{B_{R_0}} \psi(y)\abs{y}^{-n-1}dy \right),
\end{align*}
and
\begin{align*}
\frac{1}{C}\left( \int_{B_{R_0}}\psi(y)\abs{y}^{-n-1}dy \right)
\leq i^-(f)-i^-(f+\psi)
\leq C\left( R_0^{-\alpha}\norm{\psi}_{L^\infty(\mathbb R^n\setminus B_{R_0})} + \int_{B_{R_0}} \psi(y)\abs{y}^{-n-1}dy \right).
\end{align*}
\end{lemma}
We are now in a position to prove Proposition \ref{propLIP:BigILipOnKRho}.
\begin{proof}[Proof of Proposition \ref{propLIP:BigILipOnKRho}]
We first note that we will choose parameters, $\varepsilon_1$ and $\varepsilon_2$, depending upon $\delta$, $L$, $m$, and $\rho$, so that we establish the proposition whenever
\begin{align}\label{eqLIP:ProofBigILipConstraintFMinusG}
\norm{f-g}_{L^\infty}\leq \varepsilon_1,\ \ \text{and}\ \ \norm{\nabla f-\nabla g}_{L^\infty}\leq \varepsilon_2.
\end{align}
Assuming we have already proved the proposition under this restriction on $f-g$, we see that we can choose the Lipschitz constant to also depend upon $\varepsilon_1$ and $\varepsilon_2$. Indeed, if either $\norm{f-g}_{L^\infty}>\varepsilon_1$ or $\norm{\nabla f-\nabla g}_{L^\infty}>\varepsilon_2$, since $I$ is bounded on $\mathcal K(\delta,L,m,\rho)$, we see that
\begin{align*}
\norm{I(f)-I(g)}_{L^\infty}\leq \norm{I(f)}_{L^\infty}+\norm{I(g)}_{L^\infty}\leq 2C\leq 2C (\varepsilon_1^{-1}\norm{f-g}_{L^\infty}+\varepsilon_2^{-1}\norm{\nabla f-\nabla g}_{L^\infty})
\end{align*}
(as under this assumption on $f-g$, $1<\varepsilon_1^{-1}\norm{f-g}_{L^\infty}+\varepsilon_2^{-1}\norm{\nabla f-\nabla g}_{L^\infty}$).
Now, we explain how to choose $\varepsilon_1$ and $\varepsilon_2$ and establish the proposition under (\ref{eqLIP:ProofBigILipConstraintFMinusG}). We note that with $i$ as in Definition \ref{defLIP:LittleI}, by translation invariance,
\begin{align*}
I(f,x)=i(\tau_x f).
\end{align*}
Thus, we will establish that $i$ is Lipschitz.
Let us assume, without loss of generality, that $f(0)>g(0)$. First, we take
\begin{align*}
\varepsilon=f(0)-g(0),
\end{align*}
and we define the new function,
\begin{align*}
\tilde g = g+\varepsilon.
\end{align*}
Since $f,g\in\mathcal K(\delta,L,m,\rho)$, we can choose the parameter, $\varepsilon_1<\delta/2$, so that
\begin{align*}
\tilde g\in \mathcal K(\delta/2,L,m,\rho).
\end{align*}
Next, we take $\varepsilon_2$ to be the parameter from Lemma \ref{lemLIP:LittleILipEstPart2} that corresponds to the set $\mathcal K(\delta/2,L,m,\rho)$. Under this assumption, we see that $\psi=\tilde g-f$ satisfies the assumptions of Lemma \ref{lemLIP:LittleILipEstPart2}. Hence, because by definition $i(f)=\partial_\nu U_f(X_0)$, we see that
\begin{align*}
\abs{i(\tilde g)-i(f)}\leq C\abs{\nabla(\tilde g-f)(0)} + C\varepsilon_2\norm{\tilde g-f}_{L^\infty}
\leq C(\norm{f-g}_{L^\infty}+\norm{\nabla f-\nabla g}_{L^\infty}).
\end{align*}
Furthermore, Lemma \ref{lemLIP:IAddAConstant} shows that
\begin{align*}
\abs{i(\tilde g)-i(g)}\leq C\abs{f(0)-g(0)}\leq C\norm{f-g}_{L^\infty}.
\end{align*}
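Combining the previous two estimates via the triangle inequality (a short expansion of the concluding step, using only the quantities already introduced) gives
\begin{align*}
\abs{i(f)-i(g)}\leq \abs{i(f)-i(\tilde g)}+\abs{i(\tilde g)-i(g)}
\leq C\left(\norm{f-g}_{L^\infty}+\norm{\nabla f-\nabla g}_{L^\infty}\right).
\end{align*}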
This shows that $i$ is Lipschitz, and hence so is $I$.
\end{proof}
\subsection{Analysis for $H$}
Because of the assumptions on $G$, the following corollary is immediate from Proposition \ref{propLIP:BigILipOnKRho} and Corollary \ref{corLIP:IMinusIsLipschitz}, recalling that $I^+=I$.
\begin{corollary}\label{corLIP:HIsLip}
For each $\delta$, $L$, $m$, $\rho$ fixed, $H$ is a Lipschitz mapping,
\begin{align*}
H: \mathcal K(\delta,L,m,\rho)\to C^0_b(\mathbb R^n),
\end{align*}
and the Lipschitz norm of $H$ depends upon all of $\delta$, $L$, $m$, $\rho$.
\end{corollary}
The results of Lemma \ref{lemLIP:LittleILipEstPart1} and Corollary \ref{corLIP:CorOfLipEstPart1} are also used in building appropriate finite dimensional approximations to $I$ and $H$. We note that $H$ also enjoys these properties.
\begin{lemma}\label{lemLIP:HEnjoysLittleIEstPart1}
The results in Lemma \ref{lemLIP:LittleILipEstPart1}, Corollary \ref{corLIP:CorOfLipEstPart1}, and Lemma \ref{lemLIP:LittleIPluMinusStrictIncreasing} hold for the operator,
\begin{align}\label{eqLIP:DefLittleH}
h(f)=H(f,0).
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemLIP:HEnjoysLittleIEstPart1}] Since $\psi\geq0$, $\psi(0)=0$, and $\psi(y)\leq c\abs{y}\rho(\abs{y})$, we have $\nabla\psi(0) = 0$, and hence $\nabla(f + \psi)(0) = \nabla f(0)$. Consequently,
$$h(f + \psi) - h(f) = \left(G(i^+(f+\psi),i^-(f+\psi)) - G(i^+(f),i^-(f)) \right)\sqrt{1+\abs{\nabla{f}(0)}^2}.$$
We proceed to estimate $G(i^+(f+\psi),i^-(f+\psi)) - G(i^+(f),i^-(f))$. First, observe that $i^+(f + \psi) \geq i^+(f)$ and $i^-(f + \psi) \leq i^-(f)$, by Lemma \ref{lemLIP:LittleIPluMinusStrictIncreasing}. By the assumptions on $G$, we thus have
\begin{align*}
& G(i^+(f+\psi),i^-(f+\psi)) - G(i^+(f),i^-(f)) \\
= \ & G(i^+(f+\psi),i^-(f+\psi)) - G(i^+(f),i^-(f+\psi)) + G(i^+(f),i^-(f+\psi)) - G(i^+(f),i^-(f)) \\
\geq \ & \lambda \left(i^+(f+\psi) - i^+(f) \right) + \lambda \left(i^-(f) - i^-(f+\psi) \right).
\end{align*}
Similarly,
$$G(i^+(f+\psi),i^-(f+\psi)) - G(i^+(f),i^-(f)) \leq \Lambda \left(i^+(f+\psi) - i^+(f) \right) + \Lambda \left(i^-(f) - i^-(f+\psi) \right).$$
The claim now follows from Lemma \ref{lemLIP:LittleILipEstPart1} and Corollary \ref{corLIP:CorOfLipEstPart1}, where we note the factor $\sqrt{1+\abs{\nabla f(0)}^2}$ is controlled by $(1+m)$, and so can be absorbed into the constant in the resulting inequalities.
\end{proof}
The extension of the results in Lemma \ref{lemLIP:LittleILipEstPart1}, Corollary \ref{corLIP:CorOfLipEstPart1}, and Lemma \ref{lemLIP:LittleIPluMinusStrictIncreasing} to the operator $H$ also implies that the remaining key result above applies to $H$ as well. We omit the proof, as it follows by the same adaptations as in the previous lemma.
\begin{lemma}\label{lemLIP:HEnjoysLittleIEstPart2}
The results in Lemma \ref{lemLIP:LittleILipEstPart2} hold for the operator, $h(f)=H(f,0)$.
\end{lemma}
\section{Proof of Theorem \ref{thm:StructureOfHMain}}\label{sec:ProofOfStructureThm}
Before we can prove Theorem \ref{thm:StructureOfHMain}, we must make a number of observations about how $I$ behaves with respect to some positive perturbations in $X_{\rho,0}$ and especially what this behavior implies for the linear operators in $\mathcal D_I$ (recall $\mathcal D_I$ is in Definition \ref{defFD:DJLimitingDifferential}, which is applicable since $I$ is Lipschitz). Then we will show how these properties carry over to $H$, and finally we will collect the ideas to finish the proof of Theorem \ref{thm:StructureOfHMain}.
\subsection{Estimates on the L\'evy measures for $I$ and $H$}\label{sec:LevyMeasureEstimate}
We will show that once a Lipschitz operator, $J$, with the GCP enjoys bounds similar to those in Lemma \ref{lemLIP:LittleILipEstPart1} and Corollary \ref{corLIP:CorOfLipEstPart1}, then as a consequence, its resulting linear supporting operators are comparable to a modified 1/2-Laplacian, and subsequently the corresponding L\'evy measures have a density comparable to that of the 1/2-Laplacian. The main result in this direction is Proposition \ref{propPfThm1:IntegralBoundsForEll}.
Basically, the analysis we use follows almost exactly some arguments in \cite[Section 4.6]{GuSc-2019MinMaxNonlocalTOAPPEAR} regarding inequalities for extremal operators and linear functionals in the min-max representation.
For this subsection, we will assume that $J$ is an operator as in Section \ref{sec:FiniteDim}, and assume further that $J$ satisfies the assumptions of Theorem \ref{thmFD:StructureOfJFromGSEucSpace} and the conclusion of Lemma \ref{lemLIP:LittleILipEstPart1}.
We can utilize the translation invariance of $J$ to focus on linear functionals via evaluation at $x=0$.
\begin{definition}\label{defPfThm1:DefDHAt0}
\begin{align}\label{eqPfThm1:DifferentialsAtZero}
\mathcal D_J(0)=\{ \ell\in \left(X_\rho\right)^*\ :\ \exists\ L\in \mathcal D_J,\ \textnormal{s.t.}\ \forall\ f\in\mathcal K,\ \ell(f)=L(f,0) \}.
\end{align}
\end{definition}
We will compare the linear support functionals of $J$ to a modified version of the 1/2-Laplacian, which we define here.
\begin{definition}
With the constant, $R_0$, as in Theorem \ref{thm:GreenBoundaryBehavior}, the linear operator, $L_{\Delta}$, is defined as
\begin{align*}
L_{\Delta}(f,x)=\int_{B_{R_0}}\delta_hf(x)\abs{h}^{-n-1}dh,
\end{align*}
which is well defined for all $f\in X_{\rho}$. Note, this is simply the 1/2-Laplacian, but computed with a truncated kernel.
\end{definition}
\begin{lemma}\label{lemPfThm1:FDApproxCompareToFracLaplace}
Let $R_0$ be the constant in Theorem \ref{thm:GreenBoundaryBehavior}. There exists a constant, $C>0$, so that if $J$ is an operator that satisfies the assumptions of Theorem \ref{thmFD:StructureOfJFromGSEucSpace} as well as the outcome of Lemma \ref{lemLIP:LittleILipEstPart1}, and $J^m$ and $\left(L_\Delta\right)^m$ are the finite dimensional approximations to $J$ and $L_\Delta$, defined in (\ref{eqFD:DefOfFdApprox}), then for all $x\in G^m$ and $\psi\in X_{\rho,x}\cap C^{1,\alpha}(\mathbb R^n)$, with $f+\psi\in\mathcal K$ and $\textnormal{supp}(\psi)\subset B_{R_0}(x)$,
\begin{align*}
-C h_m^\beta\norm{\psi}_{C^{1,\alpha}}+
\frac{1}{C}\left( L_\Delta \right)^m(\psi,x) \leq J^m(f+\psi,x)-J^m(f,x)
\leq C\left( L_\Delta \right)^m(\psi,x)
+C h_m^\beta\norm{\psi}_{C^{1,\alpha}},
\end{align*}
where $h_m$ is the grid size parameter from Definition \ref{defFD:GridSetGm} and $\beta$ is the exponent from Lemma \ref{lemFD:RemainderForNonNegPi}.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemPfThm1:FDApproxCompareToFracLaplace}]
We note that by the translation invariance of both $J$ and $L_\mathcal Delta$, it suffices to prove this result for $x=0$ (see Lemma \ref{lemFD:TransInvariant}). We need to utilize Lemma \ref{lemFD:RemainderForNonNegPi} because we will also use Lemma \ref{lemLIP:LittleILipEstPart1} and Corollary \ref{corLIP:CorOfLipEstPart1}, which require $\psi$ to be non-negative. Given such a $\psi$, it is not true in general that $\pi_m\psi\geq0$, but we can correct with a quantifiable remainder. This is what follows.
Let $R$ be the function, $R_{\alpha,m,w,0}$ which results from Lemma \ref{lemFD:RemainderForNonNegPi} when applied to $\psi$. Let $\tilde\psi_m$ be the function,
\begin{align*}
\tilde\psi_m= \pi_m\psi + R,
\end{align*}
so that $\tilde\psi_m\in X_{\rho,0}$.
This means we can apply Corollary \ref{corLIP:CorOfLipEstPart1} to $\pi_mf + \tilde\psi_m$, and this gives
\begin{align*}
\frac{1}{C}\int_{B_{R_0}} \tilde\psi_m(y)\abs{y}^{-n-1}dy
\leq J(\pi_mf+\tilde\psi_m,0)-J(\pi_mf,0)
\leq C\int_{B_{R_0}} \tilde\psi_m(y)\abs{y}^{-n-1}dy,
\end{align*}
and hence, since $\tilde\psi_m(0)=0$ and $\nabla\tilde\psi_m(0)=0$,
\begin{align*}
\frac{1}{C}L_\Delta(\tilde\psi_m,0)\leq J(\pi_mf+\tilde\psi_m,0)-J(\pi_mf,0)
\leq C L_\Delta(\tilde\psi_m,0).
\end{align*}
Using the continuity of $L_\Delta$ over $C^{1,\alpha/2}$ as well as the Lipschitz nature of $J$ over $X_\rho$ and $C^{1,\alpha}$ (recall that $\pi_m\psi\in X_\rho\cap C^{1,\alpha}$, as well as Remark \ref{remFD:JAlsoLipOnC1Al}), we obtain
\begin{align*}
-\tilde C\norm{R}_{C^{1,\alpha/2}}+\frac{1}{C}L_\Delta(\pi_m\psi,0)\leq J(\pi_mf+\pi_m\psi,0)-J(\pi_mf,0)
\leq C L_\Delta(\pi_m\psi,0) + \tilde C\norm{R}_{C^{1,\alpha/2}}.
\end{align*}
Invoking Lemma \ref{lemFD:RemainderForNonNegPi}, for the parameter, $\beta$, in Lemma \ref{lemFD:RemainderForNonNegPi},
\begin{align*}
-\tilde C h_m^\beta\norm{\psi}_{C^{1,\alpha}}+\frac{1}{C}L_\Delta(\pi_m\psi,0)\leq J(\pi_mf+\pi_m\psi,0)-J(\pi_mf,0)
\leq C L_\Delta(\pi_m\psi,0) + \tilde C h_m^\beta\norm{\psi}_{C^{1,\alpha}}.
\end{align*}
Finally, we use the fact that the operator $E_m^0\circ T_m$ is linear, preserves ordering, and agrees with its input function over $G^m$. Applying $E^0_m\circ T_m$ to each of the operators in the last inequality (not evaluated at $x=0$), relabeling constants, and evaluating at $x=0\in G^m$, we obtain
\begin{align*}
-C h_m^\beta\norm{\psi}_{C^{1,\alpha}}+\frac{1}{C}\left(L_\Delta\right)^m(\psi,0)\leq J^m(f+\psi,0)-J^m(f,0)
\leq C \left(L_\Delta\right)^m(\psi,0) + \tilde C h_m^\beta\norm{\psi}_{C^{1,\alpha}}.
\end{align*}
\end{proof}
\begin{corollary}\label{corPfThm1:LComparableToHalfLaplace}
If $J$ satisfies the assumptions of Theorem \ref{thmFD:StructureOfJFromGSEucSpace} as well as the outcome of Lemma \ref{lemLIP:LittleILipEstPart1}, then for all $L\in\mathcal D_J$, with the constant $C$ and the functions $\psi$ as in Lemma \ref{lemPfThm1:FDApproxCompareToFracLaplace},
\begin{align*}
\frac{1}{C}L_\Delta(\psi,x)\leq L(\psi,x) \leq CL_\Delta(\psi,x).
\end{align*}
\end{corollary}
\begin{proof}[Proof of Corollary \ref{corPfThm1:LComparableToHalfLaplace}]
We recall that $\mathcal D_J$ is a convex hull of limits of linear operators that are derivatives of $J^m$. Thus, it suffices to prove the result for those $L$ arising from some $f\in X_{\rho}$ with $f=\lim f_m$, where $J^m$ is differentiable at $f_m$, i.e.
\begin{align*}
\forall\ \psi,\ DJ^m(f_m)[\psi]= \lim_{s\to0}\frac{J^m(f_m+s\psi)-J^m(f_m)}{s},
\end{align*}
and
\begin{align*}
L=\lim_{m\to\infty} DJ^m(f_m).
\end{align*}
Thus, for all $\psi$ satisfying the requirements of Lemma \ref{lemPfThm1:FDApproxCompareToFracLaplace}, we see that
\begin{align*}
-Ch_m^\beta\norm{\psi}_{C^{1,\alpha}} + \frac{1}{C}\left( L_\Delta\right)^m(\psi)
\leq DJ^m(f_m)[\psi]
\leq C\left( L_\Delta \right)^m(\psi) + Ch_m^\beta\norm{\psi}_{C^{1,\alpha}}.
\end{align*}
We can now take limits as $m\to\infty$, using that $h_m\to0$ and Proposition \ref{propFD:ConvergenceOFApprox}, which shows $(L_\Delta)^m\to L_\Delta$, to conclude the result of the corollary for such $L$, $f_m$, and $f$. Since these inequalities are stable under convex combinations, we are finished.
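The last assertion is elementary; assuming $L=\sum_i\lambda_i L_i$ is a finite convex combination of operators, with $\lambda_i\geq0$ and $\sum_i\lambda_i=1$, each $L_i$ satisfying the two-sided comparison, linearity gives
\begin{align*}
\frac{1}{C}L_\Delta(\psi,x)=\sum_i\lambda_i\,\frac{1}{C}L_\Delta(\psi,x)
\leq \sum_i\lambda_i L_i(\psi,x) = L(\psi,x)
\leq \sum_i\lambda_i\, C L_\Delta(\psi,x)= C L_\Delta(\psi,x).
\end{align*}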
\end{proof}
Just as above, thanks to translation invariance, we have the luxury of focusing on all of the operators in $\mathcal D_J$ evaluated at $x=0$. Thus, as an immediate consequence of Corollary \ref{corPfThm1:LComparableToHalfLaplace}, we obtain the following result.
\begin{proposition}\label{propPfThm1:IntegralBoundsForEll}
For all $\ell\in\mathcal D_J(0)$, for $f\in\mathcal K$, and for $\psi\in X_{\rho,0}$ with $\textnormal{supp}(\psi)\subset B_{R_0}$,
\begin{align*}
\frac{1}{C}\int_{B_{R_0}}\psi(y)\abs{y}^{-n-1}dy\leq \ell(\psi) \leq C\int_{B_{R_0}}\psi(y)\abs{y}^{-n-1}dy.
\end{align*}
\end{proposition}
\begin{corollary}\label{corIUG:LevyMeasures}
If $\ell\in\mathcal D_J(0)$, and $\mu_\ell$ is the L\'evy measure corresponding to $\ell$ from Theorem \ref{thmFD:StructureOfJFromGSEucSpace}, then there exists a function, $K_\ell$, so that
\begin{align*}
\mu_\ell(E) = \int_E K_\ell(h)dh,
\end{align*}
and
\begin{align*}
\forall\ \ h\in B_{R_0}\setminus \{0\},\ \
\frac{1}{C}\abs{h}^{-n-1}\leq K_\ell(h)\leq C\abs{h}^{-n-1}.
\end{align*}
\end{corollary}
\begin{proof}[Proof of Corollary \ref{corIUG:LevyMeasures}]
We recall the structure of $\ell$ from Theorem \ref{thmFD:StructureOfJFromGSEucSpace} and the fact that for any $\psi$ as in Lemma \ref{lemPfThm1:FDApproxCompareToFracLaplace} we have $\psi(0)=0$ and $\nabla \psi(0)=0$, so that
\begin{align*}
\ell(\psi) = \int_{\mathbb R^n}\psi(h)\mu_\ell(dh).
\end{align*}
Hence, for each fixed $r>0$, from Proposition \ref{propPfThm1:IntegralBoundsForEll} we can already deduce that $\mu_\ell$ has a density in $B_{R_0}\setminus B_{r}$, and that this density must inherit the bounds given in Proposition \ref{propPfThm1:IntegralBoundsForEll}. Hence the corollary holds for the measure $\mu_\ell$, restricted to $B_{R_0}(0)\setminus B_r(0)$. Since $r>0$ was arbitrary, we see that there will be a density on the set $B_{R_0}(0)\setminus \{0\}$, and the required bounds still follow from Proposition \ref{propPfThm1:IntegralBoundsForEll}.
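To make the density argument explicit, one may test against nonnegative $\psi$ supported in $B_{R_0}\setminus B_r$ (such $\psi$ are admissible in Proposition \ref{propPfThm1:IntegralBoundsForEll} and vanish to first order at the origin); the two displays combine to give
\begin{align*}
\frac{1}{C}\int_{B_{R_0}\setminus B_r}\psi(y)\abs{y}^{-n-1}dy
\leq \int_{B_{R_0}\setminus B_r}\psi(h)\mu_\ell(dh)
\leq C\int_{B_{R_0}\setminus B_r}\psi(y)\abs{y}^{-n-1}dy,
\end{align*}
which shows that $\mu_\ell$ is absolutely continuous with respect to Lebesgue measure on $B_{R_0}\setminus B_r$, with a density obeying the claimed two-sided bound there.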
\end{proof}
\subsection{Estimates on the drift}\label{sec:DriftEstimates}
Just as the estimates for the L\'evy measures corresponding to a mapping, $J$, depended upon a variant of the inequality of Lemma \ref{lemLIP:LittleILipEstPart1} being inherited by the finite dimensional approximations, so too will the proof here for the estimate on the drift. This time, we need a finite dimensional version of Lemma \ref{lemLIP:LittleILipEstPart2}.
\begin{lemma}\label{lemPfThm1:FiniteDimVersionOfLittleIEst2}
With $C$, $\varepsilon_2$, $f$, $\psi$ as in Lemma \ref{lemLIP:LittleILipEstPart2}, we also have
\begin{align*}
\abs{J^m(f+\psi,0)-J^m(f,0)}\leq C\left( \abs{\nabla\psi(0)} + \varepsilon_2\norm{\psi}_{L^\infty} \right).
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemPfThm1:FiniteDimVersionOfLittleIEst2}]
Applying Lemma \ref{lemLIP:LittleILipEstPart2} or Lemma \ref{lemLIP:HEnjoysLittleIEstPart2} to $\pi_mf$ and $\pi_m\psi$, we obtain
$$|J(\pi_m f+\pi_m\psi,0)-J(\pi_mf,0)| \leq C\left(|\nabla (\pi_m\psi)(0)| + \varepsilon_2||\pi_m\psi||_{L^{\infty}} \right).$$
We next apply Theorem \ref{thmFD:ProjectionIsLinearAndBounded} to bound $||\pi_m\psi||_{L^{\infty}}$ in terms of $||\psi||_{L^{\infty}}$. Also, since $\pi_m$ agrees up to first order with its input function on $G^m$, and because $0 \in G^m$, we see that $\nabla (\pi_m\psi)(0) = \nabla \psi(0)$. Finally, using the fact that $E^0_m \circ T_m$ is order-preserving and agrees with its input function on $G^m$, we obtain the desired estimate.
\end{proof}
With this information in hand, we need to address how the drift and L\'evy measures given in Theorem \ref{thmFD:StructureOfJFromGSEucSpace} relate to each other, particularly in the context of the assumptions in Section \ref{sec:BackgroundParabolic}.
To this end, fix $e \in \mathbb R^n$, $|e| = 1$, and a smooth cutoff function $\eta \in C^{\infty}_c(\mathbb R^n)$ between $B_{1/2}$ and $B_1$. We define the functions, for $0 < \tau \leq r$,
\begin{align}\label{eqPfThm1:DefOfTruncatedLinearPhi}
\phi(y) = (e\cdot y)\eta(y)\ \ \ \ \text{and}\ \ \ \
\phi_{\tau,r}(y) := \tau r \phi \left(\frac{y}{r} \right).
\end{align}
A crucial property of $\phi_{\tau,r}$ is given in the next lemma.
\begin{lemma}\label{lemPfThm1:BoundsOnEllPhi} There exists a constant, $C$, depending on $\delta$, $L$, $m$, $\rho$, such that if
\begin{align*}
\ell\in \mathcal D_J(0),\ \
\text{with}\ \ell(f)=c_\ell f(0) + b_\ell\cdot\nabla f(0) + \int_{\mathbb R^n}\delta_h f(0)K_\ell(h)dh,
\end{align*}
then for $\phi_{\tau,r}$ defined in (\ref{eqPfThm1:DefOfTruncatedLinearPhi}),
\begin{align*}
|\ell(\phi_{\tau,r})| \leq C\tau \quad \text{ for all } \tau \leq r.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemPfThm1:BoundsOnEllPhi}]
First, we list a number of properties of $\phi_{\tau,r}$.
\begin{enumerate}
\item[(i)] $\phi_{\tau,r}(0) = 0$ and $\nabla \phi_{\tau,r}(0) = \tau e$.
\item[(ii)] There exists a universal constant $C' > 0$ such that $|\nabla \phi_{\tau,r}(y)| \leq C' \tau$ for all $y \in \textnormal{supp}(\phi_{\tau,r})$.
\item[(iii)] If $\eta$ is $C^{1,\text{Dini}}_{\rho}$, then $\phi_{\tau,r}$ is $C^{1,\text{Dini}}_{\rho}$. Indeed, by the concavity of $\rho$, and since $\tau \leq r$, we see that for any $x \in \mathbb R^n$ and $y \in B_r(x)$, we have
\begin{align*}
& |\phi_{\tau,r}(x+y) - \phi_{\tau,r}(x) - \nabla \phi_{\tau,r}(x) \cdot y| \\
& = \bigg|(\tau e\cdot (x+y)) \eta\left(\frac{x+y}{r}\right) - (\tau e\cdot x) \eta\left(\frac{x}{r}\right) - (\tau e\cdot y) \eta\left(\frac{x}{r}\right) - \left(\tau e\cdot \frac{x}{r}\right) \left(\nabla \eta\left(\frac{x}{r}\right) \cdot y \right) \bigg| \\
& = \bigg|(\tau e\cdot (x+y)) \left(\eta\left(\frac{x+y}{r}\right) - \eta \left( \frac{x}{r} \right) \right) - \left(\tau e\cdot x \right) \left(\nabla \eta\left(\frac{x}{r}\right) \cdot \frac{y}{r} \right) \bigg| \\
& = \bigg|(\tau e\cdot (x+y)) \left(\eta\left(\frac{x+y}{r}\right) - \eta \left( \frac{x}{r} \right) - \nabla \eta\left(\frac{x}{r}\right) \cdot \frac{y}{r} \right) + \left(\tau e\cdot y \right) \left(\nabla \eta\left(\frac{x}{r}\right) \cdot \frac{y}{r} \right) \bigg| \\
& \leq \tau |x + y| \bigg|\frac{y}{r} \bigg|\rho\left(\frac{y}{r}\right) + \frac{\tau}{r} ||\nabla \eta||_{L^{\infty}(\mathbb R^n)} |y|^2 \\
& \leq |y| \tau \rho\left(\frac{y}{r}\right) + ||\nabla \eta||_{L^{\infty}(\mathbb R^n)} |y|^2 \\
& \leq |y| \frac{\tau}{r} \rho(y) + ||\nabla \eta||_{L^{\infty}(\mathbb R^n)} |y|^2 \\
& \leq |y|(\rho(y) + C|y|).
\end{align*}
\end{enumerate}
Without loss of generality, we can assume that for $\abs{y}\leq 1$, $\abs{y}\leq \rho(\abs{y})$.
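For completeness, we note that property (i) can be verified directly from (\ref{eqPfThm1:DefOfTruncatedLinearPhi}): since $\phi_{\tau,r}(y)=\tau(e\cdot y)\eta\left(\frac{y}{r}\right)$ and $\eta\equiv1$ on $B_{1/2}$,
\begin{align*}
\nabla\phi_{\tau,r}(y) = \tau e\,\eta\left(\frac{y}{r}\right) + \frac{\tau}{r}(e\cdot y)\nabla\eta\left(\frac{y}{r}\right),
\quad\text{whence}\quad \phi_{\tau,r}(0)=0\ \ \text{and}\ \ \nabla\phi_{\tau,r}(0)=\tau e.
\end{align*}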
In order to conclude the bound on $\ell(\phi_{\tau,r})$, we look to Lemma \ref{lemPfThm1:FiniteDimVersionOfLittleIEst2}. This shows that for all $\ell\in \mathcal D_J(0)$ and for all $\psi\in X_\rho$,
\begin{align*}
\abs{\ell(\psi)}\leq C(\abs{\nabla \psi(0)} + \varepsilon_2\norm{\psi}_{L^\infty}),
\end{align*}
where $C$ is a constant that depends only on $\delta$, $L$, $m$, $\rho$. Thus, as $\phi_{\tau,r}\in X_\rho$, applying this to $\psi=\phi_{\tau,r}$ shows $\abs{\ell(\phi_{\tau,r})}\leq C\tau$.
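To spell out the final application: property (i) gives $\abs{\nabla\phi_{\tau,r}(0)}=\tau$, while, since $\eta$ is supported in $B_1$,
\begin{align*}
\norm{\phi_{\tau,r}}_{L^\infty} = \sup_{y\in B_r}\ \tau\abs{e\cdot y}\abs{\eta\left(\tfrac{y}{r}\right)} \leq \tau r\norm{\eta}_{L^\infty},
\end{align*}
so that $\abs{\ell(\phi_{\tau,r})}\leq C\left(\tau + \varepsilon_2\tau r\norm{\eta}_{L^\infty}\right)\leq C\tau$, after relabeling the constant and using that $r$ remains bounded.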
\end{proof}
We are finally ready to prove the estimates on the drift.
\begin{lemma}\label{lemPfThm1:bijBounded}
There exists a constant, $C$, depending on $\delta$, $L$, $m$, $\rho$, such that if $r_0$ and $\delta_h f$ are as in Theorem \ref{thmFD:StructureOfJFromGSEucSpace},
\begin{align*}
\ell\in \mathcal D_J(0),\ \
\text{with}\ \ell(f)=c_\ell f(0) + b_\ell\cdot\nabla f(0) + \int_{\mathbb R^n}\delta_h f(0)K_\ell(h)dh,
\end{align*}
then for $0<r<r_0$,
\begin{align*}
\abs{b_\ell-\int_{B_{r_0}\setminus B_r} hK_\ell(h)dh}\leq C.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemPfThm1:bijBounded}]
Fix $e \in \mathbb R^n$, $|e| = 1$, and $0 < \tau \leq r < r_0$. Consider the function $\phi_{\tau,r}$ defined above. We have
\begin{align*}
\ell(\phi_{\tau,r}) & = \tau (b_{\ell}\cdot e) + \int_{\mathbb R^n} (\tau e \cdot h) \left[\eta\left(\frac{h}{r}\right) - {\mathbbm{1}}_{B_{r_0}}(h) \right] \ K_{\ell}(h) dh \\
& = \tau\left(b_{\ell}\cdot e + \int_{B_{r_0} \backslash B_{r/2}} (e \cdot h) \left[\eta\left(\frac{h}{r}\right) - 1 \right] \ K_{\ell}(h) dh \right)\\
& = \tau\left(b_{\ell}\cdot e + \int_{B_{r_0} \backslash B_r} (e \cdot h) \left[\eta\left(\frac{h}{r}\right) - 1 \right] \ K_{\ell}(h) dh + \int_{B_r \backslash B_{r/2}} (e \cdot h) \left[\eta\left(\frac{h}{r}\right) - 1 \right] \ K_{\ell}(h) dh \right)\\
& = \tau\left(b_{\ell}\cdot e - \int_{B_{r_0} \backslash B_r} (e \cdot h) \ K_{\ell}(h) dh + \int_{B_r \backslash B_{r/2}} (e \cdot h) \left[\eta\left(\frac{h}{r}\right) - 1 \right] \ K_{\ell}(h) dh \right).
\end{align*}
Consequently,
$$\tau\left(b_{\ell}\cdot e - \int_{B_{r_0} \backslash B_r} (e \cdot h) \ K_{\ell}(h) dh \right) = \ell(\phi_{\tau,r}) + \tau \int_{B_r \backslash B_{r/2}} (e \cdot h) \left[1 - \eta\left(\frac{h}{r}\right) \right] \ K_{\ell}(h) dh.$$
Using Lemma \ref{lemPfThm1:BoundsOnEllPhi}, we have
$$\tau\bigg|b_{\ell}\cdot e - \int_{B_{r_0} \backslash B_r} (e \cdot h) \ K_{\ell}(h) dh \bigg| \leq C\tau + \tau \bigg|\int_{B_r \backslash B_{r/2}} (e \cdot h) \left[1 - \eta\left(\frac{h}{r}\right) \right] \ K_{\ell}(h) dh \bigg|.$$
Dividing by $\tau$ yields the estimate
$$\bigg|b_{\ell}\cdot e - \int_{B_{r_0} \backslash B_r} (e \cdot h) \ K_{\ell}(h) dh \bigg| \leq C + \bigg|\int_{B_r \backslash B_{r/2}} (e \cdot h) \left[1 - \eta\left(\frac{h}{r}\right) \right] \ K_{\ell}(h) dh \bigg|.$$
To estimate the integral on the right-hand side, we recall from Corollary \ref{corIUG:LevyMeasures} that $K_{\ell}(h) \approx |h|^{-(n+1)}$. This yields
\begin{align*}
\bigg|\int_{B_r \backslash B_{r/2}} (e \cdot h) \left[1 - \eta\left(\frac{h}{r}\right) \right] \ K_{\ell}(h) dh \bigg| & \leq \int_{B_r \backslash B_{r/2}} C |h|^{-n} \ dh \\
& = C \int_{r/2}^r s^{-1} \ ds = C \log(2).
\end{align*}
Since $e$ was an arbitrary unit vector, combining the last two displays completes the proof.
\end{proof}
\subsection{Collecting the arguments to finish Theorem \ref{thm:StructureOfHMain}}\label{sec:FinishProofOfStructureTheorem}
Here we collect the particular previous results that culminate in the proof of Theorem \ref{thm:StructureOfHMain}.
\begin{proof}[Proof of Theorem \ref{thm:StructureOfHMain}]
First, we note that the function, $H$, enjoys the GCP over $\mathcal K$ (see Definition \ref{defFD:GCP}). This was already established in \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal}, but we will briefly comment on it here. Indeed, if $f,g\in\mathcal K$ and $f\leq g$ with $f(x_0)=g(x_0)$, then we also know that $D_f^+\subset D_g^+$. Thus, since $U_g^+\geq 0$ on $\Gamma_f$, we see that $U_g^+$ is a supersolution of the same equation that governs $U_f^+$. Since $U_f^+(x_0,f(x_0))=0=U_g^+(x_0,g(x_0))$, we see that also $\partial_\nu^+ U_f(x_0,f(x_0))\leq \partial_\nu^+ U_g(x_0,g(x_0))$. Hence, $I^+(f,x_0)\leq I^+(g,x_0)$. A similar argument can be applied to $\partial_\nu^- U_f$ and $\partial_\nu^- U_g$, but this time the ordering is reversed (per the definition in (\ref{eqIN:DefOfPosNegNormalDeriv})), as now we have $D_g^-\subset D_f^-$. Combining these inequalities with the definition in (\ref{eqIN:DefOfH}), and remembering that $G$ is increasing in its first variable and decreasing in its second variable (and by assumption on $f$ and $g$, $\nabla f(x_0)=\nabla g(x_0)$), we conclude the GCP for $H$.
We know that since $H$ is Lipschitz on $\mathcal K$ and enjoys the GCP, we will want to invoke Theorem \ref{thmFD:StructureOfJFromGSEucSpace}. However, we still need to establish that the extra decay requirement in (\ref{eqFD:ExtraModulusConditionOutsideBR}) is satisfied. Indeed it is, which we will show after this current proof in Lemma \ref{lemPfThm1:ExtraDecayCondition}, below. Now, \emph{assuming} we have established (\ref{eqFD:ExtraModulusConditionOutsideBR}), Theorem \ref{thmFD:StructureOfJFromGSEucSpace} shows that all $\ell\in \mathcal D_H(0)$ enjoy the structure claimed in part (i) of Theorem \ref{thm:StructureOfHMain} ($\mathcal D_H(0)$ is from Definition \ref{defPfThm1:DefDHAt0}, following Definition \ref{defFD:DJLimitingDifferential}). After a relabeling of $a^{ij}=h(g)-\ell(g)$ and the triple $c_\ell$, $b_\ell$, and $K_\ell$ from Theorem \ref{thmFD:StructureOfJFromGSEucSpace}, we see that part (i) has been established.
To conclude part (ii) of the theorem, we can invoke Corollary \ref{corIUG:LevyMeasures} for the L\'evy measure estimates and Lemma \ref{lemPfThm1:bijBounded} for the bounds involving the drift terms.
\end{proof}
\begin{lemma}\label{lemPfThm1:ExtraDecayCondition}
There exist constants, $C>0$ and $\alpha\in(0,1]$, depending on $\delta$, $L$, $m$, $\rho$, and $n$, so that if $f,g\in\mathcal K(\delta,L,m,\rho)$ and $f\equiv g$ in $B_{2R}$, then
\begin{align*}
\norm{H(f)-H(g)}_{L^\infty(B_R)}\leq \frac{C}{R^\alpha}.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemPfThm1:ExtraDecayCondition}]
First, we will establish that
\begin{align}\label{eqPfThm1:ExtraDecayForI}
\norm{I^+(f)-I^+(g)}_{L^\infty(B_R)}\leq \frac{C}{R^\alpha}.
\end{align}
Then, following the proof of Lemma \ref{lemLIP:HEnjoysLittleIEstPart1}, we will see that this estimate carries over to $H$ as well.
The proof of (\ref{eqPfThm1:ExtraDecayForI}) goes very similarly to the proofs of Lemmas \ref{lemLIP:LittleILipEstPart1} and \ref{lemLIP:LittleILipEstPart2} (specifically, Step 3), combined with Lemma \ref{lemGF:DecayAtInfinityForTechnicalReasons}. As in the proof of Lemma \ref{lemLIP:HEnjoysLittleIEstPart2}, we define the functions
\begin{align*}
g_1=\min\{f,g\}\ \ \ \text{and}\ \ \ g_2=\max\{f,g\},
\end{align*}
and by construction, the respective domains are ordered as follows:
\begin{align*}
D_{g_1}\subset D_f \subset D_{g_2}\ \ \ \text{and}\ \ \
D_{g_1}\subset D_g \subset D_{g_2}.
\end{align*}
Thus, we see that at least in $B_R$, since $f\equiv g$,
\begin{align*}
\partial_\nu U_{g_1}\leq\partial_\nu U_f \leq \partial_\nu U_{g_2}\ \ \ \text{and}\ \ \
\partial_\nu U_{g_1}\leq\partial_\nu U_g\leq \partial_\nu U_{g_2},
\end{align*}
so we have
\begin{align*}
\partial_\nu U_{g_1}(X) - \partial_\nu U_{g_2}(X)\leq
\partial_\nu U_f(X)-\partial_\nu U_g(X) \leq \partial_\nu U_{g_2}(X) - \partial_\nu U_{g_1}(X).
\end{align*}
Furthermore, for the function $W$, defined as
\begin{align*}
\begin{cases}
\Delta W = 0\ &\text{in}\ D_{g_1}\\
W=0\ &\text{on}\ \{x_{n+1}=0\}\\
W= U_{g_2}|_{\Gamma_{g_1}}\ &\text{on}\ \Gamma_{g_1},
\end{cases}
\end{align*}
we see that in the smaller domain, $D_{g_1}$,
\begin{align*}
U_{g_2} = U_{g_1} + W.
\end{align*}
Thus, we have reduced the estimate to bounding
\begin{align*}
\partial_\nu U_{g_2}(X) - \partial_\nu U_{g_1}(X) = \partial_\nu W(X),
\end{align*}
and so
\begin{align*}
\abs{\partial_\nu U_f(X)-\partial_\nu U_g(X)}\leq \abs{\partial_\nu W(X)}.
\end{align*}
As $g_1$ and $g_2$ are $C^{1,\textnormal{Dini}}(B_{2R})$ and globally Lipschitz with Lipschitz constant, $m$, we see that Lemma \ref{lemGF:DecayAtInfinityForTechnicalReasons} gives, for $X\in\Gamma_f\cap B_R$ and $s>0$,
\begin{align*}
W(X+s\nu(X))\leq \frac{Cs}{R^\alpha},
\end{align*}
and hence
\begin{align*}
\partial_\nu W(X)\leq \frac{C}{R^\alpha}.
\end{align*}
Thus, we have established
\begin{align*}
\forall\ x\in B_R,\ \ \abs{I^+(f,x) - I^+(g,x)}\leq \frac{C}{R^\alpha}.
\end{align*}
\end{proof}
\begin{rem}
The curious reader may see that our approach to establish Theorem \ref{thm:StructureOfHMain} deviates slightly from the one given in \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal}, and we believe the reasons for this deviation are noteworthy. We will discuss this in more detail in Section \ref{sec:Commentary}.
\end{rem}
\section{Proof of Theorem \ref{thm:FBRegularity}}\label{sec:KrylovSafonovForHS}
Here we will prove Theorem \ref{thm:FBRegularity}. As a first step, we wish to exhibit which ellipticity class will apply to the equation solved by the finite differences of $f$. Determining this class gives a result that depends on the structure provided in Theorem \ref{thm:StructureOfHMain}, and the class, as well as the resulting regularity results, will depend on the parameters $\delta$, $L$, $m$, $\rho$, which is the source of the dependence in the outcome of Theorem \ref{thm:FBRegularity}. The key is to identify some valid choices of extremal operators that govern our mapping, $H$ (extremal operators are those defined in (\ref{eqPara:ExtremalOperators}) that satisfy (\ref{eqPara:ExtremalInequalities})). We see from the min-max representation of $h$ in Theorem \ref{thm:StructureOfHMain} (recall $h(f)=H(f,0)$, as in Lemma \ref{lemLIP:HEnjoysLittleIEstPart1}) that if $f_1, f_2 \in \mathcal K$, then
\begin{align*}
h(f_1) & = \min_{g\in\mathcal K} \left( \max_{\ell\in \mathcal D_H(0)} h(g) - \ell(g) + \ell(f_1) \right) \\
& \leq \max_{\ell\in \mathcal D_H(0)} h(f_2) - \ell(f_2) + \ell(f_1) \\
& \leq h(f_2) + \max_{\ell\in \mathcal D_H(0)} \ell(f_1 - f_2).
\end{align*}
Next, let $g_1 \in \mathcal K$ be such that $h(f_1) = \max_{\ell\in \mathcal D_H(0)} h(g_1) - \ell(g_1) + \ell(f_1)$, and let $\ell_2 \in \mathcal D_H(0)$ be such that $\ell_2(f_2 - g_1) = \max_{\ell\in \mathcal D_H(0)} \ell(f_2 - g_1)$. We then find that
\begin{align*}
h(f_1) - h(f_2) & = \left( \max_{\ell\in \mathcal D_H(0)} h(g_1) - \ell(g_1) + \ell(f_1) \right) - \min_{g\in\mathcal K} \left( \max_{\ell\in \mathcal D_H(0)} h(g) - \ell(g) + \ell(f_2) \right) \\
& \geq h(g_1) + \left( \max_{\ell\in \mathcal D_H(0)} \ell(f_1 - g_1) \right) - \left( \max_{\ell\in \mathcal D_H(0)} h(g_1) - \ell(g_1) + \ell(f_2) \right) \\
& = h(g_1) + \left( \max_{\ell\in \mathcal D_H(0)} \ell(f_1 - g_1) \right) - h(g_1) - \left( \max_{\ell\in \mathcal D_H(0)} \ell(f_2 - g_1) \right) \\
& = \max_{\ell\in \mathcal D_H(0)} \ell(f_1 - g_1) - \ell_2(f_2 - g_1) \\
& \geq \ell_2(f_1 - g_1) - \ell_2(f_2 - g_1)\\
& = \ell_2(f_1 - f_2) \geq \min_{\ell\in \mathcal D_H(0)} \ell(f_1 - f_2).
\end{align*}
In summary, we have
\begin{equation}\label{eqPfThm2:boundwithwrongmaximaloperators}
\forall\ f_1, f_2 \in \mathcal K,\ \ \ \min_{\ell\in \mathcal D_H(0)}\ell(f_1 - f_2) \leq h(f_1) - h(f_2) \leq \max_{\ell\in \mathcal D_H(0)} \ell(f_1 - f_2).
\end{equation}
Now let $C$ be the constant in Theorem \ref{thm:StructureOfHMain} (ii), and let $\mathcal L_{\Lambda}$ be the class of operators from Definition \ref{defPara:scaleinvariantclass} with $\Lambda = C$. We claim there exist constants $C_1,C_2$ such that
\begin{equation}\label{eqPfThm2:boundinglinearoperatorsbymaximaloperators}
\max_{\ell \in \mathcal D_H(0)} \ell(f) \leq \mathcal{M}^+_{\mathcal L_{\Lambda}}(f) + C_1||f||_{L^{\infty}(\mathbb R^n)}, \quad \min_{\ell \in \mathcal D_H(0)} \ell(f) \geq \mathcal{M}^-_{\mathcal L_{\Lambda}}(f) - C_2||f||_{L^{\infty}(\mathbb R^n)}.
\end{equation}
First notice that the lower bound on the kernels in Theorem \ref{thm:StructureOfHMain} (ii) is only valid in a small ball. To be able to apply the regularity results in Section \ref{sec:BackgroundParabolic}, namely Proposition \ref{propPara:holderestimate}, the kernels must satisfy the lower bound stated in Definition \ref{defPara:scaleinvariantclass}. To arrange this, we employ a strategy similar to that in \cite[Section 14]{CaSi-09RegularityIntegroDiff} for truncated kernels. Indeed, if $\ell \in \mathcal D_H(0)$, then we may write
$$\ell(f)(x) = c^{ij}f(x) + b^{ij}\cdot\nabla f(x) + \int_{\mathbb R^n}\delta_y f(x)\tilde{K}^{ij}(y)dy - \Lambda^{-1} \int_{\mathbb R^n \backslash B_{r_0}} (f(x+y)-f(x))|y|^{-n-1} \ dy,$$
where $\tilde{K}^{ij}(y) = K^{ij}(y) + \Lambda^{-1}{\mathbbm{1}}_{\mathbb R^n \backslash B_{r_0}}(y) |y|^{-n-1}$. Since $b^{ij}$ and $\tilde{K}^{ij}$ give an operator in $\mathcal L_{\Lambda}$ and $\Lambda^{-1}{\mathbbm{1}}_{\mathbb R^n \backslash B_{r_0}}(y) |y|^{-n-1} \in L^1(\mathbb R^n)$, and taking into account the bound on $c^{ij}$ given in Theorem \ref{thm:StructureOfHMain}, the inequalities \eqref{eqPfThm2:boundinglinearoperatorsbymaximaloperators} hold.
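As a sanity check that the truncated rewrite is an identity (assuming the compensated-increment convention $\delta_y f(x)=f(x+y)-f(x)-\nabla f(x)\cdot y\,{\mathbbm{1}}_{B_{r_0}}(y)$, so that the gradient compensator vanishes outside $B_{r_0}$), the added tail terms cancel:
\begin{align*}
\int_{\mathbb R^n}\delta_y f(x)\tilde{K}^{ij}(y)dy
= \int_{\mathbb R^n}\delta_y f(x) K^{ij}(y)dy
+ \Lambda^{-1}\int_{\mathbb R^n \backslash B_{r_0}} (f(x+y)-f(x))|y|^{-n-1}dy,
\end{align*}
so subtracting the last term recovers the original representation of $\ell(f)(x)$.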
As an immediate consequence of \eqref{eqPfThm2:boundinglinearoperatorsbymaximaloperators} and \eqref{eqPfThm2:boundwithwrongmaximaloperators}, we find that
\begin{equation}\label{eqPfThm2:ellipticityofH}
-C_1 |f_1 - f_2| + \mathcal{M}^-_{\mathcal L_{\Lambda}}(f_1 - f_2) \leq h(f_1) - h(f_2) \leq \mathcal{M}^+_{\mathcal L_{\Lambda}}(f_1 - f_2) + C_2|f_1 - f_2| \text{ for all } f_1, f_2 \in \mathcal K.
\end{equation}
With \eqref{eqPfThm2:ellipticityofH} at hand, Theorem \ref{thm:FBRegularity} follows by combining the conclusion of (\ref{eqPfThm2:ellipticityofH}) with the following $C^{1,\gamma}$ estimate for translation invariant operators, Proposition \ref{propPfThm2:TranslationInvariantC1gammaEstimate}, whose statement and proof are essentially that of \cite[Theorem 6.2]{Silvestre-2011RegularityHJE}. As $\mathcal L_\Lambda$ depends upon $\delta$, $L$, $m$, $\rho$, so does the constant obtained in Theorem \ref{thm:FBRegularity}. We note that by \cite[Theorem 1.1 (iii)]{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal}, the Lipschitz bound on $f(\cdot,0)$ is preserved for all time. Thus, in the following Proposition \ref{propPfThm2:TranslationInvariantC1gammaEstimate}, when applied to $f$ in Theorem \ref{thm:FBRegularity}, we can replace $\norm{f}_{C^{0,1}(\mathbb R^n\times[0,T])}$ by $\norm{f}_{C^{0,1}(\mathbb R^n\times\{0\})}$.
We provide the standard argument for the proof of Proposition \ref{propPfThm2:TranslationInvariantC1gammaEstimate} using difference quotients for the sake of completeness.
\begin{proposition}\label{propPfThm2:TranslationInvariantC1gammaEstimate} Suppose $u \in C^{0,1}(\mathbb R^n \times [0,t_0])$ is a viscosity solution of the translation invariant non-local equation $\partial_t u - J(u) =0$ in $\mathbb R^n \times (0,t_0)$, where $J$ satisfies the ellipticity condition
\begin{equation}\label{eqPfThm2:ellipticityofI}
-C_1|u - v| + \mathcal M^-_{\mathcal L_{\Lambda}}(u-v)
\leq J(u)-J(v)
\leq \mathcal M^+_{\mathcal L_{\Lambda}}(u-v) +C_2 |u - v|, \quad \text{for all } u, v \in C^{0,1}(\mathbb R^n).
\end{equation}
Then we have the estimate
$$
||u||_{C^{1, \gamma}(Q_{\frac{t_0}{2}}(t_0,x_0))} \leq \frac{C(1+t_0)}{t_0^{\gamma}} ||u||_{C^{0,1}(\mathbb{R}^n \times [0,t_0])},
$$
where $C$ and $\gamma$ are the constants from Proposition \ref{propPara:holderestimate}.
\end{proposition}
\begin{rem}
The constants $C$ and $\gamma$ arising from Proposition \ref{propPara:holderestimate} depend upon the ellipticity class, $\mathcal L_\Lambda$. Since, as above, our particular choice of class, $\mathcal L_\Lambda$, depends on the estimates of Theorem \ref{thm:StructureOfHMain}, which depend upon $\delta$, $L$, $m$, $\rho$, we see that an invocation of Proposition \ref{propPfThm2:TranslationInvariantC1gammaEstimate} for our situation retains such dependence in the constants of the $C^{1,\gamma}$ estimate.
\end{rem}
\begin{proof}[Proof of Proposition \ref{propPfThm2:TranslationInvariantC1gammaEstimate}]
For $(x,t) \in Q_1$, consider the difference quotient in space
$$v_h(x,t) := \frac{u(x+h,t) - u(x,t)}{|h|}.$$
Using the ellipticity condition \eqref{eqPfThm2:ellipticityofI} and the translation invariance of $J$, we find that \emph{in the viscosity sense}, $v_h$ solves
$$
C_2|v_h| + \mathcal M^+_{\mathcal L_{\Lambda}}(v_h) \geq \frac{J(u(\cdot + h,t),x,t) - J(u,x,t)}{|h|} = \partial_t v_h(x,t).
$$
Since $||v_h||_{L^{\infty}(\mathbb R^n \times [0,t_0])} \leq ||u||_{C^{0,1}(\mathbb R^n \times [0,t_0])}$ independently of $h$, it follows that $v_h$ satisfies the inequality, \emph{in the viscosity sense},
$$\partial_t v_h(x,t) - \mathcal M^+_{\mathcal L_{\Lambda}}(v_h)(x,t) \leq ||u||_{C^{0,1}(\mathbb R^n \times [0,t_0])} \qquad \text{for all } (x,t) \in \mathbb R^n \times (0,t_0).$$
A similar argument shows $v_h$ also satisfies, \emph{in the viscosity sense},
$$\partial_t v_h(x,t) - \mathcal M^-_{\mathcal L_{\Lambda}}(v_h)(x,t) \geq -||u||_{C^{0,1}(\mathbb R^n \times [0,t_0])} \qquad \text{for all } (x,t) \in \mathbb R^n \times (0,t_0).$$
Applying Proposition \ref{propPara:holderestimate} to $v_h$, we conclude that
$$||v_h||_{C^{\gamma}(Q_{\frac{t_0}{2}}(t_0,x_0))} \leq \frac{C(1+t_0)}{t_0^{\gamma}} ||u||_{C^{0,1}(\mathbb{R}^n \times [0,t_0])}.$$
Since the right-hand side is independent of $h$, we may let $|h| \rightarrow 0$ to obtain the desired $C^{1,\gamma}$ estimate in space. By considering, for $(x,t) \in \mathbb R^n \times (0,t_0)$, the difference quotient in time
$$w_h(x,t) := \frac{u(x,t+h) - u(x,t)}{|h|},$$
with $h$ sufficiently small, and carrying out an argument as above, we also obtain a $C^{1,\gamma}$ estimate in time. The one extra step is that once we obtain the regularity in space, we see that $u_t$ is bounded \emph{in the viscosity sense}, and hence $u$ is Lipschitz in time. Thus, $w_h$ is a bounded viscosity solution of the extremal inequalities. Another invocation of Proposition \ref{propPara:holderestimate} concludes the regularity in time.
\end{proof}
\section{Commentary on Many Issues}\label{sec:Commentary}
\subsection{Where is the min-max structure utilized?}
The first place the min-max in Theorem \ref{thm:StructureOfHMain} is used is to identify the correct class of integro-differential operators for invoking the Krylov-Safonov theory. In most of the existing literature on regularity theory (as well as existence and uniqueness theory), a min-max structure is assumed for the given equations. However, the min-max structure is quickly replaced by simply requiring the existence of a class of linear nonlocal operators so that the relevant nonlinear operator, say $J$, satisfies inequalities such as (\ref{eqPara:ExtremalInequalities}). Then, as one sees by, e.g., Proposition \ref{propPara:holderestimate}, it is these extremal inequalities that govern the regularity theory. Thus, as outlined in Section \ref{sec:KrylovSafonovForHS}, as soon as a min-max, plus some properties of its ingredients, is obtained as in Theorem \ref{thm:StructureOfHMain}, one can deduce which ellipticity class and results will apply to solutions of $\partial_t f= H(f)$. It was rather striking to find in the case of (\ref{eqIN:HeleShawIntDiffParabolic}), under the extra $C^{1,\textnormal{Dini}}$ regularity assumption for $f$, that the resulting ellipticity class had already been studied in the literature, as in \cite{ChangLaraDavila-2016HolderNonlocalParabolicDriftJDE}. Furthermore, thanks to the translation invariance of $H$, combined with the inequalities (\ref{eqPara:ExtremalInequalities}), it is not hard to show that the finite differences, $w=\frac{1}{\abs{h}}(f(\cdot +h)-f(\cdot))$, satisfy, in the viscosity sense, the pair of inequalities (\ref{eqPara:ExtremalOperators}). This is key to obtaining the $C^{1,\gamma}$ regularity for $f$.
In some sense, the min-max provided by Theorem \ref{thm:StructureOfHMain} gives a way of ``linearizing'' the equation, but in a possibly slightly different manner than is sometimes carried out. One way to linearize (\ref{eqIN:HeleShawIntDiffParabolic}) would be to fix a very smooth solution, $f_0$, and then find an equation, say $\partial_t \psi = L_{f_0}\psi$, where $L_{f_0}$ is an operator with coefficients depending upon $f_0$, and the equation governs functions of the form $f=f_0+\varepsilon\psi$ for $\varepsilon\ll1$. The min-max gives a different linear equation in the sense that for \emph{any} solution, say $f$, of (\ref{eqIN:HeleShawIntDiffParabolic}), one can think that $f$ \emph{itself} solves a linear equation with bounded measurable coefficients of the form
\begin{align*}
\partial_t f = c^*_f(x)f(x) + b^*_f(x)\cdot \nabla f(x)
+ \int_{\mathbb R^n} \delta_h f(x) K^*_f(x,h)dh,
\end{align*}
where $c^*_f$, $b^*_f$, $K^*_f$ are all $x$-dependent coefficients that can be any of those that attain the min-max for $f$ in Theorem \ref{thm:StructureOfHMain} at a given $x$. Of course, one cannot expect these coefficients to be better than bounded and measurable in $x$, and this is one reason why it is typically presented in the elliptic and parabolic literature that linear equations with bounded measurable coefficients are as easy or hard to treat (it depends upon your point of view) as fully nonlinear equations that are translation invariant. Of course, we ``linearized'' equation (\ref{eqIN:HeleShawIntDiffParabolic}) in neither of the two approaches mentioned above; rather, as noted earlier, we found that linearizing via $w=\frac{1}{\abs{h}}(f(\cdot +h)-f(\cdot))$ gives the inequalities pertinent to Proposition \ref{propPara:holderestimate}. If one used the mean value theorem, it would formally give a linear equation with bounded measurable coefficients, \emph{assuming} that $H$ were a Fr\'echet differentiable map (but one can obtain the inequalities \eqref{eqPara:ExtremalInequalities} without any assumption of differentiability of $H$, thanks to the min-max).
In Section \ref{sec:KrylovSafonovForHS}, the min-max representation of $h$ suggests that the natural maximal and minimal operators corresponding to $h$ should be the ones given by \eqref{eqPfThm2:boundwithwrongmaximaloperators}. However, one does not know if there is regularity theory available for these maximal and minimal operators. One annoyance in this direction is that the class of linear operators used to define them is, in general, not invariant under translations and dilations. Certainly the needed regularity is true, but the arguments to produce such results are better implemented for a larger class of equations, such as those described in Definition \ref{defPara:scaleinvariantclass}. The bounds obtained in Theorem \ref{thm:StructureOfHMain} (ii) instead allow us to estimate $h$ by a different set of maximal and minimal operators as shown in \eqref{eqPfThm2:ellipticityofH}, where the operators $\mathcal M^+_{\mathcal L_{\Lambda}}$ and $\mathcal M^-_{\mathcal L_{\Lambda}}$ are defined using a class of linear operators which satisfies the translation and dilation invariance properties necessary to invoke existing regularity theory while also containing the linear functionals that support $h$.
We also note an interesting departure from an easier min-max approach as utilized in \cite{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal} and \cite[Theorem 1.10]{GuSc-2019MinMaxEuclideanNATMA}. The curious reader may see that since $H$ is translation invariant, there is a quicker and more straightforward way to obtain the first half of Theorem \ref{thm:StructureOfHMain} stated in part (i). The translation invariance means that it suffices to look only at $H(f,0)$, and as a Lipschitz \emph{functional} from the Banach space, $X_\rho$, to $\mathbb R$, $H(f,0)$ enjoys a larger collection of tools from the nonlinear analysis setting built in Clarke's book \cite{Clarke-1990OptimizationNonsmoothAnalysisSIAMreprint}. The mean value theorem of Lebourg \cite{Clarke-1990OptimizationNonsmoothAnalysisSIAMreprint}, of which we give a variant in Lemma \ref{lemFD:MVmaxProperty}, has a more straightforward presentation using a more natural subdifferential set than the one defined in Definition \ref{defFD:DJLimitingDifferential}. This is the approach that is pursued in proving the corresponding result in \cite[Theorem 1.4]{ChangLaraGuillenSchwab-2019SomeFBAsNonlocalParaboic-NonlinAnal} and \cite[Theorem 1.10]{GuSc-2019MinMaxEuclideanNATMA}. The problem with using the more natural subdifferential set that circumvents the cumbersome details of the finite dimensional approximations is that it is very hard to capture in the linear operators for the min-max the non-degeneracy property that is proved in Lemma \ref{lemLIP:LittleILipEstPart1}. For lack of a better analogy, it is like saying that for the function $A:\mathbb R^N\to\mathbb R$ given by $A(x)=\abs{x}$, one can think of the contrast in reconstructing $A$ by considering the set of all possible supporting hyperplanes, versus considering the actual derivative $DA$ at any point where $DA$ may exist.
In the former situation, one cannot avoid that degenerate linear functionals, such as the zero functional, appear in the collection that makes up a min-max (just a max, actually) representation of $A$, whereas in the latter, one can see that the only differentials that would be used will be those with norm $1$, and hence are ``non-degenerate'' in a sense. This is the reason for the finite dimensional approximations used in Section \ref{sec:ProofOfStructureThm} because a non-degeneracy property like that in Lemma \ref{lemLIP:LittleILipEstPart1} can be preserved in the functionals used for the min-max in Corollary \ref{corFD:GenericMinMaxForJ}.
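To make the analogy concrete (a brief illustration, not needed for the argument), one has the max representation
\begin{align*}
A(x)=\abs{x}=\max_{\abs{e}\leq 1}\, e\cdot x,
\end{align*}
and at $x=0$ \emph{every} $e$ in the closed unit ball, including the degenerate choice $e=0$, yields a supporting hyperplane; by contrast, at any $x\neq0$, where $DA$ exists, $DA(x)=x/\abs{x}$, so each differential actually attained satisfies $\abs{DA(x)}=1$.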
\subsection{A counter example}
There are interesting pathologies in Hele-Shaw free boundary problems related to the contrast between $U$ being regular in space-time and $\partial\{U>0\}$ being regular in space-time. Aside from the fact that there are geometries in which the free boundary may stagnate and then immediately jump in space-time (see \cite{KingLaceyVazquez-1995PersistenceOfCornersHeleShaw}), there are solutions of (\ref{eqIN:HSMain}) with space-planar free boundaries such as (see \cite{Kim-2006RegularityFBOnePhaseHeleShaw-JDE})
\begin{align*}
U(X,t) = a(t)\left( X_{n+1} + \int_0^t a(s)ds \right),
\end{align*}
where, e.g., $a$ is a bounded function of $t$.
The zero set is, of course, given by
\begin{align*}
\partial\{U>0\}=\left\{X_{n+1}=-\int_0^t a(s)ds\right\},\ \ \text{hence}\ \ f(x,t)=-\int_0^ta(s)ds.
\end{align*}
We note that this special solution does not necessarily satisfy the spatial boundary conditions prescribed by (\ref{eqIN:HSMain}), and indeed, in the absence of further restrictions on $a$, it is not true that $f\in C^{1,\gamma}(\mathbb R^n\times[\tau,T])$. However, if one insists that this solution does satisfy (\ref{eqIN:HSMain}) exactly, then the boundary condition $U(0,t)=1$ means that
\begin{align*}
a(t)\left(\int_0^t a(s)ds\right)=1,
\end{align*}
whereby $a(t)=\pm(2t+c)^{-1/2}$, for some $c\geq0$, and hence $\int_0^t a(s)ds=\pm (2t+c)^{1/2}$. In order that $U>0$, we see that in fact $a(t)=-(2t+c)^{-1/2}$, and hence
\begin{align*}
U(X,t)=-(2t+c)^{-1/2}\left(X_{n+1}-(2t+c)^{1/2}\right),\ \ \ \text{in}\ \ \ \{0<X_{n+1}<(2t+c)^{1/2}\},
\end{align*}
and so
\begin{align*}
f(x,t)=(2t+c)^{1/2}.
\end{align*}
In particular, requiring that the free boundary resides in the region $\mathbb R^n\times[\delta,L-\delta]$, we see
\begin{align*}
c>\delta^2.
\end{align*}
Thus, indeed, $f\in C^{\infty}(\mathbb R^n\times[0,T])$, with a norm that depends on $\delta$, which is compatible with the result in Theorem \ref{thm:FBRegularity}.
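The computation above can be verified symbolically. The following Python/sympy snippet is illustrative only; it assumes the one-phase boundary law $V=\partial_\nu U$ for (\ref{eqIN:HSMain}), and the symbol $X$ stands for the coordinate $X_{n+1}$. Since $U$ is affine in $X_{n+1}$, it is automatically harmonic, so only the boundary conditions and the velocity law need checking:

```python
# Illustrative symbolic check of the explicit solution above (not part of the proof).
# Assumption: the free boundary law is V = |grad U| on {U = 0}.
import sympy as sp

t, c, X = sp.symbols('t c X', positive=True)   # X plays the role of X_{n+1}

a = -(2*t + c)**sp.Rational(-1, 2)   # the choice a(t) = -(2t+c)^{-1/2}
F = sp.integrate(a, t)               # antiderivative of a; here F(t) = -(2t+c)^{1/2}
U = a*(X + F)                        # U(X, t) = a(t) * ( X_{n+1} + int a )
f = -F                               # free boundary height f(t) = (2t+c)^{1/2}

assert sp.simplify(U.subs(X, 0) - 1) == 0               # U = 1 on {X_{n+1} = 0}
assert sp.simplify(U.subs(X, f)) == 0                   # U = 0 on the free boundary
assert sp.simplify(sp.diff(f, t) + sp.diff(U, X)) == 0  # f' = -dU/dX = |grad U|
```
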
\subsection{Some questions}
Here, we list some questions related to (\ref{eqIN:HSMain}) and Theorem \ref{thm:FBRegularity}.
\begin{itemize}
\item Is the gain in regularity given in Theorem \ref{thm:FBRegularity} enough to prove higher regularity, such as a $C^\infty$ free boundary? This would be related to higher regularity via Schauder or bootstrap methods for integro-differential equations, such as that pursued in e.g. \cite{Bass2009-RegularityStableLikeJFA}, \cite{DongJinZhang-2018DiniAndSchauderForNonlocalParabolicAPDE}, \cite{JinXiong-2015SchauderEstLinearParabolicIntDiffDCDS-A}, \cite{MikuleviciusPragarauskas-2014CauchyProblemIntDiffHolderClassesPotAnalysis}; or like the analysis for free boundary problems that attains smooth solutions, such as in \cite{BarriosFigalliValdinoci-2014BootstrapRegularityIntDiffNonlocalMinimalAnnScuolNormPisa}, \cite{ChoiJerisonKim-2009LocalRegularizationOnePhaseINDIANA}, \cite{KinderlehrerNirenberg-1977RegularityFBAnnScuolNormPisa}, \cite{KinderlehrerNirenberg-1978AnalaticityAtTheBoundaryCPAM}.
\item Is it possible to include variable coefficients in equation (\ref{eqIN:HSMain}) and obtain the same regularity of the solution? This could be for either a divergence form operator or a non-divergence form operator. It is conceivable that similar regularity should hold, and one may expect to use either directly, or modifications of works such as \cite{DongJinZhang-2018DiniAndSchauderForNonlocalParabolicAPDE}, \cite{Kriventsov-2013RegRoughKernelsCPDE}, \cite{Serra-2015RegularityNonlocalParabolicRoughKernels-CalcVar}, when the order of the kernels is $1$.
\item How does incorporating an inhomogeneous boundary law, $V=G(X,\partial^+_\nu U^+, \partial^-_\nu U^-)$, in (\ref{eqIN:HSMain}) change the outcome of the results? At least when $G(X,\partial^+_\nu U^+, \partial^-_\nu U^-)=g(X)\tilde G(\partial^+_\nu U^+, \partial^-_\nu U^-)$ it appears as though the steps would be very similar, but if the $X$ dependence is more general, the analysis in Section \ref{sec:KrylovSafonovForHS} may be complicated by the fact that the equation is not translation invariant, and the $x$ dependence is not as easily isolated.
\item The most important question to address could be to adapt the method to apply to situations in which $\partial\{U>0\}$ is only \varepsilonmph{locally} a space-time graph of a function. In many free boundary problems related to (\ref{eqIN:HSMain}), it is not natural to assume that the free boundary is globally the graph of some function. Rather, without assuming the free boundary is a graph, some low regularity assumption like a Lipschitz condition or a flatness condition then forces the free boundary to in fact be locally a graph that is quite regular (at least for small time that avoids different regions of the free boundary colliding and causing topological changes). This could be attained by including as a parameter in the definition of $I$, some extra space-time boundary condition that allows $I$ to act on functions that are merely defined in, say, $B_1$, instead of $\mathbb R^n$, with this extra boundary condition providing the information of the free boundary outside of $B_1$.
\item Another interesting question is to address the possibility of modifying the method to apply to Stefan type problems wherein (\ref{eqIN:HSMain}) now requires $U$ to solve a parabolic problem in the sets $\{U>0\}$ and $\{U<0\}$. Of course, the two-phase Stefan problem itself is already rather well understood, but there are many variations that could be considered. This would require adapting the results in Section \ref{sec:FiniteDim} to accommodate operators acting on $f:\mathbb R^n\times[0,T]\to\mathbb R$ that satisfy the GCP in space-time, rather than simply looking at those operators that satisfy the GCP in space.
\end{itemize}
\appendix
\section{Proofs related to Green's function estimates}
Before proving Lemma \ref{lemGF:lineargrowth}, we recall the following fact from \cite{GruterWidman-1982GreenFunUnifEllipticManMath}.
\begin{lemma}\label{lemGF:3.2InGW} (cf. Lemma 3.2 in \cite{GruterWidman-1982GreenFunUnifEllipticManMath}) Suppose $A$ is $\lambda,\Lambda$ uniformly elliptic and Dini continuous with modulus, $\omega$, and $v$ solves the Dirichlet problem
\begin{align}\label{eqAppendix:Lem3.2InGW}
\begin{cases}
L_A v = 0 \quad \text{in } \mathcal{A}_{2r}(x_0), \ x_0 \in \Omega, \ r \leq 1,\\
v = 1 \quad \text{on } \partial B_r(x_0), \\
v = 0 \quad \text{on } \partial B_{2r}(x_0).
\end{cases}
\end{align}
There exists a constant $K = K(n,\lambda, \Lambda, \omega) > 0$ such that
$$|\nabla v(x)| \leq \frac{K}{r} \qquad \text{for all } x \in \mathcal A_{2r}(x_0).$$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemGF:lineargrowth}]
We first perform a reduction to a model problem. By Harnack's inequality applied to the non-negative solution $u$ in the ball $B_r(x_0)$, we know there exists a constant $\tilde{C} = \tilde{C}(n,\lambda,\Lambda)$ such that
$$\inf_{\partial B_r(x_0)} u \geq \tilde{C} u(x_0).$$
Rescaling $u$, we may thus assume $\inf_{\partial B_r(x_0)} u = 1$. Let $v$ be the solution to the problem
\begin{equation}\label{eqnforv}
\begin{cases}
L_Av = 0 \quad \text{in } \mathcal A_{2r}(x_0),\\
v = 1 \quad \text{on } \partial B_r(x_0),\\
v = 0 \quad \text{on } \partial B_{2r}(x_0).
\end{cases}
\end{equation}
We recall that by assumption, $\mathcal A_{2r}(x_0)\subset\Omega$, and hence as $u\geq 0$, by the maximum principle, $u \geq v$ on $\mathcal A_{2r}(x_0)$ and so it suffices to prove the estimate \eqref{lineargrowthatboundary} for the function $v$.
Consider the constant coefficient operator $L_0 := -\text{div}(A(z_0) \nabla \cdot)$, and let $\hat{v}$ solve the problem
\begin{equation}\label{eqnforhatv}
\begin{cases}
L_0 \hat{v} = 0 \quad \text{in } \mathcal A_{2r}(x_0),\\
\hat{v} = 1 \quad \text{on } \partial B_r(x_0),\\
\hat{v} = 0 \quad \text{on } \partial B_{2r}(x_0).
\end{cases}
\end{equation}
The function $w: = \hat{v} - v$ vanishes on the boundary of $\mathcal A_{2r}(x_0)$. If $G_0$ is the Green's function for the operator $L_0$, then by the representation formula for $L_0$, we have for all $x \in \mathcal{A}_{2r}(x_0)$
$$w(x) = \int_{\mathcal A_{2r}(x_0)} G_0(x,y) L_0 w(y) \ dy = \int_{\mathcal A_{2r}(x_0)} \left\langle \nabla_y G_0(x,y) , A(z_0) \nabla w(y) \right\rangle \ dy.$$
Now since $\hat{v}$ solves \eqref{eqnforhatv}, we know that
$$\int_{\mathcal A_{2r}(x_0)} \left\langle \nabla_y G_0(x,y), A(z_0) \nabla \hat{v}(y) \right\rangle \ dy = 0.$$
Consequently,
$$w(x) = -\int_{\mathcal A_{2r}(x_0)} \left\langle \nabla_y G_0(x,y) , A(z_0) \nabla v(y) \right\rangle \ dy.$$
Next, since $v$ solves \eqref{eqnforv}, we know that
$$\int_{\mathcal A_{2r}(x_0)} \left\langle \nabla_y G_0(x,y) , A(y) \nabla v(y) \right\rangle \ dy = 0.$$
It follows that
$$w(x) = -\int_{\mathcal A_{2r}(x_0)} \left\langle \nabla_y G_0(x,y) , (A(z_0) - A(y)) \nabla v(y) \right\rangle \ dy.$$
Differentiating in $x$ yields
$$Dw(x) = - \int_{\mathcal A_{2r}(x_0)} \left\langle D^2_{x,y}G_0(x,y) , (A(z_0) - A(y)) \nabla v(y) \right\rangle \ dy.$$
Evaluating at $x = z_0$, we thus conclude
$$Dw(z_0) = - \int_{\mathcal A_{2r}(x_0)} \left\langle D^2_{x,y}G_0(z_0,y) , (A(z_0) - A(y)) \nabla v(y) \right\rangle \ dy.$$
Now by estimates for the Green's function for constant coefficient operators, we know there exists a constant $C_1 = C_1(n,\lambda, \Lambda) > 0$ such that
$$|D^2_{x,y}G_0(z_0,y)| \leq C_1|z_0-y|^{-n}.$$
It follows that
$$|Dw(z_0)| \leq C_1 \int_{\mathcal A_{2r}(x_0)} \frac{|A(z_0) - A(y)|}{|z_0 - y|^n} |\nabla v(y)| \ dy.$$
By Lemma \ref{lemGF:3.2InGW}, there exists a constant $K = K(n,\lambda, \Lambda, \omega) > 0$ such that
$$|\nabla v(y)| \leq \frac{K}{r} \qquad \text{for all } y \in \mathcal A_{2r}(x_0).$$
Therefore,
$$|Dw(z_0)| \leq \frac{C_1 K}{r} \int_{\mathcal A_{2r}(x_0)} \frac{|A(z_0) - A(y)|}{|z_0 - y|^n} \ dy.$$
We now write the integral above as
\begin{align*}
\int\limits_{\mathcal A_{2r}(x_0)} \frac{|A(z_0) - A(y)|}{|z_0 - y|^n} \ dy & = \int\limits_{\mathcal A_{2r}(x_0) \cap B_r(z_0)} \frac{|A(z_0) - A(y)|}{|z_0 - y|^n} \ dy + \int\limits_{\mathcal A_{2r}(x_0) \backslash B_r(z_0)} \frac{|A(z_0) - A(y)|}{|z_0 - y|^n} \ dy \\
& = \text{I} + \text{II}.
\end{align*}
Converting to polar coordinates centered at $z_0$, and using the Dini continuity of the coefficients $A(\cdot)$ yields
$$\text{I} \leq C_2 \int_0^r \frac{\omega(t)}{t} \ dt,$$
for a dimensional constant $C_2 > 0$. To control $\text{II}$, we notice that $|z_0 - y| \geq r$ if $y \in \mathcal A_{2r}(x_0) \backslash B_r(z_0)$, and so
$$\text{II} \leq r^{-n} \int\limits_{\mathcal A_{2r}(x_0)} |A(z_0) - A(y)| \ dy \leq r^{-n}|\mathcal A_{2r}(x_0)| \sup_{y \in \mathcal A_{2r}(x_0)} \omega(|z_0 - y|) \leq C_3 \sup_{y \in \mathcal A_{2r}(x_0)} \omega(|z_0 - y|),$$
where $C_3 > 0$ is a dimensional constant. It follows that given $\varepsilon > 0$, there exists $r_0 = r_0(n,\omega, \lambda, \Lambda, \varepsilon)$ such that if $r \leq r_0$, then $|Dw(z_0)| \leq \frac{\varepsilon}{r}$.
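For instance (an illustration, not needed for the proof), if the coefficients are $\beta$-H\"older, i.e. $\omega(t)=t^\beta$ with $\beta\in(0,1]$, then $\text{I}\leq C_2\beta^{-1}r^\beta$, while, since $|z_0-y|\leq 4r$ for $y\in\mathcal A_{2r}(x_0)$, $\text{II}\leq C_3(4r)^\beta$; hence in this case one may take $r_0$ comparable to $\varepsilon^{1/\beta}$.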
By Taylor expansion around $z_0$, we have
$$v(x) = v(z_0) + Dv(z_0)\cdot(x-z_0) + o(|x-z_0|) \qquad \text{for all } x \in [x_0,z_0] \cap \mathcal A_{2r}(x_0).$$
Let $D_{\nu} \varphi(z_0) := \left\langle D\varphi(z_0), \nu(z_0) \right\rangle$ denote the derivative of a function $\varphi$ in the direction of the inward pointing unit normal vector $\nu(z_0)$ to $\partial B_{2r}(x_0)$ at $z_0$. Since $v(z_0) = 0$ and $d(x)\nu(z_0) = x - z_0$, we see that
$$v(x) = D_{\nu}v(z_0) d(x) + o(d(x)) \qquad \text{for all } x \in [x_0,z_0] \cap \mathcal A_{2r}(x_0).$$
Writing $v = \hat{v} - w$, we thus obtain
$$v(x) = \left(D_{\nu}\hat{v}(z_0) - D_{\nu}w(z_0) \right)d(x) + o(d(x)) \qquad \text{for all } x \in [x_0,z_0] \cap \mathcal A_{2r}(x_0).$$
Now, by explicit calculation of $\hat{v}$, it is possible to show that there exists a constant $C_4 = C_4(n,\lambda,\Lambda) > 0$ such that
$$D_{\nu} \hat{v}(z_0) \geq \frac{C_4}{r}.$$
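This computation can be sketched in the model case where, after an affine change of variables, $A(z_0)$ is the identity and the ambient dimension is $N\geq3$: there one has explicitly
\begin{align*}
\hat v(x)=\frac{\abs{x-x_0}^{2-N}-(2r)^{2-N}}{r^{2-N}-(2r)^{2-N}},\ \ \ \text{so that}\ \ \ D_{\nu}\hat v(z_0)=\frac{(N-2)\,2^{1-N}}{1-2^{2-N}}\cdot\frac{1}{r},
\end{align*}
which exhibits the claimed lower bound with a constant depending only on the dimension.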
If we now choose $\varepsilon := \frac{C_4}{2}$ above, we obtain
$$D_{\nu}\hat{v}(z_0) - D_{\nu}w(z_0) \geq \frac{C_4}{r} - \frac{\varepsilon}{r} = \frac{C_4}{2r}.$$
Therefore, there exist constants $C = C(n,\lambda,\Lambda) > 0$ and $r_0 = r_0(n,\omega,\lambda, \Lambda) > 0$ such that if $r \leq r_0$, then
$$v(x) \geq \frac{C}{r} \ d(x) + o(d(x)) \qquad \text{for all } x \in [x_0,z_0] \cap \mathcal A_{2r}(x_0).$$
\end{proof}
From here on, we assume we are working with $\Omega \subset \mathbb R^{n+1}$. Before we prove Theorem \ref{thm:GreenBoundaryBehavior}, let us first recall a number of useful facts from \cite{CaffarelliFabesMortolaSalsa-1981Indiana, GruterWidman-1982GreenFunUnifEllipticManMath}. For any $y_0 \in \partial \Omega$ and $r > 0$, let $\Delta_r(y_0) := B_r(y_0)\cap \partial \Omega$. We denote by $W_{r,y_0}$ the solution to the Dirichlet problem
\begin{equation}\label{harmonicmeasure}
\begin{cases}
L_A W_{r,y_0} = 0 \quad \text{in } \Omega,\\
W_{r,y_0} = \mathbbm{1}_{\Delta_r(y_0)} \quad \text{on } \partial\Omega,
\end{cases}
\end{equation}
i.e. $W_{r,y_0}(x)$ is the harmonic measure of $\Delta_r(y_0)$, based at $x$.
\begin{lemma}\label{CFMSLemma2.1} (cf. Lemma 2.1 in \cite{CaffarelliFabesMortolaSalsa-1981Indiana}) There exist positive numbers $r_0 =r_0(m)$ and $C = C(\lambda, \Lambda, m)$ such that for $r \leq r_0$, we have
$$W_{r,y_0}(y_0 + r\nu(y_0)) \geq C.$$
\end{lemma}
\begin{lemma}\label{CFMSLemma2.2} (cf. Lemma 2.2 in \cite{CaffarelliFabesMortolaSalsa-1981Indiana}) There exist positive numbers $r_0 =r_0(m)$ and $c = c(\lambda,\Lambda,m)$ such that for $r \leq r_0$ and for all $x \in \Omega \setminus B_{3r}(y_0)$, we have
$$c^{-1}r^{n-1} G(y_0 + r\nu(y_0), x) \leq W_{r,y_0}(x) \leq cr^{n-1} G(y_0 + r\nu(y_0), x),$$
where $G$ is the Green's function corresponding to $L_A$ in $\Omega \subset \mathbb R^{n+1}$.
\end{lemma}
\begin{lemma}\label{GWTheorem1.1} (cf. Theorem 1.1 in \cite{GruterWidman-1982GreenFunUnifEllipticManMath}) There exists a positive constant $K = K(n, \lambda, \Lambda)$ such that if $p,q \in \Omega \subset \mathbb R^{n+1}$ satisfy $|p-q| \leq \frac{1}{2}d(q)$, then
$$G(p,q) \geq K|p-q|^{1-n},$$
where $G$ is the Green's function corresponding to $L_A$ in $\Omega \subset \mathbb R^{n+1}$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:GreenBoundaryBehavior}] By flattening $D_f$, we may work on the domain $\Omega = \left\{0 < x_{n+1} < L \right\}$. We will only focus on proving the estimate \eqref{GlobalGreenEstimate} on the portion of the boundary, $\Gamma_0 := \left\{x_{n+1} = 0\right\}$. Let $R_0$ be the minimum of $L$ and the smallest value of $r_0$ for which the conclusions of Lemma \ref{lemGF:lineargrowth}, Lemma \ref{CFMSLemma2.1}, and Lemma \ref{CFMSLemma2.2} hold. Evidently, $R_0$ depends only on the Dini modulus of $A(\cdot)$, the $C^{1,\text{Dini}}$ modulus of $f$, and other universal parameters. Since the upper bound in \eqref{GlobalGreenEstimate} is a consequence of \cite[Theorem 3.3]{GruterWidman-1982GreenFunUnifEllipticManMath}, we only show the proof of the lower bound.
Fix $x, y \in \left\{0 < x_{n+1} < L \right\}$ and let $r := |x-y| \leq R_0$. Let $x_0$ (resp. $y_0$) denote the point on $\Gamma_0$ closest to $x$ (resp. $y$), and define $x^* := x_0 + re_{n+1}$ (resp. $y^* := y_0 + r e_{n+1}$). Notice that $d(x) = \text{dist}(x,\Gamma_0) = x_{n+1}$ (resp. $d(y) = \text{dist}(y,\Gamma_0) = y_{n+1}$). Consider the following scenarios:
\noindent {\bf Case 1:} $0 < d(x),d(y) \leq \frac{r}{2}$. \\
Since $x \notin B_r(y^*)$, $G(\cdot, x)$ satisfies the hypotheses of Lemma \ref{lemGF:lineargrowth} in $B_r(y^*)$ and vanishes at $y_0$. Hence, there exists $C_1 = C_1(\lambda,\Lambda,n) > 0$ such that
$$G(y,x) \geq \frac{C_1}{r}G(y^*,x)d(y).$$
Let $\hat{y}:= y_0 + \frac{r}{2\sqrt{3}} e_{n+1}$. By the Boundary Harnack Principle, there exists a constant $C_2 = C_2(\lambda,\Lambda,n) > 0$ such that
$$G(y^*,x) \geq C_2 G\left(\hat{y},x\right).$$
Notice that $x \notin B_{\frac{\sqrt{3}r}{2}}(y_0)$ since
$$|x_0 - y_0|^2 = |x - y|^2 - |x_{n+1} - y_{n+1}|^2 \geq r^2 - \frac{r^2}{4} = \frac{3r^2}{4}.$$
Therefore, by Lemma \ref{CFMSLemma2.2}, there exists a constant $C_3 = C_3(\lambda, \Lambda, m) > 0$ such that
$$G\left(\hat{y},x\right) \geq C_3 r^{1-n} W_{\frac{r}{2\sqrt{3}},y_0}(x).$$
Applying Lemma \ref{lemGF:lineargrowth} to $W_{\frac{r}{2\sqrt{3}},y_0}$ in $B_r(x^*)$, we find there exists a constant $C_4 = C_4(\lambda, \Lambda, n) > 0$ such that
$$W_{\frac{r}{2\sqrt{3}},y_0}(x) \geq \frac{C_4}{r} W_{\frac{r}{2\sqrt{3}},y_0}(x^*) d(x).$$
A crude estimate shows
$$|\hat{y} - x^*| \leq |\hat{y} - y_0| + |y_0 - x_0| + |x_0 - x^*| \leq \frac{r}{2\sqrt{3}} + r + r < \frac{5r}{2}.$$
It follows from a covering argument and Harnack's inequality that there exists a constant $C_5 = C_5(\lambda, \Lambda, n) > 0$ such that
$$W_{\frac{r}{2\sqrt{3}},y_0}(x^*) \geq C_5 W_{\frac{r}{2\sqrt{3}},y_0}(\hat{y}).$$
Finally, by Lemma \ref{CFMSLemma2.1}, there exists a constant $C_6 = C_6(\lambda,\Lambda,m) > 0$ such that
$$W_{\frac{r}{2\sqrt{3}},y_0}(\hat{y}) \geq C_6.$$
Combining all the bounds above, and recalling that $|x - y| = r$ we conclude that
$$G(x,y) \geq C r^{-(n+1)}d(x)d(y) = C\frac{d(x)d(y)}{|x-y|^{n+1}}.$$
\noindent {\bf Case 2:} $d(y) \leq \frac{r}{2} < d(x)$. \\
Since $|x -y| = r$, it follows that $d(x) \leq |x-y| + d(y) \leq \frac{3r}{2}$. Let $\hat{x} \in \partial B_r(y) \cap \left\{x_{n+1} = \frac{r}{2} \right\}$ be the point closest to $x$. Then $d(\hat{x}) = \frac{r}{2} \geq \frac{d(x)}{3}$ and $|\hat{x} - y| = r = |x-y|$. Consequently, by Case 1,
$$G(\hat{x},y) \geq C\frac{d(\hat{x})d(y)}{|\hat{x}-y|^{n+1}} \geq \frac{C}{3}\frac{d(x)d(y)}{|x-y|^{n+1}}.$$
On the other hand, by a covering argument and Harnack's inequality, there exists a constant $C_1 = C_1(\lambda, \Lambda, n) > 0$ such that
$$G(x,y) \geq C_1 G(\hat{x},y).$$
\noindent {\bf Case 3:} $\frac{r}{2} < d(y), d(x)$. \\
In this case, since $d(x)d(y) > \frac{r^2}{4} = \frac{|x-y|^2}{4}$, we have
$$\min\left\{ \frac{d(x)d(y)}{|x-y|^{n+1}}, \frac{1}{4|x-y|^{n-1}} \right\} = \frac{1}{4|x-y|^{n-1}}.$$
Let $p = y + \frac{1}{4}(x-y)$ and $q = y$. Note that $d(p) \geq \frac{r}{2}$ by convexity of the half-space $\left\{x_{n+1} \geq \frac{r}{2} \right\}$. Also, $|p - q| = \frac{r}{4} < \frac{1}{2} d(q)$. Consequently, by Lemma \ref{GWTheorem1.1}, we have
$$G(p,y) = G(p,q) \geq K|p - q|^{1-n} = K4^{n-1} |x-y|^{1-n}.$$
On the other hand, by connecting the points $p$ and $x$ using a Harnack chain using balls of radius $\frac{r}{8}$, and applying Harnack's inequality to the positive solution $G(\cdot,y)$, we conclude that there exists a positive constant $C_3 = C_3(n,\lambda,\Lambda)$ such that
$$G(x,y) \geq C_3 G(p,y).$$
The estimate \eqref{GlobalGreenEstimate} thus follows.
\end{proof}
In order to address the behavior of $P_f$ in $\mathbb R^n\setminus B_R$, for large $R$, we need a variation on the barrier function given in Lemma \ref{lemGF:3.2InGW}. The difference between the two results is that Lemma \ref{lemGF:3.2InGW} applies to the situation for $r\in(0,1]$, whereas in Lemma \ref{lemAppendix:BarrierLargeR}, $r>1$. This is a modification of a well known result about the uniform H\"older continuity of solutions to equations with bounded measurable coefficients in domains with an exterior cone condition, e.g. \cite[Lemma 7.1]{GruterWidman-1982GreenFunUnifEllipticManMath}.
\begin{lemma}\label{lemAppendix:BarrierLargeR}
There exist constants, $C>0$, $\alpha\in(0,1]$, and $\varepsilon>0$, depending on the Dini modulus and ellipticity of $A$ and on $n$, so that for all $r>1$, and for $v$ as in Lemma \ref{lemGF:3.2InGW}, for all $\abs{X}\leq r+\varepsilon$,
\begin{align*}
v(X)\leq C\frac{d(X)}{r^\alpha}.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemAppendix:BarrierLargeR}]
First, we note that in Lemma \ref{lemGF:3.2InGW}, the constant, $C$, depended only upon the Dini modulus of $A$, ellipticity, and $n$. The scaling argument used for $r<1$ in Lemma \ref{lemGF:3.2InGW} will not work here because in order to have $A$ given in a ball of radius $r>1$, the result at scale $1$ must be applied to the coefficients $A(rx)$, whose Dini modulus blows up as $r$ is large.
Thus, instead, we can appeal to results at scale $r=1$ that only depend on ellipticity, and then rescale the equation in $\mathcal A_{2r}$ to $\mathcal A_2$, which preserves ellipticity, but not the Dini modulus. This is the reason for the appearance of the factor $r^\alpha$ for possibly $\alpha<1$. To this end, we simply note that for $v_1$ that solves equation (\ref{eqAppendix:Lem3.2InGW}) with $r=1$, $v_1$ is H\"older continuous for some universal $\alpha\in(0,1]$ in $\overline{\mathcal A_2}$. Thus, under rescaling, we see that as $v_1\equiv 0$ on $\partial B_1$ (e.g. \cite[Lemma 1.7]{GruterWidman-1982GreenFunUnifEllipticManMath}),
\begin{align*}
0\leq v_1(X)\leq Cd(X)^\alpha.
\end{align*}
Under rescaling, back to the case of $v_r$ that solves (\ref{eqAppendix:Lem3.2InGW}) in $\mathcal A_{2r}$, we have
\begin{align*}
0\leq v_r(X)\leq C\frac{d(X)^\alpha}{r^\alpha}.
\end{align*}
Now, as the domain $\mathcal A_{2r}$ enjoys a uniform exterior ball condition of radius $r>1$, we can invoke the Dini property of $A$ to use a barrier for $v$ near the boundary $\partial B_r$. In particular, we can use a barrier in an outer annulus with inner radius $1$, outer radius $2$ (given in Lemma \ref{lemGF:3.2InGW}), to conclude that
\begin{align*}
v(X)\leq C\frac{d(X)}{r^\alpha}.
\end{align*}
This, of course, follows from the first estimate established in this proof, which shows that for all $X$ with $r<\abs{X}\leq r+1$, $v(X)\leq C\frac{1}{r^\alpha}$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{propGF:PoissonKernel}]
First of all, we address the bounds for the case $\abs{X-Y}<R_0$. As
\begin{align*}
P_f(X,Y)=(\partial_{\nu}G(X,\cdot))(Y),
\varepsilonnd{align*}
we see that the bounds on $P_f$ are immediate from Theorem \ref{thm:GreenBoundaryBehavior}.
Now, we focus on the second estimate.
We may assume, without loss of generality, that $X = X_0 = (0,f(0))$. Notice that
$$\int_{\Gamma_f\setminus B_{R}(X_0)} P_f(X_0+s\nu(X_0),Y)d\sigma(Y) = W(X_0 + s\nu(X_0)),$$
where $W$ solves the equation
$$
\begin{cases}
\Delta W=0\ \text{in}\ D_f,\\
W={\mathbbm{1}}_{B_R^c(X_0)}\ \text{on}\ \Gamma_f,\\
W=0\ \text{on}\ \{x_{n+1}=0\}.
\end{cases}
$$
We next flatten the domain $D_f$ by using the transformation $T_f$ defined in \varepsilonqref{flatteningtransformation}. The function $\tilde{W} = W\circ T_f^{-1}$ then solves
$$
\begin{cases}
\text{div}(A(y) \nabla \tilde{W}(y)) = 0\ \text{in}\ \mathbb R^n \times [0,L],\\
\tilde{W}={\mathbbm{1}}_{B_R^c(0,L)}\ \text{on}\ \left\{y_{n+1} = L \right\},\\
\tilde{W}=0\ \text{on}\ \{y_{n+1}=0\},
\end{cases}
$$
with $A(y) \in \mathbb{R}^{(n+1)\times(n+1)}$ uniformly elliptic and Dini continuous (depending on $\delta$, $L$, $m$, $\omega$). Note that $0\leq \tilde W\leq 1$ on $\mathbb R^n \times [0,L]$ by the comparison principle.
We now extend the coefficients $A$ to all of $\mathbb R^{n+1}$ in a Dini continuous fashion with the same modulus of continuity $\omega$, and denote them $\hat{A}$. The corresponding divergence form operator on $\mathbb R^{n+1}$ will be denoted $\hat{L} := \text{div}(\hat{A}(y) \nabla \cdot)$. Note that $\hat{A}$ can also be taken to satisfy the same ellipticity conditions as $A$. Now suppose $R > \sqrt{3}L$, and let $Y_0 = (0, L + \frac{R}{\sqrt{3}})$. On the annular domain $\mathcal{A}_{\frac{2R}{\sqrt{3}}}(Y_0)$, consider the function $\varphi$ which solves the problem
$$
\begin{cases}
\hat{L}\varphi = 0 \text{ in } \mathcal{A}_{\frac{2R}{\sqrt{3}}}(Y_0), \\
\varphi = 0 \text{ on } \partial B_{\frac{R}{\sqrt{3}}}(Y_0), \\
\varphi = 1 \text{ on } \partial B_{\frac{2R}{\sqrt{3}}}(Y_0). \\
\end{cases}
$$
By Lemma \ref{lemAppendix:BarrierLargeR} (we can assume, without loss of generality, that $R>1$), there exists a constant $K = K(n,\lambda,\Lambda, \omega)$ such that when $R>1$, $\abs{\varphi(X)}\leq K\frac{d(X)}{R^\alpha}$ for all $X \in \mathcal{A}_{\frac{2R}{\sqrt{3}}}(Y_0)$ with $R<\abs{X}<R+\varepsilon$. Consequently, since $\varphi(0,L) = 0$, we conclude that $\varphi(0, L-s) \leq \frac{Ks}{R^\alpha}$ for all $s > 0$ sufficiently small.
It remains to show that $\tilde{W} \leq \varphi$ on $\Omega_R := \mathcal{A}_{\frac{2R}{\sqrt{3}}}(Y_0) \cap \mathbb R^n \times [0,L]$. To show this, notice that $\partial \Omega_R$ consists of three pieces; the first two are the flat portions consisting of the intersection of $\mathcal{A}_{\frac{2R}{\sqrt{3}}}(Y_0)$ with $\left\{y_{n+1} = 0 \right\}$ and $\left\{y_{n+1} = L \right\}$ respectively, while the third piece is the intersection of $\partial B_{\frac{2R}{\sqrt{3}}}(Y_0)$ with $\mathbb R^n \times [0,L]$. On the flat portions, we know $\tilde{W} = 0$ and since $\varphi \geq 0$ by the maximum principle, we see that $\varphi \geq \tilde{W}$ on this portion of $\partial \Omega_R$. On the remaining portion of $\partial \Omega_R$, we know that $\varphi = 1$ and since $\tilde{W} \leq 1$ on $\mathbb R^n \times [0,L]$, we conclude that $\varphi \geq \tilde{W}$ on this piece of $\partial \Omega_R$ as well. Consequently, by the maximum principle, $\varphi \geq \tilde{W}$ on $\Omega_R$. In particular, $\tilde{W}(0,L-s) \leq \varphi(0, L-s) \leq \frac{Ks}{R^\alpha}$ for all $s > 0$ sufficiently small. Rewriting this in terms of $W$, we obtain the desired estimate \eqref{eqGF:DecayOnMassOfPoissonKernel}.
\end{proof}
With only a few modifications, we can adapt the proof of Proposition \ref{propGF:PoissonKernel} to also give the proof of Lemma \ref{lemGF:DecayAtInfinityForTechnicalReasons}.
\begin{proof}[Proof of Lemma \ref{lemGF:DecayAtInfinityForTechnicalReasons}]
We note that in this setting, as $D_f$ is a Lipschitz domain, then $P_f$ exists and is an $A^\infty$ weight as in \cite{Dahlberg-1977EstimatesHarmonicMeasARMA}, and by the above results, $P_f$ will be more regular when restricted to $B_{R}$, as in that region, $\Gamma(f)$ is $C^{1,\textnormal{Dini}}$.
We see that this time, we have
\begin{align*}
\int_{\Gamma_f\setminus B_{2R}(X)} P_f(X+s\nu(X),Y)d\sigma_f(Y) = W(X+s\nu(X)),
\varepsilonnd{align*}
where $W$ is the unique solution of
\begin{align*}
\begin{cases}
\Delta W=0\ &\text{in}\ D_f\\
W={\mathbbm{1}}_{B_{2R}^c(X)}\ &\text{on}\ \Gamma_f\\
W=0\ &\text{on}\ \{x_{n+1}=0\}.
\end{cases}
\end{align*}
Owing to the fact that $f$ is globally Lipschitz and $C^{1,\textnormal{Dini}}_\rho(B_{2R})$, we see that
after the straightening procedure, $\tilde W$ solves an equation on $\mathbb R^n\times[0,L]$, with coefficients, $\hat A$, that have been extended to all of $\mathbb R^{n+1}$ and that are Dini continuous in $B_{2R}\times\mathbb R$, while they are globally bounded and uniformly elliptic. We note that we are now concerned with the behavior of $\tilde W$ at $\tilde X - se_{n+1}$, where for $X=(x,f(x))$, $\tilde X=(x,L)$. Thus, for the barrier, $\varphi$, we can now center the annular region at $Y_0=(x, L+\frac{R}{\sqrt 3})$. As $\tilde X\in B_R\times[0,L]$, it also holds that $\mathcal A_R(Y_0)$ is contained in $B_{2R}\times\mathbb R$, in which $\hat A$ is Dini continuous. Thus, Lemma \ref{lemAppendix:BarrierLargeR} is applicable. The rest of the proof is the same.
\end{proof}
\end{document}
\begin{document}
\baselineskip=15pt
\allowdisplaybreaks
\title{Anti Lie-Trotter formula}
\author{Koenraad M.R.\ Audenaert$^{1,2,}$\footnote{E-mail: [email protected]}
\ and
Fumio Hiai$^{3,}$\footnote{E-mail: [email protected]}}
\date{\today}
\maketitle
\begin{center}
$^1$\,Department of Mathematics, Royal Holloway University of London, \\
Egham TW20 0EX, United Kingdom
\end{center}
\begin{center}
$^2$\,Department of Physics and Astronomy, Ghent University, \\
S9, Krijgslaan 281, B-9000 Ghent, Belgium
\end{center}
\begin{center}
$^3$\,Tohoku University (Emeritus), \\
Hakusan 3-8-16-303, Abiko 270-1154, Japan
\end{center}
\begin{abstract}
\noindent
Let $A$ and $B$ be positive semidefinite matrices.
The limit of the expression $Z_p:=(A^{p/2}B^pA^{p/2})^{1/p}$ as $p$ tends to $0$ is given by the well-known
Lie-Trotter-Kato formula. A similar formula holds for the limit of $G_p:=(A^p\,\#\,B^p)^{2/p}$ as $p$ tends to $0$, where $X\,\#\,Y$ is the geometric mean of $X$ and $Y$.
In this paper we study the complementary limit of $Z_p$ and $G_p$ as $p$ tends to $\infty$, with the ultimate goal of finding an explicit formula, which we
call the anti Lie-Trotter formula.
We show that the limit of $Z_p$ exists and find an explicit formula in a special case. The limit of $G_p$ is shown to exist only for $2\times2$ matrices.
\noindent
{\it 2010 Mathematics Subject Classification:}
Primary 15A42, 15A16, 47A64
\noindent
{\it Key Words and Phrases:}
Lie-Trotter-Kato product formula, Lie-Trotter formula, anti Lie-Trotter formula,
positive semidefinite matrix, operator mean, geometric mean, log-majorization, antisymmetric tensor power,
Grassmannian manifold
\end{abstract}
\section{Introduction}
When $H,K$ are lower bounded self-adjoint operators on a Hilbert space $\mathcal{H}$ and $H_+,K_+$
are their positive parts, the sum of $H$ and $K$ can be given a precise meaning as a lower bounded self-adjoint
operator on the subspace $\mathcal{H}_0$, which is defined as the closure of $\mathrm{dom}\, H_+^{1/2}\cap\mathrm{dom}\, K_+^{1/2}$.
We denote this formal sum as $H\dot+K$.
Then the well-known Lie-Trotter-Kato product formula, as originally established in \cite{Tr,Ka78} and
refined by many authors, expresses the convergence
$$
\lim_{n\to\infty}(e^{-tH/n}e^{-tK/n})^n=e^{-t(H\dot+K)}P_0,\qquad t>0,
$$
in the strong operator topology (uniformly in $t\in[a,b]$ for any $0<a<b$), where $P_0$ is
the orthogonal projection onto $\mathcal{H}_0$. Although this formula is usually stated for densely-defined
$H,K$, the proof in \cite{Ka78} applies to
the improper case (i.e., $H,K$ are not densely-defined) as well,
under the convention that $e^{-tH}=0$ on $(\mathrm{dom}\, H)^\perp$ for $t>0$, and similarly for $e^{-tK}$.
The Lie-Trotter-Kato formula can easily be modified to a symmetric form with a
continuous parameter, as in \cite[Theorem 3.6]{Hi1}:
$$
\lim_{p\searrow0}(e^{-ptH/2}e^{-ptK}e^{-ptH/2})^{1/p}=e^{-t(H\dot+K)}P_0,\qquad t>0.
$$
When restricted to matrices (and to $t=1$) this can be rephrased as
\begin{equation}\label{F-1.1}
\lim_{p\searrow0}(A^{p/2}B^pA^{p/2})^{1/p}=P_0\exp(\log A\dot+\log B),
\end{equation}
where $A$ and $B$ are positive semidefinite matrices (written as $A,B\ge0$ below),
$P_0$ is now the orthogonal projection onto the intersection of the supports of $A,B$ and
$\log A\dot+\log B$ is defined as $P_0(\log A)P_0+P_0(\log B)P_0$.
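As a quick numerical sanity check of \eqref{F-1.1} (a sketch outside the formal development, assuming NumPy; the helper \texttt{mfun} and the sample matrices are our own choices), one can verify the $p\searrow0$ limit for positive definite $2\times2$ matrices, where $P_0=I$ and $\dot+$ is the ordinary sum:

```python
import numpy as np

def mfun(X, f):
    # apply a scalar function to a Hermitian matrix via its spectral decomposition
    w, U = np.linalg.eigh(X)
    return (U * f(w)) @ U.T

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0, 0.5], [0.5, 3.0]])

# right-hand side of (F-1.1): P_0 = I since A and B are positive definite
target = mfun(mfun(A, np.log) + mfun(B, np.log), np.exp)

p = 1e-3
Zp = mfun(mfun(A, lambda w: w**(p/2)) @ mfun(B, lambda w: w**p)
          @ mfun(A, lambda w: w**(p/2)), lambda w: w**(1/p))
print(np.max(np.abs(Zp - target)))  # should be small for small p
```

The discrepancy shrinks as $p\searrow0$, consistent with the symmetric product formula.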
When $\sigma$ is an operator mean \cite{KA} corresponding to an operator monotone function
$f$ on $(0,\infty)$ such that $\alpha:=f'(1)$ is in $(0,1)$, the operator mean version of the
Lie-Trotter-Kato product formula is the convergence \cite[Theorem 4.11]{Hi1}
$$
\lim_{p\searrow0}(e^{-ptH}\,\sigma\,e^{-ptK})^{1/p}
=e^{-t((1-\alpha)H\dot+\alpha K)},\qquad t>0,
$$
in the strong operator topology, for a bounded self-adjoint operator $H$ and a lower-bounded
self-adjoint operator $K$ on $\mathcal{H}$. Although it is not known whether the above formula holds
even when both $H,K$ are lower bounded (and unbounded), we can verify that \eqref{F-1.1} has the operator mean
version
\begin{equation}\label{F-1.2}
\lim_{p\searrow0}(A^p\,\sigma\,B^p)^{1/p}=P_0\exp((1-\alpha)\log A\dot+\alpha\log B),
\end{equation}
for matrices $A,B\ge0$. A proof of \eqref{F-1.2} is supplied in an appendix of this paper since it is
not our main theme.
In particular, let $\sigma$ be the geometric mean $A\,\#\,B$ (introduced first in \cite{PW} and
further discussed in \cite{KA}), corresponding to the operator monotone function $f(x)=x^{1/2}$
(hence $\alpha=1/2$). Then \eqref{F-1.2} yields
\begin{equation}\label{F-1.3}
\lim_{p\searrow0}(A^p\,\#\,B^p)^{2/p}=P_0\exp(\log A\dot+\log B),
\end{equation}
which has the same right-hand side as \eqref{F-1.1}.
It turns out that the convergence of both \eqref{F-1.1} and \eqref{F-1.3} is monotone in the log-majorization order.
For $d\times d$ matrices $X,Y\ge0$, the log-majorization relation
$X\prec_{(\log)}Y$ means that
$$
\prod_{i=1}^k\lambda_i(X)\le\prod_{i=1}^k\lambda_i(Y),\qquad1\le k\le d,
$$
with equality for $k=d$, where $\lambda_1(X)\ge\dots\ge\lambda_d(X)$ are the eigenvalues of
$X$ sorted in decreasing order and counting multiplicities.
The Araki-Lieb-Thirring inequality can be written in terms of log-majorization as
\begin{equation}\label{F-1.4}
(A^{p/2}B^pA^{p/2})^{1/p}\prec_{(\log)}(A^{q/2}B^qA^{q/2})^{1/q}
\quad\mbox{if}\quad0<p<q,
\end{equation}
for matrices $A,B\ge0$, see \cite{LT,Ar,AH}. One can also consider the complementary version of
\eqref{F-1.4} in terms of the geometric mean. Indeed, for $A,B\ge0$ we have \cite{AH}
\begin{equation}\label{F-1.5}
(A^q\,\#\,B^q)^{2/q}\prec_{(\log)}(A^p\,\#\,B^p)^{2/p}
\quad\mbox{if}\quad0<p<q.
\end{equation}
Hence, for matrices $A,B\ge0$, we see that
$Z_p:=(A^{p/2}B^pA^{p/2})^{1/p}$ and $G_p:=(A^p\,\#\,B^p)^{2/p}$ both tend to $P_0\exp(\log A\dot+\log B)$ as $p\searrow0$,
with the former decreasing (by \eqref{F-1.4}) and
the latter increasing (by \eqref{F-1.5}) in the log-majorization order.
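Both monotonicity relations \eqref{F-1.4} and \eqref{F-1.5} are easy to test numerically. The following sketch (our own illustration, assuming NumPy; \texttt{mpow} and \texttt{geo} are helper names we introduce) compares cumulative products of decreasingly ordered eigenvalues at $p=1$ and $q=2$:

```python
import numpy as np

def mpow(X, t):
    # X^t for a positive definite Hermitian matrix, via eigendecomposition
    w, U = np.linalg.eigh(X)
    return (U * w**t) @ U.T

def geo(A, B):
    # geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
    Ah, Ahi = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Ahi @ B @ Ahi, 0.5) @ Ah

def eigs_desc(X):
    return np.sort(np.linalg.eigvalsh(X))[::-1]

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0, 0.5], [0.5, 3.0]])
p, q = 1.0, 2.0

Zp = mpow(mpow(A, p/2) @ mpow(B, p) @ mpow(A, p/2), 1/p)
Zq = mpow(mpow(A, q/2) @ mpow(B, q) @ mpow(A, q/2), 1/q)
Gp = mpow(geo(mpow(A, p), mpow(B, p)), 2/p)
Gq = mpow(geo(mpow(A, q), mpow(B, q)), 2/q)

# Z_p log-majorized by Z_q, and G_q log-majorized by G_p, for p < q,
# with equal determinants det(A) det(B) in all four cases
cpZp, cpZq = np.cumprod(eigs_desc(Zp)), np.cumprod(eigs_desc(Zq))
cpGp, cpGq = np.cumprod(eigs_desc(Gp)), np.cumprod(eigs_desc(Gq))
```

Here the prefix products increase with $p$ for $Z_p$ and decrease with $p$ for $G_p$, as the log-majorization order predicts.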
The main topic of this paper is the complementary question of what happens to the limits of $Z_p$ and $G_p$ as $p$ tends to $\infty$ instead of $0$. Although
this seems a natural mathematical problem, we have not been able to find an explicit
statement of it in the literature. It is obvious that if $A$ and $B$ commute then $G_p=AB=Z_p$,
independently of $p>0$. However, if $A$ and $B$ do not commute, then the limit behavior of
$Z_p$ and its eigenvalues as $p\to\infty$ is of a rather complicated combinatorial nature,
and that of $G_p$ seems even more complicated.
The problem of finding an explicit formula, which we henceforth call the anti Lie-Trotter formula, also emerges from recent
developments of new R\'enyi relative entropies relevant to quantum information theory.
Indeed, the recent paper \cite{AD} proposed to generalize the R\'enyi relative entropy as
$$
D_{\alpha,z}(\rho\|\sigma):={1\over\alpha-1}\log\mathrm{Tr}\,
\bigl(\rho^{\alpha/2z}\sigma^{(1-\alpha)/z}\rho^{\alpha/2z}\bigr)^z
$$
for density matrices $\rho,\sigma$ with two real parameters $\alpha,z$, and discussed the
limit formulas when $\alpha,z$ converge to some special values. The limit case of
$D_{\alpha,z}(\rho\|\sigma)$ as $z\to0$ with $\alpha$ fixed is exactly related to our
anti Lie-Trotter problem.
The rest of the paper is organized as follows. In Section \ref{sec2} we prove the existence of the
limit of $Z_p$ as $p\to\infty$ when $A,B$ are $d\times d$ positive semidefinite matrices.
In Section \ref{sec3} we analyze the case when the limit eigenvalue list of $Z_p$ becomes
$\lambda_i(A)\lambda_i(B)$ ($1\le i\le d$), the maximal case in the log-majorization order.
In Section \ref{sec4} we extend the existence of the limit of $Z_p$ to that of
$\bigl(A_1^{p/2}\cdots A_{m-1}^{p/2}A_m^pA_{m-1}^{p/2}\cdots A_1^{p/2}\bigr)^{1/p}$
with more than two matrices. Finally in Section \ref{sec5} we treat $G_p$; however we can prove
the existence of the limit of $G_p$ as $p\to\infty$ only when $A,B$ are $2\times2$
matrices, and the general case must be left unsettled. The paper contains two appendices.
The first is a proof of a technical lemma stated in Section \ref{sec2}, and the second supplies
the detailed proof of \eqref{F-1.2}.
\section{Limit of $(A^{p/2}B^pA^{p/2})^{1/p}$ as $p\to\infty$ \label{sec2}}
Let $A$ and $B$ be $d\times d$ positive semidefinite matrices having the eigenvalues
$a_1\ge\cdots\ge a_d\,(\ge0)$ and $b_1\ge\cdots\ge b_d\,(\ge0)$, respectively, sorted in decreasing
order and counting multiplicities. Let $\{v_1,\dots,v_d\}$ be an orthonormal set of eigenvectors
of $A$ such that $Av_i=a_iv_i$ for $i=1,\dots,d$, and $\{w_1,\dots,w_d\}$ an orthonormal
set of eigenvectors of $B$ in a similar way. Then $A$ and $B$ are diagonalized as
\begin{align}
A&=V\mathrm{diag}(a_1,\dots,a_d)V^*=\sum_{i=1}^da_iv_iv_i^*, \label{F-2.1}\\
B&=W\mathrm{diag}(b_1,\dots,b_d)W^*=\sum_{i=1}^db_iw_iw_i^*. \label{F-2.2}
\end{align}
For each $p>0$ define a positive semidefinite matrix
\begin{equation}\label{F-2.3}
Z_p:=(A^{p/2}B^pA^{p/2})^{1/p},
\end{equation}
whose eigenvalues are denoted as $\lambda_1(p)\ge\lambda_2(p)\ge\dots\ge\lambda_d(p)$, again in
decreasing order and counting multiplicities.
\begin{lemma}\label{L-2.1}
For every $i=1,\dots,d$ the limit
\begin{equation}\label{F-2.4}
\lambda_i:=\lim_{p\to\infty}\lambda_i(p)
\end{equation}
exists, and $a_1b_1\ge\lambda_1\ge\dots\ge\lambda_d\ge a_db_d$.
\end{lemma}
\begin{proof}
Since $(a_1b_1)^pI\ge A^{p/2}B^pA^{p/2}\ge(a_db_d)^pI$, we have
$a_1b_1\ge\lambda_i(p)\ge a_db_d$ for all $i=1,\dots,d$ and all $p>0$. By the
Araki-Lieb-Thirring inequality \cite{Ar} (or the log-majorization \cite{AH}), for every
$k=1,\dots,d$ we have
\begin{equation}\label{F-2.5}
\prod_{i=1}^k\lambda_i(p)\le\prod_{i=1}^k\lambda_i(q)\quad
\mbox{if}\quad0<p<q.
\end{equation}
Therefore, the limit $\eta_k$ of $\prod_{i=1}^k\lambda_i(p)$ as $p\to\infty$ exists for any
$k=1,\dots,d$ so that $\eta_1\ge\dots\ge\eta_d\ge0$. Let $m$ ($0\le m\le d$) be the
largest $k$ such that $\eta_k>0$ (with $m:=0$ if $\eta_1=0$). When $1\le k\le m$, we have
$\lambda_k(p)\to\eta_k/\eta_{k-1}$ (where $\eta_0:=1$) as $p\to\infty$. When $m<d$,
$\lambda_{m+1}(p)\to\eta_{m+1}/\eta_m=0$ as $p\to\infty$. Hence $\lambda_k(p)\to0$ for all
$k>m$. Therefore, the limit of $\lambda_i(p)$ as $p\to\infty$ exists for any $i=1,\dots,d$.
The latter assertion is clear now.
\end{proof}
\begin{lemma}\label{L-2.2}
The first eigenvalue in \eqref{F-2.4} is given by
$$
\lambda_1=\max\{a_ib_j:(V^*W)_{ij}\ne0\},
$$
where $(V^*W)_{ij}$ denotes the $(i,j)$ entry of $V^*W$.
\end{lemma}
\begin{proof}
Write $V^*W=[u_{ij}]$. We observe that
$$
\bigl(V^*A^{p/2}B^pA^{p/2}V\bigr)_{ij}
=\sum_{k=1}^du_{ik}\overline u_{jk}a_i^{p/2}a_j^{p/2}b_k^p.
$$
In particular,
$$
\bigl(V^*A^{p/2}B^pA^{p/2}V\bigr)_{ii}
=\sum_{k=1}^d|u_{ik}|^2a_i^pb_k^p
$$
and hence we have
$$
\lambda_1(p)^p\le\mathrm{Tr}\, A^{p/2}B^pA^{p/2}
=\sum_{i=1}^d\sum_{k=1}^d|u_{ik}|^2a_i^pb_k^p
\le d^2\max\{a_i^pb_k^p:u_{ik}\ne0\},
$$
where $\mathrm{Tr}\,$ is the usual trace functional on $d\times d$ matrices. Therefore,
\begin{equation}\label{F-2.6}
\lambda_1(p)\le d^{2/p}\max\{a_ib_k:u_{ik}\ne0\}.
\end{equation}
On the other hand, we have
$$
d\lambda_1(p)^p\ge\mathrm{Tr}\, A^{p/2}B^pA^{p/2}
\ge\min\{|u_{ik}|^2:u_{ik}\ne0\}\max\{a_i^pb_k^p:u_{ik}\ne0\}
$$
so that
\begin{equation}\label{F-2.7}
\lambda_1(p)\ge\biggl({\min\{|u_{ik}|^2:u_{ik}\ne0\}\over d}\biggr)^{1/p}
\max\{a_ib_k:u_{ik}\ne0\}.
\end{equation}
Estimates \eqref{F-2.6} and \eqref{F-2.7} give the desired expression immediately.
In fact, they prove the existence of the limit \eqref{F-2.4} for $i=1$ independently of
Lemma \ref{L-2.1}.
\end{proof}
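Lemma \ref{L-2.2} can be observed numerically at a large but finite $p$ (a sketch of our own, assuming NumPy; we take $V=I$ so that $V^*W=W$, and compute $\lambda_1(p)$ directly from the largest eigenvalue of $A^{p/2}B^pA^{p/2}$ to avoid ill-conditioned matrix powers):

```python
import numpy as np

def mpow(X, t):
    # X^t for a positive semidefinite Hermitian matrix, via eigendecomposition
    w, U = np.linalg.eigh(X)
    return (U * w**t) @ U.T

def lam1(A, B, p):
    # largest eigenvalue of Z_p = (A^{p/2} B^p A^{p/2})^{1/p}
    M = mpow(A, p/2) @ mpow(B, p) @ mpow(A, p/2)
    return np.linalg.eigvalsh(M).max() ** (1/p)

a, b = np.array([1.5, 0.5]), np.array([1.2, 0.8])
A = np.diag(a)                                # V = I, so V^*W = W below

# generic rotation: every entry of W is nonzero, hence lambda_1 = a_1 b_1 = 1.8
c, s = np.cos(0.7), np.sin(0.7)
W = np.array([[c, -s], [s, c]])
B = W @ np.diag(b) @ W.T
lam_generic = lam1(A, B, 400.0)

# permutation: the (1,1) and (2,2) entries of V^*W vanish, so
# lambda_1 = max(a_1 b_2, a_2 b_1) = 1.2 instead of a_1 b_1 = 1.8
Wp = np.array([[0.0, 1.0], [1.0, 0.0]])
Bp = Wp @ np.diag(b) @ Wp.T
lam_perm = lam1(A, Bp, 400.0)
```

The zero pattern of $V^*W$ thus visibly decides which product $a_ib_j$ is attained in the limit.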
In what follows, for each $k=1,\dots,d$ we write $\mathcal{I}_d(k)$ for the set of all subsets
$I$ of $\{1,\dots,d\}$ with $|I|=k$. For $I,J\in\mathcal{I}_d(k)$ we denote by $(V^*W)_{I,J}$
the $k\times k$ submatrix of $V^*W$ corresponding to rows in $I$ and columns in $J$;
hence $\det(V^*W)_{I,J}$ denotes the corresponding minor of $V^*W$. We also write
$a_I:=\prod_{i\in I}a_i$ and $b_I:=\prod_{i\in I}b_i$. Since $\det(V^*W)\ne0$, note that
for any $k=1,\dots,d$ and any $I\in\mathcal{I}_d(k)$ we have $\det(V^*W)_{I,J}\ne0$ for some
$J\in\mathcal{I}_d(k)$, and that for any $J\in\mathcal{I}_d(k)$ we have $\det(V^*W)_{I,J}\ne0$ for some
$I\in\mathcal{I}_d(k)$.
\begin{lemma}\label{L-2.3}
For every $k=1,\dots,d$,
\begin{equation}\label{F-2.8}
\lambda_1\lambda_2\cdots\lambda_k=\max\{a_Ib_J:I,J\in\mathcal{I}_d(k),\,\det(V^*W)_{I,J}\ne0\}.
\end{equation}
\end{lemma}
\begin{proof}
For each $k=1,\dots,d$ the antisymmetric tensor powers $A^{\wedge k}$ and $B^{\wedge k}$
(see \cite{Bh1}) are given in the form of diagonalizations as
$$
A^{\wedge k}=V^{\wedge k}\mathrm{diag}(a_I)_{I\in\mathcal{I}_d(k)}V^{*\wedge k},\qquad
B^{\wedge k}=W^{\wedge k}\mathrm{diag}(b_I)_{I\in\mathcal{I}_d(k)}W^{*\wedge k},
$$
and the corresponding representation of the ${d\choose k}\times{d\choose k}$ unitary
matrix $V^{*\wedge k}W^{\wedge k}$ is given by
$$
(V^{*\wedge k}W^{\wedge k})_{I,J}=\det(V^*W)_{I,J},\qquad I,J\in\mathcal{I}_d(k).
$$
Note that the largest eigenvalue of
$$
\bigl((A^{\wedge k})^{p/2}(B^{\wedge k})^p(A^{\wedge k})^{p/2}\bigr)^{1/p}
=\bigl((A^{p/2}B^pA^{p/2})^{1/p}\bigr)^{\wedge k}
$$
is $\lambda_1(p)\lambda_2(p)\cdots\lambda_k(p)$, whose limit as $p\to\infty$ is
$\lambda_1\lambda_2\cdots\lambda_k$ by Lemma \ref{L-2.1}. Apply Lemma \ref{L-2.2} to
$A^{\wedge k}$ and $B^{\wedge k}$ to obtain expression \eqref{F-2.8}.
\end{proof}
Let $\mathcal{H}$ be a $d$-dimensional Hilbert space (say, $\mathbb{C}^d$), $k$ be an integer with
$1\le k\le d$, and $\mathcal{H}^{\wedge k}$ be the $k$-fold antisymmetric tensor of $\mathcal{H}$. We
write $x_1\wedge\cdots\wedge x_k$ ($\in\mathcal{H}^{\wedge k}$) for the antisymmetric tensor of
$x_1,\dots,x_k\in\mathcal{H}$ (see \cite{Bh1}). The next lemma says that the Grassmannian
manifold $G(k,d)$ is realized in the projective space of $\mathcal{H}^{\wedge k}$. Although the
lemma might be known to specialists, we could not find a precise statement in the literature.
So, for the convenience of the reader, we present a sketch of its proof in Appendix A,
based on \cite{FGP}.
\begin{lemma}\label{L-2.4}
There are constants $\alpha,\beta>0$ (depending on only $d$ and $k$) such that
$$
\alpha\|P-Q\|\le\inf_{\theta\in\mathbb{R}}
\|u_1\wedge\cdots\wedge u_k-e^{\sqrt{-1}\theta}v_1\wedge\cdots\wedge v_k\|
\le\beta\|P-Q\|
$$
for all orthonormal sets $\{u_1,\dots,u_k\}$ and $\{v_1,\dots,v_k\}$ and the respective
orthogonal projections $P$ and $Q$ onto $\mathrm{span}\{u_1,\dots,u_k\}$ and $\mathrm{span}\{v_1,\dots,v_k\}$,
where $\|P-Q\|$ is the operator norm of $P-Q$ and $\|\cdot\|$ inside the infimum is the norm on
$\mathcal{H}^{\wedge k}$.
\end{lemma}
The main result of this paper is the next theorem, showing the existence of the limit for
the anti version of \eqref{F-1.1}.
\begin{thm}\label{T-2.5}
For any $d\times d$ positive semidefinite matrices $A$ and $B$, the matrix $Z_p$ in
\eqref{F-2.3} converges as $p\to\infty$ to a positive semidefinite matrix.
\end{thm}
\begin{proof}
By replacing $A$ and $B$ with $VAV^*$ and $VBV^*$, respectively, we may assume that $V=I$
and so
$$
A=\mathrm{diag}(a_1,\dots,a_d),\qquad B=W\mathrm{diag}(b_1,\dots,b_d)W^*.
$$
Choose an orthonormal basis $\{u_1(p),\dots,u_d(p)\}$ of $\mathbb{C}^d$ for which we have
$Z_pu_i(p)=\lambda_i(p)u_i(p)$ for $1\le i\le d$. Let $\lambda_i$ be given in Lemma
\ref{L-2.1}, and assume that $1\le k<d$ and $\lambda_1\ge\dots\ge\lambda_k>\lambda_{k+1}$.
Moreover, let $\lambda_1(Z_p^{\wedge k})\ge\lambda_2(Z_p^{\wedge k})\ge\dots$ be the
eigenvalues of $Z_p^{\wedge k}$ in decreasing order. We note that
\begin{align}
\lim_{p\to\infty}\lambda_1(Z_p^{\wedge k})
&=\lim_{p\to\infty}\lambda_1(p)\cdots\lambda_{k-1}(p)\lambda_k(p) \nonumber\\
&=\lambda_1\dots\lambda_{k-1}\lambda_k \nonumber\\
&>\lambda_1\cdots\lambda_{k-1}\lambda_{k+1}
=\lim_{p\to\infty}\lambda_2(Z_p^{\wedge k}). \label{F-2.9}
\end{align}
Hence it follows that $\lambda_1(Z_p^{\wedge k})$ is a simple eigenvalue of
$Z_p^{\wedge k}$ for every $p$ sufficiently large. Letting $w_{I,J}:=\det W_{I,J}$ for
$I,J\in\mathcal{I}_d(k)$ we compute
\begin{align*}
(Z_p^{\wedge k})^p
&=(A^{\wedge k})^{p/2}W^{\wedge k}((\mathrm{diag}(b_1,\dots,b_d))^{\wedge k})^p
(W^{\wedge k})^*(A^{\wedge k})^{p/2} \\
&=\mathrm{diag}(a_I^{p/2})_I\bigl[w_{I,J}\bigr]_{I,J}\mathrm{diag}(b_I^p)_I
\bigl[\overline w_{J,I}\bigr]_{I,J}\mathrm{diag}(a_I^{p/2})_I \\
&=\left[\sum_{K\in\mathcal{I}_d(k)}w_{I,K}\overline w_{J,K}
a_I^{p/2}a_J^{p/2}b_K^p\right]_{I,J} \\
&=\eta_k^p\left[\sum_{K\in\mathcal{I}_d(k)}w_{I,K}\overline w_{J,K}
\Biggl({a_I^{1/2}a_J^{1/2}b_K\over\eta_k}\Biggr)^p\right]_{I,J},
\end{align*}
where $\eta_k:=\lambda_1\lambda_2\cdots\lambda_k>0$ so that
$$
\eta_k=\max\{a_Ib_K:I,K\in\mathcal{I}_d(k),\,w_{I,K}\ne0\}
$$
due to Lemma \ref{L-2.3}. We now define
$$
\Delta_k:=\bigl\{(I,K)\in\mathcal{I}_d(k)^2:w_{I,K}\ne0\ \mbox{and}\ a_Ib_K=\eta_k\bigr\}.
$$
Then we have
\begin{align*}
\biggl({Z_p^{\wedge k}\over\eta_k}\biggr)^p
&=\left[\sum_{K\in\mathcal{I}_d(k)}w_{I,K}\overline w_{J,K}
\Biggl({a_I^{1/2}a_J^{1/2}b_K\over\eta_k}\Biggr)^p\right]_{I,J} \\
&\longrightarrow
Q:=\left[\sum_{K\in\mathcal{I}_d(k)}w_{I,K}\overline w_{J,K}\delta_{I,J,K}\right]_{I,J},
\end{align*}
where
$$
\delta_{I,J,K}:=\begin{cases}1 & \text{if $(I,K),(J,K)\in\Delta_k$}, \\
0 & \text{otherwise}.
\end{cases}
$$
Since $Q_{I,I}\ge|w_{I,K}|^2>0$ when $(I,K)\in\Delta_k$, note that $Q\ne0$. Furthermore,
since the eigenvalue $\lambda_1(Z_p^{\wedge k})$ is simple (if $p$ large), it follows
from \eqref{F-2.9} that the limit $Q$ of $\bigl(Z_p^{\wedge k}/\eta_k\bigr)^p$ must be
a rank one projection $\psi\psi^*$ up to a positive scalar multiple, where $\psi$ is a
unit vector in $(\mathbb{C}^d)^{\wedge k}$. Since the unit eigenvector
$u_1(p)\wedge\dots\wedge u_k(p)$ of $Z_p^{\wedge k}$ corresponding to the largest
(simple) eigenvalue coincides with that of $\bigl(Z_p^{\wedge k}/\eta_k\bigr)^p$, we
conclude that $u_1(p)\wedge\dots\wedge u_k(p)$ converges to $\psi$ up to a scalar multiple
$e^{\sqrt{-1}\theta}$. Therefore, by Lemma \ref{L-2.4} the orthogonal projection onto
$\mathrm{span}\{u_1(p),\dots,u_k(p)\}$ converges as $p\to\infty$.
Assume now that
$$
\lambda_1=\dots=\lambda_{k_1}>\lambda_{k_1+1}=\dots=\lambda_{k_2}
>\dots>\lambda_{k_{s-1}+1}=\dots=\lambda_{k_s}\quad(k_s=d).
$$
From the fact proved above, the orthogonal projection onto
$\mathrm{span}\{u_1(p),\dots,u_{k_r}(p)\}$ converges for any $r=1,\dots,s-1$, and this is trivial
for $r=s$. Therefore, the orthogonal projection onto
$\mathrm{span}\{u_{k_{r-1}+1}(p),\dots,u_{k_r}(p)\}$ converges to a projection $P_r$ for any
$r=1,\dots,s$, and thus $Z_p$ converges to $\sum_{r=1}^s\lambda_{k_r}P_r$.
\end{proof}
For $1\le k\le d$ define $\eta_k$ by the right-hand side of \eqref{F-2.8}. Then
Lemma \ref{L-2.3} (see also the proof of Lemma \ref{L-2.1}) implies that, for
$k=1,\dots,d$,
$$
\lambda_k={\eta_k\over\eta_{k-1}}\quad\mbox{if}\quad\eta_k>0
$$
(where $\eta_0:=1$), and $\lambda_k=0$ if $\eta_k=0$. So one can effectively compute the
eigenvalues of $Z:=\lim_{p\to\infty}Z_p$; however, it does not seem that there is a simple
algebraic method to compute the limit matrix $Z$.
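This recipe is easy to carry out by brute force over minors (a numerical sketch of our own, assuming NumPy; we take $V=I$ and $V^*W=W$ a rotation in the $(1,2)$-plane, chosen so that several minors vanish and the limit spectrum lies strictly below the maximal list $(a_ib_i)$):

```python
import itertools
import numpy as np

def mpow(X, t):
    # X^t for a positive semidefinite Hermitian matrix, via eigendecomposition
    w, U = np.linalg.eigh(X)
    return (U * w**t) @ U.T

a = np.array([1.4, 1.0, 0.6])
b = np.array([0.5, 0.9, 1.3])
c, s = np.cos(0.7), np.sin(0.7)
W = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
A = np.diag(a)                          # V = I, so V^*W = W
B = W @ np.diag(b) @ W.T

# eta_k = max{ a_I b_J : det W_{I,J} != 0 }, by brute force over index sets
d = 3
eta = [1.0]
for k in range(1, d + 1):
    best = 0.0
    for I in itertools.combinations(range(d), k):
        for J in itertools.combinations(range(d), k):
            if abs(np.linalg.det(W[np.ix_(I, J)])) > 1e-12:
                best = max(best, a[list(I)].prod() * b[list(J)].prod())
    eta.append(best)
lam_pred = [eta[k] / eta[k - 1] for k in range(1, d + 1)]

# compare with the spectrum of Z_p at a moderate p; large p makes
# A^{p/2} B^p A^{p/2} numerically too ill-conditioned to diagonalize
p = 25.0
M = mpow(A, p/2) @ mpow(B, p) @ mpow(A, p/2)
lam_num = np.sort(np.linalg.eigvalsh(M))[::-1] ** (1/p)
```

Here $\eta=(1.26,\,0.9828,\,0.4914)$ gives $(\lambda_1,\lambda_2,\lambda_3)=(1.26,\,0.78,\,0.5)$, strictly below the maximal list $(1.82,\,0.9,\,0.3)$ in the log-majorization order.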
\section{The maximal case \label{sec3}}
Let $A$ and $B$ be $d\times d$ positive semidefinite matrices with diagonalizations
\eqref{F-2.1} and \eqref{F-2.2}. For each $d\times d$ matrix $X$ we write
$s_1(X)\ge s_2(X)\ge\dots\ge s_d(X)$ for the singular values of $X$ in decreasing order
with multiplicities. For each $p>0$ and $k=1,\dots,d$, since
$\prod_{i=1}^k\lambda_i(p)=\bigl(\prod_{i=1}^ks_i(A^{p/2}B^{p/2})\bigr)^{2/p}$, by the
majorization results of Gel'fand and Naimark and of Horn (see, e.g., \cite{MOA,Bh1,Hi2}),
we have
$$
\prod_{j=1}^ka_{i_j}b_{d+1-i_j}\le\prod_{j=1}^k\lambda_j(p)\le\prod_{j=1}^ka_jb_j
$$
for any choice of $1\le i_1<i_2<\dots<i_k\le d$, and for $k=d$
$$
\prod_{i=1}^d\lambda_i(p)=\det A\cdot\det B=\prod_{i=1}^da_ib_i.
$$
That is, for any $p>0$,
\begin{equation}\label{F-3.1}
(a_ib_{d+1-i})_{i=1}^d\prec_{(\log)}(\lambda_i(p))_{i=1}^d
\prec_{(\log)}(a_ib_i)_{i=1}^d
\end{equation}
with the notation of log-majorization, see \cite{AH}. Letting $p\to\infty$ gives
\begin{equation}\label{F-3.2}
(a_ib_{d+1-i})_{i=1}^d\prec_{(\log)}(\lambda_i)_{i=1}^d\prec_{(\log)}(a_ib_i)_{i=1}^d
\end{equation}
for the eigenvalues $\lambda_1\ge\dots\ge\lambda_d$ of $Z=\lim_{p\to\infty}Z_p$. In general,
we have nothing to say about the position of $(\lambda_i)_{i=1}^d$ in \eqref{F-3.2}. For
instance, when $V^*W$ becomes the permutation matrix corresponding to a permutation
$(j_1,\dots,j_d)$ of $(1,\dots,d)$, we have $Z_p=V\mathrm{diag}(a_1b_{j_1},\dots,a_db_{j_d})V^*$
independently of $p>0$ so that $(\lambda_i)=(a_ib_{j_i})$.
In this section we clarify the case when $(\lambda_i)_{i=1}^d$ is equal to
$(a_ib_i)_{i=1}^d$, the maximal case in the log-majorization order in \eqref{F-3.2}. To do
this, let $0=i_0<i_1<\cdots<i_{l-1}<i_l=d$ and $0=j_0<j_1<\cdots<j_{m-1}<j_m=d$ be taken
so that
\begin{align*}
&a_1=\dots=a_{i_1}>a_{i_1+1}=\dots=a_{i_2}>\dots>a_{i_{l-1}+1}=\dots=a_{i_l}, \\
&b_1=\dots=b_{j_1}>b_{j_1+1}=\dots=b_{j_2}>\dots>b_{j_{m-1}+1}=\dots=b_{j_m}.
\end{align*}
\begin{thm}\label{T-3.1}
In the above situation the following conditions are equivalent:
\begin{itemize}
\item[(i)] $\lambda_i=a_ib_i$ for all $i=1,\dots,d$;
\item[(ii)] for every $k=1,\dots,d$ so that $i_{r-1}<k\le i_r$ and $j_{s-1}<k\le j_s$,
there are $I_k,J_k\in\mathcal{I}_d(k)$ such that
$$
\{1,\dots,i_{r-1}\}\subset I_k\subset\{1,\dots,i_r\},\qquad
\{1,\dots,j_{s-1}\}\subset J_k\subset\{1,\dots,j_s\},
$$
$$
\det(V^*W)_{I_k,J_k}\ne0;
$$
\item[(iii)] the property in (ii) holds for every
$k\in\{i_1,\dots,i_{l-1},j_1,\dots,j_{m-1}\}$.
\end{itemize}
\end{thm}
\begin{proof}
(i) $\Leftrightarrow$ (ii).\enspace
By Lemma \ref{L-2.3} condition (ii) means that
$$
\prod_{i=1}^k\lambda_i=\prod_{i=1}^ka_ib_i,\qquad k=1,\dots,d.
$$
It follows (see the proof of Lemma \ref{L-2.1}) that this is equivalent to (i).
(ii) $\Rightarrow$ (iii) is trivial.
(iii) $\Rightarrow$ (i).\enspace
By Lemma \ref{L-2.3} again condition (iii) means that
\begin{equation}\label{F-3.3}
\prod_{i=1}^h\lambda_i=\prod_{i=1}^ha_ib_i\quad
\mbox{for all}\ h\in\{i_1,\dots,i_{l-1},j_1,\dots,j_{m-1}\}.
\end{equation}
This holds also for $h=d$ thanks to \eqref{F-3.2}. We need to prove that
$\prod_{i=1}^k\lambda_i=\prod_{i=1}^ka_ib_i$ for all $k=1,\dots,d$. Now, let
$i_{r-1}<k\le i_r$ and $j_{s-1}<k\le j_s$ as in condition (ii). If $k=i_r$ or $k=j_s$,
then the conclusion has already been stated in \eqref{F-3.3}. So assume that
$i_{r-1}<k<i_r$ and $j_{s-1}<k<j_s$. Set $h_0:=\max\{i_{r-1},j_{s-1}\}$ and
$h_1:=\min\{i_r,j_s\}$ so that $h_0<k<h_1$. By \eqref{F-3.3} for $h=h_0,h_1$ we have
$$
\prod_{i=1}^{h_0}\lambda_i=\prod_{i=1}^{h_0}a_ib_i>0,\qquad
\prod_{i=1}^{h_1}\lambda_i=\prod_{i=1}^{h_1}a_ib_i.
$$
Since $a_i=a_{h_1}$ and $b_i=b_{h_1}$ for $h_0<i\le h_1$, we have
$\prod_{i=h_0+1}^{h_1}\lambda_i=(a_{h_1}b_{h_1})^{h_1-h_0}$.
By \eqref{F-3.2} we furthermore have
$\prod_{i=1}^{h_0+1}\lambda_i\le\prod_{i=1}^{h_0+1}a_ib_i$ and hence
$$
a_{h_1}b_{h_1}\ge\lambda_{h_0+1}\ge\lambda_{h_0+2}\ge\dots\ge\lambda_{h_1}.
$$
Therefore, $\lambda_i=a_{h_1}b_{h_1}$ for all $i$ with $h_0<i\le h_1$, from which
$\prod_{i=1}^k\lambda_i=\prod_{i=1}^ka_ib_i$ follows for $h_0<k<h_1$.
\end{proof}
\begin{prop}\label{P-3.2}
Assume that the equivalent conditions of Theorem \ref{T-3.1} hold. Then, for each
$r=1,\dots,l$, the spectral projection of $Z$ corresponding to the set of eigenvalues
$\{a_{i_{r-1}+1}b_{i_{r-1}+1},\dots,a_{i_r}b_{i_r}\}$ is equal to the spectral projection
$\sum_{i=i_{r-1}+1}^{i_r}v_iv_i^*$ of $A$ corresponding to $a_{i_r}$. Hence $Z$ is of the
form
$$
Z=\sum_{i=1}^da_ib_iu_iu_i^*
$$
for some orthonormal set $\{u_1,\dots,u_d\}$ such that
$\sum_{i=i_{r-1}+1}^{i_r}u_iu_i^*=\sum_{i=i_{r-1}+1}^{i_r}v_iv_i^*$ for $r=1,\dots,l$.
\end{prop}
\begin{proof}
In addition to Theorem \ref{T-2.5}, it suffices to prove that, for each $k\in\{i_1,\dots,i_{l-1}\}$,
the spectral projection of $Z_p$ corresponding to $\{\lambda_1(p),\dots,\lambda_k(p)\}$
converges to $\sum_{i=1}^kv_iv_i^*$. Assume that $k=i_r$ with $1\le r\le l-1$. When
$j_{s-1}<k<j_s$, by condition (iii) of Theorem \ref{T-3.1} we have
$\det(V^*W)_{\{1,\dots,k\},\{1,\dots,j_{s-1},j_s',\dots,j_k'\}}\ne0$ for some
$\{j_s',\dots,j_k'\}\subset\{j_{s-1}+1,\dots,j_s\}$. By exchanging
$w_{j_s'},\dots,w_{j_k'}$ with $w_{j_{s-1}+1},\dots,w_k$ we may assume that
$\det(V^*W)_{\{1,\dots,k\},\{1,\dots,k\}}\ne0$. Furthermore, by replacing $A$ and $B$ with
$VAV^*$ and $VBV^*$, respectively, we may assume that $V=I$. So we end up assuming that
$$
A=\mathrm{diag}(a_1,\dots,a_d),\qquad B=W\mathrm{diag}(b_1,\dots,b_d)W^*,
$$
and $\det W(1,\dots,k)\ne0$, where $W(1,\dots,k)$ denotes the principal $k\times k$
submatrix of the top-left corner. Let $\{e_1,\dots,e_d\}$ be the standard basis of $\mathbb{C}^d$.
By Theorem \ref{T-3.1} we have
$$
\lim_{p\to\infty}\lambda_1(Z_p^{\wedge k})
=\prod_{i=1}^ka_ib_i
>\prod_{i=1}^{k-1}a_ib_i\cdot a_{k+1}b_{k+1}
=\lim_{p\to\infty}\lambda_2(Z_p^{\wedge k})
$$
so that the largest eigenvalue of $Z_p^{\wedge k}$ is simple for every sufficiently large
$p$. Let $\{u_1(p),\dots,u_d(p)\}$ be an orthonormal basis of $\mathbb{C}^d$ for which
$Z_pu_i(p)=\lambda_i(p)u_i(p)$ for $1\le i\le d$. Then $u_1(p)\wedge\dots\wedge u_k(p)$
is the unit eigenvector of $Z_p^{\wedge k}$ corresponding to the eigenvalue
$\lambda_1(Z_p^{\wedge k})$. We now show that $u_1(p)\wedge\dots\wedge u_k(p)$ converges
to $e_1\wedge\dots\wedge e_k$ in $(\mathbb{C}^d)^{\wedge k}$. We observe that
$$
(A^{\wedge k})^{p/2}=\mathrm{diag}\bigl(a_I^{p/2}\bigr)_I
=a_{\{1,\dots,k\}}^{p/2}\mathrm{diag}\biggl(1,\alpha_2^{p/2},\dots,
\alpha_{d\choose k}^{p/2}\biggr)
$$
with respect to the basis $\bigl\{e_{i_1}\wedge\dots\wedge e_{i_k}:
I=\{i_1,\dots,i_k\}\in\mathcal{I}_d(k)\bigr\}$, where the first diagonal entry $1$
corresponds to $e_1\wedge\dots\wedge e_k$ and $0\le\alpha_h<1$ for $2\le h\le{d\choose k}$.
Similarly,
$$
\bigl((\mathrm{diag}(b_1,\dots,b_d))^{\wedge k}\bigr)^p
=b_{\{1,\dots,k\}}^p\mathrm{diag}\biggl(1,\beta_2^p,\dots,\beta_{d\choose k}^p\biggr),
$$
where $0\le\beta_h\le1$ for $2\le h\le{d\choose k}$. Moreover, $W^{\wedge k}$ is given as
$$
W^{\wedge k}=\bigl[w_{I,J}\bigr]_{I,J}
=\begin{bmatrix}w_{11}&\cdots&w_{1{d\choose k}}\\
\vdots&\ddots&\vdots\\
w_{{d\choose k}1}&\cdots&w_{{d\choose k}{d\choose k}}
\end{bmatrix},
$$
where $w_{I,J}=\det W_{I,J}$ and so $w_{11}=\det W(1,\dots,k)\ne0$. As in the proof of
Theorem \ref{T-2.5} we now compute
\begin{align*}
(Z_p^{\wedge k})^p
&=(A^{\wedge k})^{p/2}W^{\wedge k}\bigl((\mathrm{diag}(b_1,\dots,b_d))^{\wedge k}\bigr)^p
(W^{\wedge k})^*(A^{\wedge k})^{p/2} \\
&=\bigl(a_{\{1,\dots,k\}}b_{\{1,\dots,k\}}\bigr)^p
\left[\sum_{h=1}^{d\choose k}w_{ih}\overline w_{jh}
\alpha_i^{p/2}\alpha_j^{p/2}\beta_h^p\right]_{i,j=1}^{d\choose k},
\end{align*}
where $\alpha_1=\beta_1=1$. As $p\to\infty$ we have
$$
\left[\sum_{h=1}^{d\choose k}w_{ih}\overline w_{jh}
\alpha_i^{p/2}\alpha_j^{p/2}\beta_h^p\right]
\longrightarrow\mathrm{diag}\Biggl(\sum_{h:\beta_h=1}|w_{1h}|^2,0,\dots,0\Biggr).
$$
Since the unit eigenvector of $Z_p^{\wedge k}$ corresponding to the largest eigenvalue
coincides with that of $\biggl[\sum_{h=1}^{d\choose k}w_{ih}\overline w_{jh}
\alpha_i^{p/2}\alpha_j^{p/2}\beta_h^p\biggr]$, it follows that
$u_1(p)\wedge\dots\wedge u_k(p)$ converges to $e_1\wedge\dots\wedge e_k$ up to a scalar
multiple $e^{\sqrt{-1}\theta}$, $\theta\in\mathbb{R}$. By Lemma \ref{L-2.4} this implies the
desired assertion.
\end{proof}
\begin{cor}\label{C-3.3}
If the eigenvalues $a_1,\dots,a_d$ of $A$ are all distinct and the conditions of Theorem \ref{T-3.1} hold, then
$$
\lim_{p\to\infty}(A^{p/2}B^pA^{p/2})^{1/p}=V\mathrm{diag}(a_1b_1,a_2b_2,\dots,a_db_d)V^*.
$$
\end{cor}
In particular, when the eigenvalues of $A$ are all distinct and so are those of $B$, the conditions
of Theorem \ref{T-3.1} mean that all the leading principal minors of $V^*W$ are non-zero.
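Corollary \ref{C-3.3} is also easy to observe numerically (a sketch of our own, assuming NumPy; $V=I$ and $V^*W$ a rotation, so both leading principal minors are nonzero and the limit is $V\mathrm{diag}(a_ib_i)V^*$):

```python
import numpy as np

def mpow(X, t):
    # X^t for a positive semidefinite Hermitian matrix, via eigendecomposition
    w, U = np.linalg.eigh(X)
    return (U * w**t) @ U.T

a, b = np.array([1.5, 0.5]), np.array([1.2, 0.8])
c, s = np.cos(0.7), np.sin(0.7)
W = np.array([[c, -s], [s, c]])   # leading principal minors: c and det W = 1
A, B = np.diag(a), W @ np.diag(b) @ W.T

# moderate p keeps A^{p/2} B^p A^{p/2} numerically well-conditioned
p = 16.0
Zp = mpow(mpow(A, p/2) @ mpow(B, p) @ mpow(A, p/2), 1/p)
limit = np.diag(a * b)            # V diag(a_1 b_1, a_2 b_2) V^* with V = I
print(np.max(np.abs(Zp - limit)))
```

Already at this $p$ the off-diagonal entries of $Z_p$ are negligible and the diagonal is close to $(a_1b_1,a_2b_2)=(1.8,0.4)$.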
\section{Extension to more than two matrices \label{sec4}}
Let $A_1,\dots,A_m$ be $d\times d$ positive semidefinite matrices with diagonalizations
$$
A_l=V_lD_lV_l^*,\qquad D_l=\mathrm{diag}\bigl(a_1^{(l)},\dots,a_d^{(l)}\bigr),
\qquad1\le l\le m.
$$
For each $p>0$ consider the positive semidefinite matrix
\begin{align*}
Z_p&:=\bigl(A_1^{p/2}A_2^{p/2}\cdots A_{m-1}^{p/2}A_m^pA_{m-1}^{p/2}\cdots
A_2^{p/2}A_1^{p/2}\bigr)^{1/p}, \\
&\ =V_1\bigl(D_1^{p/2}W_1\cdots D_{m-1}^{p/2}W_{m-1}D_m^pW_{m-1}^*D_{m-1}^{p/2}\cdots
W_1^*D_1^{p/2}\bigr)^{1/p}V_1^*,
\end{align*}
where
$$
W_l:=V_l^*V_{l+1}=\Bigl[w_{ij}^{(l)}\Bigr]_{i,j=1}^d,\qquad1\le l\le m-1.
$$
The eigenvalues of $Z_p$ are denoted as
$\lambda_1(p)\ge\lambda_2(p)\ge\dots\ge\lambda_d(p)$ in decreasing order.
Although the log-majorization in \eqref{F-2.5} is no longer available in the present
situation, we can extend Lemma \ref{L-2.2} as follows.
\begin{lemma}\label{L-4.1}
The limit $\lambda_1:=\lim_{p\to\infty}\lambda_1(p)$ exists and
\begin{equation}\label{F-4.1}
\lambda_1=\max\bigl\{a_{i_1}^{(1)}a_{i_2}^{(2)}\cdots a_{i_m}^{(m)}:
\mathbf{w}(i_1,i_2,\dots,i_m)\ne0\bigr\},
\end{equation}
where
\begin{align*}
&\mathbf{w}(i_1,i_2,\dots,i_m) \\
&:=\sum\Bigl\{w_{i_1j_2}^{(1)}w_{j_2j_3}^{(2)}\cdots w_{j_{m-1}i_m}^{(m)}:
1\le j_2,\dots,j_{m-1}\le d,\,a_{j_2}^{(2)}\cdots
a_{j_{m-1}}^{(m-1)}=a_{i_2}^{(2)}\cdots a_{i_{m-1}}^{(m-1)}\Bigr\}.
\end{align*}
Moreover, $a_1^{(1)}\cdots a_1^{(m)}\ge\lambda_1\ge a_d^{(1)}\cdots a_d^{(m)}$.
\end{lemma}
\begin{proof}
We notice that
\begin{align*}
\bigl[V_1^*Z_p^pV_1\bigr]_{ii}
&=\bigl[D_1^{p/2}W_1\cdots D_{m-1}^{p/2}W_{m-1}D_m^pW_{m-1}^*D_{m-1}^{p/2}
\cdots W_1^*D_1^{p/2}\bigr]_{ii} \\
&=\sum_{i_2,\dots,i_{m-1},k,j_{m-1},\dots,j_2}
\bigl(a_i^{(1)}\bigr)^{p/2}w_{ii_2}^{(1)}\bigl(a_{i_2}^{(2)}\bigr)^{p/2}\cdots
w_{i_{m-2}i_{m-1}}^{(m-2)}\bigl(a_{i_{m-1}}^{(m-1)}\bigr)^{p/2} \\
&\qquad\times
w_{i_{m-1}k}^{(m-1)}\bigl(a_k^{(m)}\bigr)^p
\overline w_{j_{m-1}k}^{(m-1)}\bigl(a_{j_{m-1}}^{(m-1)}\bigr)^{p/2}
\overline w_{j_{m-2}j_{m-1}}^{(m-2)}\cdots\bigl(a_{j_2}^{(2)}\bigr)^{p/2}
\overline w_{ij_2}^{(1)}\bigl(a_i^{(1)}\bigr)^{p/2} \\
&=\sum_k\sum_{i_2,\dots,i_{m-1}}
w_{ii_2}^{(1)}w_{i_2i_3}^{(2)}\cdots w_{i_{m-1}k}^{(m-1)}
\bigl(a_i^{(1)}a_{i_2}^{(2)}\cdots a_{i_{m-1}}^{(m-1)}a_k^{(m)}\bigr)^{p/2} \\
&\qquad\times
\sum_{j_2,\dots,j_{m-1}}
\overline{w_{ij_2}^{(1)}w_{j_2j_3}^{(2)}\cdots w_{j_{m-1}k}^{(m-1)}}
\bigl(a_i^{(1)}a_{j_2}^{(2)}\cdots a_{j_{m-1}}^{(m-1)}a_k^{(m)}\bigr)^{p/2} \\
&=\sum_k\Bigg|\sum_{j_2,\dots,j_{m-1}}
w_{ij_2}^{(1)}w_{j_2j_3}^{(2)}\cdots w_{j_{m-1}k}^{(m-1)}
\bigl(a_i^{(1)}a_{j_2}^{(2)}\cdots a_{j_{m-1}}^{(m-1)}a_k^{(m)}\bigr)^{p/2}
\Bigg|^2.
\end{align*}
Let $\eta$ be the right-hand side of \eqref{F-4.1}. From the above expression we have
\begin{align*}
\lambda_1(p)^p&\le\mathrm{Tr}\, V_1^*Z_p^pV_1 \\
&=\sum_{i,k}\Bigg|\sum_{j_2,\dots,j_{m-1}}
w_{ij_2}^{(1)}w_{j_2j_3}^{(2)}\cdots w_{j_{m-1}k}^{(m-1)}
\bigl(a_i^{(1)}a_{j_2}^{(2)}\cdots a_{j_{m-1}}^{(m-1)}a_k^{(m)}\bigr)^{p/2}
\Bigg|^2 \\
&\le M\eta^p,
\end{align*}
where $M>0$ is a constant independent of $p$. Therefore,
$\limsup_{p\to\infty}\lambda_1(p)\le\eta$. On the other hand, let
$(i,i_2,\dots,i_{m-1},k)$ be such that $a_i^{(1)}a_{i_2}^{(2)}\cdots a_{i_{m-1}}^{(m-1)}a_k^{(m)}=\eta$,
and let $\delta:=|\mathbf{w}(i,i_2,\dots,i_{m-1},k)|>0$. Then we have
$$
\Bigg|\sum_{j_2,\dots,j_{m-1}}
w_{ij_2}^{(1)}w_{j_2j_3}^{(2)}\cdots w_{j_{m-1}k}^{(m-1)}
\bigl(a_i^{(1)}a_{j_2}^{(2)}\cdots a_{j_{m-1}}^{(m-1)}a_k^{(m)}\bigr)^{p/2}\Bigg|
\ge\delta\eta^{p/2}-M'\alpha^{p/2}
$$
for some constants $M'>0$ and $\alpha>0$ with $\alpha<\eta$. Therefore, for sufficiently
large $p$ we have $\delta\eta^{p/2}-M'\alpha^{p/2}>0$ and
$$
d\lambda_1(p)^p\ge\mathrm{Tr}\, V_1^*Z_p^pV_1
\ge\bigl(\delta\eta^{p/2}-M'\alpha^{p/2}\bigr)^2
=\delta^2\eta^p\biggl(1-{M'\over\delta}\biggl({\alpha\over\eta}\biggr)^{p/2}\biggr)^2
$$
so that $\liminf_{p\to\infty}\lambda_1(p)\ge\eta$. The latter assertion is obvious.
\end{proof}
\begin{lemma}\label{L-4.2}
For every $i=1,\dots,d$ the limit $\lambda_i:=\lim_{p\to\infty}\lambda_i(p)$ exists.
\end{lemma}
\begin{proof}
For every $k=1,\dots,d$ apply Lemma \ref{L-4.1} to $A_1^{\wedge k},\dots,A_m^{\wedge k}$
to see that
$$
\lim_{p\to\infty}\lambda_1(p)\lambda_2(p)\cdots\lambda_k(p)
$$
exists. Hence, the limit $\lim_{p\to\infty}\lambda_i(p)$ exists for $i=1,\dots,d$ as in
the proof of Lemma \ref{L-2.1}.
\end{proof}
\begin{thm}\label{T-4.3}
For all $d\times d$ positive semidefinite matrices $A_1,\dots,A_m$, the matrix
$$
Z_p=\bigl(A_1^{p/2}A_2^{p/2}\cdots A_{m-1}^{p/2}A_m^pA_{m-1}^{p/2}\cdots
A_2^{p/2}A_1^{p/2}\bigr)^{1/p}
$$
converges as $p\to\infty$.
\end{thm}
\begin{proof}
The proof is similar to that of Theorem \ref{T-2.5}. Choose an orthonormal basis
$\{u_1(p),\dots,u_d(p)\}$ of $\mathbb{C}^d$ such that $Z_pu_i(p)=\lambda_i(p)u_i(p)$ for
$1\le i\le d$. Let $k$ ($1\le k<d$) be such that
$\lambda_1\ge\dots\ge\lambda_k>\lambda_{k+1}$. Since \eqref{F-2.9} holds in the present
case too, $\lambda_1(Z_p^{\wedge k})$ is a simple eigenvalue of $Z_p^{\wedge k}$ for
every $p$ sufficiently large. For $I,J\in\mathcal{I}_d(k)$ we write
$w_{I,J}^{(l)}:=\det W_{I,J}^{(l)}$ for $1\le l\le m-1$ and
$a_I^{(l)}:=\prod_{i\in I}a_i^{(l)}$ for $1\le l\le m$. We have
\begin{align*}
&\bigl[V_1^{*\wedge k}(Z_p^{\wedge k})^pV_1^{\wedge k}\bigr]_{I,J} \\
&\qquad=\sum_{K\in\mathcal{I}_d(k)}\sum_{I_2,\dots,I_{m-1}}
w_{I,I_2}^{(1)}w_{I_2,I_3}^{(2)}\cdots w_{I_{m-1},K}^{(m-1)}
\bigl(a_I^{(1)}a_{I_2}^{(2)}\cdots a_{I_{m-1}}^{(m-1)}a_K^{(m)}\bigr)^{p/2} \\
&\qquad\qquad\times\sum_{J_2,\dots,J_{m-1}}
\overline{w_{J,J_2}^{(1)}w_{J_2,J_3}^{(2)}\cdots w_{J_{m-1},K}^{(m-1)}}
\bigl(a_J^{(1)}a_{J_2}^{(2)}\cdots a_{J_{m-1}}^{(m-1)}a_K^{(m)}\bigr)^{p/2} \\
&\qquad=\eta_k^p\sum_{K\in\mathcal{I}_d(k)}\sum_{I_2,\dots,I_{m-1}}
w_{I,I_2}^{(1)}w_{I_2,I_3}^{(2)}\cdots w_{I_{m-1},K}^{(m-1)}
\Biggl({a_I^{(1)}a_{I_2}^{(2)}\cdots a_{I_{m-1}}^{(m-1)}a_K^{(m)}\over\eta_k}
\Biggr)^{p/2} \\
&\qquad\qquad\quad\times\sum_{J_2,\dots,J_{m-1}}
\overline{w_{J,J_2}^{(1)}w_{J_2,J_3}^{(2)}\cdots w_{J_{m-1},K}^{(m-1)}}
\Biggl({a_J^{(1)}a_{J_2}^{(2)}\cdots a_{J_{m-1}}^{(m-1)}a_K^{(m)}\over\eta_k}
\Biggr)^{p/2},
\end{align*}
where
$$
\eta_k:=\lambda_1\lambda_2\cdots\lambda_k
=\max\bigl\{a_{I_1}^{(1)}a_{I_2}^{(2)}\cdots a_{I_{m-1}}^{(m-1)}a_{I_m}^{(m)}:
\mathbf{w}_k(I_1,I_2,\dots,I_{m-1},I_m)\ne0\bigr\}
$$
and
\begin{align*}
&\mathbf{w}_k(I_1,I_2,\dots,I_{m-1},I_m) \\
&:=\sum\Bigl\{w_{I_1,J_2}^{(1)}w_{J_2,J_3}^{(2)}\cdots w_{J_{m-1},I_m}^{(m-1)}:
J_2,\dots,J_{m-1}\in\mathcal{I}_d(k),\,a_{J_2}^{(2)}\cdots
a_{J_{m-1}}^{(m-1)}=a_{I_2}^{(2)}\cdots a_{I_{m-1}}^{(m-1)}\Bigr\}.
\end{align*}
We see that
$$
V_1^{*\wedge k}\biggl({Z_p^{\wedge k}\over\eta_k}\biggr)^pV_1^{\wedge k}
\longrightarrow
Q:=\Biggl[\sum_{K\in\mathcal{I}_d(k)}\mathbf{v}_k(I,K)\overline{\mathbf{v}_k(J,K)}\Biggr]_{I,J}
\quad\mbox{as}\ p\to\infty,
$$
where
$$
\mathbf{v}_k(I,K):=\mathbf{w}_k(I,I_2,\dots,I_{m-1},K)
$$
if $\mathbf{w}_k(I,I_2,\dots,I_{m-1},K)\ne0$ and
$a_I^{(1)}a_{I_2}^{(2)}\cdots a_{I_{m-1}}^{(m-1)}a_K^{(m)}=\eta_k$ for some
$I_2,\dots,I_{m-1}\in\mathcal{I}_d(k)$, and otherwise $\mathbf{v}_k(I,K):=0$. Since
$Q_{I,I}\ge|\mathbf{v}_k(I,K)|^2>0$ for some $I,K\in\mathcal{I}_d(k)$, we have $Q\ne0$. The remainder of the
proof is the same as that of Theorem \ref{T-2.5}.
\end{proof}
\section{Limit of $(A^p\,\#\,B^p)^{1/p}$ as $p\to\infty$ \label{sec5}}
Another problem, seemingly more interesting, is to determine the behavior of the convergence of
$(A^p\,\sigma\,B^p)^{1/p}$ as $p\to\infty$, the anti-version of \eqref{F-1.2} (or Theorem
\ref{T-B.1}). For example, when $\sigma=\triangledown$, the arithmetic mean, the increasing
limit of $(A^p\,\triangledown\,B^p)^{1/p}=\bigl((A^p+B^p)/2\bigr)^{1/p}$ as $p\to\infty$
exists and
\begin{equation}\label{F-5.1}
A\vee B:=\lim_{p\to\infty}(A^p\,\triangledown\,B^p)^{1/p}
=\lim_{p\to\infty}(A^p+B^p)^{1/p}
\end{equation}
is the supremum of $A,B$ with respect to some spectral order among Hermitian matrices,
see \cite{Ka79} and \cite[Lemma 6.5]{An}. When $\sigma=\,!$, the harmonic mean, we have the
infimum counterpart $A\wedge B:=\lim_{p\to\infty}(A^p\,!\,B^p)^{1/p}$, the
decreasing limit.
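As an illustrative numerical sketch (not part of the original argument; the matrices below are arbitrary test data), the spectral supremum in \eqref{F-5.1} can be approximated in Python/NumPy by taking a moderately large $p$. Since $(A^p+B^p)^{1/p}\ge A$ and $(A^p+B^p)^{1/p}\ge B$ for every $p\ge1$ by operator monotonicity of $t^{1/p}$, the approximation should dominate both matrices in the positive semidefinite order:

```python
import numpy as np

def mpow(X, t):
    """X**t for a symmetric positive semidefinite matrix X via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * w**t) @ V.T

# Arbitrary positive definite test matrices (illustrative data only).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.5, -0.3], [-0.3, 2.5]])

p = 64.0
sup = mpow(mpow(A, p) + mpow(B, p), 1.0 / p)   # approximates A v B of (F-5.1)
# (A^p + B^p)^{1/p} >= A, B for p >= 1, so sup - A and sup - B should be
# positive semidefinite up to rounding.
assert np.linalg.eigvalsh(sup - A).min() > -1e-8
assert np.linalg.eigvalsh(sup - B).min() > -1e-8
```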
In this section we are interested in the case where $\sigma=\#$, the geometric mean. For
each $p>0$ and $d\times d$ positive semidefinite matrices $A,B$ with the diagonalizations
in \eqref{F-2.1} and \eqref{F-2.2} we define
\begin{equation}\label{F-5.2}
G_p:=(A^p\,\#\,B^p)^{2/p},
\end{equation}
which is given as $\bigl(A^{p/2}(A^{-p/2}B^pA^{-p/2})^{1/2}A^{p/2}\bigr)^{2/p}$ if $A>0$.
The eigenvalues of $G_p$ are denoted as $\lambda_1(G_p)\ge\dots\ge\lambda_d(G_p)$ in
decreasing order.
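Before the propositions, here is a brief numerical sanity check (illustrative only; the random matrices are assumptions of the sketch, not data from the text) that the geometric mean entering \eqref{F-5.2} can be computed from its defining formula and satisfies the familiar symmetry and Riccati properties:

```python
import numpy as np

def mpow(X, t):
    """X**t for a symmetric positive definite matrix X via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * w**t) @ V.T

def gmean(A, B):
    """A # B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, assuming A > 0."""
    Ah, Ahi = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Ahi @ B @ Ahi, 0.5) @ Ah

rng = np.random.default_rng(1)            # arbitrary reproducible test matrices
X = rng.standard_normal((3, 3)); A = X @ X.T + np.eye(3)
Y = rng.standard_normal((3, 3)); B = Y @ Y.T + np.eye(3)

G = gmean(A, B)
assert np.allclose(G, gmean(B, A))               # A # B is symmetric in A and B
assert np.allclose(G @ np.linalg.inv(A) @ G, B)  # Riccati equation (A#B)A^{-1}(A#B) = B
```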
\begin{prop}\label{P-5.1}
For every $i=1,\dots,d$ the limit
$$
\widehat\lambda_i:=\lim_{p\to\infty}\lambda_i(G_p)
$$
exists, and $a_1b_1\ge\widehat\lambda_1\ge\dots\ge\widehat\lambda_d\ge a_db_d$. Furthermore,
\begin{equation}\label{F-5.3}
(a_ib_{d+1-i})_{i=1}^d\prec_{(\log)}
\bigl(\widehat\lambda_i\bigr)_{i=1}^d\prec_{(\log)}(a_ib_i)_{i=1}^d.
\end{equation}
\end{prop}
\begin{proof}
Since $(a_1b_1)^{p/2}I\ge A^p\,\#\,B^p\ge(a_db_d)^{p/2}I$, we have
$a_1b_1\ge\lambda_i(G_p)\ge a_db_d$ for all $i=1,\dots,d$ and $p>0$. By the log-majorization
result in \cite[Theorem 2.1]{AH}, for every $k=1,\dots,d$ we have
\begin{equation}\label{F-5.4}
\prod_{i=1}^k\lambda_i(G_p)\ge\prod_{i=1}^k\lambda_i(G_q)
\quad\mbox{if\quad $0<p<q$}.
\end{equation}
This implies that the limit of $\prod_{i=1}^k\lambda_i(G_p)$ as $p\to\infty$ exists for
every $k=1,\dots,d$, and hence the limit of $\lambda_i(G_p)$ as $p\to\infty$ exists for
$i=1,\dots,d$ as in the proof of Lemma \ref{L-2.1}.
To prove the latter assertion, it suffices to show that
\begin{equation}\label{F-5.5}
(a_ib_{d+1-i})_{i=1}^d\prec_{(\log)}
(\lambda_i(G_1))_{i=1}^d\prec_{(\log)}(a_ib_i)_{i=1}^d
\end{equation}
for $G_1=(A\,\#\,B)^2$. Indeed, applying this to $A^p$ and $B^p$ we have
$$
(a_ib_{d+1-i})_{i=1}^d\prec_{(\log)}
(\lambda_i(G_p))_{i=1}^d\prec_{(\log)}(a_ib_i)_{i=1}^d
$$
so that \eqref{F-5.3} follows by letting $p\to\infty$. To prove \eqref{F-5.5}, we may by
continuity assume that $A>0$. By \cite[Corollary 2.3]{AH} and \eqref{F-3.1} we have
$$
(\lambda_i(G_1))_{i=1}^d\prec_{(\log)}
\bigl(\lambda_i(A^{1/2}BA^{1/2})\bigr)_{i=1}^d\prec_{(\log)}(a_ib_i)_{i=1}^d.
$$
Since $G_1^{1/2}A^{-1}G_1^{1/2}=B$, there exists a unitary matrix $U$ such that
$A^{-1/2}G_1A^{-1/2}=UBU^*$ and hence $G_1=A^{1/2}UBU^*A^{1/2}$. Since
$\lambda_i(UBU^*)=b_i$, by the majorization of Gel'fand and Naimark we have
$$
(a_ib_{d+1-i})_{i=1}^d\prec_{(\log)}(\lambda_i(G_1))_{i=1}^d,
$$
proving \eqref{F-5.5}.
\end{proof}
In view of \eqref{F-2.5} and \eqref{F-5.4} we may consider $G_p$ as the complementary
counterpart of $Z_p$ in some sense; yet it is also worth noting
that $G_p$ is symmetric in $A$ and $B$ while $Z_p$ is not. Our ultimate goal is to
prove the existence of the limit of $G_p$ in \eqref{F-5.2} as $p\to\infty$ similarly to
Theorem \ref{T-2.5} and to clarify, similarly to Theorem \ref{T-3.1}, the minimal case
when $\bigl(\widehat\lambda_i\bigr)_{i=1}^d$ is equal to the decreasing rearrangement of
$(a_ib_{d+1-i})_{i=1}^d$. However, the problem seems much more difficult, and we can currently
settle the special case of $2\times2$ matrices only.
\begin{prop}\label{P-5.2}
Let $A$ and $B$ be $2\times2$ positive semidefinite matrices with the diagonalizations
\eqref{F-2.1} and \eqref{F-2.2} with $d=2$. Then $G_p$ in \eqref{F-5.2} converges as
$p\to\infty$ to a positive semidefinite matrix whose eigenvalues are
$$
\bigl(\widehat\lambda_1,\widehat\lambda_2\bigr)=\begin{cases}
(a_1b_1,a_2b_2) & \text{if $(V^*W)_{12}=0$}, \\
(\max\{a_1b_2,a_2b_1\},\min\{a_1b_2,a_2b_1\}) & \text{if $(V^*W)_{12}\ne0$}.
\end{cases}
$$
\end{prop}
\begin{proof}
Since
$$
G_p=V\bigl((\mathrm{diag}(a_1,a_2))^p\,\#\,((V^*W)\mathrm{diag}(b_1,b_2)(V^*W)^*)^p\bigr)^{2/p}V^*,
$$
we may assume without loss of generality that $V=I$ (then $V^*W=W$).
First, when $W_{12}=0$ (hence $W$ is diagonal), we have for every $p>0$
$$
G_p=\mathrm{diag}(a_1b_1,a_2b_2).
$$
Next, when $W_{11}=0$ (hence $W=\begin{bmatrix}0&w_1\\w_2&0\end{bmatrix}$ with
$|w_1|=|w_2|=1$), we have for every $p>0$
$$
G_p=\mathrm{diag}(a_1b_2,a_2b_1).
$$
In the rest it suffices to consider the case where
$W=\begin{bmatrix}w_{11}&w_{12}\\w_{21}&w_{22}\end{bmatrix}$ with $w_{ij}\ne0$ for all
$i,j=1,2$. First, assume that $\det A=\det B=1$ so that $a_1a_2=b_1b_2=1$. For every $p>0$,
since $\det A^p=\det B^p=1$, it is known \cite[Proposition 3.11]{Mo} (also
\cite[Proposition 4.1.12]{Bh2}) that
$$
A^p\,\#\,B^p={A^p+B^p\over\sqrt{\det(A^p+B^p)}}
$$
so that
$$
G_p={(A^p+B^p)^{2/p}\over\bigl(\det(A^p+B^p)\bigr)^{1/p}}.
$$
Compute
\begin{equation}\label{F-5.6}
A^p+B^p=\begin{bmatrix}
a_1^p+|w_{11}|^2b_1^p+|w_{12}|^2b_2^p&
w_{11}\overline{w_{21}}b_1^p+w_{12}\overline{w_{22}}b_2^p\\
\overline{w_{11}}w_{21}b_1^p+\overline{w_{12}}w_{22}b_2^p&
a_2^p+|w_{21}|^2b_1^p+|w_{22}|^2b_2^p
\end{bmatrix}
\end{equation}
and
\begin{align}
\det(A^p+B^p)
&=1+|w_{21}|^2(a_1b_1)^p+|w_{22}|^2(a_1b_2)^p+|w_{11}|^2(a_2b_1)^p+|w_{12}|^2(a_2b_2)^p
\nonumber\\
&\qquad+|w_{11}w_{22}-w_{12}w_{21}|^2. \label{F-5.7}
\end{align}
Hence we have
$$
\lim_{p\to\infty}\bigl(\det(A^p+B^p)\bigr)^{1/p}=a_1b_1,\qquad
\lim_{p\to\infty}\bigl(\mathrm{Tr}\,(A^p+B^p)\bigr)^{1/p}=\max\{a_1,b_1\}.
$$
Therefore, thanks to \eqref{F-5.1} we have
$$
\lim_{p\to\infty}G_p={(A\vee B)^2\over a_1b_1}.
$$
Since
$$
{1\over2}\,\mathrm{Tr}\,(A^p\,\#\,B^p)\le\bigl(\lambda_1(G_p)\bigr)^{p/2}
\le\mathrm{Tr}\,(A^p\,\#\,B^p),
$$
we obtain
\begin{align*}
\widehat\lambda_1&=\lim_{p\to\infty}\bigl(\mathrm{Tr}\,(A^p\,\#\,B^p)\bigr)^{2/p}
=\lim_{p\to\infty}{\bigl(\mathrm{Tr}\,(A^p+B^p)\bigr)^{2/p}\over\bigl(\det(A^p+B^p)\bigr)^{1/p}} \\
&={\max\{a_1^2,b_1^2\}\over a_1b_1}
=\max\biggl\{{a_1\over b_1},{b_1\over a_1}\biggr\}
=\max\{a_1b_2,a_2b_1\}.
\end{align*}
Furthermore, $\widehat\lambda_2=\min\{a_1b_2,a_2b_1\}$ follows since
$\widehat\lambda_1\widehat\lambda_2=1$.
For general $A,B>0$ let $\alpha:=\sqrt{\det A}$ and $\beta:=\sqrt{\det B}$. Since
$$
G_p=\alpha\beta\bigl((\alpha^{-1}A)^p\,\#\,(\beta^{-1}B)^p\bigr)^{2/p},
$$
we see from the above case that $G_p$ converges as $p\to\infty$ and
$$
\widehat\lambda_1
=\alpha\beta\max\{(\alpha^{-1}a_1)(\beta^{-1}b_2),(\alpha^{-1}a_2)(\beta^{-1}b_1)\}
=\max\{a_1b_2,a_2b_1\},
$$
and similarly for $\widehat\lambda_2$.
It remains to consider the case where $a_2=0$ and/or $b_2=0$. We may assume that $a_1,b_1>0$ since
the case $A=0$ or $B=0$ is trivial. When $a_2=b_2=0$, since $a_1^{-1}A$ and $b_1^{-1}B$
are non-commuting rank one projections, we have $G_p=0$ for all $p>0$ by
\cite[(3.11)]{KA}. Finally, assume that $a_2=0$ and $B>0$. Then we may assume that
$a_1=1$ and $\det B=1$. For $\varepsilon>0$ set $A_\varepsilon:=\mathrm{diag}(1,\varepsilon^2)$. Since
$\det(\varepsilon^{-1}A_\varepsilon)=1$, we have
$$
A_\varepsilon^p\,\#\,B^p=\varepsilon^{p/2}\bigl((\varepsilon^{-1}A_\varepsilon)^p\,\#\,B^p\bigr)
=\varepsilon^{p/2}\,{(\varepsilon^{-1}A_\varepsilon)^p+B^p\over
\sqrt{\det\bigl((\varepsilon^{-1}A_\varepsilon)^p+B^p\bigr)}}.
$$
By use of \eqref{F-5.6} and \eqref{F-5.7} with $a_1=\varepsilon^{-1}$ and $a_2=\varepsilon$ we compute
$$
A^p\,\#\,B^p=\lim_{\varepsilon\searrow0}A_\varepsilon^p\,\#\,B^p
=\bigl(|w_{21}|^2b_1^p+|w_{22}|^2b_2^p\bigr)^{-1/2}\,\mathrm{diag}(1,0)
$$
so that
$$
\lim_{p\to\infty}G_p=\mathrm{diag}(b_1^{-1},0)=\mathrm{diag}(b_2,0),
$$
which is the desired assertion in this final situation.
\end{proof}
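As an illustrative numerical check of Proposition \ref{P-5.2} (a sketch only, not part of the proof; the eigenvalues, the rotation angle $0.7$, and the exponent $p=300$ are arbitrary choices), one can evaluate the eigenvalues of $G_p$ for a large $p$ using \eqref{F-5.6} and the positive-term determinant expansion \eqref{F-5.7}, which avoids catastrophic cancellation:

```python
import numpy as np

# Arbitrary 2x2 test data with det A = det B = 1 (the reduction used in the proof),
# A diagonal (V = I) and B = W diag(b1, b2) W^T with (V^*W)_{12} != 0.
a1, a2 = 2.0, 0.5
b1, b2 = 3.0, 1.0 / 3.0
c, s = np.cos(0.7), np.sin(0.7)
w11, w12, w21, w22 = c, -s, s, c

p = 300.0
# A^p + B^p assembled entrywise, as in (F-5.6):
S = np.array([[a1**p + w11**2 * b1**p + w12**2 * b2**p,
               w11 * w21 * b1**p + w12 * w22 * b2**p],
              [w11 * w21 * b1**p + w12 * w22 * b2**p,
               a2**p + w21**2 * b1**p + w22**2 * b2**p]])
# det(A^p + B^p) from the positive-term expansion (F-5.7); a naive 2x2
# determinant of S would lose all accuracy to cancellation at this p.
detS = (1.0 + w21**2 * (a1 * b1)**p + w22**2 * (a1 * b2)**p
        + w11**2 * (a2 * b1)**p + w12**2 * (a2 * b2)**p
        + (w11 * w22 - w12 * w21)**2)
# Since det A^p = det B^p = 1, A^p # B^p = (A^p + B^p)/sqrt(det(A^p + B^p)),
# so the top eigenvalue of G_p = (A^p # B^p)^{2/p} is:
g1 = np.linalg.eigvalsh(S)[-1]**(2.0 / p) / detS**(1.0 / p)
g2 = 1.0 / g1          # since det G_p = 1 in this normalization
# Proposition 5.2 predicts (max{a1 b2, a2 b1}, min{a1 b2, a2 b1}) = (1.5, 2/3):
assert abs(g1 - 1.5) < 0.01 and abs(g2 - 2.0 / 3.0) < 0.01
```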
\appendix
\section{Proof of Lemma \ref{L-2.4}}
We may assume that $\mathcal{H}=\mathbb{C}^d$ by fixing an orthonormal basis of $\mathcal{H}$. Let $G(k,d)$
denote the Grassmannian manifold consisting of $k$-dimensional subspaces of $\mathcal{H}$. Let
$\mathcal{O}_{k,d}$ denote the set of all $u=(u_1,\dots,u_k)\in\mathcal{H}^k$ such that $u_1,\dots,u_k$
are orthonormal in $\mathcal{H}$. Consider $\mathcal{O}_{k,d}$ as a metric space with the metric
$$
d_2(u,v):=\Biggl(\sum_{i=1}^k\|u_i-v_i\|^2\Biggr)^{1/2},\qquad
u=(u_1,\dots,u_k),\ v=(v_1,\dots,v_k)\in\mathcal{H}^k.
$$
Moreover, let $\widetilde\mathcal{H}_{k,d}$ be the set of projectivised vectors
$u=u_1\wedge\dots\wedge u_k$ in $\mathcal{H}^{\wedge k}$ of norm $1$, i.e., the quotient
space of $\mathcal{H}_{k,d}:=\{u\in\mathcal{H}^{\wedge k}:u=u_1\wedge\dots\wedge u_k,\,\|u\|=1\}$ under
the equivalence relation $u\sim v$ on $\mathcal{H}_{k,d}$ defined by $u=e^{i\theta}v$ for some
$\theta\in\mathbb{R}$. We then have the commutative diagram:
$$
\setlength{\unitlength}{1mm}
\begin{picture}(50,27)(0,0)
\put(0,22){$\mathcal{O}_{k,d}$}
\put(10,23){\vector(1,0){15}}
\put(16,25){$\pi$}
\put(28,22){$G(k,d)$}
\put(28,0){$\widetilde\mathcal{H}_{k,d}$}
\put(10,20){\vector(1,-1){15}}
\put(13,9){$\widetilde\pi$}
\put(32,19){\vector(0,-1){12}}
\put(34,12){$\phi$}
\end{picture}
$$
where $\pi$ and $\widetilde\pi$ are surjective maps defined for
$u=(u_1,\dots,u_k)\in\mathcal{O}_{k,d}$ as
\begin{align*}
\pi(u)&:=\mathrm{span}\{u_1,\dots,u_k\}, \\
\widetilde\pi(u)&:=[u_1\wedge\dots\wedge u_k],\ \mbox{the equivalence class of
$u_1\wedge\dots\wedge u_k$},
\end{align*}
and $\phi$ is the canonical representation of $G(k,d)$ by the $k$th antisymmetric tensors
(or the $k$th exterior products).
As shown in \cite{FGP}, the standard Grassmannian topology on $G(k,d)$ is the final
topology (the quotient topology) from the map $\pi$ and it coincides with the topology
induced by the gap metric:
$$
d_\mathrm{gap}(\mathcal{U},\mathcal{V}):=\|P_\mathcal{U}-P_\mathcal{V}\|
$$
for $k$-dimensional subspaces $\mathcal{U},\mathcal{V}$ of $\mathcal{H}$ and the orthogonal projections
$P_\mathcal{U},P_\mathcal{V}$ onto them. On the other hand, consider the quotient topology on
$\widetilde\mathcal{H}_{k,d}$ induced from the norm on $\mathcal{H}_{k,d}\subset\mathcal{H}^{\wedge k}$, which is
determined by the metric
$$
\widetilde d(\widetilde\pi(u),\widetilde\pi(v)):=\inf_{\theta\in\mathbb{R}}
\|u_1\wedge\cdots\wedge u_k-e^{\sqrt{-1}\theta}v_1\wedge\cdots\wedge v_k\|,
\qquad u,v\in\mathcal{O}_{k,d}.
$$
It is easy to prove that
$\widetilde\pi:(\mathcal{O}_{k,d},d_2)\to(\widetilde\mathcal{H}_{k,d},\widetilde d)$ is continuous. Since
$(\mathcal{O}_{k,d},d_2)$ is compact, it thus follows that the final topology on
$\widetilde\mathcal{H}_{k,d}$ from the map $\widetilde\pi$ coincides with the
$\widetilde d$-topology.
It is clear from the above commutative diagram that the final topology on $G(k,d)$ from
$\pi$ is homeomorphic via $\phi$ to that on $\widetilde\mathcal{H}_{k,d}$ from $\widetilde\pi$.
Hence $\phi$ is a homeomorphism from $(G(k,d),d_\mathrm{gap})$ onto
$(\widetilde\mathcal{H}_{k,d},\widetilde d)$. From the homogeneity of $(G(k,d),d_\mathrm{gap})$ and
$(\widetilde\mathcal{H}_{k,d},\widetilde d)$ under unitary transformations there exist
constants $\alpha,\beta>0$ (depending only on $k$ and $d$) such that
$$
\alpha\|P_{\pi(u)}-P_{\pi(v)}\|\le\widetilde d(\widetilde\pi(u),\widetilde\pi(v))
\le\beta\|P_{\pi(u)}-P_{\pi(v)}\|,\qquad u,v\in\mathcal{O}_{k,d},
$$
which is the desired inequality.
\section{Proof of \eqref{F-1.2}}
This appendix supplies the proof of \eqref{F-1.2} for matrices $A,B\ge0$. Throughout
the appendix let $A,B$ be $d\times d$ positive semidefinite matrices with the support
projections $A^0,B^0$. We define $\log A$ in the generalized sense as
$$
\log A:=(\log A)A^0,
$$
i.e., $\log A$ is defined by the usual functional calculus on the range of $A^0$ and it
is zero on the range of $A^{0\perp}=I-A^0$, and similarly $\log B:=(\log B)B^0$. We write
$P_0:=A^0\wedge B^0$ and
$$
\log A\,\dot+\log B:=P_0(\log A)P_0+P_0(\log B)P_0.
$$
Note \cite[Section 4]{HP} that
\begin{align}
P_0\exp(\log A\,\dot+\log B)
&=\lim_{\varepsilon\searrow0}\exp(\log(A+\varepsilon A^{0\perp})+\log(B+\varepsilon B^{0\perp})) \nonumber\\
&=\lim_{\varepsilon\searrow0}\exp(\log(A+\varepsilon I)+\log(B+\varepsilon I)). \label{F-B.1}
\end{align}
Now, let $\sigma$ be an operator mean with the representing operator monotone function $f$ on
$(0,\infty)$, and let $\alpha:=f'(1)$. Note that $0\le\alpha\le1$ and if $\alpha=0$ (resp.,
$\alpha=1$) then $A\,\sigma\,B=A$ (resp., $A\,\sigma\,B=B$) so that $(A^p\,\sigma\,B^p)^{1/p}=A$
(resp., $(A^p\,\sigma\,B^p)^{1/p}=B$) for all $A,B\ge0$ and $p>0$. So in the rest we assume
that $0<\alpha<1$.
\begin{thm}\label{T-B.1}
With the above assumptions, for every $A,B\ge0$,
\begin{equation}\label{F-B.2}
\lim_{p\searrow0}(A^p\,\sigma\,B^p)^{1/p}
=P_0\exp((1-\alpha)\log A\dot+\alpha\log B).
\end{equation}
\end{thm}
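For $A,B>0$ and $\sigma=\#$ (so that $\alpha=1/2$ and $P_0=I$), formula \eqref{F-B.2} can be checked numerically. The sketch below (arbitrary test matrices and a small $p$; illustrative only, not a substitute for the proof) compares $(A^p\,\#\,B^p)^{1/p}$ with $\exp\bigl(\tfrac12\log A+\tfrac12\log B\bigr)$:

```python
import numpy as np

def mfun(X, f):
    """Apply the function f to a symmetric matrix X via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * f(w)) @ V.T

def gmean(A, B):
    """A # B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, assuming A > 0."""
    Ah = mfun(A, np.sqrt)
    Ahi = mfun(A, lambda w: w**-0.5)
    return Ah @ mfun(Ahi @ B @ Ahi, np.sqrt) @ Ah

# Arbitrary positive definite test matrices (illustrative data only).
A = np.array([[2.0, 0.4], [0.4, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 3.0]])

# Right-hand side of (F-B.2) for sigma = #, i.e. alpha = 1/2 and P_0 = I:
target = mfun(0.5 * mfun(A, np.log) + 0.5 * mfun(B, np.log), np.exp)

p = 1e-5
approx = mfun(gmean(mfun(A, lambda w: w**p), mfun(B, lambda w: w**p)),
              lambda w: w**(1.0 / p))
assert np.allclose(approx, target, atol=1e-3)
```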
From \eqref{F-B.1} we may write
\begin{align*}
\lim_{p\searrow0}(A^p\,\sigma\,B^p)^{1/p}
&=\lim_{\varepsilon\searrow0}\exp((1-\alpha)\log(A+\varepsilon I)+\alpha\log(B+\varepsilon I)) \\
&=\lim_{\varepsilon\searrow0}\lim_{p\searrow0}((A+\varepsilon I)^p\,\sigma\,(B+\varepsilon I)^p)^{1/p}.
\end{align*}
The next lemma is essential to prove the theorem. The proof of the lemma is a slight
modification of that of \cite[Lemma 4.1]{HP}.
\begin{lemma}\label{L-B.2}
For each $p\in(0,p_0)$ with some $p_0>0$, a Hermitian matrix $Z(p)$ is given in the
$2\times2$ block form as
$$
Z(p)=\begin{bmatrix}Z_0(p)&Z_2(p)\\Z_2^*(p)&Z_1(p)\end{bmatrix},
$$
where $Z_0(p)$ is $k\times k$, $Z_1(p)$ is $l\times l$ and $Z_2(p)$ is $k\times l$.
Assume:
\begin{itemize}
\item[(a)] $Z_0(p)\to Z_0$ and $Z_2(p)\to Z_2$ as $p\searrow0$,
\item[(b)] there is a $\delta>0$ such that $pZ_1(p)\le-\delta I_l$ for all $p\in(0,p_0)$.
\end{itemize}
Then
$$
e^{Z(p)}\longrightarrow\begin{bmatrix}e^{Z_0}&0\\0&0\end{bmatrix}\quad
\mbox{as $p\searrow0$}.
$$
\end{lemma}
\begin{proof}
We list the eigenvalues of $Z(p)$ in decreasing order (with multiplicities) as
$$
\lambda_1(p)\ge\dots\ge\lambda_k(p)\ge\lambda_{k+1}(p)\ge\dots\ge\lambda_m(p)
$$
together with the corresponding orthonormal eigenvectors
$$
u_1(p),\dots,u_k(p),u_{k+1}(p),\dots,u_m(p),
$$
where $m:=k+l$. Then
\begin{equation}\label{F-B.3}
e^{Z(p)}=\sum_{i=1}^me^{\lambda_i(p)}u_i(p)u_i(p)^*.
\end{equation}
Furthermore, let $\mu_1(p)\ge\dots\ge\mu_k(p)$ be the eigenvalues of $Z_0(p)$ and
$\mu_1\ge\dots\ge\mu_k$ be the eigenvalues of $Z_0$. Then $\mu_i(p)\to\mu_i$ as $p\searrow0$
thanks to assumption (a). By the majorization result for eigenvalues in
\cite[Corollary 7.2]{An} we have
\begin{equation}\label{F-B.4}
\sum_{i=1}^r\mu_i(p)\le\sum_{i=1}^r\lambda_i(p),\qquad1\le r\le k.
\end{equation}
Since
$$
pZ(p)\le\begin{bmatrix}pZ_0(p)&pZ_2(p)\\pZ_2^*(p)&-\delta I_l\end{bmatrix}
\longrightarrow\begin{bmatrix}0&0\\0&-\delta I_l\end{bmatrix}\quad
\mbox{as $p\searrow0$}
$$
thanks to assumptions (a) and (b), it follows that, for $k<i\le m$,
$p\lambda_i(p)<-\delta/2$ for any $p>0$ sufficiently small so that
\begin{equation}\label{F-B.5}
\lim_{p\searrow0}\lambda_i(p)=-\infty,\qquad k<i\le m.
\end{equation}
Hence, it suffices to prove that for any sequence $p_n\searrow0$ in $(0,p_0)$ there exists
a subsequence $\{p_n'\}$ of $\{p_n\}$ such that we have for $1\le i\le k$
\begin{align}
\lambda_i(p_n')&\longrightarrow\mu_i\quad\mbox{as $n\to\infty$}, \label{F-B.6}\\
u_i(p_n')&\longrightarrow v_i\oplus0\in\mathbb{C}^k\oplus\mathbb{C}^l\quad\mbox{as $n\to\infty$},
\label{F-B.7}\\
Z_0v_i&=\mu_iv_i. \label{F-B.8}
\end{align}
Indeed, it then follows that $v_1,\dots,v_k$ are orthonormal vectors in $\mathbb{C}^k$, so from
\eqref{F-B.3} and \eqref{F-B.5} we obtain
$$
\lim_{n\to\infty}e^{Z(p_n')}
=\sum_{i=1}^ke^{\mu_i}v_iv_i^*\oplus0=e^{Z_0}\oplus0.
$$
Now, replacing $\{p_n\}$ with a subsequence, we may assume that $u_i(p_n)$ itself converges
to some $u_i\in\mathbb{C}^m$ for
$1\le i\le k$. Writing $u_i(p_n)=v_i^{(n)}\oplus w_i^{(n)}$ in $\mathbb{C}^k\oplus\mathbb{C}^l$, we have
\begin{align}
\lambda_i(p_n)
&=\bigl\langle v_i^{(n)}\oplus w_i^{(n)},Z(p_n)(v_i^{(n)}\oplus w_i^{(n)})\bigr\rangle \nonumber\\
&=\bigl\langle v_i^{(n)},Z_0(p_n)v_i^{(n)}\bigr\rangle
+2\mathrm{Re}\,\bigl\langle v_i^{(n)},Z_2(p_n)w_i^{(n)}\bigr\rangle
+\bigl\langle w_i^{(n)},Z_1(p_n)w_i^{(n)}\bigr\rangle \nonumber\\
&\le\bigl\langle v_i^{(n)},Z_0(p_n)v_i^{(n)}\bigr\rangle
+2\mathrm{Re}\,\bigl\langle v_i^{(n)},Z_2(p_n)w_i^{(n)}\bigr\rangle
-{\delta\over p_n}\,\big\|w_i^{(n)}\big\|^2 \label{F-B.9}
\end{align}
due to assumption (b). For $i=1$, since $\mu_1(p_n)\le\lambda_1(p_n)$ by \eqref{F-B.4} for
$r=1$, it follows from \eqref{F-B.9} that
$$
p_n\mu_1(p_n)\le p_n\|Z_0(p_n)\|+2p_n\|Z_2(p_n)\|-\delta\big\|w_1^{(n)}\big\|^2,
$$
where $\|Z_0(p_n)\|$ and $\|Z_2(p_n)\|$ are the operator norms. As $n\to\infty$
($p_n\searrow0$), by assumption (a) we have $w_1^{(n)}\to0$ so that
$u_1(p_n)\to u_1=v_1\oplus0$ in $\mathbb{C}^k\oplus\mathbb{C}^l$. From \eqref{F-B.9} again we furthermore have
$$
\limsup_{n\to\infty}\lambda_1(p_n)\le\langle v_1,Z_0v_1\rangle
\le\mu_1\le\liminf_{n\to\infty}\lambda_1(p_n)
$$
since $\mu_1(p_n)\le\lambda_1(p_n)$ and $\mu_1(p_n)\to\mu_1$. Therefore,
$\lambda_1(p_n)\to\langle v_1,Z_0v_1\rangle=\mu_1$ and hence $Z_0v_1=\mu_1v_1$. Next, when $k\ge2$ and $i=2$,
since $\lambda_2(p_n)$ is bounded below by \eqref{F-B.4} for $r=2$, it follows
as above that $w_2^{(n)}\to0$ and hence $u_2(p_n)\to u_2=v_2\oplus0$. Therefore,
$$
\limsup_{n\to\infty}\lambda_2(p_n)\le\langle v_2,Z_0v_2\rangle
\le\mu_2\le\liminf_{n\to\infty}\lambda_2(p_n)
$$
so that $\lambda_2(p_n)\to\langle v_2,Z_0v_2\rangle=\mu_2$ and $Z_0v_2=\mu_2v_2$, since $\mu_2$
is the largest eigenvalue of $Z_0$ restricted to $\{v_1\}^\perp\cap\mathbb{C}^k$. Repeating
this argument we obtain \eqref{F-B.6}--\eqref{F-B.8} for $1\le i\le k$.
\end{proof}
Note that the lemma and its proof hold true even when the assumption $Z_2(p)\to Z_2$
in (a) is slightly relaxed to $p^{1/3}Z_2(p)\to0$ as $p\searrow0$. (For this, from
\eqref{F-B.9} note that $p_n^{-1/3}w_i^{(n)}\to0$ and so $Z_2(p_n)w_i^{(n)}\to0$.)
\noindent
{\it Proof of Theorem \ref{T-B.1}.}\enspace
Let us divide the proof into two steps. In the proof below we denote by $\triangledown_\alpha$
and $!_\alpha$ the weighted arithmetic and harmonic operator means having the representing
functions $(1-\alpha)+\alpha x$ and $x/((1-\alpha)x+\alpha)$, respectively. Note that
$$
A\,!_\alpha\,B\le A\,\sigma\,B\le A\,\triangledown_\alpha\,B,\qquad A,B\ge0.
$$
\noindent
{\it Step 1.}\enspace
First, we prove the theorem in the case where $P\,\sigma\,Q=P\wedge Q$ for all orthogonal
projections $P,Q$ (this is the case, for instance, when $\sigma$ is the weighted harmonic
operator mean $!_\alpha$, see \cite[Theorem 3.7]{KA}). Let $\mathcal{H}_0$ be the range of $P_0$
($=A^0\,!_\alpha\,B^0=A^0\,\sigma\,B^0$). From the operator monotonicity of $\log x$ ($x>0$)
it follows that, for every $p>0$,
\begin{equation}\label{F-B.10}
{1\over p}\log(A^p\,!_\alpha\,B^p)\big|_{\mathcal{H}_0}\le{1\over p}\log(A^p\,\sigma\,B^p)\big|_{\mathcal{H}_0}
\le{1\over p}\log\bigl(P_0(A^p\,\triangledown_\alpha\,B^p)P_0\bigr)\big|_{\mathcal{H}_0}.
\end{equation}
For every $\varepsilon>0$ we have
\begin{align*}
(A+\varepsilon A^{0\perp})^p\,!_\alpha\,(B+\varepsilon B^{0\perp})^p
&=\bigl((A+\varepsilon A^{0\perp})^{-p}\,\triangledown_\alpha\,(B+\varepsilon B^{0\perp})^{-p}\bigr)^{-1} \\
&=\bigl(A^{-p}\,\triangledown_\alpha\,B^{-p}
+\varepsilon^{-p}(A^{0\perp}\,\triangledown_\alpha\,B^{0\perp})\bigr)^{-1},
\end{align*}
where $A^{-p}=(A^{-1})^p$ and $B^{-p}=(B^{-1})^p$ are taken as the generalized inverses.
Therefore,
\begin{align}
P_0\bigl((A+\varepsilon A^{0\perp})^p\,!_\alpha\,(B+\varepsilon B^{0\perp})^p\bigr)P_0
&\ge\bigl(P_0\bigl(A^{-p}\,\triangledown_\alpha\,B^{-p}
+\varepsilon^{-p}(A^{0\perp}\,\triangledown_\alpha\,B^{0\perp})\bigr)P_0\bigr)^{-1} \nonumber\\
&=\bigl(P_0(A^{-p}\,\triangledown_\alpha\,B^{-p})P_0\bigr)^{-1}, \label{F-B.11}
\end{align}
since the support projection of $A^{0\perp}+B^{0\perp}$ is
$A^{0\perp}\vee B^{0\perp}=P_0^\perp$. In the above, $(\,\cdot\,)^{-1}$ denotes the generalized
inverse (with support $\mathcal{H}_0$) and the inequality follows from the operator convexity of
$x^{-1}$ ($x>0$). Letting $\varepsilon\searrow0$ in \eqref{F-B.11} gives
$$
A^p\,!_\alpha\,B^p=P_0(A^p\,!_\alpha\,B^p)P_0
\ge\bigl(P_0(A^{-p}\,\triangledown_\alpha\,B^{-p})P_0\bigr)^{-1}
$$
so that
\begin{equation}\label{F-B.12}
{1\over p}\log(A^p\,!_\alpha\,B^p)\big|_{\mathcal{H}_0}
\ge-{1\over p}\log\bigl(P_0(A^{-p}\,\triangledown_\alpha\,B^{-p})P_0\bigr)\big|_{\mathcal{H}_0}.
\end{equation}
Combining \eqref{F-B.10} and \eqref{F-B.12} yields
\begin{equation}\label{F-B.13}
-{1\over p}\log\bigl(P_0(A^{-p}\,\triangledown_\alpha\,B^{-p})P_0\bigr)\big|_{\mathcal{H}_0}
\le{1\over p}\log(A^p\,\sigma\,B^p)\big|_{\mathcal{H}_0}
\le{1\over p}\log\bigl(P_0(A^p\,\triangledown_\alpha\,B^p)P_0\bigr)\big|_{\mathcal{H}_0}.
\end{equation}
Since
$$
A^{-p}=A^0-p\log A+o(p),\qquad B^{-p}=B^0-p\log B+o(p)
$$
as $p\searrow0$, we have
$$
A^{-p}\,\triangledown_\alpha\,B^{-p}
=A^0\,\triangledown_\alpha\,B^0-p((\log A)\,\triangledown_\alpha(\log B))+o(p)
$$
so that
$$
P_0(A^{-p}\,\triangledown_\alpha\,B^{-p})P_0=P_0-p((1-\alpha)\log A\dot+\alpha\log B)+o(p).
$$
Therefore,
\begin{equation}\label{F-B.14}
-{1\over p}\log\bigl(P_0(A^{-p}\,\triangledown_\alpha\,B^{-p})P_0\bigr)\big|_{\mathcal{H}_0}
=((1-\alpha)\log A\dot+\alpha\log B)\big|_{\mathcal{H}_0}+o(1).
\end{equation}
Similarly,
\begin{equation}\label{F-B.15}
{1\over p}\log\bigl(P_0(A^p\,\triangledown_\alpha\,B^p)P_0\bigr)\big|_{\mathcal{H}_0}
=((1-\alpha)\log A\dot+\alpha\log B)\big|_{\mathcal{H}_0}+o(1).
\end{equation}
From \eqref{F-B.13}--\eqref{F-B.15} we obtain
$$
\lim_{p\searrow0}{1\over p}\log(A^p\,\sigma\,B^p)\big|_{\mathcal{H}_0}
=((1-\alpha)\log A\dot+\alpha\log B)\big|_{\mathcal{H}_0},
$$
which yields the required limit formula.
\noindent
{\it Step 2.}\enspace
For a general operator mean $\sigma$ the integral representation theorem
\cite[Theorem 4.4]{KA} says that there are $0\le\theta\le1$, $0\le\beta\le1$ and an operator
mean $\tau$ such that
$$
\sigma=\theta\triangledown_\beta+(1-\theta)\tau
$$
and $P\,\tau\,Q=P\wedge Q$ for all orthogonal projections $P,Q$. Moreover, $\tau$ has the
representing operator monotone function $g$ on $(0,\infty)$ for which $\gamma:=g'(1)\in(0,1)$
and
$$
\alpha=\theta\beta+(1-\theta)\gamma.
$$
We may assume that $0<\theta\le1$ since the case $\theta=0$ was shown in Step 1. Moreover,
when $\theta=1$, we have $\beta=\alpha\in(0,1)$. For the present, assume that $0<\theta\le1$
and $0<\beta<1$. Let $A,B\ge0$ be given, and note that
$A^0\,\sigma\,B^0=\theta A^0\,\triangledown_\beta\,B^0+(1-\theta)(A^0\wedge B^0)$ has the
support projection $A^0\vee B^0$. Let $\mathcal{H}$, $\mathcal{H}_0$ and $\mathcal{H}_1$ denote the ranges of
$A^0\vee B^0$, $P_0=A^0\wedge B^0$ and $A^0\vee B^0-P_0$, respectively, so that
$\mathcal{H}=\mathcal{H}_0\oplus\mathcal{H}_1$. Note that the support of $A^p\,\sigma\,B^p$ for any $p>0$ is $\mathcal{H}$.
We will describe ${1\over p}\log(A^p\,\sigma\,B^p)\big|_\mathcal{H}$ in the $2\times2$ block form
with respect to the decomposition $\mathcal{H}=\mathcal{H}_0\oplus\mathcal{H}_1$. Let
$$
Z_0:=((1-\gamma)\log A\dot+\gamma\log B)|_{\mathcal{H}_0}.
$$
It follows from Step 1 that $\lim_{p\searrow0}(A^p\,\tau\,B^p)^{1/p}=P_0e^{Z_0}P_0$ and
hence
\begin{align*}
A^p\,\tau\,B^p&=P_0\bigl(e^{Z_0}+o(1)\bigr)^pP_0 \\
&=P_0\bigl(I_{\mathcal{H}_0}+p\log\bigl(e^{Z_0}+o(1)\bigr)+o(p)
\bigr)P_0 \\
&=P_0\bigl(I_{\mathcal{H}_0}+pZ_0+o(p)\bigr)P_0 \\
&=P_0+p((1-\gamma)\log A\dot+\gamma\log B)+o(p).
\end{align*}
In the above, the third equality follows since
$\log\bigl(e^{Z_0}+o(1)\bigr)=Z_0+o(1)$. On the other hand, we have
$$
A^p\,\triangledown_\beta\,B^p
=A^0\,\triangledown_\beta\,B^0+p((\log A)\,\triangledown_\beta\,(\log B))+o(p).
$$
Therefore, we have
\begin{align*}
A^p\,\sigma\,B^p&=\theta(A^0\,\triangledown_\beta\,B^0)+(1-\theta)P_0 \\
&\qquad+p\theta((\log A)\,\triangledown_\beta(\log B))
+p(1-\theta)((1-\gamma)\log A\dot+\gamma\log B)+o(p).
\end{align*}
Setting
\begin{align*}
C&:=\bigl(\theta(A^0\,\triangledown_\beta\,B^0)+(1-\theta)P_0\bigr)\big|_\mathcal{H}, \\
H&:=\bigl(\theta((\log A)\,\triangledown_\beta(\log B))
+(1-\theta)((1-\gamma)\log A\,\dot+\gamma\log B)\bigr)\big|_\mathcal{H},
\end{align*}
we write
\begin{equation}\label{F-B.16}
{1\over p}\log(A^p\,\sigma\,B^p)\big|_\mathcal{H}
={1\over p}\log(C+pH+o(p)),
\end{equation}
where $C$ is a positive definite contraction on $\mathcal{H}$ and $H$ is a Hermitian operator on
$\mathcal{H}$. Note that the eigenspace of $C$ for the eigenvalue $1$ is $\mathcal{H}_0$. Hence, with a
basis consisting of orthonormal eigenvectors for $C$ we may assume that $C$ is diagonal so
that $C=\mathrm{diag}(c_1,\dots,c_m)$ with
$$
c_1=\dots=c_k=1>c_{k+1}\ge\dots\ge c_m>0
$$
where $m=\dim\mathcal{H}$ and $k=\dim\mathcal{H}_0$.
Applying the Taylor formula (see, e.g., \cite[Theorem 2.3.1]{Hi2}) to $\log(C+pH+o(p))$ we
have
\begin{equation}\label{F-B.17}
\log(C+pH+o(p))=\log C+pD\log(C)(H)+o(p),
\end{equation}
where $D\log(C)$ denotes the Fr\'echet derivative of the matrix functional calculus by
\log x$ at $C$. The derivative formula of Daleckii and Krein (see, e.g.,
\cite[Theorem 2.3.1]{Hi2}) says that
\begin{equation}\label{F-B.18}
D\log(C)(H)=\Biggl[{\log c_i-\log c_j\over c_i-c_j}\Biggr]_{i,j=1}^m\circ H,
\end{equation}
where $\circ$ denotes the Schur (or Hadamard) product and $(\log c_i-\log c_j)/(c_i-c_j)$ is
understood as $1/c_i$ when $c_i=c_j$. We write $D\log(C)(H)$ in the $2\times2$ block form on
$\mathcal{H}_0\oplus\mathcal{H}_1$ as $\begin{bmatrix}Z_0&Z_2\\Z_2^*&Z_1\end{bmatrix}$ where
$Z_0:=P_0HP_0|_{\mathcal{H}_0}$. By \eqref{F-B.16}--\eqref{F-B.18} we can write
$$
{1\over p}\log(A^p\,\sigma\,B^p)={1\over p}\log C+D\log(C)(H)+o(1)
=\begin{bmatrix}Z_0(p)&Z_2(p)\\Z_2^*(p)&Z_1(p)\end{bmatrix},
$$
where
\begin{align*}
Z_0(p)&=Z_0+o(1),\qquad Z_2(p)=Z_2+o(1), \\
Z_1(p)&={1\over p}\,\mathrm{diag}(\log c_{k+1},\dots,\log c_m)+Z_1+o(1).
\end{align*}
This $2\times2$ block form of $Z(p):={1\over p}\log(A^p\,\sigma\,B^p)\big|_\mathcal{H}$ satisfies
assumptions (a) and (b) of Lemma \ref{L-B.2} for $p\in(0,p_0)$ with a sufficiently small
$p_0>0$. Therefore, the lemma implies that
$$
\lim_{p\searrow0}(A^p\,\sigma\,B^p)^{1/p}\big|_\mathcal{H}
=\lim_{p\searrow0}\exp\biggl({1\over p}\log(A^p\,\sigma\,B^p)\big|_\mathcal{H}\biggr)
=e^{Z_0}\oplus0
$$
on $\mathcal{H}=\mathcal{H}_0\oplus\mathcal{H}_1$. Since
\begin{align*}
Z_0=P_0HP_0|_{\mathcal{H}_0}&=\theta((1-\beta)\log A\dot+\beta\log B)
+(1-\theta)((1-\gamma)\log A\dot+\gamma\log B) \\
&=(1-\alpha)\log A\dot+\alpha\log B,
\end{align*}
we obtain the desired limit formula.
For the remaining case where $0<\theta<1$ and $\beta=0$ or $1$ the proof is similar to the
above when we take as $\mathcal{H}$ the range of $A^0$ (for $\beta=0$) or $B^0$ (for $\beta=1$)
instead of the range of $A^0\vee B^0$.\qed
Finally, we remark that the same method as in the proof of Step 2 above can also be applied
to give an independent proof of \eqref{F-1.1} for matrices $A,B\ge0$.
\end{document} |
\begin{document}
\preprint{}
\title{On the Matrix Representation of Quantum Operations}
\author{Yoshihiro Nambu and Kazuo Nakamura}
\affiliation{Fundamental and Environmental Research Laboratories, NEC, 34
Miyukigaoka, Tsukuba, Ibaraki 305-8501, Japan}
\begin{abstract}
This paper considers two frequently used matrix representations --- what we call
the $\chi$- and $\mathcal{S}$-matrices --- of a quantum operation and their
applications. The matrices are defined with respect to an arbitrary operator
basis, that is, an orthonormal basis for the space of linear operators on the
state space, and are considered for a general operation acting on one
or two \textit{d}-level quantum systems (qudits). We show that the two matrices are
given by the expansion coefficients
of the Liouville superoperator as well as the associated bijective, positive
operator on the doubled space, defined with respect to two types of induced
operator bases having different tensor product structures, i.e., Kronecker
products of the relevant operator basis and dyadic products of the associated bipartite state
basis. The explicit conversion formulas between the two matrices are established as
a computable matrix multiplication. Extention to more qudits case is trivial.
Several applications of these matrices and the conversion formulas in quantum
information science and technology are presented.
\end{abstract}
\pacs{03.67.-a, 03.65.Wj}
\keywords{quantum, operation, matrix, representation}
\date{\today}
\maketitle
\section{Introduction}
\label{intro}
The formalism of quantum operations offers us a powerful tool for describing the
dynamics of quantum systems occurring in quantum computation, including
unitary evolution as well as non-unitary evolution \cite{Kraus,Nielsen}. It can deal
with several central issues: measurements in the middle of the computation,
decoherence and noise, the use of probabilistic subroutines, etc. \cite{Aharanov}.
It describes the most general transformation allowed by quantum mechanics for an
initially isolated quantum system \cite{Stelmachovic,Hayashi}. Experimental
characterization and analysis of quantum operations is an essential component
of on-going efforts to develop devices capable of reliable quantum computing
and quantum communications, and is a research subject of considerable recent interest.
There have been extensive efforts on the statistical
estimation of quantum operations occurring in natural or engineered quantum
processes from experimental data known as \textquotedblleft quantum channel
identification\textquotedblright\ or \textquotedblleft quantum process
tomography\textquotedblright
\cite{Nambu,Altepeter,Mitchell,Martini,O'Brien,Secondi,Nambu2}.
There are several ways of introducing the notion of a quantum operation, one of
which is to regard it as a superoperator acting on the space of linear
operators \cite{Caves,Aharanov,Tarasov}. Any physical quantum
operation has to be described by a superoperator that is completely positive
(CP); i.e., it should map the set of density operators acting on the trivially
extended Hilbert space to itself \cite{Kraus,Nielsen}. It is known that any CP map can be
decomposed into the so-called Kraus form by a set of Kraus
operators \cite{Kraus}. However, this description is unique only up to unitary
equivalence \cite{Nielsen}, just like the decomposition of a given density operator
into a convex sum of distinct, but not necessarily orthogonal, projectors is
unique only up to unitary equivalence \cite{Hughston,Nielsen}. Alternatively,
the superoperators can be represented in matrix form by providing an operator
basis \cite{D'Ariano}, i.e., the orthonormal basis for the space of linear
operators on the state space, just as the operators on the Hilbert space can
be represented in matrix form by providing a state basis for the Hilbert
space. For example, the density operator can be represented by a density matrix
defined with respect to the chosen state basis.
The density matrix provides a unique description of the quantum state once the
state basis is fixed, although we still have a freedom in choosing the state
basis. Similarly, a quantum operation can also be uniquely described using the
matrix once the operator basis has been fixed.
One can construct many different types of matrix representation. This
paper considers two different matrix representations for superoperators
frequently found in the literature. The first one is what we call the $\chi$-matrix,
which is also called the process or dynamical matrix by several
authors \cite{Nielsen,Chuang,Altepeter,O'Brien,Nambu2,Sudarshan,Zyczkowski}. Let
us consider a \textit{d}-dimensional Hilbert space (\textit{H}-space)
$\mathcal{H}_{d}$, and the space of linear operators acting on $\mathcal{H}
_{d}$ with a scalar product $\langle {\hat{A},\hat{B}}\rangle
\equiv\mathrm{Tr}\hat{A}^{\dag}\hat{B}$, that is, the Hilbert-Schmidt space
(\textit{HS}-space) $\mathcal{HS}_{d}$. If we choose the fixed basis set $\{
{\hat{E}_{\alpha}}\} _{\alpha=0}^{d^{2}-1}$ in $\mathcal{HS}_{d}$, the
linear operation $\mathfrak{S}$ can be represented by the binary form of the superoperator
\begin{equation}
\mathcal{\hat{\hat{S}}}(\odot)=\sum\limits_{\alpha,\beta
=0}^{d^{2}-1}{\chi_{\alpha\beta}\hat{E}_{\alpha}\odot\hat{E}_{\beta}^{\dag}}
\label{eq1}
\end{equation}
acting on $\mathcal{HS}_{d}$, which maps a linear operator in $\mathcal{HS}
_{d}$ into another one. In Eq. (\ref{eq1}), the substitution symbol $\odot$ should be
replaced by a transformed operator, and double-hat $\Hat{\Hat{ }}$ is used to
distinguish the superoperator from an ordinary operator acting on $\mathcal{H}
_{d}$. The coefficients $\chi_{\alpha\beta}$ form a
$d^{2}\times d^{2}$ positive matrix $\chi\equiv\left[ {\chi_{\alpha\beta}
}\right] _{\alpha,\beta=0}^{d^{2}-1}$, if $\mathfrak{S}$ is a physical quantum
operation \cite{Arrighi}.
Alternatively, another matrix representation of a quantum operation is given in
terms of the Liouville formalism \cite{Caves,Blum,Royer}. In this formalism,
the linear operators in $\mathcal{HS}_{d}$ are identified with the supervectors
in a Liouville space (\textit{L}-space) $\mathcal{L}_{d^{2}}$. Introducing a
double bra-ket notation for the elements of $\mathcal{L}_{d^{2}}$ \cite{Royer},
we associate every operator $\hat{A}$ with an \textit{L}-ket $\dket{\hat{A}}$
and its Hermitian conjugate
operator $\hat{A}^{\dagger}$ with an \textit{L}-bra $\dbra{
\hat{A}}$. The space $\mathcal{L}_{d^{2}}$ is furnished
with an inner product $\dbraket{\hat{A}}{\hat{B}}=\mathrm{{Tr}}\hat{A}^{\dag}\hat{B}$, and
constitutes a $d^{2}$-dimensional Hilbert space. Then, by choosing an arbitrary
fixed operator-basis set $\{ {\hat{E}_{\alpha}}\} _{\alpha=0}
^{d^{2}-1}$ in $\mathcal{HS}_{d}$, any linear operation $\mathfrak{S}$ can be
written as the superoperator
\begin{equation}
\mathcal{\hat{\hat{S}}}=\sum\limits_{\alpha,\beta=0}^{d^{2}-1}\mathcal{S}
_{\alpha\beta} \dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}} \label{eq2}
\end{equation}
acting on $\mathcal{L}_{d^{2}}$, which maps an \textit{L}-space supervector
into another one. The coefficients $\mathcal{S}_{\alpha\beta}$ form a
$d^{2}\times d^{2}$ complex matrix $\mathcal{S}\equiv\left[ \mathcal{S}
{_{\alpha\beta}}\right] _{\alpha,\beta=0}^{d^{2}-1}$. They represent the
amplitudes of the operator components $\hat{E}_{\alpha}$ contained in the state
after applying the quantum operation on the operator component $\hat{E}_{\beta}$.
We call it the $\mathcal{S}$-matrix by analogy with the S-matrix appearing in
time-independent scattering theory \cite{Hawking,Wald}. The $\mathcal{S}
$-matrix has actually been used to describe quantum operations by several
researchers in quantum information science \cite{Mitchell,Martini,Fujiwara}. The $\chi
$- and $\mathcal{S}$-matrices offer us the most general description of the
dynamics of initially isolated quantum systems allowed in quantum mechanics,
just as the density matrix offers us the most general description of the quantum
mechanical state.
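To make the $\mathcal{S}$-matrix of Eq. (\ref{eq2}) concrete, the following sketch (our illustration, assuming NumPy and $d=2$; the Hadamard-conjugation channel is merely an example) computes $\mathcal{S}_{\alpha\beta}=\mathrm{Tr}\,\hat{E}_{\alpha}^{\dag}\mathfrak{S}(\hat{E}_{\beta})$ in the normalized Pauli basis and verifies that it propagates the expansion coefficients of a density operator:

```python
import numpy as np

# Normalized Pauli basis for d = 2 (an orthonormal operator basis).
I = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
E = [P / np.sqrt(2) for P in (I, X, Y, Z)]

def S_matrix(channel, basis):
    """S_ab = <<E_a| S |E_b>> = Tr E_a^dag channel(E_b), cf. Eq. (2)."""
    d2 = len(basis)
    S = np.zeros((d2, d2), dtype=complex)
    for a in range(d2):
        for b in range(d2):
            S[a, b] = np.trace(basis[a].conj().T @ channel(basis[b]))
    return S

# An illustrative channel: conjugation by the Hadamard gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
channel = lambda A: H @ A @ H.conj().T
S = S_matrix(channel, E)

# Applying S to the expansion coefficients of rho reproduces channel(rho).
rho = np.array([[0.6, 0.1j], [-0.1j, 0.4]])
coeff = np.array([np.trace(Eb.conj().T @ rho) for Eb in E])
out = sum(c * Ea for c, Ea in zip(S @ coeff, E))
print(np.allclose(out, channel(rho)))  # True
```

For a unitary channel and an orthonormal operator basis, the resulting $\mathcal{S}$ is itself a unitary $d^2\times d^2$ matrix.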
The choice of matrix representation type is a matter of convenience,
depending on the application. We will discuss later how the $\chi$- and
$\mathcal{S}$-matrices are useful for the analysis and design of quantum
operations. Although these matrices indeed have their own useful applications,
their mutual relation is non-trivial and has not been clarified. The main
purpose of this paper is to clarify the underlying relation between two
different matrix representations of quantum operations, and to provide a way
for building bridges across the different classes of applications. We here
consider a quantum operation acting on the state of a single \textit{d}-level
quantum system (abbreviated as a single-qudit operation) or of two \textit{d}
-level quantum systems (a two-qudit operation). We start in Sec. \ref{operator basis} by recalling
the notion of operator basis and its properties, which is helpful for the subsequent discussions.
We note the equivalence between the supervectors in the \textit{L}-space
$\mathcal{L}_{d^{2}}$ and the vectors in the doubled Hilbert space
$\mathcal{H}_{d}^{\otimes2}$=$\mathcal{H}_{d}\otimes\mathcal{H}_{d}$. This equivalence
implies that for any operator-basis set $\{{\hat{E}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$
for $\mathcal{HS}_{d}$, there is an isomorphic state-basis set $\{\dket{\hat{E}_{\alpha}}\}
_{\alpha=0}^{d^{2}-1}$ for $\mathcal{H}_{d}^{\otimes2}$.
We review several properties of the operator basis for later
discussion. In Sec. \ref{matrix}, we first consider the single-qudit
operations. We show that $\chi$- and $\mathcal{S}$-matrices are given by the
expansion coefficients of the \textit{L}-space superoperator $\mathcal{\hat
{\hat{S}}}$ and the associated operator $\hat{\hat{\chi}}\equiv\mathcal{\hat
{\hat{S}}}\otimes\mathcal{\hat{\hat{I}}}(d\hat{\rho}_{I})$ acting on
$\mathcal{H}_{d}^{\otimes2}$ defined with respect to two types of induced
operator basis on $\mathcal{H}_{d}^{\otimes2}$ having different tensor product
structures. Here, $\hat{\rho}_{I}$ is the density operator of the isotropic
state on $\mathcal{H}_{d}^{\otimes2}$, and $\mathcal{\hat{\hat{I}}}\left(
\odot\right) =\odot$ is the identity superoperator acting on the
\textit{HS}-space of the second system. This result implies that there is a
bijection between $\mathcal{\hat{\hat{S}}}$ and $\hat{\hat{\chi
}}$, from which we can deduce the conversion formula between $\chi$- and
$\mathcal{S}$-matrices as a computable matrix algebra. Although Nielsen and
Chuang have considered such a formula \cite{Nielsen,Chuang}, their method requires
finding matrix inverses to convert from the $\mathcal{S}$-matrix to the $\chi$-matrix.
Here, we show a conversion formula without matrix inversion. We then
extend the formula to two-qudit operations. We also briefly
review the requirement for the $\chi$-matrix to represent physical
operations. In Sec. \ref{applications}, we illustrate the applications
of the present formulation. First, we discuss how $\chi$- and $\mathcal{S}
$-matrices can be obtained experimentally. We describe a typical procedure to
obtain the $\chi$- and $\mathcal{S}$-matrices defined with respect to an
arbitrary operator basis set. Next, we discuss how these matrices and the
present conversion formulas are useful for the analysis and design of the
quantum operations, quantum circuits, as well as quantum algorithms. In Sec.
\ref{conclusions}, we summarize the results.
\section{Operator basis}
\label{operator basis}
It has been noted by many researchers that the supervectors defined in the
\textit{L}-space $\mathcal{L}_{d^{2}}$ can be identified with the vectors in
the doubled Hilbert space $\mathcal{H}_{d}^{\otimes2}$
\cite{Ben-Reuven1,Ben-Reuven2,Ben-Reuven3,Ben-Reuven4}. Let us start by
reviewing this fact briefly. Consider an arbitrarily chosen
orthonormal basis $\{ {\left\vert i\right\rangle }\} _{i=0}
^{d-1}$ for $\mathcal{H}_{d}$ (referred to as the standard state basis). Any linear
operator $\hat{A}$ in $\mathcal{HS}_{d}$ can be expanded as $\hat{A}
=\sum\nolimits_{i,j=0}^{d-1}A_{ij}\ketbra{i}{j}$ where the dyadic products
$\ketbra{i}{j}$ form a basis for $\mathcal{HS}_{d}$. The
\textit{L}-space supervector corresponding to the dyadic operator
$\ketbra{i}{j}$ is denoted by the double ket
$\dket{ij}$, with which the
\textit{L}-space supervector associated with $\hat{A}$ is written as
$\dket{\hat{A}} =\sum
\nolimits_{i,j=0}^{d-1}{A_{ij}\dket{ij}}$. The scalar product of two \textit{L}-space supervectors
$\dket{\hat{A}}$ and $\dket{\hat{B}}$ is defined as
\begin{equation}
\dbraket{\hat{A}}{\hat{B}}
=\mathrm{{Tr}}\hat{A}^{\dag}\hat{B}, \label{eq3}
\end{equation}
which introduces a metric of an \textit{L}-space. The vectors $\dket{ij}$ form a basis for a
$d^{2}$-dimensional Hilbert space. Thus, we can safely identify $\dket{ij}$ with the product state $\ket{i} \otimes\ket{j}$. Then, the \textit{L}-space
vector associated with $\hat{A}$ can be identified with vector $\dket{\hat{A}}
\equiv ( {\hat{A}\otimes\hat{I}}) \dket{\hat{I}} $ in the doubled space
$\mathcal{H}_{d}^{\otimes2}
$\cite{D'Ariano2,Arrighi}, where $d^{-1/2}\dket{\hat{I}} \equiv d^{-1/2}\sum\nolimits_{i=0}
^{d-1}\ket{i} \otimes\ket{i}$ is the
isotropic state in $\mathcal{H}_{d}^{\otimes2}$ \cite{Horodecki,Terhal},
and $\hat{I}$ is the identity operator in
$\mathcal{HS}_{d}$. It may be helpful to recall the mathematical
representations of $\hat{A}$ in the space $\mathbb{C}^{d\times
d}$ of $d\times d$ complex matrices and the vector $\dket{\hat{A}}$ in $\mathbb{C}^{d^{2}}$. Consider a
representation where $\ket{i}$ is a column vector in
$\mathbb{C}^{d}$ with a unit element in the \textit{i}th row and zeros
elsewhere. Then, $\hat{A}$ is identified with the $d\times d$ complex
matrix $A\equiv[A_{ij}]_{i,j=0}^{d-1}$ in $\mathbb{C}
^{d\times d}$, and $\dket{\hat{A}}$ is obtained by placing the entries of a $d\times d$ matrix
into a column vector of size $d^{2}$ row-by-row, i.e.,
\begin{equation}
\dket{\hat{A}} \equiv\left[
{A_{00},\cdots,A_{0,d-1},A_{10},\cdots,A_{1,d-1},\cdots,A_{d-1,0},\cdots,A_{d-1,d-1}
}\right] ^{T}. \label{eq4}
\end{equation}
Therefore, $\dket{\hat{A}}$ contains the same elements as $\hat{A}$ but in
different positions. This and Eq. (\ref{eq3}) indicate that $\hat{A}$ and
$\dket{\hat{A}}$ are isometrically isomorphic. Accordingly, we
may identify the \textit{L}-space with the doubled Hilbert space, i.e.,
$\mathcal{L}_{d^{2}}=\mathcal{H}_{d}^{\otimes2}$. Hereafter, we use a common symbol
$\mathcal{H}_{d}^{\otimes2}$ to denote both these spaces without loss of clarity.
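A quick numerical check of this isometry (our sketch, assuming NumPy; the row-major reshape implements the vectorization of Eq. (\ref{eq4})):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def dket(A):
    """Row-major vectorization |A>> of Eq. (4): stack the rows of A."""
    return A.reshape(-1)

A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Isometry: the L-space inner product <<A|B>> equals Tr A^dag B (Eq. (3)).
lhs = np.vdot(dket(A), dket(B))   # vdot conjugates its first argument
rhs = np.trace(A.conj().T @ B)
print(np.allclose(lhs, rhs))  # True

# |I>> coincides with the unnormalized isotropic vector sum_i |i> (x) |i>.
iso = sum(np.kron(np.eye(d)[i], np.eye(d)[i]) for i in range(d))
ok_iso = np.allclose(dket(np.eye(d)), iso)
```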
It will be useful for the later discussion to note that the following relations
hold:
\begin{equation}
\hat{A}\otimes\hat{B}\dket{\hat{C}}=\dket{\hat{A}\hat{C}\hat{B}^{T}},
\label{eq5}
\end{equation}
\begin{equation}
\mathrm{Tr}_{2}[\dket{\hat{A}}_{12}{}_{12}\dbra{\hat{B}}]=({\hat{A}\hat{B}^{\dag}})^{(1)},
\label{eq6}
\end{equation}
\begin{equation}
\mathrm{Tr}_{1}[\dket{\hat{A}}_{12}{}_{12}\dbra{\hat{B}}]=({\hat{A}^{T}\hat{B}^{\ast}})^{(2)},
\label{eq7}
\end{equation}
where the indices refer to the factors in $\mathcal{H}_{d}^{\otimes2}$ in
which the corresponding operators have a nontrivial action, and the
transposition and conjugation are referred to the chosen standard state
basis \cite{D'Ariano}.
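The relations in Eqs. (\ref{eq5})--(\ref{eq7}) can be verified numerically for random operators (our sketch, assuming NumPy and the row-major vectorization of Eq. (\ref{eq4})):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

def dket(A):
    return A.reshape(-1)  # row-major |A>>, Eq. (4)

A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
C = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Eq. (5): (A (x) B)|C>> = |A C B^T>>
ok5 = np.allclose(np.kron(A, B) @ dket(C), dket(A @ C @ B.T))

# Eqs. (6) and (7): partial traces of the dyad |A>><<B|
dyad = np.outer(dket(A), dket(B).conj()).reshape(d, d, d, d)
ok6 = np.allclose(np.einsum('ijkj->ik', dyad), A @ B.conj().T)   # Tr_2
ok7 = np.allclose(np.einsum('ijil->jl', dyad), A.T @ B.conj())   # Tr_1
print(ok5, ok6, ok7)
```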
Now, let us consider the operator basis, that is, the complete basis for
$\mathcal{HS}_{d}$. Consider an arbitrary set of $d^{2}$ vectors in
$\mathcal{H}_{d}^{\otimes2}$. From the above isomorphism, this set can be
written as $\{ {\dket{\hat{E}_{\alpha}}}\}_{\alpha=0}^{d^{2}-1}$, where $\{\hat
{E}_{\alpha}\}_{\alpha=0}^{d^{2}-1}$ is the associated set in
$\mathcal{HS}_{d}$. The set $\{\dket{\hat{E}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$ is the state
basis for $\mathcal{H}_{d}^{\otimes2}$ iff it is orthonormal, i.e.,
\begin{equation}
\dbraket{\hat{E}_{\alpha}}{\hat{E}_{\beta}}=\delta_{\alpha\beta},
\label{eq8}
\end{equation}
and it is complete, i.e.,
\begin{equation}
\sum\limits_{\alpha=0}^{d^{2}-1}\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\alpha}}=\hat{I}\otimes\hat{I}.
\label{eq9}
\end{equation}
The previous discussion shows that the vector $\dket{\hat{A}}$ in $\mathcal{H}_{d}^{\otimes2}$ is identified
with the \textit{L}-space supervector associated with the operator $\hat{A}$ in
$\mathcal{HS}_{d}$. This implies the following proposition.
\begin{proposition}
\label{P1}
A set of the operators $\{ {\hat{E}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$
is a basis set for $\mathcal{HS}_{d}$ iff a set of states
$\{\dket{\hat{E}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$ is the basis set for
$\mathcal{H}_{d}^{\otimes 2}$.
\end{proposition}
We argue below that this is true. First, we note the following lemmas.
\begin{lemma}
\label{L1}A set of the operators $\{ {\hat{E}_{\alpha}}\}
_{\alpha=0}^{d^{2}-1}$ in $\mathcal{HS}_{d}$ is orthonormal iff a set of
states $\{\dket{\hat{E}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$ in $\mathcal{H}
_{d}^{\otimes2}$ is orthonormal.
\end{lemma}
This is a trivial consequence of Eq. (\ref{eq3}).
\begin{lemma}
\label{L2}A set of the operators $\{ {\hat{E}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$
in $\mathcal{HS}_{d}$ is complete iff a set of states $\{\dket{\hat{E}_{\alpha}}\}
_{\alpha=0}^{d^{2}-1}$ in $\mathcal{H}_{d}^{\otimes2}$ is complete.
\end{lemma}
To prove this, the following Theorem is helpful \cite{D'Ariano}:
\begin{theorem}
\label{Th1}
(D'Ariano, Presti, and Sacchi (2000)) A set of the operators
$\{\hat{E}_{\alpha}\}_{\alpha=0}^{d^{2}-1}$\textit{\ in}
$\mathcal{HS}_{d}$ is complete iff it satisfies one of the following
equivalent statements:
\begin{enumerate}
\item For any linear operator $\hat{A}$\textit{ on }$\mathcal{H}_{d}$, we
have
\begin{equation}
\hat{A}=\sum\limits_{\alpha=0}^{d^{2}-1}{( {\mathrm{{Tr}}\hat{E}
_{\alpha}^{\dag}\hat{A}}) \hat{E}_{\alpha}.} \label{eq10}
\end{equation}
\item Let $\hat{\hat{\mathcal{E}}}_{depol}(\cdots)$ be the superoperator on
the space of linear operators in $\mathcal{HS}_{d}$ describing completely
depolarizing operation. For any linear operator $\hat{A}$ on $\mathcal{H}_{d}
$, we have
\begin{equation}
\hat{\hat{\mathcal{E}}}_{depol}(\hat{A})=\frac{1}{d}(\mathrm{{Tr}}\hat{A})\hat{I}
=\frac{1}{d}\sum\limits_{\alpha=0}^{d^{2}-1}{\hat{E}_{\alpha}\hat{A}\hat
{E}_{\alpha}^{\dag}.} \label{eq11}
\end{equation}
\item For any chosen state basis $\{\ket{i}\}_{i=0}^{d-1}$ for $\mathcal{H}_{d}$, we have
\begin{equation}
\sum\limits_{\alpha=0}^{d^{2}-1}{\bra{n} \hat{E}_{\alpha
}^{\dag}\ket{m} \bra{l} \hat{E}_{\alpha}
}\ket{k}=\delta_{nk}\delta_{ml}.
\label{eq12}
\end{equation}
\end{enumerate}
\end{theorem}
Now, let us prove Lemma \ref{L2}. If $\{\dket{\hat
{E}_{\alpha}}\} _{\alpha=0}^{d^{2}-1}$ is
complete, Eq. (\ref{eq9}) must be satisfied. Then, for any $\hat{A}$
in $\mathcal{HS}_{d}$, we have
\begin{eqnarray*}
(\hat{A}\otimes\hat{I})\dket{\hat{I}}=\dket{\hat{A}}
&=&\sum\limits_{\alpha=0}^{d^{2}-1}
\dket{\hat{E}_{\alpha}}
\dbraket{\hat{E}_{\alpha}}{\hat{A}}
\\
&=&\sum\limits_{\alpha=0}^{d^{2}-1}(\mathrm{Tr}\hat{E}_{\alpha}^{\dag}\hat{A}) \hat
{E}_{\alpha}\otimes\hat{I}\dket{\hat{I}}.
\end{eqnarray*}
Since this holds for any $\hat{A}$, Eq. (\ref{eq10}) must be satisfied. Hence,
$\{{\hat{E}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$ is
complete. Conversely, if $\{{\hat{E}_{\alpha}}\}_{\alpha
=0}^{d^{2}-1}$ is complete, then for any $\dket{\hat{A}}$
in $\mathcal{H}_{d}^{\otimes2}$, we have
\begin{eqnarray*}
\dket{\hat{A}}=(\hat{A} \otimes \hat{I})\dket{\hat{I}}
&=&\sum\limits_{\alpha=0}^{d^{2}-1}({\mathrm{Tr}\hat{E}_{\alpha}^{\dag}\hat{A}})
\hat{E}_{\alpha}\otimes\hat{I}\dket{\hat{I}}
\\
&=&\sum\limits_{\alpha=0}^{d^{2}-1}
\dket{\hat{E}_{\alpha}}
\dbraket{\hat{E}_{\alpha}}{\hat{A}}.
\end{eqnarray*}
Since this holds for any $\dket{\hat{A}}$ , Eq. (\ref{eq9}) must be satisfied.
Hence, $\{\dket{\hat{E}_{\alpha}} \}_{\alpha=0}^{d^{2}-1}$ is complete.
From these two lemmas, we obtain Proposition \ref{P1}. This indicates that the state
basis $\dket{\hat E_{\alpha}}$ for $\mathcal{H}_{d}^{\otimes2}$ has bijective correspondence
to the operator basis $\hat E_{\alpha}$ for $\mathcal{HS}_{d}$. The following
corollary is a consequence of Theorem \ref{Th1}, which will be useful for the
later discussions.
\begin{corollary}
\label{C1}Let $\{\hat{E}_{\alpha}\}_{\alpha=0}^{d^{2}-1}$
be an arbitrarily chosen operator basis for $\mathcal{HS}
_{d}$. The isotropic state in $\mathcal{H}_{d}^{\otimes2}$ is written as
\begin{equation}
\hat{\rho}_{I}=\frac{1}{d}\dketbra{\hat{I}}{\hat{I}}
=\frac{1}{d}\sum\limits_{\alpha=0}^{d^{2}-1}\hat{E}_{\alpha}\otimes\hat
{E}_{\alpha}^{\ast}, \label{eq13}
\end{equation}
and the swap operator $\hat{V}$ on $\mathcal{H}_{d}^{\otimes2}$ is
written as
\begin{equation}
\hat{V}=\sum\limits_{\alpha=0}^{d^{2}-1}\hat{E}_{\alpha}\otimes\hat{E}
_{\alpha}^{\dag}. \label{eq14}
\end{equation}
\end{corollary}
Equation (\ref{eq13}) can be proven by explicit evaluation of the matrix
elements and using Eq. (\ref{eq12}). Equation (\ref{eq14}) is obtained by
performing the partial transpose on both sides of Eq. (\ref{eq13}) with respect to
the second system.\\
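As a sanity check of Corollary \ref{C1} (our sketch, assuming NumPy, $d=2$, and the normalized Pauli operators as the orthonormal operator basis), Eqs. (\ref{eq13}) and (\ref{eq14}) can be confirmed directly:

```python
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
E = [P / np.sqrt(2) for P in (I, X, Y, Z)]
d = 2

dket_I = np.eye(d).reshape(-1)                  # |I>>, row-major (Eq. (4))
rho_iso = np.outer(dket_I, dket_I.conj()) / d   # isotropic state

# Eq. (13): rho_I = (1/d) sum_a E_a (x) E_a^*
ok13 = np.allclose(sum(np.kron(Ea, Ea.conj()) for Ea in E) / d, rho_iso)

# Eq. (14): the swap operator V = sum_a E_a (x) E_a^dag
V = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        V[i * d + j, j * d + i] = 1.0           # V |j>|i> = |i>|j>
ok14 = np.allclose(sum(np.kron(Ea, Ea.conj().T) for Ea in E), V)
print(ok13, ok14)
```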
\textbf{Examples of the operator basis}
We show three illustrative examples of the operator basis for $\mathcal{H}
_{d}$ frequently found in the literature. The first example is a set of
transition operators $\hat{\pi}_{\left( {i,j}\right) }:=\ketbra{i}{j}$
with $i,j=0,\cdots,d-1$, and
$(i,j):=di+j$ \cite{Mahler}. The associated states form a basis set
$\{\dket{\hat{\pi}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$ whose elements are the
tensor product of the standard state basis, i.e., $\dket{\hat{\pi}_{({i,j})}}
=\ket{i}\otimes\ket{j}$. The next example is a set of unitary
irreducible representations of the group \textit{SU}(\textit{d}) or the
discrete displacement operators on the phase-space torus, whose elements are
$\hat{U}_{(m,n)}={{\omega^{mn/2}\sum\nolimits_{k=0}
^{d-1}{\omega^{mk}\hat{\pi}_{(k\oplus n,k) }/}}}\sqrt{d}$ with
$m,n=0,\cdots,d-1$, where $\omega=e^{i2\pi/d}$ is a primitive $d$th root of unity and $\oplus$
denotes addition modulo $d$ \cite{Mahler,Miquel,Aolita}. The associated states
form a basis set $\{\dket{\hat{U}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$ of
$d^{2}$ orthogonal maximally entangled states in $\mathcal{H}_{d}^{\otimes2}$. The
last example is a set of $d^{2}-1$ traceless Hermitian generators of the group
\textit{SU}(\textit{d}) supplemented with the normalized identity operator given
by $\{\hat{\lambda}_{\alpha}\}_{\alpha=0}^{d^2-1}
=\{\hat{I}/\sqrt{d},\hat{u}_{0,1},\hat{u}_{0,2},\cdots,\hat{u}_{d-2,d-1},
\hat{v}_{0,1},\hat{v}_{0,2},\cdots,\hat{v}_{d-2,d-1},\\
\hat{w}_{1},\hat{w}_{2},\cdots,\hat{w}_{d-1}\}$ where $d(d-1)$ off-diagonal
generators are given by $\hat{u}_{i,j}={{( {\hat{\pi}_{(
i,j)}+\hat{\pi}_{(j,i)}})}}/\sqrt{2}$,
$\hat{v}_{i,j}={{i({\hat{\pi}_{(i,j)}-\hat{\pi
}_{(j,i)}})}/\sqrt{2}}$ with $0\leq i<j\leq d-1$, and $d-1$ diagonal
generators are given by $\hat{w}_{k}={{( {-\sum
\nolimits_{i=0}^{k-1}{\hat{\pi}_{( {i,i}) }}+k\hat{\pi}_{(
{k,k}) }}) /}}\sqrt{k(k+1)}$ with $1\leq k\leq d-1$ \cite{Mahler}.
The choice of basis is of course a matter of convenience, depending on the application.
Since the associated sets $\{\dket{\hat{\pi}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$,
$\{\dket{\hat{U}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$, and $\{\dket{\hat{\lambda
}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$ are
state bases for $\mathcal{H}_{d}^{\otimes2}$, they should be unitarily related.
This implies that the operator bases ${\hat{\pi
}_{\alpha}}$, ${\hat{U}_{\alpha}}$, and ${\hat{\lambda}_{\alpha}}$ should also be
unitarily related. In general, two sets of state basis $\{\dket{\hat{E}_{\alpha}}\}_{\alpha
=0}^{d^{2}-1}$ and $\{\dket{\hat{F}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$ in
$\mathcal{H}_{d}^{\otimes2}$ are unitarily related, i.e.,
\begin{equation}
\dket{\hat{F}_{\beta}}
=\sum\limits_{\alpha=0}^{d^{2}-1}
\dket{\hat{E}_{\alpha}}
\dbraket{\hat{E}_{\alpha}}{\hat{F}_{\beta}}
=\sum\limits_{\alpha=0}^{d^{2}-1}\dket{\hat{E}_{\alpha}}\mathcal{U}{_{\alpha\beta}}
\label{eq15}
\end{equation}
iff the operator bases ${\hat{E}_{\alpha}}$ and ${\hat{F}_{\alpha}}$ in
$\mathcal{HS}_{d}$ are unitarily related, i.e.,
\begin{equation}
\hat{F}_{\beta}=\sum\limits_{\alpha=0}^{d^{2}-1}{\hat{E}_{\alpha}}
\mathcal{U}{_{\alpha\beta}.} \label{eq16}
\end{equation}
In Eqs. (\ref{eq15}) and (\ref{eq16}), $\mathcal{U}_{\alpha\beta}=\dbraket
{\hat{E}_{\alpha}}{\hat{F}_{\beta}}
=\mathrm{Tr}\hat{E}_{\alpha}^{\dag}\hat{F}_{\beta}$ is a $\alpha\beta$-entry
of the $d^{2}\times d^{2}$ unitary matrix $\mathcal{U}$. If we consider a
unitary superoperator acting on the vectors in $\mathcal{H}_{d}^{\otimes2}$,
\begin{equation}
\mathcal{\hat{\hat{U}}}=\sum\limits_{\alpha,\beta=0}^{d^{2}-1}\mathcal{U}
{_{\alpha\beta}
\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}},}
\label{eq17}
\end{equation}
Eqs. (\ref{eq15}) and (\ref{eq16}) then represent the unitary
transformation of the basis operators. Note that Eq. (\ref{eq16})
does not imply unitary equivalence of $\hat{E}_{\alpha}$ and ${\hat{F}_{\alpha}
}$, i.e., $\hat{F}_{\alpha}=\hat{W}\hat{E}_{\alpha}\hat{W}^{\dag}$ for some
unitary operator $\hat{W}$ in $\mathcal{HS}_{d}$, although the unitary equivalence
of $\hat{E}_{\alpha}$ and ${\hat{F}_{\alpha}}$ implies Eq. (\ref{eq16}).
In general, $\hat{F}_{\alpha}\neq\hat{W}\hat{E}_{\alpha}\hat
{W}^{\dag}$ for any unitary operator $\hat{W}$ in $\mathcal{HS}_{d}$, even if
Eq. (\ref{eq16}) holds. For example, ${\hat{\pi}_{\left(
{i,j}\right) }}$ is an operator basis all of whose elements have rank
one, whereas ${\hat{U}_{\alpha}}$ and ${\hat{\lambda}_{\alpha}}$ are
operator bases all of whose elements have rank exceeding one. Therefore,
$\hat{\pi}_{({i,j})}$ is induced by a standard state basis
$\{\ket{i}\}_{i=0}^{d-1}$ for
$\mathcal{H}_{d}$, whereas ${\hat{U}_{\alpha}}$ and ${\hat{\lambda
}_{\alpha}}$ are not. This clearly indicates that each of the sets $\{
\hat{U}_{\alpha}\}_{\alpha=0}^{d^{2}-1}$ and $\{\hat{\lambda
}_{\alpha}\}_{\alpha=0}^{d^{2}-1}$ is unitarily related to the set
$\{\hat{\pi}_{\alpha}\}_{\alpha=0}^{d^{2}-1}$, but is not unitarily equivalent to it.
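The unitary relation of Eqs. (\ref{eq15}) and (\ref{eq16}) between two operator bases is easy to confirm numerically (our sketch, assuming NumPy and $d=2$, taking the transition-operator basis $\hat{\pi}_{\alpha}$ and the normalized Pauli basis as the two sets):

```python
import numpy as np

d = 2
# Transition-operator basis pi_(i,j) = |i><j|, indexed alpha = d*i + j.
pi = [np.outer(np.eye(d)[i], np.eye(d)[j]).astype(complex)
      for i in range(d) for j in range(d)]
# Normalized Pauli basis as the second set.
I = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
lam = [P / np.sqrt(2) for P in (I, X, Y, Z)]

# U_ab = Tr pi_a^dag lam_b, the matrix of Eqs. (15)-(16); it is unitary.
U = np.array([[np.trace(Ea.conj().T @ Fb) for Fb in lam] for Ea in pi])
print(np.allclose(U @ U.conj().T, np.eye(d * d)))  # True

# Eq. (16): lam_b = sum_a pi_a U_ab
ok16 = all(np.allclose(sum(U[a, b] * pi[a] for a in range(d * d)), lam[b])
           for b in range(d * d))
```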
\section{Matrix representation of quantum operations}
\label{matrix}
Let us turn our attention to a single-qudit operation. As shown
in Sec. \ref{intro}, this quantum operation can be represented by either an
\textit{HS}-space superoperator or an \textit{L}-space superoperator,
in which the $\chi$- and $\mathcal{S}$-matrices are introduced
with respect to arbitrary but mutually associated basis sets $\{ {\hat
{E}_{\alpha}}\} _{\alpha=0}^{d^{2}-1}$ for $\mathcal{HS}_{d}$ and
$\{\dket{\hat{E}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$ for $\mathcal{H}_{d}^{\otimes2}$,
respectively. In this section, the underlying relationship between these two
matrix representations is discussed.
Let us first consider the \textit{L}-space superoperator.
In the Liouville formalism, the operators in $\mathcal{HS}_{d}$ are identified
with the vectors in $\mathcal{H}_{d}^{\otimes2}$. Any operation
$\mathfrak{S}$ is identified with the one-sided operator $\mathcal{\hat
{\hat{S}}}$ acting on $\mathcal{H}_{d}^{\otimes2}$, which can be expanded
using the state basis $\dket{\hat{E}_{\alpha}}$ for $\mathcal{H}_{d}^{\otimes2}$
as shown in Eq. (\ref{eq2}).
The elements of a $d^{2}\times d^{2}$ matrix $\mathcal{S}$ are formally
written as $\mathcal{S}_{\alpha\beta}=\dbra{\hat{E}_{\alpha}} \mathcal{\hat{\hat{S}}}
\dket{\hat{E}_{\beta}}$. Alternatively, the same operation
$\mathfrak{S}$ is written as a two-sided superoperator acting on
the operator in $\mathcal{HS}_{d}$, which can be expanded
using the operator basis $\hat{E}_{\alpha}$ for $\mathcal{HS}_{d}$
as shown in Eq. (\ref{eq1}). Since
$\mathcal{\hat{\hat{S}}} \dket{\hat{E}_{\beta}}=\dket{\mathcal{\hat{\hat{S}}}(\hat{E}_{\beta})}$,
we find the matrix element $\mathcal{S}_{\alpha\beta}$ can also be written as
\begin{equation}
\mathcal{S}_{\alpha\beta}=
\dbraket{\hat{E}_{\alpha}}{\mathcal{\hat{\hat{S}}}(\hat{E}_{\beta})}
=\sum\limits_{\gamma,\delta=0}^{d^{2}-1}{\chi_{\gamma\delta}
\dbra{\hat{E}_{\alpha}} \hat{E}_{\gamma}\otimes\hat{E}_{\delta}^{\ast}
\dket{\hat{E}_{\beta}},} \label{eq18}
\end{equation}
where we used Eq. (\ref{eq5}).
Substituting the right-hand side of Eq. (\ref{eq18}) for $\mathcal{S}
_{\alpha\beta}$ in Eq. (\ref{eq2}), we find that $\mathcal{\hat{\hat{S}}}$ can
be written in terms of either the matrix $\mathcal{S}$ or $\chi$ as
\begin{equation}
\mathcal{\hat{\hat{S}}}=\sum\limits_{\alpha,\beta=0}^{d^{2}-1}\mathcal{S}
{_{\alpha\beta}
\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}} }=\sum\limits_{\alpha,\beta=0}^{d^{2}-1}{\chi_{\alpha\beta}
\hat{E}_{\alpha}\otimes\hat{E}_{\beta}^{\ast}.} \label{eq19}
\end{equation}
In Eq. (\ref{eq19}), we find two types of induced operator basis on $\mathcal{H}
_{d}^{\otimes2}$ having different tensor product structures, that is,
Kronecker products $\hat{E}_{\alpha}\otimes\hat{E}_{\beta}^{\ast}$ and dyadic
products $\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}} $ of the state
basis associated with the operator basis set $\{ {\hat{E}_{\alpha}
}\}_{\alpha=0}^{d^{2}-1}$. Note that both types of basis set do
not cover all the possible basis sets on $\mathcal{H}
_{d}^{\otimes2}$. Obviously, the former type of basis set covers only those
sets that are factorable with respect to the original and extended system
spaces. For example, the set of $d^{4}$-dyadic products of the $d^{2}
$-maximally-entangled states in $\mathcal{H}_{d}^{\otimes2}$ is not a
factorable basis set, and cannot be covered by the former type.
Similarly, only an operator basis on $\mathcal{H}_{d}^{\otimes2}$ all of whose
elements have rank one can be reduced to the latter type of basis
set. Therefore, each type of basis set can describe its own particular subset
of all the possible basis sets on $\mathcal{H}_{d}^{\otimes2}$.
Let us next consider another operator $\hat{\hat{\chi}}$ on $\mathcal{H}
_{d}^{\otimes2}$, which we call the Choi operator \cite{Choi}. We will show that
this operator has bijective correspondence to the \textit{L}-space
superoperator $\mathcal{\hat{\hat{S}}}$. It is known that the isomorphism
between the operator in $\mathcal{HS}_{d}$ and the bipartite vector in
$\mathcal{H}_{d}^{\otimes2}$ can be straightforwardly extended to the isomorphism
between the superoperator acting on $\mathcal{HS}_{d}$ and the operator acting
on $\mathcal{H}_{d}^{\otimes2}$. Jamio\l kowski first showed that the map
between the \textit{HS}-space superoperator $\mathcal{\hat{\hat{S}}}\left(
\odot\right)$ and the operator $\hat{\hat{\chi
}}\equiv\mathcal{\hat{\hat{S}}}\otimes\mathcal{\hat{\hat{I}}}(d\hat{\rho}
_{I})$ acting on $\mathcal{H}_{d}^{\otimes2}$ is an isomorphism, where
$\mathcal{\hat{\hat{S}}}\left( \odot\right) $ and $\mathcal{\hat{\hat{I}}
}\left( \odot\right) $ act on the \textit{HS}-space of the first and second
systems, respectively \cite{Jamiolkowski}. If we note Eq. (\ref{eq13}) and the
following equivalent relation that follows from Eq. (\ref{eq19})
\[
\mathcal{\hat{\hat{S}}} \dket{\hat{E}_{\beta}} =\sum\limits_{\alpha=0}^{d^{2}-1}
\mathcal{S}{_{\alpha\beta} \dket{\hat{E}_{\alpha}}}\leftrightarrow
\mathcal{\hat{\hat{S}}}( {\hat{E}_{\beta}})
=\sum\limits_{\alpha=0}^{d^{2}-1}\mathcal{S}{_{\alpha\beta}\hat{E}_{\alpha},}
\]
it is easy to confirm that $\hat{\hat{\chi}}$ can be written
as
\begin{equation}
\hat{\hat{\chi}}=\sum\limits_{\alpha,\beta=0}^{d^{2}-1}{\chi_{\alpha\beta}
\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}} }
=\sum\limits_{\alpha,\beta=0}^{d^{2}-1}\mathcal{S}{_{\alpha\beta}\hat
{E}_{\alpha}\otimes\hat{E}_{\beta}^{\ast}.} \label{eq20}
\end{equation}
Equations (\ref{eq19}) and (\ref{eq20}) constitute one of the main results of this paper.
From these equations, we find that $\mathcal{\hat{\hat{S}}}$ and $\hat
{\hat{\chi}}$ are complementary to each other in the sense that they can be
interchanged if we exchange the two operator bases $\dketbra{\hat{E}_{\alpha}}
{\hat{E}_{\beta}} $ and $\hat{E}_{\alpha}\otimes\hat{E}
_{\beta}^{\ast}$ on $\mathcal{H}_{d}^{\otimes2}$ in their expressions. These
equations show that the $\mathcal{S}$-matrix ($\chi$-matrix) is given by the
expansion coefficients of $\mathcal{\hat{\hat{S}}}$ ($\hat{\hat{\chi}}$) with
respect to $\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}} $ as well as
those of $\hat{\hat{\chi}}$ ($\mathcal{\hat{\hat{S}}}$) with respect to
$\hat{E}_{\alpha}\otimes\hat{E}_{\beta}^{\ast}$, which is explicitly written
as
\begin{equation}
\chi_{\alpha\beta}=\mathrm{Tr}(\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}})
^{\dag}\hat{\hat{\chi}}=\mathrm{Tr}({\hat{E}_{\alpha}\otimes\hat{E}_{\beta}^{\ast}})
^{\dag}\mathcal{\hat{\hat{S}}},
\label{eq21}
\end{equation}
\begin{equation}
\mathcal{S}_{\alpha\beta}=\mathrm{Tr}(\dketbra{\hat{E}_{\alpha}}
{\hat{E}_{\beta}})^{\dag}\mathcal{\hat{\hat{S}}}
=\mathrm{Tr}(\hat{E}_{\alpha}\otimes\hat{E}_{\beta}^{\ast})^{\dag}\hat{\hat{\chi}}.
\label{eq22}
\end{equation}
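These relations admit a direct numerical check (our sketch, assuming NumPy, $d=2$, and the row-major vectorization of Eq. (\ref{eq4}); the amplitude-damping Kraus operators are an illustrative choice). The Choi operator $\hat{\hat{\chi}}$ is obtained from the matrix of $\mathcal{\hat{\hat{S}}}$ by an index reshuffle, and Eqs. (\ref{eq21}) and (\ref{eq22}) are verified without any matrix inversion:

```python
import numpy as np

d = 2
g = 0.3  # damping probability (illustrative value)
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)

def dket(A):
    return A.reshape(-1)  # row-major |A>>, Eq. (4)

# L-space superoperator as a d^2 x d^2 matrix (row-major vec):
# vec(K rho K^dag) = (K (x) K^*) vec(rho).
L = sum(np.kron(K, K.conj()) for K in (K0, K1))
# Choi operator (S (x) I)(d rho_I): an index reshuffle of L.
Choi = L.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

# Normalized Pauli basis.
I = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
E = [P / np.sqrt(2) for P in (I, X, Y, Z)]

# Eq. (21): chi_ab from either operator.
chi1 = np.array([[np.vdot(dket(Ea), Choi @ dket(Eb)) for Eb in E] for Ea in E])
chi2 = np.array([[np.trace(np.kron(Ea, Eb.conj()).conj().T @ L)
                  for Eb in E] for Ea in E])
# Eq. (22): S_ab likewise.
S1 = np.array([[np.vdot(dket(Ea), L @ dket(Eb)) for Eb in E] for Ea in E])
S2 = np.array([[np.trace(np.kron(Ea, Eb.conj()).conj().T @ Choi)
                for Eb in E] for Ea in E])
print(np.allclose(chi1, chi2), np.allclose(S1, S2))
```

For this completely positive channel the resulting $\chi$-matrix is Hermitian and positive, as required.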
From Eqs. (\ref{eq19}) and (\ref{eq20}), we can explore the mutual conversion
formulas between $\chi$- and $\mathcal{S}$-matrices. To this end, let us define
a bijection between the two operators on $\mathcal{H}_{d}^{\otimes2}$
originally found by Havel \cite{Havel}.
\begin{equation}
\Lambda(\odot)=\sum\limits_{\gamma=0}^{d^{2}-1}{(\hat{I}\otimes
\hat{\pi}_{\gamma}) \odot (\hat{\pi}_{\gamma}\otimes\hat{I}) ,}
\label{eq23}
\end{equation}
which is also considered to be the super-superoperator acting on $\mathcal{H}
_{d}.$ Then, we have the following Theorem \cite{Havel}.
\begin{theorem}
\label{Th2} (Havel (2003)) For arbitrary operators $\hat{X}$ and $\hat{Y}$ in
$\mathcal{HS}_{d}$, we have
\begin{equation}
\dketbra{\hat{X}}{\hat{Y}}=\Lambda(\hat{X}\otimes\hat{Y}^{\ast}),
\label{eq24}
\end{equation}
\begin{equation}
\hat{X}\otimes\hat{Y}^{\ast}=\Lambda(\dketbra{\hat{X}}{\hat{Y}}).
\label{eq25}
\end{equation}
\end{theorem}
Theorem \ref{Th2} connects two relevant operators on $\mathcal{H}_{d}
^{\otimes2}$ having different tensor product structures, i.e., the Kronecker
product ${\hat{X}\otimes\hat{Y}^{\ast}}$ and the dyadic product $\dketbra{\hat{X}}
{\hat{Y}} $. To prove the Theorem \ref{Th2}, we first note the following lemma.
\begin{lemma}
\label{L3}
The identity operator on $\mathcal{H}_{d}^{\otimes2}$ and the
(unnormalized) density operator of the isotropic state on $\mathcal{H}
_{d}^{\otimes2}$ are related as follows.
\begin{equation}
\dketbra{\hat{I}}{\hat{I}}=\Lambda(\hat{I}\otimes\hat{I}),
\label{eq26}
\end{equation}
\begin{equation}
\hat{I}\otimes\hat{I}=\Lambda(\dketbra{\hat{I}}{\hat{I}}).
\label{eq27}
\end{equation}
\end{lemma}
It is straightforward to confirm Eqs. (\ref{eq26}) and (\ref{eq27}) by writing
$\dketbra{\hat{I}}{\hat{I}} $ and $\hat{I}\otimes\hat{I}$
using the standard state basis $\ket{i}$ explicitly.
Then, it follows that
\begin{eqnarray*}
\dketbra{\hat{X}}{\hat{Y}}
&=&(\hat{X}\otimes\hat{I}) \dketbra{\hat{I}}{\hat{I}}(\hat{I}\otimes\hat{Y}^{\ast})
\\
&=&\Lambda((\hat{X}\otimes\hat{I})(\hat{I}\otimes\hat{I})(\hat{I}\otimes\hat{Y}^{\ast}))
=\Lambda(\hat{X}\otimes\hat{Y}^{\ast}),
\end{eqnarray*}
\begin{eqnarray*}
\hat{X}\otimes\hat{Y}^{\ast}
&=&(\hat{X}\otimes\hat{I})(\hat{I}\otimes\hat{I})(\hat{I}\otimes\hat{Y}^{\ast})
\\
&=&\Lambda((\hat{X}\otimes\hat{I}) \dketbra{\hat{I}}{\hat{I}}(\hat{I}\otimes\hat{Y}^{\ast}))
=\Lambda(\dketbra{\hat{X}}{\hat{Y}}),
\end{eqnarray*}
where we used Eq. (\ref{eq5}). Accordingly, Theorem \ref{Th2} is proved. At
this point, we note that the action of the bijection $\Lambda(\odot)$
corresponds to the matrix reshuffling introduced by
\.{Z}yczkowski and Bengtsson: if we represent the operator on
$\mathcal{H}_{d}^{\otimes2}$ by a matrix with respect to the standard state basis,
the operator mapped by $\Lambda(\odot)$ is represented by the
reshuffled version of the original matrix \cite{Zyczkowski}. The bijection
$\Lambda(\odot)$ is also closely related to the matrix
realignment introduced by Chen and Wu to discuss the separability criterion for
the bipartite density matrix \cite{Chen}. It is easy to confirm that
$\Lambda(\odot)$ is involutory, that is, $\Lambda
(\Lambda(\odot))=\odot$. It should also be noted that
$\Lambda(\odot)$ preserves neither Hermiticity nor the
rank of the transformed operator, so the spectrum is not
preserved \cite{Zyczkowski}. This means that $\Lambda(\odot)$
represents a non-physical operation. It follows from Theorem \ref{Th2}
that $\hat{\hat{\chi}}=\Lambda(\mathcal{\hat{\hat{S}}})$ and
$\mathcal{\hat{\hat{S}}}=\Lambda(\hat{\hat{\chi}})$, i.e., $\mathcal{\hat{\hat{S}}}$
and $\hat{\hat{\chi}}$ are in bijective correspondence.
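For readers who wish to experiment, the action of $\Lambda(\odot)$ is exactly the reshuffle of a $d^{2}\times d^{2}$ matrix and can be checked numerically. The sketch below assumes a row-major vec convention for $\dket{\hat{X}}$ (an assumption consistent with the manipulations above, but not fixed by the text):

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)

def reshuffle(m, d):
    # Lambda: M_{(i,j),(k,l)} -> M_{(i,k),(j,l)} on a d^2 x d^2 matrix.
    return m.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Y = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Havel's bijection: Lambda maps X (x) Y^* to |X>><<Y| (row-major vec).
dyad = np.outer(X.reshape(-1), Y.conj().reshape(-1))   # |X>><<Y|
assert np.allclose(reshuffle(np.kron(X, Y.conj()), d), dyad)

# Lambda is involutory: applying it twice is the identity.
M = rng.standard_normal((d * d, d * d))
assert np.allclose(reshuffle(reshuffle(M, d), d), M)
```

The first assertion is Eqs. (\ref{eq24})-(\ref{eq25}) in matrix form; the second is the involution property noted above.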
From this relation, we can derive the bijection between the $\chi$- and $\mathcal{S}
$-matrices. To this end, we expand $\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}}$
in terms of ${\hat{E}_{\alpha}\otimes\hat{E}_{\beta
}^{\ast}}$, and vice versa. It follows that
\begin{eqnarray}
\hat{E}_{\alpha}\otimes\hat{E}_{\beta}^{\ast}
&=&\Lambda(\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}})
\nonumber\\
&=&\sum\limits_{\alpha^{\prime},\beta^{\prime}=0}^{d^{2}-1}
{\dketbra{\hat{E}_{\alpha^{\prime}}}{\hat{E}_{\beta^{\prime}}} M_{\alpha
^{\prime}\beta^{\prime},\alpha\beta},} \label{eq28}
\end{eqnarray}
\begin{eqnarray}
\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}}
&=&\Lambda(\hat{E}_{\alpha}\otimes\hat{E}_{\beta}^{\ast})
\nonumber\\
&=&\sum\limits_{\alpha^{\prime},\beta^{\prime}=0}^{d^{2}-1}{\hat{E}
_{\alpha^{\prime}}\otimes\hat{E}_{\beta^{\prime}}^{\ast}
(M^{\dag})_{\alpha^{\prime}\beta^{\prime},\alpha\beta},}
\label{eq29}
\end{eqnarray}
where ${M_{\alpha^{\prime}\beta^{\prime},\alpha\beta}}$ is the $(\alpha^{\prime
}\beta^{\prime};\alpha\beta)$-entry of the $d^{4}\times d^{4}$ complex matrix
$M$. It is explicitly given as
\begin{eqnarray}
M_{\alpha^{\prime}\beta^{\prime};\alpha\beta}
&=&\mathrm{Tr}(\dketbra{\hat{E}_{\alpha^\prime}}{\hat{E}_{\beta^\prime}})
^{\dag}\hat{E}_{\alpha}\otimes\hat{E}_{\beta}^{\ast}
\nonumber\\
&=&\sum\limits_{\gamma=0}^{d^{2}-1}{Q_{\alpha^{\prime}\alpha}^{\gamma}R_{\beta\beta^{\prime}}^{\gamma},}
\label{eq30}
\end{eqnarray}
where the coefficients
\begin{equation}
Q_{\alpha\beta}^{\gamma}=\dbra{\hat{E}_{\alpha}}(\hat{I}\otimes\hat{\pi}_{\gamma})
\dket{\hat{E}_{\beta}}, \label{eq31}
\end{equation}
\begin{equation}
R_{\alpha\beta}^{\gamma}=\dbra{\hat{E}_{\alpha}}(\hat{\pi}_{\gamma}\otimes\hat{I})
\dket{\hat{E}_{\beta}}
\label{eq32}
\end{equation}
are the computable matrix elements of the operators ${\hat{I}\otimes\hat{\pi
}_{\gamma}}$ and ${\hat{\pi}_{\gamma}\otimes\hat{I}}$ on $\mathcal{H}
_{d}^{\otimes2}$ defined with respect to the basis set $\{\dket{\hat{E}_{\alpha}}\}_{\alpha
=0}^{d^{2}-1}$. These coefficients form the
$d^{2}\times d^{2}$ complex matrices
$Q^{\gamma}\equiv[Q_{\alpha\beta}^{\gamma}]_{\alpha,\beta=0}^{d^{2}-1}$
and $R^{\gamma}\equiv[R_{\alpha\beta}^{\gamma}]_{\alpha,\beta=0}^{d^{2}-1}$.
It is easy to confirm that $M$ is Hermitian as well as unitary, i.e.,
$(M^{\dag})_{\alpha^{\prime}\beta^{\prime},\alpha\beta}=(M_{\alpha\beta,\alpha^{\prime}\beta^{\prime}})^{\ast}={M_{\alpha^{\prime}\beta^{\prime},\alpha\beta}}$ and
$\sum\nolimits_{\alpha^{\prime\prime},\beta^{\prime\prime}}
(M^{\dag})_{\alpha\beta,\alpha^{\prime\prime}\beta^{\prime\prime}}
M_{\alpha^{\prime\prime}\beta^{\prime\prime};\alpha^{\prime}\beta^{\prime}}
=\sum\nolimits_{\alpha^{\prime\prime},\beta^{\prime\prime}}
M_{\alpha\beta;\alpha^{\prime\prime}\beta^{\prime\prime}}
(M^{\dag})_{\alpha^{\prime\prime}\beta^{\prime\prime},\alpha^{\prime}\beta^{\prime}}
=\delta_{\alpha\alpha^{\prime}}\delta_{\beta\beta^{\prime}}$. By using these results, we
obtain a bijection between the $\chi$- and $\mathcal{S}$-matrices:
\begin{equation}
\chi=\sum\limits_{\gamma=0}^{d^{2}-1}{Q^{\gamma}\mathcal{S}R^{\gamma},}
\label{eq33}
\end{equation}
\begin{equation}
\mathcal{S}=\sum\limits_{\gamma=0}^{d^{2}-1}{Q^{\gamma}\chi R^{\gamma},}
\label{eq34}
\end{equation}
each of which is given by a sum of products of three known matrices ($Q^{\gamma}$,
$\mathcal{S}$ or $\chi$, and $R^{\gamma}$) and is therefore readily computable.
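As a numerical sanity check of Eqs. (\ref{eq33}) and (\ref{eq34}), the following sketch builds $Q^{\gamma}$ and $R^{\gamma}$ from Eqs. (\ref{eq31}) and (\ref{eq32}) and converts between the two matrices. The matrix-unit operator basis, the row-major vec convention, and the random two-Kraus map are illustrative assumptions, not choices made in the text:

```python
import numpy as np

d = 2
rng = np.random.default_rng(1)

# Matrix-unit operator basis E_a = |i><j|, a = d*i + j (an assumption; any
# orthonormal basis works, with Q^g and R^g computed from Eqs. (31)-(32)).
E = [np.outer(np.eye(d)[i], np.eye(d)[j]) for i in range(d) for j in range(d)]
vec = lambda A: np.asarray(A, dtype=complex).reshape(-1)

# A random two-Kraus map and its L-space superoperator S (row-major vec).
K = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(2)]
S = sum(np.kron(k, k.conj()) for k in K)

# Q^g and R^g: matrix elements of I (x) pi_g and pi_g (x) I in the basis |E_a>>.
Q = [np.array([[vec(Ea).conj() @ np.kron(np.eye(d), pi) @ vec(Eb)
                for Eb in E] for Ea in E]) for pi in E]
R = [np.array([[vec(Ea).conj() @ np.kron(pi, np.eye(d)) @ vec(Eb)
                for Eb in E] for Ea in E]) for pi in E]

chi = sum(q @ S @ r for q, r in zip(Q, R))        # Eq. (33)
S_back = sum(q @ chi @ r for q, r in zip(Q, R))   # Eq. (34)
assert np.allclose(S_back, S)

# In this basis, chi coincides with the (row-major) Choi matrix of the map.
choi = sum(np.outer(vec(k), vec(k).conj()) for k in K)
assert np.allclose(chi, choi)
```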
The above formulation can be straightforwardly extended to describe two-qudit
operations. In this case, different choices of basis sets are allowed
for two systems. Let us choose the set $\{\hat{E}_{\alpha}\}_{\alpha=0}^{d^{2}-1}$ acting on the
first qudit space and the set $\{\hat{F}_{\alpha}\}_{\alpha=0}^{d^{2}-1}$
acting on the second qudit space. The general two-qudit superoperator can be written as
\begin{equation}
\mathcal{\hat{\hat{S}}}
=\sum\limits_{\alpha,\beta,\gamma,\delta=0}^{d^{2}-1}\mathcal{S}_{\alpha\beta,\gamma\delta}
\dket{\hat{E}_{\alpha}}
\dketbra{\hat{F}_{\beta}}{\hat{E}_{\gamma}}
\dbra{\hat{F}_{\delta}}
\label{eq35}
\end{equation}
acting on the $d^{4}$-dimensional \textit{L}-space $\mathcal{L}_{d^{4}}$ as
well as
\begin{equation}
\mathcal{\hat{\hat{S}}}\left( \odot\right) =\sum\limits_{\alpha,\beta
,\gamma,\delta=0}^{d^{2}-1}{\chi_{\alpha\beta,\gamma\delta}\hat{E}_{\alpha
}\otimes\hat{F}_{\beta}\odot\hat{E}_{\gamma}^{\dag}\otimes\hat{F}_{\delta
}^{\dag}} \label{eq36}
\end{equation}
acting on $\mathcal{HS}_{d^{2}}$. These superoperators are characterized by
the $d^{4}\times d^{4}$ matrices $\mathcal{S}\equiv\left[ \mathcal{S}
{_{\alpha\beta,\gamma\delta}}\right] _{\alpha,\beta,\gamma,\delta=0}
^{d^{2}-1}$ and $\chi\equiv\left[ {\chi_{\alpha\beta,\gamma\delta}}\right]
_{\alpha,\beta,\gamma,\delta=0}^{d^{2}-1}$. The bijective Choi operator is
defined on the $d^{4}$-dimensional \textit{H}-space $\mathcal{H}_{d}
^{\otimes4}$ that is identified with the \textit{L}-space $\mathcal{L}_{d^{4}
}$ as follows:
\begin{equation}
\hat{\hat{\chi}}\equiv\mathcal{\hat{\hat{S}}}^{(13)}\otimes\mathcal{\hat
{\hat{I}}}^{(24)}(d^{2}\hat{\rho}_{I}^{(12)}\otimes\hat{\rho}_{I}^{(34)}),
\label{eq37}
\end{equation}
where the indices refer to the factors in $\mathcal{H}_{d}^{\otimes4}$ in
which the corresponding operations have a nontrivial
action \cite{Cirac,Dur,Harrow}. Then, it is straightforward to show that
$\mathcal{\hat{\hat{S}}}$ and $\hat{\hat{\chi}}$ can be written as follows:
\begin{eqnarray}
\mathcal{\hat{\hat{S}}}
&=&\sum\limits_{\alpha,\beta,\gamma,\delta=0}^{d^{2}-1}\mathcal{S}_{\alpha\beta,\gamma\delta}
\dket{\hat{E}_{\alpha}}
\dketbra{\hat{F}_{\beta}}{\hat{E}_{\gamma}}
\dbra{\hat{F}_{\delta}}
\nonumber\\
&=&\sum\limits_{\alpha,\beta,\gamma,\delta=0}^{d^{2}-1}
{\chi_{\alpha\beta,\gamma\delta}\hat{E}_{\alpha}\otimes\hat{E}_{\gamma}^{\ast
}\otimes\hat{F}_{\beta}}\otimes\hat{F}_{\delta}^{\ast}.
\label{eq38}
\end{eqnarray}
\begin{eqnarray}
\hat{\hat{\chi}}
&=&\sum\limits_{\alpha,\beta,\gamma,\delta=0}^{d^{2}-1}
{{\chi_{\alpha\beta,\gamma\delta}}}
\dket{\hat{E}_{\alpha}}
\dketbra{\hat{F}_{\beta}}{\hat{E}_{\gamma}}
\dbra{\hat{F}_{\delta}}
\nonumber\\
&=&\sum\limits_{\alpha,\beta,\gamma,\delta=0}^{d^{2}-1}
\mathcal{S}{_{\alpha\beta,\gamma\delta}\hat{E}_{\alpha}\otimes\hat{E}_{\gamma
}^{\ast}\otimes\hat{F}_{\beta}}\otimes\hat{F}_{\delta}^{\ast}.
\label{eq39}
\end{eqnarray}
It follows that the two operators are again in bijective correspondence: $\hat{\hat{\chi}
}=\Lambda\otimes\Lambda( \mathcal{\hat{\hat{S}}}) $ and
$\mathcal{\hat{\hat{S}}}=\Lambda\otimes\Lambda( \hat{\hat{\chi}})$.
From these bijective relations, we can explore the bijection between the $\chi$- and
$\mathcal{S}$-matrices for a two-qudit operation as
\begin{equation}
\chi=\sum\limits_{\gamma,\lambda=0}^{d^{2}-1}{(Q^{\gamma}\otimes S^{\lambda}
)\,\mathcal{S}\,(R^{\gamma}\otimes T^{\lambda}),} \label{eq40}
\end{equation}
\begin{equation}
\mathcal{S}=\sum\limits_{\gamma,\lambda=0}^{d^{2}-1}{(Q^{\gamma}\otimes
S^{\lambda})\,\chi\,(R^{\gamma}\otimes T^{\lambda}),} \label{eq41}
\end{equation}
where the $d^{2}\times d^{2}$ matrices ${Q^{\gamma}}$ and ${R^{\gamma}}$ are given
by Eqs. (\ref{eq31}) and (\ref{eq32}), and the matrix entries of ${S^{\lambda
}}$ and ${T^{\lambda}}$ are given by the matrix elements of the operators
${\hat{I}\otimes\hat{\pi}_{\gamma}}$ and ${\hat{\pi}_{\gamma}\otimes\hat{I}}$
on $\mathcal{H}_{d}^{\otimes2}$ defined with respect to the basis set
$\{\dket{\hat{F}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$:
\begin{equation}
S_{\alpha\beta}^{\gamma}=\dbra{\hat{F}_{\alpha}}
(\hat{I}\otimes\hat{\pi}_{\gamma})
\dket{\hat{F}_{\beta}},
\label{eq42}
\end{equation}
\begin{equation}
T_{\alpha\beta}^{\gamma}=
\dbra{\hat{F}_{\alpha}}
(\hat{\pi}_{\gamma}\otimes\hat{I})
\dket{\hat{F}_{\beta}}.
\label{eq43}
\end{equation}
We can further extend the above formulation to describe \textit{n}-qudit
operations. In this case, the bijective relation between $\mathcal{\hat{\hat
{S}}}$ and $\hat{\hat{\chi}}$ reads $\hat{\hat{\chi}}=\Lambda^{\otimes
n}( \mathcal{\hat{\hat{S}}}) $ and $\mathcal{\hat{\hat{S}}}
=\Lambda^{\otimes n}( \hat{\hat{\chi}}) $. By using these
relations, we can straightforwardly extend Eqs. (\ref{eq40}) and (\ref{eq41})
to the bijection between the $\chi$- and $\mathcal{S}$-matrices for an
\textit{n}-qudit operation.
It is obvious that not the whole space of $\chi$- and $\mathcal{S}$-matrices
corresponds to physically realizable operations. For example, we can describe an
anti-unitary operation by using the $\chi$- and $\mathcal{S}$-matrices, which
is evidently an unphysical operation. The requirement for the $\chi$- and
$\mathcal{S}$-matrices to represent physical quantum operations has been
extensively studied by many researchers. In the following, the requirements
common for the single- and two-qudit operations are summarized \cite{Arrighi}.
\begin{condition}
\label{Con1}
(Hermiticity) The physical quantum operation $\mathfrak{S}$
should preserve Hermiticity; i.e., $\mathfrak{S}$ maps any Hermitian operator
to a Hermitian operator.
\end{condition}
\begin{condition}
\label{Con2}
(Positivity) The physical quantum operation $\mathfrak{S}$ should
be positive; i.e., $\mathfrak{S}$ maps any positive operator to a positive operator.
\end{condition}
\begin{condition}
\label{Con3}
(Complete positivity) The physical quantum operation
$\mathfrak{S}$ should be completely positive; i.e., positivity is preserved
if we extend the L-space and HS-space by adding more qudits. That is, the
superoperator $\mathcal{\hat{\hat{S}}}\otimes\mathcal{\hat{\hat{I}}}$ on the
extended spaces should be positive.
\end{condition}
It is known that Condition \ref{Con3} implies
Conditions \ref{Con1} and \ref{Con2}, and Condition \ref{Con2} implies
Condition \ref{Con1}. Therefore, we require complete positivity for a
physical quantum operation. Complete positivity can be expressed as a
particularly simple condition for the $\chi$-matrix.
\begin{theorem}
\label{Th3}
The linear operation $\mathfrak{S}$ is completely positive if and only if
the $\chi$-matrix is positive.
\end{theorem}
This is natural on physical grounds, because the Choi operator $\hat{\hat
{\chi}}$ should be an unnormalized density operator
associated with the system which was subjected to the quantum operation as
will be discussed in the next section.
In addition to Condition \ref{Con3}, any physical
quantum operation should satisfy the following condition.
\begin{condition}
\label{Con4}
(Trace non-increasing) The physical quantum operation
$\mathfrak{S}$ should be trace non-increasing; i.e., the mapped operator should
have trace no greater than one.
\end{condition}
This condition is simply expressed as the restriction on the $\chi$-matrix:
$\mathrm{Tr}_{1}\chi\leq I^{(1)}$ for a single-qudit operation and
$\mathrm{Tr}_{13}\chi\leq I^{(2)}$ for a two-qudit operation, where $I^{(n)}$ is
an identity matrix with size $d^{n}$.
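Theorem \ref{Th3} and Condition \ref{Con4} can be tested directly on a given $\chi$-matrix. A minimal sketch, assuming the row-major vec convention and a matrix-unit basis (so that the $\chi$-matrix coincides with the Choi matrix); the example channels are illustrative assumptions:

```python
import numpy as np

d = 2
vec = lambda A: np.asarray(A, dtype=complex).reshape(-1)

def chi_from_kraus(kraus):
    # chi-matrix (row-major Choi matrix) of the operation sum_i K_i . K_i^dag.
    return sum(np.outer(vec(k), vec(k).conj()) for k in kraus)

def is_physical(chi, d, tol=1e-10):
    # Complete positivity (Theorem 3): chi must be positive semidefinite.
    cp = np.min(np.linalg.eigvalsh(chi)) >= -tol
    # Trace non-increasing: Tr_1 chi <= I (partial trace over the first index).
    tr1 = np.trace(chi.reshape(d, d, d, d), axis1=0, axis2=2)
    tni = np.min(np.linalg.eigvalsh(np.eye(d) - tr1)) >= -tol
    return bool(cp and tni)

# A fifty-fifty bit-flip channel: completely positive and trace preserving.
X = np.array([[0, 1], [1, 0]], dtype=complex)
kraus = [np.sqrt(0.5) * np.eye(2, dtype=complex), np.sqrt(0.5) * X]
assert is_physical(chi_from_kraus(kraus), d)

# The transpose map is positive but not completely positive: its chi-matrix
# (the SWAP matrix, which the reshuffle leaves fixed) has a -1 eigenvalue.
swap_chi = np.eye(4)[:, [0, 2, 1, 3]]
assert not is_physical(swap_chi, d)
```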
It is needless to say that the $\chi$- and $\mathcal{S}$-matrices can be
defined with respect to arbitrary operator basis sets. Once these matrices are
given with respect to a particular operator basis set, they can be converted
into those defined with respect to the other basis set. It is obvious from Eqs.
(\ref{eq19}), (\ref{eq20}), (\ref{eq38}), and (\ref{eq39}) that the two
matrices defined with respect to different bases are unitarily equivalent. To be
specific, let $\chi^{E}$ and $\mathcal{S}^{E}$ be the $\chi$- and
$\mathcal{S}$-matrices for a single-qudit operation defined with respect to the
operator basis set $\{\hat{E}_{\alpha}\}_{\alpha=0}^{d^{2}-1}$, and
$\chi^{F}$ and $\mathcal{S}^{F}$ be those defined with respect to
the operator basis set $\{ {\hat{F}_{\alpha}}\}_{\alpha=0}^{d^{2}-1}$,
where the two bases are unitarily related as shown in Eqs. (\ref{eq15}) and (\ref{eq16}).
Then, these matrices should be written as
\begin{equation}
\mathcal{S}^{F}=\mathcal{U}^{\dag}\mathcal{S}^{E}\mathcal{U}, \label{eq44}
\end{equation}
\begin{equation}
\chi^{F}=\mathcal{U}^{\dag}\chi^{E}\mathcal{U}, \label{eq45}
\end{equation}
where $\mathcal{U}\equiv\left[ \mathcal{U}{_{\alpha\beta}}\right]
_{\alpha,\beta=0}^{d^{2}-1}$ is a $d^{2}\times d^{2}$ unitary matrix. For the case
of a two-qudit operation, we need to extend the operator basis to cover
all possible basis sets defined for the two-qudit operator space,
that is, sets that are factorable as well as those that are not factorable with
respect to the first and second systems. A general operator basis set
$\{ {\hat{\Phi}_{\gamma}}\}_{\gamma=0}^{d^{4}-1}$ on
$\mathcal{H}_{d}^{\otimes2}$ should be unitarily related to the factorable
operator bases ${\hat{E}_{\alpha}\otimes\hat{F}_{\beta}}$. If
we introduce a $d^{4}\times d^{4}$ unitary matrix $\mathcal{U}\equiv\left[
\mathcal{U}{_{\alpha\beta}}\right] _{\alpha,\beta=0}^{d^{4}-1}$ that relates
${\hat{\Phi}_{\gamma}}$ and ${\hat{E}_{\alpha}\otimes\hat{F}_{\beta}}$:
\begin{equation}
\hat{\Phi}_{\gamma}=\sum\limits_{\alpha,\beta=0}^{d^{2}-1}{\hat{E}_{\alpha
}\otimes\hat{F}_{\beta}}\mathcal{U}{_{\left[ {\alpha,\beta}\right] \gamma},}
\label{eq46}
\end{equation}
where $[\alpha,\beta]:=d^{2}\alpha+\beta$, it follows that Eqs. (\ref{eq44})
and (\ref{eq45}) also hold for two-qudit operations.
The $\chi$- and $\mathcal{S}$-matrices can be diagonalized by choosing the
appropriate operator basis sets, but are not necessarily diagonalized
simultaneously by a unique set. The operator basis set that
diagonalizes the $\chi$-matrix, each element of which is multiplied by the square root
of the associated eigenvalue, forms a particular set of Kraus operators in the
Kraus form of the quantum operation. Any set of Kraus operators can be
obtained by noting the unitary freedom in the Kraus form \cite{Nielsen}. It
follows from Eqs. (\ref{eq19}), (\ref{eq20}), (\ref{eq38}), and (\ref{eq39})
that the same operator basis set with the associated set of eigenvalues also
gives an operator-Schmidt decomposition for the \textit{L}-space superoperator
$\mathcal{\hat{\hat{S}}}$ \cite{Zyczkowski}. Therefore, the Kraus rank for the
\textit{HS}-space superoperator $\mathcal{\hat{\hat{S}}}\left( \odot\right) $
and Schmidt number of the \textit{L}-space superoperator
$\mathcal{\hat{\hat{S}}}$ must be equal.
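The extraction of a canonical Kraus set from the diagonalized $\chi$-matrix, and the equality of the Kraus rank and the operator-Schmidt number, can be sketched as follows; the phase-damping channel and the row-major vec convention are illustrative assumptions:

```python
import numpy as np

d = 2
vec = lambda A: np.asarray(A, dtype=complex).reshape(-1)

# Phase-damping channel (illustrative), Kraus rank 2.
lam_pd = 0.25
K = [np.sqrt(1 - lam_pd) * np.eye(2, dtype=complex),
     np.sqrt(lam_pd) * np.diag([1.0, -1.0]).astype(complex)]
chi = sum(np.outer(vec(k), vec(k).conj()) for k in K)

# Diagonalizing chi gives a canonical Kraus set: A_i = sqrt(w_i) * unvec(v_i).
w, V = np.linalg.eigh(chi)
kraus = [np.sqrt(wi) * V[:, i].reshape(d, d) for i, wi in enumerate(w) if wi > 1e-12]

# The canonical set reproduces the same L-space superoperator ...
S1 = sum(np.kron(k, k.conj()) for k in K)
S2 = sum(np.kron(a, a.conj()) for a in kraus)
assert np.allclose(S1, S2)

# ... and the Kraus rank equals the operator-Schmidt number of the L-space
# superoperator, i.e., the rank of chi.
assert len(kraus) == np.linalg.matrix_rank(chi, tol=1e-10) == 2
```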
\section{Applications of $\chi$- and $\mathcal{S}$-matrices}
\label{applications}
This section presents several applications of the $\chi$- and
$\mathcal{S}$-matrices. We discuss how these matrices and the present
formulation are useful for the analysis and design of quantum operations.\\
\textbf{Experimental identification of quantum operations}
In the first example, we explain how useful the present formulation is for
experimental identification of quantum
operations \cite{Nambu,Altepeter,Mitchell,Martini,O'Brien,Secondi,Nambu2}. This
task is important because the development of any quantum device or circuit for
quantum computation and communication, which can be considered as an
input-output system that performs an intended quantum operation on its input
state and transforms it into its output state, necessarily requires experimental
benchmarking of its performance. The identification of a two-qudit device is
particularly interesting from a practical viewpoint as well as a scientific
one because it may involve a nonseparable operation which has a purely quantum
mechanical nature, i.e., it cannot be simulated by using any classical method.
Identification of an input-output system amounts to identifying its $\chi$- or
$\mathcal{S}$-matrix, since these matrices characterize the system in question
completely as far as input and output data are concerned. The evaluated
matrices should reproduce the behavior of the system well enough when the
system is stimulated by any class of inputs of interest, and they
should be useful for engineering the system of interest, e.g., to permit
control of the system, to allow transmission of information through the
system, to yield predictions of future behavior, etc. Identification problems
are commonly regarded as inversion problems, where the $\chi$- or
$\mathcal{S}$-matrix is to be statistically estimated from incomplete prior
knowledge of the system, using prior knowledge of corresponding inputs and the
collection of data obtained by measurement of outputs that usually contain
noise. In what follows, it will be shown that this common picture does not
apply to the identification of quantum operations. To be specific, we can estimate
both the $\chi$- and $\mathcal{S}$-matrices without any inversion procedure
if we can make use of an entangled resource and a sequence of local
measurements assisted by classical communication \cite{Altepeter,Martini,Secondi,
D'Ariano2,Dur}.
Consider first the identification of a single-qudit operation. Equations
(\ref{eq19}) and (\ref{eq20}) show that all the elements of the $\chi$- and
$\mathcal{S}$-matrices are given by the expansion coefficients of the
operators $\hat{\hat{\chi}}$ or $\mathcal{\hat{\hat{S}}}$
with respect to two different types of operator basis on
$\mathcal{H}_{d}^{\otimes2}$. Of these two operators, the Choi operator
$\hat{\hat{\chi}}$ is particularly useful since it is a positive operator
associated with the physical state of the bipartite object. To be specific, it
can be interpreted as the unnormalized output state from the system in
question where qudit 1 of the two qudits prepared in an isotropic state is
input into the system and undergoes the quantum operation $\mathfrak{S}$ while
qudit 2 is left untouched. Therefore, we can prepare the output state
corresponding to the normalized Choi operator ${{\hat{\hat{\chi}}/}d}
\equiv\mathcal{\hat{\hat{S}}}\otimes\mathcal{\hat{\hat{I}}}(\hat{\rho}_{I})$
with the use of several copies of the isotropic-state input for the two
qudits. Thus, the identification of a single-qudit operation reduces to the
identification of a two-qudit state. It follows from Eq. (\ref{eq22}) that
every element of the $\mathcal{S}$-matrix can be directly obtained by
determining the expectation value of the corresponding product operator basis
$\langle {\hat{E}_{\alpha}\otimes\hat{E}_{\beta}^{\ast}}\rangle $
for the output states after the quantum operation has taken place. If the
basis ${\hat{E}_{\alpha}}$ is chosen to be the Hermitian operator basis
${\hat{\lambda}_{\alpha}}$, it suffices to make a set of $d^{4}$ independent
local measurements assisted by classical communication to determine the whole
set of the real expectation values $\langle {\hat{\lambda}_{\alpha
}\otimes\hat{\lambda}_{\beta}}\rangle $. Accordingly, we can obtain the
$\mathcal{S}$-matrix defined with respect to the Hermitian operator basis set
$\{ {\hat{\lambda}_{\alpha}}\} _{\alpha=0}^{d^{2}-1}$. Once the
$\mathcal{S}$-matrix is obtained, it is easy to convert it to the $\chi
$-matrix defined with respect to the same basis by using Eqs. (\ref{eq31}
)-(\ref{eq33}) and also into the $\chi$- and $\mathcal{S}$-matrices defined with
respect to an arbitrarily chosen basis by using Eqs. (\ref{eq44}) and
(\ref{eq45}).
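The estimation procedure of Eq. (\ref{eq22}) can be simulated numerically. The sketch below uses the normalized Pauli basis for $d=2$ and an amplitude-damping channel as the hypothetical device under test; both choices, and the row-major vec convention, are illustrative assumptions:

```python
import numpy as np

# Orthonormal Hermitian (Pauli) operator basis for d = 2.
d = 2
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
lam = [p / np.sqrt(d) for p in paulis]  # Tr(lam_a^dag lam_b) = delta_ab
vec = lambda A: np.asarray(A, dtype=complex).reshape(-1)

# Amplitude-damping channel as the "unknown" device (an illustrative choice).
g = 0.3
K = [np.array([[1, 0], [0, np.sqrt(1 - g)]]), np.array([[0, np.sqrt(g)], [0, 0]])]

# Output state for the isotropic-state input: the normalized Choi operator chi/d.
chi = sum(np.outer(vec(k), vec(k).conj()) for k in K)
rho_out = chi / d

# Eq. (22): every S-matrix element is d times a local expectation value
# <lam_a (x) lam_b^*> measured on rho_out.
S_est = np.array([[d * np.trace(np.kron(la, lb.conj()) @ rho_out)
                   for lb in lam] for la in lam])

# Check against the directly constructed L-space superoperator.
Sup = sum(np.kron(k, k.conj()) for k in K)
S_direct = np.array([[vec(la).conj() @ Sup @ vec(lb)
                      for lb in lam] for la in lam])
assert np.allclose(S_est, S_direct)
```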
The identification of a two-qudit operation can be carried out in the same way as the
identification of a single-qudit operation. In this case, we prepare the state
corresponding to the Choi operator in Eq. (\ref{eq37}) with the use of several
copies of the product of isotropic states prepared in the four qudits. To
prepare the output state, we initially prepare the product of isotropic
states $\hat{\rho}_{I}^{(12)}\otimes\hat{\rho}_{I}^{(34)}$ in two pairs of two
qudits (qudits 1-2 and qudits 3-4). Then qudit 1 and qudit 3 are input into
the system in question and jointly undergo the quantum operation $\mathfrak{S}$,
while the other qudits are left untouched. This setup leads to the four-qudit output
state $\hat{\hat{\chi}}/d^{2}$. Thus, the identification of a
two-qudit operation reduces to the identification of a four-qudit state. It
follows from Eq. (\ref{eq38}) that every element of the $\mathcal{S}$-matrix
for the two-qudit operation can be directly obtained by determining the real
expectation value of the corresponding product operator basis $\langle
{\hat{\lambda}_{\alpha}\otimes\hat{\lambda}_{\gamma}\otimes\hat{\lambda
}_{\beta}\otimes\hat{\lambda}_{\delta}}\rangle $ for the output states
after the quantum operation has taken place, if all the relevant basis sets
are chosen to be the Hermitian operator basis ${\hat{\lambda}_{\alpha}}$. It
suffices to make a set of $d^{8}$ independent local measurements assisted by
classical communication to determine the whole set of real expectation
values $\langle {\hat{\lambda}_{\alpha}\otimes\hat{\lambda}_{\gamma
}\otimes\hat{\lambda}_{\beta}\otimes\hat{\lambda}_{\delta}}\rangle $.
Accordingly, we can obtain the $\mathcal{S}$-matrix defined with respect to
the Hermitian operator basis $\{ {\hat{\lambda}_{\alpha}\otimes
\hat{\lambda}_{\beta}}\} _{\alpha,\beta=0}^{d^{2}-1}$. The
$\mathcal{S}$-matrix can be converted into the $\chi$-matrix by using Eqs.
(\ref{eq40})-(\ref{eq43}). The $\mathcal{S}$- and $\chi$-matrices defined with
respect to arbitrarily chosen bases can be obtained by applying the appropriate
unitary matrix transformation.\\
\textbf{Matrix analysis of quantum operations}
This section discusses in what way the $\chi$- and $\mathcal{S}$-matrices
contribute to developing quantum devices and circuits for quantum
computation and communication. We consider two classes of applications in
which these matrices offer useful mathematical models for quantum operations.
The first one concerns physical and information theoretic analysis of quantum
operations and the other concerns a logical calculus of quantum circuits or
algorithms comprised of a sequence of quantum operations.
Let us first consider the physical and information theoretic analysis of
quantum operations. For this purpose, it is preferable to use the $\chi
$-matrix. This stems partly from the fact that the $\chi$-matrix is positive
and isomorphic to the density matrix in the doubled Hilbert space. Physically,
the diagonal elements of the process matrix show the populations of, and its
off-diagonal elements show the coherences between, the basis operators making
up the quantum operation, analogous to the interpretation of density matrix
elements as populations of, and coherences between, basis states. Owing to
the Jamiolkowski isomorphism, dynamical problems concerning quantum operations
can be turned into kinematical problems concerning quantum states in a
higher-dimensional space, and one can make use of a well-understood
state-based technique for analyzing the quantum operation. In what follows, we
show several illustrative examples and interesting problems from the physical
and information theoretic viewpoints.
The first example concerns the fidelity or distance measure between two
quantum operations. Several measures that make use of the above
isomorphism have been proposed to quantify how close the quantum operation
in question is to the ideal operation (usually a unitary operation) we are
trying to implement. For example, the state fidelity defined between two states is
extended to compare the two operations. The process fidelity $F_{p}$ is
defined by using the $\chi$-matrix $\tilde{\chi}$ of the system
in question and the rank one $\chi$-matrix $\chi_{ideal}$ of the ideal system
in the state-fidelity formula, that is, $F_{p}=\frac{1}{{d^{2n}}}{\text{Tr}
}\tilde{\chi}\chi_{ideal}$, where \textit{n}=1 for a single-qudit operation and
\textit{n}=2 for a two-qudit operation. The average gate fidelity $\bar{F}$
defined as the state fidelity between the output state after the quantum operation
and the ideal output can be calculated from the process fidelity. The purity
defined for the density matrix can be extended to characterize how much of a
mixture the quantum operation introduces, which is also represented by a
simple function of the $\chi$-matrix of the system in
question \cite{O'Brien,White,Raginsky,Gilchrist}.
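As an illustration of the process fidelity, the following sketch compares an ideal $Z$ gate with a depolarized implementation; the noise model, the parameter $p$, and the row-major vec convention are illustrative assumptions:

```python
import numpy as np

d = 2
vec = lambda A: np.asarray(A, dtype=complex).reshape(-1)

# Ideal gate: Z. Hypothetical device: Z followed by depolarizing noise p.
Z = np.diag([1.0, -1.0]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
p = 0.1
K = [np.sqrt(1 - 3 * p / 4) * Z, np.sqrt(p / 4) * X @ Z,
     np.sqrt(p / 4) * Y @ Z, np.sqrt(p / 4) * Z @ Z]

chi_actual = sum(np.outer(vec(k), vec(k).conj()) for k in K)
chi_ideal = np.outer(vec(Z), vec(Z).conj())   # rank-one chi of the unitary

# Process fidelity F_p = Tr(chi_actual chi_ideal) / d^2 (n = 1).
F_p = np.real(np.trace(chi_actual @ chi_ideal)) / d**2
# Equivalent Kraus-trace form: F_p = sum_i |Tr(Z^dag K_i)|^2 / d^2.
F_p_kraus = sum(abs(np.trace(Z.conj().T @ k))**2 for k in K) / d**2
assert np.allclose(F_p, F_p_kraus)
assert np.isclose(F_p, 1 - 3 * p / 4)

# Average gate fidelity via the standard relation F_avg = (d F_p + 1)/(d + 1).
F_avg = (d * F_p + 1) / (d + 1)
```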
The next example concerns the analysis of a quantum operation acting on the
composite system. As mentioned before, the $\chi$-matrix of a two-qudit
operation is interesting from a practical as well as a scientific viewpoint.
The Jamiolkowski isomorphism for a two-qudit operation (Eqs. (\ref{eq37}) and
(\ref{eq38})) implies that the notion of entanglement can be extended from
quantum states to quantum operations. Analogously to what happens for states,
quantum operations on a composite system can be entangled \cite{Zanardi,Wang}. A
quantum operation acting on two subsystems is said to be separable if its
action can be expressed in the Kraus form
\begin{equation}
\mathcal{\hat{\hat{S}}}\left( \odot\right) =\sum\limits_{i}{( {\hat
{A}_{i}\otimes\hat{B}_{i}}) \odot( {\hat{A}_{i}\otimes\hat{B}
_{i}}) ^{\dag},} \label{eq47}
\end{equation}
where ${\hat{A}_{i}}$ and ${\hat{B}_{i}}$ are operators acting on each
subsystem \cite{Vedral,Barnum,Rains}. Otherwise, we say that it is nonseparable
(or entangled). Quantum operations that can be performed by local operations
and classical communications (the class of LOCC operations) are described by
separable quantum operations, yet
there are separable quantum operations that cannot be implemented with LOCC
operations with probability one \cite{Bennett,Cirac,Harrow}. In any case, separable
operations cannot create entanglement in an initially unentangled system. It has been
pointed out by several authors that the separability and entangling properties
of quantum operations acting on two systems can be discussed in terms of the Choi
operator for two-qudit operations (Eq. (\ref{eq38})) \cite{Cirac,Dur,Harrow}. In
the present context, this reduces to discussing the separability properties of
the $\chi$-matrices. For example, there is a condition for the $\chi$-matrix
equivalent to Eq. (\ref{eq47}): a quantum operation acting on two subsystems
is separable if its $\chi$-matrix can be written as $\chi=\sum\nolimits_{i}
{\chi_{i}^{(A)}\otimes\chi_{i}^{(B)}}$, where ${\chi_{i}^{(A)}}$ and
${\chi_{i}^{(B)}}$ are the $\chi$-matrices for the quantum operation acting on
each subsystem. Thus, the separability of general quantum operations acting
on the composite system is reduced to the separability of its $\chi$-matrix. Since
the separability criterion and measure for the general $d^{4}\times d^{4}$
positive matrix is not fully understood, it remains an important problem for
quantum information science to find such a criterion
and measure for general two-qudit quantum operations.
Let us turn our attention to a logical calculus of quantum circuits or
algorithms. For this purpose, the $\mathcal{S}$-matrix is practically useful.
This follows from the fact that \textit{L}-space superoperator algebra works
just like Dirac operator algebra. For example, consider the scenario in which
two quantum operations $\mathfrak{S}_{1}$ and $\mathfrak{S}_{2}$ act
sequentially on a quantum system. Assume that the associated $\mathcal{S}
$-matrices are given with respect to the same operator basis set $\{
\dket{\hat{E}_{\alpha}}\}
_{\alpha=0}^{N-1}$ in $\mathcal{L}_{N}$, where $N=d^{2}$ for a single-qudit
operation and $N=d^{4}$ for a two-qudit operation. Then the composite operation
$\mathfrak{S}=\mathfrak{S}_{1}\circ\mathfrak{S}_{2}$ is described by the
multiplication of \textit{L}-space superoperators
\begin{equation}
\mathcal{\hat{\hat{S}}}=\mathcal{\hat{\hat{S}}}_{1}\mathcal{\hat{\hat{S}}
}_{2}
=\sum\limits_{\alpha,\beta=0}^{N-1}\mathcal{S}_{\alpha\beta}
\dketbra{\hat{E}_{\alpha}}{\hat{E}_{\beta}},
\label{eq48}
\end{equation}
where
\begin{equation}
\mathcal{S}_{\alpha\beta}=\sum\limits_{\gamma=0}^{N-1}{\left( \mathcal{S}
{_{1}}\right) _{\alpha\gamma}\left( \mathcal{S}{_{2}}\right) _{\gamma\beta
}}. \label{eq49}
\end{equation}
The extension to the case in which a sequence of a finite number of quantum
operations is applied to the same quantum system is trivial. Equation
(\ref{eq49}) implies that the $\mathcal{S}$-matrix of the composite operation
reduces to the multiplication of the $\mathcal{S}$-matrices of the individual
operations. This makes it practically advantageous to use the $\mathcal{S}
$-matrix for the logical calculus of quantum circuits or algorithms comprised
of a sequence of elementary single- and two-qudit quantum operations.
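Equations (\ref{eq48}) and (\ref{eq49}) can be checked numerically. In the sketch below, the two operations (a bit-flip channel followed by a Hadamard gate) and the row-major vec convention are illustrative assumptions:

```python
import numpy as np

d = 2
vec = lambda A: np.asarray(A, dtype=complex).reshape(-1)

def smatrix(kraus):
    # L-space superoperator of sum_i K_i . K_i^dag in the matrix-unit basis,
    # where it coincides with the S-matrix.
    return sum(np.kron(k, k.conj()) for k in kraus)

# S2: bit-flip with probability 0.2 (acts first); S1: Hadamard gate.
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S2 = smatrix([np.sqrt(0.8) * np.eye(2, dtype=complex), np.sqrt(0.2) * X])
S1 = smatrix([H])

# Eq. (49): the composite operation is the product of the S-matrices.
S = S1 @ S2

# Cross-check by propagating a state through the two operations in turn.
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
rho1 = 0.8 * rho + 0.2 * X @ rho @ X          # bit flip acts first
rho2 = H @ rho1 @ H.conj().T                  # then the Hadamard
assert np.allclose((S @ vec(rho)).reshape(d, d), rho2)
```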
Consider next the quantum circuit or algorithm comprised of a sequence of
quantum operations, each of which does not necessarily act on the same quantum
system. This offers a general model for quantum circuits acting on large
numbers of qudits \cite{Aharanov}. To analyze
and design such a quantum circuit, we need to consider quantum operations
acting on the whole set of qudits and associated extended $\mathcal{S}
$-matrix. Such an extended $\mathcal{S}$-matrix is non-trivial, but the $\chi$-matrix to which it corresponds under the bijection is trivially obtained by taking the tensor product with the identity matrix, that is, with the $\chi$-matrix for the identity operation on the irrelevant system. If the $\chi$-matrix for the quantum operation $\mathfrak{S}$ on the relevant space is given by $\chi$, the $\chi$-matrix for the extended quantum operation is given by $\chi\otimes I$. On the other hand, the conversion formulas (\ref{eq40}) and (\ref{eq41}) extend trivially to quantum operations acting on more qudits. Therefore, we can calculate the extended $\mathcal{S}$-matrix for each quantum operation from the associated extended $\chi$-matrix $\chi\otimes I$. The $\mathcal{S}$-matrix for a sequence of operations
acting on the space of the whole quantum systems can be calculated by
multiplication of the extended $\mathcal{S}$-matrices for the individual
quantum operations.
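A minimal sketch of the extension step, with a randomly generated stand-in $\chi$-matrix and an assumed single extra, irrelevant qudit: the extended $\chi$-matrix is just a Kronecker product with the identity.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                              # operator-basis size on the relevant qudit (d = 2 assumed)
chi = rng.normal(size=(N, N))      # stand-in chi-matrix on the relevant space

# Extend the operation with an identity action on one irrelevant qudit:
# the extended chi-matrix is the tensor (Kronecker) product chi x I.  The
# extended S-matrix would then follow from the (basis-dependent) conversion
# formulas applied to chi_ext.
chi_ext = np.kron(chi, np.eye(N))

assert chi_ext.shape == (N * N, N * N)
```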
The $\mathcal{S}$-matrix analysis of quantum operations has the following potential advantage: it can deal with non-unitary operations in which mixed-state evolution occurs. Noisy quantum operations, probabilistic subroutines, measurements, and even trace-decreasing quantum filters can be treated. This is in contrast to the usual analysis based on unitary matrices, which can deal only with unitary gates in which only pure-state evolution is allowed. It thus offers a mathematical model for analyzing and designing the logical operation of a wider range of complex quantum circuits and algorithms \cite{Aharanov}.
In the above discussion, we considered two classes of applications, namely, the physical and information-theoretic analysis of quantum operations and the logical calculus of quantum operations, in which the $\chi$- and $\mathcal{S}$-matrices offer useful mathematical models. Each has its own useful applications. The present formulation offers a way to build bridges between the two classes. For example, the entangling properties of quantum circuits comprising a sequence of single- and two-qudit quantum operations acting on several qudits can be discussed. The present formulation will also help us analyze and benchmark quantum operations realized in actual devices.
\section{Conclusions}
\label{conclusions}
We have considered two matrix representations of single- and two-qudit quantum
operations defined with respect to an arbitrary operator basis, i.e., the $\chi$- and
$\mathcal{S}$-matrices. We have provided various change-of-representation
formulas for these matrices including bijections between the $\chi$- and
$\mathcal{S}$-matrices. These matrices are defined through the expansion coefficients of two operators on a doubled Hilbert space, namely, the \textit{L}-space superoperator and the Choi operator. These operators are mutually convertible through a particular bijection under which the Kronecker products of the relevant operator basis and the dyadic products of the associated state basis are interchanged. From this fact, the mutual conversion formulas between the two matrices are established as computable matrix multiplication formulas. Extension to multi-qudit quantum operations is also trivial. These matrices are useful for their own particular classes of applications, which might be interesting from a practical as well as a scientific point of view.
We have
presented possible applications of the present formulation. By using the present
formulation, an experimental identification of a quantum operation can be
reduced to determining the expectation values of a Hermitian operator basis set
on a doubled Hilbert space. This can be done by preparing several copies of an isotropic-state input or a product of isotropic states as the input.
By using the $\chi$-matrix, we can make a physical as well as a quantum
information theoretic characterization of the quantum operation. In
particular, the $\chi$-matrix is useful to discuss the entangling properties
of the quantum operation acting on the composite system, since the problem of the
separability of the quantum operation is reduced to the problem of the
separability of the $\chi$-matrix. On the other hand, the $\mathcal{S}$-matrix is useful when we discuss a typical quantum circuit comprising a sequence of single- and two-qudit quantum operations, each of which acts on different qudits. This is made possible by considering the extended $\mathcal{S}$-matrix of each quantum operation acting on the whole state space of the relevant qudits. Such extended $\mathcal{S}$-matrices can be calculated from the associated $\chi$-matrices, via the bijection, by taking the tensor product with the appropriate identity matrix. Accordingly, we can calculate the $\mathcal{S}$-matrix for a quantum circuit by multiplying the extended $\mathcal{S}$-matrices of the individual operations. This should be very useful for analyzing and designing a wide range of quantum circuits and algorithms involving non-unitary operations.
We thank Satoshi Ishizaka and Akihisa Tomita for their helpful discussions. This
work was supported by the CREST program of the Japan Science and Technology Agency.
\end{document}
\begin{document}
\title{Network resampling for estimating uncertainty}
\begin{abstract}
With network data becoming ubiquitous in many applications, many models and algorithms for network analysis have been proposed. Yet methods for providing uncertainty estimates in addition to point estimates of network parameters are much less common. While bootstrap and other resampling procedures have been an effective general tool for estimating uncertainty from i.i.d. samples, adapting them to networks is highly nontrivial. In this work, we study three different network resampling procedures for uncertainty estimation, and propose a general algorithm to construct confidence intervals for network parameters through network resampling. We also propose an algorithm for selecting the sampling fraction, which has a substantial effect on performance. We find that, unsurprisingly, no one procedure is empirically best for all tasks, but that selecting an appropriate sampling fraction substantially improves performance in many cases. We illustrate this on simulated networks and on Facebook data.
\end{abstract}
\section{Introduction}
\label{sec:intro}
With network data becoming common in many fields, from gene interactions to communications networks, much research has been done on fitting various models to networks and estimating parameters of interest. In contrast, few methods are available for evaluating uncertainty associated with these estimates, which is key to statistical inference. In classical settings, when asymptotic analysis for obtaining error estimates is intractable, various resampling schemes have been successfully developed for estimating uncertainty, such as the jackknife \cite{shao_1989_general}, the bootstrap \cite{efron_1986_bootstrap}, and subsampling or $m$-out-of-$n$ bootstrap \cite{bickel_1997_resampling,politis_subsampling_1999}.
Most of the classical resampling methods were developed for i.i.d.\ samples and are not easily transferred to the network setting, where nodes are connected by edges and have a complex dependence structure. Earlier work on bootstrap for dependent data focused on settings with a natural ordering, such as time series \cite{politis_1994_stationary} or spatial random fields \cite{bickel_2006_covariance}.
More recently, a few analogues of the bootstrap have been proposed specifically for network data. One general approach is to extract some features of the network that can be reasonably treated as i.i.d., and resample those following the classical bootstrap. For instance, under a latent space model assumption, latent node positions are assumed to be drawn i.i.d.\ from some unknown distribution, and the network model is based on these positions. Then, as long as the latent positions can be estimated from the given data, they can be resampled in the usual bootstrap fashion to generate new bootstrap network samples, as proposed in \cite{levin_bootstrapping_2019}. Another network feature that easily lends itself to this kind of approach is subgraph counts, because they are calculated from small network ``patches'' that can be extracted and then treated as i.i.d.\ for resampling purposes. This approach has been applied to estimating uncertainty in the mean degree of the graph \cite{thompson_using_2016}, and extended to bootstrapping the distribution of node degrees \cite{gel_bootstrap_2017, green_2017_bootstrapping}. Related work \cite{bhattacharyya_subsampling_2015} proposed to estimate the variance of network count statistics by bootstrapping subgraphs isomorphic to the pattern of interest. This approach works well for network quantities that can be computed from small patches, and does not require the latent space model assumption, but it does not generalize easily to other, more global inference tasks.
A more general approach, analogous to the jackknife, was proposed in \cite{lin_2020_on}, which analyzed a leave-one-node-out procedure for network data. This procedure can be applied to most network statistics, and the authors show that the variance estimates are conservative in expectation under some regularity conditions. However, for a network of size $N$, this procedure requires computing the statistic of interest on $N$ networks with $N-1$ nodes each, which can be computationally expensive.
Another class of methods works by subsampling the network, which we can think of as analogous to $m$-out-of-$n$ bootstrap, e.g., \cite{bickel_1981_asymptotic}. Inference based on a partially observed or subsampled network was first considered for computational reasons, when it is either not computationally feasible to conduct inference on the whole network, or only part of the network was observed in the first place. Most commonly, subsampling is implemented through node sampling, where we first select a fraction of all nodes at random and then observe the induced subgraph, but there are alternative subsampling strategies, to be discussed below.
Examples in this line of work include subsampling under a sparse graphon model \cite{lunde_2019_subsampling}, estimating the number of connected components in an unknown parent graph from a subsampled subgraph \cite{frank_estimation_1978,klusowski_estimating_2020}, and estimating the degree distribution of a network from a subgraph generated by node sampling or a random walk \cite{zhang_2015_estimating}. Similar methods have also been employed for cross-validation on networks, with subsampling either nodes \cite{chen_network_2018} or edges \cite{li_network_2020} multiple times.
Our goal in this paper is to study general subsampling schemes that can be used to estimate uncertainty in most network statistics with minimal model assumptions. We will study three different subsampling procedures, discussed in detail in the next section: node sampling, row sampling, and node pair sampling. Our goal is to understand how the choice of subsampling procedure affects performance for different inference tasks, and how each subsampling method performs under different models. One of our main findings is that choosing the fraction of the network to sample has significant implications for performance. To this end, we propose a data-driven procedure to select the resampling fraction and show that it performs close to optimally.
The paper is organized as follows: In Section \ref{sec:methods}, we present the proposed resampling procedures and the data-driven method to choose the subsampling fraction. In Section \ref{sec:results}, we compare numerical performance of different subsampling methods for different inference tasks. Section \ref{sec:data} presents an application of our methods to triangle density estimation in Facebook networks, and Section \ref{sec:disc} concludes with discussion.
\section{Network subsampling methods}
\label{sec:methods}
We start by fixing notation. Let $G=(V,E)$ be the observed network, with node set $V$, edge set $E$, and $n = |V|$ nodes. We represent $G$ using its $n\times n$ adjacency matrix $A$, where $A_{ij}=1$ if there exists an edge from node $i$ to node $j$, i.e., $(i,j)\in E$, and $A_{ij}=0$ otherwise. For the purposes of this paper, we focus on binary undirected networks. Generalization to weighted undirected networks should be straightforward, as the resampling mechanisms would not change. Directed networks will, generally speaking, require a different approach: in the undirected setting a single row of the adjacency matrix contains all the information about the corresponding node, whereas for directed networks this is not the case.
In classical statistics, we have a well-established standard bootstrap procedure \cite{efron_1994_introduction}: given an i.i.d.\ sample $\mathcal{X} = \{X_1,\dots,X_n\}$ from an underlying distribution $F$, we can construct a confidence set for a statistic of interest $T(F)$ by estimating the sampling distribution of $\hat{T}_n(\mathcal{X})$ and, typically, using its quantiles to construct a confidence interval for $T(F)$.
By drawing new samples $\mathcal{X}_1,\dots,\mathcal{X}_B$ with replacement from $\mathcal{X}$, or in other words sampling i.i.d.\ from the empirical distribution $\hat{F}_n$, we can obtain the empirical distribution of $\hat{T}_n(\mathcal{X}_b)$, $b = 1, \dots, B$, to approximate the distribution of $\hat{T}_n(\mathcal{X})$.
Thinking about applying this procedure to a network immediately reveals a number of questions: what are the units of resampling, nodes or edges? What does it mean to sample a node or an edge with replacement? Sampling without replacement seems more amenable to working with the resulting induced graphs, and thus we turn to the $m$-out-of-$n$ bootstrap, typically done without replacement, which provides asymptotically consistent estimation not only for i.i.d.\ but also for weakly dependent stationary samples \cite{politis_subsampling_1999}. Instead of drawing samples of size $n$ with replacement, we will now draw new samples of size $m<n$ without replacement.
For networks, we argue that the best unit to resample and compare across procedures is {\em node pairs} (whether connected by an edge or not). Resampling algorithms can be constructed in different ways and operate either on nodes or on node pairs, but in the end it is the number of node pairs that determines the amount of useful information we have available. Resampling node pairs with replacement does not really work, as it is not clear how to incorporate duplicated node pairs. Thus we will focus on network resampling without replacement, similar to the $m$-out-of-$n$ bootstrap approach. The fraction of resampled node pairs, denoted by $q$, turns out to be an important tuning parameter, and we will propose a data-driven way of choosing it. First, we describe the three resampling procedures we study.
\subsection{Subsampling procedures}
\label{subsec:subsampling}
Next, we describe in detail the three subsampling procedures we study. While there are other subsampling strategies and some may have advantages in specific applications, we focus on these three because they are popular and general.
\begin{figure}
\caption{Different subsampling methods. In all cases, the fraction of node pairs sampled is $q \approx 0.5$ (exact $q=0.5$ is not always possible with only 10 nodes). }
\label{fig:1a}
\label{fig:1b}
\label{fig:1c}
\label{sampling}
\end{figure}
\subsubsection*{Subsampling by node sampling}
This is perhaps the most common subsampling scheme, also known as $p$-sampling or induced subgraph sampling. A subset of nodes $\tilde V$ is first sampled from the node set $V$ independently at random, through $n$ independent Bernoulli trials with success probability $p$. We then observe the subgraph induced by these nodes, which consists of all edges between the nodes in $\tilde V$, that is, $\{(i,j) \in E : i \in \tilde V, j \in \tilde V\}$. Under this scheme, the expected fraction of node pairs sampled, i.e., of entries of the adjacency matrix observed, is $q = p^2$.
Sampling nodes is arguably the most intuitive way to subsample a network, as it is analogous to first selecting the objects of interest and then observing the relationships among them, a common way of collecting network data in the real world. The intuition behind node sampling is that the induced network should inherit the global structure of its parent network, such as communities. A variation where inference is on node-level statistics, such as node degree, will work well if these statistics are subsampled directly with the nodes themselves, rather than recomputed from the induced subgraph \cite{kolaczyk_sampling_2009}. At the same time, since each subsampled network will be on a different set of nodes, it is less clear how well this method can work for more general inferential tasks. Recently, node sampling was shown to consistently estimate the distribution of some network statistics such as triangle density \cite{lunde_2019_subsampling}, under a sparse graphon model satisfying some regularity conditions. But this result only holds asymptotically, and the question of how to choose $p$ for finite samples was not considered.
\subsubsection*{Subsampling by row sampling}
This subsampling method was first proposed for cross-validation on networks in \cite{chen_network_2018}. In row sampling, one first chooses a subset of nodes $\Tilde{V}$ by independent Bernoulli trials with success probability $p$; then edges between nodes in $\Tilde{V}$ and {\em all} nodes in $V$ are observed. This is equivalent to sampling whole rows of the adjacency matrix. This way of sampling is related to star sampling, a common model for social network data collected through surveys, where first a set of nodes is sampled and then each sampled node reports all of its connections; this is especially relevant in situations where we may not know all of the nodes in $V$. We assume that both 1s and 0s are reported accurately, meaning that a 0 represents an absent edge rather than a connection whose status is unknown. For undirected graphs with symmetric adjacency matrices, row sampling is equivalent to masking out the square submatrix corresponding to the unsampled nodes, so that a fraction $q = 1 - (1-p)^2$ of all node pairs is observed.
\subsubsection*{Subsampling by node pair sampling}
Finally, a network can be subsampled by directly selecting a subset of node pairs to observe. This sampling method was used in \cite{li_network_2020} for cross-validation on networks, and shown to be superior to row sampling in most of the cross-validation tasks considered. A subset of node pairs $(i,j)$ is selected from $V\times V$ through independent Bernoulli trials with success probability $p$, resulting in a fraction $q=p$ of the entries in the adjacency matrix being observed. Again, zeros are treated as truly absent edges, and for undirected networks the sampling is adjusted to draw only from entries with $i < j$ and fill in the symmetric matrix accordingly.
In \cite{li_network_2020}, noisy matrix completion was first applied to create a low-rank approximation to the full matrix before proceeding with cross-validation; in principle, we have a choice of whether to use the subsampled matrix or its completed version for downstream tasks.
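The three subsampling schemes can be sketched as follows (a minimal illustration on a random symmetric adjacency matrix; the sampling probabilities are illustrative choices matching the stated pair fractions):

```python
import numpy as np

def node_sampling(A, p, rng):
    """Induced subgraph on a Bernoulli(p) node sample; E[q] = p^2."""
    keep = rng.random(A.shape[0]) < p
    return A[np.ix_(keep, keep)]

def row_sampling(A, p, rng):
    """Observe all pairs touching a Bernoulli(p) node sample; E[q] = 1-(1-p)^2."""
    keep = rng.random(A.shape[0]) < p
    mask = keep[:, None] | keep[None, :]   # pair (i,j) observed if i or j sampled
    return np.where(mask, A, 0), mask

def pair_sampling(A, q, rng):
    """Observe each node pair i < j independently with probability q."""
    n = A.shape[0]
    mask = np.triu(rng.random((n, n)) < q, 1)
    mask = mask | mask.T                   # symmetrize the observed pattern
    return np.where(mask, A, 0), mask

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.2).astype(int)
A = np.triu(A, 1); A = A + A.T             # symmetric, zero diagonal

sub = node_sampling(A, 0.7, rng)           # p = sqrt(0.5) ~ 0.7 gives E[q] ~ 0.5
obs_row, _ = row_sampling(A, 0.3, rng)     # p = 0.3 gives E[q] = 1-(0.7)^2 = 0.51
obs_pair, _ = pair_sampling(A, 0.5, rng)   # q = 0.5 directly
```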
\subsection{Constructing confidence intervals by resampling}
\label{subsec:bootstrap}
Before we proceed to the data-driven method of choosing the settings for resampling, we summarize the algorithm we employ for constructing confidence intervals for network statistics based on a given resampling method with a given sampling fraction $q$. This is a general bootstrap-style algorithm that can be applied to any network statistic $T$ computable on a subgraph.
\noindent {\bf Algorithm 1: constructing a bootstrap confidence interval. } \\
Input: a graph $G$, a network statistic $T(G)$, a subsampling method, a fraction $q$, and the number of bootstrap samples $B$.
\begin{enumerate}
\item For $b = 1, \dots, B$
\begin{enumerate}
\item Randomly subsample subgraph $G_b$, using the chosen method and fraction $q$.
\item Calculate the estimate $\hat{T}_b = \hat{T}(G_b)$.
\end{enumerate}
\item Sort the $B$ estimates, $\hat{T}_{(1)} \le \dots \le \hat{T}_{(B)}$, and construct a $100(1-\alpha)\%$ confidence interval for $T(G)$ as $[\hat{T}_{(l)},\hat{T}_{(u)}]$, where $l = \lfloor \frac{\alpha}{2} B \rfloor $ and $u = \lceil (1-\frac{\alpha}{2}) B \rceil$.
\end{enumerate}
We give this version of bootstrap as the most commonly used one, but any other version of bootstrap based on the empirical cumulative distribution function (symmetrized quantiles, etc) can be equally well applied.
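Algorithm 1 can be sketched as follows (with `np.quantile` standing in for the explicit order-statistic indices, and with edge density under node sampling as an illustrative statistic and scheme):

```python
import numpy as np

def bootstrap_ci(A, stat, subsample, q, B=200, alpha=0.05, seed=0):
    """Algorithm 1 sketch: percentile CI for T(G) over B subsampled graphs.

    `stat` maps an adjacency matrix to a scalar; `subsample(A, q, rng)` is any
    of the three subsampling schemes."""
    rng = np.random.default_rng(seed)
    est = [stat(subsample(A, q, rng)) for _ in range(B)]
    return tuple(np.quantile(est, [alpha / 2, 1 - alpha / 2]))

def edge_density(M):
    n = M.shape[0]
    return M[np.triu_indices(n, 1)].mean() if n > 1 else 0.0

def node_subsample(M, q, rng):
    keep = rng.random(M.shape[0]) < np.sqrt(q)   # p = sqrt(q), so E[pair frac] = q
    return M[np.ix_(keep, keep)]

rng = np.random.default_rng(1)
A = (rng.random((100, 100)) < 0.1).astype(int)
A = np.triu(A, 1); A = A + A.T
lo, hi = bootstrap_ci(A, edge_density, node_subsample, q=0.5)
assert 0.0 <= lo <= hi <= 1.0
```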
\subsection{Choosing the subsampling fraction $q$}
For any method of network subsampling, we have to choose the sampling fraction; for consistency across different methods, we focus on the choice of the node pair fraction $q$, which can be converted to $p$ for node or row sampling. In the classical bootstrap analogue, the $m$-out-of-$n$ bootstrap, consistency results have been obtained for $m\rightarrow \infty$ and $m/n\rightarrow 0$ as $n\rightarrow \infty$. These do not offer us much practical guidance for how to choose $m$ for a given finite sample size $n$. The situation is similar for network bootstrap, with no practical rules for choosing $q$ implied by consistency analysis.
The iterated bootstrap has been commonly used to calibrate bootstrap results since it was proposed and developed in the 1980s \cite{hall_1986_bootstrap, bera_1987_prepivoting, hall_1988_bootstrap}.
An especially relevant algorithm for our purposes is the double bootstrap approach, originally proposed to improve coverage of bootstrap intervals \cite{martin_bootstrap_1990}. A similar double bootstrap approach was proposed by \cite{lee_class_1999} for $m$-out-of-$n$ bootstrap, to correct a discrepancy between the nominal and the actual coverage for bootstrap confidence intervals due to a different sample size $m$.
A generic double bootstrap algorithm proceeds as follows. Given the original i.i.d.\ sample $\mathcal{X} = \{X_1,\dots,X_n\}$ and an estimator $\hat\theta$ based on $\mathcal{X}$, consider a size $n$ bootstrap sample $\mathcal{X}^*$, drawn from $\mathcal{X}$ with replacement, and the corresponding estimator $\hat{\theta}^*$. If the distribution of $\hat{\theta}$ can be approximated by the distribution of $\hat{\theta}^*$ conditional on $\mathcal{X}$, a confidence interval of nominal level $1-\alpha$ can be constructed as $I(\alpha) = [\hat{t}_{\alpha/2},\hat{t}_{1-(\alpha/2)}]$, with $\hat{t}_{\alpha}$ defined as $\sup\{t:\mathbf{P}(\hat{\theta}^*\leq t\mid \mathcal{X})\leq \alpha\}$. This confidence interval typically has correct coverage up to $O(n^{-1})$, and double bootstrap proposes to improve this bound by using $I(\beta_{\alpha})$ instead, where $\beta_{\alpha}$ is defined as the solution to $\mathbf{P}(\theta \in I(\beta_{\alpha}\mid\mathcal{X})) = 1-\alpha$ and can be approximated by $\hat{\beta}_{\alpha}$, the solution to $\mathbf{P}(\hat{\theta} \in I(\hat{\beta}_{\alpha}\mid\mathcal{X}^*)\mid\mathcal{X}) = 1-\alpha$.
In practice, this is implemented by drawing multiple size-$n$ samples $\mathcal{X}^{**}$ from each $\mathcal{X}^{*}$ with replacement, and estimating $\mathbf{P}(\hat{\theta} \in I(\beta_j\mid\mathcal{X}^*)\mid\mathcal{X})$ as the proportion of confidence intervals based on $(1-\beta_j)$ percentiles of $\mathcal{X}^{**}$ that cover $\hat{\theta}$, for a set of candidate values $\beta_j$. We then choose $\hat{\beta}_{\alpha} = \beta_j$ such that $\mathbf{P}(\hat{\theta} \in I(\beta_j\mid\mathcal{X}^*)\mid\mathcal{X})$ is the closest to $1-\alpha$ among all candidate $\beta_j$'s.
Since different choices of the subsampling fraction $q$ lead to different levels of coverage, we propose an analogous double bootstrap method for network resampling. Compared to the i.i.d.\ case, instead of using size-$n$ resamples drawn with replacement, we generate resamples using the network subsampling methods described above, and instead of choosing an appropriate $\alpha$, we aim to choose an appropriate $q$. The algorithm is detailed as follows:
\noindent {\bf Algorithm 2: choosing the sampling fraction $q$. } \\
Input: a graph $G$, a network statistic $T(G)$, a subsampling method, a set of candidate fractions $q_1, \dots, q_J$, and the number of bootstrap samples $B$.
For $j = 1, \dots, J$,
\begin{enumerate}
\item For $b = 1,\dots,B$,
\begin{enumerate}
\item Split all nodes randomly into two sets $\mathcal{V}_1^b$, $\mathcal{V}_2^b$.
\item On the subgraph induced by $\mathcal{V}_1^b$, calculate the estimate $\hat{T}_{b,j}^{(1)}$.
\item On the subgraph induced by $\mathcal{V}_2^b$, run Algorithm 1 with $q = q_j$ to construct a confidence interval $R_{b,j}$ for $T_{b,j}^{(2)}$.
\end{enumerate}
\item Calculate the empirical coverage rate $\pi_j = \frac{1}{B}\sum_{b=1}^{B}\mathbf{1}\{\hat{T}_{b,j}^{(1)}\in R_{b,j}\}$.
\end{enumerate}
Output: $q_{\hat j}$, where $\hat j = \arg\min_j (\pi_j-(1-\alpha))^2$.
One key difference from the i.i.d.\ case is that we estimate empirical coverage by checking if the confidence interval constructed on each subsample covers $\hat{T}_{b,j}^{(1)}$,
not $\hat{T}(G)$, the estimate on the original observed graph $G$. This is because with network subsampling, we cannot preserve the original sample size $n$, and if we are going to compare network statistics on two subsamples, we need to match the graph sizes (in this case $n/2$).
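A compact sketch of Algorithm 2, again with edge density and node sampling as illustrative choices; the half-splits and the inner percentile-interval construction follow the steps above:

```python
import numpy as np

def choose_q(A, stat, subsample, qs, B=30, alpha=0.05, seed=0):
    """Algorithm 2 sketch: pick the candidate q whose empirical coverage over
    B random half-splits of the nodes is closest to the nominal 1 - alpha."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    cover = np.zeros(len(qs))
    for j, q in enumerate(qs):
        hits = 0
        for _ in range(B):
            perm = rng.permutation(n)
            V1, V2 = perm[: n // 2], perm[n // 2:]
            t1 = stat(A[np.ix_(V1, V1)])          # estimate on the first half
            est = [stat(subsample(A[np.ix_(V2, V2)], q, rng)) for _ in range(B)]
            lo, hi = np.quantile(est, [alpha / 2, 1 - alpha / 2])
            hits += lo <= t1 <= hi
        cover[j] = hits / B
    return qs[int(np.argmin((cover - (1 - alpha)) ** 2))]

def density(M):
    n = M.shape[0]
    return M[np.triu_indices(n, 1)].mean() if n > 1 else 0.0

def node_sub(M, q, rng):
    keep = rng.random(M.shape[0]) < np.sqrt(q)
    return M[np.ix_(keep, keep)]

rng = np.random.default_rng(2)
A = (rng.random((80, 80)) < 0.15).astype(int)
A = np.triu(A, 1); A = A + A.T
q_hat = choose_q(A, density, node_sub, [0.3, 0.5, 0.7])
assert q_hat in (0.3, 0.5, 0.7)
```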
\section{Empirical results}
\label{sec:results}
We evaluate the three different subsampling techniques and our method for choosing the subsampling fraction on four distinct tasks. Three focus on estimating uncertainty, in (1) the normalized triangle density, a global network summary statistic; (2) the number of communities under the stochastic block model, a network model selection parameter; and (3) coefficients estimated in regression with network cohesion, regression parameters estimated with the use of network information. Task (4) is to choose a tuning parameter for regression with network cohesion with a lasso penalty, and evaluate subsampling as a tuning method in this setting.
The networks for all tasks will be generated from the stochastic block model (SBM). To briefly review, the SBM assigns each node $i$ to one of $K$ communities; the node labels $c_i \in \{1, \dots, K\}$, for $i = 1, \dots, n$, are sometimes viewed as randomly generated from a multinomial distribution with given probabilities for each community, but for the purposes of this evaluation we treat them as fixed, assigning a fixed number of nodes $n_k$, $k = 1, \dots, K$, to each community $k$. Edges are then generated as independent Bernoulli variables, with probabilities of success determined by the communities of their incident nodes, $P(A_{ij}=1)=B_{c_i c_j}$, where $B$ is a $K \times K$ symmetric probability matrix. We will investigate the effects of varying the number of communities, the number of nodes in each community, the expected overall edge density $\rho$, and the ratio of probabilities of within- and between-community edges $t$, to be defined below.
\subsection{Normalized Triangle Density}
Count statistics are functions of counts of subgraphs in network, and are commonly viewed as key statistical summaries of a network \cite{bhattacharyya_subsampling_2015, bickel_method_2011}, analogous to moments for i.i.d.\ data. Here we will examine a simple and commonly studied statistic, the density of triangles. A triangle subgraph is three vertices all connected to each other. Triangle density is frequently studied because it is related to the concept of transitivity, a key property of social networks where two nodes that have a ``friend'' in common are more likely to be connected themselves. We will apply resampling to construct confidence intervals for the normalized triangle density for a given network.
\begin{Def}[edge density]
Let $G$ be a graph on $n$ vertices with adjacency matrix $A$. The edge density is defined as
$$\rho = \frac{1}{\binom{n}{2}}\sum_{i<j}A_{ij} . $$
\end{Def}
\begin{Def}[normalized triangle density]
Let $G$ be a graph on $n$ vertices with adjacency matrix $A$. The normalized triangle density is defined as
$$T = \rho^{-3}\,\frac{1}{\binom{n}{3}}\sum_{i<j<k}A_{ij}A_{jk}A_{ik} . $$
\end{Def}
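These two definitions translate directly into code; the only non-obvious step is that $\mathrm{tr}(A^3)$ counts each triangle six times (once per ordered traversal):

```python
import numpy as np
from math import comb

def normalized_triangle_density(A):
    """T = rho^{-3} * (# triangles) / C(n,3); tr(A^3) counts each triangle 6 times."""
    n = A.shape[0]
    rho = A[np.triu_indices(n, 1)].mean()
    triangles = np.trace(A @ A @ A) / 6
    return triangles / comb(n, 3) / rho**3

# Sanity check: on the complete graph K5, rho = 1 and every triple of vertices
# forms a triangle, so the normalized triangle density is exactly 1.
K5 = np.ones((5, 5), dtype=int) - np.eye(5, dtype=int)
assert np.isclose(normalized_triangle_density(K5), 1.0)
```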
We generate the networks from the SBM models with number of communities $K$ set to either 1 (no communities, the Erd\"os-R\'enyi graph) or 3, with equal community sizes. We consider two values of $n$, 300 and 600. The matrix of probabilities $B$ for the SBM is defined by, for all $k, l = 1, \dots, K$,
$$
B_{kl} = \begin{cases} \rho \gamma_{1}& k = l , \\
\rho \gamma_{2} & k \neq l . \end{cases}
$$
The difficulty of community detection in this setting is controlled by the edge density $\rho$ and the ratio $t = \gamma_{1}/\gamma_{2}$. We set $t=5$, and vary
the expected edge density $\rho$ from 0.01 to 0.1, with higher $\rho$ corresponding to more information available for community detection.
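This simulation setup can be sketched as follows; the normalization of $\gamma_1$ and $\gamma_2$ so that the expected density equals $\rho$ is an assumed convention, chosen to be consistent with the parametrization above:

```python
import numpy as np

def sbm(n, K, rho, t, rng):
    """Sample an SBM with K equal-sized communities (assumes K divides n).

    Within- and between-community probabilities are rho*g1 and rho*g2 with
    g1/g2 = t, and g1, g2 chosen so the expected edge density is (approximately)
    rho -- an illustrative convention, not necessarily the authors' exact one."""
    labels = np.repeat(np.arange(K), n // K)
    w = 1.0 / K                      # approximate fraction of within-community pairs
    g2 = 1.0 / (w * t + 1.0 - w)
    g1 = t * g2
    B = np.full((K, K), rho * g2)
    np.fill_diagonal(B, rho * g1)
    P = B[np.ix_(labels, labels)]    # n x n matrix of edge probabilities
    A = np.triu(rng.random((n, n)) < P, 1).astype(int)
    return A + A.T, labels

rng = np.random.default_rng(0)
A, z = sbm(300, 3, 0.05, 5, rng)
assert np.all(A == A.T)
```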
For a fair comparison, we match the proportion of adjacency matrix observed, $q$, in all subsampling schemes, and vary $q$ from 0.2 to 0.8. Figures \ref{Triden1} and \ref{Triden2} show the widths and coverage rates for the confidence intervals constructed by the three subsampling procedures over the full range of $q$, as well as the value of $q$ chosen by our proposed double bootstrap procedure.
There are several general conclusions we can draw from the results in Figures~\ref{Triden1} and \ref{Triden2}.
(1) The confidence intervals generally get shorter as $q$ increases, since the subsampled graphs overlap more, and this dependence reduces variance. The coverage rate correspondingly goes down as $q$ goes up. Still, all resampling procedures achieve nominal or higher coverage for $q \le 0.5$. The one exception to this is node pair sampling at the lowest value of $q$ with the lowest edge density, where most subsampled graphs are too sparse to have any triangles. In this case the procedure fails, producing estimated zero triangle density and confidence intervals with length and coverage both equal to zero.
(2) For a given value of $q$, a larger effective sample size, either from higher density $\rho$ or from a larger network size $n$, leads to shorter intervals, as one would expect.
(3) Node sampling produces shorter intervals than the other two schemes, while maintaining similar coverage rates for smaller values of $q$. The coverage rate falls faster for node sampling for larger values of $q$ outside the optimal range.
(4) The double bootstrap procedure selects a value of $q$ that achieves a good balance of coverage rate and confidence interval width, maintaining close to nominal coverage without being overly conservative. It does get misled by the pathological sparse case with no triangles, highlighting the need to check that there is enough data to preserve the feature of interest (triangles in this case) in the subsampled graphs before deploying resampling algorithms.
Figure \ref{Triden3} shows the value of $q$ chosen by the double bootstrap procedure as a function of graph size and edge density; both appear to not have much effect on the value of $q$ within the range considered.
This suggests that, as long as the structure of a network is preserved, a reasonable value of $q$ could be chosen on a smaller subsampled network, resulting in computational savings with little change in accuracy. We also see that the $q$ chosen for node sampling is always smaller than that for the other two subsampling methods, so node sampling can be more computationally efficient, as it yields smaller subsampled graphs.
\begin{figure}
\caption{Confidence intervals width (left) and coverage (right) for the three different subsampling schemes on the Erd\"os-R\'enyi graphs with various values of $n$ and edge density $\rho$. The value of $q$ chosen by our algorithm is shown as a red dot.}
\label{Triden1}
\end{figure}
\begin{figure}
\caption{Confidence intervals width (left) and coverage (right) for the three different subsampling schemes on SBM graphs with $K=3$ equal-sized communities, and various values of $n$ and edge density $\rho$. The value of $q$ chosen by our algorithm is shown as a red dot.}
\label{Triden2}
\end{figure}
\begin{figure}
\caption{The optimal value $q$ chosen by double bootstrap, as a function of edge density $\rho$ (left) and number of nodes $n$ (right). }
\label{Triden3}
\end{figure}
\FloatBarrier
\subsection{Number of Communities}
Finding communities in a network often helps understand and interpret the network patterns, and the number of communities $K$ is required as input for most community detection algorithms \cite{bhattacharyya_2014_community, fortunato_2010_community}. A number of methods have been proposed to estimate $K$: as a function of the observed network's spectrum \cite{le_estimating_2015}, through sequential hypothesis testing \cite{bickel_hypothesis_2016}, or subsampling \cite{li_network_2020,chen_network_2018}. Subsampling has been shown to be one of the most reliable methods to estimate $K$, but little attention has been paid to choosing the subsampling fraction and understanding its effects. Here we investigate the performance of the three subsampling schemes and of the choice of subsampling fraction on the task of understanding uncertainty in $K$, going beyond the earlier focus on point estimation. Again, we match the proportion of the adjacency matrix observed, $q$, across all subsampling schemes.
For this task, we generate networks from an SBM with $K=3$ equal-sized communities. We vary the number of nodes $n$, the expected edge density $\rho$, and the ratio of within and between communities edge probabilities $t$, which together control the problem difficulty.
For subgraphs generated by node sampling and row sampling, we estimate the number of communities using the spectral method based on the Bethe-Hessian matrix \cite{le_estimating_2015, saade_matrix_2016}. For a discrete variable like $K$, confidence intervals make little sense, and thus we look at their bootstrap distributions instead. These distributions are not symmetric, and rarely overestimate the number of communities.
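As an illustration of the spectral estimator, the sketch below counts negative eigenvalues of the Bethe-Hessian $H(r) = (r^2 - 1)I - rA + D$. The default tuning $r = \sqrt{\bar d}$ (square root of the average degree) is one common variant from the spectral literature and is our assumption here, not necessarily the exact choice of \cite{le_estimating_2015}.

```python
import numpy as np

def estimate_k_bethe_hessian(A, r=None):
    """Estimate the number of communities as the number of negative
    eigenvalues of the Bethe-Hessian H(r) = (r^2 - 1) I - r A + D.
    Default r = sqrt(average degree) is an assumed common tuning."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)                       # degree vector
    if r is None:
        r = np.sqrt(d.mean())
    H = (r * r - 1.0) * np.eye(len(A)) - r * A + np.diag(d)
    return int(np.sum(np.linalg.eigvalsh(H) < 0))
```

For example, on a graph made of disjoint cliques, each clique contributes one strongly negative eigenvalue, so the estimator returns the number of cliques.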
For subgraphs generated from node pair sampling, there is no complete adjacency matrix to compute the Bethe-Hessian for, and computing it with missing values replaced by zeros clearly introduces additional noise. We instead estimate the number of communities from node-pair samples using the network cross-validation approach \cite{li_network_2020}, first applying low-rank matrix completion with values of rank $K$ ranging from 1 to 6, and choosing the best $K$ by maximizing the AUC score for reconstructing the missing edges.
Figures \ref{Community1} and \ref{Community2} show the results under different subsampling schemes. Since $K$ is discrete, instead of considering coverage rate, we compare methods by the proportion of correctly estimated $K$.
It is clear that whenever the community estimation problem itself is hard (the lowest values of $\rho$, $t$, and $n$), the number of communities is also hard to estimate. In these hardest cases, $K$ tends to be severely underestimated by node and row sampling, and is highly variable for node pair sampling, and there is no good choice of $q$. In all other scenarios, our proposed algorithm chooses a good value of $q$, balancing the bias and variability of $\hat{K}$, maintaining the empirical coverage rate above 90\% and providing a set of candidate rank estimates.
\begin{figure}
\caption{Simulation results for SBMs with $K = 3$ and $n = 300$.}
\label{Community1}
\end{figure}
\begin{figure}
\caption{Simulation results for SBMs with $K = 3$ and $n = 600$.}
\label{Community2}
\end{figure}
\subsection{Regression with node cohesion}
In many network applications, we often observe covariates associated with nodes. For example, in a school friendship network survey, one may also have the students' demographic information, grades, family background, and so on \cite{addhealth_2018}. In such settings, the network itself is often used as an aid in answering the main question of interest, rather than the primary object of analysis. For instance, a network cohesion penalty was proposed as a tool to improve linear regression when observations are connected by a network \cite{li_prediction_2019}. The authors consider the predictive model
$$Y=\alpha+ X\beta+\epsilon,$$ where $X\in \mathbb{R}^{n\times p}$, $Y\in \mathbb{R}^n$, and $\beta \in \mathbb{R}^p$ are the usual matrix of predictors, response vector, and regression coefficients, respectively, and $\alpha\in \mathbb{R}^n$ is the individual effect for each node associated with the network. They proposed fitting this model by minimizing the objective function
\begin{equation}
\|Y-\alpha-X\beta\|_2^2+\lambda_1\alpha^T L\alpha,
\label{eqn:rncreg}
\end{equation}
where $\lambda_1$ is a tuning parameter, and $L = D-A$ is the Laplacian of the network, with $D = \text{diag}(d_1,\dots,d_n)$ the degree matrix. The Laplacian-based network cohesion penalty has the effect of shrinking individual effects of connected nodes towards each other, ensuring more similar behavior for neighbors, and improving prediction \cite{li_prediction_2019}.
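Since the penalized objective is quadratic in $(\alpha, \beta)$, it can be fitted with a single linear solve of the stacked normal equations. The sketch below uses the squared-error form of the objective and assumes the penalized system is nonsingular (e.g. connected graph and no intercept column in $X$); it is an illustration, not the authors' implementation.

```python
import numpy as np

def cohesion_regression(Y, X, A, lam):
    """Solve min_{alpha,beta} ||Y - alpha - X beta||^2 + lam * alpha' L alpha
    via the stacked normal equations
        [[I + lam*L, X   ],  [alpha]   [Y   ]
         [X',        X'X ]]  [beta ] = [X'Y]."""
    n, p = X.shape
    L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian D - A
    top = np.hstack([np.eye(n) + lam * L, X])
    bot = np.hstack([X.T, X.T @ X])
    theta = np.linalg.solve(np.vstack([top, bot]),
                            np.concatenate([Y, X.T @ Y]))
    return theta[:n], theta[n:]                    # (alpha_hat, beta_hat)
```

Setting the gradient in $\alpha$ to zero gives $(I+\lambda L)\alpha + X\beta = Y$, and in $\beta$ gives $X^T\alpha + X^TX\beta = X^TY$, which is exactly the stacked system above.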
This method provides a point estimate of the coefficients $\beta$, but no measure of uncertainty, which makes interpretation difficult. Our subsampling methods can add a measure of uncertainty to point estimates $\hat{\beta}$. Since different subsampling methods will lead to different numbers of remaining nodes and thus different sample sizes for regression, we only replace the penalty term in \eqref{eqn:rncreg} with $\lambda_1\alpha^T \tilde{L}\alpha$, where $\tilde{L}$ is the corresponding Laplacian of the subsampled graph, and retain the full size $n$ sample of $Y$ and $X$ for regression, to isolate the uncertainty associated with the network penalty. Alternatively, one could resample $(X,Y)$ pairs together with the network to assess uncertainty overall. If we view the observed graph $A$ as a realization from a distribution $F$, then the subsampled graphs under different subsampling methods can also be considered as a smaller sample from the same distribution $F$, and the estimation based on these smaller samples should still be consistent.
In addition to the three subsampling methods, we also include a naive bootstrap method. Let $A$ be the original adjacency matrix with $n$ nodes. Sample $n$ nodes with replacement, obtaining $n_1, n_2, \dots, n_n$, and create a new adjacency matrix $\tilde{A}$ by setting $\tilde{A}_{ij} = 1$ if $A_{n_i n_j}=1$ or $n_i = n_j$, otherwise, $\tilde{A}_{ij}=0$. Note that with this method we cannot isolate the network as a source of uncertainty, and thus the uncertainty assessment will include the usual bootstrap sample variability of regression coefficients.
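The naive bootstrap described above can be sketched in a few lines; note that the stated rule puts ones on the diagonal of $\tilde{A}$, since $n_i = n_i$.

```python
import random

def naive_bootstrap_adjacency(A, rng=random):
    """Naive bootstrap of a 0/1 adjacency matrix (list of lists):
    draw n node indices with replacement and set
        A~[i][j] = 1  if A[n_i][n_j] == 1 or n_i == n_j,  else 0.
    The rule gives A~[i][i] = 1 on the diagonal, since n_i == n_i."""
    n = len(A)
    idx = [rng.randrange(n) for _ in range(n)]
    return [[1 if (A[idx[i]][idx[j]] == 1 or idx[i] == idx[j]) else 0
             for j in range(n)]
            for i in range(n)]
```

By construction $\tilde{A}$ is symmetric whenever $A$ is, and repeated nodes induce duplicated rows and columns, which is the source of the extra sampling variability mentioned above.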
We generate the adjacency matrix $A$ as an SBM with $K=3$ communities, with 200 nodes each. The individual effect $\alpha$ is generated from the normal distribution $N(c_k, \sigma_{\alpha}^2)$, where $c_k$ are $-1$, $0$, and $1$ for nodes from communities $1$, $2$, and $3$, respectively. We vary the ratio of within and between edge probabilities $t$ and the standard deviation $\sigma_\alpha$, to obtain different signal-to-noise ratios. Regression coefficients $\beta_j$'s are drawn from $N(1,1)$ independently, and each response $y_i$ is drawn from $N(\alpha_i + \beta^T x_i,1)$. Figures \ref{RNC1}, \ref{RNC2} and \ref{RNC3} show the results from this simulation.
Once again, as $q$ increases, the confidence intervals get narrower as subgraphs become more similar to the full network. However, for node and row sampling the coverage is also lower for the smallest value of $q=0.1$, since the variance of bootstrap estimates of $\beta$ is larger for smaller $q$,
and the center of the constructed confidence interval gets further from the true $\beta$. This may be caused by the fact that only a small fraction of nodes contribute to the penalty term. Node pair sampling is different: it gives better performance in terms of mean squared error for nearly all choices of $q$, but since its confidence intervals are much narrower than those of node and row sampling, their coverage rate is very poor.
All subsampling methods are anti-conservative and can only exceed the nominal coverage rate with small $q$. One possible explanation is that in a regression setting, the impact of node covariates dominates any potential impact of the network. In all cases, Algorithm 2 chooses $q=0.1$, which is either the only choice of $q$ that exceeds the nominal coverage rate or the choice of $q$ that gives a coverage rate close to the nominal coverage rate.
\begin{figure}
\caption{Resampling performance as a function of $\sigma_\alpha$. Coverage rate (left), widths of confidence intervals along the longest and shortest directions (middle), mean squared error of the mean of resampling estimates of $\beta$ (right). }
\label{RNC1}
\end{figure}
\begin{figure}
\caption{Resampling performance as a function of $t$. From left to right: coverage rate, widths of the confidence interval along the longest and shortest direction, mean squared error of the mean of resampling estimates of $\beta$.}
\label{RNC2}
\end{figure}
\begin{figure}
\caption{Resampling performance as a function of $\rho$. From left to right: coverage rate, widths of the confidence interval along the longest and shortest direction, mean squared error of the mean of resampling estimates of $\beta$.}
\label{RNC3}
\end{figure}
\subsection{Regression with node cohesion and variable selection}
We now consider the case when the dimension of node covariates is larger. The LASSO in \cite{tibshirani_regression_1996} is a popular technique used for high dimensional i.i.d. data. By minimizing
$$\|Y-X\beta\|^2+\lambda_2\|\beta\|_1,$$
the resulting estimator $\hat{\beta}$ will have some entries shrunk to zero, and the set $S=\{i:\hat{\beta}_i\neq 0\}$ is known as the active set. However, we need to balance the goodness-of-fit and complexity of the model properly by tuning $\lambda_2$, and model selection using cross validation is not always stable. In \cite{meinshausen_stability_2010}, the authors proposed stability selection, which uses subsampling to provide variable selection that is less sensitive to the choice of $\lambda$.
Here we propose a similar procedure for network data. Given an observed graph $G$, we first generate $B$ subsampled graphs $g_b$ using one of the resampling schemes. Then we minimize the objective function
$$\|Y-\alpha-X\beta\|_2^2+\lambda_1\alpha^T L\alpha + \lambda_2\|\beta\|_1,$$
on each $g_b$ to construct the active set $S_b$, and for each $1\leq j \leq p$, calculate $\frac{1}{B}\sum_{b=1}^B \mathbf{1}\{j\in S_b\}$, the selection probability of predictor $j$.
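A minimal sketch of this procedure follows. The alternating-minimization solver (closed-form $\alpha$-step, one coordinate-descent soft-thresholding cycle for $\beta$) and node subsampling of the Laplacian are our illustrative choices, not necessarily the solver used in the simulations.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fit_cohesion_lasso(Y, X, L, lam1, lam2, iters=200):
    """Alternating minimization for
        ||Y - alpha - X beta||^2 + lam1 * alpha' L alpha + lam2 * ||beta||_1.
    The alpha-step is a linear solve; the beta-step is one cycle of
    coordinate descent with soft-thresholding."""
    n, p = X.shape
    S = np.eye(n) + lam1 * L
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        alpha = np.linalg.solve(S, Y - X @ beta)
        r = Y - alpha - X @ beta
        for j in range(p):
            r += X[:, j] * beta[j]                 # remove j's contribution
            beta[j] = soft(X[:, j] @ r, lam2 / 2.0) / col_sq[j]
            r -= X[:, j] * beta[j]
    return alpha, beta

def selection_probs(Y, X, A, lam1, lam2, q=0.5, B=50, seed=None):
    """Selection probability of each predictor over B node-subsampled
    Laplacians; the full (X, Y) sample is retained, as in the text."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    m = max(2, int(round(np.sqrt(q) * n)))         # ~q of node pairs kept
    counts = np.zeros(p)
    for _ in range(B):
        keep = rng.choice(n, size=m, replace=False)
        Asub = np.zeros_like(A)
        Asub[np.ix_(keep, keep)] = A[np.ix_(keep, keep)]
        Lsub = np.diag(Asub.sum(axis=1)) - Asub
        _, beta = fit_cohesion_lasso(Y, X, Lsub, lam1, lam2)
        counts += (np.abs(beta) > 1e-8)
    return counts / B
```

The per-coordinate update $\beta_j = \mathrm{soft}(X_j^T r_{-j}, \lambda_2/2)/\|X_j\|^2$ is the standard lasso coordinate-descent step applied to the residual with $\alpha$ held fixed.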
The simulation setup is as follows. A graph with 3 communities of 200 nodes each is generated from an SBM. Again we vary the ratio of within and between community edge probabilities and the edge density of the graph. The individual effect $\alpha$ is generated from $N(c_{k},\sigma_{\alpha})$, where $c_k$ are $-1$, $0$, $1$ for nodes from the three communities respectively. 25 coefficients $\beta_j$ are drawn from $N(1,1)$ independently, while the remaining 75 are set to 0. Then $y_i$ is drawn from $N(\alpha_i + \beta^T x_i,1)$. The results are shown in Figures \ref{lasso1}, \ref{lasso2} and \ref{lasso3}.
The three resampling schemes give different performances in AUC when we vary the subsampling proportion $q$: the performance of node sampling is not sensitive to the choice of $q$, while the performances of row sampling and node pair sampling improve or worsen respectively as $q$ increases.
Similar to what we observed in the last section, although the subsampling step does bring uncertainty into the inference, in a regression setting node covariates often play a more important role, and the performance of resampling only the network structure is not as good as expected.
\begin{figure}
\caption{Resampling performance as a function of $\sigma_\alpha$ with fixed $\rho = 0.2$ and $t = 10$.}
\label{lasso1}
\end{figure}
\begin{figure}
\caption{Resampling performance as a function of $t$ with fixed $\rho = 0.2$ and $\sigma_{\alpha}$.}
\label{lasso2}
\end{figure}
\begin{figure}
\caption{Resampling performance as a function of $\rho$ with fixed $\sigma_{\alpha}$.}
\label{lasso3}
\end{figure}
\section{A data analysis: triangle density in Facebook networks}
\label{sec:data}
The data studied are Facebook networks of 95 American colleges and universities from a single-day snapshot taken in September 2005 \cite{ryan_network_2015, traud_2012_social}. The sizes of these college networks range from around 800 to 30,000 nodes, and their edge densities range from 0.0025 to 0.06, as shown in Figure~\ref{plot:data:rho_vs_n}.
Questions of interest include the distribution of normalized triangle density, understanding whether there are significant differences between colleges, and discovering relationships with other important network characteristics, such as size or edge density.
Answering these and other similar questions requires uncertainty estimates in addition to point estimates. We construct these uncertainty estimates by applying our Algorithm 1 to obtain 90\% confidence intervals, with subsampling fraction $q$ chosen by Algorithm 2. The results are shown in Figures \ref{plot:data} and~\ref{plot:data_residuals}.
There are several conclusions to draw from Figure~\ref{plot:data}. First, larger networks have higher normalized triangle density, despite normalization, and the confidence intervals are short enough to make these differences very significant. The same is true for normalized triangle density as a function of edge density: denser networks have lower normalized triangle density. This may be due to the fact that the edge density of these social networks goes down as the size of the network goes up, as shown in Figure~\ref{plot:data:rho_vs_n}, since the number of connections to each individual node grows much slower compared to the network size.
Generally, the intervals from all three schemes have similar centers, and those obtained by pair sampling are somewhat shorter than those constructed with node and row sampling. Algorithm 2 generally chooses small values of subsampling proportion $q$ and Table~\ref{table:data:q} shows the percentage of different $q$ chosen out of the 95 graphs.
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|}
\hline
& $q=0.1$ & $q=0.2$ \\ \hline
node & 81.1\% & 18.9\% \\ \hline
row & 54.7\% & 45.3\% \\ \hline
pair & 96.8\% & 3.2\% \\ \hline
\end{tabular}
\caption{Percentage of different $q$'s chosen for different colleges using different subsampling methods.}
\label{table:data:q}
\end{table}
It is evident from Figure~\ref{plot:data} that there is a strong positive correlation between normalized triangle density and the number of nodes, and so it is instructive to look at triangle density while controlling for network size.
We show residuals of normalized triangle density after regressing normalized triangle density on $n$ in Figure \ref{plot:data_residuals}. We observe that the residuals are mostly centered around zero, with larger variance as $n$ grows, suggesting there is no longer a dependence on $n$. However, colleges such as UCF, USF and UC do have larger normalized triangle density compared to colleges of similar size, as seen in both Figures~\ref{plot:data} and \ref{plot:data_residuals}. A similar conclusion can be drawn for edge density.
\begin{figure}
\caption{Network size and edge density of 95 college Facebook networks. }
\label{plot:data:rho_vs_n}
\end{figure}
\begin{figure}
\caption{Confidence intervals of normalized triangle density constructed for 95 college Facebook networks. }
\label{plot:data}
\end{figure}
\begin{figure}
\caption{Normalized triangle density residuals after regressing on number of nodes and log edge density respectively.}
\label{plot:data_residuals}
\end{figure}
While the example we gave is a simple exploratory analysis, it demonstrates the possibilities of much richer network data analysis when point estimates of network statistics are accompanied by uncertainty estimates.
\section{Discussion}
\label{sec:disc}
The three different subsampling methods we studied (node, row, and node pair sampling) are all reasonably good at achieving nominal coverage if the sampling proportion $q$ is selected appropriately, both for estimating uncertainty in network summary statistics and in estimated parameters of a model fitted to network data. While there is no universally best resampling scheme for all tasks, node sampling is a safe choice for most, especially compared to row sampling. The double bootstrap algorithm we proposed for choosing $q$ works well empirically over a range of settings, especially for network statistics that are normalized with respect to the size or the edge density of the graph. Our algorithm is written explicitly for confidence intervals, and would thus need to be modified for use with other losses, for instance, prediction intervals, but such a modification would be straightforward.
An obvious question arises about theoretical guarantees of coverage for bootstrap confidence intervals. We have intentionally made no assumptions on the underlying network model when deriving these bootstrap methods. Proving any such guarantee requires assuming an underlying probability model that generated the network. If such a model is assumed, then, for all common network models, there will be an alternative model-based bootstrap approach tailor-made for the model. For example, for latent variable models where network edge probabilities are functions of unobserved latent variables in $\mathbb{R}^d$, a bootstrap approach based on estimating latent variables in $\mathbb{R}^d$, bootstrapping those estimated variables, and generating networks from those has been shown to have good asymptotic properties \cite{levin_bootstrapping_2019}.
Similarly, specialized methods for network moments have been proposed in \cite{bhattacharyya_subsampling_2015, lunde_2019_subsampling}.
There is an inevitable tradeoff between keeping a method assumption-free and establishing theoretical guarantees. In this computational paper, we have focused on the algorithms themselves, and leave establishing guarantees for special cases for future work.
\FloatBarrier
\end{document} |
\begin{document}
\begin{center}
{\large \bf Global Existence of Smooth Solutions and Convergence to Barenblatt Solutions for the Physical Vacuum Free Boundary Problem of Compressible Euler Equations with Damping }
\end{center}
\centerline{ Tao Luo, Huihui Zeng}
\begin{abstract} For the physical vacuum free boundary problem with the sound speed being $C^{{1}/{2}}$-H$\ddot{\rm o}$lder continuous near vacuum boundaries of the one-dimensional compressible Euler equations with damping,
the global existence of the smooth solution is proved, which is shown to converge to the Barenblatt self-similar solution for the porous media equation with the same total mass when the initial data is a small perturbation of the Barenblatt solution. The pointwise convergence of the density with a rate, the convergence rate of the velocity in supremum norm and the precise expanding rate of the physical vacuum boundaries are also given. The proof is based on a construction of higher-order weighted functionals with both space and time
weights capturing the behavior of solutions both near vacuum states and in large time, an introduction of a new ansatz, higher-order nonlinear energy estimates and elliptic estimates. \end{abstract}
\section{Introduction}
The aim of this paper is to prove the global existence and time-asymptotic equivalence to the Barenblatt self-similar solutions of smooth solutions for the following physical vacuum free boundary problem for the Euler equations of compressible isentropic flow with damping:
\begin{equation}\label{1.1}
\begin{split}
& \rho_t+(\rho u)_x=0 & {\rm in}& \ \ {\rm I}(t) : = \left\{(x,t) \left | x_-(t) < x < x_+(t), \ t > 0\right. \right\}, \\
&(\rho u)_t+(p(\rho)+\rho u^2)_x=- \rho u & {\rm in}& \ \ {\rm I}(t),\\
&\rho>0 &{\rm in } & \ \ {\rm I} (t),\\
& \rho=0 & {\rm on}& \ \ \Gamma(t): = \left\{(x,t) \left | x= x_\pm (t) , \ t > 0\right. \right\},\\
& \dot{ \Gamma}(t) =u(\Gamma(t),t ), & &\\
&(\rho, u)=(\rho_0, u_0) & {\rm on} & \ \ {\rm I}(0):=\left\{(x,t) \left | x_-(0) < x < x_+(0), \ t =0\right. \right\}.
\end{split}
\end{equation}
Here $(x,t)\in \mathbb{R}\times [0,\infty)$, $\rho $, $u$ and $p$ denote, respectively, the space and time variables, density, velocity and pressure; ${\rm I}(t) $, $\Gamma(t)$ and
$\dot{ \Gamma}(t)$ represent, respectively, the changing domain occupied by the
gas, moving vacuum boundary and velocity of $\Gamma(t)$; $-\rho u$ appearing on the right-hand side of $\eqref{1.1}_2$ describes the frictional damping.
We assume that the pressure satisfies the $\gamma$-law:
$$
p(\rho)= \rho^{\gamma} \ \ {\rm for} \ \ \gamma>1
$$
(Here the adiabatic constant is set to be unity.) Let $c=\sqrt {p'(\rho)}$ be the sound speed; a vacuum boundary is called {\it physical} if
$$
0< \left|\frac{\partial c^2}{\partial x}\right|<+\infty $$
in a small neighborhood of the boundary.
In order to capture this physical singularity, the initial density is supposed to satisfy
\begin{equation}\label{156}\begin{split}
\rho_0(x)>0 \ \ {\rm for} \ \ x_-(0)< x<x_+(0), \ \ \rho_0\left(x_\pm(0)\right)=0\ \ {\rm and}
\ \ 0< \left|\left(\rho_0^{\gamma-1}\right)_x \left(x_\pm(0)\right)\right| <\infty .
\end{split} \end{equation}
Let $M\in(0,\infty)$ be the initial total mass; then the conservation law of mass, $\eqref{1.1}_1$, gives
$$\int_{x_-(t)}^{x_+(t)}\rho(x,t)dx = \int_{x_-(0)}^{x_+(0)} \rho_0(x)dx =:M \ \ {\rm for} \ \ t>0.$$
The compressible Euler equations of isentropic flow with damping are closely related to the
porous media equation (cf. \cite{HL, HMP, HPW, 23}):
\begin{equation}\label{pm}
\rho_t=p(\rho)_{xx},
\end{equation}
when $(\ref{1.1})_2$ is simplified to Darcy's law:
\begin{equation}\label{darcy}
p(\rho)_x=- \rho u.
\end{equation}
For \eqref{pm}, a basic understanding of solutions with finite mass is provided by the Barenblatt self-similar solution (cf. \cite{ba}), which is given by
\begin{equation}\label{bar}\begin{split}
\bar \rho(x, t)= (1+ t)^{-\frac{1}{\gamma+1}}\left[A- B(1+ t)^{-\frac{2}{\gamma+1}}x^2\right]^{\frac{1}{\gamma-1}},
\end{split}\end{equation}
where
\begin{equation}\label{mass1}
B=\frac{\gamma-1}{2\gamma(\gamma+1)} \ \ {\rm and} \ \ A^{\frac{\gamma+1}{2(\gamma-1)}} = {M} \sqrt{B} \left( {\int_{-1}^1 (1-y^2)^{1/(\gamma-1)}dy} \right)^{-1}; \end{equation}
so that the Barenblatt self-similar solution defined in ${\rm I}_b(t)$ has the same total mass as that for the solution of \eqref{1.1}:
\begin{equation}\label{massforbaren}
\int_{\bar x_-(t)}^{\bar x_+(t)} \bar\rho(x,t) dx = M = \int_{x_-(t)}^{x_+(t)}\rho(x,t)dx \ \ {\rm for} \ \ t\ge 0,\end{equation}
where
\begin{equation}\label{IB} {\rm I}_b(t)= \left\{(x,t)\left| \ \bar x_-(t)< x< \bar x_+(t), \ t\ge 0 \right. \right\} \ \ {\rm with} \ \ \bar x_{\pm}(t)=\pm \sqrt{A B^{-1}} (1+ t)^{ {1}/({\gamma+1})}. \end{equation}
The corresponding Barenblatt velocity is defined in ${\rm I}_b(t)$ by
\begin{equation*}
\bar u(x,t)=- \frac{p(\bar \rho)_x}{\bar \rho}
=\frac{ x}{(\gamma+1)(1+ t)} \ \ {\rm satisfying} \ \ \dot {\bar x}_{\pm}(t)=\bar u(\bar x_{\pm}(t),\ t).
\end{equation*}
So, $(\bar\rho,\ \bar u)$ defined in the region ${\rm I}_b (t)$
solves (\ref{pm}) and (\ref{darcy}) and satisfies \eqref{massforbaren}.
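For the reader's convenience, Darcy's law \eqref{darcy} can be checked directly: using $p(\rho)_x/\rho=\frac{1}{\gamma-1}\left(\gamma\rho^{\gamma-1}\right)_x$ and \eqref{bar},

```latex
\begin{equation*}
-\frac{p(\bar\rho)_x}{\bar\rho}
=-\frac{1}{\gamma-1}\left(\gamma\bar\rho^{\,\gamma-1}\right)_x
=\frac{2\gamma B}{\gamma-1}\,\frac{x}{1+t}
=\frac{x}{(\gamma+1)(1+t)}
=\bar u(x,t),
\end{equation*}
```

since $\gamma\bar\rho^{\gamma-1}=\gamma(1+t)^{-\frac{\gamma-1}{\gamma+1}}\left[A-B(1+t)^{-\frac{2}{\gamma+1}}x^2\right]$, the exponents $\frac{\gamma-1}{\gamma+1}+\frac{2}{\gamma+1}$ sum to $1$, and $2\gamma B=(\gamma-1)/(\gamma+1)$ by \eqref{mass1}.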
It is clear that the vacuum boundaries $x=\bar x_{\pm}(t)$ of Barenblatt's solution are physical. This is the major motivation to study problem \eqref{1.1}, the physical vacuum free boundary problem of
compressible Euler equations with damping. To this end, a class of explicit solutions to problem \eqref{1.1} was constructed by Liu in \cite{23}, which are of the following form:
\begin{equation}\label{liuexplicitsolution}\begin{split} & {\rm I}(t)=\left\{(x,t) \left | \ -\sqrt{e(t)/b(t)} < x < \sqrt {e(t)/b(t)}, \ \ t \ge 0\right. \right\}, \\
& c^2(x, t)={e(t)-b(t)x^2} \ \ {\rm and} \ \ u(x, t)=a(t) x \ \ {\rm in } \ \ {\rm I} (t).\end{split}\end{equation}
In \cite{23}, a system of ordinary differential equations for $(e,b,a)(t) $ was derived with $e(t), b(t)>0$ for $t\ge 0$ by substituting \eqref {liuexplicitsolution} into $\eqref{1.1}_{1, 2}$ and the time-asymptotic equivalence of this explicit solution and Barenblatt's solution with the same total mass was shown. Indeed, the Barenblatt solution of \eqref{pm} and \eqref{darcy} can be obtained by the same ansatz as \eqref{liuexplicitsolution}:
$$
\bar c^2(x, t)=\bar e(t)-\bar b(t)x^2 \ \ {\rm and } \ \ \bar u(x, t)=\bar a(t)x. $$
Substituting this into \eqref{pm}, \eqref{darcy} and \eqref{massforbaren} with $\bar {x}_{\pm}(t)=\pm\sqrt{{\bar e(t)}/{\bar b(t)}}$ gives
$$
\bar e(t)= \gamma A (1+ t)^{-({\gamma-1})/({\gamma+1})},\ \bar b(t)= \gamma B(1+ t)^{-1}
\ \ {\rm and} \ \ \bar a(t)= {(\gamma+1)^{-1}(1+t)^{-1}},$$
where $A$ and $B$ are determined by \eqref{mass1}. Precisely, the following time-asymptotic equivalence was proved in \cite{23}:
$$
(a,\ b,\ e)(t)=(\bar a, \ \bar b, \ \bar e)(t)+ O(1)(1+t)^{-1}{\ln (1+t)} \ \ {\rm as}\ \ t\to\infty.$$
A question was raised in \cite{23} whether this equivalence is still true for general solutions to problem \eqref{1.1}. The purpose of this paper is to prove the global existence of smooth solutions to the physical vacuum free boundary problem \eqref{1.1} for general initial data which are small perturbations of Barenblatt's solutions, and the time-asymptotic equivalence of them. In particular, we obtain the pointwise convergence with a rate of density which gives the detailed behavior of the density, the convergence rate of velocity in supreme norm and the precise expanding rate of the physical vacuum boundaries. The results obtained in the present work also prove the nonlinear asymptotic stability of Barenblatt's solutions in the setting of physical vacuum free boundary problems.
The physical vacuum that the sound speed is $C^{ {1}/{2}}$-H$\ddot{\rm o}$lder continuous across vacuum boundaries makes the study of free boundary problems in compressible fluids challenging and very interesting, even for the local-in-time existence theory, because standard methods of symmetric hyperbolic systems (cf. \cite{17}) do not apply.
Indeed, characteristic speeds of the compressible isentropic Euler equations become singular with infinite spatial derivatives
at vacuum boundaries, which creates severe difficulties in analyzing the regularity near boundaries. The phenomenon of physical vacuum arises naturally in several important physical situations besides the one mentioned above, for example, the equilibrium and dynamics of boundaries of gaseous stars (cf. \cite{17', LXZ}).
Recently, important progress has been made in the local-in-time well-posedness theory for the one- and three-dimensional compressible
Euler equations (cf. \cite{16, 10, 7, 10', 16'}).
In the theory and application of nonlinear partial differential equations, it is of fundamental importance to study the global-in-time existence and long time asymptotic behavior of solutions. However, it poses a great challenge to extend the local-in-time existence theory to the global one of smooth solutions, due to the strong degeneracy near vacuum states caused by the singular behavior of physical vacuum. The key in analyses is to obtain the
global-in-time regularity of solutions near vacuum boundaries by establishing the uniform-in-time higher-order estimates, which is nontrivial to achieve due to strong degenerate nonlinear hyperbolic characters. To the best of our knowledge, the results obtained in this paper are the first ones on the global existence of smooth solutions for the physical vacuum free boundary problems in inviscid compressible fluids. This is somewhat surprising due to the difficulties mentioned above.
It should be pointed out that the $L^p$-convergence of
$L^{\infty}$-weak solutions for the Cauchy problem of the one-dimensional compressible Euler equations with damping to Barenblatt solutions of the porous media equations was given in \cite{HMP} with $p=2$ if $1<\gamma\le 2$ and $p=\gamma$ if $\gamma>2$ and in \cite{HPW} with $p=1$, respectively, using entropy-type estimates for the solution itself without deriving estimates for derivatives.
However, the interfaces separating gases and vacuum cannot be traced in the framework of $L^{\infty}$-weak solutions. The aim of the present work is to understand the behavior and long time dynamics of physical vacuum boundaries, for which obtaining the global-in-time regularity of solutions is essential.
In order to overcome difficulties in the analysis of obtaining
global-in-time regularities of solutions near vacuum boundaries, we construct higher-order weighted functionals with both space and time weights, introduce a new ansatz to bypass the obstacle that Barenblatt solutions do not solve $\eqref{1.1}_2$ exactly which causes errors in large time, and perform higher-order nonlinear energy estimates and elliptic estimates.
In the construction of higher-order weighted functionals, the space and time weights are used to capture the behavior of solutions near vacuum states and to detect the decay of solutions to Barenblatt solutions, respectively.
The choice of these weights also depends on the behavior both near vacuum states and in large time of Barenblatt solutions.
As shown in \cite{16, 10, 7, 10', 16'},
a powerful tool in the study of physical vacuum free boundary problems of nonlinear hyperbolic equations is the weighted energy estimate. It should be remarked that
weighted estimates used in establishing the local-in-time well-posedness theory (cf. \cite{16, 10, 7, 10', 16'}) involve only spatial weights. Yet weighted estimates only involving spatial weights seem to be limited to proving local existence results. To obtain global-in-time higher-order estimates, we introduce time weights to quantify the large time behavior of solutions.
The choice of time weights may be suggested by looking at the linearized problem to get hints on how the solution decays.
Indeed, after introducing a new ansatz to correct the error due to the fact that Barenblatt solutions do not solve $\eqref{1.1}_2$ exactly, one may decompose the solution of \eqref{1.1}
as a sum of the Barenblatt solution, the new ansatz and an error term.
For the linearized problem of the error term around the Barenblatt solution, one may obtain precise time decay rates of weighted norms for various orders of derivatives, while the $L^2$-weighted norm of solution itself is bounded. However, it requires tremendous efforts to pass from linear analyses to nonlinear analyses for the nonlinear problem \eqref{1.1}.
Our strategy for the nonlinear analysis is using a bootstrap argument for which we identify an appropriate a priori assumption. This a priori assumption involves not only the $L^{\infty}$-norms of the solution and its first derivatives but also the weighted $L^{\infty}$-norms of higher-order derivatives with both spatial and temporal weights.
This is one of the new ingredients of the present work compared with the methods used
either for the local existence theory in \cite{16, 10, 7, 10', 16'} or the nonlinear instability theory in \cite{17'}.
Under this a priori assumption, we first use elliptic estimates to bound the space-time weighted $L^2$-norms of higher-order derivatives in both normal and tangential-normal directions of vacuum boundaries by the corresponding space-time weighted $L^2$-norms of tangential derivatives. With these bounds, we perform the nonlinear weighted energy estimate by differentiating the equations in the tangential direction to give the uniform-in-time space-time weighted $L^2$-estimates of various order derivatives in the tangential direction.
It is discovered here that the a priori assumption alone is not enough to close the nonlinear energy estimates; one also has to use the bounds obtained in the elliptic estimates. This gives the uniform-in-time estimates of the higher-order weighted functional we construct.
The bootstrap argument is closed by verifying the a priori assumption, for which we prove that the weighted $L^{\infty}$-norms appearing in the a priori assumption can be bounded by the higher-order weighted functional. One of the advantages of our approach is that we can prove the global existence and large time convergence of solutions, with detailed convergence rates, simultaneously. It should be remarked that the convergence rates obtained in this article are the same as those for the linearized problem.
We would like to close this introduction by reviewing some prior results on vacuum free boundary problems for the compressible Euler equations besides the results mentioned above.
Some local-in-time well- and ill-posedness
results were obtained in \cite{JMnew} for the one-dimensional compressible Euler equations for polytropic gases featuring
various behaviors at the fluid-vacuum interface. In \cite{24}, when the singularity near the vacuum is mild in the sense that $c^\alpha$ is smooth across the interface with $0 < \alpha \le 1$ for the sound speed $c$, a local existence theory was developed for the one-dimensional Euler equations with damping, based on the adaptation of the theory of symmetric hyperbolic systems, which is not applicable to physical vacuum boundary problems for which only $c^2$, the square of the sound speed instead of $c^{\alpha}$ ($0 < \alpha \le 1$), is required to be smooth across the gas-vacuum interface (further developments can be found in \cite{38}). In \cite{LXZ}, a general uniqueness theorem was proved for three-dimensional motions of the compressible Euler equations with or without self-gravitation, and a new local-in-time well-posedness theory was established for spherically symmetric motions without imposing the compatibility condition of the first derivative being zero at the center of symmetry. An instability theory of stationary solutions to the physical vacuum free boundary problem for the spherically symmetric compressible Euler-Poisson equations of gaseous stars for $6/5<\gamma<4/3$ was established in \cite{17'}. In \cite{zhenlei}, the local-in-time well-posedness of the physical vacuum free boundary problem was investigated for the one-dimensional Euler-Poisson equations, adopting methods motivated by those in \cite{10} for the one-dimensional Euler equations.
\section{Reformulation of the problem and main results}
\subsection{Fixing the domain and Lagrangian variables}
We take the initial interval of the Barenblatt solution, $\left(\bar x_-(0), \ \bar x_+(0)\right)$, as the reference interval and define a diffeomorphism
$\eta_0: \left(\bar x_-(0), \ \bar x_+(0)\right) \to \left( x_-(0), \ x_+(0)\right)$
by
\begin{equation*} \int_{x_-(0)}^{\eta_0(x)}\rho_0(y)dy=\int_{\bar x_-(0)}^{x}\bar\rho_0(y)dy \ \ {\rm for} \ \ x \in \left(\bar x_-(0),\ \bar x_+(0)\right) ,\end{equation*}
where $\bar \rho_0(x) : = \bar\rho(x,0) $ is the initial density of the Barenblatt solution. Clearly,
\begin{equation}\label{2.3}
\rho_0(\eta_0(x))\eta_{0}'(x) = \bar \rho_0(x) \ \ {\rm for} \ \ x \in \left(\bar x_-(0),\ \bar x_+(0)\right). \end{equation}
Due to \eqref{156}, \eqref{bar} and the fact that the total mass of the Barenblatt solution is the same as that of $\rho_0$, \eqref{massforbaren}, the diffeomorphism $\eta_0$ is well defined. For simplicity of presentation, set
$$ \mathcal{I} : = \left(\bar x_-(0), \ \bar x_+(0)\right)=\left(-\sqrt{A /B }, \ \sqrt{A/ B } \right). $$
To fix the boundary, we transform system \eqref{1.1} into Lagrangian variables. For $x\in \mathcal{I}$, we define the Lagrangian variable $\eta(x, t)$ by
$$
\eta_t(x, t)= u(\eta(x, t), t) \ \ {\rm for} \ \ t>0 \ \ {\rm and} \ \ \eta(x, 0)=\eta_0(x),
$$
and set the Lagrangian density and velocity by
\begin{equation}\label{ldv}f(x, t)=\rho(\eta(x, t), t) \ \ {\rm and} \ \ v(x, t)= u(\eta(x, t), t) .\end{equation}
Then the Lagrangian version of system \eqref{1.1} can be written on the reference domain $\mathcal{I} $ as
\begin{equation}\label{e1-2} \begin{split}
& f_t + f v_x /\eta_x=0 & {\rm in}& \ \ \mathcal{I} \times (0, \infty),\\
& f v_t+\left( f^\gamma \right)_x /\eta_x = -fv \ \ &{\rm in}& \ \ \mathcal{I}\times (0, \infty), \\
& (f, v)=(\rho_0(\eta_0), u_0(\eta_0) ) & {\rm on}& \ \ \mathcal{I} \times \{t=0\}.
\end{split}
\end{equation}
The map $\eta(\cdot, t)$ defined above can be extended to $\bar {\mathcal{I}}= [-\sqrt{A /B }, \ \sqrt{A/ B } ]$. In this setting, the vacuum free boundaries for problem \eqref{1.1} are given by
\begin{equation}\label{vbs} x_{\pm}(t)=\eta(\bar x_{\pm}(0), \ t)=\eta\left(\pm\sqrt{A/ B },t \right) \ \ {\rm for} \ \ t\ge 0.\end{equation}
It follows from solving $\eqref{e1-2}_1$ and using \eqref{2.3} that
\begin{equation}\label{ld}
f(x,t)\eta_x(x,t)=\rho_0(\eta_0(x))\eta_{0}'(x)=\bar \rho_0(x), \ x\in \mathcal{I} .
\end{equation}
It should be noticed that we need $\eta_x(x,t)>0$ for $x\in\mathcal{I}$ and $t\ge 0$ to make the Lagrangian transformation sensible, which will be verified in \eqref{basic}.
So, the initial density of the Barenblatt solution, $\bar\rho_0$, can be regarded as a parameter and system \eqref{e1-2} can be rewritten as
\begin{equation}\label{equ}\left.\begin{split}
&\bar\rho_0 \eta_{tt} + \bar\rho_0 \eta_{t}+ \left( \bar\rho_0^\gamma/\eta_x^\gamma \right)_x= 0,\ \ &{\rm in}& \ \ \mathcal{I}\times (0, \infty), \\
&(\eta, \eta_t)= \left( \eta_0, u_0(\eta_0)\right), & {\rm on} & \ \ \mathcal{I} \times\{t=0\}.
\end{split}\right.\end{equation}
\subsection{Ansatz}
Define the Lagrangian variable $\bar\eta(x, t)$ for the Barenblatt flow in $\bar {\mathcal{I}}$ by
$$
\bar\eta_t (x, t)= \bar u(\bar\eta(x, t), t)=\frac{ \bar\eta(x,t)}{(\gamma+1)(1+ t)} \ \ {\rm for} \ \ t>0 \ \ {\rm and} \ \ \bar\eta(x, 0)=x,
$$
so that
\begin{equation}\label{212}
\bar\eta(x,t)=x(1+ t)^{{1}/({\gamma+1})} \ \ {\rm for} \ \ (x,t) \in \bar {\mathcal{I}}\times [0,\infty)
\end{equation}
and
\begin{equation*}\left.\begin{split}
&\bar \rho_0 \bar\eta_t + \left(\bar \rho_0^\gamma/\bar \eta_x^\gamma \right)_x= 0 \ \ &{\rm in}& \ \ \mathcal{I}\times (0, \infty) .
\end{split}\right.\end{equation*}
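As a quick sanity check of the closed form \eqref{212} (illustrative only; the value $\gamma=2$ and the step count are our choices, not quantities from the analysis), one can integrate the defining ODE numerically and compare:

```python
# Integrate bar_eta_t = bar_eta / ((gamma+1)(1+t)), bar_eta(0) = x0, with
# classical RK4 and compare against the closed form x0 (1+t)^{1/(gamma+1)}.
GAMMA = 2.0  # illustrative choice

def rhs(t, y):
    return y / ((GAMMA + 1.0) * (1.0 + t))

def integrate(x0, t_end, n_steps=2000):
    dt = t_end / n_steps
    t, y = 0.0, x0
    for _ in range(n_steps):
        k1 = rhs(t, y)
        k2 = rhs(t + 0.5 * dt, y + 0.5 * dt * k1)
        k3 = rhs(t + 0.5 * dt, y + 0.5 * dt * k2)
        k4 = rhs(t + dt, y + dt * k3)
        y += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return y

def closed_form(x0, t):
    return x0 * (1.0 + t) ** (1.0 / (GAMMA + 1.0))
```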
Since $\bar\eta$ does not solve $\eqref{equ}_1$ exactly, we introduce a correction $h(t)$, which is the solution of the following initial value problem for an ordinary differential equation:
\begin{equation}\label{mt}\left.\begin{split}
&h_{tt} + h_t - {(\bar\eta_x+h)^{-\gamma}} /(\gamma+1)+ \bar\eta_{xtt} + \bar\eta_{xt} =0 , \ \ t>0, \\
&h(t=0)=h_t(t=0)=0.
\end{split}\right.\end{equation}
(Notice that $\bar \eta_x$, $\bar \eta_{xt}$ and $\bar \eta_{xtt}$ are independent of $x$.)
The new ansatz is then given by
\begin{equation}\label{ansatz}
\tilde{\eta}(x,t) :=\bar\eta(x,t)+x h(t),
\end{equation}
so that
\begin{equation}\label{equeta} \begin{split}
& \bar\rho_0 \tilde\eta_{tt} + \bar\rho_0\tilde\eta_t + \left(\bar \rho_0^\gamma/\tilde \eta_x^\gamma \right)_x= 0 \ \ &{\rm in}& \ \ \mathcal{I}\times (0, \infty).
\end{split}
\end{equation}
It should be noticed that $\tilde\eta_x$ is independent of $x$. We will prove in the Appendix that $\tilde\eta$ behaves similarly to $\bar \eta$, that is, there exist positive constants $K$ and $C(n)$ independent of time $t$ such that for all $t\ge 0$,
\begin{equation}\label{decay}\begin{split}
&\left(1 + t \right)^{{1}/({\gamma+1})} \le \tilde \eta_{x} (t)\le K \left(1 + t \right)^{{1}/({\gamma+1})}, \ \ \ \ \tilde\eta_{xt}(t)\ge 0, \\
&\left|\frac{ d^k\tilde \eta_{x}(t)}{dt^k}\right| \le C(n)\left(1 + t \right)^{\frac{1}{\gamma+1}-k}, \ \ k= 1, \cdots, n.
\end{split}\end{equation}
Moreover, there exists a certain constant $C$ independent of $t$ such that
\begin{equation}\label{h} 0\le h(t)\le C(1+t)^{-\frac{\gamma}{\gamma+1}}\ln(1+t) \ \ {\rm and} \ \
|h_t|\le C(1+t)^{-1-\frac{\gamma}{\gamma+1}}\ln(1+t), \ \ t\ge 0. \end{equation}
The proof of \eqref{h} will also be given in the Appendix.
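To illustrate the bounds \eqref{h}, the following sketch integrates the ODE \eqref{mt} numerically for the particular value $\gamma=2$, so that $\bar\eta_x(t)=(1+t)^{1/3}$. The step size, time horizon and the envelope constant $C=1$ are our illustrative choices, not quantities from the proof in the Appendix.

```python
# Numerically integrate the correction ODE for h (gamma = 2, illustrative):
#   h'' + h' - (bar_eta_x + h)^{-gamma}/(gamma+1) + bar_eta_xtt + bar_eta_xt = 0,
#   h(0) = h'(0) = 0,  with bar_eta_x(t) = (1+t)^{1/3}.
import math

GAMMA = 2.0

def accel(t, h, hp):
    ex = (1.0 + t) ** (1.0 / 3.0)                     # bar_eta_x
    ext = (1.0 / 3.0) * (1.0 + t) ** (-2.0 / 3.0)     # bar_eta_xt
    extt = -(2.0 / 9.0) * (1.0 + t) ** (-5.0 / 3.0)   # bar_eta_xtt
    return -hp + (ex + h) ** (-GAMMA) / (GAMMA + 1.0) - extt - ext

def envelope(t, C=1.0):
    # envelope C (1+t)^{-gamma/(gamma+1)} ln(1+t); C = 1 is our choice here
    return C * (1.0 + t) ** (-GAMMA / (GAMMA + 1.0)) * math.log(1.0 + t)

def solve_h(t_end=50.0, dt=0.005):
    """Classical RK4 for the first-order system (h, h')."""
    t, h, hp = 0.0, 0.0, 0.0
    traj = [(t, h)]
    for _ in range(int(round(t_end / dt))):
        k1h, k1p = hp, accel(t, h, hp)
        k2h, k2p = hp + 0.5 * dt * k1p, accel(t + 0.5 * dt, h + 0.5 * dt * k1h, hp + 0.5 * dt * k1p)
        k3h, k3p = hp + 0.5 * dt * k2p, accel(t + 0.5 * dt, h + 0.5 * dt * k2h, hp + 0.5 * dt * k2p)
        k4h, k4p = hp + dt * k3p, accel(t + dt, h + dt * k3h, hp + dt * k3p)
        h += dt * (k1h + 2.0 * k2h + 2.0 * k3h + k4h) / 6.0
        hp += dt * (k1p + 2.0 * k2p + 2.0 * k3p + k4p) / 6.0
        t += dt
        traj.append((t, h))
    return traj
```

Along the computed trajectory one observes $h\ge 0$ and $h(t)$ staying below the envelope, consistent with \eqref{h}.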
\subsection{Main results}
Let
$$w(x,t)=\eta(x,t)- \tilde\eta(x,t) .$$
Then, it follows from \eqref{equ} and \eqref{equeta} that
\begin{equation}\label{eluerpert}\begin{split}
\bar\rho_0w_{tt} +
\bar\rho_0w_t + \left[ \bar\rho_0^\gamma \left( (\tilde\eta_x+w_x)^{-\gamma} - {\tilde\eta_x}^{-\gamma} \right) \right]_x =0.
\end{split}\end{equation}
Denote
$$ \alpha := 1/(\gamma-1), \ \ l:=3+ \min\left\{m \in \mathbb{N}: \ \ m> \alpha \right\}=4 +[\alpha] .$$
For $j=0,\cdots, l$ and $i=0,\cdots, l-j$, we set
\begin{equation*}\begin{split}
\mathcal{ E}_{j}(t) : = & (1+ t)^{2j} \int_\mathcal{I} \left[\bar\rho_0\left(\partial_t^j w\right)^2 + \bar\rho_0^\gamma \left(\partial_t^j w_x \right)^2 + (1+ t) \bar\rho_0 \left(\partial_t^{j+1} w\right)^2 \right] (x, t) dx , \\
\mathcal{ E}_{j, i}(t): = & (1+ t)^{2j} \int_\mathcal{I} \left[\bar\rho_0^{1+(i-1)(\gamma-1) } \left(\partial_t^j \partial_x^i w\right)^2 + \bar\rho_0^{1+(i+1)(\gamma-1) } \left(\partial_t^j \partial_x^{i+1}w \right)^2\right] (x, t) dx .
\end{split}\end{equation*}
The higher-order norm is defined by
$$
\mathcal{E}(t) := \sum_{j=0}^l \left(\mathcal{ E}_{j}(t) + \sum_{i=1}^{l-j} \mathcal{ E}_{j, i}(t) \right).
$$
It will be proved in Lemma \ref{lem31} that
$$
\sup_{x\in \mathcal{I}} \left\{\sum_{j=0}^3 (1+ t)^{2j} \left |\partial_t^j w(x, t)\right |^2 + \sum_{j=0}^1 (1+ t)^{2j} \left |\partial_t^j w_x(x, t)\right|^2\right\} \le C \mathcal{E}(t)
$$
for some constant $C$ independent of $t$. So the bound of $\mathcal{E}(t)$ gives the uniform bound and decay of $w$ and its derivatives. Now, we are ready to state the main result.
\begin{thm}\label{mainthm} There exists a constant $\bar \delta >0$ such that if
$\mathcal{E}(0)\le \bar \delta,$
then the problem \eqref{equ} admits a global unique smooth solution in $\mathcal{I}\times[0, \infty)$ satisfying for all $t\ge 0$,
$$\mathcal{E}(t)\le C\mathcal{E}(0) $$
and
\begin{equation}\label{2}\begin{split}
& \sup_{x\in \mathcal{I}} \left\{\sum_{j=0}^3 (1+ t)^{2j} \left |\partial_t^j w(x, t)\right |^2 + \sum_{j=0}^1 (1+ t)^{2j} \left |\partial_t^j w_x(x, t)\right |^2\right\}
\\
& \qquad +
\sup_{x\in \mathcal{I}}\sum_{
i+j\le l,\ 2i+j \ge 4 } (1+ t)^{2j}\left | \bar\rho_0^{\frac{(\gamma-1)(2i+j-3)}{2}}\partial_t^j \partial_x^i w(x, t)\right|^2 \le C \mathcal{E}(0),
\end{split}\end{equation}
where $C$ is a positive constant independent of $t$.
\end{thm}
It should be noticed that the time derivatives involved in the initial higher-order energy norm,
$\mathcal{E}(0)$, can be determined via the equation by the initial data $\rho_0$ and $u_0$ (see \cite{10} for instance).
\begin{rmk} For the linearized equation of \eqref{eluerpert} around the Barenblatt solution:
$$
\bar\rho_0w^L_{tt} +
\bar\rho_0w^L_t -\gamma \left( \bar\rho_0^\gamma \tilde \eta_x^{-(\gamma+1)}w^L_x\right)_x =0,
$$
one may easily show, for example, using the weighted energy method and \eqref{decay}, that
$$\sum_{j=0}^{k} \mathcal{ E}_{j}(w^L)(t)\le C \sum_{j=0}^{k} \mathcal{ E}_{j}(w^L)(0),\ \ t\ge 0,$$
for any integer $k\ge 0$. Here $C>0$ is a constant independent of $t$ and
$$ \mathcal{ E}_{j}(w^L)(t) := (1+ t)^{2j} \int_\mathcal{I} \left[\bar\rho_0\left(\partial_t^j w^L\right)^2 + \bar\rho_0^\gamma \left(\partial_t^j w^L_x \right)^2 + (1+t)\bar\rho_0 \left(\partial_t^{j+1} w^L\right)^2\right](x, t) dx .$$
In Theorem \ref{mainthm}, we obtain the same decay rates for the nonlinear problem.
\end{rmk}
As a corollary of Theorem \ref{mainthm}, we have the following theorem for solutions to the original vacuum free boundary problem
\eqref{1.1}.
\begin{thm}\label{mainthm1} There exists a constant $\bar\delta >0$ such that if
$\mathcal{E}(0)\le \bar\delta, $
then the problem
\eqref{1.1} admits a global unique smooth solution $\left(\rho, u, {\rm I}(t)\right)$ for $t\in[0,\infty)$ satisfying
\begin{equation}\label{1'}\begin{split} \left|\rho\left(\eta(x, t),t\right)-\bar\rho\left(\bar\eta(x, t), t\right)\right|
\le & C\left(A-Bx^2\right)^{\frac{1}{\gamma-1}}(1+t)^{-\frac{2}{\gamma+1}} \\
& \times \left(\sqrt{\mathcal{E}(0)}+(1+t)^{-\frac{\gamma}{\gamma+1}}\ln(1+t)\right), \end{split} \end{equation}
\begin{equation}\label{2'} \left|u\left(\eta(x, t),t\right)-\bar u\left(\bar\eta(x, t), t\right)\right|\le C(1+t)^{-1}\left( \sqrt{\mathcal{E}(0)}+(1+t)^{-\frac{\gamma}{\gamma+1}}\ln(1+t)\right), \end{equation}
\begin{equation}\label{3'} -c_2(1+t)^{\frac{1}{\gamma+1}}\le x_-(t)\le -c_1(1+t)^{\frac{1}{\gamma+1}}, \ \ c_1(1+t)^{\frac{1}{\gamma+1}}\le x_+(t)\le c_2(1+t)^{\frac{1}{\gamma+1}}, \end{equation}
\begin{equation}\label{4'} \left|\frac{d^k x_{\pm}(t)}{dt^k}\right|\le C(1+t)^{\frac{1}{\gamma+1}-k} , \ \ k=1, 2, 3 ,\end{equation}
for all $x \in \mathcal{I}$ and $t\ge 0 $. Here $C$, $c_1$ and $c_2$ are positive constants independent of $t$.
\end{thm}
The pointwise behavior of the density and the convergence of velocity for the vacuum free boundary problem
\eqref{1.1} to those of the Barenblatt solution are given by \eqref{1'} and \eqref{2'}, respectively. Estimate \eqref{3'} gives the precise expanding rate
of the vacuum boundaries of problem \eqref{1.1}, which is the same as that for the Barenblatt solution shown in \eqref{IB}.
It is also shown in \eqref{1'} that
the difference between the density of problem \eqref{1.1} and the corresponding Barenblatt density decays at the
rate of $(1+t)^{-{2}/(\gamma+1)}$ in $L^\infty$, while the density of the Barenblatt solution, $\bar\rho$, decays at the
rate of $(1 + t)^{-1/(\gamma+1)}$ in $L^\infty$ (see \eqref{bar}).
\section{Proof of Theorem \ref{mainthm}}
The proof is based on the local existence of smooth solutions (cf. \cite{10, 16}) and continuation arguments. The uniqueness of the smooth solutions can be obtained as in section 11 of \cite{LXZ}. In order to prove the global existence
of smooth solutions, we need to obtain uniform-in-time {\it a priori} estimates on any given time interval $[0, T]$ satisfying $\sup_{t\in [0, T]}\mathcal{E}(t)<\infty$. To this end, we use a bootstrap argument based on the following {\it a priori} assumption: let $w$ be a smooth solution to \eqref{eluerpert} on $[0 , T]$; there exists a suitably small fixed positive number $\epsilon_0\in (0,1)$ independent of $t$ such that
\begin{equation}\label{apriori}\begin{split}
& \sum_{j=0}^3 (1+ t)^{2j} \left\|\partial_t^j w(\cdot,t)\right\|_{L^\infty(\mathcal{I})}^2 + \sum_{j=0}^1 (1+ t)^{2j} \left\|\partial_t^j w_x(\cdot,t)\right\|_{L^\infty(\mathcal{I})}^2
\\
& \qquad +
\sum_{
i+j\le l,\ 2i+j \ge 4 } (1+ t)^{2j}\left\| \bar\rho_0^{\frac{(\gamma-1)(2i+j-3)}{2}}\partial_t^j \partial_x^i w(\cdot,t)\right\|_{L^\infty(\mathcal{I})}^2 \le \epsilon_0^2 , \ \ t \in [ 0, T].
\end{split}\end{equation}
This in particular implies, noting $\eqref{decay}$, that for all $\theta\in[0,1]$,
\begin{equation}\label{basic}
\frac{1}{2}(1+t)^{\frac{1}{\gamma+1}}\le \left(\tilde \eta_x+\theta w_x\right)(x, t)\le 2K(1+t)^{\frac{1}{\gamma+1}}, \ \ (x, t)\in \mathcal{I}\times [0, T],\end{equation}
where $K$ is a positive constant appearing in $\eqref{decay}_1$.
Under this {\it a priori} assumption, we prove in section 3.2 the following elliptic estimates:
$$\mathcal{ E}_{j, i}(t) \le C \left( \widetilde{\mathcal{ E}}_{0}(t) + \sum_{\iota=1}^{i+j}\mathcal{ E}_{\iota}(t)\right), \ \ {\rm when} \ \ j \ge 0, \ \ i \ge 1, \ \ i+j\le l, $$
where $C$ is a positive constant independent of $t$ and
$$\widetilde{\mathcal{ E}}_{0}(t) =\mathcal{ E}_{0}(t)- \int_\mathcal{I} \bar\rho_0 w^2 (x, t)dx.$$
With the {\it a priori} assumption and elliptic estimates, we show in section 3.3 the following nonlinear weighted energy estimate: for some positive constant $C$ independent of $t$,
$$\mathcal{E}_j(t) \le C \sum_{\iota=0}^j \mathcal{ E}_{\iota}(0), \ \ j=0,1, \cdots, l. $$
Finally, the {\it a priori} assumption \eqref{apriori} can be verified in section 3.4 by proving
\begin{align*}
& \sum_{j=0}^3 (1+ t)^{2j} \left\|\partial_t^j w(\cdot,t)\right\|_{L^\infty(\mathcal{I})}^2 + \sum_{j=0}^1 (1+ t)^{2j} \left\|\partial_t^j w_x(\cdot,t)\right\|_{L^\infty(\mathcal{I})}^2
\\
& \qquad +
\sum_{
i+j\le l,\ 2i+j \ge 4 } (1+ t)^{2j}\left\| \bar\rho_0^{\frac{(\gamma-1)(2i+j-3)}{2}} \partial_t^j \partial_x^i w(\cdot,t)\right\|_{L^\infty(\mathcal{I})}^2 \le C \mathcal{E}(t)
\end{align*}
for some positive constant $C$ independent of $t$.
This closes the whole bootstrap argument for small initial perturbations and completes the proof of Theorem \ref{mainthm}.
\subsection{Preliminaries}
In this subsection, we present some embedding estimates for weighted Sobolev spaces which will be used later and introduce some notations to simplify the presentation.
Set
$$
d(x):=dist(x, \partial \mathcal{I})=\min\left\{x+\sqrt{A/B}, \sqrt{A/B}-x\right\} , \ \ x \in \mathcal{I}=\left(-\sqrt{A/B}, \ \sqrt{A/B} \right).
$$
For any $a>0$ and nonnegative integer $b$, the weighted Sobolev space $H^{a, b}(\mathcal{I})$ is given by
$$ H^{a, b}(\mathcal{I}) := \left\{ d^{a/2}F\in L^2(\mathcal{I}): \ \ \int_\mathcal{I} d^a|\partial_x ^k F|^2dx<\infty, \ \ 0\le k\le b\right\}$$
with the norm
$$ \|F\|^2_{H^{a, b}(\mathcal{I})} := \sum_{k=0}^b \int_\mathcal{I} d^a|\partial_x^k F|^2dx.$$
Then for $b\ge {a}/{2}$, the following {\it embedding of weighted Sobolev spaces} holds (cf. \cite{18'}):
$$ H^{a, b}(\mathcal{I})\hookrightarrow H^{b- {a}/{2}}(\mathcal{I})$$
with the estimate
\begin{equation}\lambdabel{wsv} \|F\|_{H^{b- {a}/{2}}(\mathcal{I})} \le C(a, b) \|F\|_{H^{a, b}(\mathcal{I})} \end{equation}
for some positive constant $C(a, b)$.
The following general version of the {\it Hardy inequality}, whose proof can be found in \cite{18'}, will also be used often in this paper.
Let $k>1$ be a given real number and $F$ be a function satisfying
$$
\int_0^{\delta} x^k\left(F^2 + F_x^2\right) dx < \infty,
$$
where $\delta$ is a positive constant; then it holds that
$$
\int_0^{\delta} x^{k-2} F^2 dx \le C(\delta, k) \int_0^{\delta} x^k \left( F^2 + F_x^2 \right) dx,
$$
where $C(\delta, k)$ is a constant depending only on $\delta$ and $k$. As a consequence, one has
\begin{equation*}
\int_{-\sqrt{A/B}}^0 \left(x+\sqrt{A/B}\right)^{k-2} F^2 dx \le C \int_{-\sqrt{A/B}}^0 \left(x+\sqrt{A/B}\right)^k \left( F^2 + F_x^2 \right) dx,
\end{equation*}
\begin{equation*}
\int^{\sqrt{A/B}}_0 \left(\sqrt{A/B}-x\right)^{k-2} F^2 dx \le C \int^{\sqrt{A/B}}_0 \left(\sqrt{A/B}-x\right)^k \left( F^2 + F_x^2 \right) dx,
\end{equation*}
where $C$ is a constant depending on $A$, $B$ and $k$. In particular, it holds that
\begin{equation}\label{hardybdry}
\int_\mathcal{I} d(x)^{k-2} F^2 dx \le C \int_\mathcal{I} d(x)^k \left( F^2 + F_x^2 \right) dx,
\end{equation}
provided that the right-hand side is finite.
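Purely as a numerical illustration of the Hardy inequality above (the exponent $k=3$, the interval with $\delta=1$ and the test functions are our own choices), one can evaluate both sides for a few sample functions and observe a bounded ratio:

```python
# Compare LHS = int_0^1 x^{k-2} F^2 dx with RHS = int_0^1 x^k (F^2 + F_x^2) dx
# for k = 3, using the composite midpoint rule (which never samples x = 0).
def midpoint_integral(g, a, b, n=100000):
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) for i in range(n)) * dx

def hardy_ratio(F, Fx, k=3.0, delta=1.0):
    lhs = midpoint_integral(lambda x: x ** (k - 2.0) * F(x) ** 2, 0.0, delta)
    rhs = midpoint_integral(lambda x: x ** k * (F(x) ** 2 + Fx(x) ** 2), 0.0, delta)
    return lhs / rhs
```

For instance, $F\equiv 1$ gives the ratio $({1}/{2})/({1}/{4})=2$, and mildly singular choices such as $F(x)=x^{-1/4}$ still yield a bounded ratio.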
\vskip 0.5cm
{\bf Notations:}
1) Throughout the rest of the paper, $C$ will denote a positive constant which depends only on the parameters of the problem,
$\gamma$ and $M$, but not on the data. Such constants are referred to as universal and can change
from one inequality to another. We also use $C(\beta)$ to denote a certain positive constant
depending on the quantity $\beta$.
2) We will employ the notation $a\lesssim b$ to denote $a\le C b$ and $a \sim b$ to denote $C^{-1}b\le a\le Cb$,
where $C$ is a universal constant as defined
above.
3) In the rest of the paper, we will use the notations
$$ \int:=\int_{\mathcal{I}} \ , \ \ \|\cdot\|:=\|\cdot\|_{L^2(\mathcal{I})} \ \ {\rm and} \ \ \|\cdot\|_{L^{\infty}}:=\|\cdot\|_{L^{\infty}(\mathcal{I})}.$$
4) We set
$$
\varsigma(x):=\bar\rho_0^{\gamma-1}(x)=A-Bx^2, \ \ x \in \mathcal{I}.$$
Then $\mathcal{ E}_{j}$ and $\mathcal{ E}_{j, i}$ can be rewritten as
\begin{equation*}\begin{split}
&\mathcal{ E}_{j} (t) = (1+ t)^{2j} \int \left[ \varsigma^{\alpha}\left(\partial_t^j w\right)^2 + \varsigma^{\alpha+1} \left(\partial_t^j w_x \right)^2 + (1+ t)\varsigma^{\alpha} \left(\partial_t^{j+1} w\right)^2\right] (x, t)dx ,\\
&\mathcal{ E}_{j, i}(t) = (1+ t)^{2j} \int \left[ \varsigma^{\alpha+i+1} \left(\partial_t^j \partial_x^{i+1}w \right)^2 + \varsigma^{\alpha+i-1} \left(\partial_t^j \partial_x^{i}w \right)^2 \right] (x, t)dx .
\end{split}\end{equation*}
Obviously, $ \varsigma(x)$ is equivalent to $d(x)$, that is,
\begin{equation}\label{varsigma}
B \sqrt{A/B} d(x)\le \varsigma(x)\le 2 B \sqrt{A/B} d(x), \ \ x \in \mathcal{I}.
\end{equation}
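The equivalence \eqref{varsigma} follows from the factorization $\varsigma(x)=B\left(\sqrt{A/B}-x\right)\left(\sqrt{A/B}+x\right)$, in which one factor equals $d(x)$ and the other lies in $[\sqrt{A/B},\, 2\sqrt{A/B}]$. A quick numerical confirmation, with the illustrative values $A=2$ and $B=1$ (our choices):

```python
import math

# Check B*sqrt(A/B)*d(x) <= varsigma(x) <= 2*B*sqrt(A/B)*d(x) on a grid of I,
# for the illustrative choice A = 2, B = 1 (so sqrt(A/B) = sqrt(2)).
A, B = 2.0, 1.0
R = math.sqrt(A / B)

def varsigma(x):
    return A - B * x * x

def d(x):
    # distance to the boundary of I = (-R, R)
    return min(x + R, R - x)

def bounds_hold(n=10000):
    for i in range(1, n):  # interior grid points of I
        x = -R + 2.0 * R * i / n
        lo, hi = B * R * d(x), 2.0 * B * R * d(x)
        v = varsigma(x)
        if v < lo - 1e-12 or v > hi + 1e-12:
            return False
    return True
```

Note that the lower bound is attained at $x=0$ and the upper bound is approached at the endpoints.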
\subsection{Elliptic estimates }
Denote
\begin{equation*}\begin{split}
&\widetilde{\mathcal{ E}}_{0}(t) := \int \varsigma^{\alpha+1} w_x^2(x, t) dx + (1+ t) \int \varsigma^{\alpha} w_t^2 (x, t) dx=\mathcal{ E}_{0}(t)- \int \varsigma^{\alpha} w^2 (x, t)dx.
\end{split}\end{equation*}
We prove the following elliptic estimates in this subsection.
\begin{prop} Suppose that \eqref{apriori} holds for a suitably small positive number $\epsilon_0 \in(0,1)$; then for $0\le t\le T$,
\begin{equation}\label{ellipticestimate}
\mathcal{ E}_{j, i}(t) \lesssim \widetilde{\mathcal{ E}}_{0}(t) + \sum_{\iota=1}^{i+j}\mathcal{ E}_{\iota}(t), \ \ {\rm when} \ \ j \ge 0, \ \ i \ge 1, \ \ i+j\le l.
\end{equation}
\end{prop}
The proof of this proposition consists of Lemma \ref{lem41} and Lemma \ref{lem43}.
\subsubsection{Lower-order elliptic estimates}
Equation \eqref{eluerpert} can be rewritten as
\begin{equation*}\begin{split}
\gamma {\tilde\eta_x}^{-\gamma-1} \left(\bar\rho_0^\gamma w_x\right)_x=\bar\rho_0w_{tt} +
\bar\rho_0w_t + \left[ \bar\rho_0^\gamma \left( (\tilde\eta_x+w_x)^{-\gamma} - {\tilde\eta_x}^{-\gamma} +\gamma {\tilde\eta_x}^{-\gamma-1} w_x \right) \right]_x.
\end{split}\end{equation*}
Divide the equation above by $\bar\rho_0$ and expand the resulting equation to give
\begin{equation}\label{007}\begin{split}
\gamma {\tilde\eta_x}^{-\gamma-1}\left[ \varsigma w_{xx} + (\alpha+1) \varsigma_x w_x\right]
= &
w_{tt} +
w_t -
\gamma \varsigma \left[ (\tilde\eta_x+w_x)^{-\gamma-1} - {\tilde\eta_x}^{-\gamma-1}\right] w_{xx}\\
&+(1+\alpha) \varsigma_x \left[ (\tilde\eta_x+w_x)^{-\gamma} - {\tilde\eta_x}^{-\gamma} +\gamma {\tilde\eta_x}^{-\gamma-1} w_x \right] .
\end{split}\end{equation}
\begin{lem}\label{lem41}
Suppose that \eqref{apriori} holds for a suitably small positive number $\epsilon_0 \in(0,1)$. Then,
$$\mathcal{ E}_{0, 1}(t) \lesssim \widetilde{\mathcal{ E}}_0(t) + \mathcal{ E}_1(t), \ 0\le t\le T.$$
\end{lem}
{\bf Proof}. Multiply equation \eqref{007} by $ {\tilde\eta_x}^{\gamma+1} \varsigma^{\alpha/2}$ and square the spatial $L^2$-norm of the product to obtain
\begin{equation}\label{n44}\begin{split}
&\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx} + (\alpha+1) \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x\right\|^2
\\
\lesssim &
(1+t)^2 \left( \left\| \varsigma^{\frac{\alpha}{2}} w_{tt}\right\|^2 +\left\|
\varsigma^{\frac{\alpha}{2}} w_t \right\|^2 \right) +
{\tilde\eta_x}^{-2} \left\| \varsigma^{1+\frac{\alpha}{2}} w_x w_{xx} \right\|^2 +
{\tilde\eta_x}^{-2} \left\| \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x^2 \right\|^2\\
\lesssim & \mathcal{ E}_{1} + {\tilde\eta_x}^{-2} \|w_x\|_{L^\infty}^2 \left(\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx} \right\|^2 +
\left\| \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x \right\|^2\right),
\end{split}\end{equation}
where we have used the Taylor expansion, the smallness of $w_x$ (a consequence of \eqref{apriori}), and \eqref{decay} to derive the first inequality, and the definition of $\mathcal{ E}_{1}$ to derive the second.
Note that the left-hand side of \eqref{n44} can be expanded as
\begin{equation*}\begin{split}
&\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx} + (\alpha+1) \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x\right\|^2
\\
=& \left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx}\right\|^2 +(\alpha+1)^2 \left\| \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x\right\|^2
+ (\alpha+1) \int \varsigma^{1+\alpha} \varsigma_x \left(w_x^2\right)_x dx \\
= & \left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx}\right\|^2 -(\alpha+1) \int \varsigma^{1+\alpha} \varsigma_{xx} w_x^2 dx,
\end{split}\end{equation*}
where the last equality follows from the integration by parts. Thus,
\begin{equation}\label{n45}\begin{split}
\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx}\right\|^2 \lesssim \widetilde{\mathcal{ E}}_{0} +\mathcal{ E}_{1} + {\tilde\eta_x}^{-2} \|w_x\|_{L^\infty}^2 \left(\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx} \right\|^2 +
\left\| \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x \right\|^2\right).
\end{split}\end{equation}
On the other hand, it follows from \eqref{n44} and \eqref{n45} that
\begin{equation*}\begin{split}
\left\| (\alpha+1) \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x\right\|^2= & \left\| \left[ \varsigma^{1+\frac{\alpha}{2}} w_{xx} + (\alpha+1) \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x \right] - \varsigma^{1+\frac{\alpha}{2}} w_{xx} \right\|^2
\\
\le & 2\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx} + (\alpha+1) \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x\right\|^2
+ 2\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx}\right\|^2\\
\lesssim & \widetilde{\mathcal{ E}}_{0} +\mathcal{ E}_{1} + {\tilde\eta_x}^{-2} \|w_x\|_{L^\infty}^2 \left(\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx} \right\|^2 +
\left\| \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x \right\|^2\right)
.
\end{split}\end{equation*}
This, together with \eqref{n45}, gives
\begin{equation}\label{similar}\begin{split}
\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx}\right\|^2
+ \left\| \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x\right\|^2\lesssim \widetilde{\mathcal{ E}}_{0} +\mathcal{ E}_{1} + {\tilde\eta_x}^{-2} \|w_x\|_{L^\infty}^2 \left(\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx} \right\|^2 +
\left\| \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x \right\|^2\right) ,
\end{split}\end{equation}
which implies, with the aid of the smallness of $w_x$ and \eqref{decay}, that
\begin{equation}\label{n46}\begin{split}
\left\| \varsigma^{1+\frac{\alpha}{2}} w_{xx}\right\|^2
+ \left\| \varsigma^{\frac{\alpha}{2}} \varsigma_x w_x\right\|^2
\lesssim \widetilde{\mathcal{ E}}_{0} +\mathcal{ E}_{1} .
\end{split}\end{equation}
In view of $\left\| \varsigma^{{\alpha }/{2}} \varsigma^{1/2} w_x\right\|^2 \le \widetilde{\mathcal{ E}}_0 $, we then see that $\left\| \varsigma^{{\alpha }/{2}} w_x\right\|^2 \lesssim \widetilde{\mathcal{ E}}_{0} +\mathcal{ E}_{1} $. Indeed, if we denote
$\mathcal{I}_1=[-\sqrt{A/B}/2,\sqrt{A/B}/2 ]$, then
\begin{equation}\label{you}\begin{split}
& \left\| \varsigma^{{\alpha }/{2}} w_x\right\|^2\lesssim \left\| \varsigma^{{\alpha }/{2}} w_x\right\|_{L^2(\mathcal{I}\setminus\mathcal{I}_1)}^2 + \left\| \varsigma^{{\alpha }/{2}} w_x\right\|_{L^2(\mathcal{I}_1)}^2 \\
\lesssim & \left\| \varsigma^{{\alpha }/{2}} \varsigma_x w_x\right\|_{L^2(\mathcal{I}\setminus\mathcal{I}_1)}^2 + \left\| \varsigma^{{\alpha }/{2}} \varsigma^{1/2} w_x\right\|_{L^2(\mathcal{I}_1)}^2
\le \left\| \varsigma^{{\alpha }/{2}} \varsigma_x w_x\right\|^2+ \left\| \varsigma^{{\alpha }/{2}} \varsigma^{1/2} w_x\right\|^2 \lesssim \widetilde{\mathcal{ E}}_{0} +\mathcal{ E}_{1},
\end{split}\end{equation}
since $|\varsigma_x|$ and $\varsigma$ have positive lower bounds on the intervals $\mathcal{I}\setminus\mathcal{I}_1$ and $\mathcal{I}_1$, respectively. This completes the proof of Lemma \ref{lem41}. $\Box$
\subsubsection{Higher-order elliptic estimates}
For $i\ge 1$ and $j\ge 0$, $\partial_t^j\partial_x^{i-1}\eqref{007}$ yields
\begin{equation}\label{007t}\begin{split}
\gamma {\tilde\eta_x}^{-\gamma-1}\left[ \varsigma \partial_t^j \partial_x^{i+1} w + (\alpha+i) \varsigma_x \partial_t^j \partial_x^i w\right]
= &
\partial_t^{j+2} \partial_x^{i-1} w +
\partial_t^{j+1} \partial_x^{i-1} w +Q_1 + Q_2 ,
\end{split}\end{equation}
where
\begin{equation}\label{q1}\begin{split}
& Q_1 :=-\gamma \sum_{\iota=1}^j \left[\partial_t^\iota \left({\tilde\eta_x}^{-\gamma-1}\right)\right]
\partial_t^{j-\iota}\left[ \varsigma \partial_x^{i+1} w + (\alpha+i) \varsigma_x \partial_x^i w\right] \\
& -\gamma \partial_t^j\left\{{\tilde\eta_x}^{-\gamma-1}
\left[\sum_{\iota=2}^{i-1}C_{i-1}^\iota \left(\partial_x^\iota \varsigma\right)\left(\partial_x^{i+1-\iota} w\right) +(\alpha+1)\sum_{\iota=1}^{i-1}C_{i-1}^\iota \left(\partial_x^{\iota+1} \varsigma\right)\left(\partial_x^{i -\iota} w\right)
\right] \right\},
\end{split}\end{equation}
\begin{equation}\label{q2}\begin{split}
Q_2 : = & -
\gamma \partial_t^j \partial_x^{i-1} \left\{ \varsigma \left[ (\tilde\eta_x+w_x)^{-\gamma-1} - {\tilde\eta_x}^{-\gamma-1}\right] w_{xx}\right\}\\
&+(1+\alpha) \partial_t^j \partial_x^{i-1} \left\{ \varsigma_x \left[ (\tilde\eta_x+w_x)^{-\gamma} - {\tilde\eta_x}^{-\gamma} +\gamma {\tilde\eta_x}^{-\gamma-1} w_x \right] \right\} .
\end{split}\end{equation}
Here and thereafter $C_m^j$ is used to denote the binomial coefficients for
$0\le j \le m$,
$$C_m^j=\frac{m!}{j!(m-j)!}.$$
In this paper, summations $\sum_{\iota=1}^{i-1}$ and $\sum_{\iota=2}^{i-1}$ should be understood as zero when $i=1$ and $i=1, 2$, respectively. Multiply equation \eqref{007t} by ${\tilde\eta_x}^{ \gamma+1} \varsigma^{(\alpha+i-1)/2}$ and square the spatial $L^2$-norm of the product to give
\begin{equation*}\begin{split}
\left\|\varsigma^\frac{\alpha+i+1}{2} \partial_t^j \partial_x^{i+1} w + (\alpha+i) \varsigma^\frac{\alpha+i-1}{2} \varsigma_x \partial_t^j \partial_x^i w \right\|^2
\lesssim
(1+t)^2 \left(\left\| \varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j+2} \partial_x^{i-1} w \right\|^2
\right. \\
\left. +
\left\|\varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j+1} \partial_x^{i-1} w \right\|^2 \right) + (1+t)^2 \left( \left\| \varsigma^{\frac{\alpha+i-1}{2}} Q_1 \right\|^2 +
\left\|\varsigma^{\frac{\alpha+i-1}{2}} Q_2 \right\|^2 \right).
\end{split}\end{equation*}
Similarly to the derivation of \eqref{similar} and \eqref{you}, we then obtain
\begin{equation}\label{important}\begin{split}
&(1+t)^{-2j}\mathcal{E}_{j,i}(t) = \left\|\varsigma^\frac{\alpha+i+1}{2} \partial_t^j \partial_x^{i+1} w \right\|^2 + \left\| \varsigma^\frac{\alpha+i-1}{2} \partial_t^j \partial_x^i w \right\|^2
\lesssim \left\| \varsigma^\frac{\alpha+i}{2} \partial_t^j \partial_x^i w \right\|^2 +
(1+t)^2 \\
&\times \left(\left\| \varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j+2} \partial_x^{i-1} w \right\|^2
+
\left\|\varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j+1} \partial_x^{i-1} w \right\|^2 + \left\| \varsigma^{\frac{\alpha+i-1}{2}} Q_1 \right\|^2 +
\left\|\varsigma^{\frac{\alpha+i-1}{2}} Q_2 \right\|^2 \right).
\end{split}\end{equation}
We will use this estimate to prove the following lemma by mathematical induction.
\begin{lem}\label{lem43} Assume that \eqref{apriori} holds for a suitably small positive number $\epsilon_0\in(0,1)$. Then for $ j\ge 0$, $i\ge 1$ and $2\le i+j\le l$,
\begin{equation}\label{lem43est}
\mathcal{E}_{j, i}(t) \lesssim \widetilde{\mathcal{E}}_{0}(t) + \sum_{\iota=1}^{i+j}\mathcal{E}_{\iota}(t), \ \ t\in [0,T].
\end{equation}
\end{lem}
{\bf Proof}. We use induction on $i+j$ to prove this lemma. As shown in Lemma \ref{lem41}, \eqref{lem43est} holds for $i+j=1$. For $1\le k\le l- 1$, we make the induction hypothesis that \eqref{lem43est} holds for all
$ j\ge 0$, $i\ge 1$ and $i+j\le k$, that is,
\begin{equation}\label{asssup}
\mathcal{E}_{j, i}(t) \lesssim \widetilde{\mathcal{E}}_{0}(t) + \sum_{\iota=1}^{i+j}\mathcal{E}_{\iota}(t), \ \ j\ge 0, \ \ i\ge 1, \ \ i+j \le k;
\end{equation}
it then suffices to prove \eqref{lem43est} for $j\ge 0$, $i\ge 1$ and $i+j=k+1$.
(Indeed, there is a natural order of the pairs $(i,j)$ in the proof: when $i+j=k+1$, we bound $\mathcal{E}_{k+1-\iota, \iota}$ step by step from $\iota=1$ to $\iota=k+1$.)
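For instance, when $k+1=3$ the energies are bounded in the order
\begin{equation*}
\mathcal{E}_{2,1} \ (\iota=1) \ \longrightarrow \ \mathcal{E}_{1,2} \ (\iota=2) \ \longrightarrow \ \mathcal{E}_{0,3} \ (\iota=3),
\end{equation*}
each step using only the induction hypothesis \eqref{asssup} and the bounds obtained at the earlier steps.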
We estimate $Q_1$ and $Q_2$ given by \eqref{q1} and \eqref{q2} as follows.
For $Q_1$, it follows from \eqref{decay} that
\begin{equation*}\begin{split}
|Q_1| \lesssim \sum_{\iota=1}^j (1+t)^{-1-\iota}
\left( \varsigma \left|\partial_t^{j-\iota} \partial_x^{i+1} w\right| + \left| \partial_t^{j-\iota}\partial_x^i w \right|\right) +\sum_{\iota=0}^j \sum_{r=1}^{i-1} (1+t)^{-1-\iota} \left|\partial_t^{j-\iota} \partial_x^{r} w\right|,
\end{split}\end{equation*}
so that
\begin{equation}\label{building}\begin{split}
\left\| \varsigma^{\frac{\alpha+i-1}{2}} Q_1 \right\|^2 \lesssim &\sum_{\iota=1}^j (1+t)^{-2 -2\iota}
\left( \left\| \varsigma^{\frac{\alpha+i+1}{2}} \partial_t^{j-\iota} \partial_x^{i+1} w\right\|^2 + \left\| \varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j-\iota}\partial_x^i w \right\|^2\right) \\
&+\sum_{\iota=0}^j \sum_{r=1}^{i-1} (1+t)^{-2-2\iota} \left\|\varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j-\iota} \partial_x^{r} w\right\|^2 \\
\lesssim & (1+t)^{-2 -2j} \left(\sum_{\iota=1}^j \mathcal{E}_{j -\iota, i}+\sum_{\iota=0}^j \sum_{r=1}^{i-1} \mathcal{E}_{j -\iota, r} \right).
\end{split}\end{equation}
For $Q_2$, it follows from \eqref{decay} and \eqref{apriori} that
\begin{equation*}\begin{split}
|Q_2| \lesssim & \sum_{n=0}^j \sum_{m=0}^{i-1} K_{nm}\left(\left|\partial_t^{j-n}\partial_x^{i-1-m} (\varsigma w_{xx})\right|+ \left|\partial_t^{j-n}\partial_x^{i-1-m} (\varsigma_x w_{x})\right|\right)\\
\lesssim & \sum_{n=0}^j \sum_{m=0}^{i-1} K_{nm}\left(\left|\varsigma \partial_t^{j-n}\partial_x^{i-m+1} w \right|+ \left|\varsigma_x \partial_t^{j-n}\partial_x^{i-m} w \right|+ \sum_{r=1}^{i-m-1} \left| \partial_t^{j-n}\partial_x^{r} w \right| \right) \\
=: & \sum_{n=0}^j \sum_{m=0}^{i-1} Q_{2nm}.
\end{split}\end{equation*}
Here
\begin{equation*}\begin{split}
& K_{00} = \epsilon_0(1+t)^{-1-\frac{1}{\gamma+1}}; \\
& K_{10}= \epsilon_0(1+t)^{-2-\frac{1}{\gamma+1}}, \ \
K_{01} = (1+t)^{-1-\frac{1}{\gamma+1}}|\partial_x^2 w |;\\
& K_{20}=\epsilon_0(1+t)^{-3-\frac{1}{\gamma+1}} + (1+t)^{-1-\frac{1}{\gamma+1}}\left|\partial_t^2\partial_x w \right|,\\
&K_{11}= (1+t)^{-2-\frac{1}{\gamma+1}}\left|\partial_x^2 w \right| + (1+t)^{-1-\frac{1}{\gamma+1}}\left|\partial_t\partial_x^2 w \right| , \\
&K_{02}=(1+t)^{-1-\frac{1}{\gamma+1}}\left|\partial_x^3 w \right| + (1+t)^{-1-\frac{2}{\gamma+1}}\left|\partial_x^2 w \right|^2.
\end{split}\end{equation*}
We do not list $K_{nm}$ for $n+m\ge 3$ here, since $Q_{2nm}$ for $n+m\ge 3$ can be estimated by the same method as for $n+m\le 2$. The terms $Q_{200}$ and $Q_{210}$ are easily bounded by
\begin{equation*}\begin{split}
\left\|\varsigma^{\frac{\alpha+i-1}{2}} Q_{200} \right\|^2 \lesssim & \epsilon_0^2(1+t)^{-2 } \left(\left\|\varsigma^\frac{\alpha+i+1}{2} \partial_t^j \partial_x^{i+1} w \right\|^2 + \left\| \varsigma^\frac{\alpha+i-1}{2} \partial_t^j \partial_x^i w \right\|^2 \right.\\
&\left. + \sum_{r=1}^{i -1} \left\|\varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j }\partial_x^{r} w \right\|^2 \right)
\lesssim \epsilon_0^2 (1+t)^{-2-2j} \left( \mathcal{E}_{j,i} + \sum_{r=1}^{i-1 }\mathcal{E}_{j,r}\right) ,
\end{split}\end{equation*}
\begin{equation*}\begin{split}
\left\|\varsigma^{\frac{\alpha+i-1}{2}} Q_{2 1 0} \right\|^2 \lesssim & \epsilon_0^2(1+t)^{- 4 } \left(\left\|\varsigma^\frac{\alpha+i+1}{2} \partial_t^{j-1} \partial_x^{i+1} w \right\|^2 + \left\| \varsigma^\frac{\alpha+i-1}{2} \partial_t^{j-1} \partial_x^i w \right\|^2 \right.\\
&\left. + \sum_{r=1}^{i -1} \left\|\varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j-1 }\partial_x^{r} w \right\|^2 \right)
\lesssim \epsilon_0^2(1+t)^{-2-2j}\sum_{r=1}^{i }\mathcal{E}_{j-1,r}.
\end{split}\end{equation*}
For $Q_{201}$, we use \eqref{apriori} to get $|\varsigma^{1/2} w_{xx}|\lesssim \epsilon_0$ and then obtain
\begin{equation*}\begin{split}
\left\|\varsigma^{\frac{\alpha+i-1}{2}} Q_{201} \right\|^2 \lesssim & \epsilon_0^2 (1+t)^{-2} \left(\left\|\varsigma^\frac{\alpha+i }{2} \partial_t^{j } \partial_x^{i } w \right\|^2 + \left\| \varsigma^\frac{\alpha+i-2}{2} \partial_t^{j } \partial_x^{i-1} w \right\|^2 \right.\\
&\left. + \sum_{r=1}^{i -2} \left\|\varsigma^{\frac{\alpha+i-2}{2}} \partial_t^{j }\partial_x^{r} w \right\|^2 \right)
\lesssim \epsilon_0^2 (1+t)^{-2-2j}\sum_{r=1}^{i-1}\mathcal{E}_{j,r}.
\end{split}\end{equation*}
It should be noted that $Q_{201}$ appears when $i\ge 2$. Similarly, $Q_{220}$ can be bounded by
\begin{equation*}\begin{split}
\left\|\varsigma^{\frac{\alpha+i-1}{2}} Q_{2 2 0} \right\|^2 \lesssim & \epsilon_0^2(1+t)^{-6} \left(\left\|\varsigma^\frac{\alpha+i }{2} \partial_t^{j-2} \partial_x^{i+1} w \right\|^2 + \left\| \varsigma^\frac{\alpha+i-2}{2} \partial_t^{j-2} \partial_x^i w \right\|^2 \right.\\
& \left. + \sum_{r=1}^{i -1} \left\|\varsigma^{\frac{\alpha+i-2}{2}} \partial_t^{j-2 }\partial_x^{r} w \right\|^2 \right)
\lesssim \epsilon_0^2(1+t)^{-2-2j} \sum_{r=1}^{i+ 1}\mathcal{E}_{j-2,r},
\end{split}\end{equation*}
where we have used the Hardy inequality \eqref{hardybdry} and the equivalence \eqref{varsigma} to derive that
\begin{equation*}\begin{split}
\left\| \varsigma^\frac{\alpha+i-2}{2} \partial_t^{j-2} \partial_x^i w \right\|^2
\lesssim \left\|\varsigma^\frac{\alpha+i}{2} \partial_t^{j-2} \partial_x^{i+1} w \right\|^2 + \left\| \varsigma^\frac{\alpha+i }{2} \partial_t^{j-2} \partial_x^i w \right\|^2 .
\end{split}\end{equation*}
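Schematically, writing $d$ for the distance to the boundary (so that $d\sim\varsigma$ by \eqref{varsigma}), the weighted Hardy inequality \eqref{hardybdry} yields estimates of the form
\begin{equation*}
\int d^{\,a-2}\, u^2 \, dx \lesssim \int d^{\,a} \left( u^2 + u_x^2 \right) dx, \qquad a>1;
\end{equation*}
taking $u=\partial_t^{j-2}\partial_x^i w$ and $a=\alpha+i$ gives the displayed bound.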
Similarly to the estimate for $ Q_{2 2 0} $, we can obtain
\begin{equation*}\begin{split}
\left\|\varsigma^{\frac{\alpha+i-1}{2}} Q_{211} \right\|^2 \lesssim & \epsilon_0^2 (1+t)^{-4} \left(\left\|\varsigma^\frac{\alpha+i-1 }{2} \partial_t^{j-1 } \partial_x^{i } w \right\|^2 + \left\| \varsigma^\frac{\alpha+i-3}{2} \partial_t^{j-1 } \partial_x^{i-1} w \right\|^2 \right.\\
&\left. + \sum_{r=1}^{i -2} \left\|\varsigma^{\frac{\alpha+i-3}{2}} \partial_t^{j -1 }\partial_x^{r} w \right\|^2 \right)
\lesssim \epsilon_0^2 (1+t)^{-2-2j}\sum_{r=1}^{i }\mathcal{E}_{j-1,r},
\end{split}\end{equation*}
\begin{equation*}\begin{split}
\left\|\varsigma^{\frac{\alpha+i-1}{2}} Q_{202} \right\|^2 \lesssim & \epsilon_0^2 (1+t)^{-2} \left(\left\|\varsigma^\frac{\alpha+i-2 }{2} \partial_t^{j } \partial_x^{i-1 } w \right\|^2 + \left\| \varsigma^\frac{\alpha+i-4}{2} \partial_t^{j } \partial_x^{i-2} w \right\|^2 \right.\\
&\left. + \sum_{r=1}^{i -3} \left\|\varsigma^{\frac{\alpha+i-4}{2}} \partial_t^{j }\partial_x^{r} w \right\|^2 \right)
\lesssim \epsilon_0^2 (1+t)^{-2-2j}\sum_{r=1}^{i-1}\mathcal{E}_{j,r}.
\end{split}\end{equation*}
It should be noted that $Q_{211}$ and $Q_{202}$ appear when $i\ge 2$ and $i\ge 3$, respectively. This ensures the application of the Hardy inequality. Other cases can be done similarly, since the leading term of $K_{nm}$ is $\sum_{\iota=0}^n (1+t)^{-1-\iota-\frac{1}{\gamma+1}}\left|\partial_t^{n-\iota}\partial_x^{m+1} w \right|$.
Now, we conclude that
\begin{equation*}\begin{split}
\left\| \varsigma^{\frac{\alpha+i-1}{2}} Q_2 \right\|^2 \lesssim & \epsilon_0^2(1+t)^{-2-2j}\left(\mathcal{E}_{j , i} + \sum_{0\le \iota\le j, \ r \ge 1 , \ \iota+r \le i+j-1 } \mathcal{E}_{ \iota, r} \right)(t).
\end{split}\end{equation*}
Substituting this and \eqref{building} into \eqref{important} gives, for suitably small $\epsilon_0$,
\begin{equation}\label{goal}\begin{split}
\mathcal{E}_{j,i}(t)
\lesssim &
(1+t)^{2j+2} \left(\left\| \varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j+2} \partial_x^{i-1} w \right\|^2
+
\left\|\varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j+1} \partial_x^{i-1} w \right\|^2 \right) \\
& + (1+t)^{2j} \left\| \varsigma^\frac{\alpha+i}{2} \partial_t^j \partial_x^i w \right\|^2 + \sum_{0\le \iota\le j, \ r \ge 1 , \ \iota+r \le i+j-1 } \mathcal{E}_{ \iota, r}(t) .
\end{split}\end{equation}
In particular, when $i\ge 3$,
\begin{equation}\label{goal3}\begin{split}
\mathcal{E}_{j,i}(t)
\lesssim & \mathcal{E}_{ j+2, i-2} + \mathcal{E}_{ j+1, i-2} + \mathcal{E}_{ j, i-1} + \sum_{0\le \iota\le j, \ r \ge 1 , \ \iota+r \le i+j-1 } \mathcal{E}_{ \iota, r}(t) .
\end{split}\end{equation}
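To see how \eqref{goal3} follows from \eqref{goal}, note that by the definition of $\mathcal{E}_{j,i}$ in \eqref{important} each norm on the right-hand side of \eqref{goal} is dominated by one of these energies; for instance,
\begin{equation*}
(1+t)^{2j+2} \left\| \varsigma^{\frac{\alpha+i-1}{2}} \partial_t^{j+2} \partial_x^{i-1} w \right\|^2
\le (1+t)^{2(j+2)} \left\| \varsigma^{\frac{\alpha+(i-2)+1}{2}} \partial_t^{j+2} \partial_x^{(i-2)+1} w \right\|^2
\lesssim \mathcal{E}_{j+2, i-2}(t),
\end{equation*}
and similarly for the other two norms. The restriction $i\ge 3$ guarantees $i-2\ge 1$, so that $\mathcal{E}_{j+2,i-2}$ and $\mathcal{E}_{j+1,i-2}$ are indeed among the energies defined above.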
In what follows, we use \eqref{goal} and the induction hypothesis \eqref{asssup} to show that \eqref{lem43est} holds for $i+j=k+1$.
First, choosing $j=k$ and $i=1$ in \eqref{goal} gives
\begin{equation*}\begin{split}
& \mathcal{E}_{k,1}(t)
\lesssim \mathcal{E}_{k+1}(t) + \mathcal{E}_{k}(t) + \sum_{\iota\ge 0, \ r\ge 1, \ \iota+r \le k } \mathcal{E}_{ \iota, r}(t),
\end{split}\end{equation*}
which, together with \eqref{asssup}, implies
\begin{equation}\label{nek1t}\begin{split}
& \mathcal{E}_{k,1}(t)
\lesssim \widetilde{\mathcal{E}}_{0}(t) + \sum_{\iota=1}^{k+1}\mathcal{E}_{\iota}(t).
\end{split}\end{equation}
Similarly,
\begin{equation*}\begin{split}
& \mathcal{E}_{k-1,2}(t) \lesssim \mathcal{E}_{k+1}(t) + \mathcal{E}_{k}(t) + \mathcal{E}_{k-1,1}(t) + \sum_{\iota\ge 0, \ r\ge 1, \ \iota+r \le k } \mathcal{E}_{ \iota, r}(t)
\lesssim \widetilde{\mathcal{E}}_{0}(t) + \sum_{\iota=1}^{k+1}\mathcal{E}_{\iota}(t).
\end{split}\end{equation*}
For $\mathcal{E}_{k-2, 3}$, it follows from \eqref{goal3}, \eqref{nek1t} and \eqref{asssup} that
\begin{equation*}\begin{split}
\mathcal{E}_{k-2,3}(t)
\lesssim & \mathcal{E}_{k , 1}(t) + \mathcal{E}_{k -1 , 1}(t) + \mathcal{E}_{k-2, 2}(t) + \sum_{\iota\ge 0, \ r\ge 1, \ \iota+r \le k } \mathcal{E}_{ \iota, r}(t) \lesssim \widetilde{\mathcal{E}}_{0}(t) + \sum_{\iota=1}^{k+1}\mathcal{E}_{\iota}(t).
\end{split}\end{equation*}
The other cases can be handled similarly. So we have proved \eqref{lem43est} when $i+j=k+1$.
This finishes the proof of Lemma \ref{lem43}. $\Box$
\subsection{Nonlinear weighted energy estimates}
In this subsection, we prove that the weighted energy $\mathcal{E}_j(t)$ can be bounded by the initial data for $t\in [0,T]$.
\begin{prop} Suppose that \eqref{apriori} holds for a suitably small positive number $\epsilon_0 \in(0,1)$. Then for $t\in [0, T]$,
\begin{equation}\label{energy}
\mathcal{E}_j(t) \lesssim \sum_{\iota=0}^j \mathcal{E}_{\iota}(0), \ \ j=0,1, \cdots, l.
\end{equation}
\end{prop}
The proof of this proposition consists of Lemma \ref{lem51} and Lemma \ref{lem53}.
\subsubsection{Basic energy estimates}
\begin{lem}\label{lem51} Suppose that \eqref{apriori} holds for a suitably small positive number $\epsilon_0 \in(0,1)$. Then,
\begin{equation}\label{5-5}\begin{split}
\mathcal{E}_0(t)
+
\int_0^t \int \left[ (1+s) \bar\rho_0w_s^2 + (1+s)^{-1} \bar\rho_0^\gamma w_{x}^2 \right] dxds
\lesssim \mathcal{E}_0(0), \ \ t\in[0,T].
\end{split}\end{equation}
\end{lem}
{\bf Proof}. Multiply \eqref{eluerpert} by $w_t$ and integrate the product with respect to the spatial variable to get
\begin{equation*}\begin{split}
\frac{d}{dt}\int \frac{1}{2}\bar\rho_0w_{t}^2 dx +
\int \bar\rho_0w_t^2 dx - \int \bar\rho_0^\gamma \left[ (\tilde{\eta}_x+w_x)^{-\gamma}
-\tilde{\eta}_x ^{-\gamma} \right] w_{xt} dx =0.
\end{split}\end{equation*}
Note that
\begin{equation*}\begin{split}
&\left[(\tilde{\eta}_x+w_x)^{-\gamma} - \tilde{\eta}_x^{-\gamma} \right] w_{xt} \\
=&\left[(\tilde{\eta}_x+w_x)^{-\gamma}(\tilde{\eta}_x+w_x)_t - (\tilde{\eta}_x+w_x)^{-\gamma} \tilde{\eta}_{xt} \right] -\left[ \left(\tilde{\eta}_x^{-\gamma} w_{x}\right)_t - \left(\tilde{\eta}_x^{-\gamma}\right)_t w_x\right] \\
=& \frac{1}{1-\gamma}\left[(\tilde{\eta}_x+w_x)^{1-\gamma } - (1-\gamma )\tilde{\eta}_x^{-\gamma} w_{x}\right]_t - \left[ (\tilde{\eta}_x+w_x)^{-\gamma} + \gamma \tilde{\eta}_x^{-\gamma-1} w_x \right]\tilde{\eta}_{xt} \\
=&\frac{1}{1-\gamma}\left[(\tilde{\eta}_x+w_x)^{1-\gamma } -\tilde{\eta}_x^{1-\gamma }-(1-\gamma)\tilde{\eta}_x^{-\gamma} w_{x}\right]_t - \tilde{\eta}_{xt} \mathfrak{F} ,
\end{split}\end{equation*}
where
$$ \mathfrak{F} (x,t):=(\tilde{\eta}_x+w_x)^{-\gamma}
-\tilde{\eta}_x ^{-\gamma} + \gamma \tilde{\eta}_x^{-\gamma-1} w_x.
$$
We then have
\begin{equation}\label{h1}\begin{split}
\frac{d}{dt}\int \mathfrak{E}_0(x,t) dx
+
\int \bar\rho_0w_t^2 dx + \int \bar\rho_0^\gamma \tilde{\eta}_{xt} \mathfrak{F} dx=0,
\end{split}\end{equation}
where
\begin{equation*}\begin{split}
\mathfrak{E}_0(x,t):=&\frac{1}{2}\bar\rho_0w_{t}^2 + \frac{1}{\gamma-1} \bar\rho_0^\gamma \left[(\tilde{\eta}_x+w_x)^{1-\gamma } -\tilde{\eta}_x^{1-\gamma }-(1-\gamma)\tilde{\eta}_x^{-\gamma} w_{x}\right]\\
\sim & \bar\rho_0w_{t}^2 + \bar\rho_0^\gamma \tilde{\eta}_x^{-\gamma-1} w_x^2 \sim \bar\rho_0w_{t}^2 + {\bar\rho_0^\gamma}({1+t})^{-1} w_x^2 ,
\end{split}\end{equation*}
due to the Taylor expansion, the smallness of $|w_x|$ and \eqref{decay}. Integrating \eqref{h1} gives
\begin{equation}\label{5-3}\begin{split}
\int \mathfrak{E}_0(x,t) dx +\int_0^t \int \bar\rho_0w_s^2 dx ds+ \int_0^t\int\bar\rho_0^\gamma \tilde{\eta}_{xs} \mathfrak{F} dxds
= \int \mathfrak{E}_0(x,0) dx.
\end{split}\end{equation}
Multiplying \eqref{eluerpert} by $w$ and integrating the product with respect to $x$ and $t$, one has
\begin{equation*}\begin{split}
\left.\int \bar\rho_0\left(\frac{1}{2} w^2 + ww_t \right) dx \right|_0^t - \int_0^t \int \bar\rho_0^\gamma\left[ (\tilde{\eta}_{x}+w_x)^{-\gamma} -\tilde{\eta}_{x}^{-\gamma} \right] w_{x} dx ds = \int_0^t \int \bar\rho_0w_s^2 dx ds .
\end{split}\end{equation*}
It follows from the Taylor expansion and the smallness of $|w_x|$ that
$$\left[ (\tilde{\eta}_{x}+w_x)^{-\gamma} -\tilde{\eta}_{x}^{-\gamma} \right] w_{x} \le -(\gamma/2) \tilde{\eta}_{x}^{-\gamma-1} w_{x}^2.$$
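This can be verified directly: setting $a=\tilde{\eta}_{x}$ and $b=w_x$, Taylor's theorem with the Lagrange remainder gives, for some $\theta\in(0,1)$,
\begin{equation*}
\left[(a+b)^{-\gamma} - a^{-\gamma}\right] b = -\gamma (a+\theta b)^{-\gamma-1} b^2 \le -\frac{\gamma}{2}\, a^{-\gamma-1} b^2,
\end{equation*}
where the last inequality holds once $|b|$ is small enough that $(a+\theta b)^{-\gamma-1}\ge \frac{1}{2} a^{-\gamma-1}$, which is exactly the smallness of $|w_x|$ invoked above.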
We then obtain, using the Cauchy inequality, \eqref{5-3} and \eqref{decay}, that
\begin{equation}\label{5-4}\begin{split}
\int \left(\bar\rho_0 w^2 \right)(x,t) dx + \int_0^t \int (1+s)^{-1} \bar\rho_0^\gamma w_{x}^2 dx ds \lesssim \int \left(\bar\rho_0w^2 + \mathfrak{E}_0\right)(x,0) dx
=\mathcal{E}_0(0).
\end{split}\end{equation}
Next, we show the time decay of the energy norm. It follows from \eqref{h1} that
\begin{equation*}\begin{split}
\frac{d}{dt}\left[(1+t)\int \mathfrak{E}_0(x,t) dx\right]
+
(1+t)\int \bar\rho_0w_t^2 dx + (1+t)\int \bar\rho_0^\gamma \tilde{\eta}_{xt} \mathfrak{F} dx=\int \mathfrak{E}_0(x,t) dx.
\end{split}\end{equation*}
Therefore,
\begin{equation*}\begin{split}
(1+t)\int \mathfrak{E}_0(x,t) dx
+
\int_0^t (1+s)\int \bar\rho_0w_s^2 dxds
\le \int \mathfrak{E}_0(x,0) dx+\int_0^t \int \mathfrak{E}_0(x,s) dxds \\
\lesssim \int \mathfrak{E}_0(x,0) dx+\int_0^t \int \left[ \bar\rho_0w_s^2 + (1+s)^{-1} \bar\rho_0^\gamma w_{x}^2 \right] dxds
\lesssim \mathcal{E}_0(0),
\end{split}\end{equation*}
where estimates \eqref{5-3} and \eqref{5-4} have been used in the derivation of the last inequality. This implies
\begin{equation*}\begin{split}
\int \left[ (1+t)\bar\rho_0w_{t}^2 + {\bar\rho_0^\gamma} w_x^2 \right]dx
+
\int_0^t (1+s)\int \bar\rho_0w_s^2 dxds
\lesssim \mathcal{E}_0(0),
\end{split}\end{equation*}
which, together with \eqref{5-4}, gives \eqref{5-5}. This finishes the proof of Lemma \ref{lem51}. $\Box$
\subsubsection{Higher-order energy estimates}
For $k\ge 1$, $\partial_t^k\eqref{eluerpert}$ yields that
\begin{equation}\label{ptbkt}\begin{split}
\bar\rho_0 \partial_t^{k+2}w +
\bar\rho_0\partial_t^{k+1}w -\gamma \left[ {\bar\rho_0^\gamma}
(\tilde{\eta}_{x}+w_x)^{-\gamma-1} \partial_t^k w_x + {\bar\rho_0^\gamma}J \right]_x =0,
\end{split}\end{equation}
where
\begin{equation*}\begin{split}
J := & \partial_t^{k-1}\left\{\tilde\eta_{xt}\left[(\tilde{\eta}_{x}+w_x)^{-\gamma-1}-\tilde{\eta}_{x}^{-\gamma-1}\right]
\right\} \\
& + \left\{ \partial_t^{k-1}\left[\left(\tilde{\eta}_{x}+w_x\right)^{-\gamma-1} w_{xt}\right]-
\left(\tilde{\eta}_{x}+w_x\right)^{-\gamma-1} \partial_t^k w_x \right\}.
\end{split}\end{equation*}
To obtain the leading terms of $J$, we single out the terms involving $\partial_t^{k-1} w_x$. To this end, we rewrite $J$ as
\begin{equation}\label{j1j2}\begin{split}
J= & \tilde\eta_{xt}\partial_t^{k-1}\left[(\tilde{\eta}_{x}+w_x)^{-\gamma-1}
-\tilde{\eta}_{x}^{-\gamma-1}\right]
+(k-1) \left[\left(\tilde{\eta}_{x}+w_x\right)^{-\gamma-1}\right]_t
\partial_t^{k-1}w_x \\
&+ w_{xt} \partial_t^{k-1}\left[\left(\tilde{\eta}_{x}+w_x\right)^{-\gamma-1}\right]
+ \sum_{\iota=1}^{k-1} C_{k-1}^\iota \left(\partial_t^{\iota}\tilde\eta_{xt}\right) \partial_t^{k-1-\iota}\left[(\tilde{\eta}_{x}+w_x)^{-\gamma-1}-\tilde{\eta}_{x}^{-\gamma-1}\right]
\\
&+ \sum_{\iota=2}^{k-2} C_{k-1}^\iota \left( \partial_t^{k-\iota} w_{x} \right) \partial_t^{\iota}\left[\left(\tilde{\eta}_{x}+w_x\right)^{-\gamma-1}\right]
\\
= & k\left[(\tilde{\eta}_{x}+w_x)^{-\gamma-1}\right]_t \partial_t^{k-1}w_x + \tilde{J},
\end{split}\end{equation}
where
\begin{equation*}\begin{split}
& \tilde{J} : = -(\gamma+1) \left(\tilde\eta_{x }+w_{x }\right)_t \sum_{\iota=1}^{k-2} C_{k-2}^\iota \left(\partial_t^{k-1-\iota}w_x\right) \partial_t^{\iota}\left[( \tilde{\eta}_{x}+w_x)^{-\gamma-2} \right] \\
& -(\gamma+1) \left\{ \tilde\eta_{xt}
\partial_t^{k-2 } \left\{ \tilde\eta_{xt}\left[ (\tilde{\eta}_{x}+w_x)^{-\gamma-2} -\tilde{\eta}_{x}^{-\gamma-2}
\right] \right\} + w_{xt} \partial_t^{k-2 } \left[(\tilde{\eta}_{x}+w_x)^{-\gamma-2} \tilde\eta_{xt}\right] \right\} \\
&+ \sum_{\iota=1}^{k-1} C_{k-1}^\iota \left(\partial_t^{\iota}\tilde\eta_{xt}\right) \partial_t^{k-1-\iota}\left[(\tilde{\eta}_{x}+w_x)^{-\gamma-1}-\tilde{\eta}_{x}^{-\gamma-1}\right]
\\
& + \sum_{\iota=2}^{k-2} C_{k-1}^\iota \left( \partial_t^{k-\iota} w_{x} \right) \partial_t^{\iota}\left[\left(\tilde{\eta}_{x}+w_x\right)^{-\gamma-1}\right]
.
\end{split}\end{equation*}
Here summations $\sum_{\iota=1}^{k-2}$ and $\sum_{\iota=2}^{k-2}$ should be understood as zero when $k=1, 2$ and $k=1, 2, 3$, respectively.
It should be noted that $\tilde J$ contains only terms involving the lower-order derivatives $w_x$, $\cdots$, $\partial_t^{k-2}w_x$. In particular, $\tilde J=0$ when $k=1$.
\vskip 1cm
\begin{lem}\label{lem53} Suppose that \eqref{apriori} holds for some small positive number $\epsilon_0\in(0,1)$. Then for all $j= 1,\cdots,l$,
\begin{equation}\label{new512}\begin{split}
& \mathcal{E}_j(t) + \int_0^t \int \left[(1+ s)^{2j+1} \bar\rho_0 \left(\partial_s^{j+1} w\right)^2 + (1+ s)^{2j-1} \bar\rho_0^\gamma \left(\partial_s^{j } w_x\right)^2 \right] dx ds
\\
\lesssim & \sum_{\iota=0}^j \mathcal{E}_{\iota}(0), \ \ t\in[0,T].
\end{split}\end{equation}
\end{lem}
{\bf Proof}. We use induction to prove \eqref{new512}. As shown in Lemma \ref{lem51}, \eqref{new512} holds for $j=0$.
For $1\le k \le l$, we make the induction hypothesis that \eqref{new512} holds for all $j=0, 1, \cdots, k-1$, i.e.,
\begin{equation}\label{512}\begin{split}
& \mathcal{E}_j(t) + \int_0^t \int \left[(1+ s)^{2j+1} \bar\rho_0 \left(\partial_s^{j+1} w\right)^2 + (1+ s)^{2j-1} \bar\rho_0^\gamma \left(\partial_s^{j } w_x\right)^2 \right] dx ds\\
\lesssim & \sum_{\iota=0}^j \mathcal{E}_{\iota}(0), \ 0\le j \le k-1.
\end{split}\end{equation}
It then suffices to prove that \eqref{new512} holds for $j=k$ under the induction hypothesis \eqref{512}.
{\bf Step 1}. In this step, we prove that
\begin{equation}\label{516'}\begin{split}
&\frac{d}{dt} \int \left[ \frac{1}{2} \bar\rho_0 \left(\partial_t^{k+1} w\right)^2 + \widetilde{\mathfrak{E}}_k(x,t) \right] dx +\int \bar\rho_0 \left(\partial_t^{k+1}w\right)^2 dx \\
\lesssim &(\delta+ \epsilon_0) (1+t)^{-2k-2} \mathcal{E}_{k}(t) + \left(\delta^{-1}+\epsilon_0\right) (1+t)^{-2k-2} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(t) ,
\end{split}\end{equation}
for any positive number $\delta$ to be specified later, where
$$
\widetilde{\mathfrak{E}}_k(x,t) : = \gamma {\bar\rho_0^\gamma} \left[\frac{1}{2} (\tilde{\eta}_{x}+w_x)^{-\gamma-1} \left(\partial_t^k w_x\right)^2 + J \partial_t^{k } w_x \right]
$$
satisfying the following estimates:
\begin{equation}\label{lowbds}\begin{split}
\int \widetilde{\mathfrak{E}}_k(x,t)dx \ge C^{-1} (1+t)^{-1}\int {\bar\rho_0^\gamma}
\left(\partial_t^k w_{x}\right)^2 dx -C (1+t)^{-2k-1}\left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota} \right)(t),
\end{split}\end{equation}
\begin{equation}\label{upbds}\begin{split}
\int \widetilde{\mathfrak{E}}_k(x,t)dx \lesssim (1+t)^{-1}\int {\bar\rho_0^\gamma}
\left(\partial_t^k w_{x}\right)^2 dx + (1+t)^{-2k-1}\left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(t).
\end{split}\end{equation}
We begin by integrating the product of \eqref{ptbkt} and $\partial_t^{k+1} w$
with respect to the spatial variable, which gives
\begin{equation}\label{515}\begin{split}
\frac{d}{dt} \int \frac{1}{2} \bar\rho_0 \left(\partial_t^{k+1} w\right)^2 dx +\int \bar\rho_0 \left(\partial_t^{k+1}w\right)^2 dx + \frac{\gamma}{2} \frac{d}{dt}
\int {\bar\rho_0^\gamma} (\tilde{\eta}_{x}+w_x)^{-\gamma-1} \left(\partial_t^k w_x\right)^2 dx
\\
= \frac{\gamma}{2}
\int {\bar\rho_0^\gamma} \left[(\tilde{\eta}_{x}+w_x)^{-\gamma-1} \right]_t \left(\partial_t^k w_x\right)^2 dx - \gamma \int {\bar\rho_0^\gamma} J \partial_t^{k+1} w_x dx.
\end{split}\end{equation}
We use \eqref{j1j2} to estimate the last term on the right-hand side of \eqref{515} as follows:
\begin{equation*}\begin{split}
& \int {\bar\rho_0^\gamma} J \partial_t^{k+1} w_x dx =\frac{d}{dt} \int {\bar\rho_0^\gamma} J \partial_t^{k } w_x dx - \int {\bar\rho_0^\gamma} J_t \partial_t^{k } w_x dx \\
= & \frac{d}{dt} \int {\bar\rho_0^\gamma} J \partial_t^{k } w_x dx + (\gamma+1) k \int {\bar\rho_0^\gamma} (\tilde{\eta}_{x}+w_x)^{-\gamma-2}\left( w_{xt}
+ \tilde{\eta}_{xt} \right) \left(\partial_t^{k } w_x \right)^2 dx \\
& - \int {\bar\rho_0^\gamma} \left\{ k \left[ (\tilde{\eta}_{x}+w_x)^{-\gamma-1}\right]_{tt} \partial_t^{k-1} w_x + \tilde{J}_t \right\}
\partial_t^{k } w_x dx.
\end{split}\end{equation*}
It then follows from \eqref{515} that
\begin{equation*}\begin{split}
&\frac{d}{dt} \int \left[ \frac{1}{2} \bar\rho_0 \left(\partial_t^{k+1} w\right)^2 + \widetilde{\mathfrak{E}}_k(x,t) \right] dx +\int \bar\rho_0 \left(\partial_t^{k+1}w\right)^2 dx
\\
=&- \gamma (\gamma+1)\left( k +\frac{1}{2}\right) \int {\bar\rho_0^\gamma} (\tilde{\eta}_{x}+w_x)^{-\gamma-2}\left( w_{xt}
+ \tilde{\eta}_{xt} \right) \left(\partial_t^{k } w_x \right)^2 dx \\
& + \gamma \int {\bar\rho_0^\gamma} \left\{ k \left[ (\tilde{\eta}_{x}+w_x)^{-\gamma-1}\right]_{tt} \partial_t^{k-1} w_x + \tilde{J}_t \right\}
\partial_t^{k } w_x dx .
\end{split}\end{equation*}
This, together with the fact that $\tilde{\eta}_{xt}\ge 0$ and \eqref{basic}, implies
\begin{equation}\label{516}\begin{split}
&\frac{d}{dt} \int \left[ \frac{1}{2} \bar\rho_0 \left(\partial_t^{k+1} w\right)^2 + \widetilde{\mathfrak{E}}_k(x,t) \right] dx +\int \bar\rho_0 \left(\partial_t^{k+1}w\right)^2 dx
\\
\le & -\gamma (\gamma+1)\left( k +\frac{1}{2}\right) \int {\bar\rho_0^\gamma} (\tilde{\eta}_{x}+w_x)^{-\gamma-2} w_{xt}
\left(\partial_t^{k } w_x \right)^2 dx \\
& + k\gamma \int {\bar\rho_0^\gamma} \left[ (\tilde{\eta}_{x}+w_x)^{-\gamma-1}\right]_{tt} \left(\partial_t^{k-1} w_x \right)
\partial_t^{k } w_x dx+ \gamma \int {\bar\rho_0^\gamma} \tilde J_t
\partial_t^{k } w_x dx
.
\end{split}\end{equation}
For $\widetilde{\mathfrak{E}}_k$, it follows from \eqref{basic} and the Cauchy-Schwarz inequality that
\begin{equation}\label{eek1}\begin{split}
\widetilde{\mathfrak{E}}_k(x,t) \ge {\bar\rho_0^\gamma}\left[
C^{-1}(1+t)^{-1}\left(\partial_t^k w_{x}\right)^2 -C(1+t)J^2 \right],
\end{split}\end{equation}
\begin{equation}\label{eek2}\begin{split}
\widetilde{\mathfrak{E}}_k(x,t) \lesssim {\bar\rho_0^\gamma}\left[
(1+t)^{-1}\left(\partial_t^k w_{x}\right)^2 + (1+t)J^2 \right].
\end{split}\end{equation}
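Both bounds follow from a single application of the Cauchy-Schwarz inequality with the natural time weight: the cross term in $\widetilde{\mathfrak{E}}_k$ satisfies
\begin{equation*}
\left| J\, \partial_t^{k} w_x \right| \le \epsilon (1+t)^{-1} \left(\partial_t^{k} w_x\right)^2 + \frac{1}{4\epsilon}\,(1+t) J^2
\end{equation*}
for any $\epsilon>0$, while by \eqref{basic} the coefficient of the quadratic part $\frac{\gamma}{2}{\bar\rho_0^\gamma}(\tilde{\eta}_{x}+w_x)^{-\gamma-1}\left(\partial_t^k w_x\right)^2$ is bounded above and below by multiples of $(1+t)^{-1}$, so choosing $\epsilon$ small and fixed yields \eqref{eek1} and \eqref{eek2}.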
We now need to bound $J$, which contains the lower-order terms involving $w_x, \cdots, \partial_t^{k-1}w_x$. It follows from \eqref{j1j2}, \eqref{apriori} and \eqref{decay} that
\begin{equation*}\begin{split}
J^2 \lesssim & \left|\left[ (\tilde{\eta}_{x}+w_x)^{-\gamma-1}\right]_t \right|^2 \left( \partial_t^{k-1} w_x \right)^2 + {\tilde{J}}^2
\lesssim (1+t)^{-4} \left(\partial_t^{k-1} w_x \right)^2 + \tilde J^2 \\
= &(1+t)^{-2k-2} \left[(1+t)^{k-1}\partial_t^{k-1} w_x \right]^2 + \tilde J^2
\end{split}\end{equation*}
which yields that
\begin{equation}\label{ggg}\begin{split}
\int {\bar\rho_0^\gamma} (1+t) J^2 dx
\lesssim &(1+t)^{-2k-1} \int {\bar\rho_0^\gamma}\left[(1+t)^{k-1}\partial_t^{k-1} w_x \right]^2 dx + (1+t)\int {\bar\rho_0^\gamma}\tilde J^2dx.
\end{split}\end{equation}
We now show that the integral involving $\tilde J$ in \eqref{ggg} can be bounded by $\widetilde{\mathcal{E}}_{0}+ \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}$ as follows. First, in view of \eqref{apriori} and \eqref{decay}, we have
\begin{equation}\label{tildej}\begin{split}
& | \tilde{J} | \lesssim \sum_{\iota=1}^{k-1} (1+t)^{-2-\iota} \left|\partial_t^{k-1-\iota}w_x\right|
+(1+t)^{-1-\frac{1}{ \gamma+ 1}}\left|\partial_t^{2}w_x\right| \left|\partial_t^{k-2}w_x\right| \\
&+(1+t)^{-2-\frac{1}{ \gamma+1}}\left|\partial_t^{2}w_x\right|\left|\partial_t^{k-3}w_x\right| + (1+t)^{-1-\frac{1}{\gamma+1}}\left|\partial_t^{3}w_x\right|\left|\partial_t^{k-3}w_x\right|
+ {\rm l.o.t.},
\end{split}\end{equation}
where and thereafter the notation ${\rm l.o.t.}$ is used to denote the lower-order terms involving $\partial_t^{\iota}w_x$ with $\iota=2,\cdots, k-4$. It should be noticed that the second term on the right-hand side of \eqref{tildej} appears only when $k-2\ge 2$, the third when $k-3\ge 2$, and the fourth when $k-3\ge 3$.
Clearly, we use \eqref{apriori} again to obtain
\begin{equation*}\begin{split}
| \tilde{J} | \lesssim & \sum_{\iota=1}^{k-1} (1+t)^{-2-\iota} \left|\partial_t^{k-1-\iota}w_x\right|
+\epsilon_0 (1+t)^{-3-\frac{1}{ \gamma+ 1}} \varsigma^{-\frac{1}{2}}\left|\partial_t^{k-2}w_x\right| \\
& + \epsilon_0 (1+t)^{-4-\frac{1}{\gamma+1}}\varsigma^{-1}\left|\partial_t^{k-3}w_x\right| + {\rm l.o.t.},
\end{split}\end{equation*}
if $k\ge 6$. Similarly, we can bound ${\rm l.o.t.}$ and obtain
\begin{equation}\label{tildej'}\begin{split}
| \tilde{J} | \lesssim & \sum_{\iota=1}^{k-1} (1+t)^{-2-\iota} \left|\partial_t^{k-1-\iota}w_x\right|
+\epsilon_0 \sum_{\iota=2}^{\left[{k}/{2}\right]} (1+t)^{-1-\iota-\frac{1}{\gamma+1}} \varsigma^{ \frac{1-\iota}{2}}\left|\partial_t^{k-\iota}w_x\right| ,
\end{split}\end{equation}
which implies
\begin{equation}\label{ggg1}\begin{split}
(1+t)\int {\bar\rho_0^\gamma}\tilde J^2dx
\lesssim & (1+t)^{-2k-1}\left(\widetilde{\mathcal{E}}_0(t) + \sum_{\iota=1}^{k-2} \mathcal{E}_{k-1-\iota}(t)\right) \\
& + \epsilon_0^2 \sum_{\iota=2}^{\left[{k}/{2}\right]} (1+t)^{-1-2\iota-\frac{2}{\gamma+1}} \int \bar\rho_0^\gamma \varsigma^{ {1-\iota} }\left|\partial_t^{k-\iota}w_x\right|^2 dx .
\end{split}\end{equation}
In view of the Hardy inequality \eqref{hardybdry} and the equivalence of $d$ and $\varsigma$, \eqref{varsigma}, we see that for $\iota=2,\cdots,\left[{k}/{2}\right]$,
\begin{equation*}\begin{split}
& \int \bar\rho_0^\gamma \varsigma^{ {1-\iota} }\left|\partial_t^{k-\iota}w_x\right|^2 dx
=\int \varsigma^{ {\alpha + 2 -\iota} }\left|\partial_t^{k-\iota}w_x\right|^2 dx \\
\lesssim & \int \varsigma^{ \alpha + 2 -\iota +2 }\left(\left|\partial_t^{k-\iota}w_x\right|^2
+ \left|\partial_t^{k-\iota}w_{xx}\right|^2\right)dx \\
\lesssim & \sum_{i=0}^{\iota-1} \int \varsigma^{ \alpha + \iota } \left|\partial_t^{k-\iota}\partial_x^{i}w_x\right|^2 dx \lesssim (1+t)^{2\iota-2k}\sum_{i=1}^{\iota-1} \mathcal{E}_{k-\iota, i}.
\end{split}\end{equation*}
Since $\alpha + 2 -\iota \ge \alpha - \left[\left[\alpha\right]/{2}\right] \ge 0$ for $k\le l$, which ensures the application of the Hardy inequality.
It then follows from \eqref{ggg}, \eqref{ggg1} and the elliptic estimates \eqref{ellipticestimate} that
\begin{equation}\label{jjj}\begin{split}
(1+t)\int {\bar\rho_0^\gamma} J^2dx
\lesssim & (1+t)^{-2k-1}\left(\widetilde{\mathcal{E}}_{0}(t)+ \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}(t)\right).
\end{split}\end{equation}
This, together with \eqref{eek1} and \eqref{eek2}, proves \eqref{lowbds} and \eqref{upbds}.
In what follows, we estimate the terms on the right-hand side of \eqref{516} to prove \eqref{516'}. It follows from \eqref{apriori} and \eqref{decay} that
\begin{equation*}\begin{split}
\left|\int {\bar\rho_0^\gamma} (\tilde{\eta}_{x}+w_x)^{-\gamma-2} w_{xt}
\left(\partial_t^{k } w_x \right)^2 dx \right| \lesssim
\epsilon_0 (1+t)^{-2-\frac{1}{\gamma+1}}\int {\bar\rho_0^\gamma}
\left(\partial_t^{k } w_x \right)^2 dx
\end{split}\end{equation*}
and
\begin{equation*}\begin{split}
&\left| \int {\bar\rho_0^\gamma} \left[ (\tilde{\eta}_{x}+w_x)^{-\gamma-1}\right]_{tt} \left( \partial_t^{k-1} w_x \right) \partial_t^k w_x dx \right|\\
\lesssim & \int {\bar\rho_0^\gamma} \left[(1+t)^{-3} +(1+t)^{-1-\frac{1}{\gamma+1}}|w_{xtt}| \right]\left|\partial_t^{k-1} w_x \right| \left| \partial_t^k w_x \right| dx \\
\lesssim & (1+t)^{-3} \int {\bar\rho_0^\gamma} \left| \partial_t^{k-1} w_x \right| \left| \partial_t^k w_x \right| dx + \epsilon_0 (1+t)^{-3-\frac{1}{\gamma+1}} \int \bar\rho_0^{(\gamma+1)/2} \left| \partial_t^{k-1} w_x \right| \left| \partial_t^k w_x \right| dx \\ \lesssim & \left(\delta +\epsilon_0\right) (1+t)^{-2} \int {\bar\rho_0^\gamma} \left(\partial_t^k w_x\right)^2 dx + \delta^{-1} (1+t)^{-4} \int {\bar\rho_0^\gamma} \left(\partial_t^{k-1} w_x\right)^2 dx \\
& + \epsilon_0 (1+t)^{-4-\frac{2}{\gamma+1}} \int {\bar\rho_0 } \left(\partial_t^{k-1} w_x\right)^2 dx,
\end{split}\end{equation*}
for any $\delta>0$. With the help of the elliptic estimates \eqref{ellipticestimate}, we have
\begin{equation*}\begin{split}
\int {\bar\rho_0 } \left(\partial_t^{k-1} w_x\right)^2 dx
\le & (1+t)^{-2(k-1)}\mathcal{E}_{k-1,1}(t)
\lesssim (1+t)^{-2(k-1)} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k }\mathcal{E}_{\iota}\right)(t).
\end{split}\end{equation*}
Therefore, we obtain \eqref{516'} from \eqref{516}, since the last integral on the right-hand side of \eqref{516} can be bounded by
\begin{equation}\label{uu}\begin{split}
\int {\bar\rho_0^\gamma} \left| \tilde J_t
\partial_t^{k } w_x \right| dx \lesssim & (\delta+ \epsilon_0) (1+t)^{-2k-2} \mathcal{E}_{k}(t) \\& + \left(\delta^{-1}+\epsilon_0\right) (1+t)^{-2k-2} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(t).
\end{split}\end{equation}
It remains to prove \eqref{uu}. Indeed, in a similar way to the derivation of \eqref{tildej'}, we obtain
\begin{equation*}\begin{split}
| \tilde{J}_t | \lesssim & \sum_{\iota=2}^{k} (1+t)^{-2-\iota} \left|\partial_t^{k-\iota}w_x\right| + \left[(1+t)^{-1-\frac{1}{ \gamma+1}}\left|\partial_t^{3}w_x \right|
+(1+t)^{-2-\frac{1}{ \gamma+1}}\left|\partial_t^{2}w_x\right|\right]\\
&\times\left|\partial_t^{k-2}w_x\right|
+ \left[(1+t)^{-1-\frac{1}{ \gamma+1}}\left|\partial_t^{4}w_x\right|
+(1+t)^{-2-\frac{1}{ \gamma+1}}\left|\partial_t^{3}w_x\right|
\right.\\
&\left. + (1+t)^{-3-\frac{1}{ \gamma+1}}\left|\partial_t^{2}w_x\right|
+(1+t)^{-1-\frac{2}{ \gamma+1}}\left|\partial_t^{2}w_x\right|^2 \right]\left|\partial_t^{k-3}w_x\right| + {\rm l.o.t.}
\end{split}\end{equation*}
and
\begin{equation*}\begin{split}
| \tilde{J}_t | \lesssim & \sum_{\iota=2}^{k} (1+t)^{-2-\iota} \left|\partial_t^{k-\iota}w_x\right|
+\epsilon_0 \sum_{\iota=2}^{\left[({k-1})/{2}\right] } (1+t)^{-2-\iota-\frac{1}{\gamma+1}} \varsigma^{- \frac{ \iota}{2}} \left|\partial_t^{k-\iota}w_x\right| ,
\end{split}\end{equation*}
which implies
\begin{equation*}\begin{split}
\int {\bar\rho_0^\gamma} & \left| \tilde J_t
\partial_t^{k } w_x \right| dx \lesssim \sum_{\iota=2}^{k} (1+t)^{-2-\iota} \int {\bar\rho_0^\gamma} \left|\partial_t^{k-\iota}w_x\right| \left|
\partial_t^{k } w_x \right| dx \\
& + \epsilon_0 \sum_{\iota=2}^{\left[({k-1})/{2}\right]} (1+t)^{-2-\iota-\frac{1}{\gamma+1}} \int \bar\rho_0^\gamma \varsigma^{- \frac{ \iota}{2}}\left|\partial_t^{k-\iota}w_x\right| \left|
\partial_t^{k } w_x \right| dx =:P_1+P_2.
\end{split}\end{equation*}
It follows easily from the Cauchy-Schwarz inequality that for any $\delta>0$,
\begin{equation*}\begin{split}
P_1 \lesssim & \delta(1+t)^{-2 }\int {\bar\rho_0^\gamma} \left|\partial_t^{k }w_x\right|^2 dx
+ \delta^{-1}\sum_{\iota=2}^{k}(1+t)^{-2-2\iota}\int {\bar\rho_0^\gamma} \left|\partial_t^{k-\iota}w_x\right|^2 dx \\
\le & \delta (1+t)^{-2k-2} \mathcal{E}_{k}(t) + \delta^{-1} (1+t)^{-2k-2}\left( \widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-2} \mathcal{E}_{\iota}\right)(t) \end{split}\end{equation*}
and
\begin{equation*}\begin{split}
P_2 \lesssim & \epsilon_0 (1+t)^{-2 }\int {\bar\rho_0^\gamma} \left|\partial_t^{k }w_x\right|^2 dx
+ \epsilon_0 \sum_{\iota=2}^{\left[({k-1})/{2}\right]} (1+t)^{-2-2\iota } \int {\bar\rho_0^\gamma} \varsigma^{-\iota} \left|\partial_t^{k-\iota}w_x\right|^2 dx \\
\lesssim & \epsilon_0 (1+t)^{-2k-2}\left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k }\mathcal{E}_{\iota}\right)(t).
\end{split}\end{equation*}
Here the last inequality comes from the following estimate:
for $\iota=2,\cdots,\left[({k-1})/{2}\right]$,
\begin{equation*}\begin{split}
\int \bar\rho_0^\gamma \varsigma^{ { -\iota} }\left|\partial_t^{k-\iota}w_x\right|^2 dx
= & \int \varsigma^{ {\alpha + 1 -\iota} }\left|\partial_t^{k-\iota}w_x\right|^2 dx
\lesssim \sum_{i=0}^{\iota } \int \varsigma^{ \alpha +1 + \iota } \left|\partial_t^{k-\iota}\partial_x^{i}w_x\right|^2 dx
\\
&\lesssim (1+t)^{2\iota-2k}\sum_{i=1}^{\iota } \mathcal{E}_{k-\iota, i} \lesssim (1+t)^{2\iota-2k} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k }\mathcal{E}_{\iota}\right)(t),
\end{split}\end{equation*}
which is deduced from the Hardy inequality \eqref{hardybdry}, the equivalence \eqref{varsigma} and the elliptic estimate \eqref{ellipticestimate}, where we note that $\alpha + 1 -\iota \ge \alpha - \left[([\alpha]+1)/2\right] \ge 0 $ for $k\le l$, so that the Hardy inequality can be applied.
This finishes the proof of \eqref{uu} and hence of \eqref{516'}.
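We also remark, for the reader's convenience, that the second inequality in the estimate of $P_1$ above rests on the fact that, recalling the definitions of the energy functionals,
\begin{equation*}
\int \bar\rho_0^\gamma \left|\partial_t^{j}w_x\right|^2 dx \le (1+t)^{-2j}\mathcal{E}_{j}(t) \ \ {\rm for} \ \ 1\le j\le k-2,
\end{equation*}
with $\widetilde{\mathcal{E}}_0$ in place of $\mathcal{E}_0$ when $j=0$; consequently, $(1+t)^{-2-2\iota}\int \bar\rho_0^\gamma |\partial_t^{k-\iota}w_x|^2 dx \le (1+t)^{-2k-2}\mathcal{E}_{k-\iota}(t)$ for $2\le\iota\le k$.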
\vskip 0.5cm
\noindent{\bf Step 2}. In this step, we prove that
\begin{equation}\label{521}\begin{split}
&\frac{d}{dt} \int {\mathfrak{E}}_k(x,t) dx +\int \bar\rho_0 \left(\partial_t^{k+1}w\right)^2 dx +
(1+t)^{-1} \int {\bar\rho_0^\gamma} \left(\partial_t^k w_x\right)^2 dx \\
\lesssim & (1+t)^{-2k-1} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(t)
+ (1+t)^{-2} \int \bar\rho_0 \left(\partial_t^{k }w\right)^2 dx \\
\lesssim & (1+t)^{-2k-1} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(t),
\end{split}\end{equation}
where
\begin{equation*}\begin{split}
{\mathfrak{E}}_k(x,t): = \bar\rho_0 \left[ \frac{1}{2} \left(\partial_t^k w\right)^2 + \left(\partial_t^{k+1}w\right) \partial_t^k w + \left(\partial_t^{k+1} w\right)^2 \right]+ 2\widetilde{\mathfrak{E}}_k(x,t).
\end{split}\end{equation*}
We start by integrating the product of \eqref{ptbkt} and $\partial_t^k w $ with respect to $x$, which yields
\begin{equation}\label{514}\begin{split}
&\frac{d}{dt} \int \bar\rho_0\left( \left(\partial_t^{k+1}w\right) \partial_t^k w + \frac{1}{2} \left(\partial_t^k w\right)^2 \right) dx +\gamma
\int {\bar\rho_0^\gamma} (\tilde{\eta}_{x}+w_x)^{-\gamma-1} \left(\partial_t^k w_x\right)^2 dx \\
= & \int \bar\rho_0 \left(\partial_t^{k+1}w\right)^2 dx
-
\gamma \int {\bar\rho_0^\gamma} J \partial_t^k w_x dx.
\end{split}\end{equation}
It follows from $\eqref{514}+2\times\eqref{516'}$ that
\begin{equation}\label{519}\begin{split}
&\frac{d}{dt} \int {\mathfrak{E}}_k(x,t) dx +\int \bar\rho_0 \left(\partial_t^{k+1}w\right)^2 dx +\gamma
\int {\bar\rho_0^\gamma} (\tilde{\eta}_{x}+w_x)^{-\gamma-1} \left(\partial_t^k w_x\right)^2 dx \\
\lesssim &
\left|\int {\bar\rho_0^\gamma} J \partial_t^k w_x dx \right| + (\delta+ \epsilon_0) (1+t)^{-2k-2} \mathcal{E}_{k}(t) \\
&+ \left(\delta^{-1}+\epsilon_0\right) (1+t)^{-2k-2} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(t).
\end{split}\end{equation}
For the first term on the right-hand side of \eqref{519}, we have, with the aid of the
Cauchy-Schwarz inequality and \eqref{jjj}, that
\begin{equation*}\begin{split}
\left|\int {\bar\rho_0^\gamma} J \partial_t^k w_x dx \right|
\lesssim \delta (1+t)^{-1} \int {\bar\rho_0^\gamma} \left(\partial_t^k w_x\right)^2 dx + \delta^{-1} (1+t)\int {\bar\rho_0^\gamma} J^2dx \\
\lesssim \delta (1+t)^{-1} \int {\bar\rho_0^\gamma} \left(\partial_t^k w_x\right)^2 dx + \delta^{-1}
(1+t)^{-2k-1} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(t). \end{split}\end{equation*}
This finishes the proof of \eqref{521}, by using \eqref{basic}, noting the smallness of $\epsilon_0$ and choosing $\delta$ suitably small. Moreover, we deduce from \eqref{lowbds} and \eqref{upbds} that
\begin{equation}\label{fl1}\begin{split}
\int {\mathfrak{E}}_k(x,t)dx \ge &C^{-1} \int \bar\rho_0 \left[ \left(\partial_t^k w\right)^2 + \left(\partial_t^{k+1} w\right)^2 \right]dx+ C^{-1} (1+t)^{-1}\int {\bar\rho_0^\gamma}
\left(\partial_t^k w_{x}\right)^2 dx \\
&-C (1+t)^{-2k-1} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(t),
\end{split}\end{equation}
\begin{equation}\label{fl2}\begin{split}
\int {\mathfrak{E}}_k(x,t)dx \lesssim & \int \bar\rho_0 \left[ \left(\partial_t^k w\right)^2 + \left(\partial_t^{k+1} w\right)^2 \right]dx+ (1+t)^{-1}\int {\bar\rho_0^\gamma}
\left(\partial_t^k w_{x}\right)^2 dx \\
&+ (1+t)^{-2k-1} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(t).
\end{split}\end{equation}
\vskip 0.5cm
{\bf Step 3}. We show the time decay of the norm in this step.
We integrate \eqref{521} and use the induction hypothesis \eqref{512} to show, noting \eqref{fl1} and \eqref{fl2}, that
\begin{equation*}\begin{split}
& \int \left[\bar\rho_0\left( \left(\partial_t^k w\right)^2 + \left(\partial_t^{k+1} w\right)^2 \right)
+
{\bar\rho_0^\gamma}(1+t)^{-1}\left(\partial_t^k w_x\right)^2 \right](x,t) dx \\
& +\int_0^t \int \bar\rho_0 \left(\partial_s^{k+1}w\right)^2 dx ds+ \int_0^t (1+s)^{-1} \int {\bar\rho_0^\gamma} \left(\partial_s^{k}w_x\right)^2 dxds
\\ \lesssim & \sum_{\iota=0}^{k} \mathcal{E}_{\iota}(0) + \int_0^t (1+s)^{-2k-1} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(s)ds
\lesssim \sum_{\iota=0}^{k} \mathcal{E}_{\iota}(0).
\end{split}\end{equation*}
Here the following estimate has been used,
\begin{equation*}\begin{split}
&\int_0^t (1+s)^{ -1} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(s)ds =
\int_0^t \int \left[ (1+ s)^{ -1} \bar\rho_0^\gamma w_x^2 + \bar\rho_0 w_s^2 \right] dx ds
\\
&+\sum_{\iota=1}^{k-1}\int_0^t (1+ s)^{2(\iota-1)+1} \int \bar\rho_0 \left(\partial_s^\iota w\right)^2 dxds + \sum_{\iota=1}^{k-1} \int_0^t (1+ s)^{2\iota-1} \int \bar\rho_0^\gamma \left(\partial_s^\iota w_x \right)^2 dx ds
\\
&+\sum_{\iota=1}^{k-1}\int_0^t (1+ s)^ {2\iota} \int \bar\rho_0 \left(\partial_s^{\iota+1} w\right)^2 dx ds
\lesssim \sum_{\iota=0}^{k} \mathcal{E}_{\iota}(0).
\end{split}\end{equation*}
Multiplying \eqref{521} by $(1+t)^{p}$ and integrating the product with respect to the temporal variable, successively for $p=1$ up to $p=2k$, we obtain
\begin{equation}\label{ai1}\begin{split}
& (1+t)^{2k} \int \left[\bar\rho_0\left( \left(\partial_t^k w\right)^2 + \left(\partial_t^{k+1} w\right)^2 \right)
+
{\bar\rho_0^\gamma}(1+t)^{-1}\left(\partial_t^k w_x\right)^2 \right](x,t) dx \\
& +\int_0^t (1+s)^{2k} \int \bar\rho_0 \left(\partial_s^{k+1}w\right)^2 dx ds + \int_0^t (1+s)^{2k-1} \int {\bar\rho_0^\gamma} \left(\partial_s^{k}w_x\right)^2 dxds \\
\lesssim & \sum_{\iota=0}^{k} \mathcal{E}_{\iota}(0) + \int_0^t (1+s)^{-1} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(s)ds
\lesssim \sum_{\iota=0}^{k} \mathcal{E}_{\iota}(0) .
\end{split}\end{equation}
With this estimate at hand, we multiply \eqref{516'} by $(1+t)^{2k+1}$ and integrate the product with respect to the temporal variable to get, in view of \eqref{ai1} and \eqref{512},
\begin{equation}\label{ai2}\begin{split}
& (1+t)^{2k+1} \int \left[\bar\rho_0 \left(\partial_t^{k+1} w\right)^2
+
{\bar\rho_0^\gamma}(1+t)^{-1}\left(\partial_t^k w_x\right)^2 \right] dx \\
& +\int_0^t \int (1+ s)^{2k+1} \bar\rho_0 \left(\partial_s^{k+1} w\right)^2 dx ds\\
\lesssim & \sum_{\iota=0}^{k} \mathcal{E}_{\iota}(0) + \int_0^t (1+s)^{-1} \mathcal{E}_{k}(s)ds + \int_0^t (1+s)^{-1} \left(\widetilde{\mathcal{E}}_{0} + \sum_{\iota=1}^{k-1}\mathcal{E}_{\iota}\right)(s)ds \\
\lesssim& \sum_{\iota=0}^{k} \mathcal{E}_{\iota}(0),
\end{split}\end{equation}
since
\begin{equation*}\begin{split}
& \int_0^t (1+s)^{ -1} \mathcal{E}_{k} (s)ds
= \int_0^t (1+ s)^{2(k-1)+1} \int \bar\rho_0 \left(\partial_s^k w\right)^2 dxds \\
& + \int_0^t (1+ s)^{2k-1} \int \bar\rho_0^\gamma \left(\partial_s^k w_x \right)^2 dx ds
+\int_0^t (1+ s)^ {2k} \int \bar\rho_0 \left(\partial_s^{k+1} w\right)^2 dx ds
\lesssim \sum_{\iota=0}^{k} \mathcal{E}_{\iota}(0) .
\end{split}\end{equation*}
It finally follows from \eqref{ai1} and \eqref{ai2} that
\begin{equation*}\begin{split}
\mathcal{E}_k(t) + \int_0^t \int \left[(1+ s)^{2k+1} \bar\rho_0 \left(\partial_s^{k+1} w\right)^2 + (1+ s)^{2k-1} \bar\rho_0^\gamma \left(\partial_s^{k } w_x\right)^2 \right] dx ds
\lesssim \sum_{\iota=0}^k \mathcal{ E}_{\iota}(0) .
\end{split}\end{equation*} This finishes the proof of Lemma \ref{lem53}. $\Box$
\subsection{Verification of the a priori assumption}
In this subsection, we prove the following lemma.
\begin{lem}\label{lem31} Suppose that $\mathcal{E}(t)$ is finite. Then it holds that
\begin{equation}\label{lem31est}\begin{split}
& \sum_{j=0}^3 (1+ t)^{2j} \left\|\partial_t^j w(\cdot,t)\right\|_{L^\infty}^2 + \sum_{j=0}^1 (1+ t)^{2j} \left\|\partial_t^j w_x(\cdot,t)\right\|_{L^\infty}^2
\\
& \qquad +
\sum_{
i+j\le l,\ 2i+j \ge 4 } (1+ t)^{2j}\left\| \varsigma^{\frac{2i+j-3}{2}}\partial_t^j \partial_x^i w(\cdot,t)\right\|_{L^\infty}^2 \lesssim \mathcal{E}(t).
\end{split}\end{equation}
\end{lem}
Once this lemma is proved, the a priori assumption \eqref{apriori} is then verified and the proof of Theorem \ref{mainthm} is completed, since it follows from the nonlinear weighted energy estimate \eqref{energy} and the elliptic estimate \eqref{ellipticestimate} that
$$\mathcal{E}(t)\lesssim \mathcal{E}(0), \ \ t\in [0,T].$$
\noindent {\bf Proof}. We first note that $\mathcal{E}_{j,0}\lesssim \mathcal{E}_j$ for $j=0,\cdots, l$. It follows from \eqref{hardybdry} and \eqref{varsigma} that
\begin{equation*}\begin{split}
&\int \varsigma^{\alpha-1}\left(\partial_t^j w\right)^2 dx \lesssim
\int d^{\alpha-1}\left(\partial_t^j w\right)^2 dx \lesssim \int d^{\alpha+1} \left[ \left(\partial_t^j w_x \right)^2 + \left(\partial_t^j w\right)^2 \right] dx \\
\lesssim &\int \varsigma^{\alpha+1} \left[ \left(\partial_t^j w_x \right)^2 + \left(\partial_t^j w\right)^2 \right] dx
\lesssim \int \left[ \varsigma^{\alpha+1} \left(\partial_t^j w_x \right)^2 + \varsigma^{\alpha}\left(\partial_t^j w\right)^2 \right] dx \\
\le & (1+ t)^ {-2j } \mathcal{ E}_{j} ,
\end{split}\end{equation*}
which implies
\begin{equation*}\begin{split}
\mathcal{ E}_{j,0}= (1+t)^{2j} \int \left[ \varsigma^{\alpha+1}\left(\partial_t^j w_x\right)^2 + \varsigma^{\alpha-1}\left(\partial_t^j w\right)^2 \right]dx \lesssim \mathcal{ E}_{j}.
\end{split}\end{equation*}
So, we have
\begin{equation}\label{newnorm}\begin{split}
\sum_{j=0}^l \left(\mathcal{ E}_{j}(t) + \sum_{i=0}^{l-j} \mathcal{ E}_{j, i}(t) \right)\lesssim \mathcal{E}(t).
\end{split}\end{equation}
In the rest of the proof we will use the embedding (cf. \cite{adams})
$H^{1/2+\delta}(\mathcal{I})\hookrightarrow L^\infty(\mathcal{I}) $, $\delta>0$,
with the estimate
\begin{equation}\label{half} \|F\|_{L^\infty(\mathcal{I})} \le C(\delta) \|F\|_{H^{1/2+\delta}(\mathcal{I})}.\end{equation}
It follows from \eqref{wsv} and \eqref{varsigma} that for $j\le 5+[\alpha]-\alpha$,
\begin{equation}\label{t311}\begin{split}
&\left\|\partial_t^j w \right\|_{H^{\frac{5-j+[\alpha]-\alpha}{2}}}^2
= \left\|\partial_t^j w \right\|_{H^{l-j+1-\frac{l-j+1+\alpha}{2}}}^2
\lesssim\left\|\partial_t^j w \right\|_{H^{ {l-j+1+\alpha} , l-j+1}}^2 \\
= &\sum_{k=0}^{l-j+1} \int d^{\alpha+1+l-j}|\partial_x^k \partial_t^j w|^2dx
\lesssim \sum_{k=0}^{l-j+1} \int \varsigma^{\alpha+1+l-j}|\partial_x^k \partial_t^j w|^2dx
\\
\lesssim & \sum_{k=0}^{l-j+1} \int \varsigma^{\alpha+k}|\partial_x^k \partial_t^j w|^2dx
\le (1+t)^{-2j}\left(\mathcal{ E}_{j}(t) + \sum_{k=1}^{l-j } \mathcal{ E}_{j,k}(t)\right)\le (1+t)^{-2j} \mathcal{E}(t).
\end{split}\end{equation}
This, together with \eqref{half}, gives
\begin{equation*}\begin{split}
\sum_{j=0}^3 (1+ t)^{2j} \left\|\partial_t^j w\right\|_{L^\infty}^2 + \sum_{j=0}^1 (1+ t)^{2j} \left\|\partial_t^j w_x\right\|_{L^\infty}^2 \lesssim \mathcal{E}(t).
\end{split}\end{equation*}
To bound the third term on the left-hand side of \eqref{lem31est}, we denote
$$\psi := \varsigma^{\frac{2i+j-3}{2}}\partial_t^j \partial_x^i w.$$
In the following, we prove that $\left\|\psi\right\|_{L^\infty}^2
\lesssim (1+t)^{-2j}\mathcal{E}(t) $ by separating the cases when $\alpha$ is or is not an integer.
{\bf Case 1} ($\alpha\neq [\alpha]$). When $\alpha$ is not an integer, we choose $ \varsigma^{2(l-i-j)+ \alpha-[\alpha]}$ as the spatial weight. A simple calculation yields
\begin{equation*}\begin{split}
\left|\partial_x \psi \right| \lesssim\left| \varsigma^{\frac{2i+j-3}{2}}\partial_t^j \partial_x^{i+1} w\right|
+ \left| \varsigma^{\frac{2i+j-3}{2}-1}\partial_t^j \partial_x^i w\right|,
\end{split}\end{equation*}
\begin{equation*}\begin{split}
\left|\partial_x^2 \psi \right| \lesssim\left| \varsigma^{\frac{2i+j-3}{2}}\partial_t^j \partial_x^{i+2} w\right|
+ \left| \varsigma^{\frac{2i+j-3}{2}-1}\partial_t^j \partial_x^{i+1} w\right| + \left| \varsigma^{\frac{2i+j-3}{2}-2}\partial_t^j \partial_x^i w\right|,
\end{split}\end{equation*}
$$\cdots\cdots$$
\begin{equation}\label{l313}\begin{split}
&\left|\partial_x^k \psi \right| \lesssim \sum_{p=0}^k \left| \varsigma^{\frac{2i+j-3}{2}-p}\partial_t^j \partial_x^{i+k-p} w\right| \ \ {\rm for} \ \ k=1, 2, \cdots, l-j+1-i.
\end{split}\end{equation}
It follows from \eqref{l313} that for $1\le k \le l+1-i-j$,
\begin{equation*}\begin{split}
& \int \varsigma^{2(l-i-j)+ \alpha-[\alpha] } \left|\partial_x^k \psi \right|^2 dx \lesssim \int \sum_{p=0}^k \varsigma^{\alpha+l-j+1-2p}\left|\partial_t^j \partial_x^{i+k-p} w\right|^2 dx \\
\lesssim & \int \varsigma^{l-i-j+1-k} \sum_{p=0}^1 \varsigma^{\alpha+i+k-2p} \left|\partial_t^j \partial_x^{i+k-p} w\right|^2 dx + \int \sum_{p=2}^k \varsigma^{\alpha+l-j+1-2p}\left|\partial_t^j \partial_x^{i+k-p} w\right|^2 dx
\\
\lesssim & (1+t)^{-2j} \mathcal{E}_{j,i+k-1}+ \int \sum_{p=2}^k \varsigma^{\alpha+l-j+1-2p}\left|\partial_t^j \partial_x^{i+k-p} w\right|^2 dx.
\end{split}\end{equation*}
To bound the second term on the right-hand side of the inequality above, notice that
\begin{equation}\label{3.14}\begin{split}
&\alpha+l-j+1-2p \\
= &2( l+1-i-j- k) +2(k-p) + (\alpha-[\alpha])+(2i+j-3) -2 > -1
\end{split}\end{equation}
for $p\in [2, k]$, due to $\alpha\neq [\alpha]$ and $2i+j\ge 4$. We then have, with the aid of \eqref{hardybdry} and \eqref{varsigma}, that for $p\in [2, k]$,
\begin{equation*}\begin{split}
&\int \varsigma^{\alpha+l-j+1-2p}\left|\partial_t^j \partial_x^{i+k-p} w\right|^2 dx
\lesssim \int d^{\alpha+l-j+1-2p}\left|\partial_t^j \partial_x^{i+k-p} w\right|^2 dx \\
\lesssim & \int d^{\alpha+l-j+1-2p+2} \sum_{\iota=0}^1 \left|\partial_t^j \partial_x^{i+k-p+\iota} w\right|^2 dx \lesssim \cdots \cdots \\
\lesssim & \int d^{\alpha+l-j+1 } \sum_{\iota=0}^p \left|\partial_t^j \partial_x^{i+k-p+\iota} w\right|^2 dx \lesssim \int \varsigma^{\alpha+l-j+1 } \sum_{\iota=0}^p \left|\partial_t^j \partial_x^{i+k-p+\iota} w\right|^2 dx\\
=&\int \sum_{\iota=0}^p \varsigma^{(l+1-i-j-k)+(p-\iota)} \varsigma^{\alpha+i+k-p+\iota } \left|\partial_t^j \partial_x^{i+k-p+\iota} w\right|^2 dx\\
\lesssim & \sum_{\iota=0}^p \int \varsigma^{\alpha+i+k-p+\iota } \left|\partial_t^j \partial_x^{i+k-p+\iota} w\right|^2 dx
\le \sum_{\iota=i+k-p}^{i+k-1} (1+t)^{-2j} \mathcal{E}_{j,\iota}.
\end{split}\end{equation*}
This yields, for $k=1, 2, \cdots, l-j+1-i$,
\begin{equation*}\begin{split}
\int \varsigma^{2(l-i-j)+ \alpha-[\alpha] } \left|\partial_x^k \psi \right|^2 dx
\lesssim & (1+t)^{-2j} \mathcal{E}_{j,i+k-1}+ \sum_{p=2}^k \sum_{\iota=i+k-p}^{i+k-1} (1+t)^{-2j} \mathcal{E}_{j,\iota} \\
\lesssim &(1+t)^{-2j} \sum_{\iota=i}^{i+k-1} \mathcal{E}_{j,\iota}.
\end{split}\end{equation*}
Therefore, it follows from \eqref{varsigma} and \eqref{newnorm} that
\begin{equation*}\begin{split}
&\left\|\psi\right\|_{H^{2(l-i-j)+\alpha-[\alpha] , \ l+1-i-j }}^2 = \sum_{k=0}^{l+1-i-j} \int d^{2(l-i-j)+ \alpha-[\alpha] } \left|\partial_x^k \psi \right|^2 dx
\\
\lesssim & \sum_{k=0}^{l+1-i-j} \int \varsigma^{2(l-i-j)+ \alpha-[\alpha] } \left|\partial_x^k \psi \right|^2dx
\lesssim \int \varsigma^{2(l-i-j)+ \alpha-[\alpha] } \left| \psi \right|^2 dx+ (1+t)^{-2j} \sum_{\iota=i}^{l-j} \mathcal{E}_{j,\iota} \\
\lesssim & (1+t)^{-2j} \sum_{\iota=i}^{l-j} \mathcal{E}_{j,\iota}
\le (1+t)^{-2j}\mathcal{E}(t).
\end{split}\end{equation*}
When $\alpha$ is not an integer, $\alpha-[\alpha]\in(0,1)$. So, it follows from \eqref{half} and \eqref{wsv} that
\begin{equation*}\begin{split}
\left\|\psi\right\|_{L^\infty}^2 \lesssim \left\|\psi\right\|_{H^{1-\frac{\alpha-[\alpha]}{2}}}^2
\lesssim \left\|\psi\right\|_{H^{2(l-i-j)+\alpha-[\alpha] , \ l+1-i-j }}^2
\lesssim (1+t)^{-2j}\mathcal{E}(t).
\end{split}\end{equation*}
{\bf Case 2} ($\alpha=[\alpha]$). In this case $\alpha$ is an integer, and we choose $ \varsigma^{2(l-i-j)+ 1/2 }$ as the spatial weight. As shown in Case 1, we
have for $1\le k \le l+1-i-j$,
\begin{equation*}\begin{split}
\int \varsigma^{2(l-i-j)+ 1/2} \left|\partial_x^k \psi \right|^2dx
\lesssim (1+t)^{-2j} \mathcal{E}_{j,i+k-1}+ \int \sum_{p=2}^k \varsigma^{\alpha+l-j+1-2p+ 1/2}\left|\partial_t^j \partial_x^{i+k-p} w\right|^2 dx
\end{split}\end{equation*}
and
\begin{equation*}\begin{split}
\alpha+l-j+1-2p + \frac{1}{2}
= 2( l+1-i-j- k) +2(k-p) +(2i+j-3) - \frac{3}{2} \ge -\frac{1}{2}.
\end{split}\end{equation*}
We can use the Hardy inequality to obtain
\begin{equation*}\begin{split}
\int \varsigma^{2(l-i-j)+ 1/2 } \left|\partial_x^k \psi \right|^2 dx
\lesssim (1+t)^{-2j} \sum_{\iota=i}^{i+k-1} \mathcal{E}_{j,\iota}, \ \ k=1, 2, \cdots, l-j+1-i
\end{split}\end{equation*}
and
\begin{equation*}\begin{split}
&\left\|\psi\right\|_{H^{2(l-i-j)+1/2 , \ l+1-i-j }}^2
\lesssim (1+t)^{-2j} \sum_{\iota=i}^{l-j} \mathcal{E}_{j,\iota}.
\end{split}\end{equation*}
Therefore, it follows from \eqref{half} and \eqref{wsv} that
\begin{equation*}\begin{split}
\left\|\psi\right\|_{L^\infty}^2 \lesssim \left\|\psi\right\|_{H^{3/4}}^2
\lesssim \left\|\psi\right\|_{H^{2(l-i-j)+1/2 , \ l+1-i-j }}^2
\lesssim (1+t)^{-2j}\mathcal{E}(t).
\end{split}\end{equation*}
This completes the proof of Lemma \ref{lem31}. $\Box$
\section{Proof of Theorem \ref{mainthm1}}
In this section, we prove Theorem \ref{mainthm1}.
First, it follows from \eqref{ldv}, \eqref{ld}, \eqref{bar} and \eqref{212} that for $(x,t)\in \mathcal{I}\times[0,\infty)$,
$$
\rho(\eta(x, t), t)-\bar\rho(\bar\eta(x, t), t)
=\frac{\bar\rho_0(x)}{\eta_x(x, t)}-\frac{\bar\rho_0(x)}{\bar\eta_x(x, t)}
=-\bar\rho_0(x)\frac{w_x(x, t)+h(t)}{(\tilde \eta_x+w_x)\bar\eta_x(x, t)}
$$
and
$$
u(\eta(x, t), t)-\bar u (\bar\eta(x, t), t)=w_t(x, t)+xh_t(t) . $$
Hence, by virtue of \eqref{basic}, \eqref{212}, \eqref{2} and \eqref{h}, we have, for $x \in \mathcal{I}$ and $t\ge 0$,
$$
|\rho(\eta(x, t), t)-\bar\rho(\bar\eta(x, t), t)|\lesssim \left(A-Bx^2\right)^{\frac{1}{\gamma-1}}(1+t)^{-\frac{2}{\gamma+1}}\left(\sqrt{\mathcal{E}(0)}+
(1+t)^{-\frac{\gamma}{\gamma+1}}\ln(1+t)\right) $$
and
$$|u(\eta(x, t), t)-\bar u (\bar\eta(x, t), t)|\lesssim (1+t)^{-1}\sqrt{\mathcal{E}(0)}+(1+t)^{-\frac{2\gamma+1}{\gamma+1}} \ln(1+t).$$
Then \eqref{1'} and \eqref{2'} follow. It follows from \eqref{vbs} and \eqref{212} that
\begin{equation*}\begin{split}
x_+(t) = &\eta\left(\bar x_+(0),t\right)=\left(\tilde{\eta}+ w\right)\left(\bar x_+(0),t\right)=\left(\bar{\eta}+ xh + w\right)\left(\bar x_+(0),t\right) \\
=& \sqrt{ {A}/{B}} \left((1+t)^{\frac{1}{\gamma+1}}+h(t)\right)+w\left(\sqrt{ {A}/{B}}, t\right), \end{split}\end{equation*}
which, together with \eqref{h} and \eqref{2}, implies that for $t\ge 0$,
\begin{equation*}
\sqrt{ {A}/{B}}(1+t)^{\frac{1}{\gamma+1}}-C\sqrt{\mathcal{E}(0)}\le x_+(t)
\le \sqrt{ {A}/{B}}\left((1+t)^{\frac{1}{\gamma+1}}+C(1+t)^{-\frac{1}{\gamma+1}}\right)+C\sqrt{\mathcal{E}(0)}. \end{equation*}
Similarly, we have for $t\ge 0$,
\begin{equation*}
- \sqrt{ {A}/{B}}\left((1+t)^{\frac{1}{\gamma+1}}+C(1+t)^{-\frac{1}{\gamma+1}}\right)-C\sqrt{\mathcal{E}(0)}\le x_-(t)\le - \sqrt{ {A}/{B}}(1+t)^{\frac{1}{\gamma+1}}+ C\sqrt{\mathcal{E}(0)}. \end{equation*}
Thus, \eqref{3'} follows from the smallness of $\mathcal{E}(0)$. Notice that
for $k=1, 2, 3$,
$$\frac{d^k x_{\pm}(t)}{dt^k}=\partial_t^ k\tilde \eta \left(\pm \sqrt{ {A}/{B}}, t\right)+ \partial_t^ k w \left(\pm \sqrt{ {A}/{B}}, t\right).$$
Therefore, \eqref{decay} and \eqref{2} imply \eqref{4'}. $\Box$
\section{Appendix}
In this appendix, we prove \eqref{decay} and \eqref{h}. We may write \eqref{mt} as the following system
\begin{equation}\label{ansatzeq}\left\{\begin{split}
&h_t= z,\\
&z_t= -z - \left[ { \bar\eta_x ^{-\gamma}} - {(\bar\eta_x+h)^{- \gamma}}\right] /({\gamma+1}) - \bar\eta_{xtt},\\
&(h,z)(t=0)=(0,0).
\end{split}\right.\end{equation}
Recalling that $\bar\eta_x (t)=(1+t)^{\frac{1}{\gamma+1}}$, we have $\bar\eta_{xtt}<0$. A simple phase plane analysis shows that there exist $0<t_0<t_1<t_2$ such that, starting from $(h, z)=(0, 0)$ at $t=0$, $h$ and $z$ increase in the interval $[0, t_0]$ and $z$ reaches its positive maximum at $t_0$; in the interval $[t_0, t_1]$, $h$ keeps increasing and reaches its maximum at $t_1$, while $z$ decreases from its positive maximum to $0$; in the interval $[t_1, t_2]$,
both $h$ and $z$ decrease, and $z$ reaches its negative minimum at $t_2$; in the interval $[t_2, \infty)$, $h$ decreases and $z$ increases, and $(h, z)\to (0, 0)$ as $t\to \infty$. This can be summarized as follows: \\
$$z(t) \uparrow_0 , \ \ h(t) \uparrow_0 , \ \ t\in [0,t_0]$$ $$ z(t) \downarrow_0, \ \ h(t) \uparrow, \ \ t\in[t_0,t_1] $$
$$ z(t) \downarrow^0, \ \ h(t) \downarrow, \ \ t\in[t_1,t_2] $$
$$ z(t) \uparrow^0, \ \ h(t) \downarrow_0, \ \ t\in[t_2,\infty). $$
It follows from the above analysis that there exists a constant $C=C(\gamma, M)$ such that
\begin{equation}\label{boundforh}
0\le h(t) \le C \ \ {\rm for} \ \ t\ge 0.\end{equation}
In view of \eqref{ansatz}, we then see that for some constant $K>0$,
$$\left(1 + t \right)^{{1}/({\gamma+1})} \le \tilde \eta_{x} \le K \left(1 + t \right)^{{1}/({\gamma+1})}.$$
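For clarity, we note that this follows directly from $\tilde\eta_x=\bar\eta_x+h=(1+t)^{\frac{1}{\gamma+1}}+h(t)$ and \eqref{boundforh}: the lower bound is due to $h\ge 0$, while
\begin{equation*}
\tilde \eta_x(t)\le (1+t)^{\frac{1}{\gamma+1}}+C\le (1+C)(1+t)^{\frac{1}{\gamma+1}},
\end{equation*}
so that one may take $K=1+C$.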
To derive the decay property,
we may rewrite \eqref{mt} as
\begin{equation}\label{gt}\begin{split}
&\tilde \eta_{x tt}+\tilde \eta_{xt}- \tilde \eta_x^{-\gamma}/(\gamma+1)=0,\\
&\tilde \eta_x (t=0)= 1, \ \ \tilde \eta_{xt}(t=0)={1 }/({\gamma+1}) .
\end{split}\end{equation}
Then, we have by solving \eqref{gt} that
\begin{equation}\label{k0}\begin{split}
\tilde \eta_{xt}(t)=\frac{ 1 }{\gamma+1}e^{-t} + \frac{ 1 }{\gamma+1} \int_0^t e^{-(t-s)}\tilde \eta_x^{-\gamma}(s) ds \ge 0.
\end{split}\end{equation}
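Indeed, \eqref{k0} follows from the standard integrating-factor computation: setting $v:=\tilde \eta_{xt}$, the equation $\eqref{gt}_1$ reads $v_t+v=\tilde \eta_x^{-\gamma}/(\gamma+1)$, so that
\begin{equation*}
\left(e^{t}v\right)_t=\frac{e^{t}\tilde \eta_x^{-\gamma}(t)}{\gamma+1}, \ \ {\rm whence} \ \ v(t)=e^{-t}v(0)+\frac{1}{\gamma+1}\int_0^t e^{-(t-s)}\tilde \eta_x^{-\gamma}(s) ds;
\end{equation*}
the non-negativity is clear since $v(0)=1/(\gamma+1)>0$ and $\tilde \eta_x>0$.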
Next, we use mathematical induction to prove $\eqref{decay}_{2}$.
First, it follows from \eqref{k0} that
\begin{equation*}\begin{split}
({\gamma+1}) \tilde \eta_{xt}(t) = &e^{-t} + \int_0^{t/2} e^{-(t-s)}\tilde \eta_{x}^{-\gamma}(s) ds + \int_{t/2}^t e^{-(t-s)}\tilde \eta_{x}^{-\gamma}(s) ds \\
\le & e^{-t} + e^{-t/2}\int_0^{t/2}(1+ s)^{-\frac{\gamma}{\gamma+1}}ds
+\left(1+ {t}/{2} \right)^{-\frac{\gamma}{\gamma+1}} \int_{t/2}^t e^{-(t-s)} ds \\
\le & e^{-t} + ({1+\gamma})\, e^{-t/2} \left(1+ {t}/{2}\right)^{{1}/({\gamma+1})}
+\left(1+ {t}/{2} \right)^{-{\gamma}/({\gamma+1})} \\
\le & C \left(1+ {t} \right)^{-{\gamma}/({\gamma+1})}, \ t\ge 0,
\end{split}\end{equation*}
for some positive constant $C$ independent of $t$. This proves $\eqref{decay}_{2}$ for $k=1$.
For $2\le m \le n$, where $n$ is a fixed positive integer, we make the induction hypothesis that $\eqref{decay}_{2}$ holds for all $k=1,2,\cdots,m-1$, that is,
\begin{equation}\label{induction} \left|\frac{d^k\tilde \eta_{x}(t)}{dt^k}\right| \le C(m)\left(1 + t \right)^{\frac{1}{\gamma+1}-k}, \ \ k= 1, 2, \cdots, m-1.\end{equation}
It suffices to prove $\eqref{decay}_{2}$ holds for $k=m$.
We derive from \eqref{gt} that
$$
\frac{d^{m+1}\tilde \eta_{x}}{dt^{m+1}}(t)+\frac{d^{m}\tilde \eta_{x}}{dt^{m}}(t)- \frac{1}{\gamma+1}\frac{d^{m-1}\tilde \eta_x^{-\gamma}}{dt^{m-1}}(t)=0 , $$
so that
\begin{equation}\label{gttt}
\frac{d^{m}\tilde \eta_{x}}{dt^{m}}(t) = e^{-t} \frac{d^{m}\tilde \eta_{x}}{dt^{m}}(0) + \frac{1}{\gamma+1} \int_0^{t} e^{-(t-s)}\frac{d^{m-1}\tilde \eta_x^{-\gamma}}{ds^{m-1}}(s)ds, \end{equation}
where $\frac{d^{m}\tilde \eta_{x}}{dt^{m}}(0)$ can be determined inductively from the equation. To bound the last term on the right-hand side of \eqref{gttt}, we first derive that
\begin{equation}\label{estimate1}
|\partial_t^r(\tilde \eta_x^{-1})|( t)\le C(m) (1+t)^{-\frac{1}{\gamma+1}-r}, \ \ 0\le r\le m-1, \end{equation}
for some constant $C(m)$ depending only on $\gamma$, $M$ and $m$. First, \eqref{estimate1} is true for $r=0$ in view of $(2.12)_1$. For $1 \le r\le m-1$, note that
\begin{align*}
\partial_t^r(\tilde \eta_x^{-1})&=-\partial_t^{r-1}(\tilde \eta_x^{-2}\tilde \eta_{xt})
=-\sum_{i=0}^{r-1} C_{r-1}^i \partial_t^{i}(\tilde \eta_x^{-2}) \partial_t^{r-i}(\tilde \eta_x)\\
&=-\sum_{i=0}^{r-1} C_{r-1}^i \left(\sum_{j=0}^{i} C_i^j \partial_t^{j}(\tilde \eta_x^{-1}) \partial_t^{i-j}(\tilde \eta_x^{-1})\right)\partial_t^{r-i}(\tilde \eta_x).
\end{align*}
Then, \eqref{estimate1} can be proved by an iteration, with the aid of \eqref{induction}.
Notice that
\begin{equation*}\begin{split}
\partial_t^{m-1}( \tilde \eta_{x}^{-\gamma})&=-\gamma\partial_t^{m-2}\left( \tilde \eta_{x}^{-(\gamma+1)}\tilde \eta_{xt}\right)=-\gamma\sum_{i=0}^{m-2}C_{m-2}^i\partial_t^{i}\left( \tilde \eta_{x}^{-(\gamma+1)}\right)\left(\partial_t^{m-1-i}\tilde \eta_{xt}\right) \\
&=-\gamma\sum_{i=0}^{m-2}C_{m-2}^i\left[\sum_{j=0}^iC_i^j\partial_t^{j}\left( \tilde \eta_{x}^{-\gamma}\right)\partial_t^{i-j}\left(\tilde \eta_x^{-1}\right)\right]\left(\partial_t^{m-1-i}\tilde \eta_{xt}\right). \end{split}\end{equation*}
It therefore follows from \eqref{estimate1} and \eqref{induction} that
\begin{equation}\label{xx2}
|\partial_t^{m-1}(\tilde\eta_x^{-\gamma})|\le C_1(m)(1+t)^{-\frac{\gamma}{\gamma+1}-(m-1)}\end{equation}
for some constant $C_1(m)$ independent of $t$. This, together with \eqref{gttt}, proves that $\eqref{decay}_{2}$ is also true for $k=m$, and completes the proof of $\eqref{decay}_{2}$.
Finally, we prove the decay estimate for $h$. We may write the equation for $h$ as
\begin{equation}\label{zuihou}
h_t+\frac{1}{\gamma+1}(1+t)^{-\frac{\gamma}{\gamma+1}}\left[1-\left(1+ h(1+t)^{-\frac{1}{\gamma+1}}\right)^{-\gamma}\right]=-\tilde \eta_{xtt}.
\end{equation}
Notice that
$$\left(1+ h(1+t)^{-\frac{1}{\gamma+1}}\right)^{-\gamma}\le 1-\gamma h (1+t)^{-\frac{1}{\gamma+1}}+\frac{\gamma(\gamma+1)}{2} h^2 (1+t)^{-\frac{2}{\gamma+1}},$$
due to $h\ge 0$. We then obtain, in view of $\eqref{decay}_{2}$, that
$$
h_t+\frac{\gamma}{\gamma+1}(1+t)^{-1}h\le \frac{\gamma}{2}(1+t)^{-\frac{\gamma+2}{\gamma+1}}h^2+C (1+t)^{\frac{1}{\gamma+1}-2}.$$
Thus,
\begin{equation}\label{7201}
h(t)\le C(1+t)^{-\frac{\gamma}{\gamma+1}} \int_0^t \left((1+s)^{-\frac{2}{\gamma+1}}h^2(s)+(1+s)^{-1}\right)ds .\end{equation}
We use an iteration to prove \eqref{h}.
First, since $h$ is bounded due to \eqref{boundforh}, we have
\begin{equation}\label{7202}h(t)\le C(1+t)^{-\frac{\gamma}{\gamma+1}} \int_0^t (1+s)^{-\frac{2}{\gamma+1}}ds \le C(1+t)^{-\frac{1}{\gamma+1}}. \end{equation}
Substituting this into \eqref{7201}, we obtain
$$h(t) \le C(1+t)^{-\frac{\gamma}{\gamma+1}} \int_0^t \left((1+s)^{-\frac{4}{\gamma+1}}+(1+s)^{-1}\right)ds ,$$
which implies
\begin{equation*}
h(t) \le \begin{cases} C (1+t)^{-\frac{\gamma}{\gamma+1}}\ln(1+t) & {\rm if} \ \ \gamma \le 3,\\
C (1+t)^{-\frac{3}{\gamma+1}} & {\rm if} \ \ \gamma > 3. \end{cases}
\end{equation*}
If $\gamma\le 3$, then the first part of \eqref{h} has been proved. If $\gamma>3$, we repeat this procedure and obtain
\begin{equation*}
h(t) \le \begin{cases} C (1+t)^{-\frac{\gamma}{\gamma+1}}\ln(1+t) & {\rm if} \ \ \gamma \le 7,\\
C (1+t)^{-\frac{7}{\gamma+1}} & {\rm if} \ \ \gamma > 7. \end{cases}
\end{equation*}
For general $\gamma$, we repeat this procedure $k$ times with $k= \lceil\log_2(\gamma+1)\rceil $ satisfying
$\sum_{j=0}^{k} 2^j\ge \gamma$ to obtain
$$h(t) \le C (1+t)^{-\frac{\gamma}{\gamma+1}}\ln(1+t). $$
This, together with \eqref{boundforh}, proves the first part of \eqref{h}, which in turn implies the second part of \eqref{h}, by virtue of \eqref{zuihou} and $\eqref{decay}_2$.
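The exponent bookkeeping in this iteration (each pass upgrades the gained exponent $a$ to $2a+1$, giving $1, 3, 7, \dots, 2^{k+1}-1$) and the sufficiency of $k=\lceil\log_2(\gamma+1)\rceil$ passes can be verified directly; a small illustrative sketch:

```python
import math

def passes_needed(gamma):
    """Number of bootstrap passes until the gained exponent 2^(k+1)-1 reaches gamma."""
    a, k = 1, 0
    while a < gamma:
        a = 2 * a + 1   # each substitution doubles the exponent and adds one
        k += 1
    return k

for gamma in range(1, 200):
    k = math.ceil(math.log2(gamma + 1))
    assert sum(2**j for j in range(k + 1)) >= gamma   # 2^(k+1)-1 >= gamma
    assert passes_needed(gamma) <= k
```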
\vskip 0.5cm
\centerline{\bf Acknowledgement}
Luo's research was supported in part by NSF under grant DMS-1408839, Zeng's research was supported in part by NSFC under grant \#11301293/A010801.
\begin{thebibliography}{99}
\bibitem{adams} Adams R. A.: Sobolev Spaces. New York: Academic Press 1975.
\bibitem{ba}Barenblatt, G. I.: On one class of solutions of the one-dimensional problem of non-stationary
filtration of a gas in a porous medium. Prikl. Mat. Mekh. 17, 739-742 (1953)
\bibitem{7} Coutand, D.; Lindblad, H., Shkoller, S.: A priori estimates for the free-boundary
3-D compressible Euler equations in physical vacuum. Commun. Math. Phys. 296,
559-587 (2010)
\bibitem{9} Coutand, D.; Shkoller, S.: Well-posedness of the free-surface incompressible Euler
equations with or without surface tension. J. Am. Math. Soc. 20, 829-930 (2007)
\bibitem{10} Coutand, D.; Shkoller, S.: Well-posedness in smooth function spaces for the moving-
boundary 1-D compressible Euler equations in physical vacuum. Commun. Pure
Appl. Math. 64, 328-366 (2011).
\bibitem{10'}Coutand, D.; Shkoller, S. : Well-Posedness in Smooth Function Spaces for the Moving-Boundary Three-Dimensional Compressible Euler Equations in Physical Vacuum. Arch. Ration. Mech. Anal. 206, no. 2, 515-616 (2012).
\bibitem{zhenlei} Gu, X; Lei, Z. : Well-posedness of 1-D compressible Euler-Poisson equations with physical vacuum. J. Differential Equations 252, no. 3, 2160-2188 (2012).
\bibitem{HL} Hsiao. L. ; Liu, T. P.: Convergence to nonlinear diffusion waves for solutions of a system of hyperbolic conservation laws with damping. Comm. Math. Phys. 143, no. 3, 599-605 (1992).
\bibitem{HMP} Huang, F; Marcati, P. ; Pan, R.: Convergence to the Barenblatt solution for the compressible Euler equations with damping and vacuum. Arch. Ration. Mech. Anal. 176, no. 1, 1-24 (2005).
\bibitem{HPW} Huang, F; Pan, R.; Wang, Z.: $L^1$ convergence to the Barenblatt solution for compressible Euler equations with damping. Arch. Ration. Mech. Anal. 200, no. 2, 665-689 (2011).
\bibitem{16} Jang, J.; Masmoudi, N. :Well-posedness for compressible Euler with physical vacuum
singularity. Commun. Pure Appl. Math. 62, 1327-1385 (2009)
\bibitem{16'} Jang, J.; Masmoudi, N.: Well-posedness of compressible Euler equations in a physical vacuum, arXiv:1005.4441.
\bibitem{JMnew}Jang, J.; Masmoudi, N.: Well and ill-posedness for compressible Euler equations with vacuum. J. Math. Phys. 53 , no. 11, 115625, 11 pp (2012).
\bibitem{17'} Jang, J. : Nonlinear Instability Theory of Lane-Emden stars, Commun. Pure
Appl. Math. 67, 1418-1465 (2014).
\bibitem{17} Kreiss, H.O.: Initial boundary value problems for hyperbolic systems. Commun. Pure
Appl. Math. 23, 277-296 (1970)
\bibitem{18'} Kufner, A.; Maligranda, L.; Persson, E.: The Hardy inequality. Vydavatelsky
Servis, Plzen, 2007. About its history and some related results.
\bibitem{23} Liu, T.-P.: Compressible flow with damping and vacuum. Jpn. J. Appl.Math. 13, 25-32
(1996)
\bibitem{24} Liu, T.-P.; Yang, T.: Compressible Euler equations with vacuum. J. Differ. Equ. 140,
223-237 (1997)
\bibitem{25} Liu, T.-P.; Yang, T.: Compressible flow with vacuum and physical singularity. Methods
Appl. Anal. 7, 495-310 (2000)
\bibitem{LXZ} Luo, T.; Xin, Z. \& Zeng, H: Well-Posedness for the Motion of Physical Vacuum of the Three-dimensional Compressible Euler Equations with or without Self-Gravitation, Arch. Rational Mech. Anal, 213, no. 3, 763-831 (2014).
\bibitem{38}Xu, C.-J.; Yang, T.: Local existence with physical vacuum boundary condition to Euler
equations with damping. J. Differ. Equ. 210, 217-231 (2005)
\bibitem{39} Yang, T.: Singular behavior of vacuum states for compressible fluids. J. Comput. Appl.
Math. 190, 211-231 (2006)
\end{thebibliography}
\noindent {Tao Luo}\\
Department of Mathematics and Statistics\\
Georgetown University,\\
Washington, DC, 20057, USA. \\
Email: [email protected]\\
\noindent Huihui Zeng\\
Mathematical Sciences Center\\
Tsinghua University\\
Beijing, 100084, China.\\
E-mail: [email protected]
\end{document} |
\begin{document}
\title{Tomography via Correlation of Noisy Measurement Records}
\date{\today}
\author{Colm A. Ryan}
\author{Blake R. Johnson}
\affiliation{Raytheon BBN Technologies, Cambridge, MA 02138, USA}
\author{Jay M. Gambetta}
\author{Jerry M. Chow}
\affiliation{IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA}
\author{Marcus P. da Silva}
\affiliation{Raytheon BBN Technologies, Cambridge, MA 02138, USA}
\author{Oliver E. Dial}
\affiliation{IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA}
\author{Thomas A. Ohki}
\affiliation{Raytheon BBN Technologies, Cambridge, MA 02138, USA}
\begin{abstract}
We present methods and results of shot-by-shot correlation of noisy
measurements to extract entangled state and process tomography in a
superconducting qubit architecture. We show that averaging continuous values,
rather than counting discrete thresholded values, is a valid tomographic
strategy and is in fact the better choice in the low signal-to-noise regime. We show that
the effort to measure $N$-body correlations from individual measurements
scales exponentially with $N$, but with sufficient signal-to-noise the
approach remains viable for few-body correlations. We provide a new protocol
to optimally account for the transient behavior of pulsed measurements.
Despite single-shot measurement fidelity that is less than perfect, we
demonstrate appropriate processing to extract and verify entangled states and
processes.
\end{abstract}
\maketitle
By engineering coherent manipulation of quantum states we hope to gain
computational power not available to classical computers. The purported
advantage of a quantum information processing system comes from its ability to
create and control quantum correlations or entanglement \cite{Jozsa2003}. As we
work towards obtaining high-fidelity quantum control, the ability to measure,
confirm and benchmark quantum correlations is an important tool for debugging
and verification.
Measurement results for only parts of a system by themselves are insufficient
to reconstruct the overall state of the system, or of a process acting on the
system. These sub-system statistics cannot capture correlations, whether
classical or quantum. Joint quantum measurements, ones that indicate the
joint quantum state of a system, can provide information about the entire
system, and thus allow for complete reconstruction of states or processes.
These joint measurements may be intrinsically available or they may be
effectively enabled by post-measurement correlation of the single-qubit
results.
The physics of particular systems may enable such intrinsic joint
measurements---for example, in a circuit quantum electrodynamics (QED) setup,
the frequency shift of a resonator coupled dispersively to multiple qubits
provides information about the joint state of the qubits
\cite{Chow2010,Filipp2009}, or in an ion-trap system the total fluorescence
from a chain of ions provides similar joint information \cite{Leibfried2005}.
However, there is a limit to the correlations that can be engineered while
maintaining individual addressability and control. Alternatively,
entangling operations between qubits, before their individual measurement,
can implement effective joint measurements by mapping a multi-body observable
to a single-qubit one \cite{Liu2005a}, but these operations can also be error-prone or
difficult to implement. Instead, an accessible approach to joint measurements
is to measure individual qubits and {\em correlate} the separate single-shot
records. This approach is common to other solid state qubits
\cite{McDermott2005,Nowack2011}, optical photonics \cite{Altepeter2004} and microwave optics \cite{Bozyigit2010,Eichler2012}.
When constructing joint measurements from correlating individual ones, one
must address the requirements for the individual measurements in order to
obtain high-fidelity reconstruction of multi-qubit states and processes. In
particular, since errors in the individual measurements will propagate into
correlations, one might wonder if low-fidelity individual measurements make it
difficult or impossible to reconstruct entangled states. Fortunately, circuit
QED allows for high-\emph{visibility} measurements even when the single-shot
fidelity is poor \cite{Wallraff2005}, meaning that the dominant measurement
noise is state-independent. This allows for a strategy that is not available
with traditional counting detectors: correlate the \emph{continuous}
measurement response without thresholding into binary outcomes. We will show
that averaging such continuous outcomes allows for an unbiased estimate of
multi-qubit observables, and is the best strategy in the low signal-to-noise (SNR) regime.
Whenever correlating noisy individual measurements, there is reduced SNR in
the correlated measurements. We will show that for $N$-body correlations this
SNR penalty is generally exponential in $N$, but that in the large SNR regime
reduces to $1/N$. This implies that a greater amount of averaging is necessary
to obtain accurate estimates of multi-body terms than single-body ones, but
for the SNR available in current experiments, few-body correlations remain
readily accessible.
Certain quantum computing architectures, such as the surface code \cite{DiVincenzo2009},
require individual qubit measurements which must be correlated for debugging
and validation purposes. In a subsection of a circuit QED implementation of such
an architecture, we present a phase-stable heterodyne measurement technique and a
new filter protocol to optimize the SNR of a pulsed measurement. Using these
techniques we verified that our measurements have a highly single-qubit
character. Finally, we demonstrate the viability of correlating noisy
measurement records by characterizing two-qubit states and entangling
processes.
\section{Soft-averaging vs. Thresholding}
Measurement in a circuit QED setup typically consists of coupling the qubit to
an auxiliary meter (usually a cavity mode), and then inferring the qubit state
from directly measuring the auxiliary mode. Since the qubit-mode coupling is
effectively diagonal in the qubit's eigenbasis, the qubit POVMs corresponding
to different measurement outcomes will always be diagonal as well, even when
the measurement is not projective (i.e., when there is a finite probability of
error for the measurement to distinguish between excited and ground states).
Although the measurement of the meter can in principle take an unbounded
continuum of values, discrete outcomes can be obtained by {\em thresholding} to
bin those measurement outcomes into a finite set. However, for tomography we
are more interested in estimating the expectation value of observables than in
the single-shot distinguishability of states. In this case, {\em soft-averaging}
of the measurement records over many shots, without thresholding,
can yield significant advantages.
A simple model of circuit QED measurement illustrates this quite vividly. In
the absence of relaxation, measurements of the meter will yield a weighted
mixture of Gaussian distributions. Each of these Gaussian distributions will
have a mean and variance determined by details of the device, and the weights
correspond to the probabilities of the qubit being in different eigenstates.
In other words, conditioned on the qubit being prepared in an eigenstate $i$,
the distribution of measurement outcomes is $N(\mu_i,\nu_i^2)$, where $\mu_i$
is the measurement eigenvalue. Soft-averaging over $R$ shots scales the
variance of the distributions by a factor of ${1\over R}$, but an estimate of
any diagonal observable will have an unbiased mean. Thresholding, on the other
hand, will result in biased estimates of the expectation value because for
Gaussian distributions there is always a finite probability of mistaking one
measurement outcome for another. This bias can be corrected by rescaling the
measurement results which converts the bias into additional variance. Soft-averaging
also requires scaling to translate from measurement eigenvalues to
the $\pm1$ outcomes expected for a Pauli operator measurement \footnote{In
both cases this can lead to systematic errors if coherent imperfections are
present, e.g. wrong basis from rotation errors. Full measurement tomography,
with perfect state preparation, could correct for these more general biases.
Or, in the case of soft-averaging, access to a calibration that is state-preparation
independent could also side-step this issue.}. The mean-squared
error (MSE), equal to the sum of the variance and the square of the bias, is a
figure of merit one can use to evaluate how well we can estimate an
observable. If we assume the bias can be corrected, then the best strategy
is determined by the relative variance of the two approaches, which will depend on
the SNR of the individual measurements and the number of averages. In the low
SNR regime soft-averaging is preferred whereas in the high SNR regime
thresholding is better.
As a simple concrete example, consider the measurement of a single qubit,
where $\nu_0=\nu_1=\nu$ and $\mu_0=-\mu_1=1$. For an arbitrary state, soft-averaged
estimates of $\avg{\sigma_z}$ will be distributed according to
$N(\avg{\sigma_z},{\nu^2 \over R}+{1-\avg{\sigma_z}^2 \over R})$, where the
first variance term is the intrinsic Gaussian variance and the second is
quantum shot noise. Setting the threshold at zero, the thresholded estimates
will have a mean given by $\avg{\sigma_z} [2\Phi(1/\nu\sqrt{2}) - 1]$, where
$\Phi$ is the cumulative distribution function of a normal random variable,
and variance of ${1\over R}-{\avg{\sigma_z}^2\over R} [2\Phi(1/\nu\sqrt{2}) - 1]^2$.
Consequently, thresholding will introduce a bias of $2 \avg{\sigma_z}
[1-\Phi(1/\nu\sqrt{2})]$, which is independent of the number of averages, $R$.
If we now assume we have perfect knowledge of the bias from calibration
experiments then the rescaled thresholded variance is
\begin{equation}
{1\over R [ 2 \Phi(1/\nu\sqrt{2}) - 1]^2} - {\avg{\sigma_z}^2\over R},
\end{equation}
whereas it has not changed for soft-averaging. The $\mathrm{SNR} =
1/\nu^2$ where the variances are equal occurs at
\begin{equation}
{1\over [ 2 \Phi(1/\nu\sqrt{2}) - 1]^2} = \nu^2 + 1,
\end{equation}
which is satisfied at $\mathrm{SNR} \approx 1.41$ (corresponding to a single-shot
fidelity of 76\%) and is unchanged as one correlates multiple
measurements. Above this cross-over, soft-averaging pays an additional
variance penalty which is exponentially worse than thresholding; however, as
seen in Figure \ref{variance-crossover}, in this regime we are often limited
by quantum shot noise. Substantial advances in measurement fidelity in
superconducting circuits due to the use of quantum-limited amplifiers
\cite{Castellanos-Beltran2008, Bergeal2010, Hatridge2011} imply that the
community will shortly enter the regime where thresholding is the preferred
strategy.
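These variance formulas are easy to check numerically. The sketch below is an illustration, not the original analysis: it takes the per-shot model $N(\pm 1,\nu^2)$ with the threshold at zero, so that under this parameterization the flip probability is $1-\Phi(1/\nu)$, and bisects for the noise level where the two variances coincide, recovering the quoted crossover of $\mathrm{SNR}\approx 1.41$ at a single-shot fidelity of about 76\%.

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def soft_var(nu, z):
    """Per-shot variance of the soft-averaged <sigma_z> estimate."""
    return nu**2 + 1.0 - z**2

def thresh_var(nu, z):
    """Per-shot variance of the rescaled thresholded estimate."""
    F = 2.0 * Phi(1.0 / nu) - 1.0  # visibility of the thresholded outcome
    return 1.0 / F**2 - z**2

z = 0.5  # example value of <sigma_z>
assert soft_var(1.5, z) < thresh_var(1.5, z)   # low SNR: soft-averaging wins
assert thresh_var(0.3, z) < soft_var(0.3, z)   # high SNR: thresholding wins

# Bisect for the noise level where the two variances coincide
lo, hi = 0.3, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if thresh_var(mid, z) > soft_var(mid, z):
        hi = mid
    else:
        lo = mid
nu_star = 0.5 * (lo + hi)
snr_star = 1.0 / nu_star**2
fidelity_star = 2.0 * Phi(1.0 / nu_star) - 1.0
```

Note that the crossover condition is independent of $\avg{\sigma_z}$, consistent with the statement above that it is unchanged when multiple measurements are correlated.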
\begin{figure}
\caption{\label{variance-crossover}
Variance of the soft-averaged and thresholded estimates as a function of SNR, showing the cross-over between the two strategies.}
\end{figure}
\section{SNR Scaling of Correlation Terms}
There is a cost associated with using the correlations of subsystem
measurements instead of joint measurements, which can be most easily
illustrated by seeing how the accuracy of measurement estimates scales with
the variance of the observations~\cite{DaSilva2010}. Consider a product state
such that the measurement records $X_i$ are independent random variables with
mean and variance $(\avg{\sigma_{z,i}},\nu_{i}^2 + (1-\avg{\sigma_{z,i}}^2))$. Then the variance of the correlated records of $N$ subsystems is given by \cite{Goodman1962},
\begin{align}
\label{eq:bare-corr}
\nu_{\mathrm{corr}}^2 &= {\mathrm{Var}}(X_{i_1}X_{i_2}\cdots X_{i_N}) \nonumber \\
&= \Pi_{k=1}^N (\nu_{k}^2 + (1-\avg{\sigma_{z,k}}^2) + \avg{\sigma_{z,k}}^2) - \Pi_{k=1}^N \avg{\sigma_{z,k}}^2 \nonumber\\
& = \left[\Pi_{k=1}^N \left(\nu_{k}^2 + 1\right) - \avg{\sigma_{z,1}\cdots\sigma_{z,N}}^2\right].
\end{align}
$X_{i_1}X_{i_2}\cdots X_{i_N}$ is an unbiased estimate of
$\avg{\sigma_{z,1}\cdots\sigma_{z,N}}$, so the MSE is
equal to $\nu_{\mathrm{corr}}^2$ and it grows exponentially with the number of
correlated terms. Entangled states will have correlated shot noise and the variance calculation is considerably more involved. However, for tomography of arbitrary states we are limited by this exponential scaling. It is possible to reduce the variance by repeating the
measurement $R$ times and averaging, but in order to get some fixed accuracy
on the estimate of $\langle X_{i_1}X_{i_2}\cdots X_{i_N}\rangle$, $R$ will
still have to scale exponentially with $N$.
For small $N$ and equal, sufficiently high SNR ($\nu_{k}^2 = 1/\mathrm{SNR} \ll 1$), Eq.~\ref{eq:bare-corr} reduces to,
\begin{equation}
\nu_{\mathrm{corr}}^2 \approx \frac{N}{\mathrm{SNR}},
\end{equation}
so $R$ is simply linearly related to $N$ in this favorable regime. Thus, measurements of low-weight correlators are still accessible without a punitive experimental effort.
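The exponential penalty of Eq.~\ref{eq:bare-corr} and its linear small-$N$ limit can be compared directly. A short illustrative sketch (equal, hypothetical subsystem variances, product state with $\avg{\sigma_{z,k}}=1$):

```python
def nu_corr2(nu2, N):
    """Correlation variance from Eq. (bare-corr): prod_k (nu_k^2 + 1) - 1."""
    prod = 1.0
    for _ in range(N):
        prod *= (nu2 + 1.0)
    return prod - 1.0

# High SNR: the exact value is close to the linear approximation N/SNR
snr = 20.0
for N in range(1, 5):
    exact, approx = nu_corr2(1.0 / snr, N), N / snr
    assert abs(exact - approx) / approx < 0.1

# Low SNR: growth is exponential in N, far above N/SNR
assert nu_corr2(1.0, 10) > 1000.0   # (1+1)^10 - 1 = 1023, versus N/SNR = 10
```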
\section{Experimental Setup}
Samples were fabricated with three single-junction transmon qubits in a linear
configuration with nearest neighbours joined by bus resonators \cite{Chow2012}.
For the experiments discussed here, we used two of the three qubits, so
the relevant subsystem is as shown in Fig.~\ref{ExpSchematic}(b). Similar
chips were measured at IBM and BBN. Each qubit has an individual measurement
resonator coupled to it via the standard circuit QED Hamiltonian
\cite{Blais2004}. The same resonator is also used for driving the qubit
dynamics with microwave pulses. The resonant frequency of the cavity exhibits
a qubit-state dependence which we measure by the response of a microwave pulse
applied near the cavity frequency.
We employed an ``autodyne'' approach, similar to Ref. \cite{Jerger2012} and
shown in Fig.~\ref{ExpSchematic}(a), to measure the qubit state via the
reflection of a microwave pulse off the coupled cavity. Autodyning produces a
heterodyne signal from a single local oscillator (LO). The microwave LO,
detuned from the cavity, is single-sideband (SSB) modulated via an IQ mixer to
bring the microwave pulse on-resonance with the cavity. The reflected
amplified signal is then mixed down with the same microwave carrier. This
eliminates the need for two microwave sources, nulls out any microwave phase
drifts and moves the measurement away from DC. If the SSB modulation comes
from an arbitrary waveform generator (AWG) it also allows for measurement
pulse shaping. In addition, if multiple read-out cavities are close in
frequency, it allows us to use a single microwave source to drive multiple
channels with relatively less expensive power splitters and amplifiers.
At the readout side, the ability to choose the heterodyne IF frequency allows
multiple readout channels to be frequency multiplexed onto the same high-speed
digitizer and then digitally separated using techniques from software defined
radio. Although our current implementation is purely in software, we expect
it to readily transfer to hardware as we scale up the number of readout
channels \cite{mchugh:044702}. The initial data stream is sampled at 500\,MS/s
and is immediately decimated with a low-pass finite-impulse response (FIR)
filter. This allows us to achieve good phase precision with the relatively
low vertical resolution (8-bits) of our digitizer card (AlazarTech 9870). The
channels are then extracted with a frequency shifting low-pass filter. The
bandwidth needed per channel is $(2\chi + \kappa)/2\pi$, where $\kappa$ is the
cavity linewidth, and $\chi$ the dispersive shift. For current parameters,
this gives channel bandwidths of a few MHz. Future devices optimized for
high-fidelity readout will have larger $\chi$ and $\kappa$, increasing the
channel bandwidth to $\approx 10\,\mathrm{MHz}$. This is much smaller than the
typical analog bandwidth of commercial digitizing hardware, allowing many
readout signals to be multiplexed onto a single physical channel.
\begin{figure}
\caption{\label{ExpSchematic}
(a) Autodyne measurement setup. (b) The two-qubit subsystem used in these experiments.}
\end{figure}
\section{Measurement Tomography}
Before analyzing correlated sub-system measurements, it helps to first
optimize each individual readout and reduce the signal to a single quadrature.
In the absence of experimental bandwidth constraints, elegant solutions to the
optimal measurement filter exist \cite{Gambetta2007}. However, when the
measurement is made through a cavity, the measurement response is similar to
that of a kicked oscillator \cite{Gambetta2008} producing a phase transient in
the response. During the rising edge of the pulse the reflected signal's phase
swings wildly. Unfortunately, this is exactly the most crucial time in the
record because it has been least affected by $T_1$ decay. Simple integration
over a quadrature will lose information as the different phases cancel out.
Here we use the mean ground and excited state traces as a matched filter
\cite{Turin1960} to unroll the measurement traces and weight them according to
the time-varying SNR. The matched filter is optimal for SNR, which would also
be optimal for measurement fidelity in the absence of $T_1$. For the
experiments considered here, $T_1$ is much greater than the cavity rise time,
$1/\kappa$. In this case, the filter is close to the optimal linear filter.
To derive the matched filter consider that the measurement signal is given by
\begin{equation}
\psi(t) \propto {1\over 2}[\alpha_0(t) - \alpha_1(t)]\sigma_z + \xi(t),
\end{equation}
where $\alpha_i(t)$ is the time-dependent cavity response when the qubit is in state $i$, and $\xi(t)$ is a zero-mean noise term that is uncorrelated with the state. The filtered measurement is given by an integration kernel, $K(t)$, such that
\begin{align}
S &= \int_0^{T} K(t)\psi(t)\,\mathrm{d}t,\\
&= \sum_j K_j \psi_j,
\end{align}
where in the second form we discretize time such that $K_j = K(t_j)$ and $\psi_j = \psi(t_j)$. From this we can see that
\begin{align}
\avg{\Delta S} &= \sum_j K_j\avg{\alpha_0(t_j) - \alpha_1(t_j)},\\
\nu^2 &= \var{\Delta S} = \sum_j K_j^2 \nu_j^2,
\end{align}
where $\avg{\Delta S}$ is the average difference of $S$ for $\sigma_z = \pm 1$ and $\nu_j^2 = \var{\alpha_0(t_j) - \alpha_1(t_j)} + \var{\xi}$. Then, to optimize the SNR we set $\pd{K_j}\left|\avg{S}\right|^2/\nu^2 = 0$. After dropping scaling factors that are independent of $j$, we find
\begin{equation}
K_j = \frac{D^*(t_j)}{\nu_j^2},
\end{equation}
where $D(t_j) = \avg{\alpha_0(t_j) - \alpha_1(t_j)}$ is the difference vector
between the mean ground-state and excited-state responses. Since $T_1$
prevents fixing $\sigma_z$, we do not have direct access to $D(t)$.
Consequently, we approximate it by measuring the mean cavity response after
preparing the qubit in $\sigma_z$ basis states. Note that $\angle D$ gives the \emph
{time-dependent} quadrature containing qubit-state information. Thus, the
above construction rotates all information into the real part of the resulting
signal, so one can discard the orthogonal imaginary quadrature. We
additionally subtract the mean response to remove the identity component in
the measurement. This ensures that the resulting correlators are composed
mostly of multi-body terms. Finally, the optimal integration time is
determined by maximizing the single-shot fidelity.
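A compact numerical illustration of this construction (all waveforms, noise levels and parameters below are made-up stand-ins, not device data): the matched filter recovers the state information that naive fixed-quadrature integration loses to the phase transient.

```python
import numpy as np

rng = np.random.default_rng(0)
T, shots = 200, 2000
t = np.arange(T)

# Stand-in cavity responses with a phase transient on the rising edge
ring = 1.0 - np.exp(-t / 20.0)
phase = 0.3 + 2.0 * np.exp(-t / 10.0)
alpha0 = ring * np.exp(1j * phase)    # qubit in |0>
alpha1 = ring * np.exp(-1j * phase)   # qubit in |1>

nu = 1.5  # state-independent noise std per quadrature
def records(alpha):
    noise = nu * (rng.standard_normal((shots, T)) + 1j * rng.standard_normal((shots, T)))
    return alpha[None, :] + noise

r0, r1 = records(alpha0), records(alpha1)

# Matched filter from calibration records: K_j = D*(t_j) / nu_j^2
D = r0.mean(0) - r1.mean(0)
nu_j2 = r0.var(0) + r1.var(0)                # time-dependent noise variance estimate
K = np.conj(D) / nu_j2
mean_resp = 0.5 * (r0.mean(0) + r1.mean(0))  # subtracted to remove the identity part

def filtered(r):
    return np.real((r - mean_resp) @ K)

s0, s1 = filtered(r0), filtered(r1)
snr_matched = (s0.mean() - s1.mean())**2 / (0.5 * (s0.var() + s1.var()))

# Naive fixed-quadrature integration: the swinging phase cancels the signal
n0, n1 = np.real(r0).sum(1), np.real(r1).sum(1)
snr_naive = (n0.mean() - n1.mean())**2 / (0.5 * (n0.var() + n1.var()))
```

In this toy model the two state responses are complex conjugates, so their fixed (real) quadratures are identical and the naive SNR collapses, while the matched filter cleanly separates the states.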
\begin{figure}
\caption{\label{MeasScatter}
Single-shot measurement records for preparations of the four computational-basis states.}
\end{figure}
In the tight confines of the chip there is inevitable microwave coupling
between nominally independent lines that may inadvertently enable spurious
multi-qubit readout. Therefore, it is important to confirm that the
measurements give mostly independent single-qubit information and that our
joint readout is enabled only from post-measurement correlation of the
results. After verifying a sufficient level of qubit control with randomized
benchmarking \cite{Magesan2011}, we run a limited tomography on the measurement
operators. By analyzing the measurements associated with preparing the four basis
states, shown in Fig.~\ref{MeasScatter}, we can convert to expectation values
of diagonal Pauli operators \cite{Chow2010} and confirm that higher Hamming-weight
components are limited:
\begin{align}
\hat M_1 &= 1.0110(4)ZI + 0.0164(6)IZ - 0.0106(6)ZZ \nonumber \\
\hat M_2 &= 0.00(1)ZI + 0.98(1)IZ + 0.02(1)ZZ\\
\hat M_{1,2} &= 0.00(2)ZI + 0.00(2)IZ + 0.98(2)ZZ. \nonumber
\end{align}
Given access to the single-shot measurement records, it is possible to
estimate the variances associated with the different computational-basis
states of each subsystem, and thus quantify the scaling of the correlated SNR.
In the experiments discussed here, the subsystem measurement records have the
single-shot variances shown in Table~\ref{tab:snr}. The corresponding SNRs
approach the cross-over between soft-averaging and thresholding. While a
hybrid strategy could be envisaged, for simplicity we soft-average all
results. Then, the correlations computed using \eqref{eq:bare-corr} have
variance $\nu_{\mathrm{corr}}^2$ between 2.35 and 4.02 for the different
computational basis states. Thus, in order to resolve the correlated
measurement $\avg{\sigma_{z,1}\sigma_{z,2}}$ to the same accuracy as
$\avg{\sigma_{z,1}}$ we need $\approx 5$ times more averaging.
\begin{table}[tb]
\begin{ruledtabular}
\begin{tabular}{l|cccc}
\textbf{State} & $\ket{00}$ & $\ket{01}$ & $\ket{10}$ & $\ket{11}$ \\
\hline
$M_1\; \nu^2$ & 0.42 & 0.44 & 0.85 & 0.77 \\
$M_2\; \nu^2$ & 1.36 & 1.67 & 1.37 & 1.84 \\
$\nu_\mathrm{corr}^2$ & 2.35 & 2.85 & 3.39 & 4.02 \\
\end{tabular}
\end{ruledtabular}
\caption{\label{tab:snr}
Variances of $M_1$ and $M_2$ measurement operators and correlation variance
for various computational-basis states. Variance increases when the
corresponding qubit is prepared in the $\ket{1}$ (excited) state, due to
increased variance from relaxation events ($T_1$) during measurement. The
corresponding single-shot readout fidelities are 0.59 and 0.18,
respectively.}
\end{table}
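As a consistency check, the last row of Table~\ref{tab:snr} follows from Eq.~\ref{eq:bare-corr}: for a computational-basis state $\avg{\sigma_{z,1}\sigma_{z,2}}^2 = 1$, so $\nu_{\mathrm{corr}}^2 = (\nu_1^2+1)(\nu_2^2+1) - 1$. A quick check against the tabulated values:

```python
# Per-state single-shot variances from Table (tab:snr)
m1 = {'00': 0.42, '01': 0.44, '10': 0.85, '11': 0.77}
m2 = {'00': 1.36, '01': 1.67, '10': 1.37, '11': 1.84}
expected = {'00': 2.35, '01': 2.85, '10': 3.39, '11': 4.02}

# nu_corr^2 = (nu_1^2 + 1)(nu_2^2 + 1) - 1 for each basis state
corr = {s: (m1[s] + 1.0) * (m2[s] + 1.0) - 1.0 for s in m1}
for s in expected:
    assert abs(corr[s] - expected[s]) < 0.01
```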
\section{Tomographic Inversion}
Tomographic inversion is the process of converting a set of measurements into
a physical density matrix or process map. Since the estimates of single-
and multi-body terms may have unequal variances, the standard inversion
procedure should be modified to take these into account. The method below
provides an estimator for states and processes that may be readily fed into a
semidefinite program solver, if one wishes to add additional physical
constraints (such as positivity in the process map) \cite{Kosut2004}.
The correlated measurement records result in estimates of the
expectation value $\langle \hat{M}_{(i,j,\cdots)}\rangle$
for the $N$-body observable
\begin{align}
\hat{M}_{(i,j,\cdots)} =
\hat{U}_i^\dagger \hat M_0 \hat{U}_i \otimes
\hat{U}_j^\dagger \hat M_1 \hat{U}_j \otimes
\cdots.
\end{align}
If the state $\hat\rho$ of the system is initially unknown, the
problem of estimating $\hat\rho$ from a linearly-independent and
informationally complete set of observables $\hat{M}_{(i,j,\cdots)}$
can be cast as a standard linear regression problem. Similarly, if the
system evolves according to some unknown superoperator $\mathcal E$,
this superoperator can be estimated via linear regression by preparing
various known initial states $\hat\rho_\alpha$ and measuring a
linearly-independent and informationally complete set of observables
$\hat{M}_{(i,j,\cdots)}$.
The best linear unbiased estimator for the reconstruction of a state
or process can be computed exactly for the cases where the covariance
matrix of the observations is known a priori---this corresponds to
generalized least-squares (GLS) regression. In the quantum case,
however, this covariance matrix depends on the state or process being
reconstructed, and so it is never known a priori---the best linear
unbiased estimator cannot be constructed. However, it can be
approximated by empirically computing the covariance matrix of the
observations. In the experiments described here, only the diagonal
elements of the covariance matrix were computed and used in the
reconstruction of the state or process, which leads to a slight bias
in the estimate, although the mean squared-error performance is still
good.
Formally, we can describe the state-tomography experiments as
follows. Let ${\rm vec}(\hat A)$ be the column-major vectorization of
an operator $\hat A$. Then the vector of state measurement expectation
values $m_s$ is given by
\begin{align}
m_s = P_{s} {\rm vec}(\hat\rho),
\end{align}
where
\begin{align}
P_{s} = \left[
\begin{array}{c}
{\rm vec}(\hat{M}_{(0,0,\cdots,0,0)})^\dagger\\
{\rm vec}(\hat{M}_{(0,0,\cdots,0,1)})^\dagger\\
\vdots
\end{array}
\right],
\end{align}
is the {\em state predictor matrix}, relating the state $\hat\rho$ to
the vector $m_s$ of expectation values. Then, if we have an estimate
$\tilde{m}_s$ for the expectation values, an estimator for $\hat\rho$ is
given by
\begin{align}
\tilde{\rho} = P_{s}^+ \tilde{m}_s
\end{align}
where
\begin{align}
P_{s}^+ = (P_s^\dagger C^{-1} P_s)^{-1} P_s^\dagger C^{-1},
\end{align}
is the GLS equivalent of the Moore-Penrose
pseudo-inverse of the state predictor, and where $C$ is the empirical
covariance for the measurements. In practice there are equivalent
alternatives to computing $\tilde{\rho}$ explicitly that have better
numerical stability properties.
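For concreteness, the following NumPy sketch (our own illustration, not the experimental code) implements this weighted inversion in one of the numerically stabler equivalent forms: whiten each row of the predictor and each observation by $1/\sigma_k$ and solve an ordinary least-squares problem, rather than forming $(P_s^\dagger C^{-1} P_s)^{-1}$ explicitly:

```python
import numpy as np

def gls_state_estimate(P, m_tilde, var):
    """Weighted least-squares inversion of measured expectation values.

    P       : (K, d*d) predictor matrix, row k = vec(M_k)^dagger
    m_tilde : (K,)     estimated expectation values
    var     : (K,)     empirical variances (the diagonal of C)
    """
    # Whitening by 1/sigma_k turns GLS with diagonal C into ordinary LSQ.
    w = 1.0 / np.sqrt(var)
    rho_vec, *_ = np.linalg.lstsq(P * w[:, None], m_tilde * w, rcond=None)
    return rho_vec

def vec(A):                          # column-major vectorization
    return A.reshape(-1, order="F")

# toy single-qubit example: rho = |0><0|, observables I, Z, X, equal variances
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
rho = np.diag([1.0, 0.0])
P = np.stack([np.conj(vec(M)) for M in (I2, Z, X)])
m = P @ vec(rho)                     # ideal, noiseless expectation values
est = gls_state_estimate(P, m, np.ones(3))
```

Here the noiseless data are consistent and lie in the row space of `P`, so the minimum-norm least-squares solution recovers `vec(rho)` exactly; with real, unequal variances the same routine applies unchanged.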
Process-tomography experiments have an analogous description. The main
distinction is that the predictor then maps a pair of input state and
measurement observable to expectation values, so the vector of expectation
values $m_p$ is given by
\begin{align}
m_p = P_{p} {\rm vec}(\mathcal E),
\end{align}
where $\mathcal E$ is the Liouville representation \cite{Blum2012} of the
process being characterized, and the {\em process tomography predictor
matrix} $P_{p}$ is given by
\begin{align}
P_{p} = \left[
\begin{array}{c}
{\rm vec}({\rm vec}(\hat{M}_{(0,0,\cdots,0,0)})^\dagger {\rm vec}(\hat\rho_{(0,0,\cdots,0,0)}))^\dagger\\
{\rm vec}({\rm vec}(\hat{M}_{(0,0,\cdots,0,1)})^\dagger {\rm vec}(\hat\rho_{(0,0,\cdots,0,0)}))^\dagger\\
\vdots\\
{\rm vec}({\rm vec}(\hat{M}_{(0,0,\cdots,0,0)})^\dagger {\rm vec}(\hat\rho_{(0,0,\cdots,0,1)}))^\dagger\\
{\rm vec}({\rm vec}(\hat{M}_{(0,0,\cdots,0,1)})^\dagger {\rm vec}(\hat\rho_{(0,0,\cdots,0,1)}))^\dagger\\
\vdots
\end{array}
\right],
\end{align}
where the input states $\hat\rho_{(i,j,\cdots)}$ are given by
\begin{align}
\hat{\rho}_{(i,j,\cdots)} =
\hat{U}_i \ketbra{0}{0} \hat{U}_i^\dagger \otimes
\hat{U}_j \ketbra{0}{0} \hat{U}_j^\dagger \otimes
\cdots.
\end{align}
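To make the regression concrete, here is a small self-contained sketch (our illustration, not the experimental code) that builds a process predictor from informationally complete single-qubit preparations and observables, and recovers the Liouville representation of a known gate by least squares. It relies on the identity $\mathrm{vec}(A\,\mathcal{E}\,B) = (B^\top \otimes A)\,\mathrm{vec}(\mathcal{E})$, so each predictor row is $\mathrm{vec}(\hat\rho)^\top \otimes \mathrm{vec}(\hat M)^\dagger$:

```python
import numpy as np

def vec(A):                           # column-major vectorization
    return A.reshape(-1, order="F")

def liouville(U):
    # Liouville rep of a unitary channel: vec(U rho U^+) = (U* kron U) vec(rho)
    return np.kron(U.conj(), U)

# informationally complete single-qubit observables and preparations
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
obs = [I2, X, Y, Z]
preps = [np.diag([1.0, 0.0]).astype(complex),              # |0><0|
         np.diag([0.0, 1.0]).astype(complex),              # |1><1|
         0.5 * np.array([[1, 1], [1, 1]], dtype=complex),  # |+><+|
         0.5 * np.array([[1, -1j], [1j, 1]])]              # |+i><+i|

# one predictor row per (preparation, observable): <M> = vec(M)^+ E vec(rho)
P_p = np.stack([np.kron(vec(r), vec(M).conj()) for r in preps for M in obs])

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # gate to identify
m_p = P_p @ vec(liouville(H))                                # noiseless "data"
E_hat = np.linalg.lstsq(P_p, m_p, rcond=None)[0].reshape(4, 4, order="F")
```

Because the preparations and observables each span the operator space, `P_p` is full rank and the noiseless inversion is exact; with noisy data one would use the variance-weighted form described above.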
\section{Tomography of Entangled States and Processes}
Given this machinery, we can apply it to verifying the correlations in entangled
states and reconstructing processes that create entanglement. We use an
echoed cross-resonance interaction as an entangling two-qubit gate
\cite{Corcoles2013}. The single-qubit pulses used were 40 ns long and the total
duration of the refocused $\mathrm{ZX}_{-\pi/2}$ was 370 ns.
Despite our imperfect single-shot readout we clearly witness high-fidelity
entanglement. The shot-by-shot correlation approach allows us to see
correlations that are not present in the product of the averages. In
Fig.~\ref{TwoQubitPauliDecomps}(a) the product state shows a large response in
the individual measurements and two-qubit terms that are simply the product of
the single-qubit terms. In Fig.~\ref{TwoQubitPauliDecomps}(b and c), by contrast, we
have an elegant demonstration of how maximally entangled states have only
correlated information: for the single-qubit operators, in all readouts we
observe a zero-mean response; however, certain two-qubit terms show maximal
response.
\begin{figure}
\caption{\label{TwoQubitPauliDecomps}
Pauli decompositions of two-qubit states reconstructed from correlated measurement records: (a) a product state, and (b, c) maximally entangled states.}
\end{figure}
Process tomography, shown in Fig.~\ref{ProcessTomo}, follows in a similar
fashion. Applying the procedure outlined above we find a gate fidelity for the
$\mathrm{ZX}_{-\pi/2}$ gate of 0.88. This process map clearly demonstrates
that our two-qubit interaction works on arbitrary input states, and that the
single-shot correlation strategy can recover information in arbitrary two-qubit
components in the resulting states.
\begin{figure}
\caption{\label{ProcessTomo}
Process tomography of the $\mathrm{ZX}_{-\pi/2}$ gate.}
\end{figure}
\section{Conclusion}
Systems such as circuit QED with continuous measurement outcomes provide a
choice of strategies for qubit measurements. We show that it is possible, and
sometimes preferable, to directly correlate the continuous values without
thresholding. This is an important tool when high-fidelity readout is not
available on all channels. We have further provided a straightforward protocol to
experimentally derive a nearly optimal linear filter to handle the transient
response of a pulsed measurement. Building on this we have constructed multi-qubit
measurement operators from shot-by-shot correlation of single-qubit
measurement records. The SNR of these correlated operators decreases
exponentially with the number of qubits, but in the high SNR regime multibody
correlations of a handful of qubits are still accessible. This provides a
framework for verifying quantum operations in architectures with imperfect
single-qubit measurements.
\begin{acknowledgments}
The authors would like to thank George A. Keefe and Mary B. Rothwell for
device fabrication. This research was funded by the Office of the Director of
National Intelligence (ODNI), Intelligence Advanced Research Projects Activity
(IARPA), through the Army Research Office contract no.~W911NF-10-1-0324. All
statements of fact, opinion or conclusions contained herein are those of the
authors and should not be construed as representing the official views or
policies of IARPA, the ODNI, or the U.S. Government.
\end{acknowledgments}
\end{document}
\begin{document}
\title{D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory}
\begin{abstract}
Kohn-Sham Density Functional Theory (KS-DFT) has been traditionally solved by the Self-Consistent Field (SCF) method. Behind the SCF loop is the physics intuition of solving a system of non-interacting single-electron wave functions under an effective potential. In this work, we propose a deep learning approach to KS-DFT. First, in contrast to the conventional SCF loop, we propose directly minimizing the total energy by reparameterizing the orthogonal constraint as a feed-forward computation. We prove that such an approach has the same expressivity as the SCF method yet reduces the computational complexity from $\mathcal{O}(N^4)$ to $\mathcal{O}(N^3)$. Second, the numerical integration, which involves a summation over the quadrature grids, can be amortized to the optimization steps. At each step, stochastic gradient descent (SGD) is performed with a sampled minibatch of the grids. Extensive experiments are carried out to demonstrate the advantage of our approach in terms of efficiency and stability. In addition, we show that our approach enables us to explore more complex neural-based wave functions.
\end{abstract}
\section{Introduction}
Density functional theory (DFT) is the most successful quantum-mechanical method, widely used in chemistry and physics for predicting electron-related properties of matter \citep{szabo2012modern, levine2009quantum, koch2015chemist}. As scientists explore more complex molecules and materials, DFT methods are often limited in scale or accuracy due to their computational complexity. On the other hand, deep learning (DL) has achieved great success in function approximation \citep{hornik1989multilayer}, optimization algorithms \citep{kingma2014adam}, and systems \citep{jax2018github} over the past decade.
Many aspects of deep learning can be harnessed to improve DFT.\ Of them, data-driven function fitting is the most straightforward and often the first to be considered. It has been shown that models learned from a sufficient amount of data generalize well to unseen data, given that the models have the right inductive bias. The Hohenberg-Kohn theorem proves that the ground state energy is a functional of the electron density~\citep{hohenberg1964density}, but this functional is not available analytically. This is where data-driven learning can be helpful for DFT. The strong function approximation capability of deep learning gives hope of learning such functionals in a data-driven manner. There have already been initial successes in learning the exchange-correlation functional~\citep{deephf,deepks,neuralxc}. Furthermore, deep learning has shifted the mindsets of researchers and engineers towards differentiable programming. Implementing the derivative of a function incurs no extra cost if the primal function is implemented with deep learning frameworks. Derivatives of functions appear frequently in DFT, e.g., when estimating the kinetic energy of a wave function, or when calculating the generalized gradient approximation (GGA) exchange-correlation functional. Using modern automatic differentiation (AD) techniques eases the implementation greatly~\citep{quax}.
Despite the numerous efforts that apply deep learning to DFT, there is still a vast space for exploration.
For example, the most popular variant, Kohn-Sham DFT (KS-DFT)~\citep{kohnsham}, utilizes the self-consistent field (SCF) method for solving the parameters. At each SCF step, it solves a closed-form eigendecomposition problem, which finally leads to energy minimization. However, this method suffers from several drawbacks. Many computational chemists and material scientists observe that optimizing via SCF is time-consuming for large molecules or solid cells, and that the convergence of SCF is not always guaranteed. Furthermore, DFT methods often use a linear combination of basis functions as the ansatz for the wave functions, which may not be expressive enough to approximate realistic quantum systems.
To address the problems of SCF, we propose a deep learning approach for solving KS-DFT. Our approach differs from SCF in the following aspects. First, the eigendecomposition steps in SCF come from the orthogonal constraints on the wave functions; we show in this work that the original objective function for KS-DFT can be converted into an unconstrained equivalent by reparameterizing the orthogonal constraints as part of the objective function. Second, we further explore amortizing the integral in the objective function over the optimization steps, i.e., using stochastic gradient descent (SGD), which is well-motivated both empirically and theoretically for large-scale machine learning~\citep{sgd}. We demonstrate the equivalence between our approach and the conventional SCF both empirically and theoretically. Our approach reduces the computational complexity from $\mathcal{O}(N^4)$ to $\mathcal{O}(N^3)$, which significantly improves the efficiency and scalability of KS-DFT. Third, gradient-based optimization treats all parameters equally. We show that it is possible to optimize more complex neural-based wave functions instead of optimizing only the coefficients. In this paper, we instantiate this idea with local scaling transformation as an example showing how to construct neural-based wave functions for DFT.
\section{DFT Preliminaries}
\label{sec:prelim}
Density functional theory (DFT) is among the most successful quantum-mechanical simulation methods for computing electronic structure and all electron-related properties. DFT defines the ground state energy as a functional of the electron density $\rho: \mathbb{R}^3 \rightarrow \mathbb{R}$:
\begin{equation}
E_{\text{gs}} = E[\rho].
\end{equation}
The Hohenberg-Kohn theorem \citep{hohenberg1964inhomogeneous} guarantees that such a functional $E$ exists and that the ground state energy is determined uniquely by the electron density. However, the exact form of this functional has been a puzzling obstacle for physicists and chemists. Several approximations, including the famous Thomas-Fermi and Kohn-Sham methods, have been proposed and have since become the most important ab-initio calculation methods.
\textbf{The Objective Function }
One of the difficulties in finding a functional of the electron density is the lack of an accurate functional for the kinetic energy. The Kohn-Sham method resolves this issue by introducing an orthogonal set of single-particle wave functions $\{\psi_i\}$ and rewriting the energy as a functional of these wave functions. The energy functional connects back to the Schr\"{o}dinger equation. Without compromising the understanding of this paper, we leave the detailed derivation from the Schr\"{o}dinger equation and the motivation for the orthogonality constraint to Appendix \ref{app:ksobject}. As far as this paper is concerned, we focus on the objective function of KS-DFT, defined as
\vspace{0.05in}
\begin{align}
E_{\text{gs}}=\;&\min_{\{\psi^\sigma_i\}}\ E[\{\psi^\sigma_i\}] \\
=\;&\min_{\{\psi^\sigma_i\}}\ E_{\text{Kin}}[\{\psi^\sigma_i\}] + E_{\text{Ext}}[\{\psi^\sigma_i\}] + E_{\text{H}}[\{\psi^\sigma_i\}] + E_{\text{XC}}[\{\psi^\sigma_i\}] \label{eq:totalloss} \\
&\text{ s.t.} \ \ \ \ \langle \psi^\sigma_i | \psi^\sigma_j \rangle = \delta_{i j} \label{eq_ortho}
\end{align}
where $\psi_i^\sigma$ is a wave function mapping $\mathbb{R}^3 \rightarrow \mathbb{C}$, and $\psi_i^{\sigma*}$ denotes its complex conjugate. For simplicity, we use the bra-ket notation $\langle\psi_i^{\sigma}|\psi_j^\sigma\rangle=\int \psi_i^{\sigma*}(\boldsymbol{r})\psi_j^{\sigma}(\boldsymbol{r})d\boldsymbol{r}$. $\delta_{ij}$ is the Kronecker delta. The superscript $\sigma \in \{\alpha, \beta\}$ denotes the spin.\footnote{We omit the spin notation $\sigma$ in the following sections for simplicity.} $E_{\text{Kin}}$, $E_{\text{Ext}}$, $E_{\text{H}}$, $E_{\text{XC}}$ are the kinetic, external potential (nuclear attraction), Hartree (Coulomb repulsion between electrons) and exchange-correlation energies respectively, defined by
\begin{align}
\label{eq:Ekin}
E_{\text{Kin}}[\{\psi^\sigma_i\}] &= -\dfrac12 \sum_i^N\sum_\sigma \int {\psi_i^\sigma}^*(\boldsymbol{r}) \left( \nabla^2 \psi^\sigma_i(\boldsymbol{r}) \right) d\boldsymbol{r},\\
E_{\text{Ext}}[\{\psi^\sigma_i\}] &= \sum_i^N \sum_\sigma \int v_{\mathrm{ext}}(\boldsymbol{r}) |\psi^\sigma_i(\boldsymbol{r})|^2 d\boldsymbol{r}, \label{eq:Eext}
\end{align}
\begin{align}
E_{\text{H}}[\{\psi^\sigma_i\}] \ &= \dfrac{1}{2} \int \int \dfrac{ \big(\sum_i^N\sum_\sigma |\psi_i^\sigma(\boldsymbol{r})|^2 \big) \big(\sum_j^N\sum_\sigma |\psi^\sigma_j(\boldsymbol{r}')|^2 \big)}{\vert \boldsymbol{r}-\boldsymbol{r}' \vert}d\boldsymbol{r}d\boldsymbol{r}', \label{eq:Eh}\\
E_{\text{XC}}[\{\psi^\sigma_i\}] &= \sum_i^N\sum_\sigma \int \varepsilon_{\mathrm{xc}}\big( \boldsymbol{r} \big) |\psi^\sigma_i(\boldsymbol{r})|^2 d\boldsymbol{r} ,\label{eq:Exc}
\end{align}
in which $v_{\mathrm{ext}}$ is the external potential defined by the molecule's geometry. $\varepsilon_{\mathrm{xc}}$ is the exchange-correlation energy density, which has different instantiations. The entire objective function is given analytically, except for the functions $\psi_i^\sigma$ that we replace with parametric functions to be optimized. In the rest of this paper, we focus on the algorithms that optimize this objective while minimizing the discussion of its scientific background.
\textbf{Kohn-Sham Equation and SCF Method } The objective function above can be solved by the Euler–Lagrange method, which yields the canonical form of the well-known Kohn-Sham equation,
\begin{align}
\left[ -\dfrac{1}{2}\nabla ^2 + v_{\mathrm{ext}}(\boldsymbol{r}) + \int d^3 \boldsymbol{r}'
\dfrac{\sum_i^N\big\vert{\psi_i(\boldsymbol{r}' )}\big\vert^2 }{\vert \boldsymbol{r} - \boldsymbol{r}'\vert} + v_{\mathrm{xc}}(\boldsymbol{r}) \right] \psi_i = \varepsilon_i \psi_i \label{eq:ks}
\end{align}
where $N$ is the total number of electrons. $v_{\mathrm{ext}}$ and $v_{\mathrm{xc}}$ are the external and exchange-correlation potentials, respectively. This equation is usually solved in an iterative manner called the Self-Consistent Field (SCF) method. This method starts with an initial guess of the orthogonal set of single-electron wave functions, which are then used to construct the Hamiltonian operator. The new wave functions and the corresponding electron density are obtained by solving the eigenvalue equation. This process is repeated until the electron density of the system converges. The derivation of the Kohn-Sham equation and the SCF algorithm is presented in Appendix \ref{app:derivation_ks}.
\textbf{The LCAO Method } In the general case, we can use any parametric approximator for $\psi_i$ to transform the optimization problem from the function space to the parameter space. In quantum chemistry, they are usually represented by the \textit{linear combination of atomic orbitals} (LCAO):
\begin{equation}
\psi_i (\boldsymbol{r}) = \sum_j^B c_{ij}\phi_j(\boldsymbol{r}),
\end{equation}
where $\phi_j$ are atomic orbitals (or basis functions more generally), which are usually pre-determined analytical functions, e.g., truncated series of spherical harmonics or plane waves. We denote the number of basis functions by $B$. The single-particle wave functions are linear combinations of these basis functions with $c_{ij}$ as the only optimizable parameters. We introduce the vectorized notations $\boldsymbol{\Psi}:= (\psi_1,\psi_2, \cdots,\psi_N)^\top$, $\boldsymbol{\Phi}:=(\phi_1,\phi_2,\cdots,\phi_B)^\top$ and $\boldsymbol{C}:= [c_{ij}]$, so the LCAO wave functions can be written as,
\begin{equation}
\boldsymbol{\Psi}=\boldsymbol{C}\boldsymbol{\Phi}.
\end{equation}
Classical atomic orbitals include Slater-type orbitals, Pople basis sets \citep{ditchfield1971self}, correlation-consistent basis sets \citep{dunning1989gaussian}, etc.
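As a toy illustration of the LCAO ansatz (our own sketch; the two Gaussian "basis functions" below are made-up 1D stand-ins, not a real basis set), the wave functions are simply a matrix product of the coefficients with the basis values:

```python
import numpy as np

# two 1D Gaussian basis functions phi_j, stand-ins for real atomic orbitals
def Phi(r):
    return np.array([np.exp(-r**2), np.exp(-0.5 * (r - 1.0)**2)])

# N=1 molecular orbital built from B=2 basis functions: Psi = C Phi
C = np.array([[0.7, 0.4]])          # the only optimizable parameters in LCAO

def Psi(r):
    return C @ Phi(r)
```

Evaluating `Psi` at any point just combines the fixed basis values with the learned coefficients, which is why in classical LCAO-based DFT only `C` is optimized.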
\begin{figure}
\caption{An illustration on the computational graph of D4FT. In conventional DFT methods using fixed basis sets, the gradients w.r.t. the basis functions (blue dotted line) are unnecessary.}
\label{fig:framework}
\end{figure}
\section{A Deep Learning Approach to KS-DFT}
In this section, we propose a deep learning approach to solving KS-DFT. Our method can be described by three keywords:
\begin{itemize}[noitemsep,topsep=0pt]\setlength\itemsep{0em}
\item \textit{deep learning}: our method is deep-learning native; it is implemented in JAX, a widely used deep learning framework;
\item \textit{differentiable}: all the functions are differentiable, and thus the energy can be optimized via purely gradient-based methods;
\item \textit{direct optimization}: due to the differentiability, our method does not require self-consistent field iterations, yet the converged result is still self-consistent.
\end{itemize}
We refer to our method as \textbf{D4FT}, which combines the three keywords above with DFT. \figref{fig:framework} shows an overview of our method. We elaborate on each component in the following parts.
\subsection{Reparameterizing the Orthonormality Constraint}
Constraints can be handled with various optimization methods, e.g., the Lagrange multiplier method or the penalty method. In deep learning, it is preferable to reparameterize the constraints into the computation graph. For example, to enforce a normalization constraint on a vector $x$, we can reparameterize it as $y/\|y\|$, which converts the problem into solving for $y$ without constraints. This constraint-free optimization can then benefit from existing differentiable frameworks and various gradient descent optimizers. The wave functions in Kohn-Sham DFT have to satisfy the constraint $\langle\psi_i|\psi_j\rangle=\delta_{ij}$ given in \eqref{eq_ortho}. Traditionally the constraint is handled by the Lagrange multiplier method, which leads to the SCF method introduced in \secref{sec:prelim}. To enable direct optimization, we propose the following reparameterization of the constraint.
Using LCAO wave functions, the constraint translates to $\int\sum_kc_{ik}\phi_k(\boldsymbol{r})\sum_lc_{jl}\phi_l(\boldsymbol{r})d\boldsymbol{r}=\delta_{ij}$, or in matrix form,
\begin{equation}
\label{eq:coeff_constraint}
\boldsymbol{C}\boldsymbol{S}\boldsymbol{C}^\top=I.
\end{equation}
$\boldsymbol{S}$ is called \textit{the overlap matrix} of the basis functions, whose $ij$-th entry is $S_{ij}=\langle\phi_i|\phi_j \rangle$. The literature on whitening transformations~\citep{kessy2018optimal} offers many ways to construct $\boldsymbol{C}$ satisfying \eqref{eq:coeff_constraint}, based on different matrix factorizations of $\boldsymbol{S}$:
\begin{equation}
\boldsymbol{C} =
\begin{cases}
\boldsymbol{Q}\boldsymbol{\Lambda}^{-1/2}\boldsymbol{U} & \text{PCA whitening, with } \boldsymbol{U}^\top\boldsymbol{\Lambda} \boldsymbol{U}=\boldsymbol{S} \\
\boldsymbol{Q}\boldsymbol{L}^\top & \text{Cholesky whitening, with } \boldsymbol{L}\boldsymbol{L}^\top=\boldsymbol{S}^{-1} \\
\boldsymbol{Q}\boldsymbol{S}^{-1/2} & \text{ZCA whitening} \\
\end{cases}
\end{equation}
Take PCA whitening as an example. Since $\boldsymbol{S}$ can be precomputed from the overlap of the basis functions, $\boldsymbol\Lambda^{-1/2}\boldsymbol{U}$ is fixed. The matrix $\boldsymbol{Q}$ can be any orthonormal matrix; for deep learning, we can parameterize $\boldsymbol{Q}$ in a differentiable way using the QR decomposition of an unconstrained matrix $\boldsymbol{W}$:
\begin{equation}
\boldsymbol{Q},\boldsymbol{R}=\texttt{QR}(\boldsymbol{W}).
\end{equation}
Besides QR decomposition, there are several other differentiable methods to construct the orthonormal matrix $\boldsymbol{Q}$, e.g., the Householder transformation~\citep{mathiasen2020faster} and the exponential map~\citep{lezcano2019cheap}. Writing $\boldsymbol{D}$ for the fixed whitening matrix (e.g., $\boldsymbol{\Lambda}^{-1/2}\boldsymbol{U}$ for PCA whitening), the wave functions can finally be written as:
\begin{equation}
\boldsymbol{\Psi}_{\boldsymbol{W}}=\boldsymbol{Q}\boldsymbol{D}\boldsymbol{\Phi}.
\label{eq:mo_qr}
\end{equation}
In this way, the wave functions $\boldsymbol{\Psi}_{\boldsymbol{W}}$ are always orthonormal for arbitrary $\boldsymbol{W}$. The search over the orthonormal function space is transformed into an optimization over the parameter space in which $\boldsymbol{W}$ resides. Moreover, this parameterization covers all possible sets of orthonormal wave functions in the space spanned by the basis functions. These statements are formalized in the following proposition, whose proof is presented in Appendix \ref{app:1}.
\begin{proposition} \label{prop:1}
Define the original orthogonal function space $\mathcal{F}$ and the transformed search space $\mathcal{F}_{{\boldsymbol{W}}}$ by $\mathcal{F}=\{{\boldsymbol{\Psi}} =(\psi_1, \psi_2, \cdots, \psi_N)^\top :{\boldsymbol{\Psi}} = \boldsymbol{C} \boldsymbol{\Phi}, \boldsymbol{C} \in \mathbb{R}^{N \times N}, \langle \psi _i | \psi_j \rangle = \delta_{i j}\}$ and $\mathcal{F}_{{\boldsymbol{W}}}=\{{\boldsymbol{\Psi}}_{{\boldsymbol{W}}} =(\psi_1^{{\boldsymbol{W}}}, \psi_2^{{\boldsymbol{W}}}, \cdots, \psi_N^{{\boldsymbol{W}}})^\top :{\boldsymbol{\Psi}}_{{\boldsymbol{W}}} = \boldsymbol{Q} \boldsymbol{D} \boldsymbol{\Phi}, (\boldsymbol Q, \boldsymbol R) = \texttt{QR}(\boldsymbol{W}), {\boldsymbol{W}} \in \mathbb{R}^{N \times N}\}$. Then the two spaces are equal: $\mathcal{F}_{\boldsymbol{W}}=\mathcal{F}$.
\end{proposition}
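A small NumPy check of this construction (our own sketch; a random positive-definite matrix stands in for the overlap $\boldsymbol{S}$) confirms that any unconstrained $\boldsymbol{W}$ yields coefficients satisfying $\boldsymbol{C}\boldsymbol{S}\boldsymbol{C}^\top=\boldsymbol{I}$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = 6                                    # number of basis functions

# random symmetric positive-definite stand-in for the overlap matrix S
A = rng.normal(size=(B, B))
S = A @ A.T + B * np.eye(B)

# PCA whitening matrix D = Lambda^{-1/2} U^T, using numpy's S = U Lambda U^T
lam, U = np.linalg.eigh(S)
D = np.diag(lam ** -0.5) @ U.T

# any unconstrained W gives an orthonormal Q via QR, and coefficients C = Q D
W = rng.normal(size=(B, B))
Q, _ = np.linalg.qr(W)
C = Q @ D
```

Since $\boldsymbol{D}\boldsymbol{S}\boldsymbol{D}^\top=\boldsymbol{I}$ by construction and $\boldsymbol{Q}$ is orthonormal, the constraint holds identically, so gradient descent on $\boldsymbol{W}$ never leaves the feasible set.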
\subsection{Stochastic Gradient}
SGD is the modern workhorse for large-scale machine learning optimization. It has been harnessed to achieve an unprecedented scale of training, which would have been impossible with full-batch training. In this section we elaborate on how DFT can also benefit from SGD.
\textbf{Numerical Quadrature } The total energies defined in Equations~\ref{eq:Ekin}--\ref{eq:Exc} are integrals that involve the wave functions. Although analytical solutions to the integrals of commonly used basis sets do exist, most DFT implementations adopt numerical quadrature, which approximates the value of a definite integral using a set of grids $\boldsymbol{g}=\{(\boldsymbol{x}_i, w_i)\}_{i=1}^n$, where ${\boldsymbol{x}}_i$ and $w_i$ are the coordinates and corresponding weights, respectively:
\begin{align}
\int_a^b f({\boldsymbol{x}})d{\boldsymbol{x}} \approx \sum\limits_{\boldsymbol{x}_i, w_i \in \boldsymbol{g}} f({\boldsymbol{x}}_i) w_i.
\end{align}
These grids and weights can be obtained by solving polynomial equations \citep{golub1969calculation, abramowitz1964handbook}. One key issue that hinders the application of quadrature to large-scale systems is that the Hartree energy requires at least $O(n^2)$ computation, as it involves the distance between every pair of grid points. Large quantum systems may need $10^5$ to $10^7$ grid points, which causes out-of-memory errors and hence is not feasible on most devices.
\textbf{Stochastic Gradient on Quadrature } Instead of evaluating the gradient of the total energy at all grid points in $\boldsymbol{g}$, we randomly sample a minibatch $\boldsymbol{g}' \subset \boldsymbol{g}$, where $\boldsymbol{g}'$ contains $m$ grid points, $m<n$, and evaluate the objective and its gradient on this minibatch. For example, for single integral energies such as kinetic energy, the gradient can be estimated by,
\begin{equation}
\widehat{ \frac{\partial E_{\text{Kin}}}{\partial \boldsymbol W}} = -\dfrac{1}{2} \dfrac n m \sum\limits_{\boldsymbol{x}_i, w_i \in \boldsymbol{g}'} w_i \dfrac{ \partial \left[ \boldsymbol{\Psi}_{\boldsymbol{W}}^* (\boldsymbol{x}_i)(\nabla^2 \boldsymbol{\Psi}_{\boldsymbol{W}})(\boldsymbol{x}_i) \right] }{\partial \boldsymbol W} .
\end{equation}
The gradients of external and exchange-correlation energies can be defined accordingly. The gradient of Hartree energy, which is a double integral, can be defined as,
\begin{equation}
\widehat{\frac{\partial E_{H}}{\partial \boldsymbol W}} = \dfrac{2n(n-1)}{m(m-1)} \sum\limits_{\boldsymbol{x}_i, w_i \in \boldsymbol{g}'} \sum\limits_{\boldsymbol{x}_j, w_j \in \boldsymbol{g}'} \dfrac{ w_i w_j\Vert \boldsymbol{\Psi}_{\boldsymbol{W} }(\boldsymbol{x}_j) \Vert^2 }{ \Vert \boldsymbol{x}_i - \boldsymbol{x}_j \Vert }
\bigg[ \sum_k \dfrac{ \partial \psi_k (\boldsymbol{x}_i) }{ \partial \boldsymbol{W} } \bigg] .
\end{equation}
It can be proved that the expectation of the above stochastic gradient equals the full gradient. Note that the summation over the quadrature grids resembles the summation over the data points of a dataset in deep learning. Therefore, in our implementation, we can rely directly on AD with minibatching to generate unbiased gradient estimates.
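The unbiasedness of the minibatch estimator can be checked on a one-dimensional toy integral (our own illustration; trapezoid weights stand in for the DFT quadrature grids): sampling $m$ of the $n$ grid points and rescaling by $n/m$ reproduces the full quadrature sum in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

# quadrature grid for \int_0^1 x^2 dx = 1/3 (trapezoid weights as a stand-in)
n = 1001
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)
f = x ** 2

full = np.sum(w * f)                       # deterministic full-grid quadrature

m = 64                                     # minibatch size
def minibatch_estimate():
    idx = rng.choice(n, size=m, replace=False)
    return (n / m) * np.sum(w[idx] * f[idx])   # rescaled minibatch sum

# averaging many minibatch estimates recovers the full quadrature value
est = np.mean([minibatch_estimate() for _ in range(2000)])
```

Each grid point is included with probability $m/n$, so the $n/m$ rescaling makes every minibatch sum an unbiased estimate of the full sum; the same argument applies term-by-term to the energy gradients above.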
\subsection{Theoretical Properties}
\label{sec:theoretical_properties}
\textbf{Asymptotic Complexity }
Here we analyze the computational complexity of D4FT in comparison with SCF. We use $N$ to denote the number of electrons, $B$ the number of basis functions, $n$ the number of grid points for the quadrature integral, and $m$ the minibatch size when the stochastic gradient is used. The major source of complexity is the Hartree energy $E_{\text{H}}$, as it includes a double integral.
In our direct optimization approach, computing the Hartree energy involves computing $\sum_i|\psi_i(\boldsymbol{r})|^2$ for all grid points, which takes $\mathcal{O}(nNB)$. After that, $\mathcal{O}(n^2)$ computation is needed to compute and aggregate the repulsion energies between all pairs of grid points. Therefore, the total complexity is $\mathcal{O}(nNB)+\mathcal{O}(n^2)$.\footnote{We assume the backward-mode gradient computation has the same complexity as the forward computation.} The SCF approach costs $\mathcal{O}(nNB)+\mathcal{O}(n^2N^2)$, considerably more expensive because it computes the Fock matrix instead of a scalar energy.
Considering that both $n$ and $B$ are approximately linear in $N$, the direct optimization approach has a more favorable $\mathcal{O}(N^3)$ complexity compared to $\mathcal{O}(N^4)$ for SCF. However, since $n$ is in practice much larger than $N$ and $B$, minibatch SGD is an indispensable ingredient for computational efficiency. At each iteration, minibatching reduces the factor of $n$ to a small constant $m$, making the per-iteration cost $\mathcal{O}(mNB)+\mathcal{O}(m^2)$ for D4FT. A full breakdown of the complexity is available in Appendix \ref{app:breakdown}.
\textbf{Self-Consistency }
The direct optimization method converges to a self-consistent point, as we demonstrate in the following. We first define self-consistency.
\begin{definition}[Self-consistency]\label{def:sc} The wave functions $\boldsymbol{\Psi}$ are said to be self-consistent if they are eigenfunctions of their corresponding Hamiltonian $\hat{H}(\boldsymbol{\Psi})$, i.e., there exists a real number $\varepsilon \in \mathbb{R}$ such that $\hat{H}(\boldsymbol{\Psi}) \vert \boldsymbol{\Psi} \rangle = \varepsilon \vert \boldsymbol{\Psi} \rangle$.
\end{definition}
The next proposition states the equivalence between SCF and direct optimization: the convergence point of D4FT is self-consistent. The proof is given in Appendix \ref{app:p2}.
\begin{proposition}[Equivalence between SCF and D4FT for KS-DFT] \label{prop:2}
Let ${\boldsymbol{\Psi}}^\dagger$ be a local optimum of the ground state energy $E_{\text{gs}}$ defined in \eqref{eq:totalloss}, such that $\frac{\partial E_{\text{gs}}}{\partial {\boldsymbol{\Psi}}^\dagger} = \boldsymbol{0}$; then ${\boldsymbol{\Psi}}^\dagger$ is self-consistent.
\end{proposition}
\section{Neural Basis with Local Scaling Transformation}
\label{sec:neuralbasis}
In previous sections, the basis functions are considered given and fixed. Now we demonstrate and discuss the possibility of a learnable basis set that can be jointly optimized in D4FT. We hope that this extension could serve as a stepping stone toward neural-based wave function approximators.
As discussed in \secref{sec:prelim}, LCAO wave functions are used in existing DFT calculations. This restricts the wave functions to the subspace spanned by the basis functions, so a large number of basis functions would be required to obtain a strong approximator. From the deep learning viewpoint, it is more efficient to increase the depth of the computation. The obstacle to using deep basis functions is that the overlap matrix $\boldsymbol{S}$ changes with the basis, requiring an expensive recomputation of the whitening matrix $\boldsymbol{D}$ at each iteration. To remedy this problem, we introduce basis functions $\varphi_i$ of the following form
\begin{equation}
\varphi_i(\boldsymbol{r}) := \big\vert \det J_f (\boldsymbol{r}) \big\vert^{\frac12} \phi_i(f(\boldsymbol{r}))
\end{equation}
where $f: \mathbb{R}^3 \rightarrow \mathbb{R}^3$ is a parametric bijective function, and $\det J_f (\boldsymbol{r})$ denotes its Jacobian determinant. It is verifiable by change of variables that
\begin{equation*}
\langle\varphi_i|\varphi_j \rangle=\int |\det J_f (\boldsymbol{r})| \phi_i(f(\boldsymbol{r}))\phi_j(f(\boldsymbol{r})) d\boldsymbol{r} = \int \phi_i(\boldsymbol{u})\phi_j(\boldsymbol{u}) d\boldsymbol{u} = \langle\phi_i|\phi_j\rangle.
\end{equation*}
Therefore, the overlap matrix $\boldsymbol{S}$ will be fixed and remain unchanged even if $f$ varies.
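As a quick sanity check (a standalone sketch, not the paper's implementation), this invariance can be verified numerically in a 1D analogue, using hypothetical Gaussian-type basis functions and the monotone bijection $f(r)=r+r^3$ as a stand-in for the parametric map:

```python
import numpy as np

# 1D analogue of the transformed basis: varphi_i(r) = |f'(r)|^(1/2) phi_i(f(r)).
# f(r) = r + r**3 is a monotone bijection standing in for the parametric map.
def f(r):
    return r + r**3

def df(r):
    return 1.0 + 3.0 * r**2

def phi(i, r):
    # hypothetical Gaussian-type basis functions (not the paper's basis set)
    return r**i * np.exp(-r**2)

r = np.linspace(-6.0, 6.0, 200001)  # uniform quadrature grid
h = r[1] - r[0]

def overlap(u, v):
    return np.sum(u * v) * h        # simple Riemann-sum quadrature

orig = [phi(i, r) for i in range(3)]
varphi = [np.sqrt(np.abs(df(r))) * phi(i, f(r)) for i in range(3)]

S_old = np.array([[overlap(orig[i], orig[j]) for j in range(3)] for i in range(3)])
S_new = np.array([[overlap(varphi[i], varphi[j]) for j in range(3)] for i in range(3)])
# S_new matches S_old: the overlap matrix is unchanged by the transformation.
```

The two matrices agree to quadrature accuracy, so a whitening matrix computed once for $\{\phi_i\}$ remains valid for any choice of $f$.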
Within our framework, we can use a parameterized function $f_{\boldsymbol{\theta}}$ and optimize both ${\boldsymbol{\theta}}$ and $\boldsymbol{W}$ jointly with gradient descent.
As a proof of concept, we design $f_{\boldsymbol{\theta}}$ as follows, which we term \emph{neural local scaling}:
\begin{align}
f_{\boldsymbol{\theta}}(\boldsymbol{r}) &:= \lambda_{\boldsymbol{\theta}}(\boldsymbol{r}) \boldsymbol{r}, \\ \lambda_{\boldsymbol{\theta}}(\boldsymbol{r}) &:= \alpha \eta (g_{{\boldsymbol{\theta}}}(\boldsymbol{r})).
\end{align}
where $\eta$ is the sigmoid function, and $\alpha$ is a scaling factor that controls the range of $\lambda_{\boldsymbol{\theta}}$. $g_{{\boldsymbol{\theta}}}: \mathbb{R}^3 \rightarrow \mathbb{R}$ can be an arbitrarily complex neural network parametrized by ${\boldsymbol{\theta}}$.
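A minimal sketch of this construction, with untrained random placeholder weights for $g_{\boldsymbol{\theta}}$ (purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# g_theta: a toy two-layer MLP R^3 -> R; the weights below are random
# placeholders, not trained parameters.
W1, b1 = rng.normal(size=(9, 3)), np.zeros(9)
W2, b2 = rng.normal(size=(1, 9)), np.zeros(1)

def g(r):
    return float(W2 @ np.tanh(W1 @ r + b1) + b2)

alpha = 2.0  # scaling factor controlling the range of lambda_theta

def f_theta(r):
    lam = alpha * sigmoid(g(r))  # lambda_theta(r) lies in (0, alpha)
    return lam * r               # radial rescaling of the coordinate
```

Note that bijectivity of $f_{\boldsymbol{\theta}}$ additionally requires $\lambda_{\boldsymbol{\theta}}(\boldsymbol{r})\boldsymbol{r}$ to be monotone along rays from the origin, which constrains the choice of $\alpha$ and $g_{\boldsymbol{\theta}}$.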
This method has two benefits. First, by introducing additional parameters, the orbitals are more expressive and can potentially achieve better ground state energy. Second, conventional Gaussian basis functions struggle to model the \textit{cusps} at the locations of the nuclei. By introducing the local scaling transformation, we obtain a scaling function that controls the sharpness of the wave functions near the nuclei; as a consequence, this approximator can model the ground state wave function better. We perform experiments using neural local scaling transformed orbitals on atoms. Details and results are presented in Section \ref{sec:exp:neuralobitals}.
\section{Related Work}
\textbf{Deep Learning for DFT }
There are several lines of work using deep learning for DFT. Some use neural networks in a supervised manner, i.e., they fit neural networks to data from simulations or experiments. A representative work is \citet{gilmer2017neural}, which proposes a message-passing neural network and predicts DFT results on the QM9 dataset within chemical accuracy. \cite{custodio2019artificial} and \cite{ellis2021accelerating} follow similar perspectives and predict the DFT ground state energy using feed-forward neural networks. Another active research line is learning the functionals, i.e., the kinetic and exchange-correlation functionals. As it is difficult to evaluate the kinetic energy given the electron density, some researchers turn to deep learning to fit a neural kinetic functional \citep{alghadeer2021highly, ghasemi2021artificial, ellis2021accelerating}, which makes the energy functional orbital-free (depending only on the electron density). Achieving chemical accuracy is one of the biggest obstacles in DFT; in the past few years, many works \citep{kirkpatrick2021pushing, kalita2021learning, ryabov2020neural, kasim2021learning, dick2021using, nagai2020completing} have investigated approximating the exchange-correlation functional with neural networks to make DFT more accurate. It is worth noting that the above deep learning approaches are mostly data-driven, and their generalization capability remains to be tested.
\textbf{Differentiable DFT }
As deep learning approaches have been applied to classical DFT methods, many researchers \citep{diffiqult, li2021kohn, kasim2021learning, dick2021highly, laestadius2019kohn} have discussed how to make the training of these neural-based DFT methods, particularly the SCF loop, differentiable, so that their optimization can be handled by auto-differentiation frameworks. \cite{li2021kohn} uses backward-mode differentiation to backpropagate through the SCF loop. In contrast, \cite{diffiqult} employs forward-mode differentiation, which places less burden on system memory than backward mode. \cite{kasim2021learning} uses the implicit gradient method, which has the smallest memory footprint; however, it requires running the SCF loop to convergence.
\textbf{Direct Optimization in DFT } Although the SCF method dominates optimization in DFT, there is research exploring direct optimization methods \citep{gillan1989calculation, van2002geometric, ismail2000new, vandevondele2003efficient, weber2008direct, ivanov2021direct}. The challenging part is how to preserve the orthonormality constraint of the wave functions. A straightforward way to achieve this is explicit orthonormalization of the orbitals after each update \citep{gillan1989calculation}. In recent years, some have investigated direct optimization of the total energy with the orthonormality constraint incorporated into the formulation of the wave functions. A representative method expresses the coefficients of the basis functions as an exponential transformation of a skew-Hermitian matrix \citep{ismail2000new, ivanov2021direct}.
\section{Experiments}
In this section, we demonstrate the accuracy and scalability of D4FT via numerical experiments on molecules. We compare our method with two benchmarks,
\begin{itemize}[leftmargin=*, noitemsep, topsep=0pt]
\setlength\itemsep{0em}
\item PySCF \citep{sun2018pyscf}: one of the most widely used open-source quantum chemistry computation frameworks, implemented in Python/C.
\item JAX-SCF: our implementation of the classical SCF method with a Fock matrix momentum mechanism.
\end{itemize}
We implement our D4FT method with the deep learning framework JAX \citep{jax2018github}. For a fair comparison with the same software/hardware environment, we reimplemented the SCF method in JAX. All the experiments with JAX implementation (D4FT, JAX-SCF) are conducted on an NVIDIA A100 GPU with 40GB memory. As a reference, we also test with PySCF on a 64-core Intel Xeon [email protected] with 128GB memory.
\subsection{Accuracy}
We evaluate the accuracy of D4FT on two tasks. First, we compare the ground state energy obtained from D4FT with PySCF on a series of molecules. Second, we predict the paramagnetism of molecules to test whether our method handles spin correctly. For both tasks, we use the 6-31g basis set with the LDA exchange-correlation functional.
\textbf{Ground State Energy } The predicted ground state energies are presented in Table \ref{tb:groundstate1}. It can be seen that the absolute error between D4FT and PySCF is smaller than 0.02 Ha ($\approx$0.54 eV) over all the molecules compared. This validates the equivalence of our SGD optimization and the SCF loop.
\begin{figure}
\caption{Convergence speed on different carbon Fullerene molecules. Columns 1-3: the convergence curves on different molecules. X-axis: wall clock time (s), Y-axis: total energy (1k Ha). Right-top: the scale of convergence time. Right-bottom: the scale of per-epoch time. It can be seen that as the number of orbitals increases, D4FT scales much better than JAX-SCF.}
\label{fig:cvg}
\end{figure}
\begin{table}[h]
\small
\centering
\caption{Partial results of ground state energy calculation (LDA, 6-31g, Ha)}\label{tb:groundstate1}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{c c c c c c c}
\toprule[1pt]
Molecule & Hydrogen & Methane & Water & Oxygen & Ethanol & Benzene \\
\midrule[0.5pt]
PySCF & -1.03864 & -39.49700 & -75.15381 & -148.02180 & -152.01379 & -227.35910 \\
JAX-SCF & -1.03919 & -39.48745 & -75.15124 & -148.03399 & -152.01460 & -227.36755 \\
D4FT & -1.03982 & -39.48155 & -75.13754 & -148.02304 & -152.02893 & -227.34860 \\
\bottomrule[1pt]
\end{tabular}
}
\end{table}
\textbf{Magnetism Prediction } We predict whether a molecule is paramagnetic or diamagnetic with D4FT. Paramagnetism is due to the presence of unpaired electrons in the molecule. Therefore, we calculate the ground state energy with different spin magnitudes and check whether the ground state energy is lower with unpaired electrons. We present the experimental results in Table \ref{tb:spin}.
In both methods, we observe that an oxygen molecule with 2 unpaired electrons achieves lower energy, whereas carbon dioxide shows the opposite. This coincides with the fact that the oxygen molecule is paramagnetic, while carbon dioxide is diamagnetic.
\begin{table}[ht]
\centering
\caption{Prediction on magnetism (LSDA, 6-31g, Ha)}\label{tb:spin}
\resizebox{0.9\columnwidth}{!}{
\begin{subtable}[h]{0.45\textwidth}
\small
\begin{tabular}{c c c c }
\toprule[1pt]
& Method & \makecell{No unpaired \\ electron} & \makecell{2 unpaired \\ electrons} \\
\midrule[0.5pt]
\multirow{2}{*}{$\text{O}_2$} & PySCF & -148.02180 & \textbf{-148.09143} \\
& D4FT & -148.04391 & \textbf{-148.08367} \\
\bottomrule[1pt]
\end{tabular}
\end{subtable}
\begin{subtable}[h]{0.45\textwidth}
\small
\begin{tabular}{c c c c }
\toprule[1pt]
& Method & \makecell{No unpaired \\ electron} & \makecell{2 unpaired \\ electrons} \\
\midrule[0.5pt]
\multirow{2}{*}{$\text{CO}_2$} & PySCF & \textbf{-185.57994} & -185.29154 \\
& D4FT & \textbf{-185.58268} & -185.26463 \\
\bottomrule[1pt]
\end{tabular}
\end{subtable}
}
\end{table}
\subsection{Scalability}
In this section, we evaluate the scalability of D4FT. The experiments are conducted on carbon fullerene molecules containing from 20 to 180 carbon atoms. For D4FT, we use the Adam optimizer \citep{kingma2014adam} with a piecewise constant learning rate decay. The initial learning rate is 0.1; after 60 epochs, it is reduced to between 0.01 and 0.0001\footnote{Larger molecules get a larger learning rate decay.}. In total, we run 200 epochs for each molecule. The results are shown in \figref{fig:cvg}. When the system size is small, the acceleration brought by SGD is limited: SCF has a clear advantage, as the closed-form eigendecomposition brings the solution very close to optimal in just one step. The two methods become on par when the system reaches a size of 480 orbitals (C80). After that, SGD is significantly more efficient than SCF, because exact numerical integration is prohibitively expensive for large molecules. Column 4 in \figref{fig:cvg} shows a clear advantage of D4FT over JAX-SCF, which verifies the smaller complexity of D4FT.
\textbf{Influence of Batch Size } As repeatedly observed in large-scale machine learning, SGD with minibatches has a more favorable convergence speed than full-batch GD. \figref{fig:batch_size} provides evidence that this is also true for D4FT. A smaller batch size generally leads to faster convergence in terms of epochs over the grids. However, when the batch size gets too small to fully utilize the GPU's computation power, it becomes less favorable in wall clock time.
\begin{figure}
\caption{Convergence speed at different batch sizes on Fullerene C20. Left: Comparison under wallclock time. Right: Comparison under the number of epochs. Smaller batch sizes are less favorable in wallclock time because they can not fully utilize the GPU computation.}
\label{fig:batch_size}
\end{figure}
\subsection{Neural Basis}
\label{sec:exp:neuralobitals}
In this part, we test the neural basis with local scaling transformation presented in \secref{sec:neuralbasis} on atoms. In this experiment, we use the simple STO-3g basis set. A 3-layer MLP with tanh activation is adopted for $g_{\boldsymbol{\theta}}$; the hidden dimension at each layer is 9. Partial experimental results are shown in Table \ref{tab:localscaling}; complete results are in Appendix \ref{sec:table2}. The results demonstrate that the local scaling transformation effectively increases the flexibility of the STO-3g basis set and achieves lower ground-state energy.
\begin{table}[ht]
\centering
\small
\caption{Partial results of the neural basis method (LSDA, Ha) }
\label{tab:localscaling}
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{l c c c c c c }
\toprule[1pt]
Atom & He & Li & Be & C & N & O \\\midrule[0.5pt]
STO-3g & -2.65731 & -7.04847 & -13.97781 & -36.47026 & -52.82486 & -72.78665 \\
STO-3g + local scaling & \textbf{-2.65978} & \textbf{-7.13649} & \textbf{-13.99859} & \textbf{-36.52543} & \textbf{-52.86340} & \textbf{-72.95256} \\
\bottomrule[1pt]
\end{tabular}
}
\end{table}
\section{Discussion and Conclusion}
We demonstrate in this paper that KS-DFT can be solved in a more deep-learning-native manner. Our method brings many benefits, including a unified optimization algorithm, a stronger convergence guarantee, and better scalability. It also enables us to design neural-based wave functions with better approximation capability. However, several issues still await solutions in order to move further along this path. One such issue is the inherent stochasticity of SGD: the convergence of our algorithm is affected by many factors, such as the choice of batch sizes and optimizers. Another issue concerns integrals involving fully neural-based wave functions. Existing numerical integration methods rely on truncated series expansions, e.g., quadrature and the Fourier transform. They could face a larger truncation error when more potent function approximators such as neural networks are used; neural networks might not be helpful if we still rely on linear series expansions for integration. Monte Carlo methods could be what we resort to for this problem, and they could potentially link to Quantum Monte Carlo methods. On the bright side, these limitations are at the same time opportunities for machine learning researchers to contribute to this extremely important problem, whose breakthroughs could benefit numerous applications in materials science, drug discovery, and beyond.
\begin{appendices}
\appendix
\section{Notations}
\subsection{Bra-Ket Notation}
The bra-ket notation, or the Dirac notation, is the conventional notation in quantum mechanics for denoting quantum states. It can be viewed as a way to denote linear operations in Hilbert space. A ket $\vert \psi \rangle$ represents a quantum state in $\mathbb{V}$, whereas a bra $\langle \psi \vert$ represents a linear mapping $\mathbb{V} \to \mathbb{C}$. The inner product of two quantum states, represented by wave functions $\psi_i$ and $\psi_j$, can be rewritten in the bra-ket notation as
\begin{equation}
\langle \psi_i \vert \psi_j \rangle := \int_{\mathbb{R}^3} \psi_i^*(\boldsymbol{r}) \psi_j (\boldsymbol{r}) d\boldsymbol{r}.
\end{equation}
Let $\hat{O}$ be an operator; the expectation of $\hat{O}$ is defined as
\begin{equation}
\langle \psi \vert \hat{O} \vert \psi \rangle := \int_{\mathbb{R}^3} \psi^*(\boldsymbol{r}) \big[ \hat{O} \psi(\boldsymbol{r}) \big] d\boldsymbol{r}.
\end{equation}
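As a concrete 1D illustration (a sketch, not taken from the paper), the expectation of the kinetic operator $\hat{O}=-\frac12\frac{d^2}{dr^2}$ for the normalized Gaussian $\psi(r)=\pi^{-1/4}e^{-r^2/2}$ is exactly $1/4$, and the defining integral can be evaluated by simple quadrature:

```python
import numpy as np

# <psi| O |psi> with O = -1/2 d^2/dr^2 and psi the harmonic-oscillator
# ground state pi^(-1/4) exp(-r^2/2); the exact expectation is 1/4.
r = np.linspace(-10.0, 10.0, 20001)
h = r[1] - r[0]
psi = np.pi**-0.25 * np.exp(-r**2 / 2)

# central finite difference for the second derivative (psi ~ 0 at the edges)
d2psi = (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / h**2
O_psi = -0.5 * d2psi
expectation = np.sum(psi * O_psi) * h   # Riemann-sum quadrature, ~0.25
```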
\subsection{Notation Table}
We list notations frequently used in this paper in Table \ref{table:notation}.
\begin{table}[h]
\caption{Notation used in this paper}
\label{table:notation}
\centering
\begin{tabular}{ll}
\toprule[2pt]
Notation & Meaning \\\midrule[1.5pt]
$\mathbb{R}$ & real space \\
$\mathbb{C}$ & complex space \\
$\boldsymbol{r}$ & a coordinate in $\mathbb{R}^3$\\
$\psi$ & single-particle wave function/molecular orbitals \\
$\phi$ & basis function/atomic orbitals \\
$\boldsymbol{\Psi}$, $\boldsymbol{\Phi}$ & vectorized wave function: $\mathbb{R}^3 \to \mathbb{R}^N$ \\
$N$ & number of molecular orbitals \\
$B$ & number of basis functions \\
$\boldsymbol{x_i}$, $w_i$ & grid point pair (coordinate and weight) \\
$n$ & number of grid points for numerical quadrature \\
$\hat{H}$ & hamiltonian operator \\
${\textnormal{h}}o$ & electron density\\
${\boldsymbol{B}}$ & overlap matrix \\
$\sigma$ & spin index \\
$E_{\text{gs}}$, $E_{\text{Kin}}$, $\cdots$ & energy functionals \\
\bottomrule[2pt]
\end{tabular}
\end{table}
\section{Kohn-Sham Equation and SCF Method}
\subsection{KS-DFT Objective Function}
\label{app:ksobject}
\paragraph{The Schr\"{o}dinger Equation} The energy functional of KS-DFT can be derived from the quantum many-body Schr\"{o}dinger Equation. The time-invariant Schr\"{o}dinger Equation writes
\begin{equation}
\hat H | \Psi \rangle = \varepsilon | \Psi \rangle,
\end{equation}
where $\hat H$ is the system Hamiltonian and $\Psi(\boldsymbol{r})$ denotes the many-body wave function mapping $\mathbb{R}^{N\times 3}\rightarrow\mathbb{C}$. $\boldsymbol{r}$ is the vector of coordinates of $N$ particles in 3D space. The Hamiltonian is
\begin{equation}
\hat H = \underbrace{-\dfrac{1}{2}\nabla^2}_{\text{kinetic}} - \underbrace{\sum_i\sum_j\dfrac{1}{|\boldsymbol{r}_i-\boldsymbol{R}_j|}}_{\text{external}} + \underbrace{\sum_{i<j}\dfrac{1}{|\boldsymbol{r}_i-\boldsymbol{r}_j|}}_{\text{electron repulsion}},
\end{equation}
where $\boldsymbol{r}_i$ is the coordinate of the $i$-th electron and $\boldsymbol{R}_j$ is the coordinate of the $j$-th nucleus. The following objective function is optimized to obtain the ground-state energy and the corresponding ground-state wave function:
\begin{align}
&\min_{\Psi} \langle\Psi|\hat H|\Psi{\textnormal{a}}ngle, \label{eq:schr_loss} \\
\text{s.t.}&\;\Psi(\boldsymbol{r})=\sigma(\mathrm{P})\Psi(\mathrm{P}\boldsymbol{r}); \; \forall \, \mathrm{P}.
\end{align}
\paragraph{Antisymmetric Constraint} Notice the constraint in the above objective function: $\mathrm{P}$ is any permutation matrix, and $\sigma(\mathrm{P})$ denotes the parity of the permutation. This constraint comes from the Pauli exclusion principle, which enforces the antisymmetry of the wave function $\Psi$. When solving the above optimization problem, we need a variational ansatz, in machine learning terms a function approximator, for $\Psi$. To satisfy the antisymmetry constraint, we can start with any function approximator $\hat \Psi$ and apply the antisymmetrizer, $\Psi=\mathcal{A}\hat\Psi$, which is defined as
\begin{equation}
\mathcal{A}\hat\Psi(\boldsymbol{r})=\frac{1}{\sqrt{N!}}\sum_{\mathrm{P}}\sigma(\mathrm{P})\hat\Psi(\mathrm{P}\boldsymbol{r}).
\end{equation}
However, there are $N!$ terms summing over $\mathrm{P}$, which is prohibitively expensive. Therefore, the Slater determinant is resorted to as a much cheaper approximation.
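To make the cost concrete, here is a brute-force antisymmetrizer for a toy 1D problem with hypothetical Gaussian orbitals (a sketch for illustration only); it loops over all $N!$ permutations, which is exactly what the Slater determinant avoids:

```python
import numpy as np
from itertools import permutations
from math import factorial

def parity(p):
    # sign of a permutation given as a tuple of indices
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

def psi(i, x):
    # hypothetical 1D single-particle orbitals (for illustration only)
    return x**i * np.exp(-x**2)

def antisymmetrize(xs):
    # A psi_hat(r) = (1/sqrt(N!)) * sum_P sigma(P) psi_hat(P r)
    N = len(xs)
    total = 0.0
    for p in permutations(range(N)):
        total += parity(p) * np.prod([psi(i, xs[p[i]]) for i in range(N)])
    return total / np.sqrt(factorial(N))
```

Swapping any two particle coordinates flips the sign of the result, confirming antisymmetry; the cost, however, grows as $N!$.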
\paragraph{Slater Determinant \& Hartree Fock Approximation} To efficiently compute the above antisymmetrizer, the mean-field assumption is usually made, i.e. the many-body wave function is made of the product of single-particle wave functions:
\begin{equation}
\Psi_{\text{slater}}=\mathcal{A}\hat\Psi_{\text{slater}}=\mathcal{A}\prod_i\psi_i=\frac{1}{\sqrt{N!}}\begin{vmatrix}
\psi_1(\boldsymbol{r}_1) & \psi_1(\boldsymbol{r}_2) & \cdots & \psi_1(\boldsymbol{r}_N) \\
\psi_2(\boldsymbol{r}_1) & \psi_2(\boldsymbol{r}_2) & \cdots & \psi_2(\boldsymbol{r}_N) \\
{\bm{d}}ots & {\bm{d}}ots & \ddots & {\bm{d}}ots \\
\psi_N(\boldsymbol{r}_1) & \psi_N(\boldsymbol{r}_2) & \cdots & \psi_N(\boldsymbol{r}_N)
\end{vmatrix}
\end{equation}
This approximation has a much more favorable computational complexity: computing the determinant takes only $O(N^3)$. However, it is more restricted in its approximation capability: the mean-field assumption discards the correlation between different wave functions, so the Slater determinant approximation omits the correlation between electrons. Plugging this into \eqref{eq:schr_loss} gives us the Hartree-Fock approximation.
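Under the same kind of toy setup (hypothetical 1D orbitals, for illustration only), the Slater determinant reduces the $N!$-term antisymmetrization to a single determinant, computable in $O(N^3)$:

```python
import numpy as np
from math import factorial

def psi(i, x):
    # hypothetical 1D single-particle orbitals
    return x**i * np.exp(-x**2)

def slater(xs):
    # Psi_slater(r_1..r_N) = det[psi_i(r_j)] / sqrt(N!): an O(N^3) evaluation
    N = len(xs)
    M = np.array([[psi(i, xs[j]) for j in range(N)] for i in range(N)])
    return np.linalg.det(M) / np.sqrt(factorial(N))
```

Exchanging two coordinates swaps two columns of the matrix and hence flips the sign of the determinant, so antisymmetry holds by construction.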
\paragraph{Orthogonal Constraints} While the Slater determinant is much cheaper to compute, integration in the $\mathbb{R}^{N\times 3}$ space is still complex. Introducing an orthogonality constraint between the single-particle wave functions simplifies things greatly:
\begin{equation}
\langle \psi_i | \psi_j \rangle = \delta_{ij}.
\end{equation}
Plugging this constraint and the Slater determinant into \eqref{eq:schr_loss}, with some elementary calculus we obtain the following useful conclusions. From this point on, without ambiguity, the symbol $\boldsymbol{r}$ is also used to denote a vector in 3D space when it appears in a single-particle wave function $\psi_i(\boldsymbol{r})$.
First, the total density of the electrons is the sum of the densities contributed by each wave function. With this conclusion, the external part of the Hamiltonian of the many-body Schr\"{o}dinger Equation connects with the external energy in DFT in \eqref{eq:totalloss}:
\begin{equation}
\rho(\boldsymbol{r})\eqortho\sum_i^N{|\psi_i(\boldsymbol{r})|^2}.
\end{equation}
Second, the kinetic energy of the wave function in the joint $N\times 3$ space breaks down into the summation of the kinetic energy of each single-particle wave function in 3D space, which again equals the kinetic term in \eqref{eq:totalloss}.
\begin{equation}
\langle\Psi_{\text{slater}}|-\frac{1}{2}\nabla^2|\Psi_{\text{slater}}\rangle\eqortho\sum_i\langle\psi_i|-\frac{1}{2}\nabla^2|\psi_i\rangle.
\end{equation}
Third, the electron repulsion term breaks into two terms: one corresponds to the Hartree energy in \eqref{eq:totalloss}, and the other is the exact exchange energy.
\begin{align}
\langle\Psi_{\text{slater}}|&\sum_{i<j}\frac{1}{|\boldsymbol{r}_i-\boldsymbol{r}_j|}|\Psi_{\text{slater}}\rangle\eqortho \\
&\underbrace{\frac{1}{2}\sum_{i\neq j}\int\int\psi^*_{i}(\boldsymbol{r}_1)\psi^*_j(\boldsymbol{r}_2)\psi_{i}(\boldsymbol{r}_1)\psi_{j}(\boldsymbol{r}_2)\frac{1}{|\boldsymbol{r}_1-\boldsymbol{r}_2|}d\boldsymbol{r}_1d\boldsymbol{r}_2}_{E_H}\\
&\underbrace{-\frac{1}{2}\sum_{i\neq j}\int\int\psi^*_{i}(\boldsymbol{r}_1)\psi^*_j(\boldsymbol{r}_1)\psi_{i}(\boldsymbol{r}_2)\psi_{j}(\boldsymbol{r}_2)\frac{1}{|\boldsymbol{r}_1-\boldsymbol{r}_2|}d\boldsymbol{r}_1d\boldsymbol{r}_2}_{E_X}
\end{align}
The above three conclusions link the Schr\"{o}dinger Equation with the three terms in the KS-DFT objective. To finish the link, notice that introducing the Slater determinant means making the mean-field assumption, which causes an approximation error due to the electron correlation missing from the Hartree-Fock formulation we are deriving. In DFT, this correlation error, together with the exchange energy $E_{\text{X}}$, is approximated with a functional $E_{\text{XC}}$. The art of DFT is thus to find the best $E_{\text{XC}}$ that minimizes the approximation error.
\subsection{Derivation of the Kohn-Sham Equation}
\label{app:derivation_ks}
The energy minimization in the functional space formally reads
\begin{equation}
\label{eq:origin}
\min_{\psi_i} \left\{E_{\text{gs}}\left(\psi_{1}, \ldots, \psi_{N}\right)\ |\ \psi_{i} \in H^{1}\left(\mathbb{R}^{3}\right), \langle \psi_{i}, \psi_{j} \rangle = \delta_{i j}, 1 \leqslant i, j \leqslant N\right\},
\end{equation}
where we can denote the electron density as
\begin{equation}
\rho(\boldsymbol{r}) = \sum_{i=1}^N |\psi_i(\boldsymbol{r})|^2,
\end{equation}
and have the objective function to be minimized:
\begin{align*}\label{eq:E_gs}
E
&= -\dfrac12 \sum_i^N \int \psi_i^*(\boldsymbol{r}) \left( \nabla^2 \psi_i(\boldsymbol{r}) \right) d\boldsymbol{r} + \sum_i^N \int v_{\mathrm{ext}}(\boldsymbol{r}) |\psi_i(\boldsymbol{r})|^2 d\boldsymbol{r} \\
& + \dfrac12\int \int \dfrac{ \big(\sum_i^N |\psi_i(\boldsymbol{r})|^2 \big) \big(\sum_j^N |\psi_j(\boldsymbol{r}')|^2 \big)}{\vert \boldsymbol{r}-\boldsymbol{r}' \vert}d\boldsymbol{r}d\boldsymbol{r}' + \sum_i^N \int v_{\mathrm{xc}}\big( \boldsymbol{r} \big) |\psi_i(\boldsymbol{r})|^2 d\boldsymbol{r}.
\end{align*}
This objective function can be solved by the Euler-Lagrange method. The Lagrangian is expressed as
\begin{equation}
\mathcal{L}(\psi_1, \cdots, \psi_N) = E\left(\psi_{1}, \ldots, \psi_{N}\right) - \sum_{i,j=1}^N \epsilon_{ij} \left(\int_{\mathbb{R}^{3}} \psi_{i}^* \psi_{j} - \delta_{i j}\right).
\end{equation}
The first-order condition of the above optimization problem can be written as
\begin{equation}
\left\{
\begin{aligned}
\dfrac{\delta \mathcal{L}}{\delta \psi_i} &= 0, \ \ \ \ i=1, \cdots, N \\
\dfrac{\partial \mathcal{L}}{\partial \epsilon_{ij}} &= 0, \ \ \ \ i, j=1, \cdots, N
\end{aligned}
\right.
\end{equation}
The first variation above can be written as,
\begin{equation}
\dfrac{ \delta \mathcal{L} }{ \delta \psi_i } = \left( \dfrac{\delta E_{\text{Kin}}}{ \delta \rho} + \dfrac{\delta E_{\text{Ext}}}{ \delta \rho} + \dfrac{\delta E_{\text{Hartree}}}{ \delta \rho} + \dfrac{\delta E_{\text{XC}}}{ \delta \rho} \right) \dfrac{\delta \rho}{\delta \psi_i} - \sum\limits_{j=1}^N \epsilon_{ij}\psi_i.
\end{equation}
Next, we derive each component in the above equation. We follow the widely adopted convention of treating $\psi_i^*$ and $\psi_i$ as different functions \cite{parr1980density}, and have
\begin{equation}
\begin{aligned}
\dfrac{\delta \rho}{\delta \psi_i} &= \dfrac{\delta (\psi_i^* \psi_i)}{\delta \psi_i^*} = \psi_i,
\end{aligned}
\end{equation}
For $E_{\text{Hartree}}$, according to the definition of functional derivatives:
\begin{equation}
\begin{aligned}
E_{\text{Hartree}}[\rho + \delta \rho] - E_{\text{Hartree}}[\rho] &= \frac{1}{2}\int\int \frac{(\rho+\delta \rho)(\boldsymbol{r})(\rho+\delta \rho)(\boldsymbol{r}') - \rho(\boldsymbol{r})\rho(\boldsymbol{r}')}{|\boldsymbol{r}-\boldsymbol{r}'|}d\boldsymbol{r} d\boldsymbol{r}'\\
&= \frac{1}{2}\int\int \frac{(\delta \rho)(\boldsymbol{r})\rho(\boldsymbol{r}') + (\delta \rho)(\boldsymbol{r}') \rho(\boldsymbol{r})}{|\boldsymbol{r}-\boldsymbol{r}'|}d\boldsymbol{r} d\boldsymbol{r}'\\
&= \frac{1}{2}\cdot 2\int\int \frac{(\delta \rho)(\boldsymbol{r})\rho(\boldsymbol{r}') }{|\boldsymbol{r}-\boldsymbol{r}'|}d\boldsymbol{r} d\boldsymbol{r}'\\
&= \int\int \frac{(\delta \rho)(\boldsymbol{r})\rho(\boldsymbol{r}') }{|\boldsymbol{r}-\boldsymbol{r}'|}d\boldsymbol{r} d\boldsymbol{r}'\\
&= \int (\delta \rho)(\boldsymbol{r})\left(\int \frac{\rho(\boldsymbol{r}') }{|\boldsymbol{r}-\boldsymbol{r}'|}d\boldsymbol{r}'\right) d\boldsymbol{r},
\end{aligned}
\end{equation}
where we have used the fact that:
\begin{equation}
\int\int \frac{f_1(\boldsymbol{r})f_2(\boldsymbol{r}')}{|\boldsymbol{r}-\boldsymbol{r}'|}d\boldsymbol{r} d\boldsymbol{r}' = \int\int \frac{f_1(\boldsymbol{r}')f_2(\boldsymbol{r})}{|\boldsymbol{r}-\boldsymbol{r}'|}d\boldsymbol{r} d\boldsymbol{r}',
\end{equation}
due to the symmetry between $\boldsymbol{r}$ and $\boldsymbol{r}'$.
We emphasize that the functional derivatives are computed on functionals mapping functions to scalars, and consequently, we need to compute that for the double integral on both $\boldsymbol{r}$ and $\boldsymbol{r}'$.
Therefore, the functional derivative for $E_{\text{Hartree}}$ can be rewritten into:
\begin{equation}
\begin{aligned}
\frac{\delta E_{\text{Hartree}}[\rho]}{\delta \rho}
&= \int \frac{\rho(\boldsymbol{r}') }{|\boldsymbol{r}-\boldsymbol{r}'|} d\boldsymbol{r}'.
\end{aligned}
\end{equation}
Using the chain rule, we have
\begin{equation}
\begin{aligned}
\frac{\delta E_{\text{Hartree}}[\rho]}{\delta \psi_i^*}
&= \left(\int \frac{\rho(\boldsymbol{r}') }{|\boldsymbol{r}-\boldsymbol{r}'|} d\boldsymbol{r}'\right)\frac{\partial \rho}{\partial \psi_i}\\
&= \left(\int \frac{\rho(\boldsymbol{r}') }{|\boldsymbol{r}-\boldsymbol{r}'|} d\boldsymbol{r}'\right)\psi_i.
\end{aligned}
\end{equation}
For $E_{\text{Ext}}$ and $E_{\text{XC}}$,
\begin{equation}
\begin{aligned}
\frac{\delta E_{\text{Ext}}}{\delta \psi_i} &= \frac{\delta E_{\text{Ext}}}{\delta \rho}\frac{\partial \rho}{\partial \psi_i} = v_{\mathrm{ext}} \psi_i.\\
\frac{\delta E_{\text{XC}}}{\delta \psi_i} &= \frac{\delta E_{\text{XC}}}{\delta \rho}\frac{\partial \rho}{\partial \psi_i} = v_{\mathrm{xc}} \psi_i.
\end{aligned}
\end{equation}
For the kinetic energy,
\begin{equation}
\begin{aligned}
\frac{\delta E_{\text{Kin}}}{\delta \psi_i^*} = \frac{\delta \big( -\frac{1}{2}\psi^*_i\nabla^2\psi_i \big)}{\delta \psi_i^*} = -\frac{1}{2}\nabla^2 \psi_i.
\end{aligned}
\end{equation}
Setting the first variation of the Lagrangian to zero yields:
\begin{align}
\label{eq:ks}
H \left(\psi_1, \ldots, \psi_{N}\right)\psi_i = \lambda_i \psi_i,
\end{align}
where $\lambda_i = \sum_{j=1}^N \epsilon_{ij}$, and the Hamiltonian is explicitly written as
\begin{align}
\label{eq:hamil}
\hat{H} = -\frac{1}{2}\nabla^2 + v_{\mathrm{ext}}(\boldsymbol{r}) + \int_{\mathbb{R}^3}\frac{\rho(\boldsymbol{r}')}{|\boldsymbol{r}-\boldsymbol{r}'|}d\boldsymbol{r}' + v_{\mathrm{xc}}(\boldsymbol{r}),
\end{align}
\begin{equation}
\rho(\boldsymbol{r}) = \sum_{\sigma} \sum_{i=1}^{N}\big\vert{\psi_i(\boldsymbol{r} )}\big\vert^2, \label{eq:wave2density}
\end{equation}
which is the desired form of the Kohn-Sham Equation.
\begin{comment}
Before constructing the formal connection, we need to show the following two conditions are equivalent:
\begin{equation}
\begin{aligned}
H(A)\left(\sum_{j=1}^{N} \alpha_{ij} \phi_j \right)&= \lambda_i \left(\sum_{j=1}^{N} \alpha_{ij} \phi_j \right).\\
\sum_{j=1}^{N} \langle \phi_i, H\phi_j \rangle \alpha_{jk} &= \lambda_k \sum_{j=1}^{N} \langle \phi_i, \phi_j \rangle \alpha_{jk}.
\end{aligned}
\end{equation}
Otherwise, the Lagrangian condition is stronger than the nonlinear eigenvector problem, which are not equivalent and cannot be formally connected with GD.
Note that, on the right hand side of the upper equation, the function is in the space spanned by the basis. Therefore, to ensure the equality, we only need ensure the inner products with all the basis of two functions are the same.
\end{comment}
\subsection{The SCF Algorithm.}
The connection between SCF and direct optimization can be summarized as follows. D4FT minimizes the energy directly, with the constraints encoded into the corresponding parameter constraints, while SCF handles the constrained optimization via the Lagrangian. They solve the same minimization problem using different approaches, which is reflected in how each deals with the constraints of the problem.
D4FT performs projected gradient descent on the constraint set $\Phi$, whereas SCF solves the corresponding Lagrangian formulation, in which the multipliers enforce the orthogonality constraint. The additional constraint from the basis functions is reflected in the inner products appearing in SCF.
The SCF algorithm is presented in Algo. \ref{alg:scf}.
\begin{algorithm}[th]
\caption{Self-consistent Field Optimization for Kohn-Sham Equation}
\label{alg:scf}
\begin{algorithmic}[1]
\Require a set of single-particle wave functions $\{ {\psi}_i \}$, convergence criterion $\varepsilon$.
\Ensure ground state energy, electron density
\State Compute the initial electron density $\rho$ via \eqref{eq:wave2density};
\While{ True }
\State Update Hamiltonian via \eqref{eq:hamil};
\State Solve the eigenvalue equations defined in \eqref{eq:ks} and get new $\{ {\psi}_i \}$.
\State Compute electron density $\rho^{\text{new}}$ via \eqref{eq:wave2density}
\If{$ \vert \rho^{\text{new}} - \rho \vert < \varepsilon $}
\State break
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
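The fixed-point structure of the SCF loop above can be illustrated with a toy density-dependent eigenvalue problem (a hypothetical model Hamiltonian, not a molecular one):

```python
import numpy as np

rng = np.random.default_rng(0)
B, N = 6, 2                      # basis size, number of occupied orbitals
A = rng.normal(size=(B, B))
H0 = (A + A.T) / 2               # fixed symmetric part of the Hamiltonian

def hamiltonian(rho):
    # toy density-dependent mean-field term (stand-in for Hartree/XC updates)
    return H0 + 0.1 * np.diag(rho)

rho = np.ones(B) / B             # initial electron density
for _ in range(300):
    _, V = np.linalg.eigh(hamiltonian(rho))
    C = V[:, :N]                             # lowest-N eigenvectors = occupied orbitals
    rho_new = np.sum(C**2, axis=1)           # density recomputed from the orbitals
    if np.linalg.norm(rho_new - rho) < 1e-10:
        break                                # self-consistency reached
    rho = 0.5 * rho + 0.5 * rho_new          # damping, akin to Fock-matrix momentum
```

At convergence the density reproduces itself through the eigenproblem, which is precisely the self-consistency condition the algorithm targets.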
\section{Proofs and Derivations}
\subsection{Proof of Proposition \ref{prop:1} }\label{app:1}
\begin{proof}
We first prove the following statement as a preparation:\ (orthogonal constraint) for any $(\psi_1, \psi_2, \cdots, \psi_N)^\top \in \mathcal{F}_{{\boldsymbol{W}}}$, it holds that $\langle \psi _i | \psi_j \rangle = \delta_{i j}$ for all $(i,j)$. Let $(\psi_1, \psi_2, \cdots, \psi_N)^\top \in \mathcal{F}_{{\boldsymbol{W}}}$. From the definition,
$$
\psi_i (\boldsymbol{r}) ={\boldsymbol{Q}}_{i} {\boldsymbol{D}} {\boldsymbol{\Phi}} (\boldsymbol{r}),
$$
where ${\boldsymbol{Q}}_{i}$ is the $i$-th row vector of ${\boldsymbol{Q}}$ and $ {\boldsymbol{\Phi}} (\boldsymbol{r})= (\phi_1(\boldsymbol{r}), \phi_2(\boldsymbol{r}), \cdots, \phi_N(\boldsymbol{r}))^\top$.
Thus, since $\boldsymbol U^\top\boldsymbol U =I$,
we have that for any $(i,j) \in [N] \times [N]$, \begin{align*}
\langle \psi _i | \psi_j \rangle = {\boldsymbol{Q}}_i {\boldsymbol{D}}\boldsymbol B {\boldsymbol{D}} ^\top {\boldsymbol{Q}}_j ^\top ={\boldsymbol{Q}}_i \boldsymbol \Sigma^{-1/2} \boldsymbol U^\top\boldsymbol U \boldsymbol \Sigma \boldsymbol U^\top \boldsymbol U \boldsymbol \Sigma^{-1/2} {\boldsymbol{Q}}_j ^\top ={\boldsymbol{Q}}_i {\boldsymbol{Q}}_j ^\top =\delta_{i j}.
\end{align*}
This proves the statement that for any $(\psi_1, \psi_2, \cdots, \psi_N)^\top \in \mathcal{F}_{{\boldsymbol{W}}}$, it holds that $\langle \psi _i | \psi_j \rangle = \delta_{i j}$ for all $(i,j)$. This statement on the orthogonal constraint and the definitions of $\mathcal{F}_{\boldsymbol{W}}$ and $\mathcal{F}$ imply that $\mathcal{F}_{\boldsymbol{W}} \subseteq\mathcal{F}$. Thus, in the rest of the proof, we need to prove $\mathcal{F}_{\boldsymbol{W}} \supseteq \mathcal{F}$. This is equivalent to
$$
\{\boldsymbol{Q} \boldsymbol{D} \boldsymbol{\Phi}:(\boldsymbol Q, \boldsymbol R) = \mathtt{QR}(\boldsymbol{W}), {\boldsymbol{W}} \in \mathbb{R}^{N \times N}\}\supseteq \{\boldsymbol{C} \boldsymbol{\Phi}:\boldsymbol{C} \in \mathbb{R}^{N \times N}, {\boldsymbol{C}}_i {\boldsymbol{B}} {\boldsymbol{C}}_j^\top= \delta_{i j}\},
$$
where ${\boldsymbol{C}}_{i}$ is the $i$-th row vector of ${\boldsymbol{C}}$. This is implied if the following holds:
\begin{align} \label{eq:1}
\{\boldsymbol{Q} \boldsymbol{D} :(\boldsymbol Q, \boldsymbol R) = \mathtt{QR}(\boldsymbol{W}), {\boldsymbol{W}} \in \mathbb{R}^{N \times N}\}\supseteq \{\boldsymbol{C}:\boldsymbol{C}\in \mathbb{R}^{N \times N} ,{\boldsymbol{C}}_i {\boldsymbol{B}} {\boldsymbol{C}}_j^\top= \delta_{i j}\}.
\end{align}
Since ${\boldsymbol{D}}$ is non-singular, for any ${\boldsymbol{C}} \in \{\boldsymbol{C}:\boldsymbol{C}\in \mathbb{R}^{N \times N} ,{\boldsymbol{C}}_i {\boldsymbol{B}} {\boldsymbol{C}}_j^\top= \delta_{i j}\}$, there exists $\tilde {\boldsymbol{C}} \in \mathbb{R}^{N\times N}$ such that ${\boldsymbol{C}}=\tilde {\boldsymbol{C}} {\boldsymbol{D}}$ (and thus ${\boldsymbol{C}}_i {\boldsymbol{B}} {\boldsymbol{C}}_j^\top=\tilde {\boldsymbol{C}}_i {\boldsymbol{D}}{\boldsymbol{B}} {\boldsymbol{D}}^\top \tilde {\boldsymbol{C}}_j^\top= \delta_{i j}$). Similarly, for any $\tilde {\boldsymbol{C}} {\boldsymbol{D}} \in \{\tilde{\boldsymbol{C}} {\boldsymbol{D}} :\tilde {\boldsymbol{C}}\in \mathbb{R}^{N \times N}, \tilde{\boldsymbol{C}}_i {\boldsymbol{D}}{\boldsymbol{B}} {\boldsymbol{D}}^\top \tilde{\boldsymbol{C}}_j^\top= \delta_{i j}\}$, there exists ${\boldsymbol{C}} \in \mathbb{R}^{N\times N}$ such that ${\boldsymbol{C}}=\tilde {\boldsymbol{C}} {\boldsymbol{D}}$ (and thus ${\boldsymbol{C}}_i {\boldsymbol{B}} {\boldsymbol{C}}_j^\top= \delta_{i j}$). Therefore,
$$
\{\boldsymbol{C}:\boldsymbol{C}\in \mathbb{R}^{N \times N} ,{\boldsymbol{C}}_i {\boldsymbol{B}} {\boldsymbol{C}}_j^\top= \delta_{i j}\}=\{\boldsymbol{C} {\boldsymbol{D}} :\boldsymbol{C}\in \mathbb{R}^{N \times N} ,{\boldsymbol{C}}_i {\boldsymbol{D}}{\boldsymbol{B}} {\boldsymbol{D}}^\top {\boldsymbol{C}}_j^\top= \delta_{i j}\}.
$$
Here, ${\boldsymbol{C}}_i {\boldsymbol{D}}{\boldsymbol{B}} {\boldsymbol{D}}^\top {\boldsymbol{C}}_j^\top={\boldsymbol{C}}_i \boldsymbol \Sigma^{-1/2} \boldsymbol U^\top\boldsymbol U \boldsymbol \Sigma \boldsymbol U^\top \boldsymbol U \boldsymbol \Sigma^{-1/2} {\boldsymbol{C}}_j ^\top ={\boldsymbol{C}}_i {\boldsymbol{C}}_j^\top$. Thus,
$$
\{\boldsymbol{C}:\boldsymbol{C}\in \mathbb{R}^{N \times N} ,{\boldsymbol{C}}_i {\boldsymbol{B}} {\boldsymbol{C}}_j^\top= \delta_{i j}\}=\{\boldsymbol{C} {\boldsymbol{D}} :\boldsymbol{C}\in \mathbb{R}^{N \times N} ,{\boldsymbol{C}}_i {\boldsymbol{C}}_j^\top= \delta_{i j}\}.
$$
Substituting this into \eqref{eq:1}, the desired statement is implied if the following holds:$$
\{\boldsymbol{Q} \boldsymbol{D} :(\boldsymbol Q, \boldsymbol R) = \mathtt{QR}(\boldsymbol{W}), {\boldsymbol{W}} \in \mathbb{R}^{N \times N}\}\supseteq \{\boldsymbol{C} {\boldsymbol{D}} :\boldsymbol{C}\in \mathbb{R}^{N \times N} ,{\boldsymbol{C}}_i {\boldsymbol{C}}_j^\top= \delta_{i j}\},
$$
which is implied by
\begin{align} \label{eq:2}
\{\boldsymbol{Q} :(\boldsymbol Q, \boldsymbol R) = \mathtt{QR}(\boldsymbol{W}), {\boldsymbol{W}} \in \mathbb{R}^{N \times N}\}\supseteq \{ \boldsymbol{C}\in \mathbb{R}^{N \times N} :{\boldsymbol{C}}_i {\boldsymbol{C}}_j^\top= \delta_{i j}\}.
\end{align}
Finally, we note that for any ${\boldsymbol{C}} \in \{ \boldsymbol{C}\in \mathbb{R}^{N \times N} :{\boldsymbol{C}}_i {\boldsymbol{C}}_j^\top= \delta_{i j}\}$, there exists ${\boldsymbol{W}} \in \mathbb{R}^{N \times N}$ such that $\boldsymbol{Q} = \boldsymbol{C}$ where $(\boldsymbol Q, \boldsymbol R) = \mathtt{QR}(\boldsymbol{W})$.
Indeed, using the Gram--Schmidt process of the QR decomposition, for any ${\boldsymbol{C}} \in \{ \boldsymbol{C}\in \mathbb{R}^{N \times N} :{\boldsymbol{C}}_i {\boldsymbol{C}}_j^\top= \delta_{i j}\}$, setting ${\boldsymbol{W}}={\boldsymbol{C}} \in \mathbb{R}^{N\times N}$ suffices to obtain $\boldsymbol{Q} = \boldsymbol{C}$ where $(\boldsymbol Q, \boldsymbol R) = \mathtt{QR}(\boldsymbol{W})$.
This implies \eqref{eq:2}, which implies the desired statement.
\end{proof}
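The core identity of the proof, ${\boldsymbol{C}}_i {\boldsymbol{B}} {\boldsymbol{C}}_j^\top = \delta_{ij}$ for ${\boldsymbol{C}} = {\boldsymbol{Q}}{\boldsymbol{D}}$, can also be checked numerically. The sketch below is an illustrative verification only; the random overlap matrix and the sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

# Overlap matrix B of a non-orthogonal basis: symmetric positive definite.
X = rng.normal(size=(N, N))
B = X @ X.T + N * np.eye(N)

# Eigendecomposition B = U Sigma U^T and whitening matrix D = Sigma^{-1/2} U^T,
# so that D B D^T = I.
sigma, U = np.linalg.eigh(B)
D = np.diag(sigma ** -0.5) @ U.T

# Any weight matrix W yields an orthogonal Q via QR; C = Q D then satisfies
# C_i B C_j^T = delta_ij, i.e. <psi_i | psi_j> = delta_ij.
W = rng.normal(size=(N, N))
Q, _ = np.linalg.qr(W)
C = Q @ D
gram = C @ B @ C.T          # should equal the identity matrix
```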
\subsection{Proof of Proposition \ref{prop:2} }
\label{app:p2}
\begin{proof}
The total energy minimization problem defined in \eqref{eq:totalloss} is a constrained minimization problem, which can be rewritten as
\begin{equation}
\min_{\boldsymbol{\Psi}} \left\{E_{\text{gs}}\left(\boldsymbol{\Psi}\right)\ \Big\vert \ \boldsymbol{\Psi} \in H^{1}\left(\mathbb{R}^{3}\right), \langle \boldsymbol{\Psi} \vert \boldsymbol{\Psi} \rangle=\boldsymbol{I} \right\},
\end{equation}
where $H^1$ denotes the Sobolev space of all $L^2$ functions whose first-order derivatives are also in $L^2$. One typically resorts to the associated Euler--Lagrange equations for a solution:
$$
\mathcal{L}(\boldsymbol{\Psi}) = E_{\text{gs}}(\boldsymbol{\Psi}) - \sum_{i,j=1}^N \epsilon_{ij} \left(\langle \psi_{i}| \psi_{j} \rangle - \delta_{i j}\right).
$$
Setting the first-order derivatives to zero, i.e., $\frac{\delta \mathcal{L}}{\delta \psi_i^*} = 0$, the stationarity condition becomes:
$
\hat H \left(\boldsymbol{\Psi}\right)\psi_i = \lambda_i \psi_i,
$
where $\lambda_i = \sum_{j=1}^N \epsilon_{ij}$, and the Hamiltonian $\hat H$ is the same as defined in \eqref{eq:ks}.
To solve this nonlinear eigenvalue/eigenfunction problem, the SCF iteration in Algorithm~\ref{alg:scf} is adopted. The output of the $(t+1)$th iteration is derived from the Hamiltonian with the previous output, i.e.,
$
\hat H \left(\boldsymbol{\Psi}^t\right)\psi_i^{t+1} = \lambda_i^{t+1} \psi_i^{t+1}.
$
The convergence criterion is $\boldsymbol{\Psi}^t = \boldsymbol{\Psi}^{t+1}$, which is the self-consistency formally defined in Definition~\ref{def:sc}.
Since GD and SCF solve essentially the same problem, we can derive the equivalence between their solutions. GD performs projected gradient descent on the constraint set $\left\{\boldsymbol{\Psi} \ \vert \ \langle \boldsymbol{\Psi} \vert \boldsymbol{\Psi} \rangle=\boldsymbol{I}\right\}$, whereas SCF works with the corresponding Lagrangian, whose multipliers enforce the orthogonality constraint.
\end{proof}
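The projected-gradient-descent interpretation above can be illustrated on a toy problem: minimizing the linear-model energy $\operatorname{Tr}(\boldsymbol{\Psi}^\top H_0 \boldsymbol{\Psi})$ over orthonormal $\boldsymbol{\Psi}$, re-projecting onto the constraint set via QR after each step. The matrix $H_0$, step size, and iteration count below are illustrative assumptions; for this linear model the minimum is the sum of the $k$ smallest eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
X = rng.normal(size=(n, n))
H0 = (X + X.T) / 2                       # fixed symmetric toy "Hamiltonian"

def energy(Psi):
    # linear-model energy tr(Psi^T H0 Psi), a stand-in for E_gs
    return np.trace(Psi.T @ H0 @ Psi)

Psi, _ = np.linalg.qr(rng.normal(size=(n, k)))   # feasible starting point
lr = 0.05
for _ in range(2000):
    grad = 2.0 * H0 @ Psi                # Euclidean gradient of the energy
    Psi, _ = np.linalg.qr(Psi - lr * grad)  # gradient step + QR projection
E_gd = energy(Psi)
E_exact = np.sort(np.linalg.eigvalsh(H0))[:k].sum()  # sum of k smallest eigenvalues
```

For this quadratic energy the iteration reduces to subspace iteration with $(I - 2\,\mathrm{lr}\,H_0)$, so it converges to the eigenspace of the $k$ smallest eigenvalues.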
\section{Complexity analysis}
\label{app:breakdown}
\subsection{Complexity analysis for full batch}
We first analyze the computational complexity of SCF, using the integral
\begin{align*}
\int\int\rho(\boldsymbol{r}_1)\frac{1}{|\boldsymbol{r}_1-\boldsymbol{r}_2|}\psi_i(\boldsymbol{r}_2)\psi_j(\boldsymbol{r}_2)d{\boldsymbol{r}_1}d{\boldsymbol{r}_2}
\end{align*}
\begin{enumerate}
\item Compute $\psi_i(\boldsymbol{r})$ for $i\in{[1\mathinner {\ldotp \ldotp} N]}$ at all $n$ grid points. It takes $\mathcal{O}(NB)$ per point, and we do it for $n$ points; therefore it takes $\mathcal{O}(nNB)$, producing an $\mathbb{R}^{n\times N}$ matrix. \label{step:1}
\item Compute $\rho(\boldsymbol{r})$: it takes an extra $\mathcal{O}(nN)$ step to sum the squares of the wave functions, on top of step \ref{step:1}, producing an $\mathbb{R}^{n}$ vector.
\item Compute $\psi_i(\boldsymbol{r})\psi_j(\boldsymbol{r})$: it takes an outer product on top of step \ref{step:1}, $\mathcal{O}(nN^2)$, producing an $\mathbb{R}^{n\times N\times N}$ tensor.
\item Compute $\frac{1}{|\boldsymbol{r}_1-\boldsymbol{r}_2|}$: $\mathcal{O}(n^2)$.
\item Tensor contraction of the outcomes of steps 2, 3, and 4: $\mathcal{O}(n^2N^2)$.
\item In total: $\mathcal{O}(nNB)+\mathcal{O}(n^2N^2) = \mathcal{O}(n^2N^2)$.
\end{enumerate}
We now analyze the computational complexity of SGD, using the integral
\begin{align*}
\int\int\rho(\boldsymbol{r}_1)\frac{1}{|\boldsymbol{r}_1-\boldsymbol{r}_2|}\rho(\boldsymbol{r}_2)d\boldsymbol{r}_1d\boldsymbol{r}_2
\end{align*}
\begin{enumerate}
\item Use steps 1, 2, and 4 from the SCF analysis. Step 3 is not needed because only the energy is required: $\mathcal{O}(nNB)$.
\item Contract $\mathbb{R}^{n}$, $\mathbb{R}^{n}$, $\mathbb{R}^{n\times n}$ into a scalar, $\mathcal{O}(n^2)$.
\item In total $\mathcal{O}(nNB)+\mathcal{O}(n^2)=\mathcal{O}(nN^2)+\mathcal{O}(n^2)$.
\end{enumerate}
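The gap between the two analyses comes from step 3: SCF needs the $\mathbb{R}^{n\times N\times N}$ tensor of orbital products, while SGD only needs the density. The sketch below contrasts the two contractions on random data (the uniform grid weights and random ``orbitals'' are illustrative assumptions); since $\sum_i \psi_i^2(\boldsymbol{r}) = \rho(\boldsymbol{r})$, the trace of the SCF matrix recovers the SGD scalar.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 4                      # grid points, orbitals

psi = rng.normal(size=(n, N))     # psi[r, i] = psi_i(r) on the grid   (step 1)
rho = (psi ** 2).sum(axis=1)      # rho(r) = sum_i psi_i(r)^2          (step 2)

r = rng.normal(size=(n, 3))       # random stand-in grid coordinates
dist = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
K = np.zeros((n, n))              # kernel 1/|r1 - r2|, diagonal zeroed (step 4)
mask = dist > 0
K[mask] = 1.0 / dist[mask]

# SCF path: O(n^2 N^2) contraction producing an N x N matrix of integrals
# M[i, j] ~ sum_{r1, r2} rho(r1) K(r1, r2) psi_i(r2) psi_j(r2)        (steps 3, 5)
M = np.einsum('a,ab,bi,bj->ij', rho, K, psi, psi)

# SGD path: O(n^2) contraction producing only the scalar energy term
E = np.einsum('a,ab,b->', rho, K, rho)
```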
\subsection{Complexity analysis for minibatch}
The complexity of SCF under minibatch size $m$ is $\mathcal{O}(mNB)+\mathcal{O}(m^2N^2)$ per iteration. Since the number of iterations per epoch is $n / m$, the total complexity for one epoch is $\mathcal{O}(nNB)+\mathcal{O}(nmN^2) = \mathcal{O}(n(1+m)N^2)$, using $B=N$, i.e., the number of orbitals equals the number of basis functions.
Similarly, the complexity of SGD is $(\mathcal{O}(mNB)+\mathcal{O}(m^2))n / m = \mathcal{O}(n(N^2+m))$.
\subsection{Literature on the convergence of SCF methods.}
There is a rich literature on the convergence of SCF.
\cite{cances2021convergence} provides a literature review and a summary of methods for the iterative SCF algorithm. Specifically, they show that the problem can be tackled via either Lagrangian + SCF or direct gradient descent on a matrix manifold.
\cite{yang2007trust} shows that the Lagrangian + SCF algorithm indirectly optimizes the original energy by minimizing a sequence of quadratic surrogate functions. Specifically, at the $t$th time step, SCF minimizes the quadratic surrogate energy using the $t$th Hamiltonian,
$
\min_A \frac{1}{2}\operatorname{Tr}(A^*H^{(t)}A)
$,
while the correct nonlinear energy should be
$
\min_A \frac{1}{2}\operatorname{Tr}(A^*H(A)A)
$.
So, our GD directly minimizes the nonlinear energy, while SCF deals with a quadratic surrogate energy.
Using a concrete two-dimensional example, they illustrate how SCF may fail to converge when the quadratic surrogate is a poor second-order approximation of the original energy.
\cite{yang2009convergence} shows that, for a particular class of nonlinear eigenvector problems, SCF can converge to two different limit points, neither of which is the solution. Nevertheless, they also identify the condition under which the SCF iteration becomes contractive, which guarantees its convergence: the gap between the occupied and unoccupied states must be sufficiently large, and the second-order derivatives of the exchange-correlation functional must be uniformly bounded above.
\cite{bai2020optimal} provides an optimal convergence rate for a class of nonlinear eigenvector problems. Since SCF is an iterative algorithm, one can show that SCF is a contraction mapping:
\begin{equation}
\limsup_{t \rightarrow \infty} \frac{d(A^{(t+1)}, A^*)}{d(A^{(t)}, A^*)} \leq \eta,
\end{equation}
where $d$ measures the distance between two matrices, $A^*$ is the solution, and $\eta$ is the contraction factor determining the convergence rate of SCF, whose upper bound is given in their result.
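The contraction property can be made concrete on a toy fixed-point iteration. The linear map below is an illustrative stand-in for the SCF fixed-point map (not derived from any of the cited works): its contraction factor is $\eta = 0.5$, and the empirical distance ratios match it.

```python
import numpy as np

# Toy fixed-point iteration A_{t+1} = F(A_t) with F(A) = 0.5 * A + B.
# Its unique fixed point is A* = 2B, and the contraction factor is eta = 0.5.
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
A_star = 2.0 * B

def d(X, Y):
    return np.linalg.norm(X - Y)   # Frobenius distance between matrices

A = np.zeros((3, 3))
ratios = []
for _ in range(20):
    A_next = 0.5 * A + B
    ratios.append(d(A_next, A_star) / d(A, A_star))
    A = A_next
eta = max(ratios)                  # empirical bound on the contraction factor
```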
\cite{liu2015analysis,liu2014convergence} formulates SCF as a fixed point map, whose Jacobian is derived explicitly, from which the convergence of SCF is established, under a similar condition to the one in \cite{yang2009convergence}.
\cite{cai2018eigenvector} derives nearly optimal local and global convergence rates of SCF.
\section{More Experimental Results}
\subsection{Full Version of Table \ref{tb:groundstate1}}
\begin{table}[ht]
\centering
\caption{Comparison of Ground State Energy (LDA, 6-31g, Ha)}
\label{tb:groundstate}
\begin{tabular}{c p{1cm} p{1.8cm}<{\centering} p{1.8cm}<{\centering} p{1.8cm}<{\centering} p{1.8cm}<{\centering} p{1.8cm}<{\centering}}
\toprule[2pt]
Molecule & Method & \makecell{Nuclear \\ Repulsion \\ Energy} & \makecell{Kinetic+\\External \\ Energy} & \makecell{Hartree \\ Energy} & \makecell{XC \\ Energy} & \makecell{Total \\ Energy} \\
\midrule[1.5pt]
\multirow{3}{*}{Hydrogen} & PySCF & 0.71375 & -2.48017 & 1.28032 & -0.55256 & -1.03864 \\
& D4FT & 0.71375 & -2.48343 & 1.28357 & -0.55371 & -1.03982 \\
& JaxSCF & 0.71375 & -2.48448 & 1.28527 & -0.55390 & -1.03919 \\
\midrule[0.5pt]
\multirow{2}{*}{Methane} & PySCF & 13.47203 & -79.79071 & 32.68441 & -5.86273 & -39.49700 \\
& D4FT & 13.47203 & -79.77446 & 32.68037 & -5.85949 & -39.48155 \\
& JaxSCF & 13.47203 & -79.80889 & 32.71502 & -5.86561 & -39.48745 \\\midrule[0.5pt]
\multirow{3}{*}{Water} & PySCF & 9.18953 & -122.83945 & 46.58941 & -8.09331 & -75.15381 \\
& D4FT & 9.18953 & -122.78656 & 46.54537 & -8.08588 & -75.13754 \\
& JaxSCF & 9.18953 & -122.78241 & 46.53507 & -8.09343 & -75.15124 \\\midrule[0.5pt]
\multirow{3}{*}{Oxygen} & PySCF & 28.04748 & -261.19971 & 99.92152 & -14.79109 & -148.02180 \\
& D4FT & 28.04748 & -261.13046 & 99.82705 & -14.77807 & -148.03399 \\
& JaxSCF & 28.04748 & -261.16314 & 99.87551 & -14.78290 & -148.02304 \\ \midrule[0.5pt]
\multirow{3}{*}{Ethanol} & PySCF & 82.01074 & -371.73603 & 156.36891 & -18.65741 & -152.01379 \\
& D4FT & 82.01074 & -371.73431 & 156.34982 & -18.65517 & -152.02893 \\
& JaxSCF & 82.01074 & -371.64258 & 156.27433 & -18.65711 & -152.01460 \\\midrule[0.5pt]
\multirow{3}{*}{Benzene} & PySCF & 203.22654 & -713.15807 & 312.42026 & -29.84784 & -227.35910 \\
& D4FT & 203.22654 & -712.71081 & 311.94665 & -29.81097 & -227.34860 \\
& JaxSCF & 203.22654 & -712.91053 & 312.15690 & -29.84042 & -227.36755 \\\bottomrule[2pt]
\end{tabular}
\end{table}
\subsection{Full Version of Table \ref{tab:localscaling}}
\label{sec:table2}
\begin{table}[ht]
\centering
\caption{Comparison of ground state energy on atoms (LSDA, STO-3g, Hartree). }
\begin{tabular}{l p{1.8cm}<{\centering} p{1.8cm}<{\centering} p{1.8cm}<{\centering} p{1.8cm}<{\centering} p{1.8cm}<{\centering} }
\toprule[2pt]
Atom & He & Li &Be & B & C \\ \midrule[1pt]
PySCF & -2.65731 & -7.06641 & -13.98871 & -23.66157 & -36.47199 \\
D4FT & -2.65731 & -7.04847 & -13.97781 & -23.64988 & -36.47026 \\
NBLST & \textbf{-2.65978} & \textbf{-7.13649} & \textbf{-13.99859} & \textbf{-23.67838} & \textbf{-36.52543} \\\midrule[1.2pt]
Atom & N & O & F & Ne & Na \\ \midrule[1pt]
PySCF & -52.80242 & -72.76501 & -96.93644 & -125.38990 & -158.40032 \\
D4FT & -52.82486 & -72.78665 & -96.99325 & -125.16031 & -158.16140 \\
NBLST & \textbf{-52.86340} & \textbf{-72.95256} & \textbf{-97.24058} & \textbf{-125.62643} & -157.72314 \\\midrule[1.2pt]
Atom & Mg & Al & Si & P & S \\ \midrule[1pt]
PySCF & -195.58268 & -237.25091 & -283.59821 & -334.79401 & -390.90888 \\
D4FT & -195.47165 & -237.03756 & -283.41116 & -335.10473 & -391.09967 \\
NBLST & \textbf{-196.00629} & -235.83420 & \textbf{-283.72964} & \textbf{-335.38168} & \textbf{-391.34567} \\
\bottomrule[2pt]
\end{tabular}
\end{table}
\subsection{More results on the efficiency comparison}
We test the efficiency and scalability of our method against several mature quantum chemistry packages. We compute the ground-state energy using each implementation and record the wall-clock running time. The packages we compare with are:
\begin{itemize}
\item PySCF. PySCF is implemented mainly in C (Qint and libxc) and uses Python as the interface.
\item GPAW. A plane-wave code for solids and molecules.
\item Psi4. Psi4 is mainly written in C++.
\end{itemize}
We use the STO-3G basis set and an LDA exchange-correlation functional. For GPAW, we apply the LCAO mode with the DZP basis set. The number of bands calculated is double the number of electrons. The grid for evaluating the XC and Coulomb potentials is $400\times400\times400$ by default. The maximum number of iterations is 100 for all methods. D4FT ran on an NVIDIA A100 40G GPU, whereas the other methods ran on an Intel Xeon 64-core CPU with 60G memory. The wall-clock running times for convergence (with default settings) are shown in the following table.
\begin{table}[ht]
\centering
\caption{Comparison of wall-clock running time (s). }
\begin{tabular}{l r r r r r }
\toprule[2pt]
Molecule & C20 & C60 & C80 & C100 & C180\\\midrule[1pt]
D4FT &11.46 & 23.14 & 62.98&195.13&1103.19 \\
PySCF &20.75 & 186.67 & $>$3228.20&672.98&1925.25 \\
GPAW &$>$2783.08 & -- & -- & -- & -- \\
Psi4 &$>$46.12 & 510.35 & $>$2144.49& 2555.16&14321.64 \\
\bottomrule[2pt]
\end{tabular}
\end{table}
The notation $>$ indicates that the experiment could not finish or converge under the default convergence criterion. It can be seen from the results that our D4FT is the fastest among all the packages.
\newpage
\subsection{Cases that PySCF fails.}
We show two cases that do not converge with PySCF. We test two unstable systems, C120 and C140, which do not exist in the real world. Both of them are truncated from a C180 Fullerene molecule. Results are shown in Fig.~\ref{fig:diverge}.
\begin{figure}
\caption{Cases that do not converge with PySCF, but do with D4FT. Due to the good convergence property of SGD, D4FT is more likely to converge.}
\label{fig:diverge}
\end{figure}
\end{appendices}
\end{document}
\begin{document}
\date{}
\begin{center}
\textbf{{A Limit Theorem for Supercritical Branching Random Walks with Branching Sources of Varying Intensity}}
\end{center}
\begin{center}
\textit{{\large Ivan Khristolyubov, Elena Yarovaya}}
\end{center}
\begin{abstract}
We consider a supercritical symmetric continuous-time branching random walk
on a multidimensional lattice with a finite number of particle generation
sources of varying positive intensities without any restrictions on the variance of
jumps of the underlying random walk. It is assumed that the spectrum of the
evolution operator contains at least one positive eigenvalue. We prove that
under these conditions the largest eigenvalue of the evolution operator is
simple and determines the rate of exponential growth of particle quantities
at every point on the lattice as well as on the lattice as a whole.
\end{abstract}
\section{Introduction.}
We consider a continuous-time branching random walk (BRW) with a finite
number of branching sources that are situated at some points
$x_{1},x_{2},\dots,x_{N}$ of the lattice ${\mathbf Z}^{d}$, $d\ge1$,
see~\cite{Y17_TPA:e} for details. The behaviour of BRWs, which are based on
symmetric spatially homogeneous irreducible random walks on ${\mathbf Z}^{d}$ with
finite variance of jumps, for the case of a single branching source was
considered, for example, in~\cite{YarBRW:e}. To the authors' best knowledge,
BRWs with a finite variance of jumps and a finite number of branching sources
of various types, at some of which the underlying random walk can become
asymmetric, were first introduced in~\cite{Y12_MZ:e}, and BRWs with identical
branching sources and no restrictions on the variance of jumps were first
considered in~\cite{Y16-MCAP:e}.
Let $\mu_{t}(y)$ be the number of particles at the time $t$ at the point $y$
under the condition that at the initial time $t=0$ the lattice ${\mathbf Z}^{d}$ contains a
single particle which is situated at $x$, that is, $\mu_{0}(y)=\delta(x-y)$. We
denote by $\mu_{t}=\sum_{y\in {\mathbf Z}^{d}}\mu_{t}(y)$ the total number of
particles on ${\mathbf Z}^{d}$. Let $m_{1}(t,x,y):=\mathsf{E}_{x}\mu_{t}(y)$ and
$m_{1}(t,x):=\mathsf{E}_{x}\sum_{y\in {\mathbf Z}^{d}}\mu_{t}(y)$ denote the expectation
of the number of particles at $y$ and on the lattice ${\mathbf Z}^{d}$ respectively
under the condition that $\mu_{0}(y)\equiv \delta(x-y)$ at the time $t$.
We assume the branching process at each of the branching sources
$x_{1},x_{2},\dots,x_{N}$ to be a continuous-time Galton-Watson process (see
\cite[Ch.~I, \S4]{Se2:e}, \cite[Ch.~III]{AN:e}) defined by its infinitesimal
generating function (which depends on the source $x_{i}$)
\begin{equation}\label{ch6:E-defGenFunc}
f(u,x_{i})=\sum_{n=0}^\infty b_n(x_{i}) u^n,\quad 0\le u \le1,
\end{equation}
where $b_n(x_{i})\ge0$ if $n\ne 1$, $b_1(x_{i})<0$, and $\sum_{n} b_n(x_{i})=0$.
We also assume that the inequalities $\beta_{i}^{(r)}:=f^{(r)}(1,x_{i})<\infty$ hold for all $i=1,2,\dots,N$ and $r\in{{\mathbf N}}$. We call
\begin{equation}\label{D:beta}
\beta_{i}:=\beta_{i}^{(1)}=f^{(1)}(1,x_{i})=\sum_{n}nb_{n}(x_{i})
\end{equation}
the \emph{intensity} of the branching source $x_{i}$.
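For instance, if at the source $x_{i}$ a particle can only die or split in two, so that $b_{1}(x_{i})=-(b_{0}(x_{i})+b_{2}(x_{i}))$ and the generating function~\eqref{ch6:E-defGenFunc} reduces to $f(u,x_{i})=b_{0}(x_{i})+b_{1}(x_{i})u+b_{2}(x_{i})u^{2}$, then
\[
\beta_{i}=b_{1}(x_{i})+2b_{2}(x_{i})=b_{2}(x_{i})-b_{0}(x_{i}),
\]
so the intensity is positive at $x_{i}$ precisely when splitting occurs at a higher rate than dying.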
The behaviour of $m_{1}(t,x,y)$ and $m_{1}(t,x)$ can be described in terms
of the \emph{evolution operator}
$\mathscr{H}:=\mathscr{H}_{\beta_{1},\ldots,\beta_{N}}$ \cite{Y17_TPA:e}, the
definition of which is recalled in Section~\ref{S:statement}. We call a BRW
\emph{supercritical}
if the spectrum of the operator $\mathscr{H}$ contains at least one eigenvalue $\lambda>0$. In the case of a supercritical BRW with equal branching source intensities
$\beta_{1}=\beta_{2}=\dots=\beta_{N}$ with no restrictions on the variance of
jumps, it was shown in~\cite{Y16-MCAP:e} that the spectrum of $\mathscr{H}$
is real and contains no more than $N$ positive eigenvalues counted with their
multiplicity, and that the largest eigenvalue $\lambda_{0}$ has multiplicity
$1$. In the present study the aforementioned result is extended to the case
of a supercritical BRW with positive source intensities
$\beta_{1},\beta_{2},\ldots,\beta_{N}$ with no restrictions on the
variance of jumps or the number of descendants particles can produce. The main result of this work is the following limit theorem, the
proof of which is provided in Section~\ref{S:limthm}.
\begin{theorem}\label{thm1} Let the operator $\mathscr{H}$ have an isolated eigenvalue $\lambda_{0}>0$, and let the remaining part of its spectrum be located on the halfline $\{\lambda\in{\mathbf R}:~\lambda\leqslant\lambda_{0}-\epsilon\}$, where
$\epsilon>0$. If $\beta_{i}^{(r)} = O(r! r^{r-1})$ for all $i = 1, \ldots,
N$ and $r\in{\mathbf N}$, then in the sense of convergence in distribution the
following statements hold:
\begin{equation}
\label{eq1}
\lim_{t \to \infty} \mu_{t}(y) e^{-\lambda_{0} t} = \psi(y)\xi,\quad \lim_{t \to \infty} \mu_{t} e^{-\lambda_{0} t} = \xi,
\end{equation}
where $\psi(y)$ is a non-negative non-random function and $\xi$ is a proper random variable.
\end{theorem}
Theorem~\ref{thm1} generalizes the results obtained in \cite{BY-2:e,YarBRW:e}
for a supercritical BRW on ${\mathbf Z}^{d}$ with finite variance of jumps and a
single branching source. Its proof is fundamentally based on Carleman's
condition~\cite[Th.~1.11]{ShT:e}. In the case of a single branching source
and particles producing no more than two descendants Theorem~\ref{thm1} was
proved in~\cite{YarBRW:e}. In the case of a single branching source and no
restrictions on the number of descendants particles can produce
Theorem~\ref{thm1} was provided in~\cite{BY-2:e} without proof.
Let us briefly outline the structure of the paper. In
Section~\ref{S:statement} we recall the formal definition of a BRW. In
Section~\ref{S:maineq} we provide some key evolution equations for generating
functions and the moments of particle quantities in the case of a BRW with
several branching sources (Theorems~\ref{thm01}--\ref{thm04}). These theorems
are a natural generalization of the corresponding results that were obtained
for BRWs with a single branching source in~\cite{YarBRW:e}. In
Section~\ref{S:WSBRW} we establish a criterion for the existence of positive
eigenvalues in the spectrum of $\mathscr{H}$ (Theorem~\ref{thm002}), which is
later used to examine the properties of the spectrum of this evolution
operator. We then prove Theorem~\ref{thm06} on the behaviour of particle
quantity moments. Section~\ref{S:limthm} is dedicated to the proof of
Theorem~\ref{thm1}.
\section{The BRW model.}\label{S:statement}
By a branching random walk (BRW) we mean a stochastic process that
combines a random walk of particles with their branching (birth or death) at
certain points on ${\mathbf Z}^{d}$ called \emph {branching sources}.
Let us give more precise definitions.
We assume that the random walk is defined by its matrix of transition
intensities $A=\left(a(x,y)\right)_{x,y \in {\mathbf Z}^{d}}$ that satisfies the
\emph{regularity property} $\sum_{y \in {\mathbf Z}^{d}}a(x,y)=0$ for all $x$, where
$a(x,y)\ge 0$ for $x\neq y$ and $-\infty<a(x,x)<0$.
Suppose that at the moment $t=0$ there is a single particle on the lattice
that is situated at the point $x\in {\mathbf Z}^{d}$. Following the axiomatics
provided in~\cite[Ch.~III, \S2]{GS:e}, the probabilities $p(h,x,y)$ of a particle situated at $x\notin\{x_{1},x_{2},\dots,x_{N}\}$ to move to
an arbitrary point $y$ over a short period of time $h$ can be represented as
\begin{align*}
p(h,x,y)&=a(x,y)h+o(h)\qquad \text{for}\quad y\ne x,\\
p(h,x,x)&=1+a(x,x)h+o(h).
\end{align*}
It follows from these equalities, see, for example,~\cite[Ch.~III]{GS:e}, that the
transition probabilities $p(t,x,y)$ satisfy the following system of
differential-difference equations (called the \emph{Kolmogorov backward
equations}):
\begin{equation}\label{E:ptxy}
\frac{\partial p(t,x,y)}{\partial t}=\sum_{x'}a(x,x') p(t,x',y),\qquad
p(0,x,y)=\delta(x-y),
\end{equation}
where $\delta(\cdot)$ is the discrete Kronecker $\delta$-function on ${\mathbf Z}^{d}$.
The branching process at each of the sources $x_{1},x_{2},\dots,x_{N}$ is governed by the infinitesimal generating function~\eqref{ch6:E-defGenFunc}. Of particular interest to us are the source intensities~\eqref{D:beta}, which can be rewritten as follows:
\[
\beta_{i}=
(-b_{1}(x_{i}))\left(\sum_{n\neq1}n\frac{b_{n}(x_{i})}{(-b_{1}(x_{i}))}-1\right),
\]
where the sum is the average number of descendants a particle has at the source
$x_{i}$.
If at the moment $t=0$ a particle is located at a point different from the branching
sources, then its random walk follows the rules above. Therefore in order to
complete the description of its evolution we only have to consider a
situation combining both the branching process and the random walk, that is
to say, when the particle is at one of the branching sources
$x_{1},x_{2},\dots,x_{N}$. In this case the possible outcomes that can
happen over a small period of time $h$ are the following: the particle will
either move to a point $y\neq x_{i}$ with the probability of
\[
p(h,x_{i},y)=a(x_{i},y)h+o(h),
\]
or will remain at the source and produce $n\neq1$ descendants with the
probability of
\[
p_{*}(h,x_{i},n)=b_{n}(x_{i})h+o(h)
\]
(we suppose that the particle itself is included in these $n$ descendants;
therefore, if $n=0$ we say that the particle dies), or no change will happen
to the particle at all, which has the probability of
\[
1-\sum_{y\neq
x_{i}}a(x_{i},y)h-\sum_{n\neq 1}b_{n}(x_{i})h+o(h).
\]
As a result, the sojourn time of a particle at the source $x_{i}$ is
exponentially distributed with the parameter
$-(a(x_{i},x_{i})+b_{1}(x_{i}))$. Note that each new particle evolves
according to the same law independently of other particles.
As was shown in~\cite{Y12_MZ:e,Y13-PS:e}, the moments $m_{1}(t,x,y)$
and $m_{1}(t,x)$ satisfy the following equations:
\begin{align}\label{E:m1xy}
\frac{\partial m_{1}(t,x,y)}{\partial t}&=\sum_{x'}a(x,x') m_{1}(t,x',y)+
\sum_{i=1}^{N} \beta_{i}\delta(x-x_{i})m_{1}(t,x,y),\\\label{E:m1xy1}
\frac{\partial m_{1}(t,x)}{\partial t}&=\sum_{x'}a(x,x') m_{1}(t,x')+
\sum_{i=1}^{N} \beta_{i}\delta(x-x_{i})m_{1}(t,x)
\end{align}
with the initial values $m_{1}(0,x,y)=\delta(x-y)$ and $m_{1}(0,x)\equiv
1$ respectively.
Equations~\eqref{E:ptxy}--\eqref{E:m1xy1} are rather difficult to analyze,
and therefore we will from now on only consider BRWs that satisfy the following additional and quite natural
assumptions. First, we assume that the
intensities $a(x,y)$ are \emph{symmetric} and \emph{spatially homogeneous},
that is, $a(x,y)=a(y,x)=a(0,y-x)$. This allows us, for the sake of brevity,
to denote by $a(x-y)$ any of the three pairwise equal functions $a(x,y)$,
$a(y,x)$, $a(0,y-x)$, that is, $a(x-y):= a(x,y)=a(y,x)=a(0,y-x)$. Second, we
assume that the random walk is \emph{irreducible}, which in terms of the
matrix $A$ means that it itself is irreducible: for any $z\in{\mathbf Z}^{d}$ there is
such a set of vectors $z_{1},\dots,z_{k}\in{\mathbf Z}^{d}$ that
$z=\sum_{i=1}^{k}z_{i}$ and $a(z_{i})\neq0$ for $i=1,\dots,k$.
One approach to analysing equations~\eqref{E:ptxy} and~\eqref{E:m1xy}
consists in treating them as differential equations in Banach spaces. In
order to apply this approach to our case, we introduce the operators
\[
(\mathscr{A} u)(x)=\sum_{x'}a(x-x') u(x'),\qquad
(\Delta_{x_{i}}u)(x)=\delta(x-x_{i})u(x),\quad i=1,\ldots,N.
\]
on the set of functions $u(x)$, $x\in{\mathbf Z}^{d}$.
We also introduce the operator
\begin{equation}\label{E:defHbbb}
\mathscr{H}:=\mathscr{H}_{\beta_{1},\ldots,\beta_{N}}=
\mathscr{A}+\sum_{i=1}^{N}\beta_{i}\Delta_{x_{i}}
\end{equation}
for each set of source intensities $\beta_{1},\ldots,\beta_{N}$.
Let us note that all these operators can be regarded as linear continuous
operators in any of the spaces $l^{p}(\mathbf{Z}^{d})$, $p\in[1,\infty]$. We
also point out that the operator $\mathscr{A}$
is self-adjoint in $l^{2}(\mathbf{Z}^{d})$
\cite{Y12_MZ:e,Y13-PS:e,Y16-MCAP:e}.
Now, treating for each $t\ge0$ and each $y\in{\mathbf Z}^{d}$ the functions
$p(t,\cdot,y)$ and $m_{1}(t,\cdot,y)$ as elements of $l^{p}(\mathbf{Z}^{d})$
for some $p$, we can rewrite (see, for example,~\cite{Y12_MZ:e})~\eqref{E:ptxy}
and~\eqref{E:m1xy} as differential equations in $l^{p}(\mathbf{Z}^{d})$:
\begin{alignat*}{2}
\frac{d p(t,x,y)}{d t} &=(\mathscr{A}p(t,\cdot,y))(x),&\qquad p(0,x,y)&=\delta(x-y),\\
\frac{d m_{1}(t,x,y)}{d t} &=(\mathscr{H} m_{1}(t,\cdot,y))(x),&
\qquad m_{1}(0,x,y)&=\delta(x-y),
\end{alignat*}
and~\eqref{E:m1xy1} as a differential equation in
$l^{\infty}(\mathbf{Z}^{d})$:
\begin{equation*}
\frac{d m_{1}(t,x)}{d t}=(\mathscr{H} m_{1}(t,\cdot))(x),
\qquad m_{1}(0,x)\equiv 1.
\end{equation*}
Note that the asymptotic behaviour for large $t$ of the transition
probabilities $p(t,x,y)$, as well as of the mean particle numbers
$m_{1}(t,x,y)$ $m_{1}(t,x)$ is tightly connected with the spectral properties
of the operators $\mathscr{A}$ and $\mathscr{H}$ respectively.
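This connection can be observed numerically on a finite truncation of the lattice. The sketch below is an illustrative computation only (the truncation radius, the intensity $\beta=1$, and the time points are arbitrary choices): it builds $\mathscr{H}=\mathscr{A}+\beta\Delta_{0}$ for the simple symmetric walk on ${\mathbf Z}^{1}$ and checks that the total mean number of particles grows like $e^{\lambda_{0}t}$, where $\lambda_{0}$ is the largest eigenvalue; for this walk one can check that $\lambda_{0}=\sqrt{2}-1$.

```python
import numpy as np

# Truncate Z^1 to {-L, ..., L}; A is the generator of the simple symmetric
# walk (a(0) = -1, a(+-1) = 1/2).  The truncation slightly violates the
# regularity property at the two boundary points, which is negligible here
# because the leading eigenfunction decays exponentially.
L = 30
n = 2 * L + 1
A = -np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))

beta = 1.0                        # intensity of a single source at the origin
H = A.copy()
H[L, L] += beta                   # H = A + beta * Delta_0

lam, V = np.linalg.eigh(H)        # H is symmetric
lam0 = lam[-1]                    # largest eigenvalue

delta = np.zeros(n)
delta[L] = 1.0                    # single initial particle at the origin

def m1_total(t):
    # m1(t, 0) = sum_y (e^{tH} delta_0)(y), via the eigendecomposition of H
    return (V @ (np.exp(t * lam) * (V.T @ delta))).sum()

t1, t2 = 25.0, 26.0
rate = np.log(m1_total(t2) / m1_total(t1))   # empirical growth exponent
```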
It is convenient to express various properties of the transition
probabilities $p(t,x,y)$ in terms of Green's function, which can be defined
as the Laplace transform of the transition probability $p(t,x,y)$:
\[
G_\lambda(x,y):=\int_0^\infty e^{-\lambda t}p(t,x,y)\,dt,\qquad \lambda\ge 0,
\]
and can also be rewritten (see, for example,~\cite[\S~2.2]{YarBRW:e}) as follows:
\[
G_\lambda(x,y)=\frac{1}{(2\pi)^d} \int_{ [-\pi,\pi ]^{d}}
\frac{e^{i(\theta, y-x)}} {\lambda-\phi(\theta)}\,d\theta =\frac{1}{(2\pi)^d} \int_{ [-\pi,\pi ]^{d}}
\frac{\cos{(\theta, y-x)}} {\lambda-\phi(\theta)}\,d\theta,
\]
where $x,y\in{\mathbf Z}^{d}$, $\lambda \ge 0$, and $\phi(\theta)$ is the Fourier transform of the transition intensity $a(z)$:
\begin{equation}\lambdaabel{E:Fourier}
\phi(\theta):=\sum_{z\in\mathbf{Z}^{d}}a(z)e^{i(\theta,z)}=\sum_{x \in {\mathbf Z}^{d}} a(x) \cos(x, \theta),\qquad \theta\in[-\pi,\pi]^{d}.
\end{equation}
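The symbol $\phi(\theta)$ and the Green's function are straightforward to evaluate numerically. The following minimal Python sketch (an illustration of ours, not part of the original text) approximates $I_x(\lambda)=G_\lambda(x,0)$ for the simple symmetric walk on $\mathbf{Z}$ with the assumed kernel $a(\pm1)=1/2$, $a(0)=-1$, for which $\phi(\theta)=\cos\theta-1$ and $I_0(\lambda)=1/\sqrt{\lambda(\lambda+2)}$ in closed form.

```python
import math

def phi(theta):
    # Fourier symbol of the generator A of the simple symmetric walk on Z:
    # a(+-1) = 1/2, a(0) = -1 (an assumed example), so phi(theta) = cos(theta) - 1 <= 0.
    return math.cos(theta) - 1.0

def green(lam, x, n=4096):
    # Midpoint-rule approximation of
    # I_x(lam) = (1/2pi) * integral over [-pi, pi] of cos(theta*x)/(lam - phi(theta)) dtheta.
    h = 2.0 * math.pi / n
    total = 0.0
    for k in range(n):
        theta = -math.pi + (k + 0.5) * h
        total += math.cos(theta * x) / (lam - phi(theta))
    return total * h / (2.0 * math.pi)

# For d = 1 the integral has the closed form I_0(lam) = 1 / sqrt(lam * (lam + 2)),
# which the quadrature reproduces.
print(green(1.0, 0))         # approx 0.5774
print(1.0 / math.sqrt(3.0))  # 0.5773...
```

The midpoint rule converges extremely quickly here because the integrand is smooth and periodic.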
The function $G_{0}(x,y)$ has a simple meaning for a (non-branching) random
walk: namely, it is equal to the mean total time the particle spends at
$y\in{\mathbf Z}^{d}$ under the condition that at the initial moment
$t=0$ the particle was at $x\in{\mathbf Z}^{d}$. Also, the asymptotic behaviour of the
mean numbers of particles $m_{1}(t,x,y)$ and $m_{1}(t,x)$ as $t\to\infty$ can
be described in terms of the function $G_\lambda(x,y)$, see,
e.g.,~\cite{YarBRW:e}. Lastly, in~\cite{Y17_TPA:e} it was shown that the
asymptotic behaviour of a BRW depends strongly on whether $G_{0}:=
G_{0}(0,0)$ is finite.
\begin{remark}
The approach described in this section, based on interpreting BRW evolution
equations as differential equations in Banach spaces, is also applicable to a wide range of problems,
notably to describing the evolution of higher-order particle number moments (see, e.g.,~\cite{YarBRW:e}, \cite{Y12_MZ:e}).
\end{remark}
\section{Key equations and auxiliary results.}\label{S:maineq}
Let us introduce the Laplace generating functions of the random variables $\mu_{t}(y)$ and
$\mu_{t}$ for $z \geqslant 0$:
\[
F(z; t, x, y):= \mathsf{E}_{x} e^{-z \mu_{t}(y)},\qquad
F(z; t, x):= \mathsf{E}_{x} e^{-z \mu_{t}},
\]
where $\mathsf{E}_{x}$ denotes the expectation under the condition $\mu_{0}(\cdot) =
\delta_{x}(\cdot)$.
The following four theorems are obtained by an immediate generalization of the corresponding theorems in \cite{YarBRW:e}, proved there for BRWs with a single branching source; since the reasoning is
virtually the same, these theorems are presented here without proof.
\begin{theorem}
\label{thm01} For all $0 \leqslant z \leqslant \infty$ the functions $F(z; t,
x)$ and $F(z; t, x, y)$ are continuously differentiable with respect to $t$ uniformly with respect to $x,y\in{\mathbf Z}^{d}$. They also satisfy the inequalities $0 \leqslant F(z; t, x), F(z;
t, x, y) \leqslant 1$ and are the solutions to the following Cauchy problems in
$l^{\infty}\left({\mathbf Z}^{d} \right)$:
\begin{alignat}{2}\label{E:Fzt}
\frac{d F(z;t, \cdot)}{d t} &= \mathscr{A}F(z;t, \cdot) +
\sum_{j=1}^{N} \Delta_{x_{j}} f_{j} \left(F(z; t, \cdot) \right),
\qquad &F(z; 0, \cdot) &= e^{-z},\\
\label{E:Fzty}\frac{d F(z; t, \cdot, y)}{d t} &= \mathscr{A}F(z;t, \cdot, y) +
\sum_{j=1}^{N} \Delta_{x_{j}} f_{j} \left(F(z;t, \cdot, y) \right),
\qquad &F(z;0, \cdot, y) &= e^{-z \delta_{y}(\cdot)}.
\end{alignat}
\end{theorem}
Theorem~\ref{thm01} allows us to pass from analysing the BRW at hand to
considering the corresponding Cauchy problem in a Banach space instead. We
also note that, contrary to the single branching source case examined
in~\cite{YarBRW:e}, there are not one but several terms
$\Delta_{x_{j}}f_{j}(F)$, $j=1,2,\dots,N$, on the right-hand side of equations~\eqref{E:Fzt}
and~\eqref{E:Fzty}.
Let us set
\[
m_{n}(t, x, y) := \mathsf{E}_{x} \mu_{t}^{n}(y),\qquad
m_{n}(t, x) := \mathsf{E}_{x} \mu_{t}^{n}.
\]
\begin{theorem}
\label{thm02} For all natural $k \geqslant 1$ the moments $m_{k}(t, \cdot, y)\in
l^{2}\left({\mathbf Z}^{d}\right)$ and $m_{k}(t, \cdot)\in
l^{\infty}\left({\mathbf Z}^{d}\right)$ satisfy the following differential equations in the corresponding Banach spaces:
\begin{align}
\label{eq007}
\frac{d m_{1}}{dt} &= \mathscr{H}m_{1},\\
\label{eq008}
\frac{d m_{k}}{dt} &= \mathscr{H}m_{k} +
\sum_{j=1}^{N} \Delta_{x_{j}} g_{k}^{(j)} (m_{1}, \ldots, m_{k-1}),\qquad k \geqslant 2,
\end{align}
the initial values being $m_{k}(0, \cdot, y) = \delta_{y}(\cdot)$ and $m_{k}(0,
\cdot) \equiv 1$ respectively. Here $\mathscr{H}m_{k}$ stands for $\mathscr{H}m_{k}(t, \cdot, y)$ or $\mathscr{H}m_{k}
(t, \cdot)$ respectively, and
\begin{equation}\label{E:defgnj}
g_{k}^{(j)} (m_{1}, \ldots, m_{k-1}) :=\sum_{r=2}^{k} \frac{\beta_{j}^{(r)}}{r!} \sum_{\substack{i_{1}, \ldots, i_{r} > 0 \\ i_{1} + \cdots + i_{r} = k}} \frac{k!}{i_{1}! \cdots i_{r}!} m_{i_{1}} \cdots m_{i_{r}}.
\end{equation}
\end{theorem}
Theorem~\ref{thm02} will later be used in the proof of Theorem~\ref{thm06} to help determine the asymptotic behaviour of the moments as $t\to \infty$.
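To make the combinatorial structure of the functions $g_k^{(j)}$ concrete, here is a small Python sketch (illustrative only; the names `beta`, `m` and the helper `compositions` are ours) that evaluates the multinomial sum over ordered tuples $(i_1,\dots,i_r)$ of positive integers with $i_1+\cdots+i_r=k$:

```python
from math import factorial

def compositions(k, r):
    """Yield all ordered tuples (i_1, ..., i_r) of positive integers summing to k."""
    if r == 1:
        yield (k,)
        return
    for first in range(1, k - r + 2):
        for rest in compositions(k - first, r - 1):
            yield (first,) + rest

def g(k, beta, m):
    # beta[r] is the branching intensity beta^{(r)} of the source (r = 2, ..., k);
    # m[i] is the moment m_i (i = 1, ..., k-1). Both are plain dicts here.
    total = 0.0
    for r in range(2, k + 1):
        coeff = beta.get(r, 0.0) / factorial(r)
        for comp in compositions(k, r):
            weight = factorial(k)   # multinomial coefficient k!/(i_1! ... i_r!)
            prod = 1.0
            for i in comp:
                weight //= factorial(i)
                prod *= m[i]
            total += coeff * weight * prod
    return total

# For k = 2 the sum collapses to a single term: g_2 = beta^{(2)} * m_1^2.
print(g(2, {2: 0.3}, {1: 5.0}))  # 0.3 * 25 = 7.5
```

For $k=3$ the sum reduces to $3\beta^{(2)} m_1 m_2 + \beta^{(3)} m_1^3$, which the enumeration reproduces.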
\begin{theorem}
\label{thm03} The moments $m_{1}(t, x,\cdot)\in l^{2}\left({\mathbf Z}^{d}\right)$
satisfy the following Cauchy problem in $l^{2}\left({\mathbf Z}^{d}\right)$:
\[
\frac{d m_{1}(t, x,\cdot)}{d t} = \mathscr{H} m_{1}(t, x,\cdot),\qquad m_{1} (0, x, \cdot) = \delta_{x}(\cdot).
\]
\end{theorem}
This theorem allows us to obtain different differential equations by making use of the symmetry of the BRW.
\begin{theorem}
\label{thm04} The moment $m_{1}(t, x, y)$ satisfies both integral equations
\begin{align*}
m_{1}(t, x, y) &= p(t, x, y) + \sum_{j=1}^{N} \beta_{j} \int_{0}^{t} p(t-s, x, x_{j}) m_{1}(t-s, x_{j}, y)\,ds,\\
m_{1}(t, x, y) &= p(t, x, y) + \sum_{j=1}^{N} \beta_{j} \int_{0}^{t} p(t-s, x_{j}, y) m_{1}(t-s, x, x_{j})\,ds.
\end{align*}
Similarly, the moment $m_{1}(t, x)$ satisfies both integral equations
\begin{align}
m_{1}(t, x) &= 1 + \sum_{j=1}^{N} \beta_{j} \int_{0}^{t} p(t-s, x, x_{j}) m_{1}(s, x_{j}) ds,\\
\label{eq015} m_{1}(t, x) &= 1 + \sum_{j=1}^{N} \beta_{j} \int_{0}^{t} m_{1}(s, x, x_{j})\, ds.
\end{align}
The moments $m_{k}(t, x, y)$ and $m_{k}(t, x)$ for $k > 1$ satisfy the equations
\begin{align*}
m_{k}(t, x, y) &= m_{1}(t, x, y) +\notag\\&+ \sum_{j=1}^{N} \int_{0}^{t} m_{1}(t-s, x, x_{j}) g_{k}^{(j)} \left(m_{1}(s, x_{j}, y), \ldots, m_{k-1}(s, x_{j}, y) \right)\,ds,\\
m_{k}(t, x) &= m_{1}(t, x) +\notag\\&+ \sum_{j=1}^{N} \int_{0}^{t} m_{1}(t-s, x, x_{j}) g_{k}^{(j)} \left(m_{1}(s, x_{j}), \ldots, m_{k-1}(s, x_{j}) \right)\,ds.
\end{align*}
\end{theorem}
This theorem allows us to transition from differential equations to integral equations. It is later used to prove Theorem~\ref{thm06}.
\section{Properties of the operator $\mathscr{H}$.}\label{S:WSBRW}
We call a BRW supercritical if the local and global numbers of particles $\mu_{t}(y)$ and
$\mu_{t}$ grow exponentially. As was mentioned in the Introduction, one of the main results of this work is the equations~\eqref{eq1}, from which it follows that a BRW with several branching sources is supercritical if the operator $\mathscr{H}$ has a positive eigenvalue
$\lambda$. For this reason we dedicate this section to a further examination of the spectral properties of the operator
$\mathscr{H}$.
We first mention an important statement proved
in~\cite[Lemma~3.1.1]{YarBRW:e}.
\begin{lemma}\label{lem01}
The spectrum $\sigma (\mathscr{A})$ of the operator $\mathscr{A}$ is included
in the half-line $(-\infty, 0]$. Also, since the operator $\sum_{j=1}^{N}
\beta_{j} \Delta_{x_{j}}$ is compact, $\sigma_{ess}(\mathscr{H}) = \sigma
\left(\mathscr{A} \right) \subset (-\infty, 0]$, where
$\sigma_{ess}(\mathscr{H})$ denotes the essential spectrum~\cite{Kato:e} of
the operator $\mathscr{H}$.
\end{lemma}
The following theorem provides a criterion for the existence of a positive
eigenvalue in the spectrum of the operator $\mathscr{H}$.
\begin{theorem}
\label{thm002} A number $\lambda > 0$ is an eigenvalue and $f \in
l^{2}\left({\mathbf Z}^{d} \right)$ is the corresponding eigenvector of the operator
$\mathscr{H}$ if and only if the system of linear equations
\begin{equation}\label{E:fxi}
f(x_{i}) = \sum_{j=1}^{N} \beta_{j}f(x_{j}) I_{x_{j} - x_{i}} (\lambda),\qquad i = 1, \ldots, N
\end{equation}
with respect to the variables $f(x_{i})$, where
\[
I_{x}(\lambda) := G_{\lambda}(x, 0) =
\frac{1}{(2\pi)^{d}} \int_{[-\pi, \pi]^{d}} \frac{e^{-i(\theta, x)}}{\lambda - \phi(\theta)} d\theta,\qquad x \in {\mathbf Z}^{d},
\]
has a non-trivial solution.
\end{theorem}
\begin{proof}
For $\lambda > 0$ to be an eigenvalue of the operator
$\mathscr{H}$ it is necessary and sufficient that there be a non-zero element $f
\in l^{2}\left({\mathbf Z}^{d} \right) $ that satisfies the equation
\[
\left(\mathscr{H} - \lambda I \right)f =
\left(\mathscr{A} + \sum_{j=1}^{N} \beta_{j}\Delta_{x_{j}} - \lambda I \right)f = 0.
\]
Since $(\Delta_{x_{j}} f)(x) := f(x)\delta_{x_{j}}(x) =
f(x_{j})\delta_{x_{j}}(x)$, the preceding equality can be rewritten as follows:
\[
(\mathscr{A}f)(x) + \sum_{j=1}^{N} \beta_{j}f(x_{j})\delta_{x_{j}}(x) = \lambda f(x),\qquad x \in {\mathbf Z}^{d}.
\]
By applying the Fourier transform to this equality, we obtain
\begin{equation}\label{eq999}
(\widetilde{\mathscr{A}f})(\theta) + \sum_{j=1}^{N} \beta_{j} f(x_{j}) e^{i(\theta, x_{j})} = \lambda \tilde{f}(\theta),\qquad \theta \in [-\pi, \pi]^{d}.
\end{equation}
Here the Fourier transform $\widetilde{\mathscr{A}f}$ of the function
$(\mathscr{A}f)(x)$ is of the form $\phi \tilde{f}$, where $\tilde{f}$ is the
Fourier transform of the function $f$, and the function $\phi(\theta)$ is
defined by the equality~\eqref{E:Fourier}, see~\cite[Lemma 3.1.1]{YarBRW:e}. With
this in mind, we rewrite the equality~\eqref{eq999} as
\[
\phi(\theta) \tilde{f}(\theta) + \sum_{j=1}^{N} \beta_{j} f(x_{j}) e^{i(\theta, x_{j})} =
\lambda \tilde{f}(\theta),\qquad \theta \in [-\pi, \pi]^{d},
\]
or
\begin{equation}\label{eq998}
\tilde{f}(\theta) = \frac{1}{\lambda - \phi(\theta)} \sum_{j=1}^{N} \beta_{j} f(x_{j}) e^{i(\theta, x_{j})},\qquad \theta \in [-\pi, \pi]^{d}.
\end{equation}
Since $\lambda > 0$ and $\phi(\theta) \leqslant 0$, we have $
\int_{[-\pi, \pi]^{d}} |\lambda - \phi(\theta)|^{-2} \,d\theta < \infty$, which allows us to apply the inverse Fourier transform to~\eqref{eq998}:
\begin{equation}\label{eq997}
f(x)= \sum_{j=1}^{N} \beta_{j} f(x_{j}) I_{x_{j} - x}(\lambda),\qquad x \in {\mathbf Z}^{d}.
\end{equation}
Finally, we note that any solution of the system~\eqref{E:fxi}
completely defines the function $f(x)$ on the entirety of its domain by the formula~\eqref{eq997}, which proves the theorem.
\end{proof}
\begin{corollary}
\label{cor1} The number of positive eigenvalues of the operator
$\mathscr{H}$, counted with their multiplicity, does not exceed $N$.
\end{corollary}
\begin{proof}
Suppose the contrary is true. Then there are at least
$N+1$ linearly independent eigenvectors $f_{i}$ of
$\mathscr{H}$. Since, as was established in the proof of Theorem~\ref{thm002}, the function $f(x)$ satisfies the equality~\eqref{eq997},
where $\beta_{j} > 0$ for all $j$, and $I_{x_{j} - x}(\lambda) > 0$ for all $j$ and $x$, the linear independence of the vectors $f_{i}$ is equivalent to the linear independence of the vectors
\[
\widehat{f}_{i} := \left(f_{i}(x_{1}), \ldots, f_{i}(x_{N}) \right),\qquad i = 1, \ldots, N+1.
\]
Given that such a set of $N+1$ vectors of dimension $N$ is always linearly dependent, so is the initial set of the vectors $f_{i}$, which contradicts our assumption.
\end{proof}
Let us introduce the matrix
\begin{equation}\label{E:defG}
G(\lambda) :=
\begin{pmatrix}
\beta_{1} I_{0}(\lambda) & \beta_{2} I_{x_{2} - x_{1}}(\lambda) & \cdots & \beta_{N} I_{x_{N} - x_{1}}(\lambda)\\
\beta_{1} I_{x_{1} - x_{2}} (\lambda) & \beta_{2} I_{0} (\lambda) & \cdots & \beta_{N} I_{x_{N} - x_{2}}(\lambda)\\
\vdots & \vdots & \ddots & \vdots\\
\beta_{1} I_{x_{1} - x_{N}}(\lambda) & \beta_{2} I_{x_{2} - x_{N}} (\lambda) & \cdots & \beta_{N} I_{0}(\lambda)
\end{pmatrix}.
\end{equation}
\begin{corollary}\label{corD} A number $\lambda > 0$ is an eigenvalue of
$\mathscr{H}$ if and only if $1$ is an eigenvalue of the matrix $G(\lambda)$, or, in other words, when the equality
\[
\det(G(\lambda)-I)=0
\]
holds.
\end{corollary}
\begin{proof}
This statement is a reformulation of the necessary and sufficient condition for the consistency of the system~\eqref{E:fxi}.
\end{proof}
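Corollary~\ref{corD} reduces the search for positive eigenvalues of $\mathscr{H}$ to a scalar equation in $\lambda$. As an illustration (not from the text), the sketch below locates $\lambda_0$ for an assumed one-dimensional example, the simple symmetric walk with two sources $x_1=0$, $x_2=1$ and $\beta_1=\beta_2=1$, by bisection on the largest eigenvalue $\gamma(\lambda)$ of $G(\lambda)$; the bisection is justified by the strict monotonicity of $\gamma(\lambda)$ established in Corollary~\ref{cor3} below.

```python
import math

def green_1d(lam, x, n=4096):
    # Midpoint rule for I_x(lam) with phi(theta) = cos(theta) - 1
    # (simple symmetric walk on Z; an illustrative choice of ours).
    h = 2.0 * math.pi / n
    s = 0.0
    for k in range(n):
        theta = -math.pi + (k + 0.5) * h
        s += math.cos(theta * x) / (lam - (math.cos(theta) - 1.0))
    return s * h / (2.0 * math.pi)

def gamma_max(lam, sources, betas):
    # Largest eigenvalue of G(lam) via power iteration; all entries of
    # G(lam) are strictly positive, so Perron-Frobenius applies.
    N = len(sources)
    G = [[betas[j] * green_1d(lam, sources[j] - sources[i]) for j in range(N)]
         for i in range(N)]
    v = [1.0] * N
    for _ in range(200):
        w = [sum(G[i][j] * v[j] for j in range(N)) for i in range(N)]
        norm = max(abs(c) for c in w)
        v = [c / norm for c in w]
    return norm

def lambda0(sources, betas, lo=1e-3, hi=50.0):
    # gamma_max is strictly decreasing in lam (Corollary cor3), so
    # det(G(lam) - I) = 0 can be solved by bisection on gamma(lam) = 1.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gamma_max(mid, sources, betas) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam0 = lambda0([0, 1], [1.0, 1.0])
print(lam0, gamma_max(lam0, [0, 1], [1.0, 1.0]))  # gamma(lambda_0) is close to 1
```

For these parameters $\gamma(\lambda)=I_0(\lambda)+I_1(\lambda)$, which exceeds $1$ for small $\lambda$ and tends to $0$, so a positive root exists.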
\begin{corollary}
\label{cor2} Let $\lambda_{0}>0$ be the largest eigenvalue of the operator
$\mathscr{H}$. Then $\lambda_{0}$ is a simple eigenvalue of $\mathscr{H}$, and $1$ is the largest eigenvalue of the matrix
$G(\lambda_{0})$.
\end{corollary}
\begin{proof}
Let us first demonstrate that if $\lambda_{0}$ is the largest eigenvalue of the operator $\mathscr{H}$, then $1$ is the largest (by absolute value) eigenvalue of the matrix $G(\lambda_{0})$. Indeed, assume this is not the case.
It follows from Corollary~\ref{corD} that $\lambda_{0}>0$ is an eigenvalue of
$\mathscr{H}$ if and only if $1$ is an eigenvalue of the matrix
$G(\lambda_{0})$. By the Perron-Frobenius theorem,
see~\cite[Theorem~8.4.4]{HJ:e}, which is applicable to the matrix
$G(\lambda_{0})$ since all its elements are strictly positive, the matrix
$G(\lambda_{0})$ has a strictly positive eigenvalue that is strictly greater
(by absolute value) than any other of its eigenvalues. We denote this
dominant eigenvalue by $\gamma(\lambda_{0})$. Then $\gamma(\lambda_{0})> 1$,
since we assumed that $1$ is not the largest eigenvalue of $G(\lambda_{0})$.
Given that the functions $I_{x_{i} - x_{j}}(\lambda)$ are continuous with
respect to $\lambda$, all elements of $G(\lambda)$, and therefore all
eigenvalues of $G(\lambda)$, are continuous functions of $\lambda$. Because
$I_{x_{i} - x_{j}}(\lambda) \to 0$ as $\lambda \to \infty$ for all $i$ and
$j$, all eigenvalues of the matrix $G(\lambda)$ tend to zero as $\lambda
\to \infty$. Therefore there is a $\hat{\lambda} > \lambda_{0}$ such that
$\gamma(\hat{\lambda}) = 1$. Corollary~\ref{corD} states that this
$\hat{\lambda}$ then has to be an eigenvalue of the operator $\mathscr{H}$,
which contradicts our initial assumption that $\lambda_{0}$ is the largest
eigenvalue of $\mathscr{H}$.
We have just proved that $1$ is the largest eigenvalue of the matrix $G(\lambda_{0})$; it then follows from the Perron-Frobenius theorem that this eigenvalue is simple. Now, in order to complete the proof we only have to show that the eigenvalue
$\lambda_{0}$ of the operator $\mathscr{H}$ is also simple.
Assume this is not the case, and $\lambda_{0}$ is not simple. Then there are at least two linearly independent eigenvectors $f_{1}$ and $f_{2}$ corresponding to the eigenvalue
$\lambda_{0}$. By applying the equality~\eqref{eq997} once again, we see that the linear independence of the vectors $f_{1}$ and $f_{2}$ is equivalent to the linear independence of the vectors
\[
\hat{f}_{i} := \left(f_{i}(x_{1}), \ldots, f_{i}(x_{N}) \right),\qquad i = 1, 2.
\]
It also follows from Theorem~\ref{thm002} and the definition of
$G(\lambda)$ that both vectors $\hat{f}_{i}$ satisfy the system of linear equations $\left(G(\lambda_{0}) - I\right) f = 0$, which contradicts the simplicity of the eigenvalue $1$ of $G(\lambda_{0})$.
This completes the proof.
\end{proof}
We will also need the following result \cite[Corollary~8.1.29]{HJ:e}.
\begin{lemma}
\label{lem001} Let the elements of a matrix $G$ and of a vector $f$ be strictly positive. Let us also assume that
$\left(G f \right)_{i} > f_{i}$ for all $i=1,\ldots,N$. Then the matrix
$G$ has an eigenvalue $\gamma > 1$.
\end{lemma}
\begin{corollary}
\label{cor3} The largest eigenvalue $\gamma(\lambda)$ of the matrix
$G(\lambda)$ is a continuous strictly decreasing function for $\lambda > 0$.
\end{corollary}
\begin{proof}
The continuity of $\gamma(\lambda)$ follows from the fact that the elements of $G(\lambda)$ are themselves continuous functions of $\lambda$. We now prove the strict decrease of
$\gamma(\lambda)$. Assume the contrary: let there be two numbers $\lambda' >
\lambda'' > 0$ such that $\gamma(\lambda') \geqslant \gamma(\lambda'') > 0$.
We denote by $f$ an eigenvector of $G(\lambda')$ corresponding to the eigenvalue $\gamma(\lambda')$. By the Perron-Frobenius theorem this vector can be chosen uniquely up to multiplication by a constant, and can furthermore be chosen to be strictly positive. Let us set
\[
G'' := \frac{1}{\gamma(\lambda')} G(\lambda''),\qquad
G' := \frac{1}{\gamma(\lambda')} G(\lambda').
\]
Then $G' f = f$, and the largest eigenvalue of the matrix $G''$ does not exceed
$1$. Also, since all elements of the matrices $G'$ and $G''$ are strictly positive and strictly decreasing functions of $\lambda$, we have $\left(G''
f\right)_{i} > f_{i}$ for $i=1,\ldots,N$, which contradicts Lemma~\ref{lem001} and concludes the proof.
\end{proof}
\begin{corollary}
\label{cor4} Let the operator $\mathscr{H}$ have an eigenvalue
$\lambda
> 0$. Consider the operator
$\mathscr{H}' = \mathscr{A} + \sum_{j=1}^{N} \beta'_{j} \Delta_{x_{j}}$ with parameters
$\beta_{i}'$, $i=1,\ldots,N$, that satisfy the inequalities
$\beta_{j}' \geqslant \beta_{j}$ for $j=1,\ldots,N$. Moreover, let there be an $i$ such that $\beta_{i}' > \beta_{i}$. Then the operator
$\mathscr{H}'$ has an eigenvalue $\lambda' > \lambda$.
\end{corollary}
\begin{proof}
It suffices to show that the matrix $G'(\lambda)$ corresponding to the operator
$\mathscr{H}'$ and defined according to~\eqref{E:defG}
has the eigenvalue $1$ at some point $\lambda'>\lambda$. Let us first demonstrate that the matrix $G'(\lambda)$ has an eigenvalue
$\gamma'> 1$.
Since we assumed $\lambda$ is an eigenvalue of the operator
$\mathscr{H}$, it follows from Corollary~\ref{corD} that $1$ is an eigenvalue of the matrix $G(\lambda)$. Now, as all elements of the matrix
$G(\lambda)$ are strictly positive, by applying the Perron-Frobenius theorem we conclude that
$G(\lambda)$ has the strictly largest (by absolute value) eigenvalue
$\gamma\ge1$ with a corresponding strictly positive eigenvector $f$. Therefore,
\begin{equation}\label{EGl0}
\left(G(\lambda) f\right)_{i} = \gamma f_{i}\geqslant f_{i},\qquad i=1,\ldots,N.
\end{equation}
By assumption, the following inequalities hold:
\[
\beta'_{j} I_{x_{i} - x_{j}}(\lambda)
\geqslant \beta_{j} I_{x_{i} - x_{j}}(\lambda) > 0,\qquad i,j=1,\ldots,N;
\]
moreover,
\[
\beta'_{i} I_{x_{i} - x_{j}}(\lambda) > \beta_{i} I_{x_{i} - x_{j}}(\lambda) > 0.
\]
It then follows from~\eqref{EGl0} that
\[
\left(G'(\lambda) f\right)_{i} > \gamma f_{i}\geqslant f_{i},\qquad i=1,\ldots,N.
\]
We now obtain from Lemma~\ref{lem001} that the matrix $G'(\lambda)$ has an eigenvalue $\gamma'> 1$. Since the largest eigenvalue $\gamma'(\lambda)$ of the matrix $G'(\lambda)$ is a continuous function of $\lambda$ that tends to zero as $\lambda\to\infty$, there is a
$\lambda'>\lambda$ such that $\gamma'(\lambda') = 1$. This completes the proof.
\end{proof}
\begin{corollary}\label{cor5}
Let the operator $\mathscr{H}$ have the largest eigenvalue $\lambda_{0}
> 0$. Consider the operator $\mathscr{H}' = \mathscr{A} + \sum_{j=1}^{N}
\beta'_{j} \Delta_{x_{j}}$ with parameters $\beta_{i}'$, $i=1,\ldots,N$,
that satisfy the inequalities $\beta_{j}' \leqslant \beta_{j}$ for
$j=1,\ldots,N$. Moreover, let there be an $i$ such that $\beta_{i}' <
\beta_{i}$. Then all eigenvalues of the operator $\mathscr{H}'$ are strictly less (by absolute value) than $\lambda_{0}$.
\end{corollary}
\begin{proof}
This statement immediately follows from the corollary above.
\end{proof}
\begin{lemma}\label{lem03} Let $\mathscr{H}$ be a continuous self-adjoint operator on a separable Hilbert space $E$ whose spectrum is a disjoint union of two sets: a finite (counting multiplicity) set of isolated eigenvalues $\lambda_{i} > 0$ and the remaining part of the spectrum, which is included in $[-s, 0]$, $s> 0$. Then the solution $m(t)$ of the Cauchy problem
\begin{equation}
\label{eq030}
\frac{d m(t)}{dt} = \mathscr{H} m(t),\qquad m(0) = m_{0},
\end{equation}
satisfies the condition
\[
\lim_{t \to \infty} e^{-\lambda_{0} t} m(t) = C\left(m_{0}\right),
\]
where $\lambda_{0} = \max_{i} \lambda_{i}$.
\end{lemma}
\begin{proof}
We denote by $V_{\lambda_{i}}$ the finite-dimensional eigenspace of
$\mathscr{H}$ corresponding to the eigenvalue $\lambda_{i}$.
Consider the projection $P_{i}$ of $\mathscr{H}$ onto $V_{\lambda_{i}}$,
see~\cite{Kato:e}. Let
\begin{align*}
x_{i}(t) &:= P_{i} m(t),\\
v(t) &:= \left(I - \sum_{i} P_{i} \right) m(t) = m(t) - \sum_{i} x_{i}(t).
\end{align*}
It is known, see~\cite{Kato:e}, that all spectral projections $P_{i}$ and
$\left(I - \sum P_{i} \right)$ commute with $\mathscr{H}$. Therefore
\begin{align*}
\frac{d x_{i}(t)}{dt} &= P_{i} \mathscr{H} m(t) = \mathscr{H} x_{i}(t),\\
\frac{d v(t)}{dt} &= \left(I - \sum P_{i} \right) \mathscr{H} m(t) = \left(I - \sum P_{i} \right) \mathscr{H} \left(I - \sum P_{i} \right) v(t).
\end{align*}
As $x_{i}(t) \in V_{\lambda_{i}}$, we can see that $\mathscr{H} x_{i}(t) =
\lambda_{i} x_{i}(t)$, from which it follows that $x_{i}(t) = e^{\lambda_{i}
t} x_{i}(0)$. Also, since the spectrum of the operator $\mathscr{H}_{0} :=
\left(I - \sum P_{i} \right) \mathscr{H} \left(I - \sum P_{i} \right)$ is
included in the spectrum of $\mathscr{H}$ and does not contain any of the
isolated eigenvalues $\lambda_{i}$, it is included in $[-s, 0]$. From this
we obtain $|v(t)| \leqslant |v(0)|$ for all $t\geqslant0$,
see~\cite[Lemma~3.3.5]{YarBRW:e}. Therefore
\begin{equation}\label{E:mt}
m(t) = \sum_{i} e^{\lambda_{i} t} P_{i} m(0) + v(t),
\end{equation}
and the proof is complete.
\end{proof}
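The statement of Lemma~\ref{lem03} can be observed numerically. The sketch below is a toy illustration of ours, with a $2\times2$ symmetric matrix standing in for $\mathscr{H}$ and all numerical values chosen by us: it computes $m(t)=e^{t\mathscr{H}}m_0$ by the exponential series and checks that $e^{-\lambda_0 t}m(t)$ approaches the projection $P_0 m_0$ onto the leading eigenspace.

```python
import math

# Toy self-adjoint "operator": a 2x2 symmetric matrix with one positive
# eigenvalue (an illustrative stand-in for H; not from the text).
H = [[1.0, 0.5],
     [0.5, -1.0]]

# Eigen-decomposition of a 2x2 symmetric matrix.
tr = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
lam0 = 0.5 * (tr + math.sqrt(tr * tr - 4.0 * det))   # largest eigenvalue
# Normalized eigenvector f for lam0, from (H - lam0 I) f = 0.
f = [H[0][1], lam0 - H[0][0]]
nrm = math.hypot(f[0], f[1])
f = [f[0] / nrm, f[1] / nrm]

def solution(m0, t, terms=60):
    # m(t) = exp(tH) m0 via the truncated exponential series:
    # term_k = H^k m0 t^k / k!, accumulated into result.
    result = list(m0)
    term = list(m0)
    for k in range(1, terms):
        term = [(H[0][0] * term[0] + H[0][1] * term[1]) * t / k,
                (H[1][0] * term[0] + H[1][1] * term[1]) * t / k]
        result = [result[0] + term[0], result[1] + term[1]]
    return result

m0 = [1.0, 0.0]
proj = [(f[0] * m0[0] + f[1] * m0[1]) * c for c in f]  # P_0 m_0
mt = solution(m0, 15.0)
scaled = [math.exp(-lam0 * 15.0) * c for c in mt]
# scaled is close to proj, illustrating e^{-lam0 t} m(t) -> P_0 m(0).
```

Here the remainder term decays like $e^{-2\lambda_0 t}$ because the second eigenvalue of this toy matrix is $-\lambda_0$.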
\begin{remark}\label{RemL}
Let $\lambda_{0}$ be the largest eigenvalue of the operator
$\mathscr{H}$. Then due to~\eqref{E:mt}
$C(m_{0})=P_{0} m(0)$. Therefore $C(m_0)\neq 0$ if and only if the orthogonal projection $P_{0} m(0)$ of the initial value $m_0=m(0)$ onto the eigenspace corresponding to the eigenvalue $\lambda_{0}$ is non-zero.
If the eigenvalue $\lambda_{0}$ of the operator $\mathscr{H}$
is simple and $f$ is a corresponding eigenvector, the projection $P_{0}$
is defined by the formula $P_{0}x=\frac{(f,x)}{(f,f)}f$, where $(\cdot,\cdot)$
is the scalar product in the Hilbert space $E$.
In cases where $\lambda_{0}$ is not simple, describing the projection $P_{0}$ is a significantly more difficult task.
We remind the reader that we proved the simplicity of the largest eigenvalue of $\mathscr{H}$ above, which allows us to bypass this complication.
\end{remark}
\begin{theorem}
\label{thm06} Let the operator $\mathscr{H}$, defined as in~\eqref{E:defHbbb} with parameters $\lbrace
\beta_{i} \rbrace_{i=1}^{N}$, have a finite (counting multiplicity) number of positive eigenvalues. We denote the largest of them by $\lambda_{0}$, and the corresponding normalized eigenvector by $f$. Then for all $n
\in{\mathbf N}$ and $t \to \infty$ the following limit statements hold:
\begin{equation}\label{E:mainmom}
m_{n}(t, x, y) \sim C_{n}(x, y) e^{n \lambda_{0} t},\quad
m_{n}(t, x) \sim C_{n}(x) e^{n\lambda_{0} t},
\end{equation}
where
\[
C_{1} (x, y) = f(y) f(x),\qquad
C_{1}(x) = f(x)\frac{1}{\lambda_{0}} \sum_{j=1}^{N} \beta_{j} f(x_{j}),
\]
and for $n \geqslant 2$ the functions $C_{n}(x, y)$ and $C_{n}(x) > 0$ are defined by the equalities below:
\begin{align*}
C_{n}(x, y) &= \sum_{j=1}^{N} g^{(j)}_{n}\left(C_{1}(x_{j}, y), \ldots, C_{n-1}(x_{j}, y) \right) D^{(j)}_{n}(x),\\
C_{n}(x) &= \sum_{j=1}^{N} g^{(j)}_{n}\left(C_{1}(x_{j}), \ldots, C_{n-1}(x_{j}) \right) D^{(j)}_{n}(x),
\end{align*}
where $g^{(j)}_{n}$ are the functions defined in~\eqref{E:defgnj}
and $D^{(j)}_{n}(x)$ are certain functions that satisfy the estimate $|D_{n}^{(j)} (x)|\leqslant
\frac{2}{n\lambda_{0}}$ for $n\geqslant
n_{*}$ and some $n_{*}\in{\mathbf N}$.
\end{theorem}
\begin{proof}
For $n \in{\mathbf N}$ we introduce the functions
\[
\nu_{n} := m_{n} (t, x, y) e^{-n \lambda_{0} t}.
\]
We obtain from Theorem~\ref{thm02} (see equations~\eqref{eq007} and~\eqref{eq008} for
$m_{n}$) the following equations for $\nu_{n}$:
\begin{align*}
\frac{d \nu_{1}}{dt} &= \mathscr{H} \nu_{1} - \lambda_{0} \nu_{1},\\
\frac{d \nu_{n}}{dt} &= \mathscr{H} \nu_{n} - n \lambda_{0} \nu_{n} +
\sum_{j=1}^{N} \Delta_{x_{j}} g_{n}^{(j)} \left(\nu_{1}, \ldots, \nu_{n-1} \right),\qquad n \geqslant 2,
\end{align*}
the initial values being $\nu_{n}(0, \cdot, y) = \delta_{y}(\cdot)$, $n \in{\mathbf N}$.
Since $\lambda_{0}$ is the largest eigenvalue of $\mathscr{H}$, for $n
\geqslant 2$ the spectrum of the operator $\mathscr{H}_{n} := \mathscr{H} - n
\lambda_{0} I$ is included in $(-\infty, -(n-1)\lambda_{0}]$. As was
shown, for example, in \cite[p.~58]{YarBRW:e}, if the spectrum of a
continuous self-adjoint operator $\widetilde{\mathscr{H}}$ on a Hilbert space
is included in $(-\infty, -s]$, $s > 0$, and also $f(t) \to f_{*}$ as $t \to
\infty$, then the solution of the differential equation
\[
\frac{d \nu}{dt} = \widetilde{\mathscr{H}} \nu + f(t)
\]
satisfies the condition $\nu(t) \to -\widetilde{\mathscr{H}}^{-1} f_{*}$.
For this reason for $n \geqslant 2$ we obtain
\begin{multline*}
C_{n}(x, y) = \lim_{t \to \infty} \nu_{n} =
-\sum_{j=1}^{N} \left(\mathscr{H}_{n}^{-1} \Delta_{x_{j}}g_{n}^{(j)}(C_{1}(\cdot, y), \ldots, C_{n-1}(\cdot, y) )\right)(x)=\\
=-\sum_{j=1}^{N} g_{n}^{(j)}(C_{1}(x_{j}, y), \ldots, C_{n-1}(x_{j}, y) )(\mathscr{H}_{n}^{-1} \delta_{x_{j}}(\cdot))(x).
\end{multline*}
Let us now prove the existence of a natural number $n_{*}$ such that for all
$n\geqslant n_{*}$ the estimates
\[
D_{n}^{(j)} (x) := |(\mathscr{H}_{n}^{-1} \delta_{x_{j}}(\cdot))(x)|
\leqslant \frac{2}{n\lambda_{0}}
\]
hold.
We first estimate the norm of the operator $\mathscr{H}_{n}^{-1}$. For this purpose, let us introduce two vectors $x$ and $u$ such that $u=\mathscr{H}_{n}x=
\mathscr{H}x - n\lambda_{0} x$. Then $\|u\|\geqslant n\lambda_{0} \|x\| -
\|\mathscr{H}x\|\geqslant (n\lambda_{0} -\|\mathscr{H}\|)\|x\|$, hence
$\|\mathscr{H}_{n}^{-1}u\|=\|x\|\leqslant \|u\|/\left(n\lambda_{0}
-\|\mathscr{H}\|\right)$, and therefore for all $n\geqslant
n_{*}=2\lambda_{0}^{-1}\|\mathscr{H}\|$ the estimate
\[
\|\mathscr{H}_{n}^{-1}\|\leqslant \frac{2}{n\lambda_{0}}
\]
holds. From this we conclude that
\[
|(\mathscr{H}_{n}^{-1} \delta_{x_{j}}(\cdot))(x)| \leqslant
\|\mathscr{H}_{n}^{-1} \delta_{x_{j}}(\cdot)\|\leqslant
\|\mathscr{H}_{n}^{-1} \| \|\delta_{x_{j}}(\cdot) \| \leqslant \frac{2}{n\lambda_{0}},\qquad n\geqslant n_{*}.
\]
Let us now turn to estimating the asymptotic behaviour of particle number
moments. It follows from~\eqref{eq015} that as $t \to \infty$ the following
asymptotic equivalences hold:
\begin{equation}\label{eq1298}
m_{1}(t, x) \sim \sum_{j=1}^{N} \beta_{j} \int_{0}^{t} m_{1}(s, x, x_{j}) \,ds
\sim \sum_{j=1}^{N} \frac{\beta_{j}}{\lambda_{0}} m_{1}(t, x, x_{j}).
\end{equation}
Since the functions $m_{1}(t, x, x_{j})$ exhibit exponential growth as $t \to \infty$, the function $m_{1}(t, x)$
will display the same behaviour.
We can now infer the asymptotic behaviour of the higher moments $m_{n}(t, x)$ for $n \geqslant 2$ from the equations~\eqref{eq008} in much the same way it was done above for the higher moments $m_{n}(t, x, y)$.
We now proceed to prove the equalities for $C_{1} (x, y)$ and $C_{1}(x)$. By Corollary~\ref{cor2} the eigenvalue $\lambda_{0}$ is simple, from which it follows, according to Remark~\ref{RemL}, that
\[
C_{1} (x, y) = \lim_{t \to \infty} e^{-\lambda_{0} t} m_{1}(t, x, y) = P_{0}m_{0} =
\left( m_{1}(0, \cdot, y), f\right) f(x).
\]
But $m_{1}(0, \cdot, y) = \delta_{y}(\cdot)$, therefore
\[
C_{1} (x, y) = \left( m_{1}(0, \cdot, y), f\right) f(x) = f(y) f(x).
\]
We also obtain from~\eqref{eq1298} that
\[
C_{1}(x) = \frac{1}{\lambda_{0}} \sum_{j=1}^{N} \beta_{j} C_{1}(x, x_{j}) = f(x)\frac{1}{\lambda_{0}} \sum_{j=1}^{N} \beta_{j} f(x_{j}),
\]
which concludes the proof.\end{proof}
\begin{corollary}
\label{cor6} $C_{n}(x, y) = \psi^{n}(y) C_{n}(x)$, where $ \psi(y) =
\frac{\lambda_{0}f(y)}{\sum_{j=1}^{N} \beta_{j} f(x_{j})}$.
\end{corollary}
\begin{proof}
We prove the corollary by induction on $n$. The induction basis for $n=1$ holds due to Theorem~\ref{thm06}. Let us now deal with the induction step: according to Theorem~\ref{thm06},
\begin{align}
\label{eq1998}
C_{n+1}(x, y) &= \sum_{j=1}^{N} g^{(j)}_{n+1}\left(C_{1}(x_{j}, y), \ldots, C_{n}(x_{j}, y) \right) D^{(j)}_{n+1}(x),\\
\label{eq1999}
C_{n+1}(x) &= \sum_{j=1}^{N} g^{(j)}_{n+1}\left(C_{1}(x_{j}), \ldots, C_{n}(x_{j}) \right) D^{(j)}_{n+1}(x);
\end{align}
so it suffices to prove that for all $j$ the equalities
\[
g^{(j)}_{n+1}\left(C_{1}(x_{j}, y), \ldots, C_{n}(x_{j}, y) \right) = \psi^{n+1}(y) g^{(j)}_{n+1}\left(C_{1}(x_{j}), \ldots, C_{n}(x_{j}) \right)
\]
hold.
As follows from the definition and the induction hypothesis,
\begin{multline*}
g^{(j)}_{n+1}\left(C_{1}(x_{j}, y), \ldots, C_{n}(x_{j}, y) \right) =\\=
\sum_{r=2}^{n+1} \frac{\beta_{j}^{(r)}}{r!} \sum_{\substack{i_{1}, \ldots, i_{r} > 0 \\
i_{1} + \cdots + i_{r} = n+1}} \frac{(n+1)!}{i_{1}! \cdots i_{r}!} C_{i_{1}}(x_{j}, y) \cdots C_{i_{r}}(x_{j}, y) =\\
= \psi^{n+1} (y) \sum_{r=2}^{n+1} \frac{\beta_{j}^{(r)}}{r!} \sum_{\substack{i_{1}, \ldots, i_{r} > 0 \\ i_{1} + \cdots + i_{r} = n+1}} \frac{(n+1)!}{i_{1}! \cdots i_{r}!} C_{i_{1}}(x_{j}) \cdots C_{i_{r}}(x_{j}),
\end{multline*}
which proves the corollary.
\end{proof}
\section{Proof of Theorem~\ref{thm1}.}\label{S:limthm}
We will need a few auxiliary lemmas. Let us introduce the function
\begin{equation}\label{def1}
f(n, r) := \sum_{\substack{i_{1}, \ldots, i_{r} > 0 \\ i_{1} + \cdots + i_{r} = n}} i_{1}^{i_{1}} \cdots \,i_{r}^{i_{r}},\qquad 1 \leqslant r \leqslant n.
\end{equation}
\begin{lemma}
\label{lem2} $f(n, n) = 1$ and $f(n, 1) = n^{n}$; for
$2 \leqslant r \leqslant n$ the following formula holds:
\[
f(n, r) = \sum_{u=1}^{n-r+1} u^{u} f(n-u, r-1).
\]
\end{lemma}
\begin{proof}
To prove the lemma, group all summands in~\eqref{def1} by the possible values of
$i_{1}$; since $1 \leqslant i_{1} \leqslant n-r+1$,
\[
f(n, r) = \sum_{i_{1} = 1}^{ n-r+1} i_{1}^{i_{1}} \sum_{\substack{i_{2}, \ldots, i_{r} > 0 \\
i_{2} + \cdots + i_{r} = n-i_{1}}} i_{2}^{i_{2}} \cdots i_{r}^{i_{r}} = \sum_{u=1}^{n-r+1} u^{u} f(n-u, r-1),
\]
which completes the proof.
\end{proof}
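The recursion of Lemma~\ref{lem2} is convenient for computing $f(n,r)$ exactly. The sketch below (illustrative, with our own helper names) implements it and cross-checks it against the definition~\eqref{def1} by brute-force enumeration of the ordered tuples:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def f(n, r):
    # f(n, r) = sum of i_1^{i_1} * ... * i_r^{i_r} over ordered tuples of
    # positive integers with i_1 + ... + i_r = n, via the recursion of Lemma lem2.
    if r == 1:
        return n ** n
    return sum(u ** u * f(n - u, r - 1) for u in range(1, n - r + 2))

def f_bruteforce(n, r):
    # Direct evaluation of the defining sum (def1); exponential cost,
    # so usable only for small n and r.
    total = 0
    for c in product(range(1, n + 1), repeat=r):
        if sum(c) == n:
            p = 1
            for i in c:
                p *= i ** i
            total += p
    return total

print(f(3, 2))   # the two tuples (1,2) and (2,1) each contribute 4, so 8
print(f(5, 1))   # 5^5 = 3125
```

The boundary values $f(n,n)=1$ and $f(n,1)=n^n$ fall out of the recursion directly.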
\begin{lemma}
\label{lem3} The function $g(x) = x^{x} (n-x)^{n-x}$, $x \in [1, n-1]$, attains its maximum at the ends of its domain.
\end{lemma}
\begin{proof}
Taking the logarithm of $g(x)$, we obtain
\[
\ln g(x) = x\ln x + (n-x) \ln (n-x),
\]
from which it follows that
\[
\left( \ln g(x) \right)' = \ln x + 1 - \ln (n-x) - 1 = \ln x - \ln (n-x).
\]
This means that $\left( \ln g(x) \right)' < 0$ for $x < \frac{n}{2}$, and $\left( \ln
g(x) \right)' > 0$ for $x > \frac{n}{2}$. Therefore, the function $g(x)$
is decreasing when $x < \frac{n}{2}$ and increasing when $x > \frac{n}{2}$, which concludes the proof.
\end{proof}
\begin{remark}
The lemma above holds for any other interval included in the original domain $[1,
n-1]$ (with the only difference being that the function's values at the ends of the new interval can be different).
\end{remark}
\begin{lemma}
\label{lem4} For all $n \geqslant 2$ the inequality $f(n, 2) < 6
(n-1)^{n-1}$ holds.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $n$. The induction basis for $n=2$ and $n=3$ holds: $f(2, 2) = 1 < 6$ and $f(3, 2) =
8 < 24$.
Let us now turn to the induction step: to complete the proof, we have to show that the statement of the lemma holds for any $n>3$ if it holds for all preceding values of $n$. By applying Lemmas~\ref{lem2} and~\ref{lem3} and bounding the sum by its maximum term multiplied by the number of terms, we obtain
\begin{multline*}
f(n, 2) = \sum_{u=1}^{n-2+1} u^{u} f(n - u, 2-1) =\\=
\sum_{u=1}^{n-1} u^{u} (n-u)^{n-u} = 2 (n-1)^{n-1} +
\sum_{u=2}^{n-2} u^{u} (n-u)^{n-u} \leqslant\\
\leqslant 2(n-1)^{n-1} + 4 (n-3) (n-2)^{n-2} < 6 (n-1)^{n-1},
\end{multline*}
which concludes the proof.\end{proof}
\begin{lemma}
\label{lem5} For all $n \geqslant 3$ the inequality $f(n, 3) < 6
(n-1)^{n-1}$ holds.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $n$. The induction basis for $n=3$ holds: $f(3, 3) = 1 < 24$.
As for the inductive step, according to Lemma~\ref{lem4}
\begin{multline*}
f(n, 3) = \sum_{u=1}^{n-2} u^{u} f(n-u, 2) \leqslant
6 \sum_{u=1}^{n-2} u^{u} (n-u-1)^{n-u-1}\leqslant\\ \leqslant 6 (n-2)^{n-2} (n-2) = 6 (n-2)^{n-1} < 6 (n-1)^{n-1},
\end{multline*}
which completes the proof.
\end{proof}
\begin{lemma}
\label{lem6} For all $n \geqslant r$ and $r \geqslant 2$ the inequality
$f(n, r) < 6 (n-1)^{n-1}$ holds.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $n$. The induction basis and the cases $r = 2$ and $r = 3$ were considered in Lemmas~\ref{lem4} and~\ref{lem5}.
We can therefore assume that $r \geqslant 4$. Let us now prove the inductive step. By Lemma~\ref{lem2}
\begin{equation}
\label{eq2}
f(n, r) = \sum_{u=1}^{n-r+1} u^{u} f(n-u, r-1) \leqslant 6 \sum_{u=1}^{n-r+1} u^{u} (n-u-1)^{n-u-1}.
\end{equation}
It follows from Lemma~\ref{lem3} that the function $g(u) := u^{u} (n-u-1)^{n-u-1}$ attains its maximum value at (one of) the ends of the interval $[1, n-r+1]$. These values are
\[
g(1) = (n-2)^{n-2},\qquad
g(n-r+1) = (n-r+1)^{n-r+1} (r-2)^{r-2}\,.
\]
Consider $g(n-r+1)$. By applying Lemma~\ref{lem3} once again, we obtain that $h(r) := g(n-r+1)$
attains its maximum value at (one of) the ends of the interval $[3, n]$, these values being
\[
h(3) = (n-2)^{n-2},\qquad h(n) = (n-2)^{n-2}.
\]
So the largest term in the right-hand side of~\eqref{eq2} is
$g(1) =(n-2)^{n-2}$. Therefore,
\[
6 \sum_{u=1}^{n-r+1} u^{u} (n-u-1)^{n-u-1} \leqslant 6 (n - r + 1) (n-2)^{n-2} < 6 (n-1)^{n-1},
\]
which concludes the proof.
\end{proof}
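As a sanity check on Lemma~\ref{lem6} (again a Python sketch for illustration, not part of the argument), the bound can be verified exhaustively for small parameters:

```python
def f(n, r):
    # f(n, r) via the recursion of Lemma 2
    if r == 1:
        return n ** n
    return sum(u ** u * f(n - u, r - 1) for u in range(1, n - r + 2))

# Lemma 6 claims f(n, r) < 6 (n-1)^(n-1) for all n >= r >= 2;
# collect any counterexamples in a small range
violations = [(n, r)
              for n in range(2, 15)
              for r in range(2, n + 1)
              if f(n, r) >= 6 * (n - 1) ** (n - 1)]
```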
\begin{lemma}
\label{lem7} There is a constant $C>0$ such that for all $n \geqslant r
\geqslant 2$ the inequality
\begin{align}\label{eq019}
f(n, r) < C \frac{n^{n}}{r^{ r -1}}
\end{align}
holds.
\end{lemma}
\begin{proof}
We first introduce the quantities
\begin{align}\lambdaabel{E:defn1}
n_{1}&=\max\{n\in{\mathbf N}:~ 6^{6}(n-5)^{n-5} \geqslant 4^{4} (n-3)^{n-3}\},\\
\lambdaabel{E:defn2}
n_{2}&=\max\{n\in{\mathbf N}:~ 283 (n-2)^{n-2} \geqslant (n-1)^{n-1}\},\\
\lambdaabel{E:defn3}
n_{3}&=\max\lambdaeft\{n\in{\mathbf N}:~ \lambdaeft(1+ \frac{2}{n-1} \right)^{n-1} \lambdaeqslant 2 e\right\},\\
\lambdaabel{E:defntilde}
\tilde{n} &= \max \{n_{1}, n_{2}, n_{3}\}
\end{align}
and then the following sets of ordered pairs $(n,r)$:
\begin{align*}
\mathbb{D}&=\{(n,r):~ 2\leqslant r\leqslant n\},\\
\mathbb{D}_{1}&=\{(n,r):~ 2\leqslant r\leqslant n\leqslant \tilde{n}\},\\
\mathbb{D}_{2}&=\{(n,r):~ 2\leqslant r\leqslant 6,~
r\leqslant n\},\\
\mathbb{D}_{3}&=\{(n,r):~ r = n \text{ or } r = n-1\},\\
\widetilde{\mathbb{D}} &= \mathbb{D}_{1} \cup \mathbb{D}_{2} \cup \mathbb{D}_{3}.
\end{align*}
It can easily be seen that $n_{1},n_{2},n_{3}<\infty$; in fact, $\tilde{n} =
106$. Therefore, the set $\mathbb{D}_{1}$ contains a finite number of pairs,
and we can pick a large enough $C$ for~\eqref{eq019} to hold for all pairs
from this set.
As follows from Lemma~\ref{lem6}, the same can be said of the set $\mathbb{D}_{2}$. Indeed, since for all $n \geqslant r
\geqslant 2$ the inequality $f(n, r) < 6(n-1)^{n-1}$ is true, we obtain
\begin{align*}
f(n, 2) &< 6 (n-1)^{n-1} = 6 \cdot 2^{2-1} \frac{(n-1)^{n-1}}{2^{2-1}};\\
f(n, 3) &< 6 (n-1)^{n-1} = 6 \cdot 3^{3-1} \frac{(n-1)^{n-1}}{3^{3-1}};\\
&\cdots \\
f(n, 6) &< 6 (n-1)^{n-1} = 6 \cdot 6^{6-1} \frac{(n-1)^{n-1}}{6^{6-1}}.
\end{align*}
Therefore, for $C \geqslant 6^{6}$~\eqref{eq019} holds for any pair from the
set $\mathbb{D}_{2}$.
Finally,~\eqref{eq019} also holds for any pair from $\mathbb{D}_{3}$ with $C
\geqslant 2$, because by definition the following inequalities are true:
\[
f(n, n) = 1 < \frac{C n^{n}}{r^{r-1}} = Cn,\qquad
f(n, n-1) = 4(n-1) < \frac{C n^{n}}{r^{r-1}} = \frac{C n^{n}}{(n-1)^{n-2}}.
\]
From this it follows that the constant $C$ can be chosen large enough for
\eqref{eq019} to hold for any element of the set $\widetilde{\mathbb{D}}$. In addition, we set it to be large enough for the inequality to hold for all ordered pairs
$(\tilde{n}+1, r) \in \mathbb{D} \setminus \widetilde{\mathbb{D}}$, which can be done due to the number of these pairs being finite. We now fix $C$ according to these considerations.
Consequently, to complete the proof we only have to demonstrate that for the $C$ chosen above~\eqref{eq019} holds for all $(n, r) \in \mathbb{D} \setminus
\widetilde{\mathbb{D}}$. We do this by induction on
$n$, proving the statement on every step for all $r$ such that $(n, r) \in \mathbb{D} \setminus
\widetilde{\mathbb{D}}$.
By the definition of $\tilde{n}$ (see~\eqref{E:defntilde}) and thanks to the choice of $C$
we can use $n=\tilde{n}+1$ as induction basis. We now turn to the induction step: we assume that for some $n\geqslant \tilde{n}+1$ the statement of the lemma holds for all ordered pairs $(n,r) \in \mathbb{D} \setminus
\widetilde{\mathbb{D}}$ and prove that it then holds for all pairs $(n+1,
r) \in \mathbb{D} \setminus \widetilde{\mathbb{D}}$.
By Lemma~\ref{lem2} and the induction hypothesis
\[
f(n+1, r)= \sum_{u=1}^{n-r+2} u^{u} f(n+1-u, r-1) < \frac{C}{(r-1)^{ r-2}} \sum_{u=1}^{n-r+2} u^{u} (n+1-u)^{n-u+1}.
\]
We note that
\begin{equation}\lambdaabel{E:sum1}
\sum_{u=1}^{n-r+2} u^{u} (n+1-u)^{n-u+1} = n^{n} + 4(n-1)^{n-1} + 27(n-2)^{n-2} + \sum_{u=4}^{n-r+2} u^{u} (n+1-u)^{n-u+1}.
\end{equation}
In order to evaluate the sum in the right-hand side of the equation, we point out that by Lemma~\ref{lem3} the terms $k(u) :=
u^{u} (n+1-u)^{n-u+1}$ in the sum attain their maximum values at the ends of $[4, n-r+2]$; these values are
\[
k(4) = 4^{4} (n-3)^{n-3},\qquad
k(n-r+2) = (n-r+2)^{n-r+2} (r-1)^{r-1}.
\]
To find out which one of these two values $k(4)$ and $k(n-r+2)$ is greater,
consider the function
\[
l(r) :=
(n-r+2)^{n-r+2}\cdot (r-1)^{r-1}.
\]
Since $(n,r)\not\in\widetilde{\mathbb{D}}$, $(n,r)\not\in\mathbb{D}_{2}
\cup \mathbb{D}_{3}$. Therefore, $r \in [7, n-2]$, and the function $l(r)$ assumes its maximum value at the ends of $[7, n-2]$; these values are
\[
l(7) = 6^{6} (n-5)^{n-5},\qquad l(n-2) = 4^{4} (n-3)^{n-3},
\]
and we have to find out, once again, which one of them is larger.
Again, since $(n,r)\not\in\widetilde{\mathbb{D}}$,
$(n,r)\not\in\mathbb{D}_{1}$, and $6^{6}(n-5)^{n-5} < 4^{4}
(n-3)^{n-3}$. Therefore, $l(7) \leqslant l(n-2)$, from which we obtain
\begin{equation}\label{E:estku}
k(u) := u^{u} (n+1-u)^{n-u+1}\leqslant 4^{4} (n-3)^{n-3}\quad\text{for}\quad u\in [4, n-r+2].
\end{equation}
We can now finally evaluate the sum in the right-hand side of~\eqref{E:sum1}. Since none of the terms $u^{u} (n+1-u)^{n-u+1}$ for $u \in [4, n-r+2]$ in the right-hand side of~\eqref{E:sum1} exceed $4^{4} (n-3)^{n-3}$, and the number of these terms does not exceed $n-8$,
\begin{equation}\label{E:sum2}
\sum_{u=4}^{n-r+2} u^{u} (n+1-u)^{n-u+1} \leqslant 4^{4} (n-8) (n-3)^{n-3} \leqslant 4^{4} (n-2)^{n-2}.
\end{equation}
We therefore obtain from~\eqref{E:sum1} and~\eqref{E:sum2} that
\begin{multline*}
\sum_{u=1}^{n-r+2} u^{u} (n+1-u)^{n-u+1}
\leqslant n^{n} + 4(n-1)^{n-1} + 27(n-2)^{n-2} + 4^{4} (n-2)^{n-2} =\\
= n^{n} + 4(n-1)^{n-1} + 283 (n-2)^{n-2}.
\end{multline*}
Since $(n,r)\not\in\widetilde{\mathbb{D}}$, we have
$(n,r)\not\in\mathbb{D}_{1}$, and hence $283 (n-2)^{n-2} < (n-1)^{n-1}$, which allows us to rewrite the previous inequality as follows:
\[
\sum_{u=1}^{n-r+2} u^{u} (n+1-u)^{n-u+1}
\leqslant n^{n} + 4(n-1)^{n-1} + 283 (n-2)^{n-2} \leqslant n^{n} + 5(n-1)^{n-1}.
\]
Consequently,
\begin{multline}\label{E:sum233}
f(n+1, r) \leqslant \frac{C}{(r-1)^{ r-2}} \left[n^{n} + 5(n-1)^{n-1} \right] =\\ = \frac{C (n+1)^{n+1}}{r^{r -1}} \frac{r^{r - 1}}{(r-1)^{r-2}}\left[\frac{n^{n}}{(n+1)^{n+1}} + \frac{5 (n-1)^{n-1}}{(n+1)^{n+1}}\right].
\end{multline}
It is obvious that
\[
\frac{r^{r - 1}}{(r-1)^{r-2}} \cdot\frac{n^{n}}{(n+1)^{n+1}} = \frac{\left(1 + \frac{1}{r-1} \right)^{r-1}}{\left(1 + \frac{1}{n} \right)^{n}} \cdot \frac{r-1}{n+1}.
\]
Since the function $\left(1 + \frac{1}{x} \right)^{x}$ is monotonically increasing,
\[
\frac{\left(1 + \frac{1}{r-1} \right)^{r-1}}{\left(1 + \frac{1}{n} \right)^{n}} \cdot \frac{r-1}{n+1} < \frac{r-1}{n}.
\]
Now, as
\[
\frac{r^{r - 1}}{(r-1)^{r-2}} \cdot \frac{5 (n-1)^{n-1}}{(n+1)^{n+1}} = 5 \frac{\left(1 + \frac{1}{r-1} \right)^{r-1}}{\left(1 + \frac{2}{n-1} \right)^{n-1}} \cdot \frac{r-1}{(n+1)^{2}}
\]
and as $\left(1 + \frac{1}{x} \right)^{x} < e$ for all $x > 0$, while $(n,r)\not\in\widetilde{\mathbb{D}}$ implies
$(n,r)\not\in\mathbb{D}_{1}$ and therefore $\left(1+ \frac{2}{n-1} \right)^{n-1}
> 2 e$, we obtain
\[
5 \frac{\left(1 + \frac{1}{r-1} \right)^{r-1}}{\left(1 + \frac{2}{n-1} \right)^{n-1}} \cdot \frac{r-1}{(n+1)^{2}} \leqslant \frac{5}{2}\, \frac{1}{n+1}.
\]
This allows us to rewrite \eqref{E:sum233} as follows:
\begin{multline*}
f(n+1, r) \leqslant \frac{C (n+1)^{n+1}}{r^{r -1}} \left[ \frac{r-1}{n} +
\frac{5}{2} \frac{1}{n+1}\right] \leqslant \frac{C (n+1)^{n+1}}{r^{r -1}} \cdot \frac{r + 3/2}{n} \leqslant \frac{C (n+1)^{n+1}}{r^{r -1}},
\end{multline*}
where the last inequality follows from the fact that
$(n,r)\not\in\widetilde{\mathbb{D}}$ and therefore
$(n,r)\not\in\mathbb{D}_{3}$, that is to say, $r \leqslant n-2$. This concludes the proof.
\end{proof}
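Lemma~\ref{lem7} can likewise be probed numerically; the sketch below (Python, illustration only) checks the bound with the concrete, non-optimal constant $C = 6^{6}$ on a small range, consistent with the choice made for $\mathbb{D}_{2}$ in the proof.

```python
def f(n, r):
    # f(n, r) via the recursion of Lemma 2
    if r == 1:
        return n ** n
    return sum(u ** u * f(n - u, r - 1) for u in range(1, n - r + 2))

C = 6 ** 6
# Lemma 7: f(n, r) < C * n^n / r^(r-1); checked in exact integer
# arithmetic as f(n, r) * r^(r-1) < C * n^n to avoid rounding issues
ok = all(f(n, r) * r ** (r - 1) < C * n ** n
         for n in range(2, 13)
         for r in range(2, n + 1))
```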
We now turn to proving Theorem~\ref{thm1}.
\begin{proof}
Let us define the functions
\[
m(n, x, y) :=\lim_{t\to \infty}\frac{m_{n}(t,x,y)}{m_{1}^{n}(t,x,y)} =\frac{C_{n} (x, y)}{C_{1}^{n} (x, y)},\quad
m(n, x) :=\lim_{t\to \infty}\frac{m_{n}(t,x)}{m_{1}^{n}(t,x)}= \frac{C_{n} (x)}{C_{1}^{n} (x)};
\]
as follows from Theorem~\ref{thm06} and $G_{\lambda}(x,y)$ being positive, these definitions are sound. Corollary~\ref{cor6} yields
\[
m(n, x, y) = m(n, x) = \frac{C_{n}(x)}{C_{1}^{n}(x)} = \frac{C_{n}(x, y)}{C_{1}^{n}(x, y)}.
\]
From these equalities and the asymptotic equivalences~\eqref{E:mainmom} we obtain the equalities~\eqref{eq1} in Theorem~\ref{thm1} in terms of moment convergence of the random variables $\xi(y)=\psi(y)\xi$ and $\xi$.
For the random variables $\xi(y)$ and $\xi$ to be uniquely defined by their
moments, it suffices to demonstrate, as was shown in \cite{YarBRW:e}, that
Carleman's criterion
\begin{equation}\label{E:Carl}
\sum_{n=1}^{\infty} m(n, x, y)^{-1/(2n)} = \infty,\qquad \sum_{n=1}^{\infty} m(n, x)^{-1/(2n)} = \infty
\end{equation}
holds.
We establish below that the series for the moments $m(n, x)$ diverges and that, therefore, said moments define the random variable $\xi$ uniquely; the statement concerning $\xi(y)$ and its moments can be proved in much the same manner.
Since $\beta^{(r)}_{j} = O(r! \,r^{r-1})$, there is a constant $D$ such that
for all $r \geqslant 2$ and $j = 1, \ldots, N$ the inequality
$\beta^{(r)}_{j} < D r! \,r^{r-1}$ holds. We assume without loss of
generality that for all $n$
\[
C_{n}(x) \leqslant \max_{j = 1, \ldots, N} C_{n}(x_{j}) = C_{n}(x_{1}).
\]
Let
\[
\gamma := 2 N \cdot C \cdot D \cdot E\cdot\frac{\lambda_{0}\beta_{2}}{2}\cdot C_{1}^{2}(x_{1}),
\]
where $C$ is as defined in Lemma~\ref{lem7}, and the constant $E$ is such that
$C_{n}(x_{1}) \leqslant \gamma^{n-1} n! \,n^{n}$ for $n \leqslant
\max\{n_{*}, 2 \}$, where $n_{*}$ is defined in Theorem~\ref{thm06}.
Let us show by induction that
\[
C_{n}(x) \leqslant C_{n}(x_{1}) \leqslant \gamma^{n-1} n! \,n^{n}.
\]
The induction basis for $n = 1$ is valid due to the choice of the constant $E$. To prove the inductive step, we will demonstrate that
\[
C_{n+1}(x) \leqslant C_{n+1}(x_{1}) \leqslant \gamma^{n} (n+1)! \,(n+1)^{n+1}.
\]
It follows from the formula for $C_{n+1}(x_{1})$ and the estimate for
$D_{n}^{(j)}(x)$ from Theorem~\ref{thm06} that
\[
C_{n+1}(x_{1}) \leqslant
\sum_{j=1}^{N}\sum_{r=2}^{n+1} \frac{\beta^{(r)}_{j}}{r!} \sum_{\substack{i_{1}, \ldots, i_{r} > 0 \\
i_{1} + \cdots + i_{r} = n+1}} \frac{(n+1)!}{i_{1}! \cdots i_{r}!} C_{i_{1}}(x_{1})\cdots C_{i_{r}}(x_{1}) \frac{2}{\lambda_{0}(n+1)}.
\]
By the induction hypothesis
\[
\frac{(n+1)!}{i_{1}! \cdots i_{r}!} C_{i_{1}}(x_{1}) \cdots C_{i_{r}}(x_{1}) \leqslant \gamma^{n+1-r} (n+1)! \,i_{1}^{i_{1}} \cdots i_{r}^{i_{r}},
\]
which, added to the fact that $\beta_{j}^{(r)} < D r! \,r^{r-1}$ and $\gamma^{n+1-r} \leqslant
\gamma^{n-1}$, yields
\begin{multline*}
\sum_{j=1}^{N} \sum_{r=2}^{n+1} \frac{\beta_{j}^{(r)}}{r!} \sum_{\substack{i_{1}, \ldots, i_{r} > 0 \\
i_{1} + \cdots + i_{r} = n+1}} \frac{(n+1)!}{i_{1}! \cdots i_{r}!} C_{i_{1}}(x_{1}) \cdots C_{i_{r}}(x_{1}) \leqslant\\
\leqslant N\gamma^{n-1} D (n+1)! \sum_{r=2}^{n+1} r^{r-1} \sum_{\substack{i_{1}, \ldots, i_{r} > 0 \\ i_{1} + \cdots + i_{r} = n+1}} i_{1}^{i_{1}} \cdots i_{r}^{i_{r}} =\\
= N \gamma^{n-1}\,D (n+1)! \sum_{r=2}^{n+1} r^{r-1} f(n+1, r).
\end{multline*}
We infer from Lemma~\ref{lem7} that
\begin{multline*}
N \gamma^{n-1}\,D (n+1)! \sum_{r=2}^{n+1} r^{r-1} f(n+1, r) \leqslant N \gamma^{n-1} (n+1)!\,D\cdot C \sum_{r=2}^{n+1} (n+1)^{n+1} \leqslant\\\leqslant N \gamma^{n-1} D \cdot C (n+1)! (n+1)^{n+2}.
\end{multline*}
Therefore, by referring to the definition of $\gamma$ we obtain
\[
C_{n+1}(x) \lambdaeqslant \gamma^{n} (n+1)! (n+1)^{n+1},
\]
which completes the proof of the induction step.
Finally, since $n! \leqslant \left(\frac{n+1}{2} \right)^{n}$,
$C_{n}(x) \leqslant \frac{\gamma^{n}}{2^{n}} (n+1)^{2n}$. Thus,
\[
m(n, x) = \frac{C_{n}(x)}{C_{1}^{n}(x)} \leqslant \left(\frac{\gamma}{2 C_{1}(x)} \right)^{n} (n+1)^{2n},
\]
from which it follows that
\[
\sum_{n=1}^{\infty} m(n, x)^{-1/(2n)} \geqslant \sqrt{\frac{2 C_{1}(x)}{\gamma}} \sum_{n=1}^{\infty} \frac{1}{n+1} = \infty.
\]
Thus the condition~\eqref{E:Carl} holds, and the corresponding Stieltjes moment problem for the moments $m(n,x)$ has a unique solution~\cite[Th.~1.11]{ShT:e}, and therefore the equalities~\eqref{eq1} hold in terms of convergence in distribution. This completes the proof of Theorem~\ref{thm1}.
\end{proof}
\textbf{Acknowledgements.} The research was supported by the Russian
Foundation for Basic Research, project no. 17-01-00468.
\end{document}
\begin{document}
\begin{frontmatter}
\title{LP-rounding Algorithms for the Fault-Tolerant\\
Facility Placement Problem\tnoteref{t1}} \tnotetext[t1]{A
preliminary version of this article appeared in Proc. CIAC 2013.}
\author{Li Yan\corref{cor1} and Marek Chrobak\fnref{fn1}}
\ead{\{lyan,marek\}@cs.ucr.edu}
\address{Department of Computer Science\\
University of California at Riverside\\
Riverside, CA 92521, USA}
\cortext[cor1]{Corresponding author}
\fntext[fn1]{Work supported by NSF grants CCF-0729071 and CCF-1217314.}
\begin{abstract}
The Fault-Tolerant Facility Placement problem (FTFP) is a
generalization of the classic Uncapacitated Facility
Location Problem (UFL). In FTFP we are given a set of
facility sites and a set of clients. Opening a facility at
site $i$ costs $f_i$ and connecting client $j$ to a
facility at site $i$ costs $d_{ij}$. We assume that the
connection costs (distances) $d_{ij}$ satisfy the triangle
inequality. Multiple facilities can be opened at any
site. Each client $j$ has a demand $r_j$, which means that
it needs to be connected to $r_j$ different facilities
(some of which could be located on the same site). The
goal is to minimize the sum of facility opening cost and
connection cost.
The main result of this paper is a $1.575$-approximation algorithm
for FTFP, based on LP-rounding. The algorithm first reduces the
demands to values polynomial in the number of sites. Then it uses a
technique that we call adaptive partitioning, which partitions the
instance by splitting clients into unit demands and creating a
number of (not yet opened) facilities at each site. It also
partitions the optimal fractional solution to produce a fractional
solution for this new instance. The partitioned fractional solution
satisfies a number of properties that allow us to exploit existing
LP-rounding methods for UFL to round our partitioned solution to an
integral solution, preserving the approximation ratio. In
particular, our $1.575$-approximation algorithm is based on the
ideas from the $1.575$-approximation algorithm for UFL by
Byrka~{\it et al.}, with changes necessary to satisfy the fault-tolerance
requirement.
\end{abstract}
\begin{keyword}
Facility Location \sep Approximation Algorithms
\end{keyword}
\end{frontmatter}
\section{Introduction}
In the \emph{Fault-Tolerant Facility Placement} problem
(FTFP), we are given a set $\mathbb{F}$ of \emph{sites} at
which facilities can be built, and a set $\mathbb{C}$ of
\emph{clients} with some demands that need to be satisfied
by different facilities. A client $j\in\mathbb{C}$ has demand
$r_j$. Building one facility at a site $i\in\mathbb{F}$ incurs a cost
$f_i$, and connecting one unit of demand from client $j$ to
a facility at site $i$ costs $d_{ij}$. Throughout the
paper we assume that the connection costs (distances)
$d_{ij}$ form a metric, that is, they are
symmetric and satisfy the triangle inequality. In a feasible solution, some
number of facilities, possibly zero, are opened at each site
$i$, and demands from each client are connected to those
open facilities, with the constraint that demands from the
same client have to be connected to different
facilities. Note that any two facilities at the same site are considered different.
It is easy to see that if all $r_j=1$ then FTFP reduces to
the classic Uncapacitated Facility Location problem (UFL).
If we add a constraint that each site can have at most one
facility built on it, then the problem becomes equivalent to the
Fault-Tolerant Facility Location problem (FTFL). One
implication of the one-facility-per-site restriction in FTFL
is that $\max_{j\in\mathbb{C}}r_j \leq |\mathbb{F}|$, while
in FTFP the values of $r_j$'s can be much bigger than
$|\mathbb{F}|$.
The UFL problem has a long history; in particular, great
progress has been achieved in the past two decades in
developing techniques for designing constant-ratio
approximation algorithms for UFL. Shmoys, Tardos and
Aardal~\cite{ShmoysTA97} proposed an approach based on
LP-rounding, that they used to achieve a ratio of 3.16.
This was then improved by Chudak~\cite{ChudakS04} to 1.736,
and later by Sviridenko~\cite{Svi02} to 1.582.
The best known ``pure'' LP-rounding algorithm is due to
Byrka~{{\it et al.}}~\cite{ByrkaGS10} with ratio 1.575.
Byrka and Aardal~\cite{ByrkaA10} gave a hybrid algorithm that combines LP-rounding
and dual-fitting (based on \cite{JainMMSV03}), achieving a ratio of 1.5. Recently,
Li~\cite{Li11} showed that, with a more refined analysis and
randomizing the scaling parameter used in \cite{ByrkaA10}, the ratio can be improved
to 1.488. This is the best known approximation result for UFL.
Other techniques include the primal-dual algorithm with ratio 3 by
Jain and Vazirani~\cite{JainV01}, the dual fitting method by
Jain~{{\it et al.}}~\cite{JainMMSV03} that gives ratio 1.61, and a
local search heuristic by Arya~{{\it et al.}}~\cite{AryaGKMMP04}
with approximation ratio 3. On the hardness side, UFL is
easily shown to be {{\mbox{\sf NP}}}-hard, and it is known that it is
not possible to approximate UFL in polynomial time with
ratio less than $1.463$, provided that
${\mbox{\sf NP}}\not\subseteq{\mbox{\sf DTIME}}(n^{O(\log\log
n)})$~\cite{GuhaK98}. An observation by Sviridenko
strengthened the underlying assumption to ${\mbox{\sf P}}\ne {\mbox{\sf NP}}$ (see \cite{vygen05}).
FTFL was first introduced by Jain and
Vazirani~\cite{JainV03} and they adapted their primal-dual
algorithm for UFL to obtain a ratio of
$3\ln(\max_{j\in\mathbb{C}}r_j)$. All subsequently
discovered constant-ratio approximation algorithms use
variations of LP-rounding. The first such algorithm, by
Guha~{{\it et al.}}~\cite{GuhaMM01}, adapted the approach for UFL
from \cite{ShmoysTA97}. Swamy and Shmoys~\cite{SwamyS08}
improved the ratio to $2.076$ using the idea of pipage
rounding introduced in \cite{Svi02}. Most recently,
Byrka~{{\it et al.}}~\cite{ByrkaSS10} improved the ratio to 1.7245
using dependent rounding and laminar clustering.
FTFP is a natural generalization of UFL. It was first
studied by Xu and Shen~\cite{XuS09}, who extended the
dual-fitting algorithm from~\cite{JainMMSV03} to give an
approximation algorithm with a ratio claimed to be
$1.861$. However their algorithm runs in polynomial time
only if $\max_{j\in\mathbb{C}} r_j$ is polynomial in
$|\mathbb{F}|\cdot |\mathbb{C}|$, and the analysis of the
performance guarantee in \cite{XuS09} is flawed\footnote{Confirmed through
private communication with the authors.}. To date, the
best approximation ratio for FTFP in the literature is $3.16$,
established by Yan and Chrobak~\cite{YanC11}, while the only
known lower bound is the $1.463$ lower bound for UFL
from~\cite{GuhaK98}, as UFL is a special case of FTFP.
If all demand values $r_j$ are equal, the problem can be solved
by simple scaling and applying LP-rounding algorithms for UFL. This does
not affect the approximation ratio, thus achieving ratio $1.575$ for this
special case (see also \cite{LiaoShen11}).
The main result of this paper is an LP-rounding algorithm
for FTFP with approximation ratio 1.575, matching the best
ratio for UFL achieved via the LP-rounding method
\cite{ByrkaGS10} and significantly improving our earlier
bound in~\cite{YanC11}. In Section~\ref{sec: polynomial
demands} we prove that, for the purpose of LP-based
approximations, the general FTFP problem can be reduced to
the restricted version where all demand values are
polynomial in the number of sites. This \emph{demand
reduction} trick itself gives us a ratio of $1.7245$,
since we can then treat an instance of FTFP as an instance
of FTFL by creating a sufficient (but polynomial) number of
facilities at each site, and then using the algorithm
from~\cite{ByrkaSS10} to solve the FTFL instance.
The reduction to polynomial demands suggests an approach where
clients' demands are split into unit demands. These unit demands can
be thought of as ``unit-demand clients'', and a natural approach would
be to adapt LP-rounding methods from
\cite{gupta08,ChudakS04,ByrkaGS10} to this new set of unit-demand
clients. Roughly, these algorithms iteratively pick a client that
minimizes a certain cost function (that varies for different
algorithms) and open one facility in the neighborhood of this
client. The remaining clients are then connected to these open
facilities. In order for this to work, we also need to convert the
optimal fractional solution $(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast)$ of the original
instance into a solution $(\bar{\boldsymbol{x}},\bar{\boldsymbol{y}})$ of the modified instance
which then can be used in the LP-rounding process. This can be thought
of as partitioning the fractional solution, as each connection value
$x^\ast_{ij}$ must be divided between the $r_j$ unit demands
of client $j$ in some way. In Section~\ref{sec: adaptive partitioning} we
formulate a set of properties required for this partitioning to
work. For example, one property guarantees that we can connect demands
to facilities so that two demands from the same client are connected
to different facilities. Then we present our \emph{adaptive
partitioning} technique that computes a partitioning with all the
desired properties. Using adaptive partitioning we were able to extend
the algorithms for UFL from \cite{gupta08,ChudakS04,ByrkaGS10} to
FTFP. We illustrate the fundamental ideas of our approach in
Section~\ref{sec: 3-approximation}, showing how they can be used to
design an LP-rounding algorithm with ratio $3$. In Section~\ref{sec:
1.736-approximation} we refine the algorithm to improve the
approximation ratio to $1+2/e\approx 1.736$. Finally, in
Section~\ref{sec: 1.575-approximation}, we improve it even further to
$1.575$ -- the main result of this paper.
Summarizing, our contributions are two-fold: One, we show
that the existing LP-rounding algorithms for UFL can be
extended to a much more general problem FTFP, retaining the
approximation ratio. We believe that, should even better
LP-rounding algorithms be developed for UFL in the future,
using our demand reduction and adaptive partitioning
methods, it should be possible to extend them to FTFP.
In fact, some improvement of the ratio
should be achieved by randomizing the scaling parameter
$\gamma$ used in our algorithm, as Li showed in \cite{Li11}
for UFL. (Since the ratio $1.488$ for UFL in~\cite{Li11}
uses also dual-fitting
algorithms~\cite{MahdianYZ06}, we would not obtain the same
ratio for FTFP yet using only LP-rounding.)
Two, our ratio of $1.575$ is significantly better than the
best currently known ratio of $1.7245$ for the
closely-related FTFL problem. This suggests that in the
fault-tolerant scenario the capability of creating
additional copies of facilities on the existing sites makes
the problem easier from the point of view of approximation.
\section{The LP Formulation}\label{sec: the lp formulation}
The FTFP problem has a natural Integer Programming (IP)
formulation. Let $y_i$ represent the number of facilities
built at site $i$ and let $x_{ij}$ represent the number of
connections from client $j$ to facilities at site $i$. If we
relax the integrality constraints, we obtain the following LP:
\begin{alignat}{3}
\textrm{minimize} \quad {\it cost}(\boldsymbol{x},\boldsymbol{y}) &= \textstyle{\sum_{i\in \mathbb{F}}f_iy_i
+ \sum_{i\in \mathbb{F}, j\in \mathbb{C}}d_{ij}x_{ij}}\label{eqn:fac_primal}\hspace{-1.5in}&&
\\ \notag
\textrm{subject to}\quad y_i - x_{ij} &\geq 0 &\quad &\forall i\in \mathbb{F}, j\in \mathbb{C}
\\ \notag
\textstyle{\sum_{i\in \mathbb{F}} x_{ij}} &\geq r_j & &\forall j\in \mathbb{C}
\\ \notag
x_{ij} \geq 0, y_i &\geq 0 & &\forall i\in \mathbb{F}, j\in \mathbb{C}
\\ \notag
\end{alignat}
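For concreteness, the integer program above can be evaluated exhaustively on toy instances. The following sketch (ours, purely illustrative, and unrelated to the algorithms in this paper) enumerates facility vectors $y$ and, for each client, greedily picks its cheapest distinct facility copies.

```python
from itertools import product

def ftfp_opt(f, d, r):
    # Exhaustive search for the optimal integral FTFP solution.
    # f[i]: opening cost of site i; d[i][j]: distance from site i
    # to client j; r[j]: demand of client j.  Toy instances only.
    nf, nc = len(f), len(r)
    ymax = max(r)  # one site never needs more than max_j r_j copies
    best = float("inf")
    for y in product(range(ymax + 1), repeat=nf):
        cost = sum(fi * yi for fi, yi in zip(f, y))
        for j in range(nc):
            # client j connects to its r[j] cheapest facility copies;
            # site i contributes y[i] distinct copies at distance d[i][j]
            slots = sorted(d[i][j] for i in range(nf) for _ in range(y[i]))
            if len(slots) < r[j]:
                cost = float("inf")  # not enough open facilities
                break
            cost += sum(slots[:r[j]])
        best = min(best, cost)
    return best
```

The per-client greedy choice is exact here because a facility may serve arbitrarily many clients; the only coupling between clients is through the opening costs, and the only per-client constraint is that the $r_j$ demands go to distinct facility copies.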
\noindent
The dual program is:
\begin{alignat}{3}
\textrm{maximize}\quad \textstyle{\sum_{j\in \mathbb{C}}} r_j\alpha_j&\label{eqn:fac_dual}
\\ \notag
\textrm{subject to} \quad \textstyle{
\sum_{j\in \mathbb{C}}\beta_{ij}} &\leq f_i &\quad\quad &\forall i \in \mathbb{F}
\\ \notag
\alpha_{j} - \beta_{ij} &\leq d_{ij} & & \forall i\in \mathbb{F}, j\in \mathbb{C}
\\ \notag
\alpha_j \geq 0, \beta_{ij} &\geq 0 & & \forall i\in \mathbb{F}, j\in \mathbb{C}
\\ \notag
\end{alignat}
In each of our algorithms we will fix some optimal
solutions of the LPs (\ref{eqn:fac_primal}) and (\ref{eqn:fac_dual})
that we will denote by $(\boldsymbol{x}^\ast, \boldsymbol{y}^\ast)$ and
$(\boldsymbol{\alpha}^\ast,\boldsymbol{\beta}^\ast)$, respectively.
With $(\boldsymbol{x}^\ast, \boldsymbol{y}^\ast)$ fixed, we can define the
optimal facility cost as $F^\ast=\sum_{i\in\mathbb{F}} f_i
y_i^\ast$ and the optimal connection cost as $C^\ast =
\sum_{i\in\mathbb{F},j\in\mathbb{C}} d_{ij}x_{ij}^\ast$.
Then $\mbox{\rm LP}^\ast = {\it cost}(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast) = F^\ast+C^\ast$
is the joint optimal value of (\ref{eqn:fac_primal}) and
(\ref{eqn:fac_dual}). We can also associate with each
client $j$ its fractional connection cost $C^\ast_j =
\sum_{i\in\mathbb{F}} d_{ij}x_{ij}^\ast$. Clearly, $C^\ast =
\sum_{j\in\mathbb{C}} C^\ast_j$. Throughout the paper we
will use notation $\mbox{\rm OPT}$ for the optimal integral solution
of (\ref{eqn:fac_primal}). $\mbox{\rm OPT}$ is the value we wish to
approximate, but, since $\mbox{\rm OPT}\ge\mbox{\rm LP}^\ast$, we can instead use
$\mbox{\rm LP}^\ast$ to estimate the approximation ratio of our
algorithms.
\paragraph{Completeness and facility splitting.}
Define $(\boldsymbol{x}^\ast, \boldsymbol{y}^\ast)$ to be \emph{complete} if
$x_{ij}^\ast>0$ implies that $x_{ij}^\ast=y_i^\ast$ for all $i,j$. In
other words, each connection either uses a site fully or not at all.
As shown by Chudak and Shmoys~\cite{ChudakS04}, we can modify the
given instance by adding at most $|\mathbb{C}|$ sites to obtain an
equivalent instance that has a complete optimal solution, where
``equivalent'' means that the values of $F^\ast$, $C^\ast$ and
$\mbox{\rm LP}^\ast$, as well as $\mbox{\rm OPT}$, are not affected. Roughly, the argument
is this: We notice that, without loss of generality, for each client
$k$ there exists at most one site $i$ such that $0 < x_{ik}^\ast <
y_i^\ast$. We can then perform the following \emph{facility
splitting} operation on $i$: introduce a new site $i'$, let
$y^\ast_{i'} = y^\ast_i - x^\ast_{ik}$, redefine $y^\ast_i$ to be
$x^\ast_{ik}$, and then for each client $j$ redistribute $x^\ast_{ij}$
so that $i$ retains as much connection value as possible and $i'$
receives the rest. Specifically, we set
\begin{align*}
&y^\ast_{i'} \;{\,\leftarrow\,}\; y^\ast_i - x^\ast_{ik},\; y^\ast_{i} \;{\,\leftarrow\,}\; x^\ast_{ik}, \quad \text{ and }\\
&x^\ast_{i'j} \;{\,\leftarrow\,}\;\max( x^\ast_{ij} - x^\ast_{ik}, 0 ),\; x^\ast_{ij} \;{\,\leftarrow\,}\; \min( x^\ast_{ij} , x^\ast_{ik})
\quad \textrm{for all}\ j \neq k.
\end{align*}
This operation eliminates the partial connection between $k$
and $i$ and does not create any new partial
connections. Each client can split at most one site and
hence we shall have at most $|\mathbb{C}|$ more sites.
By the above paragraph, without loss of generality we can
assume that the optimal fractional solution $(\boldsymbol{x}^\ast, \boldsymbol{y}^\ast)$
is complete. This assumption will in fact greatly simplify some of
the arguments in the paper. Additionally, we will frequently use the facility
splitting operation described above in our algorithms to obtain fractional solutions with
desirable properties.
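The facility splitting operation can be written out directly. The sketch below (Python, with a hypothetical data layout: lists indexed by site and by client) repeatedly splits sites until the solution is complete.

```python
def make_complete(x, y):
    # x[i][j]: connection value between site i and client j;
    # y[i]: facility value at site i.  Splits sites until every
    # positive x[i][j] equals y[i], i.e. the solution is complete.
    x = [row[:] for row in x]
    y = y[:]
    i = 0
    while i < len(y):
        partial = [v for v in x[i] if 0 < v < y[i]]
        if not partial:
            i += 1
            continue
        t = min(partial)
        # split site i: a new site i' keeps the remainder above t
        y.append(y[i] - t)
        x.append([max(v - t, 0.0) for v in x[i]])
        y[i] = t
        x[i] = [min(v, t) for v in x[i]]
    return x, y
```

This variant splits at the smallest partial value rather than at a distinguished client $k$, so it needs no without-loss-of-generality preprocessing: each split makes the current site complete and strictly decreases the total number of partial connections, so the loop terminates, while per-client connection totals and total facility mass are preserved.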
\section{Reduction to Polynomial Demands}
\label{sec: polynomial demands}
This section presents a \emph{demand reduction} trick that
reduces the problem for arbitrary demands to a special case
where demands are bounded by $|\mathbb{F}|$, the number of
sites. (The formal statement is a little more technical --
see Theorem~\ref{thm: reduction to polynomial}.) Our
algorithms in the sections that follow process individual
demands of each client one by one, and thus they critically
rely on the demands being bounded polynomially in terms of
$|\mathbb{F}|$ and $|\mathbb{C}|$ to keep the overall running time polynomial.
The reduction is based on an optimal fractional solution
$(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast)$ of LP~(\ref{eqn:fac_primal}). From the
optimality of this solution, we can also assume that
$\sum_{i\in\mathbb{F}} x^\ast_{ij} = r_j$ for all
$j\in\mathbb{C}$. As explained in Section~\ref{sec: the lp
formulation}, we can assume that $(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast)$
is complete, that is $x^\ast_{ij} > 0$ implies $x^\ast_{ij}
= y^\ast_i$ for all $i,j$. We split this solution into two
parts, namely $(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast) = (\hat{\boldsymbol{x}},\hat{\boldsymbol{y}})+
(\dot{\boldsymbol{x}},\dot{\boldsymbol{y}})$, where
\begin{align*}
{\hat y}_i &\;{\,\leftarrow\,}\; \floor{y_i^\ast}, \quad
{\hat x}_{ij} \;{\,\leftarrow\,}\; \floor{x_{ij}^\ast} \quad\textrm{and}
\\
{\dot y}_i &\;{\,\leftarrow\,}\; y_i^\ast - \floor{y_i^\ast}, \quad
{\dot x}_{ij} \;{\,\leftarrow\,}\; x_{ij}^\ast - \floor{x_{ij}^\ast}
\end{align*}
for all $i,j$. Now we construct two
FTFP instances ${\hat{\cal I}}$ and ${\dot{\cal I}}$ with the same
parameters as the original instance, except that the demand of each client $j$ is
${\hat r}_j = \sum_{i\in\mathbb{F}} {\hat x}_{ij}$ in instance ${\hat{\cal I}}$ and
${\dot r}_j = \sum_{i\in\mathbb{F}} {\dot x}_{ij} = r_j - {\hat r}_j$ in instance ${\dot{\cal I}}$.
It is obvious that if we have integral solutions to both ${\hat{\cal I}}$
and ${\dot{\cal I}}$ then, when added together, they form an integral
solution to the original instance. Moreover, we have the
following lemma.
\begin{lemma}\label{lem: polynomial demands partition}
{\rm (i)}
$(\hat{\boldsymbol{x}}, \hat{\boldsymbol{y}})$ is a feasible integral solution to
instance ${\hat{\cal I}}$.
\noindent
{\rm (ii)}
$(\dot{\boldsymbol{x}}, \dot{\boldsymbol{y}})$ is a feasible fractional
solution to instance ${\dot{\cal I}}$.
\noindent
{\rm (iii)}
${\dot r}_j\leq |\mathbb{F}|$ for every client $j$.
\end{lemma}
\begin{proof}
(i) For feasibility, we need to verify that the constraints of LP~(\ref{eqn:fac_primal})
are satisfied. Directly from the definition, we have ${\hat r}_j = \sum_{i\in\mathbb{F}} {\hat x}_{ij}$.
For any $i$ and $j$, by the feasibility of $(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast)$ we have
${\hat x}_{ij} = \floor{x_{ij}^\ast} \le \floor{y^\ast_i} = {\hat y}_i$.
(ii) From the definition, we have ${\dot r}_j = \sum_{i\in\mathbb{F}} {\dot x}_{ij}$.
It remains to show that ${\dot y}_i \geq {\dot x}_{ij}$ for all $i,j$.
If $x_{ij}^\ast=0$, then ${\dot x}_{ij}=0$ and we are done.
Otherwise, by completeness, we have $x_{ij}^\ast=y_i^\ast$.
Then ${\dot y}_i = y_i^\ast - \floor{y_i^\ast} = x_{ij}^\ast - \floor{x_{ij}^\ast} ={\dot x}_{ij}$.
(iii) From the definition of ${\dot x}_{ij}$ we have
${\dot x}_{ij} < 1$. Then the bound follows from the definition of ${\dot r}_j$.
\end{proof}
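The floor/remainder split above is easy to exercise on toy data. The following Python sketch (our illustration, not code from the paper; the instance values are made up) performs the split and checks the three parts of the lemma:

```python
# Sketch of the demand reduction: (x*, y*) = (x-hat, y-hat) + (x-dot, y-dot),
# where the hat-part takes floors and the dot-part keeps fractional remainders.
# The toy solution below is complete: x*_{ij} > 0 implies x*_{ij} = y*_i.
import math

def split_solution(x_star, y_star):
    """Split a fractional solution into its integral and fractional parts."""
    y_hat = {i: math.floor(v) for i, v in y_star.items()}
    y_dot = {i: v - y_hat[i] for i, v in y_star.items()}
    x_hat = {ij: math.floor(v) for ij, v in x_star.items()}
    x_dot = {ij: v - x_hat[ij] for ij, v in x_star.items()}
    return (x_hat, y_hat), (x_dot, y_dot)

y_star = {1: 2.25, 2: 1.0}                                # openings y*_i
x_star = {(1, 'a'): 2.25, (2, 'a'): 1.0, (1, 'b'): 2.25}  # connections x*_{ij}
(x_hat, y_hat), (x_dot, y_dot) = split_solution(x_star, y_star)

# Lemma (i)/(ii): both parts are feasible, i.e. x_{ij} <= y_i.
assert all(x_hat[i, j] <= y_hat[i] for (i, j) in x_star)
assert all(x_dot[i, j] <= y_dot[i] + 1e-9 for (i, j) in x_star)
# Lemma (iii): each reduced demand r-dot_j = sum_i x-dot_{ij} is < |F| = 2.
r_dot = {j: sum(v for (i, jj), v in x_dot.items() if jj == j)
         for j in ('a', 'b')}
assert all(v < len(y_star) for v in r_dot.values())
```

Completeness matters here: with a non-complete solution the dot-part could violate $\dot{x}_{ij}\le \dot{y}_i$, which is exactly the failure mode noted after the lemma.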
Notice that our construction relies on the completeness assumption; in fact, it is
easy to give an example where $(\dot{\boldsymbol{x}}, \dot{\boldsymbol{y}})$ would not be feasible if we
used a non-complete optimal solution $(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast)$.
Note also that the solutions $(\hat{\boldsymbol{x}},\hat{\boldsymbol{y}})$ and $(\dot{\boldsymbol{x}}, \dot{\boldsymbol{y}})$ are in fact
optimal for their corresponding instances, for if a better solution to ${\hat{\cal I}}$ or
${\dot{\cal I}}$ existed, combining it with the other part would
give a solution to $\mathcal{I}$ with a smaller objective value.
\begin{theorem}\label{thm: reduction to polynomial}
Suppose that there is a polynomial-time algorithm ${\cal A}$
that, for any instance of {\mbox{\rm FTFP}} with maximum demand
bounded by $|\mathbb{F}|$, computes an integral solution
that approximates the fractional optimum of this instance
within factor $\rho\geq 1$. Then there is a
$\rho$-approximation algorithm ${\cal A}'$ for {\mbox{\rm FTFP}}.
\end{theorem}
\begin{proof}
Given an {\mbox{\rm FTFP}} instance with arbitrary demands, Algorithm~${\cal A}'$ works
as follows: it solves the LP~(\ref{eqn:fac_primal}) to obtain a
fractional optimal solution $(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast)$, then it constructs
instances ${\hat{\cal I}}$ and ${\dot{\cal I}}$ described above, applies
algorithm~${\cal A}$ to ${\dot{\cal I}}$, and finally combines (by adding
the values) the integral solution $(\hat{\boldsymbol{x}}, \hat{\boldsymbol{y}})$ of
${\hat{\cal I}}$ and the integral solution of ${\dot{\cal I}}$ produced
by ${\cal A}$. This clearly produces a feasible integral
solution for the original instance $\mathcal{I}$.
The solution produced by ${\cal A}$ has cost at most
$\rho\cdot{\it cost}(\dot{\boldsymbol{x}},\dot{\boldsymbol{y}})$, because $(\dot{\boldsymbol{x}},\dot{\boldsymbol{y}})$
is feasible for ${\dot{\cal I}}$. Thus the cost of ${\cal A}'$ is at most
\begin{align*}
{\it cost}(\hat{\boldsymbol{x}}, \hat{\boldsymbol{y}}) + \rho\cdot{\it cost}(\dot{\boldsymbol{x}},\dot{\boldsymbol{y}})
\le
\rho({\it cost}(\hat{\boldsymbol{x}}, \hat{\boldsymbol{y}}) + {\it cost}(\dot{\boldsymbol{x}},\dot{\boldsymbol{y}}))
= \rho\cdot\mbox{\rm LP}^\ast \le \rho\cdot\mbox{\rm OPT},
\end{align*}
where the first inequality follows from $\rho\geq 1$. This completes
the proof.
\end{proof}
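As a quick numeric sanity check of the cost bound in the proof (with a hypothetical split of $\mbox{\rm LP}^\ast$ into the two parts; $\rho = 1.575$ is just an example value):

```python
# Illustration of cost(x-hat, y-hat) + rho * cost(x-dot, y-dot) <= rho * LP*,
# which holds for any rho >= 1 since LP* is the sum of the two parts.
rho = 1.575                      # example approximation factor, rho >= 1
cost_hat, cost_dot = 10.0, 4.0   # hypothetical costs of the two parts
lp_star = cost_hat + cost_dot    # the two parts add up to the LP optimum
assert cost_hat + rho * cost_dot <= rho * lp_star
```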
\section{Adaptive Partitioning}
\label{sec: adaptive partitioning}
In this section we develop our second technique, which we
call \emph{adaptive partitioning}. Given an FTFP instance
and an optimal fractional solution $(\boldsymbol{x}^\ast, \boldsymbol{y}^\ast)$
to LP~(\ref{eqn:fac_primal}), we split each client $j$ into
$r_j$ individual \emph{unit demand points} (or just
\emph{demands}), and we split each site $i$ into no more
than $|\mathbb{F}|+2R|\mathbb{C}|^2$ \emph{facility points} (or
\emph{facilities}), where $R=\max_{j\in\mathbb{C}}r_j$. We
denote the demand set by $\overline{\clientset}$ and the facility set
by $\overline{\sitesset}$, respectively. We will also partition
$(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast)$ into a fractional solution
$(\bar{\boldsymbol{x}},\bar{\boldsymbol{y}})$ for the split instance. We will
typically use symbols $\nu$ and $\mu$ to index demands and
facilities respectively, that is $\bar{\boldsymbol{x}} =
({\bar x}_{\mu\nu})$ and $\bar{\boldsymbol{y}} = ({\bar y}_{\mu})$. As before,
the \emph{neighborhood of a demand} $\nu$ is
${\overline{N}}(\nu)=\braced{\mu\in\overline{\sitesset} {\,:\,}
{\bar x}_{\mu\nu}>0}$. We will use notation $\nu\in j$ to
mean that $\nu$ is a demand of client $j$; similarly,
$\mu\in i$ means that facility $\mu$ is on site
$i$. Different demands of the same client (that is,
$\nu,\nu'\in j$) are called \emph{siblings}. Further, we
use the convention that $f_\mu = f_i$ for $\mu\in i$,
$\alpha_\nu^\ast = \alpha_j^\ast$ for $\nu\in j$ and
$d_{\mu\nu} = d_{\mu j} = d_{ij}$ for $\mu\in i$ and $\nu\in
j$. We define $C^{\avg}_{\nu}
=\sum_{\mu\in{\overline{N}}(\nu)}d_{\mu\nu}{\bar x}_{\mu\nu} =
\sum_{\mu\in\overline{\sitesset}}d_{\mu\nu}{\bar x}_{\mu\nu}$.
One can think of $C^{\avg}_{\nu}$ as the
average connection cost of demand $\nu$, if we chose a
connection to facility $\mu$ with probability
${\bar x}_{\mu\nu}$. In our partitioned fractional solution we
guarantee for every $\nu$ that $\sum_{\mu\in\overline{\sitesset}}
{\bar x}_{\mu\nu}=1$.
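Concretely, $C^{\avg}_{\nu}$ is just an expectation under the distribution $({\bar x}_{\mu\nu})_\mu$; a minimal sketch with made-up distances and connection values:

```python
# C^avg_nu = sum_mu d_{mu,nu} * xbar_{mu,nu}: the expected connection cost of
# demand nu, if nu connects to facility mu with probability xbar_{mu,nu}.
# Distances and connection values below are illustrative only.
d = {'m1': 2.0, 'm2': 5.0}        # d_{mu,nu} for a fixed demand nu
x_bar = {'m1': 0.75, 'm2': 0.25}  # xbar_{mu,nu}; sums to 1 by (PS.1)

C_avg = sum(d[mu] * x_bar[mu] for mu in d)
assert abs(sum(x_bar.values()) - 1.0) < 1e-9
assert C_avg == 2.0 * 0.75 + 5.0 * 0.25   # = 2.75
```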
Some demands in $\overline{\clientset}$ will be designated as
\emph{primary demands} and the set of primary demands will
be denoted by $P$. By definition we have $P\subseteq \overline{\clientset}$.
In addition, we will use the overlap
structure between demand neighborhoods to define a mapping
that assigns each demand $\nu\in\overline{\clientset}$ to some primary
demand $\kappa\in P$. As shown in the rounding algorithms in
later sections, for each primary demand we guarantee exactly
one open facility in its neighborhood, while for a
non-primary demand, there is constant probability that none
of its neighbors open. In this case we estimate its
connection cost by the distance to the facility opened in
its assigned primary demand's neighborhood. For this reason
the connection cost of a primary demand must be ``small''
compared to the non-primary demands assigned to it. We also
need sibling demands assigned to different primary demands to satisfy
the fault-tolerance requirement. Specifically, this
partitioning will be constructed to satisfy a number of
properties that are detailed below.
\begin{description}
\renewcommand{\labelenumii}{(\alph{enumii})}
\renewcommand{\theenumii}{(\alph{enumii})}
\item{(PS)} \emph{Partitioned solution}.
Vector $(\bar{\boldsymbol{x}},\bar{\boldsymbol{y}})$ is a partition of $(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast)$, with unit-value
demands, that is:
\begin{enumerate}
\item \label{PS:one}
$\sum_{\mu\in\overline{\sitesset}} {\bar x}_{\mu\nu} = 1$ for each demand $\nu\in\overline{\clientset}$.
\item \label{PS:xij} $\sum_{\mu\in i, \nu\in j} {\bar x}_{\mu\nu}
= x^\ast_{ij}$ for each site $i\in\mathbb{F}$ and client $j\in\mathbb{C}$.
\item \label{PS:yi}
$\sum_{\mu\in i} {\bar y}_{\mu} = y^\ast_i$ for each site $i\in\mathbb{F}$.
\end{enumerate}
\item{(CO)} \emph{Completeness.}
Solution $(\bar{\boldsymbol{x}},\bar{\boldsymbol{y}})$ is complete, that is ${\bar x}_{\mu\nu}\neq 0$ implies
${\bar x}_{\mu\nu} = {\bar y}_{\mu}$, for all $\mu\in\overline{\sitesset}, \nu\in\overline{\clientset}$.
\item{(PD)} \emph{Primary demands.}
Primary demands satisfy the following conditions:
\begin{enumerate}
\item\label{PD:disjoint} For any two different primary demands $\kappa,\kappa'\in P$ we have
${\overline{N}}(\kappa)\cap {\overline{N}}(\kappa') = \emptyset$.
\item \label{PD:yi} For each site $i\in\mathbb{F}$,
$ \sum_{\mu\in i}\sum_{\kappa\in P}{\bar x}_{\mu\kappa} \leq y_i^\ast$.
\item \label{PD:assign} Each demand $\nu\in\overline{\clientset}$ is assigned
to one primary demand $\kappa\in P$ such that
\begin{enumerate}
\item \label{PD:assign:overlap} ${\overline{N}}(\nu) \cap {\overline{N}}(\kappa) \neq \emptyset$, and
\item \label{PD:assign:cost} $C^{\avg}_{\nu}+\alpha_{\nu}^\ast \geq
C^{\avg}_{\kappa}+\alpha_{\kappa}^\ast$.
\end{enumerate}
\end{enumerate}
\item{(SI)} \emph{Siblings}. For any pair $\nu,\nu'$ of different siblings we have
\begin{enumerate}
\item \label{SI:siblings disjoint}
${\overline{N}}(\nu)\cap {\overline{N}}(\nu') = \emptyset$.
\item \label{SI:primary disjoint} If $\nu$ is assigned to a primary demand $\kappa$ then
${\overline{N}}(\nu')\cap {\overline{N}}(\kappa) = \emptyset$. In particular, by Property~(PD.\ref{PD:assign:overlap}),
this implies that different sibling demands are assigned to different primary demands.
\end{enumerate}
\end{description}
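Properties (PS) and (CO) are mechanical enough to be checked by a script. Below is a hedged Python sketch of such a checker, run on a hand-made split of a tiny instance (all names and values are ours, chosen only to satisfy the properties):

```python
# Checker for properties (PS.1)-(PS.3) and (CO) of a partitioned solution.
# site_of[mu] / client_of[nu] record the site/client each facility/demand
# was split from; these bookkeeping maps are our own, not the paper's.

def check_partition(x_bar, y_bar, x_star, y_star, site_of, client_of):
    facilities, demands = set(y_bar), set(client_of)
    for nu in demands:                 # (PS.1): unit total connection value
        total = sum(x_bar.get((mu, nu), 0.0) for mu in facilities)
        assert abs(total - 1.0) < 1e-9
    for (i, j), v in x_star.items():   # (PS.2): connections aggregate to x*
        s = sum(x_bar.get((mu, nu), 0.0)
                for mu in facilities if site_of[mu] == i
                for nu in demands if client_of[nu] == j)
        assert abs(s - v) < 1e-9
    for i, v in y_star.items():        # (PS.3): openings aggregate to y*
        assert abs(sum(y_bar[mu] for mu in facilities
                       if site_of[mu] == i) - v) < 1e-9
    for (mu, nu), v in x_bar.items():  # (CO): nonzero value equals opening
        assert v == 0.0 or abs(v - y_bar[mu]) < 1e-9

# Site 1 (y* = 1.5) split into mu1, mu2; site 2 (y* = 0.5) keeps mu3.
# Client 'a' (r_a = 2) split into demands nu1, nu2.
site_of = {'mu1': 1, 'mu2': 1, 'mu3': 2}
client_of = {'nu1': 'a', 'nu2': 'a'}
y_star, x_star = {1: 1.5, 2: 0.5}, {(1, 'a'): 1.5, (2, 'a'): 0.5}
y_bar = {'mu1': 1.0, 'mu2': 0.5, 'mu3': 0.5}
x_bar = {('mu1', 'nu1'): 1.0, ('mu2', 'nu2'): 0.5, ('mu3', 'nu2'): 0.5}
check_partition(x_bar, y_bar, x_star, y_star, site_of, client_of)
```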
As we shall demonstrate in later sections, these properties allow us
to extend known UFL rounding algorithms to obtain an integral solution
to our FTFP problem with a matching approximation ratio. Our
partitioning is ``adaptive'' in the sense that it is constructed one
demand at a time, and the connection values for the demands of a
client depend on the choice of earlier demands, of this or other
clients, and their connection values. We would like to point out that
the adaptive partitioning process for the $1.575$-approximation
algorithm (Section~\ref{sec: 1.575-approximation}) is more subtle than that for
the $3$-approximation (Section~\ref{sec: 3-approximation}) and the
$1.736$-approximation algorithms (Section~\ref{sec:
1.736-approximation}), due to the introduction of close and far
neighborhoods.
\paragraph{Implementation of Adaptive Partitioning.}
We now describe an algorithm for partitioning the instance
and the fractional solution so that the properties (PS),
(CO), (PD), and (SI) are satisfied. Recall that
$\overline{\sitesset}$ and $\overline{\clientset}$, respectively, denote the
sets of facilities and demands that will be created in this
stage, and $(\bar{\boldsymbol{x}},\bar{\boldsymbol{y}})$ is the partitioned solution
to be computed.
The adaptive partitioning algorithm consists of two phases:
Phase 1 is called the partitioning phase and Phase 2 is called
the augmenting phase. Phase 1 is done in iterations, where
in each iteration we find the ``best'' client $j$ and create a
new demand $\nu$ out of it. This demand either becomes a
primary demand itself, or it is assigned to some existing
primary demand. We call a client $j$ \emph{exhausted} when
all its $r_j$ demands have been created and assigned to some
primary demands. Phase 1 completes when all clients are
exhausted. In Phase 2 we ensure that every demand $\nu$ has
total connection value $\sum_{\mu} {\bar x}_{\mu\nu}$ equal to $1$, that is, that condition (PS.\ref{PS:one}) holds.
For each site $i$ we will initially create one ``big'' facility $\mu$
with initial value ${\bar y}_\mu = y^\ast_i$. While we partition the
instance, creating new demands and connections, this facility may end
up being split into more facilities to preserve completeness of the
fractional solution. Also, we will gradually decrease the fractional
connection vector for each client $j$, to account for the demands
already created for $j$ and their connection values. These decreased
connection values will be stored in an auxiliary vector
$\widetilde{\boldsymbol{x}}$. The intuition is that $\widetilde{\boldsymbol{x}}$ represents the part of
$\boldsymbol{x}^\ast$ that still has not been allocated to existing demands and
future demands can use $\widetilde{\boldsymbol{x}}$ for their connections. For
technical reasons, $\widetilde{\boldsymbol{x}}$ will be indexed by facilities (rather
than sites) and clients, that is $\widetilde{\boldsymbol{x}} = ({\widetilde x}_{\mu j})$. At
the beginning, we set ${\widetilde x}_{\mu j}{\,\leftarrow\,} x_{ij}^\ast$ for each
$j\in\mathbb{C}$, where $\mu\in i$ is the single facility created
initially at site $i$. At each step, whenever we create a new demand
$\nu$ for a client $j$, we will define its values ${\bar x}_{\mu\nu}$ and
appropriately reduce the values ${\widetilde x}_{\mu j}$, for all facilities
$\mu$. We will deal with two types of neighborhoods, with respect to
$\widetilde{\boldsymbol{x}}$ and $\bar{\boldsymbol{x}}$, that is ${\widetilde N}(j)=\{\mu\in\overline{\sitesset}
{\,:\,}{\widetilde x}_{\mu j} > 0\}$ for $j\in\mathbb{C}$ and
${\overline{N}}(\nu)=\{\mu\in\overline{\sitesset} {\,:\,} {\bar x}_{\mu\nu} >0\}$ for
$\nu\in\overline{\clientset}$. During this process we preserve the completeness
(CO) of the fractional solutions $\widetilde{\boldsymbol{x}}$ and $\bar{\boldsymbol{x}}$. More
precisely, the following properties will hold for every facility $\mu$
after every iteration:
\begin{description}
\item{(c1)} For each demand $\nu$ either ${\bar x}_{\mu\nu}=0$ or
${\bar x}_{\mu\nu}={\bar y}_{\mu}$. This is the same
condition as condition (CO), yet we repeat it here as
(c1) needs to hold after every iteration, while
condition (CO) only applies to the final partitioned
fractional solution $(\bar{\boldsymbol{x}}, \bar{\boldsymbol{y}})$.
\item{(c2)} For each client $j$,
either ${\widetilde x}_{\mu j}=0$ or ${\widetilde x}_{\mu j}={\bar y}_{\mu}$.
\end{description}
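The splitting step that maintains (c1) and (c2) can be sketched as follows (a simplified Python illustration with hypothetical names; the real procedures additionally decide which demands keep which copy):

```python
# Split facility mu into mu (opening t) and a fresh copy sigma (opening
# ybar_mu - t); every nonzero connection value on mu, in both x-bar and
# x-tilde, is rewritten to the two new openings, so that after the split
# each nonzero value again equals its facility's opening ((c1) and (c2)).

def split_facility(mu, t, y_bar, x_bar, x_tilde):
    sigma = (mu, 'copy')                      # fresh facility, same site
    y_bar[sigma], y_bar[mu] = y_bar[mu] - t, t
    for m, nu in [k for k, v in x_bar.items() if k[0] == mu and v > 0]:
        x_bar[m, nu], x_bar[sigma, nu] = y_bar[mu], y_bar[sigma]
    for m, j in [k for k, v in x_tilde.items() if k[0] == mu and v > 0]:
        x_tilde[m, j], x_tilde[sigma, j] = y_bar[mu], y_bar[sigma]
    return sigma

y_bar = {'mu': 0.75}
x_bar = {('mu', 'nu'): 0.75}     # demand nu fully connected to mu
x_tilde = {('mu', 'j'): 0.75}    # client j's unallocated connection value
sigma = split_facility('mu', 0.25, y_bar, x_bar, x_tilde)
assert x_bar['mu', 'nu'] == y_bar['mu'] == 0.25       # (c1) preserved
assert x_bar[sigma, 'nu'] == y_bar[sigma] == 0.5
assert x_tilde['mu', 'j'] == 0.25 and x_tilde[sigma, 'j'] == 0.5  # (c2)
```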
A full description of the algorithm is given in
Pseudocode~\ref{alg:lpr2}. Initially, the set $U$ of
non-exhausted clients contains all clients, the set
$\overline{\clientset}$ of demands is empty, the set $\overline{\sitesset}$ of
facilities consists of one facility $\mu$ on each site $i$
with ${\bar y}_\mu = y^\ast_i$, and the set $P$ of primary
demands is empty (Lines 1--4). In one iteration of the
while loop (Lines 5--8), for each client $j$ we
compute a quantity called $\mbox{\rm{tcc}}(j)$ (tentative connection
cost), that represents the average distance from $j$ to the
set ${\widetilde N}_1(j)$ of the nearest facilities $\mu$ whose
total connection value to $j$ (the sum of ${\widetilde x}_{\mu
j}$'s) equals $1$. This set is computed by Procedure
${\textsc{NearestUnitChunk}}()$ (see Pseudocode~\ref{alg:helper},
Lines~1--9), which adds facilities to ${\widetilde N}_1(j)$ in
order of nondecreasing distance, until the total connection
value is exactly $1$. (The procedure actually uses the
${\bar y}_\mu$ values, which are equal to the connection values,
by the completeness condition (c2).) This may require splitting the last added
facility and adjusting the connection values so that
conditions (c1) and (c2) are preserved.
\begin{algorithm}[ht]
\caption{Algorithm: Adaptive Partitioning}
\label{alg:lpr2}
\begin{algorithmic}[1]
\Require $\mathbb{F}$, $\mathbb{C}$, $(\boldsymbol{x}^\ast,\boldsymbol{y}^\ast)$
\Ensure $\overline{\sitesset}$, $\overline{\clientset}$, $(\bar{\boldsymbol{x}}, \bar{\boldsymbol{y}})$
\Comment Unspecified ${\bar x}_{\mu \nu}$'s and ${\widetilde x}_{\mu j}$'s are assumed to be $0$
\State $\widetilde{\boldsymbol{r}} {\,\leftarrow\,} \boldsymbol{r}, U{\,\leftarrow\,} \mathbb{C}, \overline{\sitesset}{\,\leftarrow\,} \emptyset,
\overline{\clientset}{\,\leftarrow\,} \emptyset, P{\,\leftarrow\,} \emptyset$
\Comment{Phase 1}
\For{each site $i\in\mathbb{F}$}
\State create a facility $\mu$ at $i$ and add $\mu$ to $\overline{\sitesset}$
\State ${\bar y}_\mu {\,\leftarrow\,} y_i^\ast$ and ${\widetilde x}_{\mu j}{\,\leftarrow\,}
x_{ij}^\ast$ for each $j\in\mathbb{C}$
\EndFor
\While{$U\neq \emptyset$}
\For{each $j\in U$}
\State ${\widetilde N}_1(j) {\,\leftarrow\,} {{\textsc{NearestUnitChunk}}}(j, \overline{\sitesset}, \widetilde{\boldsymbol{x}}, \bar{\boldsymbol{x}}, \bar{\boldsymbol{y}})$ \Comment see Pseudocode~\ref{alg:helper}
\State $\mbox{\rm{tcc}}(j){\,\leftarrow\,} \sum_{\mu\in {\widetilde N}_1(j)} d_{{\mu}j}\cdot {\widetilde x}_{\mu j}$
\EndFor
\State $p {\,\leftarrow\,} {\argmin}_{j\in U}\{ \mbox{\rm{tcc}}(j)+\alpha_j^\ast \}$
\State create a new demand $\nu$ for client $p$
\If{${\widetilde N}_1 (p)\cap {\overline{N}}(\kappa) \neq \emptyset$
for some primary demand $\kappa\in P$}
\State assign $\nu$ to $\kappa$
\State ${\bar x}_{\mu \nu}{\,\leftarrow\,} {\widetilde x}_{\mu p}$ and ${\widetilde x}_{\mu p}{\,\leftarrow\,} 0$ for each $\mu \in {\widetilde N}(p) \cap {\overline{N}}(\kappa)$
\Else
\State make $\nu$ primary, $P {\,\leftarrow\,} P \cup \{\nu\}$, assign $\nu$ to itself
\State set ${\bar x}_{\mu\nu} {\,\leftarrow\,} {\widetilde x}_{\mu p}$ and ${\widetilde x}_{\mu p}{\,\leftarrow\,} 0$ for each $\mu\in {\widetilde N}_1(p)$
\EndIf
\State $\overline{\clientset}{\,\leftarrow\,} \overline{\clientset}\cup \{\nu\},
{\widetilde r}_p {\,\leftarrow\,} {\widetilde r}_p -1$
\State \textbf{if} {${\widetilde r}_p=0$} \textbf{then} $U{\,\leftarrow\,} U \setminus \{p\}$
\EndWhile
\For{each client $j\in\mathbb{C}$} \Comment{Phase 2}
\For{each demand $\nu\in j$} \Comment{each client $j$ has $r_j$ demands}
\State \textbf{if} $\sum_{\mu\in {\overline{N}}(\nu)}{\bar x}_{\mu\nu}<1$
\textbf{then} ${\textsc{AugmentToUnit}}(\nu, j, \overline{\sitesset}, \widetilde{\boldsymbol{x}}, \bar{\boldsymbol{x}}, \bar{\boldsymbol{y}})$ \Comment see Pseudocode~\ref{alg:helper}
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[ht]
\caption{Helper functions used in Pseudocode~\ref{alg:lpr2}}
\label{alg:helper}
\begin{algorithmic}[1]
\Function{{\textsc{NearestUnitChunk}}}{$j, \overline{\sitesset}, \widetilde{\boldsymbol{x}}, \bar{\boldsymbol{x}},\bar{\boldsymbol{y}}$}
\Comment upon return, $\sum_{\mu\in{\widetilde N}_1(j)} {\widetilde x}_{\mu j} = 1$
\State Let ${\widetilde N}(j) = \{\mu_1,\ldots,\mu_{q}\}$ where $d_{\mu_1 j} \leq d_{\mu_2 j} \leq \ldots \leq d_{\mu_q j}$
\State Let $l$ be such that $\sum_{k=1}^{l} {\bar y}_{\mu_k} \geq 1$ and $\sum_{k=1}^{l -1} {\bar y}_{\mu_{k}} < 1$
\State Create a new facility $\sigma$ at the same site as $\mu_l$ and add it to $\overline{\sitesset}$
\Comment split $\mu_l$
\State Set ${\bar y}_{\sigma}{\,\leftarrow\,} \sum_{k=1}^{l} {\bar y}_{\mu_{k}}-1$
and ${\bar y}_{\mu_l} {\,\leftarrow\,} {\bar y}_{\mu_l} - {\bar y}_{\sigma}$
\State For each $\nu\in\overline{\clientset}$ with ${\bar x}_{\mu_{l}\nu}>0$
set ${\bar x}_{\mu_{l}\nu} {\,\leftarrow\,} {\bar y}_{\mu_l}$ and ${\bar x}_{\sigma \nu} {\,\leftarrow\,} {\bar y}_{\sigma}$
\State For each $j'\in\mathbb{C}$ with ${\widetilde x}_{\mu_{l} j'}>0$ (including $j$)
set ${\widetilde x}_{\mu_l j'} {\,\leftarrow\,} {\bar y}_{\mu_l}$ and ${\widetilde x}_{\sigma j'} {\,\leftarrow\,} {\bar y}_\sigma$
\State (All other new connection values are set to $0$)
\State \Return ${\widetilde N}_1(j) = \{\mu_{1},\ldots,\mu_{l-1}, \mu_{l}\}$
\EndFunction
\Function{{\textsc{AugmentToUnit}}}{$\nu, j, \overline{\sitesset}, \widetilde{\boldsymbol{x}}, \bar{\boldsymbol{x}}, \bar{\boldsymbol{y}}$}
\Comment $\nu$ is a demand of client $j$
\While{$\sum_{\mu\in \overline{\sitesset}} {\bar x}_{\mu\nu} <1$}
\Comment upon return, $\sum_{\mu\in{\overline{N}}(\nu)} {\bar x}_{\mu\nu} = 1$
\State Let $\eta$ be any facility such that ${\widetilde x}_{\eta j} > 0$
\If{$1-\sum_{\mu\in \overline{\sitesset}} {\bar x}_{\mu\nu} \geq {\widetilde x}_{\eta j}$}
\State ${\bar x}_{\eta\nu} {\,\leftarrow\,} {\widetilde x}_{\eta j}, {\widetilde x}_{\eta j} {\,\leftarrow\,} 0$
\Else
\State Create a new facility $\sigma$ at the same site as $\eta$ and add it to $\overline{\sitesset}$
\Comment split $\eta$
\State Let ${\bar y}_\sigma {\,\leftarrow\,} 1-\sum_{\mu\in \overline{\sitesset}} {\bar x}_{\mu\nu}, {\bar y}_{\eta} {\,\leftarrow\,} {\bar y}_{\eta} - {\bar y}_{\sigma}$
\State Set ${\bar x}_{\sigma\nu}{\,\leftarrow\,} {\bar y}_{\sigma},\; {\bar x}_{\eta \nu} {\,\leftarrow\,} 0,\; {\widetilde x}_{\eta j} {\,\leftarrow\,} {\bar y}_{\eta}, \; {\widetilde x}_{\sigma j} {\,\leftarrow\,} 0$
\State For each $\nu' \neq \nu$ with ${\bar x}_{\eta \nu'}>0$, set ${\bar x}_{\eta \nu'} {\,\leftarrow\,} {\bar y}_{\eta},\; {\bar x}_{\sigma \nu'} {\,\leftarrow\,} {\bar y}_{\sigma}$
\State For each $j' \neq j$ with ${\widetilde x}_{\eta j'}>0$, set ${\widetilde x}_{\eta j'} {\,\leftarrow\,} {\bar y}_{\eta}, {\widetilde x}_{\sigma j'} {\,\leftarrow\,} {\bar y}_{\sigma}$
\State (All other new connection values are set to $0$)
\EndIf
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
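The core selection logic of ${\textsc{NearestUnitChunk}}()$, taking facilities in order of distance until their openings sum to exactly $1$ and splitting the last one, can be sketched in Python as follows (an illustration only; the bookkeeping that rewrites $\bar{\boldsymbol{x}}$ and $\widetilde{\boldsymbol{x}}$ for other demands and clients is omitted):

```python
# Greedy selection of a "unit chunk" of nearest facilities for a client:
# scan facilities by nondecreasing distance, splitting the last facility
# so the selected openings sum to exactly 1. Names are illustrative.

def nearest_unit_chunk(facilities, dist, y_bar):
    chunk, total = [], 0.0
    for mu in sorted(facilities, key=lambda m: dist[m]):
        if total >= 1.0:
            break
        take = min(y_bar[mu], 1.0 - total)
        if take < y_bar[mu]:                 # split mu; leftover copy
            y_bar[mu + '_rest'] = y_bar[mu] - take
            y_bar[mu] = take
        chunk.append(mu)
        total += take
    return chunk

y_bar = {'a': 0.5, 'b': 0.75, 'c': 1.0}      # openings ybar_mu
dist = {'a': 1.0, 'b': 2.0, 'c': 3.0}        # distances to the client
chunk = nearest_unit_chunk(['a', 'b', 'c'], dist, y_bar)
assert chunk == ['a', 'b']                   # 'b' was split: 0.75 -> 0.5 + 0.25
assert y_bar['b'] == 0.5 and y_bar['b_rest'] == 0.25
assert sum(y_bar[m] for m in chunk) == 1.0   # chunk opening is exactly 1
```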
The next step is to pick a client $p$ with minimum
$\mbox{\rm{tcc}}(p)+\alpha_p^\ast$ and create a demand $\nu$ for $p$
(Lines~9--10). If ${\widetilde N}_1(p)$ overlaps the neighborhood
of some existing primary demand $\kappa$ (if there are
multiple such $\kappa$'s, pick any of them), we assign $\nu$
to $\kappa$, and $\nu$ acquires all the connection values
${\widetilde x}_{\mu p}$ between client $p$ and facility $\mu$ in
${\widetilde N}(p)\cap {\overline{N}}(\kappa)$ (Lines~11--13). Note that
although we check for overlap with ${\widetilde N}_1(p)$, we then
move all facilities in the intersection with ${\widetilde N}(p)$,
a bigger set, into ${\overline{N}}(\nu)$. The other case is when
${\widetilde N}_1(p)$ is disjoint from the neighborhoods of all
existing primary demands. Then, in Lines~15--16, $\nu$
becomes itself a primary demand and we assign $\nu$ to
itself. It also inherits the connection values to all
facilities $\mu\in{\widetilde N}_1(p)$ from $p$ (recall that
${\widetilde x}_{\mu p} = {\bar y}_{\mu}$), with all other
${\bar x}_{\mu\nu}$ values set to $0$.
At this point all primary demands satisfy
Property~(PS.\ref{PS:one}), but this may not be true for
non-primary demands. For those demands we still may need to
adjust the ${\bar x}_{\mu\nu}$ values so that the total
connection value for $\nu$, that is $\text{connsum}(\nu) \stackrel{\mathrm{def}}{=}
\sum_{\mu\in\overline{\sitesset}}{\bar x}_{\mu \nu}$, is equal to $1$. This
is accomplished by Procedure ${\textsc{AugmentToUnit}}()$ (definition
in Pseudocode~\ref{alg:helper}, Lines~10--21) that allocates
to $\nu\in j$ some of the remaining connection values
${\widetilde x}_{\mu j}$ of client $j$ (Lines 19--21).
${\textsc{AugmentToUnit}}()$ will repeatedly pick any facility $\eta$ with
${\widetilde x}_{\eta j} >0$. If ${\widetilde x}_{\eta j} \leq
1-\text{connsum}(\nu)$, then the connection value ${\widetilde x}_{\eta
j}$ is reassigned to $\nu$.
Otherwise, ${\widetilde x}_{\eta j} >
1-\text{connsum}(\nu)$, in which case we split $\eta$ so that
connecting $\nu$ to one of the created copies of $\eta$ makes
$\text{connsum}(\nu)$ equal to $1$, and we are done.
Notice that we start with $|\mathbb{F}|$ facilities and in
each iteration of the while loop in Line~5 (Pseudocode~\ref{alg:lpr2}) each client causes at most one split.
We have a total of no more than $R|\mathbb{C}|$ iterations as in
each iteration we create one demand. (Recall that $R =
\max_jr_j$.) In Phase 2 we do an augment step for each
demand $\nu$ and this creates no more than $R|\mathbb{C}|$
new facilities. So the total number of facilities we
created will be at most $|\mathbb{F}|+ R|\mathbb{C}|^2 +
R|\mathbb{C}| \leq |\mathbb{F}| + 2R|\mathbb{C}|^2$, which is
polynomial in $|\mathbb{F}|+|\mathbb{C}|$ due to our earlier
bound on $R$.
\paragraph{Example.}
We now illustrate our partitioning algorithm with an example, where the FTFP instance
has four sites and four clients. The demands are $r_1=1$ and $r_2=r_3=r_4=2$.
The facility costs are $f_i = 1$ for all $i$. The distances are defined as follows:
$d_{ii} = 3$ for $i=1,2,3,4$ and $d_{ij} = 1$ for all $i\neq j$.
Solving the LP(\ref{eqn:fac_primal}), we obtain the fractional solution given in
Table~\ref{tbl:example_opt}.
{
\small
\begin{table}[ht]
\setlength{\extrarowheight}{4pt}
\begin{subtable}{0.2\textwidth}
\centering
\begin{tabular}{c | c c c c | c }
$x_{ij}^\ast$ & $1$ & $2$ & $3$ & $4$ & $y_{i}^\ast$\\
\hline
$1$ & 0 & $\fourthirds$ & $\fourthirds$ & $\fourthirds$ & $\fourthirds$ \\
$2$ & $\onethird$ & 0 & $\onethird$ & $\onethird$ & $\onethird$ \\
$3$ & $\onethird$ & $\onethird$ & 0 & $\onethird$ & $\onethird$ \\
$4$ & $\onethird$ & $\onethird$ & $\onethird$ & 0 & $\onethird$ \\
\end{tabular}
\subcaption{}
\label{tbl:example_opt}
\end{subtable}
\hspace{0.8in}
\begin{subtable}{0.4\textwidth}
\centering
\begin{tabular}{c | c c c c c c c | c}
${\bar x}_{\mu\nu}$ & $1'$ & $2'$ & $2''$ & $3'$ & $3''$ & $4'$ & $4''$ & ${\bar y}_{\mu}$ \\
\hline
$\dot{1}$ & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1\\
$\ddot{1}$ & 0 & 0 & $\onethird$ & 0 & $\onethird$ & 0 & $\onethird$ & $\onethird$ \\
$\dot{2}$ & $\onethird$ & 0 & 0 & 0 & $\onethird$ & 0 & $\onethird$ & $\onethird$ \\
$\dot{3}$ & $\onethird$ & 0 & $\onethird$ & 0 & 0 & 0 & $\onethird$ & $\onethird$ \\
$\dot{4}$ & $\onethird$ & 0 & $\onethird$ & 0 & $\onethird$ & 0 & 0 & $\onethird$ \\
\end{tabular}
\subcaption{}
\label{tbl:example_part}
\end{subtable}
{\ }
\caption{
An example of an execution of the partitioning algorithm.
(a) An optimal fractional solution $x^\ast,y^\ast$.
(b) The partitioned solution. $j'$ and $j''$ denote the first and second demand of a client $j$,
and $\dot{\imath}$ and $\ddot{\imath}$ denote the first and second facility at site $i$.}
\end{table}
}
It is easily seen that the fractional solution in
Table~\ref{tbl:example_opt} is optimal and complete ($x_{ij}^\ast > 0$
implies $x_{ij}^\ast = y_i^\ast$). The dual optimal solution has all
$\alpha_j^\ast = 4/3$ for $j=1,2,3,4$.
Now we perform Phase 1, the adaptive partitioning, following the
description in Pseudocode~\ref{alg:lpr2}. To streamline the
presentation, we assume that all ties are broken in favor of
lower-numbered clients, demands or facilities. First we create one
facility at each of the four sites, denoted as $\dot{1}$, $\dot{2}$,
$\dot{3}$ and $\dot{4}$ (Lines~2--4, Pseudocode~\ref{alg:lpr2}). We
then execute the ``while'' loop in Line~5 of
Pseudocode~\ref{alg:lpr2}. This loop will have seven iterations.
Consider the first iteration. In Lines 7--8 we compute $\mbox{\rm{tcc}}(j)$ for
each client $j=1,2,3,4$ in $U$. When computing ${\widetilde N}_1(2)$,
facility $\dot{1}$ will get split into $\dot{1}$ and $\ddot{1}$ with
${\bar y}_{\dot{1}}=1$ and ${\bar y}_{\ddot{1}} = 1/3$. (This will happen in
Lines~4--7 of Pseudocode~\ref{alg:helper}.) Then, in Line~9 we will
pick client $p=1$ and create a demand denoted as $1'$ (see
Table~\ref{tbl:example_part}). Since there are no primary demands yet,
we make $1'$ a primary demand with ${\overline{N}}(1') = {\widetilde N}_1(1) =
\{\dot{2}, \dot{3}, \dot{4}\}$. Notice that client $1$ is exhausted
after this iteration and $U$ becomes $\{2,3,4\}$.
In the second iteration we compute $\mbox{\rm{tcc}}(j)$ for $j=2,3,4$ and pick
client $p=2$, from which we create a new demand $2'$. We have
${\widetilde N}_1(2) = \{\dot{1}\}$, which is disjoint from ${\overline{N}}(1')$. So
we create a demand $2'$ and make it primary, and set ${\overline{N}}(2') =
\{\dot{1}\}$. In the third iteration we compute $\mbox{\rm{tcc}}(j)$ for
$j=2,3,4$ and again we pick client $p=2$. Since ${\widetilde N}_1(2) =
\{\ddot{1}, \dot{3}, \dot{4}\}$ overlaps with ${\overline{N}}(1')$, we create
a demand $2''$ and assign it to $1'$. We also set ${\overline{N}}(2'') =
{\overline{N}}(1') \cap {\widetilde N}(2) = \{\dot{3}, \dot{4}\}$. After this
iteration client $2$ is exhausted and we have $U = \{3,4\}$.
In the fourth iteration we compute $\mbox{\rm{tcc}}(j)$ for client $j=3,4$. We
pick $p=3$ and create demand $3'$. Since ${\widetilde N}_1(3) = \{\dot{1}\}$
overlaps ${\overline{N}}(2')$, we assign $3'$ to $2'$ and set
${\overline{N}}(3') = \{\dot{1}\}$. In the fifth iteration we compute
$\mbox{\rm{tcc}}(j)$ for client $j=3,4$ and pick $p=3$ again. At this time
${\widetilde N}_1(3) = \{\ddot{1},\dot{2},\dot{4}\}$, which overlaps with
${\overline{N}}(1')$. So we create a demand $3''$ and assign it to $1'$, as
well as set ${\overline{N}}(3'') = \{\dot{2}, \dot{4}\}$.
In the last two iterations we will pick client $p=4$ twice and
create demands $4'$ and $4''$. For $4'$ we have ${\widetilde N}_1(4) =
\{\dot{1}\}$ so we assign $4'$ to $2'$ and set ${\overline{N}}(4') =
\{\dot{1}\}$. For $4''$ we have ${\widetilde N}_1(4) = \{\ddot{1}, \dot{2},
\dot{3}\}$ and we assign it to $1'$, as well as set ${\overline{N}}(4'') =
\{\dot{2}, \dot{3}\}$.
Now that all clients are exhausted we perform Phase 2, the augmenting
phase, to construct a fractional solution in which all demands have
total connection value equal to $1$. We iterate through each of the
seven demands created, that is $1',2',2'',3',3'',4',4''$. $1'$ and $2'$
already have neighborhoods with total connection value of $1$, so
nothing will change in the first two iterations.
$2''$ has $\dot{3},\dot{4}$ in its neighborhood, with total connection value of
$2/3$, and ${\widetilde N}(2) = \{\ddot{1}\}$ at this time, so we add
$\ddot{1}$ into ${\overline{N}}(2'')$ to make ${\overline{N}}(2'') = \{\ddot{1},
\dot{3}, \dot{4}\}$ and now $2''$ has total connection value of
$1$. Similarly, $3''$ and $4''$ each get $\ddot{1}$ added to their
neighborhood and end up with total connection value of $1$. The other
two demands, namely $3'$ and $4'$, each have $\dot{1}$ in their
neighborhoods, so each of them already has total connection value
equal to $1$. This completes Phase 2.
The final partitioned fractional solution is given in
Table~\ref{tbl:example_part}. We have created a total of five
facilities $\dot{1}, \ddot{1}, \dot{2}, \dot{3}, \dot{4}$, and seven
demands, $1',2',2'',3',3'',4',4''$. It can be verified that all the
stated properties are satisfied.
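The claimed properties can be spot-checked numerically from Table~\ref{tbl:example_part}. A small verification script (ours, with the table entries hard-coded as floats):

```python
# Verify (PS.1), (PS.3) and (CO) for the partitioned solution in Table (b).
third = 1.0 / 3.0
# xbar rows, indexed by facility; columns are demands 1',2',2'',3',3'',4',4''.
x_rows = {
    '1dot':  ([0, 1, 0, 1, 0, 1, 0], 1.0),              # (row, ybar)
    '1ddot': ([0, 0, third, 0, third, 0, third], third),
    '2dot':  ([third, 0, 0, 0, third, 0, third], third),
    '3dot':  ([third, 0, third, 0, 0, 0, third], third),
    '4dot':  ([third, 0, third, 0, third, 0, 0], third),
}
# (PS.1): each demand's column sums to 1.
for k in range(7):
    assert abs(sum(row[k] for row, _ in x_rows.values()) - 1.0) < 1e-9
# (CO): every nonzero entry equals its facility's opening ybar.
for row, y in x_rows.values():
    assert all(v == 0 or abs(v - y) < 1e-9 for v in row)
# (PS.3): the two facilities on site 1 add up to y*_1 = 4/3.
assert abs(x_rows['1dot'][1] + x_rows['1ddot'][1] - 4 / 3) < 1e-9
```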
\emparagraph{Correctness.} We now show that all the
required properties (PS), (CO), (PD) and (SI) are satisfied
by the above construction.
Properties~(PS) and (CO) follow directly from the
algorithm. (CO) is implied by the completeness condition
(c1) that the algorithm maintains after each
iteration. Condition~(PS.\ref{PS:one}) is a result of
calling Procedure~${\textsc{AugmentToUnit}}()$ in Line~21. To see that
(PS.\ref{PS:xij}) holds, note that
at each step the algorithm maintains the
invariant that, for every $i\in\mathbb{F}$ and
$j\in\mathbb{C}$, we have $\sum_{\mu\in i}\sum_{\nu \in j}
{\bar x}_{\mu \nu} + \sum_{\mu\in i} {\widetilde x}_{\mu j} =
x_{ij}^\ast$. In the end, we will create $r_j$ demands for
each client $j$, with each demand $\nu\in j$ satisfying
(PS.\ref{PS:one}), and thus $\sum_{\nu\in
j}\sum_{\mu\in\overline{\sitesset}}{\bar x}_{\mu\nu}=r_j$. This
implies that ${\widetilde x}_{\mu j}=0$ for every facility
$\mu\in\overline{\sitesset}$, and (PS.\ref{PS:xij}) follows.
(PS.\ref{PS:yi}) holds because every time we split a
facility $\mu$ into $\mu'$ and $\mu''$, the sum of
${\bar y}_{\mu'}$ and ${\bar y}_{\mu''}$ is equal to the old value of
${\bar y}_{\mu}$.
Now we deal with properties in group (PD). First,
(PD.\ref{PD:disjoint}) follows directly from the algorithm,
Pseudocode~\ref{alg:lpr2} (Lines 14--16), since every
primary demand has its neighborhood fixed when created, and
that neighborhood is disjoint from those of the existing primary
demands.
Property (PD.\ref{PD:yi}) follows from (PD.\ref{PD:disjoint}), (CO) and
(PS.\ref{PS:yi}). In more detail, it can be justified as
follows. By (PD.\ref{PD:disjoint}), for each $\mu\in i$ there
is at most one $\kappa\in P$ with ${\bar x}_{\mu\kappa} > 0$
and we have ${\bar x}_{\mu\kappa} = {\bar y}_{\mu}$ due to (CO).
Let $K\subseteq i$ be the set of those $\mu$'s for which
such $\kappa\in P$ exists, and denote this $\kappa$ by
$\kappa_\mu$. Then, using conditions (CO) and
(PS.\ref{PS:yi}), we have $ \sum_{\mu\in i}\sum_{\kappa\in
P}{\bar x}_{\mu\kappa} = \sum_{\mu\in K}{\bar x}_{\mu\kappa_\mu}
= \sum_{\mu\in K}{\bar y}_{\mu} \leq \sum_{\mu\in i}
{\bar y}_{\mu} = y_i^\ast$.
Property (PD.\ref{PD:assign:overlap}) follows from the way the algorithm
assigns primary demands. When demand $\nu$ of
client $p$ is assigned to a primary demand $\kappa$ in
Lines~11--13 of Pseudocode~\ref{alg:lpr2}, we move all
facilities in ${\widetilde N}(p)\cap {\overline{N}}(\kappa)$ (the
intersection is nonempty) into ${\overline{N}}(\nu)$, and we never
remove a facility from ${\overline{N}}(\nu)$. We postpone the proof
for (PD.\ref{PD:assign:cost}) to Lemma~\ref{lem: PD:assign:cost holds}.
Finally we argue that the properties in group (SI)
hold. (SI.\ref{SI:siblings disjoint}) is easy, since for any client
$j$, each facility $\mu$ is added to the neighborhood of at most one
demand $\nu\in j$, by setting ${\bar x}_{\mu\nu}$ to ${\bar y}_\mu$, while
other siblings $\nu'$ of $\nu$ have ${\bar x}_{\mu\nu'}=0$. Note that
right after a demand $\nu\in p$ is created, its neighborhood is
disjoint from the neighborhood of $p$, that is ${\overline{N}}(\nu)\cap
{\widetilde N}(p) = \emptyset$, by Lines~11--13 of the algorithm. Thus all
demands of $p$ created later will have neighborhoods disjoint from the
set ${\overline{N}}(\nu)$ before the augmenting phase 2. Furthermore,
Procedure~${\textsc{AugmentToUnit}}()$ preserves this property, because when it
adds a facility to ${\overline{N}}(\nu)$ then it removes it from
${\widetilde N}(p)$, and in case of splitting, one resulting facility is
added to ${\overline{N}}(\nu)$ and the other to ${\widetilde N}(p)$. Property
(SI.\ref{SI:primary disjoint}) is shown below in Lemma~\ref{lem:
property SI:primary disjoint holds}.
It remains to show Properties~(PD.\ref{PD:assign:cost}) and
(SI.\ref{SI:primary disjoint}). We show them in the lemmas
below, thus completing the description of our adaptive
partition process.
\begin{lemma}\label{lem: property SI:primary disjoint holds}
Property~(SI.\ref{SI:primary disjoint}) holds after the
Adaptive Partitioning stage.
\end{lemma}
\begin{proof}
Let $\nu_1,\ldots,\nu_{r_j}$ be the demands of a client
$j\in\mathbb{C}$, listed in the order of creation, and, for each
$q=1,2,\ldots,r_j$, denote by $\kappa_q$ the primary demand that
$\nu_q$ is assigned to. After the completion of Phase~1 of
Pseudocode~\ref{alg:lpr2} (Lines 5--18), we have
${\overline{N}}(\nu_s)\subseteq {\overline{N}}(\kappa_s)$ for $s=1,\ldots,r_j$.
Since any two primary demands have disjoint
neighborhoods, we have ${\overline{N}}(\nu_s) \cap {\overline{N}}(\kappa_q) =
\emptyset$ for any $s\neq q$, that is
Property~(SI.\ref{SI:primary disjoint}) holds right after Phase~1.
After Phase~1, all neighborhoods ${\overline{N}}(\kappa_s)$,
$s=1,\ldots,r_j$, have already been fixed and they do not change
in Phase~2. None of the facilities in ${\widetilde N}(j)$ appear in
any of ${\overline{N}}(\kappa_s)$ for $s=1,\ldots,r_j$, by the way we
allocate facilities in Lines~13 and 16. Therefore during the
augmentation process in Phase~2, when we add facilities from
${\widetilde N}(j)$ to ${\overline{N}}(\nu)$, for some $\nu\in j$
(Line~19--21 of Pseudocode~\ref{alg:lpr2}), all the required
disjointness conditions will be preserved.
\end{proof}
We need one more lemma before proving our last property
(PD.\ref{PD:assign:cost}). For a client $j$ and a demand
$\nu$, we use notation $\mbox{\rm{tcc}}^{\nu}(j)$ for the value of
$\mbox{\rm{tcc}}(j)$ at the time when $\nu$ was created. (It is not
necessary that $\nu\in j$ but we assume that $j$ is not
exhausted at that time.)
\begin{lemma}\label{lem: tcc optimal}
Let $\eta$ and $\nu$ be two demands, with $\eta$ created
no later than $\nu$, and let $j\in\mathbb{C}$ be a client
that is not exhausted when $\nu$ is created. Then we have
\begin{description}
\item{(a)} $\mbox{\rm{tcc}}^\eta(j) \le \mbox{\rm{tcc}}^{\nu}(j)$, and
\item{(b)} if $\nu\in j$ then $\mbox{\rm{tcc}}^\eta(j) \le C^{\avg}_{\nu}$.
\end{description}
\end{lemma}
\begin{proof}
We focus first on the time when demand $\eta$ is about to be created,
right after the call to ${\textsc{NearestUnitChunk}}()$ in
Pseudocode~\ref{alg:lpr2}, Line~7. Let ${\widetilde N}(j) =
\{\mu_1,...,\mu_q\}$ with all facilities $\mu_s$ ordered
according to nondecreasing distance from $j$. Consider
the following linear program:
\begin{alignat*}{1}
\textrm{minimize} \quad & \sum_s d_{\mu_s j}z_s
\\
\textrm{subject to} \quad & \sum_s z_s \ge 1
\\
& 0 \le z_s \le {\widetilde x}_{\mu_s j} \quad \textrm{for all}\ s
\end{alignat*}
This is a fractional
minimum knapsack covering problem (with knapsack size equal to $1$) and its optimal fractional
solution is the greedy solution, whose value is exactly
$\mbox{\rm{tcc}}^\eta(j)$.
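The greedy solution of this covering LP can be sketched as follows (a minimal illustration on hypothetical data; the function name is ours, not the paper's notation): facilities are scanned in order of nondecreasing distance, and capacity is taken from the nearest facilities until the covered amount reaches $1$.

```python
# Sketch of the fractional minimum knapsack cover used to define tcc(j):
# take capacity greedily from the nearest facilities until total 1 is covered.

def greedy_knapsack_cover(dist_cap):
    """dist_cap: list of (d_{mu j}, xtilde_{mu j}) pairs.
    Returns the greedy (optimal) value of the covering LP with demand 1."""
    need, cost = 1.0, 0.0
    for d, cap in sorted(dist_cap):  # nondecreasing distance from j
        take = min(cap, need)        # greedy: use as much nearby capacity as possible
        cost += d * take
        need -= take
        if need <= 1e-12:
            return cost
    raise ValueError("total capacity below 1; client already exhausted")
```

Any other feasible choice of the $z_s$ variables adding up to $1$ yields a value at least as large, which is how the lemma compares $\mbox{\rm{tcc}}^\eta(j)$ with later quantities.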
On the other hand, we claim that
$\mbox{\rm{tcc}}^{\nu}(j)$ can be thought of as the value of some feasible
solution to this linear program, and that the same is true for $C^{\avg}_{\nu}$ if $\nu\in j$.
Indeed, each of these
quantities involves some later values ${\widetilde x}_{\mu j}$,
where $\mu$ could be one of the facilities $\mu_s$ or a
new facility obtained from splitting. For each $s$,
however, the sum of all values ${\widetilde x}_{\mu j}$,
over the facilities $\mu$ that were split from $\mu_s$, cannot exceed
the value ${\widetilde x}_{\mu_s j}$ at the time when
$\eta$ was created, because splitting facilities preserves this sum and
creating new demands for $j$ can only decrease it.
Therefore both quantities
$\mbox{\rm{tcc}}^{\nu}(j)$ and $C^{\avg}_{\nu}$ (for $\nu\in j$) correspond to some
choice of the $z_s$ variables (adding up to $1$), and the
lemma follows.
\end{proof}
\begin{lemma}\label{lem: PD:assign:cost holds}
Property~(PD.\ref{PD:assign:cost}) holds after the Adaptive Partitioning stage.
\end{lemma}
\begin{proof}
Suppose that demand $\nu\in j$ is assigned to some primary demand $\kappa\in p$.
Then
\begin{eqnarray*}
C^{\avg}_{\kappa} + \alpha_{\kappa}^\ast \;=\; \mbox{\rm{tcc}}^{\kappa}(p) + \alpha^\ast_p
\;\le\; \mbox{\rm{tcc}}^{\kappa}(j) + \alpha^\ast_j
\;\le\; C^{\avg}_{\nu} + \alpha^\ast_\nu.
\end{eqnarray*}
We now justify this derivation. By definition we have
$\alpha_{\kappa}^\ast = \alpha^\ast_p$. Further, by the
algorithm, if $\kappa$ is a primary demand of client $p$,
then $C^{\avg}_{\kappa}$ is equal to $\mbox{\rm{tcc}}(p)$ computed when
$\kappa$ is created, which is exactly $\mbox{\rm{tcc}}^{\kappa}(p)$. Thus
the first equation is true. The first inequality follows
from the choice of $p$ in Line~9 of
Pseudocode~\ref{alg:lpr2}. The last inequality holds
because $\alpha^\ast_j = \alpha^\ast_\nu$ (due to $\nu\in
j$), and because $\mbox{\rm{tcc}}^{\kappa}(j) \le C^{\avg}_{\nu}$, which
follows from Lemma~\ref{lem: tcc optimal}.
\end{proof}
We have thus proved that all properties (PS), (CO), (PD) and (SI) hold
for our partitioned fractional solution $(\bar{\boldsymbol{x}},\bar{\boldsymbol{y}})$. In the
following sections we show how to use these properties to round the
fractional solution to an approximate integral solution. For the
$3$-approximation algorithm (Section~\ref{sec: 3-approximation}) and
the $1.736$-approximation algorithm (Section~\ref{sec:
1.736-approximation}), the first phase of the algorithm is exactly
the same partition process as described above. However, the
$1.575$-approximation algorithm (Section~\ref{sec:
1.575-approximation}) demands a more sophisticated partitioning
process, as the interplay between the close and far neighborhoods of sibling
demands results in more delicate properties that our partitioned
fractional solution must satisfy.
\section{Algorithm~{\mbox{\rm EGUP}} with Ratio $3$}
\label{sec: 3-approximation}
With the partitioned FTFP instance and its associated fractional
solution in place, we now begin to introduce our rounding algorithms.
The algorithm we describe in this section achieves ratio $3$. Although
this is still quite far from our best ratio $1.575$ that we derive
later, we include this algorithm in the paper to illustrate, in a
relatively simple setting, how the properties of our partitioned
fractional solution are used in rounding it to an integral solution
with cost not too far away from an optimal solution. The rounding
approach we use here is an extension of the corresponding method for
UFL described in~\cite{gupta08}.
\paragraph{Algorithm~{\mbox{\rm EGUP}}.}
At a high level, we open exactly one facility for each
primary demand $\kappa$, and we connect each non-primary demand
to the facility opened for the primary demand it
is assigned to.
More precisely, we apply a rounding process, guided by the
fractional values $({\bar y}_{\mu})$ and $({\bar x}_{\mu\nu})$,
that produces an integral solution. This integral solution
is obtained by choosing a subset of facilities in
$\overline{\sitesset}$ to open, and for each demand in $\overline{\clientset}$,
specifying an open facility that this demand will be
connected to. For each primary demand $\kappa\in P$, we
want to open one facility $\phi(\kappa) \in
{\overline{N}}(\kappa)$. To this end, we use randomization: for each
$\mu\in{\overline{N}}(\kappa)$, we choose $\phi(\kappa) = \mu$ with
probability ${\bar x}_{\mu\kappa}$, ensuring that exactly one
$\mu \in {\overline{N}}(\kappa)$ is chosen. Note that
$\sum_{\mu\in{\overline{N}}(\kappa)}{\bar x}_{\mu\kappa}=1$, so this
distribution is well-defined. We open this facility
$\phi(\kappa)$ and connect to $\phi(\kappa)$ all demands
that are assigned to $\kappa$.
In our description above, the algorithm is presented as a
randomized algorithm. It can be de-randomized using the
method of conditional expectations, which is commonly used
in approximation algorithms for facility location problems
and standard enough that presenting it here would be
redundant. Readers less familiar with this field are
recommended to consult \cite{ChudakS04}, where the method of
conditional expectations is applied in a context very
similar to ours.
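The randomized choice of $\phi(\kappa)$, and the expected facility cost it induces, can be sketched as follows (a minimal Python illustration on hypothetical data; the names \texttt{choose\_facility} and \texttt{expected\_facility\_cost} are ours, not part of the paper's pseudocode):

```python
import random

# Sketch of the EGUP rounding step: for a primary demand kappa, pick one
# facility phi(kappa) in N(kappa), choosing mu with probability xbar[mu].
# By (PS.1) these probabilities sum to 1 over the neighborhood.
def choose_facility(neighborhood, xbar, rng):
    r = rng.random()
    acc = 0.0
    for mu in neighborhood:
        acc += xbar[mu]
        if r < acc:
            return mu
    return neighborhood[-1]  # guard against floating-point round-off

# Expected facility cost: sum over primary demands kappa and facilities mu
# of f_mu * xbar_{mu kappa}, as in the first step of the facility-cost bound.
def expected_facility_cost(primaries, f, xbar):
    return sum(f[mu] * xbar[kappa][mu]
               for kappa in primaries for mu in xbar[kappa])
```

On a toy instance with one primary demand and two facilities of costs $10$ and $20$ opened with probabilities $0.3$ and $0.7$, the expected facility cost is $0.3\cdot 10 + 0.7\cdot 20 = 17$.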
\paragraph{Analysis.}
We now bound the expected facility cost and connection cost
by establishing the two lemmas below.
\begin{lemma}\label{lemma:3fac}
The expectation of facility cost $F_{\mbox{\tiny\rm EGUP}}$ of our solution is
at most $F^\ast$.
\end{lemma}
\begin{proof}
By Property~(PD.\ref{PD:disjoint}), the neighborhoods of
primary demands are disjoint. Also, for any primary demand
$\kappa\in P$, the probability that a facility
$\mu\in{\overline{N}}(\kappa)$ is chosen as the open facility
$\phi(\kappa)$ is ${\bar x}_{\mu\kappa}$. Hence the expected
total facility cost is
\begin{align*}
\mathbb{E}[F_{\mbox{\tiny\rm EGUP}}]
&= \textstyle{\sum_{\kappa\in P}\sum_{\mu\in{\overline{N}}(\kappa)}} f_{\mu} {\bar x}_{\mu\kappa}
\\
&= \textstyle{\sum_{\kappa\in P}\sum_{\mu\in\overline{\sitesset}}} f_{\mu} {\bar x}_{\mu\kappa}
\\
&= \textstyle{\sum_{i\in\mathbb{F}}} f_i \textstyle{\sum_{\mu\in i}\sum_{\kappa\in P}} {\bar x}_{\mu\kappa}
\\
&\leq \textstyle{\sum_{i\in\mathbb{F}}} f_i y_i^\ast
= F^\ast,
\end{align*}
where the inequality follows from Property~(PD.\ref{PD:yi}).
\end{proof}
\begin{lemma}\label{lemma:3dist}
The expectation of connection cost $C_{\mbox{\tiny\rm EGUP}}$ of our solution
is at most $C^\ast+2\cdot\mbox{\rm LP}^\ast$.
\end{lemma}
\begin{proof}
For a primary demand $\kappa$, its expected connection cost is
$C^{\avg}_{\kappa}$ because we choose facility $\mu$ with
probability ${\bar x}_{\mu\kappa}$.
Consider a non-primary demand $\nu$ assigned to a primary demand
$\kappa\in P$. Let $\mu$ be any facility in ${\overline{N}}(\nu) \cap
{\overline{N}}(\kappa)$. Since $\mu$ is in both ${\overline{N}}(\nu)$ and
${\overline{N}}(\kappa)$, we have $d_{\mu\nu} \leq \alpha_{\nu}^\ast$ and
$d_{\mu\kappa} \leq \alpha_{\kappa}^\ast$ (this follows from the
complementary slackness conditions, since
$\alpha_{\nu}^\ast=\beta_{\mu\nu}^\ast + d_{\mu\nu}$ for each
$\mu\in {\overline{N}}(\nu)$). Thus, applying the triangle inequality, for
any fixed choice of facility $\phi(\kappa)$ we have
\begin{equation*}
d_{\phi(\kappa)\nu} \leq d_{\phi(\kappa)\kappa}+d_{\mu\kappa}+d_{\mu\nu}
\leq d_{\phi(\kappa)\kappa} + \alpha_{\kappa}^\ast + \alpha_{\nu}^\ast.
\end{equation*}
Therefore the expected distance from $\nu$ to its facility $\phi(\kappa)$ is
\begin{align*}
\mathbb{E}[ d_{\phi(\kappa)\nu} ] &\le C^{\avg}_{\kappa} + \alpha_{\kappa}^\ast + \alpha_{\nu}^\ast
\\
&\leq C^{\avg}_{\nu} + \alpha_{\nu}^\ast + \alpha_{\nu}^\ast
= C^{\avg}_{\nu} + 2\alpha_{\nu}^\ast,
\end{align*}
where the second inequality follows from Property~(PD.\ref{PD:assign:cost}).
From the definition of $C^{\avg}_{\nu}$ and Property~(PS.\ref{PS:xij}), for any $j\in \mathbb{C}$
we have
\begin{align*}
\sum_{\nu\in j} C^{\avg}_{\nu} &= \sum_{\nu\in j}\sum_{\mu\in\overline{\sitesset}}d_{\mu\nu}{\bar x}_{\mu\nu}
\\
&= \sum_{i\in\mathbb{F}} d_{ij}\sum_{\nu\in j}\sum_{\mu\in i}{\bar x}_{\mu\nu}
\\
&= \sum_{i\in\mathbb{F}} d_{ij}x^\ast_{ij}
= C^\ast_j.
\end{align*}
Thus, summing over all demands, the expected total connection cost is
\begin{align*}
\mathbb{E}[C_{\mbox{\tiny\rm EGUP}}] &\le
\textstyle{\sum_{j\in\mathbb{C}} \sum_{\nu\in j}} (C^{\avg}_{\nu} + 2\alpha_{\nu}^\ast)
\\
& = \textstyle{\sum_{j\in\mathbb{C}}} (C_j^\ast + 2r_j\alpha_j^\ast)
= C^\ast + 2\cdot\mbox{\rm LP}^\ast,
\end{align*}
completing the proof of the lemma.
\end{proof}
\begin{theorem}
Algorithm~{\mbox{\rm EGUP}} is a $3$-approximation algorithm.
\end{theorem}
\begin{proof}
By Property~(SI.\ref{SI:primary disjoint}), different
demands from the same client are assigned to different
primary demands, and by (PD.\ref{PD:disjoint}) each primary
demand opens a different facility. This ensures that our
solution is feasible, namely each client $j$ is connected
to $r_j$ different facilities (some possibly located on
the same site). As for the total cost,
Lemma~\ref{lemma:3fac} and Lemma~\ref{lemma:3dist} imply
that the total cost is at most
$F^\ast+C^\ast+2\cdot\mbox{\rm LP}^\ast = 3\cdot\mbox{\rm LP}^\ast \leq
3\cdot\mbox{\rm OPT}$.
\end{proof}
\section{Algorithm~{\mbox{\rm ECHS}} with Ratio $1.736$}\label{sec: 1.736-approximation}
In this section we improve the approximation ratio to $1+2/e \approx
1.736$. The improvement comes from a slightly modified rounding
process and refined analysis. Note that the facility opening cost of
Algorithm~{\mbox{\rm EGUP}} does not exceed that of the fractional optimum
solution, while the connection cost could be far from the optimum,
since we connect a non-primary demand to a facility in the neighborhood of
its assigned primary demand and then estimate the distance using the
triangle inequality. The basic idea to improve the estimate of the connection cost,
following the approach of Chudak and Shmoys~\cite{ChudakS04},
is to connect each non-primary demand to its
nearest open neighbor when one is available, and to use the facility opened by
its assigned primary demand only when none of its neighbors is open.
\paragraph{Algorithm~{\mbox{\rm ECHS}}.}
As before,
the algorithm starts by solving the linear program and applying the
adaptive partitioning algorithm described in
Section~\ref{sec: adaptive partitioning} to obtain a partitioned
solution $(\bar{\boldsymbol{x}}, \bar{\boldsymbol{y}})$. Then we apply the rounding
process to compute an integral solution (see Pseudocode~\ref{alg:lpr3}).
We start, as before, by opening exactly one facility $\phi(\kappa)$ in the
neighborhood of each primary demand $\kappa$ (Line 2). For any
non-primary demand $\nu$ assigned to $\kappa$, we refer to
$\phi(\kappa)$ as the \emph{target} facility of $\nu$. In
Algorithm~{\mbox{\rm EGUP}}, $\nu$ was connected to $\phi(\kappa)$,
but in Algorithm~{\mbox{\rm ECHS}} we may be able to find an open
facility in $\nu$'s neighborhood and connect $\nu$ to this
facility. Specifically, the two changes in the
algorithm are as follows:
\begin{description}
\item{(1)} Each facility $\mu$ that is not in the neighborhood of any
primary demand is opened, independently, with probability
${\bar y}_{\mu}$ (Lines 4--5). Notice that if ${\bar y}_\mu>0$ then, due
to completeness of the partitioned fractional solution, we have
${\bar y}_{\mu}= {\bar x}_{\mu\nu}$ for some demand $\nu$. This implies
that ${\bar y}_{\mu}\leq 1$, because ${\bar x}_{\mu\nu}\le 1$, by
(PS.\ref{PS:one}).
\item{(2)} When connecting demands to facilities, a primary demand
$\kappa$ is connected to the only facility $\phi(\kappa)$ opened in
its neighborhood, as before (Line 3). For a non-primary demand
$\nu$, if its neighborhood ${\overline{N}}(\nu)$ has an open facility, we
connect $\nu$ to the closest open facility in ${\overline{N}}(\nu)$ (Line
8). Otherwise, we connect $\nu$ to its target facility (Line 10).
\end{description}
\begin{algorithm}
\caption{Algorithm~{\mbox{\rm ECHS}}:
Constructing Integral Solution}
\label{alg:lpr3}
\begin{algorithmic}[1]
\For{each $\kappa\in P$}
\State choose one $\phi(\kappa)\in {\overline{N}}(\kappa)$,
with each $\mu\in{\overline{N}}(\kappa)$ chosen as $\phi(\kappa)$
with probability ${\bar y}_\mu$
\State open $\phi(\kappa)$ and connect $\kappa$ to $\phi(\kappa)$
\EndFor
\For{each $\mu\in\overline{\sitesset} - \bigcup_{\kappa\in P}{\overline{N}}(\kappa)$}
\State open $\mu$ with probability ${\bar y}_\mu$ (independently)
\EndFor
\For{each non-primary demand $\nu\in\overline{\clientset}$}
\If{any facility in ${\overline{N}}(\nu)$ is open}
\State{connect $\nu$ to the nearest open facility in ${\overline{N}}(\nu)$}
\Else
\State connect $\nu$ to $\phi(\kappa)$ where $\kappa$ is $\nu$'s
assigned primary demand
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
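The connection rule in Lines 7--11 of the pseudocode can be sketched as follows (a toy illustration with hypothetical data; the name \texttt{connect\_demand} is ours):

```python
# Sketch of the ECHS connection rule for a non-primary demand nu:
# connect to the nearest open facility in N(nu) if one exists (Line 8),
# otherwise fall back to the target facility phi(kappa) (Line 10),
# which is guaranteed to be open.
def connect_demand(nbhood, open_facilities, dist, target):
    """nbhood: facilities in N(nu); dist: facility -> d_{mu nu};
    target: phi(kappa). Returns the facility that nu connects to."""
    open_neighbors = [mu for mu in nbhood if mu in open_facilities]
    if open_neighbors:
        return min(open_neighbors, key=lambda mu: dist[mu])
    return target  # indirect connection
```

Note that the returned facility always lies in ${\overline{N}}(\nu)\cup{\overline{N}}(\kappa)$, which is what the feasibility argument below relies on.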
\paragraph{Analysis.}
We shall first argue that the integral solution thus
constructed is feasible, and then we bound the total cost of
the solution. Regarding feasibility, the only constraint
that is not explicitly enforced by the algorithm is the
fault-tolerance requirement; namely that each client $j$ is
connected to $r_j$ different facilities. Let $\nu$ and
$\nu'$ be two different sibling demands of client $j$ and let
their assigned primary demands be $\kappa$ and $\kappa'$
respectively. Due to (SI.\ref{SI:primary
disjoint}) we know $\kappa \neq \kappa'$. From
(SI.\ref{SI:siblings disjoint}) we have ${\overline{N}}(\nu) \cap
{\overline{N}}(\nu') = \emptyset$. From (SI.\ref{SI:primary
disjoint}), we have ${\overline{N}}(\nu) \cap {\overline{N}}(\kappa') =
\emptyset$ and ${\overline{N}}(\nu') \cap {\overline{N}}(\kappa) =
\emptyset$. From (PD.\ref{PD:disjoint}) we have
${\overline{N}}(\kappa)\cap {\overline{N}}(\kappa') = \emptyset$. It follows
that $({\overline{N}}(\nu) \cup {\overline{N}}(\kappa)) \cap ({\overline{N}}(\nu')
\cup {\overline{N}}(\kappa')) = \emptyset$. Since the algorithm
connects $\nu$ to some facility in ${\overline{N}}(\nu) \cup
{\overline{N}}(\kappa)$ and $\nu'$ to some facility in ${\overline{N}}(\nu')
\cup {\overline{N}}(\kappa')$, $\nu$ and $\nu'$ will be connected to
different facilities.
We now show that the expected cost of the computed solution is bounded by
$(1+2/e) \cdot \mbox{\rm LP}^\ast$. By
(PD.\ref{PD:disjoint}), every facility may appear in at
most one primary demand's neighborhood, and the facilities
open in Line~4--5 of Pseudocode~\ref{alg:lpr3} do not appear
in any primary demand's neighborhood. Therefore, by
linearity of expectation, the expected facility cost of
Algorithm~{\mbox{\rm ECHS}} is
\begin{equation*}
\mathbb{E}[F_{\mbox{\tiny\rm ECHS}}]
= \sum_{\mu\in\overline{\sitesset}} f_\mu {\bar y}_{\mu}
= \sum_{i\in\mathbb{F}} f_i\sum_{\mu\in i} {\bar y}_{\mu}
= \sum_{i\in\mathbb{F}} f_i y_i^\ast = F^\ast,
\end{equation*}
where the third equality follows from (PS.\ref{PS:yi}).
To bound the connection cost, we adapt an argument of Chudak
and Shmoys~\cite{ChudakS04}. Consider a demand $\nu$ and denote by $C_\nu$ the
random variable representing the connection cost for $\nu$.
Our goal now is to estimate $\mathbb{E}[C_\nu]$, the expected value of $C_\nu$.
Demand $\nu$ can either get connected directly to some facility in
${\overline{N}}(\nu)$ or indirectly to its target facility $\phi(\kappa)\in
{\overline{N}}(\kappa)$, where $\kappa$ is the primary demand to
which $\nu$ is assigned. We will analyze these two cases separately.
In our analysis, in this section and the next one, we will use notation
\begin{equation*}
D(A,\sigma) = \frac{\sum_{\mu\in A} d_{\mu\sigma}{\bar y}_{\mu}}{\sum_{\mu\in A} {\bar y}_{\mu}}
\end{equation*}
for the average distance between a demand $\sigma$ and a set $A$ of facilities.
Note that, in particular, we have $C^{\avg}_\nu = D({\overline{N}}(\nu),\nu)$.
We first estimate the expected cost $d_{\phi(\kappa)\nu}$ of the indirect
connection. Let $\Lambda^\nu$ denote the event that some
facility in ${\overline{N}}(\nu)$ is opened. Then
\begin{equation}
\mathbb{E}[C_\nu \mid\neg\Lambda^\nu]
= \mathbb{E}[ d_{\phi(\kappa)\nu} \mid \neg\Lambda^\nu]
= D({\overline{N}}(\kappa) \setminus {\overline{N}}(\nu), \nu).
\label{eqn: expected indirect connection}
\end{equation}
Note that $\neg\Lambda^\nu$ implies that ${\overline{N}}(\kappa) \setminus
{\overline{N}}(\nu)\neq\emptyset$, since ${\overline{N}}(\kappa)$ contains
exactly one open facility, namely $\phi(\kappa)$.
\begin{lemma}
\label{lem:echu indirect}
Let $\nu$ be a demand assigned to a primary demand $\kappa$, and
assume that ${\overline{N}}(\kappa) \setminus {\overline{N}}(\nu)\neq\emptyset$.
Then
\begin{equation*}
\mathbb{E}[ C_\nu \mid\neg\Lambda^\nu] \leq
C^{\avg}_\nu+2\alpha_{\nu}^\ast.
\end{equation*}
\end{lemma}
\begin{proof}
By (\ref{eqn: expected indirect connection}), we need to show that $D({\overline{N}}(\kappa)
\setminus {\overline{N}}(\nu), \nu) \leq C^{\avg}_\nu +
2\alpha_{\nu}^\ast$. There are two cases to consider.
\begin{description}
\item{\mycase{1}}
There exists some $\mu'\in {\overline{N}}(\kappa) \cap
{\overline{N}}(\nu)$ such that $d_{\mu' \kappa} \leq C^{\avg}_\kappa$.
In this case, for every $\mu\in {\overline{N}}(\kappa)\setminus {\overline{N}}(\nu)$, we have
\begin{equation*}
d_{\mu \nu} \leq d_{\mu \kappa} + d_{\mu' \kappa} + d_{\mu' \nu}
\le \alpha^\ast_\kappa + C^{\avg}_\kappa + \alpha^\ast_{\nu}
\leq C^{\avg}_\nu + 2\alpha_{\nu}^\ast,
\end{equation*}
using the triangle inequality, complementary slackness, and (PD.\ref{PD:assign:cost}).
By summing over all $\mu\in {\overline{N}}(\kappa) \setminus {\overline{N}}(\nu)$, it
follows that $D({\overline{N}}(\kappa) \setminus {\overline{N}}(\nu), \nu) \leq
C^{\avg}_\nu + 2\alpha_{\nu}^\ast$.
\item{\mycase{2}}
Every $\mu'\in {\overline{N}}(\kappa)\cap {\overline{N}}(\nu)$
has $d_{\mu'\kappa} > C^{\avg}_\kappa$. Since $C^{\avg}_{\kappa} = D({\overline{N}}(\kappa),\kappa)$,
this implies that
$D({\overline{N}}(\kappa) \setminus {\overline{N}}(\nu),\kappa)\leq C^{\avg}_{\kappa}$. Therefore,
choosing an arbitrary $\mu'\in {\overline{N}}(\kappa)\cap {\overline{N}}(\nu)$,
we obtain
\begin{equation*}
D({\overline{N}}(\kappa) \setminus {\overline{N}}(\nu), \nu)
\leq D({\overline{N}}(\kappa) \setminus {\overline{N}}(\nu), \kappa)
+ d_{\mu' \kappa} + d_{\mu' \nu}
\leq C^{\avg}_{\kappa} +
\alpha_{\kappa}^\ast + \alpha_{\nu}^\ast
\leq C^{\avg}_\nu + 2\alpha_{\nu}^\ast,
\end{equation*}
where we again use the triangle inequality,
complementary slackness, and (PD.\ref{PD:assign:cost}).
\end{description}
Since the lemma holds in both cases, the proof is now complete.
\end{proof}
We now continue our estimation of the connection cost. The next step
of our analysis is to show that
\begin{equation}
\mathbb{E}[C_\nu]\le C^{\avg}_{\nu} + \frac{2}{e}\alpha^\ast_\nu.
\label{eqn: echs bound for connection cost}
\end{equation}
The argument is divided into three cases. The first, easy case is when
$\nu$ is a primary demand $\kappa$. According to the algorithm
(see Pseudocode~\ref{alg:lpr3}, Line~2), we have $C_\kappa = d_{\mu\kappa}$ with probability ${\bar y}_{\mu}$,
for $\mu\in {\overline{N}}(\kappa)$. Therefore $\mathbb{E}[C_\kappa] = C^{\avg}_{\kappa}$, so
(\ref{eqn: echs bound for connection cost}) holds.
Next, we consider a non-primary demand $\nu$. Let $\kappa$
be the primary demand that $\nu$ is assigned to. We first
deal with the sub-case when ${\overline{N}}(\kappa)\setminus
{\overline{N}}(\nu) = \emptyset$, which is the same as
${\overline{N}}(\kappa) \subseteq {\overline{N}}(\nu)$. Property (CO)
implies that ${\bar x}_{\mu\nu} = {\bar y}_{\mu} =
{\bar x}_{\mu\kappa}$ for every $\mu \in {\overline{N}}(\kappa)$, so we
have $\sum_{\mu\in{\overline{N}}(\kappa)} {\bar x}_{\mu\nu} =
\sum_{\mu\in{\overline{N}}(\kappa)} {\bar x}_{\mu\kappa} = 1$, due to
(PS.\ref{PS:one}). On the other hand, we have
$\sum_{\mu\in{\overline{N}}(\nu)} {\bar x}_{\mu\nu} = 1$, and
${\bar x}_{\mu\nu} > 0$ for all $\mu\in {\overline{N}}(\nu)$. Therefore
${\overline{N}}(\kappa) = {\overline{N}}(\nu)$ and $C_\nu$ has exactly the
same distribution as $C_\kappa$. So this case reduces to
the first case, namely we have $\mathbb{E}[C_{\nu}] =
C^{\avg}_{\nu}$, and (\ref{eqn: echs bound for connection
cost}) holds.
The last, and only non-trivial, case is when ${\overline{N}}(\kappa)\setminus
{\overline{N}}(\nu)\neq\emptyset$. We handle this case in the following lemma.
\begin{lemma}\label{lem: echs expected C_nu}
Assume that ${\overline{N}}(\kappa) \setminus {\overline{N}}(\nu) \neq \emptyset$.
Then the expected connection cost of $\nu$, conditioned on the event that at least one of
its neighbors opens, satisfies
\begin{equation*}
\mathbb{E}[C_\nu \mid \Lambda^\nu] \leq C^{\avg}_{\nu}.
\end{equation*}
\end{lemma}
\begin{proof}
The proof is similar to an analogous result in~\cite{ChudakS04,ByrkaA10}.
For the sake of completeness we sketch here a simplified argument, adapted to our
terminology and notation.
The idea is to consider a different random process that is
easier to analyze and whose expected connection cost is no better than that of
the algorithm.
We partition ${\overline{N}}(\nu)$ into groups $G_1,\ldots,G_k$: two
different facilities $\mu$ and $\mu'$ are put in the same group $G_s$,
for $s\in \{1,\ldots,k\}$, if they both belong to the neighborhood
${\overline{N}}(\kappa)$ of the same primary demand $\kappa$. If some $\mu$ is
not a neighbor of any primary demand, then it constitutes a singleton
group. For each $s$, let ${\bar d}_s = D(G_s,\nu)$ be the average
distance from $\nu$ to $G_s$. Assume that $G_1,...,G_k$ are ordered
by nondecreasing average distance to $\nu$, that is ${\bar d}_1 \le
{\bar d}_2 \le ... \le {\bar d}_k$. For each group $G_s$, we select it,
independently, with probability $g_s = \sum_{\mu\in G_s}{\bar y}_{\mu}$.
For each selected group $G_s$, we
open exactly one facility in $G_s$, where each $\mu\in G_s$
is opened with probability ${\bar y}_{\mu}/\sum_{\eta\in G_s}
{\bar y}_{\eta}$.
So far, this process is the same as that in the algorithm (if restricted to ${\overline{N}}(\nu)$).
However, we connect $\nu$ in a slightly different way, by choosing the smallest
$s$ for which $G_s$ was selected and connecting $\nu$ to the open facility in $G_s$.
This can only increase our expected connection cost, assuming that at least one
facility in ${\overline{N}}(\nu)$ opens, so
\begin{align}
\mathbb{E}[C_\nu \mid \Lambda^\nu] &\leq \frac{1}{\mathbb{P}[\Lambda^\nu]}
\left( {\bar d}_1 g_1 + {\bar d}_2 g_2 (1-g_1) + \ldots + {\bar d}_k g_k
(1-g_1) (1-g_2) \ldots (1-g_k) \right)
\notag
\\
&\leq \frac{1}{\mathbb{P}[\Lambda^\nu]}
\cdot \sum_{s=1}^k {\bar d}_s g_s
\cdot
\left(\sum_{t=1}^k g_t \prod_{z=1}^{t-1} (1-g_z)\right)
\label{eqn: echs ineq direct cost, step 1}
\\
&= \sum_{s=1}^k {\bar d}_s g_s
\label{eqn: echs ineq direct cost, step 2}
\\
&= C^{\avg}_{\nu}.
\label{eqn: echs ineq direct cost, step 3}
\end{align}
The proof of inequality (\ref{eqn: echs ineq direct cost, step 1})
is given in \ref{sec: ECHSinequality} (note that $\sum_{s=1}^k g_s = 1$),
equality (\ref{eqn: echs ineq direct cost, step 2}) follows from
$\mathbb{P}[\Lambda^\nu] = 1 - \prod_{t=1}^k (1-g_t)
= \sum_{t=1}^k g_t
\prod_{z=1}^{t-1} (1 - g_z)$,
and (\ref{eqn: echs ineq direct cost, step 3}) follows from the definition
of the distances ${\bar d}_s$, probabilities $g_s$, and simple algebra.
\end{proof}
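The chain (\ref{eqn: echs ineq direct cost, step 1})--(\ref{eqn: echs ineq direct cost, step 3}) can be checked numerically on toy data (a sketch only; the instance below is hypothetical, with ${\bar d}_1\le\cdots\le{\bar d}_k$ and $\sum_s g_s = 1$):

```python
# Numeric check of the inequality chain: with dbar sorted nondecreasingly
# and sum(g) = 1, the "pick the first selected group" cost
#   dbar_1 g_1 + dbar_2 g_2 (1-g_1) + ... + dbar_k g_k prod_{z<k}(1-g_z)
# is at most (sum_s dbar_s g_s) * P[Lambda], where
#   P[Lambda] = sum_t g_t prod_{z<t}(1-g_z).
def lhs_rhs(dbar, g):
    surv, lhs, p = 1.0, 0.0, 0.0
    for d_s, g_s in zip(dbar, g):
        lhs += d_s * g_s * surv   # group s is the first one selected
        p += g_s * surv           # probability some group up to s is selected
        surv *= 1.0 - g_s
    rhs = sum(d_s * g_s for d_s, g_s in zip(dbar, g)) * p
    return lhs, rhs, p
```

Dividing the inequality by $p = \mathbb{P}[\Lambda^\nu]$ gives exactly the bound $\mathbb{E}[C_\nu \mid \Lambda^\nu] \le \sum_s {\bar d}_s g_s = C^{\avg}_\nu$.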
Next, we show an estimate on the probability that none of $\nu$'s
neighbors is opened by the algorithm.
\begin{lemma}\label{lem: probability of not Lambda^nu}
The probability that none of $\nu$'s neighbors is opened satisfies
$\mathbb{P}[\neg\Lambda^\nu] \le 1/e$.
\end{lemma}
\begin{proof}
We use the same partition of ${\overline{N}}(\nu)$ into groups $G_1,...,G_k$ as
in the proof of Lemma~\ref{lem: echs expected C_nu}. Denoting by
$g_s$ the probability that a group $G_s$ is selected (and thus that it
has an open facility), we have
\begin{equation*}
\mathbb{P}[\neg\Lambda^\nu] = \prod_{s=1}^k (1 - g_s)
\le e^{- \sum_{s=1}^k g_s}
= e^{-\sum_{\mu \in {\overline{N}}(\nu)} {\bar y}_{\mu}}
= \frac{1}{e}.
\end{equation*}
In this derivation, the inequality uses the bound $1-x\le e^{-x}$, valid for all $x$;
the second equality follows from $\sum_{s=1}^k g_s = \sum_{\mu \in {\overline{N}}(\nu)} {\bar y}_{\mu}$,
and the last equality follows from
$\sum_{\mu \in {\overline{N}}(\nu)} {\bar y}_{\mu} = 1$.
\end{proof}
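This bound is easy to verify numerically (a sketch on hypothetical group probabilities; the function name is ours):

```python
import math

# P[neg Lambda^nu] = prod_s (1 - g_s); since 1-x <= e^{-x}, this is at most
# e^{-sum_s g_s}, which equals 1/e when the g_s sum to 1.
def miss_probability(g):
    p = 1.0
    for gs in g:
        p *= 1.0 - gs
    return p
```

For example, two groups with $g_1 = g_2 = 1/2$ give $\mathbb{P}[\neg\Lambda^\nu] = 1/4 < 1/e$.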
We are now ready to estimate the unconditional expected connection cost of $\nu$
(in the case when ${\overline{N}}(\kappa)\setminus {\overline{N}}(\nu)\neq\emptyset$)
as follows:
\begin{align}
\notag
\mathbb{E}[C_\nu] &= \mathbb{E}[C_{\nu} \mid \Lambda^\nu] \cdot \mathbb{P}[\Lambda^\nu]
+ \mathbb{E}[C_{\nu} \mid \neg \Lambda^\nu] \cdot \mathbb{P}[\neg \Lambda^\nu]
\\
&\leq C^{\avg}_{\nu} \cdot \mathbb{P}[\Lambda^\nu]
+ (C^{\avg}_{\nu} + 2\alpha_{\nu}^\ast) \cdot \mathbb{P}[\neg \Lambda^\nu]
\label{eqn: Cnu estimate 0}
\\
&= C^{\avg}_{\nu}
+ 2\alpha_{\nu}^\ast \cdot \mathbb{P}[\neg \Lambda^\nu]
\notag
\\
&\le C^{\avg}_{\nu} + \frac{2}{e}\cdot\alpha_{\nu}^\ast.
\label{eqn: Cnu estimate last}
\end{align}
In the above derivation, inequality (\ref{eqn: Cnu estimate 0})
follows from Lemmas~\ref{lem:echu indirect} and \ref{lem: echs expected C_nu},
and inequality (\ref{eqn: Cnu estimate last}) follows from
Lemma~\ref{lem: probability of not Lambda^nu}.
We have thus shown that the bound (\ref{eqn: echs bound for connection cost})
holds in all three cases.
Summing over all demands $\nu$ of a client $j$, we can now bound
the expected connection cost of client $j$:
\begin{equation*}
\mathbb{E}[C_j] = \textstyle\sum_{\nu\in j} \mathbb{E}[C_\nu]
\leq {\textstyle\sum_{\nu\in j} (C^{\avg}_{\nu} + \frac{2}{e}\cdot\alpha_{\nu}^\ast) }
= { C_j^\ast + \frac{2}{e}\cdot r_j\alpha_j^\ast}.
\end{equation*}
Finally, summing over all clients $j$, we obtain our bound on
the expected connection cost,
\begin{equation*}
\mathbb{E}[ C_{\mbox{\tiny\rm ECHS}}] \le C^\ast + \frac{2}{e}\cdot\mbox{\rm LP}^\ast.
\end{equation*}
Therefore we have established that
our algorithm constructs a feasible integral solution with
an overall expected cost
\begin{equation*}
\label{eq:chudakall}
\mathbb{E}[ F_{\mbox{\tiny\rm ECHS}} + C_{\mbox{\tiny\rm ECHS}}]
\le
F^\ast + C^\ast + \frac{2}{e}\cdot \mbox{\rm LP}^\ast = (1+2/e)\cdot \mbox{\rm LP}^\ast
\leq (1+2/e)\cdot \mbox{\rm OPT}.
\end{equation*}
Summarizing, we obtain the main result of this section.
\begin{theorem}\label{thm:1736}
Algorithm~{\mbox{\rm ECHS}} is a $(1+2/e)$-approximation algorithm for \mbox{\rm FTFP}.
\end{theorem}
\section{Algorithm~{\mbox{\rm EBGS}} with Ratio $1.575$}\label{sec: 1.575-approximation}
In this section we give our main result, a $1.575$-approximation
algorithm for $\mbox{\rm FTFP}$, where $1.575$ is the value of $\min_{\gamma\geq
1}\max\{\gamma, 1+2/e^\gamma, \frac{1/e+1/e^\gamma}{1-1/\gamma}\}$,
rounded to three decimal digits. This matches the ratio of the best
known LP-rounding algorithm for UFL by
Byrka~{{\it et al.}}~\cite{ByrkaGS10}.
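The constant can be reproduced by a simple numerical minimization (a sketch, not part of the algorithm; the grid search below is our own illustration):

```python
import math

# Evaluate max{gamma, 1 + 2/e^gamma, (1/e + 1/e^gamma)/(1 - 1/gamma)}
# and minimize it over gamma > 1 by grid search; the minimum rounds to 1.575.
def ratio(gamma):
    return max(gamma,
               1.0 + 2.0 * math.exp(-gamma),
               (math.exp(-1.0) + math.exp(-gamma)) / (1.0 - 1.0 / gamma))

best = min(ratio(1.0 + k / 10000.0) for k in range(1, 20000))
```

At the minimizer the first and third terms balance, i.e. $\gamma \approx \frac{1/e+1/e^\gamma}{1-1/\gamma}$, which pins $\gamma$ near $1.575$.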
Recall that in Section~\ref{sec: 1.736-approximation} we showed how to
compute an integral solution with facility cost bounded by $F^\ast$
and connection cost bounded by $C^\ast + 2/e\cdot\mbox{\rm LP}^\ast$. Thus,
while our facility cost does not exceed the optimal fractional
facility cost, our connection cost is significantly larger than the
connection cost in the optimal fractional solution. A natural idea is
to balance these two ratios by reducing the connection cost at the
expense of the facility cost. One way to do this would be to increase
the probability of opening facilities, from ${\bar y}_{\mu}$ (used in
Algorithm~{\mbox{\rm ECHS}}) to, say, $\gamma{\bar y}_{\mu}$, for some $\gamma >
1$. This increases the expected facility cost by a factor of $\gamma$
but, as it turns out, it also reduces the probability that an indirect
connection occurs for a non-primary demand to $1/e^\gamma$ (from the
previous value $1/e$ in {\mbox{\rm ECHS}}). As a consequence, for each primary
demand $\kappa$, the new algorithm will select a facility to open from
the nearest facilities $\mu$ in ${\overline{N}}(\kappa)$ such that the
connection values ${\bar x}_{\mu\nu}$ sum up to $1/\gamma$, instead of
$1$ as in Algorithm {\mbox{\rm ECHS}}. It is easily seen that this will improve
the estimate on connection cost for primary demands. These two
changes, along with a more refined analysis, are the essence of the
approach in~\cite{ByrkaGS10}, expressed in our terminology.
Our approach can be thought of as a combination of the above ideas
with the techniques of demand reduction and
adaptive partitioning that we introduced earlier. However, our
adaptive partitioning technique needs to be carefully modified,
because now we will be using a more intricate neighborhood structure,
with the neighborhood of each demand divided into two disjoint parts,
and with restrictions on how parts from different demands can overlap.
We begin by describing properties that our partitioned fractional
solution $(\bar{\boldsymbol{x}},\bar{\boldsymbol{y}})$ needs to satisfy. Assume that $\gamma$ is
some constant such that $1 < \gamma < 2$. As mentioned earlier,
the neighborhood ${\overline{N}}(\nu)$ of each demand $\nu$ will be divided
into two disjoint parts. The first part, called the \emph{close
neighborhood} and denoted $\wbarN_{\cls}(\nu)$, contains the facilities
in ${\overline{N}}(\nu)$ nearest to $\nu$ with the total connection value
equal $1/\gamma$, that is $\sum_{\mu\in\wbarN_{\cls}(\nu)} {\bar x}_{\mu\nu}
= 1/\gamma$. The second part, called the \emph{far neighborhood} and
denoted $\wbarN_{\far}(\nu)$, contains the remaining facilities in
${\overline{N}}(\nu)$ (so $\sum_{\mu\in\wbarN_{\far}(\nu)} {\bar x}_{\mu\nu} = 1-1/\gamma$). We
restate these definitions formally below in Property~(NB). Recall
that for any set $A$ of facilities and a demand $\nu$, by
$D(A,\nu)$ we denote the average distance between $\nu$ and the
facilities in $A$, that is $D(A,\nu) =\sum_{\mu\in A}
d_{\mu\nu}{\bar y}_{\mu}/\sum_{\mu\in A} {\bar y}_{\mu}$. We will use
the notation ${\mbox{\scriptsize\rm cls}}dist(\nu)=D(\wbarN_{\cls}(\nu),\nu)$ and
${\mbox{\scriptsize\rm far}}dist(\nu)=D(\wbarN_{\far}(\nu),\nu)$ for the average distances from
$\nu$ to its close and far neighborhoods, respectively. By the
definition of these sets and the completeness property (CO), these
distances can be expressed as
\begin{equation*}
{\mbox{\scriptsize\rm cls}}dist(\nu)=\gamma\sum_{\mu\in\wbarN_{\cls}(\nu)}
d_{\mu\nu}{\bar x}_{\mu\nu} \quad\text{and}\quad
{\mbox{\scriptsize\rm far}}dist(\nu)=\frac{\gamma}{\gamma-1}\sum_{\mu\in\wbarN_{\far}(\nu)}
d_{\mu\nu}{\bar x}_{\mu\nu}.
\end{equation*}
We will also use the notation ${\mbox{\scriptsize\rm cls}}max(\nu)=\max_{\mu\in\wbarN_{\cls}(\nu)}
d_{\mu\nu}$ for the maximum distance from $\nu$ to its close
neighborhood. The average distance from a demand $\nu$ to its overall
neighborhood ${\overline{N}}(\nu)$ is denoted as $C^{\avg}(\nu) =
D({\overline{N}}(\nu), \nu) = \sum_{\mu \in {\overline{N}}(\nu)} d_{\mu\nu}
{\bar x}_{\mu\nu}$. It is easy to see that
\begin{equation}
C^{\avg}(\nu) = \frac{1}{\gamma} {\mbox{\scriptsize\rm cls}}dist(\nu) + \frac{\gamma -
1}{\gamma} {\mbox{\scriptsize\rm far}}dist(\nu).
\label{eqn:avg dist cls dist far dist}
\end{equation}
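The identity above can be checked on a toy instance. The Python sketch below uses exact rational arithmetic with hypothetical distances and connection values: three facilities whose $x$-values sum to $1$, with the nearest prefix summing to exactly $1/\gamma$.

```python
from fractions import Fraction

# Hypothetical demand with neighborhood facilities given as (distance, x)
# pairs; x-values sum to 1, and the closest prefix sums to 1/gamma.
gamma = Fraction(3, 2)
close = [(1, Fraction(1, 3)), (2, Fraction(1, 3))]   # sums to 2/3 = 1/gamma
far   = [(5, Fraction(1, 3))]                        # sums to 1 - 1/gamma

clsdist = gamma * sum(d * x for d, x in close)
fardist = gamma / (gamma - 1) * sum(d * x for d, x in far)
c_avg   = sum(d * x for d, x in close + far)

# The identity C_avg = clsdist/gamma + (gamma-1)/gamma * fardist.
assert c_avg == clsdist / gamma + (gamma - 1) / gamma * fardist
```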
Our partitioned solution $(\bar{\boldsymbol{x}},\bar{\boldsymbol{y}})$ must satisfy the same
partitioning and completeness properties as before, namely properties
(PS) and (CO) in Section~\ref{sec: adaptive partitioning}. In
addition, it must satisfy a new neighborhood property (NB) and modified
properties (PD') and (SI'), listed below.
\begin{description}
\renewcommand{\labelenumii}{(\alph{enumii})}
\renewcommand{\theenumii}{(\alph{enumii})}
\item{(NB)} \label{NB}
\emph{Neighborhoods.}
For each demand $\nu \in \overline{\clientset}$, its neighborhood is divided into \emph{close} and
\emph{far} neighborhood, that is ${\overline{N}}(\nu) = \wbarN_{\cls}(\nu) \cup \wbarN_{\far}(\nu)$, where
\begin{itemize}
\item $\wbarN_{\cls}(\nu) \cap \wbarN_{\far}(\nu) = \emptyset$,
\item $\sum_{\mu\in\wbarN_{\cls}(\nu)} {\bar x}_{\mu\nu} =1/\gamma$, and
\item if $\mu\in \wbarN_{\cls}(\nu)$ and $\mu'\in \wbarN_{\far}(\nu)$
then $d_{\mu\nu}\le d_{\mu'\nu}$.
\end{itemize}
Note that the first two conditions, together with
(PS.\ref{PS:one}), imply that $\sum_{\mu\in\wbarN_{\far}(\nu)}
{\bar x}_{\mu\nu} =1-1/\gamma$. When defining $\wbarN_{\cls}(\nu)$,
in case of ties, which can occur when some facilities in
${\overline{N}}(\nu)$ are at the same distance from $\nu$, we use a
tie-breaking rule that is explained in the proof of
Lemma~\ref{lem: PD1: primary overlap} (the only place where
the rule is needed).
\item{(PD')} \emph{Primary demands.}
Primary demands satisfy the following conditions:
\begin{enumerate}
\item\label{PD1:disjoint} For any two different primary demands $\kappa,\kappa'\in P$ we have
$\wbarN_{\cls}(\kappa)\cap \wbarN_{\cls}(\kappa') = \emptyset$.
\item \label{PD1:yi} For each site $i\in\mathbb{F}$,
$ \sum_{\kappa\in P}\sum_{\mu\in
i\cap\wbarN_{\cls}(\kappa)}{\bar x}_{\mu\kappa} \leq
y_i^\ast$. In the summation, as before, we overload notation $i$ to stand for the set of
facilities created on site $i$.
\item \label{PD1:assign} Each demand $\nu\in\overline{\clientset}$ is assigned
to one primary demand $\kappa\in P$ such that
\begin{enumerate}
\item \label{PD1:assign:overlap} $\wbarN_{\cls}(\nu) \cap \wbarN_{\cls}(\kappa) \neq \emptyset$, and
\item \label{PD1:assign:cost}
${\mbox{\scriptsize\rm cls}}dist(\nu)+{\mbox{\scriptsize\rm cls}}max(\nu) \geq
{\mbox{\scriptsize\rm cls}}dist(\kappa)+{\mbox{\scriptsize\rm cls}}max(\kappa)$.
\end{enumerate}
\end{enumerate}
\item{(SI')} \emph{Siblings}. For any pair $\nu,\nu'\in\overline{\clientset}$ of different siblings we have
\begin{enumerate}
\item \label{SI1:siblings disjoint}
${\overline{N}}(\nu)\cap {\overline{N}}(\nu') = \emptyset$.
\item \label{SI1:primary disjoint} If $\nu$ is assigned to a primary demand $\kappa$ then
${\overline{N}}(\nu')\cap \wbarN_{\cls}(\kappa) = \emptyset$. In particular, by Property~(PD'.\ref{PD1:assign:overlap}),
this implies that different sibling demands are assigned to different primary demands, since $\wbarN_{\cls}(\nu')$ is a subset of ${\overline{N}}(\nu')$.
\end{enumerate}
\end{description}
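As an illustration of these constraints, property (NB) for a single demand can be verified mechanically. In the Python sketch below, the maps `x` and `d` and the close/far lists are hypothetical inputs, not output of the partitioning algorithm:

```python
def check_NB(close, far, x, gamma, d, tol=1e-9):
    """Verify property (NB) for one demand (illustrative sketch):
    close/far are disjoint, the close connection value is exactly
    1/gamma, and every close facility is at least as near as every
    far facility. x maps facility -> x_{mu,nu}, d maps facility ->
    distance; both are hypothetical."""
    assert not set(close) & set(far)                         # disjointness
    assert abs(sum(x[m] for m in close) - 1 / gamma) <= tol  # value 1/gamma
    assert all(d[m] <= d[mp] for m in close for mp in far)   # close is nearer
    return True

x = {'a': 0.4, 'b': 0.225, 'c': 0.375}   # sums to 1
d = {'a': 1.0, 'b': 2.0, 'c': 6.0}
assert check_NB(['a', 'b'], ['c'], x, gamma=1.6, d=d)
```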
\paragraph{Modified adaptive partitioning.}
To obtain a fractional solution with the above properties, we employ a
modified adaptive partitioning algorithm. As in Section~\ref{sec:
adaptive partitioning}, we have two phases. In Phase~1 we split
clients into demands and create facilities on sites, while in Phase~2
we augment each demand's connection values ${\bar x}_{\mu\nu}$ so that the total connection
value of each demand $\nu$ is $1$. As the partitioning algorithm proceeds, for any demand $\nu$,
${\overline{N}}(\nu)$ denotes the set of facilities with ${\bar x}_{\mu\nu} > 0$;
hence the notation ${\overline{N}}(\nu)$ actually represents a dynamic set, which becomes fixed
once the partitioning algorithm completes Phase~2. On the
other hand, $\wbarN_{\cls}(\nu)$ and $\wbarN_{\far}(\nu)$ refer to the close
and far neighborhoods at the time when ${\overline{N}}(\nu)$ is fixed.
Similar to the algorithm in Section~\ref{sec: adaptive partitioning},
Phase~1 runs in iterations. Fix some iteration and consider any client
$j$. As before, ${\widetilde N}(j)$ is the neighborhood of $j$ with respect
to the yet unpartitioned solution, namely the set of facilities $\mu$
such that ${\widetilde x}_{\mu j}>0$. Order the facilities in this set as
${\widetilde N}(j) = \braced{\mu_1,...,\mu_q}$ with non-decreasing distance
from $j$, that is $d_{\mu_1 j} \leq d_{\mu_2 j} \leq \ldots \leq
d_{\mu_q j}$. Without loss of generality,
there is an index $l$ for which $\sum_{s=1}^l {\widetilde x}_{\mu_s j} =
1/\gamma$, since we can always split one facility to achieve
this. Then we define $\wtildeN_{\cls}(j) = \braced{\mu_1,...,\mu_l}$.
(Unlike close neighborhoods of demands, $\wtildeN_{\cls}(j)$ can vary over time.)
We also use the notation
\begin{equation*}
\mbox{\rm{tcc}}cls(j) = D(\wtildeN_{\cls}(j), j) = \gamma\sum_{\mu\in\wtildeN_{\cls}(j)} d_{\mu j} {\widetilde x}_{\mu j}
\quad\textrm{ and }\quad
\text{dmax}cls(j) = \max_{\mu \in \wtildeN_{\cls}(j)} d_{\mu j}.
\end{equation*}
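The construction of $\wtildeN_{\cls}(j)$ described above, including the splitting of one facility so that the prefix sums to exactly $1/\gamma$, can be sketched in Python as follows (hypothetical input data; a floating-point tolerance stands in for exact splitting):

```python
def close_neighborhood(facilities, gamma):
    """Take the prefix of facilities nearest to the client whose
    connection values sum to exactly 1/gamma, splitting one facility
    if needed. Sketch of the splitting step only; input is a
    hypothetical list of (distance, x-value) pairs summing to 1."""
    target, total, close = 1.0 / gamma, 0.0, []
    for dist, x in sorted(facilities):
        if total + x > target + 1e-12:
            close.append((dist, target - total))  # split: keep only a piece
            break
        close.append((dist, x))
        total += x
        if abs(total - target) <= 1e-12:
            break
    return close

close = close_neighborhood([(3, 0.5), (1, 0.25), (2, 0.25)], gamma=1.6)
tcc_cls = 1.6 * sum(dist * x for dist, x in close)   # gamma * sum d*x
dmax_cls = max(dist for dist, _ in close)            # max close distance
```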
When the iteration starts, we first find a not-yet-exhausted client
$p$ that minimizes the value of $\mbox{\rm{tcc}}cls(p) + \text{dmax}cls(p)$ and create
a new demand $\nu$ for $p$. Now we have two cases:
\begin{description}
\item{\mycase{1}} $\wtildeN_{\cls}(p) \cap {\overline{N}}(\kappa)\neq\emptyset$
for some existing primary demand $\kappa\in P$. In this case we
assign $\nu$ to $\kappa$. As before, if there are multiple such
$\kappa$, we pick any of them. We also fix ${\bar x}_{\mu \nu} {\,\leftarrow\,}
{\widetilde x}_{\mu p}$ and ${\widetilde x}_{\mu p}{\,\leftarrow\,} 0$ for each $\mu \in
{\widetilde N}(p)\cap {\overline{N}}(\kappa)$. Note that although we
check for overlap between $\wtildeN_{\cls}(p)$ and ${\overline{N}}(\kappa)$,
the facilities we actually move into ${\overline{N}}(\nu)$ include all
facilities in the intersection of ${\widetilde N}(p)$, a bigger set, with
${\overline{N}}(\kappa)$.
At this time, the total connection value
between $\nu$ and $\mu\in {\overline{N}}(\nu)$ is at most $1/\gamma$,
since $\sum_{\mu \in {\overline{N}}(\kappa)}{\bar y}_{\mu} = 1/\gamma$
(this follows from the definition of neighborhoods for new primary demands in Case~2 below)
and we have ${\overline{N}}(\nu) \subseteq {\overline{N}}(\kappa)$ at this point. Later
in Phase 2 we will add additional facilities from ${\widetilde N}(p)$ to
${\overline{N}}(\nu)$ to make $\nu$'s total connection value equal to $1$.
\item{\mycase{2}} $\wtildeN_{\cls}(p) \cap {\overline{N}}(\kappa) = \emptyset$
for all existing primary demands $\kappa\in P$. In this case we
make $\nu$ a primary demand (that is, add it to $P$) and assign it
to itself. We then move the facilities from $\wtildeN_{\cls}(p)$ to
${\overline{N}}(\nu)$, that is for $\mu \in \wtildeN_{\cls}(p)$ we set
${\bar x}_{\mu \nu}{\,\leftarrow\,} {\widetilde x}_{\mu p}$ and
${\widetilde x}_{\mu p}{\,\leftarrow\,} 0$.
It is easy to see that the total connection value of $\nu$ to
${\overline{N}}(\nu)$ is now exactly $1/\gamma$, that is
$\sum_{\mu \in {\overline{N}}(\nu)}{\bar y}_{\mu} = 1/\gamma$.
Moreover, facilities
remaining in ${\widetilde N}(p)$ are all farther away from $\nu$ than
those in ${\overline{N}}(\nu)$. As we add only facilities from ${\widetilde N}(p)$
to ${\overline{N}}(\nu)$ in Phase~2, the final $\wbarN_{\cls}(\nu)$ contains
the same set of facilities as the current set ${\overline{N}}(\nu)$.
(More precisely, $\wbarN_{\cls}(\nu)$ consists of the facilities that
either are currently in ${\overline{N}}(\nu)$ or were obtained from splitting
the facilities currently in ${\overline{N}}(\nu)$.)
\end{description}
Once all clients are exhausted, that is, each client $j$ has $r_j$
demands created, Phase~1 concludes. We then run Phase~2, the
augmenting phase, following the same steps as in Section~\ref{sec:
adaptive partitioning}. For each client $j$ and each demand $\nu\in
j$ with total connection value to ${\overline{N}}(\nu)$ less than $1$
(that is, $\sum_{\mu\in{\overline{N}}(\nu)} {\bar x}_{\mu\nu} < 1$),
we use our ${\textsc{AugmentToUnit}}()$
procedure to add additional facilities (possibly split, if necessary)
from ${\widetilde N}(j)$ to ${\overline{N}}(\nu)$ to make the total connection value
between $\nu$ and ${\overline{N}}(\nu)$ equal $1$.
This completes the description of the partitioning
algorithm. Summarizing, for each client $j\in\mathbb{C}$ we
created $r_j$ demands on the same point as $j$, and we created a number
of facilities at each site $i\in\mathbb{F}$. The resulting sets of
demands and facilities are denoted $\overline{\clientset}$ and $\overline{\sitesset}$,
respectively. For each facility $\mu\in i$ we defined its fractional
opening value ${\bar y}_\mu$, $0\le {\bar y}_\mu\le 1$, and for each demand
$\nu\in j$ we defined its fractional connection value
${\bar x}_{\mu\nu}\in \braced{0,{\bar y}_\mu}$. The connections with
${\bar x}_{\mu\nu} > 0$ define the neighborhood ${\overline{N}}(\nu)$. The facilities in
${\overline{N}}(\nu)$ that are closest to $\nu$ and have total connection value from $\nu$ equal
$1/\gamma$ form the close neighborhood $\wbarN_{\cls}(\nu)$, while the remaining facilities
in ${\overline{N}}(\nu)$ form the far neighborhood
$\wbarN_{\far}(\nu)$. It remains to show that this partitioning satisfies all the desired
properties.
\paragraph{Correctness of partitioning.}
We now argue that our partitioned fractional solution $(\bar{\boldsymbol{x}},\bar{\boldsymbol{y}})$
satisfies all the stated properties. Properties~(PS), (CO) and (NB) are
directly enforced by the algorithm.
(PD'.\ref{PD1:disjoint}) holds because for each primary demand
$\kappa\in P$, created from some client $p$, $\wbarN_{\cls}(\kappa)$ is the same set as
$\wtildeN_{\cls}(p)$ at the time when $\kappa$ was created, and
$\wtildeN_{\cls}(p)$ is removed from ${\widetilde N}(p)$ right after this
step. Further, the partitioning algorithm makes $\kappa$ a primary
demand only if $\wtildeN_{\cls}(p)$ is disjoint from the set
${\overline{N}}(\kappa')$ of all existing primary demands $\kappa'$ at that
iteration, but these neighborhoods are the same as the final close
neighborhoods $\wbarN_{\cls}(\kappa')$.
The justification of (PD'.\ref{PD1:yi}) is similar to that for
(PD.\ref{PD:yi}) from Section~\ref{sec: adaptive partitioning}. All
close neighborhoods of primary demands are disjoint, due to
(PD'.\ref{PD1:disjoint}), so each facility $\mu \in i$ can appear in
at most one $\wbarN_{\cls}(\kappa)$, for some $\kappa\in P$. Condition
(CO) implies that ${\bar y}_{\mu} = {\bar x}_{\mu\kappa}$ for $\mu \in \wbarN_{\cls}(\kappa)$.
As a result, the summation on
the left-hand side is not larger than $\sum_{\mu\in i}{\bar y}_{\mu} = y_i^\ast$.
Regarding (PD'.\ref{PD1:assign:overlap}), at first glance this
property seems to follow directly from the algorithm, as we only
assign a demand $\nu$ to a primary demand $\kappa$ when ${\overline{N}}(\nu)$
at that iteration overlaps with ${\overline{N}}(\kappa)$ (which is equal to
the final value of $\wbarN_{\cls}(\kappa)$). However, it is a little
more subtle, as the final $\wbarN_{\cls}(\nu)$ may contain facilities
added to ${\overline{N}}(\nu)$ in Phase 2. Those facilities may turn out to be
closer to $\nu$ than some facilities in ${\overline{N}}(\kappa) \cap
{\widetilde N}(j)$ (not $\wtildeN_{\cls}(j)$) that we added to
${\overline{N}}(\nu)$ in Phase 1. If the final $\wbarN_{\cls}(\nu)$ consists only of
facilities added in Phase 2, we no longer have the desired overlap of
$\wbarN_{\cls}(\kappa)$ and $\wbarN_{\cls}(\nu)$. Luckily this bad scenario
never occurs. We postpone the proof of this property to
Lemma~\ref{lem: PD1: primary overlap}. The proof of
(PD'.\ref{PD1:assign:cost}) is similar to that of Lemma~\ref{lem:
PD:assign:cost holds}, and we defer it to Lemma~\ref{lem: PD1:
primary optimal}.
(SI'.\ref{SI1:siblings disjoint}) follows directly from the algorithm
because for each demand $\nu\in j$, all facilities added to
${\overline{N}}(\nu)$ are immediately removed from ${\widetilde N}(j)$ and each
facility is added to ${\overline{N}}(\nu)$ of exactly one demand $\nu \in j$.
Splitting facilities obviously preserves (SI'.\ref{SI1:siblings disjoint}).
The proof of (SI'.\ref{SI1:primary disjoint}) is similar to that of
Lemma~\ref{lem: property SI:primary disjoint holds}. If $\kappa=\nu$
then (SI'.\ref{SI1:primary disjoint}) follows from
(SI'.\ref{SI1:siblings disjoint}), so we can assume that
$\kappa\neq\nu$. Suppose that $\nu'\in j$ is assigned to $\kappa'\in
P$ and consider the situation after Phase~1. By the way we reassign
facilities in Case~1, at this time we have ${\overline{N}}(\nu)\subseteq
{\overline{N}}(\kappa) = \wbarN_{\cls}(\kappa)$ and ${\overline{N}}(\nu')\subseteq
{\overline{N}}(\kappa') =\wbarN_{\cls}(\kappa')$, so ${\overline{N}}(\nu')\cap
\wbarN_{\cls}(\kappa) = \emptyset$, by (PD'.\ref{PD1:disjoint}).
Moreover, we have ${\widetilde N}(j) \cap \wbarN_{\cls}(\kappa) = \emptyset$
after this iteration, because any facilities that were also in
$\wbarN_{\cls}(\kappa)$ were removed from ${\widetilde N}(j)$ when $\nu$ was
created. In Phase~2, augmentation does not change $\wbarN_{\cls}(\kappa)$
and all facilities added to ${\overline{N}}(\nu')$ are from the set
${\widetilde N}(j)$ at the end of Phase 1, which is a subset of the set
${\widetilde N}(j)$ after this iteration, since ${\widetilde N}(j)$ can only shrink.
So the condition (SI'.\ref{SI1:primary disjoint}) will
remain true.
\begin{lemma} \label{lem: PD1: primary overlap}
Property (PD'.\ref{PD1:assign:overlap}) holds.
\end{lemma}
\begin{proof}
Let $j$ be the client for which $\nu\in j$. We consider an iteration
when we create $\nu$ from $j$ and assign it to $\kappa$, and
within this proof, notation $\wtildeN_{\cls}(j)$ and ${\widetilde N}(j)$
will refer to the value of the sets at this particular time.
At this time, ${\overline{N}}(\nu)$ is initialized to ${\widetilde N}(j)\cap
{\overline{N}}(\kappa)$. Recall that ${\overline{N}}(\kappa)$ is now equal to the
final $\wbarN_{\cls}(\kappa)$ (taking into account facility splitting). We
would like to show that the set $\wtildeN_{\cls}(j)\cap
\wbarN_{\cls}(\kappa)$ (which is not empty) will be included in
$\wbarN_{\cls}(\nu)$ at the end. Technically speaking, this will not be
true due to facility splitting, so we need to rephrase this claim
and the proof in terms of the set of facilities obtained after the
algorithm completes.
\begin{figure}
\caption{Illustration of the sets ${\overline{N}}(\kappa)$, $A$, $B$, $E^-$ and $E^+$.}
\label{fig: sets lemma PD'3a}
\end{figure}
We define the sets $A$, $B$, $E^-$ and $E^+$ as the subsets of
$\overline{\sitesset}$ (the final set of facilities) that were obtained from
splitting facilities in the sets ${\widetilde N}(j)$, $\wtildeN_{\cls}(j)\cap
\wbarN_{\cls}(\kappa)$, $\wtildeN_{\cls}(j) - \wbarN_{\cls}(\kappa)$ and
${\widetilde N}(j) - \wtildeN_{\cls}(j)$, respectively. (See
Figure~\ref{fig: sets lemma PD'3a}.) We claim that at the end
$B\subseteq \wbarN_{\cls}(\nu)$, with the caveat that the ties in the
definition of $\wbarN_{\cls}(\nu)$ are broken in favor of the
facilities in $B$. (This is the tie-breaking rule that we mentioned
in the definition of $\wbarN_{\cls}(\nu)$.) This will be sufficient to
prove the lemma because $B\neq\emptyset$, by the algorithm.
We now prove this claim. In this paragraph ${\overline{N}}(\nu)$ denotes the
final set ${\overline{N}}(\nu)$ after both phases are completed. Thus the total
connection value of ${\overline{N}}(\nu)$ to $\nu$ is $1$.
Note first that
$B\subseteq {\overline{N}}(\nu) \subseteq A$, because we never remove
facilities from ${\overline{N}}(\nu)$ and we only add facilities from
${\widetilde N}(j)$. Also, $B\cup E^-$ represents the facilities obtained
from $\wtildeN_{\cls}(j)$, so $\sum_{\mu\in B\cup E^-} {\bar y}_{\mu} =
1/\gamma$. This and $B\subseteq {\overline{N}}(\nu)$ imply that the total
connection value of $B\cup ({\overline{N}}(\nu)\cap E^-)$ to $\nu$ is at
most $1/\gamma$. But all facilities in $B\cup ({\overline{N}}(\nu)\cap E^-)$
are closer to $\nu$ (taking into account our tie breaking in property (NB))
than those in $E^+\cap {\overline{N}}(\nu)$. It follows
that $B\subseteq \wbarN_{\cls}(\nu)$, completing the proof.
\end{proof}
\begin{lemma}\label{lem: PD1: primary optimal}
Property (PD'.\ref{PD1:assign:cost}) holds.
\end{lemma}
\begin{proof}
This proof is similar to that for Lemma~\ref{lem: PD:assign:cost holds}.
For a client $j$ and demand $\eta$, we will write
$\mbox{\rm{tcc}}cls^\eta(j)$ and $\text{dmax}cls^\eta(j)$ to denote the values of
$\mbox{\rm{tcc}}cls(j)$ and $\text{dmax}cls(j)$ at the time when $\eta$
was created. (Here $\eta$ may or may not be a demand of client $j$).
Suppose $\nu \in j$ is assigned to a primary demand $\kappa \in p$.
By the way primary demands are constructed in the partitioning
algorithm, $\wtildeN_{\cls}(p)$ becomes ${\overline{N}}(\kappa)$, which is equal
to the final value of $\wbarN_{\cls}(\kappa)$. So we have
${\mbox{\scriptsize\rm cls}}dist(\kappa) = \mbox{\rm{tcc}}cls^\kappa (p)$ and ${\mbox{\scriptsize\rm cls}}max(\kappa) =
\text{dmax}cls^\kappa(p)$. Further, since we choose $p$ to minimize
$\mbox{\rm{tcc}}cls(p) + \text{dmax}cls(p)$, we have that $\mbox{\rm{tcc}}cls^\kappa(p) +
\text{dmax}cls^\kappa(p) \leq \mbox{\rm{tcc}}cls^\kappa(j) + \text{dmax}cls^\kappa(j)$.
Using an argument analogous to that in the proof of Lemma~\ref{lem: tcc optimal},
our modified partitioning algorithm guarantees that
$\mbox{\rm{tcc}}cls^{\kappa}(j) \leq \mbox{\rm{tcc}}cls^{\nu}(j) \leq {\mbox{\scriptsize\rm cls}}dist(\nu)$ and
$\text{dmax}cls^{\kappa}(j) \leq \text{dmax}cls^{\nu}(j) \leq {\mbox{\scriptsize\rm cls}}max(\nu)$ since $\nu$ was
created later.
Therefore, we have
\begin{align*}
{\mbox{\scriptsize\rm cls}}dist(\kappa) + {\mbox{\scriptsize\rm cls}}max(\kappa) &= \mbox{\rm{tcc}}cls^{\kappa}(p) + \text{dmax}cls^{\kappa}(p)
\\
&\leq \mbox{\rm{tcc}}cls^{\kappa}(j) + \text{dmax}cls^{\kappa}(j)
\leq \mbox{\rm{tcc}}cls^{\nu}(j) + \text{dmax}cls^{\nu}(j)
\leq {\mbox{\scriptsize\rm cls}}dist(\nu) + {\mbox{\scriptsize\rm cls}}max(\nu),
\end{align*}
completing the proof.
\end{proof}
Now we have completed the proof that the computed partitioning satisfies
all the required properties.
\paragraph{Algorithm~{\mbox{\rm EBGS}}.}
The complete algorithm starts with solving the LP(\ref{eqn:fac_primal}) and
computing the partitioning described earlier in this section. Given
the partitioned fractional solution $(\bar{\boldsymbol{x}}, \bar{\boldsymbol{y}})$ with the
desired properties, we start the process of opening facilities and
making connections to obtain an integral solution. To this end, for
each primary demand $\kappa\in P$, we open exactly one facility
$\phi(\kappa)$ in $\wbarN_{\cls}(\kappa)$, where each
$\mu\in\wbarN_{\cls}(\kappa)$ is chosen as $\phi(\kappa)$ with
probability $\gamma{\bar y}_{\mu}$. For all facilities
$\mu\in\overline{\sitesset} - \bigcup_{\kappa\in P}\wbarN_{\cls}(\kappa)$, we
open them independently, each with probability
$\gamma{\bar y}_{\mu}$.
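A minimal Python sketch of this opening step, under hypothetical data: each close neighborhood of a primary demand yields exactly one open facility (the weights $\gamma{\bar y}_\mu$ sum to $1$ there, since the connection values sum to $1/\gamma$), while all remaining facilities flip independent coins.

```python
import random

def open_facilities(primary_close, other, ybar, gamma):
    """Opening step (sketch, hypothetical data): each list in
    primary_close is the close neighborhood of one primary demand, in
    which exactly one facility mu is opened with probability
    gamma * ybar[mu]; facilities in `other` are opened independently
    with probability gamma * ybar[mu]."""
    opened = set()
    for close in primary_close:                 # one draw per primary demand
        weights = [gamma * ybar[m] for m in close]
        opened.add(random.choices(close, weights=weights)[0])
    for m in other:                             # independent coin flips
        if random.random() < gamma * ybar[m]:
            opened.add(m)
    return opened

ybar = {'a': 0.3, 'b': 0.325, 'c': 0.2}
opened = open_facilities([['a', 'b']], ['c'], ybar, gamma=1.6)
assert len(opened & {'a', 'b'}) == 1   # exactly one close facility opens
```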
We claim that all probabilities are well-defined, that is
$\gamma{\bar y}_{\mu} \le 1$ for all $\mu$. Indeed, if ${\bar y}_{\mu}>0$ then
${\bar y}_{\mu} = {\bar x}_{\mu\nu}$ for some $\nu$, by Property~(CO).
If $\mu\in \wbarN_{\cls}(\nu)$ then the definition of close
neighborhoods implies that ${\bar x}_{\mu\nu} \le 1/\gamma$.
If $\mu\in \wbarN_{\far}(\nu)$ then
${\bar x}_{\mu\nu} \le 1-1/\gamma \le 1/\gamma$, because $\gamma < 2$.
Thus $\gamma{\bar y}_{\mu} \le 1$, as claimed.
Next, we connect demands to facilities. Each primary demand
$\kappa\in P$ will connect to the only open facility $\phi(\kappa)$ in
$\wbarN_{\cls}(\kappa)$. For each non-primary demand $\nu\in \overline{\clientset}
- P$, if there is an open facility in $\wbarN_{\cls}(\nu)$ then we
connect $\nu$ to the nearest such facility. Otherwise, we connect
$\nu$ to the nearest far facility in $\wbarN_{\far}(\nu)$ if one is
open. Otherwise, we connect $\nu$ to its \emph{target facility}
$\phi(\kappa)$, where $\kappa$ is the primary demand that $\nu$ is
assigned to.
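The three-tier connection rule can be summarized in a short Python sketch; the function and its arguments are hypothetical stand-ins for the algorithm's data structures:

```python
def connect(open_facilities, close_nbhd, far_nbhd, target, dist):
    """Three-tier connection rule (sketch): prefer the nearest open
    close-neighbor, then the nearest open far-neighbor, then fall back
    to the target facility phi(kappa), which is always open."""
    open_close = [m for m in close_nbhd if m in open_facilities]
    if open_close:
        return min(open_close, key=lambda m: dist[m])
    open_far = [m for m in far_nbhd if m in open_facilities]
    if open_far:
        return min(open_far, key=lambda m: dist[m])
    return target

dist = {'a': 4, 'b': 1, 'c': 2, 't': 9}
# A far facility is used only when no close neighbor is open:
assert connect({'c', 't'}, {'a', 'b'}, {'c'}, 't', dist) == 'c'
# The target facility is the last resort:
assert connect({'t'}, {'a', 'b'}, {'c'}, 't', dist) == 't'
```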
\paragraph{Analysis.}
By the algorithm, for each client $j$, all its $r_j$ demands are connected to
open facilities. If two different siblings $\nu,\nu'\in j$ are assigned, respectively,
to primary demands $\kappa$, $\kappa'$ then, by
Properties~(SI'.\ref{SI1:siblings disjoint}), (SI'.\ref{SI1:primary
disjoint}), and (PD'.\ref{PD1:disjoint}) we have
\begin{equation*}
( {\overline{N}}(\nu) \cup \wbarN_{\cls}(\kappa)) \cap ({\overline{N}}(\nu')\cup \wbarN_{\cls}(\kappa')) = \emptyset.
\end{equation*}
This condition guarantees that $\nu$ and $\nu'$ are assigned to different facilities,
regardless of whether each of them is connected to a neighbor facility or to its target facility.
Therefore the computed solution is feasible.
We now estimate the cost of the solution computed by Algorithm {\mbox{\rm EBGS}}. The lemma
below bounds the expected facility cost.
\begin{lemma} \label{lem: EBGS facility cost}
The expectation of facility cost $F_{\mbox{\tiny\rm EBGS}}$ of Algorithm~{\mbox{\rm EBGS}} is at most $\gamma F^\ast$.
\end{lemma}
\begin{proof}
By the algorithm, each facility $\mu\in \overline{\sitesset}$ is opened with
probability $\gamma {\bar y}_{\mu}$, independently of whether it belongs to the
close neighborhood of a primary demand or not. Therefore, by
linearity of expectation, we have that the expected facility cost is
\begin{equation*}
\mathbb{E}[F_{\mbox{\tiny\rm EBGS}}] = \sum_{\mu \in \overline{\sitesset}} f_\mu \gamma {\bar y}_{\mu}
= \gamma \sum_{i\in \mathbb{F}} f_i \sum_{\mu\in i} {\bar y}_{\mu}
= \gamma \sum_{i \in \mathbb{F}} f_i y_i^\ast = \gamma F^\ast,
\end{equation*}
where the third equality follows from (PS.\ref{PS:yi}).
\end{proof}
In the remainder of this section we focus on the connection cost. Let $C_{\nu}$ be the
random variable representing the connection cost of a demand $\nu$. Our objective is
to show that the expectation of $C_\nu$ satisfies
\begin{equation}
\mathbb{E}[C_\nu] \leq C^{\avg}(\nu) \cdot \max\left\{\frac{1/e+1/e^\gamma}{1-1/\gamma}, 1 + \frac{2}{e^\gamma}\right\}.
\label{eqn: expectation of C_nu for EBGS}
\end{equation}
If $\nu$ is a primary demand then, due to the algorithm, we have $\mathbb{E}[C_{\nu}] =
{\mbox{\scriptsize\rm cls}}dist(\nu) \le C^{\avg}(\nu)$, so (\ref{eqn: expectation of C_nu for EBGS}) is
easily satisfied.
Thus for the rest of the argument we will focus on the case when $\nu$
is a non-primary demand. Recall that the
algorithm connects $\nu$ to the nearest open facility in
$\wbarN_{\cls}(\nu)$ if at least one facility in $\wbarN_{\cls}(\nu)$ is
open. Otherwise the algorithm connects $\nu$ to the nearest open
facility in $\wbarN_{\far}(\nu)$, if any. In the event that no facility in
${\overline{N}}(\nu)$ opens, the algorithm will connect $\nu$ to its target
facility $\phi(\kappa)$, where $\kappa$ is the primary demand that
$\nu$ was assigned to, and $\phi(\kappa)$ is the only facility open in
$\wbarN_{\cls}(\kappa)$. Let $\Lambda^\nu$ denote the event that at least
one facility in ${\overline{N}}(\nu)$ is open and $\Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}$ be the
event that at least one facility in $\wbarN_{\cls}(\nu)$ is open.
$\neg \Lambda^\nu$ denotes the complement event of $\Lambda^\nu$, that is,
the event that none of $\nu$'s neighbors opens.
We want to estimate the following three conditional expectations:
\begin{equation*}
\mathbb{E}[C_{\nu} \mid
\Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}],\quad \mathbb{E}[C_{\nu} \mid \Lambda^\nu \wedge \neg
\Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}], \quad\text{and}\quad \mathbb{E}[C_{\nu} \mid \neg \Lambda^\nu],
\end{equation*}
and their associated probabilities.
We start with a lemma dealing with the third expectation,
$\mathbb{E}[C_\nu\mid\neg \Lambda^{\nu}] = \mathbb{E}[d_{\phi(\kappa)\nu} \mid
\neg \Lambda^{\nu}]$. The proof of this lemma relies on
Properties~(PD'.\ref{PD1:assign:overlap}) and
(PD'.\ref{PD1:assign:cost}) of modified partitioning and follows the
reasoning in the proof of a similar lemma
in~\cite{ByrkaGS10,ByrkaA10}. For the sake of completeness, we
include a proof in~\ref{sec: proof of lemma 15}.
\begin{lemma}\label{lem: EBGS target connection cost}
Assuming that no facility in ${\overline{N}}(\nu)$ opens, the expected connection
cost of $\nu$ is
\begin{equation}
\mathbb{E}[C_{\nu} \mid \neg \Lambda^{\nu}] \leq
{\mbox{\scriptsize\rm cls}}dist(\nu) + 2{\mbox{\scriptsize\rm far}}dist(\nu).
\label{eqn: expected connection cost target facility}
\end{equation}
\end{lemma}
\begin{proof}
See \ref{sec: proof of lemma 15}.
\end{proof}
Next, we derive some estimates for the expected cost of direct
connections. The next technical lemma is a generalization of
Lemma~\ref{lem: echs expected C_nu}. In Lemma~\ref{lem: echs expected
C_nu} we bound the expected distance to the closest open facility in
${\overline{N}}(\nu)$, conditioned on at least one facility in ${\overline{N}}(\nu)$
being open. The lemma below provides a similar estimate for an
arbitrary set $A$ of facilities in ${\overline{N}}(\nu)$, conditioned on that
at least one facility in set $A$ is open. Recall that $D(A,\nu) =
\sum_{\mu \in A} d_{\mu\nu} {\bar y}_{\mu} / \sum_{\mu \in A}
{\bar y}_{\mu}$ is the average distance from $\nu$ to a facility in $A$.
\begin{lemma}\label{lem: expected distance in EBGS}
For any non-empty set $A\subseteq {\overline{N}}(\nu)$, let $\Lambda^\nu_A$ be
the event that at least one facility in $A$ is opened by Algorithm
{\mbox{\rm EBGS}}, and denote by $C_\nu(A)$ the random variable representing
the distance from $\nu$ to the closest open facility in $A$. Then
the expected distance from $\nu$ to the nearest open facility in
$A$, conditioned on at least one facility in $A$ being opened, is
\begin{equation*}
\mathbb{E}[C_\nu(A) \mid \Lambda^\nu_A ] \le D(A,\nu).
\end{equation*}
\end{lemma}
\begin{proof}
The proof follows the same reasoning as the proof of Lemma~\ref{lem:
echs expected C_nu}, so we only sketch it here. We start with a
similar grouping of facilities in $A$: for each primary demand
$\kappa$, if $\wbarN_{\cls}(\kappa)\cap A\neq\emptyset$ then
$\wbarN_{\cls}(\kappa)\cap A$ forms a group. Facilities in $A$ that are
not in a neighborhood of any primary demand form singleton groups.
We denote these groups $G_1,...,G_k$. It is clear that the groups
are disjoint because of (PD'.\ref{PD1:disjoint}). Denoting by
${\bar d}_s = D(G_s, \nu)$ the average distance from $\nu$ to a group $G_s$, we
can assume that these groups are ordered so that ${\bar d}_1\le ... \le
{\bar d}_k$.
Each group can have at most one facility open and the events
representing opening of any two facilities that belong to different
groups are independent. To estimate the distance from $\nu$ to the
nearest open facility in $A$, we use an alternative
random process to make connections, which is easier to
analyze. Instead of connecting $\nu$ to the nearest open facility in
$A$, we choose the smallest $s$ for which $G_s$ has an open
facility and connect $\nu$ to this facility. (Thus we select an
open facility with respect to the minimum ${\bar d}_s$, not the actual
distance from $\nu$ to this facility.) This can only increase the
expected connection cost, thus denoting $g_s = \sum_{\mu\in G_s}
\gamma{\bar y}_\mu$ for all $s=1,\ldots,k$, and letting $\mathbb{P}[\Lambda^\nu_A]$
be the probability that $A$ has at least one facility open, we have
\begin{align}
\mathbb{E}[C_\nu(A) \mid \Lambda^\nu_A] &\leq \frac{1}{\mathbb{P}[\Lambda^\nu_A]} ({\bar d}_1 g_1 +
{\bar d}_2 g_2 (1 - g_1) + \ldots + {\bar d}_k g_k(1 -
g_1)\ldots(1-g_{k-1}))
\label{eqn: dist set to nu 1}
\\
&\leq \frac{1}{\mathbb{P}[\Lambda^\nu_A]} \frac{\sum_{s=1}^k {\bar d}_s
g_s}{\sum_{s=1}^k g_s} (1 - \prod_{s=1}^k (1 - g_s))
\label{eqn: dist set to nu 2}
\\
\notag
&= \frac{\sum_{s=1}^k {\bar d}_s g_s}{\sum_{s=1}^k g_s} =
\frac{\sum_{\mu \in A} d_{\mu\nu} \gamma {\bar y}_{\mu}}{\sum_{\mu
\in A} \gamma {\bar y}_{\mu}}
\\
\notag
&= \frac{\sum_{\mu \in A} d_{\mu\nu} {\bar y}_{\mu}}{\sum_{\mu \in A}
{\bar y}_{\mu}} = D(A, \nu).
\end{align}
Inequality (\ref{eqn: dist set to nu 2}) follows from inequality
(\ref{eq:min expected distance}) in Appendix~\ref{sec: ECHSinequality}. The rest of the
derivation follows from $\mathbb{P}[\Lambda^\nu_A] = 1 - \prod_{s=1}^k (1 -
g_s)$, and the definition of ${\bar d}_s$, $g_s$ and $D(A,\nu)$.
\end{proof}
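As a sanity check (our illustration, not part of the original argument), the chain of inequalities above can be verified numerically: for group distances ${\bar d}_s$ sorted in nondecreasing order and arbitrary opening probabilities $g_s$, the conditional expectation of the sequential connection process never exceeds the average distance $D(A,\nu)$. The sketch below assumes nothing beyond the quantities ${\bar d}_s$ and $g_s$ from the proof.

```python
import random

def sequential_expected_cost(d, g):
    """E[C_nu(A) | Lambda_A]: connect to the first group (in sorted order of
    average distance d[s]) that opens; group s opens independently with
    probability g[s]. Conditioned on at least one group opening."""
    total, p_none = 0.0, 1.0
    for ds, gs in zip(d, g):
        total += ds * gs * p_none   # group s is the first one to open
        p_none *= 1.0 - gs
    return total / (1.0 - p_none)

def average_distance(d, g):
    """D(A, nu): the g-weighted average distance to the facility set A."""
    return sum(ds * gs for ds, gs in zip(d, g)) / sum(g)

random.seed(0)
for _ in range(1000):
    k = random.randint(1, 8)
    d = sorted(random.uniform(0.0, 10.0) for _ in range(k))
    g = [random.uniform(0.01, 0.99) for _ in range(k)]
    assert sequential_expected_cost(d, g) <= average_distance(d, g) + 1e-9
```

The check passes on all random instances, consistent with the lemma.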
A consequence of Lemma~\ref{lem: expected distance in EBGS} is the
following corollary which bounds the other two expectations
of $C_\nu$, when at least one facility is opened in $\wbarN_{\cls}(\nu)$,
and when no facility in $\wbarN_{\cls}(\nu)$ opens but a facility in
$\wbarN_{\far}(\nu)$ is opened.
\begin{corollary} \label{coro: EBGS close and far distance}
{\rm (a)} $\mathbb{E}[C_{\nu} \mid \Lambda_{{\mbox{\scriptsize\rm cls}}}^\nu] \leq {\mbox{\scriptsize\rm cls}}dist(\nu)$,
and
{\rm (b)} $\mathbb{E}[C_{\nu} \mid \Lambda^\nu \wedge \neg \Lambda_{{\mbox{\scriptsize\rm cls}}}^\nu]
\leq {\mbox{\scriptsize\rm far}}dist(\nu)$.
\end{corollary}
\begin{proof}
When there is an open facility in $\wbarN_{\cls}(\nu)$, the algorithm
connects $\nu$ to the nearest open facility in
$\wbarN_{\cls}(\nu)$. When no facility in $\wbarN_{\cls}(\nu)$ opens but
some facility in $\wbarN_{\far}(\nu)$ opens, the algorithm connects
$\nu$ to the nearest open facility in $\wbarN_{\far}(\nu)$. The rest of
the proof follows from Lemma~\ref{lem: expected distance in
EBGS}. By setting the set $A$ in Lemma~\ref{lem: expected distance
in EBGS} to $\wbarN_{\cls}(\nu)$, we have
\begin{equation}
\mathbb{E}[C_{\nu} \mid \Lambda_{{\mbox{\scriptsize\rm cls}}}^\nu] \leq D(\wbarN_{\cls}(\nu), \nu)
= {\mbox{\scriptsize\rm cls}}dist(\nu),
\label{eqn: expected connection cost close facility}
\end{equation}
proving part (a), and by setting the set $A$ to $\wbarN_{\far}(\nu)$, we have
\begin{equation}
\mathbb{E}[C_{\nu}
\mid \Lambda^\nu \wedge \neg \Lambda_{{\mbox{\scriptsize\rm cls}}}^\nu] \leq
D(\wbarN_{\far}(\nu), \nu) = {\mbox{\scriptsize\rm far}}dist(\nu),
\label{eqn: expected connection cost far facility}
\end{equation}
which proves part (b).
\end{proof}
Given the estimate on the three expected distances when $\nu$ connects
to its close facility in $\wbarN_{\cls}(\nu)$ in (\ref{eqn: expected
connection cost close facility}), or its far facility in
$\wbarN_{\far}(\nu)$ in (\ref{eqn: expected connection cost far
facility}), or its target facility $\phi(\kappa)$ in (\ref{eqn:
expected connection cost target facility}), the only missing pieces
are estimates on the corresponding probabilities of each event, which
we provide in the next lemma. Once done, we shall put all the pieces together
and prove the desired inequality on $\mathbb{E}[C_{\nu}]$, namely
(\ref{eqn: expectation of C_nu for EBGS}).
The next lemma bounds the probabilities of the events
that no facility in $\wbarN_{\cls}(\nu)$, respectively in ${\overline{N}}(\nu)$, is
opened by the algorithm.
\begin{lemma}\label{lem: close and far neighbor probability}
{\rm (a)} $\mathbb{P}[\neg\Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}] \le 1/e$, and
{\rm (b)} $\mathbb{P}[\neg\Lambda^\nu] \le 1/e^\gamma$.
\end{lemma}
\begin{proof}
(a) To estimate $\mathbb{P}[\neg\Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}]$, we again consider a
grouping of facilities in $\wbarN_{\cls}(\nu)$, as in the proof of
Lemma~\ref{lem: expected distance in EBGS}, according to the primary
demand's close neighborhood that they fall in, with facilities not
belonging to such neighborhoods forming their own singleton groups.
As before, the groups are denoted $G_1, \ldots, G_k$. It is easy to
see that $\sum_{s=1}^k g_s = \sum_{\mu \in \wbarN_{\cls}(\nu)} \gamma
{\bar y}_{\mu} = 1$. For any group $G_s$, the probability that a
facility in this group opens is $\sum_{\mu \in G_s} \gamma
{\bar y}_{\mu} = g_s$ because in the algorithm at most one facility in
a group can be chosen and each is chosen with probability $\gamma
{\bar y}_{\mu}$. Therefore the probability that no facility in $\wbarN_{\cls}(\nu)$
opens is $\prod_{s=1}^k (1 - g_s)$, which is
at most $e^{-\sum_{s=1}^k g_s} = 1/e$. Hence
$\mathbb{P}[\neg\Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}] \leq 1/e$.
(b)
This proof is similar to the proof of (a). The probability $\mathbb{P}[\neg\Lambda^\nu]$ is at most
$e^{-\sum_{s=1}^k g_s} = 1/e^\gamma$, because we now have
$\sum_{s=1}^k g_s = \gamma \sum_{\mu \in {\overline{N}}(\nu)} {\bar y}_{\mu} =
\gamma \cdot 1 = \gamma$.
\end{proof}
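The only analytic fact used in both parts is the elementary bound $\prod_s (1-g_s) \le e^{-\sum_s g_s}$, which follows from $1-x \le e^{-x}$ applied termwise. A quick standalone check (illustration only, not from the paper):

```python
import math
import random

def prob_no_facility_opens(g):
    """Probability that none of the independent groups opens a facility,
    where group s opens one with probability g[s]."""
    p = 1.0
    for gs in g:
        p *= 1.0 - gs
    return p

random.seed(1)
for _ in range(1000):
    g = [random.uniform(0.0, 1.0) for _ in range(random.randint(1, 10))]
    # termwise 1 - x <= e^{-x} gives the bounds used in parts (a) and (b)
    assert prob_no_facility_opens(g) <= math.exp(-sum(g)) + 1e-12
```

With $\sum_s g_s = 1$ this gives the $1/e$ bound of part (a), and with $\sum_s g_s = \gamma$ the $1/e^\gamma$ bound of part (b).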
We are now ready to bound the overall connection cost of
Algorithm~{\mbox{\rm EBGS}}, namely inequality (\ref{eqn: expectation of C_nu for EBGS}).
\begin{lemma}\label{lem: EBGS nu's connection cost}
The expected connection cost of $\nu$ is
\begin{equation*}
\mathbb{E}[C_\nu] \le
C^{\avg}(\nu)\cdot\max\Big\{\frac{1/e+1/e^\gamma}{1-1/\gamma}, 1+\frac{2}{e^\gamma}\Big\}.
\end{equation*}
\end{lemma}
\begin{proof}
Recall that, to connect $\nu$, the algorithm uses the closest facility in
$\wbarN_{\cls}(\nu)$ if one is opened; otherwise it will try to connect $\nu$
to the closest facility in $\wbarN_{\far}(\nu)$. Failing that, it will
connect $\nu$ to $\phi(\kappa)$, the sole facility open in the
neighborhood of $\kappa$, the primary demand $\nu$ was assigned
to. Given that, we estimate $\mathbb{E}[C_\nu]$ as follows:
\begin{align}
\mathbb{E}[C_{\nu}]
\;&= \;\mathbb{E}[C_{\nu}\mid \Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}] \cdot \mathbb{P}[\Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}]
\;+\; \mathbb{E}[C_{\nu}\mid \Lambda^\nu\ \wedge\neg \Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}]
\cdot \mathbb{P}[\Lambda^\nu\, \wedge\neg \Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}]
\notag
\\
& \quad\quad\quad
+ \; \mathbb{E}[C_{\nu}\mid \neg \Lambda^\nu] \cdot \mathbb{P}[\neg \Lambda^\nu]
\notag
\\
&\leq \; {\mbox{\scriptsize\rm cls}}dist(\nu) \cdot \mathbb{P}[\Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}]
\;+\; {\mbox{\scriptsize\rm far}}dist(\nu)
\cdot \mathbb{P}[\Lambda^\nu\, \wedge\neg \Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}]
\label{eqn: apply three expected dist}
\\
&\quad\quad\quad
+\; [\,{\mbox{\scriptsize\rm cls}}dist(\nu) + 2{\mbox{\scriptsize\rm far}}dist(\nu)\,] \cdot \mathbb{P}[\neg\Lambda^\nu]
\notag
\\
&=\; [\,{\mbox{\scriptsize\rm cls}}dist(\nu) + {\mbox{\scriptsize\rm far}}dist(\nu)\,]\cdot \mathbb{P}[\neg\Lambda^\nu]
\;+\;
[\,{\mbox{\scriptsize\rm far}}dist(\nu) -{\mbox{\scriptsize\rm cls}}dist(\nu)\,]
\cdot \mathbb{P}[\neg\Lambda^\nu_{{\mbox{\scriptsize\rm cls}}}]
\;+\; {\mbox{\scriptsize\rm cls}}dist(\nu)
\notag
\\
&\leq\; [\,{\mbox{\scriptsize\rm cls}}dist(\nu) + {\mbox{\scriptsize\rm far}}dist(\nu)\,] \cdot \frac{1}{e^\gamma}
\;+\; [\,{\mbox{\scriptsize\rm far}}dist(\nu) - {\mbox{\scriptsize\rm cls}}dist(\nu)\,] \cdot \frac{1}{e}
\;+\; {\mbox{\scriptsize\rm cls}}dist(\nu)
\label{eqn: probability estimate}
\\
\notag
&=\; \Big(1 - \frac{1}{e} + \frac{1}{e^\gamma}\Big)\cdot {\mbox{\scriptsize\rm cls}}dist(\nu)
\;+\; \Big(\frac{1}{e} + \frac{1}{e^\gamma}\Big)\cdot{\mbox{\scriptsize\rm far}}dist(\nu).
\end{align}
Inequality (\ref{eqn: apply three expected dist}) follows from
Corollary~\ref{coro: EBGS close and far distance} and
Lemma~\ref{lem: EBGS target connection cost}.
Inequality (\ref{eqn: probability estimate}) follows from
Lemma~\ref{lem: close and far neighbor probability} and
${\mbox{\scriptsize\rm far}}dist(\nu) - {\mbox{\scriptsize\rm cls}}dist(\nu)\ge 0$.
Now define $\rho ={\mbox{\scriptsize\rm cls}}dist(\nu)/C^{\avg}(\nu)$. It is easy to
see that $\rho$ is between 0 and 1. Continuing the above
derivation, applying (\ref{eqn:avg dist cls dist far dist}), we get
\begin{align*}
\mathbb{E}[C_{\nu}]
\;&\le\; C^{\avg}(\nu)
\cdot\left((1-\rho)\frac{1/e+1/e^\gamma}{1-1/\gamma}
+ \rho (1 + \frac{2}{e^\gamma})\right)
\\
&\leq C^{\avg}(\nu)
\cdot \max\left\{\frac{1/e+1/e^\gamma}{1-1/\gamma}, 1 + \frac{2}{e^\gamma}\right\},
\end{align*}
and the proof is now complete.
\end{proof}
With Lemma~\ref{lem: EBGS nu's connection cost} proven, we are now ready to bound our total connection cost.
For any client $j$ we have
\begin{align*}
\sum_{\nu\in j} C^{{\mbox{\scriptsize\rm avg}}}(\nu)
&= \sum_{\nu\in j}\sum_{\mu\in\overline{\sitesset}} d_{\mu\nu}{\bar x}_{\mu\nu}
\\
&= \sum_{i\in\mathbb{F}}d_{ij}\sum_{\mu\in i}\sum_{\nu\in j} {\bar x}_{\mu\nu}
= \sum_{i\in\mathbb{F}} d_{ij}x_{ij}^\ast = C_j^\ast.
\end{align*}
Summing over all clients $j$ we obtain that the total expected connection cost is
\begin{equation*}
\mathbb{E}[ C_{\mbox{\tiny\rm EBGS}} ] \le C^\ast\max\left\{\frac{1/e+1/e^\gamma}{1-1/\gamma}, 1+\frac{2}{e^\gamma}\right\}.
\end{equation*}
Recall that the expected facility cost is bounded by $\gamma F^\ast$,
as argued earlier. Hence the total expected cost is bounded by $\max\{\gamma,
\frac{1/e+1/e^\gamma}{1-1/\gamma}, 1+\frac{2}{e^\gamma}\}\cdot
\mbox{\rm LP}^\ast$. Picking $\gamma=1.575$ we obtain the desired ratio.
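The numerical claim behind the choice $\gamma = 1.575$ is easy to verify directly; the short computation below (our check, not part of the paper) confirms that at this value $\gamma$ itself dominates both connection-cost terms, so the overall ratio is $\gamma$.

```python
import math

gamma = 1.575
# the two connection-cost terms from the max expression
term_cls = (1.0 / math.e + math.exp(-gamma)) / (1.0 - 1.0 / gamma)  # ~1.5747
term_far = 1.0 + 2.0 * math.exp(-gamma)                             # ~1.4140
ratio = max(gamma, term_cls, term_far)

assert term_cls <= gamma and term_far <= gamma
assert ratio == gamma  # the facility-cost term gamma is binding
```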
\begin{theorem}\label{thm:ebgs}
Algorithm~{\mbox{\rm EBGS}} is a $1.575$-approximation algorithm for \mbox{\rm FTFP}.
\end{theorem}
\section{Final Comments}
In this paper we show a sequence of LP-rounding approximation algorithms
for FTFP, with the best algorithm achieving ratio $1.575$.
As we mentioned earlier, we believe that
our techniques of demand reduction and adaptive partitioning are very flexible and
should be useful in extending other LP-rounding methods for UFL to obtain
matching bounds for FTFP.
One of the main open problems in this area is whether FTFL can be approximated with the
same ratio as UFL, and our work was partly motivated by this question. The techniques we
introduced are not directly applicable to FTFL, mainly because our partitioning
approach involves facility splitting that could result in several sibling demands being served
by facilities on the same site. Nonetheless, we hope that further refinements of
our construction might get around this issue and
lead to new algorithms for FTFL with improved ratios.
\pagebreak
\appendix
\section{Proof of Lemma~\ref{lem: EBGS target connection cost}}\label{sec: proof of lemma 15}
Lemma~\ref{lem: EBGS target connection cost} provides a bound on the
expected connection cost of a demand $\nu$ when Algorithm~{\mbox{\rm EBGS}} does not open
any facilities in ${\overline{N}}(\nu)$, namely
\begin{equation}
\mathbb{E}[C_{\nu} \mid \neg \Lambda^{\nu}] \leq
{\mbox{\scriptsize\rm cls}}dist(\nu) + 2{\mbox{\scriptsize\rm far}}dist(\nu).
\label{eqn: lemma ebgs target connection cost}
\end{equation}
We show a stronger inequality that
\begin{equation}
\mathbb{E}[C_{\nu} \mid \neg \Lambda^{\nu}] \leq
{\mbox{\scriptsize\rm cls}}dist(\nu) + {\mbox{\scriptsize\rm cls}}max(\nu) + {\mbox{\scriptsize\rm far}}dist(\nu)
\label{eqn: lemma ebgs indirect connection cost},
\end{equation}
which then implies (\ref{eqn: lemma ebgs target connection cost})
because ${\mbox{\scriptsize\rm cls}}max(\nu) \leq {\mbox{\scriptsize\rm far}}dist(\nu)$. The proof of (\ref{eqn:
lemma ebgs indirect connection cost}) is similar to that in
\cite{ByrkaA10}. For the sake of completeness, we provide it here,
formulated in our terminology and notation.
Assume that the event $\neg \Lambda^{\nu}$ is true, that is Algorithm~{\mbox{\rm EBGS}}
does not open any facility in ${\overline{N}}(\nu)$.
Let $\kappa$ be the primary demand that $\nu$ was assigned to. Also let
\begin{equation*}
K = \wbarN_{\cls}(\kappa) \setminus {\overline{N}}(\nu), \quad
V_{{\mbox{\scriptsize\rm cls}}} = \wbarN_{\cls}(\kappa) \cap \wbarN_{\cls}(\nu) \quad \textrm{and}\quad
V_{{\mbox{\scriptsize\rm far}}} = \wbarN_{\cls}(\kappa) \cap \wbarN_{\far}(\nu).
\end{equation*}
Then $K, V_{{\mbox{\scriptsize\rm cls}}}, V_{{\mbox{\scriptsize\rm far}}}$ form a partition of
$\wbarN_{\cls}(\kappa)$, that is, they are disjoint and their union is $\wbarN_{\cls}(\kappa)$.
Moreover, we have that $K$ is not empty, because Algorithm~{\mbox{\rm EBGS}}
opens some facility in $\wbarN_{\cls}(\kappa)$ and this facility cannot be in $V_{{\mbox{\scriptsize\rm cls}}}\cup V_{{\mbox{\scriptsize\rm far}}}$,
by our assumption.
We also have that $V_{{\mbox{\scriptsize\rm cls}}}$ is not empty due to (PD'.\ref{PD1:assign:overlap}).
Recall that $D(A,\eta) = \sum_{\mu\in A}d_{\mu\eta}{\bar y}_{\mu}/\sum_{\mu\in A}{\bar y}_{\mu}$
is the average distance between a demand $\eta$ and the facilities in a set $A$. We shall show that
\begin{equation}
D(K, \nu) \leq {\mbox{\scriptsize\rm cls}}dist(\kappa)+{\mbox{\scriptsize\rm cls}}max(\kappa) + {\mbox{\scriptsize\rm far}}dist(\nu).
\label{eqn: bound on D(K,nu)}
\end{equation}
This is sufficient, because, by the algorithm, $D(K,\nu)$ is exactly
the expected connection cost for demand $\nu$ conditioned on
the event that none of $\nu$'s neighbors
opens, that is the left-hand side of (\ref{eqn: lemma ebgs indirect connection cost}).
Further, (PD'.\ref{PD1:assign:cost}) states that
${\mbox{\scriptsize\rm cls}}dist(\kappa)+{\mbox{\scriptsize\rm cls}}max(\kappa) \le {\mbox{\scriptsize\rm cls}}dist(\nu) + {\mbox{\scriptsize\rm cls}}max(\nu)$, and thus
(\ref{eqn: bound on D(K,nu)}) implies (\ref{eqn: lemma ebgs indirect connection cost}).
The proof of (\ref{eqn: bound on D(K,nu)}) is by analysis of several cases.
\noindent
{\mycase{1}} $D(K, \kappa) \leq {\mbox{\scriptsize\rm cls}}dist(\kappa)$. For any
facility $\mu \in V_{{\mbox{\scriptsize\rm cls}}}$ (recall that $V_{{\mbox{\scriptsize\rm cls}}}\neq\emptyset$),
we have $d_{\mu\kappa} \leq {\mbox{\scriptsize\rm cls}}max(\kappa)$
and $d_{\mu\nu} \leq {\mbox{\scriptsize\rm cls}}max(\nu) \leq {\mbox{\scriptsize\rm far}}dist(\nu)$. Therefore, using the
case assumption, we get
$D(K,\nu) \leq D(K,\kappa) + d_{\mu\kappa} + d_{\mu\nu}
\leq {\mbox{\scriptsize\rm cls}}dist(\kappa) + {\mbox{\scriptsize\rm cls}}max(\kappa) + {\mbox{\scriptsize\rm far}}dist(\nu)$.
\noindent
{\mycase{2}} There exists a facility $\mu\in V_{{\mbox{\scriptsize\rm cls}}}$ such that
$d_{\mu\kappa} \leq {\mbox{\scriptsize\rm cls}}dist(\kappa)$. Since $\mu\in V_{{\mbox{\scriptsize\rm cls}}}$, we infer
that $d_{\mu\nu} \leq {\mbox{\scriptsize\rm cls}}max(\nu) \leq {\mbox{\scriptsize\rm far}}dist(\nu)$. Using
${\mbox{\scriptsize\rm cls}}max(\kappa)$ to bound $D(K, \kappa)$, we have $D(K, \nu)
\leq D(K, \kappa) + d_{\mu\kappa} + d_{\mu\nu} \leq
{\mbox{\scriptsize\rm cls}}max(\kappa) + {\mbox{\scriptsize\rm cls}}dist(\kappa) + {\mbox{\scriptsize\rm far}}dist(\nu)$.
\noindent
{\mycase{3}} In this case we assume that neither of Cases~1 and 2 applies, that is
$D(K, \kappa) > {\mbox{\scriptsize\rm cls}}dist(\kappa)$ and every $\mu \in V_{{\mbox{\scriptsize\rm cls}}}$ satisfies
$d_{\mu\kappa} > {\mbox{\scriptsize\rm cls}}dist(\kappa)$. This implies that
$D(K\cup V_{{\mbox{\scriptsize\rm cls}}}, \kappa) > {\mbox{\scriptsize\rm cls}}dist(\kappa) = D(\wbarN_{\cls}(\kappa), \kappa)$.
Since sets $K$, $V_{{\mbox{\scriptsize\rm cls}}}$ and $V_{{\mbox{\scriptsize\rm far}}}$ form a partition of $\wbarN_{\cls}(\kappa)$,
we obtain that in this case $V_{{\mbox{\scriptsize\rm far}}}$ is not
empty and $D(V_{{\mbox{\scriptsize\rm far}}}, \kappa) < {\mbox{\scriptsize\rm cls}}dist(\kappa)$.
Let $\delta = {\mbox{\scriptsize\rm cls}}dist(\kappa) - D(V_{{\mbox{\scriptsize\rm far}}}, \kappa) > 0$.
We now have two sub-cases:
\begin{description}
\item{\mycase{3.1}} {$D(V_{{\mbox{\scriptsize\rm far}}}, \nu) \leq {\mbox{\scriptsize\rm far}}dist(\nu) + \delta$}.
Substituting $\delta$, this implies that $D(V_{{\mbox{\scriptsize\rm far}}}, \nu) +
D(V_{{\mbox{\scriptsize\rm far}}},\kappa) \le {\mbox{\scriptsize\rm cls}}dist(\kappa) + {\mbox{\scriptsize\rm far}}dist(\nu)$. From the
definition of the average distance $D(V_{{\mbox{\scriptsize\rm far}}},\kappa)$ and
$D(V_{{\mbox{\scriptsize\rm far}}}, \nu)$, we obtain that there exists some $\mu \in
V_{{\mbox{\scriptsize\rm far}}}$ such that $d_{\mu\kappa} + d_{\mu\nu} \leq
{\mbox{\scriptsize\rm cls}}dist(\kappa) + {\mbox{\scriptsize\rm far}}dist(\nu)$. Thus $D(K, \nu) \leq D(K,
\kappa) + d_{\mu\kappa} + d_{\mu\nu} \leq {\mbox{\scriptsize\rm cls}}max(\kappa) +
{\mbox{\scriptsize\rm cls}}dist(\kappa) + {\mbox{\scriptsize\rm far}}dist(\nu)$.
\item{\mycase{3.2}} {$D(V_{{\mbox{\scriptsize\rm far}}}, \nu) > {\mbox{\scriptsize\rm far}}dist(\nu) + \delta$}.
The case assumption implies that $V_{{\mbox{\scriptsize\rm far}}}$ is a proper subset of
$\wbarN_{\far}(\nu)$, that is $\wbarN_{\far}(\nu) \setminus V_{{\mbox{\scriptsize\rm far}}}
\neq\emptyset$. Let $\hat{y} = \gamma \sum_{\mu\in V_{{\mbox{\tiny\rm far}}}}
{\bar y}_{\mu}$. We can express ${\mbox{\scriptsize\rm far}}dist(\nu)$ using $\hat{y}$ as
follows
\begin{equation*}
{\mbox{\scriptsize\rm far}}dist(\nu) = D(V_{{\mbox{\scriptsize\rm far}}},\nu) \frac{\hat{y}}{\gamma-1} +
D(\wbarN_{\far}(\nu)\setminus V_{{\mbox{\scriptsize\rm far}}}, \nu) \frac{\gamma-1-\hat{y}}{\gamma-1}.
\end{equation*}
Then, using the case condition and simple algebra, we have
\begin{align}
{\mbox{\scriptsize\rm cls}}max(\nu) &\leq D(\wbarN_{\far}(\nu) \setminus V_{{\mbox{\scriptsize\rm far}}}, \nu)
\notag
\\
&\leq {\mbox{\scriptsize\rm far}}dist(\nu) - \frac{\hat{y}\delta}{\gamma-1-\hat{y}}
\leq {\mbox{\scriptsize\rm far}}dist(\nu) - \frac{\hat{y}\delta}{1-\hat{y}},
\label{eqn: case 3, bound on C_cls^max(nu)}
\end{align}
where the last step follows from $1 < \gamma < 2$.
On the other hand, since $K$, $V_{{\mbox{\scriptsize\rm cls}}}$, and $V_{{\mbox{\scriptsize\rm far}}}$ form a partition of $\wbarN_{\cls}(\kappa)$,
we have
${\mbox{\scriptsize\rm cls}}dist(\kappa) = (1-\hat{y}) D(K\cup V_{{\mbox{\scriptsize\rm cls}}}, \kappa) + \hat{y} D(V_{{\mbox{\scriptsize\rm far}}}, \kappa)$.
Then using the definition of $\delta$ we obtain
\begin{equation}
D(K \cup V_{{\mbox{\scriptsize\rm cls}}}, \kappa) = {\mbox{\scriptsize\rm cls}}dist(\kappa) + \frac{\hat{y}\delta}{1-\hat{y}}.
\label{eqn: formula for D(V_cls,kappa)}
\end{equation}
Now we are essentially done. If there exists some $\mu \in V_{{\mbox{\scriptsize\rm cls}}}$ such
that $d_{\mu\kappa} \leq {\mbox{\scriptsize\rm cls}}dist(\kappa) +
\hat{y}\delta/(1-\hat{y})$, then we have
\begin{align*}
D(K, \nu) &\leq D(K, \kappa) + d_{\mu\kappa} + d_{\mu\nu} \\
&\leq {\mbox{\scriptsize\rm cls}}max(\kappa) + {\mbox{\scriptsize\rm cls}}dist(\kappa) +
\frac{\hat{y}\delta}{1-\hat{y}}
+ {\mbox{\scriptsize\rm cls}}max(\nu)\\
&\leq {\mbox{\scriptsize\rm cls}}max(\kappa) + {\mbox{\scriptsize\rm cls}}dist(\kappa) + {\mbox{\scriptsize\rm far}}dist(\nu),
\end{align*}
where we used (\ref{eqn: case 3, bound on C_cls^max(nu)}) in the last step.
Otherwise, from (\ref{eqn: formula for D(V_cls,kappa)}),
we must have $D(K, \kappa) \leq {\mbox{\scriptsize\rm cls}}dist(\kappa) +
\hat{y}\delta/(1-\hat{y})$. Choosing any $\mu \in V_{{\mbox{\scriptsize\rm cls}}}$, it follows that
\begin{align*}
D(K, \nu) &\leq D(K, \kappa) + d_{\mu\kappa} + d_{\mu\nu} \\
&\leq {\mbox{\scriptsize\rm cls}}dist(\kappa) + \frac{\hat{y}\delta}{1-\hat{y}} +
{\mbox{\scriptsize\rm cls}}max(\kappa) + {\mbox{\scriptsize\rm cls}}max(\nu)\\
&\leq {\mbox{\scriptsize\rm cls}}dist(\kappa) + {\mbox{\scriptsize\rm cls}}max(\kappa) + {\mbox{\scriptsize\rm far}}dist(\nu),
\end{align*}
again using (\ref{eqn: case 3, bound on C_cls^max(nu)}) in the last step.
\end{description}
This concludes the proof of (\ref{eqn: bound on D(K,nu)}), and hence of the
stronger inequality (\ref{eqn: lemma ebgs indirect connection cost}).
As explained earlier, Lemma~\ref{lem: EBGS target connection cost} follows.
\section{Proof of Inequality (\ref{eqn: echs ineq direct cost, step 1})}
\label{sec: ECHSinequality}
In Sections~\ref{sec: 1.736-approximation} and \ref{sec: 1.575-approximation}
we use the following inequality
\begin{align}
\label{eq:min expected distance}
{\bar d}_1 g_1 + {\bar d}_2 g_2 (1-g_1) +
\ldots &+ {\bar d}_k g_k (1-g_1) (1-g_2) \ldots (1-g_{k-1})\\ \notag
&\leq \frac{1}{\sum_{s=1}^k g_s} \left(\textstyle\sum_{s=1}^k {\bar d}_s g_s\right)\left(\textstyle\sum_{t=1}^k g_t \textstyle\prod_{z=1}^{t-1} (1-g_z)\right).
\end{align}
for $0 < {\bar d}_1\leq {\bar d}_2 \leq \ldots \leq {\bar d}_k$, and
$0 < g_1,\ldots,g_k \le 1$.
We give here a new proof of this inequality, much simpler
than the existing proof in \cite{ChudakS04}, and also simpler than the
argument by Sviridenko~\cite{Svi02}. We derive this inequality from
the following generalized version of the Chebyshev Sum Inequality:
\begin{equation}
\label{eq:cheby}
\textstyle{\sum_{i}} p_i \textstyle{\sum_j} p_j a_j b_j \leq \textstyle{\sum_i} p_i a_i \textstyle{\sum_j} p_j b_j,
\end{equation}
where each summation runs from $1$ to $l$ and the sequences $(a_i)$,
$(b_i)$ and $(p_i)$ satisfy the following conditions: $p_i\geq 0, a_i
\geq 0, b_i \geq 0$ for all $i$, $a_1\leq a_2 \leq \ldots \leq a_l$,
and $b_1 \geq b_2 \geq \ldots \geq b_l$.
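As an illustration (not in the original text), the generalized Chebyshev sum inequality (\ref{eq:cheby}) is easy to test numerically on random sequences satisfying the stated monotonicity conditions:

```python
import random

def chebyshev_holds(p, a, b, tol=1e-9):
    """Check sum_i p_i * sum_j p_j a_j b_j <= sum_i p_i a_i * sum_j p_j b_j
    for nonnegative p, a nondecreasing, b nonincreasing."""
    lhs = sum(p) * sum(pi * ai * bi for pi, ai, bi in zip(p, a, b))
    rhs = (sum(pi * ai for pi, ai in zip(p, a))
           * sum(pi * bi for pi, bi in zip(p, b)))
    return lhs <= rhs + tol

random.seed(2)
for _ in range(1000):
    l = random.randint(1, 10)
    p = [random.uniform(0.0, 2.0) for _ in range(l)]
    a = sorted(random.uniform(0.0, 5.0) for _ in range(l))
    b = sorted((random.uniform(0.0, 5.0) for _ in range(l)), reverse=True)
    assert chebyshev_holds(p, a, b)
```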
Given inequality (\ref{eq:cheby}), we can obtain our inequality
(\ref{eq:min expected distance}) by simple substitution
\begin{equation*}
p_i \leftarrow g_i, \quad a_i \leftarrow {\bar d}_i, \quad b_i \leftarrow
\prod_{s=1}^{i-1} (1-g_s),
\end{equation*}
for $i = 1,\ldots,k$ (with $l = k$).
\ignore{
For the sake of completeness, we include the proof of inequality (\ref{eq:cheby}),
due to Hardy, Littlewood and Polya~\cite{HardyLP88}. The idea is to evaluate the
following sum:
\begin{align*}
S &= \textstyle{\sum_i} p_i \textstyle{\sum_j} p_j a_j b_j - \textstyle{\sum_i} p_i a_i \textstyle{\sum_j} p_j b_j
\\
& = \textstyle{\sum_i \sum_j} p_i p_j a_j b_j - \textstyle{\sum_i \sum_j} p_i a_i p_j b_j
\\
& = \textstyle{\sum_j \sum_i} p_j p_i a_i b_i - \textstyle{\sum_j \sum _i} p_j a_j p_i b_i
\\
&= \half \cdot \textstyle{\sum_i \sum_j} (p_i p_j a_j b_j - p_i a_i p_j b_j + p_j p_i a_i
b_i - p_j a_j p_i b_i)
\\
&= \half \cdot \textstyle{\sum_i \sum_j} p_i p_j (a_i - a_j)(b_i - b_j) \leq 0.
\end{align*}
The last inequality holds because $(a_i-a_j)(b_i-b_j) \leq 0$, since the sequences
$(a_i)$ and $(b_i)$ are ordered oppositely.
}
\end{document}
\begin{document}
\newtheorem{Theorem}{Theorem}[section]
\newtheorem{Corollary}[Theorem]{Corollary}
\newtheorem{Proposition}[Theorem]{Proposition}
\newtheorem{Lemma}[Theorem]{Lemma}
\newtheorem{Remark}[Theorem]{Remark}
\newtheorem{Example}[Theorem]{Example}
\newtheorem{Assumption}[Theorem]{Assumption}
\newtheorem{Claim}[Theorem]{Claim}
\newtheorem{Question}[Theorem]{Question}
\def\theequation{\thesection.\arabic{equation}}
\title{Robust Optimal Control Using Conditional Risk Mappings in Infinite Horizon}
\author{Kerem U\u{g}urlu}
\date{}
\maketitle
\noindent Department of Applied Mathematics, University of Washington, Seattle, WA 98195\\
\noindent e-mail: [email protected]
\begin{abstract}
We use one-step conditional risk mappings to formulate a risk averse version of a total cost problem on a controlled Markov process in discrete time infinite horizon. The nonnegative one-step costs are assumed to be lower semi-continuous but not necessarily bounded. We derive conditions for the existence of optimal strategies and solve the problem explicitly by giving the robust dynamic programming equations under very mild assumptions. We further give an $\epsilon$-optimal approximation to the solution and illustrate our algorithm in two examples of optimal investment and LQ regulator problems.
\end{abstract}
\section{Introduction}
Controlled Markov decision processes have been an active research area in sequential decision making problems in operations research and in mathematical finance. We refer the reader to \cite{HL,HL2,SB} for an extensive treatment of the theoretical background. Classically, the evaluation operator has been the expectation operator, and the optimal control problem is solved via Bellman's dynamic programming \cite{key-6}. This approach and the corresponding problems continue to be an active research area in various scenarios (see e.g. the recent works \cite{SN,NST,GHL} and the references therein).
On the other hand, plain expected values are not always appropriate for measuring the performance of the agent. Hence, expected criteria with utility functions have been used extensively in the literature (see e.g. \cite{FS1,FS2} and the references therein). Beyond the evaluation of performance via utility functions, to put risk aversion into an axiomatic framework, coherent risk measures were introduced in the seminal paper \cite{key-1}. \cite{FSH1} removed the positive homogeneity assumption of a coherent risk measure and named the resulting object a convex risk measure (see \cite{FSH2} for an extensive treatment of this subject).
However, this kind of operator brings up another difficulty. Deriving dynamic programming equations with these operators in multistage optimization problems is challenging or impossible in many settings. The reason is that Bellman's optimality principle does not necessarily hold for this type of operator; that is, the optimization problems are not \textit{time-consistent}. Namely, a multistage stochastic decision problem is time-consistent if, upon resolving the problem at later stages (i.e., after observing some random outcomes), the original solutions remain optimal for the later stages. We refer the reader to \cite{key-141, key-334, key-148, AS, BMM} for further elaboration and examples of this type of inconsistency. Hence, works on optimal control problems in the multi-period setting using risk measures on bounded and unbounded costs are not vast; some works in this direction are \cite{key-149,key-150,key-151, BO}.
To overcome this deficit, dynamic extensions of convex/coherent risk measures, so-called conditional risk measures, are introduced in \cite{FR} and studied extensively in \cite{RS06}. In \cite{key-3}, so-called Markov risk measures are introduced and an optimization problem is solved in a controlled Markov decision framework both in finite and discounted infinite horizon, where the cost functions are assumed to be bounded. This idea is extended to transient models in \cite{CR1,CR2}, to unbounded costs with $w$-weighted bounds in \cite{LS,CZ,SSO}, to so-called \textit{process-based} measures in \cite{FR1}, and to partially observable Markov chain frameworks in \cite{FR2}.
In this paper, we derive \textit{robust} dynamic programming equations in discrete time on infinite horizon using one-step conditional risk mappings that are dynamic analogues of coherent risk measures. We assume that our one-step costs are nonnegative, but may well be unbounded from above. We show the existence of an optimal policy via dynamic programming under very mild assumptions. Since our methodology is based on dynamic programming, our optimal policy is by construction time-consistent. We further give a recipe to construct an $\epsilon$-optimal policy for the infinite horizon problem and illustrate our theory in two examples of an optimal investment and an LQ regulator control problem, respectively. To the best of our knowledge, this is the first work solving the optimal control problem in infinite horizon under the minimal assumptions stated in our model.
The rest of the paper is organized as follows. In Section 2, we briefly review the theoretical background on coherent risk measures and their dynamic analogues in the multistage setting, and further describe the controlled Markov chain framework that we will work with. In Section 3, we state our main result on the existence of the optimal policy and of the optimality equations. In Section 4, we prove our main theorem and present an $\epsilon$-optimal algorithm for our control problem. In Section 5, we illustrate our results with two examples, one on an optimal investment problem, and the other on an LQ regulator control problem.
\section{Theoretical Background}
In this section, we recall the necessary background on static coherent risk measures, and then extend these operators to the dynamic setting of a controlled Markov chain framework in discrete time.
\subsection{Coherent Risk Measures}
Consider an atomless probability space $(\Omega,{\cal F}, {\mathbb{P}})$ and the space ${\cal Z}: = L^1(\Omega,{\cal F},{\mathbb{P}})$ of measurable functions $Z:\Omega \rightarrow \mathbb{R}$ (random variables) having finite first order moment, i.e. $\mathbb{E}^{{\mathbb{P}}}[|Z|] < \infty$, where $\mathbb{E}^{\mathbb{P}}[\cdot]$ stands for the expectation with respect to the probability measure ${\mathbb{P}}$. A mapping $\rho:{\cal Z} \rightarrow \mathbb{R}$ is said to
be a \textit{coherent risk measure} if it satisfies the following axioms:
\begin{itemize}
\item (A1) (Convexity) $\rho(\lambda X+(1-\lambda)Y)\leq\lambda\rho(X)+(1-\lambda)\rho(Y)$
for all $\lambda\in(0,1)$ and $X,Y \in {\cal Z}$.
\item (A2) (Monotonicity) If $X \preceq Y$, then $\rho(X) \leq \rho(Y)$, for all $X,Y \in {\cal Z}$.
\item (A3) (Translation Invariance) $\rho(c+X) = c + \rho(X)$ for all $c\in\mathbb{R}$ and $X\in {\cal Z}$.
\item (A4) (Positive Homogeneity) $\rho(\beta X)=\beta\rho(X)$ for all $X\in {\cal Z}$ and $\beta \geq 0$.
\end{itemize}
The notation $X \preceq Y$ means that $X(\omega) \leq Y(\omega)$ for ${\mathbb{P}}$-almost every $\omega$. Risk measures $\rho:{\cal Z} \rightarrow \mathbb{R}$ which satisfy (A1)--(A3) only are called convex risk measures. We remark that under the fourth property (positive homogeneity), the first property (convexity) is equivalent to sub-additivity. We call the risk measure $\rho:{\cal Z}\rightarrow \mathbb{R}$ law invariant if $\rho(X) = \rho(Y)$ whenever $X$ and $Y$ have the same distribution. We pair the space ${\cal Z} = L^1(\Omega,{\cal F}, {\mathbb{P}})$ with ${\cal Z}^* = L^\infty(\Omega,{\cal F}, {\mathbb{P}})$, via the scalar product
\begin{equation}
\langle \zeta, Z \rangle = \int_\Omega \zeta(\omega)Z(\omega)\,d{\mathbb{P}}(\omega), \quad \zeta\in {\cal Z}^*,\; Z\in{\cal Z}.
\end{equation}
By \cite{key-14}, we know that real-valued law-invariant convex risk measures are continuous, hence lower semi-continuous (l.s.c.), in the norm topology of the space $L^1(\Omega,{\cal F}, {\mathbb{P}})$.
Hence, it follows by the Fenchel--Moreau theorem that
\begin{equation}
\label{eqn12}
\rho(Z) = \sup_{\zeta \in {\cal Z}^*} \{ \langle \zeta, Z \rangle - \rho^*(\zeta) \},\;\textrm{for all } Z \in {\cal Z},
\end{equation}
where $\rho^*(\zeta) = \sup_{Z \in {\cal Z}} \{ \langle \zeta, Z \rangle - \rho(Z) \}$ is the corresponding conjugate functional (see \cite{RW}). If the risk measure $\rho$ is convex and positively homogeneous, hence coherent, then $\rho^*$ is the indicator function of a convex and closed set ${\mathfrak A} \subset {\cal Z}^*$ in the respective paired topology. The dual representation in Equation \eqref{eqn12} then takes the form
\begin{equation}
\label{eqn13}
\rho(Z) = \sup_{\zeta \in {\mathfrak A}} \langle \zeta,Z \rangle,\;Z\in {\cal Z},
\end{equation}
where the set ${\mathfrak A}$ consists of probability density functions $\zeta:\Omega\rightarrow \mathbb{R}$, i.e. with $\zeta \succeq 0$ and $\int\zeta \,d{\mathbb{P}} = 1$.
A fundamental example of a law invariant coherent risk measure is the Average-Value-at-Risk measure (also called the Conditional-Value-at-Risk or Expected Shortfall). The Average-Value-at-Risk at level $\alpha$ of $Z \in {\cal Z}$ is defined as
\begin{equation}
\label{eqn016}
{\sf AV@R}_\alpha (Z) = \frac{1}{1-\alpha}\int_\alpha^1 {\sf VaR}_p(Z)\,dp,
\end{equation}
where
\begin{equation}
{\sf VaR}_p(Z) = \inf \{ z \in \mathbb{R}: {\mathbb{P}}(Z \leq z) \geq p \}
\end{equation}
is the corresponding left-side quantile. The corresponding dual representation for ${\sf AV@R}_\alpha(Z)$ is
\begin{equation}
{\sf AV@R}_\alpha(Z) = \sup_{m \in {\cal A}}\langle m,Z \rangle,
\end{equation}
with
\begin{equation}
{\cal A} = \Big\{ m \in L^\infty(\Omega,{\cal F},{\mathbb{P}}): \int_\Omega m\,d{\mathbb{P}} =1,\; 0 \leq \lVert m \rVert_\infty \leq \frac{1}{1-\alpha} \Big\}.
\end{equation}
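To make the definition above concrete, the following minimal sketch (our illustration, not part of the paper) estimates ${\sf AV@R}_\alpha$ from a finite sample of costs via the Rockafellar--Uryasev identity ${\sf AV@R}_\alpha(Z) = {\sf VaR}_\alpha(Z) + \mathbb{E}[(Z - {\sf VaR}_\alpha(Z))_+]/(1-\alpha)$; the function name `avar` and the Monte Carlo setting are our own assumptions.

```python
import numpy as np

def avar(samples, alpha):
    """Empirical Average-Value-at-Risk of a cost sample at level alpha.

    Uses the Rockafellar-Uryasev identity with the left-side empirical
    quantile playing the role of VaR_alpha (costs = losses, so AV@R
    averages the worst (1 - alpha) fraction of outcomes).
    """
    z = np.sort(np.asarray(samples, dtype=float))
    n = len(z)
    k = max(int(np.ceil(alpha * n)), 1)  # index of the left alpha-quantile
    var = z[k - 1]                       # empirical VaR_alpha
    return var + np.mean(np.maximum(z - var, 0.0)) / (1.0 - alpha)
```

For the uniform sample $\{1,\dots,100\}$ and $\alpha = 0.9$ this returns the average of the worst ten outcomes, $95.5$; for $\alpha \to 0$ it collapses to the plain mean, consistent with monotonicity of the risk measure in $\alpha$.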
Next, we give a representation characterizing any law invariant coherent risk measure, first presented by Kusuoka \cite{K} for random variables in $L^\infty(\Omega, {\cal F}, {\mathbb{P}})$, and later further investigated in ${\cal Z}^p = L^p(\Omega, {\cal F}, {\mathbb{P}})$ for $1 \leq p < \infty$ in \cite{PR}.
\begin{lemma}
\label{lem11}\cite{K}
Any law invariant coherent risk measure $\rho: {\cal Z}^p \rightarrow \mathbb{R}$ can be represented in the following
form
\begin{equation}
\rho(Z) = \sup_{\nu \in {\mathfrak M}}\int_0^1 {\sf AV@R}_\alpha(Z)\,d\nu(\alpha),
\end{equation}
where ${\mathfrak M}$ is a set of probability measures on the interval $[0,1]$.
\end{lemma}
\subsection{Controlled Markov Chain Framework}
Next, we introduce the controlled Markov chain framework in which we study our problem.
We take the control model $\mathcal{M} = \{ \mathcal{M}_n, n \in \mathbb{N}_0 \}$, where for each $n \geq 0$, we have
\begin{equation}
\label{eqn270}
{\cal M}_n := (X_n, A_n, \mathbb{K}_n, Q_n, F_n, c_n)
\end{equation}
with the following components:
\begin{itemize}
\item $X_n$ and $A_n$ denote the state and action (or control) spaces, which are assumed to be complete separable metric spaces with their corresponding Borel $\sigma$-algebras ${\cal B}(X_n)$ and ${\cal B}(A_n)$.
\item For each $x_n \in X_n$, let $A_n(x_n) \subset A_n$ be the set of all admissible controls in the state $x_n$. Then
\begin{equation}
\mathbb{K}_n := \{ (x_n,a_n): x_n \in X_n,\; a_n \in A_n(x_n) \}
\end{equation}
stands for the set of feasible state-action pairs at time $n$.
\item We let
\begin{equation}
\label{eqn23}
x_{i+1} = F_i(x_i,a_i, \xi_i),
\end{equation}
for all $i = 0,1,\ldots$ with $x_i \in X_i$ and $a_i \in A_i$ as described above, where $(\xi_i)_{i \geq 0}$ are independent random variables on the atomless probability spaces
\begin{equation}
\label{eqn2100}
(\Omega^i,\mathcal{G}^i, {\mathbb{P}}^i).
\end{equation}
We take $\xi_i \in S_i$, where the $S_i$ are Borel spaces. Moreover, we assume that the system equation
\begin{equation}
\label{eqn678}
F_i: \mathbb{K}_i \times S_i \rightarrow X_{i+1}
\end{equation}
as in Equation \eqref{eqn23} is continuous.
\item We let
\begin{equation}
\label{eqn27}
\Omega = \prod_{i=0}^{\infty} \Omega^i,
\end{equation}
where $\Omega^i$ is as defined in Equation \eqref{eqn2100}. For $n\geq 0$, we let
\begin{align}
\label{eqn28}
{\cal F}_n &= \sigma\Big(\sigma\Big(\bigcup_{i=0}^n \mathcal{G}^i\Big) \cup \sigma(X_0,A_0,X_1,A_1,\ldots,A_{n-1},X_n)\Big), \\
{\cal F} &= \sigma\Big(\bigcup_{i=0}^\infty {\cal F}_{i}\Big)
\end{align}
be the filtration of increasing $\sigma$-algebras. Furthermore, we define the corresponding probability measure on $(\Omega,{\cal F})$ as
\begin{equation}
\label{eqn2140}
{\mathbb{P}} = \prod_{i=0}^{\infty} {\mathbb{P}}^i,
\end{equation}
where the existence of ${\mathbb{P}}$ is justified by the Kolmogorov extension theorem (see \cite{HL}). We assume that for any $n\geq 0$, the random vector $\xi_{[n]} = (\xi_0,\xi_1,\ldots,\xi_n)$ and $\xi_{n+1}$ are independent on $(\Omega,{\cal F},{\mathbb{P}})$.
\item The transition law $Q_{n+1}(B_{n+1}|x_n,a_n)$, where $B_{n+1} \in \mathcal{B}(X_{n+1})$ and $(x_n,a_n) \in \mathbb{K}_n$, is a stochastic kernel on $X_{n+1}$ given $\mathbb{K}_n$ (see \cite{SB,HL} for further details). We remark here that at each $n\geq 0$ the stochastic kernel depends only on $(x_n,a_n)$ rather than on all of ${\cal F}_n$.
That is, for each pair $(x_n,a_n) \in \mathbb{K}_n$, $Q_{n+1}(\cdot|x_n,a_n)$ is a probability measure on $X_{n+1}$, and for each $B_{n+1} \in \mathcal{B}(X_{n+1})$, $Q_{n+1}(B_{n+1}|\cdot,\cdot)$ is a measurable function on $\mathbb{K}_n$. Let $x_0 \in X_0$ be given with a corresponding policy $\pi = (\pi_n)_{n\geq0}$. By the Ionescu-Tulcea theorem (see e.g. \cite{HL}), we know that there exists a unique probability measure ${\mathbb{P}}^\pi$ on $(\Omega, {\cal F})$ such that given $x_0 \in X_0$, a measurable
set $B_{n+1} \subset X_{n+1}$ and $(x_n,a_n) \in \mathbb{K}_n$, for any $n\geq 0$, we have
\begin{align}
\label{eqn2150}
{\mathbb{P}}^{\pi}_{n+1} (x_{n+1} \in B_{n+1}) &\triangleq Q_{n+1}(B_{n+1} | x_n,a_n).
\end{align}
\item Let $\mathbb{F}_n$ be the family of measurable functions $\pi_n:X_n \rightarrow A_n$ for $n \geq 0$. A sequence $(\pi_n)_{n\geq 0}$ of functions $\pi_n \in \mathbb{F}_n$ is called a control policy (or simply a policy), and the function $\pi_n(\cdot)$ is called the decision rule or control at time $n\geq 0$. We denote by $\Pi$ the set of all control policies. For notational convenience, for every $n \in \mathbb{N}_0$ and $(\pi_n)_{n \geq 0} \in \Pi$, we write
\begin{align*}
c_n(x_n,\pi_n) &:= c_n(x_n,\pi_n(x_n))\\
&:= c_n(x_n,a_n).
\end{align*}
We denote by ${\mathfrak P}(A_n(x_n))$ the set of probability measures on $A_n(x_n)$ for each time $n\geq0$. A randomized Markovian policy $(\pi_n)_{n \geq 0}$ is a sequence of measurable functions such that $\pi_n(x_n) \in {\mathfrak P}(A_n(x_n))$ for all $x_n \in X_n$, i.e. $\pi_n(x_n)$ is a probability measure on $A_n(x_n)$. The policy $(\pi_n)_{n \geq 0}$ is called deterministic if $\pi_n(x_n) = a_n$ with $a_n \in A_n(x_n)$.
\item $c_n(x_n,a_n): \mathbb{K}_n \rightarrow \mathbb{R}_{+}$ is the real-valued cost-per-stage function at stage $n \in \mathbb{N}_0$ with $(x_n,a_n) \in \mathbb{K}_n$.
\varepsilonnd{itemize}
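The system equation $x_{i+1} = F_i(x_i,a_i,\xi_i)$ together with a deterministic policy fully determines a trajectory once the shocks are drawn. The following toy rollout (our construction; the names `rollout`, `policy`, `sample_noise` and the additive-noise dynamics are hypothetical, not from the paper) shows how a discounted cost along one trajectory is accumulated.

```python
def rollout(x0, policy, F, sample_noise, cost, horizon, gamma=0.95):
    """Roll out x_{i+1} = F_i(x_i, a_i, xi_i) under a deterministic Markov
    policy and accumulate the discounted one-step costs.

    F, sample_noise, cost and policy are illustrative stand-ins: the paper
    only assumes F continuous and the costs non-negative and l.s.c.
    """
    x, total = x0, 0.0
    for i in range(horizon):
        a = policy(x)                    # decision rule pi_i(x_i)
        total += (gamma ** i) * cost(x, a)
        x = F(x, a, sample_noise())      # independent shock xi_i
    return total
```

With degenerate (zero) shocks, dynamics $x_{i+1} = x_i + a_i + \xi_i$, quadratic cost $x^2$ and the policy $a = -x$, the state reaches zero after one step, so the only cost incurred is at time $0$.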
\begin{definition}
\label{defn31}
A real valued function $v$ on $\mathbb{K}_n$ is said to be inf-compact on $\mathbb{K}_n$ if the set
\begin{equation}
\{ a_n \in A_n(x_n) \,|\, v(x_n,a_n) \leq r \}
\end{equation}
is compact for every $x_n \in X_n$ and $r \in \mathbb{R}$. As an example, if the sets $A_n(x_n)$ are compact and $v(x_n,a_n)$ is l.s.c. in $a_n\in A_n(x_n)$ for every $x_n \in X_n$, then $v(\cdot,\cdot)$ is inf-compact on $\mathbb{K}_n$. Conversely, if $v$ is inf-compact on $\mathbb{K}_n$, then $v$ is l.s.c. in $a_n \in A_n(x_n)$ for every $x_n \in X_n$.
\end{definition}
We make the following assumption about the transition laws $(Q_n)_{n\geq1}$.
\begin{Assumption}\label{ass31}
For any $n \geq 0$, the transition law $Q_{n+1}$ is weakly continuous; i.e. for any continuous and bounded function $u(\cdot)$ on $X_{n+1}$, the map
\begin{equation}
(x_n,a_n) \rightarrow \int_{X_{n+1}} u(y)\,Q_{n+1}(dy|x_n,a_n)
\end{equation}
is continuous on $\mathbb{K}_n$.
\end{Assumption}
Furthermore, we make the following assumptions on the one step cost functions and action sets.
\begin{Assumption} \label{ass32}
For every $n \geq 0$,
\begin{itemize}
\item the real valued non-negative cost function $c_n(\cdot,\cdot)$ is l.s.c. in $(x_n,a_n)$; that is, for any $(x_n,a_n) \in X_n \times A_n$ and any sequence $(x^k_n,a^k_n) \rightarrow (x_n,a_n)$ as $k \rightarrow \infty$, we have
\begin{equation}
c_n(x_n,a_n) \leq \liminf_{k \rightarrow \infty } c_n(x^k_n,a^k_n).
\end{equation}
\item The multifunction (also known as a correspondence or point-to-set function) $x_n \rightarrow A_n(x_n)$, from $X_n$ to $A_n$, is upper semicontinuous (u.s.c.); that is, if $\{x_n^l\} \subset X_n$ and $\{ a_n^l\} \subset A_n$ are sequences such that $x_n^l \rightarrow \bar{x}_n$, $a_n^l \in A_n(x_n^l)$ for all $l$, and $a_n^l \rightarrow \bar{a}_n$, then $\bar{a}_n$ is in $A_n(\bar{x}_n)$.
\item For every state $x_n \in X_n$, the admissible action set $A_n(x_n)$ is compact.
\end{itemize}
\end{Assumption}
\subsection{Conditional Risk Mappings} In order to construct dynamic models of risk, we extend the concept of static coherent risk measures to the dynamic setting. For any $n \geq 1$, we denote by ${\cal Z}_n := L^1(\Omega,{\cal F}_n,{\mathbb{P}}^\pi_{n})$ the space of measurable functions $Z:\Omega \rightarrow \mathbb{R}$ (random variables) having finite first order moment, i.e. $\mathbb{E}^{{\mathbb{P}}^\pi_{n}}[|Z|] < \infty$ ${\mathbb{P}}^\pi_{n}$-a.s., where $\mathbb{E}^{{\mathbb{P}}^\pi_{n}}$ stands for the conditional expectation at time $n$ with respect to the conditional probability measure ${\mathbb{P}}^\pi_{n}$ as defined in Equation \eqref{eqn2150}.
\begin{definition}\label{def21}
Let $X,Y \in {\cal Z}_{n+1}$. We say that a mapping $\rho_n:{\cal Z}_{n+1} \rightarrow {\cal Z}_n$ is a one step conditional risk mapping if it satisfies the following properties:
\begin{itemize}
\item (a1) Let $\gamma \in [0,1]$. Then,
\begin{equation}
\rho_{n}(\gamma X + (1-\gamma) Y) \preceq
\gamma \rho_{n}(X) + (1-\gamma)\rho_{n}(Y).
\end{equation}
\item (a2) If $X \preceq Y$, then $\rho_{n}(X) \preceq \rho_{n}(Y)$.
\item (a3) If $Y \in {\cal Z}_{n}$ and $X \in {\cal Z}_{n+1}$, then $\rho_{n}(X + Y) = \rho_{n}(X) + Y$.
\item (a4) For $\lambda \succeq 0$ with $\lambda \in {\cal Z}_n$ and $X \in {\cal Z}_{n+1}$, we have that $\rho_{n}(\lambda X) = \lambda \rho_{n}(X)$.
\end{itemize}
\end{definition}
Here, the relation $Y \preceq X$ stands for $Y(\omega) \leq X(\omega)$ ${\mathbb{P}}^{\pi}_n$-a.s. We next state the analogue of the representation \eqref{eqn13} for conditional risk mappings (see also \cite{RS06}).
\begin{theorem}
\label{thm21}
Let $\rho_n: {\cal Z}_{n+1} \rightarrow {\cal Z}_n$ be a law-invariant conditional risk mapping satisfying the assumptions stated in Definition \ref{def21}. Let $Z \in {\cal Z}_{n+1}$. Then
\begin{equation}
\label{eqn2220}
\rho_n(Z) = \operatornamewithlimits{ess\,sup}_{\mu \in {\mathfrak A}_{n+1}} \langle \mu, Z \rangle,
\end{equation}
where ${\mathfrak A}_{n+1}$ is a convex closed set of conditional probability measures on $(\Omega, {\cal F}_{n+1})$ that are absolutely continuous with respect to ${\mathbb{P}}^{\pi}_{n+1}$.
\end{theorem}
Next, we give the Kusuoka representation for conditional risk mappings analogous to Lemma \ref{lem11}.
\begin{lemma} \label{lem22}
Let $\rho_n:{\cal Z}_{n+1} \rightarrow {\cal Z}_{n}$ be a law invariant one-step conditional risk mapping satisfying assumptions
(a1)-(a4) as in Definition \ref{def21}, and let $Z \in {\cal Z}_{n+1}$. The conditional Average-Value-at-Risk at level $0 < \alpha < 1$ is defined as
\begin{equation}
\label{eqn016}
{\sf AV@R}^{n}_\alpha (Z) \triangleq \frac{1}{1-\alpha}\int_\alpha^1 {\sf VaR}^{n}_p(Z)\,dp,
\end{equation}
where
\begin{equation}
\label{eqn212}
{\sf VaR}^{n}_p(Z) \triangleq \operatornamewithlimits{ess\,inf} \{ z \in \mathbb{R}: {\mathbb{P}}^{\pi}_{n+1}(Z \leq z) \geq p \}.
\end{equation}
Here, we note that ${\sf VaR}^{n}_p(Z)$ is ${\cal F}_{n}$-measurable by definition of the essential infimum (see \cite{FSH2} for definitions of the essential infimum and essential supremum).
Then, we have
\begin{equation}
\label{eqn2130}
\rho_{n}(Z) = \operatornamewithlimits{ess\,sup}_{\nu \in {\mathfrak M}}\int_0^1{\sf AV@R}^n_\alpha(Z)\,d\nu(\alpha),
\end{equation}
where ${\mathfrak M}$ is a set of probability measures on the interval $[0,1]$.
\end{lemma}
\begin{remark} By Equations \eqref{eqn016}, \eqref{eqn212} and \eqref{eqn2130}, it is easy to see that the corresponding optimal controls at each time $n \geq 0$ are deterministic if the one step conditional risk mappings are ${\sf AV@R}^n_\alpha: {\cal Z}_{n+1} \rightarrow {\cal Z}_{n}$ as defined in \eqref{eqn016}. On the other hand, by the Kusuoka representation, Equation \eqref{eqn2130}, it is clear that for other coherent risk mappings randomized policies might be optimal. In this paper, we restrict our study to deterministic policies.
\end{remark}
\begin{definition}\label{defn230} A policy $\pi \in \Pi$ is called admissible if, for any $n\geq 0$, we have
\begin{align}
&c_n(x_n,a_n) + \lim_{N\rightarrow \infty}\gamma\rho_n\big( c_{n+1}(x_{n+1},a_{n+1}) \nonumber\\
&\qquad + \gamma\rho_{n+1}\big( c_{n+2}(x_{n+2},a_{n+2})
+ \ldots + \gamma\rho_{N-1}(c_N(x_N,a_N))\big) \big) < \infty,\; {\mathbb{P}}^\pi_{n}\textrm{-a.s.}
\end{align}
The set of all admissible policies is denoted by $\Pi_{\mathrm{ad}}$.
\end{definition}
\section{Main Problem}
Under Assumptions \ref{ass31} and \ref{ass32}, our control problem reads as
\begin{align}
\label{eqn321}
&\inf_{\pi \in \Pi_{\textrm{ad}}} \bigg( c_0(x_0,a_0) + \lim_{N\rightarrow \infty}\gamma\rho_0\big( c_1(x_1,a_1) + \gamma\rho_{1}\big( c_2(x_2,a_2) \nonumber\\
&\qquad + \ldots + \gamma\rho_{N-1}(c_N(x_N,a_N))\big)\big) \bigg).
\end{align}
Namely, our objective is to find a policy $(\pi^*_n)_{ n \geq 0}$ minimizing the value function in Equation \eqref{eqn321}. For convenience, we introduce the following notation, to be used in the rest of the paper:
\begin{align*}
\varrho_{n-1} \Big( \sum_{t=n}^\infty c_t(x_t,\pi_t) \Big) &:= \lim_{N\rightarrow \infty}\gamma\rho_{n-1} \Big(c_n(x_n,a_n) + \gamma\rho_{n}\big( c_{n+1}(x_{n+1},a_{n+1})
\\
&\qquad + \ldots + \gamma\rho_{N-1}(c_N(x_N,a_N))\big)\Big), \\
V_n(x,\pi) &:= c_n(x_n,a_n) + \varrho_n\Big(\sum_{t=n+1}^\infty c_t(x_t,a_t)\Big),
\end{align*}
\begin{align*}
V_n^*(x) &:= \inf_{\pi \in \Pi_{\textrm{ad}}} c_n(x_n,a_n) + \varrho_n\Big(\sum_{t=n+1}^\infty c_t(x_t,a_t)\Big),\\
V_{n,N}(x,\pi) &:= c_n(x_n,a_n) + \varrho_n\Big(\sum_{t=n+1}^{N-1}c_t(x_t,a_t)\Big), \\
V_{N,\infty}(x,\pi) &:= c_N(x_N,a_N) + \varrho_N\Big(\sum_{t=N+1}^\infty c_t(x_t,a_t)\Big), \\
V_{n,N}^*(x) &:= \inf_{\pi \in \Pi_{\textrm{ad}}} c_n(x_n,a_n) +\varrho_n\Big(\sum_{t=n+1}^{N-1}c_t(x_t,a_t)\Big).
\end{align*}
For the control problem to be nontrivial, we need the following assumption on the existence of a policy with finite cost.
\begin{Assumption}
\label{ass41}
There exists a policy $\pi \in \Pi_{\textrm{ad}}$ such that
\begin{equation}
c_0(x_0, a_0) + \varrho_0\Big(\sum_{t=1}^\infty c_t(x_t,a_t)\Big) < \infty.
\end{equation}
\end{Assumption}
We are now ready to state our main theorem.
\begin{theorem}
\label{thm41}
Let $0 < \gamma < 1$. Suppose that Assumptions \ref{ass31}, \ref{ass32} and \ref{ass41} are satisfied. Then,
\begin{itemize}
\item[(a)] the optimal cost functions $V_n^*$ are the pointwise minimal solutions of the optimality equations; that is, for every $n \in \mathbb{N}_0$ and $x_n \in X_n$,
\begin{equation}
\label{eqn33}
V_n^*(x_n) = \inf_{a_n \in A_n(x_n)} \bigg(c_n(x_n,a_n) + \gamma\rho_n(V_{n+1}^*(x_{n+1}))\bigg).
\end{equation}
\item[(b)] There exists a policy $\pi^* = (\pi^*_n)_{n \geq 0}$ such that for each $n \geq 0$, the control attains the minimum in \eqref{eqn33}, namely for $x_n \in X_n$,
\begin{equation}
V_n^*(x_n) = c_n(x_n,\pi^*_n) + \gamma\rho_n(V_{n+1}^*(x_{n+1})).
\end{equation}
\end{itemize}
\end{theorem}
\section{Proof of Main Result}
\begin{lemma} \cite{key-35} \label{lem31} Fix an arbitrary $n \in \mathbb{N}_0$. Let $\mathbb{K}$ be defined as
\begin{equation}
\mathbb{K} := \{ (x,a) \,|\, x \in X, a \in A(x) \},
\end{equation}
where $X$ and $A$ are complete separable metric spaces, and let $v: \mathbb{K} \rightarrow \mathbb{R}$ be a given ${\cal B}(X \times A)$-measurable function. For $x \in X$, define
\begin{equation}
v^*(x) := \inf_{a \in A(x)} v(x,a).
\end{equation}
If $v$ is non-negative, l.s.c. and inf-compact on $\mathbb{K}$ in the sense of Definition \ref{defn31}, then there exists a measurable mapping $\pi_n: X \rightarrow A$ such that for any $x \in X$,
\begin{equation}
v^*(x) = v(x,\pi_n(x)),
\end{equation}
and $v^*(\cdot):X\rightarrow \mathbb{R}$ is measurable and l.s.c.
\end{lemma}
\begin{lemma} \label{lem52}
For any $n \geq 1$, let $c_n(x_n,a_n)$ be in ${\cal Z}_n$. Then $\rho_{n-1}(c_n(x_n,a_n))$ is an element of ${\cal Z}_{n-1} = L^1(\Omega,{\cal F}_{n-1},{\mathbb{P}}^\pi_{n-1})$.
\end{lemma}
\begin{proof}
Let $\mu \in {\mathfrak A}_{n}$ be as in Theorem \ref{thm21}. By non-negativity of the one step cost function $c_n(\cdot, \cdot)$ and by the Fatou lemma, we have
\begin{equation}
\label{eqn433}
\langle \mu,c_n(x_n,a_n) \rangle \leq \liminf_{(x^k_n,a^k_n) \rightarrow (x_n,a_n)} \langle \mu,c_n(x^k_n,a^k_n) \rangle.
\end{equation}
Hence, $\langle \mu,c_n(x_n,a_n) \rangle$ is l.s.c. ${\mathbb{P}}^{\pi}_{n-1}$-a.s. Then, by Equation \eqref{eqn2220}, we have
\begin{equation}
\label{eqn434}
\rho_{n-1}(c_n(x_n,a_n)) = \operatornamewithlimits{ess\,sup}_{\mu \in {\mathfrak A}_{n}} \langle \mu, c_n(x_n,a_n) \rangle.
\end{equation}
Hence, by Equations \eqref{eqn433} and \eqref{eqn434}, a supremum of l.s.c. functions being still l.s.c., we conclude that for fixed $\omega$, $\rho_{n-1}(c_n(x_n(\omega),a_n(\omega)))$ is l.s.c. with respect to $(x_n,a_n)$.
Next, we show that $\rho_{n-1}(c_n(x_n,a_n))$ is ${\cal F}_{n-1}$-measurable. By Lemma \ref{lem22}, we have
\begin{align}
\label{eqn018}
\rho_{n-1}(c_n(x_{n},a_{n})) &= \operatornamewithlimits{ess\,sup}_{\nu \in {\mathfrak M}} \int_{[0,1]} {\sf AV@R}^{n-1}_\alpha(c_n(x_{n},a_{n}))\, d\nu\\
&= \operatornamewithlimits{ess\,sup}_{\nu \in {\mathfrak M}} \int_{[0,1]} \frac{1}{1-\alpha}\int_\alpha^1 {\sf VaR}_p^{n-1}(c_n(x_{n},a_{n}))\, dp\; d\nu \nonumber\\
&= \operatornamewithlimits{ess\,sup}_{\nu \in {\mathfrak M}} \int_{[0,1]} \frac{1}{1-\alpha}\int_\alpha^1 \operatornamewithlimits{ess\,inf} \big\{ z \in \mathbb{R}: {\mathbb{P}}^\pi_n(c_n(x_{n},a_{n})\leq z) \geq p \big\}\, dp\; d\nu, \nonumber
\end{align}
where ${\mathfrak M}$ is a set of probability measures on the interval $[0,1]$. For any $p \in [\alpha,1]$, $\operatornamewithlimits{ess\,inf} \big\{ z \in \mathbb{R}: {\mathbb{P}}^\pi_n(c_n(x_{n},a_{n})\leq z) \geq p \big\}$ is ${\cal F}_{n-1}$-measurable, and integrating from $\alpha$ to $1$ and multiplying by $\frac{1}{1-\alpha}$ preserves ${\cal F}_{n-1}$-measurability. Similarly, in Equation \eqref{eqn018}, integrating with respect to a probability measure $\nu$ on $[0,1]$ and taking the supremum of the integrals preserve ${\cal F}_{n-1}$-measurability. Hence, we conclude the proof.
\end{proof}
\begin{Corollary}
\label{cor51}
Let $n \geq 1$, $x_n \in X_n$ and $a_n \in A_n$, where $X_n$ and $A_n$ are as introduced in Equation \eqref{eqn270}. Then,
\begin{equation}
\min_{a_n \in A_n(x_n)} \rho_{n-1}(c_n(x_n,a_n))
\end{equation}
is l.s.c. in $x_n$ ${\mathbb{P}}^{\pi}_{n-1}$-a.s. Furthermore, $\displaystyle\min_{a_n \in A_n(x_n)} \rho_{n-1}(c_n(x_n,a_n))$ is ${\cal F}_{n-1}$-measurable.
\end{Corollary}
\begin{proof}
By Lemma \ref{lem52}, $\rho_{n-1}(c_n(x_n,a_n))$ is l.s.c. ${\mathbb{P}}^{\pi}_{n-1}$-a.s. Hence, by Lemma \ref{lem31},
\begin{equation}
\min_{a_n \in A_n(x_n)} \rho_{n-1}(c_n(x_n,a_n))
\end{equation}
is l.s.c. in $x_n$ for any $x_n \in X_n$, ${\mathbb{P}}^{\pi}_{n-1}$-a.s., for $n \geq 1$. Furthermore, by Lemma \ref{lem31}, we know that there exists a $\pi^* \in \Pi$ such that
\begin{align}
\min_{a_n \in A_n(x_n)} \rho_{n-1}(c_n(x_n,a_n)) &= \rho_{n-1}(c_n(x_n,\pi^*(x_n)))\\
&= \rho_{n-1}(c_n(F_{n-1}(x_{n-1},a_{n-1},\xi_{n-1}), \nonumber\\
&\qquad \pi^*(F_{n-1}(x_{n-1},a_{n-1},\xi_{n-1})))), \nonumber
\end{align}
where $F_{n-1}$ is as defined in Equation \eqref{eqn23}, and $\rho_{n-1}(c_n(x_n,\pi^*(x_n)))$ is ${\cal F}_{n-1}$-measurable by Lemma \ref{lem52}. Hence the result follows.
\end{proof}
For every $n \geq 0$, let $L_n(X_n)$ and $L_n(X_n,A_n)$ be the families of non-negative measurable mappings on $X_n$ and on $X_n \times A_n$, respectively. Denote
\begin{equation}
\label{eqn41}
T_{n}(v_{n+1}) := \min_{a_n \in A_n(x_n)} \big\{ c_n(x_n,a_n) + \gamma\rho_n(v_{n+1}(F_n(x_n,a_n, \xi_n)))\big\}.
\end{equation}
\begin{lemma} \label{lem53}
Suppose that Assumptions \ref{ass31}, \ref{ass32} and \ref{ass41} hold. Then, for every $n \geq 0$, we have
\begin{itemize}
\item[(a)] $T_n$ maps $L_{n+1}(X_{n+1})$ into $L_{n}(X_{n})$.
\item[(b)] For every $v_{n+1} \in L_{n+1}(X_{n+1})$, there exists a policy $\pi^*_n$ such that for any $x_n \in X_n$, $\pi^*_n(x_n) \in A_n(x_n)$ attains the minimum in \eqref{eqn41}, namely
\begin{equation}
\label{eqn316}
T_{n}(v_{n+1}) = c_n(x_n,\pi^*_n) + \gamma \rho_n(v_{n+1}(F_n(x_n,\pi^*_n, \xi_n))).
\end{equation}
\end{itemize}
\end{lemma}
\begin{proof}
By assumption, the one-step cost functions $c_n(x_n,a_n)$ are in $L_n(X_n,A_n)$. By Corollary \ref{cor51}, $\gamma \rho_n(v_{n+1}(F_n(x_n,a_n, \xi_n)))$ is in $L_n(X_n,A_n)$ as well, and hence so is their sum. The result then follows via Lemma \ref{lem31}, as in Corollary \ref{cor51}.
\end{proof}
By Lemma \ref{lem53}, we can express the optimality equations \eqref{eqn41} as
\begin{equation}
V_n^* = T_nV^*_{n+1}\; \textrm{ for }n \geq 0.
\end{equation}
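The fixed-point form of the optimality equations can be iterated numerically. The following is a minimal sketch (our construction, not the paper's algorithm) on a finite stationary model, taking the one step conditional risk mapping to be ${\sf AV@R}$ at a fixed level; the names `avar_discrete` and `bellman` and the arrays `P`, `C` are hypothetical.

```python
import numpy as np

def avar_discrete(values, probs, alpha):
    """AV@R_alpha of a discrete cost taking value values[j] w.p. probs[j],
    via the dual representation: reweight the distribution, capping each
    density at 1/(1 - alpha) and loading the worst outcomes first."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    cap = np.asarray(probs, dtype=float)[order] / (1.0 - alpha)
    w = np.zeros_like(v)
    remaining = 1.0
    for j in range(len(v) - 1, -1, -1):   # worst (largest) outcomes first
        w[j] = min(cap[j], remaining)
        remaining -= w[j]
    return float(np.dot(w, v))

def bellman(V, P, C, alpha, gamma):
    """One application of (T V)(x) = min_a [ c(x,a) + gamma*AV@R_alpha(V(x')) ],
    where P[x, a] is the next-state distribution of a stationary model."""
    nX, nA = C.shape
    return np.array([
        min(C[x, a] + gamma * avar_discrete(V, P[x, a], alpha) for a in range(nA))
        for x in range(nX)
    ])
```

Repeated application of `bellman` starting from $V \equiv 0$ mirrors the value iteration $V_{n,N}^* \uparrow V_n^*$ used later in the convergence argument; on a model with non-negative costs the iterates increase monotonically to the fixed point.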
Next, we continue with the following lemma.
\begin{lemma}
\label{lem54}
Under Assumptions \ref{ass31} and \ref{ass32}, for $n \geq 0$, let $v_n \in L_n(X_n)$ and $v_{n+1} \in L_{n+1}(X_{n+1})$.
\begin{itemize}
\item[(a)] If $v_n \geq T_n (v_{n+1})$, then $v_n \geq V_n^*$.
\item[(b)] If $v_n \leq T_n (v_{n+1})$ and, in addition,
\begin{equation}
\lim_{N\rightarrow \infty}v_{N}(x_{N}(\omega)) = 0,
\end{equation}
${\mathbb{P}}$-a.s., then $v_n \leq V_n^*$.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[(a)]
By Lemma \ref{lem53}, there exists a policy $\pi=(\pi_n)_{n \geq 0}$ such that for all $n \geq 0$,
\begin{equation}
v_{n}(x_n) \geq c_n(x_n,\pi_n) + \gamma\rho_n( v_{n+1}(F_n(x_n,\pi_n, \xi_n))).
\end{equation}
By iterating the right hand side and by monotonicity of $\varrho_n(\cdot)$, we get
\begin{equation}
v_{n}(x_{n}) \geq c_n(x_n,\pi_n) + \varrho_n\Big(\sum_{i = n+1}^{N-1}c_i(x_i,\pi_i) + v_{N}(x_{N})\Big).
\end{equation}
Since $v_{N}(x_{N}) \geq 0$, we have
\begin{equation}
v_{n}(x_{n}) \geq c_n(x_n,\pi_n) + \varrho_n\Big(\sum_{i=n+1}^{N-1}c_i(x_i,\pi_i)\Big), \textrm{ a.s.}
\end{equation}
Hence, letting $N \rightarrow \infty$, we obtain $v_{n}(x) \geq V_n(x,\pi)$ and so $v_{n}(x) \geq V_n^*(x)$.
\item[(b)] Suppose that $v_{n} \leq T_nv_{n+1}$ for $n \geq 0$, so that for any $\pi \in \Pi_{\textrm{ad}}$, ${\mathbb{P}}^\pi_n$-a.s.,
\begin{equation}
v_{n}(x_n) \leq c_n(x_n,\pi_n) + \gamma\rho_n(v_{n+1}(x_{n+1})).
\end{equation}
Iterating this inequality up to time $N$ gives
\begin{equation}
v_{n}(x_n) \leq c_n(x_n,a_n) + \varrho_n\Big(\sum_{i=n+1}^{N-1}c_{i}(x_{i},a_{i}) + v_{N}(x_{N})\Big).
\end{equation}
Letting $N\rightarrow \infty$ and using the assumption $\lim_{N\rightarrow\infty} v_N(x_N) = 0$ together with $\pi \in \Pi_{\mathrm{ad}}$, we get
\begin{equation}
\lim_{N\rightarrow \infty}\varrho_n(v_{N}) = 0,
\end{equation}
so that we have
\begin{equation}
v_n(x_n) \leq V_{n}(x_n,\pi).
\end{equation}
Taking the infimum over $\pi \in \Pi_{\textrm{ad}}$, we have
\begin{equation}
v_n(x_n) \leq V_{n}^*(x_n).
\end{equation}
Thus, we conclude the proof.
\end{itemize}
\end{proof}
To further proceed, we need the following technical lemma.
\begin{lemma}
\label{lem34}\cite{HL}
For every $N > n \geq 0$, let $X_n,A_n$ be complete, separable metric spaces and $\mathbb{K}_n:= \{(x_n,a_n):x_n\in X_n,\,a_n\in A_n(x_n)\}$, and let $w_n$ and $w_{n,N}$ be functions on $\mathbb{K}_n$ that are non-negative, l.s.c. and inf-compact on $\mathbb{K}_n$. If $w_{n,N} \uparrow w_n$ as $N \rightarrow \infty$, then
\begin{equation}
\lim_{N\rightarrow \infty}\min_{a_n \in A_n(x_n)}w_{n,N}(x_n,a_n) = \min_{a_n \in A_n(x_n)} w_n(x_n,a_n),
\end{equation}
for all $x_n \in X_n$.
\end{lemma}
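The interchange of limit and minimum in the lemma above is easy to check numerically in a toy setting (our illustration, under our own assumptions): take a non-negative continuous cost $w$ on a compact action grid and approximations $w_N = (1 - 1/N)\,w$ increasing to $w$; the minima then increase to the minimum of the limit.

```python
import numpy as np

# Compact action set discretized on a grid; w is non-negative and continuous,
# hence l.s.c. and inf-compact, and w_N increases pointwise to w.
a_grid = np.linspace(-1.0, 1.0, 2001)
w = lambda a: (a - 0.3) ** 2 + 1.0            # minimized at a = 0.3, min value 1.0
w_N = lambda a, N: w(a) * (1.0 - 1.0 / N)     # w_N ↑ w as N grows

mins = [np.min(w_N(a_grid, N)) for N in (1, 10, 100, 10000)]
limit = np.min(w(a_grid))                     # min of the limiting cost
```

The sequence `mins` is non-decreasing and approaches `limit`, matching the conclusion of the lemma.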
The next result establishes the convergence of the value iteration.
\begin{theorem}\label{thm51} Suppose that Assumptions \ref{ass31} and \ref{ass32} are satisfied. Then, for every $n \geq 0$ and $x_n \in X_n$,
\begin{equation}
V_{n,N}^*(x_n) \uparrow V_n^*(x_n)\;{\mathbb{P}}\textrm{-a.s. as }N\rightarrow\infty,
\end{equation}
and $V_n^*(x_n)$ is l.s.c. ${\mathbb{P}}$-a.s.
\end{theorem}
\begin{proof}
We obtain $V_{n,N}^*$ by the usual dynamic programming. Indeed, let $J_{N+1}(x_{N+1}) \equiv 0$ for all $x_{N+1} \in X_{N+1}$ a.s., and going backwards in time for $n=N,N-1,\ldots$, let
\begin{equation}
\label{eqn328}
J_{n}(x_{n}) := \inf_{a_n \in A_n(x_n)} \Big( c_n(x_n,a_n) + \gamma\rho_n(J_{n+1}(F_n(x_n,a_n, \xi_n))) \Big).
\end{equation}
Since $J_{N+1}(\cdot) \equiv 0$ is l.s.c., by backward induction, $J_n$ is l.s.c. ${\mathbb{P}}$-a.s. and ${\cal F}_n$-measurable. Moreover, by Lemma \ref{lem31}, for every $t = N-1,\ldots,n$, there exists $\pi_t^N$ such that $\pi_t^N(x_t) \in A_t(x_t)$ attains the minimum in Equation \eqref{eqn328}. Hence $\{ \pi_{N-1}^N,\ldots,\pi_n^{N} \}$ is an optimal policy. We note that $c_n(x_n,a_n)$ as well as $\gamma\rho_n(J_{n+1}(F_n(x_n,a_n, \xi_n)))$ is l.s.c., ${\cal F}_n$-measurable, inf-compact and non-negative; hence their sum preserves these properties. Furthermore, $J_n$ is the optimal $(N-n)$-stage cost by construction. Hence, $J_n(x) = V_{n,N}^*(x)$, and since $J_n(x)$ is l.s.c., so is $V_{n,N}^*(x_n)$, with
\begin{equation}
\label{eqn329}
V_{n,N}^*(x_n) = \inf_{a_n \in A_n(x_n)} \bigg(c_n(x_n,a_n) + \gamma\rho_n(V_{n+1,N}^*(x_{n+1})) \bigg).
\end{equation}
By the non-negativity assumption on $c_n(\cdot,\cdot)$ for all $n \geq 0$, the sequence $N {\ranglele gle}r V_{n,N}^*$ is non-decreasing and $V_{n,N}^*(x_n) \leq V_n^*(x_n)$, for every $x_n \in X_n$ and $N > n$. Hence, denoting
\begin{equation}gin{equation}
v_n(x_n) := \mathcal{S}up_{N>n}V_{n.N}^*(x_n) \textrm{ for all }x_n \in X_n.
\varepsilonnd{equation}
and $v_n$ being supremum of l.s.c. functions is itself l.s.c. ${\mathbb{P}}$-a.s. and ${\cal F}$-measurable. Letting $N{\ranglele gle}r \infty$ in \varepsilonqref{eqn329} by Lemma \mathop{\rm {\mathbb R}e}\nolimitsf{lem34}, we have that
\begin{equation}
\label{eqn410}
v_{n}(x_n) = \inf_{a_n \in A(x_n)} \bigg( c_n(x_n,a_n) + \rho_n(v_{n+1}(x_{n+1})) \bigg)
\end{equation}
for all $n \in \mathbb{N}_0$ and $x_n \in X_n$. Hence, the functions $v_n$ are solutions of the optimality equations, $v_n = T_nv_{n+1}$, and so by Lemma \ref{lem53}, $v_n(x_n) \geq V_n^*(x_n)$. Together with $v_n \leq V_n^*$, this gives $v_n(x) = V_n^*(x)$. Hence, $V_{n,N}^* \uparrow V_n^*$ and $V_n^*$ is l.s.c.
\end{proof}
Now, we are ready to prove our main theorem.
\begin{proof}[Proof of Theorem \ref{thm21}]
(a) By Theorem \ref{thm51}, the sequence $(V_n^*)_{n \geq 0}$ is a solution to the optimality equations. By Lemma \ref{lem53}, it is the minimal such solution.

(b) By Theorem \ref{thm51}, the functions $V_n^*$ are l.s.c. ${\mathbb{P}}$-a.s. and ${\cal F}_n$-measurable. Therefore,
\begin{equation}
\label{eqn32}
c_n(x_n,\pi^*_n) + \rho_n(V^*_{n+1}(x_{n+1}))
\end{equation}
is non-negative, l.s.c. ${\mathbb{P}}$-a.s., ${\cal F}_n$-measurable and inf-compact on $\mathbb{K}_n$ for any $a_n \in A_n$, for every $n \geq 0$. Thus, the existence of an optimal policy $\pi_n^*$ follows from Theorem \ref{thm51}. Iterating Equation \eqref{eqn32} gives
\begin{align}
V_n^*(x_n) &= c_n(x_n, \pi_n^*) + \varrho_n\bigg(\sum_{t=n+1}^{N-1}c_t(x_t, \pi_t^*) + V_{N}^*(x_{N})\bigg)\\
&\geq V_{n,N}(x_n,\pi^*).
\end{align}
Letting $N \rightarrow \infty$, we conclude that $V_n^*(x) \geq V_n(x,\pi^*)$. By the definition of $V_n^*(x)$, we also have $V_n^*(x) \leq V_n(x,\pi^*)$. Hence, $V_n^*(x) = V_n(x,\pi^*)$, which concludes the proof.
\end{proof}
\subsection{An $\epsilon$-Optimal Approximation to the Optimal Value}
We note that the iterative scheme based on the convergence of the value iterations in Theorem \ref{thm21} is computationally ineffective for problems with a large horizon $N$, since the dynamic programming equations must be solved for each time $n \leq N$. To overcome this difficulty, we propose the following methodology, which requires solving the dynamic programming equations only once and yields an $\epsilon$-optimal approximation to the original problem.
By Assumption \ref{ass41}, there exists some $N_0$ such that
\begin{equation}
\label{eqn470}
\varrho_{N_0}\bigg(\sum_{n=N_0+1}^\infty c_n(x_n, a_n)\bigg) < \epsilon \;{\mathbb{P}} \textrm{-a.s.}
\end{equation}
This implies that, for the optimal policy $(\pi^*_n)_{n\geq0}$ justified in Theorem \ref{thm21}, we have
\begin{equation}
\varrho_{N_0}\bigg(\sum_{n=N_0+1}^\infty c_n(x_n, \pi^*_n)\bigg) \leq \varrho_{N_0}\bigg(\sum_{n=N_0+1}^\infty c_n(x_n, \pi_n)\bigg) \leq \epsilon\;{\mathbb{P}} \textrm{-a.s.},
\end{equation}
since the optimal policy yields a value no larger than that of the policy in Equation \eqref{eqn470}. Hence, by solving the optimal control problem up to time $N_0$ via dynamic programming and combining the resulting decision rules $(\pi^*_0, \pi^*_1,\ldots,\pi^*_{N_0})$ with the decision rules of that policy from time $N_0+1$ onwards, we obtain an $\epsilon$-optimal policy.
Hence, we have proved the following theorem.
\begin{theorem}\label{thm42}
Suppose that Assumptions \ref{ass31} and \ref{ass32} hold. Let $\pi^0 \in \Pi_{\textrm{ad}}$ be the policy in Assumption \ref{ass41} such that
\begin{equation}
\varrho_{N_0}\big(\sum_{n=N_0+1}^\infty c_n(x_n, a_n)\big) < \epsilon\;{\mathbb{P}} \textrm{-a.s.}
\end{equation}
Then, for the optimal policy we have
\begin{equation}
\varrho_{N_0}\big(\sum_{n=N_0+1}^\infty c_n(x_n, a^*_n)\big) \leq \epsilon\;{\mathbb{P}} \textrm{-a.s.}
\end{equation}
Hence, $\pi^* = \{\pi^*_0, \pi^*_1,\pi^*_2,\ldots,\pi^*_{N_0}, \pi^0_{N_0+1},\pi^0_{N_0+2},\pi^0_{N_0+3},\ldots \}$ is an $\epsilon$-optimal policy for the original problem.
\end{theorem}
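The splicing argument behind Theorem \ref{thm42} can be sketched as follows; this is only an outline, relying on the monotonicity and subadditivity properties of the conditional risk mappings assumed above. Writing $\widehat\pi = \{\pi^*_0,\ldots,\pi^*_{N_0},\pi^0_{N_0+1},\pi^0_{N_0+2},\ldots\}$ for the spliced policy, we have
\begin{equation*}
V_0(x_0,\widehat\pi) \;\leq\; \varrho_0\bigg(\sum_{n=0}^{N_0} c_n(x_n,\pi^*_n)\bigg) + \varrho_{N_0}\bigg(\sum_{n=N_0+1}^{\infty} c_n(x_n,\pi^0_n)\bigg) \;\leq\; V_0^*(x_0) + \epsilon,
\end{equation*}
where the first term is bounded by $V_0^*(x_0)$ since the one-step costs are non-negative, and the second term is bounded by $\epsilon$ by the choice of $N_0$.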
\section{Applications}
\subsection{An Optimal Investment Problem}
In this section, we study a variant of mean-variance utility optimization (see e.g. \cite{BMZ}). The framework is as follows. We consider a financial market on an infinite time horizon $[0,\infty)$. The market consists of a risky asset $S_n$ and a riskless asset $R_n$, whose dynamics are given by
\begin{align*}
&S_{n+1} - S_n = \mu S_n + \sigma S_n \xi_n\\
&R_{n+1} - R_n = r R_n
\end{align*}
with $R_0 = 1, S_0=s_0$, where $(\xi_n)_{n\geq 0}$ are i.i.d. standard normal random variables having distribution function $\Phi$ on $\mathbb{R}$ with ${\cal Z} = L^1(\mathbb{R},{\mathfrak B}(\mathbb{R}),\Phi)$ and $\mu,r,\sigma > 0$. We consider a self-financing portfolio composed of $S$ and $R$. We let $(\widetilde{\pi}_n)_{n \geq 0}$ denote the amount of money invested in the risky asset $S_n$ at time $n$ and $X_n$ denote the investor's wealth at time $n$. Namely,
\begin{align}
X^{\widetilde{\pi}}_n &= \widetilde{\pi}_n + B_n, \\
X^{\widetilde{\pi}}_{n+1}-X^{\widetilde{\pi}}_n &= \frac{\widetilde{\pi}_n}{S_n}(S_{n+1} - S_n) + (X^{\widetilde{\pi}}_n - \widetilde{\pi}_n) r,
\end{align}
where $B_n = X^{\widetilde{\pi}}_n - \widetilde{\pi}_n$ denotes the amount invested in the riskless asset. For each $n \geq 0$, we write $\widetilde{\pi}_n = X^{\widetilde{\pi}}_n \pi_n$, so that $\pi_n$ stands for the fraction of wealth invested in the risky asset. Hence, the wealth dynamics are governed by
\begin{align}
X^\pi_{n+1}-X^\pi_n &= [r + (\mu - r)\pi_n]X^\pi_n + \sigma \pi_n X^\pi_n \xi_n
\end{align}
with initial value $x_0 = S_0 + B_0$. We further assume that $|\pi_n| \leq C$ for some constant $C > 0$ at each time $n \geq 0$.
The particular coherent risk measure used in this example is the mean-deviation risk measure, which in the \textit{static setting} is defined on ${\cal Z}$ as
\begin{equation}
\varrho(X) := \mathbb{E}^{{\mathbb{P}}}[X] + \gamma g(X),
\end{equation}
with $\gamma > 0$, where
\begin{equation}
g(X) := \mathbb{E}^{{\mathbb{P}}}\big(|X - \mathbb{E}^{{\mathbb{P}}}[X]| \big)
\end{equation}
for $X \in {\cal Z}$, and $\mathbb{E}^{{\mathbb{P}}}$ stands for the expectation taken with respect to the measure ${\mathbb{P}}$.
Hence, $\gamma$ determines our \textit{risk-aversion} level. For $\varrho$ to satisfy the properties of a coherent risk measure, it is necessary that $\gamma$ lies in $[0,1/2]$; in fact, $\gamma \in [0,1/2]$ is both necessary and sufficient for $\varrho$ to satisfy monotonicity (see \cite{key-14}). Hence, for fixed $0 \leq \gamma \leq 1/2$ and $X \in {\cal Z}$, we have that
\begin{equation}
\varrho(X) = \sup_{m \in {\mathfrak A}}\langle m, X\rangle,
\end{equation}
where ${\mathfrak A}$ is a subset of the probability measures of the form (identifying them with their corresponding densities)
\begin{align}
\label{eqn662}
{\mathfrak A} = \Big\{ m \in L^\infty(\mathbb{R}, {\cal B}(\mathbb{R}), \Phi): &\int_\mathbb{R} m(x)\, d\Phi(x) = 1,\nonumber\\
&m(x) = 1 + h(x) - \int_\mathbb{R} h(x)\,d\Phi(x),\; \| h \|_\infty \leq \gamma\;\Phi\textrm{-a.s.} \Big\}
\end{align}
for some $h \in L^\infty(\mathbb{R}, {\cal B}(\mathbb{R}), \Phi)$.
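For the reader's convenience, we include a short verification of this dual representation; the computation is elementary and is not needed in the sequel. For any admissible density $m = 1 + h - \int_\mathbb{R} h\,d\Phi$ with $\|h\|_\infty \leq \gamma$, we have
\begin{equation*}
\langle m, X\rangle = \mathbb{E}^{{\mathbb{P}}}[X] + \mathbb{E}^{{\mathbb{P}}}\big[h\,(X - \mathbb{E}^{{\mathbb{P}}}[X])\big] \leq \mathbb{E}^{{\mathbb{P}}}[X] + \gamma\,\mathbb{E}^{{\mathbb{P}}}\big[|X - \mathbb{E}^{{\mathbb{P}}}[X]|\big] = \varrho(X),
\end{equation*}
with equality for the admissible choice $h = \gamma\,\mathrm{sgn}(X - \mathbb{E}^{{\mathbb{P}}}[X])$. Hence the supremum over ${\mathfrak A}$ indeed recovers the mean-deviation risk measure.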
Then, we \textit{define} for each time $n \geq 0$ the dynamic counterpart of $\varrho$ as $\rho_n: {\cal Z}_{n+1} \rightarrow {\cal Z}_n$ with
\begin{align}
\rho_n (X_{n+1}) &= \sup_{m_n \in {\mathfrak A}_{n+1} }\langle m_n, X_{n+1}\rangle,
\end{align}
as in Equations \eqref{eqn27}, \eqref{eqn28} and \eqref{eqn2140}, using $(\mathbb{R},{\cal B}(\mathbb{R}),\Phi)$. Hence, the controlled one-step conditional risk mapping has the following representation
\begin{equation}
\sup_{m_n \in {\mathfrak A}_{n+1} } \langle m_n,X^\pi_n \rangle,
\end{equation}
and our optimization problem reads as
\begin{equation} \label{eqn567}
\min_{\pi \in \Pi_{\mathrm{ad}}} \sup_{m_n \in {\mathfrak A}_{n+1} } \langle m_n,X^\pi_n \rangle,
\end{equation}
where the ${\mathfrak A}_{n+1}$ are the sets of conditional probabilities analogous to Equation \eqref{eqn662}, with $\Pi_{\textrm{ad}}$ as defined in Definition \ref{defn230}. Namely, ${\mathfrak A}_{n+1}$ is a subset of the conditional probability measures at time $n+1$ of the form (identifying them with their corresponding densities)
\begin{align}
\label{eqn565}
{\mathfrak A}_{n+1} = \Big\{ m_{n+1} \in L^\infty(\Omega, {\cal F}_{n+1}, {\mathbb{P}}^\pi_{n+1}): &\int_\Omega m_{n+1}\, d{{\mathbb{P}}}^\pi_{n+1} = 1,\nonumber\\
&m_{n+1} = 1 + h - \int_\Omega h\, d{{\mathbb{P}}}^\pi_{n+1},\; \| h \|_\infty \leq \gamma\;{\mathbb{P}}^\pi_{n}\textrm{-a.s.} \Big\}
\end{align}
for some $h \in L^\infty(\Omega, {\cal F}_{n+1}, {\mathbb{P}}^\pi_{n+1})$, where ${\mathbb{P}}^\pi_{n+1}$ stands for the conditional probability measure on $\Omega$ at time $n+1$ as constructed in \eqref{eqn2150}.
Our one-step cost functions are $c_n(x_n,a_n) = x_n$ for $n \geq 0$, to which we apply a discount factor $0 < \gamma < 1$; they are l.s.c. (in fact continuous) in $(x_n,a_n)$ for $n \geq 0$.
Hence, starting with initial wealth $x_0$ at time 0, the investor's control problem reads as
\begin{align}
\label{exprob}
&x_0 + \min_{\pi \in \Pi_{\mathrm{ad}}} \varrho_0\bigg( \sum_{n = 1}^\infty X^\pi_n \bigg) \\
&\triangleq x_0 + \min_{\pi \in \Pi_{\mathrm{ad}}} \lim_{N\rightarrow \infty} \bigg( c_0(x_0,a_0) + \gamma\rho_0\big(c_1(x_1,a_1) + \ldots+\gamma \rho_{N-1}(c_N(x_N,a_N))\ldots\big)\bigg).
\end{align}
We note that $\Pi_{\mathrm{ad}}$ is not empty, so that our example satisfies Assumption \ref{ass41}. Indeed, choosing $a_n \equiv 0$ for $n \geq 0$, i.e. investing all the current wealth into the riskless asset $R_n$ for $n\geq 0$, we have that
\begin{equation}
\varrho\bigg( \sum_{n = 0}^\infty \gamma^n x_0 \bigg) = \frac{x_0}{1 - \gamma}.
\end{equation}
Hence, as in Theorem \ref{thm42}, we find $N_0$ such that
\begin{equation}
x_0 \sum_{n = N_0}^\infty \gamma^n < \epsilon.
\end{equation}
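To make this concrete, note that the tail sum is geometric, so the condition reads
\begin{equation*}
x_0 \sum_{n = N_0}^\infty \gamma^n = \frac{x_0\,\gamma^{N_0}}{1-\gamma} < \epsilon,
\qquad\text{i.e.}\qquad
N_0 > \frac{\log\big(\epsilon(1-\gamma)/x_0\big)}{\log \gamma}.
\end{equation*}
For instance, with the illustrative values $x_0 = 1$, $\gamma = 0.9$ and $\epsilon = 0.01$ (these numbers are our own and do not come from the model), the smallest admissible horizon is $N_0 = 66$.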
Thus, we write the corresponding \textit{robust} dynamic programming equations as follows. Starting with $V^*_{N_0+1} \equiv 0$, for $n = N_0, N_0-1, \ldots, 1$ we have by Equation \eqref{eqn567}
\begin{align}
V^*_{n}(X^\pi_{n}) &= \min_{|\pi_n| \leq C} X^\pi_n + \gamma \rho_{n}(V^*_{n+1}(X^\pi_{n+1})) \\
&= \min_{|\pi_n| \leq C} X_n^\pi + \gamma \sup_{m_{n+1} \in {\mathfrak A}_{n+1}}\langle m_{n+1}, V^*_{n+1}(X_{n+1}^\pi) \rangle.
\end{align}
Going backwards iteratively, the problem to solve at the first stage is then
\begin{align}
V^*_0(x_0) &= \min_{ |a_0| \leq C}x_0 + \gamma\rho_0( V^*_1( X^\pi_1) )\\
&= x_0 + \gamma \min_{|a_0| \leq C}\sup_{m_1 \in {\mathfrak A}_{1}}\langle m_1, V^*_1( X_1^\pi) \rangle.
\end{align}
Hence, the corresponding policy
\begin{equation}
\widetilde{\pi} = \{ \pi^*_0, \pi^*_1,\pi^*_2,\ldots,\pi^*_{N_0},0,0,0,\ldots \}
\end{equation}
is $\epsilon$-optimal, with optimal value $V^*_0(x_0)$, for our example optimization problem \eqref{exprob}.
\subsection{The Discounted LQ-Problem}
We consider the linear-quadratic regulator problem on an infinite horizon. We refer the reader to \cite{HL} for its study using expectation performance criteria. Instead of the expected value, we use the ${\sf AV@R}$ operator to evaluate the total discounted performance.
For $n\geq 0$, we consider the scalar linear system
\begin{equation}
x_{n+1} = x_n + a_n + \xi_n,
\end{equation}
with $X_0 = x_0$, where the disturbances $(\xi_n)_{n \geq 0}$ are independent, identically distributed random variables on ${\cal Z}_n^2 = L^2(\mathbb{R},{\cal B}(\mathbb{R}),{\mathbb{P}}^n)$ with mean zero and $\mathbb{E}^{{\mathbb{P}}^n}[\xi^2_n] < \infty$. The control problem reads as
\begin{align}
\label{orn2}
&\min_{\pi \in \Pi_{\mathrm{ad}}} \varrho_0\bigg( \sum_{n = 0}^\infty (x^2_n + a^2_n) \bigg) \nonumber\\
&\triangleq \min_{\pi \in \Pi_{\mathrm{ad}}} \lim_{N\rightarrow \infty} \bigg( (x^2_0+ a^2_0) + \gamma\rho_0\big( (x^2_1+ a^2_1) \nonumber\\
&\quad + \ldots +\gamma\rho_{N-1}\big((x^2_N+ a^2_N)\big) \ldots\big)\bigg),
\end{align}
where $\rho_n: {\cal Z}_{n+1}^2 \rightarrow {\cal Z}_{n}^2$ is the dynamic ${\sf AV@R}_\alpha$ operator defined as
\begin{align}
\rho_n(Z) &\triangleq \sup_{m_{n+1} \in {\mathfrak A}_{n+1}} \langle m_{n+1},Z \rangle,
\end{align}
with
\begin{align}
\label{eqn05999}
{\mathfrak A}_n = \big\{ m_n \in L^{\infty}(\Omega,{\cal F}_n,{\mathbb{P}}^\pi_n): &\int_\Omega m_n\, d{\mathbb{P}}^\pi_n = 1, \nonumber\\
&0 \leq \lVert m_n \rVert_\infty \leq \frac{1}{\alpha}\;\; {\mathbb{P}}^\pi_{n-1}\textrm{-a.s.} \big\}.
\end{align}
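For later use, we record an elementary consequence of \eqref{eqn05999}: since every density $m \in {\mathfrak A}_{n}$ satisfies $0 \leq m \leq 1/\alpha$, we have, for any non-negative $Z$,
\begin{equation*}
{\sf AV@R}_\alpha(Z) = \sup_{m \in {\mathfrak A}_{n}} \langle m, Z\rangle \leq \frac{1}{\alpha}\,\mathbb{E}^{{\mathbb{P}}}[Z].
\end{equation*}
This is the bound used in the estimate below.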
We note that $\Pi_{\textrm{ad}}$ is not empty. Indeed, choose $\pi_n \equiv 0$ for $n \geq 0$, so that
\begin{equation}
x_n = x_0 + \sum_{i=0}^{n-1}\xi_i,
\end{equation}
with
\begin{equation}al
{\rm Var}rho(\mathcal{S}um_{n=0}^\infty x^2_n ) &\leq 2x_0^2 + 2{\rm Var}rho( \mathcal{S}um_{n=0}^\infty { x}i^2_n ) \\
&\leq 2x^2_0 + 2\mathcal{S}um_{n=0}^\infty \gamma^n {\sf AV@R}_\alf({ x}i^2_n)\\
&\leq 2 x^2_0 + 2 \mathcal{S}um_{n=0}^\infty \gamma^n \frac{1}{\alf}\mbox{\tiny\boldmath$b$}E^{{\mathbb{P}}}[{ x}i^2_i]\\
&\leq 2 x^2_0 + \frac{2\mathcal{S}g^2}{\alf(1-\gamma)}\\
&< \infty,
\varepsilonal
where we used Equation \varepsilonqref{eqn05999} in the third inequality.
Hence, we find $N_0$ such that
\begin{equation}
\frac{2\sigma^2}{\alpha}\sum_{n=N_0}^\infty \gamma^n < \epsilon.
\end{equation}
Starting with $J_{N_0+1} \equiv 0$, the corresponding $\epsilon$-optimal policy for $n=N_0,N_0-1,\ldots,0$ is found via
\begin{align}
J_n(x_n) = \min_{|a_n|\leq C}\bigg( (x^2_n + a^2_n) + \gamma\, {\sf AV@R}^n_{\alpha}\big(J_{n+1}(x_{n} + a_{n} + \xi_n)\big) \bigg),
\end{align}
so that at the final stage, we have
\begin{align}
J_0(x_0) &= \min_{ |a_0| \leq C}\Big( (x^2_0 + a^2_0) + \gamma\,{\sf AV@R}_\alpha( J_1( x_1) )\Big)\\
&= \min_{|a_0| \leq C}\Big( (x^2_0 + a^2_0) + \gamma \sup_{m_1 \in {\mathfrak A}_{1}}\langle m_1, J_1( x_1) \rangle\Big),
\end{align}
where ${\mathfrak A}_1$ is as defined in Equation \eqref{eqn05999}. Thus, the corresponding policy
\begin{equation}
\widetilde{\pi} = \{ \pi^*_0, \pi^*_1,\pi^*_2,\ldots,\pi^*_{N_0},0,0,0,\ldots \}
\end{equation}
is $\epsilon$-optimal, with optimal value $J_0(x_0)$, for problem \eqref{orn2}.
\begin{thebibliography}{99}
\bibitem{key-1}
{\mathcal{S}c Artzner, P., Delbaen, F., Eber, J.M., Heath, D.} (1999). {\varepsilonm Coherent
measures of risk}, Math. Finance 9, 203-228.
\bibitem{HL2} {\mathcal{S}c Hernandez-Lerma, O.}(1989), {\varepsilonm Adaptive Markov Control Processes}, Springer-Verlag. New York.
\bibitem{key-35}
{\mathcal{S}c Rieder, U.} (1978). {\varepsilonm Measurable Selection Theorems for Optimisation Problems}, Manuscripta Mathematica, 24, 115-131.
\bibitem{RW}
{\mathcal{S}c R. T. Rockafellar and R. J.-B. Wets}, {\varepsilonm Variational Analysis}, Springer, New York, 1998.
\bibitem{key-6}
{\mathcal{S}c Bellman, R.}
(1952). {\varepsilonm On the theory of dynamic programming} Proc. Natl. Acad. Sci 38, 716.
\bibitem{key-14}
{\mathcal{S}c Ruszczynski, A. and Shapiro, A.} (2006). {\varepsilonm Optimization
of convex risk functions}, Mathematics of Operations
Research, vol. 31, pp. 433-452.
\bibitem{FSH1}
{\mathcal{S}c H. Follmer and A. Schied}, (2002), {\varepsilonm Convex measures of risk and trading constraints}, Finance Stochastics, 6 429-447.
\bibitem{FSH2}
{\mathcal{S}c H. Follmer and A. Schied}, (2011) {\varepsilonm Stochastic finance. An introduction in discrete time}, de Gruyter, Berlin.
\bibitem{key-141}
{\mathcal{S}c M.Stadje and P. Cheridito}, {\varepsilonm Time-inconsistencies of Value at Risk and Time-Consistent Alternatives}, Finance Research Letters. (2009) 6, 1, 40-46.
\bibitem{key-334}
{\mathcal{S}c A. Shapiro}, On a time consistency concept in risk averse multi-stage stochastic
programming, {\varepsilonm Operations Research Letters}. 37 (2009) 143-147.
\bibitem{key-149} {\sc Goovaerts, M.J. and Laeven, R.} (2008), {\em Actuarial risk measures for financial derivative pricing}. Insurance: Mathematics and Economics, 42, 540-547.
\bibitem{key-150} {\sc Godin, F.} (2016), {\em Minimizing CVaR in global dynamic hedging with transaction costs}, Quantitative Finance, 6, 461-475.
\bibitem{BO}
{\sc Bauerle, N., Ott, J.} (2011), {\em Markov Decision Processes with Average-Value-at-Risk Criteria}, Mathematical Methods of Operations Research 74, 361-379.
\bibitem{key-151} {\mathcal{S}c Balbas, A., Balbas, R. and Garrido, J.}(2010), {\varepsilonm Extending pricing rules with general risk functions}, European Journal of Operational Research, 201, 23-33.
\bibitem{key-148} {\mathcal{S}c Roorda, B. and Schumacher, J.}(2016), {\varepsilonm Weakly time consistent concave valuations and their dual representations}. Finance and Stochastics, 20, 123-151.
\bibitem{key-3}
{\mathcal{S}c Ruszczynski, A.} (2010). {\varepsilonm Risk-averse dynamic programming for
Markov decision processes}, Math. Program. Ser. B 125:235-261.
\bibitem{PR}
{\mathcal{S}c Pflug, G.Ch., R\"{o}misch, W.} (2007) {\varepsilonm Modeling, Measuring and Managing Risk}. World Scientific, Singapore.
\bibitem{RS06}
{\mathcal{S}c Ruszczynski, A., Shapiro, A.} (2006) {\varepsilonm Conditional risk mappings}. Mathematics of Operations Research, 31, 544-561
\bibitem{K}
{\mathcal{S}c Kusuoka S} (2001). {\varepsilonm On law-invariant coherent risk measures}. Kusuoka S, Maruyama T, eds. Advances in Mathematical Economics,
Vol. 3, Springer, Tokyo, 83-95.
\bibitem{BMZ}
{\mathcal{S}c Bjork, T., Murgoci, A., and Zhou, X.} (2014). {\varepsilonm Mean variance portfolio optimization with state dependent risk aversion}, Mathematical Finance 24: 1-24.
\bibitem{BMM}
{\mathcal{S}c Bjork, T., Mariana Khapko, M., and Murgoci, A.} (2017).
{\varepsilonm On time-inconsistent stochastic control in continuous time}, Finance and Stochastics,
21(2),331–360
\bibitem{SB}{\mathcal{S}c Shreve, S., Bertsekas, P.D.}(2007). {\varepsilonm Stochastic Optimal Control: The Discrete-Time Case}, Athena, Scientific.
\bibitem{HL}
{\mathcal{S}c Hernandez-Lerma,O., Lasserre, J.B.} (1996). {\varepsilonm Discrete-time Markov
Control Processes. Basic Optimality Criteria.}, Springer,New York.
\bibitem{FR}
{\mathcal{S}c Riedel, F.} (2004). {\varepsilonm Dynamic Coherent Risk Measures. } Stochastic Processes Applications, 112, 185-200.
\bibitem{CR1}
{\mathcal{S}c Cavus, O, Ruszczynski, A.}(2014)
{\varepsilonm Computational Methods for Risk-averse Undiscounted Transient Markov Models.} Operations Research 62(2), 401-417.
\bibitem{CR2}
{\mathcal{S}c Cavus, O, Ruszczynski, A.}(2014)
{ \varepsilonm Risk-averse Control of Undiscounted Transient Markov models}. SIAM Journal on Control Optimization 52(6):3935-3966.
\bibitem{LS}
{\mathcal{S}c Lin, K., Marcus, S.}
{\varepsilonm Dynamic Programming with Non-convex Risk-sensitive Measures}. American
Control Conference (ACC), 2013, IEEE. 6778-6783.
\bibitem{CZ}
{\mathcal{S}c Chu, S., Zhang, Y.}(2014)
{\varepsilonm Markov Decision processes with Iterated Coherent Risk Measures} International Journal of Control 87(11):2286–2293
\bibitem{SSO}
{\mathcal{S}c Shen, Y., Stannat, W., Obermayer, K.} (2013) {\varepsilonm Risk-sensitive Markov control processes}. SIAM Journal on Control Optimization.
51(5):3652–3672.
\bibitem{FR1}
{\mathcal{S}c Fan, J., Ruszczynski, A.} (2016) {\varepsilonm Process-based Risk Measures and Risk-averse Control of Discrete-time Systems } arXiv:1411.2675
\bibitem{FR2}
{\mathcal{S}c J. Fan, J., Ruszczynski, A.} (2018) {\varepsilonm Risk Measurement and Risk-averse Control of Partially Observable Discrete-time Markov Systems}, Mathematical Methods of Operations Research, 1-24.
\bibitem{SN}
{\mathcal{S}c Yuksel, S., Saldi, N.}
{\varepsilonm Convex Analysis in Decentralized Stochastic Control, Strategic Measures and Optimal Solutions}(2017)
SIAM Journal on Control and Optimization, 55(1):1-28,
\bibitem{NST}
{\mathcal{S}c Saldi, N., Yuksel, S., and Linder, T}.(2017)
{\varepsilonm Asymptotic Optimality of Finite Approximations to Markov Decision Processes with Borel Spaces}
Mathematics of Operations Research, 945-978.
\bibitem{FS1}
{\mathcal{S}c Fleming, W.H., Sheu, S.J.}(1999) {\varepsilonm Optimal long term growth rate of expected utility of wealth.}
Annals of Applied Probability 9, 871-903.
\bibitem{FS2}
{\mathcal{S}c Fleming, W.H., Sheu, S.J.}(2000)
{\varepsilonm Risk-sensitive control and an optimal investment model}. Mathematical Finance 10, 197-213.
\bibitem{GHL}
{\mathcal{S}c Guo, X., Hernandez-del-Valle, A., and Hernandez-Lerma, O.}(2010)
{\varepsilonm Nonstationary discrete- time deterministic and stochastic control systems with infinite horizon}, International Journal of Control, 83:9, 1751-1757.
\bibitem{AS}
{\mathcal{S}c Shapiro, A.}(2016) {\varepsilonm Rectangular Sets of Probability Measures}, Operations Research, 64, 528-541.
\varepsilonnd{thebibliography}
\end{document}
\begin{document}
\title{Average Frobenius distribution for the degree two primes of a number field}
\author[James]{Kevin James}
\address[Kevin James]{
Department of Mathematical Sciences\\
Clemson University\\
Box 340975 Clemson, SC 29634-0975
}
\email{[email protected]}
\urladdr{www.math.clemson.edu/~kevja}
\author[Smith]{Ethan Smith}
\address[Ethan Smith]{
Centre de recherches math\'ematiques\\
Universit\'e de Montr\'eal\\
P.O. Box 6128\\
Centre-ville Station\\
Montr\'eal, Qu\'ebec\\
H3C 3J7\\
Canada;
\and
Department of Mathematical Sciences\\
Michigan Technological University\\
1400 Townsend Drive\\
Houghton, Michigan\\
49931-1295\\
USA
}
\email{[email protected]}
\urladdr{www.math.mtu.edu/~ethans}
\begin{abstract}
Let $K$ be a number field and $r$ an integer.
Given an elliptic curve $E$, defined over $K$, we consider the problem of
counting the number of degree two prime ideals of $K$ with trace of Frobenius equal
to $r$. Under certain restrictions on $K$, we show that ``on average'' the
number of such prime ideals with norm less than or equal to $x$ satisfies an asymptotic
identity that is in accordance with standard heuristics. This work is related to the classical
Lang-Trotter conjecture and extends the work of several authors.
\end{abstract}
\subjclass[2000]{11N05, 11G05}
\keywords{Lang-Trotter Conjecture, Hurwitz-Kronecker class number, Chebotar\"ev Density Theorem, Barban-Davenport-Halberstam Theorem}
\maketitle
\section{\textbf{Introduction.}}
Let $E$ be an elliptic curve defined over a number field $K$.
For a prime ideal $\mathfrak{p}$ of the ring of integers $\mathcal{O}_K$ where $E$ has good reduction, we
let $a_\mathfrak{p}(E)$ denote the trace of the Frobenius morphism at $\mathfrak{p}$.
It follows that the number of points on the reduction of $E$ modulo $\mathfrak{p}$ satisfies the identity
\begin{equation*}
\#E_\mathfrak{p}(\mathcal{O}_K/\mathfrak{p}) =\mathbf{N}\mathfrak{p}+1-a_\mathfrak{p}(E),
\end{equation*}
where $\mathbf{N}\mathfrak{p}:=\#(\mathcal{O}_K/\mathfrak{p})$ denotes the norm of $\mathfrak{p}$.
It is a classical result of Hasse that
\begin{equation*}
|a_\mathfrak{p}(E)|\le 2\sqrt{\mathbf{N}\mathfrak{p}}.
\end{equation*}
See~\cite[p.~131]{Sil:1986} for example.
It is well-known that if $p$ is the unique rational prime lying below $\mathfrak{p}$ (i.e., $p\mathbb{Z}=\mathbb{Z}\cap\mathfrak{p}$),
then $\mathcal{O}_K/\mathfrak{p}$ is isomorphic to the finite field $\mathbb{F}_{p^f}$ for some positive integer $f$.
We refer to this integer $f$ as the (absolute) \textit{degree} of $\mathfrak{p}$ and write $\deg\mathfrak{p}=f$.
Given a fixed elliptic curve $E$ and fixed integers $r$ and $f$, the classical heuristics of
Lang and Trotter~\cite{LT:1976} may be generalized to consider the prime counting function
\begin{equation*}
\pi_E^{r,f}(x):=\#\left\{\mathbf{N}\mathfrak{p}\le x: a_\mathfrak{p}(E)=r\text{ and } \deg\mathfrak{p}=f\right\}.
\end{equation*}
\begin{conj}[Lang-Trotter for number fields]\label{LT for K}
Let $E$ be a fixed elliptic curve defined over $K$, and let $r$ be a fixed integer.
In the case that $E$ has complex multiplication, also assume that $r\ne 0$.
Let $f$ be a positive integer.
There exists a constant $\mathfrak{C}_{E,r,f}$ such that
\begin{equation}
\pi_E^{r,f}(x)\sim\mathfrak{C}_{E,r,f}\begin{cases}
\frac{\sqrt x}{\log x}& \text{if }f=1,\\
\log\log x& \text{if }f=2,\\
1& \text{if }f\ge 3
\end{cases}
\end{equation}
as $x\rightarrow\infty$.
\end{conj}
\begin{rmk}
It is possible that the constant $\mathfrak{C}_{E,r,f}$ may be zero.
In this event, we interpret the conjecture to mean that there are only finitely many such primes.
In the case that $f\ge 3$, we always interpret the conjecture to mean that there are only finitely
many such primes.
\end{rmk}
\begin{rmk}
The first appearance of Conjecture~\ref{LT for K} in the literature seems to be in the work of
David and Pappalardi~\cite{DP:2004}. It is not clear to the authors what the constant
$\mathfrak{C}_{E,r,f}$
should be for the cases when $f\ge 2$. Indeed, it does not appear that an explicit constant has
ever been conjectured for these cases. We hope that one of the benefits of our work is that it will
shed some light on what the constant should look like for the case $f=2$.
\end{rmk}
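To motivate the $\log\log x$ growth in the case $f=2$, one may argue heuristically as follows; this is a standard Lang-Trotter-type probabilistic heuristic and not part of our formal results. The degree two primes $\mathfrak{p}$ with $\mathbf{N}\mathfrak{p}=p^2\le x$ lie above rational primes $p\le\sqrt{x}$, and for such a prime Hasse's bound allows roughly $4p$ possible values of $a_\mathfrak{p}(E)$. Modeling $a_\mathfrak{p}(E)$ as equidistributed in this range suggests that $a_\mathfrak{p}(E)=r$ with ``probability'' about $c/p$ for some constant $c>0$, whence
\begin{equation*}
\pi_E^{r,2}(x)\approx\sum_{p\le\sqrt{x}}\frac{c}{p}\asymp\log\log x
\end{equation*}
by Mertens' theorem.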
Given a family $\mathscr{C}$ of elliptic curves defined over $K$, by the
\textit{average Lang-Trotter problem} for $\mathscr{C}$, we mean the problem of computing an
asymptotic formula for
\begin{equation*}
\frac{1}{\#\mathscr{C}}\sum_{E\in\mathscr{C}}\pi_E^{r,f}(x).
\end{equation*}
We refer to this expression as the \textit{average order} of $\pi_E^{r,f}(x)$ over $\mathscr{C}$.
In order to provide support for Conjecture~\ref{LT for K}, several authors have proven
results about the average order of $\pi_E^{r,f}(x)$ over various families of elliptic curves.
See~\cite{FM:1996,DP:1999,DP:2004,Jam:2004,BBIJ:2005,Bai:2007,CFJKP:2011,JS:2011}.
In each case, the results have been found to be in accordance with Conjecture~\ref{LT for K}.
Unfortunately, at present, it is necessary to take $\mathscr{C}$ to be a family of curves that ``grows''
at some specified rate with respect to the variable $x$. The authors of the
works~\cite{FM:1996,Bai:2007,JS:2011} put a great deal of effort into keeping the average
as ``short'' as possible. This seems to be a difficult task for the cases of the average Lang-Trotter
problem that we consider here.
In~\cite{CFJKP:2011}, it was shown how to solve the average Lang-Trotter problem when $K/\mathbb{Q}$
is an Abelian extension and $\mathscr{C}$ is essentially the family of elliptic curves defined
by~\eqref{defn of ecbox} below. It turns out that their methods were actually sufficient to handle
some non-Abelian Galois extensions as well in the case when $f=2$.
In~\cite{JS:2011}, the results of~\cite{CFJKP:2011} were extended to the setting of any Galois
extension $K/\mathbb{Q}$ except in
the case that $f=2$. In this paper, we consider the case when $f=2$ and $K/\mathbb{Q}$ is an
arbitrary Galois extension. We show how the problem of computing an asymptotic formula for
\begin{equation*}
\frac{1}{\#\mathscr{C}}\sum_{E\in\mathscr{C}}\pi_E^{r,2}(x)
\end{equation*}
may be reduced to a certain average error problem for the Chebotar\"ev Density Theorem
that may be viewed as a variation on a classical problem solved by Barban, Davenport, and
Halberstam. We then show how to solve this problem in certain cases.
\section{\textbf{Acknowledgment.}}
We would like to thank the anonymous referee for many helpful suggestions and a very careful reading of the manuscript.
The second author would also like to thank David Grant, Hershy Kisilevsky, and Dimitris
Koukoulopoulos for helpful discussions during the preparation of this paper.
\section{\textbf{An average error problem for the Chebotar\"ev Density Theorem.}}
\label{cheb for composites}
For the remainder of the article, it will be assumed that $K/\mathbb{Q}$ is a finite degree Galois
extension with group $G$. Our technique for computing an asymptotic formula for the
average order of $\pi_E^{r,2}(x)$ involves estimating sums of the form
\begin{equation*}
\theta(x;C,q,a):=\sum_{\substack{p\le x\\ \leg{K/\mathbb{Q}}{p}\subseteq C\\ p\equiv a\pmod q}}\log p,
\end{equation*}
where the sum is over the primes $p$ which do not ramify in $K$,
$\leg{K/\mathbb{Q}}{p}$ denotes the Frobenius class of $p$ in $G$, and $C$ is a union of conjugacy
classes of $G$ consisting entirely of elements of order two.
Since the last two conditions on $p$ under the sum may be in conflict for certain choices of
$q$ and $a$, we will need to take some care when attempting to estimate such sums via
the Chebotar\"ev Density Theorem.
For each positive integer $q$, we fix a primitive $q$-th root of unity and denote it by $\zeta_q$.
It is well-known that there is an isomorphism
\begin{equation}\label{cyclotomic rep}
\begin{diagram}
\mathpzc mathpzc node{(\mathpzc mathbb{Z}/q\mathpzc mathbb{Z})^\times}\arrow{e,t}{\sim}\mathpzc mathpzc node{\mathpzc mathrm{Gal}(\mathpzc mathbb{Q}(\zeta_q)/\mathpzc mathbb{Q})}
\end{diagram}
\end{equation}
given by $a\mathpzc mapsto\sigma_{q,a}$ where
$\sigma_{q,a}$ denotes the unique automorphism in $\mathpzc mathrm{Gal}(\mathpzc mathbb{Q}(\zeta_q)/\mathpzc mathbb{Q})$ such that
$\sigma_{q,a}(\zeta_q)=\zeta_q^a$. By definition of the Frobenius automorphism, it turns
out that if $p$ is a rational prime, then $\leg{\mathpzc mathbb{Q}(\zeta_q)/\mathpzc mathbb{Q}}{p}=\sigma_{q,a}$ if and only if
$p\equiv a\mathpzc mathfrak{p}mod q$. See~\cite[pp.~11-14]{Was:1997} for example.
More generally, for any number field $K$ the extension $K(\zeta_q)/K$ is Galois, and
restriction of automorphisms of $K(\zeta_q)$ down to $\mathbb{Q}(\zeta_q)$ gives mappings
\begin{equation*}
\mathrm{Gal}(K(\zeta_q)/K)\xrightarrow{\ \sim\ }\mathrm{Gal}(\mathbb{Q}(\zeta_q)/K\cap\mathbb{Q}(\zeta_q))
\hookrightarrow\mathrm{Gal}(\mathbb{Q}(\zeta_q)/\mathbb{Q}).
\end{equation*}
Therefore, via~\eqref{cyclotomic rep}, we obtain a natural injection
\begin{equation}\label{cyc injection}
\mathrm{Gal}(K(\zeta_q)/K)\hookrightarrow(\mathbb{Z}/q\mathbb{Z})^\times.
\end{equation}
We let $G_{K,q}$ denote the image of the map~\eqref{cyc injection} in
$(\mathbb{Z}/q\mathbb{Z})^\times$ and set
$\varphi_K(q):=\#G_{K,q}$. Note that $\varphi_\mathbb{Q}$ is the usual Euler $\varphi$-function.
For $a\in G_{K,q}$ and a prime ideal $\mathfrak{p}$ of $K$, it follows that
$\leg{K(\zeta_q)/K}{\mathfrak{p}}=\sigma_{q,a}$ if and only if $\mathbf{N}\mathfrak{p}\equiv a\pmod q$.
Now let $G'$ denote the commutator subgroup of $G$, and let $K'$ denote the fixed field of $G'$;
we will use this notation throughout the article.
It follows that $K'$ is the maximal Abelian subextension of $K$. By the Kronecker-Weber
Theorem~\cite[p.~210]{Lan:1994}, there is a smallest positive integer $m_K$ such that
$K'\subseteq\mathbb{Q}(\zeta_{m_K})$. For every $q\ge 1$, it follows that $K\cap\mathbb{Q}(\zeta_{qm_K})=K'$.
Furthermore, the extension $K(\zeta_{qm_K})/\mathbb{Q}$ is Galois with group isomorphic to
the fibered product
\begin{equation*}
\{(\sigma_1,\sigma_2)\in\mathrm{Gal}(\mathbb{Q}(\zeta_{qm_K})/\mathbb{Q})\times G: \sigma_1|_{K'}=\sigma_2|_{K'}\}.
\end{equation*}
See~\cite[pp.~592-593]{DF:2004} for example.
It follows that
\begin{equation}\label{deg of comp extn}
[K(\zeta_{qm_K}):\mathbb{Q}]=\frac{\varphi(qm_K)n_K}{n_{K'}}=\varphi_K(qm_K)n_K,
\end{equation}
where here and throughout we use the notation $n_{F}:=[F:\mathbb{Q}]$ to denote the degree of
a number field $F$.
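For concreteness, here is a small worked example of these definitions; it is our own illustration and is not drawn from the results below.

```latex
% Worked example (illustration): K = Q(i).
For instance, take $K=\mathbb{Q}(i)$, so that $K'=K$, $n_K=n_{K'}=2$, and
$m_K=4$. If $4\mid q$, then $K\cap\mathbb{Q}(\zeta_q)=K$, and the
injection~\eqref{cyc injection} identifies $\mathrm{Gal}(K(\zeta_q)/K)$ with
\begin{equation*}
G_{K,q}=\{a\in(\mathbb{Z}/q\mathbb{Z})^\times: a\equiv 1\pmod 4\},
\qquad \varphi_K(q)=\varphi(q)/2,
\end{equation*}
while if $4\nmid q$, then $K\cap\mathbb{Q}(\zeta_q)=\mathbb{Q}$,
$G_{K,q}=(\mathbb{Z}/q\mathbb{Z})^\times$, and $\varphi_K(q)=\varphi(q)$.
Either way, \eqref{deg of comp extn} checks out: since $4\mid qm_K$,
\begin{equation*}
[K(\zeta_{qm_K}):\mathbb{Q}]=\varphi(qm_K)
=\frac{\varphi(qm_K)}{2}\cdot 2=\varphi_K(qm_K)\,n_K.
\end{equation*}
```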
For each $\tau\in\mathrm{Gal}(K'/\mathbb{Q})$, it follows from the above facts that there is a finite list
$\mathcal S_\tau$ of congruence conditions modulo $m_K$
(really a coset of $G_{K,m_K}$ in $(\mathbb{Z}/m_K\mathbb{Z})^\times$)
such that for any rational prime $p$ not ramifying in $K'$, $\leg{K'/\mathbb{Q}}{p}=\tau$ if and only if
$p\equiv a\pmod{m_K}$ for some $a\in\mathcal S_\tau$.
Now, suppose that $\tau$ has order one or two in $\mathrm{Gal}(K'/\mathbb{Q})$, and
let $\mathcal C_\tau$ be the subset of order two elements of $G$ that restrict to $\tau$ on $K'$, i.e.,
\begin{equation*}
\mathcal C_\tau:=\{\sigma\in G: \sigma|_{K'}=\tau\text{ and }|\sigma|=2\}.
\end{equation*}
Since $K'/\mathbb{Q}$ is Abelian, it follows that $\mathcal C_\tau$ is a union of conjugacy classes in $G$.
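To illustrate the sets $\mathcal S_\tau$ and $\mathcal C_\tau$ in a non-Abelian setting, the following example (our own) may help.

```latex
% Illustration: K the splitting field of x^3 - 2.
For example, let $K$ be the splitting field of $x^3-2$, so that
$G\cong S_3$, $G'\cong A_3$, $K'=\mathbb{Q}(\sqrt{-3})$, and $m_K=3$.
If $\tau$ is the non-trivial element of $\mathrm{Gal}(K'/\mathbb{Q})$, then
$\mathcal S_\tau=\{2\bmod 3\}$ and $\mathcal C_\tau$ consists of the three
transpositions (each of which acts non-trivially on $\sqrt{-3}$), while for
$\tau=1$ we have $\mathcal S_\tau=\{1\bmod 3\}$ and
$\mathcal C_\tau=\emptyset$, since no element of $A_3$ has order two.
```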
Then for each $a\in (\mathbb{Z}/qm_K\mathbb{Z})^\times$,
the Chebotar\"ev Density Theorem gives the asymptotic formula
\begin{equation}\label{cheb comp}
\theta(x;\mathcal C_\tau,qm_K,a)
\sim \frac{\#\mathcal C_\tau}{\varphi_K(qm_K)n_K}x,
\end{equation}
provided that $a\equiv b\pmod{m_K}$ for some $b\in\mathcal S_\tau$.
Otherwise, the sum on the left is empty.
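As a consistency check (again our own illustration), in the simplest Abelian case the asymptotic above collapses to the prime number theorem in arithmetic progressions.

```latex
% Sanity check: (cheb comp) for K = Q(i).
Take $K=\mathbb{Q}(i)$, so $K'=K$, $m_K=4$, $n_K=2$, and let $\tau$ be complex
conjugation, so that $\mathcal C_\tau=\{\tau\}$ and $\mathcal S_\tau=\{3\bmod 4\}$.
Since $G_{K,4q}=\{a\equiv 1\bmod 4\}$ has index two in
$(\mathbb{Z}/4q\mathbb{Z})^\times$, we have $\varphi_K(4q)=\varphi(4q)/2$, and for
$a\equiv 3\pmod 4$ formula~\eqref{cheb comp} reads
\begin{equation*}
\theta(x;\mathcal C_\tau,4q,a)\sim\frac{1}{(\varphi(4q)/2)\cdot 2}\,x
=\frac{x}{\varphi(4q)},
\end{equation*}
which is the prime number theorem for the progression $p\equiv a\pmod{4q}$:
the primes $p\equiv 3\pmod 4$ are precisely those whose Frobenius in
$\mathrm{Gal}(K/\mathbb{Q})$ is complex conjugation.
```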
For $Q\ge 1$, we define the \textit{Barban-Davenport-Halberstam average square error} for
this problem by
\begin{equation}\label{bdh problem}
\mathcal E_{K}(x;Q, \mathcal C_\tau)
:=\sum_{q\le Q}\sum_{a=1}^{qm_K}\strut^\prime
\left(\theta(x;\mathcal C_\tau,qm_K,a)-\frac{\#\mathcal C_\tau}{\varphi_K(qm_K)n_K}x\right)^2,
\end{equation}
where the prime on the sum over $a$ means that the sum is to be restricted to those $a$
such that $a\equiv b\pmod{m_K}$ for some $b\in\mathcal S_\tau$.
\section{\textbf{Notation and statement of results.}}
We are now ready to state our main results on the average Lang-Trotter problem.
Recall that the ring of integers $\mathcal{O}_K$ is a free $\mathbb{Z}$-module of
rank $n_K$, and let $\mathcal{B}=\{\gamma_j\}_{j=1}^{n_K}$ be a fixed integral
basis for $\mathcal{O}_K$. We denote the coordinate map for the basis $\mathcal{B}$ by
\begin{equation*}
[\cdot]_{\mathcal{B}}:\mathcal{O}_K\stackrel{\sim}{\longrightarrow}
\bigoplus_{j=1}^{n_K}\mathbb{Z}=\mathbb{Z}^{n_K}.
\end{equation*}
If $\mathbf A,\mathbf B\in\mathbb{Z}^{n_K}$, then we write $\mathbf A\le\mathbf B$ if each entry of $\mathbf A$
is less than or equal to the corresponding entry of $\mathbf B$.
For two algebraic integers $\alpha,\beta\in\mathcal{O}_K$, we write $E_{\alpha,\beta}$
for the elliptic curve given by the model
\begin{equation*}
E_{\alpha,\beta}: Y^2=X^3+\alpha X+\beta.
\end{equation*}
From now on, we assume that the entries of $\mathbf A, \mathbf B$ are all non-negative, and
we take as our family of elliptic curves the set
\begin{equation}\label{defn of ecbox}
\mathscr{C}:=\mathscr{C}(\mathbf A;\mathbf B)
=\{E_{\alpha,\beta}:
-\mathbf A\le [\alpha]_\mathcal{B}\le\mathbf A,\
-\mathbf B\le [\beta]_\mathcal{B}\le\mathbf B,\ -16(4\alpha^3+27\beta^2)\ne 0\}.
\end{equation}
To be more precise, this box should be thought of as a box of equations or models,
since the same elliptic curve may appear multiple times in $\mathscr{C}$.
For $1\le i\le n_K$, we let $a_i$ denote the $i$-th entry of $\mathbf A$ and $b_i$ denote
the $i$-th entry of $\mathbf B$.
Associated to the box $\mathscr{C}$, we define the quantities
\begin{align*}
\mathrm V_1(\mathscr{C})&:=2^{n_K}\prod_{i=1}^{n_K}a_i,
&\mathrm V_2(\mathscr{C})&:=2^{n_K}\prod_{i=1}^{n_K}b_i,\\
\min\nolimits_{1}(\mathscr{C})&:=\min_{1\le i\le n_K}\{a_{i}\},
&\min\nolimits_{2}(\mathscr{C})&:=\min_{1\le i\le n_K}\{b_{i}\},\\
\mathrm V(\mathscr{C})&:=\mathrm V_1(\mathscr{C})\mathrm V_2(\mathscr{C}),
&\min(\mathscr{C})&:=\min\{\min\nolimits_{1}(\mathscr{C}), \min\nolimits_{2}(\mathscr{C})\},
\end{align*}
which give a description of the size of the box $\mathscr{C}$.
In particular,
\begin{equation*}\label{size of ecbox}
\#\mathscr{C}=\mathrm V(\mathscr{C})+O\left(\frac{\mathrm V(\mathscr{C})}{\min(\mathscr{C})}\right).
\end{equation*}
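This size estimate is easy to verify numerically in the simplest (degenerate) case $K=\mathbb{Q}$, where $n_K=1$ and the coordinate map is the identity; the following sketch is our own illustration, not part of the argument.

```python
# Sanity check of #C = V(C) + O(V(C)/min(C)) in the toy case K = Q, n_K = 1:
# the box is {(alpha, beta) : |alpha| <= A, |beta| <= B} with the singular
# models (those with 4*alpha^3 + 27*beta^2 = 0) removed.
A, B = 50, 80
count = sum(
    1
    for alpha in range(-A, A + 1)
    for beta in range(-B, B + 1)
    if 4 * alpha**3 + 27 * beta**2 != 0
)
V = (2 * A) * (2 * B)  # V(C) = V_1(C) * V_2(C) with n_K = 1
m = min(A, B)          # min(C)
assert abs(count - V) <= 10 * V / m  # error of size O(V(C)/min(C))
```

Here the discrepancy between the exact count and $\mathrm V(\mathscr{C})$ comes from the boundary of the box and the handful of singular pairs $(\alpha,\beta)=(-3t^2,\pm 2t^3)$.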
Our first main result is that the average order problem for $\pi_E^{r,2}(x)$ may be
reduced to the Barban-Davenport-Halberstam type average error problem described
in the previous section.
\begin{thm}\label{main thm 0}
Let $r$ be a fixed odd integer, and recall the definition of
$\mathcal E_K(x;Q,\mathcal C_\tau)$ as given by~\eqref{bdh problem}.
If
\begin{equation*}
\mathcal E_K(x;x/(\log x)^{12},\mathcal C_\tau)\ll \frac{x^2}{(\log x)^{11}}
\end{equation*}
for every $\tau$ of order dividing two in $\mathrm{Gal}(K'/\mathbb{Q})$ and if $\min(\mathscr{C})\ge\sqrt x$,
then there exists an explicit constant $\mathfrak C_{K,r,2}$ such that
\begin{equation*}
\frac{1}{\#\mathscr{C}}\sum_{E\in\mathscr{C}}\pi_E^{r,2}(x)
=\mathfrak C_{K,r,2}\log\log x+O(1),
\end{equation*}
where the implied constants depend at most on $K$ and $r$.
Furthermore, the constant $\mathfrak C_{K,r,2}$ is given by
\begin{equation*}
\mathfrak C_{K,r,2}=\frac{2}{3n_K}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)
\sum_{\substack{\tau\in\mathrm{Gal}(K'/\mathbb{Q})\\ |\tau|=1,2}}\#\mathcal C_\tau
\sum_{g\in\mathcal S_\tau}\mathfrak{c}_r^{(g)},
\end{equation*}
where the product is taken over the rational primes $\ell$ dividing $r$,
\begin{equation}\label{g part of const}
\mathfrak{c}_r^{(g)}
:=\sum_{\substack{k=1\\ (k,2r)=1}}^\infty\frac{1}{k}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty\frac{1}{n\varphi_K(m_Knk^2)}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\#C_{g}(r,a,n,k)
\end{equation}
and
\begin{equation*}
C_{g}(r,a,n,k)
:=\left\{b\in (\mathbb{Z}/m_Knk^2\mathbb{Z})^\times: 4b^2\equiv r^2-ak^2\pmod{nk^2},\ b\equiv g\pmod{m_K}\right\}.
\end{equation*}
Alternatively, the constant $\mathfrak C_{K,r,2}$ may be written as
\begin{equation}\label{avg LT const}
\mathfrak C_{K,r,2}
=\frac{n_{K'}}{3\pi\varphi(m_K)}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)
\prod_{\ell\nmid 2rm_K}\left(\frac{\ell(\ell-1-\leg{-1}{\ell})}{(\ell-1)(\ell-\leg{-1}{\ell})}\right)
\sum_{\substack{\tau\in\mathrm{Gal}(K'/\mathbb{Q})\\ |\tau|=1,2}}\#\mathcal C_\tau
\sum_{g\in\mathcal S_\tau}
\prod_{\substack{\ell\mid m_K\\ \ell\nmid 2r}}\mathfrak{K}_r^{(g)},
\end{equation}
where the products are taken over the rational primes $\ell$ satisfying the stated conditions
and $\mathfrak{K}_r^{(g)}$ is defined by
\begin{equation*}\label{defn of K factors}
\mathfrak{K}_r^{(g)}:=
\begin{cases}
\displaystyle
\frac{\ell^{\frac{\nu_\ell(4g^2-r^2)+1}{2}}-1}{\ell^{\frac{\nu_\ell(4g^2-r^2)-1}{2}}(\ell-1)}
&\begin{array}{l}\text{if }\nu_\ell(4g^2-r^2)<\nu_\ell(m_K)\\ \quad\text{ and } 2\nmid\nu_\ell(4g^2-r^2),\end{array}\\
\displaystyle
\frac{\ell^{\frac{\nu_\ell(4g^2-r^2)}{2}+1}-1}{\ell^{\frac{\nu_\ell(4g^2-r^2)}{2}}(\ell-1)}
+\frac{\leg{(r^2-4g^2)/\ell^{\nu_\ell(r^2-4g^2)}}{\ell}}{\ell^{\frac{\nu_\ell(4g^2-r^2)}{2}}\left(\ell-\leg{(r^2-4g^2)/\ell^{\nu_\ell(r^2-4g^2)}}{\ell}\right)}
&\begin{array}{l}\text{if }\nu_\ell(4g^2-r^2)<\nu_\ell(m_K)\\ \quad\text{ and }2\mid\nu_\ell(4g^2-r^2),\end{array}\\
\displaystyle
\frac{\ell^{2\ceil{\frac{\nu_\ell(m_K)}{2}}+1}(\ell+1)\left(\ell^{\ceil{\frac{\nu_\ell(m_K)}{2}}}-1\right)
+\ell^{\nu_\ell(m_K)+2}}{\ell^{3\ceil{\frac{\nu_\ell(m_K)}{2}}}(\ell^2-1)}
&\text{if }\nu_\ell(4g^2-r^2)\ge\nu_\ell(m_K).\\
\end{cases}
\end{equation*}
\end{thm}
\begin{rmk}
The notation $\nu_\ell(4g^2-r^2)$ in the definition of $\mathfrak K_r^{(g)}$ is a bit strange, as $g$ is defined to be an element of
$(\mathbb{Z}/m_K\mathbb{Z})^\times$. This can be remedied by choosing any integer representative of $g$,
and noting that any choice with $4g^2\equiv r^2\pmod{\ell^{\nu_\ell(m_K)}}$ corresponds to the case that
$\nu_\ell(4g^2-r^2)\ge\nu_\ell(m_K)$.
\end{rmk}
\begin{rmk}
We have chosen to restrict ourselves to the case when $r$ is odd since it simplifies some of
the technical difficulties involved in computing the constant $\mathfrak C_{K,r,2}$.
A result of the same nature should hold for non-zero even $r$ as well.
For the case $r=0$, see Theorem~\ref{main thm ss} below.
\end{rmk}
The proof of Theorem~\ref{main thm 0} proceeds by a series of reductions.
We make no restriction on the number field $K$ except that it be a finite degree Galois extension of
$\mathbb{Q}$. In Section~\ref{reduce to avg of class numbers}, we reduce the proof of
Theorem~\ref{main thm 0} to the computation of a certain average of class numbers.
In Section~\ref{reduce to avg of l-series}, we reduce that computation to
a certain average of special values of Dirichlet $L$-functions.
In Section~\ref{reduce to bdh prob}, the problem is reduced to the
problem of bounding $\mathcal E_K(x;Q,\mathcal C_\tau)$.
Finally, in Section~\ref{factor constant}, we compute the constant $\mathfrak C_{K,r,2}$.
Under certain conditions on the Galois group $G=\mathrm{Gal}(K/\mathbb{Q})$, we are able to completely solve
our problem by bounding $\mathcal E_K(x;Q,\mathcal C_\tau)$.
One easy case is when the Galois group $G$ is equal to its own commutator subgroup, i.e., when
$G$ is a perfect group. In this case, we say that the number field $K$ is \textit{totally non-Abelian}.
The authors of~\cite{CFJKP:2011} were able to prove a version of Theorem~\ref{main thm 0}
whenever $G$ is Abelian, that is, when the commutator subgroup is trivial, or equivalently,
when $K=K'$. It turns out that their methods are actually
sufficient to handle some non-Abelian number fields as well. In particular, their technique is
sufficient whenever there is a finite list of congruence conditions that determines exactly which
rational primes decompose as a product of degree two primes in $K$. Such a number field
need not be Abelian over $\mathbb{Q}$. For example, the splitting field of the polynomial
$x^3-2$ possesses this property. If $K$ is a finite degree Galois extension of $\mathbb{Q}$ possessing
this property, we say that $K$ is $2$-\textit{pretentious}. The name is meant to call to mind the
notion that such number fields ``pretend'' to be Abelian over $\mathbb{Q}$, at least as far as their degree
two primes are concerned.\footnote{We borrow the term pretentious from Granville and
Soundararajan, who use it to describe the way in which one multiplicative function
``pretends'' to be another in a certain technical sense.}
In Section~\ref{pretend and tna fields}, we give more precise descriptions of $2$-pretentious and
totally non-Abelian number fields and prove some basic facts which serve to characterize such
fields. Then, in Section~\ref{resolution}, we show how to give a complete solution to the
average order problem for $\pi_E^{r,2}(x)$ whenever $K$ may be decomposed as
$K=K_1K_2$, where $K_1$ is a $2$-pretentious Galois extension of $\mathbb{Q}$, $K_2$ is totally
non-Abelian, and $K_1\cap K_2=\mathbb{Q}$.
\begin{thm}\label{main thm 1}
Let $r$ be a fixed odd integer, and assume that $K$ may be decomposed as above.
If $\min(\mathscr{C})\ge\sqrt{x}$, then
\begin{equation*}
\frac{1}{\#\mathscr{C}}\sum_{E\in\mathscr{C}}\pi_E^{r,2}(x)
=\mathfrak C_{K,r,2}\log\log x+O(1),
\end{equation*}
where the implied constant depends at most upon $K$ and $r$, and the constant
$\mathfrak C_{K,r,2}$ is as in Theorem~\ref{main thm 0}.
\end{thm}
By a slight alteration in the method we employ to prove Theorem~\ref{main thm 0},
we can also provide a complete solution to our problem for another class of number fields.
\begin{thm}\label{main thm 2}
Let $r$ be a fixed odd integer, and suppose that $K'$ is ramified only at primes which divide $2r$.
If $\min(\mathscr{C})\ge\sqrt{x}$, then
\begin{equation*}
\frac{1}{\#\mathscr{C}}\sum_{E\in\mathscr{C}}\pi_E^{r,2}(x)
=\mathfrak C_{K,r,2}\log\log x+O(1),
\end{equation*}
where the implied constant depends at most upon $K$ and $r$.
Furthermore, the constant $\mathfrak C_{K,r,2}$ may be simplified to
\begin{equation*}
\mathfrak C_{K,r,2}
=\frac{\#C}{3\pi}
\prod_{\ell> 2}\frac{\ell(\ell-1-\leg{-r^2}{\ell})}{(\ell-1)(\ell-\leg{-1}{\ell})},
\end{equation*}
where the product is taken over the rational primes $\ell>2$ and
$C=\left\{\sigma\in\mathrm{Gal}(K/\mathbb{Q}): |\sigma|=2\right\}$.
\end{thm}
\begin{rmk}
We note that the required growth rate $\min(\mathscr{C})\ge\sqrt x$ in
Theorems~\ref{main thm 0}, \ref{main thm 1}, and~\ref{main thm 2} can be relaxed to
$\min(\mathscr{C})\ge\sqrt x/\log x$. The key piece of information necessary for making the improvement
is to observe that~\eqref{bound for scr H} (see page~\pageref{bound for scr H}) can be improved to
$\mathcal H(T)\ll\frac{T^2}{\log T}$, where $\mathcal H(T)$ is the sum defined by~\eqref{defn of scr H}.
Indeed, the techniques used to prove
Propositions~\ref{reduction to avg of special values} and~\ref{avg of l-series prop} below can be used to
show that $\mathcal H(T)$ is asymptotic to some constant multiple of $\frac{T^2}{\log T}$.
\end{rmk}
Following~\cite{DP:2004}, we also obtain an easy result concerning the average
\textit{supersingular} distribution of degree two primes. To this end, we define the prime counting
function
\begin{equation*}
\pi_E^{\mathrm{ss},2}(x):=\#\{\mathbf{N}\mathfrak{p}\le x: E\text{ is supersingular at } \mathfrak{p},\ \deg\mathfrak{p}=2\}.
\end{equation*}
Recall that if $\mathfrak{p}$ is a degree two
prime of $K$ lying above the rational prime $p$, then $E$ is supersingular at $\mathfrak{p}$ if and only
if $a_\mathfrak{p}(E)=0,\pm p,\pm 2p$; these are precisely the multiples of $p$ permitted by the Hasse
bound $|a_\mathfrak{p}(E)|\le 2\sqrt{\mathbf{N}\mathfrak{p}}=2p$. By a straightforward adaptation of~\cite[pp.~199-200]{DP:2004}, we obtain the following.
\begin{thm}\label{main thm ss}
Let $K$ be any Galois number field. Then provided that $\min(\mathscr{C})\ge\log\log x$,
\begin{equation*}
\frac{1}{\#\mathscr{C}}\sum_{E\in\mathscr{C}}\pi_E^{0,2}(x)\ll 1,
\end{equation*}
where the implied constant depends at most upon $K$.
Furthermore, if $\min(\mathscr{C})\ge\sqrt x/\log x$, then
\begin{equation*}
\frac{1}{\#\mathscr{C}}\sum_{E\in\mathscr{C}}\pi_E^{\mathrm{ss},2}(x)
\sim\frac{\#C}{12n_K}\log\log x,
\end{equation*}
where $C=\left\{\sigma\in\mathrm{Gal}(K/\mathbb{Q}): |\sigma|=2\right\}$.
\end{thm}
Since the proof of this result merely requires a straightforward adaptation
of~\cite[pp.~199-200]{DP:2004}, we choose to omit it.
\begin{rmk}
In all of our computations, the number field $K$ and the integer $r$ are assumed to be fixed.
We have not kept track of the way in which our implied constants depend on these two
parameters. Thus, all implied constants in this article may depend on $K$ and $r$ even
though we do not make this explicit in what follows.
\end{rmk}
\section{\textbf{Counting isomorphic reductions.}}
In this section, we count the number of models $E\in\mathscr{C}$ that reduce modulo $\mathfrak{p}$
to a given isomorphism class.
\begin{lma}\label{count mod p reductions}
Let $\mathfrak{p}$ be a prime ideal of $K$ and let $E'$ be an elliptic curve defined over $\mathcal{O}_K/\mathfrak{p}$.
Suppose that $\deg\mathfrak{p}=2$ and $\mathfrak{p}\nmid 6$. Then the number of $E\in\mathscr{C}$ for which $E$ is isomorphic to $E'$ over $\mathcal{O}_K/\mathfrak{p}$ is
\begin{equation*}
\#\{E\in\mathscr{C}: E_\mathfrak{p}\cong E'\}
=\frac{\mathrm V(\mathscr{C})}{\mathbf{N}\mathfrak{p}\,\#\mathrm{Aut}(E')}
+O\left(
\frac{\mathrm V(\mathscr{C})}{\mathbf{N}\mathfrak{p}^2}
+\frac{\mathrm V(\mathscr{C})}{\min(\mathscr{C})\sqrt{\mathbf{N}\mathfrak{p}}}
+\frac{\mathrm V(\mathscr{C})}{\min_1(\mathscr{C})\min_2(\mathscr{C})}\right).
\end{equation*}
\end{lma}
\begin{proof}
Since $\deg\mathfrak{p}=2$, the residue ring $\mathcal{O}_K/\mathfrak{p}$ is isomorphic to the finite field
$\mathbb{F}_{p^2}$, where $p$ is the unique rational prime lying below $\mathfrak{p}$.
Since $\mathfrak{p}\nmid 6$, the characteristic $p$ is greater than $3$.
Hence, $E'$ may be modeled by an equation of the form
\begin{equation*}
E_{a,b}: Y^2=X^3+aX+b
\end{equation*}
for some $a,b\in\mathcal{O}_K/\mathfrak{p}$.
The number of equations of this form that are isomorphic to $E'$ is exactly
\begin{equation*}
\frac{p^2-1}{\#\mathrm{Aut}(E')}=\frac{\mathbf{N}\mathfrak{p}-1}{\#\mathrm{Aut}(E')}.
\end{equation*}
Therefore,
\begin{equation*}
\#\{E\in\mathscr{C}: E_\mathfrak{p}\cong E'\}
=\frac{\mathbf{N}\mathfrak{p}-1}{\#\mathrm{Aut}(E')}\#\{E\in\mathscr{C}: E_\mathfrak{p}=E_{a,b}\}.
\end{equation*}
Suppose that $E\in\mathscr{C}$ is such that $E_\mathfrak{p}=E_{a,b}$, say $E: Y^2=X^3+\alpha X+\beta$.
Then either $\alpha\equiv a\pmod{\mathfrak{p}}$ and $\beta\equiv b\pmod{\mathfrak{p}}$, or $E_{\alpha,\beta}$
is not minimal at $\mathfrak{p}$. If $E$ is not minimal at $\mathfrak{p}$, then
$\mathfrak{p}^4\mid \alpha$ and $\mathfrak{p}^6\mid\beta$.
For $a,b\in\mathcal{O}_K/\mathfrak{p}$, we adapt the argument of~\cite[p.~192]{DP:2004} in the obvious manner
to obtain the estimates
\begin{align*}
\#\{\alpha\in\mathcal{O}_K: -\mathbf A\le [\alpha]_\mathcal B\le \mathbf A,\ \alpha\equiv a\pmod\mathfrak{p}\}
&=\frac{\mathrm V_1(\mathscr{C})}{\mathbf{N}\mathfrak{p}}+O\left(\frac{\mathrm V_1(\mathscr{C})}{\min_1(\mathscr{C})\sqrt{\mathbf{N}\mathfrak{p}}}\right),\\
\#\{\beta\in\mathcal{O}_K: -\mathbf B\le [\beta]_\mathcal B\le \mathbf B,\ \beta\equiv b\pmod\mathfrak{p}\}
&=\frac{\mathrm V_2(\mathscr{C})}{\mathbf{N}\mathfrak{p}}+O\left(\frac{\mathrm V_2(\mathscr{C})}{\min_2(\mathscr{C})\sqrt{\mathbf{N}\mathfrak{p}}}\right).
\end{align*}
It follows that
\begin{equation*}
\#\{E\in\mathscr{C}: E_\mathfrak{p}=E_{a,b}\}=\frac{\mathrm V(\mathscr{C})}{\mathbf{N}\mathfrak{p}^2}+
O\left(\frac{\mathrm V(\mathscr{C})}{\min(\mathscr{C})\mathbf{N}\mathfrak{p}^{3/2}}
+\frac{\mathrm V(\mathscr{C})}{\min_1(\mathscr{C})\min_2(\mathscr{C})\mathbf{N}\mathfrak{p}}
+\frac{\mathrm V(\mathscr{C})}{\mathbf{N}\mathfrak{p}^{10}}\right),
\end{equation*}
where the last term in the error accounts for the curves which are not minimal at $\mathfrak{p}$.
\end{proof}
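The count $(\mathbf{N}\mathfrak{p}-1)/\#\mathrm{Aut}(E')$ of short Weierstrass models in a fixed isomorphism class can be checked numerically. The following sketch is our own illustration, carried out over a prime field $\mathbb{F}_p$ rather than the degree two residue field of the lemma; it uses the standard fact that $E_{a,b}\cong E_{a',b'}$ over $\mathbb{F}_p$ exactly when $(a',b')=(u^4a,u^6b)$ for some $u\in\mathbb{F}_p^\times$.

```python
# For p > 3 and a model y^2 = x^3 + a x + b with ab != 0 (so j != 0, 1728
# and #Aut(E') = 2, generated by u = -1), the isomorphism class of E'
# should contain exactly (p - 1)/#Aut(E') = (p - 1)/2 models.
p = 13
a, b = 1, 1  # a model with ab != 0 over F_13
orbit = {(pow(u, 4, p) * a % p, pow(u, 6, p) * b % p) for u in range(1, p)}
assert len(orbit) == (p - 1) // 2  # the orbit of (1, 1) has 6 models
```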
\section{\textbf{Reduction of the average order to an average of class numbers.}}
\label{reduce to avg of class numbers}
In this section, we reduce our average order computation to the computation of
an average of class numbers.
Given a (not necessarily fundamental) discriminant $D<0$
with $D\equiv 0,1\pmod{4}$,
we define the \textit{Hurwitz-Kronecker class number} of discriminant $D$ by
\begin{equation}\label{Hurwitz defn}
H(D):=\sum_{\substack{k^2\mid D\\ \frac{D}{k^2}\equiv 0,1\pmod 4}}
\frac{h(D/k^2)}{w(D/k^2)},
\end{equation}
where $h(d)$ denotes the class number of the unique imaginary quadratic order of
discriminant $d$ and $w(d)$ denotes the order of its unit group.
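For small discriminants this definition is straightforward to compute with; the sketch below (our own illustration) obtains $h(d)$ by counting reduced primitive binary quadratic forms.

```python
from fractions import Fraction
from math import gcd, isqrt

def h_w(d):
    """Return (h(d), w(d)) for a discriminant d < 0, d ≡ 0, 1 (mod 4):
    h(d) counts reduced primitive forms (a, b, c) with b^2 - 4ac = d,
    |b| <= a <= c, and b >= 0 whenever |b| = a or a = c; w(d) is the
    order of the unit group of the order of discriminant d."""
    h = 0
    for a in range(1, isqrt(-d // 3) + 1):
        for b in range(-a, a + 1):
            if (b * b - d) % (4 * a) == 0:
                c = (b * b - d) // (4 * a)
                if c < a or (b < 0 and (-b == a or a == c)):
                    continue
                if gcd(gcd(a, abs(b)), c) == 1:  # primitive forms only
                    h += 1
    return h, 6 if d == -3 else 4 if d == -4 else 2

def hurwitz(D):
    """Hurwitz-Kronecker class number H(D) per the displayed definition."""
    total = Fraction(0)
    for k in range(1, isqrt(-D) + 1):
        if D % (k * k) == 0 and (D // (k * k)) % 4 in (0, 1):
            h, w = h_w(D // (k * k))
            total += Fraction(h, w)
    return total

# For example, with r = 1 and p = 5 one has r^2 - 4p^2 = -99 and
# H(-99) = h(-99)/2 + h(-11)/2 = 2/2 + 1/2 = 3/2.
assert hurwitz(-99) == Fraction(3, 2)
```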
A simple adaptation of the proof of Theorem 4.6 in~\cite{Sch:1987}
to count isomorphism classes with weights (as in~\cite[p.~654]{Len:1987})
yields the following result, which is attributed to Deuring~\cite{Deu:1941}.
\begin{thm}[Deuring]\label{Deuring's thm}
Let $p$ be a prime greater than $3$, and let $r$ be an integer such that $p\nmid r$
and $r^2-4p^2<0$. Then
\begin{equation*}
\sum_{\substack{\tilde E/\mathbb{F}_{p^2}\\ \#\tilde E(\mathbb{F}_{p^2})=p^2+1-r}}\frac{1}{\#\mathrm{Aut}(\tilde E)}
=H(r^2-4p^2),
\end{equation*}
where the sum on the left is over the $\mathbb{F}_{p^2}$-isomorphism classes of elliptic curves
possessing exactly $p^2+1-r$ points and $\mathrm{Aut}(\tilde E)$ denotes the
$\mathbb{F}_{p^2}$-automorphism group of any representative of $\tilde E$.
\end{thm}
\begin{prop}\label{reduction to avg of class numbers}
Let $r$ be any integer. If $\min(\mathscr{C})\ge\sqrt{x}$, then
\begin{equation*}
\frac{1}{\#\mathscr{C}}\sum_{E\in\mathscr{C}}\pi_E^{r,2}(x)
=\frac{n_K}{2}\sum_{\substack{3|r|<p\le\sqrt x\\ f_K(p)=2}}\frac{H(r^2-4p^2)}{p^2}
+O\left(1\right),
\end{equation*}
where the sum on the right is over the rational primes $p$ which do not ramify in $K$ and which
split into degree two primes in $K$.
\end{prop}
\begin{rmk}
We do not place any restriction on $r$ in the above, nor do we place any restriction on $K$
except that the extension $K/\mathbb{Q}$ be Galois.
\end{rmk}
\begin{proof}
For each $E\in\mathscr{C}$, we write $\pi_E^{r,2}(x)$ as a sum over the degree two primes of $K$
and switch the order of summation, which yields
\begin{equation*}
\begin{split}
\frac{1}{\#\mathscr{C}}
\sum_{E\in\mathscr{C}}\pi_E^{r,2}(x)&=
\frac{1}{\#\mathscr{C}}
\sum_{\substack{\mathbf{N}\mathfrak{p}\le x\\ \deg\mathfrak{p}=2}}
\sum_{\substack{E\in\mathscr{C}\\ a_\mathfrak{p}(E)=r}}1
=\sum_{\substack{\mathbf{N}\mathfrak{p}\le x\\ \deg\mathfrak{p}=2}}
\left[
\frac{1}{\#\mathscr{C}}
\sum_{\substack{\tilde E/(\mathcal{O}_K/\mathfrak{p})\\ a_\mathfrak{p}(\tilde E)=r}}
\#\left\{E\in\mathscr{C} : E_\mathfrak{p}\cong \tilde E\right\}
\right],
\end{split}
\end{equation*}
where the sum in brackets is over the isomorphism classes $\tilde E$ of elliptic curves
defined over $\mathcal{O}_K/\mathfrak{p}$ having exactly $\mathbf{N}\mathfrak{p}+1-r$ points.
Removing the primes with $\mathbf{N}\mathfrak{p}\le (3r)^2$ introduces at most a bounded error depending on $r$.
For the primes with $\mathbf{N}\mathfrak{p}>(3r)^2$, we apply Theorem~\ref{Deuring's thm} and
Lemma~\ref{count mod p reductions} to estimate the expression in brackets above.
The result is equal to
\begin{equation}\label{interchange primes with curves}
\frac{H(r^2-4\mathbf{N}\mathfrak{p})}{\mathbf{N}\mathfrak{p}}
+O\left(
H(r^2-4\mathbf{N}\mathfrak{p})\left[
\frac{1}{\mathbf{N}\mathfrak{p}^2}+\frac{1}{\min(\mathscr{C})\sqrt{\mathbf{N}\mathfrak{p}}}
+\frac{1}{\min_1(\mathscr{C})\min_2(\mathscr{C})}\right]
\right).
\end{equation}
Summing the main term of~\eqref{interchange primes with curves} over the appropriate $\mathfrak{p}$ gives
\begin{equation*}
\sum_{\substack{(3r)^2<\mathbf{N}\mathfrak{p}\le x\\ \deg\mathfrak{p}=2}}\frac{H(r^2-4\mathbf{N}\mathfrak{p})}{\mathbf{N}\mathfrak{p}}
=\frac{n_K}{2}\sum_{\substack{3|r|<p\le\sqrt x\\ f_K(p)=2}}\frac{H(r^2-4p^2)}{p^2},
\end{equation*}
where the sum on the right is over the rational primes $p$ which split into degree two primes in
$K$.
To estimate the error terms, we proceed as follows. For $T>0$, let
\begin{equation}\label{defn of scr H}
\mathcal H(T):=\sum_{3|r|<p\le T}H(r^2-4p^2).
\end{equation}
Given a discriminant $d<0$, we let $\chi_d$ denote the Kronecker symbol $\leg{d}{\cdot}$.
The class number formula states that
\begin{equation}\label{class number formula}
\frac{h(d)}{w(d)}=\frac{|d|^{1/2}}{2\pi}L(1,\chi_d),
\end{equation}
where $L(1,\chi_d)=\sum_{n=1}^\infty\frac{\chi_d(n)}{n}$.
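As a quick numerical sanity check of~\eqref{class number formula} (our own illustration), take $d=-23$, for which $h(-23)=3$ and $w(-23)=2$; since $-23\equiv 1\pmod 4$ is fundamental, $\chi_{-23}$ coincides with the Legendre symbol modulo $23$.

```python
from math import sqrt, pi

# chi_{-23}(n) computed via Euler's criterion: n^11 mod 23 lies in
# {0, 1, 22}, which we read as {0, 1, -1}.
def chi(n):
    t = pow(n, 11, 23)
    return t - 23 if t > 1 else t

# Partial sum of L(1, chi_{-23}); the tail is O(|d|/N) by partial summation.
N = 10**5
L = sum(chi(n) / n for n in range(1, N + 1))

# Class number formula: sqrt(23)/(2*pi) * L(1, chi) should equal
# h(-23)/w(-23) = 3/2.
assert abs(sqrt(23) / (2 * pi) * L - 1.5) < 1e-3
```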
Thus, the class number formula together with the definition of the Hurwitz-Kronecker class
number implies that
\begin{equation*}
\begin{split}
\mathcal H(T)
&\ll\sum_{k\le 2T}\frac{1}{k}
\sum_{\substack{3|r|<p\le T\\ k^2\mid r^2-4p^2}}p\log p
\le T\log T\sum_{k\le 2T}\frac{1}{k}\sum_{\substack{3|r|<p\le 4T\\ k\mid r^2-4p^2}}1\\
&\ll T\log T\sum_{k\le 2T}\frac{1}{k}
\sum_{\substack{a\in (\mathbb{Z}/k\mathbb{Z})^\times\\ 4a^2\equiv r^2\pmod k}}
\sum_{\substack{p\le 4T\\ p\equiv a\pmod k}}1.
\end{split}
\end{equation*}
We apply the Brun-Titchmarsh inequality~\cite[p.~167]{IK:2004} to bound the sum over $p$
and the Chinese Remainder Theorem to deduce that
\begin{equation*}
\#\{a\in (\mathbb{Z}/k\mathbb{Z})^\times: 4a^2\equiv r^2\pmod k\}\le 2^{\omega(k)},
\end{equation*}
where $\omega(k)$ denotes the number of distinct prime factors of $k$.
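This Chinese Remainder Theorem bound is easy to check by brute force (our own illustration): modulo an odd prime power coprime to $r$, the congruence $4a^2\equiv r^2$ forces $2a\equiv\pm r$, giving at most two solutions per prime power.

```python
from math import gcd

def omega(k):
    """Number of distinct prime factors of k (trial division)."""
    count, m, p = 0, k, 2
    while p * p <= m:
        if m % p == 0:
            count += 1
            while m % p == 0:
                m //= p
        p += 1
    return count + (1 if m > 1 else 0)

# Brute-force check that 4a^2 ≡ r^2 (mod k) has at most 2^{omega(k)}
# solutions a in (Z/kZ)^x; e.g. r = 5, k = 21 attains the bound 2^2 = 4.
r = 5
for k in range(1, 400):
    sols = sum(1 for a in range(k)
               if gcd(a, k) == 1 and (4 * a * a - r * r) % k == 0)
    assert sols <= 2 ** omega(k)
```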
The result is that
\begin{equation}\label{bound for scr H}
\mathcal H(T)\ll T^2\log T\sum_{k\le 2T}\frac{2^{\omega(k)}}{k\varphi(k)\log (4T/k)}
\ll T^2\log T\sum_{k\le 2T}\frac{2^{\omega(k)}\log k}{k\varphi(k)\log(4T)}
\ll T^2.
\end{equation}
From this, we deduce the bounds
\begin{equation*}
\sum_{\substack{(3r)^2<\mathbf{N}\mathfrak{p}\le x\\ \deg\mathfrak{p}=2}}H(r^2-4\mathbf{N}\mathfrak{p})
\ll\sum_{3|r|<p\le\sqrt x}H(r^2-4p^2)=\mathcal H(\sqrt x)\ll x,
\end{equation*}
\begin{equation*}
\sum_{\substack{(3r)^2<\mathbf{N}\mathfrak{p}\le x\\ \deg\mathfrak{p}=2}}\frac{H(r^2-4\mathbf{N}\mathfrak{p})}{\sqrt{\mathbf{N}\mathfrak{p}}}
\ll\sum_{3|r|<p\le\sqrt x}\frac{H(r^2-4p^2)}{p}
=\int_{3|r|}^{\sqrt x}\frac{\mathrm{d}\mathcal H(T)}{T}
\ll\sqrt x,
\end{equation*}
and
\begin{equation*}
\sum_{\substack{(3r)^2<\mathbf{N}\mathfrak{p}\le x\\ \deg\mathfrak{p}=2}}\frac{H(r^2-4\mathbf{N}\mathfrak{p})}{\mathbf{N}\mathfrak{p}^2}
\ll\sum_{3|r|<p\le\sqrt x}\frac{H(r^2-4p^2)}{p^4}
=\int_{3|r|}^{\sqrt x}\frac{\mathrm{d}\mathcal H(T)}{T^4}
\ll 1.
\end{equation*}
Using these estimates, it is easy to see that summing the error terms
of~\eqref{interchange primes with curves} over $\mathfrak{p}$ yields a bounded error
whenever $\min(\mathscr{C})\ge\sqrt x$.
\end{proof}
\section{\textbf{Reduction to an average of special values of Dirichlet $L$-functions.}}
\label{reduce to avg of l-series}
In the previous section, we reduced the problem of computing the average order of
$\mathpzc mathfrak{p}i_E^{r,2}(x)$ to that of computing a certain average of Hurwitz-Kronecker class numbers.
In this section, we reduce the computation of that average of Hurwitz-Kronecker class numbers to
the computation of a certain average of special values of Dirichlet $L$-functions.
Recall that if $\chi$ is a Dirichlet character,
then the Dirichlet $L$-function attached to $\chi$ is given by
\begin{equation*}
L(s,\chi):=\sum_{n=1}^\infty\frac{\chi(n)}{n^s}
\end{equation*}
for $s>1$.
If $\chi$ is not trivial, then the above definition is valid at $s=1$ as well.
As in the previous section, given an integer $d$, we write $\chi_d$ for the Kronecker symbol
$\leg{d}{\cdot}$.
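For concreteness, the conditional convergence of this series at $s=1$ for a non-trivial character can be illustrated numerically. The following Python sketch (illustrative only, not part of the argument; the helper \texttt{chi\_minus4} is ours) approximates the classical value $L(1,\chi_{-4})=\pi/4$, where $\chi_{-4}$ is the non-trivial character modulo $4$.

```python
import math

def chi_minus4(n: int) -> int:
    """Kronecker symbol (-4/n): the nontrivial Dirichlet character mod 4."""
    if n % 2 == 0:
        return 0
    return 1 if n % 4 == 1 else -1

# Partial sum of L(1, chi_{-4}) = sum chi(n)/n; the classical value is pi/4.
partial = sum(chi_minus4(n) / n for n in range(1, 200001))
print(partial, math.pi / 4)
```

The truncation error is bounded by the first omitted term of the alternating series, so the partial sum already agrees with $\pi/4$ to several decimal places.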
We now define
\begin{equation}\label{defn of A}
A_{K,2}(T;r):=
\sum_{\substack{k\le 2T \\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{3|r|<p\le T\\ f_K(p)=2\\ k^2\mid r^2-4p^2}}
L\left(1,\chi_{d_k(p^2)}\right)\log p,
\end{equation}
where the condition $f_K(p)=2$ means that $p$ factors in $K$ as a product of degree two prime
ideals of $\mathcal{O}_K$, and we put $d_k(p^2):=(r^2-4p^2)/k^2$ whenever $k^2\mid r^2-4p^2$.
\begin{prop}\label{reduction to avg of special values}
Let $r$ be any odd integer. If there exists a constant $\mathfrak C_{K,r,2}'$ such that
\begin{equation*}
A_{K,2}(T;r)=\mathfrak C_{K,r,2}'T+O\left(\frac{T}{\log T}\right),
\end{equation*}
then
\begin{equation*}
\frac{n_K}{2}\sum_{\substack{3|r|<p\le\sqrt x\\f_K(p)=2}}\frac{H(r^2-4p^2)}{p^2}
=\mathfrak C_{K,r,2}\log\log x+O(1),
\end{equation*}
where $\mathfrak C_{K,r,2}=\frac{n_K}{2\pi}\mathfrak C_{K,r,2}'$.
\end{prop}
\begin{proof}
Combining the class number formula~\eqref{class number formula}
with the definition of the Hurwitz-Kronecker class number,
we obtain the identity
\begin{equation}\label{class numbers to l-series}
\frac{n_K}{2}\sum_{\substack{3|r|<p\le\sqrt x\\f_K(p)=2}}\frac{H(r^2-4p^2)}{p^2}
=
\frac{n_K}{4\pi}
\sum_{\substack{3|r|<p\le\sqrt x\\ f_K(p)=2}}
\sum_{\substack{k^2\mid r^2-4p^2\\ d_k(p^2)\equiv 0,1\pmod 4}}
\frac{\sqrt{4p^2-r^2}}{kp^2}L\left(1,\chi_{d_k(p^2)}\right).
\end{equation}
By assumption $r$ is odd, and hence $r^2-4p^2\equiv 1\pmod 4$.
Thus, if $k^2\mid r^2-4p^2$, then $k$ must be odd and $k^2\equiv 1\pmod 4$.
Hence the sum over $k$ above may be restricted to odd integers whose squares divide
$r^2-4p^2$, and the congruence conditions on $d_k(p^2)=(r^2-4p^2)/k^2$ may be omitted.
Furthermore, if $\ell$ is a prime dividing $(k,r)$ and $k^2\mid r^2-4p^2$, then
\begin{equation*}
0\equiv r^2-4p^2\equiv -(2p)^2\pmod{\ell^2},
\end{equation*}
and since $\ell$ is odd, it follows that $\ell=p$. This is not possible for $p>3|r|$ since the fact that $\ell$ divides
$r$ implies that $\ell\le |r|$. Hence, the sum on $k$ above may be further restricted to
integers which are coprime to $r$.
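These parity and coprimality restrictions on $k$ are easy to confirm empirically. The following Python sketch (illustrative only, with a naive trial-division primality test) checks them for several small odd $r$.

```python
from math import gcd, isqrt

def is_prime(n: int) -> bool:
    """Naive trial-division primality test, adequate for small n."""
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

# For odd r, every k with k^2 | r^2 - 4p^2 (p prime, p > 3|r|)
# should be odd and coprime to r.
checked = 0
for r in (1, 3, 5, 7, 9, 15):
    for p in filter(is_prime, range(3 * r + 1, 200)):
        m = abs(r * r - 4 * p * p)
        for k in range(1, isqrt(m) + 1):
            if m % (k * k) == 0:
                assert k % 2 == 1 and gcd(k, r) == 1
                checked += 1
print(checked, "square divisors checked")
```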
Therefore, switching the order of summation in~\eqref{class numbers to l-series}
and employing the approximation
$\sqrt{4p^2-r^2}=2p+O\left(1/p\right)$ gives
\begin{equation*}
\frac{n_K}{2}\sum_{\substack{3|r|<p\le\sqrt x\\f_K(p)=2}}\frac{H(r^2-4p^2)}{p^2}
=
\frac{n_K}{2\pi}
\sum_{\substack{k\le 2\sqrt x\\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{3|r|<p\le \sqrt x\\ f_K(p)=2\\ k^2\mid r^2-4p^2}}
\frac{L\left(1,\chi_{d_k(p^2)}\right)}{p}
+O\left(1\right).
\end{equation*}
With $A_{K,2}(T;r)$ as defined by~\eqref{defn of A}, the main term on the right hand side is
\begin{equation*}
\frac{n_K}{2\pi}
\sum_{\substack{k\le 2\sqrt x\\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{3|r|<p\le \sqrt x\\ f_K(p)=2\\ k^2\mid r^2-4p^2}}
\frac{L\left(1,\chi_{d_k(p^2)}\right)}{p}
=\frac{n_K}{2\pi}
\int_{3|r|}^{\sqrt x}\frac{\mathrm{d} A_{K,2}(T;r)}{T\log T}.
\end{equation*}
By assumption, $A_{K,2}(T;r)=\mathfrak C_{K,r,2}'T+O(T/\log T)$.
Hence, integrating by parts gives
\begin{equation*}
\frac{n_K}{2\pi}\int_{3|r|}^{\sqrt x}\frac{\mathrm{d} A_{K,2}(T;r)}{T\log T}
=\frac{n_K}{2\pi}\mathfrak C_{K,r,2}'\log\log x
+O(1).
\end{equation*}
\end{proof}
\section{\textbf{Reduction to a problem of Barban-Davenport-Halberstam Type.}}
\label{reduce to bdh prob}
Propositions~\ref{reduction to avg of class numbers} and~\ref{reduction to avg of special values}
reduce the problem of computing an asymptotic formula for
\begin{equation*}
\frac{1}{\#\mathscr{C}}\sum_{E\in\mathscr{C}}\pi_E^{r,2}(x)
\end{equation*}
to the problem of showing that there exists a constant $\mathfrak C_{K,r,2}'$ such that
\begin{equation}\label{avg of l-series goal}
A_{K,2}(T;r)
=\sum_{\substack{k\le 2T \\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{3|r|<p\le T\\ f_K(p)=2\\ k^2\mid r^2-4p^2}}
L\left(1,\chi_{d_k(p^2)}\right)\log p
=\mathfrak C_{K,r,2}'T+O(T/\log T).
\end{equation}
In this section, we reduce this to a problem of ``Barban-Davenport-Halberstam type.''
Since every rational prime $p$ that does not ramify and splits into degree two primes
in $K$ must either split completely in $K'$ or split into degree two primes in
$K'$, we may write
\begin{equation*}
A_{K,2}(T;r)=\sum_{\substack{\tau\in\mathrm{Gal}(K'/\mathbb{Q})\\ |\tau|=1,2}}A_{K,\tau}(T;r),
\end{equation*}
where the sum runs over the elements $\tau\in\mathrm{Gal}(K'/\mathbb{Q})$ of order dividing
two, $A_{K,\tau}(T;r)$ is defined by
\begin{equation}
A_{K,\tau}(T;r)
:=\sum_{\substack{k\le 2T \\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{3|r|<p\le T\\ \leg{K/\mathbb{Q}}{p}\subseteq \mathcal C_\tau\\ k^2\mid r^2-4p^2}}
L\left(1,\chi_{d_k(p^2)}\right)\log p,
\end{equation}
and $\mathcal C_\tau$ is the subset of all order two elements of $\mathrm{Gal}(K/\mathbb{Q})$ whose
restriction to $K'$ is equal to $\tau$.
Thus, it follows that~\eqref{avg of l-series goal} holds if
there exists a constant $\mathfrak C_{r}^{(\tau)}$ such that
\begin{equation*}
A_{K,\tau}(T;r)=\mathfrak C_{r}^{(\tau)}T+O(T/\log T)
\end{equation*}
for every element $\tau\in\mathrm{Gal}(K'/\mathbb{Q})$ of order dividing two.
\begin{prop}\label{avg of l-series prop}
Let $r$ be a fixed odd integer, let $\tau$ be an element of $\mathrm{Gal}(K'/\mathbb{Q})$ of order dividing two,
and recall the definition of $\mathcal E_K(x;Q,\mathcal C_\tau)$ as given by~\eqref{bdh problem}.
If
\begin{equation}\label{error bound assump}
\mathcal E_K(T;T/(\log T)^{12},\mathcal C_\tau)\ll\frac{T^2}{(\log T)^{11}},
\end{equation}
then
\begin{equation}\label{avg formula for tau}
A_{K,\tau}(T;r)=\mathfrak C_{r}^{(\tau)} T+O\left(\frac{T}{\log T}\right),
\end{equation}
where
\begin{equation}\label{constant for l-series avg}
\mathfrak C_{r}^{(\tau)}
=\frac{2\#\mathcal C_\tau}{3n_K}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)
\sum_{g\in\mathcal S_\tau}
\sum_{\substack{k=1\\ (k,2r)=1}}^\infty\frac{1}{k}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty\frac{1}{n\varphi_K(m_Knk^2)}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\#C_{g}(r,a,n,k)
\end{equation}
and
\begin{equation*}\label{defn of C_g(r,a,n,k)}
C_{g}(r,a,n,k)
=\left\{b\in (\mathbb{Z}/m_Knk^2\mathbb{Z})^\times: 4b^2\equiv r^2-ak^2\pmod{nk^2}, b\equiv g\pmod{m_K}\right\}.
\end{equation*}
\end{prop}
\begin{proof}
Suppose that $d$ is a discriminant, and let
\begin{equation*}
S_d(y):=\sum_{\substack{n\le y\\ (n,2r)=1}}\chi_d(n).
\end{equation*}
Burgess' bound for character sums~\cite[Theorem 2]{Bur:1963} implies that
\begin{equation*}
\sum_{n\le y}\chi_d(n)\ll y^{1/2}|d|^{7/32}.
\end{equation*}
Since $r$ is a fixed integer, we have that
\begin{equation*}
\left|S_d(y)\right|
=\left|\sum_{m\mid 2r}\mu(m)\sum_{\substack{n\le y\\ m\mid n}}\chi_d(n)\right|
\ll y^{1/2}|d|^{7/32},
\end{equation*}
where the implied constant depends on $r$ alone.
Therefore, for any $U>0$, we have that
\begin{equation}\label{Burgess bound for l-series}
\sum_{\substack{n>U\\ (n,2r)=1}}\frac{\chi_d(n)}{n}
=\int_U^\infty\frac{\mathrm{d} S_d(y)}{y}
\ll\frac{|d|^{7/32}}{\sqrt U}.
\end{equation}
Now, we consider the case when $d=d_k(p^2)=(r^2-4p^2)/k^2$ with
$(k,2r)=1$ and $p>3|r|$. Since $r$ is odd, it is easily checked that
$\chi_{d_k(p^2)}(2)=\leg{5}{2}=-1$, and $\chi_{d_k(p^2)}(\ell)=\leg{-1}{\ell}$ for any prime
$\ell$ dividing $r$. Therefore, we may write
\begin{equation*}
L(1,\chi_{d_k(p^2)})=\frac{2}{3} \prod_{\ell\mid r}\left(1-\frac{\leg{-1}{\ell}}{\ell}\right)^{-1}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty\leg{d_k(p^2)}{n}\frac{1}{n},
\end{equation*}
the product being over the primes $\ell$ dividing $r$.
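The computation of $\chi_{d_k(p^2)}(2)$ rests on the congruence $d_k(p^2)\equiv 5\pmod 8$: since $r$ and $k$ are odd, $r^2\equiv k^2\equiv 1\pmod 8$ while $4p^2\equiv 4\pmod 8$, and the Kronecker symbol at $2$ equals $-1$ exactly when the discriminant is $\equiv\pm 3\pmod 8$. A quick Python check of the residue class (illustrative only):

```python
# For odd r and k and odd prime p: r^2 = k^2 = 1 (mod 8) and 4p^2 = 4 (mod 8),
# so d = (r^2 - 4p^2)/k^2 = 5 (mod 8); hence the Kronecker symbol (d/2) is -1.
for r in (1, 3, 5, 7):
    for p in (5, 7, 11, 13, 101):
        m = r * r - 4 * p * p
        for k in (1, 3, 5, 15):
            if m % (k * k) == 0:
                d = m // (k * k)  # Python's % maps negative d into 0..7
                assert d % 8 == 5
print("all residues are 5 mod 8")
```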
Since we also have the bound
$|d_k(p^2)|\le (2p/k)^2$, the inequality~\eqref{Burgess bound for l-series} implies that
\begin{equation*}
A_{K,\tau}(T;r)=
\frac{2}{3}\prod_{\ell\mid r}\left(1-\frac{\leg{-1}{\ell}}{\ell}\right)^{-1}
\sum_{\substack{k\le 2T\\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{n\le U\\ (n,2r)=1}}\frac{1}{n}
\sum_{\substack{3|r|<p\le T\\ \leg{K/\mathbb{Q}}{p}\subseteq\mathcal C_\tau\\ k^2\mid r^2-4p^2}}
\leg{d_k(p^2)}{n}\log p
+O\left(
\frac{T^{23/16}}{\sqrt U}
\right).
\end{equation*}
For any $V>0$, we also have that
\begin{equation*}
\sum_{\substack{V<k\le 2T\\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{n\le U\\ (n,2r)=1}}\frac{1}{n}
\sum_{\substack{3|r|<p\le T\\ \leg{K/\mathbb{Q}}{p}\subseteq\mathcal C_\tau\\ k^2\mid r^2-4p^2}}
\leg{d_k(p^2)}{n}\log p
\ll \log T\log U
\sum_{\substack{V<k\le 2T\\ (k,2r)=1}}\frac{1}{k}\sum_{\substack{m\le T\\ k^2\mid r^2-4m^2}}1,
\end{equation*}
where the last sum on the right runs over all integers $m\le T$ such that $k^2\mid r^2-4m^2$.
To bound the double sum on the right, we employ the Chinese Remainder Theorem to see that
\begin{equation*}
\begin{split}
\sum_{\substack{V<k\le 2T\\ (k,2r)=1}}\frac{1}{k}\sum_{\substack{m\le T\\ k^2\mid r^2-4m^2}}1
&<
\sum_{\substack{V<k\le 2T\\ (k,2r)=1}}\frac{1}{k}\sum_{\substack{m\le 2T\\ k\mid r^2-4m^2}}1
\ll \sum_{\substack{V<k\le 2T\\ (k,2r)=1}}
\frac{\#\{z\in\mathbb{Z}/k\mathbb{Z}: 4z^2\equiv r^2\pmod{k}\}}{k}\frac{T}{k}\\
&\ll T\sum_{V<k\le 2T}\frac{2^{\omega(k)}}{k^2}
< T\int_V^{\infty}\frac{\mathrm{d} N(y)}{y^2}
\ll \frac{T\log V}{V},
\end{split}
\end{equation*}
where $\omega(k)$ is the number of distinct prime divisors of $k$ and
$N(y)=\sum_{k\le y}2^{\omega(k)}\ll y\log y$.
See~\cite[p.~68]{Mur:2001} for example.
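The bound $N(y)\ll y\log y$ is easy to sanity-check numerically. The sketch below (illustrative only) computes $\omega(k)$ for all $k\le y$ with a sieve and confirms that $N(y)/(y\log y)$ stays bounded.

```python
from math import log

def omega_sieve(limit):
    """w[k] = number of distinct prime factors of k, via a sieve."""
    w = [0] * (limit + 1)
    for p in range(2, limit + 1):
        if w[p] == 0:  # p has no smaller prime factor, so p is prime
            for m in range(p, limit + 1, p):
                w[m] += 1
    return w

y = 10**5
w = omega_sieve(y)
N = sum(2**w[k] for k in range(1, y + 1))
ratio = N / (y * log(y))
print(ratio)  # remains bounded, consistent with N(y) << y log y
```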
Therefore, since including the primes $p\le 3|r|$ introduces an error that is
$O(\log U\log V)$, we have
\begin{equation*}
\begin{split}
A_{K,\tau}(T;r)
&=\frac{2}{3}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)\sum_{\substack{k\le V\\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{n\le U\\ (n,2r)=1}}\frac{1}{n}
\sum_{\substack{p\le T\\ \leg{K/\mathbb{Q}}{p}\subseteq\mathcal C_\tau\\ k^2\mid r^2-4p^2}}
\leg{d_k(p^2)}{n}\log p\\
&\quad+O\left(\frac{T^{23/16}}{\sqrt U}
+\frac{T\log T\log U\log V}{V}
+\log U\log V\right).
\end{split}
\end{equation*}
If $n$ is odd, the value of $\leg{d_k(p^2)}{n}$ depends only on the residue of $d_k(p^2)$
modulo $n$. Thus, we may regroup the terms of the innermost sum on $p$ to obtain
\begin{equation*}
\begin{split}
A_{K,\tau}(T;r)
&=\frac{2}{3}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)
\sum_{\substack{k\le V\\ (k,2r)=1}}\frac{1}{k}\sum_{\substack{n\le U\\ (n,2r)=1}}\frac{1}{n}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\sum_{\substack{p\le T\\ \leg{K/\mathbb{Q}}{p}\subseteq\mathcal C_\tau\\ 4p^2\equiv r^2-ak^2\pmod{nk^2}}}
\log p\\
&\quad+O\left(\frac{T^{23/16}}{\sqrt U}
+\frac{T\log T\log U\log V}{V}
+\log U\log V\right).
\end{split}
\end{equation*}
Suppose that there is a prime $p\mid nk^2$ satisfying the congruence
$4p^2\equiv r^2-ak^2\pmod{nk^2}$. Since $(k,r)=1$, it follows that $p$ must divide $n$.
Therefore, there can be at most $O(\log n)$ such primes for any given values of $a,k$ and $n$.
Thus,
\begin{equation}\label{truncation at U,V complete}
\begin{split}
A_{K,\tau}(T;r)
&=\frac{2}{3}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)\\
&\quad\times
\sum_{\substack{k\le V\\ (k,2r)=1}}\frac{1}{k}\sum_{\substack{n\le U\\ (n,2r)=1}}\frac{1}{n}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\sum_{\substack{b\in(\mathbb{Z}/nk^2\mathbb{Z})^\times\\ 4b^2\equiv r^2-ak^2\pmod{nk^2}}}
\sum_{\substack{p\le T\\ \leg{K/\mathbb{Q}}{p}\subseteq\mathcal C_\tau\\ p\equiv b\pmod{nk^2}}}\log p\\
&\quad\quad+O\left(\frac{T^{23/16}}{\sqrt U}
+\frac{T\log T\log U\log V}{V}
+U\log U\log V\right).
\end{split}
\end{equation}
We now make the choice
\begin{align}
U&:=\frac{T}{(\log T)^{20}},\label{U choice}\\
V&:=(\log T)^{4}\label{V choice}.
\end{align}
Note that with this choice the error above is easily $O(T/\log T)$.
Recall the definitions of $\mathcal C_\tau$ and $\mathcal S_\tau$ from
Section~\ref{cheb for composites}. Then every prime $p$ counted by the innermost sum
of~\eqref{truncation at U,V complete} satisfies the condition that $\leg{K'/\mathbb{Q}}{p}=\tau$, and hence
it follows that $p\equiv g\pmod{m_K}$ for some $g\in\mathcal S_\tau$. Therefore, we may
rewrite the main term of~\eqref{truncation at U,V complete} as
\begin{equation}\label{ready for cheb}
\frac{2}{3}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)
\sum_{g\in\mathcal S_\tau}
\sum_{\substack{k\le V\\ (k,2r)=1}}\frac{1}{k}\sum_{\substack{n\le U\\ (n,2r)=1}}\frac{1}{n}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\sum_{\substack{b\in(\mathbb{Z}/m_Knk^2\mathbb{Z})^\times\\
4b^2\equiv r^2-ak^2\pmod{nk^2}\\ b\equiv g\pmod{m_K}}}
\theta(T;\mathcal C_\tau,m_Knk^2,b).
\end{equation}
In accordance with our observation in Section~\ref{cheb for composites}, the condition that
$b\equiv g\pmod{m_K}$ ensures that the two Chebotar\"ev conditions
$\leg{K/\mathbb{Q}}{p}\subseteq \mathcal C_\tau$ and $p\equiv b\pmod{m_Knk^2}$ are compatible.
Therefore, we choose to approximate~\eqref{ready for cheb} by
\begin{equation}\label{apply cheb}
T\frac{2\#\mathcal C_\tau}{3n_K}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)
\sum_{g\in\mathcal S_\tau}
\sum_{\substack{k\le V\\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{n\le U\\ (n,2r)=1}}\frac{1}{n\varphi_K(m_Knk^2)}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\#C_{g}(r,a,n,k),
\end{equation}
where $C_g(r,a,n,k)$ is as defined in the statement of the proposition.
For the moment, we ignore the error in this approximation and concentrate on the
supposed main term.
The following lemma, whose proof we delay until Section~\ref{proofs of lemmas},
implies that the expression in~\eqref{apply cheb} is equal to
$\mathpzc mathfrak C_{r}^{(\tau)}T+O(T/\log T)$ for $U$ and $V$ satisfying~\eqref{U choice}
and~\eqref{V choice}.
\begin{lma}\label{truncated constant lemma}
With $\mathfrak C_{r}^{(\tau)}$ as defined in~\eqref{constant for l-series avg}, we have
\begin{equation*}
\begin{split}
\mathfrak C_{r}^{(\tau)}
&=\frac{2\#\mathcal C_\tau}{3n_K}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)
\sum_{g\in\mathcal S_\tau}
\sum_{\substack{k\le V\\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{n\le U\\ (n,2r)=1}}\frac{1}{n\varphi_K(m_Knk^2)}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\#C_{g}(r,a,n,k)\\
&\quad+O\left(\frac{1}{\sqrt U}+\frac{\log V}{V^2}\right).
\end{split}
\end{equation*}
\end{lma}
We now consider the error in approximating~\eqref{ready for cheb} by~\eqref{apply cheb}.
The error in the approximation is equal to a constant (depending only on $K$ and $r$) times
\begin{equation*}
\sum_{g\in\mathcal S_\tau}
\sum_{\substack{k\le V\\ (k,2r)=1,\\ n\le U\\ (n,2r)=1}}\frac{1}{kn}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\sum_{b\in C_g(r,a,n,k)}
\left(\theta(T;\mathcal C_\tau,m_Knk^2,b)-\frac{\#\mathcal C_\tau}{n_K\varphi_K(m_Knk^2)}T\right).
\end{equation*}
We note that for each $b\in(\mathbb{Z}/m_Knk^2\mathbb{Z})^\times$, there is at most one $a\in(\mathbb{Z}/n\mathbb{Z})^\times$ such that
$ak^2\equiv 4b^2-r^2\pmod{nk^2}$.
Therefore, interchanging the sum on $a$ with the sum on $b$ and applying the Cauchy-Schwarz
inequality, the above error is bounded by
\begin{equation*}
\sum_{k\le V}\frac{1}{k}\left[\sum_{n\le U}\frac{\varphi(m_Knk^2)}{n^2}\right]^{1/2}
\left[\sum_{n\le U}
\sum_{\substack{g\in \mathcal S_\tau,\\ b\in(\mathbb{Z}/m_Knk^2\mathbb{Z})^\times\\ b\equiv g\pmod{m_K}}}
\left(\theta(T;\mathcal C_\tau,m_Knk^2,b)-\frac{\#\mathcal C_\tau}{n_K\varphi_K(m_Knk^2)}T\right)^2
\right]^{1/2}.
\end{equation*}
We bound this last expression by a constant times
\begin{equation*}
V\sqrt{\log U}\sqrt{\mathcal E_{K}(T;UV^2,\mathcal C_\tau)},
\end{equation*}
where $\mathcal E_{K}(T;UV^2,\mathcal C_\tau)$ is defined by~\eqref{bdh problem}.
Given our assumption~\eqref{error bound assump} and our choices~\eqref{U choice}
and~\eqref{V choice} for $U$ and $V$, the proposition now follows.
\end{proof}
\section{\textbf{Computing the average order constant for a general Galois extension.}}
\label{factor constant}
In this section, we finish the proof of Theorem~\ref{main thm 0} by computing the product
formula~\eqref{avg LT const} for the constant $\mathfrak C_{K,r,2}$. It follows from
Propositions~\ref{reduction to avg of class numbers},~\ref{reduction to avg of special values},
and~\ref{avg of l-series prop} that
\begin{equation*}
\mathfrak C_{K,r,2}=\frac{n_K}{2\pi}\mathfrak C_{K,r,2}',
\end{equation*}
where
\begin{equation*}
\mathfrak C_{K,r,2}'=\sum_{\substack{\tau\in\mathrm{Gal}(K'/\mathbb{Q})\\ |\tau|=1,2}}\mathfrak C_{r}^{(\tau)}
\end{equation*}
and $\mathfrak C_{r}^{(\tau)}$ is defined by
\begin{equation*}
\mathfrak C_{r}^{(\tau)}
=\frac{2\#\mathcal C_\tau}{3n_K}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)
\sum_{g\in\mathcal S_\tau}
\sum_{\substack{k=1\\ (k,2r)=1}}^\infty\frac{1}{k}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty\frac{1}{n\varphi_K(m_Knk^2)}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\#C_{g}(r,a,n,k).
\end{equation*}
We now recall the definition
\begin{equation*}
\mathfrak{c}_r^{(g)}
=\sum_{\substack{k=1\\ (k,2r)=1}}^\infty\frac{1}{k}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty\frac{1}{n\varphi_K(m_Knk^2)}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\#C_{g}(r,a,n,k)
\end{equation*}
and note that
\begin{equation*}
\mathfrak C_{r}^{(\tau)}
=\frac{2\#\mathcal C_\tau}{3n_K}\prod_{\ell\mid r}\left(\frac{\ell}{\ell-\leg{-1}{\ell}}\right)
\sum_{g\in\mathcal S_\tau}\mathfrak{c}_r^{(g)}.
\end{equation*}
It remains then to show that
\begin{equation}\label{C tau g part}
\mathfrak{c}_r^{(g)}=\frac{n_{K'}}{\varphi(m_K)}
\prod_{\ell\nmid 2rm_K}\left(\frac{\ell(\ell-1-\leg{-1}{\ell})}{(\ell-1)(\ell-\leg{-1}{\ell})}\right)
\prod_{\substack{\ell\mid m_K\\ \ell\nmid 2r}}\mathfrak{K}_r^{(g)},
\end{equation}
where the products are taken over the rational primes $\ell$ satisfying the stated conditions,
recalling that $\mathfrak{K}_r^{(g)}$ was defined by
\begin{equation*}
\mathfrak{K}_r^{(g)}=
\begin{cases}
\displaystyle
\frac{\ell^{\frac{\nu_\ell(4g^2-r^2)+1}{2}}-1}{\ell^{\frac{\nu_\ell(4g^2-r^2)-1}{2}}(\ell-1)}
&\begin{array}{l}\text{if }\nu_\ell(4g^2-r^2)<\nu_\ell(m_K)\\ \quad\text{ and } 2\nmid\nu_\ell(4g^2-r^2),\end{array}\\
\displaystyle
\frac{\ell^{\frac{\nu_\ell(4g^2-r^2)}{2}+1}-1}{\ell^{\frac{\nu_\ell(4g^2-r^2)}{2}}(\ell-1)}
+\frac{\leg{(r^2-4g^2)/\ell^{\nu_\ell(r^2-4g^2)}}{\ell}}{\ell^{\frac{\nu_\ell(4g^2-r^2)}{2}}\left(\ell-\leg{(r^2-4g^2)/\ell^{\nu_\ell(r^2-4g^2)}}{\ell}\right)}
&\begin{array}{l}\text{if }\nu_\ell(4g^2-r^2)<\nu_\ell(m_K)\\ \quad\text{ and }2\mid\nu_\ell(4g^2-r^2),\end{array}\\
\displaystyle
\frac{\ell^{2\ceil{\frac{\nu_\ell(m_K)}{2}}+1}(\ell+1)\left(\ell^{\ceil{\frac{\nu_\ell(m_K)}{2}}}-1\right)
+\ell^{\nu_\ell(m_K)+2}}{\ell^{3\ceil{\frac{\nu_\ell(m_K)}{2}}}(\ell^2-1)}
&\text{if }\nu_\ell(4g^2-r^2)\ge\nu_\ell(m_K).\\
\end{cases}
\end{equation*}
By the Chinese Remainder Theorem and equation~\eqref{deg of comp extn},
\begin{equation*}
\begin{split}
\mathfrak{c}_r^{(g)}
&=\sum_{\substack{k=1\\ (k,2r)=1}}^\infty\frac{1}{k}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty\frac{1}{n\varphi_K(m_Knk^2)}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\#C_{g}(r,a,n,k)\\
&=n_{K'}\sum_{\substack{k=1\\ (k,2r)=1}}^\infty\frac{1}{k}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty\frac{1}{n\varphi(m_Knk^2)}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\prod_{\ell\mid m_Knk^2}\# C_{g}^{(\ell)}(r,a,n,k),
\end{split}
\end{equation*}
where the product is taken over the distinct primes $\ell$ dividing $m_Knk^2$,
\begin{equation*}
C_{g}^{(\ell)}(r,a,n,k)
:=\left\{
b\in (\mathbb{Z}/\ell^{\nu_\ell(m_Knk^2)}\mathbb{Z})^\times:
4b^2\equiv r^2-ak^2\pmod{\ell^{\nu_\ell(nk^2)}},
b\equiv g\pmod{\ell^{\nu_\ell(m_K)}}
\right\},
\end{equation*}
and $\nu_\ell$ is the usual $\ell$-adic valuation.
With somewhat different notation, the following evaluation of $\#C_{g}^{(\ell)}(r,a,n,k)$ can be
found in~\cite{CFJKP:2011}.
\begin{lma}\label{count solutions mod ell}
Let $k$ and $n$ be positive integers satisfying the condition $(nk,2r)=1$.
Suppose that $\ell$ is any prime dividing $m_Knk^2$.
If $\ell\nmid m_K$, then
\begin{equation*}
\#C_{g}^{(\ell)}(r,a,n,k)
=\begin{cases}
1+\leg{r^2-ak^2}{\ell}&\text{if }\ell\nmid r^2-ak^2,\\
0&\text{otherwise;}
\end{cases}
\end{equation*}
if $\ell\mid m_K$, then
\begin{equation*}
\#C_{g}^{(\ell)}(r,a,n,k)
=\begin{cases}
\ell^{\min\{\nu_\ell(nk^2),\nu_\ell(m_K)\}}&\text{if } 4g^2\equiv r^2-ak^2\pmod{\ell^{\min\{\nu_\ell(nk^2),\nu_\ell(m_K)\}}},\\
0&\text{otherwise.}
\end{cases}
\end{equation*}
In particular,
\begin{equation*}
\#C_g^{(\ell)}(r,1,1,k)=\begin{cases}
2&\text{if }\ell\mid k\text{ and }\ell\nmid m_K,\\
\ell^{\min\{2\nu_\ell(k),\nu_\ell(m_K)\}}&\text{if }\ell\mid m_K
\text{ and }4g^2\equiv r^2\pmod{\ell^{\min\{2\nu_\ell(k),\nu_\ell(m_K)\}}},\\
0&\text{otherwise}.
\end{cases}
\end{equation*}
\end{lma}
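The first case of Lemma~\ref{count solutions mod ell} can be verified by brute force for small prime powers. The Python sketch below (illustrative only) checks that the number of solutions of $4b^2\equiv c\pmod{\ell^e}$ with $\ell\nmid c$ is $1+\leg{c}{\ell}$, independently of $e$, as Hensel lifting predicts.

```python
def legendre(c, ell):
    """Legendre symbol (c/ell) for an odd prime ell and c coprime to ell."""
    s = pow(c % ell, (ell - 1) // 2, ell)
    return -1 if s == ell - 1 else s

# For ell not dividing the modulus m_K and ell not dividing c = r^2 - a k^2,
# the lemma predicts #{b mod ell^e : 4b^2 = c} = 1 + (c/ell) for every e >= 1.
for ell in (3, 5, 7, 11):
    for e in (1, 2):
        mod = ell**e
        for c in range(1, mod):
            if c % ell == 0:
                continue
            count = sum(1 for b in range(mod) if (4 * b * b - c) % mod == 0)
            assert count == 1 + legendre(c, ell)
print("lemma counts verified for small prime powers")
```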
By Lemma~\ref{count solutions mod ell} we note that if $\ell$ is a prime dividing $m_K$ and
$\ell$ does not divide $nk$, then $\#C_{g}^{(\ell)}(r,a,n,k)=1$.
We also see that
$\#C_{g}^{(\ell)}(r,a,n,k)=0$ if $(r^2-ak^2,n)>1$.
Finally, if $\ell\mid k$ and $\ell\nmid n$, then
\begin{equation*}
\#C_{g}^{(\ell)}(r,a,n,k)
=\#C_{g}^{(\ell)}(r,1,1,k)
\end{equation*}
as $\nu_\ell(nk^2)=2\nu_\ell(k)$ in this case.
Therefore,
using the formula $\varphi(mn)=\varphi(m)\varphi(n)(m,n)/\varphi((m,n))$, we have
\begin{equation}\label{ready to start factoring}
\begin{split}
\mathfrak{c}_r^{(g)}
&=n_{K'}\sum_{\substack{k=1\\ (k,2r)=1}}^\infty\frac{1}{k^2\varphi(m_Kk)}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty
\frac{\varphi\left((n,m_Kk)\right)}{n\varphi(n)(n,m_Kk)}
\sum_{\substack{a\in (\mathbb{Z}/n\mathbb{Z})^\times\\ (r^2-ak^2,n)=1}}\leg{a}{n}
\prod_{\ell\mid nk}\# C_{g}^{(\ell)}(r,a,n,k)\\
&=n_{K'}\sum_{\substack{k=1\\ (k,2r)=1}}^\infty\frac{1}{k^2\varphi(m_Kk)}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty
\frac{\varphi\left((n,m_Kk)\right)
\prod_{\substack{\ell\mid k\\ \ell\nmid n}}\#C_{g}^{(\ell)}(r,1,1,k)}
{n\varphi(n)(n,m_Kk)}c_k(n)\\
&=\frac{n_{K'}}{\varphi(m_K)}
\sum_{\substack{k=1\\ (k,2r)=1}}^\infty\frac{\varphi((m_K,k))}{(m_K,k)k^2\varphi(k)}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty
\frac{\varphi\left((n,m_Kk)\right)
\prod_{\substack{\ell\mid k\\ \ell\nmid n}}\#C_{g}^{(\ell)}(r,1,1,k)}
{n\varphi(n)(n,m_Kk)}c_k(n)\\
&=\frac{n_{K'}}{\varphi(m_K)}\sum_{k=1}^\infty\strut^\prime
\frac{\varphi((m_K,k))\prod_{\ell\mid k}\#C_g^{(\ell)}(r,1,1,k)}{(m_K,k)k^2\varphi(k)}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty
\frac{\varphi\left((n,m_Kk)\right)c_k(n)}
{n\varphi(n)(n,m_Kk)\prod_{\ell\mid (k,n)}\#C_{g}^{(\ell)}(r,1,1,k)}.
\end{split}
\end{equation}
Here $c_k(n)$ is defined by
\begin{equation}\label{defn of c}
c_k(n):=\sum_{\substack{a\in (\mathbb{Z}/n\mathbb{Z})^\times\\ (r^2-ak^2,n)=1}}\leg{a}{n}
\prod_{\ell\mid n}\# C_{g}^{(\ell)}(r,a,n,k),
\end{equation}
for $(n,2r)=1$,
and the prime on the sum over $k$ is meant to indicate that the sum is to be restricted to those
$k$ which are coprime to $2r$ and not divisible by any prime $\ell$ for which
$\#C_{g}^{(\ell)}(r,1,1,k)=0$.
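The totient identity $\varphi(mn)=\varphi(m)\varphi(n)(m,n)/\varphi((m,n))$ invoked in the derivation above is standard; the following Python sketch (illustrative only) checks it exhaustively for small arguments.

```python
from math import gcd

def phi(n):
    """Euler's totient, by trial factorization (fine for small n)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        result -= result // m
    return result

# Check phi(mn) * phi(g) = phi(m) * phi(n) * g with g = (m, n).
for m in range(1, 60):
    for n in range(1, 60):
        g = gcd(m, n)
        assert phi(m * n) * phi(g) == phi(m) * phi(n) * g
print("totient identity verified")
```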
\begin{lma}\label{compute little c}
Assume that $k$ is an integer coprime to $2r$.
The function $c_k(n)$ defined by equation~\eqref{defn of c} is multiplicative in $n$.
Suppose that $\ell$ is a prime not dividing $2r$.
If $\ell\nmid km_K$, then
\begin{equation*}
\frac{c_k(\ell^e)}{\ell^{e-1}}=\begin{cases}
\ell-3&\text{if }2\mid e,\\
-\left(1+\leg{-1}{\ell}\right)&\text{if }2\nmid e.
\end{cases}
\end{equation*}
If $\ell\mid km_K$, then
\begin{equation*}
\frac{c_k(\ell^e)}{\ell^{e-1}}
=\#C_g^{(\ell)}(r,1,1,k)\begin{cases}
\ell-1&\text{if }2\mid e,\\
0&\text{if }2\nmid e
\end{cases}
\end{equation*}
in the case that $\nu_\ell(m_K)\le 2\nu_\ell(k)$; and
\begin{equation*}
\frac{c_k(\ell^e)}{\ell^{e-1}}
=\#C_g^{(\ell)}(r,1,1,k)\leg{(r^2-4g^2)/\ell^{2\nu_\ell(k)}}{\ell}^e\ell
\end{equation*}
in the case that $2\nu_\ell(k)<\nu_\ell(m_K)$.
Furthermore, for $(n,2r)=1$, we have
\begin{equation*}
c_k(n)\ll\frac{n\prod_{\ell\mid (n,k)}\#C_g^{(\ell)}(r,1,1,k)}{\kappa_{m_K}(n)},
\end{equation*}
where for any integer $N$, $\kappa_N(n)$ is the multiplicative function defined on
prime powers by
\begin{equation}\label{defn of kappa}
\kappa_N(\ell^e):=\begin{cases}
\ell&\text{if }\ell\nmid N\text{ and }2\nmid e,\\
1&\text{otherwise}.
\end{cases}
\end{equation}
\end{lma}
\begin{rmk}
Lemma~\ref{compute little c} is essentially proved in~\cite{CFJKP:2011}, but we give the proof in
Section~\ref{proofs of lemmas} for completeness.
\end{rmk}
Using the lemma and recalling the restrictions on $k$, we factor the sum over $n$
in~\eqref{ready to start factoring} as
\begin{equation*}
\begin{split}
&\sum_{\substack{n=1\\ (n,2r)=1}}^\infty
\frac{\varphi\left((n,m_Kk)\right)c_k(n)}
{n\varphi(n)(n,m_Kk)\prod_{\ell\mid (k,n)}\#C_{g}^{(\ell)}(r,1,1,k)}\\
&\quad\quad=\prod_{\ell\nmid 2rm_Kk}
\left[\sum_{e\ge 0}\frac{c_k(\ell^e)}{\ell^e\varphi(\ell^e)}\right]
\prod_{\substack{\ell\mid m_Kk\\ (\ell\nmid 2r)}}
\left[1+\sum_{e\ge 1}\frac{\left(1-\frac{1}{\ell}\right)c_k(\ell^e)}
{\ell^e\varphi(\ell^e)\#C_g^{(\ell)}(r,1,1,k)}\right]\\
&\quad\quad=\prod_{\ell\nmid 2rm_Kk}F_0(\ell)
\prod_{\substack{\ell\mid m_Kk\\ (\ell\nmid 2r)}}F_1^{(g)}(\ell,k)\\
&\quad\quad=
\prod_{\ell\nmid 2rm_K}F_0(\ell)
\prod_{\substack{\ell\mid m_K\\ \ell\nmid 2r}}F_1^{(g)}(\ell,1)
\prod_{\substack{\ell\mid k\\ \ell\nmid m_K\\ (\ell\nmid 2r)}}\frac{F_1^{(g)}(\ell,k)}{F_0(\ell)}
\prod_{\substack{\ell\mid(m_K,k)\\ (\ell\nmid 2r)}}\frac{F_1^{(g)}(\ell,k)}{F_1^{(g)}(\ell,1)}
\end{split}
\end{equation*}
where for any odd prime $\ell$, we make the definitions
\begin{align*}
F_0(\ell)&:=1-\frac{\leg{-1}{\ell}\ell+3}{(\ell-1)^2(\ell+1)},\\
F_1^{(g)}(\ell,k)&:=\begin{cases}
1+\frac{\leg{(r^2-4g^2)/\ell^{2\nu_\ell(k)}}{\ell}}{\ell-\leg{(r^2-4g^2)/\ell^{2\nu_\ell(k)}}{\ell}}
&\text{if }2\nu_\ell(k)<\nu_\ell(m_K)
\text{ and }4g^2\equiv r^2\pmod{\ell^{2\nu_\ell(k)}},\\
1+\frac{1}{\ell(\ell+1)}
&\text{if }2\nu_\ell(k)\ge\nu_\ell(m_K)
\text{ and }4g^2\equiv r^2\pmod{\ell^{\nu_\ell(m_K)}}.\\
\end{cases}
\end{align*}
Substituting this back into~\eqref{ready to start factoring} and factoring the sum over $k$, we have
\begin{equation*}
\begin{split}
\mathfrak{c}_r^{(g)}
&=\frac{n_{K'}}{\varphi(m_K)}\prod_{\ell\nmid 2rm_K}F_0(\ell)
\prod_{\substack{\ell\mid m_K\\ \ell\nmid 2r}}F_1^{(g)}(\ell,1)\\
&\quad\times\prod_{\ell\nmid 2rm_K}
\left(1+
\sum_{e\ge 1}\frac{F_1(\ell,\ell^e)2^{\omega(\ell^e)}}{F_0(\ell)\ell^{2e}\varphi(\ell^e)}\right)
\prod_{\substack{\ell\nmid 2r\\ \ell\mid m_K}}
\left(1+\sum_{e\ge 1}
\frac{\left(1-\frac{1}{\ell}\right)\#C_g^{(\ell)}(r,1,1,\ell^e)F_1^{(g)}(\ell,\ell^e)}
{\ell^{2e}\varphi(\ell^e)F_1^{(g)}(\ell,1)}\right).
\end{split}
\end{equation*}
Using Lemma~\ref{count solutions mod ell} and the definitions of $F_0(\ell)$ and
$F_1^{(g)}(\ell,k)$ to simplify, we have proved~\eqref{C tau g part}.
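As a quick numerical sanity check (ours, not part of the proof): since $F_0(\ell)=1+O(1/\ell^2)$, the product $\prod_{\ell\nmid 2r\mathpzc{m}_K}F_0(\ell)$ converges absolutely. The short sketch below evaluates partial products of $F_0(\ell)$ over odd primes up to two cutoffs and confirms that they have stabilized to about three decimal places.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(2, n + 1) if sieve[p]]

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, valued in {-1, 0, 1}."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def F0(ell):
    # the local factor F_0(ell) defined in the text
    return 1 - (legendre(-1, ell) * ell + 3) / ((ell - 1) ** 2 * (ell + 1))

def partial_product(bound):
    return math.prod(F0(ell) for ell in primes_up_to(bound) if ell % 2 == 1)

p3 = partial_product(10 ** 3)
p4 = partial_product(10 ** 4)
print(p3, p4, abs(p4 - p3))  # the tail beyond 10^3 contributes O(10^-3)
```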
\section{\textbf{Pretentious and totally non-Abelian number fields.}}
\label{pretend and tna fields}
In this section, we give the definitions and basic properties of pretentious and
totally non-Abelian number fields.
\begin{defn}\label{tna}
We say that a number field $F$ is \textit{totally non-Abelian} if $F/\mathbb{Q}$ is Galois and
$\mathrm{Gal}(F/\mathbb{Q})$ is a perfect group, i.e., $\mathrm{Gal}(F/\mathbb{Q})$ is equal to its own commutator subgroup.
\end{defn}
Recall that a group is Abelian if and only if its commutator subgroup is trivial. Thus, in this sense,
perfect groups are as far away from being Abelian as possible.
However, we adopt the convention that the
trivial group is perfect, and so the trivial extension ($F=\mathbb{Q}$) is both Abelian and totally
non-Abelian.
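The perfectness condition in Definition~\ref{tna} can be checked mechanically for small groups. The sketch below (an illustration of the definition, not part of the paper) computes commutator subgroups of permutation groups by closing the set of commutators under composition; it confirms that $S_3$ is not perfect (its commutator subgroup is $A_3$, of order $3$), whereas $A_5$ equals its own commutator subgroup.

```python
from itertools import permutations

def compose(f, g):
    # (f o g)(i) = f(g(i)); permutations stored as tuples
    return tuple(f[g[i]] for i in range(len(f)))

def inverse(f):
    inv = [0] * len(f)
    for i, v in enumerate(f):
        inv[v] = i
    return tuple(inv)

def is_even(p):
    # parity via inversion count
    n = len(p)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return inversions % 2 == 0

def commutator_subgroup(G):
    # start from all commutators [a,b] = a b a^{-1} b^{-1} ...
    H = {compose(compose(a, b), compose(inverse(a), inverse(b)))
         for a in G for b in G}
    # ... then close under composition (the set already contains 1 and inverses)
    frontier = set(H)
    while frontier:
        new = {compose(x, y) for x in frontier for y in H} - H
        H |= new
        frontier = new
    return H

S3 = set(permutations(range(3)))
A5 = {p for p in permutations(range(5)) if is_even(p)}
print(len(commutator_subgroup(S3)), len(commutator_subgroup(A5)))
```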
The following proposition follows easily from basic group theory and the
Kronecker-Weber Theorem~\cite[p.~210]{Lan:1994}.
\begin{prop}\label{tna char}
Let $F$ be a number field. Then $F$ is totally non-Abelian if and only if $F$ is linearly disjoint
from every cyclotomic field, i.e., $F\cap\mathbb{Q}(\zeta_q)=\mathbb{Q}$ for every $q\ge 1$.
\end{prop}
\begin{defn}\label{pretentious field}
Let $f$ be a positive integer.
We say that a number field $F$ is $f$-\textit{pretentious} if there exists a finite list of congruence
conditions $\mathscr{L}$ such that, apart from a density zero subset of exceptions,
every rational prime $p$ splits into degree $f$ primes in $F$ if and only if $p$ satisfies a
congruence on the list $\mathscr{L}$.
\end{defn}
If $F$ is a Galois extension and $f\nmid\mathpzc{n}_F$, then no rational prime may split into degree $f$
primes in $F$. In this case, we say that $F$ is ``vacuously'' $f$-pretentious. In this sense, we say
the trivial extension ($F=\mathbb{Q}$) is $f$-pretentious for every $f\ge 1$.
The term pretentious is meant to call to mind the notion that such number fields ``pretend" to be
Abelian over $\mathbb{Q}$, at least insofar as their degree $f$ primes are
concerned.
Indeed, one can prove that the
$1$-pretentious number fields are precisely the Abelian extensions of $\mathbb{Q}$, and every
Abelian extension is $f$-pretentious for every $f\ge 1$ (being vacuously $f$-pretentious for
every $f$ not dividing the degree of the extension). The smallest non-Abelian group to be the Galois
group of a $2$-pretentious extension of $\mathbb{Q}$ is the symmetric group
$S_3:=\langle r,s : |r|=3, s^2=1, rs=sr^{-1}\rangle$.
The smallest groups that cannot be the Galois group of a $2$-pretentious extension of $\mathbb{Q}$
are the dihedral group $D_4:=\langle r,s : |r|=4,s^2=1,rs=sr^{-1}\rangle$ and the quaternion group $Q_8:=\langle -1,i,j,k : (-1)^2=1, i^2=j^2=k^2=ijk=-1\rangle$.
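To make the claim that Abelian extensions are $1$-pretentious concrete, consider the simplest non-trivial example (our illustration, not from the paper): an odd prime $p$ splits into degree one primes in $\mathbb{Q}(i)$ exactly when $-1$ is a square modulo $p$, which by Euler's criterion happens exactly when $p\equiv 1\pmod 4$, a single congruence condition. The sketch below verifies this equivalence for all odd primes up to $5000$.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(2, n + 1) if sieve[p]]

def splits_in_Qi(p):
    # an odd prime p splits into two degree-one primes in Q(i)
    # iff -1 is a square mod p; Euler's criterion computes (-1/p)
    return pow(p - 1, (p - 1) // 2, p) == 1

congruence_matches = all(splits_in_Qi(p) == (p % 4 == 1)
                         for p in primes_up_to(5000) if p > 2)
print(congruence_matches)  # True
```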
\begin{prop}\label{2-pretentious char}
Suppose that $F$ is a $2$-pretentious Galois extension of $\mathbb{Q}$, and let $F'$ denote the
fixed field of the commutator subgroup of $\mathrm{Gal}(F/\mathbb{Q})$. Let $\tau$ be an order two element
of $\mathrm{Gal}(F'/\mathbb{Q})$, and let $\mathcal{C}_\tau$ be the subset of order two elements of $G=\mathrm{Gal}(F/\mathbb{Q})$ whose restriction to $F'$ is equal to $\tau$.
Then for any rational prime $p$ that does not ramify in $F$, we have that
$\leg{F'/\mathbb{Q}}{p}=\tau$ if and only if $p\equiv g\pmod{\mathpzc{m}_F}$ for some $g\in\mathcal{S}_\tau$
if and only if $\leg{F/\mathbb{Q}}{p}\subseteq \mathcal{C}_\tau$.
\end{prop}
\begin{proof}
In Section~\ref{cheb for composites}, we saw that the first equivalence holds. Indeed, this is
the definition of $\mathcal{S}_\tau$. Furthermore, if $\leg{F/\mathbb{Q}}{p}\subseteq \mathcal{C}_\tau$, then
$\leg{F'/\mathbb{Q}}{p}=\left.\leg{F/\mathbb{Q}}{p}\right|_{F'}=\tau$, and so $p\equiv g\pmod{\mathpzc{m}_F}$ for
some $g\in\mathcal{S}_\tau$. Thus, it remains to show that if $p\equiv g\pmod{\mathpzc{m}_F}$ for some
$g\in\mathcal{S}_\tau$, then $\leg{F/\mathbb{Q}}{p}\subseteq \mathcal{C}_\tau$.
Since $F$ is $2$-pretentious, there exists a finite list of congruences $\mathscr{L}$ that
determines, apart from a density zero subset of exceptions, which rational primes split into degree
two primes in $F$. Lifting congruences, if necessary, we may assume that all of the congruences
on the list $\mathscr{L}$ have the same modulus, say $m$.
Lifting congruences again, if necessary, we may assume that $\mathpzc{m}_F\mid m$.
Since $\mathpzc{m}_F\mid m$, it follows that $\mathbb{Q}(\zeta_{m})\cap F=F'$ by definition of $F'$.
As noted in Section~\ref{cheb for composites}, the extension $F(\zeta_m)/\mathbb{Q}$ is Galois with
group
\begin{equation}\label{Gal group as fibered prod}
\mathrm{Gal}\left(F(\zeta_m)/\mathbb{Q}\right)\cong
\left\{(\sigma_1,\sigma_2)\in\mathrm{Gal}(\mathbb{Q}(\zeta_m)/\mathbb{Q})\times G:
\left.\sigma_1\right|_{F'}=\left.\sigma_2\right|_{F'}\right\}.
\end{equation}
Let $\varpi: \mathrm{Gal}(F/\mathbb{Q})\rightarrow\mathrm{Gal}(F'/\mathbb{Q})$ be the natural projection given by restriction of
automorphisms. We first show that $[F:F']=\#\ker\varpi$ is odd, which allows us to deduce that
$\mathcal{C}_\tau$ is not empty. For each $\sigma\in G=\mathrm{Gal}(F/\mathbb{Q})$, we let
$C_\sigma$ denote the conjugacy class of $\sigma$ in $G$. We note
that~\eqref{Gal group as fibered prod} and the Chebotar\"ev Density Theorem together imply
that for each $\sigma\in\ker\varpi$ the density of primes $p$ such that
$p\equiv 1\pmod m$ and $\leg{F/\mathbb{Q}}{p}=C_\sigma$ is equal to
$\frac{\#C_\sigma}{\varphi_F(m)\mathpzc{n}_F}=\frac{\mathpzc{n}_{F'}\#C_\sigma}{\varphi(m)\mathpzc{n}_F}>0$.
In particular, the trivial automorphism $1_F\in\ker\varpi$, and so it follows by definition of
$2$-pretentious that at most a density zero subset of the $p\equiv 1\pmod m$ may split into degree
two primes in $F$. However, if $[F:F']=\#\ker\varpi$ were even, then $\ker\varpi$ would contain
an element $\sigma$ of order $2$, and the same argument with $\sigma$ replacing $1_F$
would imply that there is a positive density of $p\equiv 1\pmod m$ that split into degree two primes in $F$.
Therefore, we conclude that $[F:F']$ is odd.
Now letting $\sigma$ be any element of $G$ such that $\varpi(\sigma)=\left.\sigma\right|_{F'}=\tau$,
we find that $\sigma^{[F:F']}\in\mathcal{C}_\tau$, and so $\mathcal{C}_\tau$ is not empty.
Finally, let $g\in\mathcal{S}_\tau$ be arbitrarily chosen, and let $a$ be any integer such that
$a\equiv g\pmod{\mathpzc{m}_F}$. Again using~\eqref{Gal group as fibered prod} and the Chebotar\"ev
Density Theorem, we see that the density of rational primes $p$ satisfying the two conditions
$p\equiv a\pmod{m}$ and $\leg{F/\mathbb{Q}}{p}\subseteq \mathcal{C}_\tau$ is equal to
$\#\mathcal{C}_\tau/\varphi_F(m)\mathpzc{n}_F>0$.
Since every such prime must split into degree two primes in $F$ and since $a$ was an
arbitrary integer satisfying the condition $a\equiv g\pmod{\mathpzc{m}_F}$, it follows from the definition of
$2$-pretentious that, apart from a density zero subset of exceptions, every rational prime
$p\equiv g\pmod{\mathpzc{m}_F}$ must split into degree two primes in $F$.
Therefore, if $p$ is any rational prime not ramifying in $F$ and satisfying the congruence
condition $p\equiv g\pmod{\mathpzc{m}_F}$, then $\leg{F'/\mathbb{Q}}{p}={\tau}$ and $\leg{F/\mathbb{Q}}{p}=C'$ for some conjugacy class $C'$ of order two elements of $G$. Hence, it follows that
$\leg{F/\mathbb{Q}}{p}=C'\subseteq \mathcal{C}_\tau$.
\end{proof}
\section{\textbf{Proofs of Theorems~\ref{main thm 1} and~\ref{main thm 2}.}}
\label{resolution}
In this section, we give the proof of Theorem~\ref{main thm 1} and sketch
the alteration in strategy that gives the proof of Theorem~\ref{main thm 2}.
The main tool in this section is a certain variant of the classical
Barban-Davenport-Halberstam Theorem.
The setup is as follows. Let $F/F_0$ be a Galois extension of number fields, let
$C$ be any subset of $\mathpzc mathrm{Gal}(F/F_0)$ that is closed under conjugation, and for any pair
of integers $q$ and $a$, define
\begin{equation*}
\theta_{F/F_0}(x;C,q,a)
:=\sum_{\substack{\mathbf{N}\mathfrak{p}\le x\\ \deg\mathfrak{p}=1\\ \leg{F/F_0}{\mathfrak{p}}\subseteq C\\ \mathbf{N}\mathfrak{p}\equiv a\pmod q}}\log\mathbf{N}\mathfrak{p},
\end{equation*}
where the sum is taken over the degree one prime ideals $\mathfrak{p}$ of $F_0$.
If $F_0(\zeta_q)\cap F=F_0$, it follows from the ideas discussed in
Section~\ref{cheb for composites} that
\begin{equation*}
\theta_{F/F_0}(x;C,q,a)\sim\frac{\mathpzc{n}_{F_0}\#C}{\mathpzc{n}_F\varphi_{F_0}(q)}x
\end{equation*}
whenever $a\in G_{F_0,q}$.
The following is a restatement of the main result of~\cite{Smi:2011}.
\begin{thm}\label{bdh variant}
Let $M>0$.
If $x(\log x)^{-M}\le Q\le x$, then
\begin{equation}
\sum_{q\le Q}\strut^\prime\sum_{a\in G_{F_0,q}}\left(\theta_{F/F_0}(x;C,q,a)
-\frac{\mathpzc{n}_{F_0}\#C}{\mathpzc{n}_{F}\varphi_{F_0}(q)}x\right)^2
\ll xQ\log x,
\end{equation}
where the prime on the outer summation indicates that
the sum is to be restricted to those $q\le Q$ satisfying $F\cap F_0(\zeta_q)=F_0$.
The constant implied by the symbol $\ll$ depends on $F$ and $M$.
\end{thm}
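For intuition (our illustration, not part of the paper), one can watch the variance bounded in Theorem~\ref{bdh variant} numerically in the classical case $F=F_0=\mathbb{Q}$ and $C=\{1\}$, where $\theta_{F/F_0}(x;C,q,a)$ reduces to Chebyshev's function $\theta(x;q,a)=\sum_{p\le x,\,p\equiv a\,(q)}\log p$ and the main term is $x/\varphi(q)$. The ranges and tolerance below are illustrative choices, far from the asymptotic regime.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(2, n + 1) if sieve[p]]

def bdh_variance(x, Q):
    """Sum over q <= Q and reduced residues a of (theta(x;q,a) - x/phi(q))^2."""
    ps = primes_up_to(x)
    total = 0.0
    for q in range(2, Q + 1):
        classes = [a for a in range(q) if math.gcd(a, q) == 1]
        theta = dict.fromkeys(classes, 0.0)
        for p in ps:
            r = p % q
            if r in theta:
                theta[r] += math.log(p)
        main_term = x / len(classes)  # x / phi(q)
        total += sum((t - main_term) ** 2 for t in theta.values())
    return total

x, Q = 3000, 100
ratio = bdh_variance(x, Q) / (x * Q * math.log(x))
print(ratio)  # should be bounded, consistent with the << x Q log x shape
```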
\begin{proof}[Proof of Theorem~\ref{main thm 1}.]
In light of Theorem~\ref{main thm 0}, it suffices to show that
\begin{equation*}
\mathcal{E}_K(x; x/(\log x)^{12},\mathcal{C}_\tau)\ll\frac{x^2}{(\log x)^{11}}
\end{equation*}
for every element $\tau$ of order dividing two in $\mathrm{Gal}(K'/\mathbb{Q})$.
By assumption, we may decompose the field $K$ as a disjoint compositum, writing
$K=K_1K_2$, where $K_1\cap K_2=\mathbb{Q}$,
$K_1$ is a $2$-pretentious Galois extension of $\mathbb{Q}$, and $K_2$ is totally non-Abelian.
Let $G_1, G_2$ denote the Galois groups of $K_1/\mathbb{Q}$ and $K_2/\mathbb{Q}$, respectively.
We identify the Galois group $G=\mathrm{Gal}(K/\mathbb{Q})$ with $G_1\times G_2$.
Since $K_2$ is totally non-Abelian, it follows that $G'=G_1'\times G_2$, and hence $K'=K_1'$
and $\mathpzc{m}_K=\mathpzc{m}_{K_1}$.
Let $C_{2,2}$ denote the subset of all order two elements in $G_2$ and let
$C_{1,\tau}$ denote the subset of elements in $G_1$ whose restriction to $K'$ is equal to
$\tau$. Recalling that every element of $\mathcal{C}_\tau$ must have order two in $G$,
we find that under the identification $G=G_1\times G_2$, we have
\begin{equation*}
\mathcal{C}_\tau=\{1\}\times C_{2,2}
\end{equation*}
if $|\tau|=1$ and
\begin{equation*}
\mathcal{C}_\tau= C_{1,\tau}\times \left(C_{2,2}\cup\{1\}\right)
\end{equation*}
if $|\tau|=2$. Here we have used Proposition~\ref{2-pretentious char} with $F=K_1$
and the fact that $K'=K_1'$.
We now break into cases depending on whether $\tau\in\mathrm{Gal}(K'/\mathbb{Q})$ is trivial or not.
First, suppose that $\tau$ is trivial.
Then for each $a\in(\mathbb{Z}/q\mathpzc{m}_K\mathbb{Z})^\times$ such that $a\equiv b\pmod{\mathpzc{m}_K}$ for some
$b\in \mathcal{S}_\tau$, we have
\begin{equation*}
\begin{split}
\theta(x;\mathcal{C}_\tau,q\mathpzc{m}_K,a)-\frac{\#\mathcal{C}_\tau}{\mathpzc{n}_K\varphi_K(q\mathpzc{m}_K)}x
&=\sum_{\substack{p\le x\\ p\equiv a\pmod{q\mathpzc{m}_K}\\ \leg{K/\mathbb{Q}}{p}\subseteq \mathcal{C}_\tau}}\log p
-\frac{\#\mathcal{C}_\tau}{\mathpzc{n}_K\varphi_K(q\mathpzc{m}_K)}x\\
&=\frac{1}{\mathpzc{n}_{K_1}}
\sum_{\substack{\mathbf{N}\mathfrak{p}\le x\\ \deg\mathfrak{p}=1\\ \mathbf{N}\mathfrak{p}\equiv a\pmod{q\mathpzc{m}_K}\\ \leg{K/K_1}{\mathfrak{p}}\subseteq C_{2,2}}}\log\mathbf{N}\mathfrak{p}
-\frac{\#C_{2,2}}{\mathpzc{n}_{K}\varphi_{K_1}(q\mathpzc{m}_{K_1})}x\\
&=\frac{1}{\mathpzc{n}_{K_1}}\left(\theta_{K/K_1}(x;C_{2,2},q\mathpzc{m}_{K_1},a)
-\frac{\mathpzc{n}_{K_1}\#C_{2,2}}{\mathpzc{n}_{K}\varphi_{K_1}(q\mathpzc{m}_{K_1})}x\right).
\end{split}
\end{equation*}
Thus, we have that
\begin{equation*}
\mathcal{E}_K(x; x/(\log x)^{12},\mathcal{C}_\tau)
=\frac{1}{\mathpzc{n}_{K_1}^2}\sum_{q\le\frac{x}{(\log x)^{12}}}
\sum_{a\in G_{K_1,q\mathpzc{m}_K}}
\left(\theta_{K/K_1}(x;C_{2,2},q\mathpzc{m}_{K_1},a)
-\frac{\mathpzc{n}_{K_1}\#C_{2,2}}{\mathpzc{n}_{K}\varphi_{K_1}(q\mathpzc{m}_{K_1})}x\right)^2.
\end{equation*}
We note that $K_1(\zeta_{q\mathpzc{m}_K})\cap K=K_1$ for all $q\ge 1$ since $K_2$ is totally
non-Abelian.
Hence, the result follows for this case by applying Theorem~\ref{bdh variant} with $F_0=K_1$
and $F=K$.
Now, suppose that $|\tau|=2$. Then the condition $\leg{K/\mathbb{Q}}{p}\subseteq \mathcal{C}_\tau$ is
equivalent to the two conditions $\leg{K_1/\mathbb{Q}}{p}\subseteq C_{1,\tau}$ and
$\leg{K_2/\mathbb{Q}}{p}\subseteq C_{2,2}\cup\{1\}$. Using Proposition~\ref{2-pretentious char}
and the fact that $K_1'=K'$, this is equivalent to the two conditions
$p\equiv b\pmod{\mathpzc{m}_{K}}$ for some $b\in\mathcal{S}_\tau$
and $\leg{K_2/\mathbb{Q}}{p}\subseteq C_{2,2}\cup\{1\}$. Hence,
for each $a\in(\mathbb{Z}/q\mathpzc{m}_K\mathbb{Z})^\times$ such that $a\equiv b\pmod{\mathpzc{m}_K}$ for some $b\in \mathcal{S}_\tau$,
we have
\begin{equation*}
\theta(x;\mathcal{C}_\tau,q\mathpzc{m}_K,a)-\frac{\#\mathcal{C}_\tau}{\mathpzc{n}_K\varphi_K(q\mathpzc{m}_K)}x
=\theta_{K_2/\mathbb{Q}}(x;C_{2,2}\cup\{1\},q\mathpzc{m}_K,a)
-\frac{1+\#C_{2,2}}{\mathpzc{n}_{K_2}\varphi(q\mathpzc{m}_K)}x
\end{equation*}
as
\begin{equation*}
\frac{\#C_{1,\tau}}{\mathpzc{n}_{K_1}\varphi_{K_1}(q\mathpzc{m}_K)}
=\frac{\mathpzc{n}_{K_1}/\mathpzc{n}_{K_1'}}{\mathpzc{n}_{K_1}\varphi_{K_1}(q\mathpzc{m}_K)}
=\frac{1}{\varphi(q\mathpzc{m}_K)}.
\end{equation*}
Thus, we have that
\begin{equation*}
\mathcal{E}_K(x; x/(\log x)^{12},\mathcal{C}_\tau)
=\sum_{q\le\frac{x}{(\log x)^{12}}}
\sum_{a\in (\mathbb{Z}/q\mathpzc{m}_K\mathbb{Z})^\times}\left(\theta_{K_2/\mathbb{Q}}(x;C_{2,2}\cup\{1\},q\mathpzc{m}_K,a)
-\frac{1+\#C_{2,2}}{\mathpzc{n}_{K_2}\varphi(q\mathpzc{m}_K)}x\right)^2.
\end{equation*}
Here, as well, we have that $\mathbb{Q}(\zeta_{q\mathpzc{m}_K})\cap K_2=\mathbb{Q}$ for all $q\ge 1$ because
$K_2$ is totally non-Abelian. Hence,
the result follows for this case by applying Theorem~\ref{bdh variant} with $F_0=\mathbb{Q}$
and $F=K_2$.
\end{proof}
\begin{proof}[Proof Sketch of Theorem~\ref{main thm 2}]
In order to obtain this result, we change our strategy from the proof of Theorem~\ref{main thm 0}
slightly. In particular, if $K'$ is ramified only at primes which divide $2r$, then it follows that
$\mathbb{Q}(\zeta_q)\cap K=\mathbb{Q}$ whenever $(q,2r)=1$. Therefore, we go back to
equation~\eqref{truncation at U,V complete} in the proof of Proposition~\ref{avg of l-series prop}
and apply the Chebotar\"ev Density Theorem immediately. Then we use Cauchy-Schwarz and
Theorem~\ref{bdh variant} to bound the error in this approximation.
\end{proof}
\section{\textbf{Proofs of Lemmas.}}\label{proofs of lemmas}
\begin{proof}[Proof of Lemma~\ref{truncated constant lemma}.]
It suffices to show that
\begin{equation*}
\mathfrak{c}_r^{(g)}
=\sum_{\substack{k\le V\\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{n\le U\\ (n,2r)=1}}\frac{1}{n\varphi_K(\mathpzc{m}_Knk^2)}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\#C_{g}(r,a,n,k)
+O\left(\frac{1}{\sqrt U}+\frac{\log V}{V^2}\right)
\end{equation*}
for each $g\in\mathcal{S}_\tau$, where $\mathfrak{c}_r^{(g)}$ is defined
by~\eqref{g part of const}.
We note that since $K$ is a fixed number field, it follows that $\mathpzc{m}_K$ is fixed.
Thus, using Lemma~\ref{count solutions mod ell}, Lemma~\ref{compute little c}, and
equation~\eqref{deg of comp extn}, we have that
\begin{equation}\label{bounding tail of the constant}
\begin{split}
\mathfrak{c}_r^{(g)}&-
\sum_{\substack{k\le V\\ (k,2r)=1}}\frac{1}{k}
\sum_{\substack{n\le U\\ (n,2r)=1}}\frac{1}{n\varphi_K(\mathpzc{m}_Knk^2)}
\sum_{a\in (\mathbb{Z}/n\mathbb{Z})^\times}\leg{a}{n}
\#C_{g}(r,a,n,k)\\
&\ll\sum_{\substack{k\le V\\ (k,2r)=1}}\frac{\prod_{\ell\mid k}\#C_g^{(\ell)}(r,1,1,k)}{k^2\varphi(k)}
\sum_{\substack{n> U\\ (n,2r)=1}}
\frac{c_k(n)}{n\varphi(n)\prod_{\ell\mid (n,k)}\#C_{g}^{(\ell)}(r,1,1,k)}\\
&\quad+\sum_{\substack{k> V\\ (k,2r)=1}}\frac{\prod_{\ell\mid k}\#C_g^{(\ell)}(r,1,1,k)}{k^2\varphi(k)}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty
\frac{c_k(n)}{n\varphi(n)\prod_{\ell\mid (n,k)}\#C_{g}^{(\ell)}(r,1,1,k)}\\
&\ll
\sum_{\substack{k\le V\\ (k,2r)=1}}\frac{\prod_{\ell\mid k}\#C_g^{(\ell)}(r,1,1,k)}{k^2\varphi(k)}
\sum_{\substack{n> U\\ (n,2r)=1}}
\frac{1}{\kappa_{\mathpzc{m}_K}(n)\varphi(n)}\\
&\quad+
\sum_{\substack{k> V\\ (k,2r)=1}}\frac{\prod_{\ell\mid k}\#C_g^{(\ell)}(r,1,1,k)}{k^2\varphi(k)}
\sum_{\substack{n=1\\ (n,2r)=1}}^\infty
\frac{1}{\kappa_{\mathpzc{m}_K}(n)\varphi(n)},
\end{split}
\end{equation}
where for any integer $N$, $\kappa_{N}(n)$ is the multiplicative function defined by~\eqref{defn of kappa}.
In~\cite[p.~175]{DP:1999}, we find the bound
\begin{equation*}
\sum_{n>U}\frac{1}{\kappa_1(n)\varphi(n)}\ll\frac{1}{\sqrt U}.
\end{equation*}
Therefore,
\begin{equation*}
\begin{split}
\sum_{n>U}\frac{1}{\kappa_{\mathpzc{m}_K}(n)\varphi(n)}
&=\sum_{\substack{mn>U\\ (n,\mathpzc{m}_K)=1\\ \ell\mid m\Rightarrow\ell\mid \mathpzc{m}_K}}
\frac{1}{\kappa_1(n)\varphi(n)\varphi(m)}
\le\sum_{\substack{m\ge 1\\ \ell\mid m\Rightarrow\ell\mid\mathpzc{m}_K}}\frac{1}{\varphi(m)}
\sum_{n>U/m}\frac{1}{\kappa_1(n)\varphi(n)}\\
&\ll\frac{1}{\sqrt U}
\sum_{\substack{m\ge 1\\ \ell\mid m\Rightarrow\ell\mid\mathpzc{m}_K}}\frac{\sqrt m}{\varphi(m)}
=\frac{1}{\sqrt U}\prod_{\ell\mid\mathpzc{m}_K}\left(1+\frac{\ell}{(\ell-1)(\sqrt\ell-1)}\right)\\
&\ll\frac{1}{\sqrt U}.
\end{split}
\end{equation*}
Similarly, using Lemma~\ref{count solutions mod ell}, we have that
\begin{equation*}
\begin{split}
\sum_{\substack{k> V\\ (k,2r)=1}}\frac{\prod_{\ell\mid k}\#C_g^{(\ell)}(r,1,1,k)}{k^2\varphi(k)}
&\le\sum_{\substack{m\ge 1\\ \ell\mid m\Rightarrow \ell\mid\mathpzc{m}_K}}
\frac{\mathpzc{m}_K}{m^2\varphi(m)}
\sum_{\substack{k>V/m\\ (k,2r\mathpzc{m}_K)=1}}\frac{2^{\omega(k)}}{k^2\varphi(k)}\\
&\ll\sum_{\substack{m\ge 1\\ \ell\mid m\Rightarrow \ell\mid\mathpzc{m}_K}}
\frac{\log(V/m)}{m^2\varphi(m)(V/m)^2}\\
&\le\frac{\log V}{V^2}\sum_{\substack{m\ge 1\\ \ell\mid m\Rightarrow \ell\mid\mathpzc{m}_K}}
\frac{1}{\varphi(m)}\\
&=\frac{\log V}{V^2}\prod_{\ell\mid\mathpzc{m}_K}\left(1+\frac{\ell}{(\ell-1)^2}\right)\\
&\ll\frac{\log V}{V^2}
\end{split}
\end{equation*}
as
\begin{equation*}
\sum_{k>V}\frac{2^{\omega(k)}}{k^2\varphi(k)}
=\int_V^\infty\frac{\mathrm{d} N_0(t)}{t^3}
\ll\frac{\log V}{V^2},
\end{equation*}
where
\begin{equation*}
N_0(t):=\sum_{k\le t}\frac{k^32^{\omega(k)}}{k^2\varphi(k)}
\ll\frac{t}{\log t}\sum_{k\le t}\frac{k^32^{\omega(k)}/k^2\varphi(k)}{k}
\ll\frac{t}{\log t}\exp\left\{\sum_{\ell\le t}\frac{2}{\ell-1}\right\}
\ll t\log t.
\end{equation*}
Substituting these bounds into~\eqref{bounding tail of the constant} finishes the proof of the lemma.
\end{proof}
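The bound $N_0(t)\ll t\log t$ above is easy to probe numerically (our sanity check, not part of the proof): the sketch below computes $N_0(t)=\sum_{k\le t}k\,2^{\omega(k)}/\varphi(k)$ with a smallest-prime-factor sieve and checks that the ratio $N_0(t)/(t\log t)$ stays of moderate size; the numeric tolerance is an illustrative choice.

```python
import math

def omega_and_phi(n_max):
    """Tabulate omega(k) and phi(k) for k <= n_max via a smallest-prime-factor sieve."""
    spf = list(range(n_max + 1))
    for p in range(2, int(n_max ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, n_max + 1, p):
                if spf[m] == m:
                    spf[m] = p
    omega = [0] * (n_max + 1)
    phi = [1] * (n_max + 1)
    for k in range(2, n_max + 1):
        n, om, ph = k, 0, 1
        while n > 1:
            p, e = spf[n], 0
            while n % p == 0:
                n //= p
                e += 1
            om += 1
            ph *= (p - 1) * p ** (e - 1)
        omega[k], phi[k] = om, ph
    return omega, phi

t = 10_000
omega, phi = omega_and_phi(t)
N0 = sum(k * 2 ** omega[k] / phi[k] for k in range(1, t + 1))
ratio = N0 / (t * math.log(t))
print(ratio)  # stays O(1), consistent with N_0(t) << t log t
```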
\begin{proof}[Proof of Lemma~\ref{compute little c}.]
The multiplicativity of $c_k(n)$ follows easily by the Chinese Remainder Theorem.
We now compute $c_k(n)$ when $n=\ell^e$ is a prime power and $\ell\nmid 2r$.
If $\ell\nmid\mathpzc{m}_K$, then by Lemma~\ref{count solutions mod ell},
\begin{equation}\label{compute c when ell dnd m}
\begin{split}
c_k(\ell^e)
&=\sum_{\substack{a\in (\mathbb{Z}/\ell^e\mathbb{Z})^\times\\ (r^2-ak^2,\ell^e)=1}}\leg{a}{\ell}^e
\# C_{g}^{(\ell)}(r,a,\ell^e,k)\\
&=\ell^{e-1}\sum_{a\in(\mathbb{Z}/\ell\mathbb{Z})^\times}
\leg{a}{\ell}^e\leg{r^2-ak^2}{\ell}^2\left[1+\leg{r^2-ak^2}{\ell}\right]\\
&=\ell^{e-1}\sum_{a\in\mathbb{Z}/\ell\mathbb{Z}}\leg{a}{\ell}^e\left[\leg{r^2-ak^2}{\ell}^2+\leg{r^2-ak^2}{\ell}\right].
\end{split}
\end{equation}
If $\ell\mid k$, then this last expression gives
\begin{equation*}
\frac{c_k(\ell^e)}{\ell^{e-1}}
=2\sum_{a\in\mathbb{Z}/\ell\mathbb{Z}}\leg{a}{\ell}^e
=\#C_g^{(\ell)}(r,1,1,k)\begin{cases}
\ell-1&\text{if }2\mid e,\\
0&\text{if }2\nmid e
\end{cases}
\end{equation*}
as $(k,r)=1$.
If $\ell\nmid k$, then~\eqref{compute c when ell dnd m} gives
\begin{equation*}
\begin{split}
\frac{c_k(\ell^e)}{\ell^{e-1}}
&=\sum_{a\in\mathbb{Z}/\ell\mathbb{Z}}\leg{a}{\ell}^e\left[\leg{r^2-a}{\ell}^2+\leg{r^2-a}{\ell}\right]\\
&=\sum_{b\in\mathbb{Z}/\ell\mathbb{Z}}\leg{r^2-b}{\ell}^e\left[\leg{b}{\ell}^2+\leg{b}{\ell}\right]\\
&=\begin{cases}
\ell-3&\text{if }
2\mid e,\\
-\left(1+\leg{-1}{\ell}\right)&\text{if }
2\nmid e.
\end{cases}
\end{split}
\end{equation*}
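The closed-form evaluation of this character sum can be verified directly for small primes (our check, not part of the proof): for an odd prime $\ell\nmid 2r$, the sum $\sum_{b}\leg{r^2-b}{\ell}^e\big[\leg{b}{\ell}^2+\leg{b}{\ell}\big]$ should equal $\ell-3$ for even $e$ and $-\big(1+\leg{-1}{\ell}\big)$ for odd $e$, independently of $r$.

```python
def legendre(a, p):
    # Legendre symbol (a/p) in {-1, 0, 1} via Euler's criterion
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def char_sum(ell, r, e):
    # sum over b mod ell of (r^2-b / ell)^e * [ (b/ell)^2 + (b/ell) ]
    return sum(legendre(r * r - b, ell) ** e
               * (legendre(b, ell) ** 2 + legendre(b, ell))
               for b in range(ell))

ok = all(
    char_sum(ell, r, 2) == ell - 3
    and char_sum(ell, r, 1) == -(1 + legendre(-1, ell))
    for ell in (3, 5, 7, 11, 13, 17, 19)
    for r in range(1, ell)
)
print(ok)  # True
```

The independence of $r$ is visible in the code as well: substituting $b=r^2c$ multiplies each term by $\leg{r^2}{\ell}^e=1$.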
Now, we consider the cases when $\ell\mid\mathpzc{m}_K$.
First, suppose that $1\le\nu_\ell(\mathpzc{m}_K)\le 2\nu_\ell(k)$.
Then as $\nu_\ell(\mathpzc{m}_K)\le 2\nu_\ell(k)<e+2\nu_\ell(k)=\nu_\ell(nk^2)$, we have that
$4g^2\equiv r^2-ak^2\pmod{\ell^{\nu_\ell(\mathpzc{m}_K)}}$ if and only if
$4g^2\equiv r^2\pmod{\ell^{\nu_\ell(\mathpzc{m}_K)}}$. Therefore,
\begin{equation*}
\begin{split}
\#C_g^{(\ell)}(r,a,\ell^e,k)
&=\begin{cases}
\ell^{\nu_\ell(\mathpzc{m}_K)}&\text{if }4g^2\equiv r^2\pmod{\ell^{\nu_\ell(\mathpzc{m}_K)}},\\
0&\text{otherwise},
\end{cases}\\
&=\#C_g^{(\ell)}(r,1,1,k)
\end{split}
\end{equation*}
for all $a\in(\mathbb{Z}/\ell^e\mathbb{Z})^\times$.
Since $\ell\mid k$ and $(k,r)=1$, it follows that $\ell\nmid r^2-ak^2$ for all $a\in\mathbb{Z}/\ell^e\mathbb{Z}$.
Whence, in this case,
\begin{equation*}
\begin{split}
\frac{c_k(\ell^e)}{\ell^{e-1}}
&=\frac{1}{\ell^{e-1}}
\sum_{\substack{a\in\mathbb{Z}/\ell^e\mathbb{Z}\\ (r^2-ak^2,\ell)=1}}\leg{a}{\ell}^e\#C_{g}^{(\ell)}(r,a,\ell^e,k)\\
&=\#C_g^{(\ell)}(r,1,1,k)\sum_{a\in\mathbb{Z}/\ell\mathbb{Z}}\leg{a}{\ell}^e\\
&=\#C_g^{(\ell)}(r,1,1,k)\begin{cases}
\ell-1&\text{if }2\mid e,\\
0&\text{if }2\nmid e.
\end{cases}
\end{split}
\end{equation*}
Now, suppose that $2\nu_\ell(k)<\nu_\ell(\mathpzc{m}_K)$. We write $k=\ell^{\nu_\ell(k)} k_\ell$ with
$(\ell, k_\ell)=1$, and let $t=\min\{\nu_\ell(\mathpzc{m}_K),e+2\nu_\ell(k)\}$. Then $t>2\nu_\ell(k)$ and
$4g^2\equiv r^2-ak^2\pmod{\ell^t}$ if and only if
$ak_\ell^2\equiv\frac{r^2-4g^2}{\ell^{2\nu_\ell(k)}}\pmod{\ell^{t-2\nu_\ell(k)}}$.
Combining this information with Lemma~\ref{count solutions mod ell}, we have that
\begin{equation*}
\#C_g^{(\ell)}(r,a,\ell^e,k)=\begin{cases}
\ell^t&\text{if }\ell^{2\nu_\ell(k)}\mid r^2-4g^2
\text{ and } ak_\ell^2\equiv\frac{r^2-4g^2}{\ell^{2\nu_\ell(k)}}\pmod{\ell^{t-2\nu_\ell(k)}},\\
0&\text{otherwise}.
\end{cases}
\end{equation*}
In particular, we see that $c_k(\ell^e)=0$ if $r^2\not\equiv 4g^2\pmod{\ell^{2\nu_\ell(k)}}$.
Suppose that $r^2\equiv 4g^2\pmod{\ell^{2\nu_\ell(k)}}$.
Since $(g,\mathpzc{m}_K)=1$ and $\ell\mid\mathpzc{m}_K$, we have that
\begin{equation*}
\begin{split}
c_k(\ell^e)
&=\sum_{\substack{a\in\mathbb{Z}/\ell^e\mathbb{Z}\\ (r^2-ak^2,\ell)=1}}\leg{a}{\ell}^e\#C_{g}^{(\ell)}(r,a,\ell^e,k)\\
&=\sum_{\substack{a\in\mathbb{Z}/\ell^e\mathbb{Z}\\ ak^2\not\equiv r^2\pmod{\ell}\\ ak^2\equiv r^2-4g^2\pmod{\ell^t}}}
\leg{a}{\ell}^e\ell^t\\
&=\sum_{\substack{a\in\mathbb{Z}/\ell^e\mathbb{Z}\\ ak^2\equiv r^2-4g^2\pmod{\ell^t}}}
\leg{a}{\ell}^e\ell^t\\
&=\sum_{\substack{a\in\mathbb{Z}/\ell^e\mathbb{Z}\\
ak_\ell^2\equiv\frac{r^2-4g^2}{\ell^{2\nu_\ell(k)}}\pmod{\ell^{t-2\nu_\ell(k)}}}}
\leg{ak_\ell^2}{\ell}^e\ell^t\\
&=\ell^t\sum_{\substack{a\in\mathbb{Z}/\ell^e\mathbb{Z}\\
a\equiv\frac{r^2-4g^2}{\ell^{2\nu_\ell(k)}}\pmod{\ell^{t-2\nu_\ell(k)}}}}
\leg{a}{\ell}^e\\
&=\ell^t\ell^{e-t+2\nu_\ell(k)}\leg{(r^2-4g^2)/\ell^{2\nu_\ell(k)}}{\ell}^e.
\end{split}
\end{equation*}
Therefore, in the case that $\ell\mid\mathpzc{m}_K$ and $2\nu_\ell(k)<\nu_\ell(\mathpzc{m}_K)$,
we have
\begin{equation*}
\frac{c_k(\ell^e)}{\ell^{e-1}}
=\#C_g^{(\ell)}(r,1,1,k)\leg{(r^2-4g^2)/\ell^{2\nu_\ell(k)}}{\ell}^e\ell
\end{equation*}
since
\begin{equation*}
\#C_g^{(\ell)}(r,1,1,k)
=\begin{cases}
\ell^{2\nu_\ell(k)}&\text{if }r^2\equiv 4g^2\pmod{\ell^{2\nu_\ell(k)}},\\
0&\text{otherwise}.
\end{cases}
\end{equation*}
\end{proof}
\end{document} |
\begin{document}
\twocolumn[
\icmltitle{AdaNet: Adaptive Structural Learning of Artificial Neural Networks}
\begin{icmlauthorlist}
\icmlauthor{Corinna Cortes}{google}
\icmlauthor{Xavier Gonzalvo}{google}
\icmlauthor{Vitaly Kuznetsov}{google}
\icmlauthor{Mehryar Mohri}{cims,google}
\icmlauthor{Scott Yang}{cims}
\end{icmlauthorlist}
\icmlaffiliation{cims}{Courant Institute, New York, NY, USA}
\icmlaffiliation{google}{Google Research, New York, NY, USA}
\icmlcorrespondingauthor{Vitaly Kuznetsov}{[email protected]}
\icmlkeywords{Neural Networks, Ensemble Methods, Learning Theory}
\vskip 0.3in
]
\printAffiliationsAndNotice{}
\begin{abstract}
We present new algorithms for adaptively learning artificial neural
networks. Our algorithms (\textsc{AdaNet}) adaptively learn both
the structure of the network and its weights. They are based on a
solid theoretical analysis, including data-dependent generalization
guarantees that we prove and discuss in detail. We report the
results of large-scale experiments with one of our algorithms on
several binary classification tasks extracted from the CIFAR-10
dataset. The results demonstrate that our algorithm can
automatically learn network structures with very competitive
performance accuracies when compared with those achieved for neural
networks found by standard approaches.
\ignore{
We present a new framework for analyzing and learning artificial
neural networks. Our approach simultaneously and adaptively learns
both the structure of the network as well as its weights. The
methodology is based upon and accompanied by strong data-dependent
theoretical learning guarantees, so that the final network
architecture provably adapts to the complexity of any given
problem. We validate our framework with experimental data on
real-life dataset which demonstrate that our algorithm can
automatically find network structures with accuracies competitive to
networks found by traditional approaches.
}
\end{abstract}
\section{Introduction}
\label{sec:intro}
Deep neural networks form a powerful framework for machine learning
and have achieved remarkable performance in several areas in recent
years. Representing the input through increasingly more abstract
layers of feature representation has been shown to be extremely effective
in areas such as natural language processing, image captioning, speech
recognition and many others
\citep{KrizhevskySutskeverHinton2012,SutskeverVinyalsLe2014}.\ignore{
The concept of multilayer feature representations and modeling
machine learning problems using a network of neurons is also
motivated and guided by studies of the brain, neurological behavior,
and cognition.} However, despite the compelling arguments for using
neural networks as a general template for solving machine learning
problems, training these models and designing the right network for a
given task has been filled with many theoretical gaps and practical
concerns.
To train a neural network, one needs to specify the parameters of a
typically large network architecture with several layers and units,
and then solve a difficult non-convex optimization problem. From an
optimization perspective, there is no guarantee of optimality for a
model obtained in this way, and often, one needs to implement ad hoc
methods (e.g. gradient clipping or batch normalization
\citep{PascanuMikolovBengio2013,IoffeSzegedy15}) to derive a coherent
model.
Moreover, if a network architecture is specified a priori and trained
using back-propagation, the model will always have as many layers as
the one specified because there needs to be at least one path through
the network in order for the hypothesis to be non-trivial. While
single weights may be pruned \citep{HanPoolTranDally}, a technique
originally termed Optimal Brain Damage \citep{LeCunDenkerSolla}, the
architecture itself is unchanged. This imposes a stringent lower
bound on the complexity of the model. Since not all machine learning
problems admit the same level of difficulty and different tasks
naturally require varying levels of complexity, complex models trained
with insufficient data can be prone to overfitting. This places a
burden on a practitioner to specify an architecture at the right level
of complexity which is often hard and requires significant levels of
experience and domain knowledge. For this reason, network
architecture is often treated as a hyperparameter which is tuned using
a validation set. The search space can quickly become exorbitantly
large \citep{SzegedyEtal2015,HeZhangRenSun2015} and large-scale
hyperparameter tuning to find an effective network architecture is
wasteful of data, time, and resources (e.g. grid search, random search
\citep{BergstraBardenetBengioKegl2011}).
In this paper, we attempt to remedy some of these issues. In
particular, we provide a theoretical analysis of a supervised learning
scenario in which the network architecture and parameters are learned
simultaneously. To the best of our knowledge, our results are the
first generalization bounds for the problem of \emph{structural
learning of neural networks}. These general guarantees can guide
the design of a variety of different algorithms for learning in this
setting. We describe in depth two such algorithms that directly
benefit from the theory that we develop.
In contrast to enforcing a pre-specified architecture and a
corresponding fixed complexity, our algorithms learn the requisite
model complexity for a machine learning problem in an \emph{adaptive}
fashion. Starting from a simple linear model, we add more units and
additional layers as needed. \ignore{From the cognitive perspective,
we will adapt the neural complexity and architecture to the
difficulty of the problem. } The additional units that we add are
carefully selected and penalized according to rigorous estimates from
the theory of statistical learning.\ignore{This will serve as a
catalyst for the sparsity of our model as well as the strong
generalization bonds that we will be able to derive. The first
algorithm that we present learns traditional feedforward
architectures. The second algorithm that we develop steps outside of
this class of models and can learn more exotic structures.}
Remarkably, optimization problems for both of our algorithms turn out
to be strongly convex and hence are guaranteed to have a unique global
solution which is in stark contrast with other methodologies for
training neural networks.
The paper is organized as follows. In
Appendix~\ref{sec:related_work}, we give a detailed discussion of
previous work related to this topic.
Section~\ref{sec:structural_learning} describes the broad network
architecture and therefore hypothesis set that we
consider. Section~\ref{sec:scenario} provides a formal description of
our learning scenario. In Section~\ref{sec:bounds}, we prove strong
generalization guarantees for learning in this setting which guide the
design of the algorithm described in Section~\ref{sec:algorithm} as
well as a variant described in Appendix~\ref{sec:alternatives}. We
conclude with experimental results in Section~\ref{sec:experiments}.
\section{Network architecture}
\label{sec:structural_learning}
In this section, we describe the general network architecture we
consider for feedforward neural networks, thereby also defining
our hypothesis set. To simplify the presentation, we restrict our attention to the case of
binary classification. However, all our results can be
straightforwardly extended to multi-class classification, including
the network architecture by augmenting the number of output units, and
our generalization guarantees by using existing multi-class
counterparts of the binary classification ensemble margin bounds we
use.
A common model for feedforward neural networks is the multi-layer
architecture where units in each layer are only connected to those
in the layer below. We will consider more general architectures where
a unit can be connected to units in any of the layers below, as
illustrated by Figure~\ref{fig:arch}. In particular, the output unit
in our network architectures can be connected to any other unit. These
more general architectures include as special cases standard
multi-layer networks (by zeroing out appropriate connections) as well
as somewhat more exotic ones
\citep{HeZhangRenSun2015,HuangLiuWeinberger2016}.
\begin{figure}[t]
\centering
\includegraphics[scale=.35]{nn3.png}
\caption{An example of a general network architecture: output layer (green) is
connected to all of the hidden units as well as some input units. Some hidden
units (red and yellow) are connected not only to the units in the layer
directly below but to units at other levels as well.}
\vskip -.15in
\label{fig:arch}
\end{figure}
More formally, the artificial neural networks we consider are defined
as follows. Let $l$ denote the number of intermediate layers in the
network and $n_k$ the maximum number of units in layer $k \in [l]$.
Each unit $j \in [n_k]$ in layer $k$ represents a function denoted by
$h_{k, j}$ (before composition with an activation function). Let
${\mathscr X}$ denote the input space and for any $x \in {\mathscr X}$, let
$\boldsymbol \Psi(x) \in \mathbb R^{n_0}$ denote the corresponding feature vector.
Then, the family of functions defined by the first layer functions
$h_{1, j}$, $j \in [n_1]$, is the following:
\begin{equation}
\label{eq:H1}
{\mathscr H}_{1} = \set[\Big]{x \mapsto \mathbf u \cdot \boldsymbol \Psi(x)
\colon \mathbf u \in \mathbb R^{n_0}, \| \mathbf u \|_p \leq \Lambda_{1, 0}},
\end{equation}
where $p \geq 1$ defines an $l_p$-norm and $\Lambda_{1, 0} \geq 0$ is
a hyperparameter on the weights connecting layer 0 and layer 1. The family of functions $h_{k, j}$, $j \in [n_k]$,
in a higher layer $k > 1$ is then defined as follows:
\begin{align}
\label{eq:Hk}
{\mathscr H}_{k} & = \set[\bigg]{x \mapsto \sum_{s = 1}^{k-1} \mathbf u_s
\cdot (\varphi_{s}\circ \mathbf h_s)(x)
\colon \nonumber \\
&\quad\quad \mathbf u_s \in \mathbb R^{n_s}, \| \mathbf u_s \|_p \leq \Lambda_{k,s}, \mathbf h_s \in
{\mathscr H}_{s}^{n_s}},
\end{align}
where, for each unit function
$h_{k, j}$, $\mathbf u_s$ in \eqref{eq:Hk} denotes the vector of weights for
connections from that unit to a lower layer $s < k$. The $\Lambda_{k,
s}$s are non-negative hyperparameters and
$\varphi_{s}\circ \mathbf h_s$ abusively denotes a coordinate-wise
composition:
$\varphi_{s}\circ \mathbf h_s = (\varphi_s \circ h_{s,1}, \ldots, \varphi_s
\circ h_{s,n_s})$. The $\varphi_s$s are assumed to be $1$-Lipschitz
activation functions. In particular, they can be chosen to be the
\emph{Rectified Linear Unit function} (ReLU function)
$x \mapsto \max \set{0, x}$, or the \emph{sigmoid function}
$x \mapsto \frac{1}{1 + e^{-x}}$. The choice of the parameter $p \geq
1$ determines the sparsity of the network and the complexity of
the hypothesis sets ${\mathscr H}_k$.
For the networks we consider, the output
unit can be connected to all intermediate units, which therefore
defines a function $f$ as follows:
\begin{equation}
\label{eq:output}
f = \sum_{k = 1}^{l} \sum_{j = 1}^{n_k} w_{k, j} h_{k, j} =
\sum_{k = 1}^l \mathbf w_k \cdot \mathbf h_k,
\end{equation}
where
$\mathbf h_k = [h_{k,1}, \ldots, h_{k, n_k}]^\top \mspace{-8mu} \in
{\mathscr H}_k^{n_k}$ and $\mathbf w_k \in \mathbb R^{n_k}$ is the vector of connection
weights to units of layer $k$. Observe that when
$\mathbf u_s = 0$ for $s < k - 1$ and $\mathbf w_k = 0$ for $k < l$, our
architecture coincides with standard multi-layer feedforward ones.
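To make the definitions \eqref{eq:H1}--\eqref{eq:output} concrete, the following Python sketch (an illustration only, not the implementation used in our experiments) evaluates a network in which a unit may read from any lower intermediate layer and the output is connected to all intermediate units; the layer sizes and random weights are hypothetical placeholders.

```python
import numpy as np

def relu(z):
    """A 1-Lipschitz activation function."""
    return np.maximum(0.0, z)

def forward(x, weights, w_out):
    """Forward pass for the general architecture: a unit in layer k may
    read from all lower intermediate layers (Eq. (2)), and the output
    (Eq. (3)) is connected to every intermediate unit.

    weights[k-1][s-1] is the (n_k, n_s) matrix of connections from layer
    s to layer k; layer 1 reads the raw feature vector Psi(x) = x.
    w_out[k-1] is the output-weight vector w_k for layer k."""
    layers = [np.asarray(x)]                      # layer 0: features Psi(x)
    for k in range(1, len(weights) + 1):
        if k == 1:
            pre = weights[0][0] @ layers[0]       # h_1 = u . Psi(x)
        else:                                     # h_k = sum_s u_s . (phi_s o h_s)
            pre = sum(weights[k - 1][s - 1] @ relu(layers[s]) for s in range(1, k))
        layers.append(pre)
    return sum(w_out[k - 1] @ layers[k] for k in range(1, len(layers)))

# Toy instance with l = 3 and a skip connection from layer 1 to layer 3.
rng = np.random.default_rng(0)
n = [3, 4, 4, 2]                                  # n_0, ..., n_3 (hypothetical)
W = [[rng.standard_normal((n[1], n[0]))],         # layer 1 <- layer 0
     [rng.standard_normal((n[2], n[1]))],         # layer 2 <- layer 1
     [rng.standard_normal((n[3], n[1])),          # layer 3 <- layers 1 and 2
      rng.standard_normal((n[3], n[2]))]]
w_out = [rng.standard_normal(n[k]) for k in (1, 2, 3)]
f_x = forward(rng.standard_normal(n[0]), W, w_out)
```

Zeroing the weight matrices that skip a layer recovers a standard feedforward network, in agreement with the remark above.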
We will denote by ${\mathscr F}$ the family of functions $f$ defined by
\eqref{eq:output} with the absolute value of the weights summing to
one:
\begin{equation*}
{\mathscr F} = \set[\Bigg]{\sum_{k = 1}^l \mathbf w_k \cdot \mathbf h_k \colon \mathbf h_k \in
{\mathscr H}_k^{n_k}, \sum_{k = 1}^{l} \| \mathbf w_k \|_1 = 1}.
\end{equation*}
Let $\widetilde {\mathscr H}_k$ denote the union of ${\mathscr H}_k$ and its reflection,
$\widetilde {\mathscr H}_k = {\mathscr H}_k \cup (-{\mathscr H}_k)$, and let ${\mathscr H}$ denote the union of
the families $\widetilde {\mathscr H}_k$: ${\mathscr H} = \bigcup_{k = 1}^l \widetilde {\mathscr H}_k$.
Then, ${\mathscr F}$ coincides with the convex hull of ${\mathscr H}$:
${\mathscr F} = \conv({\mathscr H})$.
For any $k \in [l]$ we will also consider the family ${\mathscr H}^*_k$ derived
from ${\mathscr H}_k$ by setting $\Lambda_{k, s} = 0$ for $s < k - 1$, which
corresponds to units connected only to the layer below. We similarly
define $\widetilde {\mathscr H}^*_k = {\mathscr H}^*_k \cup (-{\mathscr H}^*_k)$ and
${\mathscr H}^* = \bigcup_{k = 1}^l {\mathscr H}^*_k$, and define ${\mathscr F}^*$ as the convex
hull ${\mathscr F}^* = \conv({\mathscr H}^*)$. Note that the architecture corresponding
to the family of functions ${\mathscr F}^*$ is still more general than
standard feedforward neural network architectures since the output
unit can be connected to units in different layers.
\section{Learning problem}
\label{sec:scenario}
We consider the standard supervised learning scenario and assume that
training and test points are drawn i.i.d.\ according to some
distribution ${\mathscr D}$ over ${\mathscr X} \times \set{-1, +1}$ and denote by
$S = ((x_1, y_1), \ldots, (x_m, y_m))$ a training sample of size $m$
drawn according to ${\mathscr D}^m$.
For a function $f$ taking values in $\mathbb R$, we denote by
$R(f) = \mathbb{E}_{(x,y)\sim {\mathscr D}}[1_{yf(x) \leq 0}]$ its generalization
error and, for any $\rho > 0$, by $\widehat R_{S, \rho}(f)$ its empirical
margin error on the sample $S$:
$\widehat R_{S, \rho}(f) = \frac{1}{m} \sum_{i = 1}^m 1_{y_i f(x_i) \leq
\rho}$.
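As a small illustration of these two quantities, the following Python sketch (with hypothetical predictions and labels) computes the empirical margin error; the empirical misclassification error is recovered at $\rho = 0$.

```python
import numpy as np

def empirical_margin_error(f_vals, y, rho):
    """Empirical margin error: the fraction of sample points whose margin
    y_i f(x_i) is at most rho.  With rho = 0 this counts the points that
    f misclassifies (ties y f(x) = 0 counted as errors)."""
    f_vals, y = np.asarray(f_vals), np.asarray(y)
    return float(np.mean(y * f_vals <= rho))

# Hypothetical predictions f(x_i) and labels on a sample of size m = 4:
f_vals = [0.9, -0.2, 0.5, -0.8]
y      = [+1,  +1,  -1,  -1]           # margins: 0.9, -0.2, -0.5, 0.8
err_0  = empirical_margin_error(f_vals, y, rho=0.0)    # -> 0.5
err_85 = empirical_margin_error(f_vals, y, rho=0.85)   # -> 0.75
```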
The learning problem consists of using the training sample $S$ to
determine a function $f$ defined by \eqref{eq:output} with small
generalization error $R(f)$. For an accurate predictor $f$, we expect
many of the weights to be zero and the corresponding architecture
to be quite sparse, with fewer than $n_k$ units at layer $k$ and relatively
few non-zero connections. In that sense, learning an accurate
function $f$ implies also learning the underlying architecture.
In the next section, we present data-dependent learning bounds for
this problem that will help guide the design of our algorithm.
\section{Generalization bounds}
\label{sec:bounds}
Our learning bounds are expressed in terms of the Rademacher
complexities of the hypothesis sets ${\mathscr H}_k$. The empirical Rademacher
complexity of a hypothesis set ${\mathscr G}$ for a sample $S$ is denoted by $\widehat{\mathfrak R}_S({\mathscr G})$
and defined as follows:
\begin{equation*}
\widehat{\mathfrak R}_S({\mathscr G}) = \frac{1}{m}\E_{{\boldsymbol \sigma}}\left[ \sup_{h \in {\mathscr G}}
\sum_{i = 1}^m \sigma_i h(x_i) \right],
\end{equation*}
where ${\boldsymbol \sigma} = (\sigma_1, \ldots, \sigma_m)$, with the $\sigma_i$s
independent uniformly distributed random variables taking values in
$\set{-1, +1}$. Its Rademacher complexity is defined by $\mathfrak R_m({\mathscr G}) = \E_{S \sim {\mathscr D}^m} [\widehat{\mathfrak R}_S({\mathscr G})]$. These
are data-dependent complexity measures that lead to finer learning
guarantees \citep{KoltchinskiiPanchenko2002,BartlettMendelson2002}.
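For intuition, the empirical Rademacher complexity can be estimated by Monte Carlo when the hypothesis class is finite, as in the sketch below (illustrative only; the classes ${\mathscr H}_k$ are infinite and are instead handled via upper bounds later in this section).

```python
import numpy as np

def empirical_rademacher(H, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity
    (1/m) E_sigma[ sup_{h in G} sum_i sigma_i h(x_i) ] for a *finite*
    hypothesis class G, with H[j, i] = h_j(x_i)."""
    rng = np.random.default_rng(seed)
    m = H.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, m))  # Rademacher draws
    sups = (sigma @ H.T).max(axis=1)                    # sup over the class, per draw
    return float(sups.mean() / m)

m = 50
h = np.ones((1, m))                  # a single constant hypothesis
h_pm = np.vstack([h, -h])            # the class {h, -h}
# The richer class always has at least the complexity of the smaller one:
r_single, r_pair = empirical_rademacher(h), empirical_rademacher(h_pm)
```

For the singleton class the complexity concentrates near $0$, while for $\{h, -h\}$ it is close to $\sqrt{2/(\pi m)}$, matching the usual behavior of these data-dependent measures.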
As pointed out earlier, the family of functions ${\mathscr F}$ is the convex
hull of ${\mathscr H}$. Thus, generalization bounds for ensemble methods can be
used to analyze learning with ${\mathscr F}$. In particular, we can leverage
the recent margin-based learning guarantees of
\citet{CortesMohriSyed2014}, which are finer than those that can be
derived via a standard Rademacher complexity analysis
\citep{KoltchinskiiPanchenko2002}, and which admit an explicit
dependency on the mixture weights $\mathbf w_k$ defining the
ensemble function $f$. That leads to the following learning
guarantee.
\begin{theorem}[Learning bound]
\label{th:adanet}
Fix $\rho > 0$. Then, for any $\delta > 0$, with probability at least
$1 - \delta$ over the draw of a sample $S$ of size $m$ from ${\mathscr D}^m$,
the following inequality holds for all
$f = \sum_{k = 1}^l \mathbf w_k \cdot \mathbf h_k \in {\mathscr F}$:
\begin{align*}
R(f)
& \leq \widehat{R}_{S, \rho}(f) + \frac{4}{\rho} \sum_{k = 1}^{l} \big \| \mathbf w
_k \big \|_1 \mathfrak R_m(\widetilde {\mathscr H}_k) + \frac{2}{\rho} \sqrt{\frac{\log l}{m}}\\
& \quad + C(\rho, l, m, \delta),
\end{align*}
where
$C(\rho, l, m, \delta) = \sqrt{\big\lceil \frac{4}{\rho^2}
\log(\frac{\rho^2 m}{\log l}) \big\rceil \frac{\log l}{m} +
\frac{\log(\frac{2}{\delta})}{2m}} = \widetilde O\Big(\frac{1}{\rho}
\sqrt{\frac{\log l}{m}}\Big)$.
\end{theorem}
The proof of this result, as well as those of all other main theorems,
is given in Appendix~\ref{app:theory}. The bound of the theorem can
be generalized to hold uniformly for all $\rho \in (0, 1]$, at the
price of an additional term of the form
$\sqrt{\log \log_2 (2/\rho)/m}$ using standard
techniques \citep{KoltchinskiiPanchenko2002}.
Observe that the bound of the theorem depends only logarithmically on
the depth of the network $l$. But, perhaps more remarkably, the
complexity term of the bound is a $\| \mathbf w_k \|_1$-weighted average of
the complexities of the layer hypothesis sets ${\mathscr H}_k$, where the
weights are precisely those defining the network, or the function $f$.
This suggests that a function $f$ with a small empirical margin error
and a deep architecture benefits nevertheless from a strong
generalization guarantee, if it allocates more weights to lower layer
units and less to higher ones. Of course, when the weights are sparse,
that will imply an architecture with relatively fewer units or
connections at higher layers than at lower ones. The bound of the theorem
further gives a quantitative guide for apportioning the weights
depending on the Rademacher complexities of the layer hypothesis sets.
This data-dependent learning guarantee will serve as a foundation for
the design of our structural learning algorithms in
Section~\ref{sec:algorithm} and Appendix~\ref{sec:alternatives}.
However, to fully exploit it, the Rademacher complexity measures need
to be made explicit. One advantage of these data-dependent measures is
that they can be estimated from data, which can lead to more
informative bounds. Alternatively, we can derive useful upper bounds
for these measures which can be more conveniently used in our
algorithms. The next results in this section provide precisely such
upper bounds, thereby leading to a more explicit generalization bound.
We will denote by $q$ the conjugate of $p$, that is
$\frac{1}{p} + \frac{1}{q} = 1$, and define
$r_\infty = \max_{i \in [1, m]} \| \boldsymbol \Psi(x_i)\|_\infty$.
Our first result gives an upper bound on the Rademacher
complexity of ${\mathscr H}_k$ in terms of the Rademacher
complexity of other layer families.
\begin{lemma}
\label{lemma:Rad_Hk_Hk-1}
For any $k > 1$, the empirical Rademacher complexity of ${\mathscr H}_k$ for a
sample $S$ of size $m$ can be upper bounded as follows in terms of
those of the ${\mathscr H}_s$s with $s < k$:
\begin{equation*}
\widehat{\mathfrak R}_S({\mathscr H}_k)
\leq 2 \sum_{s=1}^{k-1} \Lambda_{k,s}
n_{s}^{\frac{1}{q}} \widehat{\mathfrak R}_S({\mathscr H}_{s}).
\end{equation*}
\end{lemma}
For the family ${\mathscr H}^*_k$, which is directly relevant to many of our
experiments, the following more explicit upper bound can be derived,
using Lemma~\ref{lemma:Rad_Hk_Hk-1}.
\begin{lemma}
\label{lemma:Rad_Hk}
Let $\Lambda_k \!= \prod_{s = 1}^k 2 \Lambda_{s, s - 1}$ and
$N_k \!= \prod_{s = 1}^k n_{s - 1}$. Then, for any $k \geq 1$, the
empirical Rademacher complexity of ${\mathscr H}_k^*$ for a sample $S$ of size
$m$ can be upper bounded as follows:
\begin{equation*}
\widehat{\mathfrak R}_S({\mathscr H}_k^*)
\leq r_\infty \Lambda_k N_k^{\frac{1}{q}}
\sqrt{\frac{\log (2 n_0)}{2 m}}.
\end{equation*}
\end{lemma}
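The bound of Lemma~\ref{lemma:Rad_Hk} is straightforward to evaluate numerically, as in the following sketch; all numerical values there (weight-norm bounds, layer widths, $r_\infty$, $m$) are hypothetical placeholders chosen only for illustration.

```python
import math

def layer_bound(k, Lambda_list, n_list, r_inf, m, q):
    """Evaluates the upper bound of Lemma 2 on the empirical Rademacher
    complexity of H_k^*:
        r_inf * Lambda_k * N_k**(1/q) * sqrt(log(2 n_0) / (2 m)),
    with Lambda_k = prod_{s<=k} 2*Lambda_{s,s-1} and N_k = prod_{s<=k} n_{s-1}.
    Lambda_list[s-1] = Lambda_{s,s-1}; n_list[s] = n_s."""
    Lambda_k = math.prod(2.0 * L for L in Lambda_list[:k])
    N_k = math.prod(n_list[:k])          # n_0 * n_1 * ... * n_{k-1}
    return r_inf * Lambda_k * N_k ** (1.0 / q) * math.sqrt(math.log(2 * n_list[0]) / (2 * m))

# A larger conjugate exponent q (p closer to 1) tempers the N_k^{1/q} factor:
b_q2  = layer_bound(3, [1.0, 1.0, 1.0], [100, 50, 50], r_inf=1.0, m=10_000, q=2)
b_q10 = layer_bound(3, [1.0, 1.0, 1.0], [100, 50, 50], r_inf=1.0, m=10_000, q=10)
```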
Note that $N_k$, which is the product of the number of units in layers
below $k$, can be large. This suggests that values of $p$ closer to
one, that is larger values of $q$, could be more helpful to control
complexity in such cases. More generally, similar explicit upper
bounds can be given for the Rademacher complexities of subfamilies of
${\mathscr H}_k$ with units connected only to layers $k, k - 1, \ldots, k - d$,
with $d$ fixed, $d < k$. Combining Lemma~\ref{lemma:Rad_Hk} with
Theorem~\ref{th:adanet} helps derive the following explicit learning
guarantee for feedforward neural networks with an output unit
connected to all the other units.
\begin{corollary}[Explicit learning bound]
\label{cor:adanet}
Fix $\rho > 0$. Let
$\Lambda_k = \prod_{s = 1}^k 4 \Lambda_{s, s - 1}$ and
$N_k \!= \prod_{s = 1}^k n_{s - 1}$. Then, for any $\delta > 0$, with
probability at least $1 - \delta$ over the draw of a sample $S$ of
size $m$ from ${\mathscr D}^m$, the following inequality holds for all
$f = \sum_{k = 1}^l \mathbf w_k \cdot \mathbf h_k \in {\mathscr F}^*$:
\begin{align*}
R(f)
& \leq \widehat{R}_{S, \rho}(f) + \frac{2}{\rho} \sum_{k = 1}^{l} \big \| \mathbf w
_k \big \|_1 \bigg[\overline r_\infty \Lambda_k N_k^{\frac{1}{q}} \sqrt{\frac{2 \log (2 n_0)}{m}}\bigg] \\
& \quad + \frac{2}{\rho} \sqrt{\frac{\log l}{m}} + C(\rho, l, m, \delta),
\end{align*}
where $C(\rho, l, m, \delta) = \sqrt{\big\lceil \frac{4}{\rho^2}
\log(\frac{\rho^2 m}{\log l}) \big\rceil \frac{\log l}{m} +
\frac{\log(\frac{2}{\delta})}{2m}} = \widetilde O\Big(\frac{1}{\rho}
\sqrt{\frac{\log l}{m}}\Big)$, and where $\overline r_\infty = \E_{S \sim {\mathscr D}^m}[r_\infty]$.
\end{corollary}
The learning bound of Corollary~\ref{cor:adanet} is
a finer guarantee than previous ones by \citet{Bartlett1998},
\citet{NeyshaburTomiokaSrebro2015}, or \citet{SunChenWangLiu2016}.
This is because it explicitly
differentiates between the weights of different
layers, while previous bounds treat all weights indiscriminately.
This is crucial to algorithmic design since the network
complexity no longer needs to grow exponentially as a function of
depth. Our bounds are also more general and apply to
other network architectures, such as those introduced in
\citep{HeZhangRenSun2015,HuangLiuWeinberger2016}.
\section{Algorithm}
\label{sec:algorithm}
This section describes our algorithm, \textsc{AdaNet}, for
\emph{adaptive} learning of neural networks. \textsc{AdaNet}
adaptively grows the structure of a neural network, balancing model
complexity with empirical risk minimization. We also describe in
detail in Appendix~\ref{sec:alternatives} another variant of
\textsc{AdaNet} which admits some favorable properties.
Let $x \mapsto \Phi(-x)$ be a non-increasing convex function
upper-bounding the zero-one loss, $x \mapsto 1_{x \leq 0}$, such that
$\Phi$ is differentiable over $\mathbb R$ and $\Phi'(x) \neq 0$ for all
$x$. This surrogate loss $\Phi$ may be, for instance, the exponential
function $\Phi(x) = e^x$ as in AdaBoost
\citep{FreundSchapire97}, or the logistic function
$\Phi(x) = \log(1 + e^x)$ as in logistic regression.
\subsection{Objective function}
Let $\set{h_1, \ldots, h_N}$ be a subset of ${\mathscr H}^*$. In the most
general case, $N$ is infinite. However, as discussed later, in
practice, the search is limited to a finite set. For any $j \in [N]$, we will denote by $r_j$ the Rademacher complexity of the family
${\mathscr H}_{k_j}$ that contains $h_j$: $r_j = \mathfrak R_m({\mathscr H}_{k_j})$.
\textsc{AdaNet} seeks to find a function
$f = \sum_{j = 1}^N w_j h_j \in {\mathscr F}^*$ (or neural network) that directly minimizes the
data-dependent generalization bound of Corollary~\ref{cor:adanet}.
This leads to the following objective function:
\begin{align}
\label{eq:objective}
F(\mathbf w) = \frac{1}{m} \sum_{i = 1}^m \Phi\Big(1
- y_i \sum_{j = 1}^N w_j h_j(x_i) \Big) + \sum_{j = 1}^N \Gamma_j |w_j|,
\end{align}
where $\mathbf w \in \mathbb R^{N}$ and $\Gamma_j = \lambda r_j + \beta$, with
$\lambda \geq 0$ and $\beta \geq 0$ hyperparameters.
The objective function \eqref{eq:objective} is a convex function of
$\mathbf w$. It is the sum of a convex surrogate of the empirical error and
a regularization term, which is a weighted-$l_1$ penalty containing
two sub-terms: a standard norm-1 regularization, which admits $\beta$
as a hyperparameter, and a term that discriminates the functions $h_j$
based on their complexity.
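For concreteness, the sketch below evaluates the objective \eqref{eq:objective} with the logistic surrogate on a toy instance; the units, labels, and complexity values $r_j$ are hypothetical.

```python
import numpy as np

def adanet_objective(w, H, y, r, lam=0.1, beta=0.01):
    """The objective F(w) with the logistic surrogate Phi(x) = log(1 + e^x):
    a convex surrogate of the empirical margin error plus the
    complexity-weighted l1 penalty sum_j Gamma_j |w_j|,
    with Gamma_j = lam * r_j + beta.  H[j, i] = h_j(x_i)."""
    w, r = np.asarray(w, float), np.asarray(r, float)
    margins = y * (w @ H)                           # y_i * f(x_i)
    surrogate = np.log1p(np.exp(1.0 - margins)).mean()
    penalty = np.sum((lam * r + beta) * np.abs(w))
    return float(surrogate + penalty)

H = np.array([[1.0, 1.0, -1.0, -1.0],    # a shallower unit (small r_j)
              [1.0, -1.0, -1.0, 1.0]])   # a deeper unit (larger r_j)
y = np.array([1.0, 1.0, -1.0, -1.0])
r = [0.05, 0.20]                          # hypothetical complexity terms
F0 = adanet_objective([0.0, 0.0], H, y, r)   # log(1 + e), no penalty
F1 = adanet_objective([1.0, 0.0], H, y, r)
```

Since the surrogate is convex and the penalty is a weighted $l_1$ norm, $F$ is convex in $\mathbf w$, as claimed above.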
The optimization problem consisting of minimizing the objective
function $F$ in \eqref{eq:objective} is defined over a very large space
of base functions $h_j$. \textsc{AdaNet} consists of applying
coordinate descent to \eqref{eq:objective}. In that sense, our
algorithm is similar to the DeepBoost algorithm of
\citet{CortesMohriSyed2014}. However, unlike DeepBoost, which combines
decision trees, \textsc{AdaNet} learns a deep neural network, which
requires new methods for constructing and searching the space of
functions $h_j$. Both of these aspects differ significantly from the
decision tree framework. In particular, the search is particularly
challenging. In fact, the main difference between the algorithm
presented in this section and the variant described in
Appendix~\ref{sec:alternatives} is the way new candidates $h_j$ are
examined at each iteration.
\begin{figure}[t]
\centering
\begin{tabular}{c@{\hspace{1cm}}c}
(a) & \includegraphics[scale=.2]{adanet2.png} \\
(b) & \includegraphics[scale=.2]{adanet.png} \\
\end{tabular}
\caption{Illustration of the algorithm's incremental construction of a
neural network. The input layer is indicated in blue, the output
layer in green. Units in the yellow block are added at the first
iteration while units in purple are added at the second iteration.
Two candidate extensions of the architecture are considered at the
third iteration (shown in red): (a) a two-layer extension; (b) a
three-layer extension. Here, a line between two blocks of units
indicates that these blocks are fully-connected.}
\vskip -.25in
\label{fig:adanet}
\end{figure}
\subsection{Description}
We start with an informal description of \textsc{AdaNet}. Let
$B \geq 1$ be a fixed parameter determining the number of units per
layer of a candidate subnetwork. The algorithm proceeds in $T$
iterations. Let $l_{t - 1}$ denote the depth of the neural network
constructed before the start of the $t$-th iteration. At iteration
$t$, the algorithm selects one of the following two options:
1. augmenting the current neural network with a subnetwork with the
same depth as that of the current network
$\mathbf h \in {\mathscr H}^{* B}_{l_{t - 1}}$, with $B$ units per layer. Each unit
in layer $k$ of this subnetwork may have connections to existing units
in layer $k-1$ of \textsc{AdaNet} in addition to connections to units
in layer $k-1$ of the subnetwork.
2. augmenting the current neural network with a deeper subnetwork
(depth $l_{t - 1} + 1$) $\mathbf h' \in {\mathscr H}^{* B}_{l_{t - 1} + 1}$. The set of
allowed connections is defined in the same way as for $\mathbf h$.
The option selected is the one leading to the best reduction of the
current value of the objective function, which depends both on the
empirical error and the complexity of the subnetwork added, which is
penalized differently in these two options.
Figure~\ref{fig:adanet} illustrates this construction and the two
options just described. An important aspect of our algorithm is that
the units of a subnetwork learned at a previous iteration (say
$\mathbf h_{1,1}$ in Figure~\ref{fig:adanet}) can serve as input to deeper
subnetworks added later (for example $\mathbf h_{2, 2}$ or $\mathbf h_{2, 3}$ in
the figure). Thus, the deeper subnetworks added later can take
advantage of the embeddings learned at the previous iterations.
The algorithm terminates after $T$ rounds or if the \textsc{AdaNet}
architecture can no longer be extended to improve the objective
\eqref{eq:objective}.
More formally, \textsc{AdaNet} is a boosting-style algorithm that
applies (block) coordinate descent to \eqref{eq:objective}. At each
iteration of block coordinate descent, descent coordinates $\mathbf h$ (base
learners in the boosting literature) are selected from the space of
functions ${\mathscr H}^*$. These coordinates correspond to the direction of
the largest decrease in \eqref{eq:objective}. Once these coordinates
are determined, an optimal step size in each of these directions is
chosen, which is accomplished by solving an appropriate convex
optimization problem.
Note that, in general, the search for the optimal descent coordinate
in an infinite-dimensional space or even in finite but large sets such
as that of all decision trees of some large depth may be intractable,
and it is common to resort to a heuristic search (weak learning
algorithm) that returns $\delta$-optimal coordinates. For instance, in
the case of boosting with trees one often grows trees according to
some particular heuristic \citep{FreundSchapire97}.
We denote the \textsc{AdaNet} model after $t - 1$
rounds by $f_{t - 1}$, which is parameterized by $\mathbf w_{t - 1}$.
Let $\mathbf h_{k, t-1}$ denote the vector of outputs of units in the $k$-th
layer of the \textsc{AdaNet} model, $l_{t-1}$ the depth of the
\textsc{AdaNet} architecture, and $n_{k, t-1}$
the number of units in the $k$-th layer after $t-1$ rounds.
At round $t$, we select descent coordinates
by considering two candidate subnetworks
$\mathbf h \in \widetilde{{\mathscr H}}_{l_{t-1}}^*$ and
$\mathbf h' \in \widetilde{{\mathscr H}}_{l_{t-1} + 1}^*$
that are generated by a weak learning algorithm
\textsc{WeakLearner}. Some choices for this algorithm
in our setting are described below.
Once we obtain $\mathbf h$ and $\mathbf h'$, we select one of these vectors of
units, as well as a vector of weights $\mathbf w \in \mathbb R^B$, so that the
result yields the best improvement in \epsilonqref{eq:objective}. This is
equivalent to minimizing the following objective function over
$\mathbf w \in \mathbb R^B$ and $\mathbf u \in \{\mathbf h, \mathbf h'\}$:
\begin{align}
\label{eq:objective_t}
F_t(\mathbf w, \mathbf u)
\!=\! \frac{1}{m} \sum_{i = 1}^m \Phi\Big( 1 - y_i f_{t - 1}(x_i)
&- y_i \mathbf w \cdot \mathbf u(x_i) \Big)
\nonumber \\ & \quad+ \Gamma_{\mathbf u} \| \mathbf w \|_1,
\end{align}
where $\Gamma_\mathbf u = \lambda r_\mathbf u + \beta$ and $r_\mathbf u$
is $\mathfrak R_m\big({\mathscr H}_{l_{t-1}}\big)$ if $\mathbf u = \mathbf h$ and
$\mathfrak R_m\big({\mathscr H}_{l_{t-1} + 1}\big)$ otherwise.
In other words, if $\min_{\mathbf w} F_t(\mathbf w, \mathbf h) \leq \min_{\mathbf w} F_t(\mathbf w, \mathbf h')$,
then
\begin{align*}
\mathbf w^* = \argmin_{\mathbf w \in \mathbb R^B} {F_t(\mathbf w, \mathbf h)}, \quad \mathbf h_t = \mathbf h,
\end{align*}
and otherwise
\begin{align*}
\mathbf w^* = \argmin_{\mathbf w \in \mathbb R^B} {F_t(\mathbf w, \mathbf h')}, \quad \mathbf h_t = \mathbf h'.
\end{align*}
If $F(\mathbf w_{t-1} + \mathbf w^*) < F(\mathbf w_{t-1})$, then we set
$f_t = f_{t-1} + \mathbf w^* \cdot \mathbf h_t$; otherwise we terminate
the algorithm.
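The per-round step can be sketched as follows: minimize $F_t$ over $\mathbf w$ for each candidate by proximal gradient descent on the logistic surrogate (a simple stand-in for the solver; all numerical values are hypothetical), then retain the candidate achieving the smaller objective value.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def minimize_Ft(Hu, y, f_prev, Gamma, steps=500, eta=0.1):
    """Minimize the per-round objective F_t(w, u) over w for one candidate
    subnetwork via proximal gradient descent, with the logistic surrogate
    Phi(x) = log(1 + e^x).  Hu[b, i] = u_b(x_i); f_prev[i] = f_{t-1}(x_i).
    Returns the weight vector and the attained objective value."""
    B, m = Hu.shape
    w = np.zeros(B)
    for _ in range(steps):
        z = 1.0 - y * (f_prev + w @ Hu)                  # argument of Phi
        grad = -(Hu * (y / (1.0 + np.exp(-z)))).sum(axis=1) / m
        w = soft_threshold(w - eta * grad, eta * Gamma)  # l1 prox step
    z = 1.0 - y * (f_prev + w @ Hu)
    return w, float(np.log1p(np.exp(z)).mean() + Gamma * np.abs(w).sum())

# One round: solve for both candidates and keep the better direction.
y = np.array([1.0, 1.0, -1.0, -1.0])
f_prev = np.zeros(4)
H_same   = np.array([[1.0, -1.0, -1.0, 1.0]])   # same-depth candidate h
H_deeper = np.array([[1.0, 1.0, -1.0, -1.0]])   # deeper candidate h'
w_h, F_h   = minimize_Ft(H_same,   y, f_prev, Gamma=0.05)
w_hp, F_hp = minimize_Ft(H_deeper, y, f_prev, Gamma=0.08)  # larger Gamma
h_t, w_star = ("h", w_h) if F_h <= F_hp else ("h'", w_hp)
```

In this toy instance the deeper candidate fits the labels and is retained despite its larger complexity penalty, while the uninformative same-depth candidate receives zero weight.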
\begin{figure}[t]
\begin{ALGO}{AdaNet}{S = ((x_i, y_i))_{i = 1}^m}
\SET{f_0}{0}
\DOFOR{t \gets 1 \mbox{ {\bf to }} T}
\SET{\mathbf h, \mathbf h'}{\textsc{WeakLearner}\big(S, f_{t-1}\big)}
\SET{\mathbf w}{\textsc{Minimize}\big(F_t(\mathbf w, \mathbf h)\big)}
\SET{\mathbf w'}{\textsc{Minimize}\big(F_t(\mathbf w', \mathbf h')\big)}
\IF{F_t(\mathbf w, \mathbf h) \leq F_t(\mathbf w', \mathbf h')}
\SET{(\mathbf h_t, \mathbf w^*)}{(\mathbf h, \mathbf w)}
\ELSE
\SET{(\mathbf h_t, \mathbf w^*)}{(\mathbf h', \mathbf w')}
\FI
\IF{F(\mathbf w_{t-1} + \mathbf w^*) < F(\mathbf w_{t-1})}
\SET{f_t}{f_{t-1} + \mathbf w^* \cdot \mathbf h_t}
\ELSE
\RETURN{f_{t-1}}
\FI
\OD
\RETURN{f_T}
\end{ALGO}
\vskip -.5cm
\caption{Pseudocode of the AdaNet algorithm. On line 3 two candidate
subnetworks are generated (e.g. randomly or by solving
\epsilonqref{eq:subnetwork_objective}). On lines 3 and 4, \epsilonqref{eq:objective_t}
is solved for each of these candidates. On lines 5-7 the best
subnetwork is selected and on lines 9-11 termination condition
is checked.}
\label{algo:adanet}
\epsilonnd{figure}
\begin{table*}[t]
\caption{Experimental results for \textsc{AdaNet}, \textsc{NN},
\textsc{LR} and \textsc{NN-GP} for different
pairs of labels in {\tt CIFAR-10}. Boldfaced results are
statistically significant at a 5\% confidence level.}
\label{tbl:adanet_vs_dnn}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{lcccc}
\hline
\abovespace\belowspace
Label pair & \textsc{AdaNet} & \textsc{LR} & \textsc{NN} & \textsc{NN-GP} \\
\hline
\abovespace
{\tt deer-truck} & {\bf 0.9372 $\pm$ 0.0082} & 0.8997 $\pm$ 0.0066 & 0.9213 $\pm$ 0.0065 & 0.9220 $\pm$ 0.0069 \\
{\tt deer-horse} & {\bf 0.8430 $\pm$ 0.0076} & 0.7685 $\pm$ 0.0119 & 0.8055 $\pm$ 0.0178 & 0.8060 $\pm$ 0.0181 \\
{\tt automobile-truck} & {\bf 0.8461 $\pm$ 0.0069} & 0.7976 $\pm$ 0.0076 & 0.8063 $\pm$ 0.0064 & 0.8056 $\pm$ 0.0138 \\
{\tt cat-dog} & {\bf 0.6924 $\pm$ 0.0129} & 0.6664 $\pm$ 0.0099 & 0.6595 $\pm$ 0.0141 & 0.6607 $\pm$ 0.0097 \\
{\tt dog-horse} & {\bf 0.8350 $\pm$ 0.0089} & 0.7968 $\pm$ 0.0128 & 0.8066 $\pm$ 0.0087 & 0.8087 $\pm$ 0.0109 \\
\hline
\end{tabular}
\end{small}
\end{center}
\vskip -0.2in
\end{table*}
There are many different choices for the \textsc{WeakLearner}
algorithm. For instance, one may generate a large number of random
networks and select the one that optimizes
\eqref{eq:objective_t}. Another option is to directly minimize
\eqref{eq:objective_t} or its regularized version:
\begin{align}
\label{eq:subnetwork_objective}
\widetilde{F}_t(\mathbf w, \mathbf h)
\!=\! \frac{1}{m} \sum_{i = 1}^m \Phi\Big( 1 - y_i f_{t - 1}(x_i)
&- y_i \mathbf w \cdot \mathbf h(x_i) \Big)
\nonumber\\ &\quad+ \mathcal{R}(\mathbf w, \mathbf h),
\end{align}
over both $\mathbf w$ and $\mathbf h$.
Here $\mathcal{R}(\mathbf w ,\mathbf h)$ is a regularization term that,
for instance, can be used to enforce
that $\|\mathbf u_s\|_p \leq \Lambda_{k,s}$ in \eqref{eq:Hk}.
Note that, in general, \eqref{eq:subnetwork_objective} is
a non-convex objective. However, we do not rely
on finding a global solution to the corresponding optimization
problem. In fact, standard guarantees for regularized
boosting only require that each $\mathbf h$ added to the model
decrease the objective by a constant amount (i.e., that it satisfy
a $\delta$-optimality condition) for a boosting algorithm
to converge \citep{RatschMikaWarmuth2001,LuoTseng1992}.
Furthermore, the algorithm that we present in
Appendix~\ref{sec:alternatives} uses a weak-learning algorithm that
solves a convex sub-problem at each step and that additionally has a
closed-form solution. This comes at the cost of a more restricted
search space for finding a descent coordinate at each step of the
algorithm.
We conclude this section by observing that in our description of
\textsc{AdaNet} we have fixed $B$ for all iterations
and only two candidate subnetworks are considered at each step.
Our approach easily extends to an arbitrary number of candidate
subnetworks (for instance, of different depth $l$), as well
as a varying number of units $B$ per layer. Furthermore,
selecting an optimal subnetwork among the candidates is
easily parallelizable, allowing for an efficient and effective search
for optimal descent directions.
\section{Experiments}
\label{sec:experiments}
In this section, we present the results of our experiments with
the \textsc{AdaNet} algorithm.
\subsection{CIFAR-10}
In our first set of experiments, we used the {\tt CIFAR-10} dataset
\cite{Krizhevsky09learningmultiple}. This dataset consists of
$60\mathord,000$ images evenly distributed over $10$ different classes.
To reduce the problem to binary classification, we considered five
pairs of classes: {\tt deer-truck}, {\tt deer-horse}, {\tt
automobile-truck}, {\tt cat-dog}, {\tt dog-horse}. Raw images have
been preprocessed to obtain color histograms and histogram-of-gradient
features. The result is $154$ real-valued features with range
$[0,1]$.
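The color-histogram part of this preprocessing can be sketched as follows. The exact pipeline (bin counts, HOG parameters) is not specified above, so this is only an illustrative sketch; the function name \texttt{color\_histogram} and the choice of 8 bins per channel are our assumptions.

```python
import numpy as np

def color_histogram(image, bins_per_channel=8):
    """Concatenate per-channel color histograms of an HxWx3 uint8 image,
    normalized so every feature lies in [0, 1]. Bin count is an
    illustrative assumption, not the paper's actual setting."""
    feats = []
    n_pixels = image.shape[0] * image.shape[1]
    for c in range(image.shape[2]):
        hist, _ = np.histogram(image[:, :, c], bins=bins_per_channel,
                               range=(0, 256))
        feats.append(hist / n_pixels)  # fraction of pixels per bin
    return np.concatenate(feats)
```

Normalizing by the pixel count guarantees the $[0,1]$ feature range mentioned above, regardless of image size.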
We compared \textsc{AdaNet} to standard
feedforward neural networks (\textsc{NN}) and logistic regression
(\textsc{LR}) models.
Note that convolutional neural networks are often
a more natural choice for image classification problems
such as {\tt CIFAR-10}. However, the goal of these
experiments is not to obtain state-of-the-art results
for this particular task, but to provide a proof-of-concept
illustrating that our structural learning approach can be
competitive with traditional approaches
for finding efficient architectures and training corresponding
networks.
Note that the \textsc{AdaNet} algorithm requires knowledge
of the complexities $r_j$, which in certain cases can be estimated from data.
In our experiments, we used the upper bound of Lemma~\ref{lemma:Rad_Hk}.
Our algorithm admits a number of hyperparameters:
regularization hyperparameters $\lambda$ and $\beta$,
the number of units $B$ in each layer of the new subnetworks used
to extend the model at each iteration, and
a bound $\Lambda_{k}$ on the weights $(\mathbf u',\mathbf u)$ in each unit.
As discussed in Section~\ref{sec:algorithm}, there are different approaches
to finding candidate subnetworks in each iteration. In our experiments,
we searched for candidate subnetworks by minimizing
\eqref{eq:subnetwork_objective} with $\mathcal{R} = 0$.
This also requires a learning rate hyperparameter $\eta$.
These hyperparameters have been optimized over the following ranges:
$\lambda \in \{0, 10^{-8}, 10^{-7}, 10^{-6}, 10^{-5}, 10^{-4}\}$,
$B \in \{100, 150, 250\}$, $\eta \in \{10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}\}$.
We have used a single $\Lambda_k$ for all $k > 1$,
optimized over $\{1.0, 1.005, 1.01, 1.1, 1.2\}$. For simplicity,
we set $\beta = 0$.
Neural network models also admit a learning rate $\eta$ and a regularization
coefficient $\lambda$ as hyperparameters, as well as the number of
hidden layers $l$ and the number of units $n$ in each hidden layer.
The range of $\eta$ was the same as for \textsc{AdaNet},
and we varied $l$ in $\{1,2,3\}$, $n$ in $\{100, 150, 512, 1024, 2048\}$
and $\lambda \in \{0, 10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}\}$.
Logistic regression only admits $\eta$ and $\lambda$ as its
hyperparameters, which were optimized over the same ranges.
Note that the total number of hyperparameter settings
for \textsc{AdaNet} and standard neural networks is exactly
the same. Furthermore, the same holds for the number of hyperparameters
that determine the resulting architecture of the model:
$\Lambda$ and $B$ for \textsc{AdaNet}, and $l$ and $n$ for neural
network models. Observe that while a particular setting of $l$ and $n$
determines a fixed architecture, $\Lambda$ and $B$ parameterize
a structural learning procedure that may result in a different
architecture depending on the data.
In addition to the grid search procedure,
we have conducted a hyperparameter optimization
for neural networks using Gaussian process
bandits (\textsc{NN-GP}),
which is a sophisticated Bayesian non-parametric method for
response-surface modeling in conjunction with a bandit
algorithm~\cite{Snoek12bandit}. Instead of operating
on a pre-specified grid, this allows one to search for hyperparameters
in a given range. We used the following ranges:
$\lambda \in [10^{-5}, 1]$, $\eta \in [10^{-5}, 1]$,
$l \in [1,3]$ and $n \in [100,2048]$. This algorithm was run
for 500 trials, which is more than the number of hyperparameter settings
considered by \textsc{AdaNet} and \textsc{NN}.
Observe that this search procedure could also
be applied to our algorithm, but we choose not to do so
in this set of experiments to further demonstrate the competitiveness
of the structural learning approach.
In all experiments we use ReLU activations. \textsc{NN},
\textsc{NN-GP} and \textsc{LR} are trained using a
stochastic gradient method with a batch size of 100 and a maximum of
10,000 iterations. The same configuration is used for solving
\eqref{eq:subnetwork_objective}. We use $T=30$ for \textsc{AdaNet}
in all our experiments, although in most cases the algorithm
terminates after 10 rounds.
In each of the experiments, we used standard 10-fold cross-validation
for performance evaluation and model selection. In particular, the
dataset was randomly partitioned into 10 folds, and each algorithm was
run 10 times, with a different assignment of folds to the training
set, validation set and test set for each run. Specifically, for each
$i \in \{0, \ldots, 9\}$, fold $i$ was used for testing, fold $i +
1~(\text{mod}~10)$ was used for validation, and the remaining folds
were used for training. For each setting of the parameters, we
computed the average validation accuracy across the 10 folds, and
selected the parameter setting with maximum average accuracy across
validation folds. We report the average accuracy (and standard deviation)
of the selected hyperparameter setting across test folds
in Table~\ref{tbl:adanet_vs_dnn}.
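The fold-assignment rule described above can be written compactly; a minimal sketch (the function name \texttt{fold\_split} is illustrative):

```python
def fold_split(i, n_folds=10):
    """Cross-validation protocol described above: fold i is the test
    fold, fold (i + 1) mod n_folds is the validation fold, and the
    remaining folds form the training set."""
    test = i
    val = (i + 1) % n_folds
    train = [f for f in range(n_folds) if f not in (test, val)]
    return train, val, test
```

The modular wrap ensures that run $i = 9$ validates on fold $0$, so every fold serves exactly once as a test fold and once as a validation fold across the 10 runs.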
Our results show that \textsc{AdaNet} outperforms the other methods
on each of the datasets.
The average architectures for all label
pairs are provided in Table~\ref{tbl:adanet_archs}.
Note that \textsc{NN} and \textsc{NN-GP} always select a
one-layer architecture. The architectures selected by \textsc{AdaNet}
typically also have just one layer and fewer nodes than those
selected by \textsc{NN} and \textsc{NN-GP}. However,
on the more challenging {\tt cat-dog} problem, \textsc{AdaNet}
opts for a more complex model with two layers, which results in
better performance. This further illustrates that
our approach allows us to learn network architectures in an
adaptive fashion, depending on the complexity of the given problem.
\begin{table}[t]
\caption{Average number of units in each layer.}
\label{tbl:adanet_archs}
\vskip 0.05in
\begin{center}
\begin{small}
\begin{tabular}{l@{\hspace{.075cm}}c@{\hspace{.075cm}}c@{\hspace{.075cm}}c@{\hspace{.075cm}}c@{\hspace{.075cm}}}
\hline
\abovespace\belowspace
Label pair & \multicolumn{2}{c}{\textsc{AdaNet}} & \textsc{NN} & \textsc{NN-GP} \\
\hline
& 1st layer & 2nd layer & & \\
\hline
\abovespace
{\tt deer-truck} & 990 & 0 & 2048 & 1050 \\
{\tt deer-horse} & 1475 & 0 & 2048 & 488 \\
{\tt automobile-truck} & 2000 & 0 & 2048 & 1595 \\
{\tt cat-dog} & 1800 & 25 & 512 & 155 \\
{\tt dog-horse} & 1600 & 0 & 2048 & 1273 \\
\hline
\end{tabular}
\end{small}
\end{center}
\vskip -0.15in
\end{table}
\begin{table}[t]
\caption{Experimental results for different variants of \textsc{AdaNet}.}
\label{tbl:adanet_systems}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{cc}
\hline
\abovespace\belowspace
Algorithm & Accuracy ($\pm$ std. dev.) \\
\hline
\abovespace
\textsc{AdaNet.SD} & 0.9309 $\pm$ 0.0069 \\
\textsc{AdaNet.R} & 0.9336 $\pm$ 0.0075 \\
\textsc{AdaNet.P} & 0.9321 $\pm$ 0.0065 \\
\textsc{AdaNet.D} & 0.9376 $\pm$ 0.0080 \\
\hline
\end{tabular}
\end{small}
\end{center}
\vskip -0.2in
\end{table}
As discussed in
Section~\ref{sec:algorithm}, various heuristics
can be used to generate candidate subnetworks at each
iteration of \textsc{AdaNet}.
In the second set of experiments, we varied the objective
function \eqref{eq:subnetwork_objective}, as well as
the domain over which it is optimized. This allows
us to study the sensitivity of \textsc{AdaNet}
to the choice of heuristic used to generate
candidate subnetworks.
In particular, we have considered the following variants
of \textsc{AdaNet}.
\textsc{AdaNet.R} uses $\mathcal{R}(\mathbf w, \mathbf h) = \Gamma_\mathbf h \|\mathbf w\|_1$
as a regularization term in \eqref{eq:subnetwork_objective}.
As the \textsc{AdaNet} architecture grows, each new subnetwork
is connected to all the previous subnetworks, which
significantly increases the number of connections
in the network and the overall complexity of the model.
\textsc{AdaNet.P} and \textsc{AdaNet.D}
restrict connections to existing subnetworks
in different ways.
\textsc{AdaNet.P}
connects each new subnetwork only to the subnetwork that was
added at the previous iteration.
\textsc{AdaNet.D} uses dropout on the connections
to previously added subnetworks.
Finally, \textsc{AdaNet}
uses the upper bound on Rademacher complexity
from Lemma~\ref{lemma:Rad_Hk}.
\textsc{AdaNet.SD} instead uses standard deviations of the outputs
of the last hidden layer on the training data as a surrogate
for the Rademacher complexities. The advantage of using
this data-dependent measure of complexity is that it
eliminates the hyperparameter $\Lambda$, reducing the hyperparameter
search space.
We report average accuracies across test folds
for {\tt deer-truck} pair in Table~\ref{tbl:adanet_systems}.
\subsection{Criteo Click Rate Prediction}
We also compared \textsc{AdaNet} to \textsc{NN}
on the Criteo Click Rate Prediction
dataset~{\small \url{https://www.kaggle.com/c/criteo-display-ad-challenge}}.
This dataset consists of 7 days of data, where each instance
is an impression with a binary label (clicked or not clicked).
Each impression has 13 count features and 26 categorical features.
Count features have been transformed by taking the natural logarithm.
The values of categorical features appearing fewer than 100 times are replaced
by zeros.
The remaining values are then converted to integers,
which are in turn used as keys to look up embeddings
(trained together with each model).
If the number of possible values for a feature $f$ is $d(f)$,
then the embedding dimension is set to $6d(f)^{1/4}$ when $d(f) > 25$,
and to $d(f)$ otherwise. Missing feature values
are set to zero.
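The embedding-dimension rule above is easy to state in code; a minimal sketch, where rounding $6d^{1/4}$ to the nearest integer is our assumption (the text does not specify it):

```python
def embedding_dim(d):
    """Embedding dimension rule described above: 6 * d^(1/4) when a
    categorical feature has d > 25 distinct values, and d otherwise.
    Rounding of the fractional case is an assumption."""
    if d > 25:
        return int(round(6 * d ** 0.25))
    return d
```

For instance, a feature with $10\mathord,000$ distinct values gets an embedding of dimension $6 \cdot 10 = 60$, while a small feature with $16$ values is embedded at its full cardinality.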
We split the training set provided at the link above into
training, validation and test sets.\footnote{Note that the test
set provided at this link does not have ground truth labels
and cannot be used in our experiments.} Our training
set consists of the first 5 days of data
(32,743,299 instances), and the validation and test
sets each consist of 1 day of data (6,548,659 instances).
Gaussian process bandits were used to find the best hyperparameter
settings on the validation set, both for \textsc{AdaNet} and \textsc{NN}.
For \textsc{AdaNet} we optimized over the following
hyperparameter ranges: $B \in \{125, 256, 512\}$,
$\Lambda \in [1, 1.5]$, $\eta \in [10^{-4}, 10^{-1}]$,
$\lambda \in [10^{-12},10^{-4}]$. For \textsc{NN} the ranges were as
follows: $l \in [1,6]$, $n \in \{250, 512, 1024, 2048\}$,
$\eta \in [10^{-5}, 10^{-1}]$, $\lambda \in [10^{-6},10^{-1}]$. We
train \textsc{NN}s for 100,000 iterations using a mini-batch stochastic
gradient method with a batch size of 512. The same configuration is used at
each iteration of \textsc{AdaNet} to solve
\eqref{eq:subnetwork_objective}. The maximum number of hyperparameter
trials is 2,000 for both methods. Results are presented in
Table~\ref{tbl:criteo}. In this experiment, \textsc{NN} chooses an
architecture with four hidden layers and 512 units in each hidden
layer. Remarkably, \textsc{AdaNet} achieves a better
accuracy with an architecture consisting of a single
layer with just 512 nodes. While the difference
in performance appears small, it is statistically significant
on this challenging task.
\begin{table}[t]
\caption{Experimental results for the Criteo dataset.}
\label{tbl:criteo}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{cc}
\hline
\abovespace\belowspace
Algorithm & Accuracy \\
\hline
\abovespace
\textsc{AdaNet} & 0.7846 \\
\textsc{NN} & 0.7811 \\
\hline
\end{tabular}
\end{small}
\end{center}
\vskip -0.25in
\end{table}
\section{Conclusion}
We presented a new framework and algorithms for adaptively learning
artificial neural networks. Our algorithm, \textsc{AdaNet}, benefits
from strong theoretical guarantees. It simultaneously learns a neural
network architecture and its parameters by balancing a trade-off
between model complexity and empirical risk minimization. The
data-dependent generalization bounds that we presented can help guide
the design of alternative algorithms for this problem. We reported
favorable experimental results demonstrating that our algorithm is
able to learn network architectures that perform better than the ones
found via grid search. Our techniques are general and can be applied
to other neural network architectures such as CNNs and RNNs.
\ignore{
\section*{Acknowledgments}
This work was partly funded by NSF IIS-1117591 and CCF-1535987.
}
{\small
}
\appendix
\section{Related work}
\label{sec:related_work}
There have been several major lines of research on the theoretical
understanding of neural networks. The first one deals with understanding
the properties of the objective function used when training neural
networks
\citep{ChoromanskaHenaffMathieuArousLeCun2014,SagunGuneyArousLeCun2014,ZhangLeeJordan2015,LivniShalevShwartzShamir2014,Kawaguchi2016}.
The second involves studying the black-box optimization algorithms
that are often used for training these networks
\citep{HardtRechtSinger2015,LianHuangLiLiu2015}. The third analyzes
the statistical and generalization properties of the neural networks
\citep{Bartlett1998,ZhangeEtAl2016,NeyshaburTomiokaSrebro2015,SunChenWangLiu2016}.
The fourth takes the generative point of view
\citep{AroraBhaskaraGeMa2014, AroraLiangMa2015}, assuming that the
data actually comes from a particular network and then show how to
recover it. The fifth investigates the expressive ability of neural
networks, analyzing what types of mappings they can learn
\citep{CohenSharirShashua2015,EldanShamir2015,Telgarsky2016,DanielyFrostigSinger2016}.
This paper is most closely related to the work on statistical and
generalization properties of neural networks. However, instead of
analyzing the problem of learning with a fixed architecture we study a
more general task of learning both architecture and model parameters
simultaneously. On the other hand, the insights that we gain by
studying this more general setting can also be directly applied to the
setting with a fixed architecture.
There has also been extensive work involving structure learning for
neural networks
\citep{KwokYeung1997,LeungLamLingTam2003,IslamYaoMurase2003,Lehtokangas1999,IslamSattarAminYaoMurase2009,MaKhorasani2003,NarasimhaDelashmitManryLiMaldonado2008,HanQiao2013,KotaniKajikiAkazawa1997,AlvarezSalzmann2016}.
All these publications seek to grow and prune the neural network
architecture using some heuristic\ignore{ (e.g. genetic, information
theoretic, or correlation)}. More recently, search-based approaches
have been an area of active research
\citep{HaDaiLe16,ChenGoodfellowShlens2015,ZophLe2016,BakerGuptaNaikRaskar2016}.
In this line of work, a learning meta-algorithm is used to search for
an efficient architecture. Once a better architecture is found,
previously trained networks are discarded. This search requires a
significant amount of computational resources. To the best of our
knowledge, none of these methods come with a theoretical guarantee on
their performance. Furthermore, optimization problems associated with
these methods are intractable. In contrast, the structure learning
algorithms introduced in this paper are directly based on
data-dependent generalization bounds and aim to solve a convex
optimization problem by adaptively growing network and preserving
previously trained components.
Finally, \citet{JanzaminSedghiAnandkumar2015} is another paper that
analyzes the generalization and training of two-layer neural networks
through tensor methods. Our work uses different methods, applies to
arbitrary networks, and also learns a network structure from a single
input layer.
\section{Proofs}
\label{app:theory}
We will use the following structural learning guarantee for ensembles of hypotheses.
\begin{theorem}[DeepBoost Generalization Bound, Theorem~1, \citep{CortesMohriSyed2014}]
\label{lemma:db}
Let $\mathcal H$ be a hypothesis set admitting a decomposition
$\mathcal H = \cup_{i = 1}^l \mathcal H_i$ for some $l > 1$. Fix $\rho > 0$. Then,
for any $\delta > 0$, with probability at least $1 - \delta$ over the
draw of a sample $S$ from ${\mathscr D}^m$, the following inequality holds for
any $f = \sum_{t = 1}^T \alpha_t h_t$ with $\alpha_t \in \mathbb R_+$ and
$\sum_{t = 1}^T \alpha_t = 1$:
\begin{align*}
R(f)
&\leq \widehat{R}_{S, \rho}(f) + \frac{4}{\rho} \sum_{t=1}^T \alpha_t \mathfrak{R}_m(\mathcal{H}_{k_t}) + \frac{2}{\rho} \sqrt{\frac{\log l}{m}} \\
&\quad + \sqrt{\bigg\lceil \frac{4}{\rho^2} \log\left(\frac{\rho^2 m}{\log l}\right) \bigg\rceil \frac{\log l}{m} + \frac{\log(\frac{2}{\delta})}{2m}},
\end{align*}
where, for each $h_t \in \mathcal H$, $k_t$ denotes the smallest $k \in [l]$
such that $h_t \in \mathcal H_{k}$.
\end{theorem}
\begin{reptheorem}{th:adanet}
Fix $\rho > 0$. Then, for any $\delta > 0$, with probability at least
$1 - \delta$ over the draw of a sample $S$ of size $m$ from ${\mathscr D}^m$,
the following inequality holds for all
$f = \sum_{k = 1}^l \mathbf w_k \cdot \mathbf h_k \in {\mathscr F}$:
\begin{align*}
R(f)
& \leq \widehat{R}_{S, \rho}(f) + \frac{4}{\rho} \sum_{k = 1}^{l} \big \| \mathbf w_k \big \|_1 \mathfrak R_m(\widetilde {\mathscr H}_k) + \frac{2}{\rho} \sqrt{\frac{\log l}{m}}\\
& \quad + C(\rho, l, m, \delta),
\end{align*}
where
$C(\rho, l, m, \delta) = \sqrt{\big\lceil \frac{4}{\rho^2}
\log(\frac{\rho^2 m}{\log l}) \big\rceil \frac{\log l}{m} +
\frac{\log(\frac{2}{\delta})}{2m}} = \widetilde O\Big(\frac{1}{\rho}
\sqrt{\frac{\log l}{m}}\Big)$.
\end{reptheorem}
\begin{proof}
This result follows directly from Theorem~\ref{lemma:db}.
\end{proof}
Theorem~\ref{th:adanet} can be straightforwardly generalized to
the multi-class classification setting by using the ensemble
margin bounds of
\citet{KuznetsovMohriSyed2014}.
\begin{replemma}{lemma:Rad_Hk_Hk-1}
For any $k > 1$, the empirical Rademacher complexity of ${\mathscr H}_k$ for a
sample $S$ of size $m$ can be upper-bounded as follows in terms of
those of ${\mathscr H}_s$s with $s < k$:
\begin{equation*}
\widehat{\mathfrak R}_S({\mathscr H}_k)
\leq 2 \sum_{s=1}^{k-1} \Lambda_{k,s}
n_{s}^{\frac{1}{q}} \widehat{\mathfrak R}_S({\mathscr H}_{s}).
\end{equation*}
\end{replemma}
\begin{proof}
By definition, $\widehat{\mathfrak R}_S({\mathscr H}_k)$ can be expressed as follows:
\begin{equation*}
\widehat{\mathfrak R}_S({\mathscr H}_k)
= \frac{1}{m} \E_{\boldsymbol \sigma} \mspace{-5mu} \left[ \sup_{\substack{\mathbf h_s \in
{\mathscr H}_s^{n_s}\\\| \mathbf u_s \|_p \leq \Lambda_{k, s}}} \mspace{-5mu} \sum_{i = 1}^m
\sigma_i \sum_{s = 1}^{k - 1} \mathbf u_s \cdot (\varphi_s \circ
\mathbf h_s)(x_i) \right] \mspace{-5mu}.
\end{equation*}
By the sub-additivity of the supremum, it
can be upper-bounded as follows:
\begin{equation*}
\widehat{\mathfrak R}_S({\mathscr H}_k)
\leq \sum_{s = 1}^{k - 1} \frac{1}{m} \E_{\boldsymbol \sigma} \mspace{-5mu} \left[ \sup_{\substack{\mathbf h_s \in
{\mathscr H}_s^{n_s}\\\| \mathbf u_s \|_p \leq \Lambda_{k, s}}} \sum_{i = 1}^m
\sigma_i \mathbf u_s \cdot (\varphi_s \circ \mathbf h_s)(x_i) \right] \mspace{-5mu}.
\end{equation*}
We now bound each term of this sum, starting with the following
chain of equalities:
\begin{align*}
& \frac{1}{m} \E_{\boldsymbol \sigma} \mspace{-5mu} \left[ \sup_{\substack{\mathbf h_s \in
{\mathscr H}_s^{n_s}\\\| \mathbf u_s \|_p \leq \Lambda_{k, s}}} \sum_{i = 1}^m
\sigma_i \mathbf u_s \cdot (\varphi_s \circ \mathbf h_s)(x_i) \right] \\
& = \frac{\Lambda_{k,s}}{m} \E_{\boldsymbol \sigma}\left[ \sup_{\mathbf h_s \in
{\mathscr H}_s^{n_s}} \bigg\| \sum_{i = 1}^m
\sigma_i (\varphi_s \circ \mathbf h_s)(x_i) \bigg\|_q
\right] \\
& = \frac{\Lambda_{k,s} n_{s}^{\frac{1}{q}}}{m} \E_{\boldsymbol \sigma}\left[ \sup_{\substack{h \in
{\mathscr H}_{s}}} \bigg| \sum_{i = 1}^m \sigma_i (\varphi_{s} \circ h)(x_i) \bigg|
\right] \\
& = \frac{\Lambda_{k,s} n_{s}^{\frac{1}{q}} }{m} \E_{\boldsymbol \sigma}\left[ \sup_{\substack{h \in
{\mathscr H}_{s}\\ \sigma \in \set{-1, +1}}} \sigma \sum_{i = 1}^m \sigma_i (\varphi_{s} \circ h)(x_i)
\right],
\end{align*}
where the second equality holds by definition of the dual norm and the
third equality by the following identity:
\begin{align*}
\sup_{z_i \in Z} \| \mathbf z \|_q
& = \sup_{z_i \in Z} \Big[ \sum_{i = 1}^n |z_i|^q \Big]^{\frac{1}{q}}
= \Big[ \sum_{i = 1}^n [\sup_{z_i \in Z} |z_i|]^q \Big]^{\frac{1}{q}}\\
& = n^{\frac{1}{q}} \sup_{z_i \in Z} |z_i|.
\end{align*}
The following chain of inequalities concludes the
proof:
\begin{align*}
&\frac{\Lambda_{k,s} n_{s}^{\frac{1}{q}} }{m} \E_{\boldsymbol \sigma}\left[ \sup_{\substack{h \in
{\mathscr H}_{s}\\ \sigma \in \set{-1, +1}}} \sigma \sum_{i = 1}^m \sigma_i (\varphi_{s} \circ h)(x_i)
\right] \\
& \leq \frac{\Lambda_{k,s} n_{s}^{\frac{1}{q}} }{m} \E_{\boldsymbol \sigma}\left[ \sup_{\substack{h \in
{\mathscr H}_{s}}} \sum_{i = 1}^m \sigma_i (\varphi_{s} \circ h)(x_i) \right] \\
&\quad + \frac{\Lambda_{k,s} n_{s}^{\frac{1}{q}} }{m} \E_{\boldsymbol \sigma}\left[ \sup_{\substack{h \in
{\mathscr H}_{s}}} \sum_{i = 1}^m -\sigma_i (\varphi_{s} \circ h)(x_i)
\right] \\
& = \frac{2 \Lambda_{k,s} n_{s}^{\frac{1}{q}} }{m} \E_{\boldsymbol \sigma}\left[ \sup_{\substack{h \in
{\mathscr H}_{s}}} \sum_{i = 1}^m \sigma_i (\varphi_{s} \circ h)(x_i) \right] \\
& \leq \frac{2 \Lambda_{k,s} n_{s}^{\frac{1}{q}} }{m} \E_{\boldsymbol \sigma}\left[ \sup_{\substack{h \in
{\mathscr H}_{s}}} \sum_{i = 1}^m \sigma_i h(x_i) \right] \\
& \leq 2 \Lambda_{k,s} n_{s}^{\frac{1}{q}} \widehat{\mathfrak R}_S({\mathscr H}_{s}),
\end{align*}
where the second inequality holds by Talagrand's contraction lemma.
\end{proof}
\begin{replemma}{lemma:Rad_Hk}
Let $\Lambda_k \!= \prod_{s = 1}^k 2 \Lambda_{s, s - 1}$ and
$N_k \!= \prod_{s = 1}^k n_{s - 1}$. Then, for any $k \geq 1$, the
empirical Rademacher complexity of ${\mathscr H}_k^*$ for a sample $S$ of size
$m$ can be upper bounded as follows:
\begin{equation*}
\widehat{\mathfrak R}_S({\mathscr H}_k^*)
\leq r_\infty \Lambda_k N_k^{\frac{1}{q}}
\sqrt{\frac{\log (2 n_0)}{2 m}}.
\end{equation*}
\end{replemma}
\begin{proof}
The empirical Rademacher complexity of ${\mathscr H}_1$ can be bounded as follows:
\begin{align*}
\widehat{\mathfrak R}_S({\mathscr H}_1)
& = \frac{1}{m} \E_{\boldsymbol \sigma}\left[ \sup_{\| \mathbf u \|_p \leq \Lambda_{1,0}} \sum_{i = 1}^m \sigma_i \mathbf u \cdot \boldsymbol \Psi(x_i) \right]\\
& = \frac{1}{m} \E_{\boldsymbol \sigma}\left[ \sup_{\| \mathbf u \|_p \leq \Lambda_{1,0}}
\mathbf u \cdot \sum_{i = 1}^m \sigma_i \boldsymbol \Psi(x_i) \right] \\
& = \frac{\Lambda_{1,0}}{m} \E_{\boldsymbol \sigma}\left[ \bigg\| \sum_{i = 1}^m \sigma_i
\boldsymbol \Psi(x_i) \bigg\|_q \right] \\
& \leq \frac{\Lambda_{1,0} n_0^{\frac{1}{q}}}{m} \E_{\boldsymbol \sigma}\left[ \bigg\| \sum_{i = 1}^m \sigma_i
\boldsymbol \Psi(x_i) \bigg\|_\infty \right] \\
& = \frac{\Lambda_{1,0} n_0^{\frac{1}{q}}}{m} \E_{\boldsymbol \sigma}\left[ \max_{j \in [1, n_0]} \bigg | \sum_{i = 1}^m \sigma_i
[\boldsymbol \Psi(x_i)]_j \bigg | \right] \\
& = \frac{\Lambda_{1,0} n_0^{\frac{1}{q}}}{m} \E_{\boldsymbol \sigma}\left[ \max_{\substack{j \in [1,
n_0]\\ s \in \set{-1, +1}}} \sum_{i = 1}^m \sigma_i
s[\boldsymbol \Psi(x_i)]_j \right] \\
& \leq \Lambda_{1,0} n_0^{\frac{1}{q}} r_\infty \sqrt{m} \frac{\sqrt{2 \log (2 n_0)}}{m} \\
&= r_\infty \Lambda_{1,0} n_0^{\frac{1}{q}}\sqrt{\frac{2 \log (2 n_0)}{m}},
\end{align*}
where the first inequality follows from the equivalence of $l_p$ norms,
the second equality from the definition of the dual norm, and the last
inequality from Massart's lemma.
The result then follows by application of Lemma~\ref{lemma:Rad_Hk_Hk-1}.
\end{proof}
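To make the lemma concrete, the bound $r_\infty \Lambda_k N_k^{1/q} \sqrt{\log(2 n_0)/(2m)}$ can be evaluated numerically. The sketch below uses illustrative parameter values (the function name \texttt{rad\_bound} and the chosen widths are not from the paper) and shows how the bound grows with depth through the products $\Lambda_k$ and $N_k$:

```python
import math

def rad_bound(k, Lam, n, r_inf, m, q):
    """Evaluate r_inf * Lambda_k * N_k^(1/q) * sqrt(log(2 n_0) / (2 m)),
    with Lambda_k = prod_{s=1}^k 2*Lam[s-1] and N_k = prod_{s=1}^k n[s-1].
    `Lam` holds the per-layer bounds Lambda_{s,s-1} and `n` the layer
    widths n_0, ..., n_{k-1}."""
    Lambda_k = 1.0
    N_k = 1.0
    for s in range(k):
        Lambda_k *= 2.0 * Lam[s]
        N_k *= n[s]
    return (r_inf * Lambda_k * N_k ** (1.0 / q)
            * math.sqrt(math.log(2 * n[0]) / (2 * m)))
```

As expected from the products $\Lambda_k$ and $N_k$, the bound increases geometrically with depth $k$ and decreases as $1/\sqrt{m}$ with the sample size.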
\boldsymbol \epsilonpsilonilongin{repcorollary}{cor:adanet}
Fix $\rho > 0$. Let
$\Lambda_k = \prod_{s = 1}^k 4 \Lambda_{s, s - 1}$ and
$N_k \!= \prod_{s = 1}^k n_{s - 1}$. Then, for any $\delta > 0$, with
probability at least $1 - \delta$ over the draw of a sample $S$ of
size $m$ from ${\mathscr D}^m$, the following inequality holds for all
$f = \sum_{k = 1}^l \mathbf w_k \cdot \mathbf h_k \in {\mathscr F}^*$:
\begin{align*}
R(f)
& \leq \widehat{R}_{S, \rho}(f) + \frac{2}{\rho} \sum_{k = 1}^{l} \big \| \mathbf w
_k \big \|_1 \bigg[\overline r_\infty \Lambda_k N_k^{\frac{1}{q}} \sqrt{\frac{2 \log (2 n_0)}{m}}\bigg] \\
& \quad + \frac{2}{\rho} \sqrt{\frac{\log l}{m}} + C(\rho, l, m, \delta),
\end{align*}
where $C(\rho, l, m, \delta) = \sqrt{\big\lceil \frac{4}{\rho^2}
\log(\frac{\rho^2 m}{\log l}) \big\rceil \frac{\log l}{m} +
\frac{\log(\frac{2}{\delta})}{2m}} = \widetilde O\Big(\frac{1}{\rho}
\sqrt{\frac{\log l}{m}}\Big)$, and where $\overline r_\infty = \E_{S \sim {\mathscr D}^m}[r_\infty]$.
\end{repcorollary}
\begin{proof}
Since ${\mathscr F}^*$ is the convex hull of ${\mathscr H}^*$, we can apply
Theorem~\ref{th:adanet} with $\mathfrak R_m(\widetilde {\mathscr H}^*_k)$ instead of
$\mathfrak R_m(\widetilde {\mathscr H}_k)$. Observe that, since for any $k \in [l]$,
$\widetilde {\mathscr H}^*_k$ is the union of ${\mathscr H}^*_k$ and its reflection, to
derive a bound on $\mathfrak R_m(\widetilde {\mathscr H}^*_k)$ from a bound on
$\mathfrak R_m(\widetilde {\mathscr H}_k)$ it suffices to double each $\Lambda_{s, s -
1}$. Combining this observation with the bound of
Lemma~\ref{lemma:Rad_Hk} completes the proof.
\end{proof}
\section{Alternative Algorithm}
\label{sec:alternatives}
In this section, we present an alternative algorithm, \textsc{AdaNet.CVX}, that
generates candidate subnetworks in closed-form using Banach space duality.
As in Section~\ref{sec:algorithm}, let $f_{t-1}$ denote the \textsc{AdaNet}
model after $t-1$ rounds, and let $l_{t-1}$ be the depth of the architecture.
\textsc{AdaNet.CVX} will consider $l_{t-1} + 1$ candidate subnetworks, one
for each layer in the model plus an additional one for extending the model.
Let $h^{(s)}$ denote the candidate subnetwork associated to layer $s \in [l_{t-1}+1]$.
We define $h^{(s)}$ to be a single unit in layer $s$ that is connected
to units of $f_{t-1}$ in layer $s-1$:
\begin{align*}
&h^{(s)} \in \{x \mapsto \mathbf u \cdot (\varphi_{s-1} \circ \mathbf h_{s-1,t-1})(x) : \\
&\quad \mathbf u \in \mathbb R^{n_{s-1,t-1}}, \|\mathbf u\|_p \leq \Lambda_{s,s-1}\}.
\end{align*}
See Figure~\ref{fig:adanetcvx} for an illustration of the type of neural network
designed using these candidate subnetworks.
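A candidate subnetwork of this form is simply a linear unit stacked on the activated outputs of an existing layer. The following sketch illustrates the construction (`candidate_unit`, `phi`, and `h_prev` are illustrative stand-ins for the weight vector $\mathbf u$, the activation $\varphi_{s-1}$, and the tuple $\mathbf h_{s-1,t-1}$; they are not part of the paper's code):

```python
import numpy as np

def candidate_unit(u, phi, h_prev):
    """Sketch of h^{(s)}: a single unit whose input is the (activated)
    output of the previous-layer units h_prev of the current model."""
    return lambda x: float(u @ phi(np.array([h(x) for h in h_prev])))

# Example: two fixed first-layer units feeding one new second-layer unit.
h_prev = [lambda x: x[0] - x[1], lambda x: x[0] + x[1]]
h_new = candidate_unit(np.array([1.0, 0.5]), np.tanh, h_prev)
out = h_new(np.array([2.0, 1.0]))   # u . tanh([h1(x), h2(x)])
```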
\begin{figure}[t]
\centering
\includegraphics[scale=.35]{nn1}
\caption{Illustration of a neural network designed by \textsc{AdaNet.CVX}.
Units at each layer (other than the output layer) are only connected
to units in the layer below.}
\vskip -.1in
\label{fig:adanetcvx}
\end{figure}
For convenience, we denote this space of subnetworks by ${\mathscr H}_s'$:
\begin{align*}
&{\mathscr H}_{s}' = \{x \mapsto \mathbf u \cdot (\varphi_{s-1} \circ \mathbf h_{s-1,t-1})(x) : \\
&\quad \mathbf u \in \mathbb R^{n_{s-1,t-1}}, \|\mathbf u\|_p \leq \Lambda_{s,s-1}\}.
\end{align*}
Now recall the notation
\begin{align*}
&F_t(w, h) \\
&= \frac{1}{m} \sum_{i=1}^m \Phi\Big(1 - y_i \big(f_{t-1}(x_i) + w h(x_i)\big)\Big) + \Gamma_{h} |w|
\end{align*}
used in Section~\ref{sec:algorithm}. As in \textsc{AdaNet},
the candidate subnetwork chosen by \textsc{AdaNet.CVX} is given
by the following optimization problem:
\begin{align*}
\argmin_{h \in \cup_{s=1}^{l_{t-1}+1} {\mathscr H}_{s}'} \min_{w \in \mathbb R} F_t(w, h).
\end{align*}
Remarkably, the subnetwork that solves this infinite dimensional optimization
problem can be obtained directly in closed-form:
\begin{theorem}[\textsc{AdaNet.CVX} Optimization]
\label{th:adanetcvx}
Let $(w^*, h^*)$ be the solution to the following optimization problem:
\begin{align*}
\argmin_{h \in \cup_{s=1}^{l_{t-1}+1} {\mathscr H}_{s}'} \min_{w \in \mathbb R} F_t(w, h).
\end{align*}
Let $D_t$ be a distribution over the sample $(x_i,y_i)_{i=1}^m$ such that $D_t(i) \propto \Phi'(1 - y_i f_{t-1}(x_i))$, and denote
$\epsilon_{t,h} = \E_{i \sim D_t}[y_ih(x_i)]$.
Then,
$$w^* h^* = w^{(s^*)}h^{(s^*)},$$
where $(w^{(s^*)}, h^{(s^*)})$ are defined by:
\begin{align*}
&s^* = \argmax_{s\in[l_{t-1}+1]} \Lambda_{s,s-1} \|\boldsymbol \epsilon_{t,\mathbf h_{s-1,t-1}}\|_q, \\
&u^{(s^*)}_i = \frac{\Lambda_{s^*,s^*-1}}{\|\boldsymbol \epsilon_{t,\mathbf h_{s^*-1,t-1}}\|_{q}^{\frac{q}{p}}} |\epsilon_{t,h_{s^*-1,t-1,i}}|^{q-1} \sgn\big(\epsilon_{t,h_{s^*-1,t-1,i}}\big), \\
&h^{(s^*)} = \mathbf u^{(s^*)} \cdot (\varphi_{s^*-1} \circ \mathbf h_{s^*-1,t-1}), \\
&w^{(s^*)} = \argmin_{w \in \mathbb R}\frac{1}{m} \sum_{i=1}^m \Phi\Big(1 - y_i f_{t-1}(x_i) \\
&\quad - y_iw h^{(s^*)}(x_i)\Big) + \Gamma_{s^*} |w|.
\end{align*}
\epsilonnd{theorem}
\begin{proof}
By definition,
\begin{align*}
&F_t(w, h)\\
&= \frac{1}{m} \sum_{i=1}^m \Phi\Big(1 - y_i \big(f_{t-1}(x_i) + w h(x_i)\big)\Big) + \Gamma_{h} |w|.
\end{align*}
Notice that the
minimizer over $\cup_{s=1}^{l_{t-1}+1} {\mathscr H}_{s}'$ can be determined by
comparing the minimizers over each ${\mathscr H}_s'$.
Moreover, since the penalty term $\Gamma_h |w|$ has the same contribution for every
$h \in {\mathscr H}_{s}'$, it has no impact on the optimal choice of $h$ over ${\mathscr H}_s'$. Thus, to find the minimizer
over each ${\mathscr H}_{s}'$, we can compute the derivative of $F_t - \Gamma_h |w|$ with respect to
$w$ at $w = 0$:
\begin{align*}
&\frac{d(F_t - \Gamma_h |w|)}{dw}(0, h) \\
&= \frac{-1}{m} \sum_{i=1}^m y_i h(x_i) \Phi'\Big( 1 - y_i f_{t-1}(x_i) \Big) .
\end{align*}
Now if we let
$$D_t(i) S_t = \Phi'\Big( 1 - y_i f_{t-1}(x_i) \Big),$$
then this expression is equal to
\begin{align*}
&-\left[ \sum_{i=1}^m y_i h(x_i) D_t(i) \right] \frac{S_t}{m}
= -\epsilon_{t,h} \frac{S_t}{m},
\end{align*}
where $\epsilon_{t,h} = \E_{i \sim D_t} [y_i h(x_i)]$.
Thus, it follows that for any $s \in [l_{t-1} + 1]$, the steepest descent direction satisfies
\begin{align*}
\argmin_{h \in {\mathscr H}_s'} \frac{d(F_t - \Gamma_h |w|)}{dw}(0, h)
&= \argmax_{h \in {\mathscr H}_s'} \epsilon_{t,h}.
\end{align*}
Note that we still need to search for the optimal descent coordinate
over an infinite dimensional space. However, we can write
\begin{align*}
&\max_{h \in {\mathscr H}_s'} \epsilon_{t,h} \\
&= \max_{h \in {\mathscr H}_s'}\E_{i \sim D_t} [y_i h(x_i)] \\
&= \max_{\|\mathbf u\|_p \leq \Lambda_{s,s-1}} \E_{i \sim D_t} [y_i \mathbf u \cdot (\varphi_{s-1} \circ \mathbf h_{s-1,t-1})(x_i)] \\
&= \max_{\|\mathbf u\|_p \leq \Lambda_{s,s-1}} \mathbf u \cdot \E_{i \sim D_t} [y_i (\varphi_{s-1} \circ \mathbf h_{s-1,t-1})(x_i)].
\end{align*}
Now, if we denote by $\mathbf u^{(s)}$ the connection weights associated to $h^{(s)}$, then we claim that
\begin{align*}
u^{(s)}_i = \frac{\Lambda_{s,s-1}}{\|\boldsymbol \epsilon_{t,\mathbf h_{s-1,t-1}}\|_{q}^{\frac{q}{p}}} |\epsilon_{t,h_{s-1,t-1,i}}|^{q-1} \sgn\big(\epsilon_{t,h_{s-1,t-1,i}}\big),
\end{align*}
which is a consequence of Banach space duality.
To see this, note first that by H\"{o}lder's inequality, every $\mathbf u \in \mathbb R^{n_{s-1,t-1}}$ with $\|\mathbf u\|_p \leq \Lambda_{s,s-1}$ satisfies:
\begin{align*}
&\mathbf u \cdot \E_{i \sim D_t} [y_i (\varphi_{s-1} \circ \mathbf h_{s-1,t-1})(x_i)]\\
&\leq \|\mathbf u\|_p \|\E_{i \sim D_t} [y_i (\varphi_{s-1} \circ \mathbf h_{s-1,t-1})(x_i)]\|_q\\
&\leq \Lambda_{s,s-1} \|\E_{i \sim D_t} [y_i (\varphi_{s-1} \circ \mathbf h_{s-1,t-1})(x_i)]\|_q.
\end{align*}
At the same time, our choice of $\mathbf u^{(s)}$ also attains this upper bound:
\begin{align*}
&\mathbf u^{(s)} \cdot \boldsymbol \epsilon_{t,\mathbf h_{s-1,t-1}} \\
&= \sum_{i=1}^{n_{s-1,t-1}} u^{(s)}_i \epsilon_{t,h_{s-1,t-1,i}} \\
&= \sum_{i=1}^{n_{s-1,t-1}} \frac{\Lambda_{s,s-1}}{\|\boldsymbol \epsilon_{t,\mathbf h_{s-1,t-1}}\|_q^{\frac{q}{p}}} |\epsilon_{t,h_{s-1,t-1,i}}|^q \\
&= \frac{\Lambda_{s,s-1}}{\|\boldsymbol \epsilon_{t,\mathbf h_{s-1,t-1}}\|_q^{\frac{q}{p}}}\|\boldsymbol \epsilon_{t,\mathbf h_{s-1,t-1}}\|_q^q \\
&= \Lambda_{s,s-1} \|\boldsymbol \epsilon_{t,\mathbf h_{s-1,t-1}}\|_q,
\end{align*}
where the last step uses $q - \frac{q}{p} = 1$.
Thus, $\mathbf u^{(s)}$ and the associated unit $h^{(s)}$ define the coordinate that maximizes the magnitude of the derivative of $F_t$ with respect to $w$ among all
subnetworks in ${\mathscr H}_s'$, and $h^{(s)}$ achieves the value $\Lambda_{s,s-1} \|\boldsymbol \epsilon_{t,\mathbf h_{s-1,t-1}}\|_q$.
This implies that by computing $\Lambda_{s,s-1} \|\boldsymbol \epsilon_{t,\mathbf h_{s-1,t-1}}\|_q$ for every $s \in [l_{t-1}+1]$, we can find
the descent coordinate across all $s \in [l_{t-1}+1]$ that improves the objective by the largest amount. Moreover, we can then
solve for the optimal step size in this direction to compute the weight update.
\end{proof}
The theorem above defines the choice of descent coordinate at each round
and motivates the following algorithm, \textsc{AdaNet.CVX}. At each round,
\textsc{AdaNet.CVX} can design the optimal candidate subnetwork within
its searched space in closed form, leading to an extremely efficient
update.
However, this comes at the cost of a more restrictive search space
than the one used in \textsc{AdaNet}.
The pseudocode of \textsc{AdaNet.CVX} is provided in Figure~\ref{algo:adanetcvx}.
\begin{figure}[t]
\begin{ALGO}{AdaNet.CVX}{S = ((x_i, y_i))_{i = 1}^m}
\SET{f_0}{0}
\DOFOR{t \gets 1 \mbox{ {\bf to }} T}
\SET{s^*}{\argmax_{s\in[l_{t-1}+1]} \Lambda_{s,s-1} \|\boldsymbol \epsilon_{t,\mathbf h_{s-1,t-1}}\|_q}
\SET{u^{(s^*)}_i}{\frac{\Lambda_{s^*,s^*-1}}{\|\boldsymbol \epsilon_{t,\mathbf h_{s^*-1,t-1}}\|_{q}^{\frac{q}{p}}} |\epsilon_{t,h_{s^*-1,t-1,i}}|^{q-1} \sgn\big(\epsilon_{t,h_{s^*-1,t-1,i}}\big)}
\SET{\mathbf h'}{\mathbf u^{(s^*)} \cdot (\varphi_{s^*-1} \circ \mathbf h_{s^*-1,t-1})}
\SET{\eta'}{\textsc{Minimize}(\tilde{F}_t(\eta, \mathbf h'))}
\SET{f_t}{f_{t-1} + \eta' \cdot \mathbf h'}
\OD
\RETURN{f_T}
\end{ALGO}
\vskip -.5cm
\caption{Pseudocode of the AdaNet.CVX algorithm.}
\vskip -.0cm
\label{algo:adanetcvx}
\end{figure}
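The loop above can be sketched concretely. The following single round of \textsc{AdaNet.CVX} is written under simplifying assumptions (not the paper's implementation): the exponential surrogate $\Phi(x) = e^x$, precomputed activated layer outputs `layer_outputs[s]` standing in for $(\varphi_{s-1} \circ \mathbf h_{s-1,t-1})(x_i)$, and a coarse grid search in place of the one-dimensional convex solve for the step size.

```python
import numpy as np

def adanet_cvx_round(f_prev, y, layer_outputs, Lams, Gammas, p):
    """One sketched round: pick layer s*, build the closed-form unit,
    then choose the step size by grid search on the penalized objective."""
    q = p / (p - 1.0)
    w_phi = np.exp(1.0 - y * f_prev)           # Phi'(1 - y_i f_{t-1}(x_i))
    D = w_phi / w_phi.sum()                    # distribution D_t
    # weighted-margin vector eps_{t, h_{s-1,t-1}} for each layer s
    eps = [A.T @ (D * y) for A in layer_outputs]
    scores = [L * np.linalg.norm(e, q) for L, e in zip(Lams, eps)]
    s = int(np.argmax(scores))                 # s^*
    e = eps[s]
    u = Lams[s] * np.sign(e) * np.abs(e) ** (q - 1) / np.linalg.norm(e, q) ** (q / p)
    h = layer_outputs[s] @ u                   # h^{(s^*)}(x_i) on the sample
    def F(w):                                  # penalized objective in w
        return np.mean(np.exp(1.0 - y * (f_prev + w * h))) + Gammas[s] * abs(w)
    grid = np.linspace(-2.0, 2.0, 401)         # crude stand-in for Minimize
    w_star = grid[np.argmin([F(w) for w in grid])]
    return f_prev + w_star * h, s, w_star
```

Since the grid contains $w = 0$, the penalized objective after the round is never worse than before it.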
\section{Algorithm}
\label{sec:algorithm}
This section describes our algorithm, \textsc{AdaNet}, for \emph{adaptive} deep
learning. \textsc{AdaNet} adaptively grows the structure of a neural network,
balancing model complexity with margin maximization.
We start with a high-level description of the algorithm before proceeding to a
more detailed presentation.
\subsection{Overview}
Our algorithm starts with the network reduced to the input layer,
corresponding to the input feature vector, and an output unit, fully
connected, and then augments or modifies the network over $T$
rounds. At each round, it either augments the network with a new unit
or updates the weights defining the function $h \in {\mathscr H}_k$ of an
existing unit of the network at layer $k$. A new unit may be selected
at any layer $k \in [1,l]$ already populated or start on a new layer
but, in all cases, it is chosen with links only to existing units in
the network in the layer below plus a connection to the output
unit. Existing units are either those of the input layer or units
previously added to the network by the
algorithm. Figure~\ref{fig:nn}(a) illustrates this design.
The choice of the unit to construct or update at each round is a key
aspect of our algorithm. This is done by iteratively optimizing an
objective function that we describe in detail later. At each round,
the choice of the best unit minimizing the current objective is
subject to the following trade-off: the best unit selected from a lower
layer may not help reduce the objective as much as one selected from a
higher layer; on the other hand, units selected from higher layers
augment the network with substantially more complex functions, thereby
increasing the risk of overfitting. To resolve this tension
quantitatively, our algorithm selects the best unit at each round
based on a combination of the amount by which it helps reduce the
objective and the complexity of the family of hypotheses defined by units
at that layer.
The output unit of our network is connected to all the
units created during these $T$ rounds, so that the hypothesis will
directly use all units in the network. As our theory demonstrated in
Theorem~\ref{th:adanet}, this can
significantly reduce the complexity of our model by assigning more weight
to the shallower units. At the same time, it also provides us the flexibility
to learn larger and more complex models. In fact, the family of neural networks
that we search over is actually larger than the family of
feedforward neural networks typically trained with back-propagation, due
to these additional connections.
\begin{figure}[t]
\centering
\begin{tabular}{c@{\hspace{1cm}}c}
(a) & \includegraphics[scale=.35]{nn1} \\
(b) & \includegraphics[scale=.35]{nn2} \\
\end{tabular}
\caption{Illustration of the algorithm's incremental construction of a
neural network: (a) standard construction where units at each layer
other than the output layer can only be connected to the layer
below; (b) more complex construction where units can be connected to
other layers as well.}
\vskip -.1in
\label{fig:nn}
\end{figure}
An additional more sophisticated variant of our algorithm is depicted
in Figure~\ref{fig:nn}(b). In this design, the units created at each
round can be connected not just to the units of the previous layer,
but to those of any layer below. This allows for greater model flexibility,
and by modifying the definitions of the hypothesis sets~\eqref{eq:H1}
and~\eqref{eq:Hk}, we can adopt a principled complexity-sensitive way of
learning these types of structures as well.
In the next section, we give a more formal description of our
algorithm, including the exact optimization problem as well as a specific
search process for new units.
\subsection{Objective function}
\label{sec:obj}
Recall the definition of our hypothesis space
$\conv({\mathscr H})$, where ${\mathscr H} = \bigcup_{k = 1}^l ({\mathscr H}_k \cup (-{\mathscr H}_k))$.
This convex hull contains all neural networks of depth up to $l$ and
naturally includes all neural networks of depth $l$ -- the common
hypothesis space in deep learning. Note that the set of all functions
in ${\mathscr H}$ is infinite, since the weights corresponding to any function
can be any real value inside their respective $l_p$ balls.
Despite this challenge, we will efficiently discover a finite subset
of ${\mathscr H}$, denoted by $\set{h_1, \ldots, h_N}$, that will serve as the
basis for our convex combination. Here, $N$ will also represent the
maximum number of units in our network. Thus, we have that
$N = \sum_{k = 1}^l n_k$, and $N$ will generally be assumed to be very
large. Moreover, we will actually define and update our set
$\set{h_1, \ldots, h_N}$ \emph{online}, in a manner that will be made
precise in Section~\ref{sec:search}. We will also rely on the natural bijection between the
two enumerations $\set{h_1,\ldots, h_N}$ and
$\set{h_{k, j}}_{k\in[l], j\in[n_k]}$, depending on which is more
convenient. The latter is useful for saying that
$h_{k, j} \in {\mathscr H}_{k}$.
Moreover, for any $j \in [N]$, we will denote by $k_j \in [l]$, the
layer in which hypothesis $h_j$ lies. For simplicity, we will also
write as $r_j$ the Rademacher complexity of the family of functions
${\mathscr H}_{k_j}$ containing $h_j$: $r_j = \mathfrak R_m({\mathscr H}_{k_j})$.
Let $x \mapsto \Phi(-x)$ be a non-increasing convex function
upper-bounding the zero-one loss, $x \mapsto 1_{x \leq 0}$, with
$\Phi$ differentiable over $\mathbb R$ and $\Phi'(x) \neq 0$ for all
$x$. $\Phi$ may, for instance, be the exponential function,
$\Phi(x) = e^x$ as in the AdaBoost algorithm of \cite{FreundSchapire97} or the
logistic function, $\Phi(x) = \log(1 + e^x)$ as in logistic
regression.
As in regularized boosting style methods (e.g. \citep{RatschMikaWarmuth2001}), our algorithm will apply coordinate
descent to the following objective function over $\mathbb R^N$:
\begin{align}
F(\mathbf w) = \frac{1}{m} \sum_{i = 1}^m \Phi\Big(1
- y_i \sum_{j = 1}^N w_j h_j(x_i) \Big) + \sum_{j =
1}^N \Gamma_j |w_j|,
\end{align}
where $\Gamma_j = \lambda r_j + \beta$ with $\lambda \geq 0$ and
$\beta \geq 0$ hyperparameters. The objective function is
the sum of the empirical error based on a convex surrogate loss
function $x \mapsto \Phi(-x)$ of the binary loss and a
regularization term. The regularization term is a weighted-$l_1$ penalty
that contains two sub-terms: a standard norm-1 regularization which admits
$\beta$ as a parameter, and a term that discriminates the functions $h_j$
based on their complexity (i.e. $r_j$) and which
admits $\lambda$ as a parameter.
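The objective is straightforward to evaluate. The following sketch uses the logistic surrogate $\Phi(x) = \log(1 + e^x)$, one of the choices named above; `H`, `r`, `lam`, and `beta` are illustrative stand-ins for the hypothesis evaluations $h_j(x_i)$, the complexities $r_j$, and the hyperparameters $\lambda, \beta$:

```python
import numpy as np

def adanet_objective(w, H, y, r, lam, beta):
    """F(w) with Phi(x) = log(1 + e^x) (a sketch).
    H[i, j] = h_j(x_i); r[j] = Rademacher complexity of h_j's layer;
    Gamma_j = lam * r_j + beta weights the l_1 penalty."""
    gamma = lam * np.asarray(r) + beta
    margins = 1.0 - y * (H @ w)
    # logaddexp(0, x) = log(1 + e^x), computed stably
    return np.mean(np.logaddexp(0.0, margins)) + gamma @ np.abs(w)
```

At $\mathbf w = \mathbf 0$ the penalty vanishes and every surrogate term equals $\Phi(1) = \log(1 + e)$.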
Our algorithm can be viewed as an instance of the DeepBoost algorithm
of \citet{CortesMohriSyed2014}. However, unlike DeepBoost, which
combines decision trees, the \textsc{AdaNet} algorithm learns a deep neural
network, which requires both deep learning-specific theoretical analysis
as well as an online method for constructing and searching new units.
Both of these aspects differ significantly from the decision tree
framework in DeepBoost, and the latter is particularly challenging due to the
fact that our hypothesis space ${\mathscr H}$ is infinite.
\subsection{Coordinate descent}
\label{sec:coord}
Let $\mathbf w_t = (w_{t, 1}, \ldots, w_{t, N})^\top$ denote
the vector obtained after $t \geq 1$ iterations and let $\mathbf w_0 =
\mathbf 0$. Let $e_k$ denote the $k$th unit vector in $\mathbb R^N$, $k \in [1,
N]$. The direction $e_k$ and the step $\eta$ selected at the $t$th
round are those minimizing $F(\mathbf w_{t - 1} + \eta e_k)$. Let $f_{t -
1} = \sum_{j = 1}^N w_{t - 1, j} h_j$. Then we can write
\begin{align*}
F(\mathbf w_{t - 1} + \eta e_k)
& \!=\! \frac{1}{m} \sum_{i = 1}^m \Phi\Big( 1 - y_i f_{t - 1}(x_i)
\!-\! \eta y_i h_k(x_i) \Big)\\
&\quad + \sum_{j \neq k} \Gamma_j |w_{t -1,
j}| + \Gamma_k |w_{t -1, k} + \eta|.
\end{align*}
For any $t \in [1, T]$, we will maintain the following distribution
${\mathscr D}_t$ over our sample: ${\mathscr D}_t(i) = \frac{\Phi' \big( 1 - y_i f_{t - 1}(x_i) \big)}{S_t},$
where $S_t$ is a normalization factor, $S_t = \sum_{i = 1}^m \Phi' ( 1
- y_i f_{t - 1}(x_i) )$. Moreover, for any $s \in [1, T]$ and $j \in [1, N]$
and a given hypothesis $h_j$ bounded by $C > 0$, we
will consider $\epsilon_{s, j}$, the weighted error of hypothesis $h_j$ over the
distribution ${\mathscr D}_s$: $\epsilon_{s, j} = \frac{C}{2}\Big[ 1 - \E_{i \sim {\mathscr D}_s}\big[\frac{y_i h_j(x_i)}{C}\big] \Big].$
These weighted errors will be crucial for ``scoring'' the direction that
the algorithm takes at each round.
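The distribution update and the weighted errors can be sketched as follows, here for the exponential surrogate $\Phi(x) = e^x$ (an assumption; `H` and `C` are illustrative stand-ins for the hypothesis evaluations and the per-hypothesis bounds $C_j$):

```python
import numpy as np

def distribution_and_errors(f_prev, H, y, C):
    """D_t, S_t, and the weighted errors eps_{t,j} (sketch).
    H[i, j] = h_j(x_i); C[j] bounds |h_j|; Phi'(x) = e^x."""
    w_phi = np.exp(1.0 - y * f_prev)     # Phi'(1 - y_i f_{t-1}(x_i))
    S = w_phi.sum()                      # normalization factor S_t
    D = w_phi / S                        # distribution D_t
    # eps_{t,j} = (C_j/2) * (1 - E_{i~D_t}[ y_i h_j(x_i) / C_j ])
    eps = 0.5 * (np.asarray(C) - D @ (y[:, None] * H))
    return D, S, eps
```

A hypothesis that is always correct with maximal margin ($y_i h_j(x_i) = C_j$) gets weighted error $0$, and a maximally wrong one gets $C_j$.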
\subsection{Search and active coordinates}
\label{sec:search}
As already mentioned, a key aspect of our AdaNet algorithm is the
construction of new hypotheses at each round. We do not enumerate
all $N$ hypotheses at the beginning of the algorithm, because it would be
extremely difficult to select good candidates before seeing the data.
At the same time, searching through all possible unit combinations using the data
would be a computationally infeasible task.
Instead, our search procedure will be online, building upon the units
that we already have in our network and selecting at most a single new unit at a time.
Specifically, at each round, the algorithm
selects a unit out of the following set of ``active'' candidates: existing units in our
network or new units with connections to existing units in some layer $k$.
There are many potential methods to construct new candidate units, and
at first glance, scoring every possible new unit with connections to
existing units may seem a computational impediment.
However, by using Banach space duality, we can compute directly and
efficiently in closed form the new unit that best optimizes the
objective at each layer.
\begin{figure}[t]
\begin{ALGO}{AdaNet}{S = ((x_i, y_i))_{i = 1}^m, (n_k, \Lambda_k, \Gamma_k, C_k)_{k = 1}^l}
\SET{((w_{0,k, j})_{k, j}, {\mathscr D}_1, \hat{l}, (\hat{n}_k)_{k})}{\textsc{Init}\big(m, l, (n_k)_{k = 1}^l\big)}
\DOFOR{t \gets 1 \mbox{ {\bf to }} T}
\SET{(d_{k, j})_{k, j}}{\textsc{ExistingNodes}\big({\mathscr D}_t, (\mathbf h_k)_{k = 1}^{\hat{l}}\big)}
\SET{(\tilde{d}_k, \tilde{u}_k)_{k=2}^{(\hat{l}+1)\wedge l}}{\textsc{NewNodes}\big({\mathscr D}_t, (\mathbf h_k)_{k = 1}^{\hat{l}}\big)}
\SET{((k, j), \epsilon_t )}{\textsc{BestNode}\big((d_{k, j})_{k, j}, (\tilde{d}_k)_{k}\big)}
\SET{\mathbf w_t}{\textsc{ApplyStep}\big((k, j), \mathbf w_{t-1}\big)}
\SET{({\mathscr D}_{t+1}, S_{t+1})}{\textsc{UpdateDistrib}\big(\mathbf w_t\big)}
\OD
\SET{f}{\sum_{k = 1}^{\hat{l}} \sum_{j = 1}^{\hat{n}_{k}} w_{T, k, j} h_{k, j}}
\RETURN{f}
\end{ALGO}
\vskip -.5cm
\caption{Pseudocode of the AdaNet algorithm.}
\vskip -.0cm
\label{alg:adanet}
\end{figure}
\begin{figure}[t]
\begin{ALGO}{ExistingNodes}{{\mathscr D}_t, (h_{k, j})_{k\in[1,\hat{l}], j\in[1,\hat{n}_k]}}
\DOFOR{k \gets 1 \mbox{ {\bf to }} \hat{l}}
\DOFOR{j \gets 1 \mbox{ {\bf to }} \hat{n}_{k}}
\SET{\epsilon_{t, k, j}}{\frac{C_k}{2}\Big[1 - \E_{i \sim {\mathscr D}_t}\big[\frac{y_i h_{k, j}(x_i)}{C_k}\big] \Big]}
\IF{(w_{t - 1, k, j} \neq 0)}
\SET{d_{k, j}}{\big(\epsilon_{t, k, j} - \frac{1}{2}C_k \big) + \sgn(w_{t - 1, k, j}) \frac{\Gamma_k m}{2S_t} }
\ELSEIF{\big(\big| \epsilon_{t, k, j} - \frac{1}{2}C_k \big| \leq \frac{\Gamma_k m}{2 S_t} \big)}
\SET{d_{k, j}}{0}
\ELSE
\SET{d_{k, j}}{\big(\epsilon_{t, k, j} - \frac{1}{2}C_k \big) - \sgn( \epsilon_{t, k, j} - \frac{1}{2}C_k ) \frac{\Gamma_k m}{2S_t} }
\FI
\OD
\OD
\RETURN{(d_{k, j})_{k\in[1,\hat{l}], j\in[1,\hat{n}_k]}}
\end{ALGO}
\vskip -.5cm
\caption{Pseudocode for computing the potential contribution of each existing unit.}
\vskip -.2cm
\label{alg:existing}
\end{figure}
\subsubsection{New candidate units}
Given a distribution ${\mathscr D}$ over the sample $S$ and a tuple of
hypotheses $\mathbf h_k = (h_{k,1}, \ldots, h_{k,n_k}) \subset {\mathscr H}_k$, we
denote by $\Margin({\mathscr D}, h_{k, j})$ the weighted margin of hypothesis
$h_{k, j}$ composed with its activation on distribution ${\mathscr D}$:
$$\Margin({\mathscr D}, h_{k, j}) = \mathbb{E}_{i \sim {\mathscr D}}[y_i (\varphi_{k}\circ h_{k, j})(x_i)],$$
and we denote by $\Margin({\mathscr D}, \mathbf h_k)$ the vector of weighted margins of all units in layer $k$:
\begin{align*}
\Margin({\mathscr D}, \mathbf h_k) = \big(
&\mathbb{E}_{i \sim {\mathscr D}}[y_i (\varphi_{k}\circ h_{k,1})(x_i)],\ldots, \\
&\mathbb{E}_{i \sim {\mathscr D}}[y_i (\varphi_{k}\circ h_{k,n_k})(x_i)] \big).
\end{align*}
For any layer $k$ in an existing neural network, a vector $u \in \mathbb R^{n_{k-1}}$ uniquely specifies a unit that connects from
units in the previous layer $k-1$. Let $\tilde{u}_k$ denote such a new unit in layer $k$, and let $\hat{l}$ be the number of layers
with non-zero units. Then
for layers $2 \leq k \leq \hat{l}$, if the number of units is less than the maximum allowed size $n_k$, we will consider as candidates
the units with the largest weighted margin. Remarkably, these units can be computed efficiently and in closed form:
\begin{lemma}[Construction of new candidate units, see proof in Appendix~\ref{app:alg}]
\label{lemma:duality}
Fix $(h_{k-1, j})_{j = 1}^{n_{k-1}} \subset {\mathscr H}_{k-1}^{(p)}$. Then the solution $\tilde{u}_k$ to the optimization problem
$$ \max_{\|u\|_p \leq \Lambda_{k}} \mathbb{E}_{i \sim {\mathscr D}}\bigg[y_i \sum_{j = 1}^{n_{k-1}} u_j (\varphi_{k-1}\circ h_{k-1,j})(x_i)\bigg]$$
can be computed coordinate-wise as:
\begin{align*}
\left(\tilde{u}_{k}\right)_j
= &\frac{\Lambda_{k}}{\left\|\Margin({\mathscr D}, \mathbf h_{k-1})\right\|_q^{\frac{q}{p}}} \left|\Margin({\mathscr D}, h_{k-1, j}) \right|^{q-1}\\
& \sgn\left(\Margin({\mathscr D}, h_{k-1, j})\right),
\end{align*}
and the solution has value
$\Lambda_{k} \left\| \Margin({\mathscr D}, \mathbf h_{k-1})\right\|_q$.
\end{lemma}
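The closed form of the lemma translates directly into code. In this sketch, `margins` stands in for the weighted-margin vector $\Margin({\mathscr D}, \mathbf h_{k-1})$ and `Lam` for $\Lambda_k$ (both hypothetical names):

```python
import numpy as np

def new_candidate_unit(margins, Lam, p):
    """Weight vector on the l_p ball of radius Lam maximizing
    u . margins (sketch of the lemma's closed-form solution)."""
    q = p / (p - 1.0)
    margins = np.asarray(margins, dtype=float)
    return (Lam / np.linalg.norm(margins, q) ** (q / p)) \
        * np.abs(margins) ** (q - 1) * np.sign(margins)

margins = np.array([0.3, -0.1, 0.6])
u = new_candidate_unit(margins, Lam=1.5, p=2.0)
# u attains the optimal value Lam * ||margins||_q on the l_p ball
```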
\subsection{Pseudocode}
\label{sec:code}
In this section, we present the pseudocode for our algorithm, \textsc{AdaNet}, which applies the greedy coordinate-wise optimization procedure described in Section~\ref{sec:coord}
on the objective presented in Section~\ref{sec:obj} with the search procedure described in Section~\ref{sec:search}.
The algorithm takes as input the sample $S$, the maximum number of units per layer $(n_k)_{k = 1}^l$, the complexity penalties $(\Gamma_k)_{k = 1}^l$, the $l_p$ norms of the
weights defining new units $(\Lambda_k)_{k = 1}^l$, and upper bounds on the units in each layer $(C_k)_{k = 1}^l$. \textsc{AdaNet} then initializes all weights to zero, sets the distribution to be uniform, and
considers the active
set of coordinates to be the initial layer in the network. Then the algorithm repeats the following sequence of steps: it computes
the scores of the existing units in the method \textsc{ExistingNodes},
of the new candidate units in \textsc{NewNodes}, and finds the unit
with the best score ($|d_{k, j}|$ or $|\tilde{d}_k|$) in \textsc{BestNode}. After finding this ``coordinate'',
it updates the step size and distribution before proceeding to the next iteration (as described in Section~\ref{sec:coord}).
The precise pseudocode is provided in Figure~\ref{alg:adanet}, and details of its derivation are given in Appendix~\ref{app:alg}.
\begin{figure}[!t]
\begin{ALGO}{NewNodes}{{\mathscr D}_t, (\mathbf h_k)_{k = 1}^{\hat{l}}, (n_k, \hat{n}_k, C_k, \Lambda_k, \Gamma_k)_{k = 1}^l}
\DOFOR{k \gets 2 \mbox{ {\bf to }} (\hat{l} + 1)\wedge l}
\IF{\hat{n}_{k} < n_k \text{ and } \hat{n}_{k-1} > 0}
\SET{\tilde{\epsilon}_{k}}{\frac{C_k}{2} \left( 1 - \frac{\Lambda_{k}}{C_k} \left\|\Margin({\mathscr D}_t, \mathbf h_{k-1})\right\|_q \right)}
\SET{(\tilde{u}_{k})_j}
{\frac{\Lambda_{k}\left|\Margin({\mathscr D}_t, h_{k-1, j})\right|^{q-1}\sgn\left(\Margin({\mathscr D}_t, h_{k-1, j})\right)}
{\left\|\Margin({\mathscr D}_t, \mathbf h_{k-1})\right\|_q^{\frac{q}{p}}}}
\ELSE
\SET{\tilde{\epsilon}_{k}}{\frac{1}{2}C_k}
\FI
\IF{\big(\big| \tilde{\epsilon}_{k} - \frac{1}{2}C_k \big| \leq \frac{\Gamma_k m}{2 S_t} \big)}
\SET{\tilde{d}_{k}}{0}
\ELSE
\SET{\tilde{d}_{k}}{\big(\tilde{\epsilon}_{k} - \frac{1}{2}C_k \big) - \sgn( \tilde{\epsilon}_{k} - \frac{1}{2}C_k ) \frac{\Gamma_k m}{2S_t} }
\FI
\OD
\RETURN{(\tilde{d}_k, \tilde{u}_k)_{k=2}^{(\hat{l}+1)\wedge l}}
\end{ALGO}
\vskip -.5cm
\caption{Pseudocode for the construction of new candidate units. Remarkably, we can use $l_p$ duality to derive a closed-form expression
for the unit with the largest weighted margin at each layer.}
\vskip -.2cm
\label{alg:new}
\end{figure}
\subsection{Convergence of \textsc{AdaNet}}
\label{sec:converge}
Remarkably, the neural network that \textsc{AdaNet} outputs
is competitive against the \epsilonmph{optimal} weights for any sub-network that it
sees during training. Moreover, it achieves this guarantee in linear time.
The precise statement and proof are provided in Appendix~\ref{app:converge}.
\subsection{Large-scale implementation of AdaNet}
\label{sec:scale}
We describe a large-scale implementation of the
\textsc{AdaNet} optimization problem using state-of-the-art techniques
from stochastic optimization in Appendix~\ref{app:scale}.
\section{Proofs for algorithmic design}
\label{app:alg}
\subsection{New candidate units}
\begin{replemma}{lemma:duality}[Construction of new candidate units]
Fix $(h_{k-1, j})_{j = 1}^{n_{k-1}} \subset {\mathscr H}_{k-1}^{(p)}$. Then the solution $\tilde{u}_k$ to the optimization problem
$$ \max_{\|u\|_p \leq \Lambda_{k}} \mathbb{E}_{i \sim {\mathscr D}}\bigg[y_i \sum_{j = 1}^{n_{k-1}} u_j (\varphi_{k-1}\circ h_{k-1,j})(x_i)\bigg]$$
can be computed coordinate-wise as:
\begin{align*}
\left(\tilde{u}_{k}\right)_j
= &\frac{\Lambda_{k}}{\left\|\Margin({\mathscr D}, \mathbf h_{k-1})\right\|_q^{\frac{q}{p}}} \left|\Margin({\mathscr D}, h_{k-1, j}) \right|^{q-1} \\
&\sgn\left(\Margin({\mathscr D}, h_{k-1, j})\right),
\end{align*}
and the solution has value
$\Lambda_{k} \left\| \Margin({\mathscr D}, \mathbf h_{k-1})\right\|_q$.
\end{replemma}
\begin{proof}
By linearity of expectation,
\begin{align*}
\tilde{u}_{k}
&= \argmax_{\|u\|_p \leq \Lambda_{k}} \mathbb{E}_{i \sim {\mathscr D}}\bigg[y_i \sum_{j = 1}^{n_{k-1}} u_j (\varphi_{k-1}\circ h_{k-1,j})(x_i)\bigg] \\
&= \argmax_{\|u\|_p \leq \Lambda_{k}} \mathbf u \cdot \Margin({\mathscr D}, \mathbf h_{k-1}).
\end{align*}
We claim that
\begin{align*}
\left(\tilde{u}_{k}\right)_j =
&\frac{\Lambda_{k}}{\left\|\Margin({\mathscr D}, \mathbf h_{k-1})\right\|_q^{\frac{q}{p}}} \left|\Margin({\mathscr D}, h_{k-1, j}) \right|^{q-1}\\
&\sgn\left(\Margin({\mathscr D}, h_{k-1, j})\right).
\end{align*}
To see this, note first that by H\"{o}lder's inequality,
\begin{align*}
\mathbf u \cdot \Margin({\mathscr D}, \mathbf h_{k-1})
&\leq \|\mathbf u\|_{p} \|\Margin({\mathscr D}, \mathbf h_{k-1})\|_{q} \\
&\leq \Lambda_k \|\Margin({\mathscr D}, \mathbf h_{k-1})\|_{q},
\end{align*}
and the expression on the right-hand side is the claimed optimal value.
At the same time, our choice of $\tilde{u}_k$ also attains this upper bound:
\begin{align*}
&\tilde{u}_k \cdot \Margin({\mathscr D}, \mathbf h_{k-1}) \\
&= \sum_{j = 1}^{n_{k-1}} \left(\tilde{u}_{k}\right)_j \Margin({\mathscr D}, h_{k-1, j}) \\
&= \sum_{j = 1}^{n_{k-1}} \frac{\Lambda_{k}}{\left\|\Margin({\mathscr D}, \mathbf h_{k-1})\right\|_q^{\frac{q}{p}}} \left|\Margin({\mathscr D}, h_{k-1, j}) \right|^{q} \\
&= \frac{\Lambda_{k}}{\left\|\Margin({\mathscr D}, \mathbf h_{k-1})\right\|_q^{\frac{q}{p}}} \left\|\Margin({\mathscr D}, \mathbf h_{k-1}) \right\|_q^{q} \\
&= \Lambda_{k} \|\Margin({\mathscr D}, \mathbf h_{k-1})\|_{q}.
\end{align*}
Thus, $\tilde{u}_k$ is a solution to the optimization problem and achieves the claimed value.
\end{proof}
\subsection{Derivation of coordinate descent update}
Recall the form of our objective function:
\begin{align*}
F(w_t)
&= \frac{1}{m}\sum_{i = 1}^m \Phi\big(1 - y_i \sum_{k = 1}^l \sum_{j = 1}^{n_k} w_{t, k, j} h_{k, j}(x_i)\big) \\
&\quad + \sum_{k = 1}^{l} \sum_{j = 1}^{n_k} \Gamma_k |w_{t, k, j}|.
\end{align*}
We want to find the directional derivative with largest magnitude as well as the optimal step-size in this coordinate direction.
Since $F$ is non-differentiable at $0$ in each coordinate (due to the weighted $l_1$ regularization), we must choose a representative
of the subgradient. Since $F$ is convex, it admits both left and right directional derivatives, which we denote by
\begin{align*}
&\nabla_{k, j}^+ F(w) = \lim_{\eta \to 0^+} \frac{ F(w + \eta e_{k, j}) - F(w)}{\eta}, \\
&\nabla_{k, j}^- F(w) = \lim_{\eta \to 0^-} \frac{ F(w + \eta e_{k, j}) - F(w)}{\eta}.
\end{align*}
Moreover, convexity ensures that $\nabla_{k, j}^- F \leq \nabla_{k, j}^+ F$. Now, let $\delta_{k, j} F$ be the element of the subdifferential that we will
use to compare descent magnitudes, so that $(k_t, j_t) = \argmax_{k\in[1,l], j\in[1,n_k]} \big|\delta_{k, j}F(w_t)\big|$.
This subgradient will always be chosen as the one closest to 0:
\[
\delta_{k, j}F(w) =
\begin{cases}
0 & \text{ if } \nabla_{k, j}^- F(w) \leq 0 \leq \nabla_{k, j}^+ F(w), \\
\nabla_{k, j}^+ F(w) & \text{ if } \nabla_{k, j}^- F(w) \leq \nabla_{k, j}^+ F(w) \leq 0, \\
\nabla_{k, j}^- F(w) & \text{ if } 0 \leq \nabla_{k, j}^- F(w) \leq \nabla_{k, j}^+ F(w).
\end{cases}
\]
Suppose that $w_{t,k, j} \neq 0$. Then by continuity, for $\eta$ sufficiently small, $w_{t,k, j}$ and $w_{t,k, j} + \eta$ have the same sign, so that
\begin{align*}
&F(w_t + \eta e_{k, j}) \\
&= \frac{1}{m} \sum_{i = 1}^m \Phi\bigg(1 - y_i \sum_{\tilde{k} = 1}^l \sum_{\tilde{j} = 1}^{n_{\tilde{k}}} w_{t,\tilde{k}, \tilde{j}} h_{\tilde{k}, \tilde{j}}(x_i) - \eta y_i h_{k, j}(x_i)\bigg) \\
&\quad + \sum_{(\tilde{k}, \tilde{j}) \neq (k, j)} \Gamma_{\tilde{k}} |w_{t, \tilde{k}, \tilde{j}}| + \Gamma_k \sgn(w_{t, k, j}) (w_{t,k, j} + \eta).
\end{align*}
Furthermore, $F$ is differentiable in the $(k, j)$-th coordinate at $w_t$, which implies that
\begin{align*}
&\nabla_{k, j} F(w_t) \\
&= \frac{1}{m} \sum_{i = 1}^m -y_i h_{k, j}(x_i) \Phi'\bigg(1 - y_i \sum_{\tilde{k} = 1}^l \sum_{\tilde{j} = 1}^{n_{\tilde{k}}} w_{t,\tilde{k}, \tilde{j}} h_{\tilde{k}, \tilde{j}}(x_i) \bigg) + \sgn(w_{t,k, j}) \Gamma_k \\
&= -\frac{1}{m} \sum_{i = 1}^m y_i h_{k, j}(x_i) {\mathscr D}_t(i) S_t + \sgn(w_{t,k, j}) \Gamma_k \\
&= (2 \epsilon_{t,k, j} - C_k ) \frac{S_t}{m} + \sgn(w_{t,k, j}) \Gamma_k.
\end{align*}
When $w_{t,k, j} = 0$, we can consider the left and right directional derivatives:
\begin{align*}
&\nabla_{k, j}^+ F(w_t) = (2 \epsilon_{t,k, j} - C_k ) \frac{S_t}{m} + \Gamma_k, \\
&\nabla_{k, j}^- F(w_t) = (2 \epsilon_{t,k, j} - C_k ) \frac{S_t}{m} - \Gamma_k.
\end{align*}
Moreover,
\begin{align*}
&\bigg|\epsilon_{t,k, j} - \frac{C_k}{2} \bigg| \leq \frac{\Gamma_k m}{2 S_t} &\Leftrightarrow \nabla_{k, j}^- F(w) \leq 0 \leq \nabla_{k, j}^+ F(w), \\
& \epsilon_{t,k, j} - \frac{C_k}{2} \leq \frac{-\Gamma_k m}{2 S_t} &\Leftrightarrow \nabla_{k, j}^- F(w) \leq \nabla_{k, j}^+ F(w) \leq 0, \\
& \frac{\Gamma_k m}{2 S_t} \leq \epsilon_{t,k, j} - \frac{C_k}{2} &\Leftrightarrow 0 \leq \nabla_{k, j}^- F(w) \leq \nabla_{k, j}^+ F(w),
\end{align*}
so that we have
\[ \delta_{k, j} F(w_t) =
\begin{cases}
(2 \epsilon_{t,k, j} - C_k ) \frac{S_t}{m} + \sgn(w_{t,k, j}) \Gamma_k\\
\quad \text{ if } w_{t, k, j} \neq 0, \\
0 \\
\quad \text{ else if } \bigg|\epsilon_{t,k, j} - \frac{C_k}{2} \bigg| \leq \frac{\Gamma_k m}{2 S_t}, \\
(2 \epsilon_{t,k, j} - C_k ) \frac{S_t}{m} - \sgn\bigg(\epsilon_{t,k, j} - \frac{C_k}{2}\bigg) \Gamma_k \\
\quad \text{ otherwise} .
\end{cases}
\]
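The case analysis above can be sketched in code (a hedged illustration; the argument names mirror $\epsilon_{t,k,j}$, $C_k$, $\Gamma_k$, $S_t$, $m$, $w_{t,k,j}$, and the test values are invented):

```python
import math

def delta_F(eps, C, Gamma, S, m, w):
    """Subgradient of F closest to 0 in one coordinate (illustrative sketch).

    eps, C, Gamma, S, m, w play the roles of eps_{t,k,j}, C_k, Gamma_k,
    S_t, m, and w_{t,k,j} from the derivation above.
    """
    A = (2 * eps - C) * S / m              # smooth part of the derivative
    if w != 0:
        return A + math.copysign(Gamma, w)  # differentiable case
    if abs(eps - C / 2) <= Gamma * m / (2 * S):
        return 0.0                          # 0 lies in [A - Gamma, A + Gamma]
    return A - math.copysign(Gamma, eps - C / 2)  # boundary subgradient

# w = 0 and the margin condition holds -> no descent in this coordinate:
assert delta_F(eps=0.5, C=1.0, Gamma=0.1, S=1.0, m=10, w=0.0) == 0.0
```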
\section{Other components of pseudocode}
\label{app:pseudocode}
Figures~\ref{alg:init}, \ref{alg:best}, \ref{alg:step}, and~\ref{alg:dist} present
the remaining components of the pseudocode for \textsc{AdaNet} with
exponential loss.
The initial weight vector $\mathbf w_0$ is set to 0,
and the initial weight distribution ${\mathscr D}_1$ is uniform over the coordinates.
The best unit is simply the one with the highest score $|d_{k, j}|$
(or $|\tilde{d}_k|$) among
all existing units and the new candidate units.
The step-size taken at each round is the optimal step in the
direction computed. For exponential loss functions, this can be computed
exactly, and in general, it can be approximated numerically
via line search methods (since the objective is convex).
The updated
distribution at time $t$ will be proportional to $\Phi'(1- y_i f_{t-1}(x_i))$,
as explained in Section~\ref{sec:coord}.
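The generic line-search fallback mentioned above can be sketched as follows (a hedged, minimal implementation; the bracketing interval and the example objective are invented for illustration):

```python
def line_search(phi, lo=-10.0, hi=10.0, tol=1e-9):
    """Minimize a convex 1-D function by ternary search.

    Illustrative sketch of the line-search fallback for the step size;
    the bracket [lo, hi] is an assumption, not part of the algorithm.
    """
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if phi(m1) < phi(m2):
            hi = m2          # minimizer lies left of m2
        else:
            lo = m1          # minimizer lies right of m1
    return (lo + hi) / 2

# Invented convex objective with known minimizer at eta = 1.35:
eta = line_search(lambda t: (t - 1.5) ** 2 + 0.3 * abs(t))
assert abs(eta - 1.35) < 1e-6
```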
\begin{figure}
\begin{ALGO}{Init}{m, l, n_1,\ldots, n_l}
\DOFOR{k \gets 1 \mbox{ {\bf to }} l}
\DOFOR{j \gets 1 \mbox{ {\bf to }} n_k}
\SET{w_{0, k, j}}{0}
\OD
\OD
\DOFOR{i \gets 1 \mbox{ {\bf to }} m}
\SET{{\mathscr D}_1(i)}{\frac{1}{m}}
\OD
\SET{(\widehat{l}, \widehat{n}_1)}{(1, n_1)}
\DOFOR{k \gets 2 \mbox{ {\bf to }} l}
\SET{\widehat{n}_k}{0}
\OD
\RETURN{((w_{0,k, j})_{k, j}, {\mathscr D}_1, \widehat{l}, (\widehat{n}_k)_{k})}
\end{ALGO}
\vskip -.5cm
\caption{Pseudocode for initializing parameters.}
\vskip -.25cm
\label{alg:init}
\end{figure}
\begin{figure}
\begin{ALGO}{BestNode}{(d_{k, j})_{k, j}, (\tilde{d}_k)_{k}}
\IF {\max_{k \in [1, \widehat{l}], j \in[1,\widehat{n}_{k}]} \textstyle | d_{k, j} | > \max_{k' \in [2,\widehat{l}+1]} \textstyle | \tilde{d}_{k'} | }
\SET{(k, j)}{\displaystyle \argmax_{k \in [1, \widehat{l}], j \in [1, \widehat{n}_{k}]} \textstyle | d_{k, j} |}
\ELSE
\SET{k_{\text{new}}}{\displaystyle \argmax_{k' \in[2,\widehat{l}+1]} \textstyle | \tilde{d}_{k'} |}
\IF {k_{\text{new}} > \widehat{l}}
\SET{\widehat{l}}{\widehat{l}+1}
\FI
\SET{\widehat{n}_{k_{\text{new}}}}{\widehat{n}_{k_{\text{new}}}+1}
\SET{(k, j)}{(k_{\text{new}}, \widehat{n}_{k_{\text{new}}})}
\SET{(h_{k, j}, \epsilon_{t,k, j})}{(\tilde{u}_{k_{\text{new}}}, \tilde{\epsilon}_{k_{\text{new}}})}
\FI
\SET{\epsilon_t}{\epsilon_{t, k, j}}
\RETURN{( (k, j), \epsilon_t) }
\end{ALGO}
\vskip -.5cm
\caption{Pseudocode for finding the unit with the largest contribution toward decreasing the objective.}
\vskip -.25cm
\label{alg:best}
\end{figure}
\begin{figure}
\begin{ALGO}{ApplyStep}{(k, j), w_{t-1}}
\SET{\eta_t}{\argmin_{\eta} F(w_{t-1} + \eta e_{k,j})}
\SET{\mathbf w_t}{\mathbf w_{t - 1}}
\SET{w_{t, k, j}}{w_{t, k, j} + \eta_t }
\RETURN{\mathbf w_t}
\end{ALGO}
\vskip -.5cm
\caption{Pseudocode for applying the optimal step size to the network.
In general, the step size can be computed via line search.}
\vskip -.25cm
\label{alg:step}
\end{figure}
\begin{figure}
\begin{ALGO}{UpdateDistrib}{\mathbf w_t}
\SET{S_{t + 1}}{\!\sum_{i = 1}^m \Phi' \big( 1 - y_i \sum_{k = 1}^{\widehat{l}}\sum_{j = 1}^{\widehat{n}_{k}} w_{t, k, j} h_{k, j}(x_i) \big)}
\DOFOR{i \gets 1 \mbox{ {\bf to }} m}
\SET{{\mathscr D}_{t + 1}(i)}{\frac{\Phi' \big( 1 - y_i
\sum_{k = 1}^{\widehat{l}}\sum_{j = 1}^{\widehat{n}_{k}} w_{t, k, j} h_{k, j}(x_i) \big)}{S_{t + 1}}}
\label{eq:adaboost_dist_update}
\OD
\RETURN{({\mathscr D}_{t+1}, S_{t+1})}
\end{ALGO}
\vskip -.5cm
\caption{Pseudocode for updating the distribution for the next iteration.}
\vskip -.25cm
\label{alg:dist}
\end{figure}
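The distribution update can be sketched as follows (a hedged illustration; the choice $\Phi'(u) = e^u$ for the exponential loss and the dictionary-based representation of the network are assumptions made for the example):

```python
import math

def update_distrib(w, h, xs, ys, phi_prime=math.exp):
    """Sketch of UpdateDistrib: D_{t+1}(i) proportional to Phi'(1 - y_i f(x_i)).

    w and h are dicts keyed by (k, j); xs, ys are the training sample.
    phi_prime = exp corresponds to the exponential loss (an assumption).
    """
    f = [sum(w[kj] * h[kj](x) for kj in w) for x in xs]
    scores = [phi_prime(1 - y * fx) for y, fx in zip(ys, f)]
    S = sum(scores)
    return [s / S for s in scores], S

# Two points and a single unit h(x) = x with weight 1 (invented data):
D, S = update_distrib({(1, 1): 1.0}, {(1, 1): lambda x: x},
                      xs=[1.0, -1.0], ys=[1, -1])
assert abs(sum(D) - 1.0) < 1e-12   # D is a probability distribution
assert abs(D[0] - D[1]) < 1e-12    # symmetric sample -> uniform weights
```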
\section{Convergence of \textsc{AdaNet}}
\label{app:converge}
\begin{theorem}
Let $\Phi$ be a twice-continuously differentiable function
with $\Phi'' > 0$,
and suppose we terminate \textsc{AdaNet} after $\mathcal O\big(\log(1/\epsilon)\big)$
iterations if it does not add a new unit.
Let $I_s \subset [1,N]$ be the set of the first $s$ units added by \textsc{AdaNet},
and let $w_{I_s}^* = \argmin_{w \in P_{I_s}(\mathbb R^N_+)} F(w)$,
where $P_{I_s}$ denotes projection onto $\mathbb R^{I_s}$.
Let $\widehat{N} = \sum_{k = 1}^{\widehat{l}} \widehat{n}_k$ be the total number of units
constructed by the \textsc{AdaNet} algorithm at termination.
Then \textsc{AdaNet} will terminate after at most $\mathcal O(\widehat{N} \log(1/\epsilon))$ iterations,
producing a neural network and a set of weights such that
$$F(w_{\text{AdaNet}}) - \min_{s\in[1,\widehat{N}]} F(w_{I_s}^*) < \epsilon.$$
\end{theorem}
\begin{proof}
Recall that $$F(w) = \frac{1}{m} \sum_{i = 1}^m \Phi\Big(1
- y_i \sum_{j = 1}^N w_j h_j(x_i) \Big) + \sum_{j =
1}^N \Gamma_j |w_j|.$$
Since \textsc{AdaNet} initializes all weights to 0 and grows the
coordinate space in an online fashion, we may consider the algorithm in epochs,
so that if the support of $w_t$ at any given $t$ is $I_s$, then
\begin{align*}
F(w)
&= F_{I_s}(w) = \frac{1}{m} \sum_{i = 1}^m \Phi\Big(1 - y_i \sum_{j \in I_s} w_j h_j(x_i) \Big) \\
&\quad + \sum_{j \in I_s} \Gamma_j |w_j|.
\end{align*}
Fix an epoch $s \in[1,\widehat{N}]$. The optimal set of weights within
the support $I_s$ is given by $w_{I_s}^*$, and this solution exists because
$F$ is a coercive convex function (due to the weighted $l_1$ regularization).
Let $M \in \mathbb R^{m \times N}$ be a matrix with elements given by $M_{i,j} = h_j(x_i) y_i$,
and let $e_i \in \mathbb R^{m}$ be the $i$-th elementary basis vector of $\mathbb R^m$. Then
for any vector $w \in \mathbb R^{N}$, $e_i^\top M w = \sum_{j = 1}^N w_j h_j(x_i) y_i$.
Thus, if we denote by $G_{I_s}(z) = \frac{1}{m} \sum_{i = 1}^m \Phi(1 - e_i^\top z)$, then
the first component of $F_{I_s}$ can be written as $G_{I_s}(M w)$.
Moreover, since $\Phi$ is twice continuously differentiable and $\Phi'' > 0$, it follows that $G_{I_s}$ is twice continuously
differentiable and strictly convex.
We can also compute that for any $w \in \mathbb R^N$,
\begin{align*}
\nabla^2 G_{I_s}(M w) &= \frac{1}{m} \sum_{i = 1}^m \Phi''(1 - e_i^\top M w) e_i e_i^\top,
\end{align*}
which is positive definite since $\Phi'' > 0$.
Finally, since ${\mathscr H}$ is symmetric, we can, at the cost of flipping some
$h_j \to - h_j$, equivalently write
$F_{I_s}(w) = G_{I_s}(Mw) + \sum_{j\in I_s} \Gamma_j w_j$,
subject to $w_j \geq 0$.\\
Thus, the problem of minimizing $F_{I_s}$ is equivalent to the optimization problem studied in \cite{LuoTseng1992},
and we have verified that its conditions are satisfied as well. If the algorithm does not add another coordinate, then it performs the
Gauss-Southwell method on the coordinates $I_s$. By Theorem 2.1 in \cite{LuoTseng1992},
this method converges linearly
to the optimal set of weights with support in $I_s$. This implies
that if the algorithm terminates, it will maintain the error guarantee
$F_{I_s}(w_{\text{AdaNet}}) - F_{I_s}(w_{I_s}^*) < \epsilon$.
If the algorithm does add a new coordinate before termination, then we can
apply the same argument to $F_{I_{s+1}}$.
Thus, when the algorithm does finally terminate, we maintain the error
guarantee for every subset $I_s \subset [1,N]$ of units that we create, and the total run time
is at most $\mathcal O(\widehat{N} \log(1/\epsilon))$ iterations.
\end{proof}
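The Gauss-Southwell iteration invoked in the proof can be illustrated on a small smooth, strictly convex problem (a hedged sketch with an invented quadratic objective, not the \textsc{AdaNet} objective itself):

```python
import numpy as np

# Illustration of the Gauss-Southwell rule: on a strictly convex quadratic,
# repeatedly stepping along the coordinate with the largest-magnitude partial
# derivative converges to the minimizer.  A and b are invented.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite
b = np.array([1.0, -1.0])
w = np.zeros(2)
w_star = np.linalg.solve(A, b)           # exact minimizer of 0.5 w'Aw - b'w

for _ in range(200):
    g = A @ w - b                        # gradient
    j = int(np.argmax(np.abs(g)))        # Gauss-Southwell coordinate choice
    w[j] -= g[j] / A[j, j]               # exact 1-D minimization along e_j

assert np.allclose(w, w_star, atol=1e-8)
```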
\section{Large-scale implementation: \textsc{AdaNet+}}
\label{app:scale}
Our optimization problem is a regularized
empirical risk minimization problem of the form:
$F(x) = G(x) + \psi(x) = \frac{1}{m} \sum_{i = 1}^m f_i(x) + \sum_{j = 1}^N \psi_j(x_j),$
where each $f_i$ is smooth and convex and $\psi$, also convex, decomposes across the coordinates of $x$.
For any subset $\mathcal{B} \subset [1,m]$, let $G_\mathcal{B} = \frac{1}{m} \sum_{i \in \mathcal{B}} f_i$
denote the components of the ERM objective that correspond to that subset. For any $j\in[1,N]$, let $\nabla_j$ denote the partial derivative
in the coordinate $j$.
We can leverage the Mini-Batch Randomized Block Coordinate Descent (MBRCD) technique
introduced by \cite{ZhaoYuWangAroraLiu2014}. Their work can be viewed as the
randomized block coordinate descent variant of the
Stochastic Variance Reduced Gradient (SVRG) family of algorithms (\cite{JohnsonZhang2013,XiaoZhang2014}).
Our algorithm divides the entire training period into $K$ epochs.
At every step, the algorithm samples two mini-batches from $[1,m]$ uniformly.
The first is used to approximate the ERM objective for the descent step,
and the second is used to apply
the \textsc{NewNodes} subroutine in Figure~\ref{alg:new} to generate new
candidate coordinates. After generating these coordinates, the
algorithm samples a coordinate from the set of new coordinates and the
existing ones. Based on the coordinate chosen, it updates the active set.
Then the algorithm computes a gradient descent step with the SVRG
estimator in place of the gradient and with the sampled approximation
of the true ERM objective. It then makes a proximal update with $\psi$
as the proximal function. Finally,
the ``checkpoint parameter'' of the SVRG estimator is updated at the end of every epoch.
While our optimization problem itself is not strongly convex, we can apply the MBRCD method
indirectly by adding a strongly convex regularizer and solving $\tilde{F}(x) = F(x) + \gamma R(x)$,
where $R$ is a 1-strongly convex function; this still yields a competitive guarantee.
Moreover, since our non-smooth component is a weighted-$l_1$ term, the proximal update can be solved
efficiently and in closed form in a manner that is similar to the Iterative Shrinkage-Thresholding
Algorithm (ISTA) of \cite{BeckTebouille2009}.
Figure~\ref{alg:adanet+} presents the pseudocode of the algorithm, \textsc{AdaNet+}.
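With the added strongly convex term, the per-coordinate proximal update admits the following closed form (a hedged sketch; the scaling convention for $\eta$, $\Gamma_j$, and $\gamma$ is an assumption, and the test values are invented):

```python
import math

def prox(v, eta, Gamma, gamma):
    """prox of x -> eta * (Gamma*|x| + (gamma/2)*x^2) evaluated at v.

    Soft-threshold (as in ISTA), then shrink by the quadratic term:
    argmin_x 0.5*(x - v)^2 + eta*Gamma*|x| + (eta*gamma/2)*x^2.
    """
    s = max(abs(v) - eta * Gamma, 0.0)       # soft-thresholding
    return math.copysign(s, v) / (1.0 + eta * gamma)

assert prox(0.05, eta=1.0, Gamma=0.1, gamma=0.0) == 0.0   # small weights zeroed
assert abs(prox(1.0, eta=1.0, Gamma=0.1, gamma=1.0) - 0.45) < 1e-12
assert abs(prox(-1.0, eta=1.0, Gamma=0.1, gamma=1.0) + 0.45) < 1e-12
```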
\begin{figure}[t]
\begin{ALGO}{AdaNet+}{\tilde{w}^{0}, \eta, (x_i,y_i)_{i = 1}^m, (n_k, \Lambda_k, \Gamma_k, C_k)_{k = 1}^l}
\SET{(\widehat{N}, \widehat{l}, (\widehat{n}_k)_{k = 1}^l)}{\textsc{Init+}((n_k)_{k = 1}^l)}
\DOFOR{s \gets 1 \mbox{ {\bf to }} S}
\SET{\tilde{w}}{\tilde{w}^{(s-1)}}
\SET{\tilde{\mu}}{\nabla G(\tilde{w}^{(s-1)})}
\SET{w^{(0)}}{\tilde{w}^{(s-1)}}
\DOFOR{t \gets 1 \mbox{ {\bf to }} T}
\SET{\mathcal{B}_1}{U([1,m])}
\SET{\mathcal{B}_2}{U([1,m])}
\SET{(\tilde{d}_k, \tilde{u}_k)_{k = 1}^{l}}{\textsc{NewNodes+}(\mathcal{B}_2, (\mathbf h_k)_{k})}
\SET{j}{U([1,\widehat{N} + \widehat{l}])}
\SET{(\mathbf h_k)_{k}}{\textsc{UpdateActiveSet+}(j, (\mathbf h_k, \tilde{u}_k)_{k})}
\SET{w_{t,j}}{\prox_{\Gamma_j |\cdot| + \frac{\gamma}{2} |\cdot|^2 }(w_{t-1, j}
- \eta [ \nabla_j G_{\mathcal{B}_1}(w_{t-1}) - \nabla_j G_{\mathcal{B}_1}(\tilde{w}) + \tilde{\mu}_j])}
\OD
\OD
\RETURN{\mathbf w_{T}}
\end{ALGO}
\vskip -.5cm
\caption{Pseudocode for large-scale implementation of \textsc{AdaNet}.}
\vskip -.3cm
\label{alg:adanet+}
\end{figure}
\end{document}
\begin{document}
\section{Introduction} \label{1}
There are several good reasons you might want to read about uniform
spanning trees, one being that spanning trees are
useful combinatorial objects. Not only are they fundamental in
algebraic graph theory and combinatorial geometry, but they
predate both of these subjects, having been used by Kirchhoff in
the study of resistor networks. This article addresses the question
about spanning trees most natural to anyone in probability
theory, namely what does a typical spanning tree look like?
Some readers will be happy to know that understanding the basic
questions requires no background knowledge or technical expertise.
While the model is elementary, the answers are surprisingly rich.
The combination of a simple question and a structurally complex
answer is sometimes taken to be the quintessential mathematical
experience. This notwithstanding, I think the best reason to
set out on a mathematical odyssey is to enjoy the ride.
Answering the basic questions about spanning trees depends on a sort of
vertical integration of techniques and results from diverse realms of
probability theory and discrete mathematics. Some of the topics
encountered en route are random walks, resistor networks,
discrete harmonic analysis, stationary Markov chains, circulant
matrices, inclusion-exclusion, branching processes and the method of
moments. Also touched on are characters of abelian groups, entropy
and the infamous incipient infinite cluster.
The introductory section defines the model and previews some of the
connections to these other topics. The remaining sections develop
these at length. Explanations of jargon and results borrowed from other
fields are provided whenever possible. Complete proofs are given in
most cases, as appropriate.
\subsection{Defining the model} \label{1.1}
Begin with a finite graph $G$. That means a finite collection $V(G)$
of {\em vertices} along with a finite collection $E(G)$ of {\em edges}.
Each edge either connects two vertices $v$ and $w \in V(G)$ or else
is a {\em self-edge}, connecting some $v \in V(G)$ to itself. There
may be more than one edge connecting a pair of vertices. Edges
are said to be {\em incident} to the vertices they connect. To make
the notation less cumbersome we will write $v \in G$ and $e \in G$
instead of $v \in V(G)$ and $e \in E(G)$. For $v,w \in G$ say $v$
is a {\em neighbor} of $w$, written $v \sim w$ if and only if some edge connects
$v$ and $w$. Here is an example of a graph $G_1$ which will serve often as
an illustration.
\setlength{\unitlength}{2pt}
\begin{picture}(170,60)
\put(100,50){\circle{2}}
\put(98,53){A}
\put(140,50){\circle{2}}
\put(140,53){B}
\put(140,10){\circle{2}}
\put(140,3){C}
\put(100,10){\circle{2}}
\put(98,2){D}
\put(70,30){\circle{2}}
\put(62,28){E}
\put(100,50){\line(1,0){40}}
\put(120,53){$e_1$}
\put(100,50){\line(0,-1){40}}
\put(102,30){$e_6$}
\put(140,50){\line(0,-1){40}}
\put(142,30){$e_2$}
\put(140,10){\line(-1,0){40}}
\put(120,1){$e_3$}
\put(100,50){\line(-3,-2){30}}
\put(77,42){$e_5$}
\put(100,10){\line(-3,2){30}}
\put(77,12){$e_4$}
\end{picture}
figure~\thepfigure
\label{pfig1.1}
\addtocounter{pfigure}{1}
\noindent{Its} vertex set is $\{ A,B,C,D,E \}$ and it has six edges
$e_1 , \ldots , e_6$, none of which is a self-edge.
A subgraph of a graph $G$ will mean a graph with the same vertex
set but only a subset of the edges. (This differs from standard
usage which allows the vertex set to be a subset as well.)
Since $G_1$ has 6 edges, there are $2^6 = 64$ possible different
subgraphs of $G_1$. A subgraph $H \subseteq G$
is said to be a {\em forest} if there are no cycles, i.e. you cannot
find a sequence of vertices $v_1 , \ldots , v_k$ for which there
are edges in $H$ connecting $v_i$ to $v_{i+1}$ for each $i < k$ and an
edge connecting $v_k$ to $v_1$. In particular $(k=1)$ there are no
self-edges in a forest. A {\em tree} is a forest that is connected,
i.e. for any $v$ and $w$ there is a path of edges that connects them.
The {\em components} of a graph are the maximal connected subgraphs,
so for example the components of a forest are trees.
A {\em spanning} forest is a forest in which every vertex has at
least one incident edge; a {\em spanning tree} is a tree in which
every vertex has at least one incident edge. If $G$ is connected
(and all our graphs will be) then a spanning tree is just a subgraph
with no cycles such that the addition of any other edge would create
a cycle. From this it is easy to see that every connected graph
has at least one spanning tree.
Now if $G$ is any finite connected graph, imagine listing all of its
spanning trees (there are only finitely many) and then choosing one
of them at random with an equal probability of choosing any one.
Call this random choice ${\bf T}$ and say that
${\bf T}$ is a {\em uniform random spanning tree} for $G$.
In the above example there are eleven spanning trees for $G_1$
given (in the obvious notation) as follows:
$$ \begin{array} {rrrr}
e_1e_2e_3e_4 & e_1e_2e_3e_5 & e_1e_2e_4e_5 & e_1e_3e_4e_5 \\[2ex]
e_2e_3e_4e_5 & e_1e_2e_4e_6 & e_1e_3e_4e_6 & e_2e_3e_4e_6 \\[2ex]
e_1e_2e_5e_6 & e_1e_3e_5e_6 & e_2e_3e_5e_6
\end{array} $$
In this case, ${\bf T}$ is just one of these eleven trees, picked
with uniform probability. The model is so simple, you may wonder what
there is to say about it! One answer is that the model has some
properties that are easy to state but hard to prove; these are
introduced in the coming subsections. Another answer is that the
definition of a uniform random spanning tree does not give us
a way of readily computing {local characteristics} of the
random tree. To phrase this as a question: can you compute probabilities
of events local to a small set of edges, such as ${\bf{P}} (e_1 \in {\bf T})$
or ${\bf{P}} (e_1 , e_4 \in {\bf T})$ without actually enumerating all
of the spanning trees of $G$? In a sense, most of the article is
devoted to answering this question. (Events such as $e_1$ being in the
tree are called local in contrast to a global event such as the
tree having diameter -- longest path between two vertices -- at most three.)
\subsection{Uniform spanning trees have negative correlations} \label{1.2}
Continuing the example in figure~1,
suppose I calculate the probability that $e_1 \in {\bf T}$. That's easy:
there are 8 spanning trees containing $e_1$, so
$${\bf{P}} (e_1 \in {\bf T}) = {8 \over 11}.$$
Similarly there are 7 spanning trees containing $e_4$ so
$${\bf{P}} (e_4 \in {\bf T}) = {7 \over 11}.$$
There are 5 spanning trees containing both $e_1$ and $e_4$, so
$$ {\bf{P}} (e_1 \in {\bf T} \mbox{ and } e_4 \in {\bf T}) = {5 \over 11} .$$
Compare the probability of both of these edges being in the tree with
the product of the probabilities of each of the edges being in the tree:
$${8 \over 11} \cdot {7 \over 11} = {56 \over 121} > {55 \over 121} = {5 \over 11} . $$
Thus
$${\bf{P}} (e_1 \in {\bf T} \, | \, e_4 \in {\bf T}) = { {\bf{P}} (e_1 \in {\bf T} \mbox{ and }
e_4 \in {\bf T}) \over {\bf{P}} (e_4 \in {\bf T})} < {\bf{P}} (e_1 \in {\bf T} ) $$
or in words, the conditional probability of $e_1$ being in the tree if you
know that $e_4$ is in the tree is less than the original unconditional
probability. This negative correlation of edges holds in
general, with the inequality not necessarily strict.
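These counts are small enough to verify by brute force (a hedged sketch; the edge endpoints are read off figure~1, and a union-find check stands in for the connectivity argument):

```python
from itertools import combinations

# Edges of G1 from figure 1 (endpoints read off the picture):
edges = {'e1': ('A', 'B'), 'e2': ('B', 'C'), 'e3': ('C', 'D'),
         'e4': ('D', 'E'), 'e5': ('A', 'E'), 'e6': ('A', 'D')}

def is_spanning_tree(subset):
    # 4 acyclic edges on 5 vertices necessarily form a spanning tree.
    parent = {v: v for v in 'ABCDE'}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for e in subset:
        a, b = (find(x) for x in edges[e])
        if a == b:
            return False          # adding this edge would close a cycle
        parent[a] = b
    return True

trees = [s for s in combinations(sorted(edges), 4) if is_spanning_tree(s)]
n1 = sum('e1' in t for t in trees)
n4 = sum('e4' in t for t in trees)
n14 = sum('e1' in t and 'e4' in t for t in trees)   # joint count

assert len(trees) == 11 and n1 == 8 and n4 == 7
assert n14 / 11 <= (n1 / 11) * (n4 / 11)            # negative correlation
```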
\begin{th} \label{neg cor}
For any finite connected graph $G$, let ${\bf T}$ be a uniform spanning
tree. If $e$ and $f$ are distinct edges, then ${\bf{P}} (e,f \in {\bf T} )
\leq {\bf{P}} (e \in {\bf T}) {\bf{P}} (f \in {\bf T})$.
\end{th}
Any spanning tree of an $n$-vertex graph contains $n-1$ edges, so
it should seem intuitively plausible -- even obvious -- that if
one edge is forced to be in the tree then any other edge is less
likely to be needed. Two proofs will be given later, but neither
is straightforward, and in fact the only proofs I know involve
elaborate connections between spanning trees, random walks and
electrical networks. Sections~\ref{2} and~\ref{3} will be occupied with the
elucidation of these connections. The connection between random walks
and electrical networks will be given more briefly, since an excellent
treatment is available \cite{DS}.
As an indication that the previous theorem is not trivial, here is
a slightly stronger statement, the truth or falsity of which is
unknown. Think of the distribution of ${\bf T}$ as
a probability distribution on the outcome space $\Omega$ consisting
of all the $2^{|E(G)|}$ subgraphs
of $G$ that just happens to give probability zero to any
subgraph that is not a spanning tree. An event $A$ (i.e. any subset of
the outcome space) is called an {\em up-event} -- short for {\em
upwardly closed} -- if whenever a subgraph $H$ of $G$ has
a further subgraph $K$ and $K \in A$, then $H \in A$.
An example of an up-event is the event of containing at least two of the
three edges $e_1, e_3$ and $e_5$. Say an event $A$ ignores an edge
$e$ if for every $H$, $H \in A \Leftrightarrow H \cup
\{ e \} \in A$.
\begin{conj} \label{stoch decr}
For any finite connected graph $G$, let ${\bf T}$ be a uniform spanning
tree. Let $e$ be any edge and $A$ be any up-event that ignores $e$.
Then
$$ {\bf{P}} (A \mbox{ and } e \in {\bf T}) \leq {\bf{P}} (A) {\bf{P}} (e \in {\bf T}) . $$
\end{conj}
Theorem~\ref{neg cor} is a special case of this when $A$ is the
event of $f$ being in the tree. The conjecture is known to be true for
{\em series-parallel} graphs and it is also known to be true in
the case when $A$ is an {\em elementary cylinder event}, i.e. the
event of containing some fixed $e_1 , \ldots , e_k$. On the negative
side, there are natural generalizations of graphs and spanning trees, namely
{\em matroids} and {\em bases} (see \cite{Wh} for definitions),
and both Theorem~\ref{neg cor} and Conjecture~\ref{stoch decr} fail to generalize
to this setting. If you're interested in seeing the counterexample,
look at the end of \cite{SW}.
\subsection{The transfer-impedance matrix} \label{1.3}
The next two paragraphs discuss a theorem that computes probabilities
such as ${\bf{P}} (e,f \in {\bf T})$. These computations alone would render
the theorem useful, but it appears even more powerful in the context
of how strongly it constrains the probability measure governing
${\bf T}$. Let me elaborate.
Fix a subset $S = \{ e_1 , \ldots , e_k \}$ of the edges of a
finite connected graph $G$. If ${\bf T}$ is a uniform random spanning
tree of $G$ then the knowledge of whether $e_i \in {\bf T}$ for each $i$
partitions the space into $2^k$ possible outcomes. (Some of these
may have probability zero if $S$ contains cycles,
but if not, all $2^k$ may be possible.) In any case, choosing
${\bf T}$ from the uniform distribution on spanning trees of $G$
induces a probability distribution on $\Omega$, the space of
these $2^k$ outcomes. There
are many possible probability distributions on $\Omega$:
the ways of choosing $2^k$ nonnegative numbers summing
to one form a $(2^k - 1)$-dimensional space. Theorem~\ref{neg cor}
shows that the actual measure induced by ${\bf T}$ satisfies
certain inequalities, so not all probability distributions on $\Omega$
can be gotten in this way. But the set of probability distributions
on $\Omega$ satisfying these inequalities is still $(2^k - 1)$-dimensional.
It turns out, however, that the set of probability distributions on $\Omega$
that arise as induced distributions of uniform spanning trees on
subsets of $k$ edges actually has at most the much smaller dimension
$k(k+1)/2$. This is a consequence of the following theorem which
is the bulwark of our entire discussion of spanning trees:
\begin{th}[Transfer-Impedance Theorem] \label{exist matrix}
Let $G$ be any finite connected graph. There is a symmetric function
$H(e,f)$ on pairs of edges in $G$ such that for any $e_1 , \ldots , e_r
\in G$,
$$ {\bf{P}} (e_1 , \ldots , e_r \in {\bf T}) = \det M(e_1 , \ldots , e_r)$$
where $M(e_1 , \ldots , e_r)$ is the $r$ by $r$ matrix whose
$i,j$-entry is $H(e_i , e_j)$.
\end{th}
By inclusion-exclusion, the probability of any event in $\Omega$
may be determined from the probabilities of ${\bf{P}} (e_{j_1} , \ldots ,
e_{j_r} \in {\bf T})$ as $e_{j_1} , \ldots , e_{j_r}$ vary over all
subsets of $e_1 , \ldots , e_k$. The theorem says that these
are all determined by the $k(k+1)/2$ numbers $\{ H(e_i , e_j) :
i,j \leq k \}$, which shows that there are indeed only $k(k+1)/2$
degrees of freedom in determining the measure on $\Omega$.
Another way of saying this is that the measure is almost completely
determined by its
two-dimensional marginals, i.e. from the values of ${\bf{P}} (e,f \in {\bf T})$
as $e$ and $f$ vary over pairs of (not necessarily distinct) edges.
To see this, calculate the values of $H(e,f)$. The values of
$H(e,e)$ in the theorem must be equal
to ${\bf{P}} (e \in {\bf T})$ since ${\bf{P}} (e \in {\bf T}) = \det M(e) = H(e,e)$.
To see what $H(e,f)$ is for $e \neq f$, write
\begin{eqnarray*}
{\bf{P}} (e,f \in {\bf T}) & = & \det M(e,f) \\[2ex]
& = & H(e,e) H(f,f) - H(e,f)^2 \\[2ex]
& = & {\bf{P}}(e \in {\bf T}) {\bf{P}} (f \in {\bf T}) - H(e,f)^2
\end{eqnarray*}
and hence
$$H(e,f) = \pm \sqrt{{\bf{P}} (e \in {\bf T}) {\bf{P}} (f \in {\bf T}) - {\bf{P}} (e,f \in {\bf T})}.$$
Thus the two-dimensional marginals determine $H$ up to sign, and $H$
determines the measure. Note that the above square
root is always real, since by Theorem~\ref{neg cor} the quantity
under the radical is nonnegative.
Section~\ref{4} will be devoted to proving Theorem~\ref{exist matrix},
the proof depending heavily on the connections to random walks and
electrical networks developed in Sections~\ref{2} and~\ref{3}.
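The diagonal entries $H(e,e) = {\bf{P}} (e \in {\bf T})$ can already be computed without enumeration via the matrix-tree theorem, since ${\bf{P}} (e \in {\bf T})$ is the ratio of the number of spanning trees of the contraction $G/e$ to that of $G$ (a hedged sketch; this determinant shortcut is standard, but it is not the electrical-network derivation of $H$ developed in the following sections):

```python
import numpy as np

def tree_count(edge_list, n):
    """Matrix-tree theorem: #spanning trees = any cofactor of the Laplacian."""
    L = np.zeros((n, n))
    for a, b in edge_list:
        L[a, a] += 1; L[b, b] += 1
        L[a, b] -= 1; L[b, a] -= 1
    return round(np.linalg.det(L[1:, 1:]))   # delete one row and column

# G1 of figure 1 with vertices A..E numbered 0..4; e1 = (0, 1):
G1 = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (0, 3)]
tau = tree_count(G1, 5)

# Contract e1: identify vertex B with A, drop the contracted edge, relabel:
m = {0: 0, 1: 0, 2: 1, 3: 2, 4: 3}
G1_over_e1 = [(m[a], m[b]) for a, b in G1[1:]]
tau_e1 = tree_count(G1_over_e1, 4)

assert tau == 11 and tau_e1 == 8   # so P(e1 in T) = 8/11 = H(e1, e1)
```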
\subsection{Applications of transfer-impedance to limit theorems} \label{1.4}
Let $K_n$ denote the complete graph on $n$ vertices, i.e. there
are no self-edges and precisely one edge connecting each pair of
distinct vertices. Imagine picking a uniform random spanning
tree of $K_n$ and letting $n$ grow to infinity. What kind of
limit theorem might we expect? Since a spanning tree of $K_n$ has only
$n-1$ edges, each of the $n(n-1)/2$ edges should have probability
$2/n$ of being in the tree (by symmetry) and is hence decreasingly
likely to be included as $n \rightarrow \infty$.
On the other hand, the number of edges
incident to each vertex is increasing. Say we fix a particular
vertex $v_n$ in each $K_n$ and look at the number of edges incident
to $v_n$ that are included in the tree. Each of $n-1$ incident
edges has probability $2/n$ of being included, so the expected number of
such edges is $2(n-1)/n$, which is evidently converging to 2. If the
inclusion of each of these $n-1$ edges in the tree were independent
of each other, then the number of edges incident to $v_n$ in ${\bf T}$
would be a binomial random variable with parameters $(n-1 , 2/n)$;
the well known Poisson limit theorem would then say
that the random variable $D_{\bf T} (v_n)$ counting how many edges incident
to $v_n$ are in ${\bf T}$ converged as $n \rightarrow \infty$ to
a Poisson distribution with mean two.
(A quick explanation: integer-valued random variables $X_n$ are said to
converge to $X$ in distribution if ${\bf{P}} (X_n = k) \rightarrow {\bf{P}} (X=k)$
for all integers $k$. In this instance, convergence of $D_{\bf T} (v_n)$
to a Poisson of mean two would mean that
${\bf{P}} (D_{\bf T} (v_n) = k) \rightarrow e^{-2} 2^k / k!$ as $n \rightarrow
\infty$ for each integer $k$.) Unfortunately
this can't be true because a Poisson(2) is sometimes zero, whereas
$D_{\bf T} (v_n)$ can never be zero. It has however been shown \cite{Al}
that $D_{\bf T} (v_n)$ converges in distribution to the next simplest
thing: one plus a Poisson of mean one.
To show you why this really
is the next best thing, let me point out a property of the
mean one Poisson distribution. Pretend that if you picked a family
in the United States at random, then the number of children in the
family would have a Poisson distribution with mean one (population
control having apparently succeeded). Now
imagine picking a child at random instead of picking a family
at random, and asking how many children are in the family. You would
certainly get a different distribution, since you couldn't ever
get the answer zero. In fact you would get one plus a Poisson
of mean one. (Poisson distributions are the only ones with
this property.) Thus a Poisson-plus-one distribution is a more
natural distribution than it looks at first.
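This size-biasing heuristic is a one-line computation: if the number of
children $N$ in a random family is Poisson of mean one, then the family
size $\widehat{N}$ seen by a randomly chosen child satisfies, for $k \geq 1$,
$$ {\bf{P}} (\widehat{N} = k) = {k \, {\bf{P}} (N = k) \over {\bf{E}} N}
= {k \, e^{-1} / k! \over 1} = {e^{-1} \over (k-1)!} , $$
which is exactly the probability that one plus a Poisson of mean one
equals $k$.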
At any rate, the convergence theorem is
\begin{th} \label{Al Pois}
Let $D_{\bf T} (v_n)$ be the random degree of the vertex $v_n$ in a
uniform spanning tree of $K_n$. Then as $n \rightarrow \infty$,
$D_{\bf T} (v_n)$ converges in distribution to $X$ where $X$ is one
plus a Poisson of mean one.
\end{th}
Consider now the $n$-cube $B_n$. Its vertices are defined to be
all strings of zeros and ones of length $n$, where two vertices are
connected by an edge if and only if they differ in precisely one
location. Fix a vertex $v_n \in B_n$ and play the same game: choose a
uniform random spanning tree and let $D_{\bf T} (v_n)$ be the random
degree of $v_n$ in the tree. It is not hard to see again that the
expected value, ${\bf{E}} D$, converges to 2 as $n \rightarrow \infty$.
Indeed, for any graph the number of edges in a spanning tree
is one less than the number of vertices, and since each edge
has two endpoints the average degree of the vertices will be
$\approx 2$; if the graph is symmetric, each vertex will then have the
same expected degree which must be 2. One could expect
Theorem~\ref{Al Pois} to hold for $B_n$ as well as $K_n$ and in fact it
does. A proof of this for a class of sequences of graphs that includes
both $K_n$ and $B_n$ and does not use transfer-impedances
appears in \cite{Al} along with the conjecture that the result
should hold for more general sequences of graphs. This can indeed
be established, and in Section~\ref{5} we will discuss the proof of
Theorem~\ref{Al Pois} via transfer-impedances which can
be extended to more general sequences of graphs.
The convergence in distribution of $D_{\bf T} (v_n)$ in these
theorems is actually a special case of a stronger kind of
convergence. To begin discussing this stronger kind of convergence,
imagine that we pick a uniform random spanning tree of a graph, say
$K_n$, and want to write down what it looks like ``near $v_n$''.
Interpret ``near $v_n$'' to mean within a distance of $r$ of
$v_n$, where $r$ is some arbitrary positive integer.
The answer will be a rooted tree of height at most $r$. (A
{\em rooted} tree is a tree plus a choice of one of its vertices, called
the root. The {\em height} of a rooted tree is the maximum distance
of any vertex from the root.) The rooted tree representing ${\bf T}$
near $v_n$ will be the tree you get by picking up ${\bf T}$, dangling
it from $v_n$, and ignoring everything more than $r$ levels
below the top.
Call this the {\em $r$-truncation} of ${\bf T}$, written ${\bf T}
\wedge_{v_n} r$ or just ${\bf T} \wedge r$ when the choice of $v_n$
is obvious. For example, suppose $r=2$, $v_n$ has 2 neighbors in
${\bf T}$, $w_1$ and $w_2$, $w_1$ has 3 neighbors other than $v_n$
in ${\bf T}$ and $w_2$ has none. This information is encoded
in the following picture. The picture could also have been
drawn with left and right reversed, since we consider this to
be the same abstract tree, no matter how it is drawn.
\setlength{\unitlength}{1pt}
\begin{picture}(310,120)(-90,-10)
\put(90,65) {\circle*{5}}
\put(95,68) {$v_n$}
\put(90,65) {\line(2,-1){60}}
\put(90,65) {\line(-2,-1){60}}
\put(150,35) {\line(1,-1){30}}
\put(150,35) {\line(0,-1){30}}
\put(150,35) {\line(-1,-1){30}}
\put(150,35) {\circle{2}}
\put(30,35) {\circle{2}}
\put(180,5) {\circle{2}}
\put(150,5) {\circle{2}}
\put(120,5) {\circle{2}}
\end{picture}
\setlength{\unitlength}{2pt}
figure~\thepfigure
\label{pfig1.2}
\addtocounter{pfigure}{1}
When $r=1$, the only information in ${\bf T} \wedge r$ is the number
of children of the root, i.e. $D_{\bf T} (v_n)$. Thus the
previous theorem asserts the convergence in distribution of
${\bf T} \wedge_{v_n} 1$ to a root with a (1+Poisson) number of
vertices. Generalizing this is the following theorem, proved in
Section~\ref{5}.
\begin{th} \label{pois conv}
For any $r \geq 1$, as $n \rightarrow \infty$, ${\bf T} \wedge_{v_n} r$
converges in distribution to a particular random tree, ${\cal P}_1
\wedge r$ to be defined later.
\end{th}
Convergence in distribution means that for any fixed tree $t$ of height
at most $r$, ${\bf{P}} ({\bf T} \wedge_{v_n} r = t)$ converges as $n \rightarrow
\infty$ to the probability of the random tree ${\cal P}_1 \wedge r$ equalling
$t$. As the notation indicates, the random tree ${\cal P}_1 \wedge r$ is
the $r$-truncation of an infinite random tree. It is in fact the
tree of a Poisson(1) branching process conditioned to live forever,
but these terms will be
defined later, in Section~\ref{5}. The theorem is stated here only for
the sequence $K_n$, but is in fact true for a more general class of
sequences, which includes $B_n$.
\section{Spanning trees and random walks} \label{2}
Unless $G$ is a very small graph, it is virtually impossible to
list all of its spanning trees. For example, if $G = K_n$ is
the complete graph on $n$ vertices, then the number of spanning
trees is $n^{n-2}$ according to the well known Pr\"ufer bijection
\cite{St}. If $n$ is much bigger than say 20, this is too
many to be enumerated even by the snazziest computer that ever will be.
Luckily, there are shortcuts which enable us to compute probabilities
such as ${\bf{P}} (e \in {\bf T})$ without actually enumerating all spanning
trees and counting the proportion containing $e$. The shortcuts
are based on a close correspondence between spanning trees and
random walks, which is the subject of this section.
\subsection{Simple random walk} \label{2.1}
Begin by defining a simple random walk on $G$. To avoid obscuring
the issue, we will place extra assumptions on the graph $G$ and later
indicate how to remove these. In particular, in addition to assuming
that $G$ is finite and connected, we will often suppose that it is $D$-regular
for some positive integer $D$, which means that every vertex has
precisely $D$ edges incident to it. Also suppose that $G$ is simple,
i.e. it has no self-edges or parallel edges (different edges connecting the
same pair of vertices). For any vertex $x \in G$, define a simple random
walk on $G$ starting at $x$, written $SRW_x^G$, intuitively as follows.
Imagine a particle beginning at time 0 at the vertex $x$. At each
future time $1,2,3,\ldots$, it moves along some edge, always choosing
among the $D$ edges incident to the vertex it is currently at with
equal probability. When $G$ is not $D$-regular, the definition will
be the same: each of the edges leading away from the current position
$v$ will be chosen with probability $1 / \mbox{degree}(v)$.
This defines a sequence of random positions
$SRW_x^G (0), SRW_x^G (1) , SRW_x^G (2) , \ldots$ which is thus
a random function $SRW_x^G$ (or just $SRW$ if $x$ and $G$ may be
understood without ambiguity) from the nonnegative integers to the
vertices of $G$. Formally, this random function may be defined by
its finite-dimensional marginals which are given by
${\bf{P}} (SRW_x^G (0) = y_0 , SRW_x^G (1) = y_1 , \ldots , SRW_x^G (k) = y_k) =
D^{-k}$ if $y_0 = x$ and for all $i = 1 , \ldots , k$ there is an edge
from $y_{i-1}$ to $y_i$, and zero otherwise.
For an illustration of this definition, let $G$
be the following 3-regular simple graph.
\begin{picture}(200,80)(-40,0)
\put(10,10) {\line(1,0){90}}
\put(10,10) {\line(0,1){60}}
\put(10,70) {\line(1,0){90}}
\put(100,10) {\line(0,1){60}}
\put(40,40) {\line(1,0){30}}
\put(40,40) {\line(-1,1){30}}
\put(40,40) {\line(-1,-1){30}}
\put(70,40) {\line(1,-1){30}}
\put(70,40) {\line(1,1){30}}
\put(6,72){A}
\put(100,72){B}
\put(100,4){C}
\put(6,4){D}
\put(40,42){E}
\put(67,42){F}
\end{picture}
figure~\thepfigure
\label{pfig2.0}
\addtocounter{pfigure}{1} \\
Consider a simple random walk $SRW_A^G$ starting at the vertex $A$.
The probability of a particular beginning, say $SRW (1) = B$ and $SRW(2)
= F$ is just $(1/3)^2$. The random position at time 2, $SRW (2)$, is then
equal to $F$ with probability $2/9$, since each of the two ways,
ABF and AEF, of getting to $F$ in two steps has probability $1/9$.
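These two-step probabilities can be checked by brute-force enumeration.
The sketch below reads the adjacency lists off the figure; the function
name is ours, not from the text.

```python
from fractions import Fraction

# Adjacency lists for the 3-regular example graph (vertices A-F).
adj = {
    'A': ['B', 'D', 'E'],
    'B': ['A', 'C', 'F'],
    'C': ['B', 'D', 'F'],
    'D': ['A', 'C', 'E'],
    'E': ['A', 'D', 'F'],
    'F': ['B', 'C', 'E'],
}

def two_step_prob(start, end):
    """P(SRW(2) = end) for a simple random walk started at start:
    each two-step path contributes (1/3) * (1/3) = 1/9."""
    return sum(Fraction(1, 9)
               for y1 in adj[start] for y2 in adj[y1] if y2 == end)

print(two_step_prob('A', 'F'))   # ABF and AEF each contribute 1/9 -> 2/9
```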
Another variant of random walk we will need is the {\em stationary Markov
chain} corresponding to a simple random walk on $G$. I will preface this
definition with a quick explanation of Markov chains; since I cannot do
justice to this large topic in two paragraphs, the reader is referred to
\cite{IM}, \cite{Fe} or any other favorite introductory probability
text for further details.
A (time-homogeneous) {\em Markov chain} on a finite state space $S$
is a sequence of random variables $\{ X_i \}$ taking values in $S$,
indexed by either the integers or the nonnegative integers and having
the Markov property: there is a set of transition probabilities
$\{ p(x,y) \, : \, x,y \in S \}$ so that the probability of
$X_{i+1}$ being $y$, conditional upon $X_i = x$, is always equal
to $p(x,y)$ regardless of how much more information about the
past you have. (Formally, this means ${\bf{P}} (X_{i+1} = y \, | \, X_i = x$
and any values of $X_j$ for $j < i )$ is still $p(x,y)$.)
An example of this is $SRW_x^G$, where $S$ is the set of vertices
of $G$ and $p(x,y) = D^{-1}$ if $x \sim y$
and $0$ otherwise (recall that $x \sim y$ means $x$ is a neighbor of $y$).
The values $p(x,y)$ must satisfy $\sum_y p(x,y) =
1$ for every $x$ in order to be legitimate conditional probabilities.
If in addition they satisfy $\sum_x p(x,y) = 1$ for every $y$, the
Markov chain is said to be {\em doubly stochastic}. It will be useful
later to know that the Markov property is preserved under time reversal,
meaning that if $\{ X_i \}$ is a Markov chain then so is the sequence
$\{\tilde{X}_i = X_{-i} \}$, and there
are backwards transition probabilities $\tilde{p} (x,y)$
for which ${\bf{P}} (X_{i-1} = y \, | \, X_i = x) = \tilde{p} (x,y)$.
If it is possible eventually
to get from every state in $S$ to every other, then there is a
unique {\em stationary distribution} which is a set of
probabilities $\{ \pi (x) \, : \, x \in S \}$ summing to one and
having the property that $\sum_x \pi (x) p(x,y) = \pi (y)$ for all $y$.
Intuitively, this means that if we build a Markov chain with
transition probabilities $p(x,y)$ and start it by randomizing $X_0$
so that ${\bf{P}} (X_0 = x) = \pi (x)$ then it will also be true that
${\bf{P}} (X_i = x) = \pi(x)$ for every $i>0$. A {\em stationary}
Markov chain is one indexed by the integers (as opposed to just the
positive integers),
in which ${\bf{P}} (X_i = x) = \pi (x)$ for some, hence every $i$. If
a Markov chain is doubly stochastic, it is easy to check that the
uniform distribution $U$ is stationary:
$$\sum_x U (x) p(x,y) = \sum_x |S|^{-1} p(x,y) = |S|^{-1} = U (y).$$
The stationary distribution $\pi$ is unique (assuming every state can
get to every other) and is hence uniform over all states.
Now we define a stationary simple random walk on $G$ to be a
stationary Markov chain with state space $V(G)$ and transition
probabilities $p(x,y) = D^{-1}$ if $x \sim y$ and $0$ otherwise.
Intuitively this can be built by choosing $X_0$ at random
uniformly over $V(G)$, then choosing the $X_i$ for $i > 0$ by
walking randomly from $X_0$ along the edges and choosing the $X_i$
for $i < 0$ also by walking randomly from $X_0$, thinking of this
latter walk as going backwards in time. (For SRW, $p (x,y) =
p(y,x) = \tilde{p} (x,y)$ so the walk looks the same backwards
as forwards.)
\subsection{The random walk construction of uniform spanning trees} \label{2.2}
Now we are ready for the random walk construction of uniform random
spanning trees. What we will actually get is a directed spanning
tree, which is a spanning tree together with a choice of vertex
called the root and an orientation on each edge (an arrow pointing
along the edge in one of the two possible directions) such that
following the arrows always leads to the root. Of course a directed
spanning tree yields an ordinary spanning tree if you ignore the
arrows and the root. Here is an algorithm to generate directed trees
from random walks.
\noindent{\small GROUNDSKEEPER'S ALGORITHM}
\begin{quotation}
Let $G$ be a finite, connected, $D$-regular, simple
graph and let $x$ be any vertex of $G$. Imagine that we send the
groundskeeper from the local baseball diamond on a walk along the edges
of $G$ starting from $x$; later we will take this walk to be $SRW_x^G$.
She brings with her the wheelbarrow full
of chalk used to draw in lines. This groundskeeper is so eager to
choose a spanning tree for $G$ that she wants to chalk a line over
each edge she walks along. Of course if that edge, along with the
edges she's already chalked, would form a cycle (or is already
chalked), she is not allowed
to chalk it. In this case she continues walking along that edge but
temporarily -- and reluctantly -- shuts off the flow of chalk.
Every time she chalks a new edge she inscribes an arrow pointing
from the new vertex back to the old.
\end{quotation}
Eventually every vertex is connected to every other by a chalked
path, so no more can be added without forming a cycle and the
chalking is complete. It is easy to see that the subgraph consisting
of chalked edges is always a single connected component. The first
time the walk reaches a vertex $y$, the edge just travelled cannot
form a cycle with the other chalked edges. Conversely, if the
walk moves from $z$ to some $y$ that has been reached before,
then $y$ is connected to $z$ already by some chalked path,
so adding the edge $zy$ would create a cycle and is not permitted. Also
it is clear that following the arrows leads always to vertices that were
visited previously, and hence eventually back to the root. Furthermore,
every vertex except $x$ has exactly one oriented edge leading out
of it, namely the edge along which the vertex was first reached.
Putting this all together, we have defined a function -- say $\tau$ -- from
walks on $G$
(infinite sequences of vertices each consecutive pair connected by
an edge) to directed spanning trees of $G$. Formally $\tau(y_0 , y_1 , y_2 ,
\ldots)$ is the subgraph $H \subseteq G$ such that if $e$ is an oriented
edge from $w$ to $z$ then
$$ e \in H \Leftrightarrow \mbox{ for some } k > 0, y_k = z , y_{k-1}
= w, \mbox{ and there is no } j < k \mbox{ such that } y_j = z .$$
As an example, suppose $SRW_A^G$ in figure 2.1 begins
ABFBCDAE. Then applying $\tau$ gives the tree with edges
BA, FB, CB, DC and EA.
To be completely formal, I should admit that the groundskeeper's
algorithm never stops if there is a vertex that the walk fails
to hit in finite time. This is not a problem since we are going to
apply $\tau$ to the path of a $SRW$, and this hits every vertex
with probability one.
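The map $\tau$ is easy to implement. The sketch below stores the directed
tree as a parent map (the arrow out of each non-root vertex) and assumes,
as just discussed, that the given walk visits every vertex.

```python
def tau(walk):
    """Groundskeeper's algorithm: map a walk (a sequence of vertices,
    consecutive ones adjacent) to a rooted directed spanning tree.
    Each vertex other than the root walk[0] gets one outgoing arrow,
    chalked the first time the walk reaches it, pointing back to the
    previous vertex."""
    root = walk[0]
    parent = {}                      # arrow out of each non-root vertex
    for prev, cur in zip(walk, walk[1:]):
        if cur != root and cur not in parent:
            parent[cur] = prev       # first visit to cur: chalk cur -> prev
    return root, parent

# The example from the text: SRW_A^G beginning ABFBCDAE.
root, parent = tau("ABFBCDAE")
print(root, parent)   # A {'B': 'A', 'F': 'B', 'C': 'B', 'D': 'C', 'E': 'A'}
```

Applying $\tau$ to ABFBCDAE indeed recovers the tree with oriented edges
BA, FB, CB, DC and EA.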
As hinted earlier, the importance of this construction is the
following equivalence.
\begin{th} \label{rw construction}
Let $G$ be any finite, connected, $D$-regular, simple graph and let
$x$ be any vertex of $G$. Run a simple random walk $SRW_x^G$ and
let ${\bf T}$ be the random spanning tree gotten by ignoring the
arrows and root of the random directed spanning tree $\tau(SRW_x^G)$.
Then ${\bf T}$ has the distribution of a uniform random spanning tree.
\end{th}
To prove this it is necessary to consider a stationary simple random
walk on $G$ ($SSRW^G$). It will be easy to get back to a $SRW_x^G$ because the
sites visited in positive time by a $SSRW^G$ conditioned on being at
$x$ at time zero form a $SRW_x^G$. Let $T_n$ be the tree
$\tau(SSRW (n) , SSRW(n+1) , \ldots)$; in other words, $T_n$ is the
directed tree gotten by applying the groundskeeper's algorithm to the
portion of the stationary simple random walk from time
$n$ onwards. The first goal is to show that the random collection of
directed trees $T_n$ forms a time-homogeneous Markov chain
as $n$ ranges over all integers.
Showing this is pretty straightforward because the transition probabilities
are easy to see. First note that if $t$ and $u$ are any two directed
trees on disjoint sets of vertices, rooted respectively at $v$ and $w$,
then adding any arrow from $v$ to a vertex in $u$ combines them into a
single tree rooted at $w$. Now define two operations on directed spanning
trees of $G$ as follows.
\\[2ex] {\bf Operation} $F(t,x)$: {\it
Start with a directed tree $t$ rooted at $v$. Choose one of the
$D$ neighbors of $v$ in $G$, say $x$. Take away the edge in
$t$ that leads out of $x$, separating $t$ into two trees, rooted at
$v$ and $x$. Now add an edge from $v$ to $x$, resulting in a single
tree $F(t,x)$.
}
\\[2ex] {\bf Operation} $F^{-1}(t,w)$: {\it
Start with a directed tree $t$ rooted at $x$. Choose one of the
$D$ neighbors of $x$ in $G$, say $w$. Follow the path from
$w$ to $x$ in $t$ and let $v$ be the last vertex on this path
before $x$. Take away the edge in $t$ that leads out of $v$,
separating $t$ into two trees, rooted at $x$ and $v$. Now add an
edge from $x$ to $w$, resulting in a single directed tree $F^{-1} (t,w)$.
}
It is easy to see that these operations really are inverse to
each other, i.e. if $t$ is rooted at $v$ then $F^{-1} (F(t,x),w) =t$
for any $x \sim v$, where $w$ is the other endpoint of the edge
leading out of $x$ in $t$. Here is a pictorial example.
\setlength{\unitlength}{1pt}
\begin{picture}(300,140)(-10,0)
\put(100,100){\circle{2}}
\put(95,103){v}
\put(120,80){\circle{2}}
\put(120,80){\vector(-1,1){20}}
\put(80,80){\circle{2}}
\put(80,80){\vector(1,1){20}}
\put(60,60){\circle{2}}
\put(60,60){\vector(1,1){20}}
\put(53,63){w}
\put(40,40){\circle{2}}
\put(40,40){\vector(1,1){20}}
\put(33,43){x}
\put(20,20){\circle{2}}
\put(20,20){\vector(1,1){20}}
\put(0,0){\circle{2}}
\put(0,0){\vector(1,1){20}}
\put(40,0){\circle{2}}
\put(40,0){\vector(-1,1){20}}
\put(350,100){\circle{2}}
\put(345,103){v}
\put(370,80){\circle{2}}
\put(370,80){\vector(-1,1){20}}
\put(330,80){\circle{2}}
\put(330,80){\vector(1,1){20}}
\put(310,60){\circle{2}}
\put(310,60){\vector(1,1){20}}
\put(303,63){w}
\put(290,120){\circle{2}}
\put(283,123){x}
\put(270,100){\circle{2}}
\put(270,100){\vector(1,1){20}}
\put(250,80){\circle{2}}
\put(250,80){\vector(1,1){20}}
\put(290,80){\circle{2}}
\put(290,80){\vector(-1,1){20}}
\put(350,100){\vector(-3,1){60}}
\put(100,40){The tree $t$}
\put(250,40){The tree $F(t,x)$}
\end{picture}
figure~\thepfigure
\label{pfig2.1}
\addtocounter{pfigure}{1}
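The operations $F$ and $F^{-1}$ can be sketched on trees stored as a root
plus a parent map (arrow out of each non-root vertex). The adjacency
checks in $G$ are omitted, and the small test tree at the bottom is
hypothetical.

```python
def F(t, x):
    """Operation F: t = (root v, parent map), x a neighbor of v.
    Cut the arrow out of x, add an arrow v -> x; the new root is x."""
    v, parent = t
    p = dict(parent)
    del p[x]          # x becomes a root of its own subtree
    p[v] = x          # re-attach v's tree below x
    return (x, p)

def F_inv(t, w):
    """Operation F^{-1}: t = (root x, parent map), w a neighbor of x.
    Walk from w up to x; v is the last vertex before x.  Cut the arrow
    out of v, add an arrow x -> w; the new root is v."""
    x, parent = t
    v = w
    while parent[v] != x:
        v = parent[v]
    p = dict(parent)
    del p[v]
    p[x] = w
    return (v, p)

# A path rooted at 'v': arrows w -> v, x -> w, y -> x.
t = ('v', {'w': 'v', 'x': 'w', 'y': 'x'})
u = F(t, 'x')                 # now rooted at 'x'
assert F_inv(u, 'w') == t     # F and F^{-1} undo each other
```

Here $w$ is the other endpoint of the edge leading out of $x$ in $t$,
matching the inverse property stated above.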
I claim that for any directed trees $t$ and $u$,
the backward transition probability $\tilde{p}(t,u)$ is equal to $D^{-1}$
if $u = F(t,x)$ for some $x$ and zero otherwise. To see this,
it is just a matter of realizing where the operation $F$ comes from.
Remember that $T_n$ is
just $\tau(SSRW (n) , SSRW(n+1) , \ldots)$, so in particular the root
of $T_n$ is $SSRW (n)$. Now $SSRW$ really is a Markov chain.
We already know that ${\bf{P}} (SSRW (n-1) = x \, | \, SSRW (n) = v)$ is $D^{-1}$
if $x \sim v$ and zero otherwise. Also, this is
unaffected by knowledge of $SSRW(j)$ for any $j > n$. Suppose
it turns out that $SSRW(n-1)=x$. Then knowing only $T_n$ and
$x$ (but not the values of $SSRW(j)$ for $j > n$) it is possible
to work out what $T_{n-1}$ is. Remember that $T_n$ and $T_{n-1}$ come
from applying $\tau$ to the respective sequences $SSRW(n) , SSRW(n+1) ,
\ldots$ and $SSRW(n-1) , SSRW(n), \ldots$ whose only difference
is that the second of these has an extra $x$ tacked on the beginning.
Every time the first sequence reaches a vertex for the first time,
so does the second, unless that vertex happens to be $x$. So
$T_{n-1}$ has all the oriented edges of $T_n$ except the
one out of $x$. What it has instead is an oriented edge from
$v$ to $x$, chalked in by the groundskeeper at her very first step.
Adding in the edge from $v$ to some neighbor $x$ and erasing
the edge out of $x$ yields precisely $F(t,x)$.
So we have shown that $T_{n-1} = F(T_n , SSRW (n-1))$. But $SSRW(n-1)$
is uniformly distributed among the neighbors of $SSRW(n)$ no matter
what other information we know about the future. This proves the
claim and the time-homogeneous Markov property.
The next thing to show is that the stationary distribution is uniform
over all directed trees. As we've seen, this would follow if
we knew that $\{ T_n \}$ was doubly stochastic. Since $p(t,u)$
is $D^{-1}$ whenever $u = F(t,x)$ for some $x$ and zero otherwise,
this would be true if for every tree $u$ there are
precisely $D$ trees $t$ for which $F(t,x) = u$ for some $x$.
But the trees $t$ for which $F(t,x) = u$ for some $x$ are
precisely the trees $F^{-1} (u,x)$ for some neighbor $x$ of the
root of $u$, hence there are $D$ such trees and the transition
probabilities for $\{ T_n \}$ are doubly stochastic.
Now that the stationary distribution for $\{ T_n \}$ has been shown
to be uniform, the proof of Theorem~\ref{rw construction} is almost
done. Note that the event $SSRW (0) = x$ is the same
as the event of $\tau(SSRW (0), SSRW (1) , \ldots)$ being rooted at $x$.
Since $SRW_x^G$ is just $SSRW$ conditioned on $SSRW(0)=x$,
$T_0 (SRW_x^G)$ is distributed as a uniform directed spanning
tree conditioned on being rooted at $x$. That is to say, $T_0(SRW_x^G)$
is uniformly distributed over all directed spanning trees rooted
at $x$. But ordinary spanning trees are in a one to one correspondence
with directed spanning trees rooted at a fixed vertex $x$, the
correspondence being that to get from the ordinary tree to the directed
tree you name $x$ as the root and add arrows that point toward $x$.
Then the tree ${\bf T}$ gotten from $T_0 (SRW_x^G)$ by ignoring
the root and the arrows is uniformly distributed over all ordinary
spanning trees of $G$, which is what we wanted to prove. $
\Box$
\subsection{Weighted graphs} \label{2.3}
It is time to remove the extra assumptions that $G$ is $D$-regular
and simple. It will make sense later to generalize from graphs to
weighted graphs, and since the generalization of Theorem~\ref{rw
construction} is as easy for weighted graphs as for unweighted graphs,
we may as well introduce weights now.
A weighted graph is just a graph to each edge $e$ of which is
assigned a positive real number called its weight and written
$w(e)$. Edge weights are not allowed to be zero, though one may
conceptually identify a graph with an edge of weight zero with
the same graph minus the edge in question. An unweighted graph may be
thought of as a graph with all edge weights equal to one, as will be clear
from the way random trees and random walks generalize. Write $d(v)$ for the
sum of the weights of all edges incident to $v$. Corresponding
to the old notion of a uniform random spanning tree is the
{\em weight-selected} random spanning tree ($WST$). A $WST$,
${\bf T}$ is defined to have
$$ {\bf{P}} ({\bf T} = t) = { \prod_{e \in t} w(e) \over \sum_u
\prod_{e \in u} w(e) } $$
so that the probability of any individual tree is proportional
to its weight which is by definition the product of the weights of its edges.
Corresponding to a simple random walk from a vertex $x$ is the
weighted random walk from $x$, $WRW_x^G$ which is a Markov
Chain in which the transition probabilities from a vertex
$v$ are proportional to the weights of the edges incident
to $v$ (among which the walk must choose). Thus if
$v$ has two neighbors $w$ and $x$, and there are four edges
incident to $v$ with respective weights $1,2,3$ and $4$ that
connect $v$ respectively to itself, $w$, $x$ and $x$, then
the probabilities of choosing these four edges are respectively
$1/10, 2/10, 3/10$ and $4/10$. Formally, the probability of walking
along an edge $e$ incident to the current position $v$ is given
by $w(e) / d(v)$. The bookkeeping is a little
unwieldy since knowing the sequence of vertices $WRW (0), WRW(1), \ldots$
visited by the $WRW$ does not necessarily determine which edges
were travelled now that the graph is not required to be simple.
Rather than invent some clumsy {\em ad hoc} notation to include
the edges, it is easier just to think that a $WRW$ includes
this information, so it is \underline{not} simply given
by its positions $WRW (j) : j \geq 0$, but that we will refer
to this information in words when necessary. If $G$ is a connected
weighted graph then $WRW^G$ has a unique stationary distribution
denoted by positive numbers $\pi^G (v)$ summing to one. This
will not in general be uniform, but its existence is enough to
guarantee the existence of a stationary Markov chain with the
same transition probabilities. We call this stationary Markov chain
$SWRW$ the few times the need arises. The new and improved
theorem then reads:
\begin{th} \label{weighted rw construction}
Let $G$ be any finite, connected weighted graph and let
$x$ be any vertex of $G$. Run a weighted random walk $WRW_x^G$ and
let ${\bf T}$ be the random spanning tree gotten by ignoring the
arrows and root of the random directed spanning tree $\tau(WRW_x^G)$.
Then ${\bf T}$ has the distribution of $WST$.
\end{th}
The proof of Theorem~\ref{rw construction} serves for
Theorem~\ref{weighted rw construction} with a few alterations. These
will now be described, though not much would be lost by taking
these details on faith and skipping to the next section.
The groundskeeper's algorithm is unchanged with
the provision that the $WRW$ brings with it the information of which
edge she should travel if more than one edge connects $WRW(i)$ to
$WRW(i+1)$ for some $i$. The operation to get from the directed
tree $T_n$ to a candidate for $T_{n-1}$ is basically the same
only instead of there being $D$ choices for how to do this there
is one choice for each edge incident to the root $v$ of $T_n$: choose
such an edge, add it to the tree oriented from $v$ to its other
endpoint $x$ and remove the edge out of $x$. It is easy to see
again that $\{ T_n \}$ is a time-homogeneous Markov chain with
transition probability from $t$ to $u$ zero unless $u$ can be
gotten from $t$ by the above operation, and if so the probability
is proportional to the weight of the edge that was added in
the operation. (This is because if $T_n = t$ then $T_{n-1} = u$
if and only if $u$ can be gotten from this operation and
$WRW$ travelled along the edge added in this operation between
times $n-1$ and $n$.)
The uniform distribution on vertices is no longer stationary
for $WRW$ since we no longer have $D$-regularity,
but the distribution $\pi (v) = d(v) / \sum_x d(x)$ is
easily seen to be stationary: start a $WRW$ with $WRW (0)$ having
distribution $\pi$; then
\begin{eqnarray*}
{\bf{P}} (WRW (1) = v) & = & \sum_x {\bf{P}} (WRW (0) = x \mbox{ and }
WRW(1) = v) \\[2ex]
& = & \sum_x {d(x) \over \sum_y d(y)} \left ( \sum_{e
\mbox{\scriptsize ~connecting $x$ to }v} w(e)/d(x) \right ) \\[2ex]
& = & {1 \over \sum_y d(y)} \sum_{e \mbox{\scriptsize ~incident to }v} w(e)
\\[2ex]
& = & \pi (v) .
\end{eqnarray*}
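This computation can be checked numerically on the four-edge example
given earlier (self-edge of weight 1 at $v$, an edge of weight 2 to $w$,
and parallel edges of weights 3 and 4 to $x$). The weight-5 edge between
$w$ and $x$ below is a hypothetical addition so that the graph is
connected.

```python
from fractions import Fraction

# Weighted multigraph: each vertex maps to its incident (neighbor, weight)
# pairs; the self-edge at v appears once, parallel edges appear separately.
incident = {
    'v': [('v', 1), ('w', 2), ('x', 3), ('x', 4)],
    'w': [('v', 2), ('x', 5)],
    'x': [('v', 3), ('v', 4), ('w', 5)],
}

d = {u: sum(w for _, w in incident[u]) for u in incident}
Z = sum(d.values())
pi = {u: Fraction(d[u], Z) for u in incident}

# Check sum_x pi(x) p(x,y) = pi(y) for every y, where p(x,y) is the total
# weight of edges from x to y divided by d(x).
for y in incident:
    total = sum(pi[x] * Fraction(sum(w for nbr, w in incident[x] if nbr == y),
                                 d[x])
                for x in incident)
    assert total == pi[y]
print(pi)
```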
The stationary distribution $\pi$ for the Markov chain $\{ T_n \}$
gives a directed tree $t$ rooted at $v$ probability
$$\pi (t) = K d(v) \prod_{e \in t} w(e), $$
where $K = (\sum_t d(\mbox{root}(t))
\prod_{e \in t} w(e))^{-1}$ is a normalizing constant.
If $t$, rooted at $v$, can go to $u$, rooted at $x$, by adding an
edge $e$ and removing the edge $f$, then $\pi (u) / \pi (t)
= d(x) w(e) / d(v) w(f)$. To verify that $\pi$ is a stationary
distribution for $T_n$ write ${\cal{C}} (u)$ for the class of trees
from which it is possible to get to $u$ in one step and for
each $t \in {\cal{C}} (u)$ write $v_t, e_t$ and $f_t$ for the root of
$t$, edge added to $t$ to get $u$ and edge taken away from
$t$ to get $u$ respectively. If $u$ is rooted at $x$, then
\begin{eqnarray*}
{\bf{P}} (T_{n-1} = u) & = & \sum_t {\bf{P}} (T_n = t \mbox{ and }
T_{n-1} = u) \\[2ex]
& = & \sum_{t \in {\cal{C}} (u)} \pi (t) w(e_t) / d(v_t) \\[2ex]
& = & \sum_{t \in {\cal{C}} (u)} [\pi (u) d(v_t) w(f_t) /d(x) w(e_t)] w(e_t)
/ d(v_t) \\[2ex]
& = & \pi (u) \sum_{t \in {\cal{C}} (u)} w(f_t) / d(x) \\[2ex]
& = & \pi (u) ,
\end{eqnarray*}
since as $t$ ranges over all trees that can get to $u$, $f_t$
ranges over all edges incident to $x$.
Finally, we have again that $T_0 (WRW_x^G)$ is distributed
as $T_0 (SWRW^G)$ conditioned on having root $x$, and since
the unconditioned $\pi$ is proportional to $d(x)$ times
the weight of the tree (product of the edge weights), the factor of $d$
is constant and ${\bf{P}} (T_0 (WRW_x^G) = t)$ is proportional to
$\prod_{e \in t} w(e)$ for any $t$ rooted at $x$. Thus
$T_0 (WRW_x^G)$ is distributed identically to $WST$. $
\Box$
\subsection{Applying the random walk construction to our model} \label{2.4}
Although the benefit is not yet clear, we have succeeded in translating
the question of determining ${\bf{P}} (e \in {\bf T})$ from a question
about uniform spanning trees to a question about simple random walks.
To see how this works, suppose that $e$ connects the vertices
$x$ and $y$ and generate a uniform spanning tree by the random
walk construction starting at $x$: ${\bf T} =$ the tree gotten from
$\tau(SRW_x^G)$ by ignoring the root and arrows. If $e \in {\bf T}$
then its orientation in $\tau(SRW_x^G)$ must be from $y$ to $x$,
and so $e \in {\bf T}$ if and only if $SRW (k-1) = x$ where
$k$ is the first time at which $SRW (k) = y$. In other words,
\begin{equation} \label{rw prob 1}
{\bf{P}} (e \in {\bf T}) = {\bf{P}} (\mbox{first visit of } SRW_x^G \mbox{ to }
y \mbox{ is along } e) .
\end{equation}
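Equation~\ref{rw prob 1} is easy to test by simulation on a small
example. For $K_n$, symmetry gives ${\bf{P}} (e \in {\bf T}) = 2/n$, so
the first-visit probability estimated below should come out near $1/2$
when $n = 4$. The helper function and its parameters are ours, not from
the text.

```python
import random

def first_visit_along_e(n, trials, seed=0):
    """Monte Carlo estimate, on the complete graph K_n, of the probability
    that a simple random walk started at x first visits y by stepping
    directly from x.  By the identity above this equals P(xy is in a
    uniform spanning tree), which by symmetry is 2/n."""
    rng = random.Random(seed)
    x, y = 0, 1
    hits = 0
    for _ in range(trials):
        cur = x
        while True:
            nxt = rng.choice([v for v in range(n) if v != cur])
            if nxt == y:
                hits += (cur == x)   # did the walk enter y from x?
                break
            cur = nxt
    return hits / trials

est = first_visit_along_e(4, 20000)
print(est)   # estimate; the exact value is 2/4 = 0.5
```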
The computation of this random walk probability turns out to be tractable.
More important is the fact that this may be iterated to get
probabilities such as ${\bf{P}} (e,f \in {\bf T})$. This requires two
more definitions. If $G$ is a finite connected graph and $e$
is an edge of $G$ whose removal does not disconnect $G$, then
the {\em deletion} of $G$ by $e$ is the graph $G \setminus e$
with the same vertex set and the same edges minus $e$. If $e$
is any edge that connects distinct vertices $x$ and $y$,
then the {\em contraction} of $G$ by $e$ is the graph $G/e$
whose vertices are the vertices of $G$ with $x$ and $y$ replaced
by a single vertex $x*y$. There is an edge $\rho (f)$ of $G/e$ for every
edge $f$ of $G$, where if one or both endpoints of $f$
are $x$ or $y$ then that endpoint is replaced by $x*y$ in $\rho (f)$.
We write $\rho (z)$ for the vertex corresponding to $z$ in this
correspondence, so $\rho (x) = \rho (y) = x*y$ and $\rho (z) = z$
for every $z \neq x,y$. The following example shows $G_1$ and
$G_1 / e_4$. The edge $e_4$ itself maps to a self-edge under $\rho$,
$e_5$ becomes parallel to $e_6$ and $D$ and $E$ map to $D*E$. \\
\setlength{\unitlength}{2pt}
\begin{picture}(200,80)(60,0)
\put(100,50){\circle{2}}
\put(98,53){A}
\put(140,50){\circle{2}}
\put(140,53){B}
\put(140,10){\circle{2}}
\put(140,3){C}
\put(100,10){\circle{2}}
\put(98,3){D}
\put(70,30){\circle{2}}
\put(62,28){E}
\put(100,50){\line(1,0){40}}
\put(120,53){$e_1$}
\put(100,50){\line(0,-1){40}}
\put(102,30){$e_6$}
\put(140,50){\line(0,-1){40}}
\put(142,30){$e_2$}
\put(140,10){\line(-1,0){40}}
\put(120,3){$e_3$}
\put(100,50){\line(-3,-2){30}}
\put(77,42){$e_5$}
\put(100,10){\line(-3,2){30}}
\put(77,12){$e_4$}
\put(155,30){$G_1$}
\put(200,50){\circle{2}}
\put(198,53){A}
\put(240,50){\circle{2}}
\put(240,53){B}
\put(240,10){\circle{2}}
\put(240,3){C}
\put(200,10){\circle{2}}
\put(196,4){\scriptsize D*E}
\put(200,50){\line(1,0){40}}
\put(220,53){$e_1$}
\put(204,30){$e_6$}
\put(240,50){\line(0,-1){40}}
\put(242,30){$e_2$}
\put(240,10){\line(-1,0){40}}
\put(220,3){$e_3$}
\put(200,30){\oval(5,40)}
\put(185,30){$e_5$}
\put(200,2){\circle {16}}
\put(185,3){$e_4$}
\put(255,30){$G_1 / e_4$}
\end{picture}
figure~\thepfigure
\label{pfig2.2}
\addtocounter{pfigure}{1}
It is easy to see that successive deletions and contractions may be
performed in any order with the same result. If $e_1 , \ldots , e_r$
are edges of $G$ whose joint removal does not disconnect $G$ then the
successive deletion of these edges is permissible. Similarly
if $\{ f_1, \ldots , f_s \}$ is a set of edges of $G$ that contains
no cycle, these edges
may be successively contracted and the graph $G \setminus e_1, \ldots , e_r /
f_1 , \ldots , f_s$ is well-defined. It is obvious that the spanning
trees of $G \setminus e$ are just those spanning trees of $G$ that do not
contain $e$. Almost as obvious is a one-to-one correspondence between
spanning trees of $G$ containing $e$ and spanning trees of $G/e$:
if $t$ is a spanning tree of $G$ containing $e$ then there is a
spanning tree of $G/e$ consisting of $\{ \rho (f) : f \neq e \in t \}$.
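This correspondence is easy to check by brute force on the example. The Python sketch below (the labelled edge-list representation is mine, not part of the text) enumerates the spanning trees of $G_1$ and of $G_1 / e_4$ and confirms that $t \mapsto \{ \rho(f) : f \neq e_4 \in t \}$ is a bijection:

```python
from itertools import combinations

def spanning_trees(vertices, edges):
    """Enumerate spanning trees of a multigraph given as (label, u, v)
    triples; a self-loop can never join two distinct components."""
    n = len(vertices)
    index = {v: i for i, v in enumerate(vertices)}
    trees = []
    for subset in combinations(edges, n - 1):
        parent = list(range(n))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        acyclic = True
        for _, u, v in subset:
            ru, rv = find(index[u]), find(index[v])
            if ru == rv:            # repeated root: a cycle or self-loop
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:                 # n-1 acyclic edges always span
            trees.append(subset)
    return trees

# G_1 from the figure: the 4-cycle A-B-C-D plus the path A-E-D.
G1_edges = [("e1", "A", "B"), ("e2", "B", "C"), ("e3", "C", "D"),
            ("e4", "D", "E"), ("e5", "A", "E"), ("e6", "A", "D")]
G1_trees = spanning_trees("ABCDE", G1_edges)

# Contract e4: D and E merge into "D*E"; e4 becomes a self-loop
# (dropped here), and e5 becomes parallel to e6.
rho = lambda x: "D*E" if x in "DE" else x
contracted = [(lab, rho(u), rho(v)) for lab, u, v in G1_edges
              if lab != "e4"]
G1e4_trees = spanning_trees(["A", "B", "C", "D*E"], contracted)

with_e4 = [t for t in G1_trees if any(lab == "e4" for lab, _, _ in t)]
# 11 spanning trees in all; 7 contain e4, matching the 7 trees of G_1/e_4.
print(len(G1_trees), len(with_e4), len(G1e4_trees))
```

Because the contraction preserves edge labels, the bijection can be verified literally: the label sets of trees through $e_4$, with $e_4$ removed, coincide with the label sets of the trees of the contraction.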
To translate ${\bf{P}} (e,f \in {\bf T})$ to the random walk setting,
write this as ${\bf{P}} (e \in {\bf T}) {\bf{P}} (f \in {\bf T} \, | \, e \in {\bf T})$.
The first term has already been translated. The conditional
distribution of a uniform random spanning tree given that it contains
$e$ is just uniform among those trees containing $e$, which
is just ${\bf{P}}_{G/e} (\rho (f) \in {\bf T})$ where the subscript $G/e$
refers to the fact that ${\bf T}$ is now taken to be a uniform
random spanning tree of $G/e$. If $f$ connects $z$ and $x$
then this is in turn equal to ${\bf{P}} (SRW_{\rho (x)}^{G/e}
\mbox{ first hits } \rho (z) \mbox{ along } \rho(f))$. Both the terms
have thus been translated; in general it should be clear how this may
be iterated to translate the probability of any elementary
event, ${\bf{P}} (e_1 , \ldots , e_r \in {\bf T} \mbox{ and } f_1 , \ldots ,
f_s \notin {\bf T})$ into a product of random walk probabilities.
It remains to be seen how these probabilities may be calculated.
\section{Random walks and electrical networks} \label{3}
Sections~\ref{3.1}--\ref{3.3} contain a development of
the connection between random walks and electrical networks.
The right place to read about this is in \cite{DS}; what
you will see here is necessarily a bit rushed. Sections~\ref{3.5}
and~\ref{3.6} contain similarly condensed material from
other sources.
\subsection{Resistor circuits} \label{3.1}
The electrical networks we discuss will have only two kinds of
elements: resistors and voltage sources. Picture the resistors as
straight pieces of wire. A resistor network will be built by soldering
resistors together at their endpoints. That means that a diagram
of a resistor network will just look like a finite graph with
each edge bearing a number: the resistance.
Associated with every resistor network $H$ is a weighted
graph $G_H$ which looks exactly like the graph just mentioned except
that the weight of an edge is not the resistance but the
{\em conductance}, which is the reciprocal of the resistance.
The distinction between $H$ and $G_H$ is only necessary while we are
discussing precise definitions and will then be dropped.
A voltage source may be a single battery that provides a specified
voltage difference (explained below) across a specified pair
of vertices or it may be a more complicated device to hold various
voltages fixed at various vertices of the network.
Here is an example of a resistor network on a familiar graph,
with a one volt battery drawn as a dashed box. Resistances on
the edges (made up arbitrarily) are given in ohms.
\setlength{\unitlength}{2pt}
\begin{picture}(180,80)
\put(100,50){\circle{2}}
\put(90,53){A}
\put(140,50){\circle{2}}
\put(142,53){B}
\put(140,10){\circle{2}}
\put(140,3){C}
\put(100,10){\circle{2}}
\put(98,2){D}
\put(70,30){\circle{2}}
\put(62,28){E}
\put(100,50){\line(1,0){40}}
\put(120,44){1}
\put(100,50){\line(0,-1){40}}
\put(102,30){2}
\put(140,50){\line(0,-1){40}}
\put(142,30){2}
\put(140,10){\line(-1,0){40}}
\put(120,1){.5}
\put(100,50){\line(-3,-2){30}}
\put(77,42){1}
\put(100,10){\line(-3,2){30}}
\put(77,12){1}
\put (106,56) {\dashbox{2}(28,14){1 V}}
\put (106,63) {\line (-1,-2){6}}
\put (100,72) {\scriptsize +}
\put (134,63) {\line (1,-2){6}}
\put (137,72) {-}
\end{picture}
figure~\thepfigure
\label{pfig3.1}
\addtocounter{pfigure}{1}
The electrical properties of such a network are given by Kirchhoff's
laws. For the sake of exposition I will give the laws numbers,
although these do not correspond to the way Kirchhoff actually
stated the laws. The first law is that every vertex of the network has
a voltage which is a real number.
The second law gives every oriented edge (resistor) a current.
Each edge has two possible orientations. Say an edge connects
$x$ and $y$. Then the current through the edge is a real number
whose sign depends on which orientation you choose for the edge.
In other words, the current $I({\vec {xy}})$ that flows from $x$ to $y$
is some real number and the current $I({\vec {yx}})$ is its negative.
(Note though that the weights $w(e)$
are always taken to be positive; weights are functions of unoriented
edges, whereas currents are functions of oriented edges.)
If $I (e)$ denotes the current along an oriented edge $e = {\vec {xy}}$,
$V(x)$ denotes the voltage at $x$ and $R(e)$ denotes the resistance
of $e$, then, quantitatively, the second law says
\begin{equation} \label{KL1}
I({\vec {xy}}) = [V(x) - V(y)] R(e)^{-1} .
\end{equation}
Kirchhoff's third law is that the total current flowing into
a vertex equals the total current flowing out, or in other
words
\begin{equation} \label{KL1.5}
\sum_{y \sim x} I({\vec {xy}}) = 0 .
\end{equation}
This may be rewritten using~(\ref{KL1}).
Recalling that in the weighted graph $G_H$, the weight $w(e)$
is just $R(e)^{-1}$ and that $d(v)$ denotes the sum of $w(e)$
over edges incident to $v$, we get at every vertex $x$ an equation
\begin{equation} \label{KL2}
0 = \sum_{y \sim x} [V(x) - V(y)] w(xy) =
V(x) d(x) - \sum_{y \sim x} V(y) w(xy) .
\end{equation}
Since a voltage source may provide current, this may fail to hold at
any vertex connected to a voltage source. The above laws are
sufficient to specify the voltages of the network -- and hence the
currents -- except that a constant may be added to all the voltages
(in other words, it is the voltage
differences that are determined, not the absolute voltages).
In the above example the voltage difference across
$AB$ is required to be one. Setting the voltage at $B$ to
zero (since the voltages are determined only up to an additive constant)
the reader may check that the voltages at $A,C,D$ and $E$ are
respectively $1,4/7,5/7$ and $6/7$ and the currents through
$AB,AE,ED,AD,DC,CB$ are respectively $1,1/7,1/7,1/7,2/7,2/7$.
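The reader's check can be automated. The Python sketch below (vertex names and conductances are taken from the figure; the elimination code is mine) solves the node equations~(\ref{KL2}) at $C$, $D$ and $E$, with $V(A) = 1$ and $V(B) = 0$ fixed by the battery, in exact rational arithmetic:

```python
from fractions import Fraction

# Conductances w(e) = 1/R(e) for the network of the figure:
# AB: 1 ohm, AD: 2, BC: 2, DC: 0.5, AE: 1, DE: 1 (ohms).
w = {("A", "B"): Fraction(1),    ("A", "D"): Fraction(1, 2),
     ("B", "C"): Fraction(1, 2), ("C", "D"): Fraction(2),
     ("A", "E"): Fraction(1),    ("D", "E"): Fraction(1)}
adj = {}
for (u, v), c in w.items():
    adj.setdefault(u, {})[v] = c
    adj.setdefault(v, {})[u] = c

# Voltages fixed by the one-volt battery; C, D, E are the unknowns.
V = {"A": Fraction(1), "B": Fraction(0)}
unknown = ["C", "D", "E"]

# Node equation at each unknown vertex x:
#   d(x) V(x) - sum_y w(xy) V(y) = 0,
# with the known voltages moved to the right-hand side.
A = [[Fraction(0)] * 3 for _ in range(3)]
b = [Fraction(0)] * 3
for r, x in enumerate(unknown):
    A[r][r] = sum(adj[x].values())            # d(x)
    for y, c in adj[x].items():
        if y in unknown:
            A[r][unknown.index(y)] -= c
        else:
            b[r] += c * V[y]

# Gauss-Jordan elimination over the rationals.
for i in range(3):
    p = next(r for r in range(i, 3) if A[r][i] != 0)
    A[i], A[p] = A[p], A[i]; b[i], b[p] = b[p], b[i]
    for r in range(3):
        if r != i and A[r][i] != 0:
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
sol = {x: b[i] / A[i][i] for i, x in enumerate(unknown)}
print(sol)   # C = 4/7, D = 5/7, E = 6/7, matching the text
```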
\subsection{Harmonic functions} \label{3.2}
The voltages in a weighted graph $G$ (which we are now identifying with
the resistor network it represents) under application of a voltage
source are calculated by finding a solution to Kirchhoff's laws on $G$
with specified boundary conditions. For each vertex $x$ there is an unknown
voltage $V(x)$. There is also a linear equation for every
vertex not connected to a voltage source, and an equation given by
the nature of each voltage source. Will these always be
enough information so that Kirchhoff's laws have a unique solution?
The answer is yes and it is most easily seen in the context of harmonic
functions.\footnote{There is also the question of whether any
solution exists, but addressing that would take us too far afield.
If you aren't convinced of its existence on physical grounds, wait
until the next subsection where a probabilistic interpretation for the
voltage is given, and then deduce existence of a solution from
the fact that these probabilities obey Kirchhoff's laws.}
If $f$ is a function on the vertices of a weighted graph $G$, define
the {\em excess} of $f$ at a vertex $v$, written $\bigtriangleup f(v)$ by
$$ \bigtriangleup f (v) = \sum_{y \sim v} [f(v) - f(y)] w(vy) .$$
You can think of $\bigtriangleup$ as an operator that maps functions $f$ to
other functions $\bigtriangleup f$ that is a discrete analog of
the Laplacian operator.
A function $f$ from the vertices of a finite weighted graph $G$
to the reals is said to be {\em harmonic} at a vertex
$v$ if and only if $\bigtriangleup f(v) = 0$.
Note that for any function $f$, the sum of the excesses
$\sum_{v \in G} \bigtriangleup f(v) = 0$, since each $[f(x)-f(y)]w(xy)$
cancels a $[f(y)-f(x)]w(yx)$ due to $w(xy) = w(yx)$.
To see what harmonic functions are intuitively, consider the special case
where $G$ is unweighted, i.e. all of the edge weights are one. Then a
function is harmonic if and only if its value at a vertex $x$ is the
average of the values at the neighbors of $x$. In the weighted
case the same is true, but with a weighted average! Here is an easy
but important lemma about harmonic functions.
\begin{lem}[Maximum principle] \label{max principle}
Let $V$ be a function on the vertices of a finite connected weighted
graph, harmonic everywhere except possibly at vertices of
some set $X = \{ x_1 , \ldots , x_k \}$.
Then $V$ attains its maximum and minimum on $X$. If
$V$ is harmonic everywhere then it is constant.
\end{lem}
\noindent{Proof:} Let $S$ be the set of vertices where $V$ attains
its maximum. Certainly $S$ is nonempty. If $x \in S$ has a neighbor
$y \notin S$ then $V$ cannot be harmonic at $x$ since $V(x)$ would
then be a weighted average of values less than or equal to $V(x)$
with at least one strictly less. In the case where $V$ is harmonic
everywhere, this shows that no vertex in $S$ has a neighbor not in
$S$, hence since the graph is connected every vertex is in $S$ and
$V$ is constant. Otherwise, suppose $V$ attains its maximum at
some $y \notin X$ and pick a path connecting
$y$ to some $x \in X$. The entire path must then be in $S$ up until
and including the first vertex along the path at which $V$ is not
harmonic. This is some $x' \in X$. The argument for the minimum is just
the same. $
\Box$
Kirchhoff's third law~(\ref{KL2}) says that the voltage function is
harmonic at every $x$ not connected to a voltage source.
Suppose we have a voltage source that provides a fixed voltage
at some specified set of vertices. Say for concreteness that
the vertices are $x_1, \ldots , x_k$ and the voltages produced
at these vertices are $c_1 , \ldots , c_k$. We now show that
Kirchhoff's laws determine the voltages everywhere else, i.e. there
is at most one solution to them.
\begin{th} \label{unique fix V}
Let $V$ and $W$ be real-valued functions on the vertices of a finite
weighted graph $G$. Suppose that $V(x_i) = W(x_i) = c_i$ for some set
of vertices $x_1 , \ldots , x_k$ and $1 \leq i \leq k$ and that
$V$ and $W$ are harmonic at every vertex other than $x_1 , \ldots , x_k$.
Then $V=W$.
\end{th}
\noindent{Proof:} Consider the function $V-W$. It is easy to check
that being harmonic at $x$ is a linear property, so $V-W$ is
harmonic at every vertex at which both $V$ and $W$ are harmonic.
Then by the Maximum Principle, $V-W$ attains its maximum and minimum
at some $x_i$. But $V-W = 0$ at every $x_i$, so $V-W \equiv 0$.
$
\Box$
Suppose that instead of fixing the voltages at a number of points,
the voltage source acts as a current source and supplies a
fixed amount of current $I_i$ to vertices $x_i , 1 \leq i \leq k$.
This is physically reasonable only if $\sum_{i=1}^k I_i = 0$.
Then a net current of $I_i$ will have to flow out of each $x_i$
into the network. Using~(\ref{KL1}) gives
$$I_i = \sum_{y \sim x_i} w(x_i,y) [V(x_i) - V(y)] =
\bigtriangleup V(x_i) .$$
From this it is apparent that the assumption $\sum_i I_i = 0$
is algebraically as well as physically necessary since the excesses must
sum to zero. Kirchhoff's laws also determine the voltages (up to an
additive constant) of a network with current sources, as we now show.
\begin{th} \label{unique fix I}
Let $V$ and $W$ be real-valued functions on the vertices of a finite
weighted graph $G$. Suppose that $V$ and $W$ both have excess
$c_i$ at $x_i$ for some set of vertices $x_i$ and reals $c_i$ , $1 \leq
i \leq k$. Suppose also that $V$ and $W$ are harmonic elsewhere. Then
$V=W$ up to an additive constant.
\end{th}
\noindent{Proof:} Excess is linear, so the excess of $V-W$ is the
excess of $V$ minus the excess of $W$. This is zero everywhere, so
$V-W$ is harmonic everywhere. By the Maximum Principle, $V-W$
is constant. $
\Box$
\subsection{Harmonic random walk probabilities} \label{3.3}
Getting back to the problem of random walks, suppose $G$ is a finite
connected graph and $x,a,b$ are vertices of $G$. Let's say that I want
to calculate the probability that $SRW_x$ reaches $a$ before $b$.
Call this probability $h_{ab} (x)$. It is not immediately obvious
what this probability is, but we can get an equation by watching
where the random walk takes its first step. Say the neighbors of $x$
are $y_1 , \ldots , y_d$. Then ${\bf{P}} (SRW_x (1) = y_i) = d^{-1}$ for
each $i \leq d$. If we condition on the event $SRW_x (1) = y_i$ then
the probability of the walk reaching $a$ before $b$ is (by the
Markov property) the same as if it had started out at $y_i$. This
is just $h_{ab} (y_i)$. Thus
\begin{eqnarray*}
h_{ab} (x) & = & \sum_i {\bf{P}} (SRW_x (1) = y_i) h_{ab} (y_i) \\[2ex]
&=& d^{-1} \sum_i h_{ab} (y_i) .
\end{eqnarray*}
In other words, $h_{ab}$ is harmonic at $x$. Be careful though, if
$x$ is equal to $a$ or $b$, it doesn't make sense to look one step
ahead since $SRW_x (0)$ already determines whether the walk hit
$a$ or $b$ first. In particular, $h_{ab} (a) = 1$ and $h_{ab} (b) = 0$,
with $h_{ab}$ being harmonic at every $x \neq a,b$.
Theorem~\ref{unique fix V} tells us that there is only one such
function $h_{ab}$. This same function solves Kirchhoff's laws for
the unweighted graph $G$ with voltages at $a$ and $b$ fixed at
$1$ and $0$ respectively. In other words, the probability of $SRW_x$
reaching $a$ before $b$ is just the voltage at $x$ when a one volt
battery is connected to $a$ and $b$ and the voltage at $b$ is
taken to be zero. If $G$ is a weighted graph, we can use a similar
argument: it is easy to check that the first-step transition
probabilities $p(x,y) = w({\vec {xy}}) / \sum_z w({\vec {xz}})$ show that
$h_{ab} (x)$ is harmonic in the sense of weighted graphs. Summarizing
this:
\begin{th} \label{elec rw}
Let $G$ be a finite connected weighted graph. Let $a$ and $b$ be
vertices of $G$. For any vertex $x$, the probability of $SRW_x^{G}$
reaching $a$ before $b$ is equal to the voltage at $x$ in $G$
when the voltages at $a$ and $b$ are fixed at one and zero volts
respectively.
\end{th}
Although more generality will not be needed, we remark that this same
theorem holds when $a$ and $b$ are taken to be sets of vertices.
The probability of $SRW_x$ reaching a vertex in $a$ before reaching
a vertex in $b$ is harmonic at vertices not in $a \cup b$, is zero
on $b$ and one on $a$. The voltage when vertices in $b$ are held at
zero volts and vertices in $a$ are held at one volt also satisfies
this, so the voltages and the probabilities must coincide.
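Theorem~\ref{elec rw} is also easy to test by simulation. The Python sketch below runs the weighted random walk on the network of Section~\ref{3.1} (transition probabilities proportional to conductances) and estimates the probability of reaching $A$ before $B$ starting from $D$; by the theorem this should be the voltage $5/7$ computed earlier. The sample size and seed are arbitrary choices of mine:

```python
import random

# Conductances (reciprocal ohms) from the resistor-network figure;
# the walk steps with probability proportional to edge conductance.
adj = {"A": {"B": 1.0, "D": 0.5, "E": 1.0},
       "B": {"A": 1.0, "C": 0.5},
       "C": {"B": 0.5, "D": 2.0},
       "D": {"A": 0.5, "C": 2.0, "E": 1.0},
       "E": {"A": 1.0, "D": 1.0}}

def hits_a_first(start, rng):
    x = start
    while x not in ("A", "B"):
        nbrs = list(adj[x])
        wts = [adj[x][y] for y in nbrs]
        x = rng.choices(nbrs, weights=wts)[0]
    return x == "A"

rng = random.Random(0)
trials = 20000
est = sum(hits_a_first("D", rng) for _ in range(trials)) / trials
print(est)   # should be close to the voltage at D, namely 5/7
```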
Having given an interpretation of voltage in probabilistic terms,
the next thing to find is a probabilistic interpretation of the
current. The arguments are similar so they will be treated briefly; a
more detailed treatment appears in \cite{DS}. First we will need
to find an electrical analogue for the numbers $u_{ab} (x)$ which
are defined probabilistically as the expected number of times
a $SRW_a$ hits $x$ before the first time it hits $b$. This is
defined to be zero for $x=b$. For any $x \neq a,b$, let
$y_1, \ldots , y_r$ be the neighbors of $x$. Then the number
of visits to $x$ before hitting $b$ is the sum over $i$ of the number
of times $SRW_a$ hits $y_i$ before $b$ and goes to $x$ on the
next move (the walk had to be somewhere the move before it hit $x$).
By the Markov property, this quantity is $u_{ab} (y_i) p(y_i,x)
= u_{ab} (y_i) w({\vec {xy}}_i) / d(y_i)$. Letting $\phi_{ab} (z)$
denote $u_{ab} (z) / d(z)$ for any $z$, this yields
$$d(x) \phi_{ab} (x) = u_{ab} (x) = \sum_i u_{ab} (y_i) w({\vec {xy}}_i) / d(y_i)
= \sum_i w({\vec {xy}}_i) \phi_{ab} (y_i) .$$
In other words $\phi_{ab}$ is harmonic at every $x \neq a,b$.
Writing $K_{ab}$ for $\phi_{ab} (a)$ we then have that
$\phi_{ab}$ is $K_{ab}$ at $a$, zero at $b$ and harmonic elsewhere,
hence it is the same function as the voltage induced by
a battery of $K_{ab}$ volts connected to $a$ and $b$, with the
voltage at $b$ taken to be zero. Without yet knowing what $K_{ab}$
is, this determines $\phi_{ab}$ up to a constant multiple. This
in turn determines $u_{ab}$, since $u_{ab} (x) = d(x) \phi_{ab} (x)$.
Now imagine that we watch $SRW_a$ to see when it crosses
over a particular edge ${\vec {xy}}$ and count plus one every
time it crosses from $x$ to $y$ and minus one every time it crosses
from $y$ to $x$. Stop counting as soon as the walk hits $b$.
Let $H_{ab}({\vec {xy}})$ denote the expected number of
signed crossings. ($H$ now stands for harmonic, not for the name of a
resistor network.) We can calculate $H$ in terms of $u_{ab}$
by counting the plusses and the minuses separately. The expected
number of plus crossings is just the expected number of times
the walk hits $x$, multiplied by the probability on each of
these occasions that the walk
crosses to $y$ on the next move. This is $u_{ab} (x)
w({\vec {xy}}) / d(x)$. Similarly the expected number of minus
crossings is $u_{ab} (y) w({\vec {xy}}) / d(y).$ Thus
\begin{eqnarray*}
H_{ab} ({\vec {xy}}) & = & u_{ab} (x) w({\vec {xy}}) / d(x) - u_{ab} (y) w({\vec {xy}}) /
d(y) \\[2ex]
& = & w({\vec {xy}}) [ \phi_{ab} (x) - \phi_{ab} (y) ] .
\end{eqnarray*}
But $\phi_{ab} (x) - \phi_{ab} (y)$ is just the voltage difference
across ${\vec {xy}}$ induced by a $K_{ab}$-volt battery across $a$ and $b$.
Using~(\ref{KL1}) and $w({\vec {xy}}) = R({\vec {xy}})^{-1}$ shows that the expected
number of signed crossings of ${\vec {xy}}$ is just the current induced
in ${\vec {xy}}$ by a $K_{ab}$-volt battery connected to $a$ and $b$.
A moment's thought shows that the expected number of signed
crossings of all edges leading out of $a$ must be one, since the
walk is guaranteed to leave $a$ one more time than it returns to
$a$. So the current supplied by the $K_{ab}$-volt battery must
be one amp. Another way of saying this is that
\begin{equation} \label{eq2}
\bigtriangleup \phi_{ab} = \delta_a - \delta_b .
\end{equation}
Instead of worrying about what $K_{ab}$ is, we may
just as well say that the expected number of crossings of
${\vec {xy}}$ by $SRW_a$ before hitting $b$ is the current induced
when one amp is supplied to $a$ and drawn out at $b$.
\subsection{Electricity applied to random walks applied to spanning trees}
\label{3.4}
Finally we can address the random walk question that relates to
spanning trees. In particular, the claim that the probability
in equation~(\ref{rw prob 1}) is tractable will be borne out several
different ways. First we will see how the probability may be
``calculated'' by an analog computing device, namely a resistor
network. In the next subsection, the computation will be carried out
algebraically and very neatly, but only for particularly nice
symmetric graphs. At the end of the section, a universal method
will be given for the computation which is a little messier. Finally in
Section~\ref{4} the question of the individual probabilities in~(\ref{rw prob
1}) will be avoided altogether and we will see instead how values
for these probabilities (wherever they might come from) determine
the probabilities for all contractions and deletions of the graph
and therefore determine all the joint probabilities ${\bf{P}} (e_1 , \ldots
, e_k \in {\bf T})$ and hence the entire measure.
Let $e = {\vec {xy}}$ be any edge of a finite connected weighted graph $G$.
Run $SRW_x^G$ until it hits $y$. At this point either the walk just
moved along $e$ from $x$ to $y$ -- necessarily for the first time -- and
$e$ will be in the tree ${\bf T}$ given by $\tau(SRW_x^G)$, or else
the walk arrived at $y$ via a different edge in which case the walk
never crossed $e$ at all and $e \notin {\bf T}$. In either case the
walk never crossed from $y$ to $x$ since it stops if it hits $y$.
Then the expected number of signed crossings of $e={\vec {xy}}$ by $SRW_x$
up to the first time it hits $y$ is equal to the probability of
first reaching $y$ along $e$ which equals ${\bf{P}} (e \in {\bf T})$.
Putting this together with the electrical interpretation of signed
crossings gives
\begin{th} \label{current}
${\bf{P}} (e \in {\bf T}) = $ the fraction of the current that goes through
edge $e$ when a battery is hooked up to the two endpoints of $e$.
\end{th}
$
\Box$
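Theorem~\ref{current} can be checked by hand on the unweighted version of $G_1$ (all resistances one ohm). The Python sketch below counts spanning trees containing $e_1 = AB$ directly, and separately computes the fraction of current through $AB$ by solving the three node equations; both give $8/11$. The edge-list representation and the substitution worked out in the comments are mine:

```python
from fractions import Fraction
from itertools import combinations

verts = "ABCDE"
edges = [("A", "B"), ("B", "C"), ("C", "D"),
         ("A", "D"), ("A", "E"), ("D", "E")]

def is_spanning_tree(subset):
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True          # four acyclic edges on five vertices span

trees = [t for t in combinations(edges, 4) if is_spanning_tree(t)]
p_tree = Fraction(sum(("A", "B") in t for t in trees), len(trees))

# Electrical side: unit conductances, V(A) = 1, V(B) = 0.  Node
# equations: 2 V(C) = V(D), 3 V(D) = 1 + V(C) + V(E), 2 V(E) = 1 + V(D);
# substituting the first and third into the second gives 2 V(D) = 3/2.
D = Fraction(3, 4); C = D / 2; E = (1 + D) / 2
assert 2 * C == D and 3 * D == 1 + C + E and 2 * E == 1 + D
total_out_of_A = (1 - 0) + (1 - D) + (1 - E)   # currents on AB, AD, AE
p_current = 1 / total_out_of_A                 # share through edge AB
print(p_tree, p_current)
```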
This characterization leads to a proof of Theorem~\ref{neg cor}
provided we are willing to accept a proposition that is
physically obvious but not so easy to prove, namely
\begin{th}[Rayleigh's monotonicity law] \label{Rayleigh}
The effective resistance of a circuit cannot increase when
a new resistor is added.
\end{th}
The reason this is physically obvious is that adding a new
resistor provides a new path for current
to take while allowing the current still to flow through
all the old paths. Theorem~\ref{neg cor} says that the conditional
probability of $e \in {\bf T}$ given $f \in {\bf T}$ must be less
than or equal to the unconditional probability. Using
Theorem~\ref{current} and the fact that the probabilities
conditioned on $f \notin {\bf T}$ are just the probabilities for
$WST$ on $G \setminus f$, this boils down to showing that the fraction
of current flowing directly across $e$ is no greater on $G$
than it is on $G \setminus f$. The battery across $e$ meets two parallel
resistances: $e$ and the effective resistance of the rest of $G$.
The current divides between these two branches in inverse
proportion to their resistances. Rayleigh's theorem says
that the effective resistance of the rest of $G$ including
$f$ is at most the effective resistance of $G \setminus f$, so the fraction
flowing through $e$ on $G$ is at most the fraction flowing
through $e$ on $G \setminus f$.
In Section~\ref{4}, a proof will be given that does not rely
on Rayleigh.
\subsection{Algebraic calculations for the square lattice} \label{3.5}
If $G$ is a finite graph, then the functions from the vertices of $G$
to the reals form a finite-dimensional real vector space. The operator
$\bigtriangleup$ that maps a function $V$ to its excess is a linear operator
on this vector space. In this
language, the voltages in a resistor network with one
unit of current supplied to $a$ and drawn out at $b$ are the unique
(up to additive constant) function $V$ that solves $\bigtriangleup V =
\delta_a - \delta_b$. Here $\delta_x$ is the function that is one at $x$
and zero elsewhere. This means that $V$ can be calculated simply
by inverting $\bigtriangleup$ in the basis $\{ \delta_x ; x \in G \}$. Although
$\bigtriangleup$ is technically not invertible, its nullspace has dimension one
so it can be inverted on a set of codimension one. A classical
determination of $V$ for arbitrary graphs is carried out in the
next subsection. The point of this subsection is to show how the
inverse can be obtained in a simpler way for nice graphs.
The most general ``nice'' graphs to which the method will apply are
the infinite $\hbox{Z\kern-.4em\hbox{Z}}^d$-periodic lattices. Since in this article I am
restricting attention to finite graphs, I will not attempt to be general
but will instead show a single example. The reader may look in
\cite{BP} for further generality. The example considered here is
the square lattice. This is just the graph you see on a piece of
graph paper, with vertices at each pair of integer coordinates and four
edges connecting each point to its nearest neighbors. The exposition
will be easiest if we consider a finite square piece of this and impose
wrap-around boundary conditions. Formally, let $T_n$ (T for torus) be
the graph whose vertices are pairs of integers $\{ (i,j) : 0
\leq i,j \leq n-1 \}$ and for which two points are connected if and only
if they agree in one component and differ by one mod $n$ in the other
component. Here is a picture of this with $n=3$ and the broken edges
denoting edges that wrap around to the other side of the graph. The
graph is unweighted (all edge weights are one).
\begin{picture}(150,90)
\put (20,20) {\circle {2}}
\put (20,20) {\line (1,0) {20}}
\put (20,20) {\line (0,1) {20}}
\put (20,40) {\circle {2}}
\put (20,40) {\line (1,0) {20}}
\put (20,40) {\line (0,1) {20}}
\put (20,60) {\circle {2}}
\put (20,60) {\line (1,0) {20}}
\put (20,60) {\dashbox{1}(0,15)}
\put (40,20) {\circle {2}}
\put (40,20) {\line (1,0) {20}}
\put (40,20) {\line (0,1) {20}}
\put (40,40) {\circle {2}}
\put (40,40) {\line (1,0) {20}}
\put (40,40) {\line (0,1) {20}}
\put (40,60) {\circle {2}}
\put (40,60) {\line (1,0) {20}}
\put (40,60) {\dashbox{1}(0,15)}
\put (60,20) {\circle {2}}
\put (60,20) {\dashbox{1}(15,0)}
\put (60,20) {\line (0,1) {20}}
\put (60,40) {\circle {2}}
\put (60,40) {\dashbox{1}(15,0)}
\put (60,40) {\line (0,1) {20}}
\put (60,60) {\circle {2}}
\put (60,60) {\dashbox{1}(0,15)}
\put (60,60) {\dashbox{1}(15,0)}
\end{picture}
figure~\thepfigure
\label{pfig3.2}
\addtocounter{pfigure}{1}
Let $\zeta = e^{2\pi i/n}$ denote a primitive $n^{th}$ root of unity.
To invert $\bigtriangleup$ we exhibit its eigenvectors. Since the vector
space is a space of functions, the eigenvectors are called
eigenfunctions. For each pair of integers $0 \leq k,l \leq n-1$
let $f_{kl}$ be the function on the vertices of $T_n$ defined by
$$f_{kl} (i,j) = \zeta^{ki+lj} .$$
If you have studied group representations, you will recognize the $f_{kl}$
as the irreducible characters of the group $T_n = (\hbox{Z\kern-.4em\hbox{Z}} / n\hbox{Z\kern-.4em\hbox{Z}})^2$, and in fact
the rest of this section may be restated more compactly in terms of
characters of this abelian group.
It is easy to calculate
\begin{eqnarray*}
\bigtriangleup f_{kl} (i,j) & = & 4 \zeta^{ki+lj} - \zeta^{ki+l(j+1)} - \zeta^{ki+l(j-1)}
- \zeta^{k(i+1)+lj} - \zeta^{k(i-1)+lj} \\[2ex]
& = & \zeta^{ki+lj} (4 - \zeta^k - \zeta^{-k} - \zeta^l - \zeta^{-l} )
\\[2ex]
& = & \zeta^{ki+lj} (4 - 2 \cos (2\pi k/n) - 2 \cos (2 \pi l/n)) .
\end{eqnarray*}
Since the multiplicative factor $(4 - 2 \cos (2\pi k/n) - 2 \cos
(2 \pi l/n))$ does not depend on $i$ or $j$, this shows that
$f_{kl}$ is indeed an eigenfunction for $\bigtriangleup$
with eigenvalue $\lambda_{kl} = 4 - 2 \cos (2\pi k/n) - 2 \cos
(2 \pi l/n)$.
Now if $\{ v_k \}$ are eigenvectors for some linear operator
$A$ with eigenvalues $\{ \lambda_k \}$, then for any constants
$\{ c_k \}$,
\begin{equation} \label{invert eig}
A^{-1} (\sum_k c_k v_k) = \sum_k \lambda_k^{-1} c_k v_k .
\end{equation}
If some $\lambda_k$ is equal to zero, then the range of $A$ does not
include vectors $w$ with $c_k \neq 0$, so $A^{-1} w$ does not exist for
such $w$ and indeed the formula blows up due to the $\lambda_k^{-1}$.
In our case $\lambda_{kl} = 4 - 2 \cos (2\pi k/n) - 2 \cos
(2 \pi l/n) = 0$ only when $k=l=0$.
Thus to calculate $\bigtriangleup^{-1} (\delta_a - \delta_b)$ we need to
figure out coefficents $c_{kl}$ for which $\delta_a - \delta_b =
\sum_{kl} c_{kl} f_{kl}$ and verify that $c_{00} = 0$.
For this purpose, it is fortunate that
the eigenfunctions $\{ f_{kl} \}$ are actually a unitary basis
in the inner product $<f,g> = n^{-2} \sum_{ij} f(i,j) \overline{g(i,j)}$.
You can check this by writing
$$< f_{kl} , f_{k'l'} > = n^{-2} \sum_{ij} \zeta^{ki+lj} \overline
{\zeta^{k'i+l'j}};$$
elementary algebra shows this to be one if $k=k'$ and $l=l'$ and zero
otherwise, which is what it means to be unitary. Unitary bases are
great for calculation because the coefficients $\{c_{kl} \}$ of
any $V$ in a unitary eigenbasis $\{ f_{kl} \}$ are
given by $c_{kl} = <V , f_{kl}>$. In our case, this means
$c_{kl} = n^{-2} \sum_{ij} V(i,j) \overline{f_{kl} (i,j)}$. Letting
$a$ be the vertex $(0,0)$, $b$ be the vertex $(1,0)$ and $V = \delta_a
- \delta_b$, this gives $c_{kl} = n^{-2} (1 - {\overline{\zeta}}^k)$ and hence
$$ \delta_a - \delta_b = n^{-2} \sum_{k,l} (1 - {\overline{\zeta}}^k ) f_{kl} .$$
We can now plug this into equation~(\ref{invert eig}), since
clearly $c_{00} = 0$. This gives
\begin{eqnarray}
\bigtriangleup V & = & \delta_a - \delta_b \nonumber \\[2ex]
&&\nonumber \\
\Leftrightarrow \;\;\; V(i,j) & = & cf_{00}(i,j) + n^{-2} \sum_{(k,l) \neq
(0,0)} (1 - {\overline{\zeta}}^k) \lambda_{kl}^{-1} f_{kl}(i,j) \nonumber \\[2ex]
& = & c + n^{-2} \sum_{(k,l) \neq (0,0)} { 1 - {\overline{\zeta}}^k \over 4 -
2 \cos (2 \pi k / n) - 2 \cos (2 \pi l/n)} \zeta^{ki+lj} . \label{eq1}
\end{eqnarray}
This sum is easy to compute exactly and to approximate efficiently
when $n$ is large. In particular as $n \rightarrow \infty$ the
sum may be replaced by an integral which by a small miracle admits
an exact computation. Details of this may be found in \cite[page 148]{Sp}.
You may check your arithmetic against mine by using~(\ref{eq1}) to derive the
voltages for a one volt battery placed across the bottom left
edge $e$ of $T_3$ and across the bottom left edge $e'$ of $T_4$:
$$\begin{array}{ccc} 5/8 & 3/8 & 1/2 \\ 5/8 & 3/8 & 1/2 \\ 1 & 0 & 1/2
\end{array} \hspace{2in} \begin{array}{cccc} 56/90 & 34/90 & 40/90 & 50/90 \\
50/90 & 40/90 & 42/90 & 48/90 \\ 56/90 & 34/90 & 40/90 & 50/90 \\
1 & 0 & 34/90 & 56/90 \end{array}
$$
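These tables can be reproduced numerically. The Python sketch below evaluates the Fourier inversion of $\bigtriangleup$ for $n=3$, using the coefficients $1 - {\overline{\zeta}}^k$ computed above; the constant $c$ and any overall scaling drop out once the solution is normalized to a one-volt battery across $e$, so they are simply omitted:

```python
from cmath import exp, pi

n = 3
zeta = exp(2j * pi / n)

def V(i, j):
    # Fourier inversion of the discrete Laplacian; the constant term
    # and overall scale are irrelevant after the normalization below.
    s = 0
    for k in range(n):
        for l in range(n):
            if (k, l) == (0, 0):
                continue
            lam = 4 - 2 * (zeta**k).real - 2 * (zeta**l).real
            s += (1 - zeta**(-k)) / lam * zeta**(k * i + l * j)
    return s.real          # the imaginary parts cancel in pairs

# Normalize so that a = (0,0) sits at one volt and b = (1,0) at zero.
Va, Vb = V(0, 0), V(1, 0)
W = [[(V(i, j) - Vb) / (Va - Vb) for j in range(n)] for i in range(n)]
print(W)   # column i of the table, bottom entry first, is W[i]
```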
Section~\ref{5} shows how to put these numbers to good use, but
we can already make one calculation based on Theorem~\ref{current}.
The four currents flowing out of the bottom left vertex under
the voltages shown are given by the voltage differences: $1, 3/8, 1/2$
and $3/8$. The fraction of the current flowing directly through
the bottom left edge $e$ is $8/18$, and according to Theorem~\ref{current},
this is ${\bf{P}} (e \in {\bf T})$. An easy way to see that this is right is by
the symmetry of the graph $T_3$. Each of the 18 edges should
be equally likely to be in ${\bf T}$, and since every spanning tree
has 8 edges, the probability of any given edge being in the tree
must be $8/18$.
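For a graph this small the claim can also be verified by exhaustive enumeration. The Python sketch below examines all ${18 \choose 8} = 43758$ eight-edge subsets of $T_3$ and keeps the spanning trees; with my indexing of the torus, the bottom left edge is $((0,0),(1,0))$. It finds $11664$ spanning trees, $5184$ of which contain the edge, giving $8/18$ as predicted:

```python
from itertools import combinations

n = 3
verts = [(i, j) for i in range(n) for j in range(n)]
edges = []
for i, j in verts:
    edges.append(((i, j), ((i + 1) % n, j)))   # horizontal, wrapping
    edges.append(((i, j), (i, (j + 1) % n)))   # vertical, wrapping
assert len(edges) == 18

def is_spanning_tree(subset):
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True          # eight acyclic edges on nine vertices span

e = ((0, 0), (1, 0))
total = containing = 0
for subset in combinations(edges, len(verts) - 1):
    if is_spanning_tree(subset):
        total += 1
        containing += e in subset
print(total, containing)
```

By edge-transitivity of the torus, every edge lies in the same number of trees, so the fraction must be $8/18$ exactly.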
\subsection{Electrical networks and spanning trees} \label{3.6}
The order in which topics have been presented so far makes sense
from an expository viewpoint but is historically backwards. The
first interest in enumerating spanning trees came from problems in
electrical network theory. To set the record straight and also to close
the circle of ideas
\begin{quote}
spanning trees $\rightarrow$ random walks $\rightarrow$ electrical
networks $\rightarrow$ spanning trees
\end{quote}
I will spend a couple of paragraphs on this remaining connection.
Let $G$ be a finite weighted graph. Assume there are no voltage
sources and the quantity of interest is the {\em effective resistance}
between two vertices $a$ and $b$. This is defined to be the voltage
it is necessary to place across $a$ and $b$ to induce a unit current
flow. A classical theorem known to Kirchhoff is:
\begin{th} \label{eff resistance}
Say $s$ is an $a,b$-spanning bitree if $s$ is a spanning forest
with two components, one containing $a$ and the other containing $b$.
The effective resistance between $a$ and $b$ may be computed from
the weighted graph $G$ by taking the quotient $N / D$ where
$$D = \sum_{\mbox{\scriptsize spanning trees }t} \hspace{1in}
\left ( \prod_{e \in t} w(e) \right )$$
is the sum of the weights of all spanning trees of $G$ and
$$N = \sum_{a,b\mbox{\scriptsize -spanning bitrees }s} \hspace{1in}
\left ( \prod_{e \in s} w(e) \right )$$
is the analogous sum over $a,b$-spanning bitrees. $
\Box$
\end{th}
To see how this is implied by Theorem~\ref{current} and
equation~(\ref{rw prob 1}), imagine adding an extra one ohm resistor
from $a$ to $b$. The probability of this edge being chosen in
a $WST$ on the new graph is by definition given by summing the
weights of trees containing the new edge and
dividing by the total sum of the weights of all spanning trees.
Clearly $D$ is the
sum of the weights of trees not containing the extra edge. But
the trees containing the extra edge are in one-to-one correspondence
with $a,b$-spanning bitrees (the correspondence being to remove
the extra edge). The extra edge has weight one, so the sum of the
weights of trees that do contain the extra edge is $N$ and the probability
of a $WST$ containing the extra edge is $N/(N+D)$. By equation~(\ref{rw
prob 1}) and
Theorem~\ref{current}, this must then be the fraction of current flowing
directly through the extra edge when a battery is placed across $a$ and
$b$. Thinking of the new circuit as consisting of the extra edge in
parallel with $G$, the fractions of the current passing through the
two components are proportional to the inverses of their resistances,
so the ratio of the resistance of the extra edge to the rest of
the circuit must be $D:N$. Since the extra edge has resistance one,
the effective resistance of the rest of the circuit is $N/D$.
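Kirchhoff's $N/D$ formula is easy to check by brute force on a tiny example. The following sketch (my own code, not from the text) enumerates spanning trees and $a,b$-spanning bitrees directly, using exact rational arithmetic; on a triangle of unit resistors it should recover the familiar answer for a one ohm edge in parallel with two one ohm edges in series, namely $2/3$.

```python
from itertools import combinations
from fractions import Fraction

def component_of(vertices, edges):
    """Label each vertex with a representative of its connected component."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for x, y in edges:
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry
    return {v: find(v) for v in vertices}

def effective_resistance(vertices, weighted_edges, a, b):
    """Kirchhoff's N/D: D is the weighted count of spanning trees,
    N the weighted count of a,b-spanning bitrees."""
    n = len(vertices)
    D = Fraction(0)
    # a spanning tree has n-1 edges; connectedness then forces acyclicity
    for s in combinations(weighted_edges, n - 1):
        comp = component_of(vertices, [(x, y) for x, y, w in s])
        if len(set(comp.values())) == 1:
            p = Fraction(1)
            for x, y, w in s:
                p *= w
            D += p
    N = Fraction(0)
    # a bitree has n-2 edges; exactly two components forces it to be a forest
    for s in combinations(weighted_edges, n - 2):
        comp = component_of(vertices, [(x, y) for x, y, w in s])
        if len(set(comp.values())) == 2 and comp[a] != comp[b]:
            p = Fraction(1)
            for x, y, w in s:
                p *= w
            N += p
    return N / D

# Triangle of unit conductances: effective resistance between any two
# vertices is 1 ohm in parallel with 2 ohms in series, i.e. 2/3.
V = [0, 1, 2]
E = [(0, 1, Fraction(1)), (1, 2, Fraction(1)), (0, 2, Fraction(1))]
print(effective_resistance(V, E, 0, 1))   # 2/3
```

Here $D = 3$ (three spanning trees, each of weight one) and $N = 2$ (the bitrees are the single edges not joining $a$ to $b$), giving $2/3$ as expected.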
The next problem of course was to efficiently evaluate the sum of
the weights of all spanning trees of a weighted graph. The solution
to this problem is almost as well known and can be found, among
other places, in \cite{CDS}.
\begin{th}[Matrix-Tree Theorem] \label{matrix tree}
Let $G$ be a finite, simple, connected, weighted graph
and define a matrix indexed by the vertices of $G$ by letting
$M(x,x) = d(x)$, $M(x,y) = - w({\vec {xy}})$ if $x$ and $y$ are connected
by an edge, and $M(x,y) = 0$ otherwise. Then for any vertex $x$,
the sum of the weights of all spanning trees of $G$ is equal to
the determinant of the matrix obtained from $M$ by deleting the
row and column corresponding to $x$.
\end{th}
The matrix $M$ is nothing but a representation of $\bigtriangleup$ with
respect to the basis $\{ \delta_x \}$. Recalling that the problem
essentially boils down to inverting $\bigtriangleup$, the only other
ingredient in this theorem is the trick of inverting the
action of a singular matrix on an element of its range by inverting
the largest invertible principal minor of the matrix. Details can
be found in \cite{CDS}. $
\Box$
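As a quick illustration (my own example, not from the text), the Matrix-Tree Theorem recovers Cayley's count $n^{n-2}$ of spanning trees of the complete graph; for $K_4$ with all weights one the matrix has $3$'s on the diagonal and $-1$'s elsewhere, and any $3$ by $3$ principal minor should have determinant $4^2 = 16$.

```python
from fractions import Fraction

def det(M):
    """Determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return d

# K_4 with unit weights: M(x,x) = d(x) = 3 and M(x,y) = -1 for x != y.
L = [[3 if i == j else -1 for j in range(4)] for i in range(4)]
minor = [row[1:] for row in L[1:]]   # delete the row and column of one vertex
print(det(minor))                    # 16 = 4^{4-2}, Cayley's formula
```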
\section{Transfer-impedances} \label{4}
In the last section we saw how to calculate ${\bf{P}} (e \in {\bf T})$ in
several ways: by Theorems~\ref{current} or~\ref{eff resistance} in
general and by equations such as~(\ref{eq1}) in particularly symmetric
cases. By repeating the calculations in Theorem~\ref{current}
and~\ref{eff resistance} for contractions and deletions of a graph
(see Section~\ref{2.4}),
we could then find enough conditional probabilities to determine the
probability of any elementary event ${\bf{P}} (e_1 , \ldots , e_r \in {\bf T}
\mbox{ and } f_1 , \ldots , f_s \notin {\bf T})$. Not only is this
inefficient, but it fails to apply to the symmetric case of
equation~(\ref{eq1}) since contracting or deleting the graph breaks
the symmetry. The task at hand is to alleviate this problem by showing
how the data we already know how to get -- current flows on $G$ --
determine the current flows on contractions and deletions of $G$ and
thereby determine all the elementary probabilities for $WST$ on $G$.
This will culminate in a proof of Theorem~\ref{exist matrix}, which
encapsulates all of the necessary computation into a single determinant.
\subsection{An electrical argument} \label{4.1}
To keep notation to a minimum this subsection will only deal with
unweighted, $D$-regular graphs.
Begin by stating explicitly the data that will be used to determine all
other probabilities.
For oriented edges $e = {\vec {xy}}$ and $f = {\vec {zw}}$ in a finite connected
graph $G$, define the {\em transfer-impedance} $H (e,f) =
\phi_{xy} (z) - \phi_{xy} (w)$ which is equal to
the voltage difference across $f$, $V(z) - V(w)$, when one
amp of current is supplied to $x$ and drawn out at $y$.
We will assume knowledge of
$H (e,f)$ for every pair of edges in $G$ (presumably via some
analog calculation, or in a symmetric case by equation~(\ref{eq1})
or something similar) and show how to derive all other
probabilities from these transfer-impedances.
Note first that $H(e,e)$ is the voltage across $e$ for a unit current
flow supplied to one end of $e$ and drawn out of the other.
This is equal to the current flowing directly
along $e$ under a unit current flow and is thus ${\bf{P}} (e \in {\bf T})$.
The next step is to try a computation involving a single contraction.
For notation, recall the map $\rho$ which projects vertices and edges of
$G$ to vertices and edges of $G/f$.
Fix edges $e={\vec {xy}}$ and $f={\vec {zw}}$ and let $\{ V(v) : v \in G/f\}$
be the voltages we need to solve for: voltages at vertices of $G/f$
when a unit current is supplied to $\rho (x)$ and drawn out at
$\rho (y)$. As we have seen, this means $\bigtriangleup V (v) = +1, -1$ or $0$
according to whether $v = \rho (x) , \rho (y)$ or neither.
Suppose we lift this to a function ${\overline V}$ on the
vertices of $G$ by letting ${\overline V} (v) = V( \rho (v))$. Let's calculate
the excess $\bigtriangleup {\overline V}$ of ${\overline V}$. Each edge of $G$ corresponds to
an edge in $G/f$, so for any $v \neq z,w$ in $G$, $\bigtriangleup {\overline V} (v)
= \bigtriangleup V(\rho (v))$; this is equal to $+1$ if $v=x$, $-1$ if $v=y$
and zero otherwise. Since $\rho$ maps both $z$ and $w$ onto
the same vertex $z*w$, we can't tell what $\bigtriangleup {\overline V}$ is at
$z$ or $w$ individually, but $\bigtriangleup {\overline V} (z) + \bigtriangleup {\overline V} (w)$ will
equal $\bigtriangleup V (z*w)$ which will equal $+1$ if $z$ or $w$ coincides
with $x$, $-1$ if $z$ or $w$ coincides with $y$ and zero otherwise
(or if both coincide!). The last piece of information we have
is that ${\overline V} (z) = {\overline V} (w)$. Summarizing,
\begin{quote}
$(i)$ $\bigtriangleup {\overline V} = \delta_x - \delta_y + c (\delta_z - \delta_w)$; \\
$(ii)$ ${\overline V} (z) = {\overline V} (w)$ ,
\end{quote}
where $c$ is some unknown constant. To see that this uniquely defines
${\overline V}$ up to an additive constant, note that the difference between
any two such functions has excess $c (\delta_z - \delta_w)$ for some $c$,
hence by the maximum principle reaches its maximum and minimum on
$\{ z , w \}$; on the other hand the values at $z$ and $w$ are equal,
so the difference is constant.
Now it is easy to find ${\overline V}$. Recall from equation~(\ref{eq2}) that
$\phi$ satisfies $\bigtriangleup \phi_{ab} = \delta_a - \delta_b$. The function
${\overline V}$ we are looking for is then $\phi_{xy} + c \phi_{zw}$ where $c$ is
chosen so that
$$\phi_{xy} (z) + c \phi_{zw} (z) = \phi_{xy} (w) + c \phi_{zw} (w) . $$
In words, ${\overline V}$ gives the voltages for a battery supplying unit
current in at $x$ and out at $y$ plus another battery across $z$ and $w$
just strong enough to equalize the voltages at $z$ and $w$. How
strong is that? The battery supplying unit current to $x$ and $y$
induces by definition a voltage $H({\vec {xy}} , {\vec {zw}})$ across $z$ and $w$.
To counteract that, we need a $-H({\vec {xy}} , {\vec {zw}})$-volt battery across
$z$ and $w$. Since supplying one unit of current in at $z$ and out at $w$
produces a voltage across $z$ and $w$ of $H({\vec {zw}} , {\vec {zw}})$, the current
supplied by the counterbattery must be $c = -H({\vec {xy}} , {\vec {zw}}) / H({\vec {zw}} , {\vec {zw}})$.
We do not need to worry about $H ({\vec {zw}} , {\vec {zw}})$ being zero since this
means that ${\bf{P}} (f \in {\bf T}) = 0$ so we shouldn't be conditioning
on $f \in {\bf T}$. Going back to the original problem,
\begin{eqnarray*}
{\bf{P}} (e \in {\bf T} \, | \, f \in {\bf T}) & = & V (\rho (x)) - V (\rho(y)) \\[2ex]
& = & {\overline V} (x) - {\overline V} (y) \\[2ex]
& = & H ({\vec {xy}} , {\vec {xy}}) + H({\vec {zw}} , {\vec {xy}}) { - H ({\vec {xy}} , {\vec {zw}} ) \over H({\vec {zw}} , {\vec {zw}})}
\\[2ex]
& = & {H ({\vec {xy}} , {\vec {xy}}) H({\vec {zw}} , {\vec {zw}}) - H({\vec {xy}} , {\vec {zw}}) H({\vec {zw}} , {\vec {xy}}) \over
H({\vec {zw}} , {\vec {zw}})} .
\end{eqnarray*}
Multiplying this conditional probability by the unconditional
probability ${\bf{P}} (f \in {\bf T})$ gives the probability of both $e$ and $f$
being in ${\bf T}$ which may be written as
$${\bf{P}} (e,f \in {\bf T}) = \left | \begin{array}{cc} H({\vec {xy}},{\vec {xy}}) &
H({\vec {xy}} , {\vec {zw}}) \\ H({\vec {zw}} , {\vec {xy}}) & H({\vec {zw}} , {\vec {zw}}) \end{array} \right | .$$
Thus ${\bf{P}} (e,f \in {\bf T}) = \det M(e,f)$ where $M$ is the matrix
of values of $H$ as in Theorem~\ref{exist matrix}.
Theorem~\ref{exist matrix} has in fact now been proved for $r = 1,2$.
The procedure for general $r$ will be similar. Write ${\bf{P}} (e_1 ,
\ldots e_r \in {\bf T})$ as a product of conditional probabilities
${\bf{P}} (e_i \in {\bf T} \, | \, e_{i+1} , \ldots , e_r \in {\bf T})$. Then
evaluate this conditional probability by solving for voltages on
$G/e_{i+1} \cdots e_r$. This is done by placing batteries
across $e_1 , \ldots , e_r$ so as to equalize voltages across
all $e_{i+1} , \ldots , e_r$ simultaneously. Although in the
$r=2$ case it was not necessary to worry about dividing by
zero, this problem does come up in the general case which
causes an extra step in the proof. We will now summarily generalize
the above discussion on how to solve for voltages on contractions
of a graph and then forget about electricity altogether.
\begin{lem} \label{gen contraction}
Let $G$ be a finite $D$-regular connected graph and let $f_1 , \ldots ,
f_r$ and $e = {\vec {xy}}$ be edges of $G$ that form no cycle. Let $\rho$ be the
map from
$G$ to $G/f_1 \ldots f_r$ that maps edges to corresponding edges and
maps vertices of $G$ to their equivalence classes under the relation
of being connected by edges in $\{ f_1 , \ldots , f_r \}$. Let
${\overline V}$ be a function on the vertices of $G$ such that
\begin{quote}
$(i)$ If ${\vec {zw}} = f_i$ for some $i$ then ${\overline V} (z) = {\overline V} (w)$ ; \\
$(ii)$ $\sum_{z \in \rho^{-1} (v)} \bigtriangleup {\overline V} (z) = +1$ if $\rho (x)
=v$, $-1$ if $\rho (y) = v$ and zero otherwise.
\end{quote}
If ${\bf T}$ is a uniform spanning tree for $G$ then ${\bf{P}} (e \in
{\bf T} \, | \, f_1 , \ldots , f_r \in {\bf T} ) = {\overline V} (x) - {\overline V} (y)$.
\end{lem}
\noindent{Proof:} As before, we know that ${\bf{P}} (e \in {\bf T} \, | \, f_1 ,
\ldots , f_r \in {\bf T})$ is given by $V(\rho (x)) - V(\rho (y))$
where $V$ is the voltage function on $G/f_1 \cdots f_r$ for
a unit current supplied in at $x$ and out at $y$. Defining ${\overline V} (v)$
to be $V(\rho (v))$, the lemma will be proved if we can show that
${\overline V}$ is the unique function on the vertices of $G$
satisfying~$(i)$ and~$(ii)$. Seeing that ${\overline V}$ satisfies~$(i)$
and~$(ii)$ is the same as before. Since $\rho$ provides
a one to one correspondence between edges of $G$ and edges of
$G/f_1 , \ldots , f_r$, the excess of ${\overline V}$ at vertices of
$\rho^{-1}(v)$ is the sum over edges leading out of vertices
in $\rho^{-1} (v)$ of the difference of ${\overline V}$ across that edge,
which is the sum over edges leading out of $\rho (v)$ of the
difference of $V$ across that edge; this is the excess of $V$ at
$\rho (v)$ which is $+1, -1$ or $0$ according to whether $x$ or
$y$ or neither is in $\rho^{-1} (v)$.
Uniqueness is also easy. If ${\overline W}$ is
any function satisfying $(i)$, define a function
$W$ on the vertices of $G/f_1 \cdots f_r$ by $W(\rho (v)) = {\overline W} (v)$.
If ${\overline W}$ satisfies $(ii)$ as well then it is easy to check that
$W$ satisfies $\bigtriangleup W = \delta_{\rho (x)} - \delta_{\rho(y)}$ so that
$W = V$ and ${\overline W} = {\overline V}$. $
\Box$
\subsection{Proof of the transfer-impedance theorem} \label{4.2}
First of all, though it is true that the function $H$ of the
previous subsection and of the statement of the theorem is
symmetric, I'm not going to include a proof -- nothing else
we talk about relies on symmetry of $H$ and a proof may be found
in any standard treatment of the Green's function, such as
\cite{Sp}. Secondly, it is easiest to reduce the problem to the
case of $D$-regular graphs immediately so as to be able to use the
previous lemma. Suppose $G$ is any finite connected graph. Let $D$ be
the maximum degree of any vertex in $G$ and to any vertex of
lesser degree $k$, add $D-k$ self-edges. The resulting
graph is $D$-regular (though not simple) and furthermore it has
the same spanning trees as $G$. To prove Theorem~\ref{exist matrix}
for finite connected graphs, it therefore suffices to prove
the theorem for finite, connected, $D$-regular graphs. Restating
what is to be proved:
\begin{th} \label{transfer thm}
Let $G$ be any finite, connected, $D$-regular graph and let
${\bf T}$ be a uniform random spanning tree of $G$. Let
$H({\vec {xy}} , {\vec {zw}})$ be the voltage induced across ${\vec {zw}}$ when one
amp is supplied from $x$ to $y$. Then for any $e_1 , \ldots , e_r
\in G$,
$$ {\bf{P}} (e_1 , \ldots , e_r \in {\bf T}) = \det M(e_1 , \ldots , e_r)$$
where $M(e_1 , \ldots , e_r)$ is the $r$ by $r$ matrix whose
$i,j$-entry is $H(e_i , e_j)$.
\end{th}
The proof is by induction on $r$. We have already proved it
for $r = 1,2$, so now we assume it for $r-1$ and try to prove it
for $r$. There are two cases. The first possibility is that
${\bf{P}} (e_1 , \ldots , e_{r-1} \in {\bf T}) = 0$. This means that
no spanning tree of $G$ contains all of $e_1 , \ldots , e_{r-1}$, which means
that these edges contain some cycle. Say the cycle is $e_{n(0)},
\ldots , e_{n(k-1)}$ where there are vertices $v(i)$ for which
$e_{n(i)}$ connects $v(i)$ to $v(i+1 \mbox{ mod } k)$. For any
vertices $x,y$, $\phi_{xy}$ is the unique solution up
to an additive constant of $\bigtriangleup \phi_{xy} = \delta_x - \delta_y$. Thus
$\bigtriangleup \left ( \sum_{i=0}^{k-1} \phi_{v(i)\,v(i+1 \mbox{ mod } k)} \right ) = 0$
which means that $\sum_{i=0}^{k-1} \phi_{v(i) \, v(i+1 \mbox{ mod } k)}$
is constant. Then for any ${\vec {xy}}$,
\begin{eqnarray*}
&&\sum_{i=0}^{k-1} H(e_{n(i)} , {\vec {xy}}) \\[2ex]
& = & \sum_{i=0}^{k-1} \phi_{v(i) \, v(i+1 \mbox{ mod } k)} (x) -
\sum_{i=0}^{k-1} \phi_{v(i) \, v(i+1 \mbox{ mod } k)} (y) \\[2ex]
& = & 0 .
\end{eqnarray*}
This says that in the matrix $M(e_1 , \ldots , e_r)$, the rows
$n(0), \ldots , n(k-1)$ are linearly dependent, summing to zero.
Then $\det M(e_1 , \ldots , e_r) = 0$ which is certainly the probability
of $e_1 , \ldots , e_r \in {\bf T}$.
The second possibility is that ${\bf{P}} (e_1 , \ldots , e_{r-1} \in {\bf T})
\neq 0$. We can then write
\begin{eqnarray*}
&& {\bf{P}} (e_1 , \ldots , e_r \in {\bf T}) \\[2ex]
& = & {\bf{P}} (e_1 , \ldots e_{r-1} \in {\bf T}) {\bf{P}} (e_r \in {\bf T} \, | \, e_1 ,
\ldots e_{r-1} \in {\bf T}) \\[2ex]
& = & \det M(e_1 , \ldots , e_{r-1}) {\bf{P}} (e_r \in {\bf T} \, | \, e_1 ,
\ldots e_{r-1} \in {\bf T})
\end{eqnarray*}
by the induction hypothesis. To evaluate the last term we look for a
function ${\overline V}$ satisfying the conditions of Lemma~\ref{gen contraction}
with $e_r$ instead of $e$ and $e_1 , \ldots, e_{r-1}$ instead of $f_1
, \ldots , f_r$.
For $i \leq r-1$, let $x_i$ and $y_i$ denote the vertices connected
by $e_i$. For any $v \in G/e_1 \cdots e_{r-1}$ and any $i \leq r-1$,
$\sum_{z \in \rho^{-1} (v)} \bigtriangleup \phi_{x_i y_i} (z) =
\sum_{z \in \rho^{-1} (v)} \delta_{x_i} (z) - \delta_{y_i} (z)$
which is zero since the class $\rho^{-1} (v)$ contains both
$x_i$ and $y_i$ or else contains neither. The excess of $\phi_{x_r
y_r}$ summed over $\rho^{-1} (v)$ is just $1$ if $\rho (x_r) =
v$, $-1$ if $\rho (y_r) = v$ and zero otherwise. By linearity
of excess, this implies that the sum of $\phi_{x_r y_r}$ with
any linear combination of $\{ \phi_{x_i y_i} : i \leq r-1 \}$
satisfies $(ii)$ of the lemma.
Satisfying part $(i)$ is then a matter of choosing the right linear
combination, but the lovely thing is that we don't have to actually
compute it! We do need to know it exists and here's the argument
for that. The $i^{th}$ row of $M(e_1 , \ldots , e_r)$ lists the
values of $\phi_{x_i y_i} (x_j) - \phi_{x_i y_i} (y_j)$ as $j$
runs from 1 to $r$. Looking for $c_1 , \ldots , c_{r-1}$
such that $\phi_{x_r y_r} + \sum_{i=1}^{r-1} c_i \phi_{x_i y_i}$
is the same on $x_j$ as on $y_j$ for $j \leq r-1$ is the
same as looking for $c_i$ for which the $r^{th}$ row of
$M$ plus the sum of $c_i$ times the $i^{th}$ row of $M$ has
zeros for every entry except the $r^{th}$. In other words
we want to row-reduce, using the first $r-1$ rows to clear
$r-1$ zeros in the last row. There is a unique way to do this
precisely when the determinant of the upper $r-1$ by $r-1$
submatrix is nonzero, which is what we have assumed. So these
$c_1 , \ldots , c_{r-1}$ exist and ${\overline V} (v) = \phi_{x_r y_r} (v)
+ \sum_{i=1}^{r-1} c_i \phi_{x_i y_i} (v)$.
The lemma tells us that ${\bf{P}} (e_r \in {\bf T} \, | \, e_1 , \ldots , e_{r-1}
\in {\bf T})$ is ${\overline V} (x_r) - {\overline V} (y_r)$. This is just the $r,r$-entry
of the row-reduced matrix. Now calculate the determinant of
the row-reduced matrix in two ways. Firstly, since row-reduction
does not change the determinant of a matrix, the determinant must
still be $\det M(e_1 , \ldots , e_r)$. On the other hand, since
the last row is all zeros except the last entry, expanding along
the last row gives that the determinant is the $r,r$-entry times
the determinant of the upper $r-1$ by $r-1$ submatrix, which
is just ${\bf{P}} (e_r \in {\bf T} \, | \, e_1 , \ldots , e_{r-1} \in {\bf T})
\det M(e_1 , \ldots , e_{r-1})$. Setting these two equal gives
$${\bf{P}} (e_r \in {\bf T} \, | \, e_1 , \ldots , e_{r-1} \in {\bf T})
= \det M(e_1 , \ldots , e_r) / \det M(e_1 , \ldots , e_{r-1}) .$$
The induction hypothesis says that
$$ {\bf{P}} (e_1 , \ldots , e_{r-1} \in {\bf T}) = \det M(e_1 , \ldots e_{r-1})$$
and multiplying the conditional and unconditional probabilities proves
the theorem. $
\Box$
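The whole chain of reasoning -- voltages from $\bigtriangleup \phi_{xy} = \delta_x - \delta_y$, transfer-impedances from voltage differences, probabilities from determinants -- can be checked end to end on a graph small enough to enumerate. Here is a self-contained sketch (my own, using $K_4$ with unit weights, grounding one vertex to solve the reduced Laplacian system) comparing $\det M(e_1, \ldots, e_r)$ against brute-force counts over the sixteen spanning trees.

```python
from fractions import Fraction
from itertools import combinations

E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]   # K_4, unit weights

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination over the rationals."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[p] = M[p], M[i]
        piv = M[i][i]
        M[i] = [x / piv for x in M[i]]
        for r in range(n):
            if r != i and M[r][i] != 0:
                f = M[r][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [row[n] for row in M]

def phi(x, y):
    """Voltages for a unit current in at x, out at y; vertex 3 grounded."""
    A = [[Fraction(3) if i == j else Fraction(-1) for j in range(3)]
         for i in range(3)]
    b = [Fraction((v == x) - (v == y)) for v in range(3)]
    return solve(A, b) + [Fraction(0)]

def H(e, f):
    p = phi(*e)
    return p[f[0]] - p[f[1]]

def det(M):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def spans(t):
    parent = list(range(4))
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for x, y in t:
        parent[find(x)] = find(y)
    return len({find(v) for v in range(4)}) == 1

trees = [t for t in combinations(E, 3) if spans(t)]
assert len(trees) == 16
for edges in (E[:1], E[:2], E[:3]):
    M = [[H(e, f) for f in edges] for e in edges]
    exact = Fraction(sum(all(e in t for e in edges) for t in trees),
                     len(trees))
    assert det(M) == exact
print("transfer-impedance determinants match brute-force counts")
```

For instance $H(e,e) = 1/2$ for every edge of $K_4$, and for two edges sharing a vertex the determinant is $\det \left( \begin{smallmatrix} 1/2 & 1/4 \\ 1/4 & 1/2 \end{smallmatrix} \right) = 3/16$, matching the three trees out of sixteen containing both edges.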
\subsection{A few computational examples} \label{4.3}
It's time to take a break from theorem-proving to see how well the
machinery we've built actually works. A good place to test it is
the graph $T_3$, since the calculations have essentially been
done, and since even $T_3$ is large enough to prohibit enumeration
of the spanning trees directly by hand (you can use the Matrix-Tree
Theorem with all weights one to check that there are 11664 of them).
Say we want to know the probability that the middle vertex $A$
is connected to $B,C$ and $D$ in a uniform random spanning tree
${\bf T}$ of $T_3$.\\
\setlength{\unitlength}{2pt}
\begin{picture}(150,90)
\put (20,20) {\circle {2}}
\put (20,20) {\line (1,0) {20}}
\put (20,20) {\line (0,1) {20}}
\put (20,40) {\circle {2}}
\put(22,42){B}
\put (20,40) {\line (1,0) {20}}
\put (20,40) {\line (0,1) {20}}
\put (20,60) {\circle {2}}
\put (20,60) {\line (1,0) {20}}
\put (20,60) {\dashbox{1}(0,15)}
\put (40,20) {\circle {2}}
\put(42,22){E}
\put (40,20) {\line (1,0) {20}}
\put (40,20) {\line (0,1) {20}}
\put (40,40) {\circle {2}}
\put(42,42){A}
\put (40,40) {\line (1,0) {20}}
\put (40,40) {\line (0,1) {20}}
\put (40,60) {\circle {2}}
\put (40,60) {\line (1,0) {20}}
\put(42,62){C}
\put (40,60) {\dashbox{1}(0,15)}
\put (60,20) {\circle {2}}
\put (60,20) {\dashbox{1}(15,0)}
\put (60,20) {\line (0,1) {20}}
\put (60,40) {\circle {2}}
\put(62,42){D}
\put (60,40) {\dashbox{1}(15,0)}
\put (60,40) {\line (0,1) {20}}
\put (60,60) {\circle {2}}
\put (60,60) {\dashbox{1}(0,15)}
\put (60,60) {\dashbox{1}(15,0)}
\end{picture}
figure~\thepfigure
\label{pfig4.1}
\addtocounter{pfigure}{1} \\
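Before trusting the arithmetic that follows, it is reassuring to reproduce the count of 11664 spanning trees mechanically. A sketch (my own code, taking $T_3$ to be the $3 \times 3$ torus pictured above) applying the Matrix-Tree Theorem with all weights one:

```python
from fractions import Fraction

def det(M):
    """Determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return d

# 3x3 torus: vertex (i,j) is adjacent to (i+-1 mod 3, j) and (i, j+-1 mod 3).
idx = {(i, j): 3 * i + j for i in range(3) for j in range(3)}
L = [[0] * 9 for _ in range(9)]
for (i, j), v in idx.items():
    for w in (idx[((i + 1) % 3, j)], idx[((i - 1) % 3, j)],
              idx[(i, (j + 1) % 3)], idx[(i, (j - 1) % 3)]):
        L[v][v] += 1   # degree on the diagonal
        L[v][w] -= 1   # -1 for each edge
minor = [row[:8] for row in L[:8]]   # delete the row and column of one vertex
print(det(minor))                    # 11664
```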
We need then to calculate the transfer-impedance matrix for the
edges $AB,AC$ and $AD$. Let's say we orient them all toward $A$.
The symmetry of $T_3$ under translation and $90^\circ$ rotation
allows us to rely completely on the voltages calculated at the end
of Section~3.5. Sliding the picture upwards one square and multiplying
the given voltages by $4/9$ to produce a unit current flow from $B$
to $A$ gives voltages
$$\begin{array}{ccc} 5/18 & 3/18 & 4/18 \\ 8/18 & 0 & 4/18 \\
5/18 & 3/18 & 4/18 \end{array} $$
which gives transfer-impedances $H(BA,BA) = 8/18$, $H(BA,CA) = 3/18$
and $H(BA,DA)= 4/18$. The rest of the values follow by symmetry,
giving
$$M(BA,CA,DA) = {1 \over 18} \left ( \begin{array}{ccc} 8 & 3 & 4 \\
3 & 8 & 3 \\ 4 & 3 & 8 \end{array} \right ) .$$
Applying Theorem~\ref{transfer thm} gives
${\bf{P}} (BA,CA,DA \in {\bf T}) = \det M(BA,CA,DA) = {\displaystyle {312 \over
5832}}$, or in other words just 624 of the 11664 spanning trees of $T_3$
contain all these edges. Compare this to using the Matrix-Tree Theorem
to calculate the same probability. That does not require the
preliminary calculation of the voltages, but it does require an
eight by eight determinant.
Suppose we want now to calculate the probability that $A$ is
a {\em leaf} of ${\bf T}$, that is to say there is only one edge in
${\bf T}$ incident to $A$. By symmetry this edge will be $AB$
$1/4$ of the time, so we need to calculate ${\bf{P}} (BA \in {\bf T} \mbox{ and }
CA,DA,EA \notin {\bf T})$ and then multiply by four. As
remarked earlier, we can use inclusion-exclusion to get the
answer. This would entail writing
\begin{eqnarray*}
& & {\bf{P}} (BA \in {\bf T} \mbox{ and } CA,DA,EA \notin {\bf T}) \\[2ex]
& = & {\bf{P}} (BA \in {\bf T}) - {\bf{P}} (BA,CA \in {\bf T}) - {\bf{P}} (BA,DA \in {\bf T})
- {\bf{P}} (BA,EA \in {\bf T}) \\
&& + {\bf{P}} (BA,CA,DA \in {\bf T}) + {\bf{P}} (BA,CA,EA \in {\bf T}) + {\bf{P}} (BA,DA,EA
\in {\bf T}) \\
&& - {\bf{P}} (BA,CA,DA,EA \in {\bf T}) .
\end{eqnarray*}
This is barely manageable for four edges, and gets exponentially messier
as we want to know about probabilities involving more edges. Here is
an easy but useful theorem telling how to calculate the probability of a
general {\em cylinder} event, namely the event that $e_1 , \ldots , e_r$ are
in the tree, while $f_1 , \ldots , f_s$ are not in the tree.
\begin{th} \label{incl-excl}
Let $M (e_1 , \ldots , e_k)$ be a $k$ by $k$
transfer-impedance matrix. Let $M^{(r)}$ be the matrix for which
$M^{(r)}(i,j) = M(i,j)$ if $i \leq r$ and $M^{(r)}(i,j) = \delta_{ij} - M(i,j)$
if $r+1 \leq i \leq k$, where $\delta_{ij}$ is one when $i=j$ and zero
otherwise. Then ${\bf{P}} (e_1 , \ldots , e_r \in {\bf T}
\mbox{ and } e_{r+1} , \ldots , e_k \notin {\bf T}) = \det M^{(r)}$.
\end{th}
\noindent{Proof:} The proof is by induction on $k-r$. The initial
step is when $r=k$; then $M^{(r)} = M$ so the theorem
reduces to Theorem~\ref{transfer thm}. Now suppose the theorem
to be true for $k-r = s$ and let $k-r = s+1$. Write
\begin{eqnarray*}
&&{\bf{P}} (e_1 , \ldots , e_r \in {\bf T} \mbox{ and } e_{r+1} , \ldots ,
e_k \notin {\bf T}) \\[2ex]
& = & {\bf{P}} (e_1 , \ldots , e_r \in {\bf T} \mbox{ and } e_{r+2} , \ldots ,
e_k \notin {\bf T}) \\
& & - {\bf{P}} (e_1 , \ldots , e_{r+1} \in {\bf T} \mbox{ and } e_{r+2} , \ldots ,
e_k \notin {\bf T}) \\[2ex]
& = & \det M(e_1 , \ldots , e_r , e_{r+2} , \ldots e_k) -
\det M(e_1 , \ldots , e_{r+1} , e_{r+2} , \ldots e_k) ,
\end{eqnarray*}
since the induction hypothesis applies to both of the last two
probabilities. Call these last two matrices $M_1$ and $M_2$.
The trick now is to stick an extra row and column into $M_1$: let $M'$
be $M(e_1 , \ldots , e_k)$ with the $r+1^{st}$ row replaced by
zeros except for a one in the $r+1^{st}$ position. Then $M'$
is $M_1$ with an extra row and column inserted. Expanding
along the extra row gives $\det M' = \det M_1$. But $M'$ and
$M_2$ differ only in the $r+1^{st}$ row, so by multilinearity
of the determinant,
$$\det M_1 - \det M_2 = \det M' - \det M_2 = \det M''$$
where $M''$ agrees with $M'$ and $M_2$ except that the
$r+1^{st}$ row is the difference of the $r+1^{st}$ rows of
$M'$ and $M_2$. The induction is done as soon as you realize
that $M''$ is just $M^{(r)}$. $
\Box$
Applying this to the probability of $A$ being a leaf of $T_3$,
we write
\begin{eqnarray*}
&&{\bf{P}} (BA \in {\bf T} \mbox{ and } CA,DA,EA \notin {\bf T}) \\[2ex]
& = & \det M^{(3)} (BA,CA,DA,EA) \\[3ex]
& = & \left | \begin{array}{cccc} 8/18 & 3/18 & 4/18 & 3/18 \\
-3/18 & 10/18 & -3/18 & -4/18 \\ -4/18 & -3/18 & 10/18 & -3/18 \\
-3/18 & -4/18 & -3/18 & 10/18 \end{array} \right | \;\; = \;\;
{10584 \over 18^4} \;\; = \;\; {1176 \over 11664}
\end{eqnarray*}
so $A$ is a leaf of $4 \cdot 1176 = 4704$ of the 11664 spanning trees
of $T_3$. This time, the Matrix-Tree Theorem would have required
evaluation of several different eight by eight determinants. If
$T_3$ were replaced by $T_n$, the transfer-impedance
calculation would not be significantly harder, but the
Matrix-Tree Theorem would require several $n^2$ by $n^2$ determinants.
If $n$ goes to $\infty$, as it might when calculating some sort
of limit behavior, these large determinants would not be tractable.
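Both determinants in this subsection are quick to check in exact rational arithmetic; here is a brief sketch (my own verification code) for the two $T_3$ matrices above.

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

F = Fraction
# M(BA, CA, DA) from the three-edge computation
M3 = [[F(8, 18), F(3, 18), F(4, 18)],
      [F(3, 18), F(8, 18), F(3, 18)],
      [F(4, 18), F(3, 18), F(8, 18)]]
print(det(M3))   # 13/243, i.e. 312/5832

# M^{(3)}(BA, CA, DA, EA) from the leaf computation
M4 = [[F(8, 18), F(3, 18), F(4, 18), F(3, 18)],
      [F(-3, 18), F(10, 18), F(-3, 18), F(-4, 18)],
      [F(-4, 18), F(-3, 18), F(10, 18), F(-3, 18)],
      [F(-3, 18), F(-4, 18), F(-3, 18), F(10, 18)]]
print(det(M4))   # 49/486, i.e. 10584/18^4 = 1176/11664
```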
\section{Poisson limits} \label{5}
As mentioned in the introduction, the random degree of a vertex
in a uniform spanning tree of $G$ converges in distribution to
one plus a Poisson(1) random
variable as $G$ gets larger and more highly connected. This section
investigates some such limits, beginning with an example symmetric
enough to compute explicitly. The reason for this limit may seem
clearer at the end of the section when we discuss a stronger limit
theorem. Proofs in this section are mostly
sketched since the details occupy many pages in \cite{BP}.
\subsection{The degree of a vertex in $K_n$} \label{5.1}
The simplest situation in which to look for a Poisson limit is
on the complete graph $K_n$. This is pictured here for $n=8$. \\
\setlength{\unitlength}{2pt}
\begin{picture}(190,120)(-70,-10)
\put(0,24){\circle* {6}}
\put(0,24){\line(0,1){36}}
\put(0,24){\line(2,5){24}}
\put(0,60){\circle* {6}}
\put(0,60){\line(1,1){24}}
\put(0,60){\line(5,2){60}}
\put(24,84){\circle* {6}}
\put(24,84){\line(1,0){36}}
\put(24,84){\line(5,-2){60}}
\put(60,84){\circle* {6}}
\put(60,84){\line(1,-1){24}}
\put(60,84){\line(2,-5){24}}
\put(84,60){\circle* {6}}
\put(84,60){\line(0,-1){36}}
\put(84,60){\line(-2,-5){24}}
\put(84,24){\circle* {6}}
\put(84,24){\line(-1,-1){24}}
\put(84,24){\line(-5,-2){60}}
\put(60,0){\circle* {6}}
\put(60,0){\line(-1,0){36}}
\put(60,0){\line(-5,2){60}}
\put(24,0){\circle* {6}}
\put(24,0){\line(-1,1){24}}
\put(24,0){\line(-2,5){24}}
\put(0,24){\line(1,0){84}}
\put(0,60){\line(1,0){84}}
\put(24,0){\line(0,1){84}}
\put(60,0){\line(0,1){84}}
\put(0,24){\line(1,1){60}}
\put(24,0){\line(1,1){60}}
\put(60,0){\line(-1,1){60}}
\put(84,24){\line(-1,1){60}}
\put(-3,24){\line(5,2){90}}
\put(-3,60){\line(5,-2){90}}
\put(24,-3){\line(2,5){36}}
\put(60,-3){\line(-2,5){36}}
\end{picture}
figure~\thepfigure
\label{pfig5.1}
\addtocounter{pfigure}{1}
Calculating the voltages for a complete graph is particularly easy
because of all the symmetry. Say the vertices of $K_n$ are called
$v_1 , \ldots , v_n$, and put a one volt battery across $v_1$ and
$v_2$, so $V(v_1) = 1$ and $V(v_2) = 0$. By Theorem~\ref{elec rw},
the voltage at any other vertex $v_j$ is equal to the probability
that $SRW_{v_j}^{K_n}$ hits $v_1$ before $v_2$. This is clearly equal
to $1/2$. The total current flow out of $v_1$ with these voltages
is $n/2$, since one amp flows along the edge to $v_2$ and $1/2$
amp flows along each of the $n-2$ other edges out of $v_1$. Multiplying
by $2/n$ to get a unit current flow gives voltages
$$V(v_i) = \left \{ \begin{array}{ccl} ~~2/n~~ & : & ~~i = 1 \\
~~0~~ & : & ~~i = 2 \\ ~~1/n~~ & : & \mbox{ otherwise}.
\end{array} \right. $$
The calculations will of course come out similarly for a unit
current flow supplied across any other edge of $K_n$.
The first distribution we are going to examine is of the degree
in ${\bf T}$ of a vertex, say $v_1$. Since we are interested in
which of the edges incident to $v_1$ are in ${\bf T}$, we need
to calculate $H(\overline{v_1 v_i} , \overline{v_1 v_j})$ for
every $i,j \neq 1$. Orienting all of these edges away from
$v_1$ and using the voltages we just worked out gives
$$H(\overline{v_1 v_i} , \overline{v_1 v_j}) = \left \{ \begin{array}{ccl}
~~2/n~~ & : & ~~i = j \\ ~~1/n~~ & : & \mbox{ otherwise}
\end{array} \right. . $$
Denoting the edge from $v_1$ to $v_i$ by $e_i$, we have the $n-1$ by
$n-1$ matrices
$$M(e_2 , \ldots , e_n) = \left ( \begin{array}{cccc} {2 \over n} & {1
\over n} & \cdots & {1 \over n} \\ {1 \over n} & {2 \over n} & \cdots
& {1 \over n} \\ &&& \\ &&\vdots & \\
&&& \\ {1 \over n} & {1 \over n} & \cdots & {2 \over n} \end{array}
\right ) \hspace{.4in}
M^{(n-1)} (e_2 , \ldots , e_n) = \left ( \begin{array}{cccc}
{n-2 \over n} & {-1 \over n} & \cdots & {-1 \over n} \\ {-1 \over n}
& {n-2 \over n} & \cdots & {-1 \over n} \\ &&& \\ &&\vdots & \\ &&&
\\ {-1 \over n} & {-1 \over n} & \cdots & {n-2 \over n} \end{array}
\right ). $$
There must be at least one edge in ${\bf T}$ incident to $v_1$ so
Theorem~\ref{incl-excl} says $\det M^{(n-1)} = {\bf{P}} (e_2 , \ldots , e_n
\notin {\bf T}) = 0$. This is easy to verify: the rows sum to zero.
We can use $M^{(n-1)}$ to calculate the probability that $e_2$
is the only edge in ${\bf T}$ incident to $v_1$ by noting that this
happens if and only if $e_3 , \ldots , e_n \notin {\bf T}$. This
is the determinant of $M^{(n-2)} (e_3 , \ldots , e_n)$ which is
a matrix smaller by one than $M^{(n-1)}(e_2 , \ldots , e_n)$ but
which still has $(n-2)/n$'s down the diagonal and $-1/n$'s elsewhere.
This is a special case of a {\em circulant} matrix, which is a type
of matrix whose determinant is fairly easy to calculate.
A $k$ by $k$ circulant matrix is an $M$ for which $M(i,j)$
is some number $a(i-j)$ depending only on $i-j$ mod $k$. Thus
$M$ has $a_0$ all down the diagonal for some $a_0$, $a_1$ on the
next diagonal, and so forth. The eigenvalues of a circulant
matrix $\lambda_0 , \ldots , \lambda_{k-1}$ are given by
$\lambda_j = \sum_{t=0}^{k-1} a_t \zeta^{jt}$ where $\zeta
= e^{2 \pi i/k}$ is a primitive $k^{th}$ root of unity. It is easy
to verify that these are the eigenvalues, by checking that
the vector $\vec {w}$ for which $w_t = \zeta^{tj}$ is an eigenvector
for $M$ (no matter what the $a_i$ are) and has eigenvalue $\lambda_j$.
The determinant is then the product of the eigenvalues.
Details of this may be found in \cite{St}.
In the case of $M^{(n-2)}$, $a_0 = (n-2)/n$ and $a_j = -1/n$
for $j \neq 0$. Then $\lambda_0 = \sum_j a_j = 1/n$. To calculate
the other eigenvalues note that for any $j \neq 0$ mod $n-2$,
$\sum_{t=0}^{n-3} \zeta^{jt} = 0$. Then $\lambda_j = (n-2)/n +
\sum_{t=1}^{n-3} (-1/n) \zeta^{jt} = (n-1)/n - (1/n) \sum_{t=0}^{n-3}
\zeta^{tj} = (n-1)/n$. This gives
$$\det M^{(n-2)} = \prod_{j=0}^{n-3} \lambda_j = {1 \over n} \,
\left ( {n-1 \over n} \right )^{n-3} = {1+ o(1) \over ne}$$
as $n \rightarrow \infty$. \footnote{Here, $o(1)$ signifies a quantity
going to zero as $n \rightarrow \infty$. This is a convenient and
standard notation that allows manipulation such as $(2+o(1))(3+o(1))=6+o(1)$.}
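Circulant computations like this are easy to get wrong by one index, so here is a quick exact check of the resulting formula $\det M^{(n-2)} = (1/n)((n-1)/n)^{n-3}$ for a few small values of $n$ (my own verification code).

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

for n in range(4, 9):
    k = n - 2      # size of M^{(n-2)}: (n-2)/n on the diagonal, -1/n off it
    M = [[Fraction(n - 2, n) if i == j else Fraction(-1, n)
          for j in range(k)] for i in range(k)]
    assert det(M) == Fraction(1, n) * Fraction(n - 1, n) ** (n - 3)
print("det M^(n-2) = (1/n)((n-1)/n)^(n-3) checked for n = 4,...,8")
```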
Part of the Poisson limit has emerged: the probability
that $v_1$ has degree one in ${\bf T}$ is (by symmetry) $n-1$ times
the probability that the particular edge $e_2$ is the only edge in
${\bf T}$ incident to $v_1$; this is $(n-1) (1+o(1))/en$ so it
converges to $e^{-1}$ as $n \rightarrow \infty$. This is
${\bf{P}} (X=1)$ where $X$ is one plus a Poisson(1), i.e.\ one plus a
Poisson random variable of mean one.
Each further part of the Poisson limit requires a more careful
evaluation of the limit. To illustrate, we carry out the
second step. Use one more degree of precision in the
Taylor series for $\ln (x)$ and $\exp (x)$ to get
\begin{eqnarray*}
&& n^{-1} \left ( {n-1 \over n} \right )^{n-3} \\[2ex]
& = & n^{-1} \exp [ (n-3) (-n^{-1} - n^{-2} (1/2 + o(1)))] \\[2ex]
& = & n^{-1} \exp [-1 + (5/2 + o(1)) n^{-1}] \\[2ex]
& = & n^{-1} e^{-1} [1 + (5/2 + o(1)) n^{-1}] .
\end{eqnarray*}
The reason we need this precision is that we are going to calculate
the probability of $v_1$ having degree $2$ by summing the
${\bf{P}} (e,f$ are the only edges incident to $v_1$ in ${\bf T})$ over
all pairs of edges $e,f$ coming out of $v_1$. By symmetry
this is just $(n-1)(n-2)/2$ times the probability that the particular
edges $e_2$ and $e_3$ are the only edges in ${\bf T}$ incident to $v_1$.
This probability is the determinant of a matrix which is not
a circulant, and to avoid calculating a difficult determinant
it is better to write this probability as the following difference:
the probability that no edges other than $e_2$ and $e_3$ are incident
to $v_1$ minus the probability that $e_2$ is the only edge
incident to $v_1$ minus the probability that $e_3$ is the only edge
incident to $v_1$. Since the final probability is this difference
multiplied by $(n-1)(n-2)/2$, the difference should be of order
$n^{-2}$, which explains why this degree of precision is required
for the latter two probabilities.
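As a quick numerical sanity check on the expansion above (my own illustration, not part of the original argument), one can compare the exact quantity $n^{-1}((n-1)/n)^{n-3}$ against the approximation $n^{-1}e^{-1}(1+5/(2n))$; the relative error shrinks like $n^{-2}$, which is what makes the upcoming subtraction work:

```python
import math

def exact(n):
    # n^{-1} ((n-1)/n)^{n-3}
    return ((n - 1) / n) ** (n - 3) / n

def approx(n):
    # the second-order expansion n^{-1} e^{-1} (1 + 5/(2n))
    return (1 + 2.5 / n) / (n * math.e)

for n in [10, 100, 1000]:
    rel_err = abs(exact(n) / approx(n) - 1)
    print(n, rel_err)  # relative error decays roughly like 1/n^2
```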
The probability of ${\bf T}$ containing no edges incident to $v_1$
other than $e_2$ and $e_3$ is the determinant of $M^{(n-3)}(e_4 ,
\ldots ,e_n)$, which is an $n-3$ by $n-3$ circulant again having
$(n-2)/n$ on the diagonal and $-1/n$ elsewhere. Then $\lambda_0
= \sum_{j=0}^{n-4} a_j = 2/n$ and $\lambda_j = (n-1)/n$ for
$j \neq 0$ mod $n-3$, yielding
$$\det M^{(n-3)} = 2 n^{-1} \left ( {n-1 \over n} \right )^{n-4}
= 2 n^{-1} e^{-1} [1 + (7/2 + o(1)) n^{-1}] $$
in the same manner as before. Subtracting off the probabilities
of $e_2$ or $e_3$ being the only edge in ${\bf T}$ incident to $v_1$
gives
\begin{eqnarray*}
&&{\bf{P}} (e_2 , e_3 \in {\bf T} , e_4 , \ldots , e_n \notin {\bf T}) \\[2ex]
& = & 2n^{-1} e^{-1} [1+(7/2 +o(1))n^{-1}] - 2n^{-1} e^{-1} [1+(5/2
+o(1))n^{-1}] = (2+o(1)) n^{-2} e^{-1} .
\end{eqnarray*}
Multiplying by $(n-1)(n-2)/2$ gives
$${\bf{P}} (v_1 \mbox{ has degree 2 in } {\bf T}) \rightarrow e^{-1} $$
as $n \rightarrow \infty$, which is ${\bf{P}} (X=2)$ where $X$ is
one plus a Poisson(1).
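Putting the pieces together numerically (a sketch of my own, using the exact determinant formulas derived above): the probability that $v_1$ has degree $2$ is $(n-1)(n-2)/2$ times the difference of determinants, and it does approach $e^{-1}$:

```python
import math

def degree_two_prob(n):
    # P(e_2, e_3 in T, all other edges at v_1 out) equals det M^{(n-3)}
    # minus the two single-edge probabilities n^{-1} ((n-1)/n)^{n-3}
    pair = (2 / n) * ((n - 1) / n) ** (n - 4) \
         - (2 / n) * ((n - 1) / n) ** (n - 3)
    # multiply by the number of unordered pairs of edges at v_1
    return (n - 1) * (n - 2) / 2 * pair

for n in [100, 1000, 10000]:
    print(n, degree_two_prob(n), math.exp(-1))
```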
\subsection{Another point of view} \label{5.2}
The calculations of the last section may be continued {\em ad infinitum},
but each
step requires a more careful estimate so it pays to look for a way
to do all the steps at once. The right alternative method will
be more readily apparent if we generalize to graphs
other than $K_n$ which do not admit such a precise calculation
(if a tool that is difficult to use breaks, you
may discover a better one).
The important feature about $K_n$ was that the voltages were
easy to calculate. There is a large class of graphs for which
the voltages are just as easy to calculate approximately.
The term ``approximately'' can be made more rigorous by
considering sequences of graphs $G_n$ and stating approximations
in terms of limits as $n \rightarrow \infty$. Since I've always
wanted to name a technical term after my dog, call a
sequence of graphs $G_n$ {\em Gino-regular} if there is a sequence
$D_n$ such that
\begin{quote}
$(i)$ The maximum and minimum degree of a vertex in $G_n$ are $(1+o(1))D_n$
as $n \rightarrow \infty$; and \\
$(ii)$ The maximum and minimum over vertices $x \neq y,z$ of $G_n$
of the probability that $SRW_x^{G_n}$ hits $y$
before $z$ are $1/2 + o(1)$ as $n \rightarrow \infty$.
\end{quote}
Condition $(ii)$ implies that $D_n \rightarrow \infty$, so the
graphs $G_n$ are growing locally.
It is not hard to see that the voltage $V(z)$ in a unit current
flow across any edge $e={\vec {xy}}$ of a graph $G_n$ in a Gino-regular
sequence is $(1+o(1)) D_n^{-1} (\delta_x - \delta_y)(z)$ uniformly over
all choices of $x,y,z \in G_n$ as $n \rightarrow \infty$.
The complete graphs $K_n$ are Gino-regular. So are the $n$-cubes,
$B_n$, whose vertex sets are all the $n$-long sequences of zeros
and ones and whose edges connect sequences differing in only one
place.
\begin{picture}(200,100)
\put(20,20){\circle{5}}
\put(20,20){\line(1,0){40}}
\put(20,20){\line(0,1){40}}
\put(20,60){\circle{5}}
\put(20,60){\line(1,0){40}}
\put(60,20){\circle{5}}
\put(60,20){\line(0,1){40}}
\put(60,60){\circle{5}}
\put(33,5){$B_2$}
\put(110,20){\circle{5}}
\put(110,20){\line(1,0){40}}
\put(110,20){\line(0,1){40}}
\put(110,60){\circle{5}}
\put(110,60){\line(1,0){40}}
\put(150,20){\circle{5}}
\put(150,20){\line(0,1){40}}
\put(150,60){\circle{5}}
\put(123,5){$B_3$}
\put(130,40){\circle{5}}
\put(130,40){\line(1,0){40}}
\put(130,40){\line(0,1){40}}
\put(130,80){\circle{5}}
\put(130,80){\line(1,0){40}}
\put(170,40){\circle{5}}
\put(170,40){\line(0,1){40}}
\put(170,80){\circle{5}}
\put(110,20){\line(1,1){20}}
\put(110,60){\line(1,1){20}}
\put(150,20){\line(1,1){20}}
\put(150,60){\line(1,1){20}}
\end{picture}
figure~\thepfigure
\label{pfig5.2}
\addtocounter{pfigure}{1}
To see why $\{ B_n \}$ is Gino-regular, consider the ``worst case''
when $x$ is a neighbor of $y$. There is a small probability that
$SRW_x (1)$ will equal $y$, small because this is $\mbox{degree}(x)^{-1}
= (1+o(1)) D_n^{-1}$ which is going to zero. There are even smaller
probabilities of reaching $y$ in the next few steps; in general,
unless $SRW_x$ hits $y$ in one step, it tends to get ``lost'' and
by the time it comes near $y$ or $z$ again it is thoroughly random
and is equally likely to hit $y$ or $z$ first. In fact Gino-regular
sequences may be thought of as graphs that are nearly degree-regular,
on which $SRW$ gets lost quickly.
The approximate voltages give approximate transfer-impedances
$H(e,f) = (2+o(1))/n$ if $e = f$, $(1+o(1))/n$ if $e$ and $f$
meet at a single vertex (choose orientations away from the vertex)
and $o(1)/n$ if $e$ and $f$ do not meet. The determinant of a
matrix is continuous in its entries, so it may seem that we have
everything necessary to calculate limiting probabilities as
limits of determinants of transfer-impedance matrices. If $v$ is a
vertex in $G_k$ and $e_1 , \ldots , e_n$ are the edges incident
to $v$ in $G_k$ (so $n \approx D_k$), then the probability
of $e_2$ being the only edge in ${\bf T}$ incident to $v$ is
the determinant of
$$M^{(n-1)} (e_2 , \ldots , e_n) = \left ( \begin{array}{cccc} (n-2+o(1))/n &
(-1+o(1))/n & \cdots & (-1+o(1))/n \\ (-1+o(1))/n & (n-2+o(1))/n &
\cdots & (-1+o(1))/n \\ &&& \\ &&\vdots & \\ &&& \\ (-1+o(1))/n &
(-1+o(1))/n & \cdots & (n-2+o(1))/n \end{array} \right ). $$
Unfortunately, the matrix is changing size as $n \rightarrow \infty$,
so convergence of each entry to a known limit does not give us
the limit of the determinant.
If the matrix were staying the same size, the problem would disappear.
This means we can successfully take the limit of probabilities of
events as long as they involve a bounded number of edges. Thus
for any fixed edge $e_1$, ${\bf{P}} (e_1 \in {\bf T}) = \det M(e_1)
= (1+o(1)) (2/n)$. For any fixed pair of edges $e_1$ and $e_2$ incident
to the same vertex,
$${\bf{P}} (e_1 , e_2 \in {\bf T})\; =\; \det M(e_1,e_2) = \left | \begin{array}{cc}
(2+o(1))/n & (1+o(1)) / n \\ (1+o(1))/n & (2+o(1)) / n
\end{array} \right | \;=\; (3+o(1)) n^{-2} .$$
In general if $e_1 , \ldots , e_r$ are all incident to $v$ then
the transfer-impedance matrix is $n^{-1}$ times an $r$ by $r$ matrix
converging to the matrix with $2$ down the diagonal and $1$ elsewhere. The
eigenvalues of this circulant are $\lambda_0 = r+1$ and $\lambda_j
= 1$ for $j \neq 0$, yielding
$${\bf{P}} (e_1 , \ldots , e_r \in {\bf T}) = (r+1+o(1)) n^{-r} .$$
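The determinant of the limiting $r \times r$ matrix, with $2$ down the diagonal and $1$ elsewhere, can be double-checked directly. Here is a small exact-arithmetic sketch of mine, using fraction-valued Gaussian elimination rather than the eigenvalue argument:

```python
from fractions import Fraction

def det(M):
    # determinant by Gaussian elimination in exact rational arithmetic
    M = [[Fraction(x) for x in row] for row in M]
    n = len(M)
    d = Fraction(1)
    for i in range(n):
        p = next((k for k in range(i, n) if M[k][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            for j in range(i, n):
                M[k][j] -= f * M[i][j]
    return d

# the limit matrix has 2 on the diagonal and 1 off; its determinant is r+1
for r in range(1, 8):
    A = [[2 if i == j else 1 for j in range(r)] for i in range(r)]
    assert det(A) == r + 1
```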
What can we do with these probabilities? Inclusion-exclusion fails
for the same reason as the large determinants fail -- the $o(1)$
errors pile up. On the other hand, these probabilities determine
certain expectations. Write $e_1 , \ldots , e_n$
again for the edges adjacent to $v$ and $I_i$ for
the indicator function which is one when $e_i \in {\bf T}$ and zero
otherwise; then
$$\sum_i {\bf{P}} (e_i \in {\bf T}) = \sum_i {\bf{E}} I_i = {\bf{E}} \sum_i I_i = {\bf{E}} \deg (v) .$$
This tells us that ${\bf{E}} \deg (v) = n (2+o(1))n^{-1} = 2+o(1)$.
If we try this with ordered pairs of edges, we get
$$\sum_{i \neq j} {\bf{P}} (e_i , e_j \in {\bf T}) = \sum_{i \neq j} {\bf{E}} I_i I_j =
{\bf{E}} \sum_{i \neq j} I_i I_j .$$
This last quantity is the sum of all distinct ordered pairs of edges
incident to $v$ of the quantity: $1$ if they are both in the tree and
0 otherwise. If $\deg (v) = r$ then a one occurs in this sum $r(r-1)$
times, so the sum is $\deg (v) (\deg (v)-1)$. The determinant calculation
gave ${\bf{P}} (e_i,e_j \in {\bf T}) = (3+o(1))n^{-2}$ for each $i,j$, so
$${\bf{E}} [\deg (v) (\deg(v)-1)] = n(n-1) (3+o(1))n^{-2} = 3+o(1) .$$
In general, using ordered $r$-tuples of distinct edges gives
\begin{eqnarray*}
&& {\bf{E}} [\deg (v) (\deg(v) - 1) \cdots (\deg(v)-r+1)] \\
& = & n(n-1) \cdots (n-r+1) (r+1+o(1))n^{-r} \\
& = & r+1+o(1) .
\end{eqnarray*}
Use the notation $(A)_r$ to denote $A(A-1)\cdots (A-r+1)$ which is
called the $r^{th}$ lower factorial of $A$. If $Y_n$ is the random
variable $\deg (v)$ then we have succinctly,
\begin{equation} \label{moment 1}
{\bf{E}} (Y_n)_r = r+1+o(1) .
\end{equation}
${\bf{E}} (Y_n)_r$ is called the $r^{th}$ factorial moment of $Y_n$.
If you remember why we are doing these calculations, you have probably
guessed that ${\bf{E}} (X)_r = r+1$ when $X$ is one plus a Poisson(1).
This is indeed true and can be seen easily enough from the
probability generating function ${\bf{E}} t^X$ via the identity
$${\bf{E}} (X)_r = \left. \left ({d \over dt} \right )^r \right |_{t=1} {\bf{E}} t^X , $$
using ${\bf{E}} t^X = {\bf{E}} e^{X \ln (t)} = \phi (\ln (t)) = t e^{t-1}$; consult
\cite[page 301]{Ro} for details. All that we need now for a
Poisson limit result is a theorem saying that
if the factorial moments of $Y_n$ are each converging to the
factorial moments of $X$, then $Y_n$ is actually converging in
distribution to $X$. This is worth spending a short subsection
on because it is algebraically very neat.
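One can confirm ${\bf{E}} (X)_r = r+1$ for $X$ one plus a Poisson(1) by summing the series directly, without the generating function. A small check of mine (the truncation point is my own choice; the tail is negligible):

```python
import math

def falling(a, r):
    # the r-th lower factorial a(a-1)...(a-r+1)
    out = 1
    for i in range(r):
        out *= a - i
    return out

def factorial_moment(r, terms=60):
    # E (X)_r for X = 1 + Poisson(1), truncating the series over j
    return sum(falling(1 + j, r) * math.exp(-1) / math.factorial(j)
               for j in range(terms))

for r in range(1, 8):
    assert abs(factorial_moment(r) - (r + 1)) < 1e-9
```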
\subsection{The method of moments} \label{5.3}
A standard piece of real analysis shows that if a sequence of
random variables converges to a limit and all of its factorial
moments are finite, then for each $r$, the limit of the $r^{th}$
factorial moments is the $r^{th}$ factorial moment of the limit.
(This is essentially the Lebesgue dominated convergence theorem.)
Another standard result is that if the moments of a sequence
of random variables converge, then the sequence, or at least
some subsequence, is converging in distribution to some other
random variable whose moments are the limits of the moments in the
sequence. Piecing together these straightforward
facts leaves a serious gap in our prospective proof: What
if there is some random variable $Z$ distributed differently from
$X$ with the same factorial moments? If this could happen, then
there would be no reason to think that $Y_n$ converged in distribution
to $X$ rather than $Z$. This scenario can actually happen -- there
really are differently distributed random variables with the same moments!
(See the discussion of the lognormal distribution in \cite{Fe}.)
Luckily this only happens when $X$ is badly behaved, and a
Poisson plus one is not badly behaved. Here then is a proof
of the fact that the distribution of $X$ is the only one
with $r^{th}$ factorial moment $r+1$ for all $r$. I will leave
it to you to piece together, look up in \cite{Fe} or take on faith how this
fact plus the results from real analysis imply $Y_n \, {\stackrel {{\cal D}} {\rightarrow}} X$.
\begin{th} \label{moment method}
Let $X$ be a random variable with ${\bf{E}} (X)_r \leq e^{kr}$ for some $k$.
Then no random variable distributed differently from $X$ has the
same factorial moments.
\end{th}
\noindent{Proof:} The factorial moments ${\bf{E}} (X)_r$ determine
the regular moments $\mu_r = {\bf{E}} X^r$ and {\em vice versa} by the
linear relations $(X)_1 = X^1 ; (X)_2 = X^2 - X^1$, etc.
From these linear relations it also follows that factorial
moments are bounded by some $e^{kr}$ if and only if regular
moments are bounded by some $e^{kr}$, thus it suffices to
prove the theorem for regular moments. Not only do the
moments determine the distribution, it is even possible to
calculate ${\bf{P}} (X=j)$ directly from the moments of $X$ in
the following manner.
The {\em characteristic function} of $X$ is the function $\phi (t) =
{\bf{E}} e^{itX}$ where $i = \sqrt{-1}$. This is determined by the
moments since ${\bf{E}} e^{itX} = {\bf{E}} (1 + (itX) + (itX)^2/2! + \cdots )
= 1 + it\mu_1 + (it)^2 \mu_2 / 2! + \cdots$. We use the exponential
bound on the growth of $\mu_r$ to deduce that this is absolutely
convergent for all $t$ (though a somewhat weaker condition would do).
The growth condition also shows that ${\bf{E}} e^{itX}$ is bounded and
absolutely convergent for
$t \in [0,2\pi]$. Now ${\bf{P}} (X=j)$ can be determined by Fourier
inversion:
\begin{eqnarray*}
&& {1 \over 2 \pi} \int_0^{2\pi} {\bf{E}} e^{itX} e^{-ijt} dt \\[2ex]
& = & {1 \over 2 \pi} \int_0^{2\pi} [\sum_{r \geq 0} e^{itr} {\bf{P}} (X=r)]
e^{-ijt} dt \\[2ex]
& = & {1 \over 2 \pi} \sum_{r \geq 0} {\bf{P}} (X=r) \int_0^{2\pi} e^{itr}
e^{-ijt} dt \\[2ex]
&& \mbox{ (switching the sum and integral is OK for bounded, absolutely
convergent integrals)} \\[2ex]
& = & {1 \over 2 \pi} \sum_{r \geq 0} {\bf{P}} (X=r) \delta_0 (r-j) \\[2ex]
& = & {\bf{P}} (X=j) .
\end{eqnarray*}
$
\Box$
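The Fourier inversion in this proof can be carried out numerically for $X$ equal to one plus a Poisson(1), whose characteristic function works out to $e^{it}\exp(e^{it}-1)$. A sketch of mine (the Riemann-sum discretization of the integral is my own choice; it is essentially exact here because the integrand is a trigonometric series):

```python
import cmath
import math

def phi(t):
    # characteristic function of X = 1 + Poisson(1):
    # E e^{itX} = e^{it} exp(e^{it} - 1)
    z = cmath.exp(1j * t)
    return z * cmath.exp(z - 1)

def prob(j, N=4096):
    # Fourier inversion: (1/2pi) * integral over [0,2pi] of phi(t) e^{-ijt}
    s = sum(phi(2 * math.pi * k / N) * cmath.exp(-2j * math.pi * j * k / N)
            for k in range(N))
    return (s / N).real

# P(X = j) should equal e^{-1} / (j-1)!  for j >= 1
for j in range(1, 6):
    assert abs(prob(j) - math.exp(-1) / math.factorial(j - 1)) < 1e-6
```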
\subsection{A branching process} \label{5.4}
In the last half of section~\ref{1.4} I promised to explain how
convergence in distribution of $\deg (v)$ was a special case of
convergence of ${\bf T}$ near $v$ to a distribution called ${\cal P}_1$.
(You might want to go back and reread that section before continuing.)
The infinite tree ${\cal P}_1$ is interesting in its own right and
I'll start making good on the promise by describing ${\cal P}_1$.
This begins with a short description of {\em Galton-Watson
branching processes}. You can think of a Galton-Watson process
as a family tree for some fictional amoebas. These fictional amoebas
reproduce by splitting into any number of smaller amoebas (unlike
real amoebas that can only split into two parts at a time). At
time $t=0$ there is just a single amoeba, and at each time
$t=1,2,3,\ldots$, each living amoeba ${\cal{A}}$ splits into a random
number $N = N_t({\cal{A}})$ of amoebas, where the random numbers are independent
and all have the same distribution ${\bf{P}} (N_t ({\cal{A}}) = j) = p_j$.
Allow the possibility that $N = 0$ (the amoeba died) or that
$N=1$ (the amoeba didn't do anything). Let $\mu = \sum_j j p_j$
be the mean number of amoebas produced in a split. A standard
result from the theory of branching processes \cite{AN} is that
if $\mu > 1$ then there is a positive probability that the
family tree will survive forever, the population exploding
exponentially as in the usual Malthusian forecasts for human
population in the twenty-first century.
Conversely when $\mu < 1$, the amoeba population dies out with
probability 1 and in fact the chance of it surviving
$n$ generations decreases exponentially with $n$. When $\mu = 1$
the branching process is said to be {\em critical}. It must
still die out, but the probability of it surviving $n$ generations
decays more slowly, like a constant times $1/n$. The theory
of branching processes is quite large and you can find more
details in \cite{AN} or \cite{Ha}.
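The constant-over-$n$ decay in the critical case can be watched directly by iterating the offspring generating function: with Poisson(1) offspring, the survival probabilities satisfy $q_{n+1} = 1 - f(1-q_n)$ with $f(s) = e^{s-1}$, i.e.\ $q_{n+1} = 1 - e^{-q_n}$, starting from $q_0 = 1$. The constant turns out to be $2/\sigma^2 = 2$; that is Kolmogorov's classical asymptotic, a fact I am adding here, not stated in the text:

```python
import math

def survival(n):
    # q_n = P(critical Poisson(1) process survives >= n generations);
    # iterate q <- 1 - e^{-q} starting from q_0 = 1
    q = 1.0
    for _ in range(n):
        q = 1.0 - math.exp(-q)
    return q

for n in [100, 1000, 100000]:
    print(n, n * survival(n))  # n * q_n approaches 2
```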
Specialize now to the case where the random number of offspring has
a Poisson(1) distribution, i.e. $p_j = e^{-1} / j!$.
Here's the motivation for considering this case. Imagine a graph
$G$ in which each vertex has $N$ neighbors and $N$ is so large
it is virtually infinite. Choose a subgraph $U$ by letting each edge
be included independently with probability $N^{-1}$. Fix a vertex
$v \in G$ and look at the vertices connected to $v$ in $U$.
The number of neighbors of $v$ in $U$ has a Poisson(1)
distribution by the standard characterization of a Poisson as
the limit of number of occurrences of rare events. For each neighbor
$y$ of $v$ in $U$, there are $N-1$ edges out of $y$ other than the
one to $v$, and the number of those in $U$ will again be Poisson(1)
(since $N \approx \infty$, subtracting one does not matter)
and continuing this way shows that the connected component of
$v$ in $U$ is distributed as a Galton-Watson process with
Poisson(1) offspring.
Of course $U$ is not distributed like a uniform spanning tree ${\bf T}$.
For one thing, $U$ may with probability $e^{-1}$ fail to have any
edges out of $v$. Even if this doesn't happen, the chance of
$U$ having more than $n$ vertices goes to zero as $n \rightarrow
\infty$ (a critical Galton-Watson process dies out) whereas
${\bf T}$, being a spanning tree of an almost infinite graph, goes on
as far as the eye can see. The next hope is that ${\bf T}$ looks like
$U$ conditioned not to die out. This should in fact seem plausible:
you can check that $U$ has no cycles near $v$ since virtually all of
the $N$ edges out of each neighbor of $v$ lead further away from $v$;
then a uniform spanning tree should be a random cycle-free graph $U$
that treats each edge as equally likely, conditioned on being connected.
The conditioning must be done carefully, since the probability
of $U$ living forever is zero, but it turns out fine if you
condition on $U$ living for at least $n$ generations and take the
limit as $n \rightarrow \infty$. The random infinite tree ${\cal P}_1$ that
results is called the {\em incipient infinite cluster} at $v$,
so named by percolation theorists (people who study connectivity
properties of random graphs). It turns out there is an alternate
description for the incipient infinite cluster. Let $v = v_0 , v_1 ,
v_2 , \ldots $ be a single line of vertices with edges $\overline{v
v_1} , \overline{v_1 v_2} , \ldots$. For each of the vertices
$v_i$ independently, make a separate independent
copy $U_i$ of the critical Poisson(1) branching process $U$
with $v_i$ as the root and paste it onto the line already there.
Then this collage has the same distribution as ${\cal P}_1$. This
fact is the ``whole tree'' version of the fact that the offspring
number of the root, once the tree is conditioned to survive, is
distributed as one plus a Poisson(1) (you can recover this fact
from the fact about ${\cal P}_1$ by looking just at the neighbors of $v$).
\subsection{Tree moments}
To prove that a uniform spanning tree ${\bf T}_n$ of $G_n$ converges
in distribution to ${\cal P}_1$ when $G_n$ is Gino-regular, we generalize
factorial moments to trees. Let $t$ be a finite tree rooted
at some vertex $x$ and let $W$ be a tree rooted at $v$. $W$ is allowed
to be infinite but it must be locally finite -- only finitely
many edges incident to any vertex. Say that a map $f$ from the vertices
of $t$ to the vertices of $W$ is a {\em tree-map} if $f$ is one to one,
maps $x$ to $v$ and neighbors to neighbors. Let $N(W;t)$ count the
number of tree-maps from $t$ into $W$. For example in the following
picture, $N(W;t) = 4$, since C and D can map to H and I in either
order with A mapping to E, and B can map to F or G.
\begin{picture}(200,140)
\put(20,20){\circle*{3}}
\put(14,20){\scriptsize C}
\put(40,50){\circle*{3}}
\put(35,50){\scriptsize A}
\put(60,20){\circle*{3}}
\put(54,20){\scriptsize D}
\put(60,80){\circle*{3}}
\put(55,80){$x$}
\put(80,50){\circle*{3}}
\put(75,50){\scriptsize B}
\put(20,20){\line(2,3){20}}
\put(40,50){\line(2,3){20}}
\put(80,50){\line(-2,3){20}}
\put(60,20){\line(-2,3){20}}
\put(65,115){$t$}
\put(120,20){\circle*{3}}
\put(115,20){\scriptsize H}
\put(160,20){\circle*{3}}
\put(155,20){\scriptsize I}
\put(190,20){\circle*{3}}
\put(185,20){\scriptsize J}
\put(140,50){\circle*{3}}
\put(135,50){\scriptsize E}
\put(160,50){\circle*{3}}
\put(155,50){\scriptsize F}
\put(180,50){\circle*{3}}
\put(185,50){\scriptsize G}
\put(160,80){\circle*{3}}
\put(155,80){$v$}
\put(120,20){\line(2,3){20}}
\put(140,50){\line(2,3){20}}
\put(160,20){\line(-2,3){20}}
\put(180,50){\line(-2,3){20}}
\put(160,50){\line(0,1){30}}
\put(190,20){\line(-1,3){10}}
\put(155,115){$W$}
\end{picture}
figure~\thepfigure
\label{pfig5.3}
\addtocounter{pfigure}{1}
Define the $t^{th}$ {\em tree-moment} of a random tree $Z$ rooted
at $v$ to be ${\bf{E}} N(Z;t)$.
If $t$ is an $n$-star, meaning a tree consisting of $n$ edges all
emanating from $x$, then a tree-map from $t$ to $W$ is just a
choice of $n$ distinct neighbors of $v$ in order, so $N(W;t) =
(\deg(v))_n$. Thus ${\bf{E}} N(Z;t) = {\bf{E}} (\deg (v))_n$, the $n^{th}$
factorial moment of $\deg (v)$.
This is to show you that tree-moments generalize
factorial moments. Now let's see what the tree-moments of
${\cal P}_1$ are. Let $t$ be any finite tree and let $|t|$ denote
the number of vertices in $t$.
\begin{lem} \label{NUt=1}
Let $U$ be a Galton-Watson process rooted at $v$ with
Poisson(1) offspring. Then ${\bf{E}} N(U;t) = 1$ for all finite trees $t$.
\end{lem}
\noindent{Proof:} Use induction on $t$, the lemma being clear when
$t$ is a single vertex. The way the induction step works for trees is
to show that if a fact is true for a collection of trees $t_1 , \ldots
, t_n$ then it is true for the tree $t_*$ consisting of a root $x$ with
$n$ neighbors $x_1 , \ldots , x_n$ having subtrees $t_1 , \ldots , t_n$
respectively as in the following illustration.
\begin{picture}(200,100)
\put(10,20){\circle*{3}}
\put(40,20){\circle*{3}}
\put(25,50){\circle*{3}}
\put(10,20){\line(1,2){15}}
\put(40,20){\line(-1,2){15}}
\put(25,5){$t_1$}
\put(65,20){\circle*{3}}
\put(64,5){$t_2$}
\put(90,20){\circle*{3}}
\put(80,50){\circle*{3}}
\put(90,20){\line(-1,3){10}}
\put(88,5){$t_3$}
\put(120,20){\circle*{3}}
\put(160,20){\circle*{3}}
\put(190,20){\circle*{3}}
\put(140,50){\circle*{3}}
\put(160,50){\circle*{3}}
\put(180,50){\circle*{3}}
\put(160,80){\circle*{3}}
\put(120,20){\line(2,3){20}}
\put(140,50){\line(2,3){20}}
\put(160,20){\line(-2,3){20}}
\put(180,50){\line(-2,3){20}}
\put(160,50){\line(0,1){30}}
\put(190,20){\line(-1,3){10}}
\put(160,5){$t_*$}
\end{picture}
figure~\thepfigure
\label{pfig5.4}
\addtocounter{pfigure}{1}
So let $t_1 , \ldots , t_n$ and $t_*$ be as above. Any tree-map
$f : t_* \rightarrow U$ must map the $n$ neighbors of $v$ into
distinct neighbors of $U$ and the expected number of ways to do
this is ${\bf{E}} (\deg (v))_n$ which is one for all $n$ since $\deg (v)$
is a Poisson(1) \cite{Fe}. Now for any such assignment
of $f$ on the neighbors of $v$, the number of ways of completing
the assignment to a tree-map is the product over $i = 1 , \ldots , n$
of the number of ways of mapping each $t_i$ into the subtree of
$U$ below $f(x_i)$. After conditioning on what the first generation
of $U$ looks like, the subtrees below any neighbors of $v$ are
independent and themselves Galton-Watson processes with Poisson(1) offspring.
(This is what it means to be Galton-Watson.) By induction then,
the expected number of ways of completing the assignment of $f$
is the product of a bunch of ones and is therefore one. Thus
${\bf{E}} N(U;t) = {\bf{E}} (\deg (v))_n \prod_{i=1}^n 1 = 1$. $
\Box$
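The lemma can be illustrated by simulation for one small $t$: take $t$ to be a path on three vertices rooted at one end, so a tree-map sends the middle vertex to a child $c$ of the root of $U$ and the far vertex to a child of $c$; thus $N(U;t)$ is the sum over first-generation children of their numbers of offspring. A seeded Monte Carlo sketch (the sample size and seed are my own choices):

```python
import math
import random

def mc_mean(trials=200000, seed=0):
    rng = random.Random(seed)

    def pois():
        # Knuth's method for a Poisson(1) sample
        L, k, p = math.exp(-1), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    # N(U; path of 3 vertices) = sum over children c of the root
    # of the number of children of c
    total = 0
    for _ in range(trials):
        total += sum(pois() for _ in range(pois()))
    return total / trials

print(mc_mean())  # should be close to 1, as the lemma predicts
```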
Back to calculating ${\bf{E}} N({\cal P}_1 ; t)$. Recall that ${\cal P}_1$ is
a line $v_0 , v_1 , \ldots $ with Poisson(1) branching processes
$U_i$ stapled on. Each tree-map $f : t \rightarrow {\cal P}_1$
hits some initial segment $v_0 , \ldots v_k$ of the original
line, so there is some vertex $y_f \in t$ such that $f(y_f) = v_k$
for some $k$ but $v_{k+1}$ is not in the image of $f$. For each
$y \in t$, we count the expected number of tree-maps $f$
for which $y_f = y$. There is a path $x = f^{-1} (v_0) , \ldots , f^{-1}
(v_k) = y$ in $t$ going from the root $x$ to $y$.
The remaining vertices of $t$ can be separated into $k+1$ subtrees
below each of the $f^{-1} (v_i)$. These subtrees must then
get mapped respectively into the $U_i$. By the lemma, the expected
number of ways of mapping anything into a $U_i$ is one, so the
expected number of $f$ for which $y_f = y$ is $\prod_{i=1}^k 1 = 1$.
Summing over $y$ then gives
\begin{equation}
{\bf{E}} N({\cal P}_1 ; t) = |t| .
\end{equation}
The last thing we are going to do in proving the stronger Poisson
convergence theorem is to show
\begin{lem} \label{NTt}
Let $G_n$ be a Gino-regular sequence of graphs, and let ${\bf T}_n$
be a uniform spanning tree of $G_n$ rooted at some $v_n$. Then
for any finite rooted tree $t$, ${\bf{E}} N({\bf T}_n ; t) \rightarrow
|t|$ as $n \rightarrow \infty$.
\end{lem}
It is not trivial from here to establish that ${\bf T} \wedge r$
converges in distribution to ${\cal P}_1 \wedge r$ for every $r$.
The standard real analysis facts I quoted in section~\ref{5.3}
about moments need to be replaced by some not-so-standard (but
not too hard) facts about tree-moments. Suffice it to say that
the previous two lemmas do in the end prove (see \cite{BP} for details)
\begin{th} \label{strong converge}
Let $G_n$ be a Gino-regular sequence of graphs, and let ${\bf T}_n$
be a uniform spanning tree of $G_n$ rooted at some $v_n$. Then
for any $r$, ${\bf T}_n \wedge r$ converges in distribution to
${\cal P}_1 \wedge r$ as $n \rightarrow \infty$.
\end{th}
\noindent{Sketch of proof of Lemma~\protect{\ref{NTt}}:} Fix
a finite $t$ rooted at $x$. To calculate the expected number
of tree-maps from $t$ into ${\bf T}_n$ we will sum over every possible
image of a tree-map the probability that all of those edges
are actually present in ${\bf T}_n$. By an image of a tree-map,
I mean two things: (1) a collection $\{ v_x : x \in t \}$ of vertices of
$G_n$ indexed by the vertices of $t$ for which $v_x \sim v_y$
in $G$ whenever $x \sim y$ in $t$; (2) a collection of edges
$e_\epsilon$ connecting $v_x$ and $v_y$ for every edge $\epsilon \in t$ connecting
some $x$ and $y$. Fix such an image.
The transfer-impedance theorem tells us that the probability
of finding all the edges $e_\epsilon$ in ${\bf T}$ is the determinant
of $M(e_\epsilon : \epsilon \in t)$. Now for edges $e , e' \in G$,
Gino-regularity gives that $H(e , e') = D_n^{-1} (o(1) + \kappa)$
uniformly over edges of $G_n$,
where $\kappa$ is $2,1$ or $0$ according to whether $e=e'$,
they share an endpoint, or they are disjoint. The determinant
is then well approximated by the corresponding determinant without
the $o(1)$ terms, which can be worked out as exactly $|t| D_n^{1-|t|}$.
This must now be summed over all possible images, which amounts
to multiplying $|t| D_n^{1-|t|}$ by the number of possible images.
I claim the number of possible images is approximately $D_n^{|t|-1}$.
To see this, imagine starting at the root $x$, which must get mapped
to $v_n$, and choosing successively where to map each next vertex of
$t$. Since there are approximately $D_n$ edges coming out of each
vertex of $G_n$, there are always about $D_n$ choices for the
image of the next vertex (the fact that you are not allowed to
choose any vertex already chosen is insignificant as $D_n$ gets
large). There are $|t|-1$ choices, so the number of maps
is about $D_n^{|t|-1}$. This proves the claim. The claim
implies that the expected number of tree-maps from $t$ to ${\bf T}_n$
is $|t| D_n^{1-|t|} D_n^{|t|-1} = |t|$, proving the lemma. $
\Box$
\section{Infinite lattices, dimers and entropy}
There is, believe it or not, another model that ends up being equivalent
to the uniform spanning tree model under a correspondence at least as
surprising as the correspondence between spanning trees and random walks.
This is the so-called {\em dimer} or {\em domino tiling} model, which was
studied by statistical physicists quite independently
of the uniform spanning tree model. The present section is
intended to show how one of the fundamental questions of
this model, namely calculating its entropy, can be solved using
what we know about spanning trees. Since it's getting late,
there will be pictures but no detailed proofs.
\subsection{Dimers}
A dimer is a substance that on the molecular level is made up
of two smaller groups of atoms (imagine two spheres of matter)
adhering to each other via a covalent bond; consequently it is shaped
like a dumbbell. If a bunch of dimer molecules are packed together
in a cold room and a few of the less significant laws of physics
are ignored, the molecules should array themselves into some
sort of regular lattice, fitting together as snugly as dumbbells can.
To model this, let $r$ be some positive real number representing the
length of one of the dumbbells. Let $L$ be a lattice, i.e. a
regular array of points in three-space, for which each point in $L$
has some neighbors at distance $r$. For example $r$ could be
$1$ and $L$ could be the standard integer lattice
$\{ (x,y,z) : x,y,z \in \hbox{Z\kern-.4em\hbox{Z}} \}$, so $r$ is the minimum distance
between any two points of $L$ (see the picture below).
Alternatively $r$ could be $\sqrt{2}$
or $\sqrt{3}$ for the same $L$. Make a graph $G$ whose vertices are
the points of $L$, with an edge between any pair of points
at distance $r$ from each other. Then the possible packings
of dimers in the lattice are just the ways of partitioning the
lattice into pairs of vertices, each pair (representing one molecule)
being the two endpoints of some edge. The following picture shows
part of a packing of the integer lattice with nearest-neighbor edges. \\
\begin{picture}(100,100)(-50,-10)
\put(0,0){\circle* {7}}
\put(0,30){\circle* {7}}
\put(0,60){\circle* {7}}
\put(30,30){\circle* {7}}
\put(30,60){\circle* {7}}
\put(20,50){\circle* {7}}
\put(50,50){\circle* {7}}
\put(20,80){\circle* {7}}
\put(50,80){\circle* {7}}
\put(60,30){\circle* {7}}
\multiput(-3,0)(.5,0){13}{\line(0,1){30}}
\multiput(30,27)(0,.5){13}{\line(1,0){30}}
\multiput(20,47)(0,.5){13}{\line(1,0){30}}
\multiput(-3,60)(.5,0){13}{\line(1,1){20}}
\multiput(27,60)(.5,0){13}{\line(1,1){20}}
\put(0,-10){\line(0,1){80}}
\put(30,20){\line(0,1){50}}
\put(20,40){\line(0,1){50}}
\put(50,40){\line(0,1){50}}
\put(-10,30){\line(1,0){80}}
\put(-10,60){\line(1,0){50}}
\put(20,50){\line(1,0){50}}
\put(20,80){\line(1,0){50}}
\put(-7,23){\line(1,1){34}}
\put(23,23){\line(1,1){34}}
\put(-7,53){\line(1,1){34}}
\put(23,53){\line(1,1){34}}
\end{picture}
figure~\thepfigure
\label{pfig6.1}
\addtocounter{pfigure}{1}
Take a large finite box inside the lattice, containing $N$ vertices.
If $N$ is even and the box is not an awkward shape, there
will be not only one but many ways to pack it with
dimers. There will be several edges incident to each vertex $v$,
representing a choice to be made as to which other vertex
will be covered by the molecule with one atom covering $v$.
These choices obviously cannot be made independently, but it
should be plausible from this that the total number of configurations
is approximately $\gamma^N$ for some $\gamma > 1$ as $N$
goes to infinity. This number can be written alternatively
as $e^{hN}$ where $h = \ln (\gamma )$ is called the entropy
of the packing problem. The thermodynamics of the resulting
substance depend on, among other things, the entropy $h$.
The case that has been studied the most is where $L$ is the
two-dimensional integer lattice with $r=1$. The graph $G$ is then
the usual nearest-neighbor square lattice. Physically this
corresponds to packing the dimers between two slides.
You can get the same packing problem by attempting to tile
the plane with dominos -- vertical and horizontal 1 by 2 rectangles --
which is why the model also goes by the name of domino tiling.
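For intuition about the count growing like $\gamma^N$, small boards can be counted exactly by dynamic programming over column profiles; the $8 \times 8$ board famously admits $12{,}988{,}816$ domino tilings. A sketch (the broken-profile implementation choices are mine):

```python
from functools import lru_cache

def count_tilings(rows, cols):
    # Broken-profile DP: sweep column by column; `mask` records which
    # cells of the current column are already covered by horizontal
    # dominoes protruding from the previous column.
    @lru_cache(maxsize=None)
    def go(col, mask):
        if col == cols:
            return 1 if mask == 0 else 0

        def fill(row, nxt):
            if row == rows:
                return go(col + 1, nxt)
            if mask & (1 << row):          # cell already covered
                return fill(row + 1, nxt)
            # horizontal domino covering (row, col) and (row, col+1)
            total = fill(row + 1, nxt | (1 << row))
            # vertical domino covering (row, col) and (row+1, col)
            if row + 1 < rows and not mask & (1 << (row + 1)):
                total += fill(row + 2, nxt)
            return total

        return fill(0, 0)

    return go(0, 0)

print(count_tilings(8, 8))  # 12988816 tilings of the chessboard
```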
\subsection{Dominos and spanning trees}
We have not yet talked about spanning trees of an infinite
graph, but the definition remains the same: a connected subgraph
touching each vertex and containing no cycles. If the subgraph
need not be connected, it is a spanning forest. Define an
{\em essential spanning forest} or ESF to be a spanning forest
that has no finite components. Informally, an ESF is a subgraph
that you can't distinguish from a spanning tree by only looking
at a finite part of it (since it has no cycles or {\em islands}).
Let $G_2$ denote the nearest-neighbor graph on the two dimensional
integer lattice. Since $G_2$ is a planar graph, it has a {\em
dual} graph $G_2^*$, which has a vertex in each cell of $G_2$ and
an edge $e^*$ crossing each edge $e$ of $G_2$. In the following picture,
filled circles and heavy lines denote $G_2$ and open circles
and dotted lines denote $G_2^*$. Note that $G_2$, together with $G_2^*$
and the points where edges cross dual edges, forms another graph
$\tilde{G_2}$ that is just $G_2$ scaled down by a factor of two. \\
\begin{picture}(160,100)(-65,0)
\put(10,20){\line(1,0){80}}
\put(10,50){\line(1,0){80}}
\put(10,80){\line(1,0){80}}
\put(20,10){\line(0,1){80}}
\put(50,10){\line(0,1){80}}
\put(80,10){\line(0,1){80}}
\put(20,20){\circle*{3}}
\put(20,50){\circle*{3}}
\put(20,80){\circle*{3}}
\put(50,20){\circle*{3}}
\put(50,50){\circle*{3}}
\put(50,80){\circle*{3}}
\put(80,20){\circle*{3}}
\put(80,50){\circle*{3}}
\put(80,80){\circle*{3}}
\put(35,35){\dashbox{2}(30,30)}
\put(35,35){\circle{3}}
\put(35,65){\circle{3}}
\put(65,35){\circle{3}}
\put(65,65){\circle{3}}
\put(5,35){\dashbox{2}(30,0)}
\put(5,65){\dashbox{2}(30,0)}
\put(65,35){\dashbox{2}(30,0)}
\put(65,65){\dashbox{2}(30,0)}
\put(35,5){\dashbox{2}(0,30)}
\put(65,5){\dashbox{2}(0,30)}
\put(35,65){\dashbox{2}(0,30)}
\put(65,65){\dashbox{2}(0,30)}
\end{picture}
figure~\thepfigure
\label{pfig6.2}
\addtocounter{pfigure}{1} \\
Each subgraph $H$ of $G_2$ has a dual subgraph $H^*$ consisting
of all edges $e^*$ of $G_2^*$ dual to edges $e$ \underline{not}
in $H$. If $H$ has a cycle, then the duals of all edges in the
cycle are absent from $H^*$ which separates $H^*$ into two
components: the interior and exterior of the cycle. Similarly,
an island in $H$ corresponds to a cycle in $H^*$ as in the picture:
\begin{picture}(180,100)(-50,10)
\multiput(20,20)(30,0){4}{\circle*{3}}
\multiput(20,50)(30,0){4}{\circle*{3}}
\multiput(20,80)(30,0){4}{\circle*{3}}
\multiput(35,35)(30,0){3}{\circle {3}}
\multiput(35,65)(30,0){3}{\circle {3}}
\put(20,20){\line(1,0){90}}
\put(20,20){\line(0,1){60}}
\put(110,20){\line(0,1){60}}
\put(50,80){\line(1,0){60}}
\put(50,50){\line(1,0){30}}
\put(20,20){\line(0,1){60}}
\put(35,35){\dashbox{2}(60,30)}
\put(35,65){\dashbox{2}(0,30)}
\end{picture}
figure~\thepfigure
\label{pfig6.3}
\addtocounter{pfigure}{1} \\
From this description, it is clear that $T$ is an essential spanning
forest of $G_2$ if and only if $T^*$ is an essential spanning forest
of $G_2^* \cong G_2$.
Let $T$ now be an infinite tree. We
define {\em directed} a little differently than in the finite case:
say $T$ is directed if the edges are oriented so that every
vertex has precisely one edge leading out of it. Following the
arrows from any vertex gives an infinite path and it is not hard
to check that any two such paths from different vertices eventually
merge. Thus directedness for infinite trees is like directedness
for finite trees, toward a vertex at infinity.
Say an essential spanning forest of $G_2$ is directed
if a direction has been chosen for each of its components and each
of the components of its dual. Here then is the connection between
dominos and essential spanning forests.
\begin{quotation}
Let $T$ be a directed essential spanning forest of $G_2$, with
dual $T^*$. Construct a domino tiling of $\tilde{G_2}$ as follows.
Each vertex $v \in V(G_2) \subseteq V(\tilde{G_2})$ is covered by a
domino that also covers the vertex of $\tilde{G_2}$ in the middle
of the edge of $T$ that leads out of $v$. Similarly,
each vertex $v^* \in V(G_2^*)$ is covered by a
domino also covering the middle of the edge of $T^*$ leading out
of $v^*$. It is easy to check that this gives a legitimate domino
tiling: every domino covers two neighboring vertices,
and each vertex is covered by precisely one domino. \\
Conversely, for any domino tiling of $\tilde{G_2}$, directed essential
spanning forests $T$ and $T^*$ for $G_2$ and $G_2^*$ can be constructed
as follows. For each $v \in V(G_2)$, the oriented edge leading
out of $v$ in $T$ is the one along which the domino covering $v$
lies (i.e. the one whose midpoint is the other vertex of $\tilde{G_2}$
covered by the domino covering $v$). Construct $T^*$ analogously.
To show that $T$ and $T^*$ are directed ESF's amounts to showing
there are no cycles, since clearly $T$ and $T^*$ will have one edge
coming out of each vertex. This is true because if you set up
dominos in such a way as to create a cycle, they will always enclose an
odd number of vertices (check it yourself!). Then there is no way to
extend this configuration to a legitimate domino tiling of $\tilde{G_2}$.
\end{quotation}
It is easy to see that the two operations above invert each other,
giving a one to one correspondence between domino tilings of
$\tilde{G_2}$ and directed essential spanning forests of $G_2$.
To bring this back into the realm of finite graphs requires
ironing out some technicalities which I am instead going to
ignore. The basic idea is that domino tilings of the $2n$-torus
$T_{2n}$ correspond to spanning trees of $T_n$ almost as well
as domino tilings of $\tilde{G_2}$ correspond to spanning trees of $G_2$.
Going from directed essential spanning forests to spanning trees
is one of the details glossed over here, but explained somewhat in the
next subsection. The entropy for domino tilings is then one
quarter the entropy for spanning trees, since $T_{2n}$ has
four times as many vertices as $T_n$. Entropy for spanning trees
just means the number $h$ for which $T_n$ has approximately
$e^{hn^2}$ spanning trees. To calculate this, we use the
matrix-tree theorem.
The number of spanning trees of $T_n$ according to this theorem
is the determinant of a minor of the matrix indexed by vertices of $T_n$
whose $v,w$-entry is $4$ if $v=w$, $-1$ if $v \sim w$ and $0$
otherwise. If $T_n$ were replaced by $n$ edges in a circle,
then this would be a circulant matrix. As is, it is a
generalized circulant, with symmetry group $T_n = (\hbox{Z\kern-.4em\hbox{Z}} / n\hbox{Z\kern-.4em\hbox{Z}})^2$
instead of $\hbox{Z\kern-.4em\hbox{Z}} / n\hbox{Z\kern-.4em\hbox{Z}}$. The eigenvalues can be gotten via
group representations of $T_n$, resulting in eigenvalues
$4 - 2 \cos (2 \pi k/n) - 2 \cos (2 \pi l/n)$ as $k$ and
$l$ range from $0$ to $n-1$. The determinant we want
is the product of all of these except for the zero eigenvalue at
$k=l=0$. The log of the determinant divided by $n^2$
is the average of these as $k$ and $l$ vary, and the entropy
is the limit of this as $n \rightarrow \infty$ which is given by
$$\int_0^1 \int_0^1 \ln (4 - 2 \cos (2 \pi x) - 2 \cos (2 \pi y))
\,\, dx \, dy . $$
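The eigenvalue computation above is easy to verify numerically. The following Python sketch (illustrative, not part of the article) counts spanning trees of the $3\times 3$ torus from the eigenvalue product, and approximates the entropy by the corresponding Riemann sum; the integral is known to equal $4G/\pi \approx 1.166$, where $G$ is Catalan's constant.

```python
import math

def torus_tree_count(n):
    """Matrix-tree theorem for the n x n torus: the number of spanning
    trees is the product of the nonzero Laplacian eigenvalues
    4 - 2cos(2 pi k/n) - 2cos(2 pi l/n), divided by the number of
    vertices n^2."""
    prod = 1.0
    for k in range(n):
        for l in range(n):
            if (k, l) == (0, 0):
                continue  # skip the zero eigenvalue at k = l = 0
            prod *= 4 - 2*math.cos(2*math.pi*k/n) - 2*math.cos(2*math.pi*l/n)
    return round(prod / n**2)

def tree_entropy(n):
    """Riemann-sum approximation of the entropy integral
    of log(4 - 2cos(2 pi x) - 2cos(2 pi y)) over the unit square."""
    total = 0.0
    for k in range(n):
        for l in range(n):
            if (k, l) == (0, 0):
                continue
            total += math.log(4 - 2*math.cos(2*math.pi*k/n)
                                - 2*math.cos(2*math.pi*l/n))
    return total / n**2
```

The domino-tiling entropy is then one quarter of this value, about $0.2916$, i.e. $G/\pi$.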
\subsection{Miscellany}
The limit theorems in Section~\ref{5} involved letting $G_n$
tend to infinity locally, in the sense that each vertex
in $G_n$ had higher degree as $n$ grew larger. Instead, one
may consider a sequence such as $G_n = T_n$; clearly the $n$-torus
converges in some sense to $G_2$ as $n \rightarrow \infty$, so there
ought to be some limit theorem. Let ${\bf T}_n$ be a uniform
spanning tree of $G_n$. Since $G_n$ is not Gino-regular, the
limit may not be ${\cal P}_1$ and in fact cannot be since the limit
has degree bounded by four. It turns out that ${\bf T}_n$ converges
in distribution to a random tree ${\bf T}$ called the uniform
random spanning tree for the integer lattice. This works also
for any sequence of graphs converging to the three or four
dimensional integer lattices \cite{Pe}. Unfortunately the process breaks
down in dimensions five and higher. There the uniform
spanning trees on $G_n$ do converge to a limiting distribution
but instead of a spanning tree of the lattice, you get an
essential spanning forest that has infinitely many components.
If you can't see how the limit of spanning trees could be a spanning
forest, remember that an essential spanning forest is so similar
to a spanning tree that you can't tell them apart with any finite
amount of information.
Another result from this study is that in dimensions $2,3$ and $4$,
the uniform random spanning tree ${\bf T}$ has only one path
to infinity. What this really means is that any two infinite
paths must eventually join up. Not only that, but ${\bf T}^*$
has the same property. That means there is only one way to
direct ${\bf T}$, so that each choice of ${\bf T}$ uniquely
determines a domino tiling of $\tilde{G_2}$. In this way
it makes sense to speak of a uniform random domino tiling
of the plane: just choose a uniform random spanning tree
and see what domino tiling it corresponds to.
That takes care of one of the details glossed over in the
previous subsection. It also just about wraps up what I wanted
to talk about in this article. As a parting note, let me mention
an open problem. Let $G$ be the infinite nearest neighbor
graph on the integer lattice in $d$ dimensions and let ${\bf T}$
be the uniform spanning tree on $G$ gotten by taking a distributional
limit of uniform spanning trees on $d$-dimensional $n$-tori
as $n \rightarrow \infty$ as explained above.
\begin{conj} \label{one end}
Suppose $d \geq 5$. Then with probability one, each component
of the essential spanning forest has only one path to infinity,
in the sense that any two infinite paths must eventually merge.
\end{conj}
\renewcommand{\baselinestretch}{1.0}\large\normalsize
\end{document} |
\begin{document}
\title{On monogamy and polygamy relations of multipartite systems}
\author{Xia Zhang$^1$, Naihuan Jing$^{2}$, Ming Liu$^1$, Haitao Ma$^3$\\
$^{1}${School of Mathematics, South China University of Technology, Guangzhou 510641, China}\\
$^{2}${Department of Mathematics, North Carolina State University, Raleigh NC 27695, USA}\\
$^{3}${College of Mathematical Science, Harbin Engineering University, Harbin 150001, China} \\
$^\dag$
Corresponding author. E-mail: [email protected]}
\begin{abstract}
\baselineskip18pt
We study the monogamy and polygamy relations related to quantum correlations for multipartite quantum systems in a unified manner. It is known that any bipartite measure obeys monogamy and polygamy relations for the $r$th power of the measure. We show that the generalized monogamy and polygamy relations are transitive to other powers of the measure in weighted forms.
We demonstrate that our weighted monogamy and polygamy relations are stronger than recently available relations.
Comparisons are given in detailed examples which show that our results are stronger in both situations.
\keywords{Genuine multipartite entanglement \and Correlation tensor \and Weyl operators}
\end{abstract}
\maketitle
\section{Introduction}
Monogamy relations of quantum entanglement are an important feature of quantum physics and play an important role in quantum information and quantum communication. Monogamy relations constrain the entanglement between a quantum system and the other (sub)systems, and thus are closely related to quantum information processing tasks such as the security analysis of quantum key distribution \cite{MP}.
The monogamy relation for a three-qubit state $\rho_{A_1A_2A_3}$ is defined \cite{MK} as
$$\mathcal{E}(\rho_{A_1|A_2A_3})\geq \mathcal{E}(\rho_{A_1A_2}) +\mathcal{E}(\rho_{A_1A_3}),$$
where $\mathcal{E}$ is a bipartite entanglement measure, and $\rho_{A_1A_2}$ and $\rho_{A_1A_3}$ are the reduced density matrices of $\rho_{A_1A_2A_3}$. The monogamy relation was generalized to multiqubit quantum systems and to high-dimensional quantum systems in
general settings \cite{ZXN,JZX,jll,012329,gy1,gy2,jin1,jin2, SC, RWF, ZYS}.
The first polygamy relation of entanglement was established in \cite{gg} for some three-qubit system as the inequality ${E_a}_{A_1|A_2A_3}\leq {E_a}_{A_1A_2} +{E_a}_{A_1A_3}$ where ${E_a}_{A_1|A_2A_3}$ is the assisted entanglement \cite{gg} between $A_1$ and $A_2A_3$ and later generalized to some multiqubit systems in \cite{jsb,jin3}. General polygamy inequalities of multipartite entanglement were also given in \cite{062328,295303,jsb,042332} in terms of entanglement of assistance. While it is
known that the monogamy relation does not hold for all quantum systems, it is also shown \cite{GG} that
any monotonic bipartite measure is monogamous on pure tripartite states.
It turns out that a generalized monogamy relation always holds for any quantum system. In \cite{JZX, jll,ZXN}, multiqubit monogamy relations have been demonstrated for the $x$th power of the entanglement of
formation ($x\geq\sqrt{2}$) and the concurrence ($x\geq2$), which opened a new direction in the study of monogamy relations. Similar general polygamy relations
have been shown for R\'enyi-$\alpha$ entanglement \cite{GYG}. Monogamy relations for quantum steering have also been shown in \cite{hqy,mko,jk1,jk2,jk3}.
Moreover, polygamy inequalities were given in terms of the
$\alpha$th ($0\leq\alpha\leq 1$) power of square of convex-roof extended negativity (SCREN) and the entanglement of assistance \cite{j012334, 042332}.
\iffalse
Whereas the monogamy of entanglement shows the restricted sharability of multipartite quantum entanglement, the distribution of entanglement in multipartite quantum systems was shown to have a dually monogamous property.
Based on concurrence of assistance \cite{qic}, the polygamy of entanglement provides a lower bound for the distribution of bipartite entanglement in a multipartite system \cite{jmp}.
Monogamy and polygamy of entanglement can restrict the possible correlations between the authorized users and the eavesdroppers, thus tightening the security bounds in quantum cryptography \cite{MP}.
\fi
An important feature of this generalized monogamy relation for the $\alpha$th power of the measure is its transitivity, in the sense that other powers of the measure satisfy weighted monogamy relations. Recently, the authors in \cite{JFQ} provided a class of monogamy and polygamy relations of the $\alpha$th $(0\leq\alpha\leq r,r\geq2)$ and the $\beta$th $(\beta\geq s,0\leq s\leq1)$ powers for any quantum correlation. Applying the monogamy relations in \cite{JFQ} to quantum correlations like squared convex-roof extended negativity, entanglement of formation and concurrence, one can get tighter monogamy inequalities than those given in \cite{zhu}.
Similarly applying the bounds in \cite {JFQ}
to specific quantum correlations such as the concurrence of assistance, square of convex-roof extended negativity of assistance (SCRENoA), entanglement of assistance,
corresponding polygamy relations were obtained, which are complementary to the existing ones \cite{jin3,062328,295303,jsb,042332} with different regions of the parameter $\beta$. In \cite{ZJZ1,ZJZ2}, the authors gave another set of monogamy relations for $(0\leq\alpha\leq \frac{r}{2},r\geq2)$ and polygamy relations for $(\beta\geq s,0\leq s\leq1)$; we note that their bound is stronger than that of \cite{JFQ} in the monogamy case but weaker in the polygamy case.
One realizes that the monogamy and polygamy relations given in \cite{JFQ, ZJZ1,ZJZ2} were obtained by bounding the function $(1+t)^x$ by various estimates. In this paper, we revisit the function $(1+t)^x$ and give a unified and improved method to estimate
its upper and lower bounds, which then lead to new improved monogamy and polygamy relations stronger than some of the recent strong ones in both situations.
For instance, we will rigorously show that the monogamy and polygamy relations of quantum correlations for the cases $(0\leq\alpha\leq r,r\geq2)$ and $(\beta\geq s,0\leq s\leq1)$ are tighter than those given in \cite{JFQ, ZJZ1, ZJZ2} altogether. We also use the concurrence and SCRENoA as examples to demonstrate how our bounds improve upon
previously available strong bounds.
\section{Monogamy relations of quantum correlations}
Let $\rho$ be a density matrix on a multipartite quantum system $\bigotimes_{i=1}^nA_i$, and let
$\mathcal{Q}$ be a measure of quantum correlation for any bipartite (sub)system. If $\mathcal{Q}$ satisfies \cite{ARA} the inequality
\begin{eqnarray}\label{q}
&&\mathcal{Q}(\rho_{A_1|A_2,\cdots,A_{n}})\nonumber\\
&&\geq\mathcal{Q}(\rho_{A_1A_2})+\mathcal{Q}(\rho_{A_1A_3})+\cdots+\mathcal{Q}(\rho_{A_1A_{n}}),
\end{eqnarray}
$\mathcal Q$ is said to be {\it monogamous}, where $\rho_{A_1A_i}$, $i=2,...,n$, are the reduced density matrices of $\rho$. For simplicity, we denote $\mathcal{Q}(\rho_{A_1A_i})$ by $\mathcal{Q}_{A_1A_i}$, and $\mathcal{Q}(\rho_{A_1|A_2,\cdots,A_{n}})$ by $\mathcal{Q}_{A_1|A_2,\cdots,A_{n}}$. It is known that some quantum measures obey the monogamy relation \cite{AKE,SSS} for certain quantum states, while
there are quantum measures which do not satisfy the monogamy relation \cite{GLGP, RPAK}.
In Ref. \cite{SPAU,TJ,YKM}, the authors have proved that there exists $r\in \mathbb R~(r\geq2)$ such that the $r$th power of any measure $\mathcal{Q}$ satisfies the following generalized monogamy relation for arbitrary dimensional tripartite state \cite{SPAU}:
\begin{eqnarray}\label{aqq}
\mathcal{Q}^r_{A_1|A_2A_3}\geq\mathcal{Q}^r_{A_1A_2}+\mathcal{Q}^r_{A_1A_3}.
\end{eqnarray}
Assuming \eqref{aqq}, we would like to
prove that other powers of $\mathcal Q$ also obey weighted monogamy relations. First of all, using the inequality $(1+t)^{x} \geq 1+t^{x}$ for $x \geq 1,0 \leq t \leq 1$ one can easily
derive the following generalized monogamy relation for the $n$-partite case,
$$
\mathcal{Q}_{A_1 \mid A_{2}, \ldots, A_{n}}^{r} \geq \sum_{i=2}^{n} \mathcal{Q}_{A_1 A_{i}}^{r}
$$
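The inequality $(1+t)^x \geq 1+t^x$ is equivalent to superadditivity of $u\mapsto u^x$ for $x\geq 1$, which is what lets the tripartite relation iterate to $n$ parties. A quick numerical illustration (ours, not from the paper):

```python
import random

def superadditive(u, v, x):
    """Check (u+v)^x >= u^x + v^x for u, v >= 0 and x >= 1; dividing
    through by max(u, v)^x recovers (1+t)^x >= 1 + t^x with t in [0,1]."""
    return (u + v)**x >= u**x + v**x - 1e-12  # tolerance for rounding

random.seed(0)
ok = all(superadditive(random.uniform(0, 5), random.uniform(0, 5),
                       random.uniform(1, 4))
         for _ in range(10000))
```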
\iffalse
The concurrence of a bipartite pure state $|\psi\rangle_{A B}$ is given by \cite{U,RBCHJ,AF},
$$
C\left(|\psi\rangle_{A B}\right)=\sqrt{2\left[1-\operatorname{Tr}\left(\rho_{A}^{2}\right)\right]},
$$
where $\rho_{A}$ is reduced density matrix $\rho_{A}=\operatorname{Tr}_{B}\left(|\psi\rangle_{A B}\langle\psi|\right)$. The concurrence of mixed states $\rho=\sum_{i} p_{i}\left|\psi_{i}\right\rangle\left\langle\psi_{i}\right|, p_{i} \geq 0, \sum_{i} p_{i}=1$, is given by the convex roof construction,
$$
C\left(\rho_{A B}\right)=\min _{\left\{p_{i},\left|\psi_{i}\right\rangle\right\}} \sum_{i} p_{i} C\left(\left|\psi_{i}\right\rangle\right)
$$
For $n$-qubit quantum states, the concurrence satisfies \cite{ZF}
$$
C_{A_1 \mid A_{2}\ldots A_{n}}^{\alpha} \geq C_{A_1 A_{2}}^{\alpha}+\ldots+C_{A_1 A_{n}}^{\alpha}
$$
for $\alpha \geq 2$, where $C_{A_1 \mid A_{2} \ldots A_{n}}$ is the concurrence of $\rho$ under bipartite partition $A_1 \mid A_{2} \ldots A_{n}$, and $C_{A_1 A_{i}}, i=$ $2 \ldots, n$, is the concurrence of the reduced states $\rho_{A_1 A_{i}}$.
\fi
We now try to generalize the monogamy relation to other powers. Let's start with a useful lemma.
\begin{lemma}\label{lem:1} Let $a\geq 1$ be a real number. Then for $t\geq a\geq 1$, the function $(1+t)^x$ satisfies the following inequality
\begin{equation}
(1+t)^x\geq (1+a)^{x-1}+(1+\frac{1}{a})^{x-1}t^x,
\end{equation}
where $0<x\leq 1$.
\end{lemma}
\begin{proof}
Consider $g(x,y)=(1+y)^x-(1+{a})^{x-1}y^x$, $0<y\leq \frac{1}{a}$, $0\leq x\leq 1$. Then
$$\frac{\partial g}{\partial y}=xy^{x-1}\left((1+\frac{1}{y})^{x-1}-(1+a)^{x-1}\right).$$
Let $h(x,y)=(1+\frac{1}{y})^{x-1}$, $0<y\leq \frac{1}{a}$, $0\leq x\leq 1$. Since $h(x,y)$ is an increasing function of $y$, we have $h(x,y)\leq h(x,\frac{1}{a})=(1+{a})^{x-1}$.
Thus $\frac{\partial g}{\partial y}\leq 0$ and $g(x,y)$ is decreasing with respect to $y$. Therefore we have $g(x,y)\geq g(x,\frac{1}{a})=(1+\frac{1}{a})^{x-1}$. Subsequently
\begin{equation*}
g(x,\frac{1}{t})=\frac{(1+t)^x}{t^x}-\frac{(1+{a})^{x-1}}{t^x}\geq (1+\frac{1}{a})^{x-1}
\end{equation*}
for $t\geq a$. Then we have
\begin{equation*}
(1+t)^x\geq(1+{a})^{x-1}+(1+\frac{1}{a})^{x-1}t^x,
\end{equation*}
for $t\geq a$.
\iffalse
Similarly, consider ${g}(x,y)=(1+y)^x-2^{x-1}y^x$, $y\geq 1$, $0\leq x\leq 1$.
Then
$$\frac{\partial g}{\partial y}=(1+y)^x\frac{x}{1+y}-2^{x-1}y^x\frac{x}{y}=xy^{x-1}\left((1+\frac{1}{y})^{x-1}-2^{x-1}\right).$$
Let $h(x,y)=(1+\frac{1}{y})^{x-1}$, $y\geq 1$, $0\leq x\leq 1$. Since $h(x,y)$ is an increasing function of $y$, we have $h(x,y)\geq h(x,1)=2^{x-1}$.
Thus $\frac{\partial g}{\partial y}\geq 0$ and $g(x,y)$ is increasing with respect to $y$. Therefore we have $g(x,y)\geq g(x,1)=2^{x-1}$.
Thus we can easily get that
\begin{equation*}
g(x,\frac{1}{t})=\frac{(1+t)^x}{t^x}-\frac{2^{x-1}}{t^x}\geq 2^{x-1}
\end{equation*}
for $0<t\leq 1$. Then we have
\begin{equation*}
(1+t)^x\geq (1+\frac{1}{a})^{x-1}+\frac{(1+a)^x-(1+\frac{1}{a})^{x-1}}{a^x}t^x,
\end{equation*}
for $0<t\leq 1$.
\fi
\end{proof}
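Lemma \ref{lem:1} is straightforward to spot-check numerically. The sketch below (illustrative, not part of the paper) samples the admissible region and also confirms that equality holds at $t=a$, where the right-hand side collapses to $(1+a)^x$.

```python
import random

def lemma1_rhs(x, t, a):
    """Right-hand side of Lemma 1: (1+a)^(x-1) + (1+1/a)^(x-1) * t^x."""
    return (1 + a)**(x - 1) + (1 + 1/a)**(x - 1) * t**x

random.seed(0)
violations = 0
for _ in range(10000):
    a = random.uniform(1, 10)
    t = a + random.uniform(0, 10)   # t >= a >= 1
    x = random.uniform(1e-6, 1)     # 0 < x <= 1
    if (1 + t)**x < lemma1_rhs(x, t, a) - 1e-12:
        violations += 1

# equality at t = a: both sides equal (1+a)^x
gap = abs((1 + 2.5)**0.3 - lemma1_rhs(0.3, 2.5, 2.5))
```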
\begin{remark}\label{rem:1}~~
The proof of Lemma \ref{lem:1} implies that for $0\leq x\leq 1$, $t\geq a\geq 1$ and $f(x)\geq (1+{a})^{x-1} $, we have
\begin{equation*}
(1+t)^x\geq f(x)+\frac{(1+a)^x-f(x)}{a^x}t^x.
\end{equation*}
It is not hard to see that for $0\leq x\leq 1$, $t\geq a\geq 1$ and for $f(x)\geq (1+{a})^{x-1}$,
$$(1+{a})^{x-1}+(1+\frac{1}{a})^{x-1}t^x-
\left[f(x)+\frac{(1+a)^x-f(x)}{a^x}t^x\right]\geq 0.$$
In fact,
\begin{equation*}
\begin{aligned}
{ LHS}
&=(1+{a})^{x-1}-f(x)+\Big(\frac{a(1+a)^{x-1}}{a^x}t^{x}-\frac{(1+a)(1+a)^{x-1}-f(x)}{a^x}t^{x}\Big)\\
&=(1+{a})^{x-1}-f(x)+\frac{f(x)-(1+{a})^{x-1}}{a^x}t^{x}\\
&=(\frac{t^x}{a^x}-1)(f(x)-(1+{a})^{x-1})\geq 0.
\end{aligned}
\end{equation*}
\end{remark}
\begin{remark} \label{rem:2}
In \cite[Lemma 1]{JFQ}, the authors have given a lower bound of $(1+t)^x$ for $0\leq x\leq 1$ and $t\geq a\geq 1$:
\begin{equation*}
(1+t)^{x} \geq 1+\frac{(1+a)^{x}-1}{a^{x}} t^{x}.
\end{equation*}
This is a special case of our result in Remark \ref{rem:1}. In fact, letting $f(x)=1\geq (1+{a})^{x-1}$ for $0\leq x\leq 1$ in Remark \ref{rem:1}, the inequality
reduces to theirs. Therefore our lower bound of $(1+t)^x$ is better than that of \cite{JFQ}, and consequently
any monogamy relations based on Lemma \ref{lem:1} are better than those given in \cite{JFQ} based on Lemma 1 of \cite{JFQ}.
\end{remark}
\begin{remark}\label{rem:ZJZ1}
In \cite[Lemma 1]{ZJZ1}, the authors gave another lower bound of $(1+t)^x$ for $0\leq x\leq \frac{1}{2}$ and $t\geq a\geq 1$
\begin{equation*}
(1+t)^{x} \geq p^{x}+\frac{(1+a)^{x}-p^{x}}{a^{x}} t^{x},
\end{equation*}
where $\frac{1}{2}\leq p\leq 1$.
This is also a special case of our Remark \ref{rem:1}, with $f(x)=p^{x}$ for $0\leq x\leq \frac{1}{2}$ and $t\geq a\geq 1$.
Since $(1+a)^{x-1}\leq p^x$ for $0\leq x\leq \frac{1}{2}$ and $\frac{1}{2}\leq p\leq 1$,
our lower bound of $(1+t)^x$ for $0\leq x\leq \frac{1}{2}$ and $t\geq a\geq 1$ is stronger than that given in \cite{ZJZ1}.
Naturally our monogamy relations based on Lemma \ref{lem:1} will outperform those given in \cite{ZJZ1} based on Lemma 1 of \cite{ZJZ1}.
\end{remark}
\begin{remark}\label{rem:ZJZ2}
In \cite[Lemma 1]{ZJZ2}, the following lower bound of $(1+t)^x$ was given for $0\leq x\leq \frac{1}{2}$ and $t\geq a\geq 1$:
\begin{equation*}
(1+t)^{x} \geq (\frac{1}{2})^{x}+\frac{(1+a)^{x}-(\frac{1}{2})^{x}}{a^{x}} t^{x},
\end{equation*}
which is the special case $p=\frac{1}{2}$ of \cite{ZJZ1}.
Therefore our lower bound of $(1+t)^x$ for $0\leq x\leq \frac{1}{2}$ and $t\geq a\geq 1$ is better than the one given in \cite{ZJZ2}.
Thus our monogamy relations based on Lemma \ref{lem:1} are better than the ones given in \cite{ZJZ2} based on \cite[Lemma 1]{ZJZ2}.
\end{remark}
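The comparisons in Remarks \ref{rem:2}--\ref{rem:ZJZ2} can be checked numerically. Below is an illustrative sketch (not part of the paper); the three formulas are transcribed from the remarks above.

```python
def new_bound(x, t, a):
    """Lemma 1 of this paper: (1+a)^(x-1) + (1+1/a)^(x-1) * t^x."""
    return (1 + a)**(x - 1) + (1 + 1/a)**(x - 1) * t**x

def jfq_bound(x, t, a):
    """[JFQ, Lemma 1], i.e. the special case f(x) = 1."""
    return 1 + ((1 + a)**x - 1) / a**x * t**x

def zjz_bound(x, t, a, p):
    """[ZJZ1, Lemma 1], i.e. f(x) = p^x; requires 0 <= x <= 1/2
    and 1/2 <= p <= 1."""
    return p**x + ((1 + a)**x - p**x) / a**x * t**x
```

Sampling $t\geq a\geq 1$ confirms that `new_bound` dominates `jfq_bound` on $0<x\leq 1$ and `zjz_bound` on $0<x\leq \frac{1}{2}$.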
\begin{lemma}\label{lem:2}
Let $p_i$ $(i=1,\cdots, n)$ be nonnegative numbers arranged as $p_{(1)}\geq p_{(2)}\geq ...\geq p_{(n)}$
for a permutation $(1)(2)\cdots (n)$ of $12\cdots n$.
If $p_{(i)}\geq a p_{(i+1)}$ for $i=1,...,n-1$, where $a\geq 1$,
we have
\begin{equation}\label{eq:3}
\left(\sum_{i=1}^n p_i\right)^x\geq (1+{a})^{x-1}\sum_{i=1}^n \left((1+\frac{1}{a})^{x-1}\right)^{n-i}p_{(i)}^x,
\end{equation}
for $0\leq x\leq 1$.
\iffalse
and
\begin{equation}\label{eq:4}
\left(\sum_{i=1}^n p_i\right)^x\leq \sum_{j=2}^{n}2^{(n-j+1)(x-1)}p_{(j)}^x+2^{(n-1)(x-1)}p_{(1)}^x,
\end{equation}
for $x\geq 1$,
\fi
\end{lemma}
\begin{proof}
For $0\leq x\leq 1$, if $p_{(n)}>0$, then using Lemma \ref{lem:1} repeatedly we have
\begin{equation*}
\begin{aligned}
\left(\sum_{i=1}^n p_i\right)^x&=p_{(n)}^{x}\left(1+\frac{p_{(1)}+...+p_{(n-1)}}{p_{(n)}}\right)^x\\
&\geq (1+{a})^{x-1}p_{(n)}^x+(1+\frac{1}{a})^{x-1}(p_{(1)}+...+p_{(n-1)})^x\\
&\geq...\\
&\geq (1+{a})^{x-1}\sum_{i=1}^n \left((1+\frac{1}{a})^{x-1}\right)^{n-i}p_{(i)}^x.
\end{aligned}
\end{equation*}
The other cases can be easily checked by Lemma \ref{lem:1}.
\end{proof}
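Lemma \ref{lem:2} can also be spot-checked numerically by building random sequences that satisfy the ratio condition (an illustrative sketch, not part of the paper):

```python
import random

def lemma2_holds(ps, a, x):
    """Lemma 2: (sum p_i)^x >= (1+a)^(x-1) * sum_i w^(n-i) * p_(i)^x
    with w = (1+1/a)^(x-1), for 0 <= x <= 1, where the p_(i) are
    sorted decreasingly and satisfy p_(i) >= a * p_(i+1)."""
    ps = sorted(ps, reverse=True)
    n = len(ps)
    w = (1 + 1/a)**(x - 1)
    rhs = (1 + a)**(x - 1) * sum(w**(n - 1 - i) * ps[i]**x
                                 for i in range(n))
    return sum(ps)**x >= rhs * (1 - 1e-9)  # relative float tolerance

random.seed(0)
ok = True
for _ in range(2000):
    a = random.uniform(1, 4)
    n = random.randint(2, 6)
    p = [random.uniform(0.1, 1.0)]
    for _ in range(n - 1):
        # enforce the ratio condition p_(i) >= a * p_(i+1)
        p.append(p[-1] * a * random.uniform(1, 2))
    x = random.uniform(1e-6, 1)
    ok = ok and lemma2_holds(p, a, x)
```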
\begin{theorem}\label{thm:1}
For any tripartite mixed state $\rho_{A_1A_2A_3}$, let $\mathcal{Q}$ be a (bipartite) quantum measure satisfying the generalized monogamy relation \eqref{aqq} for $r\geq 2$
and $\mathcal{Q}_{A_1A_3}^{r}\geq a\mathcal{Q}_{A_1A_2}^{r}$ for some $a\geq 1$, then
we have
\begin{equation}
\mathcal{Q}^{\alpha}_{A_1|A_2A_3}\geq (1+{a})^{\frac{\alpha}{r}-1}\mathcal{Q}_{A_1A_2}^{\alpha}+(1+\frac{1}{a})^{\frac{\alpha}{r}-1}\mathcal{Q}_{A_1A_3}^{\alpha}.
\end{equation}
for $0\leq \alpha\leq r$.
\end{theorem}
\begin{proof} It follows from Lemma \ref{lem:1} that
\begin{equation}
\begin{aligned}
\mathcal{Q}_{A_1|A_2A_3}^{\alpha}&=\left(\mathcal{Q}^{r}_{A_1|A_2A_3}\right)^{\frac{\alpha}{r}}\geq \left(\mathcal{Q}_{A_1A_2}^{r}+\mathcal Q_{A_1A_3}^{r}\right)^{\frac{\alpha}{r}}\\
&=\mathcal{Q}_{A_1A_2}^{\alpha}\left(1+\frac{\mathcal{Q}_{A_1A_3}^{r}}{\mathcal{Q}_{A_1A_2}^{r}}\right)^{\frac{\alpha}{r}}\\
&\geq(1+{a})^{\frac{\alpha}{r}-1}\mathcal{Q}_{A_1A_2}^{\alpha}+(1+\frac{1}{a})^{\frac{\alpha}{r}-1}\mathcal{Q}_{A_1A_3}^{\alpha}.
\end{aligned}
\end{equation}
\end{proof}
\begin{theorem}\label{thm:2}
For any $n$-partite quantum state $\rho_{A_1A_2...A_n}$, let $\mathcal{Q}$ be a (bipartite) quantum measure satisfying the generalized monogamy relation \eqref{aqq} for $r\geq 2$.
Arrange $\mathcal{Q}_{(1)}\geq \mathcal{Q}_{(2)}\geq...\geq \mathcal{Q}_{(n-1)}$ with $\mathcal{Q}_{(j)}\in\{\mathcal{Q}_{A_1A_i}|i=2,...,n\}, j=1,...,n-1$. If for some $a\geq 1$, $\mathcal{Q}^r_{(i)}\geq a \mathcal{Q}^r_{(i+1)}$ for $i=1,...,n-2$, then we have
\begin{equation}
\mathcal{Q}^{\alpha}_{A_1|A_2...A_n}\geq(1+{a})^{\frac{\alpha}{r}-1}\sum_{i=1}^{n-1} \left((1+\frac{1}{a})^{\frac{\alpha}{r}-1}\right)^{n-1-i}\mathcal{Q}_{(i)}^{{\alpha}}
\end{equation}
for $0\leq \alpha\leq r$.
\end{theorem}
\begin{proof}
By Lemma \ref{lem:2}, we have
\begin{equation*}
\begin{aligned}
\mathcal{Q}^{\alpha}_{A_1|A_2...A_n}&=\left(\mathcal{Q}^{r}_{A_1|A_2...A_n}\right)^{\frac{\alpha}{r}}
\geq \left(\mathcal{Q}_{A_1A_2}^{r}+...+\mathcal{Q}_{A_1A_n}^{r}\right)^{\frac{\alpha}{r}}\\
&\geq(1+{a})^{\frac{\alpha}{r}-1}\sum_{i=1}^{n-1} \left((1+\frac{1}{a})^{\frac{\alpha}{r}-1}\right)^{n-1-i}\mathcal{Q}_{(i)}^{{\alpha}},
\end{aligned}
\end{equation*}
\end{proof}
The general monogamy relations work for any quantum correlation measure such as concurrence, negativity, entanglement of formation etc.
Thus our theorems produce
tighter weighted monogamy relations than the existing ones (cf. \cite{JFQ,zhu,ZJZ1,ZJZ2}). Moreover, the new weighted monogamy relations can also be used for
the Tsallis-$q$ entanglement and R\'enyi-$q$ entanglement measures, where they
outperform some of the recently found monogamy relations in \cite{jll,jin3,slh}.
In the following, we use the concurrence as an example to show the advantage of our monogamy relations.
For a bipartite pure state $\rho=|\psi\rangle_{AB}\in{H}_A\otimes {H}_B$, the concurrence is defined \cite{AU,PR,SA} by $C(|\psi\rangle_{AB})=\sqrt{{2\left[1-\mathrm{Tr}(\rho_A^2)\right]}}$,
where $\rho_A$ is the reduced density matrix. For a mixed state $\rho_{AB}$ the concurrence is given by the convex roof extension
$C(\rho_{AB})=\min_{\{p_i,|\psi_i\rangle\}}\sum_ip_iC(|\psi_i\rangle)$,
where the minimum is taken over all possible decompositions of $\rho_{AB}=\sum\limits_{i}p_i|\psi_i\rangle\langle\psi_i|$, with $p_i\geq0$, $\sum\limits_{i}p_i=1$ and $|\psi_i\rangle\in {H}_A\otimes {H}_B$.
\iffalse
For an $n$-qubit state $\rho_{A_1A_2\cdots A_{n}}\in {H}_A\otimes {H}_{A_2}\otimes\cdots\otimes {H}_{A_{n}}$, if $C(\rho_{A_1B_i})\geq C(\rho_{A|B_{i+1}\cdots B_{N-1}})$ for $i=1, 2, \cdots, N-2$, $N\geq 4$, the concurrence satisfies \cite{jll},
\begin{eqnarray}\label{mo2}
&&C^\alpha(\rho_{A|B_1B_2\cdots B_{N-1}})\geq \sum_{j=1}^{N-1}(2^\frac{\alpha}{2}-1)^{j-1}C^\alpha(\rho_{AB_j}),
\end{eqnarray}
for $\alpha\geq2$.
For any $2\otimes2\otimes2^{N-2}$ tripartite mixed state $\rho_{ABC}$, if $C(\rho_{AC})\geq C(\rho_{AB})$, the concurrence satisfies \cite{zhu}
\begin{eqnarray}\label{mo3}
C^\alpha(\rho_{A|BC})\geq C^\alpha(\rho_{AB})+(2^\frac{\alpha}{\gamma}-1)C^\alpha(\rho_{AC})
\end{eqnarray}
for $0\leq\alpha\leq \gamma$ and $\gamma\geq 2$.
\fi
For convenience, we write $C_{A_1A_i}=C(\rho_{A_1A_i})$ and $C_{A_1|A_2,\cdots,A_{n}}=C(\rho_{A_1|A_2\cdots A_{n}})$. The following conclusions follow by
the same method as in the proof of Theorem \ref{thm:1}.
\begin{corollary} Let $C$ be the concurrence satisfying the generalized monogamy relation \eqref{aqq} for $r\geq 2$.
For any 3-qubit mixed state $\rho_{A_1A_2A_3}$, if $C_{A_1A_3}^{r}\geq aC_{A_1A_2}^{r}$ for some $a\geq 1$, then
we have
\begin{equation}
C^{\alpha}_{A_1|A_2A_3}\geq (1+{a})^{\frac{\alpha}{r}-1}C_{A_1A_2}^{\alpha}+(1+\frac{1}{a})^{\frac{\alpha}{r}-1}C_{A_1A_3}^{\alpha}.
\end{equation}
for $0\leq \alpha\leq r$.
\end{corollary}
\begin{corollary} Let $C$ be the concurrence satisfying the generalized monogamy relation \eqref{aqq} for $r\geq 2$.
For any $n$-qubit quantum state $\rho_{A_1A_2...A_n}$, arrange
$C_{(1)}\geq C_{(2)}\geq...\geq C_{(n-1)}$ with $C_{(j)}\in\{C_{A_1A_i}|i=2,...,n\}, j=1,...,n-1$. If for some $a\geq 1$, $C^r_{(i)}\geq a C^r_{(i+1)}$ for $i=1,...,n-2$, then we have
\begin{equation}
C^{\alpha}_{A_1|A_2...A_n}\geq(1+{a})^{\frac{\alpha}{r}-1}\sum_{i=1}^{n-1} \left((1+\frac{1}{a})^{\frac{\alpha}{r}-1}\right)^{n-1-i}C_{(i)}^{{\alpha}}
\end{equation}
for $0\leq \alpha\leq r$.
\end{corollary}
\begin{example}
Consider the following three-qubit state $|\psi\rangle$ with generalized Schmidt decomposition \cite{AA,XH},
$$
|\psi\rangle=\lambda_{0}|000\rangle+\lambda_{1} e^{i \varphi}|100\rangle+\lambda_{2}|101\rangle+\lambda_{3}|110\rangle+\lambda_{4}|111\rangle,
$$
where $\lambda_{i} \geq 0$ and $\sum_{i=0}^{4} \lambda_{i}^{2}=1$. Then $C_{A_1 \mid A_2 A_3}=2 \lambda_{0} \sqrt{\lambda_{2}^{2}+\lambda_{3}^{2}+\lambda_{4}^{2}}, C_{A_1 A_2}=$ $2 \lambda_{0} \lambda_{2}$, and $C_{A_1 A_3}=2 \lambda_{0} \lambda_{3}$.
Set $\lambda_{0}=\lambda_3=\frac{1}{2}, \lambda_{1}=\lambda_{2}=\lambda_{4}=\frac{\sqrt{6}}{6}$.
We have
$C_{A_1 \mid A_2A_3}=\frac{\sqrt{21}}{6}, C_{A_1 A_2}=\frac{\sqrt{6}}{6}, C_{A_1A_3}=\frac{1}{2}$. Set $a=\sqrt{6}/2$, then
the lower bound of $C_{A_1 \mid A_2A_3}^{\alpha}$ given in \cite{JFQ} is
\begin{equation*}
C_{A_1A_2}^{\alpha}+\frac{(1+a)^{\frac{\alpha}{r}}-1}{a^{\frac{\alpha}{r}}} C_{A_1 A_3}^{\alpha}=\left(\frac{\sqrt{6}}{6}\right)^{\alpha}+\frac{(1+\frac{\sqrt{6}}{2})^{\frac{\alpha}{r}}-1}{(\frac{\sqrt{6}}{2})^{\frac{\alpha}{r}}}\left(\frac{1}{2}\right)^{\alpha}=Z_1,
\end{equation*}
the lower bound of $C_{A_1 \mid A_2A_3}^{\alpha}$ given in \cite{ZJZ1,ZJZ2} is
\begin{equation*}
C_{A_1A_2}^{\alpha}p^{\frac{\alpha}{r}}+\frac{(1+a)^{\frac{\alpha}{r}}-p^{\frac{\alpha}{r}}}{a^{\frac{\alpha}{r}}} C_{A_1 A_3}^{\alpha}=\left(\frac{\sqrt{6}}{6}\right)^{\alpha}(\frac{1}{2})^{\frac{\alpha}{r}}+\frac{(1+\frac{\sqrt{6}}{2})^{\frac{\alpha}{r}}-(\frac{1}{2})^{\frac{\alpha}{r}}}{(\frac{\sqrt{6}}{2})^{\frac{\alpha}{r}}}\left(\frac{1}{2}\right)^{\alpha}=Z_2,
\end{equation*}
with $p=\frac{1}{2}$ and $\alpha\leq \frac{r}{2}$.
While our bound is
\begin{equation*}
(1+{a})^{\frac{\alpha}{r}-1}C_{A_1A_2}^{\alpha}+(1+\frac{1}{a})^{\frac{\alpha}{r}-1}C_{A_1A_3}^{\alpha}=(1+\frac{\sqrt{6}}{2})^{\frac{\alpha}{r}-1}\left(\frac{\sqrt{6}}{6}\right)^{\alpha}+(1+\frac{2}{\sqrt{6}})^{\frac{\alpha}{r}-1}\left(\frac{1}{2}\right)^{\alpha}=Z_3
\end{equation*}
Fig.~1 plots the three bounds; the figure clearly shows that our result is better than those in \cite{JFQ, ZJZ1, ZJZ2} for $0 \leq \alpha \leq 1$ and $r \geq 2$.
\begin{figure}[!htb]
\centerline{\includegraphics[width=0.6\textwidth]{fig1.eps}}
\renewcommand{\figurename}{Fig.}
\caption{The $z$-axis shows the lower bounds on $C_{A_1\mid A_2A_3}^{\alpha}$ as functions of $\alpha, r$. The blue, green and red surfaces represent our lower bound, the lower bound from \cite{ZJZ1,ZJZ2} and the lower bound from \cite{JFQ}, respectively.}
\end{figure}
\end{example}
\section{Polygamy relations for general quantum correlations}
In \cite{jinzx}, the authors proved that for arbitrary-dimensional tripartite states, there exists $0 \leq s \leq 1$ such that any quantum correlation measure $\mathcal{Q}$ satisfies the following polygamy relation:
\begin{equation}\label{poly}
\mathcal{Q}_{A \mid B C}^{s} \leq \mathcal{Q}_{A B}^{s}+\mathcal{Q}_{A C}^{s}.
\end{equation}
Using methods similar to those of Lemma \ref{lem:1} and Lemma \ref{lem:2}, we can prove the following Lemma \ref{lem:4} and Lemma \ref{lem:5}.
\begin{lemma}\label{lem:4}
For $t\geq a\geq 1$, $x\geq 1$ we have,
\begin{equation}
(1+t)^x\leq (1+{a})^{x-1}+(1+\frac{1}{a})^{x-1}t^x.
\end{equation}
\end{lemma}
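Lemma \ref{lem:4} is elementary to check numerically; the following Python sketch (not part of the proof; the grid is arbitrary, and a relative tolerance absorbs the exact equality at $t=a$) verifies the inequality:

```python
def lemma4_gap(t, a, x):
    """RHS minus LHS of Lemma 4; nonnegative for t >= a >= 1 and x >= 1."""
    lhs = (1 + t)**x
    rhs = (1 + a)**(x - 1) + (1 + 1 / a)**(x - 1) * t**x
    return rhs - lhs

for a in (1.0, 1.5, 2.0, 3.0):
    for t in (a, a + 0.5, 2 * a, 10 * a):
        for x in (1.0, 1.5, 2.0, 5.0):
            assert lemma4_gap(t, a, x) >= -1e-9 * (1 + t)**x
```

Note that equality holds exactly at $t=a$ (both sides equal $(1+a)^x$), and at $x=1$ (both sides equal $1+t$).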
\begin{lemma}\label{lem:5}
For nonnegative numbers $p_i$, $i=1,\cdots, n$, rearrange them in descending order: $p_{(1)}\geq p_{(2)}\geq \cdots\geq p_{(n)}$,
where $p_{(i)}\in \{p_j|j=1,\cdots, n\}$. If $p_{(i)}\geq a p_{(i+1)}$ for $i=1,\ldots,n-1$ and some $a\geq 1$, then
we have
\begin{equation}
\left(\sum_{i=1}^n p_i\right)^x\leq (1+{a})^{x-1}\sum_{i=1}^n \left((1+\frac{1}{a})^{x-1}\right)^{n-i}p_{(i)}^x,
\end{equation}
for $x\geq 1$.
\end{lemma}
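Lemma \ref{lem:5} can also be spot-checked numerically. The sketch below (an illustration, not part of the proof) uses geometric-type test sequences with ratio $a+0.1\geq a$, so that the hypothesis $p_{(i)}\geq a\,p_{(i+1)}$ holds by construction:

```python
def lemma5_rhs(ps, a, x):
    """RHS of Lemma 5; ps must already be sorted in descending order."""
    n = len(ps)
    w = (1 + 1 / a)**(x - 1)
    return (1 + a)**(x - 1) * sum(w**(n - 1 - i) * p**x
                                  for i, p in enumerate(ps))

for a in (1.0, 1.5, 2.0):
    for x in (1.0, 2.0, 3.5):
        for n in (2, 3, 5):
            ps = [(a + 0.1)**(n - 1 - i) for i in range(n)]  # descending
            assert sum(ps)**x <= lemma5_rhs(ps, a, x) * (1 + 1e-12) + 1e-12
```

At $x=1$ the two sides coincide (all the weights reduce to $1$), which matches the equality case of Lemma \ref{lem:4}.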
\begin{remark}\label{rem:3}
An argument similar to that of Remark \ref{rem:1} shows that
for the case $x\geq 1$, $t\geq a\geq 1$ and $f(x)\leq (1+{a})^{x-1}$, we have
\begin{equation*}
(1+t)^x\leq f(x)+\frac{(1+a)^x-f(x)}{a^x}t^x.
\end{equation*}
We can easily check that for $ x\geq 1$, $t\geq a\geq 1$ and $f(x)\leq (1+{a})^{x-1}$,
$$(1+{a})^{x-1}+(1+\frac{1}{a})^{x-1}t^x-
\left[f(x)+\frac{(1+a)^x-f(x)}{a^x}t^x\right]\leq 0.$$
\end{remark}
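The final claim of Remark \ref{rem:3} — that the new upper bound never exceeds the $f(x)$-based one — can be confirmed numerically. A Python sketch (the grid is arbitrary; a relative tolerance absorbs the equality cases $t=a$ and $f(x)=(1+a)^{x-1}$):

```python
def our_ub(t, a, x):
    """Our upper bound for (1+t)^x from Lemma 4."""
    return (1 + a)**(x - 1) + (1 + 1 / a)**(x - 1) * t**x

def f_ub(t, a, x, f):
    """The f(x)-based upper bound of Remark 3."""
    return f + ((1 + a)**x - f) / a**x * t**x

for a in (1.0, 1.5, 3.0):
    for x in (1.0, 2.0, 4.0):
        for t in (a, 2 * a, 10 * a):
            for f in (1.0, 0.5**x, (1 + a)**(x - 1)):
                assert our_ub(t, a, x) <= f_ub(t, a, x, f) * (1 + 1e-9) + 1e-9
```

The choice $f(x)=(1+a)^{x-1}$ gives exact equality of the two bounds, which is why the comparison is stated with $f(x)\leq(1+a)^{x-1}$.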
\begin{remark}\label{rem:4}
In Lemma 2 of \cite{JFQ}, the authors gave an upper bound of $(1+t)^x$ for $x\geq 1$ and $t\geq a\geq 1$
\begin{equation*}
(1+t)^{x} \leq 1+\frac{(1+a)^{x}-1}{a^{x}} t^{x}.
\end{equation*}
Actually this is a special case of $f(x)=1\leq (1+{a})^{x-1}$ for $ x\geq 1$ in Remark \ref{rem:3},
therefore our upper bound of $(1+t)^x$ is better than the one given in \cite{JFQ}.
Consequently our polygamy relations based on Lemma \ref{lem:4} are better than those given in \cite{JFQ} based on Lemma 2 of \cite{JFQ}.
\end{remark}
\begin{remark}\label{rem:Jing's paper1}
In \cite[Lemma 2]{ZJZ1}, the authors also gave an upper bound of $(1+t)^x$ for $x\geq 1$, $t\geq a\geq 1$ and $0<q\leq 1$
\begin{equation*}
(1+t)^{x} \leq q^{x}+\frac{(1+a)^{x}-q^{x}}{a^{x}} t^{x}.
\end{equation*}
Actually, this is the special case $f(x)=q^x\leq (1+{a})^{x-1}$ for $x\geq 1$ in Remark \ref{rem:3};
therefore our upper bound of $(1+t)^x$ is better than the one given in \cite{ZJZ1}.
Thus, our polygamy relations based on Lemma \ref{lem:4} are better than those given in \cite{ZJZ1} based on Lemma 2 of \cite{ZJZ1}.
\end{remark}
\begin{remark}\label{rem:Jing's paper2}
In \cite[Lemma 1]{ZJZ2}, an upper bound of $(1+t)^x$ for $x\geq 1$, $t\geq a\geq 1$ was given:
\begin{equation*}
(1+t)^{x} \leq (\frac{1}{2})^{x}+\frac{(1+a)^{x}-(\frac{1}{2})^{x}}{a^{x}} t^{x}.
\end{equation*}
Again, this is the special case $f(x)=(\frac{1}{2})^{x}\leq (1+{a})^{x-1}$ for $x\geq 1$ in Remark \ref{rem:3};
therefore our upper bound of $(1+t)^x$ is better than the one given in \cite{ZJZ2}. Naturally
our polygamy relations based on Lemma \ref{lem:4} are stronger than those of \cite{ZJZ2} based on Lemma 1 of \cite{ZJZ2}.
\end{remark}
\begin{theorem} Let $\mathcal{Q}$ be a bipartite measure satisfying the generalized polygamy relation \eqref{poly} for $0 \leq s \leq 1$.
If $\mathcal{Q}_{A_1A_3}^{s} \geq a \mathcal{Q}_{A_1 A_2}^{s}$
for some $a\geq 1$ on a tripartite state $\rho_{A_1 A_2 A_3} \in H_{A_1} \otimes H_{A_2} \otimes H_{A_3}$, then the quantum correlation measure $\mathcal{Q}$ satisfies
$$
\mathcal{Q}_{A_1 \mid A_2 A_3}^{\beta} \leq (1+{a})^{\frac{\beta}{s}-1}\mathcal{Q}_{A_1 A_2}^{\beta}+(1+\frac{1}{a})^{\frac{\beta}{s}-1} \mathcal{Q}_{A_1 A_3}^{\beta}
$$
for $\beta \geq s$.
\end{theorem}
\begin{theorem} Let $\rho$ be a state on the multipartite system $A_1A_2...A_n$. Let $\mathcal{Q}$ be a bipartite measure satisfying the generalized polygamy relation \eqref{poly} for $0 \leq s \leq 1$. Set $\mathcal{Q}_{(1)}\geq \mathcal{Q}_{(2)}\geq...\geq \mathcal{Q}_{(n-1)}$ with $\mathcal{Q}_{(j)}\in\{\mathcal{Q}_{A_1A_i}|i=2,...,n\}, j=1,...,n-1$. If
$\mathcal{Q}_{(i)}^{s}\geq a \mathcal{Q}_{(i+1)}^s$ for some $a\geq 1$ and $i=1,...,n-2$, then we have
\begin{equation}
\mathcal{Q}^{\beta}_{A_1|A_2...A_n}\leq(1+{a})^{\frac{\beta}{s}-1}\sum_{i=1}^{n-1} \left((1+\frac{1}{a})^{\frac{\beta}{s}-1}\right)^{n-1-i}\mathcal{Q}_{(i)}^{{\beta}}
\end{equation}
for $\beta\geq s$.
\end{theorem}
\begin{proof}
Since $\mathcal{Q}_{A \mid B C}^{s} \leq \mathcal{Q}_{A B}^{s}+\mathcal{Q}_{A C}^{s}$, we have
\begin{equation}
\mathcal{Q}^{s}_{A_1|A_2...A_n}\leq \mathcal{Q}^{s}_{A_1|A_2}+\mathcal{Q}^{s}_{A_1|A_3...A_n}\leq\cdots\leq \sum_{i=2}^n\mathcal{Q}^{s}_{A_1|A_i}=\sum_{j=1}^{n-1}\mathcal{Q}^{s}_{(j)}.
\end{equation}
By Lemma \ref{lem:5} we have
\begin{equation}
\begin{aligned}
\mathcal{Q}^{\beta}_{A_1|A_2...A_n}&=\left(\mathcal{Q}^{s}_{A_1|A_2...A_n}\right)^{\frac{\beta}{s}}\leq \left(\sum_{j=1}^{n-1}\mathcal{Q}^{s}_{(j)}\right)^{\frac{\beta}{s}}\\
&\leq(1+{a})^{\frac{\beta}{s}-1}\sum_{i=1}^{n-1} \left((1+\frac{1}{a})^{\frac{\beta}{s}-1}\right)^{n-1-i}\mathcal{Q}_{(i)}^{{\beta}}.
\end{aligned}
\end{equation}
\end{proof}
The general monogamy relations can be applied to any quantum correlation measure, such as the concurrence of assistance, the square of convex-roof extended negativity of assistance (SCRENoA), and the entanglement of assistance. Correspondingly, a new class of (weighted) polygamy relations is obtained. In the following, we take the SCRENoA as an example.
The negativity of a bipartite state $\rho_{A_1A_2}$ is defined by \cite{GRF}:
$N(\rho_{A_1A_2})=(||\rho_{A_1A_2}^{T_{A_{1}}}||-1)/2$,
where $\rho_{A_1A_2}^{T_{A_1}}$ is the partial transposition with respect to the subsystem $A_1$ and $||X||=\mathrm{Tr}\sqrt{XX^\dag}$ is the trace norm of $X$.
For convenience, we use the following definition of negativity: $ N(\rho_{A_1A_2})=||\rho_{A_1A_2}^{T_{A_{1}}}||-1$.
For any bipartite pure state $|\psi\rangle_{A_1A_2}$, the negativity $ N(\rho_{A_1A_2})$ is given by
$$N(|\psi\rangle_{A_1A_2})=2\sum_{i<j}\sqrt{\lambda_i\lambda_j}=(\mathrm{Tr}\sqrt{\rho_{A_1}})^2-1,$$
where $\lambda_i$ are the eigenvalues of the reduced density matrix $\rho_{A_1}$ of $|\psi\rangle_{A_1A_2}$. For a mixed state $\rho_{A_1A_2}$, the square of convex-roof extended negativity (SCREN) is defined by
$N_{sc}(\rho_{A_1A_2})=[\mathrm{min}\sum_ip_iN(|\psi_i\rangle_{A_1A_2})]^2$,
where the minimum is taken over all possible pure state decompositions $\{p_i,~|\psi_i\rangle_{A_1A_2}\}$ of $\rho_{A_1A_2}$. The SCRENoA is then defined by $N_{sc}^a(\rho_{A_1A_2})=[\mathrm{max}\sum_ip_iN(|\psi_i\rangle_{A_1A_2})]^2$, where the maximum is taken over all possible pure state decompositions $\{p_i,~|\psi_i\rangle_{A_1A_2}\}$ of $\rho_{A_1A_2}$. For convenience, we denote ${N_a}_{A_1A_i}=N_{sc}^a(\rho_{A_1A_i})$ the SCRENoA of $\rho_{A_1A_i}$ and ${N_a}_{A_1|A_2\cdots A_{n}}=N^a_{sc}(|\psi\rangle_{A_1|A_2\cdots A_{n}})$.
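For a bipartite pure state, the negativity above can be computed either from the partial transpose or from the Schmidt coefficients. The following Python sketch (using numpy; the state is an arbitrary example in Schmidt form, not one from the paper) checks that $||\rho^{T_{A_1}}||-1$ agrees with both $2\sum_{i<j}\sqrt{\lambda_i\lambda_j}$ and $(\mathrm{Tr}\sqrt{\rho_{A_1}})^2-1$:

```python
import numpy as np

def negativity(psi, dA, dB):
    """N = ||rho^{T_{A_1}}|| - 1 for a pure state vector psi on C^dA (x) C^dB."""
    rho = np.outer(psi, psi.conj()).reshape(dA, dB, dA, dB)
    rho_TA = rho.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)
    # rho^{T_A} is Hermitian, so the trace norm is the sum of |eigenvalues|.
    return np.abs(np.linalg.eigvalsh(rho_TA)).sum() - 1

# Schmidt form |psi> = sqrt(l0)|00> + sqrt(l1)|11>:
# N = 2*sqrt(l0*l1) = (Tr sqrt(rho_{A_1}))^2 - 1.
l0, l1 = 0.7, 0.3
psi = np.zeros(4)
psi[0], psi[3] = np.sqrt(l0), np.sqrt(l1)
N = negativity(psi, 2, 2)
assert abs(N - 2 * np.sqrt(l0 * l1)) < 1e-12
assert abs(N - ((np.sqrt(l0) + np.sqrt(l1))**2 - 1)) < 1e-12
```

For a pure state the SCREN and the SCRENoA both reduce to $N^2$, since there is only one decomposition.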
\iffalse
In \cite{j012334} it has been shown that
${N_a}_{A|B_1\cdots B_{N-1}}\leq \sum_{j=1}^{N-1}{N_a}_{AB_j}.$
It is further improved that for $0\leq\begin{eqnarray}ta\leq1$ \cite{jin3},
\begin{eqnarray}gin{eqnarray}\label{n5}
{N_a}^\begin{eqnarray}ta_{A|B_1\cdots B_{N-1}}\leq\sum_{j=1}^{N-1} (2^\begin{eqnarray}ta-1)^j{N_a}^\begin{eqnarray}ta_{AB_j}.
\end{eqnarray}
\fi
\begin{corollary} Let $s\in (0, 1)$ be a fixed number such that the SCRENoA satisfies the generalized polygamy relation \eqref{poly}.
If ${N_a}^s_{A_1A_3}\geq a{N_a}^s_{A_1A_2}$
for some $a\geq 1$ on a $2\otimes2\otimes2^{N-2}$ tripartite mixed state $\rho$, then the SCRENoA satisfies
\begin{eqnarray}\label{co31}
{N_a}^\beta_{A_1|A_2A_3}\leq(1+{a})^{\frac{\beta}{s}-1}{N_a}^\beta_{A_1A_2}+(1+\frac{1}{a})^{\frac{\beta}{s}-1} {N_a}^\beta_{A_1A_3}
\end{eqnarray}
for $\beta\geq s$.
\end{corollary}
By induction, the following result is immediate for a multiqubit quantum state $\rho_{A_1A_2\cdots A_{n}}$.
\begin{corollary} Let $s\in (0, 1)$ be a fixed number such that the SCRENoA satisfies the generalized polygamy relation \eqref{poly}.
Let $N_{a(1)}\geq N_{a(2)}\geq...\geq N_{a(n-1)}$ be a reordering of $N_{aA_1A_j}$, $j=2,...,n$.
If $N_{a(i)}^{s}\geq a N_{a(i+1)}^s$ for some $a\geq 1$ and $i=1,...,n-2$, then we have
\begin{equation}
N^{\beta}_{aA_1|A_2...A_n}\leq(1+{a})^{\frac{\beta}{s}-1}\sum_{i=1}^{n-1} \left((1+\frac{1}{a})^{\frac{\beta}{s}-1}\right)^{n-1-i}N_{a(i)}^{{\beta}}
\end{equation}
for $\beta\geq s$.
\end{corollary}
\begin{example} Let us consider the three-qubit generalized $W$-class state,
\begin{eqnarray}\label{W}
|W\rangle_{A_1A_2A_3}=\frac{1}{2}(|100\rangle+|010\rangle)+\frac{\sqrt{2}}{2}|001\rangle.
\end{eqnarray}
Then ${N_a}_{A_1|A_2A_3}=\frac{3}{4}$, ${N_a}_{A_1A_2}=\frac{1}{4},~{N_a}_{A_1A_3}=\frac{1}{2}$.
Let $a=2^{0.6}$; then the upper bound given in \cite{JFQ} is $$W_1=(\frac{1}{4})^\beta+\frac{(1+2^{0.6})^{\frac{\beta}{s}}-1}{(2^{0.6})^{\frac{\beta}{s}}}(\frac{1}{2})^\beta,$$
the upper bound given in \cite{ZJZ1,ZJZ2} is
$$W_2=(\frac{1}{2})^{\frac{\beta}{s}}(\frac{1}{4})^\beta+\frac{(1+2^{0.6})^{\frac{\beta}{s}}-(\frac{1}{2})^{\frac{\beta}{s}}}{(2^{0.6})^{\frac{\beta}{s}}}(\frac{1}{2})^\beta,$$
while our upper bound is $$W_3=(1+2^{0.6})^{\frac{\beta}{s}-1}(\frac{1}{4})^\beta+(1+2^{-0.6})^{\frac{\beta}{s}-1} (\frac{1}{2})^\beta.$$
Fig. 2 charts our bound together with other bounds, and Fig. 3 and Fig. 4 show the comparison of our bound with those given in \cite{JFQ,ZJZ1,ZJZ2}. Our bound is found to be stronger than the other two.
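The comparison in Figs. 2--4 can be reproduced numerically. A Python sketch (not part of the original argument; note that the hypothesis ${N_a}^s_{A_1A_3}\geq a{N_a}^s_{A_1A_2}$ with $a=2^{0.6}$ forces $2^s\geq 2^{0.6}$, i.e. $s\geq0.6$, so the grid is restricted accordingly):

```python
Na12, Na13 = 1 / 4, 1 / 2      # SCRENoA values quoted above for |W>
a = 2**0.6

def W_bounds(beta, s):
    """Return (W1, W2, W3): the upper bounds of [JFQ], [ZJZ1,ZJZ2], and ours."""
    x = beta / s
    W1 = Na12**beta + ((1 + a)**x - 1) / a**x * Na13**beta
    W2 = 0.5**x * Na12**beta + ((1 + a)**x - 0.5**x) / a**x * Na13**beta
    W3 = (1 + a)**(x - 1) * Na12**beta + (1 + 2**(-0.6))**(x - 1) * Na13**beta
    return W1, W2, W3

# The hypothesis Na13^s >= a * Na12^s reads 2^s >= 2^0.6, i.e. s >= 0.6.
for s in (0.6, 0.7, 0.8, 0.9):
    for beta in (s, 1.5 * s, 2 * s, 3 * s):
        W1, W2, W3 = W_bounds(beta, s)
        assert W3 <= W1 * (1 + 1e-9) and W3 <= W2 * (1 + 1e-9)
```

At the boundary value $s=0.6$ the ratio $t=2^s$ equals $a$ and the three bounds (essentially) coincide; for $s>0.6$ our bound $W_3$ is strictly smaller.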
\end{example}
\begin{figure}[H]
\centerline{\includegraphics[width=0.6\textwidth]{fig2.eps}}
\renewcommand{\figurename}{Fig.}
\caption{The $z$-axis presents the SCRENoA for the state $|W\rangle_{A_1A_2A_3}$ as a function of $\beta, s$. The red, green and blue surfaces chart
our upper bound, the upper bound of \cite{JFQ} and the upper bound of \cite{ZJZ1,ZJZ2} respectively.}
\end{figure}
\begin{figure}[H]
\centerline{\includegraphics[width=0.6\textwidth]{fig3.eps}}
\renewcommand{\figurename}{Fig.}
\caption{The surface depicts our upper bound of SCRENoA minus that given by \cite{JFQ}.}
\end{figure}
\begin{figure}[H]
\centerline{\includegraphics[width=0.6\textwidth]{fig4.eps}}
\renewcommand{\figurename}{Fig.}
\caption{The surface depicts our upper bound of SCRENoA minus that of \cite{ZJZ1,ZJZ2}.}
\end{figure}
\section{Conclusion}
Monogamy relations reveal special properties of correlations in terms of inequalities satisfied by various quantum measurements of the subsystems.
In this paper, we have examined the physical meanings and mathematical formulations related to monogamy and polygamy relations
in multipartite quantum systems. By broadly generalizing a technical inequality for the function $(1+t)^x$, we have obtained stronger general weighted
monogamy and polygamy relations for quantum correlation measures such as the concurrence, the negativity, and the entanglement of formation, as well as
the Tsallis-$q$ and R\'enyi-$q$ entanglement measures. We have shown rigorously, and in a unified manner, that our bounds outperform several of the strong bounds found recently;
notably, our results are stronger not only for monogamy relations but also for polygamy relations. We have also used the concurrence and the SCRENoA
(square of convex-roof extended negativity of assistance) to show, through detailed examples and charts, that our bounds are indeed better than the
recently available ones in both situations.
\noindent{\bf Acknowledgments}
This work is partially supported by Simons Foundation grant no. 523868
and National Natural Science Foundation of China grant no. 12126351.
\noindent\textbf {Data Availability Statements} All data generated or analysed during this study are included in the paper.
\begin{thebibliography}{99}
\bibitem{MP} M. Pawlowski, Security proof for cryptographic protocols based only on the monogamy of Bell's inequality violations, Phys. Rev. A 82, 032313 (2010).
\bibitem{MK} M. Koashi and A. Winter, Monogamy of quantum entanglement and other correlations, Phys. Rev. A 69, 022309 (2004).
\bibitem{JZX} Z. X. Jin and S. M. Fei, Tighter entanglement monogamy relations of qubit systems, Quantum Inf. Process. 16:77 (2017).
\bibitem{ZXN}X. N. Zhu and S. M. Fei, Entanglement monogamy relations of qubit systems, Phys. Rev. A 90, 024304 (2014).
\bibitem{jll}Z. X. Jin, J. Li, T. Li, and S. M. Fei, Tighter monogamy relations in multiqubit systems, Phys. Rev. A 97, 032336 (2018).
\bibitem{012329} J. S. Kim, A. Das, and B. C. Sanders, Entanglement monogamy of multipartite higher-dimensional quantum systems using convex-roof extended negativity, Phys. Rev. A 79, 012329 (2009).
\bibitem{gy1} G. Gour and Y. Guo, Monogamy of entanglement without inequalities, Quantum 2, 81 (2018).
\bibitem{gy2} Y. Guo, Any entanglement of assistance is polygamous, Quantum Inf. Process. 17, 222 (2018).
\bibitem{jin1} Z. X. Jin and S. M. Fei, Tighter monogamy relations of quantum entanglement for multiqubit W-class states, Quantum Inf. Process. 17:2 (2018).
\bibitem{jin2}Z. X. Jin, S. M. Fei, and X. Li-Jost, Improved monogamy relations with concurrence of assistance and negativity of assistance for multiqubit W-class states, Quantum Inf. Process. 17:213 (2018).
\bibitem{SC} X. Shi and L. Chen, Monogamy relations for the generalized W-class states beyond qubits. Phys. Rev. A 101, 032344 (2020).
\bibitem{RWF} Y. Y. Ren, Z. X. Wang, and S. M. Fei, Tighter constraints of multiqubit entanglement in terms of unified entropy, Laser Physics Lett. 18, 115204 (2021).
\bibitem{ZYS} X. L. Zong, H. H. Yin, W. Song, and Z. L. Cao, Monogamy of quantum entanglement, Front. Phys. 10, 880560 (2022).
\bibitem{gg} G. Gour, D. A. Meyer, and B. C. Sanders, Deterministic entanglement of assistance and monogamy constraints, Phys. Rev. A 72, 042329 (2005).
\bibitem{jin3}Z. X. Jin and S. M. Fei, Finer distribution of quantum correlations among multiqubit systems, Quantum Inf. Process. 18:21 (2019).
\bibitem{jsb} G. Gour, S. Bandyopadhay, and B. C. Sanders, Dual monogamy inequality for entanglement, J. Math. Phys. 48, 012108 (2007).
\bibitem{042332} J. S. Kim, Weighted polygamy inequalities of multiparty entanglement in arbitrary-dimensional quantum systems, Phys. Rev. A 97, 042332 (2018).
\bibitem{062328} J. S. Kim, Tsallis entropy and entanglement constraints in multiqubit systems, Phys. Rev. A 81, 062328 (2010).
\bibitem{295303} J. S. Kim and B. C. Sanders, J. Phys. A: Math. Theor. 44, 295303 (2011).
\bibitem{GG} Y. Guo and G. Gour, Monogamy of the entanglement of formation, Phys. Rev. A 99, 042305 (2019).
\bibitem{GYG} L. M. Gao, F. L. Yan, and T. Gao, Tighter monogamy relations of multiqubit entanglement in terms of R\'enyi-$\alpha$ entanglement, Comm. Theoret. Phys. 72(8) (2020), 085102.
\bibitem{jk1} J. K. Kalaga and W. Leo\'nski, Quantum steering borders in three-qubit systems, Quantum Inf. Process. 16:175 (2017).
\bibitem{jk2} J. K. Kalaga, W. Leo\'nski, and R. Szcze\'sniak, Quantum steering and entanglement in three-mode triangle Bose-Hubbard system, Quantum Inf. Process. 16:265 (2017).
\bibitem{mko} M. K. Olsen, Spreading of entanglement and steering along small Bose-Hubbard chains, Phys. Rev. A 92, 033627 (2015).
\bibitem{hqy} X. Deng, Y. Xiang, C. Tian, G. Adesso, and Q. He, Demonstration of Monogamy Relations for Einstein-Podolsky-Rosen Steering in Gaussian Cluster States, Phys. Rev. Lett. 118, 230501 (2017).
\bibitem{jk3} J. K. Kalaga and W. Leo\'nski, Einstein-Podolsky-Rosen steering and coherence in the family of entangled three-qubit states, Phys. Rev. A 97, 042110 (2018).
\bibitem{j012334} J. S. Kim, Negativity and tight constraints of multiqubit entanglement. Phys. Rev. A 97, 012334 (2018).
\bibitem{JFQ} Z. Jin, S. Fei, and C. Qiao, Complementary quantum correlations among multipartite systems, Quantum Inf. Process. 19:101 (2020).
\bibitem{zhu} X. N. Zhu and S. M. Fei, Monogamy properties of qubit systems, Quantum Inf. Process. 18:23 (2019).
\bibitem{ZJZ1} M. M. Zhang, N. Jing, and H. Zhao, Monogamy and polygamy relations of quantum correlations for multipartite systems, Internat. J. Theoret. Phys. 61, Paper No. 6 (2022).
\bibitem{ZJZ2} M. M. Zhang, N. Jing, and H. Zhao, Tightening monogamy and polygamy relations of unified entanglement in multipartite systems, Quantum Inf. Process. 21:136 (2022).
\bibitem{ARA} A. Kumar, R. Prabhu, A. Sen(De), and U. Sen, Effect of a large number of parties on the monogamy of quantum correlations. Phys. Rev. A 91, 012341, (2015).
\bibitem{SSS} G. Adesso, A. Serafini, and F. Illuminati, Multipartite entanglement in three-mode Gaussian states of continuous-variable systems: Quantification, sharing structure, and decoherence. Phys. Rev. A 73, 032345 (2006).
\bibitem{AKE} A. K. Ekert, Quantum cryptography based on Bell's theorem. Phys. Rev. Lett. 67, 661 (1991).
\bibitem{RPAK} R. Prabhu, A. K. Pati, A. Sen(De), and U. Sen, Conditions for monogamy of quantum correlations: Greenberger-Horne-Zeilinger versus $W$ states. Phys. Rev. A 85, 040102(R) (2012).
\bibitem{GLGP} G. L. Giorgi, Monogamy properties of quantum and classical correlations. Phys. Rev. A 84, 054301 (2011).
\bibitem{SPAU}K. Salini, R. Prabhu, A. Sen(De), and U. Sen, Monotonically increasing functions of any quantum correlation can make all multiparty states monogamous. Ann. Phys 348, 297-305 (2014).
\bibitem{TJ} T. J. Osborne and F. Verstraete, General monogamy inequality for bipartite qubit entanglement. Phys. Rev. Lett. 96, 220503 (2006).
\bibitem{YKM} Y. K. Bai, M. Y. Ye, and Z. D. Wang, Entanglement monogamy and entanglement evolution in multipartite systems, Phys. Rev. A 80, 044301 (2009).
\bibitem{slh} Y. Luo, T. Tian, L. H. Shao, and Y. Li, General monogamy of Tsallis $q$-entropy entanglement in multiqubit systems, Phys. Rev. A 93, 062340 (2016).
\bibitem{AU} A. Uhlmann, Fidelity and concurrence of conjugated states.Phys. Rev. A 62, 032307 (2000).
\bibitem{PR} P. Rungta, V. Bu\u{z}ek, C. M. Caves, M. Hillery, and G. J. Milburn, Universal state inversion and concurrence in arbitrary dimensions. Phys. Rev. A 64, 042315 (2001).
\bibitem{SA} S. Albeverio and S. M. Fei, A note on invariants and entanglements. J. Opt. B: Quantum Semiclass Opt. 3, 223 (2001).
\bibitem{AA} A. Acin, A. Andrianov, L. Costa, E. Jane, J. I. Latorre, and R. Tarrach, Generalized schmidt decomposition and classification of three-quantum-bit states, Phys. Rev. Lett. 85, 1560 (2000).
\bibitem{XH} X. H. Gao and S. M. Fei, Estimation of concurrence for multipartite mixed states, Eur. Phys. J. Special Topics 159, 71 (2008).
\bibitem{jinzx} Z. X. Jin and S. M. Fei, Superactivation of monogamy relations for nonadditive quantum correlation measures. Phys. Rev. A, 99, 032343 (2019).
\bibitem{GRF} G. Vidal and R. F. Werner, Computable measure of entanglement. Phys. Rev. A. 65, 032314 (2002).
\end{thebibliography}
\end{document} |
\begin{document}
\parindent=0pt
\begin{abstract}
When Newton's method or Halley's method is used to approximate the $p$th root of $1-z$, a sequence
of rational functions is obtained. In this paper, a beautiful formula for these rational functions is proved in the square root case, using an interesting link to Chebyshev polynomials. It allows the determination of the sign of the coefficients of the power series expansion of these rational functions. This answers positively the square root case of a conjecture proposed by Guo (2010). \par
\end{abstract}
\goodbreak
\parindent=0pt
\maketitle
\section{\bf Introduction }\label{sec1}
\goodbreak
\parindent=0pt
\qquad There are several articles and research papers that deal with
the determination of the sign pattern of the coefficients of power series expansions (see \cite{kra},\cite{str} and the bibliography therein). In this work we consider this problem
in a particular case.
\goodbreak
\qquad Let $p$ be an integer greater than $1$, and $z$ any complex number. If we apply Newton's method to solve the equation $x^p=1-z$ starting from the initial value $1$, we obtain
the sequence of rational functions $(F_k)_{k\geq0}$ in the variable $z$ defined by the following iteration
\begin{equation*}
F_{k+1}(z)=\frac{1}{p}\left((p-1)F_k(z)+\frac{1-z}{F_k^{p-1}(z)}\right),\qquad F_0(z)\equiv 1.
\end{equation*}
\qquad Similarly, if we apply Halley's method to solve the equation $x^p=1-z$ starting from the same initial value $1$, we get
the sequence of rational functions $(G_k)_{k\geq0}$ in the variable $z$ defined by the iteration
\begin{equation*}
G_{k+1}(z)=\frac{(p-1)G_k^p(z)+(p+1)(1-z)}{(p+1)G_k^{p}(z)+(p-1)(1-z)}G_k(z),\qquad G_0(z)\equiv 1.
\end{equation*}
\qquad It was shown in \cite{guo} and more explicitly stated in \cite{kou} that both $F_k$ and
$G_k$ have power series expansions that are convergent in the neighbourhood of $z=0$, and that
these expansions start similar to the power series expansion of $z\mapsto\root{p}\of{1-z}$. More precisely,
the first $2^k$ coefficients of the power series expansion of $F_k$ are identical to the corresponding
coefficients in the power series expansion of $z\mapsto\root{p}\of{1-z}$, and
the same holds for the first $3^k$ coefficients of the power series expansion of $G_k$. It was conjectured
\cite[Conjecture 12]{guo}, that the coefficients of the power series expansions of $(F_k)_{k\geq2}$, $ (G_k)_{k\geq1}$ and $z\mapsto\root{p}\of{1-z}$ have the same sign pattern.
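The statements about the first $2^k$ (resp. $3^k$) coefficients can be checked in exact rational arithmetic. The following Python sketch (an illustration only, for $p=2$) iterates Newton's and Halley's maps on truncated power series and compares the coefficients with those of $\sqrt{1-z}$:

```python
from fractions import Fraction as Fr

N = 30  # truncation order for all series

def mul(f, g):
    """Product of two truncated power series."""
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(N)]

def div(f, g):
    """Truncated power-series division f/g, assuming g[0] != 0."""
    q = []
    for k in range(N):
        q.append((f[k] - sum(q[i] * g[k - i] for i in range(k))) / g[0])
    return q

omz = [Fr(1), Fr(-1)] + [Fr(0)] * (N - 2)       # the series 1 - z
root = [Fr(1)]                                   # Taylor series of sqrt(1-z)
for m in range(1, N):
    root.append(root[-1] * Fr(2 * m - 3, 2 * m))

F = [Fr(1)] + [Fr(0)] * (N - 1)
for k in range(1, 5):        # Newton: F_{k+1} = (F_k + (1-z)/F_k)/2
    F = [(F[i] + q) / 2 for i, q in enumerate(div(omz, F))]
    assert F[:2**k] == root[:2**k]               # first 2^k coefficients agree

G = [Fr(1)] + [Fr(0)] * (N - 1)
for k in range(1, 4):        # Halley: G_{k+1} = G(G^2+3(1-z))/(3G^2+(1-z))
    G2 = mul(G, G)
    num = mul(G, [G2[i] + 3 * omz[i] for i in range(N)])
    den = [3 * G2[i] + omz[i] for i in range(N)]
    G = div(num, den)
    assert G[:3**k] == root[:3**k]               # first 3^k coefficients agree
```

The agreement stops exactly at order $2^k$ (resp. $3^k$): the next coefficient already differs, in accordance with the remainder term computed in Corollary \ref{co2} below.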
\goodbreak
\qquad In this article, we will consider only the case $p=2$. In this case an unsuspected link to Chebyshev polynomials of the first and the second kind is discovered; it will allow us to find general formul\ae~
for $F_k$ and $G_k$ as sums of partial fractions. This enables us to prove Guo's conjecture in this particular case. Finally, we note that for $p\geq3$ the conjecture remains open.
\goodbreak
\qquad Before proceeding to our results, let us fix some notation and definitions. We will use freely
the properties of Chebyshev polynomials $(T_n)_{n\geq0}$ and
$(U_n)_{n\geq0}$ of the first and the second kind. In particular, they can be defined by the formul\ae :
\begin{equation}\label{E:eq1}
\forall\,\varphi\in\mathbb{R},\quad T_n(\cos\varphi)=\cos(n\varphi),\quad\hbox{and}\quad U_n=\frac{1}{n+1}T'_{n+1}.
\end{equation}
\qquad For these definitions and more on the properties of these polynomials we invite the reader
to consult \cite{mas} or any general treatise on special functions, for example \cite[Chapter 22]{abr}, \cite[\S 13.3--4]{arf}
or \cite[Part six]{pol}, and the references therein.
\goodbreak
\qquad Let $\Omega =\mathbb{C}\setminus[1,+\infty)$ (the complex plane cut along the real numbers greater than or equal to $1$). For $z\in \Omega$, we denote by $\sqrt{1-z}$ the square root of $1-z$ with positive real part. We know that $z\mapsto \sqrt{1-z}$
is holomorphic in $\Omega$. Moreover,
\begin{equation}\label{E:bin}
\forall\, z\in\adh{D(0,1)},\qquad \sqrt{1-z}=\sum_{m=0}^\infty\lambda_m z^m,
\end{equation}
where
\begin{equation}\label{E:bin1}
\forall\,m\geq0,\qquad\lambda_m=\frac{1}{(1-2m)2^{2m}}\binom{2m}{m}.
\end{equation}
\qquad The standard result is that \eqref{E:bin} holds for $z$ in the open unit disk, but the fact that for $n\geq1$ we have $\lambda_n=\mu_{n}-\mu_{n-1}<0$, where $\mu_n=2^{-2n}\binom{2n}{n}\sim1/\sqrt{\pi n}$, proves the uniform convergence of the series $\sum\lambda_mz^m$ in
the closed unit disk, and \eqref{E:bin} follows by Abel's Theorem since $z\mapsto \sqrt{1-z}$ can be continuously extended to $\adh{D(0,1)}$.
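The claims about the $\lambda_m$ are easy to verify in exact arithmetic. This Python sketch checks \eqref{E:bin1} against the standard Taylor recurrence for $\sqrt{1-z}$ and against the telescoping identity with the $\mu_n$:

```python
from fractions import Fraction as Fr
from math import comb

# lambda_m = binom(2m, m) / ((1 - 2m) * 4^m), and mu_n = binom(2n, n) / 4^n
lam = [Fr(comb(2 * m, m), (1 - 2 * m) * 4**m) for m in range(12)]
mu = [Fr(comb(2 * m, m), 4**m) for m in range(12)]

assert lam[:3] == [Fr(1), Fr(-1, 2), Fr(-1, 8)]
for m in range(1, 12):
    assert lam[m] == lam[m - 1] * Fr(2 * m - 3, 2 * m)  # Taylor recurrence
    assert lam[m] == mu[m] - mu[m - 1]                  # telescoping identity
    assert lam[m] < 0                                   # negative for m >= 1
```

In particular $\sum_{m\geq1}|\lambda_m|=\mu_0-\lim_n\mu_n=1$, which is the normal convergence used above.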
\goodbreak
\qquad Finally, we consider the sequences of
rational functions $(V_n)_{n\geq0}$, $(F_n)_{n\geq0}$ and $(G_n)_{n\geq0}$ defined by
\begin{align}
V_{n+1}(z)&=\frac{1-z+V_n(z)}{1+V_n(z)}, && V_0(z)\equiv 1,\label{E:eqv}\\
F_{n+1}(z)&=\frac{1}{2}\left(F_n(z)+\frac{1-z}{F_n (z)}\right), && F_0(z)\equiv 1,\label{E:eqf}\\
G_{n+1}(z)&=\frac{ G_n^3(z)+3(1-z)G_n(z)}{3G_n^{2}(z)+1-z}, && G_0(z)\equiv 1,\label{E:eqg}
\end{align}
\goodbreak
where $(F_n)_{n\geq0}$ and $(G_n)_{n\geq0}$ are Newton's and Halley's iterations mentioned before (in the case $p=2$). Since the sequence $(V_n)_{n\geq0}$ is simpler than the other two sequences, we
will prove our main result for the $V_n$'s, then we will deduce the corresponding properties for $F_n$ and $G_n$.
\goodbreak
\section{\bf The Main Results }\label{sec2}
\goodbreak
\qquad We start this section by proving a simple property that shows why it is sufficient to
study the sequence of $V_n$'s to deduce the properties of Newton's and Halley's
iterations, the $F_n$'s and $G_n$'s :
\goodbreak
\begin{lemma}\label{lm1}
The sequences $(V_n)_n$, $(F_n)_n$ and $(G_n)_n$ of the rational functions defined inductively by \eqref{E:eqv}, \eqref{E:eqf} and \eqref{E:eqg}, satisfy the following properties :
\begin{enumeratea}
\item For $z\in\Omega$ and $n\geq0$, we have :
\[
\frac{V_n(z)-\sqrt{1-z}}{V_n(z)+\sqrt{1-z}}
=\left(\frac{1-\sqrt{1-z}}{1+\sqrt{1-z}}\right)^{n+1},
\]\label{itlm11}
\item For $n\geq0$ we have $F_n=V_{2^n-1}$, and $G_n=V_{3^n-1}$.\label{itlm12}
\end{enumeratea}
\end{lemma}
\begin{proof}
First, let us suppose that $z=x\in(0,1)$. In this case, we see by induction
that all the terms of the sequence $(V_n(x))_{n\geq0}$ are well defined and positive,
and we have the following recurrence relation :
\begin{align*}
\frac{V_{n+1}(x)-\sqrt{1-x}}{V_{n+1}(x)+\sqrt{1-x}}&=
\frac{1-x+V_n(x)-\sqrt{1-x}(1+V_n(x))}{1-x+V_n(x)+\sqrt{1-x}(1+V_n(x))}\\
&=\frac{1-\sqrt{1-x}}{1+\sqrt{1-x}}\cdot
\frac{V_n(x)-\sqrt{1-x}}{V_n(x)+\sqrt{1-x}}.
\end{align*}
Therefore, using simple induction, we obtain
\begin{equation}\label{E:eq2}
\forall\,n\geq0,\qquad
\frac{V_n(x)-\sqrt{1-x}}{V_n(x)+\sqrt{1-x}}
=\left(\frac{1-\sqrt{1-x}}{1+\sqrt{1-x}}\right)^{n+1},
\end{equation}
Now, for a given $n\geq0$, let us define $R(z)$ and $L(z)$ by the formul\ae\ :
\[
L(z)=\frac{V_n(z)-\sqrt{1-z}}{V_n(z)+\sqrt{1-z}}\quad\hbox{and}\quad R(z)= \left(\frac{1-\sqrt{1-z}}{1+\sqrt{1-z}}\right)^{n+1}.
\]
Clearly, $R$ is analytic in $\Omega$ since $\Re(\sqrt{1-z})>0$ for $z\in\Omega$. On the other hand, if $V_n=A_n/B_n$ where $A_n$ and $B_n$ are two co-prime polynomials then
\[
L(z)=\frac{(A_n(z)-\sqrt{1-z}B_n(z))^2}{A_n^2(z)-(1-z)B_n^2(z)}.
\]
So $L$ is meromorphic with at most a finite number
of poles in $\Omega$. Using analyticity, we conclude from \eqref{E:eq2} that
$L(z)=R(z)$ for $z\in\Omega\setminus\mathcal{P}$ for some finite set $\mathcal{P}
\subset\Omega$, but this implies that the points in $\mathcal{P}$ are
removable singularities, and that $L$ is holomorphic and identical to $R$ in
$\Omega$. This concludes the proof of \itemref{itlm11}.
\goodbreak
\qquad To prove \itemref{itlm12}, consider $z=x\in(0,1)$, as before,
we see by induction
that all the terms of the sequences $(F_n(x))_{n\geq0}$ and $(G_n(x))_{n\geq0}$
are well-defined and positive. Also, we check easily, using
\eqref{E:eqf} and \eqref{E:eqg}, that the following recurrence relations
hold :
\begin{align*}
\frac{F_{n+1}(x)-\sqrt{1-x}}{F_{n+1}(x)+\sqrt{1-x}}&=
\left(\frac{1-\sqrt{1-x}}{1+\sqrt{1-x}}\right)^2\cdot
\frac{F_{n}(x)-\sqrt{1-x}}{F_{n}(x)+\sqrt{1-x}},\\
\noalign{\hbox{and}}\\
\frac{G_{n+1}(x)-\sqrt{1-x}}{G_{n+1}(x)+\sqrt{1-x}}&=
\left(\frac{1-\sqrt{1-x}}{1+\sqrt{1-x}}\right)^3\cdot
\frac{G_{n}(x)-\sqrt{1-x}}{G_{n}(x)+\sqrt{1-x}}.
\end{align*}
It follows that
\begin{align*}
\frac{F_{n}(x)-\sqrt{1-x}}{F_{n}(x)+\sqrt{1-x}}&=
\left(\frac{1-\sqrt{1-x}}{1+\sqrt{1-x}}\right)^{2^n}=
\frac{V_{2^n-1}(x)-\sqrt{1-x}}{V_{2^n-1}(x)+\sqrt{1-x}},\\
\noalign{\hbox{and}}\\
\frac{G_{n}(x)-\sqrt{1-x}}{G_{n}(x)+\sqrt{1-x}}&=
\left(\frac{1-\sqrt{1-x}}{1+\sqrt{1-x}}\right)^{3^n}=
\frac{V_{3^n-1}(x)-\sqrt{1-x}}{V_{3^n-1}(x)+\sqrt{1-x}}.
\end{align*}
Hence $F_n(x)=V_{2^n-1}(x)$ and $G_n(x)=V_{3^n-1}(x)$ for every
$x\in(0,1)$, and \itemref{itlm12} follows by analyticity.
\end{proof}
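Both parts of Lemma \ref{lm1} are easy to check numerically at sample points of $\Omega$. A Python sketch using complex arithmetic (the sample points are arbitrary; \texttt{cmath.sqrt} gives the branch with positive real part on $\Omega$):

```python
import cmath

def V(n, z):
    """n-th term of the recursion (4), evaluated at z."""
    v = 1.0 + 0j
    for _ in range(n):
        v = (1 - z + v) / (1 + v)
    return v

for z in (0.3, -2.0, 0.5 + 0.4j, -1 + 2j):
    s = cmath.sqrt(1 - z)          # principal branch: positive real part
    # part (a): the Moebius-transformed iterates form a geometric sequence
    for n in range(6):
        lhs = (V(n, z) - s) / (V(n, z) + s)
        rhs = ((1 - s) / (1 + s))**(n + 1)
        assert abs(lhs - rhs) < 1e-12
    # part (b): F_n = V_{2^n - 1} and G_n = V_{3^n - 1}
    F = G = 1.0 + 0j
    for n in range(1, 4):
        F = (F + (1 - z) / F) / 2
        G = G * (G**2 + 3 * (1 - z)) / (3 * G**2 + 1 - z)
        assert abs(F - V(2**n - 1, z)) < 1e-10
        assert abs(G - V(3**n - 1, z)) < 1e-10
```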
\qquad Lemma \ref{lm1} provides a simple proof of the following known result.
\goodbreak
\begin{corollary}\label{co1}
The sequences
$(V_n)_n$, $(F_n)_n$ and $(G_n)_n$ of the rational functions defined inductively by \eqref{E:eqv}, \eqref{E:eqf} and \eqref{E:eqg}, converge
uniformly on compact subsets of $\Omega$ to the function $z\mapsto\sqrt{1-z}$.
\end{corollary}
\begin{proof}
Indeed, this follows from Lemma \ref{lm1} and the fact that for every non-empty compact set $K\subset\Omega$ we have :
\[
\sup_{z\in K}\abs{\frac{1-\sqrt{1-z}}{1+\sqrt{1-z}}}<1,
\]
which is easy to prove.
\end{proof}
\goodbreak
\qquad One can also use Lemma \ref{lm1} to study $V_n$ in the neighbourhood of $z=0$ :
\goodbreak
\begin{corollary}\label{co2}
For every $n\geq0$, and for $z$ in the neighbourhood of $0$ we have
\begin{align}
V_n(z)&=\sqrt{1-z}+O(z^{n+1}),\label{E:eq3}\\
\noalign{\hbox{and}}
V_n(z)&=\sum_{m=0}^n\lambda_mz^m+O(z^{n+1}).\label{E:eq4}
\end{align}
where $\lambda_m$ is defined by \eqref{E:bin1}.
\end{corollary}
\begin{proof}
Indeed, using Lemma \ref{lm1} we have :
\begin{align*}
V_n(z)&=\sqrt{1-z}\cdot
\frac{(1+\sqrt{1-z})^{n+1}+(1-\sqrt{1-z})^{n+1}}{(1+\sqrt{1-z})^{n+1}-(1-\sqrt{1-z})^{n+1}},\\
&=\sqrt{1-z}+\frac{2\sqrt{1-z}}{(1+\sqrt{1-z})^{n+1}-(1-\sqrt{1-z})^{n+1}}\cdot (1-\sqrt{1-z})^{n+1},\\
&=\sqrt{1-z}+\frac{2\sqrt{1-z}}{(1+\sqrt{1-z})^{n+1}-(1-\sqrt{1-z})^{n+1}}\cdot \frac{z^{n+1}}{(1+\sqrt{1-z})^{n+1}},\\
&=\sqrt{1-z}+\frac{2\sqrt{1-z}}{(1+\sqrt{1-z})^{2(n+1)}-z^{n+1}}\cdot z^{n+1}.
\end{align*}
In particular, for $z$ in the neighbourhood of $0$, we have
$V_n(z)=\sqrt{1-z}+O(z^{n+1})$, which is \eqref{E:eq3}.
On the other hand, using \eqref{E:bin},
we obtain
\[
V_n(z)=\sum_{m=0}^n\lambda_mz^m+O(z^{n+1})
\]
which is \eqref{E:eq4}.
\end{proof}
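Corollary \ref{co2} can be confirmed in exact rational arithmetic: iterating \eqref{E:eqv} on truncated power series, the first $n+1$ coefficients of $V_n$ coincide with the $\lambda_m$. A Python sketch:

```python
from fractions import Fraction as Fr
from math import comb

N = 12

def div(f, g):
    """Truncated power-series division f/g, assuming g[0] != 0."""
    q = []
    for k in range(N):
        q.append((f[k] - sum(q[i] * g[k - i] for i in range(k))) / g[0])
    return q

lam = [Fr(comb(2 * m, m), (1 - 2 * m) * 4**m) for m in range(N)]

V = [Fr(1)] + [Fr(0)] * (N - 1)
for n in range(1, N):
    num = V[:]; num[0] += 1; num[1] -= 1     # series of 1 - z + V_{n-1}
    den = V[:]; den[0] += 1                  # series of 1 + V_{n-1}
    V = div(num, den)
    assert V[:n + 1] == lam[:n + 1]          # V_n = sqrt(1-z) + O(z^{n+1})
```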
\qquad Recall that we are interested in the sign pattern of the coefficients
of the power series expansion of $F_n$ and $G_n$, in the neighbourhood of $z=0$.
Lemma \ref{lm1} reduces the problem to finding the sign pattern of the coefficients
of the power series expansion of $V_n$. But $V_n$ is a rational function,
and a partial fraction decomposition would be helpful. The next theorem
is our main result :
\goodbreak
\begin{theorem}\label{th21}
Let $n$ be a positive integer, and let $V_n$ be the rational function defined by the recursion \eqref{E:eqv}. Then the partial fraction decomposition of $V_n$ is as follows :
\begin{align*}
V_1(z)&=1-\frac{z}{2},\\
V_n(z)&=1-\frac{z}{2}-\frac{z^2}{2(n+1)}\sum_{k=1}^{\floor{n/2}}\frac{
\sin^2(\frac{2\pi k}{n+1})}{1-z\cos^2(\frac{\pi k}{n+1})},&&\hbox{for $n\geq2$.}
\end{align*}
\end{theorem}
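Before turning to the proof, the formula can be checked numerically against the recursion \eqref{E:eqv}. A Python sketch (the sample points and the range of $n$ are arbitrary):

```python
from math import sin, cos, pi

def V_rec(n, z):
    """V_n(z) computed directly from the recursion (4)."""
    v = 1.0
    for _ in range(n):
        v = (1 - z + v) / (1 + v)
    return v

def V_pf(n, z):
    """The partial fraction formula of the theorem, for n >= 2."""
    th = pi / (n + 1)
    s = sum(sin(2 * k * th)**2 / (1 - z * cos(k * th)**2)
            for k in range(1, n // 2 + 1))
    return 1 - z / 2 - z**2 / (2 * (n + 1)) * s

for n in range(2, 10):
    for z in (-1.5, -0.3, 0.2, 0.9):
        assert abs(V_rec(n, z) - V_pf(n, z)) < 1e-12
```

For instance $V_2(z)=(4-3z)/(4-z)$, and the formula with $n=2$ gives $1-\frac z2-\frac{z^2}{2(4-z)}$, which is the same rational function.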
\goodbreak
\begin{proof}
Let us recall the fact that for $x>1$ Chebyshev polynomials of the first and the second kind satisfy the following identities :
\begin{align}
T_n(x) & =\frac{(x+\sqrt{x^2-1})^n+(x-\sqrt{x^2-1})^n}{2},\label{E:eqch1}\\
U_n(x) & =\frac{(x+\sqrt{x^2-1})^{n+1}-(x-\sqrt{x^2-1})^{n+1}}{2\sqrt{x^2-1}}.\label{E:eqch2}
\end{align}
Thus, for $0<x<1$ we have
\begin{align*}
x^nT_n\left(\frac1x\right) & =\frac{(1+\sqrt{1-x^2})^n+(1-\sqrt{1-x^2})^n}{2},\\
x^n U_n\left(\frac1x\right) & =\frac{(1+\sqrt{1-x^2})^{n+1}-(1-\sqrt{1-x^2})^{n+1}}{2\sqrt{1-x^2}}.
\end{align*}
So, using Lemma \ref{lm1}(a), we find that
\begin{equation}\label{E:eq5}
V_n(x^2)=\frac{x^{n+1}T_{n+1}(1/x)}{x^nU_{n}(1/x)}=
\frac{P_n(x)}{Q_n(x)},
\end{equation}
where $P_n(x)=x^{n+1}T_{n+1}(1/x)$ and $Q_n(x)=x^{n}U_{n}(1/x)$.
\goodbreak
It is known that $U_n$ has $n$ simple zeros, namely $\{\cos(k\theta_n):1\leq k\leq n\}$ where $\theta_n=\pi/(n+1)$. So, if we define
\begin{align*}
\Delta_{2m}&=\{1,2,\ldots,2m\}\\
\noalign{\hbox{and}}
\Delta_{2m-1}&=\{1,2,\ldots,2m-1\}\setminus\{m\},
\end{align*}
then the singular points of the rational function ${P_n}/{Q_n}$ are $\{\lambda_{k}:k\in\Delta_n\}$
with $\lambda_{k}=\sec(k\theta_n)$. Moreover, since
\begin{align*}
P_n(\lambda_k)&=\lambda_k^{n+1}T_{n+1}\left(\cos(k\theta_n)\right)\\
&=\lambda_k^{n+1}\cos(\pi k)=(-1)^k\lambda_k^{n+1}\ne0,
\end{align*}
we conclude that these singular points are, in fact, simple poles with residues given by
\begin{equation*}
\hbox{Res}\left(\frac{P_n}{Q_n},\lambda_k\right)
=-\lambda_k^3\frac{ T_{n+1}(1/\lambda_k)}{U_n^\prime(1/\lambda_k)}.
\end{equation*}
But, from the identity $U_n(\cos\varphi)=\sin((n+1)\varphi)/\sin\varphi$ we conclude that
\begin{equation*}
U_n^\prime(\cos\varphi)=\frac{\cos\varphi \sin((n+1)\varphi)}{\sin^3\varphi}
-(n+1)\frac{\cos((n+1)\varphi)}{\sin^2\varphi},
\end{equation*}
hence,
\begin{equation*}
U_n^\prime\left(\cos(k\theta_n)\right)=(n+1)\frac{(-1)^{k+1}}{\sin^2(k\theta_n)},
\end{equation*}
and finally,
\begin{equation}\label{E:eq6}
\hbox{Res}\left(\frac{P_n}{Q_n},\lambda_k\right)
=\frac{\lambda_k^3}{n+1}\sin^2(k\theta_n )
=\frac{\lambda_k}{n+1}\tan^2(k\theta_n ).
\end{equation}
From this we conclude that the rational function $R_n$ defined by
\begin{align}
R_n(x)&=\frac{P_n(x)}{Q_n(x)}-\frac{1}{n+1}\sum_{k\in\Delta_n}\frac{\lambda_k\tan^2(k\theta_n )}{x-\lambda_k}\notag\\
&=\frac{P_n(x)}{Q_n(x)}+\frac{1}{n+1}\sum_{k\in\Delta_n}\frac{\tan^2(k\theta_n )}{1-x\cos(k\theta_n )},
\label{E:eq7}
\end{align}
is, in fact, a polynomial, and $\deg R_n=\deg P_n-\deg Q_n\leq (n+1)-(n-1)=2$.
\goodbreak
Noting that $k\mapsto n+1-k$ is a permutation of $\Delta_n$ we conclude that
\begin{align*}
\sum_{k\in\Delta_n}\frac{\tan^2(k\theta_n )}{1-x\cos(k\theta_n )}&=
\sum_{k\in\Delta_n}\frac{\tan^2(k\theta_n )}{1+x\cos(k\theta_n )}\\
&=\frac{1}{2}\sum_{k\in\Delta_n}\left(\frac{\tan^2(k\theta_n )}{1+x\cos(k\theta_n )}+\frac{\tan^2(k\theta_n )}{1-x\cos(k\theta_n )}\right)\\
&=\sum_{k\in\Delta_n}\frac{\tan^2(k\theta_n )}{1-x^2\cos^2(k\theta_n )}
\end{align*}
and using the fact that
\begin{equation*}
\frac{1}{1-z \cos^2\varphi}=1+z\cos^2\varphi+\frac{z^2\cos^4\varphi}{1-z \cos^2\varphi},
\end{equation*}
we find
\begin{align*}
\sum_{k\in\Delta_n}\frac{\tan^2(k\theta_n )}{1-x\cos(k\theta_n )}
&=\sum_{k\in\Delta_n}\tan^2(k\theta_n )
+x^2\sum_{k\in\Delta_n}\sin^2(k\theta_n )\\
&\qquad+
x^4 \sum_{k=1}^n\frac{\cos^2(k\theta_n ) \sin^2(k\theta_n )}{1-x^2\cos^2(k\theta_n )}
\end{align*}
where we added a ``zero'' term to the last sum for odd $n$.
Combining this with \eqref{E:eq7},
we conclude that there exists a polynomial $S_n$ with $\deg S_n\leq 2$ such that
\begin{equation*}
S_n(x)=\frac{P_n(x)}{Q_n(x)}+\frac{x^4}{4(n+1)} \sum_{k=1}^n\frac{\sin^2(2k\theta_n )}{1-x^2\cos^2(k\theta_n )}.
\end{equation*}
Moreover, $S_n$ is even, since both $P_n$ and $Q_n$ are even. Thus, going back to \eqref{E:eq5} we conclude that there exist two constants $\alpha_n$ and $\beta_n$ such that
\begin{equation}\label{E:eq8}
V_n(z)=\alpha_n+\beta_n z
-\frac{z^2}{4(n+1)} \sum_{k=1}^n\frac{ \sin^2(2k\theta_n )}{1-z\cos^2(k\theta_n )}.
\end{equation}
But, from Corollary \ref{co2}, we also have $V_n(z)=1-\frac{1}{2}z+O(z^2)$ for $n\geq 1$, thus
$\alpha_n=1$ and $\beta_n=-1/2$.
\qquad Finally, noting that the terms corresponding to $k$ and $n+1-k$ in the sum \eqref{E:eq8}
are identical, we arrive at the desired formula.
This concludes the proof of Theorem \ref{th21}.
\end{proof}
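The decomposition of Theorem \ref{th21} can be checked numerically against the closed form of Lemma \ref{lm1} (a sketch; both sides are evaluated as floating-point functions of $z$ inside the unit disk, away from the poles):

```python
import math

def V_closed(n, z):
    # Closed form from Lemma lm1.
    w = math.sqrt(1.0 - z)
    p, q = (1 + w) ** (n + 1), (1 - w) ** (n + 1)
    return w * (p + q) / (p - q)

def V_partial(n, z):
    # Partial fraction decomposition of Theorem th21 (n >= 2).
    t = math.pi / (n + 1)
    s = sum(math.sin(2 * k * t) ** 2 / (1 - z * math.cos(k * t) ** 2)
            for k in range(1, n // 2 + 1))
    return 1 - z / 2 - z * z * s / (2 * (n + 1))

for n in range(2, 8):
    for z in (-0.9, -0.3, 0.4, 0.8):
        assert abs(V_closed(n, z) - V_partial(n, z)) < 1e-9
```

For instance, for $n=2$ both sides reduce to the rational function $(4-3z)/(4-z)$.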
\goodbreak
\qquad Theorem \ref{th21} allows us to obtain precise information about the power series
expansion of $V_n$ in the neighbourhood of $z=0$ :
\goodbreak
\begin{corollary}\label{co22}
Let $n$ be a positive integer greater than 1, and let $V_n$ be the rational function defined by \eqref{E:eqv}.
Then the radius of convergence of the power series expansion $\sum_{m=0}^\infty A_{m}^{(n)}z^m$ of $V_n$ in the neighbourhood of $z=0$ is $\sec^2(\frac{\pi}{n+1})>1$, and the coefficients $(A_{m}^{(n)})_{m\geq0}$ satisfy the following properties :
\begin{enumeratea}
\item For $0\leq m\leq n$ we have $A_{m}^{(n)}=\lambda_m$, where $\lambda_m$ is defined by
\eqref{E:bin1}.\label{coit221}
\item For $m>n$ we have $A_{m}^{(n)}<0$ and
\[
\sum_{m=n+1}^\infty\left(-A_{m}^{(n)}\right)=\frac{1}{2^{2n}}\binom{2n}{n}-\frac{1}{n+1}.
\]\label{coit222}
\end{enumeratea}
Moreover, for every $n\geq1$ and every $z$ in the closed unit disk $\adh{D(0,1)}$ we have
\[
\abs{V_n(z)-\sqrt{1-z}}\leq \frac{2}{\sqrt{\pi n}}\abs{z}^{n+1}.
\]
In particular, $(V_n)_{n\geq0}$ converges uniformly on $\adh{D(0,1)}$ to the function $z\mapsto\sqrt{1-z}$.
\end{corollary}
\goodbreak
\begin{proof}
Let us denote $\pi/(n+1)$ by $\theta_n$. By Theorem \ref{th21} the poles of $V_n$, for $n>1$, are $\{\sec^2(k\theta_n):1\leq k\leq\floor{n/2}\}$ and the nearest one to $0$ is $\sec^2(\theta_n)$. This proves the statement about the radius of convergence.
\goodbreak
\qquad Also, we have seen in Corollary \ref{co2}, that in the neighbourhood of $z=0$
we have
\[
V_n(z)=\sum_{m=0}^n\lambda_mz^m+O(z^{n+1}).
\]
This proves that for $0\leq m\leq n$ we have $A_m^{(n)}=\lambda_m$ which is \itemref{coit221}.
\qquad To prove \itemref{coit222}, we note that for $1\leq k\leq \floor{n/2}$, and $z\in D(0,\sec^2(\theta_n))$, we have
\[
\frac{1}{1-z\cos^2(k\theta_n)}=\sum_{m=0}^\infty\cos^{2m}(k\theta_n)z^m,
\]
hence
\[
V_n(z)=1-\frac{z}{2}-\sum_{m=0}^\infty\left(\frac{1}{2(n+1)}\sum_{k=1}^{\floor{n/2}}\cos^{2m}(k\theta_n)
\sin^2(2k\theta_n)\right)\,z^{m+2}.
\]
This gives the following alternative formul\ae~ for $A_{m}^{(n)}$ when $m\geq2$ :
\begin{align}
A_{m}^{(n)}&=-\frac{2}{n+1}\sum_{k=1}^{\floor{n/2}}\cos^{2(m-1)}(k\theta_n)
\sin^2(k\theta_n),\notag\\
&=-\frac{1}{n+1}\sum_{k=1}^{n}\cos^{2(m-1)}(k\theta_n)
\sin^2(k\theta_n),\label{E:eq9}
\end{align}
where \eqref{E:eq9} is also valid for $m=1$. This proves in particular that $A_{m}^{(n)}<0$ for $m>n$.
\goodbreak
\qquad Since the radius of convergence of $\sum_{m=0}^\infty A_m^{(n)}z^m$ is greater than $1$
we conclude that
\begin{equation}\label{E:eq10}
\sum_{m=0}^\infty A_m^{(n)}=V_n(1),
\end{equation}
but we can prove by induction from \eqref{E:eqv} that $V_n(1)=1/(n+1)$. Therefore, \eqref{E:eq10} implies that, for $n\geq 1$, we have
\begin{align*}
\sum_{m=n+1}^\infty (-A_m^{(n)})&=-\frac{1}{n+1}+\sum_{m=0}^n A_m^{(n)}=-\frac{1}{n+1}+1+\sum_{m=1}^n \lambda_m,\\
&=-\frac{1}{n+1}+1-\sum_{m=1}^n \left(\mu_{m-1}-\mu_m\right),\\
&=\mu_n-\frac{1}{n+1},
\end{align*}
where $\mu_m=2^{-2m}\binom{2m}{m}$, which is \itemref{coit222}.
\qquad For $n\geq 1$ and $z\in\adh{D(0,1)}$ we have
\begin{align}
\abs{V_n(z)-\sum_{m=0}^n\lambda_m z^m}&\leq
\abs{\sum_{m=n+1}^\infty A_{m}^{(n)} z^m}\notag\\
&\leq\abs{z}^{n+1}\cdot\sum_{m=n+1}^\infty (-A_{m}^{(n)})\notag\\
&\leq\left(\frac{1}{2^{2n}}\binom{2n}{n}-\frac{1}{n+1}\right)\abs{z}^{n+1}.\label{E:eq11}
\end{align}
On the other hand, using \eqref{E:bin} we see that for $n\geq 1$ and $z\in\adh{D(0,1)}$ we have
\begin{align}
\abs{\sqrt{1-z}-\sum_{m=0}^n\lambda_m z^m}&\leq
\abs{\sum_{m=n+1}^\infty \lambda_m z^m}\notag\\
&\leq\abs{z}^{n+1}\cdot \sum_{m=n+1}^\infty (\mu_{m-1}-\mu_{m})\notag\\
&\leq\frac{1}{2^{2n}}\binom{2n}{n}\abs{z}^{n+1}.\label{E:eq12}
\end{align}
Combining \eqref{E:eq11} and \eqref{E:eq12}, and noting that $2^{-2n}\binom{2n}{n}\leq 1/\sqrt{\pi n}$ we obtain
\[
\abs{V_n(z)-\sqrt{1-z}}\leq \frac{2}{\sqrt{\pi n}}\abs{z}^{n+1},
\]
which is the desired conclusion.
\end{proof}
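Summing the geometric tail of \eqref{E:eq9} over $m>n$ for each $k$ gives $\sum_{m=n+1}^\infty(-A_m^{(n)})=\frac{1}{n+1}\sum_{k=1}^n\cos^{2n}(k\theta_n)$, so \itemref{coit222} amounts to a trigonometric identity that can be checked numerically (a sketch):

```python
import math
from math import comb

for n in range(1, 12):
    t = math.pi / (n + 1)
    # Tail of eq9 summed per k:
    # sum_{m>n} sin^2 * cos^(2(m-1)) = sin^2 * cos^(2n)/(1-cos^2) = cos^(2n).
    tail = sum(math.cos(k * t) ** (2 * n) for k in range(1, n + 1)) / (n + 1)
    assert abs(tail - (comb(2 * n, n) / 4 ** n - 1 / (n + 1))) < 1e-12
```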
\goodbreak
\qquad The following corollary is an immediate consequence of Lemma \ref{lm1} and Corollary \ref{co22}. It proves
that Conjecture 12 in \cite{guo} is correct in the case of square roots.
\goodbreak
\begin{corollary}\label{co3} The following properties of $(F_n(z))_{n\geq0}$ and
$(G_n(z))_{n\geq0}$, the Newton and Halley approximants of $\sqrt{1-z}$
defined by the recurrences \eqref{E:eqf} and \eqref{E:eqg}, hold :
\begin{enumeratea}
\item For $n>1$, the rational function $F_n(z)$ has a power series expansion $1-\sum_{m=1}^\infty B_{m}^{(n)}z^m$ with $\sec^2(2^{-n}\pi)$ as
radius of convergence, and $B_{m}^{(n)}>0$ for every $m\geq1$. Moreover,
\[
\forall\,z\in\adh{D(0,1)},\qquad\abs{F_n(z)-\sqrt{1-z}}\leq\frac{2}{\sqrt{\pi}}\cdot\frac{\abs{z}^{2^n}}{\sqrt{2^{n}-1}}.
\]
\item For $n\geq1$, the rational function $G_n(z)$ has a power series expansion $1-\sum_{m=1}^\infty C_{m}^{(n)}z^m$ with $\sec^2(3^{-n}\pi)$ as radius of convergence, and $C_{m}^{(n)}>0$ for every $m\geq1$. Moreover,
\[
\forall\,z\in\adh{D(0,1)},\qquad\abs{G_n(z)-\sqrt{1-z}}\leq\frac{2}{\sqrt{\pi}}\cdot\frac{\abs{z}^{3^n}}{\sqrt{3^{n}-1}}.
\]
\end{enumeratea}
\end{corollary}
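The positivity $B_m^{(n)}>0$ can be checked in exact rational arithmetic (a sketch, assuming \eqref{E:eqf} is the standard Newton iteration $F_{n+1}=\frac12\left(F_n+(1-z)/F_n\right)$ with seed $F_0=1$, which reproduces $F_1=V_1=1-z/2$):

```python
from fractions import Fraction as Fr

K = 30  # truncation order of the power series

def series_div(a, b):
    # Coefficients of a(z)/b(z) mod z^K (requires b[0] != 0), exact rationals.
    q = [Fr(0)] * K
    for m in range(K):
        q[m] = (a[m] - sum(q[j] * b[m - j] for j in range(m))) / b[0]
    return q

one_minus_z = [Fr(1), Fr(-1)] + [Fr(0)] * (K - 2)
F = [Fr(1)] + [Fr(0)] * (K - 1)                    # F_0 = 1 (assumed seed)
for n in range(1, 5):
    inv = series_div(one_minus_z, F)               # (1-z)/F_n
    F = [(F[m] + inv[m]) / 2 for m in range(K)]    # Newton step
    if n > 1:
        # Corollary (a): F_n = 1 - sum_m B_m z^m with B_m > 0 for m >= 1.
        assert F[0] == 1 and all(c < 0 for c in F[1:])
```

The first $2^n-1$ coefficients agree with those of $\sqrt{1-z}$, e.g. the final $F$ has $F[1]=-1/2$ and $F[2]=-1/8$.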
\goodbreak
\textit{Remark.} Guo's conjecture \cite[Conjecture 12]{guo} in the case of square roots is just the fact that $B_m^{(n)}>0$ for $n>1$, $m\geq1$, and that
$C_m^{(n)}>0$ for $n,~m\geq1$. The case of $p$th roots for $p\geq3$ remains open.
\goodbreak
\textbf{Conclusion.} In this paper, we presented a link between the rational functions approximating $z\mapsto\sqrt{1-z}$
obtained from the application of Newton's and Halley's methods, and Chebyshev polynomials. This link was used
to find the partial fraction decomposition of these rational functions, and the sign pattern of the coefficients of their power series
expansions was obtained. Finally, this was used to prove the square root case of Conjecture 12 in \cite{guo}, and to give estimates of the approximation error for $z$ in the closed unit disk.
\end{document} |
\begin{document}
\title{Realization of the $1\rightarrow 3$ optimal phase-covariant quantum cloning
machine}
\author{Fabio Sciarrino and Francesco De Martini}
\address{Dipartimento di Fisica and \\
Istituto Nazionale per la Fisica della Materia\\
Universit\`{a} di Roma ''La Sapienza'', Roma, 00185 - Italy}
\maketitle
\begin{abstract}
The $1\rightarrow 3$ quantum phase covariant cloning, which optimally clones
qubits belonging to the equatorial plane of the Bloch sphere, achieves the
fidelity ${\cal F}_{cov}^{1\rightarrow 3}=0.833$, larger than for the $
1\rightarrow 3$ universal cloning ${\cal F}_{univ}^{1\rightarrow 3}=0.778$.
We show how the $1\rightarrow 3$ phase covariant cloning can be implemented
by a smart modification of the standard {\it universal} quantum machine by a
projection of the output states over the symmetric subspace. A complete
experimental realization of the protocol for polarization encoded qubits
based on non-linear and linear methods will be discussed.
\end{abstract}
\pacs{03.67.-a, 03.65.-w, 42.50.-p}
In the last years a great deal of effort has been devoted to the
realization of the optimal approximations to the quantum cloning and
flipping operations over an unknown qubit $\left| \phi \right\rangle $. Even
if these two processes are unrealizable in their exact forms \cite{Woot82},
\cite{Bech99}, they can be optimally approximated by the corresponding
universal machines, i.e., by the universal quantum cloning machine (UQCM)
and the universal-NOT (U-NOT) gate \cite{Buze96}. The optimal quantum
cloning machine has been experimentally realized following several
approaches, i.e. by exploiting the process of stimulated emission in a
quantum-injected optical parametric amplifier (QI-OPA) \cite
{DeMa98,DeMa02,Lama02,Fase02}, by a quantum network \cite{Cumm02} and by
acting with projective operators over the symmetric subspaces of many qubits
\cite{Ricc04,Irvi04}. The $N\rightarrow M$ UQCM transforms $N$ input qubits
in the state $\left| \phi \right\rangle $ into $M$ entangled output qubits
in the mixed state $\rho _{out}.$ The quality of the resulting copies is
quantified by the fidelity parameter ${\cal F}_{univ}^{N\rightarrow
M}=\left\langle \phi \right| \rho _{out}\left| \phi \right\rangle =\frac{
N+1+\beta }{N+2}$ with $\beta =\frac{N}{M}\leq 1.$
Not only is the perfect cloning of an unknown qubit forbidden, but so is the
perfect cloning of subsets containing non-orthogonal states. This no-go theorem
ensures the security of cryptographic protocols such as $BB84$ \cite{Gisi02}.
Recently {\it state dependent} cloning machines have been investigated that
are optimal with respect to a given ensemble \cite{Brus00}. The partial a-priori
knowledge of the state allows one to reach a higher fidelity than for the
universal cloning. In particular the $N\rightarrow M$ {\it phase-covariant
quantum cloning machine} (PQCM) considers the cloning of $N$ into $M$ output
qubits, where the input ones belong to the equatorial plane of the
corresponding Poincar\'e sphere, i.e. expressed by: $\left| \phi
\right\rangle =2^{-1/2}\left( \left| 0\right\rangle +e^{i\phi }\left|
1\right\rangle \right) $. The values of the optimal fidelities ${\cal F}
_{cov}^{N\rightarrow M}$ for this machine have been found \cite{DAri03}. In
the present article we will restrict ourselves to the case in which $N=1.$
For $M$ assuming odd values it is found ${\cal F}_{cov}^{1\rightarrow M}=
{\frac14}
\left( 3+M^{-1}\right) \;$while in the case of even $M-$values ${\cal F}
_{cov}^{1\rightarrow M}=
{\frac12}
\left( 1+
{\frac12}
\sqrt{1+2M^{-1}}\right) $. In particular we have ${\cal F}
_{cov}^{1\rightarrow 2}=0.854$ to be compared with ${\cal F}
_{univ}^{1\rightarrow 2}=0.833$ and ${\cal F}_{cov}^{1\rightarrow 3}=0.833$
with: ${\cal F}_{univ}^{1\rightarrow 3}=0.778$.
It is worthwhile to highlight the connections existing between the cloning
processes and the theory of quantum measurement \cite{Brus98}. The concept
of universal quantum cloning is indeed related to the problem of optimal
quantum state estimation \cite{Mass95} since for $M\rightarrow \infty ,$ $
{\cal F}_{univ}^{N\rightarrow M}\rightarrow {\cal F}_{estim}^{N}=\frac{N+1}{
N+2}$ where ${\cal F}_{estim}^{N}$ is the optimal fidelity for the state
estimation of any ensemble of $N$ unknown, identically prepared qubits.
Likewise, the phase-covariant cloning has a connection with the estimation
of an equatorial qubit, that is, with the problem of finding the optimal
strategy to estimate the value of the phase $\phi $ \cite{Hole82}, \cite
{Derk98}$.$ Precisely, the optimal strategy consists of a POVM corresponding
to a Von Neumann measurement of $N$ input qubits characterized by a set of $
N+1$ orthogonal projectors and achieves the fidelity ${\cal F}_{phase}^{N}\ $
\cite{Derk98}. In general for $M\rightarrow \infty ,$ ${\cal F}
_{cov}^{N\rightarrow M}\rightarrow {\cal F}_{phase}^{N}$. For $N=1$ one
finds: ${\cal F}_{cov}^{1\rightarrow M}={\cal F}_{phase}^{1}+\frac{1}{4M}$
with ${\cal F}_{phase}^{1}=3/4.$
To our knowledge, no PQCM device has been implemented experimentally in the
domain of Quantum Optics \cite{Du03,Fiur03}. In the present work we report
the implementation of a $1\rightarrow 3$ PQCM by adopting a modified
standard $1\rightarrow 2$ UQCM and by further projecting the output qubits
over the symmetric subspace \cite{DeMa02,Ricc04}. Let the state of the input
qubit be expressed by: $\left| \phi \right\rangle _{S}=\alpha \left|
0\right\rangle _{S}+\beta \left| 1\right\rangle _{S}$ with {\it real}
parameters $\alpha $ and $\beta $ and $\alpha ^{2}+\beta ^{2}=1$. The output
state of the $1\rightarrow 2$ UQCM device reads:
\begin{equation}
\left| \Sigma \right\rangle _{SAB}=\sqrt{\frac{2}{3}}\left| \phi
\right\rangle _{S}\left| \phi \right\rangle _{A}\left| \phi ^{\perp
}\right\rangle _{B}-\frac{1}{\sqrt{6}}\left( \left| \phi \right\rangle
_{S}\left| \phi ^{\perp }\right\rangle _{A}+\left| \phi ^{\perp
}\right\rangle _{S}\left| \phi \right\rangle _{A}\right) \left| \phi
\right\rangle _{B}
\end{equation}
The qubits $S$ and $A$ are the optimal cloned qubits while the qubit $B$ is
the optimally flipped one. We perform the operation $U_{B}=\sigma _{Y}$ on
the qubit $B.$ This local flipping transformation of $\left| \phi
\right\rangle _{B}$\ leads to: $\left| \Upsilon \right\rangle _{SAB}=({\Bbb I
}_{S}\otimes {\Bbb I}_{A}\otimes U_{B})\left| \Sigma \right\rangle _{SAB}=
\sqrt{\frac{2}{3}}\left| \phi \right\rangle _{S}\left| \phi \right\rangle
_{A}\left| \phi \right\rangle _{B}-\frac{1}{\sqrt{6}}\left( \left| \phi
\right\rangle _{S}\left| \phi ^{\perp }\right\rangle _{A}+\left| \phi
^{\perp }\right\rangle _{S}\left| \phi \right\rangle _{A}\right) \left| \phi
^{\perp }\right\rangle _{B}$. By this non-universal cloning process three
{\it asymmetric} copies have been obtained: two clones (qubits $S$ and $A)$
with fidelity $5/6$, and a third one (qubit $B$) with fidelity $2/3$. We may
now project $S,$ $A$ and $B$ over the symmetric subspace and obtain three
symmetric clones with a higher average fidelity. The symmetrization operator
$\Pi _{sym}^{SAB}$ reads as $\Pi _{sym}^{SAB}=\left| \Pi _{1}\right\rangle
\left\langle \Pi _{1}\right| +\left| \Pi _{2}\right\rangle \left\langle \Pi
_{2}\right| +\left| \Pi _{3}\right\rangle \left\langle \Pi _{3}\right|
+\left| \Pi _{4}\right\rangle \left\langle \Pi _{4}\right| $ where $\left|
\Pi _{1}\right\rangle =\left| \phi \right\rangle _{S}\left| \phi
\right\rangle _{A}\left| \phi \right\rangle _{B}$, $\left| \Pi
_{2}\right\rangle =\left| \phi ^{\perp }\right\rangle _{S}\left| \phi
^{\perp }\right\rangle _{A}\left| \phi ^{\perp }\right\rangle _{B}$, $\left|
\Pi _{3}\right\rangle =\frac{1}{\sqrt{3}}\left( \left| \phi \right\rangle
_{S}\left| \phi ^{\perp }\right\rangle _{A}\left| \phi ^{\perp
}\right\rangle _{B}+\left| \phi ^{\perp }\right\rangle _{S}\left| \phi
\right\rangle _{A}\left| \phi ^{\perp }\right\rangle _{B}+\left| \phi
^{\perp }\right\rangle _{S}\left| \phi ^{\perp }\right\rangle _{A}\left|
\phi \right\rangle _{B}\right) $ and $\left| \Pi _{4}\right\rangle =\frac{1}{
\sqrt{3}}\left( \left| \phi \right\rangle _{S}\left| \phi \right\rangle
_{A}\left| \phi ^{\perp }\right\rangle _{B}+\left| \phi ^{\perp
}\right\rangle _{S}\left| \phi \right\rangle _{A}\left| \phi \right\rangle
_{B}+\left| \phi \right\rangle _{S}\left| \phi ^{\perp }\right\rangle
_{A}\left| \phi \right\rangle _{B}\right) .$ The symmetric subspace has
dimension 4 since three qubits are involved. The probability of success of
the projection is equal to $\frac{8}{9}$. The normalized output state $
\left| \xi \right\rangle _{SAB}=\Pi _{sym}^{SAB}\left| \Upsilon
\right\rangle _{SAB}$ is
\begin{equation}
\left| \xi \right\rangle _{SAB}=\frac{\sqrt{3}}{2}\left| \phi \right\rangle
_{S}\left| \phi \right\rangle _{A}\left| \phi \right\rangle _{B}-\frac{1}{2
\sqrt{3}}\left( \left| \phi \right\rangle _{S}\left| \phi ^{\perp
}\right\rangle _{A}\left| \phi ^{\perp }\right\rangle _{B}+\left| \phi
^{\perp }\right\rangle _{S}\left| \phi \right\rangle _{A}\left| \phi ^{\perp
}\right\rangle _{B}+\left| \phi ^{\perp }\right\rangle _{S}\left| \phi
^{\perp }\right\rangle _{A}\left| \phi \right\rangle _{B}\right)
\label{outputPQCM}
\end{equation}
Let us now estimate the output density matrices of the qubits $S,$ $A$ and $
B $
\begin{equation}
\rho _{S}=\rho _{A}=\rho _{B}=\frac{5}{6}\left| \phi \right\rangle
\left\langle \phi \right| +\frac{1}{6}\left| \phi ^{\perp }\right\rangle
\left\langle \phi ^{\perp }\right| \label{outputdensitymatrices}
\end{equation}
This leads to the fidelity ${\cal F}_{cov}^{1\rightarrow 3}=5/6$ equal to
the optimal one \cite{Brus00,DAri03}.
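This calculation can be reproduced numerically (a sketch in the computational basis $\left| \phi \right\rangle \mapsto 0$, $\left| \phi ^{\perp }\right\rangle \mapsto 1$; the symmetric projector is built as the average of the six qubit permutations):

```python
import itertools
import numpy as np

# |Upsilon> in qubit order (S, A, B); |phi> -> 0, |phi_perp> -> 1.
ups = np.zeros(8)
ups[0b000] = np.sqrt(2 / 3)
ups[0b011] = ups[0b101] = -1 / np.sqrt(6)

# Projector onto the symmetric subspace: average of all 6 qubit permutations.
P = np.zeros((8, 8))
for perm in itertools.permutations(range(3)):
    for b in range(8):
        bits = [(b >> (2 - i)) & 1 for i in range(3)]
        pb = sum(bits[perm[i]] << (2 - i) for i in range(3))
        P[pb, b] += 1 / 6

xi = P @ ups
p_success = xi @ xi            # success probability of the projection: 8/9
xi = xi / np.sqrt(p_success)

# Reduced state of qubit S; fidelity = <phi| rho_S |phi>.
rho = np.outer(xi, xi).reshape(2, 2, 2, 2, 2, 2)
rho_S = np.einsum('iabjab->ij', rho)
print(p_success, rho_S[0, 0])  # approximately 8/9 and 5/6
```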
By applying a different unitary operator $U_{B}$ to the qubit $B$ we can
implement the phase-covariant cloning for different equatorial planes.
Interestingly, note that by this symmetrization technique a depolarizing
channel $E_{dep}(\rho )=
{\frac14}
\left( \rho +\sigma _{X}\rho \sigma _{X}+\sigma _{Y}\rho \sigma _{Y}+\sigma
_{Z}\rho \sigma _{Z}\right) $ on channel $B$ immediately transforms the
non-universal phase covariant cloning into the {\it universal} $1\rightarrow
3$ UQCM with the overall fidelity ${\cal F}_{univ}^{1\rightarrow 3}=$ $7/9$.
This represents a relevant new proposal to be implemented within the $
1\longrightarrow 2$ UQCM QI-OPA device or other $1\longrightarrow 2$
U-cloning schemes \cite{DeMa02,Ricc05}. Let us return to the $1\rightarrow 3$
PQCM. In the present scheme the input qubit, to be injected into a QI-OPA
over the spatial mode $k_{1}$ with wavelength (wl) $\lambda $, is encoded
into the polarization $(\overrightarrow{\pi })$ state $\left| \phi
\right\rangle _{in}=\alpha \left| H\right\rangle +\beta \left|
V\right\rangle $ of a single photon, where $\left| H\right\rangle $ and $
\left| V\right\rangle $ stand for horizontal and vertical polarization:
Figure 1. The QI-OPA consisted of a nonlinear (NL) BBO ($\beta $
-barium-borate) crystal, cut for Type II phase matching and excited by a sequence of
UV\ mode-locked laser pulses having wl. $\lambda _{p}$. The relevant modes
of the NL 3-wave interaction driven by the UV pulses associated with mode $
k_{p}$ were the two spatial modes with wave-vector (wv) $k_{i}$, $i=1,2$,
each one supporting the two horizontal and vertical polarizations of the
interacting photons. The QI-OPA was $\lambda $-degenerate, i.e. the
interacting photons had the same wl's $\lambda =2\lambda _{p}=795nm$. The
NL\ crystal orientation was set as to realize the insensitivity of the
amplification quantum efficiency to any input state $\left| \phi
\right\rangle _{in}$, i.e. the {\it universality} (U) of the ``cloning
machine'' and of the U-NOT gate \cite{DeMa02}. This key property is
assured by the squeezing hamiltonian $\widehat{H}_{int}=i\chi \hbar \left(
\widehat{a}_{1\phi }^{\dagger }\widehat{a}_{2\phi \perp }^{\dagger }-
\widehat{a}_{1\phi \perp }^{\dagger }\widehat{a}_{2\phi }^{\dagger }\right)
+h.c.$ where the field operator $\widehat{a}_{ij}^{\dagger }$ refers to the
state of polarization $j$ $(j=\phi ,\phi ^{\perp })$, realized on the two
interacting spatial modes $k_{i}$ $(i=1,2)$.
Let us consider the injected photon in the mode $k_{1}$ to have any linear
polarization $\overrightarrow{\pi }{\bf =}\phi $. We express this $
\overrightarrow{\pi }$-state as $\widehat{a}_{1\phi }^{\dagger }\left|
0,0\right\rangle _{k_{1}}=\left| 1,0\right\rangle _{k_{1}}$ where $\left|
m,n\right\rangle _{k_{1}}$ represents a product state with $m$ photons of
the mode $k_{1}$ with polarization $\phi $, and $n$ photons with
polarization $\phi ^{\perp }$. Assume the input mode $k_{2}$ to be in the
{\it vacuum state }$\left| 0,0\right\rangle _{k_{2}}$. The initial $
\overrightarrow{\pi }$-state of modes $k_{i}$ reads $\left| \phi
\right\rangle _{in}=\left| 1,0\right\rangle _{k_{1}}\left| 0,0\right\rangle
_{k_{2}}$ and evolves according to the unitary operator $\widehat{{\bf U}}
\equiv \exp \left( -i\frac{\widehat{H}_{int}t}{\hbar }\right) $. The
1st-order contribution of the output state of the {\it QI-OPA} is $\sqrt{
\frac{2}{3}}\left| 2,0\right\rangle _{k1}\left| 0,1\right\rangle _{k2}-\sqrt{
\frac{1}{3}}\left| 1,1\right\rangle _{k1}\left| 1,0\right\rangle _{k2}.$ The
above linearization procedure is justified here by the small experimental
value of the {\it gain }$g\equiv \chi t\approx 0.1$. In this context, the
state $\left| 2,0\right\rangle _{k_{1}},$ expressing two photons of the $
\phi $ mode $k_{1}$ in the $\overrightarrow{\pi }$-state $\phi ,$
corresponds to the state $\left| \phi \phi \right\rangle $ expressed by the
general theory and implies the $M=2$ cloning of the input $N=1$ qubit.
Contextually with the realization of cloning on mode $k_{1}$,\ the vector $
\left| 0,1\right\rangle _{k_{2}}$ expresses the single photon state on mode $
k_{2}$ with polarization $\phi ^{\perp }$, i.e. the {\it flipped} version of
the input qubit. In summary, the qubits $S$ and $A$ are realized by two
single photons propagating along mode $k_{1}$ while the qubit $B$
corresponds to the $\overrightarrow{\pi }$-state of the photon on mode $
k_{2} $.
The $U_{B}=\sigma _{Y}$ flipping operation on the output mode $k_{2}$,
implemented by means of two $\lambda /2$ waveplates, transformed the QI-OPA$
\;$output state into$:$ $\left| \Upsilon \right\rangle _{SAB}$=$\sqrt{\frac{2
}{3}}\left| 2,0\right\rangle _{k1}\left| 1,0\right\rangle _{k2}-\sqrt{\frac{1
}{3}}\left| 1,1\right\rangle _{k1}\left| 0,1\right\rangle _{k2}$. The
physical implementation of the projector $\Pi _{sym}^{SAB}$ on the three
photons $\overrightarrow{\pi }$-states was carried out by linearly
superimposing the modes $k_{1}$ and $k_{2}$ on the $50:50$ beam-splitter $
BS_{A}$ and then by selecting the case in which the 3 photons \ emerged from
$BS_{A}$ on the same output mode $k_{3}$ (or, alternatively on $k_{4}$) \cite
{Ricc04}. The $BS_{A}$ input-output mode relations are expressed by the
field operators: $\widehat{a}_{1j}^{\dagger }$= $2^{-1/2}(\widehat{a}
_{3j}^{\dagger }+i\widehat{a}_{4j}^{\dagger })$; $\widehat{a}_{2j}^{\dagger
} $= $2^{-1/2}(i\widehat{a}_{3j}^{\dagger }+\widehat{a}_{4j}^{\dagger })$
where the operator $\widehat{a}_{ij}^{\dagger }$ refers to the mode $k_{i}$
with polarization $j$. The input state of $BS_{A}$ can be re-written in the
following form $\frac{1}{\sqrt{3}}\left( \widehat{a}_{1\phi }^{\dagger 2}
\widehat{a}_{2\phi }^{\dagger }-\widehat{a}_{1\phi }^{\dagger }\widehat{a}
_{1\phi \perp }^{\dagger }\widehat{a}_{2\phi \perp }^{\dagger }\right)
\left| 0,0\right\rangle _{k1}\left| 0,0\right\rangle _{k2}.$ By adopting the
previous relations and by considering the case in which 3 photons emerge
over the mode $k_{3},$ the output state is found to be $\frac{1}{2\sqrt{2}}(
\widehat{a}_{3\phi }^{\dagger 3}-\widehat{a}_{3\phi }^{\dagger }\widehat{a}
_{3\phi \perp }^{\dagger 2})\left| 0,0\right\rangle _{k3}=\frac{\sqrt{3}}{2}
\left| 3,0\right\rangle _{k3}-\frac{1}{2}\left| 1,2\right\rangle _{k3}$. The
output fidelity is ${\cal F}_{cov}^{1\rightarrow 3}=\frac{5}{6}.$
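The beam-splitter algebra above can be checked mechanically (a sketch: creation-operator monomials are expanded through the $BS_{A}$ relations, the all-$k_{3}$ terms are kept, and Fock amplitudes carry the usual $\sqrt{n!}$ factors):

```python
from itertools import product
from math import factorial, sqrt

# BS_A relations: a1j -> (a3j + i a4j)/sqrt(2), a2j -> (i a3j + a4j)/sqrt(2).
def bs(mode, pol):
    s = 2 ** -0.5
    if mode == 1:
        return [((0, pol), s), ((1, pol), 1j * s)]
    return [((0, pol), 1j * s), ((1, pol), s)]

def expand(factors, weight):
    # Expand a product of transformed creation operators into monomials,
    # keyed by photon numbers (n3_phi, n3_perp, n4_phi, n4_perp).
    poly = {}
    for combo in product(*factors):
        n, amp = [0, 0, 0, 0], weight
        for (m, p), a in combo:
            n[2 * m + p] += 1
            amp *= a
        poly[tuple(n)] = poly.get(tuple(n), 0) + amp
    return poly

# Input of BS_A: (1/sqrt(3)) (a1phi^2 a2phi - a1phi a1perp a2perp) |0,0>.
t1 = expand([bs(1, 0), bs(1, 0), bs(2, 0)], 1 / sqrt(3))
t2 = expand([bs(1, 0), bs(1, 1), bs(2, 1)], -1 / sqrt(3))
poly = {k: t1.get(k, 0) + t2.get(k, 0) for k in set(t1) | set(t2)}

# Keep the all-k3 terms; a^dag^n |0> = sqrt(n!) |n>.
amp = {k: v * sqrt(factorial(k[0]) * factorial(k[1]))
       for k, v in poly.items() if k[2] == k[3] == 0}
p3 = sum(abs(a) ** 2 for a in amp.values())   # 3 photons on k3: 1/3
fid = (abs(amp[3, 0, 0, 0]) ** 2
       + abs(amp[1, 2, 0, 0]) ** 2 / 3) / p3  # per-clone fidelity: 5/6
```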
Interestingly, the same overall state evolution can also be obtained, with
no need of \ the final $BS_{A}$ symmetrization, at the output of a QI-OPA
with a type II crystal working in a {\it collinear} configuration, as
proposed by \cite{DeMartL98}. In this case the interaction Hamiltonian $
\widehat{H}_{coll}=i\chi \hbar \left( \widehat{a}_{H}^{\dagger }\widehat{a}
_{V}^{\dagger }\right) +h.c.$ acts on a single spatial mode $k$. A
fundamental physical property of $\widehat{H}_{coll}$ consists of its
rotational invariance under $U(1)$ transformations, that is, under any
arbitrary rotation around the $z$-axis. Indeed $\widehat{H}_{coll}$ can be
re-expressed as $\frac{1}{2}i\chi \hbar e^{-i\psi }\left( \widehat{a}_{\psi
}^{\dagger 2}-e^{i2\psi }\widehat{a}_{\psi \perp }^{\dagger 2}\right) +h.c.$
for $\psi \in (0,2\pi )$ where $\widehat{a}_{\psi }^{\dagger }=2^{-1/2}(
\widehat{a}_{H}^{\dagger }+e^{i\psi }\widehat{a}_{V}^{\dagger })$ and $
\widehat{a}_{\psi \perp }^{\dagger }=2^{-1/2}(-e^{-i\psi }\widehat{a}
_{H}^{\dagger }+\widehat{a}_{V}^{\dagger })$. Let us consider an injected
single photon with $\overrightarrow{\pi }$-state $\left| \psi \right\rangle
_{in}=2^{-1/2}(\left| H\right\rangle +e^{i\psi }\left| V\right\rangle
)=\left| 1,0\right\rangle _{k}.$ The first contribution to the amplified
state, $\sqrt{6}\left| 3,0\right\rangle _{k}-\sqrt{2}e^{i2\psi }\left|
1,2\right\rangle _{k}$, is identical to the output state obtained with the
device dealt with in the present work up to a phase factor which does not
affect the fidelity value.
The UV pump beam with wl $\lambda _{p}$, back reflected by the spherical
mirror $M_{p}$ with 100\% reflectivity and $\mu -$adjustable position ${\bf Z
}$, excited the NL crystal in both directions $-k_{p}$ and $k_{p}$, i.e.
correspondingly oriented towards the right hand side and the left hand side
of Fig.1. A Spontaneous Parametric Down Conversion (SPDC) process excited by
the $-k_{p}$ UV\ mode created {\it singlet-states} of photon polarization $(
\overrightarrow{\pi })$. The photon of each SPDC pair emitted over the mode $
-k_{1}$ was back-reflected by a spherical mirror $M$ into the NL crystal and
provided the $N=1$ {\it quantum injection} into the OPA excited by the UV
beam associated with the back-reflected mode $k_{p}$. The twin SPDC\ photon
emitted over mode $-k_{2}$, selected by the ``state analyzer'' consisting
of the combination (Wave-Plate + Polarizing Beam Splitter: $WP_{T}\ $+ $
PBS_{T}$) and detected by $D_{T}$, provided the ``trigger'' of the overall
conditional experiment. Because of the EPR non-locality of the emitted
singlet, the $\overrightarrow{\pi }$-selection made on $-k_{2}$ implied
deterministically the selection of the input state $\left| \phi
\right\rangle _{in}$ on the injection mode $k_{1}$. By adopting a $\lambda /2$
wave-plate ($WP_{T}$) with different orientations of the optical axis, the
following $\left| \phi \right\rangle _{in}$ states were injected: $\left|
H\right\rangle $ and $2^{-1/2}(\left| H\right\rangle +\left| V\right\rangle
)=\left| +\right\rangle $. A more detailed description of the {\it QI-OPA}
setup can be found in \cite{DeMa02}. The $U_{B}=\sigma _{Y}$ flipping
operation was implemented by two $\lambda /2$ waveplates (wp), as said. The
device $BS_{A}$ was positioned onto a motorized translational stage: the
position $X=0$ in Fig. 2 was conventionally assumed to correspond to the
best overlap between the interacting photon wavepackets which propagate
along $k_{1}$ and $k_{2}$.
The output state on mode $k_{3}$ was analyzed by the setup shown in the
inset of Fig. 1: the field on mode $k_{4}$ was disregarded, for simplicity.
The polarization state on mode $k_{3}$ was analyzed by the combination of
the $\lambda /2$ wp $WP_{C}$ and of the polarizing beam splitter $PBS_{C}$.
For each input $\overrightarrow{\pi }$-state $\left| \phi \right\rangle _{S}$
, two different measurements were performed. In a first experiment $WP_{C}$
was set in order to make $PBS_{C}$ transmit $\left| \phi \right\rangle $
and reflect $\left| \phi ^{\bot }\right\rangle .$ The cloned state $\left|
\phi \phi \phi \right\rangle $ was detected by a coincidence between the
detectors $[D_{C}^{1},D_{C}^{2},D_{C}^{3}]$ while the state $\left| \phi
\phi \phi ^{\bot }\right\rangle $, in the ideal case not present, was
detected by a coincidence recorded either by the $D$ set $
[D_{C}^{1},D_{C}^{2},D_{C}^{\ast }]$, or by $[D_{C}^{1},D_{C}^{3},D_{C}^{
\ast }]$, or by $[D_{C}^{2},D_{C}^{3},D_{C}^{\ast }]$. In order to detect
the contribution due to $\left| \phi \phi ^{\bot }\phi ^{\bot }\right\rangle
$, $WP_{C}$ was rotated in order to make $PBS_{C}$ transmit $\left| \phi
^{\bot }\right\rangle $ and reflect $\left| \phi \right\rangle $, and the
coincidences were recorded by one of the sets $[D_{C}^{1},D_{C}^{2},D_{C}^{
\ast }]$, $[D_{C}^{1},D_{C}^{3},D_{C}^{\ast }]$, $
[D_{C}^{2},D_{C}^{3},D_{C}^{\ast }]$. The different overall quantum
efficiencies have been taken into account in the processing of the
experimental data. The precise sequence of the experimental procedures was
suggested by the following considerations. Assume the cloning machine turned
off, by setting the optical delay $\left| Z\right| >>c\tau _{coh}$, i.e., by
spoiling the temporal overlap between the injected photon and the UV\ pump
pulse. In this case since the states $\left| \phi \phi \right\rangle $ and $
\left| \phi ^{\bot }\phi ^{\bot }\right\rangle $ are emitted with same
probability by the machine, the rate of coincidences due to $\left| \phi
\phi \phi \right\rangle $ and $\left| \phi \phi ^{\bot }\phi ^{\bot
}\right\rangle $ were expected to be equal. By turning on the PQCM, i.e., by setting $\left| Z\right| \ll c\tau _{coh}$, the output state (\ref{outputPQCM}) was realized, showing a factor $R=3$ enhancement of the counting rate of $\left| \phi \phi \phi \right\rangle $ and {\it no} enhancement of $\left| \phi \phi ^{\bot }\phi ^{\bot }\right\rangle $. In Fig.~2 the coincidence data for the different state components are reported versus the delay $Z$ for the two input qubits $\left| \phi \right\rangle _{in}$. We may check
that the phase covariant cloning process affects only the $\left| \phi \phi
\phi \right\rangle $ component, as expected. Let us label by the symbol $h$
the output state components as follows: $\left\{ h=1\leftrightarrow \left|
\phi \phi ^{\bot }\phi ^{\bot }\right\rangle \text{, }2\leftrightarrow
\left| \phi \phi \phi ^{\bot }\right\rangle \text{, }3\leftrightarrow \left|
\phi \phi \phi \right\rangle \right\} $. For each index $h$, $b_{h}$ is the average coincidence rate when the cloning machine is turned off, i.e. $\left| Z\right| \gg c\tau _{coh}$, while the signal-to-noise (S/N) parameter $R_{h}$ is the ratio between the peak values of the coincidence rates detected respectively for $Z\simeq 0$ and $\left| Z\right| \gg c\tau _{coh}$. The optimal values obtained from the above analysis are $R_{3}=3$, $R_{1}=1$, $b_{3}=b_{1}$ and $b_{2}=0$, $R_{2}=0$. The values for $h=2$ are nevertheless considered, since they are actually measured in the experiment (Fig.~2). The fidelity has been evaluated by means of the expression ${\cal F}_{cov}^{1\rightarrow 3}(\phi )=(3b_{3}R_{3}+2b_{2}R_{2}+b_{1}R_{1})\times (3b_{3}R_{3}+3b_{2}R_{2}+3b_{1}R_{1})^{-1}$ and by the experimental
values of $b_{h}$, $R_{h}$. For $\left| \phi \right\rangle _{in}=\left|
H\right\rangle $ and $\left| \phi \right\rangle _{in}=\left| +\right\rangle $
we have found respectively $R_{3}=2.00\pm 0.12$ and $R_{3}=1.92\pm 0.06$
(see Fig.2). We have obtained ${\cal F}_{cov}^{1\rightarrow 3}(\left|
+\right\rangle )=0.76\pm 0.01$, and ${\cal F}_{cov}^{1\rightarrow 3}(\left|
H\right\rangle )=0.80\pm 0.01,$ to be compared with the theoretical value $
0.83$. The fidelity of cloning $\left| H\right\rangle $ is slightly increased by a contribution of $0.02$ due to an imbalance of the Hamiltonian terms.
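As a quick numerical cross-check (our sketch, not part of the original analysis), the fidelity expression above can be evaluated directly from the $b_h$, $R_h$ values; with $b_2=0$ and $b_3=b_1$ it reduces to $(3R_3+R_1)/(3R_3+3R_1)$:

```python
def fidelity(b, R):
    """F = (3 b3 R3 + 2 b2 R2 + b1 R1) / (3 b3 R3 + 3 b2 R2 + 3 b1 R1),
    with b, R dicts indexed by the component label h = 1, 2, 3."""
    num = 3 * b[3] * R[3] + 2 * b[2] * R[2] + b[1] * R[1]
    den = 3 * b[3] * R[3] + 3 * b[2] * R[2] + 3 * b[1] * R[1]
    return num / den

# Ideal values quoted above: R3 = 3, R1 = 1, b3 = b1, b2 = 0  ->  F = 10/12.
F_ideal = fidelity({1: 1.0, 2: 0.0, 3: 1.0}, {1: 1.0, 2: 0.0, 3: 3.0})

# Measured enhancement for the |+> input, R3 = 1.92 (assuming b3 = b1, b2 = 0).
F_plus = fidelity({1: 1.0, 2: 0.0, 3: 1.0}, {1: 1.0, 2: 0.0, 3: 1.92})
```

Here `F_ideal` $\approx 0.833$ reproduces the quoted theoretical value $0.83$, while `F_plus` $\approx 0.77$ lies close to the measured ${\cal F}_{cov}^{1\rightarrow 3}(\left|+\right\rangle)=0.76\pm 0.01$.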
For the sake of completeness, we have carried out an experiment setting the
pump mirror in the position $Z\simeq 0$ and changing the position $X$ of $BS_{A}$. The injected state was $\left| \phi \right\rangle _{in}=\left|
+\right\rangle $. Due to quantum interference, the coincidence rate was
enhanced by a factor $V^{\ast }$ moving from the position $\left| X\right| \gg c\tau _{coh}$ to the condition $X\approx 0$. The $\left| \phi \phi
^{\perp }\phi ^{\perp }\right\rangle $ enhancement was found $V_{\exp
}^{\ast }=1.70\pm 0.10$, to be compared with the theoretical value $V^{\ast
}=2$ while the enhancement of the term $\left| \phi \phi \phi \right\rangle $
was found $V_{\exp }^{\ast }=2.16\pm 0.12$, to be compared with the
theoretical value $V^{\ast }=3$. These results, not reported in Fig. 2, are
a further demonstration of the 3-photon interference in the Hong-Ou-Mandel
device.
In conclusion, we have implemented the optimal quantum triplicators for
equatorial qubits. The present approach can be extended in a straightforward
way to the case of $1\rightarrow M$ PQCM for $M$ odd. The results are
relevant in the modern science of quantum communication as the PQCM is
deeply connected to the optimal eavesdropping attack on the {\it BB84} protocol, which exploits the transmission of quantum states belonging to the $x-z$ plane of the Bloch sphere \cite{Fuch97,Gisi02}. The optimal fidelities
achievable for equatorial qubits are equal to the ones considered for the
four states adopted in $BB84$ \cite{DAri03}. In addition, the phase
covariant cloning can be useful to optimally perform different quantum
computation tasks adopting qubits belonging to the equatorial subspace \cite
{Galv00}.
This work has been supported by the FET European Network on Quantum
Information and Communication (Contract IST-2000-29681: ATESIT), by Istituto
Nazionale per la Fisica della Materia (PRA ``CLON'') and by Ministero
dell'Istruzione, dell'Universit\`{a} e della Ricerca (COFIN 2002).
\begin{references}
\bibitem{Woot82} W.K. Wootters, and W.H. Zurek, Nature (London) {\bf 299},
802 (1982); G. Ghirardi, Referee Report for Found. of Physics (1981).
\bibitem{Bech99} H. Bechmann-Pasquinucci and N. Gisin, Phys. Rev. A {\bf 59}
, 4238 (1999); V. Secondi, F. Sciarrino, and F. De Martini, Phys. Rev A (in
press).
\bibitem{Buze96} V. Bu\v{z}ek, and M. Hillery, Phys. Rev. A {\bf 54}, 1844
(1996); N. Gisin, and S. Massar, Phys. Rev. Lett. {\bf 79}, 2153 (1997); R.
Derka, V. Buzek and A. Ekert, Phys. Rev. Lett. {\bf 80}, 1571 (1998).
\bibitem{DeMa98} F. De Martini, Phys. Rev. Lett. {\bf 81}, 2842 (1998).
\bibitem{DeMa02} F. De Martini, V. Bu\v{z}ek, F. Sciarrino, and C. Sias,
Nature (London)\ {\bf 419}, 815 (2002); D. Pelliccia, et al., Phys. Rev. A
{\bf 68}, 042306 (2003); F. De Martini, D. Pelliccia, and F. Sciarrino,
Phys. Rev. Lett. {\bf 92}, 067901 (2004).
\bibitem{Lama02} A. Lamas-Linares, C. Simon, J.C. Howell, and D.
Bouwmeester, Science{\em \ }{\bf 296}, 712 (2002).
\bibitem{Fase02} S. Fasel, {\it et al.}, Phys. Rev. Lett. {\bf 89}, 107901
(2002).
\bibitem{Cumm02} H.K. Cummins {\it et al.,} Phys. Rev. Lett. {\bf 88},
187901 (2002).
\bibitem{Ricc04} M. Ricci, F. Sciarrino,\ C. Sias, and F. De Martini, Phys.
Rev. Lett. {\bf 92}, 047901 (2004); F. Sciarrino, C. Sias, M. Ricci, and F.
De Martini, Phys. Lett. A. {\bf 323}, 34 (2004); F. Sciarrino, C. Sias, M.
Ricci, and F. De Martini, Phys. Rev A {\bf 70}, 052305 (2004).
\bibitem{Irvi04} W.T.M. Irvine, A. Lamas Linares, M.J.A. de Dood, and D.
Bouwmeester, Phys. Rev. Lett. {\bf 92}, 047902 (2004).
\bibitem{Gisi02} N. Gisin, G. Ribordy, W. Tittel., H.\ Zbinden, Rev. Mod.
Phys. {\bf 74}, 145 (2002).
\bibitem{Brus00} D. Bruss, M. Cinchetti, G.M. D'Ariano, and C.
Macchiavello, Phys. Rev. A {\bf 62}, 012302 (2000); G.M. D'Ariano, and P. Lo
Presti, Phys. Rev. A {\bf 64}, 042308 (2001); H. Fan, K. Matsumoto, X. Wang,
and M. Wadati, Phys. Rev. A {\bf 65}, 012304 (2001).
\bibitem{DAri03} D. Bruss and C. Macchiavello, J. Phys. A {\bf 34}, 6815
(2001); G.M. D'Ariano, and C. Macchiavello, Phys. Rev. A {\bf 67}, 042306
(2003).
\bibitem{Brus98} D. Bruss, A. Ekert, and C. Macchiavello, Phys. Rev. Lett.
{\bf 81}, 2598 (1998).
\bibitem{Mass95} S. Massar and S. Popescu, Phys. Rev. Lett. {\bf 74}, 1259
(1995).
\bibitem{Hole82} A.S. Holevo, {\it Probabilistic and Statistical Aspects of
Quantum Theory }(North-Holland, Amsterdam, 1982), p.163.
\bibitem{Derk98} R. Derka, V. Buzek, and A.E. Ekert, Phys. Rev. Lett. {\bf
80}, 1571 (1998).
\bibitem{Du03} J. Du, {\it et al.}, quant-ph/0311010 reports the
experimental realization of the $1\rightarrow 2$ PQCM by a NMR scheme.
\bibitem{Fiur03} J. Fiurasek, Phys. Rev. A {\bf 67}, 052314 (2003) has
proposed a $1\rightarrow 2$ PQCM based on an unbalanced beam-splitter with
probability of success $p=1/3$.
\bibitem{Ricc05} M. Ricci {\it et al }(to be published): recently a similar
proposal has been implemented successfully by the full symmetrization
methods reported by \cite{Ricc04}.
\bibitem{DeMartL98} F. De Martini, Phys. Lett. A {\bf 250}, 15 (1998).
\bibitem{Fuch97} C.A. Fuchs, {\it et al.}, Phys. Rev. A {\bf 56}, 1163
(1997); N.J. Cerf, M.\ Bourennane A. Karlson, and N. Gisin, Phys. Rev. Lett.
{\bf 88}, 127902 (2002).
\bibitem{Galv00} E.F. Galv\~{a}o and L. Hardy, Phys. Rev. A {\bf 62},
022301 (2000).
\end{references}
\centerline{\bf Figure Captions}
\vskip 8mm
\parindent=0pt
\parskip=3mm
Figure 1. Schematic diagram of the phase-covariant cloner, PQCM, made up of a
QI-OPA and a Hong-Ou-Mandel interferometer $BS_{A}$. INSET: measurement
setup used for testing the cloning process.
Figure 2. Experimental results of the PQCM for the input qubits $\left|
H\right\rangle $ and $\left| +\right\rangle =2^{-1/2}(\left| H\right\rangle
+\left| V\right\rangle )$. The measurement time of each 4-coincidence
experimental datum was $\sim 13000$ s. The different overall detection
efficiencies have been taken into account. The solid line represents the
best Gaussian fit.
\end{document} |
\begin{document}
\title{Tunnelling dynamics of a Bose-Einstein condensate in a four-well loop-shaped system}
\author{Simone \surname{De Liberato}}
\affiliation{Laboratoire Pierre Aigrain, \'Ecole Normale Sup\'erieure, 24 rue Lhomond, 75005 Paris, France}
\author{Christopher J. Foot}
\affiliation{Clarendon Laboratory, Parks Road, Oxford OX1 3PU, United Kingdom}
\date{\today}
\begin{abstract}
We investigated the tunnelling dynamics of a zero-temperature Bose-Einstein condensate (BEC) in a configuration of four potential wells arranged in a loop.
We found three interesting dynamical regimes: (a) flows of matter with small amplitude, (b) steady flow and (c) forced flow of matter for large amplitudes. The regime of quantum self-confinement has also been studied and a new variant of it has been found for this system.
\end{abstract}
\pacs{03.75.Kk, 03.75.Lm, 74.50.+r}
\maketitle
\section{Introduction}
The behavior of a Bose-Einstein condensate in a two-well potential, based on the two-state approximation, has been investigated in a number of theoretical and experimental papers \cite{smer97,smer99,zapa98,ragh99,jack99,mari99,sake04,albi04}. Here we
expand this method to a four-well system with periodic boundary conditions.
The main aim of this analysis is to study possible ways in which to achieve mass transport around a loop and persistent currents.
\section{The Gross-Pitaevskii equation with the Feynman ansatz}
The behavior of a Bose-Einstein condensate at low temperature is
accurately described by a nonlinear Schr\"odinger equation, known as
the Gross-Pitaevskii equation (GPE), obtained from the two-body interaction Hamiltonian by neglecting the quantum fluctuations of the bosonic field.
The GPE describing a BEC trapped in a potential $V(\bm{r})$
is:\begin{equation}
i\hbar\frac{\partial\Psi}{\partial t}=\left[-\frac{\hbar^{2}\nabla^{2}}{2m}+V(\bm{r})+g\mid\Psi\mid^{2}\right]\Psi\label{eq:GPE}\end{equation}
with a coupling constant $g=4\pi\hbar^{2}a/m$, where $a$ is the scattering length of the atoms (of mass $m$), taken here to be positive.
We take the normalized solution of the time-independent GPE for the $i$th isolated well to be $\Phi_{i}$ and look for an approximate solution of the form \cite{feyn65,smer97,mahm05}:
\begin{equation}
\Psi(\bm{r},t)=\sum_{i=1}^{4}\psi_{i}(t)\Phi_{i}(\bm{r}).\label{eq:ansaz}\end{equation}
Substituting this form into Eq.(\ref{eq:GPE}) and writing the time-dependent function $\psi_{i}$ as $\sqrt{N_{i}}e^{i\theta_{i}}$ we obtain, after an integration over the spatial variables, a system of four coupled complex
equations:
\begin{eqnarray}
\label{firstsystem}
i\hbar\partial_{t}\psi_{1}=(E_{1}^{0}+U_{1}N_{1})\psi_{1}-K_{1}\psi_{2}-K_{4}\psi_{4}\nonumber\\
i\hbar\partial_{t}\psi_{2}=(E_{2}^{0}+U_{2}N_{2})\psi_{2}-K_{2}\psi_{3}-K_{1}\psi_{1}\\
i\hbar\partial_{t}\psi_{3}=(E_{3}^{0}+U_{3}N_{3})\psi_{3}-K_{3}\psi_{4}-K_{2}\psi_{2}\nonumber\\
i\hbar\partial_{t}\psi_{4}=(E_{4}^{0}+U_{4}N_{4})\psi_{4}-K_{4}\psi_{1}-K_{3}\psi_{3}\nonumber
\end{eqnarray}
with the condition $\sum_{i=1}^{4}N_{i}=N_{T}$, where $N_{T}$ is the total number of particles, $E_{i}^{0}$ is the ground-state energy of the $i$th well, $U_{i}$ is the self-interaction energy of the $i$th well and $K_{i}$ is a parameter that characterizes the overlap between wells.
It is useful to recast Eq.(\ref{firstsystem}) as a system of
eight real equations:
\begin{eqnarray}
\label{realsystem}
\hbar\partial_tN_i= & -K_i\sqrt{N_iN_{i+1}}\sin(\theta_{i+1}-\theta_i)& \nonumber \\
&+K_{i-1}\sqrt{N_{i-1}N_i}\sin(\theta_i-\theta_{i-1})& \\
\hbar\partial_t\theta_i=&
+K_i\sqrt{N_{i+1}/N_i}\cos(\theta_{i+1}-\theta_i)&\nonumber \\
&+K_{i-1}\sqrt{N_{i-1}/N_i}\cos(\theta_i-\theta_{i-1})&-UN_i-E_i^0\nonumber
\end{eqnarray}
with $i=1$ to $4$ and all the arithmetic on the index modulo $4$ ($N_5\equiv N_1$ etc.).
As pointed out in \cite{smer97} the total number of atoms $N_{T}$
is constant but, in order to have a coherent phase description, the phase
fluctuations must be small, giving a lower bound on the number of particles, namely $N_{T}>N_{min}\simeq10^{3}$
\cite{gajd97}. One may notice that typically the value of $U_{i}$ is largely independent of $i$; we thus drop the index $i$ and take all the $U_{i}$ to be the same.
In the rest of the paper we will consider only the case of all the $K_i$ modulated around the same mean value and the case of all the $K_i$ fixed to a certain positive value or set to $0$. We will call $\tilde K$ this mean or fixed value and define two quantities, $\omega_R$ and $\Lambda$, such that $\tilde K=\hbar\omega_R/2$ and $\Lambda=UN_T/2\tilde K$. The first parameter has the dimension of a frequency and we will take its inverse as time unit, while the second is a dimensionless quantity regulating the behavior of the system.
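As an illustrative sketch (not part of the original paper), Eq.(\ref{firstsystem}) can be integrated numerically. Here we set $\hbar=1$, $E_i^0=0$, constant equal couplings $K_i=\tilde K$ and hypothetical values $N_T=1000$, $\Lambda=10$, and verify that the total population is conserved:

```python
import numpy as np
from scipy.integrate import solve_ivp

def gpe_rhs(t, psi, E0, U, K):
    """Four coupled complex equations (hbar = 1):
    i dpsi_i/dt = (E0_i + U |psi_i|^2) psi_i - K_i psi_{i+1} - K_{i-1} psi_{i-1},
    indices taken modulo 4."""
    dpsi = np.empty(4, dtype=complex)
    for i in range(4):
        dpsi[i] = -1j * ((E0[i] + U * abs(psi[i]) ** 2) * psi[i]
                         - K[i] * psi[(i + 1) % 4]
                         - K[(i - 1) % 4] * psi[(i - 1) % 4])
    return dpsi

N_T = 1000.0
K = np.full(4, 0.5)            # K_tilde = hbar * omega_R / 2, with omega_R = 1
U = 10 * 2 * K[0] / N_T        # from Lambda = U * N_T / (2 * K_tilde) = 10
E0 = np.zeros(4)
psi0 = np.sqrt(np.array([400.0, 200.0, 200.0, 200.0])).astype(complex)
sol = solve_ivp(gpe_rhs, (0.0, 20.0), psi0, args=(E0, U, K),
                rtol=1e-10, atol=1e-10)
populations = np.abs(sol.y[:, -1]) ** 2   # N_i at the final time
```

The populations oscillate, but their sum stays at $N_T$ to within the integration tolerance, as required by the conservation condition above.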
\begin{figure}
\caption{\label{fig1}}
\end{figure}
\section{Small amplitude regime}
Starting with an arbitrarily small population imbalance in one of the four wells we can in principle, by modulating the coupling constants, amplify it and make it spin around the ring. In the case of symmetric wells ($E_{i}^{0}=E_{j}^{0}$ for all $i,j$) we
simulated the system with coupling constants of the form $K_{i}=\tilde K(1+(-1)^{i}\sin(wt+\phi))$, i.e. a periodic oscillation at the frequency $w=\sqrt{3UN_{T}\tilde K+2\tilde K^{2}}$, which is the resonance frequency of Eq.(\ref{realsystem}) linearized around
the values $N_i=N_T/4$ and $\theta_i-\theta_{i+1}=0$ for all $i$.
The result of our simulation is shown in Fig. \ref{fig2}; starting
with an arbitrary small population excess in one well the resonant driving
increases the population difference and makes it spin around the loop.
We are effectively causing the system to spin by modulation of the coupling constant in the same way one could make a ball turn in a dish by raising and lowering its edges at the right moment. The direction of the spinning is imposed by the initial phase $\phi$ of the perturbation relative to the position of the initial imbalance.
\begin{figure}
\caption{\label{fig2}}
\end{figure}
If we stop the modulation at $t=\tau$ we can see in Fig. \ref{fig2} (a) that the particle surplus keeps going around the loop and will only be damped by dissipation \cite{kohl03,zapa03,zapa98,sina00,kohl02}. However the damping time is of the order of many $1/\omega_R$, substantially longer than the time-scale we are interested in; we can thus neglect it. We must notice, though, that this process can prove very difficult to measure in practice, due to the extremely small amplitude of the oscillations.
Indeed one cannot amplify the imbalance beyond a certain small percentage of the average well population since in that case, due to the change of the resonance frequency of the system with the amplitude of the oscillations, we see periodic beats of the amplitude (Fig. \ref{fig2} (b)). As $\Lambda$ decreases, the peak amplitude of these beats increases but their frequency decreases. This means that, in order to have beats large enough to be observed, their period will be of the order of some tens of $1/\omega_R$, and thus dissipation may no longer be negligible, due to the intrinsically longer time-scale involved. As stated in \cite{kohl03}, the application of a periodic modulation of the potential well can stabilize the tunneling dynamics against dissipation, and thus the beats in amplitude could be effectively observable. In any case, the exact study of these dissipation effects is outside the scope of the present work.
\section{Nonlinear regime}
\subsection{Phase imprinting}
In the nonlinear regime, exploiting the fact that the phase is defined modulo $2\pi$, we can have a constant dephasing between neighboring wells.
If we set all the populations to the same value $\frac{1}{4}N_T$, all the $E_i^0$ and all the $K_i$ to the same constant values and all the phase differences between adjacent wells to $\pi/2$, building up in this way a phase difference of $2\pi$ around the loop, then in Eq.(\ref{realsystem}) the gain and loss terms for each well balance each other, and the relative phases of the wells stay constant.
Since in this case the system is invariant under rotations of $\pi/2$, and the evolution of the populations depends only on the relative phases, we can expect that the only possible behaviors of the system are a steady flow of particles around the loop or no flux at all.
In both cases, our simulation will show nothing but the fact that all the populations stay constant at the value $\frac{1}{4}N_T$.
In order to probe what is happening we simulate an abrupt cut of the loop at one point (by setting one coupling constant to zero for $t>\tau$) and observe the results, shown in Fig. \ref{fig3}. If there were effectively a steady flow we would expect it to continue by inertia, so that the last well before the cut would fill with atoms while the others would be depleted;
this is exactly what emerges from Fig. \ref{fig3} (a).
In experiments it is often quite difficult to control the exact value of the coupling constants and also to ensure that all of them have the same value. A coupling constant different from the others will behave as a bottleneck, thus creating an oscillation of the populations superimposed on the steady flux (Fig. \ref{fig3} (b)). In order to perform any experiment it would then be necessary to ensure that these oscillations do not hide the current we are interested in. In any case this turns out not to be a major problem, as both the period and the amplitude of these oscillations increase with $\Lambda$. It is then a priori possible, knowing the degree of uncertainty in the coupling constants, to tune the value of $UN_T$ in order to have enough time to
perform our measurements before large oscillations develop.
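A numerical sketch of this steady-flow state (our illustration, assuming $\hbar=1$, $E_i^0=0$, equal constant couplings and hypothetical values $N_T=1000$, $\Lambda=10$): with equal populations and a $\pi/2$ phase step between adjacent wells the tunnelling gain and loss cancel, so every population stays at $N_T/4$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def gpe_rhs(t, psi, E0, U, K):
    """i dpsi_i/dt = (E0_i + U |psi_i|^2) psi_i - K_i psi_{i+1} - K_{i-1} psi_{i-1} (hbar = 1)."""
    return np.array([-1j * ((E0[i] + U * abs(psi[i]) ** 2) * psi[i]
                            - K[i] * psi[(i + 1) % 4]
                            - K[(i - 1) % 4] * psi[(i - 1) % 4])
                     for i in range(4)])

N_T = 1000.0
K = np.full(4, 0.5)
U = 10 * 2 * K[0] / N_T        # Lambda = 10 (hypothetical value)
E0 = np.zeros(4)
# Equal populations with a 2*pi phase winding around the loop (pi/2 per link).
psi0 = np.sqrt(N_T / 4) * np.exp(1j * (np.pi / 2) * np.arange(4))
sol = solve_ivp(gpe_rhs, (0.0, 50.0), psi0, args=(E0, U, K),
                rtol=1e-10, atol=1e-10)
populations = np.abs(sol.y[:, -1]) ** 2   # each well stays at N_T/4 = 250
```

In this configuration $\psi_{i+1}+\psi_{i-1}=\psi_i(e^{i\pi/2}+e^{-i\pi/2})=0$, so each $\psi_i$ only acquires a global phase and the populations are constant exactly, which the integration confirms.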
\begin{figure}
\caption{\label{fig3}}
\end{figure}
\subsection{Self-confinement}
As in the two-well case (\cite{smer97}) the nonlinearity of the GPE leads to the possibility of self-confined states.
As a condition for a self-confined state to arise we can require that, in the time in which the phase difference between two wells varies
between $0$ and $\pi$, their population difference does not change sign.
Using Eq.(\ref{realsystem}) we can derive, by means of very rough approximations,
an approximate condition for this to happen:
\begin{equation}
\label{condition}
\frac{2}{3n}\tan(\frac{-3\sqrt3}{4\Lambda n})+1>0.
\end{equation}
where $n$ is the normalized population imbalance between the well in which we are considering the self-confinement and an adjacent well, whose population is assumed to be $\frac{N_{T}-N}{3}$.
We can check that this condition is satisfied both for positive and negative values of $n$.
The first case corresponds to the usual quantum self-confinement, the second one corresponds to a state of self-depletion, in which one well almost empty at the beginning remains almost empty.
Numerical simulations show that the condition stated in Eq.(\ref{condition})
works quite well. For $\Lambda = 100$ and $N_T=10^5$ it gives a minimal number of atoms in one well for being in the self-confined regime of $31750$, while the simulated value is $35000$, and a maximal number of atoms in the considered well for being in the self-depleted regime of $18250$, where the simulated value is $15000$.
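The thresholds quoted above can be reproduced directly from Eq.(\ref{condition}) (a sketch; the mapping $n=(N-(N_T-N)/3)/N_T$ from the atom number $N$ to the normalized imbalance is our reading of the text and should be treated as an assumption):

```python
import math

def self_confined(n, Lam):
    """Approximate condition: (2/(3n)) * tan(-3*sqrt(3)/(4*Lam*n)) + 1 > 0."""
    return 2 / (3 * n) * math.tan(-3 * math.sqrt(3) / (4 * Lam * n)) + 1 > 0

def imbalance(N, N_T):
    # Assumed normalization: well at N, each adjacent well at (N_T - N)/3.
    return (N - (N_T - N) / 3) / N_T

Lam, N_T = 100, 100_000
# Smallest full-well population still self-confined (scan down from a full well).
N = N_T
while self_confined(imbalance(N, N_T), Lam):
    N -= 50
N_min = N + 50
# Largest nearly-empty-well population still self-depleted (scan up from zero).
N = 0
while self_confined(imbalance(N, N_T), Lam):
    N += 50
N_max = N - 50
# N_min ~ 3.2e4 and N_max ~ 1.8e4, close to the approximate values quoted above.
```

The two thresholds come out symmetric about $N_T/4$, in line with the symmetric pair $31750$/$18250$ quoted in the text.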
It is important to remark that in the second case
the populations of the other three wells are smaller than the lower limit
for quantum self-confinement, so we are effectively considering a new effect, not simply a self-confinement of the three full wells.
We can gain insight into this phenomenon if we interpret the usual
quantum self-confinement of the two-well system as a self-confinement for
one well and a self-depletion for the other. In the two-well system the
conditions for the two coincide (obviously, if one well is empty the other has to be full), but in the four-well case they no longer coincide and we can thus experimentally distinguish them (Fig. \ref{fig4}). In both cases we will have to face the standard dissipation phenomena encountered in the usual quantum self-confined regime \cite{kohl03,zapa03}, but also in this case the time-scales we are interested in for observing the phenomenon are substantially smaller than the typical dissipation time.
\begin{figure}
\caption{\label{fig4}}
\end{figure}
\subsection{Full condensate spinning}
\begin{figure}
\caption{\label{fig5}}
\end{figure}
Exploiting the self-confining nonlinearity we can make the
entire condensate move all the way around the loop.
Let us suppose that the condensate is initially in one well.
Increasing the coupling constant towards an adjacent well, enough to
make the well leave the self-confined regime, causes a flux
toward this second well. By lowering the coupling constant before
the flux changes sign, we find ourselves with the second well in the
self-confined regime. Repeating this process we can
make the condensate move around the loop (Fig. \ref{fig5}) in a sort of quantum version of the conveyor-belt mechanism \cite{hans00}. The number of atoms in the depleted wells has to be quite small (of the order of a few percent of $N_T$) in order to avoid significant spurious population oscillations that could bring the system off resonance after very few turns.
The process has proven to be almost independent of the initial phases of the four condensates.
This spinning of the whole BEC is possible because we have no intensity
resonance, as we had before, and so the resonance frequency remains the same. In any case, because of the long period involved (of the order of some hundreds of $1/\omega_R$), dissipation could play an important role. However, contrary to what happens with the high-frequency oscillations in one self-confined well, here we have only very low-frequency population fluctuations. Thus the only significant dissipation source should be the spontaneous atom loss, which it is normally possible to keep relatively small over the time-scale we are interested in. It is not straightforward to calculate the correct modulation frequency analytically, but simple numerical integration of Eq.(\ref{firstsystem}) is sufficient to find it accurately enough for any experiment. We may notice however that, contrary to what happens in the other two cases, as soon as the modulation is interrupted the spinning stops.
\subsection{Conclusion}
We have shown that, even though the dynamics of a BEC in a two-well system is a well-studied problem, the extension to four wells allows us to highlight new and interesting aspects. It also gives us the possibility to better understand well-known phenomena. We would like to stress that we used the four-well configuration simply because it is the simplest nontrivial loop configuration; the phenomena we highlighted are not specific to the number four, all of them extending trivially to loops made up of a larger number of wells.
S.D.L. would like to thank Yvan Castin for useful comments and corrections.
\end{document} |
\begin{document}
\pagenumbering{roman}
\pagestyle{headings}
\begin{center}
{\bf{Application of KKT to Transportation Problem with Volume Discount on Shipping Cost.}\\[5mm] Haruna Issaka\textsuperscript{1}, Ahmed Mubarack\textsuperscript{2}, Osman Shaibu\textsuperscript{3}}\\1. PhD Computational Mathematics.\\2. PhD Student, University of Nottingham, United Kingdom.\\\underline{3. Department of Mathematics, University of Health and Allied Sciences, Ghana.}
\end{center}
\setcounter{page}{1}
\pagenumbering{arabic}
\subsection*{Abstract}
Nonlinear programming problems are useful in designing and assigning work schedules and also in transporting goods and services from known sources to specified destinations. The objective function could be linear or nonlinear depending on the mode or route of transportation. In supplying goods and services during emergencies, the objective cost function is always assumed to be nonlinear.
\\
The purpose of this research is to study the nature of the cost function of the transportation problem and provide solution algorithms for such nonlinear cost functions. The cost per unit commodity transported may not be fixed, since volume discounts are sometimes available for large shipments of goods and services. This makes the cost function either piecewise linear, separable concave or convex.
The approach to solving such problems as they arise is to apply existing general nonlinear programming algorithms and, when necessary, make modifications to fit the special structure of the problem. We describe, theoretically, algorithms for the nonlinear cost function and state conditions under which stability is achieved, in both the sufficient and necessary cases.
\noindent \underline{\textit{keywords:} Convex function, shipping cost, transportation problem,\\ optimization problem.}
\section*{1. Introduction} A number of factors need to be taken into account when considering transportation problems, for instance port selection, route of transport, port-to-port carrier selection, as well as distribution-related cases. Freight companies dealing with large-volume shipments encounter various problems during the movement of goods and services due to the cost involved. Volume discounts are meant to minimize the shipping cost of transport \cite{issaka2010transportation}. A very important area of application is the management and efficient use of scarce resources to increase productivity \cite{adby1974introduction},\cite{mccormick1983nonlinear}.
\\
\cite{shetty1959solution} formulated an algorithm to solve transportation problems with a nonlinear cost function.
In his paper, a solution to the generalized transportation problem with nonlinear costs is given. The solution procedure is an iterative method and a feasible solution is obtained at each stage. The value of the criterion function improves from one stage to the next.
\\
Bhudury et al \cite{arcelus1996buyer} gave a detailed game-theory analysis of the buyer-vendor coordination problem embedded in the price-discount inventory model. Pure and mixed, cooperative and non-cooperative strategies were developed. Highlights of the paper include the full characterization of the Pareto optimal set, the determination of profit-sharing mechanisms for the cooperative case and the derivation of a set of parameter-specific non-cooperative mixed strategies. Numerical example was presented to illustrate the main features of the model.
\\
\cite{goyal1988joint} also developed a joint economic-lot-size model for the case where a vendor produces to order for a purchaser on a lot-for-lot basis under deterministic conditions. The assumption of lot-for-lot bases is restrictive in nature. In his paper, a general joint economic-lot-size model was suggested and shown to provide a lower or equal joint total relevant cost compared to the model of Banerjee.
\\
In their paper, \cite{sharp1970decomposition},\cite{abramovich2004refining} considered a firm with several plants supplying several markets with known demands. The cost of production was considered nonlinear, while the transportation costs between any plant-market pair are linear. The Kuhn-Tucker conditions and the dual of the transportation model were used to derive the optimality conditions for the problem. The conditions were shown to be both sufficient and necessary if the production costs are convex at each plant; otherwise they are only necessary. An algorithm was also developed for reaching an optimal solution to the production-transportation problem in the convex case.
\\
Tapia et al \cite{tapia1994extension} extended the Karush-Kuhn-Tucker approach to the Lagrange multipliers of an infinite programming formulation. The main result generalized the usual first-order necessary conditions to address problems in which the domain of the objective function is a Hilbert space and the number of constraints is arbitrary.
Under production economies of scale, the trade-off between production and transportation costs will create a tendency towards a decentralized decision-making process \cite{youssef1996iterative}.
\\
The algorithms of this paper belong to the direct-search or implicit-enumeration type proposed by \cite{spielberg1969algorithms}. In that paper, a general plan of procedure is expected to be equally valid for the capacitated plant-location problem and also for transshipment problems with fixed charges. After considerable computational experience, accumulated and discussed at some length, the authors suggested additional work on the construction of adaptive programs that match algorithm to data structure.
\\
Williams, \cite{williams1962treatment} applied the decomposition principle of \cite{dantzig1959truck} to the solution of the \cite{hitchcock1941distribution} transportation problem and to several generalizations of it. Among the generalizations are $(i)$ the transportation problem with source availability subject to general linear constraints, and $(ii)$ the case in which the costs are piecewise linear convex functions.
\subsection*{2. Non-Linear programming Problem}
In order to study nonlinear problems, we observe the following definitions:
\\
$i.$ \textbf{Polyhedral set}: A set $P$ in an $n$-dimensional vector space $T^n$ is a polyhedral set if it is the intersection of a finite number of closed half-spaces, i.e. $P = \{x : p^t_i x \leq \alpha _i \quad \forall i=1,\dots,n\}$, where $p_i$ is a nonzero vector in $T^n$ and $\alpha_i$ is a scalar.
\\
$ii$. \textbf{Convex set}: A subset $S$ of $R^n$ is convex if for any $x_1, x_2 \in S$, the line segment $[x_1, x_2]$ is contained in $S$. In other words, the set $S$ is convex if $\lambda x_1+(1-\lambda)x_2\in S$ for all $0\leq \lambda \leq 1$.
\subsection*{3. Karush-Kuhn-Tucker Conditions for Optimality}
Consider the non-linear transportation problem
\\
Min $C(x)=\sum_{i}C_i(x_i)$ subject to the constraints $Ax=b$, $x\geq 0$, where $x=(x_1,x_2,...,x_n)^T$,
$b=\begin{pmatrix}
s_{1}\\s_{2}\\..\\{s_n}\\{d_1}\\..\\{d_n}
\end{pmatrix}$, $A=\begin{pmatrix}
1 &1&.&.&.&1 \\ .&.&.&.&.&.\\1&1&.&.&.&1
\end{pmatrix}$.
\\
$C_i$ is the cost associated with each $x_i$.
\\
The solution tableau is set up as below
\\[5mm]
\begin{tabular}{|c|C|C|C|r|}\hline
& 1 & 2 & $j$ & Supply \\\hline
1 & \innerbox{$x_{1,1}$}{$C_{1,1}$} & \innerbox{$x_{1,2}$}{$C_{1,2}$} & \innerbox{$x_{1,j}$}{$C_{1,j}$} & $s_1$ \\\hline
2 & \innerbox{$x_{2,1}$}{$C_{2,1}$} & \innerbox{$x_{2,2}$}{$C_{2,2}$} & \innerbox{$x_{2,j}$}{$C_{2,j}$} & $s_2$ \\\hline
.. & \innerbox{..}{..} & \innerbox{..}{..} & \innerbox{..}{..} & .. \\\hline
n & \innerbox{$x_{n,1}$}{$C_{n,1}$} & \innerbox{$x_{n,2}$}{$C_{n,2}$} & \innerbox{$x_{n,j}$}{$C_{n,j}$} & $s_n$ \\\hline
n+1 & \innerbox{$x_{n+1,1}$}{$C_{n+1,1}$} & \innerbox{$x_{n+1,2}$}{$C_{n+1,2}$} & \innerbox{$x_{n+1,j}$}{$C_{n+1,j}$} & $s_{n+1}$ \\\hline
Demand & \bottombox{$d_1$} & \bottombox{$d_2$} & \bottombox{$d_n$} & $\sum_{i=1}^{n}s_i=\sum_{j=1}^{n}d_j$ \\\hline
\end{tabular}
\\[5mm]
where $C_{ij}$ is the cost of transporting quantity $x_{ij}$ from source $s_i$ to destination $d_j$.
\\
The Lagrangian function for the system is $Z(x,\lambda,\omega)=C(x)+\omega^{T} (b-Ax)-\lambda^{T} x$, where $\lambda$ and $\omega$ are Lagrangian multipliers. By the KKT conditions, the basic feasible solution $\bar{x}$ is a KKT point if \\
\begin{equation}
\begin{cases}
\label{eqn1}
\triangledown Z=\triangledown C(\bar{x})-\omega^TA-\lambda=0,\\ \lambda \bar{x}=0,\\ \lambda \geq 0, \\ \bar{x}\geq 0.
\end{cases}
\end{equation}
From Eq.(\ref{eqn1}) we have the following
\\
\begin{equation}
\label{eqn2}
\begin{cases}
\frac{\partial Z}{\partial x_{ij}}=\frac{\partial C(\bar{x})}{\partial x_{ij}}-(u,v)(e_i,e_{n+j})-\lambda_k=0,
\\
\lambda_{ij}x_{ij}=0\\
x_{ij}\geq 0\\
\lambda_{ij}\geq 0.
\end{cases}
\end{equation}
\noindent From \ref{eqn2} we have \\
\begin{equation}
\label{eqn3}
\frac{\partial Z}{\partial x_{ij}}=\frac{\partial C(\bar{x})}{\partial x_{ij}}-(u_i+v_j) \geq 0
\end{equation}
and
\begin{equation}
\label{eqn4}
x_{ij}\frac{\partial Z}{\partial x_{ij}}=x_{ij}\left[\frac{\partial C(\bar{x})}{\partial x_{ij}}-(u_i+v_j)\right] = 0 \quad \forall\, x_{ij}\geq 0.
\end{equation}
\noindent Hence, whenever conditions \ref{eqn3} and \ref{eqn4} are satisfied, the system attains its optimal value.
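The KKT test above can be carried out mechanically for a given basic solution. The sketch below is our own illustration (the function name and the linear-cost example are not from the paper): it assigns $u_1=0$, propagates $u_i+v_j=\partial C/\partial x_{ij}$ over the basic cells, and checks that every reduced gradient is nonnegative.

```python
def kkt_check(cost_grad, basic_cells, m, n, tol=1e-9):
    """Check the optimality conditions (3)-(4) for a transportation
    basic solution: cost_grad[i][j] holds dC/dx_ij at x_bar, and
    basic_cells lists the (i, j) pairs of the m+n-1 basic cells."""
    u, v = [None] * m, [None] * n
    u[0] = 0.0  # convention: set u_1 = 0 and solve for the rest
    changed = True
    while changed:  # propagate u_i + v_j = dC/dx_ij over the basic cells
        changed = False
        for i, j in basic_cells:
            if u[i] is not None and v[j] is None:
                v[j], changed = cost_grad[i][j] - u[i], True
            elif v[j] is not None and u[i] is None:
                u[i], changed = cost_grad[i][j] - v[j], True
    # condition (3): reduced gradient nonnegative on every cell
    return all(cost_grad[i][j] - (u[i] + v[j]) >= -tol
               for i in range(m) for j in range(n))
```

For linear costs $C_{ij}x_{ij}$ the gradient entries are the unit costs, and the test reduces to the classical MODI optimality check.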
\subsection*{4. General Solution to the Non-linear Transportation Problem}
The solution process is detailed below
\begin{enumerate}
\item Find an initial basic feasible solution $\bar{x}$ using any of the known methods, e.g. the north-west corner rule, Vogel's approximation, or the row minima method.
\item If $\bar{x}$ satisfies \ref{eqn3} and \ref{eqn4}, then $\bar{x}$ is a KKT point, and we stop.
\item If $\bar{x}$ does not satisfy \ref{eqn3} and \ref{eqn4}, we move to find a new basic solution by improving the cost function and repeat the process.
\end{enumerate}
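The initial-solution step above can be sketched in a few lines. This is a generic implementation of the north-west corner rule (our own illustration, not code from the paper):

```python
def northwest_corner(supply, demand):
    """Initial basic feasible solution by the north-west corner rule."""
    supply, demand = list(supply), list(demand)
    m, n = len(supply), len(demand)
    x = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(supply[i], demand[j])        # ship as much as possible
        x[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0 and i < m - 1:     # row exhausted: move down
            i += 1
        else:                                # otherwise move right
            j += 1
    return x
```

For a balanced problem ($\sum s_i=\sum d_j$) this fills $m+n-1$ cells, giving the basic variables for the subsequent KKT check.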
\subsection*{5. Non-linear Transportation Problem with Concave Cost Function}
During emergency cases, goods and services are shipped through unapproved routes to the affected destinations. In such cases, the cost function assumes a nonlinear, concave structure. Volume discounts are sometimes available for shipments in large volumes. When volume discounts are available, the cost per unit shipped decreases with increasing volume shipped. The discount may be directly related to the units shipped or have the same rate for equal amounts shipped.\\
If the discount is directly related to the unit commodity shipped, the resulting cost function will be continuous with continuous first partial derivative.\\ The associated nonlinear problem formulation is
\\
Min $f(x)=\sum_{i,j}C_{ij}(x_{ij})$ subject to the constraints \\ $\sum_{j=1}^{n} x_{ij}=s_i$ and $\sum_{i=1}^{n} x_{ij}=d_j$, where the cost function $f : R^{mn}\rightarrow R$.
\\[2mm]
\textbf{Theorem 1}: Let $f$ be a concave and continuous function and $P$ a non-empty compact polyhedral set. Then an optimal solution to the problem $\min f(x)$, $x\in P$, exists and can be found at an extreme point of $P$.\\
\textbf{Proof}: Let $E=(x_1, x_2,\dots,x_n)$ be the set of extreme points of $P$ and choose $x_k\in E$ such that\\ $f(x_k)=\min \{ f(x_i)\mid i=1,2,\dots,n \}$. Since $P$ is compact and $f$ is continuous, $f$ attains its minimum in $P$, say at $\bar{x}$. Since $P$ is polyhedral, $\bar{x}=\sum_{i=1}^{n}\lambda_ix_i$ with $\sum_{i=1}^{n}\lambda_i=1$ and $\lambda_i \geq 0$, for the extreme points $(x_1, x_2, \dots, x_n)$. \\ By the concavity of $f$, it follows that\\ $f(\bar{x})=f(\sum_{i=1}^{n}\lambda_ix_i)\geq \sum_{i=1}^{n}\lambda_if(x_i)\geq f(x_k)\sum_{i=1}^{n}\lambda_i=f(x_k)$.
Since $\bar{x}$ is a minimiser, we also have $f(\bar{x})\leq f(x_k)$. Hence $f(\bar{x})=f(x_k)$, so the minimum is attained at the extreme point $x_k$.
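Theorem 1 can be illustrated numerically: for a concave cost, a dense grid search over a polytope returns a vertex. In the toy sketch below (our own; the square $[0,1]^2$ stands in for the transportation polytope $P$, and the cost function is an arbitrary concave example) the grid minimiser coincides with a vertex minimiser.

```python
import itertools

def grid_and_vertex_min(f, grid=21):
    """Minimise f over [0,1]^2 by dense grid search, and over the vertices."""
    pts = [(i / (grid - 1), j / (grid - 1))
           for i in range(grid) for j in range(grid)]
    grid_min = min(pts, key=lambda p: f(*p))
    vertex_min = min(itertools.product([0.0, 1.0], repeat=2),
                     key=lambda p: f(*p))
    return grid_min, vertex_min

concave_cost = lambda x, y: -(x ** 2 + y ** 2)   # a concave cost function
gmin, vmin = grid_and_vertex_min(concave_cost)
```

This is only a sanity check of the statement, of course, not a proof: it verifies that the interior grid never beats the best vertex for this concave cost.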
\subsection*{6. Solution to Concave Cost Function}
Theorem $1$ allows us to solve the concave problem as follows:\\ We consider only the extreme points of $P$ to minimize the cost $C_{ij}$.\\ Let $\bar{x}$ be the basic solution in the current iteration. We decompose $\bar{x}$ into $(x_B, x_N)$ where $x_B$ and $x_N$ are respectively the basic and non-basic variables.
\\
For $x_B>0$, the number of basic variables is $m+n-1$, where $m$ and $n$ are the numbers of supply and demand points.
\\
Next we determine the values of $u_i$ and $v_j$ from\\ $\frac{\partial C_{ij}(x_{ij})}{\partial x_{B_{ij}}}-(u_i+v_j)=0$.
\\
We determine $u_i$ and $v_j$ for all $i,j=1,\dots,n$ by assigning $u_1$ the value zero and solving for the remaining $u_i$'s and $v_j$'s. For each non-basic variable $x_{N_{ij}}$, we calculate $\frac{\partial Z}{\partial x_{ij}}$ from \ref{eqn3}.
\\
At an extreme point of $P$, the non-basic $x_{ij}$'s are zero and hence the complementary slackness condition is satisfied.
\\
On the other hand, if
\\
$\frac{\partial Z}{\partial x_{ij}}<0$ for some $i,j=1,\dots,n$, we move to search for a better basic solution by determining the leaving and the new entering basic variables.
\subsection*{7. Convex Cost Function}
\textbf{Definition:} A line segment joining the points $x$ and $y$ in $R^n$ is the set $[x,y]=\{z\in R^n:z=\lambda x + (1-\lambda)y,\ 0 \leq \lambda \leq 1\}$. A point on the line segment for which $0 < \lambda <1$ is called an interior point of the line segment.\\
A subset $S$ of $R^n$ is said to be convex if for any two elements $x$, $y$ in $S$, the line segment $[x,y]$ is contained in $S$. Thus, if $S$ is convex, $x$ and $y$ in $S$ imply $\lambda x + (1-\lambda)y \in S$ for $0 \leq \lambda \leq 1$.
\subsection*{8. Convex Optimization Problem:}
A convex optimization problem is an optimization problem that consists of minimizing a convex cost function over a convex set of constraints.
\\
Mathematically we have,
\textit{min} $f(x)$ subject to $x\in C$, where $C$ is a convex set and $f$ a convex function in $C$. \\
In particular, \textit{min} $f(x)$ subject to $g_i(x)\leq 0$, $i=1,2,\dots,n$, where $f$ is a convex function and the $g_i$ are the associated constraints.\\ In line with the above, we state the following theorem.
\\[4mm]
$\textbf{Theorem 2:}$
\\
Let $f:C\mapsto R$ be a convex function on the convex set $C$. If $\bar{x}$ is a local minimum of $f$ over $C$, then $\bar{x}$ is a global minimum of $f$ over $C$.
\\
\noindent \textbf{Proof:}\\
If $\bar{x}$ is a local minimum of $f$ over $C$, then there exists $r>0$ such that $f(x)\geq f(\bar{x})$ for all $x\in C$ with $\|x-\bar{x}\|\leq r$. Let $y\in C$ with $y\neq \bar{x}$. It suffices to show that $f(y)\geq f(\bar{x})$. Choose $\lambda \in (0,1]$ small enough that $\bar{x}+\lambda(y-\bar{x})$ lies within distance $r$ of $\bar{x}$. It follows from convexity (Jensen's inequality) that $f(\bar{x})\leq f(\bar{x}+\lambda (y-\bar{x}))\leq (1-\lambda)f(\bar{x})+\lambda f(y)$.
\\
$\implies \lambda f(\bar{x})\leq\lambda f(y)$ and hence $f(\bar{x})\leq f(y)$.
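The Jensen-type inequality used in the proof is easy to probe numerically. The sketch below is our own illustrative check (not part of the original argument): it samples the defining inequality of convexity at random points of an interval.

```python
import random

def is_convex_on_samples(f, a, b, trials=1000, tol=1e-12, seed=0):
    """Randomly test f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y) on [a, b].
    Passing does not prove convexity; a failure disproves it."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.uniform(a, b), rng.uniform(a, b)
        lam = rng.uniform(0.0, 1.0)
        if f(lam * x + (1 - lam) * y) > lam * f(x) + (1 - lam) * f(y) + tol:
            return False
    return True
```

Such a sampler is handy for checking whether a proposed transportation cost function falls under the convex case (this section) or the concave case (Section 5).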
\subsection*{9. Convex Transportation Problem:}
During emergency situations (natural disasters like flooding) where logistics are required to be moved immediately to site, the mode and method of transportation are sometimes not regular. In developing a cost function for such a case, the objective function assumes a nonlinear form.
This case arises when the objective function is composed not only of the unit transportation cost but also of a production cost related to each commodity, or when the distance from each source to each destination is not fixed; the objective function is then nonlinear, with a concave or convex cost function.
The associated problem is formulated as:
\noindent \textit{Minimize} $C(x)$\\ subject to $Ax=b$, $x\geq0$,\\ where $C(x)$ is convex, continuous and has continuous first order partial derivatives.
\subsection*{10. Solution to the Convex Cost function}
For a convex cost function, the minimum point may not be attained at an extreme point of the feasible region, but a solution may be reached on the boundary of the feasible set. This case may arise when a nonbasic variable has a positive allocation while none of the basic variables is driven to zero.\\
We use Zangwill's \cite{abramovich2004refining} convex simplex algorithm to seek a local optimal solution by partitioning the solution variable $x$ into $\{x_B, x_N\}$, where $x_B$ is the $(n+m-1)$-component vector of basic variables and $x_N$ is the $(nm-(n+m-1))$-component vector of nonbasic variables.\\
Suppose we have an initial basic solution $\bar{x}$; we improve the solution using the KKT conditions studied earlier, until each cell $x_{ij}$ satisfies the following conditions:
\begin{enumerate}
\item $x_{ij}\left[\dfrac{\partial f(\bar{x})}{\partial x_{ij}}-(u_i+v_j)\right]=0$,
\item $\dfrac{\partial f(\bar{x})}{\partial x_{ij}}-(u_i+v_j)\geq0$.
\end{enumerate}
\noindent Since each $x_{B_{ij}}>0$, condition $1$, the complementary slackness condition, becomes\\
$\dfrac{\partial f(\bar{x})}{\partial x_{B_{ij}}}-(u_i+v_j)=0$ where $x_{B_{ij}}$ is a basic variable in the $ij$ cell.
\\
But for a nonbasic cell $x_N$ at a feasible iteration point, any of the following may occur:
\\
\begin{enumerate}
\item $\dfrac{\partial f(\bar{x})}{\partial x_{ij}}-(u_i+v_j)>0$ \quad
and \quad $x_{ij}\left[\dfrac{\partial f(\bar{x})}{\partial x_{ij}}-(u_i+v_j)\right]>0$
\item $\dfrac{\partial f(\bar{x})}{\partial x_{ij}}-(u_i+v_j)<0$ \quad
and \quad $x_{ij}\left[\dfrac{\partial f(\bar{x})}{\partial x_{ij}}-(u_i+v_j)\right]<0$
\item $\dfrac{\partial f(\bar{x})}{\partial x_{ij}}-(u_i+v_j)<0$ \quad
and \quad $x_{ij}\left[\dfrac{\partial f(\bar{x})}{\partial x_{ij}}-(u_i+v_j)\right]=0$
\end{enumerate}
\noindent If $x$ satisfies any of the three conditions, we improve the solution as follows:
\\
We compute $\frac{\partial z}{\partial x_{rl}}$, the minimum of $\dfrac{\partial f(\bar{x})}{\partial x_{ij}}-(u_i+v_j)$, and
\\
$x_{st}\frac{\partial z}{\partial x_{st}}$, the maximum of $x_{ij}\left[\dfrac{\partial f(\bar{x})}{\partial x_{ij}}-(u_i+v_j)\right]$.
\\
At this point we only improve a nonbasic variable $x_{ij}$ with a positive value and a positive partial derivative.\\ We select the leaving variable under each of the three conditions stated above as follows:
\\
\textbf{Case 1}
\\
If $\frac{\partial z}{\partial x_{rl}}\geq0$ and $x_{st}\frac{\partial z}{\partial x_{st}}>0$, we decrease $x_{st}$ by the value $\theta$ using the transportation table, as in the linear and concave cases.
Let $y^k = (y_{11}^k, y_{12}^k,...,y_{mn}^k)$ be the value of $x^k = (x_{11}^k,..., x_{mn}^k)$ after making the necessary adjustment by adding and subtracting $\theta$ in the loop containing $x_{st}$ so that all the constraints are satisfied.
By doing so, either $x_{st}$ itself or a basic variable say $x_{B_{st}}$ will be driven to zero.
Now $y^k$ may not be the next iteration point. Since the function is convex, a better point could be found before reaching $y^k$. To check this, we solve
\begin{equation}
\label{eqn5}
f(x^{k+1}) = \min\{ f(\lambda x^k + (1-\lambda)y^k) : 0 \leq \lambda \leq 1\}
\end{equation}
\noindent and obtain $x^{k+1}=\lambda x^k + (1-\lambda)y^k$ where $\lambda$ is the optimal solution of \ref{eqn5}.
Before the next iteration, if $x^{k+1} = y^k$ and a basic variable goes to zero, we change the basis by substituting the leaving basic variable by $x_{st}$.
If $x^{k+1} \neq y^k$, or if $x^{k+1} = y^k$ and $x_{st}$ itself is driven to zero, we do not change the basis.
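Solving (5) is a one-dimensional minimisation along the segment between $x^k$ and $y^k$. A golden-section line search is one standard way to do this when $f$ is convex along the segment; the sketch below is our own illustration (the function and variable names are not from the paper).

```python
def line_search(f, xk, yk, iters=60):
    """Solve (5): minimise phi(lam) = f(lam*x^k + (1-lam)*y^k) over [0, 1]
    by golden-section search (suitable when f is convex on the segment)."""
    blend = lambda lam: [lam * a + (1 - lam) * b for a, b in zip(xk, yk)]
    phi = lambda lam: f(blend(lam))
    gr = (5 ** 0.5 - 1) / 2          # golden ratio; gr**2 == 1 - gr
    lo, hi = 0.0, 1.0
    c, d = hi - gr * (hi - lo), lo + gr * (hi - lo)
    for _ in range(iters):
        if phi(c) < phi(d):
            hi, d = d, c             # shrink to [lo, d]; old c becomes new d
            c = hi - gr * (hi - lo)
        else:
            lo, c = c, d             # shrink to [c, hi]; old d becomes new c
            d = lo + gr * (hi - lo)
    lam = (lo + hi) / 2
    return lam, blend(lam)
```

The returned pair $(\lambda, x^{k+1})$ is then used in the case analysis above to decide whether the basis changes.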
\\
\textbf{Case 2}
\\
If $\frac{\partial z}{\partial x_{rl}}\leq 0$ and $x_{st}\frac{\partial z}{\partial x_{st}}\leq 0$,
the value of $x_{rl}$ is increased by $\theta$ and we find $y^k$, where $\theta$ and $y^k$ are defined as in Case $1$.
As we increase the value of $x_{rl}$, one of the basic variables will be driven to zero.
After solving for $x^{k+1}$ from \ref{eqn5}, before going to the next iteration we have the following possibilities.
If $x^{k+1} = y^k$, we change the former basis, substituting $x_B$ by $x_{rl}$.
If $x^{k+1} \neq y^k$, we do not change the basis.
All the basic variables outside of the loop remain unchanged.
\\
\textbf{Case 3}
\\
If $\frac{\partial z}{\partial x_{rl}}<0$ and $x_{st}\frac{\partial z}{\partial x_{st}}>0$, we either decrease $x_{st}$ as in Case $1$ or increase $x_{rl}$ as in Case $2$.
\subsection*{11. The Transportation Convex Simplex Algorithm}
An algorithm for solving the convex transportation problem is developed in the following steps.
\\
\textbf{Initialization}
\\
Find the initial basic feasible solution.
\\
\textbf{Iteration}
\\
\textbf{Step 1}: We determine all $u_i$ and $v_j$ from $\frac{\partial f(x^k)}{\partial x_{B_{ij}}}-u_i-v_j=0$
for each basic cell.
\\
\textbf{Step 2}: For each non-basic cell, calculate
$$\frac{\partial z}{\partial x_{rl}}=\min_{i,j}\left\{\frac{\partial f(x^k)}{\partial x_{ij}}-u_i-v_j\right\}$$
$$x_{st}\frac{\partial z}{\partial x_{st}}=\max_{i,j}\left\{x_{ij}\left[\frac{\partial f(x^k)}{\partial x_{ij}}-u_i-v_j\right]\right\}$$
\\
If
$\frac{\partial z}{\partial x_{rl}}\geq 0$ and $x_{st}\frac{\partial z}{\partial x_{st}}=0$,
we stop; otherwise we move to Step $3$.
\\
\textbf{Step 3}:
Determine the non-basic variable to change.
Decrease $x_{st}$ according to Case $1$ if
$\frac{\partial z}{\partial x_{rl}}\geq 0$ and $x_{st}\frac{\partial z}{\partial x_{st}}>0$.
Increase $x_{rl}$ according to Case $2$ if
$\frac{\partial z}{\partial x_{rl}}\leq 0$ and $x_{st}\frac{\partial z}{\partial x_{st}}\leq0$.
Either increase $x_{rl}$ or decrease $x_{st}$ according to Case $3$ if
$\frac{\partial z}{\partial x_{rl}}< 0$ and $x_{st}\frac{\partial z}{\partial x_{st}}>0$.
\textbf{Step 4}:
Find the values of $y^k$, by means of $\theta$, and of $x^{k+1}$, from \ref{eqn5}.
If $y^k = x^{k+1}$ and a basic variable is driven to zero, change the basis; otherwise do not change the basis.
Set $x^k = x^{k+1}$ and
go to Step $1$.
\subsection*{12. Conclusion}
In conclusion, we have established, under suitable conditions, solution methods for the various problems that may arise in setting up the sets of equations defining a given transportation problem with its associated constraints. Unlike the traditional setting, in which the objective function and its associated constraints are always linear, shipments during emergency cases, where supply is scheduled to reach particular destinations within a specified time period, come with their own accompanying problems. We found that in such cases the objective function is nonlinear. Solution algorithms were therefore developed, building on existing algorithms with reasonable modifications to accommodate the nonlinear objective function. These algorithms can be applied to real industry data to minimize the cost of shipment.
\subsection*{13. Conflict of Interest}
The authors wish to categorically state that there is no conflict of interest whatsoever regarding the publication of this research work.
\renewcommand{\refname}{References}
\addcontentsline{toc}{chapter}{References}
\nocite{*}
\end{document} |
\begin{document}
\title{Comment on ``Fault-Tolerate Quantum Private Comparison Based on GHZ States and ECC"}
\titlerunning{Comment on ``Fault-Tolerate Quantum Private Comparison Based on...}
\author{Sai Ji \and
Fang Wang \and
Wen-Jie Liu \and
Chao Liu \and
Hai-Bin Wang
}
\authorrunning{S. Ji, F. Wang, W.-J. Liu, et al.}
\institute{S. Ji \and W.-J. Liu \Envelope (Corresponding author) \at
Jiangsu Engineering Center of Network Monitoring, Nanjing University of Information Science \& Technology, Nanjing 210044, China \\
\email{[email protected]}
\and
S. Ji \and F. Wang \and W.-J. Liu \and C. Liu \and H.-B. Wang \at
School of Computer and Software, Nanjing University of Information Science \& Technology, Nanjing 210044, China\\
}
\date{Received: 26 March 2014 / Accepted: date}
\maketitle
\begin{abstract}
A two-party quantum private comparison scheme using GHZ states and error-correcting code (ECC) was introduced in Li et al.'s paper [Int. J. Theor. Phys. 52: 2818-2825, 2013], which has the capability of fault tolerance and can be performed in a non-ideal scenario. However, this study points out that there exists a fatal loophole under a special attack, namely the twice-Hadamard-CNOT attack. A malicious party may intercept the other's particles, first execute Hadamard operations on these intercepted particles and on his (her) own ones respectively, and then sequentially perform two CNOT operations on them and on auxiliary particles prepared in advance. As a result, the secret input will be revealed without being detected through measuring the auxiliary particles. To resist this special attack, an improvement is proposed by applying a permutation operator before TP sends the particle sequences to all the participants.
\keywords{Quantum private comparison\and GHZ state\and twice-Hadamard-CNOT attack\and Improvement}
\end{abstract}
\section{Introduction}
\label{intro}
Since quantum mechanics principles were introduced into cryptography, quantum cryptography has attracted more and more attention. Due to its characteristic of unconditional security, many quantum cryptography protocols, such as quantum key distribution (QKD) [1-3], quantum direct communication (QDC) [4-7], quantum secret sharing (QSS) [8-10], and quantum teleportation (QT) [11,12], have been proposed to solve various security problems.
Recently, quantum private comparison (QPC) has become an important branch of quantum cryptography. Based on the properties of quantum mechanics, the participants can determine whether their secret inputs are equal or not without disclosing their own secrets to each other. In 2009, Yang et al. [13] put forward a pioneering QPC scheme based on Bell states and a hash function. Since then, a large number of QPC protocols utilizing entangled states, such as EPR pairs, GHZ states, etc., have been proposed [14-23].
However, these QPC protocols [13-23] are feasible in the ideal scenario but not secure in the practical scenario, where faults (including noise and error) exist in the quantum channel and measurement. In order to solve this problem, in 2013 Li et al. [24] presented a novel QPC scheme based on GHZ states and error-correcting code (ECC) against noise. However, through analyzing Li et al.'s QPC scheme, we find it is insecure under a special attack, called the twice-Hadamard-CNOT attack. To be specific, if any malicious party performs the twice-Hadamard-CNOT attack, he (she) can get the other's secret input, which goes against the QPC principles [25]. In order to fix this loophole, a simple solution, adopting a permutation operator before TP distributes the particles to the participants, is proposed.
The rest of this paper is organized as follows. First, Li et al.'s QPC protocol is briefly reviewed in Sect. 2. In Sect. 3, we analyze the security of Li et al.'s QPC protocol by introducing the twice-Hadamard-CNOT attack, and give an improvement to fix the problem. Finally, a short conclusion is drawn in Sect. 4.
\section{Review of Li et al.'s QPC protocol}
In Ref. [24], in order to guarantee QPC protocols secure in the practical scenario, Li et al. presented a new two-party QPC scheme using error-correcting code. The whole protocol consists of eight steps, as below.
\begin{enumerate}[(1)]
\item
Alice, Bob and Calvin prepare an [$m,n$] error-correcting code which uses $m$-bit codewords to encode $n$-bit words and can correct $l$ error bits per codeword, with the error-correcting function $D(x^{m})$, according to the fault rate of the noisy scenario. We suppose the error-correcting code's generator matrix is $G$, and its check matrix is $Q$. Then they encode $X=(x_{0},x_{1},...,x_{n-1})$ and $Y=(y_{0},y_{1},...,y_{n-1})$ to $X'=(x'_{0},x'_{1},...,x'_{m-1})$ and $Y'=(y'_{0},y'_{1},...,y'_{m-1})$ with the generator matrix $G$, respectively. There are
\begin{equation}
X'=X\cdot G,
\end{equation}
\begin{equation}
Y'=Y\cdot G.
\end{equation}
\item
Calvin prepares $m$ triplet GHZ states all in
\begin{equation}
\begin{split}
|\psi\rangle_{CAB}&=\frac{1}{\sqrt{2}}(|000\rangle+|111\rangle)_{CAB} \\
&=\frac{1}{2}(|+++\rangle+|+--\rangle+|-+-\rangle+|--+\rangle)_{CAB},
\end{split}
\end{equation}
where $|0\rangle$ and $|1\rangle$ are the $Z$-basis states, $|+\rangle$ and $|-\rangle$ are the $X$-basis states, and $|\pm\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle)$. Calvin divides these $m$ GHZ states into three sequences $S_{A}$, $S_{B}$ and $S_{C}$, which consist of the first, the second, and the third particles of all GHZ states, respectively.
\item
Calvin prepares some decoy photons, each randomly in one of the states $\{|0\rangle,|1\rangle,|+\rangle,|-\rangle\}$. He inserts these decoy photons into $S_{A}$ and $S_{B}$ at random positions to form sequences $S^{*}_{A}$ and $S^{*}_{B}$, respectively. Calvin retains the quantum sequence $S_{C}$ and sends the sequence $S^{*}_{A}$ to Alice and $S^{*}_{B}$ to Bob.
\item
When Alice and Bob receive $S^{*}_{A}$ and $S^{*}_{B}$, Calvin announces the positions and measurement base of these decoy photons. Alice and Bob measure them in the same base and announce their outcome. If the error rate exceeds a rational threshold, Calvin aborts the protocol and restarts from Step (1). Otherwise, there is no eavesdropper, and the protocol continues to the next step.
\item
Alice and Bob recover $S_{A}$ and $S_{B}$ respectively by discarding the decoy photons. Then Alice, Bob and Calvin measure $S_{A}$, $S_{B}$ and $S_{C}$ in the $X$ basis, respectively. If the measurement result is $|+\rangle$ ($|-\rangle$), they encode it as the classical bit 0 (1). Thus, each of Alice, Bob and Calvin will obtain $m$ bits from $S_{A}$, $S_{B}$ and $S_{C}$, respectively. We denote them by $k^{A}_{i}$, $k^{B}_{i}$ and $k^{C}_{i}$ ($i=0,1,...,m-1$).
\item
Alice and Bob calculate $x^{''}_{i}=k^A_{i}\oplus x^{'}_{i}$ and $y^{''}_{i}=k^{B}_{i}\oplus y^{'}_{i}$. They announce $X^{''}=(x^{''}_{0},x^{''}_{1},...,x^{''}_{m-1})$ and $Y^{''}=(y^{''}_{0},y^{''}_{1},..,y^{''}_{m-1})$ to Calvin.
\item
Calvin calculates $c^{'}_{i}=k^C_{i}\oplus x^{''}_{i}\oplus y^{''}_{i}$, and gets $m$ bits sequence $C^{'}=(c^{'}_{0},c^{'}_{1},...,c^{'}_{m-1})$.
\item
Then Calvin uses the check matrix $Q$ to check whether the number of error bits exceeds the threshold $l$. If it does, Calvin aborts the protocol and restarts from Step (1). Otherwise, he gets $n$ bits sequence $C^{*}$ by decoding $C^{'}$ with error-correcting function $D(C^{'})$. If there is at least one bit 1 in $C^{*}$, Calvin announces $X\neq Y$. Otherwise, he announces $X=Y$.
\end{enumerate}
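The $X$-basis expansion of the GHZ state in Eq. (3), on which Step (5) relies, can be verified directly with a small numerical sketch (plain numpy; our own illustration):

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

def kron(*vs):
    """Tensor product of several single-qubit state vectors."""
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

# GHZ state of Eq. (3) written in the Z basis ...
ghz = (kron(ket0, ket0, ket0) + kron(ket1, ket1, ket1)) / np.sqrt(2)
# ... and the claimed X-basis expansion with an even number of minus signs
xbasis = (kron(plus, plus, plus) + kron(plus, minus, minus)
          + kron(minus, plus, minus) + kron(minus, minus, plus)) / 2
```

The two expressions coincide, which is why the $X$-basis outcomes of the three parties always satisfy $k^{C}_{i}\oplus k^{A}_{i}\oplus k^{B}_{i}=0$.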
In Ref. [24], the authors claimed the scheme was secure even in the practical scenario. However, in the next section we will show how a malicious party can get the other's secret input by launching the twice-Hadamard-CNOT attack.
\section{Twice-Hadamard-CNOT attack on Li et al.'s QPC protocol and the improvement}
\subsection{Twice-Hadamard-CNOT attack on Li et al.'s QPC protocol}
As analyzed in Ref. [24], due to the decoy photons adopted in Li et al.'s QPC protocol, some well-known attacks, such as the intercept-resend attack, measurement-resend attack, and entanglement-resend attack, can be detected via the checking mechanism. Unfortunately, we find Li et al.'s QPC protocol cannot resist a special attack, i.e., the twice-Hadamard-CNOT attack. To be specific, if any party (Alice or Bob) performs the twice-Hadamard-CNOT attack, he/she can get the other's secret input. The detailed procedure of the twice-Hadamard-CNOT attack is depicted as follows.
Without loss of generality, we suppose Bob is malicious and wants to get Alice's secret input. At first, Bob prepares $m$ auxiliary particles, all in the state $|0\rangle_{e}$; then each GHZ state and the corresponding auxiliary particle compose a composite system:
\begin{equation}
\begin{split}
|\eta\rangle&=|\psi\rangle_{CAB}|0\rangle_{e} \\
&=\frac{1}{\sqrt{2}}(|0000\rangle+|1110\rangle)_{CABe} \\
&=\frac{1}{2}(|+++0\rangle+|+--0\rangle+|-+-0\rangle+|--+0\rangle)_{CABe},
\end{split}
\end{equation}
where the subscripts $C$, $A$, $B$ represent the particles in the hands of Calvin, Alice and Bob, respectively, and the subscript $e$ represents the auxiliary particle.
In Step (3), when Calvin sends the sequence $S^{*}_{A}$ to Alice, Bob may intercept $S^{*}_{A}$ and execute a Hadamard ($H$) operation on every particle in $S^{*}_{A}$ to form the sequence $S^{**}_{A}$. Then, he performs a controlled-NOT (CNOT) operation $C_{Ae}$ on every particle in $S^{**}_{A}$ and the corresponding auxiliary particle $e$. Here, the particle in $S^{**}_{A}$ is the control qubit, while particle $e$ is the target qubit. After that, Bob performs another $H$ operation on every particle in $S^{**}_{A}$ in order to restore the sequence $S^{**}_{A}$ to $S^{*}_{A}$, and sends $S^{*}_{A}$ to Alice. What should be noted is that the transmitted sequence $S^{*}_{A}$ remains unchanged.
Since Calvin announces the decoy photons' positions in Step (4), Bob can discard the auxiliary particles $e$ which correspond to the decoy photons in $S^{*}_{A}$. In Step (5), after Bob recovers $S_{B}$ by discarding the decoy photons, he executes an $H$ operation on every particle in $S_{B}$ to form the sequence $S^{**}_{B}$. Then Bob performs a CNOT operation $C_{Be}$ on every particle (control qubit) in $S^{**}_{B}$ and the corresponding auxiliary particle $e$ (target qubit). After that, he performs an $H$ operation on every particle in $S^{**}_{B}$, which restores $S^{**}_{B}$ to $S_{B}$, and on every auxiliary particle. Now, the state of the composite system is changed into
\begin{equation}
|\eta^{'}\rangle=\frac{1}{2}(|++\rangle_{Ce}(|++\rangle+|--\rangle)_{AB}+|--\rangle_{Ce}(|+-\rangle+|-+\rangle)_{AB}).
\end{equation}
From Eq. (5), a rule can be concluded: if $e$ is $|+\rangle$, particles $A$ and $B$ are in the same state; if $e$ is $|-\rangle$, they are in different states.
After Alice announces $X^{''}=(x^{''}_{0},x^{''}_{1},...,x^{''}_{m-1})$ in Step (6), Bob measures the auxiliary particles in the $X$ basis and gets the measurement results $k^{e}_{i}$ ($i=0,1,...,m-1$). According to $k^{e}_{i}$ and the rule from Eq. (5), Bob can obtain Alice's states $k^{A}_{i}$ ($i=0,1,...,m-1$). Since $X^{''}$ has been announced by Alice, Bob can calculate $x^{''}_{i}\oplus k^{A}_{i}=(k^{A}_{i}\oplus x^{'}_{i})\oplus k^{A}_{i}=x^{'}_{i}$; that is to say, he can get Alice's secret input $x^{'}_{i}$.
For the sake of clarity, the above procedure of the twice-Hadamard-CNOT attack is intuitively demonstrated in the following figure (see Fig. 1).
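The attack can also be verified by simulating the circuit. In the numpy sketch below (our own check; qubit order $C, A, B, e$), applying Bob's operations $H_A$, CNOT$_{A\to e}$, $H_A$, then $H_B$, CNOT$_{B\to e}$, $H_B$, $H_e$ to $|\psi\rangle_{CAB}|0\rangle_{e}$ yields $\frac{1}{2}(|0000\rangle+|0111\rangle+|1001\rangle+|1110\rangle)$, which is exactly the state $|\eta^{'}\rangle$ of Eq. (5) rewritten in the $Z$ basis.

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def single(n, q, g):
    """Apply a single-qubit gate g on qubit q (qubit 0 most significant)."""
    U = np.array([[1.0]])
    for k in range(n):
        U = np.kron(U, g if k == q else I2)
    return U

def cnot(n, c, t):
    """CNOT with control qubit c and target qubit t on n qubits."""
    U = np.zeros((2 ** n, 2 ** n))
    for b in range(2 ** n):
        bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[c]:
            bits[t] ^= 1
        U[int("".join(map(str, bits)), 2), b] = 1.0
    return U

# start from |GHZ>_{CAB} |0>_e, qubit order C(0), A(1), B(2), e(3)
psi = np.zeros(16)
psi[0b0000] = psi[0b1110] = 1 / np.sqrt(2)

# Bob's attack: H_A, CNOT_{A->e}, H_A on the intercepted sequence, then
# H_B, CNOT_{B->e}, H_B on his own particles and a final H on the ancilla
for U in (single(4, 1, H), cnot(4, 1, 3), single(4, 1, H),
          single(4, 2, H), cnot(4, 2, 3), single(4, 2, H), single(4, 3, H)):
    psi = U @ psi

final = psi
```

Reading off the result: every surviving basis state has $A=B$ in the $Z$ basis, and in the $X$ basis the ancilla $e$ records whether Alice's and Bob's qubits agree, which is precisely the rule Bob exploits.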
\begin{figure}
\caption{The circuit diagram of the process of the twice-Hadamard-CNOT attack. Here, $|\psi\rangle_{CAB}$ denotes the GHZ state and $e$ the auxiliary particle.}
\end{figure}
\subsection{The improvement}
In order to fix the loophole of Li et al.'s QPC protocol, we improve it by applying a permutation operator before TP sends the particle sequences to all the participants. To be specific, Step (3), Step (4) and Step (5) in the original protocol need to be revised as follows.
\begin{enumerate}
\item [$(3)^{*}$]
Calvin prepares two groups of $r$-length decoy photon sequences, each photon randomly in $\{|+\rangle,|-\rangle,|0\rangle,|1\rangle\}$, namely $R_{A}$ and $R_{B}$, and concatenates them with $S_{A}$ ($S_{B}$) to form an extended sequence $S^{'}_{A}=R_{A}||S_{A}$ ($S^{'}_{B}=R_{B}||S_{B}$), respectively. Then Calvin applies a permutation operator $\Pi_{(r+m)}$ on $S^{'}_{A}$ ($S^{'}_{B}$) to create a new sequence $\Pi_{(r+m)}S^{'}_{A}=S^{*}_{A}$ ($\Pi_{(r+m)}S^{'}_{B}=S^{*}_{B}$), and sends the new sequence $S^{*}_{A}$ to Alice and $S^{*}_{B}$ to Bob.
\item[$(4)^{*}$]
When Alice and Bob receive $S^{*}_{A}$ and $S^{*}_{B}$, Calvin announces the coordinates of the decoy qubits $\Pi_{r}$ ($\Pi_{r}\subset\Pi_{r+m}$) sent by him, and the corresponding measurement bases. Note that Calvin does not disclose the actual order of the message qubits. Then Alice and Bob measure these decoy qubits in the same bases and announce their outcomes. If the error rate exceeds a rational threshold, Calvin aborts the protocol and restarts from Step (1); otherwise, there is no eavesdropper, and the protocol continues to the next step.
\item[$(5)^{*}$]
Alice and Bob discard their decoy photons, and denote the remaining qubits in their hands as $S^{''}_{A}$ and $S^{''}_{B}$ (i.e., $S^{''}_{A}=\Pi_{m}S_{A}$, $S^{''}_{B}=\Pi_{m}S_{B}$), respectively. Then Alice, Bob and Calvin measure $S^{''}_{A}$, $S^{''}_{B}$ and $S_{C}$ in the $X$ basis, respectively, and obtain $m$ bits $k^{A}_{i}$, $k^{B}_{i}$ and $k^{C}_{i}$ ($i=0,1,...,m-1$), respectively. After they have completed the measurement operations, Calvin immediately announces the actual order of the message qubits $\Pi_{m}$ ($\Pi_{m}\subset\Pi_{r+m}$). Using this information, Alice (Bob) can rearrange $k^{A}_{i}$ ($k^{B}_{i}$) in correspondence with the original order of $S_{A}$ ($S_{B}$).
\end{enumerate}
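The bookkeeping in Steps $(3)^{*}$ and $(5)^{*}$ amounts to applying a secret permutation and later undoing it once its description is announced. A minimal sketch (our own illustration, with hypothetical helper names):

```python
import random

def permute(seq, pi):
    """Apply the permutation operator Pi: position k of the output
    carries the element originally at position pi[k]."""
    return [seq[k] for k in pi]

def restore(seq, pi):
    """Undo `permute` once the actual order pi is announced."""
    out = [None] * len(seq)
    for k, p in enumerate(pi):
        out[p] = seq[k]
    return out

# Calvin shuffles the travelling sequence with a secret pi; a wiretapper
# without pi sees only a scrambled order, while the legitimate party can
# reorder its measurement record exactly after pi is announced
particles = [f"q{i}" for i in range(8)]
pi = list(range(8))
random.Random(1).shuffle(pi)
recovered = restore(permute(particles, pi), pi)
```

Since Bob never learns $\Pi_{m}$ before his measurements, his CNOT operations pair the wrong particles, which is exactly why the correlations of Eq. (5) are destroyed.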
Let us examine the security of our improvement under the twice-Hadamard-CNOT attack. Similarly, we suppose the malicious Bob aims to get Alice's secret input. In Step $(3)^{*}$, since Calvin disrupts the sequences $S^{'}_{A}$ and $S^{'}_{B}$ by using $\Pi_{(r+m)}$ to get $S^{*}_{A}$ and $S^{*}_{B}$, the order of the transmitted sequences $S^{*}_{A}$, $S^{*}_{B}$ is fully disturbed. After discarding the decoy qubits $R_{A}$ and $R_{B}$ in Step $(5)^{*}$, Alice and Bob can only get $S^{''}_{A}$ and $S^{''}_{B}$, and cannot recover the original sequences $S_{A}$ and $S_{B}$, because $\Pi_{m}$ is only known by Calvin. So, even if Bob launches the twice-Hadamard-CNOT attack, he cannot get the final state $|\eta^{'}\rangle=\frac{1}{2}(|++\rangle_{Ce}(|++\rangle+|--\rangle)_{AB}+|--\rangle_{Ce}(|+-\rangle+|-+\rangle)_{AB})$. That is to say, Bob cannot get the correlations between Alice's and Bob's qubits by measuring the auxiliary particles $e$.
For simplicity, we take a simple two-bit private comparison as an example, without considering the error-correcting code and the decoy photons. In this case, suppose Alice's input is 10, Bob's input is 11, and Calvin prepares two GHZ states, both in the state of Eq. (3),
\begin{equation}
\begin{split}
|\psi\rangle_{C_{1}A_{1}B_{1}}\otimes|\psi\rangle_{C_{2}A_{2}B_{2}}
&=1/2(|+++\rangle+|+--\rangle+|-+-\rangle+|--+\rangle)_{C_{1}A_{1}B_{1}}\\
&\otimes1/2(|+++\rangle+|+--\rangle+|-+-\rangle+|--+\rangle)_{C_{2}A_{2}B_{2}}
\end{split}
\end{equation}
In Step $(3)^{*}$, Calvin executes a permutation operator $\Pi_{m=2}$ on the sequences $S^{'}_{A}$ and $S^{'}_{B}$; the system may then be changed into
\begin{equation}
\begin{split}
\Pi_{m=2}(|\psi\rangle_{C_{1}A_{1}B_{1}}\otimes|\psi\rangle_{C_{2}A_{2}B_{2}})&=|\psi\rangle_{C_{1}A_{2}B_{2}}\otimes|\psi\rangle_{C_{2}A_{1}B_{1}}\\
&=1/4\{|+\rangle_{C_{1}}(|++\rangle+|--\rangle)_{A_{2}B_{2}}\\
&\quad\;\otimes|+\rangle_{C_{2}}(|++\rangle+|--\rangle)_{A_{1}B_{1}}\\
&\quad\;+|+\rangle_{C_{1}}(|+-\rangle+|-+\rangle)_{A_{2}B_{2}}\\
&\quad\;\otimes|-\rangle_{C_{2}}(|++\rangle+|--\rangle)_{A_{1}B_{1}}\\
&\quad\;+|-\rangle_{C_{1}}(|++\rangle+|--\rangle)_{A_{2}B_{2}}\\
&\quad\;\otimes|+\rangle_{C_{2}}(|+-\rangle+|-+\rangle)_{A_{1}B_{1}}\\
&\quad\;+|-\rangle_{C_{1}}(|+-\rangle+|-+\rangle)_{A_{2}B_{2}}\\
&\quad\;\otimes|-\rangle_{C_{2}}(|+-\rangle+|-+\rangle)_{A_{1}B_{1}}
\}
\end{split}
\end{equation}
After Bob performs the twice-Hadamard-CNOT attack, the composite system, which consists of the GHZ states and the auxiliary particles, becomes
\begin{equation}
\begin{split}
|\eta^{'}\rangle_{1}\otimes|\eta^{'}\rangle_{2}
&=1/4\{|+\rangle_{C_{1}}|+\rangle_{e_{1}}(|++\rangle+|--\rangle)_{A_{2}B_{2}}\\
&\quad\;\otimes|+\rangle_{C_{2}}|+\rangle_{e_{2}}(|++\rangle+|--\rangle)_{A_{1}B_{1}}\\ &\quad\;+|+\rangle_{C_{1}}|-\rangle_{e_{1}}(|+-\rangle+|-+\rangle)_{A_{2}B_{2}}\\
&\quad\;\otimes|-\rangle_{C_{2}}|+\rangle_{e_{2}}(|++\rangle+|--\rangle)_{A_{1}B_{1}}\\
&\quad\;+|-\rangle_{C_{1}}|+\rangle_{e_{1}}(|++\rangle+|--\rangle)_{A_{2}B_{2}}\\
&\quad\;\otimes|+\rangle_{C_{2}}|-\rangle_{e_{2}}(|+-\rangle+|-+\rangle)_{A_{1}B_{1}}\\
&\quad\;+|-\rangle_{C_{1}}|-\rangle_{e_{1}}(|+-\rangle+|-+\rangle)_{A_{2}B_{2}}\\
&\quad\;\otimes|-\rangle_{C_{2}}|-\rangle_{e_{2}}(|+-\rangle+|-+\rangle)_{A_{1}B_{1}}
\}.
\end{split}
\end{equation}
From the above equation, the correlations between the states of $\{A_{1},B_{1}\}$ or $\{A_{2},B_{2}\}$ cannot be inferred from the final state of the auxiliary particle $e_{1}$ or $e_{2}$. That means Bob cannot steal Alice's input through measuring the auxiliary particles. So we can say the improvement can resist the twice-Hadamard-CNOT attack.
\section{Conclusion}
In all QPC protocols, in order to ensure the protocol's security, we must guarantee that any participant only knows his (her) own secret input without obtaining another's secret input. In this paper, we first review and analyze Li et al.'s two-party QPC protocol, and find it cannot resist the twice-Hadamard-CNOT attack, i.e., if one participant (Bob) launches this attack, he can get the other's secret input without being detected. To avoid this loophole, we adopt a permutation operator to rearrange the quantum sequences sent to Alice and Bob by Calvin. The analysis shows that the security of our improvement is well and truly guaranteed.\\\\
\textbf{Acknowledgments}\quad\;\small\textmd{This work is supported by the National Natural Science Foundation of China (Grant Nos. 61103235, 61373131 and 61373016), the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), and State Key Laboratory of Software Engineering, Wuhan University (SKLSE2012-09-41).}
\end{document}
\begin{document}
\title{\large \bf INTERLACING DIFFUSIONS}
\begin{abstract}
We study in some generality intertwinings between $h$-transforms of Karlin-McGregor semigroups associated with one dimensional diffusion processes and those of their Siegmund duals. We obtain couplings so that the corresponding processes are interlaced and furthermore give formulae in terms of block determinants for the transition densities of these coupled processes. This allows us to build diffusion processes in the space of Gelfand-Tsetlin patterns so that the evolution of each level is Markovian. We show how known examples naturally fit into this framework and construct new processes related to minors of matrix valued diffusions. We also provide explicit formulae for the transition densities of the particle systems with one-sided collisions at either edge of such patterns.
\end{abstract}
{\small\tableofcontents}
\section{Introduction}
In this work we study in some generality intertwinings and couplings between Karlin-McGregor semigroups (see \cite{KarlinMcGregor}, also \cite{Karlin}) associated with one dimensional diffusion processes and their duals. Let $X(t)$ be a diffusion process with state space an interval $I\subset \mathbb{R}$ with end points $l<r$ and transition density $p_t(x,y)$. We define the Karlin-McGregor semigroup associated with $X$, with $n$ particles, by its transition densities (with respect to Lebesgue measure) given by,
\begin{align*}
\det(p_t(x_i,y_j))^n_{i,j=1} ,
\end{align*}
for $x,y \in W^{n}(I^{\circ})$ where $W^{n}(I^{\circ})=(x=(x_1,\cdots,x_n):l < x_1 \le \cdots \le x_{n}< r )$. This sub-Markov semigroup is exactly the semigroup of $n$ independent copies of the diffusion process $X$ which are killed when they intersect. For such a diffusion process $X(t)$ we consider the conjugate (see \cite{ConjugateToth}) or Siegmund dual (see \cite{CoxRosler} or the original paper \cite{Siegmund}) diffusion process $\hat{X}(t)$ via a description of its generator and boundary behaviour in the next subsection. The key relation dual/conjugate diffusion processes satisfy is the following (see Lemma \ref{ConjugacyLemma}), with $z,z'\in I^{\circ}$,
\begin{align*}
\mathbb{P}_z(X(t)\le z')=\mathbb{P}_{z'}(\hat{X}(t)\ge z) .
\end{align*}
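To make this identity concrete, here is a minimal closed-form check (our own illustration, not taken from the paper) for the Ornstein--Uhlenbeck diffusion with generator $L=\frac{d^2}{dx^2}-x\frac{d}{dx}$, whose Siegmund dual has generator $\hat{L}=\frac{d^2}{dx^2}+x\frac{d}{dx}$. Both transition laws are Gaussian: $X(t)\sim N(ze^{-t},1-e^{-2t})$ and $\hat{X}(t)\sim N(z'e^{t},e^{2t}-1)$, so the two sides of the duality can be compared exactly.

```python
import math

def Phi(u):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def P_X_le(t, z, zp):
    # P_z(X(t) <= z') for the OU diffusion L = d^2/dx^2 - x d/dx:
    # X(t) ~ N(z e^{-t}, 1 - e^{-2t})
    return Phi((zp - z * math.exp(-t)) / math.sqrt(1.0 - math.exp(-2.0 * t)))

def P_Xhat_ge(t, zp, z):
    # P_{z'}(Xhat(t) >= z) for the Siegmund dual Lhat = d^2/dx^2 + x d/dx:
    # Xhat(t) ~ N(z' e^{t}, e^{2t} - 1)
    return 1.0 - Phi((z - zp * math.exp(t)) / math.sqrt(math.exp(2.0 * t) - 1.0))

for (t, z, zp) in [(0.3, -0.5, 0.7), (1.0, 1.2, 0.4), (2.5, 0.0, 0.0)]:
    print(P_X_le(t, z, zp), P_Xhat_ge(t, zp, z))
```

The two columns printed agree, since the arguments $(z'-ze^{-t})/\sqrt{1-e^{-2t}}$ and $(z'e^{t}-z)/\sqrt{e^{2t}-1}$ are equal.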
We will obtain \textit{couplings} of $h$-transforms of Karlin-McGregor semigroups associated with a diffusion process and its dual so that the corresponding processes \textit{interlace}. We say that $y\in W^{n}(I^{\circ})$ and $x\in W^{n+1}(I^{\circ})$ interlace and denote this by $y\prec x$ if $x_1 \le y_1 \le x_2 \le \cdots \le x_{n+1}$. Note that this defines a space denoted by $W^{n,n+1}(I^{\circ})=((x,y):l < x_1 \le y_1 \le x_2 \le \cdots \le x_{n+1}< r)$,
\begin{center}
\begin{tabular}{ c c c c c c c c c }
$\overset{x_1}{\bullet}$&$\overset{\textcolor{red}{y_1}}{\textcolor{red}{\bullet}}$ &$\overset{x_2}{\bullet}$ &$\overset{\textcolor{red}{y_2}}{\textcolor{red}{\bullet}}$ &$\overset{x_3}{\bullet}$ &$\cdots$ &$\overset{x_{n}}{\bullet}$&$\overset{\textcolor{red}{y_n}}{\textcolor{red}{\bullet}}$&$\overset{x_{n+1}}{\bullet}$ \ ,
\end{tabular}
\end{center}
with the following two-level representation,
\begin{center}
\begin{tabular}{ c c c c c c c c c }
&$\overset{\textcolor{red}{y_1}}{\textcolor{red}{\bullet}}$ & &$\overset{\textcolor{red}{y_2}}{\textcolor{red}{\bullet}}$ &$\textcolor{red}{\cdots\cdots}$ &&&$\overset{\textcolor{red}{y_n}}{\textcolor{red}{\bullet}}$&\\
$\overset{x_1}{\bullet}$& &$\overset{x_2}{\bullet}$ & &$\overset{x_3}{\bullet}$&$\cdots$&$\overset{x_{n}}{\bullet}$&&$\overset{x_{n+1}}{\bullet}$ \ .
\end{tabular}
\end{center}
Similarly, we say that $x,y \in W^n(I^{\circ})$ interlace if $l< y_1 \le x_1 \le y_2 \le \cdots \le x_{n}< r$ (we still denote this by $y\prec x$). Again, this defines the space $W^{n,n}(I^{\circ})=((x,y):l < y_1 \le x_1 \le y_2 \le \cdots \le x_{n}< r )$,
\begin{center}
\begin{tabular}{ c c c c c c c c }
$\overset{\textcolor{red}{y_1}}{\textcolor{red}{\bullet}}$ &$\overset{x_1}{\bullet}$ &$\overset{\textcolor{red}{y_2}}{\textcolor{red}{\bullet}}$ &$\overset{x_2}{\bullet}$ &$\cdots$ &$\overset{x_{n-1}}{\bullet}$&$\overset{\textcolor{red}{y_n}}{\textcolor{red}{\bullet}}$&$\overset{x_{n}}{\bullet}$ \ ,
\end{tabular}
\end{center}
with the two-level representation,
\begin{center}
\begin{tabular}{ c c c c c c c c c }
&$\overset{\textcolor{red}{y_1}}{\textcolor{red}{\bullet}}$ & &$\overset{\textcolor{red}{y_2}}{\textcolor{red}{\bullet}}$ & &$\textcolor{red}{\cdots}$&&$\overset{\textcolor{red}{y_n}}{\textcolor{red}{\bullet}}$&\\
& &$\overset{x_1}{\bullet}$ & &$\overset{x_2}{\bullet}$&$\cdots$&$\overset{x_{n-1}}{\bullet}$&&$\overset{x_{n}}{\bullet}$ \ .
\end{tabular}
\end{center}
Our starting point in this paper is a family of explicit transition kernels, arising from the consideration of stochastic coalescing flows. These kernels, defined on $W^{n,n+1}(I^{\circ})$ (or $W^{n,n}(I^{\circ})$), are given in terms of block determinants and give rise to a Markov process $Z=(X,Y)$ with (sub-)Markov transition semigroup $Q_t$, whose joint dynamics are described as follows. Let $L$ and $\hat{L}$ be the generators of a pair of one dimensional diffusions in Siegmund duality. Then, after an appropriate Doob's $h$-transformation, $Y$ evolves \textit{autonomously} as $n$ $\hat{L}$-diffusions conditioned not to intersect. The $X$ components then evolve as $n+1$ (or $n$) independent $L$-diffusions reflected off the random $Y$ barriers, a notion made precise in the next subsection. Our main result, Theorem \ref{MasterDynamics} in the text, states (modulo technical assumptions) that under a special initial condition for $Z=(X,Y)$, the \textit{non-autonomous} $X$ component is distributed as a Markov process in its own right, its evolution governed by an explicit Doob's $h$-transform of the Karlin-McGregor semigroup associated with $n+1$ (or $n$) $L$-diffusions.
At the heart of this result lie certain intertwining relations, obtained immediately from the special structure of $Q_t$, of the form,
\begin{align}
P_t\Lambda&=\Lambda Q_t \label{introinter1} \ ,\\
\Pi\hat{P}_t&=Q_t\Pi \label{introinter2} \ ,
\end{align}
where $\Lambda$ is an explicit positive kernel (not yet normalized), $\Pi$ is the operator induced by the projection on the $Y$ level, $P_t$ is the Karlin-McGregor semigroup associated with the one dimensional diffusion process with transition density $p_t(x,y)$ and $\hat{P}_t$ the corresponding semigroup associated with its dual/conjugate (some conditions, and more care regarding boundary behaviour, are needed; the reader is referred to the next section).
Now we move towards building a multilevel process. First, note that by concatenating $W^{1,2}(I^{\circ}), W^{2,3}(I^{\circ}),\cdots,W^{N-1,N}(I^{\circ})$ we obtain the space of Gelfand-Tsetlin patterns of depth $N$ denoted by $\mathbb{GT}(N)$,
\begin{align*}
\mathbb{GT}(N)=\{(X^{(1)},\cdots,X^{(N)}):X^{(n)}\in W^{n}(I^{\circ}), \ X^{(n)}\prec X^{(n+1)}\} \ .
\end{align*}
A point $(X^{(1)},\cdots,X^{(N)})\in\mathbb{GT}(N)$ is typically depicted as an array as shown in the following diagram:
\begin{center}
\begin{tabular}{ c c c c c c c c c }
& & &&$\overset{X^{(1)}_1}{\bullet}$ &&&&\\
& & & $\overset{X^{(2)}_1}{\bullet} $&&$\overset{X^{(2)}_2}{\bullet} $&&& \\
& &$\overset{X^{(3)}_1}{\bullet} $ &&$\overset{X^{(3)}_2}{\bullet} $ &&$\overset{X^{(3)}_3}{\bullet} $&& \\
&$\iddots$ & & &$\vdots$& &&$\ddots$&\\
$\overset{X^{(N)}_1}{\bullet} $ & $\overset{X^{(N)}_2}{\bullet} $ &$\overset{X^{(N)}_3}{\bullet} $&&$\cdots\cdots$&& & $\overset{X^{(N)}_{N-1}}{\bullet} $&$\overset{X^{(N)}_N}{\bullet} $
\end{tabular}
\end{center}
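As a side illustration (our own sketch, not part of the paper; the function names are ours), checking membership in $\mathbb{GT}(N)$ amounts to verifying that each level is ordered and that consecutive levels interlace:

```python
def interlaces(y, x):
    # y in W^n, x in W^{n+1}: x_1 <= y_1 <= x_2 <= ... <= x_{n+1}
    assert len(x) == len(y) + 1
    return all(x[i] <= y[i] <= x[i + 1] for i in range(len(y)))

def in_GT(levels):
    # levels = (X^{(1)}, ..., X^{(N)}) with X^{(n)} of length n
    if any(len(lvl) != n + 1 for n, lvl in enumerate(levels)):
        return False
    if any(list(lvl) != sorted(lvl) for lvl in levels):
        return False
    return all(interlaces(levels[n], levels[n + 1]) for n in range(len(levels) - 1))

print(in_GT([[1.0], [0.5, 2.0], [0.0, 1.5, 3.0]]))   # a valid depth-3 pattern
print(in_GT([[1.0], [0.5, 2.0], [0.0, 2.5, 3.0]]))   # 2.5 > 2.0 breaks interlacing
```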
Similarly, by concatenating $W^{1,1}(I^{\circ}), W^{1,2}(I^{\circ}),W^{2,2}(I^{\circ}),\cdots,W^{N,N}(I^{\circ})$ we obtain the space of symplectic Gelfand-Tsetlin patterns of depth $N$ denoted by $\mathbb{GT}_{\textbf{s}}(N)$,
\begin{align*}
\mathbb{GT}_{\textbf{s}}(N)=\{(X^{(1)},\hat{X}^{(1)}\cdots,X^{(N)}, \hat{X}^{(N)}):X^{(n)},\hat{X}^{(n)}\in W^{n}(I^{\circ}), \ X^{(n)}\prec \hat{X}^{(n)}\prec X^{(n+1)} \} \ ,
\end{align*}
\begin{center}
\begin{tabular}{ | c c c c c c c c}
$\overset{X^{(1)}_1}{\bullet}$ & & &&&&&\\
& $\overset{\hat{X}^{(1)}_1}{\circ}$& &&&&&\\
$\overset{X^{(2)}_1}{\bullet} $ & & $\overset{X^{(2)}_2}{\bullet} $ &&&&& \\
& $\overset{\hat{X}^{(2)}_1}{\circ}$& &$\overset{\hat{X}^{(2)}_2}{\circ}$ &&&&\\
$\overset{X^{(3)}_1}{\bullet} $ & & $\overset{X^{(3)}_2}{\bullet} $ && $\overset{X^{(3)}_3}{\bullet} $&&& \\
$\vdots $& &$\vdots$ & &&$\ddots$&&\\
$\overset{X^{(N)}_1}{\bullet} $ & & $\overset{X^{(N)}_2}{\bullet} $ &$\cdots$&&&$\overset{X^{(N)}_N}{\bullet} $ & \\
& $\overset{\hat{X}^{(N)}_{1}}{\circ}$ & &$\overset{\hat{X}^{(N)}_{2}}{\circ} $ &$\cdots$&$\overset{\hat{X}^{(N)}_{N-1}}{\circ}$&&$\overset{\hat{X}^{(N)}_N}{\circ}$
\end{tabular}
\end{center}
Theorem \ref{MasterDynamics} allows us to concatenate a sequence of $W^{n,n+1}$-valued processes (or two-level processes), by a procedure described at the beginning of Section 3, in order to build diffusion processes in the space of Gelfand-Tsetlin patterns so that each level is Markovian with explicit transition densities. Such examples of dynamics on \textit{discrete} Gelfand-Tsetlin patterns have been extensively studied over the past decade as models for random surface growth, see in particular \cite{BorodinKuan}, \cite{BorodinFerrari}, \cite{WarrenWindridge} and the more recent paper \cite{CerenziaKuan} and the references therein. They have also been considered in relation to building infinite dimensional Markov processes, preserving some distinguished measures of representation theoretic origin, on the boundary of these Gelfand-Tsetlin graphs via the \textit{method of intertwiners}; see Borodin and Olshanski \cite{BorodinOlshanski} for the type A case and more recently Cuenca \cite{Cuenca} for the type BC. In the paper \cite{Random growth} we pursued these directions in some detail.
Returning to the continuum discussion, both the process considered by Warren in \cite{Warren}, which originally provided the motivation for this work, and a process recently constructed by Cerenzia in \cite{Cerenzia} that involves a hard wall fit in the framework introduced here. The techniques developed in this paper also allow us to study at the process level (and not just at fixed times) the process constructed by Ferrari and Frings in \cite{FerrariFrings}. The main new examples considered in this paper are:
\begin{itemize}
\item Interlacing diffusion processes built from non-intersecting squared Bessel processes, which are related to the $LUE$ matrix diffusion process minors studied by K\"{o}nig and O'Connell in \cite{O Connell} and a dynamical version of a model considered by Dieker and Warren in \cite{DiekerWarren}. More generally, we study all diffusion processes associated with the classical orthogonal polynomials in a uniform way. This includes non-intersecting Jacobi diffusions and is related to the $JUE$ matrix diffusion, see \cite{Doumerc}.
\item Interlacing Brownian motions in an interval, related to the eigenvalue processes of Brownian motions on some classical compact groups.
\item A general study of interlacing diffusion processes with discrete spectrum and connections to the classical theory of total positivity and Chebyshev systems, see for example the monograph of Karlin \cite{Karlin}.
\end{itemize}
We now mention a couple of recent works in the literature that are related to ours. Firstly, a different approach based on generators for obtaining couplings of intertwined multidimensional diffusion processes via hard reflection is investigated in Theorem 3 of \cite{PalShkolnikov}. This has subsequently been extended by Sun \cite{Sun} to isotropic diffusion coefficients, who, making use of this, has independently obtained results similar to ours for the specific $LUE$ and $JUE$ processes. Moreover, a general $\beta$ extension of the intertwining relations for the aforementioned random matrix related processes was also established in the note \cite{Assiotis} by one of us. Finally, some results from this paper have been used recently in \cite{HuaPickrell} to construct an infinite dimensional Feller process on the so-called \textit{graph of spectra}, that is the continuum analogue of the Gelfand-Tsetlin graph, which leaves the celebrated Hua-Pickrell measures invariant.
We also study the interacting particle systems with one-sided collisions at either edge of such Gelfand-Tsetlin-pattern-valued processes and give explicit Sch\"{u}tz-type determinantal transition densities for them in terms of derivatives and integrals of the one dimensional kernels. This also leads to formulas for the largest and smallest eigenvalues of the $LUE$ and $JUE$ ensembles in analogy to the ones obtained in \cite{Warren} for the $GUE$.
Finally, we briefly explain how this work is connected to superpositions/decimations of random matrix ensembles (see e.g. \cite{ForresterRains}) and, in a different direction, to the study of strong stationary duals. This notion was considered by Fill and Lyzinski in \cite{FillLyzinski}, motivated in turn by the study of strong stationary times for diffusion processes (first introduced by Diaconis and Fill in \cite{DiaconisFill} in the Markov chain setting).
The rest of this paper is organised as follows:
\begin{enumerate}[(i)]
\item In Section 2 we introduce the basic setup of dual/conjugate diffusion processes, give the transition kernels on interlacing spaces and our main results on intertwinings and Markov functions.
\item In Section 3 we apply the theory developed in this paper to show how known examples easily fit into this framework and construct new ones, among others the ones alluded to above.
\item In Section 4 we study the interacting particle systems at the edges of the Gelfand-Tsetlin patterns.
\item In Section 5 we prove well-posedness of the simple systems of $SDEs$ with reflection described informally in the first paragraphs of the introduction and under assumptions that their transition kernels are given by those in Section 2.
\item In the Appendix we elaborate on and give proofs of some of the facts stated about dual diffusion processes in Section 2 and also discuss entrance laws.
\end{enumerate}
\paragraph{Acknowledgements.}Research of N.O'C. supported by ERC Advanced Grant 669306. Research of T.A. supported through the MASDOC DTC grant number EP/HO23364/1. We would like to thank an anonymous referee for many useful comments and suggestions which have led to many improvements in presentation.
\section{Two-level construction}
\subsection{Set up of conjugate diffusions}
Since our basic building blocks will be one dimensional diffusion processes and their conjugates, we introduce them here and collect a number of facts about them (for justifications and proofs see the Appendix). The majority of the facts below can be found in the seminal book of It\^{o} and McKean \cite{ItoMckean}; more specifically, regarding the transition densities of general one dimensional diffusion processes, see the classical paper of McKean \cite{McKean} and Section 4.11 of \cite{ItoMckean}, which we partly follow at various places.
We consider $(X_t)_{t\ge 0}$ a time homogeneous one dimensional diffusion process with state space an interval $I$ with endpoints $l<r$ which can be open or closed, finite or infinite (interior denoted by $I^{\circ}$) with infinitesimal generator given by,
\begin{align*}
L=a(x)\frac{d^2}{dx^2}+b(x)\frac{d}{dx},
\end{align*}
with domain to be specified later in this section. In order to be more concise, we will frequently refer to such a diffusion process with generator $L$ as an $L$-diffusion. We make the following regularity assumption throughout the paper.
\begin{defn}[Assumption (\textbf{R})]
We assume that $a(\cdot)\in C^1(I^{\circ})$ with $a(x)>0$ for $x\in I^{\circ}$ and $b(\cdot)\in C(I^{\circ})$.
\end{defn}
We start by giving the very convenient description of the generator $L$ in terms of its speed measure and scale function. Define its scale function $s(x)$ by
$s'(x)=\exp\big(-\int_{c}^{x}\frac{b(y)}{a(y)}dy\big)$ (the scale function is defined up to affine transformations) where $c$ is an arbitrary point in $I^\circ$, its speed measure with density $m(x)=\frac{1}{s'(x)a(x)}$ in $I^\circ$ with respect to the Lebesgue measure (note that it is a Radon measure in $I^\circ$ and also strictly positive in $I^{\circ}$) and speed function $M(x)=\int_{c}^{x}m(y)dy$. With these definitions the formal infinitesimal generator $L$ can be written as,
\begin{align*}
L=\mathcal{D}_m \mathcal{D}_s \ ,
\end{align*}
where $\mathcal{D}_m=\frac{1}{m(x)}\frac{d}{dx}=\frac{d}{dM}$ and $\mathcal{D}_s=\frac{1}{s'(x)}\frac{d}{dx}=\frac{d}{ds}$.
We now define the conjugate diffusion (see \cite{ConjugateToth}) or Siegmund dual (see \cite{Siegmund}) $(\hat{X}_t)_{t \ge 0}$ of $X$ to be a diffusion process with generator,
\begin{align*}
\hat{L}=a(x)\frac{d^2}{dx^2}+(a'(x)-b(x))\frac{d}{dx},
\end{align*}
and domain to be given shortly.
The following relations are easy to verify and are key to us.
\begin{align*}
\hat{s}'(x)=m(x) \quad \textnormal{and} \quad \hat{m}(x)=s'(x) \ .
\end{align*}
So the conjugation operation swaps the scale functions and speed measures. In particular
\begin{align*}
\hat{L}=\mathcal{D}_{\hat{m}}\mathcal{D}_{\hat{s}}=\mathcal{D}_s\mathcal{D}_m \ .
\end{align*}
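As a concrete illustration of these identities (our own worked example, not taken from the text), consider the Ornstein--Uhlenbeck generator on $I=\mathbb{R}$ with $a\equiv 1$ and $b(x)=-x$:

```latex
\begin{align*}
L=\frac{d^2}{dx^2}-x\frac{d}{dx}, \qquad
s'(x)=e^{x^2/2}, \qquad
m(x)=\frac{1}{s'(x)a(x)}=e^{-x^2/2},
\end{align*}
and then
\begin{align*}
\mathcal{D}_m\mathcal{D}_sf&=e^{x^2/2}\frac{d}{dx}\Big(e^{-x^2/2}f'\Big)=f''-xf'=Lf \ ,\\
\mathcal{D}_s\mathcal{D}_mf&=e^{-x^2/2}\frac{d}{dx}\Big(e^{x^2/2}f'\Big)=f''+xf'=\hat{L}f \ ,
\end{align*}
```

in agreement with $\hat{L}=a\frac{d^2}{dx^2}+(a'-b)\frac{d}{dx}=\frac{d^2}{dx^2}+x\frac{d}{dx}$ and with the relations $\hat{s}'=m$, $\hat{m}=s'$.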
Using Feller's classification of boundary points (see Appendix) we obtain the following table for the boundary behaviour of the diffusion processes with generators $L$ and $\hat{L}$ at $l$ or $r$,
\begin{align*}
\begin{tabular}{ l | r }
Bound. Class. of L & Bound. Class. of $\hat{L}$ \\
\hline
natural & natural \\
entrance & exit \\
exit & entrance \\
regular & regular \\
\hline
\end{tabular}
\end{align*}
We briefly explain what these boundary behaviours mean. A process can neither be started at, nor reach in finite time, a \textit{natural} boundary point. It can be started from an \textit{entrance} point but such a boundary point cannot be reached from the interior $I^\circ$. Such points are called \textit{inaccessible} and can be removed from the state space. A diffusion can reach an \textit{exit} boundary point from $I^\circ$ and, once it does, it is absorbed there. Finally, at a \textit{regular} (also called entrance and exit) boundary point a variety of behaviours is possible and we need to \textit{specify} one such. We will only be concerned with the two extreme possibilities, namely \textit{instantaneous reflection} and \textit{absorption} (sticky behaviour interpolates between the two and is not considered here). Furthermore, note that if $l$ is \textit{instantaneously reflecting} then (see for example Chapter 2 paragraph 7 in \cite{BorodinSalminen}) $Leb\{t: X_t=l \}=0 \ a.s.$ and analogously for the upper boundary point $r$.
Now in order to describe the domain, $Dom(L)$, of the diffusion process with formal generator $L$ we first define the following function spaces (with the obvious abbreviations),
\begin{align*}
C(\bar{I})&=\{f\in C(I^\circ):\lim_{x \downarrow l}f(x), \lim_{x \uparrow r}f(x)\textnormal{ exist and are finite}\} \ , \\
\mathfrak{D}&=\{f\in C(\bar{I})\cap C^2(I^\circ):Lf\in C(\bar{I})\} \ ,\\
\mathfrak{D}_{nat}&=\mathfrak{D}\ ,\\
\mathfrak{D}_{entr}&=\mathfrak{D}_{refl}=\{f\in \mathfrak{D}:(\mathcal{D}_sf)(l^+)=0 \}\ ,\\
\mathfrak{D}_{exit}&=\mathfrak{D}_{abs}=\{f\in \mathfrak{D}:(Lf)(l^+)=0 \} .
\end{align*}
Similarly, define $\mathfrak{D}^{nat},\mathfrak{D}^{entr},\mathfrak{D}^{refl},\mathfrak{D}^{exit},\mathfrak{D}^{abs}$ by replacing $l$ with $r$ in the definitions above. Then the domain of the generator of the $\left(X_t\right)_{t\ge 0}$ diffusion process (with generator $L$) with boundary behaviour $i$ at $l$ and $j$ at $r$ where $i,j \in \{nat, entr, refl, exit, abs\}$ is given by,
\begin{align*}
Dom(L)=\mathfrak{D}_i \cap \mathfrak{D}^j \ .
\end{align*}
For justifications see for example Chapter 8 in \cite{EthierKurtz} and for an entrance boundary point also Theorem 12.2 of \cite{StochasticBook} or page 122 of \cite{McKean}.
Coming back to conjugate diffusions, note that the boundary behaviour of $X_t$, the $L$-diffusion, determines the boundary behaviour of $\hat{X}_t$, the $\hat{L}$-diffusion, except at a regular point. At such a point we define the boundary behaviour of the $\hat{L}$-diffusion to be dual to that of the $L$-diffusion. Namely, if $l$ is regular reflecting for $L$ then we define it to be regular absorbing for $\hat{L}$. Similarly, if $l$ is regular absorbing for $L$ we define it to be regular reflecting for $\hat{L}$. The analogous definition is enforced at the upper boundary point $r$. Furthermore, we denote the semigroups associated with $X_t$ and $\hat{X}_t$ by $\mathsf{P}_t$ and $\mathsf{\hat{P}}_t$ respectively and note that $\mathsf{P}_t1=\mathsf{\hat{P}}_t1=1$. We remark that at an \textit{exit} or \textit{regular absorbing} boundary point the transition \textit{kernel} $p_t(x,dy)$ associated with $\mathsf{P}_t$ has an \textit{atom} there with mass (depending on $t$ and $x$) the probability that the diffusion has reached that point by time $t$ started from $x$.
We finally arrive at the following duality relation, going back in some form to Siegmund. This is proven via an approximation by birth and death chains in Section 4 of \cite{CoxRosler}. We also give a proof in the Appendix in Section \ref{Appendix} following \cite{WarrenWatanabe} (where the proof is given in a special case). The reader should note the restriction to the interior $I^{\circ}$.
\begin{lem} \label{ConjugacyLemma}
$\mathsf{P}_t \textbf{1}_{[l,y]}(x)=\mathsf{\hat{P}}_t \textbf{1}_{[x,r]}(y)$ for $x,y \in I^{\circ}$.
\end{lem}
Now, it is well known that the transition density $p_t(x,y):(0,\infty)\times I^\circ \times I^\circ \to (0,\infty)$ of any one dimensional diffusion process with a speed measure which has a continuous density with respect to the Lebesgue measure in $I^\circ$ (as is the case in our setting) is continuous in $(t,x,y)$. Moreover, under our assumptions, $\partial_xp_t(x,y)$ exists for $x\in I^\circ$ and as a function of $(t,y)$ is continuous in $(0,\infty)\times I^\circ$ (see Theorem 4.3 of \cite{McKean}).
This fact along with Lemma \ref{ConjugacyLemma} gives the following relationships between the transition densities for $x,y \in I^{\circ}$,
\begin{align}
p_t(x,y)&=\partial_y\mathsf{\hat{P}}_t \textbf{1}_{[x,r]}(y)=\partial_y\int_{x}^{r}\hat{p}_t(y,dz) \ , \\
\hat{p}_t(x,y)&=-\partial_{y} \mathsf{P}_t \textbf{1}_{[l,x]}(y)=-\partial_y\int_{l}^{x}p_t(y,dz). \label{conjtrans}
\end{align}
Before closing this section, we note that the speed measure is the \textit{symmetrizing} measure of the diffusion process and this shall be useful in what follows. In particular, for $x,y \in I^\circ$ we have,
\begin{align}\label{symmetrizing}
\frac{m(y)}{m(x)}p_t(y,x)=p_t(x,y).
\end{align}
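For instance (our own numerical check, not from the text), for the Ornstein--Uhlenbeck diffusion with $a\equiv 1$ and $b(x)=-x$, the speed density is $m(x)=e^{-x^2/2}$, the transition law is $N(xe^{-t},1-e^{-2t})$, and (\ref{symmetrizing}) holds exactly:

```python
import math

def m(x):
    # speed density for L = d^2/dx^2 - x d/dx (a = 1, s'(x) = e^{x^2/2})
    return math.exp(-x * x / 2.0)

def p(t, x, y):
    # OU transition density: N(x e^{-t}, 1 - e^{-2t}) evaluated at y
    var = 1.0 - math.exp(-2.0 * t)
    return math.exp(-(y - x * math.exp(-t)) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

for (t, x, y) in [(0.2, -0.7, 0.3), (1.1, 0.5, 1.4), (3.0, 2.0, -1.0)]:
    print(m(y) / m(x) * p(t, y, x), p(t, x, y))   # the two values coincide
```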
\subsection{Transition kernels for two-level processes}
First, we recall the definitions of the interlacing spaces our processes will take values in,
\begin{align*}
W^{n}(I^{\circ})&=((x):l < x_1 \le \cdots \le x_{n}< r ) \ ,\\
W^{n,n+1}(I^{\circ})&=((x,y):l < x_1 \le y_1 \le x_2 \le \cdots \le x_{n+1}< r ) \ ,\\
W^{n,n}(I^{\circ})&=((x,y):l < y_1 \le x_1 \le y_2 \le \cdots \le x_{n}< r ) \ , \\
W^{n+1,n}(I^{\circ})&=((x,y):l < y_1 \le x_1 \le y_2 \le \cdots \le y_{n+1}< r ) .
\end{align*}
Note that, for $(x,y)\in W^{n,n+1}(I^{\circ})$ we have $x\in W^{n+1}(I^{\circ})$ and $y\in W^{n}(I^{\circ})$, this is a minor difference in notation to the one used in \cite{Warren}; in the notations of that paper $W^{n+1,n}$ is our $W^{n,n+1}\left(\mathbb{R}\right)$.
Also define for $x\in W^{n}(I^\circ)$,
\begin{align*}
W^{\bullet,n}(x)=\{y\in W^{\bullet}(I^{\circ}):(x,y)\in W^{\bullet,n}(I^{\circ})\}.
\end{align*}
We now make the following standing assumption, enforced throughout the paper, on the boundary behaviour of the one dimensional diffusion process with generator $L$, depending on which interlacing space our two-level process defined next takes values in. Its significance will be explained later on. Note that any possible combination is allowed between the behaviour at $l$ and $r$.
\begin{defn}[Assumption (\textbf{BC})] Assume the $L$-diffusion has the following boundary behaviour:\\
When considering $\boldsymbol{W^{n,n+1}(I^{\circ})}$:
\begin{align}
& l \textnormal{ is either } Natural \textnormal{ or } Entrance \textnormal{ or } Regular \ Reflecting\label{bc1l} \ , \\
& r \textnormal{ is either } Natural \textnormal{ or } Entrance \textnormal{ or } Regular \ Reflecting\label{bc1r}.
\end{align}
When considering $\boldsymbol{W^{n,n}(I^{\circ})}$:
\begin{align}
& l \textnormal{ is either } Natural \textnormal{ or } Exit \textnormal{ or } Regular \ Absorbing\label{bc2l} \ ,\\
& r \textnormal{ is either } Natural \textnormal{ or } Entrance \textnormal{ or } Regular \ Reflecting\label{bc2r} .
\end{align}
When considering $\boldsymbol{W^{n+1,n}(I^{\circ})}$:
\begin{align}
& l \textnormal{ is either } Natural \textnormal{ or } Exit \textnormal{ or } Regular \ Absorbing\label{bc3l} \ ,\\
& r \textnormal{ is either } Natural \textnormal{ or } Exit \textnormal{ or } Regular \ Absorbing\label{bc3r} .
\end{align}
\end{defn}
We will need to enforce a further regularity and non-degeneracy assumption at regular boundary points for some of our results. This is a technical condition and presumably can be removed.
\begin{defn}[Assumption (\textbf{BC+})] Assume condition (\textbf{BC}) above. Moreover, if a boundary point $\mathfrak{b} \in \{l,r\}$ is regular we assume that
$\underset{x \to \mathfrak{b}}{\lim} a(x)>0$ and the limits $\underset{x \to \mathfrak{b}}{\lim} b(x)$, $\underset{x \to \mathfrak{b}}{\lim} \left(a'(x)-b(x)\right)$ exist and are finite.
\end{defn}
We shall begin by considering the following stochastic process, which we will denote by $\left(\boldsymbol{\Phi}_{0,t}(x_1),\cdots,\boldsymbol{\Phi}_{0,t}(x_n);t \ge 0\right)$. It consists of a system of $n$ independent $L$-diffusions started from $x_1\le \cdots \le x_n$ which \textit{coalesce} and move together once they meet. This is a process in $W^n(I)$ which, once it reaches any of the hyperplanes $\{x_i=x_{i+1}\}$, continues there forever. We have the following proposition for the finite dimensional distributions of the coalescing process:
\begin{prop}\label{flowfdd}
For $z,z'\in W^n(I^{\circ})$,
\begin{align*}
\mathbb{P}\big(\boldsymbol{\Phi}_{0,t}(z_i)\le z_i' \ \ \text{for} \ 1 \le i \le n\big)=\det \big(\mathsf{P}_t\textbf{1}_{[l,z_j']}(z_i)-\textbf{1}(i<j)\big)_{i,j=1}^n \ .
\end{align*}
\end{prop}
\begin{proof}
This is done for Brownian motions in Proposition 9 of \cite{Warren} using a generic argument based on continuous non-intersecting paths. The only variation here is that there might be an atom at $l$ which however does not alter the proof.
\end{proof}
We now define the kernel $q_t^{n,n+1}((x,y),(x',y'))dx'dy'$ on $W^{n,n+1}(I^{\circ})$ as follows:
\begin{defn}
For $(x,y),(x',y')\in W^{n,n+1}(I^{\circ})$ define $q_t^{n,n+1}((x,y),(x',y'))$ by,
\begin{align*}
& q_t^{n,n+1}((x,y),(x',y'))=\\
&=\frac{\prod_{i=1}^{n}\hat{m}(y'_i)}{\prod_{i=1}^{n}\hat{m}(y_i)}(-1)^n\frac{\partial^n}{\partial_{y_1}\cdots\partial_{y_n}}\frac{\partial^{n+1}}{\partial_{x'_1}\cdots\partial_{x'_{n+1}}}\mathbb{P}\big(\boldsymbol{\Phi}_{0,t}(x_i)\le x_i',\boldsymbol{\Phi}_{0,t}(y_j)\le y_j' \ \ \text{for all} \ \ i,j \big) \ .
\end{align*}
\end{defn}
This density exists by virtue of the regularity of the one dimensional transition densities. It is then an elementary computation using Proposition \ref{flowfdd} and Lemma \ref{ConjugacyLemma}, along with relation (\ref{conjtrans}), that $q_t^{n,n+1}$ can be written out explicitly as shown below. Note that each $y_i$ and $x_j'$ variable appears only in a certain row or column respectively.
\begin{align}
{q}_t^{n,n+1}((x,y),(x',y'))=\det\
\begin{pmatrix}
{A}_t(x,x') & {B}_t(x,y')\\
{C}_t(y,x') & {D}_t(y,y')
\end{pmatrix} \
\end{align}
where,
\begin{align*}
{A}_t(x,x')_{ij} &=\partial_{x'_j}\mathsf{P}_t \textbf{1}_{[l,x_j']} (x_i)= p_t(x_i,x_j') \ , \\
{B}_t(x,y')_{ij}&=\hat{m}(y'_j)(\mathsf{P}_t \textbf{1}_{[l,y'_j]}(x_i) -\textbf{1}(j\ge i)) \ ,\\
{C}_t(y,x')_{ij}&=-\hat{m}^{-1}(y_i)\partial_{y_i}\partial_{x'_j}\mathsf{P}_t \textbf{1}_{[l,x'_j]}(y_i)=-\mathcal{D}_s^{y_i}p_t(y_i,x_j') \ ,\\
{D}_t(y,y')_{ij}&=-\frac{\hat{m}(y'_j)}{\hat{m}(y_i)}\partial_{y_i} \mathsf{P}_t \textbf{1}_{[l,y'_j]}(y_i)=\hat{p}_t(y_i,y_j').
\end{align*}
We now define for $t>0$ the operators $Q_t^{n,n+1}$ acting on the bounded Borel functions on $W^{n,n+1}(I^{\circ})$ by,
\begin{align}\label{definitionofsemigroup1}
(Q_t^{n,n+1}f)(x,y)=\int_{W^{n,n+1}(I^{\circ})}^{}q_t^{n,n+1}((x,y),(x',y'))f(x',y')dx'dy'.
\end{align}
Then the following facts hold:
\begin{lem} Assume (\textbf{R}) and (\textbf{BC}) hold for the $L$-diffusion. Then,
\begin{align*}
&Q_t^{n,n+1} 1\le 1 ,\\
& Q_t^{n,n+1}f \ge 0 \ \textnormal{for } f \ge 0.
\end{align*}
\end{lem}
\begin{proof}
The first property will follow from performing the $dx'$ integration first in equation (\ref{definitionofsemigroup1}) with $f\equiv 1$. This is easily done by the very structure of the entries of $q_t^{n,n+1}$: noting that each $x_i'$ variable appears in a single column, then using multilinearity to bring the integrals inside the determinant and the relations:
\begin{align*}
\int_{y_{j-1}'}^{y_j'}{A}_t(x,x')_{ij}dx_j'&=\mathsf{P}_t \textbf{1}_{[l,y'_j]}(x_i)-\mathsf{P}_t \textbf{1}_{[l,y'_{j-1}]}(x_i),\\
\int_{y_{j-1}'}^{y_j'}{C}_t(y,x')_{ij}dx_j'&=-\frac{1}{\hat{m}(y_i)}\partial_{y_i} \mathsf{P}_t \textbf{1}_{[l,y'_j]}(y_i)+\frac{1}{\hat{m}(y_i)}\partial_{y_i} \mathsf{P}_t \textbf{1}_{[l,y'_{j-1}]}(y_i),
\end{align*}
and observing that by (\textbf{BC}) the boundary terms are:
\begin{align*}
\int_{y_n'}^{r}A_t(x,x')_{i,n+1}dx_{n+1}'&=1-\mathsf{P}_t \textbf{1}_{[l,y'_{n}]}(x_i), &\int_{y_n'}^{r}C_t(y,x')_{i,n+1}dx_{n+1}'=\frac{1}{\hat{m}(y_i)}\partial_{y_i} \mathsf{P}_t \textbf{1}_{[l,y'_n]}(y_i),\\
\int_{l}^{y_1'}A_t(x,x')_{i1}dx_{1}'&=\mathsf{P}_t \textbf{1}_{[l,y'_{1}]}(x_i), &\int_{l}^{y_1'}C_t(y,x')_{i1}dx_{1}'=-\frac{1}{\hat{m}(y_i)}\partial_{y_i} \mathsf{P}_t \textbf{1}_{[l,y'_1]}(y_i),
\end{align*}
we are left with the integral:
\begin{align*}
Q_t^{n,n+1} 1=\int_{W^n(I^{\circ})}^{}\det(\hat{p}_t(y_i,y'_j))^{n}_{i,j=1}dy'\le 1 .
\end{align*}
This is just a restatement of the fact that a Karlin-McGregor semigroup, to be defined shortly in this subsection, is sub-Markov.
The \textit{positivity} preserving property also follows immediately from the original definition, since $\mathbb{P}\big(\boldsymbol{\Phi}_{0,t}(x_i)\le x_i',\boldsymbol{\Phi}_{0,t}(y_j)\le y_j' \ \ \text{for all} \ \ i,j \big)$ is increasing in the $x'_i$ and decreasing in the $y_j$ respectively: indeed, for any $k$ with $x_k'\le \tilde{x}_k'$ and $\tilde{y}_k\le y_k$, each of the events:
\begin{align*}
&\big\{\boldsymbol{\Phi}_{0,t}(x_i)\le x_i', i \ne k, \boldsymbol{\Phi}_{0,t}(x_k)\le \tilde{x}_k', \boldsymbol{\Phi}_{0,t}(y_j)\le y_j', \text{ for all} \ j \big\},\\
&\big\{\boldsymbol{\Phi}_{0,t}(x_i)\le x_i',\text{ for all} \ i, \boldsymbol{\Phi}_{0,t}(y_j)\le y_j', j\ne k, \boldsymbol{\Phi}_{0,t}(\tilde{y}_k)\le y_k'\big\},
\end{align*}
contain the event:
\begin{align*}
\big\{\boldsymbol{\Phi}_{0,t}(x_i)\le x_i',\boldsymbol{\Phi}_{0,t}(y_j)\le y_j' \ \ \text{for all} \ \ i,j \big\}.
\end{align*}
Thus, the partial derivatives $\partial_{x_i'}$ and $-\partial_{y_j}$ of $\mathbb{P}\big(\boldsymbol{\Phi}_{0,t}(x_i)\le x_i',\boldsymbol{\Phi}_{0,t}(y_j)\le y_j' \ \ \text{for all} \ \ i,j \big)$ are nonnegative.
\end{proof}
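The $dx'$ integration in the proof above can be checked numerically in the simplest case (a sketch of ours, not from the paper): take $n=1$ and the $L=\frac{d^2}{dx^2}$ diffusion on $\mathbb{R}$, for which $s'\equiv\hat{m}\equiv 1$ and the dual density $\hat{p}_t$ coincides with $p_t$. Integrating the $3\times3$ block determinant $q_t^{1,2}$ over $\{x_1'\le y_1'\le x_2'\}$ should then recover $\hat{p}_t(y_1,y_1')$:

```python
import math

t, x1, x2, y1, yp1 = 0.4, -0.3, 0.8, 0.2, 0.5   # (x, y) in W^{1,2}, target y'_1
R = 8.0  # truncation of the real line; Gaussian tails beyond are negligible

def p(u, v):
    # transition density of the L = d^2/dx^2 diffusion: N(u, 2t) at v
    return math.exp(-(u - v) ** 2 / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def F(u, z):
    # P_t 1_{(l, z]}(u)
    return 0.5 * (1.0 + math.erf((z - u) / math.sqrt(4.0 * t)))

def q(xp1, xp2):
    # q_t^{1,2}((x, y), (x', y')) as a 3x3 determinant; here s' = m-hat = 1
    M = [
        [p(x1, xp1), p(x1, xp2), F(x1, yp1) - 1.0],   # rows for x_i: A and B entries
        [p(x2, xp1), p(x2, xp2), F(x2, yp1)],
        [(y1 - xp1) / (2.0 * t) * p(y1, xp1),         # C entries: -d/dy p_t(y, x'_j)
         (y1 - xp2) / (2.0 * t) * p(y1, xp2),
         p(y1, yp1)],                                 # D entry: p-hat_t = p_t here
    ]
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def trap(f, a, b, n=400):
    # trapezoid rule
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

# integrate over {x'_1 <= y'_1 <= x'_2}: the y'-marginal should be p-hat_t(y_1, y'_1)
marginal = trap(lambda u: trap(lambda v: q(u, v), yp1, R), -R, yp1)
print(marginal, p(y1, yp1))
```

Under the stated assumptions the two printed values agree up to quadrature error, consistent with the computation in the proof.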
In fact, $Q_t^{n,n+1}$, defined above, forms a sub-Markov semigroup, associated with a Markov process $Z=(X,Y)$ with possibly finite lifetime, described informally as follows: the $X$ components follow independent $L$-diffusions reflected off the $Y$ components. More precisely, assume that the $L$-diffusion is given as the pathwise unique solution $\mathsf{X}$ to the $SDE$,
\begin{align*}
d\mathsf{X}(t)=\sqrt{2a(\mathsf{X}(t))}d\beta(t)+b(\mathsf{X}(t))dt+dK^l(t)-dK^r(t)
\end{align*}
where $\beta$ is a standard Brownian motion and $K^l$ and $K^r$ are (possibly zero) positive finite variation processes that only increase when $\mathsf{X}=l$ or $\mathsf{X}=r$, so that $\mathsf{X}\in I$ and $\textnormal{Leb}\{t:\mathsf{X}(t)=l \textnormal{ or } r \}=0$ a.s. We write $\mathfrak{s}_{L}$ for the corresponding measurable solution map on path space, namely so that $\mathsf{X}=\mathfrak{s}_{L}\left(\beta\right)$.
Consider the following system of $SDEs$ with reflection in $W^{n,n+1}$ which can be described in words as follows. The $Y$ components evolve as $n$ autonomous $\hat{L}$-diffusions stopped when they collide or when (if) they hit $l$ or $r$, and we denote this time by $T^{n,n+1}$. The $X$ components evolve as $n+1$ $L$-diffusions reflected off the $Y$ particles.
\begin{align}\label{System1SDEs}
dX_1(t)&=\sqrt{2a(X_1(t))}d\beta_1(t)+b(X_1(t))dt+dK^l(t)-dK_1^+(t)\nonumber ,\\
dY_1(t)&=\sqrt{2a(Y_1(t))}d\gamma_1(t)+(a'(Y_1(t))-b(Y_1(t)))dt\nonumber ,\\
dX_2(t)&=\sqrt{2a(X_2(t))}d\beta_2(t)+b(X_2(t))dt+dK_2^-(t)-dK_2^+(t)\nonumber ,\\
\vdots\\
dY_n(t)&=\sqrt{2a(Y_n(t))}d\gamma_n(t)+(a'(Y_n(t))-b(Y_n(t)))dt\nonumber , \\
dX_{n+1}(t)&=\sqrt{2a(X_{n+1}(t))}d\beta_{n+1}(t)+b(X_{n+1}(t))dt+dK_{n+1}^-(t)-dK^r(t)\nonumber.
\end{align}
Here $\beta_1,\cdots,\beta_{n+1},\gamma_1,\cdots,\gamma_n$ are independent standard Brownian motions and the positive finite variation processes $K^l,K^r,K_i^+,K_i^-$ are such that $K^l$ (possibly zero) increases only when $X_1=l$, $K^r$ (possibly zero) increases only when $X_{n+1}=r$,
$K^+_i(t)$ increases only when $Y_i=X_i$ and $K^-_{i}(t)$ only when $Y_{i-1}=X_i$, so that $\left(X_1(t)\le Y_1(t)\le \cdots \le X_{n+1}(t);t\ge 0\right)\in W^{n,n+1}(I)$ up to time $T^{n,n+1}$. Note that, $X$ either reflects at $l$ or $r$ or does not visit them at all by our boundary conditions (\ref{bc1l}) and (\ref{bc1r}). The problematic possibility of an $X$ component being trapped between a $Y$ particle and a boundary point and pushed in opposite directions does not arise, since the whole process is then instantly stopped.
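To make the reflection mechanism concrete, the following is a minimal numerical sketch (illustrative only, not used in any proof) of the system (\ref{System1SDEs}) in the special Brownian case $a\equiv 1$, $b\equiv 0$ on $I=\mathbb{R}$, where the local-time pushes $K_i^{\pm}$ are realized by clamping each $X_i$ into $[Y_{i-1},Y_i]$ after an Euler step; all function and variable names below are ours.

```python
import numpy as np

def step_two_level(x, y, dt, rng):
    """One Euler step for the two-level system in W^{n,n+1}: n+1 X-particles
    reflected off n Y-particles (Brownian case a = 1, b = 0, I = R, so the
    boundary terms K^l, K^r vanish).  Returns (x, y, alive); alive turns
    False at the collision time T^{n,n+1} of the autonomous Y level."""
    n = len(y)
    y_new = y + np.sqrt(2 * dt) * rng.standard_normal(n)      # autonomous Y level
    if np.any(np.diff(y_new) <= 0):                           # Y particles collided
        return x, y, False
    x_new = x + np.sqrt(2 * dt) * rng.standard_normal(n + 1)  # free L-moves
    lo = np.concatenate(([-np.inf], y_new))                   # lower barriers Y_{i-1}
    hi = np.concatenate((y_new, [np.inf]))                    # upper barriers Y_i
    return np.clip(x_new, lo, hi), y_new, True                # reflection = clamping

rng = np.random.default_rng(0)
x, y = np.array([-1.0, 0.0, 1.0]), np.array([-0.5, 0.5])
alive = True
for _ in range(1000):
    if not alive:
        break
    x, y, alive = step_two_level(x, y, 1e-4, rng)
# the interlacing X_1 <= Y_1 <= X_2 <= Y_2 <= X_3 is preserved at every step
```

Clamping is only a crude stand-in for the Skorokhod reflection terms, but it preserves the defining interlacing constraint of $W^{n,n+1}$ at every step.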
The fact that these $SDEs$ are well-posed, so that in particular $(X,Y)$ is Markovian, is proven in Proposition \ref{wellposedness} under a Yamada-Watanabe condition (incorporating a linear growth assumption), that we now define precisely and abbreviate throughout by $(\textbf{YW})$. Note that, the functions $a(\cdot)$ and $b(\cdot)$ initially defined in $I^{\circ}$ can in certain circumstances be continuously extended to the boundary points $l$ and $r$ and this is implicit in assumption (\textbf{YW}).
\begin{defn}[Assumption (\textbf{YW})]
Let $I$ be an interval with endpoints $l<r$ and suppose $\rho$ is a non-decreasing function from $(0,\infty)$ to itself such that $\int_{0^+}^{}\frac{dx}{\rho(x)}=\infty$. Consider the following condition on functions $a:I\to \mathbb{R}_+$ and $b: I\to \mathbb{R}$,
\begin{align*}
&|\sqrt {a(x)}-\sqrt {a(y)}|^2 \le \rho(|x-y|),\\
&|b(x)-b(y)|\le C|x-y|.
\end{align*}
Moreover, we assume that the functions $\sqrt {a(\cdot)}$ and $b(\cdot)$ are of at most linear growth (for $b(\cdot)$ this is immediate by Lipschitz continuity above).
We will say that the $L$-diffusion satisfies (\textbf{YW}) if its diffusion and drift coefficients $a$ and $b$ satisfy (\textbf{YW}).
\end{defn}
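For a concrete instance of assumption $(\textbf{YW})$ (an illustration of ours, not taken from the text): the square-root coefficient $\sqrt{a(x)}=\sqrt{x}$ on $[0,\infty)$ satisfies $|\sqrt{x}-\sqrt{y}|^2\le|x-y|$, so one may take $\rho(u)=u$, for which $\int_{0^+}\frac{du}{\rho(u)}=\infty$; a quick numerical check:

```python
import numpy as np

# Check |sqrt(x) - sqrt(y)|^2 <= |x - y| on random points, i.e. that
# rho(u) = u works for a(x) = x (and int_{0+} du/u diverges, as (YW) requires).
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 10, 10_000), rng.uniform(0, 10, 10_000)
lhs = (np.sqrt(x) - np.sqrt(y)) ** 2
gap = np.abs(x - y) - lhs   # = 2*sqrt(min(x,y))*|sqrt(x)-sqrt(y)| >= 0
```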
Moreover, by virtue of the following result these $SDEs$ provide a precise description of the dynamics of the two-level process $Z=(X,Y)$ associated with $Q_t^{n,n+1}$. Proposition \ref{dynamics1} below will be proven in Section \ref{SectionTransitionDensities} as either Proposition \ref{transititiondensities1} or Proposition \ref{transitiondensitiesreflecting}, depending on the boundary conditions.
\begin{prop}\label{dynamics1}
Assume $(\textbf{R})$ and $(\textbf{BC+})$ hold for the $L$-diffusion and $(\textbf{YW})$ holds for both the $L$ and $\hat{L}$ diffusions. Then, $Q_t^{n,n+1}$ is the sub-Markov semigroup associated with the (Markovian) system of $SDEs$ (\ref{System1SDEs}) in the sense that if $\textbf{Q}^{n,n+1}_{x,y}$ governs the processes $(X,Y)$ satisfying the $SDEs$ (\ref{System1SDEs}) and with initial condition $(x,y)$ then for any $f$ continuous with compact support and fixed $T>0$,
\begin{align*}
\int_{W^{n,n+1}(I^\circ)}^{}q_T^{n,n+1}((x,y),(x',y'))f(x',y')dx'dy'=\textbf{Q}^{n,n+1}_{x,y}\big[f(X(T),Y(T))\textbf{1}(T<T^{n,n+1})\big].
\end{align*}
\end{prop}
For further motivation regarding the definition of $Q_t^{n,n+1}$ and moreover, a completely different argument for its semigroup property, that however does not describe explicitly the dynamics of $X$ and $Y$, we refer the reader to the next subsection \ref{coalescinginterpretation}.
We now briefly study some properties of $Q_t^{n,n+1}$, that are immediate from its algebraic structure (with no reference to the $SDEs$ above required). In order to proceed and fix notations for the rest of this section, start by defining the Karlin-McGregor semigroup $P_t^n$ associated with $n$ $L$-diffusions in $I^\circ$ given by the transition density, with $x,y \in W^n(I^\circ)$,
\begin{align}\label{KM1}
p^n_t(x,y)dy=\det(p_t(x_i,y_j))_{i,j=1}^ndy .
\end{align}
Note that, in the case an exit or regular absorbing boundary point exists, $P_t^1$ is the semigroup of the $L$-diffusion \textit{killed} and not absorbed at that point. In particular it is not the same as $\mathsf{P}_t$ which is a Markov semigroup. Similarly, define the Karlin-McGregor semigroup $\hat{P}_t^n$ associated with $n$ $\hat{L}$-diffusions by,
\begin{align}\label{KM2}
\hat{p}^n_t(x,y)dy=\det(\hat{p}_t(x_i,y_j))_{i,j=1}^ndy ,
\end{align}
with $x,y \in W^n(I^\circ)$. The same comment regarding absorbing and exit boundary points applies here as well.
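As a quick illustrative computation (ours, not from the text): for $n$ Brownian motions with generator $L=\frac{d^2}{dx^2}$, the Karlin-McGregor density (\ref{KM1}) is $\det\big(p_t(x_i,y_j)\big)$ with heat kernel $p_t(x,y)=e^{-(x-y)^2/4t}/\sqrt{4\pi t}$, and total positivity of the Gaussian kernel makes the determinant non-negative on ordered configurations:

```python
import numpy as np

def km_density(x, y, t):
    """Karlin-McGregor transition density det(p_t(x_i, y_j)) for n independent
    Brownian motions with generator d^2/dx^2 (variance 2t), killed on collision."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    P = np.exp(-(x[:, None] - y[None, :]) ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)
    return np.linalg.det(P)

# positive for ordered x, y in the Weyl chamber; antisymmetric under swapping two y's
val = km_density([0.0, 1.0, 2.0], [0.2, 0.9, 2.5], t=0.5)
swapped = km_density([0.0, 1.0, 2.0], [0.9, 0.2, 2.5], t=0.5)
```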
Now, define the operators $\Pi_{n,n+1}$, induced by the projections on the $Y$ level as follows with $f$ a bounded Borel function on $W^{n}(I^{\circ})$,
\begin{align*}
(\Pi_{n,n+1}f)(x,y)=f(y).
\end{align*}
The following proposition immediately follows by performing the $dx'$ integration in the explicit formula for the block determinant (as already implied in the proof that $Q_t^{n,n+1}1\le1$).
\begin{prop} Assume $(\textbf{R})$ and $(\textbf{BC})$ hold for the $L$-diffusion. For $t>0$ and $f$ a bounded Borel function on $W^{n}(I^{\circ})$ we have,
\begin{align}\label{firstdynkinintertwining}
\Pi_{n,n+1}\hat{P}_t^{n}f&=Q_t^{n,n+1}\Pi_{n,n+1}f .
\end{align}
\end{prop}
The fact that $Y$ is distributed as $n$ independent $\hat{L}$-diffusions killed when they collide or when they hit $l$ or $r$ is already implicit in the statement of Proposition \ref{dynamics1}. However, it is also a probabilistic consequence of the proposition above. Namely, the intertwining relation (\ref{firstdynkinintertwining}), being an instance of Dynkin's criterion (see for example Exercise 1.17 Chapter 3 of \cite{RevuzYor}), implies that the evolution of $Y$ is Markovian with respect to the joint filtration of $X$ and $Y$, i.e. of the process $Z$, and we take this as the definition of $Y$ being autonomous. Moreover, $Y$ evolves according to $\hat{P}_t^n$. Thus, the $Y$ components form an \textit{autonomous} \textit{diffusion} process. Finally, by taking $f\equiv 1$ above we get that the finite lifetime of $Z$ exactly corresponds to the killing time of $Y$, which we denote by $T^{n,n+1}$.
Similarly, we define the kernel $q_t^{n,n}((x,y),(x',y'))dx'dy'$ on $W^{n,n}(I^\circ)$ as follows:
\begin{defn}
For $(x,y),(x',y')\in W^{n,n}(I^\circ)$ define $q_t^{n,n}((x,y),(x',y'))$ by,
\begin{align*}
& q_t^{n,n}((x,y),(x',y'))=\\
&=\frac{\prod_{i=1}^{n}\hat{m}(y'_i)}{\prod_{i=1}^{n}\hat{m}(y_i)}(-1)^n\frac{\partial^n}{\partial_{y_1}\cdots\partial_{y_n}}\frac{\partial^{n}}{\partial_{x'_1}\cdots\partial_{x'_{n}}}\mathbb{P}\big(\boldsymbol{\Phi}_{0,t}(x_i)\le x_i',\boldsymbol{\Phi}_{0,t}(y_j)\le y_j' \ \ \text{for all} \ \ i,j \big).
\end{align*}
\end{defn}
We note that as before $q_t^{n,n}$ can in fact be written out explicitly,
\begin{align}
{q}_t^{n,n}((x,y),(x',y'))=\det\
\begin{pmatrix}
{A}_t(x,x') & {B}_t(x,y')\\
{C}_t(y,x') & {D}_t(y,y')
\end{pmatrix}
\end{align}
where,
\begin{align*}
{A}_t(x,x')_{ij} &=\partial_{x'_j}\mathsf{P}_t \textbf{1}_{[l,x_j']} (x_i) = p_t(x_i,x_j') ,\\
{B}_t(x,y')_{ij}&=\hat{m}(y'_j)(\mathsf{P}_t \textbf{1}_{[l,y'_j]}(x_i) -\textbf{1}(j> i)),\\
{C}_t(y,x')_{ij}&=-\hat{m}^{-1}(y_i)\partial_{y_i}\partial_{x'_j}\mathsf{P}_t \textbf{1}_{[l,x'_j]}(y_i)=-\mathcal{D}_s^{y_i}p_t(y_i,x_j'),\\
{D}_t(y,y')_{ij}&=-\frac{\hat{m}(y'_j)}{\hat{m}(y_i)}\partial_{y_i} \mathsf{P}_t \textbf{1}_{[l,y'_j]}(y_i)=\hat{p}_t(y_i,y_j').
\end{align*}
\begin{rmk}
Comparing with the $q_t^{n,n+1}$ formulae everything is the same except for the indicator function being $\textbf{1}(j> i)$ instead of $\textbf{1}(j \ge i)$.
\end{rmk}
Define the operator $Q_t^{n,n}$ for $t>0$ acting on bounded Borel functions on $W^{n,n}(I^\circ)$ by,
\begin{align}\label{semigroupdefinition2}
(Q_t^{n,n}f)(x,y)=\int_{W^{n,n}(I^\circ)}^{}q_t^{n,n}((x,y),(x',y'))f(x',y')dx'dy'.
\end{align}
Then with the analogous considerations as for $Q_t^{n,n+1}$ (see subsection \ref{coalescinginterpretation} as well), we can see that $Q_t^{n,n}$ should form a sub-Markov semigroup, to which we can associate a Markov process $Z$, with possibly finite lifetime, taking values in $W^{n,n}(I^\circ)$, the evolution of which we now make precise.
To proceed as before, we assume that the $L$-diffusion is given by an $SDE$ and we consider the following system of $SDEs$ with reflection in $W^{n,n}$ which can be described as follows. The $Y$ components evolve as $n$ autonomous $\hat{L}$-diffusions killed when they collide or when (if) they hit the boundary point $r$, a time which we denote by $T^{n,n}$. The $X$ components evolve as $n$ $L$-diffusions being kept apart by hard reflection on the $Y$ particles.
\begin{align}\label{System2}
dY_1(t)&=\sqrt{2a(Y_1(t))}d\gamma_1(t)+(a'(Y_1(t))-b(Y_1(t)))dt+dK^l(t),\nonumber\\
dX_1(t)&=\sqrt{2a(X_1(t))}d\beta_1(t)+b(X_1(t))dt+dK_1^+(t)-dK_1^-(t),\nonumber\\
\vdots\\
dY_n(t)&=\sqrt{2a(Y_n(t))}d\gamma_n(t)+(a'(Y_n(t))-b(Y_n(t)))dt,\nonumber\\
dX_{n}(t)&=\sqrt{2a(X_{n}(t))}d\beta_{n}(t)+b(X_{n}(t))dt+dK_{n}^+(t)-dK^r(t).\nonumber
\end{align}
Here $\beta_1,\cdots,\beta_{n},\gamma_1,\cdots,\gamma_n$ are independent standard Brownian motions and the positive finite variation processes $K^l,K^r, K_i^+,K_i^-$ are such that $K^l$ (possibly zero) increases only when $Y_1=l$, $K^r$ (possibly zero) increases only when $X_n=r$,
$K^+_i(t)$ increases only when $Y_i=X_i$ and $K^-_{i}(t)$ only when $Y_{i-1}=X_i$, so that $\left(Y_1(t)\le X_1(t)\le \cdots \le X_{n}(t);t\ge 0\right)\in W^{n,n}(I)$ up to $T^{n,n}$. Note that, $Y$ reflects at the boundary point $l$ or does not visit it at all, and similarly $X$ reflects at $r$ or does not reach it at all, by our boundary assumptions (\ref{bc2l}) and (\ref{bc2r}). The intuitively problematic issue of $Y_n$ pushing $X_n$ upwards at $r$ does not arise, since the whole process is stopped at such an instant.
That these $SDEs$ are well-posed, so that in particular $(X,Y)$ is Markovian, again follows from the arguments of Proposition \ref{wellposedness}. As before, we have the following precise description of the dynamics of the two-level process $Z=(X,Y)$ associated with $Q_t^{n,n}$.
\begin{prop}\label{dynamics2}
Assume $(\textbf{R})$ and $(\textbf{BC+})$ hold for the $L$-diffusion and $(\textbf{YW})$ holds for both the $L$ and $\hat{L}$ diffusions. Then, $Q_t^{n,n}$ is the sub-Markov semigroup associated with the (Markovian) system of $SDEs$ (\ref{System2}) in the sense that if $\textbf{Q}^{n,n}_{x,y}$ governs the processes $(X,Y)$ satisfying the $SDEs$ (\ref{System2}) with initial condition $(x,y)$ then for any $f$ continuous with compact support and fixed $T>0$,
\begin{align*}
\int_{W^{n,n}(I^\circ)}^{}q_T^{n,n}((x,y),(x',y'))f(x',y')dx'dy'=\textbf{Q}^{n,n}_{x,y}\big[f(X(T),Y(T))\textbf{1}(T<T^{n,n})\big].
\end{align*}
\end{prop}
We also define, analogously to before, an operator $\Pi_{n,n}$, induced by the projection on the $Y$ level by,
\begin{align*}
(\Pi_{n,n}f)(x,y)=f(y).
\end{align*}
We have the following proposition which immediately follows by performing the $dx'$ integration in equation (\ref{semigroupdefinition2}).
\begin{prop} Assume $(\textbf{R})$ and $(\textbf{BC})$ hold for the $L$-diffusion. For $t>0$ and $f$ a bounded Borel function on $W^{n}(I^\circ)$ we have,
\begin{align}
\Pi_{n,n}\hat{P}_t^{n}f&=Q_t^{n,n}\Pi_{n,n}f.
\end{align}
\end{prop}
This again implies that the evolution of $Y$ is Markovian with respect to the joint filtration of $X$ and $Y$. Furthermore, $Y$ is distributed as $n$ $\hat{L}$-diffusions killed when they collide or when (if) they hit the boundary point $r$ (note that the difference here to $W^{n,n+1}$ is because of the asymmetry between $X$ and $Y$ and our standing assumptions (\ref{bc2l}) and (\ref{bc2r})). Hence, the $Y$ components form a \textit{diffusion} process and they are \textit{autonomous}. The finite lifetime of $Z$, analogously to before (by taking $f\equiv 1$ in the proposition above), exactly corresponds to the killing time of $Y$, which we denote by $T^{n,n}$. As before, this is already implicit in the statement of Proposition \ref{dynamics2}.
Finally, we can define the kernel $q_t^{n+1,n}((x,y),(x',y'))dx'dy'$ on $W^{n+1,n}(I^\circ)$ in an analogous way and also the operator $Q_t^{n+1,n}$ for $t>0$ acting on bounded Borel functions on $W^{n+1,n}(I^\circ)$ as well. The description of the associated process $Z$ in $W^{n+1,n}(I^\circ)$ in words is as follows. The $Y$ components evolve as $n+1$ autonomous $\hat{L}$-diffusions killed when they collide (by our boundary conditions (\ref{bc3l}) and (\ref{bc3r}) if the $Y$ particles do visit $l$ or $r$ they are reflecting there) and the $X$ components evolve as $n$ $L$-diffusions reflected on the $Y$ particles. These dynamics can be described in terms of $SDEs$ with reflection under completely analogous assumptions. The details are omitted.
\subsection{Stochastic coalescing flow interpretation} \label{coalescinginterpretation}
The definition of $q_t^{n,n+1}$, and similarly of $q_t^{n,n}$, might look rather mysterious and surprising. It is originally motivated from considering stochastic coalescing flows. Briefly, the finite system $\left(\boldsymbol{\Phi}_{0,t}(x_1),\cdots,\boldsymbol{\Phi}_{0,t}(x_n);t \ge 0\right)$ can be extended to an infinite system of coalescing $L$-diffusions starting from each space-time point, denoted by $(\boldsymbol{\Phi}_{s,t}(\cdot),s\le t)$. This is well documented in Theorem 4.1 of \cite{LeJan} for example. The random family of maps $(\boldsymbol{\Phi}_{s,t},s\le t)$ from $I$ to $I$ enjoys, among others, the following natural and intuitive properties: the \textit{cocycle} or \textit{flow} property $\boldsymbol{\Phi}_{t_1,t_3}=\boldsymbol{\Phi}_{t_2,t_3}\circ\boldsymbol{\Phi}_{t_1,t_2}$, \textit{independence of its increments} $\boldsymbol{\Phi}_{t_1,t_2}\perp\boldsymbol{\Phi}_{t_3,t_4}$ for $t_2\le t_3$ and \textit{stationarity} $\boldsymbol{\Phi}_{t_1,t_2}\overset{law}{=}\boldsymbol{\Phi}_{0,t_2-t_1}$. Finally, we can consider its generalized inverse $\boldsymbol{\Phi}^{-1}_{s,t}(x)=\sup\{w:\boldsymbol{\Phi}_{s,t}(w)\le x\}$, which is well defined since $\boldsymbol{\Phi}_{s,t}$ is non-decreasing.
With these notations in place $q_t^{n,n+1}$ can also be written as,
\begin{align}\label{flowtransition1}
q_t^{n,n+1}((x,y),(x',y'))dx'dy=\frac{\prod_{i=1}^{n}\hat{m}(y'_i)}{\prod_{i=1}^{n}\hat{m}(y_i)}\mathbb{P}\big(\boldsymbol{\Phi}_{0,t}(x_i)\in dx_i',\boldsymbol{\Phi}^{-1}_{0,t}(y'_j)\in dy_j \ \ \text{for all} \ \ i,j \big).
\end{align}
We now sketch an argument that gives the semigroup property $ Q_{t+s}^{n,n+1}=Q_t^{n,n+1}Q_s^{n,n+1}$. We do not try to give all the details that would render it completely rigorous, mainly because it cannot be used to precisely describe the dynamics of $Q_t^{n,n+1}$, but nevertheless all the main steps are spelled out.
All equalities below should be understood after being integrated with respect to $dx''$ and $dy$ over arbitrary Borel sets. The first equality is by definition. The second equality follows from the \textit{cocycle} property and conditioning on the values of $\boldsymbol{\Phi}_{0,s}(x_i)$ and $\boldsymbol{\Phi}^{-1}_{s,s+t}(y''_j)$. Most importantly, this is where the \textit{boundary behaviour assumptions} (\ref{bc1l}) and (\ref{bc1r}) we made at the beginning of this subsection are used. These ensure that no possible contributions from atoms on $\partial I$ are missed; namely the random variable $\boldsymbol{\Phi}_{0,s}(x_i)$ is supported (its distribution gives full mass) in $I^\circ$. Moreover, it is not too hard to see from the coalescing property of the flow that, we can restrict the integration over $(x',y')\in W^{n,n+1}(I^\circ)$ for otherwise the integrand vanishes. Finally, the third equality follows from \textit{independence} of the increments and the fourth one by \textit{stationarity} of the flow.
\begin{align*}
&q_{s+t}^{n,n+1}((x,y),(x'',y''))dx''dy=\frac{\prod_{i=1}^{n}\hat{m}(y''_i)}{\prod_{i=1}^{n}\hat{m}(y_i)}\mathbb{P}\big(\boldsymbol{\Phi}_{0,s+t}(x_i)\in dx_i'',\boldsymbol{\Phi}^{-1}_{0,s+t}(y''_j)\in dy_j \ \ \text{for all} \ \ i,j \big) \\
&=\frac{\prod_{i=1}^{n}\hat{m}(y''_i)}{\prod_{i=1}^{n}\hat{m}(y_i)}\int_{(x',y')\in W^{n,n+1}(I^\circ)}^{}\mathbb{P}\big(\boldsymbol{\Phi}_{0,s}(x_i)\in dx_i',\boldsymbol{\Phi}_{s,s+t}(x_i')\in dx_i'',\boldsymbol{\Phi}^{-1}_{0,s}(y'_j)\in dy_j,\boldsymbol{\Phi}^{-1}_{s,s+t}(y''_j)\in dy_j' \big)
\\
& =\int_{(x',y')\in W^{n,n+1}(I^\circ)}^{} \frac{\prod_{i=1}^{n}\hat{m}(y'_i)}{\prod_{i=1}^{n}\hat{m}(y_i)}\mathbb{P}\big(\boldsymbol{\Phi}_{0,s}(x_i)\in dx_i',\boldsymbol{\Phi}^{-1}_{0,s}(y'_j)\in dy_j \big)\\
&\times \frac{\prod_{i=1}^{n}\hat{m}(y''_i)}{\prod_{i=1}^{n}\hat{m}(y'_i)}\mathbb{P}\big(\boldsymbol{\Phi}_{s,s+t}(x'_i)\in dx_i'',\boldsymbol{\Phi}^{-1}_{s,s+t}(y''_j)\in dy_j' \big)\\
&=\int_{(x',y')\in W^{n,n+1}(I^\circ)}^{} q_s^{n,n+1}((x,y),(x',y'))q_t^{n,n+1}((x',y'),(x'',y''))dx'dy' dx''dy .
\end{align*}
\subsection{Intertwining and Markov functions}
In this subsection $(n_1,n_2)$ denotes one of $\{(n,n-1),(n,n),(n,n+1)\}$. First, recall the definitions of $P_t^n$ and $\hat{P}_t^n$ given in (\ref{KM1}) and (\ref{KM2}) respectively. Similarly, we record here again, the following proposition and recall that it can in principle completely describe the evolution of the $Y$ particles and characterizes the finite lifetime of the process $Z$ as the killing time of $Y$.
\begin{prop}\label{DynkinProposition}Assume $(\textbf{R})$ and $(\textbf{BC})$ hold for the $L$-diffusion. For $t>0$ and $f$ a bounded Borel function on $W^{n_1}(I^\circ)$ we have,
\begin{align}
\Pi_{n_1,n_2}\hat{P}_t^{n_1}f&=Q_t^{n_1,n_2}\Pi_{n_1,n_2}f.
\end{align}
\end{prop}
Now, we define the following integral operator $\Lambda_{n_1,n_2}$ acting on Borel functions on $W^{n_1,n_2}(I^\circ)$, whenever $f$ is integrable as,
\begin{align}\label{PreIntertwiningKernel}
(\Lambda_{n_1,n_2}f)(x) &= \int_{W^{n_1,n_2}(x)}^{}\prod_{i=1}^{n_1}\hat{m}(y_i)f(x,y)dy,
\end{align}
where we remind the reader that $\hat{m}(\cdot)$ is the density with respect to Lebesgue measure of the speed measure of the diffusion with generator $\hat{L}$.
The following intertwining relation is the fundamental ingredient needed for applying the theory of Markov functions, originating with the seminal paper of Rogers and Pitman \cite{RogersPitman}. This proposition directly follows by performing the $dy$ integration in the explicit formula of the block determinant (or alternatively by invoking the coalescing property of the stochastic flow $\left(\boldsymbol{\Phi}_{s,t}(\cdot);s \le t\right)$ and the original definitions).
\begin{prop} Assume $(\textbf{R})$ and $(\textbf{BC})$ hold for the $L$-diffusion. For $t>0$ we have the following equality of positive kernels,
\begin{align}
P_t^{n_2}\Lambda_{n_1,n_2}&=\Lambda_{n_1,n_2}Q_t^{n_1,n_2}.
\end{align}
\end{prop}
Combining the two propositions above gives the following relation for the Karlin-McGregor semigroups,
\begin{align}\label{KMintertwining}
P_t^{n_2}\Lambda_{n_1,n_2}\Pi_{n_1,n_2}&=\Lambda_{n_1,n_2}\Pi_{n_1,n_2}\hat{P}_t^{n_1}.
\end{align}
Namely, the two semigroups are themselves intertwined with kernel,
\begin{align*}
\left(\Lambda_{n_1,n_2}\Pi_{n_1,n_2}f\right)(x)=\int_{W^{n_1,n_2}(x)}^{}\prod_{i=1}^{n_1}\hat{m}(y_i)f(y)dy.
\end{align*}
This implies the following. Suppose $\hat{h}_{n_1}$ is a strictly positive (in $\mathring{W}^{n_1}$) eigenfunction for $\hat{P}_t^{n_1}$ namely, $\hat{P}_t^{n_1}\hat{h}_{n_1}=e^{\lambda_{n_1}t}\hat{h}_{n_1}$, then (with both sides possibly being infinite),
\begin{align*}
(P_t^{n_2}\Lambda_{n_1,n_2}\Pi_{n_1,n_2}\hat{h}_{n_1})(x)&=e^{\lambda_{n_1}t}(\Lambda_{n_1,n_2}\Pi_{n_1,n_2}\hat{h}_{n_1})(x).
\end{align*}
We are interested in strictly positive eigenfunctions because they allow us to define Markov processes; however, non-positive eigenfunctions can be constructed in this way as well.
We now finally arrive at our main results. We need to make precise one more notion, already referenced several times in the introduction. For a possibly sub-Markov semigroup $\left(\mathfrak{P}_t;t \ge 0\right)$ or, more generally, for fixed $t$, a sub-Markov kernel with eigenfunction $\mathfrak{h}$ with eigenvalue $e^{ct}$, we define Doob's $h$-transform by $
e^{-ct}\mathfrak{h}^{-1}\circ \mathfrak{P}_t \circ \mathfrak{h}$.
Observe that, this is now an honest Markov semigroup (or Markov kernel).
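A finite-state toy analogue (our illustration, with made-up numbers) makes this concrete: conjugating a sub-Markov kernel by a strictly positive eigenfunction and dividing by the eigenvalue produces rows summing to one.

```python
import numpy as np

# Sub-Markov kernel (rows sum to < 1) playing the role of the kernel P_t.
P = np.array([[0.5, 0.2],
              [0.3, 0.4]])
w, V = np.linalg.eig(P)
i = np.argmax(w.real)                 # Perron eigenvalue e^{ct} =: lam
lam = w[i].real
h = np.abs(V[:, i].real)              # Perron eigenvector: strictly positive
# Doob h-transform: Q_ij = P_ij * h_j / (lam * h_i)
Q = (P * h[None, :]) / (lam * h[:, None])
# Q is an honest Markov kernel: each row sums to exactly 1
```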
If $\hat{h}_{n_1}$ is an eigenfunction for $\hat{P}_t^{n_1}$, strictly positive in $\mathring{W}^{n_1}$, then so is the function $\hat{h}_{n_1,n_2}(x,y)=\hat{h}_{n_1}(y)$ for $Q_t^{n_1,n_2}$, by Proposition \ref{DynkinProposition}. We can thus define the proper Markov kernel $Q_t^{n_1,n_2,\hat{h}_{n_1}}$, which is the $h$-transform of $Q_t^{n_1,n_2}$ by $\hat{h}_{n_1,n_2}$. Define $h_{n_2}(x)$, strictly positive in $\mathring{W}^{n_2}$, as follows, assuming that the integrals are finite in the case of $W^{n,n}(I^\circ)$ and $W^{n+1,n}(I^\circ)$,
\begin{align*}
h_{n_2}(x)=(\Lambda_{n_1,n_2}\Pi_{n_1,n_2}\hat{h}_{n_1})(x),
\end{align*}
and the Markov Kernel $\Lambda^{\hat{h}_{n_1}}_{n_1,n_2}(x,\cdot)$ with $x \in \mathring{W}^{n_2}$ by,
\begin{align*}
(\Lambda^{\hat{h}_{n_1}}_{n_1,n_2}f)(x)=\frac{1}{h_{n_2}(x)}\int_{W^{n_1,n_2}(x)}^{}\prod_{i=1}^{n_1}\hat{m}(y_i)\hat{h}_{n_1}(y)f(x,y)dy.
\end{align*}
Finally, defining $P_t^{n_2,h_{n_2}}$ to be the Karlin-McGregor semigroup $P_t^{n_2}$ $h$-transformed by $h_{n_2}$ we obtain:
\begin{prop} \label{Master} Assume $(\textbf{R})$ and $(\textbf{BC})$ hold for the $L$-diffusion. Let $Q_t^{n_1,n_2}$ denote one of the operators induced by the sub-Markov kernels on $W^{n_1,n_2}(I^\circ)$ defined in the previous subsection. Let $\hat{h}_{n_1}$ be a strictly positive eigenfunction for $\hat{P}_t^{n_1}$ and assume that $h_{n_2}(x)=(\Lambda_{n_1,n_2}\Pi_{n_1,n_2}\hat{h}_{n_1})(x)$ is finite in $W^{n_2}(I^\circ)$, so that in particular $\Lambda^{\hat{h}_{n_1}}_{n_1,n_2}$ is a Markov kernel. Then, with the notations of the preceding paragraph we have the following relation for $t>0$,
\begin{align} \label{MasterIntertwining}
P_t^{n_2,h_{n_2}}\Lambda^{\hat{h}_{n_1}}_{n_1,n_2}f&=\Lambda^{\hat{h}_{n_1}}_{n_1,n_2}Q_t^{n_1,n_2,\hat{h}_{n_1}}f ,
\end{align}
with $f$ a bounded Borel function in $W^{n_1,n_2}(I^{\circ})$.
\end{prop}
This intertwining relation and the theory of Markov functions (see Section 2 of \cite{RogersPitman} for example) immediately imply the following corollary:
\begin{cor}\label{mastercorollary}
Assume $Z=(X,Y)$ is a Markov process with semigroup $Q_t^{n_1,n_2,\hat{h}_{n_1}}$, then the $X$ component is distributed as a Markov process with semigroup $P_t^{n_2,h_{n_2}}$ started from $x$ if $(X,Y)$ is started from $\Lambda^{\hat{h}_{n_1}}_{n_1,n_2}(x,\cdot)$. Moreover, the conditional distribution of $Y(t)$ given $\left(X(s);s \le t\right)$ is $\Lambda^{\hat{h}_{n_1}}_{n_1,n_2}(X(t),\cdot)$.
\end{cor}
We give a final definition in the case of $W^{n,n+1}$ only, that has a natural analogue for $W^{n,n}$ and $W^{n+1,n}$ (we shall elaborate on the notion introduced below in Section 5.1 on well-posedness of $SDEs$ with reflection). Take $Y=(Y_1,\cdots,Y_n)$ to be an $n$-dimensional system of \textit{non-intersecting} paths in $\mathring{W}^n(I^{\circ})$, so that in particular $Y_1<Y_2<\cdots<Y_n$. Then, by \textit{$X$ is a system of $n+1$ $L$-diffusions reflected off $Y$} we mean processes $\left(X_1(t),\cdots,X_{n+1}(t);t \ge 0\right)$, satisfying $X_1(t)\le Y_1(t) \le X_2(t) \le \cdots \le X_{n+1}(t)$ for all $t \ge 0$, and so that the following $SDEs$ hold,
\begin{align}\label{systemofreflectingLdiffusions}
dX_1(t)&=\sqrt{2a(X_1(t))}d\beta_1(t)+b(X_1(t))dt+dK^l(t)-{dK_1^+(t)},\nonumber\\
\vdots\nonumber\\
dX_j(t)&=\sqrt{2a(X_j(t))}d\beta_j(t)+b(X_j(t))dt+{dK_j^-(t)}-{dK_j^+(t)},\\
\vdots\nonumber\\
dX_{n+1}(t)&=\sqrt{2a(X_{n+1}(t))}d\beta_{n+1}(t)+b(X_{n+1}(t))dt+{dK_{n+1}^-(t)}-dK^r(t),\nonumber
\end{align}
where the positive finite variation processes $K^l,K^r,K_i^+,K_i^-$ are such that $K^l$ increases only when $X_1=l$, $K^r$ increases only when $X_{n+1}=r$,
$K^+_i(t)$ increases only when $Y_i=X_i$ and $K^-_{i}(t)$ only when $Y_{i-1}=X_i$, so that $(X_1(t)\le Y_1(t)\le \cdots \le X_{n+1}(t))\in W^{n,n+1}(I)$ forever. Here $\beta_1,\cdots,\beta_{n+1}$ are independent standard Brownian motions which are moreover \textit{independent} of $Y$. The reader should observe that the dynamics between $(X,Y)$ are exactly the ones prescribed in the system of $SDEs$ (\ref{System1SDEs}) with the difference being that now the process has infinite lifetime. This can be achieved from (\ref{System1SDEs}) by $h$-transforming the $Y$ process as explained in this section to have infinite lifetime. By pathwise uniqueness of solutions to reflecting $SDEs$, with coefficients satisfying $(\mathbf{YW})$, in continuous time-dependent domains proven in Proposition \ref{wellposedness}, under any absolutely continuous change of measure for the $(X,Y)$-process that depends only on $Y$ (a Doob $h$-transform in particular), the equations (\ref{systemofreflectingLdiffusions}) still hold with the $\beta_i$ independent Brownian motions which moreover remain independent of the $Y$ process. We thus arrive at our main theorem:
\begin{thm} \label{MasterDynamics}
Assume $(\textbf{R})$ and $(\textbf{BC+})$ hold for the $L$-diffusion and $(\textbf{YW})$ holds for both the $L$ and $\hat{L}$ diffusions. Moreover, assume $\hat{h}_n$ is a strictly positive eigenfunction for $\hat{P}_t^{n}$. Suppose $Y$ consists of $n$ non-intersecting $\hat{L}$-diffusions $h$-transformed by $\hat{h}_n$, with transition semigroup $\hat{P}_{t}^{n,\hat{h}_n}$, and $X$ is a system of $n+1$ $L$-diffusions reflected off $Y$ started according to the distribution $\Lambda^{\hat{h}_{n}}_{n,n+1}(x,\cdot)$ for some $x\in\mathring{W}^{n+1}(I)$. Then $X$ is distributed as a diffusion process with semigroup $P_t^{n+1,h_{n+1}}$ started from $x$, where $h_{n+1}=\Lambda_{n,n+1}\Pi_{n,n+1}\hat{h}_{n}$.
\end{thm}
\begin{proof}
By Proposition \ref{dynamics1} and the discussion above, the process $(X,Y)$ evolves according to the Markov semigroup $Q_t^{n,n+1,\hat{h}_{n}}$. Then, an application of the Rogers-Pitman Markov functions criterion in \cite{RogersPitman} with the function $\phi(x,y)=x$ and the intertwining (\ref{MasterIntertwining}) gives that, under the initial law $\Lambda^{\hat{h}_{n}}_{n,n+1}(x,\cdot)$ for $(X,Y)$, $\left(X(t);t\ge 0\right)$ is a Markov process with semigroup $P_t^{n+1,h_{n+1}}$ started from $x$, in particular a diffusion.
\end{proof}
The statement and proof of the result for $W^{n,n}$ and $W^{n+1,n}$ is completely analogous.
Finally, the intertwining relation (\ref{MasterIntertwining}) also allows us to start the two-level process $(X,Y)$ from a degenerate point, in particular the system of reflecting $SDEs$ when some of the $Y$ coordinates coincide, as long as starting the process with semigroup $P_t^{n_2,h_{n_2}}$ from such a degenerate point is valid. Suppose $\big(\mu_t^{n_2,h_{n_2}}\big)_{t>0}$ is an entrance law for $P_t^{n_2,h_{n_2}}$, namely for $t,s>0$,
\begin{align*}
\mu_s^{n_2,h_{n_2}}P_t^{n_2,h_{n_2}}=\mu_{t+s}^{n_2,h_{n_2}} ,
\end{align*}
then we have the following corollary, which is obtained immediately by applying $\mu_t^{n_2,h_{n_2}}$ to both sides of (\ref{MasterIntertwining}):
\begin{cor}\label{EntranceLawCorollary}
Under the assumptions above, if $\left(\mu_s^{n_2,h_{n_2}}\right)_{s>0}$ is an entrance law for the process with semigroup $P_t^{n_2,h_{n_2}}$ then $\left(\mu_s^{n_2,h_{n_2}}\Lambda^{\hat{h}_{n_1}}_{n_1,n_2}\right)_{s>0}$
forms an entrance law for the process $(X,Y)$ with semigroup $Q_t^{n_1,n_2,\hat{h}_{n_1}}$.
\end{cor}
Hence, the statement of Theorem \ref{MasterDynamics} generalizes, so that if $X$ is a system of $L$-diffusions reflected off $Y$ started according to an entrance law, then $X$ is again itself distributed as a Markov process.
The entrance laws that we will be concerned with in this paper will correspond to starting the process with semigroup $P_t^{n_2,h_{n_2}}$ from a single point $(x,\cdots,x)$ for some $x\in I$. These will be given by so-called time-dependent \textit{biorthogonal ensembles}, namely measures of the form,
\begin{align}\label{biorthogonalensemble}
\det\left(f_i(t,x_j)\right)^{n_2}_{i,j=1}\det\left(g_i(t,x_j)\right)^{n_2}_{i,j=1} \ .
\end{align}
Under some further assumptions on the Taylor expansion of the one-dimensional transition density $p_t(x,y)$ they will be given by so-called \textit{polynomial ensembles}, where one of the determinant factors is the Vandermonde determinant,
\begin{align}\label{polynomialensemble}
\det\left(\phi_i(t,x_j)\right)^{n_2}_{i,j=1}\det\left(x_j^{i-1}\right)^{n_2}_{i,j=1} \ .
\end{align}
A detailed discussion is given in the Appendix.
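Since the Vandermonde factor plays a role below, we record the standard identity $\det\big(x_j^{i-1}\big)_{i,j=1}^{n}=\prod_{i<j}(x_j-x_i)$; a small numerical confirmation (illustrative only, with arbitrary sample points of ours):

```python
import numpy as np
from itertools import combinations

x = np.array([0.3, 1.1, 2.0, 3.7])
n = len(x)
V = np.vander(x, increasing=True).T   # entry (i, j) is x_j^{i-1} (0-indexed powers)
det_val = np.linalg.det(V)
prod_val = np.prod([x[j] - x[i] for i, j in combinations(range(n), 2)])
# det_val and prod_val agree: both equal the Vandermonde determinant
```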
\section{Applications and examples}
Applying the theory developed in the previous section, we will now show how some of the known examples of diffusions in Gelfand-Tsetlin patterns fit into this framework and construct new processes of this kind. In particular, we will treat all the diffusions associated with Random Matrix eigenvalues, a model related to Plancherel growth that involves a wall, examples coming from Sturm-Liouville semigroups and, finally, point out the connection to strong stationary times and superpositions and decimations of Random Matrix ensembles.
First, recall that the space of Gelfand-Tsetlin patterns of depth $N$ denoted by $\mathbb{GT}(N)$ is defined to be,
\begin{align*}
\left\{\left(x^{(1)},\cdots,x^{(N)}\right):x^{(n)}\in W^{n}, \ x^{(n)}\prec x^{(n+1)}\right\},
\end{align*}
and also the space of symplectic Gelfand-Tsetlin patterns of depth $N$ denoted by $\mathbb{GT}_{\textbf{s}}(N)$ is given by,
\begin{align*}
\left\{\left(x^{(1)},\hat{x}^{(1)}\cdots,x^{(N)}, \hat{x}^{(N)}\right):x^{(n)},\hat{x}^{(n)}\in W^{n}, \ x^{(n)}\prec \hat{x}^{(n)}\prec x^{(n+1)} \right\}.
\end{align*}
Please note the minor discrepancy in the definition of $\mathbb{GT}(N)$ with the notation used for $W^{n,n+1}$: here for two consecutive levels $x^{(n)}\in W^n, x^{(n+1)}\in W^{n+1}$ in the Gelfand-Tsetlin pattern the pair $(x^{(n+1)},x^{(n)})\in W^{n,n+1}$ and not the other way round.
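The interlacing constraints defining $\mathbb{GT}(N)$ are easy to check mechanically; the following small helper (our own, purely illustrative) encodes $x^{(n)}\prec x^{(n+1)}$, i.e. $x^{(n+1)}_i\le x^{(n)}_i\le x^{(n+1)}_{i+1}$:

```python
def in_gt_pattern(levels):
    """Return True iff levels = [x^(1), ..., x^(N)] lies in GT(N):
    x^(n) has length n, is weakly increasing, and consecutive levels
    interlace: x^(n+1)_i <= x^(n)_i <= x^(n+1)_{i+1}."""
    for n, x in enumerate(levels, start=1):
        if len(x) != n or any(x[i] > x[i + 1] for i in range(n - 1)):
            return False
    for lower, upper in zip(levels, levels[1:]):      # lower = x^(n), upper = x^(n+1)
        if not all(upper[i] <= lower[i] <= upper[i + 1] for i in range(len(lower))):
            return False
    return True

# the pattern (1), (0.5, 2), (0, 1.5, 3) interlaces; (1), (1.5, 2) does not
```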
\subsection{Concatenating two-level processes}
We will describe the construction for $\mathbb{GT}$, with the extension to $\mathbb{GT}_{\textbf{s}}$ being analogous. Let us fix an interval $I$ with endpoints $l<r$ and let $L_n$ for $n=1,\cdots, N$ be a sequence of diffusion process generators in $I$ (satisfying (\ref{bc1l}) and (\ref{bc1r})) given by,
\begin{align}
L_n=a_n(x)\frac{d^2}{dx^2}+b_n(x)\frac{d}{dx}.
\end{align}
We will moreover denote their transition densities with respect to Lebesgue measure by $p_t^n(\cdot,\cdot)$.
We want to consider a process $\left(\mathbb{X}(t);t \ge 0\right)=\left(\left(\mathbb{X}^{(1)}(t),\cdots,\mathbb{X}^{(N)}(t)\right);t \ge 0\right)$ taking values in $\mathbb{GT}(N)$ so that, for each $2 \le n \le N ,\ \mathbb{X}^{(n)}$ consists of $n$ independent $L_n$ diffusions reflected off the paths of $\mathbb{X}^{(n-1)}$. More precisely we consider the following system of reflecting $SDEs$, with $1 \le i \le n \le N$, initialized in $\mathbb{GT}(N)$ and stopped at the stopping time $\tau_{\mathbb{GT}(N)}$ to be defined below,
\begin{align}\label{GelfandTsetlinSDEs}
d\mathbb{X}_i^{(n)}(t)=\sqrt{2a_n\left(\mathbb{X}_i^{(n)}(t)\right)}d\beta_i^{(n)}(t)+b_n\left(\mathbb{X}_i^{(n)}(t)\right)dt+dK_i^{(n),-}-dK_i^{(n),+},
\end{align}
driven by an array $\left(\beta_i^{(n)}(t);t\ge 0, 1 \le i \le n \le N\right)$ of $\frac{N(N+1)}{2}$ independent standard Brownian motions. The positive finite variation processes $K_i^{(n),-}$ and $K_i^{(n),+}$ are such that $K_i^{(n),-}$ increases only when $\mathbb{X}_i^{(n)}=\mathbb{X}_{i-1}^{(n-1)}$, $K_i^{(n),+}$ increases only when $\mathbb{X}_i^{(n)}=\mathbb{X}_{i}^{(n-1)}$, with $K_1^{(N),-}$ increasing when $\mathbb{X}_1^{(N)}=l$ and $K_N^{(N),+}$ increasing when $\mathbb{X}_N^{(N)}=r$, so that $\mathbb{X}=\left(\mathbb{X}^{(1)},\cdots,\mathbb{X}^{(N)}\right)$ stays in $\mathbb{GT}(N)$ forever. The stopping time $\tau_{\mathbb{GT}(N)}$ is given by,
\begin{align*}
\tau_{\mathbb{GT}(N)}=\inf \big\{ t \ge 0: \exists \ (n,i,j) \ 2 \le n \le N-1, 1 \le i < j\le n \textnormal{ s.t. } \mathbb{X}_i^{(n)}(t)=\mathbb{X}_j^{(n)}(t) \big\}.
\end{align*}
Stopping at $\tau_{\mathbb{GT}(N)}$ takes care of the problematic possibility of two of the time dependent barriers coming together. It will turn out that $\tau_{\mathbb{GT}(N)}=\infty$ almost surely under certain initial conditions of interest to us, given in Proposition \ref{multilevelproposition} below; this will be the case since each level $\mathbb{X}^{(n)}$ will then evolve according to a Doob $h$-transform and thus consist of non-intersecting paths.
That the system of reflecting $SDEs$ (\ref{GelfandTsetlinSDEs}) above is well-posed, under a Yamada-Watanabe condition on the coefficients $\left(\sqrt{a_n},b_n\right)$ for $1\le n \le N$, follows (inductively) from Proposition \ref{wellposedness}.
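The hard reflection mechanism of (\ref{GelfandTsetlinSDEs}) can be illustrated by an Euler scheme in which each particle is clamped between its two barriers after every Gaussian increment. The following Python sketch (ours, not part of the construction; it takes all $L_n$ to be standard Brownian motions on $I=\mathbb{R}$, and the barrier level is updated before the level it constrains) only checks that the interlacing structure of $\mathbb{GT}(N)$ is preserved, and makes no distributional claim.

```python
import numpy as np

def simulate_gt_pattern(N=4, steps=2000, dt=1e-4, seed=0):
    """Crude Euler sketch of the reflected SDE system: each particle X_i^{(n)}
    receives a Gaussian increment and is then clamped into the interval
    [X_{i-1}^{(n-1)}, X_i^{(n-1)}], mimicking the pushes K^{(n),-}, K^{(n),+}."""
    rng = np.random.default_rng(seed)
    # start from a strictly interlacing configuration
    X = [np.linspace(-n, n, n) for n in range(1, N + 1)]
    for _ in range(steps):
        for n in range(N):
            X[n] = X[n] + np.sqrt(dt) * rng.standard_normal(n + 1)
            if n > 0:  # reflect level n+1 off the (already updated) level n
                lower = np.concatenate(([-np.inf], X[n - 1]))
                upper = np.concatenate((X[n - 1], [np.inf]))
                X[n] = np.clip(X[n], lower, upper)
    return X

X = simulate_gt_pattern()
# interlacing X_i^{(n+1)} <= X_i^{(n)} <= X_{i+1}^{(n+1)} holds at the final time
for n in range(1, len(X)):
    assert np.all(X[n][:-1] <= X[n - 1]) and np.all(X[n - 1] <= X[n][1:])
```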
We would like Theorem \ref{MasterDynamics} to be applicable to each pair $(\mathbb{X}^{(n-1)},\mathbb{X}^{(n)})$, with $X=\mathbb{X}^{(n)}$ and $Y=\mathbb{X}^{(n-1)}$. To this end, for $n=2,\cdots,N$, suppose that $\mathbb{X}^{(n-1)}$ evolves according to the following Karlin-McGregor semigroup, $h$-transformed by the eigenfunction $g_{n-1}$, strictly positive in $\mathring{W}^{n-1}$, with eigenvalue $e^{c_{n-1}t}$,
\begin{align*}
e^{-c_{n-1}t}\frac{g_{n-1}(y_1,\cdots,y_{n-1})}{g_{n-1}(x_1,\cdots,x_{n-1})}\det\left(\widehat{p}_t^{n}(x_i,y_j)\right)_{i,j=1}^{n-1} ,
\end{align*}
where $\widehat{p}^{n}_t(\cdot,\cdot)$ denotes the transition density associated with the dual $\widehat{L}_n$ of $L_n$ (killed upon exiting at a regular absorbing boundary point). We furthermore denote by $\widehat{m}^{n}(\cdot)$ the density with respect to Lebesgue measure of the speed measure of $\widehat{L}_n$. Then, Theorem \ref{MasterDynamics} gives that, under a special initial condition (stated therein) for the joint dynamics of $(\mathbb{X}^{(n-1)},\mathbb{X}^{(n)})$, with $X=\mathbb{X}^{(n)}$ and $Y=\mathbb{X}^{(n-1)}$, the projection on $\mathbb{X}^{(n)}$ is distributed as the $G_{n-1}$ $h$-transform of $n$ independent $L_n$ diffusions, thus consisting of non-intersecting paths, where $G_{n-1}$ is given by,
\begin{align}\label{eigenGelfand}
G_{n-1}(x_1,\cdots,x_{n})=\int_{W^{n-1,n}(x)}^{}\prod_{i=1}^{n-1}\widehat{m}^{n}(y_i)g_{n-1}(y_1,\cdots,y_{n-1})dy_1\cdots dy_{n-1}.
\end{align}
Comparing the pairs $(\mathbb{X}^{(n-1)},\mathbb{X}^{(n)})$ and $(\mathbb{X}^{(n)},\mathbb{X}^{(n+1)})$, consistency then demands the following condition on the transition kernels (which, as we see below, is also sufficient for the construction of a consistent process $(\mathbb{X}^{(1)},\cdots,\mathbb{X}^{(N)})$), for $t>0,x,y \in \mathring{W}^n$,
\begin{align}\label{consistencyGelfand}
e^{-c_{n-1}t}\frac{G_{n-1}(y_1,\cdots,y_{n})}{G_{n-1}(x_1,\cdots,x_{n})}\det\left(p_t^n(x_i,y_j)\right)_{i,j=1}^{n}=e^{-c_nt}\frac{g_n(y_1,\cdots,y_{n})}{g_n(x_1,\cdots,x_{n})}\det\left(\widehat{p}_t^{n+1}(x_i,y_j)\right)_{i,j=1}^{n}.
\end{align}
Denote the semigroup associated with these densities by $\left(\mathfrak{P}^{(n)}(t);t >0\right)$ and also define the Markov kernels $\mathfrak{L}_{n-1}^n(x,dy)$ for $x \in \mathring{W}^n$ by,
\begin{align*}
\mathfrak{L}_{n-1}^n(x,dy)=\frac{\prod_{i=1}^{n-1}\widehat{m}^{n}(y_i)g_{n-1}(y_1,\cdots,y_{n-1})}{G_{n-1}(x_1,\cdots,x_{n})}\textbf{1}\left(y \in W^{n-1,n}(x)\right)dy_1\cdots dy_{n-1}.
\end{align*}
Then, by inductively applying Theorem \ref{MasterDynamics}, we easily see that the following proposition holds:
\begin{prop}\label{multilevelproposition}
Assume $(\textbf{R})$ and $(\textbf{BC+})$ hold for the $L_n$-diffusion and $(\textbf{YW})$ holds for the pairs of $(L_n,\hat{L}_n)$-diffusions for $2\le n \le N$. Moreover, suppose that there exist functions $g_n$ and $G_n$ so that the consistency relations (\ref{eigenGelfand}) and (\ref{consistencyGelfand}) hold. Let $\nu_N(dx)$ be a measure supported in $\mathring{W}^N$. Consider the process $\left(\mathbb{X}(t);t \ge 0\right)=\left(\left(\mathbb{X}^{(1)}(t),\cdots,\mathbb{X}^{(N)}(t)\right);t \ge 0\right)$ in $\mathbb{GT}(N)$ satisfying the $SDEs$ (\ref{GelfandTsetlinSDEs}) and initialized according to,
\begin{align}\label{Gibbsinitial}
\nu_N(dx^{(N)})\mathfrak{L}_{N-1}^N(x^{(N)},dx^{(N-1)}) \cdots \mathfrak{L}_{1}^2(x^{(2)},dx^{(1)}).
\end{align}
Then $\tau_{\mathbb{GT}(N)}=\infty$ almost surely, $\left(\mathbb{X}^{(n)}(t);t \ge 0\right)$ for $1 \le n \le N$ evolves according to $\mathfrak{P}^{(n)}(t)$ and for fixed $T>0$ the law of $\left(\mathbb{X}^{(1)}(T),\cdots,\mathbb{X}^{(N)}(T)\right)$ is given by,
\begin{align}\label{Gibbsevolved}
\left(\nu_N\mathfrak{P}^{(N)}_T\right)(dx^{(N)})\mathfrak{L}_{N-1}^N(x^{(N)},dx^{(N-1)}) \cdots \mathfrak{L}_{1}^2(x^{(2)},dx^{(1)}).
\end{align}
\end{prop}
\begin{proof}
For $N=2$ this is the statement of Theorem \ref{MasterDynamics}. Assume that the proposition holds for $N-1$ levels. Observe that an initial condition of the form (\ref{Gibbsinitial}) in $\mathbb{GT}(N)$ gives rise to an initial condition of the same form in $\mathbb{GT}(N-1)$:
\begin{align*}
\tilde{\nu}_{N-1}(dx^{(N-1)})\mathfrak{L}_{N-2}^{N-1}(x^{(N-1)},dx^{(N-2)}) \cdots \mathfrak{L}_{1}^2(x^{(2)},dx^{(1)}),\\
\tilde{\nu}_{N-1}(dx^{(N-1)})=\int_{\mathring{W}^N}^{}\nu_N(dx^{(N)})\mathfrak{L}_{N-1}^N(x^{(N)},dx^{(N-1)}).
\end{align*}
Then, by the inductive hypothesis $\left(\mathbb{X}^{(N-1)}(t);t \ge 0\right)$ evolves according to $\mathfrak{P}^{(N-1)}(t)$, with the joint evolution of $(\mathbb{X}^{(N-1)},\mathbb{X}^{(N)})$, by (\ref{eigenGelfand}) and (\ref{consistencyGelfand}) with $n=N-1$, as in Theorem \ref{MasterDynamics}, with $X=\mathbb{X}^{(N)}$ and $Y=\mathbb{X}^{(N-1)}$ and with initial condition $\nu_N(dx^{(N)})\mathfrak{L}_{N-1}^N(x^{(N)},dx^{(N-1)})$. We thus obtain that $\left(\mathbb{X}^{(N)}(t);t \ge 0\right)$ evolves according to $\mathfrak{P}^{(N)}(t)$ and for fixed $T$ the conditional distribution of $\mathbb{X}^{(N-1)}(T)$ given $\mathbb{X}^{(N)}(T)$ is $\mathfrak{L}_{N-1}^N\left(\mathbb{X}^{(N)}(T),dx^{(N-1)}\right)$. This, along with the inductive hypothesis on the law of $\mathbb{GT}(N-1)$ at time $T$ yields (\ref{Gibbsevolved}). The fact that $\tau_{\mathbb{GT}(N)}=\infty$ is also clear since each $\left(\mathbb{X}^{(n)}(t);t \ge 0\right)$ is governed by a Doob transformed Karlin-McGregor semigroup.
\end{proof}
Similarly, the result above holds by replacing $\nu_N(dx^{(N)})$ by an entrance law $\left(\nu_t^{(N)}(dx^{(N)})\right)_{t\ge 0}$ for $\mathfrak{P}^{(N)}(t)$, in which case $\left(\nu_N\mathfrak{P}^{(N)}_T\right)(dx^{(N)})$ gets replaced by $\nu_T^{(N)}(dx^{(N)})$.
The consistency relations (\ref{eigenGelfand}) and (\ref{consistencyGelfand}), and their implications for the possible choices of $L_1,\cdots, L_N$, will not be studied here. These questions are worth further investigation and will be addressed in future work.
\subsection{Brownian motions in full space}\label{MultilevelDBM}
The process considered here was first constructed by Warren in \cite{Warren}. Suppose in our setup of the previous section we take as the $L$-diffusion a standard Brownian motion with generator $\frac{1}{2}\frac{d^2}{dx^2}$, speed measure with density $m(x)=2$ and scale function $s(x)=x$. Then, its conjugate diffusion with generator $\hat{L}$ from the results of the previous section is again a standard Brownian motion, so that in particular $P_t^n=\hat{P}_t^n$. Recall that the Vandermonde determinant $h_n(x)=\prod_{1 \le i<j \le n}^{}(x_j-x_i)$ is a positive harmonic function for $P_t^n$ (see for example \cite{Warren} or by iteration from the results here). Moreover, the $h$-transformed semigroup $P_t^{n,h_n}$ is exactly the semigroup of $n$ particle Dyson Brownian motion.
\begin{prop}\label{DysonProposition}
Let $x \in \mathring{W}^{n+1}(\mathbb{R})$ and consider a process $(X,Y)\in W^{n,n+1}(\mathbb{R})$ started from the distribution $\left(\delta_x,\frac{n!h_n(y)}{h_{n+1}(x)}\mathbf{1}(y\prec x)dy\right)$ with the $Y$ particles evolving as $n$ particle Dyson Brownian motion and the $X$ particles as $n+1$ standard Brownian motions reflected off the $Y$ particles. Then, the $X$ particles are distributed as $(n+1)$ particle Dyson Brownian motion started from $x$.
\end{prop}
\begin{proof}
We apply Theorem \ref{MasterDynamics} with the $L$-diffusion being a standard Brownian motion. Observe that, $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are easily seen to be satisfied. Finally, as recalled above the Vandermonde determinant $h_n(x)=\prod_{1 \le i<j \le n}^{}(x_j-x_i)$ is a positive harmonic function for $n$ independent Brownian motions killed when they intersect and the semigroup $P_t^{n,h_n}$ is the one associated to $n$ particle Dyson Brownian motion.
\end{proof}
In fact, we can start the process from the boundary of $W^{n,n+1}(\mathbb{R})$ via an entrance law as described in the previous section. To be more concrete, an entrance law for $P_t^{n+1,h_{n+1}}$ describing the process starting from the origin, which can be obtained via a limiting procedure detailed in the Appendix, is the following:
\begin{align*}
\mu_t^{n+1,h_{n+1}}(dx)=C_{n+1}t^{-(n+1)^2/2}\exp\big(-\frac{1}{2t}\sum_{i=1}^{n+1}x_i^2\big)h_{n+1}^2(x)dx.
\end{align*}
Thus, from the previous section's results
\begin{align*}
\nu_t^{n,n+1,h_{n+1}}(dx,dy)=\mu_t^{n+1,h_{n+1}}(dx) \frac{n!h_n(y)}{h_{n+1}(x)}\mathbf{1}(y\prec x)dy,
\end{align*}
forms an entrance law for the semigroup associated to the two-level process in Proposition \ref{DysonProposition}. Hence, we obtain the following:
\begin{prop}\label{superpositionref1}
Consider a Markovian process $(X,Y)\in W^{n,n+1}(\mathbb{R})$ initialized according to the entrance law $\nu_t^{n,n+1,h_{n+1}}(dx,dy)$ with the $Y$ particles evolving as $n$ particle Dyson Brownian motion and the $X$ particles as $n+1$ standard Brownian motions reflected off the $Y$ particles. Then, the $X$ particles are distributed as $(n+1)$ particle Dyson Brownian motion started from the origin.
\end{prop}
It can be seen that we are in the setting of Proposition \ref{multilevelproposition} with the $L_k\equiv L$-diffusion a standard Brownian motion and the functions $g_k, G_k$ being up to a multiplicative constant equal to the Vandermonde determinant $\prod_{1 \le i<j \le k}^{}(x_j-x_i)$. Thus, we can concatenate these two-level processes to build a process $\left(\mathbb{X}^{n}(t);t \ge 0\right)=(X_i^{(k)}(t);t \ge 0, 1 \le i \le k \le n)$ taking values in $\mathbb{GT}(n)$, recovering Proposition 6 of \cite{Warren}. More concretely, the dynamics of $\mathbb{X}^{n}(t)$ are as follows: level $k$ of this process consists of $k$ independent standard Brownian motions reflected off the paths of level $k-1$. Then, from Proposition \ref{multilevelproposition} we get:
\begin{prop}
If $\mathbb{X}^{n}$ is started from the origin then the $k^{th}$ level process $X^{(k)}$ is distributed as $k$ particle Dyson Brownian motion started from the origin.
\end{prop}
\paragraph{Connection to Hermitian Brownian motion} We now point out the well known connection to the minor process of a Hermitian valued Brownian motion. It is a well known fact that the eigenvalues of minors of Hermitian matrices interlace. In particular, for any $n\times n$ Hermitian valued diffusion the eigenvalues $\left(\lambda^{(k)}(t);t\ge 0\right)$ of the $k\times k$ minor and the eigenvalues $\left(\lambda^{(k-1)}(t);t\ge0\right)$ of the $(k-1)\times (k-1)$ minor interlace: $\lambda^{(k)}_1(t)\le \lambda_1^{(k-1)}(t)\le \lambda_2^{(k)}(t)\le \cdots \le \lambda_{k-1}^{(k-1)}(t)\le \lambda_k^{(k)}(t)$ for all $t\ge 0$. Now, let $\left(H(t);t\ge 0\right)$ be an $n\times n$ Hermitian valued Brownian motion. Then $\left(\lambda^{(k)}(t);t\ge0\right)$ evolves as $k$ particle Dyson Brownian motion. Also, for any \textit{fixed} time $T$ the vector $(\lambda^{(1)}(T),\cdots,\lambda^{(n)}(T))$ has the same distribution as $\mathbb{X}(T)$, namely it is uniform on the set of patterns in $\mathbb{GT}(n)$ with bottom level $\lambda^{(n)}(T)$. However, the evolution of these processes is different; in fact the interaction between two levels of the minor process $\left(\lambda^{(k-1)}(t),\lambda^{(k)}(t);t\ge0\right)$ is quite complicated, involving long range interactions rather than the local reflection of our case, as shown in \cite{Adler}. In fact, the evolution of $\left(\lambda^{(k-1)}(t),\lambda^{(k)}(t),\lambda^{(k+1)}(t);t\ge 0\right)$ is not even Markovian, at least for some initial conditions (again see \cite{Adler}).
\subsection{Brownian motions in half line and BES(3)}
The process we will consider here, taking values in a symplectic Gelfand-Tsetlin pattern, was first constructed by Cerenzia in \cite{Cerenzia} as the diffusive scaling limit of the symplectic Plancherel growth model. It is built from reflecting and killed Brownian motions in the half line. We begin in the simplest possible setting:
\begin{prop}\label{Bes3}
Consider a process $(X,Y)\in W^{1,1}([0,\infty))$ started according to the distribution $(\delta_x,\mathbf{1}_{[0,x]}dy)$ for $x>0$ with the $Y$ particle evolving as a reflecting Brownian motion in $[0,\infty)$ and the $X$ particle as a Brownian motion in $(0,\infty)$ reflected upwards off the $Y$ particle. Then, the $X$ particle is distributed as a $BES(3)$ process (Bessel process of dimension 3) started from $x$.
\end{prop}
\begin{proof}
Take as the $L$-diffusion a Brownian motion absorbed when it reaches $0$ and let $P_t^1$ be the semigroup of Brownian motion \textit{killed} (not absorbed) at 0. Then, its dual diffusion $\hat{L}$ is a reflecting Brownian motion in the positive half line and let $\hat{P}_t^1$ be the semigroup it gives rise to. Observe that, $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are easily seen to be satisfied. Letting $\hat{h}_{1,1}(x)=1$, which is clearly a positive harmonic function for $\hat{L}$, we get that $h_{1,1}(x)=\int_{0}^{x}1dy=x$. Now, note that $P_t^{1,h_{1,1}}$ is exactly the semigroup of a $BES(3)$ process. As is well known, a Bessel process of dimension 3 is a Brownian motion conditioned to stay in $(0,\infty)$ by an $h$-transform with the function $x$. Then, from the analogue of Theorem \ref{MasterDynamics} in $W^{n,n}$ we obtain the statement.
\end{proof}
We now move to the next stage: $2$ particles evolving as reflecting Brownian motions reflected off a $BES(3)$ process.
\begin{prop}
Consider a process $(X,Y)\in W^{1,2}([0,\infty))$ started according to the following distribution $\left(\delta_{(x_1,x_2)},\frac{2y}{x_2^2-x_1^2}1_{[x_1,x_2]}dy\right)$ for $x_1<x_2$ with the $Y$ particle evolving as a BES(3) process and the $X$ particles as reflecting Brownian motions in $[0,\infty)$ reflected off the $Y$ particle. Then, the $X$ particles are distributed as two non-intersecting reflecting Brownian motions started from $(x_1,x_2)$.
\end{prop}
\begin{proof}
We apply Theorem \ref{MasterDynamics}. We take as the $L$-diffusion a reflecting Brownian motion. Write $P_t^{2}$ for the Karlin-McGregor semigroup associated to 2 reflecting Brownian motions killed when they intersect. Note that, $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are clearly satisfied. Observe that with $\hat{h}_{1,2}(x)=x$, which is a positive harmonic function for a Brownian motion killed at 0, we have:
\begin{align*}
h_{1,2}(x_1,x_2)=\int_{x_1}^{x_2}ydy=\frac{1}{2}(x_2^2-x_1^2).
\end{align*}
Finally note that, $P_t^{2,h_{1,2}}$ is exactly the semigroup of $2$ non-intersecting reflecting Brownian motions in $[0,\infty)$.
\end{proof}
These relations can be iterated to pairs of $n$ and $n$, and of $n$ and $n+1$, particles. Define the functions:
\begin{align*}
\hat{h}_{n,n}(x)&=\prod_{1 \le i < j \le n}^{}(x_j^2-x_i^2),\\
\hat{h}_{n,n+1}(x)&=\prod_{1 \le i < j \le n}^{}(x_j^2-x_i^2)\prod_{i=1}^{n}x_i.
\end{align*}
Also, consider the positive kernels $\Lambda_{n_1,n_2}$, defined in (\ref{PreIntertwiningKernel}), with $\hat{m}\equiv 2$. Then, an easy calculation (after writing these functions as determinants) gives that up to a constant $h_{n,n}=\Lambda_{n,n}\hat{h}_{n,n}$ is equal to $\hat{h}_{n,n+1}$ and $h_{n,n+1}=\Lambda_{n,n+1}\hat{h}_{n,n+1}$ is equal to $\hat{h}_{n+1,n+1}$. Finally, let $\Lambda^{\hat{h}_{n,n}}_{n,n}$ and $\Lambda^{\hat{h}_{n,n+1}}_{n,n+1}$ denote the corresponding normalized Markov kernels.
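The easy calculation alluded to above can be checked symbolically for small $n$. The following sympy sketch (ours, not part of the text) verifies, with $\hat{m}\equiv 2$, that $\Lambda_{1,2}\hat{h}_{1,2}$ is proportional to $\hat{h}_{2,2}$ and that $\Lambda_{2,2}\hat{h}_{2,2}$ is proportional to $\hat{h}_{2,3}$.

```python
import sympy as sp

x1, x2, y, y1, y2 = sp.symbols('x1 x2 y y1 y2', positive=True)

# h_{1,2} = Lambda_{1,2} hhat_{1,2}: integrate hhat_{1,2}(y) = y against mhat(y) = 2
# over W^{1,2}(x) = [x1, x2]; the result is proportional to hhat_{2,2}(x) = x2^2 - x1^2
h12 = sp.integrate(2 * y, (y, x1, x2))
assert sp.simplify(h12 - (x2**2 - x1**2)) == 0

# h_{2,2} = Lambda_{2,2} hhat_{2,2}: integrate hhat_{2,2}(y) = y2^2 - y1^2 over
# W^{2,2}(x) = {0 <= y1 <= x1 <= y2 <= x2}; proportional to hhat_{2,3}(x) = x1 x2 (x2^2 - x1^2)
h22 = sp.integrate(4 * (y2**2 - y1**2), (y2, x1, x2), (y1, 0, x1))
assert sp.simplify(h22 - sp.Rational(4, 3) * x1 * x2 * (x2**2 - x1**2)) == 0
```

Writing the functions as determinants, as suggested in the text, reduces the general case to the same one-dimensional integrals column by column.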
\begin{prop}
Consider a process $(X,Y)\in W^{n,n}([0,\infty))$ started according to the distribution $(\delta_x,\Lambda^{\hat{h}_{n,n}}_{n,n}(x,\cdot))$ for $x\in\mathring{W}^{n}([0,\infty))$ with the $Y$ particles evolving as $n$ reflecting Brownian motions conditioned not to intersect in $[0,\infty)$ and the $X$ particles as $n$ Brownian motions in $(0,\infty)$ reflected off the $Y$ particles. Then, the $X$ particles are distributed as $n$ $BES(3)$ processes conditioned never to intersect started from $x$.
\end{prop}
\begin{proof}
We take as the $L$-diffusion a Brownian motion absorbed at $0$. Then, the $\hat{L}$-diffusion is a reflecting Brownian motion. As before, $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are clearly satisfied. Note that, $\hat{h}_{n,n}$ is a harmonic function for $n$ reflecting Brownian motions killed when they intersect. Moreover, note that $P_t^{n,h_{n,n}}$ is exactly the semigroup of $n$ non-intersecting $BES(3)$ processes (note that the $n$ particle Karlin-McGregor semigroup $P_t^n$ is that of $n$ Brownian motions killed at zero). The statement follows from the analogue of Theorem \ref{MasterDynamics} in $W^{n,n}$.
\end{proof}
\begin{prop}
Consider a process $(X,Y)\in W^{n,n+1}([0,\infty))$ started according to the following distribution $\left(\delta_x,\Lambda^{\hat{h}_{n,n+1}}_{n,n+1}(x,\cdot)\right)$ for $x \in\mathring{W}^{n+1}([0,\infty))$ with the $Y$ particles evolving as $n$ BES(3) processes conditioned not to intersect and the $X$ particles as $n+1$ reflecting Brownian motions in $[0,\infty)$ reflected off the $Y$ particles. Then, the $X$ particles are distributed as $n+1$ non-intersecting reflecting Brownian motions started from $x$.
\end{prop}
\begin{proof}
We take as the $L$-diffusion a reflecting Brownian motion. Then, the $\hat{L}$-diffusion is a Brownian motion absorbed at 0. As before, the assumptions $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are clearly satisfied. Note that, $\hat{h}_{n,n+1}$ is harmonic for the corresponding Karlin-McGregor semigroup $\hat{P}_t^n$, associated with $n$ Brownian motions killed at zero and when they intersect. Moreover, note that the semigroup $\hat{P}_t^{n, \hat{h}_{n,n+1}}$, namely the semigroup $\hat{P}_t^{n}$ $h$-transformed by $\hat{h}_{n,n+1}$, gives the semigroup of the process $Y$. Finally, observe that $P_t^{n+1,h_{n,n+1}}$ is exactly the semigroup of $n+1$ non-intersecting reflecting Brownian motions. The statement follows from Theorem \ref{MasterDynamics}.
\end{proof}
\begin{rmk}
The semigroups considered above are also the semigroups of $n$ Brownian motions conditioned to stay in a Weyl Chamber of type $B$ and type $D$ (after we disregard the sign of the last coordinate) respectively (see for example \cite{JonesO'Connell} where such a study was undertaken).
\end{rmk}
We can in fact start these processes from the origin, by using the following explicit entrance law (see for example \cite{Cerenzia} or the Appendix for the general recipe) for $P_t^{n,h_{n,n}}$ and $P_t^{n,h_{n-1,n}}$ issued from zero,
\begin{align*}
\mu_t^{n,h_{n,n}}(dx)&=C_{n,n}'t^{-n(n+\frac{1}{2})}\exp\bigg(-\frac{1}{2t}\sum_{i=1}^{n}x_i^2\bigg)h^2_{n,n}(x)dx,\\
\mu_t^{n,h_{n-1,n}}(dx)&=C'_{n-1,n}t^{-n(n-\frac{1}{2})}\exp\bigg(-\frac{1}{2t}\sum_{i=1}^{n}x_i^2\bigg)h_{n-1,n}^2(x)dx.
\end{align*}
Concatenating these two-level processes, we construct a process $\left(\mathbb{X}_s^{(n)}(t);t \ge 0\right)=(X^{(1)}(t)\prec \hat{X}^{(1)}(t)\prec \cdots \prec X^{(n)}(t)\prec \hat{X}^{(n)}(t);t\ge 0)$ in $\mathbb{GT}_{\textbf{s}}(n)$ with dynamics as follows: Firstly, $X_1^{(1)}$ is a Brownian motion reflecting at the origin. Then, for each $k$, the $k$ particles corresponding to $\hat{X}^{(k)}$ perform independent Brownian motions reflecting off the $X^{(k)}$ particles to maintain interlacing. Finally, for $k\ge 2$ the $k$ particles corresponding to $X^{(k)}$ reflect off $\hat{X}^{(k-1)}$, with $X^{(k)}_1$ also reflecting at the origin.
Then, the symplectic analogue of Proposition \ref{multilevelproposition} (which is again proven in the same way by consistently patching together two-level processes) implies the following, recovering the results of Section 2.3 of \cite{Cerenzia}:
\begin{prop}\label{CerenziaGelfand}
If $\mathbb{X}_s^{n}$ is started from the origin then the projections onto $X^{(k)}$ and $\hat{X}^{(k)}$ are distributed as $k$ non-intersecting reflecting Brownian motions and $k$ non-intersecting $BES(3)$ processes respectively started from the origin.
\end{prop}
\subsection{Brownian motions in an interval}
Let $I=[0,\pi]$ for concreteness and let the $L$-diffusion be a reflecting Brownian motion in $I$. Then its dual, the $\hat{L}$-diffusion, is a Brownian motion absorbed at $0$ or $\pi$. It will be shown in Corollary \ref{minimal} that the minimal positive eigenfunction is given, up to a (signed) constant factor, by,
\begin{align}\label{killedinterval}
\hat{h}_n(x)=\det(\sin(kx_j))_{k,j=1}^n.
\end{align}
This is the eigenfunction that corresponds to conditioning these Brownian motions to stay in the interval $(0,\pi)$ and not intersect forever. Also, observe that up to a constant factor $\hat{h}_n$ is given by (see the notes \cite{Conrey}, \cite{Meckes} and Remark \ref{RemarkClassicalGroups} below for the connection to classical compact groups),
\begin{align*}
\prod_{i=1}^{n}\sin(x_i)\prod_{1 \le i < j \le n}^{}\big(\cos(x_i)-\cos(x_j)\big).
\end{align*}
Now, via the iterative procedure of producing eigenfunctions, namely by taking $\Lambda_{n,n+1}\hat{h}_n$, where $\Lambda_{n,n+1}$ is defined in (\ref{PreIntertwiningKernel}), we obtain that up to a (signed) constant factor,
\begin{align}\label{reflecting intervale}
h_{n+1}(x)=\det(\cos((k-1)x_j))_{k,j=1}^{n+1},
\end{align}
is a strictly positive eigenfunction for $P_t^{n+1}$. In fact, it is the minimal positive eigenfunction (again this follows from Corollary \ref{minimal}) of $P_t^{n+1}$ and it corresponds to conditioning these reflected Brownian motions in the interval to not intersect. This is also (see \cite{Conrey},\cite{Meckes} and Remark \ref{RemarkClassicalGroups}) given up to a constant factor by,
\begin{align*}
\prod_{1 \le i < j \le n+1}^{}\big(\cos(x_i)-\cos(x_j)\big).
\end{align*}
Define the Markov kernel:
\begin{align*}
(\Lambda^{\hat{h}_n}_{n,n+1}f)(x)=\frac{n!}{h_{n+1}(x)}\int_{W^{n,n+1}(x)}^{}\hat{h}_n(y)f(x,y)dy.
\end{align*}
Then we have the following result:
\begin{prop}
Let $x \in \mathring{W}^{n+1}([0,\pi])$. Consider a process $(X,Y)\in W^{n,n+1}([0,\pi])$ started at $\left(\delta_x,\Lambda^{\hat{h}_n}_{n,n+1}(x,\cdot)\right)$ with the $Y$ particles evolving as $n$ Brownian motions conditioned to stay in $(0,\pi)$ and not to intersect, and the $X$ particles as $n+1$ reflecting Brownian motions in $[0,\pi]$ reflected off the $Y$ particles. Then the $X$ particles are distributed as $n+1$ non-intersecting Brownian motions reflected at the boundaries of $[0,\pi]$ started from $x$.
\end{prop}
\begin{proof}
Take as the $L$-diffusion a reflecting Brownian motion in $[0,\pi]$. The $\hat{L}$-diffusion is a Brownian motion absorbed at $0$ or $\pi$. Observe that, the assumptions $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are satisfied. Moreover, as noted above $\hat{h}_n$ is the ground state for $n$ Brownian motions killed when they hit $0$ or $\pi$ or when they intersect. The statement of the proposition then follows from Theorem \ref{MasterDynamics}.
\end{proof}
\begin{rmk}
The dual relation, in the following sense, is also true: If we reflect $n$ Brownian motions between $n+1$ reflecting Brownian motions in $[0,\pi]$ conditioned not to intersect then we obtain $n$ Brownian motions conditioned to stay in $(0,\pi)$ and conditioned not to intersect. This is obtained by noting that up to a constant factor $\hat{h}_n$ defined in (\ref{killedinterval}) is given by $\Lambda_{n+1,n}h_{n+1}$, with $h_{n+1}$ as in (\ref{reflecting intervale}).
\end{rmk}
\begin{rmk}\label{RemarkClassicalGroups}
The processes studied above are related to the eigenvalue evolutions of Brownian motions on $SO(2(n+1))$ (reflecting Brownian motions in $[0,\pi]$) and $USp(2n)$ (conditioned Brownian motions in $[0,\pi]$) respectively (see e.g. \cite{PauwelsRogers} for skew product decompositions of Brownian motions on manifolds of matrices).
\end{rmk}
\begin{rmk}
It is also possible to build the following interlacing processes with equal number of particles. Consider as the $Y$ process $n$ Brownian motions in $[0,\pi)$ reflecting at $0$ and conditioned to stay away from $\pi$ and not to intersect. In our framework $\hat{L}=\frac{1}{2}\frac{d^2}{dx^2}$ with Neumann boundary condition at $0$ and Dirichlet at $\pi$. Then the minimal eigenfunction corresponding to this conditioning is given up to a sign by,
\begin{align*}
\det\bigg(\cos\bigg(\big(k-\frac{1}{2}\big)y_j\bigg)\bigg)_{k,j=1}^n.
\end{align*}
Now let $X$ be $n$ Brownian motions in $(0,\pi]$ reflecting at $\pi$ and reflected off the $Y$ particles. Then the projection onto the $X$ process (assuming the two levels $(X,Y)$ are started appropriately) evolves as $n$ Brownian motions in $(0,\pi]$ reflecting at $\pi$ and conditioned to stay away from $0$ and not to intersect. These processes are related to the eigenvalues of Brownian motions on $SO(2n+1)$ and $SO^-(2n+1)$ respectively.
\end{rmk}
\subsection{Brownian motions with drifts}\label{DriftingBrownianSection}
The processes considered here were first introduced by Ferrari and Frings in \cite{FerrariFrings} (there only the \textit{fixed time} picture was studied, namely no statement was made about the distribution of the projections on single levels as processes). They form a generalization of the process studied in the first subsection.
\subsubsection{Hermitian Brownian with drifts}
We begin with a brief study of the matrix valued process. Let $\left(Y_t;t\ge 0\right)=\left(B_t;t \ge 0\right)$ be an $n\times n$ Hermitian Brownian motion. We seek to add a matrix of \textit{drifts} and study the resulting eigenvalue process. For simplicity let $M$ be a diagonal $n\times n$ Hermitian matrix with distinct ordered eigenvalues $\mu_1<\cdots<\mu_n$ and consider the Hermitian valued process $\left(Y_t^M;t\ge 0\right)=\left(B_t+tM;t\ge0\right)$.
Then a computation that starts by applying Girsanov's theorem, using unitary invariance of Hermitian Brownian motion, integrating over $\mathbb{U}(n)$, the group of $n\times n $ unitary matrices, and then computing that integral using the classical Harish Chandra-Itzykson-Zuber (HCIZ) formula gives that the eigenvalues $(\lambda_1^M(t),\cdots,\lambda_n^M(t);t\ge0)$ of $\left(Y_t^M;t \ge0\right)$ form a diffusion process with explicit transition density given by,
\begin{align*}
s_t^{n,M}(\lambda,\lambda')=\exp\big(-\frac{1}{2}\sum_{i=1}^{n}\mu_i^2t\big)\frac{\det\big(\exp(\mu_j\lambda'_i)\big)_{i,j=1}^n}{\det\big(\exp(\mu_j\lambda_i)\big)_{i,j=1}^n}\det\big(\phi_t(\lambda_i,\lambda_j')\big)_{i,j=1}^n,
\end{align*}
where $\phi_t$ is the standard heat kernel. For a proof of this fact, which uses the theory of Markov functions, see for example \cite{Chin}.
Observe that, $s_t^{n,M}$ is exactly the transition density of $n$ Brownian motions with drifts $\mu_1<\cdots<\mu_n$ conditioned to never intersect as studied in \cite{BBO}. More generally, if we look at the $k\times k$ minor of $\left(Y_t^M;t \ge 0\right)$ then its eigenvalues evolve as $k$ Brownian motions with drifts $\mu_1<\cdots<\mu_k$ conditioned to never intersect.
\begin{rmk}
These processes also appear in the recent work of Ipsen and Schomerus \cite{IpsenSchomerus} as the finite time Lyapunov exponents of ``Isotropic Brownian motions''.
\end{rmk}
Now, write $\mu^{(k)}$ for $(\mu_1,\cdots,\mu_k)$ and $P_t^{n,\mu^{(n)}}$ for the semigroup that arises from $s_t^{n,M}$. Then, $u_t^{n,\mu^{(n)}}(d\lambda)$ defined by,
\begin{align*}
u_t^{n,\mu^{(n)}}(d\lambda)=const_{n,t}\det(e^{-(\lambda_i-t\mu_j)^2/2t})^n_{i,j=1} \frac{\prod_{1 \le i < j \le n}^{}(\lambda_j-\lambda_i)}{\prod_{1 \le i < j \le n}^{}(\mu^{(n)}_j-\mu^{(n)}_i)}d\lambda,
\end{align*}
forms an entrance law for $P_t^{n,\mu^{(n)}}$ starting from the origin (see for example \cite{FerrariFrings} or the Appendix).
\subsubsection{Interlacing construction with drifting Brownian motions with reflection}
We now move on to Warren's process with drifts (as it is referred to in \cite{FerrariFrings}). We seek to build $n+1$ Brownian motions with drifts $\mu_1<\cdots<\mu_{n+1}$ conditioned to never intersect, by reflecting $n+1$ independent Brownian motions, each with drift $\mu_{n+1}$, off $n$ Brownian motions with drifts $\mu_{1}<\cdots<\mu_{n}$ conditioned to never intersect. We prove the following:
\begin{prop}
Consider a Markov process $(X,Y)\in W^{n,n+1}(\mathbb{R})$ started from the origin with the $Y$ particles evolving as $n$ Brownian motions with drifts $\mu_1<\cdots<\mu_n$ conditioned to never intersect and the $X$ particles as $n+1$ Brownian motions all with drift $\mu_{n+1}$ reflected off the $Y$ particles. Then, the $X$ particles are distributed as $n+1$ Brownian motions with drifts $\mu_1<\cdots<\mu_{n+1}$ conditioned to never intersect started from the origin.
\end{prop}
\begin{proof}
Let the $L$-diffusion be a Brownian motion with drift $\mu_{n+1}$, namely with generator $L=\frac{1}{2}\frac{d^2}{dx^2}+\mu_{n+1}\frac{d}{dx}$. Then, its dual diffusion $\hat{L}=\frac{1}{2}\frac{d^2}{dx^2}-\mu_{n+1}\frac{d}{dx}$ has speed measure $\hat{m}(x)=2e^{-2\mu_{n+1}x}$. Note that, the assumptions $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are easily seen to be satisfied. Let $P_t^{n+1,\mu_{n+1}}$ and $\hat{P}_t^{n,\mu_{n+1}}$ denote the corresponding Karlin-McGregor semigroups. Consider the (not yet normalized) positive kernel $\Lambda^{\mu_{n+1}}_{n,n+1}$ given by,
\begin{align*}
(\Lambda^{\mu_{n+1}}_{n,n+1}f) (x)=\int_{W^{n,n+1}(x)}^{}f(x,y)\prod_{i=1}^{n}2e^{-2\mu_{n+1}y_i}dy_i,
\end{align*}
and define the function
\begin{align*}
\hat{h}_n^{\mu_{n+1},\mu^{(n)}}(y)=\prod_{i=1}^{n}e^{\mu_{n+1}y_i}\det(e^{\mu_iy_j})_{i,j=1}^{n}.
\end{align*}
Note that, $\hat{h}_n^{\mu_{n+1},\mu^{(n)}}$ is a strictly positive eigenfunction for $\hat{P}_t^{n,\mu_{n+1}}$. Moreover, the $h$-transform of $\hat{P}_t^{n,\mu_{n+1}}$ with $\hat{h}_n^{\mu_{n+1},\mu^{(n)}}$ is exactly the semigroup $P_t^{n,\mu^{(n)}}$ of $n$ Brownian motions with drifts $(\mu_1,\cdots,\mu_n)$ conditioned to never intersect. By integrating the determinant we get,
\begin{align*}
(\Lambda^{\mu_{n+1}}_{n,n+1}\hat{h}_n^{\mu_{n+1},\mu^{(n)}}) (x)=\frac{2^n}{\prod_{i=1}^{n}(\mu_{n+1}-\mu_i)}\det(e^{(\mu_{i}-\mu_{n+1})x_j})_{i,j=1}^{n+1},
\end{align*}
and note that the $h$-transform of $P_t^{n+1,\mu_{n+1}}$ by $\Lambda^{\mu_{n+1}}_{n,n+1}\hat{h}_n^{\mu_{n+1},\mu^{(n)}}$ is $P_t^{n+1,\mu^{(n+1)}}$.
Finally, defining the entrance law for the two-level process started from the origin by $\nu_t^{n,n+1,\mu_{n+1},\mu^{(n)}}=u_t^{n+1,\mu^{(n+1)}}\Lambda^{\mu_{n+1},\mu^{(n)}}_{n,n+1}$,
we obtain the statement of the proposition from Theorem \ref{MasterDynamics} (see also discussion after Corollary \ref{EntranceLawCorollary}).
\end{proof}
\begin{rmk}
A ``positive temperature'' version of the proposition above appears as Proposition 9.1 in \cite{Toda}.
\end{rmk}
We can then iteratively apply the result above to concatenate two-level processes and build a process:
\begin{align*}
\left(\mathbb{X}_{(\mu_1,\cdots,\mu_n)}(t);t\ge 0\right)=\left(X^{(1)}_{\mu_1}(t)\prec X^{(2)}_{\mu_2}(t)\prec\cdots \prec X^{(n)}_{\mu_n}(t);t\ge0\right),
\end{align*}
in $\mathbb{GT}(n)$ as in Proposition \ref{multilevelproposition}, whose joint dynamics are given as follows (this was also described in \cite{FerrariFrings}): level $k$ consists of $k$ independent Brownian motions, all with drift $\mu_k$, reflected off the paths of level $k-1$. Then, from Proposition \ref{multilevelproposition} one obtains:
\begin{prop}
Assume $\mu_1 < \mu_2< \cdots < \mu_n$. Consider the process $\left(\mathbb{X}_{(\mu_1,\cdots,\mu_n)}(t);t\ge 0\right)$ defined above started from the origin. Then, the projection on $X^{(k)}_{\mu_k}$ is distributed as $k$ Brownian motions with drifts $\mu_1<\cdots<\mu_k$ conditioned to never intersect, issuing from the origin.
\end{prop}
\begin{rmk}
Note that, the multilevel process whose construction is described above via the hard reflection dynamics and the minors of the Hermitian valued process $\left(Y_t^M;t \ge 0\right)$ coincide on each fixed level $k$ (as single level processes, this is what we have proven here) and also at fixed times (this is already part of the results of \cite{FerrariFrings}). However, they do not have the same law as processes. Finally, for the fixed time correlation kernel of this Gelfand-Tsetlin valued process see Theorem 1 of \cite{FerrariFrings}.
\end{rmk}
\subsection{Geometric Brownian motions and quantum Calogero-Sutherland}
A geometric Brownian motion of unit diffusivity and drift parameter $\alpha$ is given by the SDE,
\begin{align*}
ds(t)=s(t)dW(t)+\alpha s(t)dt ,
\end{align*}
which can be solved explicitly to give,
\begin{align*}
s(t)=s(0)\exp\left(W(t)+\left(\alpha-\frac{1}{2}\right)t\right).
\end{align*}
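Indeed, a direct application of It\^o's formula verifies this: writing $s(t)=f(t,W(t))$ with $f(t,w)=s(0)e^{w+(\alpha-\frac{1}{2})t}$, we have $\partial_t f=\left(\alpha-\frac{1}{2}\right)f$, $\partial_w f=f$ and $\partial_w^2f=f$, so that
\begin{align*}
ds(t)=\left(\alpha-\frac{1}{2}\right)s(t)dt+s(t)dW(t)+\frac{1}{2}s(t)dt=s(t)dW(t)+\alpha s(t)dt.
\end{align*}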
We will assume that $s(0)>0$, so that the process lives in $(0,\infty)$. Its generator is given by,
\begin{align*}
L^{\alpha}=\frac{1}{2}x^2\frac{d^2}{dx^2}+\alpha x \frac{d}{dx} ,
\end{align*}
with both $0$ and $\infty$ being natural boundaries. With $h_n(x)=\prod_{1\le i< j \le n}^{}(x_j-x_i)$ denoting the Vandermonde determinant, it can be easily verified (although it also follows by recursively applying the results below) that $h_n$ is a positive eigenfunction of the generator of $n$ independent geometric Brownian motions, namely that, with,
\begin{align*}
L_n^{\alpha}=\sum_{i=1}^{n}\frac{1}{2}x_i^2\partial_{x_i}^2+\alpha\sum_{i=1}^{n}x_i\partial_{x_i} ,
\end{align*}
we have,
\begin{align*}
L_n^{\alpha}h_n=\frac{n(n-1)}{2}\left(\frac{n-2}{3}+\alpha\right)h_n=c_{n,\alpha}h_n.
\end{align*}
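As a quick check, the drift part alone acts on $h_n$, which is homogeneous of degree $\frac{n(n-1)}{2}$, by Euler's identity:
\begin{align*}
\alpha\sum_{i=1}^{n}x_i\partial_{x_i}h_n=\alpha\frac{n(n-1)}{2}h_n,
\end{align*}
which accounts for the term $\frac{n(n-1)}{2}\alpha$, while the second order part contributes the remaining $\frac{n(n-1)(n-2)}{6}h_n$; for $n=2$ the second order part vanishes and the relation reduces to $L_2^{\alpha}(x_2-x_1)=\alpha(x_2-x_1)$.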
The quantum Calogero-Sutherland Hamiltonian $\mathcal{H}^{\theta}_{CS}$ (see \cite{Calogero}, \cite{Sutherland}) is given by,
\begin{align*}
\mathcal{H}^{\theta}_{CS}=\frac{1}{2}\sum_{i=1}^{n}\left(x_i\partial_{x_i}\right)^2+\theta \sum_{i=1}^{n}\sum_{j\ne i}^{} \frac{x_i^2}{x_i-x_j}\partial_{x_i} .
\end{align*}
Its relation to geometric Brownian motions lies in the following simple observation. For $\theta=1$ this quantum Hamiltonian coincides with the infinitesimal generator of $n$ independent geometric Brownian motions with drift parameter $\frac{1}{2}$, $h$-transformed by the Vandermonde determinant, namely,
\begin{align*}
\mathcal{H}^{1}_{CS}=h_n^{-1}\circ L^{\frac{1}{2}}_{n} \circ h_n-c_{n,\frac{1}{2}}.
\end{align*}
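To verify this identity, recall that for a generator $L=\sum_i\frac{1}{2}a_i(x_i)\partial_{x_i}^2+b_i(x_i)\partial_{x_i}$ and a positive eigenfunction $h$ with $Lh=ch$ we have
\begin{align*}
h^{-1}\circ L\circ h=L+\sum_{i=1}^{n}a_i(x_i)\frac{\partial_{x_i}h}{h}\partial_{x_i}+c.
\end{align*}
With $L=L_n^{\frac{1}{2}}$, $a_i(x_i)=x_i^2$ and $h=h_n$, so that $\frac{\partial_{x_i}h_n}{h_n}=\sum_{j\ne i}\frac{1}{x_i-x_j}$, this gives
\begin{align*}
h_n^{-1}\circ L^{\frac{1}{2}}_{n}\circ h_n-c_{n,\frac{1}{2}}=\frac{1}{2}\sum_{i=1}^{n}\left(x_i^2\partial_{x_i}^2+x_i\partial_{x_i}\right)+\sum_{i=1}^{n}\sum_{j\ne i}^{}\frac{x_i^2}{x_i-x_j}\partial_{x_i}=\mathcal{H}^{1}_{CS},
\end{align*}
where in the last equality we used $\left(x\partial_{x}\right)^2=x^2\partial_x^2+x\partial_x$.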
We now show how one can construct a $\mathbb{GT}(n)$ valued process so that the $k^{th}$ level consists of $k$ geometric Brownian motions with drift parameter $n-k+\frac{1}{2}$ $h$-transformed by the Vandermonde determinant. The key ingredient is the following:
\begin{prop}\label{GeometricProp}
Consider a process $(X,Y)\in W^{n,n+1}((0,\infty))$ started according to the distribution $(\delta_x,\frac{n!h_n(y)}{h_{n+1}(x)}\mathbf{1}(y\prec x)dy)$ for $x\in \mathring{W}^{n+1}((0,\infty))$, with the $Y$ particles evolving as $n$ geometric Brownian motions with drift parameter $\alpha+1$ conditioned to not intersect via an $h$-transform by $h_n$, and the $X$ particles evolving as $n+1$ geometric Brownian motions with drift parameter $\alpha$ reflected off the $Y$ particles. Then, the $X$ particles are distributed as $n+1$ geometric Brownian motions with drift parameter $\alpha$ conditioned to not intersect via an $h$-transform by $h_{n+1}$, started from $x\in \mathring{W}^{n+1}((0,\infty))$.
\end{prop}
\begin{proof}
Take as the $L$-diffusion $L^{\alpha}$ and note that its speed measure is given by $m^{\alpha}(x)=2x^{2\alpha-2}$, so that the conjugate diffusion is $\widehat{L^{\alpha}}=L^{1-\alpha}$. Observe that, the assumptions $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are clearly satisfied.
First, note that an easy calculation gives that the $h$-transform of $\widehat{L^{\alpha}}$ by $\widehat{m^{\alpha}}^{-1}$ is an $L^{\alpha+1}$-diffusion, namely a geometric Brownian motion with drift parameter $\alpha+1$. Hence, an $h$-transform of $n$ $\widehat{L^{\alpha}}$-diffusions, killed when they intersect, by the eigenfunction $\prod_{i=1}^{n}\widehat{m^{\alpha}}^{-1}(y_i)h_n(y)$ gives $n$ geometric Brownian motions with drift parameter $\alpha+1$ conditioned to not intersect via an $h$-transform by $h_n$. The statement of the proposition is then obtained from an application of Theorem \ref{MasterDynamics}.
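To see the claim about the $h$-transform, note that $\widehat{m^{\alpha}}(x)=m^{1-\alpha}(x)=2x^{-2\alpha}$, so that, up to a constant, the transform is by $x^{2\alpha}$, and a direct computation gives
\begin{align*}
x^{-2\alpha}\circ L^{1-\alpha}\circ x^{2\alpha}=\frac{1}{2}x^2\frac{d^2}{dx^2}+(\alpha+1)x\frac{d}{dx}+\alpha=L^{\alpha+1}+\alpha,
\end{align*}
namely $x^{2\alpha}$ is an eigenfunction of $L^{1-\alpha}$ with eigenvalue $\alpha$ and the corresponding $h$-transform is an $L^{\alpha+1}$-diffusion.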
\end{proof}
\begin{rmk}
Observe that, under an application of the exponential map, the results of Section \ref{DriftingBrownianSection} give a generalization of Proposition \ref{GeometricProp} above.
\end{rmk}
Using the proposition above, it is straightforward (and we will not elaborate on it) to iterate in order to build the $\mathbb{GT}(n)$ valued process with the correct drift parameters on each level.
\begin{rmk}
The following geometric Brownian motion,
\begin{align*}
ds(t)=\sqrt{2}s(t)dW(t)-(u+u'+v+v') s(t)dt,
\end{align*}
also arises as a continuum scaling limit after we scale space by $1/N$ and send $N$ to infinity of the bilateral birth and death chain with birth rates $(x-u)(x-u')$ and death rates $(x+v)(x+v')$ considered by Borodin and Olshanski in \cite{BorodinOlshanski}.
\end{rmk}
\subsection{Squared Bessel processes and LUE matrix diffusions}
In this subsection we will first construct a process taking values in $\mathbb{GT}$ that is the analogue of the Brownian motion model for squared Bessel processes and has close connections to the LUE matrix valued diffusion. We also build a process in $\mathbb{GT}_\textbf{s}$ generalizing the construction of Cerenzia (after a ``squaring'' transformation of the state space) to all dimensions $d\ge2$. We begin with a definition:
\begin{defn}
The squared Bessel process of dimension $d$, abbreviated from now on as $BESQ(d)$ process, is the one dimensional diffusion with generator in $(0,\infty)$,
\begin{align*}
L^{(d)}=2x\frac{d^2}{dx^2}+d\frac{d}{dx}.
\end{align*}
The origin is an entrance boundary for $d\ge 2$, a regular boundary point for $0<d<2$ and an exit one for $d\le 0$. Define the index $\nu(d)=\frac{d}{2}-1$. The density of the speed measure of $L^{(d)}$ is $m_{\nu}(y)=c_{\nu}y^{\nu}$ and its scale function is $s_{\nu}(x)=\bar{c}_{\nu}x^{-\nu}$, for $\nu \ne 0$, and $s_0(x)=\log x$. Then from the results of the previous section its conjugate, the $\widehat{L^{(d)}}$ diffusion, is a $BESQ(2-d)$ process with the dual boundary condition. Moreover, the following relation will be key, see \cite{SurveyBessel}: A Doob $h$-transform of a $BESQ(2-d)$ process by its scale function $x^{\nu+1}$ gives a $BESQ(d+2)$ process.
\end{defn}
Note that, condition $(\mathbf{BC+})$ only holds for dimensions $d \in (-\infty,0] \cup [2,\infty)$; this is because for $0<d<2$, the origin is a regular boundary point and the diffusion coefficient degenerates (these values of the parameters will not be considered here). We use the following notation throughout, for $d \in (-\infty,0] \cup [2,\infty)$: we write $P_t^{n,(d)}$ for the Karlin-McGregor semigroup of $n$ $BESQ(d)$ processes killed when they intersect or when they hit the origin, in case $d \le 0$.
We start in the simplest setting of $W^{1,1}$ and consider the situation of a single $BESQ(2-d)$ process being reflected upwards off a $BESQ(d)$ process:
\begin{prop}\label{BESQ11}
Let $d\ge 2$. Consider a process $(X,Y)\in W^{1,1}([0,\infty))$ started according to the distribution $(\delta_x,\frac{(\nu+1)y^{\nu}}{x^{\nu+1}}1_{[0,x]}dy)$ for $x>0$ with the $Y$ particle evolving as a $BESQ(d)$ process and the $X$ particle as a $BESQ(2-d)$ process in $(0,\infty)$ reflected off the $Y$ particle. Then, the $X$ particle is distributed as a $BESQ(d+2)$ process started from $x$.
\end{prop}
\begin{proof}
We take as the $L$-diffusion a $BESQ(2-d)$ process. Then, the $\hat{L}$-diffusion is a $BESQ(d)$ process. Note that, the assumptions $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are satisfied. Since $\hat{h}_{1,1}^{(d)}(x)=1$ is invariant for $BESQ(d)$, the following is invariant for $BESQ(2-d)$,
\begin{align*}
h^{(d)}_{1,1}(x)=\int_{0}^{x}c_{\nu}y^{\nu}dy=\frac{c_{\nu}}{\nu+1}x^{\nu+1}.
\end{align*}
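Indeed, harmonicity of $h^{(d)}_{1,1}$ for the $BESQ(2-d)$ generator is immediate, since with $\nu=\frac{d}{2}-1$,
\begin{align*}
L^{(2-d)}x^{\nu+1}=2x(\nu+1)\nu x^{\nu-1}+(2-d)(\nu+1)x^{\nu}=(\nu+1)\left(2\nu+2-d\right)x^{\nu}=0.
\end{align*}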
Then, as already remarked above, see \cite{SurveyBessel}, the $h$-transformed process with semigroup $P_t^{1,(2-d),h^{(d)}_{1,1}}$ is exactly a $BESQ(d+2)$ process. The analogue of Theorem \ref{MasterDynamics} in $W^{n,n}$ gives the statement of the proposition.
\end{proof}
We expect that the restriction to $d\ge 2$ is not necessary for the result to hold (it should be true for $d>0$). In fact, Corollary \ref{Bes3} corresponds to $d=1$, after we perform the transformation $x \mapsto \sqrt{x}$, which in particular maps $BESQ(1)$ and $BESQ(3)$ to reflecting Brownian motion and $BES(3)$ respectively.
We now move on to an arbitrary number of particles. Define the functions,
\begin{align*}
\hat{h}^{(d)}_{n,n}(x)&=\prod_{1 \le i < j \le n}^{}(x_j-x_i)=\det\left(x_i^{j-1}\right)_{i,j=1}^n \ , \\
\hat{h}_{n,n+1}^{(d)}(x)&=\prod_{1 \le i < j \le n}^{}(x_j-x_i)\prod_{i=1}^{n}x_i^{\nu+1}=\det\left(x_i^{j+\nu}\right)_{i,j=1}^n \ .
\end{align*}
Moreover, let $\Lambda_{n-1,n}$ and $\Lambda_{n,n}$ be the following positive kernels, defined as in (\ref{PreIntertwiningKernel}), where we recall that $m_{\nu(d)}(\cdot)$ is the speed measure density with respect to Lebesgue measure of a $BESQ(d)$ process:
\begin{align*}
(\Lambda_{n-1,n}f)(x) &= \int_{W^{n-1,n}(x)}^{}\prod_{i=1}^{n-1}m_{\nu(2-d)}(y_i)f(x,y)dy,\\
(\Lambda_{n,n}f)(x) &= \int_{W^{n,n}(x)}^{}\prod_{i=1}^{n}m_{\nu(d)}(y_i)f(x,y)dy.
\end{align*}
An easy calculation gives that $h^{(d)}_{n-1,n}(x)=c_{n-1,n}(\nu)(\Lambda_{n-1,n}\Pi_{n-1,n}\hat{h}_{n-1,n}^{(d)})(x)$ is equal to $\hat{h}^{(d)}_{n,n}(x)$ and $h_{n,n}^{(d)}(x)=c_{n,n}(\nu)(\Lambda_{n,n}\Pi_{n,n}\hat{h}_{n,n}^{(d)})(x)$ is equal to $\hat{h}_{n,n+1}^{(d)}(x)$, where $c_{n-1,n}(\nu), c_{n,n}(\nu)$ are explicit constants whose exact values are not important in what follows. Then we have:
\begin{prop}\label{BesqProp1}
Let $d\ge 2$. Consider a process $(X,Y)\in W^{n,n+1}([0,\infty))$ started according to the following distribution $(\delta_x,\frac{n!\prod_{1 \le i < j \le n}^{}(y_j-y_i)}{\prod_{1 \le i < j \le n+1}^{}(x_j-x_i)}\mathbf{1}(y\prec x)dy)$ for $x\in \mathring{W}^{n+1}([0,\infty))$ with the $Y$ particles evolving as $n$ non-intersecting $BESQ(d+2)$ processes and the $X$ particles evolving as $n+1$ $BESQ(d)$ processes being reflected off the $Y$ particles. Then, the $X$ particles are distributed as $n+1$ non-intersecting $BESQ(d)$ processes started from $x\in \mathring{W}^{n+1}([0,\infty))$.
\end{prop}
\begin{proof}
Take as the $L$-diffusion a $BESQ(d)$ process. Then, the $\hat{L}$-diffusion is a $BESQ(2-d)$ process. Note that, the assumptions $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are satisfied. We use the positive harmonic function $\hat{h}_{n,n+1}^{(d)}(x)$ for the semigroup $P_t^{n,(2-d)}$ of $n$ independent $BESQ(2-d)$ processes killed when they hit 0 or when they intersect, which transforms them into $n$ non-intersecting $BESQ(d+2)$ processes.
Finally observe that, $P_t^{n+1,(d),h^{(d)}_{n,n+1}}$ is exactly the semigroup of $n+1$ $BESQ(d)$ processes conditioned to never intersect (see e.g. \cite{O Connell}). Then, Theorem \ref{MasterDynamics} gives the statement of the proposition.
\end{proof}
\begin{prop}\label{BesqProp2}
Let $d\ge 2$. Consider a process $(X,Y)\in W^{n,n}([0,\infty))$ started according to the following distribution $\big(\delta_x,\frac{c_{n,n}(\nu)\hat{h}_{n,n}^{(d)}(y)\prod_{i=1}^{n}m_{\nu(d)}(y_i)}{h_{n,n}^{(d)}(x)}\mathbf{1}(y\prec x)dy\big)$ for $x \in \mathring{W}^{n}([0,\infty))$ with the $Y$ particles evolving as $n$ non-intersecting $BESQ(d)$ processes and the $X$ particles evolving as $n$ $BESQ(2-d)$ processes being reflected off the $Y$ particles. Then, the $X$ particles are distributed as $n$ non-intersecting $BESQ(d+2)$ processes started from $x\in \mathring{W}^{n}([0,\infty))$.
\end{prop}
\begin{proof}
Take as the $L$-diffusion a $BESQ(2-d)$ process. Then, the $\hat{L}$-diffusion is a $BESQ(d)$ process. Note that, the assumptions $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are satisfied. We use the positive harmonic function $\hat{h}^{(d)}_{n,n}(x)$ for the semigroup $P_t^{n,(d)}$ of $n$ independent $BESQ(d)$ processes killed when they intersect. Furthermore note that, $P_t^{n,(2-d),h_{n,n}^{(d)}}$ is the semigroup of $n$ $BESQ(d+2)$ processes conditioned to never intersect (the transformation by $h_{n,n}^{(d)}$ corresponds to transforming the $BESQ(2-d)$ processes to $BESQ(d+2)$ and then conditioning these to never intersect). Then, the analogue of Theorem \ref{MasterDynamics} in $W^{n,n}$ gives the statement.
\end{proof}
It is possible to start both of these processes from the origin via the following explicit entrance law for $n$ non-intersecting $BESQ(d)$ processes (see for example \cite{O Connell}),
\begin{align*}
\mu_t^{n,(d)}(dx)=C_{n,d}t^{-n(n+\nu)}\prod_{1 \le i < j \le n}^{}(x_j-x_i)^2\prod_{i=1}^{n}x_i^{\nu}e^{-\frac{1}{2t}x_i}dx.
\end{align*}
Defining the two entrance laws,
\begin{align*}
\nu_t^{n,n,\hat{h}_{n,n}^{(d)}}(dx,dy)&=\mu_t^{n,(d+2)}(dx) \frac{c_{n,n}(\nu)\hat{h}_{n,n}^{(d)}(y)\prod_{i=1}^{n}m_{\nu(d)}(y_i)}{h_{n,n}^{(d)}(x)}\mathbf{1}(y\prec x) dy,\\
\nu_t^{n,n+1,\hat{h}^{(d)}_{n,n+1}}(dx,dy)&=\mu_t^{n+1,(d)}(dx) \frac{n!\prod_{1 \le i < j \le n}^{}(y_j-y_i)}{\prod_{1 \le i < j \le n+1}^{}(x_j-x_i)}\mathbf{1}(y\prec x)dy,
\end{align*}
for the processes with semigroups corresponding to the pair $(X,Y)$ described in Propositions \ref{BesqProp2} and \ref{BesqProp1} respectively, we immediately arrive at the following proposition in analogy to the case of Dyson's Brownian motion:
\begin{prop} \label{superpositionref2} \textbf{(a)} Let $d\ge 2$. Consider a process $(X,Y)\in W^{n,n+1}([0,\infty))$ started according to the entrance law $\nu_t^{n,n+1,\hat{h}^{(d)}_{n,n+1}}(dx,dy)$ with the $Y$ particles evolving as $n$ non-intersecting $BESQ(d+2)$ processes and the $X$ particles evolving as $n+1$ $BESQ(d)$ processes being reflected off the $Y$ particles. Then, the $X$ particles are distributed as $n+1$ non-intersecting $BESQ(d)$ processes issuing from the origin.\\
\textbf{(b)} Let $d\ge 2$. Consider a process $(X,Y)\in W^{n,n}([0,\infty))$ started according to the entrance law $\nu_t^{n,n,\hat{h}_{n,n}^{(d)}}(dx,dy)$ with the $Y$ particles evolving as $n$ non-intersecting $BESQ(d)$ processes and the $X$ particles evolving as $n$ $BESQ(2-d)$ processes being reflected off the $Y$ particles. Then, the $X$ particles are distributed as $n$ non-intersecting $BESQ(d+2)$ processes issuing from the origin.
\end{prop}
Making use of the proposition above we build two processes in Gelfand-Tsetlin patterns. First, the process in $\mathbb{GT}(n)$. To do this, we make repeated use of part \textbf{(a)} of Proposition \ref{superpositionref2} to consistently concatenate two-level processes. Note that the dimension $d$ of the $BESQ(d)$ processes decreases by $2$ at each stage that we increase the number of particles. So we fix the depth $n$ of the Gelfand-Tsetlin pattern and the dimension $d^*$ of the $BESQ$ processes at the bottom of the pattern. Then, we build a consistent process,
\begin{align*}
\left(\mathbb{X}^{n,(d^*)}(t);t\ge 0\right)=(X_i^{(k)}(t);t\ge 0, 1 \le i\le k \le n) ,
\end{align*}
taking values in $\mathbb{GT}(n)$ with the joint dynamics described as follows: $X_1^{(1)}$ evolves as a $BESQ(d^*+2(n-1))$ process. Moreover, for $k\ge 2$, the particles at level $k$ evolve as $k$ independent $BESQ(d^*+2(n-k))$ processes reflecting off the $k-1$ particles at the $(k-1)^{th}$ level to maintain the interlacing. Hence, from Proposition \ref{multilevelproposition} (see discussion following it regarding the entrance laws) we obtain:
\begin{prop}
Let $d^* \ge 2$. If $\mathbb{X}^{n,{(d^*)}}$ is started from the origin according to the entrance law then the projection onto the $k^{th}$ level process $X^{(k)}$ is distributed as $k$ $BESQ(d^*+2(n-k))$ processes conditioned to never intersect.
\end{prop}
By making alternating use of parts \textbf{(a)} and \textbf{(b)} of Proposition \ref{superpositionref2} we construct a consistent process
\begin{align*}
\left(\mathbb{X}^{n,(d)}(t);t \ge 0\right)=(X^{(1)}(t)\prec \hat{X}^{(1)}(t)\prec \cdots \prec X^{(n)}(t)\prec \hat{X}^{(n)}(t);t\ge 0)
\end{align*}
in $\mathbb{GT}_\textbf{s}(n)$, for which Proposition \ref{CerenziaGelfand} can be viewed as the $d=1$ case, and whose joint dynamics are given as follows: $X_1^{(1)}$ evolves as a $BESQ(d)$ process. Then, for any $k$, the $k$ particles corresponding to $\hat{X}^{(k)}$ evolve as $k$ independent $BESQ(2-d)$ processes reflecting off the particles corresponding to $X^{(k)}$ in order for the interlacing to be maintained. Moreover, for $k\ge 2$ the $k$ particles corresponding to $X^{(k)}$ evolve as $k$ independent $BESQ(d)$ processes reflecting off the particles corresponding to $\hat{X}^{(k-1)}$ in order to maintain the interlacing.
Then, it is a consequence of the symplectic analogue of Proposition \ref{multilevelproposition} (involving an entrance law, see the discussion following Proposition \ref{multilevelproposition}) that:
\begin{prop} \label{symplecticBESQ}
Let $d\ge 2$. If $\mathbb{X}^{n,{(d)}}$ is started from the origin then the projections onto $X^{(k)}$ and $\hat{X}^{(k)}$ are distributed as $k$ non-intersecting $BESQ(d)$ and $k$ non-intersecting $BESQ(d+2)$ processes respectively started from the origin.
\end{prop}
\paragraph{Connection to Wishart processes} We now spell out the connection between the processes constructed above and matrix valued diffusion processes by first considering the connection to $\mathbb{X}^{n,(d^*)}$, for $d^*$ even. Let $d^*=2$ for simplicity.
Take $\left(A(t);t\ge 0\right)$ to be an $n\times n$ complex Brownian matrix and consider $\left(H(t);t\ge 0\right)=\left(A(t)A(t)^*;t \ge 0\right)$. This is called the Wishart process and was first studied in the real symmetric case by Marie-France Bru in \cite{Wishart}, see also \cite{Demni} for a detailed study in the Hermitian setting and some of its properties. Then, as is well known (first proven in \cite{O Connell}), $\left(\lambda^{(k)}(t);t\ge0\right)$, the eigenvalues of the $k\times k$ minor of $\left(H(t);t \ge0 \right)$, evolve as $k$ non-colliding $BESQ(2(n-k+1))$ processes. These eigenvalues then interlace with $\left(\lambda^{(k-1)}(t);t\ge0\right)$, which evolve as $k-1$ non-colliding $BESQ(2(n-k+1)+2)$ processes, with the \textit{fixed} time $T$ conditional density of $\lambda^{(k-1)}(T)$ given $\lambda^{(k)}(T)$ on $W^{k-1,k}(\lambda^{(k)}(T))$ being $\Lambda^{\hat{h}^{(d)}_{k-1,k}}_{k-1,k}\left(\lambda^{(k)}(T),\cdot\right)$ (see Section 3 of \cite{FerrariFrings}, Section 3.3 of \cite{ForresterNagao}). Inductively (since for fixed $T$, $\lambda^{(n-k)}(T)$ is a Markov chain in $k$, see Section 4 of \cite{FerrariFrings}) this gives that the distribution at \textit{fixed} times $T$ of the vector $(\lambda^{(1)}(T),\cdots,\lambda^{(n)}(T))$ is uniform over the set of patterns in $\mathbb{GT}(n)$ with bottom level $\lambda^{(n)}(T)$. Moreover, by making use of this coincidence along \textit{space-like paths} one can write down the dynamical correlation kernel (along space-like paths) of the process we constructed from Theorem 1.3 of \cite{FerrariFringsPartial}.
\begin{rmk}
Although $\mathbb{X}^{n,(2)}$ and the minor process described in the preceding paragraph on single levels or at fixed times coincide, the interaction between consecutive levels of the minor process should be different from local hard reflection, although the dynamics of consecutive levels of the $LUE$ process have not been studied yet (as far as we know).
\end{rmk}
We now describe the random matrix model that parallels $\mathbb{X}_s^{n,{(d)}}$ for $d$ even. Start with a row vector $\left(A^{(d)}(t);t \ge 0\right)$ of $d/2$ independent standard complex Brownian motions, then $\left(X^{(d)}(t);t \ge 0\right)=\left(A^{(d)}(t)A^{(d)}(t)^*;t \ge 0\right)$ evolves as a one dimensional $BESQ(d)$ diffusion (this is really just the definition of a $BESQ(d)$ process). Now, add another independent complex Brownian motion to make $\left(A^{(d)}(t);t\ge 0\right)$ a row vector of length $d/2+1$. Then, $\left(X^{(d)}(t);t \ge 0\right)=\left(A^{(d)}(t)A^{(d)}(t)^*;t \ge 0\right)$ evolves as a $BESQ(d+2)$ process interlacing with the aforementioned $BESQ(d)$. At fixed times, the fact that the conditional distribution of the $BESQ(d)$ process given the position $x$ of the $BESQ(d+2)$ process is proportional to $y^{\frac{d}{2}-1} 1_{[0,x]}$ follows from the conditional laws in \cite{DiekerWarren} (see also \cite{Defosseux}) and will be spelled out below. Now, make $\left(A^{(d)}(t);t \ge 0\right)$ a $2 \times \left(\frac{d}{2}+1\right)$ matrix by adding a row of $d/2+1$ independent complex Brownian motions; then the eigenvalues of $\left(X^{(d)}(t);t\ge 0\right)=\left(A^{(d)}(t)A^{(d)}(t)^*;t \ge 0\right)$ evolve as $2$ $BESQ(d)$ processes which interlace with the $BESQ(d+2)$. We can continue this construction indefinitely by successively adding columns and rows of independent complex Brownian motions. As before, this eigenvalue process will coincide with $\mathbb{X}_s^{n,{(d)}}$ on single levels as stochastic processes but also at \textit{fixed} times as distributions of whole interlacing arrays. We elaborate a bit on this fixed time coincidence. For simplicity, let $T=1$. Let $A$ be an $n\times k$ matrix of independent standard complex normal random variables. Let $A'$ be the $n \times (k+1)$ matrix obtained from $A$ by adding to it a column of independent standard complex normal random variables. 
Let $\lambda$ be the $n$ eigenvalues of $AA^*$ and $\lambda'$ be the $n$ eigenvalues of $A'(A')^*$. We want the conditional density $\rho_{\lambda|\lambda'}(\lambda)$, of $\lambda$ given $\lambda'$, with respect to Lebesgue measure. From \cite{DiekerWarren} (see also \cite{Defosseux}) the conditional density $\rho_{\lambda'|\lambda}(\lambda')$ is given by,
\begin{align*}
\rho_{\lambda'|\lambda}(\lambda')=\frac{\prod_{1\le i < j \le n}^{}(\lambda'_j-\lambda'_i)}{\prod_{1\le i < j \le n}^{}(\lambda_j-\lambda_i)}e^{-\sum_{i=1}^{n}(\lambda'_i-\lambda_i)}\textbf{1}(\lambda\prec \lambda') \ .
\end{align*}
Hence, by Bayes' rule, and recalling the law of the $LUE$ ensemble, we have,
\begin{align*}
\rho_{\lambda|\lambda'}(\lambda)=\left[\frac{\rho_{\lambda}}{\rho_{\lambda'}}\rho_{\lambda'|\lambda}\right](\lambda)=\frac{\prod_{1\le i < j \le n}^{}(\lambda_j-\lambda_i)\prod_{i=1}^{n}\lambda_i^{\frac{d}{2}-1}}{\prod_{1\le i < j \le n}^{}(\lambda'_j-\lambda'_i)\prod_{i=1}^{n}\lambda_i'^{\frac{d}{2}}}\textbf{1}(\lambda\prec \lambda') \ .
\end{align*}
Similarly to the case of $\mathbb{GT}$, by induction this gives fixed time coincidence of the two $\mathbb{GT}_\textbf{s}$ valued processes.
\subsection{Diffusions associated with orthogonal polynomials}
Here, we consider three diffusions in Gelfand-Tsetlin patterns associated with the classical orthogonal polynomials, Hermite, Laguerre and Jacobi. Although the one dimensional diffusion processes these are built from, namely the Ornstein-Uhlenbeck, Laguerre and Jacobi diffusions, are special cases of the Sturm-Liouville diffusions with discrete spectrum that we will consider in the next subsection, they are arguably the most interesting examples, with close connections to random matrices, and so we consider them separately (for the classification of one dimensional diffusion operators with polynomial eigenfunctions see \cite{Mazet} and for a nice exposition Section 2.7 of \cite{MarkovDiffusion}). One of the common features of the Karlin-McGregor semigroups associated with them is that they all have the Vandermonde determinant as their ground state (this follows from Corollary \ref{minimal}). At the end of this subsection we describe the connection to eigenvalue processes of minors of matrix diffusions.
\begin{defn}
The Ornstein-Uhlenbeck (OU) diffusion process in $I=\mathbb{R}$ has generator and SDE description,
\begin{align*}
L_{OU}&=\frac{1}{2}\frac{d^2}{dx^2}-x\frac{d}{dx},\\
dX(t)&=dB(t)-X(t)dt,
\end{align*}
with $m_{OU}(x)=e^{-x^2}$ and $-\infty$ and $\infty$ both natural boundaries. Its conjugate diffusion process $\hat{L}_{OU}$ has generator and SDE description,
\begin{align*}
\hat{L}_{OU}&=\frac{1}{2}\frac{d^2}{dx^2}+x\frac{d}{dx},\\
d\hat{X}(t)&=dB(t)+\hat{X}(t)dt,
\end{align*}
and again $-\infty$ and $\infty$ are both natural boundaries and note the drift away from the origin.
\end{defn}
\begin{defn}
The Laguerre $Lag(\alpha)$ diffusion process in $I=[0,\infty)$ has generator and SDE description,
\begin{align*}
L_{Lag(\alpha)}&=2x\frac{d^2}{dx^2}+(\alpha-2x)\frac{d}{dx},\\ dX(t)&=2\sqrt{X(t)}dB(t)+(\alpha-2X(t))dt,
\end{align*}
with $m_{Lag(\alpha)}(x)=x^{\alpha/2-1}e^{-x}$, with $\infty$ being a natural boundary and, for $\alpha \ge 2$, the point $0$ an entrance boundary. We will only be concerned with such values of $\alpha$ here.
\end{defn}
\begin{defn}
The Jacobi diffusion process $Jac(\beta,\gamma)$ in $I=[0,1]$ has generator and SDE description,
\begin{align*}
L_{Jac(\beta,\gamma)}&=2x(1-x)\frac{d^2}{dx^2}+2(\beta-(\beta+\gamma)x)\frac{d}{dx},\\
dX(t)&=2\sqrt{X(t)(1-X(t))}dB(t)+2(\beta-(\beta+\gamma)X(t))dt,
\end{align*}
with $m_{Jac(\beta,\gamma)}(x)=x^{\beta-1}(1-x)^{\gamma-1}$ and $0$ and $1$ being entrance boundaries for $\beta, \gamma \ge 1$. We will only be concerned with such values of $\beta$ and $\gamma$ in this section.
\end{defn}
The restriction of parameters $\alpha, \beta, \gamma$ for $Lag(\alpha)$ and $Jac(\beta,\gamma)$ is so that $(\mathbf{BC}+)$ is satisfied (for a certain range of the parameters the points $0$ and/or $1$ are regular boundaries in which case $(\mathbf{BC}+)$ is no longer satisfied due to the fact that the diffusion coefficients degenerate at the boundary points).
We are interested in the construction of a process in $\mathbb{GT}(N)$, so that in particular at each stage the number of particles increases by one. We start in the simplest setting of $W^{1,2}$ and in particular the Ornstein-Uhlenbeck case to explain some subtleties. We will then treat all cases uniformly.
Consider a two-level process $(X,Y)$ with the $X$ particles evolving as two $OU$ processes being reflected off the $Y$ particle which evolves as an $\hat{L}_{OU}$ diffusion. Then, since this is an honest Markov process, Theorem \ref{MasterDynamics} (whose conditions are easily seen to be satisfied) gives that if started appropriately, the projection on the $X$ particles is Markovian with semigroup $P_t^{2,OU,\bar{h}_2}$. Here, $P_t^{2,OU,\bar{h}_2}$ is the Doob $h$-transformed semigroup of two independent $OU$ processes killed when they intersect by the harmonic function $\bar{h}_2$:
\begin{align*}
\bar{h}_2(x_1,x_2)=\int_{x_1}^{x_2}\hat{m}_{OU}(y)dy=s_{OU}(x_2)-s_{OU}(x_1),
\end{align*}
where $s_{OU}(x)=e^{x^2}F(x)$ is the scale function of the $OU$ process and $F(x)=e^{-x^2}\int_{0}^{x}e^{y^2}dy$ is the Dawson function. We note that, although this process is built from two $OU$ processes being kept apart (more precisely this diffusion lives in $\mathring{W}^2$), it is \textit{not two independent $OU$ processes conditioned to never intersect}.
However, we can initially $h$-transform the $\hat{L}_{OU}$ process to make it an $OU$ process with the $h$-transform given by $\hat{h}_1(x)=\hat{m}^{-1}_{OU}(x)$ with eigenvalue $-1$. Now, note that:
\begin{align*}
h_2(x_1,x_2)=\int_{x_1}^{x_2}\hat{m}_{OU}(y)\hat{m}^{-1}_{OU}(y)dy=(x_2-x_1).
\end{align*}
This, as we see later in Corollary \ref{minimal} is the ground state of the semigroup associated to two independent $OU$ processes killed when they intersect.
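This can also be checked directly: since $h_2$ is linear the second order terms vanish, and
\begin{align*}
\sum_{i=1}^{2}\left(\frac{1}{2}\partial_{x_i}^2-x_i\partial_{x_i}\right)(x_2-x_1)=x_1-x_2=-h_2(x_1,x_2),
\end{align*}
so that $h_2$ is a strictly positive, in $\mathring{W}^2$, eigenfunction with eigenvalue $-1$.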
Thus, if we consider a two-level process $(X,Y)$ with the $X$ particles evolving as 2 $OU$ processes reflected off a single $OU$ process, we get from Theorem \ref{MasterDynamics} that the projection on the $X$ particles is distributed as two independent $OU$ processes conditioned to never intersect via a Doob $h$-transform by $h_2$.
Similarly, an easy calculation gives that we can $h$-transform the $\hat{L}_{Lag(\alpha)}$-diffusion by $\hat{m}^{-1}_{Lag(\alpha)}(x)$, with eigenvalue $-2$, to make it a $Lag(\alpha+2)$ diffusion, and the $\hat{L}_{Jac(\beta,\gamma)}$-diffusion by $\hat{m}^{-1}_{Jac(\beta,\gamma)}(x)$, with eigenvalue $-2(\beta+\gamma)$, to make it a $Jac(\beta+1,\gamma+1)$ diffusion, and thus to obtain the analogous results. Furthermore, this generalizes to arbitrary $n$. First, let
\begin{align*}
h_{n+1}(x)&=\frac{1}{n!}\prod_{1 \le i < j \le n+1}^{}(x_j-x_i)
\end{align*}
denote the Vandermonde determinant. By Corollary \ref{minimal}, $h_{n+1}$ is the ground state of the semigroup associated to $n+1$ independent copies of an $OU$ or $Lag(\alpha)$ or $Jac(\beta,\gamma)$ diffusion killed when they intersect.
\begin{prop}
Assume the constants $\alpha, \beta, \gamma$ satisfy $\alpha \ge 2, \beta \ge 1, \gamma \ge 1$. Let $(X,Y)$ be a two-level diffusion process in $W^{n,n+1}(I^{\circ})$ started according to the distribution $(\delta_x,\frac{n!\prod_{1 \le i < j \le n}^{}(y_j-y_i)}{\prod_{1 \le i < j \le n+1}^{}(x_j-x_i)}\mathbf{1}(y\prec x)dy)$, where $x\in \mathring{W}^{n+1}(I)$, with $X$ and $Y$ evolving as follows:\\
\textbf{OU}: $X$ as $n+1$ independent $OU$ processes reflected off $Y$ which evolves as $n$ $OU$ processes conditioned to never intersect via a Doob $h$-transform by $h_n$, \\
\textbf{Lag}: $X$ as $n+1$ independent $Lag(\alpha)$ processes reflected off $Y$ which evolves as $n$ $Lag(\alpha+2)$ processes conditioned to never intersect via a Doob $h$-transform by $h_n$,\\
\textbf{Jac}: $X$ as $n+1$ independent $Jac(\beta,\gamma)$ processes reflected off $Y$ which evolves as $n$ $Jac(\beta+1,\gamma+1)$ processes conditioned to never intersect via a Doob $h$-transform by $h_n$.\\
Then, the $X$ particles are distributed as,\\
\textbf{OU}: $n+1$ $OU$ processes conditioned to never intersect via a Doob $h$-transform by $h_{n+1}$,\\
\textbf{Lag}: $n+1$ $Lag(\alpha)$ processes conditioned to never intersect via a Doob $h$-transform by $h_{n+1}$,\\
\textbf{Jac}: $n+1$ $Jac(\beta,\gamma)$ processes conditioned to never intersect via a Doob $h$-transform by $h_{n+1}$,\\
started from $x$.
\end{prop}
\begin{proof}
We take as the $L$-diffusion an $OU$ or $Lag(\alpha)$ or $Jac(\beta,\gamma)$ diffusion respectively. Note that, the assumptions $(\mathbf{R})$, $(\mathbf{BC+})$ and $(\mathbf{YW})$ are satisfied for $Lag(\alpha)$ for $\alpha \ge 2$ and for $Jac(\beta,\gamma)$ for $\beta \ge 1, \gamma \ge 1$ (also these assumptions are clearly satisfied for an $OU$ process). Furthermore, observe that with,
\begin{align*}
\hat{h}_n(x)&=\prod_{i=1}^{n}\hat{m}^{-1}(x_i)\prod_{1 \le i < j \le n}^{}(x_j-x_i),
\end{align*}
we have:
\begin{align*}
h_{n+1}(x)&=(\Lambda_{n,n+1}\Pi_{n,n+1}\hat{h}_n)(x)=\frac{1}{n!}\prod_{1 \le i < j \le n+1}^{}(x_j-x_i).
\end{align*}
Moreover, note that the semigroup of $n$ independent copies of an $\hat{L}$-diffusion (namely either an $\hat{L}_{OU}$ or $\hat{L}_{Lag(\alpha)}$ or $\hat{L}_{Jac(\beta,\gamma)}$ diffusion) killed when they intersect, $h$-transformed by $\hat{h}_n$, is exactly the semigroup corresponding to $Y$ in the statement of the proposition. Finally, making use of Theorem \ref{MasterDynamics} we obtain the required statement.
\end{proof}
It is rather easy to see how to iterate this construction to obtain a consistent process in a Gelfand-Tsetlin pattern. To be precise, let us fix $N$ the depth of the pattern and constants $\alpha \ge 2$, $\beta \ge 1$ and $\gamma \ge 1$ that will be the parameters of the processes at the bottom row. Then, in the Ornstein-Uhlenbeck case, level $k$ evolves as $k$ independent $OU$ processes reflected off the particles at level $k-1$. In the Laguerre case, level $k$ evolves as $k$ independent $Lag(\alpha+2(N-k))$ processes reflected off the particles at level $k-1$. Finally, in the Jacobi case, level $k$ evolves as $k$ independent $Jac(\beta+(N-k),\gamma+(N-k))$ processes reflected off the particles at level $k-1$. The result giving the distribution of the projection on each level (under certain initial conditions) is completely analogous to previous sections and we omit the statement.
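As a concrete instance of this bookkeeping (a direct substitution into the rule $Lag(\alpha+2(N-k))$ above), for a Laguerre pattern of depth $N=3$ the levels carry the parameters:

```latex
\text{level } 1:\ Lag(\alpha+4),\qquad
\text{level } 2:\ Lag(\alpha+2),\qquad
\text{level } 3:\ Lag(\alpha),
```

so that the bottom row indeed runs the prescribed $Lag(\alpha)$ dynamics.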
\begin{rmk}
In the Laguerre case we can build in a completely analogous way a process in $\mathbb{GT}_\textbf{s}$, analogous to the $BESQ(d)$ case of Proposition \ref{symplecticBESQ}. In the Jacobi case (with $\beta,\gamma \ge 1$) we can build a process $(X,Y)\in W^{n,n}((0,1))$ started from the origin (according to the entrance law) with the $Y$ particles evolving as $n$ non-intersecting $Jac(\beta,\gamma+1)$ processes and the $X$ particles as $n$ $Jac(1-\beta,\gamma)$ processes in $(0,1)$ reflected off the $Y$ particles. Then, the $X$ particles are distributed as $n$ non-intersecting $Jac(\beta+1,\gamma)$ processes started from the origin.
\end{rmk}
\paragraph{Connection to random matrices}
We now make the connection to the eigenvalues of matrix valued diffusion processes associated with orthogonal polynomials. The relation for the Ornstein-Uhlenbeck and $Lag(d)$ processes we constructed is the same as for Brownian motions and $BESQ(d)$ processes; the only difference is that we replace the complex Brownian motions by complex Ornstein-Uhlenbeck processes in the matrix valued diffusions, which introduces a restoring $-x$ drift both in the matrix valued diffusion processes and in the $SDEs$ for the eigenvalues.
We now turn to the Jacobi minor process. First, following Doumerc's PhD thesis \cite{Doumerc} (see in particular Section 9.4.3 therein) we construct the matrix Jacobi diffusion as follows. Let $\left(U(t),t\ge0\right)$ be a Brownian motion on $\mathbb{U}(N)$, the manifold of $N \times N$ unitary matrices, and let $p+q=N$. Let $n$ be such that $n\le p,q$ and consider $\left(H(t),t\ge0\right)$, the projection onto the first $n$ rows and $p$ columns of $\left(U(t),t\ge 0\right)$. Then $\left(J^{p,q}(t),t\ge0\right)=\left(H(t)H(t)^*,t\ge 0\right)$ is defined to be the $n \times n$ matrix Jacobi diffusion (with parameters $p,q$). Its eigenvalues evolve as $n$ non-colliding $Jac(p-(n-1),q-(n-1))$ diffusions. Its $k\times k$ minor is built by projecting onto the first $k$ rows of $\left(U(t),t\ge0 \right)$ and has eigenvalues $\left(\lambda^{(k)}(t),t\ge 0\right)$ that evolve as $k$ non-colliding $Jac(p-(n-1)+n-k,q-(n-1)+n-k)$ diffusions. For fixed times $T$, if $\left(U(t),t\ge0\right)$ is started according to Haar measure, the distribution of $\lambda^{(k-1)}(T)$ given $\lambda^{(k)}(T)$ on $W^{k-1,k}(\lambda^{(k)}(T))$ is $\Lambda_{k-1,k}^{\hat{h}_{k-1}}\left(\lambda^{(k)}(T),\cdot\right)$; see e.g. \cite{ForresterNagao}. For the connection to the process in $W^{n,n}$ described in the remark, we could instead have projected onto the first $n$ rows and $p+1$ columns of $\left(U(t),t\ge 0\right)$; denoting this projection by $\left(H'(t),t\ge 0\right)$, the process $\left(J^{p+1,q-1}(t),t\ge 0\right)=\left(H'(t)(H'(t))^*,t\ge 0\right)$ has eigenvalues evolving as $n$ non-colliding $Jac(p-(n-1)+1,q-(n-1)-1)$ diffusions, and these interlace with the eigenvalues of $\left(J^{p,q}(t),t\ge 0\right)$.
\begin{rmk}
Non-colliding Jacobi diffusions have also appeared in the work of Gorin \cite{Gorin} as the scaling limits of some natural Markov chains on the Gelfand-Tsetlin graph in relation to the harmonic analysis of the infinite unitary group $\mathbb{U}(\infty)$.
\end{rmk}
\subsection{Diffusions with discrete spectrum}
\subsubsection{Spectral expansion and ground state of the Karlin-McGregor semigroup}
In this subsection, we show how the diffusions associated with the classical orthogonal polynomials and the Brownian motions in an interval are special cases of a wider class of one dimensional diffusion processes with explicitly known minimal eigenfunctions for the Karlin-McGregor semigroups associated with them. We start by considering the diffusion process generator $L$ with \textit{discrete spectrum} $0\ge -\lambda_1>-\lambda_2>\cdots$ (the absence of natural boundaries is sufficient for this, see for example Theorem 3.1 of \cite{McKean}) with speed measure $m$ and transition density given by $p_t(x,dy)=q_t(x,y)m(dy)$ where,
\begin{align*}
L\phi_k(x)&=-\lambda_k\phi_k(x),\\
q_t(x,y)&=\sum_{k=1}^{\infty}e^{-\lambda_kt}\phi_k(x)\phi_k(y).
\end{align*}
The eigenfunctions $\{\phi_k\}_{k\ge 1}$ form an orthonormal basis of $L^2(I,m(dx))$ and the expansion $\sum_{k=1}^{\infty}e^{-\lambda_kt}\phi_k(x)\phi_k(y)$ converges uniformly on compact squares in $I^{\circ}\times I^{\circ}$. Furthermore, the Karlin-McGregor semigroup transition density with respect to $\prod_{i=1}^{n}m(dy_i)$ is given by,
\begin{align*}
\det(q_t(x_i,y_j))_{i,j=1}^n.
\end{align*}
We now obtain an analogous spectral expansion for this. We start by expanding the determinant to get,
\begin{align*}
\det(q_t(x_i,y_j))_{i,j=1}^n&=\sum_{\sigma \in \mathfrak{S}_n}^{}sign(\sigma)\prod_{i=1}^{n}q_t(x_i,y_{\sigma(i)})\\ &=\sum_{k_1,\cdots,k_n}^{}\prod_{i=1}^{n}\phi_{k_i}(x_i)e^{-\lambda_{k_i}t}\sum_{\sigma \in \mathfrak{S}_n}^{}sign(\sigma)\prod_{i=1}^{n}\phi_{k_i}(y_{\sigma(i)})\\
&=\sum_{k_1,\cdots,k_n}^{}\prod_{i=1}^{n}\phi_{k_i}(x_i)e^{-\lambda_{k_i}t}\det(\phi_{k_i}(y_j))_{i,j=1}^n.
\end{align*}
Write $\phi_{\mathsf{k}}(y)$ for $\det(\phi_{k_i}(y_j))_{i,j=1}^n$ for an $n$-tuple $\mathsf{k}=(k_1,\cdots,k_n)$, and also $\lambda_{\mathsf{k}}$ for $(\lambda_{k_1},\cdots, \lambda_{k_n})$, and note that we can restrict to $k_1,\cdots, k_n$ distinct, since otherwise the determinant vanishes. In fact, we can restrict to $k_1,\cdots,k_n$ ordered by replacing $k_1,\cdots,k_n$ by $k_{\tau(1)},\cdots,k_{\tau(n)}$ and summing over $\tau \in \mathfrak{S}_n$ to obtain, with $|\lambda_{\mathsf{k}}|=\sum_{i=1}^{n}\lambda_{k_i}$:
\begin{align}
\det(q_t(x_i,y_j))_{i,j=1}^n=\sum_{1\le k_1 < \cdots <k_n}^{}e^{-|\lambda_{\mathsf{k}}|t}\phi_{\mathsf{k}}(x)\phi_{\mathsf{k}}(y).
\end{align}
The expansion converges uniformly on compacts in $W^n(I^\circ)\times W^n(I^\circ)$ for $t>0$. Now, denoting by $T$ the lifetime of the process, we obtain, for $x=(x_1,\cdots,x_n) \in \mathring{W}^n(I)$, the following spectral expansion, which converges uniformly on compacts in $W^n(I^\circ)$,
\begin{align}
\mathbb{P}_x(T>t)=\left[P_t^n\textbf{1}\right](x)=\sum_{1\le k_1 < \cdots <k_n}^{}e^{-|\lambda_{\mathsf{k}}|t}\phi_{\mathsf{k}}(x)\langle \phi_{\mathsf{k}},\textbf{1}\rangle_{W_{}^{n}(m)}
\end{align}
where we used the notation:
\begin{align*}
\langle f,g\rangle_{W_{}^{n}(m)}=\int_{W^n(I^\circ)}^{}f(x_1,\cdots,x_n)g(x_1,\cdots,x_n)\prod_{i=1}^{n}m(x_i)dx_i.
\end{align*}
So, as $t \to \infty$, since the eigenvalues are distinct and ordered, the leading exponential term is forced to be the one with $k_i=i$ and thus:
\begin{align*}
\mathbb{P}_x(T>t)= \langle \phi_{(1,\cdots,n)},\textbf{1}\rangle_{W_{}^{n}(m)} \times \ e^{-\sum_{i=1}^{n}\lambda_it}\det(\phi_i(x_j))_{i,j=1}^n +o\left(e^{-\sum_{i=1}^{n}\lambda_it}\right), \ \text{as} \ t \to \infty .
\end{align*}
Hence, we can state the following corollary.
\begin{cor}\label{minimal} The function,
\begin{align}\label{GroundState}
h_n(x)=\det(\phi_i(x_j))_{i,j=1}^n
\end{align}
is the ground state of $P_t^n$.
\end{cor}
The above argument proves that $h_n(x)\ge 0$ but in fact the positivity is strict: $h_n(x)> 0$ for all $x \in W^n(I^\circ)$, which can be seen as follows. We have the following eigenfunction relation, by the Andreif (or generalized Cauchy-Binet) identity:
\begin{align*}
\int_{W^n(I^\circ)}^{}\det(q_t(x_i,y_j))_{i,j=1}^n\det(\phi_i(y_j))_{i,j=1}^n\prod_{i=1}^{n}m(y_i)dy_i=e^{-\sum_{i=1}^{n}\lambda_it}\det(\phi_i(x_j))_{i,j=1}^n.
\end{align*}
Assume that $\det(\phi_i(x_j))_{i,j=1}^n=0$ for some $x \in W^n(I^\circ)$. Then, by the \textit{strict} positivity $\det(q_t(x_i,y_j))_{i,j=1}^n\prod_{i=1}^{n}m(y_i)>0$ and the continuity of $h_n$ (see Theorem 4 of \cite{KarlinMcGregor}, also Problem 6 and its solution on pages 158-159 of \cite{ItoMckean}), the determinant $\det(\phi_i(y_j))_{i,j=1}^n$ must necessarily vanish everywhere in $W^n(I^\circ)$. Hence, we can write $\phi_n(x)=\sum_{i=1}^{n-1}a_i \phi_i(x)$ for all $x \in I^{\circ}$, for some constants $a_i$. However, this contradicts the orthonormality of the eigenfunctions and so $h_n(x)> 0$ for all $x \in W^n(I^\circ)$.
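The Andreif identity invoked here can be checked numerically in a small case. The following sketch (the test functions and the interval $I=(0,1)$ are arbitrary choices made purely for illustration) verifies the $n=2$ identity $\int_{\{y_1<y_2\}}\det(f_i(y_j))\det(g_i(y_j))\,dy_1dy_2=\det\big(\int_0^1 f_ig_j\big)$:

```python
import numpy as np

# Numerical sanity check of the Andreif (generalized Cauchy-Binet) identity
# for n = 2 on I = (0,1).  The test functions below are arbitrary choices.
f = [lambda y: np.ones_like(y), lambda y: y]
g = [np.cos, np.sin]

N = 1000
ys = np.linspace(0.0, 1.0, N)
dy = ys[1] - ys[0]

# Left-hand side: brute-force Riemann sum over the chamber {y1 < y2}.
Y1, Y2 = np.meshgrid(ys, ys, indexing="ij")
detf = f[0](Y1) * f[1](Y2) - f[0](Y2) * f[1](Y1)   # = y2 - y1
detg = g[0](Y1) * g[1](Y2) - g[0](Y2) * g[1](Y1)   # = sin(y2 - y1)
lhs = np.sum(detf * detg * (Y1 < Y2)) * dy * dy

# Right-hand side: determinant of the 2x2 matrix of one-dimensional
# integrals, computed with the trapezoid rule on the same grid.
def trap(v):
    return (v[0] / 2 + v[1:-1].sum() + v[-1] / 2) * dy

M = np.array([[trap(f[i](ys) * g[j](ys)) for j in range(2)]
              for i in range(2)])
rhs = np.linalg.det(M)

print(abs(lhs - rhs) < 5e-3)  # the two sides agree up to discretisation error
```

Note that integrating over the ordered chamber, rather than over $I^n$, is what removes the $n!$ from the usual form of the identity, matching the convention used throughout this section.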
A different way to see that $h_n(x)$ is strictly positive (up to a constant) in $\mathring{W}^n(I)$ is the well-known fact (see the paragraph immediately after Theorem 6.2 of Chapter 1 on page 36 of \cite{Karlin}) that the eigenfunctions coming from Sturm-Liouville operators form a Complete $T$-system ($CT$-system) or Chebyshev system, namely for all $n \ge 1$,
\begin{align*}
h_{n}(x)=\det(\phi_i(x_j))_{i,j=1}^{n}>0, \ x\in \mathring{W}^n(I).
\end{align*}
\begin{rmk}
In fact, a $CT$-system requires only that the determinant does not vanish in $W^n(I)$, so, multiplying by $-1$ if needed, we may without loss of generality assume it is positive.
\end{rmk}
For the orthogonal polynomial diffusions and Brownian motions in an interval, taking the $\phi_j$'s to be the Hermite, Laguerre or Jacobi polynomials (which via row and column operations give the Vandermonde determinant) and trigonometric functions (of increasing frequencies) respectively, we obtain the minimal eigenfunction.
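For instance, in the Hermite (OU) case, writing $\phi_i(x)=c_i\mathsf{He}_{i-1}(x)$ with $\mathsf{He}_j$ the monic Hermite polynomial of degree $j$ and $c_i>0$ the $L^2(m)$-normalising constant, row operations remove the lower order terms of each $\mathsf{He}_{i-1}$ and leave a Vandermonde determinant:

```latex
h_n(x)=\det\big(c_i\,\mathsf{He}_{i-1}(x_j)\big)_{i,j=1}^{n}
=\prod_{i=1}^{n}c_i\;\det\big(x_j^{i-1}\big)_{i,j=1}^{n}
=\prod_{i=1}^{n}c_i\prod_{1\le i<j\le n}(x_j-x_i).
```

The Laguerre and Jacobi cases are identical in structure, since only the degrees of the polynomials enter the row reduction.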
Following this discussion, we can thus define the \textit{conditioned semigroup} with transition kernel $p_t^{n,h_n}$ with respect to Lebesgue measure in $W^n(I^\circ)$ as follows,
\begin{align*}
p_t^{n,h_n}(x,y)=e^{\sum_{i=1}^{n}\lambda_it}\frac{\det(\phi_i(y_j))_{i,j=1}^n}{\det(\phi_i(x_j))_{i,j=1}^n}\det(p_t(x_i,y_j))_{i,j=1}^n.
\end{align*}
\subsubsection{Conditioning diffusions for non-intersection through local interactions}
Now, a natural question arising is the following. When is it possible to obtain $n$ conservative (by which we mean that if $l$ or $r$ can be reached, then it is forced to be regular reflecting) $L$-diffusions \textit{conditioned via the minimal positive eigenfunction} to never intersect through the hard reflection interactions we have been studying in this work? We are able to provide an answer in Proposition \ref{conditioningviareflection} below under a certain assumption that we now explain.
First, note that $L$ being conservative implies $\phi_1=1$. Furthermore, assuming that the $\phi_k\in C^{n-1}(I^\circ)$ for $1\le k\le n$ and denoting by $\phi_k^{(j)}$ their $j^{th}$ derivative we define the Wronskian $W(\phi_1,\cdots,\phi_n)(x)$ of $\phi_1,\cdots,\phi_n$ by,
\begin{align*}
W(\phi_1,\cdots,\phi_n)(x)=\det\big(\phi_i^{(j-1)}(x)\big)_{i,j=1}^n.
\end{align*}
Then, we say that $\{\phi_j\}^n_{j=1}$ form a (positive) Extended Complete $T$-system or $ECT$-system if for all $1\le k \le n$,
\begin{align*}
W(\phi_1,\cdots,\phi_k)(x)>0, \ \forall x \in I^\circ .
\end{align*}
This is a stronger property, in particular implying that $\{\phi_j\}^n_{j=1}$ form a $CT$-system (see Theorem 2.3 of Chapter 2 of \cite{Karlin}). Assuming that the eigenfunctions in question $\{\phi_j\}^n_{j=1}$ form a (positive) $ECT$-system then since $\phi_1=1$,
\begin{align*}
W(\phi_2^{(1)},\cdots,\phi_n^{(1)})(x)>0, \ \forall x \in I^\circ,
\end{align*}
and hence,
\begin{align}\label{DiscreteSpectrumEigenfunction}
\hat{h}_{n-1}(x):=\det(\mathcal{D}_{\hat{m}}\phi_{i+1}(x_j))_{i,j=1}^{n-1}>0, \ x\in \mathring{W}^{n-1}(I).
\end{align}
We then have the following positive answer to the question stated previously:
\begin{prop}\label{conditioningviareflection}
Under the conditions of Theorem \ref{MasterDynamics}, furthermore assume that the generator $L$ has discrete spectrum and its first $n$ eigenfunctions $\{\phi_j\}^n_{j=1}$ form an ECT-system. Now assume that the $X$ particles consist of $n$ independent $L$-diffusions reflected off the $Y$ particles which evolve as an $n-1$ dimensional diffusion with semigroup $P_t^{n-1,\hat{h}_{n-1}}$, where $\hat{h}_{n-1}$ is defined in (\ref{DiscreteSpectrumEigenfunction}). Then, the $X$ particles (if the two-level process is started appropriately) are distributed as $n$ independent $L$-diffusions conditioned to never intersect with semigroup $P_t^{n,h_n}$, where $h_n$ is defined by (\ref{GroundState}).
\end{prop}
\begin{proof}
Making use of the relations $\mathcal{D}_{\hat{m}}=\mathcal{D}_{s}$ and $\mathcal{D}_{\hat{s}}=\mathcal{D}_{m}$ between the diffusion process generator $L$ and its dual we obtain,
\begin{align*}
\hat{L}\mathcal{D}_{\hat{m}}\phi_{i}=\mathcal{D}_{\hat{m}}\mathcal{D}_{\hat{s}}\mathcal{D}_{\hat{m}}\phi_i=\mathcal{D}_{\hat{m}}\mathcal{D}_{m}\mathcal{D}_{s}\phi_i=-\lambda_i\mathcal{D}_{\hat{m}}\phi_i.
\end{align*}
Thus, $\left(e^{\lambda_it}\mathcal{D}_{\hat{m}}\phi_{i}(\hat{X}(t));t\ge 0\right)$ for each $1\le i \le n$ is a local martingale. By virtue of boundedness (since we assume that the $L$-diffusion is conservative we have $\underset{x\to l,r}{\lim}\mathcal{D}_{\hat{m}}\phi_i(x)=\underset{x\to l,r}{\lim}\mathcal{D}_{s}\phi_i(x)=0$) it is in fact a true martingale and so for $1\le i \le n$,
\begin{align}\label{truemartingale}
\hat{P}_t^1\mathcal{D}_{\hat{m}}\phi_{i}=e^{-\lambda_it}\mathcal{D}_{\hat{m}}\phi_{i}.
\end{align}
Then, by the well-known Andreif (or generalized Cauchy-Binet) identity we obtain,
\begin{align*}
\hat{P}_t^{n-1}\hat{h}_{n-1}=e^{-\sum_{i=1}^{n-1}\lambda_{i+1}t}\hat{h}_{n-1}
\end{align*}
and thus $\hat{h}_{n-1}$ is a strictly positive eigenfunction for $\hat{P}_t^{n-1}$. Finally, by performing a simple integration we see that,
\begin{align*}
(\Lambda_{n-1,n}\Pi_{n-1,n}\hat{h}_{n-1})(x)=\mathrm{const}_n\, h_n(x), \ x \in W^n(I).
\end{align*}
Using Theorem \ref{MasterDynamics} we obtain the statement of the proposition.
\end{proof}
Obviously the diffusions associated with orthogonal polynomials and Brownian motions in an interval fall under this framework.
\subsection{Eigenfunctions via intertwining} \label{subsectioneigen}
In this short subsection we point out that all eigenfunctions for $n$ copies of a diffusion process with generator $L$ in $W^n$ (not necessarily a diffusion with discrete spectrum, e.g. Brownian motions or $BESQ(d)$ processes) that are obtained by iterating the intertwining kernels considered in this work, or equivalently from building a process in a Gelfand-Tsetlin pattern, are of the form,
\begin{align}\label{eigenfunctionrepresentation1}
\mathfrak{H}_n(x_1,\cdots,x_n)=\det\left(h_i^{(n)}(x_j)\right)^n_{i,j=1},
\end{align}
for functions $\left(h_1^{(n)},\cdots,h_n^{(n)}\right)$ (not necessarily the eigenfunctions of a one dimensional diffusion operator) given by,
\begin{align}\label{eigenfunctionrepresentation2}
h_i^{(n)}(x)=w^{(n)}_1(x)\int_{c}^{x}w^{(n)}_2(\xi_1)\int_{c}^{\xi_1}w^{(n)}_3(\xi_2)\cdots \int_{c}^{\xi_{i-2}}w^{(n)}_i(\xi_{i-1})d\xi_{i-1}\cdots d\xi_1,
\end{align}
for some weights $w_i^{(n)}(x)>0$ and $c\in I^\circ$. An easy consequence of the representation above (see e.g. Theorem 1.1 of Chapter 6 of \cite{Karlin}), assuming $w_i^{(n)}\in C^{n-i}(l,r)$ ($n-i$ times continuously differentiable), is that the Wronskian $W\left(h_1^{(n)},\cdots,h_n^{(n)}\right)$ is given, for $x \in I^{\circ}$, by,
\begin{align}
W\left(h_1^{(n)},\cdots,h_n^{(n)}\right)(x)=\left[w_1^{(n)}(x)\right]^n\left[w_2^{(n)}(x)\right]^{n-1}\cdots\left[w_n^{(n)}(x)\right],
\end{align}
so that in particular $W\left(h_1^{(n)},\cdots,h_n^{(n)}\right)(x)>0$.
We shall restrict to the case of $\mathbb{GT}(n)$ (where the number of particles on each level increases by $1$) for simplicity and prove claims (\ref{eigenfunctionrepresentation1}) and (\ref{eigenfunctionrepresentation2}) by induction. For $n=1$ there is nothing to prove. We conclude by stating and proving the inductive step as a precise proposition:
\begin{prop}
Assume that the input, strictly positive, eigenfunction $\mathfrak{H}_{n-1}$ for $n-1$ copies of a one dimensional diffusion process is of the form (\ref{eigenfunctionrepresentation1}) and (\ref{eigenfunctionrepresentation2}). Then, the eigenfunction $\mathfrak{H}_{n}$ built from the intertwining relation of Karlin-McGregor semigroups (\ref{KMintertwining}) for $n$ copies of its dual diffusion has the same form (\ref{eigenfunctionrepresentation1}) and (\ref{eigenfunctionrepresentation2}), with the weights $\{w_i^{(n)}\}_{i=1}^n$ satisfying an explicit recursion in terms of the $\{w_i^{(n-1)}\}_{i=1}^{n-1}$.
\end{prop}
\begin{proof}
In order to obtain a strictly positive eigenfunction for $n$ copies of an $L$-diffusion, we can in fact start more generally with $n$ copies of an $L$-diffusion $h$-transformed by a one dimensional strictly positive eigenfunction $h$ (denoting by $L^h$ such a diffusion process where we assume that $L^h$ satisfies the boundary conditions of Section 2 in order for the intertwining (\ref{KMintertwining}) to hold). It is then clear that:
\begin{align}
\mathfrak{H}_n(x_1,\cdots,x_n)=\prod_{i=1}^{n}h(x_i)(\Lambda_{n-1,n}\mathfrak{H}_{n-1})(x_1,\cdots,x_n),
\end{align}
where now $\mathfrak{H}_{n-1}(x_1,\cdots,x_{n-1})$ is a strictly positive eigenfunction of $n-1$ copies of an $\widehat{L^h}$ diffusion and which by our hypothesis is given by,
\begin{align}
\mathfrak{H}_{n-1}(x_1,\cdots,x_{n-1})=\det\left(h_i^{(n-1)}(x_j)\right)^{n-1}_{i,j=1},
\end{align}
for some functions $\left(h_1^{(n-1)},\cdots,h_{n-1}^{(n-1)}\right)$ with a representation as in (\ref{eigenfunctionrepresentation2}) for some weights $\{w_{i}^{(n-1)}\}_{i\le n-1}$. A simple integration now gives,
\begin{align*}
h^{(n)}_1(x)&=h(x),\\
h_i^{(n)}(x)&=h(x)\int_{c}^{x}\widehat{m^h}(y)h_{i-1}^{(n-1)}(y)dy, \ \ \textnormal{for} \ i \ge 2,
\end{align*}
where $\widehat{m^h}(x)=h^{-2}(x)s'(x)$ is the density of the speed measure of a $\widehat{L^h}$ diffusion. We thus obtain the following recursive representation for the weights $\{w_i^{(n)}\}_{i\le n}$,
\begin{align}
w_1^{(n)}(x)&=h(x),\\
w_2^{(n)}(x)&=h^{-2}(x)s'(x)w_1^{(n-1)}(x),\\
w_i^{(n)}(x)&=w_{i-1}^{(n-1)}(x), \ \ \textnormal{for} \ i \ge 3.
\end{align}
\end{proof}
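To illustrate the recursion in the simplest (Brownian) case, an assumption of this example: taking $h\equiv 1$ and $s'(x)\equiv 1$, so that $\widehat{m^h}\equiv 1$, all weights are $w_i^{(n)}\equiv 1$ and the iterated integrals in (\ref{eigenfunctionrepresentation2}) evaluate to

```latex
h_i^{(n)}(x)=\frac{(x-c)^{i-1}}{(i-1)!},\qquad
\mathfrak{H}_n(x)=\det\left(\frac{(x_j-c)^{i-1}}{(i-1)!}\right)_{i,j=1}^{n}
=\prod_{i=1}^{n}\frac{1}{(i-1)!}\prod_{1\le i<j\le n}(x_j-x_i),
```

recovering (up to a constant) the Vandermonde harmonic function for non-intersecting Brownian motions.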
\subsection{Connection to superpositions and decimations}
For particular entrance laws, the joint law of $X$ and $Y$ at a fixed time can be interpreted in terms of superpositions/decimations of random matrix ensembles (see e.g. \cite{ForresterRains}). For example, in the context of Proposition \ref{superpositionref1}, the joint law of $X$ and $Y$ at time 1 agrees with the joint law of the odd (respectively even) eigenvalues in a superposition of two independent samples from the $GOE_{n+1}$ and $GOE_n$ ensembles, consistent with the fact that in such a superposition the odd (respectively even) eigenvalues are distributed according to the $GUE_{n+1}$ (respectively $GUE_n$) ensemble; see Theorem 5.2 in \cite{ForresterRains}. In the BESQ/Laguerre case, our Proposition \ref{superpositionref2} is similarly related to recent work on GOE singular values by Bornemann and La Croix \cite{BornemannLaCroix} and Bornemann and Forrester \cite{BornemannForrester}.
\subsection{Connection to strong stationary duals}
Strong stationary duality (SSD), first introduced by Diaconis and Fill \cite{DiaconisFill} in the discrete state space setting, is a fundamental notion in the study of strong stationary times, which are a key tool in understanding mixing times of Markov chains. More recently, Fill and Lyzinski \cite{FillLyzinski} developed an analogous theory for diffusion processes in compact intervals. Given a conservative diffusion $\mathcal{G}$, one associates to it an SSD $\mathcal{G}^*$ such that the two semigroups are intertwined (see Definition 3.1 there). In Theorem 3.4 therein the form of the dual generator is derived and, as already indicated in Remark 5.4 of the same paper, this is exactly the dual diffusion $\hat{\mathcal{G}}$ $h$-transformed by its scale function.
In our framework, considering a two-level process in $W^{1,1}$ with $L=\hat{\mathcal{G}}$ and so $\hat{L}=\mathcal{G}$ and using the positive harmonic function $\hat{h}_1\equiv 1$, the distribution of the projection on the $X$ particle (under certain initial conditions) coincides with the SSD $\mathcal{G}^*$ diffusion.
Hence this provides a coupling of a diffusion $\mathcal{G}$ and its strong stationary dual $\mathcal{G}^*$ respecting the intertwining between $\mathcal{G}$ and $\mathcal{G}^*$.
\section{Edge particle systems}
In this section we will study the autonomous particle systems at either edge of the Gelfand-Tsetlin pattern valued processes we have constructed. In the figure below, the particles we will be concerned with are denoted by $\bullet$.
\begin{center}
\begin{tabular}{ c c c c c c c c c }
& & &&$\overset{X^{(1)}_1}{\bullet}$ &&&&\\
& & & $\overset{X^{(2)}_1}{\bullet} $&&$\overset{X^{(2)}_2}{\bullet} $&&& \\
& &$\overset{X^{(3)}_1}{\bullet} $ &&$\overset{X^{(3)}_2}{\circ} $ &&$\overset{X^{(3)}_3}{\bullet} $&& \\
&$\iddots$ & & &$\vdots$& &&$\ddots$&\\
$\overset{X^{(N)}_1}{\bullet} $ & $\overset{X^{(N)}_2}{\circ} $ &$\overset{X^{(N)}_3}{\circ} $&&$\cdots\cdots$&& $\overset{X^{(N)}_{N-2}}{\circ} $& $\overset{X^{(N)}_{N-1}}{\circ} $&$\overset{X^{(N)}_N}{\bullet} $
\end{tabular}
\end{center}
Our goal is to derive determinantal expressions for their transition densities. Such expressions were derived by Sch\"utz for TASEP in \cite{Schutz} and later by Warren \cite{Warren} for Brownian motions. See also Johansson's work in \cite{Johansson} for an analogous formula for a Markov chain related to the Meixner ensemble and, finally, Dieker and Warren's investigation in \cite{DiekerWarren2} for formulae in the discrete setting based on the RSK correspondence. These so-called Sch\"utz-type formulae were the starting points for the recent complete solution of TASEP in \cite{KPZfixedpoint}, which led to the KPZ fixed point, and also for the recent progress \cite{JohanssonPercolation} in the study of the two-time joint distribution in Brownian directed percolation. For a detailed investigation of the Brownian motion model the reader is referred to the book \cite{ReflectedBrownianKPZ}.
We will mainly restrict ourselves to the consideration of Brownian motions, $BESQ(d)$ processes and the diffusions associated with orthogonal polynomials. In a little bit more generality we will assume that the interacting diffusions have generators of the form,
\begin{align*}
L=a(x)\frac{d^2}{dx^2}+b(x)\frac{d}{dx},
\end{align*}
with,
\begin{align*}
a(x)=a_0+a_1x+a_2x^2, \qquad b(x)=b_0+b_1x.
\end{align*}
We will also make the following \textbf{standing assumption} in this section. We restrict to the case of the boundaries of the state space $I$ being either \textit{natural} or \textit{entrance}; thus the state space is an open interval $(l,r)$. Under these assumptions the transition densities will be smooth in $(l,r)$ in both the backwards and forwards variables, possibly blowing up as we approach $l$ or $r$ (see e.g. \cite{Stroock}; for a detailed study of the transition densities of the Wright-Fisher diffusion see \cite{WrightFisher}). This covers all the processes we built that relate to minor processes of matrix diffusions. This interacting particle system can also be seen as the solution to the following system of $SDE$'s with one-sided collisions with $(x_1^1 \le \cdots \le x_n^n)$,
\begin{align}\label{edgesystem1}
X_1^{(1)}(t)&=x^1_1+\int_{0}^{t}\sqrt{2a(X_1^{(1)}(s))}d\gamma_1^1(s)+\int_{0}^{t}b^{(1)}(X_1^{(1)}(s))ds,\nonumber\\
&\vdots\nonumber\\
X_m^{(m)}(t)&=x_m^m+\int_{0}^{t}\sqrt{2a(X_m^{(m)}(s))}d\gamma_m^m(s)+\int_{0}^{t}b^{(m)}(X_m^{(m)}(s))ds+K_m^{m,-}(t),\\
&\vdots\nonumber\\
X_n^{(n)}(t)&=x_n^n+\int_{0}^{t}\sqrt{2a(X_n^{(n)}(s))}d\gamma_n^n(s)+\int_{0}^{t}b^{(n)}(X_n^{(n)}(s))ds+K_n^{n,-}(t).\nonumber
\end{align}
where $\gamma_i^i$ are independent standard Brownian motions and $K_i^{i,-}$ are positive finite variation processes with the measure $dK_i^{i,-}$ supported on $\left\{t:X_i^{(i)}(t)=X_{i-1}^{(i-1)}(t)\right\}$ and
\begin{align*}
b^{(k)}(x)=b(x)+(n-k)a'(x)=b_0+(n-k)a_1+(b_1+2(n-k)a_2)x.
\end{align*}
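For example, substituting the $BESQ(d)$ generator, for which $a(x)=2x$ and $b(x)=d$ (so $a_0=a_2=b_1=0$, $a_1=2$, $b_0=d$), gives

```latex
b^{(k)}(x)=d+2(n-k),
```

so the $k^{th}$ edge particle is a $BESQ(d+2(n-k))$ process and the dimension parameter drops by two with each added particle.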
That these $SDE$'s are well-posed, so that in particular the solution is Markov, follows from the same arguments as in Section \ref{SectionWellposedness}. Note that a quadratic diffusion coefficient $a(\cdot)$ and linear drift $b(\cdot)$ satisfy $(\mathbf{YW})$. See the following figure for a description of the interaction. The arrows indicate the direction of the `pushing force' (with magnitude the finite variation process $K$) applied when collisions occur between the particles, so that the ordering is maintained.
\begin{center}
\begin{tabular}{ c c c c c c c c c }
$\overset{X^{(1)}_1}{\bullet}$&$\longrightarrow$ &$\overset{X^{(2)}_2}{\bullet}$&$\longrightarrow$ &$\overset{X^{(3)}_3}{\bullet}$ &$\cdots$ &$\overset{X^{(n-1)}_{n-1}}{\bullet}$&$\longrightarrow$&$\overset{X^{(n)}_{n}}{\bullet}$.
\end{tabular}
\end{center}
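The one-sided collision dynamics (\ref{edgesystem1}) can be illustrated with a minimal Euler-scheme sketch in the standard Brownian case ($a\equiv 1/2$, $b\equiv 0$, so $\sqrt{2a}\equiv 1$); the max-based discretisation of the reflection term $K^{-}$ is a standard approximation and an assumption of this sketch, not a construction from the text.

```python
import numpy as np

# Euler scheme for the one-sided collision system in the Brownian case:
# the k-th particle is a Brownian motion pushed upwards off the (k-1)-th,
# so that the ordering X_1 <= ... <= X_n is maintained at all times.
rng = np.random.default_rng(0)
n, steps, dt = 4, 2000, 1e-3
X = np.zeros((steps + 1, n))          # X[:, k] is the path of the (k+1)-th particle
X[0] = np.arange(n, dtype=float)      # ordered initial condition x_1 <= ... <= x_n

for t in range(steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n)
    X[t + 1, 0] = X[t, 0] + dW[0]     # the top particle evolves freely
    for k in range(1, n):
        # free Euler move, then push up onto the lower-indexed particle;
        # the push is the discrete analogue of the finite variation term K
        X[t + 1, k] = max(X[t, k] + dW[k], X[t + 1, k - 1])

# the ordering is preserved at every time step by construction
print(np.all(np.diff(X, axis=1) >= 0))
```

The sequential update (each particle sees the already-updated position of the one below it) mirrors the autonomous, one-directional nature of the pushing in the figure above.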
Note that our assumption that the boundary points are either \textit{entrance} or \textit{natural} does not always allow for an \textit{infinite} such particle system; in particular, think of the $BESQ(d)$ case where $d$ drops down by $2$ each time we add a particle. Denote by $p_t^{(k)}(x,y)$ the transition kernel associated with the $L^{(k)}$-diffusion with generator,
\begin{align*}
L^{(k)}=a(x)\frac{d^2}{dx^2}+b^{(k)}(x)\frac{d}{dx}.
\end{align*}
Defining,
\begin{align*}
\mathcal{S}_t^{(k),j}(x,x')=\begin{cases}
\int_{l}^{x'}\frac{(x'-z)^{j-1}}{(j-1)!}p_t^{(k)}(x,z)dz \ \ j\ge 1\\
\partial_{x'}^{-j}p_t^{(k)}(x,x') \ \ j \le 0
\end{cases},
\end{align*}
and with $x=(x_1,\cdots,x_n)$, $x'=(x_1',\cdots,x_n')$,
\begin{align}
s_t(x,x')=\det\left(\mathcal{S}_t^{(i),i-j}(x_i,x_j')\right)^n_{i,j=1},
\end{align}
we arrive at the following proposition.
\begin{prop}\label{PropositionEdgeParticle}
Assume that the diffusion and drift coefficients of the generators $L^{(k)}$ are of the form $a(x)=a_0+a_1x+a_2x^2$ and $b^{(k)}(x)=b_0+(n-k)a_1+(b_1+2(n-k)a_2)x$ and moreover assume that the boundaries of the state space are either natural or entrance for the $L^{(k)}$-diffusion; in particular this implies certain constraints on the constants $a_0, a_1, a_2, b_0, b_1$. Then, the process $(X_1^{(1)}(t),\cdots,X_n^{(n)}(t))$ satisfying the $SDEs$ (\ref{edgesystem1}), in which $X^{(k)}_k$ is an $L^{(k)}$-diffusion reflected off $X^{(k-1)}_{k-1}$, has transition densities $s_t(x,x')$.
\end{prop}
\begin{proof}
First, we make the following crucial observation. Define the constant $c_{k,n}=2(n-k-1)a_2+b_1$ and note that the $L^{(k)}$-diffusion is the $h$-transform of the conjugate $\widehat{L^{(k+1)}}$ by $\widehat{m^{(k+1)}}^{-1}(x)$, with eigenvalue $c_{k,n}$, so that $L^{(k)}=\left(\widehat{L^{(k+1)}}\right)^{*}-c_{k,n}$, which is again a bona fide diffusion process generator (with $\mathsf{L}^{*}$ denoting the formal adjoint of $\mathsf{L}$ with respect to Lebesgue measure). Thus, making use of (\ref{conjtrans}) and (\ref{symmetrizing}) we obtain the following relation between the transition densities,
\begin{align}
p_t^{(k)}(x,z)=-e^{c_{k,n}t}\int_{l}^{z}\partial_xp_t^{(k+1)}(x,w)dw, \label{onesidedtransitionrelation}\\
\partial_z^jp_t^{(k)}(x,z)=-e^{c_{k,n}t}\partial_z^{j-1}\partial_xp_t^{(k+1)}(x,z).\nonumber
\end{align}
Now, let $f:W^n(I^\circ)\to \mathbb{R}$ be continuous with compact support. Then, we have the following $t=0$ boundary condition,
\begin{align}\label{timezeroedge}
\lim_{t \to 0}\int_{W^n(I^\circ)}^{}s_t(x,x')f(x')dx'=f(x),
\end{align}
which formally can easily be seen to hold since the transition densities along the main diagonal approximate delta functions and all other contributions vanish. We spell this out now. Let $\epsilon>0$ and suppose $f$ is zero in a $2\epsilon$ neighbourhood of $\partial W^n(I^\circ)$. We consider a contribution to the Leibniz expansion of the determinant coming from a permutation $\rho$ that is not the identity. Hence there exist $i<j$ so that $\rho(i)>i$ and $\rho(j)\le i$, and note that the factors $\mathcal{S}_t^{(i),i-\rho(i)}\left(x_i,x_{\rho(i)}'\right)$ and $\mathcal{S}_t^{(j),j-\rho(j)}\left(x_j,x_{\rho(j)}'\right)$ are contained in the contribution corresponding to $\rho$. Since $j-\rho(j)>0$ and $i-\rho(i)<0$, observe that on the set $\left\{x_{\rho(i)}'-x_i>\epsilon\right\}\cup\left\{x_{\rho(j)}'-x_j<-\epsilon\right\}$ at least one of these factors, and hence the whole contribution, vanishes uniformly as $t \downarrow 0$. On the other hand, on the complement of this set we have $x_{\rho(i)}'\le x_i+\epsilon\le x_j+\epsilon\le x'_{\rho(j)}+2\epsilon$. Since $\rho(j)<\rho(i)$, so that $x'_{\rho(j)}\le x'_{\rho(i)}$, we thus obtain that if $x'$ is in the complement of $\left\{x_{\rho(i)}'-x_i>\epsilon\right\}\cup\left\{x_{\rho(j)}'-x_j<-\epsilon\right\}$ then it also belongs to some $2\epsilon$ neighbourhood of $\partial W^n(I^\circ)$ and hence lies outside the support of $f$. Equation (\ref{timezeroedge}) then follows.
Now by multilinearity of the determinant the equation in $(0,\infty)\times \mathring{W}^n(I)\times \mathring{W}^n(I)$,
\begin{align*}
\partial_ts_t(x,x')=\sum_{i=1}^{n}L^{(k)}_{x_i}s_t(x,x'),
\end{align*}
is satisfied since we have $\partial_t\mathcal{S}_t^{(k),j}(x,x')=L_x^{(k)}\mathcal{S}_t^{(k),j}(x,x')$ for all $k$. Here, $L^{(k)}_{x_i}$ is simply a copy of the differential operator $L^{(k)}$ acting in the $x_i$ variable.
Moreover, for the Neumann/reflecting boundary conditions we need to check that $\partial_{x_i}s_t(x,x')|_{x_i=x_{i-1}}=0$ for $i=2,\cdots,n$. This follows from,
\begin{align*}
\partial_{x_i}\mathcal{S}_t^{(i),i-j}(x_i,x_j')|_{x_i=x_{i-1}}=-e^{-c_{i-1,n}t}\mathcal{S}_t^{(i-1),i-1-j}(x_{i-1},x_j').
\end{align*}
This is true because of the following observations. For $j \le -1$
\begin{align*}
\partial_z^{-j}p_t^{(i-1)}(x,z)=-e^{c_{i-1,n}t}\partial_z^{-j-1}\partial_xp_t^{(i)}(x,z).
\end{align*}
For $j \ge 1$
\begin{align*}
&\int_{l}^{x'}\frac{(x'-z)^{j-1}}{(j-1)!}p_t^{(i-1)}(x,z)dz=-e^{c_{i-1,n}t}\partial_x\int_{l}^{x'}\frac{(x'-z)^{j-1}}{(j-1)!}\int_{l}^{z}p_t^{(i)}(x,w)dwdz\\
&=-e^{c_{i-1,n}t}\partial_x \bigg[\bigg[-\frac{(x'-z)^j}{j!}\int_{l}^{z}p_t^{(i)}(x,w)dw\bigg]_{l}^{x'}-\int_{l}^{x'}-\frac{(x'-z)^j}{j!}p_t^{(i)}(x,z)dz\bigg]\\
&=-e^{c_{i-1,n}t}\partial_x\int_{l}^{x'}\frac{(x'-z)^{j}}{j!}p_t^{(i)}(x,z)dz.
\end{align*}
Hence $\mathcal{S}_t^{(i-1),j}(x,x')=-e^{c_{i-1,n}t}\partial_x\mathcal{S}_t^{(i),j+1}(x,x')$ and thus
\begin{align*}
\partial_{x_i}s_t(x,x')|_{x_i=x_{i-1}}=0,
\end{align*}
for $i=2,\cdots,n$.\\
Define for $f$ as in the first paragraph,
\begin{align*}
F(t,x)=\int_{W^n(I^\circ)}^{}s_t(x,x')f(x')dx'.
\end{align*}
Let $\textbf{S}_x$ denote the law of $(X_1^{(1)},\cdots,X_n^{(n)})$ started from $x=(x_1,\cdots,x_n) \in W^n$. Fixing $T,\epsilon$ and applying Ito's formula to the process $\left(F\left(T+\epsilon-t,\left(X_1^{(1)}(t),\cdots,X_n^{(n)}(t)\right)\right), t \le T\right)$ we obtain that it is a local martingale, and by virtue of boundedness indeed a true martingale. Hence,
\begin{align*}
F(T+\epsilon,x)&=\textbf{S}_x\left[F\left(\epsilon,\left(X_1^{(1)}(T),\cdots,X_n^{(n)}(T)\right)\right)\right].
\end{align*}
Now letting $\epsilon \downarrow 0$ we obtain,
\begin{align*}
F(T,x)=\textbf{S}_x\left[f\left(X_1^{(1)}(T),\cdots,X_n^{(n)}(T)\right)\right].
\end{align*}
The result follows since the process spends zero Lebesgue time on the boundary, so that in particular such functions $f$ determine its distribution.
\end{proof}
In the standard Brownian motion case with $p_t^{(k)}$ the heat kernel this recovers Proposition 8 from \cite{Warren}.
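In this Brownian special case the Neumann condition $\partial_{x_i}s_t(x,x')|_{x_i=x_{i-1}}=0$ can also be checked numerically. The following sketch (our own illustration, assuming the standard heat kernel, $c_{k,n}=0$ and $l=-\infty$) evaluates the $2\times 2$ determinant $s_t$ for $n=2$ and differentiates it at the diagonal:

```python
import math

def heat_kernel(t, x, y):
    # transition density of standard Brownian motion (generator (1/2) d^2/dx^2)
    return math.exp(-(x - y) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def Phi(u):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def s_t(t, x1, x2, xp1, xp2):
    # 2x2 determinant det(S_t^{(i),i-j}(x_i, x'_j)) in the Brownian case:
    # S^{(k),0}(x,x') = p_t(x,x'), S^{(k),-1} = d/dx' p_t(x,x'),
    # S^{(k),1}(x,x') = int_{-inf}^{x'} p_t(x,z) dz
    a11 = heat_kernel(t, x1, xp1)                    # S^{(1),0}(x_1, x'_1)
    a12 = (x1 - xp2) / t * heat_kernel(t, x1, xp2)   # S^{(1),-1}(x_1, x'_2)
    a21 = Phi((xp1 - x2) / math.sqrt(t))             # S^{(2),1}(x_2, x'_1)
    a22 = heat_kernel(t, x2, xp2)                    # S^{(2),0}(x_2, x'_2)
    return a11 * a22 - a12 * a21

# Neumann condition at the diagonal: d/dx_2 s_t |_{x_2 = x_1} = 0
t, x1, xp1, xp2 = 1.0, 0.3, -0.2, 0.7
h = 1e-5
deriv = (s_t(t, x1, x1 + h, xp1, xp2) - s_t(t, x1, x1 - h, xp1, xp2)) / (2.0 * h)
```

The central difference vanishes up to discretization error, reflecting the fact that at $x_2=x_1$ the derivative of the second row of the determinant is proportional to the first row.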
Now, we consider the interacting particle system at the other edge of the pattern, with the $i^{th}$ particle getting reflected downwards off the $(i-1)^{th}$; namely, with $x_1^1\ge \cdots\ge x_1^n$, it is given by the following system of $SDEs$ with reflection,
\begin{align}\label{edgesystem2}
X_1^{(1)}(t)&=x^1_1+\int_{0}^{t}\sqrt{2a(X_1^{(1)}(s))}d\gamma_1^1(s)+\int_{0}^{t}b^{(1)}(X_1^{(1)}(s))ds,\nonumber\\
&\vdots\nonumber\\
X_1^{(m)}(t)&=x_1^m+\int_{0}^{t}\sqrt{2a(X_1^{(m)}(s))}d\gamma_1^m(s)+\int_{0}^{t}b^{(m)}(X_1^{(m)}(s))ds-K_1^{m,+}(t),\\
&\vdots\nonumber\\
X_1^{(n)}(t)&=x_1^n+\int_{0}^{t}\sqrt{2a(X_1^{(n)}(s))}d\gamma_1^n(s)+\int_{0}^{t}b^{(n)}(X_1^{(n)}(s))ds-K_1^{n,+}(t),\nonumber
\end{align}
where $\gamma_1^i$ are independent standard Brownian motions and $K_1^{i,+}$ are positive finite variation processes with the measure $dK_1^{i,+}$ supported on $\left\{t:X_1^{(i)}(t)=X_1^{(i-1)}(t)\right\}$. Again see the figure below,
\begin{center}
\begin{tabular}{ c c c c c c c c c }
$\overset{X^{(n)}_1}{\bullet}$&$\longleftarrow$ &$\overset{X^{(n-1)}_1}{\bullet}$&$\longleftarrow$ &$\overset{X^{(n-2)}_1}{\bullet}$ &$\cdots$ &$\overset{X^{(2)}_1}{\bullet}$&$\longleftarrow$&$\overset{X^{(1)}_1}{\bullet}$.
\end{tabular}
\end{center}
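For intuition, the reflection mechanism in (\ref{edgesystem2}) can be mimicked by a naive Euler scheme in the driftless Brownian case ($a\equiv 1/2$, $b^{(k)}\equiv 0$); the discrete analogue of the pushing term $K_1^{i,+}$ is a clamp of the $i^{th}$ particle below the $(i-1)^{th}$. This is a hypothetical numerical sketch, not the construction used in the text:

```python
import random

def reflected_chain(n, x0, T, steps, seed=0):
    """Euler sketch of the one-sided reflected chain: coordinate i takes a
    Brownian step and is pushed down whenever it exceeds coordinate i-1."""
    rng = random.Random(seed)
    dt = T / steps
    x = list(x0)  # x0[0] >= x0[1] >= ... >= x0[n-1]
    for _ in range(steps):
        prev = None
        for i in range(n):
            xi = x[i] + rng.gauss(0.0, dt ** 0.5)
            if prev is not None:
                xi = min(xi, prev)  # discrete analogue of the K^{i,+} push
            x[i] = xi
            prev = xi
    return x

final = reflected_chain(3, [2.0, 1.0, 0.0], 1.0, 2000)
```

By construction the ordering $X_1^{(1)}\ge\cdots\ge X_1^{(n)}$ is preserved at every step.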
Define,
\begin{align*}
\bar{\mathcal{S}}_t^{(k),j}(x,x')=\begin{cases}
-\int_{x'}^{r}\frac{(x'-z)^{j-1}}{(j-1)!}p_t^{(k)}(x,z)dz \ \ j\ge 1\\
\partial_{x'}^{-j}p_t^{(k)}(x,x') \ \ j \le 0
\end{cases}.
\end{align*}
Then letting, with $x=(x_1,\cdots,x_n)$, $x'=(x_1',\cdots,x_n')$,
\begin{align}
\bar{s}_t(x,x')=\det(\bar{\mathcal{S}}_t^{(i),i-j}(x_i,x_j'))^n_{i,j=1},
\end{align}
we arrive at the following proposition.
\begin{prop}
Assume that the diffusion and drift coefficients of the generators $L^{(k)}$ are of the form $a(x)=a_0+a_1x+a_2x^2$ and $b^{(k)}(x)=b_0+(n-k)a_1+(b_1+2(n-k)a_2)x$ and moreover assume that the boundaries of the state space are either natural or entrance for the $L^{(k)}$-diffusion. Then, the process $(X_1^{(1)}(t),\cdots,X_1^{(n)}(t))$ satisfying the $SDEs$ (\ref{edgesystem2}), in which $X^{(k)}_1$ is an $L^{(k)}$-diffusion reflected off $X^{(k-1)}_{1}$, has transition densities $\bar{s}_t(x,x')$.
\end{prop}
\begin{proof}
The key observation in this setting is the following relation between the transition kernels:
\begin{align*}
p_t^{(k)}(x,z)=e^{c_{k,n}t}\int_{z}^{r}\partial_xp_t^{(k+1)}(x,w)dw.
\end{align*}
This is immediate from (\ref{onesidedtransitionrelation}) since each diffusion process in this section is an honest Markov process.
Then, checking the parabolic equation with the correct spatial boundary conditions is as before. The $t=0$ boundary condition again follows from the fact that all contributions from off-diagonal terms in the determinant have at least one factor vanishing uniformly in this new domain $(x_1 \ge \cdots \ge x_n)$.
\end{proof}
Via a simple integration, we obtain the following formulae for the distributions of the leftmost and rightmost particles in the Gelfand-Tsetlin pattern,
\begin{cor}
\begin{align*}
\mathbb{P}_{x^{(0)}}(X_n^{(n)}(t)\le z)&=\det\big(\mathcal{S}_t^{(i),i-j+1}(x^{(0)}_i,z)\big)_{i,j=1}^n ,\\
\mathbb{P}_{\bar{x}^{(0)}}(X_1^{(n)}(t)\ge z)&=\det\big(-\bar{\mathcal{S}}_t^{(i),i-j+1}(\bar{x}^{(0)}_i,z)\big)_{i,j=1}^n,
\end{align*}
where $x^{(0)}=(x^{(0)}_1\le \cdots \le x^{(0)}_n)$ and $\bar{x}^{(0)}=(\bar{x}^{(0)}_1\ge \cdots \ge \bar{x}^{(0)}_n)$.
\end{cor}
For $p_t^{(k)}$ the heat kernel and $x^{(0)}=(0, \cdots ,0)$ this recovers a formula from \cite{Warren}. In the $BESQ(d)$ case and $t=1$ the above give expressions for the largest and smallest eigenvalues for the $LUE$ ensemble. We obtain the analogous expressions in the Jacobi case as $t\to \infty$ since the $JUE$ is the invariant measure of non-intersecting Jacobi processes.
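As a sanity check of the first formula, consider the Brownian case with $n=2$, $t=1$ and $x^{(0)}=(0,0)$. The entries reduce to $\mathcal{S}^{(1),1}=\Phi(z)$, $\mathcal{S}^{(1),0}=\varphi(z)$, $\mathcal{S}^{(2),2}=z\Phi(z)+\varphi(z)$ and $\mathcal{S}^{(2),1}=\Phi(z)$, with $\Phi,\varphi$ the standard normal distribution function and density, giving $\mathbb{P}(X_2^{(2)}(1)\le z)=\Phi(z)^2-\varphi(z)\left(z\Phi(z)+\varphi(z)\right)$. The following sketch (our own verification, not from the text) confirms numerically that this is a genuine distribution function:

```python
import math

def Phi(z):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    # standard normal density
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def cdf_top_particle(z):
    # det(S_1^{(i),i-j+1}(0, z))_{i,j=1}^2 in the Brownian case, t = 1
    return Phi(z) ** 2 - phi(z) * (z * Phi(z) + phi(z))

# evaluate on a grid from z = -3 to z = 5
values = [cdf_top_particle(-3.0 + 0.25 * i) for i in range(33)]
```

The determinant increases from $0$ to $1$, as a distribution function must.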
\section{Well-posedness and transition densities for SDEs with reflection}
\subsection{Well-posedness of reflecting SDEs}\label{SectionWellposedness}
We will prove well-posedness (existence and uniqueness) for the systems of reflecting $SDEs$ (\ref{System1SDEs}), (\ref{System2}), (\ref{GelfandTsetlinSDEs}), (\ref{edgesystem1}) and (\ref{edgesystem2}) considered in this work. It will be more convenient, although essentially equivalent for our purposes, to consider reflecting $SDEs$ for $X$ in the time dependent domains (or between barriers) given by $Y$ i.e. in the form of (\ref{systemofreflectingLdiffusions}). More precisely we will consider $SDEs$ with reflection for a single particle $X$ in the time dependent domain $[\mathsf{Y}^-,\mathsf{Y}^+]$ where $\mathsf{Y}^-$ is the lower time dependent boundary and $\mathsf{Y}^+$ is the upper time dependent boundary. This covers all the cases of interest to us by taking $\mathsf{Y}^-=Y_{i-1}$ and $\mathsf{Y}^+=Y_{i}$ with the possibility $\mathsf{Y}^-\equiv l$ and/or $\mathsf{Y}^+\equiv r$.
We will first obtain weak existence for coefficients $\sigma(x)=\sqrt{2a(x)}$ and $b(x)$ continuous and of at most linear growth; the precise statement is to be found in Proposition \ref{WeakExistenceProp} below. We begin by recalling the definition and some properties of the Skorokhod problem in a time dependent domain. We will use the following notation, $\mathbb{R}_+=[0,\infty)$. Suppose we are given continuous functions $z,\mathsf{Y}^-,\mathsf{Y}^+ \in C\left(\mathbb{R}_+;\mathbb{R}\right)$ such that $\forall T\ge 0$,
\begin{align*}
\underset{t\le T}{\inf}\left(\mathsf{Y}^+(t)-\mathsf{Y}^-(t)\right)>0,
\end{align*}
a condition to be removed shortly by a stopping argument. We then say that the pair $(x,k)\in C\left(\mathbb{R}_+;\mathbb{R}\right) \times C\left(\mathbb{R}_+;\mathbb{R}\right)$ is a solution to the Skorokhod problem for $\left(z,\mathsf{Y}^-,\mathsf{Y}^+\right)$ if for every $t\ge 0$ we have $x(t)=z(t)+k(t)\in [\mathsf{Y}^-(t),\mathsf{Y}^+(t)]$ and $k(t)=k^-(t)-k^+(t)$ where $k^+$ and $k^-$ are non decreasing, in particular bounded variation functions, such that $\forall t \ge 0$ ,
\begin{align*}
\int_{0}^{t}\textbf{1}\left(z(s)>\mathsf{Y}^-(s)\right)dk^-(s)=0 \ \textnormal{ and }\ \int_{0}^{t}\textbf{1}\left(z(s)<\mathsf{Y}^+(s)\right)dk^+(s)=0.
\end{align*}
Observe that the constraining terms $k^+$ and $k^-$ only increase on the boundaries of the time dependent domain, namely at $\mathsf{Y}^+$ and $\mathsf{Y}^-$ respectively. Now, consider the \textit{solution} map denoted by $\mathcal{S}$,
\begin{align*}
\mathcal{S}:C\left(\mathbb{R}_+;\mathbb{R}\right) \times C\left(\mathbb{R}_+;\mathbb{R}\right)\times C\left(\mathbb{R}_+;\mathbb{R}\right) \to C\left(\mathbb{R}_+;\mathbb{R}\right) \times C\left(\mathbb{R}_+;\mathbb{R}\right)
\end{align*}
given by,
\begin{align*}
\mathcal{S}:\left(z,\mathsf{Y}^-,\mathsf{Y}^+\right)\mapsto \left(x,k\right).
\end{align*}
Then the key fact is that the map $\mathcal{S}$ is Lipschitz continuous in the supremum norm and there exists a unique solution to the Skorokhod problem, see for example Proposition 2.3 and Corollary 2.4 of \cite{SDER} (also Theorem 2.6 of \cite{RamananBurdzySkorokhod}). Below we will sometimes abuse notation and write $x=\mathcal{S}\left(z,\mathsf{Y}^-,\mathsf{Y}^+\right)$ just for the $x$-component of the solution $(x,k)$.
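A discrete-time version of the two-sided Skorokhod map makes these definitions concrete: each increment of $z$ is added and the result is clamped into the current barrier interval, after which the complementarity conditions can be checked directly. The following is a hypothetical sketch of the discrete map, not the continuous-time object itself:

```python
import math

def skorokhod_map(z, ylo, yhi):
    """Discrete two-sided Skorokhod map: constrain the path z to [ylo, yhi]
    by minimal pushing at the barriers; returns (x, k) with x = z + k."""
    assert all(l <= u for l, u in zip(ylo, yhi))
    x = [min(yhi[0], max(ylo[0], z[0]))]
    for n in range(1, len(z)):
        x.append(min(yhi[n], max(ylo[n], x[-1] + z[n] - z[n - 1])))
    k = [xi - zi for xi, zi in zip(x, z)]
    return x, k

# a driving path that repeatedly overshoots the constant barriers -1 and 1
N = 200
z = [2.0 * math.sin(n / 5.0) for n in range(N)]
x, k = skorokhod_map(z, [-1.0] * N, [1.0] * N)
```

One checks that $x$ stays in the interval and that $k$ moves only when $x$ sits on a barrier: it increases at the lower barrier (the $k^-$ part) and decreases at the upper one (the $k^+$ part).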
Now suppose $\sigma:\mathbb{R} \to \mathbb{R}$ and $b:\mathbb{R}\to \mathbb{R}$ are Lipschitz continuous functions. Then by a classical argument based on Picard iteration, see for example Theorem 3.3 of \cite{SDER}, we obtain that there exists a unique strong solution to the $SDER$ ($SDE$ with reflection) for $\mathsf{Y}^-(0) \le X(0) \le \mathsf{Y}^+(0)$,
\begin{align*}
X(t )=X(0)+\int_{0}^{t}\sigma\left(X(s)\right)d\beta(s)+\int_{0}^{t }b\left(X(s)\right)ds+K^-(t )-K^+(t ),
\end{align*}
where $\beta$ is a standard Brownian motion and $\left(K^+(t);t \ge 0\right)$ and $\left(K^-(t );t \ge 0\right)$ are non decreasing processes that increase only when $X(t)=\mathsf{Y}^+(t)$ and $X(t)=\mathsf{Y}^-(t)$ respectively so that for all $t \ge0$ we have $X(t ) \in [\mathsf{Y}^-(t),\mathsf{Y}^+(t)]$. Here, by strong solution we mean that on the filtered probability space $\left(\Omega,\mathcal{F},\{\mathcal{F}_t\},\mathbb{P}\right)$ on which $\left(X,K,\beta\right)$ is defined, the process $(X,K)$ is adapted with respect to the filtration $\mathcal{F}_t^{\beta}$ generated by the Brownian motion $\beta$. Equivalently $(X,K)$ where $K=K^+-K^-$ solves the Skorokhod problem for $(z,\mathsf{Y}^-,\mathsf{Y}^+)$ where,
\begin{align*}
z\left(\cdot\right)\overset{def}{=}X(0)+\int_{0}^{\cdot}\sigma\left(X(s)\right)d\beta(s)+\int_{0}^{\cdot}b\left(X(s)\right)ds.
\end{align*}
We write $\mathfrak{s}_L^R$ for the corresponding measurable solution map on path space, namely so that $X=\mathfrak{s}_L^R\left(\beta;\mathsf{Y}^-,\mathsf{Y}^+\right)$.
Now, suppose $\sigma:\mathbb{R} \to \mathbb{R} $ and $b:\mathbb{R}\to \mathbb{R}$ are merely continuous and of at most linear growth, namely:
\begin{align*}
|\sigma(x)|,|b(x)| \le C\left(1+|x|\right),
\end{align*}
for some constant $C$. We will abbreviate this assumption by $(\textbf{CLG})$. Then, we can still obtain weak existence using the following rather standard argument. Take $\sigma^{(n)}:\mathbb{R} \to \mathbb{R} $ and $b^{(n)}:\mathbb{R}\to \mathbb{R}$ to be Lipschitz, converging uniformly to $\sigma$ and $b$ and satisfying a uniform linear growth condition. More precisely:
\begin{align}
&\sigma^{(n)}\overset{\textnormal{unif}}{\longrightarrow}\sigma, \ b^{(n)}\overset{\textnormal{unif}}{\longrightarrow}b\nonumber,\\
&|\sigma^{(n)}(x)|,|b^{(n)}(x)|\le \tilde{C}\left(1+|x|\right)\label{uniformlineargrowth},
\end{align}
for some constant $\tilde{C}$ that is independent of $n$. For example, we could take the mollification $\sigma^{(n)}=\phi_n*\sigma$, with $\phi_n(x)=n\phi(nx)$ where $\phi$ is a smooth bump function: $\phi \in C^{\infty}, \phi \ge 0, \int \phi=1$ and $\textnormal{supp}(\phi) \subset [-1,1]$. Then, if $|\sigma(x)| \le C\left(1+|x|\right)$ we easily get $|(\phi_n*\sigma)(x)|\le C\left(2+|x|\right)$ uniformly in $n$. Let $\left(X^{(n)},K^{(n)}\right)$ be the corresponding strong solution to the $SDER$ above with coefficients $\sigma^{(n)}$ and $b^{(n)}$. Then the laws of,
\begin{align*}
X(0)+\int_{0}^{\cdot}\sigma^{(n)}\left(X^{(n)}(s)\right)d\beta(s)+\int_{0}^{\cdot}b^{(n)}\left(X^{(n)}(s)\right)ds,
\end{align*}
are easily seen to be tight by applying Aldous' tightness criterion (see for example Chapter 16 of \cite{Kallenberg} or Chapter 3 of \cite{EthierKurtz}) using the uniformity in $n$ of the linear growth condition (\ref{uniformlineargrowth}). Hence, from the Lipschitz continuity of $\mathcal{S}$ we obtain that the laws of $\left(X^{(n)},K^{(n)}\right)$ are tight as well.
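The mollification step above is easy to reproduce numerically. The sketch below (our own illustration, with $\sigma(x)=|x|$ so that $C=1$) builds $\phi_n*\sigma$ by quadrature and checks both the uniform linear growth bound $|(\phi_n*\sigma)(x)|\le C(2+|x|)$ and the uniform convergence:

```python
import math

def bump(u):
    # smooth bump supported on (-1, 1), to be normalized below
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1.0 else 0.0

# quadrature grid on [-1, 1] and normalizing constant, so the weights sum to 1
M = 2001
grid = [-1.0 + 2.0 * i / (M - 1) for i in range(M)]
w = 2.0 / (M - 1)
Z = sum(bump(u) for u in grid) * w

def mollify(sigma, n, x):
    # (phi_n * sigma)(x) with phi_n(y) = n phi(ny), supported on |y| <= 1/n
    return sum(bump(u) / Z * sigma(x - u / n) for u in grid) * w

sigma = abs  # continuous with |sigma(x)| <= 1 + |x|
```

Since the quadrature weights sum to one, the mollified value is a weighted average of $\sigma$ over $[x-1/n,x+1/n]$, which gives both the growth bound and the $1/n$-uniform closeness to $\sigma$ immediately.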
Thus, we can choose a subsequence $\left(n_i;i \ge 1\right)$ such that the laws of $\left(X^{(n_i)},K^{(n_i)}\right)$ converge weakly to some $\left(X,K\right)$. Using the Skorokhod representation theorem we can upgrade this to joint almost sure convergence on a new probability space $\left(\tilde{\Omega},\tilde{\mathcal{F}},\{\tilde{\mathcal{F}}_t\},\tilde{\mathbb{P}}\right)$. More precisely, we can define processes $\left(\tilde{X}^{(i)},\tilde{K}^{(i)}\right)_{i\ge 1},\left(\tilde{X},\tilde{K}\right)$ on $\left(\tilde{\Omega},\tilde{\mathcal{F}},\{\tilde{\mathcal{F}}_t\},\tilde{\mathbb{P}}\right)$ so that:
\begin{align*}
\left(\tilde{X}^{(i)},\tilde{K}^{(i)}\right)\overset{\textnormal{d}}{=}\left(X^{(n_i)},K^{(n_i)}\right), \ \left(\tilde{X},\tilde{K}\right)\overset{\textnormal{d}}{=}\left(X,K\right), \ \left(\tilde{X}^{(i)},\tilde{K}^{(i)}\right)\overset{\textnormal{a.s.}}{\longrightarrow}(\tilde{X},\tilde{K}).
\end{align*}
Now, the stochastic processes:
\begin{align*}
M_n(t)=\tilde{X}^{(n)}(t)-\tilde{X}^{(n)}(0)-\int_{0}^{t}b^{(n)}\left(\tilde{X}^{(n)}(s)\right)ds-\left(\tilde{K}^{(n)}\right)^-(t)+\left(\tilde{K}^{(n)}\right)^+(t)
\end{align*}
are martingales with quadratic variation:
\begin{align*}
\langle M_n, M_n\rangle(t)=\int_{0}^{t}\left(\sigma^{(n)}\left(\tilde{X}^{(n)}(s)\right)\right)^2ds.
\end{align*}
By the following convergences:
\begin{align*}
\sigma^{(n)}\overset{\textnormal{unif}}{\longrightarrow}\sigma, \ b^{(n)}\overset{\textnormal{unif}}{\longrightarrow}b, \ \left(\tilde{X}^{(i)},\tilde{K}^{(i)}\right)\overset{\textnormal{a.s.}}{\longrightarrow}(\tilde{X},\tilde{K})
\end{align*}
we obtain that $M_n \overset{\textnormal{a.s.}}{\longrightarrow}M$ where,
\begin{align*}
M(t)=\tilde{X}(t)-\tilde{X}(0)-\int_{0}^{t}b\left(\tilde{X}(s)\right)ds-\tilde{K}^-(t)+\tilde{K}^+(t)
\end{align*}
is a martingale with quadratic variation given by:
\begin{align*}
\langle M, M\rangle(t)=\int_{0}^{t}\sigma^2\left(\tilde{X}(s)\right)ds.
\end{align*}
Then, by the martingale representation theorem there exists a standard Brownian motion $\tilde{\beta}$, that is defined on a possibly enlarged probability space, so that $M(t)=\int_{0}^{t}\sigma\left(\tilde{X}(s)\right)d\tilde{\beta}(s)$ and thus:
\begin{align*}
\tilde{X}(t)=\tilde{X}(0)+\int_{0}^{t}\sigma\left(\tilde{X}(s)\right)d\tilde{\beta}(s)+\int_{0}^{t }b\left(\tilde{X}(s)\right)ds+\tilde{K}^-(t )-\tilde{K}^+(t),
\end{align*}
where again the non decreasing processes $\left(\tilde{K}^+(t);t \ge 0\right)$ and $\left(\tilde{K}^-(t);t \ge 0\right)$ increase only when $\tilde{X}(t)=\mathsf{Y}^+(t)$ and $\tilde{X}(t)=\mathsf{Y}^-(t)$ respectively so that $\tilde{X}(t) \in [\mathsf{Y}^-(t),\mathsf{Y}^+(t)] \ \forall t \ge0$. Hence, we have obtained the existence of a weak solution to the $SDER$ for $\sigma$ and $b$ continuous and of at most linear growth.
We now remove the condition that $\mathsf{Y}^-,\mathsf{Y}^+$ never collide by stopping the process at the first time $\tau=\inf\{t\ge0:\mathsf{Y}^-(t)=\mathsf{Y}^+(t)\}$ that they do. First we note that there exists an extension of the Skorokhod problem and of the $SDER$, allowing for reflecting barriers $\mathsf{Y}^-,\mathsf{Y}^+$ that come together; see \cite{RamananBurdzySkorokhod}, \cite{SDER} for the detailed definition. Both results used in the previous argument, namely the Lipschitz continuity of the solution map, which we still denote by $\mathcal{S}$, and the existence and uniqueness of strong solutions to the $SDER$, extend to this setting, see e.g. Theorem 2.6, also Corollary 2.4 and Theorem 3.3 in \cite{SDER}. The difference between the extended problem and the classical one described at the beginning is that $k=k^--k^+$ is allowed to have infinite variation. However, as proven in Proposition 2.3 and Corollary 2.4 in \cite{RamananBurdzySkorokhod} (see also Remark 2.2 in \cite{SDER}), the unique solution to the extended Skorokhod problem coincides with that of the classical one on $[0,T]$ as long as $\underset{t\le T}{\inf}\left(\mathsf{Y}^+(t)-\mathsf{Y}^-(t)\right)>0$. Thus, by the previous considerations, for any $T<\tau$, we still have a weak solution to the $SDER$ above, with bounded variation local terms $K$; the final statement is more precisely given as:
\begin{prop}\label{WeakExistenceProp}
Assume $\mathsf{Y}^-,\mathsf{Y}^+$ are continuous functions such that $\mathsf{Y}^-(t)\le \mathsf{Y}^+(t), \forall t \ge 0$ and let $\tau=\inf\{t\ge0:\mathsf{Y}^-(t)=\mathsf{Y}^+(t)\}$. Assume (\textbf{CLG}), namely that $\sigma(\cdot),b(\cdot)$ are continuous functions satisfying an at most linear growth condition, for some positive constant $C$:
\begin{align*}
|\sigma(x)|,|b(x)| \le C\left(1+|x|\right).
\end{align*}
Then, there exists a filtered probability space $\left(\Omega,\mathcal{F},\{\mathcal{F}_t\},\mathbb{P}\right)$ on which an adapted Brownian motion $\beta$ is defined (not necessarily generating the filtration) and, for $\mathsf{Y}^-(0)\le X(0) \le \mathsf{Y}^+(0)$, an adapted process $(X,K)$ satisfying:
\begin{align}\label{prototypeSDER}
X(t\wedge \tau)=X(0)+\int_{0}^{t\wedge \tau}\sigma\left(X(s)\right)d\beta(s)+\int_{0}^{t \wedge \tau}b\left(X(s)\right)ds+K^-(t \wedge \tau)-K^+(t \wedge \tau),
\end{align}
such that for all $t \ge0$ we have $X(t\wedge \tau) \in [\mathsf{Y}^-(t\wedge \tau),\mathsf{Y}^+(t \wedge \tau)]$ and for any $T< \tau$ the non decreasing processes $\left(K^+(t);t \le T\right)$ and $\left(K^-(t );t \le T\right)$ increase only when $X(t)=\mathsf{Y}^+(t)$ and $X(t)=\mathsf{Y}^-(t)$ respectively.
\end{prop}
We will now be concerned with pathwise uniqueness. Due to the intrinsic one-dimensionality of the problem we can fortunately apply a simple Yamada-Watanabe type argument. For the convenience of the reader we now recall assumption $(\mathbf{YW})$, defined in Section 2: Let $I$ be an interval with endpoints $l<r$ and suppose $\rho$ is a non-decreasing function from $(0,\infty)$ to itself such that $\int_{0^+}^{}\frac{dx}{\rho(x)}=\infty$. Consider the following condition on functions $a:I\to \mathbb{R}_+$ and $b: I\to \mathbb{R}$, where we implicitly assume that $a$ and $b$, initially defined in $I^\circ$, can be extended continuously to the boundary points $l$ and $r$ (in case these are finite),
\begin{align*}
&|\sqrt {a(x)}-\sqrt {a(y)}|^2 \le \rho(|x-y|),\\
&|b(x)-b(y)|\le C|x-y|.
\end{align*}
Moreover, we assume that $\sqrt {a(\cdot)}$ is of at most linear growth. Note that, for $b(\cdot)$ this is immediate by Lipschitz continuity.
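For example, the squared Bessel-type coefficient $a(x)=x$ on $[0,\infty)$ satisfies the first bound with $\rho(u)=u$, since $|\sqrt{x}-\sqrt{y}|^2=x+y-2\sqrt{xy}\le |x-y|$, and $\int_{0^+}\frac{du}{u}=\infty$. A quick numerical sanity check of this modulus (an illustration only):

```python
def a(x):
    # squared Bessel-type diffusion coefficient, used here only as an example
    return x

# Yamada-Watanabe modulus with rho(u) = u: |sqrt(a(x)) - sqrt(a(y))|^2 <= |x - y|
pts = [0.1 * i for i in range(100)]
modulus_holds = all(
    (a(x) ** 0.5 - a(y) ** 0.5) ** 2 <= abs(x - y) + 1e-12
    for x in pts for y in pts
)
```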
Also, observe that since $\rho$ is continuous at $0$ with $\rho(0)=0$ (the assumption on $\rho$ implies this) we get that $\sqrt {a(\cdot)}$ is continuous. Thus, $(\mathbf{YW})$ implies (\textbf{CLG}) and in particular the existence result above applies under $(\mathbf{YW})$. We are now ready to state and prove our well-posedness result.
\begin{prop}\label{wellposedness}
Under the $(\mathbf{YW})$ assumption the $SDER$ (\ref{prototypeSDER}) with $(\sigma,b)=(\sqrt{2a},b)$ has a pathwise unique solution.
\end{prop}
\begin{proof}
Suppose that $X$ and $\tilde{X}$ are two solutions of (\ref{prototypeSDER}) with respect to the same noise. Then the argument given in Chapter IX Corollary 3.4 of \cite{RevuzYor} shows that $L^0(X-\tilde{X})=0$, where for a semimartingale $Z$, $L^{a}(Z)$ denotes its semimartingale local time at $a$ (see for example Section 1 Chapter VI of \cite{RevuzYor}). Hence by Tanaka's formula we get,
\begin{align*}
|X(t\wedge \tau)-\tilde{X}(t\wedge \tau)|&=\int_{0}^{t\wedge\tau}\textnormal{sgn}(X(s)-\tilde{X}(s))d(X(s)-\tilde{X}(s))\\
&=\int_{0}^{t \wedge \tau}\textnormal{sgn}(X(s)-\tilde{X}(s))\left(\sqrt{ 2a(X(s))}-\sqrt{ 2a(\tilde{X}(s))}\right)d\beta(s)\\
&\quad+\int_{0}^{t\wedge \tau}\textnormal{sgn}(X(s)-\tilde{X}(s))(b(X(s))-b (\tilde{X}(s)))ds\\
&\quad-\int_{0}^{t\wedge \tau}\textnormal{sgn}(X(s)-\tilde{X}(s))d(K^+(s)-\tilde{K}^+(s))\\
&\quad+\int_{0}^{t\wedge \tau}\textnormal{sgn}(X(s)-\tilde{X}(s))d(K^-(s)-\tilde{K}^-(s)).
\end{align*}
Note that $\mathsf{Y}^- \le X,\tilde{X}\le \mathsf{Y}^+$, $dK^+$ is supported on $\{t:X(t)=\mathsf{Y}^+(t)\}$ and $d\tilde{K}^+$ is supported on $\{t:\tilde{X}(t)=\mathsf{Y}^+(t)\}$. So if $\tilde{X}<X\le \mathsf{Y}^+$ then $dK^+-d\tilde{K}^+\ge 0$ and if $X<\tilde{X}\le \mathsf{Y}^+$ then $dK^+-d\tilde{K}^+\le 0$. Hence $\int_{0}^{t\wedge \tau}\textnormal{sgn}(X(s)-\tilde{X}(s))d(K^+(s)-\tilde{K}^+(s))\ge 0$. With similar considerations $\int_{0}^{t\wedge \tau}\textnormal{sgn}(X(s)-\tilde{X}(s))d(K^-(s)-\tilde{K}^-(s))\le 0$. Taking expectations we obtain,
\begin{align*}
\mathbb{E}[|X(t\wedge \tau)-\tilde{X}(t\wedge\tau)|]&\le \mathbb{E}\left[\int_{0}^{t\wedge \tau}\textnormal{sgn}(X(s)-\tilde{X}(s))(b(X(s))-b (\tilde{X}(s)))ds\right]\\ &\le C\int_{0}^{t} \mathbb{E} [|X(s\wedge \tau)-\tilde{X}(s\wedge \tau)|]ds.
\end{align*}
The statement of the proposition then follows from Gronwall's lemma.
\end{proof}
Under the pathwise uniqueness obtained in Proposition \ref{wellposedness} above, if the evolution $\left(\mathsf{Y}^-(t\wedge \tau),\mathsf{Y}^+(t\wedge \tau);t\ge 0\right)$ is Markovian, then standard arguments (see for example Section 1 of Chapter IX of \cite{RevuzYor}) imply that $\left(\mathsf{Y}^-(t\wedge \tau),\mathsf{Y}^+(t\wedge \tau),X(t \wedge \tau);t\ge 0\right)$ is Markov as well. Moreover, under this $(\mathbf{YW})$ condition we still have the solution map $X=\mathfrak{s}_L^R\left(\beta;\mathsf{Y}^-,\mathsf{Y}^+\right)$.
The reader should note that Proposition \ref*{wellposedness} covers in particular \textbf{all} the cases of Brownian motions, Ornstein-Uhlenbeck, $BESQ(d)$, $Lag(\alpha)$ and $Jac(\beta,\gamma)$ diffusions considered in the Applications and Examples section.
\subsection{Transition densities for SDER}\label{SectionTransitionDensities}
The aim of this section is to prove under some conditions that $q_t^{n,n+1}$ and $q_t^{n,n}$ form the transition kernels for the two-level systems of $SDEs$ (\ref{System1SDEs}) and (\ref*{System2}) in $W^{n,n+1}$ and $W^{n,n}$ respectively. For the sake of exposition we shall mainly restrict our attention to (\ref*{System1SDEs}). In the sequel, $\tau$ will denote the stopping time $T^{n,n+1}$ (or $T^{n,n}$ respectively).
Throughout this section we assume $(\textbf{R})$ and $(\textbf{BC+})$ hold for the $L$-diffusion and $(\textbf{YW})$ holds for both the $L$ and $\hat{L}$ diffusions. In particular, there exists a Markov semimartingale $(X,Y)$ satisfying equation (\ref{System1SDEs}) (or respectively (\ref*{System2})).
To begin with we make a few simple but important observations. First, note that if the $L$-diffusion does not hit $l$ (i.e. $l$ is natural or entrance), then $X_{1}$ does not hit $l$ either, before being driven to $l$ by $Y_1$ (in case $l$ is exit for $\hat{L}$). Similarly, since the particles are ordered, it is rather obvious that in case $l$ is regular reflecting for the $L$-diffusion the time spent at $l$ up to time $\tau$ by the $SDEs$ (\ref{System1SDEs}) is equal to the time spent by $X_1$ at $l$. This is in turn equal to the time spent at $l$ during the excursions of $X_1$ between collisions with $Y_1$ (and before $\tau$), during which the evolution of $X_1$ coincides with that of the unconstrained $L$-diffusion, which spends zero Lebesgue time at $l$ (e.g. see Chapter 2 paragraph 7 in \cite{BorodinSalminen}). Hence the system of reflecting $SDEs$ (\ref{System1SDEs}) spends zero Lebesgue time at either $l$ or $r$ up to time $\tau$. Since, in addition, the noise driving the $SDEs$ is uncorrelated and the diffusion coefficients do not vanish in $I^\circ$, we get that,
\begin{align}\label{boundaryLebesgue}
\int_{0}^{\tau}\mathbf{1}_{\partial W^{n,n+1}(I)}\left(X(t),Y(t)\right)dt=0 \ \ \textnormal{ a.s.} \ .
\end{align}
We can now in fact relate the constraining finite variation terms $K$ to the semimartingale local times of the gaps between particles (although this will not be essential in what follows). Using the observation (\ref{boundaryLebesgue}) above and Exercise 1.16 ($3^\circ$) of Chapter VI of \cite{RevuzYor}, which states that for a positive semimartingale $Z=M+V \ge 0$ (where $M$ is the martingale part) its local time at $0$ is equal to $2\int_{0}^{\cdot}\textbf{1}\left(Z_s=0\right)dV_s$, we get that for the $SDEs$ (\ref{System1SDEs}) the semimartingale local time of $Y_i-X_i$ at 0 up to time $\tau$ is,
\begin{align*}
2\int_{0}^{t\wedge \tau}\textbf{1}(Y_i(s)=X_i(s))dK_i^+(s)=2K_i^+(t\wedge \tau),
\end{align*}
and similarly the semimartingale local time of $X_{i+1}-Y_i$ at $0$ up to $\tau$ is,
\begin{align*}
2\int_{0}^{t\wedge \tau}\textbf{1}(X_{i+1}(s)=Y_i(s))dK_{i+1}^-(s)=2K_{i+1}^-(t\wedge \tau).
\end{align*}
Now, we state a lemma corresponding to the \textit{time 0} boundary condition.
\begin{lem}\label{time0lemma}
For any $f:{W^{n,n+1}(I^\circ)}\to \mathbb{R}$ continuous with compact support we have,
\begin{align*}
\lim_{t\to 0}\int_{W^{n,n+1}(I^\circ)}^{}q_t^{n,n+1}((x,y),(x',y'))f(x',y')dx'dy'=f(x,y).
\end{align*}
\end{lem}
\begin{proof}
This follows as in the proof of Lemma 1 of \cite{Warren}. See also the beginning of the proof of Proposition \ref{PropositionEdgeParticle}.
\end{proof}
We are now ready to prove the following result on the transition densities.
\begin{prop}\label{transititiondensities1}
Assume $(\textbf{R})$ and $(\textbf{BC+})$ hold for the $L$-diffusion and $(\textbf{YW})$ holds for both the $L$ and $\hat{L}$ diffusions. Moreover, assume that $l$ and $r$ are either natural or entrance for the $L$-diffusion. Then $q_t^{n,n+1}$ form the transition densities for the system of $SDEs$ (\ref{System1SDEs}).
\end{prop}
\begin{proof}
Let $\textbf{Q}^{n,n+1}_{x,y}$ denote the law of the process $(X_1,Y_1,\cdots,Y_n,X_{n+1})$ satisfying the system of $SDEs$ (\ref{System1SDEs}) and starting from $(x,y)$. Define for $f$ continuous with compact support,
\begin{align*}
F^{n,n+1}(t,(x,y))&=\int_{W^{n,n+1}(I^\circ)}^{}q_t^{n,n+1}((x,y),(x',y'))f(x',y')dx'dy'.
\end{align*}
Our goal is to prove that for fixed $T>0$,
\begin{align}
F^{n,n+1}(T,(x,y))&=\textbf{Q}^{n,n+1}_{x,y}\big[f(X(T),Y(T))\textbf{1}(T<\tau)\big]\label{goaltransition1}.
\end{align}
The result then follows since from observation (\ref{boundaryLebesgue}) the only part of the distribution of $(X(T),Y(T))$ that charges the boundary corresponds to the event $\{T\ge \tau\}$.
In what follows we shall slightly abuse notation and use the same symbols for both the scalar entries and the matrices that come into the definition of $q_t^{n,n+1}$. First, note the following, with $x,y \in I^\circ$,
\begin{align*}
\partial_t A_t(x,x')&=\mathcal{D}_m^{x}\mathcal{D}_s^{x}A_t(x,x')\ \ ,\
\ \ \partial_tB_t(x,y')=\mathcal{D}_m^{x}\mathcal{D}_s^{x}B_t(x,y'),\\
\partial_t C_t(y,x')&=\mathcal{D}_{\hat{m}}^{y}\mathcal{D}_{\hat{s}}^{y}C_t(y,x')\ \ ,\ \
\partial_t D_t(y,y')=\mathcal{D}_{\hat{m}}^{y}\mathcal{D}_{\hat{s}}^{y}D_t(y,y').
\end{align*}
To see the equation for $ C_t(y,x')$ note that since $\mathcal{D}_{\hat{m}}=\mathcal{D}_s$ and $\mathcal{D}_{\hat{s}}=\mathcal{D}_m$ we have,
\begin{align*}
\partial_t C_t(y,x')=-\mathcal{D}_{s}^{y}\partial_tp_t(y,x')=-\mathcal{D}_{s}^{y}\mathcal{D}_m^{y}\mathcal{D}_s^{y}p_t(y,x')=-\mathcal{D}_{\hat{m}}^{y}\mathcal{D}_{\hat{s}}^{y}\mathcal{D}_s^{y}p_t(y,x')=\mathcal{D}_{\hat{m}}^{y}\mathcal{D}_{\hat{s}}^{y}C_t(y,x').
\end{align*}
Hence, for fixed $(x',y')\in \mathring{W}^{n,n+1}(I^\circ)$ we have,
\begin{align*}
\partial_t q_t^{n,n+1}((x,y),(x',y'))=\bigg(\sum_{i=1}^{n+1}\mathcal{D}_m^{x_i}\mathcal{D}_s^{x_i}+\sum_{i=1}^{n}\mathcal{D}_{\hat{m}}^{y_i}\mathcal{D}_{\hat{s}}^{y_i} \bigg) \ q_t^{n,n+1}((x,y),(x',y')),\\ \text{in} \ (0,\infty)\times \mathring{W}^{n,n+1}(I^\circ).
\end{align*}
Now, by definition of the entries $A_t,B_t,C_t,D_t$ we have for $x,y\in I^\circ$,
\begin{align*}
\partial_x A_t(x,x')|_{x=y}=-\hat{m}(y)C_t(y,x'),\\
\partial_x B_t(x,y')|_{x=y}=-\hat{m}(y)D_t(y,y').
\end{align*}
Hence for fixed $(x',y')\in W^{n,n+1}(I^\circ)$ by differentiating the determinant and since two rows are equal up to multiplication by a constant we obtain,
\begin{align*}
\partial_{x_i}q_t^{n,n+1}((x,y),(x',y'))|_{x_i=y_i}&=0 \ , \ \partial_{x_i}q_t^{n,n+1}((x,y),(x',y'))|_{x_i=y_{i-1}}=0.
\end{align*}
The Dirichlet boundary conditions for $y_i=y_{i+1}$ are immediate since again two rows of the determinant are equal. Furthermore, in case $l$ or $r$ are entrance boundaries for the $L$-diffusion the Dirichlet boundary conditions for $y_1=l$ and $y_n=r$ follow from the fact that (in the limit as $y \to l,r$),
\begin{align*}
D_t(y,y')|_{y=l,r}=0, C_t(y,x')|_{y=l,r}=\mathcal{D}_s^{x}A_t(x,x')|_{x=l,r}=0.
\end{align*}
Fix $T,\epsilon>0$. Applying Ito's formula we obtain that for each $(x',y')$ the process,
\begin{align*}
\left(\mathfrak{Q}_t(x',y'):t\in [0,T]\right)=\left(q^{n,n+1}_{T+\epsilon-t}\left((X(t),Y(t)),(x',y')\right):t\in [0,T]\right),
\end{align*}
is a local martingale. Now consider a sequence of compact intervals $J_k$ exhausting $I$ as $k\to \infty$ and write $\tau_k$ for $\inf\{t:(X(t),Y(t)) \notin J_k \}$. Note that $\textbf{1}(T<\tau\wedge \tau_k)\to \textbf{1}(T<\tau)$ as $k\to \infty$ by our boundary assumptions, more precisely by making use of the observation that $X$ does not hit $l$ or $r$ before $Y$ does. Using the optional stopping theorem (since the stopped process $\left(\mathfrak{Q}^{\tau_k}_t(x',y'):t\in [0,T]\right)$ is bounded and hence a true martingale) and then the monotone convergence theorem we obtain,
\begin{align*}
q_{T+\epsilon}^{n,n+1}((x,y),(x',y'))=\textbf{Q}^{n,n+1}_{x,y}\big[q_{\epsilon}^{n,n+1}((X(T),Y(T)),(x',y'))\textbf{1}(T<\tau)\big].
\end{align*}
Now multiplying by $f$ continuous with compact support, integrating with respect to $(x',y')$ and using Fubini's theorem to exchange expectation and integral we obtain,
\begin{align*}
F^{n,n+1}(T+\epsilon,(x,y))=\textbf{Q}^{n,n+1}_{x,y}\big[F^{n,n+1}(\epsilon,(X(T),Y(T)))\textbf{1}(T<\tau)\big].
\end{align*}
By Lemma \ref{time0lemma}, we can let $\epsilon \downarrow 0$ to conclude,
\begin{align*}
F^{n,n+1}(T,(x,y))&=\textbf{Q}^{n,n+1}_{x,y}\big[f(X(T),Y(T))\textbf{1}(T<\tau)\big].
\end{align*}
The proposition is proven.
\end{proof}
Completely analogous arguments prove the following:
\begin{prop}\label{transititiondensities2}
Assume $(\textbf{R})$ and $(\textbf{BC+})$ hold for the $L$-diffusion and $(\textbf{YW})$ holds for both the $L$ and $\hat{L}$ diffusions. Moreover, assume that $l$ is either natural or exit and $r$ is either natural or entrance for the $L$-diffusion. Then $q_t^{n,n}$ form the transition densities for the system of $SDEs$ (\ref{System2}).
\end{prop}
We note here that Propositions \ref{transititiondensities1} and \ref{transititiondensities2} apply in particular to the cases of Brownian motions with drifts, Ornstein-Uhlenbeck, $BESQ(d)$ for $d\ge2$, $Lag(\alpha)$ for $\alpha \ge 2$ and $Jac(\beta,\gamma)$ for $\beta,\gamma \ge 1$ considered in the Applications and Examples section.
In the case $l$ and/or $r$ are regular reflecting boundary points we have the following proposition. This is where the non-degeneracy and regularity at the boundary in assumption $(\textbf{BC+})$ are used. This assumption is technical but quite convenient, since it allows for a rather streamlined rigorous argument; it presumably can be removed.
\begin{prop}\label{transitiondensitiesreflecting}
Assume $(\textbf{R})$ and $(\textbf{BC+})$ hold for the $L$-diffusion and $(\textbf{YW})$ holds for both the $L$ and $\hat{L}$ diffusions. Moreover, assume that $l$ and/or $r$ are regular reflecting for the $L$-diffusion. Then $q_t^{n,n+1}$ form the transition densities for the system of $SDEs$ (\ref{System1SDEs}).
\end{prop}
\begin{proof}
The strategy is the same as in Proposition \ref{transititiondensities1} above. We give the proof in the case that both $l$ and $r$ are regular reflecting for the $L$-diffusion (the other cases are analogous). First, recall that $(\textbf{BC+})$ in this case requires that $\underset{x \to l,r}{\lim} a(x)>0$ and that the limits $\underset{x \to l,r}{\lim} b(x)$, $\underset{x \to l,r}{\lim} \left(a'(x)-b(x)\right)$ exist and are finite.
Now, note that by the non-degeneracy condition $\underset{x \to l,r}{\lim}a(x)>0$ and since $\underset{x \to l,r}{\lim}b(x)$ is finite we thus obtain $\underset{x \to l,r}{\lim}s'(x)>0$.
So for $x'\in I^{\circ}$ the relations,
\begin{align*}
\underset{x \to l,r}{\lim}\mathcal{D}^x_sA_t(x,x')=0 \textnormal{ and } \underset{x \to l,r}{\lim}\mathcal{D}^x_sB_t(x,x')=0,
\end{align*}
actually imply that for $x' \in I^{\circ}$,
\begin{align}\label{Neumannboundary}
\underset{x \to l,r}{\lim}\partial_xA_t(x,x')=0 \textnormal{ and } \underset{x \to l,r}{\lim}\partial_xB_t(x,x')=0.
\end{align}
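In the simplest reflecting example, Brownian motion on $[0,\infty)$ reflected at $0$, one has $p_t(x,y)=\varphi_t(x-y)+\varphi_t(x+y)$ with $\varphi_t$ the Gaussian density, and the Neumann condition (\ref{Neumannboundary}) holds because this kernel is even in $x$ about the boundary. A small numerical illustration (our own example, not part of the proof):

```python
import math

def p_reflected(t, x, y):
    # heat kernel on [0, inf) with reflecting (Neumann) boundary at 0
    def g(u):
        return math.exp(-u * u / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)
    return g(x - y) + g(x + y)

# central difference at the boundary: the kernel extends evenly across x = 0
t, y, h = 0.7, 1.3, 1e-5
deriv = (p_reflected(t, h, y) - p_reflected(t, -h, y)) / (2.0 * h)
```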
Moreover, by rearranging the backwards equations we have for fixed $y \in I^{\circ}$ that the functions,
\begin{align*}
(t,x)\mapsto \partial_x^{2}p_t(x,y)&=\frac{\partial_tp_t(x,y)-b(x)\partial_xp_t(x,y)}{a(x)},\\
(t,x)\mapsto \partial_x^{2}\mathcal{D}^x_sp_t(x,y)&=\frac{\partial_t\mathcal{D}_s^xp_t(x,y)-\left(a'(x)-b(x)\right)\partial_x\mathcal{D}_s^xp_t(x,y)}{a(x)},\\
&=\frac{\partial_t\mathcal{D}_s^xp_t(x,y)-\left(a'(x)-b(x)\right)m(x)\partial_tp_t(x,y)}{a(x)},
\end{align*}
and more generally for $n \ge 0$ and fixed $y\in I^{\circ}$,
\begin{align*}
(t,x)\mapsto \partial_t^n\partial_x^{2}\mathcal{D}^x_sp_t(x,y)=\frac{\partial_t^{n+1}\mathcal{D}_s^xp_t(x,y)-\left(a'(x)-b(x)\right)m(x)\partial^{n+1}_tp_t(x,y)}{a(x)},
\end{align*}
can be extended continuously to $(0,\infty) \times [l,r]$ (note the closed interval $[l,r]$). This is because every function on the right hand side can be so extended, by the assumptions of the proposition and the fact that for $y\in I^{\circ}$, $\partial_t^np_t(\cdot,y) \in Dom(L)$ (see Theorem 4.3 of \cite{McKean} for example). Thus, by Whitney's extension theorem, essentially a clever reflection argument in this case (see Section 3 of \cite{ExtensionWhitney} for example), $q_t^{n,n+1}((x,y),(x',y'))$ can be extended as a $C^{1,2}$ function in $(t,(x,y))$ to the whole space. We can hence apply It\^o's formula; it is important to observe that the finite variation terms $dK^l$ and $dK^r$ at $l$ and $r$ respectively (corresponding to $X_1$ and $X_{n+1}$) vanish by the Neumann boundary conditions (\ref{Neumannboundary}). We thus deduce as before that for fixed $T>0$,
\begin{align*}
q_{T+\epsilon}^{n,n+1}((x,y),(x',y'))=\textbf{Q}^{n,n+1}_{x,y}\big[q_{\epsilon}^{n,n+1}((X(T),Y(T)),(x',y'))\textbf{1}(T<\tau)\big].
\end{align*}
The conclusion then follows as in Proposition \ref{transititiondensities1}.
\end{proof}
Completely analogous arguments give the following:
\begin{prop}\label{transititiondensities2reflecting}
Assume $(\textbf{R})$ and $(\textbf{BC+})$ hold for the $L$-diffusion and $(\textbf{YW})$ holds for both the $L$ and $\hat{L}$ diffusions. Moreover, assume that $l$ is regular absorbing and/or $r$ is regular reflecting for the $L$-diffusion. Then $q_t^{n,n}$ form the transition densities for the system of $SDEs$ (\ref{System2}).
\end{prop}
These propositions cover in particular the cases of Brownian motions in the half line and in an interval considered in Sections 3.2 and 3.3 respectively.
\section{Appendix}\label{Appendix}
We collect here the proofs of some of the facts regarding conjugate diffusions that were stated and used in previous sections.
We first give the derivation of the table on the boundary behaviour of a diffusion and its conjugate. Keeping with the notation of Section 2 consider the following quantities with $x\in I^\circ$ arbitrary,
\begin{align*}
& N(l)=\int_{(l^+,x]}^{}(s(x)-s(y))M(dy)=\int_{(l^+,x]}^{}(s(x)-s(y))m(y)dy ,\\
&\Sigma(l)=\int_{(l^+,x]}^{}(M(x)-M(y))s(dy)=\int_{(l^+,x]}^{}(M(x)-M(y))s'(y)dy.
\end{align*}
We then have the following classification of the boundary behaviour at $l$ (see e.g. \cite{EthierKurtz}):
\begin{itemize}
\item $l$ is an entrance boundary iff $N(l)<\infty, \Sigma(l)=\infty$.
\item $l$ is an exit boundary iff $N(l)=\infty, \Sigma(l)<\infty$.
\item $l$ is a natural boundary iff $N(l)=\infty, \Sigma(l)=\infty$.
\item $l$ is a regular boundary iff $N(l)<\infty, \Sigma(l)<\infty$.
\end{itemize}
From the relations $\hat{s}'(x)=m(x)$ and $\hat{m}(x)=s'(x)$ we obtain the following,
\begin{align*}
&\hat{N}(l)=\int_{(l^+,x]}^{}(\hat{s}(x)-\hat{s}(y))\hat{m}(y)dy=\Sigma(l),\\
&\hat{\Sigma}(l)=\int_{(l^+,x]}^{}(\hat{M}(x)-\hat{M}(y))\hat{s}'(y)dy=N(l).
\end{align*}
These relations immediately give us the table on boundary behaviour, namely: if $l$ is an entrance boundary for $X$, then it is exit for $\hat{X}$ and vice versa; if $l$ is natural for $X$, then so is it for its conjugate; and if $l$ is regular for $X$, then so is it for its conjugate. In this last instance, as already stated in Section 2, we define the conjugate diffusion $\hat{X}$ to have boundary behaviour dual to that of $X$: if $l$ is reflecting for $X$, then it is absorbing for $\hat{X}$, and vice versa.
\begin{proof}[Proof of Lemma 2.1]
There is a total number of $5^2$ boundary behaviours ($5$ at $l$ and $5$ at $r$) for the $L$-diffusion (the boundary behaviour of $\hat{L}$ is completely determined from that of $L$, as explained above). However, since the boundary conditions for an entrance and a regular reflecting boundary ($\mathcal{D}_sv=0$), and similarly for an exit and a regular absorbing boundary ($\mathcal{D}_m\mathcal{D}_sv=0$), are the same, we can pair them to reduce to $3^2$ cases $(\mathsf{b.c.}(l),\mathsf{b.c.}(r))$, abbreviated as follows:
\begin{align*}
(nat,nat),(ref,ref),(abs,abs),(nat,abs),(ref,abs),(abs,ref),(abs,nat),(nat,ref),(ref,nat).
\end{align*}
We now make some further reductions. Note that for $x,y \in I^{\circ}$,
\begin{align*}
\mathsf{P}_t \textbf{1}_{[l,y]}(x)=\mathsf{\hat{P}}_t \textbf{1}_{[x,r]}(y) \iff \mathsf{P}_t \textbf{1}_{[y,r]}(x)=\mathsf{\hat{P}}_t \textbf{1}_{[l,x]}(y).
\end{align*}
After swapping $x \leftrightarrow y$ this is equivalent to,
\begin{align*}
\mathsf{\hat{P}}_t \textbf{1}_{[l,y]}(x)=\mathsf{P}_t \textbf{1}_{[x,r]}(y).
\end{align*}
So we have a bijection that swaps boundary conditions with their duals $(\mathsf{b.c.}(l),\mathsf{b.c.}(r))\leftrightarrow(\widehat{\mathsf{b.c.}(l)},\widehat{\mathsf{b.c.}(r)})$. Moreover, if $\mathfrak{h}:(l,r)\to (l,r)$ is any homeomorphism such that $\mathfrak{h}(l)=r,\mathfrak{h}(r)=l$ and writing $\mathsf{H}_t$ for the semigroup associated with the $\mathfrak{h}(X)(t)$-diffusion and similarly $\hat{\mathsf{H}}_t$ for the semigroup associated with the $\mathfrak{h}(\hat{X})(t)$-diffusion we see that,
\begin{align*}
\mathsf{P}_t \textbf{1}_{[l,y]}(x)=\mathsf{\hat{P}}_t \textbf{1}_{[x,r]}(y) \ \ \forall x,y \in I^\circ \iff \mathsf{H}_t \textbf{1}_{[l,y]}(x)=\mathsf{\hat{H}}_t \textbf{1}_{[x,r]}(y) \ \ \forall x,y \in I^{\circ}.
\end{align*}
We furthermore observe that the boundary behaviour of the $\mathfrak{h}(X)(t)$-diffusion at $l$ is the boundary behaviour of the $L$-diffusion at $r$, and its boundary behaviour at $r$ is that of the $L$-diffusion at $l$; similarly for $\mathfrak{h}(\hat{X})(t)$. We thus obtain an equivalent problem where now $(\mathsf{b.c.}(l),\mathsf{b.c.}(r))\leftrightarrow(\mathsf{b.c.}(r),\mathsf{b.c.}(l))$. Putting it all together, we reduce to the following 4 cases, since all others can be obtained from the transformations above,
\begin{align*}
(nat,nat),(ref,nat),(ref,ref),(ref,abs).
\end{align*}
The first case is easy, since there are no boundary conditions to keep track of, and is omitted. The second case is the one originally considered by Siegmund and has been studied extensively in the literature (see e.g. \cite{CoxRosler} for a proof). We give the proof for the last two cases.
First, assume $l$ and $r$ are regular reflecting for $X$ and so absorbing for $\hat{X}$. Let $\mathcal{R}_{\lambda}$ and $\hat{\mathcal{R}}_{\lambda}$ be the resolvent operators associated with $\mathsf{P}_t$ and $\mathsf{\hat{P}}_t$ then with $f$ being a continuous function with compact support in $I^{\circ}$ the function $u=\mathcal{R}_{\lambda}f$ solves Poisson's equation $\mathcal{D}_m\mathcal{D}_su-\lambda u=-f$ with $\mathcal{D}_su(l^+)=0, \mathcal{D}_su(r^-)=0$. Apply $\mathcal{D}_m^{-1}$ defined by $\mathcal{D}_m^{-1}f(y)=\int_{l}^{y}m(z)f(z)dz$ for $y\in I^\circ$ to obtain $\mathcal{D}_su-\lambda\mathcal{D}_m^{-1} u=-\mathcal{D}_m^{-1}f$ which can be written as,
\begin{align*}
\mathcal{D}_{\hat{m}}\mathcal{D}_{\hat{s}}\mathcal{D}_m^{-1}u-\lambda\mathcal{D}_m^{-1} u=-\mathcal{D}_m^{-1}f.
\end{align*}
So $v=\mathcal{D}_m^{-1}u$ solves Poisson's equation with $g=\mathcal{D}_m^{-1}f$,
\begin{align*}
\mathcal{D}_{\hat{m}}\mathcal{D}_{\hat{s}}v-\lambda v=-g,
\end{align*}
with the boundary conditions $\mathcal{D}_{\hat{m}}\mathcal{D}_{\hat{s}}v(l^+)=\mathcal{D}_{s}\mathcal{D}_{m}\mathcal{D}_{m}^{-1}u(l^+)=\mathcal{D}_s u(l^+)=0$ and $\mathcal{D}_{\hat{m}}\mathcal{D}_{\hat{s}}v(r^-)=0$. Now, in the second case, when $l$ is reflecting and $r$ absorbing, we would like to check the reflecting boundary condition for $v=\mathcal{D}_m^{-1}u$ at $r$, namely that $\mathcal{D}_{\hat{s}}v(r^-)=0$; note that this is equivalent to $\mathcal{D}_{m}v(r^-)=u(r^-)=0$. This then follows from the fact that (since $r$ is now absorbing for the $L$-diffusion) $(\mathcal{D}_m\mathcal{D}_s)u(r^-)=0$ and that $f$ is of compact support. The proof proceeds in the same way for both cases: by uniqueness of solutions to Poisson's equation (see e.g. Section 3.7 of \cite{ItoMckean}), this implies $v= \hat{\mathcal{R}}_{\lambda}g$, and thus we may rewrite the relationship as,
\begin{align*}
\mathcal{D}_m^{-1}\mathcal{R}_{\lambda}f= \hat{\mathcal{R}}_{\lambda}\mathcal{D}_m^{-1}f.
\end{align*}
Now let $f$ approximate $\delta_x$ with $x \in I^\circ$ to obtain, with $r_{\lambda}(x,z)$ denoting the resolvent density of $\mathcal{R}_{\lambda}$ with respect to the speed measure on $I^\circ \times I^\circ$,
\begin{align*}
\int_{l}^{y}r_{\lambda}(z,x)m(z)dz=m(x)\hat{\mathcal{R}}_{\lambda}\textbf{1}_{[x,r]}(y).
\end{align*}
Since $r_{\lambda}(z,x)m(z)=m(x)r_{\lambda}(x,z)$ we obtain,
\begin{align*}
\mathcal{R}_{\lambda}\textbf{1}_{[l,y]}(x)=\hat{\mathcal{R}}_{\lambda}\textbf{1}_{[x,r]}(y),
\end{align*}
and the result follows by uniqueness of Laplace transforms.
\end{proof}
The proof above works only for $x,y$ in the interior $I^\circ$. In fact, the lemma is not always true if we allow $x,y$ to take the values $l,r$. To wit, first assume $x=l$, so that we would like,
\begin{align*}
\mathsf{P}_t \textbf{1}_{[l,y]}(l)\overset{?}{=}\mathsf{\hat{P}}_t \textbf{1}_{[l,r]}(y)=1 \ \forall y.
\end{align*}
This is true if and only if $l$ is either absorbing, exit or natural for the $L$-diffusion (where in the case of a natural boundary we understand $\mathsf{P}_t \textbf{1}_{[l,y]}(l)$ as $\lim_{x\to l}\mathsf{P}_t \textbf{1}_{[l,y]}(x)$). Analogous considerations give the following: The statement of Lemma \ref{ConjugacyLemma} remains true with $x=r$ if $r$ is either a natural, reflecting or entrance boundary point for the $L$-diffusion. Enforcing the exact same boundary conditions gives that the statement remains true with $y$ taking values on the boundary of $I$.
\begin{rmk}
For the reader who is familiar with the close relationship between duality and intertwining first note that with the $L$-diffusion satisfying the boundary conditions in the paragraph above and denoting as in Section 2 by $P_t$ the semigroup associated with an $L$-diffusion killed (not absorbed) at $l$ our duality relation becomes,
\begin{align*}
P_t \textbf{1}_{[x,r]}(y)=\mathsf{\hat{P}}_t \textbf{1}_{[l,y]}(x) .
\end{align*}
It is then a simple exercise, see Proposition 5.1 of \cite{CarmonaPetitYor} for the general recipe of how to do this, that this is equivalent to the intertwining relation,
\begin{align*}
P_t\Lambda=\Lambda\mathsf{\hat{P}}_t,
\end{align*}
where $\Lambda$ is the unnormalized kernel given by $(\Lambda f)(x)=\int_{l}^{x}\hat{m}(z)f(z)dz$. This is exactly the intertwining relation obtained in (\ref{KMintertwining}) with $n_1=n_2=1$.
\end{rmk}
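For the reader who wants a concrete instance: taking $L=\frac{1}{2}\frac{d^2}{dx^2}$ on $[0,\infty)$, the diffusion killed at $0$ has transition density $\phi_t(x-y)-\phi_t(x+y)$ and its conjugate, reflecting Brownian motion, has density $\phi_t(x-y)+\phi_t(x+y)$, where $\phi_t$ is the centred Gaussian density of variance $t$. Assuming these standard formulas, the duality relation $P_t \textbf{1}_{[x,r]}(y)=\mathsf{\hat{P}}_t \textbf{1}_{[l,y]}(x)$ can be checked in closed form:

```python
from math import erf, sqrt

Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal cdf

def P_killed_tail(y, x, t):
    # P_t 1_{[x,infty)}(y) for BM killed at 0 started at y:
    # integrate phi_t(y-z) - phi_t(y+z) over z >= x
    return Phi((y - x) / sqrt(t)) - Phi(-(x + y) / sqrt(t))

def P_reflected_cdf(x, y, t):
    # \hat P_t 1_{[0,y]}(x) for BM reflected at 0 started at x:
    # integrate phi_t(z-x) + phi_t(z+x) over 0 <= z <= y
    return Phi((y - x) / sqrt(t)) + Phi((y + x) / sqrt(t)) - 1.0

# Both sides reduce to Phi((y-x)/sqrt(t)) + Phi((x+y)/sqrt(t)) - 1.
for (x, y, t) in [(0.5, 1.3, 0.7), (2.0, 0.4, 1.5), (1.0, 1.0, 0.1)]:
    assert abs(P_killed_tail(y, x, t) - P_reflected_cdf(x, y, t)) < 1e-12
```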
\paragraph{Entrance Laws} For $x \in I$ and $\mathfrak{h}_n$ a positive eigenfunction of $P_t^n$ we would like to compute the following limit that defines our entrance law $\mu_t^x\left(\vec{y}\right)$ (with respect to Lebesgue measure) and corresponds to starting the Markov process $P^{n,\mathfrak{h}_n}_t$ from $(x,\cdots,x)$,
\begin{align*}
\mu_t^x\left(\vec{y}\right):=\lim_{(x_1,\cdots,x_n)\to x \vec{1}}e^{-\lambda t}\frac{\mathfrak{h}_n(y_1,\cdots,y_n)}{\mathfrak{h}_n(x_1,\cdots,x_n)}\det\left(p_t(x_i,y_j)\right)^n_{i,j=1} .
\end{align*}
Note that, since as proven in subsection \ref{subsectioneigen} all eigenfunctions built from the intertwining kernels are of the form $\det\left(h_i(x_j)\right)^n_{i,j=1}$ we will restrict to computing,
\begin{align*}
\mu_t^x\left(\vec{y}\right):= e^{-\lambda t}\det\left(h_i(y_j)\right)^n_{i,j=1}\lim_{(x_1,\cdots,x_n)\to x\vec{1}}\frac{\det\left(p_t(x_i,y_j)\right)^n_{i,j=1}}{\det\left(h_i(x_j)\right)^n_{i,j=1}}.
\end{align*}
If we now assume that $p_t(\cdot,y) \in C^{n-1}$ for all $t>0$, $y\in I^{\circ}$, and similarly that each $h_i \in C^{n-1}$ (in fact we only need to require this in a neighbourhood of $x$), we have,
\begin{align*}
\lim_{(x_1,\cdots,x_n)\to x\vec{1}}\frac{\det\left(p_t(x_i,y_j)\right)^n_{i,j=1}}{\det\left(h_i(x_j)\right)^n_{i,j=1}}&=\lim_{(x_1,\cdots,x_n)\to x\vec{1}}\frac{\det\left(x_j^{i-1}\right)^n_{i,j=1}}{\det\left(h_i(x_j)\right)^n_{i,j=1}}\times \frac{\det\left(p_t(x_i,y_j)\right)^n_{i,j=1}}{\det\left(x_j^{i-1}\right)^n_{i,j=1}}\\
&=\frac{1}{\det\left(\partial^{i-1}_xh_j(x)\right)^n_{i,j=1}}\det\left(\partial^{i-1}_xp_t(x,y_j)\right)^n_{i,j=1}.
\end{align*}
For the fact that the Wronskian $\det\left(\partial^{i-1}_xh_j(x)\right)^n_{i,j=1}$ is strictly positive, and in particular does not vanish, see subsection \ref{subsectioneigen}. Thus,
\begin{align*}
\mu_t^x\left(\vec{y}\right)=const_{x,t}\times \det\left(h_i(y_j)\right)^n_{i,j=1}\det\left(\partial^{i-1}_xp_t(x,y_j)\right)^n_{i,j=1},
\end{align*}
is given by a biorthogonal ensemble as in (\ref{biorthogonalensemble}). The following lemma, which is an adaptation of Lemma 3.2 of \cite{O Connell} to our general setting, gives some more explicit information.
\begin{lem}
Assume that for $x'$ in a neighbourhood of $x$ there is a convergent Taylor expansion $\forall t>0, y \in I^{\circ}$,
\begin{align*}
\frac{p_t(x',y)}{p_t(x,y)}=f(t,x')\sum_{i=0}^{\infty}\left(x'-x\right)^i\phi_i(t,y),
\end{align*}
for some functions $f, \{\phi_{i}\}_{i\ge 0}$ that in particular satisfy $f(t,x)\phi_0(t,y)\equiv 1$. Then $\mu_t^x\left(\vec{y}\right)$ is given by the biorthogonal ensemble,
\begin{align*}
const_{x,t}\times \det\left(h_i(y_j)\right)^n_{i,j=1} \det\left(\phi_{i-1}(t,y_j)\right)^n_{i,j=1} \prod_{i=1}^{n}p_t(x,y_i).
\end{align*}
If moreover we assume that we have a factorization $\phi_i(t,y)=y^ig_i(t)$ then $\mu_t^x\left(\vec{y}\right)$ is given by the polynomial ensemble,
\begin{align*}
const'_{x,t} \times \det\left(h_i(y_j)\right)^n_{i,j=1}\det\left(y^{i-1}_j\right)^n_{i,j=1} \prod_{i=1}^{n}p_t(x,y_i).
\end{align*}
\end{lem}
\begin{proof}
By expanding the Karlin-McGregor determinant and plugging in the Taylor expansion above we obtain,
\begin{align*}
\frac{\det\left(p_t(x_i,y_j)\right)^n_{i,j=1}}{\prod_{i=1}^{n}p_t(x,y_i)}&=\prod_{i=1}^{n}f(t,x_i)\sum_{k_1,\cdots,k_n \ge 0}^{}\prod_{i=1}^{n}\left(x_i-x\right)^{k_i}\sum_{\sigma \in \mathfrak{S}_n}^{} sign(\sigma)\prod_{i=1}^{n}\phi_{k_i}(t,y_{\sigma(i)})\\
&=\prod_{i=1}^{n}f(t,x_i)\sum_{k_1,\cdots,k_n \ge 0}^{}\prod_{i=1}^{n}\left(x_i-x\right)^{k_i}\det\left(\phi_{k_i}(t,y_j)\right)^n_{i,j=1}.
\end{align*}
First, note that we can restrict to distinct $k_1,\cdots,k_n$, since otherwise the determinant vanishes. Moreover, we can in fact restrict the sum over $k_1,\cdots,k_n \ge 0$ to ordered indices, by replacing $k_1,\cdots,k_n$ with $k_{\tau(1)},\cdots,k_{\tau(n)}$ and summing over $\tau \in \mathfrak{S}_n$, to arrive at the following expansion,
\begin{align*}
\frac{\det\left(p_t(x_i,y_j)\right)^n_{i,j=1}}{\prod_{i=1}^{n}p_t(x,y_i)}=\prod_{i=1}^{n}f(t,x_i)\sum_{0\le k_1<k_2<\cdots<k_n }^{}\det\left(\left(x_j-x\right)^{k_i}\right)^n_{i,j=1}\det\left(\phi_{k_i}(t,y_j)\right)^n_{i,j=1}.
\end{align*}
Now, write, with $\vec{k}=(0\le k_1< \cdots <k_n)$,
\begin{align*}
\chi_{\vec{k}}(z_1,\cdots,z_n)=\frac{\det\left(z^{k_i}_j\right)^n_{i,j=1}}{\det\left(z^{i-1}_j\right)^n_{i,j=1}},
\end{align*}
for the Schur function and note that $\lim_{(z_1,\cdots,z_n)\to 0}\chi_{\vec{k}}(z_1,\cdots,z_n)=0$ unless $\vec{k}=(0,\cdots,n-1)$ in which case we have $\chi_{\vec{k}}\equiv 1$. We can now finally compute,
\begin{align*}
&\lim_{(x_1,\cdots,x_n)\to x\vec{1}} \frac{\det\left(p_t(x_i,y_j)\right)^n_{i,j=1}}{\det\left(x_j^{i-1}\right)^n_{i,j=1}}=\lim_{(x_1,\cdots,x_n)\to x\vec{1}} \frac{\det\left(p_t(x_i,y_j)\right)^n_{i,j=1}}{\det\left((x_j-x)^{i-1}\right)^n_{i,j=1}}=\prod_{i=1}^{n}p_t(x,y_i)\times\\
&\times \lim_{(x_1,\cdots,x_n)\to x\vec{1}} \prod_{i=1}^{n}f(t,x_i)\sum_{0\le k_1<k_2<\cdots<k_n }^{}\chi_{\vec{k}}(x_1-x,\cdots,x_n-x)\det\left(\phi_{k_i}(t,y_j)\right)^n_{i,j=1}=\\
&=f^n(t,x)\times \prod_{i=1}^{n}p_t(x,y_i)\det\left(\phi_{i-1}(t,y_j)\right)^n_{i,j=1}.
\end{align*}
The first statement of the lemma now follows with,
\begin{align*}
const_{x,t}=e^{-\lambda t} f^n(t,x)\frac{1}{\det\left(\partial^{i-1}_xh_j(x)\right)^n_{i,j=1}}.
\end{align*}
The fact that when $\phi_i(t,y)=y^ig_i(t)$ we obtain a polynomial ensemble is then immediate.
\end{proof}
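For example, for the Gaussian kernel $p_t(x,y)=(2\pi t)^{-1/2}e^{-(y-x)^2/2t}$ one computes $f(t,x')=e^{(x^2-x'^2)/2t}$ and $\phi_i(t,y)=y^i/(i!\,t^i)$, so the factorization hypothesis $\phi_i(t,y)=y^ig_i(t)$ holds and the entrance law is a polynomial ensemble. A quick numerical sanity check of this expansion:

```python
import math

x, xp, y, t = 0.3, 0.45, 1.2, 0.7
p = lambda a, b: math.exp(-(b - a)**2 / (2 * t)) / math.sqrt(2 * math.pi * t)

# ratio p_t(x', y)/p_t(x, y) against f(t, x') * sum_i (x'-x)^i phi_i(t, y)
lhs = p(xp, y) / p(x, y)
f = math.exp((x**2 - xp**2) / (2 * t))               # f(t, x')
phi = lambda i: y**i / (math.factorial(i) * t**i)    # phi_i(t, y) = y^i g_i(t)
rhs = f * sum((xp - x)**i * phi(i) for i in range(40))

assert abs(lhs - rhs) < 1e-12
```

Note also $f(t,x)\phi_0(t,y)=e^{0}\cdot 1\equiv 1$, as required by the lemma.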
\noindent
{\sc School of Mathematics, University of Bristol, U.K.}\newline
\href{mailto:[email protected]}{\small [email protected]}
\noindent
{\sc School of Mathematics and Statistics, University College Dublin, Belfield, Dublin 4, Ireland}\newline
\href{mailto:[email protected]}{\small [email protected]}
\noindent
{\sc Department of Statistics, University of Warwick, Coventry CV4 7AL, U.K.}\newline
\href{mailto:[email protected]}{\small [email protected]}
\end{document} |
\begin{document}
\title{A $D$-competitive algorithm for the Multilevel Aggregation Problem with Deadlines}
\begin{abstract}
In this paper, we consider the multi-level aggregation problem with deadlines (MLAPD) previously studied by Bienkowski et al. \cite{OG}, Buchbinder et al. \cite{D}, and Azar and Touitou \cite{GF}. This is an online problem where the algorithm services requests arriving over time and can save costs by aggregating similar requests. Costs are structured in the form of a rooted tree. This problem has applications to many important areas such as multicasting, sensor networks, and supply-chain management. In particular, the TCP-acknowledgment problem, joint-replenishment problem, and assembly problem are all special cases of the delay version of the problem.
We present a $D$-competitive algorithm for MLAPD. This beats the $6(D+1)$-competitive algorithm given in Buchbinder et al. \cite{D}. Our approach illuminates key structural aspects of the problem and provides an algorithm that is simpler to implement than previous approaches. We also give improved competitive ratios for special cases of the problem.
\end{abstract}
\section{Introduction}
In many optimization contexts, aggregating tasks and performing them together can be more efficient. For example, consider sending multiple gifts to a relative: it may be cheaper to bundle them all in one large box rather than sending each present individually. Although aggregation is useful in many contexts, it is often constrained. For the gift example, the postal service may have weight restrictions on packages, so bundling all the items together may be impossible. In such a situation, multiple smaller bundles may have to be constructed, resulting in a complex optimization problem. The type of aggregation constraints we will be interested in form a tree-like structure that commonly appears in communication networks.
Besides the aggregation constraints, another difficulty in these problems is the urgency of the tasks. A task might have a hard deadline by which it must be completed, or may accrue a cost the longer it is delayed. At the same time, delaying a task allows more tasks to pile up, which better exploits the power of aggregation. This illustrates a natural trade-off between the aggregation benefits of delaying tasks and the cost of doing so. Generally, we will refer to strategies that mostly delay tasks to maximize aggregation benefits as \textit{lazy-aggregating} and to strategies that prioritize the urgency of the tasks as \textit{eager-aggregating}.
To illustrate these ideas, consider the example of a communication tree of employees at a restaurant. A health inspector may continually issue fines to a host at a restaurant until some concern is resolved. Depending on the scope of the concern, it may indicate issues more broadly in the company and so would need to be delivered to a higher official than just the local manager. However, the cost of the fines might not be as expensive as the cost for the host to send a message up the chain of command. So, at any time, the host will have to decide whether to send the fine now or wait to receive more messages to send together. In fact, there are many problems of a similar flavor involving transmitting control packages in communication networks \cite{Gathercast, SchemesCMP, TCPOPTRand, wiresensornet}. Similarly, many optimization problems in supply chain management face similar considerations \cite{LotSizeMSA, lotsizingbook, ProductionPlanning, DynamicLotSize}.
The \textit{online multi-level aggregation problem with delay} (MLAP) introduced in \cite{OG} captures many of the scenarios mentioned above. In the deadline version (MLAPD) \cite{D}, which is a special case of the delay version, requests arrive over time and must be served between their time of arrival and the time they are due. The aggregation constraints are modeled using a node-weighted rooted tree and a service involves transmitting a subtree that contains the root. All requests in a service subtree are satisfied to model aggregation. Then, the goal is to minimize the total cost of all services. MLAP encapsulates many classic problems such as the \textit{TCP-acknowledgment Problem}, the \textit{Joint Replenishment Problem} (JRP), and the \textit{Assembly Problem}.
\subsection{Our Contributions}
In this paper, we present a $D$-competitive algorithm for MLAPD. This beats the previous best algorithm constructed in \cite{D}, which was roughly $6(D+1)$-competitive. Also, the proposed algorithm attacks the problem directly rather than relying on a reduction to $L$-decreasing trees. This illuminates key structural aspects of the problem and results in the algorithm being simpler to implement than previous approaches.
The main analysis is based on the framework of critically overdue nodes introduced in \cite{OG}, but pairs this framework with a generalization of the phases introduced in \cite{D2} to achieve a more refined analysis. In addition, using the ideas of late and early nodes within the phases context provides additional structure that can be exploited. Overall, the analysis uses a charging strategy built on the concept of node investments, charging nodes proportionally to their impact on a service. The strategy exploits both top-down and bottom-up views of a service in order to maximize the precision of the charges. Investments also allow charging future nodes without causing complications. This is in contrast to past approaches, which only considered nodes available at the current moment or earlier.
Additionally, we give improved competitive algorithms for special cases of MLAPD. In particular, we present an algorithm with competitive ratio strictly less than $4$ for paths. This shows that the known lower-bound of $4$ for infinite graphs does not hold for finite graphs. The main analysis used exploits a notion of phases that utilizes the special structure of path instances. This approach involves looking at nested subpaths and then charging several such subpaths to a single path of the optimal schedule.
\subsection{Related Work}
MLAP and MLAPD were originally formulated in \cite{OG}. The offline versions of MLAP and MLAPD have been well studied. Even for $D = 2$, MLAPD, and hence MLAP, is known to be APX-hard \cite{OGJRP, JRPHard, ProductionPlanning}. When the tree is a path, MLAP can be solved efficiently \cite{5P}. For MLAPD, the best approximation factor is $2$ \cite{SensorNets, OG}. Using an algorithm for the assembly problem from \cite{PDI}, \cite{Priv} constructs a $(2 + \epsilon)$-approximation for MLAP, where $\epsilon > 0$ is arbitrary.
For the online variant, the best competitive ratio for MLAP is $O(D^2)$ due to an algorithm constructed in \cite{GF}. For MLAPD, the best competitive ratio known was $6(D+1)$ due to an algorithm developed in \cite{D}. The best known lower bound for MLAPD and MLAP for finite depth trees is $2$, which comes from the lower bound of $2$ on JRP with deadlines (JRPD) given in \cite{D2}. If the depth of the tree is unbounded, the best lower bound is $4$ as shown in \cite{OG} by constructing a hard instance where the tree is a path of arbitrary length. This improves on the lower bound of $2 + \phi$ presented in \cite{5P} for trees of unbounded depth. Additionally, \cite{OG} shows that $4$ is in fact the optimal competitive ratio for paths of unbounded length improving on the results in \cite{5P, 8P}.
There is even more known about MLAP for small $D$. For $D = 1$, MLAP is equivalent to the TCP-acknowledgment problem which involves sending control messages in a single packet in a network. The offline version of the problem is equivalent to the lot sizing problem \cite{DynamicLotSize} and can be solved efficiently \cite{ImprovedLotSize}. For the online variant, it is known that the optimal deterministic competitive ratio is $2$ \cite{TCPDelay}. The optimal competitive ratio when randomness can be used is known to be $\frac{e}{e-1}$ \cite{TCPOPTRand, Seiden00aguessing, OnlPDApproach, AdRev}. For the deadline case, a simple greedy algorithm is exactly $1$-competitive.
For $D = 2$, MLAP is equivalent to JRP. JRP tries to optimize the total shipping cost from suppliers to retailers when a warehouse is used as an intermediary place for storage. Although JRPD, and hence JRP, is APX-hard \cite{OGJRP, JRPHard, ProductionPlanning}, JRP permits a $1.791$-approximation \cite{D2}, which improves on the results in \cite{OneWareMultiRetail}. In addition, JRPD permits a $1.574$-approximation \cite{OGJRP}. In the online variant, \cite{OPD} gives a $3$-competitive algorithm for JRP, which improves upon the $5$-competitive algorithm given in \cite{8P}. This competitive ratio is nearly tight, as the best known lower bound for the competitive ratio of JRP is $2.754$ \cite{D2}. For JRPD, the optimal competitive ratio is exactly $2$ \cite{D2}.
\section{Preliminaries}
\paragraph{Multi-level aggregation problem.} The \textit{multi-level aggregation problem with deadlines} (MLAPD) is defined by a pair, $(\mathcal{T}, \mathcal{R})$, where $\mathcal{T}$ is a node-weighted tree rooted at a node $r$ and $\mathcal{R}$ is a set of requests that arrive at the nodes of $\mathcal{T}$ over some time period. A request $\rho = (v, a, d)$ is specified by an arrival time $a$, a deadline $d$, and a node $v \in \mathcal{T}$ at which the request is issued. Without loss of generality, we can assume that the deadlines of the requests are distinct and that the cost of every node is positive. We define $D$ to be the height of the tree plus one or, equivalently, the maximum number of nodes present in a path in $\mathcal{T}$. We call $D$ the depth or number of levels of the tree. Then, we define the level of a node $v$, denoted $L_v$, to be the number of nodes on the unique $r$ to $v$ path in $\mathcal{T}$.
A service is a pair $(S, t)$ where $S$ is a sub-tree of $\mathcal{T}$ containing the root and $t$ is the time that $S$ is transmitted. Notice the definition of a service implies that if $v \in S$ and $u$ is an ancestor of $v$, then $u \in S$. We refer to this fact as the \textit{subtree property} of services. We sometimes refer to the subtree $S$ as the service when the time of the service is clear or irrelevant to the given context. A request is satisfied by a service, $(S,t)$, if $v \in S$ and $a \leq t \leq d$. A schedule is a set of services $(S_1,t_1),\ldots, (S_k,t_k)$. A schedule is a solution to MLAPD if every request in $\mathcal{R}$ is satisfied by some service in the schedule. The cost of a schedule is $\sum_{i = 1}^k c(S_i)$ where $c(S_i) = \sum_{u \in S_i} c(u)$ is the total cost of nodes in the service. Note that this node-weighted definition is equivalent to the original definition of the problem as shown by \cite{D}.
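The definitions above can be made concrete in a few lines of code; the class and function names below are illustrative, not from the paper:

```python
class Node:
    """A tree node with a positive cost and a pointer to its parent (None for the root)."""
    def __init__(self, cost, parent=None):
        self.cost, self.parent = cost, parent

def path_to_root(v):
    """Nodes on the unique v -> r path; by the subtree property these all
    belong to any service containing v."""
    out = []
    while v is not None:
        out.append(v)
        v = v.parent
    return out

def is_service(S):
    """A service subtree contains the root, i.e. is closed under taking parents."""
    return len(S) > 0 and all(u.parent in S for u in S if u.parent is not None)

def service_cost(S):
    return sum(u.cost for u in S)

# A tiny instance: r - a - b with costs 3, 2, 5.
r = Node(3)
a = Node(2, r)
b = Node(5, a)
```

For example, $\{r,a\}$ is a valid service of cost $5$, while $\{a,b\}$ violates the subtree property since it omits $r$.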
In the online setting, $\mathcal{T}$ is known to the algorithm but the requests arrive in an online fashion. In particular, if $\rho = (v,a,d) \in \mathcal{R}$ is a request, then the algorithm receives this request at time $a$. At time $t$, the algorithm must decide whether to transmit a service and, if so, what nodes to include in the service, given only knowledge of requests whose arrival times are at or before time $t$. The algorithms we consider only keep track of active requests, i.e. requests that have arrived but have not yet been served.
For a node $v$, we let $\mathcal{T}_v$ denote the subtree of $\mathcal{T}$ rooted at $v$. Given a request $\rho$ and a time $t$, we use the notation $\rho \in \mathcal{T}_v^t$ to mean that $\rho$ was issued at a node in $\mathcal{T}_v$ by time $t$ and has not yet been satisfied. Then, if $\rho \in \mathcal{T}_v^t$, we define $P(v \to \rho)$ to be the nodes on the path in $\mathcal{T}_v$ from $v$ to the node at which $\rho$ is issued. For the root, we use the special notation $P_{\rho}$ to be $P(r \to \rho)$. If $S$ is any subtree, then we define $P(v \to \rho | S) = P(v \to \rho) \setminus S$ to be the nodes on the path to $\rho$ in $\mathcal{T}_v$ that are not already in $S$. We also define $c(v \to \rho | S) = c(P(v \to \rho | S))$ to be the cost of the nodes of the path to $\rho$ that are not in $S$.
\paragraph{Critically overdue.} It is known that there is always an optimal schedule that transmits only at times that are deadlines of requests. Given such an optimal schedule and a schedule of some algorithm, we can define a notion of a node being late. Suppose $(S,t)$ is a service of the algorithm in the given schedule and that $v \in S$. Then, we define $d^t(v) = \min_{\rho \in \mathcal{T}_v^t} d_{\rho}$ to be the deadline of the earliest request in $\mathcal{T}_v^t$, which we call the deadline of $v$ or the urgency of $v$. Also, we define $\nos{v}{t}$ to be the time of the next service of the optimal schedule that includes $v$ and is transmitted at a time strictly after time $t$. Note that we sometimes also use $\nos{v}{t}$ to reference this service of the optimal schedule instead of the time at which it was transmitted. If no other services of the optimal schedule include $v$ at time $t$ or later, then we define $\nos{v}{t} = \infty$. Now, we say that a node $v$ is late at time $t$ if $d^t(v) < \nos{v}{t}$. The name reflects that the optimal schedule must have already satisfied the earliest-due request in $\mathcal{T}_v^t$: by the next time this schedule includes $v$, that request will already have been due. Next, we say $v$ is \textit{critically overdue} at time $t$ if $v$ is late at time $t$ and if there is no service of the algorithm's schedule including $v$ that is transmitted at a time in $(t,\nos{v}{t})$ \cite{OG}. We will often say the pair $(v,t)$ is critically overdue to mean that $v$ is critically overdue at time $t$.
\section{Special Tree Classes}
In this section we present new upper bounds on the competitive ratio for MLAPD when restricted to special classes of graphs. The arguments in this section serve as a warm up for the main argument in the next section. Also, the new upper bound for path instances shows that the known lower-bound of $4$ on the competitive ratio does not hold for finite graphs.
\subsection{Increasing Trees}
An increasing tree is a tree where weights continually increase as one travels away from the root down any path in the tree. We consider an algorithm that serves only the path to a due request. In particular, whenever a request becomes due, the algorithm transmits exactly the path to the due request and nothing else. Hence, every service is of the form $(P_{\rho}, d_{\rho})$. We call this algorithm \textsc{Noadd} since it never adds any nodes to a service other than the bare minimum.
\begin{theorem}
\textsc{Noadd} is $D$-competitive for any Increasing Tree of depth $D$.
\end{theorem}
\begin{proof}
We use a simple charging argument. Suppose $\rho$ is a request that causes a service of ALG at time $t = d_{\rho}$. Further, suppose that $v$, the node at which $\rho$ was issued, is at level $L$, and let $r = v_1, \ldots, v_L = v$ be the nodes on the path $P_{\rho}$. Then, by definition, $\rho$ is due at the time of its satisfaction by ALG, and so must have been served by OPT by this time. Suppose $(O,t')$ was the service of OPT that satisfied $\rho$. We charge the entire cost of this service of ALG to the occurrence of $v$ in $O$. We note that $\sum_{i = 1}^L c(v_i) \leq \sum_{i = 1}^Lc(v) = Lc(v)$, since the tree is increasing. Then, the charge to $v$ is at most $\frac{Lc(v)}{c(v)} = L$. Since $L \leq D$, the total charge to $v$ is at most $D$. Also, since all the requests at $v$ are cleared after this transmission, the next time a request issued at $v$ becomes due must be strictly after the current time $t$. By the same argument, by that later time a new service of OPT must have been transmitted that contains $v$, and it will be this occurrence of $v$ that is charged next. Thus, all the charges are disjoint, and so the charge to any node of OPT is at most $D$. Hence, ALG is $D$-competitive.
\end{proof}
Note that the argument above is actually proving $v$ is critically overdue at time $d_{\rho}$. Then, the charging scheme boils down to charging critically overdue nodes. We can use the same ideas to get an even better result for $L$-increasing trees. $L$-increasing trees simply satisfy the property that if $u$ is an ancestor of $v$ then $c(v) \geq L c(u)$.
\begin{corollary}
\textsc{Noadd} is $\frac{L}{L-1}$-competitive for any $L$-increasing Tree of depth $D$.
\end{corollary}
\begin{proof}
We use the same argument as for increasing trees but refine the charge to any node. To avoid confusion we use $\ell$ to mean the level and $L$ to be the factor of increase. With the same setup as before, the only change is the total charge to $v$. Note that $c(v) \geq L c(v_{\ell-1}) \geq L^2 c(v_{\ell-2}) \geq \ldots \geq L^{\ell-1} c(v_1)$ since the tree is $L$-increasing. In particular, $c(v_i) \leq L^{i - \ell} c(v)$. Then,
$$\sum_{i = 1}^{\ell} c(v_i) \leq \sum_{i = 1}^{\ell} L^{i-\ell} c(v) = c(v)\sum_{i = 0}^{\ell-1} L^{-i} = \frac{L - L^{1-\ell}}{L-1}c(v) \leq \frac{L}{L-1}c(v)$$
So, the total charge to $v$ is at most $\frac{L}{L-1}$. The rest of the argument goes through as before. Thus, ALG is $\frac{L}{L-1}$-competitive.
\end{proof}
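The geometric series bound used above can be sanity-checked numerically; the following short snippet (illustration only) verifies both the closed form and the bound $\sum_{i=0}^{\ell-1} L^{-i} \leq \frac{L}{L-1}$ for several values:

```python
# Numeric check of the series bound used in the corollary:
# sum_{i=0}^{l-1} L^{-i} = (L - L^{1-l})/(L - 1) <= L/(L - 1).
def series(L, l):
    return sum(L ** (-i) for i in range(l))

for L in (2, 3, 10):
    for l in range(1, 30):
        closed_form = (L - L ** (1 - l)) / (L - 1)
        assert abs(series(L, l) - closed_form) < 1e-9
        assert series(L, l) <= L / (L - 1) + 1e-12
```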
\subsection{Paths}\label{section: paths}
In our context, we consider a path to simply be a path graph where one of the endpoints is the root of the tree. Consider the algorithm, \textsc{Double}, that upon a request $\rho$ becoming due, first adds the subpath from the root to the node at which $\rho$ was issued, $P_{\rho}$. Then, the algorithm continually tries to extend the current subpath containing the root by adding other subpaths that connect the last node in the current subpath to nodes holding urgent requests. If adding a subpath would cause the resultant service to cost more than $2c(P_{\rho})$, then the path is not added and the algorithm transmits the current service. This iterative procedure is described by \cref{alg: double}.
\begin{algorithm}[H]
\caption{\textsc{Double}}\label{alg: double}
\SetAlgoLined
\KwData{Request $\rho$ just became due}
\KwResult{A subtree is transmitted that serves $\rho$ }
Initiate a service $S = P_{\rho}$.\\
\While{there is an unsatisfied request}{
Let $\gamma$ be the earliest due request not yet satisfied by $S$\\
\If{$c(S) + c(r \to \gamma | S) > 2c(P_{\rho})$}{
break;
}
$S \gets S \cup P(r \to \gamma | S)$\\
}
\textbf{transmit} $S$
\end{algorithm}
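For concreteness, the loop in \cref{alg: double} can be sketched in Python on a path represented as a list of node costs, with the root at index $0$; this representation and the function name are our own assumptions, not the paper's notation.

```python
# A hedged sketch of the Double loop on a path: cost[0] is the root,
# a request is a (deadline, node_index) pair, rho_node is the node of
# the request that just became due.
def double_service(cost, requests, rho_node):
    base = sum(cost[: rho_node + 1])        # c(P_rho)
    end = rho_node                          # service S is the prefix [0..end]
    total = base
    for _, w in sorted(requests):           # earliest deadline first
        if w <= end:
            continue                        # already satisfied by S
        extra = sum(cost[end + 1 : w + 1])  # c(r -> gamma | S)
        if total + extra > 2 * base:
            break                           # would exceed the doubling budget
        end, total = w, total + extra
    return end, total

cost = [1, 1, 1, 1, 1]
end, total = double_service(cost, [(5, 3), (9, 4)], rho_node=1)
# The request at node 3 fits within 2*c(P_rho) = 4; the one at node 4 does not.
assert (end, total) == (3, 4) and total <= 2 * sum(cost[:2])
```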
\begin{theorem}\label{theorem: paths}
\textsc{Double} is $(4 - 2^{-D})$-competitive for any path with depth $D$.
\end{theorem}
\begin{proof}
We will use a charging argument that charges the cost of several services of ALG to a single service of OPT. We visualize each subpath as an interval going from the root to another node. Then, this strategy boils down to charging sequences of nested intervals of ALG to a single interval of OPT.
Let $(S,d) = (P_{\rho} \cup A, d_{\rho})$ be a service of ALG. Since $\rho$ is due at time $d_{\rho}$, OPT must have already satisfied $\rho$ at some earlier time $t$. If $(O,t)$ is the service of OPT satisfying $\rho$, then we know that $P_{\rho} \subseteq O$ by the subtree property of services. Also, the subtree property and the fact that the tree is a path implies that either $O \subseteq S$ or $S \subseteq O$. Since each cost is positive, this corresponds to $c(O) \leq c(S)$ in the first case and $c(S) \leq c(O)$ in the second case.
\begin{itemize}
\item Suppose that $c(S) \geq c(O)$ so that ALG's interval contains OPT's interval. By definition of ALG, we have that $c(A) \leq c(P_{\rho})$. Thus, we charge $(S,d_{\rho})$ to $(O,t)$ at a total charge of
$$c(S) = c(P_{\rho}) + c(A) \leq 2c(P_{\rho}) \leq 2c(O) $$
Here, we used the fact that $P_{\rho} \subseteq O$ and so $c(P_{\rho}) \leq c(O)$.
We claim that no other service $(S',d') = (P_{\gamma} \cup B, d_{\gamma})$ of ALG satisfying $c(S') \geq c(O)$ will be charged to $(O,t)$. For the sake of contradiction, suppose that $(S',d')$ and $(S,d)$ are both charged to $(O,t)$. WLOG, we assume that $d_{\gamma} > d_{\rho}$. Note that both $\gamma$ and $\rho$ must arrive before time $t$ in order for $O$ to satisfy them. Thus, when $S$ was constructed, $\gamma$ had already arrived. Since $\gamma$ triggered a service, we know it was not satisfied by $S$. Hence, the stopping conditions of ALG imply that $S \subset P_{\gamma}$. But, $S$ containing $O$ then implies that $O \subseteq S \subset P_{\gamma}$, so $O$ could not have satisfied $\gamma$, a contradiction. Thus, only one service of ALG with larger cost may be charged to any service of OPT.
\item Suppose that $c(S) < c(O)$ so that ALG's interval is strictly contained in OPT's interval. We will pay for all such $S$ at once. Suppose that $\rho_1,\ldots, \rho_k$ are requests in order of increasing deadline that are all satisfied by $(O,t)$ and that each trigger a service of ALG having smaller service cost than $c(O)$. Let $(S_i, t_i) = (P_{\rho_i} \cup A_i, d_{\rho_i})$ be the services of ALG triggered by $\rho_i$. By \cref{lemma: doubling} we know that
$$\sum_{i = 1}^k c(S_i) < (2 - 2^{-k})c(O)$$
and so the charge to $O$ for all these intervals is $2 - 2^{-k}$.
\end{itemize}
Overall, any service of OPT is charged by at most one larger service of ALG, at a charge of at most $2$, and by the smaller services of ALG at a total charge of at most $2-2^{-k}$. Thus, the total charge made to any service of OPT is at most $2 + (2-2^{-k}) = 4 - 2^{-k}$, where $k$ is the maximal number of smaller services charged to any service of OPT. A trivial upper bound for $k$ is $D$, as consecutive nested intervals must differ by at least one node. Thus, ALG is $(4 - 2^{-D})$-competitive.
\end{proof}
\begin{lemma}\label{lemma: doubling}
Suppose that $(O,t)$ is a service of OPT and $\rho_1,\ldots, \rho_k$ are requests in order of increasing deadline that are satisfied by $(O,t)$ and that trigger services $(S_1,t_1),\ldots, (S_k,t_k)$ of ALG each having smaller service cost than $c(O)$. Then,
$$\sum_{i = 1}^k c(S_i) < (2 - 2^{-k})c(O)$$
\end{lemma}
We defer the proof of \cref{lemma: doubling} to section~\ref{sec: pathappendix} of the appendix. In fact, the analysis above can be refined to give a $(4 - 2^{1 - \frac{D}{2}})$-competitive ratio. The proof of this stronger result can also be found in section~\ref{sec: pathappendix} of the appendix.
\begin{corollary}
The competitive ratio lower bound of $4$ does not hold for finite paths. Thus, the true competitive ratio for MLAPD for path instances is either at most $3$ or non-integer.
\end{corollary}
\begin{proof}
The claims are immediate as $4 - 2^{-D} < 4$ for any finite $D$.
\end{proof}
\iffalse
\subsection{Lower Bounds}
We want to show that for $D = 3$ certain techniques will fail to be competitive. Being that there are only so many approaches that can be tried for $D = 3$, this greatly reduces the number of possible approaches. Also, this shows a fundamental structure of the problem that algorithms should exploit. To begin, we want to show that purely global algorithms fail. In particular, we focus on the algorithm that adds the path to a due request and then adds the earliest due nodes with total cost of addition being at most the cost of the root. In fact, if we allow the entire path cost be used as the roots budget, a similar instance gives the claim.
Consider a $2$-decreasing tree with $D = 3$. The root has cost $x$ and it has three children having cost $x/2$ each. Then, each child of the root has $x/2$ children of cost $1$ each. There are requests at each leaf and they arrive all at the initial time (We can recursively construct the instance as before for paths if desired so offline techniques can't be attempted). Let $d_1 < \ldots < d_{3x/2}$ be the deadline of the requests in increasing order. Then, the requests in the first child of the roots subtree are those with deadlines $d_1,d_4,d_7,\ldots$. Those in the second child's subtree are those with deadlines $d_2, d_5, d_8 \ldots$. Lastly, those in the third child's subtree are $d_3,d_6,d_9, \ldots$. We then have that OPT could add all the nodes at once and thus satisfy all requests. The total cost of the tree is $x + 3x/2 + 3x/2 = 4x$. On the other hand, For every child of the first child of the root, a service of ALG will be issued. Specifically, ALG adds the path to $d_1$ then it adds the path to $d_2$ which costs $x/2 + 1$. The next most urgent request is $d_3$ which costs $x/2 + 1$ to add which would push over the budget. Thus, $d_3$ is not added. For any service of ALG exactly two requests will be satisfied and they will be in subtrees of seperate children of the root. Thus, ALG pays $x + 2(x/2+1)$ for every service it transmits, and it must transmit at least one service per child of the first root's children of which there are $x/2$. Hence ALG spends at least $2(x+1)x/2 = x(x+1)$. Hence, the competitive ratio is at least $(x+1)/4$. Since $x$ was arbitrary, this algorithm is not competitive.
Similarly, a purely local algorithm or an algorithm that only uses local aspects for nodes on the initial path will also fail. In particular if $v$ is the child of the root included in the initial path, we can instead give $v$ a budget and add urgent requests in $\mathcal{T}_v$. Similarly, giving the cost of $v$ and the root to this process does not help much. Thus, we need both global and local aspects of any algorithm we wish to be competitive. This holds for any algorithm that uses fixed or not too wildly changing budgets and considers deadlines in increasing order, which encapsulates all known strategies. In the next section, we construct an algorithm that tries to optimally balance these two factors.
\fi
\section{Arbitrary Trees}
In this section, we present a $D$-competitive algorithm for MLAPD on arbitrary trees of depth $D$.
\paragraph{Intuition.} Since the path to a due request must always be served, the main task is to decide what other nodes will be added to the service. The main observation is that several of these nodes may have been included in past services of ALG unnecessarily. So, we need to justify their inclusion in the current service. For a node $v$, we do this by satisfying requests in $\mathcal{T}_v$ using a procedure called a $v$-fall. This way, if $v$ was indeed serviced too many times before, we have other nodes that can effectively be used to pay for $v$'s cost in the analysis. Repeating this logic, we need to recursively justify all the nodes added by the $v$-fall, which will be nodes of higher level than $v$. Eventually, we find good nodes of low level, or this process reaches the leaves, which always have the desired properties.
Clearly, for this to work the fall will have to include enough new nodes to pay for the cost of the given node. At the same time, adding too many nodes would run the risk of too much eager aggregation, which would just cause more of the same problem we seek to avoid. To deal with this, we use a fixed budget that determines how many nodes can be added by the fall. However, the path to a request in $\mathcal{T}_v$ may have cost far greater than $v$'s budget. To avoid $v$'s budget going to waste, we discount the cost of this path so that it becomes easier for other falls to include. We treat the amount discounted as an investment that can be used to justify $v$'s inclusion. To implement this, we keep track of a price, $p(u)$, for every node that represents how much of $c(u)$ is left to be paid for. Then, a fall only needs to pay for the price of a path in order to include it in a service, rather than its full cost.
Overall, the fall will add paths to requests in increasing order of deadline. Naturally, this ordering ensures we satisfy many urgent requests. Though, some requests that aren't urgent may be cheap to add and so should be considered as well. However, the fact that every node included triggers its own fall ensures that requests that aren't globally urgent but are inexpensive to include are added to the service.
\paragraph{Falls.} Formally, given a tentative service tree $S$ at time $t$, we define a \textit{$v$-fall} to be an iterative procedure that considers requests in $\mathcal{T}_v^t$ for addition to the service. We define $F_v^t$ to be the set of nodes, $\mathcal{A}_v$, constructed by the $v$-fall at time $t$. We will write $P(v \to \rho)$ to mean $P(v \to \rho | S')$, where $S'$ is the tentative service tree right before this path was added to the fall $F_v^t$. If the nodes on a path to a request that are not already included in the service, $S$, can be included without the total price exceeding $v$'s budget, then the nodes are added to the service and the budget is reduced by the corresponding price of these nodes. This procedure continues until there are no pending requests in $\mathcal{T}_v^t$ or until the addition of a path to a request exceeds the remaining budget. If a path $P(v \to \rho | S)$ exceeds the remaining budget, then we reduce the price of nodes on this path by the remaining budget. In particular, we divide the remaining budget amongst the prices on the path, reducing each node's price proportional to its contribution to the total price of the path. We refer to these nodes as \textit{overflow nodes} for $F_v^t$. Whenever a fall causes some node's price to change, we think of this fall as investing in that node. This entire process is described in \cref{alg: vfall}. Note that we refer to the entire process (\cref{alg: init}, \cref{alg: waterfall}, \cref{alg: vfall}), as well as just \cref{alg: waterfall}, as \textsc{Waterfall}.
\paragraph{Waterfall Interpretation.} \cref{alg: waterfall} behaves like a flow in a flow network defined by $\mathcal{T}$. The sources are the root and other nodes on the path to the due request and the sinks are generally the leaves. Roughly, each $v$-fall is a way of selecting which children of $v$ in the network the flow will continue through. Recursively, all such nodes also send the amount of flow it received down its subtree, and we continue the process until we reach the leaves which cannot send their flow elsewhere. Nodes may have excesses if they don't add enough in a fall, but excess is invested into the overflow nodes for their eventual inclusion. So, if we take future inclusions into account then conservation is preserved. With this flow interpretation, the algorithm behaves like a waterfall where water runs over ledges which act as containers. Whenever enough water fills up on a ledge, it spills over causing a smaller waterfall.
\
\begin{algorithm}[H]
\caption{\textsc{Initialization}}\label{alg: init}
\SetAlgoLined
\For{$v \in \mathcal{T}$}{
$p(v) \gets c(v)$
}
\end{algorithm}
\
\begin{algorithm}[H]
\caption{\textsc{Waterfall}}\label{alg: waterfall}
\SetAlgoLined
\KwData{Request $\rho$ just became due}
\KwResult{A subtree is transmitted that serves $\rho$ }
Initiate a service $S = P_{\rho}$ and set $p(v) = c(v)$ for each $v \in P_{\rho}$.\\
Let $Q$ be a queue for nodes. Enqueue the nodes of $P_{\rho}$ into $Q$\\
\While{$Q \not = \varnothing$}{
Dequeue $v$ from $Q$\\
$\mathcal{A}_v \gets F_v(S)$\\
For every $u \in \mathcal{A}_v$ enqueue $u$ into $Q$\\
}
\textbf{transmit} $S$
\end{algorithm}
\
\begin{algorithm}[H]
\caption{\textsc{$v$-fall}, $F_v(S)$}\label{alg: vfall}
\SetAlgoLined
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{output}
\Input{A tentative service subtree $S$}
\KwResult{The service subtree $S$ is possibly extended to satisfy some requests in $\mathcal{T}_v$ and the set of added nodes is returned. }
$\mathcal{A}_v = \emptyset, b(v) = c(v)$ \\
\For{$\rho \in \mathcal{T}_v$ in increasing deadline order}{
\eIf{$p(v \to \rho | S) > b(v)$}{
\For{$u \in P(v \to \rho | S)$}{
$p(u) \gets p(u)(1 - \frac{b(v)}{p(v \to \rho | S)})$
}
break;
}{
$\mathcal{A}_v \gets \mathcal{A}_v \cup P(v \to \rho | S)$ \\
$b(v) \gets b(v) - p(v \to \rho | S)$ \\
\For{$u \in P(v \to \rho | S)$}{
$p(u) \gets c(u)$
}
$S \gets S \cup \mathcal{A}_v$\\
}
}
\Return{$\mathcal{A}_v$}
\end{algorithm}
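The budget and price bookkeeping of \cref{alg: vfall} can be sketched in Python; the data model below (dictionaries for costs and prices, precomputed per-request path node lists) is an assumption for illustration only.

```python
# A hedged sketch of a v-fall: budget starts at c(v); paths maps each
# request rho to the nodes of P(v -> rho | S); S is the tentative service.
def v_fall(cost, price, budget, requests_by_deadline, paths, S):
    added = []
    for rho in requests_by_deadline:
        new_nodes = [u for u in paths[rho] if u not in S]
        path_price = sum(price[u] for u in new_nodes)
        if path_price > budget:
            # overflow: invest the remaining budget, reducing each
            # price proportionally to its share of the path price
            for u in new_nodes:
                price[u] *= 1 - budget / path_price
            break
        for u in new_nodes:
            S.add(u)
            price[u] = cost[u]  # reset the price once u is included
            added.append(u)
        budget -= path_price
    return added

cost = {"a": 3, "b": 3}
price = {"a": 3, "b": 3}
S = set()
added = v_fall(cost, price, budget=4, requests_by_deadline=["r1", "r2"],
               paths={"r1": ["a"], "r2": ["b"]}, S=S)
# r1's path fits the budget (3 <= 4); r2's does not (3 > 1), so b's
# price drops from 3 to 3 * (1 - 1/3) = 2.
assert added == ["a"] and abs(price["b"] - 2.0) < 1e-9
```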
\begin{theorem}\label{theorem: main}
\textsc{Waterfall} is $D$-competitive on trees of depth $D$.
\end{theorem}
The main approach to analyzing \textsc{Waterfall} will be a charging argument.
Whenever a node is included in a service of ALG, its cost must be paid for by charging to a service of OPT. We think of the amount invested in a node at some time as the total cost it is responsible for. If the node is critically overdue, then it can pay for its own cost and the amount invested in it. Otherwise, a fall investing a node's budget to its descendants can be interpreted as the node delegating the payment of its cost to its descendants. Then, these descendants will either be able to pay for their own costs as well as the ancestors', or will further transfer responsibility of payment to their descendants. Continuing this process, nodes that can pay off the costs will eventually be reached. In particular, the leaves of the tree will eventually be reached and these will be critically overdue.
However, nodes in the subtree of a critically overdue node may be early and so cannot pay for themselves. The cost of such nodes will also be charged to their critically overdue ancestor. Thus, a critically overdue node will have to pay both for its ancestors (including itself) and its early descendants. This charging idea can be visualized as the evolution of debt in a family tree. The family may continually pass down debt from generation to generation. At some point, the descendants will have accumulated enough wealth to pay off the debt of their ancestors. Though these descendants might also have children who will accumulate further debt. These unlucky descendants would then be tasked with paying off debts from both sides, ancestors and descendants.
\subsection{ALG's Structure}
\paragraph{Phases} It will be helpful to break services of ALG into different phases. Suppose that $(O_1,t_1), \ldots, (O_j,t_j)$ are all the services of OPT that include the node $v$ in chronological order of transmission time. Then, for some $i \in [j-1]$ suppose $(A_1,t_1'),\ldots, (A_k,t_k')$ are all the services of ALG that include $v$ and satisfy $t_i \leq t_h' < t_{i+1}$ where $h \in [k]$. We call these services of ALG a $v$-phase and say that $(O_i,t_i)$ starts the $i$-th $v$-phase. For $i = j$, a $v$-phase consists of all the services of ALG containing $v$ that are transmitted at time $t_j$ or later. Note that if a request $\rho$ becomes due in the $i$-th $v$-phase, then $\rho$ must be satisfied by one of $(O_h,t_h)$ for $h \in [i]$. Consequently, if a request, $\rho$, arrives in $\mathcal{T}_v$ during a $v$-phase, it cannot become due before the next $v$-phase since no service of OPT includes $v$ during a $v$-phase and so cannot satisfy $\rho$. Thus, if $v$ is late at time $t$ during a phase, the request making it late must have arrived before the phase began. Also, for any service $(A,t)$ in the $i$th $v$-phase, we have that $\pos{v}{t} = t_i$ and $\nos{v}{t} = t_{i+1}$, or $\nos{v}{t} = \infty$ if $i = j$.
Suppose $v$ is late at the time $t_i$ when $A_i$ is transmitted in the phase. Then, we have $v$ is late at all previous services in the phase. This is because ALG always considers the earliest due requests in $\mathcal{T}_v$ for satisfaction first. Specifically, since late requests cannot arrive in the middle of a phase, we know the request causing ALG to enter $\mathcal{T}_v$, $\rho_i$, must have been available in the previous services of ALG. Hence, the earliest due request in $\mathcal{T}_v^{t_h}$ for $h < i$, $\rho_h$, had an earlier deadline so is also overdue. Formally, $$d^{t_h}(v) = d_{\rho_h} < d_{\rho_i} < \nos{v}{t_i} = \nos{v}{t_h}$$
Thus, we can partition any phase into two pieces: an initial segment where $v$ is late and a later segment where $v$ is not late. We call the former services late with respect to the phase, and the latter early. Note it could be that a phase has only early or only late services. However, $r$-phases have only late services since transmissions of the algorithm only happen when a request becomes due.
Let $v$ be a node included in a service of ALG at time $t$. Recall, we say $v$ is critically overdue at time $t$ if $v$ is late at time $t$ and if for any service of the algorithm's schedule including $v$ at a time $t' \in (t,\nos{v}{t})$ we have that $v$ is not late at time $t'$. In terms of phases, $v$ is critically overdue at time $t$ if the service of ALG at time $t$ is the last late service in its respective $v$-phase. We ``charge'' to a critically overdue node, $(v,t)$, by charging to the occurrence of $v$ in the service of OPT transmitted at time $\pos{v}{t}$. Charging to critically overdue nodes is injective since only one service in any phase may be critically overdue.
Next, we discuss structural properties of late and critically overdue nodes. First, we note that if $(v,t)$ is late and the $v$-fall satisfies all requests in $\mathcal{T}_v^t$, then $(v,t)$ must be critically overdue. This is clear since a new request must arrive after time $t$ in order for $v$ to become late again and a new service of OPT must be transmitted to satisfy this new request.
\begin{lemma}\label{lemma: latepaths}
Suppose that $v$ is late at time $t$ and that $\rho$ is the earliest due request in $\mathcal{T}_v^t$. Then, for all $u \in P(v \to \rho)$, $u$ is late at time $t$. Furthermore, if $v$ is late at time $t$ and $u \in F_v^t$ is early at time $t$, then $v$ is critically overdue at time $t$.
\end{lemma}
\begin{proof}
If $v$ is late at time $t$ then, by definition, we know that $d^t(v) < \nos{v}{t}$. Since $\rho$ is the earliest due request in $\mathcal{T}_v^t$, we also know that $d^t(v) = d_{\rho}$. If for $u \in P(v \to \rho)$ a request $\gamma \in \mathcal{T}_u^t$ was due earlier than $\rho$, $\gamma$ would also be an earlier due request in $\mathcal{T}_v^t \supseteq \mathcal{T}_u^t$. Hence, we have $\rho$ is the earliest due request in $\mathcal{T}_u^t$, so $d^t(v) = d^t(u)$. Now, by the subtree property of services, if OPT includes $u$ in a service it must also include $v$ in that service implying that $\nos{v}{t} \leq \nos{u}{t}$. Thus, we see that $d^t(u) = d^t(v) < \nos{v}{t} \leq \nos{u}{t}$ and so $u$ is late at time $t$.
Now, further suppose that $u \in F_v^t$ is early at time $t$. Also, suppose that ALG next includes $v$ at time $t'$. If $v$ is early at time $t'$, then either this service at time $t'$ is in a new $v$-phase or this service is an early service in the same $v$-phase. In either case, the partition of phases into earlier and late parts implies that the service at time $t$ is the last late service for the $v$-phase. Hence, $v$ is critically overdue at time $t$.
Next, suppose that $v$ is late at time $t'$. Again, if the service at time $t'$ is in a new $v$-phase, $v$ is critically overdue at time $t$. If the service at time $t'$ is in the same $v$-phase as the service at time $t$, we show that $u$ at time $t$ was actually late. In particular, the services at times $t$ and $t'$ being in the same $v$-phase implies that $\pos{v}{t} = \pos{v}{t'}$ and $\nos{v}{t} = \nos{v}{t'}$. We claim that the earliest due request, $\gamma$, in $\mathcal{T}_v^{t'}$ was issued by time $t$. If not, we have that $\pos{v}{t} \leq t < a_{\gamma} \leq d_{\gamma} < \nos{v}{t}$. This means that $\gamma$ arrived and became due between two services of OPT and so OPT never satisfied this request. This is impossible by feasibility of OPT and so $\gamma \in \mathcal{T}_v^t$. Since $u$ was included in the $v$-fall at time $t$, but $\gamma$ was not satisfied by this $v$-fall, we know that $d^t(u) < d_{\gamma}$. But, since $d_{\gamma} = d^{t'}(v)$ and $v$ is late at time $t'$, we have
$$d^t(u) < d_{\gamma} = d^{t'}(v) < \nos{v}{t'} = \nos{v}{t} $$
Finally, applying the subtree property at time $t$, we have that $d^t(u) < \nos{v}{t} \leq \nos{u}{t}$ and so $u$ is late at time $t$. This contradicts that $u$ is early and hence $v$ must be critically overdue at time $t$.
\end{proof}
\paragraph{Direct Investments.} Suppose that $(S,t)$ is a service of ALG with $v \in S$. We say that the pair $(v,t)$ \textit{directly invests} in the pair $(w,t')$ if $w \in F_v^t$ and $t = t'$, or if $w$ is an overflow node for $F_v^t$ and $t'$ is the next time after $t$ that $w$ is included in a service of ALG. In the former case, the \textit{investment} made, $I_{v}^t(w,t')$, is $p(w)$, $w$'s price during the construction of $F_v^t$. In the latter case, the investment made is $\frac{rb(v,t)}{p(v \to \rho)}p(w)$, where $rb(v,t)$ is the remaining budget, $b(v)$, at the end of the $v$-fall's construction and $\rho$ is the earliest due request not satisfied by this fall. If $rb(v,t) = 0$, then we do not consider any $0$ investments and say there are no overflow nodes for the fall. We imagine a fall including a node as reducing its price to $0$ before resetting its price to its cost. Then, the direct investment in any node is exactly the amount by which the fall reduces the node's price.
\begin{lemma}\label{lemma: direct}
The total direct investment made by any pair $(v,t)$ is at most $c(v)$, and it is strictly less than $c(v)$ only if $F_v^t$ satisfied all requests in $\mathcal{T}_v^t$. In addition, the total direct investment made to a pair $(v,t)$ is at most $c(v)$, and it is strictly less than $c(v)$ exactly when $v$ is on the path to the request that triggers the service of ALG at time $t$.
\end{lemma}
\begin{proof}
The total direct investment made by $(v,t)$ is exactly the total price of all nodes included in $F_v^t$ plus the investments made to the overflow nodes. Note that the budget starts at $c(v)$ and is reduced by the price of each node added to the fall in each iteration. Hence, the total investment made by $(v,t)$ to nodes included in $F_v^t$ is exactly $c(v) - rb(v,t)$. If all requests in $\mathcal{T}_v^t$ are satisfied by $F_v^t$, then these are all the direct investments made, and the total is clearly at most $c(v)$. If there are overflow nodes, each overflow node, $(u,t')$, is invested exactly $\frac{rb(v,t)}{p(v \to \rho)} p(u)$. Thus, the total investment made by $(v,t)$ to overflow nodes is exactly
$$\sum_{u \in P(v \to \rho)} \frac{rb(v,t)}{p(v \to \rho)} p(u) = \frac{rb(v,t)}{p(v \to \rho)}\sum_{u \in P(v \to \rho)}p(u) = rb(v,t) $$
So, the total direct investments made by $(v,t)$ is $c(v) - rb(v,t) + rb(v,t) = c(v)$. Hence, as long as there are overflow nodes, the total direct investment will be exactly $c(v)$. So, the only way the total direct investment can be strictly less than $c(v)$ is if there are no overflow nodes, which can only happen if $F_v^t$ satisfied all requests in $\mathcal{T}_v^t$ by definition.
For the second claim, first note that every direct investment made to $(v,t)$ occurs while $v$ is an overflow node, except possibly the very last direct investment, as $v$ is included in the service at that point and then cannot be invested in further by definition. Now, the direct investments made to $v$ are exactly the amounts by which the price of the node is reduced. Since the price of the node begins at its cost and is reduced until the price becomes $0$ or until the node is included in a service due to a request becoming due, we see the total direct investment made to $v$ is at most $c(v)$. If $v$ is added by a fall, then the last investment was its final price, so the total direct investment is the final price $p(v)$ plus the amounts the price was previously reduced by, namely $c(v) - p(v)$, for a total of $c(v)$. Hence, the total direct investment in $(v,t)$ is less than $c(v)$ exactly when $v$ is not added by some fall. This happens if and only if $v$ was on the path to a request due at time $t$, in which case the initial part of \cref{alg: waterfall} added $v$ rather than a fall.
\end{proof}
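The computation in the proof, that the overflow investments always sum to the remaining budget $rb(v,t)$ no matter how the individual prices are distributed, can be checked numerically (illustrative values, not from the paper):

```python
# Sanity check mirroring the proof: the overflow investments
# rb/p(path) * p(u) sum to the remaining budget rb(v,t).
prices = [2.0, 5.0, 0.5]   # p(u) for the overflow nodes on P(v -> rho)
rb = 3.0                   # remaining budget rb(v, t)
path_price = sum(prices)
investments = [rb / path_price * p for p in prices]
assert abs(sum(investments) - rb) < 1e-12
```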
\paragraph{Investments.} We say $(v,t)$ \textit{invests} in $(w,t')$ if either $(v,t)$ directly invests in $(w,t')$ or if $(v,t)$ invests in $(u,t'')$ and $(u,t'')$ directly invests in $(w,t')$. Equivalently, the latter case can be defined so that $(v,t)$ directly invests in $(u,t'')$ and $(u,t'')$ invests in $(w,t')$ and we will use both versions interchangeably. The equivalence follows similarly to how we can define a walk in a graph inductively by either looking at the last step or the first step in the walk. In the inductive case, we define $I_v^t(w,t') = \frac{I_v^t(u,t'')}{c(u)} \cdot I_u^{t''}(w,t')$ where one of the investments will be direct and the other will be recursively constructed depending on which definition we use. Intuitively, this is just the investment made to $(w,t')$ by $(u,t'')$ scaled by the relative reduction in $c(u)$ caused by $(v,t)$'s investment to $(u,t'')$.
We define the \textit{total investment} in $(v,t)$ to be $I(v,t) = \sum_{\substack{(w,t') \text{ invests} \\ \text{in } (v,t)}} I_w^{t'}(v,t)$. Note, that we can calculate the total investment by focusing on direct investments. In particular, $I(v,t) = \sum_{\substack{(w,t') \text{ directly } \\ \text{invests in } (v,t)}} \frac{I(w,t')}{c(w)} I_w^{t'}(v,t)$ which follows since any pair that invests in $(v,t)$ must invest in some pair directly investing in $(v,t)$.
To streamline our arguments we will consider $(v,t)$ as investing in $(v,t)$ and the investment made is $c(v)$.
Note that only ancestors of $v$ can invest in a pair $(v,t)$.
\begin{lemma}\label{lemma: investedin}
For any pair $(v,t)$, we have $I(v,t) \leq L_v c(v)$.
\end{lemma}
\begin{proof}
We proceed by induction on $L_v$. For the base case, we have $L_v = 1$, so $v = r$. The only ancestor of $r$ is $r$ itself, so the only investment made is from $r$ itself. Hence, the total investment to $(r,t)$ is $I(r,t) = c(r) = L_r c(r)$.
Suppose that $L_v > 1$, so that $v$ is a proper descendant of $r$. If no proper ancestors of $v$ directly invested in $(v,t)$, then again $I(v,t) = c(v) \leq L_v c(v)$. Suppose that $(u_1,t_1), \ldots, (u_k, t_k)$ directly invested in $(v,t)$. Note that each $u_i$ must be a proper ancestor of $v$ and so $L_{u_i} \leq L_v - 1$. The induction hypothesis implies that $I(u_i,t_i) \leq L_{u_i} c(u_i)$ for each $i$. Excluding the investment from $(v,t)$ to itself, the total investment in $(v,t)$ is
\begin{align*}
I(v,t) - c(v) &= \sum_{i = 1}^k \frac{I(u_i,t_i)}{c(u_i)} I_{u_i}^{t_i}(v,t) \\
&\leq \sum_{i = 1}^k \frac{L_{u_i} c(u_i)}{c(u_i)} I_{u_i}^{t_i}(v,t) \\
&= \sum_{i = 1}^k L_{u_i} I_{u_i}^{t_i} (v,t) \\
&\leq (L_v - 1)\sum_{i = 1}^k I_{u_i}^{t_i}(v,t)
\end{align*}
Lastly, we use the fact from \cref{lemma: direct} that the total direct investment into $(v,t)$ is at most $c(v)$, so this final sum is at most $c(v)$. Thus, we have that $I(v,t) - c(v) \leq (L_v - 1) c(v)$ implying that $I(v,t) \leq L_v c(v)$.
\end{proof}
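The recursion in the proof can be checked numerically on a chain; the model below (ancestors maximally invested in, random direct investments totalling at most $c(v)$) is an illustrative worst case of our own, not the proof itself.

```python
# Illustrative check of Lemma (investedin): each proper ancestor u_i at
# level i satisfies I(u_i)/c(u_i) <= L_{u_i} = i + 1 by induction, and
# the direct investments d_i into (v,t) total at most c(v); then
# I(v,t) <= L_v * c(v).
import random

random.seed(0)
for _ in range(200):
    L_v = random.randint(1, 6)
    c_v = random.uniform(1.0, 10.0)
    d = [random.uniform(0.0, 1.0) for _ in range(L_v - 1)]
    if sum(d) > c_v:
        d = [x * c_v / sum(d) for x in d]  # enforce total direct investment <= c(v)
    # I(v,t) = c(v) + sum_i (I(u_i)/c(u_i)) * d_i, bounded via I(u_i)/c(u_i) <= i+1
    I_v = c_v + sum((i + 1) * d[i] for i in range(L_v - 1))
    assert I_v <= L_v * c_v + 1e-9
```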
We will not only want to look at the amount invested in a pair, but also how much a pair invests in other pairs. To this end, we define $IM(v,t) = \sum_{\substack{(v,t) \text{ invests} \\ \text{in } (w,t'), w \not = v}} I_v^t(w,t')$ to be the total investment made by $(v,t)$ to its proper descendants. Again, we can focus on the pairs that are directly invested in as follows: $IM(v,t) = \sum_{\substack{(v,t) \text{ directly } \\ \text{invests in } (w,t')}} I_v^t(w,t') + \frac{I_v^{t}(w,t')}{c(w)} IM(w,t')$.
\begin{lemma}\label{lemma: investmentmade}
For any pair $(v,t)$, $IM(v,t) \leq (D-L_v)c(v)$.
\end{lemma}
\begin{proof}
We proceed by induction on $D-L_v$. For the base case, $D-L_v = 0$, so $v$ is a leaf. In this case, $v$ has no proper descendants and so $IM(v,t) = 0 = (D-L_v) c(v)$.
Now, suppose that $D-L_v > 0$ so that $v$ is not a leaf. If $(v,t)$ does not directly invest in any pair we are done, so suppose that $(v,t)$ directly invests in $(w_1,t_1), \ldots, (w_k,t_k)$. By \cref{lemma: direct}, we know the total direct investment made by $(v,t)$ is at most $c(v)$. In symbols, $\sum_{i = 1}^k I_v^t(w_i,t_i) \leq c(v)$. Also, each $w_i$ is a proper descendant of $v$ and so the induction hypothesis implies that $IM(w_i,t_i) \leq (D - L_{w_i}) c(w_i)$ for all $i$. Using these two facts gives,
\begin{align*}
IM(v,t) &= \sum_{i = 1}^k I_v^t(w_i,t_i) + \frac{I_v^t(w_i,t_i)}{c(w_i)}IM(w_i,t_i) \\
&\leq \sum_{i = 1}^k I_v^t(w_i,t_i) + \frac{I_v^t(w_i,t_i)}{c(w_i)}(D - L_{w_i})c(w_i) \\
&= \sum_{i = 1}^k (D - L_{w_i} + 1) I_v^t(w_i,t_i) \\
&\leq \sum_{i = 1}^k (D - L_v) I_v^t(w_i,t_i) \\
&\leq (D- L_v) c(v)
\end{align*}
Above we have used the fact that $L_{w_i} \geq L_v + 1$ since descendants have higher level and so $D - L_{w_i} + 1 \leq D - L_v$.
\end{proof}
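The companion bound on investments made can be checked on the same kind of toy instance (again, all numbers below are our own invented example with $D = 3$, not data from the paper):

```python
# Toy numeric check of the bound IM(v,t) <= (D - L_v) * c(v)
# on a hypothetical path tree r -> a -> b of depth D = 3.
D = 3
cost = {"r": 4.0, "a": 3.0, "b": 2.0}
level = {"r": 1, "a": 2, "b": 3}
direct = {"r": {"a": 3.0, "b": 1.0}, "a": {"b": 1.0}, "b": {}}

def investment_made(v):
    # IM(v) = sum over w directly invested in of d(v,w) + d(v,w)/c(w) * IM(w)
    return sum(d + d / cost[w] * investment_made(w) for w, d in direct[v].items())

for v in cost:
    assert investment_made(v) <= (D - level[v]) * cost[v] + 1e-9
```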
\subsection{Charging Scheme}
Now, we begin the construction of the charges we will make. For a set of pairs, $S$, we let $I_v^t(S) = \sum_{(w,t') \in S} I_v^t(w,t')$ be the total investment $(v,t)$ made to the pairs in $S$.
\begin{lemma}\label{lemma: construction}
Let $(S,t)$ be a service of ALG and suppose $v \in S$ is late at time $t$. Then, there exists a set $U_v^t \subseteq \mathcal{T}_v$ of critically overdue nodes such that $I_v^t(U_v^t) = c(v)$.
\end{lemma}
\begin{proof}
We proceed by induction on $D-L_v$.
For the base case, $D-L_v = 0$, so $v$ is a leaf. In this case, we claim $v$ is critically overdue at time $t$ and hence $U_v^t = \{(v,t)\}$ satisfies the requirements as $I_v^t(v,t) = c(v)$. Suppose that there was another service of ALG including $v$ at a time $t' \in (t, \nos{v}{t})$. Further, suppose that $v$ is late at time $t'$. Now, since all requests at $v$ are satisfied after time $t$ by $v$'s inclusion, it must be that some other request $\rho$ arrived at $v$ after time $t$. However, $v$ late at time $t'$ implies $t < a_{\rho} \leq d_{\rho} < \nos{v}{t}$. This implies $\rho$ becomes due before OPT could have satisfied it, contradicting feasibility of OPT. Thus, $(v,t)$ is critically overdue.
Now, suppose that $D-L_v > 0$, so $v$ is not a leaf. If $(v,t)$ is critically overdue, we again have that $U_v^t = \{(v,t)\}$ satisfies the requirements. Otherwise, suppose that $v$ is not critically overdue at time $t$. Then, by definition, there must be a time $t' > t$ at which ALG transmits a service containing $v$, with $v$ late at time $t'$. By the partitioning of phases into early and late parts, we can assume that $t'$ is the very next time that ALG includes $v$ after time $t$, since $v$ must be late at time $t'$ if it is late at an even later time in the phase.
Consider the request $\gamma$ whose satisfaction caused $v$ to be included at time $t'$ and $\rho$ the last request considered by $F_v^t$. We claim that $\rho = \gamma$. If $\gamma \not = \rho$, then it must be the case that $d_{\gamma} < d_{\rho}$. However, the definition of $v$-falls imply they consider requests in earliest deadline order. Thus, for $\rho$ to be considered before $\gamma$ it must be the case that $\gamma$ arrived after time $t$. But, this implies $t'$ is in another phase and so $v$ was critically overdue at time $t$, a contradiction. Hence, $\rho = \gamma$.
Then, \cref{lemma: latepaths} implies all the nodes in $P(v \to \rho)$ are late at time $t'$ as $\rho$ is the earliest request in $\mathcal{T}_v^{t'}$. Also, the contrapositive of the last statement in the lemma implies that any $u \in F_v^t$ is late at time $t$ since $(v,t)$ is not critically overdue. Thus, all of the nodes in $F_v^t$ and the overflow nodes $P(v \to \rho)$ are higher level nodes than $v$ that are late so the induction hypothesis applies to these nodes. In particular, if $(w_1,t_1), \ldots, (w_k,t_k)$ are the nodes that $(v,t)$ directly invested in, then we have that for each $i$ there exists a set of critically overdue nodes, $U_{w_i}^{t_i}$, satisfying $I_{w_i}^{t_i}(U_{w_i}^{t_i}) = c(w_i)$. Since $(v,t)$ is not critically overdue, it must be the case that $F_v^t$ did not satisfy every request in $\mathcal{T}_v^t$ as mentioned previously. Thus, by \cref{lemma: direct} we have that the total direct investment made by $(v,t)$ is exactly $c(v)$. Hence, we have that $\sum_{i = 1}^k I_v^t(w_i,t_i) = c(v)$. Defining $U_v^t = \cup_{i = 1}^k U_{w_i}^{t_i}$, we then have,
\begin{align*}
I_v^t(U_v^t) &= \sum_{i = 1}^k I_v^t(U_{w_i}^{t_i}) \\
&= \sum_{i = 1}^k \frac{I_v^t(w_i,t_i)}{c(w_i)}I_{w_i}^{t_i}(U_{w_i}^{t_i}) \\
&= \sum_{i = 1}^k I_v^t(w_i,t_i) = c(v)
\end{align*}
Also, $U_v^t \subseteq \mathcal{T}_v$ contains only nodes that are critically overdue.
\end{proof}
Fix any service $(S, t)$ of ALG. \cref{lemma: construction} implies that for any late node $v \in S$, we can pay for the cost $c(v)$ in this service by charging the critically overdue nodes in $U_v^t$ the amounts $(v,t)$ invested in them. These are the charges we make for late nodes. Note that, at this point, the total charge to any critically overdue node $(v,t)$ is at most $L_v$ times its cost: we charge a node at most the total investment made in it, and this is at most $L_v c(v)$ by \cref{lemma: investedin}.
We also need to pay for early nodes of a service, which we do next. The analysis above essentially covers the cost of all nodes ``above" a node $v$, i.e. its ancestors. Now, we want to bound the cost of all nodes ``below" a node $v$, i.e. its descendants. To continue the charging strategy, we will charge every critically overdue node, $(v,t)$, for all of the early nodes that it invests in. This is at most the totality of all investments made by $(v,t)$ so this adds a charge of at most $D-L_v$ to $(v,t)$ according to \cref{lemma: investmentmade}. Thus, each node is charged $L_v$ for nodes above it and $D-L_v$ for nodes below it for a total of $D$. We just need to argue every cost is in fact covered by this strategy.
\begin{lemma}\label{lemma: coverage}
Let $(S,t)$ be a service of ALG. Then, the charging strategy pays for all the costs of nodes in $S$.
\end{lemma}
\begin{proof}
Any late node is clearly paid for by the definition of the charging scheme and \cref{lemma: construction}. Next, we argue all early nodes are also paid for. Note any node on a path to a due request cannot be early by \cref{lemma: latepaths}. Thus, if a node is early it cannot be on the path to a request that triggers a service of ALG. Consequently, \cref{lemma: direct} implies the total investments to an early node must exactly equal its cost.
We proceed by induction on $L_v$ to show if $v$ is early at time $t$ then the cost of $v$ is exactly paid for by charges to some critically overdue nodes as dictated by the charging scheme. For the base case, $L_v = 1$ and we have that $v = r$. We know $r$ is never early by definition of ALG only including $r$ when a request becomes due and so this case vacuously holds.
Suppose that $L_v > 1$. Let $(u,t')$ be a pair that directly invested in $(v,t)$. If $(u,t')$ is critically overdue, then we know by the charging scheme that $(u,t')$ pays for the investment $I_u^{t'}(v,t)$ that it made to $(v,t)$. We just need to ensure that the investments of the other pairs, those that are not critically overdue, are paid for. First, we claim all non-critically overdue pairs that invested in $(v,t)$ are early. For the sake of contradiction, suppose that $(u,t')$ directly invested in $(v,t)$ and $(u,t')$ is late but not critically overdue.
\begin{itemize}
\item If $v \in F_u^{t'}$ is early at time $t'$, then \cref{lemma: latepaths} implies that $(u,t')$ is critically overdue, a contradiction.
\item If $F_u^{t'}$ does not add $v$ at time $t'$, then $v$ was an overflow node for $(u,t')$. Since $(u,t')$ is late but not critically overdue, we know the next time $t''$ that ALG includes $u$ after time $t'$ is in the same $u$-phase as time $t'$. However, since $v$ was an overflow node at time $t'$, we know that the earliest request in $\mathcal{T}_u^{t'}$ that is not satisfied is in $\mathcal{T}_v^{t'}$.
\begin{itemize}
\item If the next time we enter $u$ the path includes $v$, then this happens at time $t''$ and so $t = t''$. Then, \cref{lemma: latepaths} implies $(v,t)$ is late since it is on the path to the earliest due request in $\mathcal{T}_u^{t}$.
\item If the next time we enter $u$ the path does not include $v$, it must be that a new request arrived after time $t'$ causing $u$ to be late. However, late requests cannot arrive in the middle of a phase as it would become due before OPT includes $u$ again.
\end{itemize}
\end{itemize}
In all cases, we reach a contradiction and so if $(u,t')$ is not critically overdue then it must be early.
Now, for each $(u,t')$ that is early, $u$ is an ancestor of $v$, so the induction hypothesis implies that some charges to critically overdue nodes paid for $c(u)$. In particular, by our charging scheme this means there were critically overdue nodes $(w_1, t_1), \ldots , (w_k, t_k)$ satisfying $\sum_{i = 1}^k I_{w_i}^{t_i}(u,t') = c(u)$. However, since $v$ is an early descendant of $u$, $v$ is also an early descendant of each $w_i$. Hence, our charging scheme also ensures that $(w_i, t_i)$ paid for its investment to $(v,t)$, since it paid its investment to $(u,t')$. By definition of the investments, the total investment made to $(v,t)$ by these critically overdue nodes is exactly
$$\sum_{i = 1}^k I_{w_i}^{t_i}(v,t) = \sum_{i = 1}^k \frac{I_{w_i}^{t_i}(u,t')}{c(u)} I_u^{t'}(v,t) = \frac{I_u^{t'}(v,t)}{c(u)} \sum_{i = 1}^k I_{w_i}^{t_i}(u,t') = I_u^{t'}(v,t)$$
Now, since this holds true for all early pairs investing in $(v,t)$, we have that all direct investments to $(v,t)$ are paid for by our charging scheme. But the total direct investment for an early node is its cost, and so $c(v)$ is paid for by the charges. Thus, the charging scheme pays for all of the costs of the service $(S,t)$.
\end{proof}
Summarizing the charging strategy above gives a proof of \cref{theorem: main}.
\begin{proof}
For any service $(S,t)$ of ALG, we apply \cref{lemma: construction} to create sets of critically overdue nodes to charge. In particular, each node $v \in S$ that is late at time $t$ we pay for by charging each pair in $U_v^t$ the investment $(v,t)$ made to each. This pays for every late node as $I_v^t(U_v^t) = c(v)$. By \cref{lemma: investedin} the total investments made to $(v,t)$ is at most $L_v c(v)$ and so the charge to any critically overdue node is at most $L_v$ to pay for late nodes through investments. Then, we charge to each critically overdue node the total amount of investments it made to early nodes. These charges add a total of charge of at most $D-L_v$ to any node $v$ as the total investments made by $(v,t)$ is at most $(D-L_v)c(v)$ by \cref{lemma: investmentmade}. Hence, the total charge to any critically overdue node is at most $D$. Lastly, \cref{lemma: coverage} implies every node in a service of ALG is paid for by this charging. Thus, the total cost of ALG's schedule is at most $D$ times the total cost of OPT's schedule. Consequently, ALG is $D$-competitive.
\end{proof}
One thing to note about the above analysis is that it is essentially tight. Most critically overdue nodes are charged exactly $D$. This is the best charging possible if we restrict to only charging nodes OPT has included prior to a fixed time. The only way to improve upon this analysis would be to allow charging early nodes as well. However, recursive instances give evidence that no such approach is possible, as the adversary could change the instance if it sees ALG early-aggregate many nodes.
At a higher level, there is a clear trade-off between the eager and lazy aggregation costs, and neither cost seems to dominate the other. So, perfectly balancing both likely gives the best competitive ratio for the problem, and this is exactly what \textsc{Waterfall} does. Specifically, the maximum early aggregation is $D$, resulting from the early aggregation made by the root, and the maximum lazy aggregation comes from the leaves, which also have a charge of $D$. More generally, the sum of the early charges and the late charges adds to $D$, as the analysis shows. Although they may not always be exactly $D/2$ each, the early and lazy aggregation costs are always in equilibrium with each other, balancing out to the fixed number $D$. Depending on the tree structure and the requests received, strictly more eager aggregation or strictly more lazy aggregation may be more beneficial, and so this more general type of balancing seems necessary. This further indicates that $D$ may be the best competitive ratio possible for MLAPD.
\section{Conclusions}
One prominent open question left by this paper is whether $D$ is the true competitive ratio for MLAPD. In particular, a critical result would be new lower-bounds for MLAPD that improve upon the $2$ lower-bound for the $D = 2$ case. In addition, determining the difference in complexity between the delay and deadline cases is a critical open problem.
For $D = 1$, we know the optimal competitive ratio for MLAPD is $1$ and that for MLAP it is $2$. For $D = 2$, we know that the optimal competitive ratio for MLAPD is $2$ and for MLAP it is essentially $3$ (improving the known lower-bound of $2.754$ to $3$ is still unresolved). Unless the problems for larger $D$ differ greatly from those for smaller $D$, this pattern would suggest that the optimal competitive ratio for MLAP is exactly $1$ more than that for MLAPD.
In fact, \cite{OPD} essentially took an algorithm for JRPD and converted it into an algorithm for JRP using a primal dual framework. The resulting algorithm had competitive ratio $1$ more than the competitive ratio of the algorithm for JRPD. If this technique could be extended for $D > 2$, then using \textsc{Waterfall} as the MLAPD algorithm could result in a $(D+1)$-competitive algorithm for MLAP, which would improve on the best known $O(D^2)$-competitive algorithm. This is especially promising since the JRPD algorithm used for that result behaves almost exactly as \textsc{Waterfall} does on trees of depth $2$.
Another possible approach would be to extend \textsc{Waterfall} to work for the delay case directly. By simply replacing deadlines with maturation times and making minor adjustments, the modified \textsc{Waterfall} may in fact be $(D+1)$-competitive. However, unlike the nice structural properties of phases being partitioned into late and early parts, no such structure is clear for the delay version. Thus, a different analysis strategy must be employed absent new structural insights.
\printbibliography
\appendix
\section{Proofs for section~\ref{section: paths}}\label{sec: pathappendix}
We begin with the proof of \cref{lemma: doubling}.
\begin{proof}
In symbols, the assumptions state that $d_{\rho_1} < \ldots < d_{\rho_k}$, $c(S_i) < c(O)$, and $P_{\rho_i} \subseteq P_{\rho_k} \subseteq O$ for all $i \leq k$. We will show that each interval transmitted by ALG has cost at least double that of the previously transmitted interval, and use this to show that, in reverse order, each interval must be at least a power of two smaller than the cost of $O$.
Let $P(\rho_{i-1} \to \rho_i) = P_{\rho_i} \setminus P_{\rho_{i-1}}$ be the nodes that need to be added to $P_{\rho_{i-1}}$ in order to reach the node at which $\rho_i$ was issued. Since $\rho_i$ triggered a service of ALG, we know that $\rho_i$ was not satisfied by previous services of ALG. Then, we know that the nodes in $P(\rho_{i-1} \to \rho_i)$ were not added to the service. By definition of ALG, this implies that $c(\rho_{i-1} \to \rho_i) > c(P_{\rho_{i-1}})$. Thus, $$c(P_{\rho_i}) = c(P_{\rho_{i-1}}) + c(\rho_{i-1} \to \rho_i) > 2c(P_{\rho_{i-1}})$$
In other words, the intervals of the requests are at least doubling each time a request is not satisfied. This implies that $c(P_{\rho_{i-1}}) < \frac{1}{2}c(P_{\rho_i})$ holds for all $i > 1$. As $c(P_{\rho_k}) \leq 2^{-(k-k)}c(P_{\rho_k})$, an easy induction shows that $c(P_{\rho_i}) \leq 2^{-(k-i)}c(P_{\rho_k})$ holds for all $i$. The definition of ALG then implies that $c(S_i) \leq 2c(P_{\rho_i}) \leq 2^{-(k-i)+1}c(P_{\rho_k})$.
Suppose $\gamma$ is the earliest request due at the node $v$ that is the furthest node from the root contained in $O$. As $\rho_k$ was satisfied by $O$ by assumption, and $t \leq t_k$, it must be the case that the request $\gamma$ arrived before the deadline of $\rho_k$ and so had been issued at the time of $S_k$'s construction. Then, we know that $\gamma$ was not satisfied by $S_k$, since otherwise $v$ being the last node in $O$'s interval would imply $c(S_k) \geq c(O)$. Thus, the same arguments from before imply that $c(P_{\rho_k}) < \frac{1}{2} c(P_{\gamma}) = \frac{1}{2}c(O)$. Thus, for all $i$ we have that
$$c(S_i) \leq 2^{-(k-i)+1}c(P_{\rho_k}) < 2^{-(k-i)+1}(\frac{1}{2}c(O)) = 2^{-(k-i)}c(O)$$
The total cost of these intervals ALG transmitted is
\begin{align*}
\sum_{i = 1}^k c(S_i) = \sum_{i = 1}^k c(S_{k-i+1})
= \sum_{i = 0}^{k-1} c(S_{k-i})
< \sum_{i = 0}^{k-1} 2^{-i} c(O)
= (2 - 2^{1-k})c(O)
\end{align*}
\end{proof}
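The geometric sum in the final step can be verified numerically; the instance below is hypothetical, taking each $c(S_i)$ at the ceiling $2^{-(k-i)}c(O)$ allowed by the proof, in which case the total equals $(2 - 2^{1-k})\,c(O)$ exactly:

```python
# Numeric sanity check of the geometric bound: if c(S_i) <= 2^{-(k-i)} c(O)
# for i = 1..k, the total cost is at most (2 - 2^{1-k}) c(O).
# The values of k and c(O) below are made up for illustration.
k, cO = 6, 100.0
S = [2.0 ** (-(k - i)) * cO for i in range(1, k + 1)]   # worst-case interval costs
assert abs(sum(S) - (2 - 2.0 ** (1 - k)) * cO) < 1e-9   # the sum attains the bound
```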
Next, we show the improved competitive ratio for \textsc{Double}.
\begin{proof}
We simply improve the upper bound on $k$ from $D$ to $\frac{D}{2}-1$ to achieve the claimed competitiveness. Notice that the worst charge to a service of OPT happens when a larger service and smaller services of ALG both charge to it. In this case, to achieve a charge of $2$ from the larger service, at least two nodes must not be included by the last small service (one that is due and another that is added too early). So, we know that $k \leq D - 2$. Now, in one situation every request could be at its own node with each node having very large cost, so that $k = D-2$. However, this will not achieve the largest charge. In particular, this means $c(S_i) = c(\rho_i)$ for each $i$, and so instead of $c(S_{k-i}) \leq 2^{-i} c(O)$ we have $c(S_{k-i}) \leq 2^{-(i+1)} c(O)$, which overall yields a decrease of at least $1$ in the charge.
In order for $c(S_k)$ to be very close to $c(O)$, which happens in the worst case, we need at least one additional node between the node at which $\rho_k$ is issued and the last node served by $O$. This pushes the total cost of $S_k$ from $\frac{1}{2} c(O)$ closer to $c(O)$, in exchange for one fewer interval than $D-2$ being transmitted in this range. However, the overall increase in cost is positive. In particular, even if we added infinitely many intervals, the total cost would be $\sum_{j = 1}^\infty 2^{-j} = 1$, whereas just losing one later interval also pushes the cost to $1$, so there is no improvement in choosing more intervals over a longer initial interval. In fact, if the path is finite, the longer interval is a strict improvement. Similarly, for any request $\rho_{k-i}$ where we would want to exchange a longer service $S_{k-i}$ for fewer total intervals, this is an improvement, as $\sum_{j = i}^\infty 2^{-j} = 2^{1-i}$, which is exactly the largest we could make $S_{k-i}$ by sacrificing one smaller interval. Hence, the worst case occurs when each smaller interval is as large as possible, which requires at least one node to exist between any two request nodes. Thus, we remove half of the nodes from consideration, giving $k \leq \frac{D}{2} - 1$. Hence, ALG is $4 - 2^{1 - \frac{D}{2}}$-competitive.
\end{proof}
We note that the analysis above actually depends not only on $k$, but the minimal difference, $\epsilon$, between intervals charged to a longer service. The most accurate competitive ratio that could be achieved by the above analysis is likely closer to $4 - \epsilon k 2^{1-k}$ by taking the minimal differences into account appropriately. This indicates an even better competitive ratio may be possible for path instances.
\end{document} |
\begin{document}
\title{ON THE MULTIPLICITY OF THE HYPERELLIPTIC INTEGRALS}
\author{Claire MOURA}
\address{Laboratoire de Math\'ematiques E. Picard,
UFR MIG, Universit\'e Paul Sabatier,
118 route de Narbonne,
31062 TOULOUSE cedex 4, FRANCE}
\email{[email protected]}
\begin{abstract}
Let $I(t)= \oint_{\delta(t)} \omega$ be an Abelian integral,
where $H=y^2-x^{n+1}+P(x)$ is a hyperelliptic polynomial of Morse type,
$\delta(t)$ a horizontal family of cycles in the curves $\{H=t\}$, and
$\omega$ a polynomial 1-form in the variables $x$ and $y$. We provide an
upper bound on the multiplicity of $I(t)$, away from the critical values of
$H$. Namely: $ord\ I(t) \leq n-1+\frac{n(n-1)}{2}$ if $\deg \omega <\deg
H=n+1$.
The reasoning goes as follows: we consider the analytic curve $\gamma(t)$
parameterized by the integrals along $\delta(t)$ of the $n$ ``Petrov''
forms of $H$ (polynomial 1-forms that freely generate the module of
relative cohomology of $H$), and interpret the multiplicity of $I(t)$ as the
order of contact of $\gamma(t)$ and a linear hyperplane of $\textbf C^n$.
Using the Picard-Fuchs system satisfied by $\gamma(t)$, we establish an
algebraic identity involving the wronskian determinant of the integrals of the
original form $\omega$ along a basis of the homology of the generic fiber of
$H$. The latter wronskian is analyzed through this identity, which yields
the estimate on the multiplicity of $I(t)$. Still, in some
cases, related to the geometry at infinity of the curves $\{H=t\} \subseteq
\textbf C^2$, the wronskian turns out to be identically zero. In this case
we show how to adapt the argument to a system of smaller rank, and get a
nontrivial wronskian.
For a form $\omega$ of arbitrary degree, we are led to estimating the order of
contact between $\gamma(t)$ and a suitable algebraic hypersurface in $\textbf
C^{n+1}$. We observe that $ord\ I(t)$ grows like an affine function with
respect to $\deg \omega$.
\end{abstract}
\maketitle
\section{Introduction}
Consider a complex bivariate polynomial $H(x,y) \in \textbf C[x,y]$. It is
well known that the polynomial mapping $H:\ \textbf C^2 \rightarrow \textbf C$
defines a locally trivial differentiable fibration over the complement of a finite subset of
$\textbf C$ (cf \cite{D}). Under some restrictions on the principal part of $H$, this set reduces
to $\mbox{crit}(H)$, the set of critical values of $H$ (see \cite{Br}, \cite{Gav1}). One can then consider the homology bundle
$\cup_t H_1(\{H=t\}, \textbf Z)\rightarrow \textbf C\ \backslash\ \mbox{crit}(H)$, equipped with the
Gauss-Manin connection. Take a class $\delta(t)$ in the homology group
$H_1(\{H=t\}, \textbf Z)$ of a generic fiber of $H$. As the base of the
homology bundle is 1-dimensional, the connection is flat, and the parallel
transport of $\delta$ depends only on the homotopy class of the path in
$\textbf C \ \backslash \ \mbox{crit}(H)$. The transport of the homology
class $\delta(t)$ along a loop that
encircles a critical value of $H$ results in a nontrivial outcome, due to the
action of the monodromy on $\delta$. Let $\omega$ be a
polynomial 1-form in the variables $x$ and $y$. Its restriction on any
fiber of $H$ is a closed form, therefore the integral of $\omega$ on a cycle
lying in a regular level curve of $H$ depends only on the homology class of this
cycle. Consider the complete Abelian integral $I(t)=\oint_{\delta(t)} \omega$. This function admits
an analytic extension in the complement of $\mbox{crit}(H)$.
We refer to \cite{AGV}, \cite{Yak1} for a detailed survey.
Let $t_0 \in \textbf C$ be a
regular value of $H$. One can ask for an estimate on the multiplicity of
$I(t)$ at $t_0$. The result is expected to depend on two parameters, namely
the degree of $H$ and the degree of
$\omega$. The number of parameters can be reduced by looking first at the case
$\deg \omega< \deg H$.
This case is the most interesting regarding the connection
with the infinitesimal Hilbert 16th problem, that takes place in the real
setting. Assume $H$ has real coefficients. Consider the Hamiltonian
distribution $dH$, and the one-parameter perturbation $dH+ \epsilon \omega$ by an
arbitrary real polynomial 1-form $\omega$. The first order term in the Taylor
expansion at $\epsilon=0$ of the corresponding displacement function $d(t, \epsilon)$
is an integral $I(t)$ of $\omega$ along
an oval in a level curve of $H$. Under the assumption $\deg \omega< \deg H$,
it is proved by Yu. Ilyashenko (cf with \cite
{Il2}) that: $I(t) \equiv 0$ if and only if $\omega$ is \textit{exact}, so that the
perturbation is still a Hamiltonian distribution. In the case when $I(t)
\not\equiv 0$, the multiplicity of $I(t)$ at a point $t_0$ provides an upper bound for the
cyclicity of the displacement function $d(t, \epsilon)$ at $(t_0, 0)$. Thus,
the vanishing of the Abelian integral is relevant to the number of limit cycles born by small perturbation
of the Hamiltonian distribution $dH$.
For polynomials $H$ with generic principal part ($H$ regular at infinity), an
answer about the order of $I(t)$ is given by P. Mardesic in \cite{Mar}: a step towards the multiplicity of the
Abelian integrals consists in measuring the multiplicity of their Wronskian
determinant, which is a globally univalued function on $\textbf {CP}^1$, hence rational, with poles at the critical values of $H$ and possibly at infinity.
We focus on the Abelian integrals performed on level curves of
\textit{hyperelliptic} polynomials. We establish a relation between the
Wronskian and a polynomial that we build up from the Picard-Fuchs system.
As the Picard-Fuchs system reflects the topology of the level sets of $H$,
our approach depends on whether $\deg H$ is even
or odd; still, our estimate on the multiplicity of $I(t)$ is always quadratic with respect to $\deg H$. We finally show that, for a fixed hyperelliptic Hamiltonian, the growth of the multiplicity of $I(t)$ is
linear with respect to $\deg \omega$.
\section{Preliminary observations}
We begin by recalling a result about flatness of solutions of a linear
differential system. Consider a system $dx= \Omega x$ of order $n$, whose
coefficient matrix $\Omega$ is meromorphic on $\textbf{CP}^1$. Denote by
$t_1, \ldots, t_s$ the poles of $\Omega$. Fix a point $t_0$, distinct from the
poles, and consider a solution $\gamma(t) \in \textbf C^n$, analytic in a
neighbourhood of $t_0$. Take a linear hyperplane $\{h=
\sum_{i=1}^{n}c_ix_i=0\} \subseteq \textbf C^n$. If this hyperplane does not
contain the solution $\gamma$, then (cf \cite{Mou}):
\begin{thm}
$$ord_{t=t_0}(h \circ \gamma)(t) \leq n-1 + \frac{n(n-1)}{2}
\left(\sum_{i=1}^{s}(-ord_{t_i}\Omega)-2 \right)$$ where $ord_{t_i}\Omega$ is
the minimum order of the pole $t_i$ over the entries of $\Omega$.
\end{thm}
We give here a simplified outline of the proof: write the system in the affine
chart $t$ in the form
$\dot x=\frac{A(t)}{P(t)} x$, for a polynomial matrix $A$ and a scalar polynomial $P$. Replace
the derivation $\frac{\partial}{\partial t}$ by $D=P(t)
\frac{\partial}{\partial t}$. Then the curve $\gamma$ satisfies: $D \gamma=A \gamma$. Due to the linearity, we can write
$y(t)= (h \circ \gamma)(t)$ as the product of the row matrix $q_0=(c_1, \ldots, c_n)$
by the column matrix $\gamma$: $y(t)= q_0 \cdot \gamma(t)$. The successive derivatives of $y$ with
respect to $D$ can be written in a similar way: $D^ky(t)= q_k(t) \cdot \gamma(t)$,
where the row vectors $q_k$ have polynomial coefficients and are constructed
inductively by: $q_{k+1}=Dq_k+q_kA$. We observe that the sequence of $\textbf C(t)$-vector
spaces $V_k \subseteq \textbf C(t)^n$ spanned by the
vectors $q_0,q_1, \dots, q_k$ is strictly increasing (before stabilizing); hence we may extract from the matrix $\Sigma$ with rows $q_0, \ldots, q_{n-1}$ a nondegenerate minor $\Delta$ of rank $l \leq n$, such that any vector $q_k$ decomposes according to the Cramer rule:
$$q_k(t)= \sum_{i=0}^{l-1} \frac{p_{ik}(t)}{\Delta(t)} q_i(t),\quad k\geq l$$ with \textit{polynomial} coefficients $p_{ik}(t)$.
This shows that the function $y$ is a solution of an infinite sequence of
linear differential equations of the form:
\begin{equation}
\Delta \cdot D^ky=\sum_{i=0}^{l-1}p_{ik}(t)D^iy,\quad k\geq l
\end{equation}
Then, by deriving in an appropriate way each of these relations, one arrives
at the key-assertion:
$$ord_{t_0}y \leq l-1+ ord_{t_0} \Delta$$
Thus, the flatness of a particular solution is correlated to the multiplicity of a polynomial constructed from the system.
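The inductive construction $q_{k+1}=Dq_k+q_kA$ can be illustrated in a toy constant-coefficient case. This is our own example, not from the text: we take $n=2$, a constant matrix $A$, and $P(t)=1$, so that $Dq_k=0$ and the recursion reduces to $q_{k+1}=q_kA$; here $q_0, q_1$ already span the space, so $l=2$ and the Cramer decomposition of each later $q_k$ is immediate.

```python
def matvec_row(q, A):   # row vector times matrix: (qA)_j = sum_i q_i A[i][j]
    return tuple(sum(q[i] * A[i][j] for i in range(2)) for j in range(2))

A = [[0.0, 1.0], [-2.0, -3.0]]   # made-up constant coefficient matrix
q = [(1.0, 0.0)]                 # q_0 = (c_1, c_2) from the hyperplane h = x_1
for _ in range(4):
    q.append(matvec_row(q[-1], A))   # q_{k+1} = q_k A  (since D q_k = 0 here)

# q_0, q_1 = (1,0), (0,1) span C^2, so every later q_k decomposes through them
for k in range(2, 5):
    a, b = q[k]                  # coefficients in q_k = a*q_0 + b*q_1
    assert q[k] == (a * q[0][0] + b * q[1][0], a * q[0][1] + b * q[1][1])
```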
One can derive an analytic version of this assertion by complementing the solution $\gamma$ by $n-1$ vector-solutions $\Gamma_2, \ldots, \Gamma_n$, so as to obtain a
fundamental matrix in a simply
connected domain around $t_0$. In particular: $\det(\gamma, \Gamma_2,\ldots, \Gamma_n)(t)$ does not vanish in this domain.
We restrict these solutions to the hyperplane $\{h=0\}$ and
set: $y_1=y=(h \circ \gamma)$, $y_i=(h \circ \Gamma_i), \ i=2,
\ldots,n$. Let $l \leq n$ be the maximum number of
independent functions among $y_1, \ldots, y_n$.
Their Wronskian determinant $W=W(y_1, \ldots, y_l)$ is analytic and does not
vanish identically around $t_0$. Expanding $W$ with respect to any of its
columns, it follows that:
$$ord_{t_0}W \geq \min_{k=0, \ldots, l-1}
\{ord_{t_0}y_i^{(k)}+ord_{t_0} D_k\}$$ where $D_k$ is the minor corresponding to the
element $y_i^{(k)}$. Hence: $ord_{t_0}W \geq ord_{t_0}y_i-(n-1)$, for any $i=1,
\ldots, n$. So:
$$ord_{t_0}(h\circ \gamma) \leq n-1 + ord_{t_0}W$$
Naturally, the order of vanishing of $W$ does not depend on the particular
choice of fundamental system. Besides, one arrives at the same conclusion by
forming the Wronskian determinant $W_D(y_1, \ldots, y_l)$ with respect to the
derivation $D=P(t) \frac{\partial}{\partial t}$, since $W_D=P^{l(l-1)/2}\cdot
W$, and $D$ is not singular at $t_0$ (meaning that
$P(t_0) \neq 0$).
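The scaling identity $W_D = P^{l(l-1)/2}\,W$ can be checked directly in a small case; the choices $y_1=t^2$, $y_2=t^3$, $P(t)=t$ below are our own toy example (so $l=2$ and $P^{l(l-1)/2}=t$):

```python
# Check of W_D = P^{l(l-1)/2} * W for l = 2, with the hypothetical choices
# y1 = t^2, y2 = t^3 and P(t) = t (so D = t * d/dt).
def W(t):          # usual Wronskian: y1*y2' - y2*y1' = 3t^4 - 2t^4 = t^4
    return t**2 * 3*t**2 - t**3 * 2*t

def WD(t):         # Wronskian taken with D: y1*(t*y2') - y2*(t*y1') = t^5
    return t**2 * (t * 3*t**2) - t**3 * (t * 2*t)

for t in (0.5, 1.0, 2.0, 3.0):
    assert abs(WD(t) - t * W(t)) < 1e-9   # here P(t)^{l(l-1)/2} = t
```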
Note that, like in the algebraic situation, one can interpret $W_D$ as the principal coefficient of a linear differential equation
of order $l\leq n$ satisfied by $y_1, \ldots, y_l$:
\begin{equation}
W_D (y_1, \ldots, y_l)D^ly+a_{l-1}(t) D^{l-1}y+ \ldots + a_0(t)y=0
\end{equation}
with coefficients $a_i(t)$ analytic in a neighbourhood of ${t_0}$. The method
leading to such an equation is standard.
For any linear combination $y$ of $y_1, \ldots, y_l$, the following Wronskian
determinant of size $l+1$ is zero identically:
$$\begin{array}{|cccc|}
y_1 & \ldots & y_l & y\\
Dy_1 & \ldots & Dy_l & Dy\\
\vdots & \vdots & \vdots & \vdots \\
D^{l-1}y_1 & \ldots & D^{l-1} y_l& D^{l-1} y\\
D^{l}y_1 & \ldots & D^{l}y_l & D^{l}y
\end{array}$$
Expanding this determinant with respect to its last column gives the
equation. This is the analytic analogue of the $l$th order equation in the
sequence (1). Both of them admit the same solutions.
Suppose that $\Sigma$ is a nondegenerate matrix, that is, its determinant
$\Delta(t)$ is not the null polynomial. Let $\mathcal P$ be a fundamental matrix
of solutions of the system $Dx=Ax$.
From the construction, we obtain immediately the following matrix relation:
\begin{lem}
$\Sigma \cdot \mathcal P = \mathcal W_D(y_1, \ldots, y_n)$, where $\mathcal
W_D(y_1, \ldots, y_n)$ is the Wronski matrix of $y_1, \ldots, y_n$, computed with the derivation $D$.
\end{lem}
\begin{rk}
The matrix $\Sigma$ defines a meromorphic gauge equivalence between the
original system and the companion system of the equation (2).
\end{rk}
This yields the relation between determinants:
\begin{equation}
\det \Sigma \cdot \det \mathcal P= W_D(y_1, \ldots, y_n)= P(t)^
{\frac{n(n-1)}{2}} \cdot W(y_1, \ldots, y_n)
\end{equation}
$W(y_1, \ldots, y_n)$ being
the usual Wronskian $W_{\frac{\partial}{\partial t}}(y_1, \ldots, y_n)$.
Now, at the non-singular point $t_0$ of the system, both $P$ and
$\det\mathcal P$ are nonzero, so that the order at $t_0$ of the analytic function $W(y_1, \ldots, y_n)$ is
exactly the order at $t_0$ of the polynomial determinant $ \det \Sigma$.
\section{Multiplicity of the integrals}
\subsection{Petrov forms and Picard-Fuchs system}
Consider a bivariate hyperelliptic polynomial $H \in \textbf C [x,y]$, $H=y^2-x^{n+1}+ \overline H(x)$, with $\deg \overline H=n-1$.
Hyperelliptic polynomials are examples of semi quasi-homogeneous
polynomials. Recall that a polynomial $H$ is said to be semi quasi-homogeneous if the
following holds: the variables $x$ and $y$ being endowed with weights $w_x$
and $w_y$ (so that a monomial $x^\alpha
y^\beta$ has weighted degree $\alpha w_x+\beta w_y$), $H$ decomposes as a sum
$H^*+ \overline H$, and the highest weighted-degree part $H^*$ possesses an
isolated singularity at the origin. Moreover, a semi quasi-homogeneous
polynomial has only isolated singularities.
In the sequel, the notation ``$\deg$'' will
stand for the weighted degree. For $H$ hyperelliptic, $\deg H=n+1$, with
$w_x=1$, $w_y=(n+1)/2$. The weighted degree
extends to polynomial 1-forms: for $\omega=P(x,y) dx+Q(x,y)dy$, $\deg \omega$ is the maximum $\max(\deg P+w_x, \deg Q+w_y)$.
The symbol $\Lambda^k$ will designate the $\textbf C [x,y]$-module of
polynomial k-forms on $\textbf C^2$.
Consider the quotient
$\mathcal P_H= \frac{\Lambda^1}{\Lambda^0dH+d\Lambda^0}$. It is a module over the ring $\textbf C[t]$ of polynomials in one indeterminate, $t$ acting as multiplication by $H$. Note that the integral of a
1-form in $\Lambda^1$ depends only on its class in $\mathcal P_H$. In
addition, working in the Petrov module of $H$ makes it possible to exhibit a finite
number of privileged 1-forms, that we will call the Petrov forms: indeed, $\mathcal P_H$ is freely generated by the monomial 1-forms $\omega_1=ydx$, $\omega_2=xydx, \ldots$, $\omega_n=x^{n-1}ydx$.
Moreover, the class of any 1-form in $\mathcal P_H$
decomposes as a sum: $p_1(t) \omega_1+ \ldots+ p_n(t) \omega_n$, with the following estimates on the degrees of the polynomials $p_i$:
$$\deg p_i \leq \frac{\deg \omega - \deg
\omega_i}{\deg H}$$
These assertions belong to a general theorem due to L. Gavrilov (\cite{Gav2}),
where the Petrov module of any semi quasi-homogeneous polynomial $H$ is
described. The number of Petrov forms is the global Milnor number of $H$.
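As an illustrative aside (not in the original text), the count of $n$ Petrov forms agrees with the global Milnor number computed from the principal part $H^*=y^2-x^{n+1}$: the Jacobian ideal of $H^*$ is $(x^n, y)$, so $\{1, x, \ldots, x^{n-1}\}$ is a basis of the local algebra and $\mu=n$. A minimal stdlib-Python check of the quasi-homogeneous formula $\mu=\prod_i (d/w_i-1)$, with $d=n+1$, $w_x=1$, $w_y=(n+1)/2$:

```python
# Check (illustrative, not part of the paper): the quasi-homogeneous
# formula mu = prod(d/w_i - 1) for the principal part H* = y^2 - x^{n+1},
# with weighted degree d = n+1 and weights w_x = 1, w_y = (n+1)/2,
# returns mu = n, the number of Petrov forms.
from fractions import Fraction as F

def milnor_number(n):
    d, wx, wy = F(n + 1), F(1), F(n + 1, 2)
    return (d / wx - 1) * (d / wy - 1)

for n in range(2, 30):
    assert milnor_number(n) == n
```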
Consider a hyperelliptic integral $\oint_{\delta(t)} \omega$, in a
neighbourhood of a regular value $t_0$ of the Hamiltonian $H$.
It becomes natural to consider the germ of analytic curve $\gamma(t)=\left(\oint_{\delta} \omega_1, \oint_{\delta} \omega_2,
\ldots, \oint_{\delta} \omega_n \right)$ parameterized by the integrals of the Petrov
forms. We shall start with forms of small degree, that is $\deg \omega \leq n$, and study the behaviour of the multiplicity of the integral with
respect to $n$. This restriction on the degree implies that $\omega$ is a linear
combination of the Petrov forms, with \textit{constant} coefficients:
$\omega=\sum_{i=1}^n c_i \omega_i$, $c_i \in \bf C$. Therefore, the question
amounts to estimating the order of contact at the point $t=t_0$ of the curve
$\gamma(t)$ and of the linear hyperplane
$\{\sum_{i=0}^{n}c_i x_i=0\}$ whose coefficients
are prescribed by the decomposition of $\omega$.
In order to apply the argument presented in Section 2, we have to interpret
$\gamma$ as a solution of a linear differential system. We recall the
procedure described by S. Yakovenko in \cite[Lecture 2]{Yak1}.
For any $i=1, \ldots, n$, divide the 2-forms $Hd\omega_i$ by $dH$. Then apply
the Gelfand-Leray formula and decompose the Gelfand-Leray residue in the
Petrov module of $H$. For a hyperelliptic Hamiltonian, it is clear that the $\textbf C$-vector space of relative 2-forms $\frac{\Lambda^2}{dH \wedge \Lambda^1}$
is spanned by the differentials of the Petrov forms $d\omega_1, \ldots, d\omega_n$. Whence the decomposition:
\begin{equation}
H \cdot d\omega_i= dH \wedge \eta_i+\sum_{j=1}^{n}a_{ij}d\omega_j, \
a_{ij}\in \bf C,\ \eta_i \in \Lambda^1.
\end{equation}
Now, by the Gelfand-Leray formula,
\begin{equation}
t\frac{d}{dt}\oint_\delta
\omega_i-\sum_{j=1}^{n} a_{ij}\frac{d}{dt} \oint_\delta \omega_j= \oint_\delta
\eta_i
\end{equation}
and this relation does not depend on the cycle of integration.
In order to obtain the system, one decomposes the residues $\eta_i$ in $\mathcal
P_H$. So, an estimate on the degree of $\eta_i$ is required, which can be
quite cumbersome when starting from a general Hamiltonian. Yet, in the
hyperelliptic case, (4) is completely explicit (cf.\ \cite{NY}), and one
sees immediately that: for any $i$, $\eta_i= \sum_{j=1}^{n}b_{ij} \omega_j$,
$b_{ij} \in \textbf C$.
Then (5) appears as the expanded form of the linear system:
$$(tE-A) \dot x = B x$$
where $A=\left ( a_{ij} \right )$ and $B=\left ( b_{ij} \right )$ are constant matrices, and $E$ is
the $(n\times n)$ identity matrix.
We can write it as well as a system with rational
coefficients: $\dot x=\frac{C(t)}{P(t)} x$, where the polynomial matrix $C(t)$, obtained as $C(t)=\mbox{Ad}(tE-A) \cdot B$, has degree $n-1$, and the scalar polynomial $P(t)= \det (tE-A)$ has degree $n$. The
polynomial $P$ can be made explicit: evaluating the relation (4) at a critical
point $(x_*,y_*)$ of $H$ shows that the corresponding critical value $t_*=H
(x_*,y_*)$ is an eigenvalue of $A$.
If the critical values of $H$ are assumed pairwise distinct, then:
$P(t)= (t-t_1) \ldots (t-t_n)$. Thus,
the singular points of the system are the critical values of $H$ and the point
at infinity. All of them are Fuchsian.
We can apply Theorem 1 and get the following estimate on the multiplicity at a
zero of the integral: $ord_{t_0} \oint_\delta \omega \leq n-1+
\frac{n(n-1)^2}{2}$. We are going to show how to improve this bound, applying
(3).
\subsection{Main result}
We now formulate the theorem. We impose an additional requirement on the
hyperelliptic Hamiltonian: $H$ has to be of Morse type, that is, with nondegenerate
critical points as well as distinct critical values.
\begin{thm}
Let $H$ be a hyperelliptic polynomial $H=y^2-x^{n+1}+ \overline H(x)$ of Morse type, where $\overline H$ is a polynomial of degree $n-1$. Let $\{\omega_1, \ldots,
\omega_n\}$ be the set of monomial Petrov forms associated to $H$, and let
$\omega= \sum_{i=1}^{n}c_i \omega_i$, $c_i \in \textbf C$ be an arbitrary
linear
combination. Consider the Abelian integral of $\omega$ along a horizontal
section $\delta(t)$ of the homology bundle.
Let $t_0$ be a regular value of $H$. If
$\oint_{\delta(t)} \omega \not \equiv 0$, then:
$$ord_{t_0}\left( \oint_{\delta(t)} \omega \right)\leq n-1+\frac{n(n-1)}{2}$$
\end{thm}
The rest of this section is devoted to the proof of Theorem 2.
Recall that a fundamental matrix
of the Picard-Fuchs system is obtained by integrating some $n$ suitable polynomial 1-forms $\Omega_1, \ldots, \Omega_n$ over any basis of the homology
groups $H_1(\{H=t\}, \textbf Z)$:
$$\mathcal P=
\left (
\begin{array}{ccc}
\oint_{\delta_1} \Omega_1 & \cdots & \oint_{\delta_n} \Omega_1\\
\vdots & \vdots & \vdots \\
\oint_{\delta_1}\Omega_n & \cdots & \oint_{\delta_n} \Omega_n
\end{array}
\right )$$
As explained in \cite[Lecture 2]{Yak1}, the determinant of a fundamental
system of solutions has to be a polynomial, divisible by $(t-t_1) \ldots
(t-t_n)$. Its actual degree depends on the choice of the integrands.
It is shown by L. Gavrilov in \cite{Gav2} and D. Novikov in \cite{Nov} that
one can plug the Petrov forms into the period matrix $\mathcal P$ and get
$\det \mathcal P= c \cdot (t-t_1) \ldots (t-t_n)=P(t)$, with a
\textit{nonzero}
constant $c$.
Next, we form the vectors $q_0, \ldots, q_{n-1}$, $q_0=(c_1, \ldots, c_n)$,
$q_{k+1}=Dq_k+q_k C(t)$, $D=P(t) \frac{\partial}{\partial t}$, and collect
them in a matrix $\Sigma$. Lemma 1 says:
$$\Sigma \cdot \mathcal P= \mathcal W_D \left(\oint_{\delta_1} \omega, \ldots,
\oint_{\delta_n} \omega \right)$$
which gives:
\begin{equation}
\Delta \cdot (t-t_1) \ldots (t-t_n)= P^{n(n-1)/2} \cdot W \left(\oint_{\delta_1} \omega,
\ldots,\oint_{\delta_n} \omega\right)
\end{equation}
The disadvantage of this formula is that it breaks down under possible degeneracy of the
matrix $\Sigma$: the determinant $\Delta= \det \Sigma$ is a polynomial in the variable $t$, whose coefficients are homogeneous polynomials with respect to the components of $q_0$.
$$ \Delta_{q_0}(t)=P_0(q_0)+P_1(q_0) \cdot t+ \ldots +P_D(q_0) \cdot t^D$$
where $D$ is the maximum possible degree for $\Delta$, achieved for generic $q_0$.
One cannot a priori guarantee that the algebraic subset
$$S=\{q_0 \in \textbf C^n:\ P_i(q_0)=0\ , i=0, \ldots,D\}$$ is reduced to zero.
We are now going to analyze on what conditions the equality (6) makes
sense. It involves
the geometry at infinity of the hyperelliptic affine curves
$\{H=t\} \subseteq \textbf C^2$. Suppose that for the form $\omega=\sum_{i=1}^{n} c_i \omega_i$, the Wronskian $W(\oint_{\delta_1} \omega,
\ldots,\oint_{\delta_n} \omega)$ vanishes identically as a function of
$t$. This means that one can find a cycle $\sigma$, a complex linear combination of $\delta_1, \ldots, \delta_n$, such that the integral $\oint_\sigma \omega$ is identically zero.
\begin{lem}
If an Abelian integral $\oint_{\sigma(t)} \omega$ vanishes identically, then
$\sigma(t)$ belongs to
the kernel of the intersection form on $H_1(\{H=t\}, \textbf C)$.
\end{lem}
\begin{proof}
Assume on the contrary, that $\sigma$ has a nonzero intersection number with a
cycle from $H_1(\{H=t\}, \textbf C)$, while $\oint_\sigma \omega$ vanishes identically.
The semi quasi-homogeneity property implies that $H$ defines a trivial
fibration at infinity, and the homology of a
regular fiber can be generated by a basis of $n$ vanishing cycles $v_i$ ($v_i$ contracts to a point when $t$ approaches $t_i$). Necessarily, $\sigma$ intersects one of the vanishing cycles, say $v_1$: $(\sigma, v_1) \neq 0$.
Continue analytically the integral $\oint_\sigma \omega$ along a loop around $t_1$: the Picard-Lefschetz formula states that the monodromy changes $\sigma$ into $\sigma - (\sigma, v_1) v_1$. On the level of the integral: $0=0- (\sigma, v_1) \oint_{v_1} \omega$, hence $\oint_{v_1} \omega$ is zero.
Moreover, the assumptions of hyperellipticity and Morse type imply that one can produce a
basis $\{v_1, \ldots, v_n\}$ of vanishing cycles in which any two consecutive
cycles intersect: $(v_i,v_{i+1})=\pm 1$ (cf.\ \cite{Il1}).
One proceeds inductively with the rest of the critical values $t_2, \ldots, t_n$:
every integral of $\omega$ along $v_2, \ldots, v_n$ vanishes identically as well. This means that the \textit{restriction} of $\omega$ to any generic fiber of $H$ is exact.
We can now apply L. Gavrilov's result \cite[Theorem 1.2]{Gav2} and deduce the global statement: the form $\omega$ has to be
\textit{exact}, hence
zero in the Petrov module. But this is clearly impossible since $\omega$ is a
combination of $\textbf C[t]$-independent forms.
\end{proof}
Consequently, if the integral of $\omega$ vanishes along $\sigma$, this means
that $\sigma$ becomes homologous to zero when the affine fiber is embedded in its normalization.
That is, $\sigma$ lies in the kernel of the morphism $i_*$: $H_1(\Gamma, \textbf Z) \rightarrow
H_1(\overline \Gamma, \textbf Z)$, denoting the affine curve $\{H=t\}
\subseteq \textbf C^2$ by $\Gamma$ and its normalized curve in $\textbf {CP}^2$ by
$\overline \Gamma$.
\begin{lem}
If $n$ is even, then $S$ is reduced to $\{0\}$.
\end{lem}
\begin{proof}
The projection
$\Pi:$ $\overline \Gamma \rightarrow \textbf{CP}^1$, $(x,y) \mapsto x$ is a
double ramified covering of $\textbf{CP}^1$.
From the affine equation $y^2=\prod_{i=1}^{n+1}(x-x_i(t))$, one finds $n+1$ ramification points of $\Pi$ in the complex plane, hence the total number of ramification points of $\Pi$ is $n+1$ or $n+2$, according to whether the point at infinity is a ramification point. On the other hand, using
the Riemann-Hurwitz formula, the number of ramification points of $\Pi$ is
$2g_{\overline \Gamma}+2$. This shows that the genus $g_{\overline
\Gamma}$ is equal to $[n/2]$.
Therefore, if $n$ is even, the homology group of $\overline \Gamma$ has rank
$2g_{\overline
\Gamma}=n$, and $i_*$ is an isomorphism. In this case, there is a single point at
infinity on $\overline \Gamma$ above $\infty \in \textbf {CP}^1$ and $\sigma$
is zero in the homology of the \textit{affine} level curve $\{H=t\}$. This
means that no relation can occur between the integrals $\oint_{\delta_1}
\omega, \ldots, \oint_{\delta_n} \omega$, unless $\omega$ is zero.
\end{proof}
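The genus computation in the proof above is elementary and can be checked mechanically. The following stdlib-Python sketch (an illustration, not from the paper) counts the branch points of the double cover $\Pi$ and applies the Riemann-Hurwitz relation $\#\{\text{branch points}\}=2g+2$, confirming $g_{\overline\Gamma}=[n/2]$:

```python
# Check (illustrative): the double cover Pi: (x, y) -> x of the
# normalized curve y^2 = prod_{i=1}^{n+1}(x - x_i) has n+1 affine
# branch points, plus one more at infinity exactly when n+1 is odd,
# so the total count is always even.
def genus_of_hyperelliptic(n):
    branch = (n + 1) + (1 if (n + 1) % 2 == 1 else 0)
    # Riemann-Hurwitz for a degree-2 cover of CP^1: branch = 2g + 2
    return (branch - 2) // 2

for n in range(2, 60):
    assert genus_of_hyperelliptic(n) == n // 2  # g = [n/2], as claimed
```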
For even $n$, we can carry out the analysis further. The Wronskian $W=
W \left(\oint_{\delta_1} \omega,
\ldots,\oint_{\delta_n} \omega\right)$ is a rational function, so the sum of
its orders at all points of $\textbf{CP}^1$ equals $0$. As a consequence, the order at any one of its zeros is controlled by the orders at its poles and at the point at infinity:
$$ord_{t_0}W \leq -ord_\infty W -\sum_{i=1}^n ord_{t_i} W$$
From (6), we get:
\begin{equation}
ord_\infty W = ord_\infty \Delta -n + \frac{n^2(n-1)}{2}
\end{equation}
One easily gets an upper bound on $\deg \Delta$: from the inductive construction, the degree of each component of a vector $q_k$ is no larger than $k(n-1)$; this yields $\deg \Delta \leq \frac{n(n-1)^2}{2}$. In the right
hand side of (7), the cubic terms
cancel each other out, so that we get the estimate:
$ord_\infty W \geq \frac{n^2-3n}{2}$. This shows in particular that $\infty$
is a zero of $W$.
As for the order of the $W$ on the finite singularities, we reproduce an
argument due to P. Mardesic (\cite{Mar}): the critical points of $H$ are Morse, which allows to
fix the Jordan structure of the monodromy matrix at $t_i$, by choosing an
adapted basis of cycles. This imposes the structure of the integrals in a
neighbourhood of $t_i$: they are all analytic at $t_i$, except one of them that undergoes
ramification. The pole of the Wronskian at $t_i$ may only result from the derivation of this
integral. The estimate follows automatically: $ord_{t_i} W \geq 2-n$. The contribution of the poles is $\sum_{i=1}^n ord_{t_i} W \geq 2n-n^2$. Therefore, we have obtained the following upper bound:
$$ord_{t_0}W \leq \sum_{t \in \textbf{CP}^1,\ t \neq \infty,\ t \neq t_i} ord_{t}W \leq \frac{n(n-1)}{2}$$
which proves Theorem 2 for even $n$.
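The order bookkeeping above is pure arithmetic and can be verified mechanically. The following stdlib-Python sketch (an illustration, not part of the proof) traces the chain: the bound $\deg\Delta \leq n(n-1)^2/2$ together with identity (7) gives $ord_\infty W \geq (n^2-3n)/2$, and subtracting the worst-case pole contribution $n(2-n)$ recovers the bound $n(n-1)/2$:

```python
from fractions import Fraction as F

def even_n_bound(n):
    # deg Delta <= n(n-1)^2/2, hence ord_inf Delta >= -n(n-1)^2/2
    ord_inf_delta = -F(n * (n - 1) ** 2, 2)
    # identity (7): ord_inf W = ord_inf Delta - n + n^2(n-1)/2
    ord_inf_w = ord_inf_delta - n + F(n * n * (n - 1), 2)
    assert ord_inf_w == F(n * n - 3 * n, 2)          # (n^2 - 3n)/2
    # finite poles: sum of orders >= n(2 - n) = 2n - n^2
    bound_at_zero = -ord_inf_w - n * (2 - n)
    assert bound_at_zero == F(n * (n - 1), 2)        # n(n-1)/2
    return bound_at_zero

for n in range(2, 40, 2):
    even_n_bound(n)
```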
We now return to the case of odd $n$.
The homology group $H_1(\overline \Gamma, \textbf Z)$ has rank
$2g_{\overline \Gamma}=n-1$, and $\ker\ i_*$ is generated by one cycle that we will call $\delta_\infty$.
The forms $\omega$ that annihilate the Wronskian $W$ are those with zero integral along the cycle $\delta_\infty$. In order to describe this subspace of forms, we prove a dependence relation among Abelian integrals:
\begin{lem}
The complex vector space generated by the residues of the Petrov forms at infinity has dimension 2, that is:
$$\dim_{\textbf C} \left( \oint_{\delta_\infty} \omega_1, \ldots, \oint_{\delta_\infty} \omega_n \right)=2$$
\end{lem}
\begin{proof}
In order to estimate the residues $\rho(\omega_i)$ of the forms $\omega_i=x^{i-1}ydx$, $i=1, \ldots, n$, at the point at infinity on the curve $y^2- x^{n+1}+ \overline H(x)-t=0$,
($[0:1:0] \in \textbf{CP}^2$), we pass to the chart $u=1/x$ and, expressing
$y$ as a function of $x$: $y=\pm (x^{n+1}- \overline H(x)+t)^{1/2}$, we get
meromorphic 1-forms at $u=0$:
$$\omega_i= (1/u)^{i-1} \cdot (1/u)^2 \cdot
(1/u)^{(n+1)/2} \cdot (1+R(u)+tu^{n+1})^{1/2} du$$ where $R$ is a
polynomial, $\deg R=n+1$, $R(0)=0$.
We have to compute the coefficient of $1/u$. The Taylor expansion of the
square root gives: for $i< \frac{n+1}{2}$, $\rho(\omega_i)$ is a constant with
respect to $t$, and for $i \geq \frac{n+1}{2}$, $\rho(\omega_i)$ is a
polynomial of degree 1. This proves that the space of residues has dimension 2
over $\textbf C$, and is generated by any pair $\{\rho(\omega_i),
\rho(\omega_{i+(n+1)/2})\}$, $i< \frac{n+1}{2}$.
\end{proof}
\begin{rk}
In the case of a Hamiltonian of even degree (that is, for odd $n$), we detect
solutions of the Picard-Fuchs system that are included in a
hyperplane. Indeed,
whenever a form $\omega=\sum_{i=1}^{n} c_i \omega_i$ has a zero residue at infinity,
then its coefficients define the equation of a hyperplane $\{h=\sum_{i=1}^{n}
c_i x_i=0\}$ that contains the integral curve
$\Gamma_1(t)=(\oint_{\delta_\infty}\omega_1, \ldots,
\oint_{\delta_\infty}\omega_n)$. This implies that the global monodromy of the
Picard-Fuchs
system is reducible: extend $\Gamma_1$ to a fundamental system by adjoining solutions $\Gamma_2, \ldots, \Gamma_n$. Then, the $\textbf
C$-space spanned by the solutions $\Gamma_i$ such that $h \circ \Gamma_i(t) \equiv 0$
is invariant by the monodromy (see \cite[Lemma 1.3.4]{Bo}).
\end{rk}
It follows from Lemma 4 that the set $S$ coincides with the set of relations
between the residues of the Petrov forms at infinity. It is thus a codimension 2 linear subspace of $\textbf C^n$.
Therefore, we arrive at the conclusion of Theorem 2, for odd $n$ and $\omega
\not \in S$. In the remaining cases, that is for $\omega \in S$, the identity
(6) is useless, since both sides are $0$.
We aim at reconstructing the
identity (6), initiating the reasoning from a linear differential system of
size smaller than $n$. First, we know the exact number of independent integrals
among
$\oint_{\delta_1}\omega, \ldots,\oint_{\delta_n}\omega$.
\begin{lem}
If $\omega$ belongs to $S$, then:
$\dim_{\textbf C} \left(\oint_{\delta_1}\omega, \ldots,\oint_{\delta_n}\omega\right)=n-1$.
\end{lem}
\begin{proof}
The relations between these integrals constitute the space
$$\left \{(d_1, \ldots, d_n) \in
\textbf C^n:\ d_1 \oint_{\delta_1}\omega +\ldots+ d_n \oint_{\delta_n}\omega \equiv
0 \right\}$$ From Lemma 2, any relation $(d_1, \ldots, d_n)$ must verify: $d_1 \delta_1+ \ldots + d_n \delta_n$ is a multiple of
$\delta_\infty$.
This defines a 1-dimensional vector space.
\end{proof}
From now on, we work with an adapted basis of cycles in $H_1(\{H=t\},
\textbf Z)$, that includes $\delta_\infty$, and that we denote by
$(\delta_1, \ldots, \delta_{n-1}, \delta_\infty)$. Then it is clear that
$W(\oint_{\delta_1}\omega, \ldots,\oint_{\delta_{n-1}} \omega) \not \equiv 0$. We also
make several changes in the Petrov frame: recall that the matrix $A=(a_{ij})$ in (4)
describes the vectors $H d\omega_i$ via the correspondence that associates the $i$-th
canonical vector $e_i \in \textbf C^n$ to the monomial $x^i$. We already
noticed that the matrix $A$ was diagonalizable.
We assume $A$ diagonal (which
corresponds to combining linearly the forms $\omega_i$).
Moreover, we know that among the forms $\omega_i$ associated to an eigenbasis
of $A$, two of them will have independent residues at infinity. Up to
permutation, these are $\omega_{n-1}$ and $\omega_n$.
Now,
after adding a scalar multiple of $\omega_n$ to $\omega_{n-1}$, we may assume
that
$\oint_{\delta_\infty} \omega_{n-1}$ is a constant, while the residue of the
form $\omega_n$ is a polynomial of degree $1$ in the variable $t$. With respect to such a
basis $(\omega_1, \ldots, \omega_n)$, a nonzero off-diagonal entry, $a_{n,n-1}$, may appear in $A$. Thus,
with our choice of basis of $\textbf C^n$, $A$ has the form:
$$A=
\left (
\begin{array}{cccc}
a_{1,1} & & & \\
& \ddots & & \\
& & a_{n-1,n-1}& \\
& & a_{n,n-1} & a_{n,n} \\
\end{array}
\right )$$
Now, a form
$\omega=\sum_{i=1}^{n}c_i \omega_i$ belongs to $S$ if and only if
$$c_1 \oint_{\delta_\infty} \omega_1+ \ldots +c_n \oint_{\delta_\infty}
\omega_n \equiv 0$$
Thus, after a linear change of coordinates in $\textbf C^n$, we may write
$\omega$ as: $\omega=c_1 \widetilde \omega_1+ \ldots + c_{n-2} \widetilde
\omega_{n-2}$, with $\oint_{\delta_\infty}\widetilde \omega_i=0$, $i=1, \ldots, n-2$.
We set: $\widetilde \omega_{n-1}= \omega_{n-1}$ and $\widetilde \omega_{n}=
\omega_{n}$. Thus, the integral
$\oint_{\delta} \omega$ reads: $q_0 \cdot \overline{\gamma} (t)$, $q_0
\in \textbf C^{n-2}$, $\overline{\gamma} (t)=(\oint_\delta\widetilde \omega_{1}, \ldots,
\oint_\delta \widetilde \omega_{n-2})$.
When passing to the Petrov frame $(\widetilde \omega_{i})$, $i=1, \ldots, n$,
the matrix $A$ is changed into $A'=P^{-1}AP$, and part of the
structure of the matrix $P=(p_{i,j})$ is known: $p_{n-1,n-1}=p_{n,n}=1$;
$p_{i,n-1}=0$ for $i \neq n-1$, and
$p_{i,n}=0$ for $i \neq n$, which implies that:
$a'_{i,n-1}=0$ for $i \neq n-1$ and $a'_{i,n}=0$ for $i \neq n$.
We now write the corresponding decomposition of the $n-1$ first 2-forms $Hd
\widetilde \omega_i$ and perform integration \textit{along the cycle}
$\delta_\infty$:
$$ t \frac{d}{dt} \oint_{\delta_\infty} \widetilde \omega_i -\sum_{j=1}^{n-1}
a'_{i,j} \frac{d}{dt} \oint_{\delta_\infty} \widetilde \omega_j = \oint_{\delta_\infty} \widetilde \eta_i, \quad i=1, \ldots, n-1$$
Since the residues of $\widetilde \omega_1, \ldots, \widetilde \omega_{n-1}$
are constants, these equalities entail: $\oint_{\delta_\infty} \widetilde \eta_i=0$, $i=1, \ldots, n-1$. Hence the Gelfand-Leray forms
$\widetilde \eta_i$ admit a decomposition with respect to $\widetilde \omega_1, \ldots,
\widetilde \omega_{n-2}$ only: $\widetilde \eta_i= \sum_{j=1}^{n-2} b'_{i,j} \widetilde
\omega_j$, $i=1, \ldots, n-1$, $b'_{i,j} \in \textbf C$.
Besides, from the
$n$-th equality
$$ t \frac{d}{dt} \oint_{\delta_\infty} \widetilde \omega_n -
a'_{n,n} \frac{d}{dt} \oint_{\delta_\infty} \widetilde \omega_n = \oint_{\delta_\infty} \widetilde \eta_n$$
it follows that $\widetilde \eta_n$ has a nonzero component along $\widetilde \omega_n$.
This provides information on the matrix $B'$ related to the new frame $(\widetilde \omega_i)$: $b'_{i,n-1}= b'_{i,n}=0$ for $i=1, \ldots, n-1$.
The curve $\gamma(t)=(\oint_\delta \widetilde \omega_1, \ldots, \oint_\delta
\widetilde \omega_n)$ is a solution of the linear system $\det (tE-A') \cdot \dot x=
\mbox{Ad}(tE-A')B' \cdot x= C \cdot x$ with polynomial matrix $C=(C_{i,j})$. From the structure
of $A'$ and $B'$, most of the entries in the last two columns of
$C$ are zeros, in particular: $C_{i,j}=0$, for $i=1, \ldots, n-2$ and $j=n-1,\
n$. This
means that the truncated curve $\overline \gamma (t)=(\oint_\delta \widetilde \omega_1, \ldots, \oint_\delta
\widetilde \omega_{n-2})$ satisfies the linear system whose matrix $\overline C$ is the
$(n-2) \times (n-2)$ upper-left corner of $C$.
Starting from $q_0=(c_1, \ldots, c_{n-2}) \in \textbf C^{n-2}$, we derive the vectors $q_1, \ldots, q_{n-3} \in \textbf C[t]^{n-2}$ by: $q_{k+1}=Dq_k +q_k \cdot \overline C$, with the same derivation $D=P(t)\frac{\partial}{\partial t}$ as before, where $P(t)= \det (tE-A')=(t-t_1) \ldots (t-t_n)$. They satisfy: $D^k(q_0 \cdot \overline \gamma)=q_k \cdot \overline \gamma$.
Let $\Delta$ be the wedge product of $q_0, \ldots,
q_{n-3}$.
On the other hand, consider the matrix
$$\widetilde {\mathcal P}=
\left (
\begin{array}{ccc}
\oint_{\delta_1} \widetilde \omega_1 & \cdots & \oint_{\delta_{n-2}}
\widetilde \omega_1\\
\vdots & \vdots & \vdots \\
\oint_{\delta_1} \widetilde \omega_{n-2} & \cdots & \oint_{\delta_{n-2}} \widetilde \omega_{n-2}
\end{array}
\right )$$
We obtain:
\begin{equation}
\Delta \cdot \det \widetilde{\mathcal P}= P^{\nu} \cdot \overline W
\end{equation}
with $P(t)=(t-t_1)\ldots (t-t_n)$, $\nu=(n-3)(n-2)/2$, and
$\overline W=W(\oint_{\delta_1} \omega, \ldots, \oint_{\delta_{n-2}} \omega)$.
As $\overline W$ is nonzero (by the choice of the basis of the homology), both determinants $\Delta$ and
$\det \widetilde{\mathcal P}$ are not identically vanishing. Notice that $\deg \Delta
\leq \frac{(n-1)(n-2)(n-3)}{2}$. A closer look at $\det \widetilde
{\mathcal P}$ shows that it is a polynomial, whose degree is bounded as follows:
\begin{lem}
$\deg (\det \widetilde {\mathcal P}) \leq n$.
\end{lem}
\begin{proof}
Note that the matrix $\widetilde {\mathcal P}$ is the product $R \cdot \overline{\mathcal P}$ of an $(n-2) \times n$ constant matrix $R=(r_{i,j})$, $1 \leq i \leq n-2$, $1\leq j \leq n$, of rank $n-2$ by the matrix $\overline{\mathcal P}$ obtained by removing the last two columns in the standard period matrix $\mathcal P$. There is no restriction in supposing the first $n-2$ columns of $R$ independent (this amounts to permuting the cycles in $\overline{\mathcal P}$), and we let $\overline R$ be the corresponding square matrix of rank $n-2$. Form the product $\overline R^{-1} \cdot \widetilde {\mathcal
P}$. Its determinant is the same as $\det \widetilde {\mathcal P}$, up to a
nonzero constant. On the other hand, this matrix has the expression:
$$\overline R^{-1} \cdot \widetilde {\mathcal P}=
\left (
\begin{array}{ccc}
\oint_{\delta_1} ( \omega_1 +\Omega_1) & \cdots & \oint_{\delta_{n-2}}
(\omega_1+ \Omega_1)\\
\vdots & \vdots & \vdots \\
\oint_{\delta_1} (\omega_{n-2} +\Omega_{n-2}) & \cdots & \oint_{\delta_{n-2}}
(\omega_{n-2}+ \Omega_{n-2})
\end{array}
\right )$$
where $\Omega_1, \ldots, \Omega_{n-2}$ belong to the span $\textbf C (\omega_{n-1},
\omega_n)$. Expanding $\det (\overline R^{-1} \cdot \widetilde {\mathcal P})$,
it turns out that the term that brings the highest degree (with respect to
$t$) is the determinant:
$$
\left \vert
\begin{array}{ccc}
\oint_{\delta_1}\Omega_1 & \cdots & \oint_{\delta_{n-2}} \Omega_1\\
\oint_{\delta_1} \omega_2 & \cdots & \oint_{\delta_{n-2}} \omega_2\\
\vdots & \vdots & \vdots \\
\oint_{\delta_1} \omega_{n-2} & \cdots & \oint_{\delta_{n-2}}
\omega_{n-2}
\end{array}
\right \vert$$
Setting
$x=t^{1/(n+1)} x'$, $y=t^{1/2} y'$, it follows that the leading term of this
determinant has degree $\frac{D}{n+1}$, where $D$ is the weighted degree
$\deg \omega_n+ \deg \omega_2+
\ldots + \deg \omega_{n-2}= n+ \frac{n+1}{2}+\sum_{i=2}^{n-2}(i+\frac{n+1}{2})$,
hence $\frac{D}{n+1}\leq n$. The leading coefficient appears
as the determinant of the integrals of the forms $\omega_n(x',y'),
\omega_2(x',y'), \ldots, \omega_{n-2}(x',y')$ over cycles in the level sets of the
\textit{principal} quasi-homogeneous part of $H$, $y^2-x^{n+1}$ (cf.\ \cite{Gav2}). The latter
determinant is guaranteed to be nonzero since the differentials of the forms
involved
are independent in the quotient ring of $\textbf C[x,y]$ by the Jacobian ideal
$(H_x, H_y)$.
\end{proof}
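The weighted-degree arithmetic at the end of this proof can be double-checked. A stdlib-Python sketch (illustrative only) computes $D = \deg\omega_n + \deg\omega_2 + \ldots + \deg\omega_{n-2}$, verifies the closed form $D = n^2-n-1$, and confirms $D/(n+1) \leq n$:

```python
from fractions import Fraction as F

def weighted_degree_D(n):
    # deg omega_i = i + (n+1)/2 with w_x = 1, w_y = (n+1)/2;
    # D = deg omega_n + deg omega_2 + ... + deg omega_{n-2}
    wy = F(n + 1, 2)
    return (n + wy) + sum(F(i) + wy for i in range(2, n - 1))

for n in range(5, 41, 2):          # odd n, the relevant case here
    D = weighted_degree_D(n)
    assert D == n * n - n - 1      # closed form of the sum
    assert D / (n + 1) <= n        # hence deg(det P~) <= n
```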
Again, we observe that the identity (8) returns a
\textit{quadratic} lower estimate: $ord_\infty \overline W \geq
\frac{n^2-7n+6}{2}$. On every finite pole: $ord_{t_i}\overline W
\geq 4-n$, and at a zero $t_0$ of $\overline W$: $ord_{t_0}\overline W\leq
\frac{n^2-n-6}{2}$. Finally: $ord_{t_0}\oint_\delta \omega \leq n-3+
\frac{n^2-n-6}{2} \leq n-1 + \frac{n(n-1)}{2}$. The proof of
Theorem 2 is completed.
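The final chain of estimates for odd $n$ admits the same kind of mechanical verification. The following stdlib-Python check (not part of the proof) reproduces the lower bound on $ord_\infty \overline W$ from identity (8), the bound at a zero of $\overline W$, and the comparison with the bound of Theorem 2:

```python
from fractions import Fraction as F

def odd_n_chain(n):
    # ord_inf W >= -deg(Delta) - deg(det P~) + nu * deg(P), with
    # deg Delta <= (n-1)(n-2)(n-3)/2, deg(det P~) <= n (Lemma 6),
    # nu = (n-2)(n-3)/2 and deg P = n
    ord_inf = -F((n - 1) * (n - 2) * (n - 3), 2) - n \
              + F(n * (n - 2) * (n - 3), 2)
    assert ord_inf == F(n * n - 7 * n + 6, 2)
    # each of the n finite poles contributes order >= 4 - n
    bound_at_zero = -ord_inf - n * (4 - n)
    assert bound_at_zero == F(n * n - n - 6, 2)
    # n - 3 further derivations, then compare with Theorem 2's bound
    final = (n - 3) + bound_at_zero
    assert final <= (n - 1) + F(n * (n - 1), 2)
    return final

for n in range(5, 41, 2):
    odd_n_chain(n)
```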
\subsection{Intersection with an algebraic hypersurface}
We consider the asymptotic behaviour of the integral with respect
to the degree $d$ of the form $\omega$.
\begin{thm}
Under the same assumptions on the Hamiltonian, consider the Abelian integral
of a 1-form $\omega$ of degree $d$. If $\oint_{\delta(t)} \omega \not \equiv 0$, then:
$$ord_{t_0} \left( \oint_{\delta(t)} \omega \right)\leq A(n)+d \cdot B(n)$$
\end{thm}
An idea could be first to decompose $\omega$ in the Petrov module of $H$:
$\omega=p_1(t)\omega_1+ \ldots + p_n(t) \omega_n$, which implies:
$\oint_\delta \omega=p_1(t) \oint_\delta \omega_1+ \ldots + p_n(t)
\oint_\delta \omega_n$, with $\deg p_i \leq \frac{d}{n+1}$,
$i=1,\ldots,n$. Then apply the above reasoning, noticing that the vector
$$(\oint_{\delta} \omega_1, \ldots,
\oint_{\delta}
\omega_n, t\oint_{\delta} \omega_1, \ldots, t\oint_{\delta} \omega_n, \ldots,
t^{[d/(n+1)]} \oint_{\delta} \omega_1, \ldots, t^{[d/(n+1)]} \oint_{\delta}
\omega_n)$$
is a solution of a hypergeometric Picard-Fuchs system
of size $n \cdot ([d/(n+1)]+1)$.
This would give a bound that is quadratic with
respect to the degree $d$ of the form. Yet, one should expect linear growth,
since, for a fixed Hamiltonian, even the growth of the \textit{number} of zeros of the
integrals was proven by A. Khovanskii to be linear in the degree of the form.
\begin{proof}
We define the curve $t \mapsto \Gamma(t)=(t, \oint_\delta \omega_1, \ldots,
\oint_\delta \omega_n) \subseteq \textbf C^{n+1}$, together with the algebraic
hypersurface $\{(t,x_1, \ldots, x_n) \in \textbf C^{n+1}$: $h(t, x_1, \ldots, x_n)=0\}$,
setting $h(t, x_1, \ldots, x_n)= p_1(t) x_1+ \ldots
+p_n(t) x_n$, in view of the above Petrov decomposition.
Thus, $\oint_\delta \omega$ is the composition $(h \circ
\Gamma)(t)$. This reads also as the matrix product of the row vector
$Q_0=(p_1(t), \ldots, p_n(t)) \in \textbf C[t]^n$,
by the column vector $\gamma(t)=(\oint_{\delta} \omega_1, \ldots,
\oint_{\delta}\omega_n)$.
The construction of vectors
$Q_k \in \textbf C [t]^n$ can be performed likewise. Their degrees, as well as the
degree of their exterior product, have affine growth with respect to $d$.
The lower estimate on the order of the Wronskian $W(\oint_{\delta_1} \omega, \ldots,
\oint_{\delta_n}\omega)$ at its finite poles is not affected by $d$.
\end{proof}
\end{document} |
\begin{document}
\begin{center}
{\setstretch{1.1}
\noindent {\Large \textbf{A Method for Preparation and Readout of Polyatomic Molecules in Single Quantum States}} \\
}
{David Patterson \\ [email protected]}
\end{center}
Polyatomic molecular ions contain many desirable attributes of a useful quantum system, including rich internal degrees of freedom and highly controllable coupling to the environment. To date, the vast majority of state-specific experimental work on molecular ions has concentrated on diatomic species. The ability to prepare and read out polyatomic molecules in single quantum states would enable diverse experimental avenues not available with diatomics, including new applications in precision measurement, sensitive chemical and chiral analysis at the single molecule level, and precise studies of Hz-level molecular tunneling dynamics. While cooling the motional state of a polyatomic ion via sympathetic cooling with a laser cooled atomic ion is straightforward, coupling this motional state to the internal state of the molecule has proven challenging. Here we propose a new method for readout and projective measurement of the internal state of a trapped polyatomic ion. The method exploits the rich manifold of technically accessible rotational states in the molecule to realize robust state-preparation and readout with far less stringent engineering than quantum logic methods recently demonstrated on diatomic molecules. The method can be applied to any reasonably small ($\lesssim$ 10 atoms) polyatomic ion with an anisotropic polarizability.
\section{Introduction}
The last half decade has seen enormous progress in our ability to prepare, cool, and read out the state of trapped molecular ions. The vast majority of this work has concentrated on diatomic molecules, and primarily on hydrides\cite{jusko2014two,asmis2003gas,putter1996mass,lien2014broadband,staanum2010rotational,hechtfischer2002photodissociation,roth2005production,shi2013microwave,cirac1992laser,hume2011trapped,wester2009radiofrequency}. Neutral polyatomic molecules have been prepared in single internal states and cooled to the mK regime\cite{zeppenfeld2012sisyphus}. Much of this work has been motivated by the potential to realize high resolution rotational and vibrational spectroscopy in such systems\cite{rugango2016vibronic}. Diatomic molecular ions, such as CaH$^+$, are natural candidates for researchers studying the corresponding \emph{atomic} ion (e.g. Ca$^+$), but exhibit technically challenging rotational spectra with levels spaced by 100s of GHz.
The internal state of trapped molecular ions was recently measured nondestructively, via quantum-logic spectroscopy\cite{wolf2016non}. In 2017 Chou et al. demonstrated state-selective detection and heralded projective preparation of trapped CaH$^+$ ions, using only far-off-resonant light\cite{chou2016preparation}. That work further demonstrated quantum-logic-assisted optical pumping of magnetic substates within a rotational manifold, opening a general pathway to single-state preparation.
Sympathetic cooling of trapped molecular ions via collisions with cold \emph{neutral} atoms is a rapidly evolving field\cite{hudson2016sympathetic}. The short-range collisions intrinsic to this approach are both an asset and a liability: while these collisions can in principle cool all degrees of freedom of a molecular ion, they also allow for chemical reactions and potentially unwanted inelastic collisions leading to neutral atom trap loss.
Polyatomic molecular ions have been cooled to millikelvin motional temperatures but have never been prepared in a single ro-vibrational state\cite{ostendorf2006sympathetic,wellers2011resonant,mokhberi2014sympathetic}. Ensembles of neutral polyatomic molecules have been electrically trapped, prepared in known internal states, and motionally cooled to the mK regime\cite{zeppenfeld2012sisyphus}. Remarkably, the use of state-dependent heating of trapped ion ensembles as a spectroscopy method was first suggested by Hans Dehmelt in 1968, long before the invention of laser cooling\cite{dehmelt1968bolometric}.
The ability to control polyatomic molecules at the single quantum state level opens research pathways substantially beyond what diatomic molecules offer. Parity violation is predicted to produce enantiomer-dependent shifts in the spectra of chiral molecules, but these tiny shifts have never been observed\cite{quack2008high}. The precise spectroscopy enabled by the methods proposed here would allow direct observation of this effect, as well as coherent molecular dynamics on the few-Hz level. Vibrational spectroscopy of homonuclear diatomic molecular ions has been proposed for high-sensitivity searches for time-variation of the electron-nucleon mass ratio, and Raman-active modes of nonpolar, closed shell triatomic molecular ions such as NO$_2^+$, which could be observed directly by the system described in this work, appear to be a highly systematic-immune platform for such searches\cite{hanneke2016high}.
Polyatomic molecules have been suggested as attractive systems in searches for both nuclear and electronic permanent electric dipole moments\cite{kozyryev2017precision}. The single molecule sensitivity and lack of a requirement for well-behaved electronic transitions make the proposed system attractive for studying exotic species where high fluxes are prohibitively difficult to produce, such as molecules containing the short-lived $^{225}$Ra nucleus, which are predicted to be orders of magnitude more sensitive to new physics than current searches. Small asymmetric top molecules typically exhibit a rich spectrum of allowed transitions from $<$ 1 GHz to $>$ 100 GHz, making them attractive and flexible components in proposed hybrid schemes to couple molecules to other quantum systems, such as superconducting microwave resonators\cite{andre2006coherent}.
Finally, the methods described here provide a general and in principle portable chemical analyzer that can definitively and non-destructively identify both isomer and enantiomer of individual molecules - an analytical chemistry feat which is substantially beyond our current state of the art.
\begin{figure}
\caption{A sketch of the proposed apparatus, not to scale. The configuration is a linear Paul trap (axis out of the page). A small crystal comprised of a single Sr$^{+}$ ion and a single molecular ion is confined at the trap center.}
\label{appfig}
\end{figure}
\section{Apparatus}
The basic components of our proposed apparatus are shown in figure \ref{appfig}. The apparatus consists of a linear Paul trap, upon which is superimposed an optical lattice, formed by two counterpropagating, linearly polarized infrared laser beams. These lasers could be a single retro-reflected laser, which allows for maximal vibrational stability but less experimental control, or separate counter-propagating lasers brought in via separate objectives. Both the cooling lasers and the optical lattice propagate in a direction that is not parallel to any principal axis of the trap, to allow for addressing of multiple motional modes.
The Paul trap is comprised of four rods carrying an RF voltage providing radial confinement, and two endcaps carrying a DC voltage to provide axial confinement. A small, static magnetic field of a few Gauss defines a quantization axis, which is required for Sr$^{+}$ laser cooling; this field can be in any direction as long as it is not parallel to the propagation axis of the Sr$^{+}$ cooling lasers\cite{berkeland2002destabilization}. The complex dynamics in the hybrid ion trap-optical lattice potential have been studied previously, and such hybrid systems are a model system for studying friction\cite{pruttivarasin2011trapped}.
In addition to the trapping fields and optical potential, trapped ions can also be addressed via GHz-frequency electric fields applied directly to the trap electrodes.
These high-frequency fields will have negligible effect on trapped atomic ions, but can rapidly (Rabi frequency $\Omega_R >$ 10 MHz) drive electric dipole transitions between rotational states of polar molecular ions. These electric fields can be applied in arbitrary polarization by appropriate choice of electrodes. If three electrodes (e.g. the top two in figure \ref{appfig} and one endcap) are separately addressed via high frequency electronics, arbitrary AC fields in $\hat{x}$, $\hat{y}$, and $\hat{z}$ can be applied. This ability to agilely drive arbitrary transitions between rotational states is straightforward in molecules with 5-10 atoms, but technologically difficult in diatomics, and especially hydrides, where the required frequencies are in the challenging $\sim$ 200 GHz range.
The entire apparatus can be cooled to $T \sim $ 4 K, and cold helium or hydrogen buffer gas can be introduced into the trap volume.
\subsection{A state dependent potential and state dependent heating}
While the pseudopotential formed by the Paul trap depends only on the mass and charge of the trapped ions, and is therefore state-independent, the potential formed by the optical lattice is highly state dependent in molecules with anisotropic polarizabilities.
The optical potential is given by
\[
U_{optical} = \frac{\alpha_{eff} I_0 }{ c \epsilon_0}\cos\left(\frac{4 \pi (z-z_0)}{\lambda}\right)
\]
where $I_0$ is the optical intensity, $\lambda$ is the wavelength, $z_0$ is the distance from the lattice maximum to the ion trap center, and $\alpha_{eff}$ is the effective polarizability. $\alpha_{eff}$ depends on the anisotropic polarizability of the molecular ion, but typically varies by order 10\% between molecular states of distinct $m_J$.
The polarizability anisotropy is characterized by a dimensionless parameter $s \equiv (\alpha_{\parallel} - \alpha_{\perp}) / \bar{\alpha}$. While a complete description of the polarizability in asymmetric top molecules requires a rank 2 tensor (which is not necessarily diagonal in the principal inertial axes of the molecule), the physics relevant for this proposal relies only on a non-zero anisotropy $s$. $s$ is expected to be of order unity for all molecules with non-spherical geometry, including all linear molecules and all planar molecules, and has been calculated for a wide range of neutral species\cite{ewig2002ab}. The anisotropic polarizability of molecules is the mechanism responsible for rotational Raman transitions\cite{atkin2006atkins}.
While dynamics within the hybrid potential $U_{total} = U_{optical} + U_{ion}$ are complex, at low excitation energies ($E \ll U_{optical}$) they are characterized by a frequency $\omega_{trap}$ which is on the order of the faster of the ion trap secular frequency $\omega_{secular}$ and the optical trap secular frequency $\omega_{lattice} = \left(\frac{16 \pi^2 U_{optical}}{\lambda^2 m}\right)^{1/2}$. The tight focus and short length scale of the optical lattice allow substantial optical forces to be applied to the molecule. For the parameters listed in table \ref{parametertable}, a typical molecule in the lattice feels an acceleration of $|a_{optical}| \approx 10^5$ m s$^{-2}$.
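These magnitudes can be cross-checked with a short script. The sketch below (Python; values taken from table \ref{parametertable}, purely illustrative) reproduces the quoted optical potential depth and the order of magnitude of the peak optical acceleration; the exact numbers depend on the assumed polarizability.

```python
import math

# Assumed parameters from the proposal's table of trap parameters
h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
amu = 1.66053906660e-27   # atomic mass unit, kg

alpha_eff = 2e-39         # effective polarizability, C m^2 V^-1
P = 1.0                   # optical power per beam, W
w0 = 10e-6                # beam waist radius, m
lam = 1050e-9             # lattice wavelength, m
m = 76 * amu              # molecular ion mass, kg

I0 = 2 * P / (math.pi * w0**2)        # peak intensity of one beam, W/m^2
U_opt = alpha_eff * I0 / (c * eps0)   # optical potential depth, J
U_opt_Hz = U_opt / h                  # depth in frequency units

# Peak lattice force on the molecule: |dU/dz|_max = U_opt * (4*pi/lam)
a_peak = U_opt * (4 * math.pi / lam) / m   # peak optical acceleration, m/s^2

print(f"U_optical/h = {U_opt_Hz/1e6:.1f} MHz")   # ~7 MHz, matching the table
print(f"|a_optical| = {a_peak:.1e} m/s^2")       # order 1e5 m/s^2
```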
Figure \ref{classical_sim}A shows the potentials felt by a molecule in each of two states, for reasonable estimated molecular constants and trap parameters as described in table \ref{parametertable}. Polar molecular ions can be agilely driven between these two states via microwave frequency electric fields. The rich energy spectrum of polyatomic molecules, and in particular asymmetric top molecules, is highly interconnected, meaning that any two states of energy $E \lesssim 10 k_B$ ($E/h \lesssim 200 $ GHz) can typically be connected by a series of low frequency ($\omega \lesssim 2\pi \times 20$ GHz) electric dipole transitions (figure~\ref{biggrot}).
This combination of fields - a state-independent trap, a rotational state-dependent potential superimposed on this trap, and agile microwave drive that allows for controlled transitions between rotational states - allows large, state-specific forces to be applied to trapped molecular ions. These forces can be used to motionally heat trapped ensembles conditional on the internal state of the molecular ions.
Several groups have realized state manipulation and readout of diatomic molecular ions via state-dependent potentials \cite{leibfried2012quantum,ding2012quantum,wolf2016non,staanum2010rotational}. The work of Chou et al. is particularly relevant, as they rely only on an anisotropic polarizability in their molecular ions, rather than a specific electronic structure as is required if near-resonant optical fields are used\cite{chou2016preparation}. Sympathetic heating spectroscopy, which is closely related to the readout method proposed here, has been demonstrated in small crystals of atomic ions \cite{clark2010detection}. In contrast to that work, the heating proposed here does not require any absorption of optical photons by the ``dark'' species, and spontaneous emission of the dark [molecular] species plays no role. To our knowledge, the proposed hybrid trap and the method proposed here to realize robust, state-dependent heating is novel.
\section{State readout method}\subsection{Ensemble preparation}
The molecular ion of interest is co-trapped with a laser-coolable atomic ion, such as Sr$^{+}$, in the trap shown in figure \ref{appfig}. A small crystal, comprised of a single molecule and a single Sr$^{+}$, is considered for simplicity. Extensions to larger crystals comprised of many molecular ions and many atomic ions are also possible, and are discussed in section \ref{bigcrystalsection}.
The ions are
first buffer gas cooled to a temperature of about 10 K. Cryogenic buffer gas cooling is a versatile tool for cooling internal degrees of freedom of both neutral and charged species, and appears to work more efficiently on larger (non-diatomic) species\cite{endres2017incomplete,drayna2016direct,hansen2014efficient}. Micromotion effects are known to lead to heating in ion-neutral buffered mixtures, but such effects are minimized when $m_{neutral} \ll m_{ion}$, as is expected here\cite{chen2014neutral}. Helium is the most natural buffer gas, but H$_2$ is also possible; H$_2$ entails higher base temperatures and likely more problematic clustering behavior, but can be reliably cryopumped away to essentially perfect vacuum at temperatures $\lesssim$ 8 Kelvin. While the primary advantage of the cryogenic environment is the cooling of the internal state of the molecules, the cold environment also provides very high vacuum and a low black body radiation temperature, resulting in long rotational state lifetimes ($\tau \gg 1$ s).
Buffer gas cooling is necessary because molecules with rotational constants in the 2-10 GHz range, corresponding to approximately 5-15 atoms, occupy many thousands of ro-vibrational states at room temperature, making any state-specific cooling scheme infeasible. The achievable temperature of $\sim$10 K is significantly above the ``single rotational state'' temperature $T_{rot} \approx 2B/k_B \approx $ 300 mK, but rotationally cooled molecules will be left in a small number of states ($\lesssim 50$) which can be addressed individually with microwave fields or optical fields modulated at microwave frequencies.
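A rough sense of these state counts comes from the rotational partition function. The sketch below uses the crudest possible model, a linear rigid rotor with an assumed $B = 5$ GHz (mid-range of the 2-10 GHz window); a real asymmetric top has still more thermally occupied states at room temperature, strengthening the case for buffer gas cooling.

```python
import math

h = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23    # Boltzmann constant, J/K
B = 5e9              # assumed rotational constant, Hz (illustrative)

def rotational_Z(T, Jmax=400):
    """Partition function of a linear rigid rotor: an effective count of
    thermally occupied rotational states at temperature T."""
    return sum((2*J + 1) * math.exp(-h * B * J * (J + 1) / (kB * T))
               for J in range(Jmax + 1))

Z_room = rotational_Z(300.0)   # ~1e3 states even in this simple linear model
Z_cold = rotational_Z(10.0)    # a few tens of states after buffer gas cooling
print(f"Z(300 K) ~ {Z_room:.0f} states, Z(10 K) ~ {Z_cold:.0f} states")
```

Even in the linear-rotor limit, cooling from 300 K to 10 K reduces the effective state count by over an order of magnitude, into the regime where individual microwave addressing becomes practical.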
Once the molecules have collisionally thermalized with the buffer gas, the buffer gas is removed via a cryopump and the motion of the ensemble is cooled via laser cooling applied to the Sr$^{+}$ ion. Absolute ground state cooling is not required to implement the methods proposed here, significantly relaxing trap design requirements.
The laser cooled ensemble is now motionally cold ($T_{motional} <$ 1 mK), while the internal temperature of the molecular ion remains at $\sim$~10 K. The molecular ion will be subject to electric fields from both the trapping fields and from the motion of the nearby atomic ion, but these fields vary slowly compared to the $\sim$ 10 GHz frequencies of molecular rotation, and are therefore extremely unlikely to change the molecule's internal rotational state. The molecule is therefore left in one of many internal states $\ket{0}..\ket{n}$ with $E_n/k_B \lesssim 10$ K. It is the task of our readout method to determine which one.
\subsection{State-specific heating}
\label{bolometrysection}
The ideal readout method would projectively measure which state among $\ket{0}..\ket{n}$ the molecule is in. The method described in this section, which contains the essential physics of this proposal, achieves a slightly more modest goal: we will choose two states, $\ket{0}$ and $\ket{1}$, and determine if the molecule lies within the space spanned by $\ket{0}$ and $\ket{1}$, which we denote $\ket{0,1}$. Extensions to single state determination are described in section \ref{singlestatesection}. We treat in this section the case in which states $\ket{0}$ and $\ket{1}$ are non-degenerate, but extensions to the more realistic case where $\ket{0}$ and $\ket{1}$ are degenerate or nearly degenerate are straightforward, and presented in section \ref{largesubspacesection}.
Our method, which could be considered a version of the sympathetic heating spectroscopy demonstrated by Clark et al., will work by applying an effective potential to the molecules which is constant for molecules outside $\ket{0,1}$, and time varying at about the trap secular frequency for molecules within $\ket{0,1}$\cite{clark2010detection}. In the event that the molecule began within $\ket{0,1}$, the ensemble is heated, and the molecule is left in a random internal state within $\ket{0,1}$. In the event that the molecule began in a state $\ket{k}$ outside $\ket{0,1}$, the ensemble is not heated and the molecule is left in $\ket{k}$.
The complete sequence to determine if the molecule lies within $\ket{0,1}$ is as follows:
\begin{enumerate}
\setlength \itemsep{-5pt}
\item The trapped ensemble is laser cooled to a low motional temperature $T$ via the Sr$^+$ ion; ideally this would be the motional ground state, but this is not a requirement. The optical lattice is turned off during this step.
\item The cooling lasers are turned off, and an optical lattice is adiabatically applied to the ensemble via far-detuned infrared lasers.
While the potential realized by the combination of the ion trapping fields and the optical lattice is state-dependent, the net potential is conservative within the pseudopotential approximation. The dynamics within such a potential are complex, but are characterized by motion on a timescale of order $\tau \approx \omega_{trap}^{-1}$.
\item A pulsed, microwave frequency electric field $E_{mw}$ at frequency $\omega_{01}$ is applied. Molecules outside of $\ket{0,1}$ are not affected by this field; for these molecules, the potential remains constant in time, and the molecules are not heated. Molecules in $\ket{0}$ are driven to $\ket{1}$ and vice-versa, with a Rabi frequency $\Omega_R$ chosen to be comparable to $\omega_{trap}$ or greater. Molecules which began in $\ket{0,1}$ are therefore moved between $\ket{0}$ and $\ket{1}$, which feel different optical potentials, and the ensemble is motionally heated. The duration of $E_{mw}$ is chosen such that an approximate $\pi$-pulse $\ket{0} \leftrightarrow \ket{1}$ is driven, but fidelity of this swap is not critical.
\item Additional $\pi$-pulses are applied in a pseudorandom sequence, pushing molecules between $\ket{0}$ and $\ket{1}$ on a time scale comparable to $\tau \approx \omega_{trap}^{-1}$. As molecules are driven between potentials, they feel a time-varying force and are heated. A typical pseudorandom sequence would be additional $\pi$ pulses delivered with a Poisson distribution with rate $\Gamma_{flip} \sim \omega_{trap}$.
\item The microwave drive is turned off, and the optical lattice is adiabatically lowered.
\item The ensemble motional temperature is read out via the motion of the Sr$^+$ ion. This temperature now depends strongly on whether the molecule lies within $\ket{0,1}$. Temperature readout will not change the internal state of the molecular ion.
\item Steps 1-6 can be repeated to increase fidelity.
\end{enumerate}
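The sequence above can be illustrated with a minimal classical simulation. The sketch below (one spatial dimension, parameters following table \ref{parametertable}, with illustrative effective polarizabilities for $\ket{0}$ and $\ket{1}$) applies pseudorandom $\pi$-pulse flips between the two state-dependent potentials and compares the resulting kinetic energy with an undriven control; it is a toy model, far simpler than the full simulations referenced in the text.

```python
import math
import random

# Toy 1-D model of steps 2-4: harmonic pseudopotential plus a state-dependent
# lattice, with pseudorandom pi-pulses toggling the molecule between two
# effective polarizabilities. All numerical values are assumptions.
c, eps0, amu = 2.99792458e8, 8.8541878128e-12, 1.66053906660e-27
m = 76 * amu
lam, z0 = 1050e-9, 1050e-9 / 8             # wavelength and trap-lattice offset
k_lat = 4 * math.pi / lam                  # standing-wave wavevector
w_sec = 2 * math.pi * 1e6                  # ion trap secular frequency
I0 = 6.4e9                                 # peak intensity, W/m^2
alpha = [2.0e-39, 1.7e-39]                 # alpha_eff for |0> and |1>, C m^2/V

def force(z, s):
    """Force from the harmonic pseudopotential plus the lattice in state s."""
    U0 = alpha[s] * I0 / (c * eps0)
    return -m * w_sec**2 * z - U0 * k_lat * math.sin(k_lat * (z - z0))

def equilibrium(s):
    """Bisect for the combined-potential minimum (force is monotone here)."""
    lo, hi = 0.0, 5e-8
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if force(mid, s) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def run(flip_rate, dt=2e-9, n_steps=50_000, seed=1):
    random.seed(seed)
    z, v, s = equilibrium(0), 0.0, 0       # start cold, at rest, in |0>
    ke_sum = 0.0
    for _ in range(n_steps):
        a1 = force(z, s) / m               # velocity-Verlet integration step
        z += v * dt + 0.5 * a1 * dt**2
        v += 0.5 * (a1 + force(z, s) / m) * dt
        if random.random() < flip_rate * dt:
            s = 1 - s                      # pseudorandom pi-pulse flip
        ke_sum += 0.5 * m * v**2
    return ke_sum / n_steps                # time-averaged kinetic energy, J

heated = run(flip_rate=2e6)                # molecule driven within |0,1>
control = run(flip_rate=0.0)               # molecule outside |0,1>: no flips
print(f"<KE> driven  : {heated:.2e} J")
print(f"<KE> control : {control:.2e} J")
```

The driven molecule accumulates kinetic energy from the random potential switches, while the undriven control stays cold, capturing the essential state-selectivity of the heating.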
The heating described in steps 3 and 4 can be understood in the following semi-classical picture. Consider a motionally cold molecule initially in $\ket{0}$ (blue potential in figure \ref{classical_sim}A). A $\pi$ pulse is applied, putting the molecule in $\ket{1}$ (red potential in figure \ref{classical_sim}). The ensemble is motionally excited in this new potential, and will fall ``downhill'' towards the new trap minimum.
The energy delivered can be increased by applying additional microwave pulses. Each time a $\pi$ pulse is applied, the molecule will again switch potentials. The microwave drive is intentionally dithered to ensure that subsequent kicks are uncorrelated, although in practice complex trap dynamics will likely ensure incoherent behavior even without explicit randomization. Although the details of the dynamics are complex and likely in practice uncontrollable\cite{pruttivarasin2011trapped}, molecules which began this process in $\ket{0}$ or $\ket{1}$ will receive a sequence of effectively random momentum kicks, heating the ensemble; molecules which began in a different state $\ket{k}$ will not be addressed by the microwave fields and will not be heated.
This selective heating method reduces to a variant of QLS in the simultaneous limits of resolved sidebands, ground state cooling, a single, well-controlled $\pi$-pulse drive, and high fidelity single phonon readout, but our method provides high-fidelity readout under much less stringent conditions than QLS. In particular, neither sideband resolution nor motional ground state cooling is required. The proposed method does not rely on the applied optical potential $U_{optical}$ being a small perturbation on the trap pseudopotential; for the parameters listed in table~\ref{parametertable} and figure \ref{classical_sim}A it is not.
The above semi-classical description is of course incomplete; in fact, many motional states and the internal states $\ket{0}$ and $\ket{1}$ will be thoroughly mixed by the series of pulses, and molecules will in general not be in a pure internal state. The description further ignores the spatially-dependent detuning of the $\ket{0} \leftrightarrow \ket{1}$ transition. This spatial dependence is small if the microwave Rabi frequency $\Omega_R$ is fast compared to the optical depth $U_{optical} \approx \alpha_{eff} I_0 / c \epsilon_0 h$, but this is not a requirement - the heating depends only on the thorough mixing of the states $\ket{0}$ and $\ket{1}$.
Classical simulations predict a heating rate of $\sim$ 1.5 K s$^{-1}$ for the parameters shown in table \ref{parametertable} (figure \ref{classical_sim}). As expected, simulations show this heating rate to be substantially robust to variations in trap parameters. The robustness of the induced heating to details of the applied fields means multiple motional modes can be heated simultaneously; any motional mode in which the proper motion of the molecular ion is non-zero along the optical lattice axis will be heated.
It is the richness of the rotational state spectrum of polyatomic molecules that makes this type of `partial state determination' both feasible and valuable. In a two state system, $\ket{0,1}$ spans the entire system, and the proposed measurement is worthless - it indicates nothing about which state within the $\ket{0,1}$ subspace we began in, and leaves the molecule in an unknown state within $\ket{0,1}$. In contrast, when combined with a rich spectrum of accessible states, it yields both rapid measurement and heralded state preparation. Heralded \emph{single state} preparation requires repeated application of our method, as described in section \ref{singlestatesection}.
\subsection{Why drive incoherently?}
\label{whyincoherent}
The intentional scrambling of the state-dependent force appears counterintuitive. Would it not be better to apply several carefully controlled microwave drive pulses, for example a series of $\pi$ pulses between $\ket{0}$ and $\ket{1}$? In a world where the Hamiltonian could be controlled perfectly, $N$ such pulses could combine coherently, with the net momentum transferred scaling $\propto N$. In the scheme proposed here, pulses only combine incoherently, and the net momentum transferred scales as $N^{1/2}$. However, the freedom to ignore the detailed dynamics of the trapped ensemble allows the experimenter to exert much greater, and far less controlled, forces on the molecule, while simultaneously allowing the trap to `misbehave' essentially arbitrarily on the timescale of trap motion, $\tau \sim \omega_{trap}^{-1}$. For example, the potentials shown in figure~\ref{classical_sim}A are strongly nonlinear, but ensembles trapped in such potentials can still be heated by a force dithered on a timescale $O(\omega_{trap}^{-1})$. This freedom also allows the experiment to be driven in a non-perturbative limit, in contrast to the quantum-logic based proposals such as \cite{shi2013microwave}.
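The $N$ versus $N^{1/2}$ scaling can be seen in a few lines: phase-coherent kicks add linearly, while kicks with randomized sign accumulate only diffusively. A minimal Monte-Carlo illustration (arbitrary units, illustrative only):

```python
import math
import random

# Coherent kicks add linearly; randomly signed kicks add diffusively.
random.seed(0)
N, trials, kick = 100, 4000, 1.0

coherent = N * kick                          # N identical kicks in phase

# Incoherent case: each kick lands with a random sign
sq = 0.0
for _ in range(trials):
    p = sum(kick if random.random() < 0.5 else -kick for _ in range(N))
    sq += p * p
rms = math.sqrt(sq / trials)                 # ~ sqrt(N) * kick

print(f"coherent |p| = {coherent:.0f}, incoherent rms |p| = {rms:.1f}")
```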
$\Omega_{R}$ must be kept low enough that undesired transitions to other rotational states, for example $\ket{0} \leftrightarrow \ket{2}$, are not driven. These transitions are typically detuned by many GHz from $\omega_{01}$, easily allowing for $\Omega_{R} \gtrsim$ 10 MHz.
High electrical Rabi frequencies $\Omega_{R}$ are technically straightforward - for example, 300 mV applied to electrodes 300 $\mu$m apart would realize electric-dipole allowed rotational transition rates with $\Omega_{R} \approx$ 10 MHz.
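This estimate is easy to verify. A sketch of the arithmetic, using the relation $\Omega_R \approx V D / (d h)$ from table \ref{parametertable} with the quoted 300 mV drive and 300 $\mu$m electrode spacing:

```python
import math

# Back-of-the-envelope check of the quoted electric Rabi frequency:
# a 2 Debye dipole driven by 300 mV across 300 um electrodes.
h = 6.62607015e-34          # Planck constant, J s
debye = 3.33564e-30         # 1 Debye in C m
V, d, D = 0.3, 300e-6, 2 * debye

E_field = V / d                 # ~1000 V/m between the electrodes
omega_R = D * E_field / h       # Rabi frequency, following Omega_R ~ V*D/(d*h)
print(f"Omega_R ~ {omega_R/1e6:.0f} MHz")   # ~10 MHz, matching the table
```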
Applied AC electric fields, either from the trap RF voltage or from the microwave drive, must be kept low enough that they do not substantially align molecules in states \emph{outside} of $\ket{0,1}$. Molecules outside of $\ket{0,1}$ which are aligned while within the optical lattice will feel a time-dependent force, which will result in heating. For the trap parameters listed in table \ref{parametertable}, this force is calculated to be about $10^3$ times lower than the time-dependent force responsible for the state-dependent heating for molecules within $\ket{0,1}$. This parasitic heating is dominated by ``near resonant'' alignment of additional states by the microwave drive, with an effective detuning of $\sim$ 1 GHz, rather than low frequency trap fields.
\subsection{Temperature readout}
After the state-selective heating process described above, the temperature of the ensemble will be strongly dependent on the internal state of the molecular ion. An ensemble temperature of a few mK will be reached in a few ms. A final temperature of $\sim$ 2 mK would represent an occupation number of $n \approx 40$ for each of the motional modes of the crystal, and spatial extent of the Sr$^+$ ion of several microns. Detecting this heating is therefore much easier than detecting the single motional quanta excited in QLS. The temperature could be read out via Doppler thermometry on the narrow $^2$S$_{1/2}$ - $^2$D$_{5/2}$ transition; alternatively, the spatial motion of the Sr$^+$ ion could be resolved via the time-dependence of fluorescence from the ion illuminated with counter-propagating 422 nm $^2$S$_{1/2}$ - $^2$P$_{1/2}$ light beams. Since the ion's motional extent is larger than or comparable to the node spacing (211 nm), this fluorescence will be modulated at the trap secular frequency and higher sidebands\cite{Raabthermometry}. The amplitude of these sidebands is highly temperature-dependent.
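The quoted occupation number follows from equipartition. A sketch, assuming a single mode at $\omega_{secular} = 2\pi \times 1$ MHz and the Sr$^+$ mass; the spatial extent depends strongly on which motional mode is considered, and is correspondingly larger for lower-frequency modes:

```python
import math

# Motional occupation after state-selective heating: a mode at 2*pi x 1 MHz
# thermalized at ~2 mK, in the classical limit n ~ kB*T/(hbar*omega) - 1/2.
hbar = 1.054571817e-34
kB = 1.380649e-23
amu = 1.66053906660e-27

T = 2e-3                         # ensemble temperature after heating, K
w = 2 * math.pi * 1e6            # assumed secular frequency, rad/s
m = 88 * amu                     # Sr+ mass, kg

n_bar = kB * T / (hbar * w) - 0.5          # mean phonon number
z_rms = math.sqrt(kB * T / (m * w**2))     # thermal rms extent of this mode
print(f"n_bar ~ {n_bar:.0f}, z_rms ~ {z_rms*1e9:.0f} nm")
```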
\subsection{Extensions to larger subspaces}
The simple scheme described above, in which $\ket{0}$ and $\ket{1}$ are assumed to be non-degenerate, is unrealistic for many molecules. For example, if $\ket{0}$ and $\ket{1}$ are taken to be rotational levels, there will be unresolved hyperfine levels within $\ket{0,1}$, from nuclear spins which couple only weakly (kHz or less) to the molecule's rotation. Any such structure is far below the resolution of the rapidly driven microwave transitions used above. In addition, the scheme as described above imposes unreasonable constraints on the control of the polarization of applied electric and optical fields. For example, if $\ket{1}$ is chosen to be $\ket{1_{010}}$, it is likely that population will leak into the states $\ket{1_{01\pm1}}$, even if all fields are nominally $\hat{z}$ polarized.
These limitations can be avoided simply by including the additional nearly-degenerate states in $\ket{0,1}$. Minimal modifications are required to extend the method to include additional states, replacing $\ket{0,1}$ with $\ket{i_1,i_2,..i_n}$. The microwave fields that mix $\ket{0}$ and $\ket{1}$ must be supplemented by additional microwave fields that efficiently mix all states within $\ket{i_1,i_2,..i_n}$, and thus move the molecules between the effective potentials. For example, states $\ket{1_{01\pm1}}$ can be mixed in with $\ket{0}$ and $\ket{1}$ via additional $\sigma^+$ or $\sigma^-$ microwave fields. The measurement leaves molecules that began within the subspace $\ket{i_1,i_2,..i_n}$ within this subspace, and the ensemble is heated; molecules in a state $\ket{k}$ outside $\ket{i_1,i_2,..i_n}$ remain in $\ket{k}$, and the ensemble is not heated. The heating rate depends only on the rate of flipping between states with different effective polarizability $\alpha_{eff}$, and will therefore be largely independent of the number of states in $\ket{i_1,i_2,..i_n}$.
Efficient projective measurement of the molecule's internal state from an initially thermally prepared molecule can be realized by intentionally including about half of the possible thermally occupied states in $\ket{i_1,i_2,..i_n}$. In the event of a ``yes'' measurement the molecule is known to lie within $\ket{i_1,i_2,..i_n}$; in the event of a ``no'' measurement the molecule is known to lie outside $\ket{i_1,i_2,..i_n}$. In this case, additional microwave pulses can be applied, mixing additional states into $\ket{i_1,i_2,..i_n}$; for example, a microwave pulse which swaps $\ket{k} \leftrightarrow \ket{0}$ for an additional state $\ket{k}$. This can be repeated until a ``yes'' measurement is found. Efficient searches are straightforward to design, and require no additional hardware beyond agile microwave electronics to implement. Such searches can provide heralded state preparation into a single state from $n$ thermally occupied states in time $\mathcal{O}(\log n)$.
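The $\mathcal{O}(\log n)$ search can be sketched as a bisection over the candidate states. In the toy model below, each membership measurement reports only heating/no-heating, and a ``yes'' re-randomizes the molecule within the addressed subspace, as described above; the state labels and subset choices are illustrative.

```python
import random

# Heralded O(log n) search by bisection. A "yes" membership measurement
# leaves the molecule randomized within the addressed subspace; a "no"
# leaves it untouched. The loop ends with the molecule in a known state.
def herald_single_state(n, seed=0):
    rng = random.Random(seed)
    hidden = rng.randrange(n)            # thermally prepared, unknown state
    candidates = list(range(n))
    queries = 0
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        queries += 1
        if hidden in half:               # ensemble heats: state in subspace...
            hidden = rng.choice(half)    # ...and is re-randomized within it
            candidates = half
        else:                            # no heating: state untouched
            candidates = candidates[len(candidates) // 2:]
    return candidates[0], hidden, queries

final, hidden, queries = herald_single_state(48)
print(f"heralded state {final} after {queries} measurements")  # ceil(log2 48) = 6
```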
\label{largesubspacesection}
\subsection{Extensions to single state preparation}
It is naturally desirable to extend this method to provide heralded \emph{absolute} state identification. The following sequence provides heralded projective measurement into a single arbitrary state $\ket{B}$:
\begin{enumerate}
\setlength \itemsep{-5pt}
\item{As above, determine the state to lie within a manifold $\ket{i_1,i_2,..i_n}$}
\item{With lattice lasers off, carefully drive a microwave transition between \emph{one} state $\ket{A}$ in $\ket{i_1,i_2,..i_n}$ and a single state $\ket{B}$ outside $\ket{i_1,i_2,..i_n}$. This transition requires high ($< 1$ kHz) resolution, and must be driven slowly.}
\item{Check again to see if the molecule lies within the $\ket{i_1,i_2,..i_n}$ manifold. If it does not, the molecule is known to be in state $\ket{B}$. If it does, we can assume that it is re-randomized within $\ket{i_1,i_2,..i_n}$, and we can repeat steps 2 and 3 until the molecule is projected into state $\ket{B}$.}
\end{enumerate}
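A toy Monte-Carlo version of this loop (state labels illustrative; a ``yes'' membership measurement is modeled as re-randomizing the molecule within the $n$-state manifold) shows the expected $\sim n$ repetitions before the molecule is heralded in $\ket{B}$:

```python
import random

# Repeated projection into |B>: each cycle swaps the single manifold state
# |A> with the target |B> outside the manifold, then re-checks membership.
# A "no heating" result heralds the molecule in |B>.
def project_into_B(n, seed=2):
    rng = random.Random(seed)
    state = rng.randrange(n)        # state within the manifold {0..n-1}; B = n
    A, B = 0, n
    cycles = 0
    while True:
        cycles += 1
        if state == A:
            state = B               # the carefully driven swap |A> <-> |B>
        if state == B:              # membership check: no heating observed
            return cycles           # molecule heralded in |B>
        state = rng.randrange(n)    # heating re-randomizes within the manifold

print(f"prepared |B> after {project_into_B(10)} cycles")
```

On average the loop takes about $n$ cycles (success probability $1/n$ per cycle), which is why this slower single-state step is reserved for the final stage after the fast $\mathcal{O}(\log n)$ subspace search.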
\label{singlestatesection}
\subsection{Extensions to non-polar species}
The microwave-frequency electric fields used to drive transitions between $\ket{0}$ and $\ket{1}$ are of course ineffective when applied to non-polar molecular ions, such as NO$_2^+$. However, states $\ket{i_1,i_2,..i_n}$ can be chosen such that they are connected via rotational Raman transitions, for example between $\ket{0_{000}}$ and $\ket{2_{020}}$. These transitions can be driven via a single IR laser that is amplitude-modulated at the appropriate Raman transition frequency $\omega_{01}$, which can be chosen to be well within the bandwidth of high speed electro-optic modulators. This laser could be the lattice laser itself, or an additional laser.
\label{ramansection}
\subsection{Spectroscopy in larger ensembles}
The analysis above has been confined to a small crystal, comprised of a single molecular ion and a single atomic ion. Multiple molecular ions could also be co-trapped with one or more Sr$^{+}$ ion, and the state-dependent heating sequence described in section \ref{bolometrysection} would in that case heat the ensemble if one or more of the molecular ions was in the addressed subspace $\ket{i_1,i_2,..i_n}$. The experiment would not be able to determine which trapped molecule was responsible for the heating, precluding meaningful projective measurement, but spectroscopy on the ensemble could still be performed. In the simplest case, the experimenter would simply learn that \emph{some} thermally populated state is driven at the drive frequency $\omega_{01}$ - the same information that is learned in any single frequency spectroscopy experiment. \label{bigcrystalsection}
\section{Apparatus design and molecule choice}
Existing ion trapping technologies are well suited to the proposal described here; in fact, the apparatus demonstrated by Chou et al.\cite{chou2016preparation} already realizes both Raman transitions and associated sideband cooling in molecular ions, albeit within the hyperfine manifold of a single rotational state. That apparatus, which realized secular frequencies $\omega_{secular}$ of $\sim 2\pi \times$ 5 MHz, and AC Stark shifts of $\sim 2\pi \times 200$ kHz, is a natural starting point for design of a system. We propose here a modestly weaker pseudopotential, realized via a macroscopic linear Paul trap, combined with a significantly stronger optical field. The proposed trap design parameters are summarized in table \ref{parametertable}.
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{parameter} & \textbf{symbol} & \textbf{value} \\ \hline
electrode spacing & $d$ & 300 $\mu$m \\
molecule mass & $m_{molecule}$ & 76 amu \\
atomic ion mass & $m_{atom}$ & 88 amu \\
molecule dipole moment & $D$ & 2 Debye \\
Trap RF frequency & $\omega_{RF}$ & $2\pi \times$10 MHz \\
Ion trap secular frequency & $\omega_{secular}$ & $2\pi \times$ 1 MHz \\
Optical power & $P$ & 1 W / beam \\
Optical wavelength & $\lambda$ & 1050 nm \\
Beam waist radius & $\omega_0$ & 10 $\mu$m \\
Rayleigh length & $z_R = \pi \omega_0^2 / \lambda$ & 0.3 mm \\
Ion trap-lattice offset & $\Delta z$ & $\lambda / 8 = 125$ nm \\
Peak optical intensity & $I_{0} = 2P/\pi \omega_0^2$ & $6 \times 10^9 $ W/m$^2$ \\
Assumed molecular polarizability & $\bar{\alpha} = (\alpha_{\parallel} + 2\alpha_{\perp})/3$ & 2 $\times 10^{-39}$ C m$^2$V$^{-1}$ [2$ \times 10^{-29}$ m$^3$ ] \\
Assumed polarizability anisotropy & $s = (\alpha_{\parallel} - \alpha_{\perp}) / \bar{\alpha} $ & 0.5 \\
Optical potential depth & $U_{optical} = \alpha_{eff} I_0 / c \epsilon_0 h$ & 7 MHz \\
Lattice secular frequency (C.O.M. mode) & $\omega_{lattice} = \left(\frac{16 \pi^2 U_{optical}}{\lambda^2 m}\right)^{1/2}$ & 1.6 MHz \\
Microwave drive amplitude & $V_{mw} $ & 300 mV \\
Microwave Rabi frequency & $\Omega_R \approx V_{mw}D/dh $ & 10 MHz \\
Microwave dither rate & $\Gamma_{flip} $ & 2 MHz \\
Effective polarizability for $\ket{0} = \ket{0_{000}}$ & $\alpha_{eff}(\ket{0})$ & 2 $\times 10^{-39}$ C m$^2$ V$^{-1}$ \\
Effective polarizability for $\ket{1} = \ket{1_{010}}$ & $\alpha_{eff}(\ket{1})$ & 1.7 $\times 10^{-39}$ C m$^2$ V$^{-1}$ \\
Heating rate for molecules in $\ket{0}$ & & 1.5 K s$^{-1}$ \\
Heating rate for molecules in $\ket{1}$ & & 1.5 K s$^{-1}$ \\
Heating rate for molecules in $\ket{2} = \ket{1_{100}}$ & & $<$ 1 mK s$^{-1}$ \\ \hline
\end{tabular}
\caption{Proposed parameters for a versatile hybrid trap for manipulation of polyatomic molecules.}
\label{parametertable}
\end{table}
Because the only molecule-specific experimental design lies in the choice of the microwave pulse sequences applied to the trap electrodes, switching between molecules will be comparatively straightforward. Two classes of molecular ions suggest themselves: positive analogs of stable, closed-shell molecules, (for example, 1,2-propanediol$^+$), and protonated versions of these molecules (for example, H-1,2-propanediol$^+$). The principal difference between the two classes is that the protonated versions are typically closed shell $^1\Sigma$ states, simplifying their spectroscopy but making cooling of hyperfine degrees of freedom more challenging; molecules like propanediol$^+$ are necessarily open shell. This gives the experimenter an additional handle to address the molecules - the magnetic field - but at the cost of greater complexity in $\hat{H}_{int}$. An additional advantage of open shell molecules is that an applied magnetic field breaks the $m_J$ degeneracy of rotational states, which otherwise must be addressed via carefully controlled polarization of electric and/or optical fields. This effect is small ($\sim$ 1.4 MHz/Gauss) compared to $U_{optical}$ even for open shell molecules, and will not interfere with the proposed state selective heating. Open shell molecules are therefore well suited to \emph{absolute} single state preparation, including $m_J$ states and nuclear hyperfine structure. Open shell molecules are also much easier to produce than protonated molecules; indeed, in many cases the \emph{only} thing that is known about a molecular ion species is that it can be produced from neutrals via a hot wire or electron bombardment, as is done in mass spectrometers. Protonated species can generally be produced via collisions between neutral molecules and $H_3^{+}$, which readily donates its extra proton to almost any closed shell neutral species\cite{burt1970some}.
\section{Applications}
The heralded single state preparation described above forms the heart of a broad set of experiments, which can exploit the diverse physics enabled by polyatomic molecules. The basic sequence of such an experiment is as follows:
\begin{enumerate}
\item The internal state of the molecular ion is measured as described above.
\item The state of the molecule is manipulated by applying microwave-frequency voltages directly to the trap electrodes. An arbitrary unitary transformation $\mathrm{U}$ among a chosen set of molecular states can be applied via appropriate choice of applied voltages. The optical lattice is turned off during this time.
\item The internal state of the molecular ion is read out again.
\end{enumerate}
Depending on the choice of the unitary transformation $\mathrm{U}$, this sequence will realize high-precision spectroscopy of the molecular ion in question, non-destructive single-molecule identification and chiral readout, or a low-decoherence quantum information platform in which a single molecule comprises several qubits.
\subsection{\textbf{High resolution spectroscopy}}
If $\mathrm{U}$ is chosen to be a carefully applied exchange between the prepared state $\ket{A}$ and another state, the above sequence realizes a high-resolution spectrometer. $\mathrm{U}$ could be a Rabi pulse or a Ramsey sequence. Trapped molecular ions in a cryogenic environment are remarkably well isolated from their surroundings, and are expected to exhibit exceptionally long coherence times, allowing for high resolution spectroscopy. Background gas collision rates far below 1 Hz are possible, and in closed-shell molecules all rotational transitions are Zeeman-insensitive. The dominant broadening mechanism is expected to be unwanted Stark shifts from the time-varying electric fields of the trap electrodes. The ions by definition find locations where $\braket{\vec{E}} = 0$, but in general $\braket{E^2} > 0$, so residual Stark shifts remain a possibility. The DC polarizability of rotational states in asymmetric top molecules varies over many orders of magnitude, depending on the proximity of the nearest state of opposite parity, but in a well compensated trap and a small crystal, in general $\abs{E} < 1 $ V cm$^{-1}$, and Stark shifts for most states are expected to be on the 10 Hz level or below. Unwanted \emph{light shifts} from the lattice lasers are not an issue, since these lasers are turned off during the spectroscopy pulse.
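As a concrete illustration of the Ramsey option for $\mathrm{U}$, the expected fringe pattern of an ideal two-pulse sequence can be sketched numerically. The one-second free-precession time below is an illustrative assumption (it presumes the long coherence times argued for above), not a proposed operating point.

```python
import math

def ramsey_probability(delta, T):
    """Transfer probability of an ideal pi/2 - wait(T) - pi/2 Ramsey
    sequence as a function of detuning delta (rad/s) and wait time T (s)."""
    return math.cos(delta * T / 2.0) ** 2

T = 1.0  # one second of free precession (illustrative; assumes long coherence)

# The central fringe has a full width of roughly 1/(2T) in Hz, ~0.5 Hz here.
p_resonant = ramsey_probability(0.0, T)       # on resonance: full transfer, 1.0
p_null = ramsey_probability(math.pi / T, T)   # first null of the fringe
print(p_resonant, p_null)
```

Narrower fringes follow directly from longer $T$, which is why the collision and Stark-shift budget above matters.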
\subsection{\textbf{Chiral readout}}
The experimental sequence described above allows for definitive identification of the chirality of the trapped molecular ion. Such an experiment would proceed as follows: as before, the molecule is initialized into a known rotational state $\ket{A}$. As demonstrated in \cite{eibenberger2017enantiomer}, the molecule can then be transferred to a different state $\ket{B}$ conditional on its chirality. To do this, the molecule is transferred simultaneously via two distinct paths: directly from $\ket{A} \rightarrow \ket{B}$, and indirectly $\ket{A} \rightarrow \ket{C} \rightarrow \ket{B}$. The relative phases of the paths can be chosen to interfere constructively for one enantiomer and destructively for the other; a change in the phase of any of the applied fields reverses the choice of enantiomer. Readout of the state thus reads out the molecule's chirality.
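The two-path interference underlying this chiral transfer can be sketched in a few lines. In this caricature, the direct and indirect transfer amplitudes are treated as scalars, and chirality enters only as a sign flip of the indirect amplitude; the balanced amplitude values are illustrative, not derived from any real molecule.

```python
import cmath

def transfer_probability(a_direct, a_indirect, phase, chirality):
    """|A> -> |B> transfer via two interfering paths: a direct amplitude
    plus an indirect amplitude via |C>. The two enantiomers contribute
    the indirect amplitude with opposite signs (chirality = +1 or -1)."""
    amplitude = a_direct + chirality * a_indirect * cmath.exp(1j * phase)
    return abs(amplitude) ** 2

# Balanced half-amplitudes (illustrative):
a_d = a_i = 0.5

p_left = transfer_probability(a_d, a_i, 0.0, +1)   # constructive: 1.0
p_right = transfer_probability(a_d, a_i, 0.0, -1)  # destructive: 0.0
# Shifting the phase of one applied field by pi swaps the selected enantiomer:
p_swapped = transfer_probability(a_d, a_i, cmath.pi, +1)
print(p_left, p_right, p_swapped)
```

Reading out the state after this transfer therefore reads out chirality directly.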
It is notable that this enantiomer-selective measurement is non-destructive; the molecule is left in the trap, and in fact is left in a known state. The molecule can thus be re-measured milliseconds - or minutes - later. Many neutral molecules, for example HOOH, HSSH, and analogous species, are known to tunnel between chiral conformers.
The tunneling time can vary from nanoseconds to beyond the age of the universe, but tunneling has never been resolved at rates below the resolution of beam-based microwave spectroscopy ($\sim$ 5 kHz). The experiment proposed here could push this limit by several orders of magnitude, allowing both an exquisite probe of intra-molecular dynamics and an ideal platform to look for predicted but never before seen energy differences between enantiomers arising from the short-range nuclear weak force \cite{quack2012molecular}.
\subsection{\textbf{Searches for new physics}}
Ultraprecise spectroscopy of diatomic molecules has emerged as a sensitive probe of physics beyond the Standard Model. A dramatic example is the search for a permanent electron electric dipole moment (eEDM); while the eEDM is predicted to be unobservably small in the standard model, many plausible extensions predict a larger value, and current experimental limits strongly constrain such extensions. The high internal electric field in molecules makes them exquisitely sensitive to the eEDM, and the current limit on the eEDM was set recently in precision spectroscopy experiments on neutral ThO molecules\cite{baron2014order}; a comparable limit was recently established using HfF$^+$ ions \cite{PhysRevLett119153001}. Polyatomic molecules have recently been proposed as leading candidates in next-generation searches for both eEDMs and related nuclear EDMs \cite{kozyryev2017precision}. The generality of the method proposed here is a significant advantage in these schemes, which to date rely on a favorable electronic structure in the molecules which allows for laser induced fluorescence or photofragmentation.
Molecular ions have also been proposed as candidates for searches for variation in the electron/proton mass ratio. In such an experiment, a high-precision clock based on nucleon mass - for example, on a vibrational transition - would be compared to a more conventional optical clock based on electron mass. The homonuclear O$_2^+$ ion has been suggested as a candidate in such searches\cite{hanneke2016high}. Raman-active modes of nonpolar, closed shell triatomic molecular ions such as NO$_2^+$, which could be observed directly by the system described in this work, appear to be a highly systematic-immune platform for such searches. Since these molecules are non-polar, this experiment would require the Raman-mediated heating described in section \ref{ramansection}.
\subsection{\textbf{Quantum information}}
The rich, controllable states of molecules suggest their potential as a quantum information platform. In such a scheme, ``arbitrary single-qubit rotations'' of traditional 2-state qubits would be replaced by ``arbitrary unitary transformations within a subset of low-lying rotational states''. The operations that can be applied to such a molecule are a strict superset of NMR rotations; this platform could be thought of as a revisit of NMR quantum computing, with \emph{additional degrees of freedom} (rotation), \emph{additional controls} (microwave frequency electric fields), and \emph{definitive initialization}. Can meaningful error correction be applied within single molecules? Can a single molecule be transported, carrying multiple qubits? While answering such questions is beyond the scope of the research proposed here, they illustrate the potential power of molecular ions as low-decoherence quantum machines.
Scalability is of course a major concern in any quantum information proposal. Here, co-trapped molecules - either identical or distinct species - could be entangled via shared phonon modes, as in traditional atomic ion quantum information schemes. Such a scheme requires substantially more experimental control than the incoherent heating required in this proposal; in particular, well controlled, sideband resolved manipulation and ground state cooling are required.
\section{Complexity and challenges}
Polyatomic molecules are undoubtedly complex. Figure \ref{biggrot} compares the rotational Grotrian diagram for carbon monoxide and 1,2-propanediol up to 10 Kelvin. It might appear at first that controlling the intricacy of the more complex molecule is impossible, but in fact it is \emph{easier} to manipulate the state of propanediol than carbon monoxide. This is because agile arbitrary waveform generators and microwave electronics, with bandwidth up to $\sim$40 GHz, can address every level populated in a 10 Kelvin propanediol molecule - in contrast, carbon monoxide, and in fact almost all diatomic molecules, require challenging electronics operating at frequencies above 100 GHz. Several groups propose addressing this challenge directly, via comb-mediated coherent control of laser fields differing by THz \cite{leibfried2012quantum,ding2012quantum,chou2016preparation}. In contrast, the only lasers required for the larger molecules proposed here are a single, non-tunable infrared laser, and the cooling lasers for the co-trapped Sr$^+$.
While the diagram in figure \ref{biggrot}B appears incomprehensible, in fact the spectroscopy of such molecules is straightforward; all the lines in neutral 1,2-propanediol shown in this figure have been observed and unambiguously assigned. Spectroscopy of polyatomic ions is far less developed than spectroscopy of neutral species; while the form of the rotational Hamiltonian is unchanged from neutral molecules, the exact rotational constants and internal coupling constants will need to be measured. High-sensitivity rotational spectroscopy via buffer gas cooling has proven a versatile technique on neutral molecules, and could be extended to cold plasmas of untrapped ionic species\cite{mefirstFTMW}. In addition, to our knowledge the polarizability tensor has not been measured for any polyatomic molecular ion, and will require measurement; these measurements can of course be performed using the same apparatus that is described here.
While the Hamiltonian of a rigid molecule in arbitrary electric, magnetic, and optical fields can be written down exactly, the complexity of the Hamiltonian quickly grows beyond direct human understanding. We therefore rely on numerical simulation. Our home built software (MATLAB) uses the PGOPHER package to calculate $H_{internal}$, electric dipole transition matrices, and (for open shell molecules) magnetic dipole transition matrices\cite{PGOPHER}. Optically driven Raman transitions and the motional Hamiltonian are calculated directly. Arbitrary pulse sequences of electric, magnetic, and optical fields, either uniform or spatially varying, can be simulated. Various choices of applied pulses yield simulations of standard microwave free induction decay spectroscopy, chirally sensitive three wave mixing, trap motional dynamics, sideband cooling, or state-specific detection.
The complex dynamics of a warm ($T \gg$ 1 mK) ensemble trapped in the potentials shown in figure \ref{classical_sim} would require prohibitive computing power for a fully quantum simulation. We therefore calculate heating rates via a semi-classical approximation, where internal molecular dynamics are simulated under the assumption of spatially uniform fields, and motional dynamics are simulated classically. Figure \ref{classical_sim} shows the results of one such semi-classical simulation.
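A minimal caricature of such a semi-classical calculation is shown below: the secular motion is integrated classically under a state-dependent drive, here simplified to a spatially uniform modulated force near the trap center. All parameters (mass, secular frequency, drive amplitude) are illustrative placeholders, not the simulation parameters used for figure \ref{classical_sim}.

```python
import math

def final_energy(drive_freq, t_total=50e-6, dt=1e-9):
    """Integrate classical secular motion z'' = -w^2 z + a0*cos(drive_freq*t)
    with a symplectic Euler step, and return the final motional energy (J).
    Illustrative parameters, not a specific trap design."""
    m = 77 * 1.66054e-27      # kg, roughly a propanediol+ ion (assumption)
    w = 2 * math.pi * 1.0e6   # rad/s, assumed secular frequency
    a0 = 1.0e3                # m/s^2, assumed drive acceleration amplitude
    z, v, t = 0.0, 0.0, 0.0
    for _ in range(int(t_total / dt)):
        v += (-w * w * z + a0 * math.cos(drive_freq * t)) * dt
        z += v * dt
        t += dt
    return 0.5 * m * v * v + 0.5 * m * w * w * z * z

e_resonant = final_energy(2 * math.pi * 1.0e6)  # modulation at w_sec: heats
e_detuned = final_energy(2 * math.pi * 1.3e6)   # detuned: bounded response
print(e_resonant, e_detuned)
```

Even this toy model reproduces the key qualitative feature exploited throughout: energy is deposited only when the modulation is resonant with the secular motion, i.e. only for the addressed internal state.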
\section{Conclusion}
Polyatomic molecular ions contain great potential as highly controllable, low-decoherence quantum systems. The tools presented here will allow preparation, detection, and characterization of molecular ions, including high-fidelity non-destructive readout and heralded state preparation at the single quantum state level. These tools are the prerequisite for a broad suite of experiments in high precision spectroscopy and quantum control.
\begin{figure}
\caption{The low-lying rotational states of carbon monoxide (left) and 1,2-propanediol (right). Electric dipole allowed transitions below 20 GHz are marked in dark blue. Allowed transitions above 20 GHz are marked in light blue. All thermally occupied states in the right hand molecule can be reached via combinations of low frequency, electric-dipole allowed transitions.}
\label{biggrot}
\end{figure}
\begin{figure}
\caption{A: The state-dependent potential seen by a polyatomic molecule from the combined Hamiltonian $H_c=H_{trap}
\label{classical_sim}
\end{figure}
\end{document}
\section{Old stuff}
This disconnect between the internal and motional states of the molecule is both a blessing and a curse - while it makes our goal of single-state preparation much harder, it is also what makes a trapped molecular ion such an intriguing system for high precision spectroscopy and quantum information applications, as the internal states are not perturbed even amid a cloud of laser cooled ions. Our motionally cold trapped molecule is in one of dozens of states; it is the goal of our readout procedure to determine which one.
The ions can be addressed via spatially uniform microwave fields, which can drive electric dipole allowed rotational transitions in the molecule, and by spatially uniform or spatially varying optical frequency fields\footnote{\emph{Uniform} here can be taken to mean `uniform on the size scale of the size of the ions' secular orbit'. For the case of ground-state cooled crystal, this size scale is $z_0 = \sqrt{\hbar/ 2 m \omega_{sec}}$}. Combinations of electric-dipole microwave transitions with $\hat{x}, \hat{y},$ and $\hat{z}$ polarization can be used to realize arbitrary unitary transformations among addressed states, including transformations that selectively excite molecules of a single chirality\cite{eibenberger2017enantiomer}. These transformations can be considered extensions of single-bit rotations in traditional quantum information schemes. Microwave-frequency voltages can be applied directly to the trap electrodes from arbitrary waveform generators, producing agile, well-controlled electric fields in $E_x(t), E_y(t),$ and $E_z(t)$, which are spatially uniform on the scale of the trapped ion cloud.
Optical fields are applied in focused, counter-propagating beams, realizing a spatially varying optical field which is superimposed on the ion trap fields. The optical field intensity varies \emph{spatially} on a length scale of $\lambda/2 \approx$ 500 nm. While this is larger than the typical extent of the ground-state cooled ion wavefunction, $z_0 \approx$ 20 nm, it results in a non-negligible effective Lamb-Dicke parameter $\eta_{eff}$, and allows motional sidebands to be driven even in a tight trap.
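The scales quoted above can be checked with a short calculation. The mass, secular frequency, and lattice wavelength below are assumed values chosen only to reproduce the quoted $z_0 \approx$ 20 nm; they are not proposed trap parameters.

```python
import math

hbar = 1.054571817e-34  # J s
amu = 1.66053906660e-27  # kg

def ground_state_size(mass_kg, omega_sec):
    """z0 = sqrt(hbar / (2 m w_sec)): ground-state wavefunction extent
    of a harmonically trapped ion."""
    return math.sqrt(hbar / (2 * mass_kg * omega_sec))

m = 77 * amu                      # roughly a propanediol+ ion (assumption)
omega_sec = 2 * math.pi * 200e3   # assumed secular frequency, gives z0 ~ 20 nm
lam = 1.064e-6                    # assumed IR lattice wavelength

z0 = ground_state_size(m, omega_sec)
eta = 2 * math.pi * z0 / lam      # single-beam Lamb-Dicke parameter
eta_eff = 4 * math.pi * z0 / lam  # counter-propagating Raman pair: doubled

print(z0, eta, eta_eff)
```

The counter-propagating geometry doubles the momentum transfer per photon pair, which is why $\eta_{eff}$ carries the factor of $4\pi$ rather than $2\pi$.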
The \emph{amplitude} of the optical potential can be modulated at frequencies $\Delta \omega$ corresponding to the frequencies of rotational Raman transitions of the molecule. This modulation can be achieved either by modulating the amplitude of a single beam which is retroreflected, or equivalently by retroreflecting a beam comprised of two lasers detuned by $\Delta \omega$. This scheme is similar to that demonstrated by Chou et al., where motional sidebands of (hyperfine) Raman transitions were driven via detuned, counter-propagating laser fields \cite{chou2016preparation}. Other state-dependent forces, for example near-resonant optical forces and spin-dependent magnetic forces, have also been used to realize state readout in cold, trapped molecular and atomic ions. [XXX Wunderlich and Schmidt]
The ion trap pseudo-potential is state-independent, but the optical fields provide a highly state-dependent influence on the molecule. The spatially varying, time-varying optical field is produced by two pairs of counterpropagating, linearly polarized laser beams, labeled 1, 2, 3, and 4 in figure~\ref{appfig}B. Beams 1 and 2 are separated in detuning by $\Delta$, and propagate to the right; beams 3 and 4 are equal in frequency and polarization to beams 1 and 2 respectively, and propagate to the left. In principle all four beams could be controlled independently, although we propose here to realize beams 1 and 2 by modulating a single incoming beam, and to realize beams 3 and 4 by retroreflecting beams 1 and 2 off a mirror which is mechanically registered to the trap assembly. The combined electric field from these beams of course oscillates at the optical frequency $\omega_{optical} \approx$ 300 THz, but this is much too fast for the molecule's rotational state to follow, while too slow to excite electronic transitions in the molecule. It is appropriate to consider here only how the molecule interacts with this field via its anisotropic polarizability $\alpha$. No timescale faster than the rotational frequency of the molecule of $\lesssim 20$ GHz is relevant to this proposal.
We first consider the case of a spatially uniform, constant in time optical field. Such a field would be produced, for example, by turning on only beam 1 in figure~\ref{appfig}B. In the classical picture, this constant field exerts a torque on the molecules, reflecting the energy difference between molecules with their ``easy'' polarization axis aligned with the field, and molecules with their ``hard'' polarization axis aligned with the field. These torques average to zero for a rotating molecule, and no work is performed by such a field. In the quantum picture, no optical photons are absorbed.
We next consider the case of a spatially uniform, time-varying optical field. Such a field would be produced, for example, by turning on beams 1 and 2 in figure~\ref{appfig}B with detuning $\Delta \omega$; equivalently, the field is produced by amplitude modulating a single beam at frequency $\Delta \omega /2$. We choose $\Delta \omega = \omega_{02}$ to correspond to a two photon rotational transition, for example from J=0 to J=2, as shown in figure~\ref{appfig}B.
The optical field now applies a time-varying torque to the molecule, again arising from the molecule's anisotropic polarizability. In the classical view, this field now does work on the molecule; in the quantum view, the molecule absorbs a photon at frequency $\omega_1$ from the optical field, and re-emits a photon in the same direction at frequency $\omega_2 = \omega_1 \pm \Delta \omega$. This changes the internal state of the molecule, but does not entangle the internal state with motion. The characteristic Rabi frequency for this process is $\Omega_{R} \approx s \alpha_{eff} I / c \epsilon_0 h$, where $s$ is an order-of-unity H\"onl-London-type factor which depends on the rotational states in question, and $\alpha_{eff}$ is an effective polarizability, which is comparable to $\bar{\alpha}$ for ``reasonably non-spherical'' molecules.
We now consider the case of an optical field that varies in both space and time, as is produced by turning on beams 1, 2, 3, and 4 in figure~\ref{appfig}B. We now choose $\Delta \omega = \omega_{02} \pm \omega_{sec}$, where $\omega_{sec}$ is the secular frequency of the molecule - or in the case of a tightly bound crystal, of the common mode motion - in the ion trap. The molecule again absorbs a photon at frequency $\omega_1$ from one optical beam, but now re-emits a photon at frequency $\omega_2 = \omega_1 \pm \Delta \omega$ into the other beam. This results in a momentum kick of $\Delta P \approx 2 \hbar \omega_1 / c$, changing both the internal and the motional state of the molecule.
\subsection{Ensemble drive and ensemble heating}
If we choose $\Delta \omega = \omega_{sec}$, the resulting slowly-varying optical lattice will drive the secular motion of the trapped ensemble. This is true both in the resolved sideband limit, $\Omega_{R} < \omega_{sec}$, and in the unresolved sideband limit, $\Omega_{R} > \omega_{sec}$.
It is instructive to consider the ensemble behavior as the slowly varying lattice is left on for a time $\tau$. In the ideal case described above, the applied force adds coherently, so after a time $\tau$ the ensemble's motional amplitude has grown linearly in $\tau$, and its energy as $\tau^2$.
The slowly-varying optical lattice can also drive the secular motion of the trapped ensemble, or even of a lone Sr$^{+}$ ion directly, which is useful for diagnostics and alignment purposes (Chou, personal communication).
These methods can be applied to any molecule with an anisotropic polarizability and rotational transitions which can be addressed by modern high speed electro-optic modulators. While the molecule's anisotropic polarizability arises from distant electronic states, these states are never excited in this coherent process, and molecular spontaneous emission plays no role. An excellent review of rotational Raman processes can be found in Atkins chapter 10\cite{atkin2006atkins}.
\textbf{State readout and cooling}
The scheme described above gives the molecule a momentum kick $\Delta P \approx 2 \hbar \omega_1 / c$ conditional on the initial state of the molecule. When combined with laser cooling of the ensemble via the co-trapped atomic ion, this can be used to realize either internal state cooling or state readout, as described below. While the proposed internal state cooling requires sideband-resolved spectroscopy and near-ground state motional cooling of the ensemble, the state readout, which can be used as projective state preparation, puts much less stringent demands on the system.
\textbf{State readout}
Consider a molecule initially in an unknown state; a typical example would be a molecule which has just been cooled to a rotational temperature of $T_{rot} \approx 10$ K. We arbitrarily label these states $\ket{0} \ldots \ket{n}$, with $\ket{0}$ and $\ket{1}$ chosen such that they are connected by a Raman transition. The ideal state readout method would answer the question ``is the molecule in $\ket{0}$?'' Our method retreats slightly from this ideal, and answers instead the question ``is the molecule in the space spanned by $\ket{0}$ and $\ket{1}$?'' In the event of a ``yes'' measurement, the molecule is known to be left in the $\ket{0,1}$ subspace; in the event of a ``no'' measurement, the molecule's state is unperturbed from its initial state. Since the initial number of states $n$ will typically be $n \gg 2$, even a single measurement teaches us a lot about the system; in addition, a positive result on a $\ket{0,1}$ measurement, followed by a negative result on a subsequent $\ket{1,2}$ measurement, provides heralded projection into a single state ($\ket{0}$ in this example).
State readout is realized as follows:
\begin{itemize}
\item{Cool the ensemble to a low temperature.}
\item{Scatter one or more pairs of photons off the molecule, conditional on whether the molecule lies within the $\ket{0,1}$ subspace. Each scattering event imparts a momentum kick of $\Delta P \approx 2 \hbar \omega_1 / c$, heating the ensemble. After each scattering event the molecule is left with very high probability within the $\ket{0,1}$ subspace.}
\item{Measure the final ensemble temperature.}
\end{itemize}
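The state bookkeeping implied by this sequence of yes/no subspace questions can be sketched as a simple set calculation, using the state labels introduced above.

```python
def update_state_set(candidates, subspace, outcome):
    """Posterior set of possible states after asking
    'is the molecule in `subspace`?' and getting `outcome` (True/False)."""
    return candidates & subspace if outcome else candidates - subspace

# The molecule starts somewhere among n thermally populated states:
candidates = set(range(10))

# A 'yes' on the {0,1} measurement, then a 'no' on the {1,2} measurement,
# heralds projection into the single state |0>:
candidates = update_state_set(candidates, {0, 1}, True)
candidates = update_state_set(candidates, {1, 2}, False)
print(candidates)  # {0}
```

Each question either intersects the candidate set with the probed subspace or removes it, so a short sequence of binary measurements narrows dozens of thermally populated states down to one.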
A central feature of this proposal is that more than one pair of photons can be exchanged with the molecule in step 2 above. It is a natural question to ask whether subsequent transitions are \emph{coherent}, with a meaningful phase between transitions, or \emph{incoherent}, with random phases. In the classical limit, this corresponds to the difference between exciting an oscillator with a force which varies periodically at the oscillator resonant frequency, or with a force whose phase is randomized between successive kicks.
The final ensemble measurement could be realized by a number of established techniques. Sideband resolved excitation of the Sr$^{+}$ can measure even single phonon excitations, and is a natural choice. Any method used to characterize other heating rates, such as the measurement of anomalous heating from electrode surfaces, could also be applied here. These methods have been used to characterize heating rates as low as 10 quanta s$^{-1}$\cite{turchette2000heating,hite2017measurements}. Parametric amplification can be applied to the ensemble prior to this temperature measurement.
In the combined limits of ground state cooling, single-phonon detection, and a tight crystal in which only center of mass motion is excited, this scheme is Quantum Logic Spectroscopy (QLS). However, the above scheme can provide high-fidelity readout under far less stringent conditions than QLS. For example, the method can be applied in a weakly bound crystal, where radial modes of the atomic and molecular ion, or indeed several molecular ions, thermalize only slowly with other modes and can be separately addressed, or in a weak trap, where the relevant Rabi frequencies are high compared to the trap secular frequency. In this second limit sideband-resolved spectroscopy is not possible, but momentum can still be transferred to the molecular ion conditional on the molecule's internal state. Since sideband-resolved thermometry is not meaningful in this regime, the final trap temperature would have to be read out by another method, such as Doppler thermometry on a narrow transition.
For this state readout to be meaningful, it is essential that there are no other processes at work which can change the state of the molecular ion during this measurement process. Collisions with background gas molecules would be one such process, but these are expected to be slow ($< 1$ Hz) in the cryogenic vacuum proposed here. Absorption of black-body microwave photons will be similarly suppressed by the cryogenic environment. The molecular ion will be subject to electric fields from both the trapping fields and from the motion of nearby ions, but these fields vary very slowly compared to the $\sim$ 10 GHz frequencies of molecular rotation, and are therefore extremely unlikely to change the molecule's internal rotational state. Conversely, if low-energy collisions with co-trapped ions were capable of coupling the ions' motion and internal state, thermalization of the ions' internal state through this process would be straightforward.
\begin{figure}
\caption{A sketch of the proposed apparatus, not to scale. The configuration is a linear Paul trap (axis out of the page). Molecules can be motionally cooled via strong Coulomb-Coulomb motional coupling with nearby laser cooled Sr$^{+}$ ions.}
\label{appfig}
\end{figure}
\section{Coupling the internal and motional state}
The potential experienced by a molecule near the motional ground state of the combined ion-optical trap is shown in figure \ref{corragatedpot}A. At large distances, $r \gg \lambda$, the quadratic ion pseudopotential dominates; at small distances, either the pseudopotential or the optical potential can dominate, depending on experimental parameters.
\begin{figure}
\caption{The state-dependent potential seen by a polyatomic molecule from the combined Hamiltonian $H_c=H_{trap}+H_{optical}$.}
\label{corragatedpot}
\end{figure}
Depicting the Hamiltonian as a potential, as in figure \ref{corragatedpot}, is in fact misleading. While the molecule's internal Hamiltonian $\hat{H}_{int}$ and motional Hamiltonian in a conservative ion trap $\hat{H}_{trap}$ are to a good approximation separable, the optical potential varies substantially for different molecular states. This variation arises from the anisotropy in the polarizability of the trapped molecule, which is typically of order unity. Townes and Schawlow give explicit analytic expressions for the Hamiltonian of an anisotropic molecule in an optical field\cite{townes2013microwave}.
Figure \ref{corragatedpot}B shows the potentials seen by a small polyatomic molecule in different rotational states. Chou et al. used this state dependency to realize an internal-state readout of trapped CaH$^+$ ions\cite{chou2016preparation}. At the rotational frequencies characteristic of small polyatomic molecules (0-20 GHz), the optical potential can be modulated to drive two-photon rotational transitions; if this optical field varies in space as well as time, as in an optical lattice, motional sidebands of these transitions can also be driven, as in Raman sideband cooling. It should be emphasized here that spontaneous emission plays no role in such sideband cooling, which depends on dissipative cooling of the ions' motional state via coupling with cold Sr$^{+}$.
In the low-temperature and weak-lattice limit, the ions remain localized in the electric pseudo-potential, and the Hamiltonian is well approximated by a state-independent harmonic potential and an optical field that is nearly uniform on the scale of the molecular motion, $z_0$. The behavior of a trapped ion in a perturbative, non-uniform external field Hamiltonian is described in detail by Mintert and Wunderlich\cite{mintert2001ion}, although in that case the non-uniform field was assumed to be static or slowly varying. In fact, the work of Mintert et al. is largely motivated by the desire to avoid addressing each atomic ion with an individual laser. Avoiding laser addressing is a substantial engineering improvement for atoms, but an absolute requirement for complex molecules such as those considered here. Parameters describing a realizable trap are tabulated in table \ref{parametertable}.
The system described in figures \ref{appfig} and \ref{corragatedpot} is characterized by a number of timescales. In the pseudo-potential approximation, the harmonic ion trap is fully characterized by a frequency $\omega_{sec} \lesssim 2 \pi \times 5$ MHz. The optical potential is characterized by a length scale $\lambda$ and the potential depth $U \approx \alpha I_0 / k_B$, where $\alpha$ is the molecule's polarizability along the optical polarization axis, $I_0$ is the peak intensity of the laser field, and $k_B$ is Boltzmann's constant. The relatively high polarizabilities of larger molecules, and the possibility to tightly focus the optical lattice, make deep ($U >$ 1 mK) potentials realizable even with modest laser power. While depths substantially greater than 1 mK are now routine, much weaker potentials are sufficient for the methods described here. The parameters listed in table \ref{parametertable} yield $U \approx 80$ kHz, or 4 $\mu$K.
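A back-of-the-envelope check of the quoted depth, writing the AC Stark depth as $U \approx \alpha I_0 / (c \epsilon_0)$ (order-unity factors from cycle-averaging and the standing-wave geometry are omitted), can be run as below. Since ion polarizabilities are unmeasured, the $\sim$10 \AA$^3$ polarizability volume and the intensity are assumed values chosen only to illustrate the scale.

```python
import math

eps0 = 8.8541878128e-12  # F/m
c = 2.99792458e8         # m/s
kB = 1.380649e-23        # J/K
h = 6.62607015e-34       # J s

# Assumed inputs (illustrative; ion polarizabilities are unmeasured):
alpha_volume = 10e-30                      # m^3, ~10 cubic Angstroms
alpha = 4 * math.pi * eps0 * alpha_volume  # SI polarizability, C m^2/V
I0 = 1.3e8                                 # W/m^2, assumed peak intensity

# Optical potential depth, to within order-unity geometric factors:
U_joules = alpha * I0 / (c * eps0)

print(U_joules / kB)  # depth in Kelvin, ~4e-6
print(U_joules / h)   # depth in Hz, ~8e4
```

With these assumptions the depth comes out near the quoted 80 kHz (4 $\mu$K), confirming that modest, tightly focused laser power suffices.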
Coupling between internal molecular states and motional states is achieved by driving Raman transition sidebands via a far-detuned optical field in a spatially dependent way. Rabi frequencies $\Omega_{Raman}$ for driving carrier transitions in which the motional state is not changed depend on both the matrix elements of the transition and the anisotropy $s$ of the molecular polarizability. In general, the polarizability of an anisotropic molecule is a rank 2 tensor; the expression $s = \frac{\alpha_{\parallel} - \alpha_{\perp}}{\bar{\alpha}}$, which holds for linear molecules, must be generalized. For the discussion here, we only require that the polarizability ellipsoid be non-spherical\cite{landau2013electrodynamics}. For allowed Raman transitions, $\Omega_{Raman}$ is typically only modestly smaller than the natural frequency scale $k_B U / \hbar$.
Motional sidebands of these transitions can also be driven, with typical Rabi frequencies $\Omega_{Raman-sideband} = \eta_{eff}\Omega_{Raman}$.
The dimensionless effective Lamb-Dicke parameter $\eta_{eff}$ is given by $\eta_{eff} = \frac{z_0}{\mel{\psi_1}{H}{\psi_2}} \frac{\partial \mel{\psi_1}{H}{\psi_2}}{\partial z}$.
In traditional single-photon addressing of trapped ions, this reduces to the familiar $\eta = \frac{2 \pi z_0}{\lambda}$. For the Raman transitions considered here, $\eta_{eff}$ depends on the phase between the ion trap minimum and the optical field minima ($\Delta z$ in figure \ref{corragatedpot}). A natural choice is $\Delta z = \lambda/8$; this value gives the maximum achievable sideband Rabi frequency $\Omega_{Raman-sideband}$, and a reasonable $\eta_{eff} = \frac{4 \pi z_0}{\lambda}$. The optical powers must be kept low enough to realize rotational Rabi frequencies $\Omega_{Raman}$ and $\Omega_{Raman-sideband}$ in the resolved sideband limit, i.e. $\Omega < \omega_{sec}$ and $\Omega < \Gamma_{Sr-sideband}$. Plausible parameters are listed in table \ref{parametertable}.
\section{Cooling and readout}
The coupling between internal and motional degrees of freedom allows preparation of molecules in single quantum states, and readout of these states with high fidelity.
Figure \ref{coolingandreadout} shows simple versions of such schemes. In each case, the internal state is coupled to the motional state, which can be cooled via coupling with laser cooled Sr$^+$. In the case of sideband cooling (figure \ref{coolingandreadout}A), red motional sidebands of many accessible Raman transitions are driven simultaneously. This cools the molecular ions internally, while heating them motionally; Coulomb-Coulomb coupling with dissipatively cooled Sr$^{+}$ then cools them motionally, and the process can be repeated. In contrast to diatomics, driving many such transitions simultaneously is technologically straightforward, since the characteristic frequencies lie within the range of agile microwave electronics and electro-optic modulators.
In the case of readout (figure \ref{coolingandreadout}B), a single blue motional sideband is driven, exciting the molecule internally while also heating its motional degree of freedom. This motional excitation in turn heats the co-trapped Sr$^{+}$, resulting in increased fluorescence and thus readout. In the strongly coupled, ground-state cooled regime, the only relevant mode is the center of mass mode shared by the molecular ion and the atomic ion, but this regime, while required for QLS, is not required here. The fidelity of the atomic ion motional readout can be near unity, due to well established optical shelving techniques\cite{myerson2008high}, but as described below, high-fidelity atomic state readout is \emph{not} a requirement here - accurate molecular state readout can be achieved via repeated atomic readout at modest fidelity, using shelving states within the molecule.
The detection scheme shown in figure \ref{coolingandreadout}B exploits an advantage of polyatomic molecules that is not widely appreciated. Consider a molecule initially in internal state $\alpha\ket{0} + \beta\ket{1}$; we wish to measure if it is in $\ket{0}$ or $\ket{1}$, i.e. measure $\sigma_z$. Molecules in the ground state (labeled $\ket{0,v=0}$) are excited to a blue motional sideband of state $\ket{2,v=1}$. Long range Coulomb-Coulomb interactions will then cool the molecule into motionally cold $\ket{2,v=0}$, and fluorescence from the resulting heating of the atomic ion can be collected, as in quantum logic spectroscopy (QLS). Critically, these slow, long-range interactions will \emph{not} change the internal state of the molecule. The molecule can therefore be recycled to state $\ket{0}$ via the carrier transition (shown as a black dotted line in figure \ref{coolingandreadout}B) or via microwave fields which mix $\ket{0}$ and $\ket{2}$, presumably via an intermediate state. In the imperfectly resolved sideband limit, these carrier transitions are typically driven unintentionally in any case. The molecule can therefore be repeatedly cycled and heated within the $\ket{(0,2),v=(0,1)}$ subspace, with no field tuned anywhere near resonance with the undesired $\ket{0} \rightarrow \ket{1}$ or $\ket{2} \rightarrow \ket{1}$ transitions. This ability to cycle allows many low-fidelity measurements to be combined into a single, high-fidelity readout. While this measurement shares many features with quantum logic spectroscopy and related quantum non-demolition measurements, it is more accurate to think of this as quantum bolometry\cite{dehmelt1968bolometric}. The tolerances on the apparatus are substantially looser than in challenging QLS setups.
It is only the availability of empty, accessible nearby states - a ubiquitous feature of polyatomic molecules - that makes such a scheme possible.
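The claim that many modest-fidelity cycles combine into a high-fidelity readout can be illustrated with a toy model. The per-cycle fidelity below is an assumed illustrative number, and a simple majority vote over independent cycles stands in for whatever estimator an experiment would actually use:

```python
# Sketch: repeated non-demolition cycles at modest single-shot fidelity f,
# combined by majority vote over n independent shots (n odd). The value
# f = 0.75 is an illustrative assumption, not a number from this proposal.
from math import comb

def majority_vote_fidelity(f, n):
    """Probability that the majority of n independent shots is correct,
    given per-shot success probability f (n odd)."""
    return sum(comb(n, k) * f**k * (1 - f)**(n - k)
               for k in range((n // 2) + 1, n + 1))

f = 0.75
for n in (1, 5, 15, 31):
    print(n, round(majority_vote_fidelity(f, n), 6))
```

The aggregate fidelity approaches unity exponentially in the number of cycles, which is why high-fidelity atomic state readout is not required per shot.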
While the sideband-cooling scheme shown in figure \ref{coolingandreadout}A requires high-fidelity driving of sidebands without exciting $\ket{0,v=0} \rightarrow \ket{1,v=0}$ carrier transitions, the readout requires only partially resolved sidebands. Unlike in single photon sideband cooling schemes, there is no natural timescale arising from an excited state lifetime; the Rabi frequency $\Omega_{Raman-sideband}$ can be set via optical design to be lower than the ion trap secular frequency $\omega_{secular}$. It should be noted that in the case of a single molecular ion, high-fidelity readout alone is enough to realize state preparation, via projective measurement. While larger ensembles cannot be cooled via projective measurement, high-fidelity readout of ensembles and populations within ensembles \emph{without} internal cooling is sufficient to realize ultra-high resolution spectroscopy and ultra-sensitive chemical and chiral analysis.
\begin{figure}
\caption{The ability to address motional sidebands via an amplitude-modulated, spatially varying optical lattice driving rotational Raman transitions enables both internal cooling and readout. In A, red sidebands of many Raman transitions are driven simultaneously. In a motionally cold sample, this excites the molecules' motion, but cools the internal state of the molecule. Collisions with dissipatively cooled Sr$^+$ ions then cool the motional state (blue arrows). Additional electric fields (not shown) are needed to remix dark states that would otherwise not be cooled. In B, blue sidebands are used to motionally excite molecules conditional on their internal state. As described in the text, these measurements can be repeated to realize high-fidelity state readout and projective measurement.}
\label{coolingandreadout}
\end{figure}
While figure \ref{appfig} describes a hybrid Paul trap/optical lattice, the analysis above applies to a hybrid Penning trap/optical lattice as well. Penning traps appear to be very attractive systems for realizing sideband-mediated spectroscopy, both because they allow for co-trapping and crystallization of species of very different mass, and because heating rates in Penning traps are typically lower than for comparably sized crystals in Paul traps\cite{leibfried2003quantum,larson1986sympathetic,goodwin2016resolved}. The most natural geometry is a tightly confined ``pancake'' of ions, excited axially by a lattice propagating along the trap axis\cite{biercuk2009high}.
\label{bigcrystalsection}
\section{Complexity and challenges}
Polyatomic molecules are undoubtedly complex. Figure \ref{biggrot} shows the rotational Grotrian diagram for carbon monoxide and 1,2-propanediol up to 10 Kelvin. It might appear at first that controlling the intricacy of the right hand molecule is impossible, but in fact it is \emph{easier} to manipulate the state of propanediol than carbon monoxide. This is because agile arbitrary waveform generators and electro-optic modulators, with bandwidth up to $\sim$40 GHz, can address every level populated in a 10 Kelvin propanediol molecule - in contrast, carbon monoxide, and in fact most diatomic molecules, require challenging electronics operating at frequencies above 100 GHz. Chou et al. propose to tackle this difficulty directly, via comb-mediated coherent control of laser fields differing by THz \cite{chou2016preparation}. The only lasers required for the larger molecules proposed here are a single, non-tunable infrared laser that can be modulated by a high frequency electro-optic modulator, and the cooling lasers for the co-trapped Sr$^+$.
While the diagram in figure \ref{biggrot}B appears incomprehensible, in fact the spectroscopy of such molecules is straightforward; all the lines in neutral 1,2-propanediol shown in this figure have been observed and unambiguously assigned. Spectroscopy of polyatomic ions is far less developed experimentally; while the form of the rotational Hamiltonian is unchanged from neutral molecules, the exact rotational constants and internal coupling constants will need to be measured. Our group is well positioned to perform such spectroscopy, on our ultra-sensitive buffer gas cooled microwave spectrometer\cite{mefirstFTMW}. In addition, to our knowledge the polarizability tensor of all polyatomic molecular ions is unknown, and will require measurement; these measurements can of course be performed using the same apparatus that is described here.
While the Hamiltonian of a rigid molecule in arbitrary electric, magnetic, and optical fields can be written down exactly, the complexity of the Hamiltonian quickly grows beyond direct human understanding. We therefore rely on numerical simulation. Our home built software (MATLAB) uses the PGOPHER package to calculate $H_{internal}$, electric dipole transition matrices, and (for open shell molecules) magnetic dipole transition matrices\cite{PGOPHER}. Optically driven Raman transitions and the motional Hamiltonian are calculated directly. Arbitrary pulse sequences of electric, magnetic, and optical fields, either uniform or spatially varying, can be simulated. Various choices of applied pulses yield simulations of standard microwave free induction decay spectroscopy, chirally sensitive three wave mixing, trap motional dynamics, sideband cooling, or state-specific detection.
\section{Conclusion}
Polyatomic molecular ions hold great potential as highly controllable, low-decoherence quantum systems. The tools presented here will allow preparation, detection, and characterization of molecular ions, including high-fidelity non-destructive readout and heralded state preparation at the single quantum state level. These tools are the prerequisite for a broad suite of experiments in high precision spectroscopy and quantum control.
\begin{figure}
\caption{The low-lying rotational states of carbon monoxide (left) and 1,2-propanediol (right). Electric dipole allowed transitions below 20 GHz are marked in dark blue. Allowed transitions above 20 GHz are marked in light blue. Rotational Raman transitions are not shown, but in general are formed via combinations of shown electric-dipole transitions.}
\label{biggrot}
\end{figure}
\end{document} |
\begin{document}
\vspace*{-1.5\baselineskip}
\title[Divided-power quantum plane]{Quantum-$s\ell(2)$ action on a
divided-power quantum plane at even roots of unity}
\author[Semikhatov]{A.M.~Semikhatov}
\address{\mbox{}\kern-\parindent Lebedev Physics Institute
\mbox{}\linebreak \texttt{[email protected]}}
\begin{abstract}
We describe a nonstandard version of the quantum plane, the one in
the basis of divided powers at an even root of unity
$\mathfrak{q}=e^{i\pi/p}$. It can be regarded as an extension of the ``nearly
commutative'' algebra $\mathbb{C}[X,Y]$ with $X Y =(-1)^p Y X$ by
nilpotents. For this quantum plane, we construct a
Wess--Zumino-type de~Rham complex and find its decomposition into
representations of the $2p^3$-dimensional quantum group $\overline{\mathscr{U}}_{\q} s\ell(2)$ and its
Lusztig extension $\overline{\mathscr{U}}_{\q} s\ell(2)L$; the quantum group action is also defined on
the algebra of quantum differential operators on the quantum plane.
\end{abstract}
\maketitle
\thispagestyle{empty}
\section{Introduction}
Recent studies of logarithmic conformal field theory models have shown
the remarkable fact that a significant part of the structure of these
models is captured by factorizable ribbon quantum groups at roots of
unity~\cite{[FGST],[FGST2],[FGST3],[FGST-q]}. This fits the general
context of the Kazhdan--Lusztig correspondence$/$duality between
conformal field theories (vertex-operator algebras) and quantum
groups~\cite{[KL]}, but in logarithmic conformal field theories this
correspondence goes further, to the extent that the relevant quantum
groups may be considered ``\textit{hidden symmetries}'' of the
corresponding logarithmic models (see~\cite{[S-q]} for a review
and~\cite{[GT],[S-U],[BFGT]} for further development). This motivates
further studies of the quantum groups that are Kazhdan--Lusztig-dual
to logarithmic conformal field theories. For the class of $(p,1)$
logarithmic models (with $p=2,3,\dots$), the dual quantum group is
$\overline{\mathscr{U}}_{\q} s\ell(2)$ generated by $E$, $K$, and $F$ with the relations
\begin{gather}\label{the-qugr}
\begin{aligned}
KEK^{-1}&=\mathfrak{q}^2E,\quad
KFK^{-1}=\mathfrak{q}^{-2}F,\\
[E,F]&=\ffrac{K-K^{-1}}{\mathfrak{q}-\mathfrak{q}^{-1}},
\end{aligned}
\\
\label{the-constraints}
E^{p}=F^{p}=0,\quad K^{2p}=1
\end{gather}
at the even root of unity\enlargethispage{1\baselineskip}
\begin{gather}\label{the-q}
\mathfrak{q}=e^{\frac{i\pi}{p}}.
\end{gather}
This $2p^3$-dimensional quantum group first appeared in~\cite{[AGL]}
(also see~\cite{[FHT],[Ar]}; a somewhat larger quantum group was
studied in~\cite{[Erd]}).
In this paper, we study not the $\overline{\mathscr{U}}_{\q} s\ell(2)$ quantum group itself but an
algebra carrying its action, a version of the quantum
plane~\cite{[M-Montr]}.\footnote{Quantum planes and their relation to
$s\ell(2)$ quantum groups is a ``classic'' subject discussed in too
many papers to be mentioned here; in addition to~\cite{[M-Montr]},
we just note~\cite{[WZ],[M]}, and~\cite{[LR93]}. In a context close
to the one in this paper, the quantum plane was studied at the third
root of unity in~\cite{[Coq],[DNS],[CGT]}.} Quantum
planes\pagebreak[3] (which can be defined in slightly different
versions at roots of unity, for example, infinite or finite) have the
nice property of being \textit{$\mathscr{H}$-module algebras} for
$\mathscr{H}$ given by an appropriate version of the quantum
$s\ell(2)$ (an $\mathscr{H}$-module algebra is an associative algebra
endowed with an $\mathscr{H}$ action that ``agrees'' with the algebra
multiplication in the sense that $X\,(u\,v)=\sum(X'u)\,(X''v)$ for all
$X\in\mathscr{H}$). Studying such algebras is necessary for extending
the Kazhdan--Lusztig correspondence with logarithmic conformal field
theory, specifically, extending it to the level of fields; module
algebras are to provide a quantum-group counterpart of the algebra of
fields in logarithmic models when these are described in a manifestly
quantum-group invariant way. Another nice feature of quantum planes,
in relation to the Kazhdan--Lusztig correspondence in particular, is
that they allow a ``covariant calculus''~\cite{[WZ]}, i.e., a de~Rham
complex with the space of $0$-forms being just the quantum plane and
with a differential that commutes with the quantum-group action. (A
``covariant calculus'' on a differential $\overline{\mathscr{U}}_{\q} s\ell(2)$-module algebra was also
considered in~\cite{[S-U]} in a setting ideologically similar to but
distinct from a quantum plane.)
Here, we explore a possibility that allows extending the $\overline{\mathscr{U}}_{\q} s\ell(2)$-action
on a module algebra to the action of the corresponding Lusztig quantum
group $\overline{\mathscr{U}}_{\q} s\ell(2)L$, with the additional $s\ell(2)$ generators conventionally
written as $\boldsymbol{\mathscr{E}}=\fffrac{E^p}{[p]!}$ and
$\boldsymbol{\mathscr{F}}=\fffrac{F^p}{[p]!}$.\footnote{We use the standard notation
$[n]=\fffrac{\mathfrak{q}^n - \mathfrak{q}^{-n}}{\mathfrak{q} - \mathfrak{q}^{-1}}$,
$[n]!=[1][2]\dots[n]$, and
$\mbox{\scriptsize$\displaystyle\genfrac{[}{]}{0pt}{}{m}{n}$}{}
=\fffrac{[m]!}{[m-n]!\,[n]!}$.}$^{,}$\footnote{The
counterpart of this $s\ell(2)$ Lie algebra on the ``logarithmic''
side of the Kazhdan--Lusztig correspondence, in particular,
underlies the name ``triplet'' for the extended chiral algebra of
the $(p,1)$ logarithmic models, introduced
in~\cite{[K-first],[GK1],[GK2]} (also see~\cite{[G-alg]}; the
triplet algebra was shown to be defined in terms of the kernel of a
screening for general~$p$ in~\cite{[FHST]} and was studied, in
particular, in~\cite{[CF],[EF],[AM-triplet]}).} We use Lusztig's
trick of divided powers twice: to extend the $\overline{\mathscr{U}}_{\q} s\ell(2)$ quantum group and to
``distort'' the standard quantum plane $\mathbb{C}_{\mathfrak{q}}[x,y]$, i.e., the
associative algebra on $x$ and $y$ with the relation
\begin{gather}\label{xyyx}
y\,x = \mathfrak{q}\,x\,y,
\end{gather}
by passing to the divided powers
\begin{equation*}
x^{(m)}=\ffrac{x^m}{[m]!},\quad
y^{(m)}=\ffrac{y^m}{[m]!},\qquad m\geqslant0.
\end{equation*}
Constructions with divided powers~\cite{[L-q1],[L]} are interesting at
roots of unity, of course; in our root-of-unity case~\eqref{the-q},
specifically, $[p]=0$, and the above formulas cannot be viewed as
a change of basis. We actually define \textit{the divided-power
quantum plane} $\dpplane$ (or just $\dpshort$ for brevity) to be the
span of $x^{(n)}\,y^{(m)}$, $m,n\geqslant0$, with the (associative)
multiplication
\begin{equation}\label{xxdiv}
x^{(m)} x^{(n)}=\genfrac{[}{]}{0pt}{}{m+n}{m}\,x^{(m+n)},\quad
y^{(m)} y^{(n)}=\genfrac{[}{]}{0pt}{}{m+n}{m}\,y^{(m+n)}
\end{equation}
and relations
\begin{equation}
y^{(n)}\,x^{(m)} = \mathfrak{q}^{n m} x^{(m)}\, y^{(n)}.
\end{equation}
(Somewhat more formally, \eqref{xxdiv} can be considered relations as
well.) Divided-power quantum spaces were first considered
in~\cite{[Hu]}, in fact, in a greater generality (in an arbitrary
number of dimensions, correspondingly endowed with an $\mathscr{U}_q
s\ell(n)$ action).\footnote{I thank N.~Hu for pointing
Ref.~\cite{[Hu]} out to me and for useful remarks.} Our quantum
plane carries a $\overline{\mathscr{U}}_{\q} s\ell(2)$ action that makes it a $\overline{\mathscr{U}}_{\q} s\ell(2)$-module algebra (a
particular case of the construction in~\cite{[Hu]}),
\begin{equation}\label{U-acts-div}
\begin{split}
E(x^{(m)} y^{(n)}) &= [m+1]\, x^{(m + 1)} y^{(n - 1)},\\
K(x^{(m)} y^{(n)}) &= \mathfrak{q}^{m-n}\, x^{(m)} y^{(n)},\\
F(x^{(m)} y^{(n)}) &= [n+1]\,x^{(m - 1)} y^{(n + 1)},
\end{split}
\end{equation}
with $x^{(m)}=0=y^{(m)}$ for $m<0$.\footnote{In speaking of a
$\overline{\mathscr{U}}_{\q} s\ell(2)$-module algebra, we refer to the Hopf algebra structure on $\overline{\mathscr{U}}_{\q} s\ell(2)$
given by the comultiplication $\Delta(E)= E\otimes K + 1\otimes E$,
$\Delta(K)=K\otimes K$, $\Delta(F)=F\otimes 1 + K^{-1}\otimes F$,
counit $\epsilon(E)=\epsilon(F)=0$, $\epsilon(K)=1$, and antipode
$S(E)=-E K^{-1}$, $S(K)=K^{-1}$, $S(F)=-K F$~\cite{[FGST]}.} A
useful way to look at $\dpshort$ is to consider the ``almost
commutative'' polynomial ring $\mathbb{C}_{\varepsilon}[X,Y]$ with $X
Y=\varepsilon\,Y X$ for $\varepsilon=(-1)^p$ and extend it with the
algebra of ``infinitesimals'' $x^{(m)} y^{(n)}$, $0\leqslant m,n\leqslant p-1$.
Then, in particular, $X$ and $Y$ behave under the $s\ell(2)$ algebra
of $\boldsymbol{\mathscr{E}}$ and $\boldsymbol{\mathscr{F}}$ ``almost'' (modulo signs for odd~$p$) as
homogeneous coordinates on~$\mathbb{C}\mathbb{P}^1$.
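As a numerical sanity check (not part of the paper), one can verify that the $E$, $K$, $F$ action on divided-power monomials $x^{(m)}\,y^{(n)}$ given above really satisfies the defining relation $[E,F]=\frac{K-K^{-1}}{\mathfrak{q}-\mathfrak{q}^{-1}}$; the value of $p$ and the sample monomials below are arbitrary test points.

```python
# Sketch: check [E, F] = (K - K^{-1})/(q - q^{-1}) on divided-power
# monomials x^{(m)} y^{(n)}, using the action from the text:
#   E: [m+1] x^{(m+1)} y^{(n-1)},  K: q^{m-n},  F: [n+1] x^{(m-1)} y^{(n+1)}.
import cmath

p = 4
q = cmath.exp(1j * cmath.pi / p)

def qint(n):
    """Symmetric q-integer [n] = (q^n - q^-n)/(q - q^-1)."""
    return (q**n - q**-n) / (q - q**-1)

# A vector is a dict {(m, n): coeff}; the operators act linearly.
def E(v):
    return {(m + 1, n - 1): qint(m + 1) * c
            for (m, n), c in v.items() if n - 1 >= 0}

def F(v):
    return {(m - 1, n + 1): qint(n + 1) * c
            for (m, n), c in v.items() if m - 1 >= 0}

def K(v, power=1):
    return {(m, n): q**(power * (m - n)) * c for (m, n), c in v.items()}

for m, n in [(1, 2), (3, 1), (2, 2)]:
    v = {(m, n): 1.0}
    lhs = E(F(v)).get((m, n), 0) - F(E(v)).get((m, n), 0)
    rhs = (K(v)[(m, n)] - K(v, -1)[(m, n)]) / (q - q**-1)
    assert abs(lhs - rhs) < 1e-9
print("[E,F] = (K - K^{-1})/(q - q^{-1}) holds on sample monomials")
```

The check reduces to the $q$-integer identity $[m][n+1]-[m+1][n]=[m-n]$, which holds at any $q$ and hence at the root of unity.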
We extend $\dpshort$ to a differential algebra, the algebra of
differential forms $\underline{\Omega}_{\mathfrak{q}}$, which can be considered a
(Wess--Zumino-type) de~Rham complex of $\dpshort$, and then describe
the $\overline{\mathscr{U}}_{\q} s\ell(2)$ action on $\underline{\Omega}_{\mathfrak{q}}$ and find how it decomposes into
$\overline{\mathscr{U}}_{\q} s\ell(2)$-representations (Sec.~\ref{sec:dp-plane}). Further, we extend
the quantum group action on the quantum plane and differential forms
to a $\overline{\mathscr{U}}_{\q} s\ell(2)L$ action. In Sec.~\ref{sec:qdiff}, we introduce quantum
differential operators on $\dpshort$, make them into a module algebra,
and illustrate their use with a construction of the projective
quantum-group module~$\mathscr{P}^+_1$.
\section{Divided-power quantum plane}\label{sec:dp-plane}
The standard quantum plane $\mathbb{C}_q[x,y]$ at a root of unity and the
divided-power quantum plane $\dpshort$ at a root of unity can be
considered two different root-of-unity limits of the ``generic''
quantum plane. The formulas below therefore follow from the
``standard'' ones (for example, $E(x^m y^n) =[n]x^{m+1} y^{n-1}$ and
$F(x^m y^n) =[m]x^{m-1} y^{n+1}$ for the quantum-$s\ell(2)$ action) by
passing to the divided powers at a generic $q$, when
$x^{(m)}=\fffrac{x^m}{[m]!}$ \textit{is} a change of basis, and
setting $q$ equal to our $\mathfrak{q}$ in~\eqref{the-q} in the end. In
particular, we apply this simple strategy to the Wess--Zumino calculus
on the quantum plane~\cite{[WZ]}.
\subsection{De~Rham complex of $\dpplane$}
We define $\underline{\Omega}_{\mathfrak{q}}$, the space of differential forms on $\dpshort$,
as the differential algebra on the $x^{(m)}\,y^{(n)}$ and $\upxi$, $\upeta$
with the relations (in addition to those in~$\dpshort$)
\begin{gather*}
\upxi\upeta =-\mathfrak{q}\upeta\upxi,\quad \upxi^2=0,\quad\upeta^2=0,
\end{gather*}
and
\begin{alignat*}{2}
\upxi\, x^{(n)} &= \smash[t]{\mathfrak{q}^{2 n}
x^{(n)} \upxi},&\quad
\upeta\,x^{(n)} &= \mathfrak{q}^n x^{(n)} \upeta + \mathfrak{q}^{2n-2}(\mathfrak{q}^2 - 1)x^{(n - 1)}
y\upxi,
\\
\upxi\,y^{(n)} &= \mathfrak{q}^n y^{(n)}\upxi,&\quad
\upeta\, y^{(n)} &= \mathfrak{q}^{2 n} y^{(n)} \upeta
\end{alignat*}
and with a differential $d$ ($d^2=0$) acting as
\begin{align*}
d (x^{(m)}y^{(n)}) &= \mathfrak{q}^{m + n - 1}x^{(m - 1)} y^{(n)} \upxi + \mathfrak{q}^{n
- 1}x^{(m)} y^{(n - 1)}\upeta.
\end{align*}
In particular, $d x=\upxi$ and $d y = \upeta$.
The $\overline{\mathscr{U}}_{\q} s\ell(2)$ action on $1$-forms that commutes with the differential is
given by
\begin{align*}
E(x^{(m)} y^{(n)} \upxi) &= \mathfrak{q}\, [m+1]\, x^{(m + 1)} y^{(n - 1)} \upxi,\\
E(x^{(m)} y^{(n)} \upeta) &= \mathfrak{q}^{-1} [m+1]\, x^{(m + 1)} y^{(n - 1)} \upeta
+ x^{(m)} y^{(n)} \upxi,\\
K(x^{(m)} y^{(n)} \upxi) &=\mathfrak{q}^{m-n+1}\,x^{(m)} y^{(n)} \upxi,\\
K(x^{(m)} y^{(n)} \upeta) &=\mathfrak{q}^{m-n-1}\,x^{(m)} y^{(n)} \upeta,\\
F(x^{(m)} y^{(n)} \upxi) &= [n+1]\,x^{(m - 1)} y^{(n + 1)} \upxi
+ \mathfrak{q}^{n - m} x^{(m)} y^{(n)} \upeta,\\
F(x^{(m)} y^{(n)} \upeta) &= [n+1]\,x^{(m - 1)} y^{(n + 1)} \upeta
\end{align*}
(and on $2$-forms, simply by $E(x^{(m)} y^{(n)} \upxi \upeta) = [m+1]\,
x^{(m + 1)} y^{(n - 1)} \upxi \upeta$ and $F(x^{(m)} y^{(n)} \upxi \upeta) =
[n+1]\,x^{(m - 1)} y^{(n + 1)} \upxi \upeta$).
A feature of the divided-power quantum plane is that the monomials
$x^{(p m - 1)}\,y^{(p n - 1)}$ with $m,n\geqslant1$ are $\overline{\mathscr{U}}_{\q} s\ell(2)$ invariants,
although not ``constants'' with respect to~$d$. Also, the kernel of
$d$ in $\dpshort$ is given by $\mathbb{C} 1$, and the cohomology of $d$
in $\underline{\Omega}_{\mathfrak{q}}^1$ is zero. A related, ``compensating,'' difference from
the standard quantum plane at the same root of unity amounts to zeros
occurring in the multiplication table: for $1\leqslant m,n\leqslant p-1$, it
follows that $x^{(m)}\,x^{(n)}=0$ whenever $m+n\geqslant p$.
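This vanishing is easy to confirm numerically: the $q$-binomial coefficient in the multiplication rule $x^{(m)} x^{(n)} = \genfrac{[}{]}{0pt}{}{m+n}{m} x^{(m+n)}$ acquires a factor $[p]=0$ once $m+n\geqslant p$. A minimal sketch, assuming only the symmetric $q$-integer conventions of the text ($p=5$ is an arbitrary choice):

```python
# Sketch: at q = exp(i*pi/p), the symmetric q-integer [p] vanishes, and with
# it the Gaussian binomial [m+n; m] whenever 1 <= m, n <= p-1 and m+n >= p,
# so x^{(m)} x^{(n)} = 0 in that range.
import cmath

p = 5
q = cmath.exp(1j * cmath.pi / p)

def qint(n):
    """Symmetric q-integer [n] = (q^n - q^-n)/(q - q^-1)."""
    return (q**n - q**-n) / (q - q**-1)

def qbin(n, k):
    """Gaussian binomial via the symmetric Pascal recursion
    [n;k] = q^k [n-1;k] + q^(k-n) [n-1;k-1], valid at roots of unity."""
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return q**k * qbin(n - 1, k) + q**(k - n) * qbin(n - 1, k - 1)

print(abs(qint(p)))        # [p] = 0
print(abs(qbin(p, 2)))     # vanishes: x^{(2)} x^{(p-2)} = 0
print(abs(qbin(p - 1, 2))) # nonzero below the threshold
```

The recursion evaluates the same Laurent polynomial as the factorial formula, so it remains valid where $[m]!$ itself degenerates.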
\subsection{``Frobenius'' basis}\label{sec:Frobenius}
An ideal (the nilradical) is generated in $\dpshort$ from the
monomials $x^{(m)}y^{(n)}$ with $1\leqslant m,n\leqslant p-1$. We now introduce
a new basis by splitting the powers of $x$ and $y$ accordingly
(actually, by passing to ``\textit{nondivided}'' powers of $x^{(p)}$
and $y^{(p)}$).
\subsubsection{}\label{sec:Frob-basis}
The divided-power quantum plane $\dpplane$ can be equivalently viewed
as the linear span of
\begin{equation}\label{Frob-basis}
X^M\,Y^N\,x^{(m)}\,y^{(n)},\qquad
0\leqslant m,n\leqslant p-1,\quad M,N\geqslant0,
\end{equation}
where $X=x^{(p)}$ and $Y=y^{(p)}$; the change of basis is explicitly
given by\footnote{Here and in what follows, we ``resolve'' the
$q$-binomial coefficients using the general formula~\cite{[L]}
\begin{gather*}
\genfrac{[}{]}{0pt}{}{M p + m}{N p + n} = (-1)^{(M-1)N p + m N - n M}\,
\binom{M}{N}\genfrac{[}{]}{0pt}{}{m}{n}
\end{gather*}
for $0\leqslant m,n\leqslant p-1$ and $M\geqslant1$, $N\geqslant0$.}
\begin{equation*}
x^{(M p + m)}=(-1)^{M m}x^{(M p)}\,x^{(m)}=(-1)^{M m +
\frac{M(M-1)}{2}p}\fffrac{1}{M!}\,X^M\,x^{(m)}
\end{equation*}
and similarly for $y^{(N p + n)}$ (cf.\ Proposition~2.4
in~\cite{[Hu]}). Clearly, monomials in the right-hand side here are
multiplied ``componentwise,''
\begin{gather*}
(X^M x^{(m)})(X^N x^{(n)})=X^{M+N}\genfrac{[}{]}{0pt}{}{m+n}{m}\, x^{(m+n)}.
\end{gather*}
The relations involving $X$ and $Y$ are
\begin{alignat*}{2}
X\,Y&=(-1)^p\,Y\,X,\kern-40pt\\
X\,x^{(n)}&=x^{(n)}\,X,&\qquad X\,y^{(n)}&=(-1)^n\,y^{(n)}\,X,\\
Y\,x^{(n)}&=(-1)^n\,x^{(n)}\,Y,&\qquad Y\,y^{(n)}&=y^{(n)}\,Y.
\end{alignat*}
We continue writing $\dpshort$ for $\mathbb{C}_{\mathfrak{q}}[X,Y,x,y]$ with all the
relations understood. The quotient of $\dpshort$ by the nilradical is
the ``almost commutative'' polynomial ring $\mathbb{C}_{\varepsilon}[X,Y]$,
where $X Y=\varepsilon\,Y X$ for $\varepsilon=(-1)^p$.
In $\underline{\Omega}_{\mathfrak{q}}$, the relations involving $X$ and $Y$ are
\begin{alignat*}{2}
\upxi\,X&=X\,\upxi,&\quad
\upeta\,X^M &= (-1)^M\,X^M\,\upeta + (-1)^{M} M(\mathfrak{q}^{-2} - 1)X^{M - 1}
x^{(p-1)}\,y\,\upxi,
\\
\upxi\,Y^{N} &= (-1)^N\, Y^{N}\,\upxi,&
\quad
\upeta\, Y&= Y\,\upeta,
\end{alignat*}
and the differential is readily expressed as
\begin{multline*}
d(X^M Y^N x^{(m)} y^{(n)}) =\mathfrak{q}^{m+n-1}
\begin{cases}
X^M Y^N x^{(m-1)} y^{(n)} \upxi,&m\neq0,\\
-(-1)^{N p} M\, X^{M-1} Y^N x^{(p-1)} y^{(n)}\upxi,& m=0
\end{cases}\\*
{}+\mathfrak{q}^{n-1}
\begin{cases}
X^M Y^N x^{(m)} y^{(n-1)} \upeta,& n\neq0,\\
-(-1)^m N\, X^M Y^{N-1} x^{(m)} y^{(p-1)} \upeta,& n=0.
\end{cases}
\end{multline*}
In particular,
$d(X^M\,Y^N) = -(-1)^{N p}M\mathfrak{q}^{-1} X^{M-1} Y^N x^{(p-1)}\upxi -N\mathfrak{q}^{-1}
X^M Y^{N-1} y^{(p-1)}\upeta$.
\subsubsection{}\label{U-XYloc}
The $\overline{\mathscr{U}}_{\q} s\ell(2)$ action in the new basis~\eqref{Frob-basis} becomes
\begin{align*}
E(X^M Y^N x^{(m)} y^{(n)})&=
\begin{cases}
[m+1] X^M Y^N x^{(m+1)} y^{(n-1)},& n\neq0,\\
(-1)^m N [m+1] X^M Y^{N-1} x^{(m+1)} y^{(p-1)},& n=0,
\end{cases}
\\
K(X^M Y^N x^{(m)} y^{(n)})&=
(-1)^{M+N}\mathfrak{q}^{m-n} X^M Y^N x^{(m)} y^{(n)},
\\
F(X^M Y^N x^{(m)} y^{(n)})&=
\begin{cases}
(-1)^{M+N} [n+1] X^M Y^N x^{(m-1)} y^{(n+1)},& m\neq 0,\\
M(-1)^{M-1+N(p-1)} [n+1] X^{M-1} Y^N x^{(p-1)} y^{(n+1)},& m=0,
\end{cases}
and on $1$-forms, accordingly,
\begin{align*}
E(X^M Y^N x^{(m)} y^{(n)} \upxi)&=
\begin{cases}
\mathfrak{q}\, [m+1] X^M Y^N x^{(m+1)} y^{(n-1)} \upxi,& n\neq0,\\
(-1)^m N \mathfrak{q}\, [m+1] X^M Y^{N-1} x^{(m+1)} y^{(p-1)} \upxi,& n=0,
\end{cases}
\\
E(X^M Y^N x^{(m)} y^{(n)} \upeta)&=
\begin{cases}
\mathfrak{q}^{-1}[m+1] X^M Y^N x^{(m+1)} y^{(n-1)} \upeta,& n\neq0,\\
(-1)^m N \mathfrak{q}^{-1}[m+1] X^M Y^{N-1} x^{(m+1)} y^{(p-1)} \upeta,& n=0
\end{cases}\\*
&\quad{}+X^M Y^N x^{(m)} y^{(n)} \upxi,
\\
F(X^M Y^N x^{(m)} y^{(n)} \upxi)&=
\begin{cases}
(-1)^{M+N} [n+1] X^M Y^N x^{(m-1)} y^{(n+1)} \upxi,& m\neq 0,\\
M(-1)^{M-1+N(p-1)} [n+1] X^{M-1} Y^N x^{(p-1)} y^{(n+1)} \upxi,& m=0
\end{cases}\\*
&\quad{}+ (-1)^{M+N}\mathfrak{q}^{n-m} X^M Y^N x^{(m)} y^{(n)} \upeta,
\\
F(X^M Y^N x^{(m)} y^{(n)} \upeta)&=
\begin{cases}
(-1)^{M+N} [n+1] X^M Y^N x^{(m-1)} y^{(n+1)} \upeta,& m\neq 0,\\
M(-1)^{M-1+N(p-1)} [n+1] X^{M-1} Y^N x^{(p-1)} y^{(n+1)} \upeta,&
m=0.
\end{cases}
\end{align*}
\subsection{Module decomposition}
We next find how $\underline{\Omega}_{\mathfrak{q}}$ decomposes as a $\overline{\mathscr{U}}_{\q} s\ell(2)$-module. The $\overline{\mathscr{U}}_{\q} s\ell(2)$
representations encountered in what follows are the irreducible
representations $\mathscr{X}^\pm_r$ ($1\leqslant r\leqslant p$), indecomposable
representations $\mathscr{W}^\pm_r(n)$ ($1\leqslant r\leqslant p-1$, $n\geqslant2$), and
projective modules $\mathscr{P}^\pm_{p-1}$. All of these are described
in~\cite{[FGST2]}; we only briefly recall from~\cite{[FGST],[FGST2]}
that $\overline{\mathscr{U}}_{\q} s\ell(2)$ has $2p$ irreducible representations $\mathscr{X}^\pm_r$, $1\leqslant
r\leqslant p$, with $\dim\mathscr{X}^\pm_r=r$. With the respective projective
covers denoted by $\mathscr{P}^\pm_r$, the ``plus'' representations are
distinguished from the ``minus'' ones by the fact that tensor products
$\mathscr{X}^+_r\otimes\mathscr{X}^+_s$ decompose into the $\mathscr{X}^+_{r'}$ and
$\mathscr{P}^+_{r'}$~\cite{[S-q]} (and $\mathscr{X}^+_1$ is the trivial
representation). We also note that $\dim\mathscr{P}^\pm_r=2p$ if $1\leqslant
r\leqslant p-1$.
We first decompose the space of $0$-forms, the quantum plane
$\dpshort$ itself; clearly, the $\overline{\mathscr{U}}_{\q} s\ell(2)$ action restricts to each graded
subspace $(\dpshort)_i$ spanned by $x^{(m)}y^{(n)}$ with $m+n=i$,
$i\geqslant0$. To avoid an unnecessarily long list of formulas, we
explicitly write the decompositions for $(\dpshort)_{(i)}$ spanned by
$x^{(m)}y^{(n)}$ with $0\leqslant m+n \leqslant i$; the decomposition of each
graded component is in fact easy to
extract.\enlargethispage{.5\baselineskip}
\begin{lemma}
$\dpshort$ decomposes into $\overline{\mathscr{U}}_{\q} s\ell(2)$-representations as follows:
\begin{align}\label{decomp(p-1)}
(\dpshort)_{(p-1)} &= \bigoplus_{r=1}^{p}\mathscr{X}^+_r
\\
\intertext{\textup{(}where each
$\mathscr{X}^+_r\in(\dpshort)_{r-1}$\textup{)},}
\label{decomp(2p-1)}
(\dpshort)_{(2p-1)} &=
(\dpshort)_{(p-1)}\oplus\smash[t]{\bigoplus_{r=1}^{p-1}}\mathscr{W}^{-}_r(2)
\oplus 2\mathscr{X}^-_{p}\\
\intertext{\textup{(}where each
$\mathscr{W}^{-}_r(2)\in(\dpshort)_{p+r-1}$ and
$\mathscr{X}^-_{p}\in(\dpshort)_{2p-1}$\textup{)}, and, in general,}
\label{dpdecomp-general}
(\dpshort)_{(n p-1)}&= (\dpshort)_{((n-1)
p-1)}\oplus\bigoplus_{r=1}^{p-1}\mathscr{W}^{\pm}_r(n) \oplus
n\mathscr{X}^{\pm}_p
\end{align}
with the $+$ sign for odd $n$ and $-$ for even $n$.
\end{lemma}
The proof is by explicit construction and dimension counting; the
general pattern of the construction is already clear from the lower
grades. For $1\leqslant r\leqslant p$, the irreducible representations
$\mathscr{X}^+_r$ are realized as
\begin{equation*}
\xymatrix{
x^{(r-1)}\ar^F@/^/[r]
&x^{(r-2)}\,y\ar^(.5)E@/^/[l]
\ar^F@/^/[r]
&\ \dots\ \ar^{E}@/^/[l]\ar^{F}@/^/[r]
&y^{(r-1)}.\ar^{E}@/^/[l]
}
\end{equation*}
(the $E$ and $F$ arrows represent the respective maps up to nonzero
factors). The $\mathscr{X}^+_r$ thus constructed exhaust the space
$(\dpshort)_{(p-1)}$, which gives~\eqref{decomp(p-1)}. To pass to the
next range of grades, Eq.~\eqref{decomp(2p-1)}, it is easiest to
multiply the above monomials with $X$ or $Y$ and then note that (up to
sign factors) this operation commutes with the $\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$ action except ``at
the ends'' in half the cases, as is expressed by the identities
\begin{small}
\begin{align*}
X E(X^M Y^N x^{(m)} y^{(n)}) - E(X^{M+1} Y^N x^{(m)} y^{(n)}) &=0,\\
Y E(X^M Y^N x^{(m)} y^{(n)}) - (-1)^{M p} E(X^{M} Y^{N+1} x^{(m)}
y^{(n)}) &=
-\delta_{n,0}(-1)^{M p + m} [m+1] X^M Y^N x^{(m+1)} y^{(p-1)},\\
X F(X^M Y^N x^{(m)} y^{(n)}) + F(X^{M+1} Y^N x^{(m)} y^{(n)}) &=
\delta_{m,0}(-1)^{M + N(p-1)}[n+1] X^M Y^N x^{(p-1)} y^{(n+1)},\\
Y F(X^M Y^N x^{(m)} y^{(n)}) + (-1)^{M p}F(X^{M} Y^{N+1} x^{(m)}
y^{(n)}) &=0.
\end{align*}
\end{small}
It then immediately follows that for each $r=1,\dots,p-1$, the states
\begin{small}
\begin{gather}\label{W(2)-details}
\xymatrix@=14pt{
X x^{(r-1)}\ar^F@/^/[r]
&\ \dots\ \ar^{E}@/^/[l]\ar^{F}@/^/[r]
&X y^{(r-1)}\ar^{E}@/^/[l]
\ar^F[dr]
&&&&Y x^{(r-1)}\ar_E[dl]
\ar^F@/^/[r]
&\ \dots\ \ar^{E}@/^/[l]\ar^{F}@/^/[r]
&Y y^{(r-1)}\ar^{E}@/^/[l]
\\
&&&x^{(p-1)}y^{(r)}\ar^F@/^/[r]
&\ \dots\ \ar^{E}@/^/[l]\ar^{F}@/^/[r]
&x^{(r)}y^{(p-1)}\ar^{E}@/^/[l]
}
\end{gather}
\end{small}
realize the representation~\cite{[FGST2]}
\begin{equation}\label{W(2)}
\raisebox{-\baselineskip}{\mbox{$\displaystyle\mathscr{W}^{-}_r(2)={}$}}
\xymatrix@=14pt{
\mathscr{X}^-_{r}\ar[dr]&&\mathscr{X}^-_{r}\ar[dl]\\
&\mathscr{X}^+_{p-r}&
}
\end{equation}
These $\mathscr{W}^{-}_r(2)$, together with $(\dpshort)_{(p-1)}$, fill the space $(\dpshort)_{(2p-2)}$.
Next, in the subspace $(\dpshort)_{2p-1}$, this picture degenerates
into a sum of two $\mathscr{X}^-_{p}$, spanned by $X x^{(p-1)},\dots,X
y^{(p-1)}$ and $Y x^{(p-1)},\dots,Y y^{(p-1)}$, with the result
in~\eqref{decomp(2p-1)}.
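The dimensions also match in~\eqref{decomp(2p-1)}: since $\dim\mathscr{W}^{-}_r(2)=2r+(p-r)=p+r$ and $\dim\mathscr{X}^-_p=p$, the right-hand side counts
\begin{equation*}
\frac{p(p+1)}{2}+\sum_{r=1}^{p-1}(p+r)+2p
=\frac{p(p+1)}{2}+\frac{3p(p-1)}{2}+2p
=p(2p+1)=\dim(\dpshort)_{(2p-1)}.
\end{equation*}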
The longer ``snake'' modules follow similarly; for example, in the
$\mathscr{W}^+_r(3)$ module
\begin{equation*}
\xymatrix@=14pt{
\mathscr{X}^+_{r}\ar[dr]&&\mathscr{X}^+_{r}\ar[dl]\ar[dr]&&\mathscr{X}^+_{r}\ar[dl]\\
&\mathscr{X}^-_{p-r}&&\mathscr{X}^-_{p-r}
}
\end{equation*}
the leftmost state in the left $\mathscr{X}^+_{r}$ is given by $X^2
x^{(r-1)}$ and the rightmost state in the right $\mathscr{X}^+_{r}$ is $Y^2
y^{(r-1)}$. These $\mathscr{W}^+_r(3)$ with $1\leqslant r\leqslant p-1$ exhaust
$(\dpshort)_{(3p-2)}$,
and for $r=p$, the snake degenerates into $3\mathscr{X}^+_{p}$,
and so on, yielding~\eqref{dpdecomp-general} in general.
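As a check on~\eqref{dpdecomp-general}, one can note that $\dim\mathscr{W}^{\pm}_r(n)=nr+(n-1)(p-r)$, so the new summands account precisely for the monomials of grades $(n-1)p,\dots,np-1$:
\begin{equation*}
\sum_{r=1}^{p-1}\bigl(nr+(n-1)(p-r)\bigr)+np
=(n-1)p(p-1)+\frac{p(p-1)}{2}+np
=p\,\frac{(2n-1)p+1}{2}
=\sum_{i=(n-1)p}^{np-1}(i+1).
\end{equation*}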
Next, to describe the decomposition of the $1$-forms $\underline{\Omega}_{\mathfrak{q}}^1$, we
use the grading by the subspaces
$(\underline{\Omega}_{\mathfrak{q}}^1)_{i}=(\dpshort)_{i}\upxi+(\dpshort)_{i}\upeta$, $i\geqslant0$, and
set $(\underline{\Omega}_{\mathfrak{q}}^1)_{(i)}=(\dpshort)_{(i)}\upxi+(\dpshort)_{(i)}\upeta$
accordingly. For those representations $\mathscr{X}\in\dpshort$ that are
unchanged under the action of~$d$, we write $d\mathscr{X}$ for their
isomorphic images in $\underline{\Omega}_{\mathfrak{q}}^1$, to distinguish the $d$-exact
representations in the decompositions that follow.
\begin{lemma}
The space of $1$-forms $\underline{\Omega}_{\mathfrak{q}}^1$ decomposes into
$\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$-representations as follows:
\begin{align}\label{decompO(p-1)}
(\underline{\Omega}_{\mathfrak{q}}^1)_{(p-1)}&=\bigoplus_{r=2}^{p} d\mathscr{X}^+_{r}\oplus
\bigoplus_{r=1}^{p-2}\mathscr{X}^+_{r}\oplus \mathscr{P}^+_{p-1}
\\
\intertext{\textup{(}with $d\mathscr{X}^+_r\in(\underline{\Omega}_{\mathfrak{q}}^1)_{r-2}$,
$\mathscr{X}^+_r\in(\underline{\Omega}_{\mathfrak{q}}^1)_{r}$, and
$\mathscr{P}^+_{p-1}\in(\underline{\Omega}_{\mathfrak{q}}^1)_{p-1}$\textup{)},}
\label{decompO(2p-1)}
(\underline{\Omega}_{\mathfrak{q}}^1)_{(2p-1)} &=(\underline{\Omega}_{\mathfrak{q}}^1)_{(p-1)}
\oplus\bigoplus_{r=2}^{p-1} d\mathscr{W}^-_r(2) \oplus 2d\mathscr{X}^-_p
\oplus\mathscr{X}^+_p \oplus\bigoplus_{r=1}^{p-2}\mathscr{W}^-_r(2) \oplus
2\mathscr{P}^-_{p-1}
\\
\intertext{\textup{(}with $d\mathscr{W}^-_r(2)\in(\underline{\Omega}_{\mathfrak{q}}^1)_{p+r-2}$,
$d\mathscr{X}^-_p\in(\underline{\Omega}_{\mathfrak{q}}^1)_{2p-2}$,
$\mathscr{X}^+_p\in(\underline{\Omega}_{\mathfrak{q}}^1)_{p}$,
$\mathscr{W}^-_r(2)\in(\underline{\Omega}_{\mathfrak{q}}^1)_{p+r}$, and
$\mathscr{P}^-_{p-1}\in(\underline{\Omega}_{\mathfrak{q}}^1)_{2p-1}$\textup{)},}
\label{decompO(3p-1)}
(\underline{\Omega}_{\mathfrak{q}}^1)_{(3p-1)} &=(\underline{\Omega}_{\mathfrak{q}}^1)_{(2p-1)} \oplus
\smash[t]{\bigoplus_{r=2}^{p-1}} d\mathscr{W}^+_r(3) \oplus 3d\mathscr{X}^+_p \oplus
2\mathscr{X}^-_p \oplus\smash[t]{\bigoplus_{r=1}^{p-2}} \mathscr{W}^+_r(3)
\oplus3\mathscr{P}^{+}_{p-1}
\\
\intertext{\textup{(}with $d\mathscr{W}^+_r(3)\in(\underline{\Omega}_{\mathfrak{q}}^1)_{2p+r-2}$,
$d\mathscr{X}^+_p\in(\underline{\Omega}_{\mathfrak{q}}^1)_{3p-2}$,
$\mathscr{X}^-_p\in(\underline{\Omega}_{\mathfrak{q}}^1)_{2p}$,
$\mathscr{W}^+_r(3)\in(\underline{\Omega}_{\mathfrak{q}}^1)_{2p+r}$, and
$\mathscr{P}^{+}_{p-1}\in(\underline{\Omega}_{\mathfrak{q}}^1)_{3p-1}$\textup{)}, and, in general,}
\label{decomp-last}
(\underline{\Omega}_{\mathfrak{q}}^1)_{(n p-1)} &=(\underline{\Omega}_{\mathfrak{q}}^1)_{((n-1)p-1)}\oplus
\smash[t]{\bigoplus_{r=2}^{p-1}} d\mathscr{W}^{\pm}_r(n) \oplus
n\,d\mathscr{X}^{\pm}_p\\*
&\notag \qquad\qquad\qquad{}\oplus (n-1)\mathscr{X}^{\mp}_p
\oplus\bigoplus_{r=1}^{p-2}\mathscr{W}^{\pm}_r(n) \oplus
n\mathscr{P}^{\pm}_{p-1},
\end{align}
with the upper signs for odd $n$ and the lower signs for even~$n$.
\end{lemma}
The proof is again by construction. To begin with
$(\underline{\Omega}_{\mathfrak{q}}^1)_{(p-1)}$ in~\eqref{decompO(p-1)}, we first note that it
contains $d\mathscr{X}^+_r\approx\mathscr{X}^+_{r}$ spanned by\pagebreak[3] $d x^{(r-1)}$,
$d(x^{(r-2)}\,y)$, $\dots$, $d y^{(r-1)}$ for each $r=2,\dots,p$
(clearly, the singlet $\mathscr{X}^+_1$ spanned by~$1$ for $r=1$ is
annihilated by~$d$). Next, for each $r=1,\dots,p-2$, there is an
$\mathscr{X}^+_r$ spanned by the $1$-forms
\begin{equation*}
x^{(r-1)}y\upxi-\mathfrak{q}[r]x^{(r)}\upeta,\;\dots,\;
[i+1]x^{(r-i-1)}y^{(i+1)}\upxi - [r-i]\mathfrak{q}^{i+1}x^{(r-i)}y^{(i)}\upeta,\;
\dots,\;
[r]y^{(r)}\upxi - \mathfrak{q}^rx y^{(r-1)}\upeta
\end{equation*}
(with just the singlet $y\upxi-\mathfrak{q} x\upeta$ for $r=1$); for $r=p-1$,
however, these states are in the image of~$d$ and actually constitute
the socle of the projective module $\mathscr{P}^+_{p-1}$ realized as
\begin{small}
\begin{gather}\label{p-1-proj}
\xymatrix@=16pt{
&x^{(p-1)}\upeta \ar[dl]_(.5)E
\ar^{F}@/^/[r]&\qquad\dots\qquad\ar^{E}@/^/[l]\ar^{F}@/^/[r] &x
y^{(p-2)}\upeta\ar^{E}@/^/[l] \ar[dr]^(.6)F
\ar^E[];[ddl]+<26pt,26pt>
\\
x^{(p-1)}\upxi \ar[dr]^(.7)F&&&& y^{(p-1)}\upeta \ar[dl]_E
\\
& d(x^{(p-1)}y) \ar^{F}@/^/[r] \ar_(.3)E[uur]-<15pt,25pt>;[]
&\dots\ar^{E}@/^/[l]\ar^{F}@/^/[r]& d(x y^{(p-1)})
\ar^{E}@/^/[l] }
\end{gather}
\end{small}
This gives~\eqref{decompO(p-1)}.
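Counting the states in~\eqref{p-1-proj} gives $\dim\mathscr{P}^+_{p-1}=(p-1)+2+(p-1)=2p$, and the dimensions in~\eqref{decompO(p-1)} then agree:
\begin{equation*}
\sum_{r=2}^{p}r+\sum_{r=1}^{p-2}r+2p
=\Bigl(\frac{p(p+1)}{2}-1\Bigr)+\frac{(p-1)(p-2)}{2}+2p
=p(p+1)=\dim(\underline{\Omega}_{\mathfrak{q}}^1)_{(p-1)}.
\end{equation*}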
In the next grade, we have $(\underline{\Omega}_{\mathfrak{q}}^1)_{p} = d\mathscr{W}^-_2(2) \oplus
\mathscr{X}^+_p$, where $\mathscr{X}^+_p$ is spanned by
\begin{equation*}
x^{(p-1)}y\upxi
,\
\dots
,\
[i+1]x^{(p-i-1)}y^{(i+1)}\upxi - [i]\mathfrak{q}^{i+1}x^{(p-i)}y^{(i)}\upeta
,\
\dots
,\
x y^{(p-1)}\upeta
\end{equation*}
(with $i=0$ at the left end and $i=p-1$ at the right end). Next,
\begin{equation*}
(\underline{\Omega}_{\mathfrak{q}}^1)_{(2p-2)}
=(\underline{\Omega}_{\mathfrak{q}}^1)_{(p-1)} \oplus\mathscr{X}^+_p \oplus\bigoplus_{r=2}^{p-1}
d\mathscr{W}^-_r(2) \oplus 2d\mathscr{X}^-_p
\oplus\bigoplus_{r=1}^{p-2}\mathscr{W}^-_r(2),
\end{equation*}
where the $\mathscr{W}^-_r(2)$ modules with $r=1,\dots,p-2$ are spanned by
(with nonzero factors dropped)
\begin{small}
\begin{equation}\label{OW2}
\mbox{}\kern-10pt
\xymatrix@C=0pt{
\mbox{\footnotesize$\begin{array}{l}
X x^{(r-1)}y\upxi\\
{}-\mathfrak{q}[r]X x^{(r)}\upeta
\end{array},$}
\dots,
\mbox{\footnotesize$\begin{array}{l}
[r]X y^{(r)}\upxi\\
{}-\mathfrak{q}^r X x y^{(r-1)}\upeta
\end{array}$}\ar^(.5)F[]+<70pt,-20pt>;[dr]+<-50pt,14pt>
&&
\mbox{\footnotesize$\begin{array}{l}
Y x^{(r-1)} y\upxi\\
-\mathfrak{q}[r]Y x^{(r)}\upeta
\end{array},$}
\dots,
\mbox{\footnotesize$\begin{array}{l}
[r]Y y^{(r)}\upxi\\
-\mathfrak{q} Y x y^{(r-1)}\upeta
\end{array}$}
\ar_(.5)E[]+<-70pt,-20pt>;[dl]+<50pt,14pt>
\\
&x^{(p-1)}y^{(r+1)}\upxi,\ \dots,\
x^{(r+1)}y^{(p-1)}\upeta
}
\end{equation}
\end{small}
In the next grade, the two $\mathscr{P}^-_{p-1}$ modules occurring
in~\eqref{decompO(2p-1)} are given by
\begin{small}
\begin{gather}\label{two-proj}
\xymatrix@C=0pt{
&X x^{(p-1)}\upeta, \dots,X x y^{(p-2)}\upeta
\ar[]+<-40pt,-12pt>;[dl]_(.5)E \ar[]+<40pt,-12pt>;[dr]^(.6)F
\\
X x^{(p-1)}\upxi \ar[];[dr]+<-50pt,12pt>^(.7)F&& X y^{(p-1)}\upeta
\ar[];[dl]+<50pt,12pt>_E
\\
& d(X x^{(p-1)}y), \dots, d(X xy^{(p-1)}) } \xymatrix@C=0pt{
&Y x^{(p-1)}\upeta, \dots,Y x y^{(p-2)}\upeta
\ar[]+<-40pt,-12pt>;[dl]_(.5)E \ar[]+<40pt,-12pt>;[dr]^(.6)F
\\
Y x^{(p-1)}\upxi \ar[];[dr]+<-50pt,12pt>^(.7)F&& Y y^{(p-1)}\upeta
\ar[];[dl]+<50pt,12pt>_E
\\
& d(Y x^{(p-1)} y),\dots, d(Y x y^{(p-1)}) }
\end{gather}
\end{small}
(the left--right arrows are again omitted for compactness). Next, we
have
\begin{equation*}
(\underline{\Omega}_{\mathfrak{q}}^1)_{(3p-2)}
=(\underline{\Omega}_{\mathfrak{q}}^1)_{(2p-1)}\oplus 2\mathscr{X}^-_p\oplus \bigoplus_{r=2}^{p-1}
d\mathscr{W}^+_r(3) \oplus
3d\mathscr{X}^+_p\oplus\bigoplus_{r=1}^{p-2}\mathscr{W}^+_r(3),
\end{equation*}
where,
for example, each $\mathscr{W}^+_r(3)$ is spanned by monomials that can be
easily constructed starting with $X^2 x^{(r-1)}y\upxi-\mathfrak{q}[r]X^2
x^{(r)}\upeta$ in the top-left corner, and so on; the pattern readily
extends to higher degrees by multiplying with powers of $X$ and $Y$,
yielding~\eqref{decomp-last}.
\subsection{Extending to Lusztig's quantum group}
\subsubsection{}\label{sec:LGQ}
The above formulas for the $\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$ action of course imply that $E^p$ and
$F^p$ act on $\underline{\Omega}_{\mathfrak{q}}$ by zero. But using Lusztig's trick once
again, we can define the action of Lusztig's divided-power quantum
group $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$ that extends our $\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$. As before, temporarily taking
the quantum group deformation parameter $q$ generic, evaluating the
action of $\boldsymbol{\mathscr{E}}=\fffrac{1}{[p]!}\,E^{p}$ and
$\boldsymbol{\mathscr{F}}=\fffrac{1}{[p]!}\,F^{p}$, and finally setting $q=\mathfrak{q}$ gives
\begin{align*}
\boldsymbol{\mathscr{E}}(X^M\,Y^N\,x^{(m)} y^{(n)}) &=
N(-1)^{(N-1)p + n + m}\,X^{M+1}\,Y^{N-1}\,x^{(m)} y^{(n)},\\
\boldsymbol{\mathscr{F}}(X^M\,Y^N\,x^{(m)} y^{(n)}) &=
M(-1)^{(M-1)p}\,X^{M-1}\,Y^{N+1}\,x^{(m)} y^{(n)}.\\
\intertext{Then, clearly, $\boldsymbol{\mathscr{H}}=[\boldsymbol{\mathscr{E}},\boldsymbol{\mathscr{F}}]$ acts on $\dpshort$ as}
\boldsymbol{\mathscr{H}}(X^M\,Y^N\,x^{(m)} y^{(n)}) &=
(-1)^{(M+N-1)p + m + n}(M - N) X^M\,Y^N\,x^{(m)} y^{(n)}.
\end{align*}
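For example, on $X$ and $Y$ alone (that is, for $m=n=0$), the commutator is immediate from the two formulas above:
\begin{equation*}
[\boldsymbol{\mathscr{E}},\boldsymbol{\mathscr{F}}](X^M Y^N)
=\bigl(M(N+1)-N(M+1)\bigr)(-1)^{(M+N-1)p}\,X^M Y^N
=(-1)^{(M+N-1)p}(M-N)\,X^M Y^N,
\end{equation*}
in agreement with the formula for $\boldsymbol{\mathscr{H}}$.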
Modulo the ``slight noncommutativity'' of $X$ and $Y$ for odd $p$, we
here have the standard $\SL2$ action on functions of the homogeneous
coordinates on $\mathbb{C}\mathbb{P}^1$:
\begin{equation*}
\boldsymbol{\mathscr{E}} f(X,Y,x,y) = f(X,Y,x,y)\,
\overleftarrow{\!\ffrac{\partial}{\partial Y}\!}\,\,X,
\quad
\boldsymbol{\mathscr{F}} f(X,Y,x,y) = Y\,\overrightarrow{\!\ffrac{\partial}{\partial X}\!}\,\,
f(X,Y,x,y).
\end{equation*}
From the standpoint of this $\mathbb{C}\mathbb{P}^1$, the $x^{(m)}$ and $y^{(n)}$
are to be regarded as some kind of ``infinitesimals,'' almost (modulo
signs) invisible to the $\mathscr{U}s\ell(2)$ generated by $\boldsymbol{\mathscr{E}}$ and
$\boldsymbol{\mathscr{F}}$.
The $\boldsymbol{\mathscr{E}}$ and $\boldsymbol{\mathscr{F}}$ act on $\underline{\Omega}_{\mathfrak{q}}^1$ as
\begin{align*}
\boldsymbol{\mathscr{E}}(X^M Y^N x^{(m)} y^{(n)} \upxi) &=
-N(-1)^{(N-1)p + n + m}\,X^{M+1}\,Y^{N-1}\,x^{(m)} y^{(n)}\,\upxi,\\
\boldsymbol{\mathscr{E}}(X^M Y^N x^{(m)} y^{(n)} \upeta) &=
-N(-1)^{(N-1)p + n + m}\,X^{M+1}\,Y^{N-1}\,x^{(m)} y^{(n)}\,\upeta+{}\\*
&\quad{}+\delta_{m,0}
\begin{cases}
N(-1)^n\,X^M Y^{N-1}\,x^{(p-1)} y^{(n+1)}\,\upxi,& 0\leqslant n\leqslant p-2,\\
X^M Y^N\,x^{(p-1)}\,\upxi,& n=p-1,
\end{cases}
\\
\boldsymbol{\mathscr{F}}(X^M Y^N x^{(m)} y^{(n)} \upxi) &=
M(-1)^{(M-1)p}\,X^{M-1}\,Y^{N+1}\,x^{(m)} y^{(n)}\,\upxi\\*
&{}+ \delta_{n,0}
\begin{cases}
M (-1)^{(M-1)p + m}\mathfrak{q}^{-m-1} X^{M-1} Y^N x^{(m+1)} y^{(p-1)}\upeta,&
0\leqslant m\leqslant p-2,\\
(-1)^{(M+N)p}\,X^M Y^N\,y^{(p-1)}\,\upeta,& m=p-1,
\end{cases}
\\
\boldsymbol{\mathscr{F}}(X^M Y^N x^{(m)} y^{(n)} \upeta) &=
M(-1)^{(M-1)p}\,X^{M-1}\,Y^{N+1}\,x^{(m)} y^{(n)}\,\upeta.
\end{align*}
\subsubsection{$\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$ representations}Using the above formulas, it is
easy to see how the $\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$ representations in
\eqref{decomp(p-1)}--\eqref{decomp-last} combine into
$\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-representations. The simple result is that
decomposition~\eqref{dpdecomp-general} rewrites in terms of
$\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-representations as
\begin{gather*}
(\dpshort)_{(n p-1)}= (\dpshort)_{((n-1)
p-1)}\oplus\bigoplus_{r=1}^{p-1}\underbracket{\mathscr{W}^{\pm}_r(n)}
\oplus \underbracket{n\mathscr{X}^{\pm}_p},
\end{gather*}
where the underbrackets indicate that each $\mathscr{W}^{\pm}_r(n)$ becomes
an (indecomposable) $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-representation and the $n$ copies of
$\mathscr{X}^{\pm}_p$ are combined into an (irreducible)
$\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-repre\-sentation.\footnote{Recently, $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-representations were
systematically analyzed in~\cite{[BFGT]}; a more advanced notation
than the underbracketing is used there.} Similarly,
\eqref{decomp-last} becomes
\begin{gather*}
(\underline{\Omega}_{\mathfrak{q}}^1)_{(n p-1)} =(\underline{\Omega}_{\mathfrak{q}}^1)_{((n-1)p-1)}\oplus
\bigoplus_{r=2}^{p-1} \underbracket{d\mathscr{W}^{\pm}_r(n)} \oplus
\underbracket{n\,d\mathscr{X}^{\pm}_p}\oplus \underbracket{(n-1)\mathscr{X}^{\mp}_p}
\oplus\bigoplus_{r=1}^{p-2}\underbracket{\mathscr{W}^{\pm}_r(n)} \oplus
\underbracket{n\mathscr{P}^{\pm}_{p-1}},
\end{gather*}
where $(n-1)\mathscr{X}^{\mp}_p$ is an irreducible, $\mathscr{W}^{\pm}_r(n)$ an
indecomposable, and $n\mathscr{P}^{\pm}_{p-1}$ a projective $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-module.
To see this, we first note that $\boldsymbol{\mathscr{E}}$ and $\boldsymbol{\mathscr{F}}$ act trivially
on~\eqref{decomp(p-1)}; each of the $\mathscr{X}^+_r$ representations of
$\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$ is also an irreducible representation of $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$. Next,
in~\eqref{decomp(2p-1)}, for the $\mathscr{W}^{-}_r(2)$ modules shown
in~\eqref{W(2)}, $\boldsymbol{\mathscr{E}}$ and $\boldsymbol{\mathscr{F}}$ map between the two $\mathscr{X}^-_r$ and
(in this lowest, $n=2$, case of $\mathscr{W}^{\pm}_r(n)$ modules) act
trivially on the socle,
\begin{equation}\label{W(2)-L}
\xymatrix@=14pt{
\mathscr{X}^-_{r}\ar[dr]
\ar@{-->}@/^8pt/_{\boldsymbol{\mathscr{F}}}[]+<4pt,10pt>;[rr]+<-4pt,10pt>
&&\mathscr{X}^-_{r}
\ar@{-->}@/_16pt/_{\boldsymbol{\mathscr{E}}}[]+<0pt,12pt>;[ll]+<-0pt,12pt>
\ar[dl]
\\
&\mathscr{X}^+_{p-r}
&
}
\end{equation}
yielding an indecomposable representation of~$\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$. By the same
pattern, the sum of two $\mathscr{X}^-_{p}$ in~\eqref{decomp(2p-1)} becomes
an irreducible $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-representation. This extends to the general
case~\eqref{dpdecomp-general}: $\boldsymbol{\mathscr{E}}$ and $\boldsymbol{\mathscr{F}}$ act horizontally on all
the $\mathscr{W}^{\pm}_r(n)$ modules and also on the $n$ copies of
$\mathscr{X}^-_p$, $n\geqslant2$, making them into respectively indecomposable
and irreducible $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-modules.
Passing to the space of $1$-forms, each of the $\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$-representations in
\eqref{decompO(p-1)} remains an $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$ representation; $\boldsymbol{\mathscr{E}}$ and $\boldsymbol{\mathscr{F}}$
act trivially on the $\mathscr{X}^+_{r}$ and interchange the ``corners'' of
$\mathscr{P}^+_{p-1}$ shown in~\eqref{p-1-proj}:
\begin{equation*}
\boldsymbol{\mathscr{F}}(x^{(p-1)}\upxi)=y^{(p-1)}\upeta,\qquad
\boldsymbol{\mathscr{E}}(y^{(p-1)}\upeta)=x^{(p-1)}\upxi
\end{equation*}
($\mathscr{P}^+_{p-1}$ thus becomes a projective $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-module).
Next, in~\eqref{decompO(2p-1)}, $\mathscr{X}^+_p$ remains an irreducible
$\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-module (with $\boldsymbol{\mathscr{E}}$ and $\boldsymbol{\mathscr{F}}$ acting trivially), and\pagebreak[3]
each of the $\mathscr{W}^-_r(2)$ is an indecomposable $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-module;
in~\eqref{OW2}, for example, $\boldsymbol{\mathscr{F}}(X x^{(r-1)}y\upxi-\mathfrak{q}[r]X x^{(r)}\upeta)=Y
x^{(r-1)} y\upxi -\mathfrak{q}[r]Y x^{(r)}\upeta$, etc. Further, the sum of two
$\mathscr{P}^-_{p-1}$ in~\eqref{decompO(2p-1)} becomes a projective
$\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$-module; there, the top floor (see~\eqref{two-proj}) is a doublet
under the $s\ell(2)$ Lie algebra of $\boldsymbol{\mathscr{E}}$ and $\boldsymbol{\mathscr{F}}$, the middle floor
is a singlet ($X y^{(p-1)}\upeta - (-1)^p Y x^{(p-1)}\upxi$) plus a triplet
($X x^{(p-1)}\upxi$, $X y^{(p-1)}\upeta +(-1)^p Y x^{(p-1)}\upxi$, $Y
y^{(p-1)}\upeta$),
and the lowest floor is also a doublet.
The organization of $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$ representations in the general case
in~\eqref{decomp-last} is now obvious; it largely follows from the
picture in the lower degrees by multiplying with powers of $X$ and
$Y$.
\subsubsection{``Lusztig--Frobenius localization''}
The formulas in~\bref{sec:Frobenius} can be extended from nonnegative
integer to integer powers of $X$ and $Y$. This means passing from
$\dpshort=\mathbb{C}_{\mathfrak{q}}[X,Y,x,y]$ to $\mathbb{C}_{\mathfrak{q}}[X,X^{-1},Y,Y^{-1},x,y]$,
which immediately gives rise to infinite-dimensional $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$
representations, as we now indicate very briefly.
Taking the picture in~\eqref{W(2)-details} (with $1\leqslant r\leqslant p-1$)
and multiplying all monomials by $X^{-1}$ destroys the south-east $F$
arrow and the $\boldsymbol{\mathscr{F}}$ arrows shown in~\eqref{W(2)-L}, yielding
\begin{small}
\begin{equation*}
\xymatrix@=12pt@C=0pt{
x^{(r-1)}
&{\kern-6pt\rightleftarrows\dots\rightleftarrows\kern-6pt}
&y^{(r-1)}
&&&&X^{-1}Y x^{(r-1)}\ar_E[dl]
&*{\rightleftarrows\dots\rightleftarrows}
\ar@{-->}@/_20pt/_{\boldsymbol{\mathscr{E}}}[]+<-6pt,12pt>;[llllll]+<6pt,12pt>
&X^{-1} Y y^{(r-1)}
\\
&&&X^{-1} x^{(p-1)}y^{(r)}
&*{\rightleftarrows\dots\rightleftarrows}
&X^{-1} x^{(r)}y^{(p-1)}
}
\end{equation*}
\end{small}
But this now extends to the right, indefinitely, producing a pattern
shown in Fig.~\thefigure{} (the top diagram), where the $\boldsymbol{\mathscr{E}}$ and
$\boldsymbol{\mathscr{F}}$ generators map between the respective monomials in each block.
A ``mirror-reflected'' picture follows by multiplying~\eqref{W(2)} by
$Y^{-1}$. Multiplying by $X^{-1}Y^{-1}$ gives the bottom diagram in
Fig.~\thefigure{} (where the $\rightleftarrows$ arrows are omitted for
brevity).
After the extension by $X^{-1}$ and $Y^{-1}$, the differential $d$
acquires a nonzero cohomology in $\Omega^1\mathbb{C}[X,X^{-1},Y,Y^{-1},x,y]$,
represented by $X^{-1}x^{(p-1)}\upxi$ and $Y^{-1}y^{(p-1)}\upeta$.
There is a curious possibility to further extend
$\mathbb{C}[X,X^{-1},Y,Y^{-1},x,y]$ by adding a new element $\log(XY)$ that
commutes with $x$, $y$, $X$, and $Y$ and on which the differential and
the quantum group generators act, by definition, as follows:
\begin{alignat*}{2}
d\log(XY) &=-\mathfrak{q}^{-1} X^{-1} x^{(p-1)}\upxi -\mathfrak{q}^{-1} Y^{-1}
y^{(p-1)}\upeta,\kern-120pt
\\
E\log(XY) &= Y^{-1} x y^{(p-1)},&\qquad \boldsymbol{\mathscr{E}}\log(XY) &= (-1)^p X
Y^{-1},
\\
K\log(XY) &= \log(XY),\\
F\log(XY) &= -X^{-1} x^{(p-1)} y,& \boldsymbol{\mathscr{F}}\log(XY) &= (-1)^p X^{-1} Y.
\end{alignat*}
\afterpage{
\rotatebox[origin=lB]{90}{\parbox{\textheight}{
\footnotesize\vspace*{-\baselineskip}
\begin{gather*}
\xymatrix@=12pt@C=4pt{
x^{(r-1)}
&*{\kern-6pt\rightleftarrows\dots\rightleftarrows\kern-6pt}
&y^{(r-1)} &&&& X^{-1} Y x^{(r-1)} \ar_E[dl]
&*{\kern-6pt\rightleftarrows\dots\rightleftarrows\kern-6pt}
\ar@{-->}@/_20pt/_{\boldsymbol{\mathscr{E}}}[]+<-6pt,12pt>;[llllll]+<6pt,12pt>
\ar@{-->}@/^20pt/^{\boldsymbol{\mathscr{F}}}[]+<6pt,10pt>;[rrrrrr]+<0pt,10pt>
&X^{-1} Y y^{(r-1)} \ar[dr]^F &&&&X^{-2} Y^2 x^{(r)}
y^{(p-1)}\ar[dl]^E &*{\kern-6pt\rightleftarrows\dots}
\ar@{-->}@/_16pt/^{\boldsymbol{\mathscr{E}}}[]+<0pt,6pt>;[llllll]+<6pt,6pt>
\\
&&&X^{-1}x^{(p-1)}y^{(r)}
&*{\kern-6pt\rightleftarrows\dots\rightleftarrows\kern-6pt}
\ar@{-->}@/_20pt/_{\boldsymbol{\mathscr{F}}}[]+<6pt,-12pt>;[rrrrrr]+<0pt,-12pt>
&X^{-1}x^{(r)}y^{(p-1)} &&&&X^{-2} Y x^{(p-1)} y^{(r)}
&*{\kern-6pt\rightleftarrows\dots\rightleftarrows\kern-6pt}
\ar@{-->}@/^16pt/_{\boldsymbol{\mathscr{E}}}[]+<0pt,-8pt>;[llllll]+<12pt,-8pt>
&X^{-2} Y x^{(r)} y^{(p-1)} }
\\[\baselineskip]
\xymatrix@=12pt@C=2pt{
&&&Y^{-1} x^{(r-1)} \ar@{-->}@/_/_{\boldsymbol{\mathscr{E}}}[lll]+<-20pt,0pt>
\ar[dl]^E &*{\kern-6pt\dots\kern-6pt} &Y^{-1} y^{(r-1)}
&&&&& X^{-1} x^{(r-1)} &*{\kern-6pt\dots\kern-6pt} &X^{-1}
y^{(r-1)} \ar@{-->}@/^/^{\boldsymbol{\mathscr{F}}}[rrr]+<20pt,0pt> \ar[dr]^F &&&
\\
Y^{-2} x^{(p-1)} y^{(r)} &*{\kern-6pt\dots\kern-6pt}
&Y^{-2} x^{(r)} y^{(p-1)} &&&&X^{-1} Y^{-1} x^{(p-1)}
y^{(r)} \ar@{-->}@/^20pt/_{\boldsymbol{\mathscr{E}}}[llllll]-<6pt,12pt>
&*{\kern-6pt\dots\kern-6pt} &X^{-1} Y^{-1} x^{(r)}
y^{(p-1)}
\ar@{-->}@/_20pt/^{\boldsymbol{\mathscr{F}}}[]-<6pt,10pt>;[rrrrrr]+<16pt,-10pt>
&&&&&X^{-2} x^{(p-1)} y^{(r)} &*{\kern-6pt\dots\kern-6pt}
&X^{-2} x^{(r)} y^{(p-1)} }
\end{gather*}
\textsc{Figure}~\thefigure. Two infinite-dimensional $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$
representations realized on the quantum plane extended by
negative powers of $X$ and $Y$.
\\%
\addtocounter{figure}{1}
\begin{equation*}
\mbox{}\kern20pt\xymatrix@=10pt@C=0pt{
(-1)^p X Y^{-1}
\ar^(.6){F}[dr]
\ar^{d}[dd]
&&&&\log(XY)
\ar@{-->}@/_/_(.5){\boldsymbol{\mathscr{E}}}[llll]
\ar@{-->}@/^/^(.5){\boldsymbol{\mathscr{F}}}[rrrr]
\ar_{E}[dl]
\ar^{F}[dr]
\ar^{d}[dd]
&&
&
&(-1)^p X^{-1} Y
\ar_(.6){E}[dl]
\ar^{d}[dd]
\\
&-Y^{-1} x^{(p-1)} y
\ar^{d}[dd]
&
*{\kern-6pt\rightleftarrows\dots\rightleftarrows\kern-6pt}
&Y^{-1} x y^{(p-1)}
\ar^{d}[dd]
&{}
&-X^{-1} x^{(p-1)} y
\ar^{d}[dd]
&
*{\kern-2pt\rightleftarrows\dots\rightleftarrows\kern-2pt}
&(-1)^p X^{-1} x y^{(p-1)}
\ar^{d}[dd]
\\
\dots \ar^(.6){F}[dr] &&&& *{\begin{array}[t]{l}
-\mathfrak{q}^{-1} X^{-1} x^{(p-1)}\upxi\\[-3pt]
{}-\mathfrak{q}^{-1} Y^{-1} y^{(p-1)}\upeta
\end{array}}
\ar_{E}[dl]
\ar^{F}[dr]
&&&&\dots
\ar^(.4){E}[];[dl]+<46pt,14pt>
\\
&
*{\begin{array}[t]{l}
\mathfrak{q}^{-1} Y^{-1} x^{(p-2)} y\upxi\\[-3pt]
{}-Y^{-1} x^{(p-1)}\upeta
\end{array}}
&
*{\rightleftarrows\dots\rightleftarrows}
&
*{\begin{array}[t]{l}
-\mathfrak{q}^{-1} Y^{-1} y^{(p-1)}\upxi\\[-3pt]
{}-\mathfrak{q}^{-2} Y^{-1} x y^{(p-2)}\upeta
\end{array}}
&{}
&
*{\begin{array}[t]{l}
\mathfrak{q}^{-1} X^{-1} x^{(p-2)} y\upxi\\[-3pt]
{}-X^{-1} x^{(p-1)}\upeta
\end{array}}
&
*{\rightleftarrows\dots\rightleftarrows}
&
*{\begin{array}[t]{l}
(-1)^{p-1}\mathfrak{q}^{-1} X^{-1} y^{(p-1)}\upxi\\[-3pt]
{}-(-1)^p \mathfrak{q}^{-2} X^{-1} x y^{(p-2)} \upeta
\end{array}}
\\
{}&
}
\end{equation*}
\textsc{Figure}~\thefigure. A $\overline{\mathscr{U}}^{L}_{\mathfrak{q}} s\ell(2)$ representation involving
$\log(XY)$ on the extended quantum plane, and its map by the
differential.
\addtocounter{figure}{1}\mbox{}}}}
We then have the diagram shown in
\addtocounter{figure}{1}Fig.~\thefigure. \addtocounter{figure}{-1}
There,\pagebreak[3] the $\rightleftarrows\dots\rightleftarrows$
arrows represent maps modulo nonzero factors; for all the other
arrows, the precise numerical factors are indicated. The pattern
extends infinitely both left and right. In the central part, two more
maps that did not fit the picture are
\begin{gather*}
\displaystyle\xymatrix@=12pt@C=80pt{ X Y^{-1}
\ar@{-->}@/^/_{\boldsymbol{\mathscr{F}}}[dr] & &X^{-1} Y \ar@{-->}@/_/^{\boldsymbol{\mathscr{E}}}[dl]
\\
&1 }
\end{gather*}
\section{Quantum differential operators}\label{sec:qdiff}
We next consider differential operators on the divided-power quantum
plane $\dpshort$.
\subsection{}
We define $\partial_x$ and $\partial_y:\dpshort\to\dpshort$ standardly, in
accordance with
\begin{equation*}
d f= \upxi \partial_x f + \upeta \partial_y f
\end{equation*}
for any function $f$ of divided powers of $x$ and $y$. It then
follows that\footnote{The rule to commute $\partial_x$ and $\partial_y$ inherited
from the Wess--Zumino-type complex is not the one in~\cite{[Hu]}, as
was also noted in that paper.}
\begin{align*}
\partial_x \partial_y &= \mathfrak{q} \partial_y \partial_x
\\[-8pt]
\intertext{and}
\smash[t]{\partial_x(x^{(m)} y^{(n)})} &=
\smash[t]{\mathfrak{q}^{-m - 2n + 1} x^{(m-1)} y^{(n)}},\\
\partial_y(x^{(m)} y^{(n)}) &= \mathfrak{q}^{-m - n + 1} x^{(m)} y^{(n-1)}
\\[-1.3\baselineskip]
\end{align*}
for $m,n\in\mathbb{N}$; in terms of the basis in~\bref{sec:Frob-basis},
therefore,
\begin{align*}
\partial_x(X^M Y^N x^{(m)} y^{(n)}) &=
\begin{cases}
(-1)^N \mathfrak{q}^{-m - 2n + 1} X^M Y^N x^{(m-1)} y^{(n)},& m\neq 0,\\
M (-1)^{N(p-1) + 1} \mathfrak{q}^{-2n+1} X^{M-1} Y^N x^{(p-1)} y^{(n)},&
m=0,
\end{cases}
\\
\partial_y(X^M Y^N x^{(m)} y^{(n)}) &=
\begin{cases}
(-1)^{M}\mathfrak{q}^{-m-n+1} X^M Y^N x^{(m)} y^{(n-1)},& n\neq0,\\
N(-1)^M (-1)^{m-1}\mathfrak{q}^{-m+1} X^M Y^{N-1} x^{(m)} y^{(p-1)},& n=0
\end{cases}
\end{align*}
for $0\leqslant m,n\leqslant p-1$ and $M,N\geqslant 0$.
\subsection{}\label{sec:all-D-comm}
Let $\mathscr{D}$ denote the linear span of $x^{(m)} y^{(n)} \partial_x^a \partial_y^b$ with
$m,n,a,b\geqslant 0$. The following commutation relations are easily
verified
($\ell\in\mathbb{N}$):
\begin{equation}
\label{eq:commute-linear}
\begin{aligned}
\partial_x^\ell\,x &= \mathfrak{q}^{-2 \ell}x\partial_x^\ell +
\mathfrak{q}^{-\ell+1}[\ell]\partial_x^{\ell-1}
-\mathfrak{q}^{-2\ell+1}[\ell](\mathfrak{q}-\mathfrak{q}^{-1})y\partial_x^{\ell-1}\partial_y,\\
\partial_y^\ell\,y &= \mathfrak{q}^{-2\ell}y \partial_y^{\ell} + \mathfrak{q}^{ -\ell + 1}
[\ell]\partial_y^{\ell - 1}.
\end{aligned}
\end{equation}
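For $\ell=1$, these relations reduce to
\begin{equation*}
\partial_x\,x=\mathfrak{q}^{-2}x\partial_x+1-\mathfrak{q}^{-1}(\mathfrak{q}-\mathfrak{q}^{-1})y\partial_y,
\qquad
\partial_y\,y=\mathfrak{q}^{-2}y\partial_y+1,
\end{equation*}
which is readily tested: applied to $y$, both sides of the first relation give $\mathfrak{q}^{-2}y$ (because $\partial_x(xy)=\mathfrak{q}^{-2}y$, $\partial_x y=0$, and $\partial_y y=1$).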
(Here and in similar relations below, $x$ is to be understood as the
operator of multiplication by $x$, etc.) It then follows that
$\partial_x^{p}$ and $\partial_y^{p}$ are somewhat special: they (anti)commute with
all the divided powers $x^{(n)}$ and $y^{(n)}$ with $n<p$. In
general, we have
\begin{multline*}
\partial_x^{\ell} x^{(m)} y^{(n)} =
\mathfrak{q}^{-2\ell m - \ell n}x^{(m)} y^{(n)} \partial_x^{\ell}\\
\shoveright{{}+ \sum_{i=1}^{\ell}\!\sum_{j=0}^{i} \mathfrak{q}^{-2 m \ell -
\ell n + i(\ell + m - n) - \frac{i(i - 1)}{2} - j(\ell + 1)}
\begin{bmatrix}\ell\\ i\end{bmatrix}\begin{bmatrix}i\\ j\end{bmatrix}\begin{bmatrix}n + j\\ j\end{bmatrix} [j]!\left(1 -
\mathfrak{q}^2\right)^j x^{(m - i)} y^{(j + n)} \partial_x^{\ell - i} \partial_y^{j},}
\\
\shoveleft{\partial_y^{\ell} x^{(m)} y^{(n)} = \mathfrak{q}^{-2\ell n - \ell
m}x^{(m)} y^{(n)} \partial_y^{\ell} + \sum_{i=1}^{\ell}\mathfrak{q}^{-2 n \ell -
\ell m + i(n + \ell) - \frac{i(i - 1)}{2}}
\begin{bmatrix}\ell\\ i\end{bmatrix}x^{(m)} y^{(n - i)} \partial_y^{\ell - i}}\\[-\baselineskip]
\end{multline*}
for all nonnegative integers $m$, $n$, and $\ell$ (here
$\begin{bmatrix}\ell\\ i\end{bmatrix}$ denotes the Gaussian
$\mathfrak{q}$-binomial coefficient and $[j]!=[1][2]\dots[j]$), and hence the
commutation relations of $\partial_x^p$ and $\partial_y^p$ with elements of
$\dpshort$ can be written as
\begin{align*}
\partial_x^{p}\, X^M Y^N x^{(m)} y^{(n)} &= (-1)^{N p + n} X^M Y^N x^{(m)}
y^{(n)} \partial_x^{p} + M \mathfrak{q}^{-\frac{p(p - 1)}{2}} X^{M-1} Y^N x^{(m)}
y^{(n)},
\\
\partial_y^{p}\,X^M Y^N x^{(m)} y^{(n)} &= (-1)^{Mp + m} X^M Y^N x^{(m)}
y^{(n)} \partial_y^{p} + (-1)^{M p} N \mathfrak{q}^{-\frac{p(p - 1)}{2}} X^M Y^{N-1}
x^{(m)} y^{(n)}
\end{align*}
for $M,N\geqslant1$ and $0\leqslant m,n\leqslant p-1$; in other words, $\partial_x^p$ and
$\partial_y^p$ can be represented as the (left) derivatives
\begin{equation*}
\partial_x^p=i^{1 - p}\ffrac{\partial}{\partial X},\qquad
\partial_y^p=i^{1 - p}\ffrac{\partial}{\partial Y}.
\end{equation*}
(Evidently, $\partial_x^{p}\partial_y^{p}=(-1)^{p}\partial_y^{p}\partial_x^{p}$.) We here used
that $\mathfrak{q}^{\frac{p(p - 1)}{2}} = i^{p - 1}$.
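(This last identity holds with the convention, assumed here, that $\mathfrak{q}$ is the primitive $2p$th root of unity $e^{i\pi/p}$; then $\mathfrak{q}^{\frac{p(p-1)}{2}}=e^{\frac{i\pi(p-1)}{2}}=i^{p-1}$.)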
\begin{lemma}
The relations in $\mathscr{D}$ are compatible with the $\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$-module algebra
structure if we define the $\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$ action as
\begin{alignat*}{2}
E \partial_x &=-\mathfrak{q} \partial_y,&\qquad E \partial_y &=0,\\
K \partial_x &= \mathfrak{q}^{-1}\partial_x,& K \partial_y &= \mathfrak{q}\partial_y,\\
F \partial_x &=0, &\qquad F \partial_y &=-\mathfrak{q}^{-1}\partial_x.
\end{alignat*}
\end{lemma}
\noindent
The proof amounts to verifying that the relations
in~\bref{sec:all-D-comm} are mapped into one another under the $\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$
action. Then $\boldsymbol{\mathscr{E}}$ and $\boldsymbol{\mathscr{F}}$ act on the differential operators as
\begin{align*}
\boldsymbol{\mathscr{E}}(\partial_x^{Mp + m} \partial_y^{N p + n}) &= -(-1)^{(N+1) p + n} M \partial_x^{(M-1)
p + m} \partial_y^{(N+1) p + n},
\\
\boldsymbol{\mathscr{F}}(\partial_x^{M p + m} \partial_y^{N p + n}) &= -(-1)^{(M+1) p + m} N \partial_x^{(M+1)
p + m}\partial_y^{(N-1) p + n}.
\end{align*}
Modulo sign factors, this is the $s\ell(2)$ action on
$\mathbb{C}[\partial_x^p,\partial_y^p]=\mathbb{C}[\fffrac{\partial}{\partial X},\fffrac{\partial}{\partial Y}]$.
\subsection{A projective $\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$ module in terms of quantum differential
operators} As an application of the $q$-differential operators on
$\dpshort$, we construct the projective $\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$-module $\mathscrP^+_1$
(see~\cite{[FGST]}) as follows:
\begin{equation*}
\xymatrix@C=4pt@R=12pt{
&&&\tau
\ar^{F}[dr]
\ar_{E}[dl]
\\
\lambda_{p-1}
&
\kern-6pt\rightleftarrows \dots \rightleftarrows\kern-6pt
&\lambda_1
\ar_{F}[dr]
&{}
&\rho_1
\ar^{E}[dl]
&\kern-6pt\rightleftarrows \dots \rightleftarrows\kern-6pt
&
\rho_{p-1}
\\
&&&
\beta
}
\end{equation*}
where the horizontal $\rightleftarrows$ arrows, as before, stand for
the action by $F$ and $E$ up to nonzero factors, and the actual
expressions for the module elements are as follows: first,\pagebreak[3]
\begin{gather*}
\tau = -\mathfrak{q}^{\frac{p(p - 1)}{2} + 1} \sum_{i=1}^p a_{i}\,x\mathscr{D}p{p - i}
y\mathscr{D}p{i - 1} \partial_x^{p - i} \partial_y^{i -
1},\quad\text{where}\quad
a_i = \alpha + \smash[t]{\sum_{j=2}^i \ffrac{\mathfrak{q}^{1-j}}{[j-1]}},
\end{gather*}
then $\lambda_i = E^i\tau$ and $\rho_i=F^i\tau$, with
\begin{alignat*}{2}
\lambda_{p-1} & = (-1)^{p-1} [p - 1]!\, \mathfrak{q}^{\frac{p(p - 1)}{2} + 1}
x\mathscr{D}p{p - 1}\partial_y^{p - 1},&\ \lambda_1 &= \mathfrak{q}^{\frac{p(p - 1)}{2}}
\!\sum_{i=1}^{p - 1}\!\mathfrak{q}^{i + 2} x\mathscr{D}p{p - i} y\mathscr{D}p{i - 1} \partial_x^{p - i - 1} \partial_y^{i},\\
\rho_1 &= -\mathfrak{q}^{\frac{p(p - 1)}{2}} \!\sum_{i=1}^{p - 1}\!\mathfrak{q}^{i + 1}
x\mathscr{D}p{i - 1} y\mathscr{D}p{p - i} \partial_x^{i} \partial_y^{p - 1 - i},&\ \rho_{p - 1}
&= -[p - 1]!\,\mathfrak{q}^{\frac{p(p - 1)}{2} + 2} y\mathscr{D}p{p - 1}\partial_x^{p - 1}
\end{alignat*}
in particular, and, finally, $\beta = F\lambda_1 = E\rho_1$ is
\begin{align*}
\beta &=-\mathfrak{q}^{\frac{p(p - 1)}{2} + 1} \sum_{i=1}^{p}x\mathscr{D}p{p - i} y\mathscr{D}p{i
- 1} \partial_x^{p - i} \partial_y^{i - 1}.
\end{align*}
In the expression for $\tau$, $\alpha$ is an arbitrary constant
(clearly, adding a constant to all the $a_i$ amounts to redefining
$\tau$ by adding $\beta$ times this constant). The normalization here is
chosen so that $\beta$ is a projector,
\begin{equation*}
\beta \beta = \beta.
\end{equation*}
(We note the useful identity $[p - 1]!\,(\mathfrak{q} - \mathfrak{q}^{-1})^{p - 1}
=p\,\mathfrak{q}^{\frac{p(p - 1)}{2}}$.)
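This identity is easy to check numerically. The sketch below assumes the standard convention $\mathfrak{q}=e^{i\pi/p}$ (consistent with $\mathfrak{q}^{\frac{p(p-1)}{2}}=i^{p-1}$ used above) and the $\mathfrak{q}$-integers $[n]=(\mathfrak{q}^n-\mathfrak{q}^{-n})/(\mathfrak{q}-\mathfrak{q}^{-1})$:

```python
import cmath

def identity_holds(p):
    # Assumed convention: q = e^{i*pi/p}, a primitive 2p-th root of unity
    q = cmath.exp(1j * cmath.pi / p)
    qint = lambda n: (q**n - q**(-n)) / (q - q**(-1))  # q-integer [n]
    qfac = 1.0
    for n in range(1, p):  # q-factorial [p-1]!
        qfac *= qint(n)
    lhs = qfac * (q - q**(-1))**(p - 1)
    rhs = p * q**(p * (p - 1) // 2)
    return abs(lhs - rhs) < 1e-8

print(all(identity_holds(p) for p in range(2, 25)))  # True
```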
The ``wings'' of the projective module
are commutative,
\begin{equation*}
\lambda_i \lambda_j=\lambda_j \lambda_i,\mathfrak{q}quad
\rho_i \rho_j=\rho_j \rho_i
\end{equation*}
for all $0\leqslant i,j\leqslant p-1$, where $\lambda_0=\rho_0=\beta$, and, moreover,
$\lambda_i \lambda_j = \rho_i \rho_j=0$ whenever $i+j\geqslant p$.
A similar (but notably simpler) realization of $\mathscrP^+_1$ in a
$\overline{\mathscr{U}}_{\mathfrak{q}} s\ell(2)$-module algebra of quantum differential operators on a ``quantum
line'' was given in~\cite{[S-U]}.
\section{Conclusion}
Quantum planes provide a natural example of module algebras over
$s\ell(2)$ quantum groups (they do not allow realizing all of the
quantum-$s\ell(2)$ representations, but the corresponding quantum
differential operators make up a module algebra containing the
projective modules in particular). By~\cite{[WZ]}, moreover,
$GL_q(2)$ can be \textit{characterized} as the ``quantum automorphism
group'' of the de~Rham complex of the quantum plane. This is
conducive to the occurrence of quantum planes in various situations
where the $s\ell(2)$ quantum groups play a role. From the standpoint
of the Kazhdan--Lusztig correspondence, the old subject of a quantum
$s\ell(2)$ action on the quantum plane is interesting in the case of
even roots of unity, which we detailed in this paper.
\subsubsection*{Acknowledgments}This paper was supported in part by
the RFBR grant 08-01-00737 and the grant LSS-1615.2008.2.
\parindent0pt
\end{document}
\begin{document}
\begin{abstract}
We construct new examples of rational Gushel-Mukai fourfolds,
giving more evidence for the analog of the Kuznetsov Conjecture for cubic fourfolds:
a Gushel--Mukai fourfold is rational if and only if it admits an \emph{associated K3 surface}.
\end{abstract}
\maketitle
\section*{Introduction}
A Gushel-Mukai fourfold is a smooth prime Fano fourfold $X\subset {\mathbb{P}}^8$ of degree $10$ and index $2$ (see \cite{mukai-biregularclassification}).
These fourfolds are parametrized by a coarse moduli space $\mathcal M_4^{GM}$ of dimension $24$ (see \cite[Theorem 5.15]{DK3}), and
the general fourfold $[X]\in \mathcal M_4^{GM}$ is a smooth quadratic section of a smooth hyperplane section of the Grassmannian $\mathbb{G}(1,4)\subset{\mathbb{P}}^9$ of lines in ${\mathbb{P}}^4$.
In \cite{DIM} (see also \cite{DK1,DK2,DK3}), following Hassett's analysis of cubic fourfolds (see \cite{Hassett,Has00}), the authors
studied Gushel-Mukai fourfolds via Hodge theory and the period map.
In particular, they showed that
inside $\mathcal M_4^{GM}$ there is
a countable union $\bigcup_d \mathcal{GM}_d
$
of (not necessarily irreducible) hypersurfaces
parametrizing \emph{Hodge-special} Gushel-Mukai fourfolds, that is,
fourfolds that contain a surface whose cohomology class does not come from the Grassmannian $\mathbb{G}(1,4)$.
The index $d$ is called the \emph{discriminant} of the fourfold and it runs over all positive integers
congruent to $0,2$, or $4$ modulo $8$ (see \cite{DIM}).
However, as far as the authors know, explicit geometric descriptions of Hodge-special Gushel-Mukai fourfolds in $\mathcal{GM}_d$ are unknown for $d>12$.
In Theorem~\ref{mainthm1}, we shall provide such a description when $d=20$.
As in the case of cubic fourfolds, all Gushel-Mukai fourfolds are unirational. Some rational examples are classical and easy to construct, but no examples have yet been proved to be irrational.
Furthermore, there are values of the discriminant $d$ such that a fourfold in $\mathcal{GM}_d$ admits an \emph{associated K3 surface} of degree $d$. For instance, this occurs
for $d=10$ and $d=20$.
The hypersurface $\mathcal{GM}_{10}$ has two irreducible components,
and the general fourfold in each of these two components is rational (see \cite[Propositions~7.4 and 7.7]{DIM} and Examples~\ref{exa1} and \ref{exa2}).
Some of these fourfolds were already studied by Roth in \cite{Roth1949}.
As far as the authors know, there were no other known examples of rational Gushel-Mukai fourfolds. In Theorem~\ref{RatGM},
we shall provide new examples of rational Gushel-Mukai fourfolds that belong to $\mathcal{GM}_{20}$.
A classical and still open question in algebraic geometry
is the rationality of smooth cubic hypersurfaces in ${\mathbb{P}}^5$ (cubic fourfolds for short).
An important conjecture, known as \emph{Kuznetsov's Conjecture} (see \cite{AT, kuz4fold, kuz2, Levico}) asserts that a cubic fourfold is rational if and only if it admits an associated K3 surface in the sense of Hassett/Kuznetsov.
This condition can be expressed by saying that the rational cubic fourfolds are parametrized by a countable union $\bigcup_d \mathcal C_d$ of irreducible hypersurfaces inside the $20$-dimensional coarse moduli space of cubic fourfolds, where $d$ runs over the so-called \emph{admissible values} (the first ones are $d=14,26,38,42,62$).
The rationality of cubic fourfolds in $\mathcal C_{14}$ was proved by Fano in \cite{Fano} (see also \cite{BRS}), while rationality in the case of $\mathcal C_{26}$ and $\mathcal C_{38}$ was proved in \cite{RS1}. Very recently, in \cite{RS3}, rationality was also proved in the case of $\mathcal C_{42}$.
The proof of this last result shows a close relationship between cubic fourfolds in $\mathcal C_{42}$ and the Gushel-Mukai fourfolds in $\mathcal{GM}_{20}$ constructed in this paper.
This beautiful geometry was discovered with the help of \emph{Macaulay2} \cite{macaulay2}.
\section{Generality on Gushel-Mukai fourfolds}\label{generalitiesGM}
In this section, we recall some general facts about Gushel-Mukai fourfolds
which were proved in \cite{DIM} (see also \cite{DK1,DK2,DK3}).
A Gushel-Mukai fourfold $X\subset {\mathbb{P}}^8$, GM fourfold for short,
is a degree-$10$ Fano fourfold with $\mathrm{Pic}(X) = \mathbb{Z}[ \mathcal{O}_X(1)]$
and $K_X\in|\mathcal{O}_X(-2)|$.
Equivalently,
$X$ is a quadratic section of a {$5$}-dimensional linear section
of the cone in ${\mathbb{P}}^{10}$ over the Grassmannian ${\mathbb{G}}(1,4)\subset {\mathbb{P}}^9$ of lines in ${\mathbb{P}}^4$.
There are two types of GM fourfolds:
\begin{itemize}
\item quadratic sections of hyperplane
sections of $\mathbb G(1,4)\subset{\mathbb{P}}^9$ (\emph{Mukai or ordinary fourfolds, \cite{mukai-biregularclassification}});
\item double covers of $\mathbb G(1,4)\cap{\mathbb{P}}^7$ branched along its intersection with a quadric
(\emph{Gushel fourfolds}, \cite{Gu}).
\end{itemize}
There exists a $24$-dimensional coarse moduli space $\mathcal M_4^{GM}$ of GM fourfolds, where the locus of Gushel fourfolds is of codimension $2$. Moreover, we have a \emph{period map} $\mathfrak{p}:\mathcal M_4^{GM}\to\mathcal{D}$ to a $20$-dimensional quasi-projective variety $\mathcal D$,
which is dominant with irreducible $4$-dimensional fibers (see \cite[Corollary 6.3]{DK3}).
For a very general GM fourfold $[X]\in \mathcal M_4^{GM}$, the natural inclusion
\begin{equation}\label{naturalInclusion}
A({\mathbb{G}}(1,4)) := H^4(\mathbb G(1,4),\mathbb Z)\cap H^{2,2}(\mathbb G(1,4))\subseteq A(X) := H^4(X,\mathbb Z)\cap H^{2,2}(X)
\end{equation}
of middle Hodge groups is an equality.
A GM fourfold $X$ is said to be \emph{Hodge-special} if the inclusion \eqref{naturalInclusion} is strict. This means that the fourfold $X$ contains a surface whose cohomology class ``does not come'' from the Grassmannian ${\mathbb{G}}(1,4)$.
Hodge-special GM fourfolds are parametrized by a countable union of hypersurfaces
$\bigcup_d \mathcal{GM}_d\subset \mathcal M_4^{GM}$, labelled
by the positive integers $d\equiv 0,2$, or $4$ (mod $8$) (see \cite[Lemma~6.1]{DIM}). The image
$\mathcal{D}_d=\mathfrak{p}(\mathcal{GM}_d)$ is a hypersurface in $\mathcal{D}$,
which is irreducible if $d\equiv 0$ (mod $4$), and has two irreducible components $\mathcal D_d'$ and $\mathcal D_d''$
if $d\equiv 2$ (mod $8$) (see \cite[Corollary~6.3]{DIM}). The same holds true for $\mathcal{GM}_d$.
In some cases, the value of $d$ can be explicitly computed from the geometry of Hodge-special GM fourfolds (see \cite[Section~7]{DIM}). Indeed, let $X\subset {\mathbb{P}}^8$ be an ordinary GM fourfold containing a smooth surface $S$ such that $[S]\in A(X)\setminus A(\mathbb G(1,4))$.
We may write $[S]=a\sigma_{3,1}+b\sigma_{2,2}$ in terms of Schubert cycles in $\mathbb G(1,4)$ for some integers $a$ and $b$.
We then have $[X]\in \mathcal{GM}_d$, where $d$ is the absolute value of the determinant (or {\it discriminant}) of the intersection matrix
in the basis $(\sigma_{1,1|X}, \sigma_{2|X}-\sigma_{1,1|X}, [S])$.
That is
\begin{equation}\label{discriminant}
d=\left|\det \begin{pmatrix}
2&0&b\\
0&2&a-b\\
b&a-b&(S)_X^2\end{pmatrix}\right| = \left|
4 (S)_X^2-2(b^2+(a-b)^2)\right|,
\end{equation}
where
\begin{equation}\label{doublepoints}
(S)_X^2=3a+4b+2K_S\cdot \sigma_{1|S}+2K_S^2-12\chi(\mathcal O_S).
\end{equation}
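As a first illustration of \eqref{discriminant} and \eqref{doublepoints}, one can check the value $d=10$ for a $\tau$-quadric surface (see Example~\ref{exa1} below): there $[S]=\sigma_{3,1}+\sigma_{2,2}$, so $a=b=1$, while for a smooth quadric surface $K_S^2=8$, $\chi(\mathcal O_S)=1$, and $K_S\cdot\sigma_{1|S}=-4$ by adjunction. The following snippet is only a sanity check of this arithmetic:

```python
def self_intersection(a, b, K_dot_H, K2, chi):
    # (S)_X^2 = 3a + 4b + 2 K_S.H + 2 K_S^2 - 12 chi(O_S), eq. (doublepoints)
    return 3*a + 4*b + 2*K_dot_H + 2*K2 - 12*chi

def discriminant(a, b, K_dot_H, K2, chi):
    # |4 (S)_X^2 - 2(b^2 + (a-b)^2)|, eq. (discriminant)
    S2 = self_intersection(a, b, K_dot_H, K2, chi)
    return abs(4*S2 - 2*(b**2 + (a - b)**2))

# tau-quadric: [S] = sigma_{3,1} + sigma_{2,2}, a smooth quadric surface
print(discriminant(1, 1, -4, 8, 1))  # 10
```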
For some values of $d$, the non-special cohomology of the GM fourfold $[X]\in \mathcal{GM}_d$
looks like the primitive cohomology of a K3 surface. In this case, as in the case of cubic fourfolds,
one says that $X$ has an associated K3 surface. The first values of $d$ that satisfy
the condition for the existence of an associated K3 surface are: $2$, $4$, $10$, $20$, $26$, $34$.
We refer to \cite[Section~6.2]{DIM} for precise definitions and results.
In Examples~\ref{exa1} and \ref{exa2} below, we recall the known examples of rational GM fourfolds,
which all have discriminant $10$.
In Section~\ref{second},
we shall construct rational GM fourfolds of discriminant $20$.
\begin{example}\label{exa1}
A \emph{$\tau$-quadric} surface in $\mathbb{G}(1,4)$ is a linear section of
$\mathbb{G}(1,3)\subset\mathbb{G}(1,4)$; its class is $\sigma_{1}^2\cdot \sigma_{1,1} = \sigma_{3,1}+ \sigma_{2,2}$.
In \cite[Proposition~7.4]{DIM}, it was proved that the closure $D_{10}'\subset \mathcal M_4^{GM}$ of the family of fourfolds containing a $\tau$-quadric surface is the irreducible hypersurface $\mathfrak{p}^{-1}(\mathcal{D}_{10}')$, and that the general member of $D_{10}'$ is rational.
Furthermore, they are all rational by \cite{KontsevichTschinkelInventiones} or \cite[Theorem 4.15]{DK1}.
In \cite[Theorem~5.3]{RS3}, a different description of $D_{10}'$ and another proof of the rationality of its general member were given.
The rationality for a general fourfold $ [X] \in D_{10}' $
also follows from the fact that
a $ \tau $-quadric surface $ S $, inside the unique del Pezzo fivefold $ Y
\subset {\mathbb{P}}^8 $
containing $ X $, admits \emph{a congruence of $ 1 $-secant lines}, that is, through
the general point of $ Y $, there passes just one line contained in $ Y $ which intersects $ S $.
\end{example}
\begin{example} \label{exa2}
A quintic del Pezzo surface is a two-dimensional linear section of $\mathbb{G}(1,4)$;
its class is $\sigma_{1}^4 = 3\sigma_{3,1} + 2 \sigma_{2,2}$.
In \cite[Proposition~7.7]{DIM}, it was proved that the
closure $D_{10}''\subset \mathcal M_4^{GM}$ of the family
of fourfolds containing a quintic del Pezzo surface is the irreducible hypersurface
$\mathfrak{p}^{-1}(\mathcal{D}_{10}'')$.
The proof of the rationality of a general fourfold
$[X]\in D_{10}''$ is very classical.
Indeed in \cite{Roth1949}, Roth remarked that the projection from the linear span of a quintic del Pezzo surface contained in $ X $ induces a dominant map
\[ \pi : X\dashrightarrow {\mathbb{P}}^2 \]
whose generic fibre is a quintic del Pezzo surface.
By a result of Enriques (see \cite{Enr,EinSh}),
a quintic del Pezzo surface defined over an infinite field $ K $ is $ K $-rational. Thus,
the fibration $ \pi $ admits a rational section and $ X $ is rational.
\end{example}
\section{A Hodge-special family of Gushel-Mukai fourfolds}\label{second}
Let $S\subset{\mathbb{P}}^8$ be
the image of ${\mathbb{P}}^2$ via the linear system of quartic curves through three simple points and one double point in general position.
Then $S$ is a smooth surface of degree $9$ and sectional genus $2$ cut out in ${\mathbb{P}}^8$ by $19$ quadrics.
\begin{lemma}\label{surfInG14} Let $S\subset{\mathbb{P}}^8$ be a rational surface of degree $9$ and sectional genus $2$ as above.
Then $S$ can be embedded in a smooth del Pezzo fivefold $Y={\mathbb{G}}(1,4)\cap {\mathbb{P}}^8$
such that
in the Chow ring of ${\mathbb{G}}(1,4)$, we have
\begin{equation}\label{classAB}
[S]=6\,\sigma_{3,1} + 3\, \sigma_{2,2} .
\end{equation}
Moreover, there exists an irreducible component of the Hilbert scheme parameterizing such surfaces in $Y$
which is generically smooth of dimension $25$.
\end{lemma}
\begin{proof}
Using \emph{Macaulay2} \cite{macaulay2} (see Section~\ref{computations}), we constructed a specific example of a surface $S\subset {\mathbb{P}}^8$ as above
which is embedded in a del Pezzo fivefold $Y\subset{\mathbb{P}}^8$ and satisfies \eqref{classAB}.
Moreover we verified in our example that
$h^0(N_{S,Y})=25$ and $h^1(N_{S,Y})=0$. Thus, $[S]$ is a smooth point in the corresponding Hilbert scheme
$\mathrm{Hilb}_Y^{\chi(\mathcal O_S(t))}$
of subschemes of $Y$,
and the unique irreducible component of $\mathrm{Hilb}_Y^{\chi(\mathcal O_S(t))}$ containing $[S]$ has dimension~$25$.
\end{proof}
\begin{remark}\label{rem0}
After we constructed, in a preliminary version of this paper, an explicit example of a surface
as in Lemma~\ref{surfInG14}, \cite[Section~4]{RS3} provided an explicit geometric description of an irreducible $25$-dimensional family
of these surfaces inside a del Pezzo fivefold, confirming the claim of Lemma~\ref{surfInG14}.
\end{remark}
\begin{theorem}\label{mainthm1}
Inside $\mathcal M_4^{GM}$, the closure $D_{20}$
of the family of GM fourfolds containing a surface $S\subset{\mathbb{P}}^8$ as in Lemma~\ref{surfInG14} is the irreducible hypersurface ${\mathfrak p}^{-1}(\mathcal{D}_{20})$.
\end{theorem}
\begin{proof}
Let $Y={\mathbb{G}}(1,4)\cap{\mathbb{P}}^8$ be a fixed smooth del Pezzo fivefold and
let $\mathcal S$ be the $25$-dimensional irreducible family of rational surfaces
$S\subset Y$ of degree $9$ and sectional genus $2$ described in Lemma~\ref{surfInG14}.
Let $\mathcal{GM}_{Y} = \mathbb{P}(H^0(\mathcal O_{Y}(2)))$ denote the family of GM fourfolds contained in $Y$, that is, the family of quadratic sections
of $Y$. The dimension of $\mathcal{GM}_{Y}$ is $ h^0(\mathcal O_{{\mathbb{P}}^8}(2)) - h^0(\mathcal I_{Y,{\mathbb{P}}^8}(2)) - 1 = 39$.
Consider the incidence correspondence
$$I=\overline{\{([S], [X])\;:\; S\subset X \subset Y \}}\subset \mathcal S\times\mathcal{GM}_{Y},$$ and let
$$\xymatrix{
&I\ar[dl]_{p_1}\ar[dr]^{p_2}&&\\
\mathcal S&&\mathcal{GM}_{Y}&}
$$
be the two natural projections.
Then $p_1$ is a surjective morphism and, for $[S]\in \mathcal S$ general,
the fibre $p_1^{-1}([S]) \simeq {\mathbb{P}}(H^0(\mathcal I_{S,Y}(2)))$ is irreducible
of dimension $h^0(\mathcal I_{S,{\mathbb{P}}^8}(2)) - h^0(\mathcal I_{Y,{\mathbb{P}}^8}(2)) - 1 = 13$.
It follows that $I$ has a unique irreducible component $I^0$ that dominates $\mathcal S$ and that component has dimension $25 + 13 = 38$.
Using \emph{Macaulay2} (see \cite{M2files}), we verified in a specific example of a GM fourfold $X$
containing a surface $[S]\in \mathcal S$
that $H^0(N_{S,X}) = 0$. By semicontinuity, we deduce that $p_2$ is a generically finite morphism onto its image and that $p_2(I^0)$ has dimension $38$. It is therefore a hypersurface in $\mathcal{GM}_Y$. Since all smooth hyperplane sections of the Grassmannian $\mathbb{G}(1,4)\subset{\mathbb{P}}^9$ are projectively equivalent, $\mathcal{GM}_Y$ dominates $\mathcal M_4^{GM}$ and the fourfolds $X$ that we have constructed form an irreducible hypersurface in $\mathcal M_4^{GM}$.
Finally, by applying \eqref{discriminant} and \eqref{doublepoints}, we get
that a general such $[X]$
lies in $\mathfrak{p}^{-1}(\mathcal{D}_{20})$. This proves the theorem.
\end{proof}
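The numerology in the proof can be double-checked by hand. For the surface $S$ of Lemma~\ref{surfInG14} (the blow-up of ${\mathbb{P}}^2$ at four points, embedded by quartics with one double base point) one has $K_S^2=5$, $\chi(\mathcal O_S)=1$, and $K_S\cdot\sigma_{1|S}=-7$ by adjunction; with $a=6$, $b=3$ from \eqref{classAB}, formulas \eqref{discriminant} and \eqref{doublepoints} give $d=20$. In the sketch below, the value $h^0(\mathcal I_{Y,{\mathbb{P}}^8}(2))=5$ (the restrictions of the five Pl\"ucker quadrics) is an assumption, made only to reproduce the dimension counts appearing in the proof:

```python
from math import comb

# Discriminant for the Lemma 2.1 surface: a=6, b=3, K.H=-7, K^2=5, chi=1
a, b, K_dot_H, K2, chi = 6, 3, -7, 5, 1
S2 = 3*a + 4*b + 2*K_dot_H + 2*K2 - 12*chi   # eq. (doublepoints)
d = abs(4*S2 - 2*(b**2 + (a - b)**2))        # eq. (discriminant)
print(S2, d)  # 14 20

# Dimension counts from the proof
h0_quadrics_P8 = comb(10, 2)                 # h^0(O_{P^8}(2)) = 45
h0_IY = 5                                    # assumed: the five Plucker quadrics
dim_GM_Y = h0_quadrics_P8 - h0_IY - 1        # 39
fibre_dim = 19 - h0_IY - 1                   # S is cut out by 19 quadrics -> 13
print(dim_GM_Y, 25 + fibre_dim)  # 39 38: a hypersurface in GM_Y
```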
\begin{theorem}\label{RatGM}
Every GM fourfold belonging to the family $D_{20}$ described in Theorem~\ref{mainthm1} is rational.
\end{theorem}
\begin{proof}
Let $Y\subset{\mathbb{P}}^8$ be a del Pezzo fivefold and let $S\subset Y$
be a general rational surface of degree $9$ and sectional genus $2$
belonging to the $25$-dimensional family described in Lemma~\ref{surfInG14} and Remark~\ref{rem0}.
The restriction to $Y$ of
the linear system of cubic hypersurfaces
with double points along $S$ gives a dominant rational map
\begin{equation*}
\psi:Y\dashrightarrow {\mathbb{P}}^4
\end{equation*}
whose general fibre is an irreducible conic curve which intersects $S$ at three points.
Thus $S$ admits inside $Y$ a \emph{congruence of $3$-secant conic curves}.
This implies that the restriction of $\psi$ to a general GM fourfold $X$
containing $S$ and contained in $Y$
is a birational map to ${\mathbb{P}}^4$.
The existence of the congruence of $3$-secant conics
can be also verified as follows. The linear system of quadrics through $S$
induces a birational map
\[
\phi:Y\dashrightarrow Z\subset {\mathbb{P}}^{13}
\]
onto a fivefold $Z$ of degree $33$ and cut out by $21$ quadrics.
Let $p\in Y$ be a general point. Then one sees that through $\phi(p)$ there pass
$7$ lines contained in $Z$. Of these, $6$ are the images of the lines passing through
$p$ and which intersect $S$, while the remaining line comes from a single $3$-secant conic
to $S$ passing through $p$.
The claim about the rationality of \emph{every} $[X]\in D_{20}$ follows
from the rationality of a general $[X]\in D_{20}$ and from
the main result in \cite{KontsevichTschinkelInventiones} or from \cite[Theorem 4.15]{DK1}.
\end{proof}
\begin{remark}
The inverse map of the birational map $\psi:X\dashrightarrow {\mathbb{P}}^4$
described in the proof of Theorem~\ref{RatGM} is defined
by the linear system of hypersurfaces of degree $9$
having double points along
an
internal projection to ${\mathbb{P}}^4$
of a smooth surface $T\subset {\mathbb{P}}^5$ of degree $11$ and sectional genus $8$
cut out by $9$ cubics. This surface $T$ is
an internal triple projection of
a smooth minimal K3 surface of degree $20$ and genus $11$ in ${\mathbb{P}}^{11}$.
Actually, this was the starting point for this work.
In fact, from the results of \cite{RS3},
we suspected that a triple internal projection of a
minimal K3 surface of degree $20$ and genus $11$
could be related to a GM fourfold of discriminant~$20$.
\end{remark}
\section{Explicit computations}\label{computations}
In the proof of Lemma~\ref{surfInG14}, we claimed that
there exists an example of a rational surface $S\subset{\mathbb{P}}^8$ of degree $9$ and sectional genus $2$
which is also embedded in ${\mathbb{G}}(1,4)$ and satisfies $[S]=6\,\sigma_{3,1} + 3\, \sigma_{2,2}$. In an ancillary file (see \cite{M2files}), we provide the explicit homogeneous
ideal of such a surface which contains the ideal generated by the Pl\"{u}cker relations of ${\mathbb{G}}(1,4)$. The class $[S]$ in terms
of the Schubert cycles $\sigma_{3,1}$ and $\sigma_{2,2}$ can be easily calculated using, for instance, the \emph{Macaulay2} package \emph{SpecialFanoFourfolds}.
In the following, we explain the main steps of the procedure we followed to construct the surface in ${\mathbb{G}}(1,4)$.
We start by taking a general nodal hyperplane section
of a smooth Fano threefold of degree $22$ and sectional genus $12$ in $\mathbb{P}^{13}$ (see \cite{Mukai1983,schreyer_2001}).
The projection of this surface from its node yields
a smooth K3 surface $T\subset{\mathbb{P}}^{11}$ of degree $20$ and sectional genus $11$ which contains a conic
(see \cite{Kapustka_2018}).
Then we take a general triple projection of $T$ in ${\mathbb{P}}^5$, which
is a smooth surface of degree $11$ and sectional genus $8$ (this follows from
\cite[Proposition 4.1]{Voisin} and \cite[Theorem 10]{FS} in the case when the K3 surface $T$ is general).
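Under the assumption that this triple projection is given by the linear system $|H-3E|$ on the blow-up of the K3 surface at the centre of projection (where $K=E$), the degree and sectional genus quoted above follow from intersection theory on the blow-up; the following is an illustrative sketch of that arithmetic:

```python
# Blow-up of a K3 surface at one point: H^2 = 20 (degree), H.E = 0, E^2 = -1, K = E
H2, HE, E2 = 20, 0, -1

# Triple projection assumed given by C = H - 3E
C2 = H2 - 6*HE + 9*E2             # C^2 = 11: degree of the image in P^5
C_dot_K = HE - 3*E2               # C.K = (H - 3E).E = 3
genus = (C2 + C_dot_K) // 2 + 1   # adjunction: sectional genus 8
print(C2, genus)  # 11 8
```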
Let $T'\subset {\mathbb{P}}^4$ be a general internal projection of this surface in ${\mathbb{P}}^5$. Then
$T'$ is a singular surface of degree $10$ and sectional genus $8$, cut out by
$13$ quintics. The linear system of hypersurfaces of degree $9$ having double points along $T'$
gives a birational map
$
\eta: {\mathbb{P}}^4\dashrightarrow X\subset{\mathbb{P}}^8
$
onto a GM fourfold $X$, whose
inverse map is defined by the restriction to $X$ of the linear system of cubic hypersurfaces having double points
along a smooth surface $S\subset X$ of degree $9$ and sectional genus $2$.
Finally, to determine explicitly the surface $S$, one can exploit the fact that
the general quintic hypersurface corresponds via $\eta$ to the general quadric hypersurface (inside $X$) containing $S$.
Indeed,
behind the scenes, we have an occurrence of a flop, similar to the
\emph{Trisecant Flop} considered in \cite{RS3}. In particular,
we have a commutative diagram
$$\xymatrix{
&M&&\\
{\mathbb{P}}^4 \ar@{-->}[ur]^{m_1} \ar@{-->}[rr]^{\eta}&& X \ar@{-->}[ul]_{m_2}&}
$$
where
$m_1$ and $m_2$ are the birational maps defined, respectively,
by the linear system of quintics through $T'$ and
by the linear system of quadrics through $S$. Moreover, $M$ is a fourfold of degree $33$ in ${\mathbb{P}}^{12}$ cut out by $21$ quadrics.
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\end{document}
\begin{document}
\title[]
{On the zeros of asymptotically extremal polynomial sequences in the plane}
\date{\today}
\thanks{}
\thanks{{\it Acknowledgements.}
The first author was partially supported by the
U.S. National Science Foundation grant DMS-1109266.
The second author was supported by the University of Cyprus grant 3/311-21027.}
\author[Edward B.\ Saff]{E.B.\ Saff}
\address{Center for Constructive Approximation,
Department of Mathematics, Vanderbilt University,
1326 Stevenson Center, 37240 Nashville, TN, USA}
\email{[email protected]}
\urladdr{https://my.vanderbilt.edu/edsaff/}
\author[Nikos Stylianopoulos]{N. Stylianopoulos}
\address{Department of Mathematics and Statistics,
University of Cyprus, P.O. Box 20537, 1678 Nicosia, Cyprus}
\email{[email protected]}
\urladdr{http://ucy.ac.cy/\textasciitilde nikos}
\keywords{Orthogonal polynomials, equilibrium measure, extremal polynomials, zeros of polynomials}
\subjclass[2000]{30C10, 30C30, 30C50, 30C62, 31A05, 31A15, 41A10}
\begin{abstract}
Let $E$ be a compact set of positive logarithmic capacity in the complex plane and let
$\{P_n(z)\}_{1}^{\infty}$ be a sequence of asymptotically extremal monic polynomials for $E$
in the sense that
\begin{equation*}
\limsup_{n\to\infty}\|P_n\|_E^{1/n}\le\mathrm{cap}(E).
\end{equation*}
The purpose of this note is to provide sufficient geometric conditions on $E$ under which the (full) sequence of normalized counting measures
of the zeros of $\{P_n\}$ converges in the weak-star topology to the equilibrium measure on $E$, as $n\to\infty.$ Utilizing an argument of Gardiner and Pommerenke
dealing with the balayage of measures, we show that this is true, for example, if the interior of the polynomial convex hull of $E$ has a single component and the boundary
of this component has an ``inward corner'' (more generally, a ``non-convex singularity''). This simple fact
has thus far not been sufficiently
emphasized in the literature.
As applications we mention improvements of some known results on the distribution of zeros of
some special polynomial sequences.
\end{abstract}
\maketitle
\allowdisplaybreaks
\textit{Dedication:} To Herbert Stahl, an exceptional mathematician, a delightful personality, and a dear friend.
\section{Introduction}
Let $E$ be a compact set of positive logarithmic capacity (\textrm{cap}$(E)>0$) contained in the complex plane $\mathbb{C}$.
We denote by $\Omega$ the unbounded component of $\overline{\mathbb{C}}\setminus E$ and
by $\mu_E$ the \textit{equilibrium measure} (energy minimizing Borel probability measure on $E$)
for the logarithmic potential on $E$; see e.g. \cite[Ch.~3]{Ra} and
\cite[Sect. I.1]{ST}. As is well-known, the support $\mathrm{supp}(\mu_E)$ lies on the boundary $\partial\Omega$ of $\Omega$.
For any polynomial $p_n(z)$, of degree $n$, we denote by $\nu_{p_n}$ the \textit{normalized counting measure} for the zeros of $p_n(z)$; that is,
\begin{equation}
\nu_{p_n}:=\frac{1}{n}\sum_{p_n(z)=0}\delta_z,
\end{equation}
where $\delta_z$ is the unit point mass (Dirac delta) at the point $z$.
Let $\mathcal{N}$ denote an increasing sequence of positive integers. Then,
following \cite[p.~169]{ST} we say that a sequence of monic polynomials $\{P_n(z)\}_{n\in\mathcal{N}}$, of respective degrees $n$, is
\textit{asymptotically extremal on} $E$ if
\begin{equation}
\limsup_{n\to\infty,\,n\in\mathcal{N}}\|P_n\|_E^{1/n}\le\mathrm{cap}(E),
\end{equation}
where $\|\cdot\|_E$ denotes the uniform norm on $E$.
(We remark that this inequality implies equality for the limit, since $\|P_n\|_E\ge \mathrm{cap}(E)^n$).
Such sequences arise, for example, in the study of polynomials orthogonal with respect
to a measure $\mu$ belonging to the class \textbf{Reg}, see Definition 3.1.2 in \cite{StTobo}.
Concerning the asymptotic behavior of the zeros of an asymptotically extremal sequence of polynomials, we recall
the following result, see e.g. \cite[Thm 2.3]{MhSa91} and \cite[Thm III.4.7]{ST}.
\begin{theorem}\label{th:ST4.7}
Let $\{P_n\}_{n\in\mathcal{N}}$ denote an asymptotically extremal sequence of monic polynomials on $E$.
If $\mu$ is any weak$^*$ limit measure of the sequence $\{\nu_{P_n}\}_{n\in\mathcal{N}}$, then $\mu$ is a Borel probability
measure supported on $\overline{\mathbb{C}}\setminus\Omega$ and
$\mu^b=\mu_E$, where $\mu^b$ is the balayage of $\mu$ out of $\overline{\mathbb{C}}\setminus\Omega$ onto $\partial\Omega$.
Similarly, the sequence of balayaged counting measures converges to $\mu_E$:
\begin{equation}
\nu^b_{P_n}\,{\stackrel{*}{\longrightarrow}}\, \mu_E,\quad n\to\infty,\quad n\in\mathcal{N}.
\end{equation}
\end{theorem}
By the weak$^*$ convergence of a sequence of measures $\lambda_n$ to a measure $\lambda$ we mean that, for any continuous
$f$ with compact support in $\mathbb{C},$ there holds
\begin{equation*}
\int f d\lambda_n \to \int fd\lambda, \quad\textup{as }n\to\infty.
\end{equation*}
For properties of balayage, see \cite[Sect. II.4]{ST}.
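For a concrete illustration of such weak$^*$ convergence, take $E=[-1,1]$, whose equilibrium measure is the arcsine distribution $d\mu_E=dx/(\pi\sqrt{1-x^2})$, and the monic, asymptotically extremal Chebyshev polynomials $2^{1-n}T_n$. The sketch below (an illustration, not part of the proofs) compares the proportion of zeros of $T_n$ in an interval with the $\mu_E$-measure of that interval:

```python
import math

def zero_fraction(n, a, b):
    # Zeros of the Chebyshev polynomial T_n: cos((2k-1)*pi/(2n)), k = 1..n
    zeros = [math.cos((2*k - 1) * math.pi / (2*n)) for k in range(1, n + 1)]
    return sum(a <= z <= b for z in zeros) / n

def arcsine_measure(a, b):
    # Equilibrium measure of [-1,1]: dx / (pi*sqrt(1-x^2))
    return (math.asin(b) - math.asin(a)) / math.pi

a, b = 0.0, 0.5
print(abs(zero_fraction(2000, a, b) - arcsine_measure(a, b)) < 1e-3)  # True
```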
The goal of the present paper is to describe simple geometric conditions under which
the normalized counting measures $\nu_{P_n}$ of an asymptotically extremal sequence $\{P_n\}$ on $E$,
\textit{themselves} converge weak$^*$ to the equilibrium measure. For example, this is the case
whenever $E$ is a non-convex polygonal region, a simple fact that has thus far not been sufficiently emphasized in the
literature. Here we introduce more general sufficient conditions based on arguments of Gardiner and Pommerenke \cite{GaPo02}
dealing with the balayage of measures.
The outline of the paper is as follows: In Section~2 we describe a geometric condition, which we call
the \textit{non-convex singularity} (NCS) condition and state the main result regarding the counting measures
$\nu_{P_n}$ of the zeros of polynomials that form an asymptotically extremal sequence. Its proof is given in Section~4.
In Section~3, we apply the main result to obtain improvements in several previous results on the behavior of
the zeros of orthogonal polynomials, whereby the NCS condition yields convergence conclusions for the full
sequence $\mathcal{N}$ rather than for some subsequence.
\section{A geometric property}
\begin{definition}\label{Def1}
Let $G$ be a bounded simply connected domain in the complex plane. A point $z_0$ on the boundary
of $G$ is said to be a \textbf{ non-convex type singularity} (NCS) if it satisfies the following two conditions:
\begin{itemize}
\item[(i)]
There exists a closed disk $\overline{D}$ with $z_0$ on its circumference, such that $\overline{D}$ is contained in $G$ except for the
point $z_0$.
\item[(ii)]
There exists a line segment $L$ connecting a point $\zeta_0$ in the interior of $\overline{D}$ to $z_0$ such that
\begin{equation}\label{eq:Def1(ii)}
\lim_{z\to z_0 \atop z\in L}\frac{g_{G}(z,\zeta_0)}{|z-z_0|}=+\infty,
\end{equation}
where $g_{G}(z,\zeta_0)$ denotes the Green function of $G$ with pole at $\zeta_0\in G$.
\end{itemize}
\end{definition}
Recall that $g_G(\cdot,\zeta)$ is a positive harmonic function in $G\setminus\{\zeta\}$.
Also note that the assumption that $G$ is bounded and simply connected
implies that $G$ is regular with respect to the Dirichlet problem in $G$. This means that
$\lim_{z\to t \atop z\in G}g_G(z,\zeta)=0$, for any $t\in\partial G$; see, e.g.,
\cite[pp. 92 and 111]{Ra}.\\
\noindent\textbf{Remark.}
With respect to condition (ii), we note that the existence of a straight line $L$ and a point $\zeta_0$ for which
(\ref{eq:Def1(ii)}) holds, implies that the same is true for any other straight line connecting a point in the open disk
${D}$ with $z_0$. This can be easily deduced from Harnack's Lemma (see e.g. \cite [Lemma 4.9, p.17]{ST},\,\cite[p. 14]{ArGa}) in conjunction
with the symmetry property of Green functions, which together imply that for a compact set $K \subset G$ containing $\zeta_0$ and for $z \in G$ with $\dist(z,K)>\delta>0$, there
is a constant $C=C(K,\delta, \zeta_0)>0$ such that the
inequality
\begin{equation}\label{eq:Har}
g_G(z,\zeta)\ge C g_G(z,\zeta_0)
\end{equation}
holds for all $\zeta \in K$.\\
As we shall easily show, a point $z_0$ satisfying the following sector condition is an NCS point.
\begin{definition}\label{Def2}
Let $G$ be a bounded simply connected domain. A point $z_0$ on the boundary of $G$ is said to be an \textbf{inward-corner}
(IC) \textbf{point} if there exists a circular sector of the form
$S:=\{z:0< |z-z_0|< r,\,\alpha\pi<\textup{arg}(z-z_0)<\beta\pi\}$
with $\beta-\alpha > 1$ whose closure is
contained in $G$ except for $z_0$.
\end{definition}
To see that an inward-corner point satisfies Definition \ref{Def1}, let $g_S(z,\zeta)$ denote the Green function of the sector $S$.
Then $g_S(z,\zeta)=-\log|\varphi_\zeta(z)|$, where $\varphi_\zeta$ is a conformal mapping of $S$ onto the unit disc $\mathbb{D}:=\{z:|z|<1\}$, satisfying $\varphi_\zeta(\zeta)=0$.
From the theory of conformal mapping it is known \cite{Lehman} that the following expansion
is valid for any $z\in\overline{S}$ near $z_0$:
\begin{equation*}
\varphi_\zeta(z)=\varphi_\zeta(z_0)+a_1(z-z_0)^{1/(\beta-\alpha)}+O(|z-z_0|^{2/(\beta-\alpha)}),
\end{equation*}
with $a_1\ne 0$. Since $|\varphi_\zeta(z_0)|=1$, the above implies that the limit in (\ref{eq:Def1(ii)}) holds with $g_S(z,\zeta_0)$ in place
of $g_G(z,\zeta_0)$. The desired limit then follows from the comparison principle for Green functions:
\begin{equation*}
g_S(z,\zeta)\le g_G(z,\zeta),\quad z,\zeta\in S;
\end{equation*}
see, e.g., \cite[p. 108]{Ra}.\\
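For the reader's convenience, the step from the expansion of $\varphi_\zeta$ to the limit can be sketched as follows (heuristically; making the first lower bound below rigorous requires checking that the increment $a_1(z-z_0)^{1/(\beta-\alpha)}$ enters $\mathbb{D}$ non-tangentially along a suitable segment $L$):
\begin{equation*}
\frac{g_S(z,\zeta)}{|z-z_0|}=\frac{-\log|\varphi_\zeta(z)|}{|z-z_0|}
\ge\frac{1-|\varphi_\zeta(z)|}{|z-z_0|}
\ge c\,|z-z_0|^{\frac{1}{\beta-\alpha}-1}\longrightarrow+\infty,
\quad z\to z_0,\ z\in L,
\end{equation*}
for some constant $c>0$, since $-\log x\ge 1-x$ for $x\in(0,1]$, and the exponent $\frac{1}{\beta-\alpha}-1$ is negative precisely because $\beta-\alpha>1$.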
\noindent\textbf{Remark.} It is interesting to note that if the boundary $\partial G$ is a piecewise analytic Jordan curve, then at any IC point $z_0$
of $\partial G$ the density of the equilibrium measure is zero. This can be easily deduced from the relation
$d\mu_{\partial G}(z)=|\Phi^\prime(z)|ds$ connecting the equilibrium measure to the arclength measure $ds$
on $\partial G$, where $\Phi$ is a conformal mapping of $\Omega$ onto $\{w:|w|>1\}$, taking $\infty$ to $\infty$.
Then, if $\lambda\pi$ ($1<\lambda<2$) is the interior opening angle at $z_0$, $\Phi(z)$ admits near $z_0$
an expansion of the form
\begin{equation}\label{eq:Phi-exp}
\Phi(z)=\Phi(z_0)+b_1(z-z_0)^{1/(2-\lambda)}+o(|z-z_0|^{1/(2-\lambda)}),
\end{equation}
with $b_1\ne 0$, which leads to $\Phi^\prime(z_0)=0$.\\
We can now state our main result.
\begin{theorem}\label{th:main}
Let $E\subset\mathbb{C}$ be a compact set of positive capacity, $\Omega$ the unbounded component
of $\overline{\mathbb{C}}\setminus E$, and $\mathcal{E}:=\overline{\mathbb{C}}\setminus{\Omega}$ denote the polynomial
convex hull of $E$. Assume there is a closed set $E_0\subset\mathcal{E}$ with the following three properties:
\begin{itemize}
\item[(i)] {\rm{cap}}$(E_0)>0$;
\item[(ii)]
either $E_0=\mathcal{E}$ or $\dist(E_0,\,\mathcal{E}\setminus E_0)>0$;
\item[(iii)]
either the interior $\textup{int}(E_0)$ of $E_0$ is empty or the boundary of each open component of
$\textup{int}(E_0)$ contains an NCS point.
\end{itemize}
Let $V$ be an open set containing $E_0$ such that $\dist(V,\,\mathcal{E}\setminus E_0)>0$
if $E_0\neq \mathcal{E}$. Then for any asymptotically extremal sequence of monic polynomials
$\{P_n\}_{n\in\mathcal{N}}$ for $E$,
\begin{equation}\label{eq:main}
\nu_{P_n}|_{V}\,{\stackrel{\star}{\longrightarrow}}\, \mu_E|_{E_0},\quad n\to\infty,\quad n\in\mathcal{N},
\end{equation}
where $\mu|_{K}$ denotes the restriction of a measure $\mu$ to the set $K$.
\end{theorem}
We remark that, for the case of a Jordan region, the hypothesis of Theorem 3 of \cite{GaPo02} implies the existence of an NCS point.
We also note that the assumption $\dist(E_0,\,\mathcal{E}\setminus E_0)>0$ implies that any (open)
component of $\mathrm{int}(E_0)$ is simply connected.
As a consequence of Theorem \ref{th:main} and \cite[Theorem III.4.1]{ST} we have the following.
\begin{corollary}
With the hypotheses of Theorem~\ref{th:main}, if $G$ denotes a component of $\mathrm{int}(E_0)$,
then for any asymptotically extremal sequence $\{P_n\}_{n\in\mathcal{N}}$
of monic polynomials for $E$, there exists a point $\zeta$ in $G$ such that
\begin{equation}
\limsup_{n\to\infty,\, n\in\mathcal{N}}|P_n(\zeta)|^{1/n}=\mathrm{cap}(E).
\end{equation}
\end{corollary}
\begin{corollary}\label{cor2.2}
Let $E$ consist of the union of a finite number of closed Jordan regions $E:=\cup_{j=1}^N \overline{G_j}$, where
$\overline{G_i}\cap\overline{G_j}=\emptyset$, $i\neq j$,
and assume that, for each $k=1,\ldots,m$ with $m\le N$, the boundary of $G_k$ contains an NCS point. Then for any asymptotically extremal sequence of monic polynomials $\{P_n\}_{n\in\mathcal{N}}$ for $E$,
\begin{equation}
\nu_{P_n}|_{\mathcal{V}}\,{\stackrel{*}{\longrightarrow}}\, \mu_E|_{\mathcal{V}},\quad n\to\infty,\quad n\in\mathcal{N},
\end{equation}
where $\mathcal{V}$ is an open set containing $\bigcup_{k=1}^m\overline{G}_k$, such that if $m<N$ the distance of $\overline{\mathcal{V}}$ from
$\bigcup_{j=m+1}^N \overline{G}_j$ is positive.
\end{corollary}
We now give some examples that follow from Theorem~\ref{th:main} and Corollary \ref{cor2.2}. If $E$ has one of the following forms, then for any asymptotically extremal sequence $\{P_n\}_{n\in\mathbb{N}}$
of monic polynomials for $E$, we have
\begin{equation}\label{eq:nuPntomuE}
\nu_{P_n}\,{\stackrel{*}{\longrightarrow}}\, \mu_E,\quad n\to\infty,\quad n\in\mathbb{N}.
\end{equation}
\begin{itemize}
\item[(i)]
$E$ is a non-convex polygon or a finite union of mutually exterior non-convex polygons.
\item[(ii)]
$E$ is the union of two non-convex polygons that are mutually exterior except for a single common boundary point.
\item[(iii)]
$E$ is the union of two mutually exterior non-convex polygons joined by a Jordan arc in their exterior, such that $E$ does not separate the plane; see Figure~\ref{fig:Form-iii}.
\item[(iv)]
$E$ is a non-convex polygon $\Pi$ together with a finite number of closed Jordan arcs, each lying exterior to $\Pi$ except for its initial point on the boundary of $\Pi$, and such that $E$ does not separate the plane;
see Figure~\ref{fig:Form-iv}.
\item[(v)]
$E$ is any of the preceding forms with the polygons replaced by closed bounded Jordan regions, each one having an NCS point.
\item[(vi)]
$E$ is any of the preceding forms union with a compact set $K$ in the complement of $E$ such that $K$ has empty interior and $E\cup K$ does not separate the plane.
\end{itemize}
\begin{figure}
\caption{Form (iii)}
\label{fig:Form-iii}
\end{figure}
\begin{figure}
\caption{Form (iv)}
\label{fig:Form-iv}
\end{figure}
We remark that if $E$ is a convex region, so that the hypotheses of Theorem \ref{th:main} are not fulfilled, then the zero
behavior of an asymptotically extremal sequence of monic polynomials $P_n$ can be quite different. For example, if
$E$ is the closed unit disk centered at the origin for which $\mu_E$ is the uniform measure on the
circumference $|z|=1$, the polynomials $P_n(z)=z^n$ form an extremal sequence for which
$\nu_{P_n}=\delta_0$, the unit point mass at zero. A less trivial example is illustrated
in Figure~\ref{cirsec04}, where the zeros of orthonormal polynomials with respect to area measure on a
circular sector $E$ with opening angle $\pi/2$ are plotted for degrees $n=50, 100,$ and $150.$ These so-called Bergman polynomials $B_n(z)$ form an asymptotically extremal sequence of polynomials for the sector, yet their normalized zero counting measures converge weak* to a measure
$\nu$ that is supported on the union of three curves lying in the interior of $E$ except for their three
endpoints, the vertices of the sector; see \cite{M-DSS}.
\begin{figure}
\caption{Zeros of the Bergman polynomials $B_n$, $n=50,100,150$, for the circular sector with opening angle $\pi/2$.}
\label{cirsec04}
\end{figure}
On the other hand, for any compact set $E$ of positive capacity, whether convex or not, if $\{q_n(z)\}_{n\in\mathbb{N}}$ denotes a sequence of Fekete polynomials for $E$, then this sequence
is asymptotically extremal for $E$, all zeros of the $q_n$ lie on the outer boundary $\partial\Omega$, and $\nu_{q_n}\,{\stackrel{*}{\longrightarrow}}\, \mu_E$ as $n\to\infty$, $n\in\mathbb{N}$; see e.g., \cite [p.~176]{ST}.
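Fekete points themselves are expensive to compute; a common greedy surrogate is the Leja sequence, which, for a wide class of compact sets, shares the limiting distribution $\mu_E$. The following minimal Python sketch (ours, purely illustrative, not from the paper) computes the first Leja points for $E=[-1,1]$, where $\mu_E$ is the arcsine distribution:

```python
import numpy as np

def leja_points(candidates, n):
    """Greedy Leja points on a discretized compact set: start at a point of
    maximal modulus, then repeatedly pick the candidate maximizing the
    product of distances to the points chosen so far."""
    pts = [candidates[np.argmax(np.abs(candidates))]]
    for _ in range(n - 1):
        prod = np.ones(len(candidates))
        for p in pts:
            prod *= np.abs(candidates - p)
        pts.append(candidates[np.argmax(prod)])
    return np.array(pts)

grid = np.linspace(-1.0, 1.0, 4001)    # discretization of E = [-1, 1]
pts = leja_points(grid, 4)
# the first four points are -1, 1, then ~0, then ~ +-1/sqrt(3),
# the successive maximizers of |x+1|, |x^2-1|, |x(x^2-1)| on E
```

The points accumulate on $\partial\Omega=[-1,1]$ itself, consistent with the fact that Fekete (and Leja) zeros stay on the outer boundary.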
In every case, according to Theorem~\ref{th:ST4.7}, a limit measure of a sequence of asymptotically extremal monic polynomials must
have a balayage to the outer boundary of $E$ that equals the equilibrium measure $\mu_E.$ The question then of what types
of point sets can support a measure with such a balayage is a relevant inverse problem. In this connection, there is a conjecture of the first author on the
existence of \textit{electrostatic skeletons} for every convex polygon (more generally, for any convex region with boundary consisting of line segments
or circular arcs). By an electrostatic skeleton on $E$ we mean a positive measure with closed support in $E$, such that its
logarithmic potential matches the equilibrium potential in $\Omega$ and its support has empty interior and does not separate
the plane. For example, a square region has a skeleton whose support is the union of its diagonals; the circular sector in
Figure~\ref{cirsec04} has a skeleton supported on the illustrated curve joining the three vertices. See the discussion in \cite[p. 55]{LunTo} and in \cite{ErLuRa}.
\section{Applications to special polynomial sequences}
We begin with some results for Bergman polynomials $\{B_n\}_{n\in\mathbb{N}}$ that are orthogonal with respect to the area
measure $dA$ over a bounded Jordan domain $G$; i.e.,
\begin{equation}
\int_G B_m(z)\overline{B_n(z)}dA(z)=0,\quad m\neq n.
\end{equation}
The following theorem was established in \cite{LSS}.
\begin{theorem}\label{LSS-th}
Let $G$ be a bounded Jordan domain whose boundary $\partial G$ is singular; i.e., a conformal map $\varphi$ of $G$ onto the unit disk $\mathbb{D}$ cannot be analytically continued to some open set containing $\overline{G}$. Then, there is a subsequence $\mathcal{N}$ of $\mathbb{N}$ such that
\begin{equation}\label{eq:nuBntomu3.2}
\nu_{B_n}\,{\stackrel{*}{\longrightarrow}}\, \mu_{\partial G},\quad n\to\infty,\quad n\in\mathcal{N}.
\end{equation}
\end{theorem}
It is not difficult to see that this property of $G$ is independent of the choice of the conformal map $\varphi$.
As a consequence of Corollary \ref{cor2.2}, we obtain a result that holds for $\mathcal{N}=\mathbb{N}$.
\begin{corollary}\label{LSS-cor}
If the Jordan domain $G$ has a point on its boundary that satisfies the NCS condition, then (\ref{eq:nuBntomu3.2}) holds for $\mathcal{N}=\mathbb{N}$.
\end{corollary}
\begin{figure}
\caption{Zeros of the Bergman polynomials $B_n$, $n=50,100,150$, for the circular sector with opening angle $3\pi/2$.}
\label{cirsec02}
\end{figure}
In Figure~\ref{cirsec02}, we depict zeros of the Bergman polynomials for the circular sector
$G:=\{re^{i\theta}:\, 0<r<1,\ -3\pi/4<\theta<3\pi/4\}$.
The computations of the Bergman polynomials for this sector as well as for the sector
in Figure 3 were carried out in Maple 16 with 300 significant figures, using the
Arnoldi Gram-Schmidt algorithm; see \cite[Section 7.4]{St-CA13} for a discussion regarding the
stability of the algorithm.
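As an illustration of the Arnoldi Gram-Schmidt procedure mentioned above, the following minimal Python sketch (ours, not the authors' Maple computation; the grid sizes and function names are our choices) builds discrete Bergman polynomials on the unit disk, where they are known exactly: $B_n(z)=\sqrt{(n+1)/\pi}\,z^n$.

```python
import numpy as np

def bergman_arnoldi(nodes, weights, deg):
    """Arnoldi-style Gram-Schmidt: orthonormalize 1, z*B_0, z*B_1, ...
    with respect to the discrete inner product
        <f, g> = sum_k w_k f(z_k) conj(g(z_k)),
    a quadrature approximation of the area integral over the domain.
    Returns Q with row n holding the values of B_n at the nodes."""
    def ip(f, g):
        return np.sum(weights * f * np.conj(g))
    Q = np.zeros((deg + 1, len(nodes)), dtype=complex)
    v = np.ones(len(nodes), dtype=complex)
    Q[0] = v / np.sqrt(ip(v, v).real)
    for n in range(deg):
        v = nodes * Q[n]                 # multiply the last polynomial by z
        for k in range(n + 1):           # orthogonalize against B_0, ..., B_n
            v = v - ip(v, Q[k]) * Q[k]
        Q[n + 1] = v / np.sqrt(ip(v, v).real)
    return Q

# polar quadrature grid for the unit disk (midpoint rule in r,
# trapezoid rule in theta), where B_n(z) = sqrt((n+1)/pi) z^n exactly
nr, nt = 200, 256
r = (np.arange(nr) + 0.5) / nr
t = 2 * np.pi * np.arange(nt) / nt
R, T = np.meshgrid(r, t)
nodes = (R * np.exp(1j * T)).ravel()
weights = (R / nr * (2 * np.pi / nt)).ravel()   # r dr dtheta

Q = bergman_arnoldi(nodes, weights, 5)
```

Orthogonalizing $zB_n$ instead of the raw monomials is what gives the Arnoldi variant its numerical stability; the discrete polynomials here match the closed-form $B_n$ up to quadrature error.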
For Figure~\ref{cirsec02} the origin is an NCS point and, therefore, Corollary~\ref{LSS-cor} implies that
the only limit distribution of the zeros is the equilibrium measure, a fact reflected in Figure~\ref{cirsec02}. It
is interesting to note that this figure also depicts the facts that the density of the equilibrium measure is
zero at corner points with opening angle greater than $\pi$, and it is infinite at corners with opening angle
less than $\pi$.
In \cite[p.~1427]{GPSS} the following question has been raised:
If $G$ is the union of two mutually exterior Jordan domains $G_1$ and $G_2$ whose boundaries are singular, does there
exist a common sequence of integers $n$ for which $\nu_{B_n}|_{\mathcal{V}_j}$ converges to
$\mu_{\partial G}|_{\partial G_j}$, where ${\mathcal{V}_j}$ is an open set containing $G_j$, $j=1,2$?
Thanks to Theorem~\ref{th:main}, the answer is affirmative for the full sequence $\mathbb{N}$
if both $G_1$ and $G_2$ have the NCS property.
We conclude by applying the results of the main theorem to the case of Faber polynomials.
For this, we assume that $\Omega$ is simply connected and let
$\Phi$ denote the conformal map $\Omega\to\Delta:=\{w:|w|>1\}$, normalized so that near infinity
\begin{equation}\label{eq:Phi}
\Phi(z)=\gamma z+\gamma_0+\frac{\gamma_1}{z}+\frac{\gamma_2}{z^2}+\cdots,\quad \gamma>0.
\end{equation}
The Faber polynomials $\{F_n(z)\}_{n=0}^\infty$ of $E$ are defined as the polynomial part of the expansion of $\Phi^n(z)$
near infinity.
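In practice, the $F_n$ can be generated recursively from the Laurent coefficients of the inverse map $\psi=\Phi^{-1}$. The following Python sketch (ours, not from the paper) assumes for simplicity that $\gamma=1$ in (\ref{eq:Phi}), so that $\psi(w)=w+\alpha_0+\alpha_1 w^{-1}+\cdots$, and uses the classical recurrence $F_{n+1}(z)=(z-\alpha_0)F_n(z)-\sum_{j=1}^{n}\alpha_jF_{n-j}(z)-n\alpha_n$:

```python
import numpy as np

def faber_polynomials(alpha, N):
    """Coefficient arrays (constant term first) of F_0, ..., F_N for the
    exterior map psi(w) = w + alpha[0] + alpha[1]/w + ...  (so cap(E) = 1),
    via the recurrence
      F_{n+1}(z) = (z - alpha_0) F_n(z) - sum_{j=1}^n alpha_j F_{n-j}(z) - n alpha_n."""
    F = [np.array([1.0])]                            # F_0 = 1
    for n in range(N):
        a = list(alpha) + [0.0] * max(0, n + 1 - len(alpha))
        new = np.concatenate(([0.0], F[n]))          # z * F_n
        new[: n + 1] -= a[0] * F[n]                  # - alpha_0 F_n
        for j in range(1, n + 1):                    # - sum_j alpha_j F_{n-j}
            new[: n + 1 - j] -= a[j] * F[n - j]
        new[0] -= n * a[n]                           # - n alpha_n
        F.append(new)
    return F

# Joukowski map psi(w) = w + c/w (E an ellipse): the Faber polynomials are
# scaled Chebyshev polynomials, F_n(z) = 2 c^{n/2} T_n(z/(2 sqrt(c))),
# e.g. F_4(z) = z^4 - 4c z^2 + 2c^2
F = faber_polynomials([0.0, 0.25], 4)
```

The Joukowski case provides a closed-form check of the recurrence; for general $\gamma$ one first rescales $z$.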
The following theorem was established in \cite{KS}.
\begin{theorem}\label{KS}
Suppose that $\mathrm{int}(E)$ is connected and $\partial E$ is a piecewise analytic curve that has a singularity other
than an outward cusp.
Then, there is a subsequence $\mathcal{N}$ of $\mathbb{N}$ such that
\begin{equation}\label{eq:nuBntomuGcalN}
\nu_{F_n}\,{\stackrel{*}{\longrightarrow}}\, \mu_{E},\quad n\to\infty,\quad n\in\mathcal{N}.
\end{equation}
\end{theorem}
Using Theorem~\ref{th:main} we can refine the above as follows; see also the question raised in Remark 6.1(c) in
\cite{KS}.
\begin{corollary}\label{cor:KS}
If $\mathrm{int}(E)$ has a point on its boundary that satisfies the NCS condition, then (\ref{eq:nuBntomuGcalN})
holds for $\mathcal{N}=\mathbb{N}$.
\end{corollary}
\section{Proof of Theorem~\ref{th:main}}\label{sec:proofs}
Let $\mu$ be any weak$^*$ limit measure of the sequence $\{\nu_{P_n}\}_{n\in\mathcal{N}}$ and recall from Theorem~\ref{th:ST4.7}
that $\supp(\mu)\subset\mathbb{C}\setminus\Omega$ and
\begin{equation}\label{eq:Um=UmE}
U^\mu(z)=U^{\mu_E}(z),\quad z\in\Omega,
\end{equation}
where
\begin{equation*}
U^\nu(z):=\int\log\frac{1}{|z-t|}d\nu(t)
\end{equation*}
denotes the logarithmic potential of a measure $\nu$.
We consider first the case when $E_0=\mathcal{E}$. It suffices to show that
\begin{equation}\label{eq:supp-mu}
\supp(\mu)\subset\partial\Omega,
\end{equation}
because, in view of (\ref{eq:Um=UmE}) and Carleson's unicity theorem (\cite[Theorem II.4.13]{ST}),
this will imply the relation $\mu=\mu_E$, which yields (\ref{eq:main}) with ${V}=\mathbb{C}$. Clearly, (\ref{eq:supp-mu}) is satisfied automatically in the case $\textup{int}(E_0)=\emptyset$, so we turn our attention now to the case $\textup{int}(E_0)\neq\emptyset$ and assume to the contrary
that $\supp(\mu)$ is not contained in $\partial\Omega$. Then
there exists a small closed disk $K$ belonging
to some open component of $\textup{int}(E_0)$, such that $\mu(K)>0$.
We call this particular component $G$, and note that it is simply connected.
Since $\partial G$ is regular with respect to the interior
and exterior Dirichlet problem,
\begin{equation*}
\lim_{z\to t\in\partial G \atop z\in G}g_G(z,\zeta)=0\mbox{ and }
\lim_{z\to t\in\partial G \atop
z\in\overline{\mathbb{C}}\setminus\overline{G}}g_{\overline{\mathbb{C}}\setminus\overline{G}}(z,\infty)=0,
\end{equation*}
where $g_{\overline{\mathbb{C}}\setminus\overline{G}}(z,\infty)$ denotes the Green function of
$\overline{\mathbb{C}}\setminus\overline{G}$ with pole at infinity.
Following Gardiner and Pommerenke (see \cite[Section 5]{GaPo02}), we set $\mu_0:=\mu|_K$ and consider the function
\begin{equation}\label{eq:S-def}
S(z):=\left\{
\begin{array}{cl}
\int g_G(z,\zeta)d\mu_0(\zeta), &z\in G,\\
U^{\mu_E}(z)-\log\frac{1}{\cpct(E)}, &z\in \overline{\mathbb{C}}\setminus{G}.
\end{array}
\right.
\end{equation}
From the properties of Green functions and equilibrium potentials it follows that $S(z)$ is harmonic in $G\setminus K$ and
in $\Omega\setminus\{\infty\}$, positive in $G$, negative in $\overline{\mathbb{C}}\setminus\overline{G}$ and vanishes
\emph{quasi-everywhere} (that is, apart from a set of capacity zero) on $\partial G$.
Let now $\widehat{\mu}_0$ denote the balayage of $\mu_0$ out of $K$ onto $\partial G$. Then, the relation
$\widehat{\mu}_0\le \mu^b$ follows
from the discussion regarding balayage onto arbitrary compact sets in \cite[pp. 222--228]{La72}.
Since, by Theorem \ref{th:ST4.7}, $\mu^b=\mu_E$, the difference $\mu_E-\widehat{\mu}_0$ is
a positive measure, a fact leading to the following useful representation:
\begin{equation}\label{eq:S-def-2}
S(z)=U^{\mu_0}(z)+U^{\mu_E-\widehat{\mu}_0}(z)-\log\frac{1}{\cpct(E)}, \quad z\in\mathbb{C},
\end{equation}
which shows that $S(z)$ is superharmonic in $\mathbb{C}$.
By assumption, the boundary of $G$ contains an NCS point $z_0$. Without loss of generality, we make the following simplifications regarding the two conditions in
Definition \ref{Def1}: by performing a translation and scaling we take $z_0$ to be the origin, and by a rotation we take
$\zeta_0=i\gamma$, for some $\gamma\in (0,1)$. Finally, in view of the Remark following Definition \ref{Def1}, we take
$\overline{D}:=\{z:|z-i\gamma|\le\gamma\}$ and choose $\gamma$ so that $\overline{D}$ is a subset of the open unit disk $\mathbb{D}$
and $\overline{D}\cap K=\emptyset$.
The contradiction we seek will be a consequence of the following two claims:
\begin{equation*}\label{claim-a}
\textbf{Claim (a)} \quad \int_{\pi}^{2\pi}\frac{S(re^{i\theta})}{r}\,d\theta \ge \tau \quad\mbox{for all sufficiently small } r>0,
\end{equation*}
where $\tau$ is a negative constant, and
\begin{equation*}\label{claim-b}
\textbf{Claim (b)} \quad \int_{0}^{\pi}\frac{S(re^{i\theta})}{r}\,d\theta \to +\infty \quad\mbox{as } r\to 0+.
\end{equation*}
These claims follow as in \cite{GaPo02}, utilizing in the justification of Claim (b) the essential condition that the
origin is an NCS point so that
\begin{equation}\label{eq:limGy}
\lim_{y\to 0+}\frac{g_G(iy,i\gamma)}{y}=+\infty.
\end{equation}
Note that for small positive $y$ the definition of $S(z)$ gives
\begin{equation}\label{eq:Siy}
\frac{S(iy)}{y}=\frac{1}{y}\int g_G(iy,\zeta)d\mu_0(\zeta).
\end{equation}
Using (\ref{eq:Har}) we have $g_G(iy,\zeta)\ge C g_G(iy,i\gamma)$ for any $\zeta\in K$ and small $y$, which in view of (\ref{eq:limGy})
leads to the limit
\begin{equation}\label{eq:limSy}
\lim_{y\to 0+}\frac{S(iy)}{y}=+\infty.
\end{equation}
Using Claims (a) and (b) and the fact that $S(0)=0$ (since the origin is a regular point of $\Omega$), it is easy to arrive at a relation that contradicts the mean value inequality for superharmonic
functions (see also \cite[p. 425]{GaPo02}):
\begin{equation*}
\begin{alignedat}{1}
\frac{1}{r}\left(\frac{1}{2\pi}\int_0^{2\pi}S(re^{i\theta})d\theta-S(0)\right)&=
\frac{1}{2\pi r}\int_0^{2\pi}S(re^{i\theta})d\theta\\
&\ge\frac{1}{2\pi r}\int_0^{\pi}S(re^{i\theta})d\theta+\tau\\
&\to\infty, \quad r\to 0+.
\end{alignedat}
\end{equation*}
This establishes the theorem for the case $E_0=\mathcal{E}$.
To conclude the proof, we observe that when $\dist(E_0,\,\mathcal{E}\setminus E_0)>0$ our arguments above show
that $\mu$ cannot have any point of its support in the interior of any open component of ${E_0}\cap V$; hence it is supported on the
outer boundary of $E_0$ inside $V$. Therefore, by following the proof
of Theorem II.4.13 in \cite{ST}, we see that the logarithmic potentials of $\mu$ and $\mu_E$ coincide in $V$ and
the required relation $\mu|_V=\mu_E|_{E_0}$ follows from the unicity theorem for logarithmic potentials.
\qed\\
\noindent\textit{Acknowledgment.} The authors are grateful to the referees for their helpful comments.
\end{document} |
\begin{document}
\title[Edge-transitive graphs]{On the automorphism groups \\ of
graphs with twice prime valency}
\thanks{2000 Mathematics Subject Classification. 05C25, 20B25.}
\thanks{This work was
partially supported by the National Natural Science
Foundation of China (11731002) and the Fundamental Research Funds for the Central Universities.}
\thanks{Corresponding author: Zai Ping Lu ([email protected])}
\author[Liao]{Hong Ci Liao}
\address{H. C. Liao\\ Center for Combinatorics\\
LPMC-TJKLC, Nankai University\\
Tianjin 300071\\
P. R. China} \email{[email protected]}
\author[Li]{Jing Jian Li}
\address{Jing Jian Li\\ School of Mathematics and Information Sciences\\ Guangxi
University\\ Nanning 530004, P. R. China.}
\address{Colleges and Universities
Key Laboratory of Mathematics and Its Applications.}
\email{[email protected]}
\author[Lu]{Zai Ping Lu}
\address{Z. P. Lu\\ Center for Combinatorics\\
LPMC-TJKLC, Nankai University\\
Tianjin 300071\\
P. R. China} \email{[email protected]}
\maketitle
\date{\today}
\begin{abstract}
A graph is edge-transitive if its automorphism group acts transitively on the edge set. In this paper, we investigate the automorphism groups of edge-transitive graphs of odd order and twice prime valency.
Let ${\it\Gamma}$ be a connected graph of odd order and twice prime valency, and let $G$ be a subgroup of the automorphism group of ${\it\Gamma}$.
In the case where $G$ acts transitively on the edges and quasiprimitively on the vertices of ${\it\Gamma}$, we prove that either $G$ is almost simple or $G$ is a primitive group of affine type. If, further, $G$ is an almost simple primitive group then, with two exceptions, the socle of $G$ acts transitively on the edges of ${\it\Gamma}$.
\vskip 10pt
\noindent{\scshape Keywords}. Edge-transitive graph, arc-transitive graph, $2$-arc-transitive graph, quasiprimitive group, almost simple group.
\end{abstract}
\vskip 50pt
\section{introduction}
In this paper, all graphs are assumed to be finite and simple. In particular, a graph is a pair ${\it\Gamma}=(V,E)$
consisting of a nonempty set $V$ and a set $E$ of $2$-subsets of $V$, called the vertex set and edge set of ${\it\Gamma}$, respectively.
Each edge $\{\alpha,\beta\}\in E$ gives two ordered pairs $(\alpha,\beta)$ and $(\beta,\alpha)$, each of which is called an arc of ${\it\Gamma}$.
A triple $(\alpha,\beta,\gamma)$ of vertices is a $2$-arc if $\alpha\ne \gamma$ and both $(\alpha,\beta)$ and $(\beta,\gamma)$ are arcs.
\vskip 5pt
Assume that ${\it\Gamma}=(V,E)$ is a graph. An automorphism $g$ of ${\it\Gamma}$ is a permutation (i.e., a bijection) on $V$ such that $\{\alpha^g,\beta^g\}\in E$ for all $\{\alpha,\beta\}\in E$.
Denote by ${\sf Aut}{\it\Gamma}$ the set of all automorphisms of ${\it\Gamma}$. Then ${\sf Aut}{\it\Gamma}$ is a (finite) group under the product of permutations, which acts naturally on the edge set, arc set and $2$-arc set of ${\it\Gamma}$ by
\[\{\alpha,\beta\}^g=\{\alpha^g,\beta^g\},\, (\alpha,\beta)^g=(\alpha^g,\beta^g),\, (\alpha,\beta,\gamma)^g=(\alpha^g,\beta^g,\gamma^g),\]
respectively. For a subgroup $G\le {\sf Aut}{\it\Gamma}$, the graph ${\it\Gamma}$ is said to be
$G$-vertex-transitive, $G$-edge-transitive, $G$-arc-transitive and $(G,2)$-arc-transitive if $G$ acts transitively on the vertex set, edge set, arc set and $2$-arc set of ${\it\Gamma}$, respectively.
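To make these transitivity notions concrete, here is a minimal Python sketch (ours, purely illustrative; the helper names are not from the paper) that brute-forces the automorphism group of the $5$-cycle and counts its orbits on edges and arcs; a single orbit in each case confirms that $C_5$ is edge- and arc-transitive, with automorphism group the dihedral group of order $10$.

```python
from itertools import permutations

def automorphisms(vertices, edges):
    """Brute-force the automorphism group of a small graph: the
    permutations of the vertex set mapping edges to edges."""
    eset = {frozenset(e) for e in edges}
    auts = []
    for p in permutations(vertices):
        g = dict(zip(vertices, p))
        if all(frozenset((g[a], g[b])) in eset for a, b in edges):
            auts.append(g)
    return auts

def orbit_count(auts, objects, act):
    """Number of orbits of the group on the given objects under `act`."""
    objs, orbits = set(objects), 0
    while objs:
        x = objs.pop()
        orbits += 1
        objs -= {act(g, x) for g in auts}
    return orbits

# the 5-cycle C_5: vertex-, edge- and arc-transitive, Aut(C_5) = D_5
V = list(range(5))
E = [(i, (i + 1) % 5) for i in range(5)]
auts = automorphisms(V, E)
arcs = E + [(b, a) for a, b in E]

n_edge_orbits = orbit_count(auts, [frozenset(e) for e in E],
                            lambda g, e: frozenset(g[v] for v in e))
n_arc_orbits = orbit_count(auts, arcs, lambda g, ab: (g[ab[0]], g[ab[1]]))
```

Brute force is of course only feasible for very small graphs; it merely illustrates the definitions above.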
\vskip 5pt
Let ${\it\Gamma}=(V,E)$ be a connected graph, and let $G\le {\sf Aut}{\it\Gamma}$. Suppose that $G$ acts quasiprimitively on $V$, that is, every minimal normal subgroup of $G$ has a unique orbit on $V$.
Following the subdivision in \cite{Prag-quasi},
the group $G$ is one of eight types of quasiprimitive groups.
Conversely, using the `coset graph' construction (see \cite{Sabiddusi}), one can obtain graphs from each type of quasiprimitive group. Nevertheless, it is believed that the group $G$ is quite restricted if, in addition, the graph ${\it\Gamma}$ is assumed to have certain symmetry properties, or if restrictions are placed on its order or valency.
For example, if ${\it\Gamma}$ is $(G,2)$-arc-transitive, then
Praeger \cite{Prag-o'Nan} proved that only four of those eight types occur for $G$. If ${\it\Gamma}$ has odd order and is $(G,2)$-arc-transitive, then $G$ is an almost simple group by \cite{Li-odd}.
In this paper, replacing `$2$-arc-transitivity' by `edge-transitivity' and imposing a restriction on the valency of the graph, we investigate the pair ${\it\Gamma}$ and $G$.
\vskip 5pt
Let ${\it\Gamma}=(V,E)$ be a connected graph of twice prime valency, and let $G\le {\sf Aut}{\it\Gamma}$. In \cite{Prag-Xu}, Praeger and Xu give a nice characterization of the graph ${\it\Gamma}$ when it is $G$-arc-transitive and $G$ contains an irregular abelian (and so intransitive) normal subgroup. In Section 3 of this paper,
we focus on the case where ${\it\Gamma}$ is $G$-edge-transitive and $G$ contains a
transitive normal subgroup. In particular, letting ${\rm soc}(G)$ denote the socle of $G$, we prove the following result.
\begin{theorem}\label{myth}
Let ${\it\Gamma}=(V,E)$ be a connected graph of odd order and valency $2r$ for some prime $r$, and let $G\le {\sf Aut}{\it\Gamma}$. Assume that ${\it\Gamma}$ is $G$-edge-transitive but not $(G,2)$-arc-transitive. If $G$ is quasiprimitive on $V$ then either
$G$ is almost simple, or ${\rm soc}(G)=\ZZ_p^k$ for some odd prime $p$ and integer $1\le k\le r$.
\end{theorem}
\vskip 5pt
\begin{remark}\label{rem}
Li and the last two authors of this paper have been working on the project of
classifying all graphs of odd order which admit an almost simple group $X$ acting $2$-arc-transitively. By their classification, such a graph ${\it\Gamma}=(V,E)$ is not of twice prime valency unless one of the following holds: (1) ${\it\Gamma}$ is a complete graph; (2) ${\it\Gamma}$ is the odd graph of valency $4$; (3) ${\rm soc}(X)={\rm PSL}(2,q)$ and ${\it\Gamma}$ is of valency $4$ or $6$. In view of this, if $X$ has a primitive subgroup acting transitively on $E$, then we may easily prove that ${\it\Gamma}$ is a complete graph; see also the following result.
\end{remark}
\vskip 5pt
A graph ${\it\Gamma}$ is called $2$-arc-transitive if it is $({\sf Aut}{\it\Gamma},2)$-arc-transitive. The following result is a consequence of Theorem \ref{myth}, and is proved in Section 4.
\begin{theorem}\label{myth-2}
Let ${\it\Gamma}=(V,E)$ be a connected graph of odd order and twice prime valency, and let $G\le {\sf Aut}{\it\Gamma}$. Assume that ${\it\Gamma}$ is $G$-edge-transitive but not $(G,2)$-arc-transitive. If $G$ is primitive on $V$ then either ${\it\Gamma}$ is a complete graph, or ${\rm soc}({\sf Aut}{\it\Gamma})={\rm soc}(G)$; in particular,
${\it\Gamma}$ is $2$-arc-transitive
if and only if ${\it\Gamma}$ is a complete graph.
\end{theorem}
For a pair $(G,{\it\Gamma})$ as in Theorem \ref{myth} or \ref{myth-2}, the action of ${\rm soc}(G)$ on the edge set of ${\it\Gamma}$ is considered in Section 5. We present several examples and prove the following result.
\begin{theorem}\label{myth-3}
Let ${\it\Gamma}=(V,E)$ be a connected graph of odd order and twice prime valency, and let $G\le {\sf Aut}{\it\Gamma}$. Assume that ${\it\Gamma}$ is $G$-edge-transitive but not $(G,2)$-arc-transitive. If $G$ is almost simple and primitive on $V$ then either ${\it\Gamma}$ is ${\rm soc}(G)$-edge-transitive, or ${\it\Gamma}$ is of valency $4$
and isomorphic to one of the graphs in Example \ref{exam-2}.
\end{theorem}
\vskip 30pt
\section{Preliminaries}
For convenience, a graph ${\it\Gamma}} \def\Sig{{\it\Sigma}} \def\Del{{\it \Delta}=(V,E)$ is sometimes viewed as the digraph on $V$ with directed edges the arcs of ${\it\Gamma}} \def\Sig{{\it\Sigma}} \def\Del{{\it \Delta}$, and a subset ${\rm D}} \def\Q{{\rm Q}} \def\rmQ{{\rm Q}elta$ of the arc set of ${\it\Gamma}} \def\Sig{{\it\Sigma}} \def\Del{{\it \Delta}$ is sometimes viewed as a digraph on $V$.
Then, with such a convenience, a digraph ${\rm D}} \def\Q{{\rm Q}} \def\rmQ{{\rm Q}elta$ is a graph if and only if ${\rm D}} \def\Q{{\rm Q}} \def\rmQ{{\rm Q}elta={\rm D}} \def\Q{{\rm Q}} \def\rmQ{{\rm Q}elta^*:=\{(\alpha} \def\b{\beta} \def\d{\delta,\b)\mid (\b,\alpha} \def\b{\beta} \def\d{\delta)\in {\rm D}} \def\Q{{\rm Q}} \def\rmQ{{\rm Q}elta\}$.
In this section, we make the following assumption:
\begin{enumerate}
\item[] {\em ${\it\Gamma}=(V,E)$ is a connected regular graph, $G\le {\sf Aut}{\it\Gamma}$, $N\ne 1$ is a normal subgroup of $G$, and $\Delta$ is the union of some $G$-orbits on the arc set of ${\it\Gamma}$.}
\end{enumerate}
\vskip 5pt
Let $\alpha\in V$. Set $\Delta(\alpha)=\{\beta\in V\mid (\alpha,\beta)\in \Delta \}$ and $N_\alpha=\{g\in N\mid \alpha^g=\alpha\}$, called respectively the (out-)neighborhood of $\alpha$ in (the digraph) $\Delta$ and the stabilizer of $\alpha$ in $N$. Then $N_\alpha$ induces a permutation group $N_\alpha^{\Delta(\alpha)}$.
Denote by $N_\alpha^{[1]}$ the kernel of the action of $N_\alpha$ on $\Delta(\alpha)$.
Then
\begin{equation}\label{eq-1}
N_\alpha^{\Delta(\alpha)}\cong N_\alpha/N_\alpha^{[1]},\, N_{\alpha^g}=N_\alpha^g,\, N_{\alpha^g}^{[1]}=(N_\alpha^{[1]})^g,\, \forall g\in G.
\end{equation}
For $U\subseteq V$, set $N_{(U)}=\cap_{\beta\in U}N_\beta$ and $N_U=\{x\in N\mid U^x=U\}$, called respectively
the point-wise stabilizer and set-wise stabilizer of $U$ in $N$. Then
\begin{equation}\label{eq-2}
N_{(U^g)}=N_{(U)}^g, \, N_{U^g}=N_U^g,\, \forall g\in G.
\end{equation}
If $U=\{\alpha_1,\ldots,\alpha_k\}$ then we write $N_{(U)}=N_{\alpha_1\alpha_2\ldots\alpha_k}$.
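The distinction between the two stabilizers can be checked mechanically on a small example. The following Python sketch (purely illustrative, not part of the paper) enumerates $N_{(U)}$ and $N_U$ for $N={\rm Sym}\{0,1,2,3\}$ and $U=\{0,1\}$:

```python
from itertools import permutations

# N = Sym({0,1,2,3}): each permutation p acts by i -> p[i].
N = list(permutations(range(4)))
U = {0, 1}

# Point-wise stabilizer N_(U): fixes every point of U individually.
pointwise = [p for p in N if all(p[i] == i for i in U)]
# Set-wise stabilizer N_U: maps the set U onto itself.
setwise = [p for p in N if {p[i] for i in U} == U]

assert set(pointwise) <= set(setwise)   # N_(U) is always a subgroup of N_U
print(len(pointwise), len(setwise))     # 2 4
```

Here $|N_{(U)}|=2$ while $|N_U|=4$, the index $2$ being $|{\rm Sym}(U)|$, as expected.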
\vskip 5pt
For a finite group $X$, denote by $\pi(X)$ the set of prime divisors of $|X|$.
The following lemma says that $\pi(N_\alpha)=\pi(N_\alpha^{\Delta(\alpha)})$ if $G$ is transitive on $V$; see also \cite[Lemma 1.1]{CLP2000} for the case where $\Delta$ is the arc set of ${\it\Gamma}$.
\begin{lemma}\label{normal-subg-1}
Assume that $\Delta$ is connected and $G$ is transitive on $V$. Then
$\pi(N_\alpha)=\pi(N_\alpha^{\Delta(\alpha)})$ and $|\Delta(\alpha)|\ge \max\pi(N_{\alpha})$.
If further $\Delta=\Delta^*$ then $|\Delta(\alpha)|> \max\pi(N_{\alpha\beta})$, where $\beta\in \Delta(\alpha)$.
\end{lemma}
\par\noindent{\it Proof.\ \ }
Let $p$ be an arbitrary prime. Suppose that $p$ is not a divisor of $|N_\alpha^{\Delta(\alpha)}|$. Then $P\le N_\alpha^{[1]}$, where $P$ is an arbitrary Sylow $p$-subgroup of $N_\alpha$. In particular,
$P\le N_\beta$ for $\beta\in \Delta(\alpha)$.
Take $g\in G$ with $\alpha^g=\beta$. We have ${\it\Gamma}(\beta)={\it\Gamma}(\alpha)^g$,
$N_\beta=N_\alpha^g$ and $N_\beta^{[1]}=(N_\alpha^{[1]})^g$. It follows that $P$ is also a Sylow $p$-subgroup of $N_\beta$, and $P\le N_\beta^{[1]}$.
Let $\gamma\in V\setminus\{\alpha,\beta\}$. By \cite[Lemma 2]{Neumann}, there are $\beta=\beta_0, \beta_1, \ldots,\beta_t=\gamma$ such that $(\beta_{i-1},\beta_{i})\in \Delta$ for $1\le i\le t$. By the argument in the previous paragraph and induction on $i$, we conclude that $P\le N_\gamma^{[1]}$. It follows that $P\le N_{(V)}=1$, that is, $p\not\in \pi(N_\alpha)$. Then we have $\pi(N_\alpha)=\pi(N_\alpha^{\Delta(\alpha)})$.
Let $q$ be a prime with $q>|\Delta(\alpha)|$, and let $Q$ be a Sylow $q$-subgroup of $N_{\alpha}$. Then
$Q$ acts trivially on $\Delta(\alpha)$.
Since $|\Delta(\beta)|=|\Delta(\alpha)|<q$, we conclude that $Q$ acts trivially on $\Delta(\beta)$.
For $\gamma\in V\setminus\{\alpha,\beta\}$,
choose $\beta=\beta_0, \beta_1, \ldots,\beta_t=\gamma$ such that $(\beta_{i-1},\beta_{i})\in \Delta$ for $1\le i\le t$. Noting that $|\Delta(\beta_i)|=|\Delta(\alpha)|$ for each $i$, by induction on $i$ we have $Q\le N_\gamma$. Thus $Q\le N_{(V)}=1$, and so $q\not\in \pi(N_{\alpha})$.
Then $|\Delta(\alpha)|\ge \max\pi(N_{\alpha})$.
Finally, by \cite[Lemma 1.1]{CLP2000}, if $\Delta=\Delta^*$ then $|\Delta(\alpha)|>\max\pi(G_{\alpha\beta})\ge \max\pi(N_{\alpha\beta})$. Thus the lemma follows.
\qed
\begin{lemma}\label{normal-subg-2}
Assume that $G$ is transitive on $V$ and $\Delta$ is a $G$-orbit. Suppose that $\Delta$ is connected.
Then
$\pi(N_{\alpha\beta}^{\Delta(\beta)})=\pi(N_{\alpha\beta})$, where $\beta\in \Delta(\alpha)$.
\end{lemma}
\par\noindent{\it Proof.\ \ }
Let $p$ be a prime with $p\not\in \pi(N_{\alpha\beta}^{\Delta(\beta)})$. Then $P\le N_\beta^{[1]}$, where $P$ is an arbitrary Sylow $p$-subgroup of $N_{\alpha\beta}$. In particular, $P\le N_{\beta\delta}$ for $\delta\in \Delta(\beta)$. Choose $g\in G$ with $(\alpha,\beta)^g=(\beta,\delta)$. Then $N_{\beta\delta}=N_{\alpha\beta}^g$, $\Delta(\delta)=\Delta(\beta)^g$ and $N_{\delta}^{[1]}= (N_{\beta}^{[1]})^g$, which yields that $P$ is a Sylow $p$-subgroup of $N_{\beta\delta}$, and $P\le N_{\delta}^{[1]}$. For $\gamma\in V\setminus\{\alpha,\beta\}$,
choose $\beta=\beta_0, \beta_1, \ldots,\beta_t=\gamma$ such that $(\beta_{i-1},\beta_{i})\in \Delta$ for $1\le i\le t$. By induction on $i$, we may prove $P\le N_{\gamma}^{[1]}$.
This implies that $P=1$, so $p\not \in \pi(N_{\alpha\beta})$. Then the lemma follows.
\qed
Suppose that $N$ is intransitive on $V$. Let ${\mathcal B}$ be the set of $N$-orbits on $V$.
Define a digraph $\Delta_N$ on ${\mathcal B}$ such that
$(B,C)$ is an arc if and only if $B\ne C$ and $(\delta,\gamma)\in \Delta$ for some
$\delta\in B$ and some $\gamma\in C$.
For the case where $\Delta$ is the arc set of ${\it\Gamma}$, we also write $\Delta_N$ as ${\it\Gamma}_N$. Let $B\in {\mathcal B}$ and $\alpha\in B$. Then
\begin{equation}\label{eq-3}
\Delta(\alpha)=(B\cap \Delta(\alpha))\cup \Big(\bigcup_{C\in \Delta_N(B)}(C\cap \Delta(\alpha))\Big),\quad |\Delta(\alpha)|\ge |B\cap \Delta(\alpha)|+|\Delta_N(B)|.
\end{equation}
The following lemma is easily shown.
\begin{lemma}\label{quotient}
Assume that $\Delta$ is connected and is a $G$-orbit. If $N$ is intransitive on $V$, then $|\Delta(\alpha)|=|\Delta_N(B)||C\cap \Delta(\alpha)|$ for each given $C\in \Delta_N(B)$; in this case, $G$ induces a group acting transitively on the arcs of $\Delta_N$.
\end{lemma}
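The equality in Lemma \ref{quotient} can be observed on a toy case: take ${\it\Gamma}$ the $9$-cycle and $N$ the group of rotations by multiples of $3$, so that $N$ has three orbits of size $3$. The following Python sketch (illustrative only; the graph and group are our own choice, not from the paper) verifies $|\Delta(\alpha)|=|\Delta_N(B)|\,|C\cap\Delta(\alpha)|$:

```python
# Gamma = 9-cycle on Z_9; N = <rotation by 3> has three orbits of size 3.
n = 9
neighbors = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
orbit = lambda v: frozenset((v + 3 * k) % n for k in range(3))
orbits = {orbit(v) for v in range(n)}
assert len(orbits) == 3                      # N is intransitive with 3 orbits

alpha = 0
B = orbit(alpha)
# Out-neighborhood of B in the quotient digraph Delta_N.
quotient_nbrs = {orbit(b) for b in neighbors[alpha]} - {B}

# Lemma "quotient": |Delta(alpha)| = |Delta_N(B)| * |C ∩ Delta(alpha)|.
for C in quotient_nbrs:
    assert len(neighbors[alpha]) == len(quotient_nbrs) * len(C & neighbors[alpha])
print(len(neighbors[alpha]), len(quotient_nbrs))  # 2 2
```

Here the quotient ${\it\Gamma}_N$ is a triangle, each vertex of ${\it\Gamma}$ has exactly one neighbour in each adjacent $N$-orbit, and $2=2\cdot 1$ as the lemma predicts.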
\begin{lemma}\label{N-semiregular}
Assume that $G$ is transitive on $V$, $N$ is intransitive on $V$ and $\Delta$ is connected. If $|\Delta(\alpha)|=|\Delta_N(B)|$ for some $N$-orbit $B$ and some $\alpha\in B$,
then $N$ is semiregular on $V$, that is, $N_\alpha=1$.
\end{lemma}
\par\noindent{\it Proof.\ \ }
Assume that $|\Delta(\alpha)|=|\Delta_N(B)|$. Then $\Delta(\alpha)\cap B=\emptyset$, and
$|\Delta(\alpha)\cap C|=1$ for every $C\in \Delta_N(B)$. It follows that $N_\alpha$ fixes $\Delta(\alpha)$ point-wise, so $N_\alpha^{\Delta(\alpha)}=1$. Then $N_\alpha=1$ by Lemma \ref{normal-subg-1}.
\qed
\vskip 5pt
Suppose that ${\it\Gamma}=(V,E)$ is connected and $G$-arc-transitive. For an edge $\{\alpha,\beta\}\in E$, take $x\in G$ with $(\alpha,\beta)^x=(\beta,\alpha)$. Then $x\in \N_G(G_{\alpha\beta})$ and, since ${\it\Gamma}$ is connected, $G=\langle x,G_\alpha\rangle$. Further, such an $x$ may be chosen as a $2$-element in $\N_G(G_{\alpha\beta})$. Thus the following lemma holds.
\begin{lemma}\label{coset}
Suppose that ${\it\Gamma}=(V,E)$ is connected and $G$-arc-transitive. Then, for $\{\alpha,\beta\}\in E$, there is a $2$-element $x$ in $\N_G(G_{\alpha\beta})$ with $x^2\in G_{\alpha\beta}$ and $G=\langle x,G_\alpha\rangle$.
\end{lemma}
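Lemma \ref{coset} can likewise be checked computationally on the smallest interesting case: the $5$-cycle with $G$ its full (dihedral) automorphism group of order $10$. The Python sketch below (an illustration under our own encoding of $G$, not part of the paper) finds a $2$-element $x$ reversing the arc $(0,1)$ with $x^2\in G_{01}$ and $G=\langle x,G_0\rangle$:

```python
# The 5-cycle on Z_5; Aut = D_10, encoded as the maps v -> a*v + b with a = ±1.
G = [(a, b) for a in (1, 4) for b in range(5)]          # a = 4 represents -1 mod 5
act = lambda g, v: (g[0] * v + g[1]) % 5
mul = lambda g, h: ((g[0] * h[0]) % 5, (g[0] * h[1] + g[1]) % 5)  # g after h

def generated(gens):
    """Closure of gens under composition: the subgroup they generate."""
    H = {(1, 0)} | set(gens)
    while True:
        new = H | {mul(g, h) for g in H for h in H}
        if new == H:
            return H
        H = new

G_alpha = [g for g in G if act(g, 0) == 0]               # stabilizer of vertex 0
# A 2-element x reversing the arc (0,1): here x is v -> 1 - v, an involution.
x = next(g for g in G if act(g, 0) == 1 and act(g, 1) == 0)
assert mul(x, x) == (1, 0)                               # x^2 = 1 lies in G_{01}
assert generated(G_alpha + [x]) == set(G)                # G = <x, G_alpha>
```

The reflection $x\colon v\mapsto 1-v$ normalizes $G_{01}$ and, together with $G_0$, generates all of $G$, exactly as the lemma asserts.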
\vskip 5pt
Suppose that $G$ has a regular subgroup $R$. Then, fixing a vertex $\alpha$, we have a bijection
\[\theta: V\rightarrow R,\, \alpha^x\mapsto x.\]
Set $S=\{x\in R\mid \alpha^x\in \Delta(\alpha)\}$. Then $\theta$ gives an isomorphism from $\Delta$ to the Cayley digraph ${\rm Cay}(R,S)$, which is defined on $R$ such that
$(x,y)$ is an arc if and only if $yx^{-1}\in S$. Under the isomorphism $\theta$,
the group $\N_{{\sf Aut}\Delta}(R)$ corresponds to the normalizer $\hat{R}{:}{\sf Aut}(R,S)$ of $\hat{R}$ in ${\sf Aut}{\rm Cay}(R,S)$, where
$\hat{R}$ is the image of the action of $R$ by right multiplication on $R$, and
\[{\sf Aut}(R,S)=\{\sigma\in {\sf Aut}(R)\mid S^\sigma=S\}.\]
Clearly, $S$ does not contain the identity element of $R$, and $\Delta$ (or ${\rm Cay}(R,S)$) is a graph if and only if $S=S^{-1}:=\{x^{-1}\mid x\in S\}$.
The following lemma collects several well-known facts on Cayley digraphs.
\begin{lemma}\label{cay}
\begin{enumerate}
\item[(1)] $\Delta$ is isomorphic to a Cayley digraph
${\rm Cay}(R,S)$ if and only if ${\sf Aut}\Delta$ has a regular subgroup isomorphic to $R$.
\item[(2)] A Cayley digraph ${\rm Cay}(R,S)$ is connected if and only if
$\langle S\rangle=R$, and ${\rm Cay}(R,S)$ is a graph if and only if $S=S^{-1}$.
\item [(3)] $\N_{{\sf Aut}{\rm Cay}(R,S)}(\hat{R})=\hat{R}{:}{\sf Aut}(R,S)$, and if
$\langle S\rangle=R$ then ${\sf Aut}(R,S)$ acts faithfully on $S$.
\end{enumerate}
\end{enumerate}
\end{lemma}
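As a concrete instance of Lemma \ref{cay}(2), the following Python sketch (illustrative only; the choice $R=\ZZ_9$ and $S=\{\pm 1,\pm 2\}$ is ours) checks that $S=S^{-1}$ and $\langle S\rangle=R$, so that ${\rm Cay}(R,S)$ is a connected graph of valency $4=2r$ with $r=2$ on an odd number of vertices:

```python
# Cay(R, S) with R = Z_9 (odd order) and S = {1, -1, 2, -2} mod 9;
# arcs are the pairs (x, y) with y - x in S (additive notation).
n, S = 9, {1, 8, 2, 7}

# S = S^{-1}, so Cay(R, S) is an (undirected) graph, of valency |S| = 4.
assert all((-s) % n in S for s in S)

# Connectivity is equivalent to <S> = R: compute the subgroup generated by S.
gen = {0}
while True:
    nxt = gen | {(a + s) % n for a in gen for s in S}
    if nxt == gen:
        break
    gen = nxt
assert gen == set(range(n))   # <S> = Z_9, hence Cay(R, S) is connected
```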
\vskip 30pt
\section{Proof of Theorem \ref{myth}}
In this section, we assume that
${\it\Gamma}=(V,E)$ is a connected graph of valency $2r$ for some prime $r$, and $G\le {\sf Aut}{\it\Gamma}$ such that $G$ is transitive on both $V$ and $E$
but not transitive on the $2$-arc set of ${\it\Gamma}$.
\begin{lemma}\label{p-divisor-arc}
Suppose that ${\it\Gamma}$ is $G$-arc-transitive.
Let $N\ne 1$ be a normal subgroup of $G$. Then
either $N$ is transitive on $V$ or $G$ induces an arc-transitive group on ${\it\Gamma}_N$. Moreover, one of the following holds.
\begin{enumerate}
\item[(1)] $N$ is semiregular on $V$.
\item[(2)] $N_\alpha$ is a nontrivial $2$-group, and either ${\it\Gamma}_N$ has valency $r$ or $N$ has at most $2$ orbits on $V$.
\item[(3)] $r=\max\pi(N_\alpha)$ is odd, and
\begin{enumerate}
\item[(i)] ${\it\Gamma}_N$ is a cycle and $G$ induces the full automorphism group of this cycle; or
\item[(ii)] ${\it\Gamma}$ is bipartite with its two parts being $N$-orbits; or
\item[(iii)] $N$ is transitive on both $V$ and $E$; or
\item[(iv)] $E$ has a partition $E=E_1\cup E_2$ such that
$(E_1,E_2)^g=(E_2,E_1)$ for some $g\in G\setminus N$, and the $(V,E_i)$ are $N$-arc-transitive graphs of valency $r$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\par\noindent{\it Proof.\ \ }
The first part of this lemma follows from Lemma \ref{quotient}. We next show that one of (1)-(3) occurs.
By Lemma \ref{normal-subg-1}, $\pi(N_\alpha)=\pi(N_\alpha^{{\it\Gamma}(\alpha)})$.
If $N_\alpha^{{\it\Gamma}(\alpha)}=1$ then $N_\alpha=1$, and hence $N$ is semiregular, so (1) follows.
In the following, we assume that $N_\alpha^{{\it\Gamma}(\alpha)}\ne 1$.
Since $N_\alpha^{{\it\Gamma}(\alpha)}$ is normal in $G_\alpha^{{\it\Gamma}(\alpha)}$, all $N_\alpha^{{\it\Gamma}(\alpha)}$-orbits on ${\it\Gamma}(\alpha)$ have the same length,
say $l$. Then $l=2,\,r$ or $2r$.
If $l=2$ then $N_\alpha^{{\it\Gamma}(\alpha)}$ is a $2$-group, and so is $N_\alpha$;
if $r=2$ and $l=2r=4$ then, since ${\it\Gamma}$ is not $(G,2)$-arc-transitive, $N_\alpha^{{\it\Gamma}(\alpha)}\le G_\alpha^{{\it\Gamma}(\alpha)}\le {\rm D}_8$, so $N_\alpha$ is a $2$-group. These two cases give (2) by Lemma \ref{quotient}.
Thus we may next assume that $l>2$ and $r$ is odd.
Assume first that $G_\alpha^{{\it\Gamma}(\alpha)}$ is primitive. Then $N_\alpha^{{\it\Gamma}(\alpha)}$
is a transitive normal subgroup of $G_\alpha^{{\it\Gamma}(\alpha)}$. It follows that
$N$ is transitive on $E$ and has at most two orbits on $V$.
Since $G_\alpha^{{\it\Gamma}(\alpha)}$ is not $2$-transitive, by \cite[Theorem 1.51]{Gorenstein}, $r=5$ and $G_\alpha^{{\it\Gamma}(\alpha)}\cong {\rm A}_5$ or $\S_5$.
Then $r=5=\max\pi(N_\alpha)$, and so (ii) or (iii) holds.
Now assume that $G_\alpha^{{\it\Gamma}(\alpha)}$ is imprimitive. Then $G_\alpha^{{\it\Gamma}(\alpha)}\le \S_2\wr\S_r$ or $\S_r\wr\S_2$, yielding
$r=\max\pi(G_\alpha)$. Recalling that $l=2r$ or $r$, we have $r=\max\pi(N_\alpha)$.
If $l=2r$ then $N_\alpha^{{\it\Gamma}(\alpha)}$ is transitive on ${\it\Gamma}(\alpha)$, which yields that
$N$ is transitive on $E$, and so (ii) or (iii) holds.
Let $l=r$. If $N$ has at least three orbits on $V$ then (i) follows from Lemma \ref{quotient}. If $N$ has exactly two orbits on $V$ then ${\it\Gamma}$ is bipartite,
and (ii) occurs.
Suppose that $N$ is transitive on $V$.
Then $N$ has exactly two orbits on the arc set of ${\it\Gamma}$, say $\Delta_1$ and $\Delta_2$, and either $\Delta_2=\Delta_1^*\ne \Delta_1$ or $\Delta_i=\Delta_i^*$ for each $i$.
These two cases yield parts (iii) and (iv), respectively.
\qed
\begin{corollary}\label{odd-order-p-divisor-arc}
Suppose that ${\it\Gamma}$ is $G$-arc-transitive.
Let $N$ be a transitive normal subgroup of $G$.
Then one of the following holds.
\begin{enumerate}
\item[(1)]
Either $N_\alpha$ is a $2$-group, or
$N$ is transitive on $E$;
\item[(2)]
$E$ has a partition $E=E_1\cup E_2$ such that
$(E_1,E_2)^g=(E_2,E_1)$ for some $g\in G\setminus N$, and $(V,E_i)$ are $N$-arc-transitive graphs of valency $r$.
\end{enumerate}
\end{corollary}
\vskip 5pt
\begin{lemma}\label{p-divisor-half}
Suppose that $G$ is intransitive on the arc set of ${\it\Gamma}$.
Let $N\ne 1$ be a normal subgroup of $G$. Then one of the following holds.
\begin{enumerate}
\item[(1)] $N$ is semiregular on $V$.
\item[(2)] $r=\max \pi(N_\alpha)$ for $\alpha\in V$, and either
\begin{enumerate}
\item[(i)] $N$ is transitive on both $V$ and $E$; or
\item[(ii)] $G/K\cong \ZZ_l$, where $l\ge 2$ is the number of $N$-orbits on
$V$, and $K$ is the kernel of $G$ acting on the set of $N$-orbits.
\end{enumerate}
\end{enumerate}
\end{lemma}
\par\noindent{\it Proof.\ \ }
Let $\Delta$ be a $G$-orbit on the arc set of ${\it\Gamma}$. Then $|\Delta(\alpha)|=r$, and so $r=\max \pi(G_\alpha^{\Delta(\alpha)})=\max \pi(G_\alpha)$ by Lemma \ref{normal-subg-1}.
Since $N_\alpha^{\Delta(\alpha)}$ is a normal subgroup of $G_\alpha^{\Delta(\alpha)}$ and $G_\alpha^{\Delta(\alpha)}$ is transitive, all $N_\alpha^{\Delta(\alpha)}$-orbits (on $\Delta(\alpha)$) have the same length. Then either $N_\alpha^{\Delta(\alpha)}=1$, or
$N_\alpha^{\Delta(\alpha)}$ is transitive on $\Delta(\alpha)$. By Lemma \ref{normal-subg-1}, the former case yields
$N_\alpha=1$, and so $N$ is semiregular.
Assume that $N_\alpha^{\Delta(\alpha)}$ is transitive on $\Delta(\alpha)$. Then
$r=\max \pi(N_\alpha)$. If $N$ is transitive on $V$ then
$N$ is transitive on both $V$ and $E$, and so part (i) occurs.
Suppose that $N$ is intransitive on $V$.
By Lemma \ref{quotient}, $\Delta_N$ is a directed cycle, and $G$ induces
an arc-transitive group on this directed cycle. (Note that $\Delta_N$
has length $2$ if $N$ has two orbits on $V$.) Then part (ii) follows.
\qed
\begin{corollary}\label{odd-order-p-divisor-half}
Suppose that $G$ is intransitive on the arc set of ${\it\Gamma}$.
Let $N$ be a transitive normal subgroup of $G$. Then either $N$ is regular on $V$, or $r=\max \pi(N_\alpha)$ and $N$ is transitive on both $V$ and $E$.
\end{corollary}
\vskip 5pt
Recall that every minimal normal subgroup of a quasiprimitive group
is transitive. Then the following result finishes
the proof of Theorem \ref{myth}.
\begin{theorem}\label{odd-order-prim-2r}
Let $N$ be a minimal normal subgroup of $G$. Suppose that $|V|$ is odd and
$N$ is transitive on $V$. Then $N$ is the unique transitive minimal normal subgroup of $G$, and one of the following holds.
\begin{enumerate}
\item[(1)] $N\cong \ZZ_p^k$ for some odd prime $p$ and positive integer $k\le r$, and $G_\alpha$ is isomorphic to a subgroup of $\S_2\wr\S_r$; if further $G$ is intransitive on the arc set of ${\it\Gamma}$, then $k<r$, and $G_\alpha$ is isomorphic to a subgroup of $\S_r$.
\item[(2)]
$N$ is nonabelian simple.
\end{enumerate}
\end{theorem}
\par\noindent{\it Proof.\ \ }
Write $N=T_1\times\cdots\times T_k$, where the $T_i$ are isomorphic simple groups.
Let $M$ be a transitive minimal normal subgroup of $G$. Suppose that $M\cap N=1$. Then $M$ centralizes $N$. By \cite[Theorem 4.2A]{Dixon}, both $M$ and $N$ are regular on $V$, and so $N$ is abelian as $|V|$ is odd. Since $N$ is abelian and regular on $V$, it is self-centralizing in ${\rm Sym}(V)$, and hence $M\le N$, yielding $M=M\cap N=1$, a contradiction. Thus $M\cap N\ne 1$, and so $M=N$. Thus $N$ is the unique transitive minimal normal subgroup of $G$.
\vskip 5pt
{\bf Case 1}. Assume that $N$ is soluble. Then $N\cong\ZZ_p^k$ for a prime $p$ and some integer $k\ge 1$. In particular, $N$ is abelian, and so $N$ is regular on $V$, and $p$ is odd.
By Lemma \ref{cay}, writing ${\it\Gamma}={\rm Cay}(N,S)$, we have that
$G_\alpha\le {\sf Aut}(N,S)$, which acts faithfully on $S$, where $\alpha$ is the vertex of ${\it\Gamma}$ corresponding to the identity of $N$.
Since $p$ is odd and ${\it\Gamma}$ is connected, $N$ is generated by
half of the elements of $S$ (one from each pair $\{x,x^{-1}\}$), and thus $k\le r$.
Note that $\{\{x,x^{-1}\}\mid x\in S\}$ is a $G_\alpha$-invariant partition of $S$.
It follows that $G_\alpha$ is isomorphic to a subgroup of $\S_2\wr\S_r$.
Suppose that ${\it\Gamma}$ is not $G$-arc-transitive. Then $G_\alpha$ has two orbits on
$S$, say $\{x_1,\ldots,x_r\}$ and $\{x_1^{-1},\ldots,x_r^{-1}\}$; see also \cite[Lemma 3.2]{LLiuL}. This yields that $G_\alpha$ is isomorphic to a subgroup of $\S_r$. Consider the element $x=\prod_{i=1}^rx_i$ of $N$. It is easily shown that $G_\alpha$ centralizes $x$. Then $\langle x\rangle$ is a normal subgroup of $G$. Since $x\in N$, by the minimality of $N$, we have either $k=1$ or $x=1$. If $x=1$ then $N=\langle x_1,\ldots, x_{r-1}\rangle$, and thus $k<r$. Thus part (1) follows.
\vskip 5pt
{\bf Case 2}. Assume that the $T_i$ are isomorphic nonabelian simple groups.
Since $|V|=|N:N_\alpha|$ is odd, $N_\alpha$ contains a Sylow $2$-subgroup of $N$, where $\alpha\in V$. Of course, $N_\alpha\ne 1$.
Since $T_i$ is normal in $N$ and $N$ is transitive on $V$,
all $T_i$-orbits on $V$ have equal length, which is a divisor of $|V|$.
Thus $(T_i)_\alpha$ contains a Sylow $2$-subgroup of $T_i$.
Suppose that $k>1$. Then $T_1$ is intransitive on $V$; otherwise, $T_2$ is semiregular or acts trivially on $V$, a contradiction.
Assume that $N$ is transitive on $E$. Since $|V|$ is odd, by Lemmas \ref{p-divisor-arc} and \ref{p-divisor-half}, the quotient ${\it\Gamma}_{T_1}$
is a cycle of odd length. Thus $N/K$ is soluble, where $K$ is the kernel of
the action of $N$ on the set of $T_1$-orbits. Since $N$ has no nontrivial soluble quotient, $N=K$, and so $N$ fixes every $T_1$-orbit, a contradiction. Therefore, by Corollaries \ref{odd-order-p-divisor-arc} and \ref{odd-order-p-divisor-half},
$N_\alpha$ is a $2$-group and ${\it\Gamma}$ is $G$-arc-transitive.
Let $r=2$. Then ${\it\Gamma}$ has valency $4$. Again consider the quotient ${\it\Gamma}_{T_1}$.
By (\ref{eq-3}), ${\it\Gamma}_{T_1}$ has valency at most $4$.
On the other hand, ${\it\Gamma}_{T_1}$ is a regular graph of odd order, so either
${\it\Gamma}_{T_1}$ is a cycle or ${\it\Gamma}_{T_1}$ has valency $4$. Noting that $(T_1)_\alpha\ne 1$, by Lemma \ref{N-semiregular}, ${\it\Gamma}_{T_1}$ is a cycle of odd length. This leads to a contradiction similar to that in the previous paragraph.
Assume that $r$ is odd. Note that $N_\alpha$ is a Sylow $2$-subgroup of $N$ and
$(T_i)_\alpha$ is a Sylow $2$-subgroup of $T_i$; in particular, \[N_\alpha=(T_1)_\alpha\times \cdots\times (T_k)_\alpha.\]
Since ${\it\Gamma}$ is $G$-arc-transitive and $N_\alpha$ is a nontrivial $2$-group, all $N_\alpha$-orbits on ${\it\Gamma}(\alpha)$ have equal length $2$. Take a subgroup $H$ of $G_\alpha$ of order $r$.
Then $N\cap H=1$, and $H$ acts transitively on the set of $N_\alpha$-orbits on ${\it\Gamma}(\alpha)$.
Set $X=N{:}H$. Then ${\it\Gamma}$ is $X$-arc-transitive, and $X_\alpha=N_\alpha{:}H$.
Let $M=\langle T_1^h\mid h\in H\rangle$. Then $M$ is a normal subgroup of $X$, $M\le N$ and $M_\alpha$ is a nontrivial $2$-group. If $M\ne N$ then $M$ is intransitive on $V$ and, by Lemma \ref{p-divisor-arc}, the quotient ${\it\Gamma}_M$ is a cycle, which is impossible. Thus $M=N$. Noting that $H$ has prime order $r$, $H$ acts regularly on $\{T_1,\ldots,T_k\}$ by conjugation; in particular, $k=r$.
For $h\in H$, setting $T_i^h=T_j$, we have \[(T_i)_\alpha^h=(T_i\cap N_\alpha)^h=T_j\cap N_\alpha=(T_j)_\alpha.\] It follows that $H$ acts regularly on $\{(T_1)_\alpha,\ldots,(T_k)_\alpha\}$ by conjugation.
Let $\beta\in {\it\Gamma}(\alpha)$. Then $2r=|X_\alpha:X_{\alpha\beta}|$,
and thus $X_{\alpha\beta}$ is a $2$-group. Since $X_\alpha$ has a unique Sylow $2$-subgroup, $X_{\alpha\beta}$ lies in $N_\alpha$, and so $X_{\alpha\beta}=N_{\alpha\beta}$ and $|N_\alpha:N_{\alpha\beta}|=2$.
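Explicitly, since $X_\alpha=N_\alpha{:}H$ with $|H|=r$ odd and $N_\alpha$ a $2$-group,
\[|X_{\alpha\beta}|={|X_\alpha|\over 2r}={|N_\alpha|\,r\over 2r}={|N_\alpha|\over 2},\]
so $|X_{\alpha\beta}|$ is a power of $2$.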
Since ${\it\Gamma}$ is connected, by Lemma \ref{coset},
there is a $2$-element $x\in \N_X(N_{\alpha\beta})$ with \[\langle x,X_\alpha\rangle=X,\,
x^2\in N_{\alpha\beta}.\]
Note that $|N\langle x\rangle|={|N||\langle x\rangle:(N\cap \langle x \rangle)|}$ and $|X|=|N||H|$.
We know that $|\langle x\rangle:(N\cap \langle x \rangle)|$ is a divisor of $|H|=r$.
It follows that $|\langle x\rangle:(N\cap \langle x \rangle)|=1$, and so $x\in N$.
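Indeed, as $x$ is a $2$-element and $r$ is odd,
\[\bigl|\langle x\rangle:(N\cap \langle x \rangle)\bigr|\ \mbox{divides}\ \gcd(|x|,r)=1.\]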
Write
$x=x_1x_2\cdots x_k$ with $x_i\in T_i$ for all $i$. Then
\begin{equation}\label{eq-4}
X=\langle x,X_\alpha\rangle\le \langle x_i,(T_i)_\alpha, H\mid 1\le i\le k \rangle.
\end{equation}
Consider the projections \[\phi_i: N=T_1\times\cdots\times T_k \rightarrow T_i,\,
t_1t_2\cdots t_k\mapsto t_i.\] Setting $L_i=\phi_i(N_{\alpha\beta})$ for $1\le i\le r$, we have $x_i\in \N_{T_i}(L_i)$, $x_i^2\in L_i$ and $N_{\alpha\beta}\le L_1\times \cdots\times L_k$.
Since $N_{\alpha\beta}\le N_\alpha=(T_1)_\alpha\times \cdots\times (T_k)_\alpha$, we have $L_i\le (T_i)_\alpha$ for $1\le i\le k$.
Recalling that $|N_\alpha:N_{\alpha\beta}|=2$, we have $L_i=(T_i)_\alpha$ for all but at most one $i$.
Without loss of generality, we let $L_i=(T_i)_\alpha$ for $i>1$, and $|(T_1)_\alpha:L_1|\le 2$.
Since $(T_i)_\alpha$ is a Sylow $2$-subgroup of $T_i$ and $x_i$ is a $2$-element, if $i>1$ then $x_i\in L_i=(T_i)_\alpha$, as $(T_i)_\alpha$ is the unique Sylow $2$-subgroup
of $\N_{T_i}(L_i)$. Recalling that $H$ acts regularly on $\{(T_1)_\alpha,\ldots,(T_k)_\alpha\}$ by conjugation, by (\ref{eq-4}), we have
\[X=\langle x,X_\alpha\rangle\le \langle x_1,(T_1)_\alpha, H \rangle.\] Set $H=\{1,h_2,\ldots, h_k\}$. (Note that $k=r$.) Then \[N{:}H=X=(\langle x_1,(T_1)_\alpha\rangle\times \langle x_1,(T_1)_\alpha\rangle^{h_2}\times \cdots\times \langle x_1,(T_1)_\alpha\rangle^{h_k}){:}H.\]
This implies that $T_1=\langle x_1,(T_1)_\alpha\rangle$.
Recall that $|(T_1)_\alpha:L_1|\le 2$. Then $L_1$ is normal in $(T_1)_\alpha$.
Noting that $x_1\in \N_{T_1}(L_1)$, it follows that $L_1$ is normal in $\langle x_1,(T_1)_\alpha\rangle=T_1$, and hence $L_1=1$ as $T_1$ is nonabelian simple and $L_1$ is a $2$-group. Then $|(T_1)_\alpha|\le 2$, which is impossible as $(T_1)_\alpha$ is a Sylow $2$-subgroup of $T_1$. This completes the proof.
\qed
\vskip 5pt
We end this section with a consequence of Theorem \ref{odd-order-prim-2r}.
\begin{corollary}\label{p-power-case}
Assume that $|V|=p^k$ for some odd prime $p$. If $G$ is quasiprimitive on $V$ then
${\it\Gamma}$ is arc-transitive, and one of the following holds.
\begin{enumerate}
\item[(1)] $p=3$, $k$ is an odd prime and ${\it\Gamma}$ is the complete graph ${\sf K}_{3^k}$.
\item[(2)] $G\cong \PSU_4(2)$ or $\PSU_4(2).2$, and ${\it\Gamma}$ has order $27$ and valency $10$.
\item[(3)] ${\rm soc}(G)\cong \ZZ_p^k$ and $k\le r$.
\end{enumerate}
\end{corollary}
\par\noindent{\it Proof.\ \ }
Let $N={\rm soc}(G)$. By Theorem \ref{odd-order-prim-2r}, either $N$ is nonabelian simple, or $N\cong \ZZ_p^k$ with $k\le r$.
Assume that $N$ is nonabelian simple.
Note that $|N:N_\alpha|=p^k$ for $\alpha\in V$. By \cite{Guralnick}, either $N$ is $2$-transitive on $V$, or $N\cong \PSU_4(2)$ and $N_\alpha$ has exactly three orbits on $V$, of lengths $1$, $10$ and $16$ respectively. In both cases ${\it\Gamma}$ is $N$-arc-transitive; the former case yields (1), while the latter gives (2).
Now let $N\cong \ZZ_p^k$. Then we may write ${\it\Gamma}={\rm Cay}(N,S)$, and $G_\alpha$ is faithful on $S$, where $\alpha$ is the vertex corresponding to the identity of $N$. Note that $\{x,x^{-1}\}^h=\{x^h,(x^h)^{-1}\}$ for $x\in S$ and $h\in G_\alpha$. Then $G_\alpha$ induces a transitive action on $\{\{x,x^{-1}\}\mid x\in S\}$. Let $\sigma:N\rightarrow N,\, y\mapsto y^{-1}$. Then $\sigma$ is an automorphism of ${\it\Gamma}$ which fixes $\alpha$. Thus $\langle G_\alpha, \sigma\rangle$ is transitive on $S$, and then
${\it\Gamma}$ is $\langle G,\sigma\rangle$-arc-transitive.
\qed
\vskip 30pt
\section{Proof of Theorem \ref{myth-2}}
Let ${\it\Gamma}=(V,E)$ be a connected graph of odd order and valency $2r$ for some prime $r$. Let $G\le {\sf Aut}{\it\Gamma}$ be such that $G$ is primitive on $V$, transitive on $E$
but not transitive on the $2$-arc set of ${\it\Gamma}$.
Since $G$ is a primitive group of odd degree, $G$ is known by \cite{Primitive}.
In this section, we give some further information about the graph ${\it\Gamma}$ and its automorphism group. If ${\it\Gamma}$ is a complete graph, then $G$ is $2$-homogeneous on $V$, and the following result holds.
\begin{lemma}\label{complete-graph}
Assume that ${\it\Gamma}$ is the complete graph ${\sf K}_{2r+1}$ of valency $2r$. Then
\begin{enumerate}
\item[(1)] $G\le {\rm A\Gamma L}_1(p^d)$, $r={p^d-1\over 2}$, and either $d=1$ or $p=3$ and $d$ is an odd prime; or
\item[(2)] ${\rm ASL}_d(3)\le G\le {\rm AGL}_d(3)$, and $d$ is an odd prime; or
\item[(3)] $G={\rm PSL}_2(11)$, ${\rm A}_7$ or ${\rm PSL}_d(2)$, and
$r=5$, $7$ or $2^{d-1}-1$ respectively.
\end{enumerate}
\end{lemma}
\par\noindent{\it Proof.\ \ }
Note that $G$ is $2$-homogeneous on $V$. Since ${\it\Gamma}$ is not $(G,2)$-arc-transitive, $G$ is not $3$-transitive on $V$.
Suppose that $G$ is an affine primitive group, say
$G\le {\rm AGL}_d(p)$. Since $p^d=|V|=2r+1$, either $d=1$ or
$p=3$ and $d$ is an odd prime. If $G\not\le {\rm A\Gamma L}_1(p^d)$ then, by \cite{Kantor}, $G$ is $2$-transitive, and then $G\ge {\rm ASL}_d(p)$ by checking
\cite[Table 7.3]{Cameron}. Thus
(1) or (2) occurs.
Suppose that $G$ is almost simple. Then $G$ is $2$-transitive but not $3$-transitive. Checking
\cite[Table 7.4]{Cameron}, we get (3).
\qed
\begin{lemma}\label{over-group-2-arc}
Let $G\le X\le {\sf Aut}{\it\Gamma}$ and $\alpha\in V$. Suppose that ${\it\Gamma}$ is $(X,2)$-arc-transitive.
Then one of the following holds.
\begin{enumerate}
\item[(1)] $X_\alpha^{{\it\Gamma}(\alpha)}={\rm A}_{4}$ or $\S_4$, and $r=2$;
\item[(2)] ${\rm soc}(X_\alpha^{{\it\Gamma}(\alpha)})={\rm A}_{2r}$, $r\ge 3$;
\item[(3)] ${\rm soc}(X_\alpha^{{\it\Gamma}(\alpha)})={\rm PSL}(2,q)$, and $q=5$ or $q=2r-1\ge 9$;
\item[(4)] ${\rm soc}(X_\alpha^{{\it\Gamma}(\alpha)})=\M_{22}$, and $r=11$.
\end{enumerate}
\end{lemma}
\par\noindent{\it Proof.\ \ }
Since $X_\alpha^{{\it\Gamma}(\alpha)}$ is a $2$-transitive group of degree $2r$,
the lemma follows by checking the finite $2$-transitive groups;
refer to \cite[Tables 7.3 and 7.4]{Cameron}.
\qed
\begin{lemma}\label{over-group-1-arc}
Let $G\le X\le {\sf Aut}{\it\Gamma}$ and $\alpha\in V$. Suppose that $X_\alpha$ is insoluble and ${\it\Gamma}$ is not $(X,2)$-arc-transitive.
Then $X_\alpha$ has a composition factor isomorphic to one of the following simple groups.
\begin{enumerate}
\item[(1)] ${\rm PSL}(2,11)$, and $r=11$;
\item[(2)] $\M_{11}$ or $\M_{23}$, and $r=11$ or $23$, respectively;
\item[(3)] ${\rm PSL}(d,q)$, and $r={q^d-1\over q-1}$, where $q$ is a prime power and $d$ is a prime;
\item[(4)] ${\rm A}_r$ for $r\ge 5$.
\end{enumerate}
\end{lemma}
\par\noindent{\it Proof.\ \ }
Let $\Delta(\alpha)$ be an $X_\alpha$-orbit on ${\it\Gamma}(\alpha)$. Then $|\Delta(\alpha)|\in \{r,2r\}$, and $X_\alpha^{\Delta(\alpha)}$ is insoluble; refer to \cite[Theorem 3.2C]{Dixon}. If $|\Delta(\alpha)|=r$ then $X_\alpha^{\Delta(\alpha)}$ is a $2$-transitive group of prime degree, and the lemma follows from checking the finite $2$-transitive groups.
Assume that $\Delta(\alpha)={\it\Gamma}(\alpha)$. If $X_\alpha^{{\it\Gamma}(\alpha)}$ is primitive then, by \cite[Theorem 1.51]{Gorenstein}, $r=5$ and $X_\alpha^{{\it\Gamma}(\alpha)}={\rm A}_5$ or $\S_5$,
and thus the lemma follows.
Suppose next that $X_\alpha^{{\it\Gamma}(\alpha)}$ is imprimitive. Then $X_\alpha^{{\it\Gamma}(\alpha)}\le \S_2\wr\S_r$ or $\S_r\wr\S_2$.
Let $X_\alpha^{{\it\Gamma}(\alpha)}\le \S_2\wr\S_r$, and let $O$ be the largest normal $2$-subgroup of the wreath product $\S_2\wr\S_r$. Then $\S_r\cong \S_2\wr\S_r/O\ge X_\alpha^{{\it\Gamma}(\alpha)}O/O\cong X_\alpha^{{\it\Gamma}(\alpha)}/(X_\alpha^{{\it\Gamma}(\alpha)}\cap O)$. Since $X_\alpha^{{\it\Gamma}(\alpha)}$ is insoluble, $X_\alpha^{{\it\Gamma}(\alpha)}/(X_\alpha^{{\it\Gamma}(\alpha)}\cap O)$ is isomorphic to a $2$-transitive subgroup of $\S_r$, and then the lemma follows.
Let $X_\alpha^{{\it\Gamma}(\alpha)}\le \S_r\wr\S_2$. Then $X_\alpha^{{\it\Gamma}(\alpha)}\cap \S_r^2$ has
two orbits on ${\it\Gamma}(\alpha)$. Considering the action of $X_\alpha^{{\it\Gamma}(\alpha)}\cap \S_r^2$ on one of these orbits, we get the lemma.
\qed
\begin{theorem}\label{over-group}
Let $G\le X\le {\sf Aut}{\it\Gamma}$. Then either ${\rm soc}(G)={\rm soc}(X)$, or ${\it\Gamma} \cong {\sf K}_{2r+1}$.
\end{theorem}
\par\noindent{\it Proof.\ \ }
Assume that ${\it\Gamma}$ is not a complete graph. Then $X$ is not $2$-homogeneous on $V$.
Suppose, for a contradiction, that ${\rm soc}(G)\ne {\rm soc}(X)$.
By Theorem \ref{myth} and \cite{Li-odd}, $X$ is either affine or almost simple if
$X$ is not transitive on the $2$-arc set of ${\it\Gamma}$, and $X$ is almost simple if ${\it\Gamma}$ is $(X,2)$-arc-transitive.
Assume that either $G$ or $X$ is an affine primitive group.
By \cite[Propositions 5.1 and 5.2]{Praeger-inclusion}, we have $G\le {\rm AGL}_3(3)$ and ${\rm soc}(X)=\PSU_4(2)$.
Then $X_\alpha\cong \ZZ_2^4{:}{\rm A}_5$ or $\ZZ_2^4{:}\S_5$ for $\alpha\in V$, and so
$r=5$ by Lemma \ref{normal-subg-1}; however, $5$ is not a divisor of $|G|$, a contradiction.
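Indeed,
\[|{\rm AGL}_3(3)|=3^3\,|{\rm GL}_3(3)|=3^3(3^3-1)(3^3-3)(3^3-3^2)=2^5\cdot 3^6\cdot 13,\]
which is not divisible by $5$.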
Assume next that both $G$ and $X$ are almost simple. Let $\alpha\in V$. Recall that $|V|$ is odd and $X$ is not $2$-homogeneous on $V$. By
\cite[Proposition 6.1]{Praeger-inclusion} and \cite{LPS}, all possible triples
$({\rm soc}(G),{\rm soc}(X), |V|)$ are listed in Table \ref{over-almost}.
\begin{table}
\[\begin{array}{|c|c|c|c|c|} \hline
\mbox{Line}&{\rm soc}(G) &{\rm soc}(X) & |V| & \mbox{$X$-action, remark} \\ \hline
1&\M_{11}&{\rm A}_{11}& 55,165& \\ \hline
2&\M_{12}&{\rm A}_{12}& 495& \\ \hline
3&\M_{22}&{\rm A}_{22}& 231& \\ \hline
4&\M_{23}&{\rm A}_{23}& 253,1771& \\ \hline
5&{\rm PSL}_2(q)&{\rm A}_{q+1}& {1\over 2}q(q+1)& 2\mbox{-sets}, q \equiv 1\, ({\sf mod~} 4)\\ \hline
6&{\rm A}_{2l-1}&{\rm A}_{2l}& {1\over 2}\binom{2l}{l}& l,l\mbox{-partitions}, l\ge 3\\ \hline
7&{\rm PSL}_2(11)&\M_{11} &55 & \\ \hline
8&\M_{23}&\M_{24} &1771 & \\ \hline
9&{\rm PSL}_m(3)&{\rm P\Omega}^+_{2m}(3)&{1\over 2}3^{m-1}(3^m-1)&\mbox{nonsingular points}, m \mbox{ odd} \\ \hline
10&\G_2(q)&\Omega_7(q)&{1\over 2}q^3(q^3\pm 1)& \mbox{nonsingular hyperplanes}\\
&&&&q \equiv \pm 1\, ({\sf mod~} 4)\\ \hline
11&{\rm A}_{12}& {\rm P\Omega}^-_{10}(2)& 495& \\ \hline
12&{\rm J}_3&\PSU_9(2)& 43605& \\ \hline
13&\Omega_7(3)& {\rm P\Omega}^+_8(3)& 28431& \\ \hline
14&\G_2(3)& \Omega_7(3)&3159& \\ \hline
\end{array}\]
{\caption{}\label{over-almost}}
\end{table}
For Line 10 of Table \ref{over-almost}, by \cite[Table 8.39]{Low-dim}, $X_\alpha$ has a unique insoluble composition factor, say ${\rm P\Omega}^\pm_6(q)$, which contradicts Lemmas \ref{over-group-2-arc} and \ref{over-group-1-arc}.
For Line 13 or 14 of Table \ref{over-almost}, by the Atlas \cite{Atlas}, $X_\alpha$ is an almost simple group with socle ${\rm P\Omega}^+_8(2)$ or ${\rm Sp}_6(2)$ respectively, and we get a similar contradiction.
Note that $r\in \pi(G_\alpha)\cap \pi(X_\alpha)$ and, by Lemma \ref{normal-subg-1},
$\max(X_\alpha)<2r$. This allows us to exclude Lines 1--4, 11 and 12 of Table \ref{over-almost}. For example, if Line 12 of Table \ref{over-almost} occurs then $G_\alpha$ is a $\{2,3\}$-group by the Atlas \cite{Atlas}, so $r\le 3$, yielding that
$|X_\alpha|$ has no prime divisor other than $2$, $3$ and $5$, which is impossible as
$|X_\alpha|$ is divisible by $11$.
Note that ${\it\Gamma}(\alpha)$ is either an $X_\alpha$-orbit of length $2r$ or
the union of two $X_\alpha$-orbits of length $r$. In particular, $X_\alpha$ has a subgroup of index a prime or twice a prime.
For Lines 7 and 8 of Table \ref{over-almost}, the lengths of the $X_\alpha$-orbits on $V$ are known; refer to the web edition of \cite{Atlas}.
If Line 7 of Table \ref{over-almost} occurs, then $X$ is a primitive group (on $V$) of rank $3$, and $X_\alpha$ has three orbits on $V$, of lengths $1$, $18$ and $36$ respectively, which gives a contradiction. For Line 8 of Table \ref{over-almost}, $X_\alpha$ has four orbits on $V$, of lengths $1$, $90$, $240$ and $1440$ respectively, and we get a similar contradiction.
For Line 9 of Table \ref{over-almost}, by \cite{HN}, $X_\alpha$ has three orbits on $V$, of lengths $1$, ${1\over 2}3^{m-1}(3^{m-1}-1)$ and $3^{2m-2}-1$ respectively; however, none of these three numbers has the form $2r$ or $r$.
Suppose that ${\rm soc}(G)$ and ${\rm soc}(X)$ are given as in Line 6 of Table \ref{over-almost}. Then the action of $G$ on $V$ is equivalent to its action on the set
of $(l-1)$-subsets of $\{1,2,\ldots ,2l-1\}$. Thus $G_\alpha$ has $l$ orbits on $V$,
of lengths $\binom{l-1}{i}\binom{l}{l-1-i}$, $0\le i\le l-1$, respectively.
If $r=\binom{l-1}{i}\binom{l}{l-1-i}$ for some $i$, then $i=0$ and $l=r$.
If $2r=\binom{l-1}{i}\binom{l}{l-1-i}$ for some $i$, then either $i=0$ and $l=2r$, or $i=1$ and $l=3$. In each of these three cases, it is easily shown that
${1\over 2}\binom{2l}{l}$ is even, which is not the case as $|V|$ is odd.
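One way to verify the parity claim: by Kummer's theorem, the $2$-adic valuation of $\binom{2l}{l}$ equals the number of ones in the binary expansion of $l$, so
\[{1\over 2}\binom{2l}{l}\ \mbox{is odd if and only if $l$ is a power of $2$,}\]
which fails for $l=r$ and $l=2r$ with $r$ an odd prime, and for $l=3$. For instance, $l=3$ gives ${1\over 2}\binom{6}{3}=10$.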
Finally, let ${\rm soc}(G)$ and ${\rm soc}(X)$ be as in Line 5 of Table \ref{over-almost}.
Then $X_\alpha$ has three orbits on $V$, of lengths $1$, $2(q-1)$ and ${1\over 2}(q-1)(q-2)$, respectively. The only possibility is that $q=5$, ${\rm soc}(G)={\rm PSL}_2(5)$ and $2r=6$; however,
in this case, $G_\alpha$ is a $2$-group which is not maximal in $G$, a contradiction.
This completes the proof.
\qed
\begin{remark}
For Line 8 of Table \ref{over-almost}, one may construct a graph of valency $90$ and order $1771$ which is both $\M_{23}$-arc-transitive and $\M_{24}$-arc-transitive. Thus Theorem \ref{over-group} does not hold without the assumption that ${\it\Gamma}$ has twice prime valency.
\end{remark}
By Theorem \ref{over-group}, we have the following consequence which finishes the proof of Theorem \ref{myth-2}.
\begin{corollary}\label{cor-1}
${\it\Gamma}$ is $2$-arc-transitive if and only if ${\it\Gamma}$ is a complete graph.
\end{corollary}
\par\noindent{\it Proof.\ \ }
Note that a complete graph must be $2$-arc-transitive. Thus it suffices to show that ${\it\Gamma}$ is not $2$-arc-transitive if ${\it\Gamma}\not\cong{\sf K}_{2r+1}$.
Suppose that ${\it\Gamma}\not\cong{\sf K}_{2r+1}$ and ${\it\Gamma}$ is $2$-arc-transitive. Let $N={\rm soc}(G)$ and $X={\sf Aut}{\it\Gamma}$. Then $N={\rm soc}(X)$ by Theorem \ref{over-group}.
Noting that $|V|$ is odd, by Theorem \ref{odd-order-prim-2r} and \cite{Ivanov-Praeger-93}, $N$ is a nonabelian simple group.
Let $\alpha\in V$. Since $|V|$ is odd, $N_\alpha\ne 1$, and so $N_\alpha^{{\it\Gamma}(\alpha)}\ne 1$.
Recalling that ${\it\Gamma}$ is not $(G,2)$-arc-transitive, $G_\alpha^{{\it\Gamma}(\alpha)}$ is not $2$-transitive, and hence neither is $N_\alpha^{{\it\Gamma}(\alpha)}$. Noting that $N_\alpha^{{\it\Gamma}(\alpha)}$
is a proper normal subgroup of $X_\alpha^{{\it\Gamma}(\alpha)}$, by Lemma \ref{over-group-2-arc}, we have $r=2$ and $N_\alpha^{{\it\Gamma}(\alpha)}\cong \ZZ_2^2$.
In particular, ${\it\Gamma}$ is a primitive $2$-arc-transitive graph of valency $4$.
It follows from \cite[Theorem 1.5]{LLM} that $|N_\alpha|$ is divisible by $3$.
Then, by Lemma \ref{normal-subg-1}, $N_\alpha^{{\it\Gamma}(\alpha)}$ is not a $2$-group, a contradiction.
This completes the proof.
\qed
We end this section with another consequence of Theorem \ref{over-group}.
\begin{corollary}\label{cor-2}
Assume that $\Gamma \not\cong {\sf K}_{2r+1}$. Then
either $\Gamma$ is $G$-arc-transitive, or
$r\ge 3$ and ${\sf Aut}\Gamma$ has a subgroup of index at most $2$ which is transitive on the edge set but not transitive on the arc set of $\Gamma$.
\end{corollary}
\par\noindent{\it Proof.\ \ }
Since
$G$ is primitive on $V$ and $\Gamma$ is not a cycle, for $\alpha\in V$,
each $G_\alpha$-orbit on $V$ has length at least $3$.
Thus, if $r=2$ then $\Gamma$ is $G$-arc-transitive.
Suppose that $\Gamma$ is not $G$-arc-transitive.
Then $r\ge 3$, and $G$ has two orbits on the arc set of $\Gamma$, say $\Delta$ and $\Delta^*=\{(\beta,\alpha)\mid (\alpha,\beta)\in \Delta\}$. To complete the proof of this corollary, we assume further that $\Gamma$ is arc-transitive. Let $X={\sf Aut}\Gamma$.
By Theorem \ref{over-group}, $N:={\rm soc}(G)={\rm soc}(X)$.
Noting that $N$ fixes both $\Delta$ and $\Delta^*$ set-wise,
$X$ induces a transitive action on $\{\Delta,\Delta^*\}$. Then the result follows.
\qed
\vskip 30pt
\section{Examples and a proof of Theorem \ref{myth-3}}
We first list several examples of graphs which may support some results in Section 4.
\vskip 5pt
Let $G$ be a finite group and $H$ be a subgroup of $G$ with $\cap_{g\in G}H^g=1$.
Then $G$ acts (faithfully) on the set $[G:H]$ of right cosets of $H$ in $G$ by \[y: Hx\mapsto Hxy, \forall x,y\in G.\] Take a $2$-element $x\in G\setminus H$ with $x^2\in H$.
Then $x$ normalizes $K:=H\cap H^x$. Define a graph ${\sf Cos}(G,H,x)=(V,E)$ with
\[V=[G:H],\, E=\{\{Hg_1,Hg_2\}\mid g_1,g_2\in G,\,g_2g_1^{-1}\in HxH\}.\]
Then ${\sf Cos}(G,H,x)$ is a $G$-arc-transitive graph of order $|G:H|$ and valency $|H:K|$. Moreover, it is well known that the graph ${\sf Cos}(G,H,x)$ is connected if and only if $\langle x, H\rangle=G$.
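As a toy illustration of this construction (not one of the examples below), take $G=\S_4$, let $H=\S_3$ be the stabilizer of the point $4$, and let $x=(1\,4)$. Then $x^2=1\in H$, $K=H\cap H^x$ is the pointwise stabilizer of $\{1,4\}$, of order $2$, and $\langle x,H\rangle=G$. Hence ${\sf Cos}(G,H,x)$ is a connected arc-transitive graph with $|G:H|=4$ vertices and valency $|H:K|=3$; that is, ${\sf Cos}(\S_4,\S_3,(1\,4))\cong{\sf K}_4$.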
The first example gives three graphs which meet Corollary \ref{odd-order-p-divisor-arc}.
\begin{example}\label{exam-1}
{\rm
(1) Let $T={\rm PSL}(2,27)$ and $G=T{:}\langle \sigma\rangle$, where $\sigma$ is induced by the Frobenius automorphism of the field of order $27$. Then, by the Atlas \cite{Atlas}, $G$ has a subgroup $H$ with $H\cong {\rm A}_4$ and $H\cap T\cong \ZZ_2^2$.
Let $o\in H\cap T$ be an involution. Then ${\bf C}_T(o)\cong {\rm D}_{28}$, and ${\bf C}_T(o)\cap H=H\cap T\cong \ZZ_2^2$. As confirmed by GAP \cite{GAP}, there is an involution $x\in {\bf C}_T(o)\setminus H$ such that $|H\cap H^x|=2$ and $\langle x,H\rangle=G$. Then $\Gamma={\sf Cos}(G,H,x)$ is a $G$-arc-transitive graph of valency $6$ and order $2457$.
Clearly, $T$ is transitive on the vertex set but not transitive on the edge set of $\Gamma$. In fact, $\Gamma$ is the edge-disjoint union of three
$T$-arc-transitive graphs of valency $2$, and each of them is the vertex-disjoint union of $351$ copies of the $7$-cycle.
\vskip 5pt
(2) Let $G={\rm PGL}(2,11)$, $T={\rm PSL}(2,11)$, $H\cong {\rm D}_{24}$ and $H_1=T\cap H$.
Let $K\le H_1$ with $K\cong \ZZ_2^2$. Then $\N_G(K)\cong \S_4$, $\N_H(K)\cong {\rm D}_8$ and $\N_T(K)\cong {\rm A}_4$.
Take an involution $x\in \N_G(K)\setminus H$.
Then $\langle x, H\rangle=G$ and $H\cap H^x=K$.
Thus ${\sf Cos}(G,H,x)$ is a connected $G$-arc-transitive graph of valency $6$ and order $55$. It is easy to see that $T$ acts transitively on the edges but not on the arcs of ${\sf Cos}(G,H,x)$.
\vskip 5pt
(3) Let $G={\rm PGL}(2,17)$, $T={\rm PSL}(2,17)$ and $H=\langle c\rangle \times H_1$, where $T\ge H_1\cong \S_3$ and
$c\in G\setminus T$ is an involution. Let $o \in H_1$ be an involution.
Then ${\bf C}_T(o)\cong {\rm D}_{16}$.
Take an involution $x\in {\bf C}_T(o)\setminus H$. Then
$\langle x, H\rangle=G$, $\langle x, H_1\rangle\cong \S_4$, $|H:(H\cap H^x)|=6$ and $|H_1:(H_1\cap H_1^x)|=6$, as confirmed by GAP \cite{GAP}. It follows that
${\sf Cos}(G,H,x)$ is a connected $G$-arc-transitive graph of valency $6$ and order $408$.
Moreover, ${\sf Cos}(G,H,x)$ is the edge-disjoint union of two
$T$-arc-transitive graphs of valency $3$, and each of them is the vertex-disjoint union of $102$ copies of the complete graph ${\sf K}_4$.
}\qed
\end{example}
\begin{example}\label{exam-2}
{\rm (1) Let $G={\rm PGL}(2,7)$, $T={\rm PSL}(2,7)$ and ${\rm D}_{16}\cong H\le G$.
Let $\ZZ_2^2\cong K\le H\cap T$. Then $N_H(K)\cong {\rm D}_8$ and $N_G(K)=N_T(K)\cong \S_4$.
Take an involution $x\in N_T(K)\setminus H$. Then $\Gamma={\sf Cos}(G,H,x)$ is a connected
$G$-arc-transitive graph of valency $4$ and order $21$. The graph $\Gamma$ is in fact the edge-disjoint union of two
$T$-arc-transitive graphs of valency $2$, and each of them is the vertex-disjoint union of $7$ copies of the $3$-cycle.
\vskip 5pt
(2) Let $G={\rm PGL}(2,9)$, $T={\rm PSL}(2,9)$ and ${\rm D}_{16}\cong H\le G$.
Take $\ZZ_2^2\cong K\le H\cap T$. Then $N_H(K)\cong {\rm D}_8$ and $N_G(K)=N_T(K)\cong \S_4$. For an involution $x\in N_T(K)\setminus H$, we have a $G$-arc-transitive graph $\Gamma={\sf Cos}(G,H,x)$
of valency $4$ and order $45$. Further, the graph $\Gamma$ is the edge-disjoint union of two
$T$-arc-transitive graphs of valency $2$, and each of them is the vertex-disjoint union of $15$ copies of the $3$-cycle.
}\qed
\end{example}
\vskip 5pt
Note that the graphs in Example \ref{exam-2} are all vertex-primitive. By \cite{LLM}, up to isomorphism, there are exactly two vertex-primitive arc-transitive graphs of valency $4$ which are not $2$-arc-transitive, and then the following result holds.
\begin{lemma}\label{prim-4}
Let $\Gamma=(V,E)$ be a connected graph of valency $4$, and let $G\le {\sf Aut}\Gamma$.
Suppose that $G$ is almost simple and transitive on $E$.
If $G$ is primitive on $V$ then either $\Gamma$ is ${\rm soc}(G)$-arc-transitive, or one of the following holds:
\begin{enumerate}
\item[(1)] ${\sf Aut}\Gamma=G={\rm PGL}(2,7)$ and $\Gamma$ is given in (1) of Example \ref{exam-2};
\item[(2)] $G={\rm PGL}(2,9)$, $\M_{10}$ or ${\rm P\Gamma L}(2,9)$, and $\Gamma$ is given in (2) of Example \ref{exam-2}.
\end{enumerate}
\end{lemma}
\vskip 5pt
\noindent {\bf Proof of Theorem \ref{myth-3}}.
Let $\Gamma=(V,E)$ be a connected graph of odd order and twice prime valency $2r$, and let $G\le {\sf Aut}\Gamma$ be such that $\Gamma$ is $G$-edge-transitive but not $(G,2)$-arc-transitive. Assume that $G$ is almost simple and primitive on $V$.
Let $N={\rm soc}(G)$. Note that $N$ is transitive but not regular on $V$.
If $\Gamma$ is not $G$-arc-transitive then, since $G$ is primitive,
$r$ is odd, and $\Gamma$ is $N$-edge-transitive by Corollary \ref{odd-order-p-divisor-half}. Thus we assume that $\Gamma$ is $G$-arc-transitive.
Then, by
Corollary \ref{odd-order-p-divisor-arc}, either $\Gamma$ is $N$-edge-transitive
or $N_\alpha$ is a Sylow $2$-subgroup of $N$, where $\alpha\in V$.
Assume that $N_\alpha$ is a Sylow $2$-subgroup of $N$.
Since $G$ is a primitive group of odd degree, by \cite{Primitive},
we conclude that $N\cong {\rm PSL}(2,q)$ and $N_\alpha$ is a dihedral group.
It follows from \cite[Table 8.1]{Low-dim} that
$N_\alpha\cong {\rm D}_{q-1}$ or ${\rm D}_{q+1}$.
Suppose that $|N_\alpha|>8$. Consider the Frattini subgroup $\Phi$ of $N_\alpha$.
Clearly, $\Phi\ne 1$ is a characteristic subgroup of $N_\alpha$, and $\Phi$ is contained in the unique cyclic subgroup of $N_\alpha$ of index $2$. It follows that
$\Phi$ is a characteristic subgroup of every maximal subgroup of $N_\alpha$.
Take $R\le G_\alpha$ with $|R|=r$, and set $X=N{:}R$. Then $\Gamma$ is $X$-arc-transitive, $X_\alpha=N_\alpha{:}R$ and $X_{\alpha\beta}=N_{\alpha\beta}$, where $\beta\in \Gamma(\alpha)$.
Since $\Gamma$ is connected, by Lemma \ref{coset}, $X=\langle x,N_\alpha R\rangle$ for some $2$-element $x\in \N_X(N_{\alpha\beta})$. It is easily shown that $x\in N$, and so
$x\in \N_N(N_{\alpha\beta})$.
Noting that $|N_\alpha:N_{\alpha\beta}|=2$, it follows that $\Phi$ is a characteristic subgroup of $N_{\alpha\beta}$. Then both $R$ and $x$ normalize $\Phi$, and so
$\Phi$ is normal in $X=\langle x,N_\alpha R\rangle$, which is impossible.
Finally, let $|N_\alpha|\le 8$. Then we have $q=7$ or $9$, and $N\cong {\rm PSL}(2,7)$ or ${\rm PSL}(2,9)$, respectively. Checking the maximal subgroups of $G$, we conclude that $G_\alpha$ is a (Sylow) $2$-subgroup of $G$. Thus we have $r=2$, and $\Gamma$ has valency $4$. Then our theorem follows from Lemma \ref{prim-4}.
\vskip 50pt
\end{document} |
\begin{document}
\title[Gelfand-Tsetlin modules of quantum $\gl_n$]{Gelfand-Tsetlin modules of quantum $\gl_n$ defined by admissible sets of relations}
\author{Vyacheslav Futorny}
\address{Instituto de Matem\'atica e Estat\'istica, Universidade de S\~ao
Paulo, S\~ao Paulo SP, Brasil} \email{[email protected],}
\author{Luis Enrique Ramirez}
\address{Universidade Federal do ABC, Santo Andr\'e SP, Brasil} \email{[email protected],}
\author{Jian Zhang}
\address{Instituto de Matem\'atica e Estat\'istica, Universidade de S\~ao
Paulo, S\~ao Paulo SP, Brasil} \email{[email protected],}
\begin{abstract}
The purpose of this paper is to construct new families of irreducible Gelfand-Tsetlin modules for $U_q(\gl_n)$. These modules have arbitrary singularities
and Gelfand-Tsetlin multiplicities bounded by
$2$. Most previously known irreducible modules had all Gelfand-Tsetlin multiplicities bounded by
$1$ \cite{FRZ1}, \cite{FRZ2}. In particular, our method works for $q=1$ providing new families of irreducible Gelfand-Tsetlin modules for $\gl_n$. This generalizes the results of \cite{FGR3} and
\cite{FRZ}.
\end{abstract}
\subjclass{Primary 17B67}
\keywords{Quantum group, Gelfand-Tsetlin modules, Gelfand-Tsetlin basis, tableaux realization}
\maketitle
\section{Introduction}
The theory of Gelfand-Tsetlin modules originated in the classical paper of Gelfand and Tsetlin \cite{GT}, where a basis for every finite dimensional representation of $\gl_n$ was constructed, consisting of eigenvectors of a certain maximal commutative subalgebra of $U(\gl_n)$, the Gelfand-Tsetlin subalgebra. Infinite dimensional Gelfand-Tsetlin modules for $\gl_n$
were further studied in \cite{GG}, \cite{LP1}, \cite{LP2}, \cite{DFO}, \cite{O}, \cite{FO2}, \cite{Maz1}, \cite{Maz2}, \cite{m:gtsb}, \cite{FGR1}, \cite{FGR2}, \cite{FGR3}, \cite{FGR4}, \cite{FGR5}, \cite{Za}, \cite{RZ}, \cite{Vi1}, \cite{Vi2}, among others. These representations have close connections to different concepts
in Mathematics and Physics (cf. \cite{KW1}, \cite{KW2}, \cite{GS}, \cite{FM}, \cite{Gr1}, \cite{Gr2}, \cite{CE1}, \cite{CE2}, \cite{FO1} and references therein).
For quantum $\gl_n$
Gelfand-Tsetlin modules were considered in \cite{MT}, \cite{FH}, \cite{FRZ1}, \cite{FRZ2}. In particular, families of irreducible Gelfand-Tsetlin modules constructed
in \cite{FRZ1} and \cite{FRZ2} correspond to admissible sets of relations and have all Gelfand-Tsetlin multiplicities bounded by
$1$. In general, it is difficult to construct explicitly irreducible modules that have Gelfand-Tsetlin multiplicities bigger than $1$, even in the case of highest weight modules. Some examples can be obtained from \cite{FJMM}. In this paper we combine two methods: first, the construction of Gelfand-Tsetlin modules out of admissible sets of relations in \cite{FRZ2} and, second, the construction of $1$-singular modules in \cite{FGR3}. As a result we are able to obtain new families of irreducible Gelfand-Tsetlin modules whose Gelfand-Tsetlin multiplicities are bounded by
$2$ (Theorem \ref{thm-when L irred}). Moreover, these modules have arbitrary singularities. The construction is explicit, with a basis and the action of the algebra (Theorem \ref{thm-main}). This allows us to understand the structure of the constructed modules, in particular to describe the action of the generators of the Gelfand-Tsetlin subalgebra (Theorem \ref{GT module structure}).
Our method also works when $q=1$ and allows us to construct new families of irreducible modules for $\gl_n$ (Theorem \ref{thm-gl(n)}). Again, these modules allow arbitrary singularities (generalizing \cite{FGR3}) and, on the other hand, have Gelfand-Tsetlin multiplicities bounded by $2$ with a non-diagonalizable action of the Gelfand-Tsetlin subalgebra (generalizing \cite{FRZ}). Moreover, any irreducible Gelfand-Tsetlin module with a designated singularity appears as a subquotient of the constructed universal module (Theorem \ref{thm-exhaust}).
\
\
\noindent{\bf Acknowledgements.} V.F. is
supported in part by CNPq (301320/2013-6) and by
Fapesp (2014/09310-5). J. Z. is supported by Fapesp (2015/05927-0).
\section{Preliminaries}
Let $P$ be the free $\mathbb Z$-lattice of rank $n$ with the
canonical basis $\{\epsilon_{1},\ldots,\epsilon_{n}\}$, i.e.
$P=\bigoplus_{i=1}^{n}\mathbb Z\epsilon_{i}$, endowed with the symmetric
bilinear form
$\langle\epsilon_{i},\epsilon_{j}\rangle=\delta_{ij}$.
Let $\Pi=\{\alpha_j=\epsilon_j-\epsilon_{j+1}\ |\ j=1,\ldots,n-1\}$ and
$\Phi=\{\epsilon_{i}-\epsilon_{j}\ |\ 1\leq i\neq j\leq n\}$. Then $\Phi$ realizes the root system
of type $A_{n-1}$ with $\Pi$ a basis of simple roots.
By $U_q$ we denote the quantum enveloping algebra of $\gl_n$:
the unital associative algebra generated by $e_{i},f_{i}\ (1\leq i
\leq n-1)$ and $q^{h}\ (h\in P)$ subject to the
following relations:
\begin{align}
q^{0}=1,\ q^{h}q^{h'}=q^{h+h'} \quad (h,h' \in P),\\
q^{h}e_{i}q^{-h}=q^{\langle h,\alpha_i\rangle}e_{i} ,\\
q^{h}f_{i}q^{-h}=q^{-\langle h,\alpha_i\rangle}f_{i} ,\\
e_{i}f_{j}-f_{j}e_{i}=\delta_{ij}\frac{q^{\alpha_i}-q^{-\alpha_i}}{q-q^{-1}} ,\\
e_{i}^2e_{j}-(q+q^{-1})e_ie_je_i+e_je_{i}^2=0 \quad (|i-j|=1),\\
f_{i}^2f_{j}-(q+q^{-1})f_if_jf_i+f_jf_{i}^2=0 \quad (|i-j|=1),\\
e_{i}e_j=e_je_i,\ f_if_j=f_jf_i \quad (|i-j|>1).
\end{align}
Fix the standard Cartan subalgebra $\mathfrak h$ and the standard triangular decomposition. The weights of $U_q$ will be written as $n$-tuples $(\lambda_1,...,\lambda_n)$.
\begin{remark}[\cite{FRT}, Theorem 12]
$U_{q}$ has the following alternative presentation. It is isomorphic to the algebra generated by
$l_{ij}^{+}$, $l_{ji}^{-}$, $1\leq i \leq j \leq n$ subject to the relations:
\begin{align}
RL_1^{\pm}L_2^{\pm}=L_2^{\pm}L_1^{\pm}R\\
RL_1^{+}L_2^{-}=L_2^{-}L_1^{+}R
\end{align}
where
$R=q\sum_{i}e_{ii}\otimes e_{ii}+\sum_{i\neq j}e_{ii}\otimes e_{jj}
+(q-q^{-1})\sum_{i<j}e_{ij}\otimes e_{ji}$, $e_{ij}\in End(\mathbb{C}^n)$,
$L^{\pm}=(l_{ij}^{\pm})$, $L_1^{\pm}=L^{\pm}\otimes I$ and $L_2^{\pm}=I\otimes L^{\pm}$.
The isomorphism between these two presentations is given by
\begin{align*}
l_{ii}^{\pm}=q^{\pm\epsilon_i},\ \ \ \
l_{i,i+1}^{+}=(q-q^{-1})q^{\epsilon_i}e_i,\ \ \ \ l_{i+1,i}^{-}=(q-q^{-1})f_{i}q^{\epsilon_i}.
\end{align*}
\end{remark}
For a commutative ring $R$, by ${\rm Specm}\, R$ we denote the set of maximal ideals of $R$.
For $1\leq j \leq i \leq n$, $\delta^{ij} \in {\mathbb Z}^{\frac{n(n+1)}{2}}$ is defined by $(\delta^{ij})_{ij}=1$ and all other $(\delta^{ij})_{k\ell}$ are zero. For $i>0$, by $S_i$ we denote the symmetric group of degree $i$. Let $1(q)$ be the set of all complex numbers $x$ such that $q^{x}=1$. Finally, for any complex number $x$, we set
\begin{align*}
(x)_q=\frac{q^x-1}{q-1},\quad
[x]_q=\frac{q^x-q^{-x}}{q-q^{-1}}.
\end{align*}
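As a quick check of these conventions, the two $q$-analogues of $x$ are related by
\begin{align*}
[x]_q=q^{1-x}\,(x)_{q^2},
\end{align*}
and $[2]_q=q+q^{-1}$ is exactly the coefficient appearing in the quantum Serre relations above.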
\subsection{Gelfand-Tsetlin modules}
For $m\leqslant n$, let $\mathfrak{gl}_{m}$ be the Lie subalgebra
of $\gl_n$ spanned by $\{ E_{ij}\,|\, i,j=1,\ldots,m \}$. We have the following chain
\begin{equation*}
\gl_1\subset \gl_2\subset \ldots \subset \gl_n.
\end{equation*}
Denote by $(U_m)_q$ the quantum enveloping algebra of $\gl_m$. Then we have the chain $(U_1)_q\subset (U_2)_q\subset \ldots \subset
(U_n)_q$. Let $Z_{m}$ denote the center of $(U_{m})_{q}$. The subalgebra ${\Ga}_q$ of $U_q$ generated by $\{
Z_m\,|\,m=1,\ldots, n \}$ is called the \emph{Gelfand-Tsetlin
subalgebra} of $U_q$.
\begin{theorem}[\cite{FRT}, Theorem 14]\label{generators of the quantum center}
The center of $U_{q}(\mathfrak {gl}_m)$ is generated by the following $m+1$ elements
$$c_{mk}=\sum_{\sigma,\sigma'\in S_m}(-q)^{l(\sigma)+l(\sigma')}
l_{\sigma(1),\sigma'(1)}^{+}\cdots l_{\sigma(k),\sigma'(k)}^{+}l_{\sigma(k+1),\sigma'(k+1)}^{-}\cdots l_{\sigma(m),\sigma'(m)}^{-},$$
where $0\leq k \leq m$.
\end{theorem}
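For example, when $m=1$ both sums in Theorem \ref{generators of the quantum center} run over the trivial group $S_1$, and the theorem gives $c_{10}=l_{11}^{-}=q^{-\epsilon_1}$ and $c_{11}=l_{11}^{+}=q^{\epsilon_1}$; thus the center of $(U_1)_q$ is generated by $q^{\pm\epsilon_1}$, as expected since $(U_1)_q$ is commutative.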
\begin{definition}
\label{definition-of-GZ-modules} A finitely generated $U_q$-module
$M$ is called a \emph{Gelfand-Tsetlin module (with respect to
$\Ga_q$)} if
\begin{equation}\label{equation-Gelfand-Tsetlin-module-def}
M=\bigoplus_{\sm\in\Sp\Ga_q}M(\sm),
\end{equation}
where $M(\sm)=\{v\in M| \sm^{k}v=0 \text{ for some }k\geq 0\}$.
\end{definition}
Note that $\Gamma_q$ is a Harish-Chandra subalgebra of $U_q$, that is, for any $u\in U_q$ the $\Ga_q$-bimodule $\Ga_q u \Ga_q$ is a finitely generated left and right $\Ga_q$-module (\cite{MT}, Proposition 1 and \cite{FH}, Proposition 2.8).
As a result we have the following property of Gelfand-Tsetlin modules.
\begin{lemma}\label{lem-cyclic-Gelfand-Tsetlin}
Let $\sm\in\Sp\Ga_q$ and $M$ be a $U_q$-module generated by a nonzero element $v\in M(\sm)$. Then $M$ is a Gelfand-Tsetlin module.
\end{lemma}
\begin{proof}
Let $M'=\oplus_{\sn\in\Sp\Ga_q} M(\sn)$ be the largest Gelfand-Tsetlin $U_q$-submodule of $M$. We will show that $M'=M$. Indeed, consider any nonzero $u\in U_q$ and apply it to $v$. We need to show that $uv\in M'$. Take any nonzero $z\in \Ga_q$. Since $\Ga_q$ is a Harish-Chandra subalgebra and since $\Ga_q v$ is finite dimensional, there exists $N$ such that
$$uv, zuv, \ldots, z^N uv$$ are linearly dependent. Hence, the subspace spanned by $\{uv, zuv, \ldots, z^{N-1} uv\}$ is $z$-invariant and $\prod_{i\in I}(z-\gamma_i(z))^N uv=0$ for some scalars $\gamma_i(z)$ and some finite set $I$. Choose some generators $z_1, \ldots, z_d$ of $\Ga_q$ and
define such scalars $\gamma_{i}(z_j)$, $i\in I_j$, for each generator $z_j$, $j=1, \ldots, d$.
For each element $\bar{i}=(i_1, i_2, \ldots, i_d)$ of $I_1 \times I_2 \times \ldots \times I_d$
consider $\sn_{\bar{i}}\in\Sp\Ga_q$ which contains $z_j- \gamma_{i_j}(z_j)$ for each $j=1, \ldots, d$. Then $$uv\in \sum_{\bar{i}\in I_1 \times I_2 \times \ldots \times I_d} M(\sn_{\bar{i}}),$$ which proves the lemma.
\end{proof}
\section{Gelfand-Tsetlin modules defined by admissible sets of relations}\label{section modules with relations}
In this section we recall the construction of $U_{q}$-modules from \cite{FRZ2}. As particular cases of this construction one obtains any irreducible finite dimensional module as in \cite{UST2}, Theorem 2.11, and generic modules as in \cite{MT}, Theorem 2.
\begin{definition} For a vector $v=(v_{ij})$ in $\mathbb{C}^{\frac{n(n+1)}{2}}$, by $T(v)$ we will denote the following array with entries $\{v_{ij}:1\leq j\leq i\leq n\}$
\begin{center}
\Stone{\mbox{ \scriptsize {$v_{n1}$}}}\Stone{\mbox{ \scriptsize {$v_{n2}$}}}\hspace{1cm} $\cdots$ \hspace{1cm} \Stone{\mbox{ \scriptsize {$v_{n,n-1}$}}}\Stone{\mbox{ \scriptsize {$v_{nn}$}}}\\[0.2pt]
\Stone{\mbox{ \scriptsize {$v_{n-1,1}$}}}\hspace{1.5cm} $\cdots$ \hspace{1.5cm} \Stone{\mbox{ \tiny {$v_{n-1,n-1}$}}}\\[0.3cm]
\hspace{0.2cm}$\cdots$ \hspace{0.8cm} $\cdots$ \hspace{0.8cm} $\cdots$\\[0.3cm]
\Stone{\mbox{ \scriptsize {$v_{21}$}}}\Stone{\mbox{ \scriptsize {$v_{22}$}}}\\[0.2pt]
\Stone{\mbox{ \scriptsize {$v_{11}$}}}\\
\end{center}
Such an array will be called a \emph{Gelfand-Tsetlin tableau} of height $n$.
\end{definition}
To any Gelfand-Tsetlin tableau $T(v)$ we associate the $\mathbb{C}$-vector space $V(T(v))$ spanned by the set of Gelfand-Tsetlin tableaux $\mathcal{B}(T(v)):=\{T(v+z)\ |\ z\in\mathbb{Z}^{\frac{n(n-1)}{2}}\}$. For certain subsets $B$ of $\mathcal{B}(T(v))$, with corresponding $\mathbb{C}$-subspaces $V_B(T(v))$ spanned by $B$, it is possible to define a $U_q$-module structure on $V_B(T(v))$ by the \emph{Gelfand-Tsetlin formulae}:
\begin{equation}\label{Gelfand-Tsetlin formulas}
\begin{split}
q^{\epsilon_{k}}(T(L))&=q^{a_k}T(L),\quad a_k=\sum_{i=1}^{k}l_{k,i}-\sum_{i=1}^{k-1}l_{k-1,i}+k,\ k=1,\ldots,n,\\
e_{k}(T(L))&=-\sum_{j=1}^{k}
\frac{\prod_{i} [l_{k+ 1,i}-l_{k,j}]_q}{\prod_{i\neq j} [l_{k,i}-l_{k,j}]_q}
T(L+\delta^{kj}),\\
f_{k}(T(L))&=\sum_{j=1}^{k}\frac{\prod_{i} [l_{k-1,i}-l_{k,j}]_q}{\prod_{i\neq j} [l_{k,i}-l_{k,j}]_q}T(L-\delta^{kj}),\\
\end{split}
\end{equation}
for all $L\in B$.
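For instance, for $n=2$ a tableau $T(L)$ has entries $l_{21},l_{22},l_{11}$, and the formulae (\ref{Gelfand-Tsetlin formulas}) specialize to
\begin{align*}
e_{1}(T(L))&=-[l_{21}-l_{11}]_q[l_{22}-l_{11}]_q\,T(L+\delta^{11}),\\
f_{1}(T(L))&=T(L-\delta^{11}),
\end{align*}
since for $k=1$ the products over $i\neq j$ (and, for $f_1$, over the empty row $k-1=0$) equal $1$.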
\subsection{Admissible sets of relations}
Set $\mathfrak{V}:=\{(i,j)\ |\ 1\leq j\leq i\leq n\}$. We will consider relations between elements of $\mathfrak{V}$ of the form $(i,j)\geq (s,t)$ or $(i,j)>(s,t)$. More precisely, we will consider subsets of the following set of relations:
\begin{align}
\mathcal{R} &:=\mathcal{R}^{\geq} \cup \mathcal{R}^{>}\cup \mathcal{R}^{0},
\end{align}
where
\begin{align}
\mathcal{R}^{\geq} &:=\{(i,j)\geq(i-1,j')\ |\ 1\leq j\leq i\leq n,\ 1\leq j'\leq i-1\},\\
\mathcal{R}^{>} &:=\{(i-1,j')>(i,j)\ |\ 1\leq j\leq i\leq n,\ 1\leq j'\leq i-1\},\\
\mathcal{R}^{0} &:=\{(n,i)\geq(n,j)\ |\ 1\leq i\neq j\leq n\}.
\end{align}
Let $\mathcal{C}$ be a subset of $\mathcal{R}$.
Denote by $\mathfrak{V}(\mathcal{C})$ the set of all $(i,j)\in \mathfrak{V}$ that appear in some relation of $\mathcal{C}$.
Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be two subsets of $\mathcal{C}$.
We say that $\mathcal{C}_1$ and $\mathcal{C}_2$ are {\it disconnected}
if
$\mathfrak{V}(\mathcal{C}_1)\cap\mathfrak{V}(\mathcal{C}_2)=\emptyset$;
otherwise, $\mathcal{C}_1$ and $\mathcal{C}_2$ are connected.
$\mathcal{C}$ is called {\it decomposable} if it can be decomposed into the union of two disconnected subsets of $\mathcal{R}$; otherwise $\mathcal{C}$ is called indecomposable.
\begin{definition}
Let $\mathcal{C}$ be any subset of $\mathcal{R}$. Given $(i,j),\ (r,s)\in \mathfrak{V}(\mathcal{C})$ we will write:
\begin{itemize}
\item[(i)] $(i,j)\succeq_{\mathcal{C}} (r,s)$ if there exists $\{(i_{1},j_{1}),\ldots,(i_{m},j_{m})\}\subseteq \mathfrak{V}(\mathcal{C})$ such that
\begin{align}\label{geq sub C}
\{(i,j)\geq (i_{1},j_1),\ (i_{1},j_1)\geq (i_2,j_2), \cdots,\ (i_{m},j_m)\geq (r,s)\}&\subseteq \mathcal{C}
\end{align}
\item[(ii)] We write $(i,j)\succ_{\mathcal{C}} (r,s)$ if there exists $\{(i_{1},j_{1}),\ldots,(i_{m},j_{m})\}\subseteq \mathfrak{V}(\mathcal{C})$ such that in the condition (\ref{geq sub C}), at least one of the inequalities is $>$.
\end{itemize}
We will say that $(i,j),\ (r,s)$ are $\mathcal{C}$-related if $(i,j)\succeq_{\mathcal{C}} (r,s)$ or $(r,s)\succeq_{\mathcal{C}} (i,j)$. Given another set of relations $\mathcal{C}'$, we say that {\it\bf $\mathcal{C}$ implies $\mathcal{C}'$} if whenever we have $(i,j)\succ_{\mathcal{C}'} (r,s)$ (respectively $(i,j)\succeq_{\mathcal{C}'} (r,s)$) we also have $(i,j)\succ_{\mathcal{C}} (r,s)$ (respectively $(i,j)\succeq_{\mathcal{C}} (r,s)$). A subset of $\mathcal{R}$ of the form $\{(k,i)\geq (k-1,t),\ (k-1,s)>(k,j)\}$ with $i<j$ and $s<t$, is called a \emph{\bf cross}.
\end{definition}
Now we define the sets of relations of our interest.
\begin{definition}
Let $\mathcal{C}$ be an indecomposable set. We say that $\mathcal{C} $ is {\it \bf admissible} if it satisfies the following conditions:
\begin{itemize}
\item[(i)] For any $1\leq k\leq n$, we have $(k, i)\succ_{\mathcal{C}}(k, j)$ only if $i<j$;
\item[(ii)] $(n,i)\succeq_{\mathcal{C}} (n,j)$ only if $i<j$;
\item[(iii)] There is no cross in $\mathcal{C} $;
\item[(iv)] For every $(k,i),\ (k,j)\in\mathfrak{V}(\mathcal{C})$ with $1\leq k\leq n-1$ there exist $s,t$ such that one of the following holds
\begin{equation}\label{condition for admissible}
\begin{split}
&\{(k,i)>(k+1,s)\geq (k,j),\ (k,i)\geq (k-1,t)>(k,j)\}\subseteq \mathcal{C},\\
&\{(k,i)>(k+1,s),(k+1,t)\geq (k,j)\}\subseteq \mathcal{C},\ s<t.
\end{split}
\end{equation}
\end{itemize}
An arbitrary set $\mathcal{C}$ is admissible if every indecomposable subset of $\mathcal{C}$ is admissible.
\end{definition}
Denote by $\mathfrak{F}$ the set of all indecomposable admissible subsets. Recall that $1(q):=\{x\in \mathbb{C}\ |\ q^x=1\}$.
\begin{definition} Let $\mathcal{C}$ be any subset of $\mathcal{R}$ and $T(L)$ any Gelfand-Tsetlin tableau.
\begin{itemize}
\item[(i)]
We say that $T(L)$ satisfies a relation $(i,j)\geq (r,s)$ (respectively, $(i,j)> (r,s)$) if $l_{ij}-l_{rs}\in \mathbb{Z}_{\geq 0}+\frac{1(q)}{2}\ (\text{respectively, }l_{ij}-l_{rs}\in \mathbb{Z}_{> 0}+\frac{1(q)}{2})$.
\item[(ii)]
We say that a Gelfand-Tsetlin tableau $T(L)$ \emph{satisfies $\mathcal{C}$} if $T(L)$ satisfies all the relations in $\mathcal{C}$
and $l_{ki}-l_{kj}\in \mathbb{Z}+\frac{1(q)}{2}$ only if $(k,i)$ and $(k,j)$ are
in the same indecomposable subset of $\mathfrak{V}(\mathcal{C})$. In this case we call $T(L)$ a $\mathcal{C}$-\emph{\bf realization}.
\item[(iii)] $\mathcal{C}$ is a \emph{\bf maximal} set of relations for $T(L)$ if $T(L)$ satisfies $\mathcal{C}$ and whenever $T(L)$ satisfies a set of relations $\mathcal{C}'$ we have that $\mathcal{C}$ implies $\mathcal{C}'$.
\item[(iv)] If $T(L)$ satisfies $\mathcal{C}$ we denote by ${\mathcal B}_{\mathcal{C}}(T(L))$ the subset of ${\mathcal B}(T(L))$ of all Gelfand-Tsetlin tableaux satisfying $\mathcal{C}$, and by $V_{\mathcal{C}}(T(L))$ the complex vector space spanned by ${\mathcal B}_{\mathcal{C}}(T(L))$.
\end{itemize}
\end{definition}
Set
\begin{equation}\label{c-for1}
e_{ki}(L)=\left\{
\begin{array}{cc}
0,& \text{ if } T(L)\notin \mathcal {B}_{\mathcal{C}}(T(L))\\
-\frac{\prod_{j=1}^{k+1}[l_{ki}-l_{k+1,j}]_q}{\prod_{j\neq i}^{k}[l_{ki}-l_{kj}]_q},& \text{ if } T(L)\in \mathcal {B}_{\mathcal{C}}(T(L))
\end{array}
\right.
\end{equation}
\begin{equation}\label{c-for2}
f_{ki}(L)=\left\{
\begin{array}{cc}
0,& \text{ if } T(L)\notin \mathcal {B}_{\mathcal{C}}(T(L))\\
\frac{\prod_{j=1}^{k-1}[l_{ki}-l_{k-1,j}]_q}{\prod_{j\neq i}^{k}[l_{ki}-l_{kj}]_q},& \text{ if } T(L)\in \mathcal {B}_{\mathcal{C}}(T(L))
\end{array}
\right.
\end{equation}
\begin{equation}\label{c-for3}
h_{k}(L)=\left\{
\begin{array}{cc}
0,& \text{ if } T(L)\notin \mathcal {B}_{\mathcal{C}}(T(L))\\
q^{2\sum_{i=1}^{k}l_{ki}-\sum_{i=1}^{k-1}l_{k-1,i}-\sum_{i=1}^{k+1}l_{k+1,i} -1},& \text{ if } T(L)\in\mathcal {B}_{\mathcal{C}}(T(L))
\end{array}
\right.
\end{equation}
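For illustration, in the smallest case $n=2$ (so $k=1$) the empty products above are read as $1$, and for $T(L)\in \mathcal {B}_{\mathcal{C}}(T(L))$ the coefficients (\ref{c-for1})--(\ref{c-for3}) specialize to
\begin{equation*}
e_{11}(L)=-[l_{11}-l_{21}]_q[l_{11}-l_{22}]_q,\qquad
f_{11}(L)=1,\qquad
h_{1}(L)=q^{2l_{11}-l_{21}-l_{22}-1}.
\end{equation*}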
\begin{definition}
Let $\mathcal{C}$ be a subset of $\mathcal{R}$. We call $\mathcal{C}$ \emph{\bf realizable} if for any tableau $T(L)$ satisfying $\mathcal{C}$, the vector space $V_{\mathcal{C}}(T(L))$ has a structure of a $U_q$-module, endowed with the action of $U_q$ given by the Gelfand-Tsetlin formulas (\ref{Gelfand-Tsetlin formulas}) with coefficients given by (\ref{c-for1}), (\ref{c-for2}) and (\ref{c-for3}).
\end{definition}
\begin{theorem}[\cite{FRZ2}, Theorem 3.9 and Theorem 4.1]\label{sufficiency of admissible} If $\mathcal{C}$ is a union of disconnected sets from $\mathfrak{F}$, then $\mathcal{C}$ is realizable. Moreover, $V_{\mathcal{C}}(T(L))$ is a Gelfand-Tsetlin module with the generator $c_{mk}$ of $\Gamma_q$ acting on $T(L)\in\mathcal{B}(T(v))$ as multiplication by
\begin{equation}\label{eigenvalues of gamma_mk}
\gamma_{mk}(L)=(k)_{q^{-2}}!(m-k)_{q^{-2}}!q^{k(k+1)+\frac{m(m-3)}{2}}\sum_{\tau}
q^{\sum_{i=1}^{k}l_{m\tau(i)}-\sum_{i=k+1}^{m}l_{m\tau(i)}}
\end{equation}
where the sum runs over all $\tau\in S_m$ such that $\tau(1)<\cdots<\tau(k)$ and $\tau(k+1)<\cdots<\tau(m)$.
\end{theorem}
\begin{remark}
The sets $\mathcal{S} :=\{(i+1,j)\geq(i,j)>(i+1,j+1)\ |\ 1\leq j\leq i\leq n-1\}$ and $\emptyset$ are indecomposable sets of relations. By Theorem 2.11 in \cite{UST2}, any irreducible finite-dimensional $U_{q}$-module is isomorphic to $V_{\mathcal{S}}(T(v))$ for some $v$. The family of modules $\{V_{\emptyset}(T(v))\}$ coincides with the family of generic modules constructed in \cite{MT}, Theorem 2.
\end{remark}
\section{New Gelfand-Tsetlin modules} \label{sec-der}
In \cite{FGR3} the classical Gelfand-Tsetlin formulas were generalized, making it possible to construct a new family of $\mathfrak{gl}_n$-modules associated with tableaux with at most one singular pair. In this section we use this approach, combined with the ideas of Section \ref{section modules with relations}, to construct new families of $U_{q}$-modules.
\begin{definition}
A Gelfand-Tsetlin tableau $T(v)$ will be called {\bf $\mathcal{C}$-singular} if:
\begin{itemize}
\item[(i)] $T(v)$ satisfies $\mathcal{C}$.
\item[(ii)] There exists a triple $(r,s,t)$ with $1\leq s<t\leq r\leq n-1$ such that $\mathfrak{V}(\mathcal{C})\cap\{(r,s),(r,t)\}=\emptyset$ and $v_{rs}-v_{rt}\in\frac{1(q)}{2}+\mathbb{Z}$.
\end{itemize}
The tableau $T(v)$ will be called {\bf ($1$, $\mathcal{C}$)-singular} if it is \emph{$\mathcal{C}$-singular} and the tuple $(r,s,t)$ is unique. The tableau will be called {\bf $\mathcal{C}$-generic} if it is not $\mathcal{C}$-singular.
\end{definition}
From now on we will fix an admissible set of relations $\mathcal{C}$, $T(\bar{v})$ a $(1,\mathcal{C})$-singular tableau, $(i,j,k)$ such that $\bar{v}_{ki}=\bar{v}_{kj}$ with $1\leq i < j \leq k\leq n-1$ and by $\tau$ we denote the element in $S_{n-1} \times\cdots \times S_{1}$ such that $\tau[k]$ is the transposition $(i,j)$ and all other $\tau[t]$ are $\mbox{Id}$.
By ${\mathcal H}$ we denote the union of the hyperplanes $v_{ki} - v_{kj}\in \frac{1(q)}{2}+\mathbb{Z}$ in ${\mathbb C}^{\frac{n(n-1)}{2}}$, and we let $\overline{\mathcal H}$ be the subset of all $v$ in ${\mathbb C}^{\frac{n(n-1)}{2}}$ such that $v_{tr} \neq v_{ts}$ for all triples $(t,r,s)$ except for $(t,r,s) = (k,i,j)$. By ${\mathcal F}_{ij}$ we denote the space of all rational functions that are smooth on $\overline{\mathcal H}$.
We impose the conditions $T(\bar{v} + z) = T(\bar{v} +\tau(z))$ on the corresponding tableaux and formally introduce new tableaux ${\mathcal D} T({\bar{v}} + z)$ for every $z \in {\mathbb Z}^{\frac{n(n-1)}{2}}$ subject to the relations ${\mathcal D} T({\bar{v}} + z) + {\mathcal D} T({\bar{v}} + \tau(z)) = 0$. We call ${\mathcal D} T(u)$ {\it the derivative Gelfand-Tsetlin tableau} associated with $u$.
\begin{definition}
We set $V_{\mathcal{C}}(T(\bar{v}))$ to be the $\mathbb{C}$-vector space spanned by the set of tableaux $\{ T(\bar{v} + z), \,\mathcal{D} T(\bar{v} + z) \; | \; T(\bar{v} + z)\in\mathcal{B}_{\mathcal{C}}(T(\bar{v}))\}.$ A basis of $V_{\mathcal{C}}(T(\bar{v}))$ is for example the set
$$
\{ T(\bar{v} + z), \mathcal{D} T(\bar{v} + z') \; | \; T(\bar{v} + z),\ T(\bar{v} + z')\in\mathcal{B}_{\mathcal{C}}(T(\bar{v})), \text{ and } z_{ki} \leq z_{kj}, z'_{ki} > z'_{kj}\}.
$$
\end{definition}
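Note that the strict inequality $z'_{ki}>z'_{kj}$ in this basis is forced by the defining relations: $T(\bar{v}+z)$ is identified with $T(\bar{v}+\tau(z))$, while $\mathcal{D}T(\bar{v}+z)=-\mathcal{D}T(\bar{v}+\tau(z))$, so in particular
\begin{equation*}
\tau(z)=z \ (\text{that is, } z_{ki}=z_{kj}) \quad\Longrightarrow\quad 2\,\mathcal{D}T(\bar{v}+z)=0,
\end{equation*}
and the derivative tableaux with $z'_{ki}=z'_{kj}$ vanish.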
In order to define the action of the generators of $U_q$ on $V_{\mathcal{C}}T(\bar{v})$ we first note that for any $\mathcal{C}$-generic vector $v'$ such that $\mathcal{C}$ is a maximal set of relations for $T(v')$, $V_{\mathcal{C}}(T(v'))$ has a structure of an irreducible $U_{q}$-module. If $v$ denotes the vector with entries $v_{rs}=v'_{rs}$ if $(r,s)\neq (k,i), (k,j)$ and $v_{ki}=x$, $v_{kj}=y$ variables, then $V_{\mathcal{C}}(T(v))$ is a $U_{q}$-module over $\mathbb{C}(x,y)$ with action of the generators given by the formulas (\ref{Gelfand-Tsetlin formulas}). From now on, by $v$ we will denote one such vector.
Define a linear map $\mathcal{D}^{\bar{v}}: {\mathcal F}_{ij} \otimes V_{\mathcal{C}}(T(v)) \to V_{\mathcal{C}}(T(\bar{v})) $ by
$$
\mathcal{D}^{\bar{v}} (f T(v+z)) = \mathcal{D}^{\bar{v}} (f) T(\bar{v}+z) + f(\bar{v}) \mathcal{D} T(\bar{v}+z),
$$
where $\mathcal{D}^{\bar{v}}(f) = \frac{q-q^{-1}}{4\ln q}\left(\frac{\partial f}{\partial x}-\frac{\partial f}{\partial y}\right)(\bar{v})$. In particular,
$\mathcal{D}^{\bar{v}} ([x-y]_qT(v+z)) = T(\bar{v}+z)$,
$\mathcal{D}^{\bar{v}} (T(v+z)) = \mathcal{D} (T(\bar{v}+z))$.
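The normalization $\frac{q-q^{-1}}{4\ln q}$ is chosen precisely so that the first of these identities holds: writing $[x-y]_q=\frac{q^{x-y}-q^{y-x}}{q-q^{-1}}$ and using $\bar{v}_{ki}=\bar{v}_{kj}$, one computes
\begin{equation*}
\mathcal{D}^{\bar{v}}([x-y]_q)
=\frac{q-q^{-1}}{4\ln q}\cdot\frac{2\ln q\,\bigl(q^{x-y}+q^{y-x}\bigr)}{q-q^{-1}}\Bigg|_{\bar{v}}
=\frac{q^{x-y}+q^{y-x}}{2}\Bigg|_{\bar{v}}=1,
\end{equation*}
while ${\rm ev}(\bar{v})([x-y]_q)=0$, so $\mathcal{D}^{\bar{v}}([x-y]_qT(v+z))=1\cdot T(\bar{v}+z)+0=T(\bar{v}+z)$.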
\subsection{Module structure on $V_{\mathcal{C}}(T(\bar{v}))$}
We define the action of $e_r,f_r,q^{\epsilon_r}(r=1,2,\ldots,n)$ on the generators of $V_{\mathcal{C}}(T(\bar{v}))$ as follows:
\begin{align*}
g(T(\bar{v} + z))=&\ \mathcal{D}^{\bar{v}}([x - y]_qg(T(v + z)))\\
g(\mathcal{D}T(\bar{v} + z'))=&\ \mathcal{D}^{\bar{v}} ( g(T(v + z'))),
\end{align*}
where $z, z' \in {\mathbb Z}^{\frac{n(n-1)}{2}}$ with $z' \neq \tau(z')$,
$g\in \{e_r,f_r,q^{\epsilon_r}(r=1,2,\ldots,n)\}$. One should note that $[x - y]_qg(T(v + z))$ and $g(T(v + z'))$ are in $ {\mathcal F}_{ij} \otimes V_{\mathcal{C}}(T(v))$, so the right hand sides in the above formulas are well defined.
\begin{proposition} \label{compatible} Let $z \in {\mathbb Z}^{\frac{n(n-1)}{2}}$ and
$g\in \{e_r,f_r,q^{\epsilon_r}(r=1,2,\ldots,n)\}$.
\begin{itemize}
\item[(i)] $ {\mathcal D}^{\bar{v}} ([x-y]_q g T(v+z)) = {\mathcal D}^{\bar{v}} ([x-y]_q g T(v+\tau(z))) $ for all $z$.
\item[(ii)] $ {\mathcal D}^{\ov}(g T(v+z)) = - {\mathcal D}^{\bar{v}} (g T(v+\tau(z))) $ for all $z$ such that $\tau(z) \neq z$.
\end{itemize}
\end{proposition}
It remains to prove that this well-defined action endows $V_{\mathcal{C}}(T(\bar{v}))$ with a $U_q$-module structure.
\begin{lemma} \label{dij-commute}
Let $g$ be a generator of $U_q$.
\begin{itemize}
\item[(i)] $g (T(\ov + z)) = {\rm ev} ( \ov) g ( T(v+z))$ whenever $\tau(z) \neq z$.
\item[(ii)] $\mathcal{D}^{\ov} g (F) = g \mathcal{D}^{\ov} (F)$ if $F$ and $g(F)$ are in ${\mathcal F}_{ij} \otimes {V}(T(v))$.
\item[(iii)] $\mathcal{D}^{\ov} ( [x-y]_qg (F)) = g( {\rm ev} (\ov) F)$ if $F \in {\mathcal F}_{ij} \otimes {V}(T(v))$.
\end{itemize}
\end{lemma}
\begin{proof}
(i) Since $\tau(z) \neq z$, $ g ( T(v+z))\in {\mathcal F}_{ij} \otimes {V}(T(v))$. Thus
\begin{align*}
g (T(\ov + z))&=\mathcal{D}^{\ov} ([x-y]_q g (T(v+ z)))\\
&=\mathcal{D}^{\ov} ([x-y]_q)\mbox{ev}(\ov)g (T(v+ z))
+\mbox{ev}(\ov)([x-y]_q) \mathcal{D}^{\ov}g (T(v+ z))\\
&=\mbox{ev}(\ov)g (T(v+ z)).
\end{align*}
(ii) Using (i) and the facts that $g( T(v+z)) $ is in ${\mathcal F}_{ij} \otimes {\mathcal V}$ and $g(\mathcal{D}T(\bar{v} + z))=\mathcal{D}^{\ov}(g( T(v + z)))$ we have
$$
\mathcal{D}^{\ov}g ( f T(v+z))-g \mathcal{D}^{\ov} ( f T(v+z)) = \mathcal{D}^{\ov} (f) \left( \mbox{ev} (\ov)g( T(v+z)) - g( T(\ov+z)) \right) =0.
$$
(iii) Taking into consideration that $ [x-y]_q g(F)\in {\mathcal F}_{ij} \otimes {V}(T(v))$, by (ii) we have
\begin{eqnarray*}
\mathcal{D}^{\ov} \left( [x-y]_q g (fT(v + z)) \right) & = & g \mathcal{D}^{\ov} \left( [x-y]_qf(T(v + z)) \right)\\
& = & g ( \mbox{ev}(\ov) f T(v+z)).
\end{eqnarray*}
\end{proof}
\begin{proposition} \label{t-v-rep}
Let $\bar{v}$ be the fixed $(1,\mathcal{C})$-singular vector and let $z\in\mathbb{Z}^{\frac{n(n-1)}{2}}$.
\begin{itemize}
\item[(i)]
$q^{0}(T(\ov + z))=T(\ov + z),\ q^{h}q^{h'}(T(\ov + z))=q^{h+h'}(T(\ov + z)) \quad (h,h' \in P)$.
\item[(ii)]$q^{h}e_{r}q^{-h}T(\ov + z)=q^{\langle h,\alpha_r\rangle}e_{r} (T(\ov + z))$.
\item[(iii)]$q^{h}f_{r}q^{-h}(T(\ov + z))=q^{-\langle h,\alpha_r\rangle}f_{r}(T(\ov + z))$.
\item[(iv)]$(e_{r}f_{s}-f_{s}e_{r})(T(\ov + z))=\delta_{rs}\frac{q^{\alpha_r}-q^{-\alpha_r}}{q-q^{-1}}(T(\ov + z))$.
\item[(v)]$(e_{r}^2e_{s}-(q+q^{-1})e_re_se_r+e_se_{r}^2) (T(\ov + z))=0 \quad (|r-s|=1)$.
\item[(vi)]$(f_{r}^2f_{s}-(q+q^{-1})f_rf_sf_r+f_sf_{r}^2) (T(\ov + z))=0 \quad (|r-s|=1)$.
\item[(vii)]
$e_{r}e_s (T(\ov + z))=e_se_r (T(\ov + z))$, and $
f_rf_s (T(\ov + z))=f_sf_r (T(\ov + z))$, $(|r-s|>1)$.
\end{itemize}
\end{proposition}
\begin{proof}
We only give the proof of (v). Other statements can be proved similarly.
\begin{align*}
&(e_{r}^2e_{s}-(q+q^{-1})e_re_se_r+e_se_{r}^2) (T(\ov + z))\\
=&(e_{r}^2e_{s}-(q+q^{-1})e_re_se_r+e_se_{r}^2) \Dv ([x-y]_qT(v + z))
\end{align*}
For any $r_1,r_2,r_3$, if $\#\{r_t:r_t=k\}\leq 2$, then $[x-y]_qT(v + z)$, $[x-y]_q e_{r_1}T(v + z)$, $[x-y]_qe_{r_2}e_{r_1}T(v + z)$, $[x-y]_qe_{r_3}e_{r_2}e_{r_1}T(v + z)$ are in
${\mathcal F}_{ij} \otimes {V}(T(v))$.
It follows from Lemma \ref{dij-commute} (ii) that
\begin{align*}
&(e_{r}^2e_{s}-(q+q^{-1})e_re_se_r+e_se_{r}^2) (T(\ov + z))\\
=& \Dv((e_{r}^2e_{s}-(q+q^{-1})e_re_se_r+e_se_{r}^2) ([x-y]_qT(v + z)))\\
=&0.
\end{align*}
\end{proof}
The proof of the following proposition will be given in Appendix \ref{Section: Appendix}.
\begin{proposition} \label{d-t-v-rep}
Let $\bar{v}$ be the fixed $(1,\mathcal{C})$-singular vector and let $z\in\mathbb{Z}^{\frac{n(n-1)}{2}}$ be such that $\tau(z)\neq z$. Then
\begin{itemize}
\item[(i)]$
q^{0}(\D T(\ov + z))=\D T(\ov+ z)$, and $q^{h}q^{h'}(\D T(\ov + z))=q^{h+h'}(\D T(\ov + z))$, for any $h,h' \in P$.
\item[(ii)]$q^{h}e_{r}q^{-h}(\D T(\ov + z))=q^{\langle h,\alpha_r\rangle}e_{r} (\D T(\ov + z))$.
\item[(iii)]$q^{h}f_{r}q^{-h}(\D T(\ov + z))=q^{-\langle h,\alpha_r\rangle}f_{r}(\D T(\ov + z))$.
\item[(iv)]$(e_{r}f_{s}-f_{s}e_{r})(\D T(\ov + z))=\delta_{rs}\frac{q^{\alpha_r}-q^{-\alpha_r}}{q-q^{-1}}(\D T(\ov + z))$.
\item[(v)]$(e_{r}^2e_{s}-(q+q^{-1})e_re_se_r+e_se_{r}^2) (\D T(\ov + z))=0 \quad (|r-s|=1)$.
\item[(vi)]$(f_{r}^2f_{s}-(q+q^{-1})f_rf_sf_r+f_sf_{r}^2) (\D T(\ov + z))=0 \quad (|r-s|=1)$.
\item[(vii)]
$e_{r}e_s (\D T(\ov + z))=e_se_r (\D T(\ov + z))$, and $
f_rf_s (\D T(\ov + z))=f_sf_r (\D T(\ov + z))$, $(|r-s|>1)$.
\end{itemize}
\end{proposition}
Combining Propositions \ref{t-v-rep} and \ref{d-t-v-rep} we obtain our main result.
\begin{theorem}\label{thm-main}
If $\bar{v}$ is a $(1,\mathcal{C})$-singular vector in ${\mathbb C}^{\frac{n(n-1)}{2}}$, then $V_{\mathcal{C}}(T(\bar{v}))$ is a $U_q$-module, with action of the generators of $U_q$ given by
\begin{align*}
g(T(\bar{v} + z))=&\ \mathcal{D}^{\bar{v}}([x-y]_qg(T(v + z)))\\
g(\mathcal{D}T(\bar{v} + z'))=&\ \mathcal{D}^{\bar{v}} ( g(T(v + z'))),
\end{align*}
for any $z,z'\in\mathbb{Z}^{\frac{n(n-1)}{2}}$ with $z'\neq\tau(z')$.
\end{theorem}
\subsection{Action of the Gelfand-Tsetlin subalgebra on $V_{\mathcal{C}}(T(\bar{v}))$}
\begin{lemma}\label{generators of Gamma acting in many tableaux}
\begin{itemize}
\item[(i)] If $z\in\mathbb{Z}^{\frac{n(n-1)}{2}}$ is such that $|z_{ki}-z_{kj}|\geq n-m$ for some $0 \leq m \leq n$, then for each $1\leq s \leq r\leq n-m$ we have:
\begin{itemize}
\item[(a)]
$
c_{rs}(T(\bar{v}+z))=\Dv([x-y]_qc_{rs}(T(v+z))),
$
\item[(b)]
$c_{rs}(\mathcal{D}T(\bar{v}+z))= \Dv(c_{rs}(T(v+z)))$ if $z\neq \tau(z)$,
\end{itemize}
\item[(ii)] If $1\leq s\leq r\leq k$ and $z\in\mathbb{Z}^{\frac{n(n-1)}{2}}$ then the action of $c_{rs}$ on $T(\bar{v}+z)$ and
$\mathcal{D}T(\bar{v}+z)$ is defined by the formulas in {\rm (i)}.
\end{itemize}
\end{lemma}
\begin{proof}
Recall that
$$c_{rs}=\sum_{\sigma,\sigma'\in S_r}(-q)^{l(\sigma)+l(\sigma')}
l_{\sigma(1),\sigma'(1)}^{+}\cdots l_{\sigma(s),\sigma'(s)}^{+}l_{\sigma(s+1),\sigma'(s+1)}^{-}\cdots l_{\sigma(r),\sigma'(r)}^{-}.$$
\begin{itemize}
\item[(i)]
The elements $l_{rs}^+$ with $r<s$ can be written as sum of products of elements of the form
$l_{r,r+1}^+, l_{r+1,r+2}^+, \ldots, l_{s-1,s}^+$ and $l_{sr}^-$, $r<s$ can be written as sum of products of $l_{s,s-1}^-, l_{s-1,s-2}^- \ldots, l_{r+1,r}^-$.
By the hypothesis $|z_{ki}-z_{kj}|\geq n-m$, the coefficients that appear in the decompositions of the following vectors are all in $\mathcal{F}_{ij}$:
\begin{align*}
[x-y]_q l_{\sigma(t),\sigma'(t)}^{-}\cdots l_{\sigma(r),\sigma'(r)}^{-}(T(v+z))& &(t>s),\\
[x-y]_q l_{\sigma(t),\sigma'(t)}^{+}\cdots l_{\sigma(r),\sigma'(r)}^{-}(T(v+z))& &(t\leq s),\\
l_{\sigma(t),\sigma'(t)}^{-}\cdots l_{\sigma(r),\sigma'(r)}^{-}(T(v+z))& &(t>s,\ z\neq\tau(z)),\\
l_{\sigma(t),\sigma'(t)}^{+}\cdots l_{\sigma(r),\sigma'(r)}^{-}(T(v+z))& &(t\leq s,\ z\neq\tau(z)).
\end{align*}
Then the statement follows from Lemma \ref{dij-commute}(ii).
\item[(ii)] As $1\leq s\leq r\leq k$, every tableau that appears in the following elements has the same $(k,i)$th and $(k,j)$th entries:
\begin{align*}
[x-y]_q l_{\sigma(t),\sigma'(t)}^{-}\cdots l_{\sigma(r),\sigma'(r)}^{-}(T(v+z))& &(t>s),\\
[x-y]_q l_{\sigma(t),\sigma'(t)}^{+}\cdots l_{\sigma(r),\sigma'(r)}^{-}(T(v+z))& &(t\leq s),\\
l_{\sigma(t),\sigma'(t)}^{-}\cdots l_{\sigma(r),\sigma'(r)}^{-}(T(v+z))& &(t>s,z\neq\tau(z)),\\
l_{\sigma(t),\sigma'(t)}^{+}\cdots l_{\sigma(r),\sigma'(r)}^{-}(T(v+z))& &(t\leq s,z\neq\tau(z)).
\end{align*} Hence all of the listed vectors are in ${\mathcal F}_{ij} \otimes {V}(T(v))$ and Lemma \ref{dij-commute}(ii) completes the proof.
\end{itemize}
\end{proof}
\begin{lemma}\label{lem-ck2}
Assume $\tau(z)\neq z$. Then we have:
\begin{itemize}
\item[(i)] $c_{kr}(T(\bar{v}+z))=\gamma_{kr}(\bar{v}+z)T(\bar{v}+z)$
\item[(ii)] $(c_{kr}-\gamma_{kr}(\bar{v}+z))(\mathcal{D}T(\bar{v}+z))=0$ if and only if $r\in\{0,k\}$.
\item[(iii)] $(c_{kr}-\gamma_{kr}(\bar{v}+z))^{2}\mathcal{D}T(\bar{v}+z)=0.$
\end{itemize}
\end{lemma}
\begin{proof}
Recall that if $w$ is a $\mathcal{C}$-generic vector then $ c_{kr}T(w)=\gamma_{kr}(w)T(w)$, where $\gamma_{kr}(w)$ is a symmetric function in variables $w_{k1},\ldots,w_{kk}$.
\begin{itemize}
\item[(i)] By Lemma \ref{generators of Gamma acting in many tableaux}(ii), we have
\begin{align*}
c_{kr}(T(\bar{v}+z))=&\Dv([x-y]_qc_{kr}(T(v+z)))\\
=&\Dv([x-y]_q\gamma_{kr}(v+z)T(v+z))\\
=&\gamma_{kr}(\bar{v}+z)T(\bar{v}+z).
\end{align*}
\item[(ii)] Also by Lemma \ref{generators of Gamma acting in many tableaux}(ii) we have:
\begin{align*}
c_{kr}(\mathcal{D}T(\bar{v}+z)))=& \Dv(c_{kr}(T(v+z)))\\
=& \Dv(\gamma_{kr}(v+z)T(v+z))\\
=& \Dv(\gamma_{kr}(v+z))T(\bar{v}+z)+\gamma_{kr}(\bar{v}+z)\mathcal{D}T(\bar{v}+z)
\end{align*}
When $r=0$ or $r=k$, $\gamma_{kr}(v+z)$ is a symmetric function in the variables $v_{ki}, v_{kj}$, so $\Dv(\gamma_{kr}(v+z))=0$ and hence $c_{kr}(\mathcal{D}T(\bar{v}+z))=\gamma_{kr}(\bar{v}+z)\mathcal{D}T(\bar{v}+z)$.
When $1\leq r \leq k-1$, $\gamma_{kr}(v+z)$ is not symmetric, and
$\Dv(\gamma_{kr}(v+z))=\frac{a(q-q^{-1})^2[z_{ki}-z_{kj}]_q }{2}\neq 0$, where $a$ is the coefficient of $q^{(v_{ki}+z_{ki})-(v_{kj}+z_{kj})}$ in $\gamma_{kr}(v+z)$.
\item[(iii)] This part follows from (i) and (ii).
\end{itemize}
\end{proof}
The following statement follows directly from the action of generators of $U_q$ and the Gelfand-Tsetlin subalgebra.
\begin{lemma}\label{Gamma k separates tableaux}
Let $\Gamma_{k-1}$ be the subalgebra of $\Gamma$ generated by $\{c_{rs}:1\leq s\leq r\leq k-1\}$.
For any $m\in\mathbb{Z}_{\geq 0}$ let $R_{m}$ be the set of $z\in\mathbb{Z}^{\frac{n(n-1)}{2}}$ such that $|z_{ki}-z_{kj}|=m$.
\begin{itemize}
\item[(i)]
If $z,z'\in\mathbb{Z}^{\frac{n(n-1)}{2}}$ are such that $z_{rs}\neq z'_{rs}$ for some $1\leq s\leq r\leq k-1$, then $\Gamma_{k-1}$ separates the tableaux $T(\bar{v}+z)$ and $T(\bar{v}+z')$, that is, there exist $c\in\Gamma_{k-1}$ and $\gamma\in\mathbb{C}$ such that $(c-\gamma)T(\bar{v}+z)=0$ but $(c-\gamma)T(\bar{v}+z')\neq 0$.
\item[(ii)]
If $z\in R_{m}$ then there exists $\bar{z}\in R_{m+1}$ such that $T(\bar{v}+z)$ appears with non-zero coefficient in the decomposition of $f_{k}f_{k-1}\cdots f_{k-t}T(\bar{v}+\bar{z})$ for some $t\in\{0,1,\ldots,k-1\}$.
\end{itemize}
\end{lemma}
Now we can prove our second main result:
\begin{theorem}\label{GT module structure}
The module $V_{\mathcal{C}}(T(\bar{v}))$ is a $(1,\mathcal{C})$-singular Gelfand-Tsetlin module. Moreover, for any $z\in\mathbb{Z}^{\frac{n(n-1)}{2}}$ and any $1\leq r \leq s\leq n$ the following identities hold.
\begin{equation}\label{Gamma acting on T}
c_{rs}(T(\bar{v}+z))=\Dv([x-y]_qc_{rs}(T(v+z)))
\end{equation}
\begin{equation}\label{Gamma acting on DT}
c_{rs}(\mathcal{D}T(\bar{v}+z))= \Dv(c_{rs}(T(v+z))), \text{ if } z\neq \tau(z).
\end{equation}
\end{theorem}
\begin{proof}
Let $R_{\geq n} := \cup_{m\geq n} R_m$. For any $z\in R_{\geq n}$ consider the submodule $W_{z}$ of $V_{\mathcal{C}}(T(\bar{v}))$ generated by $T(\bar{v}+z)$. By Lemma \ref{generators of Gamma acting in many tableaux}(i)(a), $T(\bar{v}+z)$ is a common eigenvector of all generators of $\Gamma$ and thus $W_{z}$ is a Gelfand-Tsetlin module by Lemma \ref{lem-cyclic-Gelfand-Tsetlin}. Then $W=\sum_{z\in R_{\geq n}}W_{z}$ is also a Gelfand-Tsetlin module. We first show that $W$ contains the tableaux $T(\bar{v}+z)$ for all $z\in\mathbb{Z}^{\frac{n(n-1)}{2}}$.
Indeed, assume that $|z_{ki}-z_{kj}|= n-1$ and consider $T(\bar{v}+z)$. Then, by Lemma \ref{Gamma k separates tableaux} there exists $z'\in R_n$ and a nonzero $x\in U_{q}(\gl_{k+1})$ such that $xT(\bar{v}+z')=T(\bar{v}+z)$.
Let $c\in \Gamma$ be a central element with $(c-\gamma)T(\bar{v}+z')=0$ for some complex $\gamma$. Then $(c-\gamma)(T(\bar{v}+z))=(c-\gamma)xT(\bar{v}+z')=x(c-\gamma)T(\bar{v}+z')=0$.
Continuing analogously with the sets
$R_{n-2}, \ldots, R_0$ we show that any tableau $T(\bar{v}+z)$ belongs to $W$.
Consider the quotient $\overline{W}=V(T(\bar{v}))/W$.
The vector $\mathcal{D}T(\bar{v}+z)+W$ of $\overline{W}$ is a common eigenvector of $\Gamma$ by Lemma \ref{generators of Gamma acting in many tableaux}(i)(b) for any $z\in R_n$. We can repeat now the argument above substituting everywhere the tableaux
$T(\bar{v}+z)$ by $\mathcal{D}T(\bar{v}+z)$. Hence, $\overline{W}=\sum_{z\in R_n}\overline{W}_{z}$, where
$\overline{W}_{z}$ denotes the submodule of $\overline{W}$ generated by $\mathcal{D}T(\bar{v}+z) +W$.
By Lemma \ref{lem-cyclic-Gelfand-Tsetlin} we conclude that $\overline{W}$ is a Gelfand-Tsetlin module. Therefore,
$V_{\mathcal{C}}(T(\bar{v}))$ is a Gelfand-Tsetlin module with action of $\Gamma$ given by (\ref{Gamma acting on T}) and (\ref{Gamma acting on DT}).
\end{proof}
As a consequence of Theorem \ref{GT module structure} we have $\dim V_{\mathcal{C}}(T(\bar{v}))(\sm)\leq 2$ for any $\sm\in\Sp\Ga$. Here, a maximal ideal $\sm$ of $\Gamma_q$ is determined by the corresponding basis tableau via the formulas in Theorem \ref{GT module structure}. If the entries of this tableau which define the $1$-singular pair are distinct then the dimension of $V_{\mathcal{C}}(T(\bar{v}))(\sm)$ is exactly $2$. Moreover, Lemma \ref{lem-ck2} implies that the generators $c_{ki}$ of $\Gamma_q$ have Jordan cells of size $2$ on $V_{\mathcal{C}}(T(\bar{v}))$.
Finally, we have our third main result.
\begin{theorem}\label{thm-when L irred}
The module $V_{\mathcal{C}}(T(\bar{v}))$ is irreducible whenever $\mathcal{C}$ is a maximal set of relations for $T(\bar{v})$ and $\bar{v}_{rs}-\bar{v}_{r-1,t}\notin \mathbb{Z}+\frac{1(q)}{2}$ for any $1\leq t <r\leq n$, $1\leq s\leq r$ such that $(r,s)\notin \mathfrak{V}(\mathcal{C})$ or $(r-1,t)\notin \mathfrak{V}(\mathcal{C})$.
\end{theorem}
\begin{proof}
Let $z\in\mathbb{Z}^{\frac{n(n-1)}{2}}$ be such that $z\neq\tau(z)$ and set $w=\bar{v}+z$. If $w_{rs}-w_{r-1,t}\notin \frac{1(q)}{2} +\mathbb{Z}_{\geq 0}$ for any $r,s,t$, then the module $V_{\mathcal{C}}(T(\bar{v}))$ is generated by the two tableaux $T(w)$ and $\mathcal{D}T(w)$.
If instead $w_{rs}-w_{r-1,t}\notin \frac{1(q)}{2}+\mathbb{Z}_{\geq -1}$ for any $r,s,t$, then $V_{\mathcal{C}}(T(\bar{v}))$ is generated by $\mathcal{D}T(w)$ alone. Finally, if $z\neq\tau(z)$ and $\bar{v}_{rs}-\bar{v}_{r-1,t}\notin \frac{1(q)}{2}+\mathbb{Z}$ for any $1\leq t <r\leq n$, $1\leq s\leq r$, then $T(\bar{v}+z)$ generates $V_{\mathcal{C}}(T(\bar{v}))$.
This proves the statement.
\end{proof}
\section{New $(1,\mathcal{C})$-singular Gelfand-Tsetlin modules for $\gl_n$}
The three main results proved above, Theorems \ref{thm-main}, \ref{GT module structure} and \ref{thm-when L irred}, also hold when $q=1$; we keep the same notation as before, and the proofs are analogous. Therefore, we obtain a new family of $(1,\mathcal{C})$-singular Gelfand-Tsetlin modules for $\gl_n$ for any admissible set of relations $\mathcal{C}$ together with one additional singularity. These modules are irreducible for any maximal set $\mathcal{C}$. Namely, we have
\begin{theorem}\label{thm-gl(n)}
Let $\mathcal{C}$ be an admissible set of relations and $T(\bar{v})$ a $(1,\mathcal{C})$-singular tableau. Then
\begin{itemize}
\item[(i)] $V_{\mathcal{C}}(T(\bar{v}))$ is a $(1,\mathcal{C})$-singular Gelfand-Tsetlin $\gl_n$-module
with the action of the generators of $\mathfrak{gl}_{n}$ given by
\begin{align*}
g(T(\bar{v} + z))=&\ \mathcal{D}^{\bar{v}}((x - y)g(T(v + z)))\\
g(\mathcal{D}T(\bar{v} + z'))=&\ \mathcal{D}^{\bar{v}} (g(T(v + z'))),
\end{align*}
where $\mathcal{D}^{\bar{v}}(f) = \frac{1}{2}\left(\frac{\partial f}{\partial x}-\frac{\partial f}{\partial y}\right)(\bar{v})$. In the particular case of $\mathcal{C}=\emptyset$ we recover the $1$-singular modules constructed in \cite{FGR3}.
\item[(ii)] $V_{\mathcal{C}}(T(\bar{v}))$ is irreducible whenever $\mathcal{C}$ is the maximal set of relations satisfied by $T(\bar{v})$.
\end{itemize}
\end{theorem}
Fix a $(1,\mathcal{C})$-singular tableau $T(\bar{v})$ and consider the corresponding maximal ideal $\sm=\sm_{T(\bar{v})}$ of the Gelfand-Tsetlin subalgebra $\Gamma$ \cite{FGR3}. Since $\dim U(\gl_n)/U(\gl_n)\sm\leq 2$, there can be at most two non-isomorphic irreducible modules $V$ with $V(\sm)\neq 0$. It is not immediately clear why both of them are subquotients of $V_{\mathcal{C}}(T(\bar{v}))$, but this can be shown analogously to the proof of the similar statement for the $1$-singular case in \cite{FGR5}, Theorem 5.2. So, we have
\begin{theorem}\label{thm-exhaust} Let $T(\bar{v})$ be a $(1,\mathcal{C})$-singular tableau and $\sm=\sm_{T(\bar{v})}$.
Then any irreducible Gelfand-Tsetlin $\gl_n$-module $V$ with $V(\sm)\neq 0$ is isomorphic to a subquotient of $V_{\mathcal{C}}(T(\bar{v}))$.
\end{theorem}
Theorem \ref{thm-exhaust} completes a classification of $(1,\mathcal{C})$-singular irreducible Gelfand-Tsetlin modules for $\gl_n$.
\appendix
\section{}\label{Section: Appendix}
For a function $f = f(v)$ by $f^{\tau}$ we denote the function $f^{\tau} (v) = f (\tau (v))$.
The following lemma can be easily verified.
\begin{lemma}\label{dif operaator in functions}
Suppose $f\in {\mathcal F}_{ij}$ and $h:=\frac{f-f^{\tau}}{[x-y]_q}$.
\begin{itemize}
\item[(i)]${\mathcal D}^{\bar{v}}(f^{\tau})=-{\mathcal D}^{\bar{v}}(f)$.
\item[(ii)] If $f=f^{\tau}$, then ${\mathcal D}^{\bar{v}}(f)=0$.
\item[(iii)] If $h\in {\mathcal F}_{ij}$, then ${\rm ev} (\bar{v})(h)=2{\mathcal D}^{\bar{v}} (f)$.
\item[(iv)] ${\rm ev} (\bar{v})(f)={\mathcal D}^{\bar{v}}([x-y]_qf)$.
\item[(v)] $\mathcal{D}^{\bar{v}} (f_1f_2 T(v+z)) = \mathcal{D}^{\bar{v}} (f_1)f_2(\bar{v}) T(\bar{v}+z) + f_1(\bar{v}) \mathcal{D} ^{\bar{v}} (f_2T(\bar{v}+z))$ for any $f_1,f_2\in {\mathcal F}_{ij}$.
\item[(vi)] $\Dv([x-y]_qf)=\Dv([x-y]_qf^{\tau})$.
\end{itemize}
\end{lemma}
\begin{proof} Items (i)-(v) follow by definition and direct computation, and item (vi) follows from the following:
\begin{equation}
\Dv([x-y]_qf^{\tau})=-\Dv([y-x]_qf)=\Dv([x-y]_qf).
\end{equation}
\end{proof}
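For example, item (i) of the lemma is the following direct computation: since $\tau$ interchanges the variables $x$ and $y$, the chain rule gives $(\partial_x f^{\tau})(x,y)=(\partial_y f)(y,x)$ and $(\partial_y f^{\tau})(x,y)=(\partial_x f)(y,x)$; evaluating at $\bar{v}$, where $x=y$, we get
\begin{equation*}
\mathcal{D}^{\bar{v}}(f^{\tau})
=\frac{q-q^{-1}}{4\ln q}\bigl(\partial_y f-\partial_x f\bigr)(\bar{v})
=-\mathcal{D}^{\bar{v}}(f).
\end{equation*}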
\begin{lemma}\label{sym formula}
Let $f,g$ be functions in ${\mathcal F}_{ij}$ such that $g=g^{\tau}$. Then the following identity holds.
\begin{align*}
\Dv (f) \Dv([x-y]_q g)=\Dv(fg).
\end{align*}
\end{lemma}
\begin{proof}
This is easily verified from the definition and Lemma \ref{dif operaator in functions} (ii), (iii), (iv).
\end{proof}
\begin{lemma} \label{dv-formulas}
Let $f_m, g_m$, $m=1, \ldots, t$, be functions such that $f_m$, $[x-y]_qg_m$, and $\sum_{m=1}^t f_m g_m$ are in ${\mathcal F}_{ij}$ and $g_m \notin {\mathcal F}_{ij}$. Assume also that $\Dv \left(\sum_{m=1}^t f_m g_m^{\tau}\right)=0$. Then the following identities hold.
\begin{itemize}
\item[(i)] $2\sum\limits_{m=1}^t{\mathcal D}^{\bar{v}} (f_m) {\mathcal D}^{\bar{v}} ( [x-y]_q g_m) = {\mathcal D}^{\bar{v}} \left(\sum_{m=1}^t f_m g_m\right)$.
\item[(ii)] $2\sum\limits_{m=1}^t{\mathcal D}^{\bar{v}} (f_m) {\rm ev} (\bar{v}) ( [x-y]_q g_m) = {\rm ev} (\bar{v}) \left(\sum_{m=1}^t f_mg_m\right)$.
\end{itemize}
\end{lemma}
\begin{proof} Set for simplicity $\bar{g}_m = [x-y]_qg_m$. For (i) we use Lemma \ref{dif operaator in functions} and obtain
\begin{eqnarray*}
{\mathcal D}^{\bar{v}}\left( \sum_{m=1}^t f_mg_m\right) & = & {\mathcal D}^{\bar{v}} \left(\sum_{m=1}^t f_mg_m+ \sum_{m=1}^t f_mg_m^{\tau}\right)\\
& = & \Dv \left(\sum_{m=1}^t f_m \frac{\bar{g}_m - (\bar{g}_m)^{\tau}}{[x-y]_q}
\right)\\
& = &\sum_{m=1}^t \left( {\mathcal D}^{\bar{v}} (f_m) {\rm ev}(\bar{v}) \left( \frac{\overline{g}_m - (\overline{g}_m)^{\tau}}{[x-y]_q} \right) + {\rm ev}(\bar{v}) (f_m) {\mathcal D}^{\bar{v}} \left( \frac{\overline{g}_m - (\overline{g}_m)^{\tau}}{[x-y]_q} \right)\right)\\
& = &2 \sum_{m=1}^t {\mathcal D}^{\bar{v}} (f_m) {\mathcal D}^{\bar{v}} (\overline{g}_m).
\end{eqnarray*}
For (ii) we use similar arguments.
\end{proof}
\subsection{Proof of Proposition \ref{compatible}}
For any set of relations $\mathcal{C}$ we set
\begin{equation}
\Phi(L,z_1,\ldots,z_m)=
\left\{
\begin{array}{cc}
1,& \text{ if }T(L+z_1+\ldots+z_t)\in \mathcal {B}_{\mathcal{C}}(T(L)) \text{ for any } t,\\
0,& \text{ otherwise}.
\end{array}
\right.
\end{equation}
The action of generators can be expressed as follows:
\begin{equation}
\begin{split}
e_{k}(T(v))&=\sum_{\substack{1\leq j\leq k\\ \Phi(v,\delta^{kj})=1}}e_{kj}(v)T(v+\delta^{kj}),\\
f_{k}(T(v))&=\sum_{\substack{1\leq j\leq k\\ \Phi(v,-\delta^{kj})=1}}f_{kj}(v)T(v-\delta^{kj}).\\
\end{split}
\end{equation}
Let $g_{i_t}=e_{i_t}\text{ or } f_{i_t}$. For any product of generators $g_{i_1},\ldots,g_{i_r}$, the action on $T(v)$ can be expressed as
\begin{equation}\label{example phi}
\begin{split}
\sum_{\Phi(v,\pm\delta^{i_1j_1},\ldots,\pm\delta^{i_{r-1}j_{r-1}})=1}g_{i_rj_r}(v\pm\delta^{i_1j_1} \ldots \pm\delta^{i_rj_r})\cdots g_{i_1j_1}(v)
T(v\pm\delta^{i_1j_1} \ldots \pm\delta^{i_rj_r}).
\end{split}
\end{equation}
In the following all the sums satisfy the condition $\Phi=1$ as in \eqref{example phi}.
For part (i)
\begin{eqnarray*}
{\mathcal D}^{\bar{v}} ([x-y]_q e_{r} T(v+z))&= &\sum\limits_{s=1}^{r}{\mathcal D}^{\bar{v}}
([x-y]_q e_{rs}(v+z)T(v+z+\delta^{rs}))\\
&=&\sum\limits_{s=1}^{r}{\mathcal D}^{\bar{v}}([x-y]_qe_{rs}(v+z))\, T(\ov+z+\delta^{rs})\\
& &+\sum\limits_{s=1}^{r}{\rm ev} (\ov) ([x-y]_q e_{rs}(v+z))\,{\mathcal D} T(\ov+z+\delta^{rs}).
\end{eqnarray*}
The same formula holds for ${\mathcal D}^{\bar{v}} ([x-y]_q e_{r} T(v+\tau(z)))$ after replacing $z$ with $\tau(z)$ on the right hand side. If $r\neq k$, then $e_{rs}(v+z)\in {\mathcal F}_{ij}$. Thus
\begin{eqnarray*}
&{\mathcal D}^{\bar{v}} ([x-y]_qe_{rs}(v+z)T(v+z+\delta^{rs}))={\rm ev} (\ov)(e_{rs}(v+z))\,T(\ov+z+\delta^{rs}),\\
&{\mathcal D}^{\bar{v}} ([x-y]_qe_{rs}(v+\tau(z))T(v+\tau(z)+\delta^{rs}))={\rm ev} (\ov)(e_{rs}(v+\tau(z)))\,T(\ov+\tau(z)+\delta^{rs}).
\end{eqnarray*}
Since $T(\ov+z+\delta^{rs})=T(\ov+\tau(z)+\delta^{rs})$ and
${\rm ev} (\ov)(e_{rs}(v+z))={\rm ev} (\ov)(e_{rs}(v+\tau(z)))$, we obtain
${\mathcal D}^{\bar{v}} ([x-y]_q e_{r} T(v+z))={\mathcal D}^{\bar{v}} ([x-y]_q e_{r} T(v+\tau(z)))$ for $r\neq k$.
Suppose now $r=k$. Then ${\mathcal D}^{\bar{v}} ([x-y]_qe_{ks}(v+z)T(v+z+\delta^{ks}))={\mathcal D}^{\bar{v}} ([x-y]_qe_{ks}(v+\tau(z))T(v+\tau(z)+\delta^{ks}))$ whenever $s\neq i,j$.
Now we consider the case $s\in\{i,j\}$. Since $\tau(z+\delta^{ki})=\tau(z)+\delta^{kj}$, we have
$T(\ov+z+\delta^{ki})=T(\ov+\tau(z)+\delta^{kj})$ and
${\mathcal D}T(\ov+z+\delta^{ki})=-{\mathcal D}T(\ov+\tau(z)+\delta^{kj})$.
The following equations follow from Lemma \ref{dif operaator in functions}.
\begin{align}
{\mathcal D}^{\bar{v}}([x-y]_qe_{ki}(v+z))={\mathcal D}^{\bar{v}}([x-y]_qe_{kj}(v+\tau(z))), \\
{\rm ev} (\ov) ([x-y]_q e_{ki}(v+z))=-{\rm ev} (\ov) ([x-y]_q e_{kj}(v+\tau(z))).
\end{align}
The first statement for $e_r$ is proved, and the proof for $f_r$ is similar. Since $q^{\varepsilon_r}T(v+z)\in {\mathcal F}_{ij}\otimes T(v+z)$, it is easy to see that statement (i) holds for $q^{\varepsilon_r}$.
The proof of part (ii) is similar.
\subsection{Proof of Proposition \ref{d-t-v-rep}}
Denote
\begin{align}
e_{kr}(L)=
-\frac{\prod_{s=1}^{k+1}[l_{kr}-l_{k+1,s}]_q}{\prod_{s\neq r}^{k}[l_{kr}-l_{ks}]_q},\\
f_{kr}(L)=
\frac{\prod_{s=1}^{k-1}[l_{kr}-l_{k-1,s}]_q}{\prod_{s\neq r}^{k}[l_{kr}-l_{ks}]_q},\\
h_{k}(L)=
q^{\sum_{r=1}^{k}l_{kr}-\sum_{r=1}^{k-1}l_{k-1,r}+k}.
\end{align}
\begin{proof}
Since $e_r T(v + z), f_r T(v + z)\in {\mathcal F}_{ij} \otimes {V}(T(v))$ and
$q^h T(v + z)\in {\mathcal F}_{ij} \otimes T(v + z)$, statements (i), (ii) and (iii) follow directly from Lemma \ref{dij-commute}.
\emph{Proof of $(iv)$.} In the cases $r\neq k$, $s\neq k$, or $r=s=k$ with $|z_{ki}-z_{kj}|\geq 2$, the equality holds by Lemma \ref{dij-commute}.\\
In the following we assume $r=s=k$ and $|z_{ki}-z_{kj}|=1$. Without loss of generality we assume $z_{ki}=0$, $z_{kj}=1$.
\begin{align*}
&(e_{k}f_{k}-f_{k}e_{k})(\mathcal{D}T(\bar{v} + z))\\
=&e_{k} \mathcal{D}^{\ov}(f_{k}T(v + z))- f_{k} \mathcal{D}^{\ov}(e_{k}T(v + z))\\
=&e_{k} \mathcal{D}^{\ov}\left(\sum_{r=1}^{k} f_{kr}(v + z)T(v + z- \delta^{kr})\right)-
f_{k} \mathcal{D}^{\ov}\left(\sum_{s=1}^{k}e_{ks}(v + z)T(v + z+\delta^{ks})\right)
\end{align*}
If $r\neq j$, then $e_kf_{kr}(v + z)T(v + z- \delta^{kr})\in {\mathcal F}_{ij} \otimes {V}(T(v))$.
By Lemma \ref{dij-commute} one has that $e_{k} \Dv( f_{kr}(v + z)T(v + z- \delta^{kr})) =\Dv (e_{k}f_{kr}(v + z)T(v + z- \delta^{kr}))$. Similarly, if $s\neq i$, then
$f_{k} \Dv ( e_{ks}(v + z)T(v + z+ \delta^{ks})) =\Dv (f_{k} e_{ks}(v + z)T(v + z+ \delta^{ks}))$. Thus
\begin{align*}
&(e_{k}f_{k}-f_{k}e_{k})(\mathcal{D}T(\bar{v} + z))\\
=& \Dv \left(\sum_{r\neq j}e_{k} f_{kr}(v + z)T(v + z- \delta^{kr})\right)
+e_{k} \Dv( f_{kj}(v + z)T(v + z- \delta^{kj}))\\
&- \Dv\left(\sum_{s\neq i}f_{k} e_{ks}(v + z)T(v + z+\delta^{ks})\right)
-f_{k} \Dv( e_{ki}(v + z)T(v + z+ \delta^{ki}))
\end{align*}
The action of $e_kf_k-f_ke_k$ on $T(v+z)$ is as follows
\begin{align*}
(e_kf_k-f_ke_k)T(v + z)=&\sum_{r,s=1}^{k} f_{kr}(v+z)e_{ks}(v + z-\delta^{kr})T(v + z- \delta^{kr}+\delta^{ks})\\
&-\sum_{r,s=1}^{k} e_{ks}(v+z)f_{kr}(v + z+\delta^{ks})T(v + z+ \delta^{ks}-\delta^{kr})\\
=&h_k(v+z)T(v + z).
\end{align*}
Comparing coefficients, one has that
$f_{kr}(v+z)e_{ks}(v + z-\delta^{kr})-e_{ks}(v+z)f_{kr}(v + z+\delta^{ks})=0$ when $r\neq s$, and
$\sum_{r=1}^{k} \left(f_{kr}(v+z)e_{kr}(v + z-\delta^{kr})
-e_{kr}(v+z)f_{kr}(v + z+\delta^{kr})\right)=h_k(v+z)$.
Then
\begin{align*}
&(e_{k}f_{k}-f_{k}e_{k})(\mathcal{D}T(\bar{v} + z))\\
=& \Dv\left(\sum_{r\neq j}\sum_{s\in\{i,r\}}f_{kr}(v + z) e_{ks}(v + z- \delta^{kr}) T(v + z- \delta^{kr}+\delta^{ks})\right)\\
+& \Dv( f_{kj}(v + z))\Dv\left(\sum_{s=1}^{k}[x-y]_qe_{ks}(v + z- \delta^{kj})T(v + z- \delta^{kj}+\delta^{ks})\right)\\
-&\Dv\left(\sum_{s\neq i}\sum_{r\in\{j,s\}} e_{ks}(v + z)f_{kr}(v + z+\delta^{ks}) T(v + z+\delta^{ks}-\delta^{kr})\right)\\
-& \Dv( e_{ki}(v + z))\Dv\left(\sum_{r=1}^{k}[x-y]_qf_{kr}(v + z+ \delta^{ki})T(v + z+ \delta^{ki}-\delta^{kr})\right)
\end{align*}
Now we consider the coefficients of $T(\ov + z+ \delta^{ks}-\delta^{kr})$ and
$\mathcal{D}T(\ov + z+ \delta^{ks}-\delta^{kr})$.
When $r\neq s$, $f_{kr}(v+z)e_{ks}(v + z-\delta^{kr})-e_{ks}(v+z)f_{kr}(v + z+\delta^{ks})=0$. If $r\neq i,j$, the coefficient of $T(\ov + z- \delta^{kr}+\delta^{ki})$ is $\Dv(f_{kr}(v + z) e_{ki}(v + z- \delta^{kr}))-
\Dv( e_{ki}(v + z))\Dv([x-y]_qf_{kr}(v + z+ \delta^{ki}))$. It follows from Lemma \ref{sym formula} that the coefficients of $T(\ov + z+ \delta^{ks}-\delta^{kr})$ and
$\mathcal{D}T(\ov + z+ \delta^{ks}-\delta^{kr})$ are zero.
\noindent
Similarly one has that the coefficient of $T(\ov + z+\delta^{ks}-\delta^{kj})$ is zero when $s\neq i,j$.
By definition $\mathcal{D} T(\ov+ z- \delta^{kr}+\delta^{ki})=0$ if $r\neq i$ and $\mathcal{D} T(\ov+ z- \delta^{kj}+\delta^{ks})=0$ if $s\neq j$.
Since $T(\ov + z+ \delta^{ki}-\delta^{kj})=T(\ov + z)$ and $\D T(\ov + z+ \delta^{ki}-\delta^{kj})=-\D T(\ov + z)$,
all the remaining tableaux are $T(\ov + z)$ and $\D T(\ov + z)$.
The coefficient of $T(\ov + z)$ is as follows
\begin{align*}
&\sum_{r\neq j}\Dv(f_{kr}(v + z) e_{kr}(v + z- \delta^{kr}))
-\sum_{s\neq i}\Dv( e_{ks}(v + z)f_{ks}(v + z+\delta^{ks}))\\
&+\Dv( f_{kj}(v + z))\sum_{s\in\{i,j\}}\Dv([x-y]_qe_{ks}(v + z- \delta^{kj}))\\
&-\Dv( e_{ki}(v + z))\sum_{r\in\{i,j\}}\Dv([x-y]_qf_{kr}(v + z+ \delta^{ki}))
\end{align*}
\begin{align*}
=&\Dv( e_{ki}(v + z)f_{ki}(v + z+\delta^{ki})-f_{kj}(v + z) e_{kj}(v + z- \delta^{kj}))\\
&+\Dv( f_{kj}(v + z))\Dv([x-y]_qe_{ki}(v + z- \delta^{kj}))\\
&+\Dv( f_{kj}(v + z))\Dv([x-y]_qe_{kj}(v + z- \delta^{kj}))\\
&-\Dv( e_{ki}(v + z))\Dv([x-y]_qf_{ki}(v + z+ \delta^{ki}))\\
&-\Dv( e_{ki}(v + z))\Dv([x-y]_qf_{kj}(v + z+ \delta^{ki}))
\end{align*}
\noindent
Since $ e_{ki}(v + z- \delta^{kj})= e_{kj}(v + z- \delta^{kj})^{\tau}$ and
$ f_{kj}(v + z+ \delta^{ki})= f_{ki}(v + z+ \delta^{ki})^{\tau} $, one has
$f_{kj}(v+z)e_{ki}(v + z-\delta^{kj})-e_{ki}(v+z)f_{kj}(v + z+\delta^{ki})=0$.
By Lemma \ref{dv-formulas} the coefficient of $T(\ov + z)$ is zero.
Since $\D T(\ov + z+ \delta^{ki}-\delta^{kj})=-\D T(\ov + z )$, the coefficient of $\mathcal{D}T(\ov + z)$ is
\begin{align*}
&\sum_{r\neq j}\ev(f_{kr}(v + z) e_{kr}(v + z- \delta^{kr}))\\
&-\sum_{s\neq i}\ev( e_{ks}(v + z)f_{ks}(v + z+\delta^{ks}))\\
&+ \Dv( f_{kj}(v + z)) \left({\rm ev} (\ov)\big([x-y]_qe_{kj}(v + z- \delta^{kj})\big)-{\rm ev} (\ov)\big([x-y]_qe_{ki}(v + z- \delta^{kj})\big)\right)\\
&+\Dv( e_{ki}(v + z))\left({\rm ev} (\ov)\big([x-y]_qf_{kj}(v + z+ \delta^{ki})\big)- {\rm ev} (\ov)\big([x-y]_qf_{ki}(v + z+ \delta^{ki})\big)\right)
\end{align*}
\begin{align*}
=&\ev(h_k(v+z))\\
&+\ev( e_{ki}(v + z)f_{ki}(v + z+\delta^{ki})-f_{kj}(v + z) e_{kj}(v + z- \delta^{kj}))\\
&+ \Dv( f_{kj}(v + z)) \left({\rm ev} (\ov)\big([x-y]_qe_{kj}(v + z- \delta^{kj})\big)-{\rm ev} (\ov)\big([x-y]_qe_{ki}(v + z- \delta^{kj})\big)\right)\\
&+\Dv( e_{ki}(v + z))\left({\rm ev} (\ov)\big([x-y]_qf_{kj}(v + z+ \delta^{ki})\big)- {\rm ev} (\ov)\big([x-y]_qf_{ki}(v + z+ \delta^{ki})\big)\right)
\end{align*}
By Lemma \ref{dv-formulas} one has that
\begin{align*}
&\ev( e_{ki}(v + z)f_{ki}(v + z+\delta^{ki})-f_{kj}(v + z) e_{kj}(v + z- \delta^{kj}))\\
&- \Dv( f_{kj}(v + z)) {\rm ev} (\ov)\big([x-y]_qe_{ki}(v + z- \delta^{kj})\big)\\
&+\Dv( f_{kj}(v + z) ){\rm ev} (\ov)\big([x-y]_qe_{kj}(v + z- \delta^{kj})\big)\\
&-\Dv( e_{ki}(v + z)){\rm ev} (\ov)\big([x-y]_qf_{ki}(v + z+ \delta^{ki})\big)\\
&+\Dv( e_{ki}(v + z)){\rm ev} (\ov)\big([x-y]_qf_{kj}(v + z+ \delta^{ki})\big)\\
&=0.
\end{align*}
Then the coefficient of $\D T(\ov+z)$ is $\ev(h_k(v+z))$. Thus
$(e_{k}f_{k}-f_{k}e_{k})(\D T(\ov + z))= h_k(\D T(\ov + z))$.
\emph{Proof of $(v)$.} $(e_{r}^2e_{s}-(q+q^{-1})e_re_se_r+e_se_{r}^2) (\D T(\ov + z))=0 \quad (|r-s|=1)$.
If $r,s\neq k$ or $|z_{ki}-z_{kj}|\geq 2$, one has that $(e_{r}^2e_{s}-(q+q^{-1})e_re_se_r+e_se_{r}^2) (\D T(\ov + z))=\Dv\left((e_{r}^2e_{s}-(q+q^{-1})e_re_se_r+e_se_{r}^2) T(v + z)\right)=0$. Suppose $r=k$, $s=k+1$, $|z_{ki}-z_{kj}|=1$. Without loss of generality, we assume that $z_{ki}=0$, $z_{kj}=1$. In the following formulas we write $e_{rs}(v+z+w)$, $T(v+z+w_1+w_2+\cdots+w_t)$, $\D T(v+z+w_1+\cdots+w_t)$ as $e_{rs}(w)$, $T(w_1,\ldots,w_t)$, $\D T(w_1,\ldots,w_t)$ respectively.
\begin{align*}
&(e_{k}^2e_{k+1}-(q+q^{-1})e_ke_{k+1}e_k+e_{k+1}e_{k}^2) (\D T(\ov + z))\\
=&\sum_{r,t}\sum_{s\neq i}\Dv( e_{k+1,r}(0)e_{k,s}(\delta^{k+1,r})e_{k,t}(\delta^{k+1,r}+\delta^{k,s})
T(\delta^{k+1,r},\delta^{k,s},\delta^{k,t}))\\
&+\sum_{r,t} \Dv(e_{k+1,r}(0)e_{k,i}(\delta^{k+1,r}))
\Dv([x-y]_qe_{k,t}(\delta^{k+1,r}+\delta^{k,i} )T(\delta^{k+1,r},\delta^{k,i},\delta^{k,t}))\\
&-(q+q^{-1}) \sum_{r,t}\sum_{s\neq i}\Dv(e_{k,s}(0)e_{k+1,r}(\delta^{k,s})e_{k,t}(\delta^{k,s}+\delta^{k+1,r})
T(\delta^{k,s},\delta^{k+1,r},\delta^{k,t}))\\
&-(q+q^{-1})\sum_{r,t} \Dv(e_{k,i}(0)e_{k+1,r}(\delta^{k,i}))
\Dv([x-y]_qe_{k,t}(\delta^{k,i}+\delta^{k+1,r})T(\delta^{k,i},\delta^{k+1,r},\delta^{k,t}))\\
&+\sum_{r,t}\sum_{s\neq i}
\Dv(e_{k,s}(0)e_{k,t}(\delta^{k,s})e_{k+1,r}(\delta^{k,s}+\delta^{k,t})T(\delta^{k,s},\delta^{k,t},\delta^{k+1,r}))\\
&+\sum_{r,t} \Dv(e_{k,i}(0))
\Dv([x-y]_qe_{k,t}(\delta^{k,i})e_{k+1,r}(\delta^{k,i}+\delta^{k,t})T(\delta^{k,i},\delta^{k,t},\delta^{k+1,r})).
\end{align*}
\noindent
Since $(e_{k}^2e_{k+1}-(q+q^{-1})e_ke_{k+1}e_k+e_{k+1}e_{k}^2)T(v)=0$, the coefficients of $T(\delta^{k,s}+\delta^{k,t}+\delta^{k+1,r})$ and $\D T(\delta^{k,s}+\delta^{k,t}+\delta^{k+1,r})$ are zero when $s,t\neq i$.
The coefficient of $T(\ov+z+\delta^{ki}+\delta^{ks}+\delta^{k+1,r})$, $s\neq i,j$ is
\begin{align*}
&\Dv (e_{k+1,r}(v+z)e_{k,s}(v+z+\delta^{k+1,r})e_{k,i}(v+z+\delta^{k+1,r}+\delta^{k,s}))\\
+&\Dv(e_{k+1,r}(v+z)e_{k,i}(v+z+\delta^{k+1,r}))\Dv([x-y]_qe_{k,s}(v+z+\delta^{k+1,r}+\delta^{k,i}))\\
-&(q+q^{-1}) \Dv( e_{k,s}(v+z)e_{k+1,r}(v+z+\delta^{k,s})e_{k,i}(v+z+\delta^{k,s}+\delta^{k+1,r}))\\
-&(q+q^{-1}) \Dv(e_{k,i}(v+z)e_{k+1,r}(v+z+\delta^{k,i}))\Dv([x-y]_qe_{k,s}(v+z+\delta^{k,i}+\delta^{k+1,r}))\\
+&\Dv( e_{k,s}(v+z)e_{k,i}(v+z+\delta^{k,s})e_{k+1,r}(v+z+\delta^{k,s}+\delta^{k,i}))\\
+&\Dv(e_{k,i}(v+z))\Dv([x-y]_qe_{k,s}(v+z+\delta^{k,i})e_{k+1,r}(v+z+\delta^{k,i}+\delta^{k,s})).
\end{align*}
Under these conditions, $e_{k,s}(v+z+\delta^{k+1,r}+\delta^{k,i})$ is a symmetric function in $\mathcal{F}_{ij}$, so
\begin{align*}
\Dv([x-y]_qe_{k,s}(v+z+\delta^{k+1,r}+\delta^{k,i}))
&=\ev(e_{k,s}(v+z+\delta^{k+1,r}+\delta^{k,i})),\\
\Dv( e_{k,s}(v+z+\delta^{k+1,r}+\delta^{k,i}))&=0
\end{align*}
which implies
\begin{multline*}
\Dv(e_{k+1,r}(v+z)e_{k,i}(v+z+\delta^{k+1,r}))\Dv([x-y]_qe_{k,s}(v+z+\delta^{k+1,r}+\delta^{k,i}))=\\
\Dv(e_{k+1,r}(v+z)e_{k,i}(v+z+\delta^{k+1,r}))e_{k,s}(v+z+\delta^{k+1,r}+\delta^{k,i})
\end{multline*}
Similarly one has that
\begin{align*}
\Dv(e_{k,i}(v+z)e_{k+1,r}(v+z+\delta^{k,i}))
\Dv([x-y]_qe_{k,s}(v+z+\delta^{k,i}+\delta^{k+1,r}))\\
=\Dv(e_{k,i}(v+z)e_{k+1,r}(v+z+\delta^{k,i}))
e_{k,s}(v+z+\delta^{k,i}+\delta^{k+1,r}),\\
\Dv(e_{k,i}(v+z))\Dv([x-y]_qe_{k,s}(v+z+\delta^{k,i})e_{k+1,r}(v+z+\delta^{k,i}+\delta^{k,s}))\\
=\Dv(e_{k,i}(v+z)) e_{k,s}(v+z+\delta^{k,i})e_{k+1,r}(v+z+\delta^{k,i}+\delta^{k,s}).
\end{align*}
Thus the coefficient of $T(\ov+z+\delta^{ki}+\delta^{ks}+\delta^{k+1,r})$ is zero.
The coefficient of $T(\ov+z+2\delta^{ki} +\delta^{k+1,r})$ is as follows
\begin{align*}
&\Dv( e_{k+1,r}(v+z)e_{k,j}(v+z+\delta^{k+1,r})e_{k,i}(v+z+\delta^{k+1,r}+\delta^{k,j}))\\
+&\Dv(e_{k+1,r}(v+z)e_{k,i}(v+z+\delta^{k+1,r}))\Dv([x-y]_qe_{k,i}(v+z+\delta^{k+1,r}+\delta^{k,i}))\\
+&\Dv(e_{k+1,r}(v+z)e_{k,i}(v+z+\delta^{k+1,r}))\Dv([x-y]_qe_{k,j}(v+z+\delta^{k+1,r}+\delta^{k,i}))\\
-&(q+q^{-1}) \Dv(e_{k,j}(v+z)e_{k+1,r}(v+z+\delta^{k,j})e_{k,i}(v+z+\delta^{k,j}+\delta^{k+1,r}))\\
-&(q+q^{-1}) \Dv(e_{k,i}(v+z)e_{k+1,r}(v+z+\delta^{k,i}))\Dv([x-y]_qe_{k,i}(v+z+\delta^{k,i}+\delta^{k+1,r}))\\
-&(q+q^{-1}) \Dv(e_{k,i}(v+z)e_{k+1,r}(v+z+\delta^{k,i}))\Dv([x-y]_qe_{k,j}(v+z+\delta^{k,i}+\delta^{k+1,r}))\\
+&\Dv(e_{k,j}(v+z)e_{k,i}(v+z+\delta^{k,j})e_{k+1,r}(v+z+\delta^{k,j}+\delta^{k,i}))\\
+&\Dv(e_{k,i}(v+z))\Dv([x-y]_qe_{k,i}(v+z+\delta^{k,i})e_{k+1,r}(v+z+2\delta^{k,i}))\\
+&\Dv(e_{k,i}(v+z))\Dv([x-y]_qe_{k,j}(v+z+\delta^{k,i})e_{k+1,r}(v+z+\delta^{k,i}+\delta^{k,j})).
\end{align*}
It follows from Lemma \ref{dv-formulas} that
\begin{align*}
&\Dv(e_{k+1,r}(v+z)e_{k,i}(v+z+\delta^{k+1,r}))\Dv([x-y]_qe_{k,i}(v+z+\delta^{k+1,r}+\delta^{k,i}))\\
+&\Dv(e_{k+1,r}(v+z)e_{k,i}(v+z+\delta^{k+1,r}))\Dv([x-y]_qe_{k,j}(v+z+\delta^{k+1,r}+\delta^{k,i}))\\
-&(q+q^{-1}) \Dv(e_{k,i}(v+z)e_{k+1,r}(v+z+\delta^{k,i}))\Dv([x-y]_qe_{k,i}(v+z+\delta^{k,i}+\delta^{k+1,r}))\\
-&(q+q^{-1}) \Dv(e_{k,i}(v+z)e_{k+1,r}(v+z+\delta^{k,i}))\Dv([x-y]_qe_{k,j}(v+z+\delta^{k,i}+\delta^{k+1,r}))\\
+&\Dv(e_{k,i}(v+z))\Dv([x-y]_qe_{k,i}(v+z+\delta^{k,i})e_{k+1,r}(v+z+2\delta^{k,i}))\\
+&\Dv(e_{k,i}(v+z))\Dv([x-y]_qe_{k,j}(v+z+\delta^{k,i})e_{k+1,r}(v+z+\delta^{k,i}+\delta^{k,j}))\\
=&\Dv ( e_{k+1,r}(v+z)e_{k,i}(v+z+\delta^{k+1,r}))e_{k,i}(v+z+\delta^{k+1,r}+\delta^{k,i}) \\
-&(q+q^{-1})\Dv(e_{k,i}(v+z)e_{k+1,r}(v+z+\delta^{k,i}))e_{k,i}(v+z+\delta^{k,i}+\delta^{k+1,r})\\
+&\Dv(e_{k,i}(v+z))e_{k,i}(v+z+\delta^{k,i})e_{k+1,r}(v+z+2\delta^{k,i}).
\end{align*}
Then the coefficient of $T(\ov+z+2\delta^{ki} +\delta^{k+1,r})$ is zero.
Similarly one has that the coefficients of $\D T(\ov+z+ \delta^{ki} +\delta^{ks}+\delta^{k+1,r}),s\neq i,j$ and
$\D T(\ov+z+2\delta^{ki} +\delta^{k+1,r})$ are zero.
Thus $(e_{k}^2e_{k+1}-(q+q^{-1})e_ke_{k+1}e_k+e_{k+1}e_{k}^2) (\D T(\ov + z))=0$.
\end{proof}
\end{document}
\begin{document}
\title{But that's not why:\\
Inference adjustment by interactive prototype deselection}
\begin{abstract}
Despite significant advances in machine learning, decision-making of artificial agents is still not perfect and often requires post-hoc human intervention. If the prediction of a model relies on unreasonable factors, it is desirable to remove their effect. Deep interactive prototype adjustment enables the user to give hints and correct the model's reasoning. In this paper, we demonstrate that prototypical-part models are well suited for this task as their prediction is based on prototypical image patches that can be interpreted semantically by the user. We show that even correct classifications can rely on unreasonable prototypes that result from confounding variables in a dataset. Hence, we propose simple yet effective interaction schemes for inference adjustment: the user is consulted interactively to identify faulty prototypes. Non-object prototypes can be removed by prototype masking or a custom mode of deselection training. Interactive prototype rejection allows machine-learning-na\"{i}ve users to adjust the logic of reasoning without compromising the accuracy.
\end{abstract}
\begin{keywords}
Prototype Learning, Human-AI-Interaction, Interactive ML, Prototype Deselection, Inference Adjustment
\end{keywords}
\section{Introduction}
\label{sec:intro}
Human learning typically involves communication between individuals based on shared attributes of symbolic mental representations \cite{Planer2021Symbolic}. However, sub-symbolic processing in artificial neural networks hinders analogous interactions between humans and machines. To overcome this representational divide, Interactive Machine Learning (IML) strives to enable novel means of collaborative learning.\\
The concepts learned by most neural networks can neither be accessed nor interpreted directly. Consequently, it is difficult to incorporate user feedback by a direct modification of the parameters. Instead, IML often relies on additional training with modified data \parencite{ho_deep_2020,wang_deepigeos_2018}: for example, an initial prediction can be refined interactively by re-using manually corrected samples in a segmentation scenario \parencite{ho_deep_2020}. In an alternative approach, an offline-trained neural network is used to propose a segmentation while a second model is trained online, based on user interaction, to refine the initial segmentation \parencite{wang_deepigeos_2018}. Another way to incorporate feedback is reinforcement learning. It can be used to interactively fuse segmentation hints given by the user with segmentation probabilities \parencite{liao_iteratively-refined_2020}. However, none of these approaches allows for symbolic interaction. \\
While interaction in segmentation tasks has a clear focus on the \textit{where} aspect, the \textit{what} aspect underlying the reasoning is equally important in classification. However, its compositional and conceptual subtleties are hard to access. In this paper, we build on recent advances in prototype-based learning \parencite{chen_this_2019}. It allows us to close the gap between the cognitive concepts of human users and the representations of the model.\\
Prototype networks capture certain aspects of mental object representation in the brain. While structural signatures of feed-forward processing can be identified \parencite{markov_anatomy_2014}, information processing in the brain is also highly associative \parencite{bosking_orientation_1997}.
Some neurons are known to selectively code for the presence of specific features in their receptive field. Others preferentially link neurons according to the similarity of their response characteristics \parencite{bosking_orientation_1997}. In prototypical part networks this strength of reciprocal association is modeled as the latent similarity between a prototype and the patch encodings. Coherently, evidence from cognitive psychology suggests that human cognition relies on conceptual spaces that are organized by prototypes \parencite{geeraerts_prototype_2008,balkenius_spaces_2016}. Similar to latent grids in prototypical part networks, many layers of the visual hierarchy are also organized retinotopically: Receptive fields are arranged in physical space \parencite{kolster_retinotopic_2014}. Both the spatial configuration of receptive fields in physical space and the arrangement of visual features according to similarity, are well preserved by the formalization of prototypical part networks. \\
Leveraging this similarity of natural and artificial cognitive representation, we propose deep interactive prototype adjustment (DIPA), a novel mode of IML that allows for direct human-AI collaboration and co-learning. In \Secref{sec:protopnet} we review the basics of prototype-based learning. In \Secref{sec:method} we explain how DIPA can be used to adjust the inference of the network. \Secref{sec:results} summarizes how DIPA can be used for inference adjustment without substantially compromising the accuracy of the prediction. \Secref{sec:conclusion} concludes our paper.
\begin{figure*}
\caption{A prototypical-part network:
We use a network of type ProtoPNet with a ResNet34 encoder. Prototype convolution computes the Euclidean distance between prototypes and feature-vectors. An evidence score is computed for each latent grid position ($7\times 7$) and prototype ($N=2000$) as a function of the inverted distance. The maximum of each heatmap constitutes the input to the classification head. Deselection can be achieved by removing prototypes via masking or by interactively clustering areas in latent space that encode non-object areas in the input images.}
\label{network}
\end{figure*}
\section{Prototype-based Learning}
\label{sec:protopnet}
Prototype-based deep learning builds on the insight that similar features form compact clusters in latent space. Hence, certain sub-volumes represent specific classes. The surface between areas that are assigned to different labels represents the decision boundary: Prototypes are understood as concepts that are more central for a class as compared to instances which reside close to the decision boundary \parencite{bau2020understanding}.\\
Typically, prototypes are modeled as vectors in embedding space.
Latent features can be represented by a single vector or a latent grid \parencite{yang_robust_2018,chen_this_2019}. \Figref{network} shows the structure of the model used here. If a latent grid is used, prototypical patches can be detected spatially. Prototypes cluster feature space analogously to centroids in K-means. They can also be pushed to the closest feature vectors such that the latent encoding of an image patch coincides with the prototype-vector \parencite{chen_this_2019}. Hence, they can be conceptually equated with image patches of the training set. Training can be end-to-end using a suitable loss function \parencite{chen_this_2019,li_deep_2018,yang_robust_2018}. Alternatively, iterative training algorithms or alternating schemes have been developed \parencite{chong_towards_2021,dong_few-shot_2018}.\\
Recent work addresses hierarchical prototypes that follow predefined taxonomies and shows that prototype-based decision trees can be constructed \parencite{hase_interpretable_2019,nauta_neural_2021}. Besides classification, prototype-based deep learning is used for segmentation or tracking of objects in videos. Special prototypes can be retrieved from previous frames and used as sparse memory-encodings for spatial attention \parencite{ke_prototypical_2021}. Some approaches address data scarcity and have particularly been developed for few-shot problems, where predictions are made for a new class with few labels ($n<20$). They can be used for classification or semantic segmentation \parencite{snell_prototypical_2017,dong_few-shot_2018}. \\
The benefits of deep prototype learning are the high accuracy and robustness \parencite{yang_robust_2018}, its potentials for outlier detection \parencite{yang_robust_2018}, the ability to yield good predictions in few shot problems \parencite{snell_prototypical_2017,dong_few-shot_2018} and an increased interpretability that allows for intuitive interaction with the represented concepts \parencite{chen_this_2019,hase_interpretable_2019}.
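The distance-to-evidence computation described above can be made concrete with a minimal NumPy sketch. It illustrates a ProtoPNet-style activation log((d+1)/(d+eps)) with global max-pooling over the latent grid; the function name, array shapes, and eps value are our own illustrative assumptions, not the reference implementation:

```python
import numpy as np

def prototype_evidence(features, prototypes, eps=1e-4):
    """Compute per-prototype evidence from a latent grid.

    features   : (H, W, D) latent grid of patch encodings
    prototypes : (N, D) prototype vectors
    Returns a length-N vector: the maximum similarity of each
    prototype over all grid positions (global max-pooling).
    """
    H, W, D = features.shape
    flat = features.reshape(-1, D)                       # (H*W, D)
    # squared Euclidean distance between every patch and every prototype
    d = ((flat[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    sim = np.log((d + 1.0) / (d + eps))                  # large when d is small
    return sim.max(axis=0)                               # max over grid positions

rng = np.random.default_rng(0)
feats = rng.random((7, 7, 128))     # a 7x7 latent grid as in the figure
protos = rng.random((5, 128))       # a handful of prototypes for illustration
scores = prototype_evidence(feats, protos)
```

In the full model these scores feed the final linear layer; here they are just computed for random data to show the shapes involved.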
\section{Interactive Prototype Adjustment}
\label{sec:method}
\begin{figure}
\caption{Interactive prototype rejection:
The prediction of a class is based on the detection of its previously learned prototypes. User feedback can be used to reject invalid prototypes such that inference is based on object-prototypes only.}
\label{user_interaction}
\end{figure}
We present two modes of prototype rejection for DIPA. Inference can be adjusted (1) by masking the evidence from undesired prototypes or (2) by iterative prototype rejection via repetitive user feedback and a custom loss-function: Prototypes are removed from areas of latent space that encode non-object patches as the user inserts antitypes at the location of undesired vectors.\\
Here, the network acts as the student that consults the teacher to retrieve information about the learned prototypes. The user takes on the role of the teacher and either accepts or rejects the prototypes presented by the student (\Figref{user_interaction}). Feedback can take different forms. We provide user interfaces that allow the user to iterate over the prototypes of different classes. Latent space can also be explored by selecting images with highlighted prototypes that are arranged in an interactive cloud. The position of each image is defined by the projection of the prototypes onto their first three principal components (\Figref{user_interfaces}). \\
For deselection without replacement, a mask is assembled to exclude prototypes from inference. To adjust for a decrease in accuracy, the final layer of the network is trained afterwards.\\
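The masking mode can be sketched as follows; `masked_logits` and the array shapes are hypothetical names for illustration, not the actual implementation:

```python
import numpy as np

def masked_logits(evidence, weights, rejected):
    """Cancel evidence of user-rejected prototypes before classification.

    evidence : (N,) maximal evidence per prototype
    weights  : (C, N) final-layer weights (classes x prototypes)
    rejected : iterable of prototype indices flagged as non-object
    """
    mask = np.ones_like(evidence)
    mask[list(rejected)] = 0.0       # rejected prototypes contribute nothing
    return weights @ (evidence * mask)
```

After masking, the remaining weights would be fine-tuned to compensate for the removed evidence, as described above.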
Prototype rejection with replacement can be achieved with a special loss term $\textit{Reject}$, a function of the squared L2 distance between the prototypes $p_j$ and the antitypes $q_s$. The term represents a logarithmically decaying cost which is maximal if an antitype and a prototype coincide and encourages the prototypes to diverge from latent encodings of non-object patches.
\begin{equation*}
\operatorname{Reject} =\max_{j}\log\left(\frac{d_{j} + 1}{d_{j} + \epsilon}\right), \text{ where } d_{j}= \left\|\mathbf{p}_{j}-\mathbf{q}_{s}\right\|_{2}^{2}
\end{equation*}
The antitype vectors are initialized with the prototypes that the user identifies as non-object patch encodings. A second term $\textit{Con}$ imposes a soft constraint and ensures that prototypes stay in the hyper-cube that contains the latent vectors by penalizing any value $i$ of each prototype-vector $j$ outside the desired range.
{\footnotesize
\begin{equation*}
\operatorname{Con} = \sum_j\sum_i v_{i,j}
\text{ where }
v_{i,j}=
\begin{cases}
1, & \text{if}\ (p_{i,j} > 1) \lor (p_{i,j} < 0) \\
0, & \text{otherwise}
\end{cases}
\end{equation*}
}
These terms are used to assemble a modified loss function. It additionally contains the cross-entropy $\textit{CrsEnt}$, the clustering cost $\textit{Clst}$ and the separation cost $\textit{Sep}$ as well as the $\textit{L1}$ term from the original loss of ProtoPNet.
{\footnotesize
\begin{equation*}
L = \text{CrsEnt} + \lambda_{1} \text { Clst }+\lambda_{2} \text { Sep } +\lambda_{2} \text { L1 } +\lambda_{3} \text { Reject } + \lambda_{4} \text { Con }
\end{equation*}
}
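Under one plausible reading of the formulas above (per antitype, take the maximum repulsion over prototypes and sum over antitypes), the loss terms can be sketched in NumPy; the lambda values are placeholders, not the ones used in our experiments:

```python
import numpy as np

def reject_loss(prototypes, antitypes, eps=1e-4):
    """Logarithmically decaying repulsion from antitype locations.

    prototypes : (N, D), antitypes : (S, D)
    Cost is maximal when a prototype coincides with an antitype.
    """
    d = ((prototypes[:, None, :] - antitypes[None, :, :]) ** 2).sum(-1)  # (N, S)
    return np.log((d + 1.0) / (d + eps)).max(axis=0).sum()

def con_loss(prototypes):
    """Count prototype entries outside the unit hyper-cube [0, 1]."""
    return float(((prototypes > 1.0) | (prototypes < 0.0)).sum())

def total_loss(crs_ent, clst, sep, l1, prototypes, antitypes,
               lambdas=(0.8, 0.08, 0.5, 1.0)):
    """Assemble the modified loss; lambda_2 multiplies both Sep and L1,
    mirroring the formula in the text."""
    lam1, lam2, lam3, lam4 = lambdas
    return (crs_ent + lam1 * clst + lam2 * sep + lam2 * l1
            + lam3 * reject_loss(prototypes, antitypes)
            + lam4 * con_loss(prototypes))
```

In training, these terms would operate on framework tensors so that gradients flow back into the prototype vectors; the NumPy version only shows the functional form.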
Although prototypes diverge from the deselected feature vectors, it turns out that some move towards areas in latent space that encode other non-object patches. Hence, deselection training is repeated for several iterations to remove these prototypes, and multiple user consultations are necessary.\\
The user repeatedly interacts with the model and successively explores areas in latent space that are covered by non-object patch encodings. At the beginning of each repetition the prototypes are pushed to the closest feature vector to identify the prototypical image patches. In the next step the user is consulted to reject unreasonable prototypes.\\
\begin{algorithm}
\begin{algorithmic}[1]
\State $ Q\leftarrow\ \emptyset $
\For {$repetition=1,2,\ldots N $}
\State $ p_j\leftarrow \arg \min_{z\in Z_j}\left \| z -p_j \right \|$ \Comment{Push prototypes}
\State $ Q\leftarrow\ Q \cup \textit{non-object}(P)$ \Comment{Consult user}
\For {$epoch=1,2,\ldots,M$}
\For {\textit{batch} of \textit{Dataset}}
\State $\textit{loss} \leftarrow\ \textit{L}(y,\textit{net}(\textit{batch}),P,Q)$
\State $\textit{net} \leftarrow \textit{SGD}(\textit{net},\textit{loss})$ \Comment{Move prototypes}
\EndFor
\EndFor
\EndFor
\end{algorithmic}
\centerline{\textbf{Alg. 1:} \text{Iterative prototype rejection}}
\end{algorithm}
The set of antitypes $\textit{Q}$ is united with the subset of the prototypes $\textit{P}$ that are identified as non-object and the network is trained using our deselection loss. Hence, the areas of latent space that encode undesired features are successively clustered by the interplay of user and model (Alg. 1).
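The push step of Alg. 1 (line 3), which snaps each prototype to its nearest latent patch encoding, can be sketched as follows; names and shapes are illustrative assumptions:

```python
import numpy as np

def push_prototypes(prototypes, encodings):
    """Replace each prototype by its nearest latent patch encoding.

    prototypes : (N, D) current prototype vectors
    encodings  : (M, D) patch encodings of the training images
                 (in ProtoPNet, restricted to the prototype's class)
    """
    d = ((prototypes[:, None, :] - encodings[None, :, :]) ** 2).sum(-1)  # (N, M)
    nearest = d.argmin(axis=1)          # index of the closest encoding
    return encodings[nearest].copy()
```

After the push, each prototype coincides with an actual training patch, which is what makes it presentable to the user for acceptance or rejection.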
\begin{figure}
\caption{User Interfaces: Custom interfaces provide interactive modes for the exploration of prototype space. The interface on the left allows for prototype rejection per image class. The one on the right displays prototypes in an interactive low-dimensional embedding (3D). We allow for drag-and-drop selection of prototypes and aim at optional rejection of clusters based on the distance to a selected prototype.}
\label{user_interfaces}
\end{figure}
\section{Results}
\label{sec:results}
We perform three experiments with ten repetitions each. The experiments begin after the first two stages of the original training schedule of ProtoPNet are completed \parencite{chen_this_2019}. For that purpose we use the CUB200-2011 dataset while the initial encoder weights stem from pre-training on ImageNet. The model is trained for five epochs with the encoder weights fixed, then all layers and the prototype-vectors are trained jointly with decaying weights for the convolutional blocks. Afterwards, we push the prototypes to the closest patch encoding of any training image of the prototype's class. Finally the classification head is trained to adjust for the changes and allow for the L1 term to converge \parencite{chen_this_2019}.\\
To enable reproducibility we simulate the user interaction computationally for named experiments. To this end we rely on pixelwise annotations included in the dataset \parencite{wah_caltech-ucsd_2011}. Prototypes are considered non-object if no ground-truth pixel indicates differently in the image patch that corresponds to the upscaled grid-cell of the prototype.\\
After initial training, between $157$ and $251$ of the $2000$ prototypes were identified as non-object prototypes ($\bar{n}_{\textit{bg}}=206$, $\bar{n}_{\textit{fg}}=1794$). \Figref{subfig:nonObjectPrototypes} shows a random selection of non-object prototypes for a sample run. Here, an overlap of at least 75\% exists only for 1077 prototypes. Analogously, \Figref{subfig:objectPrototypes} shows all prototypes for the class ``Eastern Towhee''.
The final layer uses the maximal evidence for each prototype to predict the image class \parencite{chen_this_2019}. Hence, the weights of a prototype in the classification head reflect the impact it has on the final prediction. \Figref{relevance}a shows the distribution for object and non-object prototypes in a representative run with a Gaussian fit for the PDF. Larger weights are more frequent for object prototypes with a peak in the PDF at $w = 1.2$. However, substantial weights exist also for non-object prototypes indicating their relevance for the prediction.
In the first experiment we employ prototype masking after the first push. Evidence from prototypes with no object-overlap is cancelled out. An adjustment of the weights of the final layer is necessary to compensate for the deselection. The average accuracy for the ten repetitions is $.778$ before deselection and drops to $.77$ upon deselection masking. After training of the last layer it mostly recovers: with an average of $.776$ it is marginally lower than before. A t-test indicates that the difference is insignificant ($p=.29$).
In the second experiment we achieve inference adjustment by training the network with fixed encoder weights and the presented deselection loss. After the subsequent push to the closest patches, between $38$ and $77$ prototypes revealed themselves as non-object prototypes. None of these prototypes covers the same image patch as before. These prototypes are removed; the accuracy drops to an average of $.732$ and recovers to $.745$ when the last layer is trained to compensate for the changes due to shifting the prototypes.\\
The third experiment employs our iterative scheme. Instead of removing new non-object prototypes, the user is consulted multiple times. The average accuracy is slightly higher than in the previous experiment, at $.739$ before and $.749$ after fine-tuning of the last layer. The procedure yields the highest number of prototypes with near-complete object-overlap (\Figref{masking}b). After iterative rejection, $1583$ of the $2000$ prototypes show an overlap of at least 75\% with the object they represent.
\begin{figure}
\caption{The relevance of non-object prototypes:
(a) The distribution of weights of object-prototypes and non-object prototypes in the final layer. (b) Examples for non-object prototypes. (c) Prototypes of the class ``Towhee''. Red boxes indicate the location of the image patch that corresponds to the up-scaled grid-cell of the prototype-vector. Blue areas indicate where the upscaled heatmap of prototype evidence is larger than 75\% of the maximum.}
\label{relevance}
\end{figure}
\pagebreak
\begin{figure}
\caption{Prototype rejection:
(a) The number of ``non-object'' pixels of the prototypes changes upon rejection. Masking removes prototypes with zero object-overlap while deselection training alters the overall distribution in favor of object-prototypes. (b) The number of non-object prototypes during iterative rejection (left). The change in accuracy for different schemes before and after fine-tuning of the last layer (right).}
\label{masking}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
This paper shows that the inference of prototypical part networks can not only be understood but also adjusted. Prototypes can be reassigned such that the prediction is restricted to meaningful features. Removing the effect of antitypes leads to a marginal loss in accuracy (1.4\%). A slightly larger drop occurs for our deselection loss (2.9\%). This is arguably due to the greater loss of information from non-object prototypes as larger areas of feature space are excluded.
DIPA has further potential. It could be used to model configurations of prototypes, for example in conjunction with interactive learning of deformable parts \parencite{branson_strong_2011}. The potential of prototypes for the prediction of bounding boxes that cluster the combined conceptual and physical space should be investigated. They could potentially help to model whole-part relationships in neural networks \parencite{hinton_how_2021}. Refining the sensitivity and specificity of prototypes would increase interpretability. Combining DIPA with active-learning schemes could increase labeling efficiency \parencite{gal_deep_2017}. \\
Future research should also address prototype learning for interactive labeling and collaborative procedures to address the cold start problem that occurs in the first stage of the interactive collection of labels.
\printbibliography
\end{document}
\begin{document}
\global\long\def\Pp#1{\mathbf{P}\left(#1\right)}
\global\long\def\Ee#1{\mathbf{E}\left[#1\right]}
\global\long\def\norm#1{\left\Vert #1\right\Vert }
\global\long\def\abs#1{\left|#1\right|}
\global\long\def\given#1{\left|\phantom{\frac{}{}}#1\right.}
\global\long\def\ceil#1{\left\lceil #1\right\rceil }
\global\long\def\floor#1{\left\lfloor #1\right\rfloor }
\global\long\def\Var{\mathbf{Var}}
\global\long\def\Cov{\mathbf{Cov}}
\global\long\def\Corr{\mathbf{Corr}}
\global\long\def\E{\mathbf{E}}
\global\long\def\P{\mathbf{P}}
\global\long\text{d}ef\text{Tr}{\text{Tr}}
\global\long\text{d}ef\text{Re}{\text{Re}}
\global\long\text{d}ef\text{Im}{\text{Im}}
\global\long\text{d}ef\text{supp}{\text{supp}}
\global\long\text{d}ef\text{sgn}{\text{sgn}}
\global\long\text{d}ef\text{d}{\text{d}}
\global\long\text{d}ef\text{d}ist{\text{dist}}
\global\long\text{d}ef\text{span}{\text{span}}
\global\long\text{d}ef\text{ran}{\text{ran}}
\global\long\text{d}ef\text{ball}{\text{ball}}
\global\long\text{d}ef\textnormal{Dim}{\textnormal{Dim}}
\global\long\text{d}ef\text{Ai}{\text{Ai}}
\global\long\text{d}ef\text{Occ}{\text{Occ}}
\global\long\text{d}ef\text{sh}{\text{sh}}
\global\long\text{d}ef\Rightarrow{\Rightarrow}
\global\long\text{d}ef\frac{1}{2}{\frac{1}{2}}
\global\long\text{d}ef\oo#1{\frac{1}{#1}}
\newcommand{\slfrac}[2]{\left.#1\middle/#2\right.}
\newcommand{\thetak}{ {\scriptstyle \slfrac{\theta}{k}} }
\title{Decorated Young Tableaux and the Poissonized Robinson-Schensted Process}
\author{Mihai Nica {\footnote{ Courant Institute of Mathematical Sciences, New York University
251 Mercer Street, New York, N.Y. 10012-1185, \href{mailto:[email protected]}{[email protected]}}}}
\maketitle
\begin{abstract}
We introduce an object called a decorated Young tableau, which can equivalently be viewed as a continuous-time trajectory of Young diagrams or as a non-intersecting line ensemble. By a natural extension of the Robinson-Schensted correspondence, we create a random pair of decorated Young tableaux from a Poisson point process in the plane, which we think of as a stochastic process in discrete space and continuous time. Using only elementary techniques and combinatorial properties, we identify this process as a Schur process and show that it has the same law as certain non-intersecting Poisson walkers.
\end{abstract}
\section{Introduction}
The Poissonized Plancherel measure is a one-parameter family of measures
on Young diagrams. For fixed $\theta$, it is a mixture of the classical
Plancherel measures by Poisson weights. This mixture has nice properties
that make it amenable to analysis, see for instance \cite{Borodin00asymptoticsof}
and \cite{Johansson01discreteorthogonal}. One way this measure is
obtained is to take a unit rate Poisson point process in the square
$[0,\theta]\times[0,\theta]$, then interpret the collection of points as
a permutation, and finally apply the Robinson-Schensted (RS) correspondence.
The RS correspondence gives a pair of Young tableaux of the same shape.
The law of the shape of the Young tableaux constructed in this way
has the Poissonized Plancherel measure. Other than the shape, the
information inside the tableaux themselves is discarded in this construction.
This construction has many nice properties: for example, by the geometric
construction of the RS correspondence due to Viennot (see for example
\cite{0387950672} for details), this shows that the maximum number
of Poisson points an up-right path can pass through has the distribution
of the length of the first row of the Poissonized Plancherel measure.
One can use this to tackle problems like the longest increasing subsequence
problem.
In this article, we extend the above construction slightly so as
to retain, rather than discard, the information in the Young tableaux
generated by the RS algorithm. As a result, we get a slightly richer random object
which we call the Poissonized Robinson-Schensted process. This object
can be interpreted in several ways. If one views the object as a continuous-time
Young-diagram-valued stochastic process, then its fixed time marginals are exactly
the Poissonized Plancherel measure. Moreover, the joint distribution
at several times forms a Schur process as defined in \cite{Okounkov01correlationfunction}. The proof uses only simple properties of the RS correspondence and elementary probabilistic arguments. The model is defined in Section 2 and its distribution is characterized in Section 3.
We also show that the process itself is a special case of stochastic
dynamics related to Plancherel measure studied in \cite{Borodin_stochasticdynamics}.
Unlike the construction from \cite{Borodin_stochasticdynamics}, our methods in this article do not rely on machinery from representation theory. Instead, the proof goes by first finding the multi-time distribution in terms of Poisson probability mass functions using elementary techniques from probability and combinatorics. Only after this do we identify it as a Schur process. The derivation of the distribution does not rely on this previous theory. The connection here allows us to immediately read off asymptotics for the
model, in particular it converges to the Airy-2 line ensemble under
the correct scaling. This is discussed in Section 4.
It is also possible to obtain the Poissonized RS process as a limit of a discrete time Young diagram process in a natural way. Instead of starting with a Poisson point process, one instead starts with a point process on a lattice so that the number of points at each site has a geometric distribution. This model was first considered by Johansson in Section 5 of \cite{JoDPP}; in particular see his Theorem 5.1. Again, the approach we take in this article uses only elementary techniques from probability and combinatorics, which is in contrast to the analytical methods used in \cite{JoDPP}. This is discussed in Section 5.
\subsection{Notation and Background}
\label{sec:notation}
We very briefly go over the definitions/notations used here. For more
details, see \cite{StanleyVol2} or \cite{0387950672}.
We denote by $\mathbb{Y}$ the set of Young diagrams. We think of a Young
diagram $\lambda\in\mathbb{Y}$ as a partition $\lambda=\left(\lambda_{1},\lambda_{2},\ldots\right)$
where $\lambda_{i}$ are weakly decreasing and with finitely many non-zero
entries. We can equivalently think of each $\lambda\subset\mathbb{N}^{2}$ as
a collection of stacked unit boxes by $\left(i,j\right)\in\lambda\iff j\leq\lambda_{i}$.
We denote by $\abs{\lambda}=\sum_{i}\lambda_{i}$ the total number of
boxes, or equivalently the sum of the row lengths. We will sometimes
also consider skew diagrams $\lambda/\mu$, which are the collections of boxes
one gets from the difference of two nested Young diagrams $\mu\subset\lambda$.
A standard Young tableau $T$ can be thought of as a Young diagram $\lambda$
whose boxes have been filled with the numbers $1,2,\ldots,\abs{\lambda}$,
so that the numbers are increasing in any row and in any column. We
call the diagram $\lambda$ in this case the shape of the tableau, and
denote this by $\text{sh}(T)$. We denote by $T(i,j)$ the entry written
in the box at location $(i,j)$. We will also use the notation $\dim(\lambda)$
to denote the number of standard Young tableaux of shape $\lambda$. This
is called the ``dimension'' since it is also the dimension of
the irreducible representation of the symmetric group $S\left(\abs{\lambda}\right)$
associated with $\lambda$.
In the above notation, the Poissonized Plancherel measure of parameter
$\theta^2$ is:
\[
\mathbf{P}_{\theta}\left(\lambda\right)=e^{-\theta^{2}}\left(\frac{\theta^{\abs{\lambda}}\dim(\lambda)}{\abs{\lambda}!}\right)^{2}.
\]
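As a quick sanity check, which we include for completeness, one can verify that $\mathbf{P}_{\theta}$ is indeed a probability measure using the classical identity $\sum_{\abs{\lambda}=n}\dim(\lambda)^{2}=n!$, itself a consequence of the RS bijection between $S_{n}$ and pairs of standard Young tableaux:

```latex
% Normalization check for the Poissonized Plancherel measure.
% Uses \sum_{|\lambda|=n} \dim(\lambda)^2 = n!, a consequence of the
% RS bijection between S_n and pairs of standard Young tableaux.
\[
\sum_{\lambda\in\mathbb{Y}}\mathbf{P}_{\theta}(\lambda)
 = e^{-\theta^{2}}\sum_{n=0}^{\infty}\frac{\theta^{2n}}{(n!)^{2}}\sum_{\abs{\lambda}=n}\dim(\lambda)^{2}
 = e^{-\theta^{2}}\sum_{n=0}^{\infty}\frac{\theta^{2n}}{n!}
 = 1.
\]
```

In particular, the number of boxes $\abs{\lambda}$ is Poisson distributed with mean $\theta^{2}$ under $\mathbf{P}_{\theta}$, which explains the name.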
The Robinson-Schensted (RS) correspondence is a bijection from the
symmetric group $S_{n}$ to pairs of standard Young tableaux of the
same shape of size $\abs{\text{sh}(T)}=n$ (see \cite{StanleyVol2} Section 7.11 for details on this bijection). We will sometimes refer to this
here as the ``ordinary'' RS correspondence, not to diminish the
importance of this, but to avoid confusion with a closely related
map we introduce called the ``decorated RS correspondence''.
We will also make reference to the Schur symmetric functions $s_{\lambda}(x_{1},\ldots)$,
and the skew Schur symmetric functions $s_{\lambda/\mu}(x_{1},\ldots)$ as
they appear in \cite{StanleyVol2} or \cite{0387950672}. A specialization
is a homomorphism from symmetric functions to complex numbers. We
denote by $f(\rho)$ the image of the function $f$ under the specialization
$\rho$. We denote by $\rho_{t}$ the Plancherel specialization (also
known as the exponential or ``pure gamma'' specialization) that has $h_{n}(\rho_{t})=\frac{t^{n}}{n!}$
for each $n\in\mathbb{N}$. This is a Schur positive specialization, in the
sense that $s_{\lambda}\left(\rho_{t}\right)\geq0$ always, and moreover
there is an explicit formula for $s_{\lambda}\left(\rho_{t}\right)$ in
terms of the number of standard Young tableaux of shape $\lambda$:
\begin{equation} \label{schurid}
s_{\lambda}(\rho_{t})=\dim(\lambda)\frac{t^{\abs{\lambda}}}{\abs{\lambda}!}.
\end{equation}
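As a small worked check of Equation \ref{schurid}, which we include for concreteness, take $\lambda=(1,1)$: the Jacobi--Trudi identity gives $s_{(1,1)}=h_{1}^{2}-h_{2}$, and applying the homomorphism $\rho_{t}$ yields

```latex
% Check of s_\lambda(\rho_t) = \dim(\lambda) t^{|\lambda|}/|\lambda|!
% in the smallest non-trivial case \lambda = (1,1), where \dim((1,1)) = 1.
\[
s_{(1,1)}(\rho_{t})
 = h_{1}(\rho_{t})^{2}-h_{2}(\rho_{t})
 = t^{2}-\frac{t^{2}}{2}
 = \frac{t^{2}}{2}
 = \dim\bigl((1,1)\bigr)\frac{t^{2}}{2!},
\]
```

in agreement with the identity, since the unique standard Young tableau of shape $(1,1)$ has the entry $1$ above the entry $2$.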
\section{Decorated Young Tableaux}
\begin{defn}
\label{decoratedYTdef} A \textbf{decorated} \textbf{Young tableau}
is a pair $\tilde{T}=(T,(t_{1},\ldots,t_{\abs{\text{sh}(T)}}))$ where $T$
is a standard Young tableau and $0\leq t_{1}<\ldots<t_{\abs{\text{sh}(T)}}$
is an increasing list of non-negative numbers whose length is equal
to the size of the tableau. We refer to the list $(t_{1},\ldots,t_{\abs{\text{sh}(T)}})$
as the \textbf{decorations} of the tableau. We represent this graphically
when drawing the tableau by recording the number $t_{T(i,j)}$ in
the box $(i,j)$. \end{defn}
\begin{example}
\label{YTexample}
The decorated Young tableau: $$ \left( \young(124,35,6), \left(0.02,0.03,0.05,0.07,0.11,0.13 \right) \right), $$
is represented as:
\begin{center}
\ytableausetup{centertableaux,boxsize=2.5em}
\begin{ytableau}
\stackrel{1}{0.02} & \stackrel{2}{0.03} & \stackrel{4}{0.07} \\
\stackrel{3}{0.05} & \stackrel{5}{0.11} \\
\stackrel{6}{0.13}
\end{ytableau}.
\end{center}\end{example}
\begin{rem}
\label{otherwaytothink}Since the decorations are always sorted, we
see that from the above diagram one could recover the entire decorated
tableau without the labels ``1'', ``2'' written in the tableau.
In other words, one could equally well think of a decorated Young
tableau as a map $\tilde{T}:\text{sh}(T)\to\mathbb{R}_{+}$, so that $\tilde{T}$
is increasing in each column and in each row. Having $\tilde{T}=(T,(t_{1},\ldots,t_{\abs{\text{sh}(T)}}))$
will be slightly more convenient for our explanations here, and particularly
to relate the model to previous work.\end{rem}
\begin{defn}
\label{YDprocess}A decorated Young tableau can also be thought of
as a trajectory of Young \uline{diagrams} evolving in continuous
time. The \textbf{Young diagram process} of the decorated Young tableau
$\tilde{T}=(T,(t_{1},\ldots,t_{\abs{\text{sh}(T)}}))$ is a map $\lambda_{\tilde{T}}:\mathbb{R}_{+}\to\mathbb{Y}$
defined by
\[
\lambda_{\tilde{T}}(t)=\left\{ (i,j):\ t_{T(i,j)}\leq t\right\} \in\mathbb{Y}.
\]
One can also think about this as follows: the process starts with
$\lambda(0)=\emptyset$, and then it gradually adds boxes one by one.
The decoration $t_{T(i,j)}$ is the time at which the box $(i,j)$
is added. The fact that $T$ is a standard Young tableau ensures that
$\lambda(t)$ is indeed a Young diagram at every time $t$. Notice that
the Young diagram process for a decorated Young tableau is always
increasing, $\lambda(t_{1})\subset\lambda(t_{2})$ whenever $t_{1}\leq t_{2}$,
and it can only increase by at most one box at a time, $\lim_{\epsilon\to0}\abs{\lambda(t+\epsilon)-\lambda(t)}\leq1$.
Moreover, given any continuous time sequence of Young diagrams evolving
in this way we can recover the decorated Young tableau: if the $k$-th
box added to the sequence is the box $(i,j)$ and it is added at time
$s$, then put $T(i,j)=k$ and $t_{k}=s$.
\end{defn}
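For concreteness, the Young diagram process of the decorated tableau from Example \ref{YTexample} is the following step trajectory, obtained by adding the box containing decoration $t_{k}$ at time $t_{k}$:

```latex
% The Young diagram process of the tableau from Example \ref{YTexample}:
% boxes appear at times 0.02, 0.03, 0.05, 0.07, 0.11, 0.13.
\[
\lambda_{\tilde{T}}(t)=
\begin{cases}
\emptyset & 0\leq t<0.02\\
(1) & 0.02\leq t<0.03\\
(2) & 0.03\leq t<0.05\\
(2,1) & 0.05\leq t<0.07\\
(3,1) & 0.07\leq t<0.11\\
(3,2) & 0.11\leq t<0.13\\
(3,2,1) & t\geq0.13.
\end{cases}
\]
```

At each step exactly one box is added, in the position recorded by the underlying standard Young tableau.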
\begin{defn}
\label{nonintersectingLineEnsemble}A decorated Young tableau can
also be thought of as an ensemble of non-intersecting lines. The \textbf{non-intersecting
line ensemble }of the decorated Young tableau $\tilde{T}=(T,(t_{1},\ldots,t_{\abs{\text{sh}(T)}}))$
is a map $M_{\tilde{T}}:\mathbb{N}\times\mathbb{R}_{+}\to\mathbb{Z}$ defined by:
\begin{eqnarray*}
M_{\tilde{T}}(i;t) & = & \lambda_{i}(t)-i\\
 & = & \abs{\left\{ j:t_{T(i,j)}\leq t\right\} }-i,
\end{eqnarray*}
where $\lambda_{\tilde{T}}(t)=\left(\lambda_{1}(t),\ldots\right)$ is the Young
diagram process of $\tilde{T}$. The index $i$ is the label of the particle, and the variable $t$ measures the time along the trajectory. The lines $M_{\tilde{T}}(i;t)$ are
non-intersecting in the sense that $M_{\tilde{T}}(i;t)>M_{\tilde{T}}(j;t)$
for $i<j$ and for every $t\in\mathbb{R}_{+}$. This holds since $\lambda_{\tilde{T}}(t)\in\mathbb{Y}$
is a Young diagram. It is clear that one can recover the Young diagram
process from the non-intersecting line ensemble by $\lambda_{i}(t)=M_{\tilde{T}}(i;t)+i$. \end{defn}
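For instance, for the tableau of Example \ref{YTexample} at any time $t\geq0.13$, when $\lambda_{\tilde{T}}(t)=(3,2,1)$, the line positions are:

```latex
% Particle positions M(i;t) = \lambda_i(t) - i for \lambda(t) = (3,2,1).
\[
M_{\tilde{T}}(1;t)=3-1=2,\qquad
M_{\tilde{T}}(2;t)=2-2=0,\qquad
M_{\tilde{T}}(3;t)=1-3=-2,
\]
```

and $M_{\tilde{T}}(i;t)=-i$ for every $i\geq4$, since the corresponding rows $\lambda_{i}(t)$ are empty.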
\begin{rem}
The map from Young diagrams to collections of integers given by $\lambda\to\left\{ \lambda_{i}-i\right\} _{i=1}^{\infty}$
is a well known map with mathematical significance, see for instance
\cite{1212.3351} for a survey. This is sometimes presented as the
map $\lambda\to\left\{ \lambda_{i}-i+\frac{1}{2}\right\} _{i=1}^{\infty}$, where
the target is now half integers. This representation of Young diagrams, also
known as a Maya diagram, sometimes makes
the resulting calculations much nicer. In this work these shifts do not play
a big role, so we will omit the $\frac{1}{2}$ that some other authors use.
\end{rem}
\subsection{Robinson–Schensted Correspondence}
\begin{defn}
\label{associatedpermutation} Fix a parameter $\theta\in\mathbb{R}_{+}$
and let $\mathcal{C}_{n}^{\theta}$ be the set of $n$ point configurations in
the square $[0,\theta]\times[0,\theta]\subset\mathbb{R}_{+}^{2}$ so that no two
points lie on the same horizontal line and no two points lie on the
same vertical line. Every configuration of points $\Pi\in\mathcal{C}_{n}^{\theta}$
has an \textbf{associated permutation} $\sigma\in S_{n}$ by the following
prescription. Suppose that $0\leq r_{1}<\ldots<r_{n}\leq\theta$ and $0\leq\ell_{1}<\ldots<\ell_{n}\leq\theta$
are respectively the sorted lists of $x$ and $y$ coordinates of
the points which form $\Pi$. Then find the unique permutation $\sigma\in S_{n}$
so that $\Pi=\left\{ \left(r_{i},\ell_{\sigma(i)}\right)\right\} _{i=1}^{n}$.
Equivalently, if we are given the list of points $\Pi=\left\{ \left(x_{i},y_{i}\right)\right\} _{i=1}^{n}$
sorted so that $0\leq x_{1}<\ldots<x_{n}\leq\theta$ then $\sigma$ is the
permutation so that $0\leq y_{\sigma^{-1}(1)}<y_{\sigma^{-1}(2)}<\ldots<y_{\sigma^{-1}(n)}\leq\theta$.
\end{defn}
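As a quick illustration, consider the two point configuration $\Pi=\{(0.2,0.7),(0.5,0.3)\}$ (a small example we include here). The sorted coordinate lists and the matching of the points are:

```latex
% The associated permutation of a two-point configuration.
\[
(r_{1},r_{2})=(0.2,0.5),\qquad
(\ell_{1},\ell_{2})=(0.3,0.7),\qquad
\Pi=\left\{(r_{1},\ell_{2}),\,(r_{2},\ell_{1})\right\},
\]
```

so the associated permutation is the transposition $\sigma(1)=2$, $\sigma(2)=1$.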
\begin{defn}
Let
\[
\mathcal{T}_{n}^{\theta}=\left\{ \left(L,(\ell_{1},\ldots,\ell_{n})\right),\left(R,\left(r_{1},\ldots,r_{n}\right)\right):\ \text{sh}(L)=\text{sh}(R),0\leq\ell_{1}<\ldots<\ell_{n}\leq\theta,0\leq r_{1}<\ldots<r_{n}\leq\theta\right\}
\]
be the set of pairs of decorated Young tableaux of the same shape
and of size $n$, whose decorations lie in the interval $[0,\theta]$.
The \textbf{decorated Robinson–Schensted (RS) correspondence }is a
bijection $dRS:\mathcal{T}_{n}^{\theta}\to\mathcal{C}_{n}^{\theta}$ from pairs of decorated
Young tableaux in $\mathcal{T}_{n}^{\theta}$ to configurations of points in $\mathcal{C}_{n}^{\theta}$ defined as follows:
Given a pair of decorated tableaux of size $n$, $\left(L,(\ell_{1},\ldots,\ell_{n})\right),\left(R,\left(r_{1},\ldots,r_{n}\right)\right)$,
use the ordinary RS bijection and the pair of Young tableaux $(L,R)$
to get a permutation $\sigma\in S_{n}$. Then define
\[
dRS\left(\left(L,(\ell_{1},\ldots,\ell_{n})\right),\left(R,\left(r_{1},\ldots,r_{n}\right)\right)\right)=\left\{ (r_{1},\ell_{\sigma(1)}),(r_{2},\ell_{\sigma(2)}),\ldots,(r_{n},\ell_{\sigma(n)})\right\}.
\]
Going the other way, the inverse $dRS{}^{-1}:\mathcal{C}_{n}^{\theta}\to\mathcal{T}_{n}^{\theta}$
is described as follows. Given a configuration of points from $\mathcal{C}_{n}^{\theta}$,
first take the permutation $\sigma$ associated with the configuration
as described in Definition \ref{associatedpermutation}. Then use
the ordinary RS bijection to find a pair of standard Young tableaux
$(L,R)$ corresponding to this permutation. Define
\[
dRS{}^{-1}\left(\left\{ (x_{1},y_{1}),\ldots,(x_{n},y_{n})\right\} \right)=\left(\left(L,\left(y_{\sigma^{-1}(1)},\ldots,y_{\sigma^{-1}(n)}\right)\right),\left(R,(x_{1},\ldots,x_{n})\right)\right).
\]
Since the ordinary RS algorithm is a bijection from pairs of standard
Young tableaux of size $n$ to permutations in $S_{n}$, and since
the decorations can be recovered from the coordinates of the points
and vice versa as described above, the decorated RS algorithm is indeed
a bijection as the name suggests. See Figure \ref{fig:small_example} for an example of this bijection. For convenience we will later on use the notations $\mathcal{C}^{\theta}=\cup_{n\in\mathbb{N}}\mathcal{C}_{n}^{\theta}\text{ and }\mathcal{T}^{\theta}=\cup_{n\in\mathbb{N}}\mathcal{T}_{n}^{\theta}$.\end{defn}
\begin{rem}
With the viewpoint as in Remark \ref{otherwaytothink}, one can equivalently
construct the decorated RS bijection by starting with the list of
points in $\Pi$ in ``two line notation'' $\binom{x_{1}\ x_{2}\ \ldots\ x_{n}}{y_{1}\ y_{2}\ \ldots\ y_{n}}$,
where the points $\left\{ \left(x_{i},y_{i}\right)\right\} _{i=1}^{n}$
are sorted by $x$-coordinate, and then apply the RS
insertion algorithm on these points to build up the Young tableaux
$\tilde{L}$ and $\tilde{R}$. The same rules for insertion as in the
ordinary RS apply; the only difference is that the entries and comparisons
the algorithm makes are between real numbers instead of natural numbers.
Each of the individual decorated tableaux from a pair $\left(\tilde{L},\tilde{R}\right)\in\mathcal{T}_{n}^{\theta}$
has an associated Young diagram process as defined in Definition \ref{YDprocess}
and an associated non-intersecting line ensemble as defined in Definition
\ref{nonintersectingLineEnsemble}. Since both $L$ and $R$ are of the
same shape, and since the decorations are all in the range $[0,\theta]$,
the Young diagram processes and the non-intersecting line ensembles
will agree at all times $t\geq\theta.$ That is to say $\lambda_{\tilde{L}}(t)=\lambda_{\tilde{R}}(t)$
and $M_{\tilde{L}}(\cdot;t)=M_{\tilde{R}}(\cdot;t)$ for $t\geq\theta$.
For this reason, it will be more convenient to do a change of coordinates
on the time axis so that the Young diagram process and non-intersecting
line ensemble are defined on $[-\theta,\theta]$, and the meeting of the
left and right tableaux happens at $t=0$. The following definition
makes this precise.\end{rem}
\begin{defn}
\label{YDprocess_pair}For a pair of decorated Young tableaux, we define the \textbf{Young diagram process
$\lambda_{\tilde{L},\tilde{R}}:[-\theta,\theta]\to\mathbb{Y}$ }of the pair $\left(\tilde{L},\tilde{R}\right)\in\mathcal{T}_{n}^{\theta}$
by
\[
\lambda_{\tilde{L},\tilde{R}}(t)=\begin{cases}
\lambda_{\tilde{L}}(\theta+t) & \ t\leq0\\
\lambda_{\tilde{R}}(\theta-t) & \ t\geq0
\end{cases}.
\]
Notice that this is well defined at $t=0$ since $\lambda_{\tilde{L}}(\theta)=\text{sh}(L)=\text{sh}(R)=\lambda_{\tilde{R}}(\theta)$.
In this way $\lambda_{\tilde{L},\tilde{R}}$ is an increasing sequence
of Young diagrams when $t<0$ and a decreasing one when $t>0$.
Similarly, for a pair of decorated tableaux, we define the \textbf{non-intersecting line ensemble }$M_{\tilde{L},\tilde{R}}:\mathbb{N}\times[-\theta,\theta]\to\mathbb{Z}$
of the pair $\left(\tilde{L},\tilde{R}\right)\in\mathcal{T}_{n}^{\theta}$ by
\[
M_{\tilde{L},\tilde{R}}(i;t)=\begin{cases}
M_{\tilde{L}}(i;\theta+t) & \ t\leq0\\
M_{\tilde{R}}(i;\theta-t) & \ t\geq0
\end{cases}.
\]
Again, each of the lines is well defined and continuous at $t=0$ because $M_{\tilde{L}}(i;\theta)=M_{\tilde{R}}(i;\theta)$. See Figure \ref{fig:small_example} for an example of this line ensemble.
\end{defn}
\newcommand\rA{0.2}
\newcommand\rB{0.3}
\newcommand\rC{0.5}
\newcommand\rD{0.8}
\newcommand\lA{0.1}
\newcommand\lB{0.4}
\newcommand\lC{0.6}
\newcommand\lD{0.7}
\begin{figure}
\begin{subfigure}[b]{.25\textwidth}
\begin{center}
\begin{tikzpicture}[scale = 3]
\coordinate (BL) at (0,0);
\coordinate (BR) at (1,0);
\coordinate (TL) at (0,1);
\coordinate (TR) at (1,1);
\draw [black] (BL) -- (BR);
\draw [black] (BL) -- (TL);
\draw [black] (TL) -- (TR);
\draw [black] (BR) -- (TR);
\coordinate (E) at (\rA, \lD);
\coordinate (F) at (\rB, \lA);
\coordinate (G) at (\rC, \lC);
\coordinate (H) at (\rD, \lB);
\fill (E) circle[radius=0.5pt];
\fill (F) circle[radius=0.5pt];
\fill (G) circle[radius=0.5pt];
\fill (H) circle[radius=0.5pt];
\foreach \x in {0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0}
\draw (\x,0pt) -- (\x,-0.5pt);
\foreach \x in {0.0,0.5,1.0}
\draw (\x,0pt) -- (\x,-0.5pt)
node[anchor=north] {$\x$};
\foreach \y in {0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0}
\draw (0pt,\y) -- (-0.5pt,\y);
\foreach \y in {0.0,0.5,1.0}
\draw (0pt,\y) -- (-0.5pt,\y)
node[anchor=east] {$\y$};
\end{tikzpicture}
\end{center}
\caption{Point configuration}
\end{subfigure}
\begin{subfigure}[b]{.33\textwidth}
\begin{center}
\ytableausetup{centertableaux,boxsize=2.5em}
\begin{ytableau}
\stackrel{1}{\lA} & \stackrel{2}{\lB} \\
\stackrel{3}{\lC} \\
\stackrel{4}{\lD} \\
\end{ytableau}
\ytableausetup{centertableaux,boxsize=2.5em}
\begin{ytableau}
\stackrel{1}{\rA} & \stackrel{3}{\rC} \\
\stackrel{2}{\rB} \\
\stackrel{4}{\rD} \\
\end{ytableau}
\end{center}
\caption{Pair of decorated Young tableaux}
\end{subfigure}
\begin{subfigure}[b]{.3\textwidth}
\begin{center}
\begin{tikzpicture}[xscale = 3,yscale=0.5]
\foreach \y in {2,1,0,-1,-2,-3,-4}
\draw[thin,densely dotted](1,\y) -- (-1,\y)
node[anchor=east] {$\y$};
\coordinate (Laxis) at (-1,-4.5);
\coordinate (Raxis) at (1,-4.5);
\coordinate (O) at (0,-4.5);
\coordinate (vert) at (0,3);
\draw[black,<->] (Laxis) -- (Raxis);
\draw[black,->] (O) -- (vert);
\foreach \x in {-0.9,-0.8,-0.7,-0.6,-0.5,-0.4,-0.3,-0.2,-0.1,0,0.9,0.8,0.7,0.6,0.5,0.4,0.3,0.2,0.1}
\draw(\x,-4.5) -- (\x,-4.6);
\foreach \x in {-1.0,-0.5,0,0.5,1.0}
\draw (\x,-4.5) -- (\x,-4.6)
node[anchor=north] {$\x$};
\draw (0, -5.3) -- (0, -5.3) node[anchor=north] {t};
\coordinate (LA) at (-1,-1);
\coordinate (LB) at (-1,-2);
\coordinate (LC) at (-1,-3);
\coordinate (LD) at (-1,-4);
\coordinate (LE) at (-1,-5);
\coordinate (RA) at (1,-1);
\coordinate (RB) at (1,-2);
\coordinate (RC) at (1,-3);
\coordinate (RD) at (1,-4);
\draw [black,thick] (LD) -- (RD);
\coordinate (JAa) at (-1.0+\lA,-1);
\coordinate (JAb) at (-1.0+\lA,0);
\coordinate (JAc) at (-1.0+\lB,0);
\coordinate (JAd) at (-1.0+\lB,1);
\coordinate (JAe) at (1.0-\rC,1);
\coordinate (JAf) at (1.0-\rC,0);
\coordinate (JAg) at (1.0-\rA,0);
\coordinate (JAh) at (1.0-\rA,-1);
\draw [black,thick] (LA) -- (JAa);
\draw [black,thick] (JAa) -- (JAb);
\draw [black,thick] (JAb) -- (JAc);
\draw [black,thick] (JAc) -- (JAd);
\draw [black,thick] (JAd) -- (JAe);
\draw [black,thick] (JAe) -- (JAf);
\draw [black,thick] (JAf) -- (JAg);
\draw [black,thick] (JAg) -- (JAh);
\draw [black,thick] (JAh) -- (RA);
\coordinate (JBa) at (-1.0+\lC,-2);
\coordinate (JBb) at (-1.0+\lC,-1);
\coordinate (JBc) at (1.0-\rB,-1);
\coordinate (JBd) at (1.0-\rB,-2);
\draw [black,thick] (LB) -- (JBa);
\draw [black,thick] (JBa) -- (JBb);
\draw [black,thick] (JBb) -- (JBc);
\draw [black,thick] (JBc) -- (JBd);
\draw [black,thick] (JBd) -- (RB);
\coordinate (JCa) at (-1.0+\lD,-3);
\coordinate (JCb) at (-1.0+\lD,-2);
\coordinate (JCc) at (1.0-\rD,-2);
\coordinate (JCd) at (1.0-\rD,-3);
\draw [black,thick] (LC) -- (JCa);
\draw [black,thick] (JCa) -- (JCb);
\draw [black,thick] (JCb) -- (JCc);
\draw [black,thick] (JCc) -- (JCd);
\draw [black,thick] (JCd) -- (RC);
\end{tikzpicture}
\caption{Non-intersecting line ensemble}
\end{center}
\end{subfigure}
\caption{An example of the decorated Robinson-Schensted correspondence applied to a particular configuration from $\mathcal{C}_n^{\theta}$ when $\theta=1.0$ and $n=4$. The lines of the associated non-intersecting line ensemble, $M_{\tilde{L},\tilde{R}}(i;t)$, are also plotted for $i=1,2,3,4$. In this example, the point configuration is $\binom{x_{1}\ x_{2}\ x_{3} \ x_{4}}{y_{1}\ y_{2}\ y_{3} \ y_{4}} = \binom{ \rA \ \rB \ \rC \ \rD}{\lD \ \lA \ \lC \ \lB}$. The sorted $x$ and $y$ coordinates are respectively $(r_1,r_2,r_3,r_4)=(\rA,\rB,\rC,\rD)$ and $(\ell_1,\ell_2,\ell_3,\ell_4)=(\lA,\lB,\lC,\lD)$, and the associated permutation is $\sigma = \binom{1 \ 2 \ 3 \ 4}{4 \ 1 \ 3 \ 2}$. Note that in this case the functions $M_{\tilde{L},\tilde{R}}(i,\cdot)\equiv -i$ are constant for $i \geq 4$; only the first three lines are non-constant. All three pictures contain exactly the same information because of the bijections between the three objects.}
\label{fig:small_example}
\end{figure}
\section{The Poissonized Robinson-Schensted Process}
\subsection{Definition}
\begin{defn}
Fix a parameter $\theta \in \mathbb{R}_{+}$. A rate 1 Poisson point process in $[0,\theta]\times[0,\theta]$ is a probability
measure on the set of configurations $\mathcal{C}^{\theta}$. By applying the
decorated RS correspondence, this induces a probability measure on
$\mathcal{T}^{\theta}$. We will refer to the resulting random pair of tableaux
$\left(\tilde{L},\tilde{R}\right)\in\mathcal{T}^{\theta}$ as the \textbf{Poissonized RS tableaux}, to the resulting random Young diagram process as the \textbf{Poissonized RS process}, and to the resulting random non-intersecting line ensemble as the \textbf{Poissonized RS line ensemble}. A realization of the Poissonized RS line ensemble for the case $\theta = 40.0$ is displayed in Figure \ref{thepic}.
The main results of this article characterize the law of the Poissonized RS process. Both the Young diagram process and the non-crossing line ensemble of this object have natural descriptions, and in this section we describe their laws. For the rest of the section, denote by $\left(\tilde{L},\tilde{R}\right)=\left(\left(L,(\ell_{1},\ldots,\ell_{\abs{\text{sh} L}})\right),\left(R,(r_{1},\ldots,r_{\abs{\text{sh} R}})\right)\right)$ a Poissonized RS random variable. The Young diagram process $\lambda_{\tilde{L},\tilde{R}}$ is a $\mathbb{Y}$-valued stochastic process,
and $M_{\tilde{L},\tilde{R}}$ is a random non-intersecting line
ensemble.
\end{defn}
\subsection{Law of the Young diagram process}
\begin{thm}
\label{SchurProcessThm}Fix $\theta>0$ and times $t_{1}<\ldots<t_{n}\in[-\theta,0)$
and $s_{1}<\ldots<s_{m}\in(0,\theta]$. Suppose we are given an increasing
list of Young diagrams $\lambda^{(1)}\subset\ldots\subset\lambda^{(n)}$, a decreasing
list of Young diagrams $\mu^{(1)}\supset\ldots\supset\mu^{(m)}$ and
a Young diagram $\nu$ with $\nu\supset\lambda^{(n)}$ and $\nu\supset\mu^{(1)}$.
To simplify the presentation we will use the convention $\lambda^{(0)}=\emptyset,\lambda^{(n+1)}=\nu,\mu^{(0)}=\nu,\mu^{(m+1)}=\emptyset$
and $t_{0}=-\theta,t_{n+1}=0,s_{0}=0,s_{m+1}=\theta$. The Poissonized RS process $\lambda_{\tilde{L},\tilde{R}}$ has the following finite dimensional distribution:
\begin{eqnarray*}
 & & \mathbf{P}\left( \bigcap_{i=1}^{n}\left\{ \lambda_{\tilde{L},\tilde{R}}(t_{i})=\lambda^{(i)}\right\} \cap \left\{ \lambda_{\tilde{L},\tilde{R}}(0)=\nu\right\} \cap \bigcap_{j=1}^{m} \left\{ \lambda_{\tilde{L},\tilde{R}}(s_{j})=\mu^{(j)}\right\} \right)\\
 & = & e^{-\theta^{2}} \left( \prod_{i=0}^{n} \dim(\lambda^{(i+1)}/\lambda^{(i)}) \frac{(t_{i+1} - t_{i})^{ \abs{\lambda^{(i+1)}/\lambda^{(i)}} }}{\abs{\lambda^{(i+1)}/\lambda^{(i)}}!} \right) \cdot \left( \prod_{j=0}^{m} \dim(\mu^{(j)}/\mu^{(j+1)})\frac{(s_{j+1} - s_{j})^{\abs{\mu^{(j)}/\mu^{(j+1)}}}}{\abs{\mu^{(j)}/\mu^{(j+1)}}!} \right),
\end{eqnarray*}
where $\dim(\lambda/\mu)$ is the number of standard Young tableaux of skew shape $\lambda/\mu$.
\end{thm}
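As a sanity check of Theorem \ref{SchurProcessThm}, which we record here for the reader's convenience, take $n=m=0$, so that only the time $t=0$ is observed. Both products collapse to a single factor with $t_{1}-t_{0}=s_{1}-s_{0}=\theta$ and $\dim(\nu/\emptyset)=\dim(\nu)$, and we recover the Poissonized Plancherel measure as the fixed time marginal:

```latex
% Single-time marginal at t = 0: the n = m = 0 case of the theorem.
\[
\mathbf{P}\left(\lambda_{\tilde{L},\tilde{R}}(0)=\nu\right)
 = e^{-\theta^{2}}\left(\dim(\nu)\frac{\theta^{\abs{\nu}}}{\abs{\nu}!}\right)^{2}
 = \mathbf{P}_{\theta}(\nu).
\]
```

This agrees with the claim from the introduction that the fixed time marginals of the Poissonized RS process are exactly the Poissonized Plancherel measure.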
\begin{figure}
\begin{center}
\includegraphics[clip,width=6in]{pic}
\end{center}
\caption{A realization of the non-intersecting line ensemble for the Poissonized
RS process in the case $\theta=40$ created from $\approx 1600$ points
in the plane. Only the lines that are non-constant are shown. This can be simulated efficiently because the tableaux are created by exactly one execution of the decorated Robinson-Schensted algorithm applied to a realization of a Poisson point process in the plane. }
\label{thepic}
\end{figure}
The proof of Theorem \ref{SchurProcessThm} is deferred to Subsection \ref{subsectionpf}; it uses simple properties of the Robinson--Schensted correspondence together with probabilistic arguments.
\begin{rem}
\label{ShurProcRem}
The conclusion of the theorem can be rewritten in a very algebraically satisfying way in terms of Schur functions specialized by the Plancherel specialization $\rho(t)$ (see Subsection \ref{sec:notation} for our notations). Using the identity from Equation \ref{schurid}, the result of Theorem \ref{SchurProcessThm} can be rewritten as
\begin{eqnarray*}
& & \mathbf{P}\left(\bigcap_{i=1}^{n}\left\{ \lambda_{\tilde{L},\tilde{R}}(t_{i})=\lambda^{(i)}\right\} \cap \left\{ \lambda_{\tilde{L},\tilde{R}}(0)=\nu\right\} \cap \bigcap_{j=1}^{m}\left\{ \lambda_{\tilde{L},\tilde{R}}(s_{j})=\mu^{(j)}\right\} \right)\\
& = & e^{-\theta^{2}}\left(\prod_{i=0}^{n}s_{\lambda^{(i+1)}/\lambda^{(i)}}\left(\rho_{t_{i+1}-t_{i}}\right)\right)\cdot\left(\prod_{j=0}^{m}s_{\mu^{(j)}/\mu^{(j+1)}}\left(\rho_{s_{j+1}-s_{j}}\right)\right).
\end{eqnarray*}
In the literature (see for instance \cite{Okounkov01correlationfunction}
or \cite{1212.3351} for a survey) this type of distribution, arising from specializations on a sequence of Young diagrams, is known as a Schur process. The particular Schur process
that appears here has a very simple ``staircase'' diagram, illustrated
here in the case $n=m=2$:
\begin{center}
\begin{tikzpicture}[scale=1.0]
\node (A) at (0,0) {$\emptyset$};
\node (B) at (1,1) {$\lambda_{\tilde{L},\tilde{R}}(t_1)$};
\node (C) at (2,2) {$\lambda_{\tilde{L},\tilde{R}}(t_2)$};
\node (O) at (3,3) {$\lambda_{\tilde{L},\tilde{R}}(0)$};
\node (X) at (4,2) {$\lambda_{\tilde{L},\tilde{R}}(s_1)$};
\node (Y) at (5,1) {$\lambda_{\tilde{L},\tilde{R}}(s_2)$};
\node (Z) at (6,0) {$\emptyset$};
\path[->,font=\scriptsize]
(A) edge node[left]{$\rho_{t_1 + \theta}$} (B)
(B) edge node[left]{$\rho_{t_2 - t_1}$} (C)
(C) edge node[left]{$\rho_{0 - t_2}$} (O)
(O) edge node[right]{$\rho_{s_1 - 0}$} (X)
(X) edge node[right]{$\rho_{s_2 - s_1}$} (Y)
(Y) edge node[right]{$\rho_{\theta - s_2}$} (Z);
\end{tikzpicture}
\end{center}
In Section 4, we will further see that the Poissonized RS process $\lambda_{\tilde{L},\tilde{R}}$ is the same as a particular
instance of a model introduced in \cite{Borodin_stochasticdynamics}, which
is itself a special case of the dynamics studied in \cite{Borodin04markovprocesses}.
\end{rem}
\begin{cor}
At any fixed time $t$, the Young diagram $\lambda_{\tilde{L},\tilde{R}}\left(t\right)$
has the law of the Poissonized Plancherel measure with parameter $\sqrt{\theta\left(\theta-\abs t\right)}$.\end{cor}
\begin{proof}
Suppose first that $t\leq0$, the case $t\geq0$ being analogous.
By Theorem \ref{SchurProcessThm}, the two time probability distribution of
$\lambda_{\tilde{L},\tilde{R}}$ at time $t$ and time $0$ is
\[
\mathbf{P}\left(\left\{ \lambda_{\tilde{L},\tilde{R}}\left(t\right)=\lambda\right\} \cap\left\{ \lambda_{\tilde{L},\tilde{R}}(0)=\nu\right\} \right)=e^{-\theta^{2}}s_{\lambda}\left(\rho_{t+\theta}\right)s_{\nu/\lambda}\left(\rho_{0-t}\right)s_{\nu}\left(\rho_{\theta-0}\right).
\]
Summing over $\nu\in\mathbb{Y}$, employing the skew Cauchy identity $\sum_{\mu}s_{\mu/\lambda}(\rho)s_{\mu}(\rho')=H(\rho;\rho')s_{\lambda}(\rho')$, and using $H(\rho_{a};\rho_{b})=\exp\left(ab\right)$ for the exponential
specialization, we have
\begin{eqnarray*}
\mathbf{P}\left(\lambda_{\tilde{L},\tilde{R}}\left(t\right)=\lambda\right) & = & e^{-\theta^{2}}s_{\lambda}\left(\rho_{t+\theta}\right)\sum_{\nu\in\mathbb{Y}}s_{\nu/\lambda}\left(\rho_{0-t}\right)s_{\nu}\left(\rho_{\theta-0}\right)\\
& = & e^{-\theta^{2}}H(\rho_{-t};\rho_{\theta})s_{\lambda}\left(\rho_{t+\theta}\right)s_{\lambda}\left(\rho_{\theta}\right)\\
& = & e^{-\theta^{2}}e^{\abs t\theta}s_{\lambda}\left(\rho_{\theta-\abs t}\right)s_{\lambda}\left(\rho_{\theta}\right)\\
& = & e^{-\left(\sqrt{\theta(\theta-\abs t)}\right)^{2}}\left(\frac{\dim\left(\lambda\right)\left(\sqrt{\theta(\theta-\abs t)}\right)^{\abs{\lambda}}}{\abs{\lambda}!}\right)^{2},
\end{eqnarray*}
as desired.
\end{proof}
The Poissonized Plancherel measure and its asymptotics are well studied;
see for example \cite{Borodin00asymptoticsof} or \cite{Johansson01discreteorthogonal}.
This analysis shows that, for any fixed $t$, the points of the line ensemble $M_{\tilde{L},\tilde{R}}\left(\cdot;t\right)$
form a determinantal point process whose kernel is the discrete Bessel
kernel. We can also use these results to write some asymptotics for
the Poissonized RS line ensemble,
for instance the following:
\begin{cor}
Let $\left(\tilde{L}_{\theta},\tilde{R}_{\theta}\right)$ be the Poissonized
RS tableaux of parameter $\theta$. For fixed $\tau\in(-1,1)$,
the \uline{top line} $M_{\tilde{L}_{\theta},\tilde{R}_{\theta}}(1;\cdot)$
of the line ensemble, evaluated at time $\tau\theta$, satisfies the following
law of large numbers type behavior:
\[
\lim_{\theta\to\infty}\frac{M_{\tilde{L}_{\theta},\tilde{R}_{\theta}}(1;\tau\theta)}{\theta}=2\sqrt{1-\abs{\tau}}\text{ a.s.}
\]
The fluctuations are of Tracy--Widom type:
\[
\lim_{\theta\to\infty}\mathbf{P}\left(\frac{M_{\tilde{L}_{\theta},\tilde{R}_{\theta}}(1;\tau\theta)-2\theta\sqrt{1-\abs{\tau}}}{\theta^{1/3}(1-\abs{\tau})^{1/6}}\leq s\right)=F(s),
\]
where $F(s)$ is the GUE Tracy--Widom distribution.
\end{cor}
\subsection{The non-intersecting line ensemble}
\begin{defn}
Fix a parameter $\theta>0$ and an initial location $x\in\mathbb{Z}$. Let $P_{L}:\left[0,\theta\right]\to\mathbb{Z}$
and $P_{R}:\left[0,\theta\right]\to\mathbb{Z}$ be two independent rate 1 Poisson jump
processes with initial condition $P_{L}(0)=P_{R}(0)=x$. Define $P:[-\theta,\theta]\to\mathbb{Z}$
by
\[
P(t)=\begin{cases}
P_{L}(\theta+t) & t<0\\
P_{R}(\theta-t) & t\geq0
\end{cases}.
\]
A \textbf{Poisson arch} on $[-\theta,\theta]$ with initial location $x$
is the stochastic process $\left\{ A(t)\right\} _{t\in[-\theta,\theta]}$
whose probability distribution is the conditional probability distribution
of the process $\left\{ P(t)\right\} _{t\in[-\theta,\theta]}$ conditioned
on the event $\left\{ P_{L}(\theta)=P_{R}(\theta)\right\}$. This
has $A(-\theta)=A(\theta)=x$, and the conditioning ensures that $A(t)$
is actually continuous at $t=0$.
\end{defn}
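Since the conditioning event has strictly positive probability, a Poisson arch can be sampled by straightforward rejection. A minimal sketch (function names are ours):

```python
import random

def poisson_jump_times(T, rng):
    """Jump times of a rate-1 Poisson process on [0, T]."""
    times, t = [], rng.expovariate(1.0)
    while t <= T:
        times.append(t)
        t += rng.expovariate(1.0)
    return times

def sample_poisson_arch(theta, x, rng):
    """Rejection sampler for a Poisson arch on [-theta, theta] started at x:
    draw independent rate-1 paths P_L, P_R on [0, theta] and accept once
    P_L(theta) = P_R(theta).  Returns the initial location together with the
    up-jump times in [-theta, 0) and down-jump times in (0, theta] of A."""
    while True:
        left = poisson_jump_times(theta, rng)    # jumps of P_L
        right = poisson_jump_times(theta, rng)   # jumps of P_R
        if len(left) == len(right):              # the event {P_L(theta)=P_R(theta)}
            up = sorted(u - theta for u in left)     # A jumps up at t = u - theta
            down = sorted(theta - u for u in right)  # A jumps down at t = theta - u
            return x, up, down
```

The acceptance probability is $\mathbf{P}(P_{L}(\theta)=P_{R}(\theta))=e^{-2\theta}I_{0}(2\theta)$, so for large $\theta$ a direct sampler would be preferable; the rejection version is only meant to make the definition concrete.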
The Poissonized RS line ensemble, $M_{\tilde{L},\tilde{R}}(\cdot ; \cdot)$, has
a simple description in terms of Poisson arches which are conditioned not to intersect:
\begin{thm}
\label{ArchesThm}Fix $\theta>0$ and times $-\theta<t_{1}<\ldots<t_{n}<\theta$.
For any $N\in\mathbb{N}$, consider a non-intersecting line ensemble $A:\left\{ 1,2,\ldots,N\right\} \times[-\theta,\theta]\to\mathbb{Z}$,
so that $\left\{ A(i;\cdot)\right\} _{i=1}^{N}$ is a collection of
$N$ Poisson arches on $[-\theta,\theta]$ with the initial condition $A(i;-\theta)=A(i;\theta)=-i$
which are conditioned not to intersect, i.e.\ $A(i;t)<A(j;t)$ for all
$i>j$. Then the line ensemble
$A$ has the same joint probability distributions as the top $N$ lines of the
non-intersecting line ensemble $M_{\tilde{L},\tilde{R}}$, conditioned
on the event that all of the other lines $M_{\tilde{L},\tilde{R}}\left(k;\cdot\right)$
for $k>N$ do not move at all. To be precise, for fixed target points
$\left\{ x_{i,j}\right\} _{1\leq i\leq n,1\leq j\leq N}$ we have:
\[
\mathbf{P}\left(\bigcap_{i=1}^{n}\left(\bigcap_{j=1}^{N}\left\{ A(j;t_{i})=x_{i,j}\right\} \right)\right)=\mathbf{P}\left(\bigcap_{i=1}^{n}\left(\bigcap_{j=1}^{N}\left\{ M_{\tilde{L},\tilde{R}}(j;t_{i})=x_{i,j}\right\} \right)\given{M_{\tilde{L},\tilde{R}}(k;\cdot)\equiv -k\ \forall k>N}\right).
\]
\end{thm}
The proof goes through the Karlin--McGregor/Lindstr\"{o}m--Gessel--Viennot theorem and is deferred to Subsection \ref{subsection_ArchesThm}.
\subsection{Proof of Theorem \ref{SchurProcessThm}} \label{subsectionpf}
We prove this theorem by splitting it into several lemmas. The idea
behind these lemmas is to exploit the fact that the decorations and the tableaux that make up the pair of decorated tableaux of the Poissonized RS process are conditionally independent when conditioned on certain carefully chosen events.
\newcommand\Cti{ C_{t_i}\big( \lambda^{(i)} \big) }
\newcommand\Sti{ S_{t_i}\big( |\lambda^{(i)}| \big) }
\newcommand\Csi{ C_{s_i}\big( \mu^{(i)} \big) }
\newcommand\Ssi{ S_{s_i}\big( |\mu^{(i)}| \big) }
\newcommand\Co{ C_{0}\big( \nu \big) }
\begin{defn}
\label{def:shorthands}
For any $-\theta < t<\theta$, Young diagram $\lambda$, and any $k\in\mathbb{N}$, define the shorthand notations:
\begin{eqnarray*}
C_{t}\big( \lambda \big) &:=& \left\{ \lambda_{\tilde{L},\tilde{R}}(t)=\lambda \right\}, \\
S_{t}\big( k \big) &:=& \left\{ \abs{\lambda_{\tilde{L},\tilde{R}}(t)}= k \right\}.
\end{eqnarray*}
With this notation, Theorem \ref{SchurProcessThm} is an explicit formula for the probability of the event $\bigcap_{i=1}^{n} \Cti \cap \Co \cap \bigcap_{i=1}^{m} \Csi$. The events $\Sti$ and $\Ssi$ will also appear in our arguments below.
$C_{t}(\lambda)$ is the event that the Young diagram process at time $t$ is exactly equal to $\lambda$, while $S_{t}(|\lambda|) \supset C_{t}(\lambda)$ is the event that the Young diagram process at time $t$ has the same \uline{size} as $\lambda$ (but is possibly a different shape).
\end{defn}
\begin{lem}
\label{conditioninglemma}Let $N=\abs{\lambda_{\tilde{L},\tilde{R}}(0)}$.
Then $N$ has the distribution of a Poisson random variable, $N\sim \mathrm{Poisson}(\theta^{2})$.
Moreover, conditioned on the event $\left\{ N=n\right\}$, the decorations
$\left(\ell_{1},\ldots,\ell_{n}\right)$ and $\left(r_{1},\ldots,r_{n}\right)$
of $\left(\tilde{L},\tilde{R}\right)$ are independent. Still conditioned
on $\{N=n\}$, the permutation $\sigma\in S_{n}$ associated with $\left(L,R\right)$
via the RS correspondence is uniformly distributed in $S_{n}$ and
is independent of both sets of decorations. \end{lem}
\begin{proof}
From the construction of the Poissonized RS process, $\abs{\lambda_{\tilde{L},\tilde{R}}(0)}$
is the number of points in the square $[0,\theta]\times[0,\theta]$ of a
rate 1 Poisson point process in the plane. Hence $N\sim \mathrm{Poisson}(\theta^{2})$
is clear. Conditioned on $\left\{ N=n\right\}$, the points of the
rate 1 Poisson process in question are uniformly distributed in the
square $[0,\theta]\times[0,\theta]$ (this is a general property of Poisson
point processes). Consequently, each point has an $x$-coordinate
and $y$-coordinate which are uniformly distributed in $[0,\theta]$
and are independent of all the other coordinates. Hence, since the
decorations $\left(\ell_{1},\ldots,\ell_{n}\right),\left(r_{1},\ldots,r_{n}\right)$
consist of the sorted $y$-coordinates and $x$-coordinates respectively,
these decorations are independent of each other. (More specifically:
they have the distribution of the order statistics for a sample of
$n$ uniformly distributed points in $[0,\theta]$.)
To see that the permutation $\sigma\in S_{n}$ associated with these points
is uniformly distributed in $S_{n}$, first notice that this is the
same as the permutation associated with the points of the Poisson
point process. Then notice that if $\left(a_{1},\ldots,a_{n}\right),\left(b_{1},\ldots,b_{n}\right)$
are independent drawings of the order statistics for a sample of $n$
uniformly distributed points in $[0,\theta]$ and $\pi\in S_{n}$ is
drawn uniformly at random and independently of everything else, then
the points $\left\{ \left(a_{i},b_{\pi(i)}\right)\right\} _{i=1}^{n}$
form a sample of $n$ points chosen uniformly from $[0,\theta]\times[0,\theta]$.
This construction of the $n$ uniform points shows that the permutation
$\sigma$ is uniformly distributed and independent of both the $x$ and
$y$ coordinates.\end{proof}
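The decomposition in the proof is easy to realize in simulation. The following sketch (our own helper, not part of the construction above) samples the point process and reads off the two sets of decorations and the permutation $\sigma$:

```python
import random

def poissonized_rs_data(theta, rng):
    """Sample N ~ Poisson(theta^2) uniform points in [0,theta]^2, and return
    the decorations (sorted y- and x-coordinates) together with the
    permutation sigma matching the x-order of a point to its y-rank."""
    # N ~ Poisson(theta^2): count unit-exponential arrivals before theta^2
    n, s = 0, rng.expovariate(1.0)
    while s <= theta ** 2:
        n += 1
        s += rng.expovariate(1.0)
    pts = sorted((rng.uniform(0, theta), rng.uniform(0, theta)) for _ in range(n))
    r = [x for x, _ in pts]                 # decorations (r_1 <= ... <= r_n)
    ell = sorted(y for _, y in pts)         # decorations (ell_1 <= ... <= ell_n)
    rank = {y: k for k, y in enumerate(ell, start=1)}
    sigma = [rank[y] for _, y in pts]       # sigma in one-line notation
    return ell, r, sigma
```

Conversely, as in the proof, gluing two independent order-statistics samples along an independent uniform permutation reproduces the law of the uniform points.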
\begin{cor}
\label{YTcor}
Recall the shorthand notations from Definition \ref{def:shorthands}. For any Young diagram $\nu$, we have
\[
\mathbf{P}\left( \Co \right)=\left(\frac{\dim(\nu)^{2}}{\abs{\nu}!}\right)\mathbf{P}\left(N=\abs{\nu}\right).
\]
\end{cor}
\begin{proof}
By construction, the Young diagram $\lambda_{\tilde{L},\tilde{R}}(0)$
is the common shape of the tableaux $(L,R)$. Hence $\Co = \left\{ \lambda_{\tilde{L},\tilde{R}}(0)=\nu\right\} =\left\{ N=\abs{\nu}\right\} \cap\left\{ \text{sh}(L)=\nu\right\}$.
Notice that $\text{sh}(L)$ depends only on the associated permutation $\sigma$,
whose conditional distribution is known here to be uniform in $S_{n}$
by Lemma \ref{conditioninglemma}. Since the RS correspondence is a
bijection, we have only to count the number of pairs of tableaux of
shape $\nu$. Hence:
\begin{eqnarray*}
\mathbf{P}\left( \Co \right) & = & \mathbf{P}\left(\text{sh}(L)=\nu\given{N=\abs{\nu}}\right)\mathbf{P}\left(N=\abs{\nu}\right)\\
& = & \left(\frac{\dim(\nu)^{2}}{\abs{\nu}!}\right)\mathbf{P}\left(N=\abs{\nu}\right).
\end{eqnarray*}
\end{proof}
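For straight shapes, $\dim(\nu)$ is given by the hook length formula, so the factor $\dim(\nu)^{2}/\abs{\nu}!$ in Corollary \ref{YTcor} is explicit. A minimal sketch (helper name is ours):

```python
from math import factorial

def hook_dim(shape):
    """dim(nu): number of standard Young tableaux of (straight) shape
    `shape`, via the hook length formula."""
    shape = tuple(shape)
    if not shape:
        return 1
    # conj[c] = number of rows whose length exceeds column index c
    conj = [sum(1 for row in shape if row > c) for c in range(shape[0])]
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1   # arm + leg + 1
    return factorial(sum(shape)) // hooks
```

The RS bijection underlying the proof gives $\sum_{\nu\vdash n}\dim(\nu)^{2}=n!$, which is a quick sanity check on the formula and confirms that the probabilities $\mathbf{P}(\Co)$ over $\nu\vdash n$ sum to $\mathbf{P}(N=n)$.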
\begin{lem}
\label{sizelemma} Recall the shorthand notations from Definition \ref{def:shorthands}. Consider the law of the process conditioned on the
event $C_0(\nu)$. The conditional
probability that $\lambda_{\tilde{L},\tilde{R}}$ has the correct \uline{sizes}
at times $t_{1}<\ldots<t_{n}\in[-\theta,0]$ is given by
\begin{eqnarray*}
\mathbf{P}\left(\bigcap_{i=1}^{n} S_{t_i}\big( |\lambda^{(i)}| \big) \given{ C_0\big(\nu\big) }\right) & = & \frac{\prod_{i=0}^{n}P_{\theta\left(t_{i+1}-t_{i}\right)}\left(\abs{\lambda^{(i+1)}}-\abs{\lambda^{(i)}}\right)}{\mathbf{P}\left(N=\abs{\nu}\right)},
\end{eqnarray*}
where $P_{r}(k)=e^{-r}\frac{r^{k}}{k!}$ is the Poisson probability
mass function. An analogous formula holds for $\mathbf{P}\left(\bigcap_{i=1}^{m} \Ssi \given{\Co} \right)$. Moreover, we have the following type of conditional independence for the sizes at
times $-\theta<t_{1}<\ldots<t_{n}<0$ and at times $0<s_{1}<\ldots<s_{m}<\theta$:
$$
\mathbf{P}\left( \bigcap_{i=1}^{n} \Sti \cap \bigcap_{i=1}^{m} \Ssi \given{\Co} \right) = \mathbf{P}\left( \bigcap_{i=1}^{n} \Sti \given{\Co} \right)\cdot \mathbf{P}\left( \bigcap_{i=1}^{m} \Ssi \given{\Co} \right).
$$
\end{lem}
\begin{proof}
As in the previous lemma, we have $\Co =\left\{ N=\abs{\nu}\right\} \cap\left\{ \text{sh}(L)=\nu\right\}$. Then consider:
\begin{eqnarray*}
\mathbf{P}\left(\bigcap_{i=1}^{n}\Sti \cap \Co \right) & = & \mathbf{P}\left(\bigcap_{i=1}^{n} \Sti \cap \left\{ N=\abs{\nu}\right\} \cap\left\{ \text{sh}(L)=\nu\right\} \right)\\
& = & \mathbf{P}\left(\bigcap_{i=1}^{n} \Sti \cap\left\{ \text{sh}(L)=\nu\right\} \given{N=\abs{\nu}}\right)\mathbf{P}\left(N=\abs{\nu}\right).
\end{eqnarray*}
But now, when conditioned on $\left\{N=\abs{\nu}\right\}$, the event $\bigcap_{i=1}^{n}\Sti$ and the event $\left\{ \text{sh}(L)=\nu\right\}$ are independent. The former event depends only on the decorations $\left(\ell_{1},\ldots,\ell_{\abs{\nu}}\right)$ by the
definition of $\lambda_{\tilde{L},\tilde{R}}$ and since $t_{i}<0$,
while the latter event depends only on the associated permutation $\sigma$,
and these are conditionally independent by Lemma \ref{conditioninglemma}.
Hence:
\begin{eqnarray*}
& & \mathbf{P}\left(\bigcap_{i=1}^{n} \Sti \cap\left\{ \text{sh}(L)=\nu\right\} \given{N=\abs{\nu}}\right)\mathbf{P}\left(N=\abs{\nu}\right)\\
& = & \mathbf{P}\left(\bigcap_{i=1}^{n} \Sti \given{N=\abs{\nu}}\right)\mathbf{P}\left(\text{sh}(L)=\nu\given{N=\abs{\nu}}\right)\mathbf{P}\left(N=\abs{\nu}\right)\\
& = & \mathbf{P}\left(\bigcap_{i=1}^{n} \Sti \given{N=\abs{\nu}}\right)\mathbf{P}\Big( \Co \Big).
\end{eqnarray*}
Putting the above displays together, we have:
\[
\mathbf{P}\left(\bigcap_{i=1}^{n} \Sti \given{\Co}\right)=\mathbf{P}\left(\bigcap_{i=1}^{n} \Sti \given{N=\abs{\nu}}\right).
\]
Now from the definition of the Young diagram process, we have, for
$-\theta<t<0$, that $\abs{\lambda_{\tilde{L},\tilde{R}}(t)}=\abs{\left\{ i:\ \ell_{i}<t+\theta\right\} }$,
and we see that the event $\bigcap_{i=1}^{n}\Sti \cap\left\{ N=\abs{\nu}\right\}$
depends only on counting the number of decorations from $\left(\ell_{1},\ldots,\ell_{\abs{\nu}}\right)$
in the appropriate regions:
\begin{eqnarray*}
\bigcap_{i=1}^{n} \Sti \cap\left\{ N=\abs{\nu}\right\} & = & \bigcap_{i=0}^{n+1}\left\{ \abs{\lambda_{\tilde{L},\tilde{R}}(t_{i})}=\abs{\lambda^{(i)}}\right\} \\
& = & \bigcap_{i=0}^{n}\left\{ \left|\left\{ j\ :\ t_{i}+\theta<\ell_{j}<t_{i+1}+\theta\right\} \right|=\abs{\lambda^{(i+1)}}-\abs{\lambda^{(i)}}\right\}.
\end{eqnarray*}
Finally, from the construction, we notice that the random variable
$\abs{\left\{ j:t_{i}+\theta<\ell_{j}<t_{i+1}+\theta\right\} }$ counts
the number of points of the Poisson point process in the region $[t_{i}+\theta,t_{i+1}+\theta]\times[0,\theta]$.
Consequently these random variables are independent for different
values of $i$ and are distributed according to
\[
\abs{\left\{ j:t_{i}+\theta<\ell_{j}<t_{i+1}+\theta\right\} }\sim \mathrm{Poisson}\left(\theta\left(t_{i+1}-t_{i}\right)\right).
\]
This observation, together with the preceding display, gives the first result of the lemma.
To see the second result about the conditional independence at times
$-\theta<t_{1}<\ldots<t_{n}<0$ and at times $0<s_{1}<\ldots<s_{m}<\theta$,
we repeat the arguments above and notice that the events at times $-\theta<t_{1}<\ldots<t_{n}<0$
depend only on the decorations $(\ell_{1},\ldots,\ell_{n})$ (because
$\abs{\lambda_{\tilde{L},\tilde{R}}(t)}=\abs{\left\{ i:\ \ell_{i}<t+\theta\right\} }$)
while the events at times $0<s_{1}<\ldots<s_{m}<\theta$ depend only on the decorations
$(r_{1},\ldots,r_{m})$ (because $\abs{\lambda_{\tilde{L},\tilde{R}}(s)}=\abs{\left\{ i:\ r_{i}<\theta-s\right\} }$).
These decorations are conditionally independent when conditioned on
$\{N=\abs{\nu}\}$ by Lemma \ref{conditioninglemma}, and the desired
independence result follows.\end{proof}
\begin{lem}
\label{shapelem}
Recall the shorthand notations from Definition \ref{def:shorthands}. We
have
\[
\mathbf{P}\left(\bigcap_{i=0}^{n+1}\Cti \given{\bigcap_{i=0}^{n+1} \Sti \cap \Co}\right)=\frac{\prod_{i=0}^{n}\dim\left(\lambda^{(i+1)}/\lambda^{(i)}\right)}{\dim(\nu)}.
\]
An analogous formula holds for $\mathbf{P}\left(\bigcap_{i=0}^{m+1} \Csi \given{\bigcap_{i=0}^{m+1} \Ssi \cap \Co}\right)$. Moreover, we have the following type of conditional independence at
times $-\theta<t_{1}<\ldots<t_{n}<0$ and at times $0<s_{1}<\ldots<s_{m}<\theta$:
\begin{eqnarray*}
& & \mathbf{P}\left(\bigcap_{i=0}^{n+1} \Cti \cap \bigcap_{i=0}^{m+1} \Csi \given{\bigcap_{i=0}^{n+1} \Sti \cap\bigcap_{i=0}^{m+1} \Ssi \cap \Co }\right)\\
& = & \mathbf{P}\left(\bigcap_{i=0}^{n+1}\Cti \given{\bigcap_{i=0}^{n+1} \Sti \cap \Co}\right)\cdot\mathbf{P}\left(\bigcap_{i=0}^{m+1} \Csi \given{\bigcap_{i=0}^{m+1} \Ssi \cap \Co}\right).
\end{eqnarray*}
\end{lem}
\begin{proof}
For a standard Young tableau $T$ and $a<b\in\mathbb{N}$, we will denote by
$\text{sh}(T_{a,b})$ the skew Young diagram which consists of the boxes
of $T$ which are labeled with a number $i$ so that $a\leq i\leq b$,
and the empty Young diagram in the case $b<a$. With this notation,
we now notice, for $t<0$, that $C_{t}(\lambda) = \left\{ \lambda_{\tilde{L},\tilde{R}}(t)=\lambda\right\} = \left\{ \text{sh}(L_{1,\abs{\lambda_{\tilde{L},\tilde{R}}(t)}})=\lambda\right\}$.
By the same token we have:
\[
\bigcap_{i=0}^{n+1}\Cti = \bigcap_{i=0}^{n+1}\left\{ \text{sh}(L_{1,\abs{\lambda_{\tilde{L},\tilde{R}}(t_{i})}})=\lambda^{(i)}\right\} = \bigcap_{i=0}^{n}\left\{ \text{sh}(L_{\abs{\lambda_{\tilde{L},\tilde{R}}(t_{i})}+1,\abs{\lambda_{\tilde{L},\tilde{R}}(t_{i+1})}})=\lambda^{(i+1)}/\lambda^{(i)}\right\}.
\]
Hence:
\begin{eqnarray*}
& & \mathbf{P}\left(\bigcap_{i=0}^{n+1}\Cti \given{\bigcap_{i=0}^{n+1}\Sti \cap \Co}\right)\\
& = & \mathbf{P}\left(\bigcap_{i=0}^{n}\left\{ \text{sh}(L_{\abs{\lambda_{\tilde{L},\tilde{R}}(t_{i})}+1,\abs{\lambda_{\tilde{L},\tilde{R}}(t_{i+1})}})=\lambda^{(i+1)}/\lambda^{(i)}\right\} \given{\bigcap_{i=0}^{n}\left\{ \abs{\lambda_{\tilde{L},\tilde{R}}(t_{i})}=\abs{\lambda^{(i)}}\right\} \cap \Co}\right)\\
& = & \mathbf{P}\left(\bigcap_{i=0}^{n}\left\{ \text{sh}(L_{\abs{\lambda^{(i)}}+1,\abs{\lambda^{(i+1)}}})=\lambda^{(i+1)}/\lambda^{(i)}\right\} \given{\bigcap_{i=0}^{n}\left\{ \abs{\lambda_{\tilde{L},\tilde{R}}(t_{i})}=\abs{\lambda^{(i)}}\right\} \cap \Co}\right).
\end{eqnarray*}
We now notice that this event depends only on the Young
tableau $L$, which is entirely determined by the associated permutation
$\sigma$ (via the RS algorithm). Consequently, the conditioning on $\bigcap_{i=0}^{n}\left\{ \abs{\lambda_{\tilde{L},\tilde{R}}(t_{i})}=\abs{\lambda^{(i)}}\right\}$,
which depends only on the decorations $\left(\ell_{1},\ldots,\ell_{\abs{\nu}}\right)$,
has no effect here since $\sigma$ and these decorations are conditionally
independent by Lemma \ref{conditioninglemma}. Removing this conditioning
on $\bigcap_{i=0}^{n}\left\{ \abs{\lambda_{\tilde{L},\tilde{R}}(t_{i})}=\abs{\lambda^{(i)}}\right\}$,
we remain with:
\[
\mathbf{P}\left(\bigcap_{i=0}^{n+1}\Cti \given{\bigcap_{i=0}^{n+1} \Sti \cap \Co }\right) = \mathbf{P}\left(\bigcap_{i=0}^{n}\left\{ \text{sh}(L_{\abs{\lambda^{(i)}}+1,\abs{\lambda^{(i+1)}}})=\lambda^{(i+1)}/\lambda^{(i)}\right\} \given{\Co}\right).
\]
With this conditioning, since $\sigma$ is uniformly distributed by Lemma \ref{conditioninglemma},
and because the RS algorithm is a bijection, the Young tableau $L$
is uniformly distributed among the set of all Young tableaux of shape
$\nu$. Hence it suffices to count the number of tableaux of shape
$\nu$ with the correct intermediate shapes, i.e.\ the tableaux $L$
that have $\text{sh}(L_{\abs{\lambda^{(i)}}+1,\abs{\lambda^{(i+1)}}})=\lambda^{(i+1)}/\lambda^{(i)}$
for each $i$. Since each $L_{\abs{\lambda^{(i)}}+1,\abs{\lambda^{(i+1)}}}$ must
itself be a standard Young tableau of skew shape $\lambda^{(i+1)}/\lambda^{(i)}$,
this is:
\begin{eqnarray*}
& & \left|\left\{ \text{S.Y.T. }L\ :\text{sh}(L)=\nu,\ \text{sh}\left(L_{\abs{\lambda^{(i)}}+1,\abs{\lambda^{(i+1)}}}\right)=\lambda^{(i+1)}/\lambda^{(i)}\ \forall\,0\leq i\leq n\right\} \right|\\
& = & \dim(\lambda^{(1)}/\emptyset)\dim\left(\lambda^{(2)}/\lambda^{(1)}\right)\ldots\dim\left(\lambda^{(n)}/\lambda^{(n-1)}\right)\dim\left(\nu/\lambda^{(n)}\right).
\end{eqnarray*}
Dividing by $\dim(\nu)$, the total number of tableaux of shape $\nu$, gives the desired probability and completes the first result of
the lemma.
To see the second result about the conditional independence at times
$-\theta<t_{1}<\ldots<t_{n}<0$ and at times $0<s_{1}<\ldots<s_{m}<\theta$,
we repeat arguments analogous to the above to see that
\[
\mathbf{P}\left(\bigcap_{i=0}^{n+1}\Cti \cap \bigcap_{i=0}^{m+1} \Csi \given{\bigcap_{i=0}^{n+1} \Sti \cap \bigcap_{i=0}^{m+1} \Ssi \cap \Co}\right) = \mathbf{P}\left(A_{L}\cap A_{R}\given{\Co}\right),
\]
where
\begin{eqnarray*}
A_{L} &:=& \bigcap_{i=0}^{n}\left\{ \text{sh}(L_{\abs{\lambda^{(i)}}+1,\abs{\lambda^{(i+1)}}})=\lambda^{(i+1)}/\lambda^{(i)}\right\} \\
A_{R} &:=& \bigcap_{i=0}^{m}\left\{ \text{sh}(R_{\abs{\mu^{(i+1)}}+1,\abs{\mu^{(i)}}})=\mu^{(i)}/\mu^{(i+1)}\right\}.
\end{eqnarray*}
Now the event $A_{L}$ depends only on the left tableau $L$, while
the event $A_{R}$ depends only on the right tableau $R$. By Lemma \ref{conditioninglemma}, along with the fact that the RS correspondence is a bijection, we know that under the conditioning on the event $\Co$ the tableaux $L$ and $R$ are independent of each other (both are uniformly distributed among the set of Young tableaux of shape $\nu$). Thus the events $A_{L}$
and $A_{R}$ are conditionally independent when conditioned on $\Co$, yielding the desired independence result.
\end{proof}
\begin{proof}
(Of Theorem \ref{SchurProcessThm}). The proof goes by carefully deconstructing the desired probability and using the conditional independence results from Lemma \ref{sizelemma} and Lemma \ref{shapelem}
until we reach an explicit
formula. Recall the shorthand notations from Definition \ref{def:shorthands}. We have:
\begin{eqnarray*}
& & \mathbf{P}\left(\bigcap_{i=1}^{n}\Cti \cap \Co \cap \bigcap_{i=1}^{m} \Csi \right)\\
& = & \mathbf{P}\left(\bigcap_{i=1}^{n}\Cti \cap \bigcap_{i=1}^{m} \Csi \given{\Co}\right)\mathbf{P}\left(\Co\right)\\
& = & \mathbf{P}\left(\bigcap_{i=0}^{n+1} \Cti \cap \bigcap_{i=0}^{m+1} \Csi \given{\bigcap_{i=0}^{n+1} \Sti \cap \bigcap_{i=0}^{m+1} \Ssi \cap \Co}\right)\cdot\\
& & \cdot\mathbf{P}\left(\bigcap_{i=0}^{n+1} \Sti \cap \bigcap_{i=0}^{m+1} \Ssi \given{\Co}\right)\cdot\mathbf{P}\left(\Co\right)\\
& = & \mathbf{P}\left(\bigcap_{i=0}^{n+1}\Cti \given{\bigcap_{i=0}^{n+1} \Sti \cap \Co}\right)\cdot \mathbf{P}\left(\bigcap_{i=0}^{m+1} \Csi \given{\bigcap_{i=0}^{m+1} \Ssi \cap \Co}\right)\cdot\\
& & \cdot\mathbf{P}\left(\bigcap_{i=0}^{n+1} \Sti \given{\Co}\right)\mathbf{P}\left(\bigcap_{i=0}^{m+1} \Ssi \given{\Co}\right)\cdot\mathbf{P}\left(\Co\right)\\
& = & \left(\frac{\prod_{i=0}^{n}\dim\left(\lambda^{(i+1)}/\lambda^{(i)}\right)}{\dim(\nu)}\right)\cdot\left(\frac{\prod_{i=0}^{m}\dim\left(\mu^{(i)}/\mu^{(i+1)}\right)}{\dim(\nu)}\right)\cdot\\
& & \cdot\left(\frac{\prod_{i=0}^{n}P_{\theta\left(t_{i+1}-t_{i}\right)}\left(\abs{\lambda^{(i+1)}}-\abs{\lambda^{(i)}}\right)}{\mathbf{P}\left(N=\abs{\nu}\right)}\right)\cdot\left(\frac{\prod_{i=0}^{m}P_{\theta\left(s_{i+1}-s_{i}\right)}\left(\abs{\mu^{(i)}}-\abs{\mu^{(i+1)}}\right)}{\mathbf{P}\left(N=\abs{\nu}\right)}\right)\cdot\left(\left(\frac{\dim(\nu)^{2}}{\abs{\nu}!}\right)\mathbf{P}\left(N=\abs{\nu}\right)\right).
\end{eqnarray*}
The desired result now follows by substituting $\mathbf{P}(N=\abs{\nu})=P_{\theta^{2}}(\abs{\nu})=e^{-\theta^{2}}\frac{\theta^{2\abs{\nu}}}{\abs{\nu}!}$ and simplifying the products of Poisson probability mass functions.
\end{proof}
\subsection{Proof of Theorem \ref{ArchesThm}}
\label{subsection_ArchesThm}
\begin{proof}
(Of Theorem \ref{ArchesThm}) The proof proceeds as follows: first, we apply the \newline Karlin--McGregor/Lindstr\"{o}m--Gessel--Viennot theorem and the Jacobi--Trudi identity for Schur functions to compute
the distribution of the Poisson arches in terms of Schur functions.
Then, by Theorem \ref{SchurProcessThm}, the right hand side is computed to be the same
expression.
For convenience of notation, divide the times into two
parts, times $-\theta<t_{1}<\ldots<t_{n}<0$ and $0<s_{1}<\ldots<s_{m}<\theta$,
and put $t_{0}=-\theta,t_{n+1}=0,s_{0}=0,s_{m+1}=\theta$. Set target points
$\left\{ x_{i,j}\right\} _{1\leq i\leq n,1\leq j\leq N}$, $\left\{ z_{j}\right\} _{1\leq j\leq N}$ and
$\left\{ y_{i,j}\right\} _{1\leq i\leq m,1\leq j\leq N}$ and consider
the event $\left\{ \bigcap_{i=1}^{n}\left(\bigcap_{j=1}^{N}\left\{ A(j;t_{i})=x_{i,j}\right\} \right)\cap\bigcap_{i=1}^{m}\left(\bigcap_{j=1}^{N}\left\{ A(j;s_{i})=y_{i,j}\right\} \right)\right\}$.
To each of the fixed time ``slices'', we associate Young diagrams $\left\{ \lambda^{(i)}\right\} _{1\leq i\leq n},\left\{ \mu^{(i)}\right\} _{1\leq i\leq m}$
and $\nu$ by prescribing the lengths of the rows:
\[
\lambda_{k}^{(i)}=\begin{cases}
x_{i,k}+k & 1\leq k\leq N\\
0 & k>N
\end{cases},\ \nu_{k}=\begin{cases}
z_{k}+k & 1\leq k\leq N\\
0 & k>N
\end{cases},\ \mu_{k}^{(i)}=\begin{cases}
y_{i,k}+k & 1\leq k\leq N\\
0 & k>N
\end{cases}.
\]
We reuse the same conventions as in Theorem \ref{SchurProcessThm}: $\lambda^{(0)}=\emptyset,\lambda^{(n+1)}=\nu,\mu^{(0)}=\nu,\mu^{(m+1)}=\emptyset$
and $t_{0}=-\theta,t_{n+1}=0,s_{0}=0,s_{m+1}=\theta$. Notice that, by the
definitions, $\lambda^{(i)}$ and $\mu^{(j)}$ are always Young diagrams
with at most $N$ non-empty rows. Moreover, the admissible target
points are exactly in bijection with the space $\mathbb{Y}(N)$ of Young
diagrams with at most $N$ non-empty rows.
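The bijection between admissible target points and $\mathbb{Y}(N)$ just described can be made concrete; the following sketch (our own helper names) converts in both directions via $x_{k}=\lambda_{k}-k$:

```python
def diagram_to_points(lam, N):
    """Map a Young diagram with at most N rows to the particle positions
    x_k = lam_k - k, k = 1..N (so the empty diagram sits at -1, ..., -N)."""
    lam = tuple(lam) + (0,) * (N - len(lam))
    return [lam[k] - (k + 1) for k in range(N)]

def points_to_diagram(xs):
    """Inverse map: a strictly decreasing list (x_1 > ... > x_N, x_k >= -k)
    back to the Young diagram lam_k = x_k + k."""
    lam = [x + (k + 1) for k, x in enumerate(xs)]
    assert all(a >= b for a, b in zip(lam, lam[1:])) and all(p >= 0 for p in lam)
    return tuple(p for p in lam if p > 0)
```

Non-intersection of the arches ($x_1 > x_2 > \ldots$) translates precisely into the weak decrease of the row lengths, which is why the assertion in the inverse map is exactly the admissibility condition.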
By an application of the Karlin--McGregor theorem \cite{karlin1959} / Lindstr\"{o}m--Gessel--Viennot theorem \cite{GesselViennot} for the law of non-intersecting random walks, we have that:
\begin{eqnarray*}
& & \mathbf{P}\left(\bigcap_{i=1}^{n}\left(\bigcap_{j=1}^{N}\left\{ A(j;t_{i})=x_{i,j}\right\} \right)\cap\bigcap_{i=1}^{m}\left(\bigcap_{j=1}^{N}\left\{ A(j;s_{i})=y_{i,j}\right\} \right)\right)\\
& = & Z_{t_{1},\ldots t_{n}s_{1},\ldots s_{m}}^{-1}\prod_{i=1}^{n+1}\det\left(W_{t_{i}-t_{i-1}}^{+}(x_{i-1,a},x_{i,b})\right)_{1\leq a,b\leq N}\cdot\prod_{i=1}^{m+1}\det\left(W_{s_{i}-s_{i-1}}^{-}(y_{i-1,a},y_{i,b})\right)_{1\leq a,b\leq N}.
\end{eqnarray*}
Here the weights $W_{t}^{+}$ and $W_{s}^{-}$ are Poisson \uline{weights}
for an increasing/decreasing Poisson process:
\begin{eqnarray*}
W_{t}^{+}\left(x,y\right) & = & \frac{t^{(y-x)}}{(y-x)!}\mathtt{1}_{\left\{ y\geq x\right\} }\ , \ W_{t}^{-}\left(x,y\right)=\frac{t^{(x-y)}}{(x-y)!}\mathtt{1}_{\left\{ x\geq y\right\} }.
\end{eqnarray*}
(We can safely ignore the factor of $e^{-t}$ that appears in the
transition probabilities as long as we are consistent with this convention
when we compute the normalizing constant $Z_{t_{1},\ldots s_{m}}$ too.)
We will now use some elementary facts from the theory of symmetric
functions to simplify the result (see \cite{StanleyVol2} or \cite{0387950672}).
Firstly, we use the following identity for the complete homogeneous
symmetric functions, specialized to the exponential specialization
of parameter $t$, namely:
\[
h_{n}(\rho_{t})=\frac{t^{n}}{n!}.
\]
With this in hand, we notice that $W_{t}^{+}(x,y)=h_{y-x}\left(\rho_{t}\right)$
and $W_{s}^{-}(x,y)=h_{x-y}(\rho_{s})$. Hence by the Jacobi--Trudi
identity (see again \cite{StanleyVol2} or \cite{0387950672}) we
have:
\begin{eqnarray*}
\det\left(W_{t}^{+}(x_{i-1,a},x_{i,b})\right)_{1\le a,b\leq N} & = & \det\left(h_{x_{i,b}-x_{i-1,a}}\left(\rho_{t}\right)\right)_{1\leq a,b\leq N}\\
& = & \det\left(h_{\left(\lambda_{b}^{(i)}-b\right)-\left(\lambda_{a}^{(i-1)}-a\right)}\left(\rho_{t}\right)\right)_{1\leq a,b\leq N}\\
& = & s_{\lambda^{(i)}/\lambda^{(i-1)}}(\rho_{t}).
\end{eqnarray*}
Similarly, we have
\[
\det\left(W_{s}^{-}(y_{i-1,a},y_{i,b})\right)_{1\le a,b\leq N}=s_{\mu^{(i-1)}/\mu^{(i)}}\left(\rho_{s}\right).
\]
Thus
\begin{eqnarray*}
& & \mathbf{P}\left(\bigcap_{i=1}^{n}\left(\bigcap_{j=1}^{N}\left\{ A(j;t_{i})=x_{i,j}\right\} \right)\cap\bigcap_{i=1}^{m}\left(\bigcap_{j=1}^{N}\left\{ A(j;s_{i})=y_{i,j}\right\} \right)\right)\\
& = & Z_{t_{1},\ldots t_{n}s_{1},\ldots s_{m}}^{-1}\left(\prod_{i=0}^{n}s_{\lambda^{(i+1)}/\lambda^{(i)}}\left(\rho_{t_{i+1}-t_{i}}\right)\right)\cdot\left(\prod_{i=0}^{m}s_{\mu^{(i)}/\mu^{(i+1)}}\left(\rho_{s_{i+1}-s_{i}}\right)\right).
\end{eqnarray*}
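The Jacobi--Trudi computation above is easy to verify numerically with exact rational arithmetic; the following sketch (helper names are ours) checks the determinant against $s_{\lambda/\mu}(\rho_{t})=\dim(\lambda/\mu)\,t^{\abs{\lambda/\mu}}/\abs{\lambda/\mu}!$ on small shapes:

```python
from fractions import Fraction
from math import factorial

def h(n, t):
    """h_n at the exponential specialization rho_t: t^n/n! (and 0 for n < 0)."""
    return t ** n / factorial(n) if n >= 0 else t * 0

def det(M):
    """Determinant by Laplace expansion; fine for the small matrices here."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def skew_schur_exp(lam, mu, t, N):
    """s_{lam/mu}(rho_t) via the Jacobi-Trudi determinant
    det( h_{(lam_b - b) - (mu_a - a)}(rho_t) )_{1 <= a, b <= N}."""
    lam = tuple(lam) + (0,) * (N - len(lam))
    mu = tuple(mu) + (0,) * (N - len(mu))
    return det([[h((lam[b] - (b + 1)) - (mu[a] - (a + 1)), t) for b in range(N)]
                for a in range(N)])
```

For instance, $s_{(2,1)}(\rho_{1})=\dim(2,1)\cdot 1^{3}/3!=1/3$, and the determinant is stable under increasing $N$, as it must be.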
We now recognize from the statement of Theorem \ref{SchurProcessThm}
that this is exactly the probability of the Young diagram process $\lambda_{\tilde{L},\tilde{R}}$
passing through the Young diagrams $\left\{ \lambda^{(i)}\right\} _{1\leq i\leq n}$
and $\left\{ \mu^{(j)}\right\} _{1\leq j\leq m}$ at the appropriate
times, up to the constant factor
$Z_{t_{1},\ldots,t_{n},s_{1},\ldots,s_{m}}^{-1}\exp\left(-\theta^{2}\right)$.
By the construction of the non-intersecting line ensemble in
terms of the Young diagram process, this is exactly the same as the
first $N$ lines of the non-intersecting line ensemble hitting the
targets $\left\{ x_{i,j}\right\} $ and $\left\{ y_{i,j}\right\} $
at the appropriate times; and, since these Young diagrams have
at most $N$ non-empty rows, the remaining lines must be trivial:
\begin{eqnarray*}
 & & Z_{t_{1},\ldots,t_{n},s_{1},\ldots,s_{m}}^{-1} \left(\prod_{i=0}^{n}s_{\lambda^{(i+1)}/\lambda^{(i)}}\left(\rho_{t_{i+1}-t_{i}}\right)\right)\cdot\left(\prod_{i=0}^{m}s_{\mu^{(i)}/\mu^{(i+1)}}\left(\rho_{s_{i+1}-s_{i}}\right)\right)\\
 & = & \mathbf{P}\left(\bigcap_{i=1}^{n} \left\{ \lambda_{\tilde{L},\tilde{R}}(t_{i})=\lambda^{(i)}\right\} \cap\left\{ \lambda_{\tilde{L},\tilde{R}}(0)=\nu\right\} \cap\bigcap_{j=1}^{m}\left\{ \lambda_{\tilde{L},\tilde{R}}(s_{j})=\mu^{(j)}\right\} \right)\\
 & = & \mathbf{P}\left(\bigcap_{i=1}^{n}\left(\bigcap_{j=1}^{N}\left\{ M_{\tilde{L},\tilde{R}}(j;t_{i})=x_{i,j}\right\} \right)\cap\bigcap_{i=1}^{m}\left(\bigcap_{j=1}^{N}\left\{ M_{\tilde{L},\tilde{R}}(j;s_{i})=y_{i,j}\right\} \right)\cap\left\{ M_{\tilde{L},\tilde{R}}(k;\cdot)=-k\ \forall k>N\right\} \right).
\end{eqnarray*}
The constant $Z_{t_{1},\ldots,t_{n},s_{1},\ldots,s_{m}}$ can be calculated
as a sum over all possible paths that the non-crossing arches can take.
By our calculation above, this is the following sum over all possible
sequences of Young diagrams $\left\{ \alpha^{(i)}\right\} _{i=1}^{n}\subset\mathbb{Y}(N)$,
$\left\{ \beta^{(j)}\right\} _{j=1}^{m}\subset\mathbb{Y}(N)$ and $\nu\in\mathbb{Y}(N)$,
which have at most $N$ non-empty rows, under the conventions $\alpha^{(0)}=\emptyset$, $\alpha^{(n+1)}=\nu=\beta^{(0)}$ and $\beta^{(m+1)}=\emptyset$:
\begin{eqnarray*}
Z_{t_{1},\ldots,t_{n},s_{1},\ldots,s_{m}} & = & \sum_{\left\{ \left\{ \alpha^{(i)}\right\} ,\left\{ \beta^{(j)}\right\} \right\} }\left(\prod_{i=0}^{n}s_{\alpha^{(i+1)}/\alpha^{(i)}}(\rho_{t_{i+1}-t_{i}})\right)\left(\prod_{i=0}^{m}s_{\beta^{(i)}/\beta^{(i+1)}}(\rho_{s_{i+1}-s_{i}})\right).
\end{eqnarray*}
Again, by Theorem \ref{SchurProcessThm}, up to a constant
factor $\exp\left(-\theta^{2}\right)$, this can be interpreted as a
probability for the Young diagram process $\lambda_{\tilde{L},\tilde{R}}$,
or for the line ensemble $M_{\tilde{L},\tilde{R}}$. Because we sum over
all possibilities for the first $N$ rows, we are left only with the
probability that the Young diagram process $\lambda_{\tilde{L},\tilde{R}}$
never has more than $N$ non-trivial rows, or equivalently that every
line $M_{\tilde{L},\tilde{R}}(k;\cdot)$ of the ensemble with $k>N$ remains still:
\begin{eqnarray*}
Z_{t_{1},\ldots,t_{n},s_{1},\ldots,s_{m}} & \propto & \sum_{\left\{ \left\{ \alpha^{(i)}\right\} ,\left\{ \beta^{(j)}\right\} \right\} }\mathbf{P}\left(\bigcap_{i=1}^{n}\left\{ \lambda_{\tilde{L},\tilde{R}}(t_{i})=\alpha^{(i)}\right\} \bigcap\left\{ \lambda_{\tilde{L},\tilde{R}}(0)=\nu\right\} \bigcap_{j=1}^{m}\left\{ \lambda_{\tilde{L},\tilde{R}}(s_{j})=\beta^{(j)}\right\} \right)\\
 & = & \mathbf{P}\left(\lambda_{\tilde{L},\tilde{R}}(\cdot)\text{ has at most }N\text{ non-empty rows}\right)\\
 & = & \mathbf{P}\left(M_{\tilde{L},\tilde{R}}(k;\cdot)=-k\ \forall k>N\right).
\end{eqnarray*}
Combining the two calculations, we see that the two factors of $\exp\left(-\theta^{2}\right)$
cancel and we are left with:
\begin{eqnarray*}
 & & \mathbf{P}\left(\bigcap_{i=1}^{n}\left(\bigcap_{j=1}^{N}\left\{ A(j;t_{i})=x_{i,j}\right\} \right)\cap\bigcap_{i=1}^{m}\left(\bigcap_{j=1}^{N}\left\{ A(j;s_{i})=y_{i,j}\right\} \right)\right)\\
 & = & \frac{\mathbf{P}\left(\bigcap_{i=1}^{n}\left(\bigcap_{j=1}^{N}\left\{ M_{\tilde{L},\tilde{R}}(j;t_{i})=x_{i,j}\right\} \right)\cap\bigcap_{i=1}^{m}\left(\bigcap_{j=1}^{N}\left\{ M_{\tilde{L},\tilde{R}}(j;s_{i})=y_{i,j}\right\} \right)\cap\left\{ M_{\tilde{L},\tilde{R}}(k;\cdot)=-k\ \forall k>N\right\} \right)}{\mathbf{P}\left(M_{\tilde{L},\tilde{R}}(k;\cdot)=-k\ \forall k>N\right)}\\
 & = & \mathbf{P}\left(\bigcap_{i=1}^{n}\left(\bigcap_{j=1}^{N}\left\{ M_{\tilde{L},\tilde{R}}(j;t_{i})=x_{i,j}\right\} \right)\cap\bigcap_{i=1}^{m}\left(\bigcap_{j=1}^{N}\left\{ M_{\tilde{L},\tilde{R}}(j;s_{i})=y_{i,j}\right\} \right)\given{M_{\tilde{L},\tilde{R}}(k;\cdot)=-k\ \forall k>N}\right),
\end{eqnarray*}
as desired.
\end{proof}
\section{Relationship to Stochastic Dynamics on Partitions}
In this section we show that the Poissonized RS process can
be understood as a special case of certain stochastic dynamics on
partitions introduced by Borodin and Olshanski in \cite{Borodin_stochasticdynamics}.
\begin{thm}
\label{curve_thm}Let $\left(u(t),v(t)\right)$, $t\in[-\theta,\theta]$,
be the parametric curve in $\mathbb{R}_{+}^{2}$ given by:
\[
u(t)=\begin{cases}
\theta & t\leq0\\
\theta-\abs t & t\geq0
\end{cases}\ ,\ v(t)=\begin{cases}
\theta-\abs t & t\leq0\\
\theta & t\geq0
\end{cases}.
\]
Also let $\square(u(t),v(t))$ be the rectangular region $[0,u(t)]\times[0,v(t)]\subset\mathbb{R}_{+}^{2}$. (See Figure \ref{fig:curve_cartoon} for an illustration of this setup.)
For any point configuration $\Pi\in\mathcal{C}^{\theta}$, let $\pi(t)$ be the
permutation associated with the point configuration $\Pi\cap\square\left(u(t),v(t)\right)$
(as in Definition \ref{associatedpermutation}) and let $\lambda_{\Pi}(t)=\text{sh}\left(RS(\pi(t))\right)$
be the \uline{shape} of the Young tableau that one gets by applying
the ordinary RS bijection to the permutation $\pi(t)$.
If $(\tilde{L},\tilde{R})=dRS(\Pi)$ is the decorated Young tableau
that one gets by applying the decorated RS bijection to the configuration
$\Pi$, then the Young diagram process of $(\tilde{L},\tilde{R})$
is exactly $\lambda_{\Pi}(t)$:
\[
\lambda_{\tilde{L},\tilde{R}}(t)=\lambda_{\Pi}(t)\ \forall t\in[-\theta,\theta].
\]
\end{thm}
\begin{proof}
For concreteness, let us suppose there are $n$ points in the configuration
$\Pi$ and label them $\left\{ (x_{i},y_{i})\right\} _{i=1}^{n}$,
sorted in ascending order of $x$-coordinate. Also let $\sigma=\pi(0)\in S_{n}$
be the permutation associated to the configuration $\Pi$ and let
$(\tilde{L},\tilde{R})=\left(L,(y_{\sigma^{-1}(1)},\ldots,y_{\sigma^{-1}(n)}),R,\left(x_{1},\ldots,x_{n}\right)\right)$
be the output of the decorated RS correspondence.
We prove first the case $t\in[0,\theta]$, then use a symmetry property
of the RS algorithm to deduce the result for $t\in[-\theta,0]$. Fix
a $k$ with $1\leq k\leq n$ and let us restrict our attention to
times $t\in[0,\theta]$ for which $x_{k}\leq\theta-t<x_{k+1}$ (with the
convention $x_{n+1}=\infty$). By the definition of the Young diagram
process, this choice of $t$ gives
\begin{eqnarray*}
\lambda_{\tilde{L},\tilde{R}}(t) & = & \left\{ (i,j):x_{R(i,j)}\leq\theta-t\right\} \\
 & = & \left\{ (i,j):R(i,j)\leq k\right\}.
\end{eqnarray*}
Now, by the definition of the decorated RS correspondence, the pair
of tableaux $(L,R)$ corresponds to the permutation $\sigma$ when one
applies the ordinary RS algorithm. Since $R$ is the recording tableau
here, the set $\left\{ (i,j):R(i,j)\leq k\right\} $ is exactly the
shape of the tableaux after $k$ steps of the RS
algorithm. At this point the algorithm has used only comparisons between
the numbers $\sigma(1),\sigma(2),\ldots,\sigma(k)$; it has not seen any other
numbers yet.
On the other hand, we have $\Pi\cap\square\left(u(t),v(t)\right)=\left\{ (x_{i},y_{i})\in\Pi:\ x_{i}\leq\theta-t\right\} =\left\{ (x_{i},y_{i})\right\} _{i=1}^{k}$
by the choice $x_{k}\leq\theta-t<x_{k+1}$ and since the $x_{i}$ are sorted.
So $\lambda_{\Pi}(t)=\text{sh}\left(RS(\pi(t))\right)$ is the shape output
by the RS algorithm after it has worked on the permutation $\pi(t)\in S_{k}$
using comparisons between the numbers $\pi(t)(1),\pi(t)(2),\ldots,\pi(t)(k)$.
But we now notice, for $1\leq i,j\leq k$, that $\pi(t)(i)<\pi(t)(j)$
if and only if $\sigma(i)<\sigma(j)$, since both happen if and only if
$y_{i}<y_{j}$. Hence, in computing $\lambda_{\Pi}(t)$, the RS algorithm
makes the exact same comparisons as the first $k$ steps of the RS
algorithm on the list $\sigma(1),\sigma(2),\ldots,\sigma(k)$. For this reason,
$\lambda_{\Pi}(t)=\left\{ (i,j):R(i,j)\leq k\right\} $. Hence $\lambda_{\tilde{L},\tilde{R}}(t)=\lambda_{\Pi}(t)$,
as desired. Since this works for any choice of $k$, this covers all
of $t\in[0,\theta]$.
To handle $t\in[-\theta,0]$, we argue as follows. Let $\Pi^{T}$ be the
reflection of the point configuration $\Pi$ about the line $x=y$,
in other words swapping the $x$ and $y$ coordinates of every point.
Then the permutation associated with $\Pi^{T}$ is $\sigma^{-1}$. Using
the remarkable symmetry property of the RS correspondence, $RS(\sigma)=(L,R)\iff RS(\sigma^{-1})=(R,L)$,
we have from the definition that $dRS(\Pi^{T})=(\tilde{R},\tilde{L})=\left(R,\left(x_{1},\ldots,x_{n}\right),L,(y_{\sigma^{-1}(1)},\ldots,y_{\sigma^{-1}(n)})\right)$.
Using the result for $t\in[0,\theta]$ applied to the configuration $\Pi^{T}$,
we have $\lambda_{\tilde{R},\tilde{L}}(t)=\lambda_{\Pi^{T}}(t)$ for all $t\in[0,\theta]$.
Now, $\lambda_{\tilde{R},\tilde{L}}(t)=\lambda_{\tilde{L},\tilde{R}}(-t)$
follows from Definition \ref{YDprocess_pair}. It is also true
that $\lambda_{\Pi^{T}}(t)=\lambda_{\Pi}(-t)$; this follows from the fact
that $\left(\square\left(u(t),v(t)\right)\right)^{T}=\square\left(u(-t),v(-t)\right)$
as regions in the plane $\mathbb{R}_{+}^{2}$, so that the permutations at
time $t$ satisfy $\pi_{\Pi^{T}}(t)=\left(\pi_{\Pi}(-t)\right)^{-1}$.
Since the RS correspondence assigns the same shape to a permutation
and its inverse, we have $\lambda_{\Pi^{T}}(t)=\lambda_{\Pi}(-t)$. Hence we conclude
that $\lambda_{\tilde{L},\tilde{R}}(t)=\lambda_{\Pi}(t)$ for $t\in[-\theta,0]$ too.
\end{proof}
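The mechanism behind the proof, namely that the shape after $k$ insertions equals the set of cells of the recording tableau with entry at most $k$, can be checked directly in code. The sketch below (helper names ours; a simplified stand-in for the decorated correspondence with $\theta=1$) row-inserts a random permutation and compares $\{(i,j):R(i,j)\leq k\}$ with the shape obtained by running RS from scratch on the first $k$ values.

```python
import bisect
import random

def rs(perm):
    """Robinson-Schensted row insertion on a list of distinct numbers.
    Returns (P, Q) as lists of rows; Q records the step at which each box appears."""
    P, Q = [], []
    for step, x in enumerate(perm, start=1):
        row = 0
        while True:
            if row == len(P):
                P.append([x]); Q.append([step])
                break
            pos = bisect.bisect_left(P[row], x)  # leftmost entry > x (entries distinct)
            if pos == len(P[row]):
                P[row].append(x); Q[row].append(step)
                break
            x, P[row][pos] = P[row][pos], x  # bump and continue on the next row
            row += 1
    return P, Q

def shape(T):
    return [len(r) for r in T]

random.seed(7)
theta, n = 1.0, 8
# a point configuration with (almost surely) distinct coordinates, sorted by x
pts = sorted((random.uniform(0, theta), random.uniform(0, theta)) for _ in range(n))
ys = [p[1] for p in pts]
sigma = [sorted(ys).index(y) + 1 for y in ys]   # sigma(i) = rank of y_i
P, R = rs(sigma)

for k in range(1, n + 1):
    # shape read off from the recording tableau after k insertions ...
    lam_from_R = [c for c in (sum(1 for e in row if e <= k) for row in R) if c > 0]
    # ... agrees with running RS from scratch on the first k values
    assert lam_from_R == shape(rs(sigma[:k])[0])
```

Since $\pi(t)$ is order isomorphic to $\sigma(1),\ldots,\sigma(k)$, this incremental property is exactly what the first half of the proof uses.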
\begin{figure}
\centering
\begin{subfigure}[b]{.3\textwidth}
\begin{center}
\begin{tikzpicture}[scale = 2.5]
\coordinate (BL) at (0,0);
\coordinate (BR) at (1,0);
\coordinate (TL) at (0,1);
\coordinate (TR) at (1,1);
\coordinate (Xmax) at (1.3,0);
\coordinate (Ymax) at (0,1.3);
\coordinate (P) at (1.0,0.6);
\fill[fill=gray!40!white] (0,0) rectangle (P)
node[anchor=west]{$\big(u(t),v(t)\big)$};
\fill (P) circle[radius=0.5pt];
\node[anchor=west] at (TR) {$(\theta,\theta)$};
\draw [black, ->, thick] (BL) -- (Xmax);
\draw [black, ->, thick] (BL) -- (Ymax);
\draw [black] (TR) -- (BR)
node[anchor=north]{$(\theta,0)$};
\draw [black] (TR) -- (TL)
node[anchor=east]{$(0,\theta)$};
\end{tikzpicture}
\end{center}
\caption{ $t <0 $}
\end{subfigure}
\hspace{0.15\textwidth}
\begin{subfigure}[b]{.3\textwidth}
\begin{center}
\begin{tikzpicture}[scale = 2.5]
\coordinate (BL) at (0,0);
\coordinate (BR) at (1,0);
\coordinate (TL) at (0,1);
\coordinate (TR) at (1,1);
\coordinate (Xmax) at (1.3,0);
\coordinate (Ymax) at (0,1.3);
\coordinate (P) at (0.7,1.0);
\fill[fill=gray!40!white] (0,0) rectangle (P)
node[anchor=south]{$\big(u(t),v(t)\big)$};
\fill (P) circle[radius=0.5pt];
\node[anchor=west] at (TR) {$(\theta,\theta)$};
\draw [black, ->, thick] (BL) -- (Xmax);
\draw [black, ->, thick] (BL) -- (Ymax);
\draw [black] (TR) -- (BR)
node[anchor=north]{$(\theta,0)$};
\draw [black] (TR) -- (TL)
node[anchor=east]{$(0,\theta)$};
\end{tikzpicture}
\end{center}
\caption{ $t > 0 $}
\end{subfigure}
\caption{ The point $\big( u(t),v(t) \big), t\in [-\theta,\theta]$ moves ``counterclockwise'' around the outer boundary of the square $[0,\theta]\times[0,\theta]$, starting from $(\theta,0)$ at $t = -\theta$, then moving to $(\theta,\theta)$ at $t=0$, and finally moving to $(0,\theta)$ at $t = \theta$. The region $\square\big(u(t),v(t)\big)$ is the shaded rectangular area bounded between $(0,0)$ and $\big( u(t),v(t) \big)$.}
\label{fig:curve_cartoon}
\end{figure}
\begin{rem}
This is exactly the construction of the random trajectories $\lambda_{\Pi}(u(t),v(t))$
from Theorem 2.3 in \cite{Borodin_stochasticdynamics}. Note that
in that work the curve $\left(u(t),v(t)\right)$ goes the other
way, ``clockwise'' around the outside edge of the box $[0,\theta]\times[0,\theta]$
rather than ``counterclockwise'' as we have here. This difference
arises purely from the convention of putting the recording tableau as
the right tableau when applying the RS algorithm and makes no practical
difference. Since our construction is a special case of the stochastic
dynamics constructed in that paper, we can use its scaling limit
results to compute the limiting behavior of the Poissonized RS
tableaux. The only obstruction is that one has to perform a change of
time coordinate to translate to what is called ``interior time''
in \cite{Borodin_stochasticdynamics} along the curve, namely $s=\frac{1}{2}\left(\ln u-\ln v\right)$.
In the corollary below, we record the scaling limit for the topmost
line of the ensemble. \end{rem}
\begin{cor}
Let $\left(\tilde{L}_{\theta},\tilde{R}_{\theta}\right)$ be the Poissonized
Robinson-Schensted tableaux of parameter $\theta$. If we scale around
the point $\tau=0$, there is convergence to the Airy 2 process on
the time scale $\theta^{2/3}$, namely:
\[
\frac{M_{\tilde{L}_{\theta},\tilde{R}_{\theta}}\left(1;2\theta^{2/3}\tau\right)-\left(2\theta-2\theta^{2/3}\abs{\tau}\right)}{\theta^{1/3}}+\tau^{2}\Rightarrow\mathcal{A}_{2}(\tau),
\]
where $\mathcal{A}_{2}(\cdot)$ is the Airy 2 process.\end{cor}
\begin{proof}
We first do a change of variables in the parameter $\theta$ by $\alpha=\theta^{2}$,
so that the curves we consider have area $u_{\alpha}(0)v_{\alpha}(0)=\alpha$
at time $0$. We will use $s$ to denote ``interior time'' along
the curve $(u_{\alpha},v_{\alpha})$ constructed in Theorem \ref{curve_thm}
(see Remark 1.4 in \cite{Borodin_stochasticdynamics} for an explanation
of interior time). This is a change of time variable $s=s(t)$
given by
\[
s(t)=\frac{1}{2}\left(\ln u_{\alpha}(t)-\ln v_{\alpha}(t)\right)=\frac{1}{2}\text{sgn}\left(t\right)\ln\left(1-\frac{\abs t}{\sqrt{\alpha}}\right).
\]
Notice by Taylor expansion that $s(2\alpha^{1/3}\tau)=-\tau\alpha^{-1/6}+o(\alpha^{-1/6})$
as $\alpha\to\infty$. By an application of Theorem 4.4 from \cite{Borodin_stochasticdynamics}
(see also Corollary 4.6 there for the simplification of the Airy line ensemble
to the Airy process), we have as $\alpha\to\infty$ that
\[
\frac{M_{\tilde{L}_{\alpha},\tilde{R}_{\alpha}}\left(1;2\alpha^{1/3}\tau\right)-2\sqrt{u_{\alpha}(2\alpha^{1/3}\tau)v_{\alpha}(2\alpha^{1/3}\tau)}}{\alpha^{1/6}}\Rightarrow\mathcal{A}_{2}(\tau).
\]
Doing a Taylor expansion now for $2\sqrt{u_{\alpha}(t)v_{\alpha}(t)}=2\sqrt{\alpha}\sqrt{1-\frac{\abs t}{\sqrt{\alpha}}}$
gives
\begin{eqnarray*}
2\sqrt{u_{\alpha}(2\alpha^{1/3}\tau)v_{\alpha}(2\alpha^{1/3}\tau)} & = & 2\sqrt{\alpha}-2\alpha^{1/3}\abs{\tau}-\alpha^{1/6}\tau^{2}+o(\alpha^{1/6}).
\end{eqnarray*}
Putting this into the above convergence to $\mathcal{A}_{2}$ gives the desired
result.\end{proof}
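The two Taylor expansions used in this proof are easy to sanity-check numerically. The sketch below (function names and tolerance thresholds are ours; $\tau$ is chosen arbitrarily) confirms that the respective error terms are indeed small compared to $\alpha^{-1/6}$ and $\alpha^{1/6}$ for large $\alpha$.

```python
import math

def uv(t, alpha):
    """The curve (u, v) from the theorem above, with theta = sqrt(alpha)."""
    th = math.sqrt(alpha)
    return (th, th - abs(t)) if t <= 0 else (th - abs(t), th)

def interior_time(t, alpha):
    """Interior time s(t) = (ln u - ln v) / 2 along the curve."""
    u, v = uv(t, alpha)
    return 0.5 * (math.log(u) - math.log(v))

tau = 0.5
for alpha in [1e8, 1e12]:
    t = 2 * alpha ** (1 / 3) * tau
    # s(2 alpha^{1/3} tau) = -tau * alpha^{-1/6} + o(alpha^{-1/6})
    err_s = abs(interior_time(t, alpha) - (-tau * alpha ** (-1 / 6)))
    assert err_s / alpha ** (-1 / 6) < 0.05
    # 2 sqrt(u v) = 2 sqrt(alpha) - 2 alpha^{1/3} |tau| - alpha^{1/6} tau^2 + o(alpha^{1/6})
    u, v = uv(t, alpha)
    approx = (2 * math.sqrt(alpha) - 2 * alpha ** (1 / 3) * abs(tau)
              - alpha ** (1 / 6) * tau ** 2)
    assert abs(2 * math.sqrt(u * v) - approx) / alpha ** (1 / 6) < 0.05
```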
\begin{rem}
The scaling that is needed for the convergence of the top line to
the Airy 2 process here is exactly the same as the scaling that appears
for a family of non-crossing Brownian bridges to converge to the Airy
2 process, see \cite{Corwin_browniangibbs}. This is not entirely
surprising in light of Theorem 3.2, which shows that $M_{\tilde{L}_{\theta},\tilde{R}_{\theta}}$
is related to a family of non-crossing Poisson arches, and it might
be expected that non-crossing Poisson arches have the same scaling
limit as non-crossing Brownian bridges. In this vein, we conjecture that it is also possible to get a convergence result for the whole ensemble (not just the topmost line) to the multi-line Airy 2 line ensemble, in the sense of weak convergence as a line ensemble introduced in Definition 2.1 of \cite{Corwin_browniangibbs}.
\end{rem}
\section{A discrete limit}
The Poissonized RS process can be realized as the scaling limit of a discrete model built from geometric random variables. This discrete model is a special case of the corner growth model studied in Section 5 of \cite{JoDPP}. We will present the precise construction of the model here, rather than simply citing \cite{JoDPP}, in order to present it in a way that makes the connection to the Poissonized RS tableaux more transparent. We also present a different argument yielding the distribution of the model, again to highlight the connection to the Poissonized RS tableaux. Our proof is very different from the proof in \cite{JoDPP}; it has a much more probabilistic flavor, closer to the proof of Theorem \ref{SchurProcessThm}.
One difference between the discrete model and the Poissonized RS process is due to the possibility of multiple points with the same $x$-coordinate or $y$-coordinate. (These events happen with probability $0$ for the Poisson point process.) To deal with this we must use the Robinson-Schensted-Knuth (RSK) correspondence, which generalizes the RS correspondence to a bijection from generalized permutations to semistandard Young tableaux (SSYT). See Section 7.11 in \cite{StanleyVol2} for a reference on the RSK correspondence.
\subsection{Discrete Robinson-Schensted-Knuth process with geometric weights}
\begin{defn}
Fix a parameter $\theta\in\mathbb{R}_{+}$ and an integer $k \in \mathbb{N}$. Let $\mathcal{L}^{\theta,k} = \{\theta/k, 2\theta/k, \ldots, \theta \}$ be a discretization of the interval $[0,\theta]$ with $k$ points. Let $T$ be any semistandard Young tableau (SSYT) whose entries do not exceed $k$. The \textbf{$\mathcal{L}^{\theta,k}$-discretized Young diagram process} for $T$ is a Young diagram valued map $\lambda^{\theta,k}_T:\mathbb{R}_+ \to \mathbb{Y}$ defined by
\[
\lambda^{\theta,k}_T(t) = \left\{ (i,j) : T(i,j)\frac{\theta}{k} \leq t \right\}.
\]
If we are given two such SSYT $L,R$ with $\text{sh}(L) = \text{sh}(R)$, we can define the $\mathcal{L}^{\theta,k}$-discretized Young diagram process $\lambda^{\theta,k}_{L,R}: [-\theta,\theta] \to \mathbb{Y}$ for the pair $(L,R)$ by
\[
\lambda^{\theta,k}_{L,R}(t)= \begin{cases}
\lambda^{\theta,k}_{L}(\theta+t) & \ t\leq 0 \\
\lambda^{\theta,k}_{R}(\theta-t) & \ t\geq 0
\end{cases}.
\]
\end{defn}
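The definition above translates directly into code. The following sketch (helper names ours) computes $\lambda^{\theta,k}_{T}(t)$ as a list of row lengths, using the SSYT with rows $122$ and $34$, $\theta=1$ and $k=4$ as a worked example.

```python
def ydp(T, t, theta, k):
    """lambda^{theta,k}_T(t): row lengths of {(i,j) : T(i,j) * theta/k <= t}."""
    lam = [sum(1 for e in row if e * theta / k <= t) for row in T]
    return [c for c in lam if c > 0]

def ydp_pair(L, R, t, theta, k):
    """The process for a pair (L, R): L governs [-theta, 0], R governs [0, theta]."""
    return ydp(L, theta + t, theta, k) if t <= 0 else ydp(R, theta - t, theta, k)

T = [[1, 2, 2], [3, 4]]  # the SSYT with rows 122 and 34
assert ydp(T, 0.25, 1.0, 4) == [1]      # only the entry 1 has decoration <= 1/4
assert ydp(T, 0.5, 1.0, 4) == [3]       # entries 1 and 2
assert ydp(T, 0.75, 1.0, 4) == [3, 1]   # entries up to 3
assert ydp(T, 1.0, 1.0, 4) == [3, 2]    # the full shape
```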
\begin{rem}
\label{discrem}
This definition is analogous to Definition \ref{YDprocess} and Definition \ref{YDprocess_pair}. Comparing with Definition \ref{YDprocess}, we see that, in the language of decorated Young tableaux, the $\mathcal{L}^{\theta,k}$-discretized Young diagram process corresponds to decorating the Young tableau with the decoration $t_{T(i,j)} = T(i,j) \frac{\theta}{k}$. For example, the SSYT $\young(122,34)$ would be represented graphically as (compare with Example \ref{YTexample}):
\begin{center}
\ytableausetup{centertableaux,boxsize=2.5em}
\begin{ytableau}
\stackrel{1}{ \frac{\theta}{k} } & \stackrel{2}{ 2\frac{\theta}{k} } & \stackrel{2}{ 2\frac{\theta}{k} } \\
\stackrel{3}{ 3\frac{\theta}{k} } & \stackrel{4}{ 4\frac{\theta}{k} }
\end{ytableau}.
\end{center}
In other words, the decorations are proportional to the entries in the Young tableau, with constant of proportionality $\slfrac{\theta}{k}$. This scaling of the $\mathcal{L}^{\theta,k}$-discretized Young diagram process, despite being a very simple proportionality, will be important for convergence to the earlier studied Poissonized RS model in the limit $k \to \infty$.
\end{rem}
\begin{defn}
Let $\mathcal{C}_{n}^{\theta,k}$ be the set of $n$ point configurations on the lattice $\mathcal{L}^{\theta,k} \times \mathcal{L}^{\theta,k} \subset\mathbb{R}_{+}^{2}$, where we allow multiple points to sit at the same site. Since there are only $k^2$ possible locations for the points, one can think of elements of $\mathcal{C}_{n}^{\theta,k}$ in a natural way as $\mathbb{N}$-valued $k \times k$ matrices whose entries sum to $n$:
\[
\mathcal{C}_{n}^{\theta,k} = \left\{ \left\{\Pi_{a,b}\right\}_{a,b =1 }^k : \sum_{a,b = 1}^{k} \Pi_{a,b} = n \right\}.
\]
Let $\mathcal{T}_{n}^{\theta,k}$ be the set of pairs of semistandard tableaux of size $n$ and of the same shape whose entries do not exceed $k$. This is
\[
\mathcal{T}_{n}^{\theta,k} = \left\{ (L,R) : \text{sh}(L) = \text{sh}(R), \abs{L} = \abs{R} = n, L(a,b) \leq k, R(a,b) \leq k \, \forall a,b \right\}.
\]
The Robinson-Schensted-Knuth (RSK) correspondence is a bijection between $\mathbb{N}$-valued matrices (or equivalently generalized permutations) and pairs of semistandard tableaux of the same shape. Thinking of $\mathcal{C}^{\theta,k}_n$ as $\mathbb{N}$-valued matrices, we see more precisely that the RSK correspondence is a bijection between $\mathcal{C}_{n}^{\theta,k}$ and $\mathcal{T}_{n}^{\theta,k}$. Composing this with the definition of the $\mathcal{L}^{\theta,k}$-discretized Young diagram process, we have a bijection, which we call the \textbf{$\mathcal{L}^{\theta,k}$-discretized RSK bijection}, between configurations in $\mathcal{C}_{n}^{\theta,k}$ and $\mathcal{L}^{\theta,k}$-discretized Young diagram processes $\lambda^{\theta,k}_{L,R}: [-\theta,\theta] \to \mathbb{Y}$. We will also use the shorthand $\mathcal{C}^{\theta,k} = \bigcup_n \mathcal{C}^{\theta,k}_n$ and $\mathcal{T}^{\theta,k} = \bigcup_n \mathcal{T}^{\theta,k}_n$.
\end{defn}
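A minimal implementation of RSK insertion may make the bijection concrete. The sketch below is our own code, using the standard row bumping rule (bump the leftmost entry strictly greater than the inserted value) applied to a matrix read off as a generalized permutation.

```python
import bisect

def rsk(M):
    """RSK: a nonnegative integer matrix M -> a pair (P, Q) of SSYT of equal shape.
    Entry M[a][b] contributes M[a][b] copies of the biletter (a+1, b+1),
    read in lexicographic order."""
    P, Q = [], []
    for a, matrix_row in enumerate(M, start=1):
        for b, mult in enumerate(matrix_row, start=1):
            for _ in range(mult):
                x, r = b, 0
                while True:
                    if r == len(P):
                        P.append([x]); Q.append([a])
                        break
                    pos = bisect.bisect_right(P[r], x)  # leftmost entry strictly > x
                    if pos == len(P[r]):
                        P[r].append(x); Q[r].append(a)
                        break
                    x, P[r][pos] = P[r][pos], x  # bump into the next row
                    r += 1
    return P, Q

# A 2x2 example: the matrix [[1,0],[1,1]] reads as the biword (1,1),(2,1),(2,2).
P, Q = rsk([[1, 0], [1, 1]])
assert P == [[1, 1, 2]] and Q == [[1, 2, 2]]
assert [len(r) for r in P] == [len(r) for r in Q]  # sh(P) = sh(Q)
```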
\begin{defn}
Let $\{ \xi_{i,j}: 1\leq i\leq k, 1\leq j \leq k \}$ be an iid collection of geometric random variables with parameter $\theta^2/k^2$ (we take $k > \theta$, so that this is a valid parameter). To be precise, each $\xi$ has the following probability mass function:
\[
\mathbf{P}(\xi = x) = \left(1 - \frac{\theta^2}{k^2}\right)\left( \frac{\theta^2}{k^2} \right)^x.
\]
This gives a probability measure on the set of point configurations $\mathcal{C}^{\theta,k}$ by placing exactly $\xi_{i,j}$ points at the location $\left(i \slfrac{\theta}{k}, j \slfrac{\theta}{k}\right)$. By applying the $\mathcal{L}^{\theta,k}$-discretized RSK bijection, this induces a probability measure on $\mathcal{L}^{\theta,k}$-discretized Young diagram processes. We refer to the resulting pair of random semistandard tableaux $(L,R)$ as the \textbf{$\mathcal{L}^{\theta,k}$-geometric weight RSK tableaux}, and we refer to the Young diagram process $\lambda^{\theta,k}_{L,R}$ as the \textbf{$\mathcal{L}^{\theta,k}$-geometric weight RSK process}.
\end{defn}
\begin{rem}
The term ``geometric weight'' is always in reference to the distribution of the variables $\xi$. This should \emph{not} be confused with the ``Geometric RSK correspondence'', as in \cite{OtherGeoRSK}, which is a different object and in which ``geometric'' refers to a geometric lifting.
\end{rem}
\begin{rem}
It is possible to construct similar models where the parameter of the geometric random variable used to place particles differs from site to site. For our purposes, however, we will stick to this simple case where all the parameters are equal, to make the construction and the convergence to the Poissonized RS process as clear as possible. See Section 5 of \cite{JoDPP} for a more general treatment.
\end{rem}
With this setup, we have the following very close analogue of Theorem \ref{SchurProcessThm} for the $\mathcal{L}^{\theta,k}$-geometric weight RSK process, which characterizes the law of this random object.
\begin{thm}
\label{DiscreteeSchurProcessThm}Fix $\theta>0$, $k \in \mathbb{N}$ and times $t_{1}<\ldots<t_{n}\in[-\theta,0]$
and $s_{1}<\ldots<s_{m}\in[0,\theta]$. Suppose we are given an increasing
list of Young diagrams $\lambda^{(1)}\subset\ldots\subset\lambda^{(n)}$, a decreasing
list of Young diagrams $\mu^{(1)}\supset\ldots\supset\mu^{(m)}$ and
a Young diagram $\nu$ with $\nu\supset\lambda^{(n)}$ and $\nu\supset\mu^{(1)}$.
To simplify the presentation we will use the convention $\lambda^{(0)}=\emptyset,\lambda^{(n+1)}=\nu,\mu^{(0)}=\nu,\mu^{(m+1)}=\emptyset$
and $t_{0}=-\theta,t_{n+1}=0,s_{0}=0,s_{m+1}=\theta$. The geometric weight RSK process $\lambda^{\theta,k}_{L,R}$ has the following finite dimensional distributions:
\begin{eqnarray*}
 & & \mathbf{P}\left( \bigcap_{i=1}^{n}\left\{ \lambda^{\theta,k}_{L,R}(t_{i})=\lambda^{(i)}\right\} \bigcap\left\{ \lambda^{\theta,k}_{L,R}(0)=\nu\right\} \bigcap\bigcap_{j=1}^{m}\left\{ \lambda^{\theta,k}_{L,R}(s_{j})=\mu^{(j)}\right\} \right)\\
 & = & \left(1 - \frac{\theta^2}{k^2}\right)^{k^2} \left(\frac{\theta^2}{k^2}\right)^{\abs{\nu}} \left(\prod_{i=0}^{n} \textnormal{Dim} _{\mathcal{L}^{\theta,k}(t_{i+1},t_{i})} \left(\lambda^{(i+1)}/\lambda^{(i)}\right)\right) \cdot\left(\prod_{i=0}^{m} \textnormal{Dim} _{\mathcal{L}^{\theta,k}(s_{i+1},s_{i})} \left(\mu^{(i)}/\mu^{(i+1)}\right)\right),
\end{eqnarray*}
where $\textnormal{Dim}_{j}(\lambda/\mu)$ is the number of SSYT of skew shape $\lambda/\mu$ whose entries do not exceed $j$, and $\mathcal{L}^{\theta,k}(x,y) = \abs{\mathcal{L}^{\theta,k}\cap(x,y]}$ is the number of discretization points from $\mathcal{L}^{\theta,k}$ in the interval $(x,y]$.
\end{thm}
\begin{rem}
\label{DicreteSchurProcessRem}
The above theorem is purely combinatorial in terms
of $\textnormal{Dim}_j\left(\lambda/\mu\right)$, which enumerates semistandard Young
tableaux. As was the case for Theorem \ref{SchurProcessThm}, this can be written in a very nice way using Schur functions and specializations. Let $\sigma^{\theta,k}_{x,y}$ be the specialization that specializes the first $\mathcal{L}^{\theta,k}_{(x,y)}$ variables to $\theta/k$ and the rest to zero. Namely:
\[
f(\sigma^{\theta,k}_{x,y}) = f\left(\underbrace{\frac{\theta}{k},\frac{\theta}{k},\ldots,\frac{\theta}{k}}_{ \mathcal{L}^{\theta,k}_{(x,y)}}, 0, 0, \ldots \right).
\]
This specialization differs by a constant factor from the so-called ``principal specialization''; see Section 7.8 of \cite{StanleyVol2}. It is an example of a ``finite length'' specialization as defined in Section 2.2.1 of \cite{MacProc}. One has the identity:
\[
s_{\lambda/\mu}\left(\sigma^{\theta,k}_{x,y}\right)= \left(\frac{\theta}{k}\right)^{\abs{\lambda/\mu}} \textnormal{Dim}_{\mathcal{L}^{\theta,k}_{(x,y)}} (\lambda/\mu).
\]
Plugging this into the above theorem, after some very nice telescoping cancellations, we can rewrite the probability as a chain of Schur functions:
\begin{eqnarray*}
 & & \mathbf{P}\left(\bigcap_{i=1}^{n}\left\{ \lambda^{\theta,k}_{L,R}(t_{i})=\lambda^{(i)}\right\} \bigcap\left\{ \lambda^{\theta,k}_{L,R}(0)=\nu\right\} \bigcap\bigcap_{j=1}^{m}\left\{ \lambda^{\theta,k}_{L,R}(s_{j})=\mu^{(j)}\right\} \right)\\
 & = & \left(1 - \frac{\theta^2}{k^2}\right)^{k^2} \left(\prod_{i=0}^{n}s_{\lambda^{(i+1)}/\lambda^{(i)}}\left(\sigma^{\theta,k}_{t_{i+1},t_{i}}\right)\right)\cdot\left(\prod_{i=0}^{m}s_{\mu^{(i)}/\mu^{(i+1)}}\left(\sigma^{\theta,k}_{s_{i+1},s_{i}}\right)\right).
\end{eqnarray*}
\end{rem}
\begin{rem}
For fixed $\theta$, one might notice that the normalizing prefactor $(1-\theta^2/k^2)^{k^2}$ of the geometric weight RSK process converges as $k\to \infty$ to $e^{-\theta^2}$, the normalizing prefactor of the Poissonized RS process. Even more remarkably, the specializations $\sigma^{\theta,k}_{x,y}$ that appear in Remark \ref{DicreteSchurProcessRem} converge to the specialization $\rho_{y - x}$ that appears in Remark \ref{ShurProcRem}, in the sense that for any symmetric function $f$, one has
\[
\lim_{k \to \infty} f(\sigma^{\theta,k}_{x,y}) = f(\rho_{y-x}).
\]
One can verify this convergence by checking the effect of the specialization on the basis $p_\lambda$ of power sum symmetric functions. These have
\[
p_\lambda(\rho_{y-x}) =\begin{cases}
(y-x)^n & \ \lambda = (\underbrace{1,1,\ldots,1}_n) \\
0 & \ \text{otherwise}
\end{cases},
\]
and for $\sigma^{\theta,k}_{x,y}$, we have
\[
p_{\lambda}(\sigma^{\theta,k}_{x,y}) = \left(\mathcal{L}^{\theta,k}_{(x,y)}\right)^{\ell(\lambda)}\cdot \left( \frac{\theta}{k} \right)^{\abs{\lambda}}.
\]
(Here $\ell(\lambda)$ is the number of rows of $\lambda$.) Using the bound $\floor{(y-x)k/\theta} \leq \mathcal{L}^{\theta,k}_{(x,y)} \leq \ceil{(y-x)k/\theta}$, we know that $\mathcal{L}^{\theta,k}_{(x,y)}$ differs from $(y-x)k/\theta$ by no more than one. Since $\ell(\lambda) \leq \abs{\lambda}$ holds for any Young diagram, the above converges to $0$ as $k \to \infty$ unless $\ell(\lambda) = \abs{\lambda}$. This happens only if $\lambda = (\underbrace{1,1,\ldots,1}_n)$ is a single vertical column, and in this case we get exactly the limit $(y-x)^n$ as $k \to \infty$, which agrees with $p_\lambda(\rho_{y-x})$. See Section 7.8 of \cite{StanleyVol2} for these formulas. This convergence of finite length specializations to Plancherel specializations is also mentioned in Section 2.2.1 of \cite{MacProc}.
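The power-sum computation above is easy to replicate numerically. The sketch below (function names ours; the particular $\theta$, $x$, $y$, $k$ are arbitrary) evaluates $p_{\lambda}$ under the finite length specialization and checks the claimed limits for a single column shape and for one other shape.

```python
import math

def L_count(theta, k, x, y):
    """Number of lattice points j*theta/k, 1 <= j <= k, lying in (x, y]."""
    return sum(1 for j in range(1, k + 1) if x < j * theta / k <= y)

def p_lam_sigma(lam, theta, k, x, y):
    """p_lambda at sigma^{theta,k}_{x,y}: each p_n specializes to L * (theta/k)^n,
    so p_lambda = L^{ell(lambda)} * (theta/k)^{|lambda|}."""
    L = L_count(theta, k, x, y)
    return math.prod(L * (theta / k) ** part for part in lam)

theta, x, y = 1.0, 0.2, 0.7
k = 10 ** 5
# lambda a single column (1,1,1): the limit should be (y - x)^3
assert abs(p_lam_sigma([1, 1, 1], theta, k, x, y) - (y - x) ** 3) < 1e-3
# any other shape, e.g. lambda = (2,1): the limit should be 0
assert abs(p_lam_sigma([2, 1], theta, k, x, y)) < 1e-3
```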
This observation shows that the finite dimensional distributions of the geometric weight RSK process converge to the finite dimensional distributions of the Poissonized RS process in the limit $k \to \infty$. One might have expected this convergence, since the point process of geometric points from which the geometric weight RSK process is built converges in the limit $k \to \infty$ to a Poisson point process of rate 1 in the square $[0,\theta]\times[0,\theta]$, from which the Poissonized RS process is built. However, since the decorated RS correspondence can be very sensitive to moving points even very slightly, it is not a priori clear that convergence of point processes always leads to convergence at the level of Young diagram processes.
\end{rem}
\subsection{Proof of Theorem \ref{DiscreteeSchurProcessThm}}
The proof follows by similar methods to the proof of Theorem \ref{SchurProcessThm}. We prove some intermediate results which are the analogues of Lemma \ref{conditioninglemma}, Corollary \ref{YTcor} and Lemma \ref{sizelemma}.
\begin{lem}
Let $N=\abs{\lambda^{\theta,k}_{L,R}(0)}$.
Then $N$ has the distribution of the sum of $k^2$ i.i.d. geometric random variables of parameter $\theta^2 / k^2$. Moreover, conditioned on the event $\left\{ N=n\right\} $, the pair $(L,R)$ is uniformly distributed in the set $\mathcal{T}^{\theta,k}_n$.
\end{lem}
\begin{proof}
This is analogous to Lemma \ref{sizelemma}. In this case, $N = \sum_{i,j=1}^{k} \xi_{i,j}$ is a sum of geometric random variables.
The fact that all elements of $\mathcal{T}_{n}^{\theta,k}$ are equally likely conditioned on $\{ N = n \}$ is because of the following remarkable fact about geometric distributions. For a collection of iid geometric random variables, the probability of any configuration $\bigcap_{i,j = 1}^{k}\left\{ \xi_{i,j} = x_{i,j} \right\}$ depends only on the \emph{sum} $\sum_{i,j = 1}^k x_{i,j}$. Indeed, when $p$ is the parameter for the geometric random variables, the probability is:
\[
\mathbf{P}\left(\bigcap_{i,j = 1}^{k}\left\{ \xi_{i,j} = x_{i,j} \right\}\right) = p^{\sum_{i,j = 1}^k x_{i,j}}(1-p)^{k^2}.
\]
Since this depends only on the sum, and not on any other detail of the $x_{i,j}$, when one conditions on the sum, all the configurations are equally likely. Since RSK is a bijection, it pushes forward the uniform distribution on $\mathcal{C}^{\theta,k}_n$ to the uniform distribution on $\mathcal{T}^{\theta,k}_n$, as desired.
\end{proof}
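This ``remarkable fact'' can be checked exhaustively on a tiny grid, using exact rational arithmetic so that the equality of probabilities is tested exactly; the parameter values below are arbitrary, and the helper name is ours.

```python
from fractions import Fraction
from itertools import product

p = Fraction(2, 5)      # geometric parameter (standing in for theta^2 / k^2)
k = 2                   # a 2x2 grid, i.e. k^2 = 4 iid geometric variables
n = 3                   # condition on the total number of points

def prob(config):
    """P(xi_{i,j} = x_{i,j} for all sites), for iid geometric(p) variables."""
    out = Fraction(1)
    for x in config:
        out *= (1 - p) * p ** x
    return out

configs = [c for c in product(range(n + 1), repeat=k * k) if sum(c) == n]
# every configuration with the same total has the same probability ...
assert len({prob(c) for c in configs}) == 1
# ... and it equals p^n (1-p)^{k^2}, depending on the x_{i,j} only through the sum
assert prob(configs[0]) == p ** n * (1 - p) ** (k * k)
```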
\begin{rem}
This remarkable fact about geometric random variables is the analogue of the fact that the points of a Poisson point process are uniformly distributed when one conditions on the total number of points, which was a cornerstone of Lemma \ref{conditioninglemma}. This special property of geometric random variables is what makes this distribution so amenable to analysis: see Lemma 2.2 in the seminal paper by Johansson \cite{Johansson338249}, where this exact property is used.
\end{rem}
\begin{cor}
For any Young diagram $\nu$, we have:
\begin{eqnarray*}
\mathbf{P}\left(\lambda^{\theta,k}_{L,R}(0)=\nu\right)& = &\left(\frac{\textnormal{Dim}_{k}(\nu)^{2}}{\binom{k^2 + \abs{\nu} - 1}{\abs{\nu}} }\right)\mathbf{P}\left(N=\abs{\nu}\right) \\
& =& \left(1 - \frac{\theta^2}{k^2}\right)^{k^2} \left(\frac{\theta^2}{k^2}\right)^{\abs{\nu}} \textnormal{Dim}_{k}(\nu)^{2}.
\end{eqnarray*}
\end{cor}
\begin{proof}
This is analogous to the proof of Corollary \ref{YTcor}. The only difference is that $\mathcal{T}^{\theta,k}_n$ contains pairs of semi-standard Young tableaux with entries no larger than $k$, of which we are interested in the number of pairs of shape $\nu$. This is exactly what $\textnormal{Dim}_{k}(\nu)$ enumerates. The quantity $\abs{ \mathcal{C}^{\theta,k}_n} = \abs{\mathcal{T}^{\theta,k}_n} = \binom{k^2 + n - 1}{n}$ is the number of elements in $\mathcal{T}^{\theta,k}_n$, so it appears as a normalizing factor. In this case, since $N$ has the distribution of the sum of $k^2$ geometric random variables, we can simplify using the probability mass function:
\[
\mathbf{P}\left( N = x \right) = \binom{k^2 + x- 1}{x}\left(\frac{\theta^2}{k^2}\right)^x\left(1-\frac{\theta^2}{k^2}\right)^{k^2}.
\]
\end{proof}
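Equivalently, $N$ is a negative binomial random variable, and the standard pmf $\binom{k^2+x-1}{x} q^x (1-q)^{k^2}$ for the sum of $k^2$ i.i.d. geometric variables supported on $\{0,1,2,\ldots\}$ can be checked by brute-force convolution. A sketch (the values $k=2$ and $q = \theta^2/k^2 = 0.2$ are arbitrary illustrative choices):

```python
from math import comb, isclose

def negbin_pmf(x, r, q):
    # pmf of the sum of r i.i.d. geometric({0,1,...}, parameter q) variables
    return comb(r + x - 1, x) * q ** x * (1 - q) ** r

def convolve(f, g, n_max):
    # pmf of the sum of two independent variables with pmfs f, g (up to n_max)
    return [sum(f[i] * g[x - i] for i in range(x + 1)) for x in range(n_max + 1)]

q, r, n_max = 0.2, 4, 12            # r = k^2 with k = 2
geo = [q ** x * (1 - q) for x in range(n_max + 1)]
dist = geo
for _ in range(r - 1):               # convolve r geometric pmfs together
    dist = convolve(dist, geo, n_max)

for x in range(n_max + 1):
    assert isclose(dist[x], negbin_pmf(x, r, q))
```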
\begin{lem}
We have
\[
\mathbf{P}\left( \bigcap_{i=0}^{n+1} \left\{ \lambda^{\theta,k}_{L,R}(t_{i})=\lambda^{(i)} \right\} \given{ \lambda^{\theta,k}_{L,R}(0)=\nu } \right) = \frac{\prod_{i=0}^{n}\textnormal{Dim}_{\mathcal{L}^{\theta,k}(t_{i+1},t_{i})} \left(\lambda^{(i+1)}/\lambda^{(i)}\right)}{\textnormal{Dim}_{k}(\nu)}.
\]
An analogous formula holds for $\mathbf{P}\left(\bigcap_{i=0}^{m+1}\left\{ \lambda^{\theta,k}_{L,R}(s_{i})=\mu^{(i)}\right\} \given{ \lambda^{\theta,k}_{L,R}(0)=\nu }\right)$. Moreover, we have the same type of conditional independence as from Lemma \ref{shapelem} between times $t > 0$ and times $t < 0$ when we condition on the event $\left\{ \lambda^{\theta,k}_{L,R}(0)=\nu \right\}$.
\end{lem}
\begin{proof}
This is the analogue of Lemma \ref{shapelem}. The proof proceeds in the same way with the important observation that, when conditioned on $\left\{ \lambda^{\theta,k}_{L,R}(0)=\nu \right\}$, the pair of SSYT $(L,R)$ is uniformly chosen from the set of pairs of shape $\nu$ from $\mathcal{T}^{\theta,k}_n$. This set is the Cartesian product of the set of all such SSYT of shape $\nu$ with itself. Hence, the two SSYT are independent and are both uniformly distributed among the set of all SSYT of shape $\nu$ in this conditioning. For this reason, it suffices to count the number of SSYT of shape $\nu$ with the correct intermediate shapes at times $t_1,\ldots,t_n$. The counting of these SSYT then follows by the same type of argument as Lemma \ref{shapelem}. Each intermediate skew SSYT of shape $\lambda^{(i+1)}/\lambda^{(i)}$ must be filled with entries from the interval $\mathcal{L}^{\theta,k}\cap (t_i,t_{i+1}]$ in order for the resulting SSYT to have the correct subshapes. Since we are only interested in the number of such SSYT, counting those with entries between 1 and $\mathcal{L}^{\theta,k}_{t_{i+1},t_{i}}$ will do. This is precisely what $\textnormal{Dim}_{\mathcal{L}^{\theta,k}(t_{i+1},t_{i})} \left(\lambda^{(i+1)}/\lambda^{(i)}\right)$ enumerates.
\end{proof}
\begin{rem}
In the proof of Theorem \ref{SchurProcessThm}, there were additional lemmas needed to separate the dependence of the decorations and the entries appearing in the Young diagrams. As explained in Remark \ref{discrem}, the discrete geometric weight RSK tableaux case is simpler in this respect because the decorations are proportional to the entries in the tableaux by a factor of $\theta/k$.
\end{rem}
\begin{proof} (Of Theorem \ref{DiscreteeSchurProcessThm})
Exactly as in the proof of Theorem \ref{SchurProcessThm}, the proof follows by combining the lemmas.
\end{proof}
\textbf{Acknowledgments}
The author extends many thanks to G{\' e}rard Ben Arous for early encouragement on this subject and to Ivan Corwin for his friendly support and helpful discussions, in particular pointing out the connections that led to the development of Section 5. The author was partially supported by NSF grant DMS-1209165.
\end{document}
\begin{document}
\title{\TheTitle}
\begin{abstract}
In this work, we calculate the convergence rate of the finite difference approximation for a class of nonlocal fracture models. We consider two point force interactions characterized by a double well potential. We show the existence of an evolving displacement field in H\"{o}lder space with H\"{o}lder exponent $\gamma \in (0,1]$.
The rate of convergence of the finite difference approximation depends on the factor $C_s h^\gamma/\epsilon^2$ where $\epsilon$ gives the length scale of nonlocal interaction, $h$ is the discretization length and $C_s$ is the maximum of H\"older norm of the solution and its second derivatives during the evolution. It is shown that the rate of convergence holds for both the forward Euler scheme as well as general single step implicit schemes. A stability result is established for the semi-discrete approximation. The H\"older continuous evolutions are seen to converge to a brittle fracture evolution in the limit of vanishing nonlocality.
\end{abstract}
\begin{keywords}
Nonlocal fracture models, peridynamics, cohesive dynamics, numerical analysis, finite difference approximation
\end{keywords}
\begin{AMS}
34A34, 34B10, 74H55, 74S20
\end{AMS}
\section{Introduction}
Nonlocal formulations have been proposed to describe the evolution of deformations which exhibit loss of differentiability and continuity, see \cite{CMPer-Silling} and \cite{States}. These models are commonly referred to as peridynamic models. The main idea is to define the strain in terms of displacement differences and allow nonlocal interactions between material points. This generalization of strain allows for the participation of a larger class of deformations in the dynamics. Numerical simulations based on peridynamic modeling exhibit formation and evolution of sharp interfaces associated with phase transformation and fracture \cite{CMPer-Dayal}, \cite{CMPer-Silling4}, \cite{CMPer-Silling5}, \cite{CMPer-Silling7}, \cite{CMPer-Agwai}, \cite{CMPer-Du}, \cite{CMPer-Lipton2}, \cite{BobaruHu}, \cite{HaBobaru}, \cite{SillBob}, \cite{WeckAbe}, \cite{GerstleSauSilling}. A recent summary of the state of the art can be found in \cite{Handbook}.
In this work, we provide a numerical analysis for the class of nonlocal models introduced in \cite{CMPer-Lipton3} and \cite{CMPer-Lipton}. These models are defined by a double well two point potential. Here one potential well is centered at zero and associated with elastic response while the other well is at infinity and associated with surface energy. The rationale for studying these models is that they are shown to be well posed over the class of square integrable non-smooth displacements and, in the limit of vanishing non-locality, their dynamics recover features associated with sharp fracture propagation; see \cite{CMPer-Lipton3} and \cite{CMPer-Lipton}. The numerical simulation of prototypical fracture problems using this model is carried out in \cite{CMPer-Lipton2}. In order to develop an $L^2$ approximation theory, we show the nonlocal evolution is well posed over a more regular space of functions. To include displacement fields which have no well-defined derivatives, we consider displacement fields in the H\"{o}lder space $\Cholder{}$ with H\"{o}lder exponent $\gamma$ taking any value in $(0,1]$. We show that a unique evolution exists in $\Cholder{}$ for $\Cholder{}$ initial data and body force.
The semi-discrete approximation to the H\"{o}lder continuous evolution is considered and it is shown that at any time its energy is bounded by the initial energy and the work done by the body force.
We develop an approximation theory for the forward Euler scheme and show that these ideas can be easily extended to the backward Euler scheme as well as other implicit one step time discretization schemes.
It is found that the discrete approximation converges to the exact solution in the $L^2$ norm uniformly over finite time intervals with the rate of convergence proportional to $(C_t\Delta t + C_s h^\gamma/\epsilon^2)$, where $\Delta t$ is the size of time step, $h$ is the size of spatial mesh discretization, and $\epsilon$ is the length scale of nonlocal interaction relative to the size of the domain. The constant $C_t$ depends on the $L^2$ norm of the time derivatives of the solution, $C_s$ depends on the H\"older norm of the solution and the Lipschitz constant of peridynamic force.
We point out that the constants appearing in the convergence estimates with respect to $h$ can be dependent on the horizon and be large when $\epsilon$ is small. This is discussed in \autoref{s:finite difference} and an example is provided in \autoref{s:conclusions}. These results show that while errors can grow with each time step they can be controlled over finite times $t$ by suitable spatial temporal mesh refinement. We then apply the methods developed in \cite{CMPer-Lipton3} and \cite{CMPer-Lipton}, to show that in the limit $\epsilon\rightarrow 0$, the H\"older continuous evolutions converge to a limiting sharp fracture evolution with bounded Griffiths fracture energy. Here the limit evolution is differentiable off the crack set and satisfies the linear elastic wave equation.
In the language of nonlocal operators, the integral kernel associated with the nonlocal model studied here is Lipschitz continuous guaranteeing global stability of the finite difference approximation. This is in contrast to PDE based evolutions where stability can be conditional.
In addition we examine local stability. Unfortunately the problem is nonlinear so we do not establish CFL conditions but instead identify a mode of dynamic instability that can arise during the evolution. This type of instability is due to a radial perturbation of the solution and causes error to grow with each time step for the Euler scheme. For implicit schemes this perturbation can become unstable in parts of the computational domain where there is material softening, see \autoref{ss:proof localstab}. Of course stability conditions like the CFL conditions for linear nonlocal equations are of importance for guidance in implementations. In the case of $d=1$, a CFL type condition is obtained for the finite difference and finite element approximation of the linear peridynamic equation, see \cite{CMPer-Guan}. Recent work develops a new simple CFL condition for one dimensional linearized peridynamics in the absence of body forces \cite{CMPer-JhaLipton}. Related analysis for the linear peridynamic equation in one dimension is taken up in \cite{CMPer-Weckner} and \cite{CMPer-Bobaru}. The recent and related work \cite{CMPer-Du1} and \cite{CMPer-Guan2} addresses numerical approximation for problems of nonlocal diffusion.
There is now a large body of contemporary work addressing the numerical approximation of singular kernels with application to nonlocal diffusion, advection, and mechanics. Numerical formulations and convergence theory for nonlocal $p$-Laplacian formulations are developed in \cite{DeEllaGunzberger}, \cite{Nochetto1}. Numerical analysis of nonlocal steady state diffusion is presented in \cite{CMPer-Du2} and \cite{CMPer-Du3}, and \cite{CMPer-Chen}. The use of fractional Sobolev spaces for nonlocal problems is investigated and developed in \cite{CMPer-Du1}. Quadrature approximations and stability conditions for linear peridynamics are analyzed in \cite{CMPer-Weckner} and \cite{CMPer-Silling8}. The interplay between nonlocal interaction length and grid refinement for linear peridynamic models is presented in \cite{CMPer-Bobaru}. Analysis of adaptive refinement and domain decomposition for linearized peridynamics are provided in \cite{AksoyluParks}, \cite{LindParks}, and \cite{AksMen}. This list is by no means complete and the literature on numerical methods and analysis continues to grow.
The paper is organized as follows. In \autoref{s:peridynamic model}, we describe the nonlocal model. In \autoref{ss:existence holder}, we state theorems which show Lipschitz continuity of the nonlocal force (\autoref{prop:lipschitz}) and the existence and uniqueness of an evolution over any finite time interval (\autoref{thm:existence over finite time domain}). In \autoref{s:finite difference}, we compute the convergence rate of the forward Euler scheme as well as implicit one step methods. We identify stability of the semi-discrete approximation with respect to the energy in \autoref{semidiscrete}. In \autoref{ss:proof localstab}, we identify local instabilities in the fully discrete evolution caused by suitable radial perturbations of the solution. In \autoref{s:proof existence}, we give the proof of \autoref{prop:lipschitz}, \autoref{thm:local existence}, and \autoref{thm:existence over finite time domain}. The convergence of H\"older continuous evolutions to sharp fracture evolutions as $\epsilon\rightarrow 0$ is shown in \autoref{s:discussion}. In \autoref{s:conclusions} we present an example showing the effect of the constants $C_t$ and $C_s$ on the convergence rate and summarize our results.
\section{Double well potential and existence of a solution}
\label{s:peridynamic model}
In this section, we present the nonlinear nonlocal model. Let $D\subset \mathbb{R}^d$, $d=2,3$ be the material domain with characteristic length-scale of unity. Let $\epsilon\in (0,1]$ be the size of horizon across which nonlocal interaction between points takes place. The material point $\bolds{x}\in D$ interacts nonlocally with all material points inside a horizon of length $\epsilon$. Let $H_{\epsilon}(\bolds{x})$ be the ball of radius $\epsilon$ centered at $\bolds{x}$ containing all points $\bolds{y}$ that interact with $\bolds{x}$. After deformation the material point $\bolds{x}$ assumes position $\bolds{z} = \bolds{x} + \bolds{u}(\bolds{x})$. In this treatment we assume infinitesimal displacements and the strain is written in terms of the displacement $\bolds{u}$ as
\begin{align*}
S=S(\bolds{y},\bolds{x};\bolds{u}) &:= \dfrac{\bolds{u}(\bolds{y}) - \bolds{u}(\bolds{x})}{\abs{\bolds{y} - \bolds{x}}} \cdot \dfrac{\bolds{y} - \bolds{x}}{\abs{\bolds{y} - \bolds{x}}}.
\end{align*}
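This strain is straightforward to evaluate in coordinates; a small sketch (the displacement fields used below are illustrative choices, not fields from the paper):

```python
import numpy as np

def strain(x, y, u):
    """Nonlocal strain S(y, x; u) = ((u(y) - u(x)) / |y - x|) . ((y - x) / |y - x|)."""
    d = y - x
    r = np.linalg.norm(d)
    return np.dot(u(y) - u(x), d) / r ** 2

# uniform dilation u(x) = x gives S = 1 for every pair of points
x = np.array([0.0, 0.0, 0.0])
y = np.array([0.3, -0.1, 0.2])
assert abs(strain(x, y, lambda z: z) - 1.0) < 1e-12
```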
Let $W^{\epsilon}(S,\bolds{y} - \bolds{x})$ be the nonlocal potential density per unit length between material point $\bolds{y}$ and $\bolds{x}$. The energy density at $\bolds{x}$ is given by
\begin{align*}
\bolds{W}^{\epsilon}(S,\bolds{x}) = \dfrac{1}{\epsilon^d \omega_d} \int_{H_{\epsilon}(\bolds{x})} |\bolds{y}-\bolds{x}|W^{\epsilon}(S, \bolds{y} - \bolds{x}) d\bolds{y},
\end{align*}
where $\omega_d$ is the volume of a unit ball in $d$-dimension and $\epsilon^d \omega_d$ is the volume of the ball of radius $\epsilon$. The potential energy is written as
\begin{align*}
PD^{\epsilon}(\bolds{u}) &= \int_D \bolds{W}^{\epsilon}(S(\bolds{u}), \bolds{x}) d\bolds{x},
\end{align*}
and the displacement field satisfies following equation of motion
\begin{align}\label{eq:per equation}
\rho \partial^2_{tt} \bolds{u}(t,\bolds{x}) &= -\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u}) + \bolds{b}(t,\bolds{x})
\end{align}
for all $\bolds{x} \in D$. Here we have
\begin{align*}
-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u})(\bolds{x}) = \dfrac{2}{\epsilon^d \omega_d} \int_{H_{\epsilon}(\bolds{x})} \partial_S W^{\epsilon}(S,\bolds{y} - \bolds{x}) \dfrac{\bolds{y} - \bolds{x}}{\abs{\bolds{y} - \bolds{x}}} d\bolds{y}
\end{align*}
where $\bolds{b}(t,\bolds{x})$ is the body force, $\rho$ is the density, and $\partial_S W^{\epsilon}$ is the derivative of the potential with respect to the strain.
We prescribe the zero Dirichlet condition on the boundary of $D$
\begin{align}\label{eq:per bc}
\bolds{u}(\bolds{x}) = \mathbf{0} \qquad \forall \bolds{x} \in \partial D,
\end{align}
where we have denoted the boundary by $\partial D$. We extend the zero boundary condition outside $D$ to $\mathbb{R}^3$.
The peridynamic equation, boundary conditions, and initial conditions
\begin{align}\label{eq:per initialvalues}
\bolds{u}(0,\bolds{x}) = \bolds{u}_0(\bolds{x}) \qquad \partial_t\bolds{u}(0,\bolds{x})=\bolds{v}_0(\bolds{x})
\end{align}
determine the peridynamic evolution $\bolds{u}(t,\bolds{x})$.
\paragraph{Peridynamics energy}
The total energy $\mathcal{E}^\epsilon(\bolds{u})(t)$ is given by the sum of kinetic and potential energy given by
\begin{align}\label{eq:def energy}
\mathcal{E}^\epsilon(\bolds{u})(t) &= \frac{1}{2} ||\dot{\bolds{u}}(t)||^2_{L^2(D;\mathbb{R}^d)} + PD^\epsilon(\bolds{u}(t)),
\end{align}
where potential energy $PD^\epsilon$ is given by
\begin{align*}
PD^{\epsilon}(\bolds{u}) &= \int_D \left[ \dfrac{1}{\epsilon^d \omega_d} \int_{H_\epsilon(\bolds{x})} W^{\epsilon}(S(\bolds{u}), \bolds{y}-\bolds{x}) d\bolds{y} \right] d\bolds{x}.
\end{align*}
Differentiation of \autoref{eq:def energy} gives the identity
\begin{align}
\dfrac{d}{dt} \mathcal{E}^\epsilon(\bolds{u})(t) = (\ddot{\bolds{u}}(t), \dot{\bolds{u}}(t)) - (-\boldsymbol{\nabla} PD^\epsilon(\bolds{u}(t)), \dot{\bolds{u}}(t)) \label{eq:energy relat},
\end{align}
where $(\cdot,\cdot)$ is the inner product on $L^2(D;\mathbb{R}^d)$ and $\Vert\cdot\Vert_{L^2(D;\mathbb{R}^d)}$ is the associated norm.
\subsection{Nonlocal potential}
We consider the nonlocal two point interaction potential density $W^{\epsilon}$ of the form
\begin{align}\label{eq:per pot}
W^{\epsilon}(S, \bolds{y} - \bolds{x}) &= \omega(\bolds{x})\omega(\bolds{y})\dfrac{J^{\epsilon}(\abs{\bolds{y} - \bolds{x}})}{\epsilon\abs{\bolds{y}-\bolds{x}}} f(\abs{\bolds{y} - \bolds{x}} S^2)
\end{align}
where $f: \mathbb{R}^{+} \to \mathbb{R}$ is assumed to be positive, smooth and concave with the following properties:
\begin{align}\label{eq:per asymptote}
\lim_{r\to 0^+} \dfrac{f(r)}{r} = f'(0), \qquad \lim_{r\to \infty} f(r) = f_{\infty} < \infty.
\end{align}
The potential $W^{\epsilon}(S, \bolds{y} - \bolds{x})$ is of double well type: it is convex near the origin, where it has one well, and concave and bounded at infinity, where it has the second well. $J^{\epsilon}(\abs{\bolds{y} - \bolds{x}})$ models the influence of separation between points $\bolds{y}$ and $\bolds{x}$. We define $J^{\epsilon}$ by rescaling $J(\abs{\boldsymbol{\xi}})$, i.e. $J^{\epsilon}(\abs{\boldsymbol{\xi}}) = J(\abs{\boldsymbol{\xi}}/\epsilon)$. Here $J$ is zero outside the ball $H_1(\mathbf{0})$ and satisfies $0\leq J(\abs{\boldsymbol{\xi}}) \leq M$ for all $\boldsymbol{\xi} \in H_1(\mathbf{0})$. The domain function $\omega$ enforces boundary conditions on $\partial_SW^\epsilon$ at the boundary of the body $D$, denoted by $\partial D$; here $\omega$ is a nonnegative differentiable function with $0\leq \omega\leq 1$. We have $\omega=0$ on the boundary and $\omega=1$ at points $\bolds{x}$ inside $D$ whose distance from the boundary is greater than $\epsilon$. We continue $\omega$ by zero for all points outside $D$.
The potential described in \autoref{eq:per pot} gives the convex-concave dependence of $W^\epsilon(S,\bolds{y} - \bolds{x})$ on the strain $S$ for fixed $\bolds{y} - \bolds{x}$, see \autoref{fig:per pot}. Initially the force is elastic for small strains and then softens as the strain becomes larger. The critical strain where the force between $\bolds{x}$ and $\bolds{y}$ begins to soften is given by $S_c(\bolds{y}, \bolds{x}) := \bar{r}/\sqrt{\abs{\bolds{y} - \bolds{x}}}$ and the force decreases monotonically for
\begin{align*}
\abs{S(\bolds{y}, \bolds{x};\bolds{u})} > S_c.
\end{align*}
Here $\bar{r}$ is the inflection point of $r \mapsto f(r^2)$ and is the root of the following equation:
\begin{align*}
f'({r}^2) + 2{r}^2 f''({r}^2) = 0.
\end{align*}
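For a concrete profile satisfying \autoref{eq:per asymptote}, say $f(r) = 1 - e^{-r}$ (an illustrative choice, not the profile used in this paper), the inflection point can be found by bisection; for this choice $f'(r^2) + 2r^2 f''(r^2) = (1-2r^2)e^{-r^2}$, so the root is exactly $\bar{r} = 1/\sqrt{2}$:

```python
from math import exp, sqrt

def g(r):
    # f(r) = 1 - exp(-r):  f'(s) = exp(-s), f''(s) = -exp(-s), so
    # g(r) = f'(r^2) + 2 r^2 f''(r^2) = (1 - 2 r^2) * exp(-r^2)
    return (1.0 - 2.0 * r ** 2) * exp(-r ** 2)

# bisection on [0, 2]: g(0) = 1 > 0 and g(2) < 0
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
rbar = 0.5 * (lo + hi)
assert abs(rbar - 1.0 / sqrt(2.0)) < 1e-12
```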
\begin{figure}
\centering
\includegraphics[scale=0.25]{peridynamic_potential.png}
\caption{Two point potential $W^\epsilon(S,\bolds{y} - \bolds{x})$ as a function of strain $S$ for fixed $\bolds{y} - \bolds{x}$.}
\label{fig:per pot}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.25]{derivative_peridynamic_potential.png}
\caption{Nonlocal force $\partial_S W^\epsilon(S,\bolds{y} - \bolds{x})$ as a function of strain $S$ for fixed $\bolds{y} - \bolds{x}$. The second derivative of $W^\epsilon(S,\bolds{y}-\bolds{x})$ is zero at $\pm \bar{r}/\sqrt{|\bolds{y} -\bolds{x}|}$.}
\label{fig:first der per pot}
\end{figure}
\subsection{Existence of solution}\label{ss:existence holder}
Let $\Cholder{D;\mathbb{R}^{d}}$ be the H\"{o}lder space with exponent $\gamma \in (0,1]$.
The closure of continuous functions with compact support on $D$ in the supremum norm is denoted by $C_0(D)$. We identify functions in $C_0(D)$ with their unique continuous extensions to $\overline{D}$. It is easily seen that functions belonging to this space take the value zero on the boundary of $D$, see e.g. \cite{MA-Driver}. We introduce $C_0^{0,\gamma}(D)=C^{0,\gamma}(D)\cap C_0(D)$. In this paper we extend all functions in $C_0^{0,\gamma}(D)$ by zero outside $D$.
The norm of $\bolds{u} \in \Cholderz{D;\mathbb{R}^{d}}$ is taken to be
\begin{align*}
\Choldernorm{\bolds{u}}{D;\mathbb{R}^{d}} &:= \sup_{\bolds{x} \in D} \abs{\bolds{u}(\bolds{x})} + \left[\bolds{u} \right]_{\Cholder{D;\mathbb{R}^{d}}},
\end{align*}
where $\left[\bolds{u} \right]_{\Cholder{D;\mathbb{R}^{d}}}$ is the H\"{o}lder semi-norm, given by
\begin{align*}
\left[\bolds{u} \right]_{\Cholder{D;\mathbb{R}^{d}}} &:= \sup_{\substack{\bolds{x}\neq \bolds{y},\\
\bolds{x},\bolds{y} \in D}} \dfrac{\abs{\bolds{u}(\bolds{x})-\bolds{u}(\bolds{y})}}{\abs{\bolds{x} - \bolds{y}}^\gamma},
\end{align*}
and $\Cholderz{D;\mathbb{R}^{d}}$ is a Banach space with this norm. Here we make the hypothesis that the domain function $\omega$ belongs to $\Cholderz{D;\mathbb{R}^{d}}$.
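The H\"older semi-norm can be estimated on a sample grid by taking the maximum of the difference quotients over grid pairs; a small sketch (illustrative only, since the true semi-norm is a supremum over the continuum):

```python
import numpy as np

def holder_seminorm(xs, us, gamma):
    """Discrete estimate of sup_{x != y} |u(x) - u(y)| / |x - y|^gamma over grid points."""
    best = 0.0
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            best = max(best, abs(us[i] - us[j]) / abs(xs[i] - xs[j]) ** gamma)
    return best

xs = np.linspace(0.0, 1.0, 101)
# u(x) = x is Lipschitz (gamma = 1) with semi-norm exactly 1
assert abs(holder_seminorm(xs, xs, 1.0) - 1.0) < 1e-12
# u(x) = sqrt(x) lies in C^{0,1/2} with semi-norm 1, attained at pairs (0, x)
assert abs(holder_seminorm(xs, np.sqrt(xs), 0.5) - 1.0) < 1e-6
```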
We write the evolution \autoref{eq:per equation} as an equivalent first order system with $y_1(t)=\bolds{u}(t)$ and $y_2(t)=\bolds{v}(t)$ with $\bolds{v}(t)=\partial_t\bolds{u}(t)$. Let $y = (y_1, y_2)^T$ where $y_1,y_2 \in \Cholderz{D;\mathbb{R}^{d}}$ and let $F^{\epsilon}(y,t) = (F^{\epsilon}_1(y,t), F^{\epsilon}_2(y,t))^T$ such that
\begin{align}
F^\epsilon_1(y,t) &:= y_2 \label{eq:per first order eqn 1} \\
F^\epsilon_2(y, t) &:= -\boldsymbol{\nabla} PD^{\epsilon}(y_1) + \bolds{b}(t). \label{eq:per first order eqn 2}
\end{align}
The initial boundary value associated with the evolution \autoref{eq:per equation} is equivalent to the initial boundary value problem for the first order system given by
\begin{align}\label{eq:per first order}
\dfrac{d}{dt}y = F^{\epsilon}(y,t),
\end{align}
with initial condition given by $y(0) = (\bolds{u}_0, \bolds{v}_0)^T \in \Cholderz{D;\mathbb{R}^{d}}\times\Cholderz{D;\mathbb{R}^{d}}$.
The function $F^{\epsilon}(y,t)$ satisfies the Lipschitz continuity property given by the following proposition.
{\vskip 2mm}
\begin{proposition}\label{prop:lipschitz}
\textbf{Lipschitz continuity and bound}\\
Let $X = \Cholderz{D;\mathbb{R}^{d}} \times \Cholderz{D;\mathbb{R}^{d}}$. The function $F^\epsilon(y,t) = (F^\epsilon_1, F^\epsilon_2)^T$, as defined in \autoref{eq:per first order eqn 1} and \autoref{eq:per first order eqn 2}, is Lipschitz continuous in any bounded subset of $X$. We have, for any $y,z \in X$ and $t> 0$,
\begin{align}\label{eq:lipschitz property of F}
&\normX{F^{\epsilon}(y,t) - F^{\epsilon}(z,t)}{X} \notag \\
&\leq \dfrac{\left( L_1 + L_2 \left( \Vert \omega\Vert_{C^{0,\gamma}(D)}+\normX{y}{X} + \normX{z}{X} \right) \right)}{\epsilon^{2 + \alpha(\gamma)}} \normX{y-z}{X}
\end{align}
where $L_1, L_2$ are independent of $\bolds{u},\bolds{v}$ and depend on the peridynamic potential function $f$ and the influence function $J$, and the exponent $\alpha(\gamma)$ is given by
\begin{align*}
\alpha(\gamma) = \begin{cases}
0 &\qquad \text{if }\gamma \geq 1/2 \\
1/2 - \gamma &\qquad \text{if } \gamma < 1/2 .
\end{cases}
\end{align*}
Furthermore for any $y \in X$ and any $t\in [0,T]$, we have the bound
\begin{align}\label{eq:bound on F}
\normX{F^\epsilon(y,t)}{X} &\leq \dfrac{L_3}{\epsilon^{2+\alpha(\gamma)}} (1+\Vert \omega\Vert_{C^{0,\gamma}(D)} + \normX{y}{X}) + b
\end{align}
where $b = \sup_{t} \Choldernorm{\bolds{b}(t)}{D;\mathbb{R}^{d}}$ and $L_3$ is independent of $y$.
\end{proposition}
{\vskip 2mm}
\setcounter{theorem}{1}
Choosing $z=0$ in \autoref{eq:lipschitz property of F}, we easily see that $-\nabla PD^\epsilon(\bolds{u})(\bolds{x})$ is in $C^{0,\gamma}(D;\mathbb{R}^3)$ provided that $\bolds{u}$ belongs to $C^{0,\gamma}(D;\mathbb{R}^3)$. Since $-\nabla PD^\epsilon(\bolds{u})(\bolds{x})$ takes the value $0$ on $\partial D$, we conclude that $-\nabla PD^\epsilon(\bolds{u})(\bolds{x})$ belongs to $C^{0,\gamma}_0(D;\mathbb{R}^3)$.
In Theorem 6.1 of \cite{CMPer-Lipton}, the Lipschitz property of a peridynamic force is shown in $X = \Ltwo{D;\mathbb{R}^{d}} \times \Ltwo{D;\mathbb{R}^{d}}$. It is given by
\begin{align}\label{eq: lipshitz}
\normX{F^{\epsilon}(y,t) - F^{\epsilon}(z,t)}{X} &\leq \dfrac{L}{\epsilon^2}\normX{y - z}{X} \qquad \forall y,z \in X, \forall t\in [0,T]
\end{align}
for all $y,z \in \Ltwoz{D;\mathbb{R}^{d}}^2$. For this case $L$ does not depend on $\bolds{u}, \bolds{v}$. We now state the existence theorem.
The following theorem gives the existence and uniqueness of solution in any given time domain $I_0 = (-T, T)$.
{\vskip 2mm}
\begin{theorem}\label{thm:existence over finite time domain}
\textbf{Existence and uniqueness of H\"older solutions of cohesive dynamics over finite time intervals}\\
For any initial condition $x_0\in X = \Cholderz{D;\mathbb{R}^{d}} \times \Cholderz{D;\mathbb{R}^{d}}$, time interval $I_0=(-T,T)$, and right hand side $\bolds{b}(t)$ continuous in time for $t\in I_0$ such that $\bolds{b}(t)$ satisfies $\sup_{t\in I_0} {||\bolds{b}(t)||_{\Cholder{}}}<\infty$, there is a unique solution $y(t)\in C^1(I_0;X)$ of
\begin{equation*}
y(t)=x_0+\int_0^tF^\epsilon(y(\tau),\tau)\,d\tau,
\label{10}
\end{equation*}
or equivalently
\begin{equation*}
y'(t)=F^\epsilon(y(t),t),\hbox{with $y(0)=x_0$},
\label{11}
\end{equation*}
where $y(t)$ and $y'(t)$ are Lipschitz continuous in time for $t\in I_0$.
\end{theorem}
{\vskip 2mm}
The proof of this theorem is given in \autoref{s:proof existence}. We now describe the finite difference scheme and analyze its convergence to H\"older continuous solutions of cohesive dynamics.
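Existence results of this type are typically proved by Picard iteration on the integral equation above. The contraction mechanism can be illustrated on a scalar toy problem (a schematic only: the ODE $y'=y$ and all numerical parameters below are illustrative choices, not the peridynamic operator):

```python
import numpy as np

# Picard iteration y_{m+1}(t) = y0 + int_0^t F(y_m(s)) ds for y' = y, y(0) = 1,
# whose unique solution is y(t) = exp(t); the integral is a trapezoid rule.
ts = np.linspace(0.0, 1.0, 2001)
y = np.ones_like(ts)                      # initial guess y_0(t) = 1
for _ in range(25):
    integrand = y                         # F(y) = y
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts)))
    )
    y = 1.0 + integral                    # apply the fixed-point map
assert abs(y[-1] - np.e) < 1e-5           # iterates converge to exp(t)
```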
\section{Finite difference approximation}
\label{s:finite difference}
In this section, we present the finite difference scheme and compute the rate of convergence. We first consider the semi-discrete approximation and prove a bound on the energy of the semi-discrete evolution in terms of the initial energy and the work done by body forces.
Let $h$ be the size of a mesh and $\Delta t$ be the size of time step. We will keep $\epsilon$ fixed and assume that $h< \epsilon<1$. Let $D_h = D\cap(h \mathbb{Z})^d$ be the discretization of the material domain. Let $i\in \mathbb{Z}^d$ be the index such that $\bolds{x}_i = hi \in D$. Let $U_i$ be the unit cell of volume $h^d$ corresponding to the grid point $\bolds{x}_i$. The exact solution evaluated at grid points is denoted by $(\bolds{u}_i(t),\bolds{v}_i(t))$.
\subsection{Time discretization}
Let $[0,T] \cap (\Delta t \mathbb{Z})$ be the discretization of the time domain where $\Delta t$ is the size of time step. Denote the fully discrete solution at $(t^k = k\Delta t, \bolds{x}_i = ih)$ by $(\hat{\bolds{u}}^k_{i}, \hat{\bolds{v}}^k_i)$. Similarly, the exact solution evaluated at grid points is denoted by $(\bolds{u}^k_i,\bolds{v}^k_i)$. We enforce the boundary condition $\hat{\bolds{u}}^k_i = \mathbf{0}$ for all $\bolds{x}_i \notin D$ and for all $k$.
We begin with the forward Euler time discretization, with respect to velocity, and the finite difference scheme for $(\hat{\bolds{u}}^k_{i}, \hat{\bolds{v}}^k_i)$ is written
\begin{align}
\dfrac{\hat{\bolds{u}}^{k+1}_i - \hat{\bolds{u}}^k_i}{\Delta t} &= \hat{\bolds{v}}^{k+1}_i \label{eq:finite diff eqn u} \\
\dfrac{\hat{\bolds{v}}^{k+1}_i - \hat{\bolds{v}}^k_i}{\Delta t} &= -\boldsymbol{\nabla} PD^{\epsilon}(\hat{\bolds{u}}^k)(\bolds{x}_i) + \bolds{b}^k_i \label{eq:finite diff eqn v}
\end{align}
The scheme is complemented with the discretized initial conditions $\hat{\bolds{u}}^{0}_i =(\hat{\bolds{u}}_0)_i$ and $\hat{\bolds{v}}^{0}_i =(\hat{\bolds{v}}_0)_i$. If we substitute \autoref{eq:finite diff eqn u} into \autoref{eq:finite diff eqn v}, we get the standard central difference scheme in time for the second order in time differential equation. Here we have assumed, without loss of generality, $\rho = 1$.
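A minimal one-dimensional transcription of this time stepping is sketched below. Everything here is an illustrative choice, not the setting of the paper: the profile $f(r) = 1 - e^{-r}$, the constant influence function $J \equiv 1$, the domain function $\omega \equiv 1$, and all parameter values are arbitrary.

```python
import numpy as np

def force(u, xs, eps, h):
    # -grad PD^eps(u)(x_i) in 1d with J = 1 and omega = 1, using a Riemann sum.
    # With f(r) = 1 - exp(-r):
    #   partial_S W^eps(S, xi) = (1/(eps*|xi|)) * f'(|xi| S^2) * 2*|xi|*S
    #                          = (2/eps) * exp(-|xi| S^2) * S
    # and the prefactor is 2/(eps^d * omega_d) = 1/eps since omega_1 = 2.
    n = len(xs)
    out = np.zeros(n)
    for i in range(n):
        acc = 0.0
        for j in range(n):
            xi = xs[j] - xs[i]
            if j == i or abs(xi) > eps + 1e-9:
                continue                      # outside the horizon H_eps(x_i)
            S = (u[j] - u[i]) / xi            # 1d nonlocal strain
            pS = (2.0 / eps) * np.exp(-abs(xi) * S * S) * S
            acc += pS * np.sign(xi) * h       # sign(xi) = (y - x)/|y - x| in 1d
        out[i] = acc / eps
    return out

h, eps, dt = 0.01, 0.05, 1e-4
xs = np.linspace(0.0, 1.0, 101)
u = np.zeros_like(xs)
v = np.zeros_like(xs)
for k in range(100):                          # forward Euler steps with b = 0
    v = v + dt * force(u, xs, eps, h)         # update velocity first,
    u = u + dt * v                            # then displacement with new velocity
assert np.allclose(u, 0.0) and np.allclose(v, 0.0)   # zero data stays zero
# a uniform strain field feels zero net force in the bulk by symmetry
assert abs(force(0.1 * xs, xs, eps, h)[50]) < 1e-10
```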
The piecewise constant extensions of the discrete sets $\{\hat{\bolds{u}}^k_i\}_{i\in \mathbb{Z}^d}$ and $\{\hat{\bolds{v}}^k_i\}_{i\in \mathbb{Z}^d}$ are given by
\begin{align*}
\hat{\bolds{u}}^k(\bolds{x}) &:= \sum_{i, \bolds{x}_i \in D} \hat{\bolds{u}}^k_i \chi_{U_i}(\bolds{x}) \\
\hat{\bolds{v}}^k(\bolds{x}) &:= \sum_{i, \bolds{x}_i \in D} \hat{\bolds{v}}^k_i \chi_{U_i}(\bolds{x})
\end{align*}
In this way we represent the finite difference solution as a piecewise constant function. We will show this function provides an $L^2$ approximation of the exact solution.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{mesh_peridynamic_1.png}
\caption{(a) Typical mesh of size $h$. (b) Unit cell $U_i$ corresponding to material point $\bolds{x}_i$.}\label{fig:peridynamic mesh}
\end{figure}
\subsubsection{Convergence results}
In this section we provide upper bounds on the rate of convergence of the discrete approximation to the solution of the peridynamic evolution. The $L^2$ approximation error $E^k$ at time $t^k$, for $0<t^k\leq T$, is defined as
\begin{align*}
E^k &:= \Ltwonorm{\hat{\bolds{u}}^k - \bolds{u}^k}{D;\mathbb{R}^{d}} + \Ltwonorm{\hat{\bolds{v}}^k- \bolds{v}^k}{D;\mathbb{R}^{d}}
\end{align*}
The upper bound on the convergence rate of the approximation error is given by the following theorem.
{\vskip 2mm}
\begin{theorem}\label{thm:convergence}
\textbf{Convergence of finite difference approximation (forward Euler time discretization)}\\
Let $\epsilon>0$ be fixed. Let $(\bolds{u}, \bolds{v})$ be the solution of the peridynamic equation \autoref{eq:per first order}. We assume $\bolds{u}, \bolds{v} \in \Ctwointime{[0,T]}{\Cholderz{D;\mathbb{R}^{d}}}$. Then the finite difference scheme given by \autoref{eq:finite diff eqn u} and \autoref{eq:finite diff eqn v} is consistent in both time and spatial discretization and converges to the exact solution uniformly in time with respect to the $\Ltwo{D;\mathbb{R}^{d}}$ norm. If we assume the error at the initial step is zero then the error $E^k$ at time $t^k$ is bounded and to leading order in the time step $\Delta t$ satisfies
\begin{align}\label{eq: first est}
\sup_{0\leq k \leq T/\Delta t} E^k\leq O\left( C_t\Delta t + C_s\dfrac{h^\gamma}{\epsilon^2} \right),
\end{align}
where the constants $C_s$ and $C_t$ are independent of $h$ and $\Delta t$; $C_s$ depends on the H\"older norm of the solution and $C_t$ depends on the $L^2$ norms of time derivatives of the solution.
\end{theorem}
{\vskip 2mm}
Here we have assumed the initial error to be zero for ease of exposition only.
We remark that the explicit constants leading to \autoref{eq: first est} can be large. The inequality that delivers \autoref{eq: first est} is given to leading order by
\bolds{o}lds{e}gin{align}\label{eq: fund est initial}
\sup_{0\leq k \leq T/\Delta t} E^k\leq \exp \left[T (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) \right] T \left[ C_t \Delta t + (C_s/\epsilon^2) h^\gamma \right],
\end{align}
where the constants $\bolds{o}lds{a}r{C}$, $C_t$ and $C_s$ are given by \autoref{eq: bar C}, \autoref{eq:const Ct}, and \autoref{eq:const Cs}. The explicit constant $C_t$ depends on the spatial $L^2$ norm of the time derivatives of the solution and $C_s$ depends on the spatial H\"older continuity of the solution and the constant $\bolds{o}lds{a}r{C}$. This constant is bounded independently of horizon $\epsilon$. Although the constants are necessarily pessimistic they deliver a-priori error estimates and an example is discussed in \autoref{s:conclusions}.
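To make the structure of the scheme in \autoref{thm:convergence} concrete, the following minimal Python sketch advances a one-dimensional analogue of \autoref{eq:finite diff eqn u} and \autoref{eq:finite diff eqn v}. The linearized bond force with influence function $J(|\xi|) = 1 - |\xi|$ and the periodic boundary treatment are illustrative assumptions standing in for the nonlinear force $-\bolds{o}ldsymbol{\nabla} PD^{\epsilon}$; the velocity is advanced first, so that the updated velocity enters the displacement update.

```python
import numpy as np

def grad_pd(u, eps, h):
    """Illustrative linearized nonlocal force (an assumed stand-in for the
    nonlinear peridynamic force): influence function J(|xi|) = 1 - |xi|,
    periodic boundary conditions.  Returns the analogue of -grad PD^eps(u)."""
    m = int(round(eps / h))                 # number of neighbors in the horizon
    f = np.zeros_like(u)
    for r in range(1, m + 1):
        xi = r * h / eps                    # rescaled bond length in (0, 1]
        s = np.roll(u, -r) + np.roll(u, r) - 2.0 * u
        f += (1.0 - xi) * s / (eps ** 2 * xi) * h
    return -f

def explicit_step(u, v, b, dt, eps, h):
    """One step of the explicit scheme: the velocity is advanced first and
    the new velocity enters the displacement update."""
    v_new = v + dt * (-grad_pd(u, eps, h) + b)
    u_new = u + dt * v_new
    return u_new, v_new
```

With zero data the iterates remain identically zero, and for small smooth data the computed displacement stays bounded over many steps when the time step is well below the stability threshold.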
An identical convergence rate can be established for the general one step scheme and we state it below.
{\vskip 2mm}
\bolds{o}lds{e}gin{theorem}\label{thm:convergence general}
\textbf{Convergence of finite difference approximation (General single step time discretization)}\\
Let us assume that the hypothesis of \autoref{thm:convergence} holds. Fix $\theta \in [0,1]$, and let $(\bolds{u}hat^k , \bolds{v}hat^k)^T$ be the solution of following finite difference equation
\bolds{o}lds{e}gin{align}
\dfrac{\bolds{u}hat^{k+1}_i - \bolds{u}hat^k_i}{\Delta t} &= (1 - \theta) \bolds{v}hat^k_i + \theta \bolds{v}hat^{k+1}_i \label{eq:finite diff eqn u general} \\
\dfrac{\bolds{v}hat^{k+1}_i - \bolds{v}hat^k_i}{\Delta t} &= (1-\theta)\left( -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}lds{b}^k_i\right) + \theta \left( - \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^{k+1})(\bolds{x}_i) + \bolds{o}lds{b}^{k+1}_i \right). \label{eq:finite diff eqn v general}
\end{align}
Then, for any fixed $\theta\in [0,1]$, there exists a constant $K>0$, independent of $(\bolds{u}hat^k , \bolds{v}hat^k)^T$ and $(\bolds{u}^k , \bolds{v}^k)^T$, such that for $\Delta t<K\epsilon^2$ the finite difference scheme given by \autoref{eq:finite diff eqn u general} and \autoref{eq:finite diff eqn v general} is consistent in both the time and spatial discretization. If we assume the error at the initial step is zero, then the error $E^k$ at time $t^k$ is bounded and satisfies
\bolds{o}lds{e}gin{align*}
\sup_{0\leq k \leq T/\Delta t} E^k \leq O\left( C_t\Delta t + C_s\dfrac{h^\gamma}{\epsilon^2} \right).
\end{align*}
The constant $K$ is given by the explicit formula $K=1/\bolds{o}lds{a}r{C}$ where $\bolds{o}lds{a}r{C}$ is described by equation \autoref{eq: bar C}.
Furthermore, for the Crank-Nicolson scheme, $\theta = 1/2$, if we assume the solutions $\bolds{u},\bolds{v}$ belong to $\Cthreeintime{[0,T]}{\Cholderz{D;\bolds{o}lds{b}R^{d}}}$, then the approximation error $E^k$ satisfies
\bolds{o}lds{e}gin{align*}
\sup_{0\leq k \leq T/\Delta t} E^k \leq O\left(\bolds{o}lds{a}r{C}_t (\Delta t)^2 + C_s\dfrac{h^\gamma}{\epsilon^2} \right),
\end{align*}
where $\bolds{o}lds{a}r{C}_t$ is independent of $\Delta t$ and $h$ and is given by \autoref{eq: timebound}.
\end{theorem}
{\vskip 2mm}
As before we assume that the error in the initial data is zero for ease of exposition. The proofs of \autoref{thm:convergence} and \autoref{thm:convergence general} are given in the following sections.
\textbf{Remark.} {\em In \autoref{thm:convergence general}, we have stated a condition on $\Delta t$ for which the convergence estimate holds. This condition arises naturally in the analysis and is related to the Lipschitz continuity of the peridynamic force with respect to the $L^2$ norm, see \autoref{eq: lipshitzl2}}.
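The role of $\theta$ in the one step family can be illustrated on the scalar analogue $u' = v$, $v' = -\omega^2 u$, where the linear restoring force $-\omega^2 u$ is an assumed stand-in for $-\bolds{o}ldsymbol{\nabla} PD^{\epsilon}$; since this force is linear, the implicit step can be solved exactly. The sketch below illustrates the structure of the scheme only, not the peridynamic model itself.

```python
import numpy as np

def theta_scheme(theta, dt, T, omega=2.0):
    """One-step theta-family applied to u' = v, v' = -omega^2 u, a scalar
    stand-in for the peridynamic system.  theta = 0, 1/2, 1 give explicit
    Euler, Crank-Nicolson, and backward Euler, respectively."""
    u, v = 1.0, 0.0                     # initial data u(0) = 1, v(0) = 0
    a = dt * theta
    for _ in range(int(round(T / dt))):
        # u_new = u + dt*((1-th)*v + th*v_new)
        # v_new = v - dt*omega^2*((1-th)*u + th*u_new)
        rhs_u = u + dt * (1.0 - theta) * v
        rhs_v = v - dt * omega ** 2 * (1.0 - theta) * u
        # eliminate v_new to solve the 2x2 implicit system in closed form
        u_new = (rhs_u + a * rhs_v) / (1.0 + a ** 2 * omega ** 2)
        v_new = rhs_v - a * omega ** 2 * u_new
        u, v = u_new, v_new
    return u

# exact solution at time T is cos(omega*T)
err_cn = abs(theta_scheme(0.5, 0.01, 1.0) - np.cos(2.0))
```

Halving $\Delta t$ reduces the $\theta = 1/2$ error by roughly a factor of four, in line with the $(\Delta t)^2$ rate of the theorem, while $\theta = 0$ and $\theta = 1$ are first order.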
\subsubsection{Error analysis}\label{ss:error analysis1}
\autoref{thm:convergence} and \autoref{thm:convergence general} are proved along similar lines. In both cases we define the $L^2$-projections of the actual solutions onto the space of piecewise constant functions defined over the cells $U_i$. These are given as follows. Let $(\bolds{u}tilde^k_i, \bolds{v}tilde^k_i)$ be the average of the exact solution $(\bolds{u}^k, \bolds{v}^k)$ in the unit cell $U_i$ given by
\bolds{o}lds{e}gin{align*}
\bolds{u}tilde^k_i &:= \dfrac{1}{h^d} \int_{U_i} \bolds{u}^k(\bolds{x}) d\bolds{x} \\
\bolds{v}tilde^k_i &:= \dfrac{1}{h^d} \int_{U_i} \bolds{v}^k(\bolds{x}) d\bolds{x}
\end{align*}
and the $L^2$ projection $(\bolds{u}tilde^k, \bolds{v}tilde^k)$ of the solution onto piecewise constant functions is given by
\bolds{o}lds{e}gin{align}
\bolds{u}tilde^k(\bolds{x}) &:= \sum_{i, \bolds{x}_i \in D} \bolds{u}tilde^k_i \chi_{U_i}(\bolds{x}) \label{eq:periodpiecewise ext1} \\
\bolds{v}tilde^k(\bolds{x}) &:= \sum_{i, \bolds{x}_i \in D} \bolds{v}tilde^k_i\chi_{U_i}(\bolds{x}) \label{eq:periodpiecewise ext2}
\end{align}
The error between $(\bolds{u}hat^k, \bolds{v}hat^k)^T$ and $(\bolds{u}(t^k), \bolds{v}(t^k))^T$ is now split into two parts. From the triangle inequality, we have
\bolds{o}lds{e}gin{align*}
\Ltwonorm{\bolds{u}hat^k - \bolds{u}(t^k)}{D;\bolds{o}lds{b}R^{d}} &\leq \Ltwonorm{\bolds{u}hat^k - \bolds{u}tilde^k}{D;\bolds{o}lds{b}R^{d}} + \Ltwonorm{\bolds{u}tilde^k - \bolds{u}^k}{D;\bolds{o}lds{b}R^{d}} \\
\Ltwonorm{\bolds{v}hat^k - \bolds{v}(t^k)}{D;\bolds{o}lds{b}R^{d}} &\leq \Ltwonorm{\bolds{v}hat^k - \bolds{v}tilde^k}{D;\bolds{o}lds{b}R^{d}} + \Ltwonorm{\bolds{v}tilde^k - \bolds{v}^k}{D;\bolds{o}lds{b}R^{d}}
\end{align*}
In \autoref{ss:error analysis} and \autoref{ss:implicit} we will show that the error between the $L^2$ projections of the actual solution and the discrete approximation for both forward Euler and implicit one step methods decay according to
\bolds{o}lds{e}gin{align}\label{eq:write estimate error ek}
\sup_{0\leq k \leq T/\Delta t} \left( \Ltwonorm{\bolds{u}hat^k - \bolds{u}tilde^k}{D;\bolds{o}lds{b}R^{d}} + \Ltwonorm{\bolds{v}hat^k - \bolds{v}tilde^k}{D;\bolds{o}lds{b}R^{d}} \right) &= O\left( \Delta t + \dfrac{h^\gamma}{\epsilon^2} \right).
\end{align}
In what follows we estimate the terms
\bolds{o}lds{e}gin{align}\label{eq: per convgest}
\Ltwonorm{\bolds{u}tilde^k - \bolds{u}(t^k)}{}\hbox{ and }
\Ltwonorm{\bolds{v}tilde^k - \bolds{v}(t^k)}{}
\end{align}
and show they go to zero at a rate of $h^\gamma$ uniformly in time. The estimates given by \autoref{eq:write estimate error ek} together with the $O(h^\gamma)$ estimates for \autoref{eq: per convgest} establish \autoref{thm:convergence} and \autoref{thm:convergence general}. We now establish the $L^2$ estimates for the differences $\bolds{u}tilde^k - \bolds{u}(t^k)$ and $\bolds{v}tilde^k - \bolds{v}(t^k)$.
We write
\bolds{o}lds{e}gin{align}
&\Ltwonorm{\bolds{u}tilde^k - \bolds{u}^k}{D;\bolds{o}lds{b}R^{d}}^2 \notag \\
&= \sum_{i, \bolds{x}_i \in D} \int_{U_i} \abs{\bolds{u}tilde^k(\bolds{x}) - \bolds{u}^k(\bolds{x})}^2 d\bolds{x} \notag \\
&= \sum_{i,\bolds{x}_i \in D} \int_{U_i} \abs{ \dfrac{1}{h^d} \int_{U_i} (\bolds{u}^k(\bolds{y}) - \bolds{u}^k( \bolds{x})) d\bolds{y} }^2 d\bolds{x} \notag \\
&= \sum_{i,\bolds{x}_i \in D} \int_{U_i} \left[ \dfrac{1}{h^{2d}} \int_{U_i} \int_{U_i} (\bolds{u}^k(\bolds{y}) - \bolds{u}^k( \bolds{x})) \cdot (\bolds{u}^k(\bolds{z}) - \bolds{u}^k( \bolds{x})) d\bolds{y} d\bolds{z} \right] d\bolds{x} \notag \\
&\leq \sum_{i,\bolds{x}_i \in D} \int_{U_i} \left[ \dfrac{1}{h^d} \int_{U_i} \abs{\bolds{u}^k(\bolds{y}) - \bolds{u}^k(\bolds{x})}^2 d\bolds{y} \right] d\bolds{x} \label{eq:estimate butilde and bu}
\end{align}
where we used Cauchy's inequality and Jensen's inequality. For $\bolds{x},\bolds{y} \in U_i$, $\abs{\bolds{x} - \bolds{y}} \leq c h$, where $c = \sqrt{2}$ for $d=2$ and $c=\sqrt{3}$ for $d=3$. Since $\bolds{u} \in \Cholderz{}$ we have
\bolds{o}lds{e}gin{align}\label{eq:estimate bu holder}
\abs{\bolds{u}^k(\bolds{x}) - \bolds{u}^k(\bolds{y})} &= \abs{\bolds{x} - \bolds{y}}^\gamma \dfrac{\abs{\bolds{u}^k(\bolds{y}) -\bolds{u}^k(\bolds{x})}}{\abs{\bolds{x} - \bolds{y}}^\gamma} \notag \\
&\leq c^{\gamma} h^\gamma \Choldernorm{\bolds{u}^k}{D;\bolds{o}lds{b}R^{d}} \leq c^{\gamma} h^\gamma \sup_t \Choldernorm{\bolds{u}(t)}{D;\bolds{o}lds{b}R^{d}}
\end{align}
and substitution in \autoref{eq:estimate butilde and bu} gives
\bolds{o}lds{e}gin{align*}
\Ltwonorm{\bolds{u}tilde^k - \bolds{u}^k}{D;\bolds{o}lds{b}R^{d}}^2 &\leq c^{2\gamma} h^{2\gamma} \sum_{i,\bolds{x}_i \in D} \int_{U_i} d\bolds{x} \left( \sup_t \Choldernorm{\bolds{u}(t)}{D;\bolds{o}lds{b}R^{d}} \right)^2 \notag \\
&\leq c^{2\gamma} |D| h^{2\gamma} \left( \sup_t \Choldernorm{\bolds{u}(t)}{D;\bolds{o}lds{b}R^{d}} \right)^2 .
\end{align*}
A similar estimate can be derived for $||\bolds{v}tilde^k - \bolds{v}^k||_{L^2}$ and substitution of the estimates into \autoref{eq: per convgest} gives
\bolds{o}lds{e}gin{align*}
\sup_{k} \left( \Ltwonorm{\bolds{u}tilde^k - \bolds{u}(t^k)}{D;\bolds{o}lds{b}R^{d}} + \Ltwonorm{\bolds{v}tilde^k - \bolds{v}(t^k)}{D;\bolds{o}lds{b}R^{d}} \right) = O(h^\gamma).
\end{align*}
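The $O(h^\gamma)$ behaviour of the projection error can be observed numerically. In the sketch below the test profile is a truncated Weierstrass-type sum, an illustrative choice of a function that is H\"older continuous with exponent $\gamma = 1/2$; the cell averages play the role of $\bolds{u}tilde^k$.

```python
import numpy as np

def f(x, gamma=0.5, n_terms=14):
    """Truncated Weierstrass-type sum: Holder continuous with exponent gamma."""
    return sum(2.0 ** (-gamma * n) * np.cos(2.0 ** n * np.pi * x)
               for n in range(n_terms))

def proj_error(n_cells, n_samples=2 ** 17):
    """L2 error between f and its cell-average projection on a uniform
    mesh of [0, 1], computed with a midpoint quadrature."""
    x = (np.arange(n_samples) + 0.5) / n_samples
    fx = f(x)
    means = fx.reshape(n_cells, -1).mean(axis=1)          # cell averages
    return np.sqrt(np.mean((fx - np.repeat(means, n_samples // n_cells)) ** 2))

e_coarse, e_fine = proj_error(64), proj_error(256)
rate = np.log(e_coarse / e_fine) / np.log(4.0)            # observed order in h
```

Refining the mesh from $64$ to $256$ cells reduces the error by about a factor of two, i.e. an observed rate close to $\gamma = 1/2$.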
In the next section we establish the error estimate \autoref{eq:write estimate error ek} for both forward Euler and general one step schemes in \autoref{ss:error analysis} and \autoref{ss:implicit}.
\subsubsection{Error analysis for the approximation of the $L^2$ projection of the exact solution}\label{ss:error analysis}
In this subsection, we estimate the difference between the approximate solution $(\bolds{u}hat^k,\bolds{v}hat^k)$ and the $L^2$ projection of the exact solution onto piecewise constant functions given by $(\bolds{u}tilde^k, \bolds{v}tilde^k)$; see \autoref{eq:periodpiecewise ext1} and \autoref{eq:periodpiecewise ext2}. Let the differences be denoted by $\bolds{o}lds{e}^k(u) := \bolds{u}hat^k - \bolds{u}tilde^k$ and $\bolds{o}lds{e}^k(v) := \bolds{v}hat^k - \bolds{v}tilde^k$, and their evaluations at the grid points by $\bolds{o}lds{e}^k_i(u) := \bolds{u}hat^k_i - \bolds{u}tilde^k_i$ and $\bolds{o}lds{e}^k_i(v) := \bolds{v}hat^k_i - \bolds{v}tilde^k_i$.
Subtracting $(\bolds{u}tilde^{k+1}_i - \bolds{u}tilde^k_i)/\Delta t$ from \autoref{eq:finite diff eqn u} gives
\bolds{o}lds{e}gin{align*}
& \dfrac{\bolds{u}hat^{k+1}_i - \bolds{u}hat^k_i}{\Delta t} - \dfrac{\bolds{u}tilde^{k+1}_i - \bolds{u}tilde^k_i}{\Delta t} \\
&= \bolds{v}hat^{k+1}_i - \dfrac{\bolds{u}tilde^{k+1}_i - \bolds{u}tilde^k_i}{\Delta t} \notag \\
&= \bolds{v}hat^{k+1}_i - \bolds{v}tilde^{k+1}_i + \left( \bolds{v}tilde^{k+1}_i - \dparder{\bolds{u}tilde^{k+1}_i}{t} \right) + \left( \dparder{\bolds{u}tilde^{k+1}_i}{t} - \dfrac{\bolds{u}tilde^{k+1}_i - \bolds{u}tilde^k_i}{\Delta t} \right).
\end{align*}
Taking the average over the unit cell $U_i$ of the exact peridynamic equation \autoref{eq:per first order} at time $t^{k+1}$ gives $\bolds{v}tilde^{k+1}_i - \dparder{\bolds{u}tilde^{k+1}_i}{t} = 0$. Therefore, the equation for $\bolds{o}lds{e}^k_i(u)$ is given by
\bolds{o}lds{e}gin{align}\label{eq:error eqn in u}
\bolds{o}lds{e}^{k+1}_i(u) = \bolds{o}lds{e}^k_i(u) + \Delta t \bolds{o}lds{e}^{k+1}_i(v) + \Delta t\tau^{k}_i(u),
\end{align}
where we identify the discretization error as
\bolds{o}lds{e}gin{align}\label{eq:consistency error in u}
\tau^k_i(u) &:= \dparder{\bolds{u}tilde^{k+1}_i}{t} - \dfrac{\bolds{u}tilde^{k+1}_i - \bolds{u}tilde^k_i}{\Delta t}.
\end{align}
Similarly, we subtract $(\bolds{v}tilde^{k+1}_i - \bolds{v}tilde^k_i)/\Delta t$ from \autoref{eq:finite diff eqn v} and add and subtract terms to get
\bolds{o}lds{e}gin{align}\label{eq:error in v 1}
\dfrac{\bolds{v}hat^{k+1}_i - \bolds{v}hat^k_i}{\Delta t} - \dfrac{\bolds{v}tilde^{k+1}_i - \bolds{v}tilde^k_i}{\Delta t} &= - \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}lds{b}^k_i - \dparder{\bolds{v}^k_i}{t} + \left( \dparder{\bolds{v}^k_i}{t} - \dfrac{\bolds{v}tilde^{k+1}_i - \bolds{v}tilde^k_i}{\Delta t}\right) \notag \\
&= - \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}lds{b}^k_i - \dparder{\bolds{v}^k_i}{t} \notag \\
&\quad + \left( \dparder{\bolds{v}tilde^k_i}{t} - \dfrac{\bolds{v}tilde^{k+1}_i - \bolds{v}tilde^k_i}{\Delta t}\right) + \left( \dparder{\bolds{v}^k_i}{t} - \dparder{\bolds{v}tilde^k_i}{t}\right),
\end{align}
where we identify $\tau^k_i(v)$ as follows
\bolds{o}lds{e}gin{align}\label{eq:consistency error in v}
\tau^k_i(v) &:= \dparder{\bolds{v}tilde^k_i}{t} - \dfrac{\bolds{v}tilde^{k+1}_i - \bolds{v}tilde^k_i}{\Delta t}.
\end{align}
Note that $\tau^k_i(u)$ involves $\dparder{\bolds{u}tilde^{k+1}_i}{t}$. From the exact peridynamic equation, we have
\bolds{o}lds{e}gin{align}\label{eq:exact per eqn v 1}
\bolds{o}lds{b}^k_i - \dparder{\bolds{v}^k_i}{t} = \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}^k)(\bolds{x}_i).
\end{align}
Combining \autoref{eq:error in v 1}, \autoref{eq:consistency error in v}, and \autoref{eq:exact per eqn v 1}, we get
\bolds{o}lds{e}gin{align*}
\bolds{o}lds{e}^{k+1}_i(v) &= \bolds{o}lds{e}^k_i(v) + \Delta t \tau^k_i(v) + \Delta t \left( \dparder{\bolds{v}^k_i}{t} - \dparder{\bolds{v}tilde^k_i}{t}\right) \notag \\
&\quad + \Delta t \left( -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}^k)(\bolds{x}_i) \right) \notag \\
&= \bolds{o}lds{e}^k_i(v) + \Delta t \tau^k_i(v) + \Delta t \left( \dparder{\bolds{v}^k_i}{t} - \dparder{\bolds{v}tilde^k_i}{t}\right) \notag \\
&\quad + \Delta t \left( -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)(\bolds{x}_i) \right) \notag \\
&\quad + \Delta t \left( -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}^k)(\bolds{x}_i) \right).
\end{align*}
The spatial discretization errors $\sigma^k_i(u)$ and $\sigma^k_i(v)$ are given by
\bolds{o}lds{e}gin{align}
\sigma^k_i(u) &:= \left( -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}^k)(\bolds{x}_i) \right) \label{eq:consistency error in u spatial} \\
\sigma^k_i(v) &:= \dparder{\bolds{v}^k_i}{t} - \dparder{\bolds{v}tilde^k_i}{t}. \label{eq:consistency error in v spatial}
\end{align}
We finally have
\bolds{o}lds{e}gin{align}\label{eq:error eqn in v}
\bolds{o}lds{e}^{k+1}_i(v) &= \bolds{o}lds{e}^k_i(v) + \Delta t \left(\tau^k_i(v) + \sigma^k_i(u) + \sigma^k_i(v) \right) \notag \\
&\quad + \Delta t \left( -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)(\bolds{x}_i) \right).
\end{align}
We now show the consistency and stability properties of the numerical scheme.
\subsubsection{Consistency}\label{sss:consistency}
We treat the time discretization error and the spatial discretization error separately. The time discretization error follows easily from Taylor's theorem, while the spatial discretization error uses properties of the nonlinear peridynamic force.
\
\textbf{Time discretization: }We first estimate the time discretization error. A Taylor series expansion is used to estimate $\tau^k_i(u)$ as follows
\bolds{o}lds{e}gin{align*}
\tau^k_i(u) &= \dfrac{1}{h^d} \int_{U_i} \left( \dparder{\bolds{u}^k(\bolds{x})}{t} - \dfrac{\bolds{u}^{k+1}(\bolds{x}) - \bolds{u}^k(\bolds{x})}{\Delta t} \right) d\bolds{x} \\
&= \dfrac{1}{h^d} \int_{U_i} \left( -\dfrac{1}{2} \dsecder{\bolds{u}^k(\bolds{x})}{t} \Delta t + O((\Delta t)^2) \right) d\bolds{x} .
\end{align*}
Computing the $\Ltwo{}$ norm of $\tau^k_i(u)$ and using Jensen's inequality gives
\bolds{o}lds{e}gin{align*}
\Ltwonorm{\tau^k(u)}{D;\bolds{o}lds{b}R^{d}} &\leq \frac{\Delta t}{2} \Ltwonorm{\dsecder{\bolds{u}^k}{t}}{D;\bolds{o}lds{b}R^{d}} + O((\Delta t)^2) \notag \\
&\leq \frac{\Delta t}{2} \sup_{t} \Ltwonorm{\dsecder{\bolds{u}(t)}{t}}{D;\bolds{o}lds{b}R^{d}} + O((\Delta t)^2).
\end{align*}
Similarly, we have
\bolds{o}lds{e}gin{align*}
\Ltwonorm{\tau^k(v)}{D;\bolds{o}lds{b}R^{d}} \leq \frac{\Delta t}{2} \sup_{t} \Ltwonorm{\dsecder{\bolds{v}(t)}{t}}{D;\bolds{o}lds{b}R^{d}} + O((\Delta t)^2).
\end{align*}
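The leading-order size of the truncation error can be checked on a scalar example; $u(t) = \sin t$ and the evaluation point below are arbitrary illustrative choices.

```python
import math

def tau(u, du, t, dt):
    """Forward-difference truncation error du/dt(t) - (u(t + dt) - u(t))/dt."""
    return du(t) - (u(t + dt) - u(t)) / dt

u, du, d2u = math.sin, math.cos, lambda t: -math.sin(t)
t = 0.7
t1 = tau(u, du, t, 1e-2)
t2 = tau(u, du, t, 5e-3)
# leading order: tau ~ -(dt/2) * u''(t), so halving dt halves the error
```

Halving $\Delta t$ halves the truncation error, confirming the first-order behaviour of the forward difference.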
\
\textbf{Spatial discretization: }We now estimate the spatial discretization error. Substituting the definition of $\bolds{v}tilde^k$ and following steps similar to those employed in \autoref{eq:estimate bu holder} gives
\bolds{o}lds{e}gin{align*}
\abs{\sigma^k_i(v)} &= \abs{\dparder{\bolds{v}^k_i}{t} - \dfrac{1}{h^d}\int_{U_i} \dparder{\bolds{v}^k(\bolds{x})}{t} d\bolds{x}} \leq c^\gamma h^{\gamma} \dfrac{1}{h^d} \int_{U_i} \dfrac{1}{\abs{\bolds{x}_i - \bolds{x}}^\gamma} \abs{\dparder{\bolds{v}^k(\bolds{x}_i)}{t} - \dparder{\bolds{v}^k(\bolds{x})}{t}} d\bolds{x} \notag \\
&\leq c^\gamma h^{\gamma} \Choldernorm{\dparder{\bolds{v}^k}{t}}{D;\bolds{o}lds{b}R^{d}} \leq c^\gamma h^{\gamma} \sup_{t} \Choldernorm{\dparder{\bolds{v}(t)}{t}}{D;\bolds{o}lds{b}R^{d}}.
\end{align*}
Taking the $\Ltwo{}$ norm of the error $\sigma^k(v)$ and substituting the estimate above delivers
\bolds{o}lds{e}gin{align*}
\Ltwonorm{\sigma^k(v)}{D;\bolds{o}lds{b}R^{d}} &\leq h^{\gamma} c^{\gamma} \sqrt{ \abs{D} } \sup_{t} \Choldernorm{\dparder{\bolds{v}(t)}{t}}{D;\bolds{o}lds{b}R^{d}}.
\end{align*}
Now we estimate $\abs{\sigma^k_i(u)}$. We use the notation $\bolds{o}lds{a}r{\bolds{u}}^k(\bolds{x}):= \bolds{u}^k(\bolds{x}+\epsilon \bolds{o}ldsymbol{\xi}) - \bolds{u}^k(\bolds{x})$ and $\overline{\tilde{\bolds{u}}}^k(\bolds{x}):= \tilde{\bolds{u}}^k(\bolds{x}+\epsilon \bolds{o}ldsymbol{\xi}) -\tilde{\bolds{u}}^k(\bolds{x})$ and choose $\bolds{u}=\bolds{u}^k$ and $\bolds{v}=\tilde{\bolds{u}}^k$ in \autoref{eq:diff force bound 1} to find that
\bolds{o}lds{e}gin{align}\label{eq:estimate sigma u 1}
\abs{\sigma^k_i(u)} &= \abs{-\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}^k)(\bolds{x}_i)} \notag \\
&\leq \dfrac{2C_2}{\epsilon \omega_d} \abs{ \int_{H_1(\mathbf{0})} J(\abs{\bolds{o}ldsymbol{\xi}}) \dfrac{\abs{\bolds{u}^k(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi}) - \bolds{u}tilde^k(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi}) - (\bolds{u}^k(\bolds{x}_i) - \bolds{u}tilde^k(\bolds{x}_i))}}{\epsilon \abs{\bolds{o}ldsymbol{\xi}}} d\bolds{o}ldsymbol{\xi} }.
\end{align}
Here $C_2$ is the maximum of the second derivative of the profile describing the potential given by \autoref{C2}.
Following the earlier analysis, see \autoref{eq:estimate bu holder}, we find that
\bolds{o}lds{e}gin{align}
\abs{\bolds{u}^k(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi}) - \bolds{u}tilde^k(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi})} &\leq c^\gamma h^\gamma \sup_t \Choldernorm{\bolds{u}(t)}{D;\bolds{o}lds{b}R^{d}} \notag \\
\abs{ \bolds{u}^k(\bolds{x}_i) - \bolds{u}tilde^k(\bolds{x}_i) } &\leq c^\gamma h^\gamma \sup_t \Choldernorm{\bolds{u}(t)}{D;\bolds{o}lds{b}R^{d}}. \notag
\end{align}
For reference, we define the constant
\bolds{o}lds{e}gin{align}\label{eq: bar C}
&\bolds{o}lds{a}r{C}=\frac{C_2}{\omega_d}\int_{H_1(\mathbf{0})}J(\abs{\bolds{o}ldsymbol{\xi}})\dfrac{1}{\abs{\bolds{o}ldsymbol{\xi}}}\,d\bolds{o}ldsymbol{\xi}.
\end{align}
We now focus on \autoref{eq:estimate sigma u 1}. We substitute the above two inequalities to get
\bolds{o}lds{e}gin{align*}
\abs{\sigma^k_i(u)} &\leq \dfrac{2C_2}{\epsilon^2 \omega_d} \vline \int_{H_1(\mathbf{0})} J(\abs{\bolds{o}ldsymbol{\xi}}) \dfrac{1}{\abs{\bolds{o}ldsymbol{\xi}}} \notag \\
&\qquad \left(\abs{\bolds{u}^k(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi}) - \bolds{u}tilde^k(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi})} + \abs{ \bolds{u}^k(\bolds{x}_i) - \bolds{u}tilde^k(\bolds{x}_i) } \right) d\bolds{o}ldsymbol{\xi} \vline \notag \\
&\leq 4 h^\gamma c^\gamma \frac{\bolds{o}lds{a}r{C}}{\epsilon^2} \sup_t \Choldernorm{\bolds{u}(t)}{D;\bolds{o}lds{b}R^{d}}.
\end{align*}
Therefore, we have
\bolds{o}lds{e}gin{align*}
\Ltwonorm{\sigma^k(u)}{D;\bolds{o}lds{b}R^{d}} &\leq h^\gamma \left( 4 c^\gamma \sqrt{|D|} \frac{\bolds{o}lds{a}r{C}}{\epsilon^2} \sup_t \Choldernorm{\bolds{u}(t)}{D;\bolds{o}lds{b}R^{d}} \right).
\end{align*}
This completes the proof of consistency of numerical approximation.
\subsubsection{Stability}\label{sss:stability}
Let $e^k$ be the total error at the $k^{\text{th}}$ time step. It is defined as
\bolds{o}lds{e}gin{align*}
e^k &:= \Ltwonorm{\bolds{o}lds{e}^k(u)}{D;\bolds{o}lds{b}R^{d}} + \Ltwonorm{\bolds{o}lds{e}^k(v)}{D;\bolds{o}lds{b}R^{d}}.
\end{align*}
To simplify the calculations, we define the quantity $\tau$ as
\bolds{o}lds{e}gin{align*}
\tau &:= \sup_k \left(\Ltwonorm{\tau^k(u)}{D;\bolds{o}lds{b}R^{d}} + \Ltwonorm{\tau^k(v)}{D;\bolds{o}lds{b}R^{d}} \right. \notag \\
&\quad \left. + \Ltwonorm{\sigma^k(u)}{D;\bolds{o}lds{b}R^{d}} + \Ltwonorm{\sigma^k(v)}{D;\bolds{o}lds{b}R^{d}}\right).
\end{align*}
From our consistency analysis, we know that to leading order
\bolds{o}lds{e}gin{align}\label{eq:estimate tau}
\tau &\leq C_t \Delta t + \dfrac{C_s}{\epsilon^2} h^\gamma
\end{align}
where
\bolds{o}lds{e}gin{align}
C_t &:= \frac{1}{2} \sup_{t} \Ltwonorm{\dsecder{\bolds{u}(t)}{t}}{D;\bolds{o}lds{b}R^{d}} + \frac{1}{2} \sup_{t} \Ltwonorm{\dfrac{\partial^3 \bolds{u}(t)}{\partial t^3}}{D;\bolds{o}lds{b}R^{d}}, \label{eq:const Ct} \\
C_s &:= c^\gamma \sqrt{|D|} \left[ \epsilon^2 \sup_{t} \Choldernorm{\dfrac{\partial^2 \bolds{u}(t)}{\partial t^2}}{D;\bolds{o}lds{b}R^{d}} + 4 \bolds{o}lds{a}r{C} \sup_t \Choldernorm{\bolds{u}(t)}{D;\bolds{o}lds{b}R^{d}} \right]. \label{eq:const Cs}
\end{align}
We take the $\Ltwo{}$ norm of \autoref{eq:error eqn in u} and \autoref{eq:error eqn in v} and add them. Using the definition of $\tau$ above, we get
\bolds{o}lds{e}gin{align}\label{eq:error k ineq 1}
e^{k+1} &\leq e^k + \Delta t \Ltwonorm{\bolds{o}lds{e}^{k+1}(v)}{D;\bolds{o}lds{b}R^{d}} + \Delta t \tau \notag \\
&\quad + \Delta t \left( \sum_{i} h^d \abs{ -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)(\bolds{x}_i)}^2 \right)^{1/2}.
\end{align}
It remains to estimate the last term in the above inequality. Arguing as in \autoref{eq:estimate sigma u 1}, we have
\bolds{o}lds{e}gin{align*}
&\abs{ -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)(\bolds{x}_i)} \notag \\
&\leq \dfrac{2C_2}{\epsilon^2 \omega_d} \vline \int_{H_1(\mathbf{0})} J(\abs{\bolds{o}ldsymbol{\xi}}) \dfrac{1}{\abs{\bolds{o}ldsymbol{\xi}}} \abs{\bolds{u}hat^k(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi}) - \bolds{u}tilde^k(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi}) - (\bolds{u}hat^k(\bolds{x}_i) - \bolds{u}tilde^k(\bolds{x}_i)) } d\bolds{o}ldsymbol{\xi} \vline \notag \\
&= \dfrac{2C_2}{\epsilon^2 \omega_d} \vline \int_{H_1(\mathbf{0})} J(\abs{\bolds{o}ldsymbol{\xi}}) \dfrac{1}{\abs{\bolds{o}ldsymbol{\xi}}} \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi}) - \bolds{o}lds{e}^k(u)(\bolds{x}_i)} d\bolds{o}ldsymbol{\xi} \vline \notag \\
&\leq \dfrac{2C_2}{\epsilon^2 \omega_d} \vline \int_{H_1(\mathbf{0})} J(\abs{\bolds{o}ldsymbol{\xi}}) \dfrac{1}{\abs{\bolds{o}ldsymbol{\xi}}} \left(\abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi})} + \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i )} \right) d\bolds{o}ldsymbol{\xi} \vline .
\end{align*}
By $\bolds{o}lds{e}^k(u)(\bolds{x})$ we mean the evaluation at $\bolds{x}$ of the piecewise constant extension of the set $\{ \bolds{o}lds{e}^k_i(u) \}_i$. We proceed as follows
\bolds{o}lds{e}gin{align*}
&\abs{ -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)(\bolds{x}_i)}^2 \notag \\
&\leq \left( \dfrac{2C_2}{\epsilon^2 \omega_d} \right)^2 \int_{H_1(\mathbf{0})} \int_{H_1(\mathbf{0})} J(\abs{\bolds{o}ldsymbol{\xi}}) J(\abs{\bolds{o}ldsymbol{\eta}}) \dfrac{1}{\abs{\bolds{o}ldsymbol{\xi}}} \dfrac{1}{\abs{\bolds{o}ldsymbol{\eta}}} \notag \\
&\quad \left( \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi})} + \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i )} \right) \left( \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\eta})} + \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i )} \right) d\bolds{o}ldsymbol{\xi} d\bolds{o}ldsymbol{\eta} .
\end{align*}
Using the inequality $|ab| \leq (\abs{a}^2 + \abs{b}^2)/2$, we get
\bolds{o}lds{e}gin{align*}
&\left( \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi})} + \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i )} \right) \left( \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\eta})} + \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i )} \right) \notag \\
&\leq 3 \left( \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi})}^2 + \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\eta})}^2 + \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i)}^2 \right),
\end{align*}
and
\bolds{o}lds{e}gin{align*}
&\sum_{\bolds{x}_i \in D} h^d \abs{ -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)(\bolds{x}_i)}^2 \notag \\
&\leq \left( \dfrac{2C_2}{\epsilon^2 \omega_d} \right)^2 \int_{H_1(\mathbf{0})} \int_{H_1(\mathbf{0})} J(\abs{\bolds{o}ldsymbol{\xi}}) J(\abs{\bolds{o}ldsymbol{\eta}}) \dfrac{1}{\abs{\bolds{o}ldsymbol{\xi}}} \dfrac{1}{\abs{\bolds{o}ldsymbol{\eta}}} \notag \\
&\quad \sum_{\bolds{x}_i \in D} h^d 3 \left( \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\xi})}^2 + \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i + \epsilon \bolds{o}ldsymbol{\eta})}^2 + \abs{\bolds{o}lds{e}^k(u)(\bolds{x}_i)}^2 \right) d\bolds{o}ldsymbol{\xi} d\bolds{o}ldsymbol{\eta} .
\end{align*}
Since $\bolds{o}lds{e}^k(u)(\bolds{x}) = \sum_{\bolds{x}_i \in D} \bolds{o}lds{e}^k_i(u) \chi_{U_i}(\bolds{x}) $, we have
\bolds{o}lds{e}gin{align}\label{eq:estimate diff in per force stability}
\sum_{\bolds{x}_i \in D} h^d \abs{ -\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)(\bolds{x}_i) + \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)(\bolds{x}_i)}^2 &\leq \dfrac{(6\bolds{o}lds{a}r{C})^2}{\epsilon^4} \Ltwonorm{\bolds{o}lds{e}^k(u)}{D;\bolds{o}lds{b}R^{d}}^2,
\end{align}
where $\bolds{o}lds{a}r{C}$ is given by \autoref{eq: bar C}. In summary \autoref{eq:estimate diff in per force stability} shows the Lipschitz continuity of the peridynamic force with respect to the $L^2$ norm, see \autoref{eq: lipshitz}, expressed in this context as
\bolds{o}lds{e}gin{align}\label{eq: lipshitzl2}
\Vert\bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}hat^k)- \bolds{o}ldsymbol{\nabla} PD^{\epsilon}(\bolds{u}tilde^k)\Vert_{L^2(D;\mathbb{R}^d)}\leq\dfrac{6\bolds{o}lds{a}r{C}}{\epsilon^2}\Vert \bolds{o}lds{e}^k(u)\Vert_{L^2(D;\mathbb{R}^d)}.
\end{align}
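The Lipschitz property \autoref{eq: lipshitzl2} can be observed on a discrete linear model. The kernel $J(|\xi|) = 1 - |\xi|$ and the periodic boundary conditions below are illustrative assumptions; for a linear operator the Lipschitz constant is its operator norm, which is bounded by a constant times $1/\epsilon^2$.

```python
import numpy as np

def model_force(u, eps, h):
    """Linear nonlocal model operator with kernel J(|xi|) = 1 - |xi| and
    periodic boundary conditions; an illustrative stand-in for the
    peridynamic force -grad PD^eps."""
    m = int(round(eps / h))
    f = np.zeros_like(u)
    for r in range(1, m + 1):
        xi = r * h / eps
        s = np.roll(u, -r) + np.roll(u, r) - 2.0 * u
        f += (1.0 - xi) * s / (eps ** 2 * xi) * h
    return f

def l2(w, h):
    """Discrete L2 norm on the uniform mesh."""
    return np.sqrt(h * np.sum(w ** 2))

rng = np.random.default_rng(0)
n = 256; h = 1.0 / n; eps = 8 * h
u1, u2 = rng.standard_normal(n), rng.standard_normal(n)
lip = l2(model_force(u1, eps, h) - model_force(u2, eps, h), h) / l2(u1 - u2, h)
# lip is bounded by a constant times 1/eps^2, uniformly over pairs of states
```

The ratio is bounded uniformly over pairs of states, reflecting linearity of the model operator; for the nonlinear peridynamic force the bound above holds with the constant $6\bolds{o}lds{a}r{C}/\epsilon^2$.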
Finally, we substitute the above inequality into \autoref{eq:error k ineq 1} to get
\bolds{o}lds{e}gin{align*}
e^{k+1} &\leq e^k + \Delta t \Ltwonorm{\bolds{o}lds{e}^{k+1}(v)}{D;\bolds{o}lds{b}R^{d}} + \Delta t \tau + \Delta t \dfrac{6\bolds{o}lds{a}r{C}}{\epsilon^2}\Ltwonorm{\bolds{o}lds{e}^k(u)}{D;\bolds{o}lds{b}R^{d}}
\end{align*}
We add the positive quantity $\Delta t \Vert\bolds{o}lds{e}^{k+1}(u)\Vert_{L^2(D;\mathbb{R}^d)} + \Delta t (6\bolds{o}lds{a}r{C}/\epsilon^2) \Vert\bolds{o}lds{e}^k(v)\Vert_{L^2(D;\mathbb{R}^d)}$ to the right-hand side of the above inequality to get
\bolds{o}lds{e}gin{align*}
&e^{k+1} \leq ( 1 + \Delta t 6\bolds{o}lds{a}r{C}/\epsilon^2) e^k + \Delta t e^{k+1} + \Delta t \tau \\
\Rightarrow & e^{k+1} \leq \dfrac{( 1 + \Delta t 6\bolds{o}lds{a}r{C}/\epsilon^2)}{1 - \Delta t} e^{k} + \dfrac{\Delta t }{1 - \Delta t} \tau.
\end{align*}
We recursively substitute for $e^{j}$ in the above to get
\bolds{o}lds{e}gin{align}
e^{k+1} &\leq \dfrac{( 1 + \Delta t 6\bolds{o}lds{a}r{C}/\epsilon^2)}{1 - \Delta t} e^k + \dfrac{\Delta t }{1 - \Delta t} \tau \notag \\
&\leq \left(\dfrac{( 1 + \Delta t 6\bolds{o}lds{a}r{C}/\epsilon^2)}{1 - \Delta t} \right)^2 e^{k-1} + \dfrac{\Delta t }{1 - \Delta t} \tau \left(1 + \dfrac{( 1 + \Delta t 6\bolds{o}lds{a}r{C}/\epsilon^2)}{1 - \Delta t}\right) \notag \\
&\leq ...\notag \\
&\leq \left(\dfrac{( 1 + \Delta t 6\bolds{o}lds{a}r{C}/\epsilon^2)}{1 - \Delta t} \right)^{k+1} e^0 + \dfrac{\Delta t }{1 - \Delta t} \tau \sum_{j=0}^k \left(\dfrac{( 1 + \Delta t 6\bolds{o}lds{a}r{C}/\epsilon^2)}{1 - \Delta t} \right)^{k-j}. \label{eq:ek estimnate}
\end{align}
Since $1/(1-\Delta t)= 1 + \Delta t + \Delta t^2 + O(\Delta t^3)$, we have
\bolds{o}lds{e}gin{align*}
\dfrac{( 1 + \Delta t 6\bolds{o}lds{a}r{C}/\epsilon^2)}{1 - \Delta t} &\leq 1 + (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) \Delta t + (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) \Delta t^2 + O(\bolds{o}lds{a}r{C}/\epsilon^2) O(\Delta t^3).
\end{align*}
Now, for any $k \leq T /\Delta t$, using the identity $(1+ a)^k \leq \exp [ka]$, valid for $a\geq 0$, we have
\bolds{o}lds{e}gin{align*}
&\left( \dfrac{ 1 + \Delta t 6\bolds{o}lds{a}r{C}/\epsilon^2}{1 - \Delta t} \right)^k \\
&\leq \exp \left[k (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) \Delta t + k(1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) \Delta t^2 + k O(\bolds{o}lds{a}r{C}/\epsilon^2) O(\Delta t^3) \right] \\
&\leq \exp \left[T (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) + T (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) \Delta t + O(T\bolds{o}lds{a}r{C}/\epsilon^2) O(\Delta t^2) \right].
\end{align*}
We write the above bound more compactly as
\bolds{o}lds{e}gin{align*}
&\left( \dfrac{ 1 + \Delta t 6\bolds{o}lds{a}r{C}/\epsilon^2}{1 - \Delta t} \right)^k \\
&\leq \exp \left[T (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) (1 + \Delta t + O(\Delta t^2)) \right].
\end{align*}
We use the above estimate in \autoref{eq:ek estimnate} to get the following inequality for $e^{k+1}$
\bolds{o}lds{e}gin{align*}
e^{k+1} &\leq \exp \left[T (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) (1 + \Delta t + O(\Delta t^2)) \right] \left( e^0 + (k+1) \tau \Delta t/(1- \Delta t) \right) \notag \\
&\leq \exp \left[T (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) (1 + \Delta t + O(\Delta t^2)) \right] \left( e^0 + T\tau (1 + \Delta t + O(\Delta t^2)) \right),
\end{align*}
where we used the fact that $1/(1-\Delta t) = 1+ \Delta t + O(\Delta t^2)$.
Assuming the error in initial data is zero, i.e. $e^0= 0$, and noting the estimate of $\tau$ in \autoref{eq:estimate tau}, we have
\bolds{o}lds{e}gin{align*}
&\sup_k e^k \leq \exp \left[T (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) \right] T \tau
\end{align*}
and we conclude to leading order that
\bolds{o}lds{e}gin{align}\label{eq: fund est}
\sup_k e^k \leq \exp \left[T (1 + 6\bolds{o}lds{a}r{C}/\epsilon^2) \right] T \left[ C_t \Delta t + (C_s/\epsilon^2) h^\gamma \right].
\end{align}
Here the constants $C_t$ and $C_s$ are given by \autoref{eq:const Ct} and \autoref{eq:const Cs}.
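The discrete Gr\"onwall-type recursion behind this estimate can also be iterated directly. In the Python sketch below, $\alpha$ plays the role of $6\bolds{o}lds{a}r{C}/\epsilon^2$ and $\tau$ is the consistency error; the numerical values are arbitrary illustrations.

```python
import math

def iterate_error(alpha, tau, dt, T):
    """Iterates e^{k+1} = ((1 + alpha*dt)/(1 - dt)) e^k + dt*tau/(1 - dt)
    with e^0 = 0 and returns sup_k e^k."""
    e, sup_e = 0.0, 0.0
    for _ in range(int(round(T / dt))):
        e = (1.0 + alpha * dt) / (1.0 - dt) * e + dt * tau / (1.0 - dt)
        sup_e = max(sup_e, e)
    return sup_e

alpha, tau, T, dt = 3.0, 1e-3, 1.0, 1e-3
sup_e = iterate_error(alpha, tau, dt, T)
bound = math.exp(T * (1.0 + alpha)) * T * tau   # leading-order closed-form bound
```

The iterated supremum stays below the closed-form bound $\exp[T(1+\alpha)]\, T\tau$.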
This shows the stability of the numerical scheme. We now address the general one step time discretization.
\subsection{Extension to the implicit schemes}\label{ss:implicit}
Let $\theta \in [0, 1]$ be the parameter which controls the contribution of the implicit and explicit scheme. Let $(\bolds{u}hat^k, \bolds{v}hat^k)$ be the solution of \autoref{eq:finite diff eqn u general} and \autoref{eq:finite diff eqn v general} for given fixed $\theta$.
The forward Euler scheme, backward Euler scheme, and Crank-Nicolson scheme correspond to the choices $\theta = 0$, $\theta = 1$, and $\theta = 1/2$, respectively.
To simplify the equations, we define $\Theta$ acting on a discrete set $\{f^k\}_k$ by $\Theta f^k := (1-\theta) f^k + \theta f^{k+1}$. By $\Theta \norm{f^k}$ we mean $(1-\theta) \norm{f^k} + \theta \norm{f^{k+1}}$. Following the same steps as in the forward Euler case, we write down the equations for $\bolds{o}lds{e}^k_i(u) := \bolds{u}hat^k_i - \bolds{u}tilde^k_i$ and $\bolds{o}lds{e}^k_i(v) := \bolds{v}hat^k_i - \bolds{v}tilde^k_i$ as follows
\begin{align}
\bolds{e}^{k+1}_i(u) &= \bolds{e}^k_i(u) + \Delta t \Theta \bolds{e}^k_i(v) + \Delta t \Theta \tau^k_i(u) \label{eq:finite diff error eqn u general} \\
\bolds{e}^{k+1}_i(v) &= \bolds{e}^k_i(v) + \Delta t \Theta \sigma^k_i(u) + \Delta t \Theta \sigma^k_i(v) + \Delta t \Theta \tau^k_i(v) \notag \\
&\quad + \Delta t (1-\theta) \left(-\boldsymbol{\nabla} PD^{\epsilon} (\hat{\bolds{u}}^k)(\bolds{x}_i) + \boldsymbol{\nabla} PD^{\epsilon}(\tilde{\bolds{u}}^k)(\bolds{x}_i) \right) \notag \\
&\quad + \Delta t \theta \left(-\boldsymbol{\nabla} PD^{\epsilon} (\hat{\bolds{u}}^{k+1})(\bolds{x}_i) + \boldsymbol{\nabla} PD^{\epsilon}(\tilde{\bolds{u}}^{k+1})(\bolds{x}_i) \right), \label{eq:finite diff error eqn v general}
\end{align}
where $\tau^k_i(v), \sigma^k_i(u), \sigma^k_i(v)$ are defined in \autoref{eq:consistency error in v}, \autoref{eq:consistency error in u spatial}, and \autoref{eq:consistency error in v spatial} respectively. In this section, $\tau^k(u)$ is defined as follows:
\begin{align*}
\tau^k_i(u) &:= \dparder{\tilde{\bolds{u}}^k_i}{t} - \dfrac{\tilde{\bolds{u}}^{k+1}_i - \tilde{\bolds{u}}^k_i}{\Delta t}.
\end{align*}
We take the $L^2$ norm of $\bolds{e}^k(u)(\bolds{x})$ and $\bolds{e}^k(v)(\bolds{x})$, and for brevity we denote the $L^2$ norm by $\norm{\cdot}$. Recall that $\bolds{e}^k(u)$ and $\bolds{e}^k(v)$ are the piecewise constant extensions of $\{ \bolds{e}^k_i(u)\}_i$ and $\{ \bolds{e}^k_i(v)\}_i$, and we get
\begin{align}
\norm{\bolds{e}^{k+1}(u)} &\leq \norm{\bolds{e}^k(u)} + \Delta t \Theta \norm{\bolds{e}^k(v)} + \Delta t \Theta \norm{\tau^k(u)} \label{eq:error norm be u} \\
\norm{\bolds{e}^{k+1}(v)} &\leq \norm{\bolds{e}^k(v)} + \Delta t \left( \Theta \norm{\sigma^k(u)} + \Theta \norm{\sigma^k(v)} + \Theta \norm{\tau^k(v)} \right) \notag \\
&\quad +\Delta t (1-\theta)\left( \sum_{i, \bolds{x}_i \in D} h^d \abs{-\boldsymbol{\nabla} PD^{\epsilon} (\hat{\bolds{u}}^k)(\bolds{x}_i) + \boldsymbol{\nabla} PD^{\epsilon}(\tilde{\bolds{u}}^k)(\bolds{x}_i)}^2 \right)^{1/2} \notag \\
&\quad + \Delta t \theta \left( \sum_{i, \bolds{x}_i \in D} h^d \abs{-\boldsymbol{\nabla} PD^{\epsilon} (\hat{\bolds{u}}^{k+1})(\bolds{x}_i) + \boldsymbol{\nabla} PD^{\epsilon}(\tilde{\bolds{u}}^{k+1})(\bolds{x}_i)}^2 \right)^{1/2}. \label{eq:error norm be v}
\end{align}
From our consistency analysis, we have
\begin{align}
\tau &= \sup_k \left(\Ltwonorm{\tau^k(u)}{D;\mathbb{R}^{d}} + \Ltwonorm{\tau^k(v)}{D;\mathbb{R}^{d}} \right. \notag \\
&\quad \left. + \Ltwonorm{\sigma^k(u)}{D;\mathbb{R}^{d}} + \Ltwonorm{\sigma^k(v)}{D;\mathbb{R}^{d}}\right) \notag \\
&\leq C_t \Delta t + C_s\dfrac{h^\gamma}{\epsilon^2}, \label{eq:estimate tau 1}
\end{align}
where $C_t$ and $C_s$ are given by \autoref{eq:const Ct} and \autoref{eq:const Cs}. Since $0\leq 1-\theta \leq 1$ and $0\leq \theta \leq 1$ for all $\theta \in [0,1]$, we have
\begin{align*}
\Theta \left(\norm{\tau^k(u)} + \norm{\tau^k(v)} + \norm{\sigma^k(u)} +\norm{\sigma^k(v)} \right) &\leq 2 \tau.
\end{align*}
\textbf{Crank-Nicolson scheme: }If $\theta = 1/2$, and if $\bolds{u},\bolds{v} \in \Cthreeintime{[0,T]}{\Cholder{D;\mathbb{R}^{d}}}$, then we can show that
\begin{align*}
\dfrac{1}{2} \tau^k_i(u) + \dfrac{1}{2} \tau^{k+1}_i(u) = \dfrac{(\Delta t)^2}{12} \dfrac{\partial^3 \tilde{\bolds{u}}^{k+1/2}_i}{\partial t^3} + O((\Delta t)^3).
\end{align*}
A similar result holds for $\frac{1}{2} \tau^k_i(v) + \frac{1}{2}\tau^{k+1}_i(v)$. Therefore, the consistency error will be bounded by $ \bar{C}_t(\Delta t)^2 + C_s h^\gamma/\epsilon^2$ with
\begin{align}\label{eq: timebound}
\bar{C}_t := \frac{1}{12} \sup_{t} \left\Vert \dfrac{\partial^3 \bolds{u}(t)}{\partial t^3} \right\Vert_{L^2(D;\mathbb{R}^d)} + \frac{1}{12} \sup_{t} \left\Vert \dfrac{\partial^4 \bolds{u}(t)}{\partial t^4} \right\Vert_{L^2(D;\mathbb{R}^d)}
\end{align}
and $C_s$ is given by \autoref{eq:const Cs}.
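The $(\Delta t)^2/12$ leading coefficient of the averaged Crank-Nicolson truncation error can be checked numerically on a scalar example. Below is a minimal sketch, reading the averaged error as $\frac{1}{2}\left(u'(t_k)+u'(t_{k+1})\right) - (u(t_{k+1})-u(t_k))/\Delta t$ and using the hypothetical choice $u(t)=\sin t$ (an illustration, not a solution from the text):

```python
import math

# Hypothetical scalar profile u(t) = sin(t); t and dt are illustrative choices.
dt = 1e-2
t = 0.7
u, du = math.sin, math.cos
d3u = lambda s: -math.cos(s)   # u'''(s) for u = sin

# Averaged truncation error of the theta = 1/2 time discretization:
# (1/2)(u'(t_k) + u'(t_{k+1})) - (u(t_{k+1}) - u(t_k)) / dt
tau = 0.5 * (du(t) + du(t + dt)) - (u(t + dt) - u(t)) / dt

# Leading term from the Taylor expansion: (dt^2 / 12) * u'''(t_{k+1/2})
lead = dt * dt / 12.0 * d3u(t + dt / 2.0)

# The remainder is higher order in dt, consistent with the stated expansion
assert abs(tau - lead) < dt**4
```

The remainder after subtracting the leading term is observed to be of size $O(\Delta t^4)$ here, consistent with the $O((\Delta t)^3)$ statement above.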
We now estimate \autoref{eq:error norm be v}. Similar to \autoref{eq:estimate diff in per force stability}, we have
\begin{align}
\left( \sum_{i, \bolds{x}_i \in D} h^d \abs{-\boldsymbol{\nabla} PD^{\epsilon} (\hat{\bolds{u}}^k)(\bolds{x}_i) + \boldsymbol{\nabla} PD^{\epsilon}(\tilde{\bolds{u}}^k)(\bolds{x}_i)}^2 \right)^{1/2} &\leq \dfrac{\bar{C}}{\epsilon^2} \norm{\bolds{e}^k(u)}, \label{eq:estimate diff in per force stability 1} \\
\left( \sum_{i, \bolds{x}_i \in D} h^d \abs{-\boldsymbol{\nabla} PD^{\epsilon} (\hat{\bolds{u}}^{k+1})(\bolds{x}_i) + \boldsymbol{\nabla} PD^{\epsilon}(\tilde{\bolds{u}}^{k+1})(\bolds{x}_i)}^2 \right)^{1/2} &\leq \dfrac{\bar{C}}{\epsilon^2} \norm{\bolds{e}^{k+1}(u)}, \label{eq:estimate diff in per force stability 2}
\end{align}
where $\bar{C}$ is the constant given by \autoref{eq: bar C}.
Let $e^k := \norm{\bolds{e}^k(u)} + \norm{\bolds{e}^k(v)}$. Adding \autoref{eq:error norm be u} and \autoref{eq:error norm be v} and noting \autoref{eq:estimate diff in per force stability 1}, \autoref{eq:estimate diff in per force stability 2}, and \autoref{eq:estimate tau 1}, we get
\begin{align*}
e^{k+1} &\leq \left(1+ \Delta t (1- \theta) \dfrac{\bar{C}}{\epsilon^2} \right) e^k + \Delta t \theta \dfrac{\bar{C}}{\epsilon^2} e^{k+1} + 2\tau \Delta t,
\end{align*}
where we assumed $\bar{C}/\epsilon^2 \geq 1$. We further simplify the equation and write
\begin{align}\label{eq:error general time stepping 1}
e^{k+1} &\leq \dfrac{1+ \Delta t (1- \theta) \bar{C}/\epsilon^2}{1 - \Delta t \theta \bar{C}/\epsilon^2} e^k+ \dfrac{2 }{1 - \Delta t \theta \bar{C}/\epsilon^2} \tau \Delta t,
\end{align}
where we have assumed that $1 - \Delta t \theta \bar{C}/\epsilon^2 > 0 $, i.e.
\begin{align}\label{eq:assumption delta t}
\Delta t < \dfrac{\epsilon^2}{\bar{C}}=K\epsilon^2.
\end{align}
Thus, for fixed $\epsilon>0$, the error calculation in this section applies when the time step $\Delta t$ satisfies \autoref{eq:assumption delta t}. We now define $a$ and $b$ by
\begin{align*}
a &:= \dfrac{1+ \Delta t (1- \theta) \bar{C}/\epsilon^2}{1 - \Delta t \theta \bar{C}/\epsilon^2} \\
b &:= \dfrac{1}{1 - \Delta t \theta \bar{C}/\epsilon^2}.
\end{align*}
We use the fact that, for $\Delta t$ small, $(1-\alpha \Delta t)^{-1} = 1 + \alpha \Delta t + \alpha^2 (\Delta t)^2 + O((\Delta t)^3)$, to get
\begin{align*}
b &= 1 + \Delta t \theta \bar{C}/\epsilon^2 + O\left( \left(\Delta t/\epsilon^2 \right)^2 \right) = 1 + O\left( \Delta t/\epsilon^2 \right).
\end{align*}
Now since $\Delta t < \epsilon^2/\bar{C}$, we have
\begin{align}\label{eq:bound on b}
b = O(1).
\end{align}
The estimate for $a$ is given by
\begin{align*}a &\leq (1 + \Delta t (1-\theta) \bar{C}/\epsilon^2) (1 + \Delta t \theta \bar{C}/\epsilon^2 + O((\Delta t/\epsilon^2)^2)) \notag \\
&= 1 + \Delta t (\theta + (1-\theta) ) \bar{C}/\epsilon^2 + O((\Delta t/\epsilon^2)^2) \notag \\
&= 1 + \Delta t \bar{C}/\epsilon^2 + O((\Delta t/\epsilon^2)^2).
\end{align*}
Therefore, for any $k \leq T /\Delta t$, we have
\begin{align*}
a^k &\leq \exp \left[ k \Delta t \dfrac{\bar{C}}{\epsilon^2} + k O\left( \left( \dfrac{\Delta t}{\epsilon^2} \right)^2 \right) \right] \notag \\
&\leq \exp \left[T\bar{C}/\epsilon^2 + O\left(\Delta t \left( \dfrac{1}{\epsilon^2} \right)^2 \right) \right] \notag \\
&\leq \exp \left[T \bar{C}/\epsilon^2+O\left(1/\epsilon^2\right)\right],
\end{align*}
where we simplified the bound by incorporating \autoref{eq:assumption delta t}. Then, from \autoref{eq:error general time stepping 1}, we get
\begin{align*}
e^{k+1} &\leq a^{k+1} e^0 + 2\tau \left( \Delta t \sum_{j=0}^k a^{j} \right) b.
\end{align*}
From the estimates on $a^k$, we have
\begin{align}\label{eq:bound on sum ak}
\Delta t \sum_{j=0}^k a^{j} &\leq T \exp [T \bar{C}/\epsilon^2+O(1/\epsilon^2)].
\end{align}
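The two ingredients of this bound, the exponential estimate on $a^k$ and the geometric sum estimate \autoref{eq:bound on sum ak}, can be sanity-checked numerically. A minimal sketch, with illustrative values of $\bar{C}$, $\epsilon$, $T$, $\Delta t$ (assumptions for this check, not values from the text):

```python
import math

# Illustrative values (assumptions for this check, not taken from the text)
Cbar, eps, T, dt = 2.0, 0.5, 1.0, 1e-3
alpha = Cbar / eps**2          # the growth rate bar{C}/eps^2
k = int(T / dt)                # number of steps up to time T

a = 1.0 + dt * alpha           # per-step amplification factor (leading order)

# (1 + x)^k <= exp(k*x) for x >= 0, hence a^k <= exp(T * Cbar / eps^2)
assert a**k <= math.exp(T * alpha)

# geometric sum: dt * sum_j a^j <= dt*(k+1)*a^k <= (T + dt) * exp(T*alpha)
s = dt * sum(a**j for j in range(k + 1))
assert s <= (T + dt) * math.exp(T * alpha)
```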
Combining \autoref{eq:bound on b} and \autoref{eq:bound on sum ak}, we get
\begin{align*}
e^{k} &\leq \exp [ T \bar{C}/\epsilon^2 +O(1/\epsilon^2)] \left( e^0 + 2 T \tau O(1) \right).
\end{align*}
Since $\tau = O(C_t\Delta t + C_sh^\gamma/\epsilon^2)$, we conclude that, for any fixed $\epsilon > 0$,
\begin{align*}
\sup_{k} e^{k} \leq O(C_t\Delta t + C_sh^\gamma/\epsilon^2),
\end{align*}
where we have assumed $e^0 = 0$. Similarly, for $\theta = 1/2$, we have $\sup_{k} e^{k+1} \leq O(\bar{C}_t(\Delta t)^2 +C_s h^\gamma /\epsilon^2)$. Therefore, the scheme is stable and consistent for any $\theta \in [0,1]$.
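The claimed temporal orders of the $\theta$-scheme can be observed on a toy problem. The sketch below applies the $\theta$-scheme to the scalar system $u' = v$, $v' = -u$ (a stand-in chosen for illustration, not the peridynamic system) and estimates the convergence rate; Crank-Nicolson ($\theta = 1/2$) is second order in $\Delta t$ while backward Euler ($\theta = 1$) is first order:

```python
import math

def theta_step(u, v, dt, theta):
    # One step of the theta-scheme for u' = v, v' = -u.
    # Solve (I - a*J) x_{k+1} = (I + b*J) x_k with J = [[0, 1], [-1, 0]],
    # a = theta*dt, b = (1 - theta)*dt; the 2x2 system is solved exactly.
    a, b = theta * dt, (1 - theta) * dt
    ru, rv = u + b * v, v - b * u
    det = 1 + a * a
    return (ru + a * rv) / det, (rv - a * ru) / det

def max_error(theta, n):
    # Integrate to T = 1 with n steps; exact solution is (cos t, -sin t).
    dt, u, v = 1.0 / n, 1.0, 0.0
    err = 0.0
    for k in range(n):
        u, v = theta_step(u, v, dt, theta)
        t = (k + 1) * dt
        err = max(err, abs(u - math.cos(t)) + abs(v + math.sin(t)))
    return err

# Observed convergence rates from halving dt
rate_cn = math.log2(max_error(0.5, 100) / max_error(0.5, 200))
rate_be = math.log2(max_error(1.0, 100) / max_error(1.0, 200))
assert rate_cn > 1.8            # second order for Crank-Nicolson
assert 0.8 < rate_be < 1.2      # first order for backward Euler
```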
\subsection{Stability of the energy for the semi-discrete approximation}
\label{semidiscrete}
We first spatially discretize the peridynamics equation \autoref{eq:per equation}. Let $\{\hat{\bolds{u}}_i(t)\}_{i,\bolds{x}_i\in D}$ denote the semi-discrete approximate solution which satisfies the following, for all $t\in [0,T]$ and $i$ such that $\bolds{x}_i\in D$,
\begin{align}\label{eq:fd semi discrete}
\ddot{\hat{\bolds{u}}}_i(t) = -\boldsymbol{\nabla} PD^\epsilon(\hat{\bolds{u}}(t))(\bolds{x}_i) + \bolds{b}_i(t),
\end{align}
where $\hat{\bolds{u}}(t)$ is the piecewise constant extension of the discrete set $\{\hat{\bolds{u}}_i(t) \}_i$, defined as
\begin{align}\label{eq:def piecewise ext}
\hat{\bolds{u}}(t,\bolds{x}) &:= \sum_{i, \bolds{x}_i \in D} \hat{\bolds{u}}_i(t) \chi_{U_i}(\bolds{x}).
\end{align}
The scheme is complemented with the discretized initial conditions $\hat{\bolds{u}}_i(0) = \bolds{u}_0(\bolds{x}_i)$ and $\hat{\bolds{v}}_i(0) =\bolds{v}_0(\bolds{x}_i)$. We apply the boundary condition by setting $\hat{\bolds{u}}_i(t) = \mathbf{0}$ for all $t$ and for all $\bolds{x}_i \notin D$.
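The piecewise constant extension \autoref{eq:def piecewise ext}, and the identity it induces between the $L^2$ norm and the weighted $\ell^2$ norm of the nodal values, can be illustrated in one dimension. A minimal sketch (the domain, grid, and nodal values are hypothetical choices):

```python
# 1-d sketch; the domain D = [0, 1), mesh size h, and nodal values are
# hypothetical choices for illustration.
h = 0.25
nodes = [0.0, 0.25, 0.5, 0.75]        # x_i, with cells U_i = [x_i, x_i + h)
vals = [1.0, -2.0, 0.5, 3.0]          # discrete values u_i

def u_hat(x):
    # piecewise constant extension: u_hat(x) = sum_i u_i * chi_{U_i}(x)
    i = min(int(x / h), len(vals) - 1)
    return vals[i]

# For a piecewise constant function the L^2 norm reduces to a weighted
# l^2 norm of the nodal values: ||u_hat||^2 = sum_i h^d |u_i|^2 (d = 1 here).
l2_sq = sum(h * v * v for v in vals)

# midpoint quadrature is exact on piecewise constants
quad = sum(h * u_hat(x0 + h / 2) ** 2 for x0 in nodes)
assert abs(l2_sq - quad) < 1e-12
```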
We have the following stability result for the semi-discrete evolution.
{\vskip 2mm}
\begin{theorem}
\textbf{Energy stability of the semi-discrete approximation}\\
Let $\{\hat{\bolds{u}}_i(t) \}_i$ satisfy \autoref{eq:fd semi discrete} and let $\hat{\bolds{u}}(t)$ be its piecewise constant extension. Similarly, let $\hat{\bolds{b}}(t,\bolds{x})$ denote the piecewise constant extension of $\{ \bolds{b}(t,\bolds{x}_i)\}_{i,\bolds{x}_i\in D}$. Then the peridynamic energy $\mathcal{E}^\epsilon$ as defined in \autoref{eq:def energy} satisfies, $\forall t \in [0,T]$,
\begin{align}\label{eq:inequal energy}
\mathcal{E}^\epsilon(\hat{\bolds{u}})(t) &\leq \left( \sqrt{\mathcal{E}^\epsilon(\hat{\bolds{u}})(0)} + \dfrac{T C}{\epsilon^{3/2}} + \int_0^T ||\hat{\bolds{b}}(s)||_{L^2(D;\mathbb{R}^d)} ds \right)^2 .
\end{align}
The constant $C$, defined in \autoref{eq:def const stab semi fd}, is independent of $\epsilon$ and $h$.
\end{theorem}
{\vskip 2mm}
\begin{proof}
We multiply \autoref{eq:fd semi discrete} by $\chi_{U_i}(\bolds{x})$, sum over $i$, and use the definition of the piecewise constant extension in \autoref{eq:def piecewise ext} to get
\begin{align*}
\ddot{\hat{\bolds{u}}}(t,\bolds{x}) &= -\boldsymbol{\nabla} \hat{PD^\epsilon}(\hat{\bolds{u}}(t))(\bolds{x}) + \hat{\bolds{b}}(t,\bolds{x}) \notag \\
&= -\boldsymbol{\nabla} PD^\epsilon(\hat{\bolds{u}}(t))(\bolds{x}) + \hat{\bolds{b}}(t,\bolds{x}) \notag \\
&\quad + (-\boldsymbol{\nabla} \hat{PD^\epsilon}(\hat{\bolds{u}}(t))(\bolds{x}) + \boldsymbol{\nabla} PD^\epsilon(\hat{\bolds{u}}(t))(\bolds{x}))
\end{align*}
where $-\boldsymbol{\nabla} \hat{PD^\epsilon}(\hat{\bolds{u}}(t))(\bolds{x})$ and $\hat{\bolds{b}}(t,\bolds{x})$ are given by
\begin{align*}
-\boldsymbol{\nabla} \hat{PD^\epsilon}(\hat{\bolds{u}}(t))(\bolds{x}) &= \sum_{i,\bolds{x}_i \in D} (-\boldsymbol{\nabla} PD^\epsilon(\hat{\bolds{u}}(t))(\bolds{x}_i)) \chi_{U_i}(\bolds{x}) \notag \\
\hat{\bolds{b}}(t,\bolds{x}) &= \sum_{i, \bolds{x}_i\in D} \bolds{b}(t,\bolds{x}_i) \chi_{U_i}(\bolds{x}).
\end{align*}
We define $\sigma$ as follows:
\begin{align}\label{eq:def sigma fd semi}
\sigma(t,\bolds{x}) := -\boldsymbol{\nabla} \hat{PD^\epsilon}(\hat{\bolds{u}}(t))(\bolds{x}) + \boldsymbol{\nabla} PD^\epsilon(\hat{\bolds{u}}(t))(\bolds{x}).
\end{align}
We use the following result, which we will establish at the end of the proof:
\begin{align}\label{eq:bd on sigma fd semi}
||\sigma(t)||_{L^2(D;\mathbb{R}^d)} \leq \dfrac{C}{\epsilon^{3/2}}.
\end{align}
We then have
\begin{align}\label{eq:fd semi discrete 2}
\ddot{\hat{\bolds{u}}}(t,\bolds{x}) &= -\boldsymbol{\nabla} PD^\epsilon(\hat{\bolds{u}}(t))(\bolds{x}) + \hat{\bolds{b}}(t,\bolds{x}) + \sigma(t,\bolds{x}).
\end{align}
Multiplying the above by $\dot{\hat{\bolds{u}}}(t)$ and integrating over $D$, we get
\begin{align*}
(\ddot{\hat{\bolds{u}}}(t),\dot{\hat{\bolds{u}}}(t)) &= (-\boldsymbol{\nabla} PD^\epsilon(\hat{\bolds{u}}(t)), \dot{\hat{\bolds{u}}}(t)) \notag \\
&\quad + (\hat{\bolds{b}}(t), \dot{\hat{\bolds{u}}}(t)) + (\sigma(t),\dot{\hat{\bolds{u}}}(t)).
\end{align*}
Consider the energy $\mathcal{E}^\epsilon(\hat{\bolds{u}})(t)$ given by \autoref{eq:def energy} and note the identity \autoref{eq:energy relat}, to obtain
\begin{align*}
\dfrac{d}{dt} \mathcal{E}^\epsilon(\hat{\bolds{u}})(t) &= (\hat{\bolds{b}}(t), \dot{\hat{\bolds{u}}}(t)) + (\sigma(t),\dot{\hat{\bolds{u}}}(t)) \notag \\
&\leq \left( ||\hat{\bolds{b}}(t)||_{L^2(D;\mathbb{R}^d)} + ||\sigma(t)||_{L^2(D;\mathbb{R}^d)} \right) ||\dot{\hat{\bolds{u}}}(t)||_{L^2(D;\mathbb{R}^d)},
\end{align*}
where we used H\"older's inequality in the last step. Since $PD^\epsilon(\bolds{u})$ is positive for any $\bolds{u}$, we have
\begin{align*}
||\dot{\hat{\bolds{u}}}(t)||_{L^2(D;\mathbb{R}^d)} &\leq 2 \sqrt{\dfrac{1}{2}||\dot{\hat{\bolds{u}}}(t)||_{L^2(D;\mathbb{R}^d)}^2 + PD^\epsilon(\hat{\bolds{u}}(t)) } = 2 \sqrt{\mathcal{E}^\epsilon(\hat{\bolds{u}})(t)}.
\end{align*}
Using the above, we get
\begin{align*}
\dfrac{1}{2}\dfrac{d}{dt} \mathcal{E}^\epsilon(\hat{\bolds{u}})(t) &\leq \left( ||\hat{\bolds{b}}(t)||_{L^2(D;\mathbb{R}^d)} + ||\sigma(t)||_{L^2(D;\mathbb{R}^d)} \right) \sqrt{\mathcal{E}^\epsilon(\hat{\bolds{u}})(t)}.
\end{align*}
Let $\delta > 0$ be an arbitrary but fixed real number and let $A(t) = \delta + \mathcal{E}^\epsilon(\hat{\bolds{u}})(t)$. Then
\begin{align*}
\dfrac{1}{2}\dfrac{d}{dt} A(t) &\leq \left( ||\hat{\bolds{b}}(t)||_{L^2(D;\mathbb{R}^d)} + ||\sigma(t)||_{L^2(D;\mathbb{R}^d)} \right) \sqrt{A(t)}.
\end{align*}
Using the fact that $\frac{1}{\sqrt{A(t)}} \frac{d}{dt}A(t) = 2 \frac{d}{dt} \sqrt{A(t)}$, we have
\begin{align*}
\sqrt{A(t)} &\leq \sqrt{A(0)} + \int_0^t \left( ||\hat{\bolds{b}}(s)||_{L^2(D;\mathbb{R}^d)} + ||\sigma(s)||_{L^2(D;\mathbb{R}^d)} \right) ds \\
&\leq \sqrt{A(0)} + \dfrac{T C}{\epsilon^{3/2}} + \int_0^T ||\hat{\bolds{b}}(s)||_{L^2(D;\mathbb{R}^d)} ds,
\end{align*}
where we used the bound on $||\sigma(s)||_{L^2(D;\mathbb{R}^d)}$ from \autoref{eq:bd on sigma fd semi}. Noting that $\delta > 0$ is arbitrary, we send it to zero to get
\begin{align*}
\sqrt{\mathcal{E}^\epsilon(\hat{\bolds{u}})(t)} &\leq \sqrt{\mathcal{E}^\epsilon(\hat{\bolds{u}})(0)} + \dfrac{T C}{\epsilon^{3/2}} + \int_0^T ||\hat{\bolds{b}}(s)||_{L^2(D;\mathbb{R}^d)} ds,
\end{align*}
and \autoref{eq:inequal energy} follows by squaring the above inequality.
It remains to show \autoref{eq:bd on sigma fd semi}. To simplify the calculations, we use the following notation: let $\boldsymbol{\xi} \in H_1(\mathbf{0})$ and let
\begin{align*}
& s_{\boldsymbol{\xi}} = \epsilon |\boldsymbol{\xi}|, \quad e_{\boldsymbol{\xi}} = \dfrac{\boldsymbol{\xi}}{|\boldsymbol{\xi}|}, \quad \bar{\omega}(\bolds{x}) = \omega(\bolds{x}) \omega(\bolds{x}+\epsilon\boldsymbol{\xi}), \notag \\
& S_{\boldsymbol{\xi}}(\bolds{x}) = \dfrac{\hat{\bolds{u}}(t,\bolds{x}+\epsilon \boldsymbol{\xi}) - \hat{\bolds{u}}(t,\bolds{x})}{s_{\boldsymbol{\xi}}} \cdot e_{\boldsymbol{\xi}}.
\end{align*}
With the above notation and using the expression for $-\boldsymbol{\nabla} PD^{\epsilon}$ from \autoref{eq:peri force use}, we have, for $\bolds{x}\in U_i$,
\begin{align}
&|\sigma(t,\bolds{x})| = \left\vert -\boldsymbol{\nabla} PD^\epsilon(\hat{\bolds{u}}(t))(\bolds{x}_i) + \boldsymbol{\nabla} PD^\epsilon(\hat{\bolds{u}}(t))(\bolds{x}) \right\vert \notag \\
&= \left\vert \dfrac{2}{\epsilon \omega_d} \int_{H_1(\mathbf{0})} \dfrac{J(|\boldsymbol{\xi}|)}{\sqrt{s_{\boldsymbol{\xi}}}} \left(\bar{\omega}(\bolds{x}_i) F'_1(\sqrt{s_{\boldsymbol{\xi}}} S_{\boldsymbol{\xi}}(\bolds{x}_i)) - \bar{\omega}(\bolds{x}) F'_1(\sqrt{s_{\boldsymbol{\xi}}} S_{\boldsymbol{\xi}}(\bolds{x})) \right) e_{\boldsymbol{\xi}} d\boldsymbol{\xi} \right\vert \notag \\
&\leq \dfrac{2}{\epsilon \omega_d} \int_{H_1(\mathbf{0})} \dfrac{J(|\boldsymbol{\xi}|)}{\sqrt{s_{\boldsymbol{\xi}}}} \left\vert \bar{\omega}(\bolds{x}_i) F'_1(\sqrt{s_{\boldsymbol{\xi}}} S_{\boldsymbol{\xi}}(\bolds{x}_i)) - \bar{\omega}(\bolds{x}) F'_1(\sqrt{s_{\boldsymbol{\xi}}} S_{\boldsymbol{\xi}}(\bolds{x})) \right\vert d\boldsymbol{\xi} \notag \\
&\leq \dfrac{2}{\epsilon \omega_d} \int_{H_1(\mathbf{0})} \dfrac{J(|\boldsymbol{\xi}|)}{\sqrt{s_{\boldsymbol{\xi}}}} \left( \left\vert \bar{\omega}(\bolds{x}_i) F'_1(\sqrt{s_{\boldsymbol{\xi}}} S_{\boldsymbol{\xi}}(\bolds{x}_i)) \right\vert + \left\vert \bar{\omega}(\bolds{x}) F'_1(\sqrt{s_{\boldsymbol{\xi}}} S_{\boldsymbol{\xi}}(\bolds{x})) \right\vert \right) d\boldsymbol{\xi}.
\end{align}
Using the facts that $0\leq \omega(\bolds{x}) \leq 1$ and $|F_1'(r)|\leq C_1$, where $C_1 = \sup_r |F_1'(r)|$, we get
\begin{align*}
|\sigma(t,\bolds{x})| &\leq \dfrac{4 C_1 \bar{J}_{1/2}}{\epsilon^{3/2}},
\end{align*}
where $\bar{J}_{1/2} = (1/\omega_d)\int_{H_1(\mathbf{0})} J(|\boldsymbol{\xi}|) |\boldsymbol{\xi}|^{-1/2} d\boldsymbol{\xi}$.
Taking the $L^2$ norm of $\sigma(t,\bolds{x})$, we get
\begin{align*}
||\sigma(t)||_{L^2(D;\mathbb{R}^d)}^2 &= \sum_{i,\bolds{x}_i\in D} \int_{U_i} |\sigma(t,\bolds{x})|^2 d\bolds{x} \leq \left(\dfrac{4 C_1 \bar{J}_{1/2}}{\epsilon^{3/2}}\right)^2 \sum_{i,\bolds{x}_i\in D} \int_{U_i} d\bolds{x},
\end{align*}
thus
\begin{align*}
||\sigma(t)||_{L^2(D;\mathbb{R}^d)} \leq \dfrac{4 C_1 \bar{J}_{1/2}\sqrt{|D|}}{\epsilon^{3/2}} = \dfrac{C}{\epsilon^{3/2}},
\end{align*}
where
\begin{align}\label{eq:def const stab semi fd}
C := 4 C_1 \bar{J}_{1/2}\sqrt{|D|}.
\end{align}
This completes the proof.
\end{proof}
\subsection{Local instability under radial perturbations}
\label{ss:proof localstab}
We observe that, for both the explicit and implicit schemes treated in the previous sections, any increase in the local truncation error is controlled at each time step. From the proofs above (and the general approximation theory for ODEs), this control is adequate to establish convergence rates as $\Delta t \rightarrow 0$. We now comment on a source of error that can grow with the time step in regions where the strain is large and peridynamic bonds between material points begin to soften.
We examine the Jacobian matrix of the peridynamic system obtained by perturbing about a displacement field and seek to understand the stability of the perturbation. Suppose the solution is near the displacement field $\overline{\bolds{u}}(\bolds{x})$ and let $\bolds{s}(t,\bolds{x})=\bolds{u}(t,\bolds{x})-\overline{\bolds{u}}(\bolds{x})$ be the perturbation.
We write the associated strain as $S(\bolds{y},\bolds{x};\overline{\bolds{u}})$ and $S(\bolds{y},\bolds{x};\bolds{s})$.
Expanding the peridynamic force in Taylor series about $\overline{\bolds{u}}$ assuming $\bolds{s}$ is small gives
\begin{align*}
\partial_{tt} \bolds{s}(t,\bolds{x})&=\frac{2}{V_d}\left\{\int_{\mathcal{H}_\epsilon(\bolds{x})}\partial^2_{{S}}\mathcal{W}^\epsilon({S}(\bolds{y},\bolds{x};\overline{\bolds{u}}))S(\bolds{y},\bolds{x};\bolds{s})\frac{\bolds{y}-\bolds{x}}{|\bolds{y}-\bolds{x}|}d\bolds{y}\right\} \notag \\
&\quad -\boldsymbol{\nabla} PD^{\epsilon}(\overline{\bolds{u}})(\bolds{x}) +\bolds{b}(t,\bolds{x}) + O(|\bolds{s}|^2),
\end{align*}
where $\mathcal{W}^\epsilon(S, \bolds{y} - \bolds{x}) = W^\epsilon(S,\bolds{y} - \bolds{x})/|\bolds{y} - \bolds{x}|$ and $W^\epsilon$ is given by \autoref{eq:per pot}.
To recover a local stability formula in terms of a spectral radius, we consider local radial perturbations $\bolds{s}$ with spatially constant strain $S(\bolds{y},\bolds{x};\bolds{s})$ of the form $S(\bolds{y},\bolds{x};\bolds{s})=-\delta(t)\bolds{\mu}\cdot\bolds{e}$, where $\bolds{\mu}$ is in $\mathbb{R}^d$ and $\bolds{s}$ has radial variation about $\bolds{x}$ with $\bolds{s}(\bolds{y})=\delta(t)\bolds{\mu}(1-|\bolds{y}-\bolds{x}|)$. This delivers the local ODE
\begin{align*}
\delta''(t)\bolds{\mu}=A\delta(t)\bolds{\mu}+b,
\end{align*}
where the stability matrix $A$ is self-adjoint and given by
\begin{align}\label{eq:A}
A=-\frac{2}{V_d}\left\{\int_{\mathcal{H}_\epsilon(\bolds{x})}\partial^2_{{S}}\mathcal{W}^\epsilon({S}(\bolds{y},\bolds{x};\overline{\bolds{u}}))\frac{\bolds{y}-\bolds{x}}{|\bolds{y}-\bolds{x}|}\otimes\frac{\bolds{y}-\bolds{x}}{|\bolds{y}-\bolds{x}|} d\bolds{y}\right\},
\end{align}
and
\begin{align*}
b=-\boldsymbol{\nabla} PD^{\epsilon}(\overline{\bolds{u}})(\bolds{x}) +\bolds{b}(t,\bolds{x}) + O(|\bolds{s}|^2).
\end{align*}
A stability criterion for the perturbation is obtained by analyzing the linear system $\delta''(t)\bolds{\mu}=A\delta(t)\bolds{\mu}$. Writing it as a first-order system gives
\begin{align*}
\delta_1'(t)\bolds{\mu}&=\delta_2(t)\bolds{\mu} \notag \\
\delta_2'(t)\bolds{\mu}&=A\delta_1(t)\bolds{\mu}
\end{align*}
where $\bolds{\mu}$ is a vector in $\mathbb{R}^d$.
The eigenvalues of $A$ are real and denoted by $\lambda_i$, $i=1,\ldots,d$ and the associated eigenvectors are denoted by $\bolds{v}^i$.
Choosing $\bolds{\mu}=\bolds{v}^i$ gives
\begin{align*}
\delta_1'(t)\bolds{v}^i&=\delta_2(t)\bolds{v}^i\notag \\
\delta_2'(t)\bolds{v}^i&={\lambda_i}\delta_1(t){\bolds{v}^i}.
\end{align*}
Applying the Forward Euler method to this system gives the discrete iterative system
\begin{align*}
\delta_1^{k+1} &= \delta_1^k+\Delta t \delta_2^k \notag \\
\delta_2^{k+1} &= {\lambda_i}\Delta t \delta_1^k +\delta_2^k.
\end{align*}
The spectral radius of the matrix associated with this iteration is
\begin{align*}
\rho=\max_{i=1,\ldots,d}|1\pm \Delta t \sqrt{\lambda_i}|.
\end{align*}
The spectral radius is larger than $1$ for any nonzero $\lambda_i$: for $\lambda_i>0$ one of $|1 \pm \Delta t \sqrt{\lambda_i}|$ exceeds one, and for $\lambda_i<0$ the complex conjugate pair has modulus $\sqrt{1+(\Delta t)^2|\lambda_i|}>1$. We conclude local instability of the forward Euler scheme under radial perturbation.
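This can be verified directly from the $2\times 2$ forward Euler iteration matrix, whose rows are $(1, \Delta t)$ and $(\lambda_i \Delta t, 1)$. A minimal numeric sketch (the values of $\lambda$ and $\Delta t$ are illustrative assumptions):

```python
import math

def forward_euler_spectral_radius(lam, dt):
    # Iteration matrix M = [[1, dt], [lam*dt, 1]] for the system
    # delta1' = delta2, delta2' = lam * delta1.
    # Its eigenvalues are 1 +/- dt*sqrt(lam) (a complex pair when lam < 0).
    if lam >= 0:
        s = math.sqrt(lam)
        return max(abs(1 + dt * s), abs(1 - dt * s))
    # complex pair 1 +/- i*dt*sqrt(|lam|) has modulus sqrt(1 + dt^2*|lam|)
    return math.sqrt(1 + dt * dt * abs(lam))

# The spectral radius exceeds 1 for every nonzero eigenvalue of A,
# so the forward Euler iteration amplifies radial perturbations.
for lam in (-10.0, -0.5, 0.5, 10.0):
    assert forward_euler_spectral_radius(lam, dt=1e-2) > 1.0
```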
For the implicit scheme given by backward Euler we get the discrete iterative system
\begin{align*}
\delta_1^{k} &= \delta_1^{k+1}-\Delta t \delta_2^{k+1} \notag \\
\delta_2^{k} &= {-\lambda_i}\Delta t \delta_1^{k+1} +\delta_2^{k+1},
\end{align*}
and
\begin{equation*}
\left[\begin{array}{c}
\delta_1^{k+1} \\
\delta_2^{k+1}
\end{array}\right ]=
\left[\begin{array}{cc}
1 &-\Delta t\\
-\Delta t \lambda_i&1
\end{array}\right]^{-1}
\left[\begin{array}{c}
\delta_1^{k} \\
\delta_2^{k}
\end{array}\right].
\label{eq:baclwardmatrix}
\end{equation*}
The spectral radius for the iteration matrix is
\begin{align*}\label{eq:diccretestabbackwards}
\rho=\max_{i=1,\ldots,d}\dfrac{\left|1\pm \Delta t \sqrt{\lambda_i}\right|}{\left|1-\lambda_i (\Delta t)^2\right|}.
\end{align*}
If we suppose that the stability matrix $A$ is not negative definite, so that there is a $\lambda_j>0$, then the spectral radius is larger than one, i.e.,
\begin{align}\label{eq:discretestabbackwardsexplicit}
1<\frac{|1+ \sqrt{\lambda_j}\Delta t|}{|1-\lambda_j(\Delta t)^2|}\leq\rho.
\end{align}
Thus it follows from \autoref{eq:discretestabbackwardsexplicit}
that we can have local instability of the backward Euler scheme for radial perturbations.
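For $\lambda_j>0$ the spectral radius of the backward Euler iteration in fact simplifies to $1/(1-\Delta t\sqrt{\lambda_j})>1$, since $1-\lambda_j(\Delta t)^2 = (1-\Delta t\sqrt{\lambda_j})(1+\Delta t\sqrt{\lambda_j})$. A minimal numeric sketch (the values of $\lambda$ and $\Delta t$ are illustrative assumptions):

```python
import math

def backward_euler_spectral_radius(lam, dt):
    # Backward Euler iteration matrix is the inverse of
    # M = [[1, -dt], [-dt*lam, 1]], with det(M) = 1 - lam*dt^2.
    # We assume lam > 0 (an unstable direction of A) and dt*sqrt(lam) < 1.
    det = 1 - lam * dt * dt
    s = math.sqrt(lam)
    return max(abs(1 + dt * s), abs(1 - dt * s)) / abs(det)

# For lam > 0 the radius equals 1/(1 - dt*sqrt(lam)) > 1, so backward
# Euler is also locally unstable under radial perturbations.
for lam in (0.5, 4.0, 25.0):
    rho = backward_euler_spectral_radius(lam, dt=1e-2)
    assert rho > 1.0
    assert abs(rho - 1.0 / (1.0 - 1e-2 * math.sqrt(lam))) < 1e-12
```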
Inspection of \autoref{eq:A} shows that the sign of the eigenvalues of the matrix $A$ depends explicitly on the sign of $\partial^2_{{S}}\mathcal{W}^\epsilon({S}(\bolds{y},\bolds{x};\overline{\bolds{u}}))$. It is shown in \cite{CMPer-Lipton} that
\begin{align}
\partial^2_{{S}}\mathcal{W}^\epsilon({S}(\bolds{y},\bolds{x};\overline{\bolds{u}}))>0 \hbox{ for } |{S}(\bolds{y},\bolds{x};\overline{\bolds{u}})|<S_c, \label{eq:stabilityb}\\
\partial^2_{{S}}\mathcal{W}^\epsilon({S}(\bolds{y},\bolds{x};\overline{\bolds{u}}))<0 \hbox{ for } |{S}(\bolds{y},\bolds{x};\overline{\bolds{u}})|>S_c. \label{eq:nostability}
\end{align}
From the model we see that bonds are losing stiffness when $|{S}(\bolds{y},\bolds{x};\overline{\bolds{u}})|>S_c$, and the points for which $A$ is not negative definite correspond to points where \autoref{eq:nostability} holds for a preponderance of bonds inside the horizon.
We conclude by noting that both the explicit and implicit schemes treated in the previous sections have demonstrated convergence rates $O(C_t\Delta t + C_sh^\gamma/\epsilon^2)$ as $\Delta t \rightarrow 0$. However, the results of this section show that the error can grow with time for this type of radial perturbation.
\section{Lipschitz continuity in H\"older norm and existence of a solution}
\label{s:proof existence}
In this section, we prove \autoref{prop:lipschitz}, \autoref{thm:local existence}, and \autoref{thm:existence over finite time domain}.
\subsection{Proof of Proposition 1}
Let $I = [0,T]$ be the time domain and $X=\Cholderz{D;\mathbb{R}^{d}}\times\Cholderz{D;\mathbb{R}^{d}}$. Recall that $F^\epsilon(y,t) = (F^\epsilon_1(y,t), F^\epsilon_2(y,t))$, where $F^\epsilon_1(y,t) = y^2$ and $F^\epsilon_2(y,t) = -\boldsymbol{\nabla} PD^\epsilon(y^1) + \bolds{b}(t)$. Given $t\in I$ and $y=(y^1,y^2), z=(z^1, z^2)\in X$, we have
\begin{align}\label{eq:norm of F in X}
&\normX{F^\epsilon(y,t) - F^\epsilon(z,t)}{X} \notag \\
&\leq \Choldernorm{y^2 - z^2}{D;\mathbb{R}^{d}} + \Choldernorm{-\boldsymbol{\nabla} PD^{\epsilon}(y^1) + \boldsymbol{\nabla} PD^\epsilon(z^1)}{D;\mathbb{R}^{d}}.
\end{align}
Therefore, to prove \autoref{eq:lipschitz property of F}, we only need to analyze the second term in the above inequality. Let $\bolds{u},\bolds{v} \in \Cholderz{D;\mathbb{R}^{d}}$; then we have
\begin{align}\label{eq:lipschitz norm of per force}
&\Choldernorm{-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u}) - (-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{v}))}{D;\mathbb{R}^{d}} \notag \\
& = \sup_{\bolds{x}\in D} \abs{-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u})(\bolds{x}) - (-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{v})(\bolds{x}))} \notag \\
&\quad + \sup_{\substack{\bolds{x}\neq \bolds{y},\\
\bolds{x},\bolds{y} \in D}} \dfrac{\abs{(-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u}) + \boldsymbol{\nabla} PD^{\epsilon}(\bolds{v}))(\bolds{x}) - (-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u}) + \boldsymbol{\nabla} PD^{\epsilon}(\bolds{v}))(\bolds{y})}}{\abs{\bolds{x} - \bolds{y}}^{\gamma}}.
\end{align}
Note that the force $-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u})(\bolds{x})$ can be written as follows:
\begin{align*}
&- \boldsymbol{\nabla} PD^{\epsilon}(\bolds{u})(\bolds{x}) \notag \\
&= \dfrac{4}{\epsilon^{d+1} \omega_d} \int_{H_\epsilon(\bolds{x})} \omega(\bolds{x})\omega(\bolds{y})J(\dfrac{\abs{\bolds{y} - \bolds{x}}}{\epsilon}) f'(\abs{\bolds{y} - \bolds{x}} S(\bolds{y}, \bolds{x}; \bolds{u})^2) S(\bolds{y},\bolds{x};\bolds{u}) \dfrac{\bolds{y} - \bolds{x}}{\abs{\bolds{y} - \bolds{x}}} d\bolds{y} \notag \\
&= \dfrac{4}{\epsilon \omega_d} \int_{H_1(\mathbf{0})} \omega(\bolds{x})\omega(\bolds{x}+\epsilon \boldsymbol{\xi})J(\abs{\boldsymbol{\xi}}) f'(\epsilon \abs{\boldsymbol{\xi}} S(\bolds{x}+\epsilon \boldsymbol{\xi}, \bolds{x}; \bolds{u})^2) S(\bolds{x}+ \epsilon \boldsymbol{\xi},\bolds{x};\bolds{u}) \dfrac{\boldsymbol{\xi}}{\abs{\boldsymbol{\xi}}} d\boldsymbol{\xi},
\end{align*}
where we substituted $\partial_S W^{\epsilon}$ using \autoref{eq:per pot}. In the second step, we introduced the change of variables $\bolds{y} = \bolds{x} + \epsilon \boldsymbol{\xi}$.
Let $F_1: \mathbb{R} \to \mathbb{R}$ be defined as $F_1(S) = f(S^2)$. Then $F'_1(S) = f'(S^2) 2S$. Using the definition of $F_1$, we have
\begin{align*}
2 S f'(\epsilon \abs{\boldsymbol{\xi}} S^2) = \dfrac{F'_1(\sqrt{\epsilon \abs{\boldsymbol{\xi}}} S)}{\sqrt{\epsilon \abs{\boldsymbol{\xi}}}}.
\end{align*}
Because $f$ is assumed to be positive, smooth, and concave, and is bounded at infinity, we have the following bounds on the derivatives of $F_1$:
\begin{align}
\sup_{r} \abs{F'_1(r)} &= F'_1(\bar{r}) =: C_1, \label{C1}\\
\sup_{r} \abs{F''_1(r)} &= \max \{ F''_1(0), F''_1(\hat{u}) \} =: C_2, \label{C2}\\
\sup_{r} \abs{F'''_1(r)} &= \max \{F'''_1(\bar{u}_2), F'''_1(\tilde{u}_2) \} =:C_3,\label{C3}
\end{align}
where $\bar{r}$ is the inflection point of $f(r^2)$, i.e. $F''_1(\bar{r}) = 0$, $\{ 0, \hat{u} \}$ are the maximizers of $F''_1(r)$, and $\{\bar{u}_2, \tilde{u}_2 \}$ are the maximizers of $F'''_1(r)$. By the chain rule and the assumptions on $f$, one can show that $\bar{r}, \hat{u}, \bar{u}_2, \tilde{u}_2$ exist and that $C_1, C_2, C_3$ are finite. \autoref{fig:first der per pot one}, \autoref{fig:second der per pot}, and \autoref{fig:third der per pot} show the generic graphs of $F'_1(r)$, $F''_1(r)$, and $F'''_1(r)$ respectively.
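For a concrete profile satisfying the assumptions on $f$, the bound $C_1$ can be computed explicitly. The sketch below uses the hypothetical choice $f(x) = x/(1+x)$, which is smooth, increasing, concave, and bounded; it is an illustration only, not the $f$ of the text:

```python
import math

# Example profile (an assumption for illustration): f(x) = x/(1+x)
def F1(r):           # F_1(S) = f(S^2) = r^2 / (1 + r^2)
    return r * r / (1.0 + r * r)

def F1p(r):          # F_1'(r) = f'(r^2) * 2r = 2r / (1 + r^2)^2
    return 2.0 * r / (1.0 + r * r) ** 2

# C_1 = sup_r |F_1'(r)| is attained at the inflection point rbar of f(r^2);
# for this f one computes rbar = 1/sqrt(3) and C_1 = 9/(8*sqrt(3)).
rbar = 1.0 / math.sqrt(3.0)
C1 = abs(F1p(rbar))
assert abs(C1 - 9.0 / (8.0 * math.sqrt(3.0))) < 1e-12

# a coarse scan confirms no sample exceeds the analytic supremum
samples = [-50 + 0.01 * k for k in range(10001)]
assert all(abs(F1p(r)) <= C1 + 1e-12 for r in samples)
```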
\begin{figure}
\centering
\includegraphics[scale=0.3]{derivative_F_1.png}
\caption{Generic plot of $F'_1(r)$. $|F'_1(r)|$ is bounded by $\abs{F'_1(\bar{r})}$.}
\label{fig:first der per pot one}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\draw[->,thick] (-4,0) -- (4,0) node[right] {$r$};
\draw[->,thick] (0,-1.4) -- (0,4.5) node[right] {$F''_1(r)$};
\draw[scale=1.0,domain=-4:4,smooth,variable=\x,thick] plot ({\x},{\dderperpotFx(\x)});
\draw [-] (0.816,0.2) -- (0.816,-0.2);
\draw [-] (-0.816,0.2) -- (-0.816,-0.2);
\draw [-] (1.414,0.2) -- (1.414,-0.2);
\draw [-] (-1.414,0.2) -- (-1.414,-0.2);
\node [left] at (0.816,-0.2) {$\bar{r}$};
\node [right] at (-0.816,-0.2) {$-\bar{r}$};
\node [above] at (1.414,0.2) {$\hat{u}$};
\node [above] at (-1.414,0.2) {$-\hat{u}$};
\end{tikzpicture}
\caption{Generic plot of $F''_1(r)$. At $\pm\bar{r}$, $F''_1(r) = 0$. At $\pm\hat{u}$, $F'''_1(r) = 0$.}
\label{fig:second der per pot}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\draw[->,thick] (-5,0) -- (5,0) node[right] {$r$};
\draw[->,thick] (0,-1) -- (0,1.3) node[above] {$F'''_1(r)$};
\draw[scale=1.0,domain=-5:5,smooth,variable=\x,thick] plot ({\x},{\ddderperpotFx(\x)});
\draw[-] (0.8, 0.1) -- (0.8, -0.1);
\draw[-] (-0.8, 0.1) -- (-0.8, -0.1);
\node [above] at (0.8, 0.1) {$\bar{u}_2$};
\node [below] at (-0.8,-0.1) {$-\bar{u}_2$};
\draw[-] (1.64, 0.1) -- (1.64, -0.1);
\draw[-] (-1.64, 0.1) -- (-1.64, -0.1);
\node [right] at (1.64, -0.2) {$\hat{u}$};
\node [left] at (-1.64, 0.2) {$-\hat{u}$};
\draw[-] (2.9, 0.1) -- (2.9, -0.1);
\draw[-] (-2.9, 0.1) -- (-2.9, -0.1);
\node [below] at (2.9, -0.1) {$\tilde{u}_2$};
\node [above] at (-2.9, 0.1) {$-\tilde{u}_2$};
\end{tikzpicture}
\caption{Generic plot of $F'''_1(r)$. At $\pm \bar{u}_2$ and $\pm \tilde{u}_2$, $F''''_1 = 0$.}
\label{fig:third der per pot}
\end{figure}
The nonlocal force $-\boldsymbol{\nabla} PD^\epsilon$ can be written as
\begin{align}\label{eq:peri force use}
&- \boldsymbol{\nabla} PD^{\epsilon}(\bolds{u})(\bolds{x}) \notag\\
&= \dfrac{2}{\epsilon \omega_d} \int_{H_1(\mathbf{0})} \omega(\bolds{x})\omega(\bolds{x}+\epsilon \boldsymbol{\xi})J(\abs{\boldsymbol{\xi}}) F'_1(\sqrt{\epsilon \abs{\boldsymbol{\xi}}} S(\bolds{x}+\epsilon \boldsymbol{\xi}, \bolds{x}; \bolds{u})) \dfrac{1}{\sqrt{\epsilon \abs{\boldsymbol{\xi}}}} \dfrac{\boldsymbol{\xi}}{\abs{\boldsymbol{\xi}}} d\boldsymbol{\xi}.
\end{align}
To simplify the calculations, we use the following notation:
\begin{align*}
\bar{\bolds{u}}(\bolds{x}) &:= \bolds{u}(\bolds{x}+\epsilon \boldsymbol{\xi}) - \bolds{u}(\bolds{x}), \\
\bar{\bolds{u}}(\bolds{y}) &:= \bolds{u}(\bolds{y}+\epsilon \boldsymbol{\xi}) - \bolds{u}(\bolds{y}), \\
(\bolds{u} - \bolds{v})(\bolds{x}) &:= \bolds{u}(\bolds{x}) - \bolds{v}(\bolds{x}),
\end{align*}
and $\overline{(\bolds{u} - \bolds{v})}(\bolds{x})$ is defined similarly to $\bar{\bolds{u}}(\bolds{x})$. Also, let
\begin{align*}
s = \epsilon \abs{\boldsymbol{\xi}}, \quad \bolds{e} = \dfrac{\boldsymbol{\xi}}{\abs{\boldsymbol{\xi}}}.
\end{align*}
In what follows, we will come across integrals of the type $\int_{H_1(\mathbf{0})} J(\abs{\boldsymbol{\xi}}) \abs{\boldsymbol{\xi}}^{-\alpha} d\boldsymbol{\xi}$. Recall that $0\leq J(\abs{\boldsymbol{\xi}}) \leq M$ for all $\boldsymbol{\xi}\in H_1(\mathbf{0})$ and $J(\abs{\boldsymbol{\xi}}) = 0$ for $\boldsymbol{\xi} \notin H_1(\mathbf{0})$. Therefore, let
\begin{align}
\label{Jbar}
\bar{J}_\alpha &:= \dfrac{1}{\omega_d} \int_{H_1(\mathbf{0})} J(\abs{\boldsymbol{\xi}}) \abs{\boldsymbol{\xi}}^{-\alpha} d\boldsymbol{\xi}.
\end{align}
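Since $J$ is bounded and supported in $H_1(\mathbf{0})$, the constant $\bar{J}_\alpha$ is finite whenever $\alpha < d$. As an illustration (not part of the analysis), the following sketch evaluates $\bar{J}_\alpha$ in $d=2$ by polar-coordinate quadrature for a hypothetical influence function $J(r)=1-r$, for which $\bar{J}_\alpha = 2\left(\tfrac{1}{2-\alpha}-\tfrac{1}{3-\alpha}\right)$ in closed form.

```python
# hypothetical influence function: J(r) = 1 - r on [0, 1], zero outside
def J(r):
    return max(0.0, 1.0 - r)

def Jbar(alpha, n=20000):
    # \bar{J}_alpha = (1/omega_d) int_{H_1(0)} J(|xi|) |xi|^{-alpha} d xi  in d = 2;
    # polar coordinates with omega_d = pi give 2 int_0^1 J(r) r^{1-alpha} dr
    h = 1.0 / n
    return 2.0 * sum(J((i + 0.5) * h) * ((i + 0.5) * h) ** (1.0 - alpha) * h
                     for i in range(n))

print(Jbar(0.5))  # finite for alpha < d
```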
With the notation above, we note that $S(\bolds{x}+ \epsilon \boldsymbol{\xi}, \bolds{x}; \bolds{u}) = \bar{\bolds{u}}(\bolds{x})\cdot \bolds{e}/s$, and $-\boldsymbol{\nabla} PD^\epsilon$ can be written as
\begin{align}\label{eq:del pd simplified}
- \boldsymbol{\nabla} PD^{\epsilon}(\bolds{u})(\bolds{x}) &= \dfrac{2}{\epsilon \omega_d} \int_{H_1(\mathbf{0})} \omega(\bolds{x})\omega(\bolds{x}+\epsilon \boldsymbol{\xi})J(\abs{\boldsymbol{\xi}}) F'_1(\bar{\bolds{u}}(\bolds{x}) \cdot \bolds{e}/\sqrt{s} ) \dfrac{1}{\sqrt{s}} \bolds{e} \, d\boldsymbol{\xi}.
\end{align}
We first estimate the term $\abs{-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u})(\bolds{x}) - (-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{v})(\bolds{x}))}$ in \autoref{eq:lipschitz norm of per force}.
\begin{align}
&\abs{-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u})(\bolds{x}) - (-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{v})(\bolds{x}))}\notag\\
&\leq\abs{ \dfrac{2}{\epsilon \omega_d} \int_{H_1(\mathbf{0})} \omega(\bolds{x})\omega(\bolds{x}+\epsilon \boldsymbol{\xi}) J(\abs{\boldsymbol{\xi}}) \dfrac{\left( F'_1(\bar{\bolds{u}}(\bolds{x}) \cdot \bolds{e}/\sqrt{s} ) - F'_1(\bar{\bolds{v}}(\bolds{x}) \cdot \bolds{e}/\sqrt{s} ) \right) }{\sqrt{s}}\bolds{e} \, d\boldsymbol{\xi} } \notag \\
&\leq \abs{ \dfrac{2}{\epsilon \omega_d} \int_{H_1(\mathbf{0})} J(\abs{\boldsymbol{\xi}}) \dfrac{1}{\sqrt{s}} \abs{ F'_1(\bar{\bolds{u}}(\bolds{x}) \cdot \bolds{e}/\sqrt{s} ) - F'_1(\bar{\bolds{v}}(\bolds{x}) \cdot \bolds{e}/\sqrt{s} ) } d\boldsymbol{\xi} } \notag \\
&\leq \sup_{r} \abs{F''_1(r)} \abs{ \dfrac{2}{\epsilon \omega_d} \int_{H_1(\mathbf{0})} J(\abs{\boldsymbol{\xi}}) \dfrac{1}{\sqrt{s}} \abs{ \bar{\bolds{u}}(\bolds{x}) \cdot \bolds{e}/\sqrt{s} - \bar{\bolds{v}}(\bolds{x}) \cdot \bolds{e}/\sqrt{s} } d\boldsymbol{\xi} } \notag \\
&\leq \dfrac{2C_2 }{\epsilon \omega_d} \abs{\int_{H_1(\mathbf{0})} J(\abs{\boldsymbol{\xi}}) \dfrac{\abs{\bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{x})}}{\epsilon \abs{\boldsymbol{\xi}}} d\boldsymbol{\xi} }. \label{eq:diff force bound 1}
\end{align}
Here we have used the facts that $|\omega(\bolds{x})|\leq1$ and that, for a unit vector $\bolds{e}$, the inequalities $\abs{\bolds{a} \cdot \bolds{e}} \leq \abs{\bolds{a}}$ and $\abs{\alpha \bolds{e}} \leq \abs{\alpha}$ hold for all $\bolds{a} \in \mathbb{R}^d$ and $\alpha \in \mathbb{R}$. Using the fact that $\bolds{u},\bolds{v} \in \Cholderz{D;\bbR^{d}}$, we have
\begin{align*}
\dfrac{\abs{\bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{x})}}{s} &= \dfrac{\abs{(\bolds{u} - \bolds{v})(\bolds{x}+ \epsilon \boldsymbol{\xi})- (\bolds{u} - \bolds{v})(\bolds{x})}}{(\epsilon \abs{\boldsymbol{\xi}})^{\gamma}} \dfrac{1}{(\epsilon \abs{\boldsymbol{\xi}})^{1-\gamma}} \\
&\leq \Choldernorm{\bolds{u}-\bolds{v}}{D;\bbR^{d}} \dfrac{1}{(\epsilon \abs{\boldsymbol{\xi}})^{1-\gamma}}.
\end{align*}
Substituting the estimate above, we get
\begin{align}\label{eq:estimate lipschitz part 1}
\abs{-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u})(\bolds{x}) - (-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{v})(\bolds{x}))} &\leq \dfrac{2C_2 \bar{J}_{1-\gamma}}{\epsilon^{2-\gamma}} \Choldernorm{\bolds{u}-\bolds{v}}{D;\bbR^{d}},
\end{align}
where $C_2$ is given by \autoref{C2} and $\bar{J}_{1-\gamma}$ is given by \autoref{Jbar}.
We now estimate the second term in \autoref{eq:lipschitz norm of per force}. To simplify notation we write $\tilde{\omega}(\bolds{x},\boldsymbol{\xi})=\omega(\bolds{x})\omega(\bolds{x}+\epsilon\boldsymbol{\xi})$, and with the help of \autoref{eq:del pd simplified} we get
\begin{align}
&\dfrac{1}{\abs{\bolds{x} - \bolds{y}}^{\gamma}} \abs{(-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u}) + \boldsymbol{\nabla} PD^{\epsilon}(\bolds{v}))(\bolds{x}) - (-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u}) + \boldsymbol{\nabla} PD^{\epsilon}(\bolds{v}))(\bolds{y})} \notag \\
&= \dfrac{1}{\abs{\bolds{x} - \bolds{y}}^{\gamma}} \bigg| \dfrac{2}{\epsilon\omega_d} \int_{H_1(\mathbf{0})} J(\abs{\boldsymbol{\xi}}) \dfrac{1}{\sqrt{s}} \left( \tilde{\omega}(\bolds{x},\boldsymbol{\xi})\Big(F'_1\Big(\dfrac{\bar{\bolds{u}}(\bolds{x})\cdot \bolds{e}}{\sqrt{s}}\Big) - F'_1\Big(\dfrac{\bar{\bolds{v}}(\bolds{x})\cdot \bolds{e}}{\sqrt{s}}\Big)\Big)\right.\notag \\
& \: \left. - \tilde{\omega}(\bolds{y},\boldsymbol{\xi})\Big(F'_1\Big(\dfrac{\bar{\bolds{u}}(\bolds{y})\cdot \bolds{e}}{\sqrt{s}}\Big) - F'_1\Big(\dfrac{\bar{\bolds{v}}(\bolds{y})\cdot \bolds{e}}{\sqrt{s}}\Big)\Big) \right) \bolds{e} \, d\boldsymbol{\xi} \bigg| \notag \\
&\leq \dfrac{2}{\epsilon\omega_d} \int_{H_1(\mathbf{0})} J(\abs{\boldsymbol{\xi}}) \dfrac{1}{\sqrt{s}}\times \notag \\
&\quad \abs{ \tilde{\omega}(\bolds{x},\boldsymbol{\xi})\Big(F'_1\Big(\dfrac{\bar{\bolds{u}}(\bolds{x})\cdot \bolds{e}}{\sqrt{s}}\Big) - F'_1\Big(\dfrac{\bar{\bolds{v}}(\bolds{x})\cdot \bolds{e}}{\sqrt{s}}\Big)\Big) - \tilde{\omega}(\bolds{y},\boldsymbol{\xi})\Big(F'_1\Big(\dfrac{\bar{\bolds{u}}(\bolds{y})\cdot \bolds{e}}{\sqrt{s}}\Big) - F'_1\Big(\dfrac{\bar{\bolds{v}}(\bolds{y})\cdot \bolds{e}}{\sqrt{s}}\Big)\Big) } d\boldsymbol{\xi}. \label{eq:H def}
\end{align}
We analyze the integrand in the above equation. We let $H$ be defined by
\begin{align*}
H &:= \dfrac{\abs{ \tilde{\omega}(\bolds{x},\boldsymbol{\xi})\Big(F'_1\Big(\dfrac{\bar{\bolds{u}}(\bolds{x})\cdot \bolds{e}}{\sqrt{s}}\Big) - F'_1\Big(\dfrac{\bar{\bolds{v}}(\bolds{x})\cdot \bolds{e}}{\sqrt{s}}\Big)\Big) - \tilde{\omega}(\bolds{y},\boldsymbol{\xi})\Big(F'_1\Big(\dfrac{\bar{\bolds{u}}(\bolds{y})\cdot \bolds{e}}{\sqrt{s}}\Big) - F'_1\Big(\dfrac{\bar{\bolds{v}}(\bolds{y})\cdot \bolds{e}}{\sqrt{s}}\Big)\Big) }}{\abs{\bolds{x} - \bolds{y}}^\gamma}.
\end{align*}
Let $\bolds{r} : [0,1] \times D \to \mathbb{R}^d$ be defined as
\begin{align*}
\bolds{r}(l,\bolds{x}) = \bar{\bolds{v}}(\bolds{x}) + l (\bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{x})).
\end{align*}
Note that $\partial \bolds{r}(l,\bolds{x})/ \partial l = \bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{x})$. Using $\bolds{r}(l,\bolds{x})$, we have
\begin{align}
F'_1(\bar{\bolds{u}}(\bolds{x})\cdot \bolds{e}/\sqrt{s}) - F'_1(\bar{\bolds{v}}(\bolds{x})\cdot \bolds{e}/\sqrt{s}) &= \int_0^1 \dparder{F'_1(\bolds{r}(l,\bolds{x}) \cdot \bolds{e}/\sqrt{s})}{l} dl \notag \\
&= \int_0^1 \dparder{F'_1(\bolds{r} \cdot \bolds{e}/\sqrt{s})}{\bolds{r}} \Big\vert_{\bolds{r}= \bolds{r}(l,\bolds{x})} \cdot \dparder{\bolds{r}(l,\bolds{x})}{l} dl. \label{eq:estimate F prime bx}
\end{align}
Similarly, we have
\begin{align}
F'_1(\bar{\bolds{u}}(\bolds{y})\cdot \bolds{e}/\sqrt{s}) - F'_1(\bar{\bolds{v}}(\bolds{y})\cdot \bolds{e}/\sqrt{s}) &= \int_0^1 \dparder{F'_1(\bolds{r} \cdot \bolds{e}/\sqrt{s})}{\bolds{r}} \Big\vert_{\bolds{r}= \bolds{r}(l,\bolds{y})} \cdot \dparder{\bolds{r}(l,\bolds{y})}{l} dl. \label{eq:estimate F prime by}
\end{align}
Note that
\begin{align}\label{eq:F prime wrt br}
\dparder{F'_1(\bolds{r} \cdot \bolds{e}/\sqrt{s})}{\bolds{r}} \Big\vert_{\bolds{r}= \bolds{r}(l,\bolds{x})} &= F''_1(\bolds{r}(l,\bolds{x}) \cdot \bolds{e}/\sqrt{s}) \dfrac{\bolds{e}}{\sqrt{s}}.
\end{align}
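As a quick sanity check on \autoref{eq:estimate F prime bx}, the scalar version of the identity, $F'_1(a)-F'_1(b)=\int_0^1 F''_1(b+l(a-b))\,(a-b)\,dl$, can be verified numerically. The profile below is a stand-in ($\tanh$), not the paper's $F_1$:

```python
import math

# stand-in profiles (assumptions, not the paper's F_1):
F1p = math.tanh                              # plays the role of F'_1
def F1pp(r):                                 # plays the role of F''_1
    return 1.0 / math.cosh(r) ** 2

def line_integral(a, b, n=20000):
    # \int_0^1 F''_1(b + l (a - b)) (a - b) dl  via the midpoint rule
    return sum(F1pp(b + (i + 0.5) / n * (a - b)) * (a - b) / n for i in range(n))

a, b = 1.3, -0.4
print(F1p(a) - F1p(b), line_integral(a, b))  # the two sides agree
```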
Combining \autoref{eq:estimate F prime bx}, \autoref{eq:estimate F prime by}, and \autoref{eq:F prime wrt br} gives
\begin{align*}
H &= \dfrac{1}{\abs{\bolds{x} - \bolds{y}}^\gamma} \bigg| \int_0^1 \left(\tilde{\omega}(\bolds{x},\boldsymbol{\xi}) F''_1(\bolds{r}(l,\bolds{x}) \cdot \bolds{e}/\sqrt{s}) (\bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{x})) \right. \notag \\
& \qquad \qquad \left. - \tilde{\omega}(\bolds{y},\boldsymbol{\xi})F''_1(\bolds{r}(l,\bolds{y}) \cdot \bolds{e}/\sqrt{s}) (\bar{\bolds{u}}(\bolds{y}) - \bar{\bolds{v}}(\bolds{y})) \right) \cdot \dfrac{\bolds{e}}{\sqrt{s}} dl \bigg| \notag \\
&\leq \dfrac{1}{\abs{\bolds{x} - \bolds{y}}^\gamma} \dfrac{1}{\sqrt{s}} \int_0^1 \big| \tilde{\omega}(\bolds{x},\boldsymbol{\xi})F''_1(\bolds{r}(l,\bolds{x}) \cdot \bolds{e}/\sqrt{s}) (\bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{x})) \notag \\
& \qquad \qquad - \tilde{\omega}(\bolds{y},\boldsymbol{\xi})F''_1(\bolds{r}(l,\bolds{y}) \cdot \bolds{e}/\sqrt{s}) (\bar{\bolds{u}}(\bolds{y}) - \bar{\bolds{v}}(\bolds{y})) \big| dl. \notag
\end{align*}
Adding and subtracting $\tilde{\omega}(\bolds{x},\boldsymbol{\xi})F''_1(\bolds{r}(l,\bolds{x})\cdot\bolds{e}/\sqrt{s})(\bar{\bolds{u}}(\bolds{y}) - \bar{\bolds{v}}(\bolds{y}))$, and noting $0\leq \tilde{\omega}(\bolds{x},\boldsymbol{\xi})\leq 1$, gives
\begin{align*}
H &\leq \dfrac{1}{\abs{\bolds{x} - \bolds{y}}^\gamma} \dfrac{1}{\sqrt{s}} \int_0^1 \abs{F''_1(\bolds{r}(l,\bolds{x})\cdot\bolds{e}/\sqrt{s})} \abs{\bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{x}) - \bar{\bolds{u}}(\bolds{y}) + \bar{\bolds{v}}(\bolds{y})} dl \notag \\
&\quad + \dfrac{1}{\abs{\bolds{x} - \bolds{y}}^\gamma} \dfrac{1}{\sqrt{s}} \int_0^1 \abs{\tilde{\omega}(\bolds{x},\boldsymbol{\xi})F''_1(\bolds{r}(l,\bolds{x})\cdot\bolds{e}/\sqrt{s}) - \tilde{\omega}(\bolds{y},\boldsymbol{\xi})F''_1(\bolds{r}(l,\bolds{y})\cdot\bolds{e}/\sqrt{s})} \notag\\
&\quad \quad \quad \quad \quad \quad \quad \quad\times \abs{\bar{\bolds{u}}(\bolds{y}) - \bar{\bolds{v}}(\bolds{y})} dl \notag \\
&=: H_1 + H_2.
\end{align*}
We first estimate $H_1$. Note that $|F''_1(r)|\leq C_2$. Since $\bolds{u},\bolds{v} \in \Cholderz{D;\bbR^{d}}$, it is easily seen that
\begin{align*}
\dfrac{\abs{ \bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{x}) - \bar{\bolds{u}}(\bolds{y}) + \bar{\bolds{v}}(\bolds{y}) }}{\abs{\bolds{x} - \bolds{y}}^\gamma} &\leq 2 \Choldernorm{\bolds{u} - \bolds{v}}{D;\bbR^{d}}.
\end{align*}
Therefore, we have
\begin{align}
H_1 &\leq \dfrac{2 C_2}{\sqrt{s}} \Choldernorm{\bolds{u} - \bolds{v}}{D;\bbR^{d}}. \label{eq:estimate H one}
\end{align}
We now estimate $H_2$. We add and subtract $\tilde{\omega}(\bolds{x},\boldsymbol{\xi})F''_1(\bolds{r}(l,\bolds{y})\cdot\bolds{e}/\sqrt{s})$ in $H_2$ to get
\begin{align*}
H_2 \leq H_3+H_4,
\end{align*}
where
\begin{align*}
H_3 = & \dfrac{1}{\abs{\bolds{x} - \bolds{y}}^\gamma} \dfrac{1}{\sqrt{s}} \int_0^1 \abs{F''_1(\bolds{r}(l,\bolds{x})\cdot\bolds{e}/\sqrt{s}) - F''_1(\bolds{r}(l,\bolds{y})\cdot\bolds{e}/\sqrt{s})} \abs{\bar{\bolds{u}}(\bolds{y}) - \bar{\bolds{v}}(\bolds{y})} dl,\notag
\end{align*}
and
\begin{align*}
H_4 &= \dfrac{1}{\abs{\bolds{x} - \bolds{y}}^\gamma} \dfrac{1}{\sqrt{s}} \int_0^1 \abs{\tilde{\omega}(\bolds{x},\boldsymbol{\xi}) - \tilde{\omega}(\bolds{y},\boldsymbol{\xi})}\,\abs{F''_1(\bolds{r}(l,\bolds{y})\cdot\bolds{e}/\sqrt{s})} \abs{\bar{\bolds{u}}(\bolds{y}) - \bar{\bolds{v}}(\bolds{y})} dl . \notag
\end{align*}
Now we estimate $H_3$. Since $|F'''_1(r)| \leq C_3$, see \autoref{C3}, we have
\begin{align}
&\dfrac{1}{\abs{\bolds{x} - \bolds{y}}^\gamma}\abs{F''_1(\bolds{r}(l,\bolds{x})\cdot\bolds{e}/\sqrt{s}) - F''_1(\bolds{r}(l,\bolds{y})\cdot\bolds{e}/\sqrt{s})} \notag \\
&\leq \dfrac{1}{\abs{\bolds{x} - \bolds{y}}^\gamma} \sup_r \abs{F'''_1(r)} \dfrac{\abs{\bolds{r}(l,\bolds{x})\cdot \bolds{e} - \bolds{r}(l,\bolds{y}) \cdot \bolds{e}} }{\sqrt{s}} \notag \\
&\leq \dfrac{C_3}{\sqrt{s}} \dfrac{\abs{\bolds{r}(l,\bolds{x}) - \bolds{r}(l,\bolds{y})}}{\abs{\bolds{x} - \bolds{y}}^\gamma} \notag \\
&\leq \dfrac{C_3}{\sqrt{s}} \left( \dfrac{\abs{1-l} \abs{\bar{\bolds{v}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{y})}}{\abs{\bolds{x} - \bolds{y}}^\gamma} + \dfrac{\abs{l} \abs{\bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{u}}(\bolds{y})}}{\abs{\bolds{x} - \bolds{y}}^\gamma} \right) \notag \\
&\leq \dfrac{C_3}{\sqrt{s}} \left( \dfrac{\abs{\bar{\bolds{v}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{y})}}{\abs{\bolds{x} - \bolds{y}}^\gamma} + \dfrac{ \abs{\bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{u}}(\bolds{y})}}{\abs{\bolds{x} - \bolds{y}}^\gamma} \right), \label{eq:estimate H two part one}
\end{align}
where we have used the fact that $\abs{1-l} \leq 1$ and $\abs{l} \leq 1$, as $l\in [0,1]$. Also, note that
\begin{align*}
\dfrac{\abs{\bar{\bolds{u}}(\bolds{x}) - \bar{\bolds{u}}(\bolds{y})}}{\abs{\bolds{x} - \bolds{y}}^{\gamma}} &\leq 2 \Choldernorm{\bolds{u}}{D;\bbR^{d}}, \\
\dfrac{\abs{\bar{\bolds{v}}(\bolds{x}) - \bar{\bolds{v}}(\bolds{y})}}{\abs{\bolds{x} - \bolds{y}}^{\gamma}} &\leq 2 \Choldernorm{\bolds{v}}{D;\bbR^{d}}, \\
\abs{\bar{\bolds{u}}(\bolds{y}) - \bar{\bolds{v}}(\bolds{y})} &\leq s^\gamma \Choldernorm{\bolds{u} - \bolds{v}}{D;\bbR^{d}}.
\end{align*}
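These seminorm bounds follow from the triangle inequality. For instance, the first one can be checked numerically for a sample $\gamma=1/2$ H\"older function in one dimension; here $u(x)=\sqrt{|x|}$ is a stand-in with seminorm $1$, and the sampled ratio never exceeds twice the seminorm:

```python
import math, random

# sample: u(x) = sqrt(|x|) on the line is gamma = 1/2 Hoelder with seminorm 1
gamma, seminorm, eps_xi = 0.5, 1.0, 0.3
def u(x): return math.sqrt(abs(x))
def ubar(x): return u(x + eps_xi) - u(x)   # plays \bar{u}(x) = u(x + eps xi) - u(x)

random.seed(0)
worst = 0.0
for _ in range(1000):
    x, y = random.uniform(-2, 2), random.uniform(-2, 2)
    if x != y:
        worst = max(worst, abs(ubar(x) - ubar(y)) / abs(x - y) ** gamma)
print(worst)  # bounded by 2 * seminorm
```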
We combine the above estimates with \autoref{eq:estimate H two part one} to get
\begin{align}
H_3&\leq \dfrac{1}{\sqrt{s}} \dfrac{C_3}{\sqrt{s}} \left( \Choldernorm{\bolds{u}}{D;\bbR^{d}} + \Choldernorm{\bolds{v}}{D;\bbR^{d}} \right) s^\gamma \Choldernorm{\bolds{u} - \bolds{v}}{D;\bbR^{d}} \notag \\
&= \dfrac{C_3}{s^{1-\gamma}} \left( \Choldernorm{\bolds{u}}{D;\bbR^{d}} + \Choldernorm{\bolds{v}}{D;\bbR^{d}} \right) \Choldernorm{\bolds{u} - \bolds{v}}{D;\bbR^{d}}.\label{eq:estimate H two}
\end{align}
Next we estimate $H_4$. Here we add and subtract $\omega(\bolds{y})\omega(\bolds{x}+\epsilon\boldsymbol{\xi})$ to get
\begin{align*}
H_4 &= \dfrac{1}{\abs{\bolds{x} - \bolds{y}}^\gamma} \dfrac{1}{\sqrt{s}} \int_0^1 \abs{\omega(\bolds{x}+\epsilon\boldsymbol{\xi})(\omega(\bolds{x})-\omega(\bolds{y})) +\omega(\bolds{y})(\omega(\bolds{x}+\epsilon\boldsymbol{\xi})-\omega(\bolds{y}+\epsilon\boldsymbol{\xi}))}\notag\\
&\quad \quad \quad \quad \quad \quad \quad \quad\times\abs{F''_1(\bolds{r}(l,\bolds{y})\cdot\bolds{e}/\sqrt{s})} \abs{\bar{\bolds{u}}(\bolds{y}) - \bar{\bolds{v}}(\bolds{y})} dl . \notag
\end{align*}
Recalling that $\omega$ belongs to $\Cholderz{D;\bbR^{d}}$, and in view of the previous estimates, a straightforward calculation gives
\begin{align}
H_4\leq \dfrac{4C_2}{s^{1/2-\gamma}}\Choldernorm{\omega}{D;\bbR^{d}}\Choldernorm{\bolds{u} - \bolds{v}}{D;\bbR^{d}}.\label{eq:estimate H three}
\end{align}
Combining \autoref{eq:estimate H one}, \autoref{eq:estimate H two}, and \autoref{eq:estimate H three} gives
\begin{align*}
H &\leq \left( \dfrac{2C_2}{\sqrt{s}} + \dfrac{4C_2}{s^{1/2-\gamma}}\Choldernorm{\omega}{D;\bbR^{d}} \right.\notag\\
&\left.\quad+\dfrac{C_3}{s^{1-\gamma}} \left( \Choldernorm{\bolds{u}}{D;\bbR^{d}} + \Choldernorm{\bolds{v}}{D;\bbR^{d}} \right) \right) \Choldernorm{\bolds{u} - \bolds{v}}{D;\bbR^{d}}.
\end{align*}
Substituting $H$ in \autoref{eq:H def} gives
\begin{align}
&\dfrac{1}{\abs{\bolds{x} - \bolds{y}}^{\gamma}} \abs{(-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u}) + \boldsymbol{\nabla} PD^{\epsilon}(\bolds{v}))(\bolds{x}) - (-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u}) + \boldsymbol{\nabla} PD^{\epsilon}(\bolds{v}))(\bolds{y})} \notag \\
&\leq \bigg| \dfrac{2}{\epsilon\omega_d} \int_{H_1(\mathbf{0})} J(\abs{\boldsymbol{\xi}}) \dfrac{1}{\sqrt{s}} H d\boldsymbol{\xi} \bigg| \notag \\
&\leq \left( \dfrac{4C_2 \bar{J}_1}{\epsilon^2} +\dfrac{4C_2\bar{J}_{1-\gamma}}{\epsilon^{2-\gamma}} \Choldernorm{\omega}{D;\bbR^{d}}\right.\notag\\
&\left.\quad+ \dfrac{2C_3 \bar{J}_{3/2 - \gamma}}{\epsilon^{2+1/2 - \gamma}} \left( \Choldernorm{\bolds{u}}{D;\bbR^{d}} + \Choldernorm{\bolds{v}}{D;\bbR^{d}} \right) \right) \Choldernorm{\bolds{u} - \bolds{v}}{D;\bbR^{d}}.
\label{eq:estimate lipschitz part 2}
\end{align}
We combine \autoref{eq:lipschitz norm of per force}, \autoref{eq:estimate lipschitz part 1}, and \autoref{eq:estimate lipschitz part 2} to get
\begin{align}
&\Choldernorm{-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{u}) - (-\boldsymbol{\nabla} PD^{\epsilon}(\bolds{v}))}{} \notag \\
&\leq \left( \dfrac{4C_2 \bar{J}_1}{\epsilon^2} +\dfrac{2C_2 \bar{J}_{1-\gamma}}{\epsilon^{2-\gamma}} (1+\Choldernorm{\omega}{}) + \dfrac{2C_3 \bar{J}_{3/2 - \gamma}}{\epsilon^{2+1/2 - \gamma}} \left( \Choldernorm{\bolds{u}}{} + \Choldernorm{\bolds{v}}{} \right) \right) \Choldernorm{\bolds{u} - \bolds{v}}{} \notag \\
&\leq \dfrac{\bar{C}_1 + \bar{C}_2\Choldernorm{\omega}{} +\bar{C}_3(\Choldernorm{\bolds{u}}{} + \Choldernorm{\bolds{v}}{})}{\epsilon^{2+\alpha(\gamma)}} \Choldernorm{\bolds{u} - \bolds{v}}{}, \label{eq:del pd lipschitz}
\end{align}
where we introduce new constants $\bar{C}_1, \bar{C}_2, \bar{C}_3$. We let $\alpha(\gamma) = 0$ if $\gamma \geq 1/2$, and $\alpha(\gamma) = 1/2 - \gamma$ if $\gamma < 1/2$. One can easily verify that, for all $\gamma \in (0,1]$ and $0< \epsilon \leq 1$,
\begin{align*}
\max \left\{\dfrac{1}{\epsilon^2}, \dfrac{1}{\epsilon^{2+1/2 - \gamma}}, \dfrac{1}{\epsilon^{2-\gamma}} \right\} \leq \dfrac{1}{\epsilon^{2+ \alpha(\gamma)}}.
\end{align*}
To complete the proof, we combine \autoref{eq:del pd lipschitz} and \autoref{eq:norm of F in X} to get
\begin{align*}
\normX{F^\epsilon(y,t) - F^\epsilon(z,t)}{X} &\leq \dfrac{L_1 + L_2 (\Choldernorm{\omega}{}+\normX{y}{X} + \normX{z}{X})}{\epsilon^{2+\alpha(\gamma)}} \normX{y-z}{X}.
\end{align*}
This proves the Lipschitz continuity of $F^\epsilon(y,t)$ on any bounded subset of $X$.
The bound on $F^\epsilon(y,t)$, see \autoref{eq:bound on F}, follows easily from \autoref{eq:del pd simplified}. This completes the proof of \autoref{prop:lipschitz}.
\subsection{Existence of solution in H\"{o}lder space}
\label{section: existence}
In this section, we prove \autoref{thm:existence over finite time domain}. We begin by proving a local existence theorem. We then show that the local solution can be continued uniquely in time to recover \autoref{thm:existence over finite time domain}.
The existence and uniqueness of local solutions is stated in the following theorem.
{\vskip 2mm}
\begin{theorem}\label{thm:local existence}
\textbf{Local existence and uniqueness} \\
Let $X= \Cholderz{D;\bbR^{d}}\times\Cholderz{D;\bbR^{d}}$, let $\bolds{b}(t)\in C^{0,\gamma}_0(D;\mathbb{R}^d)$, and let the initial data be $x_0=(\bolds{u}_0,\bolds{v}_0)\in X$. Suppose that $\bolds{b}(t)$ is continuous in time over some time interval $I_0=(-T,T)$ and satisfies $\sup_{t\in I_0} \Choldernorm{\bolds{b}(t)}{D;\bbR^{d}} < \infty$. Then, there exist a time interval $I'=(-T',T')\subset I_0$ and a unique solution $y =(y^1,y^2)$ such that $y\in C^1(I';X)$ and
\begin{equation}
y(t)=x_0+\int_0^tF^\epsilon(y(\tau),\tau)\,d\tau,\hbox{ for $t\in I'$,}
\label{8loc}
\end{equation}
or equivalently
\begin{equation*}
y'(t)=F^\epsilon(y(t),t),\hbox{ with $y(0)=x_0$},\hbox{ for $t\in I'$,}
\label{11loc}
\end{equation*}
where $y(t)$ and $y'(t)$ are Lipschitz continuous in time for $t\in I'\subset I_0$.
\end{theorem}
{\vskip 2mm}
To prove \autoref{thm:local existence}, we proceed as follows.
We write $y(t)=(y^1(t),y^2(t))$ and $||y||_X=||y^1(t)||_{\Cholder{}}+||y^2(t)||_{\Cholder{}}$.
Define the ball $B(0,R)= \{y\in X:\, ||y||_X<R\}$ and choose $R>||x_0||_X$. Let $r = R - \normX{x_0}{X}$ and consider the ball $B(x_0,r)$ defined by
\begin{equation}
B(x_0,r)=\{y\in X:\,||y-x_0||_X<r\}\subset B(0,R),
\label{balls}
\end{equation}
see \autoref{figurenested}.
To recover the existence and uniqueness we introduce the transformation
\begin{equation*}
S_{x_0}(y)(t)=x_0+\int_0^tF^\epsilon(y(\tau),\tau)\,d\tau.
\label{0}
\end{equation*}
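To illustrate how the fixed point of $S_{x_0}$ produces the local solution, the sketch below runs a Picard iteration for a toy two-component system $y^{1\prime}=y^2$, $y^{2\prime}=-y^1$ with $b=0$; the force $-y^1$ is a stand-in for the peridynamic force, not the operator analyzed above. The exact first component is $\cos t$.

```python
import math

def F(y, t):
    # toy right-hand side mimicking the first-order system: y1' = y2, y2' = force(y1) + b,
    # with the stand-in force -y1 and b = 0 (an assumption, not the PD force)
    return (y[1], -y[0])

def picard(x0, T, iters=20, n=200):
    # iterate S_{x0}(y)(t) = x0 + int_0^t F(y(tau), tau) dtau on a uniform grid
    ts = [i * T / n for i in range(n + 1)]
    y = [x0] * (n + 1)                 # initial guess: the constant x0
    for _ in range(iters):
        z, acc = [x0], [0.0, 0.0]
        for i in range(n):
            f = F(y[i], ts[i])         # left-endpoint quadrature of the integral
            acc = [acc[0] + f[0] * T / n, acc[1] + f[1] * T / n]
            z.append((x0[0] + acc[0], x0[1] + acc[1]))
        y = z
    return ts, y

ts, y = picard((1.0, 0.0), T=0.5)
print(y[-1][0], math.cos(0.5))  # first component approximates cos(t)
```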
Introduce $0<T'<T$ and the associated set $Y(T')$ of H\"older continuous functions taking values in $B(x_0,r)$ for $I'=(-T',T')\subset I_0=(-T,T)$.
The goal is to find an appropriate interval $I'=(-T',T')$ for which $S_{x_0}$ maps into the corresponding set $Y(T')$. Writing out the transformation with $y(t)\in Y(T')$ gives
\begin{eqnarray}
&&S_{x_0}^1(y)(t)=x_0^1+\int_0^t y^2(\tau)\,d\tau\label{1}\\
&&S_{x_0}^2(y)(t)=x_0^2+\int_0^t(-\boldsymbol{\nabla} PD^\epsilon(y^1(\tau))+\bolds{b}(\tau))\,d\tau,\label{2}
\end{eqnarray}
and there is a positive constant $K=C/\epsilon^{2+\alpha(\gamma)}$, see
\autoref{eq:bound on F}, independent of $y^1(t)$, for $-T'<t<T'$, such that estimating \autoref{2} gives
\begin{eqnarray}
||S_{x_0}^2(y)(t)-x_0^2||_{\Cholder{}}\leq (K(1+\frac{1}{\epsilon^\gamma}+\sup_{t\in(-T',T')}||y^1(t)||_{\Cholder{}})+\sup_{t\in(-T,T)}||\bolds{b}(t)||_{\Cholder{}})T'\nonumber\\
\label{3}
\end{eqnarray}
and from \autoref{1}
\begin{eqnarray}
||S_{x_0}^1(y)(t)-x_0^1||_{\Cholder{}}\leq\sup_{t\in(-T',T')}||y^2(t)||_{\Cholder{}}T'\label{4}.
\end{eqnarray}
We write $b=\sup_{t\in I_0}||\bolds{b}(t)||_{\Cholder{}}$, and adding \autoref{3} and \autoref{4} gives the upper bound
\begin{equation}
||S_{x_0}(y)(t)-x_0||_X\leq (K(1+\frac{1}{\epsilon^\gamma}+\sup_{t\in(-T',T')}||y(t)||_X)+b)T'.
\label{5}
\end{equation}
Since $B(x_0,r)\subset B(0,R)$, see \autoref{balls}, we choose $T'$ so that
\begin{equation}
||S_{x_0}(y)(t)-x_0||_X\leq (K(1+\frac{1}{\epsilon^\gamma}+R)+b)T'<r=R-||x_0||_X.
\label{5point5}
\end{equation}
For this choice we see that
\begin{equation}
T'<\theta(R)=\frac{R-||x_0||_X}{K(R+1+\frac{1}{\epsilon^\gamma})+b}.
\label{6}
\end{equation}
Now it is easily seen that $\theta(R)$ is increasing with $R>0$ and
\begin{equation}
\lim_{R\rightarrow\infty}\theta(R)=\frac{1}{K}.
\label{7}
\end{equation}
So given $R$ and $||x_0||_X$ we choose $T'$ according to
\begin{equation}
\frac{\theta(R)}{2}<T'< \theta(R),
\label{localchoiceofT}
\end{equation}
and set $I'=(-T',T')$. We have found the appropriate time domain $I'$ such that the transformation $S_{x_0}(y)(t)$ defined in \autoref{0} maps $Y(T')$ into itself. We now proceed using standard arguments, see e.g.\ \cite[Theorem 6.10]{MA-Driver}, to complete the proof of existence and uniqueness of the solution for given initial data $x_0$ over the interval $I'= (-T', T')$.
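The monotonicity of $\theta(R)$ and the limit \autoref{7} are elementary to check. The sketch below evaluates $\theta(R)=\frac{R-||x_0||_X}{K(R+1+\epsilon^{-\gamma})+b}$ for illustrative, hypothetical parameter values:

```python
def theta(R, K, b, x0_norm, eps, gamma):
    # theta(R) = (R - ||x_0||_X) / (K (R + 1 + eps^{-gamma}) + b)
    return (R - x0_norm) / (K * (R + 1.0 + eps ** (-gamma)) + b)

# illustrative (hypothetical) parameter values
K, b, x0_norm, eps, gamma = 2.0, 1.0, 3.0, 0.1, 0.5
vals = [theta(R, K, b, x0_norm, eps, gamma) for R in (10.0, 100.0, 1e4, 1e8)]
print(vals)  # increasing in R and approaching 1/K
```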
We now prove \autoref{thm:existence over finite time domain}. From the proof of \autoref{thm:local existence} above, we see that a unique local solution exists over a time domain $(-T',T')$ with $\frac{\theta(R)}{2}<T'$. Since $\theta(R)\nearrow 1/K$ as $R\nearrow \infty$, we can fix a tolerance $\eta>0$ so that $[(1/2K)-\eta]>0$. Then, given any initial condition with bounded H\"older norm and $b=\sup_{t\in(-T,T)}||\bolds{b}(t)||_{\Cholder{}}$, we can choose $R$ sufficiently large so that $||x_0||_X<R$ and $0<(1/2K)-\eta<T'$. Thus we can always find local solutions for time intervals $(-T',T')$ with $T'$ larger than $[(1/2K)-\eta]>0$. Therefore we apply the local existence and uniqueness result to uniquely continue local solutions up to an arbitrary time interval $(-T,T)$.
\begin{figure}
\centering
\begin{tikzpicture}[xscale=1.0,yscale=1.0]
\draw [] (0.0,0.0) circle [radius=2.9];
\draw [] (1.0,2.0) circle [radius=0.6];
\draw [->,thick] (1.0,2.0) -- (1.38729,2.38729);
\node [right] at (0.8,1.8) {$x_0$};
\node [below] at (-0.2,1.82) {$B(x_0,r)$};
\node [below] at (0.0,0.0) {$0$};
\draw [->,thick] (0,0) -- (2.9,0.0);
\node [below] at (1.45,0.0) {$R$};
\node [below] at (-2.5,-2.3) {$B(0,R)$};
\end{tikzpicture}
\caption{The nested balls $B(x_0,r)\subset B(0,R)$.}
\label{figurenested}
\end{figure}
\section{Limit behavior of H\"older solutions in the limit of vanishing nonlocality}
\label{s:discussion}
In this section, we consider the behavior of bounded H\"older continuous solutions as the peridynamic horizon tends to zero. We find that the solutions converge to a limiting sharp fracture evolution with bounded Griffith fracture energy that satisfies the linear elastic wave equation away from the fracture set. We look at a subset of H\"older solutions that are differentiable in the spatial variables to show that sharp fracture evolutions can be approached by spatially smooth evolutions in the limit of vanishing nonlocality. As $\epsilon$ approaches zero, derivatives can become large but must localize to surfaces across which the limiting evolution jumps.
We consider a sequence of peridynamic horizons $\epsilon_k=1/k$, $k=1,\ldots$ and the associated H\"older continuous solutions $\bolds{u}^{\epsilon_k}(t,\bolds{x})$ of the peridynamic initial value problem \autoref{eq:per equation}, \autoref{eq:per bc}, and \autoref{eq:per initialvalues}. We assume that the initial conditions $\bolds{u}_0^{\epsilon_k},\bolds{v}_0^{\epsilon_k}$ have uniformly bounded peridynamic energy and mean square initial velocity given by
\begin{equation*}
\sup_{\epsilon_k}PD^{\epsilon_k}(\bolds{u}^{\epsilon_k}_0)<\infty\hbox{ and }\sup_{\epsilon_k}||\bolds{v}^{\epsilon_k}_0||_{L^2(D;\mathbb{R}^d)}<\infty.
\end{equation*}
Moreover we suppose that $\bolds{u}_0^{\epsilon_k},\bolds{v}_0^{\epsilon_k}$ are differentiable on $D$ and that they converge in $L^2(D;\mathbb{R}^d)$ to $\bolds{u}_0^{0},\bolds{v}_0^{0}$ with bounded Griffith free energy given by
\begin{eqnarray*}
\int_{D}\,2\mu |\mathcal{E} \bolds{u}_0^0|^2+\lambda |{\rm div}\,\bolds{u}_0^0|^2\,dx+\mathcal{G}_c\mathcal{H}^{d-1}(J_{\bolds{u}_0^0})\leq C < \infty,
\end{eqnarray*}
where $J_{\bolds{u}_0^0}$ denotes the initial fracture surface given by the jumps in the initial deformation $\bolds{u}_0^0$ and $\mathcal{H}^{d-1}(J_{\bolds{u}_0^0})$ is the $(d-1)$-dimensional Hausdorff measure of the jump set. Here $\mathcal{E} \bolds{u}^0_0$ is the elastic strain and ${\rm div}\,\bolds{u}^0_0=Tr(\mathcal{E} \bolds{u}^0_0)$. The constants $\mu$, $\lambda$ are given by the explicit formulas
\begin{eqnarray*}
\mu=\lambda=\frac{1}{5} f'(0)\int_{0}^1r^dJ(r)dr, \hbox{ $d=2,3$,}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{G}_c=\frac{3}{2}\, f_\infty \int_{0}^1r^dJ(r)dr, \hbox{ $d=2,3$,}
\end{eqnarray*}
where $f'(0)$ and $f_\infty$ are defined by \autoref{eq:per asymptote}. The equality $\mu=\lambda$ is a consequence of the central force model used in cohesive dynamics.
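For concreteness, the following sketch evaluates these formulas for a hypothetical choice of the data, $J(r)=1-r$, $f'(0)=1$, $f_\infty=1$, and $d=3$, for which $\int_0^1 r^3 J(r)\,dr = 1/4 - 1/5 = 1/20$:

```python
# hypothetical constitutive data: J(r) = 1 - r, f'(0) = 1, f_infty = 1, d = 3
fprime0, f_inf, d = 1.0, 1.0, 3

def moment(d, n=20000):
    # \int_0^1 r^d J(r) dr with J(r) = 1 - r; exact value is 1/(d+1) - 1/(d+2)
    h = 1.0 / n
    return sum((1.0 - (i + 0.5) * h) * ((i + 0.5) * h) ** d * h for i in range(n))

m = moment(d)
mu = lam = fprime0 * m / 5.0   # mu = lambda = (1/5) f'(0) int_0^1 r^d J(r) dr
Gc = 1.5 * f_inf * m           # G_c = (3/2) f_infty int_0^1 r^d J(r) dr
print(mu, lam, Gc)
```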
Last, we suppose as in \cite{CMPer-Lipton} that the solutions are uniformly bounded, i.e.,
\begin{equation*}
\sup_{\epsilon_k}\sup_{[0,T]}||\bolds{u}^{\epsilon_k}(t)||_{L^\infty(D;\mathbb{R}^d)}<\infty.
\end{equation*}
The H\"older solutions $\bolds{u}^{\epsilon_k}(t,\bolds{x})$ naturally belong to $L^2(D;\mathbb{R}^d)$ for all $t\in[0,T]$ and we can directly apply the Gr\"onwall inequality (equation (6.9) of \cite{CMPer-Lipton}) together with Theorems 6.2 and 6.4 of \cite{CMPer-Lipton} to conclude similar to Theorems 5.1 and 5.2 of \cite{CMPer-Lipton} that there is at least one ``cluster point'' $\bolds{u}^{0}(t,\bolds{x})$ belonging to $C([0,T];L^2(D;\mathbb{R}^d))$ and subsequence, also denoted by $\bolds{u}^{\epsilon_k}(t,\bolds{x})$ for which
\bolds{o}lds{e}gin{eqnarray*}
\lim_{\epsilon_k\rightarrow 0}\max_{0\leq t\leq T}\left\{\Vert \bolds{u}^{\epsilon_k}(t)-\bolds{u}^0(t)\Vert_{L^2(D;\mathbb{R}^d)}\right\}=0.
\label{eq:per unifconvg}
\end{eqnarray*}
Moreover it follows from \cite{CMPer-Lipton} that the limit evolution $\bolds{u}^0(t,\bolds{x})$ has a weak derivative $\bolds{u}_t^0(t,\bolds{x})$ belonging to $L^2([0,T]\times D;\mathbb{R}^{d})$. For each time $t\in[0,T]$ we can apply methods outlined in \cite{CMPer-Lipton} to find that the cluster point $\bolds{u}^0(t,\bolds{x})$ is a special function of bounded deformation (see, \cite{Ambrosio}, \cite{Bellettini}) and has bounded linear elastic fracture energy given by
\begin{eqnarray*}
\int_{D}\,2\mu |\mathcal{E} \bolds{u}^0(t)|^2+\lambda |{\rm div}\,\bolds{u}^0(t)|^2\,dx+\mathcal{G}_c\mathcal{H}^{2}(J_{\bolds{u}^0(t)})\leq C,
\label{LEFMbound}
\end{eqnarray*}
for $0\leq t\leq T$, where $J_{\bolds{u}^0(t)}$ denotes the evolving fracture surface.
The deformation--crack set pair $(\bolds{u}^0(t),J_{\bolds{u}^0(t)})$ records the brittle fracture evolution of the limit dynamics.
Arguments identical to \cite{CMPer-Lipton} show that away from sets where $|S(\bolds{y},\bolds{x};\bolds{u}^{\epsilon_k})|>S_c$ the limit $\bolds{u}^0$ satisfies the linear elastic wave equation. This is stated as follows:
Fix $\delta>0$, and for $\epsilon_k<\delta$ and $0\leq t\leq T$ consider the open set $D'\subset D$ such that all points $\bolds{x}$ in $D'$ and $\bolds{y}$ with $|\bolds{y}-\bolds{x}|<\epsilon_k$ satisfy
\begin{eqnarray*}
|S(\bolds{y},\bolds{x};\bolds{u}^{\epsilon_k}(t))|<{S}_c(\bolds{y},\bolds{x}).
\label{eq: per quiecent}
\end{eqnarray*}
Then the limit evolution $\bolds{u}^0(t,\bolds{x})$ evolves elastodynamically on $D'$ and is governed by the balance of linear momentum expressed by the Navier Lam\'e equations on the domain $[0,T]\times D'$ given by
\begin{eqnarray*}
\bolds{u}^0_{tt}(t)= {\rm div}\,\bolds{\sigma}(t)+\bolds{b}(t), \hbox{ on $[0,T]\times D'$},
\label{waveequationn}
\end{eqnarray*}
where the stress tensor $\bolds{\sigma}$ is given by
\begin{eqnarray*}
\bolds{\sigma} =\lambda I_d Tr(\mathcal{E}\,\bolds{u}^0)+2\mu \mathcal{E}\bolds{u}^0,
\label{stress}
\end{eqnarray*}
where $I_d$ is the identity on $\mathbb{R}^d$ and $Tr(\mathcal{E}\,\bolds{u}^0)$ is the trace of the strain.
Here the second derivative $\bolds{u}_{tt}^0$ is the time derivative, in the sense of distributions, of $\bolds{u}^0_t$, and ${\rm div}\bolds{\sigma}$ is the divergence of the stress tensor $\bolds{\sigma}$ in the distributional sense. This shows that sharp fracture evolutions can be approached by spatially smooth evolutions in the limit of vanishing nonlocality.
\section{Conclusions}
\label{s:conclusions}
In this article, we have presented a numerical analysis for a class of nonlinear nonlocal peridynamic models. We have shown that the convergence rate applies even when the fields do not have well-defined spatial derivatives. We treat both the forward Euler scheme and the general implicit single step method.
The convergence rate is found to be the same for both schemes and is given by $C(\Delta t+h^\gamma/\epsilon^2)$. Here the constant $C$ depends on $\epsilon$ and on the H\"older and $L^2$ norms of the solution and its time derivatives.
The Lipschitz property of the nonlocal, nonlinear force together with boundedness of the nonlocal kernel plays an important role. It ensures that the error in the nonlocal force remains bounded when replacing the exact solution with its approximation. This, in turn, implies that even in the presence of mechanical instabilities the global approximation error remains controlled by the local truncation error in space and time.
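As a hedged aside (an illustration of the standard discrete Gronwall argument, not the paper's actual implementation), the way a Lipschitz force keeps the global error controlled by the local truncation error can be sketched numerically; the constants $C$ and $\tau$ below are hypothetical stand-ins for the Lipschitz constant and the truncation error.

```python
import math

def propagate_error(C, tau, T, dt):
    """Iterate the error recursion E^{k+1} = (1 + C*dt) * E^k + dt * tau
    with E^0 = 0, as in a single-step scheme with Lipschitz force."""
    E, steps = 0.0, int(T / dt)
    for _ in range(steps):
        E = (1.0 + C * dt) * E + dt * tau
    return E

# Hypothetical values: C = Lipschitz constant, tau = truncation error.
C, tau, T, dt = 2.0, 1e-3, 1.0, 1e-4
E_final = propagate_error(C, tau, T, dt)
# Discrete Gronwall bound: E^k <= tau * (exp(C*T) - 1) / C for all k.
bound = tau * (math.exp(C * T) - 1.0) / C
assert 0 < E_final <= bound
```

The point of the sketch is that the accumulated error stays below an exponential-in-time multiple of the truncation error, mirroring the role of the exponential factor in the estimates below.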
Taking $\gamma=1$, a straightforward estimate of \autoref{eq:const Cs} using \autoref{eq:bound on F} gives to leading order
\begin{align}\label{eq: final est initial}
\sup_{0\leq k \leq T/\Delta t} E^k\leq \left [C_1 \Delta t C_t + C_2 h\sup_{0<t<T}\Vert u\Vert_{C^{0,1}(D;\mathbb{R}^3)}\right ],
\end{align}
where $C_t$ is independent of $\epsilon$ and depends explicitly on the $L^2$ norms of time derivatives of the solution (see \autoref{eq:const Ct}), and
\begin{align*}
C_1=\exp \left[T (1 + 6\bar{C}/\epsilon^2) \right]T,
\end{align*}
\begin{align*}
C_2=\exp \left[T (1 + 6\bar{C}/\epsilon^2) \right]T\left(1+\sqrt{3}\bar{C}(1+\frac{1}{\epsilon})+\frac{4\sqrt{3}\bar{C}}{\epsilon^2}\right).
\end{align*}
It is evident that the exponential factor could be large. However, we can choose times $T$ for which the effects of the exponential factor are diminished and $C_1$ and $C_2$ are not too large. To fix ideas, consider a $1$ cubic meter sample and a corresponding $1400$ meter per second shear wave speed. This wave speed is characteristic of plexiglass.
Then the time for a shear wave to traverse the sample is $718$ $\mu$-seconds. This is the characteristic time $T^\ast$, and a fracture experiment can last a few hundred $\mu$-seconds. The actual time in $\mu$-seconds of a fracture simulation is given by $TT^\ast$, where $T$ is the non-dimensional simulation time. The dimensionless constant $\bar{C}$ is $1.19$, and we take $\epsilon=1/10$ and dimensionless body force unity. For a simulation cycle of length $TT^\ast=1.5$ $\mu$-seconds the constants $C_1$ and $C_2$ in \autoref{eq: final est initial} are $0.0193$ and $7.976$ respectively. The solution after cycle time $T$ can be used as initial conditions for a subsequent run, and the process can be iterated. Unfortunately these estimates predict a total simulation time of $15$ $\mu$-seconds before the relative error becomes greater than one, even for a large number of spatial degrees of freedom. We point out that because the constants in the a-priori bound are necessarily pessimistic, the predicted simulation time is an order of magnitude below what is seen in experiment. Future work will focus on a-posteriori estimates for simulations and adaptive implementations of the finite difference scheme.
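As a hedged numerical aside, the constants $C_1$ and $C_2$ can be evaluated directly from the displayed formulas with the sample values quoted above; the exact figures obtained this way may differ slightly from those quoted in the text, depending on rounding conventions, but their orders of magnitude agree.

```python
import math

# Sample values quoted in the text: Cbar = 1.19, eps = 1/10, and a
# non-dimensional time T chosen so that T * T_ast = 1.5 microseconds.
Cbar, eps = 1.19, 0.1
T_ast = 718.0              # characteristic time in microseconds
T = 1.5 / T_ast            # non-dimensional simulation time

growth = math.exp(T * (1.0 + 6.0 * Cbar / eps**2)) * T
C1 = growth
C2 = growth * (1.0 + math.sqrt(3.0) * Cbar * (1.0 + 1.0 / eps)
               + 4.0 * math.sqrt(3.0) * Cbar / eps**2)
# For this short cycle the exponential factor is modest, so C1 is small
# and C2 stays below ten.
assert C1 < 0.1 and 1.0 < C2 < 10.0
```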
In conclusion, the analysis shows that the method is stable and that one can control the error by choosing the time step and spatial discretization sufficiently small. However, errors do accumulate with time steps, and this limits the time interval for simulation. We have identified local perturbations for which the error accumulates with time step for the implicit Euler method. These unstable local perturbations correspond to regions in which a preponderance of bonds are in the softening regime.
\end{document} |
\begin{document}
\title{A Comment on ``Asking photons where they have been in plain language''}
\author{ Lev Vaidman}
\affiliation{ Raymond and Beverly Sackler School of Physics and Astronomy,
Tel-Aviv University, Tel-Aviv 69978, Israel
}
\noindent
\begin{abstract}
The criticism of the experiment showing disconnected traces of photons passing through a nested Mach-Zehnder interferometer is shown to be unfounded.
\end{abstract}
\maketitle
In a recent Letter \cite{sokol} Sokolovski analyzes the experiment by Danan {\it et al.} \cite{Danan} and the theoretical proposal for this experiment \cite{past}. The experiment shows a disconnected trace left by the photons in the interferometer. Sokolovski agrees that the calculations in the theoretical paper are correct and that the results of the experiment are expected. However, he argues that the conclusion of \cite{Danan} and \cite{past} that ``the past of the photons is not represented by continuous trajectories'' is unjustified. He writes: ``A simple analysis by standard quantum mechanics shows that this claim is false.'' In this Comment I will clarify the meaning of the results of \cite{Danan} and \cite{past} and refute the criticism of Sokolovski.
First, standard quantum mechanics certainly cannot show that the discussed sentence is false. In the framework of standard quantum mechanics photons do not have trajectories of any type.
From the text and the references, I understand that Sokolovski considers Feynman's paths formulation of quantum mechanics to be ``standard quantum mechanics''. While it might be considered a standard calculational tool, only a small minority attaches ontological meaning to Feynman's paths. Indeed, papers \cite{Danan} and \cite{past} have no arguments against continuity of Feynman's paths. But these paths do not represent a useful picture of the past of the photon. Sokolovski analyzes the arms of the interferometer, but Feynman's paths of the photons are everywhere. Every continuous line between the source and the detector is a Feynman path of the photon. Independently of the design of the interferometer, the photons are everywhere. This is not an interesting answer to the question the photons were asked in \cite{Danan}: ``Where have they been?''.
All that experiment \cite{Danan} shows is that there are cases in which the past of the photon, defined in \cite{past} as the places with a significant weak trace, has parts which are not connected by continuous lines to the source and the detector. And this is a nontrivial example, since in most experiments with interferometers all traces are connected.
To avoid misunderstanding, I repeat that all the results of \cite{Danan} and \cite{past} can also be explained by standard quantum mechanics, as has already been stated in these works. Also, ``a prudent advice for its authors to double check that in Fig. 2 of \cite{Danan} the signal at the frequencies $f_E$ and $f_F$ is indeed absent, and not just too small to be seen against the background noise'' is not needed. A ``tiny leakage of light in the inner interferometer'', which leads to these signals below the noise, is explicitly mentioned in \cite{Danan} and calculated (Eq.~8) in \cite{past}. A special status of regions $E$ and $F$, where the trace is nonzero but negligible relative to the regions $A$, $B$ and $C$, is discussed in \cite{trace}.
A representative example of the ``plain language'' of Sokolovski: ``The particle remains in a real pathway combining all interfering paths'' is not part of the language of standard quantum mechanics since its formalism has no concept for the past of a pre- and post-selected particle. The past defined as regions with a weak trace is also not part of the standard formalism. However, Sokolovski showed nothing ``false'' or inconsistent about this approach.
This work has been supported in part by the Israel Science Foundation Grant No. 1311/14,
the German-Israeli Foundation for Scientific Research and Development Grant No. I-1275-303.14.
\end{document} |
\begin{document}
\centerline{\Large\bf The functor of units of Burnside rings for $p$-groups}
\centerline{\bf Serge Bouc}
\centerline{\footnotesize LAMFA - UMR 6140 - Universit\'e de Picardie-Jules Verne}
\centerline{\footnotesize 33 rue St Leu - 80039 - Amiens Cedex 1 - France}
\centerline{\footnotesize\tt email : [email protected]}
\def\thefootnote{}\footnotetext{{\bf AMS Subject Classification :} 19A22, 16U60 {\bf Keywords :} Burnside ring, unit, biset functor}\def\thefootnote{\arabic{footnote}}
{\footnotesize\bf Abstract:} {\footnotesize In this note I describe the structure of the biset functor $B^\times$ sending a $p$-group $P$ to the group of units of its Burnside ring $B(P)$. In particular, I show that $B^\times$ is a rational biset functor. It follows that if $P$ is a $p$-group, the structure of $B^\times(P)$ can be read from a genetic basis of $P$~: the group $B^\times(P)$ is an elementary abelian 2-group of rank equal to the number of isomorphism classes of rational irreducible representations of~$P$ whose type is trivial, cyclic of order 2, or dihedral.}
\section{Introduction}
If $G$ is a finite group, denote by $B(G)$ the Burnside ring of $G$, i.e. the Grothendieck ring of the category of finite $G$-sets (see e.g. \cite{handbook}). The question of the structure of the multiplicative group $B^\times(G)$ has been studied by T.~tom~Dieck~(\cite{tomdieckgroups}), T.~Matsuda~(\cite{matsuda}), T.~Matsuda and T.~Miyata~(\cite{matsudamiyata}), and T.~Yoshida~(\cite{yoshidaunit}), by geometric and algebraic methods.\par
Recently, E. Yal\c cin wrote a very nice paper (\cite{yalcin}), in which he proves an induction theorem for $B^\times$ for $2$-groups, which says that if $P$ is a $2$-group, then any element of $B^\times(P)$ is a sum of elements obtained by inflation and tensor induction from sections $(T,S)$ of $P$, such that $T/S$ is trivial or dihedral.\par
The main theorem of the present paper implies a more precise form of Yal\c cin's Theorem, but the proof is independent, and uses entirely different methods. In particular, the biset functor techniques developed in \cite{doublact}, \cite{fonctrq} and~\cite{bisetsections} lead to a precise description of $B^\times(P)$, when $P$ is a 2-group (actually also for arbitrary $p$-groups, but the case $p$ odd is known to be rather trivial). The main ingredient consists in showing that $B^\times$ is a {\em rational} biset functor, and this is done by showing that the functor $B^\times$ (restricted to $p$-groups) is a subfunctor of the functor $\mathbb{F}_2R_\mathbb{Q}^*$. This leads to a description of $B^\times(P)$ in terms of a {\em genetic basis} of $P$, or equivalently, in terms of rational irreducible representations of $P$.\par
The paper is organized as follows~: in Section~2, I recall the main definitions and notation on biset functors. Section~3 deals with genetic subgroups and rational biset functors. Section~4 gives a natural exposition of the biset functor structure of $B^\times$. In Section~5, I state results about faithful elements in $B^\times(P)$ for some specific $p$-groups $P$. In Section~6, I introduce a natural transformation of biset functors from $B^\times$ to $\mathbb{F}_2B^*$. This transformation is injective, and in Section~7, I show that the image of its restriction to $p$-groups is contained in the subfunctor $\mathbb{F}_2R_\mathbb{Q}^*$ of $\mathbb{F}_2B^*$. This is the key result, leading in Section~8 to a description of the lattice of subfunctors of the restriction of~$B^\times$ to $p$-groups~: it is always a uniserial $p$-biset functor (even simple if $p$ is odd). This also provides an answer to the question, raised by Yal\c cin~(\cite{yalcin}), of the surjectivity of the exponential map $B(P)\to B^\times(P)$ for a 2-group~$P$.
\section{Biset functors}
\begin{enonce}{Notation and Definition} Denote by $\mathcal{C}$ the following category~:
\begin{itemize}
\item The objects of $\mathcal{C}$ are the finite groups.
\item If $G$ and $H$ are finite groups, then $\hbox{\rm Hom}_\mathcal{C}(G,H)=B(H\times G^{op})$ is the Burnside group of finite $(H,G)$-bisets. An element of this group is called a {\em virtual} $(H,G)$-biset.
\item The composition of morphisms is $\mathbb{Z}$-bilinear, and if $G$, $H$, $K$ are finite groups, if $U$ is a finite $(H,G)$-biset, and $V$ is a finite $(K,H)$-biset, then the composition of (the isomorphism classes of) $V$ and $U$ is the (isomorphism class) of $V\times_HU$. The identity morphism $\hbox{\rm Id}_G$ of the group $G$ is the class of the set $G$, with left and right action by multiplication.
\end{itemize}
If $p$ is a prime number, denote by $\mathcal{C}_p$ the full subcategory of $\mathcal{C}$ whose objects are finite $p$-groups.\par
Let $\mathcal{F}$ denote the category of additive functors from $\mathcal{C}$ to the category $\gmod{\mathbb{Z}}$ of abelian groups. An object of $\mathcal{F}$ is called a {\em biset functor}. Similarly, denote by $\mathcal{F}_p$ the category of additive functors from $\mathcal{C}_p$ to $\gmod{\mathbb{Z}}$. An object of $\mathcal{F}_p$ will be called a {\em $p$-biset functor}.
\end{enonce}
If $F$ is an object of $\mathcal{F}$, if $G$ and $H$ are finite groups, and if $\varphi\in\hc{G}{H}$, then the image of $w\in F(G)$ by the map $F(\varphi)$ will generally be denoted by $\varphi(w)$. The composition $\psi\circ\varphi$ of morphisms $\varphi\in\hc{G}{H}$ and $\psi\in\hc{H}{K}$ will also be denoted by $\psi\times_H\varphi$.
\begin{enonce}{Notation} The {\em Burnside} biset functor (defined e.g. as the Yoneda functor $\hc{{\bf 1}}{-}$), will be denoted by $B$. The functor of rational representations (see Section 1 of \cite{fonctrq}) will be denoted by $R_\mathbb{Q}$. The restriction of $B$ and $R_\mathbb{Q}$ to $\mathcal{C}_p$ will also be denoted by $B$ and $R_\mathbb{Q}$.
\end{enonce}
\subsection{Examples~:}\label{indresinfdefiso} Recall that this formalism of bisets gives a single framework for the usual operations of induction, restriction, inflation, deflation, and transport by isomorphism via the following correspondences~:
\begin{itemize}
\item If $H$ is a subgroup of $G$, then let $\hbox{\rm Ind}_H^G\in\hc{H}{G}$ denote the set~$G$, with left action of $G$ and right action of $H$ by multiplication.
\item If $H$ is a subgroup of $G$, then let $\hbox{\rm Res}_H^G\in\hc{G}{H}$ denote the set~$G$, with left action of $H$ and right action of $G$ by multiplication.
\item If $N\mathop{\underline\triangleleft} G$, and $H=G/N$, then let $\hbox{\rm Inf}_H^G\in\hc{H}{G}$ denote the set~$H$, with left action of $G$ by projection and multiplication, and right action of $H$ by multiplication.
\item If $N\mathop{\underline\triangleleft} G$, and $H=G/N$, then let $\hbox{\rm Def}_H^G\in\hc{G}{H}$ denote the
set~$H$, with left action of $H$ by multiplication, and right action of $G$ by projection and multiplication.
\item If $\varphi: G\to H$ is a group isomorphism, then let $\hbox{\rm Iso}_G^H=\hbox{\rm Iso}_G^H(\varphi)\in \hc{G}{H}$ denote the set~$H$, with left action of $H$ by multiplication, and right action of $G$ by taking image by $\varphi$, and then multiplying in~$H$.
\end{itemize}
\begin{enonce}{Definition} A {\em section} of the group $G$ is a pair $(T,S)$ of subgroups of $G$ such that $S\mathop{\underline\triangleleft} T$.
\end{enonce}
\begin{enonce}{Notation} If $(T,S)$ is a section of $G$, set
$$\hbox{\rm Indinf}_{T/S}^G=\hbox{\rm Ind}_T^G\hbox{\rm Inf}_{T/S}^T\ressort{1cm}\hbox{and}\ressort{1cm}\hbox{\rm Defres}_{T/S}^G=\hbox{\rm Def}_{T/S}^T\hbox{\rm Res}_T^G\;\;\;.$$
\end{enonce}
Then $\hbox{\rm Indinf}_{T/S}^G\cong G/S$ as $(G,T/S)$-biset, and $\hbox{\rm Defres}_{T/S}^G\cong S\backslash G$ as $(T/S,G)$-biset.
\begin{enonce}{Notation} \label{Tu}Let $G$ and $H$ be groups, let $U$ be an $(H,G)$-biset, and let $u\in U$. If $T$ is a subgroup of $H$, set
$$T^u=\{g\in G\mid \exists t\in T,\; tu=ug\}\;\;\;.$$
This is a subgroup of $G$. Similarly, if $S$ is a subgroup of $G$, set
$${^u}S=\{h\in H\mid\exists s\in S,\;us=hu\}\;\;\;.$$
This is a subgroup of $H$.
\end{enonce}
\begin{enonce}{Lemma}\label{UmodG} Let $G$ and $H$ be groups, let $U$ be an $(H,G)$-biset, and let $S$ be a subgroup of $G$. Then there is an isomorphism of $H$-sets
$$U/G=\bigsqcup_{u\in [H\backslash U/S]}H/{^uS}\;\;\;,$$
where $[H\backslash U/S]$ is a set of representatives of $(H,S)$-orbits on $U$.
\end{enonce}
\noindent{\bf Proof: } Indeed $H\backslash U/S$ is the set of orbits of $H$ on $U/S$, and $^uS$ is the stabilizer of~$uS$ in~$H$.
\subsection{Opposite bisets~:} If $G$ and $H$ are finite groups, and if $U$ is a finite $(H,G)$-biset, then let $U^{op}$ denote the opposite biset~: as a set, it is equal to~$U$, and it is a $(G,H)$-biset for the following action
$$\forall h\in H,\forall u\in U,\forall g\in G,\;g.u.h\;({\rm in}\;U^{op})=h^{-1}ug^{-1}\;({\rm in}\;U)\;\;\;.$$
This definition can be extended by linearity, to give an isomorphism
$$\varphi\mapsto\varphi^{op}: \hc{G}{H}\to\hc{H}{G}\;\;\;.$$
It is easy to check that $(\varphi\circ\psi)^{op}=\psi^{op}\circ\varphi^{op}$, with obvious notation, and the functor
$$\left\{ \begin{array}{l}G\mapsto G\\\varphi\mapsto\varphi^{op}\end{array}\right.$$
is an equivalence of categories from $\mathcal{C}$ to the dual category, which restricts to an equivalence of $\mathcal{C}_p$ to its dual category.\par
\begin{rem}{Example} If $G$ is a finite group, and $(T,S)$ is a section of $G$, then
$$(\hbox{\rm Indinf}_{T/S}^G)^{op}\cong\hbox{\rm Defres}_{T/S}^G$$
as $(T/S,G)$-bisets.
\end{rem}
\begin{enonce}{Definition and Notation} If $F$ is a biset functor, the {\em dual} biset functor~$F^*$ is defined by
$$F^*(G)=\hbox{\rm Hom}_\mathbb{Z}(F(G),\mathbb{Z})\;\;\;,$$
for a finite group $G$, and by
$$F^*(\varphi)(\alpha)=\alpha\circ F(\varphi^{op})\;\;\;,$$
for any $\alpha\in F^*(G)$, any finite group $H$, and any $\varphi\in\hc{G}{H}$.
\end{enonce}
\subsection{Some idempotents in $\ec{G}$~:} Let $G$ be a finite group, and let $N\mathop{\underline\triangleleft} G$. Then it is clear from the definitions that
$$\hbox{\rm Def}_{G/N}^G\circ \hbox{\rm Inf}_{G/N}^G=(G/N)\times_G(G/N)=\hbox{\rm Id}_{G/N}\;\;\;.$$
It follows that the composition $e_N^G=\hbox{\rm Inf}_{G/N}^G\circ\hbox{\rm Def}_{G/N}^G$ is an idempotent in $\ec{G}$. Moreover, if $M$ and $N$ are normal subgroups of $G$, then $e_N^G\circ e_M^G=e_{NM}^G$, and $e_{\bf 1}^G=\hbox{\rm Id}_G$.
\begin{enonce}{Lemma}{\rm (\cite{bisetsections} Lemma 2.5)} \label{decomposition} If $N\mathop{\underline\triangleleft} G$, define $f_N^G\in\ec{G}$ by
$$f_N^G=\sumb{M\mathop{\underline\triangleleft} G}{N\subseteq M}\mu_{\mathop{\underline\triangleleft} G}(N,M)e_M^G\;\;\;,$$
where $\mu_{\mathop{\underline\triangleleft} G}$ denotes the M\"obius function of the poset of normal subgroups of~$G$. Then the elements $f_N^G$, for $N\mathop{\underline\triangleleft} G$, are orthogonal idempotents of $\ec{G}$, and their sum is equal to $\hbox{\rm Id}_G$.
\end{enonce}
Moreover, it is easy to check from the definition that for $N\mathop{\underline\triangleleft} G$,
\begin{equation}\label{somme partielle}
f_N^G=\hbox{\rm Inf}_{G/N}^G\circ f_{\bf 1}^{G/N}\circ \hbox{\rm Def}_{G/N}^G\;\;\;,
\end{equation}
and
$$e_N^G=\hbox{\rm Inf}_{G/N}^G\circ \hbox{\rm Def}_{G/N}^G=\sumb{M\mathop{\underline\triangleleft} G}{M\supseteq N}f_M^G\;\;\;.$$
\begin{enonce}{Lemma}\label{partial}
If $N$ is a non-trivial normal subgroup of $G$, then
$$f_{\bf 1}^G\circ \hbox{\rm Inf}_{G/N}^G=0\;\;\;\hbox{ and }\;\;\;\hbox{\rm Def}_{G/N}^G\circ f_{\bf 1}^G=0\;\;\;.$$
\end{enonce}
\noindent{\bf Proof: } Indeed by~\ref{somme partielle}
\begin{eqnarray*}
f_{\bf 1}^G\circ \hbox{\rm Inf}_{G/N}^G&=&f_{\bf 1}^G\circ \hbox{\rm Inf}_{G/N}^G\circ \hbox{\rm Def}_{G/N}^G\circ\hbox{\rm Inf}_{G/N}^G\\
&=&\sumb{M\mathop{\underline\triangleleft} G}{M\supseteq N}f_{\bf 1}^Gf_M^G\hbox{\rm Inf}_{G/N}^G=0\;\;\;,
\end{eqnarray*}
since $M\neq {\bf 1}$ when $M\supseteq N$. The other equality of the lemma follows by taking opposite bisets.~\hfill\raisebox{.5ex}{\framebox[1ex]{}}\medskip
\begin{rem}{Remark}\label{f1p}
It was also shown in Section 2.7~of~\cite{bisetsections} that if $P$ is a $p$-group, then
$$f_{\bf 1}^P=\sum_{N\subseteq \Omega_1Z(P)}\mu({\bf 1},N)P/N\;\;\;,$$
where $\mu$ is the M\"obius function of the poset of subgroups of~$P$, and $\Omega_1Z(P)$ is the subgroup of the centre of $P$ consisting of elements of order at most $p$.
\end{rem}
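\begin{rem}{Example} As a quick sanity check of this formula (not part of the original text), let $P$ be cyclic of order $p$. Then $\Omega_1Z(P)=P$, the only subgroups of $P$ are ${\bf 1}$ and $P$, and $\mu({\bf 1},{\bf 1})=1$, $\mu({\bf 1},P)=-1$. The formula gives
$$f_{\bf 1}^P=P/{\bf 1}-P/P=\hbox{\rm Id}_P-e_P^P\;\;\;,$$
in agreement with Lemma~\ref{decomposition}, since ${\bf 1}$ and $P$ are the only normal subgroups of $P$ in this case.
\end{rem}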
\begin{enonce}{Notation and Definition} If $F$ is a biset functor, and if $G$ is a finite group, then the idempotent $f_{\bf 1}^G$ of $\ec{G}$ acts on $F(G)$. Its image
$${\partial}F(G)=f_{\bf 1}^GF(G)$$
is a direct summand of $F(G)$ as $\mathbb{Z}$-module~: it will be called the set of {\em faithful} elements of $F(G)$.
\end{enonce}
The reason for this name is that any element $u\in F(G)$ which is inflated from a proper quotient of $G$ is such that $F(f_{\bf 1}^G)u=0$. From Lemma~\ref{partial}, it is also clear that
$$\partial F(G)=\mathop{\bigcap}_{{\bf 1}\neq N\mathop{\underline\triangleleft} G}\hbox{\rm Ker}\;\hbox{\rm Def}_{G/N}^G\;\;\;.$$
\section{Genetic subgroups and rational $p$-biset functors}
The following definitions are essentially taken from Section~2 of~\cite{dadegroup}~:
\begin{enonce}{Definition and Notation} Let $P$ be a finite $p$-group. If $Q$ is a subgroup of $P$, denote by $Z_P(Q)$ the subgroup of $P$ defined by
$$Z_P(Q)/Q=Z(N_P(Q)/Q)\;\;\;.$$
A subgroup $Q$ of $P$ is called {\em genetic} if it satisfies the following two conditions~:
\begin{enumerate}
\item The group $N_P(Q)/Q$ has normal $p$-rank 1.
\item If $x\in P$, then $Q^x\cap Z_P(Q)\subseteq Q$ if and only if $Q^x=Q$.
\end{enumerate}
Two genetic subgroups $Q$ and $R$ are said to be {\em linked modulo $P$} (notation $Q\estliemod{P}R$), if there exist elements $x$ and $y$ in $P$ such that $Q^x\cap Z_P(R)\subseteq R$ and $R^y\cap Z_P(Q)\subseteq Q$.\par
This relation is an equivalence relation on the set of genetic subgroups of~$P$. The set of equivalence classes is in one-to-one correspondence with the set of isomorphism classes of rational irreducible representations of $P$. A {\em genetic basis} of $P$ is a set of representatives of these equivalence classes.
\end{enonce}
If $V$ is an irreducible representation of $P$, then the {\em type} of $V$ is the isomorphism class of the group $N_P(Q)/Q$, where $Q$ is a genetic subgroup of $P$ in the equivalence class corresponding to $V$ by the above bijection.
\begin{rem}{Remark} The definition of the relation $\estliemod{P}$ given here is different from Definition~2.9 of~\cite{dadegroup}, but it is equivalent to it, by Lemma~4.5 of~\cite{bisetsections}.
\end{rem}
The following is Theorem~3.2 of~\cite{bisetsections}, in a slightly different form~:
\begin{enonce}{Theorem} Let $P$ be a finite $p$-group, and $\mathcal{G}$ be a genetic basis of $P$. Let $F$ be a $p$-biset functor. Then the map
$$\mathcal{I}_\mathcal{G}=\oplus_{Q\in\mathcal{G}}\hbox{\rm Indinf}_{N_P(Q)/Q}^P:\oplus_{Q\in\mathcal{G}}\partial F\big(N_P(Q)/Q\big)\to F(P)$$
is split injective.
\end{enonce}
\begin{rem}{Remark} There are two differences with the initial statement of Theorem~3.2 of~\cite{bisetsections}~: here I use genetic {\em subgroups} instead of genetic {\em sections}, because these two notions are equivalent by Proposition~4.4 of~\cite{bisetsections}. Also the definition of the map $\mathcal{I}_\mathcal{G}$ is apparently different~: with the notation of~\cite{bisetsections}, the map $\mathcal{I}_\mathcal{G}$ is the sum of the maps $F(a_Q)$, where $a_Q$ is the trivial $(P,P/P)$-biset if $Q=P$, and $a_Q$ is the virtual $(P,N_P(Q)/Q)$-biset $P/Q-P/\hat{Q}$ if $Q\neq P$, where $\hat{Q}$ is the unique subgroup of $Z_P(Q)$ containing $Q$, and such that $|\hat{Q}:Q|=p$. But it is easy to see that the restriction of the map $F(P/\hat{Q})$ to $\partial F(N_P(Q)/Q)$ is actually~0. Moreover, the map $F(a_Q)$ is equal to $\hbox{\rm Indinf}_{N_P(Q)/Q}^P$. So in fact, the above map $\mathcal{I}_\mathcal{G}$ is the same as the one defined in Theorem~3.2 of~\cite{bisetsections}.
\end{rem}
\begin{enonce}{Definition} A $p$-biset functor $F$ is called {\em rational} if for any finite $p$-group $P$ and any genetic basis $\mathcal{G}$ of $P$, the map $\mathcal{I}_\mathcal{G}$ is an isomorphism.
\end{enonce}
It was shown in Proposition~7.4 of~\cite{bisetsections} that subfunctors, quotient functors, and dual functors of rational $p$-biset functors are rational.
\section{The functor of units of the Burnside ring}
\begin{enonce}{Notation} If $G$ is a finite group, let $B^\times(G)$ denote the group of units of the Burnside ring $B(G)$.
\end{enonce}
If $G$ and $H$ are finite groups, and if $U$ is a finite $(H,G)$-biset, recall that $U^{op}$ denotes the $(G,H)$-biset obtained from $U$ by reversing the actions. If $X$ is a finite $G$-set, then $T_U(X)=\hbox{\rm Hom}_G(U^{op},X)$ is a finite $H$-set. The correspondence $X\mapsto T_U(X)$ can be extended to a correspondence $T_U:B(G)\to B(H)$, which is multiplicative (i.e. $T_U(ab)=T_U(a)T_U(b)$ for any $a,b\in B(G)$), and preserves identity elements (i.e. $T_U(G/G)=H/H$). This extension to $B(G)$ can be built by different means; the following construction is described in Section~4.1 of \cite{tensams}~: if $a$ is an element of $B(G)$, then there exists a finite $G$-poset $X$ such that $a$ is equal to the Lefschetz invariant $\Lambda_X$. Now $\hbox{\rm Hom}_G(U^{op},X)$ has a natural structure of $H$-poset, and one can set $T_U(a)=\Lambda_{{\scriptstyle\rm Hom}_G(U^{op},X)}$. It is an element of $B(H)$, which does not depend on the choice of the poset $X$ such that $a=\Lambda_X$, because with Notation~\ref{Tu} and Lemma~\ref{UmodG}, for any subgroup $T$ of $H$ the Euler-Poincar\'e characteristic $\chi\left(\hbox{\rm Hom}_G(U^{op},X)^T\right)$ can be computed by
$$\chi\left(\hbox{\rm Hom}_G(U^{op},X)^T\right)=\prod_{u\in T\backslash U/G}\chi(X^{T^u})\;\;\;,$$
and the latter only depends on the element $\Lambda_X$ of $B(G)$. As a consequence, one has that
$$|T_U(a)^T|=\prod_{u\in T\backslash U/G}|a^{T^u}|\;\;\;.$$
It follows in particular that $T_U\big(B^\times(G)\big)\subseteq B^\times(H)$. Moreover, it is easy to check that $T_U=T_{U'}$ if $U$ and $U'$ are isomorphic $(H,G)$-bisets, that $T_{U_1\sqcup U_2}(a)=T_{U_1}(a)T_{U_2}(a)$ for any $(H,G)$-bisets $U_1$ and $U_2$, and any $a\in B(G)$.\par
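As a hedged illustration (a toy computation, not part of the original text), the mark formula above can be checked in the simplest case of tensor induction from the trivial group $G={\bf 1}$ to $H=C_2$, with $U=C_2$ as $(H,G)$-biset: then $T_U$ sends a set $X$ with $n$ elements to $X\times X$ with the swap action, whose marks are $n^2$ at the trivial subgroup and $n$ at $C_2$.

```python
from itertools import product

def tensor_induce_C2(X):
    """Tensor induction of a plain set X from the trivial group to C_2:
    underlying set X x X, with the generator of C_2 acting by swapping
    the two coordinates."""
    points = list(product(X, X))
    swap = {pt: (pt[1], pt[0]) for pt in points}
    return points, swap

X = [0, 1, 2]
points, swap = tensor_induce_C2(X)
fixed = [pt for pt in points if swap[pt] == pt]
# Mark formula: |T_U(X)^1| = n^2 (two (1,1)-orbits on U, each T^u = 1),
# and |T_U(X)^{C_2}| = n (one (C_2,1)-orbit, with T^u = 1).
assert len(points) == len(X) ** 2
assert len(fixed) == len(X)
```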
It follows that there is a well defined bilinear pairing
$$B(H\times G^{op})\times B^\times(G)\to B^\times(H)\;\;\;,$$
extending the correspondence $(U,a)\mapsto T_U(a)$. If $f\in B(H\times G^{op})$ (i.e. if $f$ is a virtual $(H,G)$-biset), the corresponding group homomorphism $B^\times(G)\to B^\times(H)$ will be denoted by $B^\times (f)$. \par
Now let $K$ be a third group, and $V$ be a finite $(K,H)$-set. If $X$ is a finite $G$-set, there is a canonical isomorphism of $K$-sets
$$\hbox{\rm Hom}_H\big(V^{op},\hbox{\rm Hom}_G(U^{op},X)\big)\cong \hbox{\rm Hom}_G\big((V\times_HU)^{op},X\big)\;\;\;,$$
showing that $T_V\circ T_U=T_{V\times_HU}$.\par
It follows more generally that $B^\times(g)\circ B^\times(f)=B^\times(g\times_Hf)$ for any $g\in B(K\times H^{op})$ and any $f\in B(H\times G^{op})$. Finally this shows~:
\begin{enonce}{Proposition} The correspondence sending a finite group $G$ to $B^\times(G)$, and a homomorphism $f$ in $\mathcal{C}$ to $B^\times(f)$, is a biset functor.
\end{enonce}
\begin{rem}{Remark and Notation} The restriction and inflation maps for the functor $B^\times$ are the usual ones for the functor $B$. The deflation map $\hbox{\rm Def}_{G/N}^G$ corresponds to taking fixed points under $N$ (so it {\em does not coincide} with the usual deflation map for $B$, which consists in taking {\em orbits under $N$}). \par
Similarly, if $H$ is a subgroup of $G$, the induction map from $H$ to $G$ for the functor $B^\times$ is sometimes called {\em multiplicative induction}. I will call it {\em tensor induction}, and denote it by $\hbox{\rm Ten}_H^G$. If $(T,S)$ is a section of $G$, I will also set $\hbox{\rm Teninf}_{T/S}^P=\hbox{\rm Ten}_T^P\hbox{\rm Inf}_{T/S}^T$.
\end{rem}
\section{Faithful elements in $B^\times(G)$}
\begin{enonce}{Notation and definition} Let $G$ be a finite group. Denote by~$[s_G]$ a set of representatives of conjugacy classes of subgroups of $G$. Then the elements $G/L$, for $L\in [s_G]$, form a basis of $B(G)$ over $\mathbb{Z}$, called the {\em canonical basis} of $B(G)$.
\end{enonce}
The primitive idempotents of $\mathbb{Q} B(G)$ are also indexed by $[s_G]$~: if $H\in[s_G]$, the corresponding idempotent $e_H^G$ is equal to
$$e_H^G=\frac{1}{|N_G(H)|}\sum_{K\subseteq H}|K|\mu(K,H)G/K\;\;\;,$$
where $\mu(K,H)$ denotes the M\"obius function of the poset of subgroups of $G$, ordered by inclusion (see \cite{gluck}, \cite{yoshidaidemp}, or \cite{handbook}).\par
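As a hedged illustration (a toy computation, not part of the original text), the idempotent formula can be verified in the smallest non-trivial case $G=C_2$, using the table of marks $|(G/K)^H|$ for the two subgroups ${\bf 1}$ and $G$: each $e_H^G$ must have mark 1 at $H$ and 0 elsewhere.

```python
from fractions import Fraction

# Table of marks of G = C_2: marks[K][H] = |(G/K)^H| for H in {1, G}.
marks = {"G/1": {"1": 2, "G": 0},
         "G/G": {"1": 1, "G": 1}}

def mark(elt, H):
    """Mark at H of a rational linear combination of basis elements."""
    return sum(c * marks[b][H] for b, c in elt.items())

# e_H^G = (1/|N_G(H)|) sum_{K <= H} |K| mu(K,H) G/K, specialized to C_2:
e_triv = {"G/1": Fraction(1, 2)}                       # H = 1
e_full = {"G/1": Fraction(-1, 2), "G/G": Fraction(1)}  # H = G
# Primitive idempotents are characterized by their mark vectors.
assert mark(e_triv, "1") == 1 and mark(e_triv, "G") == 0
assert mark(e_full, "1") == 0 and mark(e_full, "G") == 1
```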
Recall that if $a\in B(G)$, then $a\cdot e_H^G=|a^H|e_H^G$ so that $a$ can be written as
$$ a=\sum_{H\in[s_G]}|a^H|e_H^G\;\;\;.$$
Now $a\in B^\times(G)$ if and only if $a\in B(G)$ and $|a^H|\in\{\pm 1\}$ for any $H\in [s_G]$, or equivalently if $a^2=G/G$. If now $P$ is a $p$-group, and if $p\neq 2$, since $|a^H|\equiv |a|\;(p)$ for any subgroup $H$ of $P$, it follows that $|a^H|=|a|$ for any $H$, thus $a=\pm P/P$. This shows the following well-known
\begin{enonce}{Lemma}\label{impair} If $P$ is an odd order $p$-group, then $B^\times(P)=\{\pm P/P\}$.
\end{enonce}
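As a hedged illustration (again a toy computation, not part of the original text), the criterion above can be used to enumerate $B^\times(C_2)$ by brute force over small integer coefficients in the canonical basis; one finds an elementary abelian group of order 4, consistent with the rank-2 description given in the abstract.

```python
# An element x*[G/1] + y*[G/G] of B(C_2) is a unit iff both of its
# marks, m_1 = 2x + y and m_G = y, lie in {+1, -1}.
units = []
for x in range(-3, 4):          # coefficient of G/1
    for y in range(-3, 4):      # coefficient of G/G
        m1, mG = 2 * x + y, y   # marks at the subgroups 1 and C_2
        if m1 in (1, -1) and mG in (1, -1):
            units.append((x, y))
# B^x(C_2) has order 4, i.e. it is elementary abelian of rank 2.
assert len(units) == 4
assert (0, 1) in units          # the identity element G/G
```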
\begin{rem}{Remark}
So in the sequel, when considering $p$-groups, the only really non-trivial case will occur for $p=2$. However, some statements will be given for arbitrary $p$-groups.
\end{rem}
\begin{enonce}{Notation} If $G$ is a finite group, denote by $F_G$ the set of subgroups~$H$ of~$G$ such that $H\cap Z(G)={\bf 1}$, and set $[F_G]=F_G\cap[s_G]$.
\end{enonce}
\begin{enonce}{Lemma}\label{groscentre} Let $G$ be a finite group. If $|Z(G)|>2$, then $\partial B^\times(G)$ is trivial.
\end{enonce}
\noindent{\bf Proof: } Indeed let $a\in\partial B^\times(G)$. Then $\hbox{\rm Def}_{G/N}^Ga$ is the identity element of $B^\times(G/N)$, for any non-trivial normal subgroup $N$ of $G$. Now suppose that $H$ is a subgroup of $G$ containing $N$. Then
$$|a^H|=|\hbox{\rm Defres}_{N_G(H)/H}^Ga|=|\hbox{\rm Iso}_{N_{G/N}(H/N)/(H/N)}^{N_G(H)/H}\hbox{\rm Defres}_{N_{G/N}(H/N)}^{G/N} \hbox{\rm Def}_{G/N}^Ga|=1\;\;\;.$$
In particular $|a^H|=1$ if $H\cap Z(G)\neq {\bf 1}$. It follows that there exists a subset~$A$ of~$[F_G]$ such that
$$a=G/G-2\sum_{H\in A}e_H^G\;\;\;.$$
If $A\neq\emptyset$, i.e. if $a\neq G/G$, let $L$ be a maximal element of $A$. Then $L\neq G$, because $Z(G)\neq {\bf 1}$. The coefficient of $G/L$ in the expression of $a$ in the canonical basis of $B(G)$ is equal to
$$-2\frac{|L|\mu(L,L)}{|N_G(L)|}=-2\frac{|L|}{|N_G(L)|}\;\;\;.$$
This is moreover an integer, since $a\in B^\times(G)$. It follows that $|N_G(L):L|$ is equal to 1 or 2. But since $L\cap Z(G)={\bf 1}$, the group $Z(G)$ embeds into the group $N_G(L)/L$. Hence $|N_G(L):L|\geq 3$, and this contradiction shows that $A=\emptyset$, thus $a=G/G$.~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
\begin{enonce}{Lemma} \label{critere} Let $P$ be a finite $2$-group, of order at least 4, and suppose that the maximal elements of $F_P$ have order 2. If $|P|\geq 2|F_P|$, then $\partial B^\times(P)$ is trivial.
\end{enonce}
\noindent{\bf Proof: } Let $a\in\partial B^\times(P)$. By the argument of the previous proof, there exists a subset $A$ of $[F_P]$ such that
$$a=P/P-2\sum_{H\in A}e_H^P\;\;\;.$$
The hypothesis implies that $\mu({\bf 1},H)=-1$ for any non-trivial element $H$ of $[F_P]$. Now if ${\bf 1}\in A$, the coefficient of $P/1$ in the expression of $a$ in the canonical basis of $B(P)$ is equal to
$$-2\frac{1}{|P|}+2\sum_{H\in A-\{{\bf 1}\}}\frac{1}{|N_P(H)|}=-2\frac{1}{|P|}+2\sum_{H\in \sur{A}-\{{\bf 1}\}}\frac{1}{|P|}=\frac{-4+2|\sur{A}|}{|P|}\;\;\;,$$
where $\sur{A}$ is the set of subgroups of $P$ which are conjugate to some element of~$A$. This coefficient is an integer if $a\in B(P)$, so $|P|$ divides $2|\sur{A}|-4$. But~$|\sur{A}|$ is always odd, since the trivial subgroup is the only normal subgroup of $P$ which is in $\sur{A}$ in this case. Thus $2|\sur{A}|-4$ is congruent to 2 modulo 4, and cannot be divisible by $|P|$, since $|P|\geq 4$. \par
So ${\bf 1}\notin A$, and the coefficient of $P/1$ in the expression of $a$ is equal to
$$2\sum_{H\in A}\frac{1}{|N_P(H)|}=\frac{2|\sur{A}|}{|P|}\;\;\;.$$
Now this is an integer, so $2|\sur{A}|$ is a multiple of the order of~$P$. But $2|\sur{A}|< 2|F_P|$ since ${\bf 1}\notin \sur{A}$. So if $2|F_P|\leq|P|$, it follows that
$\sur{A}$ is empty, and $A$ is empty. Hence $a=P/P$, as was to be shown.~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
\begin{enonce}{Corollary} \label{pasfidele}Let $P$ be a finite 2-group. Then the group $\partial B^\times(P)$ is trivial in each of the following cases~:
\begin{enumerate}
\item $P$ is abelian of order at least 3.
\item $P$ is generalized quaternion or semi-dihedral.
\end{enumerate}
\end{enonce}
\begin{rem}{Remark} Case 1 follows easily from Matsuda's Theorem (\cite{matsuda}). Case~2 follows from Lemma~4.6 of Yal\c cin (\cite{yalcin}).
\end{rem}
\noindent{\bf Proof: } Case 1 follows from Lemma~\ref{groscentre}. In Case 2, if $P$ is generalized quaternion, then $F_P=\{{\bf 1}\}$, thus $|P|\geq 2|F_P|$. And if $P$ is semi-dihedral, then there is a unique conjugacy class of non-trivial subgroups $H$ of~$P$ such that $H\cap Z(P)={\bf 1}$. Such a group has order 2, and $N_P(H)=HZ(P)$ has order~4. Thus $|F_P|=1+\frac{|P|}{4}$, and $|P|\geq 2|F_P|$ also in this case.~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
\begin{enonce}{Corollary} {\rm [Yal\c cin \cite{yalcin} Lemma~4.6 and Lemma~5.2]} \label{upsilon} Let $P$ be a $p$-group of normal $p$-rank 1. Then $\partial B^\times(P)$ is trivial, except if $P$ is
\begin{itemize}
\item the trivial group, and $\partial B^\times(P)$ is the group of order 2 generated by $\upsilon_P=-P/P$.
\item cyclic of order 2, and $\partial B^\times(P)$ is the group of order 2 generated by
$$\upsilon_P=P/P-P/{\bf 1}\;\;\;.$$
\item dihedral of order at least 16, and then $\partial B^\times(P)$ is the group of order 2 generated by the element
$$\upsilon_P=P/P+P/1-P/I-P/J\;\;\;,$$
where $I$ and $J$ are non-central subgroups of order 2 of $P$, not conjugate in $P$.
\end{itemize}
\end{enonce}
\noindent{\bf Proof: } Lemma~\ref{impair} and Lemma~\ref{groscentre} show that $\partial B^\times(P)$ is trivial, when $P$ has normal $p$-rank 1, and $P$ is not trivial, cyclic of order 2, or dihedral~: indeed then, the group $P$ is cyclic of order at least 3, or generalized quaternion, or semi-dihedral.\par
Now if $P$ is trivial, then obviously $B(P)=\mathbb{Z}$, so $B^\times(P)=\partial B^\times(P)=\{\pm P/P\}$. If $P$ has order 2, then clearly $B^\times(P)$ consists of $\pm P/P$ and $\pm(P/P-P/{\bf 1})$, and $\partial B^\times(P)=\{P/P,P/P-P/{\bf 1}\}$. Finally, if $P$ is dihedral, the set $F_P$ consists of the trivial group, and of two conjugacy classes of subgroups $H$ of order~2 of~$P$, and $N_P(H)=HZ$ for each of these, where~$Z$ is the centre of $P$. Thus
$$|F_P|=1+2\frac{|P|}{4}=1+\frac{|P|}{2}\;\;\;.$$
Now with the notation of the proof of Lemma~\ref{critere}, one has that $2|\sur{A}|\equiv 0\;(|P|)$, and $2|\sur{A}|<2|F_P|=2+|P|$. So either $A=\emptyset$, and in this case $a=P/P$, or $2|\sur{A}|=|P|$, which means that $\sur{A}$ is the whole set of non-trivial elements of~$F_P$. In this case
$$a=P/P-2(e_I^P+e_J^P)\;\;\;,$$
where $I$ and $J$ are non-central subgroups of order 2 of $P$, not conjugate in~$P$. It is then easy to check that
$$a=P/P+P/{\bf 1}-(P/I+P/J)\;\;\;,$$
so $a$ is indeed in $B(P)$, hence in $B^\times(P)$. Moreover $\hbox{\rm Def}_{P/Z}^Pa$ is the identity element of $B^\times(P/Z)$, so $a=f_{\bf 1}^Pa$, and $a\in\partial B^\times(P)$. This completes the proof.~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
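The count $|F_P|=1+|P|/2$ used above for dihedral groups can be confirmed by brute force on the smallest example (the permutation encoding of $D_8$ below is our own, for illustration only):

```python
from itertools import combinations, product

# D_8 as permutations of the square's vertices {0, 1, 2, 3}.
def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

ident = (0, 1, 2, 3)
r = (1, 2, 3, 0)          # rotation i -> i+1 mod 4
s = (0, 3, 2, 1)          # reflection i -> -i mod 4

# Close {ident, r, s} under composition to get the whole group.
G = {ident, r, s}
while True:
    new = {compose(p, q) for p, q in product(G, G)} - G
    if not new:
        break
    G |= new
assert len(G) == 8

centre = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}

# A subset containing the identity and closed under composition is a
# subgroup (the group is finite, so inverses come for free).
others = sorted(G - {ident})
subgroups = []
for k in range(len(others) + 1):
    for extra in combinations(others, k):
        H = {ident, *extra}
        if all(compose(p, q) in H for p in H for q in H):
            subgroups.append(H)

F_P = [H for H in subgroups if H & centre == {ident}]
assert len(F_P) == 1 + len(G) // 2   # |F_P| = 1 + |P|/2 = 5 for D_8
```

The five members of $F_{D_8}$ are the trivial subgroup and the four non-central reflection subgroups of order 2, as in the proof above.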
\section{A morphism of biset functors}
If $k$ is any commutative ring, there is an obvious isomorphism of biset functor from $kB^*=k\otimes_\mathbb{Z} B^*$ to $\hbox{\rm Hom}(B,k)$, which is defined for a group~$G$ by sending the element $\alpha=\sum_i\alpha_i\otimes \psi_i$, where $\alpha_i\in k$ and $\psi_i\in B^*(G)$, to the linear form $\tilde{\alpha}:B(G)\to k$ defined by $\tilde{\alpha}(G/H)=\sum_i \psi_i(G/H)\alpha_i$.
\begin{enonce}{Notation} Let $\{\pm 1\}=\mathbb{Z}^\times$ be the group of units of the ring $\mathbb{Z}$. The unique group isomorphism from $\{\pm 1\}$ to $\mathbb{Z}/2\mathbb{Z}$ will be denoted by $u\mapsto u_+$.
\end{enonce}
If $G$ is a finite group, and if $a\in B^\times(G)$, then recall that for each subgroup~$S$ of $G$, the integer $|a^S|$ is equal to $\pm 1$. Define a map $\epsilon_G : B^\times(G)\to \mathbb{F}_2B^*(G)$ by setting $\epsilon_G(a)(G/S)=|a^S|_+$, for any $a\in B^\times(G)$ and any subgroup $S$ of $G$.
\begin{enonce}{Proposition} The maps $\epsilon_G$ define an injective morphism of biset functors $$\epsilon:B^\times \to \mathbb{F}_2B^*\;\;\;.$$
\end{enonce}
\noindent{\bf Proof: } The injectivity of the map $\epsilon_G$ is obvious. Now let $G$ and $H$ be finite groups, and let $U$ be a finite $(H,G)$-biset. Also denote by $U$ the corresponding element of $B(H\times G^{op})$. If $a\in B^\times(G)$, and if $T$ is a subgroup of $H$, then
$$|B^\times(U)(a)^T|=\prod_{u\in T\backslash U/G}|a^{T^u}|\;\;\;.$$
Thus
\begin{eqnarray*}
\epsilon_H\left(B^\times(U)(a)\right)(H/T)&=&\left(\prod_{u\in T\backslash U/G}|a^{T^u}|\right)_+\\
&=&\sum_{u\in T\backslash U/G}|a^{T^u}|_+\\
&=&\sum_{u\in T\backslash U/G}\epsilon_G(a)(G/T^u)\\
&=&\epsilon_G(a)(U^{op}/T)\\
&=&\epsilon_G(a)(U^{op}\times_HH/T)\\
&=&\mathbb{F}_2B^*(U)\big(\epsilon_G(a)\big)(H/T)
\end{eqnarray*}
thus $\epsilon_H\circ B^\times(U)=\mathbb{F}_2B^*(U)\circ\epsilon_G$. Since both sides are additive with respect to $U$, the same equality holds when $U$ is an arbitrary element of $B(H\times G^{op})$, completing the proof.~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
\section{Restriction to $p$-groups}
The additional result that holds for finite $p$-groups (and not for arbitrary finite groups) is the Ritter-Segal theorem, which says that the natural transformation $B\to R_\mathbb{Q}$ of biset functors for $p$-groups is surjective. By duality, it follows that the natural transformation $i:kR_\mathbb{Q}^*\to kB^*$ is injective, for any commutative ring $k$. The following gives a characterization of the image $i(kR_\mathbb{Q}^*)$ inside $kB^*$~:
\begin{enonce}{Proposition} \label{caract}Let $p$ be a prime number, let $P$ be a $p$-group, let $k$ be a commutative ring. Then the element $\varphi\in kB^*(P)$ lies in $i\big(kR_\mathbb{Q}^*(P)\big)$ if and only if the element $\hbox{\rm Def}res_{T/S}^P\varphi$ lies in $i\big(kR_\mathbb{Q}^*(T/S)\big)$, for any section $T/S$ of~$P$ which is
\begin{itemize}
\item elementary abelian of rank~2, or non-abelian of order $p^3$ and exponent~$p$, if $p\neq 2$.
\item elementary abelian of rank~2, or dihedral of order at least 8, if $p=2$.
\end{itemize}
\end{enonce}
\noindent{\bf Proof: } Since the image of $kR_\mathbb{Q}^*$ is a subfunctor of $kB^*$, if $\varphi\in i\big(kR_\mathbb{Q}^*(P)\big)$, then $\hbox{\rm Def}res_{T/S}^P\varphi\in i\big(kR_\mathbb{Q}^*(T/S)\big)$, for any section $(T,S)$ of $P$. \par
Conversely, consider the exact sequence of biset functors over $p$-groups
$$0\to K\to B\to R_\mathbb{Q}\to 0\;\;\;.$$
Every evaluation of this sequence at a particular $p$-group is a split exact sequence of (free) abelian groups. Hence by duality, for any ring $k$, there is an exact sequence
$$0\to kR_\mathbb{Q}^*\to kB^*\to kK^*\to 0\;\;\;.$$
With the identification $kB^*\cong\hbox{\rm Hom}_\mathbb{Z}(B,k)$, this means that if $P$ is a $p$-group, the element $\varphi\in kB^*(P)$ lies in $i\big(kR_\mathbb{Q}^*(P)\big)$ if and only if $\varphi\big(K(P)\big)=0$. Now by Corollary~6.16 of \cite{dadegroup}, the group $K(P)$ is the set of linear combinations of elements of the form $\hbox{\rm Ind}inf_{T/S}^P\theta(\kappa)$, where $T/S$ is a section of $P$, and $\theta$ is a group isomorphism from one of the groups listed in the proposition to $T/S$, and $\kappa$ is a specific element of $K(T/S)$ in each case. The proposition follows, because
$$\varphi\big(\hbox{\rm Ind}inf_{T/S}^P\theta(\kappa)\big)=(\hbox{\rm Def}res_{T/S}^P\varphi)\big(\theta(\kappa)\big)\;\;\;,$$
and this is zero if $\hbox{\rm Def}res_{T/S}^P\varphi$ lies in $i\big(kR_\mathbb{Q}^*(T/S)\big)$.~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
\begin{enonce}{Theorem} Let $p$ be a prime number, and $P$ be a finite $p$-group. The image of the map $\epsilon_P$ is contained in $i\big(\mathbb{F}_2R_\mathbb{Q}^*(P)\big)$.
\end{enonce}
\noindent{\bf Proof: } Let $a\in B^\times(P)$, and let $T/S$ be any section of $P$. Since
$$\hbox{\rm Def}res_{T/S}^P\epsilon_P(a)=\epsilon_{T/S}\hbox{\rm Def}res_{T/S}^Pa\;\;\;,$$
by Proposition~\ref{caract}, it is enough to check that the image of $\epsilon_P$ is contained in~$i\big(\mathbb{F}_2R_\mathbb{Q}^*(P)\big)$, when $P$ is elementary abelian of rank~2 or non-abelian of order $p^3$ and exponent~$p$ if $p$ is odd, or when $P$ is elementary abelian of rank~2 or dihedral if $p=2$.\par
Now if $N$ is a normal subgroup of $P$, one has that
$$f_N^P\epsilon_P(a)=\hbox{\rm Inf}_{P/N}^P\left(\epsilon_{P/N}(f_{{\bf 1}}^{P/N}\hbox{\rm Def}_{P/N}^Pa)\right)\;\;\;.$$
Thus by induction on the order of $P$, one can suppose $a\in \partial B^\times(P)$. But if $P$ is elementary abelian of rank~2, or if $P$ has odd order, then $\partial B^\times(P)$ is trivial, by Lemma~\ref{impair} and Corollary~\ref{pasfidele}. Hence there is nothing more to prove if $p$ is odd. And for $p=2$, the only case left is when $P$ is dihedral. In that case by Corollary~\ref{upsilon}, the group $\partial B^\times(P)$ has order 2, generated by the element
$$\upsilon_P=\sum_{H\in[s_P]-\{I,J\}}e_H^P-(e_I^P+e_J^P)\;\;\;,$$
where $[s_P]$ is a set of representatives of conjugacy classes of subgroups of $P$, and where $I$ and $J$ are the elements of $[s_P]$ which have order 2, and are non central in $P$. Moreover the element $\theta(\kappa)$ mentioned above is equal to
$$(P/I'-P/I'Z)-(P/J'-P/J'Z)\;\;\;,$$
where $Z$ is the centre of $P$, and $I'$ and $J'$ are non-central subgroups of order~2 of $P$, not conjugate in $P$. Hence up to sign $\theta(\kappa)$ is equal to
$$\delta_P=(P/I-P/IZ)-(P/J-P/JZ)\;\;\;.$$
Since $\epsilon_P(\upsilon_P)(P/H)$ is equal to zero, except if $H$ is conjugate to $I$ or $J$, and then $\epsilon_P(\upsilon_P)(P/H)=1$, it follows that $\epsilon_P(\upsilon_P)(\delta_P)=1-1=0$, as was to be shown. This completes the proof.~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
\begin{enonce}{Corollary} The $p$-biset functor $B^\times$ is rational.
\end{enonce}
\noindent{\bf Proof: } Indeed, it is isomorphic to a subfunctor of $\mathbb{F}_2R_\mathbb{Q}^*\cong\hbox{\rm Hom}_\mathbb{Z}(R_\mathbb{Q},\mathbb{F}_2)$, which is rational by Proposition~7.4 of~\cite{bisetsections}.~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
\begin{enonce}{Theorem} \label{units}Let $P$ be a $p$-group. Then $B^\times(P)$ is an elementary abelian 2-group of rank equal to the number of isomorphism classes of rational irreducible representations of~$P$ whose type is trivial, cyclic of order~2, or dihedral. More precisely~:
\begin{enumerate}
\item If $p\neq 2$, then $B^\times(P)=\{\pm 1\}$.
\item If $p=2$, then let $\mathcal{G}$ be a genetic basis of $P$, and let $\mathcal{H}$ be the subset of $\mathcal{G}$ consisting of elements $Q$ such that $N_P(Q)/Q$ is trivial, cyclic of order~2, or dihedral. If $Q\in\mathcal{H}$, then $\partial B^\times\big(N_P(Q)/Q\big)$ has order~2, generated by $\upsilon_{N_P(Q)/Q}$. Then the set
$$\{\hbox{\rm Ten}inf_{N_P(Q)/Q}^P\upsilon_{N_P(Q)/Q}\mid Q\in \mathcal{H}\}$$
is an $\mathbb{F}_2$-basis of $B^\times(P)$.
\end{enumerate}
\end{enonce}
\noindent{\bf Proof: } This follows from the definition of a rational biset functor, and from Corollary~\ref{upsilon}. ~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
\begin{rem}{Remark} If $P$ is abelian, then there is a unique genetic basis of $P$, consisting of subgroups $Q$ such that $P/Q$ is cyclic. So in that case, the rank of $B^\times(P)$ is equal to 1 plus the number of subgroups of index 2 in $P$~: this gives a new proof of Matsuda's Theorem~(\cite{matsuda}).
\end{rem}
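This count can be verified mechanically for $P=C_2\times C_2$ (the ordering of subgroups and the table of marks below are our own encoding): among the $2^5$ possible sign patterns of marks, exactly $2^{1+3}=16$ come from integral elements of $B(P)$, matching Matsuda's count.

```python
from itertools import product
from fractions import Fraction

# Table of marks of V4 = C2 x C2, subgroups ordered (1, A, B, C, G):
# row K lists the marks of the basis element G/K at (1, A, B, C, G).
marks = [
    [4, 0, 0, 0, 0],   # G/1
    [2, 2, 0, 0, 0],   # G/A
    [2, 0, 2, 0, 0],   # G/B
    [2, 0, 0, 2, 0],   # G/C
    [1, 1, 1, 1, 1],   # G/G
]

def coefficients(m):
    """Solve sum_K c_K * marks[K] = m by back-substitution: the system
    is triangular when read from the bottom mark upwards."""
    cG = Fraction(m[4])
    cA = (m[1] - cG) / 2
    cB = (m[2] - cG) / 2
    cC = (m[3] - cG) / 2
    c1 = (m[0] - 2 * (cA + cB + cC) - cG) / 4
    return [c1, cA, cB, cC, cG]

# a lies in B^x(G) iff every mark |a^H| is +-1 and all coefficients in
# the canonical basis are integers.
units = []
for m in product([1, -1], repeat=5):
    cs = coefficients(m)
    # consistency check: the coefficients really reproduce the marks
    assert all(sum(c * row[j] for c, row in zip(cs, marks)) == m[j]
               for j in range(5))
    if all(c.denominator == 1 for c in cs):
        units.append(m)

# Matsuda: rank = 1 + number of index-2 subgroups = 1 + 3, so 2^4 units.
assert len(units) == 16
```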
\section{The functorial structure of $B^\times$ for $p$-groups}
In this section, I will describe the lattice of subfunctors of the $p$-biset functor $B^\times$.
\subsection{The case $p\neq 2$.} If $p\neq 2$, there is not much to say, since $B^\times(P)\cong \mathbb{F}_2$ for any $p$-group $P$. In this case, the functor $B^\times$ is the constant functor~$\Gamma_{\mathbb{F}_2}$ introduced in Corollary~8.4 of \cite{both}. It is also isomorphic to the simple functor~$S_{{\bf 1},\mathbb{F}_2}$. Here, the results of~\cite{bisetsections} and~\cite{dadegroup} lead to the following remarkable version of Theorem~11.2 of~\cite{both}:
\begin{enonce}{Proposition}\label{suiteexacte} If $p\neq 2$, the inclusion $B^\times\to \mathbb{F}_2R_\mathbb{Q}^*$ leads to a short exact sequence of $p$-biset functors
$$0\to B^\times\to \mathbb{F}_2R_\mathbb{Q}^*\to D_{tors}\to 0\;\;\;,$$
where $D_{tors}$ is the torsion part of the Dade $p$-biset functor.
\end{enonce}
\subsection{The case $p=2$.} There is a bilinear pairing
$$\scal{\phantom{a}}{\phantom{b}}:\mathbb{F}_2R_\mathbb{Q}^*\times \mathbb{F}_2R_\mathbb{Q}\to\mathbb{F}_2\;\;\;.$$
This means that for each 2-group $P$, there is a bilinear form
$$\scal{\phantom{a}}{\phantom{b}}_P: \mathbb{F}_2R_\mathbb{Q}^*(P)\times \mathbb{F}_2R_\mathbb{Q}(P)\to\mathbb{F}_2\;\;\;,$$
with the property that for any 2-group $Q$, for any $f\in\hcp{P}{Q}$, for any $a\in \mathbb{F}_2R_\mathbb{Q}^*(P)$ and any $b\in \mathbb{F}_2R_\mathbb{Q}(Q)$, one has that
$$\scal{\mathbb{F}_2R_\mathbb{Q}^*(f)(a)}{b}_Q=\scal{a}{\mathbb{F}_2R_\mathbb{Q}(f^{op})(b)}_P\;\;\;.$$
Moreover this pairing is non-degenerate~: this means that for any 2-group~$P$, the pairing $\scal{\phantom{a}}{\phantom{b}}_P$ is non-degenerate. In particular, each subfunctor $F$ of $\mathbb{F}_2R_\mathbb{Q}^*$ is isomorphic to $\mathbb{F}_2R_\mathbb{Q}/F^\perp$, where $F^\perp$ is the orthogonal of $F$ for the pairing $\scal{\phantom{a}}{\phantom{b}}$.\par
In particular, the lattice of subfunctors of $\mathbb{F}_2R_\mathbb{Q}^*$ is isomorphic to the opposite lattice of subfunctors of $\mathbb{F}_2R_\mathbb{Q}$. Now since $B^\times$ is isomorphic to a subfunctor of $\mathbb{F}_2R_\mathbb{Q}^*$, its lattice of subfunctors is isomorphic to the opposite lattice of subfunctors of $\mathbb{F}_2R_\mathbb{Q}$ {\em containing} $B^\sharp=(B^\times)^\perp$. By Theorem~4.4 of \cite{fonctrq}, any subfunctor $L$ of $\mathbb{F}_2R_\mathbb{Q}$ is equal to the sum of subfunctors $H_Q$ it contains, where $Q$ is a 2-group of normal 2-rank 1, and $H_Q$ is the subfunctor of $\mathbb{F}_2R_\mathbb{Q}$ generated by the image $\sur{\Phi}_Q$ of the unique (up to isomorphism) irreducible rational faithful $\mathbb{Q} Q$-module $\Phi_Q$ in $\mathbb{F}_2R_\mathbb{Q}$. \par
In particular $B^\sharp$ is the sum of the subfunctors $H_Q$, where $Q$ is a 2-group of normal 2-rank 1 such that $\sur{\Phi}_Q\in B^\sharp(Q)$. This means that
$\scal{a}{\sur{\Phi}_Q}_Q=0$, for any $a\in B^\times(Q)$. Now $\Phi_Q=f_{\bf 1}\Phi_Q$ since $\Phi_Q$ is faithful, so
$$\scal{a}{\sur{\Phi}_Q}_Q=\scal{a}{f_{\bf 1}^Q\sur{\Phi}_Q}_Q=\scal{f_{\bf 1}^Qa}{\sur{\Phi}_Q}_Q\;\;\;,$$
because $f_{\bf 1}^Q=(f_{\bf 1}^Q)^{op}$. Thus $\sur{\Phi}_Q\in B^\sharp(Q)$ if and only if $\sur{\Phi}_Q$ is orthogonal to $\partial B^\times(Q)$. Since $Q$ has normal $2$-rank 1, this is always the case by Corollary~\ref{upsilon}, except maybe if $Q$ is trivial, cyclic of order 2, or dihedral (of order at least 16). Now $H_{\bf 1}=H_{C_2}=\mathbb{F}_2R_\mathbb{Q}$ by Theorem~5.6 of~\cite{fonctrq}. Since $B^\times$ is not the zero subfunctor of $\mathbb{F}_2R_\mathbb{Q}$, it follows that $H_Q\not\subseteq B^\sharp$, if $Q$ is trivial or cyclic of order 2. Now if $Q$ is dihedral, then $\Phi_Q$ is equal to $\mathbb{Q} Q/I-\mathbb{Q} Q/IZ$, where~$I$ is a non-central subgroup of order 2 of $Q$, and $Z$ is the centre of $Q$. Now
$$\epsilon_Q(\upsilon_Q)\big(i(\sur{\Phi}_Q)\big)=\epsilon_Q(\upsilon_Q)(Q/I-Q/IZ)=1-0=1\;\;\;.$$
It follows that $H_Q\not\subseteq B^\sharp$ if $Q$ is dihedral. Finally $B^\sharp$ is the sum of all subfunctors $H_Q$, when $Q$ is cyclic of order at least 4, or generalized quaternion, or semi-dihedral.\par
Recall from Theorem~6.2 of~\cite{fonctrq} that the poset of proper subfunctors of $\mathbb{F}_2R_\mathbb{Q}$ is isomorphic to the poset of closed subsets of the following graph~:
$$\xymatrix@C=10pt@R=30pt@M=0pt@H=0pt@W=0pt{
&\ \boite{C_4}\ & & \boite{SD_{16}}& & \boiteb{D_{16}}& & &\\
\boite{Q_8}\mar[ur]\mar[urrr]& & \boite{C_{8}}\mar[ul]\mar[ur]\mar[urrr]& & \boite{SD_{32}}\mar[ur]& & \boiteb{D_{32}}\mar[ul]& &\\
& \boite{Q_{16}}\mar[ul]\mar[ur]\mar[urrr]& & \boite{C_{16}}\mar[ul]\mar[ur]\mar[urrr]& & \boite{SD_{64}}\mar[ur]& & \boiteb{D_{64}}\mar[ul]& \\
& & \boite{Q_{32}}\mar[ul]\mar[ur]\mar[urrr]& & \boite{C_{32}}\mar[ul]\mar[ur]\mar[urrr]& & \boite{SD_{128}}\mar[ur]& & \boiteb{D_{128}}\mar[ul]\\
& & &{\rule{0pt}{8pt}\ldots\rule{0pt}{8pt}}\mar[ul]\mar[ur]\mar[urrr]& &{\rule{0pt}{8pt}\ldots\rule{0pt}{8pt}}\mar[ul]\mar[ur]\mar[urrr]& &{\rule{0pt}{8pt}\ldots\rule{0pt}{8pt}}\mar[ur]& &{\rule{0pt}{8pt}\ldots\rule{0pt}{8pt}}\mar[ul]\\
}$$
The vertices of this graph are the isomorphism classes of groups of normal 2-rank~1 and order at least~4, and there is an arrow from vertex $Q$ to vertex~$R$ if and only if $H_R\subseteq H_Q$. The vertices with a filled $\bullet$ are exactly labelled by the groups $Q$ for which $H_Q\subseteq B^\sharp$, and the vertices with a $\circ$ are labelled by dihedral groups.\par
By the above remarks, the lattice of subobjects of $B^\times$ is isomorphic to the opposite lattice of subfunctors of $\mathbb{F}_2R_\mathbb{Q}$ containing $B^\sharp$. Thus~:
\begin{enonce}{Theorem} The $p$-biset functor $B^\times$ is uniserial. It has an infinite strictly increasing series of proper subfunctors
$$0\subset L_0\subset L_1\subset\cdots\subset L_n\subset\cdots$$
where $L_{0}$ is generated by the element $\upsilon_{{\bf 1}}$, and $L_i$, for $i>0$, is generated by the element $\upsilon_{D_{2^{i+3}}}$ of $B^\times(D_{2^{i+3}})$. The functor $L_0$ is isomorphic to the simple functor $S_{{\bf 1},\mathbb{F}_2}$, and the quotient $L_{i}/L_{i-1}$, for $i\geq 1$, is isomorphic to the simple functor $S_{D_{2^{i+3}},\mathbb{F}_2}$.
\end{enonce}
\noindent{\bf Proof: } Indeed $L_0^\perp=B^\sharp+H_{D_{16}}$ is the unique maximal proper subfunctor of $\mathbb{F}_2R_\mathbb{Q}$. Thus $L_0$ is isomorphic to the unique simple quotient of $\mathbb{F}_2R_\mathbb{Q}$, which is $S_{{\bf 1},\mathbb{F}_2}$ by Proposition~5.1 of~\cite{fonctrq}. Similarly for $i\geq 1$, the simple quotient $L_i/L_{i-1}$ is isomorphic to the quotient
$$(B^\sharp+H_{D_{2^{i+3}}})/(B^\sharp+H_{D_{2^{i+4}}})\;\;\;,$$
which is a quotient of
$$(B^\sharp+H_{D_{2^{i+3}}})/B^\sharp\cong H_{D_{2^{i+3}}}/(B^\sharp\cap H_{D_{2^{i+3}}})\;\;\;.$$
But the only simple quotient of $H_{D_{2^{i+3}}}$ is $S_{D_{2^{i+3}},\mathbb{F}_2}$, by Proposition~5.1 of~\cite{fonctrq} again.~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
\begin{rem}{Remark} Let $P$ be a 2-group. By Theorem~5.12 of~\cite{fonctrq}, the $\mathbb{F}_2$-dimension of $S_{{\bf 1},\mathbb{F}_2}(P)$ is equal to the number of isomorphism classes of rational irreducible representations of~$P$ whose type is ${\bf 1}$ or $C_2$, whereas the $\mathbb{F}_2$-dimension of $S_{D_{2^{i+3}},\mathbb{F}_2}(P)$ is the number of isomorphism classes of rational irreducible representations of~$P$ whose type is isomorphic to $D_{2^{i+3}}$. This gives a way to recover Theorem~\ref{units}~: the $\mathbb{F}_2$-dimension of $B^\times(P)$ is equal to the number of isomorphism classes of rational irreducible representations of~$P$ whose type is trivial, cyclic of order 2, or dihedral.
\end{rem}
\subsection{The surjectivity of the exponential map.} Let $G$ be a finite group. The exponential map ${\rm exp}_G: B(G)\to B^\times(G)$ is defined in Section~7 of Yal\c cin's paper~(\cite{yalcin}) by
$${\rm exp}_G(x)=(-1)\uparrow x\;\;\;,$$
where $-1=-{\bf 1}/{\bf 1}\in B^\times({\bf 1})$, and where the exponentiation
$$(y,x)\in B^\times(G)\times B(G)\to B^\times(G)$$
is defined by extending the usual exponential map $(Y,X)\mapsto Y^X$, where $X$ and $Y$ are $G$-sets, and $Y^X$ is the set of maps from $X$ to $Y$, with $G$-action given by $(g\cdot f)(x)=gf(g^{-1}x)$.\par
It is possible to give another interpretation of this map~: indeed $B(G)$ is naturally isomorphic to $\hc{{\bf 1}}{G}$, by considering any $G$-set as a $(G,{\bf 1})$-biset. It is clear that if $X$ is a finite $G$-set, and $Y$ is a finite set, then
$$T_X(Y)=Y^X\;\;\;.$$
This can be extended by linearity, to show that for any $x\in B(G)$
$$(-1)^x=B^\times(x)(-1)\;\;\;.$$
In particular the image $\hbox{\rm Im}({\rm exp}_G)$ of the exponential map ${\rm exp}_G$ is equal to $\hc{{\bf 1}}{G}(-1)$. Denoting by $I$ the sub-biset functor of $B^\times$ generated by $-1\in B^\times({\bf 1})$, it is now clear that
$\hbox{\rm Im}({\rm exp}_G)=I(G)$ for any finite group $G$.\par
Now the restriction of the functor $I$ to the category $\mathcal{C}_2$ is equal to $L_0$, which is isomorphic to the simple functor $S_{{\bf 1},\mathbb{F}_2}$. Using Remark~5.13 of~\cite{fonctrq}, this shows finally the following~:
\begin{enonce}{Proposition} Let $P$ be a finite 2-group. Then~:
\begin{enumerate}
\item The $\mathbb{F}_2$-dimension of the image of the exponential map
$${\rm exp}_P: B(P)\to B^\times(P)$$
is equal to the number of isomorphism classes of absolutely irreducible rational representations of $P$.
\item The map ${\rm exp}_P$ is surjective if and only if the group $P$ has no irreducible rational representation of dihedral type, or equivalently, no genetic subgroup $Q$ such that $N_P(Q)/Q$ is dihedral.
\end{enumerate}
\end{enonce}
\begin{enonce}{Proposition} Let $p$ be a prime number. There is an exact sequence of $p$-biset functors~:
$$0\to B^\times\to \mathbb{F}_2R_\mathbb{Q}^*\to \mathbb{F}_2D^\Omega_{tors}\to 0\;\;\;,$$
where $D^\Omega_{tors}$ is the torsion part of the functor $D^\Omega$ of relative syzygies in the Dade group.
\end{enonce}
\noindent{\bf Proof: } In the case $p\neq 2$, this proposition is equivalent to Proposition~\ref{suiteexacte}, because $\mathbb{F}_2D^\Omega_{tors}=\mathbb{F}_2D_{tors}\cong D_{tors}$ in this case. And for $p=2$, the 2-functor $D^\Omega_{tors}$ is a quotient of the functor $R_\mathbb{Q}^*$, by Corollary~7.5 of~\cite{bisetsections}~: there is a surjective map $\pi: R_\mathbb{Q}^*\to D_{tors}^\Omega$, which is the restriction to $R_\mathbb{Q}^*$ of the surjection $\Theta: B^*\to D^\Omega$ introduced in Theorem~1.7 of~\cite{dadeburnside}. The $\mathbb{F}_2$-reduction of $\pi$ is a surjective map
$$\mathbb{F}_2\pi : \mathbb{F}_2R_\mathbb{Q}^*\to \mathbb{F}_2D^\Omega_{tors}\;\;\;.$$
To prove the proposition in this case, it is enough to show that the image of~$B^\times$ in $\mathbb{F}_2R_\mathbb{Q}^*$ is contained in the kernel of $\mathbb{F}_2\pi$, and that for any $2$-group~$P$, the $\mathbb{F}_2$-dimension of $\mathbb{F}_2R_\mathbb{Q}^*(P)$ is equal to the sum of the $\mathbb{F}_2$-dimensions of $B^\times(P)$ and $\mathbb{F}_2D^\Omega_{tors}(P)$~: but by Corollary~7.6 of~\cite{bisetsections}, there is a group isomorphism
$$D^\Omega_{tors}(P)\cong (\mathbb{Z}/4\mathbb{Z})^{a_P}\oplus (\mathbb{Z}/2\mathbb{Z})^{b_P}\;\;\;,$$
where $a_P$ is equal to the number of isomorphism classes of rational irreducible representations of~$P$ whose type is generalized quaternion, and $b_P$ is equal to the number of isomorphism classes of rational irreducible representations of~$P$ whose type is cyclic of order at least 3, or semi-dihedral. Thus
$$\dim_{\mathbb{F}_2} \mathbb{F}_2D^\Omega_{tors}(P)=a_P+b_P\;\;\;.$$
Now since $\dim_{\mathbb{F}_2}B^\times(P)$ is equal to the number of isomorphism classes of rational irreducible representations of~$P$ whose type is cyclic of order at most 2, or dihedral, it follows that $\dim_{\mathbb{F}_2} \mathbb{F}_2D^\Omega_{tors}(P)+\dim_{\mathbb{F}_2}B^\times(P)$ is equal to the number of isomorphism classes of rational irreducible representations of $P$, i.e. to $\dim_{\mathbb{F}_2}\mathbb{F}_2R_\mathbb{Q}^*(P)$.\par
So the only thing left to check to complete the proof is that the image of~$B^\times$ in $\mathbb{F}_2R_\mathbb{Q}^*$ is contained in the kernel of $\mathbb{F}_2\pi$. Since $B^\times$, $\mathbb{F}_2R_\mathbb{Q}^*$ and $\mathbb{F}_2D^\Omega_{tors}$ are rational 2-biset functors, it suffices to check that if $P$ is a 2-group of normal 2-rank 1, and $a\in \partial B^\times(P)$, then the image of $a$ in $\partial\mathbb{F}_2R_\mathbb{Q}^*(P)$ lies in the kernel of $\mathbb{F}_2\pi$. There is nothing to do if $P$ is generalized quaternion, or semi-dihedral, or cyclic of order at least 3, for in this case $\partial B^\times(P)=0$ by Corollary~\ref{pasfidele}. Now if $P$ is cyclic of order at most 2, then $D^\Omega(P)=\{0\}$, and the result follows. And if $P$ is dihedral, then $D^\Omega(P)$ is torsion free by Theorem~10.3 of~\cite{cath}, so $D^\Omega_{tors}(P)=\{0\}$ again. ~\leaders\hbox to 1em{\hss\ \hss}
~\raisebox{.5ex}{\framebox[1ex]{}}\smp
\end{document} |
\begin{document}
\title{On the Computational Efficiency of \\
Adaptive and Dynamic Regret Minimization}
\begin{abstract}
In online convex optimization, the player aims to minimize regret, or the difference between her loss and that of the best fixed decision in hindsight over the entire repeated game. Algorithms that minimize (standard) regret may converge to a fixed decision, which is undesirable in changing or dynamic environments. This motivates the stronger metrics of performance, notably adaptive and dynamic regret. Adaptive regret is the maximum regret over any continuous sub-interval in time. Dynamic regret is the difference between the total cost and that of the best sequence of decisions in hindsight.
State-of-the-art performance in both adaptive and dynamic regret minimization suffers a computational penalty: typically a multiplicative factor that grows logarithmically in the number of game iterations. In this paper we show how to reduce this computational penalty to be doubly logarithmic in the number of game iterations, and retain near optimal adaptive and dynamic regret bounds.
\end{abstract}
\section{Introduction}
Online convex optimization is a standard framework for iterative decision making that has been extensively studied and applied to numerous learning settings. In this setting, a player iteratively chooses a point from a convex decision set, and receives loss from an adversarially chosen loss function. Her aim is to minimize her regret, or the difference between her accumulated loss and that of the best fixed comparator in hindsight.
However, in changing environments regret is not the correct metric, as it incentivizes static behavior \cite{hazan2009efficient}.
There are two main directions in the literature for online learning in changing environments.
The early work of \cite{zinkevich2003online} proposed the metric of dynamic regret, which measures the regret vs. the best changing comparator,
$$ \text{D-Regret}({\mathcal A}) = \sum_t \ell_t(x_t) - \min_{x_{1:T}^*} \sum_t \ell_t(x_t^*) . $$
In general, this metric can be linear in the number of game iterations and thus vacuous. However, the dynamic regret can be sublinear, and is usually related to the path length of the comparator, i.e.
$$ \mathcal{P} = \sum_t \|x_t^* - x_{t+1}^* \| . $$
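To make the two quantities concrete, here is a toy one-dimensional experiment (the losses, step size, and switching comparator are illustrative choices of ours, not tied to any algorithm analyzed in this paper): projected online gradient descent on quadratics $\ell_t(x)=(x-\theta_t)^2$, with the per-round minimizers $\theta_t$ as the comparator sequence.

```python
# Toy check: projected OGD on l_t(x) = (x - theta_t)^2 over [0, 1].
# The comparator x_t* = theta_t has zero loss, so its total loss drops out
# of D-Regret, and the path length P is the total movement of theta_t.
T = 1000
theta = [0.0 if (t // 100) % 2 == 0 else 1.0 for t in range(T)]  # 9 switches

x, eta = 0.5, 0.05
dyn_regret, path_length = 0.0, 0.0
for t in range(T):
    dyn_regret += (x - theta[t]) ** 2          # player loss minus 0
    grad = 2 * (x - theta[t])
    x = min(1.0, max(0.0, x - eta * grad))     # projected gradient step
    if t + 1 < T:
        path_length += abs(theta[t + 1] - theta[t])

assert path_length == 9.0      # one unit of movement per switch
assert 0.0 < dyn_regret < T    # far below the trivial linear bound
```

A static comparator would incur loss linear in $T$ here, while the dynamic regret scales with the number of switches, illustrating why the path length $\mathcal{P}$ is the natural complexity measure.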
An alternative to dynamic regret is adaptive regret, which was proposed in \cite{hazan2009efficient}, a metric closely related to regret in the shifting-experts problem \cite{herbster1998tracking}. Adaptive regret is the maximum regret over any continuous sub-interval in time. This notion has led to algorithmic innovations that yield optimal adaptive regret bounds as well as the best known dynamic regret bounds.
The basic technique underlying the state-of-the-art methods in dynamic online learning is based on maintaining a set of expert algorithms that have different history lengths, or attention, in consideration. Each expert is a standard regret minimization algorithm, and they are formed into a committee by a version of the multiplicative update method. This methodology is generally known as Follow-the-Leading-History (FLH) \cite{hazan2009efficient}. It has yielded near-optimal adaptive regret, strongly-adaptive algorithms \cite{daniely2015strongly}, and near-optimal dynamic regret \cite{baby2021optimal} in a variety of settings.
However, all previous approaches introduce a significant computational overhead to derive adaptive or dynamic regret bounds. The technical reasoning is that all previous approaches follow the reduction method of FLH, from regret to adaptive regret via expert algorithms. The best known bound on the number of experts required to maintain optimal adaptive regret is $\Theta(\log T)$. Since optimal adaptive and dynamic regret bounds are known, the {\bf main open problem} in dynamic online learning is improving the running time overhead.
This is exactly the question we study in this paper, namely:
\noindent\fbox{\begin{minipage}{\dimexpr\textwidth-2\fboxsep-2\fboxrule\relax}
\centering
Can we improve the computational complexity of adaptive and dynamic regret\\ minimization algorithms for online convex optimization?
\end{minipage}}
Our main result is an exponential reduction in the number of experts required for the optimal adaptive and dynamic regret bounds. We prove that $O(\log \log T)$ experts are sufficient to obtain near-optimal bounds for general online convex optimization.
\subsection{Summary of Results}
Our starting point is the approach of \cite{hazan2009efficient} for minimizing adaptive regret: an expert algorithm is applied such that every expert is a (standard) regret minimization algorithm, whose starting point in time differentiates it from the other experts. Instead of restarting an expert every single iteration, previous approaches retain a set of {\it active experts}, and update only these.
In this paper we study how to maintain this set of active experts. Previous approaches require a set size that is logarithmic in the total number of iterations. We show a trade-off between the regret bound and the number of experts needed.
By reducing the number of active experts to $O( \frac{\log \log T}{\varepsilon})$, we give an algorithm with an $\tilde{O}(|I|^{\frac{1+\varepsilon}{2}})$ adaptive regret. This result improves upon the previous $O(\log T)$ bound, and implies more efficient dynamic regret algorithms as well: for exp-concave and strongly-convex loss, our algorithm achieves $\tilde{O}(\frac{T^{\frac{1}{3}+\varepsilon} \mathcal{P}^{\frac{2}{3}}}{\varepsilon})$ dynamic regret bounds, using only $O( \frac{\log \log T}{\varepsilon})$ experts.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Algorithm & Regret over $I=[s,t]$ & Computation \\
\hline
\cite{hazan2009efficient} & $\tilde{O}(\sqrt{T})$ & $\Theta(\log T)$ \\
\hline
\cite{daniely2015strongly}, \cite{jun2017improved} & $\tilde{O}(\sqrt{|I|})$ & $\Theta(\log T)$ \\
\hline
\cite{cutkosky2020parameter} & $\tilde{O}(\sqrt{\sum_{\tau=s}^t \|\nabla_{\tau}\|^2})$ & $\Theta(\log T)$ \\
\hline
\cite{lu2022adaptive} & $\tilde{O}(\min_H \sqrt{\sum_{\tau=s}^t \|\nabla_{\tau}\|_{H}^{*2}})$ & $\Theta(\log T)$ \\
\hline
This paper & $\tilde{O}(\sqrt{|I|^{1+\varepsilon}})$ & $O(\log \log T/\varepsilon)$ \\
\hline
\end{tabular}
\caption{Comparison of results on adaptive regret. We evaluate the regret of each algorithm on an arbitrary interval $I=[s,t]$; the $\tilde{O}$ notation hides other parameters and logarithmic dependence on the horizon.}
\label{table:result_summary}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Algorithm & Loss Class & Dynamic Regret & Computation \\
\hline
\cite{zinkevich2003online} & General Convex & $O(\sqrt{\mathcal{P} T})$ & $O(1)$ \\
\hline
\cite{baby2021optimal} & Exp-concave & $\tilde{O}(T^{\frac{1}{3}} \mathcal{P}^{\frac{2}{3}})$ & $O(\log T)$ \\
\hline
\cite{baby2022optimal} & Strongly-convex & $\tilde{O}(T^{\frac{1}{3}} \mathcal{P}^{\frac{2}{3}})$ & $O(\log T)$ \\
\hline
This Paper & Exp-concave/Strongly-convex & $\tilde{O}(T^{\frac{1}{3}+\varepsilon} \mathcal{P}^{\frac{2}{3}}/\varepsilon)$ & $O(\frac{\log \log T}{\varepsilon})$ \\
\hline
\end{tabular}
\caption{Comparison of results on dynamic regret.}
\label{table:result_summary dynamic}
\end{center}
\end{table}
\subsection{Related Works}
For an in-depth treatment of the framework of online convex optimization see \cite{hazan2016introduction}.
\paragraph{Shifting experts and adaptive regret.}
Online learning with shifting experts was studied in the seminal work of \cite{herbster1998tracking}, and later \cite{bousquet2002tracking}. In this setting, the comparator is allowed to shift $k$ times between the experts, and the regret is no longer measured with respect to a static expert, but with respect to a $k$-partition of $[1,T]$ in which each segment has its own expert. The algorithm Fixed-Share proposed by \cite{herbster1998tracking} is a variant of the Hedge algorithm \cite{freund1997decision}. On top of the multiplicative updates, it adds a uniform exploration term to prevent the weight of any expert from becoming too small. This provably allows a regret bound that tracks the best expert in any interval. \cite{bousquet2002tracking} improved this method by mixing only with the past posteriors instead of all experts.
The optimal bounds for shifting experts apply to high-dimensional continuous sets and structured decision problems and do not necessarily yield efficient algorithms. This is the motivation for adaptive regret algorithms for online convex optimization \cite{hazan2009efficient}, which gave an algorithm called Follow-the-Leading-History with $O(\log^2 T)$ adaptive regret for strongly convex online convex optimization, based on the construction of experts with exponential look-back. However, their bound on the adaptive regret for general convex cost functions was $O( \sqrt{T} \log T)$. Later, \cite{daniely2015strongly} followed this idea and generalized adaptive regret to a universal bound over all sub-intervals of the same length. They obtained an improved $O(\sqrt{|I|}\log T )$ regret bound for any interval $I$. This bound was further improved to $O(\sqrt{|I| \log T })$ by \cite{jun2017improved} using a coin-betting technique. Recently, \cite{cutkosky2020parameter} achieved a more refined second-order bound $\tilde{O}(\sqrt{\sum_{t\in I}\|\nabla_{t}\|^2})$, and \cite{lu2022adaptive} further improved it to $\tilde{O}(\min_{H \succeq 0, Tr(H)\le d} \sqrt{\sum_{t\in I} \nabla_{t}^{\top} H^{-1} \nabla_{t}})$, which matches the regret of Adagrad \cite{duchi2011adaptive}. However, these algorithms are all based on the original exponential-lookback technique and require $\Theta(\log T)$ experts per round, increasing the computational complexity of the base algorithm in their reduction by this factor.
\paragraph{Dynamic regret minimization.}
The notion of dynamic regret was introduced by \cite{zinkevich2003online}, and allows the comparator to be time-varying with a bounded total movement. The work of \cite{zinkevich2003online} gave an algorithm with an $O(\sqrt{T \mathcal{P}})$ dynamic regret bound, where $\mathcal{P}$ denotes the total sequential distance of the moving predictors, also called the ``path length''. Although this bound is optimal in general, recent works study improvements of dynamic regret bounds under further assumptions \cite{zhao2020dynamic, zhang2017improved}. In particular, \cite{baby2021optimal, baby2022optimal} achieved an improved $\tilde{O}(T^{\frac{1}{3}}\mathcal{P}^{\frac{2}{3}})$ dynamic regret bound for exp-concave and strongly-convex online learning, with a matching lower bound of $\Omega(T^{\frac{1}{3}}\mathcal{P}^{\frac{2}{3}})$.
Another line of work explores the relationship between these two metrics, and show that adaptive regret implies dynamic regret \cite{zhang2018dynamic}. \cite{zhang2020minimizing} gave algorithms that achieve both adaptive and dynamic regrets simultaneously.
\paragraph{Parameter-free online convex optimization.}
Related to adaptivity, an important building block used by adaptive algorithms to attain tighter bounds is parameter-free online learning, initiated in \cite{mcmahan2010adaptive}. Later parameter-free methods \cite{luo2015achieving, orabona2016coin, cutkosky2020parameter} attained the optimal $\tilde{O}(GD\sqrt{T})$ regret for online convex optimization without knowing any constants ahead of time, and without the usual logarithmic penalty that is a consequence of the doubling trick.
\paragraph{Applications of adaptive online learning.}
Efficient adaptive and dynamic regret algorithms have implications in many other areas.
A recent example is the field of online control \cite{hazan2022introduction}. In this variant of differentiable reinforcement learning, online learning is used to generate iterative control signals, mostly for linear dynamical systems. The works of \cite{baby2021optimal}, \cite{baby2022optimal} used adaptive regret algorithms as building blocks to derive tighter dynamic regret bounds. Recent work by \cite{gradu2020adaptive,minasyan2021online} considered smooth dynamical systems and their Lyapunov linearization, using adaptive and dynamic regret algorithms to obtain provable bounds for time-varying systems. Thus, our results imply more efficient algorithms for control.
Other applications of adaptive algorithms are in the area of time series prediction \cite{koolen2015minimax} and mathematical optimization \cite{lu2022adaptive}. Our improved computational efficiency for adaptive and dynamic regret implies faster algorithms for these applications as well.
\subsection{Paper Outline}
In Section \ref{s2}, we formally define the online convex optimization framework and the basic assumptions we need. In Section \ref{s3}, we present our algorithm and give a simplified analysis that leads to an $\tilde{O}(|I|^{\frac{3}{4}})$ adaptive regret bound with a doubly-logarithmic number of experts. We generalize this analysis and give our main theoretical guarantee in Section \ref{s4}. The limits of the FLH framework recursion are discussed in Section \ref{s6}. Implications for more efficient dynamic regret algorithms are presented in Section \ref{s5}.
\section{Setting}\label{s2}
We consider the online convex optimization (OCO) problem. At each round $t$, the player $\mathcal{A}$ chooses $x_t\in \ensuremath{\mathcal K}$, where $\ensuremath{\mathcal K}\subset \mathbb{R}^d$ is some convex domain. The adversary then reveals a loss function $\ell_t(x)$, and the player suffers loss $\ell_t(x_t)$. The goal is to minimize regret:
$$
\text{Regret}(\mathcal{A})=\sum_{t=1}^T \ell_t(x_t)-\min_{x\in \ensuremath{\mathcal K}} \sum_{t=1}^T \ell_t(x) .
$$
A more subtle goal is to minimize the regret over different sub-intervals of $[1,T]$ at the same time, corresponding to a potential changing environment, which is captured by the notion of adaptive regret introduced by \cite{hazan2009efficient}. \cite{daniely2015strongly} extended this notion to depend on the length of sub-intervals, and provided an algorithm that achieves an $\tilde{O}(\sqrt{|I|})$ regret bound for all sub-intervals $I$. In particular, they define strongly adaptive regret as follows:
$$
\text{SA-Regret}(\mathcal{A},k)=\max_{I=[s,t],t-s=k}\left( \sum_{\tau=s}^t \ell_{\tau}(x_{\tau})-\min_{x\in \ensuremath{\mathcal K}} \sum_{\tau=s}^{t} \ell_{\tau}(x)\right) .
$$
We make the following assumption on the losses $\ell_t$ and the domain $\ensuremath{\mathcal K}$, which is standard in the literature.
\begin{assumption}
Each loss $\ell_t$ is convex, $G$-Lipschitz and non-negative. The domain $\ensuremath{\mathcal K}$ has diameter $D$.
\end{assumption}
We define the path length $\mathcal{P}$ of a dynamic comparator sequence $\{x_t^*\}$ with respect to some norm $\| \cdot \|$ as
$$
\mathcal{P} = \sum_t \|x_t^* - x_{t+1}^* \| .
$$
We also define strongly-convex and exp-concave functions here.
\begin{definition}
A function $f(x)$ is $\lambda$-strongly-convex if for any $x,y\in \mathcal{K}$, the following holds:
$$
f(y)\ge f(x)+\nabla f(x)^{\top}(y-x)+\frac{\lambda}{2}\|x-y\|_2^2 .
$$
\end{definition}
\begin{definition}
A function $f(x)$ is $\alpha$-exp-concave if $e^{-\alpha f(x)}$ is a concave function.
\end{definition}
\section{A More Efficient Adaptive Regret Algorithm}\label{s3}
\begin{algorithm}[t]
\caption{Efficient Follow-the-Leading-History (EFLH) - Basic Version}
\label{alg1}
\begin{algorithmic}[1]
\STATE Input: OCO algorithm ${\mathcal A}$, active expert set $S_t$.
\STATE Let ${\mathcal A}_t$ be an instance of ${\mathcal A}$ initialized at time $t$. Initialize the set of active experts: $S_1=\{1\}$, with initial weight $w_1^{(1)}=\frac{1}{2GD}$.
\STATE Pruning rule: for $k\ge 1$, an expert $\mathcal{A}_t$ born at a time $t=r\, 2^{2^k-1}$ with $2^{2^k+1}\nmid t$ has lifespan $l_t=2^{2^k+1}$ (and $l_t=4$ if $2\nmid t$). ``Deceased'' experts are removed from the active expert set $S_t$.
\FOR{$t = 1, \ldots, T$}
\STATE Let $W_t=\sum_{j\in S_t} w_t^{(j)}$.
\STATE Play $x_t=\sum_{j\in S_t} \frac{w_t^{(j)}}{W_t} x_t^{(j)}$, where $x_t^{(j)}$ is the prediction of ${\mathcal A}_j$.
\FOR{$j\in S_t$}
\STATE $$
w_{t+1}^{(j)}=w_t^{(j)} \left(1+\frac{1}{GD} \min\left\{\frac{1}{2},\sqrt{\frac{\log T}{l_j}}\right\} (\ell_t(x_t)-\ell_t(x_t^{(j)})) \right)
$$
\ENDFOR
\STATE Update $S_t$ according to the pruning rule and add $t + 1$ to get $S_{t+1}$. Initialize
$$
w_{t+1}^{(t+1)}=\frac{1}{GD} \min\left\{\frac{1}{2},\sqrt{\frac{\log T}{l_{t+1}}}\right\}
$$
\ENDFOR
\end{algorithmic}
\end{algorithm}
The Follow-the-Leading-History (FLH) algorithm \cite{hazan2009efficient} achieved $\tilde{O}(\sqrt{T})$ adaptive regret by initiating different OCO algorithms at each time step, then treating them as experts and running a multiplicative weight method to choose between them. This can be extended to attain an $\tilde{O}(\sqrt{|I|})$ adaptive regret bound by using parameter-free OCO algorithms as experts, or by setting the $\eta$ in the multiplicative weight algorithm as in \cite{daniely2015strongly}.
Although these algorithms achieve a near-optimal $\tilde{O}(\sqrt{|I|})$ adaptive regret, they have to use $\Theta(\log T)$ experts per round. We propose a more efficient algorithm, Algorithm \ref{alg1}, which achieves vanishing regret using only $O(\log \log T)$ experts.
The intuition for our algorithm stems from the FLH method, in which the experts' lifespans are of the form $2^k$. The lifespan of an expert is the length of the interval it is run on (the expert also chooses its parameters optimally according to its lifespan). This leads to $\Theta(\log T)$ active experts per round, and we can improve it to $O(\log \log T)$ by changing the lifespans to $2^{2^k}$. While this does increase the regret, we still achieve an $\tilde{O}(|I|^{\frac{3}{4}})$ regret bound. The formal regret guarantee is given below.
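The lifespan bookkeeping is easy to simulate. The following Python sketch (ours, purely illustrative) adopts one plausible reading of the pruning rule, assigning the expert born at round $i$ the lifespan $2^{2^k+1}$ for the largest $k$ with $2^{2^k-1} \mid i$, and tracks the active expert set over a short horizon:

```python
def lifespan(i: int) -> int:
    # One plausible reading of the pruning rule: the expert born at round i
    # lives for 2^(2^k + 1) rounds, where k is the largest integer such that
    # 2^(2^k - 1) divides i (odd i gets k = 0, i.e. lifespan 4).
    k = 0
    while i % (2 ** (2 ** (k + 1) - 1)) == 0:
        k += 1
    return 2 ** (2 ** k + 1)

T = 4096
active, max_active = set(), 0
for t in range(1, T + 1):
    active.add(t)                                        # a new expert each round
    active = {i for i in active if i + lifespan(i) > t}  # prune deceased experts
    max_active = max(max_active, len(active))

# number of distinct lifespan scales up to horizon T, i.e. ~ log log T
num_levels = sum(1 for k in range(6) if 2 ** (2 ** k - 1) <= T)
assert max_active <= 4 * num_levels  # only O(log log T) experts are ever active
```

On this horizon the simulation keeps at most a handful of experts per level, about four per lifespan scale, matching the intuition above.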
\begin{theorem}\label{main}
When using OGD as the expert algorithm ${\mathcal A}$, Algorithm \ref{alg1} achieves the following adaptive regret bound over any interval $I\subset [1,T]$ for general convex loss
$$ \text{SA-Regret}({\mathcal A}) \le 36 GD \sqrt{\log T} \cdot |I|^{\frac{3}{4}}$$
with $O(\log \log T)$ experts.
\end{theorem}
We use the Online Gradient Descent \cite{zinkevich2003online} algorithm in the theorem for its $\frac{3}{2}GD\sqrt{T}$ regret bound; see \cite{hazan2016introduction}. The sub-optimal $\tilde{O}(|I|^{\frac{3}{4}})$ rate can be improved to be closer to the optimal $\tilde{O}(|I|^{\frac{1}{2}})$ rate while still using only $O(\log \log T)$ experts. We discuss this improvement in the next section.
\subsection{Proof of Theorem \ref{main}}
We use OGD as the base algorithm $\mathcal{A}$.
Without loss of generality we only need to consider intervals of length at least 8. The proof idea is to derive a recursion of regret bounds and use induction on the interval length. The key observation is that, due to the doubly-exponential construction of interval lengths, for any interval $[s,t]$ it is guaranteed that a terminal sub-interval of length at least $\sqrt{t-s}/2$ is covered by some expert. Meanwhile, the number of active experts per round is at most $O(\log \log T)$. We formalize this observation in the following two lemmas.
\begin{lemma}\label{p1}
For any interval $I=[s,t]$, there exists an integer $i\in [s,t-\sqrt{t-s}/2]$, such that ${\mathcal A}_i$ is alive throughout $[i,t]$.
\end{lemma}
\begin{proof}
Assume $2^{2^k}\le t-s\le 2^{2^{k+1}}$; then $\sqrt{t-s}/2\le 2^{2^k-1}$. Notice that $t\ge 2^{2^k}+1$. Let $r\ge 2$ be the largest integer such that $r 2^{2^k-1}\le t$; then one of $i=(r-1) 2^{2^k-1}$ and $i=(r-2) 2^{2^k-1}$ is satisfactory, because its lifespan is $ 2^{2^k+1}\ge 3\times 2^{2^k-1}$. The reason we consider two candidates for $i$ is that when $r\ge 2$, one of $r-2$ and $r-1$ is odd, and we use that to guarantee the lifespan is not strictly larger than $ 2^{2^k+1}$ (this choice also excludes the potential bad case $r-2=0$).
\end{proof}
In fact, Lemma \ref{p1} implies an even stronger statement about the coverage of $[t-\sqrt{t-s}/2,t]$: namely $2^{2^k-1}\le t-i\le 2^{2^k+1}$, so that $\eta=\frac{1}{\sqrt{2^{2^k+1}}}$ is optimal (up to a constant) for this chosen expert. This property means that we do not need to tune $\eta$ optimally for the length $\sqrt{t-s}/2$, but only with respect to the lifespan of the expert itself. For example, the OGD algorithm $\mathcal{A}_i$ achieves (nearly) optimal regret on $[i,t]$ as well, because the optimal learning rate for $[i,t]$ is the same as that for $[i,i+l_i]$ up to a constant factor of 2. To see this, notice that $l_i\ge t-i$ and $t-i\ge \frac{l_i}{4}$.
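The covering property of Lemma \ref{p1} can also be sanity-checked by brute force. The sketch below (ours, illustrative only) uses one plausible reading of the doubly-exponential lifespan rule and verifies the claim over a range of intervals of length at least 8:

```python
import math

def lifespan(i: int) -> int:
    # one plausible reading of the pruning rule: the expert born at round i
    # lives 2^(2^k + 1) rounds, for the largest k with 2^(2^k - 1) dividing i
    k = 0
    while i % (2 ** (2 ** (k + 1) - 1)) == 0:
        k += 1
    return 2 ** (2 ** k + 1)

def covered(s: int, t: int) -> bool:
    # Lemma: some expert born at i in [s, t - sqrt(t-s)/2] is alive through t,
    # i.e. its lifespan exceeds t - i.  Scan downward from the largest valid i;
    # the covering expert sits near the end, so the scan terminates quickly.
    top = int(t - math.sqrt(t - s) / 2)
    return any(lifespan(i) > t - i for i in range(top, s - 1, -1))

# brute-force check over many intervals of length >= 8
assert all(covered(s, s + L) for L in range(8, 600) for s in range(1, 200, 7))
```

The check passes for every tested interval, consistent with the constructive choice of $i$ in the proof.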
\begin{lemma}\label{p2}
$|S_t|=O(\log \log T)$.
\end{lemma}
\begin{proof}
At any time up to $T$, there can only be $O(\log \log T)$ different lifespan sizes by the algorithm definition.
Notice that for any $k$, though the total number of experts with lifespan $2^{2^k+1}$ might be large, the number of active experts with lifespan $2^{2^k+1}$ is at most 4, which concludes the proof.
\end{proof}
Lemma \ref{p2} already proves the efficiency claim of Theorem \ref{main}. To bound the regret, we use induction on the interval length $|I|$, where $2^{2^k}\le |I|\le 2^{2^{k+1}}$. We need the following technical lemma on the recursion of regret.
\begin{lemma}\label{tech}
For any $x\ge 1$, we have that
$$6 x^{\frac{3}{4}}\ge 6(x-x^{\frac{1}{2}}/2)^{\frac{3}{4}}+(x^{\frac{1}{2}}/2)^{\frac{1}{2}} $$
\end{lemma}
\begin{proof}
Let $y=(x^{\frac{1}{2}}/2)^{\frac{1}{2}}$; after simplification the above inequality becomes
\begin{align*}
&6 x^{\frac{3}{4}}\ge 6(x-x^{\frac{1}{2}}/2)^{\frac{3}{4}}+(x^{\frac{1}{2}}/2)^{\frac{1}{2}}\\
\iff &12\sqrt{2}y^3\ge 6(4y^4-y^2)^{\frac{3}{4}} +y\\
\iff &(12\sqrt{2}y^3-y)^4\ge 6^4 (4y^4-y^2)^3\\
\iff & (12\sqrt{2}y^2-1)^4 \ge 1296y^2(4y^2-1)^3\\
\iff & (62208-13824\sqrt{2})y^6 -13824y^4+(1296-48\sqrt{2})y^2+1\ge 0
\end{align*}
Since $x\ge 1$ we have $y\ge 2^{-1/2}$, and on this range the derivative of the left-hand side is non-negative: indeed $6(62208-13824\sqrt{2})y^4 \ge 4\cdot 13824\, y^2$ already holds whenever $y^2\ge 1/2$. Hence the left-hand side is monotonically increasing in $y$, and it suffices to verify its non-negativity at $y=2^{-1/2}$ (i.e.\ $x=1$), which follows by direct calculation.
\end{proof}
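The inequality of Lemma \ref{tech} can also be spot-checked numerically; the following sketch (ours, illustrative and not part of the proof) evaluates the gap between the two sides on a grid of $x\ge 1$:

```python
# Numeric spot-check of the recursion inequality from the technical lemma:
#   6 x^(3/4) >= 6 (x - sqrt(x)/2)^(3/4) + (sqrt(x)/2)^(1/2)   for x >= 1.
def gap(x: float) -> float:
    return 6 * x ** 0.75 - 6 * (x - x ** 0.5 / 2) ** 0.75 - (x ** 0.5 / 2) ** 0.5

worst = min(gap(1 + i * 0.01) for i in range(200_000))  # grid over [1, 2001]
assert worst >= 0  # the inequality holds everywhere on the grid
```

The smallest gap on the grid occurs near $x=1$ and is still strictly positive, in agreement with the lemma.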
The first step is to derive a regret bound on the sub-interval $[i,t]$ which is covered by a single expert $\mathcal{A}_i$. The regret on $[i,t]$ decomposes as the sum of the expert regret and the multiplicative weight regret for choosing that best expert in the interval. The expert regret is upper bounded by $3 GD \sqrt{t-i}$ due to the optimality of $\mathcal{A}_i$, while the multiplicative weight regret can be upper bounded by $3GD \sqrt{\log T(t-i)}$, as shown in the following lemma, whose proof is left to the appendix.
\begin{lemma}\label{mw regret}
For the $i$ and $\mathcal{A}_i$ chosen in Lemma \ref{p1}, the regret of Algorithm \ref{alg1} over the sub-interval $[i,t]$ is upper bounded by $3GD \sqrt{t-i}+3GD\sqrt{\log T(t-i)}$.
\end{lemma}
Now we have gathered all the pieces we need to prove our induction.
\paragraph{Base case:} for $|I|=1$, the regret is upper bounded by $GD \le 36 GD \sqrt{\log T} \cdot 1^{\frac{3}{4}}$.
\paragraph{Induction step:} suppose that for any $|I|<m$ we have the regret bound in the statement of the theorem. Consider now $t-s=m$; from Lemma \ref{p1} we know there exists an integer $i\in [s,t-\sqrt{t-s}/2]$ such that ${\mathcal A}_i$ is alive throughout $[i,t]$. Algorithm \ref{alg1} guarantees an
$$3GD \sqrt{t-i}+3 GD\sqrt{\log T(t-i)}\le 6GD \sqrt{\log T} (t-i)^{\frac{1}{2}}$$
regret over $[i,t]$ by Lemma \ref{mw regret}, and by induction the regret over $[s,i]$ is upper bounded by\newline $36 GD \sqrt{\log T} (i-s)^{\frac{3}{4}}$. By the monotonicity of the function $f(y)=6(x-y)^{\frac{3}{4}}+\sqrt{y}$ for $y\ge \sqrt{x}/2$, we reach the desired conclusion by using Lemma \ref{tech}:
$$
6(t-i)^{\frac{1}{2}}+36 (i-s)^{\frac{3}{4}}\le 6(\frac{\sqrt{t-s}}{2})^{\frac{1}{2}}+36 (t-s-\frac{\sqrt{t-s}}{2})^{\frac{3}{4}}\le 36 (t-s)^{\frac{3}{4}}
$$
To see the monotonicity, we use the fact $y\ge \sqrt{x}/2$ to see that
$$
f'(y)=\frac{1}{2\sqrt{y}}-\frac{9}{2 (x-y)^{\frac{1}{4}}}
\le \frac{1}{2\sqrt{y}}-\frac{9}{2 x^{\frac{1}{4}}}
\le \frac{1}{2\sqrt{y}}-\frac{9}{2 \sqrt{2y}}\le 0
$$
\section{Approaching the Optimal Rate}\label{s4}
\begin{algorithm}[t]
\caption{Efficient Follow-the-Leading-History (EFLH) - Full Version}
\label{alg2}
\begin{algorithmic}[1]
\STATE Input: OCO algorithm ${\mathcal A}$, active expert set $S_t$, horizon $T$ and constant $\varepsilon>0$.
\STATE Pruning rule: let ${\mathcal A}_{(t,k)}$ be an instance of ${\mathcal A}$ initialized at $t$ with lifespan $4 l_k=4\lfloor 2^{(1+\varepsilon)^k}/2 \rfloor+4$, for $2^{(1+\varepsilon)^k}/2\le T$. ``Deceased'' experts are removed from the active expert set $S_t$.
\STATE Initialize: $S_1=\{(1,1),(1,2),\ldots\}$, $w_1^{(1,k)}=\frac{1}{GD} \min \left\{\frac{1}{2}, \sqrt{\frac{\log T}{l_k}}\right\}$.
\FOR{$t = 1, \ldots, T$}
\STATE Let $W_t=\sum_{(j,k)\in S_t} w_t^{(j,k)}$.
\STATE Play $x_t=\sum_{(j,k)\in S_t} \frac{w_t^{(j,k)}}{W_t} x_t^{(j,k)}$, where $x_t^{(j,k)}$ is the prediction of ${\mathcal A}_{(j,k)}$.
\STATE Perform multiplicative weight update to get $w_{t+1}$. For $(j,k)\in S_t$
$$
w_{t+1}^{(j,k)}=w_t^{(j,k)} \left(1+\frac{1}{GD} \min \left\{\frac{1}{2},\sqrt{\frac{\log T}{l_k}}\right\} (\ell_t(x_t)-\ell_t(x_t^{(j,k)})) \right)
$$
\STATE Update $S_t$ according to the pruning rule. Initialize
$$
w_{t+1}^{(t+1,k)}=\frac{1}{GD} \min \left\{\frac{1}{2}, \sqrt{\frac{\log T}{l_k}}\right\}
$$
if $(t+1,k)$ is added to $S_{t+1}$ (i.e., when $l_k \mid t$).
\ENDFOR
\end{algorithmic}
\end{algorithm}
The basic approach of the previous section achieves vanishing adaptive regret with only $O(\log \log T)$ experts, improving the efficiency of previous works \cite{hazan2009efficient,daniely2015strongly}. In this section, we extend the basic version, Algorithm \ref{alg1}, and show how to achieve an $\tilde{O}(|I|^{\frac{1+\varepsilon}{2}})$ adaptive regret bound with $O( \log \log T/\varepsilon)$ experts.
The intuition stems from the recursion of regret bounds. Suppose the construction of our experts guarantees that for any interval with length $x$, there exists a sub-interval with length $\Theta(x^{\alpha})$ in the end which is covered by some expert with the same initial time, for some constant $\alpha \ge 0$. Then similarly, we need to solve the recursion of a regret bound function $g$ such that
$$
g(x)\ge g(x-x^{\alpha})+x^{\frac{\alpha}{2}} ,
$$
which approximately gives the solution $g(x)=\Theta(x^{1-\frac{\alpha}{2}})$. To approach the optimal rate we set $\alpha=1-\varepsilon$, giving an $\tilde{O}(|I|^{\frac{1+\varepsilon}{2}})$ regret bound. It remains to describe an explicit construction that guarantees a covering with $\alpha=1-\varepsilon$.
Suppose our construction contains experts with lifespans of the form $f(n)$; then it is equivalent to require that $f(n+1)^{1-\varepsilon}\sim f(n)$, which is approximately $f(n+1)\sim f(n)^{1+\varepsilon}$. Initializing $f(1)=2$, for example, gives an alternative doubly-exponential lifespan $2^{(1+\varepsilon)^k}$.
We also need to slightly modify how we define the experts and the pruning rule, since $2^{(1+\varepsilon)^k}$ is no longer necessarily an integer. Define $l_k=\lfloor 2^{(1+\varepsilon)^k}/2 \rfloor+1$; we hold experts with lifespan $4 l_k$ for every $k$ satisfying $2^{(1+\varepsilon)^k}/2\le T$. Additionally, we initialize an expert with lifespan $4 l_k$ at time $t$ if $l_k \mid (t-1)$; notice that this might create multiple experts with the same initial time point, in contrast to Algorithm \ref{alg1}. The resulting Algorithm \ref{alg2} has the following regret guarantee.
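For concreteness, the number of lifespan scales retained by this construction can be computed directly. The short sketch below (ours) counts the scales $k$ with $2^{(1+\varepsilon)^k}/2\le T$ and checks that the count is at most $\log_{1+\varepsilon}\log_2(2T)+1 = O(\log\log T/\varepsilon)$:

```python
import math

def num_scales(T: int, eps: float) -> int:
    # count the scales k >= 1 whose lifespan l_k = floor(2^((1+eps)^k)/2) + 1
    # is kept by the construction, i.e. those with 2^((1+eps)^k)/2 <= T
    k, count = 1, 0
    while 2 ** ((1 + eps) ** k) / 2 <= T:
        count += 1
        k += 1
    return count

# the count is at most log_{1+eps}(log2(2T)), i.e. O(log log T / eps)
for eps in (0.1, 0.5, 1.0):
    for T in (10**3, 10**6, 10**9):
        assert num_scales(T, eps) <= math.log(math.log2(2 * T), 1 + eps) + 1
```

Larger $\varepsilon$ keeps fewer scales at the price of a looser regret exponent, which is exactly the trade-off in the theorem below.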
\begin{theorem}\label{main2}
When using OGD as the expert algorithm ${\mathcal A}$, Algorithm \ref{alg2} achieves the following adaptive regret bound over any interval $I\subset [1,T]$ for general convex loss
$$ \text{SA-Regret}({\mathcal A}) \le 48cGD \sqrt{\log T} |I|^{\frac{1+\varepsilon}{2}}$$
with $O(\log \log T/\varepsilon)$ experts.
\end{theorem}
The proof is essentially the same as that of Theorem \ref{main} and is left to the appendix; the main new step is to derive a generalized version of Lemma \ref{tech}, which roughly says that
$$
x^{\frac{1-\varepsilon}{2}}=O\left(x^{\frac{1+\varepsilon}{2}}-(x-x^{1-\varepsilon})^{\frac{1+\varepsilon}{2}}\right) .
$$
\section{Limits of the History Lookback Technique}\label{s6}
\begin{figure}
\caption{Illustration of the history lookback technique}
\label{fig:log}
\end{figure}
In this section we discuss the limitations of the history lookback technique, which is used to derive all our results. The basic idea of the technique is to use recursion to bound the adaptive regret: for any interval of length $y$, there is guaranteed to be an expert, initiated near the end of the interval, covering a terminal sub-interval of length $x(y)$. The expert guarantees some regret $r(x)$ over the small interval, and we denote the regret over the rest of the large interval by $R(y-x)$.
Now $R(y-x)+r(x)$ becomes a regret bound over the large interval, and we would like to find $R$ satisfying $R(y-x)+r(x)\le R(y)$, allowing us to use induction on the interval length to get an adaptive regret bound $R(|I|)$ for any interval $I$. The result of \cite{hazan2009efficient} for general convex loss, for example, can be interpreted as a special case of setting $r(x)=\log T \sqrt{x}$, $x(y)=\frac{y}{4}$ and $R(x)=5\log T \sqrt{x}$.
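This example instance of the recursion is easy to verify numerically: since $5\sqrt{3/4}+\sqrt{1/4}\approx 4.83\le 5$ and the $\log T$ factor scales out, $R(y-y/4)+r(y/4)\le R(y)$ holds for every $y$. A quick check (ours, with an arbitrary placeholder value for $\log T$):

```python
import math

LOG_T = 10.0  # arbitrary placeholder for log T; it cancels in the comparison

def r(x: float) -> float:      # expert regret on the covered sub-interval
    return LOG_T * math.sqrt(x)

def R(y: float) -> float:      # candidate adaptive regret bound
    return 5 * LOG_T * math.sqrt(y)

# the recursion R(y - x(y)) + r(x(y)) <= R(y) with x(y) = y/4 holds because
# 5*sqrt(3/4) + sqrt(1/4) ~= 4.83 <= 5
assert all(R(y - y / 4) + r(y / 4) <= R(y) for y in range(1, 10_000))
```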
Typically the function $r(x)$ is determined by the problem itself. Still, the interval length evolution $x(y)$ is adjustable, and we aim to find the smallest $x(y)$ (i.e., the most efficient) that maintains a near-optimal $R(x)$. We would like to find a tight trade-off between $R$ and $x$ in the following inequality:
$$
R(y-x(y))+r(x(y))\le R(y)
$$
In particular, we are interested in how small $x(y)$ can be when $r(x)=\sqrt{x}$, while maintaining $R(x)=o(x)$. We have the following impossibility result.
\begin{proposition}\label{prop:lb}
Suppose $r(x)=C_1 \sqrt{x}$ and $0<x(y)<\min\{C_2 y^{\frac{1}{n}}, \frac{y}{2}\}$ for $y>1$, with constants $C_1>0, C_2 \ge 1$. Then any $R(\cdot)$ satisfying
$$
R(y-x(y))+r(x(y))\le R(y)
$$
must be lower bounded by $R(y)\ge \frac{C_1}{2\sqrt{C_2}} y^{1-\frac{1}{2n}}$.
\end{proposition}
Proposition \ref{prop:lb} indicates that we cannot simultaneously maintain computation better than double-log and vanishing regret. For example, if we take interval lengths of the form $2^{2^{2^k}}$ (corresponding to $y=2^{2^{2^{k+1}}}$ and $x=\Omega(2^{2^{2^k}})$) so that the number of experts is just $O(\log \log \log T)$, we get an undesirable regret bound $R(I)=O(I^{1-\frac{1}{2\log T}})$.
Using the same approach, we can also find the optimal $x(y)$ when the expert regret bound $r(x)=x^{\alpha}$ and the desired adaptive regret bound $R(x)=x^{\beta}$ are given.
\begin{proposition} \label{cor:lb}
Let $r(x)=x^{\alpha}, R(x)=x^{\beta}$ where $0\le \alpha\le \frac{1}{2}, \alpha < \beta <1$.
Then choosing $x(y)=\Theta(y^{\frac{1-\beta}{1-\alpha}})$ guarantees the following:
\begin{enumerate}
\item $ r(x(y))\le \frac{2}{\beta}[R(y)-R(y-x(y))]$ is satisfied, thus the adaptive regret is bounded by $ O( \frac{x^\beta}{\beta})$.
\item
The computational overhead is $O(\frac{1-\beta}{\beta-\alpha}\log \log T)$.
\end{enumerate}
Meanwhile, any choice of $x(y)=o(y^{\frac{1-\beta}{1-\alpha}})$ will violate the first property.
\end{proposition}
The proofs of Proposition \ref{prop:lb} and Proposition \ref{cor:lb} are deferred to the appendix. As an implication, the case $\alpha=0$ will be used in the next section to derive more efficient dynamic regret algorithms.
\section{Efficient Dynamic Regret Minimization} \label{s5}
\begin{algorithm}[h!]
\caption{Efficient Follow-the-Leading-History (EFLH) - Exp-concave Version}
\label{alg3}
\begin{algorithmic}[1]
\STATE Input: OCO algorithm ${\mathcal A}$, active expert set $S_t$, horizon $T$, exp-concave parameter $\alpha$ and $\varepsilon>0$.
\STATE Pruning rule: let ${\mathcal A}_{(t,k)}$ be an instance of ${\mathcal A}$ initialized at $t$ with lifespan $4 l_k=4\lfloor 2^{(1+\varepsilon)^k}/2 \rfloor+4$, for only the largest $l_k \mid t-1$ satisfying $2^{(1+\varepsilon)^k}/2\le T$.
\STATE Initialize: $S_1=\{(1,1),(1,2),\ldots\}$, $w_1^{(1,k)}=\frac{1}{|S_1|}$.
\FOR{$t = 1, \ldots, T$}
\STATE Play $x_t=\sum_{(j,k)\in S_t} w_t^{(j,k)} x_t^{(j,k)}$, where $x_t^{(j,k)}$ is the prediction of ${\mathcal A}_{(j,k)}$.
\STATE Perform multiplicative weight update to get $w_{t+1}$. For $(j,k)\in S_t$
$$
\hat{w}_{t+1}^{(j,k)}=\frac{w_t^{(j,k)} e^{-\alpha \ell_t(x_t^{(j,k)})}}{\sum_{(i,k)\in S_t}w_t^{(i,k)} e^{-\alpha \ell_t(x_t^{(i,k)})}}
$$
\STATE Update $S_t$ according to the pruning rule. Set and update for all $j\le t$
$$w_{t+1}^{(t+1,k)}=\frac{1}{t+1} \ , \ w_{t+1}^{(j,k)}=\left(1-\frac{1}{t+1}\right)\hat{w}_{t+1}^{(j,k)}, $$
where $w_{t+1}^{(t+1,k)}$ is the weight of the newly added expert ${\mathcal A}_{(t+1,k)}$.
\ENDFOR
\end{algorithmic}
\end{algorithm}
In this section we show how to achieve near-optimal dynamic regret with a more efficient algorithm compared to the state of the art. When the loss functions are exp-concave or strongly-convex, running Algorithm \ref{alg3} with the experts being Online Newton Step (ONS) \cite{hazan2007logarithmic} or Online Gradient Descent (OGD), respectively, gives a near-optimal dynamic regret bound.
Algorithm \ref{alg3} is a simplified version of Algorithm \ref{alg2}. The main difference is that Algorithm \ref{alg3} does not require learning rate tuning, since we no longer need interval-length-dependent regret bounds as in the general convex case.
\begin{theorem}
Algorithm \ref{alg3} achieves the following dynamic regret bound for exp-concave (with ${\mathcal A}$ being ONS) or strongly convex (with ${\mathcal A}$ being OGD) loss functions
$$ \text{D-Regret}({\mathcal A}) = \sum_{t=1}^T \ell_t(x_t) - \min_{x_{1:T}^*} \sum_{t=1}^T \ell_t(x_t^*)= \tilde{O}\left(\frac{T^{\frac{1}{3}+\varepsilon} \mathcal{P}^{\frac{2}{3}}}{\varepsilon}\right) ,$$
where $\mathcal{P}=\sum_{t=1}^{T-1} \|x_{t+1}^*-x_t^*\|_1$. Further, the number of active experts is $O(\frac{\log \log T}{\varepsilon})$.
\end{theorem}
\begin{proof}
The proof follows by observing that both Theorem 14 in \cite{baby2021optimal} and Theorem 8 in \cite{baby2022optimal} only use FLH as an adaptive regret black-box. We maintain the low-level experts, ONS for exp-concave loss and OGD for strongly-convex loss, but replace FLH by Algorithm \ref{alg3}.
To proceed, we first show that Algorithm \ref{alg3} can be applied to exp-concave or strongly-convex loss functions, at the cost of a worse adaptive regret bound compared with the $O(\log^2 T)$ bound of FLH.
\begin{lemma}\label{lem exp}
Assume ${\mathcal A}$ guarantees a regret bound of $O(\log T)$. Algorithm \ref{alg3} achieves the following adaptive regret bound over any interval $I\subset [1,T]$ for exp-concave or strongly convex loss
$$ \text{SA-Regret}({\mathcal A}) = O(\frac{ I^\varepsilon \log T }{\varepsilon})$$
with $O(\frac{\log \log T}{\varepsilon})$ experts.
\end{lemma}
The proof of Lemma \ref{lem exp} is identical to that of Theorem \ref{main}, except that the regret of the experts and the recursion are different. The regret of each expert is guaranteed to be $O(\log I)$ by using ONS \cite{hazan2007logarithmic} as the expert algorithm $\mathcal{A}$ for exp-concave loss, or OGD for strongly-convex loss. We only need to solve the recursion when interval lengths of the form $2^{(1+\varepsilon)^k}$ are used.
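As a quick numerical sanity check (ours, not part of the formal argument), the following Python sketch counts the distinct expert lifespans of the form $2^{(1+\varepsilon)^k}$ that fit below a horizon $T$; the count tracks $\log_{1+\varepsilon}\log_2 T = O(\log\log T/\varepsilon)$, matching the claimed number of experts. The helper name \texttt{lifespans} is illustrative.

```python
import math

def lifespans(T, eps):
    """Distinct expert lifespans of the form 2^((1+eps)^k) not exceeding T."""
    spans = []
    k = 0
    while True:
        l = 2 ** ((1 + eps) ** k)
        if l > T:
            break
        spans.append(l)
        k += 1
    return spans

T, eps = 10**6, 0.1
spans = lifespans(T, eps)
# The count matches log_{1+eps} log_2 T up to rounding.
bound = math.log(math.log2(T), 1 + eps)
print(len(spans), math.ceil(bound))
```

For $T=10^6$ and $\varepsilon=0.1$ only a few dozen distinct lifespans exist, as opposed to the $\Theta(\log T)$ lifespans $2^k$ of the geometric covering.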
According to Lemma \ref{lem exp}, Algorithm \ref{alg3} achieves a worse regret of $O(\frac{ I^{\varepsilon}\log T}{\varepsilon})$ instead of the $O(\log^2 T)$ of FLH. Fortunately, the regret bounds of \cite{baby2021optimal}, \cite{baby2022optimal} are achieved by summing up the regret of FLH over $O(T^{\frac{1}{3}}\mathcal{P}^{\frac{2}{3}})$ intervals; therefore, by using Algorithm \ref{alg3} instead of FLH we get a final bound of $\tilde{O}(\frac{ T^{\frac{1}{3}+\varepsilon}\mathcal{P}^{\frac{2}{3}}}{\varepsilon})$. To this end, we extract the following proposition from their results.
\begin{proposition}[Lemma 30 + Lemma 31 + Theorem 14 in \cite{baby2021optimal}]
There exists a partition $P=\cup_{i=1}^M I_i$ of the whole interval with size $M=O(T^{\frac{1}{3}}\mathcal{P}^{\frac{2}{3}})$, such that the overall dynamic regret is bounded by
$$
\text{D-Regret}\le \sum_{i=1}^M (\text{Regret}_{{\mathcal A}}(I_i)+\text{Regret}_{{\mathcal A}_{\text{meta}}}(I_i)+\tilde{O}(1))
$$
where $\text{Regret}_{{\mathcal A}}(I_i)$ is the regret of the best expert (ONS/OGD) over interval $I_i$, and $\text{Regret}_{{\mathcal A}_{\text{meta}}}(I_i)$ is the regret of the meta algorithm (FLH/Algorithm \ref{alg3}) over the best expert ${\mathcal A}$ over interval $I_i$.
\end{proposition}
Putting $M=O(T^{\frac{1}{3}}\mathcal{P}^{\frac{2}{3}})$ and $\text{Regret}_{{\mathcal A}_{\text{meta}}}(I)=O(\frac{I^{\varepsilon}\log T}{\varepsilon})\le O(\frac{ T^{\varepsilon}\log T}{\varepsilon})$ together, we get the desired regret guarantee.
The overall computation consists of the number of experts in Algorithm \ref{alg3}, and the computation of each expert. For exp-concave loss we use ONS as the expert, which has $O(d^2)$ computation, thus the overall computation is $O(\frac{d^2 \log \log T}{\varepsilon})$. For strongly-convex loss, OGD is used as the expert, and the overall computation is $O(\frac{d \log \log T}{\varepsilon})$.
\end{proof}
\iffalse
\begin{corollary}
Consider the path length in $\ell_1$ norm: $\mathcal{P}=\sum_{t=1}^T \|x_{t+1}^*-x_t^*\|_1$. If we assume the losses are exp-concave or strongly convex, there is an algorithm that achieves $O(\frac{T^{\frac{1}{3}+\delta} \mathcal{P}^{\frac{2}{3}}}{\delta})$ dynamic regret bound where $\delta<1$ is any positive constant, with $O(\log \log T)$ computation per round.
\end{corollary}
To proceed, we first show that Algorithm \ref{alg1} can be applied to exp-concave loss functions, but at the cost of a worse adaptive regret bound compared with the $O(\log I)$ bound of FLH.
\begin{corollary}\label{cor exp}
Assume the loss functions are exp-concave (or strongly-convex). Given an OCO algorithm ${\mathcal A}$ with regret bound $O(\log I)$, the adaptive regret of Algorithm \ref{alg1} is upper bounded by $O(\frac{I^\delta}{\delta})$ for any interval $I\subset [1,T]$ where $\delta<1$ is any positive constant. The number of active experts per round is $O(\log \log T)$.
\end{corollary}
\begin{proof}
The proof is identical to that of Theorem \ref{main}, except that the regret of the experts and the recursion are different. The regret of each expert is guaranteed to be $O(\log I)$ by using ONS \cite{hazan2007logarithmic} as the expert algorithm $\mathcal{A}$ for exp-concave loss, or OGD for strongly-convex loss. We only need to solve the recursion when interval lengths of the form $2^{2^k}$ are used.
Recall that this interval length choice corresponds to $y(x)=x^2$ in Remark \ref{remark main}, and now we are solving
$$
R(x^2-x)+\log x\le R(x^2)
$$
We may take $R(x)=\frac{x^{\delta}}{\delta}$, because the following inequality holds
$$
x^{2\delta}-(x^2-x)^{\delta}\ge x^{2\delta}(1-(1-\frac{1}{x})^{\delta})\ge x^{2\delta}(1-(1-\frac{1}{x^{\delta}}))=x^{\delta}\ge \delta \log x
$$
for any $x\ge 1$ and $0<\delta<1$. The last inequality follows by
$$
\log x=\log (1+x-1)\le x-1\le \frac{1+\delta(x-1)}{\delta}\le \frac{(1+x-1)^{\delta}}{\delta}=\frac{x^{\delta}}{\delta}
$$
\end{proof}
As a direct implication of Corollary \ref{cor exp}, Algorithm \ref{alg1} can make dynamic regret algorithms more efficient, by replacing the base FLH learners with our algorithm.
\begin{corollary}
Consider the path length in $\ell_1$ norm: $\mathcal{P}=\sum_{t=1}^T \|x_{t+1}^*-x_t^*\|_1$. If we assume the losses are exp-concave or strongly convex, there is an algorithm that achieves $O(\frac{T^{\frac{1}{3}+\delta} \mathcal{P}^{\frac{2}{3}}}{\delta})$ dynamic regret bound where $\delta<1$ is any positive constant, with $O(\log \log T)$ computation per round.
\end{corollary}
\begin{proof}
The proof follows by observing that both Theorem 14 in \cite{baby2021optimal} and Theorem 8 in \cite{baby2022optimal} only make use of FLH as an adaptive regret black-box. We maintain the low-level experts: ONS for exp-concave and OGD for strongly-convex, but replace FLH by Algorithm \ref{alg1} which achieves a worse regret $O(\frac{I^{\delta}}{\delta})$ instead of $O(\log I)$ of FLH by Corollary \ref{cor exp}. The regret bounds of \cite{baby2021optimal}, \cite{baby2022optimal} are achieved by summing up the regret of FLH over $O(T^{\frac{1}{3}}\mathcal{P}^{\frac{2}{3}})$ number of intervals, therefore by using Algorithm \ref{alg3} instead of FLH we get a final bound $O(\frac{T^{\frac{1}{3}+\delta}\mathcal{P}^{\frac{2}{3}}}{\delta})$.
The overall computation consists of the number of experts in Algorithm \ref{alg3}, and the computation of each expert. Since \cite{baby2021optimal} uses ONS as the expert which has $O(d^2)$ computation, the overall computation is $O(d^2 \log \log T)$. For strongly-convex loss, OGD is used as the expert \cite{baby2022optimal}, thus the overall computation is $O(d \log \log T)$.
\end{proof}
\fi
\section{Conclusion}\label{s7}
In this paper we propose a more efficient reduction from regret minimization algorithms to adaptive and dynamic regret minimization. We apply a new construction of experts with doubly exponential lifespans $2^{(1+\varepsilon)^k}$, then obtain an $\tilde{O}(|I|^{\frac{1+\varepsilon}{2}})$ adaptive regret bound with $O( \log \log T/\varepsilon)$ experts. As an implication, we show that $O( \log \log T/\varepsilon)$ experts also suffice for near-optimal dynamic regret. Our result characterizes the trade-off between regret and efficiency in minimizing adaptive regret in online learning, showing how to achieve near-optimal adaptive regret bounds with $O(\log \log T)$ experts.
We have also shown that the technique of history look-back cannot be used to further improve the number of experts in a reduction from regret to adaptive regret, if the regret is to be near optimal. Can we go beyond this technique to improve computational efficiency even further?
\iffalse
This implies an $\tilde{O}(|I|^{\frac{1}{2}})$ adaptive regret bound with $O(\log \log T)$ number of experts, improving upon previous works that required $O(\log T)$ experts for nearly the same adaptive regret bound.
\fi
\appendix
\section{Proof of Lemma \ref{mw regret}}
\begin{proof}
The expert regret is upper bounded by $3GD\sqrt{t-i}$ due to the optimality of $\mathcal{A}_i$, and the choice of $\eta$ is optimal up to a constant factor of $2$. We only need to upper bound the regret of the multiplicative weight algorithm. We focus on the case where $\sqrt{\frac{\log T}{l_j}} \le \frac{1}{2}$: in the other case the length $t-i$ of the sub-interval is $O(\log T)$, its regret is upper bounded by $(t-i)GD =O(GD\sqrt{\log T (t-i)})$, and the conclusion follows directly.
We define the pseudo weight $\tilde{w}_t^{(j)}=GD\sqrt{\frac{l_j}{\log T}} w_t^{(j)}$ for $i\le t\le i+l_i$, and for $t>i+l_i$ we simply set $\tilde{w}_t^{(j)}=\tilde{w}_{i+l_i}^{(j)}$. Letting $\tilde{W}_t=\sum_{j\in S_t} \tilde{w}_t^{(j)}$, we will show the following inequality:
\begin{equation}\label{eq3}
\tilde{W}_t\le t
\end{equation}
We prove this by induction. For $t=1$ it follows from the fact that $\tilde{W}_1=1$. Now we assume it holds for all $t'\le t$. We have
\begin{align*}
\tilde{W}_{t+1}&=\sum_{j\in S_{t+1}} \tilde{w}_{t+1}^{(j)}\\
&=\tilde{w}_{t+1}^{(t+1)}+\sum_{j\in S_{t+1}, j\le t} \tilde{w}_{t+1}^{(j)}\\
&\le 1+ \sum_{j\in S_{t+1}, j\le t} \tilde{w}_{t+1}^{(j)}\\
&= 1+ \sum_{j\in S_{t+1}, j\le t} \tilde{w}_t^{(j)} \left(1+\frac{1}{GD} \sqrt{\frac{\log T}{l_j}} (\ell_t(x_t)-\ell_t(x_t^{(j)})) \right)\\
&=1+ \tilde{W}_t+ \sum_{j\in S_t} \tilde{w}_t^{(j)} \frac{1}{GD} \sqrt{\frac{\log T}{l_j}} (\ell_t(x_t)-\ell_t(x_t^{(j)}))\\
&=1+ \tilde{W}_t+ \sum_{j\in S_t} w_t^{(j)} (\ell_t(x_t)-\ell_t(x_t^{(j)}))\\
&\le t+1 + \sum_{j\in S_t} w_t^{(j)} (\ell_t(x_t)-\ell_t(x_t^{(j)}))
\end{align*}
We further show that $\sum_{j\in S_t} w_t^{(j)} (\ell_t(x_t)-\ell_t(x_t^{(j)}))\le 0$:
\begin{align*}
\sum_{j\in S_t} w_t^{(j)} (\ell_t(x_t)-\ell_t(x_t^{(j)}))&=W_t \sum_{j\in S_t} \frac{w_t^{(j)}}{W_t} (\ell_t(x_t)-\ell_t(x_t^{(j)}))\\
&= W_t (\ell_t(x_t)-\ell_t(x_t))\\
&=0
\end{align*}
which finishes the proof of induction.
By inequality \ref{eq3}, we have that
$$
\tilde{w}_{t+1}^{(i)}\le \tilde{W}_{t+1} \le t+1
$$
Taking the logarithm of both sides, we have
$$
\log(\tilde{w}_{t+1}^{(i)})\le \log(t+1)
$$
Recall the expression
$$
\tilde{w}_{t+1}^{(i)}=\prod_{\tau=i}^t \left(1+\frac{1}{GD} \sqrt{\frac{\log T}{l_i}} (\ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}^{(i)}))\right)
$$
By using the fact that $\log(1+x)\ge x-x^2, \forall x\ge -1/2$ and
$$
|\frac{1}{GD} \sqrt{\frac{\log T}{l_i}} (\ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}^{(i)}))|\le 1/2
$$
we obtain
\begin{align*}
\log(\tilde{w}_{t+1}^{(i)})&\ge \sum_{\tau=i}^t \frac{1}{GD} \sqrt{\frac{\log T}{l_i}} (\ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}^{(i)}))-\sum_{\tau=i}^t [\frac{1}{GD} \sqrt{\frac{\log T}{l_i}} (\ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}^{(i)}))]^2\\
&\ge \sum_{\tau=i}^t \frac{1}{GD} \sqrt{\frac{\log T}{l_i}} (\ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}^{(i)}))-\frac{\log T}{l_i} (t-i)
\end{align*}
Combining this with $\log(\tilde{w}_{t+1}^{(i)})\le \log(t+1)$, we have that
\begin{align*}
\sum_{\tau=i}^t (\ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}^{(i)}))&\le
\frac{1}{GD} \sqrt{\frac{\log T}{l_i}} (t-i)+\frac{1}{GD} \sqrt{\frac{l_i}{\log T}} \log(t+1)
\end{align*}
Since $\frac{1}{4}l_i\le t-i\le l_i$ by Lemma \ref{p1} and $1+2=3$, we conclude the proof.
\end{proof}
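The cancellation $\sum_{j\in S_t} w_t^{(j)} (\ell_t(x_t)-\ell_t(x_t^{(j)}))= 0$ used in the induction step can be checked numerically when the played point $x_t$ is the weight-averaged expert prediction and the loss is linear (the linearized-loss setting). The following one-dimensional Python toy, with illustrative random values, is ours and serves only as a sanity check.

```python
import random

# Check: for a linear loss l(x) = g * x and the weighted-average prediction
# x_t = sum_j (w_j / W) x_j, one has sum_j w_j (l(x_t) - l(x_j)) = 0.
random.seed(0)
n = 5
w = [random.random() for _ in range(n)]       # unnormalized weights
W = sum(w)
xs = [random.uniform(-1, 1) for _ in range(n)]  # expert predictions
g = 0.7                                        # gradient of the linear loss
x_t = sum(wj * xj for wj, xj in zip(w, xs)) / W
residual = sum(wj * (g * x_t - g * xj) for wj, xj in zip(w, xs))
print(abs(residual))  # zero up to floating-point error
```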
\iffalse
\section{Proof of Theorem \ref{thm: bandit}}
To begin with, we need to prove the regret bound of EXP-3 with general unbiased estimator of loss. Assume that the loss value $\ell_t(i)$ of each arm $i$ at any time $t$ is bounded in $[0,1]$, we have the following lemma.
\begin{lemma}\label{lemma: exp}
Given $\tilde{\ell}_t(i)$, an unbiased estimator of $\ell_t(i)$ such that there is a constant $C$ with $0\le \tilde{\ell}_t(i)\le C$, the modified EXP-3 Algorithm \ref{alg exp} using $\tilde{\ell}_t(i)$ with $\eta=\sqrt{\frac{\log n}{T n}}/C$ has regret bound $2C\sqrt{ T \log n}$.
\end{lemma}
\begin{proof}
Following the standard analysis of EXP-3 in \cite{hazan2016introduction}, we have that
\begin{align*}
\mathbb{E}[\text{regret}]&= \mathbb{E}[\sum_{t=1}^T \ell_t(i_t)-\sum_{t=1}^T \ell_t(i^*)]\\
&\le \mathbb{E}[\sum_{t=1}^T \tilde{\ell}_t(w_t)-\sum_{t=1}^T \tilde{\ell}_t(i^*)]\\
&\le \mathbb{E}[\eta \sum_{t=1}^T \sum_{i=1}^n \tilde{\ell}_t(i)^2 w_t(i)+\frac{\log n}{\eta}]\\
&\le \eta \frac{C^2}{n} Tn +\frac{\log n}{\eta}\\
&\le 2C\sqrt{ T \log n}
\end{align*}
\end{proof}
With Lemma \ref{lemma: exp}, we construct $\log T$ experts, each being a general EXP-3 algorithm as above, applied on intervals with lengths of the form $2^k$ where $k=1,...,\log T$. The unbiased estimator is
$$
\tilde{\ell}_t= n \ell_t(i) \mathbf{e}_i, \text{ w.p. } \frac{1}{n}
$$
The constant $C$ is just equal to the number of arms $n$. As a result, each expert has expected regret $2n\sqrt{I \log n}$ over its own intervals.
It only remains to check the regret of the meta MW algorithm. Our analysis follows Theorem 1 in \cite{daniely2015strongly} with a few variations, and also resembles the proof of Lemma \ref{mw regret}.
For any interval $I\in S$, first we notice that the actual regret over experts ${\mathcal A}$ can be transferred as
$$
\mathbb{E}[\sum_{t\in I} \ell_t(x_t)-\sum_{t\in I} \ell_t({\mathcal A}_k(t))]
= \mathbb{E}[\sum_{t\in I} \tilde{\ell}_t(x_t)-\sum_{t\in I} \tilde{\ell}_t({\mathcal A}_k(t))]=\mathbb{E}[\sum_{t\in I}\tilde{r}_t(k)]
$$
which allows us to deal with the pseudo loss $\tilde{\ell}_t$ instead.
Define the pseudo-weight $\tilde{w}_t(k)$ to be $\tilde{w}_t(k)=\frac{w_t(k)}{\eta_k}$, and $\tilde{W}_t=\sum_k \tilde{w}_t(k)$. We are going to prove that $\tilde{W}_t\le t(\log t +1)$ by induction. When $t=1$, only ${\mathcal A}_1$ is active therefore $\tilde{W}_1=1\le 1(\log 1+1)$.
Assume now the claim holds for any $\tau\le t$, we decompose $\tilde{W}_{t+1}$ as
\begin{align*}
\tilde{W}_{t+1}&=\sum_{k} \tilde{w}_{t+1}(k)\\
&=\sum_{k, 2^k|t+1} \tilde{w}_{t+1}(k)+\sum_{k, 2^k\nmid t+1} \tilde{w}_{t+1}(k)\\
&\le \log(t+1)+1+\sum_{k, 2^k\nmid t+1} \tilde{w}_{t+1}(k)
\end{align*}
because there are at most $\log(t+1)+1$ number of different interval in $S$ starting at time $t+1$, where each such interval has initial weight $\tilde{w}_{t+1}(k)=1$.
Now according to the induction hypothesis
\begin{align*}
\sum_{k, 2^k\nmid t+1} \tilde{w}_{t+1}(k)&=\sum_{k, 2^k\nmid t+1} \tilde{w}_t(k)(1+\eta_k \tilde{r}_t(k))\\
&=\tilde{W}_t+\sum_{k, 2^k\nmid t+1}\tilde{w}_t(k)\eta_k \tilde{r}_t(k)\\
&\le t(\log t +1)+\sum_{k, 2^k\nmid t+1}w_t(k) \tilde{r}_t(k)
\end{align*}
We complete the argument by showing that $\sum_{k, 2^k\nmid t+1}w_t(k) \tilde{r}_t(k)=0$. Since $x_t={\mathcal A}_k(t)$ with probability $p_t(k)=\frac{w_t(k)}{W_t}$, we have that
\begin{align*}
\sum_{k, 2^k\nmid t+1}w_t(k) \tilde{r}_t(k)&=W_t\sum_{k, 2^k\nmid t+1}p_t(k)( \tilde{\ell}_t(x_{t})-\tilde{\ell}_t({\mathcal A}_k(t)))\\
&=W_t(\tilde{\ell}_t(x_{t})-\tilde{\ell}_t(x_{t}))=0
\end{align*}
Because weights are all non-negative, we obtain
$$
t(\log t+1)\ge \tilde{W}_t\ge \tilde{w}_t(k)
$$
hence using the inequality that $\log(1+x)\ge x-x^2$ for $x\ge -\frac{1}{2}$, we have that
\begin{align*}
2\log t&\ge \log(\tilde{w}_t(k))=\sum_{t\in I}\log (1+\eta_k \tilde{r}_t(k))\\
&\ge \sum_{t\in I}\eta_k \tilde{r}_t(k)-\sum_{t\in I}(\eta_k \tilde{r}_t(k))^2\\
&\ge \eta_k (\sum_{t\in I} \tilde{r}_t(k)-\eta_k n^2)
\end{align*}
Rearranging the above inequality, we get the desired bound
$$
\sum_{t\in I}\tilde{r}_t(k)\le \eta_k n^2+\frac{2 \log T}{\eta_k}\le 2n(\log T+1)\sqrt{I}
$$
Finally, using the same argument as in A.2 of \cite{daniely2015strongly} (also Lemma 7 in \cite{lu2022adaptive}), we extend the $O(n\log T\sqrt{\log n I})$ regret bound over $I\in S$ to any interval $I$ at the cost of an additional $\sqrt{\log T}$ term, by observing that any interval can be written as the union of at most $\log T$ disjoint intervals in $S$ and using Cauchy-Schwarz.
\fi
\section{Proof of Theorem \ref{main2}}
The proof is essentially the same as that of Theorem \ref{main}; the main new step is to derive a generalized version of Lemma \ref{tech}.
\begin{lemma}\label{tech new}
For any $x\ge 1$ and $\varepsilon < \frac{1}{2}$, we have that
$$8 x^{\frac{1+\varepsilon}{2}}\ge 8(x-x^{1-\varepsilon}/2)^{\frac{1+\varepsilon}{2}}+(x/2)^{\frac{1-\varepsilon}{2}} $$
\end{lemma}
\begin{proof}
We would like to upper bound the term $(x-x^{1-\varepsilon}/2)^{\frac{1+\varepsilon}{2}}$. Since $0<x^{-\varepsilon}<1$, we have that
$$
(1-x^{-\varepsilon}/2)^{\frac{1+\varepsilon}{2}}=e^{\frac{1+\varepsilon}{2} \log (1-x^{-\varepsilon}/2) }\le e^{-\frac{1+\varepsilon}{4} x^{-\varepsilon}}\le 1-\frac{1+\varepsilon}{8} x^{-\varepsilon}
$$
where the last step follows from $e^{-x}\le 1-\frac{x}{2}$ when $0<x\le 1$. This estimate gives $x^{\frac{1+\varepsilon}{2}}-(x-x^{1-\varepsilon}/2)^{\frac{1+\varepsilon}{2}}\ge \frac{1+\varepsilon}{8}x^{\frac{1-\varepsilon}{2}}$, which concludes our proof.
\end{proof}
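As a numerical sanity check of the lemma (ours, illustrative only), the following Python snippet evaluates the gap between the two sides on a grid of $x\ge 1$ and $\varepsilon<\frac{1}{2}$:

```python
# Gap between the two sides of the inequality
#   8 x^((1+eps)/2) >= 8 (x - x^(1-eps)/2)^((1+eps)/2) + (x/2)^((1-eps)/2)
def gap(x, eps):
    lhs = 8 * x ** ((1 + eps) / 2)
    rhs = 8 * (x - x ** (1 - eps) / 2) ** ((1 + eps) / 2) + (x / 2) ** ((1 - eps) / 2)
    return lhs - rhs

checks = [gap(x, eps)
          for x in [1, 2, 10, 1e3, 1e6]
          for eps in [0.05, 0.1, 0.3, 0.49]]
print(min(checks) >= 0)
```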
We go through the rest of the proof, and omit details which are the same as in Theorem \ref{main}. The number of active experts per round is upper bounded by $4 \log_{1+\varepsilon}\log_2 T=O(\log \log T/\varepsilon)$, since at each time step there are at most 4 active experts with lifespan $4l_k$ for any $k$.
As for the regret bound, similarly we have the following property on the covering of intervals.
\begin{lemma}\label{p1 new}
For any interval $I=[s,t]$, there exists an integer $i\in [s,t-(t-s)^{1-\varepsilon}/2]$, such that ${\mathcal A}_i$ is alive throughout $[i,t]$.
\end{lemma}
The choice of $\eta=\sqrt{\frac{1}{l_k}}$ is still optimal for each expert up to a constant factor of $2$. An almost identical analysis to that of Lemma \ref{mw regret} yields the following (the only difference is that we perform the induction on $\tilde{W}_t\le \frac{4\log \log T}{\varepsilon} t$ instead, which does not affect the bound because $\log(\frac{\log \log T}{\varepsilon})=o(\log T)$).
\begin{lemma}\label{mw regret new}
For the $i$ and $\mathcal{A}_{(i,j)}$ chosen in Lemma \ref{p1 new}, the regret of Algorithm \ref{alg2} over the sub-interval $[i,t]$ is upper bounded by $3GD \sqrt{t-i}+3GD \sqrt{\log T(t-i)}$.
\end{lemma}
The reason for this difference is that at time $t=1$ there are multiple active experts in Algorithm \ref{alg2}, while there is just one in Algorithm \ref{alg1}. It is possible to make the proof simpler, as in Lemma \ref{mw regret}, but that would complicate the algorithm itself. We proceed to state our induction on $|I|$.
\paragraph{Base case:} for $|I|=1$, the regret is upper bounded by $GD\le 48GD \sqrt{\log T} \cdot 1^{\frac{1+\varepsilon}{2}}$.
\paragraph{Induction step:} suppose that for any $|I|<m$ we have the regret bound in the statement of the theorem. Consider now $t-s=m$. From Lemma \ref{p1 new} we know there exist an integer $i\in [s,t-(t-s)^{1-\varepsilon}/2]$ and $k$ satisfying $l_k\le (t-s)^{1-\varepsilon}/2\le 4l_k$, such that ${\mathcal A}_{(i,k)}$ is alive throughout $[i,t]$. Algorithm \ref{alg2} guarantees an
$$3GD \sqrt{t-i}+3 GD\sqrt{\log T(t-i)}\le 6GD \sqrt{\log T} (t-i)^{\frac{1}{2}}$$
regret over $[i,t]$ by Lemma \ref{mw regret new}, and by induction the regret over $[s,i]$ is upper bounded by \newline $48GD \sqrt{\log T} (i-s)^{\frac{1+\varepsilon}{2}}$. By the monotonicity of the function $f(y)=8(x-y)^{\frac{1+\varepsilon}{2}}+\sqrt{y}$ for $y\ge x^{1-\varepsilon}/2$, we reach the desired conclusion by Lemma \ref{tech new}. To see the monotonicity, we use the fact that $y\ge x^{1-\varepsilon}/2$:
\begin{align*}
f'(y)&=\frac{1}{2\sqrt{y}}-\frac{4(1+\varepsilon)}{ (x-y)^{\frac{1-\varepsilon}{2}}}\\
&\le \frac{1}{2\sqrt{y}}-\frac{4(1+\varepsilon)}{ x^{\frac{1-\varepsilon}{2}}}\\
&\le \frac{1}{2\sqrt{y}}-\frac{4(1+\varepsilon)}{\sqrt{2y}}\\
&\le 0
\end{align*}
\section{Proof of Proposition \ref{prop:lb}}
We prove the claim by induction. For $y=1$, it follows that $R(1)\ge r(1)=C_1$. Suppose that $R(y)\ge \frac{C_1}{2\sqrt{C_2}} y^{1-\frac{1}{2n}}$ for any $y\le m$; then for $y=m+1$ we have that
\begin{align*}
R(y)&\ge R(y-x(y))+r(x(y))\\
&\ge \frac{C_1}{2\sqrt{C_2}} (y-x)^{1-\frac{1}{2n}}+C_1\sqrt{x}\\
&=\frac{C_1}{2\sqrt{C_2}} y^{1-\frac{1}{2n}}(1-\frac{x}{y})^{1-\frac{1}{2n}}+C_1\sqrt{x}\\
&\ge \frac{C_1}{2\sqrt{C_2}} y^{1-\frac{1}{2n}}(1-\frac{2(1-\frac{1}{2n})x}{y})+C_1\sqrt{x}\\
&=\frac{C_1}{2\sqrt{C_2}} y^{1-\frac{1}{2n}}+C_1\sqrt{x}-\frac{C_1}{2\sqrt{C_2}} y^{1-\frac{1}{2n}}\frac{2(1-\frac{1}{2n})x}{y}\\
&\ge \frac{C_1}{2\sqrt{C_2}} y^{1-\frac{1}{2n}}
\end{align*}
where the inequality $(1-\varepsilon)^{\beta}=e^{\beta \log(1-\varepsilon)}\ge e^{-2\beta \varepsilon}\ge 1-2\beta \varepsilon$ is used for $0<\beta<1$, $0<\varepsilon\le \frac{1}{2}$.
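The induction step above can be sanity-checked numerically. The Python sketch below (ours, illustrative only) takes the critical choice $x(y)=C_2 y^{1/n}$ (cf.\ Proposition \ref{cor:lb} with $\alpha=\frac{1}{2}$, $\beta=1-\frac{1}{2n}$) and the illustrative constants $C_1=C_2=1$, and verifies that the candidate bound $R(y)=\frac{C_1}{2\sqrt{C_2}}y^{1-\frac{1}{2n}}$ survives one step of the recursion $R(y)\ge R(y-x(y))+r(x(y))$ with $r(x)=C_1\sqrt{x}$:

```python
# One step of the induction: R(y - x(y)) + r(x(y)) should dominate R(y)
# for the candidate lower bound R(y) = C1 / (2 sqrt(C2)) * y^(1 - 1/(2n)).
def step_gap(y, n, C1=1.0, C2=1.0):
    beta = 1 - 1 / (2 * n)
    x = C2 * y ** (1 / n)           # critical leakage choice x(y) = C2 * y^(1/n)
    R = lambda z: C1 / (2 * C2 ** 0.5) * z ** beta
    return R(y - x) + C1 * x ** 0.5 - R(y)

gaps = [step_gap(y, n) for y in [4, 16, 100, 10**4] for n in [2, 3, 5]]
print(min(gaps) >= 0)
```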
\section{Proof of Proposition \ref{cor:lb}}
We first verify that the choice of $x(y)=y^{\frac{1-\beta}{1-\alpha}}$ indeed satisfies the two properties.
The first property is now equivalent to proving
$$
\frac{\beta}{2} y^{\frac{\alpha(1-\beta)}{1-\alpha}}\le y^{\beta}-(y-y^{\frac{1-\beta}{1-\alpha}})^{\beta}
$$
We estimate the RHS as follows
\begin{align*}
y^{\beta}-(y-y^{\frac{1-\beta}{1-\alpha}})^{\beta}&=y^{\beta}-y^{\beta}(1-y^{\frac{\alpha-\beta}{1-\alpha}})^{\beta}\\
&\ge y^{\beta}-y^{\beta}(1-\frac{\beta y^{\frac{\alpha-\beta}{1-\alpha}}}{2})\\
&=\frac{\beta y^{\frac{\alpha(1-\beta)}{1-\alpha}}}{2}
\end{align*}
The second property follows from the same reasoning as in Section \ref{s4}: such a choice of $x(y)$ corresponds to interval lengths of the form $2^{{(1+\frac{\beta-\alpha}{1-\beta})}^k}$.
The impossibility argument for $x(y)=o(y^{\frac{1-\beta}{1-\alpha}})$ follows from the same analysis as Proposition \ref{prop:lb}, which is actually a special case with $\alpha=\frac{1}{2}$ and $\beta=1-\frac{1}{2n}$.
\section{Proof of Lemma \ref{lem exp}}
The proof is identical to that of Theorem \ref{main}, except that the regret of the experts and the recursion are different. The regret of each expert is guaranteed to be $O(\log I)$ by using ONS \cite{hazan2007logarithmic} as the expert algorithm $\mathcal{A}$ for exp-concave loss, or OGD for strongly-convex loss. It is worth noting that any $\lambda$-strongly-convex function is also $\frac{\lambda}{G^2}$-exp-concave.
Let us first check the covering property of intervals. The only difference between Algorithm \ref{alg3} and Algorithm \ref{alg2} is that instead of initiating (potentially) multiple experts with different lifespans at some time $t$, Algorithm \ref{alg3} only initiates the expert with the largest lifespan. As a result, this has no effect on the covering, and Lemma \ref{p1 new} still holds. Because both ONS for exp-concave loss and OGD for strongly-convex loss are adaptive to the horizon, the regret on the small interval $[i,t]$ remains the optimal $O(\log T)$.
We only need to solve the recursion when interval lengths of the form $2^{(1+\varepsilon)^k}$ are used. By a similar argument to Lemma 3.3 in \cite{hazan2009efficient}, the regret $r(x)$ over the small interval is $O(\log T +\log x)=O(\log T)$, which we discuss later. Recall that this interval length choice corresponds to $y=x^{1+\varepsilon}$, and now we are solving
$$
R(x^{1+\varepsilon}-x)+\log T\le R(x^{1+\varepsilon})
$$
We claim that $R(x)=\frac{2\log T x^{\varepsilon}}{\varepsilon}$ is valid, by the following argument. The claim is equivalent to proving
$$
x^{\varepsilon(1+\varepsilon)}-(x^{1+\varepsilon}-x)^{\varepsilon}\ge \frac{\varepsilon }{2}
$$
We have the following estimation on the LHS:
\begin{align*}
x^{\varepsilon(1+\varepsilon)}-(x^{1+\varepsilon}-x)^{\varepsilon}&=x^{\varepsilon(1+\varepsilon)}(1-(1-x^{-\varepsilon})^{\varepsilon})\\
&\ge x^{\varepsilon(1+\varepsilon)}(1-(1-\frac{\varepsilon x^{-\varepsilon}}{2}))\\
&=\frac{\varepsilon x^{\varepsilon^2}}{2}\ge \frac{\varepsilon}{2}
\end{align*}
for any $x\ge 1$ and $0<\varepsilon<1$, which proves the lemma. The first inequality is due to
$$
(1-x^{-\varepsilon})^{\varepsilon}=e^{\varepsilon \log (1-x^{-\varepsilon})}\le e^{-\varepsilon x^{-\varepsilon}}\le 1-\frac{\varepsilon x^{-\varepsilon}}{2}.
$$
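The key inequality $x^{\varepsilon(1+\varepsilon)}-(x^{1+\varepsilon}-x)^{\varepsilon}\ge \frac{\varepsilon}{2}$ can also be checked numerically; the following Python snippet (ours, a sanity check only) evaluates the slack on a grid of $x\ge 1$ and $0<\varepsilon<1$:

```python
# Slack of the recursion inequality for R(x) = 2 log(T) x^eps / eps:
#   x^(eps(1+eps)) - (x^(1+eps) - x)^eps - eps/2 >= 0.
def slack(x, eps):
    return x ** (eps * (1 + eps)) - (x ** (1 + eps) - x) ** eps - eps / 2

vals = [slack(x, eps)
        for x in [1, 2, 10, 1e3, 1e6]
        for eps in [0.05, 0.2, 0.5, 0.9]]
print(min(vals) >= 0)
```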
Now we finish the proof for the argument that the regret $r(x)$ over the small interval is $O(\log T)$. We follow the method of \cite{hazan2009efficient}. The regret $r(x)$ can be decomposed as the regret of the expert algorithm and the regret of the multiplicative weight algorithm against the best expert. The regret of the expert algorithm $\sum_{\tau=s}^t \ell_{\tau}(x_{\tau}^{(s,k)})-\min_{x} \ell_{\tau}(x)$ can be upper bounded by $O(\log T)$ by the regret guarantees of ONS and OGD.
Using the $\alpha$-exp-concavity of $\ell_t$, we have that
$$
e^{-\alpha \ell_t(x_t)}=e^{-\alpha \ell_t\left(\sum_{(j,k)\in S_t} w_t^{(j,k)} x_t^{(j,k)}\right)}\ge \sum_{(j,k)\in S_t} w_t^{(j,k)} e^{-\alpha \ell_t(x_t^{(j,k)})}
$$
Taking logarithm,
$$
\ell_t(x_t)\le -\frac{1}{\alpha} \log \sum_{(j,k)\in S_t} w_t^{(j,k)} e^{-\alpha \ell_t(x_t^{(j,k)})}
$$
as a result,
\begin{align*}
\ell_t(x_t)-\ell_t(x_t^{(j,k)})&\le \frac{1}{\alpha}(\log e^{-\alpha \ell_t(x_t^{(j,k)})}-\log \sum_{(j,k)\in S_t} w_t^{(j,k)} e^{-\alpha \ell_t(x_t^{(j,k)})})\\
&=\frac{1}{\alpha}\log \frac{\hat{w}_{t+1}^{(j,k)}}{w_t^{(j,k)}}
\end{align*}
If $i<t$, we have that
$$
\ell_t(x_t)-\ell_t(x_t^{(j,k)})\le\frac{1}{\alpha}\left[\log \frac{\hat{w}_{t+1}^{(j,k)}}{\hat{w}_t^{(j,k)}}+\log\frac{\hat{w}_t^{(j,k)}}{w_t^{(j,k)}}\right]\le \frac{1}{\alpha}(\log \hat{w}_{t+1}^{(j,k)}-\log \hat{w}_t^{(j,k)}+\frac{2}{t})
$$
For $i=t$ we have that $\log w_t^{(t,k)}\ge -\log t$, thus
$$
\ell_t(x_t)-\ell_t(x_t^{(t,k)})\le \frac{1}{\alpha}(\log \ensuremath{\mathbf h}at{w}_{t+1}^{(t,k)} +\log t)
$$
Therefore, the regret against the desired expert ${\mathcal A}_{(s,k)}$ over any interval $[s,t]$ can be bounded by
\begin{align*}
\sum_{\tau=s}^t \ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}^{(s,k)})&=\ell_{s}(x_{s})-\ell_{s}(x_{s}^{(s,k)})+\sum_{\tau=s+1}^t \ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}^{(s,k)})\\
&\le \frac{1}{\alpha}(\log \hat{w}_{t+1}^{(t,k)} +\log t+\sum_{\tau=s+1}^t \frac{2}{\tau})\\
&\le \frac{2}{\alpha}(\log I +\log T).
\end{align*}
\end{document} |
\begin{document}
\title{Interactive Leakage Chain Rule for Quantum Min-entropy}
\author{Ching-Yi Lai and Kai-Min Chung
\thanks{
CYL is with the Institute of Communications Engineering,
National Chiao Tung University,
Hsinchu 30010, Taiwan.
(email: [email protected])
KMC is with the Institute of Information Science, Academia Sinica,
Nankang, Taipei 11529, Taiwan.
(email: [email protected])
}
}
\maketitle
\begin{abstract}
The leakage chain rule for quantum min-entropy quantifies the change of min-entropy when one party gets additional leakage
about the information source. Herein we provide an interactive version that quantifies the change of min-entropy between two parties, who share an initial
classical-quantum state and are allowed to run a two-party protocol. As an application, we prove new versions of lower bounds on the complexity of quantum communication of classical information.
\end{abstract}
\section{Introduction}
Let $(X,Y,Z)$ be a classical distribution over $\{0,1\}^{n}\times\{0,1\}^m\times \{0,1\}^{\ell}$. The (classical) leakage chain rule states that
$$H(X|Y,Z) \geq H(X|Y) - \ell,$$
which says that an $\ell$-bit ``leakage'' $Z$ can decrease the entropy of $X$ (conditioned on $Y$) by at most $\ell$. Note that the statement is different from the standard chain rule for Shannon entropy, $H(X,Y) = H(X) + H(Y|X)$. The leakage chain rule generally holds for various entropy notions and is especially useful for cryptographic applications. In particular, a computational leakage chain rule for computational min-entropy, first proved by~\cite{DziembowskiP08,ReingoldTTV08}, has found several applications in classical cryptography~\cite{GentryW11,ChungKLR11,FOR12,JP14,ChungLP15}.
The notion of (smooth) min- and max-entropies in the quantum setting was proposed by Renner and Wolf~\cite{RW04}. The leakage chain rule for quantum min-entropy has also been discussed and is more complicated than its classical analogue due to the effect of quantum entanglement. Consider a state $\rho_{XYZ}$ on the state space $\mathcal{X} \otimes \mathcal{Y} \otimes \mathcal{Z}$, where $Z$ is an $\ell$-qubit system. The leakage chain rule for quantum min-entropy states that
\begin{equation} \label{eqn:LCR}
H_{\rm min}(X|Y,Z)_\rho \geq
\begin{cases}
H_{\rm min}(X|Y)_\rho - \ell, \mbox{\quad \ \ if $\rho$ is a separable state on $(\mathcal{X} \otimes \mathcal{Y})$ and $\mathcal{Z}$} \\
H_{\rm min}(X|Y)_\rho - 2\ell, \mbox{\quad otherwise.}
\end{cases}
\end{equation}
In other words, the leakage $Z$ can decrease the quantum min-entropy of $X$ conditioned on $Y$ by at most $\ell$ if there is no entanglement, and by at most $2\ell$ in general. Note that the factor of $2$ is tight by
the application of superdense coding~\cite{BW92}. The separable case is proved by Desrosiers and Dupuis~\cite{DD10}, and the general case is proved by Winkler et~al.~\cite{WTHR11}, both of which are motivated by cryptographic applications. Furthermore, a computational version of the quantum leakage chain rule is explored in~\cite{CCL+17} with applications in quantum leakage-resilient cryptography.
Herein we formulate an interactive version of the leakage chain rule with initial classical-quantum (cq) states. Let $\rho_{XY}$ be a cq-state shared between Alice and Bob, where $X$ is a classical input from Alice. Then Alice and Bob engage in an interaction in which Alice may leak information about $X$ to Bob. We are interested in how much leakage is generated from the interaction in terms of the communication complexity of the interaction.
We restrict the discussion to the situation where $X$ is a classical input that remains constant during the interaction.
This is formalized by allowing Alice to perform only quantum operations controlled by $X$ on her system.
\begin{theorem}[Interactive leakage chain rule for quantum min-entropy] \label{thm:LCR-informal}
Suppose Alice and Bob share a cq state $\rho = \rho_{XY} \in D(\mathcal{X}\otimes\mathcal{Y})$,
where Alice holds the classical system $X$ and Bob holds the quantum system $Y$.
If an interactive protocol $\Pi$ is executed by Alice and Bob
and $m_{B}$ and $m_{A}$ are the total numbers of qubits that Bob and Alice send to each other, respectively,
then
\begin{align}
&H_{\rm min}(X|Y)_{\sigma}\geq H_{\rm min}(X|Y)_\rho - \min\{ m_{B}+m_{A}, 2m_{A}\},
\end{align}
where $\sigma_{XY}=\Pi(\rho_{XY})$ is the joint state at the end of the protocol.
\end{theorem}
It is interesting to discuss the implication of Theorem~\ref{thm:LCR-informal} for Holevo's problem of conveying classical messages by transmitting quantum states. In the interactive setting, Cleve et~al.~\cite{CDNT99} and Nayak and Salzman~\cite{NS06} showed that for Alice to reliably communicate $n$ bits of classical information to Bob, roughly $n$ qubits of total communication and $n/2$ qubits of one-way communication from Alice to Bob are necessary. The same conclusion follows immediately from Theorem~\ref{thm:LCR-informal}.
In fact, in the case without initial shared cq-states, the general form of the result in~\cite{NS06} (Theorem 1.4) agrees with the above interactive leakage chain rule.
Thus our interactive leakage chain rule can be viewed as a generalization of~\cite{NS06} that allows initial correlation between $X$ and $Y$. We remark that our proof is not a generalization of the proof in~\cite{NS06}, although both proofs use Yao's lemma~\cite{Yao93}.
Conceptually, the use of the interactive leakage chain rule makes the proof simple.
This manuscript is organized as follows.
In Sec.~\ref{sec:prelim} we give some basics about quantum information.
Then we discuss the leakage chain rule for quantum min-entropy and its application to the problem of communicating classical information in Sec.~\ref{sec:leakage-chain-rule}.
\section{Preliminaries} \label{sec:prelim}
We give notation and briefly introduce basics of quantum mechanics here.
The Hilbert space of a quantum system $A$ is denoted by the corresponding calligraphic letter $\mathcal{A}$
and its dimension is denoted by $d_A$.
Let $L(\mathcal{A})$ be the space of linear operators on $\mathcal{A}$.
A quantum state of system $A$ is described by a \emph{density operator} $\rho_A\in L(\mathcal{A})$ that is positive semidefinite and with unit trace $(\textnormal{tr}(\rho_A)=1)$.
Let $D(\mathcal{A})= \{ \rho_A\in L(\mathcal{A}): \rho_A\geq 0, \textnormal{tr}(\rho_A)=1\}$ be the set of density operators on $\mathcal{A}$.
When $\rho_A\in D(\mathcal{A})$ is of rank one, it is called a \emph{pure} quantum state and we can write $\rho_A=\ket{\psi}_A\bra{\psi}$ for some unit vector $\ket{\psi}_A\in \mathcal{A}$,
where $\bra{\psi}=\ket{\psi}^{\dag}$ is the conjugate transpose of $\ket{\psi}$. If $\rho_A$ is not pure, it is called a \emph{mixed} state and can be expressed as a convex combination of pure quantum states.
The evolution of a quantum state $\rho\in D(\mathcal{A})$ is described by a completely positive and trace-preserving (CPTP) map $\Psi: D(\mathcal{A})\rightarrow D(\mathcal{A}')$
such that $\Psi(\rho) =\sum_{k} E_k \rho E_k^\dag$,
where $\sum_k E_k^\dag E_k =\mathsf{id}_{A}$.
In particular, if the evolution is a unitary $U$, we have the evolved state $\Psi(\rho)=U\rho U^{\dag}$.
The Hilbert space of a joint quantum system $AB$ is the tensor product of the corresponding Hilbert spaces $\mathcal{A}\otimes \mathcal{B}$.
Let $\mathsf{id}_A$ denote the identity on system $A$.
For $\rho_{AB}\in D(\mathcal{A}\otimes \mathcal{B})$, we will use $\rho_A=\textnormal{tr}_B(\rho_{AB})$
to denote its reduced density operator in system $A$,
where
\[
\textnormal{tr}_B(\rho_{AB})= \sum_{i} \left(\mathsf{id}_A\otimes\bra{i}_B\right) \rho_{AB} \left(\mathsf{id}_A\otimes \ket{i}_B\right)
\]
for an orthonormal basis $\{\ket{i}_B\}$ for $\mathcal{B}$.
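As a concrete numerical illustration (a minimal NumPy sketch of our own; the function name is illustrative), the partial trace is a single tensor contraction, and tracing out $B$ from a maximally entangled two-qubit state leaves the maximally mixed state on $A$:

```python
import numpy as np

def partial_trace_B(rho_AB, dA, dB):
    """Trace out subsystem B of a (dA*dB x dA*dB) density matrix."""
    # View rho as a 4-index tensor t[i, j, k, l] = <i|<j| rho |k>|l>,
    # then contract the two B indices (j = l).
    t = rho_AB.reshape(dA, dB, dA, dB)
    return np.einsum('ijkj->ik', t)

# Example: (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_AB = np.outer(phi, phi.conj())
rho_A = partial_trace_B(rho_AB, 2, 2)   # maximally mixed: id/2
```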
A \emph{separable} state $\rho_{AB}$ has a density operator of the form
\[
\rho_{AB}=\sum_{x} p_x \rho_A^x\otimes \rho_B^x,
\]
where $\rho_A^x \in D(\mathcal{A})$ and $\rho_B^x \in D(\mathcal{B})$. In particular,
a classical-quantum (cq) state $\rho_{AB}$ has a density operator of the form
\[
\rho_{AB}=\sum_{a} p_a \ket{a}_A\bra{a}\otimes \rho_B^a,
\]
where $\{\ket{a}_A\}$ is an orthonormal basis for $\mathcal{A}$ and $\rho_B^a \in D(\mathcal{B})$.
We define the following specific quantum operations on cq-states that preserve the classical system.
\begin{definition} A quantum operation $\Gamma$ on a classical-quantum system $AB$ is said to be controlled by the classical system $A$ if, for a cq state $\rho_{AB}= \sum_{a} p_a \ket{a}_A\bra{a}\otimes \rho_B^a$, $$\Gamma(\rho_{AB})= \sum_{a} p_a \ket{a}_A\bra{a}\otimes \Gamma^a(\rho_B^a),$$
where $\Gamma^a$ are CPTP maps.
In this case, $\Gamma$ is called a \emph{classically-controlled quantum operation}.
In particular, if $\Gamma^a$ are unitaries, $\Gamma$ is called a \emph{classically-controlled unitary}.
\end{definition}
\noindent Note that the reduced state on the classical system $A$ of a cq-state $\rho_{AB}$ remains the same after a
classically-controlled quantum operation $\Gamma$. That is, $\textnormal{tr}_B(\rho_{AB})= \textnormal{tr}_B(\Gamma(\rho_{AB}))$.
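The branch-wise action can be sketched in a few lines (our own illustration, assuming the cq state is stored as a classical distribution plus a list of conditional states): each branch $\rho_B^a$ evolves under its own unitary while the classical distribution $\{p_a\}$ is untouched.

```python
import numpy as np

# cq state rho_AB = sum_a p_a |a><a| (x) rho_B^a, stored branch-by-branch.
p = np.array([0.7, 0.3])                 # classical distribution on A
rho_B = [np.diag([1.0, 0.0]),            # rho_B^0
         np.diag([0.5, 0.5])]            # rho_B^1

def controlled_unitary(rho_B, unitaries):
    """Branch a of the cq state evolves as U_a rho_B^a U_a^dag."""
    return [U @ r @ U.conj().T for U, r in zip(unitaries, rho_B)]

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X on branch a = 1
new_B = controlled_unitary(rho_B, [np.eye(2), X])
# p is untouched and each branch keeps unit trace, so tr_B is preserved.
```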
\begin{lemma} [Schmidt decomposition]
For a pure state $\ket{\psi}_{AB}\in\mathcal{A}\otimes\mathcal{B}$, there exist orthonormal sets $\{\ket{i}_A\}\subset\mathcal{A}$ and $\{\ket{i}_B\}\subset\mathcal{B}$
such that \[
\ket{\psi}_{AB}=\sum_{i=1}^s \lambda_i \ket{i}_A\otimes \ket{i}_B,
\]
where $\lambda_i\geq 0$, $s\leq \min\{d_A, d_B\}$, and the smallest such $s$ is called the \emph{Schmidt rank} of $\ket{\psi}_{AB}$.
\end{lemma}
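Numerically, the Schmidt decomposition is just the singular value decomposition of the amplitude matrix; the following short sketch (ours, not from the text) recovers the two equal Schmidt coefficients of an EPR pair.

```python
import numpy as np

def schmidt_coefficients(psi, dA, dB):
    """Schmidt coefficients of a bipartite pure state |psi> in A (x) B."""
    M = psi.reshape(dA, dB)      # amplitude matrix M[i, j] = <ij|psi>
    return np.linalg.svd(M, compute_uv=False)

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # EPR pair
lam = schmidt_coefficients(psi, 2, 2)       # two equal coefficients
rank = int(np.sum(lam > 1e-12))             # Schmidt rank
```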
\begin{lemma} [Purification]
Suppose $\rho_A\in D(\mathcal{A})$ with $\mathcal{A}$ of finite dimension $d_A$. Then there exist a space $\mathcal{B}$ of dimension $d_B\geq d_A$ and $\ket{\psi}_{AB}\in \mathcal{A}\otimes \mathcal{B}$ such that
\[
\textnormal{tr}_B \ket{\psi}_{AB}\bra{\psi} = \rho_A.
\]
\end{lemma}
The trace distance between two quantum states $\rho$ and $\sigma$ is
$$||{\rho}-{\sigma}||_{\mathrm{tr}},$$
where $||X||_{\mathrm{tr}}=\frac{1}{2}\textnormal{tr}{\sqrt{X^{\dag}X}}$ is the trace norm of $X$.
The fidelity between $\rho$ and $\sigma$ is
$$
F(\rho,\sigma)=\textnormal{tr} \sqrt{\rho^{1/2}{\sigma}\rho^{1/2}}.
$$
\begin{theorem}[Uhlmann's theorem~\cite{Uhl76}]
Let $\ket{\psi}$ be any purification of $\rho_A$. Then
$$F(\rho_{A},\sigma_A)= \max_{\ket{\phi'}} |\braket{\psi}{\phi'}|,$$ where the maximization is over all purifications $\ket{\phi'}$ of $\sigma_A$.
\end{theorem}
\noindent Below is a variant of Uhlmann's theorem.
\begin{example}gin{corollary} \label{cor:Uhlmann}
Suppose $\rho_A$ is a reduced density operator of $\rho_{AB}$.
Suppose $\rho_A$ and $\sigma_A$ have fidelity $F(\rho_A,\sigma_A)\geq 1- \epsilon$. Then
there exists $\sigma_{AB}$ with $\textnormal{tr}_B(\sigma_{AB})=\sigma_A$ such that $F(\rho_{AB},\sigma_{AB})\geq 1- \epsilon$.
\end{corollary}
\begin{proof}
Let $\ket{\psi}_{ABR}$ be a purification of $\rho_{AB}$, which is immediately a purification of $\rho_A$.
By Uhlmann's theorem, there is a purification $\ket{\phi}$ of $\sigma_A$ such that $|\braket{\psi}{\phi}|\geq 1-\epsilon$. Let $\sigma_{AB}= \textnormal{tr}_R(\ket{\phi}\bra{\phi})$.
Then $F(\rho_{AB},\sigma_{AB})\geq |\braket{\psi}{\phi}| \geq 1- \epsilon$.
\end{proof}
Fuchs and van~de Graaf~\cite{FvdG99} proved the following relation between the fidelity and the trace distance of two quantum states $\rho$ and $\sigma$:
\begin{align}
1-F(\rho,\sigma)\leq || \rho-\sigma||_{\mathrm{tr}}\leq \sqrt{1- F^2(\rho,\sigma)}. \label{eq:FT}
\end{align}
The purified distance is defined as
\begin{align}
P(\rho,\sigma)=&\sqrt{1- F^2(\rho,\sigma)}.
\end{align}
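The following self-contained sketch (ours) evaluates both distance measures for a pair of commuting states and checks the Fuchs--van de Graaf inequality numerically; note that, as in the text, the trace distance carries the factor $1/2$.

```python
import numpy as np

def psd_sqrt(X):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(X)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def trace_distance(rho, sigma):
    """(1/2) tr sqrt((rho - sigma)^dag (rho - sigma))."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def fidelity(rho, sigma):
    """F(rho, sigma) = tr sqrt(rho^{1/2} sigma rho^{1/2})."""
    s = psd_sqrt(rho)
    eigs = np.linalg.eigvalsh(s @ sigma @ s)
    return np.sum(np.sqrt(np.clip(eigs, 0, None)))

rho = np.diag([0.6, 0.4])
sigma = np.diag([0.5, 0.5])
F, T = fidelity(rho, sigma), trace_distance(rho, sigma)
# Fuchs-van de Graaf: 1 - F <= T <= sqrt(1 - F^2)
```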
For a {one-sided two-party protocol} (that is, only one party will have the output), where Alice has no (or little) information about Bob's input, Lo showed that it is possible for Bob to cheat by changing his input at a later time~\cite{Lo97}.
The basic idea can be formulated as the following lemma, which is proved by a standard argument using Uhlmann's theorem and the Fuchs--van de Graaf inequality~\cite{FvdG99} (for a proof, see, e.g.,~\cite{BB15}).
\begin{lemma}\label{lemma:lo_attack}
Suppose $\rho_A$, $\sigma_A\in D(\mathcal{A})$ are two quantum states with purifications $\ket{\phi}_{AB}$, $\ket{\psi}_{AB}\in \mathcal{A}\otimes\mathcal{B}$, respectively, and $||\rho_A-\sigma_A||_{\textnormal{tr}}\leq \epsilon$. Then there exists a unitary $U_B\in L(\mathcal{B})$ such that
$$||\ket{\phi}_{AB}-\mathsf{id}_A\otimes U_B\ket{\psi}_{AB}||_{\textnormal{tr}}\leq \sqrt{\epsilon(2-\epsilon)}.$$
\end{lemma}
\subsection{Protocol Definition}
\begin{figure} [h]
\centerline{
\begin{tikzpicture}[scale=0.6][thick]
\fontsize{8pt}{1}
\tikzstyle{variablenode} = [draw,fill=white, shape=rectangle,minimum size=2.em]
\node [fill=black,inner sep=1.5pt,shape=circle,label=left:$\rho_{A_0B_0}$] (n0) at (-2,1.5) {} ;
\node [fill=white,inner sep=1.5pt] (n1) at (16,1.5) {} ;
\node [fill=white,inner sep=1.5pt] (bn4) at (-3,3) {$\mathrsfs{A}$} ;
\node [fill=white,inner sep=1.5pt] (bn4) at (-3,0) {$\mathrsfs{B}$} ;
\node[variablenode] [fill=white,inner sep=1.5pt] (an1) at (0,3) {$\Phi_1$};
\node[variablenode] [fill=white,inner sep=1.5pt] (an2) at (3,3) {$\Phi_2$} ;
\node[variablenode] [fill=white,inner sep=1.5pt] (an3) at (6,3) {$\Phi_3$};
\node [fill=white,inner sep=1.5pt,shape=circle] (an4) at (9,3) {\quad$\dots \quad$} ;
\node[variablenode] [fill=white,inner sep=1.5pt] (an5) at (12,3) {$\Phi_r$} ;
\node [fill=white,inner sep=1.5pt] (an6) at (14,3) {$$} ;
\node[variablenode] [fill=white,inner sep=1.5pt] (bn1) at (1,0) {$\Psi_1$} ;
\node[variablenode] [fill=white,inner sep=1.5pt] (bn2) at (4,0) {$\Psi_2$} ;
\node[variablenode] [fill=white,inner sep=1.5pt] (bn3) at (7,0) {$\Psi_3$} ;
\node [fill=white,inner sep=1.5pt,shape=circle] (bn4) at (10,0) {\quad$\dots \quad$} ;
\node[variablenode] [fill=white,inner sep=1.5pt] (bn5) at (13,0) {$\Psi_r$} ;
\node [fill=white,inner sep=1.5pt] (bn6) at (15,0) {} ;
\draw[dashed] (n0) -- (n1);
\draw[->] (n0)--(an1) node [above,midway] {$\mathcal{A}_0$\mbox{ \quad}};
\draw[->] (n0)--(bn1) node [below,midway] {$\mathcal{B}_0$\mbox{ \quad}};
\draw[->] (an1) -- (bn1) node [above,midway] {\mbox{ \quad}$\mathcal{X}_1$};
\draw[->] (an2) -- (bn2) node [above,midway] {\mbox{\quad}$ \mathcal{X}_2$};
\draw[->] (an3) -- (bn3) node [above,midway] {\mbox{\quad}$ \mathcal{X}_3$};
\draw[->] (an5) -- (bn5) node [above,midway] {\mbox{\quad}$ \mathcal{X}_r$};
\draw[->] (an1) -- (an2) node [above,midway] {\mbox{ \quad}$\mathcal{A}_1$};
\draw[->] (an2) -- (an3) node [above,midway] {\mbox{ \quad}$\mathcal{A}_2$};
\draw[->] (an3) -- (an4) node [above,midway] {\mbox{ \quad}$\mathcal{A}_3$};
\draw[->] (an4) -- (an5) node [above,midway] {\mbox{ \quad}$\mathcal{A}_{r-1}\quad $};
\draw[->] (an5) -- (an6) node [above,midway] {\mbox{ \quad}$\mathcal{A}_{r}$};
\draw[->] (bn1) -- (an2) node [below,midway] {\quad$ \mathcal{Y}_1$} ;
\draw[->] (bn2) --(an3) node [below,midway] {\mbox{\quad}$ \mathcal{Y}_2$} ;
\draw[->] (bn3) --(an4) node [below,midway] {\mbox{\quad}$ \mathcal{Y}_3$} ;
\draw[->] (bn4) --(an5) node [below,near start] {\mbox{\quad}$\quad \mathcal{Y}_{r-1}$} ;
\draw[->] (bn1) -- (bn2) node [below,midway] {\mbox{ \quad}$\mathcal{B}_1$};
\draw[->] (bn2) -- (bn3) node [below,midway] {\mbox{ \quad}$\mathcal{B}_2$};
\draw[->] (bn3) -- (bn4) node [below,midway] {\mbox{ \quad}$\mathcal{B}_3$};
\draw[->] (bn4) -- (bn5) node [below,midway] {\mbox{ \quad}$\mathcal{B}_{r-1}\quad $ };
\draw[->] (bn5) -- (bn6) node [below,midway] {\mbox{ \quad}$\mathcal{B}_{r}$};
\end{tikzpicture}
}
\caption{An interactive two-party quantum protocol.} \label{fig:two-party}
\end{figure}
We basically follow the definition of two-party quantum protocols in~\cite{GW07,DNS10}.
Consider a quantum protocol between two parties $\mathrsfs{A}$ and $\mathrsfs{B}$, where the party $\mathrsfs{A}$ sends the first and the last messages without loss of generality.
Such a two-party quantum protocol is defined as follows.
\begin{definition}[Two-party quantum protocol] \label{def:two-party}
An $(r,m_A,m_B)$ protocol $\Pi=( \mathrsfs{A},\mathrsfs{B})$ is a two-party protocol with $r$ rounds of interaction defined as follows:
\begin{enumerate}
\item input spaces $\mathcal{A}_0$ and $\mathcal{B}_0$ for parties $\mathrsfs{A}$ and $\mathrsfs{B}$, respectively;
\item memory spaces $\mathcal{A}_1, \dots, \mathcal{A}_r$ for $\mathrsfs{A}$ and $\mathcal{B}_1, \dots, \mathcal{B}_r$ for $\mathrsfs{B}$;
\item communication spaces $\mathcal{X}_1, \dots, \mathcal{X}_r$, $\mathcal{Y}_1, \dots, \mathcal{Y}_{r-1}$;
\item a series of quantum operations $\Phi_1,\dots,\Phi_r$ for $\mathrsfs{A}$
and a series of quantum operations $\Psi_1,\dots,\Psi_r$ for $\mathrsfs{B}$,
where
\begin{align*}
\Phi_1:& L(\mathcal{A}_0)\rightarrow L(\mathcal{A}_1\otimes \mathcal{X}_1);\\
\Phi_i:& L(\mathcal{A}_{i-1}\otimes \mathcal{Y}_{i-1})\rightarrow L(\mathcal{A}_{i}\otimes \mathcal{X}_{i}),\ i=2,\dots,r;\\
\Psi_j:& L(\mathcal{B}_{j-1}\otimes \mathcal{X}_{j})\rightarrow L(\mathcal{B}_{j}\otimes \mathcal{Y}_{j}),\ j=1,\dots,r-1;\\
\Psi_r:& L(\mathcal{B}_{r-1}\otimes \mathcal{X}_{r})\rightarrow L(\mathcal{B}_r).
\end{align*}
\end{enumerate}
The \emph{one-way communication complexities} (in qubits) from Alice to Bob and from Bob to Alice are
$m_A=\sum_{i=1}^r \log d_{X_i}$ and $m_B=\sum_{j=1}^{r-1}\log d_{Y_j}$, respectively.
The \emph{(total) communication complexity} of this protocol is $m_A+m_B$.
\end{definition}
For input state $\rho\in D(\mathcal{A}_0\otimes \mathcal{B}_0\otimes \mathcal{R})$, where $R$ is a reference system of dimension $d_R= d_{A_0}d_{B_0}$,
let
\begin{align*}
[\mathrsfs{A}_1^i\circledast \mathrsfs{B}_1^{i-1}](\rho)=& \left(\Phi_{i}\otimes\mathsf{id}_{B_{i-1},R}\right)\left(\Psi_{i-1}\otimes\mathsf{id}_{A_{i-1},R}\right) \cdots \left(\Psi_1\otimes \mathsf{id}_{A_1,R}\right)\left(\Phi_1\otimes \mathsf{id}_{B_0,R}\right) (\rho),\\
[\mathrsfs{A}_1^i\circledast \mathrsfs{B}_1^i](\rho)=& \left(\Psi_{i}\otimes\mathsf{id}_{A_{i},R}\right) \left(\Phi_{i}\otimes\mathsf{id}_{B_{i-1},R}\right) \cdots \left(\Psi_1\otimes \mathsf{id}_{A_1,R}\right)\left(\Phi_1\otimes \mathsf{id}_{B_0,R}\right) (\rho),
\end{align*}
and let $\Pi(\rho)=[\mathrsfs{A}_1^r\circledast \mathrsfs{B}_1^r](\rho)$ denote the final state of protocol $\Pi=(\mathrsfs{A},\mathrsfs{B})$ on input $\rho$.
Figure~\ref{fig:two-party} illustrates an interactive two-party quantum protocol.
Note that the input state $\rho_{A_0B_0}\in D(\mathcal{A}_0\otimes \mathcal{B}_0)$ may consist of a classical string, tensor products of pure quantum states, or an entangled quantum state, depending on the context of the underlying protocol.
For example, a part of it can be EPR pairs shared between Alice and Bob.
Also the reference system $R$ is not shown in Fig.~\ref{fig:two-party}.
\begin{remark}
In the following discussion we will consider a specific two-party protocol, where the input of $\mathrsfs{A}$ is a classical system $A_0$ that is preserved throughout the protocol and its quantum operations
\begin{align*}
\Phi_1:& L(\mathcal{A}_0)\rightarrow L(\mathcal{A}_0\otimes \mathcal{A}_1\otimes \mathcal{X}_1),\\
\Phi_i:& L(\mathcal{A}_0\otimes \mathcal{A}_{i-1}\otimes \mathcal{Y}_{i-1})\rightarrow L(\mathcal{A}_0\otimes \mathcal{A}_{i}\otimes \mathcal{X}_{i}),\ i=2,\dots,r
\end{align*}
are classically-controlled quantum operations controlled by $A_0$.
\end{remark}
\section{Leakage Chain Rule for quantum min-entropy}\label{sec:leakage-chain-rule}
We first review the notion of quantum (smooth) min-entropy~\cite{RW04}.
\begin{definition} \label{def:minentropy}
Consider a bipartite quantum state $\rho_{AB}\in D\left(\mathcal{A}\otimes \mathcal{B}\right)$.
The min-entropy of $A$ conditioned on $B$ is defined as
\begin{align}
H_{\min}(A|B)_{\rho} = -\inf_{\sigma_{B}}
\left\{
\inf \left\{\lambda\in \mathbb{R}: \rho_{AB}\leq 2^\lambda \mathsf{id}_{A}\otimes \sigma_B\right\} \label{def:Hmin}
\right\}.
\end{align}
\end{definition}
When $\rho_{AB}$ is a cq-state, the quantum min-entropy has an operational meaning in terms of guessing probability~\cite{KRS09}. Specifically, if $H_{\min}(A|B)_{\rho} = k$, then the optimal probability of predicting the value of $A$ given $\rho_B$ is exactly $2^{-k}$.
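For a cq state on a single classical bit, the optimal guessing probability has the closed-form Helstrom expression, which gives a direct way to evaluate $H_{\min}$; a small sketch of our own (base-2 logarithms, as in the operational interpretation):

```python
import numpy as np

def hmin_binary_cq(p0, rho0, p1, rho1):
    """H_min(A|B) of rho_AB = p0|0><0| (x) rho0 + p1|1><1| (x) rho1.

    Operationally H_min = -log2 p_guess, and for a binary A the optimal
    (Helstrom) guessing probability is
    p_guess = (1 + ||p0*rho0 - p1*rho1||_1) / 2.
    """
    eigs = np.linalg.eigvalsh(p0 * rho0 - p1 * rho1)
    p_guess = 0.5 * (1 + np.sum(np.abs(eigs)))
    return -np.log2(p_guess)

rho0 = np.diag([1.0, 0.0])
rho1 = np.diag([0.0, 1.0])
h_orth = hmin_binary_cq(0.5, rho0, 0.5, rho1)  # orthogonal branches: 0 bits
h_same = hmin_binary_cq(0.5, rho0, 0.5, rho0)  # identical branches: 1 bit
```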
The smooth min-entropy of $A$ conditioned on $B$ is defined as
\begin{align*}
H_{\min}^{\epsilon}(A|B)_{\rho} = \sup_{ \rho': P(\rho',\rho)<\epsilon} H_{\min}(A|B)_{\rho'}.
\end{align*}
{For simplicity we focus on the min-entropy; our results can be generalized to the smooth min-entropy without much effort.}
In cryptography, we would like to see how much (conditional) min-entropy is left in an information source when the adversary gains additional information leakage.
This is characterized by the leakage chain rule for min-entropy. In the quantum case, the situation is different due to the phenomenon of quantum entanglement.
When two parties share a separable quantum state $\rho$, this is like the classical case and we have the following leakage chain rule for conditional quantum min-entropy~\cite{DD10}:
\begin{lemma}[{\cite[Lemma 7]{DD10}}]
Let $\rho = \rho_{AXB}=\sum_{k} p_{k}\rho_{AX}^k \otimes \rho_B^k$ be a separable state in $D(\mathcal{A}\otimes\mathcal{X}\otimes\mathcal{B})$.
Then
\[H_{\rm min}(A|XB)_{\rho} \geq H_{\rm min}(A|B)_\rho - \log d_X. \]
\end{lemma}
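As a classical sanity check of this chain rule (our own illustration), conditioning additionally on a leaked register $X$ of dimension $d_X$ can lower the classical conditional min-entropy by at most $\log d_X$:

```python
import numpy as np

def hmin_classical(p_ab):
    """Classical conditional min-entropy: -log2 sum_b max_a p(a, b)."""
    return -np.log2(np.sum(np.max(p_ab, axis=0)))

rng = np.random.default_rng(0)
p = rng.random((4, 2, 3))          # joint p(a, x, b) with |X| = 2
p /= p.sum()

h_before = hmin_classical(p.sum(axis=1))    # H_min(A|B)
h_after = hmin_classical(p.reshape(4, 6))   # H_min(A|XB)
# Chain rule: h_after >= h_before - log2 |X| = h_before - 1.
```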
Winkler et~al.~\cite{WTHR11} proved the leakage chain rule for quantum (smooth) min-entropy for general quantum states with entanglement.
\begin{lemma}[{\cite[Lemma~13]{WTHR11}}] \label{lemma:LCR2}
Let $\rho = \rho_{AXB}$ be a quantum state in $D(\mathcal{A}\otimes\mathcal{X}\otimes\mathcal{B})$.
Then
\[H_{\rm min}(A|XB)_{\rho} \geq H_{\rm min}(A|B)_\rho - 2\log d,\]
where $d= \min\{d_A d_B, d_X\}$.
\end{lemma}
Lemma~\ref{lemma:LCR2} characterizes the entropy loss only in terms of the one-way communication complexity.
We would like a version that is expressed in terms of the two-way communication complexity.
First we prove a variant of Yao's lemma~\cite{Yao93} (see also \cite[Lemma 4]{Kre95}).
For our purpose, the formulation is not symmetric in $\mathrsfs{A}$ and $\mathrsfs{B}$.
\begin{lemma} \label{lemma:Yao}
Suppose $\Pi=(\mathrsfs{A},\mathrsfs{B})$ is an $(r,m_A,m_B)$ quantum protocol with initial state $(\ket{x}\ket{0})_{A_0}\otimes \ket{\zeta}_{B_0}$,
where $x$ is a binary string,
and that the quantum operations $\Phi_i$ for $\mathrsfs{A}$ are classically-controlled unitaries controlled by $\ket{x}_{A_0}$.
Then the final state of the protocol can be written as
\[
\sum_{i\in\{0,1\}^{m_A+m_B}} \lambda_{i} \ket{x}_{A_0}\otimes\ket{\xi_{i}}_{A_r}\otimes \ket{\zeta_{i}}_{B_r},
\]
where $\lambda_{i}\geq 0$;
$\ket{\xi_{i}}_{A_r}$ can be determined by $\Pi$ and $x$; and
$\ket{\zeta_{i}}_{B_r}$ can be determined by $\Pi$ and $\ket{\zeta}_{B_0}$.
\end{lemma}
\begin{proof}
We prove it by induction.
For simplicity, we will omit the fixed register $\ket{x}_{A_0}$ in the following, remembering that the $\Phi_i$ are classically-controlled unitaries controlled by $\ket{x}_{A_0}$.
Suppose $\Pi=(\mathrsfs{A},\mathrsfs{B})$ is defined as in Def.~\ref{def:two-party}.
Let $m_{B}^{(i)} =\sum_{j=1}^i \log d_{Y_j}$ and $m_{A}^{(i)}=\sum_{j=1}^i \log d_{X_j}$.
The statement is true for the initial state $\rho=\ket{0}_{A_0}\otimes \ket{\zeta}_{B_0}$, which is of rank one.
Suppose the statement holds after $k$ rounds. That is,
\[
[\mathrsfs{A}_1^k\circledast \mathrsfs{B}_1^k](\rho)= \sum_{i\in\{0,1\}^{m_{A}^{(k)}+m_{B}^{(k)}}} \lambda_{i}^{(k)}\ket{\xi_{i}^{(k)}}_{A_k Y_k}\otimes \ket{\zeta_{i}^{(k)}}_{B_k},
\]
where we use the superscript $(k)$ to indicate the states $\ket{\xi_i}, \ket{\zeta_i}$ or coefficients $\lambda_i$ after $k$ rounds.
Thus
\begin{align*}\displaystyle
&(\Phi_{k+1}\otimes \mathsf{id}_{B_k}) \sum_{i\in\{0,1\}^{m_{A}^{(k)}+m_{B}^{(k)}}} \lambda_{i}^{(k)}\ket{\xi_{i}^{(k)}}_{A_k Y_k}\otimes \ket{\zeta_{i}^{(k)}}_{B_k}\\
\stackrel{(a)}{=}&\sum_{i\in\{0,1\}^{m_{A}^{(k)}+m_{B}^{(k)}}} \lambda_{i}^{(k)} \sum_{a\in\{0,1\}^{\log d_{X_{k+1}}}} \alpha_{i,a} \ket{\xi_{i}^{(k+1)}, a}_{A_{k+1}}\otimes \ket{a}_{X_{k+1}} \otimes \ket{\zeta_{i}^{(k)}}_{B_k}\\
\stackrel{\Psi_{k+1}\otimes \mathsf{id}_{A_{k+1}}}{\longrightarrow}& \sum_{i\in\{0,1\}^{m_{A}^{(k)}+m_{B}^{(k)}}} \sum_{a\in\{0,1\}^{\log d_{X_{k+1}}}} \lambda_{i}^{(k)}\alpha_{i,a} \ket{\xi_{i}^{(k+1)}, a}_{A_{k+1}}\otimes \Psi_{k+1}\left(\ket{a}_{X_{k+1}} \ket{\zeta_{i}^{(k)}}_{B_k}\right)\\
\stackrel{(b)}{=}&\sum_{i\in\{0,1\}^{m_{A}^{(k)}+m_{B}^{(k)}}} \sum_{a\in\{0,1\}^{\log d_{X_{k+1}}}} \lambda_{i}^{(k)}\alpha_{i,a} \ket{\xi_{i}^{(k+1)}, a}_{A_{k+1}}\otimes \sum_{b\in\{0,1\}^{\log d_{Y_{k+1}}}} \beta_{i,a,b} \ket{b}_{Y_{k+1}} \otimes \ket{\zeta_{i}^{(k)},a,b}_{B_{k+1}}\\
\stackrel{(c)}{=}&\sum_{i\in\{0,1\}^{m_{A}^{(k+1)}+m_{B}^{(k+1)}}} \lambda_{i}^{(k+1)}\ket{\xi_{i}^{(k+1)}}_{A_{k+1}Y_{k+1}}\otimes \ket{\zeta_{i}^{(k+1)}}_{B_{k+1}},
\end{align*}
where $(a)$ and $(b)$ are by Schmidt decomposition on $ \Phi_{k+1} \ket{\xi_{i}^{(k)}}_{A_kY_k}$ and $\Psi_{k+1}\left(\ket{a}_{X_{k+1}} \ket{\zeta_{i}^{(k)}}_{B_k}\right)$, respectively,
with $\alpha_{i,a},\beta_{i,a,b}\geq 0$;
in (c) the indexes $i$, $a$, and $b$ are merged and $ \lambda_{i:a:b}^{(k+1)}= \lambda_{i}^{(k)}\alpha_{i,a}\beta_{i,a,b}$.
(We use $a:b$ to denote the concatenation of two strings $a$ and $b$.)
Since $\sum_{b\in\{0,1\}^{\log d_{Y_{k+1}}}} \beta_{i,a,b} \ket{b}_{Y_{k+1}} \otimes \ket{\zeta_{i:a:b}^{(k+1)}}_{B_{k+1}}=\Psi_{{k+1}}\left(\ket{a}_{X_{k+1}} \ket{\zeta_{i}^{(k)}}_{B_k}\right)$ and $\ket{\zeta_{i}^{(k)}}_{B_k}$ can be determined by $\Pi$ and $\ket{\zeta}_{B_0}$ by assumption,
$\ket{\zeta_{i:a:b}^{(k+1)}}_{B_{k+1}}$ can also be determined by $\Pi$ and $\ket{\zeta}_{B_0}$.
Similarly, $\ket{\xi_i^{(k+1)}}_{A_{k+1}}$ can be determined by $\Pi$ and $x$.
\end{proof}
Next we consider a special type of interactive two-party protocol on an input cq-state $\rho = \rho_{AB}$, where the system $A$ is classical and will be preserved throughout the protocol. The interactive leakage chain rule bounds how much the min-entropy $H_{\min}(A|B)_{\rho}$ can be decreased by an ``interactive leakage'' generated by applying a two-party protocol $\Pi = (\mathrsfs{A},\mathrsfs{B})$ to $\rho$, where $A$ is treated as a classical input to $\mathrsfs{A}$ and $B$ is given to $\mathrsfs{B}$ as part of its initial state.
\begin{theorem}[Interactive leakage chain rule for quantum min-entropy] \label{lemma:interactiveLCR}
Suppose $\rho_{A_0B_0} $ is a cq-state,
where $A_0$ is classical. Let $\Pi = (\mathrsfs{A},\mathrsfs{B})$ be an $(r,m_A,m_B)$ two-party protocol whose quantum operations $\Phi_i$ are classically controlled by $A_0$.
Let $\sigma_{A_0A_rB_r}= \Pi(\rho_{A_0B_0})$ be the final state of the protocol.
Then
\begin{align}
&H_{\min}(A_0|B_r)_{\sigma}\geq H_{\min}(A_0|B_0)_\rho - \min\{ m_A+m_B, 2m_{A}\}. \label{eq:LCR}
\end{align}
We say that $\sigma_{B_r}$ is an \emph{interactive leakage} of $A_0$ generated by $\Pi$.
\end{theorem}
\begin{proof}
Let $\lambda=\log d_{A_0} - H_{\rm min}(A_0|B_0)_\rho$.
By definition~(\ref{def:Hmin}) there exists a density operator $\tau_{{B_0}}$
such that
$$\rho_{A_0B_0} \leq 2^{\lambda} \frac{\mathsf{id}_{A_0}}{d_{A_0}}\otimes \tau_{B_0}.$$
Suppose $\ket{\xi}_{B_0E}$ is a purification of $\tau_{B_0}$ over $\mathcal{B}_0\otimes\mathcal{E}$.
Without loss of generality, we assume that Alice and Bob have auxiliary quantum systems $R_1,R_2$, respectively, initialized in $\ket{0}_{R_1},\ket{0}_{R_2}$, so that
the protocol $\Pi$ can be extended to a protocol $\tilde{\Pi}$
such that the quantum operations of $\tilde{\Pi}$ are unitary operators controlled by $A_0$ for $\mathrsfs{A}$
and unitaries for $\mathrsfs{B}$,
and $\textnormal{tr}_{R_1R_2}\left(\tilde{\Pi}(\rho_{A_0B_0}\otimes \ket{0}_{R_1R_2}\bra{0})\right)= \Pi(\rho_{A_0B_0})$.
Now initially we have
$$\rho_{{A_0}{B_0}R_1R_2} \leq \frac{2^{\lambda}}{d_{{A_0}}} \sum_a \ket{a}_{A_0}\bra{a} \otimes \textnormal{tr}_E\left(\ket{\xi}_{{B_0}E}\bra{\xi}\right)\otimes\ket{0}_{R_1R_2}\bra{0}.$$
After the protocol the inequality becomes
\begin{align*}
\sigma_{{A_0}A_r{B_r}R_1R_2} \leq& \frac{2^{\lambda}}{d_{{A_0}}} \sum_a \tilde{\Pi} \left(\ket{a}_{A_0}\bra{a} \otimes \textnormal{tr}_E\left(\ket{\xi,0}_{{B_0}ER_2}\bra{\xi,0}\right)\otimes \ket{0}_{R_1}\bra{0} \right)\\
=& \frac{2^{\lambda}}{d_{{A_0}}} \sum_a \textnormal{tr}_E \left( \tilde{\Pi}\otimes \mathsf{id}_E \left(\ket{a}_{A_0}\bra{a} \otimes \ket{\xi,0}_{{B_0}ER_2}\bra{\xi,0}\otimes \ket{0}_{R_1}\bra{0}\right)\right)\\
\stackrel{(a)}{=}& \frac{2^{\lambda}}{d_{{A_0}}} \sum_a \ket{a}_{A_0}\bra{a} \otimes \textnormal{tr}_E \left( \sum_{i=1}^{2^{m_A+m_B}} \lambda_i^a \ket{\xi_i}_{{B_r}ER_2}\otimes \ket{\zeta_i}_{A_rR_1}
\sum_{j=1}^{2^{m_A+m_B}} \lambda_j^a \bra{\xi_j}_{{B_r}ER_2}\otimes \bra{\zeta_j}_{A_rR_1} \right),
\end{align*}
where $(a)$ follows from Lemma~\ref{lemma:Yao} and the coefficients $\lambda_j^a$ depend on the classical value $a$. Consequently,
\begin{align*}
\sigma_{{A_0}{B}_r}=\textnormal{tr}_{A_rR_1R_2} \sigma_{{A_0}A_r{B}_r{R_1R_2}} \leq& \frac{2^{\lambda}}{d_{{A_0}}} \sum_a \ket{a}_{A_0}\bra{a} \otimes \textnormal{tr}_{ER_2} \left( \sum_{i=1}^{2^{m_A+m_B}} \left(\lambda_i^a\right)^2 \ket{\xi_i}_{{B_r}ER_2}\bra{\xi_i}\right)\\
\leq& \frac{2^{\lambda +m_A+m_B}}{d_{{A_0}}} \sum_a \ket{a}_{A_0}\bra{a} \otimes \textnormal{tr}_{ER_2} \left(\frac{1}{2^{m_A+m_B}} \sum_{i=1}^{2^{m_A+m_B}} \ket{\xi_i}_{{B_r}ER_2}\bra{\xi_i}\right)\\
=&\frac{2^{\lambda +m_A+m_B}}{d_{{A_0}}} \sum_a \ket{a}_{A_0}\bra{a} \otimes \omega_{B_r},
\end{align*}
where $\omega_{B_r}= \textnormal{tr}_{ER_2} \left(\frac{1}{2^{m_A+m_B}} \sum_{i=1}^{2^{m_A+m_B}} \ket{\xi_i}_{{B_r}ER_2}\bra{\xi_i}\right)$.
Therefore, we have, by Definition~\ref{def:minentropy},
\[
H_{\rm min}({A_0}|{B}_r)_{\sigma}\geq H_{\rm min}({A_0}|{B_0})_\rho -( m_{B}+m_{A}).
\]
Each round of the interactive protocol consists of the following steps:
\begin{enumerate}
\item Bob performs a unitary operation on his qubits.
\item Bob sends some qubits to Alice.
\item Alice performs a (classically-controlled) quantum operation on her qubits.
\item Alice sends some qubits to Bob.
\end{enumerate}
Note that the min-entropy changes only when Alice sends qubits to Bob,
and by Lemma~\ref{lemma:LCR2}, it decreases by at most two for each qubit that Alice sends to Bob. Thus, we have
\begin{align*}
&H_{\rm min}(A_0|B_r)_{\sigma}\geq H_{\rm min}(A_0|B_0)_\rho - 2m_{A}.
\end{align*}
\end{proof}
In fact, the interactive leakage chain rule can be strengthened to allow pre-shared entanglement between
Alice and Bob by considering only the one-way communication complexity from Alice to Bob.
\begin{theorem}[Interactive leakage chain rule for quantum min-entropy with pre-shared entanglement] \label{lemma:interactiveLCR2}
Suppose Alice and Bob share an initial state $\rho_{A_0B_0}=\ket{\Phi^+}_{A_0''B_0''}^{\otimes m}\bra{\Phi^+}^{\otimes m}\otimes \rho_{A_0'B_0'}$,
where $\mathcal{A}_0=\mathcal{A}_0'\otimes\mathcal{A}_0''$, $\mathcal{B}_0=\mathcal{B}_0'\otimes\mathcal{B}_0''$, $\ket{\Phi^+}^{\otimes m}$ denotes $m$ EPR pairs, and $\rho_{A_0'B_0'}$ is a cq state.
If an $(r, m_A, m_B)$ two-party interactive protocol $\Pi$, where the quantum operations for $\mathrsfs{A}$ are classically controlled by $A_0$,
is executed by Alice and Bob with $m_A\leq m$,
then
\begin{align}
&H_{\rm min}(A_0|B_r)_{\sigma}\geq H_{\rm min}(A_0|B_0)_\rho - 2m_{A},
\end{align}
where $\sigma_{A_0B_r}= \textnormal{tr}_{A_r}\left[\mathrsfs{A}\circledast\mathrsfs{B}\right](\rho_{A_0B_0})$.
\end{theorem}
\subsection{Communication Lower Bound} \label{sec:Comm}
In the problem of classical communication over (two-way) quantum channels, Alice wishes to send $n$ classical bits $X$ to Bob,
who then applies a quantum measurement and observes outcome $Y$. The famous Holevo theorem~\cite{Hol73} established that
the mutual information between $X$ and $Y$ is at most $m$ if $m$ qubits are sent from Alice to Bob.
Cleve et~al.\ extended the Holevo theorem to interactive protocols~\cite[Theorem 2]{CDNT99}: for Bob to acquire $m$ bits of mutual information, Alice has to send at least $m/2$ qubits to Bob
and the two-way communication complexity is at least $m$ qubits.
Nayak and Salzman further improved these results to the setting where Bob recovers $X$ only with probability $p$~\cite{NS06}.
Herein we provide another version of the classical communication lower bound.
Our results are more general since we allow Alice and Bob to share an initially correlated cq state.
\begin{corollary} \label{cor:comm_bound}
Suppose Alice and Bob share a cq state $\rho = \rho_{A_0B_0}=\sum_{a}p_a \ket{a}_{A_0}\bra{a}\otimes \rho_{B_0}^{a} \in D(\mathcal{A}_0\otimes\mathcal{B}_0)$,
where Alice holds system $A_0$ of classical information and Bob holds system $B_0$.
Suppose Alice wants to send $a$ to Bob by an $(r, m_A, m_B)$ interactive protocol $\Pi$
such that Bob can recover $a$ with probability at least $p\in(0,1]$.
Then
\begin{align}
m_{B}+m_{A} &\geq H_{\rm min}(A_0|B_0)_\rho- \log \frac{1}{p}; \label{eq:abn}\\
2m_{A} &\geq H_{\rm min}(A_0|B_0)_\rho- \log \frac{1}{p}.\label{eq:2an}
\end{align}
\end{corollary}
\begin{remark}
A protocol that uses the superdense coding technique~\cite{BW92} can achieve Eqs.~(\ref{eq:abn}) and (\ref{eq:2an}) with equality.
\end{remark}
\begin{remark}
As an application, we can recover the communication lower bounds by Nayak and Salzman~\cite[Theorems 1.1 and 1.3]{NS06}\footnote{Nayak and Salzman have another stronger result \cite[Theorem 1.4]{NS06} when there is no initial correlation between Alice and Bob.} when $H_{\rm min}(A_0|B_0)_\rho=n$, where $A_0$ is of $n$ bits.
Note that they used a round reduction argument based on Yao's lemma so that the two-party protocol can be simulated by Alice sending a \emph{single} message of length $(m_A+m_B)$ to Bob.
However, this method requires a compression and decompression procedure, which is unlikely to generalize to the case with initial correlations.
\end{remark}
\section{Conclusion}
We proved an interactive leakage chain rule for quantum min-entropy and discussed its applications to the quantum communication complexity of classical information
and the lower bounds for quantum private information retrieval.
We may also apply our result to other scenarios. For example,
we can derive limitations for information-theoretically secure quantum fully homomorphic encryption~\cite{LC18,NS18,Newman18},
where the essential ingredient of the proof is Nayak's bound~\cite{Nayak99}.
To be more specific, instead of using Nayak's bound, we can use the communication lower bound (Corollary~\ref{cor:comm_bound}) derived from the interactive leakage chain rule~(Theorem~\ref{lemma:interactiveLCR}) to develop new limitations. This is our ongoing research.
CYL was financially supported by the Young Scholar Fellowship Program of the Ministry of Science and Technology (MOST) in Taiwan, under
Grant MOST107-2636-E-009-005.
KMC was partially supported by 2016 Academia Sinica Career Development Award under Grant
No. 23-17 and the Ministry of Science and Technology, Taiwan under Grant No. MOST 103-2221-
E-001-022-MY3.
\begin{thebibliography}{10}
\providecommand{\url}[1]{#1}
\csname url@samestyle\endcsname
\providecommand{\newblock}{\relax}
\providecommand{\bibinfo}[2]{#2}
\providecommand{\BIBentrySTDinterwordspacing}{\spaceskip=0pt\relax}
\providecommand{\BIBentryALTinterwordstretchfactor}{4}
\providecommand{\BIBentryALTinterwordspacing}{\spaceskip=\fontdimen2\font plus
\BIBentryALTinterwordstretchfactor\fontdimen3\font minus
\fontdimen4\font\relax}
\providecommand{\BIBforeignlanguage}[2]{{
\expandafter\ifx\csname l@#1\endcsname\relax
\typeout{** WARNING: IEEEtran.bst: No hyphenation pattern has been}
\typeout{** loaded for the language `#1'. Using the pattern for}
\typeout{** the default language instead.}
\else
\language=\csname l@#1\endcsname
\fi
#2}}
\providecommand{\BIBdecl}{\relax}
\BIBdecl
\bibitem{DziembowskiP08}
S.~Dziembowski and K.~Pietrzak, ``Leakage-resilient cryptography,'' in
\emph{49th Annual {IEEE} Symposium on Foundations of Computer Science, {FOCS}
2008, October 25-28, 2008, Philadelphia, PA, {USA}}, 2008, pp. 293--302.
\bibitem{ReingoldTTV08}
\BIBentryALTinterwordspacing
O.~Reingold, L.~Trevisan, M.~Tulsiani, and S.~P. Vadhan, ``Dense subsets of
pseudorandom sets,'' \emph{Electronic Colloquium on Computational Complexity
{(ECCC)}}, vol.~15, no. 045, 2008. [Online]. Available:
\url{http://eccc.hpi-web.de/eccc-reports/2008/TR08-045/index.html}
\BIBentrySTDinterwordspacing
\bibitem{GentryW11}
C.~Gentry and D.~Wichs, ``Separating succinct non-interactive arguments from
all falsifiable assumptions,'' in \emph{Proceedings of the 43rd {ACM}
Symposium on Theory of Computing, {STOC} 2011, San Jose, CA, USA, 6-8 June
2011}, 2011, pp. 99--108.
\bibitem{ChungKLR11}
K.~Chung, Y.~T. Kalai, F.~Liu, and R.~Raz, ``Memory delegation,'' in
\emph{Advances in Cryptology - {CRYPTO} 2011 - 31st Annual Cryptology
Conference, Santa Barbara, CA, USA, August 14-18, 2011. Proceedings}, 2011,
pp. 151--168.
\bibitem{FOR12}
B.~Fuller, A.~O'Neill, and L.~Reyzin, \emph{A Unified Approach to Deterministic
Encryption: New Constructions and a Connection to Computational
Entropy}.\hskip 1em plus 0.5em minus 0.4em\relax Berlin, Heidelberg: Springer
Berlin Heidelberg, 2012, pp. 582--599.
\bibitem{JP14}
D.~Jetchev and K.~Pietrzak, ``How to fake auxiliary input,'' in \emph{Theory of
Cryptography - 11th Theory of Cryptography Conference, {TCC} 2014, San Diego,
CA, USA, February 24-26, 2014. Proceedings}, 2014, pp. 566--590.
\bibitem{ChungLP15}
K.~Chung, E.~Lui, and R.~Pass, ``From weak to strong zero-knowledge and
applications,'' in \emph{Theory of Cryptography - 12th Theory of Cryptography
Conference, {TCC} 2015, Warsaw, Poland, March 23-25, 2015, Proceedings, Part
{I}}, 2015, pp. 66--92.
\bibitem{RW04}
R.~Renner and S.~Wolf, ``Smooth {R\'enyi} entropy and applications,'' in
\emph{International Symposium on Information Theory, 2004. ISIT 2004.
Proceedings.}, June 2004, pp. 233--.
\bibitem{BW92}
\BIBentryALTinterwordspacing
C.~H. Bennett and S.~J. Wiesner, ``Communication via one- and two-particle
operators on {Einstein-Podolsky-Rosen} states,'' \emph{Phys. Rev. Lett.},
vol.~69, pp. 2881--2884, Nov 1992. [Online]. Available:
\url{https://link.aps.org/doi/10.1103/PhysRevLett.69.2881}
\BIBentrySTDinterwordspacing
\bibitem{DD10}
S.~P. Desrosiers and F.~Dupuis, ``Quantum entropic security and approximate
quantum encryption,'' \emph{IEEE Trans. Inf. Theory}, vol.~56, no.~7, pp.
3455--3464, July 2010.
\bibitem{WTHR11}
S.~Winkler, M.~Tomamichel, S.~Hengl, and R.~Renner, ``Impossibility of growing
quantum bit commitments,'' \emph{Physical review letters}, vol. 107, no.~9,
p. 090502, 2011.
\bibitem{CCL+17}
\BIBentryALTinterwordspacing
Y.-H. Chen, K.-M. Chung, C.-Y. Lai, S.~P. Vadhan, and X.~Wu, ``Computational
notions of quantum min-entropy,'' 2017. [Online]. Available:
\url{arXiv:1704.07309}
\BIBentrySTDinterwordspacing
\bibitem{CDNT99}
R.~Cleve, W.~van Dam, M.~Nielsen, and A.~Tapp, ``Quantum entanglement and the
communication complexity of the inner product function,'' in \emph{Quantum
Computing and Quantum Communications}, C.~P. Williams, Ed.\hskip 1em plus
0.5em minus 0.4em\relax Berlin, Heidelberg: Springer Berlin Heidelberg, 1999,
pp. 61--74.
\bibitem{NS06}
\BIBentryALTinterwordspacing
A.~Nayak and J.~Salzman, ``Limits on the ability of quantum states to convey
classical messages,'' \emph{J. ACM}, vol.~53, no.~1, pp. 184--206, Jan. 2006.
[Online]. Available: \url{http://doi.acm.org/10.1145/1120582.1120587}
\BIBentrySTDinterwordspacing
\bibitem{Yao93}
A.~C.-C. Yao, ``Quantum circuit complexity,'' in \emph{Proceedings of 1993 IEEE
34th Annual Foundations of Computer Science}, Nov 1993, pp. 352--361.
\bibitem{Uhl76}
A.~Uhlmann, ``The {Transition Probability} in the state space of a *-algebra,''
\emph{Rep. Math. Phys.}, vol.~9, pp. 273--279, 1976.
\bibitem{FvdG99}
C.~A. Fuchs and J.~van~de Graaf, ``Cryptographic distinguishability measures
for quantum-mechanical states,'' \emph{IEEE Trans. Inf. Theory}, vol.~45,
no.~4, pp. 1216--1227, May 1999.
\bibitem{Lo97}
H.-K. Lo, ``Insecurity of quantum secure computations,'' \emph{Physical Review
A}, vol.~56, no.~2, p. 1154, 1997.
\bibitem{BB15}
{\"A}.~Baumeler and A.~Broadbent, ``Quantum private information retrieval has
linear communication complexity,'' \emph{Journal of Cryptology}, vol.~28,
no.~1, pp. 161--175, 2015.
\bibitem{GW07}
G.~Gutoski and J.~Watrous, ``Toward a general theory of quantum games,'' in
\emph{Proceedings of the Thirty-ninth Annual ACM Symposium on Theory of
Computing}, ser. STOC '07.\hskip 1em plus 0.5em minus 0.4em\relax New York,
NY, USA: ACM, 2007, pp. 565--574.
\bibitem{DNS10}
F.~Dupuis, J.~B. Nielsen, and L.~Salvail, ``Secure two-party quantum evaluation
of unitaries against specious adversaries,'' in \emph{Advances in Cryptology
- {CRYPTO} 2010, 30th Annual Cryptology Conference, Santa Barbara, CA, USA,
August 15-19, 2010. Proceedings}, 2010, pp. 685--706.
\bibitem{KRS09}
R.~Konig, R.~Renner, and C.~Schaffner, ``The operational meaning of min- and
max-entropy,'' \emph{IEEE Trans. Inf. Theory}, vol.~55, no.~9, pp.
4337--4347, Sept 2009.
\bibitem{Kre95}
\BIBentryALTinterwordspacing
I.~Kremer, ``Quantum communication,'' Master's thesis, The Hebrew University of
Jerusalem, Mar 1995. [Online]. Available:
\url{http://www.cs.huji.ac.il/~noam/kremer-thesis.ps.}
\BIBentrySTDinterwordspacing
\bibitem{Hol73}
A.~S. Holevo, ``Bounds for the quantity of information transmitted by a quantum
communication channel,'' \emph{Probl. Peredachi Inf.}, vol.~9, no.~3, pp.
3--11, 1973, {E}nglish translation \emph{Problems Inform. Transmission}, vol.
9, no. 3, pp.177--183, 1973.
\bibitem{LC18}
C.-Y. Lai and K.-M. Chung, ``On statistically-secure quantum homomorphic
encryption,'' \emph{Quant. Inf. Comput.}, vol.~18, no. 9\&10, pp. 0785--0794,
2018.
\bibitem{NS18}
M.~Newman and Y.~Shi, ``Limitations on transversal computation through quantum
homomorphic encryption,'' \emph{Quant. Inf. Comput.}, vol.~18, no. 11\&12,
pp. 0927--0948, 2018.
\bibitem{Newman18}
M.~Newman, ``Further limitations on information-theoretically secure quantum
homomorphic encryption,'' 2018. [Online]. Available: \url{arXiv:1809.08719}
\bibitem{Nayak99}
A.~Nayak, ``Optimal lower bounds for quantum automata and random access
codes,'' in \emph{Foundations of Computer Science, 1999. 40th Annual
Symposium on}, 1999, pp. 369--376.
\end{thebibliography}
\end{document}
\begin{document}
\title{Non-landing hairs in Sierpi\'nski curve\ Julia sets
of transcendental entire maps\ (Revised Version)}
\abstract{ We consider the family of transcendental entire maps given by $f_a(z)=a(z-(1-a))\exp(z+a)$ where $a$ is a complex parameter. Every map has a superattracting fixed point at $z=-a$ and an asymptotic value at $z=0$. For $a>1$ the Julia set of $f_a$ is known to be homeomorphic to the Sierpi\'nski universal curve~\cite{Moro}, thus containing embedded copies of any one-dimensional plane continuum. In this paper we study subcontinua of the Julia set that can be defined in a combinatorial manner.
In particular, we show the existence of non-landing hairs with prescribed combinatorics embedded in the Julia set for all parameters $a\geq 3$. We also study the relation between non-landing hairs and the immediate basin of attraction of $z=-a$. Although each non-landing hair accumulates onto the boundary of the immediate basin at a single point, its closure is nonetheless an indecomposable subcontinuum of the Julia set.}
\noindent
{\bf Keywords:} Transcendental entire maps, Julia set, non-landing hairs, indecomposable continua.
\noindent
{\bf Mathematics Subject Classification (2000):} 37F10, 37F20.
\pagebreak
\section{Introduction}\label{section:intro}
Let $f:{\Bbb C} \to {\Bbb C}$ be a transcendental entire map. The \emph{Fatou set} ${\cal F}(f)$ is the largest open set where iterates of $f$ form a normal family. Its complement in ${\Bbb C}$ is the \emph{Julia set} ${\cal J}(f)$ and it is a non-empty and unbounded subset of the plane. When the set of singular values is bounded, we say $f$ is of \emph{bounded singular type} and denote this class of maps by $\cal B$. It has been shown in \cite{Ba} and \cite{R} that the Julia set of a hyperbolic map in $\cal B$ contains uncountably many unbounded curves, usually known as \emph{hairs}, \cite{DT}.~A hair is said to \emph{land} if it is homeomorphic to the half-closed ray $[0,+\infty)$. The point corresponding to $t=0$ is known as the \emph{endpoint} of the hair. In contrast, if its accumulation set is a non-trivial continuum, we obtain a \emph{non-landing} hair.
In this paper we study a particular class of non-landing hairs in the Julia set of transcendental entire maps given by
\[\label{eq:themap}
f_a(z)=a (z-(1-a)) \exp(z+a),
\]
when $a$ is a real parameter. For all complex values of $a$, the map $f_a$ has a superattracting fixed point at $z=-a$ and an asymptotic value at the origin whose dynamics depends on the parameter $a$. If the orbit of the asymptotic value escapes to $+\infty$, we say $a$ is an {\it escaping parameter}. For example, when $a>1$, the orbit of the asymptotic value escapes to $+\infty$ along the positive real axis.
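The fixed-point and critical-point structure of $f_a$ can be checked directly from the formula. The following minimal Python sketch (not part of the paper; the value $a=3$ is chosen only for illustration) verifies that $z=-a$ is fixed, that the derivative $f_a'(z)=a e^{z+a}(z+a)$ vanishes there, and that the asymptotic value $0$ escapes along the positive real axis:

```python
import cmath

a = 3.0  # illustrative escaping parameter, a > 1
f = lambda z: a * (z - (1 - a)) * cmath.exp(z + a)

# z = -a is fixed: f(-a) = a * (-a - (1 - a)) * e^0 = -a
assert abs(f(-a) - (-a)) < 1e-12

# f_a'(z) = a e^{z+a} (z + a) vanishes at z = -a (a simple critical point),
# so the fixed point is superattracting; check via a centered difference
h = 1e-6
deriv = (f(-a + h) - f(-a - h)) / (2 * h)
assert abs(deriv) < 1e-4

# the asymptotic value 0 escapes to +infinity along the positive real axis
orbit = [0.0 + 0.0j]
for _ in range(2):
    orbit.append(f(orbit[-1]))
assert all(z.imag == 0 and w.real > z.real for z, w in zip(orbit, orbit[1:]))
```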
To our knowledge, the family $f_{a}$ was first introduced by Morosawa, \cite{Moro}, as an example of a transcendental entire map whose Julia set is homeomorphic to the {\it Sierpi\'nski curve} continuum when $a>1$. Any planar set that is compact, connected, locally connected, nowhere dense, and has the property that any two complementary domains are bounded by disjoint simple closed curves is homeomorphic to the Sierpi\'nski curve continuum (Whyburn, \cite{Why}). It is also a \emph{universal} continuum, in the sense that it contains a homeomorphic copy of every one-dimensional plane continuum (Kuratowski, \cite{K}). We take advantage of this property to combinatorially construct subsets of ${\mathcal J}(f_a), \ a >1$, that in turn are \emph{indecomposable continua}. An {\it indecomposable continuum} is a compact, connected set that cannot be written as the union of two proper connected and closed subsets.~Observe that a landing hair together with the point at infinity is in fact a decomposable continuum.
Every known example in the literature of indecomposable subsets of Julia sets arises from a single family of maps, namely the exponential family $E_{\lambda}(z)=\lambda \exp(z)$. The first example was given by Devaney \cite{D} for $\lambda=1$, where the asymptotic value escapes to infinity and the Julia set is the whole plane. Under the assumption that the asymptotic value either escapes to infinity or has a preperiodic orbit (so that again ${\cal J}(E_\lambda)= {\Bbb C}$), several authors have constructed topologically distinct indecomposable continua embedded in ${\mathcal J}(E_\lambda)$ (see, among other works, \cite{DJ1}, \cite{DJM}, and \cite{R1}, where a generalization of previous results to a large set of $\lambda$-parameters can be found).
Our work provides examples of indecomposable subcontinua of Julia sets outside the exponential family and without the assumption that ${\mathcal J}(f_a)$ equals ${\Bbb C}$, since $f_a$ has a superattracting fixed point for all $a\in\mathbb C$.
Denote by ${\mathcal A}(-a)$ the \emph{basin of attraction} of $-a$, that is, the set of points with forward orbits converging to $-a$. Also denote by $\mathcal{A}^*(-a)$ the \emph{immediate basin of attraction} of $-a$, which is the connected component of $\mathcal{A}(-a)$ containing $-a$. In \cite{Moro} Morosawa showed that all connected components of ${\mathcal A}(-a)$ are bounded Jordan domains. Moreover, whenever $a>1$, the orbit of the free asymptotic value escapes to infinity. Since there are no other singular values, ${\mathcal F}(f_a)$ cannot contain another attracting basin, a parabolic basin, or a Siegel disk, as these components must be associated with a non-escaping singular value. Maps with a finite number of singular values exhibit neither wandering domains (\cite{EL2,GK}) nor Baker domains (\cite{EL2}). Hence $\mathcal{F}(f_{a})=\mathcal{A}(-a)$. In Figure \ref{fig:julia_set}, we display the dynamical plane of $f_a$ for different values of $a>1$. The basin of attraction of $-a$ is shown in black, while points in the Julia set are shown in white.
Let us summarize our main results. Since ${\mathcal J}(f_a)$ is homeomorphic to the Sierpi\'nski universal curve, it must contain embedded copies of planar indecomposable continua, so we obtain some of them in terms of its combinatorics. To do so, we first characterize the topology and dynamics of the boundary of $\mathcal{A}^*(-a)$ by a polynomial-like construction (Proposition \ref{proposition:pol_like}). Then, using general results of transcendental entire maps, we obtain curves in the Julia set contained in the far right plane and with specific combinatorics (Proposition~\ref{prop:ConjugTails}). By a controlled process of consecutive pullbacks of some of these curves, we extend them into non-landing hairs that limit upon themselves at every point (Theorem \ref{theorem:indecom}). Using a result due to Curry, \cite{C}, we show the closure of such hairs are indecomposable continua (Theorem \ref{thm:DoesNotSeparate}). Finally, we study the relation between each indecomposable continuum and the boundary of ${\mathcal A}^*(-a)$ showing that the intersection between these two sets reduces to a unique point (Theorem \ref{theorem:relation_inde_basin}).
As a consequence of these results, we show the existence of a dense set of points in $\partial \mathcal{A}^*(-a)$ that are landing points of a unique hair (in particular, no \emph{pinchings} arise, as they do for other maps in class $\cal B$ having a superattracting basin), while there is a residual set of points in $\partial \mathcal{A}^*(-a)$ that belong to the accumulation set of a certain ray but are not landing points of hairs.
The outline of this paper is as follows: in \S
\ref{section:dyn_plane} we describe the dynamical plane of $f_a$ for $a\geq 3$. \S \ref{section:targets} contains most of our technical results while in \S \ref{section:indecom} we provide the proofs of our main results.
\begin{figure}
\caption{\small{The Julia set for $f_a$ (and $a$ an escaping parameter) is shown in white; the Fatou set, in black.}}
\label{fig:julia_set}
\end{figure}
\begin{figure}
\caption{\small{Dynamical plane of $f_{3.1}$.}}
\label{fig:sigmas}
\end{figure}
\noindent
\emph{Notation and terminology.}
$B_\varepsilon(x)=\{z\in {\Bbb C}~|~|z-x|<\varepsilon\}$.
$\overline{U}$ denotes the closure of a set $U$.
Connected components will be referred to as components.
A curve $\gamma$ \emph{cuts across} a
\begin{enumerate}
\item line $L$ if the intersection $\gamma\cap L$ is not tangential,
\item rectangle $R$ if $\gamma$ cuts across both vertical boundaries of $R$ so $\gamma\cap R$ contains a component with endpoints joining those sides,
\item semi-annular region $A$ if $\gamma$ cuts across the inner and outer semicircular boundaries of $A$ so $\gamma\cap A$ contains a connected component with endpoints joining those boundaries.
\end{enumerate}
\pagebreak
\section{Dynamical plane for escaping real parameters}\label{section:dyn_plane}
Consider escaping parameters of the form $a>1$ for the family of transcendental entire maps
$$f_a(z) = a(z-(1-a))\exp(z+a),$$
which have a unique asymptotic value at $z=0$ and a superattracting fixed point at $z=-a$. For $a>1$, the asymptotic value escapes to infinity along the positive real line and the Fatou set reduces to the basin of attraction of $-a$, ${\mathcal A}(-a)$. Our first aim in this section is to provide a partition of the complex plane that will allow us to combinatorially analyze the dynamics of points in the Julia set.\\
We start by taking preimages of the forward invariant set ${\mathbb R}^+$. Any point $z=x+iy$ in the complex plane whose image under $f_a$ is a real positive number must satisfy
\begin{equation}\label{eq:def_sigmas}
\begin{split}
& \left( x- (1-a) \right)\cos y - y\sin y > 0, \\
& \left( x- (1-a) \right)\sin y + y\cos y = 0.
\end{split}
\end{equation}
From these conditions, the preimages of ${\mathbb R}^+$ are infinitely many analytic curves parametrized by $(x,\zeta_j(x))$, with $j\in {\Bbb Z}$. For $j=0$, $\zeta_0(x)=0$ and is defined for all $x \in (1-a,+\infty)$ while the rest of the $\zeta_j$'s are strictly monotonic functions of $x$ defined for all $x \in {\mathbb R}$. When $j\neq 0$, each $\zeta_j$ has two horizontal asymptotes, given by
\[
\lim\limits_{x\to -\infty} \zeta_j(x) = \text{sign}(j) (2|j|-1)\pi i \quad {\rm and} \quad \lim\limits_{x\to +\infty} \zeta_j(x) =2 j \pi i.
\]
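These curves can be computed numerically from conditions \eqref{eq:def_sigmas}: for each $x$, $\zeta_j(x)$ is the root of the second (equality) condition in the strip between the two asymptotes. A minimal Python sketch (not part of the paper; $a=3$ and the sample points are illustrative) solves for $\zeta_1$ by bisection and checks its asymptotes and strict monotonicity:

```python
import math

a = 3.0  # illustrative parameter

def zeta(j, x, tol=1e-12):
    """Root of (x-(1-a))*sin(y) + y*cos(y) = 0 in ((2j-1)*pi, 2j*pi), j >= 1.
    (The inequality in the first condition selects this root as a genuine
    preimage of the positive real axis.)"""
    g = lambda y: (x - (1 - a)) * math.sin(y) + y * math.cos(y)
    lo, hi = (2 * j - 1) * math.pi + 1e-9, 2 * j * math.pi
    while hi - lo > tol:       # g(lo) < 0 < g(hi), so bisection applies
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# horizontal asymptotes of zeta_1: pi as x -> -inf, 2*pi as x -> +inf
assert abs(zeta(1, -5000.0) - math.pi) < 1e-3
assert abs(zeta(1, 5000.0) - 2 * math.pi) < 2e-3
# strict monotonicity in x
ys = [zeta(1, x) for x in range(-20, 21, 5)]
assert all(u < v for u, v in zip(ys, ys[1:]))
```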
For our purposes, we need to consider the preimage of the interval $(-\infty,-a)$ inside the region bounded by $\zeta_1$ and $\zeta_{-1}$ (see Figure \ref{fig:sigmas}). We obtain two strictly monotonic curves $\eta_1(x)$ and $\eta_{-1}(x)$ defined for $x \in [-a,+\infty)$, satisfying
$$
\lim\limits_{x\to -a^+} \eta_{\pm 1}(x) = 0, \quad \quad \lim\limits_{x\to +\infty} \eta_{1}(x) = \pi i \quad {\rm and} \quad \lim\limits_{x\to +\infty} \eta_{-1}(x) = -\pi i .
$$
Let $T_{0}$ denote the open and connected set containing $z=0$ and bounded by $\eta_1 \cup \eta_{-1}$. Similarly, let $T_{1}$ be the open and connected set bounded by $\zeta_{-1}\cup \eta_{-1} \cup \eta_1 \cup \zeta_{1}$. Far to the right, $T_{1}$ consists of two unbounded and disjoint strips, one above and one below the positive real line.
Since most of our results involve the dynamics of points in $T_0\cup T_1$, we construct a refinement of this region. For $j=0,1$, denote by $T_{j_1}$ and $T_{j_2}$ the proper and disjoint domains in $T_j\setminus {\Bbb R}$ with negative and positive imaginary part, respectively.
Finally, for each $j\in {\Bbb Z}, j\neq 0,1$, denote by $T_j$ the open and connected strip bounded by the curves $\zeta_{j-1}$ and $\zeta_j$ as Im$(z)$ increases. Then $\{T_j~|~j\in {\Bbb Z} \}$ defines the partition of the complex plane sought, while $\{T_{j_i}~|~j=0,1,~i=1,2\}$ defines a refinement of the region $T_0\cup T_1$.\\
It is straightforward to verify that
\begin{equation*}
\begin{split}
& f_a : T_{0} \to {\mathbb C} \setminus \left(-\infty,-a \right], \quad {\rm and}\\
& f_a : T_{1} \to {\mathbb C} \setminus \left( (-\infty,-a] \cup [0,+\infty ) \right),
\end{split}
\end{equation*}
are one-to-one maps. Define $g_a^0=f_a^{-1}|T_0$ and $g_a^1=f_a^{-1}|T_1$, the corresponding inverse branches of $f_a$ taking values in $T_0$ and $T_1$, respectively. As for the refinement of $T_0\cup T_1$, we denote by $g_a^{j_1}$ and $g_a^{j_2}$ the appropriate restrictions of $g_a^j$ mapping into $T_{j_1}$ and $T_{j_2}$, respectively.
Assume $z$ is a point of the Julia set whose orbit is entirely contained in $\cup_{j\in {\Bbb Z}} T_j$. We can naturally associate to $z$ the \emph{itinerary} $s(z)=\left(s_{0},s_{1},\ldots \right)$, with $s_{j}\in {\Bbb Z}$, if and only if $f_{a}^j(z)\in T_{s_j}$. Let us concentrate on the space $\Sigma_B=\{0,1\}^{\Bbb N}$ of {\it binary sequences} (in what follows \emph{$B$-sequences}). With respect to the refinement of $T_0\cup T_1$, consider the space of {\it extended sequences} given by $\Sigma_E=\{0_{1},0_{2},1_{1},1_{2}\}^{\Bbb N}$. Since $f_a$ is a one-to-one map in $T_{0}$ and $T_{1}$, not all extended sequences are {\it allowable}, that is, $f_a$ behaves as a subshift of finite type over the set of points with full orbits inside $T_{0_1}\cup T_{0_2}\cup T_{1_1}\cup T_{1_2}$. Its transition matrix is given by
$$
A=\left(\begin{array}{cccc}1 & 0 & 1 & 0 \\0 & 1 & 0 &1 \\0 & 1 & 0 & 1 \\1 & 0 & 1 & 0 \end{array}\right),
$$
and determines the space of \emph{allowable extended sequences} (in what follows \emph{$A$-sequences}) given by
$$\Sigma_A=\{(s_0,s_1,\ldots)\in \Sigma_E ~|~s_i\in \{0_1,0_2,1_1,1_2\},a_{s_i s_{i+1}}=1, \forall i\}.$$
Denote by $\pi:\Sigma_A\to \Sigma_B$ the \emph{projection map} that transforms an $A$-sequence into a $B$-sequence by erasing all subscripts. The form of the matrix $A$ makes evident that $\pi$ is a 2-to-1 map. For a given $B$-sequence $t$, denote by $t^1$ and $t^2$ the unique $A$-sequences so that $\pi(t^j)=t$. Observe that by interchanging all subscripts in $t^1$ we obtain $t^2$, and conversely.
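The 2-to-1 structure of the projection can be verified by brute force on finite sequences. A short Python sketch (not part of the paper) enumerates all extended sequences of a given length compatible with $A$ and confirms that every finite $B$-sequence has exactly two lifts, interchanged by swapping all subscripts:

```python
from itertools import product

# states in the order 0_1, 0_2, 1_1, 1_2, with the transition matrix A
states = ["0_1", "0_2", "1_1", "1_2"]
A = [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]

def lifts(b):
    """All finite A-sequences projecting onto the B-sequence b."""
    out = []
    for seq in product(range(4), repeat=len(b)):
        if all(states[s][0] == str(bit) for s, bit in zip(seq, b)) and \
           all(A[s][t] == 1 for s, t in zip(seq, seq[1:])):
            out.append(tuple(states[s] for s in seq))
    return out

# interchanging all subscripts 1 <-> 2
swap = lambda seq: tuple(s[0] + "_" + ("2" if s[2] == "1" else "1") for s in seq)

# the projection is exactly 2-to-1, and the two lifts of a B-sequence
# are obtained from one another by interchanging all subscripts
for b in product((0, 1), repeat=6):
    ls = lifts(b)
    assert len(ls) == 2 and swap(ls[0]) == ls[1]
```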
\begin{Remark}\label{rem:undefinedAseq}
It is important to observe that points on the real line (or on any of its preimages) do not have well-defined $A$-sequences. However, since ${\Bbb R}$ is forward invariant under $f_a$, its dynamics and combinatorics are completely understood. Based on the next result, from now on we will only consider $B$-sequences (and their two associated $A$-sequences) that do not end in all zeros.
\end{Remark}
\begin{Lemma}\label{lem:ends-in-zeros}
Assume $a> 1$ and let $w\in {\mathcal J}(f_a)$ such that $f^k_a(w)\in T_0\cup T_1$ for all $k\geq 0$. Then, $w\in {\Bbb R}\cap T_j$ if and only if $s(w)=(j,0,0,\ldots)$, for $j=0,1$.
\end{Lemma}
\begin{proof}
The first implication follows easily by analyzing the action of $f_a$ in ${\Bbb R}$. Whenever $a>1$, the set ${\mathcal A}^*(-a)$ intersects the real line in an open interval $(q_a,p_a)$, where $p_a$ is a repelling fixed point and $q_a$ is its only preimage in ${\Bbb R}$. Moreover $(-\infty, q_a)\cup (p_a,+\infty)$ consists of points that escape to +$\infty$ along ${\Bbb R}^+$ and hence, belong to ${\mathcal J}(f_a)$. Since $f_a$ sends $(-\infty, q_a]$ onto $[p_a,+\infty)$ and this second interval is fixed by $f_a$, then $w$ has a well defined itinerary given by $s(w)=(1,0,0,\ldots)$, if $w\in (-\infty, q_a]\subset T_1$, or $s(w)=(0,0,\ldots)$ if $w\in [p_a,+\infty)\subset T_0$.
To see the second implication, it is enough to show that the interval $[p_a,+\infty)$ is the only set of points in the Julia set that remains inside $T_0$ for all positive iterates. To do so, we analyze the preimages of $\eta_1$ inside $T_{0_2}$ (the case of $\eta_{-1}$ and $T_{0_1}$ is analogous). Since $f_a$ maps $T_{0_2}$ onto the upper half plane Im$(z)>0$, for each $k\geq 1$ the $k^{\rm th}$ preimage of $\eta_1$ in $T_{0_2}$, namely $\eta_1^k=(g_a^{0_2})^k(\eta_1)$, lies completely inside $T_{0_2}$ (except for its endpoint at $z=-a$) and extends towards infinity into the right half plane. In particular, it lies in the strip bounded by $[-a,+\infty)$ and $\eta_1^{k-1}$ (from bottom to top). Also, note that $\eta_1^k$ and $\eta_1^j$ meet only at $z=-a$ whenever $k\neq j$. We claim that $\eta_1^k$ accumulates onto $[p_a,+\infty)$ as $k\to \infty$. Otherwise, we could find a point $x\in [-a,+\infty)$ and $\varepsilon>0$ such that $B_\varepsilon(x)\cap \eta_1^k =\emptyset$ for all $k\geq 1$. Nevertheless, since $x$ belongs to the Julia set, it follows from Montel's Theorem that there exists an integer $N>0$ for which $f_a^N(B_\varepsilon(x))\cap \eta_1\neq \emptyset$. Hence, $B_\varepsilon(x) \cap \eta_1^N\neq \emptyset$, a contradiction.
Finally, for any given point $w\in T_{0_2}$ such that $s(w)=(0,0,\dots)$, there exists an integer $m>0$ for which, either $w\in \eta_1^m$ or $w$ lies in the interior of the strip bounded by $\eta_1^{m+1}$ and $\eta_1^{m}$ (from bottom to top). In both situations, $f_a^{m+1}(w)$ lies outside $T_0$. This finishes the proof.
\end{proof}
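The objects appearing in the proof can be located numerically. A minimal Python sketch (not part of the paper; $a=3$ and the bracketing interval were found by inspection) computes the repelling fixed point $p_a$ by bisection and checks that $|f_a'(p_a)|>1$ and that points to its right escape monotonically:

```python
import math

a = 3.0
f = lambda x: a * (x - (1 - a)) * math.exp(x + a)
fp = lambda x: a * math.exp(x + a) * (x + a)   # f_a'(x)

# bisection for the repelling fixed point p_a > -a; the bracket
# [-2.6, -2.5] (where f(x) - x changes sign) was found by inspection
g = lambda x: f(x) - x
lo, hi = -2.6, -2.5
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
p = 0.5 * (lo + hi)

assert abs(f(p) - p) < 1e-9   # fixed point
assert abs(fp(p)) > 1         # repelling
# points in (p_a, +infinity) escape monotonically to +infinity
x = p + 0.1
assert f(x) > x and f(f(x)) > f(x)
```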
The rest of this section is devoted to a combinatorial description of the dynamics of points with forward orbits contained in $T_0\cup T_1$ using $A$- and $B$-sequences. First, we focus our study on the set ${\mathcal A}^*(-a)$ and then analyze points in the Julia set that lie far to the right in $T_0\cup T_1$.
Using known results in complex dynamics we will prove that points with forward orbits completely contained in a given right hand plane are organized into continuous curves and their combinatorics are governed by the transition matrix $A$.
For future reference, we compute the image of a vertical segment bounded above and below by $\zeta_1$ and $\zeta_{-1}$, respectively.
\begin{Lemma} \label{lemma:vertical_segment}
Let $x\in {\Bbb R}$ be fixed and consider the vertical segment $L[x]=\{x+iy \, | \, \zeta_{-1}(x)\leq y \leq \zeta_1(x)\}$. Then $f_a(L[x])$ lies inside the closed round annulus $f_a(x) \leq |z| \leq f_a(x+\zeta_1(x))$.
\end{Lemma}
\begin{proof}
From the definition of the map $f_a$ we have
\[
|f_a(x+iy)|= a \exp(x+a) \sqrt{(x-(1-a))^2 + y^2}.
\]
\noindent Evidently, when restricted to $L[x]$ for fixed $x$, the above expression attains its minimum value at $y=0$, while its maximum value is attained at the endpoints $y=\zeta_{\pm 1}(x)$ (note that $\zeta_{-1}(x)=-\zeta_1(x)$ by the symmetry of the curves with respect to the real axis).
\end{proof}
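A quick numerical check of the modulus formula in the proof (Python sketch, not part of the paper; the values $a=3$, $x=-1$ are illustrative):

```python
import math

a, x = 3.0, -1.0  # illustrative values
def modulus(y):
    # |f_a(x+iy)| = a * e^(x+a) * sqrt((x - (1-a))^2 + y^2)
    return a * math.exp(x + a) * math.hypot(x - (1 - a), y)

ys = [k * 0.1 for k in range(31)]       # y from 0 up the segment
vals = [modulus(y) for y in ys]
assert vals[0] == min(vals)             # minimum at y = 0
assert all(u < v for u, v in zip(vals, vals[1:]))  # increasing in |y|
```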
\subsection{Dynamics near $z=-a$}\label{subsection:pol_like}
In \cite{Moro} it was shown that for $a>1$, each Fatou domain of $f_a$ is a bounded, connected component of $\mathcal{A}(-a)$ whose boundary is a Jordan curve. Here we show that $\overline {\mathcal{A}^*(-a)}$ is in fact a quasiconformal image of the closed unit disk.
Precisely, we describe a set of points with bounded orbits inside $T_{0}\cup T_{1}$ through a polynomial-like construction (see \cite{DH}) around the unique and simple critical point $z=-a$.~For technical reasons, we restrict to parameters $a\geq 3$ from now on.
\begin{Proposition} \label{proposition:pol_like}
For any $a\geq3$, there exist open, bounded and simply connected domains $U_a$ and $V_a$ with $ -a \in \overline{U}_a \subset V_a$, such that $(f_{a},U_{a},V_{a})$ is a quadratic-like mapping. Furthermore, the filled Julia set of $(f_{a},U_{a},V_{a})$ is the image under a quasiconformal mapping of the closed unit disk and coincides with $\overline{\mathcal{A}^*(-a)}$.
\end{Proposition}
\begin{figure}
\caption{\small{A sketch of the domains $U_a$ and $V_a$ found in Proposition \ref{proposition:pol_like}.}}
\label{fig:polynomial}
\end{figure}
\begin{proof}
Define $V_a$ as the open, simply connected pseudo-rectangle given by
$$
V_{a}=\{z \in {\mathbb C} \ | \ -a-6\ln a < {\rm Re}(z) < \frac{1-a}{2}, \ \zeta_{-1}({\rm Re}(z)) < {\rm Im}(z) < \zeta_{1}({\rm Re}(z)) \}.
$$
First, we show that $V_{a}$ maps outside itself. Indeed, the top and bottom boundaries of $V_a$ map into a segment lying on the positive real line, thus outside $V_a$ as $(1-a)/2\leq -1$. Also, note that $V_a$ lies in the interior of the annulus
$$\frac{|1-a|}{2}\leq |z| \leq |-a-6 \ln a+ i2\pi|.$$
Following the notation in Lemma \ref{lemma:vertical_segment}, $L=L[-a-6\ln a]$ and $R=L[(1-a)/2]$ are the left and right hand boundaries of $V_a$. We show next the images of $L$ and $R$ lie in the complementary components of the annulus. First, for $z\in L$ we have
$$
|f_{a}(z)|=\frac{1}{a^5}\sqrt{\left( 1+6\ln a \right)^2+y^2} < \frac{1}{a^5}\sqrt{\left( 1+6\ln a \right)^2+4\pi^2}<\frac{1}{2} < \frac{|1-a|}{2}.
$$
Similarly, if $z\in R$
\[
|f_{a}(z)|=a e^{\frac{a+1}{2}}\sqrt{\left(\frac{1-a}{2} \right)^2 + y^2}>a e^{\frac{a+1}{2}} \frac{|1-a|}{2} >\sqrt{(a+6 \ln a)^2 + 4\pi^2}
\]
\noindent for all $a\geq 3$, proving thus that $V_a$ is mapped outside itself under $f_a$.
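The two boundary estimates in the proof can be checked numerically. A Python sketch (not part of the paper) evaluates both sides for several parameters $a\geq 3$, bounding $|y|$ by $2\pi$ on the left boundary:

```python
import math

for a in [3.0, 5.0, 10.0, 50.0]:
    # left boundary L = L[-a - 6 ln a]: since a*e^(x+a) = a^(-5) there,
    # |f_a(z)| <= (1/a^5) * sqrt((1 + 6 ln a)^2 + 4 pi^2) < 1/2 <= |1-a|/2
    lhs_L = (1 / a**5) * math.hypot(1 + 6 * math.log(a), 2 * math.pi)
    assert lhs_L < 0.5 <= abs(1 - a) / 2
    # right boundary R = L[(1-a)/2]: the minimum modulus (at y = 0)
    # already exceeds the outer radius |-a - 6 ln a + 2 pi i|
    lhs_R = a * math.exp((a + 1) / 2) * abs(1 - a) / 2
    rhs_R = math.hypot(a + 6 * math.log(a), 2 * math.pi)
    assert lhs_R > rhs_R
```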
Now, we define $U_a$ to be the connected component of $f^{-1}_a(V_a)$ containing $-a$. Since $-a$ is a superattracting fixed point with multiplicity one, and there are no other critical points, it follows that $\overline{U_a}\subset V_a$ and the map $f_a:U_a \to V_a$ sends $\partial U_a$ to $\partial V_a$ with degree 2, as $-a$ is a simple critical point. We conclude that $(f_a,U_a,V_a)$ is a quadratic-like mapping.
What is left to verify is that the filled Julia set of $(f_a,U_a,V_a)$ is a quasi-disk. Recall that the filled Julia set of a polynomial-like mapping is defined as the set
$\{z\in U_a~|~f_a^n(z)\in U_a~\text{for~all}~n\geq 0\}$.
Since $(f_a,U_a,V_a)$ is a quadratic-like mapping, it is quasiconformally conjugate to a polynomial of degree two with a superattracting fixed point. After a holomorphic change of variables, if necessary, this polynomial must be $z\mapsto z^2$. So the filled Julia set of $(f_a,U_a,V_a)$ is the image under a quasiconformal mapping of the closed unit disk.
\end{proof}
\begin{Proposition}\label{prop:consequences_pol_like}
Let $a\geq3$. The following statements hold.
\begin{enumerate}
\item[(a)] The map $f_{a}$ restricted to the boundary of $\mathcal A^{*}(-a)$ is conjugate to the map $\theta \mapsto 2\theta$ in the unit circle.
\item[(b)] Let $t\in \Sigma_E$ be an extended sequence that does not end in $0_i$'s. Then, $t$ is an $A$-sequence if and only if there exists a unique point $z \in \partial \mathcal A^{*}(-a)$ that realizes $t$ as its itinerary.
\end{enumerate}
\end{Proposition}
\begin{proof}
Statement (a) is a direct consequence of the previous proposition since the map $f_a$ is conjugate in $\partial \mathcal A^*(-a)$ to $z \mapsto z^2$ acting on the unit circle.
We prove statement (b) by defining a partition of the boundary of $\mathcal A^*(-a)$ that coincides with the refinement of the partition $T_0 \cup T_1$ discussed before. For simplicity, angles are measured in $[0,1)$. Denote by $z(0)$, $z(1/4)$, $z(1/2)$ and $z(3/4)$ the points in $\partial \mathcal A^*(-a)$ corresponding, under the conjugacy between $f_a$ and $z \mapsto z^2$, to the points in $S^1$ of angle $\theta=0,1/4,1/2 \,$ and $\,3/4$. Now label points in $\partial \mathcal A^*(-a)$ in the following way: traveling along $\partial \mathcal A^*(-a)$ in a clockwise direction, associate the symbol $0_1$ to the arc joining $z(0)$ and $z(3/4)$, the symbol $1_1$ to the arc joining $z(3/4)$ and $z(1/2)$, $1_2$ to the arc joining $z(1/2)$ and $z(1/4)$, and $0_2$ to the arc joining $z(1/4)$ and $z(0)$. We leave it to the reader to verify that the transition matrix for this partition under the action of $f_a$ is exactly $A$ and that the labeling is consistent with the one defined by the $T_{j_i}$'s.
\end{proof}
\begin{Theorem}\label{thm:BdOrbit}
Let $z$ be a point such that $f_a^n(z)\in \overline{T_0\cup T_1}$ for all $n\geq 0$. If $z$ belongs to the Julia set and has bounded orbit, then $z\in \partial {\mathcal A}^*(-a)$.
\end{Theorem}
\begin{proof}
If $z\in {\mathcal J}(f_a)$ satisfies the hypotheses, we can find $m<0<M$ so that $m< \text{Re}(f_a^n(z)) < M$ for all $n\geq 0$.
Let $\varepsilon>0$ be small enough and denote $B_\varepsilon=\overline{B_\varepsilon(0)}$. Since the orbit of the origin escapes monotonically along the positive real line, redefining $M$ if necessary, there exists an integer $N=N(M)>0$ for which $B_\varepsilon, f_a(B_\varepsilon),\ldots, f^{N}_a(B_\varepsilon)$ are pairwise disjoint compact domains, such that for all $0\leq j \leq N-1$,
\[
\begin{array}{ll}
& f^j_a(B_{\varepsilon}) \subset T_0 \cap \{ z \, |\, {\rm Re}(z)<M\}, \hbox{ and }\\
& f^N_a(B_{\varepsilon}) \subset T_0 \cap \{ z \, | \, {\rm Re}(z)> M\}.
\end{array}
\]
Moreover, we may choose $\varepsilon$ small enough so $B_\varepsilon, f_a(B_\varepsilon),\ldots, f^{N}_a(B_\varepsilon)$ are all compact domains contained in $T_0$.
Finally, select $m'\leq m<0$ so the subset $\{z\in T_1~|~\text{Re}(z)\leq m'\}$ maps completely inside $B_\varepsilon$.
Denote by $\Phi_a:{\Bbb D}\to {\mathcal A}^*(-a)$ the B\"ottcher coordinates tangent to the identity at the origin. For $0<r<1$ let $\Delta_r=\Phi_a(B_{r}(0))$.
Clearly $\Delta_r\subset {\mathcal A}^*(-a)$ and maps compactly into its own interior. Moreover, we can choose $r$ small enough so for $j=0,1$, $T_j \setminus \Delta_r$ is a connected set and the intersection of $\Delta_r$ and ${\Bbb R}$ is an open interval $(c,d)$, since $\Phi_a$ has been chosen to be tangent to the identity at the origin. We can now define the set $E$, illustrated in Figure~\ref{fig:periodic}, as follows
\[ E=\{z\in {\Bbb C}~|~m'< \text{Re}(z)< M,~\zeta_{-1}(\text{Re}(z))< \text{Im}(z)< \zeta_1(\text{Re}(z))\} \setminus \Omega
\]
\noindent where $\Omega =\bigcup_{k=0}^{N-1} f^k_a(B_{\varepsilon})\cup \Delta_r \cup [c,M]$. It is easy to verify that $E$ is an open, bounded, connected and simply connected set and that $\partial {\mathcal A}^*(-a) \subset E$. Moreover, $E\subset f_a(E)$, although some boundary components map into $\partial E$. Indeed, $f_a(\partial E)\cap \partial E$ consists of segments along the real line and $f^j_a(\partial B_{\varepsilon})$, for $j=1,2,\ldots,N-1$. So after $N$ iterations, the only boundary points mapping into $\partial E$ are points over the real line, thus having $B$-itineraries $(1,0,\ldots)$ or $(0, 0,\ldots)$. We show next that for $\ell>0$ sufficiently large, $f_a^{-\ell}|E$ becomes a strict contraction.
\begin{figure}
\caption{\small{A schematic representation of the set $E$ described in Theorem~\ref{thm:BdOrbit}.}}
\label{fig:periodic}
\end{figure}
Let $s=(s_0, s_1, s_2,\ldots)$ be the $A$-sequence associated to $z$. If $s$ has finitely many $1_i$'s, then its $B$-sequence ends with $0$'s and since $z$ has bounded orbit inside $\overline{T_0\cup T_1}$, then $z\in \partial {\mathcal A}^*(-a)$ by Lemma~\ref{lem:ends-in-zeros}.
If $s$ has infinitely many $1_i$'s there exists a first integer $n>N$ for which $s_{n}\in\{1_1,1_2\}$, and thus $\overline{E\cap T_{s_{n}}} \subset f^{n}(E)$, as points in $f^{n}(\partial E)$ mapping into $\partial E$ have by now itinerary $(0,0,\ldots)$.
For each $k\in \{0_1, 0_2, 1_1, 1_2\}$, the set $E\cap T_k$ is an open, connected and simply connected set with a Riemann mapping given by $\psi_k:{\Bbb D}\to E\cap T_k$. Consider the mapping $\Psi_{\ell}:{\Bbb D}\to {\Bbb D}$ with $\ell>n$, given by
$$\Psi_{\ell} = \psi_{s_0}^{-1}\circ( g_a^{s_0}\circ \ldots \circ g_a^{s_{\ell-1}} )\circ \psi_{s_\ell}.$$
It follows that $\Psi_\ell({\Bbb D})$ is compactly contained in ${\Bbb D}$; that is, for $\ell>n$, $\Psi_\ell$ is a strict contraction with respect to the Poincar\'e metric on the unit disk. Consequently, the sets $\overline{\Psi_\ell({\Bbb D})}$ form a nested sequence of compact sets with diameters converging to zero as $\ell\to \infty$. This implies that for any $w\in {\Bbb D}$, $\lim_{\ell\to \infty} \Psi_\ell(w)$ exists and is independent of the point $w$.
Therefore, by construction, $\psi_{s_0}(\lim_{\ell\to \infty} \Psi_\ell(0))=z$ is the unique point in $E$ with itinerary $s$. From Proposition~\ref{prop:consequences_pol_like}(b) we conclude $z\in \partial {\mathcal A}^*(-a)$.
\end{proof}
\subsection{Dynamics near infinity}
\label{subsection:tails_to_the_right}
Our first aim is to prove that for $R>0$ sufficiently large and the region
$$H_R = \{z\in T_0\cup T_1~|~ {\rm Re}(z)\geq R\},$$
there exist continuous curves in ${\mathcal J}(f_a)\cap H_R$ consisting of points whose orbits escape to $+\infty$ with increasing real part. These curves are usually known as \emph{tails}. The existence of tails as disjoint components of the Julia set was first observed by Devaney and Tangerman \cite{DT} for certain entire transcendental maps and by Schleicher and Zimmer \cite{SZ} for the exponential family $E_\lambda(z)=\lambda \exp(z)$, for all $\lambda\in {\Bbb C}$. In greater generality, Bara\'nski \cite{Ba} and Rempe \cite{R} have shown the existence of tails for hyperbolic maps belonging to the class $\cal B$.
For completeness, we analyze in detail some of their results in the setting of our work to obtain tails in ${\mathcal J}(f_a)\cap H_R$. Once each tail has been assigned an $A$-sequence, we describe a pullback process to compute the full set of points in $T_0\cup T_1$ associated to such $A$-sequence. In the final section, we study the topological properties of that set.
Consider an entire transcendental map $f$ in the class $\cal B$.
The \emph{escaping set of $f$}, denoted by $I(f)$, is the set of points whose orbits under $f$ tend to infinity. For an entire transcendental map, Eremenko \cite{E} has shown that the Julia set coincides with the boundary of $I(f)$.
We say that two maps $f,g\in \cal B$ are \emph{quasiconformally equivalent near infinity} if there exist quasiconformal maps $\phi_1, \phi_2:{\Bbb C} \to {\Bbb C}$ that satisfy $\phi_1 \circ f=g\circ \phi_2$ in a neighborhood of infinity.
\begin{Theorem}[Rempe, 2009]\label{theorem:lasse}
Let $f,g \in \mathcal B$ be two entire transcendental maps which are quasiconformally equivalent near infinity. Then there exist $\rho>0$ and a quasiconformal map $\theta:{\mathbb C} \to {\mathbb C}$ such that $\theta \circ f = g \circ \theta$ on
$$
A_{\rho}=\{ z\in{\mathbb C} \ | \ |f^n(z)|>\rho, \ \forall n \geq 1 \}.
$$
Furthermore, the complex dilatation of $\theta$ on $I(f)\cap A_{\rho}$ is zero.
\end{Theorem}
A straightforward computation shows that $f_{a}(z)$ is conjugate to the function $\tilde{f}_{a}(z)=az\exp(z+1)-(1-a)$ under the conformal isomorphism $\varphi_a(z)=z-(1-a)$. Since $\tilde{f}_a(z)= \varphi_a \circ f_a \circ \varphi_a^{-1}(z)$, it is easily verified that $\tilde{f}_{a}$ has a free asymptotic value at $z=a-1$ and a fixed critical point at $z=-1$.
In turn, $\tilde{f}_{a}$ is (globally) conformally equivalent to $g_{b}(z)=bz\exp(z)$ via $\phi_{1}(z)=\alpha z+\alpha(1-a)$ and $\phi_{2}(z)=z$. Indeed, it is easy to see that $\phi_{1}\circ \tilde{f}_{a} = g_{b}\circ \phi_{2}$, where $b = e a\alpha$.
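Both equivalences above are elementary identities and can be confirmed numerically. The following sketch (with arbitrary test values of $a$ and $\alpha$, and using the expression $f_a(z)=a(z-(1-a))e^{z+a}$ for the map) checks $\tilde{f}_a = \varphi_a \circ f_a \circ \varphi_a^{-1}$ and $\phi_1 \circ \tilde{f}_a = g_b \circ \phi_2$ with $b = ea\alpha$:

```python
# Numerical sanity check (not part of the argument) of the two
# equivalences used above.  The values of a and alpha are arbitrary
# test choices.
import cmath

a, alpha = 3.0, 0.25 + 0.1j
b = cmath.e * a * alpha                      # b = e * a * alpha

f       = lambda z: a * (z - (1 - a)) * cmath.exp(z + a)   # f_a
f_tilde = lambda z: a * z * cmath.exp(z + 1) - (1 - a)     # f~_a
g       = lambda z: b * z * cmath.exp(z)                   # g_b
phi_a     = lambda z: z - (1 - a)            # conformal conjugacy f_a -> f~_a
phi_a_inv = lambda z: z + (1 - a)
phi1      = lambda z: alpha * z + alpha * (1 - a)

for z in (0.3 + 0.2j, -1.1 + 2.5j, 2.0 - 0.7j):
    assert abs(f_tilde(z) - phi_a(f(phi_a_inv(z)))) < 1e-9
    assert abs(phi1(f_tilde(z)) - g(z)) < 1e-9
print("both identities hold at the sampled points")
```

Both identities are exact; the numerical check merely guards against transcription errors in the formulas.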
For small values of $b$, the Fatou set of $g_{b}$ consists solely of the completely invariant basin of attraction of the fixed point (and asymptotic value) $z=0$. Thus, we can describe the Julia set of $g_b$ by applying the following result found in \cite{Ba}.
\begin{Theorem}[Bara\'nski, 2007]\label{theo:baranski}
Let $g$ be an entire transcendental function of finite order so that all critical and asymptotic values are contained in a compact subset of a completely invariant attracting basin of a fixed point. Then ${\cal J}(g)$ consists of disjoint curves (hairs) homeomorphic to the half-line $[0,+\infty)$. Moreover, the hairs without endpoints are contained in the escaping set $I(g)$.
\end{Theorem}
These disjoint curves are usually known as hairs or \emph{dynamic rays}. Several consequences are derived from the above theorem. Firstly, if $\gamma$ denotes a hair, it can
be parametrized by a continuous function $h(t), \ t\in [0,+\infty)$, such that $\gamma=h([0,+\infty))$. The point $h(0)$ is called the \emph{endpoint} of the hair. Secondly, each hair is a curve that extends to infinity and, for $r>0$, we say that $\omega=h((r,+\infty))$ is the {\it tail of the hair}. Moreover, all points in a given hair share the same symbolic itinerary defined by a dynamical partition of the plane with respect to $f$, and for every point $z\in \gamma$ that is not the endpoint, we have $f^n(z)\to \infty$ as $n\to \infty$. Finally, if $f$ and $g$ are as in Theorem \ref{theorem:lasse} and, in addition, $g$ satisfies the hypotheses of Theorem \ref{theo:baranski}, then near infinity the topological structure of the escaping set of $f$ is also given by disjoint curves extending to infinity. We refer to \cite{BJR} for
a topological description of the Julia set in terms of what is known as Cantor bouquets. Furthermore, the dynamics of $f$ in those curves is quasiconformally conjugate to the dynamics of $g$ in the corresponding curves near infinity. We deduce the following result based on the previous theorems and the specific expression of $f_a$.
\begin{Proposition}\label{prop:ConjugTails}
Let $f_a(z) = a(z-(1-a))\exp(z+a)$, $g_b(z)=bz\exp(z)$, $a\geq 3$, and $b$ a complex parameter.
\begin{enumerate}
\item[(a)] If $|b|$ is small enough, the Julia set of $g_{b}$ is given by the union of disjoint hairs. Each hair lands at a distinguished endpoint. Moreover, hairs without endpoints are contained in $I(g_b)$.
\item[(b)] Let $R>0$ large enough. The set of points with forward $f_a$-orbits that are always contained in $H_R$ are given by the union of disjoint curves extending to infinity to the right. All points in those curves belong to $I(f_a)$.
Precisely, these curves are quasiconformal copies of connected components of hairs described in (a).
\item[(c)] To each curve in (b) that does not coincide with ${\Bbb R}$ or one of its preimages, we can assign a unique sequence $t$ in $\Sigma_A$. We denote this curve by $\omega_t$. All points in $\omega_t$ escape to infinity under the action of $f_a$ following the itinerary $t$.
\item[(d)] For any $t\in \Sigma_A$, if $R>0$ is large enough, there exists a unique curve $\omega_t$ in $H_R$. Moreover, for each $r \geq R$, $\omega_t \cap \{z~|~\text{Re}(z)=r\}$ is a single point. In particular $\omega_t$ is the graph of a function.
\item[(e)] $\omega_t$ is a tail, i.e., a quasiconformal copy of a tail in (a).
\end{enumerate}
\end{Proposition}
\begin{proof}
Statement (a) follows directly from Theorem~\ref{theo:baranski}. To see statement (b) note that for any value of $a\neq 0$ and $b$, there exists $\alpha=b/(ea)$ so $f_a$ and $g_b$ are (globally) conformally equivalent. By Theorem \ref{theorem:lasse}, there exists $\rho>0$ such that $f_a$ and $g_b$ are conjugate on the set $A_\rho=\{z~|~|f_a^n(z)|>\rho, \forall n \geq 1\}$. Let $R>\rho$. Denote by $S$ the set of points in $H_R$ with forward $f_a$-orbits contained in $H_R$. Clearly, each point in $S$ must belong to ${\mathcal J}(f_a)$ (since a point in the Fatou set eventually maps into ${\mathcal A}^*(-a)$) and in particular, $S\subset {\mathcal J}(f_a)\cap A_R$. Moreover, far enough to the right, a point $z\in H_R$ whose forward orbit remains forever in $H_R$ must satisfy Re$\left(f_a^{k+1}(z)\right) > {\rm Re}\left(f_a^k(z)\right)$ for all $k>0$ (see Lemma \ref{lemma:vertical_segment}). Hence from Theorems \ref{theorem:lasse} and \ref{theo:baranski} we know that $S$ is the union of disjoint curves extending to infinity (that is, quasiconformal copies of components of the hairs in (a)) belonging to the escaping set.
To prove statement (c) we start by assigning to each of these curves, $\omega$, a unique sequence in $\Sigma_A$. Let $z_0$ be a point in $\omega$ and let $s(z_0)$ be its itinerary in $\Sigma_A$ in terms of the partition $T_{0_1}, T_{0_2}, T_{1_1}$ and $T_{1_2}$. By assumption this itinerary is well defined since $\omega$ is not ${\Bbb R}$ or one of its preimages. Let $z_1$ be another point in $\omega$. We claim that $s(z_1)=s(z_0)$ and proceed by contradiction. Let $\mathcal C$ be the connected component of $\omega$ joining $z_0$ and $z_1$. If $s(z_1)\neq s(z_0)$, there exists an integer $k\geq 0$ for which the $k^{\text th}$ entries in both itineraries are the first ones to differ.
Hence there is a point $q\in \mathcal C$ such that $f_a^k(q)$ belongs to either $[R,\infty)$, $\eta_1$ or $\eta_{-1}$. Clearly $f_a^k(q)$ cannot belong to $\eta_{\pm 1}$, since otherwise $f_a^{k+1}(q)\in {\Bbb R}^-$ and by hypothesis the forward orbit of $q$ belongs to $H_R$. On the other hand $f_a^k(q)$ cannot belong to ${\Bbb R}$ since by item (b), $f_a^k\left(\omega\right)$ and ${\Bbb R}$ are disjoint curves of the escaping set.
Finally, there cannot be two curves having the same itinerary, as this would imply the existence of an open set of points following the same itinerary, which is impossible. Thus, we may now denote by $\omega_t$ the unique curve in $H_R$ formed by escaping points with itinerary $t$.
To prove statement (d), fix $t=(t_0,t_1,\ldots, t_n,\ldots) \in \Sigma_A$ and $R$ large enough. Let $r\geq R$ and let $L[r]=\{r+iy \, | \, \zeta_{-1}(r)\leq y \leq \zeta_1(r)\}$. We denote by $I_{t_0}$ the set of points in $L[r] \cap \overline{T_{t_0}}$. We know that the image of the vertical segment $L[r]$ cuts across $H_R$ in two almost vertical lines (see Lemma \ref{lemma:vertical_segment}). Recall that when $f_a$ is restricted to the set of points with forward orbits in $H_r$, it behaves as a subshift of finite type governed by the matrix $A$, and when restricted to $T_0$ or $T_1$ it is a one-to-one map. Using these facts, we can find a unique subinterval $I_{t_0t_1}\subset I_{t_0}$ formed by points with forward orbits inside $H_r$ and itineraries starting as $(t_0,t_1,\ldots)$. Inductively, for each $n>0$, $I_{t_0t_1\ldots t_n}$ is the unique subinterval of $I_{t_0t_1\ldots t_{n-1}}$ formed by points with forward orbits inside $H_r$ and itineraries starting with $(t_0,t_1,\ldots, t_n)$. Clearly,
$$
I_{t_0t_1\ldots t_n} \subset I_{t_0t_1\ldots t_{n-1}} \subset \ldots \subset I_{t_0t_1} \subset I_{t_0}.
$$
Due to the expansivity of $f_a$ in $H_R$ we obtain $\bigcap_{n\geq 0} I_{t_0t_1\ldots t_n} =\{q\}$, and by construction $q$ must have itinerary $t$. Moreover $f_a^k(q)\to \infty$ as $k\to \infty$, so $q$ must belong to the unique curve $\omega_t$ described in (c). The above arguments imply that $\omega_t$ intersects Re$(z)=r$ at a unique point. Therefore we can parametrize $\omega_t$ on the interval $[r,\infty)$.
Finally, to see statement (e), we observe that there are no endpoints associated to bounded itineraries in $H_R$, implying that each $\omega_t$ is a quasiconformal copy of the tail of some hair in (a). On one hand, the only points in $T_0 \cup T_1$ with bounded orbit belong to $\partial A^*(-a)$, so they are not in $H_R$. On the other hand, if there were an endpoint {\it of a hair} with orbit escaping to infinity in $H_R$, it would have an itinerary, say $t$. But from statement (d) there is a (unique) curve $\omega_t$ going from Re$(z)=R$ to infinity with such itinerary $t$, a contradiction.
\end{proof}
\begin{Definition}
Given any $A$-sequence $t$, each curve described in Proposition~\ref{prop:ConjugTails} will be denoted by $\omega_{t}=\omega_{t}(R)$ and will be called the \emph{tail with itinerary $t$} contained in the half plane Re$(z)\geq R$.
The component of $\omega_t$ that cuts across
$$F_R=\{z\in H_R~|~|z|<f_a(R+i\zeta_1(R))\}$$
is called the \emph{base} of the tail, and it will be denoted by $\alpha_t=\alpha_t(R)$.
\end{Definition}
See Figure \ref{fig:tails}.
\begin{figure}
\caption{\small{The set $H_R$, the tail $\omega_t$ and its base $\alpha_t$ for some $t\in \Sigma_A$. Bases are depicted as bold lines.}}
\label{fig:tails}
\end{figure}
We now describe a pullback construction to extend the tail $\omega_t$ into a longer curve. Recall $g_a^{j_i}=f_a^{-1}|T_{j_i}, j_i\in\{0_1,0_2,1_1,1_2\}$ and consider the shift map $\sigma: \Sigma_A\to \Sigma_A$ acting on the space of $A$-sequences. Let $t=(t_0,t_1,\ldots)\in \Sigma_A$. By the conjugacy of $f_a | \bigcup_{t\in \Sigma_A} \omega_t$ with $\sigma|\Sigma_A$, it follows that $\omega_{\sigma (t)}$ properly contains $f_{a}(\omega_t)$ since this curve lies in $H_R\setminus F_R$ (see Proposition \ref{prop:ConjugTails}). Hence $f_a(\omega_t)$ is a curve that misses the base $\alpha_{\sigma(t)}$. Consequently, $g_{a}^{t_0}(\omega_{\sigma(t)})$ is a continuous curve that lies in $T_{t_0}$ and extends $\omega_t$ to the left of Re$(z)=R$. Clearly, any point in the extended curve $g_{a}^{t_0}(\omega_{\sigma (t)})$ has itinerary $t$. Inductively, consider
$$
g_{a}^{t_0}\circ \ldots \circ g_{a}^{t_{n-1}}\left(\omega_{\sigma^n(t)}\right).
$$
This pullback iteration is always defined as long as the extended curve does not meet $z=0$; this can never happen, since ${\mathbb R}$ is forward invariant. We thus obtain a curve of points with itinerary $t$, and each pullback iteration extends its predecessor.
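Since $f_a(z) = a(z-(1-a))e^{z+a} = au\,e^{u+1}$ with $u = z-(1-a)$, each inverse branch $g_a^{j_i}$ amounts to solving the Lambert-type equation $ue^u = w/(ae)$ and choosing the branch landing in the desired $T_{j_i}$. Below is a minimal sketch of one pullback step, restricted to the branch along the positive real axis; matching general branches to the labels $0_1, 0_2, 1_1, 1_2$ of the partition is not attempted here, and `pullback_real` is a hypothetical helper name.

```python
# Sketch: one pullback step for f_a via the Lambert-type equation
# u*e^u = w/(a*e), restricted to the real branch with u > 0.
import math

a = 3.0

def f(x):
    # f_a on the real line: a*(x-(1-a))*e^{x+a}
    return a * (x - (1 - a)) * math.exp(x + a)

def pullback_real(w):
    """Real preimage of w > 0 under f_a near the positive real axis."""
    c = w / (a * math.e)               # solve u*e^u = c with u > 0
    u = math.log(c) if c > 1 else c    # rough starting guess (above the root)
    for _ in range(60):                # Newton's method for u*e^u - c = 0
        u -= (u * math.exp(u) - c) / ((u + 1) * math.exp(u))
    return u + (1 - a)                 # back to the z-coordinate

w = f(5.0)
z = pullback_real(w)
print(z, abs(f(z) - w))
```

The paper's branches $g_a^{j_i}$ correspond to the other branches of the same equation in the complex plane.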
\begin{Definition}
Let
\begin{equation}\label{eq:pullback-hair}
\gamma (t)=\bigcup_{n=0}^\infty g_{a}^{t_0}\circ \ldots \circ g_{a}^{t_{n-1}}\left(\omega_{\sigma^n(t)}\right).
\end{equation}
We call $\gamma (t)$ the {\it hair\/} associated to $t$.
\end{Definition}
Note that the pullback process described before may or may not produce an endpoint. In Section~4 we show that in some cases, $\gamma(t)$ is a hair with an endpoint in $\partial {\mathcal A}^*(-a)$, and in some other cases $\gamma(t)$ is a non-landing hair and accumulates everywhere upon itself. By the following theorem found in \cite{C}, we will conclude that $\overline{\gamma(t)}$ is an indecomposable continuum.
\begin{Theorem}[Curry, 1991]\label{theo:curry}
Suppose that $X$ is a one-dimensional nonseparating plane continuum which is the closure of a ray that limits on itself. Then $X$ is either an indecomposable continuum or the union of two indecomposable continua.
\end{Theorem}
\begin{Remark}
A \emph{ray} is defined as the image of $[0,+\infty)$ under a continuous, one-to-one map. Given any positive number $\alpha$, the image of $[\alpha,+\infty)$ under the same map is known as a \emph{final segment} of the ray. Then, the ray \emph{limits on itself} if it is contained in the closure of any final segment of itself.
\end{Remark}
\section{Targets in $H_{R}$}\label{section:targets}
The main result in this section is Theorem~\ref{theorem:indecom}, where we construct $B$-sequences so that their associated $A$-sequences produce hairs that accumulate everywhere on themselves.\\
We now set up \emph{targets} around the $n^{\rm th}$ \emph{image of the base} $\alpha_{t}$ of each tail $\omega_t$. The construction is very similar to the one presented in Devaney and Jarque \cite{DJ2} and Devaney, Jarque and Moreno Rocha in \cite{DJM}, although the existence of a critical point in our present case requires some modifications.\\
Let $t$ be a given $A$-sequence. We first enlarge the base $\alpha_t$ inductively along $\omega_t$. Set $\alpha_{t,0}=\alpha_t$ and consider the two bases $\alpha_{\sigma^{-1}(t)}$. The set
$f_{a}(\alpha_{\sigma^{-1}(t)})$ (taking the two possible bases) is a subset of $\omega_t$. Thus, the set
$$
\alpha_{t,1} = \alpha_{t,0} \cup f_{a}(\alpha_{\sigma^{-1}(t)}),
$$
is an extension of $\alpha_{t,0}$ along $\omega_t$.
Inductively, define the $n^{\rm th}$ image of the base $\alpha_t$ as
$$
\alpha_{t,n} = \alpha_{t,n-1} \cup f_{a}^n(\alpha_{\sigma^{-n}(t)}).
$$
It is easy to verify that $\{\alpha_{t,n}\}_{n\in {\mathbb N}}$ is a sequence of curves satisfying the following three conditions:
\begin{enumerate}
\item[(i)] $\alpha_{t,0} = \alpha_t$,
\item[(ii)] $\alpha_{t,n} \subset \alpha_{t,n+1}$, and
\item[(iii)] $\bigcup\limits_{n\geq 0}\alpha_{t,n} = \omega_t$.
\end{enumerate}
In order to define a target around each $\alpha_{t,n}$, consider $\xi,\eta\in {\Bbb R}^+$ and let
\[
V(\xi,\eta) = \{ z \in H_R~|~\xi - 1 <{\rm Re}(z) < \eta+1 \}.
\]
By definition $V( \xi, \eta)$ is a rectangular region bounded above and below
by components of $\zeta_{1}$ and $\zeta_{-1}$, respectively.
\begin{Lemma}\label{lemma:macrolemma}
Let $R>0$ be large enough. For all $n \geq 0$ there exist positive real numbers $\xi_n$ and $\eta_n$ such that the following statements hold.
\begin{enumerate}
\item[(a)] For every $t\in \Sigma_A$, the $n^{\rm th}$ iterate of $\alpha_t$ belongs to the interior of $V(\xi_n,\eta_n)$.
\item[(b)] For every $\ell \geq 0$, $V(\xi_{n+1},\eta_{n+\ell+1})$ is compactly contained inside $f_a(V(\xi_n,\eta_{n+\ell}))$.
\end{enumerate}
\end{Lemma}
\begin{proof}
Set $\eta_0 = f_a\left(R+i\zeta_{1}(R)\right)$, $\eta_{n+1}=f_a\left(\eta_n+i \zeta_{1}\left(\eta_{n}\right)\right)$ and $\xi_n=f_a^n(R)$ for $n\geq 0$. It remains to verify that these values satisfy statements (a) and (b). Observe that for $R$ large enough, the image of every vertical segment $L[R]$ cuts across $T_{0}\cup T_{1}$ in two almost vertical lines: one {\it near} Re$(z)=f_{a}(R)$ and the other {\it near} Re$(z)=f_{a}\left(R+i\zeta_{1}(R)\right)$ (see Lemma \ref{lemma:vertical_segment}).
Statement (a) follows directly from the definition of $\xi_n$ and $\eta_n$, since at each step we choose $\xi_n$ and $\eta_{n}$ to be respectively the smallest and largest possible values of Re$(f^n_a(\alpha_t))$ for all $t\in \Sigma_A$, and moreover it is easy to check that $\xi_{n}<\eta_{n-1}$.
We prove statement (b) when $\ell =0$. The case $\ell >0$ follows similarly. The proof of the statement proceeds in two steps. The first one is to verify the inequality
\[
f_a\left(\xi_n-1+i\zeta_{1}\left(\xi_n-1\right) \right) < \xi_{n+1}-1 \,.
\]
From the definition of $f_a$ we obtain that
\begin{eqnarray*}
f_a\left(\xi_n-1+i\zeta_{1}\left(\xi_n-1\right) \right) & < &a e^{a+\xi_n-1} \sqrt{(\xi_n-1 -(1-a))^2+4\pi^2}\\
& < & a e^{a+\xi_n} (\xi_n-(1-a))-1\\
& = & \xi_{n+1}-1,
\end{eqnarray*}
where the second inequality is satisfied if $R$ is large enough. The second step is to verify the inequality
\[
f_a(\eta_n+1) > \eta_{n+1}+1 = f_a(\eta_n+i\zeta_1(\eta_n))+1.
\]
By evaluating both sides we obtain
\[
a e^{a+\eta_n+1} (\eta_n+a) > a e^{a+\eta_n}\sqrt{(\eta_n+a-1)^2+\left(\zeta_1(\eta_{n})\right)^2} +1.
\]
Consequently the image of $V(\xi_n,\eta_n)$ contains $\overline{V(\xi_{n+1},\eta_{n+1})}$ as desired.
\end{proof}
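The first inequality in the proof above can be probed numerically. The sketch below is illustrative only: it takes $a=3$, uses the bound $|\zeta_{\pm 1}| \leq 2\pi$ for the imaginary parts over the strips (as noted in the discussion of targets below), and compares the resulting modulus bound for $f_a$ on the left edge of a target with $\xi_{n+1}-1 = f_a(\xi_n)-1$:

```python
# Numerical plausibility check of the first inequality in the proof:
#   |f_a(x-1 + i*zeta_1(x-1))| < f_a(x) - 1   for x = xi_n large,
# using |zeta_1| <= 2*pi and the modulus formula
#   |f_a(x+iy)| = a e^{a+x} sqrt((x-(1-a))^2 + y^2).
import math

a = 3.0

def modulus_bound(x):
    # upper bound for |f_a(x + iy)| when |y| <= 2*pi
    return a * math.exp(a + x) * math.sqrt((x - (1 - a))**2 + 4 * math.pi**2)

def xi_next_minus_1(x):
    # xi_{n+1} - 1 = f_a(xi_n) - 1 with xi_n = x real
    return a * math.exp(a + x) * (x - (1 - a)) - 1

ok = all(modulus_bound(x - 1) < xi_next_minus_1(x) for x in range(10, 200))
print("inequality holds on the sampled range:", ok)
```

The check is only a plausibility test for the displayed estimate; the proof itself requires the inequality for all sufficiently large $R$.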
\begin{Definition}
The set $V(\xi_n,\eta_n)$ will be called the $n^{\rm th}$ target of $f_a$.
\end{Definition}
Targets provide a useful tool in the proof of Theorem~\ref{theorem:indecom}. Before getting into the details, we briefly explain why targets are so important in our construction. For large values of $n$, an $n^{\rm th}$ target corresponds to a rectangular region with arbitrarily large real part and imaginary part bounded, in absolute value, by $2\pi$. Then, we may pull back the $n^{\text th}$ target using suitable branches of $f_a^{-1}$ to obtain two sequences of nested subsets inside $V(\xi_0,\eta_0)$. We intend to prove that each nested sequence contains not only a base $\alpha_{t^{i}}$, $i=1,2$, but also other components of $\gamma(t^i)$ accumulating on the base.\\
As observed before, each target $V(\xi_n,\eta_{n+\ell})$ intersects the
fundamental domains $T_{0}$ and $T_{1}$ for any $n\in {\mathbb Z}^+$. Denote by $W_{n,\ell}^{0_{i}}$ and $W_{n,\ell}^{1_{i}}$ the domains given by $V(\xi_n,\eta_{n+ \ell})\cap T_{0_{i}}$ and $V(\xi_n,\eta_{n+\ell})\cap T_{1_{i}}$, respectively.
The next step is to show that, by considering appropriate preimages of these $W$-sets, we obtain a nested sequence of neighborhoods around two particular bases of tails.
In what follows, we will work solely with $B$-sequences and their respective $A$-sequences. We may also assume that every $B$-sequence has infinitely many 1's, to avoid taking a preimage of the positive real line.
The next result is a suitable restatement of Lemma 4.3 in \cite{DJM} following our notation.
\begin{Lemma}\label{lemma:two-nested}
Let $\ell \ge 0$ and let $t=(\tau_0,1,\tau_1,1,\tau_2,1,\ldots)$ be a $B$-sequence, where each $\tau_k=\left(\tau_k^1,\ldots, \tau_k^{n_k}\right)$ denotes a finite block of binary symbols $\{0,1\}$ of length $n_k$. Let $m_j=j+1+n_0+n_1+\ldots + n_j, \ j\geq 0$. Then for each $m_{j}$ the sets
\begin{equation}\label{eq:W-preimage-usingB}
g_{a}^{\tau_{0}^1}\circ\cdots\circ
g_{a}^{\tau_{j}^{n_j}} \ \left(W_{m_{j},\ell}^{1_i} \right), \ i=1,2,
\end{equation}
form two nested sequences of subsets of $V(\xi_0,\eta_\ell)$. Moreover, if we denote by $t^{1}$ and $t^{2}$ the two allowable $A$-sequences satisfying $\pi\left(t^{1}\right)=\pi \left(t^{2}\right)=t$, then $\alpha_{t^1,\ell}$ and $\alpha_{t^2,\ell}$ are contained each in a nested sequence of subsets of $V(\xi_0,\eta_\ell)$ given by (\ref{eq:W-preimage-usingB}).
\end{Lemma}
\begin{proof}
Due to Lemma \ref{lemma:macrolemma}, for each $m_j$ and $i=1,2$, the sets $W_{m_{j},\ell}^{1_i} \subset V(\xi_{m_j},\eta_{m_j+\ell})$, can be pulled back following $(\tau_0,1,\tau_1,1,\ldots, \tau_j)$. These preimages yield two nested sequences of subsets in $V(\xi_0,\eta_\ell)$, one sequence corresponds to preimages of $W_{m_{j},\ell}^{1_1}$ and the other to preimages of $W_{m_{j},\ell}^{1_2}$. Notice there is a unique way of pulling back each $W_{m_{j},\ell}^{1_i},\ i=1,2$ following the $B$-sequence $t$, since ${f_{a}}|{T_k},\ k=0,1$ are one-to-one maps.
Also, at each step of the construction, the points belonging to these nested subsets are points of $V(\xi_0,\eta_\ell)$ with $B$-itinerary $t=(\tau_0,1,\tau_1,1,\ldots, \tau_j,\ldots)$. So, $\alpha_{t^1,\ell}$ and $\alpha_{t^2,\ell}$ must be inside all of them.
\end{proof}
The next step in the construction is to show that each of the nested subsets of $V(\xi_0,\eta_\ell)$ given by the previous lemma contains not only the bases $\alpha_{t^1,\ell}$ and $\alpha_{t^2,\ell}$, but also other components of $\gamma(t^1)$ and $\gamma(t^2)$. We split this step into two lemmas. The first lemma shows that, for suitable choices of $A$-sequences, the extended tails cut twice across the line Re$(z)=-\mu$ for $\mu>0$ arbitrarily large.
A finite block of $0$'s (respectively $0_{1}$'s or $0_{2}$'s) of length $k$ will be denoted by $0^k$ (respectively $0^k_1$ or $0^k_{2}$).
\begin{Lemma}\label{lemma:extended_tails}
Let $t$ be any $B$-sequence, and let $t^1, t^2\in \Sigma_A$ so that $\pi(t^1)=\pi(t^2)=t$. Assume also that $0_{1}t^1$ and $0_2t^2$ are allowable. Given any $\mu>0$, there exists $K>0$ such that for all $k>K$, there exist two continuous curves, denoted by $\tilde{\omega}_{1_20_1^k t^1}$ and
$\tilde{\omega}_{1_10_2^k t^2}$, that extend to infinity to the right and satisfy:
\begin{enumerate}
\item[(a)] $\tilde{\omega}_{1_20_1^kt^1}$ and $\tilde{\omega}_{1_10_2^kt^2}$ are enlargements (to the left) of the tails with itineraries
$1_20_1^kt^1$ and $1_10_2^kt^2$, respectively; and
\item[(b)] $\tilde{\omega}_{1_20_1^kt^1}$ and $\tilde{\omega}_{1_10_2^kt^2}$ cut twice across Re$\,z=-\mu$.
\end{enumerate}
\end{Lemma}
\begin{proof}
Consider any $A$-sequence of the form $0_{1}^mt^1$ with $m>0$ large enough so the unique tail $\omega_{0_1^mt^1}$ is parametrized by $z=(x,h(x))$, $x > R$ (see Proposition \ref{prop:ConjugTails}). Lemma \ref{lem:ends-in-zeros} and its proof imply that $\omega_{0_1^mt^1}$ is $\varepsilon$-close to $z=(x,0), \ x > R$ (that is, $|h(x)|<\varepsilon$ for all $x>R$).
By pulling back $\omega_{0_1^mt^1}$ using $g_{a}^{0}$, we obtain a new curve which is precisely the extended tail $\tilde{\omega}_{0_1^{m+1}t^1}$. Moreover, since the positive real line is repelling, we observe that $\tilde{\omega}_{0_1^{m+1}t^1}$ is also $\varepsilon$-close to $z=(x,0)$ with $x > g_{a}^0(R)$. Successive pullbacks via $g_{a}^{0}$ allow us to find a first positive integer $r$ such that the extended tail $\tilde{\omega}_{0_1^{m+r}t^1}$ will be $\varepsilon$-close to $z=(x,0), \ x > p_a$ where $p_a$ is the real fixed point in the boundary of $\mathcal A^*(-a)$.
At this stage we pull back $\tilde{\omega}_{0_1^{m+r}t^1}$ again, now using $g_{a}^{1}$. The curve obtained must coincide with the extended tail $\tilde{\omega}_{1_{2}0_{1}^{m+r}t^1}$. By construction it is a curve that extends to infinity to the right, reaches far into the left half plane (since $\tilde{\omega}_{0_1^{m+r}t^1}$ is close to $z=0$) and lies close to $q_a$ (the preimage of $p_a$ in $\partial \mathcal A^*(-a)$), since $\tilde{\omega}_{0_1^{m+r}t^1}$ gets arbitrarily close to $z=p_a$. For a given $\mu$, if we choose $m$ large enough, this pullback construction guarantees that $\tilde{\omega}_{1_{2}0_{1}^{m+r}t^1}$ cuts across Re$(z)=-\mu$ twice, as desired.
Analogously, if we start the construction with $0_{2}^m t^2, \ m>0$, we get a similar result for the extended tail $\tilde{\omega}_{1_{1}0_{2}^{m+r}t^2}$.
\end{proof}
\begin{Proposition} \label{prop:passes}
Fix $\ell \geq 0$. Let $t$ be any $B$-sequence with $t^1$ and $t^2$ its associated $A$-sequences. Let $\tau$ be any finite block of binary symbols of length $n$, set $s=\tau 110^kt$, and denote by $s^1,s^2$ its associated $A$-sequences. Then there exists $K>0$ such that for all $k>K$, the following statements hold.
\begin{enumerate}
\item[(a)] The forward image of $W_{n,\ell}^{1_{1}}$ cuts three times (twice far to the left and once far to the right) across the extended tail $\tilde{\omega}_{1_20_1^kt^1}$. In other words, the hair $\gamma(1_{1}1_20_1^kt^1)$ cuts three times across $W_{n,\ell}^{1_{1}}$.
\item[(b)] The hair $\gamma(s^1)$ cuts three times across
\begin{equation*}
g_{a}^{\tau_{0}}\circ\cdots\circ g_{a}^{\tau_{n-1}}\left(W_{n,\ell}^{1_{1}}\right)\ .
\end{equation*}
Moreover one of the connected components of $\gamma(s^1)$ in
the above expression contains $\alpha_{s^1,\ell}$.
\end{enumerate}
\noindent
Analogous results hold for $s^2$ as well.
\end{Proposition}
\begin{proof} The set $f_{a}\left(W_{n,\ell}^{1_{1}}\right)$ is a large semi-annulus in the upper half plane and intersects $T_0 \cup T_1$ far to the right and far to the left. From Lemma \ref{lemma:extended_tails} we can choose $K$ such that for all $k>K$, the extended tail $\tilde{\omega}_{1_{2}0_{1}^{k}t^1}$ cuts across the semi-annulus $f_{a}\left(W_{n,\ell}^{1_{1}}\right)$ three times: twice far to the left and once far to the right. Consequently, there must be three components of $\gamma(1_{1}1_20_1^kt^1)$ in $W_{n,\ell}^{1_{1}}$ that map into three components of the extended tail $\tilde{\omega}_{1_{2}0_{1}^{k}t^1}$ in $f_{a}\left(W_{n,\ell}^{1_{1}}\right)$.
By taking suitable pullbacks of $W_{n,\ell}^{1_{1}}$ that follow the string of symbols $\tau=(\tau_0,\ldots,\tau_{n-1})$ and applying Lemma \ref{lemma:two-nested}, we can now conclude statement (b) of the present proposition.
\end{proof}
\begin{Theorem}\label{theorem:indecom}
Let $\tau$ be a finite block of binary symbols of length $n$. There exists an increasing sequence of integers $k_j,\ j\geq 1$ so for the $B$-sequence
\begin{equation}
{\mathfrak T} = \tau110^{k_1}110^{k_2}110^{k_3}\ldots, \label{eq:IndSeq}
\end{equation}
its associated $A$-sequences ${\mathfrak T}^1$ and ${\mathfrak T}^2$ determine
two distinct hairs $\gamma({\mathfrak T}^1)$ and $\gamma({\mathfrak T}^2)$ that limit upon themselves, thus becoming non-landing hairs.
\end{Theorem}
\begin{proof}
Let $\ell>0$, $p_0=n+1$ and for each $i\geq 1$, set
\begin{equation*}
p_i = n+1 + \sum\limits_{j=1}^{i} k_{j} + \ 2i.
\end{equation*}
The symbol at the $p_i^{\text{th}}$ position in the (yet to be constructed) ${\mathfrak T}$ is equal to $1$. In general, the symbol at position $p_i$ in the $A$-sequence ${\mathfrak T}^j,\ j=1,2$ can be either $1_1$ or $1_2$, depending on the previous entry in ${\mathfrak T}^j$. In what follows we assume that all symbols at the $p_i^{\rm th}$ position in ${\mathfrak T}^1$ are equal to $1_{1}$ (hence, all symbols at the $p_{i}^{\rm th}$ position in ${\mathfrak T}^2$ are $1_{2}$). It will become clear from the proof that this assumption is without loss of generality. Moreover, we shall prove only that $\gamma({\mathfrak T}^1)$ limits upon itself, since the case of $\gamma({\mathfrak T}^2)$ follows in the same way.
Let $s$ be any $B$-sequence not ending in all $0$'s, and let $s^1$ and $s^2$ be its associated $A$-sequences. For each $j\geq 1$ we aim to construct inductively $B$-sequences of the form
\begin{eqnarray}\label{eq:sigma-j}
u_j &=& \tau 110^{k_1}110^{k_2}\ldots110^{k_j}\sigma^{p_j}(s),
\end{eqnarray}
so that, by carefully selecting longer blocks of zeros, the $u_j$ will converge to the $B$-sequence ${\mathfrak T}$ with the desired properties. Note that for all $j$, $u_j$ will be a concatenation of the first $p_j$ symbols in ${\mathfrak T}=(\mathfrak{t}_0, \mathfrak{t}_1, \ldots)$ with $\sigma^{p_j}(s)$. In other words,
\begin{eqnarray}\label{eq:terms}
u_j &=& (\mathfrak{t}_0, \mathfrak{t}_1, \ldots, \mathfrak{t}_{p_j-1}, s_{p_j}, s_{p_j+1},\ldots).
\end{eqnarray}
As before, $u_{j}^1$ and $u_{j}^2$ will denote the $A$-sequences that project into $u_j$.
We first show how to define $u_1$. Applying Proposition \ref{prop:passes}(a), we can choose an integer $k_1>0$ so the forward image of $W_{p_0-1,\ell}^{1_{1}}$ is cut across by the extended tail $\tilde{\omega}_{1_20_1^{k_{1}}\sigma^{p_1}(s^1)}$ three times (twice far to the left and once far to the right). Thus the hair $\gamma(1_{1}1_20_1^{k_{1}}\sigma^{p_1}(s^1))$ cuts three times across $W_{p_0-1,\ell}^{1_{1}}$. For the given block $\tau$, denote by $\tau_1$ its corresponding $A$-block that makes $u_1^1=\tau_1 1_1 1_2 0_1^{k_1}\sigma^{p_1}(s^1)$ an allowable $A$-sequence. Thus Proposition \ref{prop:passes}(b) implies that
$$
g_{a}^{\mathfrak{t}_{0}}\circ\cdots\circ g_{a}^{\mathfrak{t}_{p_0-2}}\left(W_{p_0-1,\ell}^{1_{1}}\right)
$$
is a subset of $V\left(\xi_{0},\eta_{\ell}\right)$ that contains $\alpha_{u_{1}^1,\ell}$ and (at least) two other components of the hair $\gamma(u_{1}^1)$.
For $j>1$ we apply again Proposition~\ref{prop:passes}(a) to the sequence
$$\tilde{\tau} 110^{k_j}\sigma^{p_j}(s),$$
where $\tilde{\tau}=\tau110^{k_1}110^{k_2}\ldots 110^{k_{j-1}}$ and with an integer $k_{j}>0$ large enough so that the forward image of $W_{p_{j}-1,\ell}^{1_{1}}$ is cut across by the extended tail $\tilde{\omega}_{1_20_1^{k_{j}}\sigma^{p_j}(s^1)}$ three times (twice far to the left and once far to the right). Thus the hair $\gamma(1_{1}1_20_1^{k_{j}}\sigma^{p_j}(s^1))$ cuts three times across $W_{p_{j}-1,\ell}^{1_{1}}$. Pulling back $W_{p_{j}-1,\ell}^{1_{1}}$ through the finite block of binary symbols given by $\tilde{\tau}$, we get from Proposition \ref{prop:passes}(b) that
$$
g_{a}^{\mathfrak{t}_{0}}\circ\cdots\circ g_{a}^{\mathfrak{t}_{p_j-2}}\left(W_{p_{j}-1,\ell}^{1_{1}}\right)
$$
is a subset of $V\left(\xi_{0},\eta_{\ell}\right)$ that contains $\alpha_{u_{j}^1,\ell}$ and (at least) two other components of the hair $\gamma(u_{j}^1)$.
As $j$ tends to infinity, the sequence $u_{j}$ converges to the desired sequence ${\mathfrak T}$. Indeed, from Proposition \ref{prop:passes}(b), the sets
$$
g_{a}^{\mathfrak{t}_{0}}\circ\cdots\circ g_{a}^{\mathfrak{t}_{p_j-2}}\left(W_{p_{j}-1,\ell}^{1_{1}}\right)
$$
form a nested sequence of subsets of $V(\xi_{0},\eta_{\ell})$, each containing $\alpha_{u_{j},\ell}$ and two further components of the hair $\gamma(u_{j})$. Since $\ell\geq 0$ was selected arbitrarily, as $j\to \infty$ we obtain that $V\left(\xi_{0},\infty \right)$ contains $\omega_{{\mathfrak T}^1}$ and infinitely many distinct components of the hair $\gamma({\mathfrak T}^1)$ accumulating on it.
To see this, let $z\in \omega_{{\mathfrak T}^1}$ be given and select $\ell_z>0$ large enough so that $z$ lies in $\alpha_{{\mathfrak T}^1,\ell_z}$. As before, this base lies in the target $V(\xi_0,\eta_{\ell_z})$. Since ${\mathfrak T}$ has already been constructed, we may now apply Proposition~\ref{prop:passes} to $\gamma({\mathfrak T}^1)$ itself. There exists an integer $N>0$ sufficiently large so that for all $n>N$, $\gamma(\sigma^{p_n+1}({\mathfrak T}^1))$ cuts across $W_{p_n-1,\ell_z}^{1_1}$ in (at least) three components, say $A^n_1, A^n_2$ and $A^n_3$, one of them being the component of the tail $\omega_{\sigma^{p_n+1}({\mathfrak T}^1)}$.
As $n$ increases, the diameter of $g_a^{\mathfrak{t}_0}\circ\ldots \circ g_a^{\mathfrak{t}_{p_n}}(W_{p_n-1,\ell_z}^{1_1})$ decreases; nonetheless, Proposition~\ref{prop:passes} implies that for each $j=1,2,3$,
$$ g_a^{\mathfrak{t}_0}\circ\ldots \circ g_a^{\mathfrak{t}_{p_n}}(A_j^n)~~\text{cuts across}~~ g_a^{\mathfrak{t}_0}\circ\ldots \circ g_a^{\mathfrak{t}_{p_n}}(W_{p_n-1,\ell_z}^{1_1}).$$
Since $g_a^{\mathfrak{t}_0}\circ\ldots \circ g_a^{\mathfrak{t}_{p_n}}(A_j^n)=\alpha_{{\mathfrak T}^1,\ell_z}$ for some $j$, the pullback images of the remaining $A_j^n$'s accumulate lengthwise over $\alpha_{{\mathfrak T}^1,\ell_z}$ (and thus along $z$) for each $n>N$.
To see that $\gamma({\mathfrak T}^1)$ accumulates on each of its points and not only on its tail portion, we may perform the same construction for the sequences
$$
110^{k_i}110^{k_{i+1}}\ldots,
$$
for $i\geq 1$.
Then we may pull back the corresponding hairs and their accumulations by the appropriate inverse branches of $f_{a}$ to show that $\gamma({\mathfrak T}^1)$ must accumulate on every point of the hair $\gamma({\mathfrak T}^1)$.
\end{proof}
\section{Geometry of hairs}
\label{section:indecom}
This last section focuses on the geometry of the sets $\overline{\gamma(t)}$ when $t$ is a $B$-sequence that is either periodic or a sequence as constructed in Theorem~\ref{theorem:indecom}. We show that in the former case the closure of the hair has a landing point in $\partial {\mathcal A}^*(-a)$, while in the latter case $\overline{\gamma(t^i)}$ is an indecomposable continuum for each $i=1,2$.
\begin{Proposition}\label{prop:PerSeqLand}
Let $t$ be a periodic $B$-sequence and $t^1, t^2$ its associated periodic $A$-sequences. Then the hairs $\gamma(t^1)$ and $\gamma(t^2)$ land at two (repelling) periodic points $p_1,p_2\in \partial{\mathcal A}^*(-a)$.
\end{Proposition}
\begin{proof}
Let $t=\overline{t_0\ldots t_{n-1}}$, so every block of $0$'s in $t$ has bounded length. We work only with $t^1$ since the other case follows similarly.
There exists a value $\delta>0$ and a closed ball $\overline{B_{\delta}(0)}$ so that the orbit of $\overline{\gamma(t^1)}$ always stays outside $\overline{B_{\delta}(0)}$. Analogously, we can find a real value $m<0$ so that the orbit of $\overline{\gamma(t^1)}$ does not intersect the half plane Re$(z)< m$.
Let $R>0$ be as in Proposition~\ref{prop:ConjugTails}. Then there exists a value $M\geq R$ for which the intersection of the tail $\omega_{t^1}$ with the line Re$(z)=M$ is a single point. Since the orbit of the origin escapes along the positive real line, there exists an integer $N=N(M)>0$ for which $f_a^{N-1}(0)<M\leq f_a^{N}(0)$. Select $0<\epsilon\leq \delta$ small enough so that $B_\epsilon=\overline{B_\epsilon(0)}$ and its images $f_a(B_\epsilon),\ldots, f^{N}_a(B_\epsilon)$ are compact domains contained in $T_0$. Finally, select $m'\leq m<0$ so that the left half plane Re$(z)\leq m'$ maps completely inside $B_\epsilon$.
With these constants we can define an open, connected, simply connected region $E$ as in the proof of Theorem~\ref{thm:BdOrbit}. Then the pullback process that defines the hair $\gamma(t^1)$, together with the contracting map $\Psi_\ell= \psi_{t_0^1}^{-1}\circ(g_a^{t_0^1}\circ \cdots \circ g_a^{t_{n-1}^1})^\ell \circ \psi_{t_0^1}$ (where $(g_a^{t_0^1}\circ \cdots \circ g_a^{t_{n-1}^1})^\ell$ denotes the $\ell$-fold composition with itself), shows that $\overline{\gamma(t^1)}\setminus \gamma(t^1)$ consists of a unique periodic point in $E$ with itinerary $t^1$. By Theorem~\ref{thm:BdOrbit}, this periodic point lies in $\partial {\mathcal A}^*(-a)$.
\end{proof}
\begin{Theorem}\label{thm:DoesNotSeparate}
Let ${\mathfrak T}$ be a $B$-sequence as in Theorem~\ref{theorem:indecom}, ${\mathfrak T}^1, {\mathfrak T}^2$ be its associated $A$-sequences that project onto ${\mathfrak T}$ and $\gamma({\mathfrak T}^1),\gamma({\mathfrak T}^2)$ their associated hairs. Then, the closure of each of these hairs is an indecomposable continuum.
\end{Theorem}
\begin{proof}
As before, we restrict the proof to the $A$-sequence ${\mathfrak T}^1=\tau_1 1_1 1_2 0_1^{k_1}1_1 1_2 0_1^{k_2}\ldots$. Let $\Gamma^1$ be the closure of the hair $\gamma({\mathfrak T}^1)$. In order to apply Theorem~\ref{theo:curry}, we must first verify that $\Gamma^1$ does not separate the plane, since by Theorem~\ref{theorem:indecom} $\gamma({\mathfrak T}^1)$ is a curve that accumulates upon itself.
First observe that $\Gamma^1$ has real part bounded from below. Indeed, the first block of $0_i$'s in ${\mathfrak T}^1$ has finite length, and this implies that $\gamma({\mathfrak T}^1)$ (and thus $\Gamma^1$) lies to the right of the line Re$(z)=m$ for some $m<0$. For simplicity, denote by $k_0\in\{0_1,0_2,1_1,1_2\}$ the first entry in $\tau_1$, so $\Gamma^1$ lies inside the set $T_{k_0}$ and to the right of the line Re$(z)=m$. Hence
$${\Bbb C}\setminus (T_{k_0}\cap \{z~|~\text{Re}(z)>m\})\subset {\Bbb C}\setminus \Gamma^1,$$
and since the boundaries of $T_0$ and $T_1$ are the graphs of strictly monotonic functions, the set on the left-hand side is in fact a single component with unbounded imaginary part. This implies the existence of a single complementary component $U\subset {\Bbb C}\setminus \Gamma^1$ with unbounded imaginary part, while all other complementary components must have bounded imaginary part. Also, note that $-a\in U$, and since $-a$ is a fixed point, $-a\in f_a^k(U)$ for all $k>0$.
Let $V \neq \emptyset$ be a component of ${\Bbb C}\setminus \Gamma^1$ with bounded imaginary part. Firstly, assume that $V\cap {\mathcal J}(f_a)\neq \emptyset$. Then, by Montel's Theorem, there exists $N>0$ such that $-a\notin f_a^k(V)$ for $0\leq k<N$ while $-a\in f^N_a(V)$, since $-a$ is not an exceptional value (indeed, $-a$ has infinitely many preimages). But this implies that $f_a^{N-1}(V)$ is a component with bounded imaginary part inside $T_0\cup T_1$ that must contain a point of $f_a^{-1}(-a)$, a contradiction since in $T_0 \cup T_1$ there are no preimages of $-a$ other than $-a$ itself. Secondly, assume that $V$ contains no points of ${\mathcal J}(f_a)$ and thus is a Fatou component. Since the Fatou set coincides with the basin of attraction of $-a$, there exists an integer $N>0$ so that $f^N_a(V)={\mathcal A}^*(-a)$ and thus $\partial {\mathcal A}^*(-a)\subset f_a^N(\partial V)$, contradicting Proposition~\ref{prop:consequences_pol_like}(b), as only one point in $\partial {\mathcal A}^*(-a)$ has itinerary ${\mathfrak T}^1$.
We conclude that $\Gamma^1$ does not separate the plane, and by Theorem~\ref{theo:curry} it is either an indecomposable continuum or the union of two indecomposable continua. Nevertheless, $\gamma({\mathfrak T}^1)$ has a unique tail extending to infinity, so it cannot be the union of two indecomposable continua.
\end{proof}
\begin{Remark}
The above two results describe an interesting relationship between the combinatorics of an itinerary $s$ and the landing properties of the curve $\gamma(s)$. Indeed, if $s$ is an $A$-sequence whose blocks of $0_i$'s have bounded length, the same arguments as in Proposition~\ref{prop:PerSeqLand} can be applied to show that the accumulation set of the hair is none other than the unique point $p(s)\in \partial {\mathcal A}^*(-a)$ that follows the given itinerary. The key step is to ensure that pullbacks of the tail are always bounded away from the postcritical and asymptotic orbits (see, for instance, Proposition 3.6 in \cite{Fag}), and this is always the case for $A$-sequences whose blocks of $0_i$'s have bounded length.
In contrast, whenever $s={\mathfrak T}$ is a $B$-sequence as in Theorem~\ref{theorem:indecom}, the corresponding point $p({\mathfrak T}^1)$ in $\partial {\mathcal A}^*(-a)$ is an accumulation point of $\gamma({\mathfrak T}^1)$; nevertheless, it is not its endpoint.
\end{Remark}
\begin{Theorem}\label{theorem:relation_inde_basin}
Consider ${\mathfrak T}$ and $\gamma({\mathfrak T}^1)$ as in Theorem~\ref{theorem:indecom}, set $\Gamma^1= \overline{\gamma({\mathfrak T}^1)}$. Then, there exists a unique point $p_1\in \partial {\mathcal A}^*(-a)$ such that $p_1\in \Gamma^1$ and $p_1$ has itinerary ${\mathfrak T}^1$.
\end{Theorem}
\begin{proof}
Recall that ${\mathfrak T}=\tau110^{k_1}110^{k_2}\ldots$. For each $n\geq1$ define two preperiodic sequences given by
\begin{eqnarray*}
s_n&=&\tau\overline{110^{k_1}\ldots 110^{k_n}0}, \\
r_n&=&\tau\overline{110^{k_1}\ldots 110^{k_n}1}.
\end{eqnarray*}
Clearly, their associated $A$-sequences satisfy $s^1_n < {\mathfrak T}^1 < r^1_n$ for all $n\geq 1$ with respect to the distance induced by the usual order in $\Sigma_B$.
Moreover, if $p(s^1_n)$ and $q(r^1_n)$ denote endpoints of the corresponding preperiodic hairs associated to $s^1_n$ and $r^1_n$, then it follows by Proposition~\ref{prop:PerSeqLand} that $p(s^1_n), q(r^1_n)\in \partial {\mathcal A}^*(-a)$.
Let $p({\mathfrak T}^1)$ be the unique point in $\partial {\mathcal A}^*(-a)$ following the itinerary ${\mathfrak T}^1$ under the action of $f_a|\partial{\mathcal A}^*(-a)$.\\
In the Euclidean distance restricted to $\partial {\mathcal A}^*(-a)$ we obtain $|p(s^1_n)-q(r^1_n)|\to 0$ as $n\to +\infty$, and clearly $p(s^1_n)<p({\mathfrak T}^1)<q(r^1_n)$ with the order inherited from $S^1$ under the continuous extension of the B\"ottcher mapping $\Phi_a:{\Bbb D}\to {\mathcal A}^*(-a)$.
For each $n$, $\gamma({\mathfrak T}^1)$ belongs to the region bounded above and below by $\gamma(r^1_n)$ and $\gamma(s^1_n)$, and the arc $[p(s^1_n),q(r^1_n)]\subset\partial {\mathcal A}^*(-a)$ containing $p({\mathfrak T}^1)$. Thus, if $\gamma({\mathfrak T}^1)$ accumulates on $\partial {\mathcal A}^*(-a)$, then it must accumulate at the point $p({\mathfrak T}^1)$.
In order to show that $\overline{\gamma({\mathfrak T}^1)}$ accumulates on the boundary of ${\mathcal A}^*(-a)$, we employ the polynomial-like construction and the symbolics of ${\mathfrak T}^1$. Let $m>0$ and consider the hair associated to the itinerary
$$t_m=0^{k_m}11\, 0^{k_{m+1}}11 \ldots.$$
Note that $t_m$ is the image of ${\mathfrak T}^1$ under a suitable iterate of the shift $\sigma|{\Sigma_A}$. By Proposition~\ref{prop:passes}, the hair $\gamma(t_{m})$ comes close to the origin and the hair $\gamma(1t_m)$ intersects a left half plane. Hence a portion of $\gamma(1t_m)$ must intersect $V_a$, and its preimages under $f_a|V_a$ following $\tau110^{k_1}\ldots 0^{k_{m-1}}1$ must accumulate on $\partial {\mathcal A}^*(-a)$ since, by Proposition~\ref{proposition:pol_like},
$$\bigcap_{n\geq 0} f_a^{-n}(V_a)=\partial{\mathcal A}^*(-a),$$
and $m>0$ may be taken arbitrarily large.
\end{proof}
We end this section by briefly discussing the set of points in the Julia set that follow non-binary sequences. Note first that Proposition~\ref{prop:ConjugTails} can easily be modified to show the existence of tails with itineraries corresponding to the partition $\cup_{j\in {\Bbb Z}} T_j$. Let $s=(s_0,s_1, \ldots)$ be a sequence that contains infinitely many non-binary symbols and satisfies $T_{s_j}\subset f_a(T_{s_{j+1}})$ for all $j\geq 0$, and let $\omega_s$ be the tail associated to $s$ that lies in the right half plane Re$(z)>R$. If in addition the blocks of $0$'s in $s$ (if any) have bounded length, then the pullbacks of $\omega_s$ are always bounded away from the postcritical and asymptotic orbits, hence $\gamma(s)$ is a landing hair with an endpoint $p(s)$. By Proposition~\ref{prop:consequences_pol_like}, $p(s)$ (and in fact $\overline{ \gamma(s)}$) does not lie in the boundary of any Fatou component, as otherwise $s$ would have to end in a binary sequence. This establishes
\begin{Corollary}
If $s=(s_0, s_1,\ldots)\in {\Bbb Z}^{\Bbb N}$ is realizable by $f_a$, contains infinitely many non-binary symbols, and its blocks of $0$'s have bounded length, then $\gamma(s)$ is a landing hair and a buried component of ${\mathcal J}(f_a)$.
\end{Corollary}
\subsection*{Acknowledgments}
The first and second authors are both partially supported by the European network 035651-2-CODY, by MEC and CIRIT through the grants MTM2008-01486 and 2009SGR-792, respectively. The second author is also partially supported by MEC through the grant MTM2006-05849/Consolider (including a FEDER contribution). The third author is supported by CONACyT grant 59183, CB-2006-01. She would also like to express her gratitude to Universitat de Barcelona and Universitat Rovira i Virgili for their hospitality in the final stages of this article.
\noindent
{\small Antonio Garijo \& Xavier Jarque}\\
{\small Dept. d'Enginyeria Inform\`atica i Matem\`atiques}\\
{\small Universitat Rovira i Virgili}\\
{\small Av. Pa\"isos Catalans 26}\\
{\small Tarragona 43007, Spain}\\
\noindent
{\small M\'{o}nica Moreno Rocha}\\
{\small Centro de Investigaci\'on en Matem\'aticas}\\
{\small Callej\'on Jalisco s/n}\\
{\small Guanajuato 36240, Mexico}
\end{document} |
\begin{document}
\begin{abstract}
We consider links that are alternating on surfaces embedded in a compact 3-manifold. We show that under mild restrictions, the complement of the link decomposes into simpler pieces, generalising the polyhedral decomposition of alternating links of Menasco. We use this to prove various facts about the hyperbolic geometry of generalisations of alternating links, including weakly generalised alternating links described by the first author. We give diagrammatical properties that determine when such links are hyperbolic, find the geometry of their checkerboard surfaces, bound volume, and exclude exceptional Dehn fillings.
\end{abstract}
\title{Geometry of alternating links on surfaces}
\section{Introduction}\label{Sec:Intro}
In 1982, Thurston proved that all knots in the 3-sphere are either satellite knots, torus knots, or hyperbolic \cite{thu82}. Of all classes of knots, alternating knots (and links) have been the most amenable to the study of hyperbolic geometry in the ensuing years. Menasco proved that aside from $(2,q)$-torus knots and links, all prime alternating links are hyperbolic \cite{men84}. Lackenby bounded their volumes in terms of a reduced alternating diagram \cite{lac04}. Adams showed that checkerboard surfaces in hyperbolic alternating links are quasifuchsian \cite{ada07}; see also \cite{fkp14}. Lackenby and Purcell bounded cusp geometry \cite{lp16}. Additionally, there are several open conjectures on relationships between their geometry and an alternating diagram that arise from computer calculation, for example that of Thistlethwaite and Tsvietkova \cite{tt14}.
Because hyperbolic geometric properties of alternating knots can be read off an alternating diagram, it makes sense to try to generalise these knots, and to try to distill the properties of such knots that lead to geometric information. One important property is the existence of a pair of essential checkerboard surfaces, which are known to characterise an alternating link \cite{gre17, how17}. These lead to a decomposition of the link complement into polyhedra, which can be given an angle structure \cite{lac00}. This gives tools to study surfaces embedded in the link complement. Angled polyhedra were generalised to ``angled blocks'' by Futer and Gu\'eritaud to study arborescent and Montesinos links \cite{fg09}. However, their angled blocks do not allow certain combinatorial behaviours, such as bigon faces, that arise in practice. Here, we generalise further, to angled decompositions we call angled chunks. We also allow decompositions of manifolds with boundary.
We apply these techniques to broad generalisations of alternating knots in compact 3-manifolds, including generalisations due to the first author \cite{how15t}. An alternating knot has a diagram that is alternating on a plane of projection $S^2\subset S^3$. We may also consider alternating projections of knots onto higher genus surfaces in more general 3-manifolds. Adams was one of the first to consider such knots; in 1994 he studied alternating projections on a Heegaard torus in $S^3$ and lens spaces, and their geometry \cite{ada94}. The case of higher genus surfaces in $S^3$ has been studied by Hayashi \cite{hay95} and Ozawa \cite{oza06}. By generalising further, Howie \cite{how15t} and Howie and Rubinstein \cite{hr16} obtain a more general class of alternating diagrams on surfaces in $S^3$ for which the checkerboard surfaces are guaranteed to be essential. Here, we obtain similar results without restricting to $S^3$.
In this paper, we utilise essential surfaces and angled decompositions to prove a large number of results on the geometry of classes of generalisations of alternating knots in compact 3-manifolds. We identify from a diagram conditions that guarantee such links are hyperbolic, satellite, or torus links, generalising \cite{men84, hay95}. We identify the geometry of checkerboard surfaces, either accidental, quasifuchsian, or a virtual fiber, generalising \cite{fkp14, ada07}. We also bound the volumes of such links from below, generalising \cite{lac04}, and we determine conditions that guarantee their Dehn fillings are hyperbolic, generalising \cite{lac00}. We re-frame all these disparate results as consequences of the existence of essential surfaces and an angled decomposition. It is likely that much of this work will apply to additional classes of link complements and 3-manifolds with such a decomposition, but we focus here on alternating links.
To state our results precisely, we must describe the alternating links on surfaces carefully, and that requires ruling out certain trivial diagrams and generalisations of connected sums. Additionally, the link must project to the surface of the diagram in a substantial way, best described by a condition on representativity $r(\pi(L),F)$, which is adapted from a notion in graph theory. These conditions are very natural, described in detail in \refsec{Gen}, where we define a \emph{weakly generalised alternating link}, \refdef{WeaklyGeneralisedAlternating}. One consequence is the following.
\begin{theorem}\label{Thm:Intro}
Let $\pi(L)$ be a weakly generalised alternating projection of a link $L$ onto a generalised projection surface $F$ in a 3-manifold $Y$. Suppose $Y$ is compact, orientable, irreducible, and if $\partial Y \neq \emptyset$, then $\partial Y$ is incompressible in $Y{\smallsetminus} N(F)$.
Finally, suppose $Y{\smallsetminus} N(F)$ is atoroidal and contains no essential annuli with both boundary components on $\partial Y$.
If $F$ has genus at least one, and the regions in the complement of $\pi(L)$ on $F$ are disks, and the representativity $r(\pi(L),F)>4$, then
\begin{enumerate}
\item $Y{\smallsetminus} L$ is hyperbolic.
\item $Y{\smallsetminus} L$ admits two checkerboard surfaces that are essential and quasifuchsian.
\item The hyperbolic volume of $Y{\smallsetminus} L$ is bounded below by a function of the twist number of $\pi(L)$ and the Euler characteristic of $F$:
\[ \operatorname{vol}(Y{\smallsetminus} L)\geq\frac{v_8}{2}(\operatorname{tw}(\pi(L))-\chi(F)-\chi(\partial Y)).\]
Here, in the case that $Y{\smallsetminus} N(L)$ has boundary components of genus greater than one, we take $\operatorname{vol}(Y{\smallsetminus} L)$ to mean the volume of the unique hyperbolic manifold with interior homeomorphic to $Y{\smallsetminus} L$, and with higher genus boundary components that are totally geodesic.
\item Further, if $L$ is a knot with twist number greater than eight, or the genus of $F$ is at least five, then all non-trivial Dehn fillings of $Y{\smallsetminus} L$ are hyperbolic.
\end{enumerate}
\end{theorem}
\refthm{Intro} follows from Theorems~\ref{Thm:hress}, \ref{Thm:Hyperbolic}, \ref{Thm:volguts}, and~\ref{Thm:DehnFilling} and Corollaries~\ref{Cor:Quasifuchsian} and~\ref{Cor:DehnGenusBound} in this paper. More general results also hold, stated below.
In the classical setting, $Y=S^3$.
The results of \refthm{Intro} immediately apply to large classes of the generalisations of alternating knots in $S^3$ studied in \cite{ada94, oza06, hay95, how15t, hr16}. However, since we allow more general $Y$, \refthm{Intro} also applies more broadly, for example to all cellular alternating links in the thickened torus $T^2\times I$, as in \cite{ckp}. Geometric properties of alternating links in $T^2\times I$ arising from Euclidean tilings have also been studied recently in \cite{acm17} and \cite{ckp}; see \refexa{T2xI}.
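For a rough sense of scale, the volume bound in part (3) of \refthm{Intro} is straightforward to evaluate numerically. Here $v_8\approx 3.66386$ is the volume of the regular ideal hyperbolic octahedron; the twist number and Euler characteristics below are illustrative values only, not data from the paper.

```python
V8 = 3.663862376708876  # volume of the regular ideal hyperbolic octahedron

def volume_lower_bound(tw, chi_F, chi_dY=0):
    """(v_8 / 2) * (tw(pi(L)) - chi(F) - chi(dY)), the bound in part (3)."""
    return 0.5 * V8 * (tw - chi_F - chi_dY)

# e.g. a diagram with 10 twist regions on a genus-2 surface (chi(F) = -2)
# in a closed manifold (chi(dY) = 0):
print(round(volume_lower_bound(10, -2), 3))  # 21.983
```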
\subsection{Organisation of results}
In \refsec{Gen}, we define our generalisations of alternating knots, particularly conditions that ensure our generalisations are nontrivial with reduced diagrams. \refsec{Chunks} introduces angled chunks, and proves that complements of these links have an angled chunk decomposition. This gives us tools to discuss the hyperbolicity of their complements in \refsec{Hyperbolic}. The techniques can be applied to identify geometry of surfaces in \refsec{Accidental}, and to bound volumes in \refsec{Volume}. Finally, we restrict their exceptional Dehn fillings in \refsec{Filling}.
\section{Generalisations of alternating knots and links}\label{Sec:Gen}
An alternating knot has a diagram that is alternating on a plane of projection, $S^2$ embedded in $S^3$. One may generalise to an alternating diagram on another surface embedded in a different 3-manifold. A more general definition is the following.
\begin{definition}\label{Def:ProjSfce}
Let $F_i$ be a closed orientable surface for $i=1,\ldots,p$, and let $Y$ be a compact orientable irreducible 3-manifold. A \emph{generalised projection surface} $F$ is a piecewise linear embedding,
$F\colon \bigsqcup_{i=1}^{p}F_i\hookrightarrow Y$
such that $F$ is non-split in $Y$.
(Recall a collection of surfaces $F$ is \emph{non-split} if every embedded $2$-sphere in $Y{\smallsetminus} F$ bounds a $3$-ball in $Y{\smallsetminus} F$.)
Since $F$ is an embedding we will also denote the image of $F$ in $Y$ by $F$.
\end{definition}
Note that our definitions ensure that if $F$ is a generalised projection surface and some component $F_i$ is a 2-sphere, then $F$ is homeomorphic to $S^2$, and $Y$ is homeomorphic to $S^3$.
\begin{definition}\label{Def:GenDiagram}
For $F$ a generalised projection surface, a link $L\subset F\times I\subset Y$ can be projected onto $F$ by $\pi \colon F\times I \to F$.
We call $\pi(L)$ a \emph{generalised diagram}.
\end{definition}
Note that every knot has a trivial generalised diagram on the torus boundary of a regular neighbourhood of the knot. Such diagrams are not useful. To ensure nontriviality, we require conditions expressed in terms of representativity, adapted from graph theory:
\begin{definition}\label{Def:Representativity}
Let $F$ be a closed orientable (possibly disconnected) surface embedded in a compact orientable irreducible 3-manifold $Y$. A regular neighbourhood $N(F)$ of $F$ in $Y$ has the form $F\times (-1,1)$. Because every component $F_i$ of $F$ is 2-sided, $Y{\smallsetminus} N(F)$ has two boundary components homeomorphic to $F_i$, namely $F_i\times\{-1\}$ and $F_i\times\{1\}$. Let $F_i^-$ denote the boundary component coming from $F_i\times\{-1\}$ and $F_i^+$ the one from $F_i\times\{1\}$.
Let $\pi(L)$ be a link projection onto $F$. If $\ell$ is an essential curve on $F$, isotope $\ell$ so that it meets $\pi(L)$ transversely in edges, not crossings.
\begin{itemize}
\item The \emph{edge-representativity} $e(\pi(L),F)$ is the minimum number of intersections between $\pi(L)$ and any essential curve $\ell\subset F$.
\item Define $r^-(\pi(L),F_i)$ to be the minimum number of intersections between the projection of $\pi(L)$ onto $F_i^-$ and the boundary of any compressing disk for $F_i^-$ in $Y{\smallsetminus} F$. If there are no compressing disks for $F_i^-$ in $Y{\smallsetminus} F$, then set $r^-(\pi(L),F_i)=\infty$. Define $r^+(\pi(L),F_i)$ similarly, using $F_i^+$.
\item The \emph{representativity} $r(\pi(L),F)$ is the minimum of \[\bigcup_i (r^-(\pi(L),F_i)\cup r^+(\pi(L),F_i)).\]
\item Also, define $\hat{r}(\pi(L),F)$ to be the minimum of \[\bigcup_i \max (r^-(\pi(L),F_i), r^+(\pi(L),F_i)).\]
\end{itemize}
\end{definition}
Note that if $F$ has just one component, the representativity $r(\pi(L),F)$ counts the minimum number of times the boundary of any compression disk for $F$ meets the diagram $\pi(L)$. As for $\hat{r}(\pi(L),F)$, there may be a compression disk on one side of $F$ whose boundary meets $\pi(L)$ fewer than $\hat{r}(\pi(L),F)$ times, but all compression disks on the opposite side have boundary meeting the diagram at least $\hat{r}(\pi(L),F)$ times.
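The two quantities differ only in taking a minimum versus a maximum over the two sides of each component. A toy computation makes this concrete (the encoding of intersection counts as pairs per component is our own; $\infty$ models a side with no compressing disks):

```python
import math

def representativity(sides):
    """sides: list of pairs (r_minus, r_plus), one per component F_i, giving
    the minimal intersection number of the diagram with compressing-disk
    boundaries on each side; math.inf models an incompressible side.

    Returns (r, r_hat) as in the definition: r minimises over the union of
    all sides, r_hat over the better side of each component.
    """
    r = min(min(rm, rp) for rm, rp in sides)
    r_hat = min(max(rm, rp) for rm, rp in sides)
    return r, r_hat

# One component, compressible on one side only:
print(representativity([(3, math.inf)]))  # (3, inf)
```

So $r\leq\hat{r}$ always, with equality when each component compresses equally well (or not at all) on both sides.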
We will require diagrams to have representativity at least $2$, $4$, or more, depending on the result.
\begin{example}
Let $Y$ be the thickened torus $Y=T^2\times [-1,1]$, and let $F$ be the torus $T^2\times\{0\}$. Consider the generalised diagram $\pi(L)$ shown in \reffig{CheckColourable}. The edge-representativity $e(\pi(L), F)$ is zero for this example, since the curve of slope $0/1$ (the horizontal edge of the rectangle shown in the figure) does not meet $\pi(L)$. However, since there are no compressing disks for $F$ in $Y$, both $r^-(\pi(L),F)$ and $r^+(\pi(L),F)$ are infinite, and thus so are $r(\pi(L),F)$ and $\hat{r}(\pi(L),F)$.
\end{example}
\begin{figure}
\caption{An example of an alternating diagram on a torus that is not checkerboard colourable.}
\label{Fig:CheckColourable}
\end{figure}
We would also like to consider diagrams that are appropriately reduced; for example, there should be no nugatory crossings on $F$. In the alternating case on $S^2\subset S^3$, the condition is that the diagram be \emph{prime}: if an essential curve intersects the diagram exactly twice then it bounds a region of the diagram containing a single embedded arc.
There are two natural generalisations of this condition. First, we can define a generalised diagram to be \emph{strongly prime} if whenever a loop $\ell\subset F$ intersects $\pi(L)$ exactly twice, then $\ell$ bounds a disk $D\subset F$ such that $\pi(L)\cap D$ is a single embedded arc. This notion was considered by Ozawa~\cite{oza06}. However, we will consider a weaker notion:
\begin{definition}\label{Def:WeaklyPrime}
A generalised diagram $\pi(L)$ on generalised projection surface $F$ is \emph{weakly prime} if whenever $D\subset F_i\subset F$ is a disk in a component $F_i$ of $F$ with $\partial D$ intersecting $\pi(L)$ transversely exactly twice, the following holds:
\begin{itemize}
\item If $F_i$ has positive genus, then the intersection $\pi(L)\cap D$ is a single embedded arc.
\item If $F_i \cong S^2$, then $\pi(L)$ has at least two crossings on $F_i$ and either the intersection $\pi(L)\cap D$ is a single embedded arc, or $\pi(L)\cap (F_i{\smallsetminus} D)$ is a single embedded arc.
\end{itemize}
\end{definition}
If $F\cong S^2$, so $\pi(L)$ is a diagram in the usual sense, then a weakly prime diagram is equivalent to a reduced prime diagram.
Additionally, strongly prime implies weakly prime, but the converse is not true: if an essential curve meets $\pi(L)$ exactly twice but does not bound a disk in $F$, the generalised diagram is not strongly prime, but could be weakly prime.
The generalised diagrams we consider will be alternating. Because the diagrams might be disconnected, we need to define carefully an alternating diagram in this case.
\begin{definition}\label{Def:Alternating}
A generalised diagram $\pi(L)$ is said to be \emph{alternating} if for each region of $F{\smallsetminus}\pi(L)$, each boundary component of the region is alternating. That is, each boundary component of each region of $F{\smallsetminus}\pi(L)$ can be given an orientation such that crossings run from under to over in the direction of orientation.
\end{definition}
\reffig{CheckColourable} shows an example of an alternating diagram on a torus. In this diagram, there is exactly one region that is not a disk; it is an annulus. Each boundary component of the annulus can be oriented to be alternating, so this diagram satisfies \refdef{Alternating}. However, note that there is no way to orient the annulus region consistently to ensure that the induced orientations on the boundary are alternating (in the same direction: under to over).
\begin{definition}\label{Def:CheckColourable}
A generalised diagram $\pi(L)$ is said to be \emph{checkerboard colourable} if each region of $F{\smallsetminus}\pi(L)$ can be oriented such that the induced orientation on each boundary component is alternating: crossings run from under to over in the direction of orientation. Given a checkerboard colourable diagram, regions on opposite sides of an edge of $\pi(L)$ will have opposite orientations. We colour all regions with one orientation white and the other shaded; this gives the checkerboard colouring of the definition.
\end{definition}
The diagram of \reffig{CheckColourable} is not checkerboard colourable.
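For an alternating generalised diagram, checkerboard colourability amounts to the statement that the regions of $F{\smallsetminus}\pi(L)$ admit a proper $2$-colouring, i.e.\ that the region-adjacency graph is bipartite (adjacent regions receive opposite orientations, hence opposite colours). A minimal sketch of this check, assuming the diagram has already been encoded as an adjacency dictionary of regions (our own encoding, not from the paper):

```python
from collections import deque

def is_checkerboard_colourable(adjacency):
    """BFS 2-colouring of the region-adjacency graph of a diagram.

    adjacency: dict mapping each region to the list of regions that share an
    edge of pi(L) with it.  A region adjacent to itself correctly fails.
    """
    colour = {}
    for start in adjacency:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False  # odd cycle: no consistent colouring
    return True

# Region graph of the standard trefoil diagram on S^2: outer region 0 and
# central region 4 each border the three "petal" regions 1, 2, 3.
trefoil = {0: [1, 2, 3], 4: [1, 2, 3], 1: [0, 4], 2: [0, 4], 3: [0, 4]}
print(is_checkerboard_colourable(trefoil))  # True
```

On $S^2$ every link diagram passes this test; on higher genus surfaces it can fail, as \reffig{CheckColourable} shows.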
Typically, we will consider the following general class of links.
\begin{definition}[Reduced alternating on $F$]\label{Def:AltKnots}
Let $Y$ be a compact, orientable, irreducible 3-manifold with generalised projection surface $F$ such that if $\partial Y\neq \emptyset$, then $\partial Y$ is incompressible in $Y{\smallsetminus} N(F)$.
A generalised diagram $\pi(L)$ on $F$ of a knot or link $L$ is \emph{reduced alternating} if
\begin{enumerate}
\item\label{Itm:Alternating} $\pi(L)$ is alternating on $F$,
\item $\pi(L)$ is weakly prime,
\item\label{Itm:Connected} $\pi(L)\cap F_i \neq \emptyset$ for each $i=1, \dots, p$, and
\item\label{Itm:slope} each component of $L$ projects to at least one crossing in $\pi(L)$.
\end{enumerate}
\end{definition}
Knots and links that satisfy \refdef{AltKnots} have been studied in various places. Such links on a surface $F$ in $S^3$ that are checkerboard colourable with representativity $r(\pi(L),F) \geq 4$ are called \emph{weakly generalised alternating links}. They were introduced by Howie and Rubinstein~\cite{hr16}. Other knots satisfying \refdef{AltKnots} include generalised alternating links in $S^3$ considered by Ozawa~\cite{oza06}, which are required to be strongly prime, giving restrictions on edge representativity and forcing regions of $F{\smallsetminus}\pi(L)$ to be disks~\cite{how15t}.
Knots satisfying properties \eqref{Itm:Alternating}, \eqref{Itm:Connected}, and \eqref{Itm:slope} include the toroidally alternating knots considered by Adams~\cite{ada94}, which are required to lie on a Heegaard torus, with disk regions of $F{\smallsetminus} \pi(L)$. They also include the alternating knots on a Heegaard surface $F$ for $S^3$ considered by Hayashi~\cite{hay95}, which again are required to have disk regions of $F{\smallsetminus}\pi(L)$. Additionally, they include the alternating projections of links onto their Turaev surfaces~\cite{dfk08}, which also have disk regions.
By contrast, weakly generalised alternating links and the links of \refdef{AltKnots} are not required to lie on a Heegaard surface, and regions of $F{\smallsetminus}\pi(L)$ are not required to be disks.
\begin{definition}\label{Def:WeaklyGeneralisedAlternating}
Let $\pi(L)$ be a reduced alternating diagram on $F$ in $Y$. If further,
\begin{enumerate}
\item[(5)]\label{Itm:CheckCol} $\pi(L)$ is checkerboard colourable, and
\item[(6)]\label{Itm:Rep} the representativity $r(\pi(L),F)\geq 4$,
\end{enumerate}
we say that $\pi(L)$ is a \emph{weakly generalised alternating link diagram} on $F$ in $Y$, and $L$ is a \emph{weakly generalised alternating link}.
\end{definition}
These conditions are sufficient to show the following.
\begin{theorem}[Howie~\cite{how15t}]\label{Thm:wgaprime}
Let $\pi(L)$ be a weakly generalised alternating projection of a link $L$ in $S^3$. Then $L$ is a nontrivial, nonsplit, prime link.
\end{theorem}
Theorem~\ref{Thm:wgaprime} generalises a similar theorem of Menasco for prime alternating links~\cite{men84}. For generalised alternating links this was known by Hayashi~\cite{hay95} and Ozawa~\cite{oza06}.
\section{Angled decompositions}\label{Sec:Chunks}
The usual alternating knot and link complements in $S^3$ have a well-known polyhedral decomposition, described by Menasco \cite{men83} (see also \cite{lac00}). The decomposition can be generalised for knots that are reduced alternating on a generalised projection surface $F$, but the pieces are more complicated than polyhedra. This decomposition is similar to a decomposition into angled blocks defined by Futer and Gu{\'e}ritaud \cite{fg09}, but is again more general. A similar decomposition was done for a particular link in \cite{ckp16}, and more generally for links in the manifold $T^2\times I$ in \cite{ckp}. In this section, we describe the decomposition in full generality.
\subsection{A decomposition of alternating links on $F$}
A \emph{crossing arc} is a simple arc in the link complement running from an overcrossing to its associated undercrossing.
The polyhedral decomposition of Menasco \cite{men83} can be generalised as follows.
\begin{proposition}\label{Prop:AltChunkDecomp}
Let $Y$ be a compact, orientable, irreducible 3-manifold containing a generalised projection surface $F$ such that $\partial Y$ is incompressible in $Y{\smallsetminus} N(F)$ whenever $\partial Y\not=\emptyset$.
Let $L$ be a link with a generalised diagram $\pi(L)$ on $F$ satisfying properties \eqref{Itm:Alternating}, \eqref{Itm:Connected}, and \eqref{Itm:slope} of \refdef{AltKnots}.
Then $Y{\smallsetminus} L$ can be decomposed into pieces such that:
\begin{itemize}
\item Pieces are homeomorphic to components of $Y{\smallsetminus} N(F)$, where $N(F)$ denotes a regular open neighbourhood of $F$, except each piece has a finite set of points removed from $\partial (Y{\smallsetminus} N(F))$ (namely the ideal vertices below).
\item On each copy of each component $F_i$ of $F$, there is an embedded graph with vertices, edges, and regions identified with the diagram graph $\pi(L)\cap F_i$. All vertices are ideal and 4-valent.
\item To obtain $Y{\smallsetminus} L$, glue pieces as follows. Each region of $F_i{\smallsetminus}\pi(L)$ is glued to the corresponding region on the opposite copy of $F_i$ by a homeomorphism that is the identity composed with a rotation along the boundary. The rotation takes an edge of the boundary to the nearest edge in the direction of that boundary component's orientation, with orientation as in \refdef{Alternating}.
\item Edges correspond to crossing arcs, and are glued in fours. At each ideal vertex, two opposite edges are glued together.
\end{itemize}
\end{proposition}
\begin{proof}
Consider a crossing arc of the diagram. Sketch its image four times in the diagram at a crossing, as on the left of \reffig{EdgeIdentifications}.
On each side of a component $F_i$, the link runs through overstrands and understrands in an alternating pattern. Pull the crossing arcs flat to lie on $F_i$, with the overstrand running between two pairs of crossing arcs as shown in \reffig{EdgeIdentifications}, left. Note the pattern of overstrands and understrands will look exactly opposite on the two sides of $F_i$. On each side of $F_i$, identify each pair of crossing arcs that are now parallel on $F_i$. These are the edges of the decomposition. Viewed from the opposite side of $F_i$, overcrossings become undercrossings, and exactly the opposite edges are identified. See \reffig{EdgeIdentifications}, right.
\begin{figure}
\caption{Left: Crossing arcs are split into four edges. Middle and right: edges are identified in pairs, with opposite pairs identified on either side of $F_i$.}
\label{Fig:EdgeIdentifications}
\end{figure}
Now slice along $F_i$. This cuts $Y{\smallsetminus} L$ into pieces homeomorphic to $Y{\smallsetminus} N(F)$ (here, we use condition \eqref{Itm:Connected} of \refdef{AltKnots} to conclude that each $F_i$ appears). Each side of $F_i$ is marked with overstrands of $\pi(L)$ and a pair of crossing arcs adjacent to each overcrossing.
On each side of $F_i$, shrink each overstrand of the diagram to the crossing vertex of $\pi(L)$ corresponding to its overcrossing. Each overstrand thus becomes a single ideal vertex of the decomposition. (And by \eqref{Itm:slope} of \refdef{AltKnots}, each strand of the link meets a crossing and so is divided up into ideal vertices.)
Note that the edges are pulled to run between vertices following the edges of the diagram graph.
Faces are regions of the diagram graph, and their gluing matches an edge on a region on one side of $F_i$ to an adjacent edge on the same region on the other, in the direction determined by the alternating orientation.
This gluing is the same as for regular alternating links, and has been likened to a gear rotation \cite{thu79}. Our requirement that regions of a diagram be alternating ensures this gear rotation gluing holds even for faces with multiple boundary components.
All edges coming from a crossing arc are identified together. Thus there are four edges identified to one. At each ideal vertex, a pair of opposite edges are glued together.
\end{proof}
When our knot or link is checkerboard colourable, we obtain two checkerboard surfaces.
\begin{definition}\label{Def:CheckerboardSurfaces}
Let $Y$ be a compact, irreducible, orientable 3-manifold with generalised projection surface $F$. Let $\pi(L)$ be a knot or link diagram that is reduced alternating on $F$ and that is also checkerboard colourable. Give $F{\smallsetminus}\pi(L)$ the checkerboard colouring into white and shaded regions. The resulting coloured regions correspond to white and shaded surfaces with boundary that can be embedded in $(F\times I) {\smallsetminus} L \subset Y{\smallsetminus} L$. Complete shaded regions into a spanning surface for $L$ by joining two shaded regions adjacent across a crossing of $\pi(L)$ by a twisted band in $(F\times I){\smallsetminus} L$. Similarly for white regions. The two surfaces that arise are the \emph{checkerboard surfaces} of $\pi(L)$, white and shaded. They intersect in crossing arcs.
\end{definition}
The alternating property and condition \refitm{slope} in \refdef{AltKnots} ensure that the two checkerboard surfaces have distinct slopes on each component of $L$.
\begin{proposition}\label{Prop:AltChunkDecompCheck}
Let $L$ be a link with a generalised diagram $\pi(L)$ on $F$ in a compact, orientable, irreducible 3-manifold $Y$ satisfying properties \eqref{Itm:Alternating}, \eqref{Itm:Connected}, and \eqref{Itm:slope} of \refdef{AltKnots}, and suppose that $\pi(L)$ is checkerboard colourable.
Then the regions of the decomposition of \refprop{AltChunkDecomp} are coloured, shaded and white, and the gluing of faces rotates each boundary component once: in the clockwise direction for white faces, and in the counterclockwise direction for shaded faces.
\end{proposition}
\begin{proof}
This follows from the relationship between orientation and alternating boundary components on white and shaded faces.
\end{proof}
\subsection{Defining angled chunks} We now define a decomposition of a 3-manifold for which \refprop{AltChunkDecomp} is an example.
\begin{definition}\label{Def:Chunk}
A \emph{chunk} $C$ is a compact, oriented, irreducible 3-manifold with boundary $\partial C$ containing an embedded (possibly disconnected) non-empty graph $\Gamma$ with all vertices having valence at least 3. We allow components of $\partial C$ to be disjoint from $\Gamma$.
Regions of $\partial C{\smallsetminus} \Gamma$ are called \emph{faces}, despite not necessarily being simply connected.
A face that arises from a component of $\partial C$ that is disjoint from $\Gamma$ is called an \emph{exterior face}, and we require any exterior face to be incompressible in $C$. Other faces are called \emph{interior faces}.
A \emph{truncated chunk} is a chunk for which a regular neighbourhood of each vertex of $\Gamma$ has been removed.
This produces new faces, called \emph{boundary faces}, and new edges bordering boundary faces called \emph{boundary edges}. Note boundary faces are homeomorphic to disks.
A \emph{chunk decomposition} of a 3--manifold $M$ is a decomposition of $M$ into chunks, such that $M$ is obtained by gluing chunks by homeomorphisms of (non-boundary, non-exterior) faces, with edges mapping to edges homeomorphically.
\end{definition}
Note we do not require faces to be contractible, we allow bigon faces, and we allow incompressible tori to be embedded in chunks. These are all more general than Futer and Gu{\'e}ritaud's blocks \cite{fg09}.
\refprop{AltChunkDecomp} implies that for a reduced alternating knot or link $L$ on $F$ in $Y$, the complement $Y{\smallsetminus} L$ has a chunk decomposition.
Note that for any chunk in the decomposition, the graph $\Gamma$ is the diagram graph of the link, and so each vertex has valence four. Thus for any truncated chunk arising from a reduced alternating link $L$ on $F$ in $Y$, all boundary faces are squares. We will use this fact frequently in our applications below.
\begin{remark}
We add a word on notation. The manifold $Y{\smallsetminus} L$ is not a compact manifold; the chunks in its decomposition glue to form $Y{\smallsetminus} L$ with ideal vertices on $L$. We also frequently need to consider the compact manifold $Y{\smallsetminus} N(L)$, where $N(\cdot)$ denotes a regular open neighbourhood. We will denote this manifold by $X(L):=Y{\smallsetminus} N(L)$, or more simply by $X$. It is the \emph{exterior} of the link $L$ in $Y$. Truncated chunks glue to form this compact manifold.
\end{remark}
In the case that $\pi(L)$ is checkerboard colourable, we will also be interested in the manifold obtained by leaving the shaded faces of the chunk decomposition of $Y{\smallsetminus} L$ unglued. This is homeomorphic to $(Y{\smallsetminus} L){\smallsetminus} N(\Sigma)$, where $\Sigma$ is the shaded checkerboard surface. More accurately, we will leave shaded faces of \emph{truncated} chunks unglued, and so the result is homeomorphic to $X{\smallsetminus} N(\Sigma)$. We denote this compact manifold by $X{\backslash \backslash}\Sigma$. Its boundary consists of $\partial N(\Sigma) =\widetilde{\Sigma}$, the double cover of $\Sigma$, and remnants of $\partial N(L)$ coming from boundary faces, as well as exterior faces coming from $\partial Y$.
The portion of the boundary $\partial N(L) {\smallsetminus} N(\Sigma)$ is called the \emph{parabolic locus}.
\begin{definition}\label{Def:BoundedChunk}
Let $M$ be a compact orientable 3-manifold containing a properly embedded surface $\Sigma$.
A \emph{bounded chunk decomposition} of the manifold $M{\backslash \backslash}\Sigma := M{\smallsetminus} N(\Sigma)$ is a decomposition of $M{\backslash \backslash}\Sigma$ into truncated chunks, with
boundary faces, exterior faces, and interior faces as before, only now faces lying on $\widetilde{\Sigma}\subset \partial(M{\backslash \backslash}\Sigma)$ are left unglued.
The faces that are glued are still called \emph{interior faces}. Those left unglued, lying on $\widetilde{\Sigma}$, are called \emph{surface faces}.
Edges are defined to be \emph{boundary edges} if they lie between boundary faces and other faces, \emph{interior edges} if they lie between two interior faces, and \emph{surface edges} if they lie between a surface face and an interior face. Surface faces are not allowed to be adjacent along an edge.
Finally, the restriction of the gluing to surface faces identifies each surface edge to exactly one other surface edge, and produces $\widetilde{\Sigma}$.
\end{definition}
\subsection{Normal surfaces}
Although chunks are more general than blocks, we can define normal surfaces inside them. The following definition is modified slightly from Futer--Gu{\'e}ritaud.
\begin{definition}\label{Def:NormalSurface}
For $C$ a truncated chunk, and $(S,\partial S)\subset (C, \partial C)$ a properly embedded surface, we say $S$ is \emph{normal} if it satisfies:
\begin{enumerate}
\item\label{Itm:IncomprNorm} Each closed component of $S$ is incompressible in $C$.
\item $S$ and $\partial S$ are transverse to all faces, boundary faces, and edges of $C$.
\item\label{Itm:BdryNoDisk} If a component of $\partial S$ lies entirely in a face (or boundary face) of $C$, then it does not bound a disk in that face (or boundary face).
\item\label{Itm:NoArcEndptsEdge} If an arc $\gamma$ of $\partial S$ in a face (or boundary face) of $C$ has both endpoints on the same edge, then the arc $\gamma$ along with an arc of the edge cannot bound a disk in that face (or boundary face).
\item\label{Itm:NoArcEndptsAdj} If an arc $\gamma$ of $\partial S$ in a face of $C$ has one endpoint on a boundary edge and the other on an adjacent interior or surface edge, then $\gamma$ cannot cut off a disk in that face.
\end{enumerate}
Given a chunk decomposition of $M$, a surface $(S,\partial S)\subset (M,\partial M)$ is called \emph{normal} if for every chunk $C$, the intersection $S\cap C$ is a (possibly disconnected) normal surface in $C$.
\end{definition}
We have made modifications to Futer--Gu{\'e}ritaud's definition in items \eqref{Itm:BdryNoDisk}, \eqref{Itm:NoArcEndptsEdge}, and \eqref{Itm:NoArcEndptsAdj}: for item \eqref{Itm:BdryNoDisk}, we may in fact have components of $\partial S$ that lie in a single face; however, in such a case the face must not be contractible, and the corresponding component of $\partial S$ must be a nontrivial curve in that face. Similar remarks apply to \eqref{Itm:NoArcEndptsEdge} and \eqref{Itm:NoArcEndptsAdj}.
Note also that since boundary faces are already disks, items \eqref{Itm:NoArcEndptsEdge} and \eqref{Itm:NoArcEndptsAdj} agree with Futer--Gu{\'e}ritaud's definition for boundary faces.
Also, note that we consider a 2-sphere to be incompressible if it does not bound a ball, hence the fact that a chunk is irreducible by definition, along with item \eqref{Itm:IncomprNorm} of \refdef{NormalSurface} implies that the intersection of a normal surface with a chunk has no spherical components.
\begin{theorem}\label{Thm:NormalForm}
Let $M$ be a manifold with a chunk or bounded chunk decomposition.
\begin{enumerate}
\item If $M$ is reducible, then $M$ contains a normal 2-sphere.
\item If $M$ is irreducible and boundary reducible, then $M$ contains a normal disk.
\item If $M$ is irreducible and boundary irreducible, then any essential surface in $M$ can be isotoped into normal form.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is a standard innermost disk, outermost arc argument. In the case of a chunk decomposition, it follows nearly word for word the proof of \cite[Theorem~2.8]{fg09}; we leave the details to the reader.
In the case of a bounded chunk decomposition, an essential surface $S$ can no longer be isotoped through surface faces. We modify the proof where required to avoid such moves. First, if a component of $\partial S$ lies entirely in a surface face and bounds a disk in that face, consider an innermost such curve. Since $S$ is incompressible, that curve bounds a disk in $S$ as well, hence $S$ has a disk component, parallel into a surface face, contradicting the fact that it is essential.
If an arc of intersection of $S$ with a face has both its endpoints on the same surface edge, and the arc lies in an interior face, then an outermost such arc and the edge bound a disk $D$ with one arc of $\partial D$ on $S$ and one arc on a surface face. Because $S$ is essential, it is boundary incompressible; it follows that the arc of intersection can be pushed off. A similar argument implies that an arc of intersection of $S$ with an interior face that has one endpoint on a boundary edge and one on a surface edge can be pushed off. For all other arcs of intersection with endpoints on one edge, or an edge and adjacent boundary edges, the argument follows just as before.
\end{proof}
\subsection{Angled chunks and combinatorial area}
\begin{definition}\label{Def:AngledChunk}
An \emph{angled chunk} is a truncated chunk $C$ such that each edge $e$ of $C$ has an associated interior angle $\alpha(e)$ and exterior angle $\epsilon(e) = \pi-\alpha(e)$ satisfying:
\begin{enumerate}
\item\label{Itm:0toPi} $\alpha(e) \in (0,\pi)$ if $e$ is not a boundary edge; $\alpha(e) = \pi/2$ if $e$ is a boundary edge.
\item\label{Itm:VertexSum} For each boundary face, let $e_1, \dots, e_n$ denote the non-boundary edges with an endpoint on that boundary face. Then $\sum_{i=1}^n \epsilon(e_i) = 2\pi.$
\item\label{Itm:NormalDiskSum} For each normal disk in $C$ whose boundary meets edges $e_1, \dots, e_p$, $\sum_{i=1}^p \epsilon(e_i) \geq 2\pi.$
\end{enumerate}
An \emph{angled chunk decomposition} of a 3--manifold $M$ is a subdivision of $M$ into angled chunks, glued along interior faces, such that $\sum \alpha(e_i)=2\pi$, where the sum is over the interior edges $e_i$ that are identified under the gluing.
A \emph{bounded angled chunk decomposition} satisfies all the above, and in addition $\sum \alpha(e_i)=\pi$ for edges identified to any surface edge under the gluing. That is, if edges $e_1, \dots, e_n$ are identified and all the $e_i$ are interior edges, then $\sum\alpha(e_i)=2\pi$. If two of the edges are surface edges, then $\sum\alpha(e_i)=\pi$.
\end{definition}
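To illustrate conditions \eqref{Itm:0toPi} and \eqref{Itm:VertexSum}, consider the angle assignment used for alternating diagrams below (\refprop{AngledChunkDecomp}), in which every edge has interior angle $\alpha(e)=\pi/2$. Since every vertex of the diagram graph is 4-valent, each boundary face meets exactly four non-boundary edges, and
\[
\sum_{i=1}^{4}\epsilon(e_i) \;=\; 4\left(\pi-\frac{\pi}{2}\right) \;=\; 2\pi,
\]
so condition \eqref{Itm:VertexSum} holds automatically; only condition \eqref{Itm:NormalDiskSum} requires an argument.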
Again our definition is weaker than that of Futer--Gu{\'e}ritaud: in item \refitm{NormalDiskSum}, they require the sum to be strictly greater than $2\pi$ unless the disk is parallel to a boundary face.
\begin{definition}\label{Def:CombinatorialArea}
Let $C$ be an angled chunk.
Let $(S,\partial S)$ be a normal surface in $(C,\partial C)$, and let $e_1, \dots, e_n$ be edges of the truncated chunk $C$ met by $\partial S$, listed with multiplicity. The \emph{combinatorial area} of $S$ is defined to be
\[ a(S) = \sum_{i=1}^n \epsilon(e_i) - 2\pi\chi(S). \]
Given a chunk decomposition of $M$ and a normal surface $(S',\partial S')\subset (M,\partial M)$, write $S'=\bigcup_{j=1}^m S_j$ where each $S_j$ is a normal surface embedded in a chunk. Define $a(S') = \sum_{j=1}^m a(S_j)$.
\end{definition}
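As a simple illustration, suppose every edge of $C$ has exterior angle $\pi/2$, as in the decompositions arising from alternating diagrams. A normal disk whose boundary runs through exactly four edges, such as a disk parallel to a boundary face, then has combinatorial area
\[
a(S) \;=\; 4\cdot\frac{\pi}{2} \;-\; 2\pi\cdot\chi(S) \;=\; 2\pi - 2\pi \;=\; 0,
\]
while a normal bigon, meeting exactly two edges, would have $a(S)=\pi-2\pi=-\pi<0$.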
\begin{proposition}\label{Prop:NonnegArea}
Let $S$ be a connected orientable normal surface in an angled chunk $C$. Then $a(S)\geq 0$. Moreover, if $a(S)=0$, then $S$ is either:
\begin{enumerate}
\item[(a)] a disk with $\sum \epsilon(e_i)=2\pi$,
\item[(b)] an annulus with $\partial S$ meeting no edges of $\Gamma$, hence the two components of $\partial S$ lie in non-contractible faces of $C$, or
\item[(c)] an incompressible torus disjoint from $\partial C$.
\end{enumerate}
\end{proposition}
\begin{proof}
By definition of combinatorial area, if $\chi(S)<0$, then $a(S)>0$. So we need only check cases in which $\chi(S)\geq 0$.
Note first that $S$ cannot be a sphere because $C$ is required to be irreducible.
If $S$ is a torus, then it does not meet $\partial C$ and must be incompressible in $C$. This is item (c).
Suppose now that $S$ is a disk. Then $\partial S$ meets $\partial C$,
and $\partial S$ does not lie completely on an exterior face because such faces are required to be incompressible in $C$. Then because $S$ is in normal form, \refitm{NormalDiskSum} of \refdef{AngledChunk} implies that the sum of exterior angles of edges meeting $\partial S$ is at least $2\pi$. Thus the combinatorial area of the disk is at least $0$. If the combinatorial area equals zero, then the sum of exterior angles meeting $\partial S$ must be exactly $2\pi$, as required.
Finally suppose $S$ is an annulus. Then $\chi(S)=0$, so the combinatorial area $a(S)\geq 0$. If it equals zero, then the sum of exterior angles of edges meeting $\partial S$ is also zero, hence $\partial S$ meets no edges. Let $\gamma_1$ and $\gamma_2$ denote the two components of $\partial S$. Then each $\gamma_i$ lies entirely in a single face. Because $\gamma_i$ cannot bound a disk in that face, the face is not contractible.
\end{proof}
\begin{proposition}[Gauss--Bonnet]\label{Prop:GaussBonnet}
Let $(S,\partial S)\subset(M,\partial M)$ be a surface in normal form with respect to an angled chunk decomposition of $M$. Then
\[ a(S) = -2\pi\chi(S).\]
Similarly, let $(S,\partial S)\subset (M{\backslash \backslash}\Sigma,\partial (M{\backslash \backslash}\Sigma))$
be a surface in normal form with respect to a bounded angle chunk decomposition of $M{\backslash \backslash}\Sigma$. Let $p$ denote the number of times $\partial S$ intersects a boundary edge adjacent to a surface face. Then
\[ a(S) = -2\pi\chi(S) + \frac{\pi}{2}\,p. \]
\end{proposition}
\begin{proof}
The proof is basically that of Futer and Gu{\'e}ritaud \cite[Prop.~2.11]{fg09}, except we need to consider an additional case for angled chunks, and surface faces for bounded angled chunks. We briefly work through their proof, and check that it holds in our more general setting.
As in \cite[Prop.~2.11]{fg09}, consider components of intersection $\{S_1, \dots, S_n\}$ of $S$ with chunks, and let $S'$ be obtained by gluing some of the $S_i$ along some of their edges, so $S'$ is a manifold with polygonal boundary. At a vertex on the boundary of $S'$, one or more of the $S_i$ meet, glued along faces of chunks. Define the interior angle $\alpha(v)$ of $S'$ at such a point $v$ to be the sum of the interior angles of the adjacent $S_i$, and define the exterior angle to be $\epsilon(v) = \pi-\alpha(v)$ (note $\epsilon(v)$ can be negative). We prove, by induction on the number of edges glued, that
\begin{equation}\label{Eqn:AngleSum}
a(S') = \sum a(S_{i_k}) = \sum_{v\in\partial S'}\epsilon(v) - 2\pi\chi(S').
\end{equation}
As a base case, if no edges are glued, then \refeqn{AngleSum} follows by \refdef{CombinatorialArea}.
Let $\nu$ be the number of vertices $v$ on $S'$, let $\theta$ be the sum of all interior angles along $\partial S'$, and let $\chi$ be the Euler characteristic of $S'$, so the right hand side of \refeqn{AngleSum} is $\nu\pi -\theta - 2\pi\chi$. Futer and Gu{\'e}ritaud work through several cases: two edges are glued with distinct vertices; two edges are glued that share a vertex; edges are glued to close a bigon; and a monogon component is glued. None of these moves change \refeqn{AngleSum}.
We have an additional case, namely when $\partial S_1'$ is glued to $\partial S_2'$ along simple closed curves, each contained in a single face of a chunk. In this case, $\nu$, $\theta$, and $\chi$ will be unchanged. Thus \refeqn{AngleSum} holds.
When $S$ is a normal surface in a (regular) angled chunk decomposition, let $S'=S$ in \refeqn{AngleSum}. Then all $\epsilon(v)$ come from boundary edges, and equal $\pi-(\pi/2+\pi/2)=0$; this comes from the fact that exterior angles on boundary edges are always $\pi/2$ in \refdef{AngledChunk}.
In the bounded angled chunk case, as above, on boundary edges meeting interior faces we have $\epsilon(v)=\pi-(\pi/2+\pi/2)=0$. On surface edges the sum of interior angles is $\pi$, hence $\epsilon(v)=\pi-\pi=0$. At each of the $p$ points where $\partial S$ meets a boundary edge adjacent to a surface face, only one angle of $\pi/2$ contributes, so $\epsilon(v)=\pi-\pi/2=\pi/2$.
\end{proof}
\begin{theorem}\label{Thm:IrredBdyIrred}
Let $(M,\partial M)$ be a compact orientable 3-manifold with an angled chunk decomposition.
Then $\partial M$ consists of exterior faces and components obtained by gluing boundary faces; those components coming from boundary faces are homeomorphic to tori. Finally, $M$ is irreducible and boundary irreducible.
\end{theorem}
\begin{proof}
If $M$ is reducible or boundary reducible, it contains an essential 2-sphere or disk. \refthm{NormalForm} implies it contains one in normal form. \refprop{GaussBonnet} implies such a surface has negative combinatorial area. This is impossible by \refprop{NonnegArea}.
Each component of $\partial M$ that does not consist of exterior faces is tiled by boundary faces. A normal disk parallel to a boundary face has combinatorial area $0$. These disks glue to give a surface of combinatorial area $0$ parallel to such a component of $\partial M$. By \refprop{GaussBonnet}, that component has Euler characteristic $0$. Since $M$ is orientable, each such component of $\partial M$ is a torus.
\end{proof}
\subsection{Chunk decomposition for generalisations of alternating links}
\begin{proposition}\label{Prop:AngledChunkDecomp}
Let $\pi(L)$ be a reduced alternating diagram of a link $L$ on $F$ in $Y$.
Suppose that the representativity satisfies $r(\pi(L),F)\geq 4$. Label each edge of the chunk decomposition of \refprop{AltChunkDecomp} with interior angle $\pi/2$ (and exterior angle $\pi/2$). Then the chunk decomposition is an angled chunk decomposition.
Suppose further that $\pi(L)$ is checkerboard colourable on $F$, so $\pi(L)$ is weakly generalised alternating on $F$. Let $\Sigma$ be one of the checkerboard surfaces associated to $\pi(L)$. Then $X{\backslash \backslash}\Sigma$ admits a bounded angled chunk decomposition, with the same chunks as in \refprop{AltChunkDecomp}, but with faces corresponding to $\Sigma$ (white or shaded) left unglued.
\end{proposition}
\begin{proof}
We check the conditions of the definition of an angled chunk, \refdef{AngledChunk}.
The first two conditions are easy: $\pi/2\in(0,\pi)$, and each ideal vertex of a chunk is 4-valent, so the sum of the exterior angles of the edges meeting a boundary face corresponding to that vertex is $4\cdot \pi/2 = 2\pi$, as required. For the third condition, we need to show that if a curve $\gamma$ bounds a normal disk $D$ in the truncated chunk, meeting edges $e_1, \dots, e_n$, then $\sum_i \epsilon(e_i)\geq2\pi$.
Suppose first that $D$ is not a compressing disk for $F$, so it is parallel into $F$. Then the boundary $\gamma$ of $D$ must meet an even number of edges.
If $\gamma$ meets zero edges, then by \refitm{BdryNoDisk} of the definition of normal, \refdef{NormalSurface}, it lies in a face that is not simply connected, and thus $\gamma$ bounds a disk in $F$ that contains edges and crossings of $\pi(L)$, with edges and crossings exterior to the disk as well. Isotope $\gamma$ slightly in this disk so that it crosses exactly one edge of $\pi(L)$ twice. Then we have a disk in $F$ whose boundary meets only two edges of $\pi(L)$ but with crossings contained within (and without) the disk. This contradicts the fact that $\pi(L)$ is weakly prime.
If $\gamma$ meets only two edges, then there are two cases. First, if neither edge is a boundary edge, then $\gamma$ defines a curve on $F$ bounding a disk in $F$ meeting only two edges of $\pi(L)$. Because the diagram is weakly prime, there must be no crossings within that disk, or if $F$ is a 2-sphere, there must be a disk on the opposite side containing no crossings. But then its boundary violates condition \refitm{NoArcEndptsEdge} of the definition of normal. In the second case, one of the edges and hence both edges meeting $\gamma$ are boundary edges. Then $\gamma$ defines a curve on $F$ meeting $\pi(L)$ exactly once in a single crossing. Push the disk $D$ slightly off this crossing, so $\partial D$ meets $\pi(L)$ in exactly two edges with a single crossing lying inside $D$. If $F$ is not a 2-sphere, this immediately contradicts the fact that the diagram is weakly prime. If $F$ is a 2-sphere, because $\pi(L)$ must have more than one crossing, again this contradicts the fact that the diagram is weakly prime.
Thus $\gamma$ must meet at least four edges, and $\sum_i\epsilon(e_i) \geq 4\cdot \pi/2 = 2\pi$.
Now suppose that $D$ is a compressing disk for $F$. Then again $\gamma$ determines a curve on $F$ meeting $\pi(L)$, bounding a compressing disk for $F$. If $\gamma$ meets no boundary faces, the fact that $r(\pi(L),F)\geq 4$ implies that $\gamma$ meets at least four edges of the chunk, so $\sum_i\epsilon(e_i)\geq 4\cdot \pi/2 = 2\pi$. If $\gamma$ meets a boundary face, then it meets two boundary edges on that boundary face. Isotope $\partial D$ through the boundary face and slightly outside; let $\beta$ be the result after isotopy. Note the isotopy replaces the two intersections of $\gamma$ with boundary edges by one or two intersections of $\beta$ with edges whose endpoints lie on the boundary face, so $\beta$ meets at most as many edges as $\gamma$. But then $\beta$ defines a curve on $F$ meeting $\pi(L)$, bounding a compressing disk for $F$. Again $r(\pi(L),F)\geq 4$ implies $\beta$ meets at least four edges. It follows that $\gamma$ meets at least four edges. So again $\sum\epsilon(e_i)\geq 2\pi$.
Finally, for a chunk decomposition of $Y{\smallsetminus} L$, because edges are glued in fours, the sum of all interior angles glued to an edge class is $4\cdot \pi/2 = 2\pi$, as required.
For the checkerboard colourable case, with white or shaded faces left unglued, each non-boundary edge is a surface edge. The sum of interior angles at each such edge is $\pi/2+\pi/2=\pi$, as required for a bounded angled chunk decomposition.
\end{proof}
\begin{corollary}\label{Cor:IrredBdryIrred}
If $\pi(L)$ is a reduced alternating diagram of a link $L$ on $F$ in $Y$, and $r(\pi(L),F)\geq 4$, then $Y{\smallsetminus} L$ is irreducible and boundary irreducible.\qed
\end{corollary}
\refcor{IrredBdryIrred} shows that weakly generalised alternating links in $S^3$ are nontrivial and nonsplit, giving a different proof of this fact than in \cite{how15t}.
\begin{corollary}\label{Cor:NoNormalBigons}
If $\pi(L)$ is a reduced alternating diagram of a link $L$ on $F$ in $Y$ with $r(\pi(L),F)\geq 4$, then the chunks in the decomposition of $Y{\smallsetminus} L$ contain no normal bigons, i.e.\ no normal disks meeting exactly two interior edges.
\end{corollary}
\begin{proof}
A normal disk meeting exactly two interior edges would have $\sum \epsilon(e_i) = \pi/2+\pi/2=\pi < 2\pi$, contradicting the definition of an angled chunk decomposition.
\end{proof}
\begin{definition}\label{Def:Pi1Essential}
A properly embedded surface $S$ in a 3-manifold $M$ is \emph{$\pi_1$-essential} if
\begin{enumerate}
\item $\pi_1(S)\to\pi_1(M)$ is injective,
\item $\pi_1(S,\partial S)\to\pi_1(M,\partial M)$ is injective, and
\item $S$ is not parallel into $\partial M$.
\end{enumerate}
\end{definition}
When $Y=S^3$, Howie and Rubinstein proved that the checkerboard surfaces of a weakly generalised alternating link in $S^3$ are essential \cite{hr16}. Using the machinery of angled chunk decompositions, we can extend this result to weakly generalised alternating links in any compact, orientable, irreducible 3-manifold $Y$.
\begin{theorem}\label{Thm:hress}
Let $\pi(L)$ be a weakly generalised alternating diagram of a link $L$ on a generalised projection surface $F$ in $Y$.
Then both checkerboard surfaces associated to $\pi(L)$ are $\pi_1$-essential in $X(L)$.
\end{theorem}
Ozawa~\cite{oza06} proved a similar theorem for generalised alternating projections in $S^3$. Similarly, Ozawa~\cite{oza11} and Ozawa and Rubinstein~\cite{or12} showed that related classes of links admit essential surfaces, some of which can be viewed as checkerboard surfaces for a projection of the link onto a generalised projection surface (e.g.\ the Turaev surface; see also \cite{fkp13}). The original proof that checkerboard surfaces are essential, for reduced prime alternating planar projections in $S^3$, was due to Aumann in 1956~\cite{aum56}.
\begin{proof}[Proof of \refthm{hress}]
Let $\Sigma$ be a checkerboard surface, and recall that $X$ denotes the link exterior, $X=Y{\smallsetminus} N(L)$. Suppose for contradiction that $\pi_1(\Sigma)\to\pi_1(X)$ is not injective. Then $\pi_1(\widetilde{\Sigma})\to\pi_1(X)$ is also not injective.
Because $\widetilde{\Sigma} = \partial N(\Sigma)$ is 2-sided, by the loop theorem there exists a properly embedded essential disk $D$ in $X{\backslash \backslash}\Sigma$ with boundary on $\widetilde{\Sigma}$. We may put $D$ into normal form with respect to the bounded chunk decomposition of $X{\backslash \backslash} \Sigma$. Note $\partial D$ does not meet the parabolic locus $P$ of $X{\backslash \backslash} \Sigma$, which consists of the boundary components $\partial N(L)\cap\partial(X{\backslash \backslash} \Sigma)$. Thus \refprop{GaussBonnet} implies that $a(D) = -2\pi$. On the other hand, any normal component of $D$ in a chunk has combinatorial area at least $0$, by definition of combinatorial area, \refdef{CombinatorialArea}, and definition of an angled chunk, \refdef{AngledChunk}. This is a contradiction.
Now suppose $\pi_1(\Sigma,\partial\Sigma)\to\pi_1(X, \partial N(L))$ is not injective. Then again the loop theorem implies there exists an embedded essential disk $E$ in $X{\backslash \backslash} \Sigma$ with $\partial E$ consisting of an arc on $\widetilde{\Sigma}$ and an arc on $P$. Put $E$ into normal form with respect to the chunk decomposition. \refprop{GaussBonnet} implies $a(E) = -2\pi\chi(E) + \pi = -\pi$. Again this is a contradiction.
\end{proof}
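In summary, writing $P$ for the parabolic locus, the two area computations in the proof, via \refprop{GaussBonnet} with each boundary arc on $P$ contributing $+\pi$ as above, are
\begin{align*}
a(D) &= -2\pi\chi(D) = -2\pi, && \partial D\cap P = \emptyset,\\
a(E) &= -2\pi\chi(E) + \pi = -\pi, && \partial E\cap P \text{ a single arc}.
\end{align*}
Both are negative, whereas every normal disk in an angled chunk decomposition has combinatorial area at least $0$; this yields the contradiction in both cases.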
\section{Detecting hyperbolicity}\label{Sec:Hyperbolic}
Our next application of the chunk decomposition is to determine conditions that guarantee that a reduced alternating link on a generalised projection surface is hyperbolic. Thurston~\cite{thu82} proved that a compact orientable 3-manifold with nonempty boundary has hyperbolic interior whenever it is irreducible, boundary irreducible, atoroidal, and anannular. Using this result, Futer and Gu{\'e}ritaud showed that a 3-manifold with an angled block decomposition is hyperbolic~\cite{fg09}. But when we allow a more general angled chunk decomposition, the manifold may contain essential tori and annuli. The main result of this section, \refthm{Hyperbolic}, restricts these.
\begin{definition}\label{Def:BdyAnannular}
The manifold $Y{\smallsetminus} N(F)$ is \emph{$\partial$-annular} if it contains a properly embedded essential annulus with both boundary components in $\partial Y$. Otherwise, it is \emph{$\partial$-anannular}.
\end{definition}
\begin{theorem}\label{Thm:Hyperbolic}
Let $\pi(L)$ be a weakly generalised alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$.
Suppose $F$ has genus at least one, all regions of $F{\smallsetminus}\pi(L)$ are disks, and $\hat{r}(\pi(L),F)>4$. Then:
\begin{enumerate}
\item $Y{\smallsetminus} N(L)$ is toroidal if and only if $Y{\smallsetminus} N(F)$ is toroidal.
\item $Y{\smallsetminus} N(L)$ is annular if and only if $Y{\smallsetminus} N(F)$ is $\partial$-annular.
\item Otherwise, the interior of $Y{\smallsetminus} N(L)$ admits a hyperbolic structure.
\end{enumerate}
\end{theorem}
We make a few remarks on the theorem.
Hayashi~\cite{hay95} showed that if $F$ is a connected generalised projection surface in a closed $3$-manifold $Y$, the regions of $F{\smallsetminus}\pi(L)$ are disks, $\pi(L)$ is weakly prime, and in addition, if $e(\pi(L),F)>4$, then $Y{\smallsetminus} L$ contains no essential tori.
\refthm{Hyperbolic} generalises Hayashi's theorem even in the case that $F$ is a Heegaard surface for $S^3$, for if $\hat{r}(\pi(L),F)>4$, the edge-representativity could still be $2$ or $4$.
Also, we cannot expect to do better than \refthm{Hyperbolic}, for Hayashi~\cite{hay95} gives an example of an 8-component link in $S^3$ that is reduced alternating on a Heegaard torus $F$, with all regions of $F{\smallsetminus}\pi(L)$ disks and $r(\pi(L),F)=\hat{r}(\pi(L),F)=4$, such that the exterior of $L$ admits an embedded essential torus. However, in the results below, we do give restrictions on the forms of reduced alternating links on $F$ that are annular or toroidal.
\begin{example}\label{Exa:T2xI}
Consider the case $Y=T^2\times[-1,1]$, a thickened torus, with generalised projection surface $F=T^2\times\{0\}$. Because $F$ has no compressing disks in $Y$, the representativity of any alternating link on $F$ is infinite. Thus provided an alternating diagram on $F$ has at least one crossing, is weakly prime, and is checkerboard colourable, it is weakly generalised alternating. If all regions of the diagram are disks, then \refthm{Hyperbolic} immediately implies that the link is hyperbolic.
Recently, others have studied the hyperbolic geometry of restricted classes of alternating links on $T^2\times\{0\}$ in $T^2\times[-1,1]$, for example links coming from uniform tilings~\cite{acm17, ckp} and links with bigon-free diagrams~\cite{ckp}. \refthm{Hyperbolic} immediately implies the existence of a hyperbolic structure on the complements of such links. However, the results in \cite{acm17, ckp} give more details on their geometry.
Similarly, \refthm{Hyperbolic} implies that a link with a weakly prime, alternating, checkerboard colourable diagram on a surface $S\times\{0\}$ in $S\times[-1,1]$ is hyperbolic, for more general surfaces $S$. The hyperbolicity of such links has also been considered by Adams \emph{et al}~\cite{Adams:ThickenedSfces}.
\end{example}
\subsection{Essential annuli}
We first consider essential annuli. Suppose $A$ is an essential annulus embedded in the complement of a link that is reduced alternating on a generalised projection surface $F$, as in \refdef{AltKnots}. Then \refthm{NormalForm} and \refprop{GaussBonnet} imply that $A$ can be put into normal form, with $a(A)=0$. Then \refprop{NonnegArea} implies it is decomposed into disks with area $0$. If $A$ meets $\partial N(L)$, it must meet boundary faces and edges. Recall that all boundary faces are squares.
\begin{lemma}\label{Lem:NormalSquare}
Let $\pi(L)$ be a reduced alternating diagram of a link $L$ on a generalised projection surface $F$ in $Y$, as in \refdef{AltKnots}. Suppose further that the representativity satisfies $r(\pi(L),F)\geq 4$.
In the angled chunk decomposition of $X(L)$, a normal disk with combinatorial area zero that meets a boundary face has one of three forms:
\begin{enumerate}
\item either it meets a single boundary face and two non-boundary edges, and runs through opposite boundary edges of the boundary face, or
\item it meets two boundary faces, intersecting two adjacent boundary edges in each, and encircles a single non-boundary edge of the chunk, or
\item it meets two boundary faces in opposite boundary edges of each boundary face.
\end{enumerate}
If the link is checkerboard colourable, in the second form it runs through faces of opposite colour, and in the third it runs through faces of the same colour.
\end{lemma}
\begin{proof}
Because all edges have exterior angle $\pi/2$, a disk of combinatorial area zero meets exactly four edges. Hence if it meets a boundary face, it either meets four boundary edges or two boundary edges and two interior edges.
There are several cases to consider. Suppose first that the disk meets two boundary edges and two non-boundary edges, with the two boundary edges adjacent on the boundary face. In this case, slide the disk slightly off the boundary face, so that its two intersections with boundary edges are replaced by an intersection with a single non-boundary edge. This gives a disk whose boundary meets three edges of the chunk, that is, a curve on $F$ meeting $\pi(L)$ three times. Because $r(\pi(L),F)\geq 4$, this curve cannot bound a compressing disk, so the disk is parallel into the surface $F$. But the diagram consists of immersed closed curves on the surface, which cannot meet the boundary of a disk in $F$ an odd number of times, a contradiction. Thus in the first case, the disk meets opposite boundary edges of a boundary face.
Now suppose that the disk meets exactly two boundary faces. If it meets both faces in opposite boundary edges, we are in the third case. If it meets one face in opposite boundary edges and one in adjacent boundary edges, again push the disk slightly off the boundary faces to obtain a curve bounding a disk that meets three edges of the diagram. As before, this gives a contradiction. So the only other possibility is that the disk meets two boundary faces and runs through adjacent boundary edges in both. This is the second case.
In the second case, we may slide the boundary of the normal disk slightly off the boundary face in a way that minimises intersections with non-boundary edges. Thus it will meet non-boundary edges exactly twice; this gives a curve $\gamma$ on $F$ meeting the link diagram exactly twice. The representativity condition implies that $\gamma$ bounds a disk parallel into $F$. Then the fact that the diagram is weakly prime implies that it has no crossings in the interior of the disk, so the curve encircles a single edge as claimed.
\end{proof}
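The edge count at the start of the proof can be made explicit: with every edge carrying exterior angle $\pi/2$, a normal disk $D$ of combinatorial area zero satisfies
\[
\sum_i \epsilon(e_i) \;=\; 2\pi \;=\; 4\cdot\frac{\pi}{2},
\]
so $D$ meets exactly four edges. The three forms of \reflem{NormalSquare} correspond to the two ways of distributing these four edges: two boundary edges on a single boundary face together with two interior edges (the first form), or four boundary edges spread over two boundary faces (the second and third forms).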
As in the previous lemma, we will prove many results by considering the boundaries of normal disks on the chunk decomposition. The combinatorics of the gluing of faces in \refprop{AltChunkDecomp} and the fact that the chunk decomposition comes from an alternating link will allow us to obtain restrictions on the diagram of our original link.
Next we determine the form of a normal annulus when one of its normal disks has the first form of \reflem{NormalSquare} and is parallel into $F$.
\begin{lemma}\label{Lem:Annulus}
Let $\pi(L)$ be a reduced alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$, and suppose $r(\pi(L),F)\geq 4$.
Suppose $A$ is a normal annulus in the angled chunk decomposition of the link exterior $X:=Y{\smallsetminus} N(L)$.
Suppose $A$ is made up of normal squares such that one square $A_i\subset A$ has boundary meeting exactly one boundary face and two non-boundary edges, and $A_i$ is parallel into the surface $F$. Then
\begin{itemize}
\item two components of $L$ form a 2-component link with a checkerboard colourable diagram which consists of a string of bigons arranged end to end on some component $F_j$ of $F$,
\item a checkerboard surface $\Sigma$ associated to these components of $\pi(L)$ is an annulus, and
\item a sub-annulus of $A$ has one boundary component running through the core of $\Sigma$, and the other parallel to $\partial \Sigma$ on $\partial N(L)$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $A_i$ be as in the statement of the lemma, so $\partial A_i$ bounds a disk on the surface of a chunk and meets one boundary face and two non-boundary edges. By \reflem{NormalSquare}, it must intersect opposite boundary edges of the boundary face that it meets.
Now $A_i$ is glued to squares $A_{i+1}$ and $A_{i-1}$ along its sides adjacent to the boundary face. Say $A_i$ is glued to $A_{i-1}$ in a face that we denote $W_{i-1}$, and $A_i$ is glued to $A_{i+1}$ in a face denoted $W_{i+1}$. The boundary $\partial A_i$ runs through one more non-boundary face, which we denote by $B_i$.
By \refprop{AltChunkDecomp}, the gluing of $A_i$ to $A_{i-1}$ and $A_{i+1}$ will be by rotation in the faces $W_{i-1}$ and $W_{i+1}$, respectively. Since $\partial A_i$ runs through opposite edges of a boundary face, the faces $W_{i-1}$ and $W_{i+1}$ are opposite across a crossing of $\pi(L)$. Thus the gluing must be by rotation in the same direction (i.e.\ following the same orientation) in these two faces.
Superimpose $\partial A_{i-1}$ and $\partial A_{i+1}$ onto the same chunk as $A_i$. Arcs are as shown in \reffig{GluingSquares}, left. In that figure, faces $W_{i-1}$ and $W_{i+1}$ are coloured white, and faces adjacent to these are shaded. The colouring is for ease of visualisation only; the argument applies even if the link is not checkerboard colourable.
\begin{figure}
\caption{Left: arcs of $\partial A_{i+1}$ and $\partial A_{i-1}$ superimposed on the chunk containing $\partial A_i$. Middle: the curve $\gamma$. Right: the position of $\partial A_{i-1}$ and the disk $E$.}
\label{Fig:GluingSquares}
\end{figure}
Two additional comments on the figure: Because $A_i$ is normal, $\partial A_i$ cannot run from a boundary face to an adjacent edge. This forces arcs of $\partial A_{i-1}$ and $\partial A_{i+1}$ to intersect $\partial A_i$ in faces $W_{i-1}$ and $W_{i+1}$, respectively, as shown. Second, the gluing map is a homeomorphism of these faces. Since $\partial A_i$ bounds a disk in $F$, the parts of the white face bounded by the arcs of $A_{i-1}$ and $A_{i+1}$ are simply connected.
Note $\partial A_{i-1}$ has points inside and outside the disk bounded by $\partial A_i$, so it must cross $\partial A_i$ at least twice. One intersection of $\partial A_{i-1}$ with $\partial A_i$ is known: it lies in $W_{i-1}$. We claim that $\partial A_{i-1}$ cannot cross $\partial A_i$ in the face $B_i$. For suppose by way of contradiction that $\partial A_{i-1}$ does intersect $\partial A_i$ in $B_i$. Then $\partial A_{i-1}$ must run from $W_{i-1}$ across an interior edge into the face $B_i$, and similarly for $\partial A_i$. Thus the face $B_i$ lies on opposite sides of a crossing. Draw an arc in $B_i$ from one side of the crossing to the other, and connect it to an arc in $W_{i-1}$ to form a closed curve $\gamma$; see \reffig{GluingSquares}, middle.
Note since $\gamma$ is a simple closed curve lying in the disk in $F$ with boundary $\partial A_i$, $\gamma$ bounds a disk. But $\gamma$ gives a curve meeting the diagram exactly twice with a single crossing inside the disk; because there are also crossings outside the disk, this contradicts the fact that $\pi(L)$ is weakly prime.
Since $A_{i-1}$ and $A_{i+1}$ lie in the same chunk of the decomposition, either they coincide or they are disjoint. If they coincide, and one of $W_{i-1}$ or $W_{i+1}$ is simply-connected, then $A_{i-1}$ contradicts the definition of normal, since an arc of $\partial A_{i-1}$ runs from an interior edge to an adjacent boundary edge. If neither region is simply-connected, then two components of $\pi(L)$ form a chain of two bigons on some component of $F$, as required. So now assume $A_{i-1}$ and $A_{i+1}$ are distinct and disjoint, so that the images of $\partial A_{i-1}$ and $\partial A_{i+1}$ superimposed on the chunk are also disjoint.
Since $\partial A_{i-1}$ intersects $\partial A_i$ in the face $W_{i+1}$ but $\partial A_{i-1}$ is disjoint from $\partial A_{i+1}$ in this face, it follows that all intersections of $\partial A_{i-1}$ with $\partial A_i$ must occur within the disk in $W_{i+1}$ bounded by $\partial A_i$ outside the arc $\partial A_{i+1}$. Because this is a disk, we may isotope $\partial A_{i-1}$ in the disk to meet $\partial A_i$ exactly once.
Thus $\partial A_{i-1}$ must be as shown in \reffig{GluingSquares}, right.
Now consider the dotted line lying within $\partial A_{i-1}$ in \reffig{GluingSquares}, right. Because $\partial A_i$ bounds a disk on $F$, and because the portion of the face $W_{i-1}$ bounded by $\partial A_{i-1}$ is also a disk, we may obtain a new disk $E$ by isotoping $A_{i-1}$ through these two disks and slightly past the interior edges and boundary faces. The boundary of the disk $E$ is shown in \reffig{GluingSquares}, right. Because $r(\pi(L),F)\geq 4$, $E$ must be parallel into $F$. It follows that $A_{i-1}$ is also a disk parallel into $F$. Finally, because $\pi(L)$ is weakly prime, $E\cap\pi(L)$ must be a single arc with no crossings. Let $D_{i-1}$ and $D_i$ be the subdisks of $F$ bounded by $\partial A_{i-1}$ and $\partial A_i$ respectively, which are parallel in $C$ to $A_{i-1}$ and $A_i$ respectively. It follows that $D_{i-1}{\smallsetminus} D_i$ bounds a single bigon region.
Repeat the above argument with $A_{i-1}$ replacing $A_i$, and $A_{i-2}$ replacing $A_{i-1}$. It follows that $\partial A_{i-2}$ bounds a subdisk $D_{i-2}$ of $F$ parallel to $A_{i-2}$, with $D_{i-2}{\smallsetminus} D_{i-1}$ bounding a single bigon region. Note also that $\partial A_{i-2}$ must run through the bigon face bounded by $D_{i-1}{\smallsetminus} D_i$, as it is disjoint from $A_i$. Repeat again, and so on. It follows that the diagram $\pi(L)$ contains a string of bigons arranged end to end, and the squares $A_j$ each encircle one bigon. The bigons can be shaded, forming a checkerboard colouring. An arc of each $\partial A_j$ lies on a shaded bigon.
Now boundary faces are arranged in a circle, with bigons between them. Note in the direction of the circle, the boundary faces alternate meeting disks $\{A_j\}$ in one chunk then the other (shown red and blue in \reffig{GluingSquares}). If there are an odd number of bigons, then the disks $\{A_j\}$ overlap in a chunk, contradicting the fact that $A$ is embedded. So there are an even number of bigons. Then this portion of the diagram is a two component link, and the shaded surface $\Sigma$ is an annulus between link components, with arcs of $\partial A_j$ in the shaded faces gluing to form the core of $\Sigma$. Finally, note the component of $\partial A$ on the boundary faces never meets the shaded annulus. Hence it runs parallel to the boundary of the annulus $\Sigma$ on $\partial N(L)$.
\end{proof}
\begin{theorem}\label{Thm:AnAnnularLink}
Let $\pi(L)$ be a reduced alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$. Suppose that $r(\pi(L),F)\geq 4$ and $\hat{r}(\pi(L),F)>4$. If the link exterior $X(L)$ contains an essential annulus $A$ with at least one boundary component on $\partial N(L)$, then $\pi(L)$ contains a string of bigons on $F$. If $S$ denotes the annulus or M\"obius band formed between the string of bigons, then a component of $\partial A$ on $\partial N(L)$ is parallel to the boundary slope of a component of $\partial S$ on $\partial N(L)$.
\end{theorem}
\begin{proof}
Put $A$ into normal form with respect to the angled chunk decomposition of $X(L)$. \refprop{GaussBonnet} implies that $A$ meets chunks in components with combinatorial area zero. Because $\partial A$ meets $\partial N(L)$, one such component meets a boundary face.
Then \refprop{NonnegArea} implies that the chunk decomposition must divide $A$ into disks. It follows that the other boundary component of $A$ cannot lie on an exterior face and so it must also lie on $\partial N(L)$.
Let $A_i$ be a normal disk in $A$. \reflem{NormalSquare} implies it has one of three forms.
If $A_i$ has the first form, then it is glued to another disk $A_{i+1}$ of the first form. Because $\hat{r}(\pi(L),F)>4$, one of $A_i$ or $A_{i+1}$ is not a compression disk for $F$. Thus it is parallel into $F$. Then by \reflem{Annulus}, the diagram $\pi(L)$ is as claimed.
However, no such annulus $A$ actually exists. In the proof of \reflem{Annulus}, we only glued up the normal squares along some of their edges. One edge of $A_i$ lies in a bigon face $B$. Glue this edge to some normal square $A'_i$. Since $A'_i$ is disjoint from $A_{i-1}$ and $A_{i+1}$, it follows that $\partial A'_i$ lies inside the subdisk of $F$ bounded by either $\partial A_{i-1}$ or $\partial A_{i+1}$. But then the only possibility which allows $A'_i$ to be a normal square is if $\partial A'_i$ meets only interior edges. Hence it is impossible to glue up normal squares to form a properly embedded annulus when one such square has the first form, since we will never arrive at the other boundary component of $A$.
So suppose $\partial A_i$ meets two boundary faces.
If all normal disks of $A$ are of the second form of \reflem{NormalSquare}, they encircle a single crossing arc, and $A$ is not essential.
If all normal disks of $A$ are of the third form, then $\hat{r}(\pi(L),F)>4$ implies one, say $A_i$, is not a compressing disk so is parallel into $F$. Superimpose three adjacent squares $\partial A_i$, $\partial A_{i+1}$, and $\partial A_{i-1}$ on the boundary of the same chunk. There are two cases: $\partial A_i$ and $\partial A_{i-1}$ may be disjoint, or they may intersect.
If all normal disks are of the third form and $\partial A_i$ and $\partial A_{i-1}$ are disjoint, then the fact that the diagram is weakly prime implies $\partial A_i$ must bound a bigon face (as in the proof of \reflem{Annulus}). But then similar arguments, using weakly prime and $r(\pi(L),F)\geq 4$, show $\partial A_{i-1}$ and $\partial A_{i+1}$ must also bound bigons. Repeating for these disks, it follows that all $\partial A_j$ bound bigons, and $\pi(L)$ is a string of bigons. Since $\partial A_i$ avoids the bigon faces, the boundary components of $\partial A$ run parallel to the surface made up of the bigons. If there are an odd number of bigons, each bigon is encircled by $\partial A_j$ for normal disks on both sides of $F$, and the boundary of $A$ runs parallel to the boundary of a regular neighbourhood of the M\"obius band made up of these bigons. If there are an even number, then the bigons form an annulus, with this portion of $\pi(L)$ forming a two component link, and $\partial A$ running along the link parallel to the annulus. Either case gives the desired result.
Suppose that all normal disks are of the third form and $\partial A_i$ and $\partial A_{i-1}$ intersect. Since either $\partial A_{i-1}$ and $\partial A_{i+1}$ are disjoint or $A_{i-1}$ and $A_{i+1}$ coincide, the weakly prime and representativity conditions show that $\partial A_i$ bounds two bigon faces. If $A_{i-1}$ and $A_{i+1}$ coincide, then $\pi(L)$ contains a string of exactly two bigons. If not, using similar ideas to the proof of \reflem{Annulus}, it follows that $\partial A_{i-1}$ and $\partial A_{i+1}$ also bound two bigon faces, and so does each $\partial A_j$. Since $\partial A_{j-1}$ and $\partial A_{j+1}$ do not intersect for any $j$, there must be an even number of bigons, and $\pi(L)$ forms a two component link, with $\partial A$ running along the link parallel to the annulus formed by the bigons, as required.
So suppose there are normal disks of $A$ of both the second and third forms. There must be a disk $S_0$ of the third form glued to one $S_1$ of the second form. Superimpose the boundaries of $S_0$ and $S_1$ on $F$. The rotation of the gluing map on chunks implies that the boundary faces met by $\partial S_0$ are adjacent to a single edge. Since $r(\pi(L),F)\geq 4$ and the diagram is weakly prime, $\partial S_0$ must bound a single bigon of $\pi(L)$ as in \reffig{Form23}. (The full argument is nearly identical to the argument that $\partial A_{i-1}$ bounds a bigon in the proof of \reflem{Annulus}.)
\begin{figure}
\caption{Shows how disks of the second and third form must lie in $F$. (Thin black arrows show the direction of the rotation for the gluing map from the blue side to the red.)}
\label{Fig:Form23}
\end{figure}
Now consider $S_2$ glued to $S_1$ along its edge in the other face met by $S_1$. The disk $S_2$ is disjoint from $S_0$, so $\partial S_2$ cannot run through opposite sides of the boundary face it shares with $S_0$. Thus $S_2$ must be of the second form, encircling a single interior edge just as $S_1$ does. Finally, $S_2$ glues to some $S_3$ along an edge in its other face. We claim $S_3$ cannot be of the second form, encircling an interior edge, since if it did, it would glue to some $S_4$ disjoint from $S_0$, requiring $S_4$ to be of the second form. Then $S_1$, $S_2$, $S_3$, and $S_4$ would all encircle the same crossing arc in the diagram, but $S_4$ would not glue to $S_1$. Hence additional normal squares would spiral around this arc, never closing up to form the annulus $A$. This is impossible.
So $S_3$ is of the third form, and when superimposed on $F$, $\partial S_3$ is parallel to $\partial S_0$. Recall, however, that $S_0$ and $S_3$ lie in different chunks.
\begin{figure}
\caption{An isotopy replaces $S_0, S_1, S_2, S_3$ with $S_0$ glued to $S_3$. Shown left in three dimensions, right in two.}
\label{Fig:Isotopy3D}
\end{figure}
We can isotope $A$, removing squares $S_1$ and $S_2$, replacing $S_0$ and $S_3$ with normal squares of the second form. The isotopy is shown in three dimensions on the left of \reffig{Isotopy3D}; the image of the boundaries of the normal squares in the two chunks under the isotopy is shown on the right.
The isotopy strictly reduces the number of normal squares of $A$ of the third form. Thus repeating a finite number of times, we find that $A$ has no normal squares of the third form, and we are in a previous case.
\end{proof}
\begin{corollary}\label{Cor:PrimeLink}
Let $\pi(L)$ be a reduced alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$. Suppose that $r(\pi(L),F)\geq 4$ and $\hat{r}(\pi(L),F)>4$. Then the link is prime. That is, the exterior $X(L)$ contains no essential meridional annulus.
\end{corollary}
\begin{proof}
By \refthm{AnAnnularLink}, any essential annulus has slope parallel to the boundary slope of an annulus or M\"obius band formed between a string of bigons; this is not a meridian.
\end{proof}
\subsection{Toroidal links}
Now we consider essential tori in link complements. In contrast to the case of classical alternating links in $S^3$, essential tori do appear in our more general setting: Hayashi gives an example~\cite{hay95}.
However, we can rule out essential tori under stronger hypotheses. The main results of this subsection are \refprop{Toroidal} and \refprop{HypToroidal}.
\begin{definition}\label{Def:MeridCompressible}
Let $Y$ be a compact orientable irreducible 3-manifold, and $L$ a link in $Y$. A closed incompressible surface $S$ embedded in the link exterior $X$ is \emph{meridionally compressible} if there is a disk $D$ embedded in $Y$ such that $D\cap S=\partial D$ and the interior of $D$ intersects $L$ exactly once transversely. Otherwise $S$ is \emph{meridionally incompressible}.
\end{definition}
If an incompressible torus $T$ is parallel to a component of $L$ in $Y$, then $T$ is meridionally compressible. Hence to show that a torus $T$ is essential in $X$, it is sufficient to show that $T$ is incompressible and meridionally incompressible.
The following is proved using an argument due to Menasco in the alternating case~\cite{men84}.
\begin{lemma}\label{Lem:Menasco}
Suppose $S$ is a closed normal surface in an angled chunk decomposition of a link $L$ that is reduced alternating on a generalised projection surface $F$ in $Y$ with $r(\pi(L),F)\geq 4$. Suppose further that one of the normal components of $S$ is a disk that is parallel into $F$. Then $S$ is meridionally compressible, and a meridional compressing disk meets $L$ at a crossing.
\end{lemma}
\begin{proof}
Let $S_i$ be a normal disk of $S$ that is parallel into $F$. Then $\partial S_i$ bounds a disk in $F$, and there is an innermost normal disk $S_j$ of $S$ contained in the ball between $S_i$ and $F$ (using irreducibility of $Y$). The normal disk $S_j$ meets (interior) edges of the chunk decomposition, and these correspond to $\partial S_j$ running adjacent to over-crossings of $\pi(L)$ on the side of $F$ containing the chunk.
In the case that $\pi(L)$ is checkerboard colourable, the boundary components of any face meeting $\pi(L)$ will be alternating under--over in a manner consistent with the orientation on a face. Then the curve $\partial S_j$ must enter and exit the face by running adjacent to over-crossings on alternating sides; see \reffig{MenascoArg}, left.
Crossing arcs opposite over-crossings are glued, hence $S$ intersects the opposite crossing arc as shown. Since $S_j$ is innermost, one of the arcs of $S\cap \partial C$ opposite a crossing met by $\partial S_j$ must be part of $\partial S_j$.
Then $S$ encircles a meridian of $L$ at that crossing; see
\reffig{MenascoArg}, right.
\begin{figure}
\caption{On the left, $\partial S_j$ runs adjacent to crossings on opposite sides. The gluing of edges implies there are components of $S\cap \partial C$ meeting the opposite crossing arc. Because $S_j$ is innermost, one of those must be on $\partial S_j$, as shown on the right, which implies $S$ is meridionally compressible.}
\label{Fig:MenascoArg}
\end{figure}
Even in the case that $\pi(L)$ is not checkerboard colourable, we claim that $\partial S_j$ must enter and exit a face in consistently alternating boundary components, meaning that $\partial S_j$ enters and exits the face adjacent to crossings on opposite sides. Provided we can show this, the same argument as above will apply. If this does not hold, then $\partial S_j$ must enter and exit each face in distinct boundary components, and the boundary components are all inconsistently oriented. But $\partial S_j$ bounds a disk in $F$, so it meets each boundary component of each region an even number of times. There must be some outermost arc of intersection of $\partial S_j$ with a boundary component of a face of the chunk. For this arc, $\partial S_j$ must enter and exit the same boundary component of the same region, hence it enters and exits the face in a consistently alternating boundary component.
\end{proof}
\begin{proposition}\label{Prop:Toroidal}
Let $\pi(L)$ be a reduced alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$.
Suppose further that $r(\pi(L),F)\geq 4$ and $\hat{r}(\pi(L),F)>4$.
If $Y{\smallsetminus} N(F)$ is atoroidal but $X:=X(L)$ is toroidal, then any essential torus in $X$ is divided into normal annuli in the angled chunk decomposition of $X$, with each boundary component of a normal annulus completely contained in a single face of the chunk decomposition.
\end{proposition}
\begin{proof}
Suppose $T$ is an incompressible torus in $X$. Put $T$ into normal form with respect to the chunk decomposition. \refprop{GaussBonnet} implies that $T$ meets chunks in components of combinatorial area zero. \refprop{NonnegArea} implies each component has one of three forms. Since $Y{\smallsetminus} N(F)$ is atoroidal, there are no incompressible tori in a chunk $C$ disjoint from $\partial C$, so $T$ meets $\partial C$. We will rule out the case that the torus is split into normal disks, and the only remaining case will be the desired conclusion.
Suppose a component of $T\cap C$ is a normal disk. Then it is glued to normal disks along its sides. Since $T$ is disjoint from $L$, all normal disks of $T\cap C$ are disjoint from boundary faces. By our assignment of exterior angles to interior edges, it follows that each normal disk of $T\cap C$ meets exactly four interior edges.
If any component of $T\cap C$ is parallel into $F$, then it bounds a disk $D$ in $F$. Then \reflem{Menasco} shows $T$ is meridionally compressible. Surgering $T$ along a meridional compressing disk yields an annulus $A$. If the annulus is essential, it can be put into normal form, and one of its boundary components is a meridian. But this contradicts \refthm{AnAnnularLink}: if there is an essential annulus, then $\pi(L)$ contains a string of bigons and the slope of each boundary component of the annulus is integral, hence cannot be meridional. So the annulus $A$ is not essential. But then $A$ must be parallel to a component of $\partial N(L)$, which implies that $T$ is parallel to a component of $\partial N(L)$, and therefore $T$ is not essential.
So assume all disks are compressing disks for $F$ with nontrivial boundary on $F$. The disks alternate lying on either side of $F$. Each gives a compressing disk for $F$ with boundary meeting four edges, that is, meeting $\pi(L)$ four times. This is impossible, since $\hat{r}(\pi(L),F)> 4$.
\end{proof}
In contrast to the previous result, if $Y{\smallsetminus} N(F)$ is toroidal, then in the checkerboard colourable case, $X(L)$ will be toroidal as well.
\begin{proposition}\label{Prop:HypToroidal}
Let $\pi(L)$ be a weakly generalised alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$. If $Y{\smallsetminus} N(F)$ admits an embedded incompressible torus that is not parallel to $\partial Y$ (but may be parallel to $\partial N(F)$ in $Y{\smallsetminus} N(F)$), then $X(L)$ is toroidal.
\end{proposition}
\begin{proof}
An embedded incompressible torus $T$ in $Y{\smallsetminus} N(F)$ is also embedded in $Y{\smallsetminus} L$. If it is not parallel into $\partial Y$, we prove it is essential in the link exterior. For suppose $D$ is a compressing disk for $T$ in $Y{\smallsetminus} L$. Consider the intersection of $D$ with the checkerboard surfaces $\Sigma$ and $\Sigma'$. Any innermost loop of intersection can be removed using \refthm{hress}. Then $D$ can be isotoped to be disjoint from both $\Sigma$ and $\Sigma'$, hence disjoint from $N(F)$. This contradicts the fact that $T$ is incompressible in $Y{\smallsetminus} N(F)$.
If $E$ is a meridional compressing disk for $T$, then $E'=E\cap X$ is an annulus embedded in $X$ with one boundary component on $T$ and the other on $\partial X$. Since $T$ is disjoint from $N(F)$, $T$ is disjoint from the checkerboard surfaces. But $E'$ can be isotoped so that the other boundary component has intersection number one with $\partial\Sigma$, a contradiction since $E'\cap\Sigma$ consists of properly embedded arcs and loops.
\end{proof}
The proof of \refthm{Hyperbolic} now follows directly from previous results.
\begin{proof}[Proof of \refthm{Hyperbolic}]
Suppose first that $Y{\smallsetminus} N(L)$ contains an essential annulus $A$. If any component of $\partial A$ lies on $\partial N(L)$, then \refthm{AnAnnularLink} implies the diagram is a string of bigons on $F$. But because $F$ is not a 2-sphere and all regions are disks, this is impossible.
So suppose that $Y{\smallsetminus} N(L)$ contains an essential annulus $A$ with boundary components on $\partial Y$. Put $A$ into normal form with respect to the angled chunk decomposition. If $A$ intersects $N(F)$, then an outermost sub-annulus must intersect the boundary of a chunk in a face that is not simply-connected by \refprop{NonnegArea}. But this is impossible since all the regions of $F{\smallsetminus} N(L)$ are disks. Thus $A$ is disjoint from $N(F)$.
Conversely, an essential annulus $A\subset Y{\smallsetminus} N(F)$ with $\partial A\subset \partial Y$ remains essential in $Y{\smallsetminus} N(L)$: as in the proof of \refprop{HypToroidal} any compressing disk or boundary compressing disk can be isotoped to be disjoint from the checkerboard surfaces.
Now consider essential tori. If $Y{\smallsetminus} N(F)$ is atoroidal, then
by \refprop{Toroidal}, any essential torus in $X$ would have normal form meeting non-disk faces of the angled chunk decomposition of $X$. Since there are no such faces, $X(L)$ is atoroidal. Conversely, if $Y{\smallsetminus} N(F)$ is toroidal, then \refprop{HypToroidal} implies $X(L)$ is toroidal.
If $Y{\smallsetminus} N(F)$ is atoroidal and $\partial$-anannular, then $X(L)$ is irreducible, boundary irreducible, atoroidal and anannular. By work of Thurston, any manifold with torus boundary components that is irreducible, boundary irreducible, atoroidal and anannular has interior admitting a complete, finite volume hyperbolic structure; this is Thurston's hyperbolisation theorem~\cite{thu82}. It follows that $Y{\smallsetminus} L$ is hyperbolic when $\partial Y= \emptyset$ or when $\partial Y$ consists of tori.
When $Y$ has boundary with genus higher than one, Thurston's hyperbolisation theorem still applies, as follows. Double $Y{\smallsetminus} N(L)$ along components of $\partial Y$ with genus greater than one, and denote the resulting manifold by $DY$. Then if $DY$ admits an essential sphere or disk, there must be an essential sphere or disk in $Y{\smallsetminus} N(L)$ because of incompressibility of higher genus components of $\partial Y$ in $Y{\smallsetminus} N(L)$. Similarly, any essential torus or annulus in $DY$ gives rise to an essential torus or annulus in $Y{\smallsetminus} N(L)$. But these have been ruled out. Thus $DY$ admits a complete, finite volume hyperbolic structure. By Mostow--Prasad rigidity, the involution of $DY$ that fixes the higher genus components of $\partial Y$ must be realised by an isometry of $DY$ given by reflection in a totally geodesic surface. Cutting along the totally geodesic surface and discarding the reflection yields $Y{\smallsetminus} N(L)$, now with a hyperbolic structure in which higher genus components of $\partial Y$ are totally geodesic. Thus there is a hyperbolic structure in this case as well.
\end{proof}
In the case of knots, not links, we do not believe that $\hat{r}(\pi(L),F)>4$ should be required to rule out essential tori and annuli when all regions of $F{\smallsetminus}\pi(L)$ are disks. Some restriction on representativity will be necessary, for Adams \emph{et al.}~\cite{abb92} showed that the Whitehead double of the trefoil, which is a satellite knot and hence toroidal, has an alternating projection $\pi(K)$ onto a genus-2 Heegaard surface for $S^3$ with $r(\pi(K),F)=2$ and all regions disks. Even so, we conjecture the following in the case of knots.
\begin{conj}
Let $\pi(K)$ be a weakly generalised alternating diagram of a knot $K$ on a generalised projection surface $F$ in $Y$. Suppose $F$ is a Heegaard surface for $Y$ and all regions of $F{\smallsetminus}\pi(K)$ are disks, and $Y{\smallsetminus} N(F)$ is atoroidal and $\partial$-anannular. Then $X(K)$ is hyperbolic.
\end{conj}
\subsection{The case of knots on a torus in the 3-sphere}
We can improve our results if we restrict to knots in $Y=S^3$ on $F$ a torus. Recall that a nontrivial knot $K$ in $S^3$ is either a satellite knot (the complement is toroidal), a torus knot (the complement is atoroidal but annular), or hyperbolic~\cite{thu82}. For a knot with a reduced alternating diagram $\pi(K)$ on $S^2$, Menasco~\cite{men84} proved $K$ is nontrivial, $K$ is a satellite knot if and only if $\pi(K)$ is not prime, and $K$ is a torus knot if and only if it has the obvious diagram of the $(p,2)$-torus knot. Otherwise $K$ is hyperbolic.
Hyperbolicity has been studied for several other classes of knots and links in $S^3$; see for example \cite{ada03}, \cite{fkp15}.
Here, we completely classify the geometry of weakly generalised alternating knots on a torus in $S^3$.
\begin{theorem}[W.G.A. knots on a torus]\label{Thm:WGAKnotOnTorus}
Let $Y=S^3$, and let $F$ be a torus. Let $\pi(K)$ be a weakly generalised alternating projection of a knot $K\subset S^3$ onto $F$.
\begin{enumerate}
\item\label{Itm:NonHeegaard} If $F$ is not a Heegaard torus, then $K$ is a satellite knot.
\item\label{Itm:HeegaardAnnulus} If $F$ is a Heegaard torus and a region of $F{\smallsetminus}\pi(K)$ is an annulus $A$,
\begin{enumerate}
\item\label{Itm:CoreNontriv} if the core of $A$ forms a nontrivial knot in $S^3$, then $K$ is a satellite knot;
\item\label{Itm:CoreTriv} if the core of $A$ forms an unknot in $S^3$, then $K$ is hyperbolic.
\end{enumerate}
\item\label{Itm:Disk} If $F$ is a Heegaard torus and all regions of $F{\smallsetminus}\pi(K)$ are disks, then $K$ is hyperbolic.
\end{enumerate}
Moreover, items \refitm{NonHeegaard} and \refitm{CoreNontriv} also hold when $K$ is a link.
\end{theorem}
\begin{remark}
Items \refitm{NonHeegaard}, \refitm{CoreNontriv}, and \refitm{Disk} first appeared in \cite{how15t}. Item \refitm{CoreTriv} is new in this paper. We include the full proof for completeness. Note that \refthm{WGAKnotOnTorus} addresses all cases, since a diagram with multiple annular regions is the diagram of a multi-component link, and a diagram with a region which is a punctured torus does not satisfy the weakly prime or representativity conditions.
\end{remark}
We will prove the theorem in a sequence of lemmas. First, we restrict essential annuli.
The following theorem first appeared in \cite{how15t}.
\begin{theorem}\label{Thm:WGAKnotAnannular}
Let $Y=S^3$, and let $F$ be a generalised projection surface of positive genus. Let $\pi(K)$ be a weakly generalised alternating projection of a knot $K$ onto $F$. Then $K$ is not a torus knot.
\end{theorem}
\begin{proof}
One of the checkerboard surfaces is non-orientable, and from \refthm{hress} it is $\pi_1$-essential in $X$. But Moser~\cite{mos71} proved that the only 2-sided essential surfaces in a torus knot exterior are the Seifert surface of genus $\frac{1}{2}(p-1)(q-1)$ and the winding annulus at slope $pq$. The winding annulus covers a spanning surface if and only if $q=2$, in which case it covers a M\"obius band at slope $pq$. The only way to obtain an alternating projection from these is a $(p,2)$-torus knot, which has a weakly generalised alternating projection onto $S^2$ only.
\end{proof}
\begin{lemma}\label{Lem:NonHeegaardTorus}
Let $\pi(L)$ be a weakly generalised alternating projection of a link $L$ onto a non-Heegaard torus $F$ in $S^3$. Then $L$ is a satellite link.
\end{lemma}
\begin{proof}
This follows from \refprop{HypToroidal}: because $F$ is a non-Heegaard torus, there exists a torus $T$ which is parallel to $F$ and incompressible in $S^3{\smallsetminus} N(F)$, so $X(L)$ is toroidal and hence $L$ is a satellite link.
\end{proof}
\begin{lemma}\label{Lem:CoreNontriv}
Let $\pi(L)$ be a weakly generalised alternating projection of a link $L$ onto a Heegaard torus $F$ in $S^3$, such that one region of $F{\smallsetminus}\pi(L)$ is homeomorphic to an annulus $A$. If the core of $A$ forms a nontrivial knot in $S^3$, i.e.\ a $(p,q)$-torus knot, then $L$ is a satellite link on that $(p,q)$-torus knot.
\end{lemma}
\begin{proof}
Say the annulus $A$ is a subset of the checkerboard surface $\Sigma$. Consider the complementary annulus $A' = F{\smallsetminus} A$ on $F$. The core of $A'$ is parallel to that of $A$, hence it forms a nontrivial knot in $S^3$.
Let $T$ be the boundary of a neighbourhood of $A'$, chosen such that a solid torus $V$ bounded by $T$ contains $A'$ and contains $L$, and such that the core of $A$ lies on the opposite side of $T$. We claim $T$ is essential in $X$. It will follow that $L$ is a satellite of the core of $A'$, which is some $(p,q)$-torus knot isotopic to the core of $A$.
If $D$ is a compressing disk for $T$, then $D$ cannot lie on the side of $T$ containing the core of $A$, since this side is a nontrivial knot exterior. So $D \subset V$. Then consider $D\cap\Sigma$ and $D\cap\Sigma'$, where $\Sigma'$ is the other checkerboard surface. Since $D$ is meridional in $V$, it is possible to isotope $D$ so that it intersects $\Sigma$ in exactly one essential arc $\beta$. All loops of intersection between $D$ and $\Sigma$ or $\Sigma'$ can be isotoped away using \refthm{hress}.
Since $\beta$ is disjoint from $\Sigma'$ and $\beta$ runs between the two distinct boundary components of $A$, it follows that $A\cup N(\beta\cap\Sigma)$ is homeomorphic to a once-punctured torus, contradicting the fact that $\pi(L)$ contains exactly one annular region. Thus $T$ is incompressible in $X$.
If $E$ is a meridional compressing disk for $T$, then $E'=E\cap X$ is an annulus embedded in $X$ with one boundary component on $T$ and the other on $\partial X$. The boundary component on $T$ does not intersect $\Sigma'$, but the other boundary component has odd intersection number with $\Sigma'$ since $\Sigma'$ is a spanning surface. This is a contradiction.
\end{proof}
Continuing to restrict to the case that $F$ is a torus, if we further restrict to knots, we can rule out satellite knots in the remaining cases.
\begin{lemma}\label{Lem:AdamsNonsatellite}
Let $\pi(K)$ be a weakly generalised alternating diagram of a knot $K$ on a Heegaard torus $F$ in $S^3$ with all regions of $F{\smallsetminus} \pi(K)$ disks. Then $K$ is not a satellite knot.
\end{lemma}
\begin{proof}
When $K$ is a knot, projected onto a Heegaard torus $F$, it is an example of a toroidally alternating knot, as in \cite{ada94}. Adams showed that a toroidally alternating diagram of a nontrivial prime knot is not a satellite knot \cite{ada94}; $K$ is nontrivial by \refcor{IrredBdryIrred} and prime by \refthm{wgaprime}.
\end{proof}
\begin{lemma}\label{Lem:CoreTriv}
Let $\pi(K)$ be a weakly generalised alternating diagram of a knot $K$ onto a Heegaard torus $F$ in $S^3$ such that a region of $F{\smallsetminus}\pi(K)$ is homeomorphic to an annulus $A$. If the core of $A$ forms a trivial knot in $S^3$, then $K$ is not a satellite knot.
\end{lemma}
\begin{proof}
Suppose $T$ is an essential torus in $S^3{\smallsetminus} K$. Isotope $T$ into normal form with respect to the chunk decomposition of $S^3{\smallsetminus} K$. \refprop{GaussBonnet} (Gauss--Bonnet) implies that the combinatorial area of $T$ is zero. By \refprop{NonnegArea}, $T$ meets the chunks in incompressible tori, annuli, or disks.
If $T$ meets a chunk in an incompressible torus, then $T$ is completely contained on one side of $F$. But $F$ is a Heegaard torus, so each chunk is a solid torus. There are no incompressible tori embedded in a solid torus.
So suppose that a normal component of $T$ is an annulus $S$ meeting no edges of a chunk, with $\partial S$ lying in non-contractible faces of the chunk. Because $F$ is a torus, $\partial S$ lies in an annular face. Because $\pi(K)$ is connected ($K$ is a knot), there is only one annular face, namely $A$. Thus $\partial S$ forms parallel essential curves in $A$.
Let $\alpha$ be the core of $A$. By hypothesis, $\alpha$ is the unknot, hence it is either a $(p,1)$ or $(1,q)$ torus knot, and we can choose a compressing disk $D$ for the projection torus $F$ such that $\partial D$ meets $\alpha$ exactly once. Consider $D\cap T$. By incompressibility of $T$, any innermost loop of $D\cap T$ can be isotoped away, so $D\cap T$ consists of arcs with endpoints on $A\cap\partial D$. Consider an outermost arc on $D$; it cuts off a disk. We may use this disk to isotope $T$ through the chunk, removing the arc of intersection and changing two adjacent curves of intersection of $T\cap A$ into a single loop of intersection, which bounds a disk on $A$ and can then be isotoped away. Repeating, $T$ can be isotoped to remove all intersections with $A$, so $T$ can be isotoped to be disjoint from $F\times I$. But then $T$ is embedded in a solid torus on one side of $F$, which is impossible as above. Thus the annulus case does not occur.
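To justify the choice of $D$ above, here is a sketch using standard facts about curves on a Heegaard torus, not specific to this setting. Write $\alpha$ as a $(p,q)$-curve on $F$, and let $D_1$ and $D_2$ be meridian disks of the two solid tori bounded by $F$. Then, up to isotopy,
\[
|\partial D_1 \cap \alpha| = |q| \qquad \text{and} \qquad |\partial D_2 \cap \alpha| = |p|,
\]
and an essential $(p,q)$-curve on $F$ is unknotted in $S^3$ precisely when $|p|=1$ or $|q|=1$. Hence one of $D_1$, $D_2$ is a compressing disk for $F$ whose boundary meets $\alpha$ exactly once.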
Finally, suppose that $T$ is cut into normal disks meeting the chunks. Each has combinatorial area $0$. By definition of combinatorial area and the fact that each edge of the chunk is assigned angle $\pi/2$, each normal disk meets exactly four edges.
First suppose a normal square $S$ of $T$ is parallel into $F$.
\reflem{Menasco} implies that $S$ is meridionally compressible, with a meridional compressing disk meeting $K$ at a crossing. Surger along the meridional compressing disk to obtain a sphere meeting $K$ exactly twice, in two meridians.
This sphere bounds 3-balls on both sides in $S^3$, one of which contains only a trivial arc of $K$, since $K$ is prime by \refthm{wgaprime}. Depending on which $3$-ball contains the trivial arc, either $T$ is boundary parallel, or $T$ is compressible, both of which are contradictions. This is exactly Menasco's argument in \cite{men84}.
So each normal disk of $T$ is a compressing disk for $F$. These normal disks cut the Heegaard solid tori into ball regions, each ball with boundary consisting of an annulus on $F$ and two disks of $T$. Because $T$ is separating, these balls can be coloured according to whether $K$ lies inside or outside. Consider the regions with $K$ outside. These are $I$-bundles over a disk. They are glued along regions of the chunk decomposition lying between a pair of edges running parallel to the fiber. Thus the regions are fibered, with the gluing preserving the fibering, and we obtain an $I$-bundle that is a submanifold of $S^3$ with boundary a single torus $T$. The only possibility is that the submanifold is an $I$-bundle over a Klein bottle, but no Klein bottle embeds in $S^3$. This contradiction is exactly Adams' contradiction in \cite{ada94}.
\end{proof}
\begin{proof}[Proof of \refthm{WGAKnotOnTorus}]
When $F$ is not a Heegaard torus, or when $F$ is a Heegaard torus but a region of $F{\smallsetminus}\pi(K)$ is an annulus with knotted core, then Lemmas~\ref{Lem:NonHeegaardTorus} and~\ref{Lem:CoreNontriv}, respectively, imply that $K$ is a satellite knot.
In the remaining two cases, Lemmas~\ref{Lem:CoreTriv} and~\ref{Lem:AdamsNonsatellite} imply that $S^3{\smallsetminus} K$ is atoroidal. Moreover, \refthm{WGAKnotAnannular} implies that $S^3{\smallsetminus} K$ is anannular. Because $S^3{\smallsetminus} K$ is irreducible and boundary irreducible, it must be hyperbolic in these cases.
\end{proof}
\begin{corollary}\label{Cor:GAKnotOnTorus}
Let $\pi(K)$ be a generalised alternating diagram of a knot $K$ on a torus $F$, as defined by Ozawa \cite{oza06}. Then $K$ is hyperbolic if and only if $F$ is a Heegaard torus.
\end{corollary}
\begin{proof}
In Ozawa's definition of generalised alternating diagrams on $F$, the regions are disks, so the result follows from items \refitm{NonHeegaard} and \refitm{Disk} of \refthm{WGAKnotOnTorus}.
\end{proof}
\section{Accidental, virtual fibered, and quasifuchsian surfaces}\label{Sec:Accidental}
We now switch to links with checkerboard colourable diagrams, and consider again the checkerboard surfaces $\Sigma$ and $\Sigma'$ of a weakly generalised alternating link diagram, which are essential by \refthm{hress}. Using the bounded angled chunk decompositions of the link exterior cut along $\Sigma$, we give further information about the surface $\Sigma$. In particular, we determine when it is accidental, quasifuchsian, or a virtual fiber.
We fix some notation. As before, we let $Y$ be a compact orientable irreducible 3-manifold with generalised projection surface $F$, such that if $\partial Y\neq \emptyset$, then $\partial Y$ consists of tori and $\partial Y$ is incompressible in $Y{\smallsetminus} N(F)$. Let $L$ be a link with a weakly generalised alternating diagram on $F$, $\Sigma$ a $\pi_1$-essential spanning surface, $\widetilde{\Sigma} = \partial N(\Sigma)$, $M_{\Sigma} = (Y{\smallsetminus} L){\backslash \backslash}\Sigma$, and $P=\partial M_{\Sigma} \cap \partial N(L)$ the parabolic locus.
\subsection{Accidental surfaces}
An \emph{accidental parabolic} is a non-trivial loop in $\Sigma$ which is freely homotopic into $\partial N(L)$ through $X:=Y{\smallsetminus} N(L)$, but not freely homotopic into $\partial\Sigma$ through $\Sigma$.
Define an \emph{accidental annulus} for $\Sigma$ to be an essential annulus $A$ properly embedded in $M_{\Sigma}$ such that $\partial A=\alpha\cup\beta$, where $\beta\subset P$ and $\alpha\subset\widetilde{\Sigma}$ is an accidental parabolic. It is well known that if a spanning surface $\Sigma$ contains an accidental parabolic, then it admits an accidental annulus in $M_{\Sigma}$. See, for example, Ozawa--Tsutsumi~\cite{ot03}, or \cite[Lemma~2.2]{fkp14}.
A surface $\Sigma$ is \emph{accidental} if $M_{\Sigma}$ contains an accidental annulus.
If $\Sigma$ is accidental, then the slope of $\Sigma$ is the same as the slope of $\beta$ on $P$.
\begin{lemma}\label{Lem:squares}
Let $\pi(L)$ be a weakly generalised alternating link projection onto a generalised projection surface $F$ in $Y$, with shaded checkerboard surface $\Sigma$.
Let $A$ be an accidental annulus for $\Sigma$. Then $A$ decomposes as the union of an even number of normal squares, each with one side on a shaded face, one side on a boundary face, and two opposite sides on white faces.
\end{lemma}
\begin{proof}
The annulus $A$ is essential, so it can be isotoped into normal form by \refthm{NormalForm}. Then by Gauss--Bonnet, \refprop{GaussBonnet}, the combinatorial area of $A$ is $a(A) = -2\pi\chi(A) + \pi\,p = 0+\pi\,p$, where $p$ is the number of times a boundary curve of $A$ runs from $\partial N(\Sigma)$ to the parabolic locus and then back to $\partial N(\Sigma)$. Since one component of $\partial A$ is completely contained in $\partial N(\Sigma)$ and the other is completely contained in the parabolic locus, it follows that $p=0$. Thus $a(A)=0$. Then $A$ is subdivided into normal pieces, each with combinatorial area $0$ within a chunk. By \refprop{NonnegArea}, each piece can be a disk or an annulus; the torus case does not arise because $A$ is an annulus. Since $\partial A$ meets boundary faces of the chunk, there cannot be a normal component of $A$ meeting no edges. It follows that all normal components of $A$ are disks with combinatorial area $0$. Because each edge of the chunk has angle $\pi/2$, it follows that each piece meets exactly four edges, so it is a normal square.
Because $A$ runs through two distinct chunks, alternating between chunks, the total number of squares forming $A$ must be even.
Consider intersections of $A$ with white faces. We claim each arc of intersection runs from the component of $\partial A$ on the parabolic locus $P$ to a shaded face. For if an arc runs from $\beta=\partial A\cap P$ back to $\beta$, an innermost such arc along with an arc on $P$ bounds a disk in $A$. Because the white checkerboard surface $\Sigma'$ is boundary $\pi_1$-injective, we may push away this intersection. Similarly, if an innermost arc of intersection of $A$ with the white faces runs from $\alpha = \partial A\cap \widetilde{\Sigma}$ back to $\alpha$ then it cuts off a bigon region of $A$ in normal form. This contradicts \refcor{NoNormalBigons}. So each arc of intersection of $A$ with a white face runs from $\beta$ to $\alpha$ on $A$. Thus each square is as described in the lemma.
\end{proof}
\begin{lemma}\label{Lem:consecsquare}
Let $A$ be an accidental annulus for a checkerboard surface $\Sigma$ associated to a weakly generalised alternating diagram $\pi(L)$ on a generalised projection surface $F$ in a 3-manifold $Y$. Let $A_1, \dots, A_n$ be the decomposition of $A$ into normal squares, as in \reflem{squares}. Then no square $A_i$ can be parallel into $F$.
\end{lemma}
\begin{proof}
Suppose by way of contradiction that $A_i$ is parallel into $F$. Then \reflem{Annulus} implies $L$ is a 2-component link, $\pi(L)$ is a string of bigons, $\Sigma$ is an annulus, and $\partial A$ has a component parallel to the core of $\Sigma$. But then $\partial A$ is freely homotopic into $\partial N(L)$, contradicting the definition of an accidental annulus.
\end{proof}
Futer, Kalfagianni, and Purcell~\cite{fkp14} proved that a state surface associated to a $\sigma$-adequate $\sigma$-homogeneous link diagram has no accidental parabolics. This implies that the checkerboard surfaces associated to a reduced prime alternating diagram are not accidental in $X$. \reflem{consecsquare} leads to a new proof in that case.
\begin{corol}
Let $\pi(L)$ be a reduced prime alternating link projection onto $S^2$ in $S^3$; i.e.\ $L$ is alternating in the usual sense. Then neither checkerboard surface is accidental in $X$.
\end{corol}
\begin{proof}
Suppose $A$ is an accidental annulus for a checkerboard surface. Put $A$ into normal form. Every square making up $A$ must be parallel into $S^2$. This contradicts~\reflem{consecsquare}.
\end{proof}
\begin{theorem}\label{Thm:linkaccid}
Let $\pi(L)$ be a weakly generalised alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$.
Suppose $\hat{r}(\pi(L),F)> 4$. If $\Sigma$ is a checkerboard surface of $\pi(L)$, then $\Sigma$ is not accidental in $X$.
\end{theorem}
\begin{proof}
Suppose $A$ is an accidental annulus. The boundary of any normal square making up $A$ can be isotoped to meet $\pi(L)$ exactly four times. Since the representativity is strictly greater than four on one side of $F$, no square of $A$ on that side can be a compressing disk for $F$. Hence every square on one side of $F$ is parallel into $F$, contradicting~\reflem{consecsquare}.
\end{proof}
\reffig{linkaccideg} shows an example of a weakly generalised alternating link in $S^3$ for which one of the checkerboard surfaces contains an accidental parabolic. This shows that the condition on representativity is needed for links. However, for knots, the condition is not necessary. The proof of the following is similar to \cite[Theorem~2.6]{fkp14}.
\begin{figure}
\caption{An accidental parabolic (thin line, in blue) on the shaded surface of a weakly generalised alternating link diagram. It is freely homotopic to the link component in darker blue. Note that the torus must be embedded in $S^3$ such that the identified edges each bound compressing disks for the torus.}
\label{Fig:linkaccideg}
\end{figure}
\begin{theorem}\label{Thm:knotaccid}
Let $\pi(K)$ be a weakly generalised alternating knot projection. If $\Sigma$ is a checkerboard surface associated to $\pi(K)$ and $\Sigma'$ contains at least one disk region, then $\Sigma$ is not accidental in $X$.
\end{theorem}
\begin{proof}
Let $R'$ be a disk region of $\Sigma'$. Suppose that $A$ is an accidental annulus for $\Sigma$. Since $\partial A\cap P$ must have the same slope as $\Sigma$, there must be one arc of $A\cap R'$ beginning on each segment of $\pi(L)$ in $\partial R'$. But it is impossible for these arcs to run to non-adjacent crossings without intersecting in the interior of $R'$. Hence there can be no accidental annulus for $\Sigma$.
\end{proof}
\subsection{Semi-fibers}
We say an essential surface $\Sigma$ in a compact, orientable 3-manifold $M$ is a \emph{semi-fiber} if either there is a fibration of $M$ over $S^1$ with fiber $\Sigma$, or if $\widetilde{\Sigma}$ is the common frontier of two twisted $I$-bundles over $\Sigma$ whose union is $M$.
Note that if $\Sigma$ is a semi-fiber, then $M{\backslash \backslash}\Sigma$ is an $I$-bundle. In this section, we determine when a checkerboard surface is a semi-fiber in a weakly generalised alternating link complement.
\begin{theorem}\label{Thm:Fibered}
Let $\pi(L)$ be a weakly generalised alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$.
Let $\Sigma$ denote the shaded checkerboard surface, and suppose that the white regions of $F{\smallsetminus}\pi(L)$ are disks. If the link exterior $X:=Y{\smallsetminus} N(L)$ is hyperbolic, then $\Sigma$ is not a semi-fiber for $X$. Moreover, if $\Sigma$ is a semi-fiber for $X$, then the white surface $\Sigma'$ is an annulus or M\"obius band, and $\pi(L)$ is a string of white bigons bounding $\Sigma'$.
\end{theorem}
The proof of \refthm{Fibered} will follow from the following lemma, which describes how an $I$-bundle embedded in $M_\Sigma=X{\backslash \backslash}\Sigma$ meets white regions. It is essentially \cite[Lemma~4.17]{fkp13}.
\begin{lemma}\label{Lem:ProductRectangles}
Let $\pi(L)$ and $\Sigma$ be as in the statement of \refthm{Fibered}, so white regions of $F{\smallsetminus} \pi(L)$ are disks.
Let $B$ be an $I$-bundle embedded in $M_\Sigma = X{\backslash \backslash} \Sigma$, with horizontal boundary on $\widetilde{\Sigma}$ and essential vertical boundary.
Let $R$ be a white region of $F{\smallsetminus}\pi(L)$. Then $B\cap R$ is a product rectangle $\alpha\times I$, where $\alpha\times\{0\}$ and $\alpha\times\{1\}$ are arcs of ideal edges of $R$.
\end{lemma}
\begin{proof}
First suppose $B=Q\times I$ is a product $I$-bundle over an orientable base.
Consider a component of $\partial(B\cap R)$. If it lies entirely in the interior of $R$, then it lies in the vertical boundary $V=\partial Q\times I$.
The intersection $V\cap R$ then contains a closed curve component; an innermost one bounds a disk in $R$. Since the vertical boundary is essential, we may isotope $B$ to remove these intersections. So assume each component of $\partial(B\cap R)$ meets $\widetilde{\Sigma}$.
Note $R\cap \widetilde{\Sigma}$ consists of ideal edges on the boundary of the face $R$. It follows that the boundary of each component of $B\cap R$ consists of arcs $\alpha_1$, $\beta_1$, $\dots$, $\alpha_n$, $\beta_n$ with $\alpha_i$ an arc in an ideal edge of $R\cap \widetilde{\Sigma}$ and $\beta_i$ in the vertical boundary of $B$, in the interior of $R$.
We may assume that each arc $\beta_i$ runs between distinct ideal edges, else isotope $B$ through the disk bounded by $\beta_i$ and an ideal edge to remove $\beta_i$, and merge $\alpha_i$ and $\alpha_{i+1}$.
We may assume that $\beta_i$ runs from $Q\times\{0\}$ to $Q\times\{1\}$, for if not, then $\beta_i \subset R$ is an arc from $\partial Q\times\{1\}$ to $\partial Q\times\{1\}$, say, in an annulus component of $\partial Q \times I$. Such an arc bounds a disk in $\partial Q\times I$. This disk has boundary consisting of the arc $\beta_i$ in $R$ and an arc on $\partial Q\times\{1\} \subset\widetilde{\Sigma}$. If the disk were essential, it would give a contradiction to \refcor{NoNormalBigons}. So it is inessential, and we may isotope $B$ to remove $\beta_i$, merging $\alpha_i$ and $\alpha_{i+1}$. It now follows that $n$ is even.
Finally we show that $n=2$, i.e.\ that each component of $B\cap R$ is a quadrilateral with arcs $\alpha_1, \beta_1, \alpha_2, \beta_2$. For if not, there is an arc $\gamma\subset R$ with endpoints on $\alpha_1$ and $\alpha_3$. By sliding along the disk $R$, we may isotope $B$ so $\gamma$ lies in $B\cap R$. Then note that $\gamma$ lies in $Q\times I$ with endpoints on $Q\times\{1\}$. It must be parallel vertically to an arc $\delta\subset Q\times\{1\} \subset \widetilde{\Sigma}$. This gives another disk $D$ with boundary consisting of an arc on $R$ and an arc on $\widetilde{\Sigma}$.
The disk $D$ cannot be essential since that would contradict \refcor{NoNormalBigons}. Hence $D$ is inessential which means that $\alpha_1$ and $\alpha_3$ lie on the same ideal edge of $R$. From above we know that $\alpha_2$ must lie on a different ideal edge of $R$. But it is impossible to then connect up the endpoints of the $\alpha_i$ to enclose a subdisk of $R$.
It follows that each component of $B\cap R$ is a product rectangle $\alpha\times I$, where $\alpha\times\{1\} = \alpha_1$ is an arc of an ideal edge of $R$ and $\alpha\times\{0\} = \alpha_2$ is an arc of an ideal edge of $R$.
Next suppose $B$ is a twisted $I$-bundle $B=Q \widetilde{\times} I$ where $Q$ is non-orientable. Let $\gamma_1, \dots, \gamma_m$ be a maximal collection of orientation reversing closed curves on $Q$. Let $A_i\subset B$ be the $I$-bundle over $\gamma_i$. Each $A_i$ is a M\"obius band. The bundle $B_0 = B{\smallsetminus} (\cup_i A_i)$ is then a product bundle $B_0 = Q_0\times I$ where $Q_0=Q{\smallsetminus} (\cup_i \gamma_i)$ is an orientable surface. Our work above then implies that $B_0\cap R$ is a product rectangle for each white region $R$. To obtain $B\cap R$, we attach the vertical boundary of such a product rectangle to the vertical boundary of a product rectangle of $A_i$. This procedure respects the product structure of all rectangles, hence the result is a product rectangle.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm:Fibered}]
If $\Sigma$ is a semi-fiber for $X=Y{\smallsetminus} N(L)$, then the manifold $M_\Sigma = X{\backslash \backslash} \Sigma$ is an $I$-bundle. Lemma~\ref{Lem:ProductRectangles} implies this $I$-bundle intersects each white face $R$ in a product rectangle of the form $\alpha\times I$, where $\alpha\times\{0\}$ and $\alpha\times\{1\}$ lie on ideal edges of $R$. Since the $I$-bundle is all of $M_\Sigma$, this intersection is all of $R$, so the face $R$ is itself a product rectangle with exactly two ideal edges $\alpha\times\{0\}$ and $\alpha\times\{1\}$. Thus $R$ is a bigon.
If every white face $R$ is a bigon, then $\pi(L)$ is a string of bigons lined up end to end on $F$. But then the white checkerboard surface $\Sigma'$ is made up of the interior of the bigons, hence it is a M\"obius band or annulus. Then the white surface $\Sigma'$ is $\pi_1$-essential by \refthm{hress}; it follows that $X$ contains an essential annulus and therefore is not hyperbolic.
\end{proof}
\subsection{Quasifuchsian surfaces}
A properly embedded $\pi_1$-essential surface $\Sigma$ in a hyperbolic $3$-manifold is quasifuchsian if the lift of $\Sigma$ to $\mathbb{H}^3$ is a plane whose limit set is a Jordan curve on the sphere at infinity.
The following theorem follows from work of Thurston~\cite{thu79} and Bonahon~\cite{bon86}; see also Canary--Epstein--Green~\cite{ceg87}.
\begin{theorem}\label{Thm:qftrichot}
Let $\Sigma$ be a properly embedded $\pi_1$-essential surface in a hyperbolic $3$-manifold of finite volume. Then $\Sigma$ is exactly one of: quasifuchsian, semi-fibered, or accidental.
\end{theorem}
Fenley~\cite{fen98} showed that a minimal genus Seifert surface for a non-fibered hyperbolic knot is quasifuchsian.
Futer, Kalfagianni, and Purcell~\cite{fkp14} showed that the state surface associated to a homogeneously adequate knot diagram is quasifuchsian whenever such a knot is hyperbolic. This includes the case of the checkerboard surfaces associated to a reduced planar alternating diagram of a hyperbolic knot in $S^3$.
\begin{corol}\label{Cor:Quasifuchsian}
Let $\pi(L)$ be a weakly generalised alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$.
Suppose that $F$ has genus at least $1$, and that $Y{\smallsetminus} N(F)$ is atoroidal and $\partial$-anannular. Suppose further that $\hat{r}(\pi(L),F)>4$, and that the regions of $F{\smallsetminus} \pi(L)$ are disks. If $\Sigma$ is a checkerboard surface associated to $\pi(L)$, then $\Sigma$ is quasifuchsian.
\end{corol}
\begin{proof}
\refthm{Hyperbolic} ensures that $Y{\smallsetminus} L$ is hyperbolic, so the surface $\Sigma$ cannot be a virtual fiber by \refthm{Fibered}. It cannot be accidental by \refthm{linkaccid}. By \refthm{qftrichot}, the only remaining possibility is for $\Sigma$ to be quasifuchsian.
\end{proof}
\section{Bounds on volume}\label{Sec:Volume}
In this section, we give a lower bound on the volumes of hyperbolic weakly generalised alternating links in terms of their diagrams. Lackenby was the first to bound volumes of alternating knots in terms of their diagrams \cite{lac04}. Our method of proof is similar to his, using angled chunk decompositions rather than ideal polyhedra; much of his work carries through to our setting.
\begin{definition}\label{Def:Guts}
Let $S$ be a $\pi_1$-essential surface properly embedded in an orientable hyperbolic 3-manifold $M$. Consider $M{\backslash \backslash} S$. This is a 3-manifold with boundary; therefore it admits a JSJ-decomposition, decomposing it along essential tori and annuli into $I$-bundles, Seifert fibered solid tori, and \emph{guts}. The \emph{guts}, denoted $\operatorname{guts}(M{\backslash \backslash} S)$, is the portion of the manifold $M{\backslash \backslash} S$ after the JSJ-decomposition that admits a hyperbolic metric with geodesic boundary.
\end{definition}
To bound the volume, we will use the following theorem applied to checkerboard surfaces.
\begin{theorem}[Agol-Storm-Thurston \cite{ast07}]\label{Thm:agolguts}
Let $M$ be an orientable, finite volume hyperbolic $3$-manifold. We allow $M$ to have nonempty boundary, but in that case, we require that every component of $\partial M$ is totally geodesic in the hyperbolic structure. (Note $M$ may also have cusps, but these are not part of $\partial M$.)
Finally, let $S$ be a $\pi_1$-essential surface properly embedded in $M$ disjoint from $\partial M$.
Then
\[\operatorname{vol}(M)\geq-v_8\chi(\operatorname{guts}(M{\backslash \backslash} S)),\]
where $v_8 =3.66\dots$ is the volume of a regular ideal octahedron.
\end{theorem}
\begin{proof}
This follows from \cite[Theorem~9.1]{ast07} as follows. If $\partial M$ is empty, which includes the case that $M$ has cusps, then the statement is equivalent to that of \cite[Theorem~9.1]{ast07}.
Otherwise, consider the double $DM$ of $M$, doubling over components of $\partial M$. Because $\partial M$ is incompressible, and $M$ is irreducible, anannular and atoroidal (by hyperbolicity of $M$), $DM$ is a hyperbolic 3-manifold of finite volume. Let $DS$ denote the union of $S$ and its double in $DM$. Then $DS$ is $\pi_1$-essential in $DM$, using the fact that $\partial M$ is incompressible in $M$. Finally, let $\Sigma$ denote the union of $DS$ and the components of $\partial M$. Then $\Sigma$ is $\pi_1$-essential in $DM$.
Thus \cite[Theorem~9.1]{ast07} implies
\[ 2\operatorname{vol}(M) = \operatorname{vol}(DM) \geq -v_8\chi(\operatorname{guts}(DM{\backslash \backslash} \Sigma)). \]
Note that $DM{\backslash \backslash} \Sigma$ is exactly two copies of $M{\backslash \backslash} S$. Thus
\[ \chi(\operatorname{guts}(DM{\backslash \backslash}\Sigma)) = 2\chi(\operatorname{guts}(M{\backslash \backslash} S)). \qedhere \]
\end{proof}
\begin{definition}\label{Def:WeaklyTwistReduced}
A reduced alternating diagram $\pi(L)$ on a generalised projection surface $F$ is said to be \emph{weakly twist reduced} if the following holds.
Suppose $D$ is a disk in $F$ with $\partial D$ meeting $\pi(L)$ transversely in four points, adjacent to exactly two crossings. Then either $D$ contains only bigon faces of $F{\smallsetminus}\pi(L)$, arranged so that the two crossings met by $\partial D$ each bound a bigon in $D$, or $F{\smallsetminus} D$ contains a disk $D'$ such that $\partial D'$ meets $\pi(L)$ adjacent to the same two crossings and $D'$ contains only bigon faces, again with the two crossings each bounding a bigon in $D'$. See \reffig{WeaklyTwistReduced}.
\end{definition}
\begin{figure}
\caption{Weakly twist reduced: If the boundary of a disk $D\subset F$ meets the diagram in four points adjacent to exactly two crossings, then either $D$ bounds a string of bigons, or there is a disk outside $D$ meeting the diagram adjacent to the same two crossings, bounding a string of bigons.}
\label{Fig:WeaklyTwistReduced}
\end{figure}
\begin{definition}\label{Def:TwistRegion}
Let $\pi(L)$ be a weakly generalised alternating diagram on a surface $F$. A \emph{twist region} of $\pi(L)$ is either
\begin{itemize}
\item a string of bigon regions of $\pi(L)$ arranged vertex to vertex that is maximal in the sense that no larger string of bigons contains it, or
\item a single crossing adjacent to no bigons.
\end{itemize}
The \emph{twist number} $\operatorname{tw}(\pi(L))$ is the number of twist regions in a weakly twist-reduced diagram.
\end{definition}
Any diagram of a weakly generalised alternating link on a generalised projection surface $F$ can be modified to be weakly twist reduced. For if $D$ is a disk as in the definition and neither $D$ nor $D'$ contains only bigons, then a flype in $D$ on $F$ moves the crossings met by $\partial D$ to be adjacent, either canceling the crossings or leaving a bigon between, reducing the number of twist regions on $F$.
\begin{lemma}\label{Lem:MarcLemma7}
Let $\pi(L)$ be a reduced alternating diagram on $F$ in $Y$, such that all regions of $F{\smallsetminus}\pi(L)$ are disks.
Let $D_1$ and $D_2$ be normal disks parallel to $F$ such that $\partial D_1$ and $\partial D_2$ meet exactly four interior edges. Isotope $\partial D_1$ and $\partial D_2$ to minimise intersections $\partial D_1\cap\partial D_2$ in faces. If $\partial D_1$ intersects $\partial D_2$ in a face of the chunk, then $\partial D_1$ intersects $\partial D_2$ exactly twice, in faces of the same colour.
\end{lemma}
\begin{proof}
The boundaries $\partial D_1$ and $\partial D_2$ are quadrilaterals, with sides of $\partial D_i$ between intersections with interior edges. Note that $\partial D_1$ can intersect $\partial D_2$ at most once in any of its sides by the requirement that the number of intersections be minimal (else isotope through a disk face).
Thus there are at most four intersections of $\partial D_1$ and $\partial D_2$. If $\partial D_1$ meets $\partial D_2$ four times, then the two quads run through the same regions, both bounding disks, and can be isotoped off each other using the fact that the diagram is weakly prime. Since the quads intersect an even number of times, there are either zero or two intersections. If zero intersections, we are done.
So suppose there are two intersections. Since all regions are disks, $\pi(L)$ is checkerboard colourable. Suppose $\partial D_1$ intersects $\partial D_2$ exactly twice in faces of the opposite colour. Then an arc $\alpha_1\subset \partial D_1$ has both endpoints on $\partial D_1\cap \partial D_2$ and meets only one intersection of $\partial D_1$ with an interior edge of the chunk decomposition. Similarly, an arc $\alpha_2\subset\partial D_2$ has both endpoints on $\partial D_1\cap \partial D_2$ and meets only one intersection of $\partial D_2$ with an interior edge of the chunk decomposition. Then $\alpha_1\cup \alpha_2$ is a closed curve on $F$ meeting exactly two interior edges of the chunk decomposition and bounding a disk in $F$.
There are four cases: $\alpha_1\cup\alpha_2$ bounds the disk $D_1\cap D_2$, $D_1{\smallsetminus} D_2$, $D_2{\smallsetminus} D_1$, or $D_1\cup D_2$ in the chunk. It follows that the corresponding disk on $F$ contains no crossings. But then the arcs $\alpha_1$ and $\alpha_2$ are parallel and in the first three cases, the arcs can be isotoped through the disk to remove the intersections of $D_1$ and $D_2$. In the final case, the interior of the disk bounded by $\alpha_1\cup\alpha_2$ is not disjoint from $\partial D_1\cup \partial D_2$, but it is still possible to isotope $D_1$ to be disjoint from $D_2$.
\end{proof}
\begin{theorem}\label{Thm:chiguts}
Let $\pi(L)$ be a weakly twist-reduced, weakly generalised alternating diagram on a generalised projection surface $F$ in a 3-manifold $Y$.
If $\partial Y \neq \emptyset$, then suppose $\partial Y$ is incompressible in $Y{\smallsetminus} N(F)$. Also suppose $Y{\smallsetminus} N(F)$ is atoroidal and $\partial$-anannular.
Suppose $F$ has genus at least one, all regions of $F{\smallsetminus}\pi(L)$ are disks, and $r(\pi(L),F)>4$.
Let $S$ and $W$ denote the checkerboard surfaces of $\pi(L)$, and let $r_S = r_S(\pi(L))$ and $r_W = r_W(\pi(L))$ denote the number of non-bigon regions of $S$ and $W$ respectively.
Write $M_S = X{\backslash \backslash} S$, and $M_W=X{\backslash \backslash} W$. Then
\[\chi(\operatorname{guts}(M_S))=\chi(F)+\frac{1}{2}\chi(\partial Y)-r_W,\hspace{5mm} \chi(\operatorname{guts}(M_W))=\chi(F)+\frac{1}{2}\chi(\partial Y)-r_S.\]
\end{theorem}
We know $M_S$ is obtained by gluing chunks along white faces only.
If $\operatorname{guts}(M_S)$ happens to equal $M_S$, then $\chi(M_S)$ is obtained by taking the sum of the Euler characteristics of each chunk $C$, which is $\chi(C) = \chi(\partial C)/2$, and subtracting one from the total for each (disk) white face. Note that each component of $F$ appears as a boundary component of exactly two chunks; the other boundary components come from $\partial Y$. Thus the sum of Euler characteristics of chunks is $\chi(F) + \chi(\partial Y)/2$.
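This count can be written out as a short computation. It is only a sketch, made under the assumption $\operatorname{guts}(M_S)=M_S$, with $w$ denoting the number of white disk faces (so $w=r_W$ once white bigons have been removed):

```latex
% Sketch of the Euler characteristic count, assuming guts(M_S) = M_S,
% where w is the number of white disk faces along which chunks are glued.
\begin{align*}
\chi(M_S) &= \sum_{i}\chi(C_i) - w
  && \text{(gluing along $w$ disk faces, each with $\chi=1$)}\\
 &= \sum_{i}\tfrac{1}{2}\chi(\partial C_i) - w
  && \text{(each chunk satisfies $\chi(C_i)=\tfrac{1}{2}\chi(\partial C_i)$)}\\
 &= \chi(F) + \tfrac{1}{2}\chi(\partial Y) - w
  && \text{($F$ bounds exactly two chunks; the rest of $\partial C_i$ comes from $\partial Y$).}
\end{align*}
```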
However, note that if there are any white bigon regions, then the white bigon can be viewed as a quad with two sides on the parabolic locus $P$ and two sides on $\widetilde{S}$. A neighbourhood of this is part of an $I$-bundle, with horizontal boundary on $\widetilde{S}$ and vertical boundary on the parabolic locus. Thus white bigon faces cannot be part of the guts. We will remove them.
To find any other $I$-bundle or Seifert fibered components, we must identify any essential annuli embedded in $M_S$, disjoint from the parabolic locus, and with $\partial A\subset \widetilde{S}$; these give the JSJ-decomposition of $M_S$. So suppose $A$ is such an essential annulus.
\begin{definition}\label{Def:ParabolicallyCompressible}
We say that $A$ is \emph{parabolically compressible} if there exists a disk $D$ with interior disjoint from $A$, with $\partial D$ meeting $A$ in an essential arc $\alpha$ on $A$, and with $\partial D{\smallsetminus} \alpha$ lying on $\widetilde{S}\cup P$, with $\alpha$ meeting $P$ transversely exactly once. We may surger along such a disk; this is called a \emph{parabolic compression}, and it turns the annulus $A$ into a disk meeting $P$ transversely exactly twice, with boundary otherwise on $\widetilde{S}$. This is called an \emph{essential product disk (EPD)}; each EPD will be part of the $I$-bundle component of the JSJ-decomposition.
\end{definition}
\begin{lemma}\label{Lem:NoParabComprAnnulus}
Let $\pi(L)$ be a weakly twist-reduced, weakly generalised alternating diagram on a generalised projection surface $F$ in a 3-manifold $Y$. Suppose
all regions of $F{\smallsetminus}\pi(L)$ are disks, $r(\pi(L),F)>4$, and there are no white bigon regions. Let $A$ be an essential annulus embedded in $M_S = X{\backslash \backslash} S$, disjoint from the parabolic locus, with $\partial A\subset \widetilde{S}$. Then $A$ is not parabolically compressible.
\end{lemma}
\begin{proof}
Suppose $A$ is parabolically compressible. Then surger along a parabolic compressing disk to obtain an EPD $E$. Put $E$ into normal form with respect to the chunk decomposition of $M_S$. If $E$ intersects a white (interior) face coming from $W$, then consider the arcs $E\cap W$. Such an arc has both endpoints on $\widetilde{S}$. If one cuts off a disk on $E$ that does not meet the parabolic locus, then there will be an innermost such disk. This will be a normal bigon, i.e.\ a disk in the chunk decomposition meeting exactly two surface edges.
This contradicts \refcor{NoNormalBigons}.
So $E\cap W$ consists of arcs running from $\widetilde{S}$ to $\widetilde{S}$, cutting off a disk meeting the parabolic locus $P$ on either side. Thus $W$ cuts $E$ into normal squares $\{E_1, \dots, E_n\}$. On the end of $E$, the square $E_1$ has one side on $W$, two sides on $\widetilde{S}$, and the final side on a boundary face.
We may isotope slightly off the boundary face into an adjacent white face so that $E_1$ remains normal. Then $E_1$ and $E_2$ are both squares meeting no boundary faces.
Since $r(\pi(L),F)>4$, no $E_i$ can be a compressing disk for $F$, hence each is parallel into $F$. Superimpose $E_1$ and $E_2$ onto the boundary of one of the chunks. An edge of $E_1$ in a white face $U\subset W$ is glued to an edge of $E_2$ in the same white face. Because the two are in different chunks, when we superimpose, $\partial E_2\cap U$ is obtained from $\partial E_1\cap U$ by a rotation in $U$.
If $E_1\cap U$ is not parallel to a single boundary edge, then $\partial E_1\cap U$ and $\partial E_2\cap U$ intersect; see \reffig{NoEPD}.
But then \reflem{MarcLemma7} implies that $\partial E_1$ and $\partial E_2$ also intersect in another white face. But $\partial E_1$ is parallel to a single boundary edge in its second white face, so $\partial E_2$ cannot intersect it. This is a contradiction.
\begin{figure}
\caption{Left: $\partial E_1$ is not parallel to a boundary edge in $U$, hence $\partial E_1$ meets $\partial E_2$ in $U$. Right: $\partial E_1$ is parallel to a boundary edge. }
\label{Fig:NoEPD}
\end{figure}
So $E_1\cap U$ is parallel to a single boundary edge (and hence so is $E_2\cap U$). But then $E_1$ meets both white faces in arcs parallel to boundary edges. Isotoping $\partial E_1$ slightly, this gives a closed curve on $F$ meeting $\pi(L)$ in crossings. If the crossings are distinct, then because $\pi(L)$ is weakly twist reduced, the two crossings must bound white bigons between them, contradicting the fact that there are no white bigon regions in the diagram. If the crossings are not distinct, then $\partial E_1$ encircles a single crossing of $\pi(L)$. Repeating the argument with $E_2$ and $E_3$, and so on, we find that each $\partial E_i$ encircles a single crossing, and thus the original annulus $A$ is boundary parallel. This contradicts the fact that $A$ is essential.
So if there is an EPD $E$, it cannot meet $W$. Then it lies completely on one side of $F$. Its boundary runs through two shaded faces and two boundary faces. Slide slightly off the boundary faces; we see that its boundary defines a curve meeting the knot four times. Because $r(\pi(L),F)>4$, it cannot be a compressing disk, so $E$ is parallel into $F$. But then weakly twist-reduced implies its boundary encloses a string of white bigons, or its exterior in $F$ contains a string of white bigons meeting $\partial E$ in the same two crossings. Since there are no white bigons, $\partial E$ is boundary parallel, and the EPD is not essential.
\end{proof}
\begin{lemma}\label{Lem:NoParabIncomp}
Let $\pi(L)$ be a weakly twist-reduced, weakly generalised alternating diagram on a generalised projection surface $F$ in a 3-manifold $Y$.
Suppose $F$ has genus at least one, all regions of $F{\smallsetminus}\pi(L)$ are disks, $r(\pi(L),F)>4$, and there are no white bigon regions. Let $A$ be an essential annulus embedded in $M_S = X{\backslash \backslash} S$, disjoint from the parabolic locus, with $\partial A\subset \widetilde{S}$. Then $A$ cannot be parabolically incompressible.
\end{lemma}
\begin{proof}
Suppose $A$ is parabolically incompressible and put $A$ into normal form; \refprop{NonnegArea} implies $W$ cuts $A$ into squares $E_1, \dots, E_n$. Representativity $r(\pi(L),F)>4$ implies each square is parallel into $F$. Note that if a component of intersection $E_i\cap W$ is parallel to a boundary edge, then the disk of $W$ bounded by $E_i\cap W$, the boundary edge, and $\widetilde{S}$ defines a parabolic compression disk for $A$, contradicting the fact that $A$ is parabolically incompressible. So each component of $E_i\cap W$ cannot be parallel to a boundary edge.
Again superimpose all squares $E_1, \dots, E_n$ on one of the chunks. The squares are glued in white faces, and cut off more than a single boundary edge in each white face, so $\partial E_i$ must intersect $\partial E_{i+1}$ in a white face; see again \reffig{NoEPD}. Then \reflem{MarcLemma7} implies $\partial E_i$ intersects $\partial E_{i+1}$ in both of the white faces it meets. Similarly, $\partial E_i$ intersects $\partial E_{i-1}$ in both its white faces. Because $E_{i-1}$ and $E_{i+1}$ lie in the same chunk, they are disjoint (or $E_{i-1} = E_{i+1}$, but this makes $A$ a M\"obius band rather than an annulus).
This is possible only if $E_{i-1}$, $E_i$, and $E_{i+1}$ line up as in \reffig{FusedUnits} left, bounding tangles as shown. Lackenby calls such tangles \emph{units} \cite{lac04}. Then all $E_j$ form a cycle of such tangles, as in \reffig{FusedUnits} right.
\begin{figure}
\caption{Left: $E_{i-1}$, $E_i$, and $E_{i+1}$ line up, bounding tangles (units). Right: the squares $E_j$ form a cycle of such units.}
\label{Fig:FusedUnits}
\end{figure}
Since all white regions are disks, the inside and outside of the cycle are disks. Also, all $E_i$ bound disks on $F$. But then $F$ is the union of two disks and an annulus made up of disks; this is a sphere. This contradicts the fact that the genus of $F$ is at least 1.
\end{proof}
\begin{proof}[Proof of \refthm{chiguts}]
First we rule out essential annuli with boundary components on $\partial Y$. Suppose $A$ is an essential annulus in $M_S$ with both boundary components on $\partial Y$. Because $Y{\smallsetminus} N(F)$ is $\partial$-anannular by assumption, $A$ must meet $N(F)$, and hence it meets a white face of the chunk decomposition. When we put $A$ into normal form, it decomposes into pieces, each with combinatorial area zero. The fact that all regions of $F{\smallsetminus} \pi(L)$ are disks, along with \refprop{NonnegArea}, implies that each normal piece is a disk meeting four interior edges. But then $A$ intersects $\widetilde{S}$, hence is not embedded in $M_S$.

Now suppose $A$ is an essential annulus in $M_S$ with only one boundary component on $\partial Y$ and the other on $\widetilde{S}$. Because regions of $F{\smallsetminus}\pi(L)$ are disks, the boundary component of $A$ on $\widetilde{S}$ must run through both white and shaded faces, and thus again $A$ is decomposed into squares in the chunk decomposition, each with two sides on $\widetilde{S}$. But then $A$ cannot have one boundary component on $\partial Y$.
Now we rule out essential annuli with both boundary components on $\widetilde{S}$. Suppose first that $\pi(L)$ has no white bigon regions. Then \reflem{NoParabComprAnnulus} and \reflem{NoParabIncomp} imply that there are no embedded essential annuli in $M_S$ with both boundary components on $\widetilde{S}$.
It follows that $\chi(\operatorname{guts}(M_S)) = \chi(M_S) = \chi(F)+\chi(\partial Y)/2-r_W$.
If $\pi(L)$ contains white bigon regions, then
replace each string of white bigon regions in the diagram $\pi(L)$ by a single crossing, to obtain a new link. Since white bigons lead to product regions, the guts of the shaded surface of the new link agrees with $\operatorname{guts}(M_S)$. Hence $\chi(\operatorname{guts}(M_S)) = \chi(M_S) = \chi(F)+\chi(\partial Y)/2-r_W$ in this case.
An identical argument applies to $M_W$, replacing the roles of $W$ and $S$ above.
\end{proof}
\begin{theorem}\label{Thm:volguts}
Let $\pi(L)$ be a weakly twist-reduced, weakly generalised alternating diagram on a generalised projection surface $F$ in a 3-manifold $Y$.
Suppose
that if $\partial Y\neq \emptyset$, then $\partial Y$ is incompressible in $Y{\smallsetminus} N(F)$. Suppose also that $Y{\smallsetminus} N(F)$ is atoroidal and $\partial$-anannular. Finally suppose $F$ has genus at least one, that all regions of $F{\smallsetminus}\pi(L)$ are disks, and that $r(\pi(L),F)>4$.
Then $Y{\smallsetminus} L$ is hyperbolic and
\[\operatorname{vol}(Y{\smallsetminus} L)\geq \frac{v_8}{2}\left(\operatorname{tw}(\pi(L))-\chi(F) -
\chi(\partial Y)\right).\]
\end{theorem}
\begin{proof}
The fact that $Y{\smallsetminus} L$ is hyperbolic follows from \refthm{Hyperbolic}.
Let $\Gamma$ be the $4$-regular graph associated to $\pi(L)$ by replacing each twist-region with a vertex.
Let $|v(\Gamma)|$ denote the number of vertices of $\Gamma$ and $|f(\Gamma)|$ the number of regions of $F{\smallsetminus}\Gamma$. Because $\Gamma$ is 4-valent, the number of edges is $2|v(\Gamma)|$. Vertices of $\Gamma$ correspond to twist regions of $\pi(L)$, and regions of $F{\smallsetminus}\Gamma$ correspond to non-bigon regions of $F{\smallsetminus}\pi(L)$, so
\[\chi(F)=|v(\Gamma)|-2|v(\Gamma)|+|f(\Gamma)|=-\operatorname{tw}(\pi(L))+r_S+r_W.\]
Then applying Theorems~\ref{Thm:agolguts} and~\ref{Thm:chiguts} gives
\begin{align*}
\operatorname{vol}(X) \ &\geq \ -\frac{1}{2}v_8\chi(\operatorname{guts}(M_S))-\frac{1}{2}v_8\chi(\operatorname{guts}(M_W)) \\
\ &= \ -\frac{1}{2}v_8(2\chi(F) + \chi(\partial Y) -r_S-r_W) \\
\ &= \ \frac{1}{2}v_8(\operatorname{tw}(\pi(L))-\chi(F)-\chi(\partial Y)). \qedhere
\end{align*}
\end{proof}
Again the restriction on representativity is best possible for \refthm{volguts}: the knot $10_{161}$, shown in \reffig{LowVolEg}, has a generalised alternating projection onto a Heegaard torus with $r(\pi(L),F)=4$, 10 twist regions, and 11 crossings, but $\operatorname{vol}(10_{161})\approx 5.6388< 5v_8$. In this case, there exists an $I$-bundle component which does not come from a twist region.
\begin{figure}
\caption{A generalised alternating diagram of the knot $10_{161}$ on a Heegaard torus.}
\label{Fig:LowVolEg}
\end{figure}
\section{Exceptional fillings}\label{Sec:Filling}
This section combines results of the previous sections to address questions on the geometry of Dehn fillings of weakly generalised alternating links. These questions can be addressed in two ways, one geometric and the other combinatorial. Both arguments have advantages. We compare results in \refrem{CompareFilling}.
\subsection{Geometry and slope lengths}
Let $M$ be a manifold with torus boundary whose interior admits a hyperbolic structure. Recall that a \emph{slope} on $\partial M$ is an isotopy class of essential simple closed curves. A slope inherits a length from the hyperbolic metric, by measuring a geodesic representative on the (Euclidean) boundary of an embedded horoball neighborhood of all cusps. The 6--Theorem \cite{ago00, lac00} implies that if a slope has length greater than six, Dehn filling along that slope will result in a hyperbolic manifold. We will apply the 6--Theorem to bound exceptional fillings. First, we need to bound the lengths of slopes on weakly generalised alternating links, and this can be done by applying results of Burton and Kalfagianni~\cite{bk17}. Burton and Kalfagianni give bounds on slope lengths for hyperbolic knots that admit a pair of essential surfaces. By \refthm{hress}, weakly generalised alternating links admit such surfaces. Moreover, \refthm{Hyperbolic} gives conditions that guarantee the link complement is hyperbolic. Putting these together, we obtain the following.
\begin{theorem}\label{Thm:SlopeLengths}
Let $\pi(K)$ be a weakly generalised alternating diagram of a knot $K$ on a generalised projection surface $F$ in a closed 3-manifold $Y$ such that $Y{\smallsetminus} N(F)$ is atoroidal, and such that the hypotheses of \refthm{Hyperbolic} are satisfied. Let $\mu$ denote the meridian slope on $\partial N(K)$, i.e.\ the slope bounding a disk in $Y$, and let $\lambda$ denote the shortest slope with intersection number $1$ with $\mu$. Let $c$ be the number of crossings of $\pi(K)$. Then the lengths $\ell(\mu)$ and $\ell(\lambda)$ satisfy:
\[ \ell(\mu) \leq 3 - \frac{3\chi(F)}{c}, \quad\mbox{and}\quad \ell(\lambda) \leq 3(c-\chi(F)). \]
\end{theorem}
\begin{proof}
By \refthm{Hyperbolic}, the interior of $Y{\smallsetminus} K$ is hyperbolic, and by \refthm{hress}, the checkerboard surfaces $\Sigma$ and $\Sigma'$ are $\pi_1$-essential. Let $\chi$ denote the sum $|\chi(\Sigma)| + |\chi(\Sigma')|$. Then \cite[Theorem~4.1]{bk17} implies the slope lengths of $\mu$ and $\lambda$ are bounded by:
\[ \ell(\mu) \leq\frac{6\chi}{i(\partial\Sigma, \partial\Sigma')}, \quad\mbox{and}\quad
\ell(\lambda) \leq 3\chi, \]
where $i(\partial\Sigma, \partial\Sigma')$ denotes the intersection number of $\partial\Sigma$ and $\partial\Sigma'$ on $\partial N(L)$. (Note \cite[Theorem~4.1]{bk17} is stated for knots in $S^3$, but the proof applies more broadly to our situation.)
Because $\Sigma$ and $\Sigma'$ are checkerboard surfaces and $\pi(K)$ is alternating, $i(\partial\Sigma, \partial\Sigma')=2c$. Moreover, the sum of their Euler characteristics can be shown to be $\chi(F)-c$.
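This Euler characteristic count can be sketched as follows, assuming all regions of $F{\smallsetminus}\pi(K)$ are disks, so that each checkerboard surface deformation retracts to its regions with a half-twisted band attached at each crossing:

```latex
% Sketch of the count: r_S, r_W denote the numbers of shaded and white
% regions, all assumed to be disks; the graph pi(K) is 4-valent with
% c vertices, hence 2c edges.
\begin{gather*}
\chi(F) = c - 2c + (r_S + r_W)
  \quad\Longrightarrow\quad r_S + r_W = \chi(F) + c,\\
\chi(\Sigma) = r_S - c, \qquad \chi(\Sigma') = r_W - c
  \quad\text{(one band per crossing, each lowering $\chi$ by $1$)},\\
\chi(\Sigma) + \chi(\Sigma') = (r_S + r_W) - 2c = \chi(F) - c.
\end{gather*}
```

In particular $\chi = |\chi(\Sigma)|+|\chi(\Sigma')| = c-\chi(F)$, provided both surfaces have non-positive Euler characteristic.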
The result follows by plugging in these values of $\chi$ and $i(\partial\Sigma,\partial\Sigma')$.
\end{proof}
The results of \refthm{SlopeLengths} should be compared to those of Adams \emph{et al.}\ in \cite{acf06}: for regular alternating knots in $S^3$, they found that $\ell(\mu) \leq 3-6/c$ and $\ell(\lambda) \leq 3c-6$, matching our results exactly when $F=S^2$.
The bounds on slope length lead immediately to bounds on exceptional Dehn fillings, and to bounds on volume change under Dehn filling.
\begin{corollary}\label{Cor:ExceptionalLengthGeom}
Suppose $\pi(K)$, $F$, and $Y$ satisfy the hypotheses of \refthm{SlopeLengths}. Let $\sigma = p\mu + q\lambda$ be a slope on $\partial N(K)$. If $|q|>5.373(1-\chi(F)/c)$ then the Dehn filling of $X(K)$ along slope $\sigma$ yields a hyperbolic manifold.
\end{corollary}
\begin{proof}
As in \cite{bk17}, the Euclidean area of a maximal cusp $C$ in a one-cusped $3$-manifold $X$ satisfies
\[ \operatorname{area}(C) \leq \frac{\ell(\sigma)\ell(\mu)}{|i(\sigma,\mu)|}. \]
Work of Cao and Meyerhoff \cite{cm01} implies $\operatorname{area}(C)\geq 3.35$. Plugging in the bound on $\ell(\mu)$ from \refthm{SlopeLengths}, along with the fact that $|i(\sigma, \mu)| = |q|$, and solving for $\ell(\sigma)$, we obtain
\[ \ell(\sigma) \geq \frac{3.35c}{3(c-\chi(F))}\,|q|. \]
Thus if $|q|>(18/3.35)(1-\chi(F)/c) > 5.373(1-\chi(F)/c)$, then $\ell(\sigma)>6$ and the 6--Theorem gives the result.
\end{proof}
\begin{corollary}\label{Cor:VolumeDF}
Suppose $\pi(K)$, $F$, and $Y$ satisfy the hypotheses of \refthm{SlopeLengths}, and also
$Y$ is either closed or has only torus boundary components, and
$r(\pi(K),F)>4$. Let $\sigma = p\mu + q\lambda$ be a slope on $\partial N(K)$. If $|q|>5.6267(1-\chi(F)/c)$ then the Dehn filling of $X(K)$ along slope $\sigma$ yields a hyperbolic manifold $N$ with volume bounded by
\begin{gather*}
\operatorname{vol}(Y{\smallsetminus} K) \geq \operatorname{vol}(N) \geq \left( 1-\left(\frac{3.35\,c}{3(c-\chi(F))}\right)^2\right)^{3/2}\operatorname{vol}(Y{\smallsetminus} K) \\
\geq \left( 1-\left(\frac{3.35\,c}{3(c-\chi(F))}\right)^2\right)^{3/2} \frac{v_8}{2}(\operatorname{tw}(\pi(K))-\chi(F)).
\end{gather*}
\end{corollary}
\begin{proof}
If $|q|>5.6267(1-\chi(F)/c)$, then $\ell(\sigma)>2\pi$ and the hypotheses of \cite[Theorem~1.1]{fkp08} hold. The lower bound on volume comes from that theorem, and from \refthm{volguts}.
\end{proof}
\subsection{Combinatorial lengths}
We now turn to combinatorial results on Dehn filling. In \cite{lac00}, Lackenby gives conditions that guarantee that an alternating link on $S^2$ in $S^3$ admits no exceptional Dehn fillings, using the dual of angled polyhedra. We reinterpret his arguments in terms of angled polyhedra, as in \cite{fp07}, and generalise to angled chunks. The main theorem is the following.
\begin{theorem}\label{Thm:DehnFilling}
Let $\pi(L)$ be a weakly twist-reduced, weakly generalised alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$.
Suppose
$Y{\smallsetminus} N(F)$ is atoroidal and $\partial$-anannular, and $F$ has genus at least $1$.
Finally, suppose that the regions of $F{\smallsetminus}\pi(L)$ are disks, and $r(\pi(L),F)>4$.
For some component(s) $K_i$ of $L$, pick surgery coefficient $p_i/q_i$ (in lowest terms) satisfying $|q_i|> 8/\operatorname{tw}(K_i,\pi(L))$. Then the manifold obtained by Dehn filling $Y{\smallsetminus} L$ along the slope(s) $p_i/q_i$ is hyperbolic.
\end{theorem}
Here $\operatorname{tw}(K,\pi(L))$ denotes the number of twist regions that the component $K$ runs through. See \cite{im16} for the complete classification of exceptional fillings on hyperbolic alternating knots when $F=S^2$ and $Y=S^3$. \refthm{DehnFilling} has the following corollaries.
\begin{corollary}\label{Cor:DehnBoundOnq}
Suppose $\pi(L)$, $F$, and $Y$ satisfy the hypotheses of \refthm{DehnFilling}.
For some component(s) $K_i$ of $L$, pick surgery coefficient $p_i/q_i$ (in lowest terms) satisfying $|q_i|> 4$. Then the manifold obtained by Dehn filling $Y{\smallsetminus} L$ along the slope(s) $p_i/q_i$ is hyperbolic.
\end{corollary}
\begin{proof}
Suppose that some component $K_i$ of $L$ passes through only one twist region. Then there exists a curve $\gamma$ on $F$ which is parallel to $K_i$ and meets $\pi(L)$ at most once. If $\gamma$ meets $\pi(L)$ once, then $\pi(L)$ is not checkerboard colourable. If $\gamma$ is essential in $F$ and does not meet $\pi(L)$, then some region of $F{\smallsetminus} \pi(L)$ is not a disk. If $\gamma$ is trivial in $F$ and does not meet $\pi(L)$, then $\gamma$ can be isotoped across the twist region to meet $\pi(L)$ exactly twice, contradicting the fact that $\pi(L)$ is weakly prime. Hence $\operatorname{tw}(K_i,\pi(L))\geq 2$, so $8/\operatorname{tw}(K_i,\pi(L))\leq 4$, and by \refthm{DehnFilling} any filling along $K_i$ with $|q_i|> 4$ will be hyperbolic.
\end{proof}
\begin{corollary}\label{Cor:DehnGenusBound}
Let $\pi(K)$ be a weakly twist-reduced, weakly generalised alternating diagram of a knot $K$ on a generalised projection surface $F$ in a 3-manifold $Y$.
Suppose $Y{\smallsetminus} N(F)$ is atoroidal and $\partial$-anannular, and $F$ has genus at least $5$.
Finally, suppose that the regions of $F{\smallsetminus}\pi(K)$ are disks, and $r(\pi(K),F)>4$. Then all non-trivial fillings of $Y{\smallsetminus} K$ are hyperbolic.
\end{corollary}
\begin{proof}
Let $\Gamma'$ be a graph obtained by replacing each twist region of $\pi(K)$ by a vertex. If $t$ is the number of twist-regions in $\pi(K)$ and $f$ is the number of non-bigon faces, then Euler characteristic gives $-t+f=2-2g(F)$. Since $\pi(K)$ is checkerboard-colourable, it follows that $f\geq 2$ and hence that $t\geq 2g(F)\geq 10$. But then \refthm{DehnFilling} shows that any filling along $K$ where $q\not=0$ will be hyperbolic.
\end{proof}
\begin{remark}\label{Rem:CompareFilling}
Compare \refcor{ExceptionalLengthGeom} to \refthm{DehnFilling}. The hypotheses are slightly weaker for the corollary, and indeed, Burton and Kalfagianni's geometric estimates on slope length \cite{bk17} apply anytime a knot is hyperbolic with two essential spanning surfaces. Thus it applies to hyperbolic adequate knots, for which $r(\pi(K),F)=2$. However, the requirement on $|q|$ is also stronger in \refcor{ExceptionalLengthGeom}: it is necessary that $|q|\geq 6$ whenever $F$ has positive genus. In contrast, \refthm{DehnFilling} requires stronger restrictions on the diagram, namely $r(\pi(L),F)>4$, but works for links and only needs $|q|>4$, and often works with lower $|q|$. For example, when the diagram has a high number of twist regions, e.g.\ $\operatorname{tw}(K_i,\pi(L))>8$, \refthm{DehnFilling} will apply for any $q\not=0$.
\end{remark}
To prove \refthm{DehnFilling}, we will define a combinatorial length of slopes, and show that if the combinatorial length is at least $2\pi$, then the Dehn filling is hyperbolic. First, we need to extend our definition of combinatorial area to surfaces that may not be embedded, and may have boundary components lying in the interior of a chunk.
\begin{definition}\label{Def:Admissible}
Let $S$ be the general position image of a map of a connected compact surface into a chunk $C$. Then $S$
is \emph{admissible} if:
\begin{itemize}
\item If $S$ is closed, it is $\pi_1$-injective in $C$.
\item $S$ and $\partial S$ are transverse to all faces, boundary faces, and edges of $C$.
\item $\partial S{\smallsetminus}\partial C$ is a (possibly empty) collection of embedded arcs with endpoints in interior faces of $C$, or embedded closed curves.
\item $\partial S\cap \partial C$ consists of a collection of immersed closed curves or immersed arcs.
\item If an arc $\gamma_i$ of $\partial S$ lies entirely in a single face of $C$, then either the arc is embedded in that face, or the face is not simply connected, and every subarc of $\gamma_i$ that forms a closed curve on the face does not bound a disk on that face.
\item Each closed curve component satisfies conditions (3) through (5) of the definition of normal, \refdef{NormalSurface}.
\item Each arc component satisfies conditions (4) and (5) of \refdef{NormalSurface}.
\end{itemize}
A surface $S'$ in an irreducible, $\partial$-irreducible 3-manifold $M$ with a chunk decomposition is \emph{admissible} if each component of intersection of $S'$ with a chunk is admissible.
\end{definition}
We noted above that admissible surfaces do not need to be embedded. In fact, they do not even need to be immersed. For example, an admissible surface could be the image of a compact surface under a continuous map. If $S$ is any immersed surface that is $\pi_1$-injective and boundary $\pi_1$-injective in a 3-manifold $M$ that is irreducible and $\partial$-irreducible, we may homotope $S$ to be admissible. A similar result holds more generally for images of compact surfaces in general position.
We now adjust the definition of combinatorial area to include admissible surfaces.
\begin{definition}\label{Def:CombAreaAdmissible}
Let $S$ be an admissible surface in an angled chunk $C$. Let $\sigma(\partial S{\smallsetminus}\partial C)$ denote the number of components of the intersection of $\partial S$ with the interior of $C$. For each point $v$ on $\partial S$ lying between an interior face of $C$ and an arc of $\partial S{\smallsetminus}\partial C$, define the exterior angle $\epsilon(v) = \pi/2$. Let $v_1, \dots, v_n \in \partial S$ be all the points where $\partial S$ meets edges of $C$, and all points on $\partial S$ between an interior face of $C$ and an arc of $\partial S{\smallsetminus}\partial C$. The \emph{combinatorial area} of $S$ is
\[ a(S) = \sum_{i=1}^n \epsilon(v_i) - 2\pi\chi(S) + 2\pi\sigma(\partial S{\smallsetminus}\partial C).\]
Let $S'$ be an admissible surface in a 3-manifold $M$ with an angled chunk decomposition. Then the combinatorial area of $S'$ is the sum of the combinatorial areas of its components of intersection with the chunks.
\end{definition}
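Since the area formula is purely combinatorial, it can be evaluated mechanically. The following sketch is purely illustrative; the function name and the encoding of a surface by its list of exterior angles are ours and not part of the formal development.

```python
import math

def comb_area(exterior_angles, euler_char, sigma):
    # a(S) = (sum of exterior angles) - 2*pi*chi(S) + 2*pi*sigma(dS \ dC)
    return sum(exterior_angles) - 2 * math.pi * euler_char + 2 * math.pi * sigma

# A normal disk (chi = 1, sigma = 0) crossing four edges, each contributing
# exterior angle pi/2, has combinatorial area 4*(pi/2) - 2*pi = 0.
zero_area = comb_area([math.pi / 2] * 4, 1, 0)
```

Zero-area normal disks of this kind reappear in the equality cases of the area estimates below.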
\begin{proposition}[Gauss--Bonnet for admissible surfaces]\label{Prop:GaussBonnetAdmissible}
Let $S$ be an admissible surface in an angled chunk decomposition. Let $\sigma(\partial S {\smallsetminus} \partial M)$ be the number of arcs of intersection of $\partial S{\smallsetminus} \partial M$ and the chunks. Then
\[ a(S) = -2\pi\chi(S) + 2\pi\sigma(\partial S{\smallsetminus}\partial M). \]
\end{proposition}
\begin{proof}
The proof is by induction, identical to that of \refprop{GaussBonnet}, except keeping track of a term $\sigma$ at each step, which remains unchanged throughout the proof.
\end{proof}
\begin{definition}\label{Def:CombLength}
Let $C$ be an angled chunk in a chunk decomposition of $M$, and let $S$ be an admissible surface in $C$ that intersects at least one boundary face. Let $\gamma$ be an arc of intersection of $S$ with a boundary face of $C$. Define the \emph{length of $\gamma$ relative to $S$} to be
\[ \ell(\gamma,S) = \frac{a(S)}{|\partial S\cap \partial M|}. \]
Suppose $\gamma$ is an immersed arc in a component of $\partial M$ meeting boundary faces, with $\gamma$ disjoint from vertices on $\partial M$. Let $\gamma_1, \dots, \gamma_n$ denote the arcs of intersection of $\gamma$ with boundary faces. Suppose that each $\gamma_i$ is embedded and no $\gamma_i$ has endpoints on the same boundary edge. For each $i$, let $S_i$ be an admissible surface in the corresponding chunk whose boundary contains $\gamma_i$. Then $S=\cup_{i=1}^n S_i$ is an \emph{inward extension of $\gamma$} if $\partial S_i$ agrees with $\partial S_{i+1}$ on their shared interior face and, when $\gamma$ is closed, $\partial S_n$ agrees with $\partial S_1$ on their shared face.
Define the \emph{combinatorial length} of $\gamma$ to be
\[ \ell_c(\gamma) = \inf \left\{ \sum_{i=1}^n \ell(\gamma_i, S_i) \right\}, \]
where the infimum is taken over all inward extensions of $\gamma$.
\end{definition}
The following is a generalisation of \cite[Proposition~4.8]{lac00}.
\begin{proposition}\label{Prop:AdmissibleSlopeLength}
Let $S$ be an admissible surface in an angled chunk decomposition on $M$. Let $\gamma_1, \dots, \gamma_m$ be the components of $\partial S\cap \partial M$ on torus components of $\partial M$ made up of boundary faces, each $\gamma_j$ representing a non-zero multiple of some slope $s_{i_j}$. Then
\[ a(S) \geq \sum_{j=1}^m \ell_c(s_{i_j}). \]
\end{proposition}
\begin{proof}
Intersections of $S$ with the chunks form an inward extension of $\gamma_j$. Sum the lengths of arcs relative to these intersections.
\end{proof}
Propositions~\ref{Prop:GaussBonnetAdmissible} and~\ref{Prop:AdmissibleSlopeLength} allow us to generalise the following result of Lackenby.
\begin{theorem}[Combinatorial $2\pi$-theorem, \cite{lac00}]\label{Thm:Comb2PiThm}
Let $M$ be a compact orientable 3-manifold with an angled chunk decomposition. Suppose $M$ is atoroidal and not Seifert fibered and has boundary
containing
a non-empty union of tori. Let $s_1, \dots, s_n$ be a collection of slopes on $\partial M$, at most one for each component made up of boundary faces (but no slopes on exterior faces that are tori). Suppose for each $i$, $\ell_c(s_i)>2\pi$. Then the manifold $M(s_1, \dots, s_n)$ obtained by Dehn filling $M$ along these slopes is hyperbolic.
\end{theorem}
\begin{proof}
The proof is nearly identical to that of \cite[Theorem~4.9]{lac00}. In particular, if $M(s_1, \dots, s_n)$ is toroidal, annular, reducible, or $\partial$-reducible, then it contains an essential punctured torus, annulus, sphere, or disk $S$ with punctures on slopes $s_i$. The punctured surface $S$ can be put into normal form. \refprop{GaussBonnetAdmissible} implies $a(S) = -2\pi\chi(S) \leq 2\pi|\partial S|$, and \refprop{AdmissibleSlopeLength} implies $a(S)>2\pi|\partial S|$, a contradiction.
If not all components of $\partial M$ are filled, then Thurston's hyperbolisation theorem
immediately implies $M(s_1, \dots, s_n)$ is hyperbolic. If $M(s_1, \dots, s_n)$ is closed, hyperbolicity will follow when we show it has word hyperbolic fundamental group. We use Gabai's ubiquity theorem \cite{gab98} in the form stated as \cite[Theorem~2.1]{lac00}.
Suppose $M(s_1, \dots, s_n)$ is closed and the core of a surgery solid torus has finite order in $\pi_1(M(s_1, \dots, s_n))$. Then there exists a singular disk with boundary on the core. Putting the disk into general position with respect to all cores and then drilling, we obtain a punctured singular disk $S$ in $M$ with all boundary components on multiples of the slopes $s_i$. If necessary, replace $S$ with a $\pi_1$-injective and boundary $\pi_1$-injective surface with the same property. Because the boundary of $S$ meets $\partial M$ in boundary faces, and runs monotonically through these faces, we may homotope $S$ so that each arc of $\partial S$ in a boundary face is embedded (note this is not necessarily possible if $\partial S$ lies on an exterior face). Similarly, moving singular points away from edges and faces of the chunk decomposition, then using the fact that $M$ is irreducible and boundary irreducible, we may homotope $S$ to satisfy the other requirements of an admissible surface. Then again \refprop{GaussBonnetAdmissible} implies $a(S)=-2\pi\chi(S) \leq 2\pi|\partial S|$ and \refprop{AdmissibleSlopeLength} implies $a(S)>2\pi|\partial S|$, a contradiction. Thus each core of a surgery solid torus has infinite order.
Let $\gamma$ be a curve in $M$ that is homotopically trivial in $M(s_1, \dots, s_n)$, and arrange $\gamma$ to meet the chunk decomposition transversely. Let $S$ be a compact planar surface in $M$ with $\partial S$ consisting of nonzero multiples of the slopes $s_i$ as well as $\gamma$. If necessary, replace $S$ with a $\pi_1$-injective and boundary $\pi_1$-injective surface with the same property, and adjust $S$ to be admissible. Let $\sigma(\gamma)$ be the number of arcs of intersection between $\gamma$ and the chunks of $M$. Let $\epsilon>0$ be such that $\ell_c(s_i)\geq 2\pi+\epsilon$ for all $i$. Then
\begin{align*}
(2\pi+\epsilon)|S\cap \partial M| & \leq \sum_{j=1}^{|S\cap \partial M|} \ell_c (s_{i_j}) \hspace{1.1in} \mbox{by definition of combinatorial length}\\
& \leq a(S) = -2\pi\chi(S) + 2\pi\sigma(\gamma) \quad \mbox{ by \refprop{AdmissibleSlopeLength} } \\
& < 2\pi|S\cap \partial M| + 2\pi\sigma(\gamma) \quad\quad\quad\: \mbox{ since } \chi(S)=2-(|S\cap\partial M|+1), \mbox{ so}
\end{align*}
\[ |S\cap \partial M| < (2\pi/\epsilon)\sigma(\gamma). \]
We have found a constant $c=2\pi/\epsilon$, depending on $M$ and $s_1, \dots, s_n$ but independent of $\gamma$ and $S$, such that $|S\cap \partial M|\leq c\,\sigma(\gamma)$. Since $\sigma(\gamma)$ is the length of $\gamma$ in a simplicial metric on the chunk decomposition, the ubiquity theorem \cite[Theorem~2.1]{lac00} implies $\pi_1(M(s_1, \dots, s_n))$ is word hyperbolic.
\end{proof}
\begin{theorem}\label{Thm:CombLengthWGA}
Let $\pi(L)$ be a weakly twist-reduced weakly generalised alternating diagram of a link $L$ on a generalised projection surface $F$ in a 3-manifold $Y$. Suppose $Y{\smallsetminus} N(F)$ is atoroidal and $\partial$-anannular, $F$ has genus at least one, all regions of $F{\smallsetminus}\pi(L)$ are disks, and $r(\pi(L),F)>4$. Then the combinatorial length of the slope $p/q$ on a component $K$ of $L$ is at least $|q|\operatorname{tw}(K,\pi(L))\pi/4$.
\end{theorem}
\begin{proof}[Proof of \refthm{DehnFilling} from \refthm{CombLengthWGA}]
By \refthm{Hyperbolic}, $Y{\smallsetminus} L$ is hyperbolic. Thus it is atoroidal and not Seifert fibered. By \refthm{CombLengthWGA}, the combinatorial length of $p/q$ is at least $|q|\operatorname{tw}(K,\pi(L))\pi/4$. By hypothesis, $|q|>8/\operatorname{tw}(K,\pi(L))$, so $\ell_c(p/q)>2\pi$.
Since $K$ lies on $F$, in the angled chunk decomposition of the link exterior, each slope on $K$ lies on a torus boundary component made up of boundary faces and not exterior faces.
Then \refthm{Comb2PiThm} implies that the manifold obtained by Dehn filling $K$ along slope $p/q$ is hyperbolic.
\end{proof}
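The numerical threshold in this proof is elementary arithmetic: the bound $|q|\operatorname{tw}(K,\pi(L))\pi/4$ exceeds $2\pi$ exactly when $|q|\operatorname{tw}(K,\pi(L))>8$. The following check is purely illustrative, and the helper names are ours:

```python
import math

def length_lower_bound(q, tw):
    # lower bound on the combinatorial length of slope p/q, from the theorem above
    return abs(q) * tw * math.pi / 4

def criterion_holds(q, tw):
    # sufficient condition of the combinatorial 2*pi-theorem: length > 2*pi
    return length_lower_bound(q, tw) > 2 * math.pi

# With tw = 2 twist regions the criterion needs |q| > 4,
# while tw = 9 > 8 already suffices for any q != 0.
checks = (criterion_holds(5, 2), criterion_holds(1, 9), criterion_holds(4, 2))
```

This matches the earlier remark comparing theorems: a diagram with many twist regions yields hyperbolic fillings for every non-trivial slope.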
The remainder of the section is devoted to the proof of \refthm{CombLengthWGA}.
\begin{lemma}[See Lemmas~4.2 and~5.9 of \cite{lac00}]\label{Lem:Marc4.2}
Let $\pi(L)$ be a weakly twist-reduced, weakly generalised alternating diagram on a generalised projection surface $F$ in a 3-manifold $Y$.
Suppose all regions of $F{\smallsetminus}\pi(L)$ are disks and $r(\pi(L),F)>4$. If $S$ is an orientable admissible surface in a chunk $C$ of $X:=Y{\smallsetminus} N(L)$, and $S\cap \partial X\neq \emptyset$, and further $\partial S$ does not agree with the boundary of a normal surface, then $a(S)>0$.
Furthermore, if $S$ is normal or admissible and $a(S)>0$, then $a(S)/|S\cap \partial X| \geq \pi/4$.
\end{lemma}
\begin{proof}
Suppose $a(S)\leq 0$. By definition, $a(S) = \sum \epsilon(v_i) - 2\pi\chi(S) + 2\pi\sigma(\partial S{\smallsetminus} \partial X)$.
If $\chi(S)<0$, then the combinatorial area is positive. So $\chi(S)\geq 0$. Since $S$ meets a boundary face, it cannot be a sphere or torus. It follows that $S$ is a disk or annulus. If an annulus, then $a(S) \geq \sum\epsilon(v_i)$, and since $S\cap \partial X\neq \emptyset$, at least two terms in the sum come from intersections with boundary edges, so the sum is strictly positive. So $S$ is a disk.
Next, note if $\sigma(\partial S{\smallsetminus} \partial X)>1$, then $a(S)>0$. If $\sigma(\partial S{\smallsetminus} \partial X)=1$ and $a(S)\leq 0$, then $\sum \epsilon(v_i)=0$. Again this is impossible: two terms in the sum arise as endpoints of an arc in $\partial S{\smallsetminus} \partial X$, and each has angle $\epsilon(v_i)=\pi/2$. So $\sigma(\partial S{\smallsetminus} \partial X)=0$.
Now as in the proof of \cite[Lemma~4.2]{lac00}, consider the number of intersections of $S$ with interior edges of the chunk. If $S$ meets no interior edges, then it meets at least two boundary faces
(by representativity and weakly prime conditions), so $a(S)\geq 0$.
If $\partial S$ is embedded, the Loop Theorem gives an embedded disk with the same boundary, but by assumption, $\partial S$ is not the boundary of a normal disk. Thus there is a point $p$ of intersection of $\partial S$ with itself.
Then there is an arc $\alpha$ of $\partial S$ running from $p$ to $p$ that meets $\partial X$ fewer times than $\partial S$, and a disk with boundary $\alpha$. If the boundary of the disk is not embedded, repeat. If it is embedded, the Loop Theorem gives a normal disk with area strictly less than that of $S$. \refprop{NonnegArea} then implies $a(S)>0$.
So now suppose $\partial S$ meets an interior edge. Again $\partial S$ is not embedded.
Then there is a point of self intersection $p$ in $\partial S$. Let $\alpha$ be an arc of $\partial S$ running from $p$ to $p$; we can ensure it meets fewer edges or boundary edges than $\partial S$. There exists a disk with boundary $\alpha$. If $\alpha$ is not embedded, we repeat until we obtain a disk with embedded boundary. By the Loop Theorem, we obtain a normal disk $D$ meeting strictly fewer edges and boundary edges than $\partial S$. \refprop{NonnegArea} implies $a(D)\geq 0$. So $a(S)>0$.
Finally, we show if $a(S)>0$ then $a(S)/|S\cap \partial X| \geq \pi/4$. By definition,
\[ \frac{a(S)}{|S\cap \partial X|} = \frac{\sum \epsilon(v_i) - 2\pi\chi(S) + 2\pi\sigma(\partial S{\smallsetminus} \partial X)}{|S\cap\partial X|}. \]
The sum $\sum \epsilon(v_i)$ breaks into $\pi|S\cap \partial X| + \pi\sigma(\partial S{\smallsetminus} \partial X) + \sum_{j} \epsilon(w_j)$ where $\{w_1,\dots,w_m\}$ denote the intersections of $\partial S$ with interior edges, and each $\epsilon(w_j)=\pi/2$.
Note $\chi(S)\leq 1$ because $S$ is not a sphere. Also, $\sigma(\partial S{\smallsetminus} \partial X)\geq 0$. Hence
\[ \frac{a(S)}{|S\cap \partial X|} \geq \frac{\sum_{w_j} \pi/2}{|S\cap\partial X|} + \pi - \frac {2\pi}{|S\cap\partial X|}. \]
If $|S\cap \partial X|\geq 3$, $a(S)/|S\cap\partial X| \geq 0 + \pi -2\pi/3 >\pi/4.$
If $|S\cap \partial X|=2$, $a(S)/|S\cap\partial X| \geq (\sum_{w_j} \pi/2)/2 + \pi - (2\pi)/2 = m\pi/4$. Since $a(S)>0$, it follows that $m\geq 1$ and the sum is at least $\pi/4$.
If $|S\cap \partial X|=1$, $a(S)/|S\cap\partial X| \geq \sum_{w_j} \pi/2 + \pi - 2\pi = m\pi/2 -\pi.$ Since $a(S)>0$, $\partial S$ meets at least three interior edges, so this is at least $\pi/2 >\pi/4$.
\end{proof}
\begin{remark}
The proof of \cite[Lemma~4.2]{lac00} had additional cases, because the definition of normal in that paper required $\partial S$ to avoid intersecting the same boundary face more than once, and the same interior edge more than once. Because our definition of normal, based on that of \cite{fg09}, does not have these restrictions, our proof is simpler.
\end{remark}
We now consider the boundary faces of an angled chunk. These are all squares. One diagonal of the square, running between vertices of the square that correspond to identified edges, forms a meridian of the link. The squares glue together to form what Adams calls a \emph{harlequin tiling} \cite{acf06}; see \reffig{HarlequinTiling}.
Unwind the boundary $\partial X$ in the longitude direction by taking the cover corresponding to a meridian. Assign an $x$-coordinate to the centre of each square, with adjacent coordinates of squares differing by $1$, as on the right of \reffig{HarlequinTiling}.
\begin{figure}
\caption{The boundary tiling of an alternating knot is called a harlequin tiling. Right: unwinding in the longitude direction gives a string of quads as shown, with coordinates in ${\mathbb{Z}}$.}
\label{Fig:HarlequinTiling}
\end{figure}
A curve $\gamma$ representing a non-zero multiple $k$ of the slope $p/q$ lifts to an arc $\widetilde{\gamma}$ starting at $x=0$, ending at $k|q|\operatorname{cr}(K,\pi(L))$, where $\operatorname{cr}(K,\pi(L))$ denotes the number of crossings met by $K$. Note that crossings of $\pi(L)$ are double-counted if both strands are part of $K$.
Let $\gamma_i$ denote the $i$-th intersection of $\widetilde{\gamma}$ with lifts of boundary squares. The arc $\gamma_i$ lies in one boundary square, and has some $x$-coordinate $x_i$. Define $x_i'$ to be the integer $x_i' = (x_{i-1}+x_{i+1})/2$. Say $\gamma_i$ is a \emph{skirting arc} if it has endpoints on adjacent edges of a boundary square.
Now let $S$ be an inward extension of $\gamma$. Each $\gamma_i$ is then part of the boundary of an admissible surface $S_i$ lying in a single chunk.
\begin{lemma}[See Lemmas~5.5, 5.6, 5.7, and 5.8 of \cite{lac00}]\label{Lem:Marc5.5-5.6-5.7-5.8}
The following hold for $S_i$, $\gamma_i$:
\begin{enumerate}
\item[(5.6)] If $S_i$ is admissible and $\gamma_i$ is a skirting arc, either $a(S_i)>0$ or $S_i$ is a boundary bigon.
\item[(5.7)] Let $\gamma_i$ and $\gamma_{i+1}$ be non-skirting arcs in boundary squares $B_i$ and $B_{i+1}$, such that the gluing map gluing a side of $S_i$ to a side of $S_{i+1}$ does not take the point on $B_i$ to that on $B_{i+1}$ by rotating (clockwise or counter-clockwise) past a bigon edge. Then at least one of $a(S_i)$, $a(S_{i+1})$ is positive.
\item[(5.8)] If exactly one of $\gamma_i$, $\gamma_{i+1}$ is a skirting arc, and $B_i$, $B_{i+1}$ satisfy the same hypothesis as in (5.7) above, and $x_i'\neq x_{i+1}'$, then at least one of $a(S_i)$, $a(S_{i+1})$ is positive.
\item[(5.5)] There are at least $k|q|$ arcs $\gamma_i$ with $x_i'\neq x_{i+1}'$.
\end{enumerate}
\end{lemma}
\begin{proof}
For (5.6), (5.7), and (5.8) we will assume the combinatorial areas are zero. Then \reflem{Marc4.2} implies the $S_i$ are normal, and \reflem{NormalSquare} implies each $S_i$ has one of three forms: form (1) meeting a boundary face and two interior edges, form (2) a boundary bigon, or form (3) meeting two boundary faces and interior faces of the same colour.
(5.6) If $\gamma_i$ is a skirting arc, $a(S_i)=0$, and $S_i$ is not a boundary bigon, it has form (1) or (3). But both of these give non-skirting arcs.
(5.7) If $a(S_i)=a(S_{i+1})=0$, then both are normal disks of form (1) or (3) of \reflem{NormalSquare} because the arcs are non-skirting. Then $\partial S_i$ and $\partial S_{i+1}$ glue in (without loss of generality) a white face of the chunk via a clockwise twist. As in \refsec{Hyperbolic}, superimpose $\partial S_i$ and $\partial S_{i+1}$ onto the boundary of the same chunk.
In the first case, $\partial S_i \cap \partial S_{i+1}$ must be nonempty. Then \reflem{MarcLemma7} implies the boundaries of the squares meet in opposite white faces. Since the diagram is weakly prime, this forces $\partial S_{i+1}$ and $\partial S_i$ to bound a bigon, as on the left of \reffig{5.7}, contradicting the assumption that there is no bigon edge adjacent to the boundary squares. In the case $S_i$ and $S_{i+1}$ are of form (3), weakly twist-reduced implies that both bound a string of bigons, and the two have just one edge between them implies that there is a single bigon between $\partial S_i$ and $\partial S_{i+1}$, as on the right of \reffig{5.7}. Again this is a contradiction.
\begin{figure}
\caption{Left: if normal disks $S_i$ and $S_{i+1}$ are of form (1), the weakly prime condition forces $\partial S_i$ and $\partial S_{i+1}$ to bound a bigon. Right: if both are of form (3), a single bigon lies between $\partial S_i$ and $\partial S_{i+1}$.}
\label{Fig:5.7}
\end{figure}
(5.8) If $a(S_i)=a(S_{i+1})=0$, either $S_i$ is a boundary bigon and $S_{i+1}$ is of form (3) or vice versa. They glue in a face via a twist. As in the previous argument, without loss of generality $S_i$ glues to $S_{i+1}$ in a white face by a single clockwise rotation. If $S_i$ is a boundary bigon, then the gluing implies $S_{i+1}$ bounds a bigon, and this contradicts our assumption on $B_i$, $B_{i+1}$. So $S_{i+1}$ is a boundary bigon and $S_i$ is of form (3). See \reffig{5.8}, left and middle.
In this case, $S_{i+1}$ encircles an edge adjacent to a bigon region bounded by $S_i$. The arc $\gamma_{i+1}$ is a skirting arc running vertically in the tiling of \reffig{HarlequinTiling} (vertical because it must cut off the edge meeting the boundary quad from the upper chunk twice; see \reffig{5.8} right), so
\[ x_i' = \frac{x_i +1 + x_i - 1}{2} = x_i \quad \mbox{and} \quad
x_{i+1}' = \frac{x_i + x_i}{2} = x_i. \]
This contradicts the assumption that $x_i'\neq x_{i+1}'$.
\begin{figure}
\caption{Left: If $S_{i+1}$ is a boundary bigon, then $S_i$ is of form (3) and encircles an edge adjacent to a bigon region (left and middle). Right: the skirting arc $\gamma_{i+1}$ runs vertically in the tiling.}
\label{Fig:5.8}
\end{figure}
(5.5)
As in \cite[Lemma~5.5]{lac00}: $|x_{i+1}' - x_i'|\leq 1$, and if $x_i'\neq x_{i+1}'$ then $\{ x_i',x_{i+1}'\} = \{x_i, x_{i+1}\}$. Thus for each $j\in {\mathbb{Z}}$ with $x_1'\leq j\leq x_1'+k|q|\operatorname{cr}(K, \pi(L))$, the pair $\{j, j+1\}$ occurs as $\{x_i', x_{i+1}'\}$ for some arc.
\end{proof}
\begin{proof}[Proof of \refthm{CombLengthWGA}]
Let $\gamma$ be a curve representing a nonzero multiple $k$ of the slope $p/q$, and let $S$ be an inward extension, with $S_i$ the intersections of $S$ with chunks as above.
There are $2\operatorname{tw}(K,\pi(L))$ boundary faces meeting $\gamma$ that are not adjacent to their right neighbour across a bigon region of the chunk. The previous lemma implies there are $k|q|$ arcs on $\partial X$ with $x_i'\neq x_{i+1}'$. Let $\gamma_i$ be one such arc. Then $S_i$, $S_{i+1}$ cannot both be boundary bigons. The previous lemma implies that at least one of $S_i$, $S_{i+1}$ has positive combinatorial area, say $S_i$. Then \reflem{Marc4.2} implies $a(S_i)/|S_i \cap \partial X| \geq \pi/4$. Summing over these arcs, the lengths satisfy
\[ \sum_i \ell(\gamma_i,S_i) \geq (1/2)\,2\,\operatorname{tw}(K,\pi(L))\,k|q|\,\pi/4 \geq \operatorname{tw}(K,\pi(L))|q|\pi/4.\]
The factor $1/2$ ensures we have not double counted: for non-skirting $\gamma_i$ it may be the case that both $S_{i-1}$ and $S_{i+1}$ are boundary bigons, and so we must share the combinatorial area of $S_i$ between them.
Now, since $\gamma$ was an arbitrary representative of a nonzero multiple of the slope $p/q$, and since $S$ was an arbitrary inward extension, it follows that
\[\ell_c(\gamma) \geq \pi|q|\operatorname{tw}(K,\pi(L))/4.\qedhere \]
\end{proof}
\end{document} |
\begin{document}
\title{\textsf{Some Relations on Paratopisms and an Intuitive Interpretation
on the Adjugates of a Latin Square}\thanks{This work is supported in part by the Fund of Research Team of Anhui
International Studies University (NO. awkytd1909, awkytd1908), College
Natural Scientific Research Projects organized by Anhui Provincial
Department of Education (NO. KJ2021A1198, KJ2021ZD0143).\protect \\
Corresponding author. Wen-Wei Li (W.-W. Li), E-mail address: [email protected].} }
\author{Wen-Wei Li$^{\mathrm{a,b}}$, Jia-Bao Liu$^{\mathrm{c}}$}
\maketitle
\lyxaddress{\begin{center}
{\footnotesize{}$^{\mathrm{a}}$ School of Information and Mathematics,
Anhui International Studies University, }\\
{\footnotesize{}Hefei, 231201, P. R. China}\\
{\footnotesize{}$^{\mathrm{b}}$ School of Mathematical Science, University
of Science and Technology of China, }\\
{\footnotesize{}Hefei, 230026, P. R. China}\\
{\footnotesize{}$^{\mathrm{c}}$ School of Mathematics and Physics,
Anhui Jianzhu University, }\\
{\footnotesize{}Hefei, 230601, P. R. China}
\par\end{center}}
\begin{abstract}
This paper presents an intuitive interpretation of the adjugate
transformations of an arbitrary Latin square. With this technique,
the adjugates of an arbitrary Latin square can be generated directly
from the original square, without constructing the orthogonal array.
We also describe how isotopisms and adjugate transformations interact
under composition. In particular, we solve the following problem:
given that $F_{1}*I_{1}=I_{2}*F_{2}$, where $I_{1}$ and $I_{2}$ are
isotopisms, $F_{1}$ and $F_{2}$ are adjugate transformations, and
``$*$'' denotes composition of transformations, how can $I_{2}$ and
$F_{2}$ be obtained from $I_{1}$ and $F_{1}$? These methods distinctly
simplify the computations involved in problems related to main classes
of Latin squares, and noticeably improve the efficiency of computation
for such problems.
\textbf{Key Words:} Latin square, Adjugate, Conjugate, Isotopism,
Paratopism, Intuitive interpretation, Orthogonal array, Main class,
Computational Complexity
\textbf{AMS2010 Subject Classification:} 05B15
\end{abstract}
\tableofcontents{}
$\ $
\section{Introduction}
Latin squares play an important role in experimental design in combinatorics
and statistics. They are widely used in industrial manufacturing,
agricultural production, and elsewhere. Besides their application
to orthogonal experiments in agriculture, Latin squares can also be
used to minimise experimental error in the design of experiments in
agronomic research (see \cite{Nada2001ALSAR}). Any Latin square is
the multiplication table of a quasi-group, which is an important
structure in algebra. Mutually orthogonal Latin squares are closely
linked with finite projective planes (\cite{Lam1991PP9B,Lam1991PP10}).
Sets of orthogonal Latin squares can be applied to error-correcting
codes in communication (see \cite{Colbourn2004PAPCMOLS,Huczynska2006PC36OP}).
Latin squares are also used in mathematical puzzles such as Sudoku
(\cite{Ruiter2010-OJSPRT}) and KenKen (see \cite{Shortz2009ANPCMS,Stephey2009IKKNS}).
The name ``Latin square'' originated with the 36-officer problem
introduced by Leonhard Euler (see \cite{Euler1782RNEQM} or
\cite{Euler2007IONTMS}). Euler used Latin characters as symbols, which
gave the name ``Latin square''. (Nowadays Hindu--Arabic numerals are
usually used instead.) Ever since then, many results have been obtained
in this discipline.
One of the main threads in the development of this subject is the
enumeration problem: for example, determining the number $L_{n}$ of
reduced Latin squares of order $n$ (a positive integer), or the number
of equivalence classes (such as isotopy classes or main classes) of
Latin squares of small orders. It is clear that the number $N_{n}$
of Latin squares of order $n$ is $n!\cdot(n-1)!$ times the number
$L_{n}$ of reduced Latin squares of order $n$. But there is no explicit
formula relating $L_{n}$ to the number $S_{n}^{(1)}$ of isotopy classes
of Latin squares of order $n$ that could be used in practice to compute
$S_{n}^{(1)}$ from $L_{n}$ or $n$. Nor can the number $S_{n}^{(2)}$
of main classes of Latin squares of order $n$ be calculated directly
from $L_{n}$ or $n$.
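The relation between $N_{n}$ and $L_{n}$ is straightforward to apply in practice. As an illustration (the function name is ours), with the classical values $L_{4}=4$ and $L_{5}=56$ it gives $N_{4}=576$ and $N_{5}=161{,}280$:

```python
from math import factorial

def total_from_reduced(n, reduced_count):
    # N_n = n! * (n-1)! * L_n
    return factorial(n) * factorial(n - 1) * reduced_count

n4 = total_from_reduced(4, 4)   # 24 * 6 * 4 = 576
n5 = total_from_reduced(5, 56)  # 120 * 24 * 56 = 161280
```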
To date, there is no practical formula by which $L_{n}$ can be computed
efficiently. Jia-yu Shao and Wan-di Wei gave a simple and explicit
formula (in form) in 1992 (see \cite{ShaoJY1992AFNLS}), namely
$N_{n}=n!\underset{A\in B_{n}}{\sum}(-1)^{\sigma_{0}(A)}\dbinom{\mathrm{Per}A}{n}$
for the number $N_{n}$ of Latin squares of order $n$ (so
$L_{n}=N_{n}/\left(n!\,(n-1)!\right)$), where $B_{n}$ is the set of
all $0$-$1$ square matrices of order $n$, $\sigma_{0}(A)$ is the number
of entries equal to ``0'' in the matrix $A$, and ``$\mathrm{Per}$''
is the permanent operator; but this formula is still inefficient in
practice. There is no practical asymptotic formula for $L_{n}$, either
(see \cite{McKay2005ONLS}). The gap between the most accurate known
lower and upper bounds,
$\dfrac{\left(n!\right)^{2n}}{n^{n^{2}}}\leqslant L_{n}\leqslant\prod_{k=1}^{n}\left(k!\right)^{n/k}$
(mentioned in \cite{Lint1992ACC}, pp.~161--162), is so large that these
bounds cannot be used to estimate the value of $L_{n}$. The upper bound
was deduced from the van der Waerden permanent conjecture by Richard
M. Wilson in 1974 (\cite{Wilson1974NSTS}). Around 1980, the
van der Waerden conjecture was solved independently by G. P. Egorichev
(\cite{Egorichev1981PVDWCRPDSM-Eng,Egorichev1981PVDWCP-Eng}) and
D. I. Falikman (see \cite{Falikman1981PVDWCRPDSM-Rus,Falikman1981PVDWCRPDSM-Eng})
at almost the same time.
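The Shao--Wei expression can be evaluated by brute force for very small orders, which also illustrates why it is impractical: the sum runs over all $2^{n^{2}}$ matrices in $B_{n}$. The sketch below (names ours) evaluates $n!\sum_{A\in B_{n}}(-1)^{\sigma_{0}(A)}\binom{\mathrm{Per}A}{n}$; for $n=1,2,3$ the value equals the total number of Latin squares of order $n$ (namely 1, 2 and 12), from which the reduced counts follow on dividing by $n!\,(n-1)!$:

```python
from itertools import permutations, product
from math import comb, factorial

def permanent(A, n):
    # Per A for a 0-1 matrix: number of permutations s with A[i][s[i]] = 1 for all i
    return sum(all(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def shao_wei(n):
    # n! * sum over all 0-1 matrices A of order n of (-1)^{sigma_0(A)} * C(Per A, n)
    total = 0
    for bits in product((0, 1), repeat=n * n):
        A = [bits[i * n:(i + 1) * n] for i in range(n)]
        total += (-1) ** bits.count(0) * comb(permanent(A, n), n)
    return factorial(n) * total
```

Already at $n=4$ the sum has $2^{16}=65{,}536$ terms, and the number of terms doubles with every additional matrix entry, so the formula is unusable for enumeration at the orders discussed below.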
James R. Nechvatal (mentioned in \cite{Godsil1990AEGLR}) gave
a general asymptotic formula for generalised Latin rectangles in 1981,
and Ira Gessel (\cite{Gessel1987CLR}) gave a general asymptotic
formula for Latin rectangles in 1987. Although both yield asymptotic
formulae for $L_{n}$ in formal terms, neither seems suitable for
asymptotic analysis. Chris D. Godsil and Brendan D. McKay obtained
a better asymptotic formula in 1990 (\cite{Godsil1990AEGLR}).
When the order $n$ is less than 4, the number $L_{n}$ of reduced
Latin squares of order $n$ is obvious: $L_{1}=L_{2}=L_{3}=1$.
When $n=4$ or 5, Euler found that $L_{4}=4$ and $L_{5}=56$ in 1782
(see \cite{Euler1782RNEQM}), together with the values of $L_{1}$,
$L_{2}$ and $L_{3}$. Cayley rediscovered these results (up to order 5)
in 1890 (see \cite{Cayley1890OLS}). M. Frolov found the value 9,408
of $L_{6}$ in 1890 (mentioned in \cite{Brendan2007SmallLS}); Tarry
rediscovered it in 1901 (also mentioned in \cite{Brendan2007SmallLS}).
The number $S_{6}^{(1)}=22$ was first obtained by Erich Schönhardt
\cite{Schoenhardt1930ULQU} in 1930. Ronald A. Fisher and Frank Yates
\cite{Fisher1934LSOd6} rediscovered $S_{6}^{(1)}$ independently in 1934;
they also found the values of $S_{n}^{(1)}$ for $n\leqslant5$.
In 1966, D. A. Preece \cite{Preece1966CYR} found that there are 564
isotopy classes of Latin squares of order 7. Mark B. Wells \cite{Wells1967NLSOd8}
obtained $L_{8}=535,281,401,856$ in 1967. In 1990, Galina Kolesova,
Clement W. H. Lam and Larry Thiel \cite{Lam1990NumLS8} obtained $S_{8}^{(1)}=$
1,676,267 and $S_{8}^{(2)}$ = 283,657, and confirmed Wells' result. The
value $L_{9}=$ 377,597,570,964,258,816 was found by S. E. Bammel
and Jerome Rothstein \cite{Bammel1975NLSOd9} in 1975. Brendan D.
McKay and Eric Rogoyski \cite{McKayl1995LSOd10} found $L_{10}=$
7,580,721,483,160,132,811,489,280, $S_{10}^{(1)}$ = 208,904,371,354,363,006
and $S_{10}^{(2)}$ = 34,817,397,894,749,939 in 1995. Brendan D. McKay
and Ian M. Wanless \cite{McKay2005ONLS} found $L_{11}$
= 5,363,937,773,277,371,298,119,673,540,771,840, $S_{11}^{(1)}$
= 12,216,177,315,369,229,261,482,540 and $S_{11}^{(2)}$ = 2,036,029,552,582,883,134,196,099
in 2005.
$\ $
Before the invention of computers, mathematicians did this enumeration
work by hand with some theoretical tools, a process that was prone to
error. It was also very difficult to verify a known result, so some
excellent experts obtained false values even when the correct one had
already been found. In 1915, Percy A. MacMahon obtained an incorrect
value of $L_{5}$ by a method different from those of other experts
(see \cite{MacMahon1915CombAnl}). In 1930, Savarimuthu M. Jacob
(\cite{Jacob1930ELRD3}) obtained a wrong value of $L_{6}$ after
Frolov and Tarry had already found the correct one. The value of
$L_{7}$ computed by Frolov was also incorrect.
It was mentioned in \cite{Norton1939LSOd7} that Clausen, an assistant
of the German astronomer Schumacher, found 17 ``basic forms''\footnote{$\ $ Translated from the German word ``Grundformen''. According
to the context, it probably means the isotopy classes.} of Latin squares of order 6. This information appears in a
letter from Schumacher to Gauss dated August 10, 1842, which was
quoted by Gunther in 1876 and by Ahrens in 1901. Tarry also found
17 isotopy classes of Latin squares of order 6. The correct values
of $L_{n}$, $S_{n}^{(1)}$ and $S_{n}^{(2)}$ for $n\leqslant6$ were
finally obtained by E. Schönhardt in 1930.
In 1939, H. W. Norton (\cite{Norton1939LSOd7}) obtained wrong
values of $S_{7}^{(1)}$ and $S_{7}^{(2)}$. After Preece obtained the
correct value of $S_{7}^{(1)}$ in 1966, James W. Brown \cite{Brown1968ELSAOd8}
announced another incorrect value, 563, for $S_{7}^{(1)}$, and this result
was widely quoted as the accepted value for several decades (\cite{Colbourn1996CRCHBCD},
\cite{Denes1974Latin}).
In recent decades, with the application of computers, the efficiency
of enumerating equivalence classes of Latin squares has improved
greatly. However, it is still very difficult to avoid errors
because of the huge amount of computation involved.
In the case of order 8, J. W. Brown \cite{Brown1968ELSAOd8} also
provided a wrong value of $S_{8}^{(1)}$ in 1968, and Arlazarov et
al. provided a false value of $S_{8}^{(2)}$ in 1978 (mentioned in
\cite{Brendan2007SmallLS,Lam1990NumLS8}).
In some cases, the number of equivalence classes of Latin squares of a
given type and order is only believed to be correct after at least two
independent computations (see \cite{Brendan2007SmallLS}).
$\ $
Technically, by conclusions in group theory, especially the method of
enumerating the orbits of a group acting on a set, we can generate the
representatives of all the equivalence classes of a certain order and
enumerate the members of the invariant group of each representative;
then we know the number of objects in every equivalence class, and hence
the total number of Latin squares of that order can be obtained. But the
process involves too many equivalence classes, which costs too much
time.\footnote{$\ $ The number of isotopy classes or main classes of Latin squares
of order $n$ is much smaller than the number of reduced Latin squares
of order $n$. In general, we cannot afford the time to visit all
the reduced Latin squares of order $n$ (when $n>7$), even on a
supercomputer.}
Logically, the structure of the relations among the Latin squares in
the same main class is a graph, so the powerful graph isomorphism program
``Nauty'' is very useful for computing the invariant group of a
Latin square under main class transformations.
In practice, when computing these objects (to find the invariant
group of a Latin square), the computation resembles visiting a tree
some of whose branches are isomorphic; only one branch in each
isomorphic family is visited, so as to improve the efficiency.
When considering topics related to the main classes of Latin
squares, we usually need to generate the adjugates (or conjugates) of
an arbitrary Latin square. The routine procedure is to generate the
orthogonal array of the Latin square first, then permute the rows
(or columns, in some literature) of the array, and finally turn the new orthogonal
array into a new Latin square, which is not so convenient.
Among the adjugates of a Latin square $Y$, the author has not found
detailed descriptions of the adjugates other than via the orthogonal
array, except for two simple cases: $Y$ itself and the transpose $Y^{\mathrm{T}}$.
In Sec. \ref{subsec:Adjugates-LS} an intuitive description of the adjugates
of an arbitrary Latin square will be given. With this method,
the other four types of adjugates of an arbitrary Latin square can be generated
by transposing and/or replacing the rows/columns by their inverses.
Sometimes we need to consider the composition of adjugate transformations
and isotopic transformations, especially when generating the invariant
group of a Latin square under main class transformations. It is then necessary
to exchange the order of these two types of transformations
for simplification. But in general an adjugate transformation
and an isotopic transformation do not commute. So we want to find
the relations between isotopic transformations and adjugate transformations
in composition when their positions are interchanged. In other words,
for any adjugate transformation $\mathcal{F}_{1}$ and any isotopic
transformation $\mathcal{I}_{1}$, how can we find an adjugate transformation
$\mathcal{F}_{2}$ and an isotopic transformation $\mathcal{I}_{2}$
such that $\mathcal{F}_{1}\circ\mathcal{I}_{1}$ = $\mathcal{I}_{2}\circ\mathcal{F}_{2}$?
The answer will be presented in Sec. \ref{subsec:RLT-Prtp}.
\section{Preliminaries}
Here a few notions that appear in this paper are recalled, so as to
avoid ambiguity.
Let $n$ be a positive integer greater than 1.
A \emph{permutation}\index{permutation} \label{Def:Permutation}
is a reordering of the sequence 1, 2, 3, $\cdots$, $n$. An element
$\alpha$ = $\left(\begin{array}{cccc}
1 & 2 & \cdots & n\\
a_{1} & a_{2} & \cdots & a_{n}
\end{array}\right)$ of the symmetric group $\mathrm{S}_{n}$ is also called a \emph{permutation}.
For convenience, in this paper these two concepts will not be distinguished
rigorously; they may even be used interchangeably. When referring to ``a permutation
$\alpha$'' here, sometimes it will be a bijection of the set \{$\,$1,
2, 3, $\cdots$, $n$$\,$\} to itself, and sometimes it will stand
for the sequence $\left[\alpha(1),\,\alpha(2),\,\cdots,\,\alpha(n)\right]$.
For a sequence $\left[b_{1},b_{2},\cdots,b_{n}\right]$ which is a
rearrangement of {[}1, 2, $\cdots$, $n${]}, on some occasions it
may also stand for the transformation $\beta\in\mathrm{S}_{n}$ such
that $\beta(i)=b_{i}$ ($i$ = 1, 2, $\cdots$, $n$). The actual
meaning can be inferred from the context.\label{CVT:Permutation-2}
In a computer, these two kinds of objects are stored in almost the same
way.
We are compelled to accept this ambiguity; avoiding it would cost
too much, since we would need many more words to describe a simple
operation and many more symbols to write a concise expression. An
example will be shown after Lemma 1.
Let $\alpha$ = $\left(\begin{array}{cccc}
1 & 2 & \cdots & n\\
a_{1} & a_{2} & \cdots & a_{n}
\end{array}\right)$ $\in\mathrm{S}_{n}$, here we may call $\left[a_{1},a_{2},\cdots,a_{n}\right]$
the \emph{one-row form} of the permutation $\alpha$, and call $\left(\begin{array}{cccc}
1 & 2 & \cdots & n\\
a_{1} & a_{2} & \cdots & a_{n}
\end{array}\right)$ the \emph{two-row form} of the permutation $\alpha$.
A matrix with every row and every column being a permutation of 1,
2, $\cdots$, $n$\footnote{$\ $ Usually we will assume that the $n$ elements of a Latin square
are 1, 2, 3, $\cdots$, $n$, for convenience. But in many books
and articles, the $n$ elements of a Latin square are denoted by
0, 1, 2, 3, $\cdots$, $n-1$. } is called a \emph{Latin square}\index{Latin square} of order $n$.
Latin squares with both the first row and the first column
in natural order are said to be \emph{reduced}\index{reduced (Latin square)}
or in \emph{standard form}\index{standard form} (see \cite{Denes1974Latin}
page 128). For example, the matrix below is a reduced Latin square
of order 5,
\[
\left[\begin{array}{ccccc}
1 & 2 & 3 & 4 & 5\\
2 & 3 & 5 & 1 & 4\\
3 & 5 & 4 & 2 & 1\\
4 & 1 & 2 & 5 & 3\\
5 & 4 & 1 & 3 & 2
\end{array}\right].
\]
For convenience, when referring to the \emph{inverse} of a row (or a
column) of a Latin square, we mean the one-row form of the
inverse of the permutation represented by the row (or the column), not
the sequence in reverse order. For instance, the inverse of the third
row $[3\ 5\ 4\ 2\ 1]$ of the Latin square above is
$[5\ 4\ 1\ 3\ 2]$, not its reversal $[1\ 2\ 4\ 5\ 3]$, since
$\left(\begin{array}{ccccc}
1 & 2 & 3 & 4 & 5\\
3 & 5 & 4 & 2 & 1
\end{array}\right)^{-1}=$ $\left(\begin{array}{ccccc}
1 & 2 & 3 & 4 & 5\\
5 & 4 & 1 & 3 & 2
\end{array}\right)$. \footnote{It is not difficult to see that, in computer programs, if we
store a permutation $\left[a_{1},a_{2},\cdots,a_{n}\right]$ in an
array ``A{[}n{]}'', the inverse ``B{[}n{]}'' of ``A{[}n{]}''
can be generated by $n$ assignments, ``B{[}A{[}j{]}{]}=j'',
$j$ = 1, 2, $\cdots$, $n$. The traditional method used by some programmers
is to interchange the rows of the permutation $\left(\begin{array}{ccccc}
1 & 2 & 3 & 4 & 5\\
3 & 5 & 4 & 2 & 1
\end{array}\right)$, then sort the columns of the new permutation $\left(\begin{array}{ccccc}
3 & 5 & 4 & 2 & 1\\
1 & 2 & 3 & 4 & 5
\end{array}\right)$ such that the first row is natural order, so as to obtain the inverse
$\left(\begin{array}{ccccc}
1 & 2 & 3 & 4 & 5\\
5 & 4 & 1 & 3 & 2
\end{array}\right)$. The number of assignment and comparison operations
is much larger than in the previous method.}
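The one-pass method described in the footnote can be sketched as follows (a Python sketch; the function name `inverse_perm` is ours):

```python
def inverse_perm(a):
    """Inverse of a permutation given in one-row form with entries 1..n:
    n assignments b[a[j]] = j, with no sorting of a two-row form."""
    b = [0] * len(a)
    for j, aj in enumerate(a, start=1):  # j runs over 1, 2, ..., n
        b[aj - 1] = j                    # b[a[j]] = j, written 0-based
    return b

# The third row of the reduced Latin square of order 5 above:
print(inverse_perm([3, 5, 4, 2, 1]))  # [5, 4, 1, 3, 2]
```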
$\mathscr{R}_{\alpha}$ \label{Sym:mathscr_R_beta} \nomenclature[R_alpha]{$\mathscr{R}_{\beta}$}{the transformation that permute the rows of a Latin square according to a permutation $\beta$. \pageref{Sym:mathscr_R_beta}}
will denote the row transformation of a Latin square (or a matrix)
corresponding to the permutation $\alpha$, i.e., the $\alpha(i)$-th
row of the Latin square $\mathscr{R}_{\alpha}\left(Y\right)$ is the
$i$-th row of the Latin square $Y$; equivalently, the $i$-th row of
$\mathscr{R}_{\alpha}\left(Y\right)$ is the $\alpha^{-1}(i)$-th row
of the Latin square $Y$, where $\alpha$ is a permutation of order
$n$. $\mathscr{C}_{\beta}$ \label{Sym:mathscr_C_alpha} \nomenclature[C_\alpha]{$ \mathscr{C}_{\alpha}$}{the transformation that permute the columns of a Latin square according to a permutation $\alpha$ of the same order. \pageref{Sym:mathscr_C_alpha}}
will denote the transformation which moves the $i$'th column ($i$
= 1, 2, $\cdots$, $n$) of a Latin square (or a general matrix) to
the position of the $\beta(i)$'th column. $\mathscr{L}_{\gamma}$
\label{Sym:mathscr_L_gamma} \nomenclature[L_gamma]{$\mathscr{L}_{\gamma}$}{the transformation that substitutes the elements of a Latin square according to a permutation $\gamma$. \pageref{Sym:mathscr_L_gamma}}
will denote the transformation which relabels the entries, i.e., $\mathscr{L}_{\gamma}$
will substitute all the elements $i$ in a Latin square by $\gamma(i)$
($i$ = 1, 2, $\cdots$, $n$). For example, let $\alpha$=$\left(\begin{array}{cccc}
1 & 2 & 3 & 4\\
2 & 4 & 1 & 3
\end{array}\right)$, $\beta$=$\left(\begin{array}{cccc}
1 & 2 & 3 & 4\\
2 & 3 & 4 & 1
\end{array}\right)$, $\gamma$=$\left(\begin{array}{cccc}
1 & 2 & 3 & 4\\
3 & 1 & 4 & 2
\end{array}\right)$, $Y_{1}$=$\left[\begin{array}{cccc}
1 & 2 & 3 & 4\\
2 & 3 & 4 & 1\\
3 & 4 & 1 & 2\\
4 & 1 & 2 & 3
\end{array}\right]$, then $\mathscr{R}_{\alpha}\left(Y_{1}\right)$ = $\left[\begin{array}{cccc}
3 & 4 & 1 & 2\\
1 & 2 & 3 & 4\\
4 & 1 & 2 & 3\\
2 & 3 & 4 & 1
\end{array}\right]$, $\mathscr{C}_{\beta}\left(Y_{1}\right)$ = $\left[\begin{array}{cccc}
4 & 1 & 2 & 3\\
1 & 2 & 3 & 4\\
2 & 3 & 4 & 1\\
3 & 4 & 1 & 2
\end{array}\right]$, $\mathscr{L}_{\gamma}\left(Y_{1}\right)$ = $\left[\begin{array}{cccc}
3 & 1 & 4 & 2\\
1 & 4 & 2 & 3\\
4 & 2 & 3 & 1\\
2 & 3 & 1 & 4
\end{array}\right]$.
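The three transformations can be sketched in code and checked against the example above (a Python sketch using 1-based one-row forms; the function names are ours):

```python
def R(alpha, Y):
    """Row transform: row i of Y moves to position alpha(i) (1-based)."""
    out = [None] * len(Y)
    for i, row in enumerate(Y, start=1):
        out[alpha[i - 1] - 1] = row[:]
    return out

def C(beta, Y):
    """Column transform: column i of Y moves to position beta(i)."""
    n = len(Y)
    out = [[0] * n for _ in range(n)]
    for j in range(1, n + 1):
        for i in range(n):
            out[i][beta[j - 1] - 1] = Y[i][j - 1]
    return out

def L(gamma, Y):
    """Symbol transform: every entry k is replaced by gamma(k)."""
    return [[gamma[y - 1] for y in row] for row in Y]

alpha, beta, gamma = [2, 4, 1, 3], [2, 3, 4, 1], [3, 1, 4, 2]
Y1 = [[1, 2, 3, 4], [2, 3, 4, 1], [3, 4, 1, 2], [4, 1, 2, 3]]
assert R(alpha, Y1) == [[3, 4, 1, 2], [1, 2, 3, 4], [4, 1, 2, 3], [2, 3, 4, 1]]
assert C(beta, Y1) == [[4, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 1], [3, 4, 1, 2]]
assert L(gamma, Y1) == [[3, 1, 4, 2], [1, 4, 2, 3], [4, 2, 3, 1], [2, 3, 1, 4]]
```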
\label{CVT:Permutation-3} From now on, let $Y_{i}$ = $\left[\begin{array}{cccc}
y_{i1}, & y_{i2}, & \cdots, & y_{in}\end{array}\right]$ be the $i$'th row of a Latin square $Y$ and $Z_{i}$ = $\left[\begin{array}{c}
y_{1i}\\
y_{2i}\\
\vdots\\
y_{ni}
\end{array}\right]$ be the $i$'th column of $Y$ ($i$=1, 2, $\cdots$, $n$). In this
paper, we do not distinguish the permutation transformation $\zeta$
: \{1,\emph{ }2, $\cdots$\emph{,} $n$\} $\rightarrow$ \{1, 2, $\cdots$,
$n$\}, $k$ $\longmapsto$ $\zeta(k)$\emph{ }($k$=1, 2, $\cdots$,
$n$) and the column sequence {[}$\zeta(1)$,\emph{ }$\zeta(2)$,
$\cdots$\emph{,} $\zeta(n)${]}$^{\mathrm{T}}$. Here the superscript
``T'' means transpose. $Y_{i}^{-1}$ and $Z_{i}^{-1}$ are the inverses
of $Y_{i}$ and $Z_{i}$, respectively (as the one row form of the
inverse of the corresponding transformations, while $Y_{i}^{-1}$
is a row and $Z_{i}^{-1}$ is a column). \footnote{$\ $ Actually, $Z_{i}^{-1}$ = $\left[\begin{array}{c}
\zeta_{i}^{-1}(1)\\
\zeta_{i}^{-1}(2)\\
\vdots\\
\zeta_{i}^{-1}(n)
\end{array}\right]$ is the transpose of the sequence {[}$\zeta_{i}^{-1}(1)$,\emph{ }$\zeta_{i}^{-1}(2)$,
$\cdots$\emph{,} $\zeta_{i}^{-1}(n)${]}, where $\zeta_{i}^{-1}$
is the inverse of the permutation $\zeta_{i}$ = $\left(\begin{array}{cccc}
1 & 2 & \cdots & n\\
y_{1i} & y_{2i} & \cdots & y_{ni}
\end{array}\right)$. $Y_{i}^{-1}$ is obtained in the same way, except that $Y_{i}^{-1}$
is a row sequence.} Whether a permutation symbol $\alpha$ stands for a row sequence
or a column sequence can be inferred from the context.
$\forall\alpha,\beta,\gamma\in\mathscr{\mathrm{S}}_{n}$, we know
the definition of the composition $\alpha\cdot\beta$ as a permutation
transformation; here $\left(\alpha\cdot\beta\right)(i)$ is defined
by $\alpha\bigl(\beta(i)\bigr)$, $i$=1, 2, $\cdots$, $n$. We define
$\gamma Y_{i}$ as the one-row form of the composition of the permutation
$\gamma$ and the permutation with one-row form $Y_{i}$ ($i$=1,
2, $\cdots$, $n$), i.e., $\gamma Y_{i}$ will stand for the sequence
$\left[\begin{array}{cccc}
\gamma\left(y_{i1}\right), & \gamma\left(y_{i2}\right), & \cdots, & \gamma\left(y_{in}\right)\end{array}\right]$, or sometimes the permutation transformation $\left(\begin{array}{cccc}
1 & 2 & \cdots & n\\
\ \gamma\left(y_{i1}\right)\ & \ \gamma\left(y_{i2}\right)\ & \cdots & \ \gamma\left(y_{in}\right)\
\end{array}\right)$, according to the context. The same holds for $Z_{i}\alpha^{-1}$ and $\gamma Z_{i}$,
except that $Z_{i}$ is a column, so both are columns.
It is not difficult to verify that
\begin{lem}
\label{lem:Act-Istp} $\ $
\begin{align*}
\mathscr{R}_{\alpha}(Y) & =\left(\begin{array}{c}
Y_{\alpha^{-1}(1)}\\
\vdots\\
Y_{\alpha^{-1}(n)}
\end{array}\right)=\left(\begin{array}{ccc}
Z_{1}\alpha^{-1}, & \cdots, & Z_{n}\alpha^{-1}\end{array}\right),\\
\mathscr{C}_{\beta}(Y) & =\left(\begin{array}{c}
Y_{1}\beta^{-1}\\
\vdots\\
Y_{n}\beta^{-1}
\end{array}\right)=\left(\begin{array}{ccc}
Z_{\beta^{-1}(1)}, & \cdots, & Z_{\beta^{-1}(n)}\end{array}\right),\\
\mathscr{L}_{\gamma}(Y) & =\left(\begin{array}{c}
\gamma Y_{1}\\
\vdots\\
\gamma Y_{n}
\end{array}\right)=\left(\begin{array}{ccc}
\gamma Z_{1}, & \cdots, & \gamma Z_{n}\end{array}\right).
\end{align*}
\end{lem}
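As a numerical sanity check of Lemma \ref{lem:Act-Istp}, the column form of the first identity can be tested on the earlier example (a Python sketch; the function names `compose`, `inv` and `R` are ours):

```python
def compose(p, q):
    """One-row form of p . q, i.e. (p.q)(i) = p(q(i)), 1-based."""
    return [p[q[i - 1] - 1] for i in range(1, len(p) + 1)]

def inv(p):
    """Inverse of a one-row-form permutation with entries 1..n."""
    b = [0] * len(p)
    for j, pj in enumerate(p, start=1):
        b[pj - 1] = j
    return b

def R(alpha, Y):
    """Row i of the result is row alpha^{-1}(i) of Y."""
    ai = inv(alpha)
    return [Y[ai[i] - 1] for i in range(len(Y))]

alpha = [2, 4, 1, 3]
Y = [[1, 2, 3, 4], [2, 3, 4, 1], [3, 4, 1, 2], [4, 1, 2, 3]]
# Column form of the identity: column j of R_alpha(Y) equals Z_j . alpha^{-1}.
for j in range(4):
    Zj = [Y[r][j] for r in range(4)]
    new_Zj = [R(alpha, Y)[r][j] for r in range(4)]
    assert new_Zj == compose(Zj, inv(alpha))
```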
If we distinguished the two kinds of permutations strictly, the above
equations would become much more complex. For example, in order to
show that the transformation $\left(\begin{array}{cccc}
1 & 2 & \cdots & n\\
a_{1} & a_{2} & \cdots & a_{n}
\end{array}\right)$ is derived from a sequence $R=\left[a_{1},a_{2},\cdots,a_{n}\right]$
which is a reordering of {[}1, 2, $\cdots$, $n${]}, we will use
a symbol such as $\mathscr{S}(R)$ to denote the transformation $\left(\begin{array}{cccc}
1 & 2 & \cdots & n\\
a_{1} & a_{2} & \cdots & a_{n}
\end{array}\right)$; for a transformation $\beta$ = $\left(\begin{array}{cccc}
1 & 2 & \cdots & n\\
b_{1} & b_{2} & \cdots & b_{n}
\end{array}\right)$, and a sequence $\left[b_{1},b_{2},\cdots,b_{n}\right]$, in order
to show the relation of $\beta$ and the corresponding sequence,
we will denote the sequence by $\mathscr{T}(\beta)$. Therefore Lemma
1 should be stated as
\begin{align*}
\mathscr{R}_{\alpha}(Y) & =\left(\begin{array}{c}
Y_{\alpha^{-1}(1)}\\
\vdots\\
Y_{\alpha^{-1}(n)}
\end{array}\right)=\left(\begin{array}{ccc}
\left(\mathscr{T}\left(\mathscr{S}\left(Z_{1}^{\mathrm{T}}\right)\cdot\alpha^{-1}\right)\right)^{\mathrm{T}}, & \cdots, & \left(\mathscr{T}\left(\mathscr{S}\left(Z_{n}^{\mathrm{T}}\right)\cdot\alpha^{-1}\right)\right)^{\mathrm{T}}\end{array}\right),\\
\mathscr{C}_{\beta}(Y) & =\left(\begin{array}{c}
\mathscr{T}\left(\mathscr{S}\left(Y_{1}\right)\cdot\beta^{-1}\right)\\
\vdots\\
\mathscr{T}\left(\mathscr{S}\left(Y_{n}\right)\cdot\beta^{-1}\right)
\end{array}\right)=\left(\begin{array}{ccc}
Z_{\beta^{-1}(1)}, & \cdots, & Z_{\beta^{-1}(n)}\end{array}\right),\\
\mathscr{L}_{\gamma}(Y) & =\left(\begin{array}{c}
\mathscr{T}\left(\mathscr{\gamma\cdot S}\left(Y_{1}\right)\right)\\
\vdots\\
\mathscr{T}\left(\mathscr{\gamma\cdot S}\left(Y_{n}\right)\right)
\end{array}\right)=\left(\begin{array}{ccc}
\left(\mathscr{T}\left(\gamma\cdot\mathscr{S}\left(Z_{1}^{\mathrm{T}}\right)\right)\right)^{\mathrm{T}}, & \cdots, & \left(\mathscr{T}\left(\gamma\cdot\mathscr{S}\left(Z_{n}^{\mathrm{T}}\right)\right)\right)^{\mathrm{T}}\end{array}\right).
\end{align*}
The two transformation symbols $\mathscr{S}$ and $\mathscr{T}$ here
make the above equations less concise than the previous ones,
and they distract our attention. They would also cause considerable
trouble in demonstrating propositions more complicated
than Lemma 1. We would pay too high a price to distinguish these
two kinds of permutations strictly, so we will ignore the distinction
in this paper. Later, the two symbols $\mathscr{S}$ and $\mathscr{T}$
will stand for some other injective maps.
For any $\alpha,\beta,\gamma\in\mathscr{\mathrm{S}}_{n}$ and
any Latin square $Y$, $\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}$
will be called an \emph{isotopism}, and $\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\right)(Y)$
will be called \emph{isotopic} to $Y$.
Let $\mathscr{I}_{n}$ be the set of all the isotopy transformations
of Latin squares of order $n.$
Let $Y=\left(y_{ij}\right)_{n\times n}$ be an arbitrary Latin square
with elements belonging to the set $\left\{ \,1,\,2,\,\mbox{\ensuremath{\cdots},\,}n\,\right\} $.
With regard to the set $\mathrm{T}$ = $\left\{ \left.(i,j,y_{ij})\,\right|\,1\leqslant i,j\leqslant n\,\right\} $,
we will have\\
\hphantom{Ob} $\left\{ \left.(i,j)\,\right|\,1\leqslant i,j\leqslant n\,\right\} $
= $\left\{ \left.(j,y_{ij})\,\right|\,1\leqslant i,j\leqslant n\,\right\} $
= $\left\{ \left.(i,y_{ij})\,\right|\,1\leqslant i,j\leqslant n\,\right\} $,
\\
so each pair of different triplets $(i,j,y_{ij})$ and $(r,t,y_{rt})$
in $\mathrm{T}$ will share at most one identical entry in the same
position. The set $\mathrm{T}$ is also called the \emph{orthogonal
array representation} \index{orthogonal array representation} of
the Latin square $Y$.
From now on, each triplet $(i,j,y_{ij})$ in the orthogonal array
set $\mathrm{T}$ of a Latin square $Y=\left(y_{ij}\right)_{n\times n}$
will be written in the form of a column vector $\left[\begin{array}{c}
i\\
j\\
y_{ij}
\end{array}\right]$ so as to save some space (while in many papers the triplets
are written as row vectors). The orthogonal array set $\mathrm{T}$
of the Latin square $Y$ can be denoted by a matrix
\begin{equation}
V=\left[\begin{array}{llll|llll|lll|llll}
1 & 1 & \cdots & 1 & \,2 & 2 & \cdots & 2 & \,\bullet & \bullet & \bullet\, & \,n & n & \cdots & n\\
1 & 2 & \cdots & n & \,1 & 2 & \cdots & n & \,\bullet & \bullet & \bullet & \,1 & 2 & \cdots & n\\
y_{11} & y_{12} & \cdots & y_{1n} & \,y_{21} & y_{22} & \cdots & y_{2n} & \,\bullet & \bullet & \bullet & \,y_{n1} & y_{n2} & \cdots & y_{nn}
\end{array}\right]\label{eq:OrthogonalArray}
\end{equation}
of size $3\times n^{2}$, with every column being a triplet consisting
of the indices of a position in a Latin square and the element in
that position. The matrix $V$ will also be called the \emph{orthogonal
array }(matrix).\index{orthogonal array (matrix)}\emph{ }From now
on, when referring to the orthogonal array of a Latin square, we will
always mean the matrix whose columns are the triplets associated with
the positions of the Latin square, unless otherwise specified.
The definition of ``orthogonal array” in general may be found in reference
\cite{Denes1974Latin} (page 190).\emph{ }In some references, such
as \cite{Cameron2002LatinSquares}, the orthogonal array is defined
as an $n^{2}\times3$ array.
Every row of the orthogonal array of a Latin square consists of the
elements 1, 2, $\cdots$, $n$, and every element appears exactly $n$
times in every row.
For example, the orthogonal array of the Latin square $A_{1}=\left[\begin{array}{ccccc}
1 & 2 & 3 & 4 & 5\\
2 & 3 & 4 & 5 & 1\\
3 & 4 & 5 & 1 & 2\\
4 & 5 & 1 & 2 & 3\\
5 & 1 & 2 & 3 & 4
\end{array}\right]$ is \label{Eg:Orth-Arr}
\[
V_{1}=\left[\begin{array}{ccccc|ccccc|ccccc|ccccc|ccccc}
1 & 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 3 & 3 & 3 & 3 & 3 & 4 & 4 & 4 & 4 & 4 & 5 & 5 & 5 & 5 & 5\\
1 & 2 & 3 & 4 & 5 & 1 & 2 & 3 & 4 & 5 & 1 & 2 & 3 & 4 & 5 & 1 & 2 & 3 & 4 & 5 & 1 & 2 & 3 & 4 & 5\\
1 & 2 & 3 & 4 & 5 & 2 & 3 & 4 & 5 & 1 & 3 & 4 & 5 & 1 & 2 & 4 & 5 & 1 & 2 & 3 & 5 & 1 & 2 & 3 & 4
\end{array}\right].
\]
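The construction of $V_{1}$ from $A_{1}$ can be written directly (a Python sketch; the function name `orthogonal_array` is ours):

```python
def orthogonal_array(Y):
    """3 x n^2 orthogonal array of a Latin square: each column of the
    array is the triplet (row index, column index, entry)."""
    n = len(Y)
    triples = [(i, j, Y[i - 1][j - 1])
               for i in range(1, n + 1) for j in range(1, n + 1)]
    return [list(row) for row in zip(*triples)]  # three rows of length n^2

A1 = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 1], [3, 4, 5, 1, 2],
      [4, 5, 1, 2, 3], [5, 1, 2, 3, 4]]
V1 = orthogonal_array(A1)
print(V1[2][5:10])  # second segment of the third row: [2, 3, 4, 5, 1]
```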
Obviously, reordering the columns of an orthogonal array will not
change the Latin square it corresponds to.
On the other hand, if we can construct a matrix $V$ = $\left[\begin{array}{ccccc}
a_{1} & a_{2} & a_{3} & \cdots\cdots\ & a_{n^{2}}\\
b_{1} & b_{2} & b_{3} & \cdots\cdots\ & b_{n^{2}}\\
c_{1} & c_{2} & c_{3} & \cdots\cdots\ & c_{n^{2}}
\end{array}\right]$ of size $3\times n^{2}$ satisfying the following conditions: \label{Cndt:Orth-Arr}
\begin{itemize}
\item (O1) Every row of the array consists of the elements 1, 2, $\cdots$,
$n$;
\item (O2) Every element $k$ (1$\leqslant k\leqslant n$) appears exactly
$n$ times in every row;
\item (O3) The columns of this array are pairwise orthogonal, that is to
say, any pair of columns share at most one element in the same position;
\end{itemize}
then we can construct a matrix $Y_{2}$ of order $n$ from this orthogonal
array by putting the number $c_{t}$ in the position $\left(a_{t},\:b_{t}\right)$
of an empty matrix of order $n$ ($t$=1, 2, 3, $\cdots$, $n^{2}$).
It is clear that $Y_{2}$ is a Latin square. Hence this array $V$
is the orthogonal array of a certain Latin square $Y_{2}$.
So there is a one-to-one correspondence between the Latin squares
of order $n$ and the arrays of size $3\times n^{2}$ satisfying the
conditions above (ignoring the ordering of the columns of the orthogonal
array). Hence there is no problem in calling a $3\times n^{2}$ matrix
an orthogonal array if it satisfies the 3 conditions mentioned here.
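The reverse construction, placing $c_{t}$ at position $(a_{t},b_{t})$, and the resulting round trip can be sketched as follows (Python; the function names are ours):

```python
def latin_square_from_array(V):
    """Rebuild the Latin square from a 3 x n^2 array satisfying (O1)-(O3):
    the entry c_t is placed at position (a_t, b_t), t = 1, ..., n^2."""
    n = int(round(len(V[0]) ** 0.5))
    Y = [[0] * n for _ in range(n)]
    for a, b, c in zip(V[0], V[1], V[2]):
        Y[a - 1][b - 1] = c
    return Y

def orthogonal_array(Y):
    """Forward direction: the 3 x n^2 array of triplets of Y."""
    n = len(Y)
    triples = [(i, j, Y[i - 1][j - 1])
               for i in range(1, n + 1) for j in range(1, n + 1)]
    return [list(row) for row in zip(*triples)]

# Round trip on the cyclic square of order 5 from the example above:
A1 = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 1], [3, 4, 5, 1, 2],
      [4, 5, 1, 2, 3], [5, 1, 2, 3, 4]]
assert latin_square_from_array(orthogonal_array(A1)) == A1
```

Reordering the columns of the array leaves the reconstructed square unchanged, matching the remark above that the column order of an orthogonal array is irrelevant.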
This means that an orthogonal array of size $3\times n^{2}$ corresponding
to a Latin square $Y$ will still be an orthogonal array after its rows
are permuted, but the new orthogonal array will correspond to another
Latin square (called a \emph{conjugate} or \emph{adjugate}, which
will be discussed later) closely related to $Y$.
\label{lem:Iso_on_OA} The operation of permuting the rows of
a Latin square $Y$ according to a permutation $\alpha$ will correspond
to the action of $\alpha$ on the members in the 1st row of its orthogonal
array. Permuting the columns of a Latin square $Y$ according to a
permutation $\beta$ will correspond to replacing the number $i$
in the 2nd row of its orthogonal array by $\beta(i)$ ($i$ = 1, 2,
$\cdots$, $n$). The transformation $\mathscr{L}_{\gamma}$ acting
on the Latin square $Y$ corresponds to the action of substituting
the number $i$ by $\gamma(i)$ in the 3rd row of the orthogonal array
($i$ = 1, 2, $\cdots$, $n$).
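This correspondence can be checked for $\mathscr{R}_{\alpha}$ on a small example (a Python sketch treating the orthogonal array as a set of triplets; the function names are ours):

```python
def oa(Y):
    """Orthogonal array of Y as a set of triplets (i, j, y_ij), 1-based."""
    n = len(Y)
    return {(i, j, Y[i - 1][j - 1])
            for i in range(1, n + 1) for j in range(1, n + 1)}

def R(alpha, Y):
    """Row transform: row i of Y moves to position alpha(i)."""
    out = [None] * len(Y)
    for i, row in enumerate(Y, start=1):
        out[alpha[i - 1] - 1] = row
    return out

alpha = [2, 4, 1, 3]
Y = [[1, 2, 3, 4], [2, 3, 4, 1], [3, 4, 1, 2], [4, 1, 2, 3]]
# R_alpha acts on the orthogonal array by i -> alpha(i) in the 1st row:
assert oa(R(alpha, Y)) == {(alpha[i - 1], j, c) for (i, j, c) in oa(Y)}
```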
Here we define the representative of an isotopy class or a main class
as the minimal one in lexicographic order. Obviously, there are
other possible standards for an isotopy class or main class
representative. For some purposes, other definitions of the canonical
form of an equivalence class will be more efficient in practice.
These definitions will not be discussed here, but the related algorithms
derived from the conclusions in this paper may rely on them.
\section{Adjugates of a Latin square\label{subsec:Adjugates-LS}}
It is not difficult to see that permuting the rows
of an orthogonal array does not affect the orthogonality of the
columns, so the array after a row permutation is still an orthogonal
array of a certain Latin square. The new Latin square is closely related
to the original one. As the number of reorderings of a sequence
of 3 distinct entries is 6, there are exactly 6 transformations for
permuting the rows of an orthogonal array.
Every column of an orthogonal array consists of 3 elements, the first
is the row index, the second is the column index, and the third is
the entry of the Latin square in the position determined by the two
numbers before it. Let $[\mathrm{r},\mathrm{c},\mathrm{e}]$ or (1)
denote the transformation to keep the original order of the rows of
an orthogonal array; $[\mathrm{c},\mathrm{r},\mathrm{e}]$ or $(1\ 2)$
denote the transformation to interchange the rows 1 and 2 of an orthogonal
array; $[\mathrm{r},\mathrm{e},\mathrm{c}]$ or $(2\ 3)$ denote the
transformation to interchange the rows 2 and 3 of an orthogonal array,
etc. (A similar notation may be found in reference \cite{Brendan2007SmallLS}.)
There are 5 transformations that will indeed change the order of the
rows of an orthogonal array. For convenience, denote the $i$'th row
of the Latin square $Y$ by $Y_{i}$ ($i$ = 1, 2, $\cdots$, $n$).
According to the convention on page \pageref{CVT:Permutation-2},
$Y_{i}$ = $\left[\begin{array}{cccc}
y_{i1}, & y_{i2}, & \cdots, & y_{in}\end{array}\right]$ will sometimes represent the transformation that sends $j$ to $y_{ij}$
($j$ = 1, 2, $\cdots$, $n$), and a permutation $\alpha\in\mathrm{S}_{n}$
will sometimes denote the sequence {[}$\alpha(1),$ $\alpha(2)$,
$\cdots$, $\alpha(n)${]}, depending on the context.
(1) $[\mathrm{r},\mathrm{e},\mathrm{c}]$ or $(2\ 3)$ \\
If the 2nd row and the 3rd row of an orthogonal array $V$ are interchanged,
the result is
\begin{equation}
V^{(\mathrm{I})}=\left[\begin{array}{llll|llll|lll|llll}
1 & 1 & \cdots & 1 & \,2 & 2 & \cdots & 2 & \,\bullet & \bullet & \bullet\, & \,n & n & \cdots & n\\
y_{11} & y_{12} & \cdots & y_{1n} & \,y_{21} & y_{22} & \cdots & y_{2n} & \,\bullet & \bullet & \bullet & \,y_{n1} & y_{n2} & \cdots & y_{nn}\\
1 & 2 & \cdots & n & \,1 & 2 & \cdots & n & \,\bullet & \bullet & \bullet & \,1 & 2 & \cdots & n
\end{array}\right].\label{eq:OrthogonalArray-2}
\end{equation}
Sort the columns of $V^{(\mathrm{I})}$ in lexicographic order,
so that the submatrix consisting of the 1st row and the 2nd row becomes
\begin{equation}
V_{0}=\left[\begin{array}{cccc|cccc|ccc|cccc}
1 & 1 & \cdots & 1\, & \,2 & 2 & \cdots & 2\, & \,\bullet & \bullet & \bullet\, & \,n & n & \cdots & n\\
1 & 2 & \cdots & n\, & \,1 & 2 & \cdots & n\, & \,\bullet & \bullet & \bullet\, & \,1 & 2 & \cdots & n
\end{array}\right].\label{eq:OrthogonalArray_2Row}
\end{equation}
This yields
\begin{equation}
V^{(\mathrm{IA})}=\left[\begin{array}{llll|llll|lll|llll}
1 & 1 & \cdots & 1 & \,2 & 2 & \cdots & 2 & \,\bullet & \bullet & \bullet\, & \,n & n & \cdots & n\\
1 & 2 & \cdots & n & \,1 & 2 & \cdots & n & \,\bullet & \bullet & \bullet & \,1 & 2 & \cdots & n\\
y'_{11} & y'_{12} & \cdots & y'_{1n} & \,y'_{21} & y'_{22} & \cdots & y'_{2n} & \,\bullet & \bullet & \bullet & \,y'_{n1} & y'_{n2} & \cdots & y'_{nn}
\end{array}\right].\label{eq:OrthogonalArray-2b}
\end{equation}
Of course $V^{(\mathrm{IA})}$ is essentially the same as $V^{(\mathrm{I})}$,
as they correspond to the same Latin square. There are $n-1$ vertical
lines in the array $V$, which divide it into $n$ segments. Every
segment of $V$ represents a row of $Y$, as the members of a segment
have the same row index. It is clear that the columns of $V^{(\mathrm{I})}$
in any segment stay in that segment after sorting. That
is to say, the entries in the same row of the original Latin square
$Y$ are still in the same row of the new Latin square (denoted
by $Y^{(\mathrm{I})}$) \label{Sym:Y^(I)} corresponding to the orthogonal
array $V^{(\mathrm{I})}$, because the row index of every entry is
not changed when the 2nd row and the 3rd row of the orthogonal array
are interchanged. In every segment, the 2nd row and the
3rd row make up a permutation in two-row form. It is widely known
that exchanging the two rows of a permutation $\alpha$=$\left(\begin{array}{cccc}
1 & 2 & \cdots & n\\
a_{1} & a_{2} & \cdots & a_{n}
\end{array}\right)$ yields its inverse $\alpha^{-1}$ = $\left(\begin{array}{cccc}
a_{1} & a_{2} & \cdots & a_{n}\\
1 & 2 & \cdots & n
\end{array}\right)$ = $\left(\begin{array}{cccc}
1 & 2 & \cdots & n\\
a'_{1} & a'_{2} & \cdots & a'_{n}
\end{array}\right)$. So the sequence $\left[\begin{array}{cccc}
y'_{i1}, & y'_{i2}, & \cdots, & y'_{in}\end{array}\right]$ is the inverse of $\left[\begin{array}{cccc}
y_{i1}, & y_{i2}, & \cdots, & y_{in}\end{array}\right]$ as both are reorderings of $\left[\begin{array}{cccc}
1, & 2, & \cdots, & n\end{array}\right]$ ($i$ = 1, 2, $\cdots$, $n$). Hence, interchanging the 2nd row
and the 3rd row of the orthogonal array of a Latin square corresponds
to substituting every row of the Latin square by its inverse, i.e.,
$Y^{(\mathrm{I})}$ = $\left(\begin{array}{c}
Y_{1}^{-1}\\
\vdots\\
Y_{n}^{-1}
\end{array}\right)$. This operation is mentioned implicitly in reference \cite{Lam1990NumLS8}.
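This rule, swapping rows 2 and 3 of the orthogonal array, equivalently inverting every row of $Y$, can be verified directly (a Python sketch; the function names are ours):

```python
def adjugate_rec(Y):
    """The [r,e,c] adjugate (swap rows 2 and 3 of the orthogonal array):
    replace every row of Y by its inverse."""
    def inv(a):
        b = [0] * len(a)
        for j, aj in enumerate(a, start=1):
            b[aj - 1] = j
        return b
    return [inv(row) for row in Y]

def from_triples(triples, n):
    """Rebuild a square from triplets (a, b, c): entry c at position (a, b)."""
    Y = [[0] * n for _ in range(n)]
    for a, b, c in triples:
        Y[a - 1][b - 1] = c
    return Y

# The reduced square of order 5 from the Preliminaries:
Y = [[1, 2, 3, 4, 5], [2, 3, 5, 1, 4], [3, 5, 4, 2, 1],
     [4, 1, 2, 5, 3], [5, 4, 1, 3, 2]]
triples = [(i, j, Y[i - 1][j - 1]) for i in range(1, 6) for j in range(1, 6)]
swapped = [(i, c, j) for (i, j, c) in triples]  # interchange rows 2 and 3 of V
assert from_triples(swapped, 5) == adjugate_rec(Y)
```

Since inverting each row twice returns the original square, this adjugate transformation is an involution.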
(2) $[\mathrm{c},\mathrm{r},\mathrm{e}]$ or $(1\ 2)$\\
With regard to the orthogonal array $V$ in \eqref{eq:OrthogonalArray}
of a Latin square $Y=\left(y_{ij}\right)_{n\times n}$, if the 1st
row and the 2nd row of $V$ are interchanged, the result is
\begin{equation}
V^{(\mathrm{II})}=\left[\begin{array}{llll|llll|lll|llll}
1 & 2 & \cdots & n & \,1 & 2 & \cdots & n & \,\bullet & \bullet & \bullet & \,1 & 2 & \cdots & n\\
1 & 1 & \cdots & 1 & \,2 & 2 & \cdots & 2 & \,\bullet & \bullet & \bullet\, & \,n & n & \cdots & n\\
y_{11} & y_{12} & \cdots & y_{1n} & \,y_{21} & y_{22} & \cdots & y_{2n} & \,\bullet & \bullet & \bullet & \,y_{n1} & y_{n2} & \cdots & y_{nn}
\end{array}\right].\label{eq:OrthogonalArray-1}
\end{equation}
It means that every triplet $(i,j,y_{ij})^{\mathrm{T}}$ becomes
$(j,i,y_{ij})^{\mathrm{T}}$; that is to say, the entry in the $(j,i)$
position of the new Latin square (denoted by $Y^{(\mathrm{II})}$)
corresponding to $V^{(\mathrm{II})}$ is the entry $y_{ij}$ in the
$(i,j)$ position of the original Latin square $Y$. Hence the new
Latin square $Y^{(\mathrm{II})}$ is the transpose of the original
one.
In order to understand the transformation $[\mathrm{c},\mathrm{e},\mathrm{r}]$,
we should first introduce the operation $[\mathrm{e},\mathrm{c},\mathrm{r}]$.
(3) $[\mathrm{e},\mathrm{r},\mathrm{c}]$ or $(1\ 3\ 2)$ \\
First, interchange the rows 1 and 2 of $V$, then interchange the
rows 2 and 3. This operation corresponds to replacing the $i$'th
row of $Y$ with the inverse of the $i$'th row of its transpose $Y^{\mathrm{T}}$,
or substituting the $i$'th row of $Y$ by the inverse of the $i$'th
column of $Y$. Denote the result by $Y^{(\mathrm{III})}$. $Y^{(\mathrm{III})}$
= $\left(\begin{array}{c}
\left(Z_{1}^{-1}\right)^{\mathrm{T}}\\
\vdots\\
\left(Z_{n}^{-1}\right)^{\mathrm{T}}
\end{array}\right)$. Here the superscript ``$\mathrm{T}$'' means transpose, as $Z_{i}$
is a column of $Y$ ($i$ = 1, 2, $\cdots$, $n$).
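Case (3) can likewise be checked against its orthogonal-array description: applying the two successive row interchanges to the triplets of $V$ and rebuilding the square gives exactly the matrix whose rows are the inverted columns of $Y$ (a Python sketch; the function names are ours):

```python
def adjugate_erc(Y):
    """The adjugate of case (3): row i of the result is the inverse
    of the i-th column of Y, written as a row."""
    n = len(Y)
    def inv(a):
        b = [0] * n
        for j, aj in enumerate(a, start=1):
            b[aj - 1] = j
        return b
    cols = [[Y[r][i] for r in range(n)] for i in range(n)]
    return [inv(col) for col in cols]

def from_triples(triples, n):
    """Rebuild a square from triplets (a, b, c): entry c at position (a, b)."""
    Y = [[0] * n for _ in range(n)]
    for a, b, c in triples:
        Y[a - 1][b - 1] = c
    return Y

Y = [[1, 2, 3, 4, 5], [2, 3, 5, 1, 4], [3, 5, 4, 2, 1],
     [4, 1, 2, 5, 3], [5, 4, 1, 3, 2]]
triples = [(i, j, Y[i - 1][j - 1]) for i in range(1, 6) for j in range(1, 6)]
# Swap rows 1 and 2, then rows 2 and 3, of the orthogonal array:
permuted = [(j, c, i) for (i, j, c) in triples]
assert from_triples(permuted, 5) == adjugate_erc(Y)
```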
(4) $[\mathrm{e},\mathrm{c},\mathrm{r}]$ or $(1\ 3)$\\
If the 1st row and the 3rd row of $V$ are interchanged, the result is
\begin{equation}
V^{(\mathrm{IV})}=\left[\begin{array}{llll|llll|lll|llll}
y_{11} & y_{12} & \cdots & y_{1n} & \,y_{21} & y_{22} & \cdots & y_{2n} & \,\bullet & \bullet & \bullet & \,y_{n1} & y_{n2} & \cdots & y_{nn}\\
1 & 2 & \cdots & n & \,1 & 2 & \cdots & n & \,\bullet & \bullet & \bullet & \,1 & 2 & \cdots & n\\
1 & 1 & \cdots & 1 & \,2 & 2 & \cdots & 2 & \,\bullet & \bullet & \bullet\, & \,n & n & \cdots & n
\end{array}\right].\label{eq:OrthogonalArray-3}
\end{equation}
It is difficult to find the relation between the original Latin square
$Y$ and the new Latin square $Y^{(\mathrm{IV})}$ corresponding to the
array $V^{(\mathrm{IV})}$ by sorting the columns of $V^{(\mathrm{IV})}$
in lexicographic order. But another way works. Interchanging
column $(l-1)n+k$ and column $(k-1)n+l$ of $V$ ($l$ = 1, 2, $\cdots$,
$n-1$; $k$ = $l+1$, $\cdots$, $n$), i.e., sorting the columns
of $V$ so that the submatrix consisting of the 1st row and the 2nd
row is
\begin{equation}
V_{0}^{(\mathrm{A})}=\left[\begin{array}{cccc|cccc|ccc|cccc}
1 & 2 & \cdots & n\, & \,1 & 2 & \cdots & n\, & \,\bullet & \bullet & \bullet\, & \,1 & 2 & \cdots & n\\
1 & 1 & \cdots & 1\, & \,2 & 2 & \cdots & 2\, & \,\bullet & \bullet & \bullet\, & \,n & n & \cdots & n
\end{array}\right],\label{eq:OrthogonalArray_2Row-B}
\end{equation}
yields
\begin{equation}
V^{(\mathrm{A})}=\left[\begin{array}{llll|llll|lll|llll}
1 & 2 & \cdots & n & \,1 & 2 & \cdots & n & \,\bullet & \bullet & \bullet & \,1 & 2 & \cdots & n\\
1 & 1 & \cdots & 1 & \,2 & 2 & \cdots & 2 & \,\bullet & \bullet & \bullet\, & \,n & n & \cdots & n\\
z_{11} & z_{12} & \cdots & z_{1n} & \,z_{21} & z_{22} & \cdots & z_{2n} & \,\bullet & \bullet & \bullet & \,z_{n1} & z_{n2} & \cdots & z_{nn}
\end{array}\right].\label{eq:OrthogonalArray-b}
\end{equation}
$V^{(\mathrm{A})}$ corresponds to the Latin square $Y$, too. The
vertical lines in the array $V^{(\mathrm{A})}$ divide it into $n$
segments. Every segment corresponds to a column of $Y$ since all
the entries in a segment of $V^{(\mathrm{A})}$ share the same column
index. That is to say, {[} $z_{i1}$, $z_{i2}$, $\cdots$, $z_{in}$
{]} is the $i$'th column of $Y$.
Interchanging the 1st row and the 3rd row of $V^{(\mathrm{A})}$ results
in
\begin{equation}
V^{(\mathrm{IVA})}=\left[\begin{array}{llll|llll|lll|llll}
z_{11} & z_{12} & \cdots & z_{1n} & \,z_{21} & z_{22} & \cdots & z_{2n} & \,\bullet & \bullet & \bullet & \,z_{n1} & z_{n2} & \cdots & z_{nn}\\
1 & 1 & \cdots & 1 & \,2 & 2 & \cdots & 2 & \,\bullet & \bullet & \bullet\, & \,n & n & \cdots & n\\
1 & 2 & \cdots & n & \,1 & 2 & \cdots & n & \,\bullet & \bullet & \bullet & \,1 & 2 & \cdots & n
\end{array}\right].\label{eq:OrthogonalArray-3-A}
\end{equation}
Obviously, $V^{(\mathrm{IVA})}$ and $V^{(\mathrm{IV})}$ correspond
to the same Latin square. Sorting the columns in each segment of $V^{(\mathrm{IVA})}$
so that the 1st row and the 2nd row are the same as in $V_{0}^{(\mathrm{A})}$
yields
\begin{equation}
V^{(\mathrm{IVB})}=\left[\begin{array}{llll|llll|lll|llll}
1 & 2 & \cdots & n & \,1 & 2 & \cdots & n & \,\bullet & \bullet & \bullet & \,1 & 2 & \cdots & n\\
1 & 1 & \cdots & 1 & \,2 & 2 & \cdots & 2 & \,\bullet & \bullet & \bullet\, & \,n & n & \cdots & n\\
z'_{11} & z'_{12} & \cdots & z'_{1n} & \,z'_{21} & z'_{22} & \cdots & z'_{2n} & \,\bullet & \bullet & \bullet & \,z'_{n1} & z'_{n2} & \cdots & z'_{nn}
\end{array}\right].\label{eq:OrthogonalArray-3-B}
\end{equation}
So {[} $z'_{i1}$, $z'_{i2}$, $\cdots$, $z'_{in}$ {]} is the inverse
of {[} $z_{i1}$, $z_{i2}$, $\cdots$, $z_{in}$ {]}. Let the Latin
square corresponding to $V^{(\mathrm{IVB})}$ (or $V^{(\mathrm{IVA})}$,
$V^{(\mathrm{IV})}$) be $Y^{(\mathrm{IV})}$. Therefore the Latin
square $Y^{(\mathrm{IV})}$, obtained from $V^{(\mathrm{IV})}$ by
interchanging the 1st row and the 3rd row of the orthogonal array $V$,
consists of the columns that are the inverses of the columns of $Y$:
$Y^{(\mathrm{IV})}$ = $\left(\begin{array}{ccc}
Z_{1}^{-1}\ & \cdots\ & Z_{n}^{-1}\end{array}\right)$.
(5) $[\mathrm{c},\mathrm{e},\mathrm{r}]$ or $(1\ 2\ 3)$ \\
First interchange rows 1 and 3 of $V$, then interchange rows 2 and 3;
equivalently, first interchange rows 1 and 2, then interchange rows
1 and 3. This operation corresponds to substituting
the $i$'th column of $Y$ by the inverse of the $i$'th column of
its transpose $Y^{\mathrm{T}}$, or replacing the $i$'th column of
$Y$ with the inverse of the $i$'th row of $Y$. Denote the result
by $Y^{(\mathrm{V})}$. \label{Sym:Y^(V)} $Y^{(\mathrm{V})}$ = $\left(\begin{array}{ccc}
\left(Y_{1}^{-1}\right)^{\mathrm{T}}\ & \cdots\ & \left(Y_{n}^{-1}\right)^{\mathrm{T}}\end{array}\right)$.
\begin{lem}
\label{lem:Rel-Ad-T} For $\eta\in\mathrm{S}_{3}$, denote by $\mathcal{F}_{\eta}$
the transformation described above, i.e., $\mathcal{F}_{(1)}(Y)$
= $Y$, $\mathcal{F}_{(1\,2)}(Y)$ = $Y^{(\mathrm{II})}$ = $Y^{\mathrm{T}}$,
$\mathcal{F}_{(2\,3)}(Y)$ = $Y^{(\mathrm{I})}$, $\mathcal{F}_{(1\,3)}(Y)$
= $Y^{(\mathrm{IV})}$, etc. By definition, it is clear that
\begin{equation}
\mathcal{F}_{\eta_{1}}\circ\mathcal{F}_{\eta_{2}}=\mathcal{F}_{\eta_{1}\eta_{2}}\quad(\forall\eta_{1},\eta_{2}\in\mathrm{S}_{3}).
\end{equation}
\end{lem}
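In the orthogonal-array view this lemma is easy to check by machine. The following sketch (an illustration, not part of the paper; the function names and the $0$-indexed conventions are our own) stores a Latin square as its set of $(r,c,e)$ triples, lets $\mathcal{F}_{\eta}$ permute the three coordinates of every triple, and verifies $\mathcal{F}_{\eta_{1}}\circ\mathcal{F}_{\eta_{2}}=\mathcal{F}_{\eta_{1}\eta_{2}}$ for all pairs in $\mathrm{S}_{3}$.

```python
from itertools import permutations

def triples(square):
    """Orthogonal-array view of a Latin square: its set of (row, col, entry) triples."""
    n = len(square)
    return frozenset((r, c, square[r][c]) for r in range(n) for c in range(n))

def conjugate(T, eta):
    """F_eta: coordinate j of every triple moves to position eta[j] (0-indexed)."""
    out = set()
    for t in T:
        s = [None, None, None]
        for j in range(3):
            s[eta[j]] = t[j]
        out.add(tuple(s))
    return frozenset(out)

def compose(eta1, eta2):
    """(eta1 eta2)(j) = eta1(eta2(j)): apply eta2 first, then eta1."""
    return tuple(eta1[eta2[j]] for j in range(3))

# The cyclic Latin square of order 3.
Y = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
T = triples(Y)

# F_{eta1} o F_{eta2} = F_{eta1 eta2} for all eta1, eta2 in S_3.
for e1 in permutations(range(3)):
    for e2 in permutations(range(3)):
        assert conjugate(conjugate(T, e2), e1) == conjugate(T, compose(e1, e2))
```

Here the product $\eta_{1}\eta_{2}$ applies $\eta_{2}$ first, matching the composition convention of the lemma.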
The 6 Latin squares corresponding to the orthogonal arrays obtained by
permuting the rows of the orthogonal array $V$ of the Latin square
$Y$ are called the \emph{conjugate}s\index{conjugates (of a Latin square)}
or \emph{adjugates\index{adjugate}} or \emph{parastrophes} \index{parastrophe}
of $Y$. Of course, $Y$ is a conjugate of itself. The set of the
Latin squares that are isotopic to any conjugate of a Latin square
$Y$ is called the \emph{main class}\index{main class} or \emph{species
\index{species}} or \emph{paratopy class \index{paratopy class}} of
$Y$. If a Latin square $Z$ belongs to the main class of $Y$, then
$Y$ and $Z$ are called \emph{paratopic} \index{paratopic} or \emph{main
class equivalent \index{main class equivalent}}. Sometimes, the set
of the Latin squares that are isotopic to $Y$ or $Y^{\mathrm{T}}$
is called the \emph{type} \index{type} of $Y$ (see \cite{Bailey2003Equivalence}
or \cite{Brendan2007SmallLS}).
In this paper, a special equivalence class, the \emph{inverse type \index{inverse type}},
is defined for convenience in the process of generating the representatives
of all the main classes of Latin squares of a given order. The \emph{inverse
type} of a Latin square $Y$ is the set of the Latin squares that
are isotopic to $Y$ or $Y^{(\mathrm{I})}$, where $Y^{(\mathrm{I})}$
is the Latin square corresponding to the orthogonal array $V^{(\mathrm{I})}$
obtained by interchanging the 2nd row and the 3rd row of the orthogonal
array $V$ of $Y$ as described before. (This idea comes from the notion
of ``row inverse'' in \cite{Lam1990NumLS8}.) The \emph{inverse
type} mentioned here might be called ``row inverse type'' for the
sake of accuracy, since $Y^{(\mathrm{IV})}$ is another type of inverse
(column inverse) of $Y$. The reason for choosing the row inverse type
is that the Latin squares are generated by rows in the following
papers by the author. (In some papers, Latin squares are generated
by columns; then the column inverse type will be useful.)
\section{Relations on Paratopisms\label{subsec:RLT-Prtp}}
The transformation that sends a Latin square to another one paratopic
to it is called a \emph{paratopism} \index{paratopism} or a \emph{paratopic
transformation.} \index{paratopic transformation} Let $\mathcal{P}_{n}$
\label{Sym:mathcal_P_n} \nomenclature[P_n]{ $\mathcal{P}_{n}$ or $\mathcal{P}$}{ the set of all the paratopic transformations of Latin squares of order $n$. \pageref{Sym:mathcal_P_n}}
be the set of all the paratopic transformations of Latin squares of
order $n$, sometimes denoted by $\mathcal{P}$ for short if no ambiguity
arises. It is obvious that $\mathcal{P}$ together with the
composition operation ``$\circ$'' forms a group, called the
\emph{paratopic transformation group} \index{paratopic transformation group (of Latin squares)}
or \emph{paratopism} \emph{group}\index{paratopism group@paratopism\emph{ }group}.
Obviously, all the isotopy transformations are paratopisms.
It is not difficult to verify by hand the following relations between
$\mathcal{F}_{\eta}$ and $\mathscr{C}_{\alpha}$, $\mathscr{R}_{\beta}$,
$\mathscr{L}_{\gamma}$ when their order of composition is exchanged,
provided we are familiar with how the transformations $\mathscr{C}_{\alpha}$,
$\mathscr{R}_{\beta}$, $\mathscr{L}_{\gamma}$ change the rows and
columns in detail.
\begin{thm}
\label{thm:Cjgt-ISTP-Comsition-order-1}For all $\mathscr{T}\in\mathscr{I}_{n}$
and all $\alpha,\beta,\gamma\in\mathrm{S}_{n}$, the following equalities
hold:\\
\hphantom{$\ MM$ } $\mathscr{T}\circ\mathcal{F}_{(1)}$ = $\mathcal{F}_{(1)}\circ\mathscr{T}$,
\\
\hphantom{$\ MM$ } $\mathscr{R}_{\alpha}\circ\mathcal{F}_{(1\,2)}=\mathcal{F}_{(1\,2)}\circ\mathscr{C}_{\alpha}$,
$\quad\ \mathscr{C}_{\beta}\circ\mathcal{F}_{(1\,2)}=\mathcal{F}_{(1\,2)}\circ\mathscr{R}_{\beta}$,
$\quad\ \mathscr{L}_{\gamma}\circ\mathcal{F}_{(1\,2)}=\mathcal{F}_{(1\,2)}\circ\mathscr{L}_{\gamma}$,\\
\hphantom{$\ MM$ } $\mathscr{R}_{\alpha}\circ\mathcal{F}_{(2\,3)}=\mathcal{F}_{(2\,3)}\circ\mathscr{R}_{\alpha}$,
$\quad\ \mathscr{C}_{\beta}\circ\mathcal{F}_{(2\,3)}=\mathcal{F}_{(2\,3)}\circ\mathscr{L}_{\beta}$,
$\quad\ \mathscr{L}_{\gamma}\circ\mathcal{F}_{(2\,3)}=\mathcal{F}_{(2\,3)}\circ\mathscr{C}_{\gamma}$,\\
\hphantom{$\ MM$ } $\mathscr{R}_{\alpha}\circ\mathcal{F}_{(1\,3)}=\mathcal{F}_{(1\,3)}\circ\mathscr{L}_{\alpha}$,
$\quad\ \mathscr{C}_{\beta}\circ\mathcal{F}_{(1\,3)}=\mathcal{F}_{(1\,3)}\circ\mathscr{C}_{\beta}$,
$\quad\ \mathscr{L}_{\gamma}\circ\mathcal{F}_{(1\,3)}=\mathcal{F}_{(1\,3)}\circ\mathscr{R}_{\gamma}$.
\end{thm}
The equalities in the 1st line and the 2nd line are obvious. Here
we explain some equalities in the 3rd line and the 4th line; the reader
can prove the other equalities in the same way without difficulty.
By the convention on page \pageref{CVT:Permutation-3},
$Y=\left(\begin{array}{c}
Y_{1}\\
\vdots\\
Y_{n}
\end{array}\right)$ $\overset{\mathcal{F}_{(2\,3)}}{\longrightarrow}$ $\left(\begin{array}{c}
Y_{1}^{-1}\\
\vdots\\
Y_{n}^{-1}
\end{array}\right)$ $\overset{\mathscr{R}_{\alpha}}{\longrightarrow}$ $\left(\begin{array}{c}
Y_{\alpha^{-1}(1)}^{-1}\\
\vdots\\
Y_{\alpha^{-1}(n)}^{-1}
\end{array}\right)$ $\overset{\mathcal{F}_{(2\,3)}}{\longleftarrow}$ $\left(\begin{array}{c}
Y_{\alpha^{-1}(1)}\\
\vdots\\
Y_{\alpha^{-1}(n)}
\end{array}\right)$ $\overset{\mathscr{R}_{\alpha}}{\longleftarrow}$ $\left(\begin{array}{c}
Y_{1}\\
\vdots\\
Y_{n}
\end{array}\right)$, hence $\mathscr{R}_{\alpha}\circ\mathcal{F}_{(2\,3)}$=$\mathcal{F}_{(2\,3)}\circ\mathscr{R}_{\alpha}$
holds.
$Y=\left(\begin{array}{c}
Y_{1}\\
\vdots\\
Y_{n}
\end{array}\right)$ $\overset{\mathcal{F}_{(2\,3)}}{\longrightarrow}$ $\left(\begin{array}{c}
Y_{1}^{-1}\\
\vdots\\
Y_{n}^{-1}
\end{array}\right)$ $\overset{\mathscr{C}_{\beta}}{\longrightarrow}$ $\left(\begin{array}{c}
Y_{1}^{-1}\beta^{-1}\\
\vdots\\
Y_{n}^{-1}\beta^{-1}
\end{array}\right)$ $\overset{\mathcal{F}_{(2\,3)}}{\longleftarrow}$ $\left(\begin{array}{c}
\beta Y_{1}\\
\vdots\\
\beta Y_{n}
\end{array}\right)$ $\overset{\mathscr{L}_{\beta}}{\longleftarrow}$ $\left(\begin{array}{c}
Y_{1}\\
\vdots\\
Y_{n}
\end{array}\right)$, so $\mathscr{C}_{\beta}\circ\mathcal{F}_{(2\,3)}$ = $\mathcal{F}_{(2\,3)}\circ\mathscr{L}_{\beta}$.
$Y=\left(\begin{array}{c}
Y_{1}\\
\vdots\\
Y_{n}
\end{array}\right)$ $\overset{\mathcal{F}_{(2\,3)}}{\longrightarrow}$ $\left(\begin{array}{c}
Y_{1}^{-1}\\
\vdots\\
Y_{n}^{-1}
\end{array}\right)$ $\overset{\mathscr{L}_{\gamma}}{\longrightarrow}$ $\left(\begin{array}{c}
\gamma Y_{1}^{-1}\\
\vdots\\
\gamma Y_{n}^{-1}
\end{array}\right)$ $\overset{\mathcal{F}_{(2\,3)}}{\longleftarrow}$ $\left(\begin{array}{c}
Y_{1}\gamma^{-1}\\
\vdots\\
Y_{n}\gamma^{-1}
\end{array}\right)$ $\overset{\mathscr{C}_{\gamma}}{\longleftarrow}$ $\left(\begin{array}{c}
Y_{1}\\
\vdots\\
Y_{n}
\end{array}\right)$, therefore $\mathscr{L}_{\gamma}\circ\mathcal{F}_{(2\,3)}$=$\mathcal{F}_{(2\,3)}\circ\mathscr{C}_{\gamma}$.
$Y$=$\left(\begin{array}{ccc}
Z_{1}, & \cdots, & Z_{n}\end{array}\right)$ $\overset{\mathcal{F}_{(1\,3)}}{\longrightarrow}$ $\left(\begin{array}{ccc}
Z_{1}^{-1}, & \cdots, & Z_{n}^{-1}\end{array}\right)$ $\overset{\mathscr{R}_{\alpha}}{\longrightarrow}$ $\left(\begin{array}{ccc}
Z_{1}^{-1}\alpha^{-1}, & \cdots, & Z_{n}^{-1}\alpha^{-1}\end{array}\right)$,\\
$Y$=$\left(\begin{array}{ccc}
Z_{1}, & \cdots, & Z_{n}\end{array}\right)$ $\overset{\mathscr{L}_{\alpha}}{\longrightarrow}$ $\left(\begin{array}{ccc}
\alpha Z_{1}, & \cdots, & \alpha Z_{n}\end{array}\right)$ $\overset{\mathcal{F}_{(1\,3)}}{\longrightarrow}$ $\left(\begin{array}{ccc}
Z_{1}^{-1}\alpha^{-1}, & \cdots, & Z_{n}^{-1}\alpha^{-1}\end{array}\right)$,\\
then $\mathscr{R}_{\alpha}\circ\mathcal{F}_{(1\,3)}$=$\mathcal{F}_{(1\,3)}\circ\mathscr{L}_{\alpha}$.
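These exchange relations can also be checked mechanically in the orthogonal-array model. The sketch below is our illustration only (the names and the $0$-indexed encoding of triples are assumptions, not the paper's notation): it represents $\mathscr{R}_{\alpha}$, $\mathscr{C}_{\beta}$, $\mathscr{L}_{\gamma}$ and $\mathcal{F}_{\eta}$ as actions on the set of $(r,c,e)$ triples of a Latin square, with the right factor of ``$\circ$'' acting first, and tests the relations just derived on random permutations.

```python
from random import Random

def triples(square):
    """Orthogonal-array view: the set of (row, col, entry) triples."""
    n = len(square)
    return frozenset((r, c, square[r][c]) for r in range(n) for c in range(n))

def conj(T, eta):
    """F_eta: coordinate j of each triple moves to position eta[j] (0-indexed)."""
    out = set()
    for t in T:
        s = [None, None, None]
        for j in range(3):
            s[eta[j]] = t[j]
        out.add(tuple(s))
    return frozenset(out)

def R(T, a):  # row permutation: row r is renamed a[r]
    return frozenset((a[r], c, e) for (r, c, e) in T)

def C(T, b):  # column permutation
    return frozenset((r, b[c], e) for (r, c, e) in T)

def L(T, g):  # letter (entry) permutation
    return frozenset((r, c, g[e]) for (r, c, e) in T)

SWAP23 = (0, 2, 1)  # the transposition (2 3) on coordinates, 0-indexed
SWAP13 = (2, 1, 0)  # the transposition (1 3)

T = triples([[(r + c) % 4 for c in range(4)] for r in range(4)])
rng = Random(1)
for _ in range(20):
    a = tuple(rng.sample(range(4), 4))
    assert R(conj(T, SWAP23), a) == conj(R(T, a), SWAP23)  # R∘F_(2 3) = F_(2 3)∘R
    assert C(conj(T, SWAP23), a) == conj(L(T, a), SWAP23)  # C∘F_(2 3) = F_(2 3)∘L
    assert R(conj(T, SWAP13), a) == conj(L(T, a), SWAP13)  # R∘F_(1 3) = F_(1 3)∘L
```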
With the equalities above, it is easy to obtain the corresponding properties
when $\mathcal{F}_{(1\,3\,2)}$ or $\mathcal{F}_{(1\,2\,3)}$ is composed
with $\mathscr{R}_{\alpha}$, $\mathscr{C}_{\beta}$, or $\mathscr{L}_{\gamma}$.
We then have:
\begin{thm}
\label{thm:Cjgt-ISTP-Comsition-order-2}For all $\alpha,\beta,\gamma\in\mathrm{S}_{n}$
and all $\mathscr{T}\in\mathscr{I}_{n}$,\\
\hphantom{$\ MM$ }$\mathscr{T}\circ\mathcal{F}_{(1)}=\mathcal{F}_{(1)}\circ\mathscr{T}$,\textup{}\\
\hphantom{$\ MM$ }\textup{$\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\right)\circ\mathcal{F}_{(1\,2)}=\mathcal{F}_{(1\,2)}\circ\left(\mathscr{R}_{\beta}\circ\mathscr{C}_{\alpha}\circ\mathscr{L}_{\gamma}\right)$,
}\\
\hphantom{$\ MM$ }\textup{$\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\right)\circ\mathcal{F}_{(1\,3)}=\mathcal{F}_{(1\,3)}\circ\left(\mathscr{R}_{\gamma}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\alpha}\right)$,}\\
\hphantom{$\ MM$ }\textup{$\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\right)\circ\mathcal{F}_{(2\,3)}=\mathcal{F}_{(2\,3)}\circ\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\gamma}\circ\mathscr{L}_{\beta}\right)$.}
\end{thm}
Since $\mathcal{F}_{(1\,2\,3)}$ = $\mathcal{F}_{(1\,3)}\circ\mathcal{F}_{(1\,2)}$
and $\mathcal{F}_{(1\,3\,2)}$ = $\mathcal{F}_{(1\,2)}\circ\mathcal{F}_{(1\,3)}$,
we obtain:
\begin{thm}
\label{thm:Cjgt-ISTP-Comsition-order-3}For all $\alpha,\beta,\gamma\in\mathrm{S}_{n}$,\\
\hphantom{$\ MM$ }\textup{$\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\right)\circ\mathcal{F}_{(1\,2\,3)}=\mathcal{F}_{(1\,2\,3)}\circ\left(\mathscr{R}_{\beta}\circ\mathscr{C}_{\gamma}\circ\mathscr{L}_{\alpha}\right)$,
}\\
\hphantom{$\ MM$ }\textup{$\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\right)\circ\mathcal{F}_{(1\,3\,2)}=\mathcal{F}_{(1\,3\,2)}\circ\left(\mathscr{R}_{\gamma}\circ\mathscr{C}_{\alpha}\circ\mathscr{L}_{\beta}\right)$. }
\end{thm}
Here we denote an isotopism by $\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}$,
not $\mathscr{C}_{\beta}\circ\mathscr{R}_{\alpha}\circ\mathscr{L}_{\gamma}$
(although $\mathscr{R}_{\alpha}$, $\mathscr{C}_{\beta}$, $\mathscr{L}_{\gamma}$
commute pairwise); the reason is that when exchanging the positions of
$\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\right)$
and $\mathcal{F}_{\eta}$, the effect is to permute the subscripts
of the three transformations according to the permutation $\eta\in\mathrm{S}_{3}$,
as shown above. (Just substitute $\alpha,\beta,\gamma$ for 1, 2, 3,
respectively. For instance, $(1\,2\,3)$ becomes $\left(\alpha\,\beta\,\gamma\right)$,
so when moving $\mathcal{F}_{(1\,2\,3)}$ from the right side of $\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\right)$
to the left side, $\alpha,\beta,\gamma$ become $\beta,\gamma,\alpha$,
respectively.) So we can write these formulas as below.
\begin{thm}
\emph{(main result 1)} \label{thm:Cjgt-ISTP-Comsition-order-General}
For all $\alpha_{1},\alpha_{2},\alpha_{3}\in\mathrm{S}_{n}$,
all $\beta_{1},\beta_{2},\beta_{3}\in\mathrm{S}_{n}$,
and all $\eta\in\mathrm{S}_{3}$,
\begin{align}
\left(\mathscr{R}_{\alpha_{1}}\circ\mathscr{C}_{\alpha_{2}}\circ\mathscr{L}_{\alpha_{3}}\right)\circ\mathcal{F}_{\eta} & =\mathcal{F}_{\eta}\circ\left(\mathscr{R}_{\alpha_{\eta(1)}}\circ\mathscr{C}_{\alpha_{\eta(2)}}\circ\mathscr{L}_{\alpha_{\eta(3)}}\right),\label{eq:Cjgt-ISTP-Comsition-order-1}\\
\mathcal{F}_{\eta}\circ\left(\mathscr{R}_{\beta_{1}}\circ\mathscr{C}_{\beta_{2}}\circ\mathscr{L}_{\beta_{3}}\right) & =\left(\mathscr{R}_{\beta_{\eta^{-1}(1)}}\circ\mathscr{C}_{\beta_{\eta^{-1}(2)}}\circ\mathscr{L}_{\beta_{\eta^{-1}(3)}}\right)\circ\mathcal{F}_{\eta}.\label{eq:Cjgt-ISTP-Comsition-order-2}
\end{align}
\end{thm}
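Equality \eqref{eq:Cjgt-ISTP-Comsition-order-1} can be verified exhaustively over $\eta\in\mathrm{S}_{3}$ with random row, column and letter permutations, again in the triple model of the orthogonal array. The sketch below is an illustration only; the encoding (0-indexed coordinates, composition applying the right factor first) is our assumption.

```python
from itertools import permutations
from random import Random

def conj(T, eta):
    """F_eta: coordinate j of each (row, col, entry) triple moves to position eta[j]."""
    out = set()
    for t in T:
        s = [None, None, None]
        for j in range(3):
            s[eta[j]] = t[j]
        out.add(tuple(s))
    return frozenset(out)

def isotopy(T, a1, a2, a3):
    """R_{a1} ∘ C_{a2} ∘ L_{a3}: rename rows, columns and letters."""
    return frozenset((a1[r], a2[c], a3[e]) for (r, c, e) in T)

n = 5
T = frozenset((r, c, (r + c) % n) for r in range(n) for c in range(n))
rng = Random(0)
for eta in permutations(range(3)):
    for _ in range(10):
        a = [tuple(rng.sample(range(n), n)) for _ in range(3)]
        # (R∘C∘L)∘F_eta = F_eta∘(R∘C∘L with subscripts permuted by eta)
        lhs = isotopy(conj(T, eta), a[0], a[1], a[2])
        rhs = conj(isotopy(T, a[eta[0]], a[eta[1]], a[eta[2]]), eta)
        assert lhs == rhs
```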
With Lemmas \ref{lem:Iso_on_OA} and \ref{lem:Rel-Ad-T}, it is not
difficult to explain the theorems described above, although the
explanation is not very intuitive.
It is clear that any paratopic transformation is the composition of
an isotopism $\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\right)$
($\alpha,\beta,\gamma$ $\in\mathscr{\mathrm{S}}_{n}$) and a certain
conjugate transformation $\mathcal{F}_{\eta}$ ($\eta\in\mathscr{\mathrm{S}}_{3}$).
Since $\mathscr{I}_{n}$=$\left\{ \left.\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\,\right|\,\alpha,\beta,\gamma\in\mathscr{\mathrm{S}}_{n}\,\right\} $
$\simeq$ $\mathscr{\mathrm{S}}_{n}^{3}$, it is convenient to denote
$\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\circ\mathcal{F}_{\eta}$
by $\mathscr{P}\left(\alpha,\beta,\gamma,\eta\right)$ for short.
So we have $\left|\mathcal{P}_{n}\right|$ = $\left|\mathscr{\mathrm{S}}_{n}^{3}\right|\times\left|\mathscr{\mathrm{S}}_{3}\right|$
= $\left|\mathscr{I}_{n}\right|$$\cdot$$\left|\mathscr{\mathrm{S}}_{3}\right|$
= $6\left(n!\right)^{3}$. Let $\mathscr{P}$ : $\mathscr{\mathrm{S}}_{n}^{3}\times\mathscr{\mathrm{S}}_{3}$
$\rightarrow$ $\mathcal{P}_{n}$, $\left(\alpha,\beta,\gamma,\eta\right)$
$\longmapsto$ $\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\circ\mathcal{F}_{\eta}$,
it is clear that $\mathscr{P}$ is a bijection. As a set, $\mathcal{P}_{n}$
is in bijection with $\mathscr{I}_{n}\times\mathrm{S}_{3}$
or $\mathrm{S}_{n}^{3}\times\mathrm{S}_{3}$,
but as a group, $\mathcal{P}_{n}$ is not isomorphic to $\mathscr{I}_{n}\times\mathrm{S}_{3}$
(see \cite{Brendan2007SmallLS}), because $\mathscr{P}$ is not compatible
with the multiplication in $\mathcal{P}_{n}$: $\left(\mathscr{R}_{\alpha}\circ\mathscr{C}_{\beta}\circ\mathscr{L}_{\gamma}\right)$
and $\mathcal{F}_{\eta}$ do not commute unless $\eta=(1)$.
\begin{thm}
\emph{(main result 2)} For all $\alpha_{1},\alpha_{2},\alpha_{3},$
$\beta_{1},\beta_{2},\beta_{3}\in\mathrm{S}_{n}$ and all $\eta,\zeta\in\mathrm{S}_{3}$,\\
\hphantom{$\ MM$ } $\mathscr{P}\left(\alpha_{1},\alpha_{2},\alpha_{3},\eta\right)$
$\circ$ $\mathscr{P}\left(\beta_{1},\beta_{2},\beta_{3},\zeta\right)$
$=$ $\mathscr{P}\left(\alpha_{1}\beta_{\eta^{-1}(1)},\alpha_{2}\beta_{\eta^{-1}(2)},\alpha_{3}\beta_{\eta^{-1}(3)},\eta\zeta\right).$
\end{thm}
Usually, $\mathscr{P}\left(\alpha_{1}\beta_{\eta^{-1}(1)},\alpha_{2}\beta_{\eta^{-1}(2)},\alpha_{3}\beta_{\eta^{-1}(3)},\eta\zeta\right)$
differs from $\mathscr{P}\left(\alpha_{1}\beta_{1},\alpha_{2}\beta_{2},\alpha_{3}\beta_{3},\eta\zeta\right)$.
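The composition law can likewise be tested by machine: encode $\mathscr{P}(\alpha_{1},\alpha_{2},\alpha_{3},\eta)$ as an action on the triple set of the orthogonal array, with $\mathcal{F}_{\eta}$ acting first (the right factor of ``$\circ$'' acts first). A minimal sketch, with our own 0-indexed encoding (an illustration, not the paper's notation):

```python
from itertools import permutations
from random import Random

def conj(T, eta):
    """F_eta on the triple set: coordinate j moves to position eta[j]."""
    out = set()
    for t in T:
        s = [None, None, None]
        for j in range(3):
            s[eta[j]] = t[j]
        out.add(tuple(s))
    return frozenset(out)

def P(T, a, eta):
    """Paratopism P(a[0], a[1], a[2], eta) = R∘C∘L∘F_eta: F_eta acts first."""
    S = conj(T, eta)
    return frozenset((a[0][r], a[1][c], a[2][e]) for (r, c, e) in S)

def inv(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def pm(p, q):
    """Product of permutations: apply q first, then p."""
    return tuple(p[q[j]] for j in range(len(p)))

n = 4
T = frozenset((r, c, (r + c) % n) for r in range(n) for c in range(n))
rng = Random(7)
for eta in permutations(range(3)):
    for zeta in permutations(range(3)):
        a = [tuple(rng.sample(range(n), n)) for _ in range(3)]
        b = [tuple(rng.sample(range(n), n)) for _ in range(3)]
        ei = inv(eta)
        lhs = P(P(T, b, zeta), a, eta)  # P(a, eta) ∘ P(b, zeta)
        rhs = P(T, [pm(a[i], b[ei[i]]) for i in range(3)], pm(eta, zeta))
        assert lhs == rhs
```

If the law held with the naive subscripts $\alpha_{i}\beta_{i}$, $\mathscr{P}$ would be an isomorphism onto a direct product; the twist by $\eta^{-1}$ is exactly the failure noted in the text.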
\section{Application}
With the theorems above, we can avoid generating the orthogonal
array when producing the adjugates of a Latin square, which improves
computational efficiency, especially when writing source code. It
also saves a lot of time when generating all the representatives
of the main classes of Latin squares of a given order: using these
relations together with some properties of cycle structures and of
isotopic representatives, we can avoid generating many Latin
squares when testing main class representatives.
When generating the invariant group of a Latin square under paratopic
transformations, it is more convenient to simplify the composition
of two or more paratopisms by the relations described in Section \ref{subsec:RLT-Prtp}
(the gain in computational efficiency is not conspicuous for a single
Latin square, but it is remarkable for a large number of Latin squares).
Some new algorithms for related problems, such as algorithms for
testing or generating a representative of an equivalence class (a
main class or an isotopy class, etc.) and algorithms for generating
the invariant group of a Latin square under certain transformations (isotopisms
and main class transformations), will be presented in the near future.
They are different from those described in \cite{Petteri2006CACD}.
\phantomsection\addcontentsline{toc}{section}{\refname}
\end{document} |
\begin{document}
\title[Lower bounds for uncentered maximal functions on metric measure space]
{Lower bounds for uncentered maximal functions on metric measure space}
\author{Wu-yi Pan and Xin-han Dong$^{\mathbf{*}}$}
\address{Key Laboratory of Computing and Stochastic Mathematics (Ministry of Education),
School of Mathematics and Statistics, Hunan Normal University, Changsha, Hunan 410081, P.
R. China}
\email{[email protected]}
\email{[email protected]}
\date{\today}
\keywords{Uncentered Hardy-Littlewood maximal operator, Besicovitch covering property, Radon measure, Lower $L^p$-bounds.}
\thanks{The research is supported in part by the NNSF of China (Nos. 11831007, 12071125)}
\thanks{$^{\mathbf{*}}$Corresponding author}
\subjclass[2010]{42B25}
\begin{abstract}
We show that the uncentered Hardy-Littlewood maximal operators associated with a Radon measure $\mu$ on $\mathbb{R}^d$ have uniform lower $L^p$-bounds (independent of $\mu$) that are strictly greater than $1$, provided $\mu$ satisfies a mild continuity assumption and $\mu(\mathbb{R}^d)=\infty$. We actually do this in the more general context of metric measure spaces $(X,d,\mu)$ satisfying the Besicovitch covering property.
In addition, we illustrate by counterexamples that the continuity condition cannot be dropped.
\end{abstract}
\maketitle
\section{ Introduction }
Let $Mf$ be the uncentered Hardy-Littlewood maximal function of $f$, where $f$ is a $p$-th power Lebesgue integrable function for $p>1$. The maximal function plays at least two roles. Firstly, it provides an important example of a sublinear operator used in real analysis and harmonic analysis. Secondly, $Mf$ is comparable to the original function $f$ in the $L^p$ sense. Moreover, by Riesz's sunrise lemma, Lerner \cite{Le10} proved for the real line that \begin{equation*}
\Vert Mf\Vert_{L^p(\mathbb{R})} \geq \left(\frac{p}{p-1}\right)^{\frac{1}{p}}\Vert f\Vert_{L^p(\mathbb{R})}.
\end{equation*}
Ivanisvili et al. found the corresponding lower bound for higher-dimensional Euclidean space in \cite{IJN17}.
One may ask whether, in a general metric measure space $(X,d,\mu)$, for given $1<p<\infty$, there exists a constant $\varepsilon_{p,d,\mu}>0$ such that
\begin{equation*}
\Vert M_\mu f\Vert_{L^p(\mu)} \geq (1+\varepsilon_{p,d,\mu})\Vert f\Vert_{L^p(\mu)}, \quad\text{for all $f\in L^p(\mu)$},
\end{equation*}
where $M_\mu$ (cf. \eqref{de:max} below for the definition) is the uncentered Hardy-Littlewood maximal operator associated with $\mu$.
Unfortunately, such a measure cannot be finite. In fact, $1_{X}$ is a fixed point of $M_\mu$ in $L^p(\mu)$ if $\mu(X)< \infty$. This suggests that we only need to consider infinite measures.
One of our main results is the following.
\begin{thm}\label{thm:1.1}
Let $(X,d,\mu)$ be a metric measure space satisfying the Besicovitch covering property with constant $L$. Suppose that \begin{equation}\label{con:1.1}
\mu(\{x\in \operatorname {supp} (\mu): r\mapsto \mu(B(x,r))\; \text{is discontinuous}\})=0
\end{equation} and $\mu(X)=\infty$. Then
\begin{equation}\label{ineq:main}
\Vert M_\mu f \Vert_p \geq \left(1+\frac{1}{(p-1)L}\right)^{\frac{1}{p}}\Vert f\Vert_p, \quad \text{for all}\;\; f\in L^p(\mu).
\end{equation}
\end{thm}
Since any normed space $(\mathbb{R}^d,\Vert\cdot\Vert)$ has the Besicovitch covering property with a constant which equals its strict Hadwiger number $H^*(\mathbb{R}^d,\Vert\cdot\Vert)$, we derive
\begin{cor}\label{cor:1.2}
Let $\Vert\cdot\Vert$ be any norm on $\mathbb{R}^d$. Let $\mu$ be a Radon measure on $(\mathbb{R}^d,\Vert\cdot\Vert)$ such that
$\mu(\mathbb{R}^d)=\infty.$ Suppose that \begin{equation*}
\mu(\{x\in \operatorname {supp} (\mu): r\mapsto \mu(B(x,r))\; \text{is discontinuous}\})=0.
\end{equation*} Then
\begin{equation*}
\Vert M_\mu f \Vert_p \geq \left(1+\frac{1}{(p-1)H^*(\mathbb{R}^d,\Vert\cdot\Vert)}\right)^{\frac{1}{p}}\Vert f\Vert_p,\quad \text{for all $f\in L^p(\mu)$}.
\end{equation*}
\end{cor}
Now let us make a few remarks on condition \eqref{con:1.1}. By the Lebesgue decomposition theorem, any Radon measure can be divided into three parts: the absolutely continuous part, the singular continuous part, and the discrete part. If the discrete part is present (that is, the measure is not continuous, i.e., it has atoms), then the measure does not satisfy \eqref{con:1.1}. But if it contains only the absolutely continuous part, the condition holds.
Indeed, Theorem \ref{thm:1.1} does not hold in general for infinite Radon measures. We can construct a measure containing atoms to show that \eqref{ineq:main} is invalid.
On the other hand, if $d=1,2$, then \eqref{con:1.1} is satisfied for every Radon measure on the Euclidean space $(\mathbb{R}^d,\Vert\cdot\Vert_2)$ having no atoms. Hence for this metric measure space, we have $\Vert M_\mu f\Vert_p >\Vert f\Vert_p$ for all $f\in L^p$. More precisely, we prove
\begin{thm}\label{thm:1.3}
If $\mu$ is a Radon continuous measure on $(\mathbb{R}^2,\Vert\cdot\Vert_2)$ such that $\mu(\mathbb{R}^2)=\infty$, then
\begin{equation*}
\Vert M_\mu f \Vert_p \geq \left(1+\frac{1}{5(p-1)}\right)^{\frac{1}{p}}\Vert f\Vert_p,\quad\text{for all $f\in L^p(\mu)$}.
\end{equation*}
\end{thm}
It is worth noting that Lerner's theorem, formulated in this more general setting, remains true.
\begin{thm}\label{thm:1.4}
If $\mu$ is a Radon continuous measure on $(\mathbb{R},\Vert\cdot\Vert_2)$ such that $\mu(\mathbb{R})=\infty$, then
\begin{equation*}
\Vert M_\mu f \Vert_p \geq \left(\frac{p}{p-1}\right)^{\frac{1}{p}}\Vert f\Vert_p,\quad\text{for all $f\in L^p(\mu)$}.
\end{equation*}
\end{thm}
Besides, we discuss the lower $L^p$-bounds of the maximal operator with respect to more general measures, and we extend the result of Theorem \ref{thm:1.4} to a class of measures containing only one or two atoms. The proof relies on an approach adapted from a variant of Theorem \ref{thm:1.1}, which differs from the traditional proof of Theorem \ref{thm:1.4}. It also shows that condition \eqref{con:1.1} is not necessary.
The paper is organized as follows.
In Section \ref{S2}, we give some definitions and basic properties of metric measure spaces $(X,d,\mu)$ and the corresponding maximal operators. In Section \ref{S3}, we prove Theorem \ref{thm:1.1} and Corollary \ref{cor:1.2}. In Section \ref{S4}, we mainly restrict our attention to the case of Radon measures in Euclidean space; the counterexample and other theorems mentioned above are presented.
$\\ \hspace*{\fill} \\$
\section{Preliminaries}\label{S2}
Throughout the whole paper, we use the same notation as \cite{Al21}. If not declared specifically, the symbol $B^{cl}(x,r):=\{y\in X: d(x,y)\leq r\}$ denotes a closed ball, and $B^o(x,r):=\{y\in X: d(x,y)< r\}$ an open ball. Because most of the definitions in this paper do not depend on the choice between open and closed balls, we generally write $B(x,r)$, but it should be noted that all balls are taken to be of the same kind whenever we use $B(x,r)$. When referring to $B$, we assume that its center and radius have been fixed.
\begin{de}
A measure on a topological space is Borel if it is defined on the $\sigma$-algebra generated by all open sets.
\end{de}
Sometimes we need to study problems on a $\sigma$-algebra larger than the Borel $\sigma$-algebra, so we define Borel semi-regular measures.
\begin{de}
A measure space $(X,\mathcal{A},\mu)$ on a topological space is Borel semi-regular if for every $A\in
\mathcal{A}$ there exists a Borel set $B$ such that $\mu((A\backslash B)\cup (B\backslash A))=0$.
\end{de}
In general, the maximal operator problem studied in this paper shows no essential difference between these two kinds of measures. Therefore, although our conclusions are valid for general Borel semi-regular measures, the proofs are always carried out under the assumption of a Borel measure. We will explain the reason in the proof of Lemma \ref{lem:3.4}.
\begin{de}\cite[Definition 7.2.1]{Bo07}
A Borel measure is $\tau$-additive, if for every collection $\mathcal{B}=\{U_{\lambda}:\lambda\in \Lambda\}$ of open sets,
\begin{equation*}
\mu\left(\bigcup_{\lambda}U_\lambda\right)=\sup_{\mathcal{F}}\mu\left(\bigcup_{i=1}^nU_{\lambda_i}\right)
\end{equation*}
where the supremum is taken over all finite subcollections $\mathcal{F}\subset \mathcal{B}$. A Borel measure on a metric space is locally finite if there exists an $r > 0$ such that $\mu(B(x,r))<\infty$ for every $x\in X$.
\end{de}
Recall that the complement of the support of a Borel measure, $(\operatorname {supp} (\mu))^c:=\bigcup\{B(x,r):x\in X,\mu(B(x,r))=0\}$, is open, and hence measurable. If $\mu$ is $\tau$-additive, we immediately know from the definition that $\mu$ has full support, i.e., $\mu(X\backslash\operatorname {supp} (\mu))=0$.
We also note that if a metric space is separable, then it is second countable, and hence $\tau$-additivity holds for all Borel measures.
In particular, locally finite Borel regular measures
in complete separable metric spaces are equivalent to Radon measures; see e.g. Schwartz \cite[Part I, §11.3]{Sc73}. Note that our definition of local finiteness is different from that of Aldaz \cite[Definition 2.1]{Al21}, and we need to distinguish them carefully. Local finiteness in the sense of Aldaz means that every bounded set has finite measure.
\begin{de}
We say that $(X,d,\mu)$ is a metric measure space if $\mu$ is a Borel semi-regular measure and its restriction on Borel set is $\tau$-additive and locally finite.
\end{de}
Such a measure is undoubtedly $\sigma$-finite. A function $f$ is said to be locally integrable, written $f\in L^{1}_{\operatorname {loc}}(\mu)$, if $\int_B|f|d\mu<\infty$ for each ball $B$.
For $1\leq p<\infty$, we define
$
L^{p}_{\operatorname {loc}}(\mu):=\{f:|f|^p\in L^{1}_{\operatorname {loc}}(\mu)\}.
$
Recall the \emph{uncentered Hardy-Littlewood maximal operator} acting on a locally integrable function $f$ by
\begin{equation}\label{de:max}
M_{\mu}f(x):=\sup_{x\in B;\mu(B)>0}\frac{1}{\mu(B)}\int_{B}|f|d\mu, \quad x\in X.
\end{equation} We will denote $M_\mu f$ briefly by $Mf$ when no confusion can arise. It may be checked that for any $f\in L^{1}_{\operatorname {loc}}(\mu)$, $Mf$ is lower semicontinuous, hence Borel measurable. By approximation, it is immaterial in the definition whether one takes the balls $B(x,r)$ to be open or closed. These facts were shown by Stempak and Tao (see \cite[Lemma 3]{ST14}).
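To make definition \eqref{de:max} concrete, here is a brute-force evaluation of $M_{\mu}f$ for a purely atomic measure on $\mathbb{R}$ (a setting that violates \eqref{con:1.1} and is used here only as an illustration; the function names are our own). For such a measure, every relevant ball meets the atoms in a consecutive block, so the supremum runs over finitely many candidates:

```python
def uncentered_maximal(xs, ws, fv):
    """Brute-force M_mu f at every atom of mu = sum_i ws[i] * delta_{xs[i]} on R.
    Assumes xs is sorted increasingly.  Every ball containing an atom meets the
    atoms in a block xs[i..j], so finitely many intervals suffice for the sup."""
    n = len(xs)
    out = [0.0] * n
    for k in range(n):
        best = 0.0
        for i in range(k + 1):        # leftmost atom inside the ball
            for j in range(k, n):     # rightmost atom inside the ball
                mass = sum(ws[i:j + 1])
                if mass > 0:
                    avg = sum(w * abs(v) for w, v in zip(ws[i:j + 1], fv[i:j + 1])) / mass
                    best = max(best, avg)
        out[k] = best
    return out

# Three unit atoms at 0, 1, 2 and f = (1, 0, 0):
# M_mu f = (1, 1/2, 1/3) -- the best balls meet the atoms in {0}, {0,1}, {0,1,2}.
print(uncentered_maximal([0.0, 1.0, 2.0], [1.0, 1.0, 1.0], [1.0, 0.0, 0.0]))
```

Note that $M_{\mu}f\geq|f|$ at every atom of positive mass, since a small ball around the atom is admissible in the supremum.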
We are now in a position to give the definition of the Besicovitch covering property.
\begin{de}
We say that $(X,d)$ satisfies the Besicovitch covering property (BCP) if there exists a constant
$L\in \mathbb{N}^+$ such that for every $R>0$, every set $A\subset X$, and every cover $\mathcal{B}$ of $A$ of the form
\begin{equation*}
\mathcal{B}=\{B(x,r):x\in A, 0<r<R\},
\end{equation*} there is a countable subfamily $\mathcal{F}\subset \mathcal{B}$ such that the balls in $\mathcal{F}$ cover $A$, and every point in $X$ belongs to at most $L$
balls in $\mathcal{F}$, that is,
\begin{equation*}
\mathbf{1}_{A}\leq \sum_{B(x,r)\in \mathcal{F}}\mathbf{1}_{B(x,r)}\leq L.
\end{equation*}
\end{de}
Note that unlike \cite{Al19}, we require the subfamily to be countable. Recall that the validity of BCP suffices to imply the validity of the differentiation theorem for every locally finite Borel semi-regular measure; see for instance \cite{Ma95,LR17,LR19}.
$\\ \hspace*{\fill} \\$
\section{The proof of the main result}\label{S3}
We shall always work on a metric measure space $(X,d,\mu)$. We denote by $\langle f\rangle_A$ the integral average of $f$ over a measurable set $A$, namely $\langle f\rangle_A =\frac{1}{\mu(A)}\int_A|f|d\mu$. If $\mu(A)=0$ then we set $\langle f\rangle_A =0$.
To prove the main result, we need to establish the following lemmas.
\begin{lem}\label{Lem:3.1}
Let $f,f_n\in L^{1}_{\operatorname {loc}}(\mu)$ be non-negative. If $\liminf\limits_{n\to \infty}\int_{B}f_nd\mu\geq\int_{B}fd\mu$ for all balls $B$ with $\mu(B)<\infty $, then $\liminf\limits_{n\to \infty}Mf_n(x)\geq Mf(x)$.
\end{lem}
\begin{proof}
Fix a point $x\in X$. Suppose first that $Mf(x)< \infty$. Then for every $\varepsilon>0$ there exists a ball $B_\varepsilon$ such that $\mu(B_\varepsilon)>0$ and $\langle f\rangle_{B_{\varepsilon}}>Mf(x)-\frac{\varepsilon}{2}$. By the assumption that $\liminf\limits_{n\to \infty}\int_{B}f_nd\mu\geq\int_{B}fd\mu$ for all balls $B$, we have
$\liminf\limits_{n\to \infty}\langle f_n\rangle_{B_{\varepsilon}}\geq\langle f\rangle_{B_{\varepsilon}}$, so there exists a natural number $N_{\varepsilon}>0$ such that $\langle f_n\rangle_{B_{\varepsilon}}\geq\langle f\rangle_{B_{\varepsilon}}-\frac{\varepsilon}{2}$ for ${\displaystyle n\geq N_{\varepsilon}}$, and hence $\langle f_n\rangle_{B_{\varepsilon}}>Mf(x)-\varepsilon$. By the definition of $Mf_n(x)$, for $n\geq N_{\varepsilon}$ we get $Mf_n(x)>Mf(x)-\varepsilon$, which proves the lemma in the case $Mf(x)< \infty$.
Now suppose $Mf(x)=\infty$. Then for every $M>0$ we can find a ball $B_M$ such that $\mu(B_M)>0$ and $\langle f\rangle_{B_M}>M$. The same argument shows that there exists an $N_M>0$ such that $\langle f_n\rangle_{B_M}>\frac{M}{2}$ for all $n\geq N_M$. Hence $\liminf\limits_{n\to \infty}Mf_n(x)=\infty$. This completes the proof.
\end{proof}
\begin{cor}\label{cor:3.2}
Let $f,f_n\in L^{1}_{\operatorname {loc}}(\mu)$ be non-negative. If $f_n$ is monotonically increasing and converges to $f$ a.e., then $\lim\limits_{n\to \infty}\Vert f_n\Vert_p=\Vert f\Vert_p$ and $\lim\limits_{n\to \infty}\Vert Mf_n\Vert_p=\Vert Mf\Vert_p$.
\end{cor}
\begin{proof}
By the monotone convergence theorem, $\lim\limits_{n\to\infty}\int_{B}f_nd\mu=\int_{B}fd\mu$ for all balls $B$ with $\mu(B)<\infty$, and $\lim\limits_{n\to \infty}\Vert f_n\Vert_p=\Vert f\Vert_p$. By Lemma \ref{Lem:3.1}, we have $ Mf \leq \liminf\limits_{n\to \infty}Mf_n$. Moreover, since the operator $M$ is order preserving, $Mf_n$ is monotonically increasing and bounded above by $Mf$, so $Mf_n$ converges to $Mf$. Applying the monotone convergence theorem once more, the result follows.
\end{proof}
The following approximation theorem is well known (see
e.g. \cite[Theorem 1.1.4]{EG92}, \cite[Theorem 2.2.2]{Fe69} or \cite[Theorem 1.3]{Si83}).
\begin{lem}\label{lem:3.3}
Let $\mu$ be a Borel semi-regular measure on $(X,d)$, $E$ a $\mu$-measurable set, and $\varepsilon>0$. If $\mu(E)<\infty$, then there is a bounded closed set $C\subset E$ such that $\mu(E\backslash C)\leq \varepsilon$.
\end{lem}
Recall that a finitely simple Borel function has the form
$\sum\limits_{i=1}^Nc_i\mathbf{1}_{E_i}$,
where $N<\infty, c_i\in \mathbb{R}$, and $E_i$ are pairwise disjoint Borel sets with $\mu(E_i)<\infty$. The support of a measurable function $g$, denoted by $\operatorname {supp} (g)$, is the closure of the set $\{x\in X:g(x)\neq 0\}$.
\begin{lem}\label{lem:3.4}
Let $C$ be a constant. The following are equivalent: \begin{enumerate}[\rm(i)]
\item For all $f\in L^p(\mu)$, $\Vert Mf\Vert_p\geq C \Vert f\Vert_p$.
\item For all non-negative finitely simple Borel functions $g$, we have $\Vert Mg\Vert_p\geq C\Vert g\Vert_p$.
\item For all non-negative bounded upper semi-continuous functions $g$ such that $\operatorname {supp} (g)$ is bounded and $ \mu(\operatorname {supp} (g))< \infty$, we have $\Vert Mg\Vert_p\geq C \Vert g\Vert_p$.
\end{enumerate}
\end{lem}
\begin{proof}
Without loss of generality, all functions appearing in this proof are non-negative.
Restricting $\mu$ to its Borel $\sigma$-algebra yields a measure $\nu$, and for $f\in L^p(\mu)$ there is a Borel function $g$ such that $g=f$ $\mu$-a.e. Thus \begin{equation*}
M_\mu f =\sup_{x\in B;\mu(B)>0}\frac{\int_{B}fd\mu}{\mu(B)}=\sup_{x\in B;\mu(B)>0}\frac{\int_{B}gd\mu}{\nu(B)}=\sup_{x\in B;\mu(B)>0}\frac{\int_{B}gd\nu}{\nu(B)}=M_\nu g
\end{equation*} and $\Vert g\Vert_{p,\nu}= \Vert f\Vert_{p,\mu}$. This means that we only need to consider Borel functions.
We first show that $\textrm{(ii)}$ implies $\textrm{(i)}$. By Corollary \ref{cor:3.2}, it suffices to prove that every Borel function $f\in L^p(\mu)$ admits a monotonically increasing sequence of finitely simple Borel functions $\{f_{n}\}_{n=1}^{\infty}$ converging to $f$. Indeed, since $f\in L^p(\mu)$, $f$ is a.e. finite. Recall that $\mu$ is $\sigma$-finite, and choose pairwise disjoint subsets $A_i$ such that $\bigcup_{i=1}^\infty A_i=X$ and $\mu(A_i)<\infty$. For $k\in \mathbb{N}$, set
\begin{equation*}
E_{k,j}=\{x\in \bigcup_{i=1}^kA_i:\frac{j-1}{2^k}\leq f(x)<\frac{j}{2^k}\}\; (j=1,2,\dots,k\cdot 2^k)
\end{equation*}
and \begin{equation*}
f_k(x)=\sum_{j=1}^{k\cdot 2^k}\frac{j-1}{2^k}\mathbf{1}_{E_{k,j}}(x).
\end{equation*}
Clearly, $f_k$ is as required, by the simple function approximation theorem.
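As a concrete illustration, the dyadic construction above can be carried out in code. The sketch below is purely illustrative (the sample function is our own choice); it checks that each approximant $f_k$ increases with $k$ and approaches $f$ from below:

```python
import math

def f_k(f, k):
    """Dyadic approximant from the proof: f_k(x) = (j-1)/2^k on the set
    {(j-1)/2^k <= f(x) < j/2^k}, j = 1, ..., k*2^k, and 0 elsewhere."""
    def approx(x):
        if f(x) >= k:  # no admissible j at stage k, so the point gets value 0
            return 0.0
        return math.floor(f(x) * 2 ** k) / 2 ** k
    return approx

f = lambda x: x * x  # a sample non-negative Borel function
for x in (0.0, 0.3, 1.7, 3.0):
    vals = [f_k(f, k)(x) for k in range(1, 13)]
    assert all(a <= b for a, b in zip(vals, vals[1:]))  # monotonically increasing
    assert all(v <= f(x) for v in vals)                 # approximates from below
    assert f(x) - vals[-1] <= 2.0 ** -12                # within 2^{-k} once k > f(x)
```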
It remains to show that $\textrm{(iii)}$ implies $\textrm{(ii)}$. By the argument above, it suffices to consider a simple Borel function $f=\sum_{n=1}^Nc_n\mathbf{1}_{B_n}$, where each $B_n$ is a Borel set with $\mu(B_n)<\infty$. By Lemma \ref{lem:3.3}, for each $\varepsilon>0$ and $1\leq n\leq N$ there exists a bounded closed set $F_{n,\varepsilon}$ with $F_{n,\varepsilon}\subset B_n$ and $c_n\mu(B_n\backslash F_{n,\varepsilon})<\frac{\varepsilon}{2^{N}}$. Set $\psi_k =\sum_{n=1}^Nc_n\mathbf{1}_{F_{n,\frac{1}{k}}}.$ Then $\psi_k$ is upper semi-continuous and supported on a bounded closed set with $ \mu(\operatorname {supp} (\psi_k))< \infty$, and $\psi_k \uparrow f$ as desired.
\end{proof}
\begin{lem}\label{lem:3.5}
Let $\mu$ be an infinite Borel semi-regular measure on $(X,d)$. If the sequence $x_{n}\in X$ is bounded and $\lim\limits_{n\to \infty}r_n =\infty$, then $\lim\limits_{n\to \infty}\mu(B(x_n,r_n))= \infty$.
\end{lem}
\begin{proof}Suppose that $x_{n}\in B(x,R)$ for some $x\in X$ and $R>0$. By the continuity of measure from below, $\lim\limits_{n\to \infty}\mu(B(x,\frac{r_n}{2})) =\mu(X)=\infty.$ Since $\lim\limits_{n\to \infty}r_n =\infty$, there is an $N$ such that $B(x,\frac{r_n}{2})\subset B(x_n, r_n)$ for every $n\geq N$. Hence $\lim\limits_{n\to \infty}\mu(B(x_n,r_n))=\infty$, which proves the lemma.
\end{proof}
Now we follow methods from \cite[Theorem 1.1]{IJN17} to prove Theorem \ref{thm:1.1}.
\begin{proof}[Proof of Theorem \ref{thm:1.1}.]
By Lemma \ref{lem:3.4}, it remains to prove that \eqref{ineq:main} holds for all non-negative bounded upper semi-continuous functions $f$ such that $\operatorname {supp} (f)$ is bounded and $ \mu(\operatorname {supp} (f))< \infty$. Suppose $f$ is bounded by $C$. Then
$\int_{X}fd\mu \leq C\mu(\operatorname {supp} (f))< \infty$, and hence $f\in L^1(\mu)$.
As $(X,d)$ satisfies the Besicovitch covering property, the differentiation theorem holds; hence, for every $f\in L^{1}_{\operatorname {loc}}(\mu)$, $\lim\limits_{r\to 0}\langle f\rangle_{B(x,r)} = f(x)$ for almost every $x\in X$. If we set $F=\{x\in X: \lim\limits_{r\to 0}\langle f\rangle_{B(x,r)} = f(x)\}$, then $\mu(X\backslash F)=0$. Let \begin{equation*}
E=\{x\in \operatorname {supp} (\mu): r\mapsto \mu(B(x,r))\; \text{is continuous}\}.
\end{equation*} By assumption, $\mu(X\backslash E)=0$. Fix $t>0$ and consider $K_{t}=\{x\in E\bigcap F: f(x)> t\}$. For fixed $x\in K_t$, by the definition of $ \operatorname {supp} (\mu)$ and of $F$, we may choose a ball $B$ centered at $x$ such that $\langle f\rangle_{B}>t$ and $\mu(B)>0$. Since $\mu(X)=\infty$, we have $\lim\limits_{n\to \infty}\langle f\rangle_{B(x,r_{n})}=0$ whenever $\lim\limits_{n\to \infty}r_n =\infty$. Fix $x\in K_t$ and set $p(s)=\langle f\rangle_{B(x,s)}$. Then $p$ is continuous on $(0,\infty)$, takes a value greater than $t$, and tends to $0$ at infinity; by the intermediate value theorem, $\{r: \langle f\rangle_{B(x,r)}=t\}$ is non-empty for every $x\in K_t$. Let
\begin{equation*}
R_t=\sup\limits_{x,r}\{r:x\in K_t, \langle f\rangle_{B(x,r)}=t\}.
\end{equation*}
We show that $R_t<\infty$ by contradiction. Suppose that $R_t = \infty$. Then there exists a sequence $(x_n, r_n)\in K_t \times (0,\infty)$ such that $\lim\limits_{n\to \infty} r_n = \infty$ and $\langle f\rangle_{B(x_n,r_n)}=t$. As $\operatorname {supp} (f)$ is bounded, Lemma \ref{lem:3.5} gives $\limsup\limits_{n\to \infty}\langle f\rangle_{B(x_{n},r_{n})}\leq \limsup\limits_{n\to \infty}\frac{\Vert f\Vert_1}{\mu(B(x_{n},r_{n}))}= 0$. Thus $\lim\limits_{n\to \infty}\langle f\rangle_{B(x_{n},r_{n})}=0$, which contradicts $\langle f\rangle_{B(x_n,r_n)}=t>0$. Note that $R_t$ depends only on $t$ once $f$ is given. Thus for every $x\in K_t$ there exists an $r_x\leq R_t$ such that $\langle f\rangle_{B(x,r_x)}=t$. Now, for the cover $\mathcal{C}$ of $K_t$ given by
\begin{equation*}
\mathcal{C}=\{B(x, r) :x\in K_t,\;\langle f\rangle_{B(x,r)}=t\},
\end{equation*}
applying Besicovitch covering property, we extract a countable subfamily $B(x_{t,j}, r_{t,j})\in \mathcal{C}$ so that
\begin{equation*}
\mathbf{1}_{K_t} \leq \psi(x,t):=\sum_{j}\mathbf{1}_{B(x_{t,j}, r_{t,j})}\leq L.
\end{equation*}
We show that $\psi(x,t)$ also satisfies the following properties: \begin{enumerate}[\quad \quad \quad \rm(1)]
\item if $t>Mf(x)$ then $\psi(x,t)=0$;
\item if $f(x)>t$, then $\psi(x,t)\geq 1$ almost everywhere;
\item for every $t>0$, we have $\int_{X}t\psi(x,t)d\mu(x)=\int_{X}\psi(x,t)f(x)d\mu(x)$.
\end{enumerate}
We prove the first property by contradiction. Suppose that $\psi(x,t)> 0$. Then there exists a ball $B(x_{t,\ell},r_{t,\ell})$ containing $x$, so $Mf(x)\geq\langle f\rangle_{B(x_{t,\ell},r_{t,\ell})}=t$, contradicting the assumption that $t> Mf(x)$.
For the second property, let $x\in K_t$. The choice of $\psi$ gives $\psi(x,t)\geq \mathbf{1}_{K_t}(x)= 1$, and the property follows since $\mu(X\backslash(E\bigcap F))=0$.
The third property follows immediately.
We now prove the desired inequality. Since $\mu$ is $\sigma$-finite, multiplying (3) by $t^{p-2}$, integrating over $t\in(0,\infty)$, and applying the Fubini--Tonelli theorem yields the following equality:
\begin{equation*}
\int_{X}\int^{\infty}_0t^{p-1}\psi(x,t)dtd\mu(x)= \int_{X}\int^{\infty}_0t^{p-2}\psi(x,t)f(x)dtd\mu(x).
\end{equation*}
By property (1), we may restrict the integration to $[0,Mf(x)]$, that is,
\begin{equation}\label{3.1}
\int_{X}\int^{Mf(x)}_0t^{p-1}\psi(x,t)dtd\mu(x)= \int_{X}\int^{Mf(x)}_0t^{p-2}\psi(x,t)f(x)dtd\mu(x).
\end{equation}
Now, since $\psi\leq L$, and $Mf\geq f$ a.e., it follows from the above equality \eqref{3.1} that
\begin{align*}
\frac{L}{p}\left(\Vert Mf\Vert_p^p-\Vert f\Vert_p^p\right)&\geq \int_{X}\int^{Mf(x)}_0t^{p-1}\psi(x,t)dtd\mu(x)-\int_{X}\int^{f(x)}_0t^{p-1}\psi(x,t)dtd\mu(x)\\
&\geq \int_{X}\int^{f(x)}_0t^{p-1}\psi(x,t)\left(\frac{f(x)}{t}-1\right)dtd\mu(x).
\end{align*}
Note that property (2) yields
\begin{align*}
\frac{L}{p}\left(\Vert Mf\Vert_p^p-\Vert f\Vert_p^p\right)&\geq\int_{X}\int^{f(x)}_0t^{p-1}\left(\frac{f(x)}{t}-1\right)dtd\mu(x)= \frac{\Vert f\Vert_p^p}{p(p-1)}.
\end{align*}
This finishes the proof.
\end{proof}
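As a numerical sanity check on the last step (purely illustrative, using a crude midpoint quadrature of our own choosing), one can verify the evaluation $\int_0^{f(x)}t^{p-1}\left(\frac{f(x)}{t}-1\right)dt=\frac{f(x)^p}{p(p-1)}$:

```python
def inner_integral(fx, p, n=100000):
    # midpoint-rule quadrature of t^{p-1} (fx/t - 1) over (0, fx)
    h = fx / n
    return sum(((i + 0.5) * h) ** (p - 1) * (fx / ((i + 0.5) * h) - 1)
               for i in range(n)) * h

for p in (1.5, 2.0, 3.0):
    for fx in (0.5, 1.0, 2.0):
        exact = fx ** p / (p * (p - 1))  # the closed form used in the proof
        assert abs(inner_integral(fx, p) - exact) < 1e-2 * exact
```

The loose tolerance accounts for the integrable singularity of $t^{p-2}$ at the origin when $1<p<2$.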
\begin{re}
More generally, the same result holds for a quasi-metric in place of a metric, provided all balls are measurable with respect to the Borel semi-regular measure. A ball in a quasi-metric space may fail to be a Borel set; to avoid such pathological cases, this assumption is needed for the definition of $M_\mu$ to make sense. In fact, Mac\'{ı}as and Segovia showed in \cite[Theorem 2, p. 259]{MS79} that there exist an $\alpha\in (0,1)$ and a quasi-metric $d_*$ equivalent to the original quasi-metric such that the original topology is metrizable by $(d_*)^\alpha$. Thus Lemma \ref{lem:3.3} generalizes to quasi-metric spaces, since the bounded closed sets of both spaces coincide. Lemma \ref{lem:3.5} can also be established via the quasi-triangle inequality. By the foregoing discussion, the same proof carries over to quasi-metric spaces.
\end{re}
A set of the form $L^+_t(f):=\{x\in \operatorname {supp} (\mu): f(x)>t\}$ is called a \emph{strict superlevel set} of $f$. It is straightforward to check that the following theorem remains valid after modifying the above proof accordingly.
\begin{thm}\label{thm:3.6}
Let $f\in L^{p}(\mu)$ be non-negative. Suppose that $\mu(X)=\infty$ and that, for every $t>0$, there exists a finite or countable ball-covering $\mathcal{F}_t$ of $L^+_t(f)$ such that the average value of $f$ on each ball $B\in \mathcal{F}_t$ equals $t$, and almost every point in $\operatorname {supp} (\mu)$ belongs to at most $L$
balls in $\mathcal{F}_t$. Then
\begin{equation*}
\Vert M
f \Vert_p \geq \left(1+\frac{1}{(p-1)L}\right)^{\frac{1}{p}}\Vert f\Vert_p.
\end{equation*}
In particular, if the balls are pairwise disjoint, then $
\Vert Mf \Vert_p \geq \left(\frac{p}{p-1}\right)^{\frac{1}{p}}\Vert f\Vert_p.
$
\end{thm}
Whether the theorem holds without the Besicovitch covering property seems worthwhile to pursue. However, due to the bottleneck in extracting the ball subfamily, the approach of Theorem \ref{thm:1.1} does not help. Surprisingly, if we keep the continuity assumption on $\mu(B(x_0,r))$ and additionally require $f$ to be radially decreasing $($with respect to the point $x_0$$)$, we can avoid these difficulties.
\begin{thm}
Let $f\in L^{p}(\mu)$ be radially decreasing with respect to the point $x_0\in X$. If $\mu(X)=\infty$ and the function $r\mapsto \mu(B(x_0,r))$ is continuous on the interval $(0,+\infty)$, then
\begin{equation*}
\Vert Mf \Vert_p \geq \left(\frac{p}{p-1}\right)^{\frac{1}{p}}\Vert f\Vert_p.
\end{equation*}
\end{thm}
\begin{proof}
Note first that the strict superlevel sets of a radially decreasing function are always balls centered at the point $x_0$. By the continuity assumption, for any $t$ we can choose a larger ball $B$ centered at $x_0$ containing $L^+_t(f)$ with $\langle f\rangle_{B}=t$. The theorem then follows immediately from Theorem \ref{thm:3.6}.
\end{proof}
Given a norm $\Vert\cdot\Vert$ on $\mathbb{R}^d$, \emph{the strict Hadwiger number} $H^*(\mathbb{R}^d,\Vert\cdot\Vert)$ is the maximum number of translates of the closed unit
ball $B^{cl}(0,1)$ that touch $B^{cl}(0,1)$ and such that any two translates
are disjoint. See \cite[p. 10]{Al21} for more detail.
\begin{proof}[Proof of Corollary \ref{cor:1.2}.]
As Corollary \ref{cor:1.2} is a special case of Theorem \ref{thm:1.1} with $(X,d)=(\mathbb{R}^d,\Vert\cdot\Vert)$, we only need to prove that the Besicovitch constant of $(\mathbb{R}^d,\Vert\cdot\Vert)$ equals its strict Hadwiger number. This fact was shown in \cite[Theorem 3.2]{Al21}.
\end{proof}
\section{Radon measure in Euclidean spaces}\label{S4}
Since no finite measure admits such a lower bound greater than $1$, a natural attempt to generalize Corollary \ref{cor:1.2} is to consider measures satisfying only $\mu(X)=\infty$. The following example shows that condition \eqref{con:1.1} cannot be omitted.
\begin{ex}\label{ex:4.1}
Let $\Vert\cdot\Vert$ be any norm on $\mathbb{R}^d$.
For any $p>1$ and $\varepsilon>0$,
there exists a discrete measure $\mu$ on $(\mathbb{R}^d,\Vert\cdot\Vert)$ which satisfies the following conditions{\rm :}
\begin{enumerate}[\rm(i)]
\item $\mu(\mathbb{R}^d)=\infty;$
\item $ \mu(\{x\in \operatorname {supp} (\mu): r\mapsto \mu(B(x,r))\; \text{is discontinuous}\})=\infty;$
\item $\inf\limits_{\Vert g \Vert_p=1}\Vert Mg\Vert_p\leq 1+\varepsilon.$
\end{enumerate}
\end{ex}
\begin{proof}
Given $x\in \mathbb{R}$, we use $e_x$ to denote the point $(x,0,\cdots,0)\in \mathbb{R}^d$. For $t>1$, consider $\mu= \frac{1}{t-1}\delta_{e_0}+\sum\limits_{i=1}^\infty t^{i-1}\delta_{e_i},$ where $\delta_{y}$ is the Dirac measure concentrated at the point $y\in \mathbb{R}^d$. Since (i) and (ii) follow immediately, we only need to verify (iii). Let $i\in \mathbb{N}^+.$ Then, using the convexity of balls, we have \begin{equation*}
M_\mu\mathbf{1}_{\{e_0\}}(e_i)=\frac{\Vert\mathbf{1}_{\{e_0\}}\Vert_{1,\mu}}{\inf_{B\ni e_i;B\ni e_0}\mu(B)}=\frac{\frac{1}{t-1}}{\frac{1}{t-1}+\sum_{j=1}^{i}t^{j-1}}=t^{-i}.
\end{equation*}
Hence, a simple calculation gives
\begin{align*}
\Vert M_\mu\mathbf{1}_{\{e_0\}}\Vert_{p,\mu}^p&=\frac{1}{t-1}(M_\mu\mathbf{1}_{\{e_0\}}(e_0))^p+\sum_{i=1}^\infty t^{i-1}(M_\mu\mathbf{1}_{\{e_0\}}(e_i))^p\\
&=\frac{1}{t-1}+\sum_{i=1}^\infty t^{(1-p)i-1} =\frac{1}{t-1}+\frac{1}{t^p-t}.\\
\end{align*}Hence $\frac{\Vert M_\mu\mathbf{1}_{\{e_0\}}\Vert_{p,\mu}^p}{\Vert\mathbf{1}_{\{e_0\}}\Vert_{p,\mu}^p}=1+\frac{t-1}{t^p-t}\to 1$ as $t\to \infty$. Thus, for fixed $p>1$ and $\varepsilon>0$, we have $\inf\limits_{\Vert g \Vert_p=1}\Vert Mg\Vert_p\leq \frac{\Vert M_\mu\mathbf{1}_{\{e_0\}}\Vert_{p,\mu}}{\Vert\mathbf{1}_{\{e_0\}}\Vert_{p,\mu}}\leq 1+\varepsilon$ when $t$ is large enough.
\end{proof}
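The closed form obtained above can be double-checked numerically. The sketch below (illustrative only) truncates the series $\frac{1}{t-1}+\sum_{i\geq 1}t^{i-1}t^{-ip}$ and compares it with $\frac{1}{t-1}+\frac{1}{t^p-t}$:

```python
def maximal_norm_p(t, p, terms=2000):
    # ||M 1_{e_0}||_{p,mu}^p = (1/(t-1)) * 1^p + sum_{i>=1} t^{i-1} * (t^{-i})^p;
    # exponents are combined as t^{i(1-p)-1} to avoid float overflow for large i
    series = sum(t ** (i * (1.0 - p) - 1.0) for i in range(1, terms + 1))
    return 1.0 / (t - 1.0) + series

for t in (2.0, 5.0, 20.0):
    for p in (1.5, 2.0, 3.0):
        closed_form = 1.0 / (t - 1.0) + 1.0 / (t ** p - t)
        assert abs(maximal_norm_p(t, p) - closed_form) < 1e-9

# the ratio ||M f||_p^p / ||f||_p^p = 1 + (t-1)/(t^p - t) tends to 1 as t grows
assert abs((1.0 + (20.0 - 1.0) / (20.0 ** 2 - 20.0)) - 1.05) < 1e-12
```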
Set $c_p=\left(1+\frac{1}{(p-1)H^*(\mathbb{R}^d,\Vert\cdot\Vert)}\right)^{1\slash p},$ where $\Vert\cdot\Vert$ is a norm on $\mathbb{R}^d$. As mentioned before, \eqref{con:1.1} is satisfied by every measure absolutely continuous with respect to the Lebesgue measure,
so it is reasonable to ask further whether $M_\mu$ admits the lower bound $c_p$ for every continuous measure. Surprisingly, in low-dimensional Euclidean spaces the answer is affirmative.
We need the following basic fact:
\begin{lem}\label{lem:4.1}
For $d=1,2,$ let $\mu$ be a Radon measure on $(\mathbb{R}^d,\Vert\cdot\Vert_2)$ with no atoms. Then $\{x\in \operatorname {supp} (\mu): r\mapsto \mu(B(x,r))\; \text{is discontinuous} \}$ is countable. In particular, $\mu(\{x\in \operatorname {supp} (\mu): r\mapsto \mu(B(x,r))\; \text{is discontinuous}\})=0$.
\end{lem}
\begin{proof}
We begin by observing that $\lim\limits_{r\to r_0^+}\mu(B^{o}(x,r))=\mu(B^{cl}(x,r_0))$ and $\lim\limits_{r\to r_0^-}\mu(B^o(x,r))=\mu(B^o(x,r_0))$. Thus the continuity of this function at $r_0$ is equivalent to the sphere $\{y:d(y,x)=r_0\}$ having measure zero. In the one-dimensional case, the result follows immediately, since $\mu$ has no atoms.
We now turn to the two-dimensional case. To prove it, we claim that
\begin{equation*}
E=\{S: S \text{ is a one-dimensional sphere such that } \mu(S)>0\}
\end{equation*} is countable.
We argue by contradiction. Suppose that $E$ is uncountable. Writing $E= \left(S_{\gamma}\right)_{\gamma \in \Gamma}$ and letting $x_{\gamma}$ be the center of $S_\gamma$, we obtain
$
E=\bigcup_{i\in \mathbb{Z}}\bigcup_{j\in \mathbb{N}}\bigcup_{s\in \mathbb{N}}E_{i,j,s},
$ where
\begin{align*}
E_{i,j,s}
\triangleq \{S_{\gamma}:2^{-i-1}\leq\mu(S_\gamma)<2^{-i};\; j\leq\operatorname {radius}(S_\gamma)<j+1;\;s\leq d(x_{\gamma},0)<s+1 \}.
\end{align*}
Then $E_{i_0,j_0,s_0}$ is uncountable for some $i_0,j_0,s_0$. Next, observe that the intersection of two distinct spheres consists of at most two points, and since $\mu$ has no atoms, any two distinct spheres are disjoint up to a $\mu$-null set. Hence we can choose a subfamily $\{S_{\gamma_k}\}_{k\in \mathbb{N}}\subset E_{i_0,j_0,s_0} $ so that $\mu(B^{cl}(0,j_0+s_0+2))\geq \mu(\bigcup_{k}S_{\gamma_k})=\sum_{k\in\mathbb{N}}\mu(S_{\gamma_k})= \infty,$ contradicting the assumption that $\mu$ is a Radon measure. Thus $E$ is countable. Since every sphere has a unique center,
$
\{x\in \mathbb{R}^2: \text{$\exists r$ s.t. $\mu(\{y:d(y,x)=r\})>0$}\}
$
is also countable as desired.
\end{proof}
\begin{re}
Notice that the lemma does not extend to higher dimensions: an example of a singular continuous measure shows that the right-hand side of \eqref{con:1.1} can be infinite. Our methods are at present unable to handle the higher-dimensional problem.
\end{re}
By the corresponding results of Sullivan \cite[Proposition 23]{Su94}, the strict Hadwiger numbers of one-dimensional and two-dimensional Euclidean space are $2$ and $5$, respectively. Thus the lower bounds $\left(1+\frac{1}{2(p-1)}\right)^{1\slash p}$ and $\left(1+\frac{1}{5(p-1)}\right)^{1\slash p}$ hold, respectively, which verifies Theorem \ref{thm:1.3}. We omit the details of the proof, since it is a direct consequence of Sullivan's proposition, Corollary \ref{cor:1.2} and Lemma \ref{lem:4.1}.
We would like to mention that in one dimension the lower bound can be improved to $\left(\frac{p}{p-1}\right)^{1\slash p}$.
The core of our proof is consistent with Lerner's approach for the one-dimensional Lebesgue measure.
To this end, we recall the following theorem, proved by Ephremidze et al.~\cite[Theorem 1]{EFT07}.
\begin{thm}\label{thm:4.2}
Let $\mu$ be a Borel semi-regular measure on $\mathbb{R}$ that contains no atoms. For $f\geq 0$, we consider the one sided maximal operator \begin{equation*}
M_+f(x)= \sup_{b>x}\frac{1}{\mu([x,b))}\int_{[x,b)}f(s)d\mu.
\end{equation*} If
$
t > \liminf\limits_{x \to -\infty}M_+f(x),
$ then \begin{equation*}
t\mu(\{x\in\mathbb{R}:M_+f(x)>t\}) = \int_{\{x\in\mathbb{R}:M_+f(x)>t\}}f(s)d\mu.
\end{equation*}
\end{thm}
\begin{proof}[Proof of Theorem \ref{thm:1.4}.]
If $\mu(\mathbb{R})=\infty$, then either $\mu((0,\infty))=\infty$ or $\mu((-\infty,0])=\infty$. Without loss of generality, we assume $\mu((-\infty,0])=\infty$; otherwise we consider
$
M_-f(x)= \sup\limits_{a<x}\frac{1}{\mu((a,x])}\int_{(a,x]}f(s)d\mu
$ instead of $M_{+}f(x)$. As $\mu$ is Radon, $\mu([-R,0])<\infty$, and hence $\mu((-\infty,-R))=\mu((-\infty,0])-\mu([-R,0])=\infty$ for any $R>0$. Assume that $f\in L^1(\mu)$ with $\operatorname {supp} (f)\subseteq[-R,R]$. If $x<-R$, we have
\begin{equation*}
M_+f(x)= \sup_{b>-R}\frac{1}{\mu([x,b))}\int_{[x,b)}f(s)d\mu\leq\sup\limits_{b>-R}\frac{\Vert f\Vert_1}{\mu([x,b))}\leq\frac{\Vert f\Vert_1}{\mu([x,-R))}.
\end{equation*}From $\mu((-\infty, -R))=\infty$, we obtain
\begin{equation*}
\liminf\limits_{x \to -\infty}M_+f(x) = \liminf\limits_{x \to -\infty}\frac{\Vert f\Vert_1}{\mu([x,-R))}= 0.
\end{equation*}
Applying Theorem \ref{thm:4.2}, we conclude
\begin{equation}\label{eq:4.1}
t\mu(\{x\in\mathbb{R}:M_+f(x)>t\}) = \int_{\{x\in\mathbb{R}:M_+f(x)>t\}}f(s)d\mu \;\;\text{for all $t>0$}.
\end{equation}
Now, multiplying both sides of \eqref{eq:4.1} by $t^{p-2}$ and integrating with respect to $t$ over $(0,\infty)$, we obtain $\frac{\Vert M_+f\Vert_p^p}{p}=\frac{1}{p-1}\int_{\mathbb{R}}f(M_+f)^{p-1}d\mu$. Since $M_+f\geq f$ a.e. and $Mf\geq M_+f$, it follows that $\frac{\Vert Mf\Vert_p^p}{p}\geq\frac{1}{p-1}\int_{\mathbb{R}}f^pd\mu$, and the desired inequality follows.
\end{proof}
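The layer-cake manipulation in the last step can be sanity-checked on a discrete toy measure of our own choosing: for non-negative $f$ and $g$ one has $\int_0^\infty t^{p-2}\int_{\{g>t\}}f\,d\mu\,dt=\frac{1}{p-1}\int fg^{p-1}d\mu$, with $g$ playing the role of $M_+f$. The quadrature below is a crude midpoint rule and purely illustrative:

```python
def layer_cake_lhs(f, g, w, p, n=100000):
    # integrate t^{p-2} * (integral of f over {g > t}) for t in (0, max g)
    tmax = max(g)
    h = tmax / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (p - 2) * sum(fj * wj
                                    for fj, gj, wj in zip(f, g, w) if gj > t)
    return total * h

f = [1.0, 0.5, 2.0]   # function values on three atoms
g = [2.0, 3.0, 1.0]   # values of the auxiliary function (role of M_+ f)
w = [0.3, 1.0, 0.7]   # atom weights of the toy measure
for p in (2.0, 3.0):
    rhs = sum(fj * wj * gj ** (p - 1) for fj, gj, wj in zip(f, g, w)) / (p - 1)
    assert abs(layer_cake_lhs(f, g, w, p) - rhs) < 1e-3 * rhs
```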
For one-dimensional Banach spaces, there are more powerful covering theorems ensuring the weak type inequality with respect to arbitrary measures, as was shown by Peter Sj\"{o}gren in \cite{Sj83}; see also \cite{Be89} and \cite{GK98}. Hence the maximal operator $M$ satisfies
strong type $L^p(\mu)$ estimates for $1<p<\infty$. The function family in the following criterion is easier to handle than that of Lemma \ref{lem:3.4}.
\begin{lem}\label{lem:4.3}
Let $\mu$ be a Borel semi-regular measure on $\mathbb{R}$ and let $C$ be a constant. The following are equivalent: \begin{enumerate}[\rm(i)]
\item $\Vert Mf\Vert_p\geq C \Vert f\Vert_p$ for all $f\in L^p(\mu)$.
\item $\Vert Mg\Vert_p\geq C\Vert g\Vert_p$ for all $g$ of the form $g=\sum_{i=1}^N\beta_i\mathbf{1}_{I_i}$, where the bounded open intervals $I_i$ are pairwise disjoint and $\beta_i>0$.
\end{enumerate}
\end{lem}
\begin{proof}
Clearly (i) implies (ii). Suppose now that (ii) holds, and let $f\in
L^p(\mu)$ be non-negative (which suffices, since $Mf=M|f|$). Since the sub-linear operator $M$ is bounded on $L^p(\mu)$, we have $Mf_n \to Mf$ in $L^p(\mu)$ whenever $f_n \to f$ in $L^p(\mu)$. Thus (i) follows from the fact that the set of all linear combinations, with positive coefficients, of characteristic functions of bounded open intervals is dense in the non-negative cone of $L^p(\mu)$.
\end{proof}
Let $A_\mu$ be the set of all real numbers $x$ with
$\mu(\{x\})>0$. If we denote by $x_n$ the points
of $A_\mu$ and by $w_n$ the weight of $x_n$, we obtain the decomposition $\mu =\mu_c+\sum_{n}w_n\delta_{x_n}$, where the continuous part $\mu_c$ of $\mu$ is defined by
$\mu_c(B)=\mu(B\cap A_\mu^c)$. This leads us naturally to consider the case where $\mu_c(\mathbb{R})=\infty.$
\begin{thm}\label{thm:4.4} Let $\mu$ be a Borel semi-regular measure on $\mathbb{R}$ and suppose that both the negative and the positive half-lines have infinite measure.
If one of the following holds:
\begin{enumerate}[\rm(i)]
\item the set $A_\mu$ consists of a single point;
\item the set $A_\mu$ consists of exactly two points and there is no mass between them;
\end{enumerate} then
\[
\Vert M_\mu f \Vert_p \geq \left(\frac{p}{p-1}\right)^{\frac{1}{p}}\Vert f\Vert_p \quad\text{for all $f\in L^p(\mu)$}.
\]
\end{thm}
\begin{proof}
We claim that the family of ball-coverings required in Theorem \ref{thm:3.6} exists for each step function; the theorem then follows by Lemma \ref{lem:4.3}.
Let $f=\sum_{i=1}^N\beta_i\mathbf{1}_{I_i}$, where the bounded open intervals $I_i$ are pairwise disjoint and $\beta_i>0$. If necessary, we reorder the intervals $I_i$ from left to right. We also relabel the coefficients so that $\beta_{i_1}\leq \beta_{i_2}\leq \dots \leq \beta_{i_N}$, where $(i_1,\dots, i_N)$ is a permutation of $(1,\dots, N)$. Suppose first that (i) holds and write $A_\mu=\{y\}$.
We first consider the case $y\notin \cup_{1\leq i\leq N}I_i$. Suppose $0<t<\beta_{i_N}$; otherwise the corresponding superlevel set is empty. Define $j_t$ to be the smallest index $j$ such that $\beta_{i_j}>t$; then $\{x\in X: f(x)>t\}= \cup_{j\geq j_t}I_{i_j}.$ We now apply a standard selection procedure. Since $y\notin \cup_{1\leq i\leq N}I_i$, the half-lines $(-\infty,y]$ and $(y,+\infty)$ split the interval family $\cup_{j\geq j_t}I_{i_j}$ into two parts. Let $b_1$ be the right endpoint of the last interval in the list lying to the left of $y$ (the exceptional case is treated later). There exists a ball $B_1=(s_1,b_1)$ with $\langle f\rangle_{B_1}=t$: indeed, $s\mapsto\langle f\rangle_{(s,b_1)}$ is continuous, $\lim_{s\to b_1}\langle f\rangle_{(s,b_1)}>t$, and $\lim_{s\to -\infty}\langle f\rangle_{(s,b_1)}=0$, all of which follows from the assumption that $\mu|_{(-\infty,y)}$ is an infinite continuous measure. If $s_1\notin \cup_{j\geq j_t}I_{i_j}$, we select this ball; otherwise we repeatedly find new balls $B=(s_{i},s_{i-1})$ with $\langle f\rangle_{B}=t$ until $s_i\notin \cup_{j\geq j_t}I_{i_j}$, and then select $(s_i,b_1)$ as the ball $B_1$. Having completed this step, we proceed by induction to find a family of balls covering the part of $\cup_{j\geq j_t}I_{i_j}$ contained in $(-\infty,y]$. For the intervals on the right side of $y$, we do the same with balls contained in $(y,+\infty)$. The chosen balls are clearly as required.
We now turn to the case $y\in \cup_{1\leq i\leq N}I_i$. Suppose $y\in I_{i_y}$ for some $i_y\in \{1,\dots, N\}$. The proof of this case is quite similar, since we can take the right endpoint of $I_{i_y}$ as the starting point of the selection procedure.
The same reasoning applies to case (ii).
\end{proof}
The following example indicates that a one-sided infinite measure containing only one atom can also lead to the phenomenon of Example \ref{ex:4.1}.
\begin{ex}
For $t>1$, consider $\mu= t\delta_{1}+m|_{(0,\infty)},$ where $m$ is the Lebesgue measure. Obviously, this measure satisfies $\mu(\mathbb{R})=\infty$. Letting $f=\mathbf{1}_{(0,1)}$, we show that for $t$ large enough, $\Vert M_{\mu}f\Vert_{p,\mu}$ gets arbitrarily close to $1=\Vert f\Vert_{p,\mu}$. If $x\in (0,1)$, then $M_\mu f(x)=1$; if $x\in [1,\infty)$, then
\begin{equation*}
M_\mu
f(x)=\sup\limits_{a<x}\frac{1}{\mu((a,x])}\int_{(a,x]}f(s)d\mu=\sup\limits_{0<a\leq1}\frac{1-a}{t+x-a}=\frac{1}{t+x}.
\end{equation*}
Thus, we have
\begin{align*}
\Vert M_{\mu}f \Vert^p_{p,\mu}=&\Vert M_{\mu}f \Vert^p_{p,m|_{(0,\infty)}}+\Vert M_{\mu}f \Vert^p_{p,t\delta_1}\\
=&1+\int_{1}^{\infty}\frac{1}{(t+x)^p}dx+\frac{t}{(t+1)^p} \\
\leq& 1+\frac{p}{p-1} (t+1)^{1-p}.
\end{align*}
Since $\lim\limits_{t\to \infty}\Vert M_{\mu}f\Vert_{p,\mu} =1$, the result follows.
\end{ex}
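The estimate above admits a quick numerical check (purely illustrative; it uses the formula for $M_\mu f$ computed in the example and a truncated midpoint quadrature for the Lebesgue tail, both our own choices):

```python
def example_norm_p(t, p, grid=50000, xmax=400.0):
    # ||M f||_{p,mu}^p with mu = t*delta_1 + m|_(0,inf) and f = 1_{(0,1)}:
    # M f = 1 on (0,1), and M f(x) = 1/(t+x) for x >= 1 (computed in the example);
    # the Lebesgue tail integral over (1, xmax) is a truncated midpoint rule,
    # which underestimates the convex integrand, so the bound check is safe
    h = (xmax - 1.0) / grid
    tail = sum((1.0 / (t + 1.0 + (i + 0.5) * h)) ** p for i in range(grid)) * h
    return 1.0 + tail + t / (t + 1.0) ** p

for t in (2.0, 10.0, 100.0):
    for p in (2.0, 3.0):
        bound = 1.0 + p / (p - 1.0) * (t + 1.0) ** (1.0 - p)
        assert example_norm_p(t, p) <= bound  # the estimate derived above

# ||M f||_{p,mu}^p tends to 1 = ||f||_{p,mu}^p as t grows
assert example_norm_p(100.0, 2.0) < 1.02
```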
\begin{re}
Another similar example shows that, in general, Theorem \ref{thm:4.4} cannot be extended to the case where $A_\mu$ contains three or more atoms.
\end{re}
Finally, we record the observation that the lower $L^p$-bound of $M$ increases significantly as $p$ decreases.
Since it is an immediate consequence of H{\"o}lder's inequality, we omit the proof.
\begin{prop}
Let $(X,d,\mu)$ be a metric measure
space and set $M_r = \inf\limits_{\Vert g \Vert_r=1}\Vert Mg\Vert_r$. Then $M_r \leq M_p^{\frac{p}{r}}$ for $1<p<r$.
\end{prop}
\end{document} |
\begin{document}
\title{A Refined Study of the Complexity of Binary Networked Public Goods Games}
\begin{abstract}
We study the complexity of several combinatorial problems in the model of binary networked public goods games. In this game, players are represented by vertices in a network, and the action of each player can be either investing or not investing in a public good. The payoff of each player is determined by the number of neighbors who invest and her own action. We study the complexity of computing action profiles that are Nash equilibria, and those that provide the maximum utilitarian or egalitarian social welfare. We show that these problems are generally {{\textsf{NP}}h} but become polynomial-time solvable when the given network is restricted to certain special domains, including networks whose treewidth is bounded by a constant and networks whose critical clique graphs are forests.
\end{abstract}
\hyphenation{BNPG}
\hyphenation{BNPGs}
\hyphenation{PSNE}
\hyphenation{PSNEC}
\section{Introduction}
Binary networked public goods games (BNPGs) model scenarios where players reside in a network and decide whether to invest in a public good. The value of the public good is determined by the total amount of investment and is shared among the players in some specific way. In particular, the payoff of each player is determined by the number of her neighbors who invest and her own action. BNPGs are relevant to several real-world applications. One example is vaccination, where parents decide whether to vaccinate their children; the public good here is herd immunity.
In a recent paper, Yu~et~al.~\shortcite{DBLP:conf/aaai/YuZBV20} explored the complexity of determining the existence of pure-strategy Nash equilibria (PSNE) in BNPGs. In particular, they showed that this problem is {{\textsf{NP}}h}. On the positive side, they derived polynomial-time algorithms for the case where the given network is a clique or a tree. Following their work, we embark on a more refined complexity study of BNPGs. Our main contributions are as follows.
\begin{itemize}
\item We fix a flaw in an {{\textsf{NP}}hns} proof in~\cite{DBLP:conf/aaai/YuZBV20}.
\item Besides PSNE (action) profiles, we consider also profiles with the maximum utilitarian or egalitarian social welfare which are not the main focus of~\cite{DBLP:conf/aaai/YuZBV20}.
\item We show that the problem of computing profiles with the maximum utilitarian/egalitarian social welfare is {{\textsf{NP}}h}, and this holds even when the given network is bipartite and has a constant diameter.
\item For the problems studied, we derive polynomial-time algorithms restricted to networks whose critical clique graphs are forests, or whose treewidths are bounded by a constant. Note that the former class of graphs contains both trees and cliques, and the latter class contains trees. Therefore, our results considerably extend the {{\textsf{P}}} results of~\cite{DBLP:conf/aaai/YuZBV20}.
\end{itemize}
{\bf{Related Works.}} BNPGs are a special case of graphical games proposed by Kearns, Littman, and Singh~\shortcite{DBLP:conf/uai/KearnsLS01}, where the complexity of computing Nash equilibria has been investigated in the literature~\cite{DBLP:conf/uai/KearnsLS01,DBLP:conf/sigecom/ElkindGG06}. Recall that graphical games were proposed as a succinct representation of $n$-player $2$-action games whose action space is $2^n$. In graphical games, players are mapped to vertices in a network and the payoff of each player is entirely determined by her own action and those of her neighbors. BNPGs are specified so that, in addition to one's own action, only the number of neighbors who invest matters for the payoff of this player. In other words, in BNPGs, every player treats her neighbors equally. An intuitive generalization of BNPGs where each player may take more than two actions has been considered by Bramoull\'{e} and Kranton~\shortcite{DBLP:journals/jet/BramoulleK07}. Additionally, BNPGs are also related to supermodular network games~\cite{ManshadiJohariAllerton2009} and best-shot games~\cite{HarrisonH1989,Carpenter2002,DBLP:journals/ai/LevitKGM18}. For more detailed discussions or comparisons we refer to~\cite{DBLP:conf/aaai/YuZBV20}.
\section{Preliminaries}
We assume that the reader is familiar with basic notions in graph theory and computational complexity, and, if not, we refer to~\cite{Douglas2000,DBLP:journals/interfaces/Tovey02}.
Let $G=(V, E)$ be an undirected graph. The vertex set of~$G$ is also denoted by~$\ver{G}$. The set of (open) neighbors of a vertex $v\in V$ in~$G$ is denoted by $N_G(v)=\{v'\in V : \edge{v}{v'}\in E\}$. The set of closed neighbors of~$v$ is $N_G[v]=N_G(v)\cup \{v\}$.
We define~$n_G(v)$ (resp.\ $n_G[v]$) as the cardinality of~$N_G(v)$ (resp.\ $N_G[v]$). The quantity~$n_G(v)$ is usually called the degree of~$v$ in~$G$ in the literature.
Given a subset $V'\subseteq V$, we use $n_G(v, V')=\abs{N_G(v)\cap V'}$ (resp.\ $n_{G}[v, V']=\abs{N_G[v]\cap V'}$) to denote the number of open neighbors (resp.\ closed neighbors) of~$v$ contained in the set~$V'$. The subgraph induced by~$V'\subseteq V$ is denoted by~$G[V']$, and $G-V'$ is the subgraph induced by $V\setminus V'$.
A BNPG~$\mathcal{G}$ is a $4$-tuple $(V, E, g_V, c)$. Here,~$V$ is the set of players, and~$(V, E)$ is an undirected graph with~$V$ being the vertex set. In addition,~$g_V$ is a set of functions, one for each player. In particular, for every player $v\in V$ there is one function $g_v: \mathbb{N}_0 \rightarrow \mathbb{R}_{\geq 0}$ in~$g_V$ such that~$g_v(i)$, where~$i$ is a nonnegative integer not greater than~$n_G[v]$, measures the external benefit of the player~$v$ when exactly~$i$ of her {\bf{closed neighbors}} invest. Accordingly, the function~$g_v$ is called the externality function of~$v$. Finally, $c: V\rightarrow \mathbb{R}_{\geq 0}$ is a cost function where~$c(v)$ is the investment cost of player~$v\in V$.
For a function $f: A\rightarrow B$ and $A'\subseteq A$, we use~$f|_{A'}$ to denote the function~$f$ restricted to~$A'$, i.e., $f|_{A'}: A' \rightarrow B$ with $f|_{A'}(v)=f(v)$ for all $v\in A'$. Sometimes we also write $f|_{\neg (A\setminus A')}$ for~$f|_{A'}$. For $V'\subseteq V$, a subgame of~$\mathcal{G}=(V, E, g_V, c)$ induced by~$V'$, denoted by $\mathcal{G}|_{V'}$, is the game $(V', E', g_{V'}, c|_{V'})$ such that $E'=\{\edge{u}{u'}\in E : u, u'\in V'\}$ and $g_{V'}=\{g_{v} : v\in V'\}$.
An (action) profile of a BNPG is represented by a subset consisting of the players who invest. Given a profile $\mathbf{s}\subseteq V$, the payoff of every player $v\in V$ is defined by
\[\mu(v, {\bf{s}}, \mathcal{G})=g_v(n_G[v, {\bf{s}}])-{\mathbf{1}}_{\bf{s}}(v) \cdot c(v),\]
where ${\mathbf{1}}_{\bf{s}}(\cdot)$ is the indicator function of ${\bf{s}}$, i.e., ${\bf{1}}_{\bf{s}}(v)$ is~$1$ if $v\in \bf{s}$ and is~$0$ otherwise.
If it is clear which BNPG is discussed, we drop\onlyfull{ the third parameter}~$\mathcal{G}$ from~$\mu(v, {\bf{s}}, \mathcal{G})$ for brevity.
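To make the payoff rule concrete, here is a minimal Python sketch; the adjacency-map encoding of the network and the names `payoff` and `closed_investors` are ours, chosen purely for illustration:

```python
# Sketch of the payoff mu(v, s) from the definition above.
# A BNPG is modelled by an adjacency map `adj` (open neighborhoods),
# externality functions `g` (one per player), and investment costs `c`.

def closed_investors(adj, v, s):
    """n_G[v, s]: number of closed neighbors of v that invest under s."""
    return sum(1 for u in adj[v] | {v} if u in s)

def payoff(adj, g, c, v, s):
    """mu(v, s) = g_v(n_G[v, s]) - 1_s(v) * c(v)."""
    return g[v](closed_investors(adj, v, s)) - (c[v] if v in s else 0)

# Toy 3-player path a - b - c where only a invests.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
g = {v: (lambda i: float(i)) for v in adj}   # g_v(i) = i for everyone
c = {v: 1.0 for v in adj}
s = {"a"}
print(payoff(adj, g, c, "b", s))  # b does not invest: g_b(1) = 1.0
print(payoff(adj, g, c, "a", s))  # a invests: g_a(1) - c(a) = 0.0
```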
{\bf{Nash equilibria.}}
In the setting of noncooperative games, all players are assumed to be self-interested. In this case, profiles that are stable in a certain sense are of particular importance.
For a profile~${\bf{s}}$ and a player $v\in V$, let ${\bf{s}}(\neg v)$ be the profile obtained from~${\bf{s}}$ by altering the action of~$v$, i.e., $v \in {\bf{s}}(\neg v)$ if and only if $v\not\in {\bf{s}}$, and for every $v'\in V\setminus \{v\}$ it holds that $v'\in {\bf{s}}(\neg v)$ if and only if $v'\in {\bf{s}}$.
We say that a player $v\in V$ has an incentive to deviate under~${\bf{s}}$ if it holds that $\mu(v, {\bf{s}}, \mathcal{G})< \mu(v, {\bf{s}}(\neg v), \mathcal{G})$.
A profile~${\bf{s}}$ is a PSNE if none of the players has an incentive to deviate under~${\bf{s}}$.
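The PSNE condition can be checked by brute force on small games, directly from the definition; the following self-contained sketch uses an illustrative encoding of a BNPG (an adjacency map, a dict of externality functions, and a dict of costs):

```python
# A profile s is a PSNE iff no player strictly prefers the flipped
# profile s(neg v) obtained by toggling only her own action.

def mu(adj, g, c, v, s):
    k = sum(1 for u in adj[v] | {v} if u in s)   # n_G[v, s]
    return g[v](k) - (c[v] if v in s else 0)

def is_psne(adj, g, c, s):
    # s ^ {v} is the symmetric difference: exactly v's action is flipped.
    return all(mu(adj, g, c, v, s) >= mu(adj, g, c, v, s ^ {v}) for v in adj)

# Best-shot-like toy game on a single edge {a, b}: one investor suffices.
adj = {"a": {"b"}, "b": {"a"}}
g = {v: (lambda i: 2.0 if i >= 1 else 0.0) for v in adj}
c = {v: 1.0 for v in adj}
print(is_psne(adj, g, c, {"a"}))   # True: b free-rides, a still gains by investing
print(is_psne(adj, g, c, set()))   # False: each player would gain by investing
```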
{\bf{Social welfare.}}
In some other settings, there is an organizer, a leader, or an authority (e.g., in the game of vaccination the authority might be the government) who is able to coordinate the actions of the players in order to yield the maximum possible welfare for the whole society. We consider two types of social welfare as follows. Let $\mathcal{G}=(V, E, g_V, c)$ be a BNPG and~${\bf{s}}$ a profile of~$\mathcal{G}$.
\begin{description}
\item[Utilitarian social welfare (USW)] The USW of $\bf{s}$, denoted by ${\textsf{USW}}({\bf{s}}, \mathcal{G})$, is the sum of the utilities of all players:
\[{\textsf{USW}}({\bf{s}}, \mathcal{G})=\sum_{v\in V}\mu(v, {\bf{s}}, \mathcal{G}).\]
\item[Egalitarian social welfare (ESW)] Under a profile with the maximum USW, it may happen that some player receives a very large utility while others obtain very little, which is considered unfair in some circumstances. Unlike USW, ESW cares about the player with the least payoff. Particularly, the ESW of~${\bf{s}}$ is
\[{\textsf{ESW}({\bf{s}}, \mathcal{G})}=\min_{v\in V}\mu(v, {\bf{s}}, \mathcal{G}).\]
\end{description}
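Both welfare notions are straightforward to evaluate for a given profile; the sketch below uses the same illustrative BNPG encoding as above (adjacency map, externality functions, costs), with names of our own choosing:

```python
# USW and ESW of a profile, matching the two formulas above.

def mu(adj, g, c, v, s):
    k = sum(1 for u in adj[v] | {v} if u in s)   # n_G[v, s]
    return g[v](k) - (c[v] if v in s else 0)

def usw(adj, g, c, s):
    return sum(mu(adj, g, c, v, s) for v in adj)

def esw(adj, g, c, s):
    return min(mu(adj, g, c, v, s) for v in adj)

# Triangle where investing costs 1 and g_v(i) = i; a and b invest.
adj = {v: {u for u in "abc" if u != v} for v in "abc"}
g = {v: (lambda i: float(i)) for v in adj}
c = {v: 1.0 for v in adj}
s = {"a", "b"}
print(usw(adj, g, c, s))  # a: 2-1, b: 2-1, c: 2  ->  4.0
print(esw(adj, g, c, s))  # the investors are worst off: 1.0
```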
{\bf{Problem formulation.}}
We study the following problems, all of which take a BNPG $\mathcal{G}=(V, E, g_V, c)$ as input. The names of the problems and their tasks are as follows.
\begin{description}
\item \prob{PSNE Computation (PSNEC)}: Compute a PSNE profile of~$\mathcal{G}$ if there are any, and output ``{No}'' otherwise.
\item \prob{USW/ESW Computation (USWC/ESWC)}: Compute a profile of~$\mathcal{G}$ with the maximum USW/ESW.
\end{description}
{\bf{Some hard problems.}} Our hardness results are established based on the following problems.
For a positive integer~$d$, a $d$-regular graph is a graph where every vertex has degree exactly~$d$.
\EP
{$3$-Regular Induced Subgraph ($3$-RIS)}
{A graph~$G$.}
{Does~$G$ admit a $3$-regular induced subgraph?}
The {\prob{$3$-RIS}} problem is {{\textsf{NP}}h}~\cite{DBLP:journals/tcs/BroersmaGP13,DBLP:journals/dam/CheahC90}.
A clique of a graph is a subset of vertices such that between every pair of vertices there is an edge.
\EP
{$\kappa$-Clique}
{A graph~$G$ and a positive integer~$\kappa$.}
{Is there a clique of size~$\kappa$ in~$G$?}
{\prob{$\kappa$-Clique}} is a well-known {{\textsf{NP}}h} problem~\cite{DBLP:conf/coco/Karp72}.
A vertex~$v$ dominates another vertex~$v'$ in a graph~$G$ if there is an edge between~$v$ and~$v'$ in~$G$. In addition, a subset~$A$ of vertices dominates another subset~$B$ of vertices if every vertex in~$B$ is dominated by at least one vertex in~$A$.
\EP
{Red-Blue Dominating Set (RBDS)}
{A bipartite graph $G=(B\uplus R, E)$ and a positive integer~$\kappa$.}
{Is there a subset $B'\subseteq B$ such that~$\abs{B'}\leq \kappa$ and~$B'$ dominates~$R$?}
{The {\prob{RBDS}} problem is {{\textsf{NP}}h}~\cite{garey}.}
We assume that in instances of {\prob{$\kappa$-Clique}} and {\prob{RBDS}}, the graph~$G$ does not contain any isolated vertices (vertices of degree zero).
\section{BNPG Solutions in General}
In this section, we settle the complexity of the {\prob{PSNEC/USWC/ESWC}} problems.
We first point out a flaw in the proof of Theorem~10 in~\cite{DBLP:conf/aaai/YuZBV20}, where an {{\textsf{NP}}hns} reduction for {\prob{PSNEC}} restricted to fully-homogeneous BNPGs is established.\footnote{Due to space limitation they omitted the proof in~\cite{DBLP:conf/aaai/YuZBV20}, but the proof is included in a full version posted online (\url{https://arxiv.org/pdf/1911.05788.pdf}). [19.11.2020]} Recall that a BNPG is fully-homogeneous if both the externality functions and the investment costs of all players are the same.
A profile is trivial if either all players invest or none of the players invests.
Checking whether a trivial profile is PSNE is clearly easy.
Yu~et~al.\ provide a reduction to show that determining whether a fully-homogeneous BNPG admits a nontrivial PSNE is {{\textsf{NP}}h}.
However, this reduction is flawed. Let us briefly reiterate the reduction, which is from the {\prob{$\kappa$-Clique}} problem. The instance of BNPG is obtained from an instance $(G=(V,E), \kappa)$ of {\prob{$\kappa$-Clique}} by first considering vertices in~$V$ as players and then adding a large set~$T$ of~$M$ players who form a clique and are adjacent to all players in~$V$. The externality functions and costs of players are set so that every player is indifferent between investing and not investing if exactly $\kappa-1+M$ of her open neighbors invest; otherwise, the player prefers not to invest. If there is a clique of size~$\kappa$, then there is a PSNE. However, the other direction does not work.
In fact, a nontrivial PSNE requires only the existence of a $(\kappa-1)$-regular induced subgraph of~$G$ (in this case, all players in~$T$ and the players in the regular subgraph invest). A more concrete counterexample is demonstrated in Figure~\ref{fig-reduction-flawed}. Our amendment is as follows.
\begin{figure}
\caption{A counterexample to the reduction in the proof of Theorem~10 in~\cite{DBLP:conf/aaai/YuZBV20}.}
\label{fig-reduction-flawed}
\end{figure}
\begin{theorem}
Determining if a fully-homogeneous BNPG admits a nontrivial PSNE is {{\textsf{NP}}h}.
\end{theorem}
\begin{proof}
We prove the theorem via a reduction from the {\prob{$3$-RIS}} problem. Let $G=(V, E)$ be a {\prob{$3$-RIS}} instance.
The network is exactly~$G$. We set the externality functions and the costs of the players so that everyone is indifferent between investing and not investing if exactly three of her open neighbors invest, and prefers not to invest otherwise. This can be achieved by setting, for every $v\in V$, $c(v)=2$ and
\begin{equation*}
g_v(x)=
\begin{cases}
x & x\leq 3\\
x+1 & 4\leq x\leq n_G[v]\\
\end{cases}
\end{equation*}
The correctness is easy to see. If the graph~$G$ admits a $3$-regular subgraph induced by a subset~$H\subseteq V$, then~$H$ is a nontrivial PSNE. On the other hand, if there is a nontrivial PSNE $\emptyset\neq H\subseteq V$, due to the above construction, every vertex in~$H$ must have exactly~$3$ open neighbors in~$H$, and so~$G[H]$ is $3$-regular.
\end{proof}
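The construction in this proof is easy to test mechanically. The following sketch (with helper names of our own) instantiates the externality function and cost from the proof and verifies the PSNE condition on $K_4$, which is itself $3$-regular:

```python
# Sanity check of the reduction: with c(v) = 2 and the piecewise g_v
# above, the vertex set of a 3-regular induced subgraph is a nontrivial
# PSNE. We test the full vertex set of K_4.

def gv(x):
    # g_v from the proof: x for x <= 3, and x + 1 for x >= 4.
    return float(x) if x <= 3 else float(x + 1)

def mu(adj, v, s):
    k = sum(1 for u in adj[v] | {v} if u in s)   # investing closed neighbors
    return gv(k) - (2.0 if v in s else 0.0)      # investment cost c(v) = 2

def is_psne(adj, s):
    return all(mu(adj, v, s) >= mu(adj, v, s ^ {v}) for v in adj)

adj = {v: {u for u in "abcd" if u != v} for v in "abcd"}   # K_4
print(is_psne(adj, set("abcd")))   # True: everyone investing is a PSNE
print(is_psne(adj, {"a", "b"}))    # False: a and b prefer to withdraw
```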
Now we turn to profiles providing the maximum social welfare. We show that the corresponding problems are all hard to solve, and this holds even when the given network is a bipartite graph with constant diameter. Recall that the diameter of a graph is the maximum possible distance between vertices, where the distance between two vertices is defined as the length of a shortest path between them.
\begin{theorem}
\label{thm-uswo-np-hard}
{\prob{USWC}} is {{\textsf{NP}}h}. This holds even when the given network is bipartite and has diameter at most~$4$.
\end{theorem}
\begin{proof}
We prove the theorem via a reduction from the {\prob{$\kappa$-Clique}} problem to the decision version of {\prob{USWC}} which consists in determining whether there is a profile of utilitarian social welfare at least a threshold value.
Let $(H, \kappa)$ be a {\prob{$\kappa$-Clique}} instance, where $H=(U, E)$ is a graph, $n=\abs{U}$, and $m=\abs{E}$. We assume that~$\kappa$ is considerably smaller than~$m$, say $(\kappa+1)^{10}<m$. As {\prob{$\kappa$-Clique}} is {{\textsf{W[1]-hard}}} with respect to~$\kappa$, this assumption does not change the hardness of the problem. We construct the following instance. For each vertex $u\in U$, we create a player denoted by~$v(u)$. The externality function of~$v(u)$ is defined so that $g_{v(u)}(0)=1$ and $g_{v(u)}(x)=0$ for positive integers~$x\leq n_H[u]$, and the investment cost of~$v(u)$ is $c(v(u))=1$. Let $V(U)=\{v(u) : u\in U\}$ be the set of the vertex-players. In addition, for each edge $e\in E$, where $e=\edge{u}{u'}$, we create a player~$v(e)$. We define $g_{v(e)}(0)=1$ and $g_{v(e)}(x)=0$ for all~$x\in [4]$. Moreover, $c(v(e))=0$. Let $V(E)=\{v(e) : e\in E\}$ be the set of edge-players. Finally, we create a player~$v^*$ such that $g_{v^*}(\frac{\kappa \cdot (\kappa-1)}{2})=m$ and $g_{v^*}(x)=0$ for all other possible values of~$x$. The investment cost of~$v^*$ is $c(v^*)=m$. The player network is a bipartite graph with the vertex partition $(V(U)\cup \{v^*\}, V(E))$. In particular, the edges in the network are as follows: for every edge $e=\edge{u}{u'}\in E$, the player~$v(e)$ is adjacent to exactly~$v(u)$,~$v(u')$, and~$v^*$, and thus has degree~$3$ in the network. It is clear that the network has exactly~$3m$ edges and has diameter at most~$4$.
The construction clearly can be done in polynomial time.
We claim that there is a clique of size~$\kappa$ in the graph~$H$ if and only if there is a profile of USW at least
\[q=(n-\kappa)+\left(m-\frac{\kappa\cdot (\kappa-1)}{2}\right)+m.\]
$(\Rightarrow)$ Assume that there is a clique $K\subseteq U$ of size~$\kappa$ in the graph~$H$. Let $E(K)=\{\edge{u}{u'}\in E : u, u'\in K\}$ be the set of edges in the subgraph induced by~$K$. Clearly,~$E(K)$ consists of exactly $\frac{\kappa\cdot (\kappa-1)}{2}$ edges. Let ${\bf{s}}=\{v(e) : e\in E(K)\}$ be the set of the $\frac{\kappa \cdot (\kappa-1)}{2}$ players corresponding to the edges in~$E(K)$. We claim that~${\bf{s}}$ has USW at least~$q$. From the above construction, the utility of each player~$v(u)$, where $u\in U$, is~$1$ if $u\not\in K$ and is~$0$ otherwise. Hence, the total utility of the vertex-players is exactly $n-\kappa$. In addition, the utility of a player~$v(e)$, $e\in E$, is~$1$ if $e\not\in E(K)$ and is~$0$ otherwise. Therefore, the total utility of edge-players is $m-\frac{\kappa\cdot (\kappa-1)}{2}$. Finally, the utility of the player~$v^*$ is exactly~$m$. The sum of the above utilities is exactly~$q$.
$(\Leftarrow)$ Assume that there is a profile with USW at least~$q$. Due to the large investment cost of the player~$v^*$, to maximize the USW,~$v^*$ must not invest and, moreover, by the externality functions, exactly $\frac{\kappa\cdot (\kappa-1)}{2}$ edge-players must invest. Additionally, by the specific setting of the externality functions and the costs of the vertex-players, none of the vertex-players invests in any profile with the maximum USW. It follows that in a profile with the maximum USW, exactly $\frac{\kappa\cdot (\kappa-1)}{2}$ edge-players invest. Then, due to the setting of the externality functions, the smaller the number of vertex-players dominated by the investing edge-players, the larger is the USW. It is then easy to check that a profile achieves a USW at least~$q$ if and only if at most~$\kappa$ vertex-players are dominated by the investing edge-players. This implies that the edges whose corresponding players invest in a profile with USW at least~$q$ induce a clique in~$H$.
\end{proof}
\onlyfull{
If we seek a profile with the maximum social welfare under the restriction that at least~$k$ players invest, the problem becomes {{\textsf{W[2]-hard}}} with respect to the number of investors.
\begin{theorem}
Determining whether there is a strategy of size exactly/at least $n-k$ of utilitarian social welfare at least~$q$ is W[2]-hard with respect to~$k$. This holds even for fully-homogeneous case.
\end{theorem}
\begin{proof}
Reduction from dominating set restricted to~$\ell$-regular graphs. Each vertex is a player. Investing does not cost anything for all players. Moreover, for all players $v\in V$, it holds that $g_v(\ell+1)=0$ and $g_v(x)=1$ for all nonnegative integer $x\leq \ell$, and set $q=n$.
It can be also adapted for egalitarian social welfare by resetting $q=1$.
\end{proof}
}
For the computation of profiles with the maximum egalitarian social welfare, we obtain a similar result.
\begin{theorem}
\label{thm-eswo-np-hard}
{\prob{ESWC}} is {{\textsf{NP}}h}. This holds even when the given network is bipartite with diameter at most~$4$ and all players have the same investment cost.
\end{theorem}
{
\begin{proof}
Our proof is based on a reduction from the {\prob{RBDS}} problem. Let $(G, \kappa)$ be an {\prob{RBDS}} instance, where~$G$ is a bipartite graph with the vertex partition $(R\uplus B)$. We construct an instance of the decision version of {\prob{ESWC}}, which takes as input a BNPG and a number~$q$, and determines if the given BNPG admits a profile of ESW at least~$q$.
For each vertex $v\in B\uplus R$, we create one player denoted still by the same symbol~$v$ for simplicity.
In addition, we create a player~$v^*$. The network of the players is obtained from~$G$ by first adding~$v^*$ and then creating edges between~$v^*$ and all blue-players in~$B$, which is clearly a bipartite graph with the vertex partition $(R\cup \{v^*\}, B)$. Furthermore, the diameter of the network is at most~$4$.
The externality and the cost functions are defined as follows.
\begin{itemize}
\item For every red-player~$v\in R$, we define $g_{v}(0)=0$ and $g_{v}(x)=1$ for all other possible integers~$x$.
\item For every blue-player~$v\in B$, we define $g_{v}(0)=1$, $g_{v}(1)=2$, and $g_{v}(x)=0$ for all other possible integers $x\geq 2$.
\item For the player~$v^*$, we define $g_{v^*}(x)=1$ for every nonnegative integer $x\leq \kappa$ and $g_{v^*}(x)=0$ for all other possible integers $x> \kappa$.
\item All players have the same investment cost~$1$, i.e., $c(v)=1$ for every player~$v$ constructed above.
\end{itemize}
The reduction is completed by setting $q=1$.
The above instance can be constructed in polynomial time.
It remains to show the correctness of the reduction.
$(\Rightarrow)$ Assume that there is a subset $B'\subseteq B$ such that $\abs{B'}\leq \kappa$ and every red-player has at least one neighbor in~$B'$. One can check that profile~$B'$ has ESW at least one. Particularly, as~$B'$ is the set of the investing neighbors of~$v^*$, due to the definitions of the externality and cost functions given above, the utility of the player~$v^*$ is $g_{v^*}(\abs{B'})=1$. Let~$v$ be a player other than~$v^*$. If $v\in R$, then as~$v$ has at least one neighbor in~$B'$, the utility of~$v$ is exactly one. If $v\in B'$, then the utility of~$v$ is $g_v(2)-c(v)=1$. Finally, if $v\in B\setminus B'$, the utility of~$v$ is $g_v(0)=1$.
$(\Leftarrow)$ Assume that there is a profile~${\bf{s}}$ where every player obtains utility at least one. Observe first that none of $R\cup \{v^*\}$ can be contained in~${\bf{s}}$, since for every player in $R\cup \{v^*\}$, the investment cost is exactly one and the externality benefit is at most one. It follows that ${\bf{s}}\subseteq B$. Then, it must hold that $\abs{\bf{s}}\leq \kappa$, since otherwise the utility of the player~$v^*$ can be at most $g_{v^*}(\abs{\bf{s}})=0$. Finally, as every player $v\in R$ obtains utility at least one under~$\bf{s}$, at least one of~$v$'s neighbors must be contained in ${\bf{s}}$. This implies that~$\bf{s}$ dominates~$R$.
\end{proof}
}
A close look at the reduction in the proof of Theorem~\ref{thm-eswo-np-hard} reveals that if the {\prob{RBDS}} instance is a {Noins}, the best achievable ESW in the constructed BNPG is zero. The following corollary follows.
\begin{corollary}
{\prob{ESWC}} is not polynomial-time approximable within factor~$\beta(p)$ unless ${\textsf{P}}= {\textsf{NP}}$, where~$p$ is the input size and~$\beta$ can be any computable function in~$p$. Moreover, this holds even when the given network is bipartite with diameter at most~$4$ and all players have the same investment cost.
\end{corollary}
\section{Games with Critical Clique Forests}
In the previous section, we showed that all problems studied in the paper are {{\textsf{NP}}h}. This motivates us to study the problems when the given network is subject to some restrictions. Yu~et~al.~\shortcite{DBLP:conf/aaai/YuZBV20} considered the cases where the given networks are cliques or trees, and showed separately that {\prob{PSNEC}} in both cases becomes polynomial-time solvable. To significantly extend their results, we derive a polynomial-time algorithm which applies to a much larger class of networks containing both cliques and trees. Generally speaking, we consider networks whose vertices can be divided into disjoint cliques such that contracting these cliques results in a forest. For formal expositions, we need the following notions.
A {\it{critical clique}} in a graph $G=(V, E)$ is a clique $K\subseteq V$ whose members share exactly the same neighbors and which is maximal under this property, i.e., for every $v\in V\setminus K$ either~$v$ is adjacent to all vertices in the clique~$K$ or to none of them, and there does not exist any other clique~$K'$ satisfying the same condition with $K\subset K'$. The concept of critical cliques was coined by Lin, Jiang, and Kearney~\shortcite{DBLP:conf/isaac/LinJK00}, and has since been used to derive many efficient algorithms (see, e.g.,~\cite{DBLP:journals/tcs/Guo09,DBLP:journals/algorithmica/DomGHN06,DBLP:journals/dam/DomGHN08}). It is well known that any two different critical cliques do not intersect. In addition, for two critical cliques, either they are completely adjacent (i.e., there is an edge between every two vertices from the two cliques respectively), or they are not adjacent at all (i.e., there are no edges between these two cliques). For brevity, when we say two critical cliques are adjacent we mean that they are completely adjacent.
For a graph~$G$, its critical clique graph, denoted by~$\cclique{G}$, is the graph whose vertices are critical cliques of~$G$ and, moreover, there is an edge between two vertices if and only if the corresponding critical cliques are adjacent in~$G$. See Figure~\ref{fig-critical-clique-graphs} for an illustration. Every graph has a unique critical clique graph and, importantly, it can be constructed in polynomial time~\cite{DBLP:conf/isaac/LinJK00,DBLP:journals/algorithmica/DomGHN06}.
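Since the members of a critical clique have identical closed neighborhoods (they are "true twins"), the critical clique graph can be computed by a direct grouping; the following sketch is a naive quadratic construction of our own, not the algorithm of the cited papers:

```python
from collections import defaultdict

def critical_clique_graph(adj):
    """Group vertices by closed neighborhood N_G[v]; each group is a
    critical clique. Two cliques are adjacent iff any (hence every)
    pair of representatives is adjacent."""
    groups = defaultdict(list)
    for v in adj:
        groups[frozenset(adj[v] | {v})].append(v)   # key: N_G[v]
    cliques = [frozenset(vs) for vs in groups.values()]
    edges = {(i, j)
             for i in range(len(cliques))
             for j in range(i + 1, len(cliques))
             if next(iter(cliques[i])) in adj[next(iter(cliques[j]))]}
    return cliques, edges

# Path a - {b, b2} - c, where b and b2 are adjacent true twins.
adj = {"a": {"b", "b2"}, "b": {"a", "b2", "c"},
       "b2": {"a", "b", "c"}, "c": {"b", "b2"}}
cliques, edges = critical_clique_graph(adj)
print(sorted(len(K) for K in cliques))  # [1, 1, 2]: cliques {a}, {c}, {b, b2}
print(len(edges))                       # CC(G) is a path on 3 nodes: 2 edges
```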
\begin{figure}
\caption{A graph~$G$ and its critical clique graph~$\cclique{G}$.}
\label{fig-critical-clique-graphs}
\end{figure}
We are ready to show the first main result in this section.
\begin{theorem}
\label{thm-psnec-poly-critical-clique-graphs}
{\prob{PSNEC}} is polynomial-time solvable when the critical clique graph of the given network is a forest.
\end{theorem}
\begin{proof}
To prove the theorem, we derive a dynamic programming algorithm to solve {\prob{PSNEC}} in the case where the given network has a critical clique graph being a forest.
Let $\mathcal{G}=(V, E, g_V, c)$ be a BNPG, where $G=(V, E)$ is a network of players.
We first create the critical clique graph of~$G$ in polynomial time. Let~$T$ denote the critical clique graph of~$G$. For clarity, we call vertices in~$T$ nodes. For notational simplicity, we directly use the critical clique to denote its corresponding node in~$T$.
If~$G$ is disconnected, we run the following algorithm for each connected component. Then, we return the union of the profiles computed for all connected components if all subgames restricted to these connected components admit PSNEs; otherwise, we return ``{No}''.
Therefore, let us assume now that~$G$ is connected, and hence~$T$ is a tree.
We choose an arbitrary node in~$T$ and make it the root of the tree.
For each nonroot node~$K$ in~$T$, let~$K^{\text{P}}$ denote the parent node of~$K$ in~$T$. If~$K$ is the root, we define $K^{\text{P}}=\emptyset$. In addition, let~$\textsf{chd}(K)$ be the set of the children of~$K$ in~$T$ for every nonleaf node~$K$. If~$K$ is a leaf, we define ${\textsf{chd}}(K)=\{\emptyset\}$.
We use~$T_K$ to denote the subtree of~$T$ rooted at~$K$, and use~$\textsf{Ver}(T_K)$ to denote the set of vertices in~$G$ that are contained in the nodes of~$T_K$, i.e.,
\[\textsf{Ver}(T_K)=\bigcup_{K'~\text{is a node in}~T_K} K'.\]
For each node~$K$ in~$T$, we maintain a binary dynamic table ${\textsf{DT}}_K(x, y, z)$, where~$x$,~$y$, and~$z$ are integers such that
\begin{itemize}
\item $0\leq x\leq \abs{K}$,
\item $0\leq y\leq \abs{K^\text{P}}$, and
\item $0\leq z\leq \abs{\bigcup_{K'\in \textsf{chd}(K)}K'}$.
\end{itemize}
Particularly, $\textsf{DT}_K(x, y, z)$ is supposed to be~$1$ if and only if the subgame~$\mathcal{G}|_{{\textsf{Ver}}(T_{K^{\text{P}}})}$ admits a profile~${\bf{s}}$ under which (regard $T_{K^{\text{P}}}$ as~$T$ if~$K$ is the root of~$T$)
\begin{itemize}
\item everyone in~$K$ has exactly~$x$ closed neighbors in~$K$ who invest, i.e., $\abs{{\bf{s}}\cap K}=x$,
\item everyone in~$K$ has exactly~$y$ neighbors in~$K^{\text{P}}$ who invest, i.e., $\abs{{\bf{s}}\cap K^{\text{P}}}=y$,
\item everyone in $K$ has exactly~$z$ neighbors in their children nodes who invest, i.e., $\sum_{K'\in {\textsf{chd}}(K)}\abs{{\bf{s}}\cap K'}=z$ and,
\item none of players in ${\textsf{Ver}}(T_K)$ has an incentive to deviate.
\end{itemize}
Clearly, after computing all entries of the table~${\textsf{DT}}_{\hat{K}}$ associated to the root node~$\hat{K}$ of~$T$, we can conclude that the given BNPG admits a PSNE if and only if there is a $1$-valued entry ${\textsf{DT}}_{\hat{K}}(x, 0, z)=1$ for some~$x$ and~$z$.
The tables associated to the nodes in~$T$ are computed in a bottom-up manner, from those associated to leaf nodes up to that associated to the root node.
Let ${\textsf{DT}}_K(x, y, z)$ be an entry considered at the moment. For each player~$v$, we define
\[\triangle(v, 0)=g_v(x+y+z)-c(v)-g_v(x+y+z-1),\]
\[\triangle(v, 1)=g_v(x+y+z)-(g_v(x+y+z+1)-c(v)).\]
Note that if $\triangle(v, 0)<0$,~$v$ does not invest in any PSNE, and if $\triangle(v, 1)<0$,~$v$ must invest in every PSNE.
Therefore, if there is a player~$v\in K$ such that
$\triangle(v, 0)<0$ and $\triangle(v, 1)<0$, we immediately set $\textsf{DT}_K(x, y, z)=0$. Otherwise, we divide the players from~$K$ into
\begin{itemize}
\item $K_{{-}}=\{v\in K : \triangle(v, 0)<0\}$;
\item $K_{{+}}=\{v\in K : \triangle(v, 1)<0\}$; and
\item $K_*=K\setminus (K_{{+}}\cup K_{-})$.
\end{itemize}
We have the following observations.
\begin{itemize}
\item None of~$K_{{-}}$ invests in any PSNE;
\item All players in~$K_{{+}}$ must invest in all PSNEs;
\item Each player in~$K_*$ can be in both the set of investing players and the set of noninvesting players under all PSNEs.
\end{itemize}
Given the above observations, if $\abs{K_{{+}}}> x$ or $\abs{K_{{+}}\cup K_*}< x$, we directly set ${\textsf{DT}}_K(x, y, z)=0$.
Let us assume now that $\abs{K_{{+}}}\leq x$ and $\abs{K_{{+}}\cup K_*}\geq x$. We determine the value of ${\textsf{DT}}_K(x, y, z)$ as follows.
If~$K$ is a leaf node, we set ${\textsf{DT}}_K(x, y, z)=1$ if and only if $z=0$.
Otherwise, let $(K_1, K_2, \dots, K_t)$ be an arbitrary but fixed order of the children of~$K$ in~$T$, where~$t$ is the number of children of~$K$ in~$T$. Then, we set ${\textsf{DT}}_K(x, y, z)=1$ if and only if the following condition holds: there are entries $\textsf{DT}_{K_1}(x_1, y_1, z_1)=1$, $\textsf{DT}_{K_2}(x_2, y_2, z_2)=1$, $\dots$, $\textsf{DT}_{K_t}(x_t, y_t, z_t)=1$ such that $y_j=x$ for all $j\in [t]$ and $\sum_{j=1}^t x_j=z$.
In fact, in this case we let all players in~$K_{{+}}$ and arbitrarily chosen $x-\abs{K_{{+}}}$ players in~$K_*$ invest, and let all the other players in~$K$ not invest. Importantly, the above condition can be checked in polynomial time by a dynamic programming algorithm. To this end, we maintain a binary dynamic table ${\textsf{DT}'}(i, x_i(1), x_i(2))$ where $i\in [t]$, and~$x_i(1)$ and~$x_i(2)$ are two integers such that $0\leq x_i(1)\leq \abs{K_i}$ and $0\leq x_i(2)\leq \abs{\bigcup_{j\in [i-1]}K_j}$ if $i>1$ and $x_i(2)=0$ if $i=1$. In particular, ${\textsf{DT}'}(i, x_i(1), x_i(2))$ is supposed to be~$1$ if and only if there are entries $\textsf{DT}_{K_1}(x_1, y_1, z_1)=1$, $\textsf{DT}_{K_2}(x_2, y_2, z_2)=1$, $\dots$, $\textsf{DT}_{K_i}(x_i, y_i, z_i)=1$ such that $y_j=x$ for all $j\in [i]$, $\sum_{j=1}^{i-1} x_j=x_i(2)$, and $x_i=x_i(1)$.
The table is computed as follows. First, every base entry $\textsf{DT}'(1, x_1(1), x_1(2))$ has value~$1$ if and only if $x_1(2)=0$ and ${\textsf{DT}}_{K_1}(x_1(1), x, z')=1$ for some integer~$z'$. Then, the value of every entry ${\textsf{DT}'}(i, x_i(1), x_i(2))$ such that $i\geq 2$ is~$1$ if and only if there exists an entry ${\textsf{DT}}'(i-1, x_{i-1}(1), x_{i-1}(2))=1$ such that $x_{i-1}(1)\leq x_i(2)$, $x_{i-1}(2)=x_i(2)-x_{i-1}(1)$, and ${\textsf{DT}}_{K_i}(x_i(1), x, z')=1$ for some integer~$z'$. The condition stated above is satisfied if and only if $\textsf{DT}'(t, x_t(1), x_t(2))=1$ for some valid values of~$x_t(1)$ and~$x_t(2)$ such that $x_t(1)+x_t(2)=z$.
The algorithm can be implemented in polynomial time since for each node~$K$, the corresponding table has at most~$n^3$ entries, where~$n$ is the total number of players. So, we have in total at most~$n^4$ entries, each of which can be computed in polynomial time.
\end{proof}
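The classification of the players of a clique~$K$ into $K_-$, $K_+$, and $K_*$ via $\triangle(v,0)$ and $\triangle(v,1)$ can be sketched as follows; the function name `classify` and the dict encoding of $g$ and $c$ are ours, for illustration only:

```python
def classify(players, g, c, total):
    """Classify players of a clique under a fixed entry (x, y, z),
    where total = x + y + z is the number of investing closed neighbors.
    Returns None when some player has both deltas negative, i.e. the
    entry DT_K(x, y, z) must be 0."""
    k_minus, k_plus, k_star = set(), set(), set()
    for v in players:
        d0 = g[v](total) - c[v] - g[v](total - 1)    # invest vs. withdraw
        d1 = g[v](total) - (g[v](total + 1) - c[v])  # abstain vs. invest
        if d0 < 0 and d1 < 0:
            return None
        if d0 < 0:
            k_minus.add(v)      # must not invest
        elif d1 < 0:
            k_plus.add(v)       # must invest
        else:
            k_star.add(v)       # free to do either
    return k_minus, k_plus, k_star

# Example with the externality function from the 3-RIS reduction above.
g = {"v": (lambda x: float(x) if x <= 3 else float(x + 1))}
c = {"v": 2.0}
print(classify({"v"}, g, c, 4))  # v lands in K_*: indifferent at 4 investors
print(classify({"v"}, g, c, 3))  # v lands in K_-: investing loses utility
```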
For {\prob{USWC}} and {\prob{ESWC}} we have similar results.
\begin{theorem}[*]
\label{thm-uswo-critical-clique-graph-tree-poly}
{\prob{USWC}} is polynomial-time solvable if the critical clique graph of the given network is a forest.
\end{theorem}
\begin{proof}
To prove the theorem, we derive a dynamic programming algorithm to solve {\prob{USWC}} in the case where the given network has a critical clique graph being a forest.
Let $\mathcal{G}=(V, E, g_V, c)$ be a BNPG, where $G=(V, E)$ is a network of players.
We first create the critical clique graph of~$G$ in polynomial time. Let~$T$ denote the critical clique graph of~$G$. For clarity, we call vertices in~$T$ nodes. For notational simplicity, we directly use the critical clique to denote its corresponding node in~$T$.
If~$G$ is disconnected, we run the following algorithm for each connected component. Then, we sum up all the values returned by the algorithms running on the connected components.
Therefore, let us assume now that~$G$ is connected, and hence~$T$ is a tree.
We choose an arbitrary node in~$T$ and make it the root of the tree.
For each nonroot node~$K$ in~$T$, let~$K^{\text{P}}$ denote the parent node of~$K$ in~$T$. If~$K$ is the root, we define $K^{\text{P}}=\emptyset$. In addition, let~$\textsf{chd}(K)$ be the set of the children of~$K$ in~$T$ for every nonleaf node~$K$. If~$K$ is a leaf, we define ${\textsf{chd}}(K)=\{\emptyset\}$.
We use~$T_K$ to denote the subtree of~$T$ rooted at~$K$, and use~$\textsf{Ver}(T_K)$ to denote the set of vertices in~$G$ that are contained in the nodes in the subtree~$T_K$.
For each node~$K$ in~$T$, we maintain a dynamic table ${\textsf{DT}}_K(x, y, z)$, where~$x$,~$y$, and~$z$ are integers such that
\begin{itemize}
\item $0\leq x\leq \abs{K}$,
\item $0\leq y\leq \abs{K^\text{P}}$, and
\item $0\leq z\leq \abs{\bigcup_{K'\in \textsf{chd}(K)}K'}$.
\end{itemize}
We say that a profile of the subgame~$\mathcal{G}|_{\textsf{Ver}(T_{K^{\text{P}}})}$ (regard $T_{K^{\text{P}}}$ as~$T$ if~$K$ is the root of~$T$) is a $\textsf{DT}_K(x, y, z)$-compatible profile of the subgame if in this profile the following three conditions are satisfied:
\begin{enumerate}
\item exactly~$x$ players in~$K$ invest,
\item exactly~$y$ players in~$K^{\text{P}}$ invest, and
\item exactly~$z$ players in $\bigcup_{K'\in \textsf{chd}(K)}K'$ invest.
\end{enumerate}
The value of the entry ${\textsf{DT}}_K(x, y, z)$ is supposed to be the maximum possible USW of players in~${\textsf{Ver}}(T_K)$ under $\textsf{DT}_K(x, y, z)$-compatible profiles of the subgame~$\mathcal{G}|_{\textsf{Ver}(T_{K^{\text{P}}})}$.
The values of the entries in the table can be computed recursively, beginning from the leaf nodes up to the root node.
In particular, if~$K$ is a leaf node in~$T$ (note that in this case~$z=0$), we compute ${\textsf{DT}}_K(x, y, 0)$ as follows.
For every player~$v\in K$, the number of closed neighbors who invest is exactly~$x+y$ in every ${\textsf{DT}}_K(x, y, 0)$-compatible profile. Then, the utilities of every player~$v\in K$ from investing and from not investing are respectively $g_v(x+y)-c(v)$ and $g_v(x+y)$. We order the players in~$K$ according to a nondecreasing order of the investment costs~$c(v)$ of players~$v\in K$. Then, it is easy to see that a ${\textsf{DT}}_K(x, y, 0)$-compatible profile which consists of the first~$x$ players in the order achieves the maximum possible USW of players in~$K$, among all ${\textsf{DT}}_K(x, y, 0)$-compatible profiles of the game restricted to $\textsf{Ver}(T_{K^{\text{P}}})$. Let~$H$ be the set of the first~$x$ players in the order. In light of the above discussion, we define
\begin{align*}
{\textsf{DT}}_K(x, y, 0)& =\sum_{v\in H}(g_v(x+y)-c(v))+\sum_{v\in K\setminus H}g_v(x+y)\\
&= \sum_{v\in K} g_v(x+y)-\sum_{v\in H}c(v).
\end{align*}
For a nonleaf node~$K$, we compute ${\textsf{DT}}_K(x, y, z)$ as follows, assuming that the values of all tables associated to the descendants of~$K$ are already computed.
Similar to the above case, we first order the players in~$K$ by nondecreasing cost~$c(v)$, $v\in K$, and let~$H$ denote the first~$x$ players in this order. Then, we define
\[s=\sum_{v\in H}\left(g_v(x+y+z)-c(v)\right)+\sum_{v\in K\setminus H}g_v(x+y+z),\]
which is the maximum possible USW of players in~$K$ under ${\textsf{DT}}_K(x, y, z)$-compatible profiles. However, we also need to take into account the USW of the descendants of~$K$. We solve this by dynamic programming. Let $(K_1, K_2, \dots, K_t)$ be an arbitrary but fixed order of the children of~$K$, where~$t$ is the number of children of~$K$. As~$K$ is the parent of each~$K_i$, only the entries ${\textsf{DT}}_{K_i}(x_i, y_i, z_i)$ such that $y_i=x$ are relevant to our computation. For each $i\in [t]$, we maintain a dynamic table ${\textsf{DT}'}(i, x_i(1), x_i(2))$, where $x_i(1)$ and $x_i(2)$ are two integers such that $0\leq x_i(1)\leq \abs{K_i}$, and $0\leq x_i(2)\leq \abs{\bigcup_{j\in [i-1]}K_j}$ if $i>1$ and $x_i(2)=0$ if $i=1$. The integers $x_i(1)$ and $x_i(2)$ respectively indicate the number of players in~$K_i$ who invest and the number of players in $\bigcup_{j\in [i-1]}K_j$ who invest, and the value of the entry is the maximum possible USW of players in $\bigcup_{j\in [i]}K_j$ and their descendants, i.e., the USW of players in $\bigcup_{j\in [i]} {\textsf{Ver}}(T_{K_j})$, under the above restrictions.
The table is computed as follows. First, we let
\[{\textsf{DT}'}(1, x_1(1), 0)=\max_{z'}{\textsf{DT}}_{K_1}(x_1(1), x, z'),\]
where~$z'$ runs over all possible values.
Then, for each~$i$ from~$2$ to~$t$ (this applies only when $t\geq 2$), the entry ${\textsf{DT}}'(i, x_i(1), x_i(2))$ is computed by the following recursion:
\begin{align*}
\textsf{DT}'(i, x_i(1), x_i(2))={}&\max_{z'}{\textsf{DT}}_{K_i}(x_i(1), x, z')\\
&+\max_{\substack{0\leq x_{i-1}(1)\leq \abs{K_{i-1}} \\ x_i(2)-x_{i-1}(1)\geq 0}}\{\textsf{DT}'(i-1, x_{i-1}(1), x_i(2)-x_{i-1}(1))\},
\end{align*}
where~$z'$ runs over all possible values.
After all the entries are updated, we define
\[s'=\max_{\substack{0\leq x_t(1)\leq \abs{K_t}\\ x_t(1)+x_t(2)=z}}\{\textsf{DT}'(t, x_t(1), x_t(2))\}.\]
Now we are ready to update the entry for~$K$.
In particular, we define
\[\textsf{DT}_K(x, y, z)=s+s'.\]
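The inner table $\textsf{DT}'$ is a knapsack-style sweep over the children: each step distributes the investors among one more child. A minimal Python sketch (dictionary-based tables; all names are ours, and each child table is assumed already restricted to $y_i=x$ and keyed by $(x_i, z_i)$):

```python
def combine_children(child_tables, z_total):
    """Best total USW of the children subtrees when exactly z_total players
    in the union of the children invest.  child_tables[i] maps
    (x_i, z_i) -> DT_{K_i}(x_i, x, z_i); names are ours."""
    NEG = float("-inf")
    best = {0: 0.0}  # investors used so far -> best USW of processed children
    for table in child_tables:
        # pre-maximise each child entry over its own grandchildren count z'
        child_best = {}
        for (xi, zi), val in table.items():
            child_best[xi] = max(child_best.get(xi, NEG), val)
        nxt = {}
        for m, acc in best.items():
            for xi, val in child_best.items():
                key = m + xi
                nxt[key] = max(nxt.get(key, NEG), acc + val)
        best = nxt
    return best.get(z_total, NEG)
```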
After the entries of the table~$\textsf{DT}$ are computed, we return
\[\max_{x', z'}\{\textsf{DT}_K(x', 0, z')\},\]
where~$K$ is the root node and~$x'$ and~$z'$ run over all possible values.
The algorithm can be implemented in polynomial time since for each node~$K$, the corresponding table has at most~$n^3$ entries, where~$n$ is the total number of players. So, we have in total at most~$n^4$ entries, each of which can be computed in polynomial time.
\end{proof}
\begin{theorem}[*]
{\prob{ESWC}} is polynomial-time solvable if the critical clique graph of the given network is a forest.
\end{theorem}
{
\begin{proof}
The algorithm is similar to the one in the proof of Theorem~\ref{thm-uswo-critical-clique-graph-tree-poly}. In particular, we guess the ESW of the desired profile. There can be polynomially many guesses. For each guessed value~$q$, we determine if there is a profile of ESW at least~$q$, i.e., every player receives utility at least~$q$ under this profile. This can be solved using a dynamic programming algorithm.
We adopt the same notation as in the proof of Theorem~\ref{thm-uswo-critical-clique-graph-tree-poly}.
However, in the current algorithm, each entry $\textsf{DT}_K(x, y, z)$ in the dynamic tables takes only binary values.
Precisely, $\textsf{DT}_K(x, y, z)$ is supposed to be~$1$ if and only if the subgame~$\mathcal{G}|_{{\textsf{Ver}}(T_{K^{\text{P}}})}$ admits a profile~${\bf{s}}$ under which (regard $T_{K^{\text{P}}}$ as~$T$ if~$K$ is the root of~$T$)
\begin{enumerate}
\item exactly~$x$ players in~$K$ invest, i.e., $\abs{{\bf{s}}\cap K}=x$,
\item exactly~$y$ players in~$K^{\text{P}}$ invest, i.e., $\abs{{\bf{s}}\cap K^{\text{P}}}=y$,
\item exactly~$z$ players in $\bigcup_{K'\in \textsf{chd}(K)}K'$ invest, i.e., $\sum_{K'\in \textsf{chd}(K)}\abs{{\bf{s}}\cap K'}=z$ and, more importantly,
\item every player $v\in {\textsf{Ver}}(T_K)$ obtains utility at least~$q$ under this profile, i.e., $\mu(v, {\bf{s}}, \mathcal{G}|_{{\textsf{Ver}}(T_{K^{\text{P}}})})\geq q$.
\end{enumerate}
The values of entries in the tables associated to leaf nodes can be computed trivially based on the above definition. We describe how to update the remaining tables.
Let ${\textsf{DT}}_K(x, y, z)$ be the currently considered entry in a table associated to a node~$K$ in~$T$. Let $K_1, K_2, \dots, K_t$ be the children of~$K$ in~$T$. We set ${\textsf{DT}}_K(x, y, z)$ to be~$1$ if and only if the following conditions hold simultaneously.
\begin{enumerate}
\item There is a subset $K'\subseteq K$ of cardinality~$x$ such that
\begin{itemize}
\item for every $v\in K'$ it holds that $g_v(x+y+z)-c(v)\geq q$; and
\item for every $v\in K\setminus K'$ it holds that $g_v(x+y+z)\geq q$.
\end{itemize}
\item There are ${\textsf{DT}}_{K_1}(x_1, x, z_1)=1$, ${\textsf{DT}}_{K_2}(x_2, x, z_2)=1$, $\dots$, ${\textsf{DT}}_{K_t}(x_t, x, z_t)=1$ such that $\sum_{j=1}^t x_j=z$.
\end{enumerate}
We point out that both of the above two conditions can be checked in polynomial time.
Precisely, to check the first condition, we define $A=\{v\in K : g_v(x+y+z)< q\}$ and $B=\{v\in K : g_v(x+y+z)\geq q, g_v(x+y+z)-c(v)< q\}$, both of which can be computed in polynomial time. Clearly, if $A\neq \emptyset$, Condition~1 does not hold. Otherwise, if $\abs{B}> \abs{K}-x$, we also conclude that Condition~1 does not hold, because, by the definition of~$B$, no player in~$B$ may invest if it is to obtain utility at least~$q$, while only $\abs{K}-x$ players in~$K$ do not invest. If neither of the above two cases occurs, we conclude that Condition~1 holds. As a matter of fact, in this case, we can let~$K'$ be any subset of $K\setminus B$ of cardinality~$x$.
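Checking Condition 1 only requires the two sets $A$ and $B$ above; a small Python illustration (all names are ours):

```python
def condition_one_holds(K, g, c, x, y, z, q):
    """Can x players of clique K invest so that every player in K gets
    utility >= q?  Every player in K sees x + y + z investing closed
    neighbours.  Mirrors the sets A and B from the text; names are ours."""
    total = x + y + z
    A = [v for v in K if g[v](total) < q]                        # can never reach q
    B = [v for v in K if g[v](total) >= q > g[v](total) - c[v]]  # must not invest
    return not A and len(B) <= len(K) - x
```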
To check Condition~2, we use a similar dynamic programming algorithm with the associated table ${\textsf{DT}'}(i, x_i(1), x_i(2))$ in the proof of Theorem~\ref{thm-uswo-critical-clique-graph-tree-poly}.
The algorithm runs in polynomial time since there are polynomially many entries and computing the value for each entry can be done in polynomial time.
\end{proof}
}
\section{Networks with Bounded Treewidth}
In this section, we study another prevalent class of tree-like networks, namely, the networks with a constant bounded treewidth. We show that the problems studied in the paper are polynomial-time solvable in this special case.
Notice that as every clique of size~$k$ has treewidth~$k-1$, the results established in the previous section do not cover the polynomial-time solvability in this case. The other direction does not hold either, because every cycle has treewidth two, but the critical clique graph of every cycle is the cycle itself.
The following notion is due to~\cite{DBLP:journals/jal/RobertsonS86}.
A {\it{tree decomposition}} of a graph $G=(V, E)$ is a tuple $(T=(L, F),\mathcal{B})$, where~$T$ is a rooted tree with vertex set~$L$ and edge set~$F$,
and $\mathcal{B}=\{B_x \subseteq V \mid x\in L\}$ is a
collection of subsets of vertices of~$G$ such that the following three conditions are satisfied:
\begin{itemize}
\item every $v\in V$ is contained in at least one element of $\mathcal{B}$;
\item for each edge $\edge{v}{v'}\in E$, there exists at least one $B_x\in \mathcal{B}$ such that $v, v'\in B_x$; and
\item for every $v\in V$, if~$v$ is in two distinct $B_x, B_y\in \mathcal{B}$, then~$v$ is in every $B_z\in \mathcal{B}$ where~$z$ is on the unique path between~$x$ and~$y$ in~$T$.
\end{itemize}
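The three conditions can be verified mechanically; the following Python utility is our own sanity check, not part of the paper, and assumes the bags and the tree are given as dictionaries:

```python
from collections import deque

def is_tree_decomposition(vertices, edges, tree_adj, bags):
    """Verify the three conditions of a tree decomposition.
    bags: tree node -> set of graph vertices; tree_adj: tree node ->
    list of adjacent tree nodes.  (Names are ours.)"""
    # 1. every vertex appears in at least one bag
    covered = set().union(*bags.values()) if bags else set()
    if not set(vertices) <= covered:
        return False
    # 2. both endpoints of every edge lie together in some bag
    for u, v in edges:
        if not any({u, v} <= B for B in bags.values()):
            return False
    # 3. for each vertex, the tree nodes whose bags contain it are connected
    for v in vertices:
        nodes = {x for x, B in bags.items() if v in B}
        start = next(iter(nodes))
        seen, queue = {start}, deque([start])
        while queue:
            x = queue.popleft()
            for y in tree_adj[x]:
                if y in nodes and y not in seen:
                    seen.add(y)
                    queue.append(y)
        if seen != nodes:
            return False
    return True
```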
The {\it{width}} of the tree decomposition is $\max_{B\in \mathcal{B}}(|B|-1)$.
The {\it{treewidth}} of a graph~$G$, denoted by~$\omega(G)$, is the minimum width over all tree decompositions of~$G$.
The subsets in~$\mathcal{B}$ are often called {\it{bags}}. The root bag in the decomposition is the bag associated to the root of~$T$.
To avoid confusion, in the following we refer to the vertices of~$T$ as nodes.
The parent bag of a bag $B_i\in \mathcal{B}$ means the bag associated to the parent of~$i$.
A more refined notion is the so-called nice tree decomposition.
Precisely, a {\it{nice tree decomposition}} $(T, \mathcal{B})$ of a graph~$G$ is a tree decomposition of~$G$ satisfying the following conditions:
\begin{itemize}
\item every bag associated to the root or a leaf of~$T$ is empty;
\item inner nodes of~$T$ are categorized into {\it{introduce nodes, forget nodes}}, and {\it{join nodes}} such that
\begin{itemize}
\item each introduce node~$x$ has exactly one child~$y$ such that $B_y\subset B_x$ and $|B_x\setminus B_y|=1$;
\item each forget node~$x$ has exactly one child~$y$ such that $B_x\subset B_y$ and $|B_y\setminus B_x|=1$; and
\item each join node~$x$ has exactly two children~$y$ and~$z$ such that $B_x=B_y=B_z$.
\end{itemize}
\end{itemize}
For ease of exposition, we sometimes call a bag associated to a join (resp.\ forget, introduce) node a join (resp.\ forget, introduce) bag.
It follows from the definition that, in a nice tree decomposition of a graph~$G$, a vertex of~$G$ may be introduced several times but is forgotten only once.
Nice tree decompositions were introduced by Bodlaender and Kloks~\shortcite{DBLP:conf/icalp/BodlaenderK91}, and have been used in tackling many problems.
At first glance, nice tree decompositions seem very restrictive.
However, it is known that, given a tree decomposition of width~$\omega$, one can compute a nice tree decomposition of the same width in polynomial time~\cite{DBLP:books/sp/Kloks94}.
Computing the treewidth of a graph~$G$ is {\textsf{NP}}-hard~\cite{DBLP:journals/actaC/Bodlaender93}.
However, determining whether a graph has treewidth at most a given constant can be done in polynomial time and, moreover, powerful heuristic algorithms, approximation algorithms, and fixed-parameter algorithms for computing treewidth have been reported~\cite{DBLP:conf/birthday/Bodlaender12,DBLP:journals/siamcomp/BodlaenderDDFLP16,DBLP:conf/iwpec/ZandenB17}.
Hence, in the following results we assume that a nice tree decomposition of the given network is given.
\begin{theorem}[*]
{\prob{PSNEC}} is polynomial-time solvable if the treewidth of the given network is a constant.
\end{theorem}
{
\begin{proof}
Let $\mathcal{G}=(V, E, g_V, c)$ be a BNPG, where~$(V, E)$ is a network of players,~$g_V$ is a set of externality functions of players in~$V$, one for each player, and $c: V\rightarrow \mathbb{R}_{\geq 0}$ is the investment cost function. For every player $v\in V$, let $g_v: \mathbb{N}_{0}\rightarrow \mathbb{R}_{\geq 0}$ denote its externality function in~$g_V$. Let~$G$ denote the network $(V, E)$, and let $n=\abs{V}$ be the number of players. In addition, let $(T, \mathcal{B})$ be a nice-tree decomposition of~$G$ which is of polynomial size in~$n$ and of width at most~$p$ for some constant~$p$. For a node~$i$ in~$T$, let $B_i\in \mathcal{B}$ denote its associated bag in the nice tree decomposition. Moreover, let~$T_i$ denote the subtree of~$T$ rooted at~$i$, let~$G_i=G[\bigcup_{j\in \ver{T_i}} B_j]$ denote the subgraph of~$G$ induced by all vertices contained in bags associated to nodes in~$T_i$, and let~$V_i$ denote the vertex set of~$G_i$. For each nonroot bag $B_i\in \mathcal{B}$, we define~$B_i^{\text{P}}$ as the parent bag of~$B_i$. If~$B_i$ is the root bag, we define $B_i^{\text{P}}=\emptyset$.
For each bag~$B_i$ associated to a node~$i$ and each vertex~$v\in B_i$, we use~$n_i(v)$ to denote the number of neighbors of~$v$ in the subgraph~$G_i-B_i$.
In the following, we derive a dynamic programming algorithm to determine if the BNPG game $\mathcal{G}$ has a PSNE profile, and if so, the algorithm returns a PSNE profile.
For each bag~$B_i\in \mathcal{B}$, we maintain a binary dynamic table ${\textsf{DT}}_i(U, f)$ where~$U$ runs over all subsets of~$B_i$ and~$f$ runs over all functions $f: B_i\rightarrow \mathbb{N}_0$ such that~$f(u)\leq n_i(u)$ for every~$u\in B_i$.
The entry~${\textsf{DT}}_i(U, f)$ is supposed to be~$1$ if the subgame $\mathcal{G}|_{V_i}$ admits a profile~${\bf{s}}$ such that
\begin{enumerate}
\item ${\bf{s}}\cap B_i=U$;
\item no player in $V_i\setminus B_i$ has an incentive to deviate under~${\bf{s}}$; and
\item every player $v\in B_i$ has exactly~$f(v)$ investing neighbors contained in $G_i-B_i$ under~${\bf{s}}$.
\end{enumerate}
We compute the tables for the bags in a bottom-up manner, beginning from those associated to leaf nodes to the table associated to the root. Assume that ${\textsf{DT}}_i(U, f)$ is the currently considered entry. To compute the value of this entry, we distinguish the following cases.
\begin{description}
\item[Case~1: $B_i^{\text{P}}$ is a join or an introduce bag, or~$i$ is the root of~$T$]
We further distinguish between the following subcases.
\begin{description}
\item[Case~1.1:~$B_i$ is a join bag]
Let~$x$ and~$y$ be the two children of~$i$. Therefore, it holds that $B_x=B_y=B_i$. In this case, we set ${\textsf{DT}}_i(U, f)=1$ if and only if ${\textsf{DT}}_x(U, f)=1$ and ${\textsf{DT}}_y(U, f)=1$.
\item[Case~1.2:~$B_i$ is an introduce bag]
Let~$x$ be the child of~$i$, and let~$B_i\setminus B_x=\{v\}$. (In this case,~$i$ cannot be the root of~$T$.) Notice that in this case,~$v$ does not have any neighbor in $G_i-B_i$. Then, we set ${\textsf{DT}}_i(U, f)=1$ if and only if $f(v)=0$ and ${\textsf{DT}}_x(U\setminus \{v\}, f|_{\neg v})=1$.
\item[Case~1.3:~$B_i$ is a forget bag]
Let~$x$ be the child of~$i$, and let~$B_x\setminus B_i=\{v\}$. Then, we set ${\textsf{DT}}_i(U, f)=1$ if and only if there exists an entry ${\textsf{DT}}_x(U', f')=1$ such that $U'\setminus \{v\}=U$ and $f'|_{\neg v}=f$.
\end{description}
\item[Case~2: $B_i^{\text{P}}$ is a forget node]
Let $B_i\setminus B_i^{\text{P}}=\{v\}$.
We further distinguish three subcases.
\begin{description}
\item[Case~2.1: $B_i$ is a join bag]
Let~$x$ and~$y$ denote the two children of~$i$ in~$T$. Then, we set ${\textsf{DT}}_i(U, f)=1$ if and only if ${\textsf{DT}}_x(U, f)={\textsf{DT}}_y(U, f)=1$ and, moreover, $g_v(n_G[v, U]+f(v))-c(v)\geq g_v(n_G(v, U)+f(v))$ when $v\in U$ and $g_v(n_G(v, U)+f(v))\geq g_v(n_G[v, U]+f(v))-c(v)$ when $v\in B_i\setminus U$.
\item[Case~2.2: $B_i$ is an introduce bag]
Let~$x$ denote the child of~$i$ in~$T$. Let $B_i\setminus B_x=\{u\}$. Obviously,~$u$ does not have any neighbor in $G_i-B_i$. Therefore, if $f(u)>0$, we directly set ${\textsf{DT}}_i(U, f)=0$. Let us assume now that $f(u)=0$. Then, we set ${\textsf{DT}}_i(U, f)=1$ if and only if ${\textsf{DT}}_x(U\setminus \{u\}, f|_{\neg u})=1$ and, moreover, it holds that
$g_v(n_G[v, U]+f(v))-c(v)\geq g_v(n_G(v, U)+f(v))$ when $v\in U$ and
$g_v(n_G(v, U)+f(v))\geq g_v(n_G[v, U]+f(v))-c(v)$ when $v\in B_i\setminus U$.
\item[Case 2.3: $B_i$ is a forget bag]
Let~$x$ denote the child of~$i$ in~$T$, and let $B_i\setminus B_x=\{u\}$.
In this case, we set ${\textsf{DT}}_i(U, f)=1$ if and only if the following two conditions hold:
\begin{itemize}
\item there exists an entry ${\textsf{DT}}_x(U', f')=1$ such that $U'\setminus \{u\}=U$, $f'|_{\neg u}=f$ and
\item $g_v(n_G[v, U]+f(v))-c(v)\geq g_v(n_G(v, U)+f(v))$ when $v\in U$ and
$g_v(n_G(v, U)+f(v))\geq g_v(n_G[v, U]+f(v))-c(v)$ when $v\in B_i\setminus U$.
\end{itemize}
\end{description}
\end{description}
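The no-deviation checks in Cases 2.1-2.3 all have the same shape, with the $f(v)$ already-forgotten investing neighbours of the leaving player counted on both sides of the comparison; a small Python sketch of this check (names are ours):

```python
def stays_put(in_U, g_v, c_v, n_closed, n_open, f_v):
    """No-deviation check for a player v leaving a forget bag.
    n_closed = |N_G[v] ∩ U|, n_open = |N_G(v) ∩ U|, f_v = number of
    investing neighbours of v already forgotten.  (Names are ours.)"""
    if in_U:  # v invests: compare with dropping the investment
        return g_v(n_closed + f_v) - c_v >= g_v(n_open + f_v)
    # v does not invest: compare with deviating to invest
    return g_v(n_open + f_v) >= g_v(n_closed + f_v) - c_v
```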
Recall that the root bag is empty. Therefore, there is only one entry in the table associated to the root. The game~$\mathcal{G}$ admits a PSNE profile if the only entry in the table of the root takes the value~$1$. A PSNE can be constructed using standard backtracking technique of dynamic programming algorithms if such a profile exists.
Finally, we analyze the running time of the algorithm. First, there are in total~$O(n)$ bags, where~$n$ denotes the number of players in the given game. Let~$k$ be the width of the given nice tree decomposition of the network. For each bag~$B$, the table associated to~$B$ has at most $2^{|B|}\cdot n^{|B|}\leq 2^{k+1}\cdot n^{k+1}$ entries, each of which can be computed in polynomial time. Therefore, the running time of the algorithm is polynomial if~$k$ is a constant.
\end{proof}
}
\onlyfull{
A cluster is a disjoint union of cliques. Clusters are exactly the~$P_3$-free graphs, and are also known as 2-leaf powers.
The {\it{distance to a cluster}} of a graph is the minimum number of vertices that need to be deleted to obtain a cluster.
We extend the result for clique in the following way.
\begin{theorem}
With respect to the distance to a cluster, all problems become {{\sf{FPT}}}.
\end{theorem}
{\bf{Remark.}} The above three NP-hardness results hold when the network has diameter at most 3.
}
\begin{theorem}[*]
{\prob{USWC}} is polynomial-time solvable if the treewidth of the given network is a constant.
\end{theorem}
{
\begin{proof}
Let $\mathcal{G}=(V, E, g_V, c)$ be a BNPG, where~$(V, E)$ is a network of players,~$g_V$ is a set of externality functions of players in~$V$, one for each player, and $c: V\rightarrow \mathbb{R}_{\geq 0}$ is the cost function. For every player $v\in V$, let $g_v: \mathbb{N}_{0}\rightarrow \mathbb{R}_{\geq 0}$ denote its externality function in~$g_V$. Let~$G$ denote the network $(V, E)$, and let $n=\abs{V}$ be the number of players. In addition, let $(T, \mathcal{B})$ be a nice-tree decomposition of~$G$ which is of polynomial size in~$n$ and of width at most~$p$ for some constant~$p$. For a node~$i$ in~$T$, let $B_i\in \mathcal{B}$ denote its associated bag. Moreover, let~$T_i$ denote the subtree of~$T$ rooted at~$i$, let~$G_i=G[\bigcup_{j\in \ver{T_i}} B_j]$ denote the subgraph of~$G$ induced by all vertices contained in bags associated to nodes in~$T_i$, and let~$V_i$ denote the vertex set of~$G_i$. For each nonroot bag $B_i\in \mathcal{B}$, we define~$B_i^{\text{P}}$ as the parent bag of~$B_i$. If~$B_i$ is the root bag, we define $B_i^{\text{P}}=\emptyset$.
For each bag~$B_i$ associated to a node~$i$ and each vertex~$v\in B_i$, we use~$n_i(v)$ to denote the number of neighbors of~$v$ in the subgraph~$G_i-B_i$.
In the following, we derive a dynamic programming algorithm to compute a profile of~$\mathcal{G}$ with the maximum possible USW.
For each bag~$B_i\in \mathcal{B}$, we maintain a dynamic table ${\textsf{DT}}_i(U, f)$ where~$U$ runs over all subsets of~$B_i$ and~$f$ runs over all functions $f: B_i\rightarrow \mathbb{N}_0$ such that~$f(u)\leq n_i(u)$ for every~$u\in B_i$.
A profile~${\bf{s}}$ of the subgame $\mathcal{G}|_{V_i}$ is consistent with the tuple $(U, f)$ if among the players in~$B_i$ exactly those in~$U$ invest and, moreover, each $v\in B_i$ has exactly~$f(v)$ neighbors in $G_i- B_i$ who invest, i.e., ${\bf{s}}\cap B_i=U$ and $n_G(v, {\bf{s}}\setminus U)=f(v)$.
The entry ${\textsf{DT}}_i(U, f)$ is defined as
\[\max_{{\bf{s}}} \left(\sum_{v\in V_i\setminus B_i^{\text{P}}} \mu(v, \mathcal{G}|_{V_i}, {\bf{s}})\right),\]
where~${\bf{s}}$ runs over all profiles of the subgame $\mathcal{G}|_{V_i}$ that are consistent with $(U, f)$. If the subgame does not admit a profile consistent with $(U, f)$, we set ${\textsf{DT}}_i(U, f)=-\infty$.
To compute the values of the entries, we distinguish the following cases. Let ${\textsf{DT}}_i(U, f)$ denote the currently considered entry.
\begin{description}
\item[Case~1:~$B_i^{\text{P}}$ is a join or an introduce bag, or~$i$ is the root of~$T$]
We consider the following subcases.
\begin{description}
\item[Case 1.1:~$B_i$ is a join bag]
Let~$x$ and~$y$ denote the two children of~$i$ in the tree~$T$. Then, we define
\[{\textsf{DT}}_i(U, f)={\textsf{DT}}_x(U, f)+{\textsf{DT}}_y(U, f).\]
\item[Case~1.2: $B_i$ is an introduce bag]
Let~$x$ be the child of~$i$ in~$T$, and let $B_i\setminus B_x=\{v\}$. (In this case,~$i$ cannot be the root of~$T$.) Observe that in this case~$v$ does not have any neighbors in $G_i-B_i$. Hence, if $f(v)>0$, we let ${\textsf{DT}}_i(U, f)=-\infty$. Otherwise, it must be that $f(v)=0$, and we let
\[{\textsf{DT}}_i(U, f)={\textsf{DT}}_x(U\setminus \{v\}, f|_{\neg v}).\]
\item[Case 1.3: $B_i$ is a forget bag]
Let~$x$ be the child of~$i$ in~$T$ and let $B_x\setminus B_i=\{v\}$. Then, we define
\[{\textsf{DT}}_i(U, f)=\max_{\substack{U'\subseteq B_x~\text{s.t.}~U'\setminus \{v\}=U,\\ f':B_x\rightarrow \mathbb{N}_{0}~\text{s.t.}~f'|_{\neg v}=f}} \{{\textsf{DT}}_x(U', f')\}.\]
\end{description}
\item[Case~2:~$B_i^{\text{P}}$ is a forget bag]
Let $B_i\setminus B_i^{\text{P}}=\{v\}$. We consider the following subcases.
\begin{description}
\item[Case 2.1: $B_i$ is a join bag]
Let~$x$ and~$y$ denote the two children of~$i$ in~$T$. We define
\[{\textsf{DT}}_i(U, f)={\textsf{DT}}_x(U, f)+{\textsf{DT}}_y(U, f)+\pi(v),\]
where
\begin{equation*}
\pi(v)=
\begin{cases}
g_v(f(v)+n_G[v, U])-c(v) & v\in U\\
g_v(f(v)+n_G(v, U)) & v\in B_i\setminus U\\
\end{cases}
\end{equation*}
\item[Case 2.2: $B_i$ is an introduce bag]
Let~$x$ denote the child of~$i$ in~$T$. In addition, let $B_i\setminus B_x=\{u\}$. In this case,~$u$ does not have any neighbor in $G_i-B_i$, and hence if~$f(u)>0$ the game restricted to~$V_i$ does not have a profile which is consistent with $(U, f)$. Therefore, if $f(u)>0$ we directly set ${\textsf{DT}}_i(U, f)=-\infty$. Let us assume that $f(u)=0$. Then, we define
\[{\textsf{DT}}_i(U, f)={\textsf{DT}}_x(U\setminus \{u\}, f|_{\neg u})+\pi(v),\]
where~$\pi(v)$ is as defined in Case~2.1.
\item[Case 2.3: $B_i$ is a forget bag]
Let~$x$ denote the child of~$i$ in~$T$. In addition, let $B_x\setminus B_i=\{u\}$.
Let~$\pi(v)$ be as defined in Case~2.1. Then, we define
\end{description}
\[{\textsf{DT}}_i(U, f)=\pi(v)+\max_{\substack{U'\subseteq B_x~\text{s.t.}~U'\setminus \{u\}=U\\ f': B_x\rightarrow \mathbb{N}_{0}~\text{s.t.}~f'|_{\neg u}=f}}\{{\textsf{DT}}_x(U', f')\}.\]
\end{description}
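The contribution $\pi(v)$ of Case 2.1, reused in Cases 2.2 and 2.3, can be phrased as a small helper (a sketch; names are ours):

```python
def pi(v, U, g_v, c_v, f_v, n_closed, n_open):
    """Utility contribution of the player v leaving the forget bag above B_i.
    n_closed = n_G[v, U], n_open = n_G(v, U), f_v = f(v).  (Names are ours.)"""
    if v in U:  # v invests
        return g_v(f_v + n_closed) - c_v
    return g_v(f_v + n_open)
```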
The algorithm returns ${\textsf{DT}}_r(\emptyset, \emptyset)$, where~$r$ is the root. The dynamic programming algorithm runs in polynomial time as there are in total polynomially many entries and computing the value of each entry takes polynomial time.
\end{proof}
}
\begin{theorem}
{\prob{ESWC}} is polynomial-time solvable if the treewidth of the given network is a constant.
\end{theorem}
\begin{proof}
Let $\mathcal{G}=(V, E, g_V, c)$ be a BNPG, where~$(V, E)$ is a network of players,~$g_V$ is a set of externality functions of players in~$V$, one for each player, and $c: V\rightarrow \mathbb{R}_{\geq 0}$ is the cost function. For every player $v\in V$, let $g_v: \mathbb{N}_{0}\rightarrow \mathbb{R}_{\geq 0}$ denote its externality function in~$g_V$. Let~$G$ denote the network $(V, E)$, and let $n=\abs{V}$ be the number of players. In addition, let $(T, \mathcal{B})$ be a nice-tree decomposition of~$G$ which is of polynomial size in~$n$ and of width at most~$p$ for some constant~$p$. For a node~$i$ in~$T$, let $B_i\in \mathcal{B}$ denote its associated bag. Moreover, let~$T_i$ denote the subtree of~$T$ rooted at~$i$, and let~$G_i=G[\bigcup_{j\in \ver{T_i}} B_j]$ denote the subgraph of~$G$ induced by all vertices contained in bags associated to nodes in~$T_i$. For each nonroot bag $B_i\in \mathcal{B}$, we define~$B_i^{\text{P}}$ as the parent bag of~$B_i$. If~$B_i$ is the root bag, we define $B_i^{\text{P}}=\emptyset$.
For each bag~$B_i\in \mathcal{B}$, where $i\in \ver{T}$, and each vertex~$v\in B_i$, we use~$n_i(v)$ to denote the number of neighbors of~$v$ in the subgraph~$G_i-B_i$.
We derive an algorithm as follows.
The algorithm first guesses the ESW of the desired profile.
Note that we need only to consider at most $2n\cdot (n+1)$ possible values (the utility of every player has at most~$2(n+1)$ possible values and there are~$n$ players).
For each guessed value~$q$ of the ESW, we solve the problem of determining if~$\mathcal{G}$ admits a profile of ESW at least~$q$. This problem can be solved by the following dynamic programming algorithm running in polynomial time.
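The outer structure of the algorithm is thus a maximisation over polynomially many guessed thresholds, with the dynamic program abstracted as a feasibility predicate; sketched in Python (names are ours):

```python
def best_egalitarian_welfare(candidates, admits_esw_at_least):
    """Return the largest candidate threshold q for which the dynamic
    program (abstracted here as the predicate admits_esw_at_least)
    reports a feasible profile, or None if no candidate is feasible.
    Names are ours."""
    feasible = [q for q in candidates if admits_esw_at_least(q)]
    return max(feasible) if feasible else None
```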
For each bag~$B_i\in \mathcal{B}$, we maintain a binary dynamic table $\textsf{DT}_i(U, f)$, where~$U$ runs over all subsets of~$B_i$ and~$f$ runs over all functions $f: B_i\rightarrow \mathbb{N}_0$ such that~$f(u)\leq n_i(u)$ for every~$u\in B_i$. Each entry ${\textsf{DT}}_i(U, f)$ is supposed to be~$1$ if and only if~$\mathcal{G}$ admits a profile such that
\begin{enumerate}
\item[(1)] among the players in~$B_i$, exactly those in~$U$ invest;
\item[(2)] every $v\in B_i$ has exactly~$f(v)$ neighbors in $G_i- B_i$ who invest; and
\item[(3)] everyone in~$B_i$ but not in~$B_i^{\text{P}}$ obtains utility at least~$q$, i.e.,
\begin{enumerate}
\item[(3.1)] for every $v\in U\setminus B_i^{\text{P}}$, it holds that $g_v(n_G[v, U]+f(v))-c(v)\geq q$; and
\item[(3.2)] for every $v\in (B_i\setminus U)\setminus B_i^{\text{P}}$, it holds that $g_v(n_G(v, U)+f(v))\geq q$.
\end{enumerate}
\end{enumerate}
We do not require players in~$B_i$ who also appear in~$B_i^{\text{P}}$ to reach the threshold utility~$q$ yet, because at this point we do not have complete information on the number of their neighbors who invest. These players will be treated when they leave their corresponding forget bags.
The tables are computed in a bottom-up manner, from those maintained for the leaf nodes up to that for the root node in~$T$. Assume that~$i$ is the currently considered node. We show how to compute $\textsf{DT}_i(U, f)$ as follows. Recall that every leaf bag is empty. So, if~$i$ is a leaf node, the table for~$i$ contains only one entry with the two parameters being an empty set and an empty function. We let this entry contain the value~$1$. Let us assume now that~$i$ is not a leaf node. We distinguish the following cases.
\begin{description}
\item[Case:~$B_i$ is a join bag]
Let~$x$ and~$y$ denote the two children of~$i$ in the tree~$T$.
If~$i$ is the root, or~$B_i^{\text{P}}$ is a join bag or an introduce bag, we set ${\textsf{DT}}_i(U, f)=1$ if and only if ${\textsf{DT}}_x(U, f)={\textsf{DT}}_y(U, f)=1$.
If~$B_i^{\text{P}}$ is a forget bag, let $B_i\setminus B_i^{\text{P}}=\{v\}$. Then, we set ${\textsf{DT}}_i(U, f)=1$ if and only if ${\textsf{DT}}_x(U, f)={\textsf{DT}}_y(U, f)=1$ and one of the following holds:
\begin{itemize}
\item $v\in U$ and $g_v(n_G[v, U]+f(v))-c(v)\geq q$; or
\item $v\in B_i\setminus U$ and $g_{v}(n_G(v, U)+f(v))\geq q$.
\end{itemize}
\item[Case:~$B_i$ is an introduce bag]
Let~$x$ be the child of~$i$ in~$T$.
Let $B_i\setminus B_x=\{v\}$. Note that~$v$ does not have any neighbor in $G_i-B_i$. Thus, if $f(v)>0$, we directly set ${\textsf{DT}}_i(U, f)=0$. We assume now that $f(v)=0$. We consider the following cases. First, if~$B_i^{\text{P}}$ is a join bag or an introduce bag, then ${\textsf{DT}}_i(U, f)=1$ if and only if ${\textsf{DT}}_x(U\setminus \{v\}, f|_{\neg \{v\}})=1$.
If~$B_i^{\text{P}}$ is a forget bag, let $B_i\setminus B_i^{\text{P}}=\{u\}$.
Then, we set ${\textsf{DT}}_i(U, f)=1$ if and only if ${\textsf{DT}}_x(U\setminus \{v\}, f|_{\neg \{v\}})=1$ and one of the following holds (notice that when $u=v$ we have $f(u)=0$):
\begin{itemize}
\item $u\in U$ and $g_u(n_G[u, U]+f(u))-c(u)\geq q$;
\item $u\in B_i\setminus U$ and $g_u(n_G(u, U)+f(u))\geq q$.
\end{itemize}
\item[Case:~$B_i$ is a forget bag]
Let~$x$ be the child of~$i$ in~$T$.
Let $B_x\setminus B_i=\{v\}$. If~$i$ is the root, or~$B_i^{\text{P}}$ is a join or an introduce bag, then~${\textsf{DT}}_i(U, f)=1$ if and only if there is a ${\textsf{DT}}_x(U', f')=1$ such that $U'\cap B_i=U$ and $f'|_{\neg \{v\}}=f$.
If~$B_i^{\text{P}}$ is a forget bag, let $B_i\setminus B_i^{\text{P}}=\{u\}$.
Then, we set ${\textsf{DT}}_i(U, f)=1$ if and only if there is a ${\textsf{DT}}_x(U', f')=1$ such that $U'\cap B_i=U$, $f'|_{\neg \{v\}}=f$ and one of the following conditions holds:
\begin{itemize}
\item $u\in U$ and $g_u(n_G[u, U]+f(u))-c(u)\geq q$;
\item $u\in B_i\setminus U$ and $g_u(n_G(u, U)+f(u))\geq q$.
\end{itemize}
\end{description}
After computing the values of all tables, we conclude that the game~$\mathcal{G}$ admits a profile with ESW at least~$q$ if and only if ${\textsf{DT}}_r(\emptyset, \emptyset)=1$ where~$r$ is the root of~$T$.
To see that the algorithm runs in polynomial time, recall first that, as described above, the value of each entry in all tables can be computed in polynomial time. Moreover, there are polynomially many nodes in the tree~$T$, and for each node~$i$, the associated table~${\textsf{DT}}_i$ contains at most $2^{\abs{B_i}} \cdot (\max_{u\in B_i} n_i(u))^{\abs{B_i}}\leq 2^{p+1} \cdot n^{p+1}$ entries. The running time then follows from the fact that~$p$ is a constant.
For the whole algorithm, we first find the maximum possible value~$q$ such that~$\mathcal{G}$ admits a profile of ESW at least~$q$, then using standard backtracking technique, a profile with ESW~$q$ can be computed in polynomial time based on the above dynamic programming algorithm.
\end{proof}
\end{document} |
\begin{document}
\begin{doublespace}
\def\one{{\bf 1}}
\def\n{{\bf n}} \def\Z {{\mathbb Z}}
\def\frX{{\mathfrak X}}
\def\sA {{\cal A}} \def\sB {{\cal B}} \def\sC {{\cal C}}
\def\sD {{\cal D}} \def\sE {{\cal E}} \def\sF {{\cal F}}
\def\sG {{\cal G}} \def\sH {{\cal H}} \def\sI {{\cal I}}
\def\sJ {{\cal J}} \def\sK {{\cal K}} \def\sL {{\cal L}}
\def\sM {{\cal M}} \def\sN {{\cal N}} \def\sO {{\cal O}}
\def\sP {{\cal P}} \def\sQ {{\cal Q}} \def\sR {{\cal R}}
\def\sS {{\cal S}} \def\sT {{\cal T}} \def\sU {{\cal U}}
\def\sV {{\cal V}} \def\sW {{\cal W}} \def\sX {{\cal X}}
\def\sY {{\cal Y}} \def\sZ {{\cal Z}}
\def\bA {{\mathbb A}} \def\bB {{\mathbb B}} \def\bC {{\mathbb C}}
\def\bD {{\mathbb D}} \def\bE {{\mathbb E}} \def\bF {{\mathbb F}}
\def\bG {{\mathbb G}} \def\bH {{\mathbb H}} \def\bI {{\mathbb I}}
\def\bJ {{\mathbb J}} \def\bK {{\mathbb K}} \def\bL {{\mathbb L}}
\def\bM {{\mathbb M}} \def\bN {{\mathbb N}} \def\bO {{\mathbb O}}
\def\bP {{\mathbb P}} \def\bQ {{\mathbb Q}} \def\bR {{\mathbb R}}
\def\bS {{\mathbb S}} \def\bT {{\mathbb T}} \def\bU {{\mathbb U}}
\def\bV {{\mathbb V}} \def\bW {{\mathbb W}} \def\bX {{\mathbb X}}
\def\bY {{\mathbb Y}} \def\bZ {{\mathbb Z}}
\def\R {{\mathbb R}} \def\RR {{\mathbb R}} \def\H {{\mathbb H}}
\newcommand{\expr}[1]{\left( #1 \right)}
\newcommand{\cl}[1]{\overline{#1}}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{defn}[thm]{Definition}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{corollary}[thm]{Corollary}
\newtheorem{remark}[thm]{Remark}
\newtheorem{example}[thm]{Example}
\numberwithin{equation}{section}
\def\eps{\varepsilon}
\def\qed{{ $\Box$ }}
\def\cN{{\mathcal N}}
\def\cA{{\mathcal A}}
\def\cM{{\mathcal M}}
\def\cB{{\mathcal B}}
\def\cC{{\mathcal C}}
\def\cL{{\mathcal L}}
\def\cD{{\mathcal D}}
\def\cF{{\mathcal F}}
\def\cE{{\mathcal E}}
\def\cQ{{\mathcal Q}}
\def\cS{{\mathcal S}}
\def\L{{\bf L}}
\def\K{{\bf K}}
\def\S{{\bf S}}
\def\A{{\bf A}}
\def\E{{\mathbb E}}
\def\F{{\bf F}}
\def\P{{\mathbb P}}
\def\N{{\mathbb N}}
\def\wh{\widehat}
\def\wt{\widetilde}
\def\pf{\noindent{\bf Proof.} }
\def\pff{\noindent{\bf Proof} }
\def\Cap{\mathrm{Cap}}
\title{\Large\bf Martin boundary of unbounded sets for purely discontinuous Feller processes}
\author{{\bf Panki Kim}\thanks{This work was supported by the National Research Foundation of
Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2015R1A4A1041675)
}
\quad {\bf Renming Song\thanks{Research supported in part by a grant from
the Simons Foundation (208236)}} \quad and
\quad {\bf Zoran Vondra\v{c}ek}
\thanks{Research supported in part by the Croatian Science Foundation under the project 3526}
}
\date{}
\maketitle
\begin{abstract}
In this paper, we study the Martin kernels of general open sets associated with inaccessible
points for a large class of purely discontinuous Feller processes in metric measure spaces.
\end{abstract}
\noindent {\bf AMS 2010 Mathematics Subject Classification}: Primary 60J50, 31C40; Secondary 31C35, 60J45, 60J75.
\noindent {\bf Keywords and phrases:} Martin boundary, Martin kernel,
purely discontinuous Feller process, L\'evy process
\section{Introduction and setup}\label{s:intro}
This paper is a companion of \cite{KSVp2} and here we continue our study of the Martin boundary
of Greenian open sets with respect to purely discontinuous Feller processes in metric measure spaces.
In \cite{KSVp2}, we have shown that
(1) if $D$ is a Greenian open set and $z_0\in \partial D$ is accessible from $D$, then the Martin kernel of
$D$ associated with $z_0$ is a minimal harmonic function; (2) if $D$ is an unbounded Greenian open set and
$\infty$ is accessible from $D$, then the Martin kernel of $D$ associated with $\infty$ is a minimal harmonic function.
The goal of this paper is to study the Martin kernels of $D$ associated with inaccessible boundary points
of $D$, including $\infty$.
The background and recent progress on the Martin boundary is explained in the companion paper
\cite{KSVp2}.
Martin kernels of bounded open sets $D$ associated with both accessible and inaccessible
boundary points of $D$ have been studied in the recent preprint \cite{JK}. In this paper,
we are mainly concerned with the Martin kernels of unbounded open sets associated with $\infty$
when $\infty$ is inaccessible from $D$. For completeness, we also spell out
some of the details of the argument for dealing with the Martin kernels of unbounded open sets associated with
inaccessible boundary points of $D$.
To accomplish our task of studying the Martin kernels of general open sets,
we follow the ideas of \cite{BKK, KL} and first study the oscillation reduction of ratios of positive harmonic functions.
In the case of isotropic $\alpha$-stable processes, the oscillation reduction at infinity and Martin kernel
associated with $\infty$ follow easily from the corresponding results at finite boundary points
by using the sphere inversion and Kelvin transform.
For the general processes dealt with in this paper, the Kelvin transform method does not apply.
Now we describe the setup of this paper which is the same as that of \cite{KSVp2}
and then give the main results of this paper.
Let $(\X, d, m)$ be a metric measure space with a countable base
such that all bounded closed sets are compact and the measure $m$ has full support. For $x\in \X$ and $r>0$, let $B(x,r)$ denote the ball centered at $x$ with radius $r$. Let $R_0\in (0,\infty]$ be the localization radius such that $\X\setminus B(x,2r){\bf n}} \def\Z {{\mathbb Z}eq \emptyset$ for all $x\in \X$ and all $r<R_0$.
Let $X=(X_t, \sF_t, {\mathbb P}_x)$ be a Hunt process on $\X$. We will assume the following:
\noindent
\textbf{Assumption A:}
$X$ is a Hunt process admitting a strong dual process $\widehat{X}$ with respect to the measure $m$ and $\widehat{X}$ is also a Hunt process.
The transition semigroups $(P_t)$ and $(\widehat{P}_t)$ of $X$ and $\widehat{X}$ are both Feller and strong Feller. Every semi-polar set of $X$ is polar.
In the sequel, all objects related to the dual process $\widehat{X}$ will be denoted by a hat.
We first recall that a set is polar (semi-polar, respectively) for $X$ if and only if it is polar (semi-polar, respectively) for $\widehat X$.
If $D$ is an open subset of $\X$ and $\tau_D=\inf\{t>0:\, X_t\notin D\}$ is the exit time from $D$, the killed process $X^D$ is defined by $X_t^D=X_t$ if $t<\tau_D$ and $X_t^D=\partial$ if $t\ge \tau_D$,
where $\partial$ is an extra point added to $\X$.
Then, under assumption {\bf A},
$X^D$ admits a unique (possibly infinite) Green function (potential kernel) $G_D(x,y)$ such that for every non-negative Borel function $f$,
$$
G_D f(x):={\mathbb E}_x \int_0^{\tau_D}f(X_t)dt=\int_D G_D(x,y)f(y)\, m(dy)\, ,
$$
and $G_D(x,y)=\widehat{G}_D(y,x)$, $x,y\in D$, with $\widehat{G}_D(y,x)$ the Green function of $\widehat{X}^D$.
It is assumed throughout the paper that $G_D(x,y)=0$ for $(x,y)\in (D\times D)^c$.
We also note that the killed process $X^D$ is strongly Feller, see e.g.~the first part of the proof of the theorem on \cite[pp.~68--69]{Chu}.
Let $\partial D$ denote the boundary of the open set $D$ in the topology of $\X$.
Recall that $z\in \partial D$ is said to be regular for $X$ if ${\mathbb P}_z(\tau_D =0)=1$ and irregular otherwise.
We will denote the set of regular points of $\partial D$ for $X$ by $D^{\mathrm{reg}}$ (and the set of regular points of $\partial D$ for $\widehat X$ by $\widehat D^{\mathrm{reg}}$).
It is well known that the set of irregular points is semi-polar, hence polar under {\bf A}.
Suppose that $D$ is Greenian, that is, the Green function $G_D(x,y)$ is finite away from the diagonal.
Under this assumption, the killed process $X^D$ is transient (and strongly Feller). In particular, for every bounded Borel function $f$ on $D$, $G_D f$ is continuous.
The process $X$, being a Hunt process, admits a L\'evy system $(J,H)$ where $J(x,dy)$
is a kernel on $\X$ (called the L\'evy kernel of $X$),
and $H=(H_t)_{t\ge 0}$ is a positive continuous additive functional of $X$. We assume that $H_t=t$
so that for every function $f:\X\times \X\to [0,\infty)$ vanishing on the diagonal
and every stopping time $T$,
$$
{\mathbb E}_x \sum_{0<s\le T} f(X_{s-}, X_s)={\mathbb E}_x \int_0^T f(X_s,y)J(X_s,dy) ds\, .
$$
Let $D\subset \X$ be a Greenian open set. By replacing $T$ with $\tau_D$ in the displayed formula above and taking $f(x,y)={\bf 1}_D(x){\bf 1}_A(y)$ with $A\subset \overline{D}^c$, we get that
\begin{equation}\label{e:exit-distribution}
{\mathbb P}_x(X_{\tau_D}\in A, \tau_D <\zeta)={\mathbb E}_x \int_0^{\tau_D} J(X_s, A) ds= \int_D G_D(x,y)J(y,A)m(dy)\,
\end{equation}
where $\zeta$ is the life time of $X$.
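Let us briefly indicate why \eqref{e:exit-distribution} holds. With $f(x,y)={\bf 1}_D(x){\bf 1}_A(y)$ and $T=\tau_D$, the sum in the L\'evy system formula has at most one non-zero term: for $s<\tau_D$ we have $X_s\in D$, so ${\bf 1}_A(X_s)=0$, while a jump from $D$ into $A\subset \overline{D}^c$ at time $s=\tau_D$ is precisely the exit jump from $D$. Hence
$$
\sum_{0<s\le \tau_D} {\bf 1}_D(X_{s-}){\bf 1}_A(X_s)={\bf 1}_{\{X_{\tau_D}\in A,\, \tau_D<\zeta\}}
$$
on the event $\{X_{\tau_D-}\in D\}$ (the exceptional event that $X$ jumps into $A$ from $\partial D$ at time $\tau_D$ can be seen to be ${\mathbb P}_x$-negligible), and taking ${\mathbb E}_x$ of both sides yields \eqref{e:exit-distribution}.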
Similar formulae hold for $\widehat{X}$ and $\widehat{J}(x,dy)m(dx)=J(y,dx)m(dy)$.
\noindent
\textbf{Assumption C:} The L\'evy kernels of $X$ and $\widehat{X}$ have the form $J(x,dy)=j(x,y)m(dy)$, $\widehat{J}(x,dy)=\widehat{j}(x,y)m(dy)$, where $j(x,y)=\widehat{j}(y,x)>0$ for all $x,y\in \X$, $x\neq y$.
We will always assume that Assumptions \textbf{A} and \textbf{C} hold true.
In the next assumption, $z_0$ is a point in $\X$ and $R\le R_0$.
\noindent
\textbf{Assumption C1}$(z_0, R)$:
For all $0<r_1<r_2<R$, there exists a constant $c=c(z_0, r_2/r_1)>0$ such that for all $x\in B(z_0,r_1)$ and all $y\in \X\setminus B(z_0, r_2)$,
$$
c^{-1} j(z_0,y)\le j(x,y)\le c j(z_0,y), \qquad c^{-1} \widehat{j}(z_0,y)\le \widehat{j}(x,y)\le c \widehat{j}(z_0,y).
$$
In the next assumption we require that the localization radius $R_0=\infty$ and that $D$ is unbounded.
Again, $z_0$ is a point in $\X$.
\noindent
\textbf{Assumption C2}$(z_0, R)$:
For all $R\le r_1<r_2< \infty$, there exists a constant $c=c(z_0, r_2/r_1)>0$ such that for all $x\in B(z_0,r_1)$ and all $y\in \X\setminus B(z_0, r_2)$,
$$
c^{-1} j(z_0,y)\le j(x,y)\le c j(z_0,y), \qquad c^{-1} \widehat{j}(z_0,y)\le \widehat{j}(x,y)\le c \widehat{j}(z_0,y).
$$
We \emph{define} the Poisson kernel of $X$ on an open set $D\subset \X$ by
\begin{equation*}
P_D(x,z)=\int_D G_D(x,y)j(y,z) m(dy), \qquad x\in D, z\in D^c .
\end{equation*}
By \eqref{e:exit-distribution}, we see that $P_D(x,\cdot)$ is the density of the exit distribution of $X$ from $D$
restricted to $\overline{D}^c$:
$$
{\mathbb P}_x(X_{\tau_D}\in A, \tau_D <\zeta)=\int_A P_D(x,z) m(dz), \qquad A\subset \overline{D}^c .
$$
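For orientation, consider the isotropic $\alpha$-stable process on ${\mathbb R}^d$, $\alpha\in (0,2)$, for which $j(y,z)$ is a constant multiple of $|y-z|^{-d-\alpha}$. In this case, for a ball the definition above recovers the classical formula of Blumenthal, Getoor and Ray:
$$
P_{B(0,r)}(x,z)=C(d,\alpha)\left[\frac{r^2-|x|^2}{|z|^2-r^2}\right]^{\alpha/2}|x-z|^{-d}\, ,\qquad |x|<r<|z|\, .
$$
Moreover, since the $\alpha$-stable process exits a ball by a jump almost surely, $P_{B(0,r)}(x,\cdot)$ is then the density of the full exit distribution.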
Recall that $f:\X\to [0,\infty)$ is regular harmonic in $D$ with respect to $X$ if
$$
f(x)={\mathbb E}_x[f(X_{\tau_D}), \tau_D<\zeta]\, , \quad \textrm{for all }x\in D\, ,
$$
and it is harmonic in $D$ with respect to $X$ if for every relatively compact open $U\subset \overline{U}\subset D$,
$$
f(x)={\mathbb E}_x[f(X_{\tau_U}), \tau_U<\zeta]\, , \quad \textrm{for all }x\in U\, ,
$$
Recall also that $f:D\to [0,\infty)$ is harmonic in $D$ with respect to $X^D$ if for every relatively compact open $U\subset \overline{U}\subset D$,
$$
f(x)={\mathbb E}_x[f(X^D_{\tau_U}), \tau_U < \zeta]\, , \quad \textrm{for all }x\in U\, .
$$
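We note that, by the strong Markov property applied at $\tau_U$, every function which is regular harmonic in $D$ with respect to $X$ is also harmonic in $D$ with respect to $X$: for every relatively compact open $U\subset \overline{U}\subset D$ and $x\in U$,
$$
f(x)={\mathbb E}_x[f(X_{\tau_D}), \tau_D<\zeta]
={\mathbb E}_x\big[{\mathbb E}_{X_{\tau_U}}[f(X_{\tau_D}), \tau_D<\zeta], \tau_U<\zeta\big]
={\mathbb E}_x[f(X_{\tau_U}), \tau_U<\zeta]\, .
$$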
The next pair of assumptions is about an approximate factorization of
positive harmonic functions. This approximate factorization plays a crucial role in
proving the oscillation reduction.
The first one is an approximate factorization of harmonic functions at a finite boundary point.
\noindent
\textbf{Assumption F1}$(z_0, R)$:
Let $z_0\in \X$ and $R\le R_0$. For any $\frac{1}{2} < a < 1$, there exists $C(a)=C(z_0, R, a)\ge 1$ such that for
every $r\in (0, R)$, every open set $D\subset B(z_0,r)$, every nonnegative function $f$ on $\X$
which is regular harmonic in $D$ with respect to $X$ and vanishes in
$B(z_0, r) \cap ( \overline{D}^c \cup D^{\mathrm{reg}})$,
and all $x\in D\cap B(z_0,r/8)$\,,
\begin{eqnarray}\label{e:af-1}
\lefteqn{C(a)^{-1}{\mathbb E}_x[\tau_{D}] \int_{\overline{B}(z_0,ar/2)^c} j(z_0,y) f(y)m(dy) }\nonumber \\
&&\le f(x) \le C(a){\mathbb E}_x[\tau_{D}]\int_{\overline{B}(z_0,ar/2)^c} j(z_0,y) f(y)m(dy).
\end{eqnarray}
In the second assumption we require that the localization radius $R_0=\infty$ and that $D$ is unbounded.
\noindent
\textbf{Assumption F2}$(z_0, R)$:
Let $z_0\in \X$ and $R>0$. For any $1 < a < 2$, there exists $C(a)=C(z_0, R, a)\ge 1$ such that for
every $r\ge R$, every open set $D\subset \overline{B}(z_0,r)^c$,
every nonnegative function $f$ on $\X$ which is regular harmonic in $D$
with respect to $X$ and vanishes
on
$\overline{B}(z_0, r)^c \cap ( \overline{D}^c \cup D^{\mathrm{reg}})$,
and all $x\in D\cap \overline{B}(z_0,8r)^c$,
\begin{eqnarray}\label{e:af-2}
\lefteqn{C(a)^{-1}\, P_{D} (x,z_0) \int_{B(z_0, 2ar)} f(z)m(dz)}\nonumber\\
&&\le f(x) \le
C(a)\, P_{D} (x,z_0) \int_{B(z_0, 2ar)} f(z)m(dz).
\end{eqnarray}
Let $D\subset \X$ be an open set. A point $z\in \partial D$ is said to be accessible from $D$ with respect to $X$ if
\begin{equation}\label{e:accessible-finite}
P_D(x,z)=\int_D G_D(x,w)j(w,z) m(dw) = \infty \quad \text{ for all } x \in D\, ,
\end{equation}
and inaccessible otherwise.
In case $D$ is unbounded we say that $\infty$ is accessible from $D$ with respect to $X$ if
\begin{equation}\label{e:accessible-finite2}
{\mathbb E}_x \tau_D =\int_D G_D(x,w) m(dw)= \infty \quad \text{ for all } x \in D
\end{equation}
and inaccessible otherwise.
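To illustrate these notions, consider the isotropic $\alpha$-stable process on ${\mathbb R}^d$. If $D$ is a bounded $C^{1,1}$ open set, then for fixed $x\in D$ the Green function satisfies $G_D(x,w)\asymp \delta_D(w)^{\alpha/2}$ for $w$ near the boundary, so that for every $z\in \partial D$,
$$
P_D(x,z)\ge c\int_{D\cap B(z,\varepsilon)} \delta_D(w)^{\alpha/2}\, |w-z|^{-d-\alpha}\, dw=\infty\, ,
$$
i.e.~every boundary point of $D$ is accessible. On the other hand, for the unbounded slab $D=\{x\in {\mathbb R}^d:\, |x_1|<1\}$ one has ${\mathbb E}_x\tau_D<\infty$, since $\tau_D$ is the exit time of the first coordinate process (a one-dimensional $\alpha$-stable process) from $(-1,1)$; hence $\infty$ is inaccessible from $D$.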
The notion of accessible and inaccessible points was introduced in \cite{BKuK}.
In \cite{KSVp2}, we have discussed the oscillation reduction and Martin boundary points at accessible points, and
showed that the Martin kernel associated with an accessible point is a minimal harmonic function.
As in \cite{KSVp2},
the main tool in studying the Martin kernel associated with inaccessible
points is the oscillation reduction at inaccessible points. To prove the oscillation reduction
at inaccessible points,
we need to assume one of the following additional conditions
on the asymptotic behavior of the L\'evy kernel:
\noindent
\textbf{Assumption E1}$(z_0, R)$: For every $r \in (0, R)$,
$$
\lim_{d(z_0, y) \to 0}\sup_{z: d(z_0, z)>r}\frac{j(z, z_0)}{j( z, y)}=\lim_{d(y, z_0) \to 0}\inf_{z:d(z_0, z)>r}\frac{j(z, z_0)}{j(z, y)}=1.
$$
\noindent
\textbf{Assumption E2}$(z_0, R)$: For every $r>R$,
$$
\lim_{d(z_0, z) \to \infty} \sup_{y: d(z_0, y) < r}
\frac{j(z, z_0)}{j(z, y)}=\lim_{d(z_0, z) \to \infty} \inf_{y: d(z_0, y) < r}
\frac{j(z, z_0)}{j(z, y)}
=1.
$$
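Both assumptions are easily checked for the isotropic $\alpha$-stable process, where $j(z,y)=c(d,\alpha)|z-y|^{-d-\alpha}$: in this case
$$
\frac{j(z, z_0)}{j(z, y)}=\left(\frac{|z-y|}{|z-z_0|}\right)^{d+\alpha}\qquad \text{and}\qquad
1-\frac{|y-z_0|}{|z-z_0|}\le \frac{|z-y|}{|z-z_0|}\le 1+\frac{|y-z_0|}{|z-z_0|}\, ,
$$
so the ratio tends to $1$ uniformly, both as $d(y,z_0)\to 0$ with $d(z,z_0)>r$ (giving {\bf E1}) and as $d(z,z_0)\to \infty$ with $d(y,z_0)<r$ (giving {\bf E2}).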
Combining Theorems \ref{t:oscillation-reduction-yI} and \ref{t:reduction} below for inaccessible points with the results in \cite{KSVp2} for accessible ones, we have the following,
which is the first main result of this paper.
\begin{thm}\label{t:main-mb0}
Let $D\subset \X$ be an open set.
(a) Suppose that $z_0\in \partial D$.
Assume that there exists $R\le R_0$ such that
{\bf C1}$(z_0, R)$ and {\bf E1}$(z_0, R)$ hold,
and that $\widehat{X}$ satisfies \textbf{F1}$(z_0, R)$.
Let $r \le R$ and let $f_1$ and $f_2$ be nonnegative functions on $\X$ which are regular harmonic in $D\cap {B}(z_0, r)$ with respect to $\widehat{X}$ and vanish on
$B(z_0, r) \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}})$.
Then the limit
$$
\lim_{D\ni x\to z_0}\frac{f_1(x)}{f_2(x)}
$$
exists and is finite.
\noindent
(b)
Suppose that $R_0=\infty$ and $D$ is an unbounded subset of $\X$.
Assume that there is a point $z_0\in\X$ such that
{\bf C2}$(z_0, R)$ and {\bf E2}$(z_0, R)$ hold,
and that $\widehat{X}$ satisfies \textbf{F2}$(z_0, R)$ for some $R>0$.
Let $r > R$ and let $f_1$ and $f_2$ be nonnegative functions on $\X$ which are regular harmonic in $D\cap \overline{B}(z_0, r)^c$ with respect to $\widehat{X}$ and vanish on
$\overline{B}(z_0, r)^c \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}}) $.
Then the limit
$$
\lim_{D\ni x\to \infty}\frac{f_1(x)}{f_2(x)}
$$
exists and is finite.
\end{thm}
For $D\subset \X$, let $\partial_M D$ denote the Martin boundary of $D$ with respect to $X^D$ in the sense of Kunita-Watanabe \cite{KW}, see Section 3 for more details.
A point $w\in \partial_M D$ is said to be minimal if the Martin kernel $M_D(\cdot, w)$ is a minimal
harmonic function with respect to $X^D$. We will use $\partial_m D$ to denote the
minimal Martin boundary of $D$ with respect to $X^D$.
A point $w\in \partial_M D$ is said to be a \emph{finite Martin boundary point}
if there exists a bounded (with respect to the metric $d$) sequence $(y_n)_{n\ge 1}\subset D$
converging to $w$ in the Martin topology.
A point $w\in \partial_M D$ is said to be an \emph{infinite Martin boundary point}
if there exists an unbounded (with respect to the metric $d$) sequence $(y_n)_{n\ge 1}\subset D$
converging to $w$ in the Martin topology. We note that these two definitions do not rule out the possibility that a point $w\in \partial_M D$ is at the same time a finite and an infinite Martin boundary point. We will show in Corollary \ref{c:finite-not-infinite}(a) that under appropriate and natural assumptions this cannot happen.
A point $w\in \partial_MD$ is said to be associated
with $z_0\in \partial D$ if there is a sequence $(y_n)_{n\ge 1}\subset D$ converging to $w$
in the Martin topology and to $z_0$ in the topology of $\X$. The set of Martin
boundary points associated with $z_0$ is denoted by $\partial_M^{z_0} D$.
A point $w\in \partial_MD$ is said to be associated with $\infty$ if $w$ is an infinite Martin boundary point.
The set of Martin boundary points associated with $\infty$ is denoted by $\partial_M^{\infty} D$.
$\partial^f_M D$ and $\partial^f_m D$ will be used to denote the finite part of the Martin
boundary and minimal boundary respectively.
Note that $\partial_M^{\infty} D$ is the set of infinite Martin boundary points.
Recall that we denote the set of regular points of $\partial D$ for $X$ by $D^{\mathrm{reg}}$.
Here is our final assumption.
\noindent
\textbf{Assumption G}:
$\lim_{D\ni x\to z}G_D(x,y)=0$ for every $z\in D^{\mathrm{reg}}$ and every $y\in D$.
\noindent
From Theorem \ref{t:main-mb0} and the results in \cite{KSVp2}, we have the following.
\begin{thm}\label{t:main-mb3}
Let $D\subset \X$ be an open set.
(a) Suppose that $z_0\in \partial D$.
Assume that there exists $R\le R_0$ such that {\bf C1}$(z_0, R)$
and {\bf E1}$(z_0, R)$ hold, and that
$\widehat{X}$ satisfies {\bf F1}$(z_0, R)$.
Then there is only one Martin boundary point associated with $z_0$.
\noindent
(b) Assume further that
{\bf G} holds,
${X}$ satisfies {\bf F1}$(z_0, R)$, and that for all $r\in (0,R]$,
\begin{align}\label{e:nGassup1}
\sup_{x\in D\cap B(z_0,r/2)} \sup_{y \in \X\setminus B(z_0, r)} \max(G_D(x, y), \widehat{G}_D(x, y))=:c(r) <\infty,
\end{align}
and in case of unbounded $D$, for $r \in (0, R]$,
$$
\lim_{x \to \infty} G_D(x, y)=0 \quad \text{for all } y \in D \cap B(z_0, r)\, .
$$
Then the Martin boundary point associated with $z_0\in \partial D$ is minimal if and only if $z_0$ is accessible from $D$ with respect to $X$.
\end{thm}
\begin{corollary}\label{c:main-mb3}
Suppose that the assumptions of Theorem \ref{t:main-mb3}(b) are satisfied for all $z_0\in \partial D$ (with $c(r)$ in \eqref{e:nGassup1} independent of $z_0$). Suppose
further that, for any inaccessible point $z_0\in \partial D$, $\lim_{D \ni x\to z_0}j(x, z_0)=\infty$.
\noindent
(a) Then the finite part of the Martin boundary $\partial_M D$ can be identified
with $\partial D$.
\noindent
(b) If $D$ is bounded, then $\partial D$ and $\partial_M D$ are homeomorphic.
\end{corollary}
\begin{thm}\label{t:main-mb4}
(a) Suppose that $R_0=\infty$ and $D$ is an unbounded open subset of $\X$.
If there is a point $z_0\in \X$ such that {\bf C2}$(z_0, R)$
and {\bf E2}$(z_0, R)$ hold,
and $\widehat{X}$ satisfies {\bf F2}$(z_0, R)$,
then there is only one Martin boundary point associated with $\infty$.
\noindent
(b) Assume further that
{\bf G} holds, ${X}$ satisfies {\bf F2}$(z_0, R)$, and that
for all $r\ge R$,
\begin{align}\label{e:nGassup1-infty}
\sup_{x\in D\cap B(z_0,r/2)} \sup_{y \in \X\setminus B(z_0, r)} \max(G_D(x, y), \widehat{G}_D(x, y))=:c(r) <\infty
\end{align}
and
\begin{align}
\label{e:G111}
\lim_{x \to \infty} G_D(x, y)=0 \quad \text{for all } y \in D.
\end{align}
Then the Martin boundary point
associated with $\infty$ is minimal if and only if $\infty$ is accessible from $D$.
\end{thm}
\begin{corollary}\label{c:finite-not-infinite} Let $R_0=\infty$ and $D\subset \X$ be unbounded. Suppose that the assumptions of Theorem \ref{t:main-mb3}(b) are satisfied for all $z_0\in \partial D$ (with $c(r)$ in \eqref{e:nGassup1} independent of $z_0$) and that the assumptions of
Theorem \ref{t:main-mb4}(a) and (b) are satisfied. Then
\noindent
(a) $\partial_M^f D\cap \partial_M^{\infty}D=\emptyset$.
\noindent
(b) Suppose that, for any inaccessible point $z_0\in \partial D$, $\lim_{D \ni x\to z_0}j(x, z_0)=\infty$. Then the Martin boundary $\partial_M D$ is homeomorphic with the one-point compactification of $\partial D$.
\end{corollary}
In case when $X$ is an isotropic stable process, Theorems \ref{t:main-mb3} and \ref{t:main-mb4} were proved in \cite{BKK}.
In Section \ref{s:inaccessible} we provide the proof of Theorem \ref{t:main-mb0} for inaccessible points. Section \ref{s:t34} contains
the proofs of Theorems \ref{t:main-mb3} and \ref{t:main-mb4}. In Section \ref{s:discussion} we discuss some L\'evy processes in ${\mathbb R}^d$ satisfying our assumptions.
We will use the following conventions in this paper.
$c, c_0,
c_1, c_2, \cdots$ stand for constants
whose values are unimportant and which may change from one
appearance to another. All constants are positive finite numbers.
The labeling of the constants $c_0, c_1, c_2, \cdots$ starts anew in
the statement of each result. We will use ``$:=$''
to denote a definition, which is read as ``is defined to be''.
We denote $a \wedge b := \min \{ a, b\}$, $a \vee b := \max \{ a, b\}$.
Further, $f(t) \sim g(t)$, $t \to 0$ ($f(t) \sim g(t)$, $t \to
\infty$, respectively) means $ \lim_{t \to 0} f(t)/g(t) = 1$
($\lim_{t \to \infty} f(t)/g(t) = 1$, respectively).
Throughout the paper we will adopt the convention that
$X_{\zeta}=\partial$ and $u(\partial)=0$ for every function $u$.
\section{Oscillation reductions for inaccessible points}\label{s:inaccessible}
To handle the oscillation reductions at inaccessible points,
in this section we will assume, in addition to the corresponding assumptions in \cite{KSVp2},
that {\bf E1}$(z_0, R)$ ({\bf E2}$(z_0, R)$ respectively) holds
when we deal with finite boundary points (respectively infinity).
\subsection{Infinity}\label{ss:inaccessible-infty}
Throughout this subsection we will assume that
$R_0=\infty$ and $D\subset \X$ is an unbounded open set.
We will deal with oscillation reduction at $\infty$ when $\infty$ is inaccessible from $D$ with respect to $X$.
We further assume that there exists a point $z_0\in\X$
such that {\bf E2}$(z_0, R)$ and {\bf C2}$(z_0, R)$ hold, and that
$\widehat{X}$ satisfies \textbf{F2}$(z_0, R)$ for some $R>0$.
We will fix $z_0$ and $R$ and use the notation $B_r=B(z_0, r)$.
The next lemma is a direct consequence of assumption {\bf E2}$(z_0, R)$.
\begin{lemma}\label{l:levy-density-I}
For any $q\ge 2$, $r\ge R$ and $\varepsilon>0$,
there exists $p=p(\varepsilon,q,r)>16q$
such that for every $z\in \overline{B}_{pr/8}^c$ and every $y\in \overline{B}_{qr}$, it holds that
\begin{equation}\label{e:levy-density-I}
(1+\varepsilon)^{-1} < \frac{j(z,y)}{j(z, z_0)} < 1+\varepsilon .
\end{equation}
\end{lemma}
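Indeed, applying {\bf E2}$(z_0, R)$ with $2qr>R$ in place of $r$, both $\sup_{y\in \overline{B}_{qr}} j(z,y)/j(z,z_0)$ and $\inf_{y\in \overline{B}_{qr}} j(z,y)/j(z,z_0)$ converge to $1$ as $d(z_0,z)\to \infty$, so it suffices to choose $p=p(\varepsilon,q,r)>16q$ so large that
$$
(1+\varepsilon)^{-1}<\inf_{y\in \overline{B}_{qr}}\frac{j(z,y)}{j(z,z_0)}\le \sup_{y\in \overline{B}_{qr}}\frac{j(z,y)}{j(z,z_0)}<1+\varepsilon \qquad \text{for all } z\in \overline{B}_{pr/8}^c\, .
$$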
In the remainder of this subsection, we assume that $r\ge R$, and that $D$ is an open set such that $D\subset \overline{B}^c_r$. For $p>q>0$, let
$$
D^p=D\cap \overline{B}_p^c,\qquad D^{p,q}=D^q\setminus D^p.
$$
For $p>q>1$ and a nonnegative function $f$ on $\X$ define
\begin{eqnarray}\label{f^pr}
f^{pr,qr}(x)&=&
{\mathbb E}_{x} \left[f(\widehat X_{\widehat \tau_{D^{pr}}}): \widehat X_{\widehat \tau_{D^{pr}}} \in D^{pr,qr}\right], \nonumber
\\
\widetilde{f}^{pr,qr}(x)&=&
{\mathbb E}_{x} \left[f(\widehat X_{\widehat \tau_{D^{pr}}}): \widehat X_{\widehat \tau_{D^{pr}}} \in (D\setminus D^{qr})\cup \overline{B}_r\right].
\end{eqnarray}
\begin{lemma}\label{l:two-sided-estimate-I}
Suppose that $r\ge R$, $D\subset \overline{B}_r^c$ is an open set
and $f$ is a nonnegative function on $ \X$ which is regular harmonic in $D$ with respect to $\widehat X$
and vanishes on
$\overline{B}_r^c \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}})$.
Let
$q\ge 2$, $\varepsilon >0$, and choose $p=p(\varepsilon, q,r)$ as in Lemma \ref{l:levy-density-I}.
Then for every $x\in D^{pr/8}$,
\begin{equation}\label{el:two-sided-estimate-I}
(1+\varepsilon)^{-1} \widehat P_{D^{pr/8}}(x,z_0) \int_{ \overline{B}_{qr}} f(y) m(dy)
\le \widetilde{f}^{pr/8,qr}(x) \le (1+\varepsilon)
\widehat P_{D^{pr/8}}(x,z_0) \int_{ \overline{B}_{qr}} f(y) m(dy).
\end{equation}
\end{lemma}
\noindent{\bf Proof.} Let $x\in D^{pr/8}$. Using Lemma \ref{l:levy-density-I} in the second inequality below, we get
\begin{eqnarray*}
\widetilde{f}^{pr/8,qr}(x) &= & \int_{D\setminus D^{qr}} \widehat P_{D^{pr/8}}(x,y)f(y)m(dy) +
\int_{\overline{B}_r} \widehat P_{D^{pr/8}}(x,y)f(y)m(dy) \\
&=&\int_{D\setminus D^{qr}}\int_{D^{pr/8}} \widehat G_{D^{pr/8}}(x,z) \widehat j(z,y)m(dz) f(y)m(dy) \\
& & +\int_{\overline{B}_r}\int_{D^{pr/8}}\widehat G_{D^{pr/8}}(x,z) \widehat j(z,y)m(dz) f(y)m(dy) \\
&\le &(1+\varepsilon)\int_{D\setminus D^{qr}}\int_{D^{pr/8}} \widehat G_{D^{pr/8}}(x,z) \widehat j(z, z_0)m(dz) f(y)m(dy) \\
& & +(1+\varepsilon)\int_{\overline{B}_r}\int_{D^{pr/8}}\widehat G_{D^{pr/8}}(x,z) \widehat j(z, z_0)m(dz) f(y)m(dy) \\
&=&(1+\varepsilon)\left(\int_{D\setminus D^{qr}} \widehat P_{D^{pr/8}}(x,z_0)f(y)m(dy) +
\int_{\overline{B}_r}\widehat P_{D^{pr/8}}(x,z_0)f(y)m(dy)\right) \\
&=&(1+\varepsilon)\widehat P_{D^{pr/8}}(x,z_0) \int_{ \overline{B}_{qr}} f(y) m(dy).
\end{eqnarray*}
This proves the right-hand side inequality. The left-hand side inequality can be proved in the same way. $\Box$
In the remainder of this subsection, we assume that
$r \ge R$, $D\subset \overline{B}_r^c$ is an open set, and $f_1$ and $f_2$ are nonnegative functions on $\X$ which are regular harmonic in $D$ with respect to $\widehat{X}$ and vanish on
$\overline{B}_r^c \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}})$.
Note that $f_i=f^{pr,qr}_i+ \widetilde{f}^{pr,qr}_i$.
\begin{lemma}\label{l:oscillation-assumption-1-I}
Let $r\ge R$, $q>2$, $\varepsilon>0$, and choose $p=p(\varepsilon,q,r)$ as in
Lemma \ref{l:levy-density-I}. If
\begin{equation}\label{e:assumption-1-12-I}
\int_{ D^{3pr/8,qr}} f_i(y) m(dy)\le \varepsilon \int_{ \overline{B}_{qr}} f_i(y) m(dy), \quad i=1,2,
\end{equation}
then, for all $x\in D^{pr}$,
\begin{equation}\label{e:estimate-of-quotient-I}
\frac{(1+\varepsilon)^{-1}\int_{ \overline{B}_{qr}} f_1(y) m(dy)}{(C\varepsilon +1+\varepsilon)\int_{ \overline{B}_{qr}} f_2(y) m(dy)}\le \frac{f_1(x)}{f_2(x)} \le \frac{(C\varepsilon +1+\varepsilon)\int_{ \overline{B}_{qr}} f_1(y) m(dy)}{(1+\varepsilon)^{-1}\int_{ \overline{B}_{qr}} f_2(y) m(dy)}.
\end{equation}
\end{lemma}
\noindent{\bf Proof.} Assume that $x\in D^{pr}$.
Since ${f_i}^{pr/8,qr}$ is regular harmonic in $D^{pr/8}$ with respect to $\widehat{X}$ and vanishes
on
$\overline{B}_{pr/8}^c \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}})$,
using {\bf F2}$(z_0, R)$ (with $a=3/2$),
we have
$$
f_i^{pr/8,qr}(x) \le C \widehat{P}_{D^{pr/8}}(x,z_0)
\int_{ B_{3pr/8}} f_i^{pr/8,qr}(y) m(dy).
$$
Since $f_i^{pr/8,qr}(y)\le f_i(y)$
and $f_i^{pr/8,qr}(y)=0$ on $(D^{qr})^c$ except possibly at irregular points of $D$, by using that $m$ does not charge polar sets and applying \eqref{e:assumption-1-12-I} we have
$$
f_i^{pr/8,qr}(x) \le C \widehat{P}_{D^{pr/8}}(x,z_0)
\int_{ D^{3pr/8,qr}} f_i(y) m(dy) \le C \varepsilon \widehat{P}_{D^{pr/8}}(x,z_0)
\int_{ \overline{B}_{qr}} f_i(y) m(dy).
$$
By this and Lemma \ref{l:two-sided-estimate-I} we have
\begin{eqnarray*}
f_i(x)&=&f_i^{pr/8,qr}(x)+\widetilde{f}_i^{pr/8,qr}(x)\\
&\le & C\varepsilon \widehat{P}_{D^{pr/8}}(x,z_0)\int_{ \overline{B}_{qr}} f_i(y) m(dy)
+ (1+\varepsilon) \widehat{P}_{D^{pr/8}}(x,z_0)\int_{ \overline{B}_{qr}} f_i(y) m(dy)\\
&=&(C\varepsilon +1+\varepsilon)\widehat{P}_{D^{pr/8}}(x,z_0)\int_{ \overline{B}_{qr}} f_i(y) m(dy)
\end{eqnarray*}
and
$$
f_i(x)\ge \widetilde{f}_i^{pr/8,qr}(x)\ge (1+\varepsilon)^{-1}\widehat{P}_{D^{pr/8}}(x,z_0)\int_{ \overline{B}_{qr}} f_i(y) m(dy).
$$
Therefore, \eqref{e:estimate-of-quotient-I} holds. $\Box$
Suppose that $\infty$ is inaccessible from $D$ with respect to $X$. Then there exists a point $x_0\in D$ such that
\begin{align}\label{e:pdRinfty}
\int_D G_D(x_0,y) m(dy) ={\mathbb E}_{x_0}\tau_D <\infty.
\end{align}
In the next result we fix this point $x_0$.
\begin{thm}\label{t:oscillation-reduction-yI}
Suppose that $\infty$ is inaccessible from $D$ with respect to $X$.
Let $r > 2d(z_0,x_0) \vee R$.
For any two nonnegative functions $f_1$, $f_2$ on $\X$ which are regular harmonic in $D^r$ with respect to $\widehat{X}$ and vanish on
$\overline{B}_r^c \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}})$
we have
\begin{equation}\label{e:inaccessible-limit-I}
\lim_{D\ni x\to \infty}\frac{f_1(x)}{f_2(x)}=
\frac{\int_{ \X} f_1(y) m(dy)}{\int_{ \X} f_2(y) m(dy)}.
\end{equation}
\end{thm}
\noindent{\bf Proof.}
First note that
$$
\int_{B_{3r}} G_D(x_0, z)m(dz) \ge {\mathbb E}_{x_0}\big[\tau_{D \cap B_{3r}}\big]>0.
$$
By using {\bf F2}$(z_0, R)$ we see that
$\int_{{B}_{8r}}f_i(y)m(dy)<\infty$.
The function $v\mapsto G_D(x_0,v)$ is regular harmonic in $D^r$ with respect to $\widehat{X}$ and vanishes on
$\overline{B}_{r}^c\setminus {D^{r}}$ (so vanishes on
$\overline{B}_r^c \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}})$).
By using {\bf F2}$(z_0, R)$ for $\widehat{X}$, we have for $i=1, 2$,
\begin{align*}
&\int_{D^{8r}} f_i(y)m(dy) \le C
\int_{B_{3r}} f_i(z)m(dz)
\int_{D^{8r}} \widehat{P}_{D^r} (y,z_0) m(dy)\\
&=C\int_{B_{3r}} G_D(x_0, z)m(dz) \int_{D^{8r}} \widehat{P}_{D^r} (y,z_0) m(dy)
\frac{\int_{B_{3r}} f_i(z)m(dz)}{\int_{B_{3r}} G_D(x_0, z)m(dz)}
\\
& \le C^2
\int_{D^{8r}} G_D(x_0, y) m(dy)
\frac{
\int_{B_{3r}} f_i(z)m(dz)}{\int_{B_{3r}} G_D(x_0, z)m(dz)} \\
& \le C^2
\int_{D} G_D(x_0, y) m(dy)
\frac{
\int_{B_{3r}} f_i(z)m(dz)}{{\mathbb E}_{x_0}\big[\tau_{D \cap B_{3r}}\big]} < \infty.
\end{align*}
Hence $\int_{\X} f_i(y) m(dy)<\infty$, $i=1, 2$.
Let $q_0=2$ and $\varepsilon >0$. For $j=0,1,\dots$, inductively define the
sequence $q_{j+1}=3p(\varepsilon,q_j,r)/8 >6 q_j$ using Lemma \ref{l:levy-density-I}. Then
for $i=1, 2$,
$$
\sum_{j=0}^{\infty} \int_{ D^{q_{j+1}r, q_j r}} f_i(y) m(dy) =
\int_{ D^{q_0 r}} f_i(y) m(dy)<\infty .
$$
If $\int_{ D^{q_{j+1}r, q_j r}} f_i(y) m(dy)>\varepsilon
\int_{ \overline{B}_{q_j r}} f_i(y) m(dy)$
for all $j\ge 0$, then
$$
\sum_{j=0}^{\infty}
\int_{ D^{q_{j+1}r, q_j r}} f_i(y) m(dy)
\ge \varepsilon
\sum_{j=0}^{\infty}
\int_{ \overline{B}_{q_j r}} f_i(y) m(dy)
\ge \varepsilon \sum_{j=0}^{\infty}
\int_{ \overline{B}_{q_0 r}} f_i(y) m(dy)=\infty .
$$
Hence, there exists $k\ge 0$ such that
$\int_{ D^{q_{k+1}r, q_k r}} f_i(y) m(dy) \le \varepsilon
\int_{ \overline{B}_{q_k r}} f_i(y) m(dy)$.
Moreover,
since $\lim_{j\to\infty}\int_{ D^{q_{j+1}r, q_j r}} f_i(y) m(dy)=0$, there exists $j_0\ge 0$
such that $\int_{ D^{q_{j+1}r, q_j r}} f_i(y) m(dy)\le
\int_{ D^{q_{k+1}r, q_k r}} f_i(y) m(dy)
$ for all $j\ge j_0$. Hence for all $j\ge j_0\vee k$ we have
$$
\int_{ D^{q_{j+1}r, q_j r}} f_i(y) m(dy)\le \int_{ D^{q_{k+1}r, q_k r}} f_i(y) m(dy) \le \varepsilon
\int_{ \overline{B}_{q_k r}} f_i(y) m(dy) \le
\varepsilon \int_{ \overline{B}_{q_j r}} f_i(y) m(dy).
$$
Therefore, there exists $j_0\in {\mathbb N}$ such that for all $j\ge j_0 \vee k$,
$$
\int_{ D^{q_{j+1}r, q_j r}} f_i(y) m(dy)\le
\varepsilon \int_{ \overline{B}_{q_j r}} f_i(y) m(dy), \qquad i=1,2,
$$
and
$$
(1+\varepsilon)^{-1}
\int_{ \X} f_i(y) m(dy)
<
\int_{ \overline{B}_{q_j r}} f_i(y) m(dy)
<(1+\varepsilon) \int_{ \X} f_i(y) m(dy),\qquad i=1,2.
$$
We see that the assumptions of Lemma \ref{l:oscillation-assumption-1-I} are satisfied and conclude that \eqref{e:estimate-of-quotient-I} holds true: for $x\in D^{8q_{j+1}r/3}$,
$$
\frac{(1+\varepsilon)^{-1} \int_{ \overline{B}_{q_j r}} f_1(y) m(dy)}{(C\varepsilon +1+\varepsilon) \int_{ \overline{B}_{q_j r}} f_2(y) m(dy)} \le
\frac{f_1(x)}{f_2(x)} \le \frac{(C\varepsilon +1+\varepsilon)\int_{ \overline{B}_{q_j r}} f_1(y) m(dy)}{(1+\varepsilon)^{-1}
\int_{ \overline{B}_{q_j r}} f_2(y) m(dy)}.
$$
It follows that for $x\in D^{8q_{j+1}r/3}$,
$$
\frac{(1+\varepsilon)^{-2}\int_{ \X} f_1(y) m(dy)}{(C\varepsilon +1+\varepsilon)(1+\varepsilon)
\int_{ \X} f_2(y) m(dy)}\le \frac{f_1(x)}{f_2(x)} \le \frac{(C\varepsilon +1+\varepsilon)
(1+\varepsilon)\int_{ \X} f_1(y) m(dy)}{(1+\varepsilon)^{-2}\int_{ \X} f_2(y) m(dy)}.
$$
Since $\varepsilon >0$ was arbitrary, we conclude that \eqref{e:inaccessible-limit-I} holds. $\Box$
\subsection{Finite boundary point}\label{ss:inaccessible-finite}
In this subsection, we deal with oscillation reduction at an inaccessible boundary point $z_0\in \X$ of an open set $D$.
Throughout the subsection, we assume that there exists $R\le R_0$
such that {\bf E1}$(z_0, R)$ and {\bf C1}$(z_0, R)$ hold,
and that $\widehat{X}$ satisfies \textbf{F1}$(z_0, R)$. We will fix this $z_0$.
Again, for simplicity, we use notation $B_r=B(z_0,r)$, $r>0$.
First, the next lemma is a direct consequence of assumption {\bf E1}$(z_0, R)$.
\begin{lemma}\label{l:levy-density}
For any $q\in (0,1/2]$, $r\in (0,R]$ and $\varepsilon>0$,
there exists $p=p(\varepsilon,q,r)<q/16$ such that for every $z\in B_{8pr}$ and every $y\in B_{qr}^c$,
\begin{equation}\label{e:levy-density}
(1+\varepsilon)^{-1} < \frac{j(z,y)}{j(z_0, y)} < 1+\varepsilon .
\end{equation}
\end{lemma}
Let $D\subset \X$ be an open set.
For $0<p<q$, let $D_p=D\cap B_p$ and $D_{p,q}=D_q\setminus D_p$.
For a function $f$ on $\X$, and $0<p<q$, let
\begin{equation}\label{e:Lambda}
\widehat{\Lambda}_p(f):=\int_{\overline{B}_p^c} \widehat j(z_0,y) f(y) m(dy),\qquad \widehat{\Lambda}_{p,q}(f):=\int_{D_{p,q}}\widehat j(z_0,y) f(y) m(dy).
\end{equation}
For $0<p<q<1$ and $r\in (0,R]$, define
\begin{eqnarray}\label{f_pr}
f_{pr,qr}(x)&=&
{\mathbb E}_{x} \left[f(\widehat X_{\widehat \tau_{D_{pr}}}): \widehat X_{\widehat \tau_{D_{pr}}} \in D_{pr,qr}\right],\nonumber \\
\widetilde{f}_{pr,qr}(x)&=&
{\mathbb E}_{x} \left[f(\widehat X_{\widehat \tau_{D_{pr}}}): \widehat X_{\widehat \tau_{D_{pr}}} \in (D\setminus D_{qr})\cup B_r^c\right].
\end{eqnarray}
\begin{lemma}\label{l:two-sided-estimate}
Let $q\in (0,1/2]$, $r\in (0,R]$, $\varepsilon >0$, and choose $p=p(\varepsilon, q,r)$
as in Lemma \ref{l:levy-density}.
Then for every open set $D\subset B_r=B(z_0,r)$, every nonnegative function $f$ on $\X$ which is regular harmonic in $D$ with respect to $\widehat{X}$
and vanishes on
$B_r \cap ( \overline{D}^c \cup\widehat D^{\mathrm{reg}})$,
and
every $x\in D_{8pr}$,
\begin{equation}\label{el:two-sided-estimate}
(1+\varepsilon)^{-1}({\mathbb E}_x \widehat \tau_{D_{8pr}}) \widehat{\Lambda}_{qr}(f)
\le \widetilde{f}_{8pr,qr}(x) \le (1+\varepsilon)({\mathbb E}_x \widehat \tau_{D_{8pr}}) \widehat{\Lambda}_{qr}(f).
\end{equation}
\end{lemma}
\noindent{\bf Proof.} Let $x\in D_{8pr}$. Using Lemma \ref{l:levy-density} in the second inequality below, we get
\begin{eqnarray*}
\widetilde{f}_{8pr,qr}(x) &= & \int_{D\setminus D_{qr}}\widehat P_{D_{8pr}}(x,y)f(y)m(dy) +\int_{B_r^c}\widehat P_{D_{8pr}}(x,y)f(y)m(dy) \\
&=&\int_{D\setminus D_{qr}}\int_{D_{8pr}}\widehat G_{D_{8pr}}(x,z) \widehat j(z,y)m(dz) f(y)m(dy) \\
& & +\int_{B_r^c}\int_{D_{8pr}} \widehat G_{D_{8pr}}(x,z)\widehat j(z,y)m(dz) f(y)m(dy) \\
& \le & (1+\varepsilon) ({\mathbb E}_x \widehat \tau_{D_{8pr}}) \left(\int_{D\setminus D_{qr}} \widehat j(z_0, y)f(y)m(dy) +\int_{B_r^c}\widehat j(z_0, y)f(y)m(dy)\right)\\
&=& (1+\varepsilon) ({\mathbb E}_x \widehat \tau_{D_{8pr}}) \int_{B_{qr}^c}\widehat j(z_0, y) f(y)m(dy)\\
&=&(1+\varepsilon)( {\mathbb E}_x \widehat \tau_{D_{8pr}}) \widehat{\Lambda}_{qr}(f).
\end{eqnarray*}
This proves the right-hand side inequality. The left-hand side inequality can be
proved in the same way. {
$\Box$
}
In the remainder of this subsection, we assume $r\in (0,R]$, $D\subset B_r$ is an open set and $z_0\in \partial D$.
We also assume that $f_1$ and $f_2$ are nonnegative functions on $\X$ which are regular harmonic in $D$ with respect
to the process $\widehat{X}$, and vanish on
$B_r \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}})$.
Note that $f_i=(f_i)_{pr,qr}+(\widetilde{f}_i)_{pr,qr}$.
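This identity is simply the decomposition of $f_i$ according to where $\widehat{X}$ lands upon exiting $D_{pr}$: since $f_i$ is regular harmonic in $D$ with respect to $\widehat{X}$ and vanishes on $B_r \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}})$, the exit position lies, up to a set on which $f_i$ vanishes, in $D_{pr,qr}\cup (D\setminus D_{qr})\cup B_r^c$, so that
$$
f_i(x)={\mathbb E}_x \big[f_i(\widehat X_{\widehat \tau_{D_{pr}}})\big]
={\mathbb E}_{x} \left[f_i(\widehat X_{\widehat \tau_{D_{pr}}}): \widehat X_{\widehat \tau_{D_{pr}}} \in D_{pr,qr}\right]
+{\mathbb E}_{x} \left[f_i(\widehat X_{\widehat \tau_{D_{pr}}}): \widehat X_{\widehat \tau_{D_{pr}}} \in (D\setminus D_{qr})\cup B_r^c\right].
$$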
\begin{lemma}\label{l:oscillation-assumption-1}
Let $r\in (0,R]$, $q\in (0,1/2]$, $\varepsilon>0$, and let $p=p(\varepsilon,q,r)$ be as
in Lemma \ref{l:levy-density}. If
\begin{equation}\label{e:assumption-1-12}
\widehat{\Lambda}_{8pr/3,qr}(f_i)\le \varepsilon \widehat{\Lambda}_{qr}(f_i), \quad i=1,2,
\end{equation}
then for $x \in D_{pr}$,
\begin{equation}\label{e:estimate-of-quotient}
\frac{(1+\varepsilon)^{-1}\widehat{\Lambda}_{qr}(f_1)}{(C\varepsilon +1+\varepsilon)\widehat{\Lambda}_{qr}(f_2)}\le \frac{f_1(x)}{f_2(x)} \le \frac{(C\varepsilon +1+\varepsilon)\widehat{\Lambda}_{qr}(f_1)}{(1+\varepsilon)^{-1}\widehat{\Lambda}_{qr}(f_2)}.
\end{equation}
\end{lemma}
\noindent{\bf Proof.} Assume that $x\in D_{pr}$.
Since $(f_i)_{8pr,qr}$ is regular harmonic in $D_{8pr}$ with respect to $\widehat X$ and vanishes
on
$B_{8pr} \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}})$,
using {\bf F1}$(z_0, R)$ (with $a=2/3$),
we have
$$
(f_i)_{8pr,qr}(x) \le C ({\mathbb E}_x \widehat\tau_{D_{8pr}})
\widehat{\Lambda}_{8pr/3}( (f_i)_{8pr,qr}).
$$
Since $ (f_i)_{8pr,qr}(y)\le f_i(y)$
and $ (f_i)_{8pr,qr}(y)=0$ on $D_{qr}^c$ except possibly at irregular points of $D$, applying \eqref{e:assumption-1-12} we have
$$
(f_i)_{8pr,qr}(x) \le C ({\mathbb E}_x \widehat\tau_{D_{8pr}})
\widehat{\Lambda}_{8pr/3,qr}(f_i) \le C \varepsilon ({\mathbb E}_x \widehat\tau_{D_{8pr}})
\widehat{\Lambda}_{qr}(f_i).
$$
By this and Lemma \ref{l:two-sided-estimate}, we have that
\begin{eqnarray*}
f_i(x)&=&(f_i)_{8pr,qr}(x)+(\widetilde{f}_i)_{8pr,qr}(x)\\
&\le & C\varepsilon ({\mathbb E}_x \widehat\tau_{D_{8pr}}) \widehat{\Lambda}_{qr}(f_i)+ (1+\varepsilon) ({\mathbb E}_x \widehat\tau_{D_{8pr}}) \widehat{\Lambda}_{qr}(f_i)\\
&=&(C\varepsilon +1+\varepsilon) ({\mathbb E}_x \widehat\tau_{D_{8pr}}) \widehat{\Lambda}_{qr}(f_i)
\end{eqnarray*}
and
$$
f_i(x)\ge (\widetilde{f}_i)_{8pr,qr}(x)\ge (1+\varepsilon)^{-1}({\mathbb E}_x \widehat\tau_{D_{8pr}}) \widehat{\Lambda}_{qr}(f_i).
$$
Therefore,
\eqref{e:estimate-of-quotient} holds.
{
$\Box$
}
Assume that $z_0$ is inaccessible from $D$ with respect to $X$.
Then there exists a point $x_0$ in $D$ such that
$$
P_D(x_0,z_0)=\int_D G_D(x_0,v)j(v, z_0) m(dv) <\infty .
$$
In the next result we fix this point $x_0$.
\begin{thm}\label{t:reduction}
Suppose that $z_0$ is inaccessible from $D$ with respect to $X$.
Let $r < 2d(z_0,x_0) \wedge R$.
For any two nonnegative functions $f_1$, $f_2$ on $\X$ which are regular harmonic in $D_r$ with respect to $\widehat{X}$ and vanish on
$B_r \cap ( \overline{D}^c \cup\widehat D^{\mathrm{reg}})$,
we have
\begin{equation}\label{e:inaccessible-limit}
\lim_{D\ni x\to z_0}\frac{f_1(x)}{f_2(x)}
=\frac{\int_{\X}\widehat j(z_0, y)f_1(y)m(dy)}{\int_{\X}\widehat j(z_0, y) f_2(y)m(dy)}.
\end{equation}
\end{thm}
\noindent{\bf Proof.}
First note that
$$
\int_{\overline{B}_{r/3}^c} \widehat j(z_0,z) G_D(x_0, z)m(dz) \ge \int_{D \cap \overline{B}_{r/3}^c} j(z, z_0) G_{D \cap \overline{B}_{r/3}^c} (x_0, z)m(dz)
=P_{D \cap \overline{B}_{r/3}^c}(x_0,z_0) >0.
$$
Since $\widehat{X}$ satisfies {\bf F1}$(z_0, R)$, we have $\widehat{\Lambda}_{r/8}(f_i)<\infty$.
The function $v\mapsto G_D(x_0,v)$ is regular harmonic in $D_r$ with respect to $\widehat{X}$ and vanishes on
$B_{r}\setminus D_{r}$ (so vanishes on $B_r \cap ( \overline{D}^c \cup \widehat D^{\mathrm{reg}})$).
By using {\bf F1}$(z_0, R)$ for $\widehat{X}$ we have
\begin{align*}
&
\int_{B_{r/8}}\widehat j(z_0, y)f_i(y) m(dy) \le C
\int_{\overline{B}_{r/3}^c} \widehat j(z_0,z) f_i(z)m(dz)
\int_{B_{r/8}}\widehat j(z_0, y) {\mathbb E}_y [ \widehat{\tau}_{D_r}] m(dy)
\\
&=C
\int_{\overline{B}_{r/3}^c} \widehat j(z_0,z) G_D(x_0, z)m(dz)
\int_{B_{r/8}}\widehat j(z_0, y) {\mathbb E}_y [ \widehat{\tau}_{D_r}] m(dy)
\frac{\int_{\overline{B}_{r/3}^c} \widehat j(z_0,z) f_i(z)m(dz)}
{\int_{\overline{B}_{r/3}^c} \widehat j(z_0,z) G_D(x_0, z)m(dz)}\\
&\le C^2 \int_{B_{r/8}}\widehat j(z_0, y)G_D(x_0, y) m(dy)
\frac{\int_{\overline{B}_{r/3}^c} \widehat j(z_0,z) f_i(z)m(dz)}
{\int_{\overline{B}_{r/3}^c} \widehat j(z_0,z) G_D(x_0, z)m(dz)}\\
&\le C^2 P_D(x_0,z_0)
\frac{ \widehat{\Lambda}_{r/3}(f_i) }
{P_{D \cap \overline{B}_{r/3}^c}(x_0,z_0) }<\infty.
\end{align*}
Therefore
\begin{eqnarray*}
\widehat{\Lambda}(f_i):=\int_{\X}\widehat j(z_0, y)f_i(y) m(dy)
=\int_{B_{r/8}}\widehat j(z_0, y)f_i(y) m(dy)+\widehat{\Lambda}_{r/8}(f_i)<\infty.
\end{eqnarray*}
Let $q_0=1/2$ and $\varepsilon >0$. For $j=0,1,\dots $, inductively define the sequence
$q_{j+1}=p(\varepsilon,q_j,r)$ as in Lemma \ref{l:levy-density}. Then
$$
\sum_{j=0}^{\infty} \widehat{\Lambda}_{q_{j+1}r, q_j r}(f_i) =
\int_{D_{r/2}}\widehat j(z_0, y)f_i(y) m(dy)\le \int_{\X}\widehat j(z_0, y)f_i(y) m(dy)<\infty .
$$
If $\widehat{\Lambda}_{q_{j+1}r, q_j r}(f_i)>\varepsilon \widehat{\Lambda}_{q_j r}(f_i)$
for all $j\ge 0$, then
$$
\sum_{j=0}^{\infty} \widehat{\Lambda}_{q_{j+1}r, q_j r}(f_i) \ge \varepsilon
\sum_{j=0}^{\infty} \widehat{\Lambda}_{q_j r}(f_i)\ge \varepsilon \sum_{j=0}^{\infty}
\widehat{\Lambda}_{q_0 r}(f_i)=\infty .
$$
$$
Hence, there exists an integer $k\ge 0$ such that $\widehat{\Lambda}_{q_{k+1}r, q_k r}(f_i)
\le \varepsilon \widehat{\Lambda}_{q_k r}(f_i)$. Moreover, since $\lim_{j\to \infty}
\widehat{\Lambda}_{q_{j+1}r, q_j r}(f_i)=0$, there exists $j_0\ge 0$ such that
$\widehat{\Lambda}_{q_{j+1}r, q_j r}(f_i)\le \widehat{\Lambda}_{q_{k+1}r, q_k r}(f_i)$
for all $j\ge j_0$. Hence for all $j\ge j_0\vee k$ we have
$$
\widehat{\Lambda}_{q_{j+1}r, q_j r}(f_i)\le \widehat{\Lambda}_{q_{k+1}r, q_k r}(f_i)
\le \varepsilon \widehat{\Lambda}_{q_k r}(f_i) \le \varepsilon \widehat{\Lambda}_{q_j r}(f_i).
$$
Therefore, there exists $j_0\in {\mathbb N}$ such that for all $j\ge j_0\vee k$,
$$
\widehat{\Lambda}_{q_{j+1}r, q_j r}(f_i) \le \varepsilon \widehat{\Lambda}_{q_j r}(f_i),\qquad i=1,2,
$$
and
$$
(1+\varepsilon)^{-1}\widehat{\Lambda}(f_i) < \widehat{\Lambda}_{q_j r}(f_i) <(1+\varepsilon) \widehat{\Lambda}(f_i),\qquad i=1,2.
$$
Hence the assumptions of Lemma \ref{l:oscillation-assumption-1} are satisfied and
consequently \eqref{e:estimate-of-quotient} holds: for $x\in D_{q_{j+1}r}$,
$$
\frac{(1+\varepsilon)^{-1}\widehat{\Lambda}_{q_j r}(f_1)}{(C\varepsilon +1+\varepsilon)\widehat{\Lambda}_{q_j r}(f_2)}\le \frac{f_1(x)}{f_2(x)} \le \frac{(C\varepsilon +1+\varepsilon)\widehat{\Lambda}_{q_j r}(f_1)}{(1+\varepsilon)^{-1}\widehat{\Lambda}_{q_j r}(f_2)}.
$$
It follows that for $x\in D_{q_{j+1}r}$,
$$
\frac{(1+\varepsilon)^{-2}\widehat{\Lambda}(f_1)}{(C\varepsilon +1+\varepsilon)(1+\varepsilon)\widehat{\Lambda}(f_2)}\le \frac{f_1(x)}{f_2(x)} \le \frac{(C\varepsilon +1+\varepsilon)(1+\varepsilon)\widehat{\Lambda}(f_1)}{(1+\varepsilon)^{-2}\widehat{\Lambda}(f_2)}.
$$
Since $\varepsilon >0$ was arbitrary, we conclude that \eqref{e:inaccessible-limit} holds. {
$\Box$
}
\section{Proof of Theorems \ref{t:main-mb3} and \ref{t:main-mb4}}\label{s:t34}
Let $D$ be a Greenian open subset of $\X$. Fix $x_0\in D$ and define
$$
M_D(x, y):=\frac{G_D(x, y)}{G_D(x_0, y)}, \qquad x, y\in D,~y\neq x_0.
$$
Combining \cite[Lemmas 3.2 and 3.4]{KSVp2}
and our Theorems \ref{t:oscillation-reduction-yI} and \ref{t:reduction}
we have the following.
\begin{thm}\label{t:1-10}
(a) Suppose that {\bf E1}$(z_0, R)$ holds
and that $\widehat{X}$ satisfies {\bf F1}$(z_0, R)$.
Then
\begin{align}\label{e:martin-kernel1}
M_D(x,z_0):=\lim_{D\ni v\to z_0}\frac{G_D(x,v)}{G_D(x_0,v)}
\end{align}
exists and is finite.
In particular, if $z_0$ is inaccessible from $D$ with respect to $X$, then
\begin{align}\label{e:martin-kernel2}
M_D(x,z_0)=\frac{\int_{\X}\widehat j(z_0, y)G_D(x, y) m(dy)}{\int_{\X}\widehat j(z_0, y)G_D(x_0, y) m(dy)}= \frac{P_D(x,z_0)}{P_D(x_0,z_0)}\, .
\end{align}
\noindent
(b)
Suppose that {\bf E2}$(z_0, R)$ holds and that $\widehat{X}$ satisfies {\bf F2}$(z_0, R)$.
Then for every $x\in D$ the limit
\begin{align}\label{e:martin-kernel3}
M_D(x,\infty):=\lim_{D\ni v\to \infty}\frac{G_D(x,v)}{G_D(x_0,v)}
\end{align}
exists and is finite.
In particular, if $\infty$ is inaccessible from $D$ with respect to $X$, then
\begin{align}\label{e:martin-kernel4}
M_D(x,\infty)=\frac{{\mathbb E}_x \tau_D}{{\mathbb E}_{x_0}\tau_D}\, .
\end{align}
\end{thm}
Since both $X^D$ and $\widehat{X}^D$ are strongly Feller, the process $X^D$ satisfies Hypothesis (B) in \cite{KW}. See \cite[Section 4]{KSVp2} for details.
Therefore $D$ has
a Martin boundary $\partial_M D$ with respect to $X^D$ satisfying the following properties:
\begin{description}
\item{(M1)} $D\cup \partial_M D$ is
a compact metric space (with the metric denoted by $d_M$);
\item{(M2)} $D$ is open and dense in $D\cup \partial_M D$, and its relative topology coincides with its original topology;
\item{(M3)} $M_D(x ,\, \cdot\,)$ can be uniquely extended to $\partial_M D$ in such a way that
\begin{description}
\item{(a)}
$ M_D(x, y) $ converges to $M_D(x, w)$ as $y\to w \in \partial_M D$ in the Martin topology;
\item{(b)} for each $ w \in D\cup \partial_M D$ the function $x \to M_D(x, w)$ is excessive with respect to $X^D$;
\item{(c)} the function $(x,w) \to M_D(x, w)$ is jointly continuous on
$D\times ((D\setminus\{x_0\})\cup \partial_M D)$ in the Martin topology and
\item{(d)} $M_D(\cdot,w_1)\not=M_D(\cdot, w_2)$ if $w_1 \not= w_2$ and $w_1, w_2 \in \partial_M D$.
\end{description}
\end{description}
\noindent
\textbf{Proof of Theorem \ref{t:main-mb3}:}
(a) Using Theorem \ref{t:1-10}(a), by the same argument as in the proof of \cite[Theorem 1.1(a)]{KSVp2},
we have that $\partial_M^{z_0} D$ consists of a single point.
\noindent
(b) If $z_0$ is accessible from $D$ with respect to $X$, then by \cite[Theorem 1.1 (b)]{KSVp2} the Martin kernel $M_D(\cdot,z_0)$ is minimal harmonic for $X^D$.
Assume that $z_0$ is inaccessible from $D$ with respect to $X$.
Since $x\mapsto P_D(x,z_0)$ is \emph{not} harmonic with respect to $X^D$, we conclude
from \eqref{e:martin-kernel2}
that the Martin kernel $M_D(\cdot,z_0)$ is not harmonic, and in particular, that $z_0$ is \emph{not} a minimal Martin boundary point.
{
$\Box$
}
\noindent
\textbf{Proof of Corollary \ref{c:main-mb3}}: (a)
Let $\Xi:\partial D\to \partial^f_M D$ be the map such that $\Xi(z)$ is the unique Martin boundary
point associated with $z\in \partial D$.
Since every finite Martin boundary point is associated with some $z\in \partial D$, we see that $\Xi$ is onto.
We show now that $\Xi$ is 1-1. If not, there are $z, z'\in \partial D$, $z{\bf n}} \def\Z {{\mathbb Z}eq z'$, such that $\Xi(z)=\Xi(z')=w$.
Then $M_D(\cdot, z)=M_D(\cdot, w)= M_D(\cdot, z')$. It follows from the proof of \cite[Corollary 1.2(a)]{KSVp2}
that $z$ and $z'$ cannot both be accessible. If one of them, say $z$, is accessible and the other, $z'$, is
inaccessible, then we cannot have $M_D(\cdot, z)=M_D(\cdot, z')$ since $M_D(\cdot, z)$ is harmonic while
$M_D(\cdot, z')$ is not. Now assume that both $z$ and $z'$ are inaccessible.
Then $M_D(\cdot,z)=\frac{P_D(\cdot,z)}{P_D(x_0,z)}$ and
$M_D(\cdot,z')=\frac{P_D(\cdot,z')}{P_D(x_0,z')}$.
From $M_D(\cdot, z)=M_D(\cdot, z')$ we deduce that
$$
P_D(x,z)P_D(x_0,z')=P_D(x,z')P_D(x_0,z), \qquad \text{for all }x\in D.
$$
By treating $P_D(x_0,z')$ and $P_D(x_0,z)$ as constants, the above equality can be written as
$$
\int_D G_D(x,y)j(y, z)m(dy) = c \int_D G_D(x,y) j(y, z') m(dy), \qquad \text{for all }x\in D.
$$
By the uniqueness principle for potentials, this implies that the measures $j(y, z)m(dy)$ and $c j(y, z') m(dy)$ are equal. Hence $j(y, z)=c j(y, z')$ for $m$-a.e.~$y\in D$.
But this is impossible (for example, let $y\to z$; then $j(y, z)\to \infty$, while $cj(y, z')$ stays bounded
because of {\bf C1}$(z, R)$). We conclude that $z=z'$.
(b) The proof of this part is exactly the same as that of \cite[Corollary 1.2(b)]{KSVp2}.
{
$\Box$
}
\noindent
\textbf{Proof of Theorem \ref{t:main-mb4}:}
\noindent
(a) Using Theorem \ref{t:1-10}(b), by the same argument as in the proof of \cite[Theorem 1.3(a)]{KSVp2}, we have that
$\partial_M^{\infty} D$ consists of a single point, which we will denote by $\infty$.
\noindent
(b) If $\infty$ is accessible from $D$ with respect to $X$,
then by \cite[Theorem 1.2 (b)]{KSVp2} the Martin kernel $M_D(\cdot, \infty)$ is minimal harmonic for $X^D$.
Assume that $\infty$ is inaccessible from $D$ with respect to $X$.
Since the function $x\mapsto {\mathbb E}_x \tau_D =\int_D G_D(x,y)m(dy)$
is \emph{not} harmonic with respect
to $X^D$, by \eqref{e:martin-kernel4} we conclude that the Martin kernel $M_D(\cdot, \infty)$ is not harmonic, and
in particular, $\infty$ is \emph{not} a minimal Martin boundary point. {
$\Box$
}
\noindent
\textbf{Proof of Corollary \ref{c:finite-not-infinite}}: (a) In the same way as in the proof of \cite[Corollary 1.4(a)]{KSVp2} it suffices to show that it cannot happen that $M_D(\cdot,\infty)=M_D(\cdot, z)$ for any $z\in \partial D$. If both $\infty$ and $z$ are accessible, this was shown in the proof of \cite[Corollary 1.4(a)]{KSVp2}. If one of the two points is accessible and the other inaccessible, then clearly the two Martin kernels are different since one is harmonic while the other is not. Assume that both $\infty$ and $z$ are inaccessible. Then $M_D(\cdot,\infty)=\frac{{\mathbb E}_{\cdot} \tau_D}{{\mathbb E}_{x_0}\tau_D}$ and $M_D(\cdot, z)=\frac{P_D(\cdot, z)}{P_D(x_0,z)}$. Therefore,
$$
P_D(x_0,z){\mathbb E}_x \tau_D =P_D(x,z) {\mathbb E}_{x_0}\tau_D, \qquad \text{for all }x\in D.
$$
By treating ${\mathbb E}_{x_0}\tau_D$ and $P_D(x_0,z)$ as constants, the above equality can be written as
$$
\int_D G_D(x,y)m(dy) = c \int_D G_D(x,y) j(y, z) m(dy), \qquad \text{for all }x\in D.
$$
By the uniqueness principle for potentials, this implies that the measures $m(dy)$ and $c j(y, z) m(dy)$ are equal. Hence $1=c j(y, z)$ for $m$-a.e.~$y\in D$ which clearly contradicts {\bf C1}$(z, R)$.
(b) The proof of this part is exactly the same as that of \cite[Corollary 1.4(b)]{KSVp2}.
\section{Examples}\label{s:discussion}
In this section we discuss several classes of L\'evy processes in ${\mathbb R}^d$ satisfying our assumptions.
\subsection{Subordinate Brownian motions}
In this subsection we discuss subordinate Brownian motions in ${\mathbb R}^d$ satisfying our assumptions.
We list, one by one, conditions on subordinate Brownian motions under which our assumptions hold.
Let $W=(W_t, {\mathbb P}_x)$ be a Brownian motion in ${\mathbb R}^d$,
$S=(S_t)$ an independent driftless subordinator with Laplace exponent $\phi$ and define the subordinate Brownian motion $Y=(Y_t, {\mathbb P}_x)$ by $Y_t=W_{S_t}$. Let $j_Y$ denote the L\'evy density of $Y$.
The Laplace exponent $\phi$ is a Bernstein function with $\phi(0+)=0$. Since $\phi$ has no drift part, $\phi$ can be written in the form $$
\phi(\lambda)=\int_0^{\infty}(1-e^{-\lambda t})\,
\mu(dt)\, .
$$
Here $\mu$ is a $\sigma$-finite measure on
$(0,\infty)$ satisfying
$
\int_0^{\infty} (t\wedge 1)\, \mu(dt)< \infty.
$
The measure $\mu$ is called the L\'evy measure
of the subordinator $S$.
The function $\phi$ is called a complete Bernstein function
if the L\'evy measure $\mu$ of $S_t$ has a completely monotone density
$\mu(t)$, i.e., $(-1)^n D^n \mu\ge 0$ for every non-negative integer
$n$.
We will assume that $\phi$ is a complete Bernstein function.
When $\phi$ is unbounded and $Y$ is transient,
the mean occupation time measure of $Y$ admits a density $G(x,y)=g(|x-y|)$
which is called the Green function of $Y$,
and is given by the formula
\begin{equation}\label{e:green-function}
g(r):=\int_0^{\infty}(4\pi t)^{-d/2} e^{-r^2/(4t)}u(t)\, dt\, .
\end{equation}
Here $u$ is the potential density of the subordinator $S$.
We first discuss conditions that ensure {\bf E1}$(z_0, R)$.
By \cite[Lemma A.1]{KM}, for all $t>0$, we have
\begin{equation}\label{e:muexp}
\mu(t)\le (1-2e^{-1})^{-1}t^{-2}\phi'(t^{-1}) \le (1-2e^{-1})^{-1}t^{-1}\phi(t^{-1}).
\end{equation}
Thus
\begin{equation}\label{e:expub4mu}
\mu(t)\le (1-2e^{-1})^{-1}\phi'(M^{-1})t^{-2}, \qquad t \in (0, M].
\end{equation}
In \cite{KSV12b}, we have shown that there exists $c\in (0, 1)$ such that
\begin{equation}\label{e:bofmuatinfty}
\mu(t+1)\ge c\mu(t), \qquad t\ge 1.
\end{equation}
As a consequence of this, one can easily show that there exist $c_1, c_2>0$ such that
\begin{equation}\label{e:explb4mu}
\mu(t)\ge c_1e^{-c_2 t}, \qquad t\ge 1.
\end{equation}
In fact, it follows from \eqref{e:bofmuatinfty} that for any $n\ge 1$,
$
\mu(n+1)\ge c^n\mu(1).
$
Thus, for any $t\ge 1$,
\begin{eqnarray*}
\mu(t)&\ge&\mu([t]+1)\ge c^{[t]}\mu(1)=\mu(1)e^{[t]\log c}\\
&=&\mu(1)e^{([t]-t)\log c}e^{t\log c}\ge \mu(1)e^{t\log c}.
\end{eqnarray*}
The following is a refinement of \eqref{e:bofmuatinfty} and \cite[Lemma 3.1]{KL}.
\begin{lemma}\label{l:1}
Suppose that the Laplace exponent
$\phi$ of $S$ is a complete Bernstein function.
Then, for any $t_0>0$,
$$
\lim_{\delta\to 0} \sup_{t>t_0}\frac{\mu(t)}{\mu(t+\delta)}=1\, .
$$
\end{lemma}
\noindent{\bf Proof.} This proof is similar to the proof of \cite[Lemma 3.1]{KL}, which in turn
is a refinement of the proof of \cite[Lemma 13.2.1]{KSV12b}. Let $\eta>0$ be given. Since $\mu$ is
a completely monotone function, there exists a measure $m$ on $[0, \infty)$ such that
$$
\mu(t)=\int_{[0, \infty)}e^{-tx}m(dx), \qquad t>0.
$$
Choose $r=r(\eta, t_0)>0$ such that
$$
\eta\int_{[0, r]}e^{-t_0x}m(dx)\ge \int_{(r, \infty)}e^{-t_0x}m(dx).
$$
Then for any $t>t_0$, we have
\begin{eqnarray*}
\lefteqn{\eta\int_{[0, r]}e^{-tx}m(dx)=\eta\int_{[0, r]}e^{-(t-t_0)x}e^{-t_0x}m(dx)\ge \eta e^{-(t-t_0)r}\int_{[0, r]}e^{-t_0x}m(dx)}\\
&\ge& e^{-(t-t_0)r}\int_{(r, \infty)}e^{-t_0x}m(dx)=\int_{(r, \infty)}e^{-(t-t_0)r}e^{-t_0x}m(dx)\ge \int_{(r, \infty)}e^{-tx}m(dx).
\end{eqnarray*}
Thus for any $t>t_0$ and $\delta>0$,
\begin{eqnarray*}
\mu(t+\delta)&\ge&\int_{[0, r]}e^{-(t+\delta)x}m(dx)\ge e^{-r\delta}\int_{[0, r]}e^{-tx}m(dx)\\
&=&e^{-r\delta}(1+\eta)^{-1}\left(\int_{[0, r]}e^{-tx}m(dx)+\eta\int_{[0, r]}e^{-tx}m(dx) \right)\\
&\ge&e^{-r\delta}(1+\eta)^{-1}\left(\int_{[0, r]}e^{-tx}m(dx)+\int_{(r, \infty)}e^{-tx}m(dx) \right)\\
&=&e^{-r\delta}(1+\eta)^{-1}\int_{[0, \infty)}e^{-tx}m(dx)=e^{-r\delta}(1+\eta)^{-1}\mu(t).
\end{eqnarray*}
Therefore
$$
\limsup_{\delta\to 0}\sup_{t>t_0}\frac{\mu(t)}{\mu(t+\delta)}\le 1+\eta.
$$
Since $\eta$ is arbitrary and $\mu$ is decreasing, the assertion of the lemma is valid.
{
$\Box$
}
The L\'evy measure of $Y$ has a density with respect to
the Lebesgue measure given by $j_Y(x)=j(|x|)$ with
$$
j(r)=\int^\infty_0 g(t, r)\mu(t)dt, \qquad r\neq 0,
$$
where
$$
g(t, r)=(4\pi t)^{-d/2}\exp(-\frac{r^2}{4t}).
$$
As a consequence of \eqref{e:bofmuatinfty}, one can easily get that there exists
$c\in (0, 1)$ such that
\begin{equation}\label{e:bofjatinfty}
j(r+1)\ge cj(r), \qquad r\ge 1.
\end{equation}
Using this, we can show that there exist $c_1, c_2>0$ such that
\begin{equation}\label{e:explb4j}
j(r)\ge c_1e^{-c_2 r}, \qquad r\ge 1.
\end{equation}
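Indeed, \eqref{e:explb4j} follows from \eqref{e:bofjatinfty} in the same way as \eqref{e:explb4mu} follows from \eqref{e:bofmuatinfty}: iterating \eqref{e:bofjatinfty} gives $j(n+1)\ge c^n j(1)$ for $n\ge 1$, and hence, since $j$ is decreasing, for $r\ge 1$,
$$
j(r)\ge j([r]+1)\ge c^{[r]}j(1)=j(1)e^{[r]\log c}\ge j(1)e^{r\log c},
$$
so \eqref{e:explb4j} holds with $c_1=j(1)$ and $c_2=-\log c$.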
\begin{lemma}\label{l:2}
Suppose that the Laplace exponent
$\phi$ of $S$ is a complete Bernstein function.
For any $r_0\in (0, 1)$,
$$
\lim_{\eta\to 0} \sup_{r>r_0}\frac{\int^\eta_0g(t, r)\mu(t)dt}{j(r)}=0\, .
$$
\end{lemma}
\noindent{\bf Proof.} For any $\eta\in (0, 1)$ and $r\in (r_0, 2]$, we have
$$
\frac{\int^\eta_0g(t, r)\mu(t)dt}{j(r)}\le \frac{\int^\eta_0g(t, r_0)\mu(t)dt}{j(2)}.
$$
Thus
$$
\lim_{\eta\to 0}\sup_{r\in (r_0, 2]}\frac{\int^\eta_0g(t, r)\mu(t)dt}{j(r)}
=0.
$$
Thus we only need to show that
$$
\lim_{\eta\to 0}\sup_{r>2}\frac{\int^\eta_0g(t, r)\mu(t)dt}{j(r)}
=0.
$$
It follows from \eqref{e:expub4mu} that
\begin{eqnarray*}
\lefteqn{\int^\eta_0(4\pi t)^{-d/2}\exp(-\frac{r^2}{4t})\mu(t)dt
\le c_1\int^\eta_0t^{-(\frac{d}2+2)}\exp(-\frac{r^2}{4t})dt\le c_3\int^\eta_0\exp(-\frac{r^2}{8t})dt}\\
&=& c_3 \int^\infty_{r^2/(8\eta)}e^{-s}\frac{r^2}{8s^2}ds\le c_4r^2\int^\infty_{r^2/(8\eta)}e^{-s/2}ds=c_5r^2\exp(-\frac{r^2}{16\eta}).
\end{eqnarray*}
Now combining this with \eqref{e:explb4j} we immediately arrive at the
desired conclusion.
{
$\Box$
}
\begin{lemma}\label{l:3}
Suppose that the Laplace exponent
$\phi$ of $S$ is a complete Bernstein function.
For any $r_0\in (0, 1)$,
\begin{align}\label{e:BL1}
\lim_{\delta\to 0} \sup_{r>r_0}\frac{j(r)}{j(r+\delta)}=1\, .
\end{align}
\end{lemma}
\noindent{\bf Proof.} For any $\varepsilon\in (0, 1)$, choose $\eta\in (0, 1)$ such that
$$
\sup_{r>r_0}\frac{\int^\eta_0 g(t, r)\mu(t)dt}{j(r)}\le \varepsilon.
$$
Then for any $r>r_0$,
$
\int^\infty_\eta g(t, r)\mu(t)dt\ge (1-\varepsilon)j(r).
$
Fix this $\eta$. It follows from Lemma \ref{l:1} that there exists $\delta_0
\in (0, \eta/2)$ such that
$$
\frac{\mu(t)}{\mu(t+\delta)}\le 1+\varepsilon, \qquad t\ge \eta, \delta\in (0, \delta_0].
$$
For $t>\eta$, $0\le (r+\delta-t)^2=(r+\delta)^2-2tr+t(t-\delta)-\delta t$ and so
$t(t-\delta)\ge 2tr+\delta t-(r+\delta)^2$. Thus
\begin{eqnarray*}
\frac{(r+\delta)^2}{4t}-\frac{r^2}{4(t-\delta)}=
\frac{(r+\delta)^2(t-\delta)-r^2t}{4t(t-\delta)}
=\frac{\delta(2tr+\delta t-(r+\delta)^2)}{4t(t-\delta)}\le \frac\delta4.
\end{eqnarray*}
Consequently, for $r>r_0$ and $\delta\in (0, \delta_0)$,
\begin{eqnarray*}
j(r+\delta)&\ge & \int^\infty_\eta(4\pi t)^{-d/2}\exp(-\frac{(r+\delta)^2}{4t})\mu(t)dt\\
&\ge &e^{-\delta/4}\int^\infty_\eta (4\pi t)^{-d/2}\exp(-\frac{r^2}{4(t-\delta)})\mu(t)dt\\
&\ge &e^{-\delta/4}\int^\infty_{\eta-\delta} (4\pi (t+\delta))^{-d/2}\exp(-\frac{r^2}{4t})\mu(t+\delta)dt\\
&\ge &e^{-\delta/4}\left(\frac{\eta}{\eta+\delta}\right)^{d/2}(1+\varepsilon)^{-1}\int^\infty_\eta g(t, r)\mu(t)dt\\
&\ge&e^{-\delta/4}\left(\frac{\eta}{\eta+\delta}\right)^{d/2}(1+\varepsilon)^{-1}(1-\varepsilon)j(r).
\end{eqnarray*}
Now choose $\delta^*\in (0, \delta_0)$ such that
$$
e^{-\delta/4}\left(\frac{\eta}{\eta+\delta}\right)^{d/2}\ge (1+\varepsilon)^{-1},
\qquad \delta \in (0, \delta^*].
$$
Then for all $r>r_0$ and $\delta \in (0, \delta^*]$,
$$
j(r+\delta)\ge (1+\varepsilon)^{-2}(1-\varepsilon)j(r),
$$
which is equivalent to
$$
\frac{j(r)}{j(r+\delta)}\le \frac{(1+\varepsilon)^2}{(1-\varepsilon)},
$$
which implies \eqref{e:BL1}.
{
$\Box$
}
\begin{lemma}\label{l:levy-densityn2}
If the Laplace exponent
$\phi$ of $S$ is a complete Bernstein function,
then {\bf E1}$(z_0, R)$ holds for $Y$.
\end{lemma}
\noindent{\bf Proof.}
Fix $r_0, \varepsilon>0$ and use the notation $B_r=B(0, r)$.
By Lemma \ref{l:3},
there exists $\eta=\eta(\varepsilon,r_0)>0$ such that
$$
\sup_{r>r_0} \frac{j(r)}{j(r+\eta)} <1+\varepsilon .
$$
Let
$
\delta:=\frac{2\eta}{r_0}\wedge 1.
$
For $y\in B_{\delta r_0/2}$ and $z\in B_{2r_0}^c$ we have
\begin{eqnarray*}
&&r_0 <\frac{|z|}{2}= |z|-\frac{|z|}{2} \le |z|-|y| \le |z-y|\le |z|+|y| \le |z|+\frac{\delta r_0}{2} \le |z|+\eta,\\
&& r_0<|z| \le |z-y|+|y|\le |z-y|+\eta .
\end{eqnarray*}
Hence,
$$
\frac{j(|z-y|)}{j(|z|)}\le \frac{j(|z-y|)}{j(|z-y|+\eta)}\le \sup_{r>r_0}
\frac{j(r)}{j(r+\eta)}< 1+\varepsilon
$$
and
$$
\frac{j(|z|)}{j(|z-y|)}\le \frac{j(|z|)}{j(|z|+\eta)} \le \sup_{r>r_0}
\frac{j(r)}{j(r+\eta)}< 1+\varepsilon.
$$
This finishes the proof of the lemma. {
$\Box$
}
We now briefly discuss \eqref{e:G111}, {\bf C1}$(z_0, R)$, \eqref{e:nGassup1}, \textbf{F1}$(z_0, R)$, and {\bf G}.
First note that, if $Y$ is transient then \eqref{e:G111} holds (see \cite[Lemma 2.10]{KSV14b}).
For the remainder of this section, we will always assume that
$\phi$ is a complete Bernstein function and the L\'evy measure $\mu$ of $\phi$ is infinite, i.e.,
$\mu(0,\infty)=\infty$.
We consider the following further assumptions on $\phi$:
\noindent
{\bf H}:
{\it there exist constants $\sigma>0$, $\lambda_0 > 0$ and
$\delta \in (0, 1]$
such that
\begin{equation}\label{e:sigma}
\frac{\phi'(\lambda t)}{\phi'(\lambda)}\leq\sigma\, t^{-\delta}\ \text{ for all }\ t\geq 1\ \text{ and }\
\lambda \ge \lambda_0\, .
\end{equation}
When $d \le 2$, we assume that $d+2\delta-2>0$ where $\delta$ is the constant
in \eqref{e:sigma}, and there are $\sigma'>0$ and
\begin{equation}\label{e:new22}
\delta' \in \left(1-\tfrac{d}{2}, (1+\tfrac{d}{2})
\wedge (2\delta+\tfrac{d-2}{2})\right)
\end{equation}
such that
\begin{equation}\label{e:new23}
\frac{\phi'(\lambda x)}{\phi'(\lambda)}\geq
\sigma'\,x^{-\delta'}\ \text{ for all
}\ x\geq 1\ \text{ and }\ \lambda\geq\lambda_0\,.
\end{equation}
}
Assumption {\bf H} was introduced and used in \cite{KM} and \cite{KM2}. It is easy to check that
if $\phi$ is a complete Bernstein function
satisfying a weak scaling condition at infinity
\begin{equation}\label{e:new2322}
a_1 \lambda^{\delta_1}\phi(t)\le \phi(\lambda t)\le a_2 \lambda^{\delta_2}\phi(t)\, ,\qquad \lambda \ge 1, t\ge 1\, ,
\end{equation}
for some $a_1, a_2>0$ and $\delta_1, \delta_2\in (0,1)$, then {\bf H} is automatically satisfied.
One of the reasons for adopting the more general setup above is to cover the case
of geometric stable and iterated geometric stable subordinators.
Suppose that $\alpha\in (0, 2)$ for $d \le 2$ and that $\alpha\in (0, 2]$ for $d \ge 3$.
A geometric $(\alpha/2)$-stable subordinator is a subordinator
with Laplace exponent $\phi(\lambda)=\log(1+\lambda^{\alpha/2})$.
Let $\phi_1(\lambda):=\log(1+\lambda^{\alpha/2})$, and for $n\ge 2$,
$\phi_n(\lambda):=\phi_1(\phi_{n-1}(\lambda))$. A subordinator with
Laplace exponent $\phi_n$ is called an iterated geometric subordinator.
It is easy to check that the functions $\phi$ and $\phi_n$ satisfy
{\bf H}
but they do not satisfy \eqref{e:new2322}.
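For instance, for the geometric $(\alpha/2)$-stable subordinator one can verify the first condition \eqref{e:sigma} in {\bf H} directly: writing $\phi(\lambda)=\log(1+\lambda^{\alpha/2})$, we get
$$
\phi'(\lambda)=\frac{(\alpha/2)\lambda^{\alpha/2-1}}{1+\lambda^{\alpha/2}},
\qquad
\frac{\phi'(\lambda t)}{\phi'(\lambda)}
=t^{\alpha/2-1}\,\frac{1+\lambda^{\alpha/2}}{1+(\lambda t)^{\alpha/2}}
\le (1+\lambda_0^{-\alpha/2})\, t^{-1},
\qquad t\ge 1,\ \lambda\ge \lambda_0,
$$
so \eqref{e:sigma} holds with $\delta=1$ and $\sigma=1+\lambda_0^{-\alpha/2}$. On the other hand, for fixed $t\ge 1$ the function $\lambda \mapsto \phi(\lambda t)$ grows only logarithmically, so the lower bound in \eqref{e:new2322} fails.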
It follows from \cite[Lemma 5.4]{KM2} and \cite[Section 4.2]{KSVp2} that if $Y$ is transient and {\bf H} is true, then there exists $R>0$ such that the assumptions {\bf G}, {\bf C1}$(z_0, R)$, \eqref{e:nGassup1} and \textbf{F1}$(z_0, R)$ hold for all $z_0 \in {\mathbb R}^d$.
Thus using these facts and Lemma \ref{l:levy-densityn2}, we have the following as a special case of Theorems \ref{t:main-mb0}(a) and \ref{t:main-mb3}(b).
\begin{corollary}\label{c:lK1}
Suppose that
$Y=(Y_t, {\mathbb P}_x:\, t\ge 0, x \in {\mathbb R}^d)$
is a transient subordinate Brownian motion
whose
characteristic exponent is given by $\Phi(\theta)=\phi(|\theta|^2)$,
$\theta\in {\mathbb R}^d$.
Suppose $\phi$ is a complete Bernstein function
with infinite L\'evy measure $\mu$
and assume that {\bf H} holds.
Let $r \le 1$ and let $f_1$ and $f_2$ be nonnegative functions on ${\mathbb R}^d$ which are regular harmonic in $D\cap {B}(z_0, r)$ with respect to the process $Y$, and
vanish on
$B(z_0, r) \cap ( \overline{D}^c \cup D^{\mathrm{reg}})$.
Then the limit
$$
\lim_{D\ni x\to z_0}\frac{f_1(x)}{f_2(x)}
$$
exists and is finite. Moreover, the Martin boundary point associated with $z\in \partial D$ is minimal
if and only if $z$ is accessible from $D$.
\end{corollary}
\subsection{Unimodal L\'evy process}
Let $Y$ be an isotropic unimodal L\'evy process whose characteristic exponent is
$\Psi_0(|\xi|)$, that is,
\begin{equation}\label{e:fuku1.3}
\Psi_0(|\xi|)= \int_{{\mathbb R}^d}(1-\cos(\xi\cdot y))j_0(|y|)dy
\end{equation}
where the function $x \mapsto j_0(|x|)$ is the L\'evy density of $Y$.
If $Y$ is transient, let $x \mapsto g_0(|x|)$ denote the Green function of $Y$.
Let $0<\alpha < 2$. Suppose that $\Psi_0(\lambda)\sim \lambda^{\alpha}\ell(\lambda)$ as
$\lambda \to 0$, where $\ell$ is a slowly varying function at $0$.
Then by \cite[Theorems 5 and 6]{CGT} we have the following asymptotics of $j_0$ and $g_0$.
\begin{lemma}\label{l:j-g-near-infty} Suppose that $\Psi_0(\lambda)\sim \lambda^{\alpha}\ell(\lambda)$ as
$\lambda \to 0$, where $\ell$ is a slowly varying function at $0$.
\begin{itemize}
\item[(a)] It holds that
\begin{equation}\label{e:j-near-infty}
j_0(r)\sim r^{-d}
\Psi_0(r^{-1})\, ,\qquad r\to \infty.
\end{equation}
\item[(b)] If $d \ge 3$, then $Y$ is transient and
\begin{equation}\label{e:g-near-infty}
g_0(r)
\sim r^{-d}
\Psi_0(r^{-1})^{-1}\, ,\qquad r\to \infty.
\end{equation}
\end{itemize}
\end{lemma}
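For orientation, consider the rotationally symmetric $\alpha$-stable case, where $\Psi_0(\lambda)=\lambda^{\alpha}$ (so $\ell\equiv 1$); then \eqref{e:j-near-infty} and \eqref{e:g-near-infty} read
$$
j_0(r)\sim r^{-d-\alpha} \quad \text{and} \quad g_0(r)\sim r^{-d+\alpha}\, ,\qquad r\to \infty,
$$
which agree, up to constants, with the exact formulas for the $\alpha$-stable L\'evy density and the Riesz kernel when $d\ge 3$.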
We further assume that the L\'{e}vy measure of $Y$ is infinite. Then by \cite[Lemma 2.5]{KR} the density function $x \mapsto p_t(|x|)$ of $Y$ is continuous and, by the strong Markov property, so is the density function of $Y^D$.
Using the upper bound of $p_t(|x|)$ in \cite[Theorem 2.2]{KKK} (which holds for all $t>0$) and the monotonicity of $r \mapsto p_t(r)$, we see that the Green function of $Y^D$ is continuous for every open set $D$.
From this and \eqref{e:g-near-infty}, we conclude that if $d \ge 3$, the L\'{e}vy measure is infinite, and $\Psi_0(\lambda)\sim \lambda^{\alpha}\ell(\lambda)$, then ${\bf G}$ and \eqref{e:nGassup1-infty} hold (see \cite[Proposition 6.2]{KSVp1}).
It is proved in \cite{KSVp1} under some assumptions
much weaker than the above that {\bf F2}$(z_0, R)$ holds for all $z_0 \in {\mathbb R}^d$.
From \eqref{e:j-near-infty} we have
that {\bf E2}$(z_0, R)$ and {\bf C2}$(z_0, R)$ hold for all $z_0 \in {\mathbb R}^d$.
Using the above facts, we have the following as a special case of Theorems \ref{t:main-mb0}(b) and \ref{t:main-mb4}(b).
\begin{corollary}\label{c:lK2}
Suppose that $d \ge 3$ and that
$Y=(Y_t, {\mathbb P}_x:\, t\ge 0, x \in {\mathbb R}^d)$
is an isotropic unimodal L\'evy process
whose
characteristic exponent is given by $\Psi_0(|\xi|)$.
Suppose that $0<\alpha < 2$, that the L\'{e}vy measure of $Y$ is infinite, and that
$\Psi_0(\lambda)\sim \lambda^{\alpha}\ell(\lambda)$
as $\lambda \to 0$, where $\ell$ is a function slowly varying at $0$.
Let $r > 1$, $D$ be an unbounded open set and let $f_1$ and $f_2$ be nonnegative functions on ${\mathbb R}^d$ which are regular harmonic in $D\cap \overline{B}(z_0, r)^c
$ with respect to the process $Y$, and vanish on
$\overline{B}(z_0, r)^c\cap ( \overline{D}^c \cup D^{\mathrm{reg}})$.
Then the limit
$$
\lim_{D\ni x\to \infty}\frac{f_1(x)}{f_2(x)}
$$
exists and is finite. Moreover,
the Martin boundary point
associated with $\infty$ is minimal if and only if $\infty$ is accessible from $D$.
\end{corollary}
\begin{remark}\label{r:B0}
{\rm Using \cite[Lemma 3.3]{KSV8} instead of \cite[Theorem 6]{CGT}, one can see that
Corollary \ref{c:lK2} holds for $d > 2\alpha$
when $Y$
is a subordinate Brownian motion
whose
Laplace exponent $\phi$ is a complete Bernstein function
such that $\phi(\lambda)\sim \lambda^{\alpha/2}\ell(\lambda)$, where $0<\alpha < 2$ and $\ell$ is a function slowly varying at $0$.
}
\end{remark}
\begin{remark}\label{r:B1}
{\rm
If $Y$ is a L\'evy process satisfying {\bf E1}$(z_0, R)$ ({\bf E2}$(z_0, R)$, respectively),
then the L\'evy process $Z$ with L\'evy density $j_Z(x):=k(x/|x|)j_Y(x)$ also satisfies
{\bf E1}$(z_0, R)$ ({\bf E2}$(z_0, R)$, respectively), when $k$ is a continuous
function on the unit sphere bounded between two positive constants.
In fact, since
$$
\left|\frac{z-y}{|z-y|}-\frac{z}{|z|}\right|
\le\left |\frac{z-y}{|z-y|} -\frac{z-y}{|z|} \right|+\left|\frac{z-y}{|z|} -\frac{z}{|z|}\right|
\le \frac{||z|-|z-y||}{|z|}+\frac{|y|}{|z|}
\le \frac{2|y|}{|z|},
$$
we have
$$
\left|\frac{z-y}{|z-y|}-\frac{z}{|z|}\right| \le \frac{2|y|}{r} \quad \text{for all }|z|>r \quad
\text{and} \quad
\left|\frac{z-y}{|z-y|}-\frac{z}{|z|}\right| \le \frac{2r}{|z|} \quad \text{for all }|y|<r.
$$
Moreover, since $k$ is bounded below by a positive constant,
$$
\left|\frac{k(z/|z|)}{k((z-y)/|z-y|)}-1 \right| \le c |{k(z/|z|)}-{k((z-y)/|z-y|)}|.
$$
Thus by uniform continuity of $k$ on the unit sphere, we see that for all $r>0$
$$
\lim_{|y| \to 0}\sup_{z: |z|>r}\frac{ k(z/|z|) }{k((z-y)/|z-y|)}
=\lim_{|y| \to 0}\inf_{z: |z|>r}\frac{ k(z/|z|) }{k((z-y)/|z-y|)}
=1,
$$
and
$$\lim_{|z| \to \infty} \sup_{y: |y| < r}\frac{ k(z/|z|) }{k((z-y)/|z-y|)}
=\lim_{|z| \to \infty} \inf_{y: |y| < r}
\frac{k(z/|z|) }{k((z-y)/|z-y|)}
=1.
$$
When $Y$ is a symmetric stable process, this covers the not necessarily
symmetric strictly stable processes with L\'evy density $c\, k(x/|x|)|x|^{-d-\alpha}$, where $k$ is a continuous function on the unit sphere bounded between two
positive constants.
}
\end{remark}
\noindent
{\bf Acknowledgements:} Part of the research for this paper was done
during the visit of Renming Song and Zoran Vondra\v{c}ek to Seoul National University
from May 24 to June 8 of 2015.
They thank the Department of
Mathematical Sciences of Seoul National University for the hospitality.
\end{doublespace}
\begin{singlespace}
\small
\end{singlespace}
\
\vskip 0.1truein
\parindent=0em
{\bf Panki Kim}

Department of Mathematical Sciences and Research Institute of Mathematics,
Seoul National University, Building 27, 1 Gwanak-ro, Gwanak-gu Seoul 08826, Republic of Korea

E-mail: \texttt{[email protected]}

\vskip 0.1truein

{\bf Renming Song}

Department of Mathematics, University of Illinois, Urbana, IL 61801,
USA

E-mail: \texttt{[email protected]}

\vskip 0.1truein

{\bf Zoran Vondra\v{c}ek}

Department of Mathematics, University of Zagreb, Zagreb, Croatia, and \\
Department of Mathematics, University of Illinois, Urbana, IL 61801,
USA

E-mail: \texttt{[email protected]}
\end{document} |
\begin{document}
\title[Vector bundles on symmetric power]{Reconstructing vector
bundles on curves from their direct image on symmetric powers}
\author[I. Biswas]{Indranil Biswas}
\address{School of Mathematics, Tata Institute of Fundamental
Research, Homi Bhabha Road, Mumbai 400005, India}
\email{[email protected]}
\author[D. S. Nagaraj]{D. S. Nagaraj}
\address{The Institute of Mathematical Sciences, CIT
Campus, Taramani, Chennai 600113, India}
\email{[email protected]}
\subjclass[2000]{14J60, 14C20}
\keywords{Symmetric power, direct image, curve}
\date{}
\begin{abstract}
Let $C$ be an irreducible smooth complex projective curve, and let $E$ be an
algebraic vector bundle of rank $r$ on $C$. Associated to $E$, there are
vector bundles ${\mathcal F}_n(E)$ of rank $nr$ on $S^n(C)$, where $S^n(C)$ is the
$n$-th symmetric power of $C$. We prove the following: Let $E_1$ and $E_2$ be
two semistable vector bundles on $C$, with ${\rm genus}(C)\, \geq\, 2$. If
${\mathcal F}_n(E_1)\,\simeq \, {\mathcal F}_n(E_2)$ for a fixed
$n$, then $E_1 \,\simeq\, E_2$.
\end{abstract}
\maketitle
\section{Introduction}
Let $C$ be an irreducible smooth projective curve defined over the field of
complex numbers. Let $E$ be a vector bundle of rank $r$ on $C$. Let
$S^n(C)$ be $n$-th symmetric power of $C$. Let $q_1$ (respectively,
$q_2$) be the projection of $S^n(C)\times C$ to $S^n (C)$
(respectively, $C$). Let $\Delta_n \subset S^n(C)\times C$
be the universal effective divisor of degree $n.$ The direct image
$${\mathcal F}_n(E ) \,:=\, q_{1\star} (q_{2}^{\star}(E)\vert_{\Delta_n})$$
is a vector bundle of rank $nr$ over $S^n(C)$. These vector bundles
${\mathcal F}_n(E)$ are extensively studied (see \cite{ACGH}, \cite{BL},
\cite{BP}, \cite{ELN}, \cite{Sc}).
Assume that ${\rm genus}(C)\, \geq\, 2$. We prove the following (see Theorem
\ref{oneone2}):
\begin{theorem}
Let $E_1$ and $E_2$ be semistable vector bundles on $C$. If the two vector bundles
${\mathcal F}_n(E_1)$ and ${\mathcal F}_n(E_2)$ on $S^n(C)$ are isomorphic for a fixed
$n$, then $E_1$ is isomorphic to $E_2$.
\end{theorem}
\section{Preliminaries}
Fix an integer $n\, \geq\, 2$. Let $S_n$ be the group of permutations of
$\{1\, ,\cdots\, ,n\}$. Given an irreducible smooth complex projective curve
$C$, the group $S_n$ acts on $C^n$, and the quotient $S^n(C)\, :=\, C^n/{S_n}$
is an irreducible smooth complex projective variety of dimension $n$.
An effective divisor of degree $n$ on $C$ is a formal sum of the form
$\sum_{i=1}^rn_i z_i$, where $z_i$ are points on $C$ and $n_i$ are positive
integers, such that $\sum_{i=1}^rn_i\,=\,n$. The set of all effective divisors
of degree $n$ on $C$ is naturally identified with $S^n(C)$.
Let $q_1$ (respectively, $q_2$) be the projection of $S^n(C)\times C$ on to
$S^n(C)$ (respectively, $C$). Define
$$\Delta_n\, :=\, \{ (D,z) \,\in\, S^n(C)\times C~\mid ~ z \in D \}\,
\subset\, S^n(C)\times C\, .$$
Then $\Delta_n$ is a smooth hypersurface on $S^n(C)\times C$; it is
called the \textit{universal effective divisor} of degree $n$ of $C$.
The restriction of the projection $q_1$ to $\Delta_n$ is
a finite morphism
\begin{equation}\label{q}
q\, :=\, q_1\vert_{\Delta_n} \,:\, \Delta_n \,\longrightarrow\, S^n(C)
\end{equation}
of degree $n$.
Let $E$ be a vector bundle on $C$ of rank $r$. Define
$$
{\mathcal F}_n(E ) \,:=\,
q_{{\star}} (q_{2}^{\star}(E)\vert_{\Delta_n})
$$
to be the vector bundle on $S^n(C)$ of rank $nr$.
The \textit{slope} of $E$ is defined to be $ \mu(E) \,:=\, \text{degree}(E)/r$.
The vector bundle $E$ is said to be {\em semistable} if $\mu(F)\,\leq\, \mu(E)$
for every nonzero subbundle $F$ of $E$.
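Note that semistability is unaffected by tensoring with a line bundle $L$ on $C$: the subbundles of $E\otimes L$ are exactly those of the form $F\otimes L$ with $F\,\subset\, E$ a subbundle, and
$$
\mu(F\otimes L)\,=\,\mu(F)+\text{degree}(L)\, ,
$$
so $E$ is semistable if and only if $E\otimes L$ is. For instance, for line bundles $L_1$ and $L_2$, the direct sum $L_1\oplus L_2$ is semistable if and only if $\text{degree}(L_1)\,=\,\text{degree}(L_2)$.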
\section{The reconstruction}
Henceforth, we assume that ${\rm genus}(C)\,\geq\, 2$.
We first consider the case of $n\,=\,2$. The hypersurface $\Delta_2$ in $S^2(C)
\times C$ can be identified with $C\times C$. In fact the map $(x,y)\,
\longmapsto\, (x+y,x)$ is an isomorphism from $C\times C$ to $\Delta_2$ (cf.
\cite{BP}). Let
$$
q \,:\, \Delta_2 \, \longrightarrow \,S^2(C)
$$
be the map in \eqref{q}. Under the above identification of $\Delta_2$ with
$C\times C$, the map $q$ coincides with the quotient map
\begin{equation}\label{f}
f\, :\, C \times C\, \longrightarrow\, S^2(C)\, =\, (C \times C)/S_2\, .
\end{equation}
For $i\, =\, 1\, ,2$, let
$$p_i\,:\,C \times C \, \longrightarrow \,C$$ be the projection to the $i$-th
factor. The diagonal $\Delta\, \subset\, C \times C$ is canonically isomorphic
to $C$, and hence any vector bundle on $C$ can also be thought of as a vector
bundle on $\Delta$.
For a vector bundle $E$ on $C$, we have the short exact sequence
$$
0\, \longrightarrow \, V(E)\,:=\, f^\star{\mathcal F}_2(E)\, \longrightarrow \,
p_1^\star E \oplus p_2^\star E
\stackrel{q}{\longrightarrow} E \, \longrightarrow \, 0\, ,
$$
where $f$ is defined in \eqref{f}, and $q$ is the homomorphism defined by $(u,v)
\,\longmapsto\, u-v$ (cf. \cite{BP}). Let $$\phi_i \,:\, V(E)\,\longrightarrow
\,p_i(E)\, , ~\, i\,=\, 1\, ,2\, ,$$ be the restriction of the projection
$p_1^\star E \oplus
p_2^\star E\,\longrightarrow\, p_i^\star E$ to $V(E)\, \subset\,
p_1^\star E\oplus p_2^\star E$.
We have the following two exact sequences:
\begin{equation}\label{eq2}
0 \, \longrightarrow \, (p_2^\star E)(-\Delta)\,:=\,
(p_2^\star E)\otimes {\mathcal O}_{C\times C}(-\Delta)\, \longrightarrow \, V(E)
\,\stackrel{\phi_1}{\longrightarrow}\, p_1^\star E \, \longrightarrow \, 0
\end{equation}
(we are using the fact that the restriction of the line bundle ${\mathcal
O}_{C\times C}(-\Delta)$ to $\Delta$ is $K_\Delta \,=\, K_C$, where
$K_\Delta$ and $K_C$ are the canonical line bundles of $\Delta$ and $C$
respectively) and
$$
0 \, \longrightarrow \, (p_1^\star E)(-\Delta)\,:=\,
(p_1^\star E)\otimes {\mathcal O}_{C\times C}(-\Delta)\, \longrightarrow \,V(E)
\,\stackrel{\phi_2}{\longrightarrow} p_2^\star E\, \longrightarrow \, 0\, .
$$
\begin{proposition}\label{oneone}
Let $E$ and $F$ be two semistable vector bundles on $C$ such that
${\mathcal F}_2(E) \,\simeq \,{\mathcal F}_2(F)$. Then $E$ is isomorphic to $F$.
\end{proposition}
\begin{proof}
The restriction of the exact sequence in \eqref{eq2} to the diagonal
$\Delta\,=\, C$ gives a short exact sequence of vector bundles on $C$:
\begin{equation}\label{j1}
0\, \longrightarrow \, E\otimes K_C\, \longrightarrow \, J^1(E) \,
\longrightarrow \,E\, \longrightarrow \, 0\, ,
\end{equation}
where $K_C$ is the canonical bundle on $C$. Similarly we have a short exact
sequence
\begin{equation}\label{j2}
0\,\longrightarrow\, F \otimes K_C\, \longrightarrow\, J^1(F)\, \longrightarrow
\, F\, \longrightarrow \, 0\, .
\end{equation}
Since ${\mathcal F}_2(E)\,\simeq\,{\mathcal F}_2(F)$, we see that $J^1(E)\simeq J^1(F)$.
As $E$ (respectively, $F$) is semistable, and $\text{degree}(K_C) \, >\, 0$, the
subbundle $E\otimes K_C$ (respectively, $F\otimes K_C$) of $J^1(E)$
(respectively, $J^1(F)$) in \eqref{j1} (respectively, \eqref{j2}) is the first
term in the Harder--Narasimhan filtration of $J^1(E)$ (respectively, $J^1(F)$).
Since $J^1(E)\,\simeq\, J^1(F)$, it follows that $E \,\simeq\, F$.
\end{proof}
Now we consider the general case of $n\, \geq\, 3$.
\begin{theorem}\label{oneone2}
Let $E$ and $F$ be semistable vector bundles on $C$ such that
${\mathcal F}_n(E) \,\simeq
\, {\mathcal F}_n(F)$. Then the vector bundle $E$ is isomorphic to $F$.
\end{theorem}
\begin{proof}
The universal effective divisor $\Delta_n\,\subset\, S^n(C) \times C$ of degree
$n$ can be identified with $S^{n-1}(C) \times C$ using the morphism
$$
f\,:\, S^{n-1}(C) \times C \,\longrightarrow \, \Delta_n
$$
that sends any
$(D\, ,z) \, \in\, S^{n-1}(C) \times C$ to $(D+z,z)$. The composition
\begin{equation}\label{bq}
S^{n-1}(C) \times C \,\stackrel{f}{\longrightarrow} \, \Delta_n
\,\stackrel{q}{\longrightarrow} \, S^n(C)\, ,
\end{equation}
where $q$ is defined in \eqref{q}, will be denoted by $\overline{q}$. We note
that
$$
{\mathcal F}_n(E)\,=\, \overline{q}_\star p_2^\star E\, ,
$$
where $p_2\,:\, S^{n-1}(C)\times C\,\longrightarrow\, C$ is the natural
projection.
Let $f_1\,:\, S^{n-1}(C)\times C\,\longrightarrow\, S^{n-1}(C)$ be the natural
projection. Let
$$
\alpha\,:\, C\times C \,\longrightarrow\, S^{n-1}(C) \times C
$$
be the
morphism defined by $(x\, ,y)\,\longmapsto\, ((n-1)x\, , y)$. Then the pullback
$(\overline{q}\circ\alpha)^\star ({\mathcal F}_n(E))$, where $\overline{q}$ is
constructed in \eqref{bq}, fits in an exact sequence:
\begin{equation}\label{ex1}
0\,\longrightarrow\,
p_2^\star E \otimes{\mathcal O}_{C\times C}(-(n-1)\Delta)\,\longrightarrow\,
(\overline{q}\circ\alpha)^\star ({\mathcal F}_n(E))\,\longrightarrow
(f_1\circ\alpha)^\star {\mathcal F}_{n-1}(E)\,\longrightarrow\, 0\, ,
\end{equation}
where $f_1$ is defined above. The above projection
$(\overline{q}\circ\alpha)^\star ({\mathcal F}_n(E))\,\longrightarrow
(f_1\circ\alpha)^\star{\mathcal F}_{n-1}(E)$ follows from the fact that
$f_1\circ\alpha (z) \, \subset\, \overline{q}\circ\alpha (z)$
for any $z\, \in\, C\times C$.
Define the vector bundle
$$J^{n-1}(E)\,:=\,(\overline{q}\circ\alpha)^\star
({\mathcal F}_n(E))\vert_{\Delta}\,\longrightarrow\,
\Delta\, =\, C$$
on the diagonal in $C\times C$. Restricting the exact sequence in \eqref{ex1} to
$\Delta$, we get a short exact sequence of vector bundles
$$
0\, \longrightarrow\, J^{n-2}(E)\otimes K_C \, \longrightarrow\,
J^{n-1}(E)\, \longrightarrow\, E\, \longrightarrow\, 0\, ;
$$
note that $J^{0}(E)\, =\, E$.
Therefore, by induction on $n$, we get a filtration of subbundles of
$J^{n-1}(E)$
\begin{equation}\label{fi}
0\,=\, W_n\, \subset\, W_{n-1}\, \subset\, \cdots \, \subset\,
W_1\, \subset\, W_0\, =\, J^{n-1}(E)\, ,
\end{equation}
such that $W_j/W_{j+1} \,=\, E\otimes K_C^{\otimes j}$ for all $j\, \in\, [0\, ,
n-1]$. In particular, $W_{n-1}\,=\, E\otimes K_C^{\otimes (n-1)}$.
Since $E$ is semistable, and $\text{degree}(K_C)\, >\, 0$, we
conclude that $E\otimes K_C^{\otimes j}$ is semistable for all
$j\, \in\, [0\, , n-1]$; moreover, since
$\mu(E\otimes K_C^{\otimes j})\,=\,\mu(E)+j\cdot \text{degree}(K_C)$, we have
$$
\mu(W_j/W_{j+1})\, <\, \mu(W_{j+1}/W_{j+2})
$$
for all $j\, \in\, [0\, , n-2]$. Consequently, the filtration of $J^{n-1}(E)$
in \eqref{fi} coincides with the Harder--Narasimhan filtration of $J^{n-1}(E)$.
In particular, the first term of the Harder--Narasimhan filtration (the maximal
semistable subsheaf) of $J^{n-1}(E)$ is the subbundle $E\otimes K_C^{\otimes
(n-1)}$.
Using this, and the fact that $J^{n-1}(E)\,\simeq\,J^{n-1}(F)$ (recall that
${\mathcal F}_n(E)\,\simeq\,{\mathcal F}_n(F)$), we conclude that
$E\otimes K_C^{\otimes (n-1)}\,\simeq\, F\otimes K_C^{\otimes (n-1)}$, and hence
that $E$ is isomorphic to $F$.
\end{proof}
\end{document} |
\begin{document}
\title{Equivariant, locally finite inverse representations with uniformly bounded zipping length, for arbitrary finitely presented groups}
\vglue 1cm
\section{Introduction}\label{sec0}
\setcounter{equation}{0}
This is the first of a series of papers the aim of which is to give complete proofs for the following
\noindent {\bf Theorem.} {\it Any finitely presented group $\Gamma$ is QSF.}
This has already been announced in my preprint \cite{24}, where a few more details about the general context of the theorem above are also given. The QSF ($=$ quasi simply filtered) property, for locally compact spaces $X$ and/or for finitely presented groups $\Gamma$, was introduced earlier by S.~Brick, M.~Mihalik and J.~Stallings \cite{3}, \cite{29}.
A locally compact space $X$ is said to be QSF, and it is understood that from now on we move in the simplicial category, if the following thing happens. For any compact $k \subset X$ we can find an abstract compact $K$, where by ``abstract'' we mean that $K$ is not just another subspace of $X$, such that $\pi_1 K = 0$, and coming with an embedding $k \overset{j}{\longrightarrow} K$, such that there is a continuous map $f$ entering the following commutative diagram
$$
\xymatrix{
k \ar[rr]^{j} \ar@{_{(}->}[dr] &&K \ar[dl]^{f} \\
&X
}
$$
with the property that $M_2 (f) \cap jk = \emptyset$. Here $M_2 (f) \subset K$ is the set of double points $x \in K$ s.t. ${\rm card} \, f^{-1} f(x) > 1$. [We will also use the notation $M^2 (f) \subset K \times K$ for the pairs $(x,y)$, $x \ne y$, $fx = fy$.] One of the virtues of QSF is that if $\Gamma$ is a finitely presented group and $P$ is a PRESENTATION for $\Gamma$, {\it i.e.} a finite complex with $\pi_1 P = \Gamma$, then ``$\tilde P \in {\rm QSF}$'' is presentation independent, {\it i.e.} if one presentation of $\Gamma$ has this property, then all presentations of $\Gamma$ have it too. Also, if this is the case, then we will say that $\Gamma$ itself is QSF. The QSF {\ibf is} a bona fide group theoretical concept. Another virtue of QSF is that if $V^3$ is an open contractible 3-manifold which is QSF, then $\pi_1^{\infty} V^3 = 0$.
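For instance, ${\mathbb R}^n$ is QSF: given a compact $k \subset {\mathbb R}^n$, one may take for $K$ a large closed ball containing $k$, with $j$ the inclusion and $f$ the inclusion $K \subset {\mathbb R}^n$; then $\pi_1 K = 0$ and $f$ is injective, so that $M_2 (f) = \emptyset$ and the condition above is trivially satisfied.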
The special case of the theorem above when $\Gamma = \pi_1 M^3$, with $M^3$ a closed 3-manifold, is indeed already known, since that particular case is a consequence of the full work of G.~Perelman on the Thurston Geometrization of 3-manifolds \cite{11}, \cite{12}, \cite{13}, \cite{9}, \cite{2}, \cite{1}, \cite{8}. This is one of the impacts of the work in question on geometric group theory. Now, for technical reasons which will become clear later on, when flesh and bone will be put onto $\Gamma$ by picking up a specific presentation $P$, we will make the following kind of choice for our $P$. We will disguise our general $\Gamma$ as a $3$-manifold group, in the sense that we will work with a compact bounded {\ibf singular} $3$-manifold $M^3 (\Gamma)$, with $\pi_1 M^3 (\Gamma) = \Gamma$. The $M^3 (\Gamma)$'s will be described in more detail in section~\ref{sec1} below. Keep in mind that $M^3 (\Gamma)$ is NOT a smooth manifold and it certainly has no vocation of being geometrized in any sense.
A central notion for our whole approach is that of REPRESENTATION for $\Gamma$ and/or for $\tilde M^3 (\Gamma)$, two objects which up to quasi-isometry are the same. Our terminology here is quite unconventional, our representations proceed like ``$\longrightarrow \Gamma$'', unlike the usual representation of groups which proceed like ``$\Gamma \longrightarrow$''. The reason for this slip of tongue will be explained later. Here is how our representations are defined. We have to start with some infinite, not necessarily locally finite complex $X$ of dimension $2$ or $3$ (but only $\dim X = 3$ will occur in the present paper), endowed with a non-degenerate cellular map $X \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)$.
Non-degenerate means here that $f$ injects on the individual cells. The points of $X$ where $f$ is not immersive, are by definition the {\ibf mortal singularities} ${\rm Sing} \, (f) \subset X$. We reserve the adjective {\ibf immortal} for the singularities of $\tilde M^3 (\Gamma)$ and/or of $M^3 (\Gamma)$.
Two kinds of equivalence relations will be associated to such non-degenerate maps $f$, namely the
$$
\Psi (f) \subset \Phi (f) \subset X \times X \, .
$$
Here $\Phi (f)$ is the simple-minded equivalence relation where $(x,y) \in \Phi (f)$ means that $f(x) = f(y)$. The more subtle and not so easily definable $\Psi (f)$, is the smallest equivalence relation compatible with $f$, which kills all the mortal singularities. More details concerning $\Psi (f)$ will be given in the first section of this present paper. But before we can define our representations for $\Gamma$ we also have to review the notion of GSC, {\it i.e.} {\ibf ``geometrically simply connected''}. This stems from differential topology where a smooth manifold is said to be GSC iff it has a handle-body decomposition without handles of index $\lambda = 1$, and/or such that the $1$-handles and $2$-handles are in cancelling position, and a certain amount of care is necessary here, since what we are after are rather non-compact manifolds with non-empty boundary. As it is explained in \cite{24} there are some deep connections between the concept GSC from differential topology (for which, see also \cite{20}, \cite{21}, \cite{25}) and the concept QSF in group theory.
But then, the GSC concept generalizes easily for cell-complexes and the ones of interest here will always be infinite. We will say that our $X$ which might be a smooth non-compact manifold (possibly with boundary $\ne \emptyset$) or a cell-complex, like in $X \longrightarrow \tilde M^3 (\Gamma)$, is GSC if it has a cell-decomposition or handle-body decomposition, according to the case of the following type. Start with a PROPERLY embedded tree (or with a smooth regular neighbourhood of such a tree); with this should come now a cell-decomposition, or handle-decomposition
$$
X = T \cup \sum_{1}^{\infty} \left\{ \mbox{$1$-handles (or $1$-cells)} \, H_i^1 \right\} \cup \sum_{1}^{\infty} H_j^2 \cup \left\{ \sum_{k;\lambda \geq 2} H_k^{\lambda} \right\} \, ,
$$
where the $i$'s and the $j$'s belong to a same countable set, such that the {\ibf geometric intersection matrix}, which counts without any kind of $\pm$ signs, how many times $H_j^2$ goes through $H_i^1$, takes the following ``easy'' id $+$ nilpotent form
\begin{equation}
\label{eq0.1}
H_j^2 \cdot H_i^1 = \delta_{ji} + a_{ji} \, , \quad \mbox{where} \quad a_{ji} \in {\mathbb Z}_+ \quad \mbox{and} \quad a_{ji} > 0 \Rightarrow j > i \, .
\end{equation}
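By way of illustration, here is a small (purely hypothetical) geometric intersection matrix of the easy id $+$ nilpotent type (\ref{eq0.1}), with rows indexed by the $H_j^2$'s and columns by the $H_i^1$'s:
$$
(H_j^2 \cdot H_i^1) \, = \, \left( \begin{array}{ccc} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 3 & 1 \end{array} \right) ;
$$
the diagonal is the identity and every non-zero entry $a_{ji}$ sits strictly below it, {\it i.e.} $a_{ji} > 0 \Rightarrow j > i$, so each $2$-handle $H_j^2$ goes once through its own $1$-handle $H_j^1$ and otherwise only through $1$-handles of strictly smaller index.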
In this paper we will distinguish between PROPER, meaning inverse image of compact is compact, and proper meaning (inverse image of boundary) $=$ boundary.
Also, there is a notion of ``difficult'' id $+$ nilpotent, gotten by reversing the last inequality in (\ref{eq0.1}) and this is no longer GSC. For instance, the Whitehead manifold ${\rm Wh}^3$, several times mentioned in this sequence of papers and always impersonating one of the villains of the cast, admits handle-body decompositions of the difficult id $+$ nilpotent type. But then, ${\rm Wh}^3$ is certainly not GSC either. Neither are the various manifolds ${\rm Wh}^3 \times B^n$, $n \geq 1$ (see here \cite{15}).
We can, finally, define the REPRESENTATIONS for $\tilde M^3 (\Gamma)$ (and/or $\Gamma$). These are non-degenerate maps, like above
\begin{equation}
\label{eq0.2}
X \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)
\end{equation}
with the following features
I) $X \in {\rm GSC}$; we call it the representation space
II) $\Psi (f) = \Phi (f)$
III) $f$ is {\ibf ``essentially surjective''}, which in the case $\dim X = 3$ simply means that $\overline{{\rm Im} f} = \tilde M^3 (\Gamma)$.
Here are some comments: A) The fact that it is a {\ibf group} $\Gamma$ which is being ``represented'' this way, does {\ibf not} occur explicitly in the features I), II), III). The point is that, when groups $\Gamma$ ({\it i.e.} $\tilde M^3 (\Gamma)$) are being represented, this opens the possibility for the representation to be {\ibf equivariant}, a highly interesting proposition, as it will turn out soon.
On the other hand, as it will soon become clear, whenever a map $(X,f)$ has the features I), II), III), then this {\it forces} whatever sits at the target of $f$ to be simply-connected. So, a priori at least, one may try to represent various simply-connected objects. Long ago, the present author started by representing this way homotopy $3$-spheres $\Sigma^3$ (see here \cite{16} and \cite{6}, papers which eventually culminated with \cite{23}). Then, I also represented universal covering spaces of smooth closed $3$-manifolds (\cite{18}, \cite{19}) or even the wild Whitehead manifold (see \cite{27}). My excuse for calling the thing occurring in (\ref{eq0.2}) a ``representation'' for $\Gamma$, is that I had started to use this terminology already long ago, in contexts where no group was present; and by now the sin has been committed already.
Initially, the present kind of representations were called ``pseudo-spine representations'' (see \cite{16}, for instance). But today, in contexts where no confusion is possible I will just call them ``representations''. Only once, in the title to the present paper, I have also added the adjective ``inverse'', with the sole purpose of avoiding confusion.
\noindent B) We will call $n = \dim X$ the {\ibf dimension} of the representation, which can be $n=3$, like it will be the case now, or $n=2$ which is the really useful thing, to be developed in the next papers. The reason for using the presentation $M^3 (\Gamma)$ is to be able to take advantage of the richness of structure of $M_2 (f)$ when maps $(\dim = 2) \overset{f}{\longrightarrow} (\dim = 3)$ are concerned.
\noindent C) When $\dim X = 2$, then the ``essential surjectivity'' from III) will mean that $\tilde M^3 (\Gamma) - \overline{{\rm Im} (f)}$ is a countable union of small, open, insignificant ($=$ cell-like) subsets. This ends our comments.
Let us come back now to the condition $\Psi (f) = \Phi (f)$ from II) above and give it a more geometric meaning. We consider then
$$
\Phi (f) \supset M^2 (f) \subset X \times X \supset {\rm Sing} \, (f)
$$
where, strictly speaking, ``${\rm Sing} \, (f)$'' should be read ``${\rm Diag} \, ({\rm Sing} \, (f))$''. We will extend $M^2 (f)$ to $\hat M^2 (f) = M^2 (f) \cup {\rm Sing} \, (f) \subset X \times X$. With this, the condition $\Psi (f) = \Phi (f)$ means that, at the level of $\hat M^2 (f)$, any $(x,y) \in M^2 (f)$ can be joined by continuous paths to the singularities. The figure~1.1 below should be enough, for the time being, to make clear what we talk about now; more formal definitions will be given later. Anyway, the kind of thing which figure~1.1 displays will be called a zipping path (or a zipping strategy) $\lambda (x,y)$ for $(x,y) \in M^2 (f)$. Here is a way to make sense of the length $\Vert \lambda (x,y) \Vert$ of such a zipping path.
$$
\includegraphics[width=11cm]{FigPoFeb09-1.eps}
$$
\centerline{Figure 1.1. A zipping path $\lambda (x,y)$ for $(x,y) \in M^2 (f)$.}
\begin{quote}
Here $(x_1 , x_2 , x_3) \in M^3 (f)$ ($=$ triple points) and the various moving points $(X'_t , X''_t)$, $(Y'_t , Y''_t)$, $(Z'_t , Z''_t)$ which depend continuously on $t \in [0,1]$, are in $M^2 (f)$. The condition $\Psi (f) = \Phi (f)$ means the existence of a zipping path, like above, for any double point. But, of course, zipping paths are {\ibf not} unique.
\end{quote}
Although $M^3 (\Gamma)$ is not a smooth manifold, riemannian metrics can be defined for it. Such metrics can then be lifted to $\tilde M^3 (\Gamma)$, to $X$ or to $X \times X$. Using them we can make sense of $\Vert \lambda (x,y) \Vert$ which, of course, is only well-defined up to quasi-isometry. The condition $\Psi (f) = \Phi (f)$ also means that the map $f$ in (\ref{eq0.2}) is realizable (again not uniquely, of course), via a sequence of folding maps. Again, like in the context of figure~1.1, this defines a zipping or a zipping strategy.
When a double point $(x,y) \in M^2 (f)$ is given, we will be interested in quantities like ${\rm inf} \, \Vert \lambda (x,y) \Vert$, taken over all the possible continuous zipping paths for $(x,y)$. A quasi-isometrically equivalent, discrete definition would be to count the minimum number of folding maps necessary for closing $(x,y)$. But we will prefer the continuous version, rather than the discrete one. $\Box$
The object of the present paper is to give a proof for the following
\noindent REPRESENTATION THEOREM. {\it For any $\Gamma$ there is a $3$-dimensional representation
\begin{equation}
\label{eq0.3}
\xymatrix{
Y(\infty) \ar[rr]^{\!\!\!\!\!\!\!\!\!\!\ g(\infty)} && \ \tilde M^3 (\Gamma) \, ,
}
\end{equation}
such that the following things should happen too}
1) {\it The $Y(\infty)$ is {\ibf locally finite.}}
2) {\it The representation} (\ref{eq0.3}) {\it is EQUIVARIANT. Specifically, the representation space $Y(\infty)$ itself comes equipped with a free action of $\Gamma$ and then, for each $x \in Y(\infty)$ and $\gamma \in \Gamma$, we have also that}
$$
g(\infty) (\gamma x) = \gamma g (\infty)(x) \, .
$$
3) {\it There is a {\ibf uniform bound} $M > 0$ such that, for any $(x,y) \in M^2 (g(\infty))$ we have}
\begin{equation}
\label{eq0.4}
\underset{\lambda}{\rm inf} \, \Vert \lambda (x,y) \Vert < M \, ,
\end{equation}
{\it when $\lambda$ runs over all zipping paths for $(x,y)$.}
Here are some comments concerning our theorem above.
\noindent D) In general, representation spaces are violently {\ibf non} locally finite. So, the feature 1) is pretty non-trivial. In our present case of a $3$-dimensional representation, the image $g(\infty) \, Y(\infty)$ will automatically be the interior of the singular manifold $\tilde M^3 (\Gamma)$ which, of course, is locally finite. This does not involve in any way the finer aspects of the representation theorem, namely locally finite source, equivariance and bounded zipping length. But then, also, as we shall see below, when we go to $2^{\rm d}$ representation, even with our three finer features present, the image is no longer locally finite, generally speaking at least. This is a major issue for this sequence of papers.
\noindent E) It is in 2) above that the group property is finally used.
\noindent F) In the special case when $\Gamma = \pi_1 M^3$ for a smooth closed $3$-manifold, parts 1) + 2) of the REPRESENTATION THEOREM are already proved in \cite{26}. It turns out that the way the paper \cite{26} was constructed, makes it very easy for us to extend things from $M_{\rm smooth}^3$ to the present $M^3 (\Gamma)_{\rm SINGULAR}$. End of comments.
The first section of this paper reviews the $\Psi/\Phi$ theory in our present context, while sections II and III give the proof of the REPRESENTATION THEOREM above. They rely heavily on \cite{26}. In the rest of the present introduction, we will start by giving a rather lengthy sketch of how the REPRESENTATION THEOREM fits into the proof of the main theorem stated in the very beginning, {\it i.e.} the fact that all finitely presented $\Gamma$'s are QSF. This will largely overlap with big pieces of the announcement \cite{24}. We will end the introduction by briefly reviewing some possible extensions of the present work.
\noindent AN OVERVIEW OF THE PROOF THAT ``ALL $\Gamma$'s ARE QSF''. The representation theorem above provides us with a $3^{\rm d}$ representation space $Y(\infty)$. But we actually need a $2$-dimensional representation, in order to be able to go on. So we will take a sufficiently dense skeleton of the $Y(\infty)$ from the representation theorem and, making use of it, we get the next representation theorem below.
\noindent $2$-DIMENSIONAL REPRESENTATION LEMMA. {\it For any finitely presented $\Gamma$ there is a $2$-dimensional representation
\begin{equation}
\label{eq0.5}
X^2 \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)
\end{equation}
which, just like in the preceding $3^{\rm d}$ REPRESENTATION THEOREM, is}
1) {\it Locally finite; we will call this the first finiteness condition.}
2) {\it Equivariant;}
3) {\it With uniformly bounded zipping length and, moreover, such that we also have the next features below.}
4) {\it (The second finiteness condition.) For any tight compact transversal $\Lambda$ to $M_2 (f) \subset X^2$ we have}
\begin{equation}
\label{eq0.6}
\mbox{card} \, (\lim \, (\Lambda \cap M_2 (f))) < \infty \, .
\end{equation}
5) {\it The closed subset}
\begin{equation}
\label{eq0.7}
\mbox{LIM} \, M_2 (f) \underset{\rm def}{=} \ \bigcup_{\Lambda} \ \lim \, (\Lambda \cap M_2 (f)) \subset X^2
\end{equation}
{\it is a locally finite graph and $f \, {\rm LIM} \, M_2 (f) \subset f X^2$ is also a {\ibf closed} subset.}
6) {\it Let $\Lambda^*$ run over all tight transversals to ${\rm LIM} \, M_2 (f)$. Then we have}
$$
\bigcup_{\Lambda^*} \ (\Lambda^* \cap M_2 (f)) = M_2 (f) \, .
$$
One should notice that ${\rm LIM} \, M_2 (f) = \emptyset$ is equivalent to $M_2 (f) \subset X^2$ being a closed subset. It so happens that, if this is the case, then it is relatively easy to prove that $\Gamma \in $ QSF. This is why, in all this sequence of papers, we will only consider the situation ${\rm LIM} \, M_2 (f) \ne \emptyset$, {\it i.e.} the worst possible case.
Now, once $M_2 (f)$ is NOT a closed subset, the (\ref{eq0.6}) is clearly the next best option. But then, in \cite{27} it is shown that if, forgetting about groups and group actions, we play this same game, in the most economical manner for the Whitehead manifold ${\rm Wh}^3$, instead of $\tilde M^3 (\Gamma)$, then generically the set $\lim (\Lambda \cap M_2 (f))$ becomes a Cantor set, which comes naturally equipped with a feedback loop, turning out to be directly related to the one which generates the Julia sets in the complex dynamics of quadratic maps. See \cite{27} for the exact Julia sets which are concerned here.
Continuing with our list of comments, in the context of 6) in our lemma above, there is a transversal holonomy for ${\rm LIM} \, M_2 (f)$, which is quite non trivial. Life would be easier without that.
The $X^2$ has only mortal singularities, while the $fX^2$ only has immortal ones. [In terms of \cite{16}, \cite{6}, the singularities of $X^2$ (actually of $f$) are all ``undrawable singularities''. At the source, the same is true for the immortal ones too.]
The representation space $X^2$ in the $2$-dimensional Representation Lemma above is locally finite but, as soon as ${\rm LIM} \, M_2 (f) \ne \emptyset$ (which, as the reader will retain, is the main source of headaches in the present series of papers), the $fX^2 \subset \tilde M^3 (\Gamma)$ is {\ibf not}.
At this point, we would like to introduce canonical smooth high-dimensional thickenings for this $fX^2$. Here, even if we temporarily forget about $f \, {\rm LIM} \, M_2 (f)$, there are still the immortal singularities (the only kind which $f X^2$ possesses), and these accumulate at finite distance. This is something which certainly has to be dealt with but, at the level of this present sketch we will simply ignore it. Anyway, even dealing only with isolated immortal singularities requires a relatively indirect procedure. One starts with a 4-dimensional smooth thickening (remember that, provisionally, we make as if $f \, {\rm LIM} \, M_2 (f) = \emptyset$). This 4$^{\rm d}$ thickening is not uniquely defined; it depends on a desingularization (see \cite{6}). Next one takes the product with $B^m$, $m$ large, and this washes out the desingularization-dependence.
In order to deal with ${\rm LIM} \, M_2 (f) \ne \emptyset$, the procedure sketched above has to be supplemented with appropriate {\ibf punctures}, by which we mean here pieces of ``boundary'' of the prospective thickened object which are removed, or sent to infinity. This way we get a more or less canonical {\ibf smooth} high-dimensional thickening for $fX^2$, which we will call $S_u (\tilde M^3 (\Gamma))$. Retain that the definition of $S_u (\tilde M^3 (\Gamma))$ has to include at least a first batch of punctures, just so as to get a smooth object. By now we can also state the following
\noindent {\bf Main Lemma.} {\it $S_u (\tilde M^3 (\Gamma))$ is} GSC.
We will go to some length with the sketch of proof for this Main Lemma below, but here are some comments first. What our main lemma says, is that a certain smooth, very high-dimensional non-compact manifold with large non-empty boundary, very much connected with $\tilde M^3 (\Gamma)$ but without being exactly a high-dimensional thickening of it, is GSC. We will have a lot to say later concerning this manifold, our $S_u (\tilde M^3 (\Gamma))$. For right now, it suffices to say that it comes naturally equipped with a free action of $\Gamma$ but for which, unfortunately so to say, the fundamental domain $S_u (\tilde M^3 (\Gamma))/\Gamma$ fails to be compact. If one thinks of the $\tilde M^3 (\Gamma)$ itself as being made out of fundamental domains which are like solid compact ice-cubes, then the $S_u (\tilde M^3 (\Gamma))$ is gotten, essentially, by replacing each of these compact ice-cubes by a non-compact infinitely foamy structure, and then going to a very high-dimensional thickening of this foam.
In order to clinch the proof of ``$\forall \, \Gamma \in {\rm QSF}$'' one finally also needs to prove the following kind of implication
\noindent {\bf Second Lemma.} {\it We have the following implication}
$$
\{ S_u (\tilde M^3 (\Gamma)) \in {\rm GSC}, \ \mbox{which is what the MAIN LEMMA claims}\} \Longrightarrow
$$
\begin{equation}
\label{eq0.8}
\Longrightarrow \{ \Gamma \in {\rm QSF}\} \, .
\end{equation}
At the level of this short outline there is no room even for a sketch of proof of the implication (\ref{eq0.8}) above. Suffice it to say here that this (\ref{eq0.8}) is a considerably easier step than the main lemma itself, about which a lot will be said below.
We end this OVERVIEW with a SKETCH OF THE PROOF OF THE MAIN LEMMA.
For the lemma to be true a second batch of punctures will be necessary. But then, we will also want our $S_u (\tilde M^3 (\Gamma))$ to be such that we should have the implication (\ref{eq0.8}). This will turn out to put a very strict {\ibf ``Stallings barrier''} on how many punctures the definition of $S_u (\tilde M^3 (\Gamma))$ can include at all.
So, let $Y$ be some low-dimensional object, like $fX^2$ or some 3-dimensional thickening of it. When it is a matter of punctures, these will be put into effect directly at the level of $Y$ (most usually by an appropriate infinite sequence of Whitehead dilatations), before going to higher dimensions, so that the high dimensional thickening should be {\ibf transversally compact} with respect to the $Y$ to be thickened. The reason for this requirement is that when we will want to prove (\ref{eq0.8}), then arguments like in \cite{15} will be used (among others), and these ask for transversal compactness. For instance, if $V$ is a low-dimensional non-compact manifold, then
$$
\mbox{$V \times B^m$ is transversally compact, while $V \times \{ B^m$ with some boundary}
$$
$$
\mbox{punctures$\}$, or even worse $V \times R^m$, is not.}
$$
This is our so-called ``Stallings barrier'', putting a limit to how many punctures we are allowed to use. The name refers here to a corollary of the celebrated Engulfing Theorem of John Stallings, saying that with appropriate dimensions, if $V$ is an {\ibf open} contractible manifold, then $V \times R^p$ is a Euclidean space. There are, of course, also infinitely more simple-minded facts which give GSC when multiplying with $R^p$ or $B^p$.
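The Stallings-type statement just invoked can be recorded, in hedged form, as follows; the explicit dimension bound is our reading of ``with appropriate dimensions'' (the usual PL-setting bound), not something taken from the text above:

```latex
% Corollary of Stallings' Engulfing Theorem, recalled here as a hedged
% sketch; the bound n + p >= 5 is our assumption, not the text's.
\[
V^n \ \mbox{open and contractible} \ \Longrightarrow \ V^n \times R^p \approx R^{n+p} \, ,
\qquad \mbox{for suitable dimensions, {\it e.g.} } n+p \geq 5 \, .
\]
```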
We take now a closer look at the $n$-dimensional smooth manifold $S_u (\tilde M^3 (\Gamma))$ occurring in the Main Lemma. Here $n=m+4$ with high $m$. Essentially, we get our $S_u (\tilde M^3 (\Gamma))$ starting from an initial GSC low-dimensional object, like $X^2$, by performing first a gigantic quotient-space operation, namely our zipping, and finally thickening things into something of dimension $n$. But the idea which we will explore now, is to construct another smooth $n$-dimensional manifold, essentially starting from an already $n$-dimensional GSC smooth manifold, and then use this time a gigantic collection of additions and inclusion maps, rather than quotient-space projections, which should somehow mimic the zipping process. We will refer to this kind of thing as a {\ibf geometric realization of the zipping}. The additions which are allowed in this kind of realization are Whitehead dilatations, or adding handles of index $\lambda \geq 2$. The final product of the geometric realization will be another manifold of the same dimension $n$ as $S_u (\tilde M^3 (\Gamma))$, which we will call $S_b (\tilde M^3 (\Gamma))$ and which, {\it a priori} could be quite different from the $S_u (\tilde M^3 (\Gamma))$. To be more precise about this, in a world with ${\rm LIM} \, M_2 (f) = \emptyset$ we would quite trivially have $S_u = S_b$ but, in our real world with ${\rm LIM} \, M_2 (f) \ne \emptyset$, there is absolutely no {\it a priori} reason why this should be so. Incidentally, the subscripts ``$u$'' and ``$b$'', refer respectively to ``usual'' and ``bizarre''.
Of course, we will want to compare $S_b (\tilde M^3 (\Gamma))$ and $S_u (\tilde M^3 (\Gamma))$. In order to give an idea of what is at stake here, we will look into the simplest possible local situation with ${\rm LIM} \, M_2 (f)$ present. Ignoring now the immortal singularities for the sake of the exposition, we consider a small smooth chart $U = R^3 = (x,y,z)$ of $\tilde M^3 (\Gamma)$, inside which live $\infty + 1$ planes, namely $W = (z=0)$ and the $V_n = (x=x_n)$, where $x_1 < x_2 < x_3 < \ldots$ with $\lim x_n = x_{\infty}$. Our local model for $X^2 \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)$ is here $f^{-1} \, U = W + \underset{1}{\overset{\infty}{\sum}} \ V_n \subset X^2$ with $f \mid \Bigl( W + \underset{1}{\overset{\infty}{\sum}} \ V_n \Bigr)$ being the obvious map. We find here that the line $(x = x_{\infty}, z=0) \subset W$ is in ${\rm LIM} \, M_2 (f)$ and the situation is sufficiently simple so that we do not need to distinguish here between ${\rm LIM} \, M_2 (f)$ and $f \, {\rm LIM} \, M_2 (f)$.
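To see the second finiteness condition (\ref{eq0.6}) at work in this local model, here is a minimal numeric sketch; the concrete values $x_n = 1 - 2^{-n}$ and the choice of transversal $\Lambda$ are illustrative assumptions of ours, not data coming from the text:

```python
# Toy numeric sketch of the local model W = {z=0}, V_n = {x=x_n}:
# a transversal arc Lambda = {y=0, z=0} meets the double-point set
# M_2(f) exactly in the points x_1 < x_2 < ..., which accumulate at
# x_inf.  Illustrative choice (an assumption): x_n = 1 - 2**-n, x_inf = 1.

def double_points(n_max):
    """The intersections Lambda with M_2(f) in the local model."""
    return [1.0 - 2.0 ** (-n) for n in range(1, n_max + 1)]

def limit_set(points, x_inf=1.0):
    """Accumulation points of the sequence: a monotone bounded sequence
    has exactly one, here the single point x_inf."""
    assert all(points[i] < points[i + 1] for i in range(len(points) - 1))
    return {x_inf}

pts = double_points(50)
lim = limit_set(pts)
# card(lim(Lambda  cap  M_2(f))) = 1 < infinity: the second finiteness
# condition (0.6) holds in this model.
print(len(lim))  # prints 1
```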
Next, we pick up a sequence of positive numbers converging very fast to zero $\varepsilon > \, \varepsilon_1 > \varepsilon_2 > \ldots$ and, with this, on the road to the $S_u (\tilde M^3 (\Gamma))$ from the {\ibf Main Lemma}, we will start by replacing the $f W \cup \underset{1}{\overset{\infty}{\sum}} \ V_n \subset f X^2$, with the following 3-dimensional non-compact 3-manifold with boundary
$$
M \underset{\rm def}{=} [ W \times (-\varepsilon \leq z \leq \varepsilon) - {\rm LIM} \, M_2 (f) \times \{ z = \pm \, \varepsilon \}] \ \cup
$$
\begin{equation}
\label{eq0.9}
\cup \ \sum_{1}^{\infty} V_n \times (x_n - \varepsilon_n \leq x \leq x_n + \varepsilon_n ) \, .
\end{equation}
In such a formula, notations like ``$W \times (-\varepsilon \leq z \leq \varepsilon)$'' should be read ``$W$ thickened into $-\varepsilon \leq z \leq \varepsilon$''. Here ${\rm LIM} \, M_2 (f) \times \{ \pm \, \varepsilon \}$ is a typical puncture, necessary to make our $M$ be a smooth manifold. For expository purposes, we will pretend now that $n=4$ and then $M \times [0 \leq t \leq 1]$ is a local piece of $S_u (\tilde M^3 (\Gamma))$. Now, in an ideal world (but not in ours!), the geometrical realization of the zipping process {\it via} the inclusion maps (some of which will correspond to the Whitehead dilatations which are necessary for the punctures), which are demanded by $S_b (\tilde M^3 (\Gamma))$, should be something like this. We start with the obviously GSC $n$-dimensional thickening of $X^2$, call it $\Theta^n (X^2)$; but remember that for us, here, $n=4$. Our local model should live now inside $R^4 = (x,y,z,t)$, and we will try to locate it there conveniently for the geometric realization of the zipping. We will show how we would like to achieve this for a generic section $y = $ constant.
For reasons which will soon become clear, we will replace the normal section $y = $ const corresponding to $W$, which should be
$$
N_y = [ - \infty < x < \infty \, , \ y = {\rm const} \, , \ - \varepsilon \leq z \leq \varepsilon \, , \ 0 \leq t \leq 1]
$$
\begin{equation}
\label{eq0.10}
- \, (x = x_{\infty} \, , \ y = {\rm const} \, , \ z = \pm \, \varepsilon \, , \ 0 \leq t \leq 1) \, ,
\end{equation}
by the smaller $N_y \, - \, \overset{\infty}{\underset{1}{\sum}} \ {\rm DITCH} \, (n)_y$, which is defined as follows. The ${\rm DITCH} \, (n)_y$ is a thin column of height $-\varepsilon \leq z \leq \varepsilon$ and of $(x,t)$-width $4 \, \varepsilon_n$, which is concentrated around the arc
$$
(x = x_n \, , \ y = {\rm const} \, , \ - \varepsilon \leq z \leq \varepsilon \, , \ t=1) \, .
$$
This thin indentation inside $N_y$ is such that, with our fixed $y = {\rm const}$ being understood here, we should have
\begin{equation}
\label{eq0.11}
\lim_{n = \infty} \, {\rm DITCH} \, (n)_y = (x = x_{\infty} \, , \ -\varepsilon \leq z \leq \varepsilon \, , \ t=1) \, .
\end{equation}
Notice that, in the RHS of (\ref{eq0.11}) it is exactly the $z = \pm \, \varepsilon$ which corresponds to punctures.
Continuing to work here with a fixed, generic $y$, out of the normal $y$-slice corresponding to $V_n$, namely $(x_n - \varepsilon_n \leq x \leq x_n + \varepsilon_n \, , \ - \infty < z < \infty \, , \ 0 \leq t \leq 1)$, we will keep only a much thinner, isotopically equivalent version, namely the following
\begin{equation}
\label{eq0.12}
(x_n - \varepsilon_n \leq x \leq x_n + \varepsilon_n \, , \ - \infty < z < \infty \, , \ 1 - \varepsilon_n \leq t \leq 1) \, .
\end{equation}
This (\ref{eq0.12}) has the virtue that it can fit now inside the corresponding ${\rm DITCH} \, (n)_y$, without touching at all the $N_y - \{{\rm DITCHES}\}$.
What has been carefully described here, when all $y$'s are being taken into account, is a very precise way of separating the $\infty + 1$ branches of (the thickened) (\ref{eq0.9}), at the level of $R^4$, taking full advantage of the additional dimensions ({\it i.e.} the factor $[0 \leq t \leq 1]$ in our specific case). With some work, this kind of thing can be done consistently for the whole global $fX^2$. The net result is an isotopically equivalent new model for $\Theta^n (X^2)$, which invites us to try the following naive approach for the geometric realization of the zipping. Imitating the successive folding maps of the actual zipping, fill up all the empty room left inside the ditches, by using only Whitehead dilatations and additions of handles of index $\lambda > 1$, until one has reconstructed completely the $S_u (\tilde M^3 (\Gamma))$. Formally there is no obstruction here and then also what at a single $y = $ const may look like a handle of index one, becomes ``only index $\geq 2$'', once the full global zipping is taken into account. But there {\ibf is} actually a big problem with this naive approach, {\it via} which one can certainly reconstruct $S_u (\tilde M^3 (\Gamma))$ as a set, but with the {\ibf wrong topology}, as it turns out. I will give an exact idea now of how far one can actually go, proceeding in this naive way. In \cite{28} we have, actually, tried to play the naive game to its bitter end, and the next Proposition~1.1, given here for purely pedagogical reasons, is the kind of thing one gets (and essentially nothing better than it).
\noindent {\bf Proposition 1.1.} {\it Let $V^3$ be any open simply-connected $3$-manifold, and let also $m \in Z_+$ be high enough. There is then an infinite collection of smooth $(m+3)$-dimensional manifolds, all of them non-compact, with very large boundary, connected by a sequence of smooth embeddings
\begin{equation}
\label{eq0.13}
X_1 \subset X_2 \subset X_3 \subset \ldots
\end{equation}
such that}
1) {\it $X_1$ is {\rm GSC} and each of the inclusions in {\rm (\ref{eq0.13})} is either an elementary Whitehead dilatation or the addition of a handle of index $\lambda > 1$.}
2) {\it When one considers the union of the objects in {\rm (\ref{eq0.13})}, endowed with the {\ibf weak topology}, and there is no other, reasonable one which is usable here, call this new space $\varinjlim \, X_i$, then there is a continuous bijection}
\begin{equation}
\label{eq0.14}
\xymatrix{
\varinjlim \, X_i \overset{\psi}{\longrightarrow} V^3 \times B^m \, .
}
\end{equation}
The reader is reminded that in the weak topology, a set $F \subset \varinjlim \, X_i$ is closed iff all the $F \cap X_i$ are closed. Also, the inverse of $\psi$ is not continuous here; were it continuous, this would certainly contradict \cite{15}, since $V^3$ may well be ${\rm Wh}^3$, for instance. This, {\it via} Brouwer, also means that $\varinjlim \, X_i$ cannot be a manifold (which would automatically be then GSC). Also, exactly for the same reasons why $\varinjlim \, X_i$ is not a manifold, it is not a metrizable space either. So, here we meet a new barrier, which I will call the {\ibf non metrizability barrier} and, when we will really realize geometrically the zipping, we had better stay on the good side of it.
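As a hedged recapitulation, the Brouwer step of this argument can be spelled out in one line (this merely rearranges the reasoning just given; the GSC step is our reading of ``would automatically be then GSC''):

```latex
% If the direct limit were a manifold, invariance of domain (Brouwer)
% would force the continuous bijection psi to be a homeomorphism, hence
\[
\varinjlim \, X_i \ \mbox{a manifold} \ \Longrightarrow \ \psi \ \mbox{a homeomorphism} \ \Longrightarrow \ V^3 \times B^m \in {\rm GSC} \, ,
\]
% which is impossible for V^3 = Wh^3, by [15].
```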
One of the many problems in this series of papers is that the Stallings barrier and the non metrizability barrier somehow play against each other, and it is instructive to see this in a very simple instance. At the root of the non metrizability are, as it turns out, things like (\ref{eq0.11}); {\it a priori} this might, conceivably, be taken care of by letting all of $(x = x_{\infty} \, , \ - \varepsilon \leq z \leq \varepsilon \, , \ t=1)$ be punctures, not just the $z = \pm \, \varepsilon$ part. But then we would also be on the wrong side of the Stallings barrier. This kind of conflict is quite typical.
So far, we have presented the disease and the rest of the section gives, essentially, the cure. In a nutshell here is what we will do. We start by drilling a lot of {\ibf Holes}, consistently, both at the (thickened) levels of $X^2$ and $fX^2$. By ``Holes'' we mean here deletions which, to be repaired back, need additions of handles of index $\lambda = 2$. Working now only with objects with Holes, we will be able to fill in the empty space left inside the ${\rm Ditch} \, (n)$ {\ibf only} for $1-\varepsilon_n \leq z \leq 1$ where, remember $\underset{n = \infty}{\rm lim} \, \varepsilon_n = 0$. This replaces the trouble-making (\ref{eq0.11}) by the following item
\begin{equation}
\label{eq0.15}
\lim_{n = \infty} \{\mbox{{\it truncated}} \ {\rm Ditch} \, (n)_y \} = (x = x_{\infty} \, , \ z = \varepsilon \, , \ t = 1) \, ,
\end{equation}
which is now on the good side, both of the Stallings barrier and of the non metrizability barrier. But, after this {\ibf partial ditch-filling process}, we have to go back to the Holes and put back the corresponding deleted material. This far from trivial issue will be discussed later.
But before really going on, I will make a parenthetical comment which some readers may find useful. There are many differences between the present work (of which this present paper is only a first part, but a long complete version exists too, in hand-written form) and my ill-fated, by now dead $\pi_1^{\infty} \tilde M^3 = 0$ attempt [Pr\'epublications Orsay 2000-20 and 2001-57]. Of course, a number of ideas from there found their way here too. In the dead papers I was also trying to mimic the zipping by inclusions; but there, this was done by a system of ``gutters'' added in $2^{\rm d}$ or $3^{\rm d}$, before any thickening into high dimensions. Those gutters came with fatal flaws, which opened a whole Pandora's box of troubles. These turned out to be totally unmanageable, short of some input of new ideas. And, by the time a first whiff of such ideas started popping up, Perelman's announcement was out too. So, I dropped then the whole thing, turning to more urgent tasks.
It is only a number of years later that this present work grew out of the shambles. By contrast with the low-dimensional gutters from the old discarded paper, the present ditches take full advantage of the {\ibf additional} dimensions; I use here the word ``additional'', as opposed to the mere high dimensions. The spectre of non metrizability which came with the ditches, asked then for Holes the compensating curves of which are dragged all over the place by the inverse of the zipping flow, far from their normal location, {\it a.s.o.} End of parenthesis.
With the Holes in the picture, the $\Theta^n (X^2)$, $S_u (\tilde M^3 (\Gamma))$, will be replaced by the highly non-simply-connected smooth $n$-manifolds with non-empty boundary $\Theta^n (X^2) - H$ and $S_u (\tilde M^3 (\Gamma)) - H$. Here the ``$-H$'' stands for ``with the Holes having been drilled'' or, in a more precise language, as it will turn out, with the 2-handles which correspond to the Holes in question, deleted.
The manifold $S_u (\tilde M^3 (\Gamma)) - H$ comes naturally equipped with a PROPER framed link
\begin{equation}
\label{eq0.16}
\sum_1^{\infty} \, C_n \overset{\alpha}{\longrightarrow} \partial \, (S_u (\tilde M^3 (\Gamma)) - H) \, ,
\end{equation}
where ``$C$'' stands for ``curve''. This framed link is such that, when one adds the corresponding 2-handles to $S_u (\tilde M^3 (\Gamma)) - H$, then one gets back the $S_u (\tilde M^3 (\Gamma))$.
This was the $S_u$ story with holes, which is quite simple-minded, and we move now on to $S_b$. One uses now the $\Theta^n (X^2) - H$, {\it i.e.} the thickened $X^2$, with Holes, as a starting point for the geometric realization of the zipping process, rather than starting from the $\Theta^n (X^2)$ itself. The Holes allow us to make use of a partial ditch filling, {\it i.e.} to use now the truncation $1-\varepsilon_n \leq z \leq 1$, which was already mentioned before. This has the virtue of putting us on the good side of all the various barriers which we have to respect. The end-product of this process is another smooth $n$-dimensional manifold, which we call $S_b (\tilde M^3 (\Gamma)) - H$. This comes with a relatively easy diffeomorphism
$$
\xymatrix{
S_b (\tilde M^3 (\Gamma)) - H \ar[rr]_{\approx}^{\eta} &&S_u (\tilde M^3 (\Gamma)) - H \, .
}
$$
Notice that only ``$S_b (\tilde M^3 (\Gamma)) - H$'' is defined, so far, and not yet the full $S_b (\tilde M^3 (\Gamma))$ itself. Anyway here comes the following fact, which is far from being trivial.
\noindent {\bf Lemma 1.2.} {\it There is a second PROPER framed link
\begin{equation}
\label{eq0.17}
\sum_1^{\infty} \, C_n \overset{\beta}{\longrightarrow} \partial \, (S_b (\tilde M^3 (\Gamma)) - H)
\end{equation}
which has the following two features.}
1) {\it The following diagram is commutative, {\ibf up to a homotopy}}
\begin{equation}
\label{eq0.18}
\xymatrix{
S_b (\tilde M^3 (\Gamma)) -H \ar[rr]_{\eta} &&S_u (\tilde M^3 (\Gamma)) - H \\
&\overset{\infty}{\underset{1}{\sum}} \, C_n \ar[ul]^{\beta} \ar[ur]_{\alpha}
}
\end{equation}
{\it The homotopy above, which is {\ibf not} claimed to be PROPER, is compatible with the framings of $\alpha$ and $\beta$.}
2) {\it We {\rm define} now the smooth $n$-dimensional manifold
$$
S_b (\tilde M^3 (\Gamma)) = (S_b (\tilde M^3 (\Gamma)) - H) + \{\mbox{\rm the $2$-handles which are defined by}
$$
\begin{equation}
\label{eq0.19}
\mbox{\rm the framed link $\beta$ {\rm (\ref{eq0.17})}}\} \, .
\end{equation}
It is claimed that this manifold $S_b (\tilde M^3 (\Gamma))$ is} GSC.
Without even trying to prove anything, let us just discuss here some of the issues which are involved in this last lemma.
In order to be able to discuss the (\ref{eq0.17}), let us look at a second toy-model, the next to appear, in increasing order of difficulty, after the one already discussed, when formulae (\ref{eq0.9}) to (\ref{eq0.12}) have been considered. We keep now the same $\underset{1}{\overset{\infty}{\sum}} \, V_n$, and just replace the former $W$ by $W_1 = (y=0) \cup (z=0)$. The $M$ from (\ref{eq0.9}) becomes now the following non-compact 3-manifold with boundary
$$
M_1 = [(-\varepsilon \leq y \leq \varepsilon) \cup (-\varepsilon \leq z \leq \varepsilon) - \{\mbox{the present contribution of}
$$
\begin{equation}
\label{eq0.20}
{\rm LIM} \, M_2 (f) \}] \cup \sum_{1}^{\infty} \, V_n \times (x_n - \varepsilon_n \leq x \leq x_n + \varepsilon_n ) \, ;
\end{equation}
the reader should not find it hard to make explicit the contribution of ${\rm LIM} \, M_2 (f)$ here. Also, because we have considered only $(y=0) \, \cup \, (z=0)$ and not the slightly more complicated disjoint union $(y=0) + (z=0)$, which comes with triple points, there is still no difference, so far, between ${\rm LIM} \, M_2 (f)$ and $f \, {\rm LIM} \, M_2 (f)$.
When we discussed the previous local model, the
$$
{\rm DITCH} \, (n) \underset{\rm def}{=} \ \bigcup_y \ {\rm DITCH} \, (n)_y
$$
was concentrated in the neighbourhood of the rectangle
$$
(x = x_n \, , \ -\infty < y < \infty \, , \ - \varepsilon \leq z \leq \varepsilon \, , \ t=1 ) \, .
$$
Similarly, the present ${\rm DITCH} \, (n)$ will be concentrated in a neighbourhood of the $2$-dimensional infinite cross
$$
(x = x_n \, , \ (-\varepsilon \leq y \leq \varepsilon) \cup (-\varepsilon \leq z \leq \varepsilon) \, , \ t=1) \, .
$$
It is only the $V_n$'s which see Holes. Specifically, $V_n - H$ is a very thin neighbourhood of the $1^{\rm d}$ cross $(y = + \varepsilon) \cup (z = +\varepsilon)$, living at $x=x_n$, for some fixed $t$, and fitting inside ${\rm DITCH} \, (n)$, without touching anything at the level of the $\{ W_1$ thickened in the high dimension, and with the DITCH deleted$\}$. But it does touch four Holes, corresponding to the four corners. The action takes place in the neighbourhood of $t=1$, making again full use of the additional dimensions, supplementary to those of $M_1$ (\ref{eq0.20}).
With this set-up, when we try to give a ``normal'' definition for the link $\beta$ in (\ref{eq0.17}), then we encounter the following kind of difficulty, and things only become worse when triple points of $f$ are present too. Our normal definition of $\beta \, C_n$ (where $C_n$ is here the generic boundary of one of our four Holes) is bound to make use of arcs like $I_n = (x=x_n \, , \ y = +\varepsilon \, , \ -\varepsilon \leq z \leq \varepsilon \, , \ t = {\rm const})$, or $I_n = (x = x_n \, , \ - \varepsilon \leq y \leq \varepsilon \, , \ z = + \varepsilon \, , \ t = {\rm const})$ which, in whatever $S_b (\tilde M^3 (\Gamma)) - H$ may turn out to be, accumulate at finite distance. So, the ``normal'' definition of $\beta$ fails to be PROPER, and here is what we will do about this. Our arcs $I_n$ come naturally with (not completely uniquely defined) double points living in $\Psi (f) = \Phi (f)$, attached to them, and these have zipping paths, like in the Representation Theorem. The idea is then to push the arcs $I_n$ back, along (the inverses of) the zipping paths, all the way to the singularities of $f$, and these do not accumulate at finite distance. In more precise terms, the arcs via which we replace the $I_n$'s inside $\beta \, C_n$ come closer and closer to $f \, {\rm LIM} \, M_2 (f)$ as $n \to \infty$, and this last object lives at infinity. This is the {\it correct} definition of $\beta$ in (\ref{eq0.17}). The point is that now $\beta$ is PROPER, as claimed in our lemma. But then, there is also a relatively high price to pay for this. With the roundabout way to define $\beta$ which we have outlined above, the point 2) in our lemma, {\it i.e.} the basic property $S_b \in$ GSC, is no longer the easy obvious fact which it normally should be; it has actually become non-trivial. Here is where the difficulty sits.
To begin with, our $X^2$ which certainly is GSC, by construction, houses two completely distinct, not everywhere well-defined flow lines, namely the zipping flow lines and the collapsing flow lines stemming from the easy id $+$ nilpotent geometric intersection matrix of $X^2$. By itself, each of these two systems of flow lines is quite simple-minded and controlled, but not so the combined system, which can exhibit, generally speaking, closed oriented loops.
These {\ibf bad cycles} are in the way for the GSC property of $S_b$; they introduce unwanted terms in the corresponding geometric intersection matrix. An infinite machinery is required for handling this new problem: the bad cycles have to be, carefully, pushed all the way to infinity, out of the way. Let me finish with Lemma~1.2 by adding now the following item. Punctures are normally achieved by infinite sequences of dilatations, but if we locate the $\beta \, \underset{1}{\overset{\infty}{\sum}} \, C_n$ over the regions created by them, this again may introduce unwanted terms inside the geometric intersection matrix of $S_b$, playing havoc with the GSC feature. In other words, we have now additional restrictions concerning the punctures, going beyond what the Stallings barrier would normally tolerate. In more practical terms, what this means is that the use of punctures is quite drastically limited, by the operations which make our $\beta$ (\ref{eq0.17}) PROPER. This is as much as I will say about the Lemma~1.2, as such.
But now, imagine for a minute that, in its context, we would also know that (\ref{eq0.18}) commutes up to PROPER homotopy. In that hypothetical case, in view of the high dimensions which are involved, the (\ref{eq0.18}) would commute up to isotopy too. This would prove then that the manifolds $S_u (\tilde M^3 (\Gamma))$ and $S_b (\tilde M^3 (\Gamma))$ are diffeomorphic. Together with 2) in Lemma~1.2, we would then have a proof of our desired Main Lemma. But I do not know how to make such a direct approach work, and here comes what I can offer instead.
The starting point of the whole $S_u / S_b$ story, was an {\ibf equivariant} representation theorem for $\tilde M^3 (\Gamma)$, namely our $2^{\rm d}$ Representation Theorem and, from there on, although we have omitted to stress this until now, everything we did was supposed to be equivariant all along: the zipping, the Holes, the (\ref{eq0.16}) $+$ (\ref{eq0.17}), {\it a.s.o.} are all equivariant things.
Also, all this equivariant discussion was happening upstairs, at the level of $\tilde M^3 (\Gamma)$. But, being equivariant it can happily be pushed down to the level of $M^3 (\Gamma) = \tilde M^3 (\Gamma) / \Gamma$. Downstairs too, we have now two, still non-compact manifolds, namely $S_u (M^3 (\Gamma)) \underset{\rm def}{=} S_u (\tilde M^3 (\Gamma)) / \Gamma$ and $S_b (M^3 (\Gamma)) \underset{\rm def}{=} S_b (\tilde M^3 (\Gamma)) / \Gamma$.
What these last formulae mean, is also that we have
\begin{equation}
\label{eq0.21}
S_u (M^3 (\Gamma))^{\sim} = S_u (\tilde M^3 (\Gamma)) \quad \mbox{and} \quad S_b (M^3 (\Gamma))^{\sim} = S_b (\tilde M^3 (\Gamma)) \, ,
\end{equation}
let us say that $S_u$ and $S_b$ are actually functors of sorts.
But before really developing this new line of thought, we will have to go back to the diagram (\ref{eq0.18}) which, remember, commutes up to homotopy. Here, like in the elementary text-books, we would be very happy now to change
$$
\alpha \, C_n \sim \eta \beta \, C_n \quad \mbox{into (something like)} \quad \alpha \, C_n \cdot \eta \beta \, C_n^{-1} \sim 0 \, .
$$
This is less innocent than it may look, since in order to be of any use for us, the infinite system of closed curves
\begin{equation}
\label{eq0.22}
\Lambda_n \underset{\rm def}{=} \alpha \, C_n \cdot \eta \beta \, C_n^{-1} \subset \partial \, (S_u (\tilde M^3 (\Gamma)) - H) \, , \ n = 1,2, \ldots
\end{equation}
better be PROPER (and, of course, equivariant too). The problem here is the following and, in order not to overcomplicate our exposition, we look again at our simplest local model. Here, in the most difficult case at least, the curve $\alpha \, C_n$ runs along $(x = x_n \, , \ z = - \varepsilon)$ while $\eta \beta \, C_n$ runs along $(x = x_n \, , \ z = + \varepsilon)$. The most simple-minded procedure for defining (\ref{eq0.22}) would then be to start by joining them along some arc of the form
$$
\lambda_n = (x = x_n \, , \ y = {\rm const} \, , \ -\varepsilon \leq z \leq \varepsilon \, , \ t = {\rm const}) \, .
$$
But then, for the very same reasons as in our previous discussion of the mutually contradictory effects of the two barriers (Stallings and non-metrizability), this procedure is certainly not PROPER. The cure for this problem is to use, once again, the same trick as for defining a PROPER $\beta$ in (\ref{eq0.17}), namely to push the stupid arc $\lambda_n$ along the inverse of the zipping flow, all the way back to the singularities, keeping things all the time close to $f \, {\rm LIM} \, M_2 (f)$, {\it i.e.} close to $(x=x_{\infty} \, , \ z = \pm \varepsilon \, , \ t=1)$, in the beginning at least.
The next lemma sums up the net result of all these things.
\noindent {\bf Lemma 1.3.} {\it The correctly defined system of curves {\rm (\ref{eq0.22})} has the following features}
\begin{itemize}
\item[0)] {\it It is equivariant (which, by now, does not cost much),}
\item[1)] {\it It is PROPER,}
\item[2)] {\it For each individual $\Lambda_n$, we have a null homotopy
$$
\Lambda_n \sim 0 \quad \mbox{in} \quad S_u (\tilde M^3 (\Gamma)) - H \, .
$$
[Really this is in $\partial (S_u (\tilde M^3 (\Gamma)) - H)$, but we are very cavalier now concerning the distinction between $S_u$ and $\partial S_u$; the thickening dimension is very high, anyway.]}
\item[3)] {\it As a consequence of the bounded zipping length in the Representation Theorem, our system of curves $\Lambda_n$ has {\ibf uniformly bounded length}.}
\end{itemize}
Lemma~1.3 has been stated in the context of $\tilde M^3 (\Gamma)$, upstairs. But then we can push it downstairs to $M^3 (\Gamma)$ too, retaining 1), 2), 3) above. So, from now on, we consider the correctly defined system $\Lambda_n$ {\it downstairs}. Notice here that, although $M^3 (\Gamma)$ is of course compact (as a consequence of $\Gamma$ being finitely presented), the $S_u (M^3 (\Gamma)) - H$ and $S_u (M^3 (\Gamma))$ are certainly not.
So, the analogue of 1) from Lemma~1.3, which reads now
$$
\lim_{n = \infty} \, \Lambda_n = \infty \, , \ \mbox{inside} \ S_u (M^3 (\Gamma))-H \, ,
$$
is quite meaningful. Finally, here is the
\noindent {\bf Lemma 1.4.} (KEY FACT) {\it The analogue of diagram {\rm (\ref{eq0.18})} downstairs, at the level of $M^3 (\Gamma)$, commutes now up to PROPER homotopy.}
Before discussing the proof of this key fact, let us notice that it implies that $S_u (M^3 (\Gamma)) \underset{\rm DIFF}{=} S_b (M^3 (\Gamma))$ hence, {\it via} (\ref{eq0.21}), {\it i.e.} by ``functoriality'', we also have
$$
S_u (\tilde M^3 (\Gamma)) \underset{\rm DIFF}{=} S_b (\tilde M^3 (\Gamma)) \, ,
$$
which makes $S_u (\tilde M^3 (\Gamma))$ GSC, as desired; see here 2) in Lemma~1.2 too.
All the rest of the discussion is now downstairs, and we will turn back to Lemma~1.4. Here, the analogue of 2) from Lemma~1.3 is, of course, valid downstairs too, which we express as follows
$$
\mbox{For every $\Lambda_n$ there is a singular disk $D_n^2 \subset S_u (M^3 (\Gamma)) - H$,}
$$
\begin{equation}
\label{eq0.23}
\mbox{with $\partial D^2 = \Lambda_n$.}
\end{equation}
With this, what we clearly need now for Lemma~1.4, is something like (\ref{eq0.23}), but with the additional feature that $\underset{n=\infty}{\lim} \, D_n^2 = \infty$, inside $S_u (M^3 (\Gamma)) - H$. In a drastically oversimplified form, here is how we go about this. Assume, by contradiction, that there is a compact set $K \subset \partial (S_u (M^3 (\Gamma)) - H)$ and a subsequence of $\Lambda_1 , \Lambda_2 , \ldots$, which we denote again by exactly the same letters, such that for {\ibf any} corresponding system of singular disks cobounding it, $D_1^2 , D_2^2 , \ldots$, we should have $K \cap D_n^2 \ne \emptyset$, for all $n$'s.
We will show now that this, in itself, leads to a contradiction. Because $M^3 (\Gamma)$ is compact ($\Gamma$ being finitely presented), we can {\ibf compactify} $S_u (M^3 (\Gamma)) - H$ by starting with the normal embedding $S_u (M^3 (\Gamma)) - H \subset M^3 (\Gamma) \times B^N$, $N$ large; inside this compact metric space, we take then the closure of $S_u (M^3 (\Gamma)) - H$. For this compactification, which we denote by $(S_u (M^3 (\Gamma)) - H)^{\wedge}$, to be nice and useful for us, we have to be quite careful about the exact locations and sizes of the Holes, but the details of this are beyond the present outline. This good compactification is now
$$
(S_u (M^3 (\Gamma)) - H)^{\wedge} = (S_u (M^3 (\Gamma)) - H) \cup E_{\infty} \, ,
$$
where $E_{\infty}$ is the compact space which one has to add at the infinity of $S_u (M^3 (\Gamma))$ $- \, H$, so as to make it compact. It turns out that $E_{\infty}$ is moderately wild, failing to be locally connected, although it has plenty of continuous arcs embedded inside it.
Just by metric compactness, we already have
$$
\lim_{n = \infty} {\rm dist} (\Lambda_n , E_{\infty}) = 0
$$
and, even better, once we know that the lengths of $\Lambda_n$ are uniformly bounded, there is a subsequence $\Lambda_{j_1} , \Lambda_{j_2} , \Lambda_{j_3} , \ldots$ of $\Lambda_1 , \Lambda_2 , \Lambda_3 , \ldots$ and a continuous curve $\Lambda_{\infty} \subset E_{\infty}$, such that $\Lambda_{j_1} , \Lambda_{j_2} , \ldots$ converge uniformly to $\Lambda_{\infty}$. To be pedantically precise about it, we have
\begin{equation}
\label{eq0.24}
{\rm dist} \, (\Lambda_{j_n} , \Lambda_{\infty}) = \varepsilon_n \, , \ \mbox{where} \ \varepsilon_1 > \varepsilon_2 > \ldots > 0 \ \mbox{and} \ \lim_{n = \infty} \varepsilon_n = 0 \, .
\end{equation}
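As a side note (not spelled out in the text above), the extraction of this uniformly convergent subsequence is an instance of the Arzel\`a--Ascoli theorem, using the uniform length bound $L$ from 3) in Lemma~1.3:

```latex
% Parametrize each curve proportionally to arclength,
%   \lambda_n : [0,1] \longrightarrow (S_u (M^3 (\Gamma)) - H)^{\wedge} ,
% with image \Lambda_{j_n} and length(\Lambda_{j_n}) \leq L.  Then
d \big( \lambda_n (s) , \lambda_n (t) \big) \;\leq\; L \, \vert s - t \vert \, ,
% so the family \{\lambda_n\} is equicontinuous, with values in a compact
% metric space.  Arzela--Ascoli then yields a uniformly convergent
% subsequence; its limit curve \Lambda_\infty lands inside E_\infty, since
% \lim_{n=\infty} dist(\Lambda_n , E_\infty) = 0.
```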
Starting from this data, and injecting also a good amount of precise knowledge concerning the geometry of $S_u (M^3 (\Gamma))$ (knowledge which we actually have at our disposal, in real life), we can construct a region $N = N (\varepsilon_1 , \varepsilon_2 , \ldots) \subset S_u (M^3 (\Gamma)) - H$, which has the following features:
A) The map $\pi_1 N \longrightarrow \pi_1 (S_u (M^3 (\Gamma)) - H)$ injects;
B) The $E_{\infty}$ lives, also, at the infinity of $N$, to which it can be glued, and there is a retraction
$$
N \cup E_{\infty} \overset{R}{\longrightarrow} E_{\infty} \, ;
$$
C) There is an ambient isotopy of $S_u (M^3 (\Gamma)) - H$, which brings all the $\Lambda_{j_1} , \Lambda_{j_2} , \ldots$ inside $N$. After this isotopy, we continue to have (\ref{eq0.24}), or at least something very much like it.
The reader may have noticed that, for our $N$ we studiously have avoided the word ``neighbourhood'', using ``region'' instead; we will come back to this.
Anyway, it follows from A) above that our $\Lambda_{j_1} , \Lambda_{j_2} , \ldots \subset N$ (see here C)) bound singular disks in $N$. Using B), these disks can be brought very close to $E_{\infty}$, making them disjoint from $K$. In a nutshell, this is the contradiction which proves what we want. We end with some comments.
To begin with, our $N = N (\varepsilon_1 , \varepsilon_2 , \ldots)$ is by no means a neighbourhood of infinity, it is actually too thin for that, and its complement is certainly not pre-compact.
In the same vein, our argument which was very impressionistically sketched above, is certainly not capable of proving things like $\pi_1^{\infty} (S_u (M^3 (\Gamma)) - H) = 0$, which we do {\ibf not} claim, anyway.
Finally, it would be very pleasant if we could show that $\Lambda_{\infty}$ bounds a singular disk inside $E_{\infty}$ and deduce then our desired PROPER homotopy for the $\Lambda_{j_n}$'s from this. Unfortunately, $E_{\infty}$ is too wild a set to allow such an argument to work.
This ends the sketch of the proof that all $\Gamma$'s are QSF and now I will present some
\noindent CONJECTURAL FURTHER DEVELOPMENTS. The first possible development concerns a certain very rough classification of the set of all (finitely presented) groups. We will say that a group $\Gamma$ is {\ibf easy} if it is possible to find for it some 2-dimensional representation with {\ibf closed} $M_2 (f)$. We certainly do not mean here some equivariant representation like in the main theorem of the present paper which, most likely, will have ${\rm LIM} \, M_2 (f) \ne \emptyset$. We do not ask for anything beyond GSC and $\Psi = \Phi$. On the other hand, when there is {\ibf no} representation for $\Gamma$ with a closed $M_2 (f)$, {\it i.e.} if for any 2$^{\rm d}$ representation we have ${\rm LIM} \, M_2 (f) \ne \emptyset$, then we will say that the group $\Gamma$ is {\ibf difficult}.
Do not give any connotations to these notions of easy group {\it versus} difficult group, beyond the tentative technical definitions given here.
But let us move now to 3-dimensional representations too. All the 3-dimen\-sional representations $X \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)$ are such that $X$ is a union of {\ibf ``fundamental domains''}, pieces on which $f$ injects and which have sizes which are uniformly bounded ({\it i.e.} of bounded diameters). With this, $\Gamma$ will be said now to be easy if for any compact $K \subset \tilde M^3 (\Gamma)$ there is a representation $X \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)$ (possibly depending on $K$), such that only finitely many fundamental domains $\Delta \subset X$ are such that $K \cap f\Delta \ne \emptyset$. There are clearly two distinct notions here, the one just stated and then also the stronger one where the same $(X^2 , f)$ is good for all $K$'s. This last one should certainly be equivalent to the 2-dimensional definition which was given first. But at the present stage of the discussion, I prefer to leave, temporarily, a certain amount of fuzziness concerning the $3^{\rm d}$ notion of ``difficult group''. Things will get sharper below (see CONJECTURE~1.5 which follows). Anyway, the general idea here is that the easy groups are those which manage to avoid the {\ibf Whitehead nightmare} which is explained below, and for which we also refer to \cite{17}. It was said earlier that we have a not too difficult implication $\{ \Gamma$ is easy$\} \Longrightarrow \{ \Gamma$ is QSF$\}$. I believe this holds even with the $K$-dependent version of ``easy'', but I have not checked this fact.
So, the difficult groups are defined now to be those for which {\ibf any} representation $(X,f)$ exhibits the following Whitehead nightmare
$$
\mbox{For any compact $K \subset \tilde M^3 (\Gamma)$ there are INFINITELY many}
$$
$$
\mbox{fundamental domains $\Delta \subset X$ {\it s.t.} $K \cap f\Delta \ne \emptyset$.}
$$
The Whitehead nightmare above is closely related to the kind of processes {\it via} which the Whitehead manifold ${\rm Wh}^3$ itself or, even more seriously, the Casson Handles, which have played such a prominent role in Freedman's proof of the TOP $4$-dimensional Poincar\'e Conjecture, are constructed; this is where the name comes from, to begin with.
Here is the story behind these notions. Years ago, various people like Andrew Casson, myself, and others, have written papers (of which \cite{18}, \cite{19}, \cite{7},$\ldots$ are only a sample), with the following general gist. It was shown, in these papers, that if for a closed 3-manifold $M^3$, the $\pi_1 \, M^3$ has some kind of nice geometrical features, then $\pi_1^{\infty} \tilde M^3 = 0$; the list of nice geometrical features in question includes Gromov hyperbolic (or more generally almost convex), automatic (or more generally combable), {\it a.s.o.} The papers just mentioned have certainly been superseded by Perelman's proof of the geometrization conjecture, but it is still instructive to take a look at them from the present vantage point: with hindsight, what they actually did, was to show that, under their respective geometric assumptions, the $\pi_1 \, M^3$ was easy, hence QSF, hence $\pi_1^{\infty} = 0$. Of the three implications involved here, the really serious one was the first, and the only $3$-dimensional one the last.
As a small aside, in the context of those old papers mentioned above, both Casson and myself have developed various group theoretical concepts, out of which Brick and Mihalik eventually abstracted the notion of QSF. For instance, I considered ``Dehn exhaustibility'' which comes with something looking superficially like the QSF, except that now both $K$ and $X$ are smooth and, more seriously so, $f$ is an {\ibf immersion}. Contrary to the QSF itself, the Dehn exhaustibility fails to be presentation independent, but I have still found it very useful, as an ingredient for the proof of the implication (\ref{eq0.8}). Incidentally also, when groups are concerned, Daniele Otera~\cite{10} has proved that QSF and Dehn exhaustibility are equivalent, in the same kind of weak sense in which he and Funar have proved that QSF and GSC are equivalent, namely $\Gamma \in {\rm QSF}$ iff $\Gamma$ possesses {\ibf some} presentation $P$ with $\tilde P$ Dehn exhaustible and/or GSC. See here \cite{5}, \cite{10} and \cite{4}.
So, concerning these same old papers as above, if one forgets about three dimensions and about $\pi_1^{\infty}$ (which I believe to be essentially a red herring in these matters), what they actually prove too, between the lines, is that any $\Gamma$ which satisfies just the geometrical conditions which those papers impose, is in fact easy.
What then next? For a long time I have tried unsuccessfully, to prove that any $\pi_1 \, M^3$ is easy. Today I believe that this is doable, provided one makes use of the geometrization of 3-manifolds in its full glory, {\it i.e.} if one makes full use of Perelman's work. But then, rather recently, I have started looking at these things from a different angle. What the argument for the THEOREM $\forall \, \Gamma \in {\rm QSF}$ does, essentially, is to show that even if $\Gamma$ is difficult, it still is QSF. Then, I convinced myself that the argument in question can be twisted around and then used in conjunction with the fact that we already know now that $\Gamma$ is QSF, so as to prove a much stronger result, which I only state here as a conjecture, since a lot of details are still to be fully worked out. Here it is, in an improved version with respect to the earlier, related statement, to be found in \cite{24}.
\noindent {\bf Conjecture 1.5.} I) {\it $2^{\rm d}$ form: For every $\Gamma$ there is a $2^{\rm d}$ representation
$$
X^2 \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)
$$
such that ${\rm LIM} \, M_2 (f) = \emptyset$, i.e. such that the double point set $M_2 (f) \subset X^2$ is {\ibf closed}.}
II) {\it $3^{\rm d}$ form: For every $\Gamma$ there is a $3^{\rm d}$ representation
$$
X^3 \overset{f}{\longrightarrow} \tilde M^3 (\Gamma) \, ,
$$
coming with a function $Z_+ \overset{\mu}{\longrightarrow} Z_+$, such that for any fundamental domain $\delta \subset \tilde M^3 (\Gamma)$, there are at most $\mu (\Vert \delta \Vert)$ fundamental domains $\Delta \subset X$ such that $f \Delta \cap \delta \ne \emptyset$. Here $\Vert \delta \Vert$ is the word-length norm coming from the group $\Gamma$.
In other words, {\ibf all groups are easy}, i.e. they all can avoid the Whitehead nightmare.}
This certainly implies, automatically, that $\Gamma \in {\rm QSF}$ and also, according to what I have already said above, it should not really be a new thing for $\Gamma = \pi_1 \, M^3$. But in order to get to that, one has to dig deeper into the details of the Thurston Geometrization, than one needs to do for $\pi_1 \, M^3 \in$ QSF, or at least so I think.
All this potential development concerning easy and difficult groups is work in progress. The next possible developments, which I will briefly review now, are not even conjectures but rather hypothetical wild dreams, bordering on science fiction.
To begin with, when it comes to zipping, there is the notion of COHERENCE which I will not redefine here; a very clear exposition of it can be found in \cite{6}. Coherence is largely irrelevant in dimensions $n > 4$ but then, it is tremendously significant when the dimension is four.
\noindent {\bf Question 1.6.} Given any $\Gamma$, can one always find some $2^{\rm d}$ representation $X^2 \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)$ for which there is a COHERENT zipping strategy?
There should be no question here of anything like a nice, equivariant representation. My wild tentative conjecture is that one might be able to prove the statement with question mark above by using the techniques which are very briefly alluded to in \cite{22}, \cite{23}. I am in particular thinking here about the proof of the so-called COHERENCE THEOREM (also stated in \cite{6}). The proofs in question (and also for the rest of \cite{22}, \cite{23}) are actually already completely (hand-)written. If the answer to question~1.6 were yes, then we could construct a {\ibf 4-dimensional} $S_u \tilde M^3 (\Gamma)$, most likely not GSC and certainly not equivariant. But this would come with a boundary, an open non simply connected wild $3$-manifold $V^3$, with $\pi_1^{\infty} V^3 \ne 0$. The point is that this $V^3$, without actually supporting an action of $\Gamma$, would still be, in a certain sense, related to $\Gamma$.
This could conceivably have some interest.
Next, here are two related questions for which we would like to have an answer: How can we reconcile the statement that all $\Gamma$'s are QSF, with the commonly accepted idea that any property valid for all $\Gamma$'s has to be trivial? And then, if we believe Conjecture~1.5 above, where do the ``difficult objects'' in group theory, if any, hide? I believe that there should be a class of objects which, very tentatively, I will call here ``quasi groups'' and among which the finitely presented groups should presumably live somehow like the rational numbers among the reals. My ``quasi'' refers here to quasiperiodic as opposed to periodic and, of course also, to quasi-crystals and/or to Penrose tilings. I cannot even offer here a conjectural definition for ``quasi groups''. I might have some guesses about what a quasi group presentation should be, and then I can already see a serious difficulty in deciding when two presentations define the same object.
\centerline{* \ * \ * \ * \ *}
The many criticisms, comments and suggestions which David Gabai has made in connection with my earlier ill-fated attempt at proving $\pi_1^{\infty} \, \tilde M^3 = 0$ were essential for the present work, which could not have existed without them. As on other occasions, his help was crucial for me.
Thanks are also due to Louis Funar and Daniele Otera for very useful conversations. Actually, it was Daniele who, at that time my PhD student, first told me about QSF, and who also insisted that I should look into it.
In 2007 and 2008 I have lectured on these matters in the Orsay Geometric Group Theory Seminar. Thanks are due to Frederic Harglund and to the other members of the Seminar, for helpful comments and conversations.
Finally, I wish to thank IH\'ES for its friendly help, C\'ecile Cheikhchoukh for the typing and Marie-Claude Vergne for the drawings.
\section{Zipping}\label{sec1}
\setcounter{equation}{0}
The presentations for finitely presented groups $\Gamma$, which we will use, will be singular compact 3-manifolds with boundary $M^3 (\Gamma)$, such that $\pi_1 M^3 (\Gamma) = \Gamma$. Here is how such an $M^3 (\Gamma)$ is obtained. To a smooth handlebody $H$ ($=$ finite union of handles of index $\lambda \leq 1$), we will attach 2-handles via a generic immersion having as core a link projection
\begin{equation}
\label{eq1.1}
\sum_i (S_i^1 \times I_i) \overset{\alpha}{\longrightarrow} \partial H \, .
\end{equation}
We may assume each individual component to be embedded. The double points of $\alpha$ are little squares $S \subset \partial H$. These are the singularities ${\rm Sing} \, M^3 (\Gamma) \subset M^3 (\Gamma)$ and, to distinguish them from other singularities to occur later, they are called {\ibf immortal}. Without loss of generality, the immortal singularities $S$ live on the free part of $\partial$ (0-handles). There are no 3-handles for $M^3 (\Gamma)$.
Following rather closely \cite{14}, we will discuss now the equivalence relation forced by the singularities of a non-degenerate simplicial map $X \overset{f}{\longrightarrow} M$. In this story, $M$ could be any simplicial complex of dimension $n$ (although in real life it will just be $M = \tilde M^3 (\Gamma)$ or $M = M^3 (\Gamma)$ itself) and $X$ a countable Gromov multicomplex (where the intersection of two simplices is not just a common face, but a subcomplex). Since $f$ is non-degenerate, we will have $\dim X \leq \dim M$. We will need $X$'s which, in general, will not be locally finite, and we will endow them with the {\ibf weak topology}.
\begin{equation}
\label{eq1.2}
x \in {\rm Sing} \, (f) \subset X
\end{equation}
is a point of $X$ in the neighbourhood of which $f$ fails to inject, {\it i.e.} fails to be immersive. Do not mix the permanent, immortal singularities of $M^3 (\Gamma)$, {\it i.e.} $S \subset {\rm Sing} \, M^3 (\Gamma)$, with the impermanent, {\ibf mortal} singularities (\ref{eq1.2}) of the map $f$ (as defined already in \cite{14}, \cite{15}, \cite{16} and \cite{6}).
Quite trivially, the map $f$ defines an equivalence relation $\Phi (f) \subset X \times X$, where $(x_1 , x_2) \in \Phi (f)$ means just that $fx_1 = fx_2$. An equivalence relation $R \subset \Phi (f)$ will be called {\ibf $f$-admissible}, if it fulfils the following condition
\noindent (2.2.1) \quad Assume $\sigma_1 , \sigma_2 \subset X$ are two simplices of the same dimension, with $f \sigma_1 = f \sigma_2$. If we can find pairs of points $x \in {\rm int} \, \sigma_1$, $y \in {\rm int} \, \sigma_2$ with $fx = fy$, then the following IMPLICATION holds
$$
(x,y) \in R \Longrightarrow R \ \mbox{identifies} \ \sigma_1 \ \mbox{to} \ \sigma_2 \, .
$$
The equivalence relation $\Phi (f)$ and the double point set $M^2 (f) \subset X \times X$ are, of course, closely related, since $M^2 (f) = \Phi (f) - {\rm Diag} \, X$ and we will also introduce the following intermediary subset
\begin{equation}
\label{eq1.3}
\hat M^2 (f) = M^2 (f) \cup {\rm Diag} \, ({\rm Sing} \, (f)) \subsetneqq \Phi (f) \, .
\end{equation}
This has a natural structure of simplicial complex of which ${\rm Diag} \, ({\rm Sing} \, (f))$ is a subcomplex. We will endow it, for the time being, with the weak topology.
We will be interested in subsets $\tilde R \subset \hat M^2 (f)$ of the following form. Start with an arbitrary equivalence relation $R \subset \Phi (f)$, and then go to $\tilde R = R \cap \hat M^2 (f)$, when $R$ is an $f$-admissible equivalence relation; then we will say that $\tilde R$ itself is an {\ibf admissible set}. In other words the admissible sets are, exactly, the traces on $\hat M^2 (f)$ of $f$-admissible equivalence relations.
\begin{equation}
\label{eq1.4}
\mbox{A subset of $\hat M^2 (f)$ is admissible ({\it i.e.} it is an $\tilde R$) iff}
\end{equation}
$$
\mbox{it is both open and closed in the weak topology of $\hat M^2 (f)$.}
$$
Automatically, admissible sets are subcomplexes of $\hat M^2 (f)$.
When $R$ is an $f$-admissible equivalence relation, then $X/R$ is again a (multi) complex and its induced quotient space topology is again the weak topology. Also, we have a natural simplicial diagram
\begin{equation}
\label{eq1.5}
\xymatrix{
X \ar[rr]^f \ar[dr]^{\pi(R)} &&M \\
&X/R \ar[ur]_{f_1(R)}
}
\end{equation}
The $\tilde R \cap {\rm Diag} \, ({\rm Sing} \, (f)) \subset \tilde R \subset \hat M^2 (f)$ is a subcomplex, naturally isomorphic to a piece, denoted $\tilde R \cap {\rm Sing} \, (f) \subset {\rm Sing} \, (f) \subset X$, which is both open and closed in ${\rm Sing} \, (f)$.
\noindent {\bf Claim 2.5.1.} In the context of (\ref{eq1.5}) we have the following equality
$$
{\rm Sing} \, (f_1 (R)) = \pi (R) \, ({\rm Sing} \, (f) - \tilde R \cap {\rm Sing} \, (f)) \, .
$$
All these things, just like the next two lemmas, are easy extensions, to our present singular set-up, of the little theory developed in \cite{14} in the context of smooth 3-manifolds (immortal singularities were absent in \cite{14}). In fact this little, abstract-nonsense type theory, developed in \cite{14}, is quite general, making such extensions painless. We certainly do not even aim at full generality here.
\noindent {\bf Lemma 2.1.} {\it There is a {\ibf unique} $f$-admissible equivalence relation $\Psi (f) \subset \Phi (f)$ which has the following properties, which also characterize it.}
I) {\it When one goes to the natural commutative diagram, on the lines of} (\ref{eq1.5}), {\it i.e. to
$$
\xymatrix{
X \ar[rr]^f \ar[dr]_{\pi = \pi (\Psi (f))} &&M \\
&X/\Psi (f) \ar[ur]_{f_1= f_1 (\Psi (f))}
}
\eqno (2.5.2)
$$
then we have ${\rm Sing} \, (f_1) = \emptyset$, i.e. $f_1$ is an immersion.}
II) {\it Let now $R$ be {\ibf any} equivalence relation such that $R \subset \Phi (f)$. For such an $R$ there is always a diagram like} (\ref{eq1.5}), {\it except that it may no longer be simplicial. But let us assume now also, that ${\rm Sing} \, (f_1 (R)) = \emptyset$, and also that $R \subset \Psi (f)$. Then we necessarily have $R = \Psi (f)$.}
We may rephrase the II) above, by saying that $\Psi (f)$ is the smallest equivalence relation, compatible with $f$, which kills all the (mortal) singularities.
Like in \cite{14}, we start by outlining a very formal definition for $\Psi (f)$, with which the statement above can be proved easily. For that purpose, we will introduce a new, non-Hausdorff topology for $\hat M^2 (f)$, which we will call the {\ibf $Z$-topology}. The closed sets for the $Z$-topology are the finite unions of admissible subsets. For any subset $E \subset \hat M^2 (f)$, we will denote by $C\ell_Z (E)$, respectively by $\widehat{C\ell}_Z (E) \supset C\ell_Z (E)$, the closure of $E$ in the $Z$-topology, respectively the smallest subset of $\hat M^2 (f)$ which contains $E$ and which is both $Z$-closed and admissible, {\it i.e.} the smallest equivalence relation containing $E$ and which is also $Z$-closed. One can check that the {\ibf irreducible} closed subsets are now exactly the $C\ell_Z (x,y)$ with $(x,y) \in \hat M^2 (f)$ and, for them we also have that $\widehat{C\ell}_Z (x,y) = C\ell_Z (x,y)$. With all these things, I will take for $\Psi (f)$ the following definition
$$
\Psi (f) \underset{\rm def}{=} \widehat{C\ell}_Z ({\rm Diag} \, ({\rm Sing} \, (f))) \cup {\rm Diag} \, (X) \subset \Phi (f) \, . \eqno (2.5.3)
$$
Here, the $\widehat{C\ell}_Z$ might a priori miss some of ${\rm Diag} \, (X) - {\rm Diag} \, ({\rm Sing} \, (f))$, which is the reason for adding the full ${\rm Diag} \, (X)$ into our definition. The formula (2.5.3) defines, automatically, an $f$-admissible equivalence relation, for which we have
$$
\widetilde{\Psi (f)} = \widehat{C\ell}_Z ({\rm Diag} \, ({\rm Sing} \, (f))) \, .
$$
Also, starting from (2.5.3), it is easy to prove lemma~2.1, proceeding on the following lines, just like in \cite{14}.
With the notations already used in the context of Claim 2.5.1, one checks first that $\widetilde{\Psi (f)} \cap {\rm Sing} \, (f) = {\rm Sing} \, (f)$ and, by the same Claim 2.5.1, this implies that ${\rm Sing} \, (f_1 (\Psi (f))) = \emptyset$, proving thereby I).
So, consider now the a priori quite arbitrary $R$ from II). We claim that $R$ has to be $f$-admissible. The argument goes as follows. The map $f_1 (R)$ being immersive, $X/R$ is Hausdorff, hence $R \subset X \times X$ is closed and hence so is also $\tilde R = R \cap \hat M^2 (f)$. Weak topology is meant here all along, in this little argument, and not the $Z$-topology.
Next, any non-interior point for $\tilde R$ would be a singularity for $f_1 (R)$. This implies that $\tilde R \subset \hat M^2 (f)$ is open. The upshot is that $\tilde R$ is both open and closed in the weak topology. Hence, by (\ref{eq1.4}), our $R$ has to be $f$-admissible, as it was claimed.
So, assume now that $R$ is $f$-admissible, with ${\rm Sing} \, f_1 (R) = \emptyset$ and with $R \subset \Psi$. By (2.5.1), $\tilde R \supset {\rm Diag} \, ({\rm Sing} \, (f))$. Being admissible, $\tilde R$ is $Z$-closed and, by assumption, it is an equivalence relation too. Being already contained in $\Psi$ it cannot fail to be equal to it. This proves our lemma~2.1. $\Box$
But then, once we know that our unique $\Psi (f)$ is well-defined and exists, we can also proceed now differently. I will describe now a process, called ZIPPING, which is a more direct constructive approach to $\Psi (f)$, to be used plenty in this paper.
So, let us look for a most efficient minimal way to kill all the mortal singularities. Start with two simplices $\sigma_1 , \sigma_2 \subset X$ of the same dimension, with $f \sigma_1 = f \sigma_2$, for which there is a singularity
$$
{\rm Sing} \, (f) \ni x_1 \in \sigma_1 \cap \sigma_2 \, .
$$
We go then to a first quotient of $X$, call it $X_1 \overset{f_1}{\longrightarrow} M$, which kills $x_1$ via a {\ibf folding map}, identifying $\sigma_1$ to $\sigma_2$. Next, start with some $x_2 \in {\rm Sing} \, (f_1)$, then repeat the same process, a.s.o. Provided things do not stop at any finite time, we get an increasing sequence of equivalence relations
$$
\rho_1 \subset \rho_2 \subset \ldots \subset \rho_n \subset \rho_{n+1} \subset \ldots \subset \Phi (f) \, . \eqno (2.5.4)
$$
The union $\rho_{\omega} = \overset{\infty}{\underset{1}{\bigcup}} \ \rho_i$ is again an equivalence relation, subcomplex of $\Phi (f)$, {\it i.e.} closed in the weak topology. The map
$$
X_{\omega} = X / \rho_{\omega} \overset{f_{\omega}}{\longrightarrow} M^3 (\Gamma)
$$
is simplicial. None of the $\rho_1 , \rho_2 , \ldots$ is $f$-admissible, of course, and, in general, neither is $\rho_{\omega}$. We then pick some $x_{\omega} \in {\rm Sing} \, (f_{\omega})$ and go to $X/\rho_{\omega + 1} \overset{f_{\omega + 1}}{-\!\!-\!\!\!\longrightarrow} M$. From here on, one proceeds by transfinite induction and, since $X$ is assumed to be countable, the process has to stop at some countable ordinal $\omega_1$. The following things happen at this point.
\noindent (2.5.5) \quad Using lemma~2.1 one can show that $\rho_{\omega_1} = \Psi (f)$. This makes $\rho_{\omega_1}$ canonical, but not $\omega_1$ itself. There is no unique strategy leading to $\Psi (f)$, this way, or any other way.
\noindent {\bf Claim 2.5.6.} One can {\it choose} the sequence (2.5.4) such that $\omega_1 = \omega$, {\it i.e.} such that, just with (2.5.4), we get $\rho_{\omega} = \Psi (f)$. It is this kind of sequence (2.5.4) (which is by no means unique either) which will be, by definition, a {\ibf strategy for zipping $f$}, or just a ``zipping''.
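In the toy case where $X$ is 1-dimensional, {\it i.e.} a labelled graph, and $f$ a label-preserving graph morphism, the folding process just described is the classical Stallings folding, and a zipping strategy amounts to a finite loop. The sketch below (the encoding, the names, and the union-find bookkeeping are illustrative choices of ours, not from the text) accumulates the equivalence relation $\Psi(f)$ by repeated foldings, until the induced map on the quotient is an immersion:

```python
# Toy 1-dimensional model of zipping: Stallings folding of a labelled
# graph.  Edge labels play the role of f-images; a mortal singularity
# is a vertex with two distinct outgoing edges carrying the same label,
# and a folding step identifies their endpoints.

class UnionFind:
    """Accumulates the equivalence relation rho_1 < rho_2 < ... on vertices."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def zip_graph(n_vertices, edges):
    """edges: list of (u, label, v).  Folds until no singularity remains
    and returns the edge set of the quotient X / Psi(f)."""
    uf = UnionFind(n_vertices)
    changed = True
    while changed:
        changed = False
        seen = {}                       # (source class, label) -> target class
        for (u, lab, v) in edges:
            key = (uf.find(u), lab)
            w = uf.find(v)
            if key in seen and seen[key] != w:
                uf.union(seen[key], w)  # one folding step
                changed = True
                break
            seen[key] = w
    return {(uf.find(u), lab, uf.find(v)) for (u, lab, v) in edges}

# Two a-edges at vertex 0 force their endpoints 1 and 2 to be identified:
folded = zip_graph(3, [(0, "a", 1), (0, "a", 2)])
```

As in (2.5.5), the quotient is canonical while the order of the folding steps is not: any order of processing the singularities yields the same folded graph.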
\noindent {\bf Lemma 2.2.} {\it The following, induced map, is surjective}
$$
\pi_1 X \longrightarrow \pi_1 (X/\Psi (f)) \, .
$$
Given the $M^3 (\Gamma)$, we will apply, in this series of papers, the little $\Psi / \Phi$ theory above in a sequence of contexts of increasing complexity. The prototype is to be shown next, and the reader should compare it with \cite{16}, \cite{18}, \cite{19}.
Our $M^3 (\Gamma)$, to which we come back now, is naturally divided into handles $h^{\lambda}$ of index $\lambda \leq 2$. For each such $h$ we will distinguish some individually embedded 2-cells $\varphi \subset \partial h$ called {\ibf faces}. The idea is that each face is shared by exactly two $h$'s, the whole $M^3 (\Gamma)$ being the union of the $h$'s along common faces, with the immortal singularities being created automatically, for free so to say, in the process. To begin with, the attaching zone of an $h^1$ consists of two faces, little discs occurring again on the $h^0$'s. This first batch of faces creates a system of disjoint curves $\gamma \subset \partial H$, where $H$ is the handlebody put together from the $h^0$'s and $h^1$'s; the $\gamma$'s are the boundaries of the small discs of type $h^0 \cap h^1$. Each $h^2$ has an attaching zone which, without loss of generality, we may assume embedded in $\partial H$; the $\gamma$'s cut this attaching zone into long rectangles. The rectangles are a second batch of faces, the first ones were discs. Each rectangle is contained either in the lateral surface of an $h^1$, going parallel to the core of $h^1$, or in the boundary of an $h^0$, connecting two discs. Two rectangles occurring on the same $\partial h^0$ may cut transversally through each other, along a little square, which is now an immortal singularity $S$. Without any loss of generality it may be assumed that all our immortal singularities $S$ occur on the lateral surface of the $0$-handles. Retain that the $S$'s are NOT faces, only parts of such.
Forgetting now about the Morse index $\lambda$, consider all the handles of $M^3 (\Gamma)$, call them $h_1 , h_2 , \ldots , h_p$. Some ``initial''
$$
h \in \{ h_1 , h_2 , \ldots , h_p \}
$$
will be fixed once and for all. If ${\mathcal F} (h_i)$ is the set of all the faces occurring on the boundary of $h_i$ (discs or rectangles), we note that the set ${\mathcal F} \underset{\rm def}{=} \underset{i}{\sum} \ {\mathcal F} (h_i)$, which is of even cardinality, comes equipped with a fixed-point free involution ${\mathcal F} \overset{j}{\longrightarrow} {\mathcal F}$, with the feature that, if $x \in {\mathcal F} (h_{\ell})$ then $jx \in {\mathcal F} (h_k)$ with $k \ne \ell$, and such that
\begin{equation}
\label{eq1.6}
\mbox{Whenever $x \in {\mathcal F} (h_{\ell})$ and $jx \in {\mathcal F} (h_k)$, one glues $h_{\ell}$ to $h_k$}
\end{equation}
$$
\mbox{along $x = jx$, so as to get $M^3 (\Gamma)$.}
$$
With these things, starting from our arbitrarily chosen $h$, we will introduce a class of 3-dimensional thick paths, formally modelled on $[0,\infty)$, which randomly explore $M^3 (\Gamma)$, constructed according to the following recipe.
\noindent (2.6.1) \quad For the initial $h$, choose some $x_1 \in {\mathcal F} (h)$. There is exactly one handle in $\{ h_1 , h_2 , \ldots , h_p \} - \{ h \}$, call it $x_1 h$, such that $x_1 h$ houses $jx_1$.
\noindent (2.6.2) \quad Inside ${\mathcal F} (x_1 h) - \{ jx_1 \}$ choose some face $x_2$. There is then exactly one handle in $\{ h_1 , h_2 , \ldots , h_p \} - \{ x_1 h \}$, call it $x_1 \, x_2 \, h$, such that $x_1 \, x_2 \, h$ houses $jx_2$.
$$\ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots$$
This kind of process continues indefinitely, producing infinitely many sequences of words written with the letters $\{ h ; x_1 , x_2 , \ldots \}$, and taking the form
$$
h , \ x_1 h , \ x_1 \, x_2 \, h , \ x_1 \, x_2 \, x_3 \, h , \ \ldots \eqno (S_{\infty})
$$
Of course, $(S_{\infty})$ can also be thought of as just an infinite word $x_1 \, x_2 \, x_3 \ldots$, written with letters $x_{\ell} \in {\mathcal F}$. The rule for constructing it makes it automatically reduced, i.e. $x_{\ell + 1} \ne jx_{\ell}$, $\forall \, \ell$.
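In formulae, the recipe (2.6.1), (2.6.2) amounts to demanding that
$$
x_1 \in {\mathcal F} (h) \quad \mbox{and} \quad x_{\ell + 1} \in {\mathcal F} (x_1 \ldots x_{\ell} \, h) - \{ jx_{\ell} \} \, , \ \forall \, \ell \geq 1 \, ,
$$
so that at each stage the number of admissible continuations is ${\rm card} \, {\mathcal F} (x_1 \ldots x_{\ell} \, h) - 1$; this is the precise sense in which the infinite word $x_1 \, x_2 \, x_3 \ldots$ is reduced.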
{\ibf All} the sequences $(S_{\infty})$ can be put together into the following infinite, non locally finite complex, endowed with a tautological map $F$ into $M^3 (\Gamma)$, namely
\begin{equation}
\label{eq1.7}
X \underset{\rm def}{=} h \ \cup \sum_{x_1 \in {\mathcal F} (h)} x_1 h \ \cup \sum_{x_2 \in {\mathcal F} (x_1 h)} x_1 \, x_2 \, h \cup \ldots \overset{F}{\longrightarrow} M^3 (\Gamma) \, .
\end{equation}
Before we apply our little $\Phi / \Psi$ theory to this (\ref{eq1.7}), let us make a few remarks concerning it.
\noindent (2.7.1) \quad Our tree-like $X$ is certainly {\ibf arborescent}, {\it i.e.} gettable from a point by a sequence of Whitehead dilatations; and arborescence implies GSC.
\noindent (2.7.2) \quad Up to a point, the construction of $X$ is modelled on the Cayley graph. BUT there is no group involved in our construction and hence no group action either. Not all the infinite words $x_1 \, x_2 \ldots$ are acceptable for $(S_{\infty})$. There is not even a monoid present. All this makes the present construction {\ibf not} {\it quite} like those in \cite{16}, \cite{18} or \cite{19}.
\noindent {\bf Lemma 2.3.} {\it In the style of} (2.5.2), {\it we consider now the diagram}
\begin{equation}
\label{eq1.8}
\xymatrix{
X \ar[rr]_F \ar[dr] &&M^3 (\Gamma) \\
&X/\Psi (F) \ar[ur]_{F_1}
}
\end{equation}
{\it Then, the $X/\Psi (F) \overset{F_1}{\longrightarrow} M^3 (\Gamma)$ IS the universal covering space $\tilde M^3 (\Gamma) \overset{\pi}{\longrightarrow} M^3 (\Gamma)$.}
\noindent {\bf Proof.} Lemma~2.2 tells us that $\pi_1 X \to \pi_1 (X/\Psi (F))$ is surjective and, $X$ being arborescent (2.7.1), we have $\pi_1 X = 0$; hence $\pi_1 (X/\Psi (F)) = 0$. Then $F_1$ clearly has the path lifting property and it is also \'etale. End of the argument.
I actually like to think of this lemma~2.3 as being a sort of ``naive theory of the universal covering space''.
We move now to the following natural tessellation
\begin{equation}
\label{eq1.9}
\tilde M^3 (\Gamma) = \bigcup_{y \in \Gamma} \ \sum_{1}^{p} \, y \, h_i \, .
\end{equation}
Any lift of the initial $h \subset X$ to $\tilde M^3 (\Gamma)$ automatically comes with a canonical lift of the whole of $X$, which respects the handle-identities. We will denote by $f$ this lift of $F$ to $\tilde M^3 (\Gamma)$
$$
\xymatrix{
X \ar[rr]^f \ar[dr]_F &&\tilde M^3 (\Gamma) \ar[dl]^{\pi} \\
&M^3 (\Gamma)
} \eqno (2.9.1)
$$
The existence of $f$ follows from the path lifting properties of $\pi$, combined with the fact that, locally, our $X$ is always something of the following form, just like in $M^3 (\Gamma)$ and/or in $\tilde M^3 (\Gamma)$:
$$
\underset{\overbrace{{\mathcal F} (x_1 \ldots x_i h) \ni x_{i+1} = jx_{i+1} \in {\mathcal F} (x_1 \ldots x_{i+1} h)}}{\quad \quad \ \ x_1 \, x_2 \ldots x_i \, h \cup x_1 \, x_2 \ldots x_i \, x_{i+1} \, h \, .}
$$
Clearly we have ${\rm Sing} \, (f) = {\rm Sing} \, (F) \subset X$ and these mortal singularities are exactly the disjoint arcs $\sigma$ of the form $\sigma = h^0 \cap h^1 \cap h^2$ where, at the level of $M^3 (\Gamma)$, the $h^{\lambda}$ and $h^{\mu}$ have in common the faces $\varphi_{\lambda \mu}$, {\it i.e.}
$$
\varphi_{01} = h^0 \cap h^1 \, , \ \varphi_{12} = h^1 \cap h^2 \, , \ \varphi_{20} = h^2 \cap h^0 \, , \ \mbox{with} \ \sigma = \varphi_{01} \cap \varphi_{12} \cap \varphi_{20} \, . \eqno (2.9.2)
$$
The $\sigma$'s, which are all mortal, live far from the immortal $S$'s. Around $\sigma$, the (\ref{eq1.7}) looks, locally, like
$$
\{\mbox{The Riemann surface of} \ \log z \} \times R \, .
$$
When one goes from $X$ to $fX$ or to $FX$, then the following things happen, as far as the immortal singularities are concerned:
I) Two distinct immortal singularities, together with their neighbourhoods inside $X$, call them $(V_1 , S_1) \subset (X , {\rm Sing} \, X) \supset (V_2 , S_2)$, may get identified, $(V_1 , S_1) = (V_2 , S_2)$.
II) The zipping flow of $\Psi (f) = \Psi (F)$ (see here the next lemma~2.4 too) may create new immortal singularities of $fX = \tilde M^3 (\Gamma)$, by forcing glueings of the type $h^0 \underset{\varphi}{\cup} h_i^2 \underset{\varphi}{\cup} h_j^2$. Their images via $\pi$ are then immortal singularities of $M^3 (\Gamma)$. Our $3^{\rm d}$ representation space $X$ can, itself, have immortal singularities.
From lemma~2.3 one can easily deduce the following
\noindent {\bf Lemma 2.4.} 1) {\it We have $\Psi (f) = \Psi (F) \subset X \times X$.}
\noindent 2) {\it The map $X \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)$ (see} (2.9.1){\it ) is such that}
\begin{equation}
\label{eq1.10}
\Psi (f) = \Phi (f) \, .
\end{equation}
Since $X$, being collapsible, is certainly GSC, and since the map $f$ is surjective, our $X \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)$ IS a representation for $\Gamma$, albeit one which has none of the desirable features of the REPRESENTATION THEOREM. But this, presumably the simplest possible representation for $\Gamma$, will be the first step towards the theorem in question.
But before anything else, a small drawback of our newly found representation will have to be corrected. The problem here is that, although it is a union of handles, our $X$ is {\ibf not} a {\ibf ``handlebody''}. We distinguish here HANDLEBODIES (singular of course) as being unions of handles with nice attaching maps of the $\lambda$-handles to the $(\lambda - 1)$-skeleton. We will not elaborate on this notion in the most general case but, in our specific situation with only handles of indices $\lambda \leq 2$, what we demand is that, besides each 1-handle being normally attached to two 0-handles, each attaching zone of a 2-handle should find a necklace of successive handles of index $\lambda = 0$ and $\lambda = 1$, into the surface of which it should embed. This necklace itself could have repetitions, of course. Our $X$ (\ref{eq1.7}) does {\ibf not} fulfill this last condition. We will get around this difficulty by a REDEFINITION of $(X , F , M^3 (\Gamma))$, and hence of $(f,\tilde M^3 (\Gamma))$ too. More manageable mortal singularities will also be obtained in this process.
We proceed as follows. Start by forgetting the handlebody structure of $M^3 (\Gamma)$ and replace $M^3 (\Gamma)$ by a $2^{\rm d}$ cell-complex, via the following two stages. We first go from the handlebody $H$ (see the beginning of the section) to
$$
A^2 \underset{\rm def}{=} \{\mbox{the union of the {\it boundaries} of the various handles of index $\lambda = 0$ and $1$}\}.
$$
This is taken to be a very finely subdivided simplicial complex, with the corresponding faces $\varphi$ occurring as subcomplexes. Our $A^2$ comes equipped with individually embedded simplicial closed curves $c_1 , c_2 , \ldots , c_{\mu}$ which correspond to the cores of the attaching zones of the $2$-handles. The immortal singularities correspond to intersections $c_i \cap c_j$ ($i \ne j$), at vertices of $A^2$. Next, we go to the cell-complex $B^2$ gotten by attaching a 2-cell $D_i^2$ to $A^2$ along each $c_i$. The $B^2$ continues to be a presentation for $\Gamma$, of course. To each of the building blocks $h , x_1 \, h , \ldots , x_1 \ldots x_i \, h$ corresponds a subcomplex of $B^2$. We can glue them together by exactly the same recipe as in (\ref{eq1.7}) and, this way, we generate a purely 2-dimensional version of (\ref{eq1.7}), in the realm of $2^{\rm d}$ cell-complexes (or even simplicial complexes if we bother to subdivide a bit, see here also the lemma~2.5 below)
\begin{equation}
\label{eq1.11}
X ({\rm provisional}) \overset{F ({\rm provisional})}{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\longrightarrow} B^2 \, .
\end{equation}
The next stage is to get 3-dimensional again, by applying to the (\ref{eq1.11}) the standard recipe
$$
\{\mbox{cell (or simplex) of dimension} \ \lambda\} \Longrightarrow \{3^{\rm d} \ \mbox{handle of index} \ \lambda\} \, . \eqno (2.11.1)
$$
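Spelled out, the recipe (2.11.1) is the usual thickening of a $\lambda$-cell into a $3^{\rm d}$ handle of index $\lambda$,
$$
D^{\lambda} \Longrightarrow h^{\lambda} = D^{\lambda} \times D^{3 - \lambda} \, , \quad \mbox{with} \ \partial h^{\lambda} = \partial D^{\lambda} \times D^{3 - \lambda} \ \mbox{and} \ \delta h^{\lambda} = D^{\lambda} \times \partial D^{3 - \lambda} \, ,
$$
in the notations $\partial =$ attaching zone and $\delta =$ lateral surface.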
Notice that, when a cell-{\ibf complex} is changed into a union of handles via the recipe (2.11.1), then this is, automatically, a {\ibf handlebody}, and not merely a union of handles. This changes (\ref{eq1.11}) into a new version of (\ref{eq1.7}), sharing all the good features (2.7.1) to (\ref{eq1.10}) which the (\ref{eq1.7}) already had, except that (2.9.2) and the $\log z$-type structure of the mortal singularities will no longer be with us (see below). This new version
\begin{equation}
\label{eq1.12}
{\rm new} \, X \overset{{\rm new} \, F}{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\longrightarrow} {\rm new} \, M^3 (\Gamma)
\end{equation}
will replace (\ref{eq1.7}) from now on. We will have a handlebody decomposition
\begin{equation}
\label{eq1.13}
{\rm new} \, \tilde M^3 (\Gamma) = ({\rm new} \, M^3 (\Gamma))^{\sim} = \bigcup_{0 \leq \lambda \leq 2} h_i^{\lambda} \, .
\end{equation}
By construction, the new $X$ is now a singular handlebody
\begin{equation}
\label{eq1.14}
{\rm new} \, X = \bigcup_{\overbrace{\lambda , i , \alpha}} h_i^{\lambda} (\alpha) \quad \mbox{where} \quad ({\rm new} \, f)(h_i^{\lambda} (\alpha)) = h_i^{\lambda} \, .
\end{equation}
Our $h_i^{\lambda} (\alpha)$'s are here bona fide $3^{\rm d}$ handles of index $\lambda$. The $\{ \alpha \}$ is a countable system of indices, which is $(\lambda , i)$-dependent.
\noindent AN IMPORTANT CHANGE OF NOTATION. From now on $M^3 (\Gamma)$, $\tilde M^3 (\Gamma)$, $X$ will mean the {\ibf new} objects which we have just introduced. When the others, from before, still need to be mentioned, they will be referred to as the {\ibf old} ones.
Without any loss of generality, the immortal singularities $S$ are again corralled on the free part of the lateral surface of the $0$-handles and, also, all the desirable features from the old context continue to be with us. The new context will also have, among others, the virtue that we will be able to plug into it, with hardly any change, the theory developed in the previous paper \cite{26}.
\noindent THE SINGULARITIES OF THE $({\rm new}) \, X$ AND $f$. Like in the context of the singular handlebody $T \overset{F}{\longrightarrow} \tilde M^3$ from (2.9) in \cite{26}, the singularities of $f$ are now no longer modelled on $\log z$, as was the case for the old $f$, but they occur now as follows.
(Description of ${\rm Sing} \, (f) = {\rm Sing} \, (F) \subset X$.) The ${\rm Sing} \, (f)$ ($=$ mortal singularities of $f$) is a union of 2-cells
\begin{equation}
\label{eq1.15}
D^2 \subset \delta \, h_{i_1}^{\lambda} (\alpha_1) \cap \partial \, h_{i_2}^{\mu} (\alpha_2) \, , \quad \mu > \lambda
\end{equation}
where for any given $\{ D^2 , (i_1 , \alpha_1 )\}$ we have {\ibf infinitely} many distinct $(i_2 , \alpha_2)$'s. Also, for any handle $h$ our notation is $\partial =$ attaching zone and $\delta =$ lateral surface. One virtue of these mortal singularities, among others, is that they are amenable to the treatment from \cite{26}; the older $\log z$ type singularities were not. It should be stressed that (\ref{eq1.15}) lives far from ${\rm Sing} \, (X)$.
In a toy-model version, figure 2.1 offers a schematic view of the passage from the old context to the new one.
$$
\includegraphics[width=11cm]{FigPoFeb09-2.eps}
$$
\centerline{Figure 2.1. This is a very schematic view of our redefinition old $\Rightarrow$ new.}
\begin{quote}
We have chosen here a toy-model where $M^3 (\Gamma)$ is replaced by $S^1 \times S^1$, with the old $\tilde M^3 (\Gamma)$ suggested in (I). When we go to the new context in (II), vertices become $0$-handles. The edges should become $1$-handles, but we have refrained from drawing this explicitly, so as to keep the figures simple. In (III) we are supposed to see singularities like in (\ref{eq1.15}). The pointwise $\log z$ singularity of the old $X$ is now an infinite chain of (\ref{eq1.15})-like singularities smeared in a $\log z$ pattern around a circle $S^1 \subset \delta (\mbox{$0$-handle})$. The notation ``$\log z$'' in (III) is supposed to suggest this.
\end{quote}
Our $X$ may also have a set of immortal singularities ${\rm Sing} \, (X) \subset X - {\rm Sing} \, (f)$ and it is exactly along ${\rm Sing} \, (f) + {\rm Sing} \, (X)$ that it fails to be a 3-manifold. It is along ${\rm Sing} (f)$ that it fails to be locally finite. Generically, the immortal singularities ${\rm Sing} (\tilde M^3 (\Gamma))$ are created by the zipping process and not mere images of the immortal ${\rm Sing} \, (X)$. Think of these latter ones as exceptional things, and with some extra work one could avoid them altogether. Anyway, immortal singularities are a novelty with respect to \cite{26}.
Also, in the next sections, the representation spaces will be completely devoid of immortal singularities.
The singularity $\sigma$ from (2.9.2) becomes now a finite linear chain of successive $0$-simplexes and $1$-simplexes. On the two extreme $0$-simplexes there now occurs something like the ``$\log z$'' in figure~2.1-(III), while outside of them we have something like the much more mundane figure 2.2-(II).
For the needs of the next sections, a bit more care has to go into the building of the singular handlebody $M^3 (\Gamma)$. To describe this, we reverse provisionally the transformation (2.11.1) and change $M^3 (\Gamma)$ back into a 2-dimensional cell-complex $[M^3 (\Gamma)]$, coming with $[\tilde M^3 (\Gamma)] = [M^3 (\Gamma)]^{\sim}$.
\noindent {\bf Lemma 2.5.} 1) {\it Without any loss of generality, i.e. without losing any of our desirable features, our $M^3 (\Gamma)$ can be chosen such that $[M^3 (\Gamma)]$ is a {\ibf simplicial} complex.}
2) {\it At the same time, we can also make so that any given $2$-handle should see at most {\ibf one} immortal singularity on its $\partial h^2$. Like before, this should involve another, distinct, $\partial h^2$.}
3) {\it All these properties are preserved by further subdivisions.}
$$
\includegraphics[width=13cm]{FigPoFeb09-3.eps}
$$
\centerline{Figure 2.2. We are here at the level of our $({\rm new}) \, X$.}
\begin{quote}
The $1$-handle $h^1$ corresponds to one of the small 1-simplices of $A^2 \mid \sigma$. The hatched areas are mortal singularities $D^2 \subset {\rm Sing} \, (f)$.
\end{quote}
Here is how the easy lemma~2.5 is to be used. For convenience, denote $[\tilde M^3 (\Gamma)]$ by $K^2$ and let $K^1$ be its 1-skeleton. Consider then any arbitrary, non-degenerate simplicial map
\begin{equation}
\label{eq1.16}
S^1 \overset{\psi}{\longrightarrow} K^1 \, .
\end{equation}
\noindent {\bf Lemma 2.6.} {\it The triangulation of $S^1$ which occurs in {\rm (\ref{eq1.16})} extends to a triangulation of $D^2$ such that there is now another simplicial {\ibf non-degenerate} map}
$$
D^2 \overset{\Psi}{\longrightarrow} K^2 \quad \mbox{with} \quad \Psi \mid S^1 = \psi \, .
$$
The proof of lemma~2.6 uses the same argument as the proof of the ``simplicial lemma''~2.2 in \cite{26}. That kind of argument certainly has to use simplicial structures, hence the need for our lemma~2.5. I will only offer here some comments, without going into details.
The argument accompanying figure~2.7.3 in \cite{26} certainly needs the requirement that two edges of $X^2$ with a common vertex should be joinable by a continuous path of 2-simplices. Without loss of generality, our $K^2$ verifies this condition (actually $[M^3 (\Gamma)]$ already does so). Next, it may be instructive to see what the argument boils down to when $X^2$ is reduced to a unique 2-simplex $\Delta$, the boundary of which is covered twice by (\ref{eq1.16}). Then $D^2$ in the lemma is gotten out of four copies of $\Delta$. Start with a central $\Delta$, and then mirror it along its three sides. End of comment.
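To spell out this last example: if $\Delta_0$ denotes the central copy of $\Delta$ and $\Delta_1 , \Delta_2 , \Delta_3$ its mirror images along the three sides, then
$$
D^2 = \Delta_0 \cup \Delta_1 \cup \Delta_2 \cup \Delta_3 \, ,
$$
the map $\Psi$ sending $\Delta_0$ identically onto $\Delta$ and folding each $\Delta_k$, $k = 1,2,3$, back onto $\Delta$ by the corresponding reflection; the six outer edges of the $\Delta_k$'s make up $\partial D^2$, which covers $\partial \Delta$ twice, and $\Psi$ is non-degenerate on each individual simplex, as required.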
\section{Constructing equivariant locally-finite representations for $\tilde M^3 (\Gamma)$}\label{sec2}
\setcounter{equation}{0}
The present section will follow relatively closely \cite{26} and we will show how to extend that $3^{\rm d}$ part of \cite{26} which culminates with lemma~3.3, to $\tilde M^3 (\Gamma)$, which replaces now the smooth $\tilde M^3$ from \cite{26}. We will worry about the $2^{\rm d}$ context later on, in a subsequent paper. We use now again the formulae (\ref{eq1.13}) and (\ref{eq1.14}) from the last section, {\it i.e.} we write
\begin{equation}
\label{eq2.1}
\tilde M^3 (\Gamma) = \bigcup_{\lambda , i} h_i^{\lambda} \, , \quad X = \bigcup_{\lambda , i , \alpha} h_i^{\lambda} (\alpha) \, , \quad f (h_i^{\lambda} (\alpha)) = h_i^{\lambda} \, .
\end{equation}
We may as well assume that $\Gamma$ operates already at the level of the indices $i$, so that
\begin{equation}
\label{eq2.2}
\mbox{For all $g \in \Gamma$, and $(i,\lambda)$ we have $g h_i^{\lambda} = h_{gi}^{\lambda}$ \, .}
\end{equation}
The $X \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)$ will be replaced now by a new $3^{\rm d}$ representation
\begin{equation}
\label{eq2.3}
Y \overset{G}{\longrightarrow} \tilde M^3 (\Gamma)
\end{equation}
having the properties 1) (local finiteness) and 2) (equivariance) from our REPRESENTATION THEOREM. We will worry about the bounded zipping length only later, in the next section. Our $Y$, which is a Thurston $3^{\rm d}$ train-track, is a handlebody type union of {\ibf bicollared handles} $H_i^{\lambda} (\gamma)$, where $\gamma$ belongs to a countable family of indices, a priori $(\lambda , i)$-dependent. Bicollared handles were defined in \cite{26} and, according to this definition, a bicollared handle $H$ is, purely topologically speaking, a bona fide handle $\hat H$ from which the lateral surface (which we will denote by $\delta \hat H$, or simply by $\delta H$) has been {\ibf deleted}. But bicollared handles have more structure, namely a filtration by bona fide handles with the same index as $H$, $H = \overset{\infty}{\underset{n=1}{\bigcup}} \, H_n$ (see (2.15) in \cite{26}). This endows $H$ with two collars, each with countably many layers (or ``levels''): an incoming collar, parallel to the attaching zone $\partial H^{\lambda}$ of $H^{\lambda}$, and an outgoing collar, parallel to the lateral surface $\delta H^{\lambda}$. These collars, which are not disjoint, can be visualized in the figures 2.1, 2.3 from \cite{26}. The general idea is that, when $H^{\lambda}$ is attached to $H^{\lambda - 1}$ along (a piece of) $\partial H^{\lambda}$, making use of the respective outgoing collar of $H^{\lambda - 1}$ and incoming collar of $H^{\lambda}$, then the two outgoing collars, of $H^{\lambda - 1}$ and $H^{\lambda}$, {\ibf combine} into a unique outgoing collar for $H^{\lambda - 1} \cup H^{\lambda}$. It is that part of the outgoing collar of $H^{\lambda - 1}$ which was not used for attaching $H^{\lambda}$ which occurs here. The physical glueing of $H^{\lambda}$ to $H^{\lambda - 1}$ occurs actually along a PROPERLY embedded codimension one hypersurface $\partial H^{\lambda} \cap H^{\lambda - 1} \subset H^{\lambda - 1}$.
It is along the newly created collar above that $H^{\lambda + 1}$ will be attached. Moreover, when a handle is to be attached along some outgoing collar this always happens at some specific {\ibf level} $\ell$, see here \cite{26}. We will want to control these levels.
Assume, for instance, that we have a necklace of bicollared $0$-handles and $1$-handles, where the bicollared $H_i^1$ is attached at level $\ell'_i$ to its left and $\ell''_i$ to its right. As explained in \cite{26} (see the pages 28 and 29), the following {\ibf ``frustration number''} ($\approx$ holonomy, in a discrete version)
$$
K = \sum_i (\ell''_i - \ell'_{i+1})
$$
is the obstruction for the necklace to have a good outgoing collar to which a bicollared $2$-handle can be attached. This obstruction is very easily dealt with by observing first that $K$ may be re-written, up to sign, as $\underset{i}{\sum} \, (\ell'_i - \ell''_i)$ and then making sure that, when $H_i^1$ is attached, we always fix the levels so that $\ell'_i = \ell''_i$.
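To make this re-writing explicit: the necklace being cyclic, the index $i$ runs modulo its length, so that $\underset{i}{\sum} \, \ell'_{i+1} = \underset{i}{\sum} \, \ell'_i$ and hence
$$
K = \sum_i (\ell''_i - \ell'_{i+1}) = \sum_i \ell''_i - \sum_i \ell'_i = \sum_i (\ell''_i - \ell'_i) \, ,
$$
which is, up to sign, $\underset{i}{\sum} \, (\ell'_i - \ell''_i)$; in particular, imposing $\ell'_i = \ell''_i$ for each $i$ kills every term separately, hence $K = 0$.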
As far as (\ref{eq2.3}) is concerned, all this is $Y$-story. Next, the map $G$ is always supposed to be such that each $G \mid H_i^{\lambda} (\gamma)$ extends continuously to a larger embedding $G \mid \hat H_i^{\lambda} (\gamma)$, where $\hat H_i^{\lambda} (\gamma) = H_i^{\lambda} (\gamma) \cup \delta H_i^{\lambda} (\gamma)$. Inside $\tilde M^3 (\Gamma)$, for each $h_i^{\lambda}$, the $GH_i^{\lambda} (\gamma)$ occupies, {\it roughly}, the position $h_i^{\lambda}$ while, again for each $h_i^{\lambda}$, we always have the {\ibf strict} equality $G \delta \hat H_i^{\lambda} (\gamma) = \delta h_i^{\lambda}$, for all $\lambda$'s.
Using the technology from \cite{26} we can get now the following lemma, as well as the two complements which follow.
\noindent {\bf Lemma 3.1.} {\it We can perform the construction of {\rm (\ref{eq2.3})} so that}
1) {\it $Y$ is GSC, $\Psi (G) = \Phi (G)$ (i.e. $G$ is zippable) and also
$$
\overline{{\rm Im} \, G} = \tilde M^3 (\Gamma) \, .
$$
Of course, so far this only expresses the fact that} (\ref{eq2.3}) {\it is a representation, but then we also have the next items.}
2) {\it A FIRST FINITENESS CONDITION (at the source). The complex $Y$ is {\ibf locally finite}.}
3) {\it There is a {\ibf free action} $\Gamma \times Y \to Y$, for which the map $G$ is {\ibf equivariant}, i.e.
$$
G(gx) = g \, G(x) \, , \quad \forall \, x \in Y \, , \ g \in \Gamma \, .
$$
Moreover, with the same action of $\Gamma$ on the indices $i$ like in} (\ref{eq2.2}), {\it the action of $\Gamma$} on the $0$-skeleton {\it of $Y$ takes the following form, similar to} (\ref{eq2.2}), {\it namely
$$
g H_i^0 (\gamma) = H_{gi}^0 (\gamma) \, , \quad \forall \, \gamma \, .
$$
By now we really have an equivariant, locally-finite representation for our $\tilde M^3 (\Gamma)$ ($\approx \Gamma$), like it is announced in the title of the present section.}
4) {\it We have $GH_i^0 (\gamma) = {\rm int} \, h_i^0$ and also $G(\delta H_i^0 (\gamma)) = \delta h_i^0$, as already said above. This makes the $GH_i^0 (\gamma)$ be $\gamma$-independent, among other things.}
Retain that the $\delta H_i^{\lambda} (\gamma)$ exists only ideally, as far as the $Y$ is concerned; it lies at infinity. But we certainly can introduce the following stratified surface, which is PROPERLY embedded inside $\tilde M^3 (\Gamma)$, namely
\begin{equation}
\label{eq2.4}
\Sigma_1 (\infty) \underset{\rm def}{=} \bigcup_{i,\lambda , \gamma} G (\delta H_i^{\lambda} (\gamma)) = \bigcup_{i,\lambda} \delta h_i^{\lambda} \subset \tilde M^3 (\Gamma) \, ,
\end{equation}
which comes with its useful restriction
\begin{equation}
\label{eq2.5}
\Sigma_2 (\infty) \underset{\rm def}{=} GY \cap \Sigma_1 (\infty) = \bigcup \ \{ \mbox{common faces $h_i^{\lambda} \cap h_j^{\mu}$,}
\end{equation}
$$
\mbox{for $0 \leq \lambda < \mu \leq 2$}\} = \bigcup \ \{\mbox{interiors of the attaching zones} \ \partial h^1 , \partial h^2\}.
$$
In the lemma which follows next, the $\varepsilon$-skeleton of $Y$ is denoted $Y^{(\varepsilon)}$.
\noindent {\bf Lemma 3.2.} (FIRST COMPLEMENT TO LEMMA 3.1.) 1) {\it The $Y^{(\varepsilon)}$ contains a canonical outgoing collar such that each $H_i^{\lambda} (\gamma)$ is attached to $Y^{(\lambda - 1)}$ in a collar-respecting manner at some level $k (i,\gamma) \in Z_+$ which is such that}
\begin{equation}
\label{eq2.6}
\lim_{i+\gamma = \infty} k(i,\gamma) = \infty \, .
\end{equation}
2) {\it For each $Y^{(\varepsilon)}$ we introduce its ideal boundary, living at infinity, call it
$$
\delta Y^{(\varepsilon)} = \bigcup_{\overbrace{i,\gamma,\lambda \leq \varepsilon}} \delta H_i^{\lambda} (\gamma) \quad \mbox{and also} \quad \hat Y^{(\varepsilon)} = Y^{(\varepsilon)} \cup \delta Y^{(\varepsilon)} \, .
$$
Each $H_i^{\lambda} (\gamma)$ is attached to $Y^{(\lambda - 1)}$ via $\partial H_i^{\lambda} (\gamma)$ in a bicollared manner, and as a consequence of} (\ref{eq2.6}) {\it inside $\hat Y^{(\lambda - 1)}$ we will find that}
$$
\lim_{n+m=\infty} \partial H_m^{\lambda} (\gamma_n) \subset \delta Y^{(\lambda - 1)} \, , \eqno (3.6.1)
$$
{\it which implies the FIRST FINITENESS CONDITION from lemma}~3.1.
3) {\it Inside $\tilde M^3 (\Gamma)$ we also have that}
$$
\lim_{n+m+k=\infty} G \delta H_{n,m}^{\lambda} (\gamma_k) \subset G \delta Y^{(\lambda)} = \tilde M^3 (\Gamma)^{(\lambda)} \subset \Sigma_1 (\infty) \, . \eqno (3.6.2)
$$
\noindent A COMMENT. In point 4) of the 2-dimensional representation theorem which was stated in the introduction (but which will only be proved in a subsequent paper) we have introduced the so-called SECOND FINITENESS CONDITION. Eventually, it will be the (3.6.2) which will force this condition.
\noindent {\bf Lemma 3.3.} (SECOND COMPLEMENT TO LEMMA 3.1.) 1) {\it There are PROPER individual embeddings $\partial H_i^{\lambda} (\gamma) \subset Y^{(\lambda - 1)}$ and the global map
\begin{equation}
\label{eq2.7}
\sum_{0 < \lambda \leq 2,i,\gamma} \partial H_i^{\lambda} (\gamma) \overset{j}{\longrightarrow} Y
\end{equation}
is also PROPER. As a novelty with respect to} \cite{26}, {\it the map $j$ above fails now to be injective. At each immortal singularity $S$ of $X$ (if such exist), call it $S \subset \partial h_k^0 (\alpha_1) \cap \partial h_i^2 (\alpha_2) \cap \partial h_j^2 (\alpha_3)$, we have transversal contacts $\partial H_i^2 (\alpha_2) \pitchfork \partial H_j^2 (\alpha_3) \subset H_k^0 (\alpha_1)$.}
2) {\it We have ${\rm Im} \, j = {\rm Sing} \, (G)$ ($=$ mortal singularities of $G$) and, at the same time, ${\rm Im} \, j$ is the set of non-manifold points of the traintrack $Y$.}
3) {\it There are {\ibf no} immortal singularities for $Y$, all the immortal singularities for $GY$ are created by the zipping.}
We will describe now the GEOMETRY OF $(G,Y)$ in the neighbourhood of an immortal singularity downstairs
$$
S \subset \delta h_k^0 \cap \partial h_i^2 \cap \partial h_j^2 \subset \tilde M^3 (\Gamma) \, .
$$
When we fix the indices $k,i,j$ then, at the level of $Y$ we will find an infinity of triplets $H_k^0 (\gamma_n)$, $H_i^2 (\gamma'_n)$, $H_j^2 (\gamma''_n)$, $n \to \infty$. Here each of the $G \, \underset{n}{\sum} \, H_k^0 (\gamma_n) \cup H_i^2 (\gamma'_n)$, $G \, \underset{n}{\sum} \, H_k^0 (\gamma_n) \cup H_j^2 (\gamma''_n)$ generates a figure analogous to 2.2 in \cite{26}, living inside the common $GH_k^0 (\gamma_n) = {\rm int} \, h_k^0$. What we are discussing now is the interaction of these two figures.
For any given pair $n,m$, among the two $\partial H_i^2 (\gamma'_n)$, $\partial H_j^2 (\gamma''_m)$ one is ``low'', {\it i.e.} pushed deeper inside ${\rm int} \, h_k^0$, the other one is ``high'', {\it i.e.} living more shallowly close to $\delta h_k^0$.
If we think, loosely, of each $\partial H^2$ as being a copy of $S^1 \times [0,1]$, then the intersection $G \partial H_i^2 (\gamma'_n) \cap G \partial H_j^2 (\gamma''_m)$, with $\gamma'_n$ low and $\gamma''_m$ high, consists of two transversal intersection arcs which are such that, from the viewpoint of the lower $\gamma'_n$, they are close to the boundary $S^1 \times \{ 0,\varepsilon \}$, while from the viewpoint of the higher $\gamma''_m$ they are generators $p \times [0,1]$ and $q \times [0,1]$.
Given $\gamma'_n$, for almost all $\gamma''_m$ the $\gamma'_n$ is low and the $\gamma''_m$ is high, and a similar thing is true when we switch $\gamma'$ and $\gamma''$.
\noindent SKETCH OF PROOF FOR LEMMA 3.1 AND ITS COMPLEMENTS. We will follow here rather closely \cite{26}. Together with the transformation $\{ (X,f) \ \mbox{from (3.1)} \} \Longrightarrow \{ (Y,G) \ \mbox{from (3.3)} \}$, to be described now, will come two successive increases of the family of indices
\begin{equation}
\label{eq2.8}
\underbrace{\{ \alpha \}}_{{\rm like \ in \ (3.1)}} \subsetneqq \{ \beta \} \subsetneqq \underbrace{\{ \gamma \}}_{{\rm like \ in \ (3.3)}} \, .
\end{equation}
The first step is to change each $h_i^{\lambda} (\alpha) \subset X$ into a bicollared handle $H_i^{\lambda} (\alpha)$. The $H_i^{\lambda} (\alpha)$'s are put together into a provisional $Y = Y (\alpha)$ following roughly, but with some appropriate perturbations, the recipe via which the $h_i^{\lambda} (\alpha)$'s themselves have been put together at the level of $X$ (3.1).
The index ``$\alpha$'' is here generic, standing for ``first step'', and it is not to be mixed up with the ``$\alpha$'' which occurs inside $H_i^{\lambda} (\alpha)$. But then, when this index is used for $Y$ rather than for an individual $H^{\lambda}$ we may safely write again ``$\alpha$''.
For the time being we will concentrate on $\lambda = 0$, where we start by {\ibf breaking the $i$-dependence} of the $\alpha$'s, choosing for each $i$ a fixed isomorphism $\{ \alpha \} \approx Z_+$. For $\lambda = 0$, the index ``$\alpha$'' has now a universal meaning; this will be essential for the (\ref{eq2.10}) below. We consider a provisional, restricted $0$-skeleton for $Y$,
\begin{equation}
\label{eq2.9}
Y^{(0)} (\alpha) = \sum_{i,\alpha} H_i^0 (\alpha) \, , \quad \mbox{with} \ G \hat H_i^0 (\alpha) = h_i^0 \, .
\end{equation}
Next, we will {\ibf force} a free action $\Gamma \times Y^{(0)} (\alpha) \to Y^{(0)} (\alpha)$, by
\begin{equation}
\label{eq2.10}
g H_i^0 (\alpha) = H_{gi}^0 (\alpha) \, \quad \forall \, g \in \Gamma \, , \ \mbox{with the {\ibf same} index $\alpha$ on both sides.}
\end{equation}
Whatever additional requirements there will be for the collaring, we will also always have
\begin{equation}
\label{eq2.11}
gH_{n,i}^0 (\alpha) = H_{n,gi}^0 (\alpha) \, , \quad \forall \, n \, .
\end{equation}
With (\ref{eq2.2}), (\ref{eq2.10}), (\ref{eq2.11}) comes easily a $\Gamma$-equivariant $G \mid Y^{(0)} (\alpha)$, and the following condition will be imposed on our equivariant collaring too.
The following set accumulates, exactly, on $\delta h_i^0$
\begin{equation}
\label{eq2.12}
\underbrace{\sum_{n,\alpha} G (\delta H_{n,i}^0 (\alpha))}_{\mbox{this is the same thing as} \atop \mbox{$\underset{g,n,\alpha}{\sum} g^{-1} G (\delta H_{n,gi}^0 (\alpha))$}} \!\!\!\!\!\!\!\!\!\!\!\!\subset h_i^0 \subset \tilde M^3 (\Gamma) \, .
\end{equation}
We turn now to the $1$-skeleton. At the level of $X$ (\ref{eq2.1}), we have things like
\begin{equation}
\label{eq2.13}
\partial h_j^1 (\alpha) = h_{j_0}^0 (\alpha_0) - h_{j_1}^0 (\alpha_1) \, ,
\end{equation}
where each pair $(j_{\varepsilon} , \alpha_{\varepsilon})$ depends on $(j,\alpha)$. With this, at the level of our provisional, restricted $1$-skeleton
$$
Y^{(1)} (\alpha) = Y^{(0)} (\alpha) \cup \sum_{i,\alpha} H_i^1 (\alpha) \subset Y \ (\mbox{from (3.3)}) \, ,
$$
we will attach $H_i^1 (\alpha)$ to each of the two $H_{j_{\varepsilon}}^0 (\alpha_{\varepsilon})$ at some respective level $k (j_{\varepsilon} , \alpha_{\varepsilon})$, for which we will impose the condition
\begin{equation}
\label{eq2.14}
k (j_0 , \alpha_0) = k (j_1 , \alpha_1) \, ,
\end{equation}
which will make sure that for the frustration numbers we have $K=0$. This will be a standard precautionary measure, from now on. Condition (\ref{eq2.14}) is to be added to the conditions (\ref{eq2.6}), of course.
But, of course, there is no reason for our $\underset{i,\alpha}{\sum} \ H_i^1 (\alpha)$ to be equivariant, contrary to what happens for the $\underset{i}{\sum} \ h_i^1 \subset \tilde M^3 (\Gamma)$. We will now {\ibf force} such an equivariance by the following kind of averaging procedure. We start by extending the family $\{ \alpha \}$ into a larger family of indices $\{ \beta \} \supsetneqq \{ \alpha \}$ such that, for the time being at a purely abstract level, the following {\ibf saturation formula} should be satisfied
\begin{equation}
\label{eq2.15}
\{\mbox{the set of all $H_h^1 (\beta)$, $\forall \, h$ and $\beta\} = \{$the set of all $gH_j^1 (\alpha)$, $\forall \, g,j$ and $\alpha\}$.}
\end{equation}
Here $(h,\beta) = (h,\beta) [g,j,\alpha]$ and, at the purely abstract level of the present discussion, we will also impose things like
$$
g_1 \, H_h^1 (\beta) = g_1 \, g H_j^1 (\alpha) = H_{h(g_1 g,j,\alpha)}^1 (\beta (g_1 \, g , j,\alpha)) \ \mbox{and} \ (h,\beta) (1 \in \Gamma , j,\alpha) = (j,\alpha) \, .
$$
The next step is now to give flesh and bone to the abstract equality of sets (\ref{eq2.15}) by requiring that, for each $(g,j,\alpha)$, the corresponding $H_h^1 (\beta)$ (from (\ref{eq2.15})) should be attached to the two $0$-handles $g H_{j_{\varepsilon}}^0 (\alpha_{\varepsilon})$ (see (\ref{eq2.13})), and this at the {\ibf same} level $k (j_{\varepsilon} , \alpha_{\varepsilon})$ (\ref{eq2.14}) on both sides. This way we have created an object $Y^{(1)} (\beta) \supset Y^{(0)} (\beta) \underset{\rm def}{=} Y^{(0)} (\alpha)$, which is endowed with a free action of $\Gamma$, which extends the preexisting action on $Y^{(0)} (\alpha)$. We will also ask that the saturation should work at the level of the $H_{n,i}^1 (\alpha)$'s too. Moreover we can arrange things so that the following implication should hold
$$
\{ g H_i^1 (\beta) = H_{i_1}^1 (\beta_1)\} \Longrightarrow \{ g H_{n,i}^1 (\beta) = H_{n,i_1}^1 (\beta_1) \ \mbox{for all $n$'s}\} .
$$
By now we can easily get an equivariant map too, extending $G \mid Y^{(0)} (\alpha)$
\begin{equation}
\label{eq2.16}
Y^1 (\beta) \overset{G}{\longrightarrow} (\tilde M^3 (\Gamma))^{(1)} = \sum_{\lambda \leq 1,i} h_i^{\lambda} \, .
\end{equation}
The map (\ref{eq2.16}) is not completely rigidly predetermined: inside $\tilde M^3 (\Gamma)$ the $GH_j^1 (\beta)$ occupies only {\it roughly} the position $h_j^1$; but then we will also insist on the strict equalities $G\delta H_j^1 (\beta) = \delta h_j^1$. As a side remark, notice that given $j,\beta , g$, there is some $\beta' = \beta' (j,\beta ,g)$ with the equality
$$
g H_j^1 (\beta) = H_{gj}^1 (\beta') \, , \ \mbox{corresponding to} \ gh_j^1 = h_{gj}^1 \ \mbox{(see (3.2))}.
$$
Finally, we can also impose on (\ref{eq2.16}) the condition that the doubly infinite collection of annuli
$$
\sum_{g,n,\beta} g^{-1} G (\delta H_{n,gi}^1 (\beta)) = \sum_{n,\beta} G (\delta H_{n,i}^1 (\beta)) \subset (\tilde M^3 (\Gamma))^{(1)}
$$
accumulates exactly on $\delta h_i^1$, just like in (\ref{eq2.12}).
Once all the {\it frustration numbers are zero}, we can next proceed similarly for the $2$-skeleton and thereby build a completely equivariant $Y(\beta)$, endowed with an equivariant map into $\tilde M^3 (\Gamma)$. At this level, all the requirements in our lemma~3.1 (and its complements) are fulfilled, {\ibf except} the geometric simple connectivity which, until further notice, has been violated. To be very precise, the GSC, which was there at the level of $X$ (\ref{eq2.1}), got lost already when, at the level of the saturation formula (\ref{eq2.15}), additional $1$-handles were thrown in. There is no reason, of course, why the $\{ H_j^2 (\beta) \}$ should cancel them. Before going on with this, here are some more details concerning the construction $Y^{(1)} (\beta) \Longrightarrow Y(\beta)$, which yields the full 2-skeleton. We start with an abstract saturation formula, analogous to (\ref{eq2.15})
$$
\{ H_h^2 (\beta) \} = \{ g H_i^2 (\alpha) \} \, , \ \forall \, h, \beta , g ,i,\alpha \, ,
$$
producing 2-handles which can afterwards be happily attached to $Y^{(1)} (\beta)$.
So, we have by now a map which is equivariant and has a locally finite source, but whose source fails to be GSC, which prevents it from being a representation,
$$
Y(\beta) \overset{G}{\longrightarrow} \tilde M^3 (\Gamma) \, .
$$
We want to restore the GSC, without losing any of the other desirable features already gained. Proceeding like in \cite{26}, inside the 1-skeleton $Y^{(1)} (\beta)$ we choose a PROPER, equivariant family of simple closed curves $\{ \gamma_j (\beta) \}$, in cancelling position with the family of 1-handles $\{ H_j^1 (\beta) \}$. With this, we extend $\{ G \gamma_j (\beta) \}$ to an equivariant family of mapped discs
\begin{equation}
\label{eq2.18}
\sum_{j,\beta} D^2 (j,\beta) \overset{f}{\longrightarrow} \tilde M^3 (\Gamma) \, ,
\end{equation}
and the first idea would be now to replace $Y(\beta)$ by the following equivariant map
\begin{equation}
\label{eq2.19}
Y(\beta) \cup \sum_{j,\beta} D^2 (j,\beta) \overset{G \cup f}{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\longrightarrow} \tilde M^3 (\Gamma) \, ,
\end{equation}
the source of which is both locally finite and GSC. We may also assume that $Y(\beta) \overset{G}{\longrightarrow} \tilde M^3 (\Gamma)$ is essentially surjective, from which it is easy to get $\Psi (G \cup f) = \Phi (G \cup f)$, {\it i.e.} a zippable $G \cup f$. In other words (\ref{eq2.19}) is already a representation, even an equivariant, locally finite one. BUT there is still trouble with (\ref{eq2.19}), namely that we do not control the accumulation pattern of the family $\{ f D^2 \}$ which, a priori, could be as bad as {\ibf transversally cantorian}. In particular, the SECOND FINITENESS CONDITION (\ref{eq0.6}), which will be essential in the subsequent paper of this series is, generically speaking, violated by (\ref{eq2.19}). [More precisely, as one can see in \cite{26} and \cite{27}, once representations go $2$-dimensional, the accumulation pattern of $M_2 (f)$, instead of being, transversally speaking, one with finitely many limit points, as (\ref{eq0.6}) demands, tends to become cantorian.]
The cure for this disease is the following {\ibf decantorianization process}, which is discussed at more length in \cite{26}. Here are the successive steps
\noindent (3.18.1) We start by applying lemma~2.6 to $\{ G \gamma_j (\beta) \}$, and this replaces (\ref{eq2.18}) by a simplicial {\ibf non degenerate} map
$$
\sum_{j,\beta} D^2 (j,\beta) \overset{f}{\longrightarrow} [M^3 (\Gamma)] \quad \mbox{(see section II)}.
$$
We also have the following features
\noindent (3.18.2) \quad The $\underset{j,\beta}{\sum} \ D^2 (j,\beta)$ is endowed with an equivariant triangulation, involving three infinite families of simplices $\{ s^0 \}$, $\{ s^1 \}$, $\{ s^2 \}$ of dimensions 0, 1, and 2, respectively.
\noindent (3.18.3) \quad For any simplex $k = s^{\lambda}$ not on the boundary, and hence not corresponding to some already existing bicollared handle $H_i^{\lambda} (\beta)$, we introduce a new bicollared family $H_k^{\lambda} (\gamma)$. Next, we use $Y(\beta) \cup \underset{j,\beta}{\sum} \ D^2 (j,\beta)$ as a pattern for attaching, successively, these new bicollared $H_k^{\lambda} (\gamma)$ to $Y(\beta)$. This leads to a $3^{\rm d}$ representation of $\Gamma$,
\begin{equation}
\label{eq2.20}
Y = Y (\gamma) = \bigcup_{\lambda , i , \gamma} H_i^{\lambda} (\gamma) \overset{G}{\longrightarrow} \tilde M^3 (\Gamma) \, .
\end{equation}
We follow here the already established patterns of controlling the levels when bicollared handles are attached, so as to have local finiteness (FIRST FINITENESS CONDITION) and, at the same time, the SECOND FINITENESS CONDITION. The equivariance of (\ref{eq2.20}) will descend directly from the equivariance of (\ref{eq2.19}), and similarly the condition $\Psi = \Phi$ too. With this, our lemmas~3.1, 3.2, 3.3 can be established.
\noindent FINAL REMARKS. A) In view of (\ref{eq2.8}) we have here also inclusions at the level of the handles
$$
\{ H_i^{\lambda} (\alpha) \} \subsetneqq \{ H_i^{\lambda} (\beta) \} \subsetneqq \{ H_i^{\lambda} (\gamma) \} \, ,
$$
hence $Y(\alpha) \subset Y(\beta) \subset Y(\gamma)$, with compatible maps into $\tilde M^3 (\Gamma)$.
\noindent B) Here is a very fast way to understand how one decantorianizes via a process of {\ibf thickening}. Inside the interval $[a,b]$ far from the border, consider a sequence $x_i \in (a,b)$, $i=1,2,\ldots$ accumulating on a Cantor set $C \subset (a,b) - \underset{i}{\sum} \ \{ x_i \}$.
In order to get rid of $C$, we start by considering two sequences $a_n , b_n \in (a,b) - C - \{ x_i \}$, such that $\lim a_n = a$, $\lim b_n = b$. If one thickens each $x_n$ into $[a_n , b_n] \ni x_n$, then one has managed to replace the Cantorian accumulation of the set $\{ x_n \}$ by the very tame accumulation of the arcs $\{ [a_n , b_n ] \}$.
Of course, what we are worrying about is the transversal accumulation to an infinite family of $2^{\rm d}$ sheets in $3^{\rm d}$. There, one decantorianizes by thickening the $2^{\rm d}$ sheets into $3^{\rm d}$ boxes (or $3^{\rm d}$ handles of index $\lambda = 2$). With a bit of care, the accumulation pattern of the lateral surfaces of the handles is now tame.
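The toy thickening of remark B) can be simulated in a few lines (an illustrative sketch of our own; the particular choice of the $x_n$ and of the arcs $[a_n,b_n]$ is ours, and for simplicity we ignore the mild genericity requirement that $a_n, b_n$ avoid $C \cup \{x_i\}$):

```python
def gap_midpoints(depth, a=0.0, b=1.0):
    # midpoints of the middle-third gaps removed from [a, b]; as depth grows,
    # these points accumulate on the whole middle-thirds Cantor set C
    if depth == 0:
        return []
    third = (b - a) / 3
    return ([(a + b) / 2]
            + gap_midpoints(depth - 1, a, a + third)
            + gap_midpoints(depth - 1, b - third, b))

xs = gap_midpoints(10)          # the sequence {x_n}: cantorian accumulation
# thickening: replace x_n by an arc [a_n, b_n] containing x_n, with
# a_n -> a = 0 and b_n -> b = 1
arcs = [(x / (n + 2), 1 - (1 - x) / (n + 2)) for n, x in enumerate(xs)]

# tame accumulation: a fixed interior point lies in all but finitely many arcs
c = 0.137
missed = [n for n, (lo, hi) in enumerate(arcs) if not (lo <= c <= hi)]

# whereas the raw points x_n keep hitting many distinct locations: even after
# coarse rounding they fill many separate 0.001-buckets along [0, 1]
clusters = len({round(x, 3) for x in xs})
```

Here `missed` is contained in $\{0,\dots,5\}$, since $a_n < 1/(n+2)$ and $b_n > 1 - 1/(n+2)$, while the untamed points spread over hundreds of rounding buckets.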
\section{Forcing a uniformly bounded zipping length}\label{sec3}
\setcounter{equation}{0}
The last section has provided us with the 3-dimensional representation (\ref{eq2.3}), {\it i.e.} with
$$
Y \overset{G}{\longrightarrow} \tilde M^3 (\Gamma)
$$
which is equivariant and has a locally finite representation space $Y$. We turn now to the issue of the zipping length and, starting from (\ref{eq2.3}) as a first stage, what we will achieve now is the following
\noindent {\bf Lemma 4.1.} {\it There is another, much larger $3$-dimensional representation, extending the} (\ref{eq2.3}) {\it above,
\begin{equation}
\label{eq3.1}
\xymatrix{
Y(\infty) \ar[rr]^{\!\!\!\!\!\!\!\!\!\!\ g(\infty)} && \ \tilde M^3 (\Gamma) \, ,
}
\end{equation}
having all the desirable features of} (\ref{eq2.3}), {\it as expressed by lemma~{\rm 3.1} and its two complements} 3.2, 3.3 {\it and moreover such that the following three things should happen too.}
\noindent (4.2) \quad {\it There exists a {\ibf uniform bound} $M > 0$ such that for any $(x,y) \in M^2 (g(\infty))$ we have
$$
\inf_{\lambda} \Vert \lambda (x,y) \Vert < M \, ,
$$
where $\lambda$ runs through all the zipping paths for $(x,y)$.}
\noindent (4.3) \quad {\it Whenever $(x,y) \in M^2 (g(\infty)) \mid Y(\infty)^{(1)}$, then we can find a zipping path of controlled length $\Vert \lambda (x,y) \Vert < M$, which is confined inside the $1$-skeleton $Y(\infty)^{(1)}$.}
\noindent (4.4) \quad {\it When $\Psi (g(\infty)) = \Phi (g(\infty))$ is being implemented by a complete zipping of $g(\infty)$, then one can start by zipping completely the $1$-skeleton of $Y(\infty)$ before doing anything about the $2$-skeleton.}
\noindent PROOF OF THE LEMMA 4.1. We start with some comments. The bounded zipping length is really a $1$-skeleton issue. Also, it suffices to check it for double points $(x,y) \in M^2 (g(\infty)) \mid Y (\infty)^{(0)}$ with a controlled $\lambda (x,y)$ confined inside $Y(\infty)^{(1)}$.
We will use the notation $(Y^{(0)} , G_0) \underset{\rm def}{=} (Y,G)$ (\ref{eq2.3}) and, with this, our proof will involve an infinite sequence of $3$-dimensional representations $(Y (n) , G_n)$ for $n = 0,1,2,\ldots$, converging to the desired $(Y(\infty) , g(\infty))$. We will find
$$
Y(0) \subset Y(1) \subset Y(2) \subset \ldots , \quad Y (\infty) = \bigcup_{0}^{\infty} Y(n)
$$
where the $Y(0) = Y$ is already GSC and where the construction will be such that the $2$-handles in $Y(n) - Y(n-1)$ will be in cancelling position with the $1$-handles in $Y(n) - Y(n-1)$, something which will require the technology of the previous section. Anyway, it will follow from these things that $Y(\infty) \in {\rm GSC}$.
The construction starts by picking up some fundamental domain $\Delta \subset \tilde M^3 (\Gamma)$ of size $\Vert \Delta \Vert$. Let us say that this $\Delta$ consists of a finite connected system of handles $h^0 , h^1 , h^2$. No other handles will be explicitly considered, until further notice, in the discussion which follows next. One should also think, alternatively, of our $\Delta$ as being part of $M^3 (\Gamma)$.
We will start by establishing abstract, intercoherent isomorphisms between the various, a priori $\ell$-dependent systems of indices $\gamma$, for the various infinite families $\{ H_{\ell}^0 (\gamma) \}$ which correspond to the finitely many $h_{\ell}^0 \subset \Delta$. Downstairs, in $\Delta$, we also fix a triple $h_1^0 , h_2^0 , h_i^1$ such that $\partial h_i^1 = h_2^0 - h_1^0$. Upstairs, some specific $H_1^0 (\gamma_0)$ living over $h_1^0$ will be fixed too.
With all this, we will throw now into the game a system of bicollared handles $H_j^1 (\delta)$, living over the various $h_j^1 \subset \Delta$, many enough so as to realize the following conditions
\noindent (4.5) \quad For any given index $\gamma$ there is an index $\delta = \delta (\gamma)$, such that $\partial H_i^1 (\delta) = H_1^0 (\gamma_0) - H_2^0 (\gamma)$, with $H_i^1 (\delta)$ living over $h_i^1$.
\noindent (4.6) \quad Downstairs, for each pair $h_n^0 , h_m^0 \subset \Delta$ we fix a path joining them $\mu = \mu (h_n^0 , h_m^0) \subset \tilde M^3 (\Gamma)$, with $\Vert \mu \Vert \leq \Vert \Delta \Vert$. It is assumed here that $\mu (h_1^0 , h_2^0) = h_i^1$. With this, for any index $\gamma$ and each pair $h_n^0 , h_m^0 \subset \Delta$ it is assumed that enough things should have been added upstairs, such that there should be a continuous path $\lambda = \lambda (h_n^0 , h_m^0 , \gamma)$ covering exactly the $\mu (h_n^0 , h_m^0)$ and such that, moreover,
$$
\partial \lambda (h_n^0 , h_m^0 , \gamma) = H_n^0 (\gamma) - H_m^0 (\gamma) \, , \ \forall \, \gamma \, .
$$
Here $\lambda (h_1^0 , h_2^0 , \gamma_0) = H_i^1 (\delta (\gamma_0))$. From here on we will imitate the process $Y(\alpha) \subset Y(\beta) \subset Y(\gamma) = \{ Y ,$ {\it i.e.} our present $Y(0)\}$, meaning the following succession of steps. The only addition, so far, has been the system of $1$-handles $\{ H_j^1 (\delta)\}$, localized over $\Delta$. The next step will be to saturate it like in (\ref{eq2.15}) so as to achieve equivariance. The next two steps will add more handles of index $\lambda = 0$, $\lambda = 1$ and $\lambda = 2$ so as to restore GSC and then to decantorianize. Each of these two steps has equivariance built in and also, by proceeding like in the last section, local finiteness is conserved. The feature $\Psi = \Phi$ is directly inherited from $Y(0)$, whose image is already almost everything.
The end result of this process is a bigger $3$-dimensional representation
$$
Y \underset{\rm def}{=} Y(0) \subset Y(1) \overset{G_1}{-\!\!\!-\!\!\!\longrightarrow} \tilde M^3 (\Gamma) \, , \eqno (4.7)
$$
having all the desirable features of (\ref{eq2.3}) and moreover the following one too,
\noindent (4.7.1) \quad For any $(x,y) \in M^2 (G_1 \mid Y(0)^{(0)})$ there is now a zipping path of length $\leq \Vert \Delta \Vert + 1$, in $Y(1)^{(1)}$. Assuming here that
$$
x \in H_n^0 (\gamma_1) \, , \ y \in H_n^0 (\gamma_2) \, ,
$$
then there will be a zipping path which makes use of the features (4.5) and (4.6), suggested in the diagram below
$$
\xymatrix{
&H_2^0(\gamma_1) \ar@{-}[rr]_{\lambda (h_2^0 , h_n^0 , \gamma_1)} &&H_n^0 (\gamma_1) \\
H_1^0(\gamma_0) \ar@{-}[dr]_{H_i^1 (\delta (\gamma_2))} \ar@{-}[ur]_{H_i^1 (\delta (\gamma_1))} \\
&H_2^0(\gamma_2) \ar@{-}[rr]_{\lambda (h_2^0 , h_n^0 , \gamma_2)} &&H_n^0 (\gamma_2).
}
$$
This {\ibf is} our zipping path of length $\leq \Vert \Delta \Vert + 1$ for $(x,y) \in M^2 (f)$.
A priori, the diagram of $0$-handles and continuous paths of $1$-handles (and $0$-handles) above, lives at the level of $\Delta$ but then, via equivariance it is transported all over the place.
In the diagram above (part of the claim (4.7.1)) all the explicitly occurring $H^0$'s are $0$-handles of our original $Y(0)$. This also means that, so far, the zipping length is totally uncontrolled when we move to the additional $0$-handles which have been introduced by $Y(0) \Rightarrow Y(1)$.
But then, we can iterate indefinitely the $(Y(0) \Rightarrow Y(1))$-type construction, keeping all the time the same fundamental domain $\Delta \subset \tilde M^3 (\Gamma)$ as the first time. Moreover, like in \cite{26} and/or like in section III above, all the newly created $\{ \partial H$ and $\delta H \}$ will all the time be pushed closer and closer to the respective infinities. So, at least at each individual level $n$, local finiteness will stay all the time with us. Also, at each step $n$ we force equivariance and then $\{$restore GSC$\} + \{$decantorianize$\}$, a process which as already said keeps both the local finiteness and the equivariance alive. More explicitly, here is how our iteration will proceed. At each stage $n$ we will find some equivariant representation
$$
Y(0) \subset Y(1) \subset Y(2) \subset \ldots \subset Y(n-1) \subset Y(n) \overset{G_n}{-\!\!\!-\!\!\!\longrightarrow} \tilde M^3 (\Gamma) \, , \eqno (4.8.n)
$$
such that for all $m < n$ we have $G_n \mid Y(m) = G_m$. The representation $Y(n) \overset{G_n}{-\!\!\!-\!\!\!\longrightarrow} \tilde M^3 (\Gamma)$ has all the desirable features of (\ref{eq2.3}) from the lemmas~3.1, 3.2, 3.3 and, in addition, the following two.
\noindent (4.9.$n$) \quad For any $(x,y) \in M^2 (G_n \mid Y(n-1)^{(0)})$ there is a zipping path of length $\leq \Vert \Delta \Vert + 1$, in $M^2 (G_n \mid Y(n)^{(1)})$.
\noindent (4.10) \quad Consider, like in the beginning of (4.9.$n$), an $(x,y) \in M^2 (G_n \mid Y(m-1)^{(0)})$ for some $m < n$. Then the zipping path which (4.9.$n$) assigns for this particular $(x,y)$, is exactly the one already assigned by (4.9.$m$).
With such compatibilities we have by now a well-defined space $Y(\infty)$ with a non-degenerate map
$$
Y(\infty) \underset{\rm def}{=} \bigcup_{n=0}^{\infty} Y(n) \overset{g(\infty) \underset{\rm def}{=} \lim G_n}{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\longrightarrow} \tilde M^3 (\Gamma) \, , \eqno (4.10.1)
$$
and the issue now is to check the virtues of this (4.10.1).
To begin with, we have $Y(0) \in {\rm GSC}$. Next, our construction is such that the $1$-handles which, via the inductive step $Y(n-1) \Rightarrow Y(n)$, are thrown in to begin with, are afterwards cancelled by later $2$-handles, thrown in by the same $Y(n-1) \Rightarrow Y(n)$. So $Y(\infty) \in {\rm GSC}$. The fact that at each finite level $n$ we have $\Psi = \Phi$ easily implies this property for $g(\infty)$ too. So (4.10.1) is a representation, to begin with. At each finite $n$ we have equivariance, and the zipping length is uniformly bounded by an $M$ which is independent of $n$. So (4.10.1) has these features too. But, what is, a priori, NOT automatic for $Y(\infty)$ is the local finiteness. This means that some care is required during the infinite process, so as to ensure the convergence. But the technology from \cite{26} and/or the preceding section saves the day here. Without any harm we can also add the following ingredient to our whole construction.
Let us consider the generic bicollared handle $H_i^{\lambda} (\varepsilon , n)$ which is to be added to $Y(n-1)$ so as to get $Y(n)$. Let us also denote by $k(\lambda , i , \varepsilon , n)$ the level at which we attach it, in terms of the corresponding outgoing collar. Then, the whole infinite construction can be arranged so that we should have
$$
\lim_{n \to \infty} \ \inf_{\lambda , i , \varepsilon} \, k(\lambda , i , \varepsilon , n) = \infty \eqno (4.11)
$$
and also
$$
\lim_{p+i+\varepsilon + n \to \infty} (g(\infty) \, \delta H_{p,i}^{\lambda} (\varepsilon , n)) \subset g(\infty) \, \delta Y(\infty)^{(\lambda)} = (\tilde M^3 (\Gamma))^{(\lambda)} \, . \eqno (4.12)
$$
With this, our (4.10.1) has by now all the features demanded by lemma~4.1. So, by now the proof of the lemma in question is finished and then the proof of the $3^{\rm d}$ representation theorem too.
We will end this section with some remarks and comments.
A) We very clearly have, in the context of our $3$-dimensional representation, that $g(\infty) \, Y(\infty) = {\rm int} \, \tilde M^3 (\Gamma) = \tilde M^3 (\Gamma) - \partial \tilde M^3 (\Gamma)$, {\it i.e.} the image of $g(\infty)$ is locally finite. Moreover none of the special features like locally finite source, equivariance or bounded zipping length is involved here. But then, our $3^{\rm d}$ representation theorem is only a preliminary step towards the $2^{\rm d}$ representation theorem, stated in the introduction to this present paper and from which the proof of $\forall \, \Gamma \in {\rm GSC}$ will proceed, afterwards, in the subsequent papers of this series.
In the context of the $2$-dimensional representation (\ref{eq0.5}) we have again a locally finite source and also a uniformly bounded zipping length which, in some sense at least, should mean that ``all the zipping of (\ref{eq0.5}) can be performed in a finite time''. All these things notwithstanding and contrary to what one may a priori think, the image of the representation, {\it i.e.} the $fX^2$ FAILS NOW TO BE LOCALLY FINITE. The rest of this comment should give an idea of the kind of mechanism via which this seemingly self-contradictory phenomenon actually occurs.
In the same spirit as the diagram from (4.7.1) we consider inside $Y(\infty)$ and living over a same $\gamma \Delta \subset \tilde M^3 (\Gamma)$, $\gamma \in \Gamma$, infinitely many paths of $0$-handles and $1$-handles, all with the same $g(\infty)$-image, and starting at the same $H^0$
$$
H^0 \underset{\lambda_i}{\longrightarrow} H^0 (\gamma_i) \, ; \ i = 0,1,2,\ldots \eqno (A_1)
$$
We fix now our attention on $H^0 (\gamma_0)$ and consider some bicollared $1$-handle of $Y(\infty)$, call it $H^1$, attached to $H^0 (\gamma_0)$ in a direction transversal to that of the path $\lambda_0$ which hits $H^0 (\gamma_0)$. Some $2$-handle $H^2$ is now attached to a closed finite necklace $\ldots \cup H^0 (\gamma_0) \cup H^1 \cup \ldots$, inside $Y(\infty)$. We are here in a bicollared context meaning that $\partial H^2$ goes deep inside $H^0 (\gamma_0) \cup H^1$ and $g(\infty) (\delta H^0 (\gamma_0) \cup \delta H^1)$ cuts through $g(\infty) H^2$.
We move now to the $2$-dimensional representation (\ref{eq0.5}). Without going here into any more details, each $X^2 \mid H^{\lambda} (\gamma)$ is now a very dense $2$-skeleton of $H^{\lambda} (\gamma)$, becoming denser and denser as one approaches $\delta H^{\lambda} (\gamma)$, which is at infinity, as far as $H^{\lambda} (\gamma)$ is concerned.
When we go to $X^2 \mid H^2$, then among other things this will contain $2$-dimensional compact walls $W^2$ glued to $X^2 \mid (H^0 (\gamma_0) \cup H^1)$, parallel to the core of $H^2$, and such that we find a transversal intersection
$$
[f (\delta H^0 (\gamma_0) \cup \delta H^1) = g(\infty) (\delta H^0 (\gamma_0) \cup \delta H^1)] \pitchfork W^2 \subset \tilde M^3 (\Gamma) \, . \eqno (A_2)
$$
From $(A_1) + (A_2)$ we can pull out a sequence of double points $(x_n , y_n) \in M^2 (f)$, with $n=1,2,\ldots$ such that
a) $x_n \in X^2 \mid H^0 (\gamma_n) \, , \ y_n \in W^2$.
b) Inside $W^2$ we have $\lim y_n = y_{\infty} \in f (\delta H^0 (\gamma_0)) \cap W^2$, while as far as $g(\infty) H^0 (\gamma_n)$ is concerned, $\lim x_n = \infty$.
c) Making use just of what $(A_1)$ and $W^2$ give us, we can produce zipping paths $\Lambda_n$ of length $\leq \Vert \Delta \Vert + 1$ connecting each $(x_n , y_n)$ to some singularity $\sigma_n \in X^2 \mid H^0$.
d) Both $\sigma_n$ and $\Lambda_n$ go to infinity, at least in the following sense: inside $\tilde M^3 (\Gamma)$, the $f \Lambda_n$'s come closer and closer to $\underset{i}{\sum} \, f \partial H_i^0 \cup \underset{j}{\sum} \, f \partial H_j^1$. [This ``going to infinity'' of $\Lambda_n$ gets a more interesting meaning once one goes to the $S_u \, \tilde M^3 (\Gamma)$ from the introduction and one applies punctures, like in (\ref{eq0.9}).]
But the point here is that $fX^2$ is {\ibf not locally finite} in the neighbourhood of $fy_{\infty}$.
B) Notice also, that contrary to what one might have thought, a priori, bounded zipping length and difficulty of solving algorithmic problems for $\Gamma$, like the word problem (or lack of such difficulties), are totally unrelated to each other.
C) There is no finite uniform bound for the lengths of the zipping paths induced by (4.8.$n$) on $Y (n)^{(0)} - Y(n-1)^{(0)}$. That is why an INFINITE CONSTRUCTION is necessary here.
D) So, eventually, our uniform boundedness has been achieved via an {\ibf infinite} process which is {\ibf convergent}. Also, it would not cost us much to impose some lower bound for the zipping lengths too, if that were necessary.
E) The way our proof proceeded was to show (4.2) $+$ (4.3) for any $(x,y) \in M^2 (g(\infty)) \mid \{\mbox{$0$-skeleton of} \ Y(\infty)\}$. But from then on, it is very easy to connect, first, any $(x,y) \in M^2 \mid \{\mbox{$1$-skeleton}\}$ to a neighbouring double point in the $0$-skeleton, via a short zipping path confined in the $1$-skeleton. Next, any double point involving the $2$-skeleton can be connected, via short zipping paths too, to neighbouring double points in the $0$-skeleton.
F) Our general way of proceeding, in this paper, was the following. We have started with the initial representation $X \overset{f}{\longrightarrow} \tilde M^3 (\Gamma)$ from section I (see for instance (2.9.1)), then we enlarged it into the $Y \overset{G}{\longrightarrow} \tilde M^3 (\Gamma)$ from (\ref{eq2.3}) and then finally into our present gigantic $Y(\infty) \overset{g(\infty)}{-\!\!\!-\!\!\!-\!\!\!-\!\!\!\longrightarrow} \tilde M^3 (\Gamma)$ (\ref{eq3.1}). We started with the initial feature $X \in {\rm GSC}$ and we never lost it in our successive constructions.
G) When, in the next paper, we change our (4.1) into a 2-dimensional representation, (4.12) will ensure that the second finiteness condition is again satisfied.
\end{document} |
\begin{document}
\title{The graphs with all but two eigenvalues equal to $-2$ or $0$}
\begin{abstract}
\noindent
We determine all graphs for which the adjacency matrix has at most
two eigenvalues (multiplicities included) not equal to $-2$, or $0$,
and determine which of these graphs are determined by their
adjacency spectrum.
\end{abstract}
\section{Introduction}
In an earlier paper~\cite{chvw} we determined the class $\cal G$ of
graphs with at most two adjacency eigenvalues
(multiplicities included) different from $\pm 1$.
The classification was motivated by the question whether the
friendship graph is determined by its spectrum.
Here we deal with the class $\cal H$ of graphs for which the
adjacency matrix $A$ has all but at most two eigenvalues equal
to $-2$ or $0$.
Equivalently, $A+I$ has at most two eigenvalues different from
$\pm 1$.
The class $\cal H$ is in some sense complementary to $\cal G$.
Indeed, many graphs in $\cal G$ (including the friendship graphs)
are the complements of graphs in $\cal H$.
In particular it easily follows that a regular graph is in $\cal G$
if and only if its complement is in $\cal H$.
But for non-regular graphs there is no such relation.
Note that a graph from $\cal H$ remains in $\cal H$ if we add or
delete isolated vertices.
Let $\cal H'$ be the set of graphs in $\cal H$ with no isolated
vertices.
Then it clearly suffices to determine all graphs in $\cal H'$.
It turns out that $\cal H'$ contains several infinite families and some sporadic graphs,
and that relatively few graphs in $\cal H'$ have a cospectral mate.
Thus we find many graphs with four distinct eigenvalues which are determined by their adjacency spectrum;
see~\cite{dkx} for some discussion on this phenomenon.
The major part of our proof deals with graphs in $\cal H'$ with two positive eigenvalues.
These graphs have least eigenvalue $-2$ and have been classified; see for example~\cite{crs}.
However, using this classification is not straightforward, and we decided to start from scratch
using mainly linear algebra.
The most important tool is {\em eigenvalue interlacing},
which states that if $\lambda_1(A)\geq\cdots\geq\lambda_n(A)$
are the eigenvalues of a symmetric matrix $A$ of order $n$,
and $\lambda_1(B)\geq\cdots\geq\lambda_m(B)$
are the eigenvalues of a principal submatrix $B$ of $A$ of order $m$,
then
\[
\lambda_i(A)\geq\lambda_i(B)\geq\lambda_{n-m+i}(A)
\ \mbox{ for }\ i=1,\ldots,m.
\]
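As a minimal numerical illustration of interlacing (ours, not from the paper): for the path $P_3$, whose adjacency matrix has eigenvalues $\pm\sqrt{2}$ and $0$, deleting an end vertex leaves the adjacency matrix of $P_2$, with eigenvalues $\pm 1$, and indeed $-\sqrt{2}\leq -1\leq 0\leq 1\leq\sqrt{2}$:

```python
import math

# adjacency matrix of the path P3 (vertices 1-2-3) and its eigenpairs:
# we verify A v = lambda v directly for each claimed pair
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
pairs = [(math.sqrt(2),  [1,  math.sqrt(2), 1]),
         (0.0,           [1,  0,           -1]),
         (-math.sqrt(2), [1, -math.sqrt(2), 1])]
for lam, v in pairs:
    Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(3))

# B = principal submatrix on vertices {1, 2} = adjacency matrix of P2,
# with eigenvalues 1 and -1; check lambda_i(A) >= lambda_i(B) >= lambda_{n-m+i}(A)
eig_A = sorted([lam for lam, _ in pairs], reverse=True)
eig_B = [1.0, -1.0]
n, m = 3, 2
interlaced = all(eig_A[i] >= eig_B[i] >= eig_A[n - m + i] for i in range(m))
```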
Another important tool is the following well known result on
equitable partitions (we refer to~\cite{bh} for these tools and
many other results on spectral graph theory).
Consider a partition ${{\cal P}}=\{V_1,\ldots,V_m\}$ of the set
$V=\{1,\ldots,n\}$.
The characteristic matrix $\chi_{\cal P}$ of ${\cal P}$ is the $n\times m$
matrix whose columns are the characteristic vectors of $V_1,\ldots,V_m$.
Consider a symmetric matrix $A$ of order $n$, with rows and
columns partitioned according to ${\cal P}$.
The partition of $A$ is {\em equitable} if each submatrix
$A_{i,j}$ formed by the rows of $V_i$ and the columns of $V_j$
has constant row sums $q_{i,j}$.
The $m\times m$ matrix $Q=(q_{i,j})$ is called the
{\em quotient matrix} of $A$ with respect to ${\cal P}$.
\begin{lem}\label{ep}
The matrix $A$ has the following two kinds of eigenvectors
and eigenvalues:
\begin{itemize}
\item[(i)]
The eigenvectors in the column space of $\chi_{\cal P}$;
the corresponding eigenvalues coincide with the eigenvalues of $Q$.
\item[(ii)]
The eigenvectors orthogonal to the columns of $\chi_{\cal P}$;
the corresponding eigenvalues of $A$ remain unchanged if
some scalar multiple of the all-one block $J$ is added to block
$A_{i,j}$ for each $i,j\in\{1,\ldots,m\}$.
\end{itemize}
\end{lem}
The all-ones and all-zeros vector of dimension $m$ is denoted by
${\bf 1}_m$ (or ${\bf 1}$) and ${\bf 0}_m$ (or ${\bf 0}$), respectively.
We denote the $m\times n$ all-ones matrix by $J_{m,n}$ (or just $J$),
the all-zeros matrix by $O$, and the identity matrix of order $n$ by
$I_n$, or $I$.
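To see part (i) of Lemma~\ref{ep} at work (an illustrative sketch of our own): for $K_{2,3}$, the partition into the two colour classes is equitable with quotient matrix $Q=\left[\begin{smallmatrix}0&3\\2&0\end{smallmatrix}\right]$, and its eigenvalues $\pm\sqrt{6}$ lift to eigenvectors of $A$ that are constant on each class:

```python
import math

# K_{2,3}: classes U = {0, 1} and W = {2, 3, 4}; the partition {U, W} is equitable
A = [[0, 0, 1, 1, 1],
     [0, 0, 1, 1, 1],
     [1, 1, 0, 0, 0],
     [1, 1, 0, 0, 0],
     [1, 1, 0, 0, 0]]
Q = [[0, 3],
     [2, 0]]            # block row sums of A with respect to {U, W}
lam = math.sqrt(6)      # eigenvalue of Q, since det(x*I - Q) = x^2 - 6

# eigenvector (s, t) of Q: Q (s,t)^T = (3t, 2s)^T = lam (s,t)^T
s, t = lam, 2.0
quotient_ok = abs(3 * t - lam * s) < 1e-12 and abs(2 * s - lam * t) < 1e-12

# lift to the column space of the characteristic matrix: constant on each class
v = [s, s, t, t, t]
Av = [sum(A[i][j] * v[j] for j in range(5)) for i in range(5)]
lifted_ok = all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(5))
```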
\section
{Graphs in $\cal{H'}$ with just one positive eigenvalue}\label{1}
It is known (see~\cite{s}) that a graph with just one positive
adjacency eigenvalue is a complete multipartite graph,
possibly extended with a number of isolated vertices.
Thus we have to find the complete multipartite graphs in $\cal H'$.
\begin{thm}\label{onepos}
If a graph $G\in{\cal H'}$ has one positive eigenvalue,
then $G$ is one of the following.
\begin{itemize}
\item $G_0(\ell,m)=K_{\ell,m}$ with spectrum
$\{0^{\ell+m-2},~\pm\sqrt{\ell m}\}$ ($\ell\geq m\geq 1$),
\item $G_1=K_{1,1,3}$ with spectrum $\{-2,~-1,~0^2,~3\}$,
\item $G_2(k,m)=K_{2,\ldots,2,m}$ with
spectrum $\{-2^{k-2},~0^{k+m-2},~k-2\pm\sqrt{k^2+2(k-1)(m-2)}\}$
($m\geq 1$), where $k\geq 3$ is the number of classes.
\end{itemize}
\end{thm}
\noindent
{\bf Proof.}
Let ${\cal H}_1'$ be the set of complete multipartite graphs in
$\cal H'$, and let $G$ be a complete $k$-partite graph in
${\cal H}_1'$.
Then $k\geq 2$, and $k=2$ gives $G_0(\ell,m)$.
Assume $k\geq 3$.
The graph $K_{2,3,3}$ has two eigenvalues less than $-2$,
so by interlacing,
$G$ does not contain $K_{2,3,3}$ as an induced subgraph.
So, if no class has size $1$, then $G=G_2(k,m)$ with
$m\geq 2$.
Suppose there are $k_1\geq 1$ classes of size~$1$ and assume
$k\geq k_1+3$.
Then at least one class has size $2$, and at most one class has
size greater than $2$
(because $K_{2,3,3}$ is a forbidden induced subgraph).
If one class has size $m\geq 3$, then the adjacency matrix $A$
of $G$ admits an equitable partition with quotient matrix
\[
Q=\left[
\begin{array}{ccc}
k_1-1 & m & 2(k-k_1-1)\\
k_1 & 0 & 2(k-k_1-1)\\
k_1 & m & 2(k-k_1-2)
\end{array}
\right].
\]
By Lemma~\ref{ep}, the eigenvalues of $Q$ are also eigenvalues of $A$,
but $Q$ has three eigenvalues different from $-2$ and $0$,
so $G\not\in{\cal H}_1'$.
If $k_1\geq 3$, then $A+I$ has at least $3$ equal rows,
and hence $A$ has an eigenvalue $-1$ of multiplicity at least $2$,
and therefore $G\not\in{\cal H}'_1$.
If $k_1=2$ and $G$ contains $K_{2,3}$ (which has smallest eigenvalue
$-\sqrt{6}<-2$), then $G$ has an eigenvalue equal to $-1$,
and an eigenvalue less than $-2$, so $G\not\in{\cal H}_1'$.
So only $K_{1,2,\ldots,2}$, $K_{1,\ell,m}$, $K_{1,1,m}$,
and $K_{1,1,2,\ldots,2}$ are not covered yet by the above cases,
and one easily checks that only $K_{1,2,\ldots,2}=G_2(k,1)$ and $K_{1,1,3}=G_1$ survive.
The eigenvalues readily follow by use of Lemma~\ref{ep},
or from~\cite{eh}.
$\Box$
\\
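The spectra claimed in Theorem~\ref{onepos} are easy to confirm numerically; the following numpy sketch (an editorial addition, not part of the original proof) checks $G_1$ and one instance of $G_2(k,m)$:

```python
import numpy as np

def complete_multipartite(sizes):
    """Adjacency matrix of the complete multipartite graph with these class sizes."""
    n = sum(sizes)
    A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
    start = 0
    for s in sizes:
        A[start:start + s, start:start + s] = 0   # no edges inside a class
        start += s
    return A

def spec(A):
    return np.sort(np.linalg.eigvalsh(A))

# G_1 = K_{1,1,3}: claimed spectrum {-2, -1, 0^2, 3}
assert np.allclose(spec(complete_multipartite([1, 1, 3])), [-2, -1, 0, 0, 3])

# G_2(k, m) = K_{2,...,2,m} with k classes: spot-check k = 4, m = 3
k, m = 4, 3
r = np.sqrt(k**2 + 2*(k - 1)*(m - 2))
claimed = sorted([-2]*(k - 2) + [0]*(k + m - 2) + [k - 2 - r, k - 2 + r])
assert np.allclose(spec(complete_multipartite([2]*(k - 1) + [m])), claimed)
```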
The graph $K_{2,\ldots,2}$ with $k\geq 1$ classes is also known as
the {\em cocktail party graph}, usually denoted by $CP(k)$.
We will write $C(k)$ for the adjacency matrix of $CP(k)$.
The spectrum of $CP(k)$ equals $\{-2^{k-1},~0^k,~2k-2\}$.
Note that $CP(k)$ with $k\geq 2$ and $K_{1,4}$ are the only graphs
in $\cal H'$ with just one eigenvalue different from $-2$ or $0$.
Clearly there is no graph in $\cal H'$ with no positive eigenvalue,
so we have the following result:
(The disjoint union of two graphs $G$ and $G'$ is denoted by $G+G'$,
and $mG$ denotes $m$ disjoint copies of $G$.)
\begin{thm}\label{one}
If $G$ is a graph with $n$ vertices and at most one
eigenvalue different from $-2$ and $0$,
then $G=nK_1$, $G=K_{1,4}+(n-5)K_1$, or $G=CP(k)+(n-2k)K_1$ with
$k\geq 2$.
\end{thm}
\begin{cor}\label{disconnected}
If $G$ is a disconnected graph in $\cal H'$, then $G$ is one of
the following:
\begin{itemize}
\item
$G_3=2K_{1,4}$ with spectrum $\{-2^2,~0^{8},~2^2\}$,
\item
$G_4(k)=K_{1,4}+CP(k)$ with spectrum $\{-2^k,~0^{k+3},~2,~2k-2\}$
($k\geq 2$),
\item
$G_5(k,\ell)=CP(k)+CP(\ell)$ with spectrum
$\{-2^{k+\ell-2},~0^{k+\ell},~2\ell-2,~2k-2\}$ ($k\geq\ell\geq 2$).
\end{itemize}
\end{cor}
\noindent
{\bf Proof.}
Suppose $G\in{\cal H'}$ has $c$ components; then each component has
at least one edge, and therefore at least one positive
eigenvalue.
Thus $G$ has $c$ positive eigenvalues.
So, if $G$ is disconnected then $c=2$, and each component is
$K_{1,4}$ or $CP(k)$ with $k\geq 2$.
$\Box$\\
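The spectrum of a disjoint union is the union of the spectra of the components, which makes the corollary easy to verify numerically; a small numpy check (an editorial addition, with $k=3$ chosen arbitrarily) for $G_4(3)=K_{1,4}+CP(3)$:

```python
import numpy as np

def cp(k):
    """Adjacency matrix C(k) of the cocktail party graph CP(k)."""
    A = np.ones((2*k, 2*k), dtype=int) - np.eye(2*k, dtype=int)
    for i in range(0, 2*k, 2):
        A[i, i + 1] = A[i + 1, i] = 0
    return A

def star(n):
    """Adjacency matrix of K_{1,n}: vertex 0 joined to vertices 1..n."""
    A = np.zeros((n + 1, n + 1), dtype=int)
    A[0, 1:] = A[1:, 0] = 1
    return A

k = 3
A = np.block([[star(4), np.zeros((5, 2*k), dtype=int)],
              [np.zeros((2*k, 5), dtype=int), cp(k)]])   # G_4(3) = K_{1,4} + CP(3)
claimed = sorted([-2]*k + [0]*(k + 3) + [2, 2*k - 2])
assert np.allclose(np.sort(np.linalg.eigvalsh(A)), claimed)
```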
\section{Graphs in ${\cal H}'$ with two positive eigenvalues}\label{2}
In this section we determine the set ${\cal H}_2'$ of connected
graphs in ${\cal H}'$ with two positive eigenvalues.
We start with some tools.
\begin{prp}\label{E}
Let $A$ be the adjacency matrix of a graph $G\in{\cal H}_2'$,
and define $E=A(A+2I)$.
Then $E$ is positive semi-definite and rank$(E)=2$.
\end{prp}
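The proposition follows because the eigenvalues $\lambda\in\{-2,0\}$ of $A$ satisfy $\lambda(\lambda+2)=0$, while the two positive eigenvalues of $A$ give positive eigenvalues of $E$. A quick numerical illustration (an editorial addition, using the graph $G_6(3)$ from Theorem~\ref{bipartitecomplement} below):

```python
import numpy as np

m = 3
J, I = np.ones((m, m), dtype=int), np.eye(m, dtype=int)
A = np.block([[J - I, I], [I, J - I]])      # G_6(3), a graph in H_2'

E = A @ (A + 2*np.eye(2*m, dtype=int))      # E = A(A + 2I)
eig = np.linalg.eigvalsh(E)
assert np.all(eig > -1e-9)                  # positive semi-definite
assert np.linalg.matrix_rank(E) == 2        # rank two
```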
We call a pair of vectors ${\bf v}$ and ${\bf w}\in\{0,1\}^n$ {\em almost equal}
whenever ${\bf v}$ and ${\bf w}$ have different weights, and differ in at
most two positions.
\begin{cor}\label{ae}
If $A$ is the adjacency matrix of a graph $G\in{\cal H}_2'$,
then no two columns of $A+I$ are equal or almost equal.
\end{cor}
\noindent{\bf Proof.}
Let $a\geq b$ be the weights of two (almost) equal columns of $A+I$.
Then $a-b\leq 2$ and $E=A(A+2I)=(A+I)^2-I$ has the following
principal submatrix
\[
\left[
\begin{array}{cc}
a-1& b\\b & b-1
\end{array}
\right],
\]
which is not positive semi-definite.
$\Box$
\\
\begin{prp}
The following graphs ($(d)$-$(m)$ represented by their adjacency matrices)
are forbidden induced subgraphs for any graph $G\in{\cal H}_2'$.
\[
(a)\ K_{1,5}
,
\ \ (b)\ K_{2,3}
,
\ \ (c)\ C_5
,
\ \ (d)
{\scriptsize
\left[
\begin{array}{ccccc}
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0
\end{array}
\right]
},
\ (e)
{\scriptsize
\left[
\begin{array}{ccccc}
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0
\end{array}
\right]
},
\]
\[
\ (f)
{\scriptsize
\left[
\begin{array}{cccccc}
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0
\end{array}
\right]
},
\ (g)
{\scriptsize
\left[
\begin{array}{cccccc}
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0
\end{array}
\right]
},
\ (h)
{\scriptsize\left[
\begin{array}{cccccc}
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0
\end{array}
\right]
},
\,(i)\,\,
{\scriptsize
\left[
\begin{array}{cccccc}
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0
\end{array}
\right]
},
\]
\[
\ (j)
{\scriptsize\left[
\begin{array}{cccccc}
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0
\end{array}
\right]
},
\ (k)
{\scriptsize
\left[
\begin{array}{cccccc}
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0
\end{array}
\right]
},
\ (\ell)
{\scriptsize
\left[
\begin{array}{cccccc}
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0
\end{array}
\right]
},
\ (m)
{\scriptsize
\left[
\begin{array}{cccccc}
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 1\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0\\
1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 1&{\hspace{-6pt}} 0&{\hspace{-6pt}} 0
\end{array}
\right]
}.
\]
\end{prp}
\noindent{\bf Proof.}
Interlacing.
Each of the adjacency matrices has smallest eigenvalue less than
$-2$, or more than two positive eigenvalues.
$\Box$
\\
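For illustration, the obstruction used in this proof can be checked numerically for the first few forbidden subgraphs; the following numpy sketch (an editorial addition) verifies it for $(a)$, $(b)$, $(c)$ and $(d)$:

```python
import numpy as np

def star(n):                          # K_{1,n}
    A = np.zeros((n + 1, n + 1), dtype=int)
    A[0, 1:] = A[1:, 0] = 1
    return A

K15 = star(5)
K23 = np.block([[np.zeros((2, 2), int), np.ones((2, 3), int)],
                [np.ones((3, 2), int), np.zeros((3, 3), int)]])
C5 = np.array([[0,1,0,0,1],[1,0,1,0,0],[0,1,0,1,0],[0,0,1,0,1],[1,0,0,1,0]])
d = np.array([[0,0,1,1,1],[0,0,1,1,1],[1,1,0,0,0],[1,1,0,0,1],[1,1,0,1,0]])

for A in (K15, K23, C5, d):
    lam = np.linalg.eigvalsh(A)
    # obstruction from the proof: smallest eigenvalue below -2,
    # or more than two positive eigenvalues
    assert lam.min() < -2 - 1e-9 or (lam > 1e-9).sum() > 2
```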
Let $\alpha=\alpha(G)$ be the maximum size of a coclique (that is,
an independent set of vertices) in $G$.
We will distinguish a number of cases depending on $\alpha$.
\subsection{Graphs in ${\cal H}_2'$ with $\alpha=2$}
\begin{thm}\label{bipartitecomplement}
If $G\in{\cal H}_2'$ and $\alpha(G)=2$,
then ${G}$ is one of the following graphs (represented by their adjacency matrices):
\begin{itemize}
\item
$G_6(m):~
\left[
\begin{array}{cc}
J-I_m & I_m \\
I_m & J-I_m
\end{array}
\right]
$
($m\geq 3$), with spectrum
$\{-2^{m-1},\, 0^{m-1},\, m-2,\, m\}$,
\item
$G_7:~
\left[
\begin{array}{ccc}
J-I_7 & J-I_7 & {\bf 0} \\
J-I_7 & J-I_7 & {\bf 1} \\
{\bf 0}^\top & {\bf 1}^\top & 0
\end{array}
\right]
$
with spectrum $\{-2^7,\, 0^6,\, 7\pm 2\sqrt{7}\}$.
\end{itemize}
\end{thm}
\noindent
{\bf Proof.}
Let $\overline{G}$ be the complement of $G$.
Since $\alpha(G)=2$, $\overline{G}$ has no triangles.
It is well-known that the complement of an odd cycle $C_n$
with $n\geq 5$ has at least three positive eigenvalues.
Therefore, by interlacing, $\overline{G}$ has no induced
odd cycle, so $\overline{G}$ is bipartite and the
adjacency matrix $A$ of $G$ has the following structure:
\[
A=\left[
\begin{array}{cc}
J-I_m & N\\
N^\top & J-I_{m'}
\end{array}
\right].
\]
First we claim that $|m-m'|\leq 1$.
Indeed, suppose $m\leq m'-2$, then $J_{m'}-I$ has eigenvalue $-1$ of multiplicity $m'-1$,
and therefore, by interlacing, $A$ has an eigenvalue $-1$ of multiplicity at least $m'-1-m>0$,
contradiction.
So without loss of generality $m=m'\geq 2$ or $m=m'-1\geq 2$.
Consider four columns ${\bf s}$, ${\bf t}$, ${\bf u}$, ${\bf v}$ of $A+I$, with
${\bf s}$ and ${\bf t}$ in the first part and ${\bf u}$ and ${\bf v}$ in the second part.
Let $m+a$, $m+b$ ($a\leq b$), $m'+c$, and $m'+d$ ($c\leq d$)
be the weights of ${\bf s}$, ${\bf t}$, ${\bf u}$ and ${\bf v}$ respectively,
and define $\lambda={\bf s}^\top {\bf t} -m$ and $\mu={\bf u}^\top {\bf v} -m'$.
Then these four columns give the following submatrix of $E=A(A+2I)$:
\[
E'=\left[
\begin{array}{cccc}
m+a-1 & m+\lambda & a+c & a+d \\
m+\lambda & m+b-1 & b+c & b+d \\
a+c & b+c & m'+c-1 & m'+\mu \\
a+d & b+d & m'+\mu & m'+d-1
\end{array}
\right].
\]
After Gaussian elimination (subtract the first row from second row
and the last row from the third row,
and apply a similar action to the columns) we obtain
\[
E''=\left[
\begin{array}{cccc}
m+a-1 & \lambda-a+1 & c-d & a+d \\
\lambda-a+1 & a+b-2\lambda-2 & 0 & b-a \\
c-d & 0 & c+d-2\mu-2 & \mu-d+1 \\
a+d & b-a & \mu-d+1 & m'+d-1
\end{array}
\right].
\]
By Lemma~\ref{E}, rank$(E'')\leq 2$, which implies that the upper
$3\times 3$ submatrix of $E''$ has determinant $0$, which leads to
\[
(m+a-1)(a+b-2\lambda-2)(c+d-2\mu-2)=(c+d-2\mu-2)
(\lambda-a+1)^2+(a+b-2\lambda-2)(c-d)^2.
\]
By Lemma~\ref{ae}, no two rows of $A+I$ are equal or almost equal
which implies that
$a+b-2\lambda-2\geq 0$ with equality if and only if $a=b=\lambda+1$,
and similarly
$c+d-2\mu-2\geq 0$ with equality if and only if $c=d=\mu+1$.
If $a+b-2\lambda-2$ and $c+d-2\mu-2$ are both nonzero, then we get
\[
m+a-1=(a-\lambda-1)^2 /(a+b-2\lambda-2) + (d-c)^2/(c+d-2\mu-2).
\]
By use of $a\leq b$, $c\leq d$ and $d\geq\mu+1$ we find
$m+a-1\leq(a-\lambda-1)/2 + (d-c)/2 < m$, contradiction.
Therefore $a=b=\lambda+1$ or $c=d=\mu+1$.
When $m=m'$ this gives $N=I_m$ or $N=J-I_m$.
If $m=2$ or $N=J-I_m$, then $G=CP(m)$, and so only $N=I_m$ with
$m\geq 3$ survives.
When $m=m'-1$ we find four possibilities:
$N=[~I_m\ \ {\bf 0}~],\ [~J-I_m\ \ {\bf 1}~],\ [~I_m\ \ {\bf 1}~]$,
or $[~J-I_m\ \ {\bf 0}~]$.
The first two options do not occur, because $A+I$ has almost
equal columns (but note that the second case gives
$K_{1,2,\ldots,2}$).
For the other cases $A$ has equitable partitions with
quotient matrices
\[
\left[
\begin{array}{ccc}
m-1 & 1 & 1 \\
1 & m-1 & 1 \\
m & m & 0
\end{array}
\right]
\mbox{ and }
\left[
\begin{array}{ccc}
m-1 & m-1 & 0 \\
m-1 & m-1 & 1 \\
0 & m & 0
\end{array}
\right],
\]
respectively.
These quotient matrices, and therefore the adjacency matrices, have
three eigenvalues different from $-2$ and $0$, unless $m=2$ in the
first case (which gives $K_{1,2,2}$), and $m=7$ in the second case.
$\Box$
\\
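The two families in the theorem can be confirmed numerically; the following numpy sketch (an editorial addition) checks the claimed spectra of $G_6(m)$ (for one value of $m$) and of $G_7$:

```python
import numpy as np

def g6(m):
    J, I = np.ones((m, m), dtype=int), np.eye(m, dtype=int)
    return np.block([[J - I, I], [I, J - I]])

def g7():
    J, I = np.ones((7, 7), dtype=int), np.eye(7, dtype=int)
    z, o = np.zeros((7, 1), dtype=int), np.ones((7, 1), dtype=int)
    return np.block([[J - I, J - I, z],
                     [J - I, J - I, o],
                     [z.T,   o.T,   np.zeros((1, 1), dtype=int)]])

def spec(A):
    return np.sort(np.linalg.eigvalsh(A))

m = 5
assert np.allclose(spec(g6(m)), sorted([-2]*(m - 1) + [0]*(m - 1) + [m - 2, m]))

r = 2*np.sqrt(7)
assert np.allclose(spec(g7()), sorted([-2]*7 + [0]*6 + [7 - r, 7 + r]))
```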
\subsection{Graphs in ${\cal H}_2'$ with $\alpha\geq 3$}
\begin{thm}\label{remaining}
If $G\in{\cal H}_2'$ and $\alpha(G)\geq 3$, then $G$ is one of
the following graphs (represented by their adjacency matrices):
\begin{itemize}
\item
$G_8(k,\ell):~
\left[
\begin{array}{ccc}
C(k) & O & {\bf 1} \\
O & {\hspace{-6pt}} C(\ell){\hspace{-6pt}} & {\bf 1} \\
{\bf 1}^\top & {\bf 1}^\top & 0
\end{array}
\right]$
with spectrum
$\{-2^{k+\ell-1},\ 0^{k+\ell},\ k+\ell-1\pm\sqrt{(k-\ell)^2 + 1}\}$
\\[3pt]
($k\geq\ell\geq 1$, $k\geq 2$),
\item
$G_9(k):~
\left[
\begin{array}{cccc}
O & O & J_{2,3} & J_{2,2k} \\
O & J-I & J-I & O \\
J_{3,2} & J-I & J-I & J_{3,2k} \\
J_{2k,2} & O & J_{2k,3} & C(k)
\end{array}
\right]$
with spectrum $\{ -2^{k+3},\, 0^{k+3},\, k+3\pm\sqrt{k^2+3}\}$
\\[3pt]
($k\geq 0$, if $k=0$, the last block row and column vanish),
\item
$G_{10}:~$
{\scriptsize
$\left[
\begin{array}{ccccccccc}
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1\\
1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0\\
1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1\\
1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0\\
1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1\\
0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0
\end{array}
\right]
$
}
with spectrum $\{-2^3,\, 0^4,\, 3\pm\sqrt{2}\}$,
\item
$G_{11}:~$
{\scriptsize
$\left[
\begin{array}{ccccccccc}
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&0\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&0\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1\\
1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&1\\
1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&1\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0
\end{array}
\right]
$
} with spectrum $\{-2^3,\ 0^4,\ 2,\ 4\}$,
\item
$G_{12}:~$
{\scriptsize
$\left[
\begin{array}{cccccccc}
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&0\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1\\
1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0{\hspace{-6pt}}&1\\
0{\hspace{-6pt}}&0{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&1{\hspace{-6pt}}&0
\end{array}
\right]
$
} with spectrum $\{-2^2,\ 0^4,\ 1,\ 3\}$.
\end{itemize}
\end{thm}
\noindent{\bf Proof.}
Let $C$ be a coclique of $G$ of size $\alpha\geq 3$.
We assume that $C$ has the largest number of outgoing edges among all
cocliques of size $\alpha$.
The coclique $C$, and the remaining vertices of $G$ give the
following partition of the adjacency matrix $A$ of $G$.
\[
A=\left[
\begin{array}{cc}
O & N \\ N^\top & B
\end{array}
\right]
\]
First we will prove that the matrix $N$ defined above has one of
the following structures:
\[
(i)\ \left[
\begin{array}{cc}
J_{p,a} & O \\
O & J_{q,b}
\end{array}
\right],
\ (ii)\ \left[
\begin{array}{ccc}
J_{p,a} & O & J_{p,c}\\
O & J_{q,b} & J_{q,c}
\end{array}
\right],
\ (iii)\ \ \left[
\begin{array}{cc}
J_{p,a} & J_{p,c} \\
O & J_{q,c}
\end{array}
\right],
\ (iv)\ \left[
\begin{array}{cc}
J_{p,a} & O \\
O & J_{q,b}\\
J_{r,a} & J_{r,b}
\end{array}
\right].
\]
Suppose rank$(N)\geq 3$.
Then $N$ has a $3\times 3$ submatrix of rank $3$, and $A$ has
a nonsingular principal submatrix $A'$ of order $6$ containing
$O$ of order $3$.
Interlacing gives $\lambda_3(A')\geq \lambda_3(O) = 0$.
But $A'$ is nonsingular, so $\lambda_3(A')>0$, and interlacing
gives $\lambda_3(G)>0$, contradiction.
So rank$(N)\leq 2$.
Suppose $N=J$, then $B=J-I$, since otherwise $G$ contains the
forbidden subgraph $K_{2,3}$.
Thus we have $G=K_{\alpha,1,\ldots,1}$
which has been treated in Section~\ref{1}.
We easily have that $N$ has no all-zero column
(because $C$ is maximal),
and no all-zero row (because $G$ is connected).
We conclude that rank$(N)=2$ and $N$ has one of the structures
$(i),(ii),(iii),(iv)$.
We will continue the proof by considering these four cases step
by step.
\subsubsection{Case $(i)$}\label{i}
We have
\[
A=\left[\begin{array}{cccc}
O&O&J_{p,a} & O \\
O&O& O & J_{q,b}\\
J_{a,p}& O & B_1 & M \\
O & J_{b,q} & M^\top & B_2
\end{array}
\right]
\]
Assume $p\leq q$.
If $p=1$, then $B_1=J-I$ (because $C$ is maximal), and $M=O$
(otherwise there would be another coclique of size $\alpha$
with more outgoing edges).
So $G$ is disconnected, contradiction.
If $p\geq 3$, then the forbidden subgraphs $(a),(b),(f)$, and $(i)$
lead to just six possibilities for $(p,q,a,b)$, being:
\[
(3,3,2,2),\ (3,3,2,1),\ (3,3,1,1),\ (3,4,2,1),\ (3,4,1,1),
\ (4,4,1,1).
\]
One easily checks that none of these give a graph in ${\cal H}_2'$.
So $p=2$.
We assume $a\geq b$ (if $p=q=2$ we can do so, and if $q>p$,
then the same forbidden subgraphs as above imply $b\leq2$,
but if $b=2$ and $a=1$ then the last two columns of $A+I$ are equal
or almost equal, which contradicts Corollary~\ref{ae}).
Forbidden subgraph $(d)$ shows that each row of $B_1$ has weight
$a-1$, or $a-2$.
If $B_1$ has an off-diagonal $0$, then the two corresponding rows
have weight $a-2$, and $G$ has another coclique of size $\alpha$.
The two corresponding rows of $M$ have weight $0$, because otherwise
the other coclique would have more outgoing edges.
So if all rows of $B_1$ have weight $a-2$, then $M=O$,
and $G$ is disconnected.
Therefore at least one row (the first one, say) of $B_1$ has
weight $a-1$.
Now rows 1, 3 and $\alpha+1$ of $A$ give the following submatrix of
$E=A(A+2I)$:
\[
E'=\left[
\begin{array}{ccc}
a & 0 & a+1\\
0 & b & x \\
a+1 & x & 1+a+x
\end{array}
\right],
\]
where $x$ is the weight of the first row of $M$.
By Proposition~\ref{E}, rank$(E)=2$, so $\det(E')=0$,
which leads to $b/a=bx-x^2-b$.
Therefore $b/a$ is an integer, and since $a\geq b$,
we have $a=b$ and $2x=a\pm\sqrt{(a-2)^2-8}$.
This equation only has an integer solution if $a=b=5$ and
$x=2$, or $3$.
Forbidden subgraphs $(b)$ and $(f)$ show that $q<3$, so $p=q=2$.
Now also $B_2$ has a row (the first one say) with weight $b-1=4$,
and the corresponding row of $A$ together with the three rows
considered above, give a $4\times 4$ submatrix of $E$ of rank~$3$,
contradiction with Proposition~\ref{E}.
\subsubsection{Cases $(ii)$ and $(iii)$}\label{iii}
We consider case $(iii)$, but include $(ii)$ by
allowing $a=0$, or $b=0$ (not both).
The forbidden subgraph $K_{1,5}$ gives $p+q=\alpha\leq 4$.
First consider $\alpha=4$.
Then forbidden subgraphs $(b)$ and $(i)$
give $c=1$, and $(b)$, $(k)$, $(\ell)$ and $(j)$ imply that $p=q=2$.
We assume $a\geq b$.
Moreover, the forbidden graph $(m)$ shows that the last vertex of
$A$ is adjacent to all other vertices.
So $A$ has the following structure
\[
A=\left[
\begin{array}{ccccc}
O&O&J_{2,a} & O &{\bf 1} \\
O&O& O & J_{2,b}&{\bf 1} \\
J_{a,2}& O & B_1 & M&{\bf 1} \\
O & J_{b,2} & M^\top & B_2 &{\bf 1} \\
{\bf 1}^\top & {\bf 1}^\top &{\bf 1}^\top &{\bf 1}^\top &0
\end{array}
\right].
\]
As in Section~\ref{i}, $p=2$ implies that each row of $B_1$ has
weight $a-1$ or $a-2$.
If some row of $B_1$ (the first one, say) has weight $a-1$, then
rows 1, 3 and 5 of $A$ give the following submatrix of $E=A(A+2I)$:
\[
E'=\left[
\begin{array}{ccc}
a+1 & 1 & a+1\\
1 & b+1 & x+1 \\
a+1 & x+1 & 1+a+x
\end{array}
\right],
\]
where $x$ is the weight of the first row of $M$.
By Proposition~\ref{E} $\det(E')=0$, which implies that $x\neq b$
and $x=1+1/(a+1)+1/(b-x)$, which has no solution (because $a\geq b$).
Therefore each row of $B_1$ has weight $a-2$, and $B_1=C(a/2)$.
But then $M=O$, since otherwise $G$ would have another coclique of
size~$4$ with more outgoing edges.
Also $B_2=C(b/2)$ ($CP(0)$ is the graph with the empty vertex set),
because if a row of $B_2$ had weight $b-1$,
then the corresponding row of $A+I$ and row~3 of $A+I$ are almost
equal, which contradicts Corollary~\ref{ae}.
Now reordering rows and columns of $A$ shows that $G=G_8(k,\ell)$.
The eigenvalues readily follow by use of Lemma~\ref{ep}.
\\
Next we consider $\alpha=3$.
Then we take $p=1$, $q=2$.
The forbidden graphs $(b)$ and $(f)$ show that $c=1$ or $2$.
First suppose $c=1$.
Then
\[
A=\left[
\begin{array}{ccccc}
O&O&{\bf 1}^\top & O &1 \\
O&O& O & J_{2,b}&{\bf 1} \\
{\bf 1}& O & B_1 & M&{\bf v} \\
O & J_{b,2} & M^\top & B_2 &{\bf w} \\
1 & {\bf 1}^\top &{\bf v}^\top &{\bf w}^\top &0
\end{array}
\right].
\]
We have $B_1=J-I$, because $C$ is maximal.
Let $x$ be the weight of an arbitrary row of $M$.
If $x=0$, then the corresponding row of $A+I$ is equal or almost equal
to the first row of $A+I$.
So $x\geq 1$.
Because no coclique of size $3$ has more outgoing edges than $C$,
it follows that $x=1$ and ${\bf v}={\bf 0}$.
Forbidden subgraph $(e)$ gives ${\bf w}={\bf 1}$.
But now the last row of $A+I$ is almost equal to the second row.
So there is no solution when $\alpha=3$ and $c=1$.
(Note that the conclusion also holds if $a=0$ or $b=0$.)
Suppose $c=2$.
We assume $p=1$, $q=2$.
Then $A$ has the following structure.
\[
A=\left[
\begin{array}{ccccc}
O&O& J_{1,a} & O &{\bf 1}^\top \\
O&O& O & J_{2,b}&J_{2,2} \\
J_{a,1} & O & B_1 & M& M_1 \\
O & J_{b,2} & M^\top & B_2 &M_2 \\
{\bf 1} & J_{2,2} & M_1^\top &M_2^\top & B_3
\end{array}
\right].
\]
Like before, we have that $B_1=J-I$.
Forbidden subgraphs $(b)$ and $(e)$ give that
$B_3=J-I$, and $M_2=J$.
Note that if $a=0$, then the last two rows of $A+I$
are equal, and if $b=0$ the first and the fourth row are
(almost) equal.
So $a\geq 1$, and $b\geq 1$.
Moreover, subgraphs $(g)$ and $(h)$ imply that
each row of $M_1$ equals $[0~1]$, or $[1~0]$.
Also, by the same arguments as before each row of $M$
has weight $1$, and each row of $B_2$ has weight
$b-1$ or $b-2$.
Consider the submatrix of $E$ made from the second, the fourth
and the last two rows of $A$,
and then replace the last two rows and columns by their sum.
Then we get
\[
E'=\left[
\begin{array}{ccc}
b+2 & 2 & 2b+6 \\
2 & a+2 & a+6 \\
2b+6 & a+6 & a+4b+18
\end{array}
\right].
\]
We have rank$(E')\leq\mbox{rank}(E)=2$,
so $0=\det(E')=2(a-2)(ab+2a+2b)$, hence $a=2$.
Thus $M_1=[{\bf 1}_2 \ O]$ or $[I_2 \ O]$.
However, the second option does not occur, because it
would give a column in $A+I$ which is almost equal to the last
column of $A+I$.
Similarly, if a column of $B_2$, different from the first one,
has weight $b-1$, then the corresponding column in $A+I$ is
almost equal to the last one.
So these columns of $B_2$ all have weight~$b-2$.
However, the first row and column of $B_2$ have weight $b-1$,
since if some off-diagonal entry (the second, say) of
the first row of $B_2$ is zero, then rows and columns
2, 3, 4, 6, 7 give the forbidden subgraph $(e)$.
Now there is just one family of solutions for $A$,
which, after reordering rows and columns shows that $G=G_9(k)$.
The spectrum follows in a straightforward way by use of
Lemma~\ref{ep}.
This finishes cases~$(ii)$ and $(iii)$.
\subsubsection{Case $(iv)$}
We have the following adjacency matrix.
\[
A=\left[
\begin{array}{ccccc}
O&O&O& J_{p,a} & O \\
O&O&O& O & J_{q,b} \\
O&O&O& J_{r,a} & J_{r,b} \\
J_{a,p} & O & J_{a,r} & B_1 & M \\
O & J_{b,q} & J_{b,r} & M^\top & B_2
\end{array}
\right].
\]
We assume $p\leq q$.
There is a somewhat special behaviour when $p=r=b=1$, so
we treat this case first.
Forbidden subgraphs $(a)$, $(b)$ and $(d)$ show that $q\leq 3$ and that each row of $B_1$ has weight at least $a-2$.
If some row of $B_1$ had weight $a-1$, then $A+I$ would have two almost equal rows; hence every row has weight $a-2$, and $B_1$ is the adjacency matrix of $CP(a/2)$.
Two nonadjacent vertices in $CP(a/2)$ together with $q$ vertices of $C$ make another coclique in $G$ of size $\alpha$ which,
by assumption, has not more outgoing edges than $C$.
Therefore the two corresponding entries of $M$ cannot both be equal to $1$ and therefore the weight $x$ of $M$ satisfies $x\leq a/2$.
The first, the second and the last row of $A$ give the following submatrix of $E=A(A+2I)$:
\[
E'=\left[
\begin{array}{ccc}
a & 0 & x \\
0 & 1 & 2 \\
x & 2 & x+q+1
\end{array}
\right].
\]
Proposition~\ref{E} gives $\det(E')=0$, which implies $2x=a\pm\sqrt{(a+2(q-3))^2-4(q-3)^2}$.
If $q=1$ we get $a=9$, which is not possible.
If $q=2$ we get $a=4$, $x=2$, which gives $G_{10}$.
If $q=3$ there is no solution, because $G$ contains a forbidden subgraph $(k)$ or $(\ell)$.
\\
Next we treat the case $p=q=r=1$ and $b\geq 2$.
Then, like before, forbidden subgraphs $(b)$ and $(d)$ imply
that each row of $B_1$ has weight $a-1$ or $a-2$.
Suppose all rows have weight $a-2$, so $B_1$ represents $CP(a/2)$.
Now the first, the second and $k$-th row of $A$
($4\leq k\leq a+3$) give the following submatrix of $E=A(A+2I)$:
\[
E'=\left[
\begin{array}{ccc}
a & a & 0 \\
a & a+x & x \\
0 & x & b
\end{array}
\right],
\]
where $x$ is the weight of the corresponding row in $M$.
This matrix is singular, which implies that $x=0$, or $x=b$.
Since $b\geq 2$, no row or column of $B_2$ has weight $b-1$, since
otherwise $A+I$ would have two equal or almost equal columns.
So $B_2$ is the adjacency matrix of $CP(b/2)$,
and the columns of $M$ have weight $0$ or $a$.
So $M=O$ or $M=J$, but $M=J$ is impossible,
because then one vertex of $C$ and two nonadjacent vertices of
$CP(a/2)$ make up a coclique of size~3
with more outgoing edges than $C$ has.
Thus we find an equitable partition of $A$ with quotient matrix
\[
Q=\left[
\begin{array}{ccccc}
0&0&0& a & 0 \\
0&0&0& 0 & b \\
0&0&0& a & b \\
1&0&1& a-2 & 0 \\
0&1&1& 0 & b-2
\end{array}
\right].
\]
For all positive integers $a$ and $b$, $Q$ and hence $A$ has at
least three eigenvalues different from $-2$ and $0$.
We conclude that at least one row of $B_1$ has
weight $a-1$ and find the following submatrix $E'$ of $E=A(A+2I)$:
\[
E'=\left[
\begin{array}{ccc}
a & 0 & a+1\\
0 & b & x \\
a+1 & x & a+x+1
\end{array}
\right],
\]
where $x$ is the weight of the corresponding row of $M$.
Now $\det(E')=0$ gives $x^2-xb+b+b/a = 0$.
We assume $a\geq b$; then $b/a$ is an integer,
so $b=a$ and $2x=b\pm\sqrt{(b-2)^2-8}$.
This only has an integer solution if $b=5$.
Then $x=2$ or $x=3$.
It is straightforward to check that there is
no matrix $A$ that satisfies all requirements.
[A quick way to see this is by extending $E'$ above to a $4\times 4$
matrix $E''$ by considering also a row of $B_2$ with weight~$4$.
If $x'$ is the weight of the corresponding row of $M^\top$,
then rank$(E'')=2$ implies that $x=2$, $x'=3$ (or vice versa).
This is clearly impossible if all rows of $B_1$ and $B_2$ have
weight~$4$.
However, if a number of rows of $B_1$ and $B_2$ have weight $3$,
then this number is even and the corresponding rows of $M$ and
$M^{\bf t}op$ have weight $0$ or $5$ (see above).
This gives that the sum of row weights of $M$ is equal to
$2$ or $1$ (mod~$5$), whilst the column weights add up to
$3$ or $4$ (mod~$5$), which is of course impossible.]
This finishes the case $p=q=r=1$.
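The determinant identity used in this case can be double-checked mechanically; a quick sketch (ours, not from the paper):

```python
# Check (ours) of the determinant computation above: for
# E' = [[a, 0, a+1], [0, b, x], [a+1, x, a+x+1]] we have
# det E' = abx - ab - ax^2 - b, which for a = b = 5 vanishes only at x = 2, 3.

def det3(m):
    (p, q, r), (s, t, u), (v, w, z) = m
    return p * (t * z - u * w) - q * (s * z - u * v) + r * (s * w - t * v)

def detE(a, b, x):
    return det3([[a, 0, a + 1], [0, b, x], [a + 1, x, a + x + 1]])

assert all(detE(a, b, x) == a * b * x - a * b - a * x * x - b
           for a in range(1, 8) for b in range(1, 8) for x in range(8))
assert [x for x in range(8) if detE(5, 5, x) == 0] == [2, 3]
```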
The next case is $p=r=1$, $q\geq 2$.
Forbidden subgraphs $(a)$ and $(b)$ give $q=2$ and $b\leq 2$,
or $q=3$, $b=1$.
The case $b=1$ is solved above, so we only need to deal with
$b=q=2$.
Then forbidden subgraph $(b)$ gives that $B_2=J-I$.
We consider the submatrix $E'$ of $E=A(A+2I)$ formed by the first,
the second and the last row of $A$.
Then
\[
E'=\left[
\begin{array}{ccc}
a & 0 & x\\
0 & 2 & 3 \\
x & 3 & x+4
\end{array}
\right],
\]
where $x$ is the weight of the second column of $M$.
We have $0=\det(E')=a(2x-1)-2x^2$; since $2x-1$ must then divide $2x^2=x(2x-1)+x$, and hence $x$, this implies $x=1$, $a=2$.
Now it is straightforward to check that there is no solution.
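This determinant computation can likewise be checked by brute force; a short sketch (ours, not from the paper):

```python
# Check (ours): det E' = a(2x - 1) - 2x^2 for E' = [[a,0,x],[0,2,3],[x,3,x+4]],
# and the only zero with a >= 1, x >= 0 in a generous search range is (a, x) = (2, 1).

def detE(a, x):
    E = [[a, 0, x], [0, 2, 3], [x, 3, x + 4]]
    # cofactor expansion along the first row; the middle term vanishes since E[0][1] = 0
    return (E[0][0] * (E[1][1] * E[2][2] - E[1][2] * E[2][1])
            + E[0][2] * (E[1][0] * E[2][1] - E[1][1] * E[2][0]))

assert all(detE(a, x) == a * (2 * x - 1) - 2 * x * x
           for a in range(1, 10) for x in range(10))
assert [(a, x) for a in range(1, 100) for x in range(100)
        if detE(a, x) == 0] == [(2, 1)]
```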
Finally we consider the case that $3\leq p+r\leq q+r$.
Forbidden subgraphs $(a)$ and $(b)$ lead to $p+r\leq 4$,
$a\leq 2$, $b\leq 2$,
$a\leq 1$ if $p+r=4$, and $b\leq 1$ if $q+r=4$.
This gives only a short list of possibilities for $(p,q,r,a,b)$,
being:
\[
\begin{array}{cccccc}
(2,2,1,2,2),&
(2,2,1,2,1),&
(2,2,1,1,1),&
(2,3,1,2,1),&
(2,3,1,1,1),&
(3,3,1,1,1),
\\
(1,1,2,2,2),&
(1,1,2,2,1),&
(1,1,2,1,1),&
(1,2,2,2,1),&
(1,2,2,1,1),&
(2,2,2,1,1).
\end{array}
\]
Only the first and the last $5$-tuple correspond to a solution:
$G_{11}$ and $G_{12}$, respectively.
$\Box$\\
\section{Conclusions}
The graphs in ${\cal H}'$ are given in
Theorems~\ref{onepos}, \ref{bipartitecomplement}, \ref{remaining}
and Corollary~\ref{disconnected}.
The graphs $G_8(2,2)$ and $G_{11}$ are the only two nonisomorphic
cospectral graphs in ${\cal H}'$.
All other pairs of cospectral graphs in ${\cal H}$ consist of pairs of
graphs in ${\cal H}'$ with the same nonzero part of the spectrum
(see Theorem~\ref{conclusion} below),
where one or both graphs are extended with isolated vertices,
such that the numbers of vertices in both graphs are equal.
The following theorem follows straightforwardly from the list of graphs in ${\cal H}'$.
\begin{thm}\label{conclusion}
Two nonisomorphic graphs $G,~G'\in{\cal H}'$ have equal nonzero parts of the spectrum if and only if one of the following holds:
\begin{itemize}
\item
$G=G_0(\ell,m)\ (= K_{\ell,m})$, $G'=G_0(\ell',m')\ (= K_{\ell',m'})$, where $\ell m=\ell'm'$,
\item
$G,~G'\in\{G_4(k),~G_5(k,2)\}$ ($k\geq 2$),
\item
$G,~G'\in\{G_5(k+1,k),~G_6(2k),~G_8(k,k)\}$ ($k\geq 2$),
\item
$G,~G'\in\{G_6(3),~G_{12}\}$,
\item
$G,~G'\in\{G_4(4),~G_5(4,2),~G_9(1)\}$,
\item
$G,~G'\in\{G_4(3),~G_5(3,2),~G_6(4),~G_8(2,2),~G_{11}\}$,
\item
$G,~G'\in\{G_3,~G_4(2),~G_5(2,2)\}$.
\end{itemize}
\end{thm}
\begin{cor}
A graph $G\in{\cal H}'$ is not determined by the spectrum of the adjacency matrix if and only if $G$ is one of the following:
\begin{itemize}
\item
$G_0(\ell,m)\ (=K_{\ell,m})$, where $\ell m$ has a divisor
strictly between $\ell$ and $m$,
\item
$G_4(k),~G_5(k+1,k),~G_8(k,k)$ with $k\geq 2$,
\item
$G_3,~G_5(4,2),~G_{11},~G_{12}$.
\end{itemize}
\end{cor}
For every graph in $\cal H$ we can decide whether it is determined
by the spectrum of the adjacency matrix by use of
Theorem~\ref{conclusion}.
For example $CP(k)+ K_1$, which is the complement of the friendship
graph, is determined by its spectrum because every graph with
the same nonzero part of the spectrum equals $CP(k)+mK_1$ for some $m\geq 0$.
This result was already proved in \cite{ajo}.
Also the spectral characterisation of complete multipartite graphs
is known, see~\cite{eh}.
\section*{Acknowledgments} The research of S.M. Cioab\u{a} was supported by the NSF Grant DMS 160078.
\end{document} |
\begin{document}
\begin{abstract}
Let $G$ (resp.\ $H$) be the group of orientation-preserving self-homeomorphisms of the unit circle (resp.\ real line). In previous work, the first two authors constructed pre-Tannakian categories $\uRep(G)$ and $\uRep(H)$ associated to these groups. In the predecessor to this paper, we analyzed the category $\uRep(H)$ (which we named the ``Delannoy category'') in great detail, and found it to have many special properties. In this paper, we study $\uRep(G)$. The primary difference between these two categories is that $\uRep(H)$ is semi-simple, while $\uRep(G)$ is not; this introduces new complications in the present case. We find that $\uRep(G)$ is closely related to the combinatorics of objects we call Delannoy loops, which do not seem to have been studied previously.
\end{abstract}
\title{The circular Delannoy category}
\tableofcontents
\section{Introduction}
\subsection{Background}
A common practice in mathematics is to identify the key features of a set of examples, and take these as the defining axioms for a more general class of objects. In the setting of tensor categories, representation categories of algebraic groups are a fundamental set of examples. Taking their key features\footnote{Of course, pre-Tannakian categories are not the only generalization of classical representation categories. For instance, one can relax the symmetry of the tensor product to a braiding. However, of the different generalizations, pre-Tannakian categories seem to be closest to group representations.} as axioms leads to the notion of \defn{pre-Tannakian category} (see \cite[\S 2.1]{ComesOstrik} for a precise definition). Understanding these categories is an important problem within the field of tensor categories. The last twenty years has seen important progress, but the picture is still far from clear.
One difficulty in the study of pre-Tannakian categories is that few examples are known, beyond those coming from (super)groups. In recent work \cite{repst}, the first two authors gave a general construction of pre-Tannakian categories. Given an oligomorphic group $G$ equipped with a piece of data called a measure $\mu$, we constructed a tensor category $\uPerm(G; \mu)$ analogous to the category of permutation representations. This category sometimes has a pre-Tannakian abelian envelope $\uRep(G; \mu)$. The basic ideas are reviewed in \S \ref{s:olig}. In forthcoming work \cite{discrete}, we show that any discrete pre-Tannakian category (i.e., one generated by an \'etale algebra) comes from this construction. If pre-Tannakian categories are an abstraction of algebraic groups, then the discrete ones are an abstraction of finite groups. Thus this is a very natural class of examples.
The simplest example of an oligomorphic group is the infinite symmetric group. In this case, the theory of \cite{repst} leads to Deligne's interpolation category $\uRep(\mathfrak{S}_t)$ \cite{Deligne3}; we note that Deligne's work predated (and motivated) \cite{repst}. This category (and related examples) has received much attention in the literature, e.g., \cite{ComesOstrik1, ComesOstrik, Etingof1, Etingof2, EntovaAizenbudHeidersdorf, Harman, Harman2, Knop, Knop2}.
Currently, there are only (essentially) three known examples of pre-Tannakian categories coming from oligomorphic groups that fall outside the setting of Deligne interpolation:
\begin{enumerate}[label=(\alph*)]
\item The group $H$ of all orientation-preserving self-homeomorphisms of the real line carries four measures; one of these is known to yield a pre-Tannakian category $\uRep(H)$.
\item The group $G$ of all orientation-preserving self-homeomorphisms of the unit circle carries a unique measure, and this yields a pre-Tannakian category $\uRep(G)$.
\item The automorphism group of the universal boron tree carries two measures, one of which is known to yield a pre-Tannakian category; see \cite[\S 18]{repst}.
\end{enumerate}
We do not know if the other three measures for $H$, or the other measure in case (c), can be associated to pre-Tannakian categories. This is an important problem.
Deligne's categories have been well-studied, but since they are closely tied to finite groups they may not be representative of general pre-Tannakian categories. We therefore feel it is important to carefully study the above three examples. In our previous paper \cite{line}, we studied $\uRep(H)$ in detail. We named it the \emph{Delannoy category}, due to its connection to Delannoy paths, and found that it has numerous special properties. In this paper, we study $\uRep(G)$. While $\uRep(G)$ is closely related to $\uRep(H)$, the latter is semi-simple and the former is not, and this introduces a number of new difficulties. In the rest of the introduction, we recall some background on these categories and then state our main results.
\subsection{Representations of $H$}
We briefly recall the construction of $\uRep(H)$, and some results from \cite{line}. See \S \ref{s:olig} and \S \ref{s:setup} for details. We fix an algebraically closed field $k$ of characteristic~0 in what follows.
Let $\mathbf{R}^{(n)}$ be the subset of $\mathbf{R}^n$ consisting of tuples $(x_1, \ldots, x_n)$ with $x_1<\cdots<x_n$. The group $H$ acts transitively on this set. We say that a function $\varphi \colon \mathbf{R}^{(n)} \to k$ is \defn{Schwartz} if it assumes finitely many values and its level sets can be defined by first-order formulas using $<$, $=$, and finitely many real constants; equivalently, the stabilizer in $H$ of $\varphi$ is open in the natural topology. We define the \defn{Schwartz space} $\mathcal{C}(\mathbf{R}^{(n)})$ to be the space of all Schwartz functions. An important feature of this space is that there is a notion of integral, namely, integration with respect to Euler characteristic (as defined by Schapira and Viro \cite{Viro}).
We define a category $\uPerm(H)$ by taking the objects to be formal sums of $\mathcal{C}(\mathbf{R}^{(n)})$'s, and the morphisms between basic objects to be $H$-invariant integral operators. This category admits a tensor structure $\mathbin{\ul{\otimes}}$ by
\begin{displaymath}
\mathcal{C}(\mathbf{R}^{(n)}) \mathbin{\ul{\otimes}} \mathcal{C}(\mathbf{R}^{(m)}) = \mathcal{C}(\mathbf{R}^{(n)} \times \mathbf{R}^{(m)}),
\end{displaymath}
where on the left side we decompose $\mathbf{R}^{(n)} \times \mathbf{R}^{(m)}$ into orbits (which have the form $\mathbf{R}^{(s)}$ for various $s$), and then take the corresponding formal sum of Schwartz spaces. The category $\uRep(H)$ is equivalent to (the ind-completion of) the Karoubian envelope of $\uPerm(H)$. This characterization of $\uRep(H)$ relies on non-trivial results of \cite{repst}.
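As a small worked example (spelled out by us, following the rule just stated): the $H$-orbits on $\mathbf{R}^{(1)} \times \mathbf{R}^{(1)} = \mathbf{R}^2$ are the loci $x<y$, $x=y$, and $x>y$, which are isomorphic to $\mathbf{R}^{(2)}$, $\mathbf{R}^{(1)}$, and $\mathbf{R}^{(2)}$ respectively, so
\begin{displaymath}
\mathcal{C}(\mathbf{R}^{(1)}) \mathbin{\ul{\otimes}} \mathcal{C}(\mathbf{R}^{(1)}) = \mathcal{C}(\mathbf{R}^{(2)}) \oplus \mathcal{C}(\mathbf{R}^{(1)}) \oplus \mathcal{C}(\mathbf{R}^{(2)}).
\end{displaymath}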
In \cite{line}, we determined the structure of $\uRep(H)$ in greater detail. The construction and classification of its simple objects will play an important role in this paper. A \defn{weight} is a word in the alphabet $\{{\bullet}, {\circ}\}$. Given a weight $\lambda$, we define $L_{\lambda}$ to be the submodule of $\mathcal{C}(\mathbf{R}^{(n)})$ generated by a certain explicit family of Schwartz functions (see \S \ref{ss:RepH}). We showed that these account for all the simple objects of $\uRep(H)$. As we mentioned above, the category $\uRep(H)$ is semi-simple; this follows from general results from \cite{repst}.
\subsection{Representations of $G$}
We now recall the construction of $\uRep(G)$, which is similar to the above. Let $\mathbf{S}^{\{n\}}$ be the subset of $\mathbf{S}^n$ consisting of tuples $(x_1, \ldots, x_n)$ such that $x_i$ is strictly between $x_{i-1}$ and $x_{i+1}$ (in counterclockwise cyclic order) for all $i \in \mathbf{Z}/n$. The group $G$ acts transitively on $\mathbf{S}^{\{n\}}$, and this space plays an analogous role to $\mathbf{R}^{(n)}$ from the $H$ theory. We have a notion of Schwartz space $\mathcal{C}(\mathbf{S}^{\{n\}})$ and integration just as before, and this leads to a category\footnote{The category $\uPerm(G)$ defined in \cite{repst} is a little bigger than the $\uPerm^{\circ}(G)$ category defined here.} $\uPerm^{\circ}(G)$. The category $\uRep(G)$ is equivalent to (the ind-completion of) the abelian envelope of $\uPerm^{\circ}(G)$. Again, this relies on non-trivial results from \cite{repst}. Since $\uRep(G)$ is not semi-simple, we can no longer simply take the Karoubian envelope to obtain the abelian envelope.
Fix a point $\infty \in \mathbf{S}$ and identify $\mathbf{R}$ with $\mathbf{S} \setminus \{\infty\}$. Then $H$ is identified with the stabilizer of $\infty$ in $G$, and as such is an open subgroup of $G$. There is thus a restriction functor $\uRep(G) \to \uRep(H)$, which admits a two-sided adjoint called induction. Since we already know the structure of $\uRep(H)$ from \cite{line}, restriction and induction will be very useful in the study of $\uRep(G)$.
\subsection{Results} \label{ss:results}
We now discuss our main results about the category $\uRep(G)$.
\textit{(a) Classification of simples.} Given a non-empty weight $\lambda$, we define $\Delta_{\lambda}$ to be the $\ul{G}$-submodule of $\mathcal{C}(\mathbf{S}^{\{n\}})$ generated by an explicit set of Schwartz functions. The group $\mathbf{Z}/n$ acts on $\mathcal{C}(\mathbf{S}^{\{n\}})$ by cyclically permuting coordinates, and $\Delta_{\lambda}$ is stable under the action of the subgroup $\Aut(\lambda)$ consisting of cyclic symmetries of $\lambda$. We let $\Delta_{\lambda,\zeta}$ be the $\zeta$-eigenspace of the generator of $\Aut(\lambda)$, where $\zeta$ is an appropriate root of unity. We show that $\Delta_{\lambda,\zeta}$ has a unique maximal proper $\ul{G}$-submodule, and thus a unique simple quotient $M_{\lambda,\zeta}$. We show that the $M_{\lambda,\zeta}$, together with the trivial module, account for all simple $\ul{G}$-modules. The simples $M_{\lambda,\zeta}$ and $M_{\mu,\omega}$ are isomorphic if and only if $\mu$ is a cyclic shift of $\lambda$ and $\omega=\zeta$.
We say that $(\lambda,\zeta)$ is \defn{special} if $\lambda$ consists only of ${\bullet}$'s, or only of ${\circ}$'s, and $\zeta=(-1)^{\ell(\lambda)+1}$; otherwise, we say that $(\lambda,\zeta)$ is \defn{generic}. In the generic case, $\Delta_{\lambda,\zeta}$ is in fact already simple. In the special case, the module $\Delta_{\lambda,\zeta}$ has length two. The generic simples have categorical dimension~0, while the special simples have dimension $\pm 1$.
\textit{(b) Structure of general modules.} Let $\mathcal{C}=\uRep^{\mathrm{f}}(G)$. Define $\mathcal{C}_{\rm gen}$ (resp.\ $\mathcal{C}_{\rm sp}$) to be the full subcategory of $\mathcal{C}$ spanned by modules whose simple constituents are generic (resp.\ special). We prove the following three statements:
\mathbf{e}gin{itemize}
\item The category $\mathcal{C}$ is the direct sum of the subcategories $\mathcal{C}_{\rm gen}$ and $\mathcal{C}_{\rm sp}$.
\item The category $\mathcal{C}_{\rm gen}$ is semi-simple.
\item The category $\mathcal{C}_{\rm sp}$ is equivalent to the category of finitely generated $\mathbf{Z}$-graded $R$-modules, where $R=k[x,y]/(x^2,y^2)$, and $x$ and $y$ have degrees $-1$ and $+1$.
\end{itemize}
We note that the above results are just statements about $\mathcal{C}$ as an abelian category, and ignore the tensor structure. Using the above results, we classify the indecomposable $\ul{G}$-modules; they are ``zigzag modules,'' which appear frequently across representation theory.
\textit{(c) Branching rules.}
We determine the induction and restriction rules between $G$ and $H$. For a simple $L_\lambda$ in $H$, we determine the decomposition of its induction $I_{\lambda}$ into indecomposable projectives (Proposition~\ref{prop:I-decomp}); this decomposition is multiplicity-free, and at most one of the projectives belongs to the special block (the rest are generic simples). Since $\uRep(H)$ is semisimple, this determines the induction of any object in $\uRep(H)$. Additionally, for a simple object $M_{\lambda,\zeta}$ in $\uRep(G)$ we determine the irreducible decomposition of $\Res^G_H(M_{\lambda,\zeta})$. This determines the restriction of any object in $\uRep(G)$, provided we know its simple constituents.
\textit{(d) Semisimplification.} The \defn{semisimplification} of a symmetric tensor category is the category obtained by killing the so-called negligible morphisms; see \cite{EtingofOstrik} for general background. We show that the semisimplification of $\uRep^{\mathrm{f}}(G)$ is equivalent (as a symmetric tensor category) to the category of bi-graded vector spaces equipped with a symmetric structure similar to that of super vector spaces. We note that the final answer and the proof are quite similar to the computation of the semisimplification of $\Rep(\mathbf{GL}(1|1))$ (see \cite{Heidersdorf}); it would be interesting if there were a conceptual explanation for this.
\textit{(e) Loop model.} We show that $\uRep(G)$ is closely related to a category constructed from combinatorial objects we introduce called \defn{Delannoy loops}. These are similar to the well-known Delannoy paths, but (as far as we can tell) have not been previously studied.
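For context, classical Delannoy paths from $(0,0)$ to $(m,n)$ use unit steps east, north, and northeast, and are counted by the Delannoy numbers. A minimal sketch of the standard recurrence (ours; it does not compute the Delannoy loops introduced in this paper):

```python
# Delannoy number D(m, n): lattice paths from (0, 0) to (m, n) with unit
# steps east, north, and northeast.  Standard combinatorics, included
# only as background for the "Delannoy" terminology.
from functools import lru_cache

@lru_cache(maxsize=None)
def delannoy(m, n):
    if m == 0 or n == 0:
        return 1
    return delannoy(m - 1, n) + delannoy(m, n - 1) + delannoy(m - 1, n - 1)

# central Delannoy numbers: 1, 3, 13, 63, 321, ...
assert [delannoy(k, k) for k in range(5)] == [1, 3, 13, 63, 321]
```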
\subsection{Notation}
We list some of the most important notation here:
\begin{description}[align=right,labelwidth=2.25cm,leftmargin=!]
\item[ $k$ ] the coefficient field (algebraically closed of characteristic~0)
\item[ $\mathbf{1}$ ] the trivial representation
\item[ $\mathbf{S}$ ] the circle
\item[ $x<y<z$ ] the cyclic order on $\mathbf{S}$, a ternary relation
\item[ $\mathbf{R}$ ] the real line, identified with $\mathbf{S} \setminus \{\infty\}$ (see \S \ref{ss:groups})
\item[ $\mathbf{S}^{\{n\}}$ ] the subset of $\mathbf{S}^n$ consisting of cyclically ordered tuples (see \S \ref{ss:groups})
\item[ $\mathbf{R}^{(n)}$ ] the subset of $\mathbf{R}^n$ consisting of totally ordered tuples (see \S \ref{ss:groups})
\item[ $G$ ] the group $\Aut(\mathbf{S},<)$ (except in \S \ref{s:olig})
\item[ $H$ ] the group $\Aut(\mathbf{R},<)$ (except in \S \ref{s:olig})
\item[ $G(b)$ ] the subgroup of $G$ fixing each element of $b$
\item[ {$G[b]$} ] the subgroup of $G$ fixing $b$ as a set
\item[ $\sigma$ ] the generator of $\mathbf{Z}/n$, which acts on $\mathbf{S}^{\{n\}}$ by permuting the subscripts (see \S \ref{ss:groups}) and on weights of length $n$ by cyclically permuting the letters (see \S \ref{ss:weights})
\item[ $\tau$ ] the standard generator of $G[b]/G(b)$ (see \S \ref{ss:groups})
\item[ $\lambda$ ] a weight (see \S \ref{ss:weights})
\item[ $\gamma_i$ ] the $i$th cyclic contraction (see \S \ref{ss:weights})
\item[ $g(\lambda)$ ] the order of $\Aut(\lambda)$ (see \S \ref{ss:weights})
\item[ $N(\lambda)$ ] $\ell(\lambda)/g(\lambda)$ (see \S \ref{ss:weights})
\item[ $L_{\lambda}$ ] a simple $H$-module (see \S \ref{ss:RepH})
\item[ $(-)^{\dag}$ ] the transpose functor (see \S \ref{ss:transp})
\item[ $I_{\lambda}$ ] the induction of $L_{\lambda}$ from $H$ to $G$ (see \S \ref{ss:ind})
\item[ $\Delta_{\lambda}$ ] a standard $G$-module (see Definition~\ref{defn:std})
\item[ $M_{\lambda,\zeta}$ ] a simple $G$-module (see Definition~\ref{defn:M})
\item[ $\pi(n)$ ] the weight with all ${\bullet}$'s or ${\circ}$'s (see \S \ref{ss:fine-intro})
\end{description}
\subsection*{Acknowledgments}
NS was supported in part by NSF grant DMS-2000093.
\section{Generalities on oligomorphic groups} \label{s:olig}
In this section, we recall some key definitions and results from \cite{repst}. Since we already provided an overview of this theory in \cite[\S 2]{line}, we keep this discussion very brief. We close with a short discussion of Mackey theory, which is new.
\subsection{Admissible groups}
An \defn{admissible group} is a topological group $G$ that is Hausdorff, non-archimedean (open subgroups form a neighborhood basis of the identity), and Roelcke pre-compact (if $U$ and $V$ are open subgroups then $U \backslash G /V$ is finite). An \defn{oligomorphic group} is a permutation group $(G, \Omega)$ such that $G$ has finitely many orbits on $\Omega^n$ for all $n \ge 0$. Suppose that $(G,\Omega)$ is oligomorphic. For a finite subset $a$ of $\Omega$, we let $G(a)$ be the subgroup of $G$ fixing each element of $a$. The $G(a)$'s form a neighborhood basis for an admissible topology on $G$. This is the main source of admissible groups.
Fix an admissible group $G$. We say that an action of $G$ on a set is \defn{smooth} if every stabilizer is open, and \defn{finitary} if there are finitely many orbits. We use the term ``$G$-set'' for ``set equipped with a smooth action of $G$.'' A $\hat{G}$-set is a set equipped with a smooth action of some (unspecified) open subgroup of $G$; shrinking the subgroup does not change the $\hat{G}$-set.
\subsection{Integration} \label{ss:olig-int}
Fix an admissible group $G$ and a commutative ring $k$. A \defn{measure} for $G$ with values in $k$ is a rule assigning to each finitary $\hat{G}$-set $X$ a value $\mu(X)$ in $k$ such that a number of axioms hold; see \cite[\S 2.2]{line}.
Let $X$ be a $G$-set. A function $\varphi \colon X \to k$ is called \defn{Schwartz} if it is smooth (i.e., its stabilizer in $G$ is open) and has finitary support. We write $\mathcal{C}(X)$ for the space of all Schwartz functions on $X$, which we call the \defn{Schwartz space}. Suppose that $\varphi$ is a Schwartz function on $X$, and we have a measure $\mu$ for $G$. Let $a_1, \ldots, a_n$ be the non-zero values attained by $\varphi$, and let $X_i=\varphi^{-1}(a_i)$. We define the \defn{integral} of $\varphi$ by
\begin{displaymath}
\int_X \varphi(x) dx = \sum_{i=1}^n a_i \mu(X_i).
\end{displaymath}
Integration defines a $k$-linear map $\mathcal{C}(X) \to k$. More generally, if $Y$ is a second $G$-set and $f \colon X \to Y$ is a smooth function (i.e., equivariant for some open subgroup) then there is a push-forward map $f_* \colon \mathcal{C}(X) \to \mathcal{C}(Y)$ defined by integrating over fibers.
There are several niceness conditions that can be imposed on a measure $\mu$. We mention a few here. We say that $\mu$ is \defn{regular} if $\mu(X)$ is a unit of $k$ for all transitive $G$-sets $X$. We say that $\mu$ is \defn{quasi-regular} if there is some open subgroup $U$ such that $\mu$ restricts to a regular measure on $U$. Finally, there is a more technical condition called \defn{property~(P)} that roughly means $\mu$ is valued in some subring of $k$ that has enough maps to fields of positive characteristic; see \cite[Definition~7.17]{repst}.
\subsection{Representations} \label{ss:olig-rep}
Fix an admissible group $G$ with a $k$-valued measure $\mu$. We assume for simplicity that $k$ is a field and $\mu$ is quasi-regular and satisfies property~(P), though it is possible to be more general. In \cite[\S 10]{repst}, we introduce the \defn{completed group algebra} $A(G)$ of $G$. As a $k$-vector space, $A(G)$ is the inverse limit of the Schwartz spaces $\mathcal{C}(G/U)$ over open subgroups $U$. The multiplication on $A(G)$ is defined by convolution; this uses integration, and hence depends on the choice of measure $\mu$. See \cite[\S 10.3]{repst} or \cite[\S 2.4]{line} for details. An $A(G)$-module $M$ is called \defn{smooth} if the action of $A(G)$ is continuous, with respect to the discrete topology on $M$; concretely, this means that for $x \in M$ there is some open subgroup $U$ of $G$ such that the action of $A(G)$ on $x$ factors through $\mathcal{C}(G/U)$. For any $G$-set $X$, the Schwartz space $\mathcal{C}(X)$ carries the structure of a smooth $A(G)$-module in a natural manner.
We define $\uRep(G)$ to be the category of smooth $A(G)$-modules; this is a Grothendieck abelian category. In \cite[\S 12]{repst}, we construct a tensor product $\mathbin{\ul{\otimes}}$ on $\uRep(G)$. This tensor product is $k$-bilinear, bi-cocontinuous, and satisfies $\mathcal{C}(X) \mathbin{\ul{\otimes}} \mathcal{C}(Y) = \mathcal{C}(X \times Y)$, and these properties uniquely characterize it; additionally, it is exact. In \cite[\S 13]{repst}, we show that every object of $\uRep(G)$ is the union of its finite length subobjects, and that the category $\uRep^{\mathrm{f}}(G)$ of finite length objects is pre-Tannakian. Moreover, we show that $\uRep(G)$ is semi-simple if $\mu$ is regular.
\subsection{Induction and restriction}
Let $G$ and $\mu$ be as above, and let $H$ be an open subgroup of $G$. Then $\mu$ restricts to a quasi-regular measure on $H$ which still satisfies property~(P). There is a natural algebra homomorphism $A(H) \to A(G)$ that induces a restriction functor
\begin{displaymath}
\Res^G_H \colon \uRep(G) \to \uRep(H).
\end{displaymath}
Suppose now that $N$ is a smooth $A(H)$-module. Define $\Ind_H^G(N)$ to be the space of all functions $\varphi \colon G \to N$ satisfying the following two conditions:
\begin{itemize}
\item $\varphi$ is left $H$-equivariant, i.e., $\varphi(hg)=h\varphi(g)$ for all $h \in H$ and $g \in G$.
\item $\varphi$ is right $G$-smooth, i.e., there is an open subgroup $U$ of $G$ such that $\varphi(gu)=\varphi(g)$ for all $g \in G$ and $u \in U$.
\end{itemize}
In \cite[\S 2.5]{line}, we defined a natural smooth $A(G)$-module structure on this space. We call $\Ind_H^G(N)$ the \defn{induction} of $N$. Induction defines a functor
\begin{displaymath}
\Ind_H^G \colon \uRep(H) \to \uRep(G).
\end{displaymath}
In \cite[Proposition~2.13]{line}, we showed that induction is left and right adjoint to restriction (and thus continuous, co-continuous, and exact) and preserves finite length objects.
We now formulate an analog of Mackey's theorem:
\begin{proposition} \label{prop:mackey}
Let $H$ and $K$ be open subgroups of $G$. For a smooth $H$-module $N$, we have a natural isomorphism of $A(K)$-modules
\begin{displaymath}
\Res^G_K(\Ind_H^G(N)) = \bigoplus_{g \in H \backslash G/K} \Ind^K_{H^g \cap K}(\Res^{H^g}_{H^g \cap K}(N^g)).
\end{displaymath}
Here $H^g=gHg^{-1}$ and $N^g$ is the $A(H^g)$-module with underlying vector space $N$ and action obtained by twisting the action on $N$ with conjugation by $g$.
\end{proposition}
\begin{proof}
Write $G=\bigsqcup_{i=1}^n Hx_iK$; note that there are finitely many double cosets since $G$ is admissible. Suppose that $\varphi$ is an element of $\Ind_H^G(N)$. Define $\varphi_i \colon K \to N^{x_i}$ by $\varphi_i(g)=\varphi(x_i g)$. One easily sees that $\varphi \mapsto (\varphi_i)_{1 \le i \le n}$ defines an isomorphism of $k$-vector spaces
\begin{displaymath}
\Res^G_K(\Ind_H^G(N)) \to \bigoplus_{i=1}^n \Ind^K_{H^{x_i} \cap K}(\Res^{H^{x_i}}_{H^{x_i} \cap K}(N^{x_i})).
\end{displaymath}
One then verifies that this map is $A(K)$-linear; we omit the details.
\end{proof}
\section{Set-up and basic results} \label{s:setup}
In this section, we introduce the groups $G=\Aut(\mathbf{S},<)$ and $H=\Aut(\mathbf{R},<)$ and review their basic structure. We recall some results on the representation theory of $H$ from \cite{line}, and prove a few simple results about representations of $G$.
\subsection{Automorphisms of the circle and line} \label{ss:groups}
Let $\mathbf{S}$ be the circle; to be concrete, we take $\mathbf{S}$ to be the unit circle in the plane. We let $<$ be the cyclic order on $\mathbf{S}$ defined by the ternary relation $x<y<z$ if $y$ is strictly between $x$ and $z$ when one moves from $x$ to $z$ counterclockwise. (Note that this notation can be confusing since this ternary relation is not made up of two binary relations!) We let $\infty$ be the north pole of $\mathbf{S}$. The set $\mathbf{S} \setminus \{\infty\}$ is totally ordered by $x<y$ if $\infty<x<y$. As a totally ordered set, it is isomorphic to the real line $\mathbf{R}$, and we identify $\mathbf{S} \setminus \{\infty\}$ with $\mathbf{R}$ in what follows.
We let $G=\Aut(\mathbf{S},<)$ be the group of permutations of $\mathbf{S}$ preserving the cyclic order; equivalently, $G$ is the group of orientation-preserving homeomorphisms of $\mathbf{S}$. We let $H=\Aut(\mathbf{R},<)$ be the group of permutations of $\mathbf{R}$ preserving the total order; equivalently, $H$ is the group of orientation-preserving homeomorphisms of $\mathbf{R}$. The group $H$ is identified with the stabilizer of $\infty$ in $G$. Both $G$ and $H$ are oligomorphic permutation groups.
Let $a$ be an $n$-element subset of $\mathbf{R}$. Let $H(a)$ be the subgroup of $H$ fixing each element of $a$; this coincides with the subgroup $H[a]$ fixing $a$ as a set. The $H(a)$'s define the admissible topology on $H$. In fact, every open subgroup of $H$ is of the form $H(a)$ for some $a$ \cite[Proposition~17.1]{repst}. For a totally ordered set $I$ isomorphic to $\mathbf{R}$, let $H_I=\Aut(I,<)$, which is isomorphic to $H$. Let $I_1, \ldots, I_n$ be the connected components of $\mathbf{R} \setminus a$, listed in order. Then $H(a)$ preserves these intervals, and the natural map $H(a) \to H_{I_1} \times \cdots \times H_{I_n}$ is an isomorphism; thus $H(a)$ is isomorphic to $H^n$.
Let $a$ be an $n$-element subset of $\mathbf{S}$, and enumerate $a$ as $\{a_1,\ldots,a_n\}$ in cyclic order. Let $G(a)$ (resp.\ $G[a]$) be the subgroup of $G$ fixing each element of $a$ (resp.\ the set $a$). Let $I_1, \ldots, I_n$ be the connected components of $\mathbf{S} \setminus a$, where $I_i$ is between $a_i$ and $a_{i+1}$. Then $G(a)$ preserves each $I_i$ and the natural map $G(a) \to H_{I_1} \times \cdots \times H_{I_n}$ is an isomorphism; thus $G(a) \cong H^n$. The group $G[a]$ preserves the set $\{I_1, \ldots, I_n\}$, and in fact cyclically permutes it; one thus finds $G[a] \cong \mathbf{Z}/n \ltimes H^n$, where $\mathbf{Z}/n$ cyclically permutes the factors. In particular, $G[a]/G(a) \cong \mathbf{Z}/n$. This group has a standard generator $\tau$, which satisfies $\tau(a_i)=a_{i-1}$.
One can show that if $U$ is any open subgroup of $G$ then there is some $a \in \mathbf{S}^{\{n\}}$ such that $G(a) \subset U \subset G[a]$; we omit the proof as we will not use this.
We let $\mathbf{R}^{(n)}$ be the subset of $\mathbf{R}^n$ consisting of tuples $(x_1, \ldots, x_n)$ with $x_1<\cdots<x_n$. This is a transitive $H$-set, and these account for all transitive $H$-sets. We let $\mathbf{S}^{\{n\}}$ be the subset of $\mathbf{S}^n$ consisting of tuples $(x_1, \ldots, x_n)$ with $x_1<\cdots<x_n<x_1$. This is a transitive $G$-set. We define $\sigma \colon \mathbf{S}^{\{n\}} \to \mathbf{S}^{\{n\}}$ by $\sigma(x_1, \ldots, x_n)=(x_n, x_1, \ldots, x_{n-1})$. This defines an action of $\mathbf{Z}/n$ on $\mathbf{S}^{\{n\}}$ that commutes with the action of $G$. It follows from the classification of open subgroups of $G$ that every transitive $G$-set has the form $\mathbf{S}^{\{n\}}/\Gamma$ for some $n$ and some subgroup $\Gamma$ of $\mathbf{Z}/n$.
\subsection{Integration}
Fix an algebraically closed field $k$ of characteristic~0 for the remainder of the paper. By \cite[Theorem~17.7]{repst}, there is a unique $k$-valued measure $\mu$ for $H$ satisfying
\begin{displaymath}
\mu(I_1^{(n_1)} \times \cdots \times I_r^{(n_r)}) = (-1)^{n_1+\cdots+n_r},
\end{displaymath}
where $I_1, \ldots, I_r$ are disjoint open intervals in $\mathbf{R}$, and $I^{(n)}$ is defined just like $\mathbf{R}^{(n)}$. Note that $I_1^{(n_1)} \times \cdots \times I_r^{(n_r)}$ is naturally an $\hat{H}$-set; in fact, every $\hat{H}$-set is isomorphic to a disjoint union of ones of this form. We call this measure the \defn{principal measure} for $H$. (There are three other $k$-valued measures for $H$, but they will not be relevant to us.) We write $\vol(X)$ in place of $\mu(X)$ in what follows.
The $\hat{H}$-set $I_1^{(n_1)} \times \cdots \times I_r^{(n_r)}$ is naturally a smooth manifold, and its volume (with respect to the principal measure) is its compactly supported Euler characteristic. It follows that the integration with respect to this measure (as defined in \S \ref{ss:olig-int}) coincides with integration with respect to Euler characteristic, as developed by Schapira and Viro (see, e.g., \cite{Viro}).
The principal measure for $H$ uniquely extends to a $k$-valued measure for $G$ (see \cite[Theorem~17.13]{repst}, or \cite{colored} for more details); in fact, its extension is the unique $k$-valued measure for $G$. This measure can again be described using Euler characteristic; in particular, we find $\vol(\mathbf{S})=0$.
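To make the Euler-characteristic description concrete (our computation, writing $\chi_c$ for compactly supported Euler characteristic): $\mathbf{R}^{(n)}$ is an open $n$-cell and $\mathbf{S}$ is a circle, so
\begin{displaymath}
\vol(\mathbf{R}^{(n)}) = \chi_c(\mathbf{R}^{(n)}) = (-1)^n, \qquad \vol(\mathbf{S}) = \chi_c(\mathbf{S}) = 0,
\end{displaymath}
in agreement with the formulas above.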
\subsection{Representation categories}
We let $\uRep(H)$ and $\uRep(G)$ be the representation categories defined in \S \ref{ss:olig-int} for $H$ and $G$ with respect to the principal measure. This measure is clearly regular on $H$, and satisfies property~(P) (as it is $\mathbf{Z}$-valued), and so $\uRep^{\mathrm{f}}(H)$ is a semi-simple pre-Tannakian category. The principal measure is not regular on $G$, since $\vol(\mathbf{S})=0$. However, it is quasi-regular (because it is regular on $H$) and still satisfies (P), and so $\uRep^{\mathrm{f}}(G)$ is a pre-Tannakian category. It is easily seen that $\uRep(G)$ is not semi-simple: indeed, the natural map $\epsilon \colon \mathcal{C}(\mathbf{S}) \to \bbone$ does not split, as there is a unique map $\eta \colon \bbone \to \mathcal{C}(\mathbf{S})$ (up to scalars) and $\epsilon \circ \eta=\vol(\mathbf{S})=0$. (Note that $\epsilon(\varphi)=\int_{\mathbf{S}} \varphi(x) dx$ and $\eta$ is the inclusion of constant functions.)
We use the term ``$\ul{G}$-module'' for ``smooth $A(G)$-module'' and the adjective ``$\ul{G}$-linear'' for ``$A(G)$-linear.'' Thus the objects of $\uRep(G)$ are $\ul{G}$-modules, and the morphisms are $\ul{G}$-linear maps. We similarly use $\ul{H}$-module and $\ul{H}$-linear.
\subsection{Weights} \label{ss:weights}
A \defn{weight} is a word in the alphabet $\{{\bullet},{\circ}\}$. Weights will be the main combinatorial objects we use to describe representations of $G$ and $H$.
Let $\lambda=\lambda_1 \cdots \lambda_n$ be a weight. We define the \defn{length} of $\lambda$, denoted $\ell(\lambda)$, to be $n$. We define\footnote{We regard the symbol $\sigma$ as a general cyclic shift operator, which is why we use the same notation for this action as the one on $\mathbf{S}^{\{n\}}$.} $\sigma(\lambda)=\lambda_n \lambda_1 \lambda_2 \cdots \lambda_{n-1}$. This construction defines an action of $\mathbf{Z}/n$ on the set of weights of length $n$. We refer to $\sigma^i(\lambda)$ as the $i$th \defn{cyclic shift} of $\lambda$, and we write $[\lambda]$ for the orbit of $\lambda$ under this action. We let $\Aut(\lambda) \subset \mathbf{Z}/n$ be the stabilizer of $\lambda$, i.e., the set of $i \in \mathbf{Z}/n$ such that $\sigma^i(\lambda)=\lambda$. We put $g(\lambda)=\# \Aut(\lambda)$ and $N(\lambda)=\# [\lambda]$; these numbers satisfy $g(\lambda) N(\lambda)=n$. Note that $\sigma^{N(\lambda)}$ generates $\Aut(\lambda)$, where $\sigma$ denotes the standard generator of $\mathbf{Z}/n$.
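We illustrate these definitions with a small example; all assertions follow directly from the definitions above.
\begin{example}
Let $\lambda={\bullet}{\circ}{\bullet}{\circ}$, of length $n=4$. Then $\sigma(\lambda)={\circ}{\bullet}{\circ}{\bullet}$ and $\sigma^2(\lambda)=\lambda$, so $\Aut(\lambda)=\{0,2\} \subset \mathbf{Z}/4$ and $g(\lambda)=2$. The orbit of $\lambda$ is $[\lambda]=\{{\bullet}{\circ}{\bullet}{\circ}, {\circ}{\bullet}{\circ}{\bullet}\}$, so $N(\lambda)=2$, and indeed $g(\lambda) N(\lambda)=4=n$.
\end{example}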
Let $\lambda$ be as above, and assume $n>0$. Define $\gamma_1(\lambda)=\lambda_2 \cdots \lambda_n$, which is a weight of length $n-1$. For $i \in \mathbf{Z}/n$, define $\gamma_i(\lambda)=\gamma_1(\sigma^{i-1}(\lambda))$. Explicitly, $\gamma_i(\lambda)=\lambda_{i+1} \cdots \lambda_n \lambda_1 \cdots \lambda_{i-1}$. We refer to $\gamma_i(\lambda)$ as the $i$th \defn{cyclic contraction} of $\lambda$. We note that if $\gamma_i(\lambda)=\gamma_j(\lambda)$ then $\sigma^i(\lambda)=\sigma^j(\lambda)$. Indeed, the hypothesis shows that $\sigma^i(\lambda)$ and $\sigma^j(\lambda)$ agree except for perhaps the first letter, but these have to be the same by counting the number of ${\bullet}$'s and ${\circ}$'s in $\lambda$.
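We give a small example of cyclic contractions.
\begin{example}
Let $\lambda={\bullet}{\circ}{\bullet}{\circ}$. Then $\gamma_1(\lambda)={\circ}{\bullet}{\circ}$ and $\gamma_2(\lambda)=\lambda_3\lambda_4\lambda_1={\bullet}{\circ}{\bullet}$. We have $\gamma_1(\lambda)=\gamma_3(\lambda)$, in accordance with the equality $\sigma^1(\lambda)=\sigma^3(\lambda)$.
\end{example}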
\subsection{Representations of $H$} \label{ss:RepH}
The predecessor to this paper \cite{line} studied the category $\uRep(H)$ in detail. We will use several results from this paper, the most important of which is the description of the simple objects, which we now recall.
A \defn{half-open interval} in $\mathbf{R}$ is a non-empty interval of the form $(b,a]$ or $[a,b)$ where $a \in \mathbf{R}$ and $b \in \mathbf{R} \cup \{\pm \infty\}$. We define the \defn{type} of a half-open interval to be ${\bullet}$ if its right endpoint is included, and ${\circ}$ if its left endpoint is included. For two intervals $I$ and $J$, we write $I<J$ to mean $x<y$ holds for all $x \in I$ and $y \in J$. We also write $I \ll J$ to mean $\ol{I}<\ol{J}$, where the bar denotes closure. Thus, for instance, $[0,1)<[1,2)$ is true, while $[0,1) \ll [1,2)$ is false.
Let $\mathbf{I}=(I_1, \ldots, I_n)$ be a tuple of half-open intervals. We assume that $\mathbf{I}$ is \defn{ordered}, meaning $I_1<\cdots<I_n$. We identify $\mathbf{I}$ with the cube $I_1 \times \cdots \times I_n$ in $\mathbf{R}^{(n)}$. We let $\varphi_{\mathbf{I}} \in \mathcal{C}(\mathbf{R}^{(n)})$ be the characteristic function of $\mathbf{I}$. We define the \defn{type} of $\mathbf{I}$ to be the weight $\lambda_1 \cdots \lambda_n$, where $\lambda_i$ is the type of $I_i$.
For a weight $\lambda$ of length $n$, we define $L_{\lambda}$ to be the $\ul{H}$-submodule of $\mathcal{C}(\mathbf{R}^{(n)})$ generated by the functions $\varphi_{\mathbf{I}}$ where $\mathbf{I}$ is an ordered tuple of type $\lambda$. The module $L_{\lambda}$ is simple \cite[Theorem~4.3(a)]{line}, if $\lambda \ne \mu$ then $L_{\lambda}$ and $L_{\mu}$ are non-isomorphic \cite[Theorem~4.3(b)]{line}, and every simple is isomorphic to some $L_{\lambda}$ \cite[Corollary~4.12]{line}. Note that since $L_{\lambda}$ is simple, it is generated as an $\ul{H}$-module by any non-zero element; in particular, we could use generators $\varphi_{\mathbf{I}}$ where $I_1 \ll \cdots \ll I_n$. The module $L_{\lambda}$ has categorical dimension $(-1)^n$ \cite[Corollary~5.7]{line}.
Suppose that $a \in \mathbf{R}^{(n)}$. Then the $\ul{H}(a)$-invariants and $H(a)$-invariants in any $\ul{H}$-module are the same \cite[Proposition~11.17]{repst}, and are naturally identified with the $\ul{H}(a)$-coinvariants by semi-simplicity. If $\lambda$ has length $m$ then the space $L_{\lambda}^{H(a)}$ has dimension $\binom{n}{m}$ \cite[Corollary~4.11]{line}. This is an important result that we will often use. It is possible to write down an explicit basis for the invariants (see \cite[Corollary~5.5]{line}), but it is much easier to describe the coinvariants. For $x \in \mathbf{R}^{(m)}$, let $\mathrm{ev}_x \colon \mathcal{C}(\mathbf{R}^{(m)}) \to k$ be the map $\mathrm{ev}_x(\varphi)=\varphi(x)$, which is $\ul{H}(x)$-equivariant. The maps $\mathrm{ev}_x$, with $x$ an $m$-element subset of $a$, give a basis for the $\ul{H}(a)$-coinvariants of $L_{\lambda}$ (see \cite[Proposition~4.6]{line}). We note that if $b \in \mathbf{S}^{\{n+1\}}$ is the point $(a_1, \ldots, a_n, \infty)$ then $H(a)=G(b)$, and so the above results can be phrased in terms of $G(b)$-invariants too.
\subsection{Transpose} \label{ss:transp}
Let $r \colon \mathbf{S} \to \mathbf{S}$ be the reflection about the $y$-axis (recall $\mathbf{S}$ is the unit circle in the plane). Conjugation by $r$ induces a continuous outer automorphism of the group $G$. For a $\ul{G}$-module $M$, we let $M^{\dag}$ be its conjugate under $r$; we call this the \defn{transpose} of $M$. Transpose defines a covariant involutive auto-equivalence
\begin{displaymath}
(-)^{\dag} \colon \uRep(G) \to \uRep(G).
\end{displaymath}
The map $r$ induces an isomorphism $\mathbf{S}^n \to \mathbf{S}^n$ of $G$-sets, though it does not map $\mathbf{S}^{\{n\}}$ into itself for $n \ge 3$. However, letting $\sigma$ be the longest element of the symmetric group $\mathfrak{S}_n$, the composition $r'=\sigma \circ r$ does map $\mathbf{S}^{\{n\}}$ to itself. Pull-back by $r'$ is an isomorphism $\mathcal{C}(\mathbf{S}^{\{n\}})^{\dag} \cong \mathcal{C}(\mathbf{S}^{\{n\}})$ of $\ul{G}$-modules.
We take $\infty$ to be the north pole of $\mathbf{S}$. Thus $r$ fixes $\infty$, and so normalizes $H$. Thus we can define the transpose of an $\ul{H}$-module as well. As above, $r'$ induces an isomorphism $\mathcal{C}(\mathbf{R}^{(n)})^{\dag} \cong \mathcal{C}(\mathbf{R}^{(n)})$. This isomorphism takes $\varphi_{\mathbf{I}}$ to $\varphi_{\mathbf{J}}$, where $\mathbf{J}=(r(I_n), \ldots, r(I_1))$. If $\mathbf{I}$ has type $\lambda$ then $\mathbf{J}$ has type $\lambda^{\dag}$, where $\lambda^{\dag}$ is the result of reversing $\lambda$ and changing each ${\bullet}$ to ${\circ}$. In particular, we find $L_{\lambda}^{\dag} \cong L_{\lambda^{\dag}}$, as noted in \cite[Remark~4.17]{line}.
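We give a small example of the transpose operation on weights.
\begin{example}
Let $\lambda={\bullet}{\bullet}{\circ}$. Reversing gives ${\circ}{\bullet}{\bullet}$, and exchanging ${\bullet}$ and ${\circ}$ then gives $\lambda^{\dag}={\bullet}{\circ}{\circ}$. Note that $(\lambda^{\dag})^{\dag}=\lambda$ for any weight $\lambda$.
\end{example}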
\subsection{Induced modules} \label{ss:ind}
For a weight $\lambda$, let $I_{\lambda}=\Ind_H^G(L_{\lambda})$. These modules will play an important role in our analysis of $\uRep(G)$. We prove a few simple results about them here.
\begin{proposition}
The module $I_{\lambda}$ is both projective and injective in $\uRep(G)$.
\end{proposition}
\begin{proof}
Since restriction is exact and induction is both its left and right adjoint, it follows that induction preserves injective and projective objects. Since $\uRep(H)$ is semi-simple, all objects are injective and projective.
\end{proof}
We next determine the decomposition of $I_{\lambda}$ into simple $\ul{H}$-modules. For this, we introduce some notation. Let $\lambda$ be a weight of length $n \ge 0$. Put
\begin{displaymath}
A_1(\lambda) = \bigoplus_{1 \le i \le n} L_{\sigma^i(\lambda)}, \qquad
A_2(\lambda) = \bigoplus_{1 \le i \le n} L_{\gamma_i(\lambda)}, \qquad
A(\lambda) = A_1(\lambda) \oplus A_2(\lambda).
\end{displaymath}
Note that for $\lambda = \varnothing$ we have empty sums, and so $A(\varnothing)$ is the zero module. Also, put $\lambda^+=\lambda {\bullet}$ and $\lambda^-=\lambda {\circ}$.
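The following example illustrates the above notation in the simplest cases.
\begin{example}
For $\lambda={\bullet}$ we have $A_1({\bullet})=L_{{\bullet}}$ and $A_2({\bullet})=L_{\gamma_1({\bullet})}=L_{\varnothing}$, so $A({\bullet})=L_{{\bullet}} \oplus L_{\varnothing}$; similarly, $A({\circ})=L_{{\circ}} \oplus L_{\varnothing}$.
\end{example}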
\begin{proposition} \label{prop:ind}
We have an isomorphism of $\ul{H}$-modules
\begin{displaymath}
I_{\lambda} = A(\lambda) \oplus A(\lambda^+) \oplus A(\lambda^-).
\end{displaymath}
\end{proposition}
\begin{proof}
By Mackey decomposition (Proposition~\ref{prop:mackey}),
\begin{displaymath}
\Res^G_H(\Ind_H^G(L_\lambda)) = \bigoplus_{g \in H \backslash G/H} \Ind^H_{H^g \cap H}(\Res^{H^g}_{H^g \cap H}(L_\lambda^g)).
\end{displaymath}
We first analyze the double cosets and the corresponding subgroups. Since $G/H = \mathbf{S}$, the double cosets $H \backslash G /H$ are identified with the $H$-orbits on $\mathbf{S}$, of which there are exactly two. For the trivial double coset, $H^e \cap H = H$ and $L_\lambda^g = L_\lambda$, so this double coset contributes exactly $L_\lambda$. For the non-trivial double coset we pick a representative $g$ sending $\infty$ to $0$, so that $H^g \cap H = H(0)$. Since $\mathbf{R} - \{0\} = \mathbf{R}_{< 0} \cup \mathbf{R}_{> 0}$, we have $H(0) \cong H \times H$; for clarity, we denote these two subgroups by $H_{(-\infty,0)}$ and $H_{(0,+\infty)}$. Now we come to the key point, which is that conjugation by $g$ interchanges the two subgroups $H_{(-\infty,0)}$ and $H_{(0,+\infty)}$.
Before considering representations contributed by the non-trivial double coset, we recall the branching rules between $H$ and $H(0)$. For this we will need a piece of notation: for a weight $\lambda=\lambda_1 \cdots \lambda_n$ of length $n$, we let $\lambda[i,j]$ denote the substring of $\lambda$ between indices $i$ and $j$ (inclusively), i.e., $\lambda_i \cdots \lambda_j$. As with intervals, we use parentheses to exclude the edge values, e.g., $\lambda[i,j)=\lambda_i \cdots \lambda_{j-1}$. Now, \cite[Theorem~6.1]{line} states
\begin{displaymath}
\Res_{H(0)}^H L_\lambda \cong \bigoplus_{i=0}^n \big( L_{\lambda[1,i]} \boxtimes L_{\lambda(i,n]} \big) \oplus \bigoplus_{i=1}^n \big( L_{\lambda[1,i)} \boxtimes L_{\lambda(i,n]} \big)
\end{displaymath}
and \cite[Theorem~6.9]{line} states
\begin{displaymath}
\Ind_{H(0)}^H \left(L_{\lambda_1} \boxtimes L_{\lambda_2}\right) \cong L_{\lambda_1 {\bullet} \lambda_2} \oplus L_{\lambda_1 {\circ} \lambda_2} \oplus L_{\lambda_1 \lambda_2}.
\end{displaymath}
Keeping in mind that conjugation by $g$ interchanges the two subgroups $H_{(-\infty,0)}$ and $H_{(0,+\infty)}$, we have
\begin{displaymath}
\Res^{H^g}_{H(0)}(L_\lambda^g) \cong \bigoplus_{i=0}^n \big( L_{\lambda(i,n]} \boxtimes L_{\lambda[1,i]} \big) \oplus \bigoplus_{i=1}^n \big( L_{\lambda(i,n]} \boxtimes L_{\lambda[1,i)} \big),
\end{displaymath}
and so,
\begin{align*}
\Ind_{H(0)}^H \Res^{H^g}_{H(0)}(L_\lambda^g) &\cong
\bigoplus_{i=0}^n \Ind_{H(0)}^H \left(L_{\lambda(i,n]} \boxtimes L_{\lambda[1,i]}\right) \oplus \bigoplus_{i=1}^n \Ind_{H(0)}^H \left(L_{\lambda(i,n]} \boxtimes L_{\lambda[1,i)} \right)\\
& \cong \bigoplus_{i=0}^n L_{\lambda(i,n] {\bullet} \lambda[1,i]} \oplus
\bigoplus_{i=0}^n L_{\lambda(i,n] {\circ} \lambda[1,i]} \oplus
\bigoplus_{i=0}^n L_{\lambda(i,n] \lambda[1,i]} \\
&\quad \oplus \bigoplus_{i=1}^n L_{\lambda(i,n] {\bullet} \lambda[1,i)}
\oplus \bigoplus_{i=1}^n L_{\lambda(i,n] {\circ} \lambda[1,i)}
\oplus \bigoplus_{i=1}^n L_{\lambda(i,n] \lambda[1,i)}
\end{align*}
Now note that these summands are exactly $L_{\sigma^i(\lambda {\bullet})}$, $L_{\sigma^i(\lambda {\circ})}$, $L_{\sigma^i(\lambda)}$, $L_{\gamma_i(\lambda {\bullet})}$, $L_{\gamma_i(\lambda {\circ})}$, and $L_{\gamma_i(\lambda)}$, respectively. This gives exactly one copy of each cyclic shift and each cyclic contraction of $\lambda {\bullet}$, $\lambda {\circ}$, and $\lambda$, except that we miss $L_{\gamma_{n+1}(\lambda {\bullet})} = L_\lambda$ and $L_{\gamma_{n+1}(\lambda {\circ})} = L_\lambda$, and have an extra copy of $L_{\sigma^n(\lambda)} = L_\lambda$. Finally, including the additional copy of $L_\lambda$ from the trivial double coset, we get exactly $A(\lambda) \oplus A(\lambda^+) \oplus A(\lambda^-)$.
\end{proof}
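We verify the proposition directly in the simplest case.
\begin{example}
Take $\lambda=\varnothing$. Then $I_{\varnothing}=\mathcal{C}(\mathbf{S})$, and Proposition~\ref{prop:ind} gives $\Res^G_H(I_{\varnothing}) \cong A({\bullet}) \oplus A({\circ})=L_{{\bullet}} \oplus L_{{\circ}} \oplus L_{\varnothing}^{\oplus 2}$. This agrees with the decomposition $\mathbf{S}=\mathbf{R} \amalg \{\infty\}$ of $H$-sets, which yields $\Res^G_H \mathcal{C}(\mathbf{S}) = \mathcal{C}(\mathbf{R}) \oplus k = (L_{\varnothing} \oplus L_{{\bullet}} \oplus L_{{\circ}}) \oplus L_{\varnothing}$.
\end{example}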
We finally observe that we can decompose Schwartz space $\mathcal{C}(\mathbf{S}^{\{n\}})$ into induced modules.
\begin{proposition} \label{prop:schwartz-ind}
For $n \ge 0$, we have an isomorphism of $\ul{G}$-modules
\begin{displaymath}
\mathcal{C}(\mathbf{S}^{\{n+1\}}) = \bigoplus_{\ell(\lambda) \le n} I_{\lambda}^{\oplus m_{\lambda}}, \qquad m_{\lambda} = \binom{n}{\ell(\lambda)}.
\end{displaymath}
\end{proposition}
\begin{proof}
We have an isomorphism of $G$-sets $\mathbf{S}^{\{n+1\}} \cong G \times^H \mathbf{R}^{(n)}$, and so a $\ul{G}$-isomorphism
\begin{displaymath}
\mathcal{C}(\mathbf{S}^{\{n+1\}}) = \Ind_H^G(\mathcal{C}(\mathbf{R}^{(n)}))
\end{displaymath}
by \cite[Proposition~2.13]{line}. We have an $\ul{H}$-isomorphism
\begin{displaymath}
\mathcal{C}(\mathbf{R}^{(n)}) = \bigoplus_{\ell(\lambda) \le n} L_{\lambda}^{\oplus m_{\lambda}}
\end{displaymath}
by \cite[Theorem~4.7]{line}. Thus the result follows.
\end{proof}
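We make the decomposition explicit in the first two cases.
\begin{example}
For $n=0$ the proposition reads $\mathcal{C}(\mathbf{S}^{\{1\}})=I_{\varnothing}$, i.e., $\mathcal{C}(\mathbf{S})=\Ind_H^G(L_{\varnothing})$. For $n=1$ it reads $\mathcal{C}(\mathbf{S}^{\{2\}})=I_{\varnothing} \oplus I_{{\bullet}} \oplus I_{{\circ}}$, as $m_{\varnothing}=m_{{\bullet}}=m_{{\circ}}=1$ in this case.
\end{example}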
\section{Standard modules} \label{s:std}
\subsection{Construction of standard modules}
For distinct $a,b \in \mathbf{S}$, we let $[a,b]$ denote the interval consisting of points $x \in \mathbf{S}$ such that $a \le x \le b$. We use the usual parenthesis notation to omit endpoints. A \defn{half-open interval} is one of the form $(a,b]$ or $[a,b)$, with $a \ne b$. We define the \defn{type} of a half-open interval to be ${\bullet}$ in the $(a,b]$ case and ${\circ}$ in the $[a,b)$ case. Suppose that $I_1, \ldots, I_n$ are subsets of $\mathbf{S}$. We write $I_1<\cdots<I_n<I_1$ to mean that $x_1<\cdots<x_n<x_1$ for all $x_i \in I_i$, and we write $I_1 \ll \cdots \ll I_n \ll I_1$ to mean that $\ol{I}_1<\cdots<\ol{I}_n<\ol{I}_1$, where the bar denotes topological closure.
Let $\mathbf{I}=(I_1, \ldots, I_n)$ be a tuple of half-open intervals in $\mathbf{S}$. We say that $\mathbf{I}$ is \defn{ordered} if $I_1<\cdots<I_n<I_1$, and \defn{strictly ordered} if $I_1 \ll \cdots \ll I_n \ll I_1$. Assuming $\mathbf{I}$ is ordered, we identify $\mathbf{I}$ with the cube $I_1 \times \cdots \times I_n$ in $\mathbf{S}^{\{n\}}$, and write $\psi_{\mathbf{I}} \in \mathcal{C}(\mathbf{S}^{\{n\}})$ for its characteristic function. We define the \defn{type} of $\mathbf{I}$ to be the word $\lambda_1 \cdots \lambda_n$ where $\lambda_i$ is the type of $I_i$. We can now introduce an important class of modules.
\begin{definition} \label{defn:std}
For a weight $\lambda$ of length $n>0$, we define the \defn{standard module} $\Delta_{\lambda}$ to be the $\ul{G}$-submodule of $\mathcal{C}(\mathbf{S}^{\{n\}})$ generated by the functions $\psi_{\mathbf{I}}$, with $\mathbf{I}$ a strictly ordered tuple of type $\lambda$.
\end{definition}
Recall that $\mathbf{Z}/n=\langle \sigma \rangle$ acts on $\mathbf{S}^{\{n\}}$ by cyclically permuting coordinates. We have $\sigma^*(\psi_{\mathbf{I}})=\psi_{\sigma^{-1} \mathbf{I}}$, and thus $\sigma^*(\Delta_{\lambda})=\Delta_{\sigma^{-1}\lambda}$. In particular, $\Aut(\lambda)$ acts on $\Delta_{\lambda}$. For a $g(\lambda)$-root of unity $\zeta$, let $\Delta_{\lambda,\zeta}$ be the $\zeta$-eigenspace of the action of $\sigma^{N(\lambda)}$ on $\Delta_{\lambda}$. If $\mu$ is a cyclic shift of $\lambda$ then $\Delta_{\lambda}$ and $\Delta_{\mu}$ are isomorphic, as are $\Delta_{\lambda,\zeta}$ and $\Delta_{\mu,\zeta}$, via an appropriate power of $\sigma^*$.
\begin{remark}
In our previous paper \cite{line}, we worked over an arbitrary field. In this paper, we only work with an algebraically closed field $k$ of characteristic~0. Without this assumption we would need to take more care in analyzing the representation theory of $\mathbf{Z}/n$ in the construction of $\Delta_{\lambda, \zeta}$ and throughout the rest of the paper. With more work, one should be able to extend many of our results to more general fields.
\end{remark}
We defined $\Delta_{\lambda}$ using strictly ordered tuples, since they form a single $G$-orbit. The following proposition shows we get the same result using all ordered tuples.
\begin{proposition} \label{prop:merely-ordered}
If $\mathbf{I}$ is ordered and has type $\lambda$, then $\psi_\mathbf{I} \in \Delta_\lambda$.
\end{proposition}
\begin{proof}
Let $I_i^{\circ}$ be the interior of $I_i$, and let $\mathbf{I}^{\circ}$ be their product. For $t \in I_i^{\circ}$, let $J_i(t)$ be the half-open interval of type $\lambda_i$ with the same closed endpoint as $I_i$, and with open endpoint $t$. For $t=(t_1, \ldots, t_n) \in \mathbf{I}^{\circ}$, put $\mathbf{J}(t)=(J_1(t_1), \ldots, J_n(t_n))$. We claim that
\begin{displaymath}
\psi_{\mathbf{I}} = (-1)^n \int_{\mathbf{I}^{\circ}}\psi_{\mathbf{J}(t)} \; dt.
\end{displaymath}
Indeed, it suffices to check that the values agree at each point $a \in \mathbf{S}^{\{n\}}$. If $a \not\in \mathbf{I}$ then $a \not\in \mathbf{J}(t)$ for any $t$, and so the integral is zero. Now suppose that $a \in \mathbf{I}$. Then the integral, evaluated at $a$, equals the volume of the set $\{ t \in \mathbf{I}^{\circ} \mid a \in \mathbf{J}(t) \}$. This is a product of non-empty open intervals, and thus has volume $(-1)^n$. Since $\psi_{\mathbf{J}(t)}$ belongs to $\Delta_{\lambda}$ for all $t \in \mathbf{I}^{\circ}$, the integral belongs to $\Delta_{\lambda}$, and thus so does $\psi_{\mathbf{I}}$.
\end{proof}
\begin{example} \label{ex:std}
The module $\Delta_{{\circ}}$ is generated by functions of the form $\psi_{[a,b)}$ with $a,b \in \mathbf{S}$ distinct. From the expression
\begin{displaymath}
1=\psi_{[\infty,0)}+\psi_{[0,\infty)},
\end{displaymath}
we see that the constant function~1 belongs to $\Delta_{{\circ}}$. This generates a trivial subrepresentation of $\Delta_{{\circ}}$, and it is clearly the unique trivial subrepresentation (as a $G$-invariant function on $\mathbf{S}$ is constant). We will see in Theorem~\ref{thm:std} below that the quotient is simple.
\end{example}
\begin{remark} \label{rmk:std-transp}
The discussion in \S \ref{ss:transp} shows that $\Delta_{\lambda}^{\dag} \cong \Delta_{\lambda^{\dag}}$, and also that $\Delta_{\lambda,\zeta}^{\dag} \cong \Delta_{\lambda^{\dag},\zeta^{-1}}$. Note that the isomorphism $\mathcal{C}(\mathbf{S}^{\{n\}})^{\dag} \cong \mathcal{C}(\mathbf{S}^{\{n\}})$ intertwines the actions of $\sigma$ and $\sigma^{-1}$; this is why we get $\zeta^{-1}$.
\end{remark}
\subsection{Statement of results}
Fix a weight $\lambda$ of length $n>0$, and a $g(\lambda)$-root of unity $\zeta$. Recall from \S \ref{ss:ind} that we have defined $\ul{H}$-modules
\begin{displaymath}
A_1(\lambda) = \bigoplus_{1 \le i \le n} L_{\sigma^i(\lambda)}, \qquad
A_2(\lambda) = \bigoplus_{1 \le i \le n} L_{\gamma_i(\lambda)}, \qquad
A(\lambda) = A_1(\lambda) \oplus A_2(\lambda).
\end{displaymath}
We note that $\Aut(\lambda)$ permutes the summands in $A_1(\lambda)$ and $A_2(\lambda)$, and so the above modules can be viewed as $\ul{H} \times \Aut(\lambda)$ modules. We put
\begin{displaymath}
\ol{A}_1(\lambda) = \bigoplus_{1 \le i \le N(\lambda)} L_{\sigma^i(\lambda)}, \qquad
\ol{A}_2(\lambda) = \bigoplus_{1 \le i \le N(\lambda)} L_{\gamma_i(\lambda)}, \qquad
\ol{A}(\lambda) = \ol{A}_1(\lambda) \oplus \ol{A}_2(\lambda).
\end{displaymath}
Thus $A(\lambda) = \ol{A}(\lambda) \boxtimes k[\Aut(\lambda)]$ as an $\ul{H} \times \Aut(\lambda)$ module. The following is the main theorem of this section.
\begin{theorem} \label{thm:std}
Let $\lambda$ and $\zeta$ be as above.
\begin{enumerate}
\item We have $\Delta_{\lambda} \cong A(\lambda)$ as an $\ul{H} \times \Aut(\lambda)$ module.
\item We have $\Delta_{\lambda,\zeta} \cong \ol{A}(\lambda)$ as an $\ul{H}$-module.
\item The $\ul{G}$-module $\Delta_{\lambda,\zeta}$ has length at most two. If it is not simple then $\ol{A}_2(\lambda)$ is its unique proper non-zero submodule.
\item Let $b \in \mathbf{S}^{\{n\}}$ and let $\tau$ be the standard generator of $G[b]/G(b) \cong \mathbf{Z}/n$. Then $\Delta^{G(b)}_{\lambda,\zeta}$ has dimension $N(\lambda)$, and $\tau^{N(\lambda)}$ acts on it by $\zeta$.
\end{enumerate}
\end{theorem}
We offer a word of clarification on (c). Technically speaking, $\ol{A}_2(\lambda)$ is not a subspace of $\Delta_{\lambda,\zeta}$. However, there is a unique copy of $\ol{A}_2(\lambda)$ in $\Delta_{\lambda,\zeta}$, namely, the sum of the $L_{\gamma_i(\lambda)}$ isotypic pieces. This is what we mean in (c).
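To make the statement of the theorem concrete, consider a small example.
\begin{example}
Let $\lambda={\bullet}{\circ}{\bullet}{\circ}$, so that $N(\lambda)=2$ and $g(\lambda)=2$. Theorem~\ref{thm:std} says that, as an $\ul{H}$-module, $\Delta_{\lambda}$ contains each of the simples $L_{{\circ}{\bullet}{\circ}{\bullet}}$, $L_{{\bullet}{\circ}{\bullet}{\circ}}$, $L_{{\circ}{\bullet}{\circ}}$, and $L_{{\bullet}{\circ}{\bullet}}$ with multiplicity $g(\lambda)=2$, while each eigenspace $\Delta_{\lambda,\zeta}$ (for $\zeta=\pm 1$) contains each of them exactly once.
\end{example}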
We observe a few corollaries of the theorem.
\begin{corollary}
We have $\End_{\ul{G}}(\Delta_{\lambda,\zeta})=k$. In particular, $\Delta_{\lambda,\zeta}$ is indecomposable.
\end{corollary}
\begin{proof}
The $\ul{H}$-module $L_{\lambda}$ appears with multiplicity one in $\Delta_{\lambda,\zeta}$. We therefore have a map
\begin{displaymath}
\End_{\ul{G}}(\Delta_{\lambda,\zeta}) \to \End_{\ul{H}}(L_{\lambda}) = k.
\end{displaymath}
We claim that this map is injective. Indeed, suppose that $f$ is in the kernel, i.e., $f$ is a $\ul{G}$-endomorphism of $\Delta_{\lambda,\zeta}$ that kills $L_{\lambda}$. Then $\ker(f)$ is a $\ul{G}$-submodule of $\Delta_{\lambda,\zeta}$ containing $L_{\lambda}$, and is thus all of $\Delta_{\lambda,\zeta}$ by Theorem~\ref{thm:std}(c). Hence $f=0$, which proves the claim, and completes the proof.
\end{proof}
\begin{corollary} \label{cor:std-isom}
The $\ul{G}$-modules $\Delta_{\lambda,\zeta}$ and $\Delta_{\mu,\omega}$ are isomorphic if and only if $\omega=\zeta$ and $\mu$ is a cyclic shift of $\lambda$ (i.e., $\mu=\sigma^i(\lambda)$ for some $i$).
\end{corollary}
\begin{proof}
We have already seen that $\Delta_{\lambda,\zeta}$ and $\Delta_{\sigma^i(\lambda),\zeta}$ are isomorphic. Suppose now that we have an isomorphism $f \colon \Delta_{\lambda,\zeta} \to \Delta_{\mu,\omega}$. Let $n=\ell(\lambda)$ and $m=\ell(\mu)$. Since $\Delta_{\lambda,\zeta}$ only has simple $\ul{H}$-modules of lengths $n$ and $n-1$, and $\Delta_{\mu,\omega}$ only has ones of lengths $m$ and $m-1$, we must have $m=n$. Looking at the $\ul{H}$-submodules of length $n$, we find $A_1(\lambda) \cong A_1(\mu)$ as $\ul{H}$-modules, which implies that $\mu$ is a cyclic shift of $\lambda$. Let $b \in \mathbf{S}^{\{n\}}$, and let $\tau$ be the generator of $G[b]/G(b)$. We have an isomorphism $f \colon \Delta_{\lambda,\zeta}^{G(b)} \to \Delta_{\mu,\omega}^{G(b)}$ that commutes with the action of $\tau$. Since $\tau^{N(\lambda)}$ acts by $\zeta$ on the source and $\omega$ on the target, we conclude that $\omega=\zeta$.
\end{proof}
\begin{corollary}
$\Delta_{\lambda,\zeta}$ has categorical dimension~0.
\end{corollary}
\begin{proof}
The modules $\ol{A}_1(\lambda)$ and $\ol{A}_2(\lambda)$ have categorical dimensions $(-1)^n N(\lambda)$ and $(-1)^{n-1} N(\lambda)$, respectively, by \cite[Corollary~5.7]{line}, and so their sum has dimension~0.
\end{proof}
Since $\Delta_{\lambda,\zeta}$ has a unique maximal proper submodule (possibly zero), it has a unique simple quotient. We give it a name:
\begin{definition} \label{defn:M}
For a non-empty weight $\lambda$ and a $g(\lambda)$-root of unity $\zeta$, we define $M_{\lambda,\zeta}$ to be the unique simple quotient of $\Delta_{\lambda,\zeta}$.
\end{definition}
We note that $M_{\lambda,\zeta}$ contains $L_{\lambda}$ with multiplicity one, and that all simple $\ul{H}$-modules appearing in $M_{\lambda,\zeta}$ have length $n$ or $n-1$ where $n=\ell(\lambda)$. It follows from Remark~\ref{rmk:std-transp} that $M^{\dag}_{\lambda,\zeta} \cong M_{\lambda^{\dag}, \zeta^{-1}}$.
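The following example, which relies on Theorem~\ref{thm:std}, illustrates the definition in the simplest non-semi-simple case.
\begin{example}
Take $\lambda={\circ}$, so $n=1$, $g(\lambda)=1$, and $\zeta=1$. By Example~\ref{ex:std}, the constant functions form a trivial $\ul{G}$-submodule of $\Delta_{{\circ}}=\Delta_{{\circ},1}$; this is the copy of $\ol{A}_2({\circ})=L_{\varnothing}$. Thus $M_{{\circ},1}$ is the quotient of $\Delta_{{\circ}}$ by the constant functions, and it restricts to $L_{{\circ}}$ as an $\ul{H}$-module.
\end{example}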
\subsection{Set-up} \label{ss:std-setup}
The proof of Theorem~\ref{thm:std} will take the remainder of \S \ref{s:std}. We now set up some notation and make a few initial observations.
Fix a weight $\lambda$ of length $n>0$, and a $g(\lambda)$-root of unity $\zeta$. For an $\ul{H}$-module $X$, we let $X[r]$ be the sum of the $L_{\mu}$-isotypic components of $X$ with $\ell(\mu)=r$. We also use this notation when $X$ is a $\ul{G}$-module, by simply restricting to $\ul{H}$. We note that $\Delta_{\lambda}[r]$ is an $\ul{H} \times \Aut(\lambda)$ module. In this notation, Theorem~\ref{thm:std}(a) states that $\Delta_{\lambda}[n] \cong A_1(\lambda)$ and $\Delta_{\lambda}[n-1] \cong A_2(\lambda)$, as $\ul{H} \times \Aut(\lambda)$ modules, and also $\Delta_{\lambda}[r]=0$ for $r \ne n, n-1$.
Let $\alpha_0 \colon \mathbf{R}^{(n)} \to \mathbf{S}^{\{n\}}$ be the standard inclusion, and let $\beta_0 \colon \mathbf{R}^{(n-1)} \to \mathbf{S}^{\{n\}}$ be defined by
\begin{displaymath}
\beta_0(x_1, \ldots, x_{n-1}) = (x_1, \ldots, x_{n-1}, \infty).
\end{displaymath}
For $i \in \mathbf{Z}/n$, let $\alpha_i=\sigma^i \alpha_0$ and $\beta_i=\sigma^i \beta_0$. These maps are $H$-equivariant. We let
\begin{displaymath}
\alpha_* \colon \mathcal{C}(\mathbf{R}^{(n)})^{\oplus n} \to \mathcal{C}(\mathbf{S}^{\{n\}}), \qquad
\beta_* \colon \mathcal{C}(\mathbf{R}^{(n-1)})^{\oplus n} \to \mathcal{C}(\mathbf{S}^{\{n\}})
\end{displaymath}
be the direct sum of the $(\alpha_i)_*$ and $(\beta_i)_*$, and similarly define $\alpha^*$ and $\beta^*$. These maps are $\ul{H} \times \mathbf{Z}/n$ equivariant, where $\mathbf{Z}/n$ permutes the summands in the domain and acts by shifting on the target. We note that $\alpha_i^*$ simply restricts a function on $\mathbf{S}^{\{n\}}$ to the $i$th copy of $\mathbf{R}^{(n)}$, while $(\alpha_i)_*$ extends a function on this $\mathbf{R}^{(n)}$ to all of $\mathbf{S}^{\{n\}}$ by zero. The map
\begin{displaymath}
\big( \coprod_{i \in \mathbf{Z}/n} \mathbf{R}^{(n)} \big) \amalg \big( \coprod_{i \in \mathbf{Z}/n} \mathbf{R}^{(n-1)} \big) \to \mathbf{S}^{\{n\}}
\end{displaymath}
defined by the $\alpha_i$'s and $\beta_i$'s is an isomorphism of $H$-sets. It follows that the map
\begin{displaymath}
\alpha_* \oplus \beta_* \colon \mathcal{C}(\mathbf{R}^{(n)})^{\oplus n} \oplus \mathcal{C}(\mathbf{R}^{(n-1)})^{\oplus n} \to \mathcal{C}(\mathbf{S}^{\{n\}})
\end{displaymath}
is an isomorphism of $\ul{H}$-modules; the inverse map is $\alpha^* \oplus \beta^*$.
We have $\mathcal{C}(\mathbf{R}^{(n)})[r]=\mathcal{C}(\mathbf{R}^{(n-1)})[r]=0$ for $r>n$ \cite[Theorem~4.7]{line}, and so $\Delta_{\lambda}[r]=0$ for $r>n$ as well.
\subsection{The length $n$ piece}
We now examine $\Delta_{\lambda}[n]$.
\begin{lemma} \label{lem:std-1}
For $i \in \mathbf{Z}/n$, we have $(\alpha_{-i})_*(L_{\sigma^i(\lambda)}) \subset \Delta_{\lambda}$, and any non-zero element of this space generates $\Delta_{\lambda}$ as a $\ul{G}$-module.
\end{lemma}
\begin{proof}
It suffices to treat the $i=0$ case. Let $\mathbf{I}=(I_1,\ldots,I_n)$ be a tuple of bounded half-open intervals in $\mathbf{R}$ of type $\lambda$, with $I_1 \ll \cdots \ll I_n$. We can also regard $\mathbf{I}$ as a tuple of intervals in $\mathbf{S}$. As such, it is strictly ordered of type $\lambda$; here it is important that the $I_i$'s are bounded. It follows that $(\alpha_0)_*(\varphi_{\mathbf{I}})=\psi_{\mathbf{I}}$ is a generator for $\Delta_{\lambda}$. Since $\varphi_{\mathbf{I}}$ generates $L_{\lambda}$ as an $\ul{H}$-module and $\alpha_0$ is $\ul{H}$-equivariant, it follows that $(\alpha_0)_*$ carries $L_{\lambda}$ into $\Delta_{\lambda}$. Any non-zero element of the image generates the entire image as an $\ul{H}$-module (since $L_{\lambda}$ is irreducible); in particular, the $\ul{G}$-module it generates contains $\psi_{\mathbf{I}}$, and is therefore all of $\Delta_{\lambda}$.
\end{proof}
Let $Q = \bigoplus_{i<n} \mathcal{C}(\mathbf{R}^{(n)})[i] \subset \mathcal{C}(\mathbf{R}^{(n)})$ be the sum of all $\ul{H}$-simple submodules of length $<n$.
\begin{lemma} \label{lem:std-2}
For $i \in \mathbf{Z}/n$ we have $\alpha_{-i}^*(\Delta_{\lambda}) \subset L_{\sigma^i(\lambda)}+Q$.
\end{lemma}
\begin{proof}
It suffices to treat the $i=0$ case, so we assume this in what follows. It is enough to show that $\alpha_0^*(\psi_{\mathbf{I}})$ is contained in $L_{\lambda}+Q$ for each generator $\psi_{\mathbf{I}}$ of $\Delta_{\lambda}$. Note that if $\infty$ belongs to the interior of some interval in $\mathbf{I}$ then we can write $\psi_{\mathbf{I}}=\psi_{\mathbf{J}}+\psi_{\mathbf{K}}$, where $\psi_{\mathbf{J}}$ and $\psi_{\mathbf{K}}$ are generators of $\Delta_{\lambda}$ and $\infty$ is not in the interior of any interval in $\mathbf{J}$ or $\mathbf{K}$. Thus we assume $\infty$ is not in the interior of any interval in $\mathbf{I}$ in what follows.
We consider four cases:
\begin{enumerate}
\item We have $I_n<\infty<I_1$.
\item $\infty$ is the included left endpoint of $I_1$.
\item $\infty$ is the included right endpoint of $I_n$.
\item All other cases.
\end{enumerate}
In case (a), $\alpha_0^*(\psi_{\mathbf{I}})$ is one of the generators of $L_{\lambda}$, and so the statement holds. In case (d), we have $\alpha_0^*(\psi_{\mathbf{I}})=0$. Cases (b) and (c) are essentially the same, and we treat (b).
Assume we are in case (b); in particular, $\lambda_1={\circ}$. We regard $\psi_{\mathbf{I}}$ as an element of $\mathcal{C}(\mathbf{R}^{(n)})$; note that $I_1$ was of the form $[\infty,a)$, but (now that we are ignoring $\infty$) it becomes the open interval $(-\infty,a)$. To show that $\psi_{\mathbf{I}}$ belongs to $L_{\lambda}+Q$, it suffices (by \cite[\S 4.5]{line}) to show that it is orthogonal to the spaces $L_{\mu}$ where $\ell(\mu)=n$ and $\mu \ne \lambda^{\vee}$. Thus let $\mu$ be given, and let $\mathbf{J}$ be a tuple of type $\mu$; we show that $\langle \psi_{\mathbf{I}}, \varphi_{\mathbf{J}} \rangle=0$.
This pairing is equal to the product of $\vol(I_i \cap J_i)$ for $1 \le i \le n$, so we must show one of these factors vanishes. Since $\mu \ne \lambda^{\vee}$, there is some $i$ such that $\mu_i=\lambda_i$. If $i>1$ then this means that $I_i \cap J_i$ is a half-open interval (or empty), and thus has volume~0. In fact, the same is true for $i=1$. Indeed, we have $\mu_1=\lambda_1={\circ}$, and so $J_1=[c,d)$ for some $c<d$. If $c \ge a$ then $I_1 \cap J_1=\varnothing$, while otherwise the intersection is half-open of type ${\circ}$ with left endpoint $c$.
\end{proof}
\begin{example}
Here is the main example to keep in mind for Lemma~\mathrm{e}f{lem:std-2}. The module $\Delta_{{\circ}}$ contains the function $\psi_{[\infty,0)} \in \mathcal{C}(\mathbf{S})$. When we apply $\alpha^*_0$ to this, we get the function $\psi_{(-\infty,0)} \in \mathcal{C}(\mathbf{R})$. This belongs to (and generates) $L_{{\circ}}+L_{\varnothing}$. To see this explicitly, simply note that $\psi_{(-\infty,0)}=\psi_{\mathbf{R}}-\psi_{[0,\infty)}$ and the two terms belong to $L_{\varnothing}$ and $L_{{\circ}}$.
\end{example}
\begin{lemma} \label{lem:std-3}
We have $\Delta_{\lambda}[n] \cong A_1(\lambda)$ as $\ul{H} \times \Aut(\lambda)$-modules.
\end{lemma}
\begin{proof}
Since $\mathcal{C}(\mathbf{R}^{(n-1)})[n]=0$, it follows that $\alpha^*$ is injective on $\Delta_{\lambda}[n]$. Thus $\Delta_{\lambda}[n]$ has $\ul{H}$-length at most $n$ by Lemma~\ref{lem:std-2}. Now, regard $A_1(\lambda)$ as a submodule of $\mathcal{C}(\mathbf{R}^{(n)})^{\oplus n}$ by including the $i$th summand of the former into the $i$th summand of the latter. Then $\alpha_*$ defines an injective map $A_1(\lambda) \to \Delta_{\lambda}[n]$ by Lemma~\ref{lem:std-1}, which is clearly $\ul{H} \times \Aut(\lambda)$-equivariant. Since $A_1(\lambda)$ has $\ul{H}$-length $n$, this map is an isomorphism.
\end{proof}
\begin{lemma} \label{lem:std-4}
We have $\Delta_{\lambda,\zeta}[n] \cong \ol{A}_1(\lambda)$ as $\ul{H}$-modules, and any non-zero element of this space generates $\Delta_{\lambda,\zeta}$ as a $\ul{G}$-module.
\end{lemma}
\begin{proof}
The first statement follows from taking $\zeta$-eigenspaces in Lemma~\ref{lem:std-3}; recall that $A_1(\lambda) \cong \ol{A}_1(\lambda) \boxtimes k[\Aut(\lambda)]$. To prove the second, it suffices to show that the unique copy of $L_{\sigma^i(\lambda)}$ in $\Delta_{\lambda,\zeta}$ generates it as a $\ul{G}$-module, for any $i \in \mathbf{Z}/n$. It suffices to treat the $i=0$ case, so we assume this. Consider the composition $L_{\lambda} \to \Delta_{\lambda} \to \Delta_{\lambda,\zeta}$, where the first map is $\alpha_0^*$ and the second is the natural quotient map. The composition is non-zero: indeed, the multiplicity space of $L_{\lambda}$ in $\Delta_{\lambda}$ is identified with $k[\Aut(\lambda)]$, and $\alpha_0^*$ (regarded as an element of this space) is identified with $1 \in \Aut(\lambda)$, which has non-zero projection to the $\Aut(\lambda)$-invariants of the regular representation. Since the image of $L_{\lambda}$ in $\Delta_{\lambda}$ generates it as a $\ul{G}$-module (Lemma~\ref{lem:std-1}), the same is true for its image in $\Delta_{\lambda,\zeta}$.
\end{proof}
\subsection{The length $n-1$ piece}
So far, we have shown that $\Delta_{\lambda}[r]$ vanishes for $r>n$ and is what we want when $r=n$. The following lemma handles the $r<n-1$ cases, and gives some information in the $r=n-1$ case.
\begin{lemma} \label{lem:std-5}
We have $\Delta_{\lambda}[r]=0$ for $r<n-1$. Additionally, $\Delta_{\lambda}[n-1]$ is isomorphic to a quotient of $A_2(\lambda)$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:std-1}, $\Delta_{\lambda}$ is generated as a $\ul{G}$-module by a copy of $L_{\lambda}$, and so there is a surjection $I_{\lambda} \to \Delta_{\lambda}$ of $\ul{G}$-modules. Since $I_{\lambda}[r]=0$ for $r<n-1$ and $I_{\lambda}[n-1]=A_2(\lambda)$ by Proposition~\ref{prop:ind}, the result follows.
\end{proof}
We next turn our attention to $\Delta_{\lambda}[n-1]$. We regard $A_2(\lambda)$ as a subspace of $\mathcal{C}(\mathbf{R}^{(n-1)})$ by identifying the $i$th summand of the former with a subspace of the $i$th summand of the latter.
\begin{lemma} \label{lem:std-6}
The map $\beta^*$ defines an isomorphism $\Delta_{\lambda}[n-1] \to A_2(\lambda)$ of $\ul{H} \times \Aut(\lambda)$-modules.
\end{lemma}
\begin{proof}
Consider a generator $\psi_{\mathbf{I}}$ of $\Delta_{\lambda}$, where $\infty \in I_n$. Let $\mathbf{J}=(I_1, \ldots, I_{n-1})$, which is an ordered tuple of half-open intervals in $\mathbf{R}$ of type $\gamma_0(\lambda)$. We have $\beta_0^*(\psi_{\mathbf{I}})=\varphi_{\mathbf{J}}$, and $\beta_i^*(\psi_{\mathbf{I}})=0$ for $i \ne 0$. It follows that $\beta^*(\Delta_{\lambda})$ contains the 0th summand of $A_2(\lambda)$ (as this summand is $\ul{H}$-irreducible). By symmetry, it contains each summand of $A_2(\lambda)$, and so it contains all of $A_2(\lambda)$. Since $\mathcal{C}(\mathbf{R}^{(n-1)})[r]=0$ for $r \ge n$ and $\Delta_{\lambda}[r]=0$ for $r<n-1$, it follows that $\beta^*(\Delta_{\lambda})=\beta^*(\Delta_{\lambda}[n-1])$. Since $\Delta_{\lambda}[n-1]$ has length $\le n$ by Lemma~\ref{lem:std-5} and $A_2(\lambda)$ has length $n$, it follows that $\beta^*$ maps $\Delta_{\lambda}[n-1]$ isomorphically to $A_2(\lambda)$. As this map is $\ul{H} \times \Aut(\lambda)$-equivariant, the result follows.
\end{proof}
Since $A_2(\lambda) \cong \ol{A}_2(\lambda) \boxtimes k[\Aut(\lambda)]$, its $\zeta$-eigenspace is $\ol{A}_2(\lambda)$. We thus have $\Delta_{\lambda,\zeta}[n-1] \cong \ol{A}_2(\lambda)$, which completes the proof of Theorem~\ref{thm:std}(a,b).
\subsection{Action of normalizers}
We now prove Theorem~\ref{thm:std}(c,d). Fix $b \in \mathbf{S}^{\{n\}}$; without loss of generality, we take $b_n=\infty$. Let $a=(b_1,\ldots,b_{n-1}) \in \mathbf{R}^{(n-1)}$, so that $G(b)=H(a)$. If $\mu$ is a weight of length $\ge n$ then $L_{\mu}^{G(b)}=0$, while if $\mu$ has length $n-1$ then $L_{\mu}^{G(b)}$ is one-dimensional \cite[Corollary~4.11]{line}. For $1 \le i \le n$, let $Y_i=L_{\gamma_i(\lambda)}^{G(b)}$, regarded as a one-dimensional subspace of $A_2(\lambda)$, and let $X_i \subset \Delta_{\lambda}$ be its inverse image under $\beta^*$. Thus $\Delta_{\lambda}^{G(b)}$ is the direct sum of the lines $X_i$ for $i \in \mathbf{Z}/n$. Let $\tau$ be the standard generator of $G[b]/G(b) \cong \mathbf{Z}/n$. Recall that $\sigma^{N(\lambda)}$ is the generator of $\Aut(\lambda) \subset \mathbf{Z}/n$.
\begin{lemma} \label{lem:std-7}
We have $\tau(X_i)=X_{i+1}$ and $\sigma^{N(\lambda)}(X_i)=X_{i+N(\lambda)}$ for $i \in \mathbf{Z}/n$.
\end{lemma}
\begin{proof}
Let $\mathrm{ev}_b \colon \mathcal{C}(\mathbf{S}^{\{n\}}) \to k$ be the evaluation at $b$ map, defined by $\mathrm{ev}_b(\varphi)=\varphi(b)$. This map is $\ul{G}(b)$-equivariant. Similarly, let $\mathrm{ev}_a \colon \mathcal{C}(\mathbf{R}^{(n-1)}) \to k$ be the evaluation at $a$ map. It is clear that $\mathrm{ev}_b = \mathrm{ev}_a \circ \beta_0^*$. Now, let $\mathrm{ev}_a^i \colon \mathcal{C}(\mathbf{R}^{(n-1)})^{\oplus n} \to k$ be the map $\mathrm{ev}_a$ on the $i$th summand and~0 on the remaining summands. We thus have $\mathrm{ev}_b=\mathrm{ev}_a^0 \circ \beta^*$, and so $\mathrm{ev}_{\sigma^i(b)} = \mathrm{ev}_a^i \circ \beta^*$ for $i \in \mathbf{Z}/n$.
Now, any non-zero $H(a)$-invariant in $L_{\gamma_i(\lambda)} \subset \mathcal{C}(\mathbf{R}^{(n-1)})$ has non-zero image under $\mathrm{ev}_a$. (This follows from semi-simplicity, or the explicit description of invariants in \cite[\S 5]{line}.) It follows that $Y_i$ can be described as the joint kernel of the maps $\mathrm{ev}^j_a$, for $j \ne i$, on $A_2(\lambda)^{H(a)}$. It thus follows that $X_i$ is the joint kernel of the maps $\mathrm{ev}_{\sigma^j(b)}$, for $j \ne i$, on $\Delta_{\lambda}^{G(b)}$. Since $\tau(\sigma^i(b))=\sigma^{i+1}(b)$, the result follows.
\end{proof}
Let $\ol{X}_i \subset \Delta_{\lambda,\zeta}^{G(b)}$, for $i \in \mathbf{Z}/N(\lambda)$, be the intersection of $\Delta_{\lambda,\zeta}^{G(b)}$ with the $L_{\gamma_i(\lambda)}$-isotypic component. It is clear from the above discussion that $\ol{X}_i$ is one-dimensional and that $\Delta_{\lambda,\zeta}^{G(b)}$ is the direct sum of these spaces.
\begin{lemma} \label{lem:std-8}
We have $\tau(\ol{X}_i)=\ol{X}_{i+1}$ for $i \in \mathbf{Z}/N(\lambda)$. Also, $\tau^{N(\lambda)}$ acts by $\zeta$ on $\Delta_{\lambda,\zeta}^{G(b)}$.
\end{lemma}
\begin{proof}
For a weight $\mu$ and a $\ul{G}$-module $M$, write $M[\mu]$ for the $L_{\mu}$-isotypic piece of $M$. We have
\begin{displaymath}
\Delta_{\lambda}^{G(b)} \cap \Delta_{\lambda}[\gamma_i(\lambda)] = \bigoplus_{j \equiv i \text{ (mod $N(\lambda)$)}} X_j
\end{displaymath}
and so Lemma~\ref{lem:std-7} shows that $\tau$ induces an isomorphism
\begin{displaymath}
\Delta_{\lambda}^{G(b)} \cap \Delta_{\lambda}[\gamma_i(\lambda)] \to \Delta_{\lambda}^{G(b)} \cap \Delta_{\lambda}[\gamma_{i+1}(\lambda)]
\end{displaymath}
for $i \in \mathbf{Z}/N(\lambda)$. Since the action of $\Aut(\lambda)$ on $\Delta_{\lambda}$ commutes with $G$, and in particular $\tau$, it follows that we get a similar isomorphism after passing to an $\Aut(\lambda)$ isotypic component, and so $\tau(\ol{X}_i)=\ol{X}_{i+1}$. Since $\tau^{N(\lambda)}$ and $\sigma^{N(\lambda)}$ act in the same way on $\Delta_{\lambda}^{G(b)}$, they act the same on $\Delta_{\lambda,\zeta}^{G(b)}$ as well; $\sigma^{N(\lambda)}$ acts by $\zeta$ on this space by definition.
\end{proof}
We can now complete the proof of the theorem.
\begin{lemma} \label{lem:std-9}
If $\Delta_{\lambda,\zeta}$ is not simple then $\ol{A}_2(\lambda)$ is its unique proper non-zero submodule.
\end{lemma}
\begin{proof}
Suppose $M$ is a proper non-zero $\ul{G}$-submodule of $\Delta_{\lambda,\zeta}$. We show $M=\ol{A}_2(\lambda)$. By Lemma~\ref{lem:std-4}, we must have $M \subset \ol{A}_2(\lambda)$, as otherwise $M$ would be all of $\Delta_{\lambda,\zeta}$. We now prove the reverse containment. Since $M$ is non-zero, it contains some $L_{\gamma_i(\lambda)}$, and in particular contains its $G(b)$-invariant space $\ol{X}_i$. Lemma~\ref{lem:std-8} implies that $M$ contains $\ol{X}_j$ for each $j$. Since $L_{\gamma_j(\lambda)}$ is $\ul{H}$-simple and has non-zero intersection with $M$, it follows that $M$ contains $L_{\gamma_j(\lambda)}$. Thus $M$ contains $\ol{A}_2(\lambda)$, as required.
\end{proof}
\section{Standard filtration of induced modules}
\subsection{Statement of results}
Recall that for a weight $\lambda$ we put $\lambda^+=\lambda {\bullet}$ and $\lambda^-=\lambda {\circ}$, and $I_{\lambda}$ is the induced module $\Ind_H^G(L_{\lambda})$.
\begin{theorem} \label{thm:delta-seq}
For a non-empty weight $\lambda$ we have a natural short exact sequence
\begin{displaymath}
0 \to \Delta_{\lambda^+} \oplus \Delta_{\lambda^-} \to I_{\lambda} \to \Delta_{\lambda} \to 0.
\end{displaymath}
\end{theorem}
Before giving the proof, we note a few corollaries.
\begin{corollary} \label{cor:ind-factor}
Let $\lambda$ be a weight of length $n$ and let $X$ be a $\ul{G}$-module such that every simple $\ul{H}$-module in it has length $\le n$. Then any map $I_{\lambda} \to X$ factors through $\Delta_{\lambda}$.
\end{corollary}
\begin{proof}
The modules $\Delta_{\lambda^{\pm}} \subset I_{\lambda}$ are generated as $\ul{G}$-modules by $\ul{H}$-modules of length $n+1$. These generating $\ul{H}$-submodules must map to~0 in $X$, and so all of $\Delta_{\lambda^{\pm}}$ must map to zero. The result therefore follows from the theorem.
\end{proof}
\begin{corollary}
Every non-trivial simple $\ul{G}$-module is isomorphic to some $M_{\lambda,\zeta}$.
\end{corollary}
\begin{proof}
Let $M$ be a non-trivial simple $\ul{G}$-module, and let $\lambda$ be a weight of maximum length such that $L_{\lambda}$ appears in $M$; put $n=\ell(\lambda)$. By Frobenius reciprocity, we have a non-zero map $I_{\lambda} \to M$, which is surjective since $M$ is simple. By Corollary~\ref{cor:ind-factor}, this map induces a surjection $\Delta_{\lambda} \to M$. Since $\Delta_{\lambda}$ is the direct sum of the $\Delta_{\lambda,\zeta}$'s, it follows that $M$ is a quotient of some $\Delta_{\lambda,\zeta}$. Since $M_{\lambda,\zeta}$ is the unique simple quotient of $\Delta_{\lambda,\zeta}$, it follows that $M$ is isomorphic to $M_{\lambda,\zeta}$.
\end{proof}
\begin{remark}
If $\lambda=\varnothing$ and we interpret $\Delta_{\lambda}$ to be the trivial representation, then we obtain a sequence like the one in Theorem~\ref{thm:delta-seq}, except that we lose exactness on the left: the intersection of $\Delta_{{\bullet}}$ and $\Delta_{{\circ}}$ in $I_{\varnothing}$ is a copy of the trivial representation. See Figure~\ref{fig:proj}.
\end{remark}
\subsection{Proof}
We fix a weight $\lambda$ of length $n>0$ in what follows. By Theorem~\ref{thm:std}, $\Delta_{\lambda}$ contains a unique copy of $L_{\lambda}$, which generates it as a $\ul{G}$-module. It follows that there is a unique (up to scaling) non-zero map $I_{\lambda} \to \Delta_{\lambda}$, and it is surjective. Let $K$ be the kernel. We must show that $K$ is naturally isomorphic to $\Delta_{\lambda^+} \oplus \Delta_{\lambda^-}$.
For $\varphi \in \mathcal{C}(\mathbf{S}^{\{n+1\}})$, let $\varphi_{\infty} \in \mathcal{C}(\mathbf{R}^{(n)})$ be the function given by
\begin{displaymath}
\varphi_{\infty}(x_1, \ldots, x_n) = \varphi(x_1, \ldots, x_n, \infty).
\end{displaymath}
\begin{lemma} \label{lem:delta-seq-0}
Identify $\Ind(\mathcal{C}(\mathbf{R}^{(n)}))$ with $\mathcal{C}(\mathbf{S}^{\{n+1\}})$. For an $\ul{H}$-submodule $M$ of $\mathcal{C}(\mathbf{R}^{(n)})$, the $\ul{G}$-module $\Ind(M)$ is identified with the subspace of $\mathcal{C}(\mathbf{S}^{\{n+1\}})$ consisting of functions $\varphi$ such that $(g\varphi)_{\infty}$ belongs to $M$ for all $g \in G$. In particular, if $N$ is a $\ul{G}$-submodule of $\mathcal{C}(\mathbf{S}^{\{n+1\}})$ then $N \subset \Ind(M)$ if and only if $\varphi_{\infty} \in M$ for all $\varphi \in N$.
\end{lemma}
\begin{proof}
The induction $\Ind(\mathcal{C}(\mathbf{R}^{(n)}))$ consists of functions $\varphi \colon G \to \mathcal{C}(\mathbf{R}^{(n)})$ that are left $H$-equivariant and right $G$-smooth. One easily sees that such functions correspond to $G$-smooth functions $G \times^H \mathbf{R}^{(n)} \to k$, where the domain here is the quotient of the usual product by the relations $(gh,x)=(g,hx)$ for $h \in H$. We have an isomorphism
\begin{displaymath}
G \times^H \mathbf{R}^{(n)} \to \mathbf{S}^{\{n+1\}}, \qquad
(g, x_1, \ldots, x_n) \mapsto (gx_1, \ldots, gx_n, g\infty).
\end{displaymath}
In this way, we identify $\Ind(\mathcal{C}(\mathbf{R}^{(n)}))$ with $\mathcal{C}(\mathbf{S}^{\{n+1\}})$.
Now, let $\varphi \in \mathcal{C}(\mathbf{S}^{\{n+1\}})$ be given. The function $\varphi' \colon G \times^H \mathbf{R}^{(n)} \to k$ corresponding to $\varphi$ is given by
\begin{displaymath}
\varphi'(g, x_1, \ldots, x_n)=\varphi(gx_1, \ldots, gx_n, g\infty).
\end{displaymath}
Clearly, $\varphi$ belongs to $\Ind(M)$ if and only if $\varphi'(g, -)$ belongs to $M$ for each $g \in G$. As $\varphi'(g, -)$ is the function $(g^{-1} \varphi)_{\infty}$, the main claim follows.
As for the final sentence, the condition is clearly necessary. Suppose that it holds. Let $\varphi \in N$ be given. Since $N$ is a $\ul{G}$-submodule, we have $g \varphi \in N$ for any $g \in G$. Thus $(g \varphi)_{\infty} \in M$ for all $g \in G$ by hypothesis. Hence $\varphi \in \Ind(M)$, which proves the claim.
\end{proof}
\begin{lemma} \label{lem:delta-seq-1}
Identify $\Ind(\mathcal{C}(\mathbf{R}^{(n)}))$ with $\mathcal{C}(\mathbf{S}^{\{n+1\}})$. Then $I_{\lambda}$ is identified with a $\ul{G}$-submodule of $\mathcal{C}(\mathbf{S}^{\{n+1\}})$ that contains $\Delta_{\lambda^+}$ and $\Delta_{\lambda^-}$.
\end{lemma}
\begin{proof}
We apply the final sentence of Lemma~\ref{lem:delta-seq-0} with $M=L_{\lambda}$ and $N=\Delta_{\lambda^+}$. It suffices to show that $\varphi_{\infty} \in M$ for the generators $\varphi$ of $N$. Thus suppose $\mathbf{I}$ is a cyclically ordered tuple of intervals of type $\lambda^+$, and put $\varphi=\psi_{\mathbf{I}}$. If $\infty \not\in I_{n+1}$ then $\varphi_{\infty}=0$, which belongs to $M$. Suppose now that $\infty \in I_{n+1}$. Then $\mathbf{J}=(I_1,\ldots,I_n)$ is a totally ordered tuple of intervals in $\mathbf{R}$ of type $\lambda$, and $\varphi_{\infty}=\varphi_{\mathbf{J}}$, which belongs to $M$. Thus $N \subset \Ind(M)$. The $\lambda^-$ case is similar.
\end{proof}
The following is the key lemma in the proof of the theorem.
\begin{lemma} \label{lem:delta-seq-2}
$\Delta_{\lambda^+}$ and $\Delta_{\lambda^-}$ are linearly disjoint subspaces of $\mathcal{C}(\mathbf{S}^{\{n+1\}})$.
\end{lemma}
\begin{proof}
Let $X=\Delta_{\lambda^+} + \Delta_{\lambda^-}$, where the sum is taken inside of $\mathcal{C}(\mathbf{S}^{\{n+1\}})$. Let $b \in \mathbf{S}^{\{n+1\}}$ with $b_{n+1}=\infty$. We know that $\Delta_{\lambda^{\pm}}$ contains $n+1$ simple $\ul{H}$-modules of length $n$, and $n+1$ of length $n+1$ (Theorem~\ref{thm:std}). The length $n+1$ modules in $\Delta_{\lambda^+}$ and $\Delta_{\lambda^-}$ are non-isomorphic, and thus intersect to~0. It thus suffices to show $X$ contains $2n+2$ simple $\ul{H}$-modules of length $n$. For this, it is enough to show that $X^{G(b)}$ has dimension at least $2n+2$.
Our usual trick for constructing co-invariants is to use evaluation maps; see, e.g., \cite[\S 4.2]{line}. Here there are not enough evaluation maps, so we use a variant of this idea: we evaluate at all but one coordinate and integrate the remaining coordinate over an interval. By choosing the intervals correctly, we can ensure that the integral is zero on one summand of $X$ and non-zero on the other, and this will let us show that these functionals are linearly independent.
Define functionals $\alpha_0, \beta_0 \in \mathcal{C}(\mathbf{S}^{\{n+1\}})^*$ by
\begin{align*}
\alpha_0(\varphi) &= \int_{(b_n,b_{n+1})} \varphi(b_1, \ldots, b_n, t) \, dt \\
\beta_0(\varphi) &= \int_{(b_{n+1}, b_1)} \varphi(b_1, \ldots, b_n, t) \, dt.
\end{align*}
Let $\alpha_i$ and $\beta_i$ be the $i$th cyclic shifts of $\alpha_0$ and $\beta_0$, for $i \in \mathbf{Z}/(n+1)$. It is clear that $\alpha_0$ and $\beta_0$ are $G(b)$-equivariant. We claim that these $2n+2$ functionals are linearly independent on $X$, which will complete the proof.
Let $\mathbf{I}=(I_1, \ldots, I_{n+1})$ be a cyclically ordered tuple of intervals of type $\lambda^+$ such that $b_i$ belongs to the interior of $I_i$ for all $i$. We have
\begin{displaymath}
\alpha_0(\psi_{\mathbf{I}})=\vol((b_n, b_{n+1}) \cap I_{n+1}) = -1, \qquad
\beta_0(\psi_{\mathbf{I}})=\vol((b_{n+1}, b_1) \cap I_{n+1}) = 0.
\end{displaymath}
Indeed, in the first case the intersection is an open interval, and in the second it is a half-open interval. We also have $\alpha_i(\psi_{\mathbf{I}})=\beta_i(\psi_{\mathbf{I}})=0$ for all $i \neq 0$. Taking $\mathbf{I}$ of type $\lambda^-$ leads to similar results, but with the roles of $\alpha$ and $\beta$ reversed. This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:delta-seq}]
In what follows, we write $\ell(M)$ for the length (i.e., number of simple constituents) of $M$ as an $\ul{H}$-module. We have
\begin{displaymath}
\ell(\Delta_{\lambda})=2n, \qquad \ell(\Delta_{\lambda^{\pm}}) = 2n+2, \qquad \ell(I_{\lambda}) = 6n+4.
\end{displaymath}
The first two formulas follow from Theorem~\ref{thm:std}, while the third follows from Proposition~\ref{prop:ind}. We thus see that $\ell(K)=4n+4$. From Lemma~\ref{lem:delta-seq-1}, we see that $\Delta_{\lambda^{\pm}}$ is naturally contained in $I_{\lambda}$. Since these modules are generated by $\ul{H}$-submodules of length $n+1$, the generators must map to~0 in $\Delta_{\lambda}$. We thus find that $\Delta_{\lambda^{\pm}} \subset K$. From Lemma~\ref{lem:delta-seq-2}, we see that $\Delta_{\lambda^+} \oplus \Delta_{\lambda^-} \subset K$. Since the two sides have the same $\ul{H}$-length, this inclusion is an equality.
\end{proof}
\section{Special and generic modules} \label{s:fine}
\subsection{Statement of results} \label{ss:fine-intro}
For an integer $n$, let $\pi(n)$ be the length $\vert n \vert$ weight consisting of all ${\bullet}$'s if $n \ge 0$, and all ${\circ}$'s if $n \le 0$. Also, put $\epsilon(n)=(-1)^{n+1}$. We say that a pair $(\lambda, \zeta)$ with $\lambda$ non-empty is \defn{special} if $\lambda=\pi(n)$ and $\zeta=\epsilon(n)$ for some integer $n$; otherwise, we say that $(\lambda, \zeta)$ is \defn{generic}. We also apply this terminology to the simple modules $M_{\lambda,\zeta}$, and label the trivial module $\bbone$ as special.
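To make the special/generic dichotomy concrete, here is a small Python sketch (an entirely hypothetical encoding, not part of the formal development: ${\bullet}$ and ${\circ}$ are written as the characters \texttt{b} and \texttt{w}) that tests whether a pair $(\lambda,\zeta)$ is special:

```python
def pi(n):
    # the weight of length |n|: all bullets for n >= 0, all circles for n <= 0
    return ('b' if n >= 0 else 'w') * abs(n)

def eps(n):
    # eps(n) = (-1)^(n+1): -1 for n even, +1 for n odd
    return 1 if n % 2 else -1

def is_special(weight, zeta):
    # (lambda, zeta) is special iff lambda = pi(n) and zeta = eps(n) for some n
    n = len(weight)
    return any(weight == pi(s * n) and zeta == eps(s * n) for s in (1, -1))

print(is_special('bbb', 1), is_special('bbb', -1), is_special('bw', 1))
# True False False
```

Note that only the constant weights $\pi(\pm n)$ can be special, and then only for the single sign $\epsilon(\pm n)$; every other pair is generic.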
\begin{theorem} \label{thm:fine}
Let $(\lambda, \zeta)$ be given with $\lambda$ non-empty.
\begin{enumerate}
\item If $(\lambda, \zeta)$ is generic then $\Delta_{\lambda,\zeta}=M_{\lambda,\zeta}$, and this module is both projective and injective.
\item Suppose that $(\lambda, \zeta)=(\pi(n), \epsilon(n))$ is special with $n>0$. Then:
\begin{enumerate}[(i)]
\item The standard module $\Delta_{\pi(n),\epsilon(n)}$ has length two.
\item The unique proper non-zero submodule of $\Delta_{\pi(n),\epsilon(n)}$ is isomorphic to $M_{\pi(n-1),\epsilon(n-1)}$ (which is taken to mean $\mathbf{b}one$ if $n=1$).
\item Let $b \in \mathbf{S}^{\{n+1\}}$. Then $M_{\lambda,\zeta}^{G(b)}$ is one-dimensional, and the standard generator $\tau$ of $G[b]/G(b)$ acts on this space by $-\epsilon(n)$.
\end{enumerate}
\end{enumerate}
\end{theorem}
We note that if $(\lambda,\zeta)$ is generic with $\ell(\lambda)=n$ then $M_{\lambda,\zeta}=\Delta_{\lambda,\zeta}$ contains $\ul{H}$-simples of lengths $n$ and $n-1$. On the other hand, if $(\lambda,\zeta)$ is special then $M_{\lambda,\zeta}$ is irreducible as an $\ul{H}$-module, as it is simply isomorphic to $L_{\lambda}$ by Theorem~\ref{thm:std}.
\begin{corollary}
The simples $M_{\lambda,\zeta}$ and $M_{\mu,\omega}$ are isomorphic if and only if $\omega=\zeta$ and $\mu$ is a cyclic shift of $\lambda$ (i.e., $\mu=\sigma^i(\lambda)$ for some $i$).
\end{corollary}
\begin{proof}
If $\omega=\zeta$ and $\mu$ is a cyclic shift of $\lambda$ then we have seen that $\Delta_{\lambda,\zeta}$ is isomorphic to $\Delta_{\mu,\omega}$, and so it follows that $M_{\lambda,\zeta}$ is isomorphic to $M_{\mu,\omega}$. Now suppose that $M_{\lambda,\zeta}$ is isomorphic to $M_{\mu,\omega}$. By the preceding remarks, $(\lambda,\zeta)$ and $(\mu,\omega)$ are both generic or both special. If both are generic then the result follows from Corollary~\ref{cor:std-isom}. Now suppose both are special. Then as $\ul{H}$-modules we have $M_{\lambda,\zeta}=L_{\lambda}$ and $M_{\mu,\omega}=L_{\mu}$, and so $\lambda=\mu$. Since $\zeta=(-1)^{\ell(\lambda)+1}$ and $\omega=(-1)^{\ell(\mu)+1}$, we also have $\omega=\zeta$.
\end{proof}
\begin{corollary}
For $n>0$, we have $M_{\pi(n), \epsilon(n)} \cong {\textstyle \bigwedge}^n(M_{{\bullet}})$.
\end{corollary}
\begin{proof}
The $\ul{H}$-module underlying $M_{{\bullet}}$ is $L_{{\bullet}}$. We have ${\textstyle \bigwedge}^n(L_{{\bullet}})=L_{\pi(n)}$ by \cite[Proposition~8.5]{line}. Thus ${\textstyle \bigwedge}^n(M_{{\bullet}})$ is a $\ul{G}$-module whose underlying $\ul{H}$-module is $L_{\pi(n)}$. It is therefore irreducible as a $\ul{G}$-module, and must be isomorphic to $M_{\pi(n),\epsilon(n)}$ by the classification of irreducibles and Theorem~\ref{thm:fine}.
\end{proof}
Theorem~\ref{thm:fine} also implies the decomposition of $\mathcal{C}=\uRep^{\mathrm{f}}(G)$ stated in \S \ref{ss:results}(b). We now explain. Recall that $\mathcal{C}_{\rm gen}$ (resp.\ $\mathcal{C}_{\rm sp}$) is the full subcategory of $\mathcal{C}$ spanned by modules whose simple constituents are all generic (resp.\ special). These categories are clearly orthogonal in the sense that $\Hom_{\mathcal{C}}(M,N)=\Hom_{\mathcal{C}}(N,M)=0$ for $M \in \mathcal{C}_{\rm gen}$ and $N \in \mathcal{C}_{\rm sp}$. Let $M$ be a given object of $\mathcal{C}$. Since every generic simple is projective and injective, any such constituent of $M$ splits off as a summand. We thus obtain a canonical decomposition $M=M_{\rm gen} \oplus M_{\rm sp}$ with $M_{\rm gen} \in \mathcal{C}_{\rm gen}$ and $M_{\rm sp} \in \mathcal{C}_{\rm sp}$; to see that this is canonical, simply note that $M_{\rm gen}$ is the maximal subobject of $M$ belonging to $\mathcal{C}_{\rm gen}$, and similarly for $M_{\rm sp}$. We thus find that $\mathcal{C}=\mathcal{C}_{\rm gen} \oplus \mathcal{C}_{\rm sp}$, and that $\mathcal{C}_{\rm gen}$ is semi-simple. The precise structure of $\mathcal{C}_{\rm sp}$ will be determined in \S \ref{s:special} below.
\subsection{The special case}\label{ss:fine-special}
For $1 \le i \le n+1$, let $p_i \colon \mathbf{S}^{\{n+1\}} \to \mathbf{S}^{\{n\}}$ be the projection away from the $i$th coordinate. Define $d_n \colon \mathcal{C}(\mathbf{S}^{\{n\}}) \to \mathcal{C}(\mathbf{S}^{\{n+1\}})$ to be $\sum_{i=1}^{n+1} (-1)^{i+1} p_i^*$.
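Since the projections $p_i$ satisfy the usual simplicial identities, consecutive maps $d_{n+1} \circ d_n$ compose to zero. As an informal sanity check (not part of the formal development), the following Python sketch verifies $d \circ d = 0$ for the alternating sum of coordinate-deletion pullbacks, with points modelled by elements of $\mathbf{Z}/7$ and a random test function; all names here are hypothetical.

```python
import itertools
import random

random.seed(0)
m = 7  # hypothetical finite stand-in for the points of S

def d(f, k):
    # (d f)(x_1, ..., x_{k+1}) = sum_{i=1}^{k+1} (-1)^{i+1} f(x with x_i omitted)
    def df(*x):
        return sum((-1) ** i * f(*(x[:i] + x[i + 1:])) for i in range(k + 1))
    return df

table = {}
def f2(a, b):
    # a random function on ordered pairs of points (memoized for consistency)
    return table.setdefault((a, b), random.randint(-5, 5))

g = d(f2, 2)   # a function of 3 points
h = d(g, 3)    # a function of 4 points: should vanish identically

vals = {h(*x) for x in itertools.permutations(range(m), 4)}
print(vals)  # {0}
```

The cancellation is purely combinatorial: each function value obtained by deleting two coordinates appears twice in the expansion of $d \circ d$, with opposite signs.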
\begin{proposition} \label{prop:fine-aux}
The map $d_n$ carries $\Delta_{\pi(n),\epsilon(n)}$ into $\Delta_{\pi(n+1),\epsilon(n+1)}$. The sequence of maps
\begin{displaymath}
\xymatrix@C=3em{
0 \ar[r] & \bbone \ar[r]^-{d_0} & \Delta_{\pi(1),\epsilon(1)} \ar[r]^-{d_1} & \Delta_{\pi(2),\epsilon(2)} \ar[r]^-{d_2} & \cdots }
\end{displaymath}
is an exact complex.
\end{proposition}
Before proving the proposition, we show how it implies the special case of the theorem.
\begin{proof}[Proof of Theorem~\ref{thm:fine}(b)]
No map in the complex is surjective (since $\Delta_{\pi(n),\epsilon(n)}$ contains simple $\ul{H}$-modules not appearing in $\Delta_{\pi(n-1),\epsilon(n-1)}$ by Theorem~\ref{thm:std}), and so no map in the complex is zero (besides the leftmost one). Thus the terms of the complex are not simple, and thus of length two by Theorem~\ref{thm:std}. It follows that the image of $d_{n-1}$ must be the unique proper non-zero submodule of $\Delta_{\pi(n),\epsilon(n)}$; since this is a simple quotient of $\Delta_{\pi(n-1),\epsilon(n-1)}$, it must be $M_{\pi(n-1),\epsilon(n-1)}$.
Now, let $b \in \mathbf{S}^{\{n+1\}}$ be given. We have an exact sequence
\begin{displaymath}
0 \to M_{\pi(n),\epsilon(n)} \to \Delta_{\pi(n+1),\epsilon(n+1)} \to M_{\pi(n+1),\epsilon(n+1)} \to 0.
\end{displaymath}
Since $\uRep(G(b))$ is semi-simple, formation of $G(b)$-invariants is exact. As the rightmost module has no invariants, we obtain an isomorphism $M_{\pi(n),\epsilon(n)}^{G(b)} \cong \Delta^{G(b)}_{\pi(n+1),\epsilon(n+1)}$. We have already seen in Theorem~\ref{thm:std} that the standard generator of $G[b]/G(b)$ acts on this space by $\epsilon(n+1)$.
This completes the proof of this case of the theorem when $\lambda$ consists of all ${\bullet}$'s. Of course, the case when $\lambda$ consists of all ${\circ}$'s is exactly the same.
\end{proof}
Let $\mathbf{I}=(I_1,\ldots,I_n)$ be an ordered tuple of intervals in $\mathbf{S}$ of type $\pi(n)$. By Proposition~\ref{prop:merely-ordered}, $\psi_{\mathbf{I}}$ belongs to $\Delta_{\pi(n)}$. Let $\theta_{\mathbf{I}}$ be $n$ times the projection of $\psi_{\mathbf{I}}$ to $\Delta_{\pi(n),\epsilon(n)}$. That is, if $n$ is odd then $\theta_{\mathbf{I}}=\sum_{i \in \mathbf{Z}/n} (\sigma^i)^*(\psi_{\mathbf{I}})$, while if $n$ is even then $\theta_{\mathbf{I}}=\sum_{i \in \mathbf{Z}/n} (-1)^i (\sigma^i)^*(\psi_{\mathbf{I}})$. The following lemma is the key computation needed to prove the proposition.
\begin{lemma} \label{lem:fine-1}
Let $\mathbf{S} = I_1 \sqcup \cdots \sqcup I_{n+1}$ be a decomposition where each $I_i$ is a half-open interval of type ${\bullet}$. Put $\mathbf{J}=(I_1, \ldots, I_n)$ and $\mathbf{I}=(I_1, \ldots, I_{n+1})$. Then $d_n(\theta_{\mathbf{J}})=(-1)^n \cdot \theta_{\mathbf{I}}$.
\end{lemma}
\begin{proof}
Put $\alpha=d_n(\theta_{\mathbf{J}})$. It suffices to establish the following three statements:
\begin{enumerate}
\item $\alpha(x_1, \ldots, x_{n+1})=(-1)^n$ if $x_i \in I_i$ for all $1 \le i \le n+1$.
\item $\alpha(x_1, \ldots, x_{n+1})=0$ if two $x_i$'s belong to the same $I_j$.
\item $\sigma^*(\alpha)=\epsilon(n+1) \alpha$.
\end{enumerate}
Indeed, this implies that $\alpha=(-1)^n \cdot \theta_{\mathbf{I}}$.
(a) Suppose $x_i \in I_i$ for all $1 \le i \le n+1$. By definition,
\begin{equation} \label{eq:alpha}
\alpha(x_1, \ldots, x_{n+1})=\sum_{i=1}^{n+1} (-1)^{i+1} \theta_{\mathbf{J}}(x_1, \ldots, \hat{x}_i, \ldots, x_{n+1}).
\end{equation}
In this sum, only the $i=n+1$ term is non-zero, and it is equal to $(-1)^n$. We thus find $\alpha(x_1, \ldots, x_{n+1})=(-1)^n$, as required.
(b) Suppose $x \in \mathbf{S}^{\{n+1\}}$ is given and two coordinates belong to the same $I_i$. Since the $x_i$'s are ordered, there are indices $i$ and $j$ such that $x_i$ and $x_{i+1}$ both belong to $I_j$. If $j=n+1$ then every term in the sum in \eqref{eq:alpha} vanishes, as some coordinate belongs to $I_{n+1}$. Suppose $j \ne n+1$. All terms in the sum in \eqref{eq:alpha} other than the $i$th and $(i+1)$th vanish, as there are two coordinates in the same interval in $\mathbf{J}$. Since $x_i$ and $x_{i+1}$ belong to the same interval in $\mathbf{J}$, we have
\begin{displaymath}
\theta_{\mathbf{J}}(x_1, \ldots, \hat{x}_i, \ldots, x_{n+1}) =
\theta_{\mathbf{J}}(x_1, \ldots, \hat{x}_{i+1}, \ldots, x_{n+1}).
\end{displaymath}
Thus the $i$th and $(i+1)$th terms in the sum in \eqref{eq:alpha} cancel. Hence $\alpha(x)=0$, as required.
(c) We have
\begin{align*}
& (\sigma^* \alpha)(x_1, \ldots, x_{n+1})
= \alpha(x_{n+1}, x_1, \ldots, x_n) \\
&= \theta_{\mathbf{J}}(x_1, \ldots, x_n) + \sum_{i=1}^n (-1)^i \theta_{\mathbf{J}}(x_{n+1}, x_1, \ldots, \hat{x}_i, \ldots, x_n) \\
&= \theta_{\mathbf{J}}(x_1, \ldots, x_n) + \sum_{i=1}^n (-1)^{i+n+1} \theta_{\mathbf{J}}(x_1, \ldots, \hat{x}_i, \ldots, x_n, x_{n+1}) \\
&= \sum_{i=1}^{n+1} (-1)^{i+n+1} \theta_{\mathbf{J}}(x_1, \ldots, \hat{x}_i, \ldots, x_{n+1})
= (-1)^n \alpha(x_1, \dots, x_{n+1}).
\end{align*}
In the third step, we used that $\theta_{\mathbf{J}}$ is an eigenvector for $\sigma^*$ of eigenvalue $\epsilon(n)=(-1)^{n+1}$. We thus see that $\alpha$ is an eigenvector for $\sigma^*$ with eigenvalue $\epsilon(n+1)$.
\end{proof}
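The computation in Lemma~\ref{lem:fine-1} is easy to confirm numerically. The following Python sketch (an informal, hypothetical finite model, not part of the formal development: $\mathbf{S}$ is replaced by $\mathbf{Z}/9$, cut into three consecutive blocks $I_1, I_2, I_3$) checks the case $n=2$ of the lemma, namely $d_2(\theta_{\mathbf{J}})=\theta_{\mathbf{I}}$ on all cyclically ordered triples:

```python
from itertools import permutations

m = 9  # discrete stand-in for the circle S
I1, I2, I3 = set(range(0, 3)), set(range(3, 6)), set(range(6, 9))

def cyc_ordered(x):
    # distinct points met in this order going once around the circle
    a, b, c = x
    return 0 < (b - a) % m < (c - a) % m

def psi(intervals, x):
    return 1 if all(p in I for p, I in zip(x, intervals)) else 0

def shift(x):  # sigma: (x_1, ..., x_k) -> (x_k, x_1, ..., x_{k-1})
    return (x[-1],) + x[:-1]

def theta_J(x):  # n = 2 is even: alternating signs (-1)^i
    return psi((I1, I2), x) - psi((I1, I2), shift(x))

def theta_I(x):  # n + 1 = 3 is odd: all signs +1
    return sum(psi((I1, I2, I3), y) for y in (x, shift(x), shift(shift(x))))

def d2_theta_J(x):  # (d_2 f)(x) = sum_i (-1)^{i+1} f(x with i-th coordinate omitted)
    return sum((-1) ** i * theta_J(x[:i] + x[i + 1:]) for i in range(3))

triples = [x for x in permutations(range(m), 3) if cyc_ordered(x)]
ok = all(d2_theta_J(x) == theta_I(x) for x in triples)
print(ok, len(triples))  # True 252; here (-1)^n = +1 since n = 2
```

The same model also exhibits $\theta_{\mathbf{J}}$ as a $\sigma^*$-eigenvector of eigenvalue $\epsilon(2)=-1$, as used in part (c) of the proof.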
\begin{lemma} \label{lem:fine-2}
With the same notation as Lemma~\ref{lem:fine-1}, $\theta_{\mathbf{J}}$ generates $\Delta_{\pi(n),\epsilon(n)}$ as a $\ul{G}$-module.
\end{lemma}
\begin{proof}
Without loss of generality, we suppose $\infty$ belongs to $I_{n+1}$. Let $i \colon \mathbf{R}^{(n)} \to \mathbf{S}^{\{n\}}$ denote the standard inclusion. Then $i^*(\theta_{\mathbf{J}})=\varphi_{\mathbf{J}}$ generates $L_{\pi(n)}$ as an $\ul{H}$-module. It follows that the $\ul{G}$-submodule of $\Delta_{\pi(n),\epsilon(n)}$ generated by $\theta_{\mathbf{J}}$ contains $L_{\pi(n)}$, and is thus all of $\Delta_{\pi(n),\epsilon(n)}$ by Theorem~\ref{thm:std}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:fine-aux}]
Let $\mathbf{I}$ and $\mathbf{J}$ be as in Lemma~\ref{lem:fine-1}. By that lemma, $d_n(\theta_{\mathbf{J}})=(-1)^n \cdot \theta_{\mathbf{I}}$ belongs to $\Delta_{\pi(n+1),\epsilon(n+1)}$. By Lemma~\ref{lem:fine-2}, it follows that $d_n$ maps all of $\Delta_{\pi(n),\epsilon(n)}$ into $\Delta_{\pi(n+1),\epsilon(n+1)}$. Note that $d_n(\Delta_{\pi(n),\epsilon(n)})$ is non-zero by the computation of $d_n(\theta_{\mathbf{J}})$.
The proposition now essentially follows from Theorem~\ref{thm:std}. Since $d_n$ cannot be surjective (by comparing the underlying $\ul{H}$-modules), each $\Delta_{\pi(n),\epsilon(n)}$ must have length two, and the image of $d_n$ must be the unique proper non-zero submodule of $\Delta_{\pi(n+1),\epsilon(n+1)}$. Moreover, since the maps $p_i$ satisfy the usual simplicial identities, we have $d_{n+1} \circ d_n=0$, so the kernel of $d_{n+1}$ contains the image of $d_n$; as this kernel is a proper submodule, it coincides with the image. Thus the complex is exact.
\end{proof}
\subsection{The generic case}
We now prove Theorem~\mathrm{e}f{thm:fine}(a), in two steps.
\begin{lemma}
If $(\lambda,\zeta)$ is generic then $\Delta_{\lambda,\zeta}$ is irreducible.
\end{lemma}
\begin{proof}
Consider the following statement:
\begin{itemize}
\item[$(S_n)$] If $\Delta_{\lambda,\zeta}$ is reducible, with $\ell(\lambda)=n$, then $(\lambda,\zeta)$ is special.
\end{itemize}
We prove the statement $(S_n)$ by induction on $n$. The statement is vacuously true for $n \le 1$ since then any $(\lambda,\zeta)$ is special.
Suppose now that $(S_{n-1})$ holds, and let $\Delta_{\lambda,\zeta}$ be reducible with $\ell(\lambda)=n$. We show that $(\lambda,\zeta)$ is special. By Theorem~\ref{thm:std}, $\Delta_{\lambda,\zeta}$ has a unique non-zero proper submodule $M$, which is irreducible and isomorphic to $\ol{A}_2(\lambda)$ as an $\ul{H}$-module. Now, $M$ must be isomorphic to $M_{\mu,\omega}$ for some $(\mu,\omega)$ where $\ell(\mu)=n-1$. If $(\mu,\omega)$ is not special then $M_{\mu,\omega}=\Delta_{\mu,\omega}$ has $\ul{H}$-simples of length $n-1$ and $n-2$, and thus cannot be $M$. Thus $(\mu,\omega)$ must be special. Without loss of generality, say $(\mu,\omega)=(\pi(n-1),\epsilon(n-1))$. Since $\ol{A}_2(\lambda)=L_{\pi(n-1)}$, it follows that $\lambda=\pi(n)$.
Now, let $b \in \mathbf{S}^{\{n\}}$, and let $\tau$ be the standard generator of $G[b]/G(b)$. By Theorem~\mathrm{e}f{thm:std}, $\tau$ acts by $\zeta$ on $\Delta_{\lambda,\zeta}^{G(b)}=M^{G(b)}$. On the other hand, by Theorem~\mathrm{e}f{thm:fine}(b) (which we have already proved in \S \mathrm{e}f{ss:fine-special}), $\tau$ acts by $\epsilon(n)$ on $M_{\pi(n-1),\epsilon(n-1)}^{G(b)}$. Thus $\zeta=\epsilon(n)$, and so $(\lambda,\zeta)$ is special.
\end{proof}
\begin{lemma}
If $(\lambda,\zeta)$ is generic then $\Delta_{\lambda,\zeta}$ is both projective and injective.
\end{lemma}
\begin{proof}
Let $n=\ell(\lambda)$ and let $P$ be the projective cover of $M_{\lambda,\zeta}$. Since there is a surjection $I_{\lambda} \to M_{\lambda,\zeta}$, it follows that $P$ is a summand of $I_{\lambda}$. In particular, $P$ is also injective. (This is true in any pre-Tannakian category; see \cite[Proposition~6.1.3]{EGNO}.)
Suppose that $\mu$ is a weight of length $n+1$. Then
\begin{displaymath}
0=\Hom_{\ul{G}}(P, I_{\mu})=\Hom_{\ul{H}}(P, L_{\mu}).
\end{displaymath}
The first $\Hom$ space counts the multiplicity of $M_{\lambda,\zeta}$ in $I_{\mu}$, and this must vanish since $M_{\lambda,\zeta}=\Delta_{\lambda,\zeta}$ has an $\ul{H}$-simple of length $n-1$, but $I_{\mu}$ does not by Proposition~\ref{prop:ind}. The second equality above is Frobenius reciprocity. We conclude that $P$ is concentrated in degrees $n$ and $n-1$.
By Corollary~\ref{cor:ind-factor}, we find that the quotient map $I_{\lambda} \to P$ factors through $\Delta_{\lambda}$, and so $P$ is a summand of $\Delta_{\lambda}$. Since $\Delta_{\lambda}=\bigoplus \Delta_{\lambda,\omega}$ is the indecomposable decomposition of $\Delta_{\lambda}$, we must have $P=\Delta_{\lambda,\zeta}$.
\end{proof}
\section{The special block} \label{s:special}
\subsection{Initial remarks}
Recall that $\mathcal{C}_{\rm sp}$ is the full subcategory of $\mathcal{C}=\uRep^{\mathrm{f}}(G)$ spanned by objects whose simple constituents are all special. Our goal in this section is to determine the structure of this category.
Let $n$ be an integer (possibly negative). Recall that $\pi(n)$ is the word of length $\vert n \vert$ consisting of all ${\bullet}$'s if $n \ge 0$, and all ${\circ}$'s if $n \le 0$; also, we put $\epsilon(n)=(-1)^{n+1}$. We now introduce some additional notation:
\begin{itemize}
\item We let $M(n)$ be the simple $M_{\pi(n),\epsilon(n)}$ for $n \ne 0$, and we put $M(0)=\bbone$.
\item For $n>0$, we let $\Delta(n)=\Delta_{\pi(n),\epsilon(n)}$. For $n \le 0$, we let $\Delta(n)=\Delta(1-n)^{\vee}$.
\item For $n \le 0$, we let $\nabla(n)=\Delta_{\pi(n-1),\epsilon(n-1)}$. For $n>0$, we let $\nabla(n)=\nabla(1-n)^{\vee}$.
\end{itemize}
The definitions of $\Delta(n)$ and $\nabla(n)$ may appear somewhat strange, but we will see that they work very well; see Figure~\ref{fig:proj} for some motivation. Here are some small examples of how the indexing works:
\begin{align*}
\Delta(-1) &= \Delta_{{\bullet}{\bullet},-1}^{\vee} & \Delta(0) &= \Delta_{{\bullet},+1}^{\vee} & \Delta(1) &= \Delta_{{\bullet},+1} & \Delta(2) &= \Delta_{{\bullet}{\bullet},-1} \\
\nabla(-1) &= \Delta_{{\circ}{\circ},-1} & \nabla(0) &= \Delta_{{\circ},+1} & \nabla(1) &= \Delta_{{\circ},+1}^{\vee} & \nabla(2) &= \Delta^{\vee}_{{\circ}{\circ},-1}
\end{align*}
Note that $\Delta_{{\bullet},+1}=\Delta_{{\bullet}}$, so the sign could be omitted in this case. The following propositions give some basic information about these objects. Recall that $(-)^{\dag}$ denotes transpose (see \S \ref{ss:transp}).
\begin{proposition} \label{prop:dual}
For any $n \in \mathbf{Z}$, we have $M(n)^{\vee} \cong M(-n)$.
\end{proposition}
\begin{proof}
From \cite[Proposition~4.16]{line}, we have $L_{\pi(n)}^{\vee} \cong L_{\pi(-n)}$. Thus $M(n)^{\vee}$ is a simple $\ul{G}$-module with underlying $\ul{H}$-module $L_{\pi(-n)}$, and must therefore be isomorphic to $M(-n)$.
\end{proof}
\begin{proposition}
For any $n \in \mathbf{Z}$, we have non-split exact sequences
\begin{displaymath}
0 \to M(n-1) \to \Delta(n) \to M(n) \to 0
\end{displaymath}
and
\begin{displaymath}
0 \to M(n) \to \nabla(n) \to M(n-1) \to 0
\end{displaymath}
\end{proposition}
\begin{proof}
The first sequence follows from Theorem~\ref{thm:fine}(b) for $n>0$, and then for $n \le 0$ by duality. The second sequence is similar. These sequences are non-split since $\Delta_{\lambda,\zeta}$ is always indecomposable.
\end{proof}
\begin{proposition} \label{prop:sp-transp}
For any $n \in \mathbf{Z}$, we have $M(n)^{\dag} \cong M(-n)$ and $\Delta(n)^{\dag} = \nabla(1-n)$.
\end{proposition}
\begin{proof}
Since $M(n)^{\dag}$ has underlying $\ul{H}$-module $L_{\pi(n)}^{\dag} \cong L_{\pi(-n)}$, it is necessarily isomorphic to $M(-n)$. For $n>0$, we have $\Delta(n)^{\dag} \cong \Delta_{\pi(-n),\epsilon(n)}$ by Remark~\ref{rmk:std-transp}, which is $\nabla(1-n)$. Since transpose is a tensor functor, it commutes with duality, and so the result for $n \le 0$ follows as well.
\end{proof}
\begin{remark}
If we put $M^* = (M^{\vee})^{\dag}$ then $M(n)^*=M(n)$ and $\Delta(n)^*=\nabla(n)$. The presence of several closely related $\mathbf{Z}/2$ actions has a similar feel to shifted Weyl group actions, and makes it impossible to choose an indexing which looks natural for all three involutions. For example, some of the results above would look more natural if we indexed the $M$'s by half-integers instead of integers.
\end{remark}
\subsection{Projectives}
For $n \in \mathbf{Z}$, we let $P(n)$ be the projective cover of $M(n)$. We now determine the structure of these objects.
\begin{theorem} \label{thm:proj}
Let $n \in \mathbf{Z}$ be given. Then we have the following.
\begin{enumerate}
\item $P(n)$ is the injective envelope of $M(n)$.
\item We have $P(n)^{\vee} \cong P(-n)$ and $P(n)^{\dag} \cong P(-n)$.
\item We have a short exact sequence
\begin{displaymath}
0 \to \Delta(n+1) \to P(n) \to \Delta(n) \to 0.
\end{displaymath}
\item We have a short exact sequence
\begin{displaymath}
0 \to \nabla(n) \to P(n) \to \nabla(n+1) \to 0.
\end{displaymath}
\end{enumerate}
\end{theorem}
Parts~(c) and (d) of the theorem are depicted for $n=0,1$ in Figure~\ref{fig:proj}. We prove two special cases of the theorem before giving the general proof.
\begin{figure}
\begin{tikzpicture}
\node (a) at (0,3) {$\bbone$};
\node (b) at (-1.5,1.5) {$M_{{\circ},1}$};
\node (c) at (1.5,1.5) {$M_{{\bullet},1}$};
\node (d) at (0,0) {$\bbone$};
\draw (a) -- (b);
\draw (a) -- (c);
\draw (d) -- (b);
\draw (d) -- (c);
\draw[decorate,decoration={brace,amplitude=10pt,mirror},xshift=12pt,yshift=-12pt] (0,0) -- (1.5,1.5) node [black,midway,xshift=15pt,yshift=-14pt] {$\Delta_{{\bullet}}$};
\draw[decorate,decoration={brace,amplitude=10pt},xshift=-12pt,yshift=-12pt] (0,0) -- (-1.5,1.5) node [black,midway,xshift=-15pt,yshift=-14pt] {$\Delta_{{\circ}}$};
\draw[decorate,decoration={brace,amplitude=10pt},xshift=-12pt,yshift=12pt] (-1.5,1.5) -- (0,3) node [black,midway,xshift=-15pt,yshift=14pt] {$\Delta_{{\bullet}}^{\vee}$};
\draw[decorate,decoration={brace,amplitude=10pt,mirror},xshift=12pt,yshift=12pt] (1.5,1.5) -- (0,3) node [black,midway,xshift=15pt,yshift=14pt] {$\Delta_{{\circ}}^{\vee}$};
\end{tikzpicture}
\qquad\qquad\qquad
\begin{tikzpicture}
\node (a) at (0,3) {$M_{{\bullet},1}$};
\node (b) at (-1.5,1.5) {$\bbone$};
\node (c) at (1.5,1.5) {$M_{{\bullet} {\bullet},-1}$};
\node (d) at (0,0) {$M_{{\bullet},1}$};
\draw (a) -- (b);
\draw (a) -- (c);
\draw (d) -- (b);
\draw (d) -- (c);
\draw[decorate,decoration={brace,amplitude=10pt,mirror},xshift=12pt,yshift=-12pt] (0,0) -- (1.5,1.5) node [black,midway,xshift=22pt,yshift=-15pt] {$\Delta_{{\bullet}{\bullet},-1}$};
\draw[decorate,decoration={brace,amplitude=10pt},xshift=-12pt,yshift=-12pt] (0,0) -- (-1.5,1.5) node [black,midway,xshift=-15pt,yshift=-15pt] {$\Delta_{{\circ}}^{\vee}$};
\draw[decorate,decoration={brace,amplitude=10pt},xshift=-12pt,yshift=12pt] (-1.5,1.5) -- (0,3) node [black,midway,xshift=-15pt,yshift=15pt] {$\Delta_{{\bullet}}$};
\draw[decorate,decoration={brace,amplitude=10pt,mirror},xshift=12pt,yshift=12pt] (1.5,1.5) -- (0,3) node [black,midway,xshift=22pt,yshift=15pt] {$\Delta_{{\circ}{\circ},-1}^{\vee}$};
\end{tikzpicture}
\caption{Diagrams of $P(0)$ and $P(1)$. Submodules are at the bottom and quotients at the top. The right diagram shows that $P(1)$ contains $\Delta_{{\bullet}{\bullet},-1}$ as a sub, with quotient $\Delta_{{\bullet}}$. In general, $P(n)$, for $n \ge 1$, contains $\Delta_{\pi(n),\epsilon(n)}$ as a sub with quotient $\Delta_{\pi(n-1),\epsilon(n-1)}$, and a similar pattern holds for $n \le -1$. The left picture above shows that $P(0)$ behaves differently: $\Delta_{{\bullet}}$ is a sub, with quotient $\Delta_{{\bullet}}^{\vee}$. This picture motivates our definition of $\Delta(n)$, which handles these cases uniformly.}
\label{fig:proj}
\end{figure}
\begin{lemma} \label{lem:proj-1}
Theorem~\ref{thm:proj}(c) holds for $n>0$.
\end{lemma}
\begin{proof}
From Theorem~\ref{thm:delta-seq}, we have a short exact sequence
\begin{displaymath}
0 \to \Delta_{\pi(n+1)} \oplus \Delta_{\pi(n)^-} \to I_{\pi(n)} \to \Delta_{\pi(n)} \to 0,
\end{displaymath}
where $\pi(n)^-$ is the word $\pi(n)$ with ${\circ}$ appended to the end. Now, $\Delta_{\pi(n)^-}$ is a generic simple, and thus splits off $I_{\lambda}$ as a summand. The module $\Delta_{\pi(n)}$ is the sum of the modules $\Delta_{\pi(n),\zeta}$ as $\zeta$ varies over all $n$th roots of unity. Except for $\zeta=\epsilon(n)$, these are generic simples, and thus split off. Similarly for the $\Delta_{\pi(n+1)}$ term. We thus find that $I_{\pi(n)}$ has a summand $I$ that fits into a short exact sequence
\begin{displaymath}
0 \to \Delta(n+1) \to I \to \Delta(n) \to 0.
\end{displaymath}
Since $I$ surjects onto $M(n)$, we see that $P=P(n)$ is a summand of $I$.
Let $e$ be the idempotent of $I$ such that $P=e(I)$. The $\ul{H}$-simple $L_{\pi(n+1)}$ occurs in $\Delta(n+1)$ with multiplicity one, and does not occur in $\Delta(n)$. It follows that $e$ must map this $L_{\pi(n+1)}$ into $\Delta(n+1)$. Since this $L_{\pi(n+1)}$ generates $\Delta(n+1)$ by Theorem~\ref{thm:std}, we see that $e$ maps $\Delta(n+1)$ into itself. Since $\Delta(n+1)$ and $\Delta(n)$ are indecomposable, it follows that $P$ must be either $\Delta(n)$ or all of $I$.
Suppose now that $P=\Delta(n)$. The kernel of the map $P \to M(n)$ is then $M(n-1)$, and so we have a presentation
\begin{displaymath}
P(n-1) \to P(n) \to M(n) \to 0.
\end{displaymath}
We thus see that $\Ext^1(M(n), M(n+1))$ is a subquotient of $\Hom(P(n-1), M(n+1))=0$. However, we know that this $\Ext^1$ group does not vanish, due to the extension afforded by $\nabla(n+1)$, and so we have a contradiction. Thus $P=I$, which completes the proof.
\end{proof}
\begin{lemma} \label{lem:proj-2}
Theorem~\ref{thm:proj}(c) holds for $n=0$.
\end{lemma}
\begin{proof}
Since $\mathcal{C}(\mathbf{S}) \cong I_{\varnothing}$, Proposition~\ref{prop:ind} gives an $\ul{H}$-decomposition
\begin{displaymath}
\mathcal{C}(\mathbf{S}) = L^{\oplus 2}_{\varnothing} \oplus L_{{\bullet}} \oplus L_{{\circ}}.
\end{displaymath}
We have seen in Example~\ref{ex:std} that there is a unique trivial $\ul{G}$-submodule of $\mathcal{C}(\mathbf{S})$. This is in fact the socle of $\mathcal{C}(\mathbf{S})$: indeed, any other simple $\ul{G}$-submodule would be generated by $L_{{\bullet}}$ or $L_{{\circ}}$, but these generate $\Delta_{{\bullet}}$ and $\Delta_{{\circ}}$ (Theorem~\ref{thm:std}), which are not simple. It follows that $\mathcal{C}(\mathbf{S})$ is indecomposable. Since it is projective (being an induction) and surjects onto the trivial representation, it is thus $P(0)$.
Now, $\mathcal{C}(\mathbf{S})$ contains $\Delta_{{\bullet}}=\Delta(1)$ as a submodule. Since $\mathcal{C}(\mathbf{S})$ is self-dual, it thus admits $\Delta(1)^{\vee}=\Delta(0)$ as a quotient. We claim that the sequence
\begin{displaymath}
0 \to \Delta(1) \to P(0) \to \Delta(0) \to 0
\end{displaymath}
is exact. Indeed, $L_{{\bullet}}$ does not occur in $\Delta(0)$, and thus belongs to the kernel of $P(0) \to \Delta(0)$. Since $L_{{\bullet}}$ generates $\Delta_{{\bullet}}=\Delta(1)$, it follows that the composition $\Delta(1) \to P(0) \to \Delta(0)$ is zero. The sequence is therefore exact for length reasons.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:proj}]
Suppose $n \ge 0$. Then (c) holds by Lemmas~\ref{lem:proj-1} and~\ref{lem:proj-2}. Since $P(n)$ is a summand of $I_{\pi(n)}$, which is projective and injective, it follows that $P(n)$ is injective. By (c) we see that $P(n)$ contains $M(n)$ as a submodule. Since $P(n)$ is indecomposable injective, it must therefore be the injective envelope of $M(n)$. It thus follows that $P(n)^{\vee}$ is the projective cover of $M(n)^{\vee} \cong M(-n)$, and so $P(n)^{\vee} \cong P(-n)$. Since $(-)^{\dag}$ is a covariant equivalence of abelian categories, it follows that $P(n)^{\dag}$ is the projective cover of $M(n)^{\dag} \cong M(-n)$, and so $P(n)^{\dag} \cong P(-n)$. This proves (a,b,c) for $n \ge 0$, and the remaining cases follow from duality. (d) now follows from (c) by taking transpose.
\end{proof}
\subsection{Maps of projectives}\label{s:maps-of-projectives}
The spaces
\mathbf{e}gin{displaymath}
\Hom(P(n), P(n-1)), \qquad \Hom(P(n), P(n+1))
\end{displaymath}
are each one dimensional, since $M(n)$ has multiplicity one in $P(n-1)$ and $P(n+1)$. Let $\alpha_n$ and $\beta_n$ be bases of these spaces.
\begin{proposition} \label{prop:P-homs}
We have the following:
\begin{enumerate}
\item $\alpha_{n-1} \alpha_n=0$.
\item $\beta_{n+1} \beta_n=0$.
\item $\beta_{n-1} \alpha_n = c(n) \cdot \alpha_{n+1} \beta_n$ for some non-zero scalar $c(n)$.
\end{enumerate}
\end{proposition}
\begin{proof}
Since $M(n)$ is not a constituent of $P(n-2)$, we have $\Hom(P(n), P(n-2))=0$, and so (a) follows. A similar argument gives (b). Now, we have a diagram
\begin{displaymath}
\xymatrix@R=1em{
&&& \nabla(n) \ar@{^(->}[rd] \\
P(n) \ar[rr]^{\alpha_n} \ar@{->>}[rd] && P(n-1) \ar@{->>}[ru] \ar[rr]^{\beta_{n-1}} && P(n) \\
& \Delta(n) \ar@{^(->}[ru] }
\end{displaymath}
that commutes up to non-zero scalars. The kernel of $P(n-1) \to \nabla(n)$ is $\nabla(n-1)$, which does not contain $M(n)$ as a constituent. It follows that the $M(n)$ in $\Delta(n)$ has non-zero image in $\nabla(n)$. Thus the composition $\beta_{n-1} \alpha_n$ is non-zero. However, it does contain $M(n)$ in its kernel. A similar analysis shows the same for $\alpha_{n+1} \beta_n$. The space of maps $P(n) \to P(n)$ killing $M(n)$ is one-dimensional, and so the result follows.
\end{proof}
\begin{proposition} \label{prop:alpha-beta}
It is possible to choose $\alpha_n$ and $\beta_n$ such that $c(n)=1$ for all $n$.
\end{proposition}
\begin{proof}
Choose the $\alpha_n$'s arbitrarily, let $\beta'_n$ be an arbitrary non-zero map $P(n) \to P(n+1)$, and let $c'(n)$ be the scalar such that $\beta'_{n-1}\alpha_n = c'(n) \alpha_{n+1} \beta'_n$. Put $\beta_0=\beta'_0$. For $n<0$, define $\beta_n=(c'(n+1) \cdots c'(0))^{-1} \beta'_n$. For $n>0$, define $\beta_n=c'(1) \cdots c'(n) \beta'_n$. This leads to $c(n)=1$ for all $n$.
\end{proof}
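To spell out the normalization, write $\beta_n = t_n \beta'_n$; the recipe above amounts to setting $t_0=1$ and $t_n = c'(n)\, t_{n-1}$ for all $n$. Then
\begin{displaymath}
\beta_{n-1} \alpha_n = t_{n-1} \beta'_{n-1} \alpha_n = t_{n-1} c'(n)\, \alpha_{n+1} \beta'_n = \frac{t_{n-1} c'(n)}{t_n}\, \alpha_{n+1} \beta_n = \alpha_{n+1} \beta_n,
\end{displaymath}
so that indeed $c(n)=1$ for all $n$.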
\subsection{Description of the category}
Let $R=k[x,y]/(x^2,y^2)$, and $\mathbf{Z}$-grade $R$ by letting $x$ and $y$ have degrees $-1$ and $+1$. As a $k$-vector space, $R$ is four dimensional, with basis $1, x, y, xy$. Let $\Mod_R$ denote the category of $\mathbf{Z}$-graded $R$-modules. For $n \in \mathbf{Z}$, we let $R(n)$ be the free $R$-module of rank one with generator of degree $n$. The following theorem, which is the main result of \S \ref{s:special}, provides a complete description of $\mathcal{C}_{\rm sp}$.
\begin{theorem} \label{thm:spequiv}
We have an equivalence of $k$-linear categories
\begin{displaymath}
\Phi \colon \mathcal{C}_{\rm sp} \to \Mod_R, \qquad P(n) \mapsto R(n).
\end{displaymath}
\end{theorem}
\begin{proof}
Let $\mathcal{P}$ be the full subcategory of $\mathcal{C}_{\rm sp}$ spanned by the $P(n)$'s, and similarly let $\mathcal{Q}$ be the full subcategory of $\Mod_R$ spanned by the $R(n)$'s. Choose $\alpha_n$'s and $\beta_n$'s as in Proposition~\ref{prop:alpha-beta}. Define $\Phi_0 \colon \mathcal{P} \to \mathcal{Q}$ by
\begin{displaymath}
\Phi_0(P(n))=R(n), \qquad
\Phi_0(\alpha_n) = x, \qquad
\Phi_0(\beta_n) = y,
\end{displaymath}
and extend linearly. In the second equation above, $x$ denotes the multiplication-by-$x$ map $R(n) \to R(n+1)$. It follows from Proposition~\ref{prop:P-homs} that $\Phi_0$ is a well-defined equivalence of categories. Since $\mathcal{P}$ and $\mathcal{Q}$ contain projective generators of their respective categories, $\Phi_0$ extends to an equivalence $\Phi$ as in the statement of the theorem.
\end{proof}
\subsection{Indecomposables} \label{ss:indecomp}
A graded $R$-module is exactly a representation of the quiver
\begin{displaymath}
\xymatrix@C=3em{
\cdots \ar@<2pt>[r]^{b_{-2}} & v_{-1} \ar@<2pt>[r]^{b_{-1}} \ar@<2pt>[l]^{a_{-1}} & v_0 \ar@<2pt>[r]^{b_0} \ar@<2pt>[l]^{a_0} & v_1 \ar@<2pt>[r]^{b_1} \ar@<2pt>[l]^{a_1} & \cdots \ar@<2pt>[l]^{a_2} }
\end{displaymath}
such that the relations
\begin{displaymath}
a_{n-1} a_n=0, \qquad b_{n+1} b_n=0, \qquad a_{n+1} b_n = b_{n-1} a_n
\end{displaymath}
hold. For $n \le m$, define $I^+(n,m)$ to be the following representation. The vector space at $v_i$ is $k$ for $n \le i \le m$, and~0 elsewhere. The map $a_i$ is the identity if $n \le i \le m-1$ and $i$ has the same parity as $n$, and is~0 otherwise. The map $b_i$ is the identity if $n+1 \le i \le m$ and $i$ has the same parity as $n$, and is~0 otherwise. We define $I^-(n,m)$ similarly, but now the non-zero morphisms are those where $i$ and $n$ have opposite parity.
\begin{example}
The following illustrates $I^+(0, 4)$
\begin{center}
\begin{tikzpicture}
\node (a) at (0,2.5) {$k$};
\node (b) at (-1.5,1.5) {$k$};
\node (c) at (1.5,1.5) {$k$};
\node (d) at (3,2.5) {$k$};
\node (e) at (-3,2.5) {$k$};
\draw [->] (a) -- node[above] {$\scriptstyle a_2$} (b);
\draw [->] (a) -- node[above] {$\scriptstyle b_2$}(c);
\draw [->] (e) -- node[above] {$ \scriptstyle b_0$}(b);
\draw [->] (d) -- node[above] {$ \scriptstyle a_4$}(c);
\end{tikzpicture}
\end{center}
and the following illustrates $I^-(0, 5)$
\begin{center}
\begin{tikzpicture}
\node (a) at (0,2.5) {$k$};
\node (b) at (-1.5,1.5) {$k$};
\node (c) at (1.5,1.5) {$k$};
\node (d) at (3,2.5) {$k$};
\node (e) at (-3,2.5) {$k$};
\node (f) at (-4.5,1.5) {$k$};
\draw [->] (a) -- node[above] {$\scriptstyle a_3$} (b);
\draw [->] (a) -- node[above] {$\scriptstyle b_3$}(c);
\draw [->] (e) -- node[above] {$ \scriptstyle b_1$}(b);
\draw [->] (d) -- node[above] {$ \scriptstyle a_5$}(c);
\draw [->] (e) -- node[above] {$ \scriptstyle a_1$}(f);
\end{tikzpicture}
\end{center}
In the above diagrams, the drawn arrows are the identity, and all others vanish.
\end{example}
The modules $I^{\pm}(n,m)$ are known as \defn{zigzag modules}, and they are easily seen to be indecomposable. We note that $I^+(n,n)=I^-(n,n)$, but otherwise these modules are different. These modules appear quite often: e.g., in the theory of supergroups \cite[\S 2.2.3]{GQS} and \cite[\S 5.1]{Heidersdorf}, quivers \cite{Gabriel}, the modular representation theory of the Klein 4-group \cite[\S 2]{CM} and \cite{Johnson}, and elsewhere \cite[\S 2.1]{CdS}.
Let $J^{\pm}(n,m)$ be the object of $\mathcal{C}_{\rm sp}$ corresponding to $I^{\pm}(n,m)$ under the equivalence in Theorem~\mathrm{e}f{thm:spequiv}. This is an indecomposable object of length $m-n+1$, with simple constituents $M(i)$ for $n \le i \le m$, each with multiplicity one.
\begin{proposition} \label{prop:indecomp}
We have the following:
\begin{enumerate}
\item Every non-projective indecomposable in $\mathcal{C}_{\rm sp}$ is isomorphic to some $J^{\pm}(n,m)$.
\item The categorical dimension of $J^{\pm}(n,m)$ is $(-1)^n$ if $n \equiv m \!\pmod{2}$, and~0 otherwise.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) It is shown in \cite[\S 5.2,5.3]{Germoni} that every finite dimensional non-projective indecomposable graded $R$-module is isomorphic to some $I^{\pm}(n,m)$. This implies the present statement by Theorem~\ref{thm:spequiv}. We note that \cite{Germoni} works with the ring $R'$ where $x$ and $y$ anti-commute, but graded $R$- and $R'$-modules are equivalent (change $x$ to $-x$ on odd degree elements). We also note that the indecomposable projective $R$-modules are just the rank one free modules $R(n)$.
(b) This follows from the description of the simple constituents of $J^{\pm}(n,m)$, and the fact that $M(i)$ has categorical dimension $(-1)^i$.
\end{proof}
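Concretely, the dimension count in (b) is the alternating sum
\begin{displaymath}
\sum_{i=n}^{m} (-1)^i =
\begin{cases}
(-1)^n & \text{if } n \equiv m \!\pmod{2}, \\
0 & \text{otherwise},
\end{cases}
\end{displaymath}
since each $M(i)$ with $n \le i \le m$ occurs exactly once in $J^{\pm}(n,m)$, and categorical dimension is additive in short exact sequences.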
\subsection{Decomposition of projectives} \label{s:decomposition}
From Theorems~\ref{thm:fine} and~\ref{thm:spequiv}, we know that the indecomposable projectives in $\uRep(G)$ are the generic simples and special projectives $P(n)$ for $n \in \mathbf{Z}$. Recall that the Schwartz space $\mathcal{C}(\mathbf{S}^{\{n\}})$ (for $n \ge 1$) and the induced modules $I_{\lambda}$ are also projective. We now determine their indecomposable decompositions.
\begin{proposition} \label{prop:I-decomp}
Let $\mu$ be a weight.
\begin{enumerate}
\item Each indecomposable projective occurs as a summand of $I_{\mu}$ with multiplicity at most one.
\item The generic simple $M_{\lambda,\zeta}$ is a summand of $I_{\mu}$ if and only if $\mu$ is a cyclic rotation or cyclic contraction of $\lambda$.
\item The special projective $P(n)$ (for $n \in \mathbf{Z}$) is a summand of $I_{\mu}$ if and only if $\mu=\pi(n)$.
\end{enumerate}
\end{proposition}
\begin{proof}
The multiplicity of $M_{\lambda,\zeta}$ in $I_{\mu}$ is the dimension of the space
\begin{displaymath}
\Hom_{\ul{G}}(I_{\mu}, M_{\lambda,\zeta}) = \Hom_{\ul{H}}(L_{\mu}, M_{\lambda,\zeta}),
\end{displaymath}
where the isomorphism is Frobenius reciprocity. By Theorem~\ref{thm:std}, this is~1 if $\mu$ is a cyclic rotation or contraction of $\lambda$, and~0 otherwise. Similarly, the multiplicity of $P(n)$ in $I_{\mu}$ is the dimension of the space
\begin{displaymath}
\Hom_{\ul{G}}(I_{\mu}, M(n)) = \Hom_{\ul{H}}(L_{\mu}, L_{\pi(n)}),
\end{displaymath}
which is~1 if $\mu=\pi(n)$, and~0 otherwise. The result follows.
\end{proof}
Recall that $\ell(\lambda)$ is the length of $\lambda$, that $g(\lambda)$ is the order of $\Aut(\lambda)$ and that $N(\lambda) = \ell(\lambda)/g(\lambda)$. In particular, $\lambda$ consists of $g(\lambda)$ repetitions of a Lyndon word of length $N(\lambda)$.
\begin{proposition}
Let $n \ge 1$.
\begin{enumerate}
\item The multiplicity of the generic projective $M_{\lambda,\zeta}$ in $\mathcal{C}(\mathbf{S}^{\{n\}})$ is $g(\lambda) \binom{n}{\ell(\lambda)}$.
\item The multiplicity of the special projective $P(m)$ in $\mathcal{C}(\mathbf{S}^{\{n\}})$ is $\binom{n-1}{m}$.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) The multiplicity of $M_{\lambda,\zeta}$ is the dimension of the space
\begin{displaymath}
\Hom_{\ul{G}}(\mathcal{C}(\mathbf{S}^{\{n\}}), M_{\lambda,\zeta}) = \Hom_{\ul{H}}(\mathcal{C}(\mathbf{R}^{(n-1)}), \ol{A}(\lambda)).
\end{displaymath}
Here we have used that $\mathbf{S}^{\{n\}}$ is the induction from $H$ to $G$ of $\mathbf{R}^{(n-1)}$ (see Proposition~\ref{prop:schwartz-ind}), Frobenius reciprocity, and Theorem~\ref{thm:std}. The number of terms in $\ol{A}(\lambda)$ coming from cyclic rotations is $g(\lambda)$ and each has multiplicity ${n-1 \choose \ell(\lambda)}$, while the number of terms coming from cyclic contractions is $g(\lambda)$ and each has multiplicity ${n-1 \choose \ell(\lambda)-1}$. Thus,
\begin{displaymath}
\dim \Hom_{\ul{G}}(\mathcal{C}(\mathbf{S}^{\{n\}}), M_{\lambda,\zeta}) = g(\lambda) {n-1 \choose \ell(\lambda)} + g(\lambda) {n-1 \choose \ell(\lambda)-1} = g(\lambda) {n \choose \ell(\lambda)}.
\end{displaymath}
(b) The multiplicity of $P(m)$ is the dimension of the space
\begin{displaymath}
\Hom_{\ul{G}}(\mathcal{C}(\mathbf{S}^{\{n\}}), M(m)) = \Hom_{\ul{H}}(\mathcal{C}(\mathbf{R}^{(n-1)}), L_{\pi(m)}),
\end{displaymath}
which is $\binom{n-1}{m}$.
\end{proof}
\section{Semisimplification} \label{s:ss}
\subsection{Background}
We now recall some generalities on quotients of tensor categories; we refer to \cite[\S 2]{EtingofOstrik} for additional details. Let $\mathcal{C}$ be a $k$-linear category equipped with a symmetric monoidal structure that is $k$-bilinear. A \defn{tensor ideal} is a rule $I$ that assigns to every pair of objects $(X,Y)$ a $k$-subspace $I(X,Y)$ of $\Hom_{\mathcal{C}}(X,Y)$ such that the following two conditions hold:
\begin{enumerate}
\item Let $\beta \colon X \to Y$ be a morphism in $I(X,Y)$, and let $\alpha \colon W \to X$ and $\gamma \colon Y \to Z$ be arbitrary morphisms. Then $\beta \circ \alpha$ belongs to $I(W,Y)$ and $\gamma \circ \beta$ belongs to $I(X,Z)$.
\item Let $\alpha \colon X \to Y$ be a morphism in $I(X,Y)$ and let $\alpha' \colon X' \to Y'$ be an arbitrary morphism. Then $\alpha \otimes \alpha'$ belongs to $I(X \otimes X', Y \otimes Y')$.
\end{enumerate}
Suppose that $I$ is a tensor ideal. Define a new category $\mathcal{C}'$ with the same objects as $\mathcal{C}$, and with
\begin{displaymath}
\Hom_{\mathcal{C}'}(X,Y) = \Hom_{\mathcal{C}}(X,Y)/I(X,Y).
\end{displaymath}
One readily verifies that $\mathcal{C}'$ is naturally a $k$-linear symmetric monoidal category; it is called the \defn{quotient} of $\mathcal{C}$ by $I$.
Suppose now that $\mathcal{C}$ is pre-Tannakian. A morphism $f \colon X \to Y$ is called \defn{negligible} if $\utr(f \circ g)=0$ for any morphism $g \colon Y \to X$; here $\utr$ denotes the categorical trace. The negligible morphisms form a tensor ideal $\mathcal{N}(\mathcal{C})$ of $\mathcal{C}$. The quotient category is called the \defn{semisimplification} of $\mathcal{C}$, and is denoted $\mathcal{C}^{\rm ss}$. It is a semi-simple pre-Tannakian category. The simple objects of $\mathcal{C}^{\rm ss}$ are the indecomposable objects of $\mathcal{C}$ of non-zero categorical dimension. (Indecomposables of $\mathcal{C}$ of dimension~0 become~0 in the semisimplification.)
\subsection{The main theorem}
Let $\mathcal{C}=\uRep^{\mathrm{f}}(G)$. The goal of \S \ref{s:ss} is to determine $\mathcal{C}^{\rm ss}$. To state our result, we introduce another pre-Tannakian category $\mathcal{D}$. The objects of $\mathcal{D}$ are bi-graded vector spaces. For $n,m \in \mathbf{Z}$, we let $k(n,m)$ be the simple object of $\mathcal{D}$ concentrated in degree $(n,m)$. The tensor product on $\mathcal{D}$ is the usual one, i.e.,
\begin{displaymath}
k(n,m) \otimes k(r,s) = k(n+r,m+s).
\end{displaymath}
Finally, the symmetry isomorphism is defined by
\begin{displaymath}
v \otimes w \mapsto (-1)^{\vert v \vert \cdot \vert w \vert} w \otimes v,
\end{displaymath}
where $v$ and $w$ are homogeneous and $\vert v \vert$ is the total degree of $v$. (Elements of $k(n,m)$ have total degree $n+m$.) The following is our main result:
\begin{theorem} \label{thm:semisimplification}
We have an equivalence of symmetric tensor categories $\mathcal{C}^{\rm ss} \cong \mathcal{D}$. Under this equivalence, $k(n,m)$ corresponds to $J^+(n-m,n+m)$ if $m \ge 0$, and to $J^-(n-m, n+m)$ if $m \le 0$.
\end{theorem}
The proof will occupy the remainder of \S \ref{s:ss}. We make one initial remark here. Since generic simples have categorical dimension~0, they become~0 in $\mathcal{C}^{\rm ss}$. We have seen (Proposition~\ref{prop:indecomp}) that the indecomposables of $\mathcal{C}_{\rm sp}$ are the modules $J^{\pm}(n,m)$, and that this module has categorical dimension~0 if and only if $n$ and $m$ have opposite parity. Thus the simples of $\mathcal{C}^{\rm ss}$ are exactly the objects $J^{\pm}(n,m)$ with $n \equiv m \!\pmod{2}$. In particular, the additive functor $\mathcal{D} \to \mathcal{C}^{\rm ss}$ defined by the correspondence in the theorem is an equivalence of abelian categories. The main content of the theorem concerns the tensor structures.
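As a consistency check at the level of dimensions: one checks that $k(n,m)$ has categorical dimension $(-1)^{n+m}$ in $\mathcal{D}$ (the sign coming from the symmetry isomorphism), while
\begin{displaymath}
\udim J^{\pm}(n-m,\, n+m) = (-1)^{n-m} = (-1)^{n+m}
\end{displaymath}
by Proposition~\ref{prop:indecomp}(b), so the two sides of the correspondence agree.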
\begin{remark}
The category $\mathcal{D}$ is super-Tannakian. In the notation of \cite[\S 0.3]{Deligne2}, $\mathcal{D}=\Rep(G, \epsilon)$ where $G=\mathbf{G}_m \times \mathbf{G}_m$ and $\epsilon=(-1,-1)$.
\end{remark}
\begin{remark}
The semi-simplification of $\mathcal{C}$ is very similar to that of the supergroup $\mathbf{GL}(1|1)$; see \cite[Theorem~5.12]{Heidersdorf}.
\end{remark}
\subsection{The key computation} \label{ss:sskey}
Much of the proof of Theorem~\ref{thm:semisimplification} follows from general considerations and results we have already established. However, we will require the following genuinely new piece of information:
\begin{proposition} \label{prop:M-tensor}
For $n,m \in \mathbf{Z}$, we have $M(n) \otimes M(m) \cong M(n+m)$ in $\mathcal{C}^{\rm ss}$.
\end{proposition}
We require a few lemmas.
\begin{lemma} \label{lem:M-tensor-1}
Let $b \in \mathbf{S}^{\{n+1\}}$, and let $\tau$ be the generator of $G[b]/G(b)$. Then $M(1)^{G(b)}$ is $n$-dimensional, and the eigenvalues of $\tau$ on this space are the $(n+1)$st roots of unity, except for~1.
\end{lemma}
\begin{proof}
We have $\Delta(1) = \bbone \oplus L_{{\bullet}}$ as an $\ul{H}$-module, and so $\Delta(1)^{G(b)}$ is $(n+1)$-dimensional by \cite[Corollary~4.11]{line}. For $a \in \mathbf{S}$, let $\mathrm{ev}_a \colon \mathcal{C}(\mathbf{S}) \to k$ be the evaluation at $a$ map, as in Lemma~\ref{lem:std-7}. One easily sees that these functionals are linearly independent (see \cite[Proposition~4.6]{line}). Thus the functionals $\mathrm{ev}_{b_i}$ for $1 \le i \le n+1$ form a basis for the $\ul{G}(b)$-coinvariants of $\Delta(1)$, which is isomorphic to $\Delta(1)^{G(b)}$. The action of $\tau$ permutes the basis vectors of this space, and so $\Delta(1)^{G(b)}$ is the regular representation of $G[b]/G(b) \cong \mathbf{Z}/(n+1)$. Since $M(1)=\Delta(1)/\bbone$, we have $M(1)^{G(b)}=\Delta(1)^{G(b)}/\bbone^{G(b)}$. Since $\bbone^{G(b)}$ is 1-dimensional with trivial $\tau$ action, the result follows.
\end{proof}
\begin{lemma} \label{lem:M-tensor-2}
For $n>0$ we have an isomorphism
\begin{displaymath}
M(1) \otimes M(n) = M(n+1) \oplus \bigoplus_{\zeta \in \mu_{n+1} \setminus \{ \epsilon(n+1) \}} M_{\pi(n+1), \zeta}
\end{displaymath}
in $\mathcal{C}$, where $\mu_{n+1}$ denotes the set of $(n+1)$st roots of unity. In particular, we have an isomorphism $M(1) \otimes M(n) \cong M(n+1)$ in $\mathcal{C}^{\rm ss}$.
\end{lemma}
\begin{proof}
We have $M(1)=L_{{\bullet}}$ and $M(n)=L_{\pi(n)}$ as $\ul{H}$-representations, and so
\begin{displaymath}
M(1) \otimes M(n) = L_{\pi(n+1)}^{\oplus (n+1)} \oplus L_{\pi(n)}^{\oplus n}
\end{displaymath}
by \cite[Theorem~7.2]{line} (or, better, \cite[Proposition~7.5]{line}). In particular, for $b \in \mathbf{S}^{\{n+1\}}$, we have
\begin{displaymath}
\dim M(1)^{G(b)} = n, \qquad \dim M(n)^{G(b)} = 1, \qquad \dim (M(1) \otimes M(n))^{G(b)} = n.
\end{displaymath}
It follows that the natural map
\begin{displaymath}
M(1)^{G(b)} \otimes M(n)^{G(b)} \to (M(1) \otimes M(n))^{G(b)}
\end{displaymath}
is an isomorphism. Since $\tau$ acts by $-\epsilon(n)$ on $M(n)^{G(b)}$ by Theorem~\mathrm{e}f{thm:fine}, Lemma~\mathrm{e}f{lem:M-tensor-1} shows that the $\tau$-eigenvalues on the right side above are exactly the $(n+1)$st roots of unity except for $-\epsilon(n)=\epsilon(n+1)$. In particular, this shows that $M(n)$ is not a constituent of $M(1) \otimes M(n)$. Thus each $L_{\pi(n)}$ in the tensor product belongs to a generic simple $M_{\pi(n+1),\zeta}$. Looking at the $\tau$ eigenvalues on this space (from Theorem~\mathrm{e}f{thm:std}), we see that each of these simples occurs exactly once. (Note that the generic condition excludes $\zeta=\epsilon(n+1)$.) Since generic simples are projective, these all split off from the tensor product as summands, that is, we have
\begin{displaymath}
M(1) \otimes M(n) = X \oplus \bigoplus_{\zeta \in \mu_{n+1} \setminus \{\epsilon(n+1)\}} M_{\pi(n+1), \zeta}
\end{displaymath}
as $\ul{G}$-modules. Since the $\ul{H}$-module underlying $X$ is $L_{\pi(n+1)}$, we must have $X=M(n+1)$. Since the generic simples become~0 in $\mathcal{C}^{\rm ss}$, the result follows.
\end{proof}
\begin{lemma} \label{lem:M-tensor-3}
We have an isomorphism
\begin{displaymath}
M(1) \otimes M(-1) \cong \bbone \oplus M_{{\bullet}{\circ}}
\end{displaymath}
in $\mathcal{C}$, and thus an isomorphism $M(1) \otimes M(-1) \cong M(0)$ in $\mathcal{C}^{\rm ss}$.
\end{lemma}
\begin{proof}
We have $M(1)=L_{{\bullet}}$ and $M(-1)=L_{{\circ}}$ as $\ul{H}$-modules. Thus the $\ul{H}$-module underlying $M(1) \otimes M(-1)$ is
\begin{displaymath}
L_{{\bullet}{\circ}} \oplus L_{{\circ}{\bullet}} \oplus L_{{\bullet}} \oplus L_{{\circ}} \oplus \bbone
\end{displaymath}
by \cite[Theorem~7.2]{line} (or, better, \cite[Lemma~7.7]{line}). From the classification of simples in $\uRep(G)$, we see that $M_{{\bullet}{\circ}}$ must be a constituent of the tensor product. Since this is a generic simple, it splits off, and so $M(1) \otimes M(-1) \cong X \oplus M_{{\bullet}{\circ}}$ for some $\ul{G}$-module $X$. The $\ul{H}$-module underlying $X$ is trivial, and so $X$ is trivial as a $\ul{G}$-module. Since $M_{{\bullet}{\circ}}$ becomes~0 in $\mathcal{C}^{\rm ss}$, the result follows.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:M-tensor}]
We must show
\begin{displaymath}
M(n) \otimes M(m) \cong M(n+m)
\end{displaymath}
holds in $\mathcal{C}^{\rm ss}$ for all $n,m \in \mathbf{Z}$. If $n$ and $m$ are both non-negative, this follows from Lemma~\ref{lem:M-tensor-2} by induction. Dualizing gives the case when $n$ and $m$ are both non-positive. Now suppose $n>0$ and $m<0$. We have
\begin{displaymath}
M(n) \otimes M(m) \cong (M(n-1) \otimes M(1)) \otimes (M(-1) \otimes M(m+1)) \cong M(n-1) \otimes M(m+1),
\end{displaymath}
where we first used the cases already established, and then used Lemma~\ref{lem:M-tensor-3}. Continuing by induction, we obtain the desired result.
\end{proof}
\subsection{Heller shifts} \label{ss:heller}
Let $X$ and $Y$ be objects of $\mathcal{C}$. Define $X \sim Y$ if there exist projective objects $P$ and $Q$ such that $X \oplus P \cong Y \oplus Q$. This is an equivalence relation on objects of $\mathcal{C}$. Now, choose a surjection $\varphi \colon P \to X$, with $P$ projective, and an injection $\psi \colon X \to I$, with $I$ injective. (Note: injectives and projectives are the same in $\mathcal{C}$.) We define the \defn{Heller shifts} of $X$ by $\Omega(X)=\ker(\varphi)$ and $\Omega^{-1}(X)=\coker(\psi)$. These are well-defined up to equivalence by Schanuel's lemma; see \cite[Lemma~1.5.3]{Benson} and the following discussion. Note that all projectives in $\mathcal{C}$ have categorical dimension~0, so $X \sim Y$ implies $X \cong Y$ in $\mathcal{C}^{\rm ss}$.
\begin{proposition} \label{prop:heller}
Let $X$ and $Y$ be objects of $\mathcal{C}$, and let $n,m \in \mathbf{Z}$. Then:
\begin{enumerate}
\item If $X \sim Y$ then $\Omega(X) \sim \Omega(Y)$.
\item $\Omega^{-1}(\Omega(X)) \sim X$ and $\Omega(\Omega^{-1}(X)) \sim X$.
\item $X \otimes \Omega(Y) \sim \Omega(X \otimes Y)$, and similarly for $\Omega^{-1}$.
\item $\Omega^n(X) \otimes \Omega^m(Y) \sim \Omega^{n+m}(X \otimes Y)$.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) is clear.
(b) Consider a short exact sequence
\begin{displaymath}
0 \to \Omega(X) \to P \to X \to 0
\end{displaymath}
with $P$ projective. Since $P$ is also injective this shows that $X \sim \Omega^{-1}(\Omega(X))$. The other direction is similar.
(c) Consider a short exact sequence as above. Tensoring with $Y$, we find
\begin{displaymath}
0 \to \Omega(X) \otimes Y \to P \otimes Y \to X \otimes Y \to 0.
\end{displaymath}
Since $P \otimes Y$ is projective \cite[Proposition~4.2.12]{EGNO}, it follows that the kernel above is $\Omega(X \otimes Y)$.
(d) We have
\begin{displaymath}
\Omega^n(X) \otimes \Omega^m(Y) \sim \Omega^m(\Omega^n(X) \otimes Y) \sim \Omega^m(\Omega^n(X \otimes Y)) \sim \Omega^{n+m}(X \otimes Y)
\end{displaymath}
where in the first two steps we used (c), and in the final step we used (b) (if $n$ and $m$ have different signs).
\end{proof}
\begin{proposition} \label{prop:heller-J}
For $n \in \mathbf{Z}$ and $m \ge 0$, we have, in $\mathcal{C}^{\rm ss}$,
\begin{displaymath}
\Omega^m(M(n)) \cong J^+(n-m, n+m), \qquad \Omega^{-m}(M(n)) \cong J^-(n-m, n+m).
\end{displaymath}
In particular, every simple object of $\mathcal{C}^{\rm ss}$ is isomorphic to $\Omega^m(M(n))$ for a unique $(n,m) \in \mathbf{Z}^2$.
\end{proposition}
\begin{proof}
By Theorem~\ref{thm:spequiv}, it is equivalent to prove the analogous result for $R$-modules. This is well-known: for example, the same argument used to prove \cite[Theorem~1]{Johnson} applies here.
\end{proof}
\begin{proposition} \label{prop:heller-tensor}
For integers $n$, $m$, $r$, and $s$, we have an isomorphism in $\mathcal{C}^{\rm ss}$
\begin{displaymath}
\Omega^r(M(m)) \otimes \Omega^s(M(n)) \cong \Omega^{r+s}(M(m+n)).
\end{displaymath}
\end{proposition}
\begin{proof}
This follows from Propositions~\ref{prop:M-tensor} and~\ref{prop:heller}. (Note that the proof of Proposition~\ref{prop:M-tensor} actually shows $M(n) \otimes M(m) \sim M(n+m)$.)
\end{proof}
\subsection{Pointed tensor categories} \label{ss:pointed}
The simple objects of $\mathcal{C}^{\rm ss}$ and $\mathcal{D}$ can both be indexed by $\mathbf{Z}^2$, and Proposition~\ref{prop:heller-tensor} shows that tensor products of simples decompose in the same way in both categories. To actually construct an equivalence of symmetric tensor categories directly would require a good deal of additional work. We circumvent this by appealing to the general theory of pointed tensor categories.
Let $\mathcal{T}$ be a semisimple $k$-linear pre-Tannakian category. We say that $\mathcal{T}$ is \defn{pointed} if its simple objects are invertible, i.e., given a simple object $S$ we have $S \otimes S' \cong \bbone$ for some $S'$ (necessarily the dual of $S$). Suppose this is the case. It follows that the tensor product of two simples is again simple. Let $A(\mathcal{T})$ be the set of isomorphism classes of simple objects. This forms an abelian group under tensor product.
The group $A(\mathcal{T})$ carries a natural piece of extra structure. Let $S$ be a simple object. Then $S \otimes S$ is a simple object, and so (by Schur's lemma) the symmetry isomorphism $\beta_{S,S}$ must be multiplication by a scalar. Since our category is symmetric (as opposed to braided), this scalar must be $\pm 1$. We can thus define a function
\begin{displaymath}
q_{\mathcal{T}} \colon A(\mathcal{T}) \to \mathbf{Z}/2, \qquad (-1)^{q_{\mathcal{T}}(S)} = \beta_{S,S}.
\end{displaymath}
One easily sees that $q_{\mathcal{T}}$ is in fact a group homomorphism. (In the general braided theory, $q$ will be a quadratic form valued in $k^{\times}$, but in the symmetric case the situation simplifies; see \cite[\S 2.11]{DGNO} and \cite[Example~2.45]{DGNO}.) The following is the main result we require:
\begin{theorem} \label{thm:pointed}
Let $\mathcal{T}$ and $\mathcal{T}'$ be semi-simple pointed pre-Tannakian categories. Given an isomorphism
\begin{displaymath}
\varphi \colon (A(\mathcal{T}), q_{\mathcal{T}}) \to (A(\mathcal{T}'), q_{\mathcal{T}'})
\end{displaymath}
there exists an equivalence $\Phi \colon \mathcal{T} \to \mathcal{T}'$ of symmetric tensor categories inducing $\varphi$; moreover, $\Phi$ is unique up to isomorphism.
\end{theorem}
\begin{proof}
When $A(\mathcal{T})$ is finite this is the symmetric case of \cite[Proposition~2.41]{DGNO} as explained by \cite[Example~2.45]{DGNO}. But the assumption that $A(\mathcal{T})$ be finite is unnecessary as is clear from the more general version of \cite[Proposition~2.41]{DGNO} given in \cite[Theorem 3.3]{Joyal-Street}.
\end{proof}
There is one other general fact that will be useful:
\begin{proposition} \label{prop:pointed-q}
Let $\mathcal{T}$ be a semi-simple pointed pre-Tannakian category, and let $S$ be a simple object. Then the categorical dimension of $S$ is $(-1)^{q_{\mathcal{T}}([S])}$.
\end{proposition}
\begin{proof}
Put $A=A(\mathcal{T})$ and $q=q_{\mathcal{T}}$. Let $\mathcal{T}'$ be the category of $A$-graded vector spaces, with its usual tensor product. Define a symmetric structure on $\mathcal{T}'$ by
\begin{displaymath}
v \otimes w \mapsto (-1)^{q(\vert v \vert) q(\vert w \vert)} w \otimes v,
\end{displaymath}
where $v$ and $w$ are homogeneous elements, and $\vert v \vert \in A$ denotes the degree of $v$. Then one readily verifies that $A(\mathcal{T}')=A$ and $q_{\mathcal{T}'}=q$, and so by Theorem~\ref{thm:pointed} there is an equivalence $\mathcal{T} \cong \mathcal{T}'$. Letting $k(a)$ denote the simple of $\mathcal{T}'$ concentrated in degree $a$, one easily sees that the categorical dimension of $k(a)$ is $(-1)^{q(a)}$, which completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:semisimplification}]
Proposition~\ref{prop:heller-tensor} shows that $\mathcal{C}^{\rm ss}$ is a pointed tensor category. Indeed, every simple of $\mathcal{C}^{\rm ss}$ is of the form $\Omega^m(M(n))$ for $n,m \in \mathbf{Z}$, and the proposition shows
\begin{displaymath}
\Omega^m(M(n)) \otimes \Omega^{-m}(M(-n)) \cong \bbone.
\end{displaymath}
The same proposition shows that the map $\mathbf{Z}^2 \to A(\mathcal{C}^{\rm ss})$ given by $(n,m) \mapsto [\Omega^m(M(n))]$ is a group isomorphism. By Propositions~\ref{prop:indecomp} and~\ref{prop:heller-J}, the object $\Omega^m(M(n))$ has categorical dimension $(-1)^{n+m}$, and so $q_{\mathcal{C}^{\rm ss}}(n,m) = n+m$ by Proposition~\ref{prop:pointed-q}.
We also have an isomorphism $\mathbf{Z}^2 \to A(\mathcal{D})$ by $(n,m) \mapsto [k(n,m)]$. One readily verifies that $q_{\mathcal{D}}(n,m)=n+m$. Thus by Theorem~\ref{thm:pointed}, we have an equivalence of symmetric tensor categories $\mathcal{D} \to \mathcal{C}^{\rm ss}$ which maps $k(n,m)$ to $\Omega^m(M(n))$.
\end{proof}
\section{Delannoy loops} \label{s:loops}
In \cite[\S 3.4, \S 9]{line} we gave a description of the category $\uRep(H)$ in terms of Delannoy paths. In this section we give a similar description of $\uRep(G)$ in terms of a natural generalization that we call Delannoy loops.
\subsection{Delannoy loops}
An \defn{$(n,m)$-Delannoy path} is a path in the plane from $(0,0)$ to $(n,m)$ composed of steps of the form $(1,0)$, $(0,1)$, and $(1,1)$. The \defn{Delannoy number} $D(n,m)$ is the number of $(n,m)$-Delannoy paths, and the \defn{central Delannoy number} $D(n)$ is $D(n,n)$. For example, $D(2)=13$; see Figure~\ref{fig:delannoy}. The Delannoy numbers are well-known in the literature; see, e.g., \cite{Banderier}.
\begin{figure}
\def\delannoy#1{\begin{tikzpicture}[scale=0.5]
\draw[step=1, color=gray!50] (0, 0) grid (2,2);
\draw[line width=2pt] #1;
\end{tikzpicture}}
\begin{center}
\delannoy{(0,0)--(1,0)--(2,0)--(2,1)--(2,2)} \quad
\delannoy{(0,0)--(1,0)--(2,1)--(2,2)} \quad
\delannoy{(0,0)--(1,0)--(1,1)--(2,1)--(2,2)} \quad
\delannoy{(0,0)--(1,1)--(2,1)--(2,2)} \quad
\delannoy{(0,0)--(1,0)--(1,1)--(2,2)} \quad
\delannoy{(0,0)--(1,0)--(1,1)--(1,2)--(2,2)} \quad
\delannoy{(0,0)--(1,1)--(2,2)}
\end{center}
\vskip.5\baselineskip
\begin{center}
\delannoy{(0,0)--(0,1)--(1,1)--(2,1)--(2,2)} \quad
\delannoy{(0,0)--(0,1)--(1,1)--(2,2)} \quad
\delannoy{(0,0)--(1,1)--(1,2)--(2,2)} \quad
\delannoy{(0,0)--(0,1)--(1,1)--(1,2)--(2,2)} \quad
\delannoy{(0,0)--(0,1)--(1,2)--(2,2)} \quad
\delannoy{(0,0)--(0,1)--(0,2)--(1,2)--(2,2)}
\end{center}
\caption{The thirteen $(2,2)$-Delannoy paths.}
\label{fig:delannoy}
\end{figure}
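As a quick numerical sanity check (illustrative only, not part of the paper), the Delannoy numbers can be computed from the standard recurrence $D(n,m)=D(n-1,m)+D(n,m-1)+D(n-1,m-1)$ with $D(n,0)=D(0,m)=1$, recalled in the Enumeration subsection below; this reproduces the count $D(2)=13$ from the figure above.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def delannoy(n: int, m: int) -> int:
    """Delannoy number D(n, m) via the standard recurrence."""
    if n == 0 or m == 0:
        return 1
    return delannoy(n - 1, m) + delannoy(n, m - 1) + delannoy(n - 1, m - 1)

print(delannoy(2, 2))  # prints 13, the thirteen (2,2)-Delannoy paths
```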
An \defn{$(n,m)$-Delannoy loop} is an oriented unbased loop on a toroidal grid with points labeled by $\mathbf{Z}/n \times \mathbf{Z}/m$ composed of steps of the form $(1,0)$, $(0,1)$, and $(1,1)$, and which loops around the torus exactly once in each of the $x$-direction and the $y$-direction. Loops are allowed to touch themselves at vertices (though it turns out that this can happen at most once), and are not required to pass through $(0,0)$.
The set of Delannoy loops will be denoted $\Lambda(n,m)$. The \defn{circular Delannoy number} $C(n,m)$ is the number of $(n,m)$-Delannoy loops, and the \defn{central circular Delannoy number} $C(n)$ is $C(n,n)$. For example, $C(2)=16$; see Figure~\ref{fig:circular-delannoy}. Although we will see that circular Delannoy numbers have a number of natural combinatorial properties, they have not previously appeared in the OEIS and do not seem to have been studied elsewhere.
The group $\mathbf{Z}/n \times \mathbf{Z}/m$ acts on Delannoy loops by cyclically permuting the coordinates. The actions of $\mathbf{Z}/n$ and $\mathbf{Z}/m$ individually are free, but the product action is not free.
\begin{figure}
\def\delannoy#1{\begin{tikzpicture}[scale=0.5]
\draw[step=1, color=gray!50] (0, 0) grid (2,2);
\draw[line width=2pt] #1;
\end{tikzpicture}}
\begin{center}
\delannoy{(0,0)--(1,0)--(2,0)--(2,1)--(2,2)} \quad
\delannoy{(0,0)--(1,0)--(1,1)--(1,2)--(2,2)}, \quad
\delannoy{(0,0)--(0,1)--(1,1)--(2,1)--(2,2)} \quad
\delannoy{(1,0)--(1,1)--(2,1) (0,1)--(1,1)--(1,2)}, \quad
\delannoy{(0,0)--(1,0)--(2,1)--(2,2)} \quad
\delannoy{(0,0)--(1,1)--(1,2)--(2,2)}, \quad
\delannoy{(0,0)--(0,1)--(1,1)--(2,2)} \quad
\delannoy{(0,1)--(1,2) (1,0)--(1,1)--(2,1)}, \quad
\end{center}
\vskip.5\baselineskip
\begin{center}
\delannoy{(0,0)--(1,1)--(2,1)--(2,2)} \quad
\delannoy{(0,1)--(1,1)--(1,2) (1,0)--(2,1)}, \quad
\delannoy{(0,0)--(1,0)--(1,1)--(2,2)} \quad
\delannoy{(0,0)--(0,1)--(1,2)--(2,2)}, \quad
\delannoy{(0,0)--(1,1)--(2,2)} \quad
\delannoy{(0,1)--(1,2) (1,0)--(2,1)}, \quad
\delannoy{(0,0)--(1,0)--(1,1)--(2,1)--(2,2)} \quad
\delannoy{(0,0)--(0,1)--(1,1)--(1,2)--(2,2)} \quad
\end{center}
\caption{The 16 $(2,2)$-Delannoy loops, paired up by the $\mathbf{Z}/2$-action on the $x$-coordinate. These form five orbits under $\mathbf{Z}/2 \times \mathbf{Z}/2$, with the first and second pairs, the third and fourth pairs, and the fifth and sixth pairs each combining into orbits of size $4$, and the seventh and eighth pairs each forming their own orbit of size $2$.}
\label{fig:circular-delannoy}
\end{figure}
\subsection{Orbits}
As in \cite[Proposition~3.5]{line}, the $H$-orbits on $\mathbf{R}^{(n)} \times \mathbf{R}^{(m)}$ are naturally in bijective correspondence with $(n,m)$-Delannoy paths. In particular, the number of orbits is the Delannoy number $D(n,m)$. Here is a description of a bijection. Given a point $(x,y) \in \mathbf{R}^{(n)} \times \mathbf{R}^{(m)}$ we break up $\mathbf{R}$ into the points $x \cup y$ and intervals $\mathbf{R} - (x \cup y)$. For each interval $I_k$ we have a maximal $a_k$ with $x_{a_k} < I_k$ and a maximal $b_k$ with $y_{b_k} < I_k$, except for the leftmost interval where we define $a_0 = b_0 = 0$. Then look at the path which goes from $(0,0) = (a_0,b_0)$ to $(a_1, b_1)$, etc. Between each interval there is a point that lies in $x \cup y$ and hence lies in $x$, in $y$, or in both. From the description of the path it is clear that it steps by $(1,0)$ when we go past a point in just $x$, by $(0,1)$ when we go past a point in just $y$, and by $(1,1)$ when we go past a point in both. Hence this path is a Delannoy path. See \cite[Figure 2]{line} for an example.
We now prove the analogous result in the circle case.
\begin{proposition}
The $G$-orbits on $\mathbf{S}^{\{n\}} \times \mathbf{S}^{\{m\}}$ are in bijective correspondence with $(n,m)$-Delannoy loops. This correspondence is natural, and, in particular, equivariant for $\mathbf{Z}/n \times \mathbf{Z}/m$. Hence, the number of orbits is the circular Delannoy number $C(n,m)$.
\end{proposition}
\begin{proof}
Suppose that $(x,y)$ is an element of $\mathbf{S}^{\{n\}} \times \mathbf{S}^{\{m\}}$. Again $\mathbf{S} - (x \cup y)$ is a collection of intervals. To each interval we again assign the point $(a, b)$ where $x_a$ is the largest of the $x$ which is smaller than $I$ in cyclic ordering and similarly for $y_b$. Again between intervals we pass through a point which is either in $x$, in $y$, or in both, and we step by $(1,0)$, $(0,1)$, or $(1,1)$ respectively. Since each point in $x$ and each point in $y$ is crossed exactly once, this loop will go around the torus exactly once in each of the $x$ and $y$ directions, so this gives a Delannoy loop $p(x,y)$. The proof that this gives a bijection is the same as in the line case.
\end{proof}
\begin{remark}
If we fix a vertex $(a,b)$, then the intervals assigned to $(a,b)$ are exactly the intersection of the intervals $(x_a,x_{a+1})$ and $(y_b, y_{b+1})$. Such an intersection can be empty, a single interval, or a union of two intervals. This is why it is possible for a Delannoy loop to intersect itself at a vertex. In the $(2,2)$ case, this phenomenon occurs for a single $\mathbf{Z}/2 \times \mathbf{Z}/2$ orbit, consisting of the first two pairs in Figure~\ref{fig:circular-delannoy}.
\end{remark}
\subsection{Representations}
As in the line case, we can describe a category equivalent to $\uPerm^{\circ}(G)$ directly using Delannoy loops. Since the proof is essentially the same as in the line case we content ourselves with giving a correct statement.
Let $\mathcal{C}(n,m)$ be the vector space with basis indexed by the set of Delannoy loops $\Lambda(n,m)$. We write $[p]$ for the basis vector of $\mathcal{C}(n,m)$ corresponding to $p \in \Lambda(n,m)$. We will define a composition law on the $\mathcal{C}$'s. Let $p_1 \in \Lambda(n,m)$, $p_2 \in \Lambda(m,\ell)$, and $p_3 \in \Lambda(n,\ell)$ be given.
We define $(n,m,\ell)$ Delannoy loops similarly to above: they are loops in a toroidal grid labeled by $\mathbf{Z}/n \times \mathbf{Z}/m \times \mathbf{Z}/\ell$ that can take steps which increase by one in any non-empty subset of the three coordinates. If $q$ is in $\Lambda(n,m,\ell)$, we can use the projection maps $\pi_{i,j}$ to produce $\pi_{1,2}(q) \in \Lambda(n,m)$, $\pi_{1,3}(q) \in \Lambda(n,\ell)$, and $\pi_{2,3}(q) \in \Lambda(m,\ell)$ (see \cite[\S 9.1]{line} for details). We say that $q$ is \emph{compatible with} a triple $p_{1,2}, p_{2,3}, p_{1,3}$ of Delannoy loops if
\begin{displaymath}
\pi_{1,2}(q)=p_{1,2}, \quad \pi_{2,3}(q)=p_{2,3}, \quad \pi_{1,3}(q)=p_{1,3},
\end{displaymath}
Let
\begin{displaymath}
\epsilon(q, p_1,p_2,p_3) = \begin{cases}
(-1)^{\ell(q)+\ell(p_3)} & \text{if $q$ is compatible with $(p_1,p_2, p_3)$} \\
0 & \text{otherwise} \end{cases}
\end{displaymath}
We define a $k$-bilinear composition law
\begin{displaymath}
\mathcal{C}(n,m) \times \mathcal{C}(m,\ell) \to \mathcal{C}(n,\ell)
\end{displaymath}
by
\begin{displaymath}
[p_1] \circ [p_2] = \sum_{q \in \Lambda(n,m,\ell), p_3 \in \Lambda(n,\ell)} \epsilon(q,p_1,p_2,p_3) [p_3].
\end{displaymath}
\begin{proposition} \label{prop:equiv}
We have the following:
\begin{enumerate}
\item The composition law in $\mathcal{C}$ is associative and has identity elements.
\item There is a fully faithful functor $\Phi \colon \mathcal{C} \to \uRep(G)$ defined on objects by $\Phi(X_n)=\mathcal{C}(\mathbf{S}^{\{n\}})$ and on morphisms by $\Phi([p])=A_p$.
\item $\Phi$ identifies $\mathcal{C}$ with the full subcategory of $\uRep(G)$ spanned by the $\mathcal{C}(\mathbf{S}^{\{n\}})$'s.
\item $\Phi$ identifies the additive envelope of $\mathcal{C}$ with $\uPerm^{\circ}(G)$.
\end{enumerate}
\end{proposition}
\begin{remark}
In the line case it turned out that given Delannoy paths $p_1$, $p_2$, and $p_3$, there is at most one $q$ such that $q$ is compatible with $(p_1,p_2,p_3)$, and hence each $[p_3]$ only appeared once in the sum. In the circle setting it is possible for $(p_1,p_2,p_3)$ to be compatible with \emph{two} such $q$. For example, there are exactly two $(1,1,1)$ Delannoy loops that have no diagonal steps (one for each of the cyclic orderings on the coordinates), and they both project down in all three directions to the unique non-diagonal $(1,1)$ loop.
\end{remark}
\begin{remark}
In the line case, we also had that $\Phi$ identifies the additive--Karoubian envelope of the path category with $\uRep^{\mathrm{f}}(H)$. In the circle case $\uRep^{\mathrm{f}}(G)$ is not semisimple, and so one needs to be more careful about what kind of completion to apply.
\end{remark}
\subsection{Enumeration}
We have the following combinatorial formula for circular Delannoy numbers in terms of ordinary Delannoy numbers. We give two proofs of this formula, one by counting and one using representation theory.
\begin{proposition}
If $n,m>0$ then we have
\begin{align*}
C(n,m) &= n\left(D(n,m-1) + D(n-1,m-1)\right) \\
&= n\left(D(n,m) - D(n-1,m)\right)
\end{align*}
\end{proposition}
The two formulas are equal to each other using the standard recurrence relation for ordinary Delannoy numbers, $D(n,m) = D(n,m-1) + D(n-1,m) + D(n-1,m-1)$. We will prove the first version.
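Both forms of the proposition are easy to check numerically. The following sketch (illustrative only, not part of the paper) verifies that the two expressions agree on small arguments and reproduces the count $C(2)=16$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def D(n, m):
    # ordinary Delannoy numbers: D(n,m) = D(n-1,m) + D(n,m-1) + D(n-1,m-1)
    return 1 if n == 0 or m == 0 else D(n - 1, m) + D(n, m - 1) + D(n - 1, m - 1)

def C(n, m):
    # circular Delannoy numbers, first form of the proposition
    return n * (D(n, m - 1) + D(n - 1, m - 1))

# the two forms agree, as forced by the Delannoy recurrence
assert all(C(n, m) == n * (D(n, m) - D(n - 1, m))
           for n in range(1, 10) for m in range(1, 10))
print(C(2, 2))  # prints 16, the count of (2,2)-Delannoy loops
```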
\begin{proof}[Counting proof]
Each orbit of $\mathbf{Z}/n$ contains a unique representative which passes through $(0,0)$ and immediately takes a vertical or diagonal step. If we ignore this first step then we obtain an ordinary $(n,m-1)$-Delannoy path if the first step is vertical, and an ordinary $(n-1,m-1)$-Delannoy path if the first step is diagonal.
\end{proof}
\begin{proof}[Representation theoretic proof]
Second we give the representation theoretic proof, using that Delannoy loops form a basis for $\Hom_{\ul{G}}(\mathcal{C}(\mathbf{S}^{\{n\}}), \mathcal{C}(\mathbf{S}^{\{m\}}))$. Recall that $\mathcal{C}(\mathbf{S}^{\{k\}}) \cong \Ind_H^G \mathcal{C}(\mathbf{R}^{(k-1)})$ (see the proof of Proposition~\ref{prop:schwartz-ind}) and
\begin{displaymath}
\Res_H^G (\mathcal{C}(\mathbf{S}^{\{k\}})) \cong \mathcal{C}(\mathbf{R}^{(k)})^{\oplus k} \oplus \mathcal{C}(\mathbf{R}^{(k-1)})^{\oplus k}
\end{displaymath}
(see \S\ref{ss:std-setup}). Now we use Frobenius reciprocity, and the fact that $\Hom( \mathcal{C}(\mathbf{R}^{(a)}), \mathcal{C}(\mathbf{R}^{(b)}))$ has dimension $D(a,b)$ to compute
\begin{align*}
C(n,m) &= \dim \Hom_{\ul{G}}(\mathcal{C}(\mathbf{S}^{\{n\}}), \mathcal{C}(\mathbf{S}^{\{m\}})) \\
&= \dim \Hom_{\ul{G}}(\mathcal{C}(\mathbf{S}^{\{n\}}), \Ind_H^G \mathcal{C}(\mathbf{R}^{(m-1)})) \\
&= \dim \Hom_{\ul{H}}(\Res_H^G \mathcal{C}(\mathbf{S}^{\{n\}}), \mathcal{C}(\mathbf{R}^{(m-1)})) \\
&= \dim \Hom_{\ul{H}}(\mathcal{C}(\mathbf{R}^{(n)})^{\mathrm{op}lus n} \mathrm{op}lus \mathcal{C}(\mathbf{R}^{(n-1)})^{\mathrm{op}lus n}, \mathcal{C}(\mathbf{R}^{(m-1)})) \\
&= n\left(D(n,m-1) + D(n-1,m-1)\right) \qedhere
\end{align*}
\end{proof}
\begin{figure}
$$\begin{array}{c|cccccccccc}
m \backslash n& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 2& 4& 6& 8& 10& 12& 14& 16& 18\\
2 & 1 & 4& 16& 36& 64& 100& 144& 196& 256& 324\\
3 & 1 & 6& 36& 114& 264& 510& 876& 1386& 2064& 2934\\
4 & 1 & 8& 64& 264& 768& 1800& 3648& 6664& 11264& 17928\\
5 & 1 & 10& 100& 510& 1800& 5010& 11820& 24710& 47120& 83610\\
6 & 1 & 12& 144& 876& 3648& 11820& 32016& 75852& 162048& 318924 \\
7 & 1 & 14& 196& 1386& 6664& 24710& 75852& 201698& 479248& 1040382\\
8 & 1 & 16& 256& 2064& 11264& 47120& 162048& 479248& 1257472& 2994192\\
9 & 1 & 18& 324& 2934& 17928& 83610& 318924& 1040382& 2994192& 7777314\\
\end{array}$$
\caption{A table of values of the circular Delannoy numbers.}
\label{fig:circular-table}
\end{figure}
Using the proposition, we can easily compute $C(n,m)$ for small $n$ and $m$, as shown in Figure~\ref{fig:circular-table}. We can also extract a nice formula for $C(n,m)$, using the representation-theoretic formula for the ordinary Delannoy numbers (see \cite[Remark~4.9]{line}):
\begin{displaymath}
D(n,m) = \sum_{k=0}^{\min(m,n)} \binom{n}{k} \binom{m}{k} 2^k.
\end{displaymath}
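This closed form is straightforward to confirm against the recurrence for small arguments; here is an illustrative check (not part of the paper).

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def D(n, m):
    # Delannoy recurrence with D(n,0) = D(0,m) = 1
    return 1 if n == 0 or m == 0 else D(n - 1, m) + D(n, m - 1) + D(n - 1, m - 1)

def D_formula(n, m):
    # sum over k of binom(n,k) * binom(m,k) * 2^k
    return sum(comb(n, k) * comb(m, k) * 2**k for k in range(min(n, m) + 1))

assert all(D(n, m) == D_formula(n, m) for n in range(8) for m in range(8))
```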
\begin{proposition} \label{prop:circular-sum-formula}
If $n,m \geq 1$ then
\begin{displaymath}
C(n,m) = \sum_{k=1}^{\min(m,n)} \binom{n}{k} \binom{m}{k} k 2^k
\end{displaymath}
\end{proposition}
\begin{proof}
Letting $N=\min(n,m)$, we have
\begin{align*}
C(n,m) &= n\left(D(n,m) - D(n-1,m)\right) \\
&= n \sum_{k=1}^N \bigg[ \binom{n}{k} \binom{m}{k} 2^k - \binom{n-1}{k} \binom{m}{k} 2^k \bigg] \\
&= n \sum_{k=1}^N \binom{n-1}{k-1} \binom{m}{k} 2^k = \sum_{k=1}^N \binom{n}{k} \binom{m}{k} k 2^k
\end{align*}
Note that the $k=0$ terms in the sums defining the two Delannoy numbers cancel, which is why we can start from $k=1$.
\end{proof}
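The closed form can be checked directly against the table of circular Delannoy numbers; the following sketch (illustrative only, not part of the paper) reproduces a few entries.

```python
from math import comb

def C_formula(n, m):
    # C(n,m) = sum_{k>=1} binom(n,k) * binom(m,k) * k * 2^k
    return sum(comb(n, k) * comb(m, k) * k * 2**k
               for k in range(1, min(n, m) + 1))

print(C_formula(2, 2), C_formula(3, 3), C_formula(4, 4))  # 16 114 768
```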
This formula for $C(n,m)$ can also be proved directly using the decomposition of $\mathcal{C}(\mathbf{S}^{\{n\}})$ into indecomposable projectives from \S\ref{s:decomposition} and counting the dimensions of the Hom spaces between projectives from \S\ref{s:maps-of-projectives}. This derivation is somewhat messy due to the contributions from the special block, so we do not work it out here.
The recurrence for ordinary Delannoy numbers immediately yields the generating function $(1-x-y-xy)^{-1}$, and from our formula for $C(n,m)$ in terms of $D(n,m)$ it is easy to derive the following generating function for circular Delannoy numbers with $n,m \geq 1$.
\begin{displaymath}
\sum_{n,m=1}^\infty C(n,m) x^n y^m = \frac{2xy}{(1-x-y-xy)^2}
\end{displaymath}
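One can confirm the generating function numerically by convolving the Delannoy series with itself, since $1/(1-x-y-xy)=\sum_{a,b \ge 0} D(a,b) x^a y^b$. The sketch below (illustrative, not from the paper) extracts coefficients of $2xy/(1-x-y-xy)^2$ this way and compares with the table.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def D(n, m):
    return 1 if n == 0 or m == 0 else D(n - 1, m) + D(n, m - 1) + D(n - 1, m - 1)

def C_coeff(n, m):
    # coefficient of x^n y^m in 2xy / (1 - x - y - xy)^2:
    # squaring the series 1/(1-x-y-xy) convolves Delannoy numbers,
    # and the factor 2xy shifts both indices by one
    return 2 * sum(D(a, b) * D(n - 1 - a, m - 1 - b)
                   for a in range(n) for b in range(m))

assert C_coeff(2, 2) == 16 and C_coeff(3, 3) == 114
```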
\begin{thebibliography}{DGNO}
\bibitem[Be]{Benson} D. J. Benson. Representations and Cohomology, I: Basic representation theory of finite groups and associative algebras. Cambridge University Press, 1995. \DOI{10.1017/CBO9780511623615}
\bibitem[BS]{Banderier} Cyril Banderier, Sylviane Schwer. Why Delannoy numbers? \textit{J.\ Statis.\ Plann.\ Inference} \textbf{135} (2005), no.~1, pp.~40--54. \DOI{10.1016/j.jspi.2005.02.004} \arxiv{math/0411128}
\bibitem[CdS]{CdS} Gunnar Carlsson, Vin de Silva. Zigzag Persistence. \textit{Found.\ Comput.\ Math.} \textbf{10} (2010), pp.~367--405. \DOI{10.1007/s10208-010-9066-0} \arxiv{0812.0197}
\bibitem[CM]{CM} Sunil Chebolu, J\'an Min\'ac. Representations of the miraculous Klein group. \textit{Math.\ Newsl.} \textbf{22} (2012), no.~1, pp.~135--145. \arxiv{1209.4074}
\bibitem[CO1]{ComesOstrik1} Jonathan Comes, Victor Ostrik. On blocks of Deligne's category $\uRep(S_t)$. \textit{Adv.\ Math.} \textbf{226} (2011), no.~2, pp.~1331--1377. \DOI{10.1016/j.aim.2010.08.010} \arxiv{0910.5695}
\bibitem[CO2]{ComesOstrik} Jonathan Comes, Victor Ostrik. On Deligne's category $\uRep^{ab}(S_d)$. \textit{Algebra Number Theory} \textbf{8} (2014), pp.~473--496. \DOI{10.2140/ant.2014.8.473} \arxiv{1304.3491}
\bibitem[Del1]{Deligne2} P. Deligne. Cat\'egories tensorielles. \emph{Mosc. Math. J.} {\bf 2} (2002), no.~2, pp.~227--248. \\ \DOI{10.17323/1609-4514-2002-2-2-227-248} \\
Available at: {\tiny\url{https://www.math.ias.edu/files/deligne/Tensorielles.pdf}}
\bibitem[Del2]{Deligne3} P. Deligne. La cat\'egorie des repr\'esentations du groupe sym\'etrique $S_t$, lorsque $t$ n'est pas un entier naturel. In: Algebraic Groups and Homogeneous Spaces, in: Tata Inst. Fund. Res. Stud. Math., Tata Inst. Fund. Res., Mumbai, 2007, pp.~209--273. \\
Available at: {\tiny\url{https://www.math.ias.edu/files/deligne/Symetrique.pdf}}
\bibitem[DGNO]{DGNO} Vladimir Drinfeld, Shlomo Gelaki, Dmitri Nikshych, Victor Ostrik. On braided fusion categories I. \textit{Selecta Math.\ N.S.} \textbf{16} (2010), pp.~1--119. \DOI{10.1007/s00029-010-0017-z} \arxiv{0906.0620}
\bibitem[Eti1]{Etingof1} Pavel Etingof. Representation theory in complex rank, I. \textit{Transform.\ Groups} \textbf{19} (2014), no.~2, pp.~359--381. \DOI{10.1007/s00031-014-9260-2} \arxiv{1401.6321}
\bibitem[Eti2]{Etingof2} Pavel Etingof. Representation theory in complex rank, II. \textit{Adv.\ Math.} \textbf{300} (2016), pp.~473--504. \DOI{10.1016/j.aim.2016.03.025} \arxiv{1407.0373}
\bibitem[EAH]{EntovaAizenbudHeidersdorf} Inna Entova-Aizenbud, Thorsten Heidersdorf. Deligne categories for the finite general linear groups, part 1: universal property. \arxiv{2208.00241}
\bibitem[EGNO]{EGNO} Pavel Etingof, Shlomo Gelaki, Dmitri Nikshych, Victor Ostrik. Tensor categories. Mathematical Surveys and Monographs, \textbf{205}. American Mathematical Society, Providence, RI, 2015.
\bibitem[EO]{EtingofOstrik} Pavel Etingof, Victor Ostrik. On semisimplification of tensor categories. In ``Representation Theory and Algebraic Geometry,'' eds.\ Vladimir Baranovsky, Nicolas Guay, Travis Schedler. Trends in Mathematics. Birkh\"auser. \DOI{10.1007/978-3-030-82007-7\_1} \arxiv{1801.04409}
\bibitem[Ga]{Gabriel} Peter Gabriel. Unzerlegbare Darstellungen I. \textit{Manuscripta Math.} \textbf{6} (1972), pp.~71--103. \DOI{10.1007/BF01298413}
\bibitem[Ge]{Germoni} J\'er\^ome Germoni. Indecomposable representations of special linear Lie superalgebras. \textit{J.\ Algebra} \textbf{209} (1998), no.~2, pp.~367--401. \DOI{10.1006/jabr.1998.7520}
\bibitem[GQS]{GQS} Gerhard Gotz, Thomas Quella, Volker Schomerus. Representation theory of $\mathrm{sl}(2|1)$. \textit{J.\ Algebra} \textbf{312} (2007), no.~2, pp.~829--848. \DOI{10.1016/j.jalgebra.2007.03.012} \arxiv{hep-th/0504234}
\bibitem[Har]{Harman} Nate Harman. Stability and periodicity in the modular representation theory of symmetric groups. \arxiv{1509.06414v3}
\bibitem[Har2]{Harman2} Nate Harman. Deligne categories as limits in rank and characteristic. \arxiv{1601.03426}
\bibitem[Hei]{Heidersdorf} Thorsten Heidersdorf. On supergroups and their semisimplified representation categories. \textit{Algebr.\ Represent.\ Theory} \textbf{22} (2019), pp.~937--959. \DOI{10.1007/s10468-018-9806-4} \arxiv{1512.03420}
\bibitem[HS1]{repst} Nate Harman, Andrew Snowden. Oligomorphic groups and tensor categories. \arxiv{2204.04526}
\bibitem[HS2]{discrete} Nate Harman, Andrew Snowden. Discrete pre-Tannakian categories. In preparation.
\bibitem[HSS]{line} Nate Harman, Andrew Snowden, Noah Snyder. The Delannoy category. \arxiv{2211.15392}
\bibitem[Jo]{Johnson} D.~L.~Johnson. Indecomposable representations of the four-group over fields of characteristic 2. \textit{J.\ London Math.\ Soc.} \textbf{44} (1969), no.~1, pp.~295--298. \DOI{10.1112/jlms/s1-44.1.295}
\bibitem[JS]{Joyal-Street} Andr\'e Joyal, Ross Street. Braided tensor categories. \textit{Adv.\ Math.} \textbf{102} (1993), no.~1, pp.~20--78. \DOI{10.1006/aima.1993.1055}
\bibitem[Kno1]{Knop} Friedrich Knop. A construction of semisimple tensor categories. \textit{C.~R.~Math.\ Acad.\ Sci.\ Paris C} \textbf{343} (2006), no.~1, pp.~15--18. \DOI{10.1016/j.crma.2006.05.009} \arxiv{math/0605126}
\bibitem[Kno2]{Knop2} Friedrich Knop. Tensor envelopes of regular categories. \textit{Adv.\ Math.} \textbf{214} (2007), pp.~571--617. \DOI{10.1016/j.aim.2007.03.001} \arxiv{math/0610552}
\bibitem[Sn]{colored} Andrew Snowden. Measures for the colored circle. \arxiv{2302.08699}
\bibitem[Vir]{Viro} O.~Y.~Viro. Some integral calculus based on Euler characteristic. In Topology and geometry---Rohlin Seminar, Lecture Notes in Math., vol.~1346, pp.~127--138. Springer, Berlin, 1988. \\ \DOI{10.1007/BFb0082775}
\end{thebibliography}
\end{thebibliography}
\end{document} |
\begin{document}
\begin{singlespace}
\begin{center}
\begin{spacing}{0.8}
\Large{\textsc{The Fourier transform of the non-trivial zeros of the zeta function\\}}
\large{Levente Csoka\\}
[email protected]\\
University of Sopron, 9400 Sopron, Bajcsy Zs. e. u. 4. Hungary\\
\line(1,0){300}
\end{spacing}
\end{center}
\end{singlespace}
\noindent
\textsc{ABSTRACT.} \emph{The non-trivial zeros of the Riemann zeta function and the prime numbers can be plotted by a modified von Mangoldt function. The series of non-trivial zeta zeros and of prime numbers can be given explicitly as a superposition of harmonic waves. The Fourier transform of the modified von Mangoldt functions shows interesting connections between the series. The Hilbert--Pólya conjecture predicts that the Riemann hypothesis is true because the zeros of the zeta function would correspond to eigenvalues of a positive operator, and this idea encouraged us to investigate the eigenvalues themselves as a series. The Fourier transform computations are in agreement with the Riemann hypothesis and give evidence for the additional conjecture that those zeros and the prime numbers are arranged in series that lie on the critical line $\operatorname{Re}\,s = \frac{1}{2}$ in the upper half plane and over the positive integers, respectively.}\\
Recent years have seen an explosion of research stating connections between areas of physics and mathematics \cite{1}. Hardy and Littlewood gave the first proof that infinitely many of the zeros lie on the $\frac{1}{2}$ line \cite{2}. Hilbert and Pólya independently suggested a Hermitian operator whose eigenvalues are the non-trivial zeros of Eq. \ref{eq1}. Pólya investigated the Fourier transform of Eq. \ref{eq1} as a consequence of the location of the zeros of a functional equation. Physicists working in probability found that the zeta function arises as an expectation in a moment of a Brownian bridge \cite{3}. Here, a new series is created from the non-trivial zeta zeros using the modified von Mangoldt function, and the Fourier transform is then applied to see the connection between the zeta zero members. To describe the numerical result, we consider the non-trivial zeros of the zeta function, by denoting
\begin{flalign}
&s=\frac{1}{2}+it,\ \text{($t$ is real)}.&
\label{eq1}
\end{flalign}
Let us consider a special, simple case of the von Mangoldt function, for $n=$ non-trivial zeros of the zeta function,
\begin{flalign}
&L(n) = \left\{
\begin{array}{l l}
\ln\left(e\right) & \quad \text{if $n=$ non-trivial zeros},\\
0 & \quad \text{otherwise}.
\end{array} \right.&
\label{eq2}
\end{flalign}
The function $L(n)$ is a representative way to introduce the set of non-trivial zeta zeros, with a weight of unity attached to the location of each non-trivial zeta zero.
The discrete Fourier transform of the modified von Mangoldt function $L(n)$ gives a spectrum with periodic real parts, appearing as spikes along the frequency axis. Hence the modified von Mangoldt function $L(n)$ and the series of non-trivial zeta zeros can be approximated by periodic sine or cosine functions. Let us consider that the natural numbers form a discrete function $g(n)\rightarrow g(n_k)$ and $g_k\equiv g(n_k)$ by uniform sampling at every sampling point $n_k\equiv k\Delta$, where $\Delta$ is the sampling period and $k=2,3,\ldots,L-1$. If the subset of discrete non-trivial zeta zeros is generated by uniform sampling of a wider discrete set, it is possible to obtain the discrete Fourier transform (DFT) of $L(n)$. The sequence $L(n)$ of non-trivial zeta zeros is transformed into another sequence of $N$ complex numbers according to the following equation:
\begin{flalign}
&F\left\{L(n)\right\}=X\left(\nu\right)&
\label{eq3}
\end{flalign}
where $L(n)$ is the modified von Mangoldt function. The operator $F$ is defined as:
\begin{flalign}
&X\left(\nu\right)=\sum\limits_{k=0}^{N-1}L\left(k\Delta\right)e^{-i\left(2\pi\nu\right)\left(k\Delta\right)},&
\label{eq4}
\end{flalign}
where $\nu=l\Delta f$, $l=0,1,2,\ldots,N-1$ and $\Delta f=\frac{1}{L}$, where $L$ is the length of the modified von Mangoldt function.
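As an illustration of the transform in Eq. \ref{eq4}, the following is a minimal numerical sketch added here (not part of the original computation): a spike sequence $L(n)$, with spikes at the first few imaginary parts of the non-trivial zeros rounded to an integer grid, is transformed with a DFT. The use of \texttt{numpy} and the rounded spike locations are assumptions of the sketch.

```python
import numpy as np

# First imaginary parts of the non-trivial zeta zeros (rounded to integers),
# used as spike locations for the modified von Mangoldt function L(n).
zero_locations = [14, 21, 25, 30, 33, 38, 41, 43, 48, 50]

N = 100                   # length of the sampled sequence
L = np.zeros(N)
L[zero_locations] = 1.0   # ln(e) = 1 at each non-trivial zero, 0 elsewhere

# Discrete Fourier transform X(nu) = sum_n L(n) exp(-2*pi*i*n*nu/N)
X = np.fft.fft(L)

# The spectrum of a real sequence is conjugate-symmetric: X(N-nu) = conj(X(nu))
assert np.allclose(X[1:], np.conj(X[:0:-1]))

# X(0) is the total weight of the sequence, i.e. the number of spikes
print(X[0].real)   # 10.0
```

The symmetry check reflects the "highly ordered and symmetrical frequency distribution" discussed below; it holds for any real input sequence.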
The amplitude spectrum of Eq. \ref{eq4} completely describes the discrete-time Fourier transform of the $L(n)$ non-trivial zeta zeros function of quasi-periodic sequences, which comprises only discrete frequency components. The important properties of the amplitude spectrum are as follows:
\begin{enumerate}
\item the ratio of consecutive frequencies $(f_t,f_{t+1})$ and of the maximum frequency $(f_{max})$ converges: $\lim_{t\to\frac{f_s}{2}}\frac{f_{t+1}}{f_t}=\frac{f_t+\frac{1}{L}}{f_t}=\frac{f_{max}}{f_t}=1$, where $f_s=\frac{1}{\Delta}$ represents the positive half-length of the amplitude spectrum. This is similar to the prime number theorem: $\lim_{x\to\infty}\frac{\pi\left(x\right)\ln\left(x\right)}{x}=1$ \cite{4};
\item The reciprocal of the frequencies $(f_t)$ converges $\lim_{t\to \frac{f_s}{2}}\frac{1}{f_t}=\frac{1}{f_s}$;
\item The consecutive frequencies on the DFT amplitude spectrum describe a parabolic or Fermat's spiral (Fig. \ref{fig:fig1}), starting from 0 with increasing modulus (polar form $r=af_t$), where $a=1$, independently of the length of the non-trivial zeta zeros function, and its parametric equations can be written as $x\left(f_t\right)=f_t\cos\left(f_t\right)$, $y\left(f_t\right)=f_t\sin\left(f_t\right)$;
\begin{figure}
\caption{Parabolic or Fermat spiral using the consecutive frequencies. The arc length of the spiral is a function of the sampling interval.}
\label{fig:fig1}
\end{figure}
\item The DFT spectrum of the non-trivial zeta zeros exhibits a highly ordered and symmetrical frequency distribution (Fig. \ref{fig:fig3});
\item The DFT of the modified von Mangoldt function is periodic (Fig. \ref{fig:fig2}), independently of the length of the integer sequence used:
\begin{flalign*}
&(\nu\geq N,\ \nu = 0, 1, \ldots, N-1)&
\end{flalign*}
\begin{flalign}
&X\left(\nu\right)=X\left(\nu+N\right)=X\left(\nu+2N\right)=\cdots=X\left(\nu+zN\right),&
\end{flalign}
\begin{flalign}
&X\left(zN+\nu\right)=\sum\limits_{n=0}^{N-1}L\left(n\right)e^{-i2\pi n\left(zN+\nu\right)/N}=\sum\limits_{n=0}^{N-1}L\left(n\right)e^{-i2\pi n\nu/N}\,e^{-i2\pi z n},&\\
&z\text{ and }n\text{ are integers and }e^{-i2\pi z n}=1,&
\end{flalign}
\begin{flalign}
&X\left(\nu+zN\right)=\sum\limits_{n=0}^{N-1}L\left(n\right)e^{-i2\pi n\nu/N}=X\left(\nu\right).&
\end{flalign}
\end{enumerate}
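The periodicity property in the last item can be checked numerically. The following sketch (an addition for illustration; \texttt{numpy} and the arbitrary 0/1 test sequence are assumptions) evaluates the DFT sum directly at shifted frequencies:

```python
import numpy as np

# Evaluate the DFT sum X(nu) = sum_n L(n) exp(-2*pi*i*n*nu/N) directly,
# and check the periodicity X(nu + z*N) = X(nu) stated above.
N = 64
rng = np.random.default_rng(0)
L = rng.integers(0, 2, N).astype(float)   # an arbitrary 0/1 spike sequence

def X(nu):
    n = np.arange(N)
    return np.sum(L * np.exp(-2j * np.pi * n * nu / N))

# exp(-2*pi*i*n*z) = 1 for integers n and z, so shifting nu by any
# multiple of N leaves the transform unchanged.
for nu in (0, 1, 5, 17):
    for z in (1, 2, 3):
        assert np.isclose(X(nu + z * N), X(nu))
```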
\section{Spectral evidence}
For completeness we give the spectral evidence that the Fourier decomposition of the modified von Mangoldt function is periodic. Fourier transformation had not previously been used to study the behaviour of the non-trivial zeta zeros, and it is employed here to reveal their periodic distribution. Figure \ref{fig:fig2} shows that the corresponding amplitude spectrum of the modified von Mangoldt function exhibits a well-defined periodic signature. This suggests that the series of non-trivial zeta zeros is not an arbitrary function.
\begin{figure}
\caption{Representation of the modified von Mangoldt function up to 100. Spikes appear at the locations of the non-trivial zeta zeros.}
\label{fig:fig2}
\end{figure}
Creating the superposition from the Fermat spiral frequencies by adding their amplitudes and phases together, the original non-trivial zeta zeros function can be restored:
\begin{flalign}
&L\left(n\right)=\sum\limits_{f_t=0}^{f_{max}}A_t\sin\left(f_t+\omega n\right)&
\end{flalign}
From the wave sequence above, it can be seen that the non-trivial zeta zeros exhibit some arithmetic progressions, i.e., patterns among the non-trivial zeta zeros. Finding long arithmetic progressions in the non-trivial zeros has also attracted interest in recent decades. We can identify certain peaks in Fig. \ref{fig:fig2}, which are the reciprocals of the distances between the non-trivial zeta zeros in the arithmetic progressions.
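As a hedged numerical illustration of the superposition formula above (an addition, not from the paper; \texttt{numpy} and the spike locations are assumptions), the amplitudes and phases of the DFT restore the original sequence exactly:

```python
import numpy as np

# Reconstruct a spike sequence from the amplitudes and phases of its DFT,
# i.e. as a superposition of harmonic waves.
N = 100
L = np.zeros(N)
L[[14, 21, 25, 30, 33, 38, 41, 43, 48, 50]] = 1.0   # illustrative spike locations

X = np.fft.fft(L)
amplitude, phase = np.abs(X), np.angle(X)

n = np.arange(N)
# Sum of cosines with the DFT amplitudes and phases; for a real sequence
# the imaginary parts of the inverse transform cancel, so this is exact.
reconstruction = sum(
    amplitude[k] * np.cos(2 * np.pi * k * n / N + phase[k]) for k in range(N)
) / N

assert np.allclose(reconstruction, L)
```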
\begin{figure}
\caption{Reconstruction of the modified von Mangoldt function from the Fourier decomposed periodic waves between 10--50. The circles at the tops of the spikes indicate the locations of the non-trivial zeta zeros.}
\label{fig:fig3}
\end{figure}
\section{Conclusion}
We have presented a novel method for the detection of repetitions of the non-trivial zeta zeros on the set of natural numbers. The method is based on a DFT of the modified von Mangoldt function. The relevance of the method for the determination of such sequences has been described. The sequence of the non-trivial zeta zeros seems to correlate with the unconditional theory for the spacing correlations of the characteristic roots of the zeta function developed by Katz and Sarnak \cite{5}.
\end{document} |
\begin{document}
\title{CNOT on Polarization States of Coherent Light}
\begin{abstract} We propose a CNOT gate for quantum
computation. The CNOT operation is based on the existence of triactive
molecules, which in one direction have an electric dipole moment and cause
rotation of the polarization plane of linearly polarized light, and
in the perpendicular direction have a magnetic moment. The incoming
linearly polarized laser beam is divided into two beams by a beam
splitter. In one beam a control state is prepared; the other
beam is the target. The interaction of the polarization states of both beams
in a solution containing triactive molecules can be described as
the interaction of two qubits in a CNOT gate.
\end{abstract}
\section{Introduction}
Numerous investigations have proposed various physical ways to
realize an operating CNOT gate, which is one of the basic elements
for quantum computing \cite{cnot1, cnot2}. Our proposal is based on
the manipulation of polarization states of the laser beam, which has
many technological advantages that need not be described here. The
polarization state of completely polarized light can
be described by the same mathematics as a qubit. A geometric
representation of both quantities is provided by the Poincar\'e (or
the Bloch) sphere.
In order to construct the proposed CNOT gate, triactive
molecules are needed as the basic ingredient. When polarized light
passes through a solution of an optically active compound --- chiral
or optical isomers or optical polymers --- the direction of
polarization is rotated to the right or to the left. Recent
development in the chemistry of such dipole molecules with
controlled optical properties provides us with an opportunity to ask
chemists to prepare such an isomer or polymer molecule with one more
property --- with magnetic dipole moment oriented perpendicularly to
the electric dipole moment as shown in Fig.1. For instance, what we
need is to implement a magnetic dipole moment to some chiral isomer
or polymer, with high anisotropic polarizabilities as described in
\cite{Cdenz}. Thus we need a molecule with the following properties:
\begin{itemize}
\item It has an electric dipole moment $ \overrightarrow{p}$
\item It has a property of optical activity in one direction
$\overrightarrow{a}$, i.e., the direction of polarization of
passing linearly polarized light rotates due to interaction with the
molecules whose $\overrightarrow{a}$ is oriented parallel with the
ray ($\overrightarrow{a}$ and $\overrightarrow{p}$ are often
parallel --- let us suppose it in the following)
\item It has a magnetic moment $\overrightarrow{m}$ perpendicular to
directions $\overrightarrow{a}$ and $\overrightarrow{p}$.
\end{itemize}
It may be expected that modern chemistry can produce such molecules,
since molecules with the first two properties, i.e., without a
magnetic moment, have already been produced. For molecules with an
electric moment see \cite{Cdenz}. Moreover, the engineering of
core-shell nanoparticles can be used to produce triactive molecules
\cite{Caruzo}.
\begin{figure}
\caption{\it Scheme of a triactive molecule. The molecule has a
magnetic moment $\vec m$ in the direction perpendicular to the
electric dipole moment, and the direction of the electric dipole
moment (dashed line) is the axis of rotation of the direction of
polarization (green lines)}
\end{figure}
\section{Operation}
For the sake of describing the principle of operation we will
consider completely linearly polarized light.
Let us have a concentration $n$ of such triactive molecules in a
homogeneous solution. Under thermal motion they rotate in all
directions, the distribution of the dipole moments of the molecules is
the same in all directions, and the polarization density is
$$\overrightarrow{P} = 0 .$$
The angle of rotation of polarization of passing light will depend
only on the path length of light in the solution.
Putting the solution in a strong homogeneous magnetic field in an
ideal case guarantees that the thermal rotation of the molecules
will be restricted to precession of the magnetic moments around the
direction of the magnetic field. The magnetic field thus breaks the
isotropy of the optical activity of the solution. After switching on
the magnetic field, the angles of rotation of the polarization of
light in the directions parallel and perpendicular to the magnetic
field at the same distance are no longer equal. The resulting angle
of rotation of the polarization of passing light depends on the path
length of the light in the solution and on the angle between the
incident ray and the magnetic field.
Next, switching on an electric field in a direction perpendicular to the
magnetic field breaks the rotational symmetry in the plane
perpendicular to the magnetic field. The polarization density will have
the direction of the electric field. The angle of rotation of the
polarization (over the same distance in the solution) will be minimal
in the direction of the magnetic field and maximal in the direction of
the electric field.
Let us send two light beams through the solution in two
perpendicular directions, the first one being parallel with the
magnetic field and the second perpendicular.
The oscillating electric field of the light beam parallel with the
magnetic field controls the absolute value of polarization density
of the illuminated solution, and the polarization state of this beam
does not change (in an ideal case of low temperature). The
polarization of the light beam perpendicular to the magnetic field
will rotate around the direction of the first beam. Moreover, it can
also influence the mean absolute value of polarization density of
solution when the polarization of the second beam is not parallel
with the magnetic field.
The angle of rotation of the polarization of the beam
perpendicular to the magnetic field will depend on the direction of
the polarization density of the solution, which will lie in the
plane perpendicular to the magnetic field. Of course, the dependence
of the angle of rotation of the polarization on the temperature as
well as the length of the path of beam in the solution have to be
taken into consideration, and these parameters will guarantee the
correct function of CNOT gate.
\section{How does it work?}
The operation of the proposed CNOT gate can be described as
follows:
\begin{enumerate}
\item The linearly polarized laser beam (LB) is divided into two
parts by the beam splitter (BS), see Fig. 2.
\item The optical devices C and T in
the two branches prepare the polarization states according to the
requirement for the input states in CNOT cell.
\item One of the divided
beams, which passes the CNOT cell with the solution of triactive
molecules parallel with the external magnetic field, plays the role
of the control qubit, and the other is the target qubit. Both beams
pass the CNOT cell simultaneously.
\end{enumerate}
The control beam determines the magnitude of the absolute value of
the polarization density in the cube and the direction of the
polarization of passing light. According to the Langevin--Debye
theory the mean value of the polarization density is
$$ <|P|> = \frac{n p^2 \pi }{\sqrt{2} k T} E_{C} ,$$
where $\vec{E_C}$ is an amplitude of the vector of the intensity of
electric field of control beam, and the direction of the
polarization density is given by the direction of linearly polarized
light in the control channel. The influence of the vector of the
target electric field is negligible because it rotates several
times when passing the CNOT cube.
The nonzero mean value of the magnitude of the vector of
polarization density in the described direction is the crucial point
for the operation of the CNOT. If the direction of polarization of
the control beam is parallel with the direction of the target beam,
the polarization of the target beam rotates more than in the case
where the direction of polarization of the control beam is
perpendicular to the direction of the target beam. The direction of
polarization of the control beam is not changed, because the magnetic
field does not allow significant changes of the polarization density
of the solution in the direction of the control beam. So one can
adjust the geometry of the cube, the concentration of triactive
molecules in the solution, and the temperature to fit the difference
between the polarization planes of the target beam in the two
previous cases after passing the cube.
\begin{figure}
\caption{\it Scheme of CNOT in operation.
The laser beam ({\bf LB}) is divided by the beam splitter ({\bf BS});
the devices {\bf C} and {\bf T} prepare the control and target
polarization states.}
\end{figure}
The operation is shown in Fig.~2. Let \ket{0} be the polarization state
of linearly polarized light perpendicular to the plane of the figure --
vertical polarization, and \ket{1} the state in the plane -- horizontal
polarization. Having the input state \ket{1}, after splitting the
state is $\mket{1} \otimes \mket{1}$. The device C changes the state
\ket{1} into $\mket{\gamma}$ and T changes the state \ket{1} into
$\mket{\tau}$. The state at the inputs of the cube is $\mket{\gamma} \otimes
\mket{\tau}$. At the output of the cube the state is $\mket{\gamma}
\otimes \mket{\tau - \gamma}$.
\begin{tabular}{|c|c|c|c|}
\hline \multicolumn{4}{|c|}{Table: CNOT operations on polarization states of beams} \\
\hline Input control & Input target & Output control & Output target \\
\hline $\rightarrow$ & $\rightarrow$ & $\rightarrow$ & $\rightarrow$ \\
\hline $\rightarrow$ & $\uparrow$ & $\rightarrow$ & $\uparrow $\\
\hline $\uparrow$ & $\rightarrow$ & $\uparrow$ & $\uparrow$ \\
\hline $\uparrow$ & $\uparrow$ & $\uparrow$ & $\rightarrow$ \\
\hline
\end{tabular}
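The table can be summarized in a short sketch (hypothetical code added for illustration, not part of the proposal): the control state passes through unchanged, and the target polarization is flipped exactly when the control is vertically polarized.

```python
# Sketch of the CNOT truth table above, with '→' denoting horizontal
# and '↑' vertical linear polarization.

def cnot(control, target):
    flip = {'→': '↑', '↑': '→'}
    # The target is flipped exactly when the control is vertical ('↑');
    # the control state itself is unchanged.
    out_target = flip[target] if control == '↑' else target
    return control, out_target

for c in ('→', '↑'):
    for t in ('→', '↑'):
        print(c, t, '->', *cnot(c, t))
```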
\section{Conclusion}
The magnetic dipole moment is in principle necessary in order to fix
an axis of rotation of molecules. The same effect can be achieved
also by other means, e.g. ordering of optically active dipole
molecules in material with a preferred plane of change in
polarizability. We note that in spite of the fact that the
interactions involved will be very weak (perhaps of the order of
$\chi^{(3)}$ effects) the proposed mechanism is worthy of further
investigation.
\end{document} |
\begin{document}
\setlength{\parindent}{1.5em}
\begin{center}
\Large
\textsc{Global Igusa Zeta Functions and $K$-Equivalence}
\vspace*{0.4cm}
\large
Shuang-Yen Lee
\vspace*{0.9cm}
\end{center}
\pagestyle{plain}
\thispagestyle{plain}
\setcounter{section}{-1}
\begin{abstract}
For smooth projective varieties $\mathfrak{X}$ and $\mathfrak{X}'$ over the ring of integers of a $p$-adic field $\textbf{K}$, a positive integer $r$, and a positive number $s \neq 1/r$, we show that when the $r$-canonical maps map both general fibers $X = \mathfrak{X}_{\textbf{K}}$ and $X' = \mathfrak{X}_{\textbf{K}}'$ birationally to their images, any isometry between $H^0(X, rK_X)$ and $H^0(X', rK_{X'})$ with respect to the $s$-norm $\|{-}\|_s$ induces a $K$-equivalence on the $\textbf{K}$-points between $X$ and $X'$.
\end{abstract}
\section{Introduction}
Let $\textbf{K}$ be a fixed $p$-adic field, i.e., a finite extension of $\textbf{Q}_p$, and $\mathcal{O}$ its ring of integers. Let $\mathfrak{m}$ be the maximal ideal of $\mathcal{O}$, $\pi$ a uniformizer of $\mathcal{O}$, and $\textbf{F}_{\!q} = \mathcal{O}/\mathfrak{m}$ the residue field of $\mathcal{O}$. For an $n$-dimensional projective variety $\mathfrak{X}$ over $\mathcal{O}$, there are two parts of $\mathfrak{X}$ --- the generic fiber $X = \mathfrak{X} \times_{\operatorname{Spec} \mathcal{O}} \operatorname{Spec} \textbf{K}$ and the special fiber $X_0 = \mathfrak{X} \times_{\operatorname{Spec} \mathcal{O}} \operatorname{Spec} \textbf{F}_{\!q}$. We assume that $X$ is smooth over $\textbf{K}$ in what follows.
Throughout this paper, we will use fraktur font to denote varieties over $\mathcal{O}$ (e.g. $\mathfrak{X}$, $\mathfrak{X}'$, $\mathfrak{Y}$), normal font to denote their general fibers over $\textbf{K}$ (e.g. $X$, $X'$, $Y$), and normal font with subscript $0$ to denote their special fibers over $\textbf{F}_{\!q}$ (e.g. $X_0$, $X_0'$, $Y_0$).
If we view $X(\textbf{K})$ as a $\textbf{K}$-analytic $n$-dimensional manifold, then $X(\textbf{K})$ is bianalytic to a finite disjoint union of polydiscs of the form $\pi^k\mathcal{O}^n$ (see \cite[Section~7.5]{MR1743467}). For a positive integer $r$ and a pluricanonical $r$-form $\alpha \in H^0(X, rK_{X}) = H^0(\mathfrak{X}, rK_{\mathfrak{X}})(\textbf{K})$, we can write it locally as $a(u)\,(du)^{\otimes r}$ on a local chart $\Delta\cong\pi^k\mathcal{O}^n$ with coordinate $u = (u_1, \ldots, u_n)$. The $p$-adic norm
\[|\alpha|^{1/r} \colonequals |a(u)|^{1/r}\,d\mu_\mathcal{O}\]
gives us a measure on $\Delta$, where $\mu_\mathcal{O}$ is the Haar measure on the locally compact abelian group $\textbf{K}^n$ normalized so that $\mu_\mathcal{O}(\mathcal{O}^n) = 1$. The measures glue into a measure $|\alpha|^{1/r}$ on $X(\textbf{K})$ by the change of variables formula (see \cite[Section~7.4]{MR1743467}).
If we assume furthermore that $\mathfrak{X}$ is smooth over $\mathcal{O}$, then the smoothness gives us a good reduction map
\[h_1\colon X(\textbf{K})\longrightarrow X_0(\textbf{F}_{\!q}). \]
By Hensel's lemma, each fiber $h_1^{-1}(\overline{x})$ of $h_1$ is $\textbf{K}$-bianalytic to $\pi \mathcal{O}^n$ (canonically up to a linear transformation, see \cite[Section~10.1]{MR1917232}). Using the canonical measure on $\pi\mathcal{O}^n$, we get a measure on each $h_1^{-1}(\overline{x})$, and these glue into a measure $\mu_{\mathfrak{X}}$ on $X(\textbf{K})$. Note that this measure is independent of the choice of the bianalytic map $h_1^{-1}(\overline{x}) \xrightarrow{\sim} \pi \mathcal{O}^n$. The measure thus constructed is called the \textit{canonical measure} on $X(\textbf{K})$.
Recall that the Igusa zeta function associated to a function $f$ is defined by
\[s \xmapsto{\quad} Z(s, f) = \int_{\mathcal{O}^n} |f|^s\,du. \]
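For orientation, we recall the classical computation in the simplest case $f(x) = x$ in one variable: the set $\{x \in \mathcal{O} \mid v(x) = k\}$ has Haar measure $q^{-k}(1 - q^{-1})$, so
\[Z(s, x) = \int_{\mathcal{O}} |x|^s\,dx = \sum_{k = 0}^{\infty} q^{-ks}\cdot q^{-k}(1 - q^{-1}) = \frac{1 - q^{-1}}{1 - q^{-1-s}}, \]
a rational function of $q^{-s}$.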
For a pluricanonical $r$-form $\alpha$ on $X$, we define the \textit{global Igusa zeta function} associated to $\alpha$ to be
\[\hphantom{\text{$s\in \textbf{C}$, $\operatorname{Re} s \ge 0$,}}s \xmapsto{\quad} \|\alpha\|_s \colonequals \int_{X(\textbf{K})} |\alpha|^s\, d\mu_{\mathfrak{X}}^{1 - rs}, \tag*{$s\in \textbf{C}$, $\operatorname{Re} s \ge 0$,}\]
where $d\mu_{\mathfrak{X}}$ is the canonical measure. The ``global'' here means that these zeta functions are defined over projective varieties, while the original Igusa zeta functions are defined over the affine space $\mathcal{O}^n$.
For each fixed $s \in \textbf{R}_{>0}$, $\|{-}\|_s$ defines a norm on the space $H^0(X, rK_{X})$.
When $s = 1/r$, the exponent $1-rs$ of $d\mu_{\mathfrak{X}}$ becomes $0$, so the norm $\|{-}\|_{1/r}$ is also defined for a smooth projective variety $X$ over $\textbf{K}$. For a birational map $f\colon X \dashrightarrow X'$ between smooth projective varieties over $\textbf{K}$, the exceptional varieties of this map are of codimension at least $2$. Then the pullback
\[f^*\colon \bigl(H^0(X', rK_{X'}), \|{-}\|_{1/r}\bigr) \longrightarrow \bigl(H^0(X, rK_{X}), \|{-}\|_{1/r}\bigr)\]
is an isometry by the change of variables formula. Conversely, let $V$ and $V'$ denote the spaces $H^0(X, rK_{X})$ and $H^0(X', rK_{X'})$, respectively. Then we will show that $X$ is birational to $X'$ when the following condition holds:
\begin{itemize}
\item[(\hypertarget{assumption}{$\spadesuit$})] The maps
\[\Phi_{|V|}\colon X\dashrightarrow \textbf{P}(V)^\vee\quad\text{and}\quad \Phi_{|V'|}\colon X' \dashrightarrow \textbf{P}(V')^\vee\]
map $X$ and $X'$ birationally to their images and both $X(\textbf{K})$ and $X'(\textbf{K})$ are of positive measure.
\end{itemize}
In this case, the condition implies that both $X$ and $X'$ are of general type. When $\mathfrak{X}$ is smooth over $\mathcal{O}$, the assumption on $X(\textbf{K})$ being of positive measure is equivalent to $X_0(\textbf{F}_{\!q})\neq \varnothing$, and a sufficient condition is obtained by using the Weil conjectures (cf. Remark~\ref{rmkWeilbound}).
More precisely, we have (cf. Theorem~\ref{thrbir}):
\begin{thr}\label{thrbir0}
Let $X$ and $X'$ be smooth projective varieties over $\textbf{K}$ of dimension $n$, $r$ a positive integer. Suppose there is an isometry
\[T\colon \bigl(H^0(X', rK_{X'}), \|{-}\|_{1/r}\bigr) \longrightarrow \bigl(H^0(X, rK_{X}), \|{-}\|_{1/r}\bigr). \]
Then the images of the $\textbf{K}$-points of the $r$-canonical maps
\[\Phi_{|rK_{X}|}\colon X\dashrightarrow \textbf{P}\bigl(H^0(X, rK_{X})\bigr)^\vee\quad\text{and}\quad \Phi_{|rK_{X'}|}\colon X' \dashrightarrow \textbf{P}\bigl(H^0(X', rK_{X'})\bigr)^\vee\]
share a Zariski open dense subset under the identification
\[\textbf{P}(T)^\vee\colon \textbf{P}\bigl(H^0(X, rK_{X})\bigr)^\vee \xrightarrow{\ \sim\ }\textbf{P}\bigl(H^0(X', rK_{X'})\bigr)^\vee. \]
\end{thr}
The original question for $n = 1$ and $r = 2$ over the complex numbers $\textbf{C}$ (i.e., when $X$ and $X'$ are Riemann surfaces and $\|{-}\|$ is the Teichm\"uller norm on the Teichm\"uller space) was first introduced by Royden in \cite{MR0288254}. Chi \cite{MR3557304} generalized Royden's result to the higher dimensional case for $r$ sufficiently large and sufficiently divisible. The higher dimensional case for general $r$ is given by Antonakoudis in \cite[5.2]{MR3251342}. The proof of Theorem~\ref{thrbir0} is the $p$-adic analogue of Antonakoudis' proof and of Rudin's result \cite[7.5.2]{MR2446682} used in his proof. Rudin's proof used Fourier transforms over $\textbf{C}$, while the $p$-adic analogue presented in this paper uses ``cut-off'' functions to avoid certain properties of Fourier transforms that hold over $\textbf{C}$ but might not hold over the $p$-adic field $\textbf{K}$.
For the case $s\neq 1/r$, a birational map $f\colon \mathfrak{X} \dashrightarrow \mathfrak{X}'$ (between smooth projective $\mathcal{O}$-varieties) in general does not induce an isometry
\[\bigl(H^0(X', rK_{X'}), \|{-}\|_s\bigr) \longrightarrow \bigl(H^0(X, rK_{X}), \|{-}\|_s\bigr). \]
It is an isometry when $\mathfrak{X}$ and $\mathfrak{X}'$ are \textit{$K$-equivalent on the $\textbf{K}$-points} (over $\mathcal{O}$) \cite{MR1678489}, i.e., there is a smooth $\mathfrak{Y}$ over $\mathcal{O}$ with birational morphisms $\phi\colon \mathfrak{Y} \to \mathfrak{X}$ and $\phi'\colon \mathfrak{Y} \to \mathfrak{X}'$ such that $\phi^*K_{\mathfrak{X}} = {\phi'}^*K_{\mathfrak{X}'}$ on $Y(\textbf{K})$. This is slightly different from the original definition of $K$-equivalence because $p$-adic integrals are only affected by the $\textbf{K}$-points. Again, the converse is true when $X$ and $X'$ satisfy (\hyperlink{assumption}{$\spadesuit$}), but only over $\textbf{K}$ (cf. Corollary~\ref{corgI}):
\begin{thr}
Let $\mathfrak{X}$ and $\mathfrak{X}'$ be smooth projective varieties over $\mathcal{O}$ of relative dimension $n$, $r$ a positive integer. Suppose the general fibers $X$ and $X'$ satisfy (\hyperlink{assumption}{$\spadesuit$}) with $V = H^0(X, rK_{X})$, $V' = H^0(X', rK_{X'})$ and there is an isometry
\[T\colon (V', \|{-}\|_{s}) \longrightarrow (V, \|{-}\|_{s}) \]
for some positive number $s\neq 1/r$. Then $X$ and $X'$ are $K$-equivalent on the $\textbf{K}$-points, i.e., there is a smooth $Y$ over $\textbf{K}$ with birational morphisms $\phi\colon Y \to X$ and $\phi'\colon Y \to X'$ such that $\phi^*K_{X} = {\phi'}^*K_{X'}$ on $Y(\textbf{K})$.
\end{thr}
The main idea of the proof is to determine the Jacobian function from the norms. More precisely, for a resolution $\phi\colon Y\to X$, we use the norms on $H^0(X, rK_{X})$ and $H^0(Y, rK_{Y})$ to determine $J_\phi = \phi^*d\mu_{\mathfrak{X}} / d\mu_{\mathfrak{Y}}$, based on the proof of Theorem~\ref{thrbir0}.
The same proof applies to the case where $\mathfrak{X}$ is replaced by a klt pair $(\mathfrak{X}, D)$ so that the canonical measure $d\mu_{\mathfrak{X}, D}$ (defined in Section~\ref{secgI}) is a finite measure, and the space $H^0(X, rK_{X})$ is replaced by $H^0(X, \lfloor r(K_{X} + D + L) \rfloor)$ for some $\textbf{R}$-divisor $L$ such that the map $\Phi_{|\lfloor r(K_{X} + D + L) \rfloor|}$ maps $X$ birationally onto its image.
\section{Global Igusa zeta functions}\label{secgI}
As in the introduction, for a $p$-adic field $(\textbf{K}, |{-}|)$, let
\begin{itemize}
\item $\mathcal{O} = \{x\in \textbf{K}\mid |x| \le 1\}$ be the ring of integers,
\item $\mathfrak{m} = \{x\in \textbf{K}\mid |x| < 1\} = (\pi)$ the maximal ideal of $\mathcal{O}$,
\item $v\colon\textbf{K} \to \textbf{Z}\cup\{\infty\}$ the valuation on $\textbf{K}$, and
\item $\textbf{F}_{\!q} = \faktor{\mathcal{O}}{\mathfrak{m}}$ the residue field.
\end{itemize}
Here we normalize the norm so that $|u| = q^{-v(u)}$ for all $u\in \textbf{K}^\times$.
Fix a positive integer $r$. Let $\mathfrak{X}$ be an $n$-dimensional projective variety over $\mathcal{O}$ with the following assumptions: its general fiber $X$ is smooth over $\textbf{K}$, and $X(\textbf{K})$ (as a $\textbf{K}$-analytic manifold) is of positive measure, i.e., contains an $n$-dimensional polydisc $\mathcal{O}^n$. It follows from the valuative criterion for properness that $\mathfrak{X}(\mathcal{O}) = \mathfrak{X}(\textbf{K}) = X(\textbf{K})$.
We have defined the norm $\|{-}\|_{1/r}$ on the space of pluricanonical $r$-forms $H^0(X, rK_{X})$ via the $p$-adic integral:
\[\|\alpha\|_{1/r} = \int_{X(\textbf{K})} |\alpha|^{1/r}. \]
The assumption that $X(\textbf{K})$ is of positive measure ensures that the norm is non-trivial. Another reason for this assumption is that we want $X(\textbf{K})$ to be Zariski dense in $X$. Indeed, $X(\textbf{K})$ contains a $\textbf{K}$-analytic open subset that is bianalytic to an $n$-dimensional polydisc $\Delta$, with the result that the dimension of the Zariski closure of $X(\textbf{K})$ is at least $n$.
In order to resolve the non-general type case, we shall extend the definition of the norm to a Kawamata log-terminal (klt for short) pair $(X, D)$:
\[\|\alpha\|_{1/r, D} \colonequals \int_{X(\textbf{K})} |\alpha|^{1/r},\quad \alpha \in H^0(X, \lfloor r(K_{X} + D)\rfloor ). \]
When $\mathfrak{X}$ is smooth over $\mathcal{O}$, we have defined for each positive number $s$ the $s$-norm
\[\|\alpha\|_s = \int_{X(\textbf{K})} |\alpha|^s \,d\mu_{\mathfrak{X}}^{1-rs}, \]
where $d\mu_{\mathfrak{X}}$ is the canonical measure on $X(\textbf{K})$ defined by the mod-$\mathfrak{m}$ reduction
\[h_1\colon \mathfrak{X}(\mathcal{O}) \longrightarrow X_0(\textbf{F}_{\!q}), \]
where $X_0$ is the special fiber of $\mathfrak{X}$ over $\textbf{F}_{\!q}$.
For a klt log pair $(\mathfrak{X}, D)$, we can consider the measure $d\mu_{\mathfrak{X}, D}$, which is locally the $p$-adic norm $|\omega|^{1/r_D}$ of a generator $\omega\in H^0(\mathfrak{U}, r_D(K_{\mathfrak{X}} + D))(\mathcal{O})$, where $r_D$ is a positive integer such that $r_D(K_{\mathfrak{X}g} + D)$ is Cartier and $\mathfrak{U}\subseteq \mathfrak{X}$ is an open subset on which $r_D(K_{\mathfrak{X}} + D)$ is free. Note that this $p$-adic norm does not depend on the choice of $\omega$, $r_D$ and $\mathfrak{U}$, and hence glues into $d\mu_{\mathfrak{X}, D}$ on $\mathfrak{X}g(\textbf{K})$.
For any $\textbf{R}$-divisor $L$ and any positive integer $r$, we can define $\|{-}\|_{s, D}$ similarly on the space $H^0(\mathfrak{X}g, \left\lfloor r(K_{\mathfrak{X}g} + D + L)\right\rfloor)$ by
\[\|\alpha\|_{s, D} = \int_{\mathfrak{X}g(\textbf{K})} |\alpha|^s\, d\mu_{\mathfrak{X}, D}^{1 - rs}\]
when $\operatorname{Re} s \ge 0$ is small enough.
Indeed, by replacing $L$ by $L'$, which is determined by the equation
\[r(K_{\mathfrak{X}g} + D + L') = \lfloor r(K_{\mathfrak{X}g} + D + L) \rfloor, \]
we may assume that $r(K_{\mathfrak{X}g} + D + L)$ is Cartier. Let $F$ be the fixed locus divisor of the linear system $|r(K_{\mathfrak{X}g} + D + L)|$ and consider a log-resolution $\phi\colon \mathfrak{Y}g \to \mathfrak{X}g$ such that
\[\phi^*D = \sum_{E\in \mathcal{E}} d_E E,\quad \phi^*L = \sum_{E\in \mathcal{E}} \ell_E E,\quad \phi^*F = \sum_{E\in\mathcal{E}} f_E E,\quad K_{\mathfrak{Y}g} = \phi^*K_{\mathfrak{X}g} + \sum_{E\in \mathcal{E}} m_E E \]
with $\sum E$ a smooth normal crossing divisor. Then $\|{-}\|_{s, D}$ is defined on $H^0(\mathfrak{X}g, r(K_{\mathfrak{X}g} + D + L))$ when
\[\operatorname{Re}\bigl(s \cdot (f_E + r(m_E - d_E - \ell_E)) + (1 - rs) \cdot (m_E - d_E)\bigr) > -1, \quad \forall E\in \mathcal{E}, \]
which is equivalent to
\[\operatorname{Re} s < s_r(\mathfrak{X}, D, L) \colonequals \inf_{E \in \mathcal{E}^+} \left(\frac{m_E - d_E + 1}{r\ell_E - f_E}\right), \]
where $\mathcal{E}^+ = \{E\in \mathcal{E}\mid r\ell_E > f_E\}$. Note that the klt assumption on the log pair $(\mathfrak{X}, D)$ is used here in order that $s_r(X, D, L) > 0$.
Since $s_r(X, D, L)$ is defined to be the largest number such that the integral of $|\alpha|^s\,d\mu_{\mathfrak{X}, D}^{1-rs}$ is finite for all $0 \le \operatorname{Re} s < s_r(X, D, L)$ and $\alpha \in H^0(\mathfrak{X}g, r(K_{\mathfrak{X}g} + D + L))$, it is independent of the choice of the resolution $\phi$.
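As a sanity check on this threshold, consider a minimal one-dimensional example (the specific choices here are ours, not from the text): take $\mathfrak{X}g = \textbf{P}^1$, $D = d\cdot\{0\}$ with $0 \le d < 1$ (so that the pair is klt), and $L = \ell\cdot\{0\}$ with $r\ell$ a positive integer large enough that the linear system $|r(K_{\mathfrak{X}g} + D + L)|$ is nonempty with no fixed part (so $f_E = 0$). The identity map is already a log resolution, the only divisor with $r\ell_E > f_E$ is $E = \{0\}$, and $m_E = 0$, $d_E = d$. The formula then gives
\[s_r\bigl(\textbf{P}^1, d\{0\}, \ell\{0\}\bigr) = \frac{m_E - d_E + 1}{r\ell_E - f_E} = \frac{1 - d}{r\ell}, \]
which is positive precisely because $d < 1$, i.e., because the pair is klt.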
For general $L$ and a linear subspace $V$ of $H^0(\mathfrak{X}g, \left\lfloor r(K_{\mathfrak{X}g} + D + L)\right\rfloor)$, we define $s_r(X, D, L)$ to be $s_r(X, D, L')$, where $L'$ is determined by the equation
\[r(K_{\mathfrak{X}g} + D + L') = \lfloor r(K_{\mathfrak{X}g} + D + L) \rfloor; \]
and we define $s_r(X, D, V)$ to be the largest number such that the integral of $|\alpha|^s\,d\mu_{\mathfrak{X}, D}^{1-rs}$ is finite for all $0 \le \operatorname{Re} s < s_r(X, D, V)$ and $\alpha \in V$; clearly $s_r(X, D, V) \ge s_r(X, D, L)$.
In this (smooth) case, the assumption on $\mathfrak{X}g(\textbf{K})$ being of positive measure is equivalent to $\mathfrak{X}s(\textbf{F}_{\!q})\neq \varnothing$.
\begin{rmk}\label{rmkWeilbound}
Using the Weil conjectures \cite{MR340258}, we see that
\[\#\mathfrak{X}s(\textbf{F}_{\!q}) = \sum_{i = 0}^{2n} \sum_{j = 1}^{h^i} (-1)^i \alpha_{ij} \ge q^{n} + 1 - \sum_{i = 1}^{2n-1} h^i q^{i/2}, \]
for some algebraic integers $\alpha_{ij}$ with $|\alpha_{ij}| = q^{i/2}$, where $h^i = h^i(\mathfrak{X}g)$ are the Betti numbers. Hence, the condition $\# \mathfrak{X}s(\textbf{F}_{\!q}) > 0$ can be ensured by suitable bounds on $q$ and the $h^i$. For example,
\[q \ge \left(\sum_{i=1}^{2n - 1}h^i\right)^2 \]
will do. We denote by $q_0(\mathfrak{X})$ the smallest positive integer such that
\[q^n + 1 \ge \sum_{i = 1}^{2n-1} h^i q^{i/2},\quad \forall q \ge q_0(\mathfrak{X}). \]
In particular, when $\mathfrak{X}$ is a curve (over $\mathcal{O}$) of genus $g \ge 1$, $q_0(\mathfrak{X}) = 4g^2 - 2$.
\end{rmk}
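The curve case of the remark can be verified with a short script (our own numerical sanity check, using $h^1 = 2g$ for a genus-$g$ curve, so that the condition reads $q + 1 \ge 2g\sqrt{q}$, or equivalently $(q+1)^2 \ge 4g^2 q$):

```python
# Numerical check (ours) of the curve case of the remark: for a smooth
# projective curve of genus g over F_q, the Weil bound gives
# #X(F_q) >= q + 1 - 2g*sqrt(q), so #X(F_q) > 0 is guaranteed once
# q + 1 >= 2g*sqrt(q).  Squaring, this is (q + 1)^2 >= 4g^2 q, which lets
# us test it with exact integer arithmetic.

def curve_condition(q: int, g: int) -> bool:
    """q + 1 >= 2g*sqrt(q), tested without floating point."""
    return (q + 1) ** 2 >= 4 * g * g * q

def q0(g: int, search_limit: int = 10**4) -> int:
    """Smallest q0 such that the condition holds for every q >= q0."""
    # The condition is monotone past the larger root of
    # q^2 - (4g^2 - 2)q + 1, so scan downward from a safe bound.
    q = search_limit
    while q > 1 and curve_condition(q - 1, g):
        q -= 1
    return q

for g in range(2, 7):
    assert q0(g) == 4 * g * g - 2          # matches the value in the text
    assert not curve_condition(4 * g * g - 3, g)
# genus 1: the condition (q+1)^2 >= 4q holds for every q >= 1
assert all(curve_condition(q, 1) for q in range(1, 100))
print("q0(g) = 4g^2 - 2 verified for g = 2..6")
```

Note that the bound is tight for $g \ge 2$: the condition fails at $q = 4g^2 - 3$.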
For each $k\ge 1$, consider the mod-$\mathfrak{m}^k$ reductions
\[\begin{tikzcd}[column sep = 0pt, row sep = 0pt]
\mathfrak{X}(\mathcal{O}) \ar[rr, "h_k"] &~~& \mathfrak{X}(\mathcal{O}/\mathfrak{m}^k)\\
x \ar[rr, mapsto] &~~& \overline{x}^{(k)}
\end{tikzcd}\quad\text{and}\quad\begin{tikzcd}[column sep = 0pt, row sep = 0pt]
H^0(\mathfrak{X}, rK_{\mathfrak{X}})(\mathcal{O}) \ar[rr] &~~& H^0(\mathfrak{X}, rK_{\mathfrak{X}})(\mathcal{O}/\mathfrak{m}^k)\\
\alpha \ar[rr, mapsto] &~~& \overline{\alpha}^{(k)},
\end{tikzcd}\]
where $H^0(\mathfrak{X}, rK_{\mathfrak{X}})(R)$ denotes the $R$-points of $H^0(\mathfrak{X}, rK_{\mathfrak{X}})$ for $R = \mathcal{O}$, $\mathcal{O}/\mathfrak{m}^k$. Applying the method in \cite[Section~8.2]{MR1743467}, we may calculate the norm from the numbers of zeros of $\alpha \in H^0(\mathfrak{X}, rK_{\mathfrak{X}})(\mathcal{O})$ on each $\mathfrak{X}(\mathcal{O}/\mathfrak{m}^k)$:
\begin{pp}\label{ppcal}
For a nonzero element $\alpha \in H^0(\mathfrak{X}, rK_{\mathfrak{X}})(\mathcal{O})$, we have
\[\|\alpha\|_s = \frac{\# \mathfrak{X}s(\textbf{F}_{\!q})}{q^n} - (q^s - 1) \sum_{k=1}^\infty \frac{N_k}{q^{k(n + s)}}, \]
where
\[N_k = \#\left\{\overline{x}^{(k)}\in \mathfrak{X}(\mathcal{O}/\mathfrak{m}^k)\,\middle|\, \overline{\alpha}^{(k)}\bigl(\overline{x}^{(k)}\bigr) = 0\right\} \]
is the cardinality of the zero set of $\overline{\alpha}^{(k)}\in H^0(\mathfrak{X}, rK_{\mathfrak{X}})(\mathcal{O}/\mathfrak{m}^k)$ on $\mathfrak{X}(\mathcal{O}/\mathfrak{m}^k)$.
\end{pp}
\begin{proof}
Since $\mathfrak{X}$ is smooth over $\mathcal{O}$, each fiber of $h_1$ is $\textbf{K}$-bianalytic to $\pi \mathcal{O}^n$ with measure preserved. For $\overline{x}^{(1)}\in \mathfrak{X}s(\textbf{F}_{\!q})$, let $u = (u_1, \ldots, u_n)$ be a local coordinate of $h_1^{-1}(\overline{x}^{(1)})$. Then
\[\int_{h_1^{-1}(\overline{x}^{(1)})} |\alpha|^s = \int_{\pi \mathcal{O}^n} |a(u)|^s\, du\]
for some analytic function $a(u) \in \mathcal{O}[[u]]$. Let
\[N_k\bigl(\overline{x}^{(1)}\bigr) = \#\left\{\overline{y}^{(k)} \in h_k(h_1^{-1}(\overline{x}^{(1)}))\,\left|\, \overline{\alpha}^{(k)}\bigl(\overline{y}^{(k)}\bigr) =0\right.\right\} \]
be the cardinality of the zero set of $\overline{\alpha}^{(k)}$ on the fiber of $\overline{x}^{(1)}$. Then it follows from
\[\mu_\mathcal{O}\left(|a|^{-1}\left([0, q^{-k}]\right)\right) = \mu_\mathcal{O}\left(\left\{u\bigm| \overline{a}^{(k)}\bigl(\overline{u}^{(k)}\bigr) = 0\right\}\right) = \frac{N_k\bigl(\overline{x}^{(1)}\bigr)}{q^{kn}}\]
that
\begin{align*}
\int_{\pi \mathcal{O}^n} |a(u)|^s\,du &= \sum_{k = 0}^\infty q^{-ks}\cdot \mu_\mathcal{O}(|a|^{-1}(q^{-k})) \\
&= \frac{1}{q^n} + \sum_{k = 1}^\infty (q^{-ks} - q^{-(k-1)s})\cdot \mu_\mathcal{O}(|a|^{-1}([0, q^{-k}])) \\
&= \frac{1}{q^n} - (q^s - 1)\sum_{k = 1}^\infty q^{-ks}\cdot \frac{N_k\bigl(\overline{x}^{(1)}\bigr)}{q^{kn}}.
\end{align*}
Summing the above equation over $\overline{x}^{(1)} \in \mathfrak{X}s(\textbf{F}_{\!q})$, we get
\begin{align*}
\|\alpha\|_s &= \sum_{\ \ \overline{x}^{(1)}}\left(\frac{1}{q^n} - (q^s - 1)\sum_{k = 1}^\infty \frac{N_k\bigl(\overline{x}^{(1)}\bigr)}{q^{k(n+s)}}\right) = \frac{\# \mathfrak{X}s(\textbf{F}_{\!q})}{q^n} - (q^s - 1) \sum_{k=1}^\infty \frac{N_k}{q^{k(n + s)}}. \qedhere
\end{align*}
\end{proof}
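The per-residue-disc identity underlying the proposition can be spot-checked numerically (a toy computation of ours): take $n = 1$, $q = p$, and $a(u) = u$ on $\pi\mathcal{O} \subset \textbf{Q}_p$, so that $N_k(\overline{x}^{(1)}) = 1$ for every $k$; the shell-by-shell value of $\int_{\pi\mathcal{O}}|u|^s\,du$ must then equal $1/q - (q^s - 1)\sum_{k\ge 1} q^{-k(1+s)}$.

```python
# Toy check (ours) of the counting formula in the local model from the
# proof: n = 1 and a(u) = u on the disc pi*O inside Q_p.  Here N_k = 1 for
# every k (one residue mod p^k with u = 0), and both sides are rapidly
# convergent series that we truncate at the same depth.

def shell_integral(q: float, s: float, terms: int = 200) -> float:
    """Direct evaluation: int_{pi*O} |u|^s du, summing over shells |u| = q^-k."""
    return sum((1 - 1 / q) * q ** (-k) * q ** (-k * s) for k in range(1, terms))

def counting_formula(q: float, s: float, terms: int = 200) -> float:
    """1/q - (q^s - 1) * sum_k N_k / q^{k(1+s)} with N_k = 1 for all k."""
    tail = sum(q ** (-k * (1 + s)) for k in range(1, terms))
    return 1 / q - (q ** s - 1) * tail

for q in (2, 3, 5, 7):
    for s in (0.5, 1.0, 2.0):
        assert abs(shell_integral(q, s) - counting_formula(q, s)) < 1e-12
# closed form at q = 3, s = 1: (1 - 1/3) * 3^{-2} / (1 - 3^{-2}) = 1/12
assert abs(shell_integral(3, 1.0) - 1 / 12) < 1e-12
print("local check of the point-counting formula passed")
```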
In particular, we have \cite[Theorem~2.2.5]{MR670072}
\[\|\alpha\|_0 = \int_{\mathfrak{X}g(\textbf{K})} \, d\mu_{\mathfrak{X}} = \frac{\# \mathfrak{X}s(\textbf{F}_{\!q})}{q^n}, \quad \|\alpha\|_\infty \colonequals \lim_{s \to \infty} \|\alpha\|_s = \frac{\# \mathfrak{X}s(\textbf{F}_{\!q}) - N_1}{q^n}. \]
We see that
\[\|\alpha\|_s = \frac{\# \mathfrak{X}s(\textbf{F}_{\!q})}{q^n} - (1 - t) \sum_{k = 1}^\infty \frac{N_{k}}{q^{kn}} t^{k - 1} \]
is a holomorphic function in $t = q^{-s}$ near $t = 0$. As (local) Igusa zeta functions are rational functions in $t$ \cite{MR546292}, a similar result also holds for global Igusa zeta functions:
\begin{pp}
The holomorphic function $t \mapsto \|\alpha\|_{s}$ is a rational function in $t = q^{-s}$ of the form
\[\frac{P_\alpha(t)}{\prod_E (q^{m_E+1} - t^{a_E})}, \]
where $P_\alpha$ is a polynomial with coefficients in $\textbf{Z}[q^{-1}]$ and $\{(a_E, m_E)\}_{E\in \mathcal{E}}$ are the
discrepancies associated to the divisor $(\alpha)$.
\end{pp}
In fact, it holds for the log pair case:
\begin{pp}
Let $L$ be an $\textbf{R}$-divisor on a smooth projective klt pair $(\mathfrak{X}, D)$. For a nonzero element $\alpha \in H^0(\mathfrak{X}g, \lfloor r(K_{\mathfrak{X}g} + D + L)\rfloor)$, the function $t \mapsto \|\alpha\|_{s, D}$ is a rational function in $t_D = q^{-s/r_D}$ of the form
\[\frac{P_\alpha(t_D)}{t_D^{e} \prod_E \bigl(q^{m_E - d_E +1} - t_D^{r_D(a_E - rd_E)}\bigr)}, \]
where $r_D$ is the least positive integer such that $r_D(K_X + D)$ is Cartier, $P_\alpha$ is a polynomial with coefficients in $\textbf{Z}[q^{-1}]$, $e$ is a nonnegative integer and $\{(a_E, d_E, m_E)\}_{E\in \mathcal{E}}$ are the discrepancies associated to the divisors $(\alpha)$ and $D$.
\end{pp}
For the original case ($D = 0$), we have $r_D = 1$; moreover, $\|\alpha\|_{s}$ is bounded as $t$ tends to $0$, so the exponent $e$ can be chosen to be $0$.
\begin{proof}
Consider a log resolution $\phi\colon \mathfrak{Y}g\to \mathfrak{X}g$ such that
\[\phi^*(\alpha) = \sum_{E} a_E E,\quad \phi^*D = \sum_E d_E E,\quad K_{\mathfrak{Y}g} = \phi^*K_{\mathfrak{X}g} + \sum_{E} m_E E \]
with $\sum E$ a normal crossing divisor.
It follows from the definition that
\[\|\alpha\|_{s, D} = \int_{\mathfrak{X}g(\textbf{K})} |\alpha|^{s}\, d\mu_{\mathfrak{X}, D}^{1 -rs} = \int_{\mathfrak{Y}g(\textbf{K})} |\phi^*\alpha|^s \cdot \phi^* d\mu_{\mathfrak{X}, D}^{1 - rs}. \]
Decompose $\mathfrak{Y}g(\textbf{K})$ into disjoint compact charts $Y_i$ such that in coordinate we have
\[\phi^*\alpha = a_i(u)\cdot u^{A_i + rM_i}\, (du)^{\otimes r}, \quad \phi^*d\mu_{\mathfrak{X}, D}(u) = m_i(u)\cdot |u|^{M_i - D_i}\, du, \]
where $a_i(u) \neq 0$ and $m_i(u) \neq 0$ for all $u \in Y_i$. After further decomposing $Y_i$, we may assume that each $Y_i$ is a polydisc and that $|a_i(u)| \equiv q^{-a}$ and $m_i(u)\equiv q^{-m}$ are constant on $Y_i$. Then
\[\int_{\mathfrak{Y}g(\textbf{K})} |\phi^*\alpha|^{s} \cdot \phi^*d\mu_{\mathfrak{X}, D}^{1 - rs} = \sum_{i} \int_{Y_i} q^{-as}|u|^{sA_i} \cdot q^{-m(1 - rs)}|u|^{M_i - (1 - rs)D_i}\, du. \]
Say $Y_i$ is the polydisc $\pi^{k} \mathcal{O}^n$. It is easy to calculate that
\begin{align*}
\int_{\pi^k \mathcal{O}^n} &q^{-as}|u|^{sA_i}\cdot q^{-m(1 - rs)} |u|^{M_i - (1 - rs)D_i}\, du \\
&= q^{-m} t_D^{r_D(a - rm)} \prod_j \int_{\pi^k \mathcal{O}^n} |u_j|^{sa_{i,j} + m_{i,j} - (1-rs)d_{i,j}} du \\
&= q^{-m} t_D^{r_D(a - rm)} \prod_{j} \frac{(1 - q^{-1})q^{-k(sa_{i,j} + m_{i,j} - (1-rs)d_{i,j} + 1)}}{1 - q^{-(sa_{i,j} + m_{i,j} - (1-rs)d_{i,j} + 1)}} \\
&= \frac{(1 - q^{-1})^n}{q^{m + (k-1)(\sum (m_{i,j} - d_{i,j} + 1))}} \cdot \frac{t_D^{r_D(a - rm + k\sum (a_{i,j} - rd_{i,j}))}}{\prod_{j} (q^{m_{i,j} - d_{i,j} + 1} - t^{r_D(a_{i,j} - rd_{i,j})})}.
\end{align*}
Summing over $i$, we see that $\|\alpha\|_{s, D}$ is a rational function in $t_D$ of the form
\[\frac{P_\alpha(t_D)}{t_D^{e} \prod_E (q^{m_E - d_E + 1} - t_D^{r_D(a_E - rd_E)})}, \]
where $P_\alpha$ is a polynomial with coefficients in $\textbf{Z}[q^{-1}]$ and $e$ is a nonnegative integer (coming from the term $t_D^{r_D(a - rm + k\sum (a_{i,j} - rd_{i,j}))}$), as desired.
\end{proof}
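The one-variable integral used in the computation above also admits a throwaway numerical check (ours): summing over the shells $|u| = q^{-j}$, $j \ge k$, each of measure $(1-q^{-1})q^{-j}$, reproduces the closed form $(1-q^{-1})\,q^{-k(c+1)}/(1-q^{-(c+1)})$ for a generic exponent $c$ with $\operatorname{Re} c > -1$.

```python
# Throwaway check (ours) of the one-variable integral in the proof:
#   int_{pi^k O} |u|^c du = (1 - q^-1) q^{-k(c+1)} / (1 - q^{-(c+1)}),
# valid for Re(c) > -1.  The shell |u| = q^{-j} (j >= k) has measure
# (1 - q^-1) q^{-j}, so the integral is a geometric series.

def monomial_integral_shells(q: float, c: float, k: int, terms: int = 400) -> float:
    return sum((1 - 1 / q) * q ** (-j) * q ** (-j * c) for j in range(k, k + terms))

def monomial_integral_closed(q: float, c: float, k: int) -> float:
    return (1 - 1 / q) * q ** (-k * (c + 1)) / (1 - q ** (-(c + 1)))

for q in (2, 3, 5):
    for c in (-0.5, 0.0, 1.0, 2.5):
        for k in (0, 1, 3):
            got = monomial_integral_shells(q, c, k)
            want = monomial_integral_closed(q, c, k)
            assert abs(got - want) < 1e-9
print("polydisc monomial integral verified")
```

For $c = 0$ and $k = 0$ the closed form returns $1$, the total measure of $\mathcal{O}$, as it should.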
This proposition allows us to view $t_D = q^{-s/r_D}$ as a formal variable, and view the total norm $\|\alpha\|_{D} \colonequals (\|\alpha\|_{s, D})_{s}$ of a form $\alpha$ as an element in the function field $\textbf{Q}(t_D)$. Hence, there is a $\textbf{Q}(t_D)$-valued norm
\[\|{-}\|_{D}\colon \varinjlim_{L} H^0(\mathfrak{X}g, \left\lfloor r(K_{\mathfrak{X}g} + D + L) \right\rfloor) \longrightarrow \textbf{Q}(t_D). \]
As a consequence, when $t_{D,0} = q^{-s_0/r_D}$ is a transcendental number for some $s_0$, we can determine the rational function $\|\alpha\|_{D}$ by the value $\|\alpha\|_{s_0, D}$. So the norms $\|{-}\|_{s, D}$ on the space $H^0(\mathfrak{X}g, \lfloor r(K_{\mathfrak{X}g} + D + L) \rfloor)$, $0 \le \operatorname{Re} s < s_r(\mathfrak{X}, D, L)$, are in fact determined by $\|{-}\|_{s_0, D}$.
\section{Characterizing birational models}
Suppose that $f\colon \mathfrak{X}g \dashrightarrow \mathfrak{X}g'$ is a birational map between smooth projective varieties over $\textbf{K}$. Then the properness of $\mathfrak{X}g'$, together with the smoothness of $\mathfrak{X}g$, shows that there is an open set $U \subset \mathfrak{X}g$ on which $f|_U\colon U\to \mathfrak{X}g'$ is a birational morphism with $\operatorname{codim}_{\mathfrak{X}g}(\mathfrak{X}g \setminus U) \ge 2$. It follows that
\[f^*\colon \bigl(H^0(\mathfrak{X}g', rK_{\mathfrak{X}g'}), \|{-}\|_{1/r}\bigr) \longrightarrow \bigl(H^0(U, rK_{\mathfrak{X}g}|_U),\|{-}\|_{1/r}\bigr) \cong \bigl(H^0(\mathfrak{X}g, rK_{\mathfrak{X}g}),\|{-}\|_{1/r}\bigr)\]
is an isometry by change of variables. This means that the normed space
\[\bigl(H^0(\mathfrak{X}g, rK_{\mathfrak{X}g}), \|{-}\|_{1/r}\bigr)\]
depends only on the birational class of $\mathfrak{X}g$.
For simplicity, let us denote by $V_{r, \mathfrak{X}g}$ the space $H^0(\mathfrak{X}g, rK_{\mathfrak{X}g})$ and by $\Phi_{r, \mathfrak{X}g}$ the $r$-canonical map
\[\Phi_{|rK_{\mathfrak{X}g}|}\colon \mathfrak{X}g \dashrightarrow \textbf{P}\bigl(H^0(\mathfrak{X}g, rK_{\mathfrak{X}g})\bigr)^\vee. \]
For a klt log pair $(X, D)$ and an $\textbf{R}$-divisor $L$ on $X$, denote by $V_{r, (X, D), L}$ the space
\[H^0(\mathfrak{X}g, \left\lfloor r(K_{\mathfrak{X}g} + D + L)\right\rfloor)\]
and by $\Phi_{r, (X, D), L}$ the map
\[\Phi_{|\left\lfloor r(K_{\mathfrak{X}g} + D + L)\right\rfloor|}\colon \mathfrak{X}g \dashrightarrow \textbf{P}(V_{r, (X, D), L})^\vee. \]
For a linear subspace $V$ of $V_{r, (X, D), L}$, denote by $\Phi_V$ the map determined by the linear system $|V|$.
Using the $p$-adic analogue of the trick in \cite[Theorem~5.2]{MR3251342}, we can prove that:
\begin{thr}\label{thrbir}
Let $\mathfrak{X}g$ and $\mathfrak{X}g'$ be smooth projective varieties over $\textbf{K}$ of dimension $n$ and of positive measure, $r$ a positive integer. Suppose there is a ($\textbf{K}$-linear) isometry
\[T\colon \bigl(V_{r, \mathfrak{X}g'}, \|{-}\|_{1/r}\bigr) \longrightarrow \bigl(V_{r, \mathfrak{X}g}, \|{-}\|_{1/r}\bigr). \]
Then the images of the $r$-canonical maps $\Phi_{r, \mathfrak{X}g}$ and $\Phi_{r, \mathfrak{X}g'}$ are birational to each other.
When $\mathfrak{X}$ and $\mathfrak{X}'$ are both smooth over $\mathcal{O}$, the statement also holds when the norm $\|{-}\|_{1/r}$ is replaced by $\|{-}\|_s$ for any positive number $s$.
\end{thr}
\begin{proof}
Let $\alpha_0$, $\alpha_1$, $\ldots$ , $\alpha_N$ be a basis of $V_{r, \mathfrak{X}g}$, and let $\alpha_i' = T^{-1}(\alpha_i)$, which form a basis of $V_{r, \mathfrak{X}g'}$. Then the $r$-canonical maps $\Phi_{r,\mathfrak{X}g}$ and $\Phi_{r, \mathfrak{X}g'}$ can be realized as
\begin{align*}
[\alpha_0: \cdots : \alpha_N]\colon \mathfrak{X}g \dashrightarrow \textbf{P}^N\quad\text{and}\quad [\alpha_0': \cdots : \alpha_N']\colon \mathfrak{X}g' \dashrightarrow \textbf{P}^N,
\end{align*}
respectively.
In what follows, take $s = 1/r$ when $\mathfrak{X}g$ and $\mathfrak{X}g'$ are merely smooth projective varieties over $\textbf{K}$ (rather than over $\mathcal{O}$).
Let $d\nu = |\alpha_0|^{s}\,d\mu_\mathfrak{X}^{1-rs}$ and $g_i = \frac{\alpha_i}{\alpha_0}$ on $\mathfrak{X}g^\circ = \mathfrak{X}g\setminus\{\alpha_0 = 0\}$, $d\nu' = |\alpha_0'|^{s}\,d\mu_{\mathfrak{X}'}^{1-rs}$ and $g_i' = \frac{\alpha_i'}{\alpha_0'}$ on ${\mathfrak{X}g'}^\circ = \mathfrak{X}g'\setminus\{\alpha_0' = 0\}$. Using the isometry $T$, we get
\begin{align}\label{eqmeas}
\int_{{\mathfrak{X}g}^\circ(\textbf{K})} \Bigl|1 + \sum_{i}\lambda_i g_i\Bigr|^s\, d\nu &= \Bigl\|\alpha_0 + \sum_{i}\lambda_i \alpha_i\Bigr\|_s \notag\\
&= \Bigl\|\alpha_0' + \sum_{i}\lambda_i \alpha_i'\Bigr\|_s = \int_{{X'}^\circ(\textbf{K})} \Bigl|1 + \sum_{i}\lambda_i g_i'\Bigr|^s\, d\nu'
\end{align}
for all $(\lambda_1, \ldots, \lambda_N)\in \textbf{K}^N$.
Now, we need a $p$-adic analogue of Rudin's result \cite[7.5.2]{MR2446682}:
\noindent{\bf Claim.} We have
\begin{equation}\label{eqRudin}
\int_{X^\circ} h\circ (g_1, \ldots, g_N)\,d\nu = \int_{{X'}^\circ} h\circ (g_1', \ldots, g_N')\,d\nu'
\end{equation}
for all nonnegative Borel functions $h\colon \textbf{K}^N \to \textbf{R}$.
\begin{proof}[Proof of Claim]\renewcommand{\qedsymbol}{$\square$}
Let $W$ be the set of all Borel functions $h$ for which (\ref{eqRudin}) holds.
Note that $W$ is closed under translations and dilations of the variable, as well as under linear combinations and monotone limits; since indicator functions of polydiscs generate all nonnegative Borel functions under these operations, it suffices to prove that $\boldsymbol{1}_{\mathcal{O}^N}\in W$.
Let $G = (g_1, \ldots, g_N)\colon {X}^\circ \to \textbf{A}^{\!N}$ and $G' = (g_1', \ldots, g_N')\colon {X'}^\circ \to \textbf{A}^{\!N}$. Define
\[B(s, y) = B(s, y_1, \ldots, y_N) = \int_{\mathcal{O}^N} \Bigl|1 + \sum_{i} \lambda_i y_i\Bigr|^{s}\,d\lambda, \]
where $y = (y_1, \ldots, y_N) \in \textbf{K}^N$ and $d\lambda = d\lambda_1\cdots d\lambda_N$. By (\ref{eqmeas}) and Fubini's theorem,
\begin{align*}
\int_{X^\circ(\textbf{K})} B\circ G\,d\nu &= \int_{\mathcal{O}^N} \int_{{X}^\circ(\textbf{K})} \Bigl|1 + \sum_{i}\lambda_i g_i\Bigr|^{s}\, d\nu\, d\lambda \\
&= \int_{\mathcal{O}^N} \int_{{X'}^\circ(\textbf{K})} \Bigl|1 + \sum_{i}\lambda_i g_i'\Bigr|^{s}\, d\nu'\, d\lambda = \int_{{X'}^\circ(\textbf{K})} B\circ G'\,d\nu'.
\end{align*}
By change of variables, we see that
\[B(s, y) = \begin{cases}
1, & \text{ if }\max |y_i| < 1, \\[-6pt]
b(s)\cdot \max |y_i|^{s}, & \text{ if }\max |y_i| \ge 1,
\end{cases}\]
where $b(s) = \int_{\mathcal{O}} |y|^{s}\,dy = \frac{q - 1}{q - t} \in (0, 1)$ and $t = q^{-s}$. So, if we define the ``cut-off'' function
\[B_0(s, y) = t^{-1} B(s, \pi y) - B(s, y) = \begin{cases}
t^{-1} - 1, & \text{ if }\max |y_i| < 1, \\[-6pt]
t^{-1} - b(s), & \text{ if }\max |y_i| = 1, \\[-6pt]
0, & \text{ if }\max |y_i| > 1,
\end{cases} \]
then $B_0(s, -)$ also satisfies (\ref{eqRudin}), and hence lies in $W$. Since $B_0(s, -)$ is a Schwartz–Bruhat function that is supported in $\mathcal{O}^N$ with $\int_{\mathcal{O}^N} B_0(s, y)\,dy\neq 0$, we see that
\begin{equation}\label{eqcharON}
\boldsymbol{1}_{\mathcal{O}^N}(y) = \left(\frac{1}{\mu_\mathcal{O}(\pi\mathcal{O}^N)}\int_{\mathcal{O}^N} B_0(s, u)\,du\right)^{-1}\sum_{\overline{z}\in \mathcal{O}^N/\pi\mathcal{O}^N} B_0(s, y + z)
\end{equation}
also lies in $W$, as desired. \qedhere
\end{proof}
Taking $h = \boldsymbol{1}_{G(X^\circ)}$ in the claim, we see that
\[\|\alpha_0\|_{s} = \int_{G^{-1}(G(X^\circ))}|\alpha_0|^{s}\,d\mu_X^{1 - rs} = \int_{{G'}^{-1}(G(X^\circ))}|\alpha_0'|^{s}\, d\mu_{X'}^{1 - rs} \le \|\alpha_0'\|_{s} = \|\alpha_0\|_{s}. \]
So the inequality above has to be an equality. Let $U$ be the intersection of $G(X^\circ)$ and $G'({X'}^\circ)$. The equality implies that the set of $\textbf{K}$-points $U(\textbf{K})$ of $U$ has full measure in $G'({X'}^\circ)(\textbf{K})$ with respect to $G'_*\nu'$.
Let $\bar{X}$ (resp.~$\bar{X}'$) be the image of $\Phi_{r, X}$ (resp.~$\Phi_{r, X'}$), and let $\bar{n}$ (resp.~$\bar{n}'$) be the dimension of $\bar{X}$ (resp.~$\bar{X}'$). Since the general fiber of $X\dashrightarrow \bar{X}$ has dimension $n - \bar{n}$, the image of an $n$-dimensional polydisc $\Delta$ in $X(\textbf{K})$ under $X \dashrightarrow \bar{X}$ has dimension $\bar{n}$ (at a generic point). This argument also holds for $X'\dashrightarrow \bar{X}'$. So we see that $\bar{n} = \bar{n}'$ and that $U(\textbf{K})$ contains an $\bar{n}$-dimensional polydisc.
Since $U_{\textbf{K}}$ is now a quasi-projective variety of dimension at least $\bar{n}$ (and hence equal to $\bar{n}$), the images $\bar{X}$ and $\bar{X}'$ are birational to each other.
\end{proof}
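The closed form for $B(s, y)$ used in the claim can be spot-checked numerically for $N = 1$ over $\textbf{Q}_3$ (a throwaway script of ours; the averaged value is exact on every residue class except one deep class, whose contribution is negligible):

```python
from fractions import Fraction

# Spot check (ours) of the closed form B(s, y) = 1 if |y| < 1, and
# b(s) * |y|^s if |y| >= 1, where b(s) = (q - 1)/(q - t), t = q^{-s},
# in the case N = 1, q = p = 3.  We approximate the integral over O by
# averaging |1 + lam*y|^s over lam mod p^K.

def val(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational number."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def B_numeric(p: int, s: float, y: Fraction, K: int = 8) -> float:
    total = 0.0
    for lam in range(p ** K):
        x = 1 + lam * y
        if x != 0:
            total += float(p) ** (-s * val(x, p))
    return total / p ** K

def B_closed(p: int, s: float, y: Fraction) -> float:
    t = float(p) ** (-s)
    b = (p - 1) / (p - t)
    abs_y = float(p) ** (-val(y, p))
    return 1.0 if abs_y < 1 else b * abs_y ** s

for y in (Fraction(3), Fraction(9), Fraction(1), Fraction(1, 3)):
    assert abs(B_numeric(3, 1.0, y) - B_closed(3, 1.0, y)) < 1e-6
print("closed form of B(s, y) confirmed for N = 1 over Q_3")
```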
\begin{cor}\label{corbir}
Let $\mathfrak{X}$ and $\mathfrak{X}'$ be smooth projective varieties over $\mathcal{O}$ of relative dimension $n$, $r$ a positive integer. Suppose that $q \ge \max\{q_0(\mathfrak{X}), q_0(\mathfrak{X}')\}$, that the $r$-canonical maps $\Phi_{r, X}$ and $\Phi_{r, X'}$ map $X$ and $X'$ birationally to their images, respectively, and that there is an isometry
\[T\colon \bigl(V_{r, X'}, \|{-}\|_{s}\bigr) \longrightarrow \bigl(V_{r, X}, \|{-}\|_{s}\bigr) \]
for some positive number $s$. Then there is a birational map $f\colon X \dashrightarrow X'$ such that $T = u\cdot f^*$ for some $u \in \textbf{K}^\times$.
\end{cor}
\begin{proof}
The condition $q \ge \max\{q_0(\mathfrak{X}), q_0(\mathfrak{X}')\}$ ensures that both $\mathfrak{X}g(\textbf{K})$ and $\mathfrak{X}g'(\textbf{K})$ are of positive measure. Since the images $\bar{X}$ and $\bar{X}'$ are birational to each other, $X$ and $X'$ are birational to each other. The birational map $f\colon \mathfrak{X}\dashrightarrow \mathfrak{X}'$ comes from the identification
\[\textbf{P}(T)^\vee\colon \textbf{P}(V_{r, X})^\vee \xrightarrow{\ \sim\ } \textbf{P}(V_{r, X'})^\vee. \]
Therefore, $T = u\cdot f^*$ for some $u\in \textbf{K}^\times$.
\end{proof}
By imitating the proofs, Theorem~\ref{thrbir} and Corollary~\ref{corbir} also generalize to the log pair case (the only differences are notational):
\begin{thr}
Let $(X, D)$ and $(X', D')$ be smooth projective klt pairs over $\textbf{K}$ of dimension $n$ and of positive measure, $r$ a positive integer. Let $L$ (resp.~$L'$) be an $\textbf{R}$-divisor on $X$ (resp.~$X'$), and let $V$ (resp.~$V'$) be a linear subspace of $V_{r, (X, D), L}$ (resp.~$V_{r, (X', D'), L'}$). Suppose $1/r < \min\{s_r(X, D, V), s_r(X', D', V')\}$ and there is an isometry
\[T\colon \bigl(V', \|{-}\|_{1/r, D'}\bigr) \longrightarrow \bigl(V, \|{-}\|_{1/r, D}\bigr). \]
Then the images of the maps $\Phi_{V}$ and $\Phi_{V'}$ are birational to each other.
When $\mathfrak{X}$ and $\mathfrak{X}'$ are both smooth over $\mathcal{O}$, the statement also holds when $1/r$ is replaced by any positive number $s < \min\{s_r(X, D, V), s_r(X', D', V')\}$.
\end{thr}
\begin{cor}
Let $(\mathfrak{X}, D)$ and $(\mathfrak{X}', D')$ be smooth projective klt pairs over $\mathcal{O}$ of relative dimension $n$, $r$ a positive integer. Let $L$ (resp.~$L'$) be an $\textbf{R}$-divisor on $X$ (resp.~$X'$), and let $V$ (resp.~$V'$) be a linear subspace of $V_{r, (X, D), L}$ (resp.~$V_{r, (X', D'), L'}$).
Suppose that $q \ge \max\{q_0(\mathfrak{X}), q_0(\mathfrak{X}')\}$, that the maps $\Phi_{V}$ and $\Phi_{V'}$ map $X$ and $X'$ birationally to their images, and that there is an isometry
\[T\colon \bigl(V', \|{-}\|_{s, D'}\bigr) \longrightarrow \bigl(V, \|{-}\|_{s, D}\bigr) \]
for some positive number $s < \min \{s_r(X, D, V), s_r(X', D', V')\}$. Then there is a birational map $f\colon \mathfrak{X} \dashrightarrow \mathfrak{X}'$ such that $T = u\cdot f^*$ for some $u \in \textbf{K}^\times$.
\end{cor}
\section{Characterizing the $K$-equivalence}\label{secKequiv}
We say two projective smooth varieties $\mathfrak{X}$ and $\mathfrak{X}'$ over $\mathcal{O}$ (resp.~$\mathfrak{X}g$ and $\mathfrak{X}g'$ over $\textbf{K}$) are $K$-equivalent on the $\textbf{K}$-points if
there is a projective smooth variety $\mathfrak{Y}$ over $\mathcal{O}$ (resp.~$\mathfrak{Y}g$ over $\textbf{K}$) with birational morphisms $\phi\colon \mathfrak{Y} \to \mathfrak{X}$ and $\phi'\colon \mathfrak{Y} \to \mathfrak{X}'$ (resp.~$\phi\colon Y\to X$ and $\phi'\colon Y\to X'$) such that $\phi^*K_X = {\phi'}^*K_{X'}$ on $\mathfrak{Y}g(\textbf{K})$. Equivalently,
\[K_Y = \phi^*K_X + \sum m_E E = {\phi'}^*K_{X'} + \sum m_E E \]
on $Y(\textbf{K})$. For $K$-equivalence between klt pairs $(\mathfrak{X}, D)$ and $(\mathfrak{X}', D')$, simply replace the relation $\phi^*K_X = {\phi'}^*K_{X'}$ by $\phi^*(K_X + D) = {\phi'}^*(K_{X'} + D')$.
When $\mathfrak{X}$ and $\mathfrak{X}'$ are $K$-equivalent on the $\textbf{K}$-points over $\mathcal{O}$, the pullbacks of the canonical measures $\phi^*d\mu_{\mathfrak{X}}$ and ${\phi'}^*d\mu_{\mathfrak{X}'}$ are equal to each other. Hence, the natural linear transformation
\begin{equation}\label{eqisoKequiv}
\bigl(V_{r, X'}, \|{-}\|_s\bigr)\longrightarrow \bigl(V_{r, X}, \|{-}\|_s\bigr)
\end{equation}
is an isometry:
\[ \|\alpha'\|_s = \int_{Y(\textbf{K})} |{\phi'}^*\alpha'|^s\cdot {\phi'}^*d\mu_{\mathfrak{X}'}^{1-rs} = \int_{Y(\textbf{K})} |\phi^*\alpha|^s\cdot \phi^*d\mu_{\mathfrak{X}}^{1-rs} = \|\alpha\|_s. \]
The main result of this section is to show that the isometry (\ref{eqisoKequiv}) gives us $K$-equivalence on the $\textbf{K}$-points between $\mathfrak{X}g$ and $\mathfrak{X}g'$ when $\mathfrak{X}g$ and $\mathfrak{X}g'$ satisfy (\hyperlink{assumption}{$\spadesuit$}).
In order to prove this, let us consider a birational morphism $\phi\colon Y\to X$ over $\textbf{K}$ such that
\[K_Y = \phi^* K_X + \sum_{E\in \mathcal{E}} m_E E \]
on $\mathfrak{Y}g(\textbf{K})$. Suppose that the $r$-canonical maps
\[\begin{tikzcd}
Y \ar[d, "\phi"] \ar[r, dashed, "\Phi_{r, Y}"] & \textbf{P}\bigl(H^0(Y, rK_Y)\bigr)^\vee \ar[d, "\textbf{P}(\phi^*)^\vee", "\wr"']\\
X \ar[r, dashed, "\Phi_{r, X}"] & \textbf{P}\bigl(H^0(\mathfrak{X}g, rK_{\mathfrak{X}g})\bigr)^\vee.
\end{tikzcd}\]
map $X$ and $Y$ birationally to their images.
For an $r$-form $\alpha \in V_{r, X}$, its pullback $\phi^*\alpha$ lies in $V_{r, Y}$. It is then natural to compare $\|\alpha\|_s$ with $\|\phi^*\alpha\|_{s}$. But there is a potential problem: $Y$ may not be defined over $\mathcal{O}$, so there may be no canonical measure $\mu_{\mathfrak{Y}}$ on $Y(\textbf{K})$. Instead, we construct a temporary measure $\mu_Y$ as follows:
\begin{itemize}
\item[(\hypertarget{construction}{$\dagger$})] Decompose $\mathfrak{Y}g(\textbf{K})$ into finitely many polydiscs $Y_j \cong \pi^{k_j} \mathcal{O}^n$, and take $\mu_Y|_{Y_j}$ to be the canonical measure $\mu_\mathcal{O}$ on $\pi^{k_j}\mathcal{O}^n$. Using this measure $\mu_Y$, we define the $s$-norm $\|{-}\|_s$ on $V_{r, Y}$ similarly:
\[\|\beta\|_s = \int_{\mathfrak{Y}g(\textbf{K})} |\beta|^s \,d\mu_Y^{1-rs}. \]
\end{itemize}
\begin{thr}\label{thrgI}
For a fixed positive number $s \neq \frac{1}{r}$, the Jacobian $J_\phi\colon Y(\textbf{K}) \to \textbf{R}_{\ge 0}$ defined by the formula $\phi^*d\mu_\mathfrak{X} = J_\phi\,d\mu_Y$ is determined by the data
\begin{equation}\label{eqdata}
\bigl(V_{r, X}, \|{-}\|_{s}\bigr)\quad\text{and}\quad \bigl(V_{r, Y}, \|{-}\|_{s}\bigr).
\end{equation}
In particular, the set of divisors $\mathcal{E} = \{E\}$ and the positive integers $m_E$, $E\in \mathcal{E}$, are also determined by (\ref{eqdata}).
\end{thr}
\begin{proof}
It follows from the construction (\hyperlink{construction}{$\dagger$}) of $\mu_Y$ that $J_\phi\colon Y(\textbf{K}) \to \textbf{R}_{\ge 0}$ is continuous. Let $\alpha_0$, $\ldots$ , $\alpha_N$ be a basis of $V_{r, X}$, $g_i = \frac{\alpha_i}{\alpha_0}$ a rational function on $Y$ for each $i$, and $d\nu = |\phi^*\alpha_0|^s\,d\mu_Y^{1 - rs}$. Then for a form $\alpha = \alpha_0 + \sum_i \lambda_i \alpha_i$, we have
\[\|\alpha\|_s = \int_{Y^\circ(\textbf{K})} J_\phi^{1-rs} \left|1 + \sum \lambda_i g_i\right|^s d\nu,\quad \|\phi^*\alpha\|_s = \int_{Y^\circ(\textbf{K})} \left|1 + \sum \lambda_i g_i\right|^s d\nu, \]
where $Y^\circ = Y\setminus \{\phi^*\alpha_0 = 0\}$.
For each Borel function $h\colon \textbf{K}^N \to \textbf{R}$, denote by $I(h) = (I_X(h), I_Y(h))$ the integrals given by
\[I_X(h) = \int_{Y^\circ(\textbf{K})} J_\phi^{1-rs}\cdot(h \circ G)\,d\nu,\quad I_Y(h) = \int_{Y^\circ(\textbf{K})} (h \circ G)\,d\nu, \]
where $G = (g_1, \ldots, g_N) \colon Y^\circ \to \textbf{A}^{\!N}$.
As in the proof of Theorem~\ref{thrbir}, consider the function
\[B(s, y) = \int_{\mathcal{O}^N}\left|1 + \sum \lambda_i y_i\right|^s\, d\lambda = \begin{cases}
1, & \text{ if }\operatorname{max}|y_i| < 1, \\[-6pt]
b(s)\cdot\operatorname{max}|y_i|^s, & \text{ if }\operatorname{max}|y_i| \ge 1
\end{cases}\]
and the ``cut-off'' function
\[B_0(s,y) = q^s B(s,\pi y) - B(s,y) = \begin{cases}
t^{-1} - 1, & \text{ if }\operatorname{max}|y_i| < 1, \\[-6pt]
t^{-1} - b(s), & \text{ if }\operatorname{max}|y_i| = 1, \\[-6pt]
0, & \text{ if }\operatorname{max}|y_i| > 1.
\end{cases}\]
Let $A = \alpha_0 + \sum \mathcal{O}\alpha_i$. We see that
\[I(B(s,-)) = \bigl(I_X(B(s,-)), I_Y(B(s,-))\bigr) = \left(\int_{A} \|\alpha\|_s \,d\alpha,\int_{A} \|\phi^*\alpha\|_s \,d\alpha\right)\]
is determined by (\ref{eqdata}), so is $I(B_0(s,-))$. Using (\ref{eqcharON}), we see that $I(\boldsymbol{1}_{\mathcal{O}^N})$, and hence $I(h)$ for each nonnegative Borel function $h\colon \textbf{K}^{N} \to \textbf{R}$, is determined by (\ref{eqdata}).
Let $U$ be a Zariski open dense subset of $Y^\circ$ on which $G|_U\colon U \to \textbf{A}^{\!N}$ is an immersion and $G^{-1}(G(y)) = \{y\}$ for each $y\in U$. Then for each $y\in U(\textbf{K})$,
\[J_\phi(y) = \left(\lim_{\varepsilon \to 0^+} \frac{I_X\bigl(\boldsymbol{1}_{B_\varepsilon(G(y))}\bigr)}{I_Y\bigl(\boldsymbol{1}_{B_\varepsilon(G(y))}\bigr)}\right)^{\frac{1}{1-rs}} \]
is determined by (\ref{eqdata}) (note that $s \neq 1/r$). By the continuity of $J_\phi$, we can then determine the function $J_\phi\colon Y \to \textbf{R}_{\ge 0}$.
Thus, the union of the divisors
\[\bigcup_{E\in \mathcal{E}}E = \bigl\{y\in Y\,\big|\, J_\phi(y) = 0\bigr\} \]
is determined by (\ref{eqdata}).
In order to find the order $m_E$ of $E$, we pick a generic point $y\in E$ that does not lie in any other $E'\in \mathcal{E}$. Say $y$ lies in the polydisc $Y_j \cong \pi^{k_j}\mathcal{O}^n$. Then under some coordinate $u = (u_1, \ldots, u_n)$,
\[\phi^*d\mu_X = J_\phi(u)\,d\mu_Y = m(u)\cdot |u_1|^{m_E}\,du, \]
for some nonvanishing continuous function $m(u)$. We may assume that $m(u) \equiv q^{-m}$ is constant when $u$ lies in some polydisc $\Delta$ that contains $y$. Then for $j$ large,
\[\left\{u\in \Delta\,\middle|\, J_\phi(u) = q^{-j} \right\} \neq \varnothing \quad \iff \quad j \in m_E \textbf{Z} + m. \]
Therefore, $m_E$ can be determined by $J_\phi$, and hence, by (\ref{eqdata}).
\end{proof}
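The last step of the proof, reading off $m_E$ from the values attained by $J_\phi$, can be illustrated by a trivial simulation (entirely ours): on the local model $J_\phi(u) = q^{-m}|u_1|^{m_E}$ the attained exponents are $m + km_E$ for $k \ge 1$, and $m_E$ is their common gap.

```python
from math import gcd
from functools import reduce

# Illustration (ours) of the last step of the proof: from the set of values
# q^{-j} attained by J_phi(u) = q^{-m} |u_1|^{m_E} on a punctured polydisc,
# the multiplicity m_E is recovered as the common gap (gcd of differences)
# of the attained exponents j = m + k*m_E, k = 1, 2, ...

def attained_exponents(m: int, m_E: int, k_max: int = 20):
    # |u_1| ranges over q^{-k}, k >= 1, on the punctured disc pi*O
    return sorted(m + k * m_E for k in range(1, k_max))

def recover_m_E(exponents) -> int:
    diffs = [b - a for a, b in zip(exponents, exponents[1:])]
    return reduce(gcd, diffs)

for m in (0, 2, 5):
    for m_E in (1, 2, 3, 7):
        assert recover_m_E(attained_exponents(m, m_E)) == m_E
print("m_E recovered from the value set of J_phi")
```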
\begin{cor}\label{corgI}
Let $\mathfrak{X}$ and $\mathfrak{X}'$ be smooth projective varieties over $\mathcal{O}$ of relative dimension $n$ such that
\[T\colon \bigl(V_{r, X'}, \|{-}\|_s\bigr) \longrightarrow \bigl(V_{r, X}, \|{-}\|_s\bigr) \]
is an isometry for some positive number $s\neq 1/r$. Suppose that the $r$-canonical maps $\Phi_{r, X}$ and $\Phi_{r, X'}$ map $X$ and $X'$ birationally to their images. Then there exists a projective smooth variety $Y$ over $\textbf{K}$ with birational morphisms $\phi\colon Y \to X$ and $\phi'\colon Y \to X'$ such that $\phi^*d\mu_{X}$ is proportional to ${\phi'}^*d\mu_{X'}$. In particular, $\mathfrak{X}g$ and $\mathfrak{X}g'$ are $K$-equivalent on the $\textbf{K}$-points.
\end{cor}
\begin{proof}
It follows from Theorem~\ref{thrbir} that $X$ and $X'$ are birational. So there is a resolution $\phi\colon Y \to X$ such that the birational map $f\colon X \dashrightarrow X'$ factors through some birational morphism $\phi'\colon Y \to X'$:
\[\begin{tikzcd}[column sep = tiny]
& Y \ar[ld, "\phi"'] \ar[rd, "\phi'"] \\
X \ar[rr, dashed, "f"] && X'
\end{tikzcd}\]
Note that the $r$-canonical map $\Phi_{r, Y}$ also maps $Y$ birationally to its image.
Write
\[\phi^*K_{X} = K_Y + \sum m_E E,\quad {\phi'}^* K_{X'} = K_Y + \sum m_E' E. \]
Let $\mu_Y$ be the measure on $Y(\textbf{K})$ we constructed in (\hyperlink{construction}{$\dagger$}), and let $\|{-}\|_s$ be the $s$-norm induced by $\mu_Y$. By Theorem~\ref{thrgI}, the Jacobian $J_\phi = \phi^*d\mu_\mathfrak{X} / d\mu_Y$ and $\{(E, m_E)\}_{E\in \mathcal{E}}$ are determined by
\[\bigl(V_{r, X}, \|{-}\|_s\bigr)\quad\text{and}\quad \bigl(V_{r, Y}, \|{-}\|_{s}\bigr), \]
while $J_{\phi'} = {\phi'}^*d\mu_{\mathfrak{X}'} / d\mu_Y$ and $\{(E, m_E')\}_{E\in \mathcal{E}}$ are determined by
\[\bigl(V_{r, X'}, \|{-}\|_s\bigr)\quad\text{and}\quad \bigl(V_{r, Y}, \|{-}\|_{s}\bigr). \]
Since $T = u\cdot f^*$ for some $u \in \textbf{K}^\times$, we see that
\[{\phi'}^*d\mu_{\mathfrak{X}'} = J_{\phi'}\,d\mu_{Y} = {|u|}^{\frac{1}{1-rs}}J_\phi\,d\mu_{Y} = {|u|}^{\frac{1}{1-rs}}\,\phi^*d\mu_{\mathfrak{X}} \]
and $\{(E, m_E)\}_{E\in \mathcal{E}} = \{(E, m_E')\}_{E\in \mathcal{E}}$, which shows that $X$ is $K$-equivalent to $X'$ on the $\textbf{K}$-points.
\end{proof}
\begin{rmk}
\begin{enumerate}[(i)]
\item In the proof above, if we assume furthermore that there are resolutions $\phi\colon \mathfrak{Y}\to \mathfrak{X}$ and $\phi'\colon \mathfrak{Y} \to \mathfrak{X}'$ over $\mathcal{O}$, then this shows that $\mathfrak{X}$ and $\mathfrak{X}'$ are $K$-equivalent on the $\textbf{K}$-points (over $\mathcal{O}$).
\item For any other positive number $s'$, using ${\phi'}^*d\mu_{\mathfrak{X}'} = \upsilon\,\phi^*d\mu_{\mathfrak{X}}$ for some $\upsilon \in \textbf{R}_{>0}$, we see that
\[\|f^*\alpha\|_{s'} = \int \|\phi^*\alpha\|^{s'}J_{\phi'}^{1-rs'}\,d\mu_Y^{1-rs'} = \upsilon^{1-rs'}\int \|\phi^*\alpha\|^{s'}J_{\phi}^{1-rs'}\,d\mu_Y^{1-rs'} = \upsilon^{1-rs'}\|\alpha\|_{s'}, \]
i.e., under $f^*$, $\|{-}\|_{s'}$ on $V_{r, X'}$ is proportional to $\|{-}\|_{s'}$ on $V_{r, X}$.
\item When $X$ and $X'$ are already birational to each other, say $f\colon X \dashrightarrow X'$ is a birational map, the order on the $s$-norms also detects the $K$-partial ordering. More precisely, if $s < 1/r$ and
\begin{equation}\label{eqnormorder}
\|f^*\alpha'\|_s \le \|\alpha'\|_s,\quad \forall \alpha' \in H^0(X', rK_{X'}),
\end{equation}
then $X \le_{K} X'$, i.e., there exists a birational correspondence (over $\textbf{K}$)
\[\begin{tikzcd}[column sep = tiny]
& Y \ar[ld, "\phi"'] \ar[rd, "\phi'"] \\
X \ar[rr, dashed] && X'
\end{tikzcd}\]
with $\phi^* K_X \le {\phi'}^* K_{X'}$ on $Y(\textbf{K})$. If $s > 1/r$, then the condition (\ref{eqnormorder}) implies $X \ge_{K} X'$.
\item The result also applies to the case in which $r$ is a negative integer, for example, when both $X$ and $X'$ are Fano with $-r$ sufficiently large.
\end{enumerate}
\end{rmk}
Again, the theorem and the corollary above also generalize to the log pair case.
\begin{cor}
Let $(\mathfrak{X}, D)$ and $(\mathfrak{X}', D')$ be smooth projective klt pairs over $\mathcal{O}$ of relative dimension $n$. Let $L$ (resp.~$L'$) be an $\textbf{R}$-divisor on $X$ (resp.~$X'$), and let $V$ (resp.~$V'$) be a linear subspace of $V_{r, (X, D), L}$ (resp.~$V_{r, (X', D'), L'}$).
Suppose that the maps $\Phi_V$ and $\Phi_{V'}$ map $X$ and $X'$ birationally to their images, and there is an isometry \[T\colon \bigl(V', \|{-}\|_{s, D'}\bigr) \longrightarrow \bigl(V, \|{-}\|_{s, D}\bigr) \]
for some positive number $s < \min \{s_r(X, D, V), s_r(X', D', V')\}$ with $s\neq 1/r$. Then $(X, D)$ and $(X', D')$ are $K$-equivalent on the $\textbf{K}$-points.
\end{cor}
\begin{rmk}
Let $P(\partial_s) = \sum c_k \partial_s^k$ be a linear differential operator of degree $d \ge 1$ where $\partial_s = \tfrac{d}{ds}$. Then
\[P(\partial_s)\|\alpha\|_{s, D} = \int_{X(\textbf{K})} P(a)t^a\,d\mu_{\mathfrak{X}, D}, \]
where $a = v(|\alpha| / d\mu_{\mathfrak{X}, D}^r)$. An isometry
\[\bigl(V', P(\partial_s)\|{-}\|_{s, D'}\bigr) \longrightarrow \bigl(V, P(\partial_s)\|{-}\|_{s, D}\bigr)\]
also gives us $K$-equivalence between $(\mathfrak{X}, D)$ and $(\mathfrak{X}', D')$ when they both satisfy (\hyperlink{assumption}{$\spadesuit$}) (even when $s = 0$, $1/r$).
Indeed, replace $B(s, y)$ in the proof of Theorem~\ref{thrgI} by
\[B_a^{P}(s, y) = \int_{\mathcal{O}^N} P\bigl(v\bigl(1 + {\textstyle\sum} \lambda_i y_i\bigr) + a\bigr)\cdot \bigl|1 + {\textstyle\sum} \lambda_i y_i\bigr|^s\,d\lambda. \]
Then the function
\[B_{a, 0}^{P}(s, y) \colonequals (-1)^{d + 1} \sum_{j = 0}^{d + 1} \tbinom{d+1}{j} (-t)^{-j} B_a^{P}(s, \pi^j y)\]
is supported in $\pi^{-d}\mathcal{O}^N$, and one can prove that the integral
\[Q^P(a) \colonequals \int_{\pi^{-d} \mathcal{O}^N} B_{a, 0}^P(s, y)\,dy\]
is a polynomial in $a$ with $\deg Q^P = d - \delta_{t1}$. Since $Q^P\neq 0$, we can then determine the Jacobian function $J_\phi$ following the proof above.
\end{rmk}
Applying the same method as in Theorem~\ref{thrgI}, we can prove the following:
\begin{pp}
Let $(\mathfrak{X}, D)$ be a smooth projective klt pair over $\mathcal{O}$, let $L$ be an $\textbf{R}$-divisor on $X$, and let $V$ be a linear subspace of $V_{r, (X, D), L}$ such that $\Phi_{V}$ maps $X$ birationally to its image. Then the $\textbf{Q}(t_D)$-valued total norm $\|{-}\|_D = (\|{-}\|_{s, D})_{s}$ on $V$ can be determined by any two norms $\|{-}\|_{s_1, D}$ and $\|{-}\|_{s_2, D}$ with $s_1\neq s_2$.
\end{pp}
\begin{proof}
Suppose that the data $\|{-}\|_{s_1, D}$ and $\|{-}\|_{s_2, D}$ with $s_1 \neq s_2$ are given. Then for each $\alpha \in V$ we can determine
\[I_i(h) = \int_{U(\textbf{K})}(h\circ G)\cdot|\alpha|^{s_i}\,d\mu_{\mathfrak{X}, D}^{1-rs_i} \]
for all nonnegative Borel functions $h\colon \textbf{K}^N \to \textbf{R}$, where $U$ is some Zariski open dense subset of $X$ and $G\colon U \to \textbf{A}^{\!N}$ is the immersion defined by $\Phi_{V}$ (up to a choice of basis).
Thus, we can determine
\[\frac{|\alpha|}{d\mu_{\mathfrak{X}, D}^r}(x) = \lim_{\varepsilon \to 0^+}\left(\frac{I_2\bigl(B_\varepsilon(G(x))\bigr)}{I_1\bigl(B_\varepsilon(G(x))\bigr)}\right)^{\frac{1}{s_2 - s_1}}\]
for each $x\in U(\textbf{K})$. Let
\[Z = \left\{u \in \textbf{K}^N \,\middle|\, I_1\bigl(B_\varepsilon(u)\bigr) \neq 0,\ \forall \varepsilon > 0\right\}. \]
It is clear that $Z$ contains $G(U)(\textbf{K})$ and is contained in the closure $\overline{G(U)(\textbf{K})}$ of $G(U)(\textbf{K})$.
Take
\[h(u) = \boldsymbol{1}_Z(u)\cdot\lim_{\varepsilon \to 0^+}\left(\frac{I_2\bigl(B_\varepsilon(u)\bigr)}{I_1\bigl(B_\varepsilon(u)\bigr)}\right)^{\frac{s - s_1}{s_2 - s_1}}, \]
so $h(G(x)) = (|\alpha|/d\mu_{\mathfrak{X}, D}^r)^{s - s_1}(x)$ for each $x\in U(\textbf{K})$. We see that
\[\|\alpha\|_{s, D} = \int_{U(\textbf{K})} \left(\frac{|\alpha|}{d\mu_{\mathfrak{X}, D}^r}\right)^{s - s_1}\cdot |\alpha|^{s_1}\,d\mu_{\mathfrak{X}, D}^{1 - rs_1} = I_1(h) \]
is determined by the norms $\|{-}\|_{s_1, D}$ and $\|{-}\|_{s_2, D}$.
\end{proof}
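The limit formula above can be illustrated numerically. The toy sketch below (not from the paper) works on the unit interval with the identity chart, ignores the $s$-dependence of the base measure for simplicity, and uses a stand-in function $g$ for the ratio $|\alpha|/d\mu_{\mathfrak{X}, D}^r$; all names and parameter values are illustrative assumptions.

```python
import numpy as np

# Toy version of the proof's device: recover the pointwise value g(x0) from
# the two ball integrals I_1, I_2 via (I_2 / I_1)^(1/(s2 - s1)).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200_000)   # samples from the base measure on [0, 1]
g = lambda u: 1.0 + u**2             # stand-in for |alpha| / dmu^r (assumption)
s1, s2 = 0.5, 1.5

def I(s, center, eps):
    """Monte Carlo analogue of I_i(h) with h the indicator of a small ball."""
    mask = np.abs(x - center) < eps
    return np.mean(np.where(mask, g(x)**s, 0.0))

x0, eps = 0.3, 0.01
ratio = (I(s2, x0, eps) / I(s1, x0, eps)) ** (1.0 / (s2 - s1))
# ratio approximates g(x0) = 1.09 as eps shrinks
```

Because the same samples enter numerator and denominator, the estimate is stable even for moderate sample sizes; the error is governed by the variation of $g$ over the ball.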
\end{document} |
\begin{document}
\begin{center}
\LARGE
\textbf{Why Should we Interpret\\
Quantum Mechanics?}\\[1cm]
\large
\textbf{Louis Marchildon}\\[0.5cm]
\normalsize
D\'{e}partement de physique,
Universit\'{e} du Qu\'{e}bec,\\
Trois-Rivi\`{e}res, Qc.\ Canada G9A 5H7\\
email: marchild$\hspace{0.3em}a\hspace{-0.8em}
\bigcirc$uqtr.ca\\
\end{center}
\begin{abstract}
The development of quantum information
theory has renewed interest in the
idea that the state vector
does not represent the state of a quantum
system, but rather the knowledge
or information that we may have on the
system. I argue that this epistemic
view of states appears to solve foundational
problems of quantum mechanics only at
the price of being essentially incomplete.
\end{abstract}
\textbf{KEY WORDS:} quantum mechanics;
interpretation; information.
\section{INTRODUCTION}
The foundations and the interpretation of
quantum mechanics, much discussed by the
founders of the theory, have blossomed anew
in the past few decades. It has been
pointed out that every single year between 1972 and
2001 has witnessed at least one scientific meeting
devoted to these issues~\cite{fuchs}.
Two problems in the foundations of quantum
mechanics stand out prominently. The first one
concerns the relationship between quantum mechanics
and general relativity. Both theories are highly
successful empirically, but they appear to be
mutually inconsistent. Most investigators
used to believe that the problem lies more with
general relativity than with quantum mechanics, and
that the solution would consist in coming up with
a suitable quantum version of Einstein's
gravitational theory. More recently, however,
it has become likely that a correct quantum
theory of gravity may involve leaps
substantially bolder~\cite{smolin}.
The second problem is more down-to-earth, and
shows up starkly in ordinary nonrelativistic
quantum mechanics. It consists in the reconciliation
of the apparently indeterminate nature of quantum
observables with the apparently determinate
nature of classical observables, and it crystallizes
in the so-called measurement problem.
Broadly speaking, there
are two ways to address it. The first one, elaborated
by von Neumann~\cite{neumann}, consists in denying
the universal validity of the unitary evolution of
quantum states, governed by the Schr\"{o}dinger
equation. State vector collapse, postulated by
von Neumann but not worked out in detail by him,
was made much more precise recently in spontaneous
localization models~\cite{ghirardi}. The second
road to reconciliation consists in stressing the
universal validity of the Schr\"{o}dinger
equation, while assigning values to specific
observables of which the state vector is not
necessarily an eigenstate. Bohmian
mechanics~\cite{bohm1,bohm2,holland} and
modal interpretations~\cite{fraassen1,vermaas},
among others, fall in this category.
The past twenty years have also witnessed the
inception and quick development of quantum
information theory. This was not unrelated to
foundational studies. Quantum
cryptography~\cite{bennett} and quantum
teleportation~\cite{brassard}, for instance, are
based on the Einstein-Podolsky-Rosen setup
and algorithms for fast computation use
entanglement in an essential way~\cite{nielsen}.
As foundational studies contributed to
quantum information theory, a number of
investigators feel that quantum information
theory has much to contribute to the interpretation
of quantum mechanics. This has to do with what
has been called the \emph{epistemic view} of
state vectors, which goes back at least to
Heisenberg~\cite{heisenberg} but has
significantly evolved in the past few
years~\cite{fuchs,peierls,rovelli,peres,spekkens,bub}.
In the epistemic view, the state vector (or wave
function, or density matrix) does not represent
the objective state of a microscopic system
(like an atom, an electron, a photon), but
rather our knowledge of the probabilities
of outcomes of macroscopic measurements.
This, so the argument goes, considerably
clarifies, or even completely dissolves,
the EPR paradox and the measurement problem.
The purpose of this paper is to examine the
status of the epistemic view. It is a minimal
interpretation, and indeed was referred to as an
``interpretation without interpretation''~\cite{peres}.
The question of the adequacy of the epistemic
view is much the same as whether
quantum mechanics needs to be interpreted.
In Sec.~2, I shall review the main arguments
in favor of the epistemic view of quantum states.
They have to do with state vector collapse and the
consistency of quantum mechanics with special
relativity. It turns out that the issue
of interpretation can fruitfully be
analyzed in the context of the \emph{semantic view}
of theories. As outlined in Sec.~3, this
approach tries to answer the question, How
can the world be the way the theory says it is? The
next section will describe a world where the epistemic
view would be adequate. This, it turns out, is not
the world we live in, and Secs.~5 and~6 will argue that
in the real world, the epistemic view in fact
begs the question of interpretation.
\begin{sloppypar}
\section{ARGUMENTS FOR THE EPISTEMIC VIEW}
\end{sloppypar}
Interpretations of quantum mechanics can be
asserted with various degrees of
illocutionary strength. The many-worlds
interpretation~\cite{everett}, for instance,
has sometimes been presented as following directly
from the quantum-mechanical formalism.
In a similar vein, Rovelli has proposed that
the epistemic view follows from the observation
that different observers may give different
accounts of the same
sequence of events~\cite{rovelli}.
Rovelli's argument goes as follows. A quantum
system $S$ with two-dimensional state space is
initially described by the state vector
$|\psi\rangle = \alpha |1\rangle + \beta |2\rangle$,
where $|1\rangle$ and $|2\rangle$ are orthonormal
eigenvectors of an observable $Q$. Sometime
between $t_i$ and $t_f$, an apparatus $O$ (which
may or may not include a human being)
performs a measurement of $Q$ on $S$, obtaining
the value 1. Call this experiment $\mathcal{E}$.
According to standard quantum mechanics, $O$
describes $\mathcal{E}$ as
\begin{align}
t_i & \longrightarrow t_f \notag \\
\alpha |1\rangle + \beta |2\rangle
& \longrightarrow |1\rangle . \label{rov1}
\end{align}
Now let a second observer $P$ describe the
compound system made up of $S$ and $O$.
Let $|O_i\rangle$ be the state vector of $O$
at $t_i$. The measurement interaction between
$O$ and $S$ implies that their total state
vector becomes entangled. Thus from the point
of view of $P$, who performs no measurement
between $t_i$ and $t_f$, the experiment
$\mathcal{E}$ should be described as
\begin{align}
t_i & \longrightarrow t_f \notag \\
(\alpha |1\rangle + \beta |2\rangle) \otimes |O_i\rangle
& \longrightarrow \alpha |1\rangle \otimes |O1\rangle
+ \beta |2\rangle \otimes |O2\rangle . \label{rov2}
\end{align}
Here $|O1\rangle$ and $|O2\rangle$ are pointer
states of $O$ showing results 1 and 2.
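A minimal numerical sketch may help fix ideas about the two descriptions; the three-dimensional toy pointer space and the particular amplitudes below are illustrative assumptions, not part of Rovelli's argument.

```python
import numpy as np

alpha, beta = 0.6, 0.8                      # |alpha|^2 + |beta|^2 = 1
ket1, ket2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
Oi, O1, O2 = np.eye(3)                      # toy pointer states: ready, "1", "2"

# P's description (2): the measurement interaction correlates S with O,
# (alpha|1> + beta|2>) (x) |O_i|  ->  alpha|1>(x)|O1> + beta|2>(x)|O2>.
initial = np.kron(alpha * ket1 + beta * ket2, Oi)
final_P = alpha * np.kron(ket1, O1) + beta * np.kron(ket2, O2)

# O's description (1): having read the result 1, O describes S by |1>.
final_O = ket1

# Both descriptions assign |alpha|^2 to the outcome O actually records.
p_branch = abs(np.vdot(np.kron(ket1, O1), final_P)) ** 2
# p_branch = 0.36
```

The point of the sketch is only that the entangled state in $P$'s account and the collapsed state in $O$'s account agree on the Born probability of the recorded outcome.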
According to Rovelli, Eq.~(\ref{rov1}) is the
conventional quantum-mechanical description
of experiment $\mathcal{E}$ from the point of
view of $O$, whereas Eq.~(\ref{rov2})
is the description of
$\mathcal{E}$ from the point of view of $P$.
From this he concludes that ``[i]n quantum mechanics
different observers may give different accounts of
the same sequence of events.'' Or, to borrow the
title of his Sec.~2, ``Quantum mechanics is a
theory about information.''
To someone who believes there is a state of
affairs of some sort behind the description, the
difference between $O$'s and $P$'s points of view
means that one of them is mistaken. To put
it differently, the
problem with the argument is that expressions
like ``standard quantum mechanics'' or
``conventional quantum-mechanical description''
are ambiguous. They can refer
either (i) to strict unitary Schr\"{o}dinger evolution,
or (ii) in the manner of von Neumann, to
Schr\"{o}dinger evolution \emph{and} collapse.
Once a precise definition is agreed upon, either
description~(\ref{rov1}) or description~(\ref{rov2})
(but not both) is correct. Strict
Schr\"{o}dinger evolution (i) is encapsulated
in Eq.~(\ref{rov2}), whereas Schr\"{o}dinger
evolution with collapse (ii) is encapsulated
in Eq.~(\ref{rov1}). As one may see it,
the issue is not that
the description depends on the observer,
but that with any precise definition there
is a specific problem. With
definition (i), quantum mechanics seems to
contradict experiment, while with definition (ii),
one needs to provide an explicit collapse
mechanism. Thus Rovelli's discussion, rather than
establishing the epistemic view, brings us back
to the foundational problem outlined in
Sec.~1.\footnote{In private exchanges, Rovelli
stressed that the relational character of quantum
theory is the solution of the apparent
contradiction between discarding Schr\"{o}dinger
evolution without collapse and not providing
an explicit collapse mechanism.
My criticism of its starting
point notwithstanding, much of the discussion
in Ref.~\cite{rovelli} is lucid and
thought-provoking.}
The epistemic view can also be propounded not
as a consequence of the quantum-mechanical
formalism, but as a way to solve
the foundational problems. It does so by denying
that the (in this context utterly misnamed) state
vector represents the state of a microscopic
system. Rather, the state vector represents
knowledge about the probabilities of results of
measurements performed in a given context with a
macroscopic apparatus, that is, information about
``the potential consequences of our experimental
interventions into nature''~\cite{fuchs}.
How does the epistemic view deal with the
measurement problem? It does so by construing
the collapse of the state vector not as a
physical process, but as a change of
knowledge~\cite{peierls}. Insofar as the state
vector is interpreted as objectively describing
the state of a physical system, its abrupt change
in a measurement
implies a similar change in the system, which
calls for explanation. If, on the other hand,
the state vector describes knowledge of
conditional probabilities (i.e.\ probabilities of
future macroscopic events conditional on past
macroscopic events), then as long as what is
conditionalized upon remains the same, the state
vector evolves unitarily. It collapses when the
knowledge base changes, thereby simply reflecting
the change in the conditions being held fixed
in the specification of probabilities.
The epistemic view also helps in removing the
clash between collapse and Lorentz
invariance~\cite{fuchs,bloch}. Take for
instance the EPR setup with two spin 1/2
particles, and let the state vector $|\chi\rangle$
of the compound system be an eigenstate of the
total spin operator with eigenvalue zero. One
can write
\begin{equation}
|\chi\rangle = \frac{1}{\sqrt{2}}
\left\{ |+; \mathbf{n} \rangle \otimes |-; \mathbf{n} \rangle -
|-; \mathbf{n} \rangle \otimes |+; \mathbf{n} \rangle \right\} ,
\end{equation}
where the first vector in a tensor product refers to
particle $A$ and the second vector to particle $B$.
The vector $|+; \mathbf{n} \rangle $, for instance,
stands for an eigenvector of the $\mathbf{n}$-component
of the particle's spin operator, with eigenvalue
$+1$ (in units of $\hbar/2$).
The unit vector $\mathbf{n}$ can point in any direction,
a freedom which corresponds to the
rotational symmetry of $|\chi\rangle$.
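This rotational symmetry is easy to verify numerically: the singlet expression yields the same vector for every choice of $\mathbf{n}$. The sketch below assumes the standard parametrization of spin eigenvectors by polar angles; the particular angles chosen are arbitrary.

```python
import numpy as np

def spin_basis(theta, phi=0.0):
    """Eigenvectors |+;n>, |-;n> of n.sigma, n given by polar angles (theta, phi)."""
    plus = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    minus = np.array([-np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)])
    return plus, minus

def singlet(theta, phi=0.0):
    """(|+;n> (x) |-;n> - |-;n> (x) |+;n>) / sqrt(2), in the fixed z-basis."""
    p, m = spin_basis(theta, phi)
    return (np.kron(p, m) - np.kron(m, p)) / np.sqrt(2)

chi_z = singlet(0.0)              # n along the z axis
chi_n = singlet(1.1, 0.7)         # an arbitrary direction n
overlap = abs(np.vdot(chi_z, chi_n))   # equals 1: the same state for every n
```

The antisymmetric two-qubit state is unique up to phase, which is exactly what the unit overlap confirms.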
Suppose Alice measures the $\mathbf{n}$-component
of $A$'s spin and obtains the value $+1$. If
the state vector represents the objective state of
a quantum system and if collapse is a physical
mechanism, then $B$'s state immediately collapses
to $|-; \mathbf{n}\rangle$. This explains why
Bob's subsequent measurement of the
$\mathbf{n}$-component of $B$'s spin yields the
value $-1$ with certainty. Thus Alice's choice
of axis at once determines the possible states
into which $B$ may collapse, and Alice's result
immediately singles out one of these states.
If the two measurements are spacelike separated,
there exist Lorentz frames where Bob's
measurement is earlier
than Alice's. In these frames the instantaneous
collapse is triggered by Bob's measurement.
This ambiguity, together with the fact that what
is instantaneous in one frame is not in others,
underscores the difficulty of reconciling
relativistic covariance with physical collapse.
In the epistemic view, what changes when Alice
performs a measurement is Alice's knowledge.
Bob's knowledge will change either if he himself
performs a measurement, or if Alice sends him
the result of her measurement by conventional
means. Hence there is no physical collapse
on a spacelike hypersurface.
To the proponents of the epistemic view,
the above arguments show that it considerably
attenuates, or even completely solves, the
problems associated with quantum measurements
and collapse. I will argue, however, that this
result is achieved only at the price of giving up
the search for a spelled out consistent view
of nature.\footnote{The epistemic view was
criticized from a different perspective by
Zeh~\cite{zeh}.}
\section{THE SEMANTIC VIEW OF THEORIES}
Investigations on the structure of scientific
theories and the way they relate to phenomena
have kept philosophers of science busy for
much of the twentieth century. In the past
few decades, the semantic view has emerged
as one of the leading approaches to these
problems~\cite{fraassen1,giere,suppe,fraassen2}.
Among other issues it helps clarify,
I believe it sheds considerable light
on the question of the
interpretation of quantum mechanics.
In the semantic view a scientific theory
is a structure, defined primarily by models
rather than by axioms or a specific linguistic
formulation. A general theoretical
hypothesis then asserts
that a class of real systems, under suitable
conditions of abstraction and idealization,
belongs to the class of models entertained
by the theory. If the theory is empirically
adequate, the real systems behave (e.g.\ evolve
in time) in a way predictable on the basis
of the models. Yet the empirical agreement
may leave considerable room on the way the
world can be for the theory to be true. This
is the question of interpretation.
This succinct characterization can best
be understood by means of examples. Take
classical particle mechanics. The (mathematical)
structure consists of constants $m_i$, functions
$\mathbf{r}_i (t)$, and vector fields $\mathbf{F}_i$
(understood as masses, positions, and forces),
together with the system of second-order differential
equations $\mathbf{F}_i = m_i \mathbf{a}_i$. A
particular model is a system of ten point masses
interacting through the $1/r^2$ gravitational
force. A theoretical hypothesis then
asserts that the solar system corresponds to
this model, if the sun and nine planets are
considered pointlike and all other objects
neglected. Predictions made on the basis of
the model correspond rather well with reality,
especially if suitable correction factors are
introduced in the process of abstraction.
But obviously the model can be made much more
sophisticated, taking into account for instance
the shape of the sun and planets, the planets'
satellites, interplanetary matter, and so on.
Now what does the theory have to say about
what a world of interacting masses is really like?
It turns out that such a world can be viewed
in (at least) two empirically equivalent but
conceptually very different ways. One can assert
that it is made only of small (or extended) masses
that interact by instantaneous action at a distance.
Or else, one can say that the masses produce
everywhere in space a gravitational field, which
then locally exerts forces on the masses. These
are two different interpretations of the theory.
Each one expresses a possible way of making the
theory true (assuming empirical adequacy).
Whether the world is such that masses instantaneously
interact at a distance in a vacuum, or a
genuine gravitational field is produced throughout
space, the theory can be held as truly realized.
A similar discussion can be made with
classical electromagnetism. The mathematical
structure consists of source
fields $\rho$ and $\mathbf{j}$
(understood as charge and current densities),
and vector fields $\mathbf{E}$ and $\mathbf{B}$
(electric and magnetic fields) related to the
former through Maxwell's equations. A function
$\mathbf{F}$ of the fields (the Lorentz force)
governs the motion of charge and current
carriers. A model of the theory might
be a perfectly conducting rectangular guide
with a cylindrical dielectric post, subject
to an incoming TE$_{10}$ wave.
Again, each specific model of interacting
charges and currents can be viewed in empirically
equivalent and conceptually different ways.
One can allow only retarded fields, or both
retarded and advanced fields~\cite{wheeler}.
If one assumes the existence of a complete
absorber, one can get rid of the fields
entirely, and view electrodynamics as a theory
of moving charges acting
on each other at a distance (though not
instantaneously). Although the interpretation
with genuine retarded fields is the one by far
the more widely accepted, the other interpretations
also provide a way by which Maxwell's theory can be
true.
Let us now turn to quantum mechanics. The
mathematical structure consists of state spaces
$\mathcal{H}$, state vectors $|\psi\rangle$, and
density operators $\rho$; of Hermitian operators
$A$, eigenvalues $a$, and eigenvectors $|a\rangle$;
of projectors $P_a$, etc. Defining quantum mechanics
as covering strict unitary evolution, state
vectors evolve according to the Schr\"{o}dinger
equation. The scope of the theory is specified
(minimally) by associating eigenvalues $a$ to results
of possible measurements, and quantities like
$\mbox{Tr} (\rho P_a)$ to corresponding probabilities.
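As a concrete instance of this minimal association, one can compute $\mbox{Tr} (\rho P_a)$ directly; the particular density matrix and observable below are illustrative assumptions.

```python
import numpy as np

rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])                # a valid qubit density matrix
a_vec = np.array([1.0, 1.0]) / np.sqrt(2)   # eigenvector |a> of A = sigma_x
P_a = np.outer(a_vec, a_vec)                # projector |a><a|
prob = float(np.trace(rho @ P_a))           # Born probability Tr(rho P_a) = 0.7
```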
One kind of real system that can be modelled
on the basis of the structure is
interference in a two-slit Young setup. Often
the system can conveniently be restricted to the
$xy$ plane. In a specific model
the source can be represented by
a plane wave moving in the $x$ direction, and the
slits can be taken as modulating gaussian wave
packets in the $y$ direction. These packets then
propagate according to the Schr\"{o}dinger
equation in free space and produce a wave
pattern on a screen. The absolute square of
the wave amplitude is associated with the
probability that a suitable detector will
click, as a function of its position on
the screen.
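The model just described is simple enough to compute directly. The sketch below is a toy version with a shared Gaussian envelope and exact path-length phases; every numerical parameter is an illustrative assumption.

```python
import numpy as np

k, d, L, w = 20.0, 5.0, 100.0, 8.0   # wavenumber, slit gap, screen distance, envelope width
y = np.linspace(-10.0, 10.0, 4001)   # transverse coordinate on the screen

def amp(y0):
    """Amplitude from the slit at y0: common Gaussian envelope, path-length phase."""
    path = np.sqrt(L**2 + (y - y0)**2)
    return np.exp(-(y / w)**2) * np.exp(1j * k * path)

both_open = np.abs(amp(+d / 2) + amp(-d / 2))**2                  # fringes
one_at_a_time = np.abs(amp(+d / 2))**2 + np.abs(amp(-d / 2))**2   # no fringes
```

Plotting `both_open` against `y` shows the maxima and minima of the interference pattern, while `one_at_a_time` (holes open alternately) is a smooth envelope with no fringes.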
From the semantic point of view, the question
of interpretation is the following: How can the
world be so that quantum models representing
Young setups (as well as other situations)
are empirically adequate? The Copenhagen answer
(or at least a variant of it) says that
micro-objects responsible for the fringes don't
have well-defined properties unless these are
measured, but that large scale apparatus always have
well-defined properties. I share the view
of those who believe that this answer is complete only
insofar as it precisely specifies the transition
scale between the quantum and the classical. Bohm's
answer to the above question is that all
particles always have precise positions,
and these positions are the ones that
show up on screens~\cite{philipidis}. The
many-worlds view is that different
detector outcomes simultaneously exist in
different worlds or in different minds. I shall
shortly attempt to assess the adequacy of
the epistemic view, but first we should look
at a world especially tailored to such an
interpretation.
\section{A WORLD FOR THE EPISTEMIC VIEW}
Consider a hypothetical world where large
scale objects (meaning objects much larger
than atomic sizes) behave, for all practical
purposes, like large scale objects in the
real world. The trajectories of javelins and
footballs can be computed accurately by means
of classical mechanics with the use of a
uniform downward force and air resistance.
Waveguides and voltmeters obey
Maxwell's equations. Steam engines and air
conditioners work according to the laws of
classical thermodynamics.
The motion of planets is well described by
Newton's laws of gravitation and of motion,
slightly corrected by the equations of
general relativity.
Close to atomic scales, however, these laws
may no longer hold. Except for one restriction
soon to be spelled out, I shall not be specific
about the changes that macroscopic laws may or
may not undergo in the microscopic realm. Matter, for
instance, could either be continuous down to
the smallest scales, or made of a small
number of constituent particles like our
atoms. The laws of particles and fields
could be the same at all scales, or else
they could undergo significant changes
as we probed smaller and smaller distances.
In the hypothetical world one can perform
experiments with pieces of equipment like
Young's two-slit setup and Stern-Gerlach
devices. The Young type experiment, for instance,
uses two macroscopic devices $E$ and $D$ that
both have on and off states and work in the
following way. Whenever $D$ is suitably
oriented with respect to $E$ (say, roughly
along the $x$ axis) and both are in
the on state, $D$ clicks in a more or less
random way. The average time interval
between clicks depends on the distance $r$
between $D$ and $E$, and falls roughly as
$1/r^2$. The clicking stops if a shield of a
suitable material is placed perpendicularly
to the $x$ axis, between $D$ and $E$.
If holes are pierced through the shield, however,
the clicking resumes. In particular, with
two small holes of appropriate size and
separation, differences in the clicking rate
are observed for small transverse displacements
of $D$ behind the shield. A
plot of the clicking rate against $D$'s
transverse coordinate displays maxima and
minima just as in a wave interference pattern.
No such maxima and minima are observed, however,
if just one hole is open or if both holes
are open alternately.
At this stage everything happens as if $E$
emitted some kind of particles and $D$
detected them, and the particles behaved
according to the rules of quantum mechanics.
Nevertheless, we shall not commit ourselves
to the existence or nonexistence of these
particles, except on one count. Such particles,
if they exist, are not in any way related to
hypothetical constituents of the material
making up $D$, $E$, or the shield, or of any
macroscopic object whatsoever. Whatever
the microscopic structure of macroscopic
objects, it has nothing to do with what is
responsible for the correlations between
$D$ and $E$.\footnote{Any objection to
the hypothetical world on grounds that
energy might not be conserved, action would
not entail corresponding reaction, and so on,
would miss the point, which is to clarify in what
context the epistemic view can be held
appropriately.}
In the hypothetical world one can also perform
experiments with Stern-Gerlach setups.
Again a macroscopic device $D'$ clicks more
or less randomly when suitably oriented
with respect to another macroscopic device
$E'$, both being in the on state.
If a large magnet (producing a strongly
inhomogeneous magnetic field) is placed
along the $x$ axis between $E'$ and
$D'$, clicks are no longer observed
where $D'$ used to be. Rather they are observed
if $D'$ is moved transversally to (say) two
different positions, symmetrically oriented
with respect to the $x$ axis. Let $\xi$
be the axis going from the first magnet
to one of these positions. If a second
magnet is put behind the first one
along the $\xi$ axis and $D'$ is placed
further behind, then $D'$ clicks in two
different positions in the plane
perpendicular to $\xi$. In the
hypothetical world the
average clicking rate depends on the magnet's
orientation, and it follows the
quantum-mechanical rules of spin 1/2 particles.
Once again, however, we assume that the
phenomenon responsible for the correlations
between $D'$ and $E'$ has nothing to do with
hypothetical constituents of macroscopic
objects.
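The spin-1/2 rules governing these clicking rates reduce to a one-line calculation: the transmission probability between two magnet orientations is the squared cosine of half the angle between them. The sketch below assumes both axes lie in a single plane; the angles are arbitrary.

```python
import numpy as np

def up_along(theta):
    """|+;n> for n in the xz-plane, at angle theta from the z axis."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

theta1, theta2 = 0.0, np.pi / 3   # orientations of the two magnets (assumed)
# Probability that a particle transmitted "up" along the first axis is then
# found "up" along the second axis: cos^2((theta2 - theta1) / 2).
p_plus = abs(np.vdot(up_along(theta2), up_along(theta1))) ** 2
# p_plus = cos^2(pi/6) = 0.75
```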
In the two experiments just described,
quantum mechanics correctly predicts
the correlations between $D$ and $E$
(or $D'$ and $E'$) when suitable
macroscopic devices are interposed
between them. In that situation, the
theory can be interpreted in (at least) two
broadly different ways. In one interpretation,
the theory is understood as applying to
genuine microscopic objects, emitted by $E$
and detected by $D$. Perhaps these objects
follow Bohmian like trajectories, or behave
between emitter and detector in some other
way compatible with quantum mechanics. In the
other interpretation, there are no microscopic
objects whatsoever going from $E$ to $D$.
There may be something like an action at a distance.
At any rate the theory is in that case
interpreted instrumentally,
for the purpose of quantitatively
accounting for correlations in the stochastic
behavior of $E$ and $D$.
In the hypothetical world being
considered, I would maintain
that both interpretations
are logically consistent and adequate. Both clearly
spell out how the world can possibly be the way
the theory says it is. Of course, each
particular investigator
can find more satisfaction in one
interpretation than in the other. The
epistemic view corresponds here to the
instrumentalist interpretation. It
simply rejects the existence of
microscopic objects that have no other use
than the one of predicting observed
correlations between macroscopic objects.
\section{INSUFFICIENCY OF THE EPISTEMIC VIEW}
We shall assume that macroscopic objects
exist and are always in definite
states.\footnote{By this assumption, I mean
that they are not in quantum superpositions
of macroscopically distinct states. They may
still be subject to the very tiny uncertainties
required by Heisenberg's principle.} Not everyone
agrees with this. Idealistic thinkers believe
there is nothing outside mind, and some
fruitful interpretations of quantum mechanics
(like the many-minds interpretation) claim we
are mistaken in assuming the definiteness of
macroscopic states.
For the purpose of asserting the relevance
of the epistemic view, however, these assumptions
can be maintained. Indeed much of the appeal
of the epistemic view is that it appears
to reconcile them with the exact validity of
quantum mechanics.
All scientists today believe that macroscopic
objects are in some sense made of atoms and
molecules or, more fundamentally, of electrons,
protons, neutrons, photons, etc. The epistemic
view claims that state vectors do not represent
states of microscopic objects, but knowledge of
probabilities of experimental results. I suggest
that with respect to atoms, electrons, and
similar entities this can mean broadly
one of three things:
\begin{enumerate}
\item Micro-objects do not exist~\cite{ulfbeck}.
\item Micro-objects may exist but they have no states.
\item Micro-objects may exist and may have states, but
attempts at narrowing down their existence or
specifying their states are useless, confusing, or
methodologically inappropriate.
\end{enumerate}
In the first case, the question that immediately
comes to mind is the following: How can
something that exists (macroscopic objects)
be made of something that does not exist
(micro-objects)?\footnote{I can find no answer
to this question in Ref.~\cite{ulfbeck} which,
however, fits remarkably well with the
hypothetical world of Sec.~4.}
And in the second case, we
can ask similarly: How can something that has a
state be made of something that does not have one?
Can we conclude from these interrogations
that the epistemic view is logically inconsistent?
Is the argument of the last paragraph
a \emph{reductio ad absurdum}? Not so. What the
questions really ask is the following:
How is it possible that the world be like that?
How, for instance, can we have a well-defined
macroscopic state starting from objects that do
not have states? This, we have seen, is
precisely the question of interpretation. Hence
if the epistemic view is asserted as in cases
(1) and (2), our discussion shows that it is
incomplete and paradoxical, and that the process
of completion and paradox resolution coincides
with the one of interpretation.\footnote{Although
a proponent of the epistemic view, Spekkens
suggests looking for ``the ontic states of
which quantum states are states of
knowledge''~\cite{spekkens}. A specific way of
making sense of quantum mechanics
understood as a probability algorithm was recently
proposed by Mohrhoff~\cite{mohrhoff}.}
To address the third case, it will help to focus on
a particular argument along this line, the one
recently put forth by Bub~\cite{bub}. Having
in mind mechanical extensions of the quantum theory
by means of nonclassical waves and particles, Bub
appeals to the following methodological principle:
\begin{quote}
[I]f $T'$ and $T''$ are empirically equivalent
extensions of a theory $T$, and if $T$ entails that,
in principle, there \emph{could not be} evidence
favoring one of the rival extensions $T'$ or $T''$,
then it is not rational to believe either $T'$ or
$T''$.
\end{quote}
Bub then contrasts the relationship between
the quantum theory and its mechanical extensions
like Bohmian mechanics, with the
relationship between classical thermodynamics
and its mechanical explanation through the
kinetic-molecular theory. In the latter case
there are empirical differences, as Einstein
showed in 1905 in the context of Brownian motion,
and as Perrin observed in 1909.
The realization that there may be empirical
differences between thermodynamics and the kinetic
theory of gases goes back to the
1860's~\cite{bader}, and is connected with the names
of Loschmidt and Maxwell. But the kinetic theory is
much older. Leaving aside the speculations of
the Greek atomists, we find atomism well alive
in the writings of Boyle and Newton in the seventeenth
and early eighteenth centuries. D.~Bernoulli
explained the pressure of a gas in terms of atomic
collisions in 1738, and Dalton used
atoms to explain the law of multiple proportions
around 1808. All this time though, there was little
indication that the empirical content of the atomic
models and the phenomenological theories might be
different.
One may argue whether it was rational
to believe in atoms before Loschmidt, but there
is no question that it was very fruitful. As a
matter of fact, one of the reasons for
contemplating empirically equivalent extensions
of theories is that they may open the way
to nonequivalent theories. This was clearly one
of Bohm's preoccupations in 1952~\cite{bohm1}.
Whether this perspective will bear
fruit remains to be seen.
In the semantic view, the empirically equivalent
mechanical extensions $T'$ and $T''$ of $T$ are
rather called interpretations, emphasizing that
each points to how the world can possibly be the
way the theory says it is. Equivalently, each shows
a way the theory can be true. In classical
physics, finding ways that the theory can be true
is usually nonproblematic, as was illustrated
in Sec.~3. This, however, is not the case with
the quantum theory. Every logically consistent
interpretation proposed should therefore be viewed
as adding to the understanding of the theory.
Bub's methodological principle states that it is
not rational to believe in empirically equivalent
extensions $T'$ or $T''$ of $T$, if there cannot in
principle be evidence favoring the extensions.
Presumably, however, it is rational to believe in
$T$ (assuming empirical adequacy). If $T$ is singled
out among its empirical equivalents, it must be
so on the basis of criteria other than empirical,
perhaps something like Ockham's razor. This comes
as no surprise since even within the class of
internally consistent theories, acceptance almost never
depends on empirical criteria alone.
What criteria, other than empirical, make for
theory acceptance has been the subject of lively
debate, and the question may never be settled.
But the problem of acceptance translates to
interpretations. Assume that a theory is
empirically adequate, and consider all the ways the
world can be for the theory to be so. Each of
these ways is an interpretation. Let an
interpretation be true just in case the world
is actually like it says it is. Necessarily,
then, an empirically adequate theory has at
least one true interpretation.
Now assume that, among all available
interpretations of a theory, no one is found
acceptable on a given nonempirical criterion.
Any proponent of the theory who believes
that the criterion is important is then faced
with the following choice. Either he hopes that an
interpretation meeting the criterion
will eventually be found and, whether or not
he actually looks for it, he grants that the
search is an important task; or he concludes
that the theory, in spite of being empirically
adequate, is not acceptable.
\section{CONCLUSION}
In attempting to solve the problems of
measurement and instantaneous collapse,
the epistemic view asserts that state vectors
do not represent states of objects like
electrons and photons, but rather the
information we have on the potential
consequences of our experimental interventions
into nature. I have argued that this picture
is adequate in a world where the theoretical
structure used in the prediction of these
consequences is independent of the one used
in the description of fundamental material
constituents.
In the world we live in, however, whatever
is responsible for clicks in Geiger counters
or cascades in photomultipliers also forms
the basis of material structure. The
epistemic view can be construed as denying
that micro-objects exist or have states, or
as being agnostic about any logically
coherent connection between them and
macroscopic objects. In the first case,
the solution it proposes to the foundational
problems is simply paradoxical, and calls for
an investigation of how it can be true. In
the second case, it posits the existence of
a link between quantum and macroscopic objects.
Once this is realized, the urge to investigate
the nature of this connection will not easily
subside.
\end{document} |
\begin{document}
\extrafloats{100}
\title{Generating sets for the Kauffman skein module of a family of Seifert fibered spaces}
\author{Jos\'e Rom\'an Aranda}
\author{Nathaniel Ferguson}
\maketitle
\begin{abstract}
We study spanning sets for the Kauffman bracket skein module $\mathcal{S}(M,\mathbb{Q}(A))$ of orientable Seifert fibered spaces with orientable base and non-empty boundary. As a consequence, we show that the KBSM of such manifolds is a finitely generated $\mathcal{S}(\partial M, \mathbb{Q}(A))$-module.
\end{abstract}
Skein modules are a useful tool to study 3-manifolds. Roughly speaking, a skein module captures the space of links in a given 3-manifold, modulo certain local (skein) relations between the links.
The choice of skein relations must strike a careful balance between providing interesting structure and ensuring that the structure is manageable \cite{fundamentals_KBSM}. The most studied skein module is the Kauffman bracket skein module, so named because the skein relations are the same relations used in the construction of the Kauffman bracket polynomial.
Let $\mathcal{R}$ be a ring containing an invertible element $A$. The \textbf{Kauffman bracket skein module} of a 3-manifold $M$ is defined to be the $\mathcal{R}$-module $\mathcal{S}(M,\mathcal{R})$ spanned by all framed links in $M$, modulo isotopy and the skein relations
\begin{center}
(K1): $\includegraphics[valign=c,scale=.45]{K+.png}= A\includegraphics[valign=c,scale=.45]{K_inf.png}+ A^{-1} \includegraphics[valign=c,scale=.5]{K_0.png}$
$ \quad \quad \quad$
(K2): $L\cup \includegraphics[valign=c,scale=.35]{U.png} = \left( -A^2-A^{-2}\right) L$.
\end{center}
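As a standard consequence of these relations (included here only as a warm-up; the sign depends on the usual conventions, under which the circle-producing smoothing of the kink carries the coefficient $A$), a positive kink changes a framed diagram by a fixed unit: resolving the kink's crossing with (K1) yields the plain strand together with a split unknot, and the plain strand, so (K2) gives
\[
A\left(-A^{2}-A^{-2}\right)+A^{-1}=-A^{3},
\]
that is, a positive kink multiplies a diagram by $-A^{3}$ (and, symmetrically, a negative kink by $-A^{-3}$).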
Throughout this note, when $\mathcal{R}$ is unspecified, it is assumed that $\mathcal{S}(M) = \mathcal{S}(M, \mathbb{Q}(A))$. Since its introduction by Przytycki \cite{KBSM_Przytycki} and Turaev \cite{KBSM_Turaev}, $\mathcal{S}(M,\mathcal{R})$ has been studied and computed for various 3-manifolds. It is difficult to describe $\mathcal{S}(M,\mathcal{R})$ for a given 3-manifold, although some results have been found\footnote{As summarized in \cite{fundamentals_KBSM}, this is not a complete list of 3-manifolds for which $\mathcal{S}(M,\mathcal{R})$ is known.}:
\begin{itemize}
\item $\mathcal{S}(S^3, \mathbb{Z}[A^{\pm 1}]) = \mathbb{Z}[A^{\pm 1}]$.
\item $\mathcal{S}(S^1\times S^2,\mathbb{Z}[A^{\pm1}])$ is isomorphic to $\mathbb{Z}[A^{\pm1}] \oplus \left( \bigoplus_{i=1}^{\infty} \mathbb{Z}[A^{\pm1}]/(1-A^{2i+4})\right)$ \cite{KBSM_S1S2_PH}.
\item $\mathcal{S}(L(p, q), \mathbb{Z}[A^{\pm 1}])$ is a free $\mathbb{Z}[A^{\pm 1}]$-module with $\lfloor p/2 \rfloor + 1$ generators \cite{KBSM_lens_PH,non_comm_torus}.
\item $\mathcal{S}(\Sigma \times [0,1], \mathbb{Z}[A^{\pm 1}])$ is a free module generated by multicurves in $\Sigma$ \cite{fundamentals_KBSM,confluence}.
\item $\mathcal{S}(\Sigma\times S^1, \mathbb{Q}(A))$ is a vector space of dimension $2^{2g+1}+2g-1$ if $\partial \Sigma=\emptyset$ \cite{KBSM_S1_uppper,KBSM_S1}.
\end{itemize}
In 2019, Gunningham, Jordan and Safronov proved that, for closed 3-manifolds, $\mathcal{S}(M,\mathbb{C}(A))$ is finite dimensional \cite{KBSM_finiteness}. However, for 3-manifolds with boundary, this problem is still open.
In \cite{KBSM_infinite}, Detcherry asked versions of a finiteness conjecture for the skein module of knot complements and general 3-manifolds (see Section 3 of \cite{KBSM_infinite} for a detailed exposition).
\begin{conj}[Finiteness conjecture for manifolds with boundary \cite{KBSM_infinite}]\label{conj_finiteness}
Let $M$ be a compact oriented 3-manifold. Then $\mathcal{S}(M)$ is a finitely generated $\mathcal{S}(\partial M, \mathbb{Q}(A))$-module.
\end{conj}
This paper studies the finiteness conjecture for a large family of Seifert fibered spaces (SFS). Let $\Sigma$ be an orientable surface of genus $g$ with $N$ boundary components. Let $n$, $b$ be non-negative integers with $N=n+b$. For each $i=1, \dots, n$, pick a pair of relatively prime integers $(q_i, p_i)$ satisfying $0<q_i < |p_i|$. The 3-manifold $\Sigma\times S^1$ has torus boundary components with horizontal meridians $\mu_i\subset \Sigma\times \{pt\}$ and vertical longitudes $\lambda_i=\{pt\} \times S^1$. Denote by $M\left(g; b,\{(q_i,p_i)\}_{i=1}^n\right)$ the result of Dehn filling the first $n$ tori of $\partial \left(\Sigma\times S^1\right)$ with slopes $q_i\mu_i + p_i\lambda_i$. Every SFS with orientable base orbifold is of the form $M\left(g; b,\{(q_i,p_i)\}_{i=1}^n\right)$ \cite{hatcher_3M}. The main result of this paper is to establish Conjecture \ref{conj_finiteness} for such SFS.
\newtheorem*{thm:conj_3_boundary}{Theorem \ref{thm_conj_3_boundary}}
\begin{thm:conj_3_boundary}
Let $\Sigma$ be an orientable surface of genus $g$ with non-empty boundary. Then $\mathcal{S}(\Sigma\times S^1)$ is a finitely generated $\mathcal{S}(\partial \Sigma\times S^1, \mathbb{Q}(A))$-module of rank at most $2^{2g+1}-1$.
\end{thm:conj_3_boundary}
\newtheorem*{thm:finite_SFS}{Theorem \ref{thm_finite_SFS}}
\begin{thm:finite_SFS}
Let $M=M\left(g; b,\{(q_i,p_i)\}_{i=1}^n\right)$ be an orientable Seifert fibered space with non-empty boundary. Suppose $M$ has orientable orbifold base. Then, $\mathcal{S}(M)$ is a finitely generated $\mathcal{S}(\partial M, \mathbb{Q}(A))$-module of rank at most $(2^{2g+1}-1) \prod_{i=1}^n (2q_i-1)$.
\end{thm:finite_SFS}
The following is a more general formulation of the finiteness conjecture.
\begin{conj}[Strong finiteness conjecture for manifolds with boundary \cite{KBSM_infinite}]\label{conjecture_strong_finiteness}
Let $M$ be a compact oriented 3-manifold. Then there exists a finite collection $\Sigma_1, \dots, \Sigma_k$ of essential subsurfaces $\Sigma_i \subset \partial M$ such that:
\begin{itemize}
\item for each $i$, the dimension of $H_1(\Sigma_i, \mathbb{Q})$ is half that of $H_1(\partial M, \mathbb{Q})$;
\item the skein module $\mathcal{S}(M)$ is a sum of finitely many subspaces $F_1, \dots, F_k$, where $F_i$ is a finitely generated $\mathcal{S}(\Sigma_i, \mathbb{Q}(A))$-module.
\end{itemize}
\end{conj}
We verify this conjecture for a subclass of SFS.
\newtheorem*{thm:conj2_SFS}{Theorem \ref{conj2_SFS}}
\begin{thm:conj2_SFS}
Seifert fibered spaces of the form $M\left(g; 1,\{(1,p_i)\}_{i=1}^{n}\right)$ satisfy Conjecture \ref{conjecture_strong_finiteness}. In particular, Conjecture \ref{conjecture_strong_finiteness} holds for $\Sigma_{g,1}\times S^1$.
\end{thm:conj2_SFS}
The techniques in this work are based on the ideas of Detcherry and Wolff in \cite{KBSM_S1}. For simplicity, we set $\mathcal{R} = \mathbb{Q}(A)$ by default, even though our statements work for any ring $\mathcal{R}$ such that $1 - A^{2m}$ is invertible for all $m > 0$. It would be interesting to see if the generating sets in this work can be upgraded to verify Conjecture \ref{conjecture_strong_finiteness} for all Seifert fibered spaces. Although there is no reason to expect the generating sets to be minimal, we wonder if the work of Gilmer and Masbaum in \cite{KBSM_S1_uppper} could yield similar lower bounds.
\textbf{Outline of the work.}
The sections in this paper build up to the proof of Theorems \ref{thm_finite_SFS} and \ref{conj2_SFS} in Section \ref{section_SFS}. Section \ref{section_prelims} introduces the arrowed diagrams which describe links in $\Sigma\times S^1$. We show basic relations among arrowed diagrams in Section \ref{section_relations_pants}. Section \ref{section_planar_case} proves that $\mathcal{S}(\Sigma_{0,N}\times S^1)$ is generated by boundary parallel diagrams.
Section \ref{section_non_planar_case} studies the positive genus case $\mathcal{S}(\Sigma_{g,N}\times S^1)$; we find a generating set over $\mathbb{Q}(A)$ in Proposition \ref{thm_boundary_case}.
In Section \ref{section_SFS}, we describe global and local relations between links in the skein module of Seifert fibered spaces. We use this to build generating sets in Section \ref{section_proof_41}.
\textbf{Acknowledgments.} This work is the result of a course at, and funding from, Colby College. The authors are grateful to Puttipong Pongtanapaisan for helpful conversations and Scott Taylor for all his valuable advice.
\section{Preliminaries}\label{section_prelims}
Most of the arguments in this paper will focus on finding relations among links in $\Sigma\times S^1$ for some compact orientable surface $\Sigma$. The main technique is the use of arrow diagrams introduced by Dabkowski and Mroczkowski in \cite{KBSM_pants_S1}. \\
An \textbf{arrow diagram} in $\Sigma$ is a generically immersed 1-manifold in $\Sigma$ with finitely many double points, together with crossing data at the double points and finitely many arrows on the embedded arcs. Such diagrams describe links in $\Sigma\times S^1$ as follows: Write $S^1 = [0,1]/\left(0\sim 1\right)$. Lift the knot diagram in $\Sigma\times \{1/2\}$ away from the arrows to a union of knotted arcs in $\Sigma\times [1/4,3/4]$, and interpret the arrows as vertical arcs intersecting $\Sigma\times\{1\}$ in the positive direction. We can use the surface framing on arrowed diagrams to describe framed links in $\Sigma\times S^1$.
\begin{figure}
\caption{Example of an arrowed diagram.}
\end{figure}
Arrowed diagrams have been used to study the skein module of $\Sigma_{0,3}\times S^1$ \cite{KBSM_pants_S1}, prism manifolds \cite{KBSM_prism}, the connected sum of two projective spaces \cite{KBSM_RP3RP3}, and $\Sigma_g \times S^1$ \cite{KBSM_S1}.
\begin{prop}[\cite{KBSM_pants_S1}]
Two arrowed diagrams of framed links in $\mathcal{S}igma \times S^1$ correspond to isotopic links if and only if they are related by standard Reidemeister moves $R'_1$, $R_2$, $R_3$ and the moves
\begin{center}
(R4): $\includegraphics[valign=c,scale=.5]{R4_1.png}\sim \includegraphics[valign=c,scale=.5]{R4_2.png}\sim \includegraphics[valign=c,scale=.5]{R4_3.png} \quad \quad \quad \quad $
\text{(R5):} $\includegraphics[valign=c,scale=.5]{R5_1.png} \sim \includegraphics[valign=c,scale=.5]{R5_2.png}$.
\end{center}
\end{prop}
From relation $R_4$, we only need to focus on the total number of arrows between crossings. We will keep track of them by writing a number $n\in \mathbb{Z}$ next to an arrow. Negative values of $n$ correspond to $|n|$ arrows in the opposite direction.
Throughout this work, a \textbf{simple arrowed diagram} (or arrowed multicurve) will denote an arrowed diagram with no crossings.
A simple closed curve in $\Sigma$ will be said to be \textbf{trivial} if it bounds a disk. We will sometimes refer to trivial curves bounding disks disjoint from a given diagram as unknots. Loops parallel to the boundary will not be considered trivial. A simple closed curve will be \textbf{essential} if it does not bound a disk nor is parallel to the boundary in $\Sigma$.
We can always resolve the crossings of an arrowed diagram via skein relations. Thus, every element in $\mathcal{S}(\Sigma\times S^1)$ can be written as a $\mathbb{Z}[A^{\pm1}]$-linear combination of arrowed diagrams with no crossings. The following equation will permit us to disregard arrowed unknots, since we can merge them with other loops.
\begin{equation}\label{eqn_23}
\includegraphics[valign=c,scale=.4]{fig_23_1_n.png} = \includegraphics[valign=c,scale=.4]{fig_23_1_1n-1.png}
\quad \implies \quad
A \includegraphics[valign=c,scale=.4]{fig_23_2_n.png} + A^{-1} \includegraphics[valign=c,scale=.4]{fig_23_2_0n.png} = A^{-1} \includegraphics[valign=c,scale=.4]{fig_23_2_n-2.png} + A \includegraphics[valign=c,scale=.4]{fig_23_2_1n-1.png}.
\end{equation}
Equation \eqref{eqn_23} implies Proposition \ref{prop_2.3}.
\begin{prop}[\cite{KBSM_S1}]\label{prop_2.3}
The skein module $\mathcal{S}(\Sigma\times S^1)$ is spanned by arrowed multicurves containing no trivial component, and by the arrowed multicurves consisting of just one arrowed unknot with some number of boundary parallel arrowed curves.
\end{prop}
\begin{defn}[Dual graph \cite{KBSM_S1}]
Let $\gamma\subset \Sigma$ be an arrowed multicurve.
Let $c$ be the multicurve consisting of one copy of each isotopy class of separating essential loop in $\gamma$. Let $V$ be the set of connected components of $\Sigma-c$. For $v\in V$, denote by $\Sigma(v)\subset \Sigma$ the corresponding connected component of $\Sigma-c$. Two distinct vertices share an edge $(v_1, v_2) \in E$ if the subsurfaces $\Sigma(v_1)$ and $\Sigma(v_2)$ have a common boundary component. Define the \textbf{dual graph} of $\gamma$ to be the graph $\Gamma(\gamma)=(V,E)$.
\end{defn}
\subsection{Relations between skeins}\label{section_relations_pants}
We now study some operations among arrowed multicurves in $\mathcal{S}(\Sigma\times S^1)$ that change the number of arrows in a controlled way. Although all the relations take place in a three-holed sphere, we state them separately for expository purposes.
In practice, a vertical strand will be part of a concentric circle. Lemma \ref{lem_popping_arrows} states that we can `pop out' the arrows from a loop with the desired sign (of $y$ and $x$) without increasing the number of arrows in the diagrams. Lemma \ref{lem_flip_unknot} states that we can change the sign of the arrows in an unknot at the expense of adding skeins with fewer arrows. Lemma \ref{lem_merging_unknots} allows us to `break' and `merge' the arrows in between two unknots. Lemma \ref{lem_pushing_arrows} lets us pass arrows between parallel (or nested) loops, and Lemma \ref{lem_crossing_borders} is an explicit case of Equation \eqref{eqn_23}.
The symbol $\includegraphics[scale=.65]{blue_star.png}$ in Lemmas \ref{lem_merging_unknots} and \ref{lem_pushing_arrows} will correspond to any subsurface of $\Sigma$. In practice, $\includegraphics[scale=.65]{blue_star.png}$ will correspond to a boundary component of $\Sigma$ or an exceptional fiber in Section \ref{section_SFS}.
\begin{lem} \label{lem_popping_arrows}
For any $m\in \mathbb{Z}-\{0\}$,
\begin{enumerate}[label=(\roman*)]
\item
$ \includegraphics[valign=c,scale=.55]{fig_14_m.png} \in \mathbb{Z}[A^\pm] \bigg\{ \quad\includegraphics[valign=c,scale=.55]{fig_14_yx.png}\quad : mx\geq 0, y\in \{0,1\}, y+|x|\leq |m|\bigg\}$.
\item
$ \includegraphics[valign=c,scale=.55]{fig_14_m.png} \in \mathbb{Z}[A^\pm] \bigg\{ \quad\includegraphics[valign=c,scale=.55]{fig_14_yx.png}\quad : mx\geq 0, y\in \{0,-1\}, |y|+|x|\leq |m|\bigg\}$.
\end{enumerate}
\end{lem}
\begin{proof}
Add one arrow pointing upwards at the top end of the arcs in Equation \eqref{eqn_23} and set $n=m-1$. We obtain the following equation
\begin{equation}\label{eqn_23+}
A \includegraphics[valign=c,scale=.5]{fig_14_m.png} + A^{-1} \includegraphics[valign=c,scale=.5]{fig_14_1m-1.png} = A^{-1} \includegraphics[valign=c,scale=.5]{fig_14_m-2.png} + A \includegraphics[valign=c,scale=.5]{fig_14_0m-2.png}.
\end{equation}
If $m>0$, we can solve for \includegraphics[valign=c,scale=.5]{fig_14_m.png} and use it inductively to show Part (i). If $m<0$, we can instead solve for \includegraphics[valign=c,scale=.5]{fig_14_m-2.png} and set $m'=m-2$. This new equation can be used to prove Part (i) for $m'<0$. \\
Part (ii) is similar. Start with Equation \eqref{eqn_23} with $m=n$ and solve for \includegraphics[valign=c,scale=.5]{fig_14_m.png} to prove Part (ii) for $m>0$. If $m<0$, set $m=n-2$ in Equation \eqref{eqn_23}.
\end{proof}
\begin{lem}[Proposition 4.2 of \cite{KBSM_S1}]\label{lem_flip_unknot}
Let $S_k$ be an unknot in $\Sigma$ with $k\in \mathbb{Z}$ arrows oriented counterclockwise. The following holds for $n\geq 1$,
\begin{enumerate}[label=(\roman*)]
\item $S_1 = A^6 S_{-1}$
\item $S_{-n} = A^{-(2n+4)} S_{n}$ modulo $\mathbb{Q}(A)\{ S_0, \dots, S_{n-1}\}$.
\item $S_{n} = A^{2n+4} S_{-n}$ modulo $\mathbb{Q}(A)\{ S_{-(n-1)}, \dots, S_{0}\}$.
\end{enumerate}
\end{lem}
\begin{lem}\label{lem_merging_unknots}
Let $a, b \in \mathbb{Z}$ with $ab>0$. Then
\begin{enumerate}[label=(\roman*)]
\item $\includegraphics[valign=c,scale=.5]{fig_16_ab.png} +A^{\frac{2a}{|a|}} \includegraphics[valign=c,scale=.5]{fig_16_a+b.png} \in \mathbb{Z}[A^{\pm1}] \big\{ \includegraphics[valign=c,scale=.5]{fig_16_x.png} : 0\leq ax, 0\leq |x|<|a|+|b|\big\}$.
\item $\includegraphics[valign=c,scale=.5]{fig_16_a0.png} \in \mathbb{Z}[A^{\pm1}] \big\{ \includegraphics[valign=c,scale=.5]{fig_16_x.png} : 0\leq |x|\leq|a|\big\}$.
\end{enumerate}
\end{lem}
\begin{proof}
Suppose first that $a, b>0$. Using R4 we obtain $\includegraphics[valign=c,scale=.5]{merging_1_ax.png} = \includegraphics[valign=c,scale=.5]{merging_1_a-1x.png}$. Thus,
\begin{equation}
\includegraphics[valign=c,scale=.5]{merging_ax+1.png} +A^2 \includegraphics[valign=c,scale=.5]{merging_a+x+1.png} = A^2 \includegraphics[valign=c,scale=.5]{merging_a-1x.png} + \includegraphics[valign=c,scale=.5]{merging_a+x-1.png}.
\end{equation}
By setting $x=0$, the statement follows for $b=1$ and all $a\geq 1$. For general $b\geq 1$, we proceed by induction on $a\geq 1$ setting $x=b$ in the equation above.
The proof of the case $ab<0$ uses the equation above after the change of variables $a=-a'+1$ and $x=-x'$. Part (ii) follows from Equation \eqref{eqn_23} with $n=a$.
\end{proof}
\begin{lem} \label{lem_pushing_arrows}
For all $a,b\in \mathbb{Z}$,
\begin{align*}
&\text{(i) }
\includegraphics[valign=c,scale=.35]{pushing_p_a_b.png} = A^2 \includegraphics[valign=c,scale=.35]{pushing_p_a-1_b+1.png} + \includegraphics[valign=c,scale=.45]{pushing_p_b-a+2.png} - A^2 \includegraphics[valign=c,scale=.45]{pushing_p_b-a.png}.\\
&\text{(ii) }
\includegraphics[valign=c,scale=.35]{pushing_p_a_b.png} = A^{-2} \includegraphics[valign=c,scale=.35]{pushing_p_a+1_b-1.png} + \includegraphics[valign=c,scale=.45]{pushing_p_b-a-2.png} - A^{-2} \includegraphics[valign=c,scale=.45]{pushing_p_b-a.png}. \\
&\text{(iii) }
\includegraphics[valign=c,scale=.35]{pushing_n_a_b.png} = A^2 \includegraphics[valign=c,scale=.35]{pushing_n_a-1_b+1.png} + \includegraphics[valign=c,scale=.35]{pushing_n_b-a+2.png} - A^2 \includegraphics[valign=c,scale=.35]{pushing_n_b-a.png}.\\
&\text{(iv) }
\includegraphics[valign=c,scale=.35]{pushing_n_a_b.png} = A^{-2} \includegraphics[valign=c,scale=.35]{pushing_n_a+1_b-1.png} + \includegraphics[valign=c,scale=.35]{pushing_n_b-a-2.png} - A^{-2} \includegraphics[valign=c,scale=.35]{pushing_n_b-a.png}.\\
\end{align*}
\end{lem}
\begin{proof}
One can use (K1) on the LHS of each equation to create a new crossing. The result follows from (R4).
\end{proof}
\begin{lem} \label{lem_crossing_borders}
$\quad$
\[ \includegraphics[valign=c,scale=.55]{fig_18_a.png} = -A^{4} \includegraphics[valign=c,scale=.55]{fig_18_b.png} - A^{2} \includegraphics[valign=c,scale=.55]{fig_18_c.png}.\]
\end{lem}
\begin{proof}
Rotate Equation \eqref{eqn_23} by 180 degrees and set $n=1$.
\end{proof}
\section{Planar case}\label{section_planar_case}
Fix a planar subsurface $\Sigma' \subset \Sigma$ with at least 4 boundary components. The goal of this section is to prove Proposition \ref{thm_planar_case}, which states that $\mathcal{S}(\Sigma'\times S^1)$ is generated by arrowed diagrams with $\partial$-parallel arrowed curves only. In particular, the dimension of $\mathcal{S}(\Sigma'\times S^1)$ as a module over its boundary is one, generated by the empty link.
We will study diagrams in linear pants decompositions. These are pants decompositions of $\Sigma'$ with dual graph isomorphic to a line. See Figure \ref{fig_linear_pants} for a concrete picture. Linear decompositions have $N=|\chi(\Sigma')|\geq 2$ pairs of pants. By fixing a linear pants decomposition, there is a well-defined notion of left and right ends of $\Sigma'$. We denote the specific curves of a linear pants decomposition as in Figure \ref{fig_linear_pants}.
We think of such decompositions as the planar analogues of the sausage decompositions of positive genus surfaces in \cite{KBSM_S1}.
\begin{figure}
\caption{Linear pants decomposition for spheres with holes.}
\end{figure}
The \textbf{main idea of Proposition \ref{thm_planar_case}} is to `push' loops parallel to $l_i$ in a linear pants decomposition towards the boundary $\partial \Sigma'$ in both directions. We do this with the help of the $\Delta$-maps from Definition \ref{defn_delta}; $\Delta_+$ `pushes' loops towards the left and $\Delta_-$ towards the right (see Lemma \ref{equation_1_planar}). This idea is based on Section 3.3 of \cite{KBSM_S1}, where the authors proved a version of Proposition \ref{thm_planar_case} for closed surfaces. The following definition helps us keep track of the arrowed curves in the boundary.
\begin{defn}[Diagrams in linear pants decompositions]\label{defn_linear_pants}
Fix a linear pants decomposition of $\Sigma'$ and integers $m\geq 0$, $k_0 \in \{1, \dots, N-1\}$. For each $k\in \{0,\dots, N\} - \{k_0\}$, $a\in \mathbb{Z}$, and $v\in \{0,1,\emptyset\}^N$, we define the arrowed multicurves $D^k_{a,v}$ in $\Sigma'$ as follows: $D^{k}_{a,v}$ has one copy of $l_k$ with $a$ arrows, $m$ copies of $l_{k_0}$ with no arrows, and one copy of $c_i$ with $v_i$ arrows if $v_i=0,1$, and no curve $c_i$ if $v_i=\emptyset$. Notice that the positive direction of the arrows on the curves $c_i$ depends on the (left/right) position of $c_i$ with respect to $l_{k_0}$.
If $k=k_0$, we define ${}_lD^{k_0}_{a,v}$ (resp. ${}_rD^{k_0}_{a,v}$) as before with the condition that the left-most (resp. right-most) copy of $l_{k_0}$ contains $a$ arrows.
\end{defn}
\begin{figure}
\caption{Definition of $D^k_{a,v}$.}
\end{figure}
\begin{lem}[Lemma 3.11 of \cite{KBSM_S1}]\label{lem_3.11}
The following holds for any two parallel curves,
\[ A^{-1}\includegraphics[valign=c,scale=.4]{fig_311_a+1_b.png} + A \includegraphics[valign=c,scale=.4]{fig_311_a-b+1.png} = A \includegraphics[valign=c,scale=.4]{fig_311_a_b+1.png} + A^{-1} \includegraphics[valign=c,scale=.4]{fig_311_a-b-1.png}.\]
In particular, for any $a \in \mathbb{Z}$, $m\geq 0$, and $v\in \{0,1,\emptyset\}^N$, we have
\[ {}_lD^{k_0}_{a,v} \cong A^{2ma} {}_rD^{k_0}_{a,v}, \]
modulo $\mathbb{Z}[A^{\pm1}]$-linear combinations of diagrams with fewer non-trivial loops.
\end{lem}
Lemma \ref{lem_3.13} allows us to change the location of the $a$ arrows in the diagram $D$ at the expense of changing the vector $v$. Its proof follows from Proposition 3.5 of \cite{KBSM_S1}.
\begin{lem}\label{lem_3.13}
The following equations hold for $k\in \{1,\dots, N-1\}$.
\begin{enumerate}[label=(\roman*)]
\item If $k>k_0$, then
\[ A D^k_{a,(\dots, v_k, \emptyset ,\dots)} - A^{-1}D^k_{a+2,(\dots, v_k, \emptyset, \dots)} = A D^{k+1}_{a+1,(\dots, v_k, 1, \emptyset,\dots)} - A D^{k+1}_{a,(\dots, v_k, 0, \emptyset,\dots)}. \]
\item If $k=k_0$, then
\[ A {}_r D^{k_0}_{a,(\dots, v_k, \emptyset,\dots)} - A^{-1} {}_r D^{k_0}_{a+2,(\dots, v_k, \emptyset,\dots)} = A D^{k_0+1}_{a+1,(\dots, v_k, 1, \emptyset,\dots)} - A^{-1} D^{k_0+1}_{a,(\dots, v_k, 0, \emptyset,\dots)}. \]
\item If $k = k_0$, then
\[ A {}_l D^{k_0}_{a+2,(\dots, \emptyset, v_{k_0}, \dots)} - A^{-1} {}_l D^{k_0}_{a,(\dots, \emptyset, v_{k_0}, \dots)} = A D^{k_0-1}_{a,(\dots,\emptyset, 0, v_{k_0}, \dots)} - A^{-1} D^{k_0-1}_{a+1,(\dots,\emptyset, 1, v_{k_0}, \dots)}. \]
\item If $k < k_0$, then
\[ A D^{k}_{a+2,(\dots, \emptyset, v_{k}, \dots)} - A^{-1} D^{k}_{a,(\dots, \emptyset, v_{k}, \dots)} = A D^{k-1}_{a,(\dots,\emptyset, 0, v_{k}, \dots)} - A^{-1} D^{k-1}_{a+1,(\dots,\emptyset, 1, v_{k}, \dots)}. \]
\end{enumerate}
\end{lem}
\begin{defn}[$\Delta$-maps]\label{defn_delta}
Following \cite{KBSM_S1}, let $V^{\partial \Sigma'}$ be the subspace of $\mathcal{S}(\Sigma'\times S^1)$ generated by arrowed diagrams with only trivial loops and boundary parallel curves in $\Sigma'$. Let $V$ be the formal vector space over $\mathbb{Q}(A)$ spanned by the diagrams $D^k_{a,v}$, ${}_l D^{k_0}_{a,v}$, and ${}_r D^{k_0}_{a,v}$ for all $a\in \mathbb{Z}$, $v\in \{0,1,\emptyset\}^{N}$, and $k\in \{0,\dots, N\} \setminus \{k_0\}$. Define the linear map $s: V\rightarrow V$ by $s(D^k_{a,v}) = D^k_{a+2,v}$ (similarly for ${}_l D^{k_0}_{a,v}$ and ${}_r D^{k_0}_{a,v}$). Define the maps $\Delta_-$, $\Delta_+$, and $\Delta_{+, m}$ by
\[ \Delta_- = A-A^{-1}s, \quad \Delta_+=As-A^{-1}, \quad \Delta_{+,m} = A^{4m+1}s - A^{-1}.\]
\end{defn}
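Note that $\Delta_-$, $\Delta_+$, and $\Delta_{+,m}$ are all polynomials in the single operator $s$ with coefficients in $\mathbb{Q}(A)$, so any two $\Delta$-maps commute. For instance,
\[ \Delta_- \circ \Delta_+ = (A-A^{-1}s)(As-A^{-1}) = A^2s - 1 - s^2 + A^{-2}s = \Delta_+ \circ \Delta_-. \]
This commutativity is used implicitly below, for example in the proof of Proposition \ref{prop_genus_case}.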
Combinations of $\Delta$-maps, together with Lemmas \ref{equation_1_planar} and \ref{lem_3.14}, will show that $V\subset V^{\partial \Sigma'}$.
\begin{lem}\label{equation_1_planar}
Let $o(e)$ and $z(e)$ denote the number of ones and the number of zeros, respectively, of a vector $e\in \{0,1\}^n$.
\begin{enumerate}[label=(\roman*)]
\item The following equation holds for all $1\leq n \leq k_0$.
\begin{equation}\label{eqn_1_planar}
\Delta^n_+ \left( {}_l D^{k_0}_{a, (\dots, \emptyset, \dots)} \right)= \sum_{e \in \{0,1\}^n} (-1)^{o(e)} A^{z(e) - o(e)} D^{k_0 - n}_{a+o(e), (\dots, \emptyset, e, \emptyset, \dots)}, \end{equation}
where $e=(e_1, \dots, e_n)$ is located so that $v_{k_0} = e_n$.
\item The following equation holds for all $1\leq n \leq N-k_0$.
\begin{equation}\label{eqn_2_planar}
\Delta^n_- \left( {}_r D^{k_0}_{a, (\dots, \emptyset, \dots)} \right)= \sum_{e \in \{0,1\}^n} (-1)^{z(e)} A^{o(e) - z(e)} D^{k_0 + n}_{a+o(e), (\dots, \emptyset, e, \emptyset, \dots)},
\end{equation}
where $e=(e_1, \dots, e_n)$ is located so that $v_{k_0} = e_1$.
\end{enumerate}
\end{lem}
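For instance, when $k_0\geq 2$ and $n=2$, Equation \eqref{eqn_1_planar} has one summand for each $e\in\{0,1\}^2$:
\begin{align*}
\Delta^2_+ \left( {}_l D^{k_0}_{a, (\dots, \emptyset, \dots)} \right) = \; & A^{2}\, D^{k_0-2}_{a,(\dots,\emptyset,0,0,\emptyset,\dots)} - D^{k_0-2}_{a+1,(\dots,\emptyset,0,1,\emptyset,\dots)} \\
& - D^{k_0-2}_{a+1,(\dots,\emptyset,1,0,\emptyset,\dots)} + A^{-2}\, D^{k_0-2}_{a+2,(\dots,\emptyset,1,1,\emptyset,\dots)}.
\end{align*}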
\begin{proof}
We prove Equation \eqref{eqn_1_planar}; the proof of Equation \eqref{eqn_2_planar} is symmetric and is left to the reader.
Lemma \ref{lem_3.13} with $k=k_0$ gives the case $n=1$. We proceed by induction on $n$ and suppose that Equation \eqref{eqn_1_planar} holds for some $1\leq n \leq k_0-1$. Using Lemma \ref{lem_3.13} with $k<k_0$, we show the inductive step as follows:
\begin{align*}
\Delta^{n+1}_{+}\left( {}_l D^{k_0}_{a, \emptyset}\right) = & \left(As - A^{-1}\right) \circ \Delta^{n}_{+} \left( {}_l D^{k_0}_{a, \emptyset}\right) \\
= & \sum_{e \in \{0,1\}^n} (-1)^{o(e)} A^{1+z(e) - o(e)}D^{k_0 - n}_{a+2+o(e), (\dots, \emptyset, e, \emptyset, \dots)} \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad -(-1)^{o(e)} A^{-1 + z(e) -o(e)} D^{k_0 - n}_{a+o(e), (\dots, \emptyset, e, \emptyset, \dots)} \\
= & \sum_{e\in \{0,1\}^n} (-1)^{o(e)} A^{z(e) -o(e)} \left[ AD^{k_0 - n}_{a+o(e)+2, (\dots, \emptyset, e, \emptyset, \dots)} - A^{-1} D^{k_0 - n}_{a+o(e) , (\dots, \emptyset, e, \emptyset, \dots)} \right] \\
= & \sum_{e\in \{0,1\}^n} (-1)^{o(e)} A^{z(e) -o(e)} \left[ AD^{k_0 - n-1}_{a+o(e), (\dots, \emptyset,0, e, \emptyset, \dots)} - A^{-1} D^{k_0 - n-1}_{a+o(e) +1, (\dots, \emptyset,1, e, \emptyset, \dots)} \right] \\
= & \sum_{e\in \{0,1\}^n} (-1)^{o(e)} A^{1+z(e) - o(e)}D^{k_0 - n-1}_{a+o(e), (\dots, \emptyset,0, e, \emptyset, \dots)} \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad +(-1)^{o(e)+1} A^{z(e) -o(e)-1} D^{k_0 - n-1}_{a+o(e)+1, (\dots, \emptyset,1, e, \emptyset, \dots)} \\
= & \sum_{e \in \{0,1\}^{n+1}} (-1)^{o(e)} A^{z(e) - o(e)} D^{k_0 - (n+1)}_{a+o(e), (\dots, \emptyset, e, \emptyset, \dots)}.
\end{align*}
\end{proof}
\begin{lem} \label{lem_3.14}
For any $a \in \mathbb{Z}$, we have $\Delta^{k_0}_{+} \left({}_l D^{k_0}_{a,\emptyset}\right), \Delta^{N-k_0}_{-} \left({}_r D^{k_0}_{a,\emptyset}\right)\in V^{\partial \Sigma'}$. Furthermore, $\Delta^{k_0}_{+} \left({}_l D^{k_0}_{a,\emptyset}\right)$ is a linear combination of elements of the form $D^{0}_{a',(v_1,\dots, v_{k_0},\emptyset,\dots, \emptyset)}$, and $\Delta^{N-k_0}_{-} \left({}_r D^{k_0}_{a,\emptyset}\right)$ is a linear combination of elements $D^{N}_{a',(\emptyset,\dots, \emptyset, v_{k_0+1}, \dots, v_N)}$.
\end{lem}
\begin{proof}
Setting $n=k_0$ in Equation \eqref{eqn_1_planar} yields $\Delta^{k_0}_{+} \left({ }_l D^{k_0}_{a,\emptyset}\right)\in V^{\partial \Sigma'}$ and the first half of the statement.
The second conclusion, $\Delta^{N-k_0}_{-} \left({ }_r D^{k_0}_{a,\emptyset}\right)\in V^{\partial \Sigma'}$, follows by setting $n=N-k_0$ in Equation \eqref{eqn_2_planar}.
\end{proof}
\begin{prop}\label{prop_3.12}
${ }_l D^{k_0}_{a,v}$ and ${ }_r D^{k_0}_{a,v}$ lie in $V^{\partial \Sigma'}$ for any $a\in \mathbb{Z}$ and $v\in \{0,1,\emptyset\}^N$.
\end{prop}
\begin{proof}
By pushing the boundary parallel curves `outside' $\Sigma'$, it is enough to show the proposition for $v= \emptyset$.
Using Lemma \ref{lem_3.11}, modulo arrowed multicurves with fewer non-trivial loops, we get that
$ s\left( { }_l D^{k_0}_{a,\emptyset}\right) = { }_l D^{k_0}_{a+2,\emptyset} \cong A^{2m(a+2)} { }_r D^{k_0}_{a+2,\emptyset}$.
Thus,
\begin{align*}
\Delta_{+} \left( { }_l D^{k_0}_{a,\emptyset}\right) = & As\left( { }_l D^{k_0}_{a,\emptyset} \right) - A^{-1} { }_l D^{k_0}_{a,\emptyset} \\
\cong & A^{4m+1} A^{2ma} { }_r D^{k_0}_{a+2,\emptyset} - A^{-1} A^{2ma} { }_r D^{k_0}_{a,\emptyset}\\
= & A^{2ma} \left[ A^{4m+1}s - A^{-1}\right] \left( { }_r D^{k_0}_{a,\emptyset}\right) \\
= & A^{2ma} \Delta_{+,m} \left( { }_r D^{k_0}_{a,\emptyset}\right).
\end{align*}
Hence, up to sums of curves with fewer non-trivial loops in $\Sigma'$, Lemma \ref{lem_3.14} implies \[\Delta^{k_0}_{+,m} \left( { }_r D^{k_0}_{a,\emptyset}\right), \Delta^{N-k_0}_{-} \left( { }_r D^{k_0}_{a,\emptyset}\right) \in V^{\partial\Sigma'}.\]
Finally, observe that
$ A^{-1} \Delta_{+,m} + A^{4m+1} \Delta_- = \left( A^{4m+2} - A^{-2} \right) Id_V$.
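Indeed, writing both maps as polynomials in $s$,
\[ A^{-1}\left(A^{4m+1}s - A^{-1}\right) + A^{4m+1}\left(A - A^{-1}s\right) = A^{4m}s - A^{-2} + A^{4m+2} - A^{4m}s = \left(A^{4m+2} - A^{-2}\right) Id_V. \]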
This yields
\[{ }_r D^{k_0}_{a,\emptyset} = Id_{V}^{N} \left( { }_r D^{k_0}_{a,\emptyset}\right) = \frac{1}{\left( A^{4m+2} - A^{-2} \right)^N } \left( A^{-1} \Delta_{+,m} + A^{4m+1} \Delta_- \right)^N \left( { }_r D^{k_0}_{a,\emptyset}\right).\]
The result follows since $\Delta^{k_0}_{+,m} \left( { }_r D^{k_0}_{a,\emptyset}\right)$ and $\Delta^{N-k_0}_{-} \left( { }_r D^{k_0}_{a,\emptyset}\right)$ are both elements of $V^{\partial \Sigma'}$: when the $N$-th power is expanded, every summand contains a factor of $\Delta^{k_0}_{+,m}$ or $\Delta^{N-k_0}_{-}$.
\end{proof}
We are ready to describe an explicit generating set for $\mathcal{S}(\Sigma\times S^1)$ for any planar surface $\Sigma$.
\begin{prop} \label{thm_planar_case}
Let $\Sigma$ be an $N$-holed sphere with $N\geq 1$. Then $\mathcal{S}(\Sigma \times S^1)$ is generated by arrowed unknots and $\partial$-parallel arrowed multicurves.
\end{prop}
\begin{proof}
Proposition \ref{thm_planar_case} is equivalent to the statement that $\mathcal{S}(\Sigma \times S^1)$ is generated by arrowed multicurves with dual graphs isomorphic to a point.
Let $\gamma$ be an arrowed multicurve in $\Sigma$ with $\Gamma(\gamma)\neq \{pt\}$. Let $e =(v_1,v_2)$ be a fixed edge of $\Gamma(\gamma)$, and let $\Sigma' \subset \Sigma$ be the subsurface $\Sigma(v_1) \cup \Sigma(v_2)$.
By Lemma \ref{lem_3.11}, up to curves of smaller degree, we can arrange the arrows in the loops corresponding to $e$ so that only one loop (the closest to $\Sigma(v_2)$) may have arrows.
By construction, $\gamma \cap \Sigma'$ has one isotopy class of separating non-$\partial$-parallel curves in $\Sigma'$. Thus, there exist a linear pants decomposition for $\Sigma'$ and integers $a\in \mathbb{Z}$, $k_0 \in \{1, \dots, |\chi(\Sigma')|-1\}$ so that $\gamma \cong {}_r D^{k_0}_{a,\emptyset}$ (we focus on the non-$\partial$-parallel components of $\gamma\cap \Sigma'$). Proposition \ref{prop_3.12} states that ${}_r D^{k_0}_{a,\emptyset}\in V^{\partial \Sigma'}$.
Therefore, $\gamma$ is a $\mathbb{Q}(A)$-linear combination of arrowed multicurves with dual graphs isomorphic to $\Gamma(\gamma)/e$, which has fewer vertices than $\Gamma(\gamma)$.
\end{proof}
\section{Non-planar case}\label{section_non_planar_case}
This section further exploits the proofs in \cite{KBSM_S1} to give a finiteness result for $\mathcal{S}(\Sigma \times S^1)$ for all orientable surfaces with boundary (Proposition \ref{thm_boundary_case}). Throughout this section, $\Sigma$ will be a compact orientable surface of genus $g>0$ with $N>0$ boundary components.
\subsection{Properties of stable multicurves}
\begin{defn}[Complexity]
Let $\gamma$ be an arrowed multicurve. Denote by $n$ the number of non-separating circles of $\gamma$, by $m$ the number of non-trivial non-$\partial$-parallel separating circles in $\gamma$, and by $b$ the number of vertices in the dual graph of $\gamma$ intersecting $\partial\Sigma$. We define the complexity of a multicurve $\gamma$ as $\left( b, n+2m, n+m\right)$, ordered lexicographically. An arrowed multicurve is said to be \textbf{stable} if it is not a linear combination of diagrams with lower complexity.
\end{defn}
Proposition \ref{prop_3.7}, Proposition \ref{prop_3.8}, and Lemma \ref{lem_3.9} restate properties of stable curves from \cite{KBSM_S1}. Fix a stable arrowed multicurve $\gamma$ in $\Sigma$.
\begin{prop}[Proposition 3.7 of \cite{KBSM_S1}] \label{prop_3.7}
Let $\Sigma' = \Sigma(v)$ be a vertex of $\Gamma$ with $|\partial \Sigma'|\geq 1$ and $g(\Sigma')\geq 1$. Then $\gamma \cap \Sigma'$ contains at most one non-separating curve.
\end{prop}
\begin{prop}[From the proof of Proposition 3.8 of \cite{KBSM_S1}] \label{prop_3.8}
If $e=(v,v')$ is an edge of $\Gamma$ with $g(v') \geq 1$, then the valence of $v$ is at most two.
\end{prop}
\begin{lem} [Lemma 3.9 of \cite{KBSM_S1}] \label{lem_3.9}
For a vertex $v$ with $g(v)\geq 1$ and valence two, $\gamma\cap \Sigma(v)$ contains no non-separating curves.
\end{lem}
Now, Proposition \ref{prop_stable_line} shows that stable arrowed multicurves satisfy $b(\gamma)= 1$.
\begin{prop}\label{prop_stable_line}
Stable arrowed multicurves have dual graphs isomorphic to lines. Moreover, they are $\mathbb{Q}(A)$-linear combinations of arrowed unknots and the two types of arrowed multicurves depicted in Figure \ref{fig_two_types}.
\end{prop}
\begin{figure}
\caption{Type 1 multicurves contain only one isotopy class of non-separating simple curves, while type 2 multicurves contain at most two non-separating loops.}\label{fig_two_types}
\end{figure}
\begin{proof}
Suppose $b(\gamma)>1$; i.e., there exist two distinct vertices $v_1, v_2 \in \Gamma$ containing boundary components of $\Sigma$. We will show that $\gamma$ is not stable.
There exists a path $P\subset \Gamma$ connecting $v_1$ and $v_2$. For each vertex $x \in P$, we define a subsurface $\Sigma'(x) \subset \Sigma(x)$ as follows:
If $\Sigma(x)$ is planar, define $\Sigma'(x) := \Sigma(x)$.
Suppose now that $g(x)\geq 1$ and $x\notin\{ v_1, v_2\}$. Proposition \ref{prop_3.7} states that $\gamma \cap \Sigma(x)$ contains at most one non-separating loop. Thus, we can find a planar surface $\Sigma'(x)\subset \Sigma(x)$ disjoint from the non-separating loop such that $\partial \Sigma'(x)$ contains the two boundaries of $\Sigma(x)$ participating in the path $P$ (see Figure \ref{fig_path}).
Suppose now $g(x) \geq 1$ and $x=v_i$. Using Proposition \ref{prop_3.7} again, we can find a subsurface $\Sigma'(x)\subset \Sigma(x)$ with $\partial \Sigma'(x)$ containing $\Sigma(x) \cap \partial \Sigma$ and the one loop of $\partial \Sigma(x)$ participating in the path $P$ (see Figure \ref{fig_path}).
Define $\Sigma'\subset \Sigma$ to be the connected surface obtained by gluing the subsurfaces $\Sigma'(x)$ for all $x\in P$. Since $\Gamma$ is a tree, $\Sigma'$ must be planar.
\begin{figure}
\caption{Building the subsurface $\Sigma'$.}\label{fig_path}
\end{figure}
By construction, $\gamma \cap \Sigma'$ can be thought of as an element of $\mathcal{S}(\Sigma' \times S^1)$. Proposition \ref{thm_planar_case} states that $\gamma \cap \Sigma'$ can be written as a $\mathbb{Q}(A)$-linear combination of arrowed diagrams with only trivial and $\partial$-parallel curves in $\Sigma'$. In particular, $\gamma$ can be written as a linear combination of arrowed diagrams $\gamma'$ in $\Sigma$ with $b(\gamma')<b(\gamma)$, and so $\gamma$ is not stable.
Now let $\gamma$ be a stable arrowed multicurve. Since $b(\gamma)=1$, there is a unique vertex $x_0 \in \Gamma$ with $\partial \Sigma \subset \Sigma(x_0)$. Notice that any vertex $v\in \Gamma$ of valence two either has positive genus or is equal to $x_0$. This assertion, together with Proposition \ref{prop_3.8}, implies that $\Gamma$ is isomorphic to a line where every vertex other than $x_0$ has positive genus.
The graph $\Gamma \setminus \{x_0\}$ is the disjoint union of at most two linear graphs $\Gamma_1$ and $\Gamma_2$, either of which might be empty. For each $\Gamma_i\neq \emptyset$, the subsurface $\Sigma(\Gamma_i)$ is a surface of positive genus with one boundary component.
If each $\Gamma_i$ has at most one vertex, then $\gamma$ looks like the curves in Figure \ref{fig_two_types} and the proposition follows. Suppose then that some $\Gamma_i$ has two or more vertices and pick an edge $e$ of $\Gamma_i$.
By Proposition \ref{prop_3.7} and Lemma \ref{lem_3.9}, $\gamma \cap \Sigma(e) $ contains at most one isotopy class of non-separating curves. Denote this curve by $\alpha$; observe that $\alpha$ is empty unless $e$ has an endpoint on a leaf of $\Gamma$. Let $\Sigma''$ be the complement of an open neighborhood of $\alpha$ in $\Sigma(e)$. By construction, $\gamma \cap \Sigma''$ contains one isotopy class of non-trivial separating curves in $\Sigma''$.
By Lemma 3.12 of \cite{KBSM_S1} we can `push' the separating arrowed loops in $\gamma \cap \Sigma''$ towards the boundary of $\Sigma''$. Thus, we can write $\gamma$ as a linear combination of diagrams with dual graph $\Gamma / e$. We can repeat this process until we obtain only summands in which each $\Gamma_i$ has at most one vertex.
\end{proof}
\begin{figure}
\caption{One needs to apply Proposition 3.12 of \cite{KBSM_S1}.}\label{fig_last_step}
\end{figure}
\begin{prop} \label{prop_genus_case}
Let $\Sigma$ be an orientable surface of genus $g>0$ with $N>0$ boundary components.
Then $\mathcal{S}(\Sigma\times S^1)$ is generated by arrowed unknots and arrowed multicurves with $\partial$-parallel components and at most one non-separating simple closed curve.
\end{prop}
\begin{proof}
Using Proposition 3.12 of \cite{KBSM_S1} with $\Sigma'$ being the shaded surfaces in Figures \ref{fig_two_types} and \ref{fig_last_step}, we obtain that $\mathcal{S}(\Sigma\times S^1)$ is generated by arrowed diagrams as in Figure \ref{fig_last_step}, where $l+l'=n_1$ and $m,n_1, n_2 \geq 0$. Observe that, ignoring the $m$ curves, the $l$ and $l'$ curves are parallel.
Also observe that, by Lemma \ref{lem_3.11}, we can still pass arrows among the $l$ and $l'$ curves modulo linear combinations of diagrams of the same type with lower $n_1$ but higher $m$. Thus, if we only focus on the complexity $n_1 + n_2$, we can follow the proof of Proposition 3.16 in \cite{KBSM_S1} and conclude that $\mathcal{S}(\Sigma\times S^1)$ is generated by arrowed diagrams with $n_1+ n_2\leq 1$.
The rest of this proof focuses on making $m=0$. In order to do this, we combine techniques from Section \ref{section_planar_case} of this paper and Proposition 3.12 of \cite{KBSM_S1}.
\textbf{Case 1: $n_1+n_2=0$.} Fix $m\geq 0$. Let $c$ be a separating curve cutting $\Sigma$ into a sphere with $N+1$ holes and a connected surface of genus $g>0$ with one boundary component. The diagrams in this case contain only boundary parallel curves and copies of $c$. Define $V^{\partial \Sigma}_m$ to be the formal vector space spanned by such pictures with at most $m$ parallel separating curves. For each $a\in \mathbb{Z}$, define the diagram ${}_r D_a$ (resp. ${}_l D_a$) to be given by $m+1$ copies of $c$, $m$ of which have no arrows, and where the copy closest to the positive genus surface (resp. to the holed sphere) has $a$ arrows. By Lemma \ref{lem_3.11}, in order to conclude this case, we only need to check that ${}_r D_a\in V^{\partial \Sigma}_m$.
Define $\Delta_+$, $\Delta_-$ and $\Delta_{+,m}$ as in Section \ref{section_planar_case}.
First, observe that Lemma \ref{lem_3.14} implies that $\Delta^{N-1}_+ \left( {}_l D_a \right) \in V^{\partial \Sigma}_m$. Using the computation in the proof of Proposition \ref{prop_3.12}, we conclude that $\Delta^{N-1}_{+,m} \left( {}_r D_a \right) \in V^{\partial \Sigma}_m$. On the other hand, Lemma 3.14 of \cite{KBSM_S1} gives us $\Delta^{2g}_- \left( {}_r D_a \right)\in V^{\partial \Sigma}_m$.
Hence,
\[{}_r D_a = Id_{V}^{2g+N-1}\left( {}_r D_a\right) =\frac{1}{\left( A^{4m+2} - A^{-2} \right)^{2g+N-1} } \left( A^{-1} \Delta_{+,m} + A^{4m+1} \Delta_- \right)^{2g+N-1}\left({}_r D_a\right) \in V^{\partial \Sigma}_m.\]
Indeed, when the right-hand side is expanded, every summand contains a factor of $\Delta^{N-1}_{+,m}$ or $\Delta^{2g}_-$.
\textbf{Case 2: $n_1+n_2 = 1$.} Fix $m\geq 0$. The diagrams in this case contain boundary parallel curves, some copies of $c$, and exactly one non-separating curve denoted by $\alpha$. Define $V^{\partial \Sigma}_m$ to be the formal vector space spanned by such pictures with at most $m$ copies of $c$. For $a\in \mathbb{Z}$, define ${}_l D_a$, ${}_r D_a$ as in Case 1, with the addition of one copy of $\alpha$. In order to conclude this case, it is enough to show that ${}_r D_a \in V^{\partial \Sigma}_m$.
Suppose that $\alpha$ has $x \in \mathbb{Z}$ arrows. For $a,b \in \mathbb{Z}$, define ${}_r E_{a,b}$ and ${}_l E_{a,b}$ to be $m$ copies of $c$ with no arrows and three copies of $\alpha$ with arrows arranged as in Figure \ref{fig_left_right}.
We can define the map $s$ on the diagrams ${}_rE_{a,b}$ and ${}_l E_{a,b}$ by $s({}_* E_{a,b}) = {}_* E_{a+1,b+1}$. This way, the maps $\Delta_-$, $\Delta_+$, $\Delta_{+,m}$ are defined on the diagrams $D_a$ and $E_{a,b}$. Define $\Delta_{-,1}=A - A^{-3}s$.
Using Lemma \ref{lem_3.11}, up to linear combinations of diagrams in $V^{\partial \Sigma}_m$, we obtain the following:
\begin{align*}
\Delta_{-}\left( {}_r E_{a,0}\right) = &A{}_r E_{a,0} - A^{-1} {}_rE_{a+1,1} \\
\cong & A^{2x+1} {}_l E_{a,0} - A^{2(x-1)-1} {}_l E_{a+1,1} \\
= & A^{2x} \left[ A {}_l E_{a,0} - A^{-3} {}_l E_{a+1,1}\right]\\
= & A^{2x} \Delta_{-,1} \left( {}_l E_{a,0}\right).
\end{align*}
\begin{figure}
\caption{The diagrams ${}_r E_{a,b}$ and ${}_l E_{a,b}$.}\label{fig_left_right}
\end{figure}
Lemmas 3.13 and 3.14 of \cite{KBSM_S1} give us that $\Delta_- \left( {}_r E_{a,0} \right) \in V^{\partial \Sigma}_m$ and
$\Delta^{2g-1}_- ({}_r D_a) = \Delta^{2g-1}_+ ({}_l E_{a,0})$. The first equation implies that $\Delta_{-,1} \left( {}_l E_{a,0}\right) \in V^{\partial \Sigma}_m$. This implication, together with the second equation and the fact that the $\Delta$-maps commute, lets us conclude that $\Delta_{-,1} \circ \Delta^{2g-1}_{-} ({}_r D_a) \in V^{\partial \Sigma}_m$.
Finally, notice that the argument in Case 1 implies that $\Delta^{N-1}_{+,m} \left( {}_r D_a \right) \in V^{\partial \Sigma}_m$. We also have the following relations between $\Delta$-maps:
\[ A^{4m+3} \Delta_{-,1} + A^{-1} \Delta_{+,m} = \left( A^{4m+4} - A^{-2}\right) Id_{V}, \]
\[ A^{4m+1} \Delta_- + A^{-1} \Delta_{+,m} = \left( A^{4m+2} - A^{-2}\right) Id_{V}.\]
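Both displayed identities follow by expanding the $\Delta$-maps as polynomials in $s$; for the first,
\[ A^{4m+3}\left(A - A^{-3}s\right) + A^{-1}\left(A^{4m+1}s - A^{-1}\right) = A^{4m+4} - A^{4m}s + A^{4m}s - A^{-2} = \left(A^{4m+4} - A^{-2}\right) Id_{V}. \]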
\[ Id_{V} = \frac{1}{( A^{4m+4} - A^{-2})^N ( A^{4m+2} - A^{-2})^{2g-1}}\left( A^{4m+3} \Delta_{-,1} + A^{-1} \Delta_{+,m}\right)^N \circ \left( A^{4m+1} \Delta_- + A^{-1} \Delta_{+,m} \right)^{2g-1}.\]
When expanding the expression for $Id_{V}$, we see that every summand has a factor of the form $\Delta_{-,1} \circ \Delta^{2g-1}_{-}$ or $\Delta^{N-1}_{+,m}$. Hence, evaluating at ${}_r D_a$, we obtain ${}_r D_a\in V^{\partial \Sigma}_m$ as desired.
\end{proof}
\subsection{A generating set for $\mathcal{S}(\Sigma\times S^1)$}
To conclude the proof of finiteness for the Kauffman bracket skein module of trivial $S^1$-bundles over surfaces with boundary, this section studies relations among non-separating simple closed curves.
\begin{lem}\label{lem_change_sides}
Any arrowed non-separating simple closed curve in $\Sigma$ satisfies the following relation in $\mathcal{S}(\Sigma\times S^1)$:
\[\left( A-A^{-1}\right) \includegraphics[valign=c,scale=.4]{fig_37_n.png} = A \includegraphics[valign=c,scale=.4]{fig_37_1n-1.png} - A^{-1} \includegraphics[valign=c,scale=.4]{fig_37_0n.png}.\]
\end{lem}
\begin{proof}
Using the R5 relation, we obtain
$\includegraphics[valign=c,scale=.35]{fig_37_a.png} = \includegraphics[valign=c,scale=.35]{fig_37_b.png}$. Thus,
\[A \includegraphics[valign=c,scale=.4]{fig_37_n.png} - A^{-1} \includegraphics[valign=c,scale=.4]{fig_37_n-2.png} = A \includegraphics[valign=c,scale=.4]{fig_37_1n-1.png} - A^{-1} \includegraphics[valign=c,scale=.4]{fig_37_0n.png}.\]
Proposition 4.1 of \cite{KBSM_S1} states that non-separating curves with $n$ and $n-2$ arrows are the same in $\mathcal{S}(\Sigma\times S^1)$. Thus, the result follows.
\end{proof}
\begin{remark}[Application of Lemma \ref{lem_change_sides}]\label{remark_change_sides}
Let $\gamma$ be a non-separating simple closed curve in $\Sigma$ and let $c\in \partial \Sigma$. Let $\widetilde \gamma$ be an arrowed diagram with one copy of $\gamma$ and some copies of $c$; think of $\gamma$ as lying `on the right side' of $c$. Lemma \ref{lem_change_sides} states that, at the expense of adding more copies of $c$ and arrows, $\widetilde \gamma$ is a linear combination of two diagrams in which $\gamma$ lies on the other side of $c$.
\end{remark}
\begin{prop}\label{thm_boundary_case}
Let $\Sigma$ be an orientable surface of genus $g>0$ with $N>0$ boundary components. Let $D\subset \Sigma$ be an $(N+1)$-holed sphere containing $\partial \Sigma$, and let $\mathcal{F}$ be a collection of $2^{2g}-1$ non-separating simple closed curves in $\Sigma-D$ such that each curve in $\mathcal{F}$ represents a unique non-zero element of $H_1(\Sigma-D; \mathbb{Z}/2\mathbb{Z})$.
Let $\mathcal{B}$ be the collection $\{ \gamma \cup \alpha, U \cup \alpha \}$, where $\gamma$ is a curve in $\mathcal{F}$ with zero or one arrow, $U$ is an arrowed unknot, and $\alpha$ is any collection of boundary parallel arrowed circles.
Then $\mathcal{B}$ is a generating set for $\mathcal{S}(\Sigma\times S^1)$ over $\mathbb{Q}(A)$.
\end{prop}
\begin{proof}
By Proposition \ref{prop_genus_case}, we only need to focus on the non-separating curves.
Let $\widetilde \gamma$ be a non-separating simple closed curve in $\Sigma$. Using Lemma \ref{lem_change_sides} repeatedly, we can write $\widetilde \gamma$ as a linear combination of arrowed diagrams of the form $\gamma \cup \alpha$, where $\gamma$ is a non-separating curve in $\Sigma-D$ and $\alpha$ is a collection of boundary parallel curves.
Observe that the work in Section 5 of \cite{KBSM_S1} holds for surfaces with connected boundary, since generators for $\pi_1(S_g, *)$ and $Mod(S_g)$ also work for $S_{g,1}$. Thus, by Proposition 5.5 of \cite{KBSM_S1}, two non-separating curves $\gamma_1,\gamma_2 \subset \Sigma-D$ with the same number of arrows are equal in $\mathcal{S}(\Sigma\times S^1)$ if $[\gamma_1]=[\gamma_2]$ in $H_1(\Sigma-D; \mathbb{Z}/2\mathbb{Z})$. The condition on the number of arrows for non-separating curves follows from Proposition 4.1 of \cite{KBSM_S1}.
\end{proof}
\begin{thm}\label{thm_conj_3_boundary}
Let $\Sigma$ be an orientable surface with non-empty boundary. Then $\mathcal{S}(\Sigma\times S^1)$ is a finitely generated $\mathcal{S}(\partial \Sigma\times S^1, \mathbb{Q}(A))$-module of rank at most $2^{2g+1}-1$.
\end{thm}
\begin{proof}
As a module over $\mathcal{S}(\partial \Sigma\times S^1, \mathbb{Q}(A))$, we can overlook $\partial$-parallel subdiagrams. Proposition \ref{thm_boundary_case} implies that $\mathcal{S}(\Sigma\times S^1)$ is generated by the empty diagram and diagrams in $\mathcal{F}$ with at most one arrow.
\end{proof}
\section{Seifert Fibered Spaces}\label{section_SFS}
Seifert manifolds with orientable base orbifold can be built as Dehn fillings of $\Sigma\times S^1$, where $\Sigma$ is a compact orientable surface. A result of Przytycki \cite{fundamentals_KBSM} implies that their Kauffman bracket skein modules are isomorphic to the quotient of $\mathcal{S}(\Sigma\times S^1)$ by the submodule generated by curves in $\partial \Sigma \times S^1$ bounding disks after the fillings. In this section, we use these new relations to prove the finiteness conjectures for a large family of Seifert fibered spaces. For details on the notation, see the next subsection.
\begin{thm}\label{thm_finite_SFS}
Let $M=M\left(g; b,\{(q_i,p_i)\}_{i=1}^n\right)$ be an orientable Seifert fibered space with non-empty boundary. Suppose $M$ has orientable orbifold base. Then, $\mathcal{S}(M)$ is a finitely generated $\mathcal{S}(\partial M, \mathbb{Q}(A))$-module of rank at most $(2^{2g+1}-1) \prod_{i=1}^n (2q_i-1)$.
\end{thm}
\begin{thm}\label{conj2_SFS}
Seifert fibered spaces of the form $M\left(g; 1,\{(1,p_i)\}_{i=1}^{n}\right)$ satisfy Conjecture \ref{conjecture_strong_finiteness}. In particular, Conjecture \ref{conjecture_strong_finiteness} holds for $\mathcal{S}igma_{g,1}\times S^1$.
\end{thm}
\subsection{Links in Seifert manifolds}
Let $\Sigma$ be a compact orientable surface of genus $g\geq 0$ with $N\geq 0$ boundary components. Fix non-negative integers $n$, $b$ with $N=n+b$. Denote the boundary components of $\Sigma$ by $\partial_1, \dots, \partial_N$ and the isotopy class of a circle fiber in $\Sigma\times S^1$ by $\lambda = \{pt\}\times S^1$. For each $i=1, \dots, n$, let $(q_i,p_i)$ be a pair of relatively prime integers satisfying $0<q_i<|p_i|$.
Let $M\left(g; b,\{(q_i,p_i)\}_{i=1}^n\right)$ be the result of gluing $n$ solid tori to $\Sigma\times S^1$ in such a way that the curve $p_i [\lambda]+ q_i[\partial_i] \in H_1(\partial_i \times S^1)$ bounds a disk. In summary, $\Sigma$ is the base orbifold of the Seifert manifold, $n$ counts the number of exceptional fibers, and $b$ is the number of boundary components of the 3-manifold.
Let $M$ be an orientable Seifert manifold with orientable orbifold base. It is a well-known fact that $M$ is homeomorphic to some $M\left(g; b,\{(q_i,p_i)\}_{i=1}^n\right)$ \cite{hatcher_3M}. Links in $M$ can be isotoped to lie inside $\Sigma\times S^1$. Thus, we can represent links in $M$ as arrowed diagrams in $\Sigma$ with some extra Reidemeister moves.
By Proposition 2.2 of \cite{fundamentals_KBSM} and Proposition \ref{thm_boundary_case}, $\mathcal{S}(M)$ is generated by the family of simple diagrams $\mathcal{B}= \{ \gamma \cup \alpha, U \cup \alpha \}$.
\begin{defn}
Let $D\in \mathcal{B}$. Let $l_i\geq 0$ be the number of parallel copies of $\partial_i$ in $D$. Let $\varepsilon_i\geq 0$ be the number of arrows (regardless of orientation) among all components of $D$ parallel to $\partial_i$. If $D$ contains an unknot $U$, denote by $u\geq 0$ the number of arrows in $U$. If $D$ contains a non-separating loop, let $u=0$.
The \textbf{absolute arrow sum} of $D$ is the total number of arrows among its separating loops, $s:= u + \sum_i \varepsilon_i$.
$D$ is \textbf{standard} if $0\leq \varepsilon_i \leq 1$ for every $i=1,\dots, N$, and such arrows (if any) lie in the loop furthest from the boundary.
\end{defn}
\begin{lem}\label{lem_std_diagrams_B}
Every diagram $D$ in $\mathcal{B}$ is a $\mathbb{Z}[A^{\pm1}]$-linear combination of standard diagrams $D'$ satisfying $s'\leq s$ and $l_i'\leq l_i$ for all $i$.
\end{lem}
\begin{proof}
Follows from Proposition 4.1 of \cite{KBSM_S1} and Lemmas \ref{lem_popping_arrows}, \ref{lem_merging_unknots}, and \ref{lem_pushing_arrows}.
\end{proof}
The rest of this section is devoted to understanding how the quantities $s$ and $l_i$ behave under certain relations in $\mathcal{B}$. We use Lemma \ref{lem_std_diagrams_B} implicitly to rewrite any relation in terms of standard diagrams with bounded sums $s$ and $l_i$.
\begin{remark}[Moving arrows]\label{remark_pushing_arrows}
We think of Lemma \ref{lem_pushing_arrows} as a set of moves that change the arrows between consecutive circles at the expense of adding `debris' terms. Observe that $|b-a+2| \leq |a| + |b|$ as long as $b<0$ or $a>0$. In particular, the debris terms in the equations of Lemma \ref{lem_pushing_arrows} parts (i) and (iii) will have absolute arrow sums bounded above by that of the LHS whenever $b<0$ or $a>0$. The same happens with parts (ii) and (iv) when $b>0$ or $a<0$. This can be summarized as follows: ``we can move arrows between consecutive nested loops without increasing the arrow sum or $l_i$.''
\end{remark}
\subsubsection{Local moves around an exceptional fiber}\label{subsubsection_Omega_move}
Fix an index $i=1,\dots, n$. By construction, there is a loop $\beta_i$ in the torus $\partial_i \times S^1$ bounding a disk $B_i$ in $M$; $\beta_i$ is homologous to $p_i[\lambda] + q_i [\partial_i] \in H_1(\partial_i \times S^1,\mathbb{Z})$. Following \cite{KBSM_prism}, we can slide arcs in $\Sigma\times S^1$ over the disk $B_i$ and obtain new Reidemeister moves for arrowed projections in $\Sigma\times S^1$. We obtain a new move, denoted by $\Omega(q_i, p_i)$ (see Figure \ref{fig_Omega_move}).
\begin{figure}
\caption{$\Omega(q_i,p_i)$ is obtained by drawing $q_i$ concentric circles and $p_i$ arrows equidistributed. Notice that the orientation of the arrows in the RHS is determined by the sign of $p_i$.}\label{fig_Omega_move}
\end{figure}
We can perform the $\Omega(q_i,p_i)$-move on an unknot near the boundary $\partial_i$ and resolve the $q_i-1$ crossings with K1 relations.
Since $0<q_i<|p_i|$, there is only one state in which the orientations of the arrows do not cancel. This unique state has exactly $q_i$ concentric loops, while the other states have strictly fewer loops and no more than $|p_i|-2$ arrows.
We then obtain an equation in $\mathcal{S}(M)$ called the \textbf{$\Omega(q_i,p_i)$-relation}. Figure \ref{fig_Omega_relation} shows a concrete example of this equation.
\begin{remark}[The $\Omega(q_i,p_i)$-relation]\label{remark_Omega_move}
The $\Omega(q_i,p_i)$-relation lets us write a diagram with $q_i$ concentric loops and $|p_i|$ arrows arranged in a particular way as a $\mathbb{Z}[A^{\pm1}]$-linear combination of diagrams with $0\leq l_i< q_i$ concentric circles and $0\leq \varepsilon_i<|p_i|$ arrows (see Figure \ref{fig_Omega_relation}). The LHS has $|p_i|$ arrows oriented in the same direction, depending on the sign of $p_i$: counterclockwise if $p_i>0$ and clockwise otherwise. Notice that the condition $0<q_i<|p_i|$ implies that every parallel loop in the LHS has at least one arrow.
\end{remark}
The special arrangement of arrows in the LHS of the $\Omega(q_i,p_i)$-relation is important and depends on the pair $(q_i, p_i)$. In practice, we rearrange the arrows around the outer $q_i$ copies of $\partial_i$ to match with the LHS of the $\Omega(q_i,p_i)$-relation. Lemma \ref{lem_Omega_move} uses this idea in a particular setup.
\begin{figure}
\caption{$\Omega(3,5)$-relation.}
\label{fig_Omega_relation}
\end{figure}
\begin{lem}\label{lem_Omega_move}
Let $D\in \mathcal{B}$ and $x\geq |p_1|$.
Suppose that $l_1\geq q_1$, that the loop furthest from $\partial_1$ has $x$ arrows with the same orientation as in the LHS of the $\Omega(q_1,p_1)$-relation, and that no other loop in $D$ parallel to $\partial_1$ has arrows. Then $D$ is a sum of diagrams $D'\in \mathcal{B}$, identical to $D$ outside a neighborhood of $\partial_1$, with $l'_1<l_1$ and at most $x$ arrows.
\end{lem}
\begin{proof}
Rearrange the arrows to prepare for the $\Omega(q_1, p_1)$-relation using Lemma \ref{lem_pushing_arrows}. Remark \ref{remark_pushing_arrows} explains that the debris terms in this procedure have arrow sum at most $x$ and $l'_1<l_1$. After performing the $\Omega(q_1,p_1)$-move, we obtain diagrams with fewer loops, $l'_1 < l_1$. The lower arrow sum comes from at least one pair of arrows cancelling, which always happens since $0<q_1<|p_1|$. In particular, we lose at least two arrows when performing the move.
\end{proof}
\subsubsection{Global relations}
We now discuss relations among elements in $\mathcal{B}$ of the form $U \cup \alpha$.
Lemma \ref{lem_earthquake} permits us to add new loops around each $\partial_i$, all of which have one arrow in the same direction. This move is valid as long as we have enough arrows on the unknot $U$; i.e., $u\geq 4g+2N$. The debris terms are $\mathbb{Z}[A^{\pm1}]$-linear combinations of standard diagrams with smaller arrow sum and $l'_i\leq l_i +1$. This move is key to proving Theorem \ref{conj2_SFS}.
Consider the decomposition $\mathcal{P}_+$ of $\Sigma$ described in Figure \ref{fig_global_P+}. Set $\partial_0$ to be the left-most unknot in $\mathcal{P}_+$ oriented counterclockwise.
As we did in Definition \ref{defn_linear_pants}, if $v_i\in \mathbb{Z}$ we will draw one copy of $\partial_i$ with $v_i$ arrows oriented as in $\mathcal{P}_+$, and do nothing if $v_i = \emptyset$. For $v \in \left(\mathbb{Z}\cup \{\emptyset\}\right)^{N+1}$, denote by $E_{v}$ the diagram obtained by drawing $\partial_i$ with $v_i$ arrows on it. For example, $E_{(b, \emptyset, \dots, \emptyset)}$ corresponds to the arrowed unknot $S_b$.
\begin{figure}
\caption{The decomposition $\mathcal{P}_+$.}
\label{fig_global_P+}
\end{figure}
We define the $\Delta$-maps from Definition \ref{defn_delta} on the family of diagrams $E_v$ with exactly one of $v_0$ and $v_N$ being empty. If $v_0=\emptyset$ and $v_N\in \mathbb{Z}$, define $s(E_v)=E_{(v_0, \dots, v_{N-1}, v_{N}+2)}$. If $v_0\in \mathbb{Z}$ and $v_N=\emptyset$, define $s(E_v)=E_{(v_0+2, \dots, v_{N-1}, v_{N})}$.
\begin{lem} \label{lem_earthquake}
Let $a\geq 0$. The following equation in $\mathcal{S}(\Sigma\times S^1)$ holds modulo $\mathbb{Z}[A^{\pm1}]$-linear combinations of standard diagrams
$E_{(\emptyset, \dots, \emptyset, a')}$ and $E_{(a'', 1,1, \dots, 1, \emptyset)}$ with $a'$, $a''+(N-1)$ integers in $[0,a+4g+2N-2)$.
\[E_{(\emptyset, \dots, \emptyset, a+4g+2N-2)} \cong (-1)^{N-1} A^{-4g-2N+4} E_{(a+4g+N-1, 1,1, \dots, 1, \emptyset)}.\]
\end{lem}
\begin{proof}
Observe first that $\mathcal{P}_+$ induces a linear pants decomposition on $\Sigma''$ as in Section \ref{section_planar_case}. Here, a copy of $\partial_N$ with $x\in \mathbb{Z}$ arrows, $E_{(\emptyset, \dots, \emptyset, x)}$, corresponds to the diagram $D^{N-1}_{x,(\emptyset, \dots, \emptyset)}$. Equation \eqref{eqn_1_planar} of Lemma \ref{equation_1_planar} with $n=k_0=N-1$ states the following
\[ \Delta^{N-1}_+ \left( D^{N-1}_{a, (\emptyset, \dots, \emptyset)} \right)= \sum_{e \in \{0,1\}^{N-1}} (-1)^{o(e)} A^{z(e) - o(e)} D^{0}_{a+o(e), e} \quad . \]
For any $x\in \mathbb{Z}$ and $v\in \{0,1\}^{N-1}$, the diagram $D^0_{x,v}$ contains a copy of the curve $c$ (see Figure \ref{fig_global_P+}) with $x$ arrows.
Now, observe that $\mathcal{P}_+$ also induces a sausage decomposition of $\Sigma'$ (see \cite{KBSM_S1}). Using the notation in Section 3.3 of \cite{KBSM_S1}, the part of the diagram $D^0_{x,v}$ inside the subsurface $\Sigma'\subset \Sigma$ is denoted by $D^{2g}_{x}$. Proposition 3.13 of \cite{KBSM_S1} implies the equation $\Delta^{2g}_+(D^{2g}_x) = \Delta^{2g}_{-}(D^{0}_x)$, where $D^0_x$ is a copy of the left-most unknot $\partial_0$ (red loop) in $\mathcal{P}_+$ with $x$ arrows. Putting everything together, we obtain the following relation in $\mathcal{S}(\Sigma\times S^1)$:
\begin{equation*}
\Delta^{2g+N-1}_+ (E_{(\emptyset, \dots, \emptyset,a)}) = \sum_{e \in \{0,1\}^{N-1}} (-1)^{o(e)} A^{z(e) - o(e)} \Delta^{2g}_- (E_{(a+o(e), e_1, \dots, e_N, \emptyset)}).
\end{equation*}
The result follows by taking the summands on each side with the largest number of arrows.
\end{proof}
\subsection{Proofs of Theorems \ref{thm_finite_SFS} and \ref{conj2_SFS}}\label{section_proof_41}
Recall that $\mathcal{S}(M)$ is generated by all standard diagrams, and such diagrams are filtered by the complexity $(s,\sum_i l_i)$ in lexicographic order. Here, $s=u+\sum_i \varepsilon_i$ is the absolute arrow sum and $\sum_i l_i$ is the number of boundary parallel loops. Throughout the argument we will have debris terms with lower complexity $(s',\sum_i l'_i)$; on each of those terms, we can perform a series of combinations of Lemmas \ref{lem_popping_arrows}, \ref{lem_flip_unknot}, \ref{lem_merging_unknots}, and \ref{lem_pushing_arrows} in order to write them in terms of standard diagrams with complexities $s''\leq s'$ and $l''_i\leq l'_i$.
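As a side note, the lexicographic comparison of complexities $(s,\sum_i l_i)$ used above can be illustrated with a short Python sketch (our own illustration; the tuples below are made-up complexities):

```python
# Complexities (s, sum_i l_i) are compared in lexicographic order: the arrow
# sum s is compared first, and the loop count breaks ties. Python tuples
# implement exactly this ordering.

d1 = (7, 3)   # arrow sum 7, three boundary-parallel loops
d2 = (7, 2)   # same arrow sum, fewer loops: strictly lower complexity
d3 = (6, 9)   # smaller arrow sum wins regardless of the loop count

print(d2 < d1, d3 < d2)  # True True
```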
Let $D\in \mathcal{B}$ be a diagram. Suppose that $D$ is of the form $D=\gamma \cup \alpha$, where $\gamma$ is a non-separating simple closed curve with at most one arrow and $\alpha$ is a collection of arrowed boundary-parallel loops. We can rewrite $D$ in $\mathcal{S}(M)$ as $D=\frac{-1}{(A^2+A^{-2})} (D\cup U)$, where $U$ is a small unknot with no arrows. Proposition \ref{lem_case_2} focuses on the subdiagram $U\cup \alpha$ near a fixed boundary component.
\begin{prop}\label{lem_case_2}
Let $D\in \mathcal{B}$ be a standard diagram with $l_{i_0}\geq q_{i_0}$ for some ${i_0}\in \{1,\dots, n\}$. Then $D$ is a linear combination of some standard diagrams $D'$ identical to $D$ outside a neighborhood of $\partial_{i_0}$, satisfying
\[l'_{i_0}< l_{i_0} \text{ and } u'+\varepsilon'_{i_0}\leq 2(u + |p_{i_0}|). \]
\end{prop}
\begin{proof}
For simplicity, set ${i_0}=1$. We assume that $p_1>0$ so that the arrows in the LHS of the $\Omega(q_1,p_1)$-relation are oriented counterclockwise; the other case is analogous.
We can assume that if $\varepsilon_1=1$, then the orientation of the arrow on the loop furthest from $\partial_1$ agrees with the LHS of the $\Omega(q_1,p_1)$-relation. This is true since Lemma \ref{lem_crossing_borders} lets us flip the orientation at the expense of having one debris diagram with $l'_1=l_1$, $\varepsilon'_1=0$, and $u'=u+1$.
Denote by $D_x$ the standard diagram in $\mathcal{B}$, identical to $D$ away from a neighborhood of $\partial_1$, with $l_1$ copies of $\partial_1$ and $x$ arrows oriented counterclockwise on the loop furthest from $\partial_1$.
Recall that $S_a$ denotes a small unknot with $a\in \mathbb{Z}$ arrows oriented counterclockwise. We have that $D=D_{\varepsilon_1}\cup S_u$, where the disjoint union of the diagrams is made so that $S_u$ lies inside a small disk away from the diagram $D_{\varepsilon_1}$.
Merge the arrows on $U$ with the outer loop around $\partial_1$ using Lemma \ref{lem_merging_unknots}. Thus, $D$ is a linear combination of diagrams $D_x$ with no unknots ($U=\emptyset$).
If $\varepsilon_1=1$, we get diagrams with $0\leq x\leq u+\varepsilon_1$, and if $\varepsilon_1=0$, we obtain diagrams with $0\leq |x|\leq u$.
We focus on each $D_x$.
Use the relation around $\partial_1$
\begin{equation}\leftarrowbel{add_arrows}
\includegraphics[valign=c,scale=.45]{47_a.png} = \includegraphics[valign=c,scale=.45]{47_b.png}
\quad \implies \quad
\includegraphics[valign=c,scale=.55]{47_x.png} = -A^2 \includegraphics[valign=c,scale=.55]{47_x+11.png} - A^{4} \includegraphics[valign=c,scale=.55]{47_x+2.png}
\end{equation}
to write $D_x$ as a linear combination of $D_{x+1}\cup S_1$ and $D_{x+2}$. Thus, at the expense of getting a cluster of 1-arrowed unknots $S_{\pm1}$, we can increase or decrease the number of arrows on the outermost loop around $\partial_1$. Hence, the original diagram $D$ is a linear combination of diagrams of the form $D_{x} \cup \left( \cup_y S_1\right)$ where $x\geq p_1$, $y\geq 0$, and $x+y\leq 2(u+ p_1)$. To see the upper bound for $x+y$, observe that if we start with $D_{-u}$, one might need to add a copy of $S_{1}$ $(u+ p_1)$ times in order to reach $x \geq p_1$.
Lemma \ref{lem_Omega_move} implies that each $D_{\pm x} \cup \left( \cup_y S_1\right)$ is a linear combination of diagrams with $l'_1<l_1$ and at most $x+y$ arrows. After making such diagrams standard and merging the arrowed unknots, we obtain diagrams with $l'_1<l_1$ and $u'+\varepsilon'_1\leq 2(u+ p_1)$ as desired.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_finite_SFS}]
Let $M=M\left(g; b,\{(q_i,p_i)\}_{i =1}^n\right)$ be a Seifert fibered space with non-empty boundary.
Proposition \ref{thm_boundary_case} and Lemma \ref{lem_std_diagrams_B} imply that $\mathcal{S}(M)$ is generated over $\mathbb{Q}(A)$ by standard diagrams in $\mathcal{B}$.
Furthermore, it follows from Lemmas \ref{lem_merging_unknots}, \ref{lem_pushing_arrows}, and \ref{lem_crossing_borders} that it is enough to consider standard diagrams with all arrows on separating loops oriented counterclockwise.
Notice that the standard condition allows us to overlook the numbers $l_{n+j}$ for $j=1,\dots, b$, since they correspond to coefficients in the ring $\mathcal{S}(\partial M, \mathbb{Q}(A))$.
Divide the collection $\mathcal{B}$ into two sets $\mathcal{B}_{ns} =\{\gamma\cup \alpha\}$ and $\mathcal{B}_{U}=\{U\cup \alpha\}$.
Proposition 4.1 of \cite{KBSM_S1} implies that arrowed non-separating simple closed curves are equal in $\mathcal{S}(M)$ if they are the same loop and have the same number of arrows modulo 2. Thus, using Proposition \ref{lem_case_2}, we obtain that $\mathbb{Q}(A)\cdot\mathcal{B}_{ns}$ is generated by standard diagrams $D=\gamma \cup \alpha$ with $0\leq l_i< q_i$ for $i=1,\dots, n$ and all arrows in copies of $\partial$-parallel loops oriented counterclockwise.
Hence, $\mathbb{Q}(A)\cdot\mathcal{B}_{ns}$ is generated as a $\mathcal{S}(\partial M, \mathbb{Q}(A))$-module by a set of cardinality
\[ r_{ns} \leq (2^{2g+1}-2) \prod_{i=1}^n (2q_i-1).\]
Proposition \ref{lem_case_2} implies that $\mathbb{Q}(A)\cdot\mathcal{B}_{U}$ is generated over $\mathbb{Q}(A)$ by standard diagrams satisfying $0\leq l_i <q_i$ for all $i=1, \dots, n$. Therefore, since $U$ can be pushed towards the boundary, $\mathbb{Q}(A)\cdot\mathcal{B}_{U}$ is generated over $\mathcal{S}(\partial M ,\mathbb{Q}(A))$ by a finite set of cardinality
\[ r_U \leq \prod_{i=1}^n (2q_i-1).\]
Hence, $\mathcal{S}(M)$ is a finitely generated $\mathcal{S}(\partial M, \mathbb{Q}(A))$-module.
\end{proof}
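To make the counting concrete, the following Python sketch (our own illustration; the genus and Seifert invariants are made-up sample values) evaluates the two upper bounds $r_{ns}$ and $r_U$ from the proof above:

```python
# Hypothetical sanity check: evaluate the generator-count bounds
#   r_ns <= (2^{2g+1} - 2) * prod_i (2 q_i - 1)   and   r_U <= prod_i (2 q_i - 1)
# from the proof of the finiteness theorem, for sample parameters.

def rank_bounds(g, qs):
    """Return the upper bounds (r_ns, r_U) given genus g and invariants q_1..q_n."""
    prod = 1
    for q in qs:
        prod *= 2 * q - 1
    r_ns = (2 ** (2 * g + 1) - 2) * prod  # diagrams gamma ∪ alpha
    r_u = prod                            # diagrams U ∪ alpha
    return r_ns, r_u

# Example: genus 1 with two exceptional fibers having q_1 = 2, q_2 = 3.
r_ns, r_u = rank_bounds(1, [2, 3])
print(r_ns, r_u)  # 6 * 15 = 90 and 15
```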
\begin{proof}[Proof of Theorem \ref{conj2_SFS}]
Let $\lambda\subset \partial M$ be an $S^1$-fiber and let $\mu_N =\partial_N \times \{pt\}$ be a meridian of $\partial M$.
For $i=1, \dots, n$, the $\Omega(1,p_i)$-move turns loops parallel to $\partial_i$ into arrowed unknots. Thus, Proposition \ref{thm_boundary_case}, Lemma \ref{lem_std_diagrams_B}, and Equation \eqref{eqn_23} imply that $\mathcal{S}(M)$ is generated over $\mathbb{Q}(A)$ by standard diagrams in $\mathcal{B}=\{\gamma\cup \alpha, U\cup \alpha\}$ with no parallel loops around the exceptional fibers. In particular, $\alpha$ only contains loops around $\partial_N$. Hence $\mathbb{Q}(A) \cdot \mathcal{B}_{ns}$ is generated over $\mathbb{Q}(A)[\mu_N]$ by elements of the form $\gamma$ and $\gamma \cup \alpha$ where $\gamma \in \mathcal{F}$ has at most one arrow and $\alpha$ is a copy of $\partial_N$ with one arrow.
Let $U\cup \alpha\in \mathcal{B}_U$ and suppose that $U$ has $u\neq 0$ arrows. Using Equation \eqref{add_arrows} of Proposition \ref{lem_case_2}, we can assume that the loop of $\alpha$ furthest from the boundary has at least one arrow. Then, using Lemmas \ref{lem_flip_unknot} and \ref{lem_merging_unknots}, we can write any diagram in $\mathcal{B}_U$ as a $\mathbb{Q}(A)$-linear combination of diagrams with only $\partial$-parallel curves and such that the loop furthest from $\partial_N$ has $x\geq 0$ arrows oriented clockwise. In other words, $\mathbb{Q}(A)\cdot \mathcal{B}_U=\mathbb{Q}(A) \langle U, \mu_N^k\cdot \alpha_x \mid k, x\geq 0\rangle$, where $\alpha_x$ denotes a copy of $\mu_N$ with $x$ arrows.
We will see that it is enough to consider $0\leq x<4g+2n$. Take $\mu_N^k\cdot \alpha_x$ with $k\geq 0$ and $x\geq 4g+2n$. By Lemma \ref{lem_earthquake}, $\mu_N^k\cdot \alpha_x$ is a $\mathbb{Z}[A^{\pm1}]$-linear combination of diagrams of the form $U\cup \mu_N^k$ and $\mu_N^k \cdot \alpha_{y}$ with $0\leq y<x$. We can proceed as in the previous paragraph and write the diagrams $U\cup \mu_N^k$ as $\mathbb{Z}[A^{\pm1}]$-linear combinations of $\mu_N^{\max(0,k-1)}\cdot \alpha_{x'}$ for some $x'\geq 0$. Hence, $\mathbb{Q}(A)\cdot \mathcal{B}_U=\mathbb{Q}(A) \langle U, \mu_N^k\cdot \alpha_x \mid 0\leq k,\, 0\leq x <4g+2n \rangle$.
To end the proof, consider $F_1$ the subspace generated by $\mathcal{B}_{ns} \cup \{\mu_N^k\cdot \alpha_x \mid 0\leq x <4g+2n \}$ over $\mathbb{Q}(A)$, and $F_2$ the $\mathbb{Q}(A)$-subspace generated by arrowed unknots. By Proposition \ref{thm_boundary_case}, $\mathcal{S}(M) = F_1 + F_2$. Let $\Sigma_1$ and $\Sigma_2$ be neighborhoods of $\mu_N$ and $\lambda$ in $\partial M$, respectively. We have shown that $F_1$ is a $\mathcal{S}(\Sigma_1, \mathbb{Q}(A))$-module of rank at most $2(2^{2g+1}-2)+ 4g+2n$. Also, since every arrowed unknot can be pushed inside a neighborhood of $\Sigma_2$, $F_2$ is generated over $\mathcal{S}(\Sigma_2, \mathbb{Q}(A))$ by the empty link. So $F_2$ is a $\mathcal{S}(\Sigma_2, \mathbb{Q}(A))$-module of rank at most one.
\end{proof}
\begin{flushright}
Jos\'e Rom\'an Aranda, University of Iowa\\
email: \texttt{[email protected]}\\ $\quad$ \\
Nathaniel Ferguson, Colby College \\
email: \texttt{[email protected]}
\end{flushright}
\end{document} |
\begin{document}
\title{In-Process Global Interpretation for Graph Learning via Distribution Matching}
\begin{abstract}
Graph neural networks (GNNs) have emerged as a powerful graph learning model due to their superior capacity in capturing critical graph patterns. To gain insights about the model mechanism for interpretable graph learning, previous efforts focus on \emph{post-hoc local interpretation} by extracting the data pattern that a pre-trained GNN model uses to make an individual prediction. However, recent works show that post-hoc methods are highly sensitive to model initialization, and local interpretation can only explain the model prediction specific to a particular instance.
In this work, we address these limitations by answering an important question that is not yet studied: \emph{how to provide global interpretation of the model training procedure?}
We formulate this problem as \emph{in-process global interpretation}, which aims to distill high-level and human-intelligible patterns that dominate the training procedure of GNNs.
We further propose \emph{Graph Distribution Matching} (GDM\xspace) to synthesize interpretive graphs by matching the distribution of the original and interpretive graphs in the feature space of the GNN as its training proceeds. These few interpretive graphs demonstrate the most informative patterns the model captures during training. Extensive experiments on graph classification datasets demonstrate multiple advantages of the proposed method, including high explanation accuracy, time efficiency and the ability to reveal class-relevant structure.
\end{abstract}
\section{Introduction}
Graph neural networks (GNNs)~\cite{wu2020comprehensive,gilmer2017neural,kipf2016semi,ying2018graph,lin2021graph,velivckovic2017graph} have attracted enormous attention for graph learning, but they are usually treated as black boxes, which may raise trustworthiness concerns if humans cannot really interpret what pattern exactly a GNN model learns and justify its decisions. Lack of such understanding could be particularly risky when using a GNN model for high-stakes domains (e.g., finance~\cite{you2022roland} and medicine~\cite{duvenaud2015convolutional}),
which highlights the importance of ensuring a comprehensive interpretation of the working mechanism for GNNs.
To improve transparency and understand the behavior of GNNs, most recent works focus on providing \emph{post-hoc interpretation}, which aims at explaining what patterns a pre-trained GNN model uses to make decisions~\cite{ying2019gnnexplainer,PGEXPLAINER, yuan2020xgnn, wang2022gnninterpreter}. However, recent studies have shown that such a pre-train-then-explain manner would fail to provide faithful explanations: their interpretations may suffer from the bias attribution issue~\cite{faber2021comparing}, the overfitting and initialization issue (i.e., explanation accuracy is sensitive to the pre-trained model)~\cite{gsat}.
In contrast, \emph{in-process interpretation} probes the whole training procedure of GNN models to interpret the patterns learned during training; since it does not depend on a single pre-trained model, it yields more stable interpretations. However, in-process interpretation has been rarely investigated for graph learning.
Existing works generate in-process interpretation by constructing inherently interpretable models, which either suffer from a trade-off between the prediction accuracy and interpretability~\cite{du2019techniques, yu2020graph} or can only provide local interpretations on individual instances~\cite{gsat}.
Specifically, \emph{local interpretations}~\cite{gsat, ying2019gnnexplainer,PGEXPLAINER} focus on explaining the model decision for a particular graph instance, which requires manual inspections on many local interpretations to mitigate the variance across instances and conclude a high-level pattern of the model behavior. The majority of existing interpretation techniques belong to this category. As a sharp comparison to such instance-specific interpretations, \emph{global interpretations}~\cite{yuan2020xgnn,wang2022gnninterpreter} aim at understanding the general behavior of the model with respect to different classes, without relying on any particular instance.
To address the limitations of post-hoc and local interpretation methods, for the first time, we attempt to provide \emph{in-process global interpretation} for the graph learning procedure, which aims to distill high-level and human-intelligible patterns the model learns to differentiate different classes. Specifically, we propose Graph Distribution Matching (GDM\xspace) to synthesize a few compact interpretive graphs for each class following the \emph{distribution matching principle}: the interpretive graphs should follow a similar distribution as the original graphs in the dynamic feature space of the GNN model as its training progresses.
We optimize interpretive graphs by minimizing the distance between the interpretive and original data distributions, measured by the maximum mean discrepancy (MMD)~\cite{gretton2012kernel} in a family of embedding spaces obtained by the GNN model. Presumably, GDM\xspace simulates the model training behavior, thus the generated interpretation can provide a general understanding of patterns that dominate the model training.
Different from post-hoc interpretation, our in-process interpretation concludes patterns from the whole training trajectory, thus is less biased to a single pre-trained model and is more stable; and compared with local interpretation, our global interpretation does not rely on individual graph instance, thus is a more general reflection of the model behavior. Our perspective provides a complementary view to the extensively studied post-hoc local interpretability, and our work puts an important step towards a more comprehensive understanding of graph learning.
Extensive quantitative evaluations on six synthetic and real-world graph classification datasets verify the interpretation effectiveness of GDM\xspace. The experimental results show that GDM\xspace can be used as a probe to precisely generate small interpretive graphs that preserve sufficient information for model training in an efficient manner, and the captured patterns are class discriminative. A qualitative study is also conducted to intuitively demonstrate human-intelligible interpretive graphs.
\section{Related Work}
Due to the prevalence of graph neural networks (GNNs), extensive efforts have been conducted to improve their transparency and interpretability. Existing interpretation techniques can be categorized as \emph{post-hoc} and \emph{in-process} interpretation depending on the interpretation stage, or \emph{local} and \emph{global} interpretation depending on the interpretation form.
\highlight{Post-hoc versus In-process Interpretations}
Most existing GNN interpretation methods we have found are post-hoc methods \cite{ying2019gnnexplainer,luo2020parameterized,yuan2020xgnn,wang2022gnninterpreter}. Post-hoc interpretability methods target a pre-trained model and learn the underlying important features or nodes by querying the model's output. However, recent studies \cite{gsat, faber2021comparing} found that their explanations are sensitive to the pre-trained model: not all pre-trained models will lead to the best interpretation. They also suffer from the overfitting issue: interpretation performance starts to decrease after several epochs. Our in-process interpretation generates explanation graphs by inspecting the whole training trajectory of GNNs, not only the last snapshot, and thus is more stable and less prone to overfitting.
We found only one in-process interpretation method for graph learning, Graph Stochastic Attention (GSAT) \cite{gsat}, which is based on the information bottleneck principle and incorporates stochasticity into attention weights. While GSAT's attention scores serve as interpretations and are generated during the training procedure, it can only provide local interpretation, i.e., the attention is on each individual graph instance, so one cannot conclude general interpretation patterns without inspecting many instances. In contrast, GDM\xspace provides a global interpretation for each class following the distribution matching principle.
\highlight{Local versus Global Interpretations}
Local interpretation methods such as GNNExplainer \cite{ying2019gnnexplainer} and PGExplainer \cite{luo2020parameterized} provide input-dependent explanations for each input graph. Given an input graph, these methods explain GNNs by outputting a small, explainable graph with important features. According to \cite{yuan2022explainability}, there are four types of instance-level explanation frameworks for GNN. They are gradient-based \cite{Pope_2019_CVPR}, perturbation-based \cite{ying2019gnnexplainer,PGEXPLAINER}, decomposition-based \cite{GRAPHLIME}, and surrogate-based methods \cite{vu2020pgm}. While local interpretations help explain predictions on individual graph instances, local interpretation cannot answer what common features are shared by graph instances for each class. Therefore, it is necessary to have both instance-level local and model-level global interpretations.
XGNN~\cite{yuan2020xgnn} and GNNInterpreter \cite{wang2022gnninterpreter} are the only works that we found providing a global explanation of GNNs. In detail, XGNN leverages a reinforcement learning framework where, at each step, the graph generator predicts how to add an edge to the current graph. It thus explains GNNs by training a graph generator so that the generated graph patterns maximize a certain prediction of the model. Specifically, XGNN leverages domain-expert knowledge to design reward functions for different inputs. GNNInterpreter learns a probabilistic generative graph distribution and identifies the graph pattern the GNN relies on when making a certain prediction by optimizing an objective function consisting of the likelihood of the explanation graph being predicted as a target class by the GNN model and a similarity regularization term. However, these two works are essentially post-hoc methods for a pre-trained GNN, and inherit the aforementioned issues of post-hoc interpretation. Our GDM\xspace instead is the first work studying the in-process global interpretation problem for graph learning.
\section{Methods}
We first discuss existing post-hoc methods and provide a general form covering both local and global interpretation. To understand model training behavior, we for the first time formulate a new form of in-process interpretation. Based on this general form, we propose to generate interpretations by synthesizing a few compact graphs via distribution matching, which can be formulated as an optimization problem. We further discuss several practical constraints for optimizing interpretive graphs. Finally, we provide an overview and algorithm for the proposed interpretation method.
\subsection{Notations and Background for Graph Learning}
We first introduce the notations for formulating the interpretation problem. We focus on explaining GNNs' training behavior for the graph classification task. A graph classification dataset with $N$ graphs can be denoted as $\mathcal{G}=\{ G^{(1)}, G^{(2)}, \dots, G^{(N)}\}$ with a corresponding ground-truth label set $\mathcal{Y}=\{y^{(1)}, y^{(2)},\dots, y^{(N)}\}$. Each graph consists of two components, $G^{(i)} = (\textbf{A}^{(i)}, \textbf{X}^{(i)})$, where $\textbf{A}^{(i)}\in\{0, 1\}^{n\times n}$ denotes the adjacency matrix and $\textbf{X}^{(i)}\in\mathbb{R}^{n\times d}$ is the node feature matrix. The label for each graph is chosen from a set of $C$ classes $y^{(i)}\in\{1,\dots, C\}$, and we write $y^{(i)}_c$ to denote that the label of graph $G^{(i)}$ is $c$, that is, $y^{(i)}=c$. We further denote the set of graphs that belong to class $c$ as $\mathcal{G}_c=\{G^{(i)}|y^{(i)}=c\}$. A GNN model $\Phi(\cdot)$ is a concatenation of a feature extractor $f_{\bm{\theta}}(\cdot)$ parameterized by $\bm{\theta}$ and a classifier $h_{\bm{\psi}}(\cdot)$ parameterized by $\bm{\psi}$, that is, $\Phi(\cdot)=h_{\bm{\psi}}(f_{\bm{\theta}}(\cdot))$. The feature extractor $f_{\bm{\theta}}: \mathcal{G}\rightarrow\mathbb{R}^{d'}$ takes in a graph and embeds it into a low-dimensional space with $d'\ll d$. The classifier $h_{\bm{\psi}}: \mathbb{R}^{d'}\rightarrow \{1,\dots, C\}$ further outputs the predicted class given the graph embedding.
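As an illustration of the decomposition $\Phi = h_{\bm{\psi}}\circ f_{\bm{\theta}}$, the following minimal NumPy sketch (our own toy model, not the architecture used in the paper) implements a one-layer mean-aggregation feature extractor with mean pooling, followed by a linear classifier:

```python
import numpy as np

# Toy GNN Phi = h_psi ∘ f_theta (our illustration): f_theta is one
# message-passing layer with self-loops plus mean pooling over nodes;
# h_psi is a linear classifier over the graph embedding.

rng = np.random.default_rng(0)
n, d, d_hid, C = 5, 4, 3, 2             # nodes, feature dim, embedding dim, classes

W_theta = rng.normal(size=(d, d_hid))   # extractor parameters theta
W_psi = rng.normal(size=(d_hid, C))     # classifier parameters psi

def f_theta(A, X):
    """Embed a graph (A, X) into R^{d_hid}: aggregate neighbor features
    (with self-loops), apply a linear map + ReLU, then mean-pool the nodes."""
    A_hat = A + np.eye(len(A))
    H = np.maximum(A_hat @ X @ W_theta, 0.0)
    return H.mean(axis=0)

def h_psi(z):
    """Predict a class index in {0, ..., C-1} from a graph embedding."""
    return int(np.argmax(z @ W_psi))

A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)   # a 5-cycle
X = rng.normal(size=(n, d))
print(h_psi(f_theta(A, X)))  # a class index
```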
\subsection{Revisit Post-hoc Interpretation Problem}
We first provide a general form for existing post-hoc interpretations. The goal of post-hoc interpretation is to explain the inference behavior of a pre-trained GNN model, e.g., what patterns lead a GNN model to make a certain decision. Specifically, given a pre-trained GNN model $\Phi^{*}$, a post-hoc
interpretation method finds the patterns that maximize the predicted probability for a class $y_c$.
Formally, this problem can be defined as:
\begin{equation}
\min_{\mathcal{S}}\mathcal{L}(\Phi^*(\mathcal{S}), y_c),
\label{eq:test-time}
\end{equation}
where $\mathcal{S}$ is one or a set of small graphs absorbing important graph structures and node features for interpretation, and $\mathcal{L}(\cdot, \cdot)$ is the loss (e.g., cross-entropy loss) of predicting $\mathcal{S}$ as label $y_c$.
Existing post-hoc interpretation techniques can fit in this form but differ in the definition of $\mathcal{S}$. We take two representative works, GNNExplainer~\cite{ying2019gnnexplainer} and XGNN~\cite{yuan2020xgnn}, as an example of post-hoc local and post-hoc global interpretation, respectively.
\noindent\textbf{GNNExplainer}~\cite{ying2019gnnexplainer} is a post-hoc local interpretation method. In this work, $\mathcal{S}$ is a compact subgraph extracted from an input instance $G_o=(\textbf{A}_o, \textbf{X}_o)$, defined as $\mathcal{S}_\text{GNNExplainer}=(\textbf{A}_o \odot \sigma(M), \textbf{X}_o)$, where $\sigma(M)$ is a soft mask, $M$ denotes a mask matrix to be optimized for masking unimportant edges in the adjacency matrix, $\sigma$ denotes a sigmoid function, and $\odot$ denotes element-wise multiplication. The label of the input instance $G_o$ determines $y_c$.
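A minimal sketch of this soft masking (our illustration; the mask logits below are made up, whereas GNNExplainer would optimize them by gradient descent):

```python
import numpy as np

# Soft edge mask S = (A_o ⊙ sigma(M), X_o): edges of A_o are reweighted by
# sigmoid(M), so the mask logits M are differentiable quantities while the
# node features X_o stay fixed. The logits here are made-up values.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

A_o = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)   # input instance's adjacency
M = np.array([[ 0.0,  4.0, -4.0],
              [ 4.0,  0.0,  0.0],
              [-4.0,  0.0,  0.0]])         # learnable mask logits (hypothetical)

A_masked = A_o * sigmoid(M)                # edge (0,1) kept, edge (0,2) suppressed
print(np.round(A_masked, 2))
```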
\noindent\textbf{XGNN}~\cite{yuan2020xgnn} is a post-hoc global interpretation method. It defines $\mathcal{S}$ as a set of completely synthesized graphs with each edge generated by reinforcement learning. The goal of the reward function is to maximize the probability of predicting $\mathcal{S}$ as a certain class $y_c$.
\noindent\textbf{Discussion}
As shown in Eq.(\ref{eq:test-time}), post-hoc interpretation is a post-analysis of a pre-trained GNN model $\Phi^*$. As discussed in \cite{gsat}, these post-hoc methods often fail to provide stable interpretation: the interpreter can easily overfit to the pre-trained model and is sensitive to model initialization. A possible reason provided in \cite{gsat} is that post-hoc methods perform a single-step projection from the pre-trained model in an unconstrained space to an information-constrained space, which can mislead the optimization of the interpreter, so that spurious correlations are captured in the interpretation.
Therefore, we are inspired to explore the possibility of providing a general recipe for in-process interpretability, which does not rely on a single snapshot of the pre-trained model and better constrains the model space via the training trajectory of the model.
\subsection{In-process Interpretation Problem}
Due to the limitations of post-hoc interpretation and the urgent need for comprehensive interpretability of the whole model cycle, we start our exploration of a novel research problem: \emph{how to provide global interpretation of the model training procedure?} We conceive of in-process interpretation as a tool to investigate the training behavior of the model, with the goal of capturing informative and human-intelligible patterns that the model learns from data during training. Instead of analyzing an already trained model as in post-hoc interpretation, the in-process interpretation is generated as the model training progresses. Formally, we formulate this task as follows:
\begin{align}
\min_{\mathcal{S}}\ \mathbb{E}_{\Phi_0 \sim P_{\Phi_0}} \big[ \underset{t\sim\mathcal{T}}{\mathbb{E}} [\mathcal{L}(\Phi_t(\mathcal{S}), y_c)] \big], \ \ \
\text{s.t.\ } \Phi_t=\mathtt{opt-alg}_{\Phi}(\mathcal{L}_\text{CE}(\Phi_{t-1}), \varsigma),
\label{eq:train-time}
\end{align}
where $\mathcal{T}=[0,\dots,T-1]$ is the set of training iterations for the GNN model, $\mathtt{opt-alg}$ is a specific model update procedure with a fixed number of steps $\varsigma$, $\mathcal{L}_\text{CE}(\Phi)=\mathbb{E}_{G, y\sim \mathcal{G, Y}}[\mathscr{l}(\Phi(G), y)]$ is the cross-entropy loss of normal training for the GNN model, and $P_{\Phi_0}$ is the distribution of initial models.
This formulation for in-process interpretation states that the interpretable patterns $\mathcal{S}$ should be optimized based on the whole training trajectory $\Phi_0\rightarrow\Phi_1\rightarrow\cdots\rightarrow\Phi_{T-1}$ of the model. This stands in sharp contrast to the post-hoc interpretation where only the final model $\Phi^*=\Phi_{T-1}$ is considered. The training trajectory reveals more information about model's training behavior to form a constrained model space, e.g., essential graph patterns that dominate the training of this model.
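Schematically, the bilevel structure of Eq.~\eqref{eq:train-time} alternates model updates with interpretation updates along the whole trajectory. The following toy Python sketch (all quantities are low-dimensional stand-ins of our own, not the paper's algorithm) illustrates how $\mathcal{S}$ is optimized against every snapshot $\Phi_t$ rather than only the final model:

```python
import numpy as np

# Toy bilevel loop (our illustration): the model parameters theta are updated
# on a training loss, and the interpretive variable S is updated against the
# *current* snapshot at every iteration t, i.e. along the whole trajectory
# Phi_0 -> ... -> Phi_{T-1}, not only against the final model.

rng = np.random.default_rng(1)
theta = rng.normal(size=(4,))      # stand-in for model parameters at step t
S = rng.normal(size=(4,))          # stand-in for the interpretive graphs S
data_mean = np.ones(4)             # stand-in for the training-data statistic

for t in range(50):                # t ranges over the training iterations T
    # inner step: opt-alg updates the model on the ordinary training loss
    theta -= 0.1 * (theta - data_mean)
    # outer step: S is optimized against the current snapshot Phi_t,
    # here with the toy interpretation loss ||S - theta||^2
    grad_S = 2 * (S - theta)
    S -= 0.05 * grad_S

print(np.allclose(S, data_mean, atol=0.2))  # S ends up tracking the trajectory
```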
\subsection{Interpretation via Distribution Matching}
\label{sec:dm}
To realize in-process interpretation as formulated in Eq.~(\ref{eq:train-time}), we are interested in finding a suitable objective to optimize the interpretive graphs so that they summarize what the model learns from the data.
One possible choice is to define the outer objective in Eq.~(\ref{eq:train-time}) as the cross-entropy loss of predicting $\mathcal{S}$ as label $y_c$, similar to XGNN. However, our experiment in Section \ref{sec:quant} demonstrates that the class label provides very limited information for in-process interpretation: XGNN achieves near random-guessing interpretation accuracy in in-process evaluation. Meanwhile, an empirical ablation study in Appendix~\ref{sec:dm} also confirms this issue of using the label as guidance.
Recall that a GNN model is a combination of a feature extractor and a classifier. The
feature extractor $f_{\bm{\theta}}$ usually carries the most essential information about the model, while the classifier is a rather simple multi-layer perceptron.
Since the feature extractor plays the major role, a natural idea is to match the embeddings generated by the GNN extractor $f_{\bm{\theta}}$ w.r.t. the training graphs and interpretive graphs for each class. Based on this idea, we can instantiate the outer objective in Eq.~\eqref{eq:train-time} as follows:
\begin{align}
\mathcal{L}(\Phi_t(\mathcal{S}), y_c) & \coloneqq \mathcal{L}_\text{DM}(f_{\bm{\theta}_t}(\mathcal{S}_c)) = \|\frac{1}{|\mathcal{G}_c|} \sum_{G\in \mathcal{G}_{c}} {f_{\bm{\theta}_t}}(G) - \frac{1}{| \mathcal{S}_c|} \sum_{S\in \mathcal{S}_c} {f_{\bm{\theta}_t}}(S)\|^2,
\label{eq:embed}
\end{align}
where $\mathcal{S}_c$ is the interpretive graph(s) for explaining class $c$.
By optimizing Eq.~\eqref{eq:embed}, we can obtain interpretive graphs that produce similar embeddings to the training graphs for the current GNN model in the training trajectory. Thus, the interpretive graphs provide a plausible explanation for the model learning process. Note that there can be multiple interpretive graphs for each class, i.e., $|\mathcal{S}_c|\geq 1$. With this approach, we are able to generate an arbitrary number of interpretive graphs that capture different patterns. Remarkably, Eq.~\eqref{eq:embed} can also be interpreted as matching the distributions of training graphs and
interpretive graphs: it is the empirical estimate of the maximum
mean discrepancy (MMD)~\cite{gretton2012kernel} between the two distributions, which measures the difference between the means of the distributions in a reproducing kernel Hilbert space $\mathcal{H}$:
\begin{equation}
\sup _{\left\|f_{\bm{\theta}}\right\|_{\mathcal{H}} \leq 1}\left(\underset{G \sim \mathcal{G}_c}{\mathbb{E}}\left[f_{\bm{\theta}}({G})\right]-\underset{S \sim \mathcal{S}_c}{\mathbb{E}}\left[f_{\bm{\theta}}({S})\right]\right).
\end{equation}
As shown in Figure~\ref{fig:framework}, given the network parameters $\bm{\theta}_t$, we forward the interpretive graphs and training graphs through the GNN feature extractor $f_{\bm{\theta}}$ and obtain their embeddings, which can be regarded as samples from the data distributions. By matching these distributions along the whole training trajectory, we are able to learn meaningful interpretive graphs that interpret the training behavior of the GNN model. It is worth noting that such a distribution matching scheme has shown success in distilling rich knowledge from training data into synthetic data~\cite{zhao2023dataset}, which preserves sufficient information for training the underlying model. This justifies our usage of distribution matching for interpreting the model's training behavior.
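The distribution matching loss of Eq.~\eqref{eq:embed} reduces to a squared distance between mean embeddings, which can be sketched in a few lines of numpy. The extractor \texttt{f} below (a mean pool over node-feature rows) is an illustrative stand-in for the GNN extractor $f_{\bm{\theta}_t}$; all data are toy assumptions.

```python
import numpy as np

def dm_loss(f, train_graphs, interp_graphs):
    """Empirical MMD-style estimate, Eq. (3): squared distance between the
    mean embeddings of training graphs and interpretive graphs under f."""
    mu_train = np.mean([f(G) for G in train_graphs], axis=0)
    mu_interp = np.mean([f(S) for S in interp_graphs], axis=0)
    return float(np.sum((mu_train - mu_interp) ** 2))

# toy extractor: mean-pool node features (rows) into a graph-level embedding
f = lambda X: X.mean(axis=0)

rng = np.random.default_rng(0)
train = [rng.normal(size=(5, 3)) for _ in range(10)]  # 10 training "graphs"
interp = [rng.normal(size=(4, 3)) for _ in range(2)]  # 2 interpretive "graphs"

loss = dm_loss(f, train, interp)
```

Minimizing \texttt{dm\_loss} with respect to the interpretive graphs drives their mean embedding toward that of the training set, which is exactly the per-class, per-snapshot objective used above.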
\begin{figure*}
\caption{Overview of the proposed in-process global interpretation method GDM\xspace.
}
\label{fig:framework}
\end{figure*}
Furthermore, by plugging the distribution matching objective Eq.~\eqref{eq:embed} into Eq.~\eqref{eq:train-time}, and generating interpretive graphs for multiple classes simultaneously, $\mathcal{S}=\{\mathcal{S}_c\}^C_{c=1}$, we can rewrite our learning goal as follows:
\begin{align}
\min_{\mathcal{S}}\ &
\underset{\bm{\theta}_0 \sim P_{\bm{\theta}_0}}{\mathbb{E}} \left[\underset{t\sim\mathcal{T}}{\mathbb{E}} \big[ \sum_{c=1}^C
\mathcal{L}_\text{DM}(f_{\bm{\theta}_t}(\mathcal{S}_c)) \big] \right] \nonumber\\
& \text{s.t.\ }\bm{\theta}_t, \bm{\psi}_t=\mathtt{opt-alg}_{\bm{\theta}, \bm{\psi}}(\mathcal{L}_\text{CE}(h_{\bm{\psi}_{t-1}}, f_{\bm{\theta}_{t-1}}), \varsigma)
\label{eq:dm}
\end{align}
The interpretation procedure is based on the parameter trajectory of the model, while the training procedure on the original classification task proceeds independently. Thus, our method can serve as a plug-in without influencing normal model training.
In order to learn interpretive graphs that generalize to a distribution of model initializations $P_{\bm{\theta}_0}$, we can sample $\bm{\theta}_0\sim P_{\bm{\theta}_0}$ to obtain multiple trajectories.
\subsection{Practical Constraints in Graph Optimization}
\label{sec:constraint}
Optimizing each interpretive graph is essentially optimizing its adjacency matrix and node feature matrix. Denote an interpretive graph as $S=(\textbf{A}_s, \textbf{X}_s)$, with $\textbf{A}_s\in\{0,1\}^{m\times m}$ and $\textbf{X}_s\in\mathbb{R}^{m\times d}$.
To generate solid graph explanations using Eq. (\ref{eq:dm}), we introduce several practical constraints on $\textbf{A}_s$ and $\textbf{X}_s$. The constraints are applied to each interpretive graph, concerning the discreteness of the graph structure and the matching of edge sparsity and feature distribution with the training data.
\noindent\textbf{Discrete Graph Structure}
Optimizing the adjacency matrix is challenging as it has discrete values.
To address this issue, we assume that each entry in matrix $\textbf{A}_s$ follows a Bernoulli distribution $\mathcal{B}(\Omega): p(\textbf{A}_s)=\textbf{A}_s\odot\sigma(\Omega)+(1-\textbf{A}_s)\odot\sigma(-\Omega)$, where $\Omega\in\mathbb{R}^{m\times m}$ holds the Bernoulli parameters, $\sigma(\cdot)$ is the element-wise sigmoid function, and $\odot$ is the element-wise product, following~\cite{jin2022condensing, lin2022spectral, lin2022graph}. Therefore, the optimization of $\textbf{A}_s$ involves optimizing $\Omega$ and then sampling from the Bernoulli distribution. However, the sampling operation is non-differentiable, so we employ the reparameterization method~\cite{maddison2016concrete} to refactor the discrete random variable into a function of a new variable $\varepsilon\sim\text{Uniform}(0, 1)$. The adjacency matrix can then be defined as a function of the Bernoulli parameters as follows, which is differentiable w.r.t. $\Omega$:
\begin{equation}
\textbf{A}_s(\Omega)=\sigma((\log\varepsilon-\log(1-\varepsilon)+\Omega)/\tau),
\label{eq:discrete}
\end{equation}
where $\tau\in(0, \infty)$ is the temperature parameter that controls the strength of the continuous relaxation: as $\tau\rightarrow 0$, $\textbf{A}_s$ approaches a sample from the Bernoulli distribution. Eq. (\ref{eq:discrete}) thus turns the problem of optimizing the discrete adjacency matrix $\textbf{A}_s$ into optimizing the Bernoulli parameter matrix $\Omega$.
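A minimal numpy sketch of the relaxation in Eq.~(\ref{eq:discrete}) is given below. The symmetrization step and the parameter values are illustrative assumptions, not taken from the referenced implementations.

```python
import numpy as np

def sigmoid(x):
    # numerically stable logistic function (avoids overflow for large |x|)
    return 0.5 * (1.0 + np.tanh(0.5 * x))

def sample_adjacency(omega, tau, rng):
    """Relaxed Bernoulli sampling of the adjacency matrix, Eq. (5):
    A_s = sigma((log(eps) - log(1 - eps) + omega) / tau), eps ~ U(0, 1)."""
    eps = rng.uniform(size=omega.shape)
    return sigmoid((np.log(eps) - np.log(1.0 - eps) + omega) / tau)

rng = np.random.default_rng(0)
omega = rng.normal(size=(6, 6))   # Bernoulli logits for a 6-node graph
omega = (omega + omega.T) / 2.0   # keep the adjacency symmetric (assumption)

A_soft = sample_adjacency(omega, tau=1.0, rng=rng)   # relaxed, entries in (0, 1)
A_hard = sample_adjacency(omega, tau=0.01, rng=rng)  # near-binary as tau -> 0
```

Because the sample is an explicit function of $\Omega$, gradients of any downstream loss flow back to the Bernoulli parameters, while a small $\tau$ recovers near-discrete edges.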
\vskip 0.5em
\noindent\textbf{Matching Edge Sparsity}
Our interpretive graphs are initialized by randomly sampling subgraphs from training graphs, and their adjacency matrices are then freely optimized, which might result in overly sparse or overly dense graphs. To prevent such scenarios, we impose a sparsity matching loss that penalizes the distance in sparsity between the interpretive and the training graphs, following~\cite{jin2022condensing}:
\begin{equation}
\mathcal{L}_\text{sparsity}(\mathcal{S})=\sum_{(\textbf{A}_s(\Omega), \textbf{X}_s)\sim\mathcal{S}}\max(\bar{\Omega}-\epsilon, 0),
\end{equation}
where $\bar{\Omega}=\sum_{ij}\sigma(\Omega_{ij})/|\Omega|$ calculates the expected sparsity of an interpretive graph, and $\epsilon$ is the average sparsity of the initialized $\sigma(\Omega)$ over all interpretive graphs, which are sampled from the original training graphs and thus resemble the sparsity of the training dataset.
\noindent\textbf{Matching Feature Distribution}
Real graphs in practice may have skewed feature distributions; without constraining the feature distribution of the interpretive graphs, rare features might be overshadowed by dominating ones. For example, in the molecule dataset MUTAG, the node feature is the atom type, and certain atom types such as carbon dominate the graphs. Therefore, when optimizing the feature matrices of interpretive graphs for such unbalanced data, it is possible that only dominating node types are maintained. To alleviate this issue, we propose to match the feature distribution between the training graphs and the interpretive ones.
Specifically, for each graph $G=(\textbf{A, X})$ with $n$ nodes, we estimate the graph-level feature distribution as $\bar{\textbf{x}}=\sum^n_{i=1}\textbf{X}_i/n\in{\mathbb{R}^d}$, which is essentially a mean pool of the node features. For each class $c$, we then define the following feature matching loss:
\begin{equation}
\mathcal{L}_\text{feat}(\mathcal{S}_c) = \|\frac{1}{|\mathcal{G}_c|} \sum_{(\textbf{A, X})\in \mathcal{G}_{c}} \bar{\textbf{x}} - \frac{1}{| \mathcal{S}_c|} \sum_{(\textbf{A}_s, \textbf{X}_s)\in \mathcal{S}_c} \bar{\textbf{x}}_s\|^2,
\label{eq:feature-match}
\end{equation}
where we empirically measure the class-level feature distribution by averaging the graph-level features. By minimizing the feature distribution distance in Eq. (\ref{eq:feature-match}), even rare features have a chance to be distilled into the interpretive graphs.
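Both regularizers follow directly from their definitions and can be sketched in numpy; the toy node-feature data below are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    # numerically stable logistic function
    return 0.5 * (1.0 + np.tanh(0.5 * x))

def sparsity_loss(omegas, eps_target):
    """Eq. (6): penalize interpretive graphs whose expected sparsity
    sigma(Omega) exceeds the average initial sparsity eps_target."""
    return float(sum(max(sigmoid(om).mean() - eps_target, 0.0) for om in omegas))

def feature_matching_loss(train_feats, interp_feats):
    """Eq. (7): squared distance between class-level feature means, each
    obtained by mean-pooling node features within a graph."""
    mu_g = np.mean([X.mean(axis=0) for X in train_feats], axis=0)
    mu_s = np.mean([X.mean(axis=0) for X in interp_feats], axis=0)
    return float(np.sum((mu_g - mu_s) ** 2))

rng = np.random.default_rng(0)
train_feats = [rng.normal(size=(6, 5)) for _ in range(8)]   # toy node features
interp_feats = [rng.normal(size=(4, 5)) for _ in range(2)]
loss_feat = feature_matching_loss(train_feats, interp_feats)
```

Note that the sparsity term is one-sided: only denser-than-initialization graphs are penalized, matching the $\max(\cdot, 0)$ in Eq. (6).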
\subsection{Final Objective and Algorithm}
\label{sec:final}
Integrating the practical constraints discussed in Section~\ref{sec:constraint} with the distribution matching based in-process interpretation framework of Eq. (\ref{eq:dm}) in Section~\ref{sec:dm}, we obtain the final objective for optimizing the interpretive graphs, which are essentially determined by the Bernoulli parameters for sampling discrete adjacency matrices and by the node feature matrices. Formally, we propose Graph Distribution Matching (GDM\xspace), which solves the following optimization problem:
\begin{align}
\min_{\mathcal{S}}\ &
\underset{\bm{\theta}_0 \sim P_{\bm{\theta}_0}}{\mathbb{E}} \left[ \underset{t\sim\mathcal{T}}{\mathbb{E}} \big[ \sum_{c=1}^C
\mathcal{L}_\text{DM}(f_{\bm{\theta}_t}(\mathcal{S}_c)) + \alpha\cdot\mathcal{L}_\text{feat}(\mathcal{S}_c) + \beta\cdot\mathcal{L}_\text{sparsity}(\mathcal{S}) \big] \right] \nonumber\\
& \text{s.t.\ }\bm{\theta}_t, \bm{\psi}_t=\mathtt{opt-alg}_{\bm{\theta}, \bm{\psi}}(\mathcal{L}_\text{CE}(h_{\bm{\psi}_{t-1}}, f_{\bm{\theta}_{t-1}}), \varsigma)
\label{eq:final}
\end{align}
where we use $\alpha$ and $\beta$ to control the strength of regularizations on feature distribution matching and edge sparsity respectively. We explore the sensitivity of hyper-parameters $\alpha$ and $\beta$ in Appendix \ref{hyper-para}. Detailed algorithm of our proposed GDM\xspace is provided in Appendix \ref{training_algo}.
\noindent\textbf{Complexity Analysis} We now analyze the time complexity of the proposed method. Suppose that in each iteration we sample $B_1$ interpretive graphs and $B_2$ training graphs, with average node number $n$. The inner loop for the interpretive graph update takes $n(B_1+B_2)$ node-level computations, while the update of the GNN model uses $nB_2$ computations. Over $K$ model initializations and $T$ iterations, the overall complexity is therefore $\mathcal{O}(nKT(B_1+2B_2))$, which is of the same order as normal GNN training. This demonstrates the efficiency of our interpretation method: it can generate interpretations as the training of the GNN model proceeds, without introducing extra complexity.
\section{Experimental Studies}
\subsection{Experimental Setup}
\textbf{Dataset} The interpretation performance is evaluated on the following synthetic and real-world datasets for graph classification, whose statistics can be found in Appendix~\ref{sec:dataset}.
\begin{itemize}[leftmargin=*]
\item \textbf{Real-world} data includes: \textbf{MUTAG}, which contains chemical compounds where nodes are atoms and edges are chemical bonds.
Each graph is labeled as whether it has a mutagenic effect on a bacterium \cite{debnath1991structure}.
\textbf{Graph-Twitter} \cite{socher-etal-2013-recursive} includes Twitter comments for sentiment classification with three classes. Each comment sequence is presented as a graph, with word embedding as node feature. \textbf{Graph-SST5} \cite{yuan2021towards} is a similar dataset with reviews, where each review is converted to a graph labeled by one of five rating classes.
\item \textbf{Synthetic} data includes:
\textbf{Shape} contains four classes of graphs: Lollipop, Wheel, Grid, and Star. For each class, we synthesize the same number of graphs with a random number of nodes using NetworkX~\cite{hagberg2008exploring}. \textbf{BA-Motif}~\cite{luo2020parameterized} uses Barabasi-Albert (BA) graphs as base graphs, half of which are attached with a ``house'' motif and the rest with ``non-house'' motifs. \textbf{BA-LRP}~\cite{schnake2021higher}, also based on Barabasi-Albert (BA) graphs, includes one class of degree-concentrated graphs and another of degree-even graphs. These datasets do not have node features, so we use the node index as a surrogate feature.
\end{itemize}
\noindent\textbf{Baseline}
Note that this work proposes the first in-process global interpretation method for graph learning, so no directly comparable baseline exists. However, we can still compare GDM\xspace{} with different types of works from multiple perspectives to demonstrate its advantages:
\begin{itemize}[leftmargin=*]
\item GDM\xspace{} versus \textbf{post-hoc} interpretation baselines:
This comparison aims to verify the necessity of leveraging the in-process training trajectory for accurate global interpretation, compared with post-hoc global interpretation methods~\cite{yuan2020xgnn, wang2022gnninterpreter}. We compare GDM\xspace{} with \textbf{XGNN}~\cite{yuan2020xgnn}, the only open-sourced post-hoc global interpretation method for graph learning; XGNN heavily relies on domain knowledge (e.g., chemical rules), so it is only applied to MUTAG. We further consider \textbf{GDM\xspace{}-final}, a variant of GDM\xspace{} that generates interpretations based only on the final-step model snapshot, instead of the in-process training trajectory. We also include a simple \textbf{Random} strategy as a reference, which randomly selects graphs from the training set as interpretations.
\item GDM\xspace{} versus \textbf{local} interpretation baselines: This comparison demonstrates the effectiveness of global interpretation with only a few interpretive graphs generated per class, compared with existing local interpretation works that generate an interpretation per graph instance. Note that most existing local interpretation solutions~\cite{ying2019gnnexplainer, gsat, agarwal2023evaluating} are post-hoc and evaluated on test instances; for fair knowledge availability, we generate their interpretations of a pre-trained GNN on training instances. Detailed results can be found in Appendix~\ref{sec:local-global}.
\item GDM\xspace{} versus \textbf{inherently global-interpretable} baseline: This comparison demonstrates that our general recipe of in-process interpretation achieves a better trade-off between model prediction and interpretation performance than simple inherently global-interpretable methods. The results and discussion are provided in Appendix~\ref{LR}.
\end{itemize}
\noindent\textbf{Evaluation Protocol}
We assess interpretability from two perspectives: the interpretation should have utility for training a model and fidelity in being predicted as the correct class. Therefore, we establish the following evaluation protocols accordingly:
\begin{itemize}[leftmargin=*]
\item \textbf{Utility} aims to verify whether the generated interpretations can be utilized to train a well-performing GNN. Desired interpretations should capture the dominating patterns that guide the training procedure. For this protocol, we use the interpretive graphs to train a GNN model from scratch and report its performance on the test set. We call this accuracy \textit{utility}.
\item \textbf{Fidelity} validates whether the interpretation really captures discriminative patterns. Ideal interpretive graphs should be correctly classified into their corresponding classes by the trained GNN. In this protocol, given the trained GNN model, we report its prediction accuracy on the interpretive graphs. We call this accuracy \textit{fidelity}.
\end{itemize}
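The two protocols can be sketched end to end as follows, with a toy softmax classifier standing in for the GNN and synthetic feature vectors standing in for graphs; all names and data are illustrative assumptions.

```python
import numpy as np

def accuracy(W, X, y):
    return float(np.mean((X @ W).argmax(axis=1) == y))

def train_linear(X, y, n_cls, steps=200, lr=0.5):
    """Toy stand-in for 'train a GNN from scratch': softmax regression
    on graph-level feature vectors (illustrative assumption)."""
    W = np.zeros((X.shape[1], n_cls))
    for _ in range(steps):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y)), y] -= 1.0        # softmax minus one-hot
        W -= lr * X.T @ p / len(y)            # mean cross-entropy gradient
    return W

rng = np.random.default_rng(0)
centers = 3.0 * rng.normal(size=(2, 16))      # two well-separated classes
X_train = np.concatenate([c + rng.normal(size=(50, 16)) for c in centers])
y_train = np.repeat([0, 1], 50)
X_test = np.concatenate([c + rng.normal(size=(20, 16)) for c in centers])
y_test = np.repeat([0, 1], 20)
# a few interpretive "graphs" per class, close to the class centers
X_interp = np.concatenate([c + 0.3 * rng.normal(size=(5, 16)) for c in centers])
y_interp = np.repeat([0, 1], 5)

# Utility: train a fresh model on the interpretations, test on held-out data
utility = accuracy(train_linear(X_interp, y_interp, 2), X_test, y_test)
# Fidelity: a model trained on the original data classifies the interpretations
fidelity = accuracy(train_linear(X_train, y_train, 2), X_interp, y_interp)
```

Good interpretations should score high on both: they suffice to train a usable classifier (utility) and are themselves confidently recognized by a normally trained one (fidelity).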
\noindent\textbf{Configurations}
We choose the graph convolutional network (GCN) as the target GNN model for interpretation, as it is widely used for graph learning. It contains 3 layers with 256 hidden dimensions, followed by a mean pooling layer and a dense layer at the end.
The Adam optimizer \cite{kingma2014adam} is adopted for model training. In both evaluation protocols, we split the dataset into $85\%$ training and $15\%$ test data, and only use the training set to generate interpretive graphs. Given the interpretive graphs, each evaluation experiment is run $5$ times, with the mean and variance reported.
\begin{table*}[t]
\centering
\caption{\textbf{Utility} with a varying number of interpretive graphs generated per class.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lccccccc}
\toprule
\multirow{2}{*}{Graphs/Cls} & \multicolumn{3}{c}{\textbf{Graph-Twitter}} & \multicolumn{3}{c}{\textbf{Graph-SST5}} \\
\cmidrule(rl){2-4}\cmidrule(rl){5-7}
& GDM\xspace{} & GDM\xspace{}-final & Random & GDM\xspace{} & GDM\xspace{}-final & Random & \\
\midrule
10 & \textbf{56.43 $\pm$ 1.39} & 41.69 $\pm$ 2.61 & 52.40 $\pm$ 0.29 & \textbf{35.72 $\pm$ 0.65} & 29.03 $\pm$ 0.79 & 21.07 $\pm$ 0.60 \\
50 & \textbf{58.93 $\pm$ 1.29} & 54.80 $\pm$ 1.13 & 52.92 $\pm$ 0.27& \textbf{43.81 $\pm$ 0.86} & 38.37 $\pm$ 0.71 & 23.15 $\pm$ 0.35\\
100 & \textbf{59.51 $\pm$ 0.31} & 57.39 $\pm$ 1.29 & 55.47 $\pm$ 0.51 & \textbf{44.43 $\pm$ 0.45} & 42.27 $\pm$ 0.17 & 25.26 $\pm$ 0.75 \\
& \multicolumn{3}{c}{{GCN Accuracy: 61.40}} & \multicolumn{3}{c}{{GCN Accuracy: 44.39}} \\
\midrule
& \multicolumn{3}{c}{\textbf{BA-Motif}} & \multicolumn{3}{c}{\textbf{BA-LRP}} \\
\cmidrule(rl){2-4}\cmidrule(rl){5-7}
& GDM\xspace{} & GDM\xspace{}-final & Random & GDM\xspace{} & GDM\xspace{}-final & Random & \\
\midrule
1 & \textbf{71.66 $\pm$ 2.49} & 65.33 $\pm$ 7.31 & 67.60 $\pm$ 4.52 & 71.56 $\pm$ 3.62 & 68.81 $\pm$ 4.46 & \textbf{77.48 $\pm$ 1.21} \\
5 & \textbf{96.00 $\pm$ 1.63} & 91.33 $\pm$ 2.05 & 77.60 $\pm$ 2.21 & \textbf{91.60 $\pm$ 1.57} & 90.20 $\pm$ 1.66 & 77.76 $\pm$ 0.52\\
10 & \textbf{98.00 $\pm$ 0.00} & 89.33 $\pm$ 3.86 & 84.33 $\pm$ 2.49 & \textbf{94.90 $\pm$ 1.09} & 93.26 $\pm$ 0.83 & 88.38 $\pm$ 1.40 \\
& \multicolumn{3}{c}{{GCN Accuracy: 100.00}} & \multicolumn{3}{c}{{GCN Accuracy: 97.95}} \\
\midrule
& \multicolumn{3}{c}{\textbf{MUTAG}} & \multicolumn{3}{c}{\textbf{Shape}} \\
\cmidrule(rl){2-4}\cmidrule(rl){5-7}
& GDM\xspace{} & GDM\xspace{}-final & Random & GDM\xspace{} & GDM\xspace{}-final & Random & \\
\cmidrule(rl){2-4}\cmidrule(rl){5-7}
1 & \textbf{71.92 $\pm$ 2.48} & 65.33 $\pm$ 7.31 & 50.87$\pm$ 15.0 & \textbf{93.33 $\pm$ 4.71} & 60.00 $\pm$ 8.16 & 33.20 $\pm$ 4.71\\
5 & 77.19 $\pm$ 4.96 & 73.68 $\pm$ 0.00 & \textbf{80.70 $\pm$ 2.40} & \textbf{96.66 $\pm$ 4.71} & 66.67 $\pm$ 4.71 & 85.39 $\pm$ 12.47\\
10 & \textbf{82.45 $\pm$ 2.48} & 73.68 $\pm$ 0.00 & 75.43 $\pm$ 6.56 & \textbf{100.00 $\pm$ 0.00} & 70.00 $\pm$ 8.16 & 87.36 $\pm$ 4.71 \\
& \multicolumn{3}{c}{{GCN Accuracy: 88.63}} & \multicolumn{3}{c}{{GCN Accuracy: 100.00}} \\
\bottomrule
\end{tabular}
}
\label{tab:train-time}
\end{table*}
\subsection{Quantitative Analysis}
\label{sec:quant}
Our major contribution is to incorporate the training trajectory for in-process interpretation; therefore, the main quantitative analysis aims to answer the following questions: 1) Is the in-process training trajectory necessary? 2) Is our in-process interpretation efficient compared with post-hoc methods?
To demonstrate the necessity of the training trajectory, we compare GDM\xspace{} with the post-hoc global interpretation baselines XGNN and GDM\xspace{}-final, which provide global interpretations using only the final-step model.
\noindent\textbf{Utility Performance} In Table~\ref{tab:train-time}, we summarize the utility performance when generating different numbers of interpretive graphs per class, along with the benchmark GCN test accuracy for each dataset. Ideally, the utility score should be comparable to the benchmark accuracy of a GCN trained on the original training set.
We observe that the GCN model trained on GDM\xspace{}'s interpretive graphs achieves a remarkably higher utility score than the post-hoc GDM\xspace{}-final baseline and the random strategy. Note that XGNN on MUTAG only achieves a utility score similar to the random strategy, and is thus excluded from the table. The comparison between GDM\xspace{} and GDM\xspace{}-final validates that the in-process training trajectory does help capture more informative patterns that the model uses for training. Meanwhile, GDM\xspace{}-final can result in large variance on BA-Motif and Shape, which reflects the instability of post-hoc methods.
\textbf{Fidelity Performance} In Table~\ref{tab:post-hoc}, we compare GDM\xspace{} and GDM\xspace{}-final in terms of the fidelity of the generated interpretive graphs. We observe that GDM\xspace{} achieves a fidelity score of over $90\%$ on all datasets, which indicates that GDM\xspace{} indeed captures discriminative patterns.
Though slightly worse than XGNN on MUTAG in fidelity, GDM\xspace{} achieves much better utility.
Moreover, unlike XGNN, we do not rely on any dataset-specific rules, making GDM\xspace{} a more general interpretation solution.
\begin{table}[t]
\centering
\small
\caption{\textbf{Fidelity} and \textbf{Efficiency} when generating 10 interpretive graphs per class.}
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{lccccccc}
\toprule
Dataset & \textbf{Graph-Twitter} & \textbf{Graph-SST5} & \textbf{BA-Motif} & \textbf{BA-LRP} & \textbf{Shape} & \textbf{MUTAG} &\textbf{XGNN on MUTAG}\\
\midrule
GDM\xspace{}-final & 93.20$\pm$ 0.00 & 88.60 $\pm$ 0.09 & 68.30
$\pm$ 0.02 & 98.20 $\pm$ 0.02 & 100.00$\pm$ 0.00 & 66.60 $\pm$ 0.02 & \multirow{2}{*}{100.00$\pm$ 0.00}\\
GDM\xspace{} & 94.44$\pm$ 0.015 & 90.67$\pm$ 0.019 & 95.00$\pm$ 1.11 & 96.67$\pm$ 0.023 & 100.00$\pm$ 0.00 & 96.67 $\pm$ 0.045 & \\
\midrule
Time (s) & 129.63 & 327.60 & 108.00 & 110.43 & 210.52 & 218.45 & 838.20 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:post-hoc}
\end{table}
\textbf{Efficiency}
Another advantage of GDM\xspace{} is that it generates interpretations efficiently. As shown in Table~\ref{tab:post-hoc}, GDM\xspace{} is almost 4 times faster than the post-hoc global interpretation method XGNN. Our method takes almost no extra cost to generate multiple interpretive graphs, as there are only a few interpretive graphs compared with the training dataset. XGNN, however, selects each edge in each graph via a reinforcement learning policy, which makes the interpretation process rather expensive. We also explore an accelerated version of GDM\xspace{} in Appendix \ref{no_update}.
\subsection{Qualitative Analysis}
\begin{table*}[t]
\centering
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{lcc|cc}
\toprule
Dataset & Real Graph & Global Interpretation & Real Graph & Global Interpretation \\
\midrule
& \multicolumn{2}{c}{\textbf{House} Class} & \multicolumn{2}{c}{\textbf{Non-House} Class} \\
\multirow{2}{*}{\textbf{BA-Motif}} & \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=21mm, height=18mm]{BA/BA2.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{BA/ba_single.png}
\end{minipage}
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=18mm]{BA/BA1.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{BA/ba_single2.png}
\end{minipage} \\
\midrule
& \multicolumn{2}{c}{\textbf{Wheel} Class} & \multicolumn{2}{c}{\textbf{Lollipop} Class} \\
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=21mm, height=18mm]{shapes/shape1.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{shapes/shape_single_1.png}
\end{minipage}
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=22mm, height=18mm]{shapes/shape2.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=20mm, height=20mm]{shapes/shape_single_2.png}
\end{minipage} \\
\cmidrule{2-5}
\textbf{Shape} & \multicolumn{2}{c}{\textbf{Grid} Class} & \multicolumn{2}{c}{\textbf{Star} Class} \\
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=21mm, height=18mm]{shapes/shape3.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{shapes/shape_single3.png}
\end{minipage}
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=22mm, height=18mm]{shapes/shape4.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=20mm, height=20mm]{shapes/shape_single_4.png}
\end{minipage} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Visualization of real example graphs and generated global interpretations.}
\label{tab:quali}
\end{table*}
We qualitatively visualize the global interpretations provided by GDM\xspace{} to verify that GDM\xspace{} can capture human-intelligible patterns, which indeed correspond to the ground-truth rules for discriminating classes. Table \ref{tab:quali} shows examples in BA-Motif and Shape datasets, and more results and analyses on other datasets can be found in Appendix~\ref{sec:more-qual}.
The qualitative results show that the global interpretations successfully identify the discriminative patterns for each class. In the BA-Motif dataset, for the
house-shape class, the interpretation captures the house structure, regardless of the complicated BA base graph in the rest of each graph; for the non-house class with a five-node cycle, the interpretation likewise extracts the cycle from the whole BA-Motif graph.
Regarding the Shape dataset, the global interpretations for all classes are almost consistent with the ground-truth graph patterns, i.e., the Wheel, Lollipop, Grid, and Star shapes are also reflected in the interpretive graphs. Note that the difference between the interpretive graphs of the Star and Wheel classes is small, which provides a potential explanation for our fidelity results in Table \ref{tab:post-hoc}, where pre-trained GNN models cannot always distinguish interpretive graphs of the Wheel shape from those of the Star shape.
\section{Conclusions}
In this work, we studied a new problem to enhance interpretability for graph learning: how to achieve in-process global interpretation that captures the high-level patterns the model learns during training. We proposed a novel framework in which interpretations are optimized based on the whole training trajectory. We designed an interpretation method, GDM\xspace, based on distribution matching, which matches the embeddings generated by the GNN model for the training data and the interpretive data. Our framework can generate an arbitrary number of interpretable and effective interpretive graphs. Our study takes the first step toward in-process global interpretation and offers a more comprehensive understanding of GNNs. We evaluated our method both quantitatively and qualitatively on real-world and synthetic datasets, and the results indicate that the interpretive graphs produced by GDM\xspace achieve high utility and fidelity, capture class-relevant structures, and demonstrate efficiency.
One possible limitation of our work is that the interpretations are a general summary of the whole training procedure and thus cannot reflect the dynamic change of the knowledge captured by the model, which would help detect anomalous behavior early; we believe this is an important and challenging open problem. In future work, we aim to extend the proposed framework to study model training dynamics.
\section{Appendix}
\subsection{GDM Algorithm}
\label{training_algo}
To solve the final objective in Eq. (\ref{eq:final}), we use a mini-batch based training procedure as depicted in Algorithm~\ref{alg}.
We first initialize all the explanation graphs by sampling subgraphs from the training graphs, with the node number set to the average node number of the training graphs for the corresponding class.
The GNN model is randomly initialized $K$ times. For each randomly initialized GNN model, we train it for $T$ iterations. During each iteration, we sample a mini-batch of training graphs and explanation graphs. We first apply one step of the update for the explanation graphs based on the current GNN feature extractor; we then update the GNN model for one step using the normal graph classification loss.
\begin{algorithm}[h!]
\caption{GDM\xspace: Learning Explanation Graphs}
\label{alg}
\begin{algorithmic}[1]
\State \textbf{Input}: Training data $\mathcal{G}=\{\mathcal{G}_c\}^C_{c=1}$
\State Initialize explanation graphs $\mathcal{S}=\{\mathcal{S}_c\}_{c=1}^{C}$ for each class $c$
\For{$k=0,\dots,K-1$}
\State Initialize GNN by sampling $\bm{\theta}_0\sim P_{\bm{\theta}}, \bm{\psi}_0 \sim P_{\bm{\psi}}$
\For{$t=0,\ldots, T-1$}
\State Sample mini-batch graphs $B^{\mathcal{S}}=\{B_c^{\mathcal{S}} \sim \mathcal{S}_c\}^C_{c=1}$
\State Sample mini-batch graphs $B^{\mathcal{G}}=\{B_c^{\mathcal{G}} \sim \mathcal{G}_c\}^C_{c=1}$
\State {\color{magenta}\textit{\# Optimize global interpretive graphs}}
\For{class $c=1,\dots, C$}
\State Compute the interpretation loss following Eq. (\ref{eq:final}): $\mathcal{L}_c=\mathcal{L}_\text{DM}(f_{\bm{\theta}_t}(B_c^{\mathcal{S}})) + \alpha\cdot\mathcal{L}_\text{feat}(B^{\mathcal{S}}_c) + \beta\cdot\mathcal{L}_\text{sparsity}(B^{\mathcal{S}}_c)$
\EndFor
\State Update explanation graphs $\mathcal{S} \leftarrow \mathcal{S} - \eta\nabla_{\mathcal{S}}\sum_{c=1}^{C}\mathcal{L}_c$
\State {\color{magenta}\textit{\# Optimize GNN model as normal}}
\State Compute the normal training loss for the graph classification task $\mathcal{L}_\text{CE}(h_{\bm{\psi}_{t-1}}, f_{\bm{\theta}_{t-1}})=\sum_{(G, y)\in B^{\mathcal{G}}}\mathscr{l}(h_{\bm{\psi}_{t-1}}(f_{\bm{\theta}_{t-1}}(G)), y)$
\State Update $\bm{\theta}_{t+1}=\bm{\theta}_{t}-\eta_1\nabla_{\bm{\theta}}\mathcal{L}_\text{CE}(h_{\bm{\psi}_{t-1}}, f_{\bm{\theta}_{t-1}})$
\State Update $\bm{\psi}_{t+1}=\bm{\psi}_{t}-\eta_2\nabla_{\bm{\psi}}\mathcal{L}_\text{CE}(h_{\bm{\psi}_{t-1}}, f_{\bm{\theta}_{t-1}})$
\EndFor
\EndFor
\State \textbf{Output}: Explanation graphs $\mathcal{S}^*=\{\mathcal{S}^*_c\}_{c=1}^{C}$ for each class $c$
\end{algorithmic}
\end{algorithm}
\subsection{{Dataset Statistics}}
\label{sec:dataset}
The data statistics on both synthetic and real-world datasets for graph classification are provided in
Table~\ref{summary_stats}. The ground-truth performance of GCN model is also listed for reference.
\begin{table}[h]
\centering
\caption{Basic Graph Statistics}
\begin{tabular}{lcccccc}
\toprule
Dataset & \textbf{BA-Motif} & \textbf{BA-LRP} & \textbf{Shape} & \textbf{MUTAG} & \textbf{Graph-Twitter} & \textbf{Graph-SST5} \\
\midrule
\# of classes & 2 & 2 & 4 & 2 & 3 & 5\\
\# of graphs & 1000 & 20000 & 100 & 188 & 4998 & 8544\\
Avg. \# of nodes & 25 & 20 & 53.39 & 17.93 & 21.10 & 19.85\\
Avg. \# of edges & 50.93 & 42.04 & 813.93 & 19.79 & 40.28 & 37.78\\
\midrule
GCN Accuracy & 100.00 & 97.95 & 100.00 & 88.63& 61.40 & 44.39\\
\bottomrule
\end{tabular}
\label{summary_stats}
\end{table}
\subsection{GDM\xspace{} versus Inherently Global-Interpretable Model}
\label{LR}
We compare the performance of GDM\xspace with a simple yet inherently global-interpretable method: logistic regression (LR) with hand-crafted graph-based features. When applying LR to these graph-structured data, we leverage the Laplacian matrix as graph features: we first sort the rows/columns of the adjacency matrix by node degree to align the feature dimensions across different graphs; we then flatten the reordered Laplacian matrix as input for the LR model. To generate interpretations, we first train an LR model on the training graphs and take as interpretations the most important features (i.e., edges of the graph) based on the regression weights, keeping as many edges as the average number of edges in the training graphs. We then report the utility of the LR interpretations in Table \ref{tab:LR}.
\begin{table}[h!]
\centering
\small
\caption{Utility of Logistic Regression}
\begin{tabular}{lcccccc}
\toprule
Dataset &MUTAG& BA-Motif&BA-LRP &Shape&Graph-Twitter&Graph-SST5\\
\midrule
LR Interpretation & 93.33\% & 100\%&100\%&100\%&42.10\%&22.68\%\\
Original LR &96.66\% & 100\%&100\% & 100\%& 52.06\%& 27.45\%\\
\bottomrule
\end{tabular}
\label{tab:LR}
\end{table}
\begin{table}[h!]
\centering
\small
\caption{Utility of GDM\xspace on large datasets}
\begin{tabular}{lcc}
\toprule
\# of interpretive graphs per class& Graph-Twitter& Graph-SST5\\
\midrule
10 &56.43 $\pm$ 1.39 &40.30 $\pm$ 1.27\\
50 &58.93 $\pm$ 1.29 &42.70 $\pm$ 1.10\\
100 &59.51 $\pm$ 0.31 &44.37 $\pm$ 0.28\\
Original GCN Accuracy & 61.40 & 44.39\\
\bottomrule
\end{tabular}
\label{tab:GDM_LARGE}
\end{table}
LR shows good interpretation utility on simple datasets like BA-Motif and BA-LRP, but it performs much worse on more sophisticated datasets when compared with GDM\xspace in Table~\ref{tab:GDM_LARGE}. For example, interpretations generated by GDM\xspace achieve accuracy close to that of the original GCN model.
\subsection{GDM\xspace{} versus Local Interpretation}
\label{sec:local-global}
As mentioned above, GDM\xspace{} provides global interpretation, which differs significantly from the extensively studied local interpretation methods: we generate only a few small interpretive graphs per class to reflect the high-level discriminative patterns the model captures, whereas local interpretation explains each graph instance. Although our global interpretation is not directly comparable with existing local interpretation methods, we still compare their interpretation utility to demonstrate the efficacy of GDM\xspace{} when only a few interpretive graphs are generated. The results can be found in Table \ref{tab:GDM_LOCAL_Utility}. We compare our model with GNNExplainer\footnote[1]{https://github.com/RexYing/gnn-model-explainer}, PGExplainer\footnote[2]{https://github.com/flyingdoog/PGExplainer} and Captum\footnote[3]{https://github.com/pytorch/captum} on utility.
For Graph-SST5 and Graph-Twitter, we generate 100 interpretive graphs per class; for the other datasets, 10 per class.
\begin{table}[h!]
\centering
\small
\caption{Utility Compared with Local Interpretation}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lccccccc}
\toprule
Datasets & \textbf{Graph-SST5} & \textbf{Graph-Twitter} & \textbf{MUTAG} & \textbf{BA-Motif} & \textbf{Shape} & \textbf{BA-LRP} \\
\midrule
GDM\xspace{} & \textbf{44.43 $\pm$ 0.45} & \textbf{59.51 $\pm$ 0.31} & 82.45 $\pm$ 2.48 & \textbf{98.00 $\pm$ 0.00} & \textbf{100 $\pm$ 0.00} & \textbf{94.90 $\pm$ 1.09}\\
GNNExplainer & 43.00 $\pm$ 0.07 & 58.12 $\pm$ 1.48 & 73.68 $\pm$ 5.31 & 93.2 $\pm$ 0.89 & 89.00 $\pm$ 4.89 & 58.65 $\pm$ 4.78 \\
PGExplainer & 28.41 $\pm$ 0.00 & 55.46 $\pm$ 0.03 & 75.62 $\pm$ 4.68 & 62.58 $\pm$ 0.66 & 71.75 $\pm$ 1.85 & 50.25 $\pm$ 0.15 \\
Captum & 28.83 $\pm$ 0.05 & 55.76 $\pm$ 0.42 & \textbf{89.20 $\pm$ 0.01} & 52.00 $\pm$ 0.60 & 80.00 $\pm$ 0.01 & 49.25 $\pm$ 0.01 \\
\bottomrule
\end{tabular}
}
\label{tab:GDM_LOCAL_Utility}
\end{table}
Comparing these results, we observe that GDM\xspace{} obtains higher utility scores than the various GNN explanation methods, with relatively small variance.
\subsection{One Use Case of In-Process Interpretation}
\label{OO}
A use case of in-process interpretation arises when the queried data is very different from the data used to train the model (i.e., out-of-distribution): the fidelity score may fail to reflect the intrinsic patterns learned by the model, since post-hoc interpretation also depends heavily on the properties of the queried data; moreover, post-hoc methods are typically used when only the pre-trained model is given, without access to the training data (the scenario where training data is provided can be found in Appendix \ref{sec:local-global}). In contrast, our in-process interpretation is generated during model training without accessing any auxiliary data, and is thus unaffected by the quality of the queried data.
To mimic such an out-of-distribution case, we conduct a post-hoc interpretation experiment in which a GCN model is trained on BA-LRP and then queried on BA-Motif. We use PGExplainer as an example post-hoc (local) interpretation method. The post-hoc interpretation utility is shown below when training and test data come from the same or from different distributions.
\begin{figure}
\caption{Performance of PGExplainer under Out-of-distribution Scenario}
\label{fig:oo}
\end{figure}
\begin{table}[h!]
\centering
\small
\caption{PGExplainer interpretation utility}
\begin{tabular}{cccc}
\toprule
Scenario & Training Data & Test Data & Utility $\pm$ std \\
\midrule
Same Distribution & BA-Motif & BA-Motif & 80.11 $\pm$ 0.08 \\
Out of Distribution & BA-LRP& BA-Motif & 75.34 $\pm$ 0.16\\
\bottomrule
\end{tabular}
\label{tab:oo}
\end{table}
As expected, the utility drops by about 5 points. Meanwhile, we provide a visualization of the generated interpretation in Figure~\ref{fig:oo}, where we can see that PGExplainer fails to capture the most important motif in BA-Motif graphs (i.e., the house/non-house subgraph). This case study demonstrates that post-hoc methods may not provide valid interpretations under distribution shift. In contrast, in-process interpretation is generated along with model training and does not depend on which instances are queried post-hoc. Out-of-distribution cases are common in real-world scenarios, and in-process interpretation is a complementary tool for model interpretability when post-hoc interpretation is not possible or suitable.
\subsection{Hyperparameter Analysis}
\label{hyper-para}
In our final objective Eq. (\ref{eq:final}), we defined two hyper-parameters $\alpha$ and $\beta$ to control the strength for feature matching and sparsity matching regularization, respectively.
In this subsection, we explore the sensitivity of the hyper-parameters $\alpha$ and $\beta$. Since MUTAG is the only dataset that contains node features, we apply the feature matching regularization only on this dataset. We report the in-process utility with different feature-matching coefficients $\alpha$ in Table \ref{tab:alpha}. A larger $\alpha$ imposes a stronger restriction on the node feature distribution. We find that the utility is stable over a wide range of $\alpha$ and only drops slightly at the largest value. This is expected behavior, since the node features of the original MUTAG graphs contain rich information for classification, and matching the feature distribution enables the interpretation to capture rare node types. With this restriction we successfully retain the important feature information in our interpretive graphs.
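To make the roles of $\alpha$ and $\beta$ concrete, the per-class interpretation objective can be sketched as follows (a toy NumPy version; the specific forms of $\mathcal{L}_\text{feat}$ and $\mathcal{L}_\text{sparsity}$ below, mean-feature matching and edge density, are illustrative assumptions rather than the paper's exact definitions):

```python
import numpy as np

def interpretation_loss(emb_syn, emb_real, feat_syn, feat_real,
                        adj_syn, alpha, beta):
    """One-class sketch of the final objective: distribution matching on
    GNN embeddings, plus feature-matching and sparsity regularizers
    weighted by alpha and beta respectively."""
    # L_DM: squared distance between mean GNN embeddings of the two batches
    l_dm = np.sum((emb_syn.mean(axis=0) - emb_real.mean(axis=0)) ** 2)
    # L_feat: match the average node-feature vectors (assumed form)
    l_feat = np.sum((feat_syn.mean(axis=0) - feat_real.mean(axis=0)) ** 2)
    # L_sparsity: penalize the edge density of the synthetic graph (assumed form)
    l_sparsity = adj_syn.mean()
    return l_dm + alpha * l_feat + beta * l_sparsity

# toy check: identical batches give zero matching losses,
# so only the sparsity term (scaled by beta) remains
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
feat = rng.normal(size=(6, 3))
loss = interpretation_loss(emb, emb, feat, feat, np.ones((5, 5)),
                           alpha=0.05, beta=0.1)
```

Larger $\alpha$ pulls the synthetic node-feature statistics toward the real ones; larger $\beta$ pushes the synthetic adjacency toward sparsity, which is exactly the trade-off probed in Table \ref{tab:alpha} and Figure \ref{fig:beta}.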
\begin{figure}
\caption{Sensitivity analysis of hyper-parameter $\beta$ for the sparsity matching regularization}
\label{fig:beta}
\end{figure}
Moreover, we vary the sparsity coefficient $\beta$ over $\{0.001,0.01,0.1,0.5,1\}$ and report the utility for all datasets in Figure \ref{fig:beta}. For most datasets except Shape, the performance starts to degrade when $\beta$ exceeds 0.5. This means that as the interpretive graph becomes sparser, it loses some information that is useful during training.
\begin{table}[h!]
\centering
\small
\caption{Sensitivity analysis of hyper-parameter $\alpha$ for feature matching regularization. }
\begin{tabular}{lccccc}
\toprule
$\alpha$ & 0.005 & 0.01 & 0.05& 0.5 & 1.0\\ [0.5ex]
\midrule
Utility & 82.45 & 82.45 & 82.45 & 82.45 & 80.70 \\
\bottomrule
\end{tabular}
\label{tab:alpha}
\end{table}
\subsection{Special Case of Our Framework}
\label{no_update}
We also explore a special case in which the GNN model is not updated while generating explanations, as discussed in Section~\ref{sec:final}. In this case, we randomly initialize a different GNN in each iteration, with parameters fixed within the iteration, as shown in Algorithm \ref{alg:noupdate}. To our surprise, the generated graphs are still quite effective for training a well-performing GNN. As shown in Table \ref{tab:train-time-init}, the interpretation utility of our method does not degrade when the explanations are optimized without updating the GNN model in each iteration. This indicates that randomly initialized GNNs are already able to produce informative interpretation patterns.
This finding echoes observations from previous work in image classification~\cite{amid2022learning} and node classification~\cite{velickovic2019deep}, which showed that random neural networks can generate powerful representations and perform a distance-preserving embedding of the data, making distances between samples of the same class smaller~\cite{giryes2016deep}.
Our finding further verifies that this property also holds for GNNs on the graph classification task, from the perspective of model interpretation.
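The distance-preservation property of random networks can be illustrated with a toy experiment (a sketch under our own assumptions, unrelated to the actual GNN setup): a single random ReLU layer applied to two well-separated point clouds keeps within-class distances smaller than between-class distances, with no training at all.

```python
import numpy as np

rng = np.random.default_rng(42)

# two well-separated synthetic "classes" of inputs
class_a = rng.normal(loc=0.0, scale=0.5, size=(50, 32))
class_b = rng.normal(loc=3.0, scale=0.5, size=(50, 32))

# an untrained one-layer "network": random weights followed by ReLU
W = rng.normal(size=(32, 64)) / np.sqrt(32)

def embed(x):
    return np.maximum(x @ W, 0.0)

ea, eb = embed(class_a), embed(class_b)

def mean_dist(u, v):
    """Average pairwise Euclidean distance between rows of u and rows of v."""
    return np.mean(np.linalg.norm(u[:, None, :] - v[None, :, :], axis=-1))

within = 0.5 * (mean_dist(ea, ea) + mean_dist(eb, eb))
between = mean_dist(ea, eb)
# the random embedding keeps the two classes separated: within < between
```

This is only a caricature of the phenomenon cited above, but it shows why a randomly initialized feature extractor can already supply a usable geometry for distribution matching.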
\begin{table}[h!]
\centering
\small
\caption{Utility without updating GCN.}
\begin{tabular}{lccccc}
\toprule
Graphs/Cls & \textbf{BA-Motif} & \textbf{BA-LRP} & \textbf{Shape} & \textbf{MUTAG} \\
\midrule
1 & 83.12 $\pm$ 2.16 & 79.95 $\pm$ 3.50 & 70.00 $\pm$ 21.60 & 71.93 $\pm$ 8.94 \\
5 & 99.33 $\pm$ 0.47 & 88.46 $\pm$ 1.11 & 100.00 $\pm$ 0.00 &89.47 $\pm$ 7.44 \\
10 & 100.00 $\pm$ 0.00 & 93.53 $\pm$ 3.41 & 100.00 $\pm$ 0.00 & 85.96 $\pm$ 2.48\\
\midrule
Graphs/Cls & \multicolumn{2}{c}{\textbf{Graph-Twitter}}& \multicolumn{2}{c}{\textbf{Graph-SST5}} \\
\midrule
10 & \multicolumn{2}{c}{52.97 $\pm$ 0.29} & \multicolumn{2}{c}{40.30 $\pm$ 1.27}\\
50 & \multicolumn{2}{c}{57.63 $\pm$ 0.92} & \multicolumn{2}{c}{42.70 $\pm$ 1.10}\\
100 & \multicolumn{2}{c}{58.35 $\pm$ 1.32} & \multicolumn{2}{c}{44.37 $\pm$ 0.28}\\
\bottomrule
\end{tabular}
\label{tab:train-time-init}
\end{table}
\begin{algorithm}[h!]
\caption{GDM without GNN training}
\label{alg:noupdate}
\begin{algorithmic}[1]
\State \textbf{Input}: Training data $\mathcal{G}=\{\mathcal{G}_c\}^C_{c=1}$
\State Initialize explanation graphs $\mathcal{S}=\{\mathcal{S}_c\}_{c=1}^{C}$ for each class $c$
\For{$k=0,\dots,K-1$}
\State Initialize GNN by sampling $\bm{\theta}_0\sim P_{\bm{\theta}}, \bm{\psi}_0 \sim P_{\bm{\psi}}$
\For{$t=0,\ldots, T-1$}
\State Sample mini-batch graphs $B^{\mathcal{S}}=\{B_c^{\mathcal{S}} \sim \mathcal{S}_c\}^C_{c=1}$
\State Sample mini-batch graphs $B^{\mathcal{G}}=\{B_c^{\mathcal{G}} \sim \mathcal{G}_c\}^C_{c=1}$
\For{class $c=1,\dots, C$}
\State Compute the interpretation loss: \\$\mathcal{L}_c=\mathcal{L}_\text{DM}(f_{\bm{\theta}_L}(B_c^{\mathcal{S}})) + \alpha\cdot\mathcal{L}_\text{feat}(B^{\mathcal{S}}_c) + \beta\cdot\mathcal{L}_\text{sparsity}(B^{\mathcal{S}}_c)$
\EndFor
\State Update explanation graphs $\mathcal{S} \leftarrow \mathcal{S} - \eta\nabla_{\mathcal{S}}\sum_{c=1}^{C}\mathcal{L}_c$
\EndFor
\EndFor
\State \textbf{Output}: Explanation graphs $\mathcal{S}^*=\{\mathcal{S}^*_c\}_{c=1}^{C}$ for each class $c$
\end{algorithmic}
\end{algorithm}
\subsection{Gain of Distribution Matching}
\label{sec:dm}
We construct an ablation method that adopts Algorithm \ref{alg} but replaces embedding matching with matching the output probability distribution in the label space. The in-process utility of this label-matching baseline is reported in Table \ref{tab:label-matching}:
\begin{table*}[h!]
\centering
\caption{Utility comparison, with a varying number of explanation graphs generated per class.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lccccccc}
\toprule
\multirow{2}{*}{Graphs/Cls} & \multicolumn{3}{c}{\textbf{Graph-Twitter}} & \multicolumn{3}{c}{\textbf{Graph-SST5}} \\
\cmidrule(rl){2-4}\cmidrule(rl){5-7}
& GDM\xspace{} & GDM\xspace{}-label & Random & GDM\xspace{} & GDM\xspace{}-label & Random & \\
\midrule
10 & \textbf{56.43 $\pm$ 1.39} & 40.00 $\pm$ 3.98 & 52.40 $\pm$ 0.29 & \textbf{35.72 $\pm$ 0.65} & 25.49 $\pm$ 0.39 & 24.90 $\pm$ 0.60 \\
50 & \textbf{58.93 $\pm$ 1.29} & 55.62 $\pm$ 1.12 & 52.92 $\pm$ 0.27& \textbf{43.81 $\pm$ 0.86} & 31.47 $\pm$ 2.58 & 23.15 $\pm$ 0.35\\
100 & \textbf{59.51 $\pm$ 0.31} & 53.37 $\pm$ 0.55 & 55.47 $\pm$ 0.51 & \textbf{44.43 $\pm$ 0.45} & 32.01 $\pm$ 1.90 & 25.26 $\pm$ 0.75 \\
& \multicolumn{3}{c}{\textbf{BA-Motif}} & \multicolumn{3}{c}{\textbf{BA-LRP}} \\
\cmidrule(rl){2-4}\cmidrule(rl){5-7}
& GDM\xspace{} & GDM\xspace{}-label & Random & GDM\xspace{} & GDM\xspace{}-label & Random & \\
\midrule
1 & \textbf{71.66 $\pm$ 2.49} & 50.63 $\pm$ 0.42 & 67.60 $\pm$ 4.52 & 71.56 $\pm$ 3.62 & 54.11 $\pm$ 5.33 & \textbf{77.48 $\pm$ 1.21} \\
5 & \textbf{96.00 $\pm$ 1.63} & 82.54 $\pm$ 0.87 & 77.60 $\pm$ 2.21 & \textbf{91.60 $\pm$ 1.57} & 59.21 $\pm$ 0.99 & 77.76 $\pm$ 0.52\\
10 & \textbf{98.00 $\pm$ 0.00} & 90.89 $\pm$ 0.22 & 84.33 $\pm$ 2.49 & \textbf{94.90 $\pm$ 1.09} & 66.40 $\pm$ 1.47 & 88.38 $\pm$ 1.40 \\
& \multicolumn{3}{c}{\textbf{MUTAG}} & \multicolumn{3}{c}{\textbf{Shape}} \\
\cmidrule(rl){2-4}\cmidrule(rl){5-7}
& GDM\xspace{} & GDM\xspace{}-label & Random & GDM\xspace{} & GDM\xspace{}-label & Random & \\
\midrule
1 & \textbf{71.92 $\pm$ 2.48} & 70.17 $\pm$ 2.48 & 50.87$\pm$ 15.0 & \textbf{93.33 $\pm$ 4.71} & 60.00 $\pm$ 7.49 & 33.20 $\pm$ 4.71\\
5 & 77.19 $\pm$ 4.96 & 57.89 $\pm$ 4.29 & \textbf{80.70 $\pm$ 2.40} & \textbf{96.66 $\pm$ 4.71} & 85.67 $\pm$ 2.45 & 85.39 $\pm$ 12.47\\
10 & \textbf{82.45 $\pm$ 2.48} & 59.65 $\pm$ 8.94 & 75.43 $\pm$ 6.56 & \textbf{100.00 $\pm$ 0.00} & 88.67 $\pm$ 4.61 & 87.36 $\pm$ 4.71 \\
\bottomrule
\end{tabular}
}
\label{tab:label-matching}
\end{table*}
Comparing these results, we observe that the distribution matching design is effective: disabling it greatly deteriorates the performance.
\subsection{More Qualitative Results}
\label{sec:more-qual}
\noindent\textbf{MUTAG}
This dataset has two classes: ``non-mutagenic'' and ``mutagenic''. As discussed in previous works \cite{debnath1991structure, ying2019gnnexplainer}, carbon rings together with $NO_2$ chemical groups are known to be mutagenic, and \cite{luo2020parameterized} observe that carbon rings exist in both mutagenic and non-mutagenic graphs and are therefore not truly discriminative. Our synthesized interpretive graphs are consistent with these ``ground-truth'' chemical rules. For the ``mutagenic'' class, we observe interpretive graphs containing two $NO_2$ chemical groups, one $NO_2$ group together with a carbon ring, or multiple carbon rings. For the ``non-mutagenic'' class, $NO_2$ groups appear much less frequently, while other atoms, such as chlorine, bromine, and fluorine, appear more frequently. To visualize the structure more clearly, we limit the maximum number of nodes to 15 so that the interpretive graphs do not become too complicated.
\noindent\textbf{BA-Motif and BA-LRP}
The qualitative results on the BA-Motif dataset show that the explanations for all classes successfully identify the discriminative features. For the house-shape class, all the generated graphs capture the house structure, regardless of the complicated BA base graph in the remaining part of each graph. For the other class, featuring a five-node cycle, our generated graphs successfully extract it from the whole BA-Motif graph.
In the BA-LRP dataset, the first class consists of Barabasi-Albert graphs with growth parameter 1, in which new nodes attach preferentially to low-degree nodes, while in the second class new nodes attach preferentially to high-degree nodes. Our interpretive dataset again correctly identifies the discriminative patterns that differentiate these two classes, namely the tree-shaped and ring-shaped structures.
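The two growth rules behind the BA-LRP classes can be sketched as follows (a toy generator for intuition only; the dataset's actual construction may differ in details such as the number of edges added per step):

```python
import random

def ba_like_graph(n, prefer_high_degree, seed=0):
    """Grow a graph one node at a time, attaching each new node to one
    existing node chosen with probability proportional to its degree
    (prefer_high_degree=True) or inversely proportional to its degree
    (prefer_high_degree=False)."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}          # start from a single edge (0, 1)
    edges = [(0, 1)]
    for new in range(2, n):
        nodes = list(degree)
        if prefer_high_degree:
            weights = [degree[v] for v in nodes]
        else:
            weights = [1.0 / degree[v] for v in nodes]
        target = rng.choices(nodes, weights=weights)[0]
        edges.append((new, target))
        degree[new] = 1
        degree[target] += 1
    return edges, degree

# attaching to low-degree nodes yields chain/tree-like structures;
# attaching to high-degree nodes yields hub-dominated structures
edges_hi, deg_hi = ba_like_graph(30, prefer_high_degree=True, seed=1)
edges_lo, deg_lo = ba_like_graph(30, prefer_high_degree=False, seed=1)
```

Under the low-degree rule the degree distribution stays flat (long branches), whereas the high-degree rule concentrates edges on a few hubs; these are exactly the tree-shaped versus hub-shaped patterns the interpretive graphs recover.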
\noindent\textbf{Shape}
In Table~\ref{tab:qualification}, the generated explanation graphs for all classes are largely consistent with the desired graph patterns. Note that the differences between the interpretive graphs of Star and Wheel are small. This provides a potential explanation for our post-hoc quantitative results in Table \ref{tab:train-time}, where pre-trained GNN models cannot always distinguish interpretive graphs of the Wheel shape from those of the Star shape.
\begin{table*}[h!]
\centering
\begin{tabular}{llcc}
\toprule
Dataset & Class & Training Graph Example & Synthesized Interpretation Graph\\
\midrule
\multirow{5}{*}{\textbf{BA-Motif}} & House
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{BA/BA2.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=50mm, height=15mm]{BA/BA_EXP1.png}
\end{minipage} \\
\cmidrule(rl){2-4}
& Non-House
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{BA/BA1.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=50mm, height=15mm]{BA/BA_EXP2.png}
\end{minipage} \\
\midrule
\multirow{5}{*}{\textbf{BA-LRP}} & Low Degree
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{BA/lrp1.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=50mm, height=15mm]{BA/lrp_exp.png}
\end{minipage} \\
\cmidrule(rl){2-4}
& High Degree
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{BA/lrp2.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=50mm, height=15mm]{BA/lrp_exp2.png}
\end{minipage} \\
\midrule
\multirow{5}{*}{\textbf{MUTAG}} & Mutagenicity
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{mutag/ori_class1.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=50mm, height=15mm]{mutag/1.png}
\end{minipage} \\
\cmidrule(rl){2-4}
& Non-Mutagenicity
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{mutag/ori_class0.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=50mm, height=15mm]{mutag/2.png}
\end{minipage} \\
\midrule
\multirow{15}{*}{\textbf{Shape}} & Wheel
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{shapes/shape1.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=50mm, height=15mm]{shapes/wheel.png}
\end{minipage} \\
\cmidrule(rl){2-4}
& Lollipop
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{shapes/shape2.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=50mm, height=15mm]{shapes/lolli.png}
\end{minipage} \\
\cmidrule(rl){2-4}
& Grid
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{shapes/shape3.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=50mm, height=15mm]{shapes/grid.png}
\end{minipage} \\
\cmidrule(rl){2-4}
& Star
& \begin{minipage}{.2\textwidth}
\centering
\includegraphics[width=20mm, height=15mm]{shapes/shape4.png}
\end{minipage}
& \begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=50mm, height=15mm]{shapes/star.png}
\end{minipage} \\
\bottomrule
\end{tabular}
\caption{The qualitative results for all datasets. For each class in each dataset, as a reference, the example graph selected from the training data is displayed in the left column, while the generated explanation graphs are in the right column. Different node colors in MUTAG represent different node types.}
\label{tab:qualification}
\end{table*}
\end{document} |
\begin{document}
\begin{center}
\textbf{\Large A Bernstein type inequality and moderate deviations for
weakly dependent sequences}\vskip15pt
Florence Merlev\`{e}de $^{a}$, Magda Peligrad $^{b}$ \footnote{
Supported in part by a Charles Phelps Taft Memorial Fund grant, and NSA\
grants, H98230-07-1-0016 and H98230-09-1-0005} \textit{and\/} Emmanuel Rio $
^{c}$ \footnote{
Supported in part by Centre INRIA Bordeaux Sud-Ouest $\&$ Institut de
Math\'ematiques de Bordeaux}
\end{center}
\vskip10pt $^a$ Universit\'e Paris Est, Laboratoire de math\'ematiques, UMR
8050 CNRS, B\^atiment Copernic, 5 Boulevard Descartes, 77435
Champs-Sur-Marne, FRANCE. E-mail: [email protected]\newline
\newline
$^b$ Department of Mathematical Sciences, University of Cincinnati, PO Box
210025, Cincinnati, Oh 45221-0025, USA. Email: [email protected] \newline
\newline
$^c$ Universit\'e de Versailles, Laboratoire de math\'ematiques, UMR 8100
CNRS, B\^atiment Fermat, 45 Avenue des Etats-Unis, 78035 Versailles, FRANCE.
E-mail: [email protected] \vskip10pt
\textit{Key words}: Deviation inequality, moderate deviations principle,
semiexponential tails, weakly dependent sequences, strong mixing, absolute
regularity, linear processes.
\textit{Mathematical Subject Classification} (2000): 60E15, 60F10.
\begin{center}
\textbf{Abstract}\vskip10pt
\end{center}
In this paper we present a tail inequality for the maximum of partial sums
of a weakly dependent sequence of random variables that is not necessarily
bounded. The class considered includes geometrically and subgeometrically
strongly mixing sequences. The result is then used to derive asymptotic
moderate deviation results. Applications include classes of Markov chains,
functions of linear processes with absolutely regular innovations and ARCH
models.
\section{Introduction}
\setcounter{equation}{0}
Let us consider a sequence $X_{1},X_{2},\ldots $ of real valued random
variables. The aim of this paper is to present nonasymptotic tail
inequalities for $S_{n}=X_{1}+X_{2}+\cdots +X_{n}$ and to use them to derive
moderate deviations principles.
\smallskip For independent and centered random variables $X_{1},X_{2},\ldots
$, one of the main tools to get an upper bound for the large and moderate
deviations principles is the so-called Bernstein inequalities. We first
recall the Bernstein inequality for random variables satisfying Condition (
\ref{b1lt}) below. Suppose that the random variables $X_{1},X_{2},\ldots $
satisfy
\begin{equation}
\log \mathbb{E}\exp (tX_{i})\leq \frac{\sigma _{i}^{2}t^{2}}{2(1-tM)}\ \hbox{ for
positive constants }\ \sigma _{i}\ \hbox{ and }M, \label{b1lt}
\end{equation}
for any $t$ in $[0,1/M[$. Set $V_{n}=\sigma _{1}^{2}+\sigma _{2}^{2}+\cdots
+\sigma _{n}^{2}$. Then
\begin{equation*}
\mathbb{P}(S_{n}\geq \sqrt{2V_{n}x}+Mx)\leq \exp (-x).
\end{equation*}
When the random variables $X_{1},X_{2},\ldots $ are uniformly bounded by $M$
then (\ref{b1lt}) holds with $\sigma _{i}^{2}=\mathrm{Var}X_{i}$, and the
above inequality implies the usual Bernstein inequality
\begin{equation}
\mathbb{P}(S_{n}\geq y)\leq \exp \Bigl(-y^{2}(2V_{n}+2yM)^{-1}\Bigr).
\label{B1}
\end{equation}
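For the reader's convenience, the passage from the previous display to (\ref{B1}) is the following elementary computation. Given $y>0$, take $x=y^{2}/(2V_{n}+2yM)$ and set $u=2V_{n}/(2V_{n}+2yM)\in \,]0,1]$. Then
\begin{equation*}
\sqrt{2V_{n}x}+Mx=y\Bigl(\sqrt{u}+\frac{1-u}{2}\Bigr)\leq y,
\end{equation*}
since $\sqrt{u}\leq (1+u)/2$ for $u$ in $]0,1]$. Consequently
\begin{equation*}
\mathbb{P}(S_{n}\geq y)\leq \mathbb{P}(S_{n}\geq \sqrt{2V_{n}x}+Mx)\leq \exp (-x)=\exp \Bigl(-y^{2}(2V_{n}+2yM)^{-1}\Bigr).
\end{equation*}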
Assume now that the random variables $X_{1},X_{2},\ldots $ satisfy the
following weaker tail condition: for some $\gamma $ in $]0,1[$ and any positive $t$,
$\sup_i \mathbb{P}(X_{i}\geq t)\leq \exp (1-(t/M)^{\gamma })$. Then, by the proof of Corollary 5.1 in
Borovkov (2000-a) we infer that
\begin{equation}
\mathbb{P}(|S_{n}|\geq y)\leq 2\exp \Bigl(-c_{1}y^{2}/V_{n}\Bigr)+n\exp
\Bigl(-c_{2}(y/M)^{\gamma }\Bigr)\,, \label{B2}
\end{equation}
where $c_{1}$ and $c_{2}$ are positive constants ($c_{2}$ depends on $\gamma
$). More precise results for large and moderate deviations of sums of
independent random variables with \textsl{semiexponential\/} tails may be
found in Borovkov (2000-b).
In our terminology, the moderate deviations principle (MDP) stands for the
following type of asymptotic behavior:
\smallskip
\begin{defn}
We say that the MDP holds for a sequence $(T_{n})_{n}$ of random variables
with speed $a_{n}\rightarrow 0$ and rate function $I(t)$ if for each Borel set
$A$,
\begin{eqnarray}
-\inf_{t\in A^{o}}I(t) &\leq &\liminf_{n}a_{n}\log \mathbb{P}(\sqrt{a_{n}}
T_{n}\in A) \notag \\
&\leq &\limsup_{n}a_{n}\log \mathbb{P}(\; \; \;qrt{a_{n}}T_{n}\in A)\leq
-\inf_{t\in \bar{A}}I(t)\,, \label{mdpdef}
\end{eqnarray}
where $\bar{A}$ denotes the closure of $A$ and $A^{o}$ the interior of $A$.
\end{defn}
Our interest is to extend the above inequalities to strongly mixing
sequences of random variables and to study the MDP for $
(S_{n}/\mathrm{stdev}(S_{n}))_{n}$. In order to cover a larger class of examples we
shall also consider less restrictive coefficients of weak dependence, such
as the $\tau $-mixing coefficients defined in Dedecker and Prieur (2004)
(see Section \ref{sectionMR} for the definition of these coefficients).
Let $X_{1},X_{2},\ldots $ be a strongly mixing sequence of real-valued and
centered random variables. Assume that there exist a positive constant $
\gamma _{1}$ and a positive $c$ such that the strong mixing coefficients of
the sequence satisfy
\begin{equation}
\alpha (n)\leq \exp (-cn^{\gamma _{1}})\ \hbox{ for any positive integer }
n\,, \label{H1}
\end{equation}
and there is a constant $\gamma _{2}$ in $]0,+\infty ]$ such that
\begin{equation}
\sup_{i>0}\mathbb{P}(|X_{i}|>t)\leq \exp (1-t^{\gamma _{2}})\ \hbox{ for any
positive }t \label{H2}
\end{equation}
(when $\gamma _{2}=+\infty $ (\ref{H2}) means that $\Vert X_{i}\Vert
_{\infty }\leq 1$ for any positive $i$).
Obtaining exponential bounds for this case is a challenging problem. One of
the available tools in the literature is Theorem 6.2 in Rio (2000), which is
a Fuk-Nagaev type inequality, that provides the inequality below. Let $
\gamma $ be defined by $1/\gamma =(1/\gamma _{1})+(1/\gamma _{2})$. For any
positive $\lambda $ and any $r\geq 1$,
\begin{equation}
\mathbb{P}(\sup_{k\in \lbrack 1,n]}|S_{k}|\geq 4\lambda )\leq 4\Bigl(1+\frac{
\lambda ^{2}}{rnV}\Bigr)^{-r/2}+4Cn\lambda ^{-1}\exp \Bigl(-c(\lambda
/r)^{\gamma }\Bigr), \label{RFN}
\end{equation}
where
\begin{equation*}
V=\sup_{i>0}\Bigl(\mathbb{E}(X_{i}^{2})+2\sum_{j>i}|\mathbb{E}(X_{i}X_{j})|
\Bigr).
\end{equation*}
Selecting in (\ref{RFN}) $r=\lambda ^{2}/(nV)$ leads to
\begin{equation*}
\mathbb{P}(\sup_{k\in \lbrack 1,n]}|S_{k}|\geq 4\lambda )\leq 4\exp \Bigl(-
\frac{\lambda ^{2}\log 2}{2nV}\Bigr)+4Cn\lambda ^{-1}\exp \Bigl(
-c(nV/\lambda )^{\gamma }\Bigr)
\end{equation*}
for any $\lambda \geq (nV)^{1/2}$. The above inequality gives a subgaussian
bound, provided that
\begin{equation*}
(nV/\lambda )^{\gamma }\geq \lambda ^{2}/(nV)+\log (n/\lambda ),
\end{equation*}
which holds if $\lambda \ll (nV)^{(\gamma +1)/(\gamma +2)}$ (here and below $
\ll $ replaces the
symbol $o$). Hence (\ref{RFN}) is useful to study the probability of
moderate deviation $\mathbb{P}(|S_{n}|\geq t\sqrt{n/a_{n}})$ provided $
a_{n}\gg n^{-\gamma /(\gamma +2)}$. For $\gamma =1$ this leads to $a_{n}\gg
n^{-1/3}$. For bounded random variables and geometric mixing rates (in that
case $\gamma =1$), Proposition 13 in Merlev\`{e}de and Peligrad (2009)
provides the MDP under the improved condition $a_{n}\gg n^{-1/2}$. We will
prove in this paper that this condition is still suboptimal from the point
of view of moderate deviation.
\smallskip For stationary geometrically mixing (absolutely regular) Markov
chains, and bounded functions $f$ (here $\gamma =1$), Theorem 6 in Adamczak
(2008) provides a Bernstein's type inequality for $
S_{n}(f)=f(X_{1})+f(X_{2})+\cdots +f(X_{n})$. Under the centering condition $
\mathbb{E}(f(X_{1}))=0$, he proves that
\begin{equation}
\mathbb{P}(|S_{n}(f)|\geq \lambda )\leq C\exp \Bigl(-\frac{1}{C}\min \Bigl(
\frac{\lambda ^{2}}{n\sigma ^{2}},\frac{\lambda }{\log n}\Bigr)\Bigr),
\label{Adam}
\end{equation}
where $\sigma ^{2}=\lim_{n}n^{-1}\mathrm{Var}S_{n}(f)$ (here we take $m=1$
in his condition (14) on the small set). Inequality (\ref{Adam}) provides
exponential tightness for $S_{n}(f)/\sqrt{n}$ with rate $a_{n}$ as soon as $
a_{n}\gg n^{-1}(\log n)^{2}$, which is weaker than the above conditions.
Still in the context of Markov chains, we point out the recent Fuk-Nagaev
type inequality obtained by Bertail and Cl\'emen\c{c}on (2008). However for
stationary subgeometrically mixing Markov chains, their inequality does not
lead to the optimal rate which can be expected in view of the results
obtained by Djellout and Guillin (2001).
\smallskip To our knowledge, Inequality (\ref{Adam}) has not been extended
yet to the case $\gamma <1$, even for the case of bounded functions $f$ and
absolutely regular Markov chains. In this paper we improve inequality (\ref
{RFN}) in the case $\gamma <1 $ and then derive moderate deviations
principles from this new inequality under the minimal condition $
a_{n}n^{\gamma /(2-\gamma )}\rightarrow \infty $. The main tool is an
extension of inequality (\ref{B2}) to dependent sequences. We shall prove
that, for $\alpha $-mixing or $\tau $-mixing sequences satisfying (\ref{H1})
and (\ref{H2}) for $\gamma <1$, there exists a positive $\eta $ such that,
for $n\geq 4$ and $\lambda \geq C(\log n)^{\eta }$
\begin{equation}
\mathbb{P}( \sup_{j\leq n}|S_{j}| \geq \lambda )\leq (n+1) \exp
(-\lambda ^{\gamma } / C_1) + \exp (-\lambda ^{2}/ (C_2 + C_2nV) ) , \label{Boro}
\end{equation}
where $C$, $C_1$ and $C_2$ are positive constants depending on $c$, $\gamma
_{1}$ and $\gamma _{2}$ and $V$ is some constant (which differs from the
constant $V$ in (\ref{RFN}) in the unbounded case), depending on the
covariance properties of truncated random variables built from the initial
sequence. In order to define precisely $V$ we need to introduce truncation
functions $\varphi _{M}$.
\begin{nota}
For any positive $M$ let the function $\varphi_M$ be defined by $\varphi_M
(x) = (x \wedge M) \vee(-M) $.
\end{nota}
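As an aside, the truncation $\varphi _{M}$ is elementary; the following snippet (illustrative only, not part of the argument) makes the clipping explicit:

```python
def phi(x, M):
    """Two-sided truncation phi_M(x) = (x ∧ M) ∨ (-M): clip x to [-M, M]."""
    return max(min(x, M), -M)
```

It leaves $|x|\leq M$ unchanged and clips larger values to $\pm M$.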
With this notation, (\ref{Boro}) holds with
\begin{equation}
V=\sup_{M\geq 1}\sup_{i>0}\Bigl (\mathrm{Var} (\varphi
_{M}(X_{i}))+2\sum_{j>i}|\mathrm{Cov}(\varphi _{M}(X_{i}),\varphi
_{M}(X_{j}))|\Bigr). \label{VV}
\end{equation}
To prove (\ref{Boro}) we use a variety of techniques and new ideas, ranging
from the big- and small-blocks argument based on a Cantor-type construction
to dyadic induction and adaptive truncation, along with coupling arguments. In a
forthcoming paper, we will study the case $\gamma _{1}=1$ and $\gamma
_{2}=\infty $. We now give more definitions and precise results.
\section{Main results}
\label{sectionMR}
\setcounter{equation}{0}
\quad We first define the dependence coefficients that we consider in this
paper.
\quad For any real random variable $X$ in ${\mathbb{L}}^{1}$ and any $\sigma
$-algebra $\mathcal{M}$ of $\mathcal{A}$, let ${\mathbb{P}}_{X|\mathcal{M}}$
be a conditional distribution of $X$ given ${\mathcal{M}}$ and let ${\mathbb{
P}}_{X}$ be the distribution of $X$. We consider the coefficient $\tau (
\mathcal{M},X)$ of weak dependence (Dedecker and Prieur, 2004) which is
defined by
\begin{equation}
\tau (\mathcal{M},X)=\Big \|\sup_{f\in \Lambda _{1}(\mathbb{R})}\Big |\int
f(x)\mathbb{P}_{X|\mathcal{M}}(dx)-\int f(x)\mathbb{P}_{X}(dx)\Big |\Big \|
_{1}\, , \label{deftau1}
\end{equation}
where $\Lambda _{1}(\mathbb{R})$ is the set of $1$-Lipschitz functions from $
\mathbb{R}$ to $\mathbb{R}$.
The coefficient $\tau $ has the following coupling property: If $\Omega $ is
rich enough then the coefficient $\tau (\mathcal{M},X)$ is the infimum of $
\Vert X-X^{\ast }\Vert _{1}$ where $X^{\ast }$ is independent of $\mathcal{M}
$ and distributed as $X$ (see Lemma 5 in Dedecker and Prieur (2004)). This
coupling property allows one to relate the coefficient $\tau $ to the strong
mixing coefficient of Rosenblatt (1956), defined by
\begin{equation*}
\alpha (\mathcal{M},\sigma (X))=\sup_{A\in \mathcal{M},B\in \sigma (X)}|{
\mathbb{P}}(A\cap B)-{\mathbb{P}}(A){\mathbb{P}}(B)|\,,
\end{equation*}
as shown in Rio (2000, p. 161) for the bounded case, and by Peligrad (2002)
for the unbounded case. For equivalent definitions of the strong mixing
coefficient we refer for instance to Bradley (2007, Lemma 4.3 and Theorem
4.4).
\quad If $Y$ is a random variable with values in $\mathbb{R}^{k}$, the
coupling coefficient $\tau$ is defined as follows: If $Y\in {\mathbb{L}}^{1}(
\mathbb{R}^{k})$,
\begin{equation}
\tau (\mathcal{M},Y)=\sup \{\tau (\mathcal{M},f(Y)),f\in \Lambda _{1}(
\mathbb{R}^{k})\}\,, \label{deftau1k}
\end{equation}
where $\Lambda _{1}(\mathbb{R}^{k})$ is the set of $1$-Lipschitz functions
from $\mathbb{R}^{k}$ to $\mathbb{R}$.
The $\tau $-mixing coefficients $\tau _{X}(i)=\tau (i)$ of a
sequence $(X_{i})_{i\in \mathbb{Z}}$ of real-valued random variables are
defined by
\begin{equation}
\tau _{k}(i)=\max_{1\leq \ell \leq k}\frac{1}{\ell }\sup \Big \{\tau (
\mathcal{M}_{p},(X_{j_{1}},\cdots ,X_{j_{\ell }})),\,p+i\leq j_{1}<\cdots
<j_{\ell }\Big \}\text{ and }\tau (i)=\sup_{k\geq 0}\tau _{k}(i)\,,
\label{deftau2}
\end{equation}
where $\mathcal{M}_{p}=\sigma (X_{j},j\leq p)$ and the above supremum is
taken over $p$ and $(j_{1},\ldots ,j_{\ell })$. Recall that the strong mixing
coefficients $\alpha (i)$ are defined by:
\begin{equation*}
\ {\alpha }(i)=\sup_{p\in \mathbb{Z}}\alpha (\mathcal{M}_{p},\sigma
(X_{j},j\geq i+p))\,.
\end{equation*}
Define now the function $Q_{|Y|}$ by $Q_{|Y|}(u)=\inf \{t>0,\mathbb{P}
(|Y|>t)\leq u\}$ for $u$ in $]0,1]$. To compare the $\tau $-mixing
coefficient with the strong mixing coefficient, let us mention that, by
Lemma 7 in Dedecker and Prieur (2004),
\begin{equation}
\tau (i)\leq 2\int_{0}^{2\alpha (i)}Q(u)du\,, \hbox{ where }Q=\sup_{k\in {
\mathbb{Z}}}Q_{|X_{k}|}. \label{comptaualpha}
\end{equation}
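For instance, when the variables are uniformly bounded by some constant $M$, we have $Q\leq M$, and (\ref{comptaualpha}) reduces to the classical comparison:

```latex
\tau (i)\leq 2\int_{0}^{2\alpha (i)}M\,du=4M\alpha (i)\,.
```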
\smallskip Let $(X_{j})_{j\in \mathbb{Z}}$ be a sequence of centered real
valued random variables and let $\tau (i)$ be defined by (\ref{deftau2}).
Let $\tau (x)=\tau ([x])$ (square brackets denoting the integer part).
Throughout, we assume that there exist positive constants $\gamma _{1}$ and $
\gamma _{2}$ such that
\begin{equation}
\tau (x)\leq \exp (-cx^{\gamma _{1}})\ \hbox{ for any }x\geq 1\,,
\label{hypoalpha}
\end{equation}
where $c>0$ and for any positive $t$,
\begin{equation}
\sup_{k>0}\mathbb{P}(|X_{k}|>t)\leq \exp (1-t^{\gamma _{2}}):=H(t)\,.
\label{hypoQ}
\end{equation}
Suppose furthermore that
\begin{equation}
\gamma <1\text{ where $\gamma $ is defined by $1/\gamma =1/\gamma
_{1}+1/\gamma _{2}$}\,. \label{hypogamma}
\end{equation}
\begin{thm}
\label{BTinegacont} Let $(X_{j})_{j\in \mathbb{Z}}$ be a sequence of
centered real valued random variables and let $V$ be defined by (\ref{VV}).
Assume that (\ref{hypoalpha}), (\ref{hypoQ}) and (\ref{hypogamma}) are
satisfied. Then $V$ is finite and, for any $n\geq 4$, there exist positive
constants $C_{1}$, $C_{2}$, $C_{3}$ and $C_{4}$ depending only on $c $, $
\gamma $ and $\gamma _{1}$ such that, for any positive $x$,
\begin{equation*}
\mathbb{P}\Bigl(\sup_{j\leq n}|S_{j}|\geq x\Bigr)\leq n\exp \Bigl(-\frac{
x^{\gamma }}{C_{1}}\Bigr )+\exp \Bigl(-\frac{x^{2}}{C_{2}(1+nV)}\Bigr)+\exp
\Bigl(-\frac{x^{2}}{C_{3}n}\exp \Bigl(\frac{x^{\gamma (1-\gamma )}}{
C_{4}(\log x)^{\gamma }}\Bigr)\Bigr)\,.
\end{equation*}
\end{thm}
\begin{rmk}
Let us mention that if the sequence $(X_{j})_{j\in \mathbb{Z}}$ satisfies (
\ref{hypoQ}) and is strongly mixing with strong mixing coefficients
satisfying (\ref{H1}), then, from (\ref{comptaualpha}), (\ref{hypoalpha}) is
satisfied (with another constant), and Theorem \ref{BTinegacont} applies.
\end{rmk}
\begin{rmk}
\label{remexpmom} If $\mathbb{E} \exp ( |X_i|^{\gamma_2}) \leq K$ for any
positive $i$, then setting $C = 1 \vee \log K $, we notice that the process $
(C^{-1/\gamma_2}X_i)_{i \in {\mathbb{Z}}}$ satisfies (\ref{hypoQ}).
\end{rmk}
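A short verification of Remark \ref{remexpmom}: by Markov's inequality applied to $\exp (|X_{i}|^{\gamma _{2}})$, for any $t\geq 1$,

```latex
\mathbb{P}(|X_{i}|>C^{1/\gamma _{2}}t)\leq K\exp (-Ct^{\gamma _{2}})
\leq \exp \bigl(C(1-t^{\gamma _{2}})\bigr)\leq \exp (1-t^{\gamma _{2}})\,,
```

since $\log K\leq C$, $C\geq 1$ and $1-t^{\gamma _{2}}\leq 0$; for $t<1$ the bound is trivial because $\exp (1-t^{\gamma _{2}})\geq 1$.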
\begin{rmk}
\label{remv2} If $(X_{i})_{i\in \mathbb{Z} }$ satisfies (\ref{hypoalpha})
and (\ref{hypoQ}), then
\begin{eqnarray*}
V &\leq &\sup_{i>0} \Bigl( \mathbb{E} (X_i^2) + 4 \sum_{k >0 }
\int_0^{\tau(k)/2} Q_{|X_i|} (G(v)) dv \Bigr) \\
& = & \sup_{i>0} \Bigl( \mathbb{E} (X_i^2) + 4 \sum_{k >0 }
\int_0^{G(\tau(k)/2)} Q_{|X_i|} (u) Q(u) du \Bigr) ,
\end{eqnarray*}
where $G$ is the inverse function of $x \mapsto \int_0^xQ(u) du$ (see
Section \ref{prremv2} for a proof). Here the random variables do not need to
be centered. Note also that, in the strong mixing case, using (\ref
{comptaualpha}), we have $G(\tau(k)/2) \leq 2 \alpha (k)$.
\end{rmk}
This result is the main tool to derive the MDP below.
\begin{thm}
\label{thmMDPsubgeo11} Let $(X_{i})_{i\in \mathbb{Z}}$ be a sequence of
random variables as in Theorem \ref{BTinegacont} and let $
S_{n}=\sum_{i=1}^{n}X_{i}$ and $\sigma _{n}^{2}=\mathrm{Var}S_{n}$. Assume
in addition that $\liminf_{n\rightarrow \infty }\sigma _{n}^{2}/n>0$. Then
for all positive sequences $a_{n}$ with $a_{n}\rightarrow 0$ and $
a_{n}n^{\gamma /(2-\gamma )}\rightarrow \infty $, $\{\sigma _{n}^{-1}S_{n}\}$
satisfies (\ref{mdpdef}) with the good rate function $I(t)=t^{2}/2$.
\end{thm}
If we impose a stronger degree of stationarity we obtain the following
corollary.
\begin{cor}
\label{thmMDPsubgeo} Let $(X_{i})_{i\in \mathbb{Z}}$ be a second order
stationary sequence of centered real valued random variables. Assume that (
\ref{hypoalpha}), (\ref{hypoQ}) and (\ref{hypogamma}) are satisfied. Let $
S_{n}=\sum_{i=1}^{n}X_{i}$ and $\sigma _{n}^{2}=\mathrm{Var}S_{n}$. Assume
in addition that $\sigma _{n}^{2}\rightarrow \infty $. Then $
\lim_{n\rightarrow \infty }\sigma _{n}^{2}/n=\sigma ^{2}>0$, and for all
positive sequences $a_{n}$ with $a_{n}\rightarrow 0$ and $a_{n}n^{\gamma
/(2-\gamma )}\rightarrow \infty $, $\{n^{-1/2}S_{n}\}$ satisfies (\ref
{mdpdef}) with the good rate function $I(t)=t^{2}/(2\sigma ^{2})$.
\end{cor}
\subsection{Applications}
\subsubsection{Instantaneous functions of absolutely regular processes}
Let $(Y_{j})_{j\in \mathbb{Z}}$ be a strictly stationary sequence of random
variables with values in a Polish space $E$, and let $f$ be a measurable
function from $E$ to ${\mathbb{R}}$. Set $X_{j}=f(Y_{j})$. Consider now the
case where the sequence $(Y_{k})_{k\in \mathbb{Z}}$ is absolutely regular
(or $\beta $-mixing) in the sense of Rozanov and Volkonskii (1959). Setting $
\mathcal{F}_{0}=\sigma (Y_{i},i\leq 0)$ and $\mathcal{G}_{k}=\sigma
(Y_{i},i\geq k)$, this means that
\begin{equation*}
\beta (k)=\beta (\mathcal{F}_{0},\mathcal{G}_{k})\rightarrow 0\,,\text{ as }
k\rightarrow \infty \,,
\end{equation*}
with $\beta ({\mathcal{A}},{\mathcal{B}})=\frac{1}{2}\sup \{\sum_{i\in
I}\sum_{j\in J}|\mathbb{P}(A_{i}\cap B_{j})-\mathbb{P}(A_{i})\mathbb{P}
(B_{j})|\}$, the supremum being taken over all finite partitions $
(A_{i})_{i\in I}$ and $(B_{j})_{j\in J}$ of $\Omega $ respectively with
elements in ${\mathcal{A}}$ and ${\mathcal{B}}$. If we assume that
\begin{equation}
\beta (n)\leq \exp (-cn^{\gamma _{1}})\text{ for any positive }n,
\label{hypobeta}
\end{equation}
where $c>0$ and $\gamma _{1}>0$, and that the random variables $X_{j}$ are
centered and satisfy (\ref{hypoQ}) for some positive $\gamma _{2}$ such that
$1/\gamma =1/\gamma _{1}+1/\gamma _{2}>1$, then Theorem \ref{BTinegacont}
and Corollary \ref{thmMDPsubgeo} apply to the sequence $(X_{j})_{j\in
\mathbb{Z}}$. Furthermore, as shown in Viennet (1997), by Delyon's (1990)
covariance inequality,
\begin{equation*}
V\leq \mathbb{E}(f^{2}(Y_{0}))+4\sum_{k>0}\mathbb{E}(B_{k}f^{2}(Y_{0})),
\end{equation*}
for some sequence $(B_{k})_{k>0}$ of random variables with values in $[0,1]$
satisfying $\mathbb{E}(B_{k})\leq \beta (k)$ (see Rio (2000, Section 1.6)
for more details).
We now give an example where $(Y_{j})_{j\in \mathbb{Z}}$ satisfies (\ref
{hypobeta}). Let $(Y_{j})_{j\geq 0}$ be an $E$-valued irreducible ergodic
and stationary Markov chain with a transition probability $P$ having a
unique invariant probability measure $\pi $ (by the Kolmogorov extension theorem
one can complete $(Y_{j})_{j\geq 0}$ to a sequence $(Y_{j})_{j\in \mathbb{Z}
} $). Assume furthermore that the chain has an atom, that is, there exists $
A\subset E$ with $\pi (A)>0$ and a probability measure $\nu $ such that $
P(x,\cdot )=\nu (\cdot )$ for any $x$ in $A$. If
\begin{equation}
\text{ there exists }\delta >0\text{ and }\gamma _{1}>0\text{ such that }
\mathbb{E}_{\nu }(\exp (\delta \tau ^{\gamma _{1}}))<\infty \,,
\label{condsg}
\end{equation}
where $\tau =\inf \{n\geq 0;\,Y_{n}\in A\}$, then the $\beta $-mixing
coefficients of the sequence $(Y_{j})_{j\geq 0}$ satisfy (\ref{hypobeta})
with the same $\gamma _{1}$ (see Proposition 9.6 and Corollary 9.1 in Rio
(2000) for more details). Suppose that $\pi (f)=0$. Then the results apply
to $(X_{j})_{j\geq 0}$ as soon as $f$ satisfies
\begin{equation*}
\pi (|f|>t)\leq \exp (1-t^{\gamma _{2}})\ \text{ for any positive }t\,.
\end{equation*}
Compared to the results obtained by de Acosta (1997) and Chen and de Acosta
(1998) for geometrically ergodic Markov chains, and by Djellout and Guillin
(2001) for subgeometrically ergodic Markov chains, we do not require here
the function $f$ to be bounded.
\subsubsection{Functions of linear processes with absolutely regular
innovations}
\label{Sexample1}
Let $f$ be a 1-Lipschitz function. We consider here the case where
\begin{equation*}
X_{n}=f\bigl(\sum_{j\geq 0}a_{j}\xi _{n-j}\bigr)-\mathbb{E}f\bigl(
\sum_{j\geq 0}a_{j}\xi _{n-j}\bigr)\,,
\end{equation*}
where $A=\sum_{j\geq 0}|a_{j}|<\infty $ and $(\xi _{i})_{i\in {\mathbb{Z}}}$
is a strictly stationary sequence of real-valued random variables which is
absolutely regular in the sense of Rozanov and Volkonskii.
Let $\mathcal{F}_{0}=\sigma (\xi _{i},i\leq 0)$ and $\mathcal{G}_{k}=\sigma (
{\xi }_{i},i\geq k)$. According to Section 3.1 in Dedecker and Merlev\`{e}de
(2006), if the innovations $(\xi _{i})_{i\in {\mathbb{Z}}}$ are in ${\mathbb{
L}}^{2}$, the following bound holds for the $\tau $-mixing coefficient
associated to the sequence $(X_{i})_{i\in {\mathbb{Z}}}$:
\begin{equation*}
\tau (i)\leq 2\Vert \xi _{0}\Vert _{1}\sum_{j\geq i}|a_{j}|+4\Vert \xi
_{0}\Vert _{2}\sum_{j=0}^{i-1}|a_{j}|\,\beta _{\xi }^{1/2}(i-j)\,.
\end{equation*}
Assume that there exist $\gamma _{1}>0$ and $c^{\prime }>0$ such that, for
any positive integer $k$,
\begin{equation*}
a_{k}\leq \exp (-c^{\prime }k^{\gamma _{1}})\text{ and }\beta _{\xi }(k)\leq
\exp (-c^{\prime }k^{\gamma _{1}})\,.
\end{equation*}
Then the $\tau $-mixing coefficients of $(X_{j})_{j\in \mathbb{Z}}$ satisfy (
\ref{hypoalpha}). Let us now focus on the tails of the random variables $
X_{i}$. Assume that $(\xi _{i})_{i\in {\mathbb{Z}}}$ satisfies (\ref{hypoQ}
). Define the convex functions $\psi _{\eta }$ for $\eta >0$ in the
following way: $\psi _{\eta }(-x)=\psi _{\eta }(x)$, and for any $x\geq 0$,
\begin{equation*}
\psi _{\eta }(x)=\exp (x^{\eta })-1\ \hbox{ for }\eta \geq 1\ \hbox{ and }\
\psi _{\eta }(x)=\int_{0}^{x}\exp (u^{\eta })du\ \hbox{ for }\eta \in ]0,1].
\end{equation*}
Let $\Vert \,.\,\Vert _{\psi _{\eta }}$ be the usual corresponding Orlicz
norm. Since the function $f$ is 1-Lipschitz, we get that $\Vert X_{0}\Vert
_{\psi _{\gamma _{2}}}\leq 2A\Vert \xi _{0}\Vert _{\psi _{\gamma _{2}}}$.
Next, if $(\xi _{i})_{i\in {\mathbb{Z}}}$ satisfies (\ref{hypoQ}), then $
\Vert \xi _{0}\Vert _{\psi _{\gamma _{2}}}<\infty $. Furthermore, it can
easily be proven that, if $\Vert Y\Vert _{\psi _{\eta }}\leq 1$, then $
\mathbb{P}(|Y|>t)\leq \exp (1-t^{\eta })$ for any positive $t$. Hence,
setting $C=2A\Vert \xi _{0}\Vert _{\psi _{\gamma _{2}}}$, we get that $
(X_{i}/C)_{i\in {\mathbb{Z}}}$ satisfies (\ref{hypoQ}) with the same
parameter $\gamma _{2}$, and therefore the conclusions of Theorem \ref
{BTinegacont} and Corollary \ref{thmMDPsubgeo} hold with $\gamma $ defined
by $1/\gamma =1/\gamma _{1}+1/\gamma _{2}$, provided that $\gamma <1$.
This example shows that our results hold for processes that are not
necessarily strongly mixing. Recall that, in the case where $a_{i}=2^{-i-1}$
and the innovations are iid with law ${\mathcal{B}}(1/2)$, the process fails
to be strongly mixing in the sense of Rosenblatt.
\subsubsection{ARCH($\infty$) models}
\label{Sexample2} Let $(\eta _{t})_{t\in \mathbb{Z}}$ be an iid sequence of
zero mean real random variables such that $\Vert \eta _{0}\Vert _{\infty
}\leq 1$. We consider the following ARCH($\infty $) model described by
Giraitis \textit{et al.} (2000):
\begin{equation}
Y_{t}=\sigma _{t}\eta _{t}\,,\ \hbox{ where }\sigma _{t}^{2}=a+\sum_{j\geq
1}a_{j}Y_{t-j}^{2}\,, \label{defarch}
\end{equation}
where $a\geq 0$, $a_{j}\geq 0$ and $\sum_{j\geq 1}a_{j}<1$. Such models are
encountered when the volatility $(\sigma _{t}^{2})_{t\in {\mathbb{Z}}}$ is
unobserved. In that case, the process of interest is $(Y_{t}^{2})_{t\in {
\mathbb{Z}}}$. Under the above conditions, there exists a unique stationary
solution that satisfies
\begin{equation*}
\Vert Y_{0}\Vert _{\infty }^{2}\leq a+a\sum_{\ell \geq 1}\big (\sum_{j\geq
1}a_{j}\big )^{\ell }=M<\infty \,.
\end{equation*}
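Summing the geometric series (recall $\sum_{j\geq 1}a_{j}<1$), this bound has the closed form

```latex
M=a\sum_{\ell \geq 0}\Bigl(\sum_{j\geq 1}a_{j}\Bigr)^{\ell }
 =\frac{a}{1-\sum_{j\geq 1}a_{j}}\,.
```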
Set now $X_{j}=(2M)^{-1}(Y_{j}^{2}-\mathbb{E}(Y_{j}^{2}))$. Then the
sequence $(X_{j})_{j\in \mathbb{Z}}$ satisfies (\ref{hypoQ}) with $\gamma
_{2}=\infty $. If we assume in addition that $a_{j}=O(b^{j})$ for some $b<1$
, then, according to Proposition 5.1 (and its proof) in Comte \textit{et al.}
(2008), the $\tau $-mixing coefficients of $(X_{j})_{j\in \mathbb{Z}}$
satisfy (\ref{hypoalpha}) with $\gamma _{1}=1/2$. Hence in this case, the
sequence $(X_{j})_{j\in \mathbb{Z}}$ satisfies both the conclusions of
Theorem \ref{BTinegacont} and of Corollary \ref{thmMDPsubgeo} with $\gamma
=1/2$.
\section{Proofs}
\setcounter{equation}{0}
\subsection{Some auxiliary results}
The aim of this section is essentially to give suitable bounds for the
Laplace transform of
\begin{equation} \label{Bp1}
S (K) = \sum_{i \in K} X_i \, ,
\end{equation}
where $K$ is a finite set of integers. Define the constants
\begin{equation}
c_{0}=(2(2^{1/\gamma }-1))^{-1}(2^{(1-\gamma )/\gamma }-1)\,,\ c_{1}=\min
(c^{1/\gamma _{1}}c_{0}/4,2^{-1/\gamma })\,, \label{defc0}
\end{equation}
\begin{equation}
c_{2}=2^{-(1+2\gamma _{1}/\gamma )}c_{1}^{\gamma
_{1}}\,\,,\,\,c_{3}=2^{-\gamma _{1}/\gamma }\,,\text{ and }\kappa =\min \big
(c_{2},c_{3}\big )\,. \label{defkappa}
\end{equation}
\begin{prop}
\label{propinter2} Let $(X_{j})_{j\geq 1}$ be a sequence of centered and
real valued random variables satisfying (\ref{hypoalpha}), (\ref{hypoQ}) and
(\ref{hypogamma}). Let $A$ and $\ell $ be two positive integers such that $
A2^{-\ell }\geq (1\vee 2c_{0}^{-1})$. Let $M=H^{-1}(\tau (c^{-1/\gamma
_{1}}A))$ and for any $j$, set $\overline{X}_{M}(j)=\varphi _{M}(X_{j})-
\mathbb{E}\varphi _{M}(X_{j})$. \noindent Then, there exists a subset $
K_{A}^{(\ell )}$ of $\{1,\dots ,A\}$ with $\mathrm{Card}(K_{A}^{(\ell
)})\geq A/2$, such that for any positive $t\leq \kappa \bigl(A^{\gamma
-1}\wedge (2^{\ell }/A)\bigr)^{\gamma _{1}/\gamma }$, where $\kappa $ is
defined by (\ref{defkappa}),
\begin{equation} \label{resultpropinter2}
\log \mathbb{E}\exp \Bigl(t\sum_{j\in K_{A}^{(\ell )}}\overline{X}_{M}(j)\Bigr)\leq
t^{2}v^{2}A+t^{2}\bigl(\ell (2A)^{1+\frac{\gamma _{1}}{\gamma }}+4A^{\gamma
}(2A)^{\frac{2\gamma _{1}}{\gamma }}\bigr)\exp \Bigl(-\frac{1}{2}\Bigl(\frac{
c_{1}A}{2^{\ell }}\Bigr)^{\gamma _{1}}\Bigr)\,,
\end{equation}
with
\begin{equation}
v^{2}=\sup_{T\geq 1}\sup_{K\subset {\mathbb{N}}^{\ast }}\frac{1}{\mathrm{Card
}K}\mathrm{Var}\sum_{i\in K}\varphi _{T}(X_{i}) \label{hypovarcont}
\end{equation}
(the supremum being taken over all nonempty finite sets $K$ of integers).
\end{prop}
\begin{rmk}
\label{compv2V} Notice that $v^2 \leq V$ (the proof is immediate).
\end{rmk}
\noindent \textbf{Proof of Proposition \ref{propinter2}.} The proof is
divided into several steps.
\textit{Step 1. The construction of $K_{A}^{(\ell )}$}. Let $c_{0}$ be
defined by (\ref{defc0}) and $n_{0}=A$. $K_{A}^{(\ell )}$ will be a finite
union of $2^{\ell }$ disjoint sets of consecutive integers with the same
cardinality, spaced according to a recursive ``Cantor''-like construction. We
first define an integer $d_{0}$ as follows:
\begin{equation*}
d_{0}=\left\{
\begin{array}{ll}
\sup \{m\in {2\mathbb{N}}\,,\,m\leq c_{0}n_{0}\} & \text{ if $n_{0}$ is even}
\\
\sup \{m\in {2\mathbb{N}}+1\,,\,m\leq c_{0}n_{0}\} & \text{ if $n_{0}$ is odd
}.
\end{array}
\right.
\end{equation*}
It follows that $n_{0}-d_{0}$ is even. Let $n_{1}=(n_{0}-d_{0})/2$, and
define two sets of integers of cardinality $n_{1}$ separated by a gap of $d_{0}$
integers as follows
\begin{eqnarray*}
I_{1,1} &=&\{1,\dots ,n_{1}\} \\
I_{1,2} &=&\{n_{1}+d_{0}+1,\dots ,n_{0}\}\,.
\end{eqnarray*}
We define now the integer $d_{1}$ by
\begin{equation*}
d_{1}=\left\{
\begin{array}{ll}
\sup \{m\in {2\mathbb{N}}\,,\,m\leq c_{0}2^{-(\ell \wedge \frac{1}{\gamma }
)}n_{0}\} & \text{ if $n_{1}$ is even} \\
\sup \{m\in {2\mathbb{N}}+1\,,\,m\leq c_{0}2^{-(\ell \wedge \frac{1}{\gamma }
)}n_{0}\} & \text{ if $n_{1}$ is odd}.
\end{array}
\right.
\end{equation*}
Noticing that $n_{1}-d_{1}$ is even, we set $n_{2}=(n_{1}-d_{1})/2$, and
define four sets of integers of cardinality $n_{2}$ by
\begin{eqnarray*}
I_{2,1} &=&\{1,\dots ,n_{2}\} \\
I_{2,2} &=&\{n_{2}+d_{1}+1,\dots ,n_{1}\} \\
I_{2,i+2} &=&(n_{1}+d_{0})+I_{2,i}\text{ for $i=1,2$}\,.
\end{eqnarray*}
Iterating this procedure $j$ times (for $1\leq j\leq \ell )$, we then get a
finite union of $2^{j}$ sets, $(I_{j,k})_{1\leq k\leq 2^{j}}$, of
consecutive integers, with the same cardinality, constructed by induction from $
(I_{j-1,k})_{1\leq k\leq 2^{j-1}}$ as follows: First, for $1\leq k\leq
2^{j-1}$, we have
\begin{equation*}
I_{j-1,k}=\{a_{j-1,k},\dots ,b_{j-1,k}\}\,,
\end{equation*}
where $1+b_{j-1,k}-a_{j-1,k}=n_{j-1}$ and
\begin{equation*}
1=a_{j-1,1}<b_{j-1,1}<a_{j-1,2}<b_{j-1,2}<\cdots
<a_{j-1,2^{j-1}}<b_{j-1,2^{j-1}}=n_{0}.
\end{equation*}
Let $n_{j}=2^{-1}(n_{j-1}-d_{j-1})$ and
\begin{equation*}
d_{j}=\left\{
\begin{array}{ll}
\sup \{m\in {2\mathbb{N}}\,,\,m\leq c_{0}2^{-(\ell \wedge \frac{j}{\gamma }
)}n_{0}\} & \text{ if $n_{j}$ is even} \\
\sup \{m\in {2\mathbb{N}}+1\,,\,m\leq c_{0}2^{-(\ell \wedge \frac{j}{\gamma }
)}n_{0}\} & \text{ if $n_{j}$ is odd}.
\end{array}
\right.
\end{equation*}
Then $I_{j,k}=\{a_{j,k},a_{j,k}+1,\dots ,b_{j,k}\}$, where the double
indexed sequences $(a_{j,k})$ and $(b_{j,k})$ are defined as follows:
\begin{equation*}
a_{j,2k-1}=a_{j-1,k}\,,\,b_{j,2k}=b_{j-1,k}\,,\,b_{j,2k}-a_{j,2k}+1=n_{j}
\text{ and }b_{j,2k-1}-a_{j,2k-1}+1=n_{j}\,.
\end{equation*}
With this selection, we then get that there are exactly $d_{j-1}$ integers
between $I_{j,2k-1}$ and $I_{j,2k}$ for any $1\leq k\leq 2^{j-1}$.
Finally we get
\begin{equation*}
K_{A}^{(\ell )}=\bigcup_{k=1}^{2^{\ell }}I_{\ell ,k}\,.
\end{equation*}
Since $\mathrm{Card}(I_{\ell ,k})=n_{\ell }$, for any $1\leq k\leq 2^{\ell }$
, we get that $\mathrm{Card}(K_{A}^{(\ell )})=2^{\ell }n_{\ell }$. Now
notice that
\begin{equation*}
A-\mathrm{Card}(K_{A}^{(\ell )})=\sum_{j=0}^{\ell -1}2^{j}d_{j}\leq Ac_{0}
\Big (\sum_{j\geq 0}2^{j(1-1/\gamma )}+\sum_{j\geq 1}2^{-j}\Big )\leq A/2\,.
\end{equation*}
Consequently
\begin{equation*}
A\geq \mathrm{Card}(K_{A}^{(\ell )})\geq A/2\,\text{ and }\,n_{\ell }\leq
A2^{-\ell }\,.
\end{equation*}
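For intuition, Step 1 can be mirrored by a short script; the function `cantor_blocks` below is our illustrative sketch (the name and interface are ours, and $c_{0}$ is taken from (\ref{defc0})), valid when $A2^{-\ell }\geq 1\vee 2c_{0}^{-1}$ so that every gap $d_{j}$ is well defined:

```python
import math

def cantor_blocks(A, ell, gamma, c0):
    """Recursive 'Cantor'-like construction of the blocks I_{ell,k}.

    Returns the 2**ell blocks as inclusive integer ranges (a, b) inside
    {1, ..., A}; K_A^(ell) is their disjoint union.  Assumes A * 2**(-ell)
    is large enough that all gaps d_j are positive.
    """
    blocks = [(1, A)]
    n = A                                  # cardinality n_j of each block
    for j in range(ell):
        # largest d_j with the parity of n_j, d_j <= c0 * 2^{-(ell ∧ j/gamma)} * A
        target = c0 * 2.0 ** (-min(ell, j / gamma)) * A
        d = int(math.floor(target))
        if d % 2 != n % 2:
            d -= 1
        n = (n - d) // 2                   # n_{j+1} = (n_j - d_j) / 2
        new_blocks = []
        for a, b in blocks:                # split each block, leaving a gap of d integers
            new_blocks.append((a, a + n - 1))
            new_blocks.append((b - n + 1, b))
        blocks = new_blocks
    return blocks
```

On examples one checks that $\mathrm{Card}(K_{A}^{(\ell )})=2^{\ell }n_{\ell }\geq A/2$ and $n_{\ell }\leq A2^{-\ell }$, in line with the bounds above.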
The following notation will be useful for the rest of the proof: For any $k$
in $\{0,1,\dots, \ell \}$ and any $j$ in $\{1,\dots, 2^{k}\}$, we set
\begin{equation} \label{defKAlj}
K^{(\ell)}_{A, k, j} = \bigcup_{i=(j-1)2^{\ell - k} +1}^{j2^{\ell -
k}}I_{\ell , i} \, .
\end{equation}
Notice that $K_{A}^{( \ell)}= K^{(\ell)}_{A, 0, 1}$ and that for any $k$ in $
\{0,1,\dots, \ell\}$
\begin{equation} \label{defKAl}
K_{A}^{( \ell)} = \bigcup_{j=1}^{2^{k}}K^{(\ell)}_{A,k, j} \, ,
\end{equation}
where the union is disjoint.
In what follows we shall also use the following notation: for any integer $j$
in $[0,\ell ]$, we set
\begin{equation}
M_{j}=H^{-1}\bigl(\tau (c^{-1/\gamma _{1}}A2^{-(\ell \wedge \frac{j}{\gamma }
)})\bigr)\,. \label{defMk}
\end{equation}
Since $H^{-1}(y)=\big (\log (e/y)\big)^{1/\gamma _{2}}$ for any $y\leq e$,
we get that for any $x\geq 1$,
\begin{equation}
H^{-1}(\tau (c^{-1/\gamma _{1}}x))\leq \big (1+x^{\gamma _{1}}\big)
^{1/\gamma _{2}}\leq (2x)^{\gamma _{1}/\gamma _{2}}\,. \label{majQalpha}
\end{equation}
Consequently since for any $j$ in $[0,\ell ]$, $A2^{-(\ell \wedge \frac{j}{
\gamma })}\geq 1$, the following bound is valid:
\begin{equation}
M_{j}\leq \big (2A2^{-(\ell \wedge \frac{j}{\gamma })}\big )^{\gamma
_{1}/\gamma _{2}}\,. \label{majMl}
\end{equation}
For any set of integers $K$ and any positive $M$ we also define
\begin{equation}
\overline{S}_{M}(K)=\sum_{i\in K}\overline{X}_{M}(i)\,. \label{defSMK}
\end{equation}
\textit{Step 2. Proof of Inequality (\ref{resultpropinter2}) with $
K_A^{(\ell)}$ defined in step 1}.
Consider the decomposition (\ref{defKAl}), and notice that for any $i=1,2$, $
\mathrm{Card}(K_{A,1,i}^{(\ell )})\leq A/2$ and
\begin{equation*}
\tau \bigl(\sigma (X_{i}\,:\,i\in K_{A,1,1}^{(\ell )}),\bar{S}
_{M_{0}}(K_{A,1,2}^{(\ell )})\bigr)\leq A\tau (d_{0})/2\,.
\end{equation*}
Since $|\overline{X}_{M_{0}}(j)|\leq 2M_{0}$, we get that $|\bar{S}
_{M_{0}}(K_{A,1,i}^{(\ell )})|\leq AM_{0}$. Consequently, by using Lemma 2
from the Appendix, we derive that for any positive $t$,
\begin{equation*}
|\mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A}^{(\ell )})\big )-\prod_{i=1}^{2}
\mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A,1,i}^{(\ell )})\big )|\leq \frac{
At}{2}\tau (d_{0})\exp (2tAM_{0})\,.
\end{equation*}
Since the random variables $\bar{S}_{M_{0}}(K_{A}^{(\ell )})$ and $\bar{S}
_{M_{0}}(K_{A,1,i}^{(\ell )})$ are centered, their Laplace transforms are
at least one. Hence applying the elementary inequality
\begin{equation}
|\log x-\log y|\leq |x-y|\text{ for }x\geq 1\text{ and }y\geq 1,
\label{inelog}
\end{equation}
we get that, for any positive $t$,
\begin{equation}
|\log \mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A}^{(\ell )})\big )
-\sum_{i=1}^{2}\log \mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A,1,i}^{(\ell
)})\big )|\leq \frac{At}{2}\tau (d_{0})\exp (2tAM_{0})\,. \notag \label{1}
\end{equation}
The next step is to compare $\mathbb{E}\exp \big (t\bar{S}
_{M_{0}}(K_{A,1,i}^{(\ell )})\big )$ with $\mathbb{E}\exp \big (t\bar{S}
_{M_{1}}(K_{A,1,i}^{(\ell )})\big )$ for $i=1,2$. The random variables $\bar{
S}_{M_{0}}(K_{A,1,i}^{(\ell )})$ and $\bar{S}_{M_{1}}(K_{A,1,i}^{(\ell )})$
have values in $[-AM_{0},AM_{0}]$, hence applying the inequality
\begin{equation}
|e^{tx}-e^{ty}|\leq |t||x-y|(e^{|tx|}\vee e^{|ty|})\,, \label{AFexp}
\end{equation}
we obtain that, for any positive $t$,
\begin{equation*}
\big |\mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A,1,i}^{(\ell )})\big )-
\mathbb{E}\exp \big (t\bar{S}_{M_{1}}(K_{A,1,i}^{(\ell )})\big )\big |\leq
te^{tAM_{0}}\mathbb{E}\big |\bar{S}_{M_{0}}(K_{A,1,i}^{(\ell )})-\bar{S}
_{M_{1}}(K_{A,1,i}^{(\ell )})\big |\,.
\end{equation*}
Notice that
\begin{equation*}
\mathbb{E}\big |\bar{S}_{M_{0}}(K_{A,1,i}^{(\ell )})-\bar{S}
_{M_{1}}(K_{A,1,i}^{(\ell )})\big |\leq 2\sum_{j\in K_{A,1,i}^{(\ell )}}
\mathbb{E}|(\varphi _{M_{0}}-\varphi _{M_{1}})(X_{j})|\,.
\end{equation*}
Since for all $x\in {\mathbb{R}}$, $|(\varphi _{M_{0}}-\varphi
_{M_{1}})(x)|\leq M_{0}\,{1\hspace{-1mm}\mathrm{I}}_{|x|>M_{1}}$, we get
that
\begin{equation*}
\mathbb{E}|(\varphi _{M_{0}}-\varphi _{M_{1}})(X_{j})|\leq M_{0}\mathbb{P}
(|X_{j}|>M_{1})\leq M_{0}\tau (c^{-\frac{1}{\gamma _{1}}}A2^{-(\ell \wedge
\frac{1}{\gamma })})\,.
\end{equation*}
Consequently, since $\mathrm{Card}(K_{A,1,i}^{(\ell )})\leq A/2$, for any $
i=1,2$ and any positive $t$,
\begin{equation*}
\big |\mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A,1,i}^{(\ell )})\big )-
\mathbb{E}\exp \big (t\bar{S}_{M_{1}}(K_{A,1,i}^{(\ell )})\big )\big |\leq
tAM_{0}e^{tAM_{0}}\tau (c^{-\frac{1}{\gamma _{1}}}A2^{-(\ell \wedge \frac{1}{
\gamma })})\,.
\end{equation*}
Using again the fact that the variables are centered and taking into account
the inequality (\ref{inelog}), we derive that for any $i=1,2$ and any
positive $t$,
\begin{equation}
\big |\log \mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A,1,i}^{(\ell )})\big )
-\log \mathbb{E}\exp \big (t\bar{S}_{M_{1}}(K_{A,1,i}^{(\ell )})\big )\big |
\leq e^{2tAM_{0}}\tau (c^{-\frac{1}{\gamma _{1}}}A2^{-(\ell \wedge \frac{1}{
\gamma })})\,. \label{2}
\end{equation}
Now for any $k=1,\dots ,\ell $ and any $i=1,\dots ,2^{k}$, $\mathrm{Card}
(K_{A,k,i}^{(\ell )})\leq 2^{-k}A$. By iterating the above procedure, we
then get for any $k=1,\dots ,\ell $, and any positive $t$,
\begin{eqnarray*}
|\sum_{i=1}^{2^{k-1}}\log \mathbb{E}\exp \big (t\bar{S}
_{M_{k-1}}(K_{A,k-1,i}^{(\ell )})\big ) &-&\sum_{i=1}^{2^{k}}\log \mathbb{E}
\exp \big (t\bar{S}_{M_{k-1}}(K_{A,k,i}^{(\ell )})\big )| \\
&\leq &2^{k-1}\frac{tA}{2^{k}}\tau (d_{k-1})\exp \big (\frac{2tAM_{k-1}}{
2^{k-1}}\big )\,,
\end{eqnarray*}
and for any $i=1,\dots ,2^{k}$,
\begin{equation*}
|\log \mathbb{E}\exp \big (t\bar{S}_{M_{k-1}}(K_{A,k,i}^{(\ell )})\big )
-\log \mathbb{E}\exp \big (t\bar{S}_{M_{k}}(K_{A,k,i}^{(\ell )})\big )|\leq
\tau (c^{-\frac{1}{\gamma _{1}}}A2^{-(\ell \wedge \frac{k}{\gamma })})\exp
\big (\frac{2tAM_{k-1}}{2^{k-1}}\big )\,.
\end{equation*}
Hence finally, we get that for any $j=1,\dots ,\ell $, and any positive $t$,
\begin{eqnarray*}
&&|\sum_{i=1}^{2^{j-1}}\log \mathbb{E}\exp \big (t\bar{S}
_{M_{j-1}}(K_{A,j-1,i}^{(\ell )})\big )-\sum_{i=1}^{2^{j}}\log \mathbb{E}
\exp \big (t\bar{S}_{M_{j}}(K_{A,j,i}^{(\ell )})\big )| \\
&&\quad \quad \leq \frac{tA}{2}\tau (d_{j-1})\exp
(2tAM_{j-1}2^{1-j})+2^{j}\tau (c^{-\frac{1}{\gamma _{1}}}A2^{-(\ell \wedge
\frac{j}{\gamma })})\exp (2tAM_{j-1}2^{1-j})\,.
\end{eqnarray*}
Set
\begin{equation*}
k_{\ell }=\sup \{j\in \mathbb{N}\,,\,j/\gamma <\ell \}\,,
\end{equation*}
and notice that $0\leq k_{\ell }\leq \ell -1$. Since $K_{A}^{(\ell
)}=K_{A,0,1}^{(\ell )}$, we then derive that for any positive $t$,
\begin{eqnarray}
&&|\log \mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A}^{(\ell )})\big )
-\sum_{i=1}^{2^{k_{\ell }+1}}\log \mathbb{E}\exp \big (t\bar{S}_{M_{k_{\ell
}+1}}(K_{A,k_{\ell }+1,i}^{(\ell )})\big )| \notag \label{1step} \\
&&\quad \quad \leq \frac{tA}{2}\sum_{j=0}^{k_{\ell }}\tau (d_{j})\exp \big (
\frac{2tAM_{j}}{2^{j}}\big )+2\sum_{j=0}^{k_{\ell }-1}2^{j}\tau
(2^{-1/\gamma }c^{-1/\gamma _{1}}A2^{-j/\gamma })\exp \big (\frac{2tAM_{j}}{
2^{j}}\big ) \notag \\
&&\quad \quad \quad +2^{k_{\ell }+1}\tau (c^{-1/\gamma _{1}}A2^{-\ell })\exp
(2tAM_{k_{\ell }}2^{-k_{\ell }})\,.
\end{eqnarray}
Notice now that for any $i=1,\dots ,2^{k_{\ell }+1}$, $\bar{S}_{M_{k_{\ell
}+1}}(K_{A,k_{\ell }+1,i}^{(\ell )})$ is a sum of $2^{\ell -k_{\ell }-1}$
blocks, each of size $n_{\ell }$ and bounded by $2M_{k_{\ell }+1}n_{\ell }$.
In addition the blocks are equidistant and there is a gap of size $
d_{k_{\ell }+1}$ between two blocks. Consequently, by using Lemma 2 along
with Inequality (\ref{inelog}) and the fact that the variables are centered,
we get that
\begin{eqnarray}
&&|\log \mathbb{E}\exp \big (t\bar{S}_{M_{k_{\ell }+1}}(K_{A,k_{\ell
}+1,i}^{(\ell )})\big )-\sum_{j=(i-1)2^{\ell -k_{\ell }-1}+1}^{i2^{\ell
-k_{\ell }-1}}\log \mathbb{E}\exp \big (t\bar{S}_{M_{k_{\ell }+1}}(I_{\ell
,j})\big )| \notag \label{2step} \\
&&\quad \quad \leq tn_{\ell }2^{\ell }2^{-k_{\ell }-1}\tau (d_{k_{\ell
}+1})\exp (2tM_{k_{\ell }+1}n_{\ell }2^{\ell -k_{\ell }-1})\,.
\end{eqnarray}
Starting from (\ref{1step}) and using (\ref{2step}) together with the fact
that $n_{\ell }\leq A2^{-\ell }$, we obtain:
\begin{eqnarray}
&&|\log \mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A}^{(\ell )})\big )
-\sum_{j=1}^{2^{\ell }}\log \mathbb{E}\exp \big (t\bar{S}_{M_{k_{\ell
}+1}}(I_{\ell ,j})\big )| \notag \label{3step} \\
&&\quad \quad \leq \frac{tA}{2}\sum_{j=0}^{k_{\ell }}\tau (d_{j})\exp \big (
\frac{2tAM_{j}}{2^{j}}\big )+2\sum_{j=0}^{k_{\ell }-1}2^{j}\tau
(2^{-1/\gamma }c^{-1/\gamma _{1}}A2^{-j/\gamma })\exp \big (\frac{2tAM_{j}}{
2^{j}}\big ) \notag \\
&&\quad \quad +2^{k_{\ell }+1}\tau (c^{-1/\gamma _{1}}A2^{-\ell })\exp \big (
\frac{2tAM_{k_{\ell }}}{2^{k_{\ell }}}\big )+tA\tau (d_{k_{\ell }+1})\exp
(tM_{k_{\ell }+1}A2^{-k_{\ell }})\,.
\end{eqnarray}
Notice that for any $j=0,\dots ,\ell -1$, we have $d_{j}+1\geq \lbrack
c_{0}A2^{-(\ell \wedge \frac{j}{\gamma })}]$ and $c_{0}A2^{-(\ell \wedge
\frac{j}{\gamma })}\geq 2$, so that $d_{j}\geq 1$. Whence
\begin{equation*}
d_{j}\geq (d_{j}+1)/2\geq c_{0}A2^{-(\ell \wedge \frac{j}{\gamma })-2}\,.
\end{equation*}
Consequently setting $c_{1}=\min (\frac{1}{4}c^{1/\gamma
_{1}}c_{0},2^{-1/\gamma })$ and using (\ref{hypoalpha}), we derive that for
any positive $t$,
\begin{eqnarray*}
&&|\log \mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A}^{(\ell )})\big )
-\sum_{j=1}^{2^{\ell }}\log \mathbb{E}\exp \big (t\bar{S}_{M_{k_{\ell
}+1}}(I_{\ell ,j})\big )| \\
&\leq &\frac{tA}{2}\sum_{j=0}^{k_{\ell }}\exp \Big(-\big (c_{1}A2^{-j/\gamma
}\big )^{\gamma _{1}}+\frac{2tAM_{j}}{2^{j}}\Big )+2\sum_{j=0}^{k_{\ell
}-1}2^{j}\exp \Big(-\big (c_{1}A2^{-j/\gamma }\big )^{\gamma _{1}}+\frac{
2tAM_{j}}{2^{j}}\Big ) \\
&&\quad \quad +2^{k_{\ell }+1}\exp \Big(-\big (A2^{-\ell }\big )^{\gamma
_{1}}+\frac{2tAM_{k_{\ell }}}{2^{k_{\ell }}}\Big )+tA\exp \Big(-\big (
c_{1}A2^{-\ell }\big )^{\gamma _{1}}+tM_{k_{\ell }+1}A2^{-k_{\ell }}\Big )\,.
\end{eqnarray*}
By (\ref{majMl}), we get that for any $0\leq j\leq k_{\ell }$,
\begin{equation*}
2AM_{j}2^{-j}\leq 2^{\gamma _{1}/\gamma }(2^{-j}A)^{\gamma _{1}/\gamma }\,.
\end{equation*}
In addition, since $k_{\ell }+1\geq \gamma \ell $ and $\gamma <1$, we get
that
\begin{equation*}
M_{k_{\ell }+1}\leq (2A2^{-\ell })^{\gamma _{1}/\gamma _{2}}\leq
(2A2^{-\gamma \ell })^{\gamma _{1}/\gamma _{2}}\,.
\end{equation*}
Whence,
\begin{equation*}
M_{k_{\ell }+1}A2^{-k_{\ell }}=2M_{k_{\ell }+1}A2^{-(k_{\ell }+1)}\leq
2^{\gamma _{1}/\gamma }A^{\gamma _{1}/\gamma }2^{-\gamma _{1}\ell }\,.
\end{equation*}
In addition,
\begin{equation*}
2AM_{k_{\ell }}2^{-k_{\ell }}\leq 2^{2\gamma _{1}/\gamma }(A2^{-k_{\ell
}-1})^{\gamma _{1}/\gamma }\leq 2^{2\gamma _{1}/\gamma }A^{\gamma
_{1}/\gamma }2^{-\gamma _{1}\ell }\,.
\end{equation*}
Hence, if $t\leq c_{2}A^{\gamma _{1}(\gamma -1)/\gamma }$ where $
c_{2}=2^{-(1+2\gamma _{1}/\gamma )}c_{1}^{\gamma _{1}}$, we derive that
\begin{eqnarray*}
&&|\log \mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A}^{(\ell )})\big )
-\sum_{j=1}^{2^{\ell }}\log \mathbb{E}\exp \big (t\bar{S}_{M_{k_{\ell
}+1}}(I_{\ell ,j})\big )| \\
&\leq &\frac{tA}{2}\sum_{j=0}^{k_{\ell }}\exp \Big(-\frac{1}{2}\big (
c_{1}A2^{-{j}/\gamma }\big )^{\gamma _{1}}\Big )+2\sum_{j=0}^{k_{\ell
}-1}2^{j}\exp \Big(-\frac{1}{2}\big (c_{1}A2^{-{j}/\gamma }\big )^{\gamma
_{1}}\Big ) \\
&&\quad \quad +(2^{k_{\ell }+1}+tA)\exp \bigl(-(c_{1}A2^{-\ell })^{\gamma
_{1}}/2\bigr)\,.
\end{eqnarray*}
Since $2^{k_{\ell }}\leq 2^{\ell \gamma }\leq A^{\gamma }$, it follows that
for any $t\leq c_{2}A^{\gamma _{1}(\gamma -1)/\gamma }$,
\begin{equation}
|\log \mathbb{E}\exp \big (t\bar{S}_{M_{0}}(K_{A}^{(\ell )})\big )
-\sum_{j=1}^{2^{\ell }}\log \mathbb{E}\exp \big (t\bar{S}_{M_{k_{\ell
}+1}}(I_{\ell ,j})\big )|\leq (2\ell tA+4A^{\gamma })\exp \Bigl(-\frac{1}{2}
\Bigl(\frac{c_{1}A}{2^{\ell }}\Bigr)^{\gamma _{1}}\Bigr)\,. \label{4step}
\end{equation}
We now bound the log-Laplace transform of each $\bar S_{M_{k_{\ell}+1}}
(I_{\ell, j})$ using the following fact: by the l'Hospital rule for
monotonicity (see Pinelis (2002)), the function $x\mapsto g (x)=x^{-2}(e^x -
x - 1)$ is increasing on ${\mathbb{R}}$. Hence, for any centered random
variable $U$ such that $\|U\|_{\infty} \leq M$, and any positive $t$,
\begin{equation} \label{psi}
\mathbb{E} \exp ( t U) \leq 1 + t^2 g(tM) \mathbb{E} (U^2) \, .
\end{equation}
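For completeness, (\ref{psi}) can be checked in one line from the exact second-order Taylor identity for the exponential; a minimal sketch:

```latex
% Exact identity: e^{x} = 1 + x + x^{2}g(x) for every real x.
% For t>0 and |U| <= M a.s., tU <= tM, and g is increasing, so
% g(tU) <= g(tM) almost surely. Taking expectations, with E(U)=0:
\mathbb{E}\exp (tU)=1+t\,\mathbb{E}(U)+t^{2}\,\mathbb{E}\bigl(U^{2}g(tU)\bigr)
\leq 1+t^{2}g(tM)\,\mathbb{E}(U^{2})\,.
```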
Notice that
\begin{equation*}
\| \bar S_{M_{k_{\ell}+1}} (I_{\ell, j}) \|_{\infty} \leq 2 M_{k_{\ell}+1}
n_{\ell} \leq 2^{\gamma_1/\gamma } (A2^{-\ell} )^{\gamma_1/\gamma} .
\end{equation*}
Since $t \leq 2^{-\gamma_1/\gamma } (2^{\ell}/ A )^{\gamma_1/\gamma} $, by
using (\ref{hypovarcont}), we then get that
\begin{equation*}
\log \mathbb{E} \exp \big (t \bar S_{M_{k_{\ell}+1}} (I_{\ell, j}) \big )
\leq t^2 v ^2 n_{\ell} \, .
\end{equation*}
Consequently, for any $t \leq \kappa \big ( A^{ \gamma_1(\gamma -1)/\gamma}
\wedge (2^{ \ell} / A)^{\gamma_1/\gamma}\big )$, the following inequality holds:
\begin{equation} \label{5step}
\log \mathbb{E} \exp \big (t \bar S_{M_{0}} (K_{A}^{( \ell)}) \big ) \leq
t^2 v^2 A + (2 \ell t A + 4A^{\gamma} ) \exp \bigl( -\big ( c_1
A2^{-\ell}\big )^{\gamma_1} / 2 \bigr) \, .
\end{equation}
Notice now that $\| \bar S_{M_0} (K_A^{(\ell)}) \|_{\infty} \leq 2 M_0 A
\leq 2^{\gamma_1/\gamma }A^{\gamma_1/\gamma} $. Hence if $t \leq
2^{-\gamma_1/\gamma }A^{-\gamma_1/\gamma}$, by using (\ref{psi}) together
with (\ref{hypovarcont}), we derive that
\begin{equation} \label{6step}
\log \mathbb{E} \exp \big (t \bar S_{ M_0 } (K_A^{( \ell)}) \big ) \leq t^2
v^2 A \, ,
\end{equation}
which proves (\ref{resultpropinter2}) in this case.
Now if $2^{-\gamma_1/\gamma }A^{-\gamma_1/\gamma} \leq t \leq \kappa \bigl(
A^{ \gamma_1(\gamma -1)/\gamma} \wedge ( 2^{ \ell}/A )^{\gamma_1/\gamma}
\bigr)$, by using (\ref{5step}), we derive that (\ref{resultpropinter2})
holds, which completes the proof of Proposition 1. $\diamond$
We now bound the Laplace transform of the sum of truncated random
variables on $[1,A]$. Let
\begin{equation}
\mu =\bigl(2(2\vee 4c_{0}^{-1})/(1-\gamma )\bigr)^{\frac{2}{1-\gamma }}\text{
and }c_{4}=2^{\gamma _{1}/\gamma }3^{\gamma _{1}/\gamma _{2}}c_{0}^{-\gamma
_{1}/\gamma _{2}}\,, \label{borneA}
\end{equation}
where $c_{0}$ is defined in (\ref{defc0}). Define also
\begin{equation}
\nu =\bigl(c_{4}\bigl(3-2^{(\gamma -1)\frac{\gamma _{1}}{\gamma }}\bigr)
+\kappa ^{-1}\bigr)^{-1}\bigl(1-2^{(\gamma -1)\frac{\gamma _{1}}{\gamma }}
\bigr)\,, \label{defK}
\end{equation}
where $\kappa $ is defined by (\ref{defkappa}).
\begin{prop}
\label{propinter3} Let $(X_{j})_{j\geq 1}$ be a sequence of centered real
valued random variables satisfying (\ref{hypoalpha}), (\ref{hypoQ}) and (\ref
{hypogamma}). Let $A$ be an integer. Let $M=H^{-1}(\tau (c^{-1/\gamma
_{1}}A))$ and for any $j$, set $\overline{X}_{M}(j)=\varphi _{M}(X_{j})-
\mathbb{E}\varphi _{M}(X_{j})$. \noindent Then, if $A\geq \mu $ with $\mu $
defined by (\ref{borneA}), for any positive $t<\nu A^{\gamma _{1}(\gamma
-1)/\gamma }$, where $\nu $ is defined by (\ref{defK}), we get that
\begin{equation}
\log \mathbb{E}\Bigl(\exp ({\textstyle t\sum_{k=1}^{A}\overline{X}_{M}(k)})
\Bigr)\leq \frac{AV(A)t^{2}}{1-t\nu ^{-1}A^{\gamma _{1}(1-\gamma )/\gamma }}
\,, \label{resultpropinter3}
\end{equation}
where $V(A)=50v^{2}+\nu _{1}\exp (-\nu _{2}A^{\gamma _{1}(1-\gamma )}(\log
A)^{-\gamma })$ and $\nu _{1}$, $\nu _{2}$ are positive constants depending
only on $c$, $\gamma $ and $\gamma _{1}$, and $v^{2}$ is defined by (\ref
{hypovarcont}).
\end{prop}
\noindent \textbf{Proof of Proposition \ref{propinter3}.} Let $A_{0}=A$ and $
X^{(0)}(k)=X_{k}$ for any $k=1,\dots ,A_{0}$. Let $\ell $ be a fixed
positive integer, to be chosen later, which satisfies
\begin{equation}
A_{0}2^{-\ell }\geq (2\vee 4c_{0}^{-1})\,. \label{rest1l}
\end{equation}
Let $K_{A_{0}}^{(\ell )}$ be the discrete Cantor type set as defined from $
\{1,\dots ,A\}$ in Step 1 of the proof of Proposition \ref{propinter2}. Let $
A_{1}=A_{0}-\mathrm{Card}K_{A_{0}}^{(\ell )}$ and define for any $k=1,\dots
,A_{1}$,
\begin{equation*}
X^{(1)}(k)=X_{i_{k}}\text{ where }\{i_{1},\dots ,i_{A_{1}}\}=\{1,\dots
,A\}\setminus K_{A}^{(\ell )}\,.
\end{equation*}
Now for $i\geq 1$, let $K_{A_{i}}^{(\ell _{i})}$ be defined from $\{1,\dots
,A_{i}\}$ exactly as $K_{A}^{(\ell )}$ is defined from $\{1,\dots ,A\}$.
Here we impose the following selection of $\ell _{i}$:
\begin{equation}
\ell _{i}=\inf \{j\in \mathbb{N}\,,\,A_{i}2^{-j}\leq A_{0}2^{-\ell }\,\}\,.
\label{restli}
\end{equation}
Set $A_{i+1}=A_{i}-\mathrm{Card}K_{A_{i}}^{(\ell _{i})}$ and $\{j_{1},\dots
,j_{A_{i+1}}\}=\{1,\dots ,A_{i}\}\setminus K_{A_{i}}^{(\ell _{i})}$.
Define now
\begin{equation*}
X^{(i+1)}(k)=X^{(i)}(j_{k})\text{ for }k=1,\dots ,A_{i+1}\,.
\end{equation*}
Let
\begin{equation}
m(A)=\inf \{m\in \mathbb{N}\,,\,A_{m}\leq A2^{-\ell }\}\,. \label{defm(A)}
\end{equation}
Note that $m(A)\geq 1$, since $A_{0}>A2^{-\ell }$ ($\ell \geq 1$). In
addition, $m(A)\leq \ell $ since for all $i\geq 1$, $A_{i}\leq A2^{-i}$.
Obviously, for any $i=0,\dots ,m(A)-1$, the sequences $(X^{(i+1)}(k))$
satisfy (\ref{hypoalpha}), (\ref{hypoQ}) and (\ref{hypovarcont}) with the
same constants. Now we set $T_{0}=M=H^{-1}(\tau (c^{-1/\gamma _{1}}A_{0}))$,
and for any integer $j=0,\dots ,m(A)$,
\begin{equation*}
T_{j}=H^{-1}(\tau (c^{-1/\gamma _{1}}A_{j}))\,.
\end{equation*}
With this definition, we then define for all integers $i$ and $j$,
\begin{equation*}
X_{T_{j}}^{(i)}(k)=\varphi _{T_{j}}\big (X^{(i)}(k)\big )-\mathbb{E}\varphi
_{T_{j}}\big (X^{(i)}(k)\big )\,.
\end{equation*}
Notice that by (\ref{hypoalpha}) and (\ref{hypoQ}), we have that for any
integer $j\geq 0$,
\begin{equation}
T_{j}\leq (2A_{j})^{\gamma _{1}/\gamma _{2}}\,. \label{boundTl}
\end{equation}
For any $j=1,\dots ,m(A)$ and $i<j$, define
\begin{equation*}
Y_{i}=\sum_{k\in K_{A_{i}}^{(\ell _{i})}}X_{T_{i}}^{(i)}(k)\,,\
Z_{i}=\sum_{k=1}^{A_{i}}(X_{T_{i-1}}^{(i)}(k)-X_{T_{i}}^{(i)}(k))\text{ for $
i>0$, and }R_{j}=\sum_{k=1}^{A_{j}}X_{T_{j-1}}^{(j)}(k)\,.
\end{equation*}
The following decomposition holds:
\begin{equation}
\sum_{k=1}^{A_{0}}X_{T_{0}}^{(0)}(k)=\sum_{i=0}^{m(A)-1}Y_{i}+
\sum_{i=1}^{m(A)-1}Z_{i}+R_{m(A)}\,. \label{P3prop3}
\end{equation}
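To see how (\ref{P3prop3}) arises, write $S_{i}=\sum_{k=1}^{A_{i}}X_{T_{i}}^{(i)}(k)$; one step of the telescoping (splitting off the Cantor set, then lowering the truncation level from $T_{i}$ to $T_{i+1}$) reads:

```latex
S_{i}=Y_{i}+\sum_{k=1}^{A_{i+1}}X_{T_{i}}^{(i+1)}(k)
     =Y_{i}+Z_{i+1}+S_{i+1}\,.
% Iterating from i = 0 to i = m(A)-1 and grouping the last two terms as
% Z_{m(A)} + S_{m(A)} = R_{m(A)} gives the decomposition (\ref{P3prop3}).
```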
To control the terms in the decomposition (\ref{P3prop3}), we need the
following elementary lemma.
\begin{lma}
\label{compAi} For any $j = 0, \dots, m (A) -1$, $A_{j+1} \geq \frac 13 c_0
A_j $.
\end{lma}
\noindent \textbf{Proof of Lemma \ref{compAi}. } Notice that for any $i$ in $
[0, m(A)[$, we have $A_{i+1} \geq [c_0 A_i] - 1$. Since $c_0 A_i \geq 2$, we
derive that $[c_0 A_i] - 1 \geq ( [c_0 A_i] + 1)/3 \geq c_0 A_i/3$, which
completes the proof. $\diamond$
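The middle inequality in the proof above is the elementary fact that, for an integer $m\geq 2$,

```latex
m-1\geq \frac{m+1}{3}
\quad \Longleftrightarrow \quad
3m-3\geq m+1
\quad \Longleftrightarrow \quad
m\geq 2\,,
% applied with m = [c_0 A_i], which satisfies m >= 2 since c_0 A_i >= 2.
```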
Using (\ref{boundTl}), a useful consequence of Lemma \ref{compAi}
is that for any $j=1, \dots, m(A)$,
\begin{equation} \label{secondboundTl}
2A_{j} T_{j-1} \leq c_4 A_j^{\gamma_1 /\gamma}\,,
\end{equation}
where $c_4$ is defined by (\ref{borneA}).
\noindent \textit{A bound for the Laplace transform of $R_{m (A)}$.}
The random variable $|R_{m(A)}|$ is a.s. bounded by $2A_{m(A)}T_{{m(A)}-1}$.
By using (\ref{secondboundTl}) and (\ref{defm(A)}), we then derive that
\begin{equation}
\Vert R_{m(A)}\Vert _{\infty }\leq c_{4}(A_{m(A)})^{\gamma _{1}/\gamma }\leq
c_{4}\bigl(A2^{-\ell }\bigr)^{\gamma _{1}/\gamma }\,. \label{boundRl}
\end{equation}
Hence, if $t\leq c_{4}^{-1}(2^{\ell }/A)^{\gamma _{1}/\gamma }$, by using (
\ref{psi}) together with (\ref{hypovarcont}), we obtain
\begin{equation}
\log \mathbb{E}\bigl(\exp ({\textstyle tR_{m(A)}})\bigr)\leq
t^{2}v^{2}A2^{-\ell }\leq t^{2}(v\sqrt{A})^{2}:=t^{2}\sigma _{1}^{2}\,.
\label{P5prop3}
\end{equation}
\noindent \textit{A bound for the Laplace transform of the $Y_i$'s.}
Notice that for any $0 \leq i \leq m(A)-1$, by the definition of $\ell_i$
and (\ref{rest1l}), we get that
\begin{equation*}
2^{- \ell_i} A_i = 2^{1 - \ell_i} (A_i /2) > 2^{-\ell } (A/2) \geq ( 1 \vee
2c_0^{-1} ) \, .
\end{equation*}
Now, by Proposition 1, we get that for any $i \in [0, m(A)[$ and any $t \leq
\kappa \big (
A_i^{ \gamma -1} \wedge 2^{- \ell_i} A_i \big )^{\gamma_1/\gamma} $ with $
\kappa$ defined by (\ref{defkappa}),
\begin{equation*}
\log \mathbb{E} \bigl( e^{t Y_i} \bigr) \leq t^2 \Big ( v \sqrt{A_i} + \big
( \sqrt{\ell_i} (2A_i)^{\frac{1}{2}+ \frac{\gamma_1}{2\gamma}} +
2A_i^{\gamma/2} (2A_i)^{\gamma_1/\gamma} \big ) \exp \big( - \frac 14 \big (
c_1 A_i2^{-\ell_i} \big )^{\gamma_1} \big ) \Big )^2 \, .
\end{equation*}
Notice now that $\ell_i \leq \ell \leq A$, $A_i \leq A2^{-i}$ and $2^{-\ell
-1} A \leq 2^{-\ell_i} A_i \leq 2^{-\ell} A$. Taking into account these
bounds and the fact that $\gamma < 1$, we then get that for any $i$ in $[0 ,
m(A)[$ and any $t \leq \kappa \big( (2^i /A)^{ 1- \gamma} \wedge 2^\ell
/A \big)^{\gamma_1/\gamma} $,
\begin{equation} \label{P6prop3}
\log \mathbb{E} \bigl( e^{t Y_i} \bigr) \leq t^2 \Bigl( v \frac{A^{1/2}}{
2^{i/2}} + \Bigl( 2^{2+ \frac{\gamma_1}{\gamma} } \frac{A^{ 1+ \frac{\gamma_1
}{\gamma} } }{ (2^i)^{ \frac{\gamma}{2} +\frac{\gamma_1}{2\gamma} } } \Bigr)
\exp \Bigl( - \frac{c^{\gamma_1}_1}{2^{2+\gamma_1}} \Bigl( \frac{A}{2^\ell}
\Bigr)^{\gamma_1} \Bigr) \Bigr)^2 := t^2 \sigma_{2,i}^2 \, .
\end{equation}
\noindent \textit{A bound for the Laplace transform of the $Z_i$'s.}
Notice first that for any $1\leq i\leq m(A)-1$, $Z_{i}$ is a centered random
variable, such that
\begin{equation*}
|Z_{i}|\leq \sum_{k=1}^{A_{i}}\Big (\big |(\varphi _{T_{i-1}}-\varphi
_{T_{i}})X^{(i)}(k)\big |+\mathbb{E}\big |(\varphi _{T_{i-1}}-\varphi
_{T_{i}})X^{(i)}(k)\big |\Big )\,.
\end{equation*}
Consequently, using (\ref{secondboundTl}) we get that
\begin{equation*}
\Vert Z_{i}\Vert _{\infty }\leq 2A_{i}T_{i-1}\leq c_{4}A_{i}^{\gamma
_{1}/\gamma }\,.
\end{equation*}
In addition, since $|(\varphi _{T_{i-1}}-\varphi _{T_{i}})(x)|\leq
(T_{i-1}-T_{i})\,{1\hspace{-1mm}{\mathrm{I}}}_{x>T_{i}}$, and the random
variables $(X^{(i)}(k))$ satisfy (\ref{hypoQ}), by the definition of $
T_{i}$, we get that
\begin{equation*}
\mathbb{E}|Z_{i}|^{2}\leq (2A_{i}T_{i-1})^{2}\tau (c^{-1/\gamma
_{1}}A_{i})\leq c_{4}^{2}A_{i}^{2\gamma _{1}/\gamma }\exp (-A_{i}^{\gamma
_{1}})\,.
\end{equation*}
Hence applying (\ref{psi}) to the random variable $Z_{i}$, we get for any
positive $t$,
\begin{equation*}
\mathbb{E}\exp (tZ_{i})\leq 1+t^{2}g(c_{4}tA_{i}^{\gamma _{1}/\gamma
})c_{4}^{2}A_{i}^{2\gamma _{1}/\gamma }\exp (-A_{i}^{\gamma _{1}})\,.
\end{equation*}
Hence, since $A_{i}\leq A2^{-i}$, for any positive $t$ satisfying $t\leq
(2c_{4})^{-1}(2^{i}/A)^{\gamma _{1}(1-\gamma )/\gamma }$, we have that
\begin{equation*}
2tA_{i}T_{i-1}\leq A_{i}^{\gamma _{1}}/2\,.
\end{equation*}
Since $g(x)\leq e^{x}$ for $x\geq 0$, we infer that for any positive $t$
with $t\leq (2c_{4})^{-1}(2^{i}/A)^{\gamma _{1}(1-\gamma )/\gamma }$,
\begin{equation*}
\log \mathbb{E}\exp (tZ_{i})\leq c_{4}^{2}t^{2}(2^{-i}A)^{2\gamma
_{1}/\gamma }\exp (-A_{i}^{\gamma _{1}}/2)\,.
\end{equation*}
By taking into account that for any $1\leq i\leq m(A)-1$, $A_{i}\geq
A_{m(A)-1}>A2^{-\ell }$ (by definition of $m(A)$), it follows that for any $
i $ in $[1,m(A)[$ and any positive $t$ satisfying $t\leq
(2c_{4})^{-1}(2^{i}/A)^{\gamma _{1}(1-\gamma )/\gamma }$,
\begin{equation}
\log \mathbb{E}\exp (tZ_{i})\leq t^{2}\bigl(c_{4}(2^{-i}A)^{\gamma
_{1}/\gamma }\exp (-(A2^{-\ell })^{\gamma _{1}}/4)\bigr)^{2}:=t^{2}\sigma
_{3,i}^{2}\,. \label{P7prop3}
\end{equation}
\noindent \textit{End of the proof.} Let
\begin{equation*}
C= c_4\Big ( \frac{A}{2^{\ell}} \Big )^{ \gamma_1/\gamma}+ \frac{1}{\kappa}
\sum_{i=0}^{m(A)-1} \Big ( \Big ( \frac{A}{2^i} \Big )^{ 1- \gamma} \vee
\frac{A}{2^{ \ell}} \Big )^{\gamma_1/\gamma}+ 2c_4 \sum_{i=1}^{m(A)-1} \Big
( \frac{A}{2^i} \Big )^{ \gamma_1(1- \gamma)/\gamma} \, ,
\end{equation*}
and
\begin{equation*}
\sigma= \sigma_1 + \sum_{i=0}^{m(A)-1} \sigma_{2,i} + \sum_{i=1}^{m(A)-1}
\sigma_{3,i} \, ,
\end{equation*}
where $\sigma_1$, $\sigma_{2,i}$ and $\sigma_{3,i}$ are respectively defined
in (\ref{P5prop3}), (\ref{P6prop3}) and (\ref{P7prop3}).
Notice that $m(A)\leq \ell $, and $\ell \leq 2\log A/\log 2$. We select now $
\ell $ as follows
\begin{equation*}
\ell =\inf \{j\in \mathbb{N}\,,\,2^{j}\geq A^{\gamma }(\log A)^{\gamma
/\gamma _{1}}\}\,.
\end{equation*}
This selection is compatible with (\ref{rest1l}) if
\begin{equation}
(2\vee 4c_{0}^{-1})(\log A)^{\gamma /\gamma _{1}}\leq A^{1-\gamma }\,.
\label{restrictA}
\end{equation}
Now we use the fact that for any positive $\delta $ and any positive $u$, $
\delta \log u\leq u^{\delta }$. Hence if $A\geq 3$,
\begin{equation*}
(2\vee 4c_{0}^{-1})(\log A)^{\gamma /\gamma _{1}}\leq (2\vee
4c_{0}^{-1})\log A\leq 2(1-\gamma )^{-1}(2\vee 4c_{0}^{-1})A^{(1-\gamma
)/2}\,,
\end{equation*}
which implies that (\ref{restrictA}) holds as soon as $A\geq \mu $ where $
\mu $ is defined by (\ref{borneA}). It follows that
\begin{equation}
C\leq \nu ^{-1}A^{\gamma _{1}(1-\gamma )/\gamma }\,. \label{boundC}
\end{equation}
In addition
\begin{equation*}
\sigma \leq 5v\sqrt{A}+10\times 2^{2\gamma _{1}/\gamma }A^{1+\gamma
_{1}/\gamma }\exp \big (-\frac{c_{1}^{\gamma _{1}}}{2^{2+\gamma _{1}}}
(A2^{-\ell })^{\gamma _{1}}\big )+c_{4}A^{\gamma _{1}/\gamma }\exp \big
(-\frac{1}{4}(A2^{-\ell })^{\gamma _{1}}\big )\,.
\end{equation*}
Consequently, since $A2^{-\ell }\geq \frac{1}{2}A^{1-\gamma }(\log
A)^{-\gamma /\gamma _{1}}$, there exist positive constants $\nu _{1}$ and $
\nu _{2}$ depending only on $c$, $\gamma $ and $\gamma _{1}$ such that
\begin{equation}
\sigma ^{2}\leq A\bigl(50v^{2}+\nu _{1}\exp (-\nu _{2}A^{\gamma
_{1}(1-\gamma )}(\log A)^{-\gamma })\bigr)=AV(A)\,. \label{sigmagene}
\end{equation}
Starting from the decomposition (\ref{P3prop3}) and the bounds (\ref{P5prop3}
), (\ref{P6prop3}) and (\ref{P7prop3}), we aggregate the contributions of
the terms by using Lemma \ref{breta} given in the appendix. Then, by taking
into account the bounds (\ref{boundC}) and (\ref{sigmagene}), Proposition 2
follows. $\diamond $
\subsection{Proof of Theorem \protect\ref{BTinegacont}}
For any positive $M$ and any positive integer $i$, we set
\begin{equation*}
\overline X_M (i) = \varphi_M (X_i) - \mathbb{E} \varphi_M (X_i) \, .
\end{equation*}
\noindent $\bullet $ If $\lambda \geq n^{\gamma _{1}/\gamma }$, setting $
M=\lambda /n$, we have:
\begin{equation*}
\sum_{i=1}^{n}|\overline{X}_{M}(i)|\leq 2\lambda ,
\end{equation*}
which ensures that
\begin{equation*}
\mathbb{P}\Bigl(\sup_{j\leq n}|S_{j}|\geq 3\lambda \Bigr)\leq \mathbb{P}\Big
(\sum_{i=1}^{n}|X_{i}-\overline{X}_{M}(i)|\geq \lambda \Big ).
\end{equation*}
Now
\begin{equation*}
\mathbb{P}\Big (\sum_{i=1}^{n}|X_{i}-\overline{X}_{M}(i)|\geq \lambda \Big )
\leq \frac{1}{\lambda }\sum_{i=1}^{n}\mathbb{E}|X_{i}-\overline{X}
_{M}(i)|\leq \frac{2n}{\lambda }\int_{M}^{\infty }H(x)dx.
\end{equation*}
Now recall that $\log H(x)=1-x^{\gamma _{2}}$. It follows that the function $
x\rightarrow \log (x^{2}H(x))$ is nonincreasing for $x\geq (2/\gamma
_{2})^{1/\gamma _{2}}$. Hence, for $M\geq (2/\gamma _{2})^{1/\gamma _{2}}$,
\begin{equation*}
\int_{M}^{\infty }H(x)dx\leq M^{2}H(M)\int_{M}^{\infty }\frac{dx}{x^{2}}
=MH(M).
\end{equation*}
Whence
\begin{equation}
\mathbb{P}\Big (\sum_{i=1}^{n}|X_{i}-\overline{X}_{M}(i)|\geq \lambda \Big )
\leq 2n\lambda ^{-1}MH(M)\ \hbox{ for any }\ M\geq (2/\gamma _{2})^{1/\gamma
_{2}}. \label{B1decST*}
\end{equation}
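The monotonicity of $x\mapsto \log (x^{2}H(x))$ invoked above can be verified directly from $\log H(x)=1-x^{\gamma _{2}}$; a quick check:

```latex
\frac{d}{dx}\log \bigl(x^{2}H(x)\bigr)
=\frac{d}{dx}\bigl(2\log x+1-x^{\gamma _{2}}\bigr)
=\frac{2-\gamma _{2}x^{\gamma _{2}}}{x}\leq 0
\quad \text{for }x\geq (2/\gamma _{2})^{1/\gamma _{2}}\,.
```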
Consequently, our choice of $M$ together with the fact that $(\lambda
/n)^{\gamma _{2}}\geq \lambda ^{\gamma }$ leads to
\begin{equation*}
\mathbb{P}\Bigl(\sup_{j\leq n}|S_{j}|\geq 3\lambda \Bigr)\leq 2\exp
(-\lambda ^{\gamma })\,
\end{equation*}
provided that $\lambda /n\geq (2/\gamma _{2})^{1/\gamma _{2}}$. Now since $
\lambda \geq n^{\gamma _{1}/\gamma }$, this condition holds if $\lambda \geq
(2/\gamma _{2})^{1/\gamma }$. Consequently for any $\lambda \geq n^{\gamma
_{1}/\gamma }$, we get that
\begin{equation*}
\mathbb{P}\Bigl(\sup_{j\leq n}|S_{j}|\geq 3\lambda \Bigr)\leq e\exp
(-\lambda ^{\gamma }/C_{1})\,,
\end{equation*}
as soon as $C_{1}\geq (2/\gamma _{2})^{1/\gamma }$.
\noindent $\bullet $ Let $\zeta =\mu \vee (2/\gamma _{2})^{1/\gamma _{1}}$
where $\mu $ is defined by (\ref{borneA}). Assume that $(4\zeta )^{\gamma
_{1}/\gamma }\leq \lambda \leq n^{\gamma _{1}/\gamma }$. Let $p$ be a real
in $[1,\frac{n}{2}]$, to be chosen later on. Let
\begin{equation*}
A=\Big [\frac{n}{2p}\Big ],\,k=\Big [\frac{n}{2A}\Big ]\text{ and }
M=H^{-1}(\tau (c^{-\frac{1}{\gamma _{1}}}A))\,.
\end{equation*}
For any set of natural numbers $K$, denote
\begin{equation*}
\bar{S}_{M}(K)=\sum_{i\in K}\overline{X}_{M}(i)\,.
\end{equation*}
For $i$ integer in $[1,2k]$, let $I_{i}=\{1+(i-1)A,\dots ,iA\}$. Let also $
I_{2k+1}=\{1+2kA,\dots ,n\}$. Set
\begin{equation*}
\bar{S}_{1}(j)=\sum_{i=1}^{j}\bar{S}_{M}(I_{2i-1})\ \hbox{ and }\bar{S}
_{2}(j)=\sum_{i=1}^{j}\bar{S}_{M}(I_{2i}).
\end{equation*}
We then get the following inequality:
\begin{equation}
\sup_{j\leq n}|S_{j}|\leq \sup_{j\leq k+1}|\bar{S}_{1}(j)|+\sup_{j\leq k}|
\bar{S}_{2}(j)|+2AM+\sum_{i=1}^{n}|X_{i}-\overline{X}_{M}(i)|\,.
\label{decST}
\end{equation}
Using (\ref{B1decST*}) together with (\ref{hypoalpha}) and our selection of $
M$, we get for all positive $\lambda $ that
\begin{equation*}
\mathbb{P}\Big (\sum_{i=1}^{n}|X_{i}-\overline{X}_{M}(i)|\geq \lambda \Big )
\leq 2n\lambda ^{-1}M\exp (-A^{\gamma _{1}})\ \hbox{ for }\ A\geq (2/\gamma
_{2})^{1/\gamma _{1}}\,.
\end{equation*}
By using Lemma 5 in Dedecker and Prieur (2004), we get the existence of
independent random variables $(\bar{S}_{M}^{\ast }(I_{2i}))_{1\leq i\leq k}$
with the same distribution as the random variables $\bar{S}_{M}(I_{2i})$
such that
\begin{equation}
\mathbb{E}|\bar{S}_{M}(I_{2i})-\bar{S}_{M}^{\ast }(I_{2i})|\leq A\tau
(A)\leq A\exp \big (-cA^{\gamma _{1}}\big )\,. \label{coupY}
\end{equation}
The same is true for the sequence $(\bar{S}_{M}(I_{2i-1}))_{1\leq i\leq k+1}$
. Hence for any positive $\lambda $ such that $\lambda \geq 2AM$,
\begin{eqnarray*}
\mathbb{P}\Bigl(\sup_{j\leq n}|S_{j}|\geq 6\lambda \Bigr) &\leq &\lambda
^{-1}A(2k+1)\exp \big (-cA^{\gamma _{1}}\big )+2n\lambda ^{-1}M\exp
(-A^{\gamma _{1}}) \\
&&+\mathbb{P}\Bigl(\max_{j\leq k+1}\Big|\sum_{i=1}^{j}\bar{S}_{M}^{\ast
}(I_{2i-1})\Big|\geq \lambda \Bigr)+\mathbb{P}\Bigl(\max_{j\leq k}\Big|
\sum_{i=1}^{j}\bar{S}_{M}^{\ast }(I_{2i})\Big|\geq \lambda \Bigr)\,.
\end{eqnarray*}
For any positive $t$, due to the independence and since the variables are
centered, $(\exp (t\sum_{i=1}^{j}\bar{S}_{M}^{\ast }(I_{2i})))_{j\leq k}$ is
a submartingale. Hence Doob's maximal inequality entails that for any
positive $t$,
\begin{equation*}
\mathbb{P}\Bigl(\max_{j\leq k}\sum_{i=1}^{j}\bar{S}_{M}^{\ast }(I_{2i})\geq
\lambda \Bigr)\leq e^{-\lambda t}\prod_{i=1}^{k}\mathbb{E}\Bigl(\exp (t\bar{S
}_{M}(I_{2i}))\Bigr)\,.
\end{equation*}
To bound the Laplace transform of each random variable $\bar{S}_{M}(I_{2i})$
, we apply Proposition \ref{propinter3} to the sequences $(X_{i+s})_{i\in {
\mathbb{Z}}}$ for suitable values of $s$. Hence we derive that, if $A\geq
\mu $ then for any positive $t$ such that $t<\nu A^{\gamma _{1}(\gamma
-1)/\gamma }$ (where $\nu $ is defined by (\ref{defK})),
\begin{equation*}
\sum_{i=1}^{k}\log \mathbb{E}\Bigl(\exp (t\bar{S}_{M}(I_{2i}))\Bigr)\leq
Akt^{2}\frac{V(A)}{1-t\nu ^{-1}A^{\gamma _{1}(1-\gamma )/\gamma }}\,.
\end{equation*}
Obviously the same inequalities hold true for the sums associated to $
(-X_{i})_{i\in \mathbb{Z}}$. Now standard computations (see for instance
page 153 in Rio (2000)) lead to
\begin{equation*}
\mathbb{P}\Bigl(\max_{j\leq k}\Big|\sum_{i=1}^{j}\bar{S}_{M}^{\ast }(I_{2i})
\Big|\geq \lambda \Bigr)\leq 2\exp \Bigl(-\frac{\lambda ^{2}}{
4AkV(A)+2\lambda \nu ^{-1}A^{\gamma _{1}(1-\gamma )/\gamma }}\Bigr)\,.
\end{equation*}
Similarly, we obtain that
\begin{equation*}
\mathbb{P}\Bigl(\max_{j\leq k+1}\Big|\sum_{i=1}^{j}\bar{S}_{M}^{\ast
}(I_{2i-1})\Big|\geq \lambda \Bigr)\leq 2\exp \Bigl(-\frac{\lambda ^{2}}{
4A(k+1)V(A)+2\lambda \nu ^{-1}A^{\gamma _{1}(1-\gamma )/\gamma }}\Bigr)\,.
\end{equation*}
Let now $p=n\lambda ^{-\gamma /\gamma _{1}}$. It follows that $2A\leq
\lambda ^{\gamma /\gamma _{1}}$ and, since $M\leq (2A)^{\gamma _{1}/\gamma
_{2}}$, we obtain $2AM\leq (2A)^{\gamma _{1}/\gamma }\leq \lambda $. Also,
since $\lambda \geq (4\zeta )^{\gamma _{1}/\gamma }$, we have $n\geq 4p$
implying that $A\geq 4^{-1}\lambda ^{\gamma /\gamma _{1}}\geq \zeta $. The
result follows from the previous bounds.
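The exponent arithmetic behind $2AM\leq (2A)^{\gamma _{1}/\gamma }\leq \lambda $ rests on the relation $\gamma ^{-1}=\gamma _{1}^{-1}+\gamma _{2}^{-1}$ defining $\gamma $ (assumed here, as elsewhere in this setting):

```latex
1+\frac{\gamma _{1}}{\gamma _{2}}
=\gamma _{1}\Bigl(\frac{1}{\gamma _{1}}+\frac{1}{\gamma _{2}}\Bigr)
=\frac{\gamma _{1}}{\gamma }\,,
% so 2AM <= (2A)(2A)^{\gamma_1/\gamma_2} = (2A)^{\gamma_1/\gamma},
% and 2A <= \lambda^{\gamma/\gamma_1} then gives (2A)^{\gamma_1/\gamma} <= \lambda.
```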
To end the proof, we mention that if $\lambda \leq ( 4 \zeta
)^{\gamma_1/\gamma}$, then
\begin{equation*}
\mathbb{P} \Bigl( \sup_{ j \leq n} |S_j| \geq \lambda \Bigr) \leq 1 \leq
e\exp \Bigl( - \frac{ \lambda^\gamma}{(4\zeta)^{\gamma_1}} \Bigr ) \, ,
\end{equation*}
which is less than $n\exp ( - \lambda^\gamma /C_1 ) $ as soon as $n \geq 3$
and $C_1 \geq (4\zeta)^{\gamma_1}$. $\diamond$
\subsection{Proof of Remark \protect\ref{remv2}}
\label{prremv2} Setting $W_{i}=\varphi _{M}(X_{i})$, we first bound $\mathrm{
Cov}(W_{i},W_{i+k})$. Applying Inequality (4.2) of Proposition 1 in
Dedecker and Doukhan (2003), we derive that, for any positive $k$,
\begin{equation*}
|\mathrm{Cov}(W_{i},W_{i+k})|\leq 2\int_{0}^{\gamma ({\mathcal{M}}
_{i},W_{i+k})/2}Q_{|W_{i}|}\circ G_{|W_{i+k}|}(u)du\,,
\end{equation*}
where
\begin{equation*}
\gamma ({\mathcal{M}}_{i},W_{i+k})=\Vert \mathbb{E}(W_{i+k}|{\mathcal{M}}
_{i})-\mathbb{E}(W_{i+k})\Vert _{1}\leq \tau (k)\,,
\end{equation*}
since $x\mapsto \varphi _{M}(x)$ is $1$-Lipschitz. Now for any $j$, $
Q_{|W_{j}|}\leq Q_{|X_{j}|}\leq Q$, implying that $G_{|W_{j}|}\geq G$, where
$G$ is the inverse function of $u\rightarrow \int_{0}^{u}Q(v)dv$. Taking $
j=i $ and $j=i+k$, we get that
\begin{equation*}
|\mathrm{Cov}(W_{i},W_{i+k})|\leq 2\int_{0}^{\tau (k)/2}Q_{|X_{i}|}\circ
G(u)du\,.
\end{equation*}
Making the change of variables $u=G(v)$, we also have
\begin{equation}
|\mathrm{Cov}(W_{i},W_{i+k})|\leq 2\int_{0}^{G(\tau
(k)/2)}Q_{|X_{i}|}(u)Q(u)du\,, \label{boundcov}
\end{equation}
proving the remark.
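Spelling out the substitution $u=G(v)$ used above: since $G$ is the inverse of $x\mapsto \int_{0}^{x}Q(w)\,dw$, one has $v=\int_{0}^{u}Q(w)\,dw$ and $dv=Q(u)\,du$, while $v=\tau (k)/2$ corresponds to $u=G(\tau (k)/2)$. Hence

```latex
\int_{0}^{\tau (k)/2}Q_{|X_{i}|}\circ G(v)\,dv
=\int_{0}^{G(\tau (k)/2)}Q_{|X_{i}|}(u)\,Q(u)\,du\,,
% which is exactly the integral appearing in (\ref{boundcov}).
```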
\subsection{Proof of Theorem \protect\ref{thmMDPsubgeo11}}
For any $n\geq 1$, let $T=T_{n}$ where $(T_{n})$ is a sequence of real
numbers greater than $1$ such that $\lim_{n\rightarrow \infty }T_{n}=\infty $,
which will be specified later. We truncate the variables at the level $
T_{n}$. So we consider
\begin{equation*}
X_{i}^{\prime }=\varphi _{T_{n}}(X_{i})-\mathbb{E}\varphi _{T_{n}}(X_{i})
\text{ and }X_{i}^{\prime \prime }=X_{i}-X_{i}^{\prime }\,.
\end{equation*}
Let $S_{n}^{\prime }=\sum_{i=1}^{n}X_{i}^{\prime }$ and $S_{n}^{\prime
\prime }=\sum_{i=1}^{n}X_{i}^{\prime \prime }$. To prove the result, by the
exponential equivalence lemma in Dembo and Zeitouni (1998, Theorem 4.2.13,
p. 130), it suffices to prove that for any $\eta >0$,
\begin{equation}
\limsup_{n\rightarrow \infty }a_{n}\log \mathbb{P}\big (\frac{\sqrt{a_{n}}}{
\sigma _{n}}|S_{n}^{\prime \prime }|\geq \eta \big )=-\infty \,,
\label{SnS'n}
\end{equation}
and
\begin{equation}
\text{$\{\frac{1}{\sigma _{n}}S_{n}^{\prime }\}$ satisfies (\ref{mdpdef})
with the good rate function $I(t)=\frac{t^{2}}{2}$}\,. \label{mdpST}
\end{equation}
To prove (\ref{SnS'n}), we first notice that $|x-\varphi
_{T}(x)|=(|x|-T)_{+} $. Consequently, if
\begin{equation*}
W_{i}^{\prime }=X_{i}-\varphi _{T}(X_{i})\,,
\end{equation*}
then $Q_{|W_{i}^{\prime }|}\leq (Q-T)_{+}$. Hence, denoting by $
V_{T_{n}}^{\prime \prime }$ the upper bound for the variance of $
S_{n}^{\prime \prime }$ (corresponding to $V$ for the variance of $S_{n}$),
we have, by Remark \ref{remv2},
\begin{equation*}
V_{T_{n}}^{\prime \prime }\leq
\int_{0}^{1}(Q(u)-T_{n})_{+}^{2}du+4\sum_{k>0}\int_{0}^{\tau _{W^{\prime
}}(k)/2}(Q(G_{T_{n}}(v))-T_{n})_{+}dv\,,
\end{equation*}
where $G_{T}$ is the inverse function of $x\rightarrow
\int_{0}^{x}(Q(u)-T)_{+}du$ and the coefficients $\tau _{W^{\prime }}(k)$
are the $\tau $-mixing coefficients associated to $(W_{i}^{\prime })_{i}$.
Next, since $x\rightarrow x-\varphi _{T}(x)$ is $1$-Lipschitz, we have that $
\tau _{W^{\prime }}(k)\leq \tau _{X}(k)=\tau (k)$. Moreover, $G_{T}\geq G$,
because $(Q-T)_{+}\leq Q$. Since $Q$ is nonincreasing, it follows that
\begin{equation*}
V_{T_{n}}^{\prime \prime }\leq
\int_{0}^{1}(Q(u)-T_{n})_{+}^{2}du+4\sum_{k>0}\int_{0}^{\tau
(k)/2}(Q(G(v))-T_{n})_{+}dv.
\end{equation*}
Hence
\begin{equation}
\lim_{n\rightarrow +\infty }V_{T_{n}}^{\prime \prime }=0. \label{majvarT}
\end{equation}
The sequence $(X_{i}^{\prime \prime })$ satisfies (\ref{hypoalpha}) and we
now prove that it satisfies also (\ref{hypoQ}) for $n$ large enough. With
this aim, we first notice that, since $|\mathbb{E}(X_{i}^{\prime \prime })|=|
\mathbb{E}(X_{i}^{\prime })|\leq T$, $|X_{i}^{\prime \prime }|\leq |X_{i}|$
if $|X_{i}|\geq T$. Now if $|X_{i}|<T$ then $X_{i}^{\prime \prime }=\mathbb{E
}(\varphi _{T}(X_{i}))$, and
\begin{equation*}
\big |\mathbb{E}(\varphi _{T_{n}}(X_{i}))\big |\leq \int_{T_{n}}^{\infty
}H(x)dx<1\text{ for $n$ large enough}.
\end{equation*}
Then for $t\geq 1$,
\begin{equation*}
\sup_{i\in \lbrack 1,n]}\mathbb{P}(|X_{i}^{\prime \prime }|\geq t)\leq
H(t)\,,
\end{equation*}
proving that the sequence $(X_{i}^{\prime \prime })$ satisfies (\ref{hypoQ})
for $n$ large enough. Consequently, for $n$ large enough, we can apply
Theorem \ref{BTinegacont} to the sequence $(X_{i}^{\prime \prime })$, and we
get that, for any $\eta >0$,
\begin{equation*}
\mathbb{P}\Bigl(\sqrt{\frac{a_{n}}{\sigma _{n}^{2}}}|S_{n}^{\prime \prime
}|\geq \eta \Bigr)\leq n\exp \Bigl(-\frac{\eta ^{\gamma }\sigma _{n}^{\gamma
}}{C_{1}a_{n}^{\frac{\gamma }{2}}}\Bigr)+\exp \Bigl(-\frac{\eta ^{2}\sigma
_{n}^{2}}{C_{2}a_{n}(1+nV_{T_{n}}^{\prime \prime })}\Bigr)+\exp \Bigl(-\frac{
\eta ^{2}\sigma _{n}^{2}}{C_{3}na_{n}}\exp \Bigl(\frac{\eta ^{\delta }\sigma
_{n}^{\delta }}{C_{4}a_{n}^{\frac{\delta }{2}}}\Bigr)\Bigr)\,,
\end{equation*}
where $\delta =\gamma (1-\gamma )/2$. This proves (\ref{SnS'n}), since $
a_{n}\rightarrow 0$, $a_{n}n^{\gamma /(2-\gamma )}\rightarrow \infty $, $
\lim_{n\rightarrow \infty }V_{T_{n}}^{\prime \prime }=0$ and $
\liminf_{n\rightarrow \infty }\sigma _{n}^{2}/n>0$.
We turn now to the proof of (\ref{mdpST}). Let $p_{n}=[n^{1/(2-\gamma )}]$
and $q_{n}=\delta _{n}p_{n}$ where $\delta _{n}$ is a sequence of positive
real numbers tending to zero and such that
\begin{equation*}
\delta _{n}^{\gamma _{1}}n^{\gamma _{1}/(2-\gamma )}/\log n\rightarrow
\infty \,\text{ and }\,\delta _{n}^{\gamma _{1}}a_{n}n^{\gamma
_{1}/(2-\gamma )}\rightarrow \infty \,
\end{equation*}
(this is always possible since $\gamma _{1}\geq \gamma $ and by assumption $
a_{n}n^{\gamma /(2-\gamma )}\rightarrow \infty $). Let now $
m_{n}=[n/(p_{n}+q_{n})]$. We divide the variables $\{X_{i}^{\prime }\}$ into
big blocks of size $p_{n}$ and small blocks of size $q_{n}$, in the
following way: Let us set for all $1\leq j\leq m_{n}$,
\begin{equation*}
Y_{j,n}=\sum_{i=(j-1)(p_{n}+q_{n})+1}^{(j-1)(p_{n}+q_{n})+p_{n}}X_{i}^{
\prime }\quad \mbox{ and }\quad
Z_{j,n}=\sum_{i=(j-1)(p_{n}+q_{n})+p_{n}+1}^{j\,(p_{n}+q_{n})}X_{i}^{\prime
}\,.
\end{equation*}
Then we have the following decomposition:
\begin{equation} \label{dec1Sn'}
S^{\prime }_{n} = \sum_{j=1}^{m_{n}}Y_{j,n}+ \sum_{j=1}^{m_{n}} Z_{j,n} +
\sum_{i=m_n (p_n +q_n) + 1}^{n}X_i^{\prime }\, .
\end{equation}
For any $j=1,\dots ,m_{n}$, let now
\begin{equation*}
I(n,j)=\{(j-1)(p_{n}+q_{n})+1,\dots ,(j-1)(p_{n}+q_{n})+p_{n}\}\,.
\end{equation*}
These intervals have cardinality $p_{n}$. Let
\begin{equation*}
\ell _{n}=\inf \{k\in \mathbb{N}^{\ast },2^{k}\geq \varepsilon
_{n}^{-1}p_{n}^{\gamma /2}a_{n}^{-1/2}\}\,,
\end{equation*}
where $\varepsilon _{n}$ is a sequence of positive numbers tending to zero and
satisfying
\begin{equation}
\varepsilon _{n}^{2}a_{n}n^{\gamma /(2-\gamma )}\rightarrow \infty .
\label{choiceepsi}
\end{equation}
For each $j\in \{1,\dots ,m_{n}\}$, we construct discrete Cantor sets, $
K_{I(n,j)}^{(\ell _{n})}$, as described in the proof of Proposition \ref
{propinter2} with $A=p_{n}$, $\ell =\ell _{n}$, and the following selection
of $c_{0}$,
\begin{equation*}
c_{0}=\frac{\varepsilon _{n}}{1+\varepsilon _{n}}\frac{2^{(1-\gamma )/\gamma
}-1}{2^{1/\gamma }-1}\,.
\end{equation*}
Notice that, with these selections of $p_{n}$ and $\ell _{n}$, clearly $
p_{n}2^{-\ell _{n}}\rightarrow \infty $. In addition, with this selection of $
c_{0}$ we get that for any $1\leq j\leq m_{n}$,
\begin{equation*}
\mathrm{Card}\big ((K_{I(n,j)}^{(\ell _{n})})^{c}\big )\leq \frac{\varepsilon _{n}p_{n}}{
1+\varepsilon _{n}}
\end{equation*}
and
\begin{equation*}
K_{I(n,j)}^{(\ell _{n})}=\bigcup_{i=1}^{2^{\ell _{n}}}I_{\ell
_{n},i}(p_{n},j)\,,
\end{equation*}
where the $I_{\ell _{n},i}(p_{n},j)$ are disjoint sets of consecutive
integers, each of the same cardinality, such that
\begin{equation}
\frac{p_{n}}{2^{\ell _{n}}(1+\varepsilon _{n})}\leq \mathrm{Card}I_{\ell
_{n},i}(p_{n},j)\leq \frac{p_{n}}{2^{\ell _{n}}}\,. \label{majcard}
\end{equation}
With this notation, we derive that
\begin{equation}
\sum_{j=1}^{m_{n}}Y_{j,n}=\sum_{j=1}^{m_{n}}S^{\prime }\big (
K_{I(n,j)}^{(\ell _{n})}\big )+\sum_{j=1}^{m_{n}}S^{\prime }\big (
(K_{I(n,j)}^{(\ell _{n})})^{c}\big )\,. \label{dec2Sn'}
\end{equation}
Combining (\ref{dec1Sn'}) with (\ref{dec2Sn'}), we can rewrite $
S_{n}^{\prime }$ as follows:
\begin{equation} \label{construction}
S_{n}^{\prime }=\sum_{j=1}^{m_{n}}S^{\prime }\big (K_{I(n,j)}^{(\ell _{n})}
\big )+\sum_{i=1}^{r_{n}}\tilde{X}_{i}\,,
\end{equation}
where $r_{n}=n-m_{n}\mathrm{Card}\,K_{I(n,1)}^{(\ell _{n})}$ and the $\tilde{X}
_{i}$ are obtained from the $X_{i}^{\prime }$ and satisfy (\ref{hypoalpha})
and (\ref{hypovarcont}) with the same constants. Since $r_{n}=o(n)$,
applying Theorem \ref{BTinegacont} and using the fact that $
\liminf_{n\rightarrow \infty }\sigma _{n}^{2}/n>0$, we get that for any $\eta
>0 $,
\begin{equation}
\limsup_{n\rightarrow \infty }a_{n}\log \mathbb{P}\big (\frac{\sqrt{a_{n}}}{
\sigma_n}\sum_{i=1}^{r_{n}}\tilde{X}_{i}\geq \eta \big )=-\infty \,.
\label{negligiblereste}
\end{equation}
Hence to prove (\ref{mdpST}), it suffices to prove that
\begin{equation}
\text{ $\{\sigma^{-1}_{n}\sum_{j=1}^{m_{n}}S^{\prime }\big (
K_{I(n,j)}^{(\ell _{n})}\big )\}$ satisfies (\ref{mdpdef}) with the good
rate function $I(t)=t^2/2$}. \label{mdpST*}
\end{equation}
With this aim, we now choose $T_n =\varepsilon _{n}^{-1/2}$, where $
\varepsilon _n$ is defined by (\ref{choiceepsi}).
By using Lemma 5 in Dedecker and Prieur (2004), we get the existence of
independent random variables $\big (S^{\ast }(K_{I(n,j)}^{(\ell _{n})})\big )
_{1\leq j\leq m_{n}}$ with the same distribution as the random variables $
S^{\prime }(K_{I(n,j)}^{(\ell _{n})})$ such that
\begin{equation*}
\sum_{j=1}^{m_{n}}{\mathbb{E}}|S^{\prime }(K_{I(n,j)}^{(\ell _{n})})-S^{\ast
}(K_{I(n,j)}^{(\ell _{n})})|\leq \tau (q_{n})\sum_{j=1}^{m_{n}}\mathrm{Card}\,
K_{I(n,j)}^{(\ell _{n})}\,.
\end{equation*}
Consequently, since $\sum_{j=1}^{m_{n}}\mathrm{Card}\,K_{I(n,j)}^{(\ell
_{n})}\leq n$, we derive that for any $\eta >0$,
\begin{equation*}
a_{n}\log \mathbb{P}\big (\frac{\sqrt{a_{n}}}{\sigma_n}
|\sum_{j=1}^{m_{n}}(S^{\prime }(K_{I(n,j)}^{(\ell _{n})})-S^{\ast
}(K_{I(n,j)}^{(\ell _{n})}))|\geq \eta \big )\leq a_{n}\log \Big (\eta
^{-1}\sigma_n^{-1}n \sqrt{a_{n}}\exp \big (-c\delta _{n}^{\gamma
_{1}}n^{\gamma _{1}/(2-\gamma )}\big )\Big )\,,
\end{equation*}
which tends to $-\infty $ by the fact that $\liminf_n \sigma _{n}^{2}/n>0$
and the selection of $\delta _{n}$. Hence the proof of the MDP for $
\{\sigma^{-1} _{n}\sum_{j=1}^{m_{n}}S^{\prime }\big (K_{I(n,j)}^{(\ell _{n})}
\big )\}$ is reduced to proving the MDP for $\{\sigma_{n}^{-1}
\sum_{j=1}^{m_{n}}S^{\ast }\big (K_{I(n,j)}^{(\ell _{n})}\big )\}$. By Ellis'
theorem, to prove (\ref{mdpST*}) it then remains to show that, for any real $
t$,
\begin{equation}
a_{n}\sum_{j=1}^{m_{n}}\log \mathbb{E}\exp \Big (tS^{\prime }\big (
K_{I(n,j)}^{(\ell _{n})}\big )/\sqrt{a_n \sigma _{n}^2}\Big )\rightarrow
\frac{t^{2}}{2}\text{ as }n\rightarrow \infty \,. \label{ellisindsg}
\end{equation}
As in the proof of Proposition \ref{propinter2}, we decorrelate step by
step. Using Lemma \ref{lmainegatau}, taking into account the fact that
the variables are centered, and using the inequality (\ref{inelog}), we
obtain, proceeding as in the proof of Proposition \ref{propinter2}, that for
any real $t$,
\begin{eqnarray*}
&&\Bigl |\sum_{j=1}^{m_{n}}\log \mathbb{E}\exp \Big (tS^{\prime }\big (
K_{I(n,j)}^{(\ell _{n})}\big )/\sqrt{a_n \sigma _{n}^2}\Big )
-\sum_{j=1}^{m_{n}}\sum_{i=1}^{2^{\ell _{n}}}\log \mathbb{E}\exp \Big (
tS^{\prime }\big (I_{\ell _{n},i}(p_{n},j)\big )/\sqrt{a_n \sigma _{n}^2}
\Big )\Bigr | \\
&&\quad \leq \frac{|t|m_{n}p_{n}}{\sqrt{a_n \sigma _{n}^2}}\Bigl(\exp \Bigl(
-c\frac{c_{0}^{\gamma _{1}}}{4}\frac{p_{n}^{\gamma _{1}}}{2^{\ell _{n}\gamma
_{1}}}+2\frac{|t|}{\sqrt{\varepsilon _{n}a_n \sigma _{n}^2}}\frac{p_{n}}{
2^{\gamma \ell _{n}}}\Bigr)+\sum_{j=0}^{k_{\ell _{n}}}\exp \Bigl(-c\frac{
c_{0}^{\gamma _{1}}}{4}\frac{p_{n}^{\gamma _{1}}}{2^{j\gamma _{1}/\gamma }}+2
\frac{|t|}{\sqrt{\varepsilon _{n}a_n \sigma _{n}^2}}\frac{p_{n}}{2^{j}}\Bigr)
\Bigr)\,,
\end{eqnarray*}
where $k_{{\ell _{n}}}=\sup \{j\in \mathbb{N}\,,\,j/\gamma <\ell _{n}\}$. By
the selection of $p_{n}$ and $\ell _{n}$, and since $\liminf_{n\rightarrow
\infty }\sigma _{n}^{2}/n>0$ and $\varepsilon _{n}^{2}a_{n}n^{\gamma
/(2-\gamma )}\rightarrow \infty $, we derive that for $n$ large enough,
there exist positive constants $K_{1}$ and $K_{2}$ depending on $c$, $
\gamma $ and $\gamma _{1}$ such that
\begin{eqnarray}
&&a_{n}\Bigl |\sum_{j=1}^{m_{n}}\log \mathbb{E}\exp \Big (tS^{\prime }\big (
K_{I(n,j)}^{(\ell _{n})}\big )/\sqrt{a_n \sigma _{n}^2}\Big )
-\sum_{j=1}^{m_{n}}\sum_{i=1}^{2^{\ell _{n}}}\log \mathbb{E}\exp \Big (
tS^{\prime }\big (I_{\ell _{n},i}(p_{n},j)\big )/\sqrt{a_n \sigma _{n}^2}
\Big )\Bigr | \notag \label{dec2proofsg} \\
&\leq &K_{1}|t|\sqrt{a_{n}n}\log (n)\exp \Bigl(-K_{2}\big (\varepsilon
_{n}^{2}a_{n}n^{\gamma /(2-\gamma )}\big )^{\gamma /2}n^{\gamma (1-\gamma
)/(2-\gamma )}\Bigr)\,,
\end{eqnarray}
which converges to zero by the selection of $\varepsilon _{n}$.
Hence (\ref{ellisindsg}) holds if we prove that for any real $t$,
\begin{equation}
a_{n}\sum_{j=1}^{m_{n}}\sum_{i=1}^{2^{\ell _{n}}}\log \mathbb{E}\exp \Big (
tS^{\prime }\big (I_{\ell _{n},i}(p_{n},j)\big )/\sqrt{a_{n}\sigma _{n}^{2}}\Big )
\rightarrow \frac{t^{2}}{2}\text{ as }n\rightarrow \infty \,.
\label{ellisindsg2}
\end{equation}
With this aim, we first notice that, by the selection of $\ell _{n}$ and the
fact that $\varepsilon _{n}\rightarrow 0$,
\begin{equation}
\Vert S^{\prime }\big (I_{\ell _{n},i}(p_{n},j)\big )\Vert _{\infty }\leq
2T_{n}2^{-\ell _{n}}p_{n}=o(\sqrt{na_{n}})=o(\sqrt{\sigma _{n}^{2}a_{n}})\,.
\label{maj1arc}
\end{equation}
In addition, since $\lim_{n}V_{T_{n}}^{\prime \prime }=0$ and $
\liminf_{n}\sigma _{n}^{2}/n>0$, we have $\lim_{n}\sigma _{n}^{-2}\mathrm{
Var}\,S_{n}^{\prime }=1$. Notice that, by (\ref{construction}) and the fact
that $r_{n}=o(n)$,
\begin{equation*}
\mathrm{Var}\,S_{n}^{\prime }=\mathbb{E}\Bigl(\sum_{j=1}^{m_{n}}
\sum_{i=1}^{2^{\ell _{n}}}{S^{\prime }}(I_{\ell _{n},i}(p_{n},j))\Bigr)
^{2}+o(n)\text{ as }n\rightarrow \infty \,.
\end{equation*}
Also, a straightforward computation as in Remark \ref{remv2} shows that
under (\ref{hypoalpha}) and (\ref{hypoQ}) we have
\begin{equation*}
\mathbb{E}\Bigl(\sum_{j=1}^{m_{n}}\sum_{i=1}^{2^{\ell _{n}}}{S^{\prime }}(I_{\ell
_{n},i}(p_{n},j))\Bigr)^{2}=\sum_{j=1}^{m_{n}}
\sum_{i=1}^{2^{\ell _{n}}}\mathbb{E}\big ({S^{\prime }}^{2}\big (I_{\ell
_{n},i}(p_{n},j)\big )\big )+o(n)\text{ as }n\rightarrow \infty \,.
\end{equation*}
Hence
\begin{equation}
\lim_{n\rightarrow \infty }\sigma _{n}
^{-2}\sum_{j=1}^{m_{n}}\sum_{i=1}^{2^{\ell _{n}}}\mathbb{E}\big ({S^{\prime }
}^{2}\big (I_{\ell _{n},i}(p_{n},j)\big )\big )=1\,. \label{maj2arc}
\end{equation}
Consequently (\ref{ellisindsg2}) holds by taking into account (\ref{maj1arc}
) and (\ref{maj2arc}) and by using Lemma 2.3 in Arcones (2003). $\diamond $
\subsection{Proof of Corollary \protect\ref{thmMDPsubgeo}}
We have to show that
\begin{equation}
\lim_{n\rightarrow \infty }\mathrm{Var}(S_{n})/n=\sigma ^{2}>0\,.
\label{limvar}
\end{equation}
Proceeding as in the proof of Remark \ref{remv2}, we get that for any
positive $k$,
\begin{equation*}
|\mathrm{Cov}(X_{0},X_{k})|\leq 2\int_{0}^{G(\tau (k)/2)}Q^{2}(u)du\,,
\end{equation*}
which combined with (\ref{hypoalpha}) and (\ref{hypoQ}) implies that $
\sum_{k>0}k|\mathrm{Cov}(X_{0},X_{k})|<\infty $. This condition, together
with the fact that $\mathrm{Var}(S_{n})\rightarrow \infty $, entails (\ref
{limvar}) (see Lemma 1 in Bradley (1997)).
\section{Appendix}
\setcounter{equation}{0}
We first give the following decoupling inequality.
\begin{lma}
\label{lmainegatau} Let $Y_{1}$, ..., $Y_{p}$ be real-valued random
variables, each a.s. bounded by $M$. For every $i\in \lbrack 1,p]$, let ${
\mathcal{M}}_{i}=\sigma (Y_{1},...,Y_{i})$ and, for $i\geq 2$, let $
Y_{i}^{\ast }$ be a random variable independent of ${\mathcal{M}}_{i-1}$ and
distributed as $Y_{i}$. Then for any real $t$,
\begin{equation*}
|\mathbb{E}\exp \Big (t\sum_{i=1}^{p}Y_{i}\Big )-\prod_{i=1}^{p}\mathbb{E}
\exp (tY_{i})|\leq |t|\exp (|t|Mp)\sum_{i=2}^{p}\mathbb{E}|Y_{i}-Y_{i}^{\ast
}|\,.
\end{equation*}
In particular, we have for any real $t$,
\begin{equation*}
|\mathbb{E}\exp \Big (t\sum_{i=1}^{p}Y_{i}\Big )-\prod_{i=1}^{p}\mathbb{E}
\exp (tY_{i})|\leq |t|\exp (|t|Mp)\sum_{i=2}^{p}\tau (\sigma (Y_{1},\dots
,Y_{i-1}),Y_{i})\,,
\end{equation*}
where $\tau $ is defined by (\ref{deftau1}).
\end{lma}
\noindent \textbf{Proof of Lemma \ref{lmainegatau}}. Set $
U_{k}=Y_{1}+Y_{2}+\cdots +Y_{k}$. We first notice that
\begin{equation} \label{P1lmatau}
\mathbb{E}\bigl(e^{tU_{p}}\bigr)-\prod_{i=1}^{p}\mathbb{E}\bigl(e^{tY_{i}}
\bigr)=\sum_{k=2}^{p}\Bigl(\mathbb{E}\bigl(e^{tU_{k}}\bigr)-\mathbb{E}\bigl(
e^{tU_{k-1}}\bigr)\mathbb{E}\bigl(e^{tY_{k}}\bigr)\Bigr)\prod_{i=k+1}^{p}
\mathbb{E}\bigl(e^{tY_{i}}\bigr)
\end{equation}
with the convention that the product from $p+1$ to $p$ has value $1$. Now
\begin{equation*}
|\mathbb{E}\exp (tU_{k})-\mathbb{E}\exp (tU_{k-1})\mathbb{E}\exp
(tY_{k})|\leq \Vert \exp (tU_{k-1})\Vert _{\infty }\Vert \mathbb{E}\bigl(
e^{tY_{k}}-e^{tY_{k}^{\ast }}|{{\mathcal{M}}_{k-1}}\bigr)\Vert _{1}\,.
\end{equation*}
Using (\ref{AFexp}) we then derive that
\begin{equation} \label{Plmatau}
|\mathbb{E}\exp (tU_{k})-\mathbb{E}\exp (tU_{k-1})\mathbb{E}\exp
(tY_{k})|\leq |t|\exp (|t|kM)\Vert Y_{k}-Y_{k}^{\ast }\Vert _{1}\,.
\end{equation}
Since the variables are bounded by $M$, starting from (\ref{P1lmatau}) and
using (\ref{Plmatau}), the result follows. $\diamond $
One of the tools we use repeatedly is the technical lemma below, which
provides bounds for the log-Laplace transform of any sum of real-valued
random variables.
\begin{lma}
\label{breta} Let $Z_0 , Z_1 , \ldots $ be a sequence of real-valued random
variables. Assume that there exist positive constants $\sigma_0 , \sigma_1 ,
\ldots$ and $c_0, c_1 , \ldots $ such that, for any nonnegative $i$ and any $t$
in $[0, 1/c_i[$,
\begin{equation*}
\log \mathbb{E} \exp (tZ_i) \leq (\sigma_i t)^2 / (1-c_it) \, .
\end{equation*}
Then, for any positive $n$ and any $t$ in $[0, 1/(c_0 + c_1 + \cdots + c_n)[$
,
\begin{equation*}
\log \mathbb{E} \exp (t (Z_0 + Z_1 + \cdots + Z_n) ) \leq (\sigma t)^2 /
(1-Ct) ,
\end{equation*}
where $\sigma = \sigma_0 + \sigma_1 + \cdots + \sigma_n $ and $C= c_0 + c_1
+ \cdots + c_n$.
\end{lma}
\noindent \textbf{Proof of Lemma \ref{breta}}. Lemma \ref{breta} follows
from the case $n=1$ by induction on $n$. Let $L$ be the log-Laplace transform of $
Z_{0}+Z_{1}$. Define the functions $\gamma _{i}$ by
\begin{equation*}
\gamma _{i}(t)=(\sigma _{i}t)^{2}/(1-c_{i}t)\ \hbox{ for }t\in \lbrack
0,1/c_{i}[\ \hbox{ and }\gamma _{i}(t)=+\infty \ \hbox{ for }t\geq 1/c_{i}.
\end{equation*}
For $u$ in $]0,1[$, let $\gamma _{u}(t)=u\gamma _{1}(t/u)+(1-u)\gamma
_{0}(t/(1-u))$. From the H\"{o}lder inequality applied with $p=1/u$ and $
q=1/(1-u)$, we get that $L(t)\leq \gamma _{u}(t)$ for any nonnegative $t$.
Now, for $t$ in $[0,1/C[$, choose $u=(\sigma _{1}/\sigma )(1-Ct)+c_{1}t$
(here $C=c_{0}+c_{1}$ and $\sigma =\sigma _{0}+\sigma _{1}$). With this
choice, $1-u=(\sigma _{0}/\sigma )(1-Ct)+c_{0}t$, so that $u$ belongs to $
]0,1[$ and
\begin{equation*}
L(t)\leq \gamma _{u}(t)=(\sigma t)^{2}/(1-Ct),
\end{equation*}
which completes the proof of Lemma \ref{breta}. $\diamond $
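As a quick numerical sanity check (not part of the paper), the key algebraic step above can be verified directly: with the stated choice of $u$, the H\"{o}lder combination $u\gamma _{1}(t/u)+(1-u)\gamma _{0}(t/(1-u))$ collapses exactly to $(\sigma t)^{2}/(1-Ct)$. The Python sketch below uses arbitrary test constants.

```python
import math

def gamma_i(t, sigma_i, c_i):
    # gamma_i(t) = (sigma_i * t)^2 / (1 - c_i * t), defined on [0, 1/c_i[.
    assert 0.0 <= t < 1.0 / c_i
    return (sigma_i * t) ** 2 / (1.0 - c_i * t)

def check(sigma0, sigma1, c0, c1, t):
    sigma, C = sigma0 + sigma1, c0 + c1
    assert 0.0 <= t < 1.0 / C
    u = (sigma1 / sigma) * (1.0 - C * t) + c1 * t  # the choice made in the proof
    assert 0.0 < u < 1.0
    holder_bound = (u * gamma_i(t / u, sigma1, c1)
                    + (1.0 - u) * gamma_i(t / (1.0 - u), sigma0, c0))
    target = (sigma * t) ** 2 / (1.0 - C * t)
    return math.isclose(holder_bound, target, rel_tol=1e-12)

assert check(1.0, 2.0, 0.3, 0.5, 0.8)
assert check(0.7, 0.1, 1.0, 2.0, 0.25)
```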
\end{document} |
\begin{document}
\title{\textbf{Polynomial and exponential stability of $\theta$-EM approximations to
a class of stochastic differential equations}}
\author{Yunjiao Hu$^a$,\ Guangqiang Lan$^a$\footnote{Corresponding author. Email:
[email protected]. Supported by
China Scholarship Council, National Natural Science Foundation of
China (NSFC11026142) and Beijing Higher Education Young Elite Teacher Project (YETP0516).},\ Chong Zhang$^a$
\\ \small $^{a}$School of Science, Beijing University of Chemical Technology, Beijing 100029, China}
\date{}
\maketitle
\begin{abstract}
Both the mean square polynomial stability and the exponential stability
of $\theta$ Euler-Maruyama approximation solutions of stochastic
differential equations will be investigated for each $0\le\theta\le
1$ by using an auxiliary function $F$ (see definition
(\ref{dingyi}) below). Sufficient conditions are obtained to ensure the
polynomial and exponential stability of the numerical
approximations. The results in Liu et al \cite{LFM} will be improved
and extended to more general cases. Several examples and non-stability
results are presented to support our conclusions.
\end{abstract}
\noindent\textbf{MSC 2010:} 60H10, 65C30.
\noindent\textbf{Key words:} stochastic differential equation,
$\theta$ Euler-Maruyama approximation, polynomial stability,
exponential stability.
\section{Introduction}
\noindent
Let $(\Omega,\mathscr{F},P)$ be a probability space endowed with a
complete filtration $(\mathscr{F}_t)_{t\geq 0}$, and let
$d,m\in\mathbb{N}$ be arbitrarily fixed. We consider the following
stochastic differential equation (SDE)
\begin{equation}\label{sde}dX_t=f(X_t,t)dt+g(X_t,t)dB_t,\
X_0=x_0\in \mathbb{R}^d, \end{equation}
\noindent where the initial value $x_0\in \mathbb{R}^d$, $(B_t)_{t\geq0}$ is
an $m$-dimensional standard $\mathscr{F}_t$-Brownian motion, and
$f:(x,t)\in\mathbb{R}^d\times[0,\infty)\mapsto f(x,t)\in\mathbb{R}^d$
and $g:(x,t)\in\mathbb{R}^d\times[0,\infty)\mapsto g(x,t)\in
\mathbb{R}^d\otimes\mathbb{R}^m$ are both Borel measurable functions.
The corresponding $\theta$ Euler-Maruyama ($\theta$-EM) approximation
(or the so called stochastic theta method) of the above SDE is
\begin{equation}\label{SEM} X_{k+1}=X_k+[(1-\theta)f(X_k,k\Delta t)+
\theta f(X_{k+1},(k+1)\Delta t)]\Delta t+g(X_k,k\Delta t)\Delta B_k,\end{equation}
\noindent where $X_0:=x_0$, $\Delta t$ is a constant step size,
$\theta\in [0,1]$ is a fixed parameter, $\Delta B_k:=B((k+1)\Delta
t)-B(k\Delta t)$ is the increment of Brownian motion. Note that
$\theta$-EM includes the classical EM method ($\theta=0$), the
backward EM method ($\theta=1$) and the so-called trapezoidal method
($\theta=\frac{1}{2}$).
Throughout this paper, we simply assume that the coefficients $f$
and $g$ satisfy the following local Lipschitz condition:
For every integer $r\ge1$ and any $t\ge0$, there exists a positive
constant $\bar{K}_{r,t}$ such that for any $x,y\in\mathbb{R}^d$ with
$\max\{|x|,|y|\}\le r,$
\begin{equation}\label{local}
\max\{|f(x,t)-f(y,t)|,|g(x,t)-g(y,t)|\}\le \bar{K}_{r,t}|x-y|.
\end{equation}
Condition (\ref{local}) ensures that equation (\ref{sde})
has a unique solution, which is denoted by
$X_t(x_0)\in\mathbb{R}^d$ (this condition can be weakened to a more
general one, see e.g. \cite{Lan,Lan1}).
Stability theory is one of the central problems in numerical
analysis. The stability concepts of numerical approximation for SDEs
mainly include moment stability (M-stability) and almost sure
stability (trajectory stability). Results concerning different
kinds of stability analysis for numerical methods can be found in
many papers.
For example, Baker and Buckwar \cite{BB} dealt with the $p$-th
moment exponential stability of stochastic delay differential
equations when the coefficients are both globally Lipschitz
continuous, Higham \cite{Higham1,Higham2} considered the scalar
linear case, and Higham et al. \cite{HMS} treated the case of one-sided
Lipschitz and linear growth conditions. Other results concerned with moment
stability can be found in the Mao's monograph \cite{Mao}, Higham et
al \cite{HMY}, Zong et al \cite{ZW}, Pang et al \cite{PDM}, Szpruch
\cite{Szpruch} (for the so called $V$-stability) and references
therein.
For the almost sure stability of numerical approximations for SDEs,
obtained via the Borel-Cantelli lemma and the Chebyshev inequality, Wu et al
\cite{WMS} recently investigated the almost sure exponential stability of the
stochastic theta method by the continuous and discrete
semimartingale convergence theorems (see Rodkina and Schurz \cite{RS}
for details); Chen and Wu \cite{CW} and Mao and Szpruch \cite{MS}
also used the same method to prove the almost sure stability of
numerical approximations. However, \cite{CW,HMY,WMS} only dealt with
the case where the diffusion coefficient grows at most
linearly, that is, there exists $K>0$ such that
\begin{equation}\label{linear}
|g(x)|\le K|x|, \forall x\in \mathbb{R}^d.
\end{equation}
This condition excludes the case where the coefficient $g$ is
super-linearly growing (that is, $g(x)=C|x|^\gamma,\ \gamma>1$). In
Mao and Szpruch \cite{MS}, the authors examined the global almost sure
asymptotic stability of the $\theta$-EM scheme (\ref{SEM1}); they
presented a rather weak sufficient condition to ensure that the
$\theta$-EM solution is almost surely stable when
$\frac{1}{2}<\theta\le 1$, but they did not give the convergence rate
of the solution to zero explicitly. In \cite{ZW}, the authors
studied the mean square exponential stability of the $\theta$-EM scheme
systematically; they proved that if $0\le\theta<\frac{1}{2},$ the
$\theta$-EM scheme preserves mean square exponential stability under
the linear growth condition for both the drift term and the
diffusion term, while if $\frac{1}{2}<\theta\le 1,$ the $\theta$-EM scheme
preserves mean square exponential stability without the linear
growth condition on the drift term (the linear growth condition on
the diffusion term is still necessary); exponential stability in
the case $\theta=\frac{1}{2}$ is not studied there.
However, to the best of our knowledge, there are few results devoted
to the exponential stability of the numerical solutions when the
coefficient of the diffusion term does not satisfy the linear growth
condition, which is one of the main motivations of this work.
Recently, in \cite{LFM}, Liu et al examined the polynomial stability
of numerical solutions of SDEs (\ref{sde}). They considered the
polynomial stability of both the classical and backward
Euler-Maruyama approximations, under the condition that the diffusion
coefficient $g$ is bounded with respect to the variable $x$. This
excludes the case where $g$ is unbounded with respect to $x$, and it
immediately raises the question of whether we can relax this
condition. This is the other main motivation of this work.
To study the polynomial stability of equation (\ref{SEM}), we consider the
following condition:
\begin{equation}\label{c1}
2\langle x,f(x,t)\rangle+|g(x,t)|^2\le C(1+t)^{-K_1}- K_1(1+t)^{-1}|x|^2,
\forall t\ge0, x\in\mathbb{R}^d,
\end{equation}
where $K_1, C$ are positive constants with $K_1>1$, $\langle
\cdot,\cdot\rangle$ stands for the inner product in $\mathbb{R}^d$,
and $|\cdot|$ denotes both the Euclidean vector norm and the
Hilbert-Schmidt matrix norm.
To study the exponential stability of equation (\ref{SEM}), we need a stronger
condition on the coefficients:
\begin{equation}\label{c2}
2\langle x,f(x,t)\rangle+|g(x,t)|^2\le -C|x|^2, \forall x\in\mathbb{R}^d,
\end{equation}
where $C>0$ is a constant.
Define an operator $L$ by
$$\aligned LV(x,t):&=\frac{\partial}{\partial t}V(x,t)+\sum_{i=1}^df^i(x,t)
\frac{\partial}{\partial x_i}V(x,t)\\&
\quad+\frac{1}{2}\sum_{i,j=1}^d\sum_{k=1}^m g^{ik}(x,t)g^{jk}(x,t)\frac{\partial^2}
{\partial x_i\partial x_j}V(x,t),\endaligned$$
where $V(x,t): \mathbb{R}^d\times\mathbb{R}^+\rightarrow\mathbb{R}^+$ has continuous
second-order partial derivatives in $x$ and first-order partial derivatives in $t.$
It is clear that under conditions (\ref{local}) and (\ref{c1}) (or
(\ref{c2})), there exists a unique global solution of equation
(\ref{sde}). By taking $V(x,t)=(1+t)^m|x|^2$ or $V(x,t)=|x|^2,$
respectively, it is easy to see that under condition (\ref{c1}) the
true solution $X_t(x_0)$ of equation (\ref{sde}) is mean square
polynomially stable (see Liu and Chen \cite{LC} Theorem 1.1), and mean
square exponentially stable under condition (\ref{c2}) (the proof is
the same as in Higham et al, see \cite{HMY} Appendix A). So a natural
question arises: whether the $\theta$-EM method can reproduce the
polynomial and exponential stability of the solution of (\ref{sde}).
If $\frac{1}{2} <\theta\le 1$, we will study the polynomial
stability and exponential stability of the $\theta$-EM scheme
(\ref{SEM}) under conditions (\ref{c1}) and (\ref{c2}) respectively.
For the exponential stability, we first investigate the mean square
exponential stability, then we derive the almost sure exponential
stability by Borel-Cantelli lemma.
If $0\le\theta\le\frac{1}{2},$ then besides condition (\ref{c1})
(respectively, (\ref{c2})), a linear growth condition on the drift
term is also needed to ensure the corresponding stability, that is,
there exists $K>0$ such that
\begin{equation}\label{growth}
|f(x,t)|\le K(1+t)^{-\frac{1}{2}}|x|
\end{equation}
for polynomial stability case and
\begin{equation}\label{growth1}
|f(x,t)|\le K|x|, \forall x\in \mathbb{R}^d
\end{equation}
for exponential stability case. Notice that condition (\ref{growth})
is strictly weaker than condition (2.4) in \cite{LFM}.
The main feature of this paper is that we consider conditions in
which the diffusion and drift coefficients are involved jointly; these
yield weaker sufficient conditions than the known ones, whereas most
preceding studies imposed separate conditions on the diffusion
and drift coefficients.
The rest of the paper is organized as follows. In Section 2, we give
some lemmas which will be used in the following sections to prove
the stability results. In Section 3 we study the polynomial
stability of the $\theta$-EM scheme. Our method hinges on various
properties of the gamma function and the ratios of gamma functions.
We show that when $\frac{1}{2}<\theta\le 1$, the polynomial
stability of the $\theta$-EM scheme holds under condition (\ref{c1})
plus one sided Lipschitz condition on $f$; when
$0\le\theta\le\frac{1}{2},$ the linear growth condition for the
drift term $f$ is also needed. In Section 4, we investigate the
exponential stability of the $\theta$-EM scheme for all
$0\le\theta\le 1$. Finally, we give in Section 5 some non-stability
results and counterexamples to support our conclusions.
\section{Preliminaries}
To ensure that the semi-implicit $\theta$-EM scheme is well
defined, we need the first two lemmas. The first lemma gives the
existence and uniqueness of the solution of the equation $F(x)=b;$
we can then prove the existence and uniqueness of the solution of the
$\theta$-EM scheme based on this lemma.
\begin{Lemma}\label{l1}
Let $F$ be a vector field on $\mathbb{R}^d$ and consider the equation
\begin{equation}\label{e1}
F(x)=b
\end{equation}
for a given $b\in\mathbb{R}^d$. If $F$ is monotone, that is,
$$\langle x-y, F(x)-F(y)\rangle>0$$
for all $x,y\in\mathbb{R}^d,x\neq y$, and $F$ is continuous and coercive, that is,
$$\lim_{|x|\rightarrow\infty}\frac{\langle x, F(x)\rangle}{|x|}=\infty,$$
then for every $b\in\mathbb{R}^d,$ equation (\ref{e1}) has a unique solution $x\in\mathbb{R}^d$.
\end{Lemma}
This lemma follows directly from Theorem 26.A in \cite{Zeidler}.
Consider the following one sided Lipschitz condition on $f$: There exists $L>0$ such that
\begin{equation}\label{c3}
\langle x-y, f(x,t)-f(y,t)\rangle\le L|x-y|^2, \quad \forall x,y\in\mathbb{R}^d,\ t\ge0.
\end{equation}
\begin{Lemma}\label{l2}
Define \begin{equation}\label{dingyi}F(x,t):=x-\theta\Delta t
f(x,t), \forall t>0, x\in\mathbb{R}^d.\end{equation} Assume that
conditions (\ref{c1}) and (\ref{c3}) hold and that $\Delta t$ is small enough,
namely $\Delta t<\frac{1}{\theta L}$. Then for any $t>0$ and
$b\in\mathbb{R}^d,$ there is a unique solution of the equation
$F(x,t)=b.$
\end{Lemma}
By this lemma, we know that the $\theta$-EM scheme is well defined
under conditions (\ref{c1}) and (\ref{c3}) for $\Delta t$ small
enough.
The proof of Lemma \ref{l2} is the same as that of Lemma 3.4 in
\cite{LFM} and Lemma 3.3 in \cite{MS2}; just notice that condition
(\ref{c3}) implies $\langle x-y,F(x,t)-F(y,t)\rangle>0$ for
$\Delta t<\frac{1}{\theta L}$, and that (\ref{c1}) (or (\ref{c2})) implies $\langle
x,F(x,t)\rangle/|x|\rightarrow\infty$ as $|x|\rightarrow\infty$. Notice
also that our condition (\ref{c3}) is weaker than (2.3) in
\cite{LFM}.
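To illustrate Lemma \ref{l2} concretely (this sketch is not from the paper): each $\theta$-EM step requires solving the implicit equation $F(x)=b$. Below, the drift $f(x)=-x^3$ and diffusion $g(x)=x$ are illustrative scalar choices, with $f$ one-sided Lipschitz (condition (\ref{c3}) with $L=0$); here $F(\cdot,t)$ is strictly increasing and coercive, so bisection finds the unique root.

```python
def f(x):
    # Illustrative drift: one-sided Lipschitz (condition (c3)) with L = 0.
    return -x ** 3

def solve_F(b, theta, dt, lo=-1e3, hi=1e3, tol=1e-12):
    # Solve F(x) = x - theta*dt*f(x) = b by bisection.
    # F is strictly increasing and coercive here, so the root is unique (Lemma l1).
    F = lambda x: x - theta * dt * f(x)
    assert F(lo) < b < F(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < b:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def theta_em_step(x_k, dB, theta=1.0, dt=0.01):
    # One step of (SEM): X_{k+1} = X_k + [(1-theta) f(X_k) + theta f(X_{k+1})] dt
    #                            + g(X_k) dB, with illustrative diffusion g(x) = x.
    b = x_k + (1.0 - theta) * f(x_k) * dt + x_k * dB
    return solve_F(b, theta, dt)

# The computed step satisfies the implicit relation F(X_{k+1}) = b.
x1 = theta_em_step(2.0, 0.05)            # theta = 1, dt = 0.01, so b = 2.1
assert abs(x1 + 0.01 * x1 ** 3 - 2.1) < 1e-9
x2 = theta_em_step(1.0, -0.02, theta=0.5, dt=0.05)
assert abs(x2 + 0.025 * x2 ** 3 - 0.955) < 1e-9
```

In practice one would replace the bisection by a Newton iteration, but the point of Lemma \ref{l2} is exactly that the root exists and is unique, so any bracketing method converges.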
We also need the following two lemmas to study the polynomial stability
of the $\theta$-EM scheme.
\begin{Lemma}\label{l3}
Given $\alpha>0$ and $\beta\ge 0,$ if $\delta$ satisfies
$0<\delta<\alpha^{-1}$, then
$$\prod_{i=a}^b\Big(1-\frac{\alpha\delta}{1+(i+\beta)\delta}\Big)=
\frac{\Gamma(b+1+\delta^{-1}+\beta-\alpha)}{\Gamma(b+1+\delta^{-1}+\beta)}
\times\frac{\Gamma(a+\delta^{-1}+\beta)}{\Gamma(a+\delta^{-1}+\beta-\alpha)},$$
where $0\le a\le b,$ $\Gamma(x):=\int_0^\infty y^{x-1}e^{-y}dy.$
\end{Lemma}
\begin{Lemma}\label{l4}
For any $x>0,$ if $0<\eta<1,$ then
$$\frac{\Gamma(x+\eta)}{\Gamma(x)}<x^\eta,$$
and if $\eta>1,$ then
$$\frac{\Gamma(x+\eta)}{\Gamma(x)}>x^\eta.$$
\end{Lemma}
The proof of Lemmas \ref{l3} and \ref{l4} could be found in \cite{LFM}.
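Both lemmas are easy to spot-check numerically (the check below is not from the paper; the test parameters are arbitrary). Lemma \ref{l3} is a telescoping identity, since each factor equals $(i+\delta^{-1}+\beta-\alpha)/(i+\delta^{-1}+\beta)$, and Lemma \ref{l4} is a Gautschi-type bound on gamma-function ratios.

```python
import math

def lhs_product(a, b, alpha, beta, delta):
    # Left-hand side of Lemma l3: the finite product over i = a, ..., b.
    prod = 1.0
    for i in range(a, b + 1):
        prod *= 1.0 - alpha * delta / (1.0 + (i + beta) * delta)
    return prod

def rhs_gamma(a, b, alpha, beta, delta):
    # Right-hand side of Lemma l3, computed via log-gamma to avoid overflow.
    c = 1.0 / delta + beta
    lg = math.lgamma
    return math.exp(lg(b + 1 + c - alpha) - lg(b + 1 + c)
                    + lg(a + c) - lg(a + c - alpha))

# Lemma l3: the product equals the gamma-function ratio.
for (a, b, alpha, beta, delta) in [(0, 10, 1.5, 0.0, 0.2), (3, 25, 2.0, 1.0, 0.1)]:
    assert math.isclose(lhs_product(a, b, alpha, beta, delta),
                        rhs_gamma(a, b, alpha, beta, delta), rel_tol=1e-10)

# Lemma l4: Gamma(x+eta)/Gamma(x) < x**eta for 0 < eta < 1, > x**eta for eta > 1.
for x in [0.5, 2.0, 17.3]:
    assert math.gamma(x + 0.4) / math.gamma(x) < x ** 0.4
    assert math.gamma(x + 2.5) / math.gamma(x) > x ** 2.5
```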
\section{Polynomial stability of $\theta$-EM solution (\ref{SEM})}
We are now in a position to state the polynomial stability of the $\theta$-EM solution
(\ref{SEM}). First, we consider the case $\frac{1}{2}<\theta\le 1.$ We have the following.
\begin{Theorem}\label{polynomial}
Assume that conditions (\ref{c1}) and (\ref{c3}) hold. If
$\frac{1}{2}<\theta\le 1,$ then for any $0<\varepsilon<K_1-1$, we
can choose $\Delta t$ small enough such that the $\theta$-EM
solution satisfies
\begin{equation}\label{bu}
\limsup_{k\rightarrow \infty}\frac{\log \mathbb{E}|X_k|^2}{\log
k\Delta t}\le -(K_1-1-\varepsilon)
\end{equation}
for any initial value $X_0=x_0\in \mathbb{R}^d.$
\end{Theorem}
\textbf{Proof} We first prove that condition (\ref{c1}) implies that,
for $\Delta t$ small enough,
\begin{equation}\label{ineq}
2\langle x,f(x,t)\rangle+|g(x,t)|^2+(1-2\theta)\Delta t|f(x,t)|^2\le
C(1+t)^{-K_1}-(K_1-\varepsilon)(1+t)^{-1}|F(x,t)|^2 \end{equation}
holds for all $t\ge0$ and $x\in\mathbb{R}^d.$ Here and in the
following, $F$ is defined by (\ref{dingyi}).
In fact, we only need to show that
$$(2\theta-1)\Delta t|f(x,t)|^2- (K_1-\varepsilon)(1+t)^{-1}|F(x,t)|^2\ge - K_1(1+t)^{-1}|x|^2.$$
Indeed, by the definition of $F(x,t),$ we have
$$\aligned &\quad\ (2\theta-1)\Delta t|f(x,t)|^2- (K_1-\varepsilon)(1+t)^{-1}|F(x,t)|^2\\&
=(2\theta-1)\Delta t|f(x,t)|^2-
(K_1-\varepsilon)(1+t)^{-1}[|x|^2-2\theta\Delta t\langle x,
f(x,t)\rangle+\theta^2\Delta t^2|f(x,t)|^2]\\& =[(2\theta-1)\Delta
t-(K_1-\varepsilon)(1+t)^{-1}\theta^2\Delta
t^2]|f(x,t)|^2\\&\quad+2(K_1-\varepsilon)(1+t)^{-1}\theta\Delta
t\langle x,f(x,t)\rangle- (K_1-\varepsilon)(1+t)^{-1}|x|^2\\&
=a|f(x,t)+bx|^2-(ab^2+(K_1-\varepsilon)(1+t)^{-1})|x|^2,\endaligned
$$
where
$$a:=(2\theta-1)\Delta t-(K_1-\varepsilon)(1+t)^{-1}\theta^2\Delta t^2,\quad
b:=\frac{(K_1-\varepsilon)(1+t)^{-1}\theta\Delta t}{a}.$$
Since
$$a\ge (2\theta-1)\Delta t-(K_1-\varepsilon)\theta^2\Delta t^2
=\Delta t(2\theta-1-(K_1-\varepsilon)\theta^2\Delta t),$$ we can
choose $\Delta t$ small enough (for example $\Delta t\le
\min\{\frac{1}{\theta L},\frac{(2\theta-1)(\varepsilon\wedge
1)}{K_1(K_1-\varepsilon)\theta^2}\}$) such that $a\ge 0$ and
$ab^2\le \frac{\varepsilon}{1+t}.$ Then we have
$$\aligned \ (2\theta-1)\Delta t|f(x,t)|^2- (K_1-\varepsilon)(1+t)^{-1}|F(x,t)|^2&
\ge -(ab^2+(K_1-\varepsilon)(1+t)^{-1})|x|^2\\&
\ge -K_1(1+t)^{-1}|x|^2.\endaligned
$$
This completes the proof of inequality (\ref{ineq}).
Now by the definition of $F(x,t)$, it follows that
$$F(X_{k+1},(k+1)\Delta t)=F(X_{k},k\Delta t)+f(X_k,k\Delta t)\Delta t
+g(X_k,k\Delta t)\Delta B_k.$$
So we have
\begin{equation}\label{F}\aligned |F(X_{k+1},(k+1)\Delta t)|^2&=
[2\langle X_k,f(X_k,k\Delta t)\rangle+|g(X_{k},k\Delta t)|^2+(1-2\theta)
|f(X_{k},k\Delta t)|^2\Delta t]\Delta t\\&\quad+|F(X_{k},k\Delta t)|^2+M_k,\endaligned\end{equation}
where
\begin{equation}\label{M}\aligned M_k&:=|g(X_{k},k\Delta t)\Delta B_k|^2
-|g(X_{k},k\Delta t)|^2\Delta t+2\langle F(X_{k},k\Delta t),g(X_{k},k\Delta t)\Delta B_k\rangle\\&
\quad+2\langle f(X_{k},k\Delta t)\Delta t,g(X_{k},k\Delta t)\Delta B_k\rangle.\endaligned\end{equation}
Notice that
$$\mathbb{E}(M_k|\mathscr{F}_{k\Delta t})=0.$$
Then by condition (\ref{c1}) and inequality (\ref{ineq}), we have
$$ \mathbb{E}(|F(X_{k+1},(k+1)\Delta t)|^2|\mathscr{F}_{k\Delta t})\le
(1-\frac{(K_1-\varepsilon)\Delta t}{1+k\Delta t})|F(X_{k},k\Delta t)|^2
+\frac{C\Delta t}{(1+k\Delta t)^{K_1-\varepsilon}}.$$
We can get by iteration that
$$\aligned\mathbb{E}(|F(X_{k},k\Delta t)|^2)&\le \Big(\prod_{i=0}^{k-1}
(1-\frac{(K_1-\varepsilon)\Delta t}{1+i\Delta t})\Big)|F(x_0,0)|^2\\&\quad
+\sum_{r=0}^{k-1}\Big(\prod_{i=r+1}^{k-1}(1-\frac{(K_1-\varepsilon)\Delta t}{1+i\Delta t})\Big)
\frac{C\Delta t}{(1+r\Delta t)^{K_1-\varepsilon}}.\endaligned$$
Then by Lemma \ref{l3},
\begin{equation}\label{budeng}\aligned\mathbb{E}(|F(X_{k},k\Delta t)|^2)&\le
\frac{\Gamma(k+\frac{1}{\Delta t}-(K_1-\varepsilon))\Gamma(\frac{1}{\Delta t})}{\Gamma(k+\frac{1}{\Delta t})
\Gamma(\frac{1}{\Delta t}-(K_1-\varepsilon))}|F(x_0,0)|^2\\&\quad
+C\Delta t\sum_{r=0}^{k-1}\frac{\Gamma(k+\frac{1}{\Delta t}-(K_1-\varepsilon))
\Gamma(r+1+\frac{1}{\Delta t})}{\Gamma(k+\frac{1}{\Delta t})\Gamma(r+1+\frac{1}{\Delta t}
-(K_1-\varepsilon))}(1+r\Delta t)^{-(K_1-\varepsilon)}.\endaligned\end{equation}
On the other hand, since $K_1-\varepsilon>1,$ by Lemma \ref{l4} or
\cite{LFM} one can see that
\begin{equation}\label{kongzhi1}\frac{\Gamma(k+\frac{1}{\Delta t}-(K_1-\varepsilon))
\Gamma(\frac{1}{\Delta t})}{\Gamma(k+\frac{1}{\Delta t})\Gamma(\frac{1}{\Delta t}-
(K_1-\varepsilon))}\le ((k-(K_1-\varepsilon))\Delta t+1)^{-(K_1-\varepsilon)}\end{equation}
and that
\begin{equation}\label{kongzhi2}\frac{\Gamma(k+\frac{1}{\Delta t}-(K_1-\varepsilon))
\Gamma(r+1+\frac{1}{\Delta t})}{\Gamma(k+\frac{1}{\Delta t})\Gamma(r+1+\frac{1}{\Delta t}-
(K_1-\varepsilon))}\le ((k-(K_1-\varepsilon))\Delta t+1)^{-(K_1-\varepsilon)}((r+1)\Delta t+1)^{K_1-\varepsilon}.\end{equation}
Substituting (\ref{kongzhi1}) and (\ref{kongzhi2}) into inequality (\ref{budeng}) yields
\begin{equation}\aligned\mathbb{E}(|F(X_{k},k\Delta t)|^2)&\le((k-(K_1-\varepsilon))\Delta t+1)^{-(K_1-\varepsilon)}
|F(x_0,0)|^2\\&\quad+C\Delta
t\sum_{r=0}^{k-1}((k-(K_1-\varepsilon))\Delta
t+1)^{-(K_1-\varepsilon)} \frac{((r+1)\Delta
t+1)^{K_1-\varepsilon}}{(1+r\Delta t)^{K_1-\varepsilon}}\\&
\le2^{K_1-\varepsilon}(k\Delta
t+1)^{-(K_1-\varepsilon)}\Big[|F(x_0,0)|^2+C\Delta t\sum_{r=0}^{k-1}
\Big(\frac{(r+1)\Delta t+1}{1+r\Delta
t}\Big)^{K_1-\varepsilon}\Big]\\&\le2^{K_1-\varepsilon}(k\Delta
t+1)^{-(K_1-\varepsilon)} [|F(x_0,0)|^2+C\cdot
2^{K_1-\varepsilon}k\Delta t]\\&\le
2^{K_1-\varepsilon}(|F(x_0,0)|^2+C\cdot 2^{K_1-\varepsilon})
(k\Delta t+1)^{-(K_1-\varepsilon)+1}.\endaligned\end{equation}
We have used the fact that $(k-(K_1-\varepsilon))\Delta t+1\ge
\frac{1}{2}(k\Delta t+1)$ for small $\Delta t$ in the second inequality
and that $((r+1)\Delta t+1)/(1+r\Delta t)\le 2$ in the third
inequality.
Now by condition (\ref{c1}),
$$\aligned |F(x,t)|^2&=|x|^2-2\theta\Delta t\langle x,f(x,t)\rangle+\theta^2\Delta t^2|f(x,t)|^2\\&
\ge |x|^2- C(1+t)^{-K_1}\theta\Delta t+ K_1(1+t)^{-1}|x|^2\theta\Delta t+\theta^2\Delta t^2|f(x,t)|^2\\&
\ge |x|^2- C(1+t)^{-K_1}\theta\Delta t\ge |x|^2- C(1+t)^{-(K_1-\varepsilon)}\theta\Delta t.\endaligned$$
Therefore, for small enough $\Delta t,$
$$\aligned\mathbb{E}(|X_{k}|^2)&\le\mathbb{E}(|F(X_{k},k\Delta t)|^2)+C(1+k\Delta t)^{-(K_1-\varepsilon)}\theta\Delta t\\&
\le 2^{K_1-\varepsilon}(|F(x_0,0)|^2+C\cdot
2^{K_1-\varepsilon})(k\Delta t+1)^{-(K_1-\varepsilon)+1}+C(1+k\Delta
t)^{-(K_1-\varepsilon)}\theta\Delta t\\& \le
2^{K_1-\varepsilon}(|F(x_0,0)|^2+C\cdot2^{K_1-\varepsilon}+C\theta)(k\Delta
t+1)^{-(K_1-\varepsilon)+1}.\endaligned$$ Namely, the $\theta$-EM
solution of (\ref{sde}) is mean square polynomially stable with rate
no greater than $-(K_1-1-\varepsilon)$ when $\frac{1}{2}<\theta\le
1$ and $\Delta t$ is small enough.
This completes the proof. $\square$
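The iteration bound above rests on the product identity of Lemma \ref{l3}, namely $\prod_{i=0}^{k-1}(1-\frac{c\Delta t}{1+i\Delta t})=\frac{\Gamma(k+\frac{1}{\Delta t}-c)\Gamma(\frac{1}{\Delta t})}{\Gamma(k+\frac{1}{\Delta t})\Gamma(\frac{1}{\Delta t}-c)}$ with $c=K_1-\varepsilon$. A quick numerical sanity check of this identity; the values of $c$ and $\Delta t$ below are arbitrary illustrative choices, not taken from the paper:

```python
import math

def lhs(c, dt, k):
    """Direct product prod_{i=0}^{k-1} (1 - c*dt / (1 + i*dt))."""
    p = 1.0
    for i in range(k):
        p *= 1.0 - c * dt / (1.0 + i * dt)
    return p

def rhs(c, dt, k):
    """Gamma-ratio form of the same product, via log-gamma for stability."""
    n = 1.0 / dt
    return math.exp(math.lgamma(k + n - c) + math.lgamma(n)
                    - math.lgamma(k + n) - math.lgamma(n - c))

c, dt = 1.5, 0.01    # c plays the role of K_1 - epsilon (illustrative values)
for k in (1, 10, 200):
    assert abs(lhs(c, dt, k) - rhs(c, dt, k)) < 1e-10
```

Each factor equals $(i+\frac{1}{\Delta t}-c)/(i+\frac{1}{\Delta t})$, so the product telescopes into the stated ratio of Gamma functions.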
\begin{Remark}
Notice that we cannot let $\varepsilon\rightarrow0$ in (\ref{bu})
since $\Delta t$ depends on $\varepsilon.$ Moreover, our condition
(\ref{c1}) covers conditions (2.5) and (2.6) of \cite{LFM} for the
polynomial stability of the backward EM approximation of SDE
(\ref{sde}), although not entirely: they require $K_1>0.5$, whereas
we need $K_1>1$.
\end{Remark}
Now let us consider the case $0\le\theta\le\frac{1}{2}.$ We have
\begin{Theorem}\label{polynomial1}
Assume that conditions (\ref{c1}), (\ref{growth}) and (\ref{c3})
hold. If $0\le\theta\le\frac{1}{2},$ then for any
$0<\varepsilon<K_1-1$, we can choose $\Delta t$ small enough such
that the $\theta$-EM solution satisfies
\begin{equation}
\limsup_{k\rightarrow \infty}\frac{\log \mathbb{E}|X_k|^2}{\log
k\Delta t}\le -(K_1-1-\varepsilon)
\end{equation}
for any initial value $X_0=x_0\in \mathbb{R}^d.$
\end{Theorem}
\textbf{Proof}
Notice that in this case
$$\aligned &\quad\ (2\theta-1)\Delta t|f(x,t)|^2- (K_1-\varepsilon)(1+t)^{-1}|F(x,t)|^2\\&
=(2\theta-1)\Delta t|f(x,t)|^2-
(K_1-\varepsilon)(1+t)^{-1}[|x|^2-2\theta\Delta t\langle
x,f(x,t)\rangle +\theta^2\Delta t^2|f(x,t)|^2]\\&
=[(2\theta-1)\Delta t-(K_1-\varepsilon)(1+t)^{-1}\theta^2\Delta
t^2]|f(x,t)|^2\\&\quad+2 (K_1-\varepsilon) (1+t)^{-1}\theta\Delta
t\langle x,f(x,t)\rangle- (K_1-\varepsilon)(1+t)^{-1}|x|^2\\& \ge
aK^2(1+t)^{-1}|x|^2-2
K(K_1-\varepsilon)(1+t)^{-\frac{3}{2}}\theta\Delta
t|x|^2-(K_1-\varepsilon)(1+t)^{-1}|x|^2,\endaligned
$$
where
$$a:=(2\theta-1)\Delta t-(K_1-\varepsilon)(1+t)^{-1}\theta^2\Delta t^2\le 0.$$
Thus, we can choose $\Delta t$ small enough such that
$$aK^2(1+t)^{-1}-2 KK_1(1+t)^{-\frac{3}{2}}\theta\Delta t-(K_1-\varepsilon)(1+t)^{-1}\ge -K_1(1+t)^{-1}.$$
Therefore, by condition (\ref{c1}), we have
$$ \mathbb{E}(|F(X_{k+1},(k+1)\Delta t)|^2|\mathscr{F}_{k\Delta t})\le (1-\frac{(K_1-\varepsilon)
\Delta t}{1+k\Delta t})|F(X_{k},k\Delta t)|^2+\frac{C\Delta t}{(1+k\Delta t)^{K_1-\varepsilon}}.$$
Then by the same argument as in the proof of Theorem \ref{polynomial}, we have
$$\aligned |F(x,t)|^2&=|x|^2-2\theta\Delta t\langle x,f(x,t)\rangle+\theta^2\Delta t^2|f(x,t)|^2\\&
\ge |x|^2- C(1+t)^{-K_1}\theta\Delta t+ K_1(1+t)^{-1}|x|^2\theta\Delta t\\&
\ge |x|^2- C(1+t)^{-K_1}\theta\Delta t\ge |x|^2- C(1+t)^{-(K_1-\varepsilon)}\theta\Delta t.\endaligned$$
Therefore, for small enough $\Delta t,$ we can derive in the same way as in the proof of Theorem \ref{polynomial} that
$$\aligned\mathbb{E}(|X_{k}|^2)&\le\mathbb{E}(|F(X_{k},k\Delta t)|^2)+C(1+k\Delta t)^{-(K_1-\varepsilon)}\theta\Delta t\\&
\le 2^{K_1-\varepsilon}(|F(x_0,0)|^2+C\cdot
2^{K_1-\varepsilon})(k\Delta t+1)^{-(K_1-\varepsilon)+1}+C(1+k\Delta
t)^{-(K_1-\varepsilon)}\theta\Delta t\\& \le
2^{K_1-\varepsilon}(|F(x_0,0)|^2+C\cdot2^{K_1-\varepsilon}+C\theta)(k\Delta
t+1)^{-(K_1-\varepsilon)+1}.\endaligned$$
Namely, the $\theta$-EM solution of (\ref{sde}) is mean square
polynomially stable with rate no greater than $-(K_1-1-\varepsilon)$
when $0\le\theta\le\frac{1}{2}$ and $\Delta t$ is small enough.
This completes the proof. $\square$
\begin{Remark}
In Condition 2.3 of \cite{LFM}, the authors gave sufficient
conditions on the coefficients $f$ and $g$ separately for the
polynomial stability of the classical EM scheme. If their conditions
(2.5) and (2.6) hold for $K_1>1$ and $C>0,$ then it is easy to see
that our condition (\ref{c1}) holds automatically for the same $K_1$
and $C,$ and our condition (\ref{growth}) is strictly weaker than
their (2.4). Therefore, we have improved the result of Liu et al.\
and generalized it to $0\le\theta\le\frac{1}{2}.$
\end{Remark}
\section{Exponential stability of $\theta$-EM solution (\ref{SEM})}
Now let us consider the exponential stability of $\theta$-EM solution of (\ref{sde}).
When SDE (\ref{sde}) reduces to the time homogeneous case, that is,
\begin{equation}\label{sde1}dX_t=f(X_t)dt+g(X_t)dB_t,\quad
X_0=x_0\in \mathbb{R}^d\ a.s.,\end{equation}
the corresponding $\theta$-EM approximation becomes
\begin{equation}\label{SEM1} X_{k+1}=X_k+[(1-\theta)f(X_k)+\theta f(X_{k+1})]\Delta t
+g(X_k)\Delta B_k.\end{equation}
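For a scalar linear test equation $dX_t=aX_t\,dt+\sigma X_t\,dB_t$ the implicit relation in (\ref{SEM1}) can be solved for $X_{k+1}$ in closed form, which gives a minimal sketch of the scheme. The parameters, function name, and Monte Carlo check below are illustrative assumptions, not taken from the paper:

```python
import random

def theta_em_linear(a, sigma, theta, dt, x0, n_steps, rng):
    """theta-EM for the linear test SDE dX = a*X dt + sigma*X dB: the
    implicit term theta*a*X_{k+1}*dt is linear in X_{k+1}, so each step
    is solvable in closed form."""
    x = x0
    for k in range(n_steps):
        dB = rng.gauss(0.0, dt ** 0.5)
        # X_{k+1} = X_k + [(1-theta)*a*X_k + theta*a*X_{k+1}]*dt + sigma*X_k*dB
        x = (x * (1.0 + (1.0 - theta) * a * dt) + sigma * x * dB) / (1.0 - theta * a * dt)
    return x

rng = random.Random(0)
a, sigma, theta, dt = -2.0, 0.5, 1.0, 0.1     # illustrative parameters
msq = sum(theta_em_linear(a, sigma, theta, dt, 1.0, 50, rng) ** 2
          for _ in range(2000)) / 2000
assert msq < 1e-3   # mean-square decay from |x_0|^2 = 1
```

Here the per-step mean-square factor is $\big((1+(1-\theta)a\Delta t)^2+\sigma^2\Delta t\big)/(1-\theta a\Delta t)^2<1$ for the chosen parameters, so $\mathbb{E}|X_k|^2$ decays geometrically.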
In \cite{MS}, Mao and Szpruch gave a sufficient condition ensuring
that the almost sure stability of $\theta$-EM solution of
(\ref{sde1}) holds in the case that $\frac{1}{2}<\theta\le 1$.
However, they did not reveal the rate of convergence. Their proof is
mainly based on the discrete semimartingale convergence theorem. We
will study the exponential stability systematically for
$0\le\theta\le 1$ for the time inhomogeneous case. We first prove
the mean square exponential stability, and then we prove the almost
sure stability by the Borel-Cantelli lemma.
\begin{Theorem}\label{exponential}
Assume that conditions (\ref{c2}) and (\ref{c3}) hold. Then for any
$\frac{1}{2}<\theta\le 1$ and $0<\varepsilon<1$, we can choose
$\Delta t$ small enough such that the $\theta$-EM solution satisfies
\begin{equation}
\limsup_{k\rightarrow \infty}\frac{\log \mathbb{E}|X_k|^2}{k\Delta t}\le -C(1-\varepsilon)
\end{equation}
for any initial value $X_0=x_0\in \mathbb{R}^d$ and
\begin{equation}\label{ex}
\limsup_{k\rightarrow \infty}\frac{\log |X_k|}{k\Delta t}\le -\frac{C(1-\varepsilon)}{2}\quad a.s.
\end{equation}
\end{Theorem}
\textbf{Proof} Define $F(x,t)$ as in Lemma \ref{l2}. We have
$$\aligned &\quad\ (2\theta-1)\Delta t|f(x,t)|^2-C(1-\varepsilon)|F(x,t)|^2\\&
=(2\theta-1)\Delta t|f(x,t)|^2- C(1-\varepsilon)[|x|^2-2\theta\Delta t\langle x,f(x,t)\rangle
+\theta^2\Delta t^2|f(x,t)|^2]\\&
=[(2\theta-1)\Delta t-C\theta^2\Delta t^2(1-\varepsilon)]|f(x,t)|^2+2C\theta\Delta t
(1-\varepsilon)\langle x,f(x,t)\rangle-C(1-\varepsilon)|x|^2\\&
=a|f(x,t)+bx|^2-(ab^2+C(1-\varepsilon))|x|^2,\endaligned
$$
where
$$a:=(2\theta-1)\Delta t-C\theta^2\Delta t^2(1-\varepsilon), \quad
b:=\frac{C\theta\Delta t(1-\varepsilon)}{a}.$$
We can choose $\Delta t$ small enough (for example $\Delta t\le \min
\{\frac{1}{\theta L},\frac{\varepsilon(2\theta-1)}{C(1-\varepsilon)\theta^2}\}$)
such that $a\ge 0$ and $ab^2\le C\varepsilon$, and therefore
$$(2\theta-1)\Delta t|f(x,t)|^2-C(1-\varepsilon)|F(x,t)|^2\ge -C|x|^2.$$
Then by condition (\ref{c2}), we can prove that
\begin{equation}\label{ineq1}
2\langle x,f(x,t)\rangle+|g(x,t)|^2+(1-2\theta)\Delta t|f(x,t)|^2\le
-C(1-\varepsilon)|F(x,t)|^2 \end{equation}
holds for all $x\in\mathbb{R}^d.$
Therefore, by (\ref{F}), for small enough $\Delta t$ ($\Delta t\le
\frac{1}{\theta L}\wedge\frac{\varepsilon(2\theta-1)}{C\theta^2(1-\varepsilon)}
\wedge\frac{1}{C(1-\varepsilon)}$),
$$\aligned \mathbb{E}(|F(X_{k+1},(k+1)\Delta t)|^2)
\le \mathbb{E}(|F(X_{k},k\Delta t)|^2)(1-C(1-\varepsilon)\Delta t).\endaligned$$
So we have
\begin{equation}\label{mean1}\aligned \mathbb{E}(|X_{k}|^2)\le
\mathbb{E}(|F(X_{k},k\Delta t)|^2) \le
|F(x_0,0)|^2(1-C(1-\varepsilon)\Delta t)^k,\endaligned\end{equation}
and therefore
\begin{equation}\label{mean}\aligned
\mathbb{E}(|X_{k}|^2)\le |F(x_0,0)|^2e^{-C(1-\varepsilon)k\Delta t},
\forall k\ge 1.\endaligned\end{equation}
The first inequality of (\ref{mean1}) holds because of condition (\ref{c2}).
Thus, the $\theta$-EM solution of (\ref{sde1}) is mean square exponentially
stable when $\frac{1}{2}<\theta\le 1$ and $\Delta t$ is small enough.
On the other hand, by the Chebyshev inequality, inequality (\ref{mean}) implies that
$$P(|X_k|^2>k^2e^{-kC(1-\varepsilon)\Delta t})\le\frac{|F(x_0,0)|^2}{k^2}, \forall k\ge 1.$$
Then by the Borel-Cantelli lemma, we see that for almost all $\omega\in\Omega$
\begin{equation}\label{bound}|X_k|^2\le k^2e^{-kC(1-\varepsilon)\Delta t}\end{equation}
holds for all but finitely many $k$. Thus, for all $\omega\in\Omega$
excluding a $P$-null set, there exists a $k_0(\omega)$ such that (\ref{bound}) holds whenever $k\ge k_0$.
Therefore, for almost all $\omega\in\Omega$,
\begin{equation}\label{bound1}\frac{1}{k\Delta t}\log|X_k|\le-\frac{C(1-\varepsilon)}{2}
+\frac{\log k}{k\Delta t}\end{equation}
whenever $k\ge k_0$. Letting $k\rightarrow\infty$ we obtain (\ref{ex}).
The proof is then complete. $\square$
If $0\le\theta\le\frac{1}{2},$ then we have the following
\begin{Theorem}\label{exponential1}
Assume that conditions (\ref{c2}), (\ref{growth1}) and (\ref{c3}) hold. Then for any
$0<\varepsilon<1$, we can choose $\Delta t$ small enough
such that the $\theta$-EM solution satisfies
\begin{equation}
\limsup_{k\rightarrow \infty}\frac{\log \mathbb{E}|X_k|^2}{k\Delta t}\le -C(1-\varepsilon)
\end{equation}
for any initial value $X_0=x_0\in \mathbb{R}^d$ and
\begin{equation}\label{ex1}
\limsup_{k\rightarrow \infty}\frac{\log |X_k|}{k\Delta t}\le -\frac{C(1-\varepsilon)}{2}\quad a.s.
\end{equation}
\end{Theorem}
\textbf{Proof} By the same argument as in the proof of Theorem \ref{polynomial1}, we have
$$\aligned &\quad\ (2\theta-1)\Delta t|f(x,t)|^2-C(1-\varepsilon)|F(x,t)|^2\\&
=(2\theta-1)\Delta t|f(x,t)|^2- C(1-\varepsilon)[|x|^2-2\theta\Delta t\langle x,f(x,t)\rangle
+\theta^2\Delta t^2|f(x,t)|^2]\\&
=[(2\theta-1)\Delta t-C\theta^2\Delta t^2(1-\varepsilon)]|f(x,t)|^2+2C\theta\Delta t
(1-\varepsilon)\langle x,f(x,t)\rangle-C(1-\varepsilon)|x|^2\\&
\ge aK^2|x|^2-2KC\theta\Delta t(1-\varepsilon)|x|^2-C(1-\varepsilon)|x|^2\endaligned
$$
since
$$a:=(2\theta-1)\Delta t-C\theta^2\Delta t^2(1-\varepsilon)\le 0.$$
We have used condition (\ref{growth1}) in the last inequality.
We can choose $\Delta t$ small enough such that
$$\Delta t\le \frac{1}{\theta L}\wedge\frac{K(1-2\theta)+2C\theta(1-\varepsilon)}{KC(1-\varepsilon)\theta^2}
\wedge\frac{C\varepsilon}{2(K^2(1-2\theta)+2KC\theta(1-\varepsilon))},$$ and thus
$$aK^2-2KC\theta\Delta t(1-\varepsilon)\ge -C\varepsilon.$$
Then we have
$$(2\theta-1)\Delta t|f(x,t)|^2-C(1-\varepsilon)|F(x,t)|^2\ge -C|x|^2.$$
Then by condition (\ref{c2}), we can prove that
\begin{equation}\label{ineq2}
2\langle x,f(x,t)\rangle+|g(x,t)|^2+(1-2\theta)\Delta t|f(x,t)|^2\le -C(1-\varepsilon)|F(x,t)|^2 \end{equation}
holds for all $x\in\mathbb{R}^d.$
Therefore, for small enough $\Delta t$,
$$\aligned \mathbb{E}(|F(X_{k+1},(k+1)\Delta t)|^2)
\le \mathbb{E}(|F(X_{k},k\Delta t)|^2)(1-C(1-\varepsilon)\Delta t).\endaligned$$
So we have
\begin{equation}\label{mean2}\aligned \mathbb{E}(|X_{k}|^2)\le \mathbb{E}(|F(X_{k},k\Delta t)|^2)
\le |F(x_0,0)|^2(1-C(1-\varepsilon)\Delta t)^k,\endaligned\end{equation}
and therefore
\begin{equation}\label{mean3}\aligned
\mathbb{E}(|X_{k}|^2)\le |F(x_0,0)|^2e^{-C(1-\varepsilon)k\Delta t}, \forall k\ge 1.\endaligned\end{equation}
The first inequality of (\ref{mean2}) holds because of condition (\ref{c2}). Thus, the $\theta$-EM
solution of (\ref{sde1}) is mean square exponentially stable when $0\le\theta\le\frac{1}{2}$ and $\Delta t$
is small enough.
From (\ref{mean3}) we can show the almost sure stability assertion (\ref{ex1}) in the same
way as in the proof of Theorem \ref{exponential}.
The proof is complete. $\square$
\section{Non-stability results and counterexamples}
In this section we give some non-stability results for the classical EM scheme
and counterexamples to support our conclusions. We show that there are cases in which
our assertions apply while those in the literature do not.
Let us consider the following 1-dimensional stochastic differential equations:
\begin{equation}\label{SDE} dX_t=(aX_t+b|X_t|^{q-1}X_t)dt+c|X_t|^\gamma dB_t,\quad X_0=x_0(\neq0)\in \mathbb{R}.\end{equation}
When $b\le0, q> 0$ and $\gamma\ge\frac{1}{2},$ by Gy\"ongy and
Krylov \cite{GK} Corollary 2.7 (see also \cite{IW,Lan,Lan1,RY}),
there is a unique global solution of equation (\ref{SDE}). Here
$|x|^{q-1}x:=0$ if $x=0$. For this equation, if $q=2\gamma-1, a<0$
and $2b+c^2\le 0,$ then condition (\ref{c2}) is automatically
satisfied. Therefore, the true solution of SDE (\ref{SDE}) is mean
square exponentially stable.
Now let us consider the corresponding Euler-Maruyama approximation:
\begin{equation}\label{EM} X_{k+1}=X_k+(aX_k+b|X_k|^{q-1}X_k)\Delta t+c|X_k|^\gamma \Delta B_k.\end{equation}
For the classical EM approximation $X_k$, we have the following
\begin{Lemma}\label{divergence1}
Suppose $q>1, q>\gamma.$ If $\Delta t>0$ is small enough, and
$$|X_1|\ge\frac{2^\frac{q+2}{q-1}}{(|b|\Delta t)^\frac{1}{q-1}},$$
then for any $K\ge 1,$
\[P(|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}},\forall
1\le k\le K)\ge\exp(-4e^{-\alpha/\sqrt{\Delta t}})>0,\]
where $\alpha:=\frac{2^{\frac{(q-\gamma)(q+2)}{q-1}}}{2|c|}(1\wedge((q-\gamma)\log 2))>0.$
\end{Lemma}
That is, no matter what values $a,b,c$ take, by choosing the initial
value and the step size suitably, the numerical approximation of SDE
(\ref{SDE}) diverges with positive probability when $q>1$ and
$q>\gamma.$
\textbf{Proof of Lemma \ref{divergence1}}: According to (\ref{EM}),
\[\aligned|X_{k+1}|&=|X_k|\Big| b|X_k|^{q-1}\Delta t+c\cdot \textrm{sgn}
(X_k)|X_k|^{\gamma-1}\Delta B_k+1+a\Delta t\Big|\\&
\ge |X_k|\Big||b||X_k|^{q-1}\Delta t-|c| |X_k|^{\gamma-1}|\Delta B_k|-1-|a|\Delta t\Big|.\endaligned\]
Take $\Delta t$ small enough such that $|a|\Delta t\le 1.$ If
$|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}}$
and $|\Delta B_k|\le\frac{1}{2|c|}2^{(k+\frac{3}{q-1})(q-\gamma)}$, then
\[\aligned |X_{k+1}|&\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}}
(2^{k(q-1)+3}(1-\frac{1}{2})-2)\\&
\ge\frac{2^{k+1+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}}.\endaligned\]
Thus, given that $|X_1|\ge\frac{2^{\frac{q+2}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}}$,
the event that $\{|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}},
\forall 1\le k\le K\}$ contains the event that $\{|\Delta B_k|\le
\frac{1}{2|c|}2^{(k+\frac{3}{q-1})(q-\gamma)},\forall 1\le k\le K\}$. So
\[P(|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}},\forall 1\le k\le K)
\ge\prod_{k=1}^KP(|\Delta B_k|\le\frac{1}{2|c|}2^{(k+\frac{3}{q-1})(q-\gamma)}).\]
We have used the fact that $\{\Delta B_k\}$ are independent in the above inequality. But
\[\aligned P(|\Delta B_k|\ge\frac{1}{2|c|}2^{(k+\frac{3}{q-1})(q-\gamma)})&=
P(\frac{|\Delta B_k|}{\sqrt{\Delta t}}\ge\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|\sqrt{\Delta t}})\\&
=\frac{2}{\sqrt{2\pi}}\int_{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|
\sqrt{\Delta t}}}^\infty e^{-\frac{x^2}{2}}dx.\endaligned\]
We can take $\Delta t$ small enough such that ${\frac{ 2^{(k+\frac{3}{q-1})
(q-\gamma)}}{2|c|\sqrt{\Delta t}}}\ge 2,$ so $x\le\frac{x^2}{2}$ for $x\ge
{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|\sqrt{\Delta t}}}$ and therefore,
\[\aligned P(|\Delta B_k|\ge\frac{1}{2|c|}2^{(k+\frac{3}{q-1})(q-\gamma)})&
\le\frac{2}{\sqrt{2\pi}}\int_{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|
\sqrt{\Delta t}}}^\infty e^{-x}dx\\&=\frac{2}{\sqrt{2\pi}}\exp\{
-{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|\sqrt{\Delta t}}}\}.\endaligned\]
Since
\[\log (1-u)\ge -2u,\quad 0<u<\frac{1}{2},\]
we have
\[\aligned\log P(|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}},
\forall 1\le k\le K)&\ge\sum_{k=1}^K\log (1-\exp(-{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|\sqrt{\Delta t}}}))\\&
\ge-2\sum_{k=1}^K\exp(-{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|\sqrt{\Delta t}}}).\endaligned\]
Next, by using the fact that $r^x\ge r(1\wedge\log r)x$ for any $ x\ge 1, r>1,$ we have
\[\aligned\sum_{k=1}^K\exp(-{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|
\sqrt{\Delta t}}})&=\sum_{k=1}^K\exp(-{\frac{ 2^{\frac{3(q-\gamma)}{q-1}}}{2|c|\sqrt{\Delta t}}}(2^{q-\gamma})^k)\\&
\le\sum_{k=1}^K\exp(-{\frac{ 2^{\frac{3(q-\gamma)}{q-1}}}{2|c|\sqrt{\Delta t}}}2^{q-\gamma}(1\wedge\log 2^{q-\gamma})k)\\&
\le\frac{e^{-\frac{\alpha}{\sqrt{\Delta t}}}}{1-e^{-\frac{\alpha}{\sqrt{\Delta t}}}}
\le 2e^{-\frac{\alpha}{\sqrt{\Delta t}}}\endaligned\]
for $\Delta t$ small enough, where $\alpha:=\frac{1}{2|c|}2^{\frac{(q+2)(q-\gamma)}{q-1}}(1\wedge\log 2^{q-\gamma}).$
Hence
\[\aligned\log P(|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}},
\forall 1\le k\le K)\ge -4e^{-\frac{\alpha}{\sqrt{\Delta t}}}.\endaligned\]
This completes the proof. $\square$
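The doubly exponential blow-up behind Lemma \ref{divergence1} is easy to observe numerically. Below is a minimal sketch of the explicit EM scheme (\ref{EM}) with the illustrative choices $q=3$, $\gamma=1$ (so that $q>1$, $q>\gamma$) and a large initial value; the cubic drift term roughly cubes $|X_k|$ at every step, whatever the noise does:

```python
import random

def em_step(x, a, b, c, q, gamma, dt, dB):
    """One explicit EM step for dX = (a*X + b*|X|^(q-1)*X) dt + c*|X|^gamma dB."""
    return x + (a * x + b * abs(x) ** (q - 1.0) * x) * dt + c * abs(x) ** gamma * dB

rng = random.Random(1)
a, b, c, q, gamma, dt = -1.0, -1.0, 1.0, 3.0, 1.0, 0.01   # illustrative
x = 100.0            # large initial value, in the spirit of the lemma
for k in range(5):
    x = em_step(x, a, b, c, q, gamma, dt, rng.gauss(0.0, dt ** 0.5))
assert abs(x) > 1e100   # |X_k| is roughly cubed at every step
```

After five steps $|X_5|$ is of order $10^{243}$ here, while the true solution of the SDE (with $a<0$, $2b+c^2\le 0$) is stable.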
When $0<q<1$, $\frac{1}{2}\le\gamma<1$, and $|b|<a,$ we also have the following divergence result for the EM approximation:
\begin{Lemma}\label{divergence2}
For any $\Delta t>0$ small enough, if $|X_1|\ge r:=1+\frac{a-|b|}{2}\Delta t,$ then
there exist $k_0\ge 1$ $($depending on $\Delta t)$, $A$ and $\alpha>0$ such that
\[\log P(|X_k|\ge r^k,\forall k\ge 1)\ge A-\frac{2e^{-k_0\alpha}}{1-e^{-k_0\alpha}}>-\infty,\]
where $A$ is finite and $\alpha:=\frac{(a-|b|)\sqrt{\Delta t}}{2|c|}r^{1-\gamma}(1\wedge\log r^{1-\gamma}).$
\end{Lemma}
\textbf{Proof}: First, we show that
\[|X_k|\ge r^k\ \textrm{and}\ |\Delta B_k|\le\frac{(r-1)r^{k(1-\gamma)}}{|c|}\Rightarrow |X_{k+1}|\ge r^{k+1}.\]
Now
\[\aligned|X_{k+1}|&=|X_k|\Big| b|X_k|^{q-1}\Delta t+c\cdot\textrm{sgn}(X_k) |X_k|^{\gamma-1}\Delta B_k+1+a\Delta t\Big|\\&
\ge |X_k|\Big|1+a\Delta t-|b||X_k|^{q-1}\Delta t-|c| |X_k|^{\gamma-1}|\Delta B_k|\Big|\\&
\ge r^k(1+a\Delta t-|b|\Delta t-|c| r^{k(\gamma-1)}\frac{(r-1)r^{k(1-\gamma)}}{|c|})\\&
=r^k(1+2(r-1)-(r-1))=r^{k+1}.\endaligned\]
Thus, given that $|X_1|\ge r$,
the event that $\{|X_k|\ge r^k,\forall k\ge 1\}$ contains the event that
$\{|\Delta B_k|\le \frac{(r-1)r^{k(1-\gamma)}}{|c|},\forall k\ge 1\}$.
If $\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}\ge 2$, then
\[\aligned P(|\Delta B_k|\ge\frac{(r-1)r^{k(1-\gamma)}}{|c|})&
=\frac{2}{\sqrt{2\pi}}\int_{\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}}^\infty
e^{-\frac{x^2}{2}}dx\\&\le\frac{2}{\sqrt{2\pi}}\int_{\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}}^\infty e^{-x}dx\\&
=\frac{2}{\sqrt{2\pi}}\exp(-\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}).\endaligned\]
We choose $k_0$ to be the smallest $k$ such that $\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}\ge 2$
(note that since $r>1$, such a $k_0$ always exists).
On the other hand,
\[\aligned &\sum_{k=k_0}^\infty\log(1-\frac{2}{\sqrt{2\pi}}\exp(-\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}))\\&
\ge -2\sum_{k=k_0}^\infty\exp(-\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}})\\&
\ge -2\sum_{k=k_0}^\infty\exp(-k\times\frac{(r-1)r^{1-\gamma}(1\wedge\log r^{1-\gamma})}{|c|\sqrt{\Delta t}})\\&
=-\frac{2e^{-k_0\alpha}}{1-e^{-k_0\alpha}}>-\infty.\endaligned\]
So $\prod_{k=1}^\infty P(|\Delta B_k|\le\frac{(r-1)r^{k(1-\gamma)}}{|c|})$ is well defined and therefore
\[P(|X_k|\ge r^k, \forall\ k\ge 1)\ge\prod_{k=1}^\infty P(|\Delta B_k|\le\frac{(r-1)r^{k(1-\gamma)}}{|c|}).\]
Then, as in the proof of Lemma \ref{divergence1}, we have
\[\aligned \log P(|X_k|\ge r^k, \forall\ k\ge 1)&\ge
A-\frac{2e^{-k_0\alpha}}{1-e^{-k_0\alpha}}>-\infty,\endaligned\]
where
\[A=\sum_{k=1}^{k_0-1} \log P(|\Delta B_k|\le\frac{(r-1)r^{k(1-\gamma)}}{|c|}),\]
\[\alpha=\frac{(r-1)r^{1-\gamma}(1\wedge\log r^{1-\gamma})}{|c|\sqrt{\Delta t}}.\]
This completes the proof. $\square$
Let us give an example to show that the $\theta$-EM scheme
($\frac{1}{2}<\theta\le 1$) is exponentially stable while the EM
scheme is not.
\textbf{Example 1:}
Consider the following one dimensional stochastic differential equation,
\begin{equation}\label{sde2}
dX_t=(aX_t+b|X_t|^{2\gamma-2} X_t)dt+c|X_t|^\gamma dB_t,
\end{equation}
where $\gamma>1$, $a<0$, and $2b+c^2\le 0$.
It is clear that both of the coefficients are locally Lipschitz
continuous. Thus SDE (\ref{sde2}) has a unique global solution.
By Lemma \ref{divergence1}, since $2\gamma-1>\gamma>1$, we know that
when we choose the step size $\Delta t$ small enough and the initial
value $X_1$ suitably, the classical EM scheme is divergent with a
positive probability. Now let us consider the exponential stability
of $\theta$-EM scheme.
The corresponding $\theta$-EM scheme of (\ref{sde2}) is
\begin{equation}\label{STM2}
X_{k+1}=X_k+[(1-\theta)X_k(a+b|X_k|^{2\gamma-2})+\theta X_{k+1}
(a+b|X_{k+1}|^{2\gamma-2})]\Delta t+c|X_{k}|^\gamma \Delta B_k.
\end{equation}
Notice that in our case $g(x)=c|x|^\gamma$ does not satisfy the
linear growth condition. Therefore, the stability results in
\cite{HMS,HMY,PDM,WMS} as well as \cite{CW} for the moment and the
almost sure exponential stability of the backward EM scheme
($\theta=1$) cannot be used here.
On the other hand, since in this case $f(x)=ax+b|x|^{2\gamma-2} x,
g(x)=c|x|^\gamma,$ it is obvious that
$$2\langle x,f(x)\rangle+|g(x)|^2=2ax^2+(2b+c^2)|x|^{2\gamma}\le 2ax^2.$$
Since $a<0$, then condition (\ref{c2}) holds for $C=-2a$. Moreover,
$$\langle x-y,f(x)-f(y)\rangle=a(x-y)^2+b(x-y)(|x|^{2\gamma-2}x-|y|^{2\gamma-2}y).$$
Since
$$(x-y)(|x|^{2\gamma-2}x-|y|^{2\gamma-2}y)\ge0$$
holds for $\forall x,y\in\mathbb{R},$ it follows that
$$\langle x-y,f(x)-f(y)\rangle\le a(x-y)^2.$$
We have used the fact that $b<0$ here. Thus conditions (\ref{c2})
and (\ref{c3}) hold. By Theorem \ref{exponential}, we know that, for
any $0<\varepsilon<1$, the $\theta$-EM ($\frac{1}{2}<\theta\le 1$)
scheme (\ref{STM2}) of the corresponding SDE (\ref{sde2}) is mean
square exponentially stable with Lyapunov exponent no greater
than $2a(1-\varepsilon)$ and almost surely exponentially stable with
Lyapunov exponent no greater than $a(1-\varepsilon)$ if $\Delta t$
is small enough.
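Since the drift in (\ref{STM2}) is nonlinear, each $\theta$-EM step requires solving a scalar equation for $X_{k+1}$. For $a,b<0$ and $\gamma=2$ the map $y\mapsto y-\theta\Delta t(ay+by^3)$ is strictly increasing, so the root is unique and bisection suffices. The following sketch of the backward ($\theta=1$) scheme, with illustrative parameters satisfying $a<0$ and $2b+c^2\le 0$ (they are not taken from the paper), exhibits the mean-square decay predicted by Theorem \ref{exponential}:

```python
import random

def implicit_step(r, a, b, theta, dt):
    """Solve y - theta*dt*(a*y + b*y**3) = r by bisection.  For a, b <= 0
    the left-hand side is strictly increasing in y, and [0, r] (resp.
    [r, 0]) brackets the unique root."""
    g = lambda y: y - theta * dt * (a * y + b * y ** 3)
    lo, hi = (0.0, r) if r >= 0.0 else (r, 0.0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(2)
a, b, c, theta, dt = -1.0, -1.0, 1.0, 1.0, 0.05   # illustrative; 2b + c^2 <= 0
n_paths, n_steps = 300, 100
msq = 0.0
for _ in range(n_paths):
    x = 2.0
    for k in range(n_steps):
        dB = rng.gauss(0.0, dt ** 0.5)
        # gamma = 2, so the diffusion coefficient is c*|x|^2 = c*x*x
        rhs = x + (1.0 - theta) * (a * x + b * x ** 3) * dt + c * x * x * dB
        x = implicit_step(rhs, a, b, theta, dt)
    msq += x * x / n_paths
assert msq < 0.04   # E|X_k|^2 decays from |x_0|^2 = 4: mean-square stability
```

The bracket $[0,r]$ works because $g(0)=0$ and $g(r)\ge r$ for $r\ge 0$ when $a,b\le 0$ (and symmetrically for $r<0$).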
For the polynomial stability, we consider the following example.
\textbf{Example 2:}
Now let us consider the following scalar stochastic differential equation,
\begin{equation}\label{sde3}
dX_t=\frac{-(1+t)^{\frac{1}{2}}|X_t|^{2\gamma-2}X_t-2K_1X_t}{2(1+t)}dt+
\sqrt{\frac{|X_t|^{2\gamma}}{(1+t)^\frac{1}{2}}+\frac{C}{(1+t)^{K_1}}}
dB_t,
\end{equation}
where $C>0$, $K_1>1$, $\gamma\ge1$ are constants.
In this case
$$f(x,t)=\frac{-(1+t)^\frac{1}{2}|x|^{2\gamma-2}x-2K_1x}{2(1+t)},\quad g(x,t)=
\sqrt{\frac{|x|^{2\gamma}}{(1+t)^\frac{1}{2}}+\frac{C}{(1+t)^{K_1}}}.$$
It is clear that both of the coefficients are locally Lipschitz
continuous. Moreover, it is easy to verify that
$$2\langle x,f(x,t)\rangle+|g(x,t)|^2\le C(1+t)^{-K_1}- K_1(1+t)^{-1}|x|^2,$$
and
$$\langle x-y,f(x,t)-f(y,t)\rangle\le 0\le L|x-y|^2.$$
Thus conditions (\ref{c1}) and (\ref{c3}) hold. Therefore, SDE
(\ref{sde3}) has a unique global solution. If $\gamma>1,$ then by
Theorem \ref{polynomial}, for any $0<\varepsilon<K_1-1,$ the
$\theta$-EM ($\frac{1}{2}<\theta\le 1$) solution of (\ref{sde3})
satisfies the polynomial stability (with rate no greater than
$-(K_1-1-\varepsilon)$) for $\Delta t$ small enough. If $\gamma=1,$
it is obvious that $f$ also satisfies the linear growth condition
(\ref{growth}) (condition (2.4) in \cite{LFM} fails in this case);
then by Theorem \ref{polynomial1}, the $\theta$-EM
($0\le\theta\le\frac{1}{2}$) solution of (\ref{sde3}) satisfies the
polynomial stability for $\Delta t$ small enough. However, since the
coefficient $g(x,t)$ is not bounded with respect to $x$, we cannot
apply Theorems 3.1 and 3.5 in \cite{LFM} to obtain the polynomial
stability of the classical EM scheme and the backward EM scheme,
respectively.
\textbf{Acknowledgement:} The second named author would like to
thank Professor Chenggui Yuan for useful discussions and suggestions
during his visit to Swansea University.
\end{document} |
\begin{document}
\title[Equilibrium configurations of epitaxially strained films and material voids]{Equilibrium configurations for epitaxially strained films and material voids in three-dimensional linear elasticity}
\author{Vito Crismale}
\address[Vito Crismale]{CMAP, \'Ecole Polytechnique, 91128 Palaiseau Cedex, France}
\email[Vito Crismale]{[email protected]}
\author{Manuel Friedrich}
\address[Manuel Friedrich]{Applied Mathematics M\"unster, University of M\"unster\\
Einsteinstrasse 62, 48149 M\"unster, Germany.}
\email{[email protected]}
\begin{abstract}
We extend the results about existence of minimizers, relaxation, and approximation proven by Chambolle et al.\ in 2002 and 2007 for an energy related to epitaxially strained crystalline films, and by Braides, Chambolle, and Solci in 2007 for a class of energies defined on pairs of function-set. We study these models in the framework of three-dimensional linear elasticity, where
a major obstacle to overcome
is the lack of any \emph{a priori} assumption on the integrability properties of displacements. As a key tool for the proofs, we introduce a new notion of convergence for $(d{-}1)$-rectifiable sets that are jumps of $GSBD^p$ functions, called $\sigma^p_{\mathrm{\rm sym}}$-convergence.
\end{abstract}
\keywords{Epitaxial growth, material voids, free discontinuity problems, generalized special functions of bounded deformation, relaxation, $\Gamma$-convergence, phase-field approximation}
\subjclass[2010]{49Q20, 26A45, 49J45, 74G65.}
\maketitle
\section{Introduction}
The last years have witnessed a remarkable progress in the mathematical and physical literature towards the understanding of stress driven rearrangement instabilities (SDRI), i.e., morphological instabilities of interfaces between elastic phases generated by the
competition between elastic and surface energies of (isotropic or anisotropic) perimeter type. Such phenomena are for instance observed in the formation of material voids inside elastically stressed solids. Another example is hetero-epitaxial growth of elastic thin films, when thin layers of highly strained hetero-systems, such as InGaAs/GaAs or SiGe/Si, are deposited onto a substrate: in case of a mismatch between the lattice parameters of the two crystalline solids, the free surface of the film is flat until a critical value of the thickness is reached, after which the free surface becomes corrugated (see e.g.\ \cite{AsaTil72, GaoNix99, Grin86, Grin93, SieMikVoo04, Spe99} for some physical and numerical literature).
From a mathematical point of view, the common feature of functionals describing SDRI is the presence of both stored elastic bulk and surface energies. In the static setting, problems arise concerning existence, regularity, and stability of equilibrium configurations obtained by energy minimization. The analysis of these issues is by now mostly developed
in dimension two only.
Starting with the seminal work by {\sc Bonnetier and Chambolle} \cite{BonCha02} who proved existence of equilibrium configurations, several results have been obtained in \cite{BelGolZwi15, Bon12, FonFusLeoMor07, FonPraZwi14, FusMor12, GolZwi14} for hetero-epitaxially strained elastic thin films in 2d. We also refer to \cite{DavPio18, DavPio17, KrePio19} for related energies and to \cite{KhoPio19} for a unified model for SDRI. In the three dimensional setting, results
are limited to the geometrically nonlinear setting or to linear elasticity under antiplane-shear assumption \cite{Bon13, ChaSol07}. In a similar fashion, regarding the study of material voids in elastic solids, there are works about existence and regularity in dimension two \cite{capriani, FonFusLeoMil11} and a relaxation result in higher dimensions \cite{BraChaSol07} for nonlinearly elastic energies or in linear elasticity under antiplane-shear assumption. \color{black} Related to \cite{BraChaSol07}, we also mention a similar relaxation result in presence of obstacles \cite{FocGel08}, and the study of homogenization in periodically perforated domains, cf.\ e.g.\ \cite{CagSca11, FocGel07}. \color{black}
The goal of the present paper is to extend the results about relaxation, existence, and approximation obtained for energies related to material voids \cite{BraChaSol07} and to epitaxial growth \cite{BonCha02, ChaSol07}, respectively, to the case of linear elasticity in arbitrary space dimensions. As already observed in \cite{ChaSol07}, the main obstacle for deriving such generalizations lies in the fact that a deep understanding of the function space of \emph{generalized special functions of bounded deformation} ($GSBD$) is necessary. Indeed, our strategy is based extensively on using the theory on $GSBD$ functions which, initiated by {\sc Dal Maso} \cite{DM13}, was developed over the last years, see e.g.\ \cite{ChaConIur17, CC17, CC18, ConFocIur15, CFI16ARMA, CFI17DCL, CFI17Density, Cri19, Fri17M3AS, FriPWKorn, FriSol16, Iur14}. In fact, as a byproduct of our analysis, we introduce two new notions related to this function space, namely (1) a version of the space with functions attaining also the value infinity and (2) a novel notion for convergence of rectifiable sets, which we call $\sigma^p_{\mathrm{\rm sym}}$-convergence. Let us stress that in this work we consider exclusively a static setting. For evolutionary models, we mention the recent works \cite{FonFusLeoMor15, FusJulMor18, FusJulMor19, Pio14}.
We now introduce the models under consideration in a slightly simplified way, restricting ourselves to three space dimensions. To describe material voids in elastically stressed solids, we consider the following functional defined on pairs of function-set (see \cite{SieMikVoo04})
\begin{equation}\label{eq: F functional-intro}
F(u,E) = \int_{\Omega \setminus E} \mathbb{C} \, e(u) : e(u) \, \, \mathrm{d} x + \int_{\Omega \cap \partial E} \varphi(\nu_E) \, \mathrm{d}\mathcal{H}^{2}\,,
\end{equation}
where $E \subset \Omega$ represents the (sufficiently smooth) shape of voids within an elastic body with reference configuration $\Omega \subset \mathbb{R}^3$, and $u$ is an elastic displacement field. The first part of the functional represents the elastic energy depending on the linear strain $e(u) := \frac{1}{2}\big((\nabla u)^T + \nabla u\big)$, where $\mathbb{C}$ denotes the fourth-order positive semi-definite tensor of elasticity coefficients. (In fact, we can incorporate more general elastic energies, see \eqref{eq: F functional} below.) The surface energy depends on a (possibly anisotropic) density $\varphi$ evaluated at the outer normal $\nu_E$ to $E$. This setting is usually complemented with a volume constraint on the voids $E$ and nontrivial prescribed Dirichlet boundary conditions for $u$ on a part of $\partial \Omega$. We point out that the boundary conditions are the reason why the solid is elastically stressed.
A variational model for epitaxially strained films can be regarded as a special case of \eqref{eq: F functional-intro} and corresponds to the situation where the material domain is the subgraph of an unknown nonnegative function $h$. More precisely, we assume that the material occupies the region
$${\Omega_h^+ := \{ x \in \omega \times \mathbb{R} : 0 < x_3 < h(x_1,x_2)\}}$$
for a given bounded function $h: \omega \to [0,\infty) $, $\omega \subset \mathbb{R}^2$, whose graph represents the free profile of the film. We consider the energy
\begin{equation}\label{eq: G functional-intro}
G(u,h) = \int_{\Omega_h^+} \mathbb{C} \, e(u) : e(u) \, \, \mathrm{d} x + \int_{\omega} \sqrt{1 + |\nabla h(x_1,x_2)|^2} \, \mathrm{d}(x_1,x_2)\,.
\end{equation}
Here, $u$ satisfies prescribed boundary data on $\omega \times \lbrace 0 \rbrace$ which corresponds to the interface between film and substrate. This Dirichlet boundary condition models the case of a film growing on an infinitely rigid substrate and is the reason for the film to be strained. We observe that \eqref{eq: G functional-intro} corresponds to \eqref{eq: F functional-intro} when $\varphi$ is the Euclidean norm, $\Omega= \omega {\times} (0, M)$ for some $M>0$ large enough, and $E=\Omega \setminus \Omega^+_h$.
Variants of the above models \eqref{eq: F functional-intro} and \eqref{eq: G functional-intro} have been studied by {\sc Braides, Chambolle, and Solci} \cite{BraChaSol07} and by {\sc Chambolle and Solci} \cite{ChaSol07}, respectively, where the linearly elastic energy density $\mathbb{C} \, e(u): e(u)$ is replaced by an elastic energy satisfying a $2$-growth (or $p$-growth, $p>1$) condition in the full gradient $\nabla u$ with quasiconvex integrands. These works are devoted to giving a sound mathematical formulation for determining equilibrium configurations. By means of variational methods and geometric measure theory, they study the relaxation of the functionals in terms of \emph{generalized special functions of bounded variation} ($GSBV$), which allows one to incorporate the possible roughness of the geometry of voids or films. Existence of minimizers for the relaxed functionals and the approximation of (the counterpart of) $G$ through a phase-field $\Gamma$-convergence result are addressed. In fact, the two articles
have been written almost simultaneously with many similarities in both the setting and the proof strategy.
Therefore, we prefer to present the extension of both works to the $GSBD$ setting (i.e., to three-dimensional linear elasticity) in a single work to allow for a comprehensive study of different applications.
We now briefly discuss our main results.
\textbf{(a) Relaxation of $F$}: We first note that, for fixed $E$, $F(\cdot,E)$ is weakly lower semicontinuous in $H^1$
and, for fixed $u$, $F(u,\cdot)$ can be regarded as a lower semicontinuous functional on sets of finite perimeter. The energy defined on pairs $(u,E)$, however, is not lower semicontinuous since, in a limiting process, the voids $E$ may collapse into a discontinuity of the displacement $u$. The relaxation has to take this phenomenon into account, in particular collapsed surfaces need to be counted twice in the relaxed energy. Provided that the surface density $\varphi$ is a norm in $\mathbb{R}^3$, we show that the relaxation takes the form (see Proposition \ref{prop:relF})
\begin{equation}\label{eq: oveF}
\overline{F}(u,E) = \int_{\Omega \setminus E} \mathbb{C} \, e(u) : e(u) \, \, \mathrm{d} x + \int_{\Omega \cap \partial^* E} \varphi (\nu_E) \, {\rm d}\mathcal{H}^2 + \int_{J_u \cap (\Omega \setminus E)^1} 2\, \varphi(\nu_u) \, {\rm d}\mathcal{H}^2\,,
\end{equation}
where $E$ is a set of finite perimeter with essential boundary $\partial^* E$, $( \Omega \setminus E)^1$ denotes the set of points of density $1$ of $\Omega \setminus E$, and $u \in GSBD^2(\Omega)$. Here, $e(u)$ denotes the approximate symmetrized gradient of class $L^2(\Omega; \mathbb{R}^{3 \times 3})$ and $J_u$ is the jump set with corresponding measure-theoretical normal $\nu_u$. (We refer to Section~\ref{sec:prel} for the definition and the main properties of this function space. Later, we will also consider more general elastic energies and work with the space $GSBD^p(\Omega)$, $1 <p < \infty$, i.e., $e(u) \in L^p(\Omega; \mathbb{R}^{3 \times 3})$.)
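A one-dimensional caricature (our own sketch, not taken from the statement) may help to see where the factor $2$ in the last term of \eqref{eq: oveF} comes from.

```latex
Take $\Omega=(0,1)$, a point $x_0\in\Omega$, voids $E_n=(x_0-\tfrac1n,\,x_0+\tfrac1n)$,
and $u_n = c\,\chi_{(x_0+1/n,\,1)}$ for a constant $c\neq 0$. Then $\chi_{E_n}\to 0$
in $L^1(\Omega)$ and $u_n \to u := c\,\chi_{(x_0,1)}$ in measure, while
\[
  F(u_n,E_n) = \varphi(1)+\varphi(-1) = 2\,\varphi(1) \qquad \text{for every } n\,,
\]
since $\partial E_n$ consists of two points and $e(u_n)=0$ away from the void
($\varphi$ is a norm, hence even). In the limit the void vanishes, but the two
faces of $\partial E_n$ collapse onto the single jump point $x_0\in J_u$; lower
semicontinuity therefore forces the weight $2\varphi(\nu_u)$ on $J_u$.
```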
\textbf{(b) Minimizer for $\overline F$:} In Theorem~\ref{thm:relFDir}, we show that such a relaxation result can also be proved by imposing additionally a volume constraint on $E$ (which reflects mass conservation) and by prescribing boundary data for $u$. For this version of the relaxed functional, we prove the existence of minimizers, see Theorem~\ref{th: relF-extended}.
\textbf{(c) Relaxation of $G$}: For the model \eqref{eq: G functional-intro} describing epitaxially strained crystalline films, we show in Theorem \ref{thm:relG} that the lower semicontinuous envelope takes the form
\begin{equation}\label{eq: oveG}
\overline{G}(u,h) = \int_{\Omega_h^+} \mathbb{C} \, e(u) : e(u) \, \, \mathrm{d} x +\mathcal{H}^{2}(\Gamma_h) + 2 \, \mathcal{H}^{2}(\Sigma)\,,
\end{equation}
where $h \in BV(\omega; [0,\infty) )$ and $\Gamma_h$ denotes the (generalized) graph of $h$. Here, $u$ is again a $GSBD^2$-function and the set $\Sigma \subset \mathbb{R}^3$ is a ``vertical'' rectifiable set describing the discontinuity set of $u$ inside the subgraph $\Omega_h^+$. Similar to the last term in \eqref{eq: oveF}, this contribution has to be counted twice. We remark that in \cite{FonFusLeoMor07} the set $\Sigma$ is called ``vertical cuts''. Also here a volume constraint may be imposed.
\textbf{(d) Minimizer for $\overline G$:} In Theorem~\ref{thm:compG}, we show compactness for sequences with bounded $G$ energy. In particular, this implies existence of minimizers for $\overline G$ (under a volume constraint).
\textbf{(e) Approximation for $\overline G$:}
In Theorem~\ref{thm:phasefieldG}, we finally prove a phase-field $\Gamma$-convergence approximation of $\overline G$. We remark that
we can generalize the assumptions on the regularity of the Dirichlet datum. Whereas in \cite[Theorem~5.1]{ChaSol07} the class $H^1 \cap L^\infty$ was considered, we show that it indeed suffices to assume $H^1$-regularity.
We now provide some information on the proof strategy highlighting in particular the additional difficulties compared to \cite{BraChaSol07, ChaSol07}. Here, we will also explain why two new technical tools related to the space $GSBD$ have to be introduced.
\textbf{(a)} The proof of the lower inequality for the relaxation $\overline F$ is closely related to its analog in \cite{BraChaSol07}: we use a slicing approach, exploit the lower inequality in one dimension, and employ a localization method. To prove the upper inequality, it is enough to combine the corresponding upper bound from \cite{BraChaSol07} with a density result for $GSBD^p$ ($p>1$) functions \cite{CC17}, slightly adapted for our purposes, see Lemma \ref{le:0410191844}.
\textbf{(b)} We point out that, in \cite{BraChaSol07}, the existence of minimizers has not been addressed due to the lack of a compactness result. In this sense, our study also delivers a conceptually new result without a corresponding counterpart in \cite{BraChaSol07}. The main difficulty lies in the fact that, for configurations with finite energy \eqref{eq: oveF}, small pieces of the body could be
disconnected from the bulk part, either by the voids $E$ or by the jump set $J_u$. Thus, since there are \emph{no a priori bounds} on the displacements, the function $u$ could attain arbitrarily large values on certain components, and this might
rule out measure convergence for minimizing sequences. We remark that truncation methods, used to remedy this issue in scalar problems, are not applicable in the vectorial setting. This problem was solved only recently by general compactness results, both in the $GSBV^p$ and the $GSBD^p$ setting. The result \cite{Fri19} in $GSBV^p$ delivers a \emph{selection principle for minimizing sequences}, showing that one can always find at least one minimizing sequence converging in measure. With this, existence of minimizers for the
energies
in \cite{BraChaSol07} is immediate.
Our situation in linear elasticity, however, is more delicate since a comparable strong result is not available in $GSBD$. In \cite[Theorem~1.1]{CC18}, a compactness and lower semicontinuity
result in $GSBD^p$ is derived relying on the idea that minimizing sequences may ``converge to infinity'' on a set of finite perimeter. In the present work, we refine this result by introducing a topology which induces this kind of nonstandard convergence. To this end, we need to define the new space $GSBD^p_\infty$ consisting of $GSBD^p$ functions which may also attain the value infinity. With these new techniques at hand, we can prove a general compactness result in $GSBD^p_\infty$ (see Theorem~\ref{thm:compF}) which particularly implies the existence of minimizers for \eqref{eq: oveF}.
\textbf{(c)} Although the functional $G$ in \eqref{eq: G functional-intro} is a special case of $F$, the relaxation result is not an immediate consequence, due to the additional constraint that the domain is the subgraph of a function. Indeed, in the lower inequality, a further crucial step is needed in the description of the (variational) limit of $\partial\Omega_{h_n}$ when $h_n \to h$ in $L^1(\omega)$. In particular, the vertical set $\Sigma$ has to be identified, see \eqref{eq: oveG}.
This issue is connected to the problem of detecting all possible limits of jump sets $J_{u_n}$ of converging sequences $(u_n)_n$ of $GSBD^p$ functions. In the $GSBV^p$ setting, the notion of $\sigma^p$-convergence of sets is used, which has originally been developed by {\sc Dal Maso, Francfort, and Toader} \cite{DMFraToa02} to study quasistatic crack evolution in nonlinear elasticity. (We refer also to the variant \cite{GiaPon06} which is independent of $p$.) In this work, we introduce an analogous notion in the $GSBD^p$ setting which we call $\sigma^p_{\mathrm{\rm sym}}$-convergence. The definition is a bit more complicated compared to the $GSBV$ setting since it has to be formulated in the frame of $GSBD^p_\infty$ functions possibly attaining the value infinity. We believe that this notion may be of independent interest and is potentially helpful to study also other problems such as quasistatic crack evolution in linear elasticity \cite{FriSol16}. We refer to Section \ref{sec:sigmap} for the definition and properties of $\sigma^p_{\mathrm{\rm sym}}$-convergence, as well as for a comparison to the corresponding notion in the $GSBV^p$ setting.
Showing the upper bound for the relaxation result is considerably more difficult than the analogous bound for $\overline F$. In fact, one has to guarantee that recovery sequences are made up by sets that are still subgraphs. We stress that this cannot be obtained by general existence results, but is achieved through a very careful construction (pp.\ \pageref{page:upperineqbeg}-\pageref{page:upperineqend}), which only partially follows the analogous one
in \cite{ChaSol07}. We believe that the construction in \cite{ChaSol07} could indeed be improved by adopting an approach similar to ours, in order to take also some pathological situations into account.
\textbf{(d)} To show the existence of minimizers of $G$, the delicate step is to prove that minimizing sequences have subsequences which converge (at least) in measure. In the $GSBV^p$ setting, this is simply obtained by applying a Poincar\'e inequality on vertical slices through the film. The same strategy cannot be pursued in $GSBD^p$ since by slicing in a certain direction not all components can be controlled. As a remedy, we proceed in two steps. We first use the novel compactness result in $GSBD^p_\infty$ to identify a limit which might attain the value infinity on a set of finite perimeter $G_\infty$. Then, \emph{a posteriori}, we show that actually $G_\infty = \emptyset$,
see Subsection~\ref{subsec:RelG} for details.
\textbf{(e)} For the phase-field approximation, we combine a variant of the construction in the upper inequality for $\overline G$ with the general strategy of the corresponding approximation result in \cite{ChaSol07}. The latter is slightly modified in order to proceed without $L^\infty$-bound on the displacements.
The paper is organized as follows. In Section~\ref{sec: main results}, we introduce the setting of our two models on material voids in elastic solids and epitaxially strained films. Here, we also present our main relaxation, existence, and approximation results. Section~\ref{sec:prel} collects the definition and main properties of the function space $GSBD^p$. In this section, we also define the space $GSBD^p_\infty$ and show basic properties. In Section \ref{sec:sigmap} we introduce the novel notion of $\sigma^p_{\mathrm{\rm sym}}$-convergence and prove
a compactness result for sequences of rectifiable sets with bounded Hausdorff measure. Section \ref{sec:FFF} is devoted to the analysis of functionals defined on pairs function-set. Finally, in Section \ref{sec:GGG} we investigate the model for epitaxially strained films and prove the relaxation, existence, and approximation results.
\section{Setting of the problem and statement of the main results}\label{sec: main results}
In this section, we give the precise definitions of the two energy functionals and present the main relaxation, existence, and approximation results. In the following, $f\colon \mathbb{M}^{d\times d}\to [0,\infty) $ denotes a convex function satisfying the growth condition ($| \cdot |$ is the Frobenius norm on $\mathbb{M}^{d\times d}$)
\begin{align}\label{eq: growth conditions}
c_1 |\zeta^T + \zeta|^p &\le f(\zeta) \le c_2 (|\zeta^T + \zeta|^p +1) \ \ \ \ \ \text{for all} \ \zeta \in \mathbb{M}^{d\times d}
\end{align}
and $f(0) = 0$,
for some $1 < p < + \infty$. In particular, the convexity of $f$ and \eqref{eq: growth conditions} imply that $f(\zeta) = f(\frac{1}{2}(\zeta^T + \zeta))$ for all $\zeta \in \mathbb{M}^{d\times d}$. For an open subset $\Omega \subset \mathbb{R}^d$, we will denote by $L^0(\Omega;\mathbb{R}^d)$ the space of $\mathcal{L}^d$-measurable functions $v \colon \Omega \to \mathbb{R}^d$ endowed with the topology of the convergence in measure. We let $\mathfrak{M}(\Omega)$ be the family of all $\mathcal{L}^d$-measurable subsets of $\Omega$.
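For the reader's convenience, we sketch the standard argument behind the identity $f(\zeta) = f(\frac{1}{2}(\zeta^T + \zeta))$.

```latex
Write $\zeta = \zeta_{\rm s} + \zeta_{\rm a}$ with $\zeta_{\rm s} := \tfrac12(\zeta^T+\zeta)$
symmetric and $\zeta_{\rm a} := \tfrac12(\zeta-\zeta^T)$ skew-symmetric. For a fixed
skew-symmetric $w$, set $g(t) := f(\zeta_{\rm s} + t w)$. Since
\[
  (\zeta_{\rm s} + t w)^T + (\zeta_{\rm s} + t w) = 2\zeta_{\rm s}
  \qquad \text{for all } t \in \mathbb{R}\,,
\]
the upper bound in \eqref{eq: growth conditions} yields
$g(t) \le c_2\big(|2\zeta_{\rm s}|^p + 1\big)$ for all $t$. A convex function on
$\mathbb{R}$ that is bounded above is constant, and hence, choosing $w = \zeta_{\rm a}$,
\[
  f(\zeta) = g(1) = g(0) = f(\zeta_{\rm s})\,.
\]
```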
\subsection{Energies on pairs function-set: material voids in elastically stressed solids}\label{sec: results1}
Let $\Omega\subset \mathbb{R}^d$ be a Lipschitz domain. We introduce an energy functional defined on pairs function-set. Given a norm $\varphi$ on $\mathbb{R}^d$ and $f\colon \mathbb{M}^{d\times d}\to [0,\infty) $, we let $F\colon L^0( \Omega; \mathbb{R}^d) \times \mathfrak{M}(\Omega) \to \mathbb{R} \cup \lbrace+ \infty \rbrace$ be defined by
\begin{equation}\label{eq: F functional}
F(u,E) =
\begin{dcases}
\int_{\Omega \setminus E}
f(e(u))\, \, \mathrm{d} x + \int_{\Omega \cap \partial E}
\varphi(\nu_E) \, \mathrm{d}\mathcal{H}^{d-1}
\hspace{-0.5em} & \\
& \hspace{-4cm}
\text{if } \partial E \text{ Lipschitz, } u|_{\Omega\setminus \overline E} \in W^{1,p}( \Omega \setminus \overline E;\mathbb{R}^d), \, u|_E=0,\\
+ \infty \hspace{-0.9em} &\hspace{-4cm} \text{otherwise,}
\end{dcases}
\end{equation}
where $e(u) := \frac{1}{2}\big((\nabla u)^T + \nabla u\big)$ denotes the symmetrized gradient, and $\nu_E$ the outer normal to $E$. We point out that the energy is determined by $E$ and the values of $u$ on $\Omega \setminus \overline E$. The condition $u|_E=0$ is for definiteness only. We denote by $\overline{F}\colon L^0(\Omega;\mathbb{R}^d) \times \mathfrak{M}(\Omega) \to \mathbb{R} \cup \lbrace+ \infty \rbrace$ the lower semicontinuous envelope of the functional $F$ with respect to the convergence in measure for the functions and the $L^1(\Omega)$-convergence of characteristic functions of sets, i.e.,
\begin{equation}\label{eq: first-enve}
\overline{F}(u,E) = \inf\Big\{ \liminf_{n\to\infty} F(u_n,E_n)\colon \, u_n \to u \text{ in }L^0(\Omega;\mathbb{R}^d) \text{ and } \chi_{E_n} \to \chi_E \text{ in }L^1(\Omega) \Big\}\,.
\end{equation}
(We observe that the convergence in $L^0(\Omega;\mathbb{R}^d)$ is metrizable, so the sequential lower semicontinuous envelope coincides with the lower semicontinuous envelope with respect to this convergence.)
In the following, for any $s \in [0,1]$ and any $E \in \mathfrak{M}(\Omega)$, $E^s$ denotes the set of points with density $s$ for $E$. By $\partial^* E$ we indicate its essential boundary, see \cite[Definition 3.60]{AFP}. For the definition of the space $GSBD^p(\Omega)$, $p>1$, we refer to Section \ref{sec:prel} below. In particular, by $e(u) = \frac{1}{2}((\nabla u)^T + \nabla u)$ we denote the approximate symmetrized gradient, and by $J_u$
the jump set of $u$ with measure-theoretical normal $\nu_u$. We characterize $\overline F$ as follows.
\begin{proposition}[Characterization of the lower semicontinuous envelope $\overline{F}$]\label{prop:relF}
Suppose that $f$ is convex and satisfies \eqref{eq: growth conditions}, and that $\varphi$ is a norm on $\mathbb{R}^d$. Then there holds
\begin{equation*}
\overline{F}(u,E) = \begin{dcases} \int_{\Omega \setminus E} f(e(u))\, \, \mathrm{d} x + \int_{\Omega \cap \partial^* E} & \varphi (\nu_E) \, \, \mathrm{d} \hd + \int_{J_u \cap (\Omega \setminus E)^1} 2\, \varphi(\nu_u) \, \, \mathrm{d} \hd \\
&\hspace{-1em}\text{if } u= u \, \chi_{E^0} \in GSBD^p(\Omega) \text{ and }\mathcal{H}^{d-1}(\partial^* E) < +\infty\,,\\
+\infty &\hspace{-1em}\text{otherwise.}
\end{dcases}
\end{equation*}
Moreover, if $\mathcal{L}^d(E)>0$, then for any $(u,E) \in L^0(\Omega;\mathbb{R}^d){\times}\mathfrak{M}(\Omega)$ there exists a recovery sequence $(u_n,E_n)_n\subset L^0(\Omega;\mathbb{R}^d){\times}\mathfrak{M}(\Omega)$ such that $\mathcal{L}^d(E_n) = \mathcal{L}^d(E)$ for all $n \in \mathbb{N}$.
\end{proposition}
The last property shows that it is possible to incorporate a volume constraint on $E$ in the relaxation result. We now move on to consider a Dirichlet minimization problem associated to $F$. We will impose Dirichlet boundary data $u_0 \in W^{1,p}(\mathbb{R}^d;\mathbb{R}^d)$ on a subset $\partial_D \Omega \subset \partial \Omega$. For technical reasons, we suppose that ${\partial \Omega}={\partial_D \Omega}\cup {\partial_N \Omega}\cup N$ with ${\partial_D \Omega}$ and ${\partial_N \Omega}$ relatively open, ${\partial_D \Omega} \cap {\partial_N \Omega} =\emptyset$, $\mathcal{H}^{d-1}(N)=0$,
${\partial_D \Omega} \neq \emptyset$, $\partial({\partial_D \Omega})=\partial({\partial_N \Omega})$, and
that there exist a small $\overline\delta >0 $ and $x_0\in \mathbb{R}^d$ such that for every $\delta \in (0,\overline\delta)$ there holds
\begin{equation}\label{0807170103}
O_{\delta,x_0}({\partial_D \Omega}) \subset \Omega\,,
\end{equation}
where $O_{\delta,x_0}(x):=x_0+(1-\delta)(x-x_0)$. (These assumptions are related to Lemma \ref{le:0410191844} below.) In the following, we denote by ${\rm tr}(u)$ the trace of $u$ on $\partial \Omega$ which is well defined for functions in $GSBD^p(\Omega)$, see Section \ref{sec:prel}. In particular, it is well defined for functions $u$ considered in \eqref{eq: F functional} satisfying $u|_{\Omega\setminus \overline E} \in W^{1,p}( \Omega \setminus \overline E;\mathbb{R}^d)$ and $u|_E=0$. By $\nu_\Omega$ we denote the outer unit normal to $\partial \Omega$.
We now introduce a version of $F$ taking boundary data into account. Given $u_0 \in W^{1,p}(\mathbb{R}^d;\mathbb{R}^d)$, we set
\begin{equation}\label{eq: FDir functional}
F_{\mathrm{Dir}}(u,E) =
\begin{cases}
F(u,E) + \int_{{\partial_D \Omega} \cap \partial E} \varphi(\nu_E) \, \mathrm{d}\mathcal{H}^{d-1} & \text{if } \ \ \mathrm{tr}(u) =\mathrm{tr}(u_0) \text{ on } \partial_D\Omega \setminus \overline E, \\
+\infty &\text{otherwise.}
\end{cases}
\end{equation}
Similar to \eqref{eq: first-enve}, we define the lower semicontinuous envelope $\overline F_{\mathrm{Dir}}$ by
\begin{equation}\label{eq: Fdir-rela}
\overline{F}_{\mathrm{Dir}}(u,E) = \inf\Big\{ \liminf_{n\to\infty} F_{\mathrm{Dir}}(u_n,E_n)\colon \, u_n \to u \text{ in }L^0(\Omega;\mathbb{R}^d) \text{ and } \chi_{E_n} \to \chi_E \text{ in }L^1(\Omega) \Big\}\,.
\end{equation}
We have the following characterization.
\begin{theorem}[Characterization of the lower semicontinuous envelope $\overline{F}_{\mathrm{Dir}}$]\label{thm:relFDir}
Suppose that $f$ is convex and satisfies \eqref{eq: growth conditions}, that $\varphi$ is a norm on $\mathbb{R}^d$, and that \eqref{0807170103} is satisfied. Then there holds
\begin{equation}\label{1807191836}
\overline{F}_{\mathrm{Dir}}(u,E) =
\overline{F}(u,E) + \int\limits_{{\partial_D \Omega} \cap \partial^* E} \hspace{-0.5cm} \varphi (\nu_E) \, \mathrm{d} \hd + \int\limits_{ \{ \mathrm{tr}(u) \neq \mathrm{tr}(u_0) \} \cap ({\partial_D \Omega} \setminus \partial^* E) } \hspace{-0.5cm} 2 \, \varphi( \nu_\Omega ) \, \mathrm{d} \hd\,.
\end{equation}
Moreover, if $\mathcal{L}^d(E)>0$, then for any $(u,E) \in L^0(\Omega;\mathbb{R}^d){\times}\mathfrak{M}(\Omega)$ there exists a recovery sequence $(u_n,E_n)_n\subset L^0(\Omega;\mathbb{R}^d){\times}\mathfrak{M}(\Omega)$ such that $\mathcal{L}^d(E_n) = \mathcal{L}^d(E)$ for all $n \in \mathbb{N}$.
\end{theorem}
The proof of Proposition~\ref{prop:relF} and Theorem~\ref{thm:relFDir} will be given in Subsection \ref{sec: sub-voids}. There, we also provide two slight generalizations, see Proposition~\ref{prop:relFinfty} and Theorem~\ref{thm:relFDirinfty}, namely relaxations with respect to a weaker convergence in a general space $GSBD^p_\infty$ (cf.\ \eqref{eq: compact extension}), where functions are allowed to attain the value infinity.
We close this subsection with an existence result for $\overline{F}_{\mathrm{Dir}}$, under a volume constraint for the voids.
\begin{theorem}[Existence of minimizers for $\overline{F}_{\mathrm{Dir}}$]\label{th: relF-extended}
Suppose that $f$ is convex and satisfies \eqref{eq: growth conditions}, and that $\varphi$ is a norm on $\mathbb{R}^d$. Let $m>0$. Then the minimization problem
$$
\inf \big\{ \overline{F}_{\mathrm{Dir}}(u,E)\colon (u,E) \in L^0(\Omega;\mathbb{R}^d){\times}\mathfrak{M}(\Omega), \ \mathcal{L}^d(E) = m \big\}
$$
admits solutions.
\end{theorem}
For the proof, we refer to Subsection \ref{sec: Fcomp}. It relies on the lower semicontinuity of $\overline{F}_{\mathrm{Dir}}$ and a compactness result in the general space $GSBD^p_\infty$ (cf.\ \eqref{eq: compact extension}), see Theorem~\ref{thm:compF}.
\subsection{Energies on domains with a subgraph constraint: epitaxially strained films}\label{sec: results2}
We now consider the problem of displacement fields in a material domain which is the subgraph of an unknown nonnegative function $h$. Assuming that $h$ is defined on a Lipschitz domain $\omega \subset \mathbb{R}^{d-1}$, displacement fields $u$ will be defined on the
subgraph
$$\Omega_h^+ := \{ x \in \omega \times \mathbb{R} \colon 0 < x_d < h(x')\},$$
where here and in the following we use the notation $x = (x',x_d)$ for $x \in \mathbb{R}^d$. To model Dirichlet boundary data at the flat surface $\omega \times \lbrace 0 \rbrace$, we will suppose that functions are extended to the set $\Omega_h := \{ x \in \omega \times \mathbb{R} \colon -1 < x_d < h(x')\}$ and satisfy $u = u_0$ on $\omega{\times}(-1,0)$ for a given function $u_0 \in W^{1,p}(\omega{\times}(-1,0);\mathbb{R}^d)$, $p>1$. In the application to epitaxially strained films, $u_0$ represents the substrate and $h$ represents the profile of the free surface of the film.
For convenience, we introduce the reference domain $\Omega:=\omega{\times} (-1, M+1)$ for $M>0$. We define the energy functional $G\colon L^0(\Omega;\mathbb{R}^d) \times L^1(\omega;[0,M]) \to \mathbb{R} \cup \lbrace + \infty \rbrace$ by
\begin{equation}\label{eq: Gfunctional}
G(u,h) =
\int_{\Omega_h^+} f(e(u(x))) \, \, \mathrm{d} x + \int_{\omega} \sqrt{1 + |\nabla h(x')|^2} \, \, \mathrm{d} x'
\end{equation}
if $h \in C^1(\omega;[0,M])$, $u|_{\Omega_h} \in W^{1,p}(\Omega_h;\mathbb{R}^d)$, $u=0$ in $\Omega\setminus \Omega_h$, and $u=u_0$ in $\omega{\times}(-1,0)$, and $G(u,h) = +\infty$ otherwise. Here, $f\colon \mathbb{M}^{d\times d}\to [0,\infty) $ denotes a convex function satisfying \eqref{eq: growth conditions}, and as before we set $e(u) := \frac{1}{2}\big((\nabla u)^T + \nabla u\big)$.
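The surface term in \eqref{eq: Gfunctional} is the $\mathcal{H}^{d-1}$-measure of the graph of $h$, computed via the area formula. A quick numerical sanity check in one dimension, for the illustrative (not prescribed) profile $h(x)=\cosh x$ on $(0,1)$, where $1+\sinh^2 x = \cosh^2 x$ gives the exact graph length $\int_0^1 \cosh x \,\mathrm{d}x = \sinh 1$:

```python
import numpy as np

# Trapezoidal approximation of the graph-area integrand sqrt(1 + |h'|^2)
# for the sample profile h(x) = cosh(x) on (0,1); the exact value is sinh(1).
x = np.linspace(0.0, 1.0, 200001)
integrand = np.sqrt(1.0 + np.sinh(x) ** 2)
length = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
print(np.isclose(length, np.sinh(1.0)))   # True
```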
Notice that, in contrast to \cite{BonCha02}, we suppose that the functions $h$ are equibounded by a value $M$: this is for technical reasons only and is indeed justified from a mechanical point of view since other effects
come into play for very high crystal profiles.
We study the relaxation of $G$ with respect to the
$L^0(\Omega;\mathbb{R}^d){\times}L^1(\omega; [0,M])$ topology, i.e., its
lower semicontinuous envelope $\overline G\colon L^0(\Omega;\mathbb{R}^d) \times L^1(\omega;[0, M ]) \to \mathbb{R} \cup \lbrace+ \infty \rbrace$,
defined as
\begin{equation*}
\overline{G}(u,h) = \inf\big\{ \liminf\nolimits_{n\to\infty} G(u_n,h_n)\colon \, u_n \to u \text { in } L^0(\Omega;\mathbb{R}^d), \ \ h_n \to h \text { in } L^1(\omega) \big\} \,.
\end{equation*}
We characterize $\overline G$ as follows, further assuming that the Lipschitz set $\omega \subset \mathbb{R}^{d-1}$ is uniformly star-shaped with respect to the origin, i.e.,
\begin{align}\label{eq: star-shaped}
tx \in \omega \ \ \ \text{for all} \ \ t \in (0,1), \, x \in \partial \omega.
\end{align}
\begin{theorem}[Characterization of the lower semicontinuous envelope $\overline{G}$]\label{thm:relG}
Suppose that $f$ is convex and satisfies \eqref{eq: growth conditions} and that \eqref{eq: star-shaped} holds. Then we have
\begin{equation*}
\overline{G}(u,h) =
\begin{dcases}
\int_{\Omega_h^+} f(e(u))\,& \hspace{-1.0em}\, \mathrm{d} x +\mathcal{H}^{d-1}(\partial^* \Omega_h \cap \Omega ) + 2 \mathcal{H}^{d-1}(J_u' \cap \Omega_h^1) \\
& \hspace{-0.4cm} \text{if } u= u \chi_{ \Omega_h } \in GSBD^p(\Omega), \, u=u_0 \text{ in }\omega{\times}(-1,0),\, h \in BV (\omega; [0,M] )\,, \\
+\infty &\hspace{-0.4cm} \hspace{-0.4em}\text{ otherwise,}
\end{dcases}
\end{equation*}
where
\begin{align}\label{eq: Ju'}
J_u' := \lbrace (x',x_d + t)\colon \, x \in J_u, \, t \ge 0 \rbrace.
\end{align}
\end{theorem}
The assumption \eqref{eq: star-shaped} on $\omega$ is more general than the one considered in \cite{ChaSol07}, where $\omega$ is
assumed to be a torus. We point out, however, that both assumptions are only of technical nature and could be dropped at the expense of more elaborate estimates, see also \cite{ChaSol07}. The proof of this result will be given in Subsection~\ref{subsec:RelG}.
We note that the functional $G$ could be considered with an additional volume constraint on the film, i.e., $\mathcal{L}^d(\Omega_h^+) = \int_\omega h(x') \, \mathrm{d} x'$ is fixed.
An easy adaptation of the proof shows that the relaxed functional $\overline G$ is not changed under this constraint,
see Remark~\ref{rem: volume constraint} for details.
In Subsection~\ref{subsec:compactness}, we further prove the following
general compactness result, from which we deduce the existence of equilibrium configurations for epitaxially strained films.
\begin{theorem}[Compactness
for $\overline G$]\label{thm:compG}
Suppose that $f$ is convex and satisfies \eqref{eq: growth conditions}.
For any $(u_n,h_n)_n$ with $\sup_{n} G(u_n,h_n) <+\infty$, there exist a subsequence (not relabeled) and functions $u \in GSBD^p(\Omega)$, $h \in BV(\omega;[0,M])$ with $u = u\chi_{\Omega_h}$ and $u=u_0$ on $\omega \times (-1,0)$ such that
\begin{equation*}
(u_n, h_n) \to (u, h) \quad \text{ in } \quad
L^0(\Omega;\mathbb{R}^d){\times} L^1(\omega) \,.
\end{equation*}
\end{theorem}
In particular, general properties of relaxation (see e.g.\ \cite[Theorem~3.8]{DMLibro}) imply that, given $0<m<M\mathcal{H}^{d-1}(\omega)$, the minimization problem
\begin{align}\label{eq: minimization problem2}
\inf \Big\{ \overline{G}(u,h)\colon (u,h) \in L^0(\Omega;\mathbb{R}^d) \times L^1(\omega), \ \mathcal{L}^d(\Omega_h^+) = m \Big\}
\end{align}
admits solutions. Moreover, for fixed $m$ and under the volume constraint $\mathcal{L}^d(\Omega_h^+) = m$ for both $G$ and $\overline G$, any cluster point of minimizing sequences of $G$ is a minimum point of $\overline G$.
Our final issue is a phase-field approximation of $\overline G$. The idea is to represent any subgraph $\Omega_h$ by a (regular) function $v$ which will be an approximation of the characteristic function $\chi_{\Omega_h}$ at a scale of order $\varepsilon$. Let $W \colon [0,1] \to [0,\infty) $ be continuous, with $W(1)=W(0)=0$, $W>0$ in $(0,1)$, and let $(\eta_\varepsilon)_\varepsilon$ with $\eta_\varepsilon>0$ and $\eta_\varepsilon \varepsilon^{1-p} \to 0$ as $\varepsilon \to 0$. Let $c_W:=(\int_0^1 \sqrt{2 W(s)} \,\mathrm{d} s)^{-1}$. In the reference domain $\Omega=\omega{\times}(-1,M+1)$, we introduce the functionals
\begin{equation}\label{eq: phase-approx}
G_\varepsilon(u,v):=\int_\Omega \bigg( (v^2+\eta_\varepsilon) f(e(u)) + c_W\Big(\frac{W(v)}{\varepsilon} + \frac{\varepsilon}{2} |\nabla v|^2 \Big) \bigg)\, \mathrm{d} x\,,
\end{equation}
if
\[ {u \in W^{1,p}(\Omega; \mathbb{R}^d)\,,\quad u=u_0 \text{ in }\omega{\times} (-1,0) \,,}\]
\[
v \in H^1(\Omega; [0,1])\,,\quad v=1\text{ in }\omega{\times}(-1,0)\,, \, v=0\text{ in }\omega{\times}(M,M+1)\,, \quad \partial_d v \leq 0 \,\, \mathcal{L}^d\text{-a.e.\ in }\Omega\,,
\]
and $G_\varepsilon(u,v):=+\infty$ otherwise.
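To make the normalization $c_W$ concrete: for the illustrative double well $W(s)=s^2(1-s)^2$ (our own sample choice, not imposed by the assumptions on $W$), one has $\int_0^1 \sqrt{2W(s)}\,\mathrm{d}s = \sqrt{2}\int_0^1 s(1-s)\,\mathrm{d}s = \sqrt{2}/6$, hence $c_W = 3\sqrt{2}$. A minimal numerical check:

```python
import numpy as np

# c_W = (∫_0^1 sqrt(2 W(s)) ds)^{-1} for the sample double well
# W(s) = s^2 (1-s)^2; the closed-form value is c_W = 3*sqrt(2).
s = np.linspace(0.0, 1.0, 100001)
f = np.sqrt(2.0 * s**2 * (1.0 - s) ** 2)
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))   # trapezoidal rule
c_W = 1.0 / integral
print(np.isclose(c_W, 3.0 * np.sqrt(2.0)))   # True
```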
The following phase-field approximation is the analog of \cite[Theorem~5.1]{ChaSol07} in the frame of linear elasticity. We remark that here, differently from \cite{ChaSol07}, we assume only $u_0 \in W^{1,p}( \omega \times (-1,0); \mathbb{R}^d)$, and not necessarily $u_0 \in L^\infty(\omega \times (-1,0);\mathbb{R}^d)$. For the proof we refer to Subsection~\ref{sec:phasefield}.
\begin{theorem}\label{thm:phasefieldG}
Let $u_0 \in W^{1,p}(\omega \times (-1,0);\mathbb{R}^d)$. For any decreasing sequence $(\varepsilon_n)_n$ of positive numbers converging to zero, the following hold:
\begin{itemize}
\item[(i)]
For any $(u_n, v_n)_n$ with $\sup_n G_{\varepsilon_n}(u_n, v_n)< +\infty$, there exist $u \in L^0(\Omega;\mathbb{R}^d)$ and $h\in BV(\omega; [0,M] )$ such that, up to a subsequence, $u_n \to u$ a.e.\ in $\Omega$, $v_n \to \chi_{\Omega_h}$ in $L^1(\Omega)$, and
\begin{equation}\label{1805192018}
\overline G(u,h) \leq \liminf_{n\to +\infty}G_{\varepsilon_n}(u_n, v_n)\,.
\end{equation}
\item[(ii)]For any $(u,h)$ with $\overline G(u, h) < + \infty $, there exists $(u_n, v_n)_n$ such that $u_n \to u$ a.e.\ in $\Omega$, $v_n \to \chi_{\Omega_h}$ in $L^1(\Omega)$, and
$$
\limsup_{n\to \infty} G_{\varepsilon_n}(u_n, v_n)= \overline G(u,h)\,.
$$
\end{itemize}
\end{theorem}
\section{Preliminaries}\label{sec:prel}
In this section, we recall the definition and main properties of the function space $GSBD^p$. Moreover, we introduce the space $GSBD^p_\infty$ of functions which may attain the value infinity.
\subsection{Notation}
For every $x\in \mathbb{R}^d$ and $\varrho>0$, let $B_\varrho(x) \subset \mathbb{R}^d$ be the open ball with center $x$ and radius $\varrho$. For $x$, $y\in \mathbb{R}^d$, we use the notation $x\cdot y$ for the scalar product and $|x|$ for the Euclidean norm. By ${\mathbb{M}^{d\times d}}$ and ${\mathbb{M}^{d\times d}_{\rm sym}}$ we denote the set of matrices and symmetric matrices, respectively. We write $\chi_E$ for the indicator function of any $E\subset \mathbb{R}^d$, which is 1 on $E$ and 0 otherwise. If $E$ is a set of finite perimeter, we denote its essential boundary by $\partial^* E$, and by $E^s$ the set of points with density $s$ for $E$, see \cite[Definition 3.60]{AFP}.
We indicate the minimum and maximum value between $a, b \in \mathbb{R}$ by $a \wedge b$ and $a \vee b$, respectively. The symmetric difference of two sets $A,B \subset \mathbb{R}^d$ is indicated by $A \triangle B$.
We denote by $\mathcal{L}^d$ and $\mathcal{H}^k$ the $n$-dimensional Lebesgue measure and the $k$-dimensional Hausdorff measure, respectively. For any locally compact subset $B \subset \mathbb{R}d$, (i.e.\ any point in $B$ has a neighborhood contained in a compact subset of $B$),
the space of bounded $\mathbb{R}^m$-valued Radon measures on $B$ [respectively, the space of $\mathbb{R}^m$-valued Radon measures on $B$] is denoted by $\mathcal{M}_b(B;\mathbb{R}^m)$ [resp., by $\mathcal{M}(B;\mathbb{R}^m)$]. If $m=1$, we write $\mathcal{M}_b(B)$ for $\mathcal{M}_b(B;\mathbb{R})$, $\mathcal{M}(B)$ for $\mathcal{M}(B;\mathbb{R})$, and $\mathcal{M}^+_b(B)$ for the subspace of positive measures of $\mathcal{M}_b(B)$. For every $\mu \in \mathcal{M}_b(B;\mathbb{R}^m)$, its total variation is denoted by $|\mu|(B)$. Given $\Omega \subset \mathbb{R}^d$ open, we use the notation
$L^0(\Omega;\mathbb{R}^d)$ for the space of $\mathcal{L}^d$-measurable functions $v \colon \Omega \to \mathbb{R}^d$.
\begin{definition}
Let $E\subset \mathbb{R}^d$, $v \in L^0(E;\mathbb{R}^m)$, and $x\in \mathbb{R}^d$ such that
\begin{equation*}
\limsup_{\varrho\to 0^+}\frac{\mathcal{L}^d(E\cap B_\varrho(x))}{\varrho^{ d }}>0\,.
\end{equation*}
A vector $a\in \mathbb{R}^m$ is the \emph{approximate limit} of $v$ as $y$ tends to $x$ if for every $\varepsilon>0$ there holds
\begin{equation*}
\lim_{\varrho \to 0^+}\frac{\mathcal{L}^d(E \cap B_\varrho(x)\cap \{|v-a|>\varepsilon\})}{ \varrho^d }=0\,,
\end{equation*}
and then we write
\begin{equation*}
\aplim \limits_{y\to x} v(y)=a\,.
\end{equation*}
\end{definition}
\begin{definition}
Let $U\subset \mathbb{R}^d$ be open and $v \in L^0(U;\mathbb{R}^m)$. The \emph{approximate jump set} $J_v$ is the set of points $x\in U$ for which there exist $a$, $b\in \mathbb{R}^m$, with $a \neq b$, and $\nu\in {\mathbb{S}^{d-1}}$ such that
\begin{equation*}
\aplim\limits_{(y-x)\cdot \nu>0,\, y \to x} v(y)=a\quad\text{and}\quad \aplim\limits_{(y-x)\cdot \nu<0, \, y \to x} v(y)=b\,.
\end{equation*}
The triplet $(a,b,\nu)$ is uniquely determined up to a permutation of $(a,b)$ and a change of sign of $\nu$, and is denoted by $(v^+(x), v^-(x), \nu_v(x))$. The jump of $v$ is the function
defined by $[v](x):=v^+(x)-v^-(x)$ for every $x\in J_v$.
\end{definition}
We note that $J_v$ is a Borel set with $\mathcal{L}^d(J_v)=0$, and that $[v]$ is a Borel function.
\subsection{$BV$ and $BD$ functions}
Let $U\subset \mathbb{R}^d$ be open. We say that a function $v\in L^1(U)$ is a \emph{function of bounded variation} on $U$, and we write $v\in BV(U)$, if $\mathrm{D}_i v\in \mathcal{M}_b(U)$ for $i=1,\dots,d$, where $\mathrm{D}v=(\mathrm{D}_1 v,\dots, \mathrm{D}_d v)$ is its distributional derivative. A vector-valued function $v\colon U\to \mathbb{R}^m$ is in $BV(U;\mathbb{R}^m)$ if $v_j\in BV(U)$ for every $j=1,\dots, m$.
The space $BV_{\mathrm{loc}}(U)$ is the space of $v\in L^1_{\mathrm{loc}}(U)$ such that $\mathrm{D}_i v\in \mathcal{M}(U)$ for $i=1,\dots,d$.
A function $v\in L^1(U;\mathbb{R}^d)$ belongs to the space $BD(U)$ of \emph{functions of bounded deformation} if
the distribution
$\mathrm{E}v := \frac{1}{2}((\mathrm{D}v)^T + \mathrm{D}v )$ belongs to $\mathcal{M}_b(U;{\mathbb{M}^{d\times d}_{\rm sym}})$.
It is well known (see \cite{AmbCosDM97, Tem}) that for $v\in BD(U)$, $J_v$ is countably $(\mathcal{H}^{d-1}, d-1)$ rectifiable, and that
\begin{equation*}
\mathrm{E}v=\mathrm{E}^a v+ \mathrm{E}^c v + \mathrm{E}^j v\,,
\end{equation*}
where $\mathrm{E}^a v$ is absolutely continuous with respect to $\mathcal{L}^d$, $\mathrm{E}^c v$ is singular with respect to $\mathcal{L}^d$ and such that $|\mathrm{E}^c v|(B)=0$ if $\mathcal{H}^{d-1}(B)<\infty$, while $\mathrm{E}^j v$ is concentrated on $J_v$. The density of $\mathrm{E}^a v$ with respect to $\mathcal{L}^d$ is denoted by $e(v)$.
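As an elementary illustration (a standard fact, not specific to the references above) of why the symmetric part of the gradient carries strictly less information than the full gradient, note that infinitesimal rigid motions lie in the kernel of $\mathrm{E}$: if
\begin{equation*}
v(x) = Ax + b \quad \text{with } A \in {\mathbb{M}^{d\times d}} \text{ skew-symmetric and } b \in \mathbb{R}^d\,,
\end{equation*}
then $\mathrm{D}v = A\,\mathcal{L}^d$ and hence $\mathrm{E}v = \tfrac{1}{2}(A + A^T)\,\mathcal{L}^d = 0$. In particular $e(v)=0$, so $\mathrm{E}v$ determines $v$ only up to such affine maps.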
The space $SBD(U)$ is the subspace of all functions $v\in BD(U)$ such that $\mathrm{E}^c v=0$. For $p\in (1,\infty)$, we define
\begin{equation*}
SBD^p(U):=\{v\in SBD(U)\colon e(v)\in L^p(U;{\mathbb{M}^{d\times d}_{\rm sym}}),\, \mathcal{H}^{d-1}(J_v)<\infty\}\,.
\end{equation*}
Analogous properties hold for $BV$, such as the countable rectifiability of the jump set and the decomposition of $\mathrm{D}v$. The spaces $SBV(U;\mathbb{R}^m)$ and $SBV^p(U;\mathbb{R}^m)$ are defined similarly, with $\nabla v$, the density of $\mathrm{D}^a v$, in place of $e(v)$.
For a complete treatment of $BV$, $SBV$ functions and $BD$, $SBD$ functions, we refer to \cite{AFP} and to \cite{AmbCosDM97, BelCosDM98, Tem}, respectively.
\subsection{$GBD$ functions}
We now recall the definition and the main properties of the space $GBD$ of \emph{generalized functions of bounded deformation}, introduced in \cite{DM13}, referring to that paper for a general treatment and more details. Since the definition of $GBD$ is given by slicing (differently from the definition of $GBV$, cf.~\cite{Amb90GSBV, DeGioAmb88GBV}), we first need to introduce some notation. For fixed $\xi \in {\mathbb{S}^{d-1}}:=\{\xi \in \mathbb{R}^d\colon |\xi|=1\}$, we let
\begin{equation}\label{eq: vxiy2}
\Pi^\xi:=\{y\in \mathbb{R}^d\colon y\cdot \xi=0\},\qquad B^\xi_y:=\{t\in \mathbb{R}\colon y+t\xi \in B\} \ \ \ \text{ for any $y\in \mathbb{R}^d$ and $B\subset \mathbb{R}^d$}\,,
\end{equation}
and for every function $v\colon B\to \mathbb{R}^d $ and $t\in B^\xi_y$ let
\begin{equation}\label{eq: vxiy}
v^\xi_y(t):=v(y+t\xi),\qquad \widehat{v}^\xi_y(t):=v^\xi_y(t)\cdot \xi\,.
\end{equation}
\begin{definition}[\cite{DM13}]
Let $\Omega\subset \mathbb{R}^d$ be a bounded open set, and let $v \in L^0(\Omega;\mathbb{R}^d)$. Then $v\in GBD(\Omega)$ if there exists $\lambda_v\in \mathcal{M}^+_b(\Omega)$ such that one of the following equivalent conditions holds true
for every
$\xi \in {\mathbb{S}^{d-1}}$:
\begin{itemize}
\item[(a)] for every $\tau \in C^1(\mathbb{R})$ with $-\tfrac{1}{2}\leq \tau \leq \tfrac{1}{2}$ and $0\leq \tau'\leq 1$, the partial derivative $\mathrm{D}_\xi\big(\tau(v\cdot \xi)\big)=\mathrm{D}\big(\tau(v\cdot \xi)\big)\cdot \xi$ belongs to $\mathcal{M}_b(\Omega)$, and for every Borel set $B\subset \Omega$
\begin{equation*}
\big|\mathrm{D}_\xi\big(\tau(v\cdot \xi)\big)\big|(B)\leq \lambda_v(B);
\end{equation*}
\item[(b)] $\widehat{v}^\xi_y \in BV_{\mathrm{loc}}(\Omega^\xi_y)$ for $\mathcal{H}^{d-1}$-a.e.\ $y\in \Pi^\xi$, and for every Borel set $B\subset \Omega$
\begin{equation*}
\int_{\Pi^\xi} \Big(\big|\mathrm{D} {\widehat{v}}_y^\xi\big|\big(B^\xi_y\setminus J^1_{{\widehat{v}}^\xi_y}\big)+ \mathcal{H}^0\big(B^\xi_y\cap J^1_{{\widehat{v}}^\xi_y}\big)\Big)\, \mathrm{d} \hd(y)\leq \lambda_v(B)\,,
\end{equation*}
where
$J^1_{{\widehat{v}}^\xi_y}:=\left\{t\in J_{{\widehat{v}}^\xi_y} : |[{\widehat{v}}_y^\xi]|(t) \geq 1\right\}$.
\end{itemize}
The function $v$ belongs to $GSBD(\Omega)$ if $v\in GBD(\Omega)$ and $\widehat{v}^\xi_y \in SBV_{\mathrm{loc}}(\Omega^\xi_y)$ for
every
$\xi \in {\mathbb{S}^{d-1}}$ and for $\mathcal{H}^{d-1}$-a.e.\ $y\in \Pi^\xi$.
\end{definition}
$GBD(\Omega)$ and $GSBD(\Omega)$ are vector spaces, as stated in \cite[Remark~4.6]{DM13}, and one has the inclusions $BD(\Omega)\subset GBD(\Omega)$, $SBD(\Omega)\subset GSBD(\Omega)$, which are in general strict (see \cite[Remark~4.5 and Example~12.3]{DM13}).
Every $v\in GBD(\Omega)$ has an \emph{approximate symmetric gradient} $e(v)\in L^1(\Omega;{\mathbb{M}^{d\times d}_{\rm sym}})$ such that for every $\xi \in {\mathbb{S}^{d-1}}$ and $\mathcal{H}^{d-1}$-a.e.\ $y\in\Pi^\xi$ there holds
\begin{equation}\label{3105171927}
e(v)(y + t\xi) \xi \cdot \xi= (\widehat{v}^\xi_y)'(t) \quad\text{for } \mathcal{L}^1\text{-a.e.\ } t \in \Omega^\xi_y\,.
\end{equation}
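As an elementary numerical sanity check of the slicing identity \eqref{3105171927} (a sketch, not part of \cite{DM13}; all helper names are illustrative), one can verify it for the smooth linear field $v(x)=Mx$, for which $e(v)=\frac{1}{2}(M+M^T)$ is constant and every slice $\widehat{v}^\xi_y$ is affine in $t$:

```python
# Check e(v)(y + t*xi) xi . xi = (v-hat_y^xi)'(t) for the linear field v(x) = M x.

def sym_part(M):
    d = len(M)
    return [[0.5 * (M[i][j] + M[j][i]) for j in range(d)] for i in range(d)]

def quad_form(A, xi):
    d = len(A)
    return sum(A[i][j] * xi[i] * xi[j] for i in range(d) for j in range(d))

def vhat(M, y, xi, t):
    """The slice t -> v(y + t*xi) . xi for v(x) = M x."""
    d = len(M)
    x = [y[k] + t * xi[k] for k in range(d)]
    vx = [sum(M[i][j] * x[j] for j in range(d)) for i in range(d)]
    return sum(vx[k] * xi[k] for k in range(d))

M = [[1.0, 2.0, 0.5],
     [0.0, -1.0, 3.0],
     [4.0, 0.25, 2.0]]
xi = [0.6, 0.8, 0.0]        # a unit direction
y = [-0.8, 0.6, 1.0]        # a point of the hyperplane xi-perp (y . xi = 0)

h = 1e-6
lhs = quad_form(sym_part(M), xi)   # e(v) xi . xi, constant in x for linear v
rhs = (vhat(M, y, xi, 0.5 + h) - vhat(M, y, xi, 0.5 - h)) / (2 * h)
assert abs(lhs - rhs) < 1e-6
```

Since the slice is affine, the central difference is exact up to rounding, and both sides equal $\xi^T M \xi$.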
We recall also that by the area formula (cf.\ e.g.\ \cite[(12.4)]{Sim84}; see \cite[Theorem~4.10]{AmbCosDM97} and \cite[Theorem~8.1]{DM13}) it follows that for any $\xi \in {\mathbb{S}^{d-1}}$
\begin{subequations}\label{2304191254}
\begin{equation}\label{2304191254-1}
(J^\xi_v)^\xi_y = J_{\widehat{v}^\xi_y} \ \ \text{for $\mathcal{H}^{d-1}$-a.e.\ $y \in \Pi^\xi$,} \ \ \text{where} \ \ J_v^\xi:= \{ x \in J_v \colon [v](x) \cdot \xi \neq 0\}\,,
\end{equation}
\begin{equation}
\int_{\Pi^\xi} \mathcal{H}^{0}(J_{\widehat{v}^\xi_y}) \, \mathrm{d} \hd(y) = \int_{J_v^ \xi} |\nu_v \cdot \xi| \, \mathrm{d} \hd\,.
\end{equation}
\end{subequations}
Moreover, there holds
\begin{equation}\label{2304191637}
\mathcal{H}^{d-1}(J_v \setminus J_v^\xi)=0 \qquad\text{for }\mathcal{H}^{d-1}\text{-a.e.\ }\xi \in {\mathbb{S}^{d-1}}\,.
\end{equation}
Finally, if $\Omega$ has Lipschitz boundary, for each $v\in GBD(\Omega)$ the traces on $\partial \Omega$ are well defined in the sense that for $\mathcal{H}^{d-1}$-a.e.\ $x \in \partial\Omega$ there exists ${\rm tr}(v)(x) \in \mathbb{R}^d$ such that
$$\aplim\limits_{y \to x, \ y \in \Omega} v(y) = {\rm tr}(v)(x). $$
For $1 < p < \infty$, the space $GSBD^p(\Omega)$ is defined by
\begin{equation*}
GSBD^p(\Omega):=\{u\in GSBD(\Omega)\colon e(u)\in L^p(\Omega;{\mathbb{M}^{d\times d}_{\rm sym}}),\, \mathcal{H}^{d-1}(J_u)<\infty\}\,.
\end{equation*}
We recall below two general density and compactness results in $GSBD^p$, from \cite{CC17} and \cite{CC18}.
\begin{theorem}[Density in $GSBD^p$]\label{thm:densityGSBD}
Let $\Omega\subset \mathbb{R}^d$ be an open, bounded set with finite perimeter and with $\partial \Omega$ $(d{-}1)$-rectifiable, let
$ p > 1$, $\psi(t)= t \wedge 1$,
and $u\in GSBD^p(\Omega)$.
Then there exist $u_n\in SBV^p(\Omega;\mathbb{R}^d)\cap L^\infty(\Omega; \mathbb{R}^d)$ such that each
$J_{u_n}$ is closed in $\Omega$ and included in a finite union of closed connected pieces of $C^1$ hypersurfaces, $u_n\in W^{1,\infty}(\Omega\setminus J_{u_n}; \mathbb{R}^d)$, and:
\begin{subequations}\label{eqs:main'}
\begin{align}
\int_\Omega \psi(|u_n - u|) \, \mathrm{d} x & \to 0 \,,\label{1main'}\\
\|e(u_n) - e(u) \|_{L^p(\Omega) } & \to 0 \,, \label{2main'}\\
\mathcal{H}^{d-1}(J_{u_n}\triangle J_u)&\to 0 \,.\label{3main'}
\end{align}
\end{subequations}
\end{theorem}
We refer to \cite[Theorem 1.1]{CC17}. In contrast to \cite{CC17}, we use here the function $\psi(t):= t \wedge 1$ for simplicity. It is indeed easy to check that \cite[(1.1e)]{CC17} implies \eqref{1main'}.
\begin{theorem}[$GSBD^p$ compactness]\label{th: GSDBcompactness}
Let $\Omega \subset \mathbb{R}^d$ be an open, bounded set, and let $(u_n)_n \subset GSBD^p(\Omega)$ be a sequence satisfying
$$ \sup\nolimits_{n\in \mathbb{N}} \big( \Vert e(u_n) \Vert_{L^p(\Omega)} + \mathcal{H}^{d-1}(J_{u_n})\big) < + \infty.$$
Then, there exists a subsequence, still denoted by $(u_n)_n$, such that the set $A := \lbrace x\in \Omega\colon \, |u_n(x)| \to \infty \rbrace$ has finite perimeter, and there exists $u \in GSBD^p(\Omega)$ such that
\begin{align}\label{eq: GSBD comp}
{\rm (i)} & \ \ u_n \to u \ \ \ \ \text{ in } L^0(\Omega \setminus A; \mathbb{R}^d), \notag \\
{\rm (ii)} & \ \ e(u_n) \rightharpoonup e(u) \ \ \ \text{ weakly in } L^p(\Omega \setminus A; {\mathbb{M}^{d\times d}_{\rm sym}}),\notag \\
{\rm (iii)} & \ \ \liminf_{n \to \infty} \mathcal{H}^{d-1}(J_{u_n}) \ge \mathcal{H}^{d-1}(J_u \cup (\partial^*A \cap\Omega) ).
\end{align}
Moreover, for each $\Gamma\subset \Omega$ with $\mathcal{H}^{d-1}(\Gamma) < + \infty$, there holds
\begin{align}\label{eq: with Gamma}
\liminf_{n \to \infty} \mathcal{H}^{d-1}(J_{u_n}\setminus \Gamma) \ge \mathcal{H}^{d-1} \big( (J_u \cup (\partial^*A \cap\Omega) ) \setminus \Gamma \big)\,.
\end{align}
\end{theorem}
\begin{proof}
We refer to \cite{CC18}. The additional statement \eqref{eq: with Gamma} is proved, e.g., in \cite[Theorem 2.5]{FriSol16}.
\end{proof}
Later, as a byproduct of our analysis, we will generalize the lower semicontinuity property \eqref{eq: GSBD comp}(iii) to anisotropic surface energies, see Corollary \ref{cor: GSDB-lsc}.
\subsection{$GSBD^p_\infty$ functions}\label{sec:prel4}
Inspired by the previous compactness result, we now introduce a space of $GSBD^p$ functions which may also attain a limit value $\infty$. Define $\bar{\mathbb{R}}^d := \mathbb{R}^d \cup \lbrace \infty \rbrace$. The sum on $\bar{\mathbb{R}}^d$ is given by $a + \infty = \infty$ for any $a \in \bar{\mathbb{R}}^d$. There is a natural bijection between $\bar{\mathbb{R}}^d$ and $ \mathbb{S}^d =\lbrace \xi \in\mathbb{R}^{d+1}:\,|\xi| =1 \rbrace$ given by the stereographic projection of $\mathbb{S}^{d}$ to $\bar{\mathbb{R}}^d$: for $\xi \neq e_{d+1}$, we define
$$\phi(\xi) = \frac{1}{1-\xi_{d+1}}(\xi_1,\ldots,\xi_d),$$
and let $\phi(e_{d+1}) = \infty$. By $\psi: \bar{\mathbb{R}}^d\to \mathbb{S}^{d}$ we denote the inverse. Note that
\begin{equation}\label{3005191230}
d_{\bar{\mathbb{R}}^d}(x,y):= |\psi(x) - \psi(y)|\quad \text{for }x,y \in \bar{\mathbb{R}}^d\end{equation}
induces a bounded metric on $\bar{\mathbb{R}}^d$. We define
\begin{align}\label{eq: compact extension}
GSBD^p_\infty(\Omega) := \Big\{ &u \in L^0(\Omega;\bar{\mathbb{R}}^d)\colon \, A^\infty_u := \lbrace u = \infty \rbrace \text{ satisfies } \mathcal{H}^{d-1}(\partial^* A^\infty_u)< +\infty, \notag \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \tilde{u}_t := u \chi_{\Omega \setminus A^\infty_u} + t \chi_{A^\infty_u} \in GSBD^p(\Omega) \ \text{ for all $t \in \mathbb{R}^d$} \Big\}.
\end{align}
Symbolically, we will also write
$$u = u \chi_{\Omega \setminus A^\infty_u} + \infty \chi_{A^\infty_u}.$$
Moreover, for any $u \in GSBD^p_\infty(\Omega)$, we set $e(u) = 0$ in $A^\infty_u$, and
\begin{align}\label{eq: general jump}
J_u = J_{u \chi_{\Omega \setminus A^\infty_u}} \cup (\partial^*A^\infty_u \cap \Omega).
\end{align}
In particular, we have
\begin{align}\label{eq:same}
e(u) = e(\tilde{u}_t) \ \ \text{$\mathcal{L}^d$-a.e.\ on $\Omega$} \ \ \ \text{ and } \ \ \ J_u = J_{\tilde{u}_t} \ \ \text{$\mathcal{H}^{d-1}$-a.e.} \ \ \ \text{ for almost all $t \in \mathbb{R}^d$}\,,
\end{align} where $\tilde{u}_t$ is the function from \eqref{eq: compact extension}. Hereby, we also get a natural definition of a normal $\nu_u$ to the jump set $J_u$, and the slicing properties described in \eqref{3105171927}--\eqref{2304191637} still hold. Finally, we point out that all definitions are consistent with the usual ones if $u \in GSBD^p(\Omega)$, i.e., if $A^\infty_u= \emptyset$. Since $GSBD^p(\Omega)$ is a vector space, we observe that the sum of two functions in $GSBD^p_\infty(\Omega)$ lies again in this space.
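As a quick sanity check of the stereographic construction above (a sketch with illustrative names, not part of the formal development; `None` encodes the point $\infty$), one can verify numerically that $\phi$ and $\psi$ are mutually inverse and that the distance \eqref{3005191230} is bounded, with large points close to $\infty$:

```python
import math

D = 2
NORTH = [0.0] * D + [1.0]  # psi(infinity) = e_{d+1}

def psi(x):
    """Inverse stereographic projection: R^D ∪ {∞} -> S^D ⊂ R^{D+1}."""
    if x is None:  # None encodes the point at infinity
        return NORTH
    n2 = sum(t * t for t in x)
    s = 1.0 / (n2 + 1.0)
    return [2.0 * t * s for t in x] + [(n2 - 1.0) * s]

def phi(xi):
    """Stereographic projection S^D -> R^D ∪ {∞}."""
    if xi[-1] >= 1.0 - 1e-15:  # the north pole is sent to infinity
        return None
    s = 1.0 / (1.0 - xi[-1])
    return [s * t for t in xi[:-1]]

def dist(x, y):
    """The bounded metric d(x, y) = |psi(x) - psi(y)| on R^D ∪ {∞}."""
    return math.dist(psi(x), psi(y))

# phi and psi are mutually inverse on a sample point
x = [0.3, -1.2]
assert max(abs(a - b) for a, b in zip(phi(psi(x)), x)) < 1e-12
# the metric is bounded by the diameter 2 of the sphere, even near infinity
assert dist([1e9, 0.0], None) <= 2.0
assert dist([1e9, 0.0], None) < 1e-8  # points with huge norm are close to ∞
```

This makes concrete why sequences blowing up in norm are Cauchy for $\bar{d}$: they converge to the constant $\infty$ in this metric.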
A metric on $GSBD^p_\infty(\Omega)$ is given by
\begin{equation}\label{eq:metricd}
\bar{d}(u,v) := \int_\Omega d_{\bar{\mathbb{R}}^d}(u(x),v(x)) \, \, \mathrm{d} x\,,
\end{equation}
where $d_{\bar{\mathbb{R}}^d}$ is the distance in \eqref{3005191230}. We now state compactness properties in $GSBD^p_\infty(\Omega)$.
\begin{lemma}[Compactness in $GSBD^p_\infty$]\label{eq: corollary-comp}
For $L>0$ and $\Gamma \subset \Omega$ with $\mathcal{H}^{d-1}(\Gamma)<+\infty$, we introduce the sets
\begin{align}\label{eq: XL}
X_L(\Omega) & = \big\{ v \in GSBD^p_\infty(\Omega)\colon \mathcal{H}^{d-1}(J_v ) \le L, \ \ \Vert e(v) \Vert_{L^p(\Omega)} \le 1 \big\}\,,\notag\\
X_\Gamma(\Omega) & = \big\{ v \in GSBD^p_\infty(\Omega)\colon \mathcal{H}^{d-1}(J_v \setminus \Gamma ) = 0, \ \ \Vert e(v) \Vert_{L^p(\Omega)} \le 1 \big\}\,.
\end{align}
Then the sets $X_L(\Omega), X_\Gamma(\Omega) \subset GSBD^p_\infty(\Omega)$ are compact with respect to the metric $\bar{d}$.
\end{lemma}
\begin{proof}
For $X_L(\Omega)$, the statement follows from Theorem \ref{th: GSDBcompactness} and the definitions \eqref{eq: compact extension}--\eqref{eq: general jump}: in fact, given a sequence $(u^n)_n \subset X_L(\Omega)$, we consider a sequence $(\tilde{u}_{t_n}^n)_n \subset GSBD^p(\Omega)$ as in \eqref{eq: compact extension}, for suitable $(t_n)_n \subset \mathbb{R}^d$ with $|t_n| \to \infty$. This implies
\begin{align}\label{eq: t-def}
\bar{d}(u^n,\tilde{u}_{t_n}^n) \to 0 \text{ as } n \to \infty.
\end{align}
Then, by Theorem \ref{th: GSDBcompactness} there exist $v \in GSBD^p(\Omega)$ and $A = \lbrace x \in \Omega\colon \, |\tilde{u}_{t_n}^n(x)| \to \infty \rbrace$ such that $\tilde{u}_{t_n}^n \to v$ in $L^0(\Omega \setminus A;\mathbb{R}^d)$. We define $u = v\chi_{\Omega \setminus A} + \infty \chi_A \in GSBD^p_\infty(\Omega)$. By \eqref{eq: GSBD comp}(ii),(iii) and \eqref{eq: general jump} we get that $u \in X_L(\Omega)$. We observe that $\bar{d}(\tilde{u}_{t_n}^n,u) \to 0$ and then by \eqref{eq: t-def} also $\bar{d}(u^n,u) \to 0$.
The proof for the set $X_\Gamma(\Omega)$ is similar, where we additionally use \eqref{eq: with Gamma} to ensure that $\mathcal{H}^{d-1}(J_u \setminus \Gamma) = 0 $.
\end{proof}
In the next sections, we will use the following notation. We say that a sequence $(u_n)_n \subset GSBD^p_\infty(\Omega)$ \emph{converges weakly} to $u \in GSBD^p_\infty(\Omega)$ if
\begin{align}\label{eq: weak gsbd convergence}
\sup\nolimits_{n\in \mathbb{N}} \big( \Vert e(u_n) \Vert_{L^p(\Omega)} + \mathcal{H}^{d-1}(J_{u_n})\big) < + \infty \ \ \ \text{and} \ \ \ \bar{d}(u_n,u) \to 0 \text{ for } n \to \infty\,.
\end{align}
We close this subsection by pointing out that a similar space has been introduced in \cite{CagColDePMag17}, in the case of scalar-valued functions attaining extended real values: the space $GBV_*(\mathbb{R}^{d})$ consists of those $f \colon \mathbb{R}^{d} \to \mathbb{R} \cup \{ \pm \infty\}$ such that $(-M \vee f)\wedge M \in BV_{\mathrm{loc}}(\mathbb{R}^d)$ for every $M>0$. In \cite[Proposition~3.1]{CagColDePMag17} it is shown that $f \in GBV_*(\mathbb{R}^{d})$ if and only if its epigraph is of locally finite perimeter in $\mathbb{R}^{d+1}$.
Our definition is based on the structure of the set where functions attain infinite values, rather than employing (the analog of) truncations. In fact, the latter is not meaningful if one controls only symmetric gradients.
\section{The $\sigma^p_{\mathrm{\rm sym}}$-convergence of sets}\label{sec:sigmap}
This section is devoted to the introduction of a convergence of sets in the framework of $GSBD^p$ functions,
analogous to the $\sigma^p$-convergence defined in \cite{DMFraToa02} for the space
$SBV^p$.
This type of convergence of sets will be useful to study the lower limits in the relaxation results in Subsection~\ref{subsec:RelG} and the compactness properties in Subsection~\ref{subsec:compactness}. We believe that this notion may be of independent interest and is potentially helpful to study also other problems such as quasistatic crack evolution.
We start by briefly recalling the definition of $\sigma^p$-convergence in \cite{DMFraToa02}: a sequence of sets $(\Gamma_n)_n$ $\sigma^p$-converges to $\Gamma$ if (i) for any sequence $(u_n)_n$ converging to $u$ weakly in $SBV^p$ with $J_{u_n} \subset \Gamma_n$, one has $J_u \subset \Gamma$, and (ii) there exists an $SBV^p$ function whose jump set is $\Gamma$, which is approximated (in the sense of weak convergence in $SBV^p$) by $SBV^p$ functions with jump set included in $\Gamma_n$. (Here, weak convergence in $SBV^p$ means that $\sup_n \big(\|u_n\|_{L^\infty} + \mathcal{H}^{d-1}(J_{u_n})\big) < +\infty$, $\nabla u_n \rightharpoonup \nabla u$ in $L^p$, and $u_n \to u$ almost everywhere.) For sequences of sets $(\Gamma_n)_n$ with $\sup_n \mathcal{H}^{d-1}(\Gamma_n) < +\infty$, a compactness result
with respect to $\sigma^p$-convergence is obtained by means of Ambrosio's compactness theorem \cite{Amb90GSBV}, see \cite[Theorem~4.7]{DMFraToa02} and \cite[Theorem~3.3]{ChaSol07}. We refer to \cite[Section~4.1]{DMFraToa02} for a general motivation to consider such a kind of convergence.
We now introduce the notion of $\sigma^p_{\mathrm{\rm sym}}$-convergence. In the following, we use the notation $A \tilde{\subset} B$ if $\mathcal{H}^{d-1}(A \setminus B) = 0$ and $A \tilde{=} B$ if $A \tilde{\subset} B$ and $B \tilde{\subset} A$. As before, by $(G)^1$ we denote the set of points with density $1$ for $G \subset \mathbb{R}^d$. Recall also the definition and properties of $GSBD^p_\infty$ in Subsection~\ref{sec:prel4}, in particular \eqref{eq: weak gsbd convergence}.
\begin{definition}[$\sigma^p_{\mathrm{\rm sym}}$-convergence]\label{def:spsconv}
Let $ U \subset \mathbb{R}^d$ be open, let $U' \supset U$ be open with $\mathcal{L}^d(U' \setminus U)>0$, and let $p \in (1,\infty)$. We say that a sequence $(\Gamma_n)_n \subset \overline{U}\cap U'$ with $\sup_{n\in \mathbb{N}} \mathcal{H}^{d-1}(\Gamma_n) <+\infty$ $\sigma^p_{\mathrm{\rm sym}}$-converges to a pair $(\Gamma, G_\infty)$ satisfying $\Gamma \subset \overline{U} \cap U'$ together with
\begin{align}\label{eq: limit-prop}
\mathcal{H}^{d-1}(\Gamma) < +\infty, \ \ G_\infty \subset U, \ \ \partial^*G_\infty \cap U' \, \tilde{\subset} \, \Gamma, \ \ \text{ and } \ \ \Gamma \cap (G_\infty)^1 = \emptyset
\end{align}
if there holds:
(i) for any sequence $(v_n)_n \subset GSBD^p_\infty(U')$ with $J_{v_n} \tilde{\subset} \Gamma_n$ and $v_n = 0$ in $U' \setminus U$, if a subsequence $(v_{n_k})_k$ converges weakly in $GSBD^p_\infty(U')$ to $v \in GSBD^p_\infty(U')$, then
$\mathcal{L}^d(\lbrace v = \infty \rbrace \setminus G_\infty) = 0$ and
$J_v \setminus \Gamma \tilde{\subset} ( G_\infty)^1$.
(ii) there exists a function $v \in GSBD^p_\infty(U')$ and a sequence $(v_n)_n \subset GSBD^p_\infty(U')$ converging weakly in $GSBD^p_\infty(U')$ to $v$ such that $ J_{v_n} \tilde{\subset} \Gamma_n$, $v_n = 0$ on $U' \setminus U$ for all $n \in \mathbb{N}$, $J_v \tilde{=} \Gamma$, and $\lbrace v = \infty \rbrace = G_\infty$.
\end{definition}
Our definition deviates from $\sigma^p$-convergence in the sense that, besides a limiting $(d{-}1)$-rectifiable set $\Gamma$, there exists also a set of finite perimeter $G_\infty$. Roughly speaking, in view of $\partial^* G_\infty \subset \Gamma \cup \partial U$, this set represents the parts which are completely disconnected by $\Gamma$ from the rest of the domain. The behavior of functions cannot be controlled there, i.e., a sequence $(v_n)_n$ as in (i) may converge to infinity on this set or exhibit further cracks.
In the framework of $GSBV^p$ functions in \cite{DMFraToa02}, it was possible to avoid such a phenomenon by working with truncations, which allows one to resort to $SBV^p$ functions with uniform $L^\infty$-bounds.
In $GSBD$, however, this truncation technique is not available and we therefore need a more general definition involving the space $GSBD^p_\infty$ and a set of finite perimeter $G_\infty$.
Moreover, due to the presence of the set $G_\infty$, in contrast to the definition of $\sigma^p$-convergence, it is essential to control the functions in a set $U' \setminus U$: the assumptions $\mathcal{L}^d(U' \setminus U)>0$ and $ G_\infty \subset U$ are crucial since otherwise, if $U' = U$, conditions (i) and (ii) would always be trivially satisfied with $G_\infty = U$ and $\Gamma = \emptyset$.
We briefly note that the pair $(\Gamma,G_\infty)$ is unique. In fact, if there were two different limits $(\Gamma^1,G^1_\infty)$ and $(\Gamma^2,G^2_\infty)$, we could choose functions $v^1$ and $v^2$ with $J_{v^1} \tilde{=} \Gamma^1$, $J_{v^2} \tilde{=} \Gamma^2$, $\lbrace v^1 = \infty \rbrace = G^1_\infty$, and $\lbrace v^2 = \infty \rbrace = G^2_\infty$, as well as corresponding sequences $(v^1_n)_n$ and $(v^2_n)_n$ as in (ii). But then (i) implies $\Gamma^1 \setminus \Gamma^2 \tilde{\subset} ( G^2_\infty)^1$, $\Gamma^2 \setminus \Gamma^1 \tilde{\subset} ( G^1_\infty)^1$, as well as $G_\infty^1 \subset G^2_\infty$ and $G_\infty^2 \subset G^1_\infty$. As $\Gamma^i\cap (G_\infty^i)^1 = \emptyset$ for $i=1,2$, this shows $(\Gamma^1,G^1_\infty) = (\Gamma^2,G^2_\infty)$. In a similar way, if a sequence $(\Gamma_n)_n$ $\sigma^p_{\mathrm{\rm sym}}$-converges to $(\Gamma, G_\infty)$, then every subsequence $\sigma^p_{\mathrm{\rm sym}}$-converges to the same limit.
Let us mention that, in our application in Section \ref{sec:GGG}, the sets $\Gamma_n$ will be graphs of functions. In this setting, we will be able to ensure that $G_\infty = \emptyset$, see \eqref{eq: G is empty} below, and thus a simplification of Definition \ref{def:spsconv} only in terms of $\Gamma$ without $G_\infty$ is in principle possible. We believe, however, that the notion of $\sigma^p_{\mathrm{\rm sym}}$-convergence may be of independent interest and is potentially helpful to study also other problems such as quasistatic crack evolution in linear elasticity \cite{FriSol16}, where $G_\infty = \emptyset$ cannot be expected. Therefore, we prefer to treat this more general definition here.
The main goal of this section is to prove the following compactness result for $\sigma^p_{\mathrm{\rm sym}}$-convergence.
\begin{theorem}[Compactness of $\sigma^p_{\mathrm{\rm sym}}$-convergence]\label{thm:compSps}
Let $U \subset \mathbb{R}^d$ be open, let $U' \supset U$ be open with $\mathcal{L}^d(U' \setminus U)>0$, and let $p \in (1,\infty)$. Then, every sequence $(\Gamma_n)_n \subset U$ with $\sup_n \mathcal{H}^{d-1}(\Gamma_n) < + \infty$ has a $\sigma^p_{\mathrm{\rm sym}}$-convergent subsequence with limit $(\Gamma,G_\infty)$ satisfying $\mathcal{H}^{d-1}(\Gamma) \leq \liminf_{n\to\infty} \mathcal{H}^{d-1}(\Gamma_n)$.
\end{theorem}
For the proof, we need the following two auxiliary results.
\begin{lemma}\label{lemma: good function choice}
Let $(v_i)_i \subset GSBD^p(\Omega)$ such that $\|e(v_i)\|_{L^p(\Omega)} \leq 1$ for all $i$ and $\Gamma:= \bigcup_{i=1}^\infty J_{v_i}$ satisfies $\mathcal{H}^{d-1}(\Gamma) < + \infty$. Then there exist constants $c_i>0$, $i \in \mathbb{N}$, such that $\sum_{i=1}^\infty c_i \le 1$ and $v:= \sum\nolimits_{i=1}^\infty c_i v_i \in GSBD^p(\Omega)$ satisfies $J_v \tilde{=} \Gamma$.
\end{lemma}
\begin{lemma}\label{lemma: theta}
Let $V \subset \mathbb{R}^d$ be measurable and suppose that two sequences $(u_n)_n,(v_n)_n \subset L^0(V;\bar{\mathbb{R}}^d)$ satisfy $|u_n|, |v_n| \to \infty$ on $V$. Then for $\mathcal{L}^1$-a.e.\ $\theta \in (0,1)$ there holds
$$|(1-\theta) u_n(x) + \theta v_n(x)| \to \infty\ \ \ \text{for a.e.\ $x \in V$}. $$
\end{lemma}
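The mechanism behind Lemma \ref{lemma: theta} can be seen in a toy one-dimensional example (a sketch with illustrative names, not part of the proof): take $u_n(x) = n\,a(x)$ and $v_n(x) = -n\,b(x)$ with $a,b>0$, so that the convex combination $(1-\theta)u_n + \theta v_n = n\big((1-\theta)a(x) - \theta b(x)\big)$ stays bounded at $x$ only for the single exceptional value $\theta = a(x)/(a(x)+b(x))$:

```python
# Toy illustration: u_n(x) = n*a(x) and v_n(x) = -n*b(x) both blow up on V = (0,1).

def a(x):
    return 1.0 + x * x

def b(x):
    return 2.0 + x

def combination_rate(theta, x):
    """Growth rate of |(1-theta) u_n(x) + theta v_n(x)| / n as n -> infinity."""
    return abs((1.0 - theta) * a(x) - theta * b(x))

# For a fixed theta, the rate vanishes only on the zero set of a polynomial
# in x (a null set), so the combination diverges a.e. on V; here it diverges
# at every sampled point for the "generic" choice theta = 0.37.
theta = 0.37
xs = [k / 1000.0 for k in range(1, 1000)]
assert all(combination_rate(theta, x) > 1e-9 for x in xs)

# Conversely, for each fixed x there is exactly one bad theta.
x0 = 0.5
bad_theta = a(x0) / (a(x0) + b(x0))
assert combination_rate(bad_theta, x0) < 1e-12
```

In the lemma the roles are reversed (the exceptional set of $\theta$, over all $x$, is $\mathcal{L}^1$-null), but the cancellation picture is the same.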
We postpone the proof of the lemmas and proceed with the proof of Theorem \ref{thm:compSps}.
\begin{proof}[Proof of Theorem \ref{thm:compSps}]
For $\Gamma \subset U$ with $\mathcal{H}^{d-1}(\Gamma) < +\infty$ we define
$$X(\Gamma) = \big\{ v \in GSBD^p_\infty(U')\colon J_v \tilde{\subset} \Gamma, \ \ \Vert e(v) \Vert_{L^p(U')} \le 1, \ \ \ v = 0 \text{ on } U' \setminus U \big\}. $$
The set $X(\Gamma)$ is compact with respect to the metric $\bar{d}$ introduced in \eqref{eq:metricd}. This follows from Lemma \ref{eq: corollary-comp} and the fact that $\lbrace v \in L^0( U' ;\bar{\mathbb{R}}^d)\colon \, v = 0 \text{ on } U' \setminus U \rbrace$ is closed with respect to $\bar{d}$.
Since we treat any $v \in GSBD^p_\infty(U')$ as a constant function in the exceptional set $A^\infty_v$ (namely we have no jump and $e(v)=0$ therein, see \eqref{eq:same}), we get that the convex combination of two $v, v'\in X(\Gamma)$ is still in $X(\Gamma)$.
(Recall that the sum on $\bar{\mathbb{R}}^d$ is given by $a + \infty = \infty$ for any $a \in \bar{\mathbb{R}}^d$.)
\noindent \emph{Step 1: Identification of a compact and convex subset.}
Consider $(\Gamma_n)_n \subset U$ with $\sup_{n }\mathcal{H}^{d-1}(\Gamma_n) < + \infty$. Fix $\delta>0$ small and define
\begin{align}\label{eq: delta error}
L:= \liminf_{n \to \infty} \mathcal{H}^{d-1}(\Gamma_n) +\delta\,.
\end{align}
By \eqref{eq: delta error} we have that, up to a subsequence (not relabeled), each $X(\Gamma_n)$ is contained in $X_L(U')$ defined in \eqref{eq: XL}. Moreover, as noticed above, $X_L(U')$ and each $X(\Gamma_n)$ are compact with respect to $\bar{d}$.
Since the class of non-empty compact subsets of a compact metric space $(M,d_M)$ is itself compact with respect to the Hausdorff distance induced by $d_M$, a subsequence (not relabeled) of $(X(\Gamma_n))_n$ converges in the Hausdorff sense (with the Hausdorff distance induced by $\bar{d}$) to a compact set $K \subset X_L(U')$.
We first observe that the function identical to zero lies in $K$. We now show that $K$ is convex. Choose $u,v \in K$ and $\theta \in (0,1)$. We need to check that $w := (1-\theta)u + \theta v \in K$. Observe that $A^\infty_{w} = A^\infty_u \cup A^\infty_v$, where $A^\infty_u$, $A^\infty_v$, and $A^\infty_w$ are the exceptional sets given in \eqref{eq: compact extension}. There exist sequences $(u_n)_n$ and $(v_n)_n$ with $u_n,v_n \in X(\Gamma_n)$ such that $\bar{d}(u_n,u) \to 0$ and $\bar{d}(v_n,v) \to 0$. In particular, note that $|u_n| \to \infty$ on $A^\infty_u$ and $|v_n| \to \infty$ on $A^\infty_v$. By Lemma \ref{lemma: theta} and a diagonal argument we can choose $(\theta_n)_n\subset (0,1)$ with $\theta_n \to \theta$ such that
$w_n := (1-\theta_n)u_n + \theta_n v_n$ satisfies $|w_n| \to \infty$ on $A^\infty_u \cap A^\infty_v$. As clearly $|w_n|\to \infty$ on $A^\infty_u \triangle A^\infty_v$ and $(1-\theta_n)u_n + \theta_n v_n \to (1-\theta)u + \theta v $ in measure on $U' \setminus (A^\infty_u \cup A^\infty_v)$, we get $\bar{d}(w_n,w) \to 0$. Since $X(\Gamma_n)$ is convex, there holds $w_n \in X(\Gamma_n)$. Then $\bar{d}(w_n,w) \to 0$ implies $w \in K$, as desired.
\noindent \emph{Step 2: Choice of dense subset.} Since $K$ is compact with respect to the metric $\bar{d}$ (so, in particular, $K$ is separable), we can choose a countable set $(y_i)_i \subset GSBD^p_\infty(U')$ with $y_i = 0$ on $U' \setminus U$ which is $\bar{d}$-dense in $K$. We now show that this countable set can be chosen with the additional property
\begin{align}\label{eq: additional property}
\mathcal{L}^d\Big( A^\infty_v \setminus \bigcup\nolimits_i A^\infty_{y_i} \Big) = 0 \quad \quad \text{for all $v \in K$,}
\end{align}
where we again denote by $A^\infty_{y_i}$ and $A^\infty_v$ the sets where the functions attain the value $\infty$. In fact, fix an arbitrary countable and $\bar{d}$-dense set $(y_i)_i$ in $K$, and let $\eta >0$. After adding a finite number (smaller than $\mathcal{L}^d(U)/\eta$) of functions of $K$ to this collection, we obtain a countable $\bar{d}$-dense family $(y^\eta_i)_i$ such that
$$
\mathcal{L}^d\Big( A^\infty_v \setminus \bigcup\nolimits_i A^\infty_{y^\eta_i} \Big) \le \eta \quad \quad \text{for all $v \in K$.}
$$
Then, we obtain the desired countable set by taking the union of $(y^{1/k}_i)_i$ for $k \in \mathbb{N}$.
\noindent \emph{Step 3: Definition of $\Gamma$ and $G_\infty$.} Fix $v,v' \in K$. Since $\lbrace x \in J_v \setminus \partial^* A^\infty_v \colon [v](x) =t \rbrace$ has negligible $\mathcal{H}^{d-1}$-measure up to a countable set of points $t$, we find some $\theta \in (0,1)$ such that $w:= \theta v + (1-\theta) v'$ satisfies
\begin{align}\label{eq: jump-w}
J_w \tilde{\subset} J_v \cup J_{v'}, \ \ \ \ \ \ (J_v \cup J_{v'}) \setminus J_w \tilde{\subset} (A^\infty_v\cup A^\infty_{v'})^1.
\end{align}
Here, we particularly point out that $\lbrace w = \infty\rbrace = A^\infty_v\cup A^\infty_{v'}$ and that $\varphiartial^* (A^\infty_v\cup A^\infty_{v'}) \cap U' \tilde{\subset} J_w$ by \eqref{eq: general jump}. Note that $w \in K$ since $K$ is convex. Since $w \in K \subset X_L(U')$, \eqref{eq: jump-w} implies
$$\mathcal{H}^{d-1}( (J_v \cup J_{v'}) \setminus (A^\infty_v\cup A^\infty_{v'})^1) \le \mathcal{H}^{d-1}(J_w) \le L, \ \ \ \ \ \ \mathcal{H}^{d-1}(\varphiartial^* (A^\infty_v\cup A^\infty_{v'}) \cap U') \le \mathcal{H}^{d-1}(J_w) \le L\,.$$
Let $(y_i)_i \subset GSBD^p_\infty(U')$ with $y_i = 0$ on $U' \setminus U$ be the countable and $\bar{d}$-dense subset of $K$ satisfying \eqref{eq: additional property} that we defined in Step 2. By the above convexity argument, we find
\begin{align}\label{eq: LLL}
\mathcal{H}^{d-1}\Big(\bigcup\nolimits_{i=1}^k J_{y_i} \setminus \big(\bigcup\nolimits_{i=1}^k A_i \big)^1\Big)\le L, \ \ \ \ \ \ \ \ \mathcal{H}^{d-1}\Big(\partial^*\big(\bigcup\nolimits_{i=1}^k A_i \big) \cap U'\Big)\le L
\end{align}
for all $k \in \mathbb{N}$, where
\[
A_i := A^\infty_{y_i} = \lbrace y_i = \infty \rbrace\,.
\]
We define
\begin{align}\label{eq: G-deffi}
G_\infty := \bigcup\nolimits_{i=1}^\infty A_i\,.
\end{align}
By passing to the limit $k \to \infty$ in \eqref{eq: LLL}, we get $\mathcal{H}^{d-1}(\partial^* G_\infty \cap U') \le L$ and $\mathcal{H}^{d-1}(\bigcup\nolimits_{i=1}^k J_{y_i} \setminus (G_\infty )^1 )\le L $ for all $k \in \mathbb{N}$. Passing again to the limit $k \to \infty$, and setting
\begin{equation}\label{eq: Gamma-def}
\Gamma := \bigcup\nolimits_{i=1}^\infty J_{y_i} \setminus ( G_\infty )^1
\end{equation}
we get $\mathcal{H}^{d-1}(\Gamma) \le L$. Notice that $\Gamma\cap(G_\infty )^1 = \emptyset$ by definition. Moreover, the fact that $y_i = 0$ on $U' \setminus U$ for all $i\in \mathbb{N}$ implies both that $G_\infty \subset U$ and that $\Gamma \subset \overline{U} \cap U'$.
By \eqref{eq: delta error} and the arbitrariness of $\delta$ we get $\mathcal{H}^{d-1}(\Gamma) \leq \liminf_{n\to \infty} \mathcal{H}^{d-1}(\Gamma_n)$. Since $\partial^* A_i \cap U' \tilde{\subset} J_{y_i}$ for all $i \in \mathbb{N}$ by \eqref{eq: general jump}, we also get $\Gamma \, \tilde{\supset} \, \partial^* G_\infty \cap U'$. Thus, \eqref{eq: limit-prop} is satisfied.
We now claim that for each $v \in K$ there holds
\begin{align}\label{eq: subset outside G}
\mathcal{L}^d(\lbrace v = \infty \rbrace \setminus G_\infty) = 0 \quad \quad \text{and} \quad \quad J_v \setminus \Gamma \tilde{\subset} (G_\infty )^1.
\end{align}
Indeed, the first property follows from \eqref{eq: additional property} and \eqref{eq: G-deffi}. To see the second, we note that, for any fixed $v\in K$, there is a sequence $(y_k)_k=(y_{i_k})_k$ with $\bar{d}(y_k, v) \to 0 $, by the density of $(y_i)_i$. Consider the functions $ \tilde{v}_k:= y_k (1-\chi_{G_\infty})$ that $\bar{d}$-converge to $\tilde{v}:=v (1-\chi_{G_\infty})$: since $J_{ \tilde{v}_k } \tilde{\subset} \Gamma$ for any $k$ (we employ \eqref{eq: Gamma-def} and that $\partial^* G_\infty \cap U' \tilde{\subset} \Gamma$),
the fact that $X(\Gamma)$ is closed gives that $J_{\tilde{v}} \tilde{\subset} \Gamma$. This implies \eqref{eq: subset outside G}.
\noindent \emph{Step 4: Proof of properties (i) and (ii).} We first show (i). Given a sequence $(v_n)_n \subset GSBD^p_\infty(U')$ with $J_{v_n} \tilde{\subset} \Gamma_n$ and $v_n = 0$ on $U' \setminus U$, and a subsequence $(v_{n_k})_k$ that converges weakly in $GSBD^p_\infty(U')$ to $v$, we clearly get $v \in K$ by the Hausdorff convergence $X(\Gamma_n) \to K$. (More precisely, consider $\lambda v_{n_k}$ and $\lambda v$ for $\lambda >0$ such that $\Vert e(\lambda v_{n_k}) \Vert_{L^p(U')} \le 1$ for all $k$.) By \eqref{eq: subset outside G}, this implies $\mathcal{L}^d(\lbrace v = \infty \rbrace \setminus G_\infty) = 0$ and $J_v \setminus \Gamma \tilde{\subset} (G_\infty )^1$. This shows (i).
We now address (ii). Recalling the choice of the sequence $(y_i)_i \subset K$, for each $i \in \mathbb{N}$, we choose $\tilde{y}_i = y_i \chi_{U' \setminus G_\infty } + t_i \chi_{ G_\infty } \in GSBD^p(U')$ for some $t_i \in \mathbb{R}^d$ such that $J_{\tilde{y}_i} \tilde{=} J_{y_i} \setminus ( G_\infty )^1$. (Almost every $t_i$ works. Note that the function indeed lies in $GSBD^p(U')$, see \eqref{eq: compact extension} and \eqref{eq: G-deffi}.) In view of \eqref{eq: Gamma-def}, we also observe that $\bigcup_i J_{\tilde{y}_i} = \Gamma$.
By Lemma \ref{lemma: good function choice} (recall $(y_i)_i \subset K \subset X_L(\Omega)$) we get a function $\tilde{v}= \sum_{i=1}^\infty c_i \tilde{y}_i \in GSBD^p( U' )$ such that $J_{\tilde{v}} \tilde{=} \Gamma$, where $\sum_{i=1}^\infty c_i \le 1$. We also define $v = \tilde{v} \chi_{ U' \setminus G_\infty } + \infty \chi_{ G_\infty} \in GSBD^p_\infty(U')$. Note that $\lbrace v = \infty \rbrace = G_\infty$ and $J_v \tilde{=} \Gamma$ since $\Gamma \cap (G_\infty)^1 = \emptyset$ and $\partial^* G_\infty \cap U' \subset \Gamma$. Then by the convexity of $K$, we find $z_k: = \sum_{i=1}^k c_i y_i \in K$. (Here we also use that the function identical to zero lies in $K$.) As $G_\infty= \bigcup_{i=1}^\infty A_i$, we obtain $\bar{d}(z_k,v) \to 0$ for $k \to \infty$. Thus, also $v \in K$ since $K$ is compact. As $X(\Gamma_n)$ converges to $K$ in the Hausdorff sense, we find a sequence $(v_n)_n \subset GSBD^p_\infty( U' )$ with $J_{v_n} \tilde{\subset} \Gamma_n$, $v_n = 0$ on $U' \setminus U$, and $\bar{d}(v_n,v) \to 0$. This shows (ii).
\end{proof}
Next, we prove Lemma \ref{lemma: good function choice}. To this end, we will need the following measure-theoretical result. (See \cite[Lemma 4.1, 4.2]{Fri17ARMA} and note that the statement in fact holds in arbitrary space dimensions for measurable functions.)
\begin{lemma}
Let $\Omega\subset \mathbb{R}^d$ with $\mathcal L^d(\Omega)<\infty$, and $N \in \mathbb{N}$. Then for every sequence $(u_n)_n \subset L^0(\Omega;\mathbb{R}^N )$ with
\begin{align}\label{eq: inclusion condition}
\mathcal L^d\left(\bigcap\nolimits_{n \in \mathbb{N}} \bigcup\nolimits_{m \ge n} \lbrace |u_m - u_n| > 1 \rbrace\right)=0
\end{align}
there exist a subsequence (not relabeled) and an increasing concave function $\psi: [0,\infty) \to [0,\infty) $ with
$
\lim_{t\to \infty}\psi(t)=+\infty
$
such that $$\sup_{n \ge 1} \int_{\Omega}\psi(|u_n|)\, \, \mathrm{d} x < + \infty.$$
\end{lemma}
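To illustrate the statement, consider the following simple example (given here purely for orientation and not used in the sequel): for $\Omega = (0,1)$, $N=1$, and $u_n = n\chi_{(0,1/n)}$, one checks that $\bigcup\nolimits_{m \ge n} \lbrace |u_m - u_n| > 1 \rbrace \subset (0,1/n]$ for $n \ge 2$, so that \eqref{eq: inclusion condition} holds, and the increasing concave function $\psi(t) = \log(1+t)$ satisfies
$$\sup_{n \ge 1} \int_{\Omega}\psi(|u_n|)\, \mathrm{d} x = \sup_{n \ge 1} \frac{\log(1+n)}{n} = \log 2 < + \infty\,.$$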
\begin{proof}[Proof of Lemma \ref{lemma: good function choice}]
Let $(v_i)_i \subset GSBD^p(\Omega)$ be given satisfying the assumptions of the lemma.
First, choose $0 < d_i < 2^{-i}$ such that
\begin{align}\label{eq: small volume}
\mathcal{L}^d\Big( \Big\{ |{v}_i| \ge \frac{1}{2^i d_i} \Big\} \Big) \le 2^{-i},\ \ \ \ \ \ \ \ \mathcal{H}^{d-1}\Big( \Big\{x \in J_{v_i}\colon |[v_i](x)| \ge \frac{1}{d_i} \Big\} \Big) \le 2^{-i}.
\end{align}
Our goal is to select constants $c_i \in (0,d_i)$ such that the function ${v}:= \sum\nolimits_{i=1}^\infty c_i {v}_i$ lies in $GSBD^p(\Omega)$ and satisfies $J_v \, \tilde{=} \, \Gamma := \bigcup_{i=1}^\infty J_{v_i}$. We proceed in two steps: we first show that for each choice $c_i \in (0,d_i)$ the function ${v}= \sum\nolimits_{i=1}^\infty c_i {v}_i$ lies indeed in $GSBD^p(\Omega)$ (Step 1). Afterwards, we prove that for a specific choice there holds $J_{{v}} \tilde{=} \Gamma$.
\emph{Step 1.} Given $c_i \in (0,d_i)$, we define $u_k = \sum_{i=1}^k c_iv_i$. Fix $m \ge n+1$. We observe that
\begin{align*}
\lbrace |u_m-u_n|>1 \rbrace = \Big\{ \big|\sum\nolimits_{i=n+1}^m c_i v_i \big|>1 \Big\} \subset \bigcup\nolimits_{i=n+1}^m \Big\{ |c_i{v}_i| \ge 2^{-i} \Big\} \subset \bigcup\nolimits_{i=n+1}^m \Big\{ |{v}_i| \ge \frac{1}{2^i d_i} \Big\}.
\end{align*}
By passing to the limit $m \to \infty$ and by using \eqref{eq: small volume} we get
\begin{align*}
\mathcal{L}^d\Big(\bigcup\nolimits_{m \ge n}\lbrace |u_m-u_n|>1 \rbrace \Big) \le \sum\nolimits_{i=n+1}^\infty \mathcal{L}^d \Big(\Big\{ |{v}_i| \ge \frac{1}{2^i d_i} \Big\} \Big) \le \sum\nolimits_{i=n+1}^\infty 2^{-i} = 2^{-n}.
\end{align*}
This shows that the sequence $(u_k)_k$ satisfies \eqref{eq: inclusion condition}, and therefore there exist a subsequence (not relabeled) and an increasing concave function $\psi: [0,\infty) \to [0,\infty) $ with
$
\lim_{t\to \infty}\psi(t)=+\infty
$
such that $\sup_{k \ge 1} \int_{\Omega}\psi(|u_k|)\, \, \mathrm{d} x < + \infty$. Recalling also that $\|e(v_i)\|_{L^p(\Omega)}\leq 1$ for all $i$
and $\mathcal{H}^{d-1}(\Gamma) <+\infty$, we are now in the position to apply the $GSBD^p$-compactness result \cite[Theorem~11.3]{DM13} (alternatively, one could apply Theorem~\ref{th: GSDBcompactness} and observe that the limit $v$ satisfies $\mathcal{L}^d(\lbrace v = \infty \rbrace)=0$), to get that the function
$v = \sum_{i=1}^\infty c_iv_i$
lies in $GSBD^p(\Omega)$. For later purposes, we note that by \eqref{eq: with Gamma} (which holds also in addition to \cite[Theorem~11.3]{DM13}) we obtain
\begin{align}\label{eq: the first inclusion}
J_v \tilde{\subset} \bigcup\nolimits_{i=1}^\infty J_{v_i} = \Gamma.
\end{align}
This concludes Step 1 of the proof.
\emph{Step 2.} We define the constants $c_i \in (0,d_i)$ iteratively by following the arguments in \cite[Lemma 4.5]{DMFraToa02}. Suppose that $(c_i)_{i=1}^k$ and a decreasing sequence $(\varepsilon_i)_{i=1}^k \subset (0,1)$ have been chosen such that the functions $u_j = \sum_{i=1}^j c_iv_i$, $ 1 \le j \le k$, satisfy
\begin{align}\label{eq: iterative conditions}
{\rm (i)} & \ \ J_{u_j} \tilde{=} \bigcup\nolimits_{i=1}^{j} J_{v_i},\notag\\
{\rm (ii)} & \ \ \mathcal{H}^{d-1}(\lbrace x\in J_{u_j}\colon | [u_j](x) | \le \varepsilon_j \rbrace) \le 2^{-j},
\end{align}
and, for $2\le j\le k$, there holds
\begin{align}\label{eq: iterative conditions2}
c_{j} \le \varepsilon_{j-1} d_j 2^{-j-1}.
\end{align}
(Note that in the first step we can simply set $c_1 = 1/4$ and $0 < \varepsilon_1< 1$ such that \eqref{eq: iterative conditions}(ii) holds.)
We pass to the step $k + 1$ as follows. Note that there is a set $N_0 \subset \mathbb{R} $ of negligible measure such that for all $t \in \mathbb{R} \setminus N_0$ there holds $J_{u_k + t v_{k+1}} \tilde{=} J_{u_k} \cup J_{v_{k+1}}$. We choose $c_{k+1} \in \mathbb{R} \setminus N_0$ such that additionally $c_{k+1} \le \varepsilon_k d_{k+1} 2^{-k-2}$. Then \eqref{eq: iterative conditions}(i) and \eqref{eq: iterative conditions2} hold. We can then choose $\varepsilon_{k+1} \le \varepsilon_k$ such that also \eqref{eq: iterative conditions}(ii) is satisfied.
We proceed in this way for all $k \in \mathbb{N}$. Let us now introduce the sets
\begin{align}\label{eq: Ek-def}
E_k = \bigcup\nolimits_{m \ge k} \lbrace x \in J_{u_m}\colon \ | [u_m](x) | \le \varepsilon_m \rbrace, \ \ \ F_k = \bigcup\nolimits_{m \ge k} \lbrace x \in J_{v_m}\colon \ |[v_m](x)| > 1/d_m\rbrace\,.
\end{align}
Note by \eqref{eq: small volume} and \eqref{eq: iterative conditions}(ii) that
\begin{align}\label{eq: EkFk}
\mathcal{H}^{d-1} (E_k \cup F_k) \le 2 \sum\nolimits_{m \ge k} 2^{-m} = 2^{2-k}\,.
\end{align}
We now show that for all $k \in \mathbb{N}$ there holds
\begin{align}\label{eq: important inclusion}
J_{u_k} \tilde{\subset} J_v \cup E_k \cup F_k\,.
\end{align}
To see this, we first observe that for $\mathcal{H}^{d-1}$-a.e.\ $x \in \Gamma = \bigcup\nolimits_{i=1}^\infty J_{v_i} $
there holds
\begin{align}\label{eq: jumpsum}
[v](x) = [u_k](x) + \sum\nolimits_{i=k+1}^\infty c_i[v_i](x)\,.
\end{align}
Moreover, we get that $c_i \le \varepsilon_k d_i 2^{-i-1}$ for all $i \ge k+1$ by \eqref{eq: iterative conditions2} and the fact that $(\varepsilon_i)_i$ is decreasing. Fix $x \in J_{u_k} \setminus (E_k \cup F_k)$. Then by \eqref{eq: Ek-def} and \eqref{eq: jumpsum} we get
\begin{align*}
|[v](x)| \ge |[u_k](x)| - \sum\nolimits_{i=k+1}^\infty c_i|[v_i](x)| \ge \varepsilon_k - \sum\nolimits_{i=k+1}^\infty \frac{c_i}{d_i} \ge \varepsilon_k\Big(1- \sum\nolimits_{i=k+1}^\infty 2^{-i-1} \Big) \ge \varepsilon_k/2\,,
\end{align*}
where we have used that $|[u_k](x)| \ge \varepsilon_k$ and $|[v_i](x)| \le 1/d_i$ for $i \ge k+1$. Thus, $[v](x) \neq 0$ and therefore $x \in J_v$. Consequently, we have shown that $\mathcal{H}^{d-1}$-a.e.\ $x \in J_{u_k} \setminus (E_k \cup F_k)$ lies in $J_v$. This shows \eqref{eq: important inclusion}.
We now conclude the proof as follows: by \eqref{eq: iterative conditions}(i) and \eqref{eq: important inclusion} we get that
$$\bigcup\nolimits_{i=1}^l J_{v_i} \ \tilde{=} \ J_{u_l} \ \tilde{\subset} \ J_v \cup E_l \cup F_l \tilde{\subset} \ J_v \cup E_k \cup F_k $$
for all $l \ge k$, where we used that the sets $(E_k)_k$ and $(F_k)_k$ are decreasing. Taking the union with respect to $l$, we get that $\Gamma \tilde{\subset} J_v \cup E_k \cup F_k$ for all $k \in \mathbb{N}$. By \eqref{eq: EkFk} this implies $\mathcal{H}^{d-1}(\Gamma \setminus J_v) \le 2^{2-k}$. Since $k \in \mathbb{N}$ was arbitrary, we get $\Gamma \tilde{\subset} J_v$. This along with \eqref{eq: the first inclusion} shows $J_v \tilde{=} \Gamma$ and concludes the proof.
\end{proof}
We close this section with the proof of Lemma \ref{lemma: theta}.
\begin{proof}[Proof of Lemma \ref{lemma: theta}]
Let $B = \lbrace x \in V\colon \, \limsup_{n \to \infty}|u_n(x) - v_n(x)| < + \infty\rbrace$. For $\theta \in (0,1)$, define $w^\theta_n = (1-\theta) u_n + \theta v_n$ and observe that $|w^\theta_n| \to \infty$ on $B$ for all $\theta$ since $|u_n| \to \infty$ on $V$. Let $D_\theta = \lbrace x \in V\setminus B\colon \, \limsup_{n \to \infty} |w^\theta_n(x)| < + \infty \rbrace$. As $|u_n -v_n| \to \infty$ on $V\setminus B$ and thus $|w_n^{\theta_1}- w_n^{\theta_2}| = |(\theta_1 - \theta_2)(v_n - u_n)|\to \infty$ on $V \setminus B$ for all $\theta_1\neq \theta_2$, we obtain $D_{\theta_1} \cap D_{\theta_2} = \emptyset$. This implies that $\mathcal{L}^d(D_\theta)>0$ for an at most countable number of different $\theta$. We note that for all $\theta$ with $\mathcal{L}^d(D_\theta) = 0$ there holds $|w^\theta_n| \to \infty$ a.e.\ on $V$. This yields the claim.
\end{proof}
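For illustration (this example is not needed for the arguments below): take $u_n \equiv (n,0,\dots,0)$ and $v_n \equiv (-n,0,\dots,0)$ on $V$. Then $B = \emptyset$ and $w^\theta_n \equiv ((1-2\theta)n,0,\dots,0)$, so that $|w^\theta_n| \to \infty$ for every $\theta \neq 1/2$, while $D_{1/2} = V$. The exceptional set of parameters $\theta$ thus consists of the single value $\theta = 1/2$, in accordance with the countability obtained in the proof.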
\section{Functionals defined on pairs of function-set}\label{sec:FFF}
This section is devoted to the proofs of the results announced in Subsection \ref{sec: results1}. Before proving the relaxation and existence results, we address the lower bound separately since this will be instrumental also for Section \ref{sec:GGG}.
\subsection{The lower bound}
In this subsection we prove a lower bound for functionals defined on pairs of function-set which will be needed
for the proofs of Theorems \ref{thm:relFDir}--\ref{thm:relG}. We will make use of the definition of $GSBD^p_\infty(\Omega)$ in Subsection \ref{sec:prel4}. In particular, we refer to the definition of $e(u)$ and of the jump set $J_u$ with its normal $\nu_u$, see \eqref{eq: general jump}--\eqref{eq:same}, as well as to the notion of weak convergence in $GSBD^p_\infty(\Omega)$, see \eqref{eq: weak gsbd convergence}. We recall also that for any $s \in [0,1]$ and any
$E \in \mathfrak{M}(\Omega)$, $E^s$ denotes the set of points with density $s$ for $E$, see \cite[Definition 3.60]{AFP}.
\begin{theorem}[Lower bound]\label{thm:lower-semi}
Let $\Omega \subset \mathbb{R}^d$ be open and bounded, and let $1 < p < \infty$. Consider a sequence of Lipschitz sets $(E_n)_n \subset \Omega$ with $\sup_{n \in \mathbb{N}} \mathcal{H}^{d-1}(\partial E_n) < +\infty$ and a sequence of functions $(u_n)_n \subset GSBD^p(\Omega)$ such that $u_n |_{\Omega\setminus \overline{E_n}} \in W^{1,p}(\Omega\setminus \overline{E_n}; \mathbb{R}^d)$ and $u_n = 0$ in $ E_n $. Let $u \in GSBD^p_\infty(\Omega)$ and $E \in \mathfrak{M}(\Omega)$ be such that $u_n$ converges weakly in $GSBD^p_\infty(\Omega)$ to $u$ and
\begin{equation}\label{1405192012}
\chi_{E_n} \to \chi_E \text{ in }L^1(\Omega)\,.
\end{equation}
Then, for any norm $\varphi$ on $\mathbb{R}^d$ there holds
\begin{subequations}\label{eqs:1405191304}
\begin{equation}\label{1405191303}
e(u_n)\chi_{\Omega \setminus (E_n \cup A^\infty_u)} \rightharpoonup e(u) \chi_{\Omega \setminus (E \cup A^\infty_u)} \quad\text{ weakly in }L^p(\Omega;{\mathbb{M}^{d\times d}_{\rm sym}})\,,
\end{equation}
\begin{equation}\label{1405191304}
\int_{J_u \cap E^0} 2\varphi(\nu_u)\, \mathrm{d} \hd + \int_{\Omega \cap \partial^* E} \varphi(\nu_E) \, \mathrm{d} \hd \leq \liminf_{n \to +\infty} \int_{ \Omega \cap \partial E_n } \varphi(\nu_{E_n}) \, \mathrm{d} \hd\,,
\end{equation}
\end{subequations}
where $A^\infty_u=\{u=\infty\}$.
\end{theorem}
In the proof, we need the following two auxiliary results, see \cite[Proposition 4, Lemma 5]{BraChaSol07}.
\begin{proposition}\label{prop:prop4BCS}
Let $\Omega$ be an open subset of $\mathbb{R}^d$ and let $\mu$ be a finite, positive set function defined on the family of open subsets of $\Omega$. Let $\lambda \in \mathcal{M}_b^+(\Omega)$, and let $(g_i)_{i \in \mathbb{N}}$ be a family of positive Borel functions on $\Omega$. Assume that $\mu(U) \geq \int_U g_i \,\mathrm{d}\lambda$ for every $U$ and $i$, and that $\mu(U \cup V) \geq \mu(U) + \mu(V)$ whenever $U$, $V \subset \subset \Omega$ and $\overline U \cap \overline V = \emptyset$ (superadditivity). Then $\mu(U) \geq \int_U (\sup_{i \in \mathbb{N}} g_i) \,\mathrm{d} \lambda$ for every open $U \subset \Omega$.
\end{proposition}
\begin{lemma}\label{le:lemma5BCS}
Let $\Gamma \subset E^0$ be a $(d{-}1)$-rectifiable subset, $\xi \in {\mathbb{S}^{d-1}}$ such that $\xi$ is not orthogonal to the normal $\nu_\Gamma$ to $\Gamma$ at any point of $\Gamma$. Then, for $\mathcal{H}^{d-1}$-a.e.\ $y\in \Pi^\xi$, the set $E^\xi_y$ (see \eqref{eq: vxiy2}) has density 0 in $t$ for every $t \in \Gamma^\xi_y$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:lower-semi}]
Since $u_n$ converges weakly in $GSBD^p_\infty(\Omega)$ to $u$, \eqref{eq: weak gsbd convergence} implies
\begin{align}\label{eq: weak gsbd convergence2}
\sup_{n \in \mathbb{N}}\nolimits \big( \Vert e(u_n) \Vert^p_{L^p(\Omega)} + \mathcal{H}^{d-1}(J_{u_n}) + \mathcal{H}^{d-1}(\partial E_n) \big) =:M < + \infty.
\end{align}
Consequently, Theorem~\ref{th: GSDBcompactness} and the fact that $\bar{d}(u_n,u) \to 0$, see \eqref{eq:metricd} and \eqref{eq: weak gsbd convergence}, imply that $A^\infty_u=\lbrace u = \infty \rbrace = \lbrace x \in \Omega\colon \, |u_n(x)| \to \infty \rbrace$ and
\begin{subequations}
\begin{align}
u_n &\to u \quad \ \ \ \ \text{$\mathcal{L}^d$-a.e.\ in }\Omega \setminus A^\infty_u\,,\label{1405192139}\\
e(u_n) &\rightharpoonup e(u) \quad \text{ weakly in } L^p(\Omega\setminus A^\infty_u; {\mathbb{M}^{d\times d}_{\rm sym}})\,.\label{1405192141}
\end{align}
\end{subequations}
By \eqref{1405192012}, \eqref{1405192139}, $u_n = 0$ on $E_n$, and the definition of $A^\infty_u$, we have
\begin{align}\label{eq: EcapA}
E \cap A^\infty_u = \emptyset \ \ \ \text{ and } \ \ \ \text{$u=0$ \ $\mathcal{L}^d$-a.e.\ in $E$}.
\end{align}
Then \eqref{1405192141} gives \eqref{1405191303}.
We now show \eqref{1405191304} which is the core of the proof. Let $\varphi^*$ be the dual norm of $\varphi$ and observe that (see, e.g.\ \cite[Section~4.1.2]{Bra98})
\begin{equation}\label{1904191706}
\varphi(\nu)= \max_{\xi \in {\mathbb{S}^{d-1}}} \frac{\nu \cdot \xi}{\varphi^*(\xi)} = \max_{\xi \in {\mathbb{S}^{d-1}}} \frac{|\nu \cdot \xi|}{\varphi^*(\xi)} \,,
\end{equation}
where the second equality holds since $\varphi(\nu) = \varphi(-\nu)$.
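As a concrete instance of \eqref{1904191706} (recorded only for illustration), take $\varphi = \Vert \cdot \Vert_1$, so that $\varphi^* = \Vert \cdot \Vert_\infty$: for $\nu$ with all components nonzero, the choice $\xi = d^{-1/2}(\mathrm{sgn}(\nu_1), \dots, \mathrm{sgn}(\nu_d)) \in {\mathbb{S}^{d-1}}$ gives $\nu \cdot \xi / \varphi^*(\xi) = \Vert \nu \Vert_1$, while $|\nu \cdot \xi| \le \Vert \nu \Vert_1 \Vert \xi \Vert_\infty$ for every $\xi \in {\mathbb{S}^{d-1}}$, so the maximum equals $\Vert \nu \Vert_1 = \varphi(\nu)$.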
As a preparatory step, we consider a set $B \subset \Omega$ with Lipschitz boundary and a function $v$ with $v |_{\Omega\setminus \overline B} \in W^{1,p}(\Omega\setminus \overline B;\mathbb{R}^d)$
and $v=0$ in $B$ (observe that $v \in GSBD^p(\Omega)$). Recall the notation in \eqref{eq: vxiy2}--\eqref{eq: vxiy}. Let $\varepsilons\in (0,1)$ and $U \subset \Omega$ be open. For each $\xi \in {\mathbb{S}^{d-1}}$ and $y \in \Pi^\xi$, we define
\begin{align}\label{eq: fxsieps}
F^\xi_\varepsilon (\widehat{v}^\xi_y, B^\xi_y; U^\xi_y) =\varepsilon \int_{U^\xi_y \setminus B^\xi_y} |(\widehat{v}^\xi_y)'|^p \,\mathrm{d}t + \mathcal{H}^0(\partial B^\xi_y \cap U^\xi_y ) \frac{1}{\varphi^*(\xi)}\,.
\end{align}
By the Fubini--Tonelli theorem, together with the slicing properties \eqref{3105171927}, \eqref{2304191254}, \eqref{2304191637}, for a.e.\ $\xi \in \mathbb{S}^{d-1}$ there holds
$$\int_{\Pi^\xi} F_\varepsilon^\xi(\widehat{v}^\xi_y, B^\xi_y; U^\xi_y) \, \mathrm{d} \hd(y) = \varepsilon\int_{U \setminus B}|e(v) \xi \cdot \xi|^p \, \mathrm{d} x + \int_{U \cap \partial B} \frac{|\nu_B \cdot \xi | }{\varphi^*(\xi)}\, \mathrm{d} \hd \,. $$
Since $|e(v)|\geq |e(v) \xi \cdot \xi|$, the previous estimate along with \eqref{1904191706} implies
$$\int_{\Pi^\xi} F_\varepsilon^\xi(\widehat{v}^\xi_y, B^\xi_y; U^\xi_y) \, \mathrm{d} \hd(y) \le \varepsilon \Vert e(v) \Vert_{L^p(U \setminus B)}^p + \int_{U \cap \partial B} \varphi(\nu_B) \, \mathrm{d}\mathcal{H}^{d-1}\,. $$
By applying this estimate for the sequence of pairs $(u_n,E_n)$, we get by \eqref{eq: weak gsbd convergence2}
\begin{align}\label{eq: Meps}
\int_{\Pi^\xi} F_\varepsilon^\xi((\widehat{u}_n)^\xi_y, (E_n)^\xi_y; U^\xi_y) \, \mathrm{d} \hd(y) \le M\varepsilon + \int _{U \cap \partial E_n} \varphi(\nu_{E_n}) \, \mathrm{d} \hd \le M(\Vert \varphi \Vert_{L^\infty(\mathbb{S}^{d-1})} + \varepsilon)
\end{align}
for all open $U \subset \Omega$. Since $\bar{d}(u_n, u)\to 0$, we have that $\bar{d}((\widehat{u}_n)^\xi_y, \widehat{u}^\xi_y) = \int_{\Omega^\xi_y} d_{\bar{\mathbb{R}}^d}((\widehat{u}_n)^\xi_y, \widehat{u}^\xi_y) \, \, \mathrm{d} x \to 0$ for $\mathcal{H}^{d-1}$-a.e.\ $y \in \Pi^\xi$ and $\mathcal{H}^{d-1}$-a.e.\ $\xi \in {\mathbb{S}^{d-1}}$. (Notice that we have to restrict our choice to the $\xi$ satisfying $|u_n\cdot \xi| \to +\infty$ $\mathcal{L}^d$-a.e.\ in $A^\infty_u$, which form a set of full measure in ${\mathbb{S}^{d-1}}$, cf.\ \cite[Lemma~2.7]{CC18}.) In particular, this implies
\begin{subequations}
\begin{align}
(\widehat{u}_n)^\xi_y \to \widehat{u}^\xi_y \quad &\mathcal{L}^1\text{-a.e.\ in }(\Omega \setminus A_u^\infty)^\xi_y\,, \label{1505190913}\\
|(\widehat{u}_n)^\xi_y| \to +\infty \quad &\mathcal{L}^1\text{-a.e.\ in }(A_u^\infty)^\xi_y\,. \label{1505190915}
\end{align}
\end{subequations}
By using \eqref{eq: Meps} and Fatou's lemma we obtain
\begin{equation}\label{1505190927}
\liminf_{n \to \infty} F_\varepsilon^\xi((\widehat{u}_n)^\xi_y, (E_n)^\xi_y; U^\xi_y) < +\infty
\end{equation}
for $\mathcal{H}^{d-1}$-a.e.\ $y \in \Pi^\xi$ and any open $U \subset \Omega$. Then, we may find a subsequence $(u_m)_m=(u_{n_m})_m$, depending on $\varepsilon$, $\xi$, and $y$, such that
\begin{equation}\label{1505190928}
\lim_{m \to \infty} F_\varepsilon^\xi((\widehat{u}_m)^\xi_y, (E_m)^\xi_y; U^\xi_y) =\liminf_{n \to \infty} F_\varepsilon^\xi((\widehat{u}_n)^\xi_y, (E_n)^\xi_y; U^\xi_y)
\end{equation}
for any open $U \subset \Omega$. At this stage, up to passing to a further subsequence, we have
\begin{equation*}
\mathcal{H}^0\big(\partial (E_m)^\xi_y \big) = N^\xi_y \in \mathbb{N}\,,
\end{equation*}
independently of $m$, so that the points in $\partial (E_m)^\xi_y$ converge, as $m\to \infty$, to $M^\xi_y \leq N^\xi_y$ points
\begin{equation*}
t_1, \dots, t_{M^\xi_y}\,,
\end{equation*}
which are either in $\partial E^\xi_y$ or in a finite set $S^\xi_y:=\{t_1, \dots, t_{M^\xi_y}\} \setminus\partial E^\xi_y \subset (E^\xi_y)^0 \cup (E^\xi_y)^1$, where $(\cdot)^0$ and $(\cdot)^1$ denote the sets with one-dimensional density $0$ or $1$, respectively. Notice that $E^\xi_y$ is thus the union
of $M^\xi_y/2 - \# S^\xi_y$ intervals (up to a finite set of points) on which there holds $\widehat{u}^\xi_y=0$, see \eqref{eq: EcapA} and \eqref{1505190913}.
In view of \eqref{eq: fxsieps} and \eqref{1505190927}, $( (\widehat{u}_m)^\xi_y)'$ are equibounded (with respect to $m$) in $L^p_{\mathrm{loc}}(t_j, t_{j+1})$, for any interval
\begin{equation*}
(t_j, t_{j+1}) \subset \Omega^\xi_y \setminus (E^\xi_y \cup S^\xi_y)\,.
\end{equation*}
Then, as in the proof of \cite[Theorem~1.1]{CC18}, we have two alternative possibilities on $(t_j, t_{j+1})$: either $(\widehat{u}_m)^\xi_y$ converge locally uniformly in $(t_j, t_{j+1})$ to $\widehat{u}^\xi_y$, or $|(\widehat{u}_m)^\xi_y|\to +\infty$ $\mathcal{L}^1$-a.e.\ in $(t_j, t_{j+1})$.
Recalling that $ J_{\widehat{u}^\xi_y} = \partial (A^\infty_u)^\xi_y \cup \big( (J_u^\xi)^\xi_y \setminus (A^\infty_u)^\xi_y\big)$, see \eqref{2304191254-1} and \eqref{eq: general jump}, we find
\begin{equation}\label{1505191219}
J_{\xi,y}:= J_{\widehat{u}^\xi_y} \cap (E^\xi_y)^0 \subset S^\xi_y \cap (E^\xi_y)^0 \,.
\end{equation}
We notice that any point in $S^\xi_y$ is the limit of two distinct sequences of points $(p^1_m)_m$, $(p^2_m)_m$ with $p^1_m$, $p^2_m \in \varphiartial(E_m)^\xi_y$.
Thus, in view of \eqref{eq: fxsieps} and \eqref{1505190928}, for any open $U \subset \Omega$ we derive
\begin{align}\label{eq: slicing formula}
\varepsilon\int_{U^\xi_y \setminus (E\cup A^\infty_u)^\xi_y} \hspace{-0.2em} |(\widehat{u}^\xi_y)'|^p \,\mathrm{d} t &+ \mathcal{H}^0(U^\xi_y \cap \partial E^\xi_y ) \frac{1}{\varphi^*(\xi)} + \mathcal{H}^0(U^\xi_y \cap J_{\xi,y}) \frac{2}{\varphi^*(\xi)} \notag \\&\leq \liminf_{m\to \infty} F_\varepsilon^\xi((\widehat{u}_m)^\xi_y, (E_m)^\xi_y; U^\xi_y) =\liminf_{n \to \infty} F_\varepsilon^\xi((\widehat{u}_n)^\xi_y, (E_n)^\xi_y; U^\xi_y) \,.
\end{align}
We apply Lemma~\ref{le:lemma5BCS} to the rectifiable set
$J_u \cap E^0 \cap \{ \xi \cdot \nu_u \neq 0\}$ and get that for $\mathcal{H}^{d-1}$-a.e.\ $y\in \Pi^\xi$
\begin{equation*}
y + t \xi \in J_u \cap E^0 \cap \{ \xi \cdot \nu_u \neq 0\} \ \ \ \Rightarrow \ \ \ t \in (E^\xi_y)^0\,.
\end{equation*}
This along with \eqref{1505191219}--\eqref{eq: slicing formula}, the slicing properties \eqref{3105171927}--\eqref{2304191637} (which also hold for $GSBD^p_\infty(\Omega)$ functions), and Fatou's lemma yields that for all $\xi \in {\mathbb{S}^{d-1}} \setminus N_0$, for some $N_0$ with $\mathcal{H}^{d-1}(N_0) = 0$, there holds
\begin{align*}
\varepsilon &\int_{U \setminus (E\cup A^\infty_u)} \hspace{-0.2em} |e(u) \xi \cdot \xi|^p \, \mathrm{d} x + \int_{U \cap \partial^* E} \frac{|\nu_E \cdot \xi| }{\varphi^*(\xi )}\, \mathrm{d} \hd + \int_{J_u \cap E^0\cap U} \frac{2| \nu_u \cdot \xi |}{\varphi^*(\xi)}\, \mathrm{d} \hd
\notag\\&
\le \int_{\Pi^\xi} \liminf_{n \to \infty} F_\varepsilons^\xi((\widehat{u}_n)^\xi_y, (E_n)^\xi_y; U^\xi_y) \, \, \mathrm{d} \hd
\le \liminf_{n \to \infty} \int_{\Pi^\xi} F_\varepsilons^\xi((\widehat{u}_n)^\xi_y, (E_n)^\xi_y; U^\xi_y) \, \, \mathrm{d} \hd.
\end{align*}
Introducing the set function $\mu$ defined on the open subsets of $\Omega$ by
\begin{equation}\label{eq: mu-def}
\mu(U):=\liminf_{n \to +\infty} \int _{U\cap \varphiartial E_n} \varphi(\nu_{E_n}) \, \mathrm{d} \hd\,,
\end{equation}
and letting $\varepsilon \to 0$ we find by \eqref{eq: Meps} for all $\xi \in {\mathbb{S}^{d-1}} \setminus N_0$ that
\begin{align}\label{eq: lastequ.}
\int_{U \cap \partial^* E} \frac{|\nu_E \cdot \xi| }{\varphi^*(\xi )}\, \mathrm{d} \hd + \int_{J_u\cap E^0\cap U} \frac{2| \nu_u \cdot \xi |}{\varphi^*(\xi)}\, \mathrm{d} \hd \leq \mu(U) \,.
\end{align}
The set function $\mu $ is clearly superadditive. Let $\lambda= \mathcal{H}^{d-1} \mres \big( J_u\cap E^0 \big) + \mathcal{H}^{d-1} \mres \partial^* E$ and define
\begin{equation*}
g_i= \begin{dcases}
\frac{2| \nu_u \cdot \xi_i |}{\varphi^*(\xi_i)} \qquad & \text{on } J_u \cap E^0\,,\\
\frac{|\nu_E \cdot \xi_i| }{\varphi^*(\xi_i )}\qquad & \text{on }\partial^*E\,,
\end{dcases}
\end{equation*}
where $(\xi_i)_i \subset {\mathbb{S}^{d-1}} \setminus N_0$ is a dense sequence in ${\mathbb{S}^{d-1}}$. By \eqref{eq: lastequ.} we have $\mu(U) \ge \int_U g_i \,\mathrm{d} \lambda$ for all $i \in \mathbb{N}$ and all open $U \subset \Omega$. Then, Proposition~\ref{prop:prop4BCS} yields $\mu(\Omega) \ge \int_\Omega \sup_i g_i \,\mathrm{d} \lambda$. In view of \eqref{1904191706} and \eqref{eq: mu-def}, this implies \eqref{1405191304} and concludes the proof.
\end{proof}
\subsection{Relaxation for functionals defined on pairs of function-set}\label{sec: sub-voids}
In this subsection we give the proof of Proposition \ref{prop:relF} and Theorem \ref{thm:relFDir}. We also provide corresponding generalizations to the space $GSBD^p_\infty$, see Proposition \ref{prop:relFinfty} and Theorem \ref{thm:relFDirinfty}. For the upper bound, we recall the following result proved in \cite[Proposition 9, Remark 14]{BraChaSol07}.
\begin{proposition}\label{th: Braides-upper bound}
Let $u \in L^1(\Omega;\mathbb{R}^d)$ and $E \in \mathfrak{M}(\Omega)$ be such that $\mathcal{H}^{d-1}(\partial^* E)< +\infty$ and $u\chi_{E^0} \in GSBV^p(\Omega;\mathbb{R}^d)$. Then, there exist a sequence $(u_n)_n \subset W^{1,p}(\Omega;\mathbb{R}^d)$ and a sequence $(E_n)_n \subset \mathfrak{M}(\Omega)$ with $E_n$ of class $C^\infty$ such that $u_n \to u$ in $L^1(\Omega;\mathbb{R}^d)$, $\chi_{E_n} \to \chi_E$ in $L^1(\Omega)$, and
\begin{align*}
&\nabla u_n \chi_{\Omega \setminus E_n} \to \nabla u \chi_{\Omega \setminus E} \ \ \text{ in } L^p(\Omega;\mathbb{M}^{d\times d}), \\
&\limsup_{n \to \infty} \int_{\partial E_n \cap \Omega } \varphi(\nu_{E_n}) \, \mathrm{d} \hd \le \int_{J_u \cap E^0} 2\varphi(\nu_u) \, \mathrm{d} \hd + \int_{\partial^* E\cap \Omega } \varphi(\nu_E) \, \mathrm{d} \hd\,.
\end{align*}
Moreover, if $\mathcal{L}^d(E)>0$, one can guarantee in addition the condition
$\mathcal{L}^d(E_n) = \mathcal{L}^d(E)$ for $n \in \mathbb{N}$.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:relF}] We first prove the lower inequality, and then the upper inequality. The lower inequality relies on Theorem \ref{thm:lower-semi}, and the upper inequality on a density argument along with Proposition \ref{th: Braides-upper bound}.
\noindent \emph{The lower inequality.} Suppose that $u_n \to u \text{ in }L^0(\Omega;\mathbb{R}^d) \text{ and } \chi_{E_n} \to \chi_E \text{ in }L^1(\Omega)$. Without restriction, we can assume that $\sup_n F(u_n,E_n)< +\infty$. In view of \eqref{eq: F functional} and $\min_{\mathbb{S}^{d-1}} \varphi>0$, this implies $\sup_n \mathcal{H}^{d-1}(\partial E_n) < + \infty$. Moreover, by \eqref{eq: growth conditions} the functions $v_n:= u_n \chi_{\Omega \setminus E_n}$ lie in $GSBD^p(\Omega)$ with $J_{v_n} \subset \partial E_n \cap \Omega$ and satisfy $\sup_{n}\Vert e(v_n)\Vert_{L^p(\Omega)}<+\infty$. This along with the fact that $u_n \to u$ in measure shows that $v_n$ converges weakly in $GSBD^p_\infty(\Omega)$ to $u\chi_{E^0}$, see \eqref{eq: weak gsbd convergence}, where we point out that $A^\infty_u = \lbrace u = \infty \rbrace = \emptyset$. In particular, $u \chi_{E^0} \in GSBD^p_\infty(\Omega)$ and, since $A^\infty_u = \emptyset$, even $u \chi_{E^0} \in GSBD^p(\Omega)$, cf.\ \eqref{eq: compact extension}.
As also \eqref{1405192012} holds, we can apply Theorem \ref{thm:lower-semi}. The lower inequality now follows from \eqref{eqs:1405191304} and the fact that $f$ is convex.
\emph{The upper inequality.} We first observe the following: given $u \in L^0(\Omega;\mathbb{R}^d)$ and $E \in \mathfrak{M}(\Omega)$ with $\mathcal{H}^{d-1}(\partial^* E)<\infty$ and $u\chi_{E^0} \in GSBD^p(\Omega)$, we find an approximating sequence $(v_n)_n \subset L^1(\Omega;\mathbb{R}^d)$ with $v_n\chi_{E^0} \in GSBV^p(\Omega;\mathbb{R}^d)$ such that
\begin{align*}
{\rm (i)} & \ \ v_n \to u \chi_{E^0} \ \ \text{ in } L^0(\Omega;\mathbb{R}^d)\,,\\
{\rm (ii)} & \ \ e(v_n) \chi_{\Omega \setminus E} \to e(u) \chi_{\Omega \setminus E} \ \ \text{ in } L^p(\Omega;{\mathbb{M}^{d\times d}_{\rm sym}})\,, \\
{\rm (iii)} & \ \ \mathcal{H}^{d-1}\big((J_{v_n}\triangle J_u) \cap E^0\big) \to 0.
\end{align*}
This can be seen by first approximating $u \chi_{E^0}$ with a sequence $(\widetilde{u}_n)_n$ by means of Theorem \ref{thm:densityGSBD}, and by setting $v_n:= \widetilde{u}_n \chi_{E^0}$ for every $n$. It is then immediate to verify that the conditions in \eqref{eqs:main'} for $(\widetilde{u}_n)_n$ imply the three conditions above.
By this approximation, \eqref{eq: growth conditions}, and a diagonal argument, it thus suffices to construct a recovery sequence for
$u \in L^1(\Omega;\mathbb{R}^d)$ with $u\chi_{E^0} \in GSBV^p(\Omega;\mathbb{R}^d)$. To this end, we apply Proposition~\ref{th: Braides-upper bound} to obtain $(u_n,E_n)_n$ and we consider the sequence $u_n\chi_{\Omega \setminus E_n}$. We further observe that, if $\mathcal{L}^d(E)>0$, this recovery sequence $(u_n,E_n)_n$ can be constructed ensuring $\mathcal{L}^d(E_n) = \mathcal{L}^d(E)$ for $n \in \mathbb{N}$.
\end{proof}
\color{black} We briefly discuss that by a small adaptation we get a relaxation result for $F$ with respect to the topology induced by $\bar{d}$ on $L^0(\Omega; \bar{\mathbb{R}}^d)$. We introduce $\overline{F}_\infty\colon L^0(\Omega; \bar{\mathbb{R}}^d) \times \mathfrak{M}(\Omega) \to \mathbb{R} \cup \lbrace+ \infty\rbrace$ by
\begin{equation*}
\overline{F}_\infty(u,E) = \inf\Big\{ \liminf_{n\to\infty} F(u_n,E_n)\colon \, \bar{d}(u_n, u) \to 0 \text{ and } \chi_{E_n} \to \chi_E \text{ in }L^1(\Omega) \Big\}\,.
\end{equation*}
\begin{proposition}[Characterization of the lower semicontinuous envelope $\overline{F}_\infty$]\label{prop:relFinfty}
Under the assumptions of Proposition~\ref{prop:relF}, it holds that
\begin{equation*}
\overline{F}_\infty(u,E) = \begin{dcases} \int_{\Omega \setminus E} f(e(u))\, \, \mathrm{d} x + \int_{\Omega \cap \partial^* E} & \varphi (\nu_E) \, \, \mathrm{d} \hd + \int_{J_u \cap (\Omega \setminus E)^1} 2\, \varphi(\nu_u) \, \, \mathrm{d} \hd \\
&\hspace{-1em}\text{if } u= u \, \chi_{E^0} \in GSBD^p_\infty(\Omega) \text{ and }\mathcal{H}^{d-1}(\varphiartial^* E) < +\infty\,,\\
+\infty &\hspace{-1em}\text{otherwise.}
\end{dcases}
\end{equation*}
Moreover, if $\mathcal{L}^d(E)>0$, then for any $(u,E) \in L^0(\Omega;\bar{\mathbb{R}}^d){\times}\mathfrak{M}(\Omega)$ there exists a recovery sequence $(u_n,E_n)_n\subset L^0(\Omega;\mathbb{R}^d){\times}\mathfrak{M}(\Omega)$ such that $\mathcal{L}^d(E_n) = \mathcal{L}^d(E)$ for all $n \in \mathbb{N}$.
\end{proposition}
\begin{proof}
It is easy to check that the lower inequality still works for $u = u\chi_{E^0} \in GSBD^p_\infty(\Omega)$ by Theorem \ref{thm:lower-semi}, where we use \eqref{eq: growth conditions}, $f(0)=0$, and the fact that $e(u) = 0$ on $\lbrace u = \infty \rbrace$, see \eqref{eq:same}. Moreover, we are able to extend the upper inequality to any $u \in L^0(\Omega; \bar{\mathbb{R}}^d)$ such that $u=u \,\chi_{E^0} \in GSBD^p_\infty(\Omega)$. In fact, it is enough to notice that for any $u=u \,\chi_{E^0} \in GSBD^p_\infty(\Omega)$ and any sequence $(t_n)_n\subset \mathbb{R}^d$ with $|t_n| \to \infty$ such that for the functions $\tilde{u}_{t_n} \in GSBD^p(\Omega)$ defined in \eqref{eq: compact extension} property \eqref{eq:same} holds, we obtain $\tilde{u}_{t_n} =\tilde{u}_{t_n} \, \chi_{E^0}$, $\bar{d}(\tilde{u}_{t_n}, u) \to 0$ as $n \to \infty$, and
\begin{equation*}
\int_{\Omega \setminus E} f(e(u))\, \, \mathrm{d} x + \int_{\Omega \cap \partial^* E} \varphi (\nu_E) \, \, \mathrm{d} \hd + \int_{J_u \cap (\Omega \setminus E)^1} 2\, \varphi(\nu_u) \, \, \mathrm{d} \hd = \overline{F}(\tilde{u}_{t_n},E)
\end{equation*}
for all $n \in \mathbb{N}$, with $J_u$ defined by \eqref{eq: general jump}. Then, the upper inequality follows from the upper inequality in Proposition \ref{prop:relF} and a diagonal argument.
\end{proof}
\color{black}
As a consequence, we obtain the following lower semicontinuity result in $GSBD^p_\infty$.
\begin{corollary}[Lower semicontinuity in $GSBD^p_\infty$]\label{cor: GSDB-lsc}
Let us suppose that a sequence $(u_n)_n \subset GSBD^p_\infty(\Omega)$ converges weakly in $GSBD^p_\infty(\Omega)$ to $u\in GSBD^p_\infty(\Omega)$, see \eqref{eq: weak gsbd convergence}. Then for each norm $\phi$ on $\mathbb{R}^d$
there holds
$$\int_{J_u} \phi(\nu_u) \, \, \mathrm{d} \hd \le \liminf_{n \to \infty} \int_{J_{u_n}} \phi(\nu_{u_n}) \, \, \mathrm{d} \hd. $$
\end{corollary}
\begin{proof}
Let $\varepsilon>0$ and $f(\zeta) = \varepsilon|\zeta^T + \zeta|^p $ for $\zeta \in \mathbb{M}^{d\times d}$. The upper inequality in
\color{black} Proposition~\ref{prop:relFinfty} \color{black}
(for $u_n$ and $E = \emptyset$) shows that for each $u_n \in GSBD_\infty^p(\Omega)$, we can find a Lipschitz set $E_n$
with $\mathcal{L}^{d}(E_n) \le \frac{1}{n}$ and $v_n \in L^0(\Omega;\mathbb{R}^d)$ with $v_n|_{\Omega\setminus \overline E_n} \in W^{1,p}( \Omega \setminus \overline E_n;\mathbb{R}^d)$, $v_n|_{E_n}=0$, and $\color{black}\bar{d} \color{black}(v_n,u_n) \le \frac{1}{n}$ (see \eqref{eq:metricd}) such that
\begin{align}\label{eq: imm-ref}
{ \int_{\Omega \setminus E_n} \varepsilon|e(v_n)|^p\, \, \mathrm{d} x + \int_{\Omega \cap\partial E_n } \phi(\nu_{E_n}) \, \mathrm{d} \hd \le \int_{\Omega } \varepsilon|e(u_n)|^p\, \, \mathrm{d} x + \int_{J_{u_n}} 2\phi(\nu_{u_n}) \, \mathrm{d} \hd + \frac{1}{n} \,.}
\end{align}
Observe that $\color{black}\bar{d} \color{black}(v_n,u) \to 0$ as $n\to \infty$, and thus $v_n$ converges weakly to $u$ in $GSBD^p_\infty(\Omega)$. By applying Theorem~\ref{thm:lower-semi} on $(v_n,E_n)$ and using $E = \emptyset$ we get \[
\int_{J_u} 2\phi(\nu_u) \, \, \mathrm{d} \hd \le \liminf_{n \to \infty} \int_{\Omega \cap\partial E_n } \phi(\nu_{E_n})\, \mathrm{d} \hd\,.
\]
This, along with \eqref{eq: imm-ref}, $\sup_{n\in\mathbb{N}} \Vert e(u_n)\Vert_{L^p(\Omega)}<+\infty$, and the arbitrariness of $\varepsilon$ yields the result. \end{proof}
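For the reader's convenience, we spell out the concluding step of the above proof in slightly more detail (a sketch, in the notation of the proof): combining the previous display with \eqref{eq: imm-ref} and dropping the nonnegative term $\varepsilon\int_{\Omega \setminus E_n} |e(v_n)|^p \, \mathrm{d} x$, we get
\begin{align*}
\int_{J_u} 2\phi(\nu_u) \, \, \mathrm{d} \hd & \le \liminf_{n \to \infty} \int_{\Omega \cap\partial E_n } \phi(\nu_{E_n})\, \mathrm{d} \hd \\
& \le \varepsilon \sup\nolimits_{n\in\mathbb{N}} \Vert e(u_n)\Vert^p_{L^p(\Omega)} + \liminf_{n \to \infty} \int_{J_{u_n}} 2\phi(\nu_{u_n}) \, \mathrm{d} \hd\,,
\end{align*}
and the claim follows upon letting $\varepsilon \to 0$.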
We now address the relaxation of $F_{\rm Dir}$, see \eqref{eq: FDir functional}, i.e., a version of $F$ with boundary data.
We take advantage of the following approximation result which is obtained by following the lines of \cite[Theorem~5.5]{CC17}, where an analogous approximation is proved for Griffith functionals with Dirichlet boundary conditions. The new feature with respect to \cite[Theorem~5.5]{CC17} is that, besides the construction of approximating functions with the correct boundary data, also approximating sets are constructed. For convenience of the reader, we give a sketch of the proof in Appendix~\ref{sec:App} highlighting the adaptations needed with respect to \cite[Theorem~5.5]{CC17}. In the following, we denote by $\overline F'_{\mathrm{Dir}}$ the functional on the right hand side of \eqref{1807191836}.
\begin{lemma}\label{le:0410191844}
Suppose that $\partial_D \Omega\subset \partial \Omega$ satisfies \eqref{0807170103}. Consider $(v, H) \in L^0(\Omega;\mathbb{R}^d) \times \mathfrak{M}(\Omega)$ such that $\overline F'_{\mathrm{Dir}}(v,H)<+\infty$. Then there exist $(v_n, H_n) \in (L^p(\Omega;\mathbb{R}^d) \cap SBV^p(\Omega;\mathbb{R}^d)) \times \mathfrak{M}(\Omega)$ such that $J_{v_n}$ is closed in $\Omega$ and included in a finite union of closed connected pieces of $C^1$ hypersurfaces, $v_n \in W^{1,p}(\Omega\setminus J_{v_n}; \mathbb{R}^d)$, $v_n=u_0$ in a neighborhood $V_n \subset \Omega$ of ${\partial_D \Omega}$, $H_n$ is a set of finite perimeter, and
\begin{itemize}
\item[{\rm (i)}] $v_n \to v$ in $L^0(\Omega;\mathbb{R}^d)$,
\item[{\rm (ii)}]$ \chi_{H_n} \to \chi_H $ in $L^1(\Omega)$,
\item[{\rm (iii)}] $\limsup_{n\to \infty} \overline F'_{\mathrm{Dir}}(v_n, H_n) \leq \overline F'_{\mathrm{Dir}}(v,H)$.
\end{itemize}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:relFDir}]
First, we denote by $\Omega'$ a bounded open set with $\Omega \subset \Omega' $ and $\Omega' \cap {\partial \Omega} = {\partial_D \Omega}$. By $F'$ and ${\overline F}'$ we denote the analogs of the functionals $F$ and $\overline F$, respectively, defined on $L^0(\Omega';\mathbb{R}^d) {\times} \mathfrak{M}(\Omega')$. Given $u \in L^0(\Omega;\mathbb{R}^d)$, we define the extension $u' \in L^0(\Omega';\mathbb{R}^d)$ by setting $u' = u_0$ on $\Omega' \setminus \Omega$ for fixed boundary values $u_0 \in W^{1,p}(\mathbb{R}^d;\mathbb{R}^d)$. Then, we observe
\begin{equation}\label{eq: extensi}
F'(u', E) = F_{\mathrm{Dir}}(u,E) + \int_{\Omega' \setminus \Omega} f(e(u_0)) \, \mathrm{d} x\,, \quad {\overline F}'(u', E) = \overline F_{\mathrm{Dir}}(u,E) + \int_{\Omega' \setminus \Omega} f(e(u_0)) \, \mathrm{d} x\,.
\end{equation}
Therefore, the lower inequality follows from Proposition \ref{prop:relF} applied for $F', \overline F'$ instead of $F, \overline F$.
We now address the upper inequality.
In view of Lemma~\ref{le:0410191844} and by a diagonal argument, it is enough to prove the result in the case where, besides the assumptions in the statement, also $u \in L^p(\Omega;\mathbb{R}^d) \cap SBV^p(\Omega;\mathbb{R}^d)$ and $u = u_0$ in a neighborhood $U \subset \Omega$ of $\partial_D \Omega$.
By $(u_n,E_n)_n$ we denote a recovery sequence for $(u,E)$ given by Proposition~\ref{th: Braides-upper bound}. In general, the functions $(u_n)_n$ do not satisfy the boundary conditions required in \eqref{eq: FDir functional}. Let $\delta>0$ and let $V \subset \subset U$ be a smaller neighborhood of $\partial_D \Omega$. In view of \eqref{eq: growth conditions}--\eqref{eq: F functional}, by a standard diagonal argument in the theory of $\Gamma$-convergence, it suffices to find a sequence $(v_n)_n \subset L^1(\Omega;\mathbb{R}^d)$ with $v_n|_{\Omega \setminus \overline E_n} \in W^{1,p}( \Omega \setminus \overline E_n;\mathbb{R}^d)$, $v_n=0$ on $E_n$, and $v_n = u_0$ on $V\setminus \overline E_n$ such that
\begin{align}\label{eq: vnunv}
\limsup\nolimits_{n \to \infty}\Vert v_n - u\chi_{E^0} \Vert_{L^1(\Omega)} \le \delta, \qquad \limsup\nolimits_{n \to \infty}\Vert e(v_n) - e(u_n)\chi_{\Omega \setminus E_n}\Vert_{L^p(\Omega)} \le \delta.
\end{align}
To this end, choose $\psi \in C^\infty(\Omega)$ with $\psi = 1$ in $\Omega \setminus U$ and $\psi = 0$ on $V$. \color{black} The sequence $(u_n)_n$ converges to $u$ only in $L^1(\Omega;\mathbb{R}^d)$. Therefore, we introduce truncations to obtain $L^p$-convergence: for $M>0$, we define \color{black} $u^{M}$ by $u^M_i = (-M \vee u_i)\wedge M $, where $u_i$ denotes the $i$-th component, $i=1,\ldots,d$. In a similar fashion, we define $u_{n}^M$. By Proposition~\ref{th: Braides-upper bound} we then get $\chi_{E_n} \to \chi_E$ in $L^1(\Omega)$ and
\begin{align*}
u_{n}^M \to u^M \ \ \ \text{in} \ \ \ L^p(\Omega;\mathbb{R}^d), \ \ \ \ \ \ \ \ \ \nabla u_{n}^M \chi_{\Omega \setminus E_n} \to \nabla u^M \chi_{E^0} \ \ \ \text{in} \ \ \ L^p(\Omega;\mathbb{M}^{d\times d})\,.
\end{align*}
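The first of these convergences follows from a standard truncation estimate, which we record for convenience (a sketch; the componentwise truncation is $1$-Lipschitz and bounded by $M$): since $|u^M_{n,i} - u^M_i| \le \min\lbrace |u_{n,i} - u_i|, 2M \rbrace$ for $i=1,\ldots,d$, we have, up to a dimensional constant,
\begin{align*}
\Vert u_n^M - u^M \Vert^p_{L^p(\Omega)} \le (2M)^{p-1} \Vert u_n - u \Vert_{L^1(\Omega)} \to 0\,,
\end{align*}
where we used that $u_n \to u$ in $L^1(\Omega;\mathbb{R}^d)$.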
We define $v_n:= (\psi u_n^M + (1-\psi) u_0)\chi_{\Omega \setminus E_n}$. Clearly, $v_n = u_0$ on $V\setminus \overline E_n$. By $\nabla v_n = \psi \nabla u_{n}^M+ (1-\psi)\nabla u_{0} + \nabla \psi \otimes (u_{n}^M-u_0)$ on $\Omega \setminus E_n$, $u=u_0$ on $U$, and Proposition~\ref{th: Braides-upper bound} we find
\betagin{align*}
\limsup\nolimits_{n \to \infty}\Vert v_n - u\Vert_{L^1(\Omega)}&\le\Vert u - u^M\Vert_{L^1(\Omega)}\,, \\
\limsup\nolimits_{n \to \infty}\Vert e(v_n) - e(u_n)\chi_{\Omega \setminus E_n}\Vert_{L^p(\Omega)}&\le \Vert \nabla u -\nabla u^M\Vert_{L^p(E^0)} + \Vert \nabla \varphisi \Vert_\infty\Vert u - u^M\Vert_{L^p(\Omega)}\,.
\end{align*}
For $M$ sufficiently large, we obtain \eqref{eq: vnunv} since $u=u\chi_{E^0}$. This concludes the proof.
\end{proof}
\color{black} As done for the passage from Proposition~\ref{prop:relF} to Proposition~\ref{prop:relFinfty}, we may obtain the following characterization of the lower semicontinuous envelope of $F_{\mathrm{Dir}}$ with respect to the convergence induced by $\bar{d}$ on $L^0(\Omega; \bar{\mathbb{R}}^d)$.
\begin{theorem}[Characterization of the lower semicontinuous envelope $\overline{F}_{\mathrm{Dir},\infty}$]\label{thm:relFDirinfty}
Under the assumptions of Theorem~\ref{thm:relFDir}, the lower semicontinuous envelope
\begin{equation*}\label{eq: Fdir-relainfty}
\overline{F}_{\mathrm{Dir},\infty}(u,E) = \inf\Big\{ \liminf_{n\to\infty} F_{\mathrm{Dir}}(u_n,E_n)\colon \, \bar{d}(u_n, u) \to 0 \text{ and } \chi_{E_n} \to \chi_E \text{ in }L^1(\Omega) \Big\}
\end{equation*}
for $ u\in L^0(\Omega; \bar{\mathbb{R}}^d)$ and $E \in \mathfrak{M}(\Omega)$ is given by
\begin{equation}\label{1807191836'}
\overline{F}_{\mathrm{Dir},\infty}(u,E) =
\overline{F}_\infty(u,E) + \int\limits_{{\partial_D \Omega} \cap \partial^* E} \hspace{-0.5cm} \varphi (\nu_E) \, \mathrm{d} \hd + \int\limits_{ \{ \mathrm{tr}(u) \neq \mathrm{tr}(u_0) \} \cap ({\partial_D \Omega} \setminus \partial^* E) } \hspace{-0.5cm} 2 \, \varphi( \nu_\Omega ) \, \mathrm{d} \hd\,.
\end{equation}
Moreover, if $\mathcal{L}^d(E)>0$, then for any $(u,E) \in L^0(\Omega;\bar{\mathbb{R}}^d){\times}\mathfrak{M}(\Omega)$ there exists a recovery sequence $(u_n,E_n)_n\subset L^0(\Omega;\mathbb{R}^d){\times}\mathfrak{M}(\Omega)$ such that $\mathcal{L}^d(E_n) = \mathcal{L}^d(E)$ for all $n \in \mathbb{N}$.
\end{theorem}
Notice that in \eqref{1807191836'} we wrote $\mathrm{tr}(u)$ also for $u\in GSBD^p_\infty(\Omega)$, with a slight abuse of notation:
$\mathrm{tr}(u)$ should be understood as $\mathrm{tr}(\tilde{u}_t)$, cf.\ \eqref{eq: compact extension}, for any $t \in \mathbb{R}^d$ such that $\mathcal{H}^{d-1}(\{ u_0 = t\} \cap {\partial_D \Omega})=0$\color{black}.
\subsection{Compactness and existence results for the relaxed functional}\label{sec: Fcomp}
We start with the following general compactness result.
\begin{theorem}[Compactness]\label{thm:compF}
For every $(u_n, E_n)_n$ with $\sup_n F(u_n, E_n) < +\infty$, there exist a subsequence (not relabeled), $u \in GSBD^p_\infty(\Omega)$, and $E \in \mathfrak{M}(\Omega)$ with $\mathcal{H}^{d-1}(\partial^* E) < +\infty$ such that $u_n$ converges weakly in $GSBD^p_\infty(\Omega)$ to $u$ and $\chi_{E_n} \to \chi_E$ in $L^1(\Omega)$.
\end{theorem}
\begin{proof}
Let $(u_n, E_n)_n$ with $\sup_n F(u_n, E_n) < +\infty$. As by the assumptions on $\varphi$ there holds $\sup_{n\in \mathbb{N}} \mathcal{H}^{d-1}(\partial E_n) < +\infty$, a compactness result for sets of finite perimeter (see \cite[Remark 4.20]{AFP}) implies that there exists $E \subset \Omega$ with $\mathcal{H}^{d-1}(\partial^* E)<+\infty$ such that $\chi_{E_n} \to \chi_E$ in $L^1(\Omega)$, up to a subsequence (not relabeled).
Since the functions $u_n=u_n \chi_{\Omega \setminus E_n}$ satisfy $J_{u_n} \subset \partial E_n$, we get $\sup_n \mathcal{H}^{d-1}(J_{u_n}) <+ \infty$. Moreover, by the growth assumptions on $f$ (see \eqref{eq: growth conditions}) we get that $\Vert e(u_n)\Vert_{L^p(\Omega)}$ is uniformly bounded. Thus, by Theorem \ref{th: GSDBcompactness}, $u_n$ converges (up to a subsequence) weakly in $GSBD^p_\infty(\Omega)$ to some $u \in GSBD^p_\infty(\Omega)$. This concludes the proof.
\end{proof}
\color{black} We are now ready to prove Theorem~\ref{th: relF-extended}.
\begin{proof}[Proof of Theorem~\ref{th: relF-extended}.]
The existence of minimizers for $\overline F_{\mathrm{Dir},\infty}$ follows by combining Theorem~\ref{thm:relFDirinfty} and Theorem~\ref{thm:compF}, by means of \color{black} general properties of relaxations, see e.g.\ \cite[Theorem~3.8]{DMLibro}.
To obtain minimizers for $\overline F_{\mathrm{Dir}}$, it is enough to observe (recall \eqref{eq: compact extension}, \eqref{eq:same}) that
\begin{equation*}
\overline F_{\mathrm{Dir},\infty}(u,E) \ge \overline F_{\mathrm{Dir}}(v_a, E)
\end{equation*}
for every $u\in GSBD^p_\infty(\Omega)$ and $v_a := u\chi_{\Omega\setminus A^\infty_u} + a \chi_{A^\infty_u}$ (recall $A^\infty_u=\{u=\infty\}$), where $a\colon \mathbb{R}^d \to \mathbb{R}^d$ is an arbitrary affine function with skew-symmetric gradient (usually called an \emph{infinitesimal rigid motion}).
Starting from a minimizer of $\overline F_{\mathrm{Dir},\infty}$, if $A^\infty_u \neq \emptyset$, we thus obtain a family of minimizers for $\overline F_{\mathrm{Dir}}$, parametrized by the infinitesimal rigid motions $a$. This concludes the proof.
\end{proof}
\color{black}
\section{Functionals on domains with a subgraph constraint}\label{sec:GGG}
In this section we prove the results announced in Subsection \ref{sec: results2}.
\subsection{Relaxation of the energy $G$}\label{subsec:RelG}
This subsection is devoted to the proof of Theorem~\ref{thm:relG}. The lower inequality is obtained by exploiting the tool of $\sigma^p_{\mathrm{sym}}$-convergence introduced in Section~\ref{sec:sigmap}. The corresponding analysis will also prove to be useful for the compactness theorem in the next subsection. The proof of the upper inequality is quite delicate, and a careful procedure is needed to guarantee that the approximating displacements are still defined on a domain which is the subgraph of a function. We only partially follow the strategy in \cite[Proposition~4.1]{ChaSol07}, and employ also other arguments in order to improve the $GSBV$ proof, which might fail in some pathological cases.
Consider a Lipschitz set $\omega \subset \mathbb{R}^{d-1}$ which is uniformly star-shaped with respect to the origin, see \eqref{eq: star-shaped}. We recall the notation $\Omega = \omega \times (-1, M + 1 )$ and
\begin{align}\label{eq: graphi}
\Omega_h = \lbrace x\in \Omega \colon \, -1 < x_d < h(x') \rbrace, \ \ \ \ \ \Omega_h^+ = \Omega_h \cap \lbrace x_d > 0 \rbrace
\end{align}
for $h\colon\omega \to [0,M]$ measurable, where we write $x = (x',x_d)$ for $x \in \mathbb{R}^d$. Moreover, we let $\Omega^+ = \Omega \cap \lbrace x_d>0 \rbrace$.
\paragraph*{\textbf{The lower inequality.}} Consider $(u_n, h_n)_n$ with $\sup_n G(u_n, h_n) < +\infty$. Then, we have that $h_n \in C^1(\omega; [0,M] )$, $u_n|_{\Omega_{h_n}} \in W^{1,p}(\Omega_{h_n};\mathbb{R}^d )$, $u_n|_{\Omega\setminus \Omega_{h_n}}=0$, and $u_n = u_0$ on $\omega\times (-1,0)$.
Suppose that $(u_n, h_n)_n$ converges in $L^0(\Omega;\mathbb{R}^d){\times}L^1(\omega)$ to $(u, h)$.
We let
\begin{equation}\label{eq: Gamma-n}
\Gamma_n:=\partial \Omega_{h_n} \cap \Omega = \lbrace x \in \Omega\colon \, h_n(x') = x_d \rbrace
\end{equation}
be the graph of the function $h_n$. Note that $\sup_n \mathcal{H}^{d-1}(\Gamma_n)<+\infty$. We take $U = \omega \times (-\frac{1}{2},M) $ and $U' = \Omega = \omega \times (-1,M+1)$, and apply Theorem~\ref{thm:compSps} to deduce that $(\Gamma_n)_n$ $\sigma^p_{\mathrm{\rm sym}}$-converges (up to a subsequence) to a pair $(\Gamma, G_\infty)$. A fundamental step in the proof will be to show that
\begin{align}\label{eq: G is empty}
G_\infty = \emptyset\,.
\end{align}
We postpone the proof of this property to Step~3 below. We first characterize the limiting set $\Gamma$ (Step 1) and prove the lower inequality (Step 2), by following
partially the lines of \cite[Subsection~3.2]{ChaSol07}.
In the whole proof, for simplicity we omit to write $\tilde{\subset}$ and $\tilde{=}$ to indicate that the inclusions hold up to $\mathcal{H}^{d-1}$-negligible sets.
\noindent \emph{Step 1: Characterization of the limiting set $\Gamma$.} Let us prove that the set
\begin{equation}\label{eq: sigma-low-def}
\Sigma:= \Gamma \cap \Omega_h^1
\end{equation}
is vertical, that is
\begin{equation}\label{eq: Sigma}
(\Sigma+t e_d) \cap \Omega_h^1 \subset \Sigma \ \ \ \ \text{for any $t \geq 0$}\,.
\end{equation}
This follows as in \cite[Subsection~3.2]{ChaSol07}: in fact, consider $(v_n)_n$ and $v$ as in Definition~\ref{def:spsconv}(ii). In particular, $v_n = 0$ on $U' \setminus U$, $J_{v_n} {\subset} \Gamma_n$, and, in view of \eqref{eq: G is empty}, $ v$ is $\mathbb{R}^d$-valued with $\Gamma = J_v$. The functions $v'_n(x):= v_n(x', x_d -t) \chi_{\Omega_{h_n}}(x)$ (with $ t >0 $, extended by zero in $\omega{\times}(-1, -1+t)$)
converge to $v'(x):= v(x', x_d-t) \chi_{\Omega_h}(x)$ in measure on $U'$. Since $J_{v_n'} \subset \Gamma_n$, Definition~\ref{def:spsconv}(i) implies $J_{v'} \setminus \Gamma {\subset} (G_\infty)^1$. As $G_\infty = \emptyset$ by \eqref{eq: G is empty}, we get $J_{v'} {\subset} \Gamma$, so that
\[
(\Sigma + t e_d) \cap \Omega_h^1 = (\Gamma + t e_d) \cap \Omega_h^1 = (J_v + t e_d) \cap \Omega_h^1 = J_{v'} \cap \Omega_h^1\subset \Gamma \cap \Omega_h^1=\Sigma\,,
\]
where we have used $\Gamma = J_v$. This shows \eqref{eq: Sigma}. In particular, $\nu_{\Sigma} \cdot e_d =0$ $\mathcal{H}^{d-1}$-a.e.\ in $\Sigma$. Next, we show that
\begin{equation}\label{2304191211}
\mathcal{H}^{d-1}(\partial^* \Omega_h \cap \Omega) + 2 \mathcal{H}^{d-1}(\Sigma) \leq \liminf_{n\to \infty} \int_\omega \sqrt{1 + |\nabla h_n(x')|^2} \, \mathrm{d} x'\,.
\end{equation}
To see this, we again consider functions $(v_n)_n$ and $v$ satisfying Definition~\ref{def:spsconv}(ii). In particular, we have $J_{v_n} \subset \Gamma_n$ and $J_v = \Gamma$. Since $\Gamma_n$ is the graph of a $C^1$ function, we either get $v_n|_{\Omega_{h_n}} \equiv \infty$ or, by Korn's inequality, we have $v_n|_{\Omega_{h_n}} \in W^{1,p}(\Omega_{h_n}; \mathbb{R}^d)$. Since $v_n = 0$ on $U' \setminus U$, we obtain $v_n|_{\Omega_{h_n}} \in W^{1,p}(\Omega_{h_n}; \mathbb{R}^d)$. We apply Theorem \ref{thm:lower-semi} for $E_n = \Omega \setminus \Omega_{h_n}$, $E= \Omega \setminus \Omega_{h}$, and the sequence of functions $w_n := v_n\chi_{\Omega \setminus E_n} = v_n \chi_{\Omega_{h_n}}$.
Observe that $\chi_{E_n} \to \chi_E$ in $L^1(\Omega)$. Moreover, $w_n$ converges weakly in $GSBD^p_\infty(\Omega)$ to $w := v\chi_{\Omega_h}$ since $v_n$ converges weakly in $GSBD^p_\infty(\Omega)$ to $v$ and $\sup_n \mathcal{H}^{d-1}(\partial E_n)<+\infty$. By \eqref{1405191304} for $\varphi \equiv 1$ on $\mathbb{S}^{d-1}$ there holds
$$
\mathcal{H}^{d-1}(\partial^* \Omega_h \cap\Omega) + 2 \mathcal{H}^{d-1}(J_w \cap \Omega_{h}^1) \leq \liminf_{n\to \infty} \mathcal{H}^{d-1}(\partial \Omega_{h_n} \cap \Omega) \,,
$$
where we used that $E^0 = \Omega_{h}^1$ and $\partial^* E \cap \Omega = \partial^* \Omega_h \cap \Omega$. Since $J_v = \Gamma$ and $J_w \cap \Omega_h^1 = J_v \cap \Omega_h^1 = \Sigma$, we indeed get \eqref{2304191211}, where for the right hand side we use that $\partial \Omega_{h_n}$ is the graph of the function $h_n\in C^1(\omega; [0,M] )$. For later purposes in Step~3, we also note that by Corollary \ref{cor: GSDB-lsc} for $\phi(\nu)= |\xi \cdot \nu|$, with $\xi \in \mathbb{S}^{d-1}$ fixed, we get
\begin{align}\label{eq: aniso-lsc}
\int_\Gamma |\nu_\Gamma \cdot \xi| \, \mathrm{d} \hd = \int_{J_v} |\nu_v \cdot \xi| \, \mathrm{d} \hd \le \liminf_{n \to \infty}\int_{J_{v_n}} |\nu_{v_n} \cdot \xi| \, \mathrm{d} \hd \le \liminf_{n \to \infty} \int_{\Gamma_n} |\nu_{\Gamma_n} \cdot \xi| \, \mathrm{d} \hd\,.
\end{align}
(Strictly speaking, as $\phi$ is only a seminorm, we apply Corollary \ref{cor: GSDB-lsc} for $\phi +\varepsilon$ for any $\varepsilon>0$.)
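In detail (a sketch; the norm $\phi_\varepsilon$ is our notation): applying Corollary \ref{cor: GSDB-lsc} with the norm $\phi_\varepsilon(\nu) := |\xi \cdot \nu| + \varepsilon|\nu|$, and using $J_{v_n} \subset \Gamma_n$ as well as $\sup_n \mathcal{H}^{d-1}(\Gamma_n) < +\infty$, we get
\begin{align*}
\int_{J_v} |\nu_v \cdot \xi| \, \mathrm{d} \hd \le \liminf_{n \to \infty} \int_{J_{v_n}} \phi_\varepsilon(\nu_{v_n}) \, \mathrm{d} \hd \le \liminf_{n \to \infty} \int_{\Gamma_n} |\nu_{\Gamma_n} \cdot \xi| \, \mathrm{d} \hd + \varepsilon \sup\nolimits_n \mathcal{H}^{d-1}(\Gamma_n)\,,
\end{align*}
and \eqref{eq: aniso-lsc} follows by letting $\varepsilon \to 0$.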
\noindent \emph{Step 2: The lower inequality.} We now show the lower bound. Recall that $(u_n, h_n)_n$ converges in $L^0(\Omega;\mathbb{R}^d){\times}L^1(\omega)$ to $(u, h)$ and that $(G(u_n, h_n))_n$ is bounded. Then, \eqref{eq: growth conditions} and $\min_{\mathbb{S}^{d-1}} \varphi>0$ along with Theorem \ref{th: GSDBcompactness} and the fact that $\mathcal{L}^d(\lbrace x \in \Omega\colon |u_n(x)| \to \infty \rbrace)=0 $ imply that
the limit $u= u\chi_{\Omega_h}$ lies in $GSBD^p(\Omega)$.
There also holds $u = u_0$ on $\omega\times (-1,0)$ by \eqref{eq: GSBD comp}(i) and the fact that $u_n = u_0$ on $\omega\times (-1,0)$ for all $n \in \mathbb{N}$. In particular, we observe that $u_n = u_n \chi_{\Omega_{h_n}}$ converges weakly in $GSBD^p_\infty(\Omega)$ to $u$, cf.\ \eqref{eq: weak gsbd convergence}. The fact that $h \in BV(\omega; [0,M] )$ follows from a standard compactness argument. This shows $\overline{G}(u,h) < + \infty$.
To obtain the lower bound for the energy, we again apply Theorem \ref{thm:lower-semi} for $E_n = \Omega \setminus \Omega_{h_n}$ and $E= \Omega \setminus \Omega_{h}$. Consider the sequence of functions $v_n := \psi u_n\chi_{\Omega \setminus E_n} = \psi u_n$, where $\psi \in C^\infty(\Omega)$ with $\psi = 1$ in a neighborhood of $\Omega^+ = \Omega\cap\lbrace x_d >0\rbrace $ and $\psi = 0$ on $\omega \times (-1,-\frac{1}{2})$. We observe that $v_n = 0$ on $U'\setminus U = \omega \times ((-1,-\frac{1}{2}]\cup[M,M+1)) $ and that $v_n$ converges to $ v:= \psi u \in GSBD^p(\Omega)$ weakly in $GSBD^p_\infty(\Omega)$. Now we apply Theorem \ref{thm:lower-semi}.
First, notice that \eqref{1405191303}, $\varphisi = 1$ on $\Omega^+$, and the fact that $A^\infty_u = \emptyset$ imply $e(u_n)\chi_{\Omega^+_{h_n}} \rightharpoonup e(u) \chi_{\Omega^+_h}$ weakly in $L^p(\Omega;{\mathbb{M}^{d\times d}_{\rm sym}})$. This along with the convexity of $f$ yields
\begin{equation}\label{2304191220}
\int_{\Omega_h^+} f(e(u)) \, \mathrm{d} x \leq \liminf_{n\to \infty} \int_{\Omega_{h_n}^+} f(e(u_n)) \, \mathrm{d} x\,.
\end{equation}
Moreover, applying Definition~\ref{def:spsconv}(i) on the sequence $(v_n)_n$, which satisfies $v_n= 0$ on $U' \setminus U$ and $J_{v_n} \subset \Gamma_n$, we observe $J_u = J_v \subset \Gamma$, where we have also used \eqref{eq: G is empty}. Recalling the definition of $J_u' = \lbrace (x',x_d + t)\colon \, x \in J_u, \, t \ge 0 \rbrace$, see \eqref{eq: Ju'}, and using \eqref{eq: sigma-low-def}--\eqref{eq: Sigma} we find $J_u' \cap \Omega^1_h\subset\Sigma$. Thus, by \eqref{2304191211}, we obtain
\begin{equation}\label{2304191220-2}
\mathcal{H}^{d-1}(\partial^* \Omega_h \cap \Omega) + 2 \mathcal{H}^{d-1}(J_u' \cap \Omega^1_h) \leq \liminf_{n\to \infty} \int_\omega \sqrt{1 + |\nabla h_n(x')|^2} \, \mathrm{d} x'\,.
\end{equation}
Collecting \eqref{2304191220} and \eqref{2304191220-2} we conclude the lower inequality.
To conclude the proof, it remains to confirm \eqref{eq: G is empty}.
\noindent \emph{Step 3: Proof of $G_\infty =\emptyset$.} Recall the definition of the graphs $\Gamma_n$ in \eqref{eq: Gamma-n} and their $\sigma^p_{\mathrm{\rm sym}}$-limit $\Gamma$ on the sets $U = \omega \times (-\frac{1}{2},M) $ and $U' = \Omega$. As before, consider $\psi \in C^\infty(\Omega)$ with $\psi = 1$ in a neighborhood of $\Omega^+$ and $\psi = 0$ on $\omega \times (-1,-\frac{1}{2})$. By employing (i) in Definition~\ref{def:spsconv} for the sequence $v_n = \psi \chi_{\Omega_{h_n}} e_d$ and its limit $v= \psi \chi_{ \Omega_{h} } e_d$, we get that $(\partial^*\Omega_h \cap \Omega) \setminus \Gamma {\subset} (G_\infty)^1$. Since $U' \cap \partial^* G_\infty \subset \Gamma$ by definition of $\sigma^p_{\mathrm{\rm sym}}$-convergence, we observe
\begin{align}\label{eq: summary5}
\Gamma \supset \big(\partial^* G_\infty\cap \Omega \big) \cup \big( \partial^* \Omega_h \cap \Omega \cap (G_\infty)^0 \big)\,.
\end{align}
We estimate the $\mathcal{H}^{d-1}$-measure of the two terms on the right separately.
\noindent \emph{The first term.} We define $\Psi = \partial^* G_\infty \cap \Omega$ for brevity. Since $G_\infty$ is contained in $U = \omega \times (-\frac{1}{2},M)$ and $\Omega = \omega \times (-1,M+1)$, we observe $\Psi = \partial^* G_\infty \cap (\omega \times \mathbb{R})$. Choose $\omega_\Psi \subset \omega$ such that $\omega_\Psi \times \lbrace 0 \rbrace $ is the orthogonal projection of $\Psi$ onto $\mathbb{R}^{d-1}\times \lbrace 0 \rbrace$. Note that $\Psi$ and $\omega_\Psi$ satisfy
\begin{equation*}\label{eq: needed later3}
\mathcal{H}^0\big( \Psi^{e_d}_y \big) \ge 2 \ \ \ \text{ for all $y \in \omega_\Psi \times \lbrace 0\rbrace$,}
\end{equation*}
since $G_\infty$ is a set of finite perimeter: for $\mathcal{H}^{d-1}$-a.e.\ $y \in \omega_\Psi \times \lbrace 0 \rbrace$, the one-dimensional slice of $G_\infty$ along the line $y + \mathbb{R} e_d$ is a nonempty bounded set of finite perimeter in $\mathbb{R}$, so that its boundary, which coincides with $\Psi^{e_d}_y$, contains at least two points. Thus
\begin{align}\label{eq: summary1}
\int_{\omega \times \lbrace 0 \rbrace} \mathcal{H}^0\big( (\partial^* G_\infty\cap \Omega )^{e_d}_y \big) \, \mathrm{d} \mathcal{H}^{d-1}(y) \ge 2 \mathcal{H}^{d-1}(\omega_\Psi)\,.
\end{align}
\emph{The second term.} As $\partial^* \Omega_h \cap \Omega$ is the (generalized) graph of the function $h\colon \omega \to [0,M]$, we have
\begin{align}\label{eq: summary2}
\int_{\omega \times \lbrace 0 \rbrace} \mathcal{H}^0\Big( \big( \partial^* \Omega_h \cap \Omega \big)^{e_d}_y \Big) \, \mathrm{d} \mathcal{H}^{d-1}(y) = \mathcal{H}^{d-1}(\omega)\,.
\end{align}
In a similar fashion, letting $\Lambda_2 = (\partial^* \Omega_h \cap\Omega) \setminus (G_\infty)^0$ and denoting by $\omega_{\Lambda_2}\subset \omega$ its orthogonal projection onto $\mathbb{R}^{d-1} \times \lbrace 0 \rbrace$, we get
\begin{align}\label{eq: summary3}
\int_{\omega \times \lbrace 0 \rbrace} \mathcal{H}^0\Big( \big((\partial^* \Omega_h \cap\Omega) \setminus (G_\infty)^0 \big)^{e_d}_y \Big) \, \mathrm{d} \mathcal{H}^{d-1}(y) = \mathcal{H}^{d-1}(\omega_{\Lambda_2})\,.
\end{align}
As $\Lambda_2$ is contained in $(G_\infty)^1 \cup \partial^* G_\infty$, we get $\omega_{\Lambda_2} \subset \omega_\Psi$, see Figure~1.
\begin{figure}[h]
\includegraphics[scale=2]{pag29copia}
\caption{A picture of the situation in the argument by contradiction. We show that in fact $G_\infty=\emptyset$.}\label{figp29copia}
\end{figure}
Therefore, by combining \eqref{eq: summary2} and \eqref{eq: summary3} we find
\begin{align}\label{eq: summary4}
\int_{\omega \times \lbrace 0 \rbrace} \hspace{-0.2cm} \mathcal{H}^0\Big( \big(\partial^* \Omega_h \cap \Omega \cap (G_\infty)^0 \big)^{e_d}_y \Big) \, \mathrm{d} \mathcal{H}^{d-1}(y) & = \mathcal{H}^{d-1}(\omega) - \mathcal{H}^{d-1}(\omega_{\Lambda_2}) \ge \mathcal{H}^{d-1}(\omega) - \mathcal{H}^{d-1}(\omega_\Psi)\,.
\end{align}
Now \eqref{eq: summary5}, \eqref{eq: summary1}, \eqref{eq: summary4}, and the fact that $\partial^*G_\infty \cap (G_\infty)^0 = \emptyset$ yield
\begin{align}\label{eq: summary7}
\int_{\omega \times \lbrace 0 \rbrace} \mathcal{H}^{0}(\Gamma^{e_d}_y) \, \mathrm{d} \mathcal{H}^{d-1} & \ge \int_{\omega \times \lbrace 0 \rbrace}\Big(\mathcal{H}^0\big( (\partial^* G_\infty\cap \Omega )^{e_d}_y \big) + \mathcal{H}^0\Big( \big(\partial^* \Omega_h \cap \Omega \cap (G_\infty)^0 \big)^{e_d}_y \Big) \Big) \, \mathrm{d} \mathcal{H}^{d-1} \notag\\
& \ge \mathcal{H}^{d-1}(\omega) + \mathcal{H}^{d-1}(\omega_\Psi)\,.
\end{align}
Since $\Gamma_n$ are graphs of the functions $h_n \colon \omega\to [0,M]$, we get by the area formula and \eqref{eq: aniso-lsc} that
\betagin{align*}
\int_{\omega \times \lbrace 0 \rbrace} \mathcal{H}^{0}(\Gamma^{e_d}_y) \, \mathrm{d} \mathcal{H}^{d-1}(y) & = \int_\Gamma |\nu_\Gamma \cdot e_d| \mathrm{d} \mathcal{H}^{d-1} \le \liminf_{n \to \infty} \int_{\Gamma_n} |\nu_{\Gamma_n} \cdot e_d| \, \mathrm{d} \hd = \mathcal{H}^{d-1}(\omega) \,.
\end{align*}
This along with \eqref{eq: summary7} shows that $\mathcal{H}^{d-1}(\omega_\Psi) = 0$. By recalling that $\omega_\Psi \times \lbrace 0 \rbrace$ is the orthogonal projection of $\partial^* G_\infty \cap (\omega \times \mathbb{R}) = \Psi$ onto $\mathbb{R}^{d-1} \times \lbrace 0 \rbrace$, we conclude that $G_\infty = \emptyset$.
This completes the proof of the lower inequality in Theorem~\ref{thm:relG}. \nopagebreak\hspace*{\fill}$\Box$
\paragraph*{\textbf{The upper inequality.}} \label{page:upperineqbeg} To obtain the upper inequality, it clearly suffices to prove the following result.
\begin{proposition}\label{prop: enough}
Suppose that $f \ge 0$ is convex and satisfies \eqref{eq: growth conditions}. Consider $(u,h)$ with $ u = u \chi_{\Omega_h} \in~GSBD^p(\Omega)$, $u=u_0$ in $\omega{\times}(-1,0)$, and $h \in BV(\omega; [0,M] )$. Then, there exists a sequence $(u_n, h_n)_n$ with $h_n \in C^1(\omega) \cap BV(\omega; [0,M] )$, $ u_n|_{\Omega_{h_n}} \in W^{1,p}(\Omega_{h_n}; \mathbb{R}^d )$, $u_n=0$ in $\Omega \setminus \Omega_{h_n}$, and $u_n=u_0$ in $\omega{\times}(-1,0)$ such that $u_n \to u$ in $L^0(\Omega;\mathbb{R}^d) $, $h_n \to h$ in $L^1(\omega)$, and
\begin{subequations}\label{eq: sub}
\begin{align}
\limsup_{n\to \infty} \int_{\Omega_{h_n}} f(e( u_n )) \, \, \mathrm{d} x &\leq \int_{\Omega_h} f(e(u)) \, \, \mathrm{d} x \,,\label{sub1}\\
\limsup\nolimits_{n\to \infty} \mathcal{H}^{d-1}(\partial \Omega_{h_n} \cap \Omega) & \leq \mathcal{H}^{d-1}(\partial^* \Omega_h \cap \Omega) + 2 \mathcal{H}^{d-1}(J_u' \cap \Omega_h^1)\,.\label{sub2}
\end{align}
\end{subequations}
\end{proposition}
In particular, it is not restrictive to assume that $f \ge 0$: otherwise we consider $\tilde{f} := f + c_2 \ge 0$, which changes the value of the elastic energy by the term $c_2\mathcal{L}^d(\Omega_h)$, and this term is continuous with respect to $L^1(\omega)$ convergence of $h$. Moreover, the domains of integration $\Omega_{h_n}$ and $\Omega_h$ can be replaced by $\Omega_{h_n}^+$ and $\Omega_h^+$, respectively, since all functions coincide with $u_0$ on $\omega \times (-1,0)$.
\begin{remark}\label{rem:1805192117}
The proof of the proposition will show that we can construct the sequence $(u_n)_n$ also in such a way that $u_n \in L^\infty(\Omega;\mathbb{R}^d)$ holds for all $n \in \mathbb{N}$. This, however, comes at the expense of the fact that the boundary data is only satisfied approximately, i.e., $u_n|_{\omega \times (-1,0)} \to u_0|_{\omega \times (-1,0)}$ in $W^{1,p}(\omega \times (-1,0);\mathbb{R}^d)$. This slightly different version will be instrumental in Subsection~\ref{sec:phasefield}.
\end{remark}
As a preparation, we first state some auxiliary results. We recall two lemmas from \cite{ChaSol07}. The first is stated in \cite[Lemma~4.3]{ChaSol07}.
\begin{lemma}\label{le:4.3ChaSol}
Let $h \in BV(\omega; [0, +\infty))$, with $\partial^* \Omega_h$ essentially closed, i.e., $\mathcal{H}^{d-1}(\overline{\partial^* \Omega_h} \setminus \partial^* \Omega_h)=0$. Then, for any $\varepsilon>0$, there exists $g \in C^\infty(\omega; [0, +\infty))$ such that $g \leq h$ a.e.\ in $\omega$, $\|g-h\|_{L^1(\omega)} \leq \varepsilon$, and
\begin{equation*}
\bigg| \int_\omega \sqrt{1+|\nabla g|^2}\, \mathrm{d} x' - \mathcal{H}^{d-1}(\partial^* \Omega_h \cap \Omega) \bigg| \leq \varepsilon\,.
\end{equation*}
\end{lemma}
\begin{lemma}\label{lemma: graph approx}
Let $h \in BV(\omega; [0,M])$ and let $\Sigma \subset \mathbb{R}^{d}$ with $\mathcal{H}^{d-1}(\Sigma) < +\infty$ be vertical in the sense that $x=(x',x_d) \in \Sigma$ implies $(x',x_d + t) \in \Sigma$ as long as $(x',x_d + t) \in \Omega_h^1$. Then, for each $\varepsilon >0$ there exists $g \in C^\infty(\omega; [0,M])$ such that
\begin{subequations}
\begin{align}
\|g-h\|_{L^1(\omega)} &\leq \varepsilon\,,\label{2004192228}\\
\mathcal{H}^{d-1}\big((\partial^* \Omega_h \cup \Sigma) \cap \Omega_{g}\big) & \leq \varepsilon\,,\label{2004192229}\\
\Big| \int_\omega \sqrt{1+|\nabla {g}|^2} \, \mathrm{d} x' - \big( \mathcal{H}^{d-1}(\partial^*\Omega_h \cap \Omega) + 2 \mathcal{H}^{d-1}(\Sigma) \big) \Big|& \leq \varepsilon\,.\label{2004192231}
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
We refer to the first step in the proof of \cite[Proposition~4.1]{ChaSol07}, in particular \cite[Equation (12)-(13)]{ChaSol07}. We point out that the case of possibly unbounded graphs has been treated there, i.e., $h \in BV(\omega; [0,+\infty))$. The proof shows that the upper bound on $h$ is preserved and we indeed obtain $g \in C^\infty(\omega; [0,M])$ if $h \in BV(\omega; [0,M])$.
\end{proof}
\color{black} Note that Lemma \ref{lemma: graph approx} states that $\partial^* \Omega_h \cup \Sigma$ can be approximated from below by a smooth graph $g$. However, this only holds \emph{up to a small portion}, see \eqref{2004192229}. Therefore, two additional approximation techniques are needed, one for graphs and one for $GSBD$ functions. \color{black} To this end, we introduce some notation which will also be needed for the proof of Proposition \ref{prop: enough}. Let $k \in \mathbb{N}$, $k>1$. For any $z\in (2 k^{-1}) \mathbb{Z}^d$, consider the hypercubes
\begin{align}\label{eq: cube-notation}
q_z^k:=z+(-k^{-1},k^{-1})^d\,, \qquad Q_z^k:=z+(-5k^{-1},5k^{-1})^d\,.
\end{align}
Given an open set $U \subset \mathbb{R}^d$, we also define the union of cubes well contained in $U$ by
\betagin{align}\label{eq: well contained}
(U)^k := {\rm int}\Big( \bigcup\nolimits_{z\colon \, Q^k_z \subset U} \overline{q^k_z}\Big)\,.
\end{align}
(Here, ${\rm int}(\cdot)$ denotes the interior. This definition is unrelated to the notation $E^s$ for the set of points with density $s \in [0,1]$.)
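To make the cube notation concrete, the following is a minimal Python sketch (our own illustration, not part of the paper; representing $U$ by a membership predicate and testing containment of $Q_z^k$ via its corners are simplifying assumptions, exact only for convex $U$) of the construction in \eqref{eq: cube-notation}--\eqref{eq: well contained}: it collects the centers $z \in (2k^{-1})\mathbb{Z}^d$ whose enlarged cube $Q_z^k$ fits inside $U$, i.e.\ the cubes $q_z^k$ making up $(U)^k$.

```python
import itertools

def inner_cubes(contains, k, d, search_radius=2.0):
    """Centers z in (2/k)Z^d whose enlarged cube Q_z^k = z + (-5/k, 5/k)^d
    lies inside the open set U described by the predicate `contains`.
    Containment is tested on the finitely many corners of Q_z^k, which is
    exact for convex U and a heuristic otherwise."""
    step = 2.0 / k
    n = int(search_radius / step) + 1
    centers = []
    for idx in itertools.product(range(-n, n + 1), repeat=d):
        z = [i * step for i in idx]
        corners = itertools.product(*[(zi - 5.0 / k, zi + 5.0 / k) for zi in z])
        if all(contains(c) for c in corners):
            centers.append(tuple(z))
    return centers

# Example: U = open square (-1,1)^2. Cubes near the boundary are discarded
# because their enlarged neighborhood Q_z^k pokes out of U.
square = lambda p: all(abs(c) < 1.0 for c in p)
zs = inner_cubes(square, k=10, d=2)
```

For $k=10$ only the $5 \times 5$ block of centers with $|z_i| \le 0.4$ survives, illustrating the safety margin of width $\sim 5k^{-1}$ built into $(U)^k$.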
\color{black} We now address the two approximation results. We start with an approximation of graphs from which a union of hypercubes has been removed. Recall $\Omega^+ = \Omega \cap \lbrace x_d>0 \rbrace$.
\begin{lemma}\label{lemma: new-lemma}
Let $g \in C^\infty(\omega;[0,M])$ and let $V_k \subset \Omega^+$ be a union of cubes $Q_z^k$, $z \in \mathcal{Z} \subset (2k^{-1}) \mathbb{Z}^d$, intersected with $(\Omega_g)^k$. Suppose that $V_k$ is vertical in the sense that $(x',x_d) \in V_k$ implies $(x',x_d + t) \in V_k$ for $t \ge 0$ as long as $(x',x_d + t) \in (\Omega_g)^k$. Then, for $k \in \mathbb{N}$ sufficiently large, we find a function $h_k\in C^\infty(\omega;[0,M])$ such that
\begin{subequations} \label{eq: volume-a-b}
\begin{equation}\label{eq: volume2a}
\mathcal{L}^d( \Omega_g \, \triangle \, \Omega_{{h}_k} ) \le \mathcal{L}^d(\Omega_g \cap V_k) + C_{g,\omega}k^{-1}\,,
\end{equation}
\begin{equation}\label{eq: volume2b}
\mathcal{H}^{d-1}(\partial \Omega_{{h}_k} \cap \Omega) \le \mathcal{H}^{d-1}(\partial \Omega_g \cap \Omega) + \mathcal{H}^{d-1}\big(\partial V_k \cap (\Omega_{g})^k\big)+ C_{g,\omega} k^{-1}\,,
\end{equation}
\end{subequations}
where $C_{g,\omega}>0$ depends on $d$, $g$, and $\omega$, but is independent of $k$. Moreover, there are constants $\tau_g,\tau_*>0$ only depending on $d$, $g$, and $\omega$ such that
\begin{align}\label{eq: inclusion}
x =(x',x_d) \in \Omega_{h_k} \ \ \ \Rightarrow \ \ \ \big( (1-\tau_*/k)\, x', (1-\tau_*/k) \, x_d - 6\tau_g/k \big) \in (\Omega_g)^k \setminus V_k\,.
\end{align}
\end{lemma}
We point out that \eqref{eq: inclusion} means that $h_k$ lies below the boundary of $(\Omega_g)^k \setminus V_k$, up to a slight translation and dilation. We suggest omitting the proof of the lemma on a first reading.
\begin{proof}
The proof relies on a slight lifting and dilation of the set $(\Omega_{g})^k\setminus V_k$ along with an application of Lemma \ref{le:4.3ChaSol}. Recall definition \eqref{eq: well contained}, and define $\omega_k \subset \omega \subset \mathbb{R}^{d-1}$ such that $(\omega \times \mathbb{R})^k = \omega_k \times \mathbb{R}$. Since $\omega$ is uniformly star-shaped with respect to the origin, see \eqref{eq: star-shaped}, there exists a universal constant $\tau_\omega>0$ such that
\begin{align}\label{eq: stretching}
\omega_k \supset (1-\tau k^{-1}) \, \omega \ \ \ \text{ for $\tau \ge \tau_\omega$}\,.
\end{align}
Define $\tau_g := 1+ \sqrt{d} \max_{\omega} |\nabla {g}|$. For $k$ sufficiently large, it is elementary to check that
\begin{equation}\label{2104191048}
\Omega_{g} \cap (\omega_k \times (0,\infty)) \subset \big( (\Omega_{g})^k + 6 \,\tau_g k^{-1} e_d\big)\,.
\end{equation}
We now ``lift'' the set $(\Omega_g)^k \setminus V_k$ upwards: define the function
\begin{equation}\label{eq: htilde-def}
g'_k(x'):= \sup\big\{ x_d < g(x') \colon (x', x_d-6\,\tau_g/k) \in (\Omega_{g})^k\setminus V_k \big\} \qquad\text{for }x' \in \omega_k\,.
\end{equation}
We observe that $g_k' \in BV(\omega_k;[0,M])$. Define $(\Omega)^k$ as in \eqref{eq: well contained} and, similar to \eqref{eq: graphi}, we let $\Omega_{g_k'} = \lbrace x\in \omega_k \times (-1, M+1)\colon \, -1 < x_d < g_k'(x') \rbrace$. Since $V_k$ is vertical, we note that $\partial \Omega_{g_k'}\cap (\Omega)^k$ consists of two parts: one part is contained in the smooth graph of $g$ and the rest in the boundary of $V_k + 6\tau_g k^{-1} e_d$. In particular, by \eqref{2104191048} we get
\begin{equation*}
\partial \Omega_{g'_k} \cap (\Omega)^k \subset (\partial \Omega_{g}\cap\Omega) \cup \big(\partial \Omega_{g'_k} \cap (\Omega)^k\cap \Omega_{g} \big) \subset (\partial \Omega_{g}\cap\Omega) \cup \big(( \partial V_k \cap (\Omega_{g})^k) + 6\,\tau_g k^{-1}\, e_d \big)\,.
\end{equation*}
Then, we deduce
\begin{equation}\label{2104192337}
\mathcal{H}^{d-1}(\partial \Omega_{g'_k} \cap (\Omega)^k ) \le \mathcal{H}^{d-1}(\partial \Omega_g \cap \Omega) + \mathcal{H}^{d-1}\big(\partial V_k \cap (\Omega_{g})^k\big)\,.
\end{equation}
Since by \eqref{2104191048} and \eqref{eq: htilde-def} there holds $(\Omega_g \setminus \Omega_{g_k'})\cap (\Omega)^k \subset V_k + 6\,\tau_g k^{-1} e_d$, the fact that $V_k$ is vertical implies
\begin{align}\label{eq: htilde}
\mathcal{L}^d\big((\Omega_g \triangle \Omega_{g_k'})\cap (\Omega)^k\big) & \le \mathcal{L}^d\big(\Omega_g \cap (V_k + 6\,\tau_g k^{-1} e_d)\big) \le \mathcal{L}^d(\Omega_g \cap V_k)\,.
\end{align}
As $g_k'$ is only defined on $\omega_k$, we further need a dilation: letting $\tau_* := \tau_\omega \vee (6\tau_g + 6) $ and recalling \eqref{eq: stretching} we define $g_k'' \in BV(\omega;[0,M])$ by
\begin{align}\label{eq: g''}
g_k''(x') = g_k'( (1-\tau_* k^{-1}) \, x') \ \ \text{ for } \ \ x' \in \omega\,.
\end{align}
(The particular choice of $\tau_*$ will become clear in the proof of \eqref{eq: inclusion} below.) By \eqref{eq: htilde} we get
\begin{subequations} \label{eq: volume3}
\begin{equation}
\mathcal{L}^d( \Omega_g \, \triangle\, \Omega_{g_k''} ) \le \mathcal{L}^d(\Omega_g \cap V_k) + C_{g,\omega}k^{-1}\,,
\end{equation}
\begin{equation}
\big| \mathcal{H}^{d-1}(\partial \Omega_{{g}''_k} \cap\Omega) - \mathcal{H}^{d-1}(\partial \Omega_{g'_k} \cap (\Omega)^k ) \big| \leq C_{g,\omega} k^{-1}\,,
\end{equation}
\end{subequations}
where the constant $C_{g,\omega}$ depends only on $d$, $g$, and $\omega$. We also notice that $\mathcal{H}^{d-1} \big( \overline{\partial^* \Omega_{g_k''}} \setminus \partial^* \Omega_{g_k''} \big) = 0$. Then by Lemma \ref{le:4.3ChaSol} applied for $\varepsilon = 1/k$ we find a function $h_k\in C^\infty(\omega;[0,M])$ with $h_k \le g''_k$ on $\omega$ such that
\begin{align}\label{eq: htilde2}
\|g_k''-h_k\|_{L^1(\omega)} &\leq k^{-1}\,, \ \ \ \ \ \ \ \ \ \ \big| \mathcal{H}^{d-1}(\partial {\Omega_{h_k}} \cap \Omega ) - \mathcal{H}^{d-1}(\partial^* {\Omega_{g_k''}} \cap \Omega ) \big| \le k^{-1} \,.
\end{align}
By passing to a larger constant $C_{\omega,g}$ and by using \eqref{2104192337}, \eqref{eq: volume3}, and \eqref{eq: htilde2}, we get \eqref{eq: volume-a-b}. We finally show \eqref{eq: inclusion}.
In view of the definitions of $g_k'$ and $g_k''$ in \eqref{eq: htilde-def} and \eqref{eq: g''}, respectively, and the fact that $h_k \le g_k''$, we get
$$ x =(x',x_d) \in \Omega_{h_k} \ \ \ \Rightarrow \ \ \ \big( (1-\tau_*/k)\, x', \, x_d - 6\tau_g/k \big) \in (\Omega_g)^k \setminus V_k\,. $$
Recall $\tau_* = \tau_\omega \vee (6\tau_g + 6) $ and observe that $-(1-\tau_*/k) - 6\tau_g/k \ge - 1 + 6 /k$. Also note that $(\Omega_g)^k \supset (\Omega)^k \cap (\omega \times (-1+6/k,0))$, cf.\ \eqref{eq: well contained}. This along with the verticality of $V_k\subset \Omega^+$ shows
$$
x =(x',x_d) \in \Omega_{h_k} \ \ \ \Rightarrow \ \ \ \big( (1-\tau_*/k)\, x', (1-\tau_*/k) \, x_d - 6\tau_g/k \big) \in (\Omega_g)^k \setminus V_k\,.
$$
This concludes the proof.
\end{proof}
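As an aside, the lifting \eqref{eq: htilde-def} has a simple discrete analogue, which may help to visualize the construction. The following Python sketch is our own illustration, not the paper's notation: columns carry integer height levels, `allowed[xp]` stands in for the vertical slice of $(\Omega_g)^k \setminus V_k$ over $x'$, and `shift` plays the role of $6\tau_g/k$.

```python
def lifted_graph(g, allowed, shift):
    """Discrete analogue of the lifted function g'_k: for each column xp,
    return the largest level x_d below the graph g(xp) whose downward shift
    by `shift` levels lies in the admissible set `allowed[xp]`, or None if
    no such level exists (the sup over an empty set)."""
    out = {}
    for xp, column in allowed.items():
        candidates = [xd for xd in range(g(xp)) if xd - shift in column]
        out[xp] = max(candidates) if candidates else None
    return out

# One column of graph height 10; admissible levels 0..3 (everything above
# was removed as part of a vertical set V_k); shift by one level. The lifted
# profile sits one level above the admissible region, mirroring the upward
# translation by 6*tau_g/k in the continuous construction.
profile = lifted_graph(lambda xp: 10, {"c": {0, 1, 2, 3}}, 1)
```

In this toy column the lifted height is level 4: the admissible levels shifted upwards by one, capped by the graph.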
\color{black}
\color{black} Next, we present an approximation technique for $GSBD$ functions \color{black} based on \cite{CC17}. In the following, $\psi\colon [0,\infty) \to [0,\infty) $ denotes the function $\psi(t) = t \wedge 1$.
\begin{lemma}\label{lemma: vito approx}
Let $U \subset \mathbb{R}^d$ be open, bounded, $p >1$, and $k \in \mathbb{N}$, $\theta\in (0,1)$ with $k^{-1}$, $\theta$ small enough.
Let $\mathcal{F} \subset GSBD^p(U)$ be such that $\psi(|v|) + |e(v)|^p$ is equiintegrable for $v \in \mathcal{F}$.
Suppose that for $v \in \mathcal{F}$ there exists a
set of finite perimeter
$V \subset U$ such that
for each $q^k_z$, $z \in (2k^{-1})\mathbb{Z}^d$, intersecting $(U)^k \setminus V$, there holds that
\begin{equation}\label{eq: cond1}
\mathcal{H}^{d-1}\big(Q^k_z \cap J_v\big) \leq \theta k^{1-d}\,.
\end{equation}
Then there exists a function $w_k \in W^{1,\infty}( (U)^k \setminus V; \mathbb{R}^d)$ such that
\begin{subequations}\label{rough-dens}
\begin{equation}\label{rough-dens-1}
\int_{(U)^k \setminus V} \psi(|w_k-v| )\, \mathrm{d} x \leq R_k\,,
\end{equation}
\begin{equation}\label{rough-dens-2}
\int_{(U)^k \setminus V} |e(w_k)|^p \, \mathrm{d} x \leq \int_U |e( v)|^p \, \mathrm{d} x + R_k\,,
\end{equation}
\end{subequations}
where $(R_k)_k$ is a sequence independent of $v \in \mathcal{F}$ with $R_k \to 0$ as $k \to \infty$.
\end{lemma}
The lemma is essentially a consequence of the rough estimate proved in \cite[Theorem 3.1]{CC17}.
For the convenience of the reader, we include a short proof
in Appendix~\ref{sec:App}.
\color{black} After having collected auxiliary lemmas, we now give a short outline of the proof. \color{black} Recall that $\Omega = \omega \times (-1,M+1)$ for given $M>0$. Consider a pair $(u,h)$ as in Proposition~\ref{prop: enough}. We work with $u \chi_{\Omega_h} \in GSBD^p(\Omega)$ in the following, without specifying each time that $u=0$ in the complement of $\Omega_h$. Recall $J_u'$ defined in \eqref{eq: Ju'}, and, as before, set $\Sigma:= J_u' \cap \Omega_h^1$. This implies
$
J_u \subset (\partial^* \Omega_h \cap \Omega) \cup \Sigma$. Since $\Sigma$ is vertical, we can approximate $(\partial^*\Omega_h \cap \Omega ) \cup \Sigma$ by the graph of a smooth function $g\in C^\infty(\omega;[0,M])$ in the sense of Lemma \ref{lemma: graph approx}.
Our goal is to construct a regular approximation of $u$ in (most of) $\Omega_{g}$ by means of Lemma~\ref{lemma: vito approx}. The main step is to identify suitable exceptional sets $(V_k)_k$ such that for the cubes outside of $(V_k)_k$ we can verify \eqref{eq: cond1}. In this context, we emphasize that it is crucial that each $V_k$ is vertical \color{black} since this allows us to apply Lemma \ref{lemma: new-lemma} and to approximate the boundary of $(\Omega_g)^k \setminus V_k$ from below by a smooth graph. \color{black} Before we start with the actual proof of Proposition~\ref{prop: enough}, \color{black} we address the construction of $(V_k)_k$. To this end, \color{black} we introduce the notion of good and bad nodes, and collect some important properties.
Define the set of nodes
\begin{align}\label{eq: nodes}
\mathcal{N}_k := \lbrace z \in (2 k^{-1}) \mathbb{Z}^d\colon \, \overline{q^k_z} \subset \Omega_g \rbrace\,.
\end{align}
Let us introduce the families of \emph{good nodes} and \emph{bad nodes} at level $k$.
Let $\rho_1,\rho_2>0$ to be specified below. By $\mathcal{G}_k$ we denote the set of good nodes $z \in \mathcal{N}_k$, namely those satisfying
\begin{equation}\label{0306191212-1}
\mathcal{H}^{d-1}\big(\overline{q^k_z} \cap (\partial^* \Omega_h \cup \Sigma) \big) \leq \rho_1 k^{1-d}
\end{equation}
or having the property that there exists a set of finite perimeter $F^k_z \subset q^k_z$,
such that
\begin{equation}\label{0306191212-2}
q^k_z \cap \Omega^0_h \subset (F^k_z)^1, \ \ \ \ \ \ \mathcal{H}^{d-1}\big(\partial^* F^k_z \big) \le \rho_2 k^{1-d}, \ \ \ \ \ \mathcal{H}^{d-1}\big(\overline{q^k_z} \cap \Sigma \cap (F^k_z)^0 \big) \leq \rho_2 k^{1-d}\,.
\end{equation}
We define the set of bad nodes by $\mathcal{B}_k = \mathcal{N}_k \setminus \mathcal{G}_k$. Moreover, let
\begin{equation}\label{1906191702}
{\mathcal{G}}_k^* :=\{ z \in \mathcal{G}_k \colon \eqref{0306191212-1} \text{ does not hold} \}\,.
\end{equation}
For an illustration of the cubes in $\mathcal{G}_k$ we refer to Figure~2.
\begin{figure}[h]\label{figp32}
\hspace{-3em}
\begin{minipage}[c]{0.5\linewidth}
\hspace{8em}
\includegraphics[scale=2]{pag32anew}
\end{minipage}
\begin{minipage}[c]{0.5\linewidth}
\includegraphics[scale=2]{pag32b}
\end{minipage}
\caption{A simplified representation of nodes in $\mathcal{G}_k$, for $d=2$ and with $\Sigma=\emptyset$. The set ${\mathcal{G}}_k \setminus {\mathcal{G}}_k^*$ corresponds to the cubes containing only a small portion of $\partial^* \Omega_h \cup \Sigma$, see the first picture. For the cubes in ${\mathcal{G}}_k^*$, the portion of $\partial^* \Omega_h$ is contained in a set $F^k_z$ with small boundary, see the second picture. Intuitively, this along with the fact that \eqref{0306191212-1} does not hold means that $\partial^* \Omega_h$ is highly oscillatory in such cubes.}
\end{figure}
We partition the set of good nodes $\mathcal{G}_k$ into
\begin{equation}\label{0406191145}
\mathcal{G}_k^1 = \big\{ z \in \mathcal{G}_k\colon\, \mathcal{L}^d({q_z^k} \cap \Omega_h^{0}) \le \mathcal{L}^d({q_z^k} \cap \Omega_h^{1}) \big\}, \ \ \ \ \ \ \ \mathcal{G}_k^2 =\mathcal{G}_k \setminus \mathcal{G}_k^1.
\end{equation}
We introduce the terminology ``$q^k_{z'}$ is above $q^k_z$'' meaning that $q^k_{z'}$ and $q^k_z$ have the same vertical projection onto $\mathbb{R}^{d-1} {\times} \{0\}$ and $z'_d > z_d$.
We remark that bad nodes were defined differently in \cite{ChaSol07}, namely as the cubes having an edge which intersects $\partial^*\Omega_h \cap \Sigma$. That definition is considerably simpler than ours. It may, however, fail in some pathological situations since, in that case, the union of the cubes centered at bad nodes does not necessarily form a ``vertical set''.
\begin{lemma}[Properties of good and bad nodes]\label{lemma: bad/good}
Given $\Omega_h$ and $\Sigma$, define $\Omega_g$ as in Lemma \ref{lemma: graph approx} for $\varepsilon>0$ sufficiently small.
We can choose $0<\rho_1<\rho_2$ satisfying $\rho_1,\rho_2 \le \frac{1}{2}5^{-d} \theta$ such that the following properties hold for the good and bad nodes defined in \eqref{eq: nodes}--\eqref{0406191145}:
\begin{align*}
{\rm (i)} & \ \ \text{ if $q^k_{z'}$ is above $q^k_z$ and $ z \in \mathcal{B}_k \cup \mathcal{G}^2_k$, then $ z' \in \mathcal{B}_k \cup \mathcal{G}^2_k$.} \\
{\rm (ii)} & \ \ \text{ if $z,z' \in \mathcal{G}_k$ with $\mathcal{H}^{d-1}(\partial {q_z^k} \cap \partial q^k_{z'})>0$, then $z,z' \in \mathcal{G}^1_k$ or $z,z' \in \mathcal{G}^2_k$.}\\
{\rm (iii)} & \ \ \# \mathcal{B}_k + \# \mathcal{G}_k^* \le 2\rho_1^{-1} k^{d-1} \varepsilon.\\
{\rm (iv)} & \ \ \sum\nolimits_{z \in \mathcal{G}_k^2}\mathcal{L}^d(\Omega_h \cap q_z^k) \le \varepsilon.
\end{align*}
\end{lemma}
We suggest omitting the proof of the lemma on a first reading and proceeding directly to the proof of Proposition \ref{prop: enough}.
\begin{proof}
By $c_\pi \ge 1$ we denote the maximum of the constants appearing in the isoperimetric inequality and the relative isoperimetric inequality on a cube in dimension $d$. We will show the statement for $\varepsilon$ and $0 < \rho_2 < 1$ sufficiently small satisfying $\rho_2 \le \frac{1}{2} 5^{-d} \theta$, and for $\rho_1 = ((3d+1)c_\pi)^{-1} \rho_2$.
\emph{Preparations.} First, we observe that for $\rho_2$ sufficiently small we have that $\mathcal{G}_k^* \subset \mathcal{G}_k^1$. Indeed, since for $z \in \mathcal{G}_k^*$ property \eqref{0306191212-2} holds, the isoperimetric inequality implies
\begin{align}\label{eq: G1G2}
\mathcal{L}^d({q_z^k} \cap \Omega_h^{0}) \le \mathcal{L}^d( F^k_z) \le c_\pi \big( \mathcal{H}^{d-1}(\partial^* F^k_z ) \big)^{d/(d{-}1)} \le c_\pi \rho_2^{d/(d{-}1)} k^{-d}.
\end{align}
Then, for $\rho_2$ sufficiently small we get $\mathcal{L}^d({q_z^k} \cap \Omega_h^{0})<\frac{1}{2} \mathcal{L}^d({q_z^k})$, and thus $z \in \mathcal{G}_k^1$, see \eqref{0406191145}.
As a further preparation, we show that for each $z \in \mathcal{G}_k^1$ there exists a set of finite perimeter $H^k_z$ with $\Omega_h^0 \cap q^k_z \subset H^k_z \subset q^k_z$ such that
\begin{align}\label{eq: setH}
\mathcal{L}^d( H^k_z ) \le c_\pi \rho_2^{d/(d{-}1)} k^{-d}, \ \ \ \mathcal{H}^{d-1}( \partial^* H^k_z) \le \rho_2 k^{1-d}, \ \ \ \ \mathcal{H}^{d-1}\big(\overline{q^k_z} \cap \Sigma \cap (H^k_z)^0 \big) \leq \rho_2 k^{1-d}\,.
\end{align}
Indeed, if \eqref{0306191212-2} holds, this follows directly from \eqref{0306191212-2} and \eqref{eq: G1G2} for $H^k_z := F^k_z$.
Now suppose that $z \in \mathcal{G}_k^1$ satisfies \eqref{0306191212-1}. In this case, we define $H^k_z := \Omega_h^0 \cap q_z^k$. To control the volume, we use the relative isoperimetric inequality on $q^k_z$ to find by \eqref{0306191212-1}
\begin{align}\label{eq: vol part}
\mathcal{L}^d( H^k_z ) = \mathcal{L}^d( \Omega^0_h \cap q^k_z ) \wedge \mathcal{L}^d( \Omega^1_h \cap q^k_z ) \le c_\pi \big(\mathcal{H}^{d-1}(\partial^* \Omega_h \cap q^k_z)\big)^{d/(d{-}1)} \le c_\pi \rho_1^{d/(d{-}1)} k^{-d}\,,
\end{align}
i.e., the first part of \eqref{eq: setH} holds since $\rho_1 = ((3d+1)c_\pi)^{-1} \rho_2$. To obtain the second estimate in \eqref{eq: setH}, the essential step is to control $\mathcal{H}^{d-1}( \partial q^k_z \cap \Omega_h^0)$. For simplicity, we only estimate $\mathcal{H}^{d-1}( \partial_d q^k_z \cap \Omega_h^0)$ where $ \partial_d q^k_z$ denotes the two faces of $\partial q^k_z$ whose normal vector is parallel to $e_d$. The other faces can be treated in a similar fashion. Write $z=(z',z_d)$ and define $\omega_z = z' + (-k^{-1},k^{-1})^{d-1}$. By $\omega_* \subset \omega_z$ we denote the largest measurable set such that the cylindrical set $(\omega_* \times \mathbb{R}) \cap q^k_z$ is contained in $\Omega^0_h$. Then by the area formula (cf.\ e.g.\ \cite[(12.4) in Section~12]{Sim84}) and by recalling notation \eqref{eq: vxiy2} we get
\begin{align}\label{eq: area-est}
\mathcal{H}^{d-1}( \partial_d q^k_z \cap \Omega_h^0) &\le 2 \mathcal{H}^{d-1}(\omega_*) + 2 \int_{ (\omega_z \setminus \omega_*) \times \lbrace 0 \rbrace} \mathcal{H}^0\big( (\partial^* \Omega_h)^{e_d}_y \big) \, \mathrm{d} \mathcal{H}^{d-1}(y)\notag \\
& \le 2 \mathcal{H}^{d-1}(\omega_*) + 2\int_{\partial^* \Omega_h \cap q^k_z} |\nu_{\Omega_h} \cdot e_d| \, \mathrm{d} \hd \notag \\
& \le 2 \mathcal{H}^{d-1}(\omega_*) +2\mathcal{H}^{d-1}(\partial^* \Omega_h \cap q^k_z)\,.
\end{align}
As $(\omega_* \times \mathbb{R}) \cap q^k_z \subset \Omega^0_h \cap q^k_z$ and $\mathcal{L}^d(\Omega_h^0 \cap q^k_z) \le c_\pi \rho_1^{d/(d{-}1)} k^{-d}$ by \eqref{eq: vol part}, we deduce $2k^{-1}\mathcal{H}^{d-1}(\omega_*) \le c_\pi \rho_1^{d/(d{-}1)} k^{-d}$. This along with \eqref{0306191212-1} and \eqref{eq: area-est} yields
$$\mathcal{H}^{d-1}( \partial_d q^k_z \cap \Omega_h^0) \le c_\pi \rho_1^{d/(d{-}1)} k^{1-d} + 2\rho_1 k^{1-d} \le 3c_\pi\rho_1 k^{1-d}\,.$$
By repeating this argument for the other faces and by recalling $\mathcal{H}^{d-1}(\partial^* \Omega_h \cap q^k_z) \le \rho_1 k^{1-d}$, we conclude that $H^k_z = \Omega_h^0 \cap q_z^k$ satisfies
$$\mathcal{H}^{d-1}(\partial^* H^k_z) = \mathcal{H}^{d-1}(\partial^* \Omega_h \cap q^k_z) + \mathcal{H}^{d-1}(\partial q^k_z \cap \Omega_h^0) \le \rho_1 k^{1-d} + d \cdot 3c_\pi\rho_1 k^{1-d}\le\rho_2k^{1-d}\,, $$
where the last step follows from $\rho_1 = ((3d+1)c_\pi)^{-1} \rho_2$. This concludes the second part of \eqref{eq: setH}. The third part follows from \eqref{0306191212-1} and $\rho_1 \le\rho_2$. We are now in a position to prove the statement.
\emph{Proof of (i).} We need to show that for $z' \in \mathcal{G}_k^1$ there holds $z \in \mathcal{G}_k^1$ for all $z \in \mathcal{N}_k$ such that $q^k_{z'}$ is above $q^k_{z}$. Fix such cubes $q_z^k$ and $q^k_{z'}$.
Consider the set $H^k_{z'}$ with $\Omega_h^0 \cap q^k_{z'} \subset H^k_{z'} \subset q^k_{z'}$ introduced in \eqref{eq: setH}, and define $F^k_{z} := H^k_{z'} - z' + z$. Since $\Omega_h$ is a generalized graph, we get $(F^k_z)^1 \supset \Omega_h^0 \cap q^k_z$.
Moreover, since $\Sigma = J_u' \cap \Omega_h^1 $ is vertical in $\Omega_h$, see \eqref{eq: Ju'}, and $(H^k_{z'})^0 \subset \Omega_h^1 \cap q^k_{z'}$, we have
\begin{equation*}
\Sigma \cap (F^k_z)^0 = \Sigma \cap \Omega_h^1 \cap (F^k_z)^0 \subset (\Sigma \cap \Omega_h^1 \cap (H^k_{z'})^0) + z-z'= (\Sigma \cap (H^k_{z'})^0)+z-z'\,.
\end{equation*}
By \eqref{eq: setH} we thus get $\mathcal{H}^{d-1}( \overline{q^k_z} \cap \Sigma \cap (F^k_z)^0) \leq \mathcal{H}^{d-1}( \overline{q^k_{z'}} \cap \Sigma \cap (H^k_{z'})^0)\leq \rho_2 k^{1-d}$.
Then the third property in \eqref{0306191212-2} is satisfied for $z$.
Again by \eqref{eq: setH} we note that also the first two properties of \eqref{0306191212-2} hold, and thus $z \in \mathcal{G}_k$. Using once more that $\Omega_h$ is a generalized graph, we get $\mathcal{L}^d(\Omega_h\cap q_z^k) \ge \mathcal{L}^d(\Omega_h\cap q_{z'}^k)$. Then $z' \in \mathcal{G}_k^1$ implies $z \in \mathcal{G}_k^1$, see \eqref{0406191145}. This shows (i).
\emph{Proof of (ii).} Suppose by contradiction that there exist $z\in \mathcal{G}_k^1$ and $z' \in \mathcal{G}_k^2$ satisfying $\mathcal{H}^{d-1}(\partial {q_z^k} \cap \partial q^k_{z'})>0$. Define the set $F := H^k_z \cup (\Omega^0_h \cap q^k_{z'})$ with $H^k_z$ from \eqref{eq: setH}, and observe that $F$ is contained in the cuboid $q^k_* = {\rm int}(\overline{q^k_z} \cup \overline{q^k_{z'}})$. Since $H^k_z \supset \Omega_h^0 \cap q^k_z$, we find
$$\mathcal{H}^{d-1}(q^k_* \cap \partial^*F) \le \mathcal{H}^{d-1}(\partial^* H^k_z) + \mathcal{H}^{d-1}(\partial^* \Omega_h \cap q^{k}_{z'}) \,.$$
As $\mathcal{G}^2_k \cap \mathcal{G}_k^* = \emptyset$, cf.\ \eqref{eq: G1G2}, for $z' \in \mathcal{G}^2_k$ estimate \eqref{0306191212-1} holds true. This along with \eqref{eq: setH} yields
$$\mathcal{H}^{d-1}(q^k_* \cap \partial^*F) \le \rho_2 k^{1-d} + \rho_1 k^{1-d} \le 2\rho_2 k^{1-d} \,.$$
Then, the relative isoperimetric inequality on $q^k_*$ yields
\begin{align}\label{eq: contradiction}
\mathcal{L}^d(q^k_* \cap F) \wedge \mathcal{L}^d(q^k_* \setminus F) \le C_* \big(\mathcal{H}^{d-1}(q^k_* \cap \partial^*F)\big)^{d/(d{-}1)} \le C_* (2\rho_2)^{d/(d{-}1)} k^{-d}
\end{align}
for some universal $C_*>0$. On the other hand, there holds $\mathcal{L}^d(q^k_* \cap F) \ge \mathcal{L}^d(\Omega^0_h \cap q^k_{z'}) \ge \frac{1}{2} (2k^{-1})^d $ and $\mathcal{L}^d(q^k_* \setminus F)\ge \mathcal{L}^d(q^k_z \setminus H^k_z ) \ge (2k^{-1})^d - c_\pi \rho_2^{d/(d{-}1)} k^{-d}$ by \eqref{eq: setH}. However, for $\rho_2$ sufficiently small, this contradicts \eqref{eq: contradiction}. This concludes the proof of (ii).
\emph{Proof of (iii).} Note that $\mathcal{H}^{d-1}$-a.e.\ point in $\mathbb{R}^d$ is contained in at most two different closed cubes $\overline{q^k_z}$, $\overline{q^k_{z'}}$. Therefore, since the cubes with centers in $\mathcal{G}_k^*$ and $\mathcal{B}_k$ do not satisfy \eqref{0306191212-1}, we get
\begin{align*}
\# \mathcal{B}_k + \# \mathcal{G}_k^* \leq \rho_1^{-1} k^{d-1} \hspace{-0.2cm} \sum_{z \in \mathcal{B}_k \cup \mathcal{G}_k^*} \mathcal{H}^{d-1}\big(\overline{q^k_z} \cap (\partial^* \Omega_h \cup \Sigma) \big) \le 2 \rho_1^{-1} k^{d-1} \mathcal{H}^{d-1}\big((\partial^* \Omega_h \cup \Sigma) \cap {\Omega_g} \big)\,,
\end{align*}
where the last step follows from \eqref{eq: nodes}. This along with \eqref{2004192229} shows (iii).
\emph{Proof of (iv).} Recall that each $z \in \mathcal{G}^2_k$ satisfies \eqref{0306191212-1}, cf.\ \eqref{1906191702} and before \eqref{eq: G1G2}. The relative isoperimetric inequality, \eqref{eq: nodes}, and \eqref{0406191145} yield
\begin{align*}
\sum\nolimits_{z \in \mathcal{G}_k^2}\mathcal{L}^d(\Omega_h \cap q_z^k) & = \sum\nolimits_{z \in \mathcal{G}_k^2}\mathcal{L}^d(\Omega^0_h \cap q_z^k) \wedge \mathcal{L}^d(\Omega^1_h \cap q_z^k) \le c_\pi \sum\nolimits_{z \in \mathcal{G}_k^2} \big(\mathcal{H}^{d-1}(\partial^* \Omega_h \cap q^k_z)\big)^{\frac{d}{d-1}} \\
&\le c_\pi \Big(\sum\nolimits_{z \in \mathcal{G}_k^2} \mathcal{H}^{d-1}(\partial^* \Omega_h \cap q^k_z)\Big)^{d/(d{-}1)} \le c_\pi \big( \mathcal{H}^{d-1}(\partial^*\Omega_h \cap \Omega_g) \big)^{d/(d{-}1)}\,.
\end{align*}
By \eqref{2004192229} we conclude for $\varepsilon$ small enough that $\sum\nolimits_{z \in \mathcal{G}_k^2}\mathcal{L}^d(\Omega_h \cap q_z^k) \le c_\pi \varepsilon^{d/(d{-}1)} \le\varepsilon$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop: enough}]
Consider a pair $(u,h)$ and set $\Sigma := J_u' \cap \Omega_h^1$ with $J_u'$ as in \eqref{eq: Ju'}. Given $\varepsilon>0$, we approximate $(\partial^*\Omega_h \cap \Omega ) \cup \Sigma$ by the graph of a smooth function $g\in C^\infty(\omega;[0,M])$ in the sense of Lemma \ref{lemma: graph approx}. Define the good and bad nodes as in \eqref{eq: nodes}--\eqref{0406191145} for $0<\rho_1,\rho_2 \le \frac{1}{2}5^{-d} \theta$ such that the properties in Lemma \ref{lemma: bad/good} hold. We will first define approximating regular graphs (Step 1) and regular functions (Step 2) for fixed $\varepsilon >0$. Finally, we let $\varepsilon \to 0$ and obtain the result by a diagonal argument (Step 3). In the whole proof, $C>0$ will denote a constant depending only on $d$, $p$, $\rho_1$, and $\rho_2$.
\noindent\emph{Step 1: Definition of regular graphs.} Recall \eqref{eq: well contained}. For each $k \in \mathbb{N}$, we define the set
\begin{equation}\label{0406191233}
V_k:= \bigcup\nolimits_{z \in \mathcal{G}_k^2 \cup \mathcal{B}_k} Q^k_z \cap (\Omega_g)^k\,.
\end{equation}
We observe that
\begin{equation}\label{0506192329}
\partial V_k \cap (\Omega_{g})^k \subset \bigcup\nolimits_{z \in \mathcal{B}_k} \partial Q_z^k\,.
\end{equation}
In fact, consider $z \in \mathcal{B}_k \cup \mathcal{G}_k^2$ such that $Q_z^k \cap V_k \neq \emptyset$ and one face of $\partial Q^k_z$ intersects $\partial V_k \cap (\Omega_{g})^k$. In view of \eqref{0406191233}, there exists an adjacent cube $q^k_{z'}$ satisfying $\mathcal{H}^{d-1}(\partial {q_z^k} \cap \partial q^k_{z'})>0$ and ${z'} \in \mathcal{G}_k^1$ since otherwise $\partial Q^k_z \cap \partial V_k \cap (\Omega_{g})^k =\emptyset$. As ${z'} \in \mathcal{G}_k^1$, Lemma \ref{lemma: bad/good}(ii) implies $z \notin \mathcal{G}_k^2$ and therefore $z \in \mathcal{B}_k$. This shows \eqref{0506192329}. A similar argument
yields
\begin{equation}\label{0506192329-2}
V_k= \Big(\bigcup\nolimits_{z \in \mathcal{B}_k} Q^k_z \cup \bigcup\nolimits_{z \in \mathcal{G}_k^2} q^k_z \Big) \cap (\Omega_g)^k
\end{equation}
up to a negligible set. Indeed, since $V_k$ is a union of cubes of sidelength $2k^{-1}$ centered in nodes in $\mathcal{N}_k$, it suffices to prove that for a fixed
$z \in \mathcal{N}_k \cap V_k$
there holds (a) $z \in \mathcal{G}_k^2$ or that (b) there exists $z' \in \mathcal{B}_k$ such that $z \in Q_{z'}^k$.
Arguing by contradiction, if $z \in \mathcal{N}_k \cap V_k$ and neither (a) nor (b) hold, we deduce that $z \in \mathcal{G}_k^1$ and $Q_z^k \cap \mathcal{B}_k=\emptyset$. Then all $z' \in \mathcal{N}_k\cap Q_z^k$ lie in $\mathcal{G}_k$. More precisely, by $z \in \mathcal{G}_k^1$ and Lemma~\ref{lemma: bad/good}(ii) we get that all $z' \in \mathcal{N}_k\cap Q_z^k$ lie in $\mathcal{G}_k^1$. Then $Q_z^k \cap (\mathcal{G}_k^2 \cup \mathcal{B}_k)=\emptyset$, so that $q_z^k \cap V_k = \emptyset$ by \eqref{0406191233}. This contradicts $z \in V_k$.
Let us now estimate the surface and volume of $V_k$. By \eqref{0506192329} and Lemma \ref{lemma: bad/good}(iii) we get
\begin{equation}\label{0506192340}
\mathcal{H}^{d-1}(\partial V_k \cap (\Omega_{g})^k) \leq \sum\nolimits_{z \in \mathcal{B}_k} \mathcal{H}^{d-1}(\partial Q_z^k) \le C k^{1-d} \# \mathcal{B}_k \le C \varepsilon\,,
\end{equation}
where $C$ depends on $\rho_1$. In a similar fashion, by \eqref{0506192329-2} and Lemma \ref{lemma: bad/good}(iii),(iv) we obtain
\begin{align}\label{eq: V volume}
\mathcal{L}^d(V_k\cap \Omega_h) \le Ck^{-d} \, \# \mathcal{B}_k + \sum\nolimits_{z \in \mathcal{G}_k^2} \mathcal{L}^d(q^k_z\cap \Omega_h) \le Ck^{-1}\varepsilon + \varepsilon \le C\varepsilon\,.
\end{align}
Note that $V_k$ is vertical in the sense that $(x',x_d) \in V_k$ implies $(x',x_d + t) \in V_k$ for $t \ge 0$ as long as $(x',x_d + t) \in (\Omega_g)^k$. This follows from Lemma \ref{lemma: bad/good}(i) and \eqref{0406191233}.
We apply Lemma \ref{lemma: new-lemma} for $g$ and $V_k$ to find functions $h_k\in C^\infty(\omega;[0,M])$ satisfying \eqref{eq: volume-a-b} and \eqref{eq: inclusion}. Therefore, by
\eqref{2004192228}, \eqref{eq: volume-a-b}, and \eqref{eq: V volume} we get
\begin{align}\label{eq: volume2a-new}
\mathcal{L}^d( \Omega_h \, \triangle \, \Omega_{{h}_k} ) & \le \mathcal{L}^d( \Omega_g \, \triangle \, \Omega_{{h}_k} ) + \mathcal{L}^d(\Omega_g \triangle \Omega_h) \le \mathcal{L}^d(\Omega_g \cap V_k) + C_{g,\omega}k^{-1} + \mathcal{L}^d(\Omega_g \triangle \Omega_h) \notag \\ & \le \mathcal{L}^d(\Omega_h \cap V_k) + C_{g,\omega}k^{-1} + 2\mathcal{L}^d(\Omega_g \triangle \Omega_h) \le C\varepsilon + C_{g,\omega}k^{-1}\,.
\end{align}
Moreover, by \eqref{2004192231}, \eqref{eq: volume-a-b}, and \eqref{0506192340} we obtain
\begin{align}\label{eq: volume2b-new}
\mathcal{H}^{d-1}(\partial \Omega_{{h}_k} \cap \Omega) & \le \mathcal{H}^{d-1}(\partial \Omega_g \cap \Omega) + \mathcal{H}^{d-1}\big(\partial V_k \cap (\Omega_{g})^k\big)+ C_{g,\omega} k^{-1} \notag \\ & \le \mathcal{H}^{d-1}(\partial^* \Omega_h \cap \Omega) + 2 \, \mathcal{H}^{d-1}(\Sigma) + C\varepsilon + C_{g,\omega} k^{-1}\,.
\end{align}
\noindent\emph{Step 2: Definition of regular functions.}
Recall \eqref{0306191212-2}--\eqref{1906191702}, and observe that Lemma~\ref{lemma: bad/good}(iii) implies
\begin{equation*}
\mathcal{L}^d(F^k)\leq \sum\nolimits_{z \in {\mathcal{G}}_k^*} \mathcal{L}^d(q^k_z) \le Ck^{-d} \# {\mathcal{G}}_k^* \le C\varepsilon\,k^{-1}\,, \quad \text{ where } \ F^k:= \bigcup\nolimits_{z \in {\mathcal{G}}_k^*} (F^k_z)^1\,.
\end{equation*}
We define the functions $v_k \in GSBD^p(\Omega)$ by
\begin{equation}\label{2006191034}
v_k:= u (1-\chi_{F^k }) \, \chi_{\Omega_g}\,.
\end{equation}
Since $u=0$ in $\Omega\setminus \Omega_h$ and $v_k=0$ in $\Omega\setminus \Omega_g$, we get by \eqref{2004192228} and \eqref{2006191034}
\begin{equation}\label{2006190839}
\limsup_{k\to \infty} \mathcal{L}^d(\{ v_k \neq u\}) \leq \limsup_{k\to \infty} \mathcal{L}^d\big(F^k \cup (\Omega_h \setminus \Omega_g) \big) \leq C\varepsilon\,.
\end{equation}
We also obtain
\begin{equation}\label{eq: smalljump}
\mathcal{H}^{d-1}(Q^k_z \cap J_{v_k} ) \leq \theta k^{1-d}
\end{equation}
for each $q^k_z$ intersecting $(\Omega_g)^k \setminus V_k$. To see this, note that the definitions of $\mathcal{N}_k$ in \eqref{eq: nodes} and of $V_k$ in \eqref{0406191233} imply that for each $q^k_z$ with $q^k_z\cap( (\Omega_g)^k \setminus V_k) \neq \emptyset$, each $z'\in \mathcal{N}_k$ with $q^k_{z'} \cap Q^k_z \neq \emptyset$ satisfies $z' \in \mathcal{G}_k$. In view of $\rho_1 < \rho_2 \leq \frac{1}{2}5^{-d} \theta$ (see Lemma~\ref{lemma: bad/good}), the property then follows from \eqref{0306191212-1}, \eqref{0306191212-2},
$ J_u \cap \Omega_g \subset \partial^* \Omega_h \cup \Sigma$, and the fact that $Q^k_z$ consists of $5^d$ different cubes $q^k_{z'}$.
Notice that $|v_k|\le |u|$ and $|e(v_k)| \le |e(u)|$ pointwise a.e.; in particular, the functions $ \psi(|v_k|) + |e(v_k)|^p$ are equiintegrable, where $\psi(t) = t \wedge 1$. In view of \eqref{eq: smalljump}, we can apply Lemma \ref{lemma: vito approx} on $U = \Omega_g$ for the function $v_k \in GSBD^p(\Omega_g)$ and the sets $V_k$, to get functions $w_k \in W^{1,\infty}( (\Omega_g)^k \setminus V_k; \mathbb{R}^d)$ such that \eqref{rough-dens-1} and \eqref{rough-dens-2} hold for a sequence $R_k \to 0$.
We now define the function $\hat{w}_k\colon \Omega\to \mathbb{R}^d$ by
\begin{equation*}
\hat{w}_k(x):= \begin{dcases}
w_k\big((1-\tau_*/k) \, x', (1-\tau_*/k) \, x_d - 6\,\tau_g/k\big) \qquad&\text{if } -1 < x_d < {h}_k(x')\,,\\
0 &\text{otherwise.}
\end{dcases}
\end{equation*}
Note that, in view of \eqref{eq: inclusion}, the mapping is well defined and satisfies $ \hat{w}_k |_{\Omega_{h_k}} \in W^{1,\infty}(\Omega_{h_k};\mathbb{R}^d)$. By \eqref{2004192228},
\eqref{rough-dens-1}, \eqref{eq: volume2a-new},
\eqref{2006190839}, and $\psi \le 1$
we get
\begin{align}\label{eq: smalldiff}
\limsup_{k\to \infty}\Vert \psi(|\hat{w}_k - u|) \Vert_{L^1(\Omega)} \le \limsup_{k\to \infty}\big(\Vert \psi(|\hat{w}_k - v_k|)\Vert_{L^1(\Omega)} + \mathcal{L}^d(\{ v_k \neq u\})\big) \le C\varepsilon\,.
\end{align}
In a similar fashion, by employing \eqref{rough-dens-2} in place of \eqref{rough-dens-1} and by the fact that $\Vert e(\hat{w}_k)\Vert_{L^p(\Omega)} \le (1+C_Mk^{-1}) \Vert e(w_k)\Vert_{L^p((\Omega_g)^k \setminus V_k)}$ for some $C_M$ depending on $M$ and $\tau_*$, we obtain
\begin{align}\label{eq: elastic energy estimate}
\limsup_{k\to \infty} \int_{\Omega} |e(\hat{w}_k)|^p \, \mathrm{d} x & \le \limsup_{k\to \infty} \int_{(\Omega_g)^k \setminus V_k} |e(w_k)|^p \, \mathrm{d} x \notag \\
& \le \limsup_{k\to \infty} \int_{\Omega_g} |e(v_k)|^p \, \mathrm{d} x \le \int_{\Omega_h} |e(u)|^p \, \mathrm{d} x \,,
\end{align}
where the last step follows from \eqref{2006191034}.
\noindent\emph{Step 3: Conclusion.} Performing the construction above for $\varepsilon = 1/n$, $n \in \mathbb{N}$, and choosing for each $n \in \mathbb{N}$ an index $k=k(n)\in \mathbb{N}$ sufficiently large, we obtain a sequence $( \hat{w}_{n}, h_{n})$ such that by \eqref{eq: volume2a-new} and \eqref{eq: smalldiff} we get
\begin{equation}\label{2709192115}
\hat{w}_n \to u=u\chi_{\Omega_h}\text{ in }L^0(\Omega;\mathbb{R}^d) \quad \text{ and }\quad h_n \to h\text{ in }L^1(\omega)\,.
\end{equation}
By \eqref{eq: volume2b-new} and the definition $\Sigma=J_u' \cap \Omega_h^1$ we obtain \eqref{sub2}. By $GSBD^p$ compactness (see Theorem \ref{th: GSDBcompactness}) applied on $\hat{w}_n = \hat{w}_n\chi_{\Omega_{h_n}} \in GSBD^p(\Omega)$ along with $\hat{w}_n \to u$ in $L^0(\Omega;\mathbb{R}^d)$ we get
$$ \int_{\Omega_h} |e(u)|^p \, \mathrm{d} x \le \liminf_{n \to \infty} \int_{\Omega_{h_n}} |e(\hat{w}_n)|^p \, \mathrm{d} x.$$
This along with \eqref{eq: elastic energy estimate} and the strict convexity of the norm $\Vert \cdot \Vert_{L^p(\Omega)}$ gives
\begin{equation}\label{2006191122}
e(\hat{w}_n) \to e(u) \quad \text{in }L^p(\Omega; {\mathbb{M}^{d\times d}_{\rm sym}})\,.
\end{equation}
In view of \eqref{eq: growth conditions}, this shows the statement apart from the fact that the configurations $\hat{w}_n$ possibly do not satisfy the boundary data. (That is, we have now proved the version described in Remark \ref{rem:1805192117} since $\hat{w}_n \in L^\infty(\Omega;\mathbb{R}^d)$.) It remains to adjust the boundary values.
To this end, choose a continuous extension operator from $W^{1,p}(\omega \times (-1,0);\mathbb{R}^d)$ to $W^{1,p}(\Omega;\mathbb{R}^d)$ and denote by $(w_n)_n$ the extensions of $(\hat{w}_n- u_0)|_{\omega \times (-1,0)}$ to $\Omega$. Clearly, $w_n \to 0$ strongly in $W^{1,p}(\Omega;\mathbb{R}^d)$ since $ (\hat{w}_n- u_0)|_{\omega \times (-1,0)} \to 0$ in $W^{1,p}(\omega \times (-1,0);\mathbb{R}^d)$. We now define the sequence $(u_n)_n$ by $u_n:= (\hat{w}_n - w_n)\chi_{\Omega_{h_n}}$. By \eqref{2709192115} we immediately deduce $u_n \to u$ in $L^0(\Omega;\mathbb{R}^d)$. Moreover, $ u_n|_{\Omega_{h_n}} \in W^{1,p}(\Omega_{h_n}; \mathbb{R}^d)$, $u_n=0$ in $\Omega \setminus \Omega_{h_n}$, $u_n = u_0$ a.e.\ in $\omega \times (-1,0)$ and
\eqref{2006191122}
still holds with $u_n$ in place of $\hat{w}_n$. Due to \eqref{eq: growth conditions}, this shows \eqref{sub1} and concludes the proof.
\end{proof}
\begin{remark}[Volume constraint]\label{rem: volume constraint}
Given a volume constraint $\mathcal{L}^d(\Omega_h^+) = m$ with $0 <m < M\mathcal{H}^{d-1}(\omega) $, one can construct the sequence $(u_n,h_n)$ in Proposition \ref{prop: enough} such that also $h_n$ satisfies the volume constraint, cf.\ \cite[Remark~4.2]{ChaSol07}. Indeed, if $\Vert h\Vert_{\infty}<M$, we consider $h^*_n(x') = r_n^{-1} h_n(x')$ and $u^*_n(x',x_d) = u_n(x', r_n x_d)$, where $r_n := m^{-1}\int_\omega h_n \, \mathrm{d} x'$. Then $\int_\omega h_n^* \, \mathrm{d} x' =m$. Note that we can assume $\Vert h_n \Vert_\infty \le \Vert h \Vert_\infty$ (apply Proposition~\ref{prop: enough} with $\Vert h\Vert_{\infty}$ in place of $M$). Since $r_n \to 1$, we then find $h^*_n\colon \omega \to [0,M]$ for $n$ sufficiently large, and \eqref{eq: sub} still holds.
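Indeed, the normalization matches the constraint exactly: since $\mathcal{L}^d(\Omega_{h_n}^+) = \int_\omega h_n \, \mathrm{d} x'$, a direct computation gives
\begin{equation*}
\int_\omega h_n^* \, \mathrm{d} x' = r_n^{-1} \int_\omega h_n \, \mathrm{d} x' = m \Big( \int_\omega h_n \, \mathrm{d} x' \Big)^{-1} \int_\omega h_n \, \mathrm{d} x' = m\,,
\end{equation*}
and $r_n \to 1$ follows from $h_n \to h$ in $L^1(\omega)$ together with $\int_\omega h \, \mathrm{d} x' = \mathcal{L}^d(\Omega_h^+) = m$.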
If $\Vert h\Vert_{L^\infty(\omega)}=M$ instead, we need to perform a preliminary approximation: given $\delta>0$, define $\hat{h}^{\delta,M} = h \wedge (M- \delta)$ and $h_\delta(x') = r_\delta^{-1} \hat{h}^{\delta,M}(x')$, where $r_\delta = m^{-1} \int_\omega\hat{h}^{\delta,M} \, \mathrm{d} x'$. Since $\Omega_h$ is a subgraph and $m < M\mathcal{H}^{d-1}(\omega)$, it is easy to check that $r_\delta > (M-\delta)/M$ and therefore $\Vert h_\delta\Vert_\infty < M$. Moreover, by construction we have $\int_\omega h_\delta \, \mathrm{d} x' = m$. We define $u_\delta(x',x_d) = u(x', r_\delta x_d) \chi_{\Omega_{h_\delta}}$. We now apply the above approximation on fixed $(u_\delta, h_\delta)$, then consider a sequence $\delta \to 0$, and use a diagonal argument.
\end{remark} \label{page:upperineqend}
\begin{remark}[Surface tension]
We remark that, similar to \cite{BonCha02, ChaSol07, FonFusLeoMor07}, we could also derive a relaxation result for more general models where the surface tension $\sigma_S$ for the substrate can be different from the surface tension $\sigma_C$ of the crystal. This corresponds to surface energies of the form
$$\sigma_S \,\mathcal{H}^{d-1}(\{h=0\}) + \sigma_C \mathcal{H}^{d-1}\big(\partial\Omega_h \cap (\omega{\times}(0,+\infty))\big)\,.$$
In the relaxed setting, the surface energy is then given by
\begin{equation*}
(\sigma_S \wedge \sigma_C) \,\mathcal{H}^{d-1}(\{h=0\}) + \sigma_C \Big(\mathcal{H}^{d-1}\big(\partial^*\Omega_h \cap (\omega{\times}(0,+\infty))\big) + 2\,\mathcal{H}^{d-1}(J_u' \cap \Omega_h^1) \Big)\,.
\end{equation*}
We do not prove this fact here for simplicity, but refer to \cite[Subsection~2.4, Remark~4.4]{ChaSol07} for details on how the proof needs to be adapted to deal with such a situation.
\end{remark}
\subsection{Compactness and existence of minimizers} \label{subsec:compactness}
In this short subsection we give the proof of the compactness result stated in Theorem~\ref{thm:compG}. As discussed in Subsection \ref{sec: results2}, this immediately implies the existence of minimizers for problem \eqref{eq: minimization problem2}.
\begin{proof}[Proof of Theorem~\ref{thm:compG}]
Consider $(u_n, h_n)_n$ with $\sup_n G(u_n, h_n) < +\infty$. First, by \eqref{eq: Gfunctional} and a standard compactness argument we find $h \in BV(\omega; [0,M] )$ such that $h_n \to h$ in $L^1(\omega)$, up to a subsequence (not relabeled). Moreover, by \eqref{eq: growth conditions}, \eqref{eq: Gfunctional}, and the fact that $J_{u_n} \subset \partial \Omega_{h_n}\cap \Omega$ we can apply Theorem \ref{th: GSDBcompactness} to obtain some $u \in GSBD^p_\infty(\Omega)$ such that $u_n \to u$ weakly in $GSBD^p_\infty$. We also observe that $u =u\chi_{\Omega_h}$ and $u = u_0$ on $\omega\times (-1,0)$ by \eqref{eq: GSBD comp}(i), $u_n =u_n\chi_{\Omega_{h_n}}$, and $u_n = u_0$ on $\omega\times (-1,0)$ for all $n \in \mathbb{N}$. It remains to show that $u \in GSBD^p(\Omega)$, i.e., $ \lbrace u = \infty \rbrace = \emptyset$.
To this end, we take $U = \omega \times (-\frac{1}{2},M) $ and $U' = \Omega = \omega \times (-1,M+1)$, and apply Theorem~\ref{thm:compSps} on the sequence $\Gamma_n = \partial \Omega_{h_n} \cap \Omega$ to find that $\Gamma_n$ $\sigma^p_{\rm sym}$-converges (up to a subsequence) to a pair $(\Gamma, G_\infty)$. Consider $v_n = \psi u_n$, where $\psi \in C^\infty(\Omega)$ with $\psi = 1$ in a neighborhood of $ \omega \times (0,M+1) $ and $\psi = 0$ on $\omega \times (-1,-\frac{1}{2})$. Clearly, $v_n$ converges weakly in $GSBD^p_\infty(\Omega)$ to $v:=\psi u$. As $J_{v_n} \subset \Gamma_n$ and $v_n = 0$ on $U' \setminus U$ for all $n \in \mathbb{N}$, we also obtain $\lbrace v = \infty \rbrace \subset G_\infty$ (up to a $\mathcal{L}^d$-negligible set), see Definition~\ref{def:spsconv}(i). As by definition of $v$ we have $\lbrace u = \infty \rbrace = \lbrace v = \infty \rbrace$, we deduce $\lbrace u = \infty \rbrace \subset G_\infty$. It now suffices to recall $G_\infty = \emptyset$, see \eqref{eq: G is empty}, to conclude $\lbrace u = \infty \rbrace = \emptyset$.
\end{proof}
\subsection{Phase field approximation of $\overline G$}\label{sec:phasefield}
This final subsection is devoted to the phase-field approximation of the functional $\overline{G}$. Recall the functionals introduced in \eqref{eq: phase-approx}.
\begin{proof}[Proof of Theorem~\ref{thm:phasefieldG}]
Fix a decreasing sequence $(\varepsilon_n)_n$ of positive numbers converging to zero. We first prove the liminf and then the limsup inequality.
\noindent \emph{Proof of (i).} Let $(u_n, v_n)_n$ with $\sup_n G_{\varepsilon_n}(u_n, v_n)< +\infty$. Then, $v_n$ is nonincreasing in $x_d$, and therefore
\begin{equation*}
\widetilde{v}_n (x) := 0 \vee ( v_n(x)-\delta_n x_d) \wedge 1 \quad \text{ for $x \in \Omega = \omega \times (-1,M+1)$}
\end{equation*}
is strictly decreasing on $\lbrace 0 < \widetilde{v}_n < 1\rbrace$, where $(\delta_n)_n$ is a decreasing sequence of positive numbers converging to zero. For a suitable choice of $(\delta_n)_n$, depending on $(\varepsilon_n)_n$ and $W$, we obtain $\| v_n - \widetilde{v}_n\|_{L^1(\Omega)} \to 0$ and
\begin{equation}\label{1805192049}
G_{\varepsilon_n}(u_n, v_n)= G_{\varepsilon_n}(u_n, \widetilde{v}_n) + O(1/n)\,.
\end{equation}
By using the implicit function theorem and the coarea formula for $\widetilde{v}_n$, we can see, exactly as in the proof of \cite[Theorem~5.1]{ChaSol07}, that for a.e.\ $s \in (0,1)$ and $n \in \mathbb{N}$ the superlevel set $\{\widetilde{v}_n > s\}$ is the subgraph of a function $h^s_n \in H^1(\omega; [0,M])$. (Every $h^s_n$ takes values in $[0,M]$ since $\widetilde{v}_n=0$ in $\omega{\times} (M, M+1)$.) By the coarea formula for $\widetilde{v}_n$, $\partial^* \lbrace \widetilde{v}_n > s \rbrace\cap \Omega =\partial^* \Omega_{h_n^s} \cap \Omega$, and Young's inequality we obtain
\begin{align*}
\int_0^1 \sqrt{2W(s)}\, \mathcal{H}^{d-1}(\varphiartial^* \Omega_{h_n^s} \cap \Omega) \, \mathrm{d} s & \leq \int_\Omega \sqrt{2W(\widetilde{v}_n)} \,|\nabla \widetilde{v}_n| \, \mathrm{d} x \le \int_\Omega \Big( \frac{\varepsilon_n}{2} |\nabla \widetilde{v}_n|^2 + \frac{1}{\varepsilon_n} W(\widetilde{v}_n) \Big) \, \mathrm{d} x\,.
\end{align*}
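In the display above, the first estimate follows from the coarea formula for $\widetilde{v}_n$ with weight $\sqrt{2W}$ (together with $\partial^* \lbrace \widetilde{v}_n > s \rbrace\cap \Omega =\partial^* \Omega_{h_n^s} \cap \Omega$), while the second is Young's inequality $ab \le \tfrac{1}{2}(a^2+b^2)$ applied pointwise with $a = \sqrt{\varepsilon_n}\,|\nabla \widetilde{v}_n|$ and $b = \sqrt{2W(\widetilde{v}_n)/\varepsilon_n}$:
\begin{equation*}
\sqrt{2W(\widetilde{v}_n)}\,|\nabla \widetilde{v}_n| = \sqrt{\varepsilon_n}\,|\nabla \widetilde{v}_n| \cdot \sqrt{\frac{2W(\widetilde{v}_n)}{\varepsilon_n}} \le \frac{\varepsilon_n}{2}\,|\nabla \widetilde{v}_n|^2 + \frac{1}{\varepsilon_n}\,W(\widetilde{v}_n)\,.
\end{equation*}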
Then, by Fatou's lemma we get
\begin{align}\label{eq: coarea, fatou}
\int_0^1 \sqrt{2W(s)}\Big( \liminf_{n\to \infty} \int_\omega \sqrt{1+|\nabla h^s_n(x')|^2} \mathrm{d} x'\Big) \mathrm{d} s \le \liminf_{n\to \infty} \int_\Omega \Big( \frac{\varepsilon_n}{2} |\nabla \widetilde{v}_n|^2 + \frac{1}{\varepsilon_n} W(\widetilde{v}_n) \Big) \, \mathrm{d} x < + \infty
\end{align}
and thus $\liminf_{n\to \infty} \int_\omega \sqrt{1+|\nabla h^s_n(x')|^2} \,\mathrm{d} x'$ is finite for a.e.\ $s\in (0,1)$.
By a diagonal argument, we can find a subsequence (still denoted by $(\varepsilon_n)_n$) and $(s_k)_k \subset (0,1)$ with $\lim_{k\to \infty} s_k =0$ such that for every $k \in \mathbb{N}$ there holds
\betagin{equation}\label{eq: liminfequalinf}
\lim_{n\to \infty} \int_\omega \sqrt{1+|\nabla h^{s_k}_n(x')|^2} \, \mathrm{d} x'=\liminf_{n\to \infty} \int_\omega \sqrt{1+|\nabla h^{s_k}_n(x')|^2} \, \mathrm{d} x' < + \infty \,.
\end{equation}
Up to a further (not relabeled) subsequence, we may thus assume that $h^{s_k}_n$ converges in $L^1(\omega)$ to some function $h^{s_k}$ for every $k$. Since $ \sup_n G_{\varepsilon_n}(u_n, \widetilde{v}_n) < +\infty$ and thus $W(\widetilde{v}_n) \to 0$ a.e.\ in $\Omega$, we obtain $\widetilde{v}_n \to 0$ for a.e.\ $x$ with $x_d > h^{s_k}(x')$ and $\widetilde{v}_n \to 1$ for a.e.\ $x$ with $x_d < h^{s_k}(x')$. (Recall $W(t) = 0 \Leftrightarrow t \in \lbrace 0,1\rbrace$.) This shows that the functions $h^{s_k}$ are independent of $k$, and will be denoted simply by $h\in BV(\omega; [0,M])$.
Let us denote by $u_n^k \in GSBD^p(\Omega)$ the function given by
\begin{align}\label{1805192011}
u^k_n(x) = \begin{cases} u_n(x) & \text{if } x_d < h^{s_k}_n(x')\,,\\ 0 & \text{else}\,. \end{cases}
\end{align}
Then $(u_n^k)_n$ satisfies the hypothesis of Theorem~\ref{th: GSDBcompactness} for every $k \in \mathbb{N}$. Indeed, $J_{u^k_n} \subset \partial^* \Omega_{h^{s_k}_n}$ and $\mathcal{H}^{d-1}(\partial^* \Omega_{h^{s_k}_n})$ is uniformly bounded in $n$ by \eqref{eq: liminfequalinf}. Moreover, $(e(u^k_n))_n$ is uniformly bounded in $L^p(\Omega; {\mathbb{M}^{d\times d}_{\rm sym}})$ by \eqref{eq: growth conditions} and the fact that
\[
G_{\varepsilon_n}(u_n, \widetilde{v}_n) \geq (\eta_{\varepsilon_n} + s_k^2) \int _{\Omega} f(e(u^k_n)) \, \mathrm{d} x\,.
\]
Therefore, Theorem~\ref{th: GSDBcompactness} implies that, up to a subsequence, $u^k_n$ converges weakly in $GSBD^p_\infty(\Omega)$ to a function $u^k$. Furthermore, we infer, arguing exactly as in the proof of Theorem~\ref{thm:compG} above, that actually $u^k \in GSBD^p(\Omega)$, i.e., the exceptional set $\lbrace u^k = \infty \rbrace$ is empty. By \eqref{eq: GSBD comp}(i) this yields $u^k_n \to u^k$ in $L^0(\Omega;\mathbb{R}^d)$. By a diagonal argument we get (up to a further subsequence) that $u_n^k \to u^k$ pointwise a.e.\ as $n \to \infty$ for all $k\in\mathbb{N}$.
Recalling now the definition of $u^k_n$ in \eqref{1805192011} and the fact that $\lim_{n\to \infty} \Vert h_n^{s_k} - h\Vert_{L^1(\omega)} = 0$ for all $k \in \mathbb{N}$, we deduce that the functions $u^k$ are independent of $k$. This function will simply be denoted by $u \in GSBD^p(\Omega)$ in the following. Note that $u = u \chi_{\Omega_h}$ and that $u = u_0$ on $\omega \times (-1,0)$ since $u_n = u_0$ on $\omega \times (-1,0)$ for all $n \in \mathbb{N}$.
For the proof of \eqref{1805192018}, we can now follow exactly the lines of the lower bound in \cite[Theorem~5.1]{ChaSol07}. We sketch the main arguments for convenience of the reader. We first observe that
$$\int_\Omega \widetilde{v}_n^2 \, f(e(u_n)) \, \mathrm{d} x = \int_\Omega \Big(2\int_0^{\widetilde{v}_n(x)}s\,\mathrm{d} s \Big) \, f(e(u_n)(x)) \, \mathrm{d} x \ge \int_0^1 2s \Big(\int_{\lbrace\widetilde{v}_n >s \rbrace} f(e(u_n)) \, \mathrm{d} x\Big)\, \mathrm{d} s\,. $$
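The exchange of the order of integration in the last step is justified by Tonelli's theorem, the integrand being nonnegative (so it in fact holds with equality): writing $2\int_0^{\widetilde{v}_n(x)} s \, \mathrm{d} s = \int_0^1 2s\, \chi_{\lbrace \widetilde{v}_n > s\rbrace}(x) \, \mathrm{d} s$, we get
\begin{equation*}
\int_\Omega \Big(2\int_0^{\widetilde{v}_n(x)}s\,\mathrm{d} s \Big) f(e(u_n)(x)) \, \mathrm{d} x = \int_0^1 2s \Big( \int_\Omega \chi_{\lbrace \widetilde{v}_n > s\rbrace}\, f(e(u_n)) \, \mathrm{d} x \Big)\, \mathrm{d} s = \int_0^1 2s \Big( \int_{\lbrace \widetilde{v}_n > s\rbrace} f(e(u_n)) \, \mathrm{d} x \Big)\, \mathrm{d} s\,.
\end{equation*}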
This along with \eqref{eq: coarea, fatou} and Fatou's lemma yields
\begin{equation}\label{eq: last equation}
\int_0^1 \liminf_{n\to \infty} \Big( 2s \int_{\{ \widetilde{v}_n > s \} } f(e(u_n)) \,\, \mathrm{d} x + c_W \sqrt{2 W(s)} \int _\omega \sqrt{1+|\nabla h^s_n|^2}\, \mathrm{d} x' \Big)\, \mathrm{d} s\leq \liminf_{n \to \infty} G_{\varepsilon_n}(u_n, \widetilde{v}_n) \,.
\end{equation}
Thus, the integrand
\[
I^s_n:= 2s \int_{\{ \widetilde{v}_n > s \} } f(e(u_n)) \, \mathrm{d} x + c_W \sqrt{2 W(s)} \int _\omega \sqrt{1+|\nabla h^s_n|^2} \, \mathrm{d} x'
\]
is finite for a.e.\ $s \in (0,1)$. We then take $s$ such that $h_n^s \in H^1(\omega)$ for all $n$, and consider a subsequence $(n_m)_m$ such that $\lim_{m\to \infty} I_{n_m}^s = \liminf_{n\to \infty} I_n^s$. Exactly as in \eqref{1805192011}, we let $u^s_{n_m}$ be the function given by $u_{n_m}$ if $x_d< h^{s}_{n_m}(x')$ and by zero otherwise. Repeating the compactness argument below \eqref{1805192011}, we get $u^s_{n_m} \to u$ a.e.\ in $\Omega$ and $h^s_{n_m} \to h$ in $L^1(\omega)$ as $m \to \infty$. We observe that this can be done for a.e.\ $s \in (0,1)$, for a subsequence depending on $s$.
By $\int_{\{ \widetilde{v}_{n_m} > s \} } f(e(u_{n_m})) \, \mathrm{d} x = \int_{\Omega} f(e(u^s_{n_m})) \, \mathrm{d} x $ and the (lower inequality in the) relaxation result Theorem~\ref{thm:relG} (up to different constants in front of the elastic energy and surface energy) we obtain
\begin{equation*}
2s \int _{ \Omega^+_h } f(e(u)) \, \mathrm{d} x + c_W \sqrt{2W(s)}\,\big( \mathcal{H}^{d-1}(\partial^*\Omega_h \cap \Omega ) + 2 \, \mathcal{H}^{d-1}(J_u' \cap \Omega_h^1) \big) \leq \lim_{m\to \infty} I_{n_m}^s = \liminf_{n \to \infty} I_n^s
\end{equation*}
for a.e.\ $s\in(0,1)$.
We obtain \eqref{1805192018} by integrating the above inequality and by using \eqref{1805192049} and \eqref{eq: last equation}. Indeed, the integral on the left-hand side gives exactly $\overline G(u,h)$ as $c_W=(\int_0^1 \sqrt{2 W(s)} \,\mathrm{d} s)^{-1}$.
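To see this, note the two normalizations
\begin{equation*}
\int_0^1 2s \, \mathrm{d} s = 1 \qquad \text{and} \qquad c_W \int_0^1 \sqrt{2W(s)} \, \mathrm{d} s = 1\,,
\end{equation*}
so that integrating the previous inequality over $s \in (0,1)$ reproduces both the elastic term $\int_{\Omega^+_h} f(e(u)) \, \mathrm{d} x$ and the surface term $\mathcal{H}^{d-1}(\partial^*\Omega_h \cap \Omega ) + 2 \, \mathcal{H}^{d-1}(J_u' \cap \Omega_h^1)$, each with coefficient one.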
\noindent \emph{Proof of (ii).} Let $(u, h)$ with $\overline G(u,h) < + \infty$. By the construction in the upper inequality for Theorem~\ref{thm:relG}, see Proposition~\ref{prop: enough} and Remark~\ref{rem:1805192117},
we find $h_n \in C^1(\omega; [0,M] )$ with $h_n \to h$ in $L^1(\omega)$ and $u_n \in L^\infty(\Omega;\mathbb{R}^d)$ with $u_n|_{\Omega_{h_n}} \in W^{1,p}(\Omega_{h_n}; \mathbb{R}^d)$ and $u_n \to u$ a.e.\ in $\Omega$ such that
\begin{equation}\label{1805192140}
\overline G(u,h)= \lim_{n\to \infty} H(u_n, h_n) \quad \text{for } H(u_n, h_n):= \int_{\Omega^+_{h_n}} f(e(u_n)) \, \mathrm{d} x + \mathcal{H}^{d-1}(\partial \Omega_{h_n} \cap \Omega)
\end{equation}
as well as
\begin{equation}\label{1805192141}
(u_n - u_0)|_{\omega \times (-1,0)} \to 0 \quad\text{in }W^{1,p}(\omega{\times}(-1,0); \mathbb{R}^d)\,.
\end{equation}
For each $(u_n,h_n)$, we can use the construction in \cite{ChaSol07} to find sequences $ (u_n^k)_k \subset W^{1,p}(\Omega;\mathbb{R}^d)$ and $(v_n^k)_k \subset H^1(\Omega;[0,1])$ with $u_n^k = u_n$ on $\omega \times (-1,0)$, $u_n^k \to u_n$ in $L^1(\Omega;\mathbb{R}^d)$, and $v^k_n \to \chi_{\Omega_{h_n}}$ in $L^1(\Omega)$ such that (cf.\ \eqref{1805192140})
\begin{equation}\label{1805192154}
\limsup_{k \to \infty} \int_\Omega \bigg( \big( (v_n^k)^2+\eta_{\varepsilon_k}\big) f(e(u_n^k)) + c_W\Big(\frac{W( v_n^k)}{\varepsilon_k} + \frac{\varepsilon_k}{2} |\nabla v_n^k|^2 \Big) \bigg)\, \mathrm{d} x \leq H(u_n,h_n)\,.
\end{equation}
In particular, we refer to \cite[Equation (28)]{ChaSol07} and mention that the functions $(v_n^k)_k$ can be constructed such that $v_n^k =1$ on $\omega \times (-1,0)$ and $v_n^k = 0$ in $\omega \times (M,M+1)$. We also point out that for this construction the assumption $\eta_\varepsilon \varepsilon^{1-p} \to 0$ as $\varepsilon \to 0$ is needed.
By \eqref{1805192140}, \eqref{1805192154}, and a standard diagonal extraction argument we find sequences $(\hat{u}^k)_k \subset (u_n^k)_{n,k}$ and $(v^k)_k \subset (v^k_n)_{n,k}$ such that $\hat{u}^k \to u$ a.e.\ in $\Omega$, $v^k \to \chi_{\Omega_h}$ in $L^1(\Omega)$, and
\begin{equation}\label{1805192154.2}
\limsup_{k \to \infty} \int_\Omega \bigg( \big( ({v}^k)^2+\eta_{\varepsilon_k}\big) f(e(\hat{u}^k)) + c_W\Big(\frac{W( v^k)}{\varepsilon_k} + \frac{\varepsilon_k}{2} |\nabla v^k|^2 \Big) \bigg)\, \mathrm{d} x \leq \overline G(u,h)\,.
\end{equation}
By using \eqref{1805192141} and the fact that $u^k_n = u_n$ for all $k,n \in \mathbb{N}$, we can modify $(\hat{u}^k)_k$ as described at the end of the proof of Proposition \ref{prop: enough} (see below \eqref{2006191122}): we find a sequence $(u^k)_k$ which satisfies $u^k = u_0$ on $\omega \times (-1,0)$, converges to $u$ a.e.\ in $\Omega$, and \eqref{1805192154.2} still holds, i.e., $\limsup_{k \to \infty}G_{\varepsilon_k}(u^k, v^k) \le \overline G(u,h)$. This concludes the proof.
\end{proof}
\begin{appendices}
\section{Auxiliary results}\label{sec:App}
In this appendix, we prove two technical approximation results employed in Sections~\ref{sec:FFF} and \ref{sec:GGG}, based on tools from \cite{CC17}.
\begin{proof}[Proof of Lemma~\ref{le:0410191844}]
Let $(v, H)$ be given as in the statement of the lemma.
Clearly, it suffices to prove the following statement: for every $\eta>0$, there exists $( {v}^\eta, H^\eta) \in L^p(\Omega;\mathbb{R}^d){\times}\mathfrak{M}(\Omega)$ with the regularity and the properties required in the statement of the lemma (in particular, $ {v}^\eta = u_0$ in a neighborhood $V^\eta \subset \Omega$ of $\partial_D \Omega$), such that, for a universal constant $C$, one has $\bar{d}( v^\eta, v )\le C\eta$ (cf.\ \eqref{eq:metricd} for $\bar{d}$), $\mathcal{L}^d(H\triangle H^\eta)\le C\eta$, and
\[
\overline F'_{\mathrm{Dir}}( {v}^\eta, H^\eta) \le \overline F'_{\mathrm{Dir}}(v,H) + C\eta\,.
\]
We start by recalling the main steps of the construction in \cite[Theorem~5.5]{CC17} and we refer to \cite{CC17} for details (see also \cite[Section~4, first part]{CC19b}). Based on this, we then explain how to construct $( {v}^\eta, H^\eta)$ simultaneously, highlighting particularly the steps needed for constructing $H^\eta$.
Let $\varepsilon>0$, to be chosen small with respect to $\eta$. By using the assumptions on $\varphiartial\Omega$
given before \eqref{0807170103}, a preliminary step is to find cubes $(Q_j)_{j=1}^{ J }$ with pairwise disjoint closures and hypersurfaces $(\Gamma_j)_{j=1}^J$ with the following properties: each $Q_j$ is centered at $x_j \in \varphiartial_N \Omega$ with sidelength $\varrho_j$, $ {\rm dist} (Q_j, {\partial_D \Omega})> d_\varepsilon >0 $ with $\lim_{\varepsilon \to 0} d_\varepsilon =0$, and
\begin{align}\label{eq: guarantee2}
\mathcal{H}^{d-1}({\partial_N \Omega} \setminus \widehat{Q}) + \mathcal{L}^d(\widehat{Q}) \le \varepsilon,\qquad\text{for }\widehat{Q}:= \bigcup\nolimits_{j=1}^J \overline Q_j\,.
\end{align}
Moreover, each $\Gamma_j$ is a $C^1$-hypersurface with $x_j \in \Gamma_j \subset \overline Q_j$,
\begin{equation*}\label{2212181446}
\begin{split}
\mathcal{H}^{d-1}\big(({\partial_N \Omega}\triangle \Gamma_j)\,\cap \, \overline{Q_j} \big) \le \varepsilon (2\varrho_j)^{d-1}\le \, \frac{\varepsilon}{1-\varepsilon} \mathcal{H}^{d-1}({\partial_N \Omega}\cap \overline{Q_j})\,,
\end{split}
\end{equation*}
and $\Gamma_j$ is a $C^1$-graph with respect to $\nu_{{\partial \Omega}}(x_j)$ with Lipschitz constant less than $\varepsilon/2$. (We can say that ${\partial_N \Omega} \cap Q_j$ is ``almost'' the intersection of $Q_j$ with the hyperplane passing through $x_j$ with normal $\nu_{{\partial \Omega}}(x_j)$.)
We can also guarantee that
\begin{align}\label{eq: guarantee}
\mathcal{H}^{d-1}\big((\partial^* H \cup J_u) \cap \Omega \cap \widehat{Q}\big) \le \varepsilon, \ \ \ \ \ \ \ \ \mathcal{H}^{d-1}\big((\partial^* H \cup J_u) \cap \partial Q_j \big)= 0
\end{align}
for all $j=1,\ldots,J$.
To each $Q_j$, we associate the following rectangles:
\begin{equation*}
R_{j}:=\Big\{x_{j}+\sum\nolimits_{i=1}^{d-1} y_i\, b_{j,i}+y_d\, \nu_{j} \colon y_i\in (-\varrho_{j},\varrho_{j}),\, y_d \in (-3\varepsilon \varrho_{j}-t, -\varepsilon \varrho_{j}) \Big\}\,,
\end{equation*}
$$
R'_{j}:=\Big\{x_{j}+\sum\nolimits_{i=1}^{ d-1 } y_i\, b_{j,i}+y_d\, \nu_{j} \colon y_i\in (-\varrho_{j},\varrho_{j}),\, y_d \in (-\varepsilon \varrho_{j}, \varepsilon \varrho_{j}+t) \Big\}\,,
$$
and $\widehat{R}_{j}:=R_{j} \cup R'_{j}$, where $\nu_{j}=-\nu_{{\partial \Omega}}(x_{j})$ denotes the generalized outer normal,
$(b_{j,i})_{i=1}^{d-1}$ is an orthonormal basis of $(\nu_{j})^\varphierp$, and $t>0$ is small with respect to $\eta$. We remark that $\Gamma_j \subset R'_j$ and that
$R_j$ is a small strip adjacent to $R_j'$, which is included in $\Omega \cap Q_j$. (We use here the notation $_j$ in place of $_{h,N}$ adopted in \cite[Theorem~5.5]{CC17}.)
After this preliminary part, the approximating function $u^\eta$ was constructed in \cite[Theorem~5.5]{CC17} starting from a given function $u$ through the following three steps:
\begin{itemize}
\item[(i)] definition of an extension $\widetilde{u} \in GSBD^p(\Omega + B_t(0))$ which is obtained by a reflection argument \emph{à la} Nitsche \cite{Nie81} inside $\widehat{R}_j$, equal to $u$ in $\Omega\setminus \bigcup_j \widehat{R}_j$, and equal to $u_0$ elsewhere. This can be done such that, for $t$ and $\varepsilon$ small, there holds (see below \cite[(5.13)]{CC17})
\begin{equation}\label{eq: SSS2}
\int\limits_{(\Omega+B_t(0)) \setminus \Omega} \hspace{-2em} |e(u_0)|^p \, \mathrm{d} x + \int\limits_{\widehat{R}} |e(\widetilde{u})|^p \, \mathrm{d} x + \int\limits_{{R}} |e(u)|^p \, \mathrm{d} x + \mathcal{H}^{d-1}\big(J_{\widetilde{u}} \cap \widehat{R}\big) \le \eta\,,
\end{equation}
where $R:= \bigcup_{j=1}^J {R}_j $ and $\widehat{R}:= \bigcup_{j=1}^J \widehat{R}_j \cap (\Omega+B_t(0))$.
\item[(ii)] application of Theorem~\ref{thm:densityGSBD} on the function $\widetilde{u}^\delta:= \widetilde{u}\circ (O_{\delta,x_0})^{-1} + u_0 - u_0 \circ (O_{\delta,x_0})^{-1}$ (for some $\delta$ sufficiently small) to get approximating functions $\widetilde{u}^\delta_n$ with the required regularity which are equal to $u_0 \ast \psi_n $ in a neighborhood of ${\partial_D \Omega}$ in $\Omega$, where $\psi_n$ is a suitable mollifier. Here, assumption \eqref{0807170103} is crucial.
\item[(iii)] correcting the boundary values by defining $u^\eta$ as $ u^\eta := \widetilde{u}^\delta_n + u_0 - u_0 \ast \psi_n $, for $\delta$ and $1/n$ small enough.
\end{itemize}
After having recalled the main steps of the construction in \cite[Theorem~5.5]{CC17}, let us now construct $ {v}^\eta$ and $H^\eta$ at the same time, following the lines of the steps (i), (ii), and (iii) above. The main novelty is the analog of step (i) for the approximating sets, while the approximating functions are constructed in a very similar way. For this reason, we do not recall more details
from \cite[Theorem~5.5]{CC17}.
\emph{Step (i).} Step (i) for $ {v}^\eta$ is the same as the one performed before for $u^\eta$, starting from $v$ in place of $u$. Hereby, we get a function $\widetilde{v} \in GSBD^p(\Omega + B_t(0)) $.
For the construction of
$H^\eta$, we introduce a set $\widetilde{H} \subset \Omega+B_t(0)$ as follows:
in $R'_j$, we define a set $H'_j$ by a simple reflection of the set $H \cap R_j$ with respect to the common hyperface between $R_j$ and $R'_j$.
Then, we let
$
\widetilde{H}:= H \cup \bigcup_{j=1}^J (H'_j \cap (\Omega+B_t(0)))$. Since $H$ has finite perimeter, also $\widetilde{H}$ has finite perimeter. By \eqref{eq: guarantee} we get $\mathcal{H}^{d-1}(\partial^*\widetilde{H} \cap \widehat{R} )\le \eta/3 $ for $\varepsilon$ small, where as before $\widehat{R}:= \bigcup_{j=1}^J \widehat{R}_j \cap (\Omega+B_t(0))$.
We choose $\delta$, $\varepsilon$, and $t$ so small that
\begin{align}\label{eq: deltassmall}
\mathcal{H}^{d-1} \Big(O_{\delta,x_0}\Big( \bigcup\nolimits_{j=1}^J \partial R_j' \setminus \partial R_j \Big) \cap \Omega \Big) \le \frac{\eta}{ 3 }\,.
\end{align}
We let $ {H}^\eta :=O_{\delta,x_0}(\widetilde{H})$. Then, we get $\mathcal{L}^d({H}^\eta\triangle H)\le \eta$ for $\varepsilon$, $t$, and $\delta$ small enough. By \eqref{eq: guarantee2}, \eqref{eq: deltassmall}, and $\mathcal{H}^{d-1}(\partial^*\widetilde{H} \cap \widehat{R} )\le \eta/3 $ we also have (again take suitable $\varepsilon$, $\delta$)
\begin{align}\label{eq: SSS}
\int_{\partial^*{H}^\eta} \varphi(\nu_{{H}^\eta}) \, \mathrm{d} \mathcal{H}^{d-1} \le \int_{ \partial^* H \cap (\Omega\cup {\partial_D \Omega})} \varphi(\nu_H) \, \mathrm{d} \mathcal{H}^{d-1} + \eta\,.
\end{align}
Moreover, in view of \eqref{0807170103} and $ {\rm dist} (Q_j, {\partial_D \Omega})> d_\varepsilon >0 $ for all $j$, ${H}^\eta$ does not intersect a suitable neighborhood of ${\partial_D \Omega}$. Define $\widetilde{v}^\delta:= \widetilde{v}\circ (O_{\delta,x_0})^{-1} + u_0 - u_0 \circ (O_{\delta,x_0})^{-1}$ and observe that the function $\widetilde{v}^\delta\chi_{({H}^\eta)^0}$ coincides with $u_0$ in a suitable neighborhood of ${\partial_D \Omega}$. By \eqref{eq: SSS}, by the properties recalled for $\widetilde{u}$, see \eqref{eq: SSS2}, and the fact that $v = v \chi_{H^0}$, it is elementary to check that
\betagin{align}\label{eq: LLLL}
\overline F'_{\mathrm{Dir}}(\widetilde{v}^\mathrm{d}elta \chi_{({H}^\eta)^0}, {H}^\eta) \le \overline F'_{\mathrm{Dir}}(v\chi_{H^0}, H) + C \eta = \overline F'_{\mathrm{Dir}}(v, H) + C \eta\,.
\end{align}
Notice that here it is important to take the same $\delta$ both for $\widetilde{v}^\delta$ and ${H}^\eta$, that is, to ``dilate'' the function and the set at the same time.
\emph{Step~2.} We apply Theorem~\ref{thm:densityGSBD} to $\widetilde{v}^\delta \chi_{({H}^\eta)^0}$, to get approximating functions $\widetilde{v}^\delta_n$ with the required regularity. For $n$ sufficiently large, we obtain $\bar{d}(\widetilde{v}^\delta_n \chi_{({H}^\eta)^0}, \widetilde{v}^\delta \chi_{({H}^\eta)^0})\le \eta$ and
\[
|\overline F'_{\mathrm{Dir}}(\widetilde{v}^\delta_n \chi_{({H}^\eta)^0}, {H}^\eta) - \overline F'_{\mathrm{Dir}}(\widetilde{v}^\delta \chi_{({H}^\eta)^0}, {H}^\eta)| \le \eta\,.
\]
\emph{Step 3.} Similarly to item (ii) above, we obtain $\widetilde{v}^\delta_n= u_0 \ast \psi_n $ in a neighborhood of ${\partial_D \Omega}$. Therefore, it is enough
to define $ {v}^\eta$ as $ {v}^\eta := \widetilde{v}^\delta_n + u_0 - u_0 \ast \psi_n $. Then by \eqref{eq: LLLL} and Step~2 we obtain $\bar{d}( v^\eta, v )\le C\eta$ and $\overline F'_{\mathrm{Dir}}( {v}^\eta, H^\eta) \le \overline F'_{\mathrm{Dir}}(v,H) + C\eta$ for $n$ sufficiently large.
\end{proof}
We now proceed with the proof of Lemma~\ref{lemma: vito approx} which relies strongly on \cite[Theorem~3.1]{CC17}. Another main ingredient is the following Korn-Poincar\'e inequality in $GSBD^p$, see \cite[Proposition~3]{CCF16}.
\begin{proposition}\label{prop:3CCF16}
Let $Q =(-r,r)^d$, $Q'=(-r/2, r/2)^d$, $u\in GSBD^p(Q)$, $p\in [1,\infty)$. Then there exist a Borel set $\omega\subset Q'$ and an affine function $a\colon \mathbb{R}^d\to\mathbb{R}^d$ with $e(a)=0$ such that $\mathcal{L}^d(\omega)\leq cr \mathcal{H}^{d-1}(J_u)$ and
\begin{equation}\label{prop3iCCF16}
\int_{Q'\setminus \omega}(|u-a|^{p}) ^{1^*} \, \mathrm{d} x\leq cr^{(p-1)1^*}\Bigg(\int_Q|e(u)|^p\, \mathrm{d} x\Bigg)^{1^*}\,.
\end{equation}
If additionally $p>1$, then there exists $q>0$ (depending on $p$ and $d$) such that, for a given mollifier $\varphi_r\in C_c^{\infty}(B_{r/4})$, $\varphi_r(x)=r^{-d}\varphi_1(x/r)$, the function $ w=u \chi_{Q'\setminus \omega}+a\chi_\omega$ obeys
\begin{equation}\label{prop3iiCCF16}
\int_{Q''}|e( w \ast \varphi_r)-e(u)\ast \varphi_r|^p\, \mathrm{d} x\leq c\left(\frac{\mathcal{H}^{d-1}(J_u)}{ r^{d-1} }\right)^q \int_Q|e(u)|^p\, \mathrm{d} x\,,
\end{equation}
where $Q''=(-r/4,r/4)^d$.
The constant in \eqref{prop3iCCF16} depends only on $p$ and $d$, the one in \eqref{prop3iiCCF16} also on $\varphi_1$.
\end{proposition}
\begin{proof}[Proof of Lemma~\ref{lemma: vito approx}]
We recall the definition of the hypercubes
\begin{equation*}
\begin{split}
q_z^k:=z+(-k^{-1},k^{-1})^d\,,\qquad{\tilde{q}}_z^k:= z+(-2k^{-1},2k^{-1})^d\,, \qquad Q_z^k:=z+(-5k^{-1},5k^{-1})^d\,,
\end{split}
\end{equation*}
where in addition to the notation in \eqref{eq: cube-notation}, we have also defined the hypercubes ${\tilde{q}}_z^k$. In contrast to \cite[Theorem~3.1]{CC17}, the cubes ${Q_z^k}$ have sidelength $10k^{-1}$ instead of $8k^{-1}$. This, however, does not affect the estimates.
We point out that at some points in \cite[Theorem~3.1]{CC17} cubes of the form $z+(-8k^{-1},8k^{-1})^d$ are used. By a slight alteration of the argument, however, it suffices to take cubes $Q^k_z$. In particular it is enough to show the inequality \cite[(3.19)]{CC17} for a cube $Q_j$ (of sidelength $10k^{-1}$) in place of $\widetilde{Q}_j$ (of sidelength $16k^{-1}$), which may be done by employing rigidity properties of affine functions.
Let us fix a smooth radial function $\varphi$ with compact support on the unit ball $B_1(0)\subset \mathbb{R}d$, and define $\varphi_k(x):=k^d\varphi(kx)$. We choose $\theta< (16c)^{-1}$, where $c$ is the constant in Proposition~\ref{prop:3CCF16} (cf.\ also \cite[Lemma~2.12]{CC17}). Recall \eqref{eq: well contained} and set
\begin{equation*}
\mathcal{N}'_k:=\{ z \in (2k^{-1}) \mathbb{Z}^d \colon {q_z^k} \cap (U)^k \setminus V \neq \emptyset \}\,.
\end{equation*}
We apply Proposition~\ref{prop:3CCF16} for $r = 4k^{-1}$, for any $z \in \mathcal{N}'_k$ by
taking $v$ as the reference function and $z+(-4k^{-1}, 4k^{-1})^d$ as $Q$ therein. (In the following, we may then use the bigger cube ${Q_z^k}$ in the estimates from above.) Then, there exist $\omega_z \subset {\tilde{q}}_z^k$ and $a_z\colon \mathbb{R}^d \to \mathbb{R}^d$ affine with $e(a_z)=0$ such that by \eqref{eq: cond1}, \eqref{prop3iCCF16}, and H\"older's inequality there holds
\begin{subequations}
\begin{equation}\label{1005171230}
\mathcal{L}^d(\omega_z)\leq 4 c k^{-1} \mathcal{H}^{d-1}(J_{v} \cap {Q_z^k}) \leq 4 c \theta k^{-d} \,,
\end{equation}
\begin{equation}\label{prop3iCCF16applicata}
\|v-a_z\|_{L^{p}({\tilde{q}}_z^k\setminus \omega_z)} \leq 4 ck^{-1} \|e(v)\|_{L^p({Q_z^k})}\,.
\end{equation}
Moreover, by \eqref{eq: cond1} and \eqref{prop3iiCCF16} there holds
\begin{equation*}
\begin{split}
\int_{{q_z^k}}|e(\hat{v}_z\ast \varphi_k)-e(v)\ast \varphi_k|^p\, \mathrm{d} x \leq c\left(\mathcal{H}^{d-1}(J_v \cap {Q_z^k})\,k^{d-1}\right)^q \int_{{Q_z^k}}|e(v)|^p\, \mathrm{d} x \leq c \theta^q \int_{{Q_z^k}}|e(v)|^p\, \mathrm{d} x
\end{split}
\end{equation*}
for $\hat{v}_z:= v\chi_{{\tilde{q}}_z^k\setminus \omega_z}+a_z \chi_{\omega_z}$ and a suitable $q>0$ depending on $p$ and $d$.
\end{subequations}
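For the reader's convenience, we indicate how \eqref{prop3iCCF16applicata} follows from \eqref{prop3iCCF16}; here $1^*$ denotes, as is customary, the Sobolev exponent $d/(d-1)$, so that $1-1/1^* = 1/d$. By H\"older's inequality, since $\mathcal{L}^d(Q')^{1/d} \le r$,
\begin{align*}
\int_{Q'\setminus \omega}|u-a|^{p} \, \mathrm{d} x \leq \mathcal{L}^d(Q')^{1/d}\Big(\int_{Q'\setminus \omega}(|u-a|^{p})^{1^*} \, \mathrm{d} x\Big)^{1/1^*} \leq c^{1/1^*}\, r^{p}\int_Q|e(u)|^p\, \mathrm{d} x\,,
\end{align*}
so that $\|u-a\|_{L^p(Q'\setminus\omega)} \le c' r \|e(u)\|_{L^p(Q)}$ with $c'$ depending only on $p$ and $d$; applied with $u=v$ and $r=4k^{-1}$, this gives \eqref{prop3iCCF16applicata} up to renaming the constant.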
Let us set
\begin{equation*}
\omega^k:= \bigcup\nolimits_{ z \in \mathcal{N}_k' } \, \omega_{z}\,.
\end{equation*}
We order (arbitrarily) the nodes $z \in \mathcal{N}_k'$, and denote the resulting family by $(z_j)_{j\in J}$. We define
\begin{equation}\label{eq:defappr1}
\widetilde{w}_k:=
\begin{cases}
v \quad &\text{in }\big(\bigcup_{z \in \mathcal{N}'_k} {Q_z^k} \big) \setminus \omega^k\,,\\
a_{z_j}\quad &\text{in }\omega_{z_j}\setminus \bigcup_{i<j}\omega_{z_i}\,,
\end{cases}
\end{equation}
and
\begin{equation}\label{eq:defapprox}
w_k:= \widetilde{w}_k \ast \varphi_k \quad \text{in }(U)^k \setminus V\,.
\end{equation}
We have that $w_k$ is smooth since $((U)^k \setminus V) + \mathrm{supp} \,\varphi_k \subset \bigcup_{z \in \mathcal{N}'_k} {\tilde{q}}_z^k \subset U $ (recall \eqref{eq: well contained}) and $ v|_{{\tilde{q}}_z^k \setminus \omega^k} \in L^p({\tilde{q}}_z^k \setminus \omega^k; \mathbb{R}^d )$ for any $z \in \mathcal{N}'_k$,
by \eqref{prop3iCCF16applicata}.
We define the sets $G^k_1:=\{ z \in \mathcal{N}'_k \colon \mathcal{H}^{d-1}(J_v \cap {Q_z^k})\leq k^{1/2 - d}\}$ and $G^k_2:= \mathcal{N}'_k \setminus G^k_1$. By $\widetilde{G}^k_1$ and $\widetilde{G}^k_2$, respectively, we denote their ``neighbors'', see \cite[(3.11)]{CC17} for the exact definition. We let
\begin{equation*}
\widetilde{\Omega}^k_{g,2}:= \bigcup\nolimits_{z \in \widetilde{G}^k_2} \, {Q_z^k}\,.
\end{equation*}
There holds (cf.\ \cite[(3.8), (3.9), (3.12)]{CC17})
\begin{equation}\label{eq: small vol--}
\lim_{k\to \infty} \big( \mathcal{L}^d(\omega^k) + \mathcal{L}^d(\widetilde{\Omega}^k_{g,2})\big) = 0\,.
\end{equation}
At this point, we notice that the set $E_k$ in \cite[(3.8)]{CC17} reduces to $\omega^k$ since in our situation all nodes are ``good'' (see \eqref{eq: cond1} and \cite[(3.2)]{CC17}) and therefore $\widetilde{\Omega}^k_b$ therein is empty.
The proof of (3.1a), (3.1d), (3.1b) in \cite[Theorem~3.1]{CC17} may be followed exactly, with the modifications described just above and the suitable slight change of notation. More precisely, by \cite[equation below (3.22)]{CC17} we obtain
\begin{equation}\label{2006191938}
\|w_k-v\|_{L^p( ((U)^k \setminus V) \setminus \omega^k)} \leq Ck^{-1} \|e(v)\|_{L^p(U)}\,,
\end{equation}
for a constant $C>0$ depending only on $d$ and $p$,
and \cite[equation before (3.26)]{CC17} gives
\begin{equation}\label{2006191940}
\int_{\omega^k} \psi(|w_k-v|) \, \mathrm{d} x \leq C \Big( \int_{\omega^k \cup \widetilde{\Omega}^k_{g,2}} \big(1+\psi(|v|)\big) \, \mathrm{d} x + k^{-1/2} \int_U \big( 1+\psi(|v|)\big) \, \mathrm{d} x + k^{-p}\int_U |e(v)|^p \,\, \mathrm{d} x \Big)\,,
\end{equation}
where $\psi(t) = t \wedge 1$. Combining \eqref{2006191938}-\eqref{2006191940}, using \eqref{eq: small vol--}, and recalling that $\psi$ is sublinear, we obtain \eqref{rough-dens-1}. Note that the sequence $R_k \to 0$ can be chosen independently of $v \in \mathcal{F}$ since $\psi(|v|) + |e(v)|^p$ is equiintegrable for $v \in \mathcal{F}$.
Moreover, recalling \eqref{eq:defappr1}-\eqref{eq:defapprox}, we sum \cite[(3.34)]{CC17} for $z=z_j \in \widetilde{G}^k_2$ and \cite[(3.35)]{CC17} for $z=z_j \in \widetilde{G}^k_1$ to obtain
$$\int_{(U)^k \setminus V} |e(w_k)|^p \, \, \mathrm{d} x \le \int_U |e(v)|^p \, \, \mathrm{d} x + Ck^{-q'/2} \int_U |e(v)|^p \, \, \mathrm{d} x + C\int_{\widetilde{\Omega}^k_{g,2}} |e(v)|^p \, \, \mathrm{d} x $$
for some $q' >0$. This along with \eqref{eq: small vol--} and the equiintegrability of $|e(v)|^p$ shows \eqref{rough-dens-2}.
\end{proof}
\end{appendices}
\end{document} |
\begin{document}
\title{Uniruledness of some moduli spaces of stable pointed curves}
\footnotetext{\noindent 2000 {\em Mathematics Subject Classification}.
14H10, 14H51.
\newline \noindent{{\em Keywords and phrases.} Pointed curve, Moduli space, Uniruledness, Hyperelliptic curves, Complete intersection surfaces.}}
\begin{abstract}
\noindent We prove uniruledness of some moduli spaces $\overline{\mathcal{M}}_{g,n}$ of stable curves of genus $g$ with $n$ marked points using linear systems on nonsingular projective surfaces containing the general curve of genus $g$. Precisely, we show that $\overline{\mathcal{M}}_{g,n}$ is uniruled for $g=12$ and $n \leq 5$, $g=13$ and $n \leq 3$, $g=15$ and $n \leq 2$.\\
We then prove that the pointed hyperelliptic locus $\mathcal{H}_{g,n}$ is uniruled for $g \geq 2$ and $n \leq 4g+4$.\\
In the last part we show that a nonsingular complete intersection surface does not carry a linear system containing the general curve of genus $g \geq 16$ and if it carries a linear system containing the general curve of genus $12 \leq g \leq 15$, then it is canonical.
\end{abstract}
\begin{section}{Introduction and overview of the strategy}
Let $\mathcal{M}_{g,n}$ be the (coarse) moduli space of smooth curves of genus $g$ with $n$ marked points defined over the complex numbers and $\overline{\mathcal{M}}_{g,n}$ its Deligne-Mumford compactification.\\
The birational geometry of the moduli spaces $\overline{\mathcal{M}}_{g,n}$ has been extensively studied in the last decade. In 2003 Logan (\cite{LO}) proved the existence of an integer $f(g)$ for all $4 \leq g \leq 23$ such that $\overline{\mathcal{M}}_{g,n}$ is of general type for $n \geq f(g)$. Logan's results were improved by Farkas in \cite{F1}, and by Farkas and Verra in \cite{FV1}.\\
On the other hand various results concerning rationality and unirationality in the range $2 \leq g \leq 12$ were proved by Bini and Fontanari (\cite{BF}), Casnati and Fontanari (\cite{CF}) and Ballico, Casnati and Fontanari (\cite{BCF}). Farkas and Verra (\cite{FV2}) advanced our knowledge in the case of negative Kodaira dimension by proving uniruledness for some $\overline{\mathcal{M}}_{g,n}$, $5 \leq g \leq 10$. Further results in the range $12 \leq g \leq 14$ were proved in \cite{BFV} and \cite{BV}.\\
The methods involved in all these works are not uniform. For example, the arguments used by Bini, Casnati, Fontanari and their coauthors sometimes rely on classical constructions and, in any case, do not use the computation of classes of divisors in $\overline{\mathcal{M}}_{g,n}$, which is the heart of Logan's method.\\
The following table sums up the results of the cited works, exhibiting what has been proved about rationality, unirationality and uniruledness properties for the $\overline{\mathcal{M}}_{g,n}$ and about their Kodaira dimension.\\
In particular in the above contributions it is proved that $\overline{\mathcal{M}}_{g,n}$ is rational for $0 \leq n \leq a(g)$, unirational for $0 \leq n \leq b(g)$, uniruled for $0 \leq n \leq \sigma(g)$ and has nonnegative Kodaira dimension for $n \geq \tau(g)$, where the values of $a$, $b$, $\sigma$ and $\tau$ are as in the table.\\
Note that for $g=4,5,6,7,11$ one has $\tau(g)=\sigma(g)+1$.
\begin{center}
{\scriptsize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\label{tabella}
$g$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 \\
\hline
$a(g)$ & 12 & 14 & 15 & 12 & 8 & & & & & & & & & & & & & & & \\
\hline
$b(g)$ & 12 & 14 & 15 & 12 & 15 & 11 & 8 & 9 & 3 & 10 & 1 & 0 & 2 & & & & & & & \\
\hline
$\sigma(g)$ & 12 & 14 & 15 & 14 & 15 & 13 & 12 & 10 & 9 & 10 & 1 & 0 & 2 & 0 & 0 & & & & & \\
\hline
$\tau(g)$ & & & 16 & 15 & 16 & 14 & 14 & 13 & 11 & 11& 11 & 11 & 10 & 10 & 9 & 9 & 9 & 7 & 6 & 4\\
\hline
\end{tabular}}
\end{center}
$\left.\right.$\\
$\left.\right.$\\
$\left.\right.$
With a little abuse of notation we allowed $n=0$ to refer to the moduli space $\overline{\mathcal{M}}_g$. In the sequel, when we write $\overline{\mathcal{M}}_{g,n}$, we will always suppose $n \geq 1$.\\
Let $g \geq 2$ be a fixed integer and suppose that there exists a nonsingular projective surface carrying a positive-dimensional non-isotrivial linear system containing the general curve of genus $g$. The idea which will be developed in the first part of this work is to use these linear systems to exhibit a rational curve on $\overline{\mathcal{M}}_{g,n}$ (for some $n$) passing through the general point, thus proving the uniruledness of the space.\\
As the table shows, the problem of determining for which pairs $(g,n)$ the space $\overline{\mathcal{M}}_{g,n}$ is uniruled is almost solved for $4 \leq g \leq 11$. In this range the question is still open only for four moduli spaces, namely $\overline{\mathcal{M}}_{8,13}$, $\overline{\mathcal{M}}_{9,11}$, $\overline{\mathcal{M}}_{9,12}$ and $\overline{\mathcal{M}}_{10,10}$.\\
On the other hand for $17 \leq g \leq 21$ even the Kodaira dimension of $\overline{\mathcal{M}}_g$ is unknown.\\
We will focus our attention on the remaining values $12 \leq g \leq 16$. In this range $\overline{\mathcal{M}}_{12,1}$, $\overline{\mathcal{M}}_{14,1}$ and $\overline{\mathcal{M}}_{14,2}$ were the only moduli spaces of stable pointed curves which were known to be uniruled (actually unirational).\\
Theorem \ref{equivalentimguniruledsistemalineare} guarantees that if $\overline{\mathcal{M}}_g$ ($g \geq 3$) is uniruled, then a positive-dimensional linear system containing the general curve of genus $g$ on a surface which is not irrational ruled must exist, but does not say anything about how to construct such a linear system.\\
Let us consider for example the case $g=16$. In \cite{CR3} Chang and Ran showed that the class of the canonical divisor $K_{\overline{\mathcal{M}}_{16}}$ is not pseudoeffective, thus proving that $\overline{\mathcal{M}}_{16}$ has Kodaira dimension $-\infty$.
Later it was proved in \cite{BDPP} that a projective variety has a pseudoeffective canonical bundle if and only if it is not uniruled. As a consequence, $\overline{\mathcal{M}}_{16}$ turned out to be uniruled, but the proof, as it is carried out, does not exhibit any linear system as above.\\
Similarly, Chang and Ran showed in \cite{CR2} that $\overline{\mathcal{M}}_{15}$ has Kodaira dimension $-\infty$ by exhibiting a nef curve having negative intersection number with the canonical bundle.\\
This result was then improved in \cite{BV} by Bruno and Verra, who showed the rational connectedness of $\overline{\mathcal{M}}_{15}$ by explicitly constructing a rational curve passing through two general points of the space. In their article a positive-dimensional linear system on a nonsingular canonical surface containing the general curve of genus $15$ is constructed.\\
With a little work, linear systems containing the general curve of genus $12$ and $13$ can be extracted from \cite{V} and \cite{CR}, respectively. The key point here is to show that they can be realized on \emph{nonsingular} projective surfaces.\\
The case $g=14$ can also be handled using \cite{V}, but in this case our method does not improve the known results.\\
Our main theorem is the following
\begin{teo}
\label{maintheorem}
The moduli space $\overline{\mathcal{M}}_{g,n}$ is uniruled for $g=12$ and $n \leq 5$, $g=13$ and $n \leq 3$, $g=15$ and $n \leq 2$.
\end{teo}
The argument used for the proof (Theorem \ref{mgnuniruled}) is in principle generalizable to check uniruledness for various loci inside $\overline{\mathcal{M}}_{g,n}$. As an example we prove the following statement about pointed hyperelliptic loci $\mathcal{H}_{g,n} \subseteq \mathcal{M}_{g,n}$, i.e.\ the loci of points $[(C,p_1,...,p_n)]$ such that $C$ is hyperelliptic:
\begin{prop2}
The $n$-pointed hyperelliptic locus $\mathcal{H}_{g,n}$ is uniruled for all $g \geq 2$ and $n \leq 4g + 4$.
\end{prop2}
In the particular case $g=2$ one has $\mathcal{H}_{2,n}=\mathcal{M}_{2,n}$, thus Proposition \ref{hgn} proves the uniruledness of $\mathcal{M}_{2,n}$ for $n=1,...,12$. Actually $\mathcal{M}_{2,n}$ is known to be rational for $n=1,...,12$ (see \cite{CF}).
Moreover, recently Casnati has proved the rationality of the pointed hyperelliptic loci $\mathcal{H}_{g,n}$ for $g \geq 3$ and $n \leq 2g+8$ (see \cite{CA}).\\ \\
For reasons which will be outlined in the sequel, the natural question about a possible sharpening of these results led us to the study of linear systems containing the general curve of genus $g$ on nonsingular complete intersection surfaces. The following result sums up our conclusions:
\begin{teo2}
Let $C$ be a general curve of genus $g \geq 3$ moving in a positive-dimensional linear system on a nonsingular projective surface $S$ which is a complete intersection. Then $g \leq 15$. Moreover
\begin{itemize}
\item if $g \leq 11$, then $\emph{Kod}(S) \leq 0$ or $S$ is a canonical surface;
\item if $12 \leq g \leq 15$, then $S$ is a canonical surface.
\end{itemize}
\end{teo2}
\end{section}
\begin{section}{Background material}
\label{background}
We work over the field of complex numbers.\\
Let $X$ be a projective nonsingular surface. Consider a fibration $f:X \rightarrow \mathbb{P}^1$ whose general element is a nonsingular connected curve of genus $g \geq 2$, and $n$ disjoint sections $\sigma_i:\mathbb{P}^1 \rightarrow X$ i.e. morphisms such that $f \circ \sigma_i(y)=y$ for all $y \in \mathbb{P}^1$, $i=1,...,n$. Let $E_i \doteq \sigma_i(\mathbb{P}^1)$. Then define the (n+1)-tuple $(f,E_1,...,E_n)$ as the fibration in $n$-pointed curves of genus $g$ whose fibre over $y$ for all $y \in \mathbb{P}^1$ is the $n$-pointed curve $(C,p_1,...,p_n)$, where $C=f^{-1}(y)$ and $p_i \doteq C \cap E_i$.\\
Let $\Psi_{(f,E_1,...,E_n)}: \mathbb{P}^1 \rightarrow \overline{\mathcal{M}}_{g,n}$ be the morphism given (at least on a nonempty open set) by
$$y \mapsto [(f^{-1}(y),p_1,...,p_n)]$$
where $[\left.\right.]$ denotes the isomorphism class of the pointed curve.\\
A fibration is called \emph{isotrivial} if all its nonsingular fibres are mutually isomorphic or, equivalently,
if two general nonsingular fibres are mutually isomorphic. If $f$ is non-isotrivial, then $\Psi_{(f,E_1,...,E_n)}$ defines a rational curve in $\overline{\mathcal{M}}_{g,n}$ passing through the points corresponding to (an open set of) the fibres of $(f,E_1,...,E_n)$.\\ \\
Let $C$ be a nonsingular projective curve of genus $g$ and let $r,d$ be two non-negative integers.
As standard in Brill-Noether theory, we will denote by $W^r_d(C)$ the subvariety of $\text{Pic}^d(C)$ constructed in \cite{ACGH}, having $\text{Supp}(W^r_d(C))=\left\{L \in \text{Pic}^d(C) \left| h^{0}(L) \geq r+1 \right. \right\}$.
We will denote by $\mathcal{W}^r_{g,d}$ the so-called \emph{universal Brill-Noether locus} i.e. the moduli space of pairs $(C,L)$ where $g(C)=g$ and $L \in W^r_d(C)$.\\
A notion which is fundamental for our purposes is that of curve with general moduli.
\begin{defn}
\label{generalcurve}
A connected projective nonsingular curve $C$ of genus $g$ has \emph{general moduli}, or is a \emph{general curve of genus $g$}, if it is a general fibre in a smooth projective family $\mathscr{C} \rightarrow V$ of curves of genus $g$, parameterized by a nonsingular connected algebraic scheme $V$, and having surjective Kodaira-Spencer map at each closed point $v \in V$.
\end{defn}
In the sequel, when we talk about a general curve, we will always assume it to be connected and nonsingular.\\
Let $D$ be a divisor on $C$ and consider the \emph{Petri map} $\mu_0(D):H^{0}(\mathcal{O}_C(D)) \otimes H^{0}(\mathcal{O}_C(K_C-D)) \rightarrow H^{0}(\mathcal{O}_C(K_C))$ given by the cup-product. If $C$ has general moduli, then $\mu_0(D)$ is injective for any $D$ (this is the Gieseker-Petri theorem). As an easy consequence we obtain the following
\begin{prop}[\cite{AC}, Corollary 5.7]
\label{proprietàcurvemoduligenerali}
Let $C$ be a general curve of genus $g \geq 3$. Then $H^{1}(L^{\otimes 2})=(0)$ for any invertible sheaf $L$ on $C$ such that $h^{0}(L) \geq 2$.
\end{prop}
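Let us sketch the standard derivation, under the simplifying assumption that some pencil $V \subseteq H^{0}(L)$ can be chosen base-point-free. Writing $L=\mathcal{O}_C(D)$, Serre duality gives
$$H^{1}(L^{\otimes 2})^{\vee} \cong H^{0}(\mathcal{O}_C(K_C-2D))\,,$$
and by the base-point-free pencil trick the kernel of the restriction of $\mu_0(D)$ to $V \otimes H^{0}(\mathcal{O}_C(K_C-D))$ is isomorphic to $H^{0}(\mathcal{O}_C(K_C-2D))$. Since $C$ has general moduli, $\mu_0(D)$ is injective, hence $H^{0}(\mathcal{O}_C(K_C-2D))=(0)$, i.e. $H^{1}(L^{\otimes 2})=(0)$.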
Coherently with Definition \ref{generalcurve}, we will adopt the following definition:
\begin{defn}
\label{generalcurveonlinearsystem}
Let $S$ be a projective nonsingular surface, and $C \subset S$ a projective nonsingular connected curve of genus $g \geq 2$. Let $r \doteq \dim(|C|)$. We say that $C$ is a \emph{general curve moving in an $r$-dimensional linear system on $S$} if there is a pointed connected nonsingular algebraic scheme $(V,v)$ and a commutative diagram as follows:
\begin{equation}
\label{famigliadisuperfici}
\xymatrix{C \ar@{^{(}->}[r] \ar[d] & \mathscr{C} \ar@{^{(}->}[r] \ar[d] & \mathscr{S} \ar[ld]^{\beta} \\ \emph{Spec}(k) \ar[r]^{v} & V & }
\end{equation}
such that
\begin{itemize}
\item[(i)] the left square is a smooth family of deformations of $C$ having surjective Kodaira-Spencer map at every closed point;
\item[(ii)] $\beta$ is a smooth family of projective surfaces and the upper right inclusion restricts over $v$ to the inclusion $C \subset S$;
\item[(iii)] $\dim(|\mathscr{C}(w)|) \geq r$ on the surface $\mathscr{S}(w)$ for all closed points $w \in V$.
\end{itemize}
\end{defn}
Let $S$ be a projective nonsingular surface and let $C \subset S$ be a projective nonsingular connected curve of genus $g$ such that
$\dim(|C|) \geq 1$.
Consider a linear pencil $\Lambda$ contained in $|C|$ whose general member is nonsingular and let $\epsilon:X \rightarrow S$ be the blow-up at its base points (including the infinitely near ones). We obtain a fibration $f:X \rightarrow \mathbb{P}^1$
defined as the composition of $\epsilon$ with the rational map $S \dashrightarrow \mathbb{P}^1$ defined by $\Lambda$. We will call $f$ the \emph{fibration defined by the pencil $\Lambda$}.
\begin{prop}[\cite{S4}, Proposition 4.7]
\label{fibrazioneisotrivialesuperficierigata}
Let $C$ be a general curve of genus $g \geq 3$ moving in a positive-dimensional linear system on a nonsingular projective surface $S$, and let $\Lambda \subset |C|$ be a linear pencil containing $C$ as a member. Then the following are equivalent:
\begin{itemize}
\item[(i)] $\Lambda$ defines an isotrivial fibration;
\item[(ii)] $S$ is birationally equivalent to $C \times \mathbb{P}^1$;
\item[(iii)] $S$ is a non-rational birationally ruled surface.
\end{itemize}
\end{prop}
In the sequel we will say that a linear system is \emph{isotrivial} if a general pencil in it defines an isotrivial fibration.\\
As a corollary of Proposition \ref{fibrazioneisotrivialesuperficierigata} we obtain the following well-known explicit condition for the
uniruledness of $\overline{\mathcal{M}}_g$.
\begin{teo}[\cite{S4}, Theorem 4.9]
\label{equivalentimguniruledsistemalineare}
The following conditions are equivalent for an integer $g \geq 3$:
\begin{itemize}
\item $\overline{\mathcal{M}}_g$ is uniruled;
\item A general curve $C$ of genus $g$ moves in a positive-dimensional linear system on some nonsingular projective surface which is not irrational ruled.
\end{itemize}
\end{teo}
Motivated by this theorem, Sernesi (\cite{S4}) used deformation theory of fibrations over $\mathbb{P}^1$ to work out conditions on a nonsingular projective surface to carry a positive-dimensional non-isotrivial linear system containing the general curve of genus $g$.\\
The following result shows that general curves on irregular surfaces of positive geometric genus cannot move in a positive-dimensional linear
system.
\begin{teo}[\cite{S4}, Theorem 6.5]
\label{sirregolare}
Let $S$ be a projective nonsingular surface with $p_g > 0$ and $q > 0$, and let $C \subset S$ be a nonsingular curve of genus $g \geq 3$. Then $C$ cannot be a general curve moving in a positive-dimensional linear system on $S$.
\end{teo}
Moreover, the following inequality involving the fundamental invariants of the surface has to be satisfied:
\begin{teo}[\cite{S4}, Theorem 5.2]
\label{disuguaglianzafibrazionefree}
Let $C$ be a general curve of genus $g \geq 3$ moving in a positive-dimensional non-isotrivial linear system on a nonsingular projective surface $S$. Then
\begin{equation}
\label{5g-111chios}
5(g-1)-\left[11\chi(\mathcal{O}_S)-2K^2_S+2C^2\right]+h^{0}(\mathcal{O}_S(C)) \leq 0.
\end{equation}
\end{teo}
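As a quick consistency check, not needed in the sequel, one may evaluate \eqref{5g-111chios} on a smooth primitively polarized K3 surface $(S,L)$ of genus $g$ with $C \in |L|$: there $\chi(\mathcal{O}_S)=2$, $K_S^2=0$, $C^2=2g-2$ and $h^{0}(\mathcal{O}_S(C))=g+1$, so the left-hand side of \eqref{5g-111chios} becomes
$$5(g-1)-\left[22-0+2(2g-2)\right]+(g+1)=2g-22\,,$$
and the inequality reads $g \leq 11$, coherently with Theorem \ref{cg} below.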
In the same work a systematical analysis gives upper bounds for the genus of $C$ on $S$ according to the Kodaira dimension of $S$.\\
It turns out that, up to a technical assumption contained in Theorem \ref{superficierazionaleggeq11}, a general curve of genus $g \geq 12$ cannot move in a positive-dimensional non-isotrivial linear system on $S$ if the Kodaira dimension of $S$ is $\leq 0$.\\
Indeed if $\text{Kod}(S) \geq 0$ we have the following theorem:
\begin{teo}[\cite{S4}, Theorem 6.2]
\label{kod=0leq11}
Let $S$ be a projective nonsingular surface with Kodaira dimension $\geq 0$, and let $C \subset S$ be a general curve of genus $g \geq 2$ moving in a positive-dimensional linear system. Then
$$g \leq 5p_g(S)+6+\frac{1}{2}h^{0}(\mathcal{O}_S(K_S-C)).$$
In particular
$$g \leq \left\{\begin{array}{cc}
6, & p_g=0, \\
11, & p_g=1.
\end{array}\right.$$
\end{teo}
Turning to the case $\text{Kod}(S)<0$, Proposition \ref{fibrazioneisotrivialesuperficierigata} tells us that $S$ cannot be a non-rational ruled surface, hence we have only to examine what happens if $S$ is rational.\\
If we have a family (\ref{famigliadisuperfici}) where one fibre of $\beta$ is a rational surface, then all fibres are rational, and by suitably blowing-up sections of $\beta$ (after possibly performing a base change) we can always replace (\ref{famigliadisuperfici}) with a similar family where all fibres of $\beta$ are blow-ups of $\mathbb{P}^2$. Therefore one can always reduce to the situation where there is a birational morphism $\sigma:S \rightarrow \mathbb{P}^2$.
\begin{teo}[\cite{V2}]
\label{superficierazionaleggeq11}
Let $C$ be a general curve of genus $g \geq 2$ moving in a positive-dimensional linear system on a rational surface $S$ such that there is a birational morphism $\sigma:S \rightarrow \mathbb{P}^2$. Suppose that
\begin{itemize}
\item[$(*)$] $|C|$ is mapped by $\sigma$ to a regular linear system of plane curves whose singular points are in general position.
\end{itemize}
Then $g \leq 10$.
\end{teo}
For an extensive discussion of the problems and conjectures related to $(*)$, whose first study dates back to Segre (\cite{SE}), see \cite{V2}, Section 2.2.\\
According to the Enriques-Kodaira birational classification of projective nonsingular surfaces, a surface $S$ with Kodaira dimension $0$ cannot have $p_g(S) \geq 2$, hence, at least under assumption $(*)$, surfaces containing general curves of genus $g \geq 12$ moving in positive-dimensional non-isotrivial linear systems must have Kodaira dimension $\geq 1$.\\ \\
The argument used to prove our main theorem is quite general and can be used to recover the great part of the known results about uniruledness of the $\overline{\mathcal{M}}_{g,n}$. As an example we will use linear systems of curves on $K3$ surfaces.\\
Consider a primitive globally generated and ample line bundle $L$ on a smooth K3 surface $S$. Then the general member of $|L|$ is a smooth irreducible curve (see \cite{SA}). If $g$ is the genus of that curve,
we say that $(S,L)$ is a \emph{smooth primitively polarized K3 surface of genus $g$}. One has $L^2=2g-2$ and $\dim|L|=g$.\\
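The stated numerical facts follow from adjunction and Riemann-Roch on $S$: since $K_S=0$, the genus formula gives $2g-2 = C\cdot(C+K_S)=L^2$ for $C \in |L|$ nonsingular, while Kodaira vanishing (applicable as $L=K_S \otimes L$ is ample) yields $h^{i}(L)=0$ for $i>0$, so that
$$h^{0}(L)=\chi(L)=\chi(\mathcal{O}_S)+\tfrac{1}{2}\,L\cdot(L-K_S)=2+\tfrac{1}{2}L^2=g+1\,,$$
whence $\dim|L|=g$.\\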
Assume $g \geq 3$. If $\text{Pic}(S) \cong \mathbb{Z}[L]$, then $L$ is very ample on $S$, hence it embeds $S$ as a smooth projective surface in $\mathbb{P}^g$, whose smooth hyperplane section is a canonical curve (see \cite{SA}).\\
Let $\mathcal{KC}_g$ be the moduli stack parameterizing pairs $(S,C)$, where $S$ is a smooth K3 surface with a primitive polarization $L$ of genus $g$ and $C \in |L|$ a smooth irreducible curve of genus $g$. Let $\mathscr{M}_g$ be the moduli stack of smooth curves of genus $g$, and consider the morphism of stacks
$$c_g:\mathcal{KC}_g \rightarrow \mathscr{M}_g$$
defined as $c_g((S,C))=[C]$.
\begin{teo}[\cite{M1} and \cite{CU}, Proposition 2.2]
\label{cg}
With notation as above, $c_g$ is dominant if and only if $g \leq 9$ or $g=11$.
\end{teo}
\end{section}
\begin{section}{Proof of the main theorem}
\label{unirulednessofsomemodulispaces}
To prove Theorem \ref{maintheorem} we state a general result (Theorem \ref{mgnuniruled}) and then apply it to linear systems containing the general curve of genus $12,13$ and $15$, respectively.
\begin{lem}[\cite{FL}, Lemma 2.7]
\label{cd-1ampio}
Let $C$ be a nonsingular projective curve. For all $d \geq 1$, $x \in C$, denote by $C_{{(d-1)},x} \subset C_{(d)}$ the divisor of unordered $d$-tuples containing $x$. Then $C_{{(d-1)},x}$ is ample in $C_{(d)}$.
\end{lem}
\begin{teo}
\label{mgnuniruled}
Let $C$ be a general curve of genus $g \geq 2$ moving in a non-isotrivial $(r+1)$-dimensional linear system ($r \geq 0$) on a nonsingular projective regular surface $S$, with a deformation $(\mathscr{S},\mathscr{C}) \rightarrow V$ of the pair $(S,C)$ given as in Definition \ref{generalcurveonlinearsystem}.\\
Let $(\mathscr{C},\mathcal{O}_{\mathscr{C}}(\mathscr{C})) \rightarrow V$ be the induced family of deformations of the pair $(C,\mathcal{O}_C(C))$, let $C^2=d$ and suppose that the morphism
$$\alpha: V \rightarrow \mathcal{W}^{r}_{g,d}$$
$$w \mapsto [(\mathscr{C}(w),\mathcal{O}_{\mathscr{C}(w)}(\mathscr{C}(w)))]$$
is dominant.\\
Then $\overline{\mathcal{M}}_{g,n}$ is uniruled for $n \leq r + \rho$, where $\rho=\rho(g,r,d)$ is the Brill-Noether number.
\end{teo}
\begin{proof}
If $r+\rho=0$ there is nothing to prove, hence suppose that $r+\rho \geq 1$.\\
Let $[C]$ be a general point in $\overline{\mathcal{M}}_{g}$ and define $C^r_{(d)}$ to be the inverse image of $W^{r}_{d}(C)$ under the Abel map $C_{(d)} \rightarrow \text{Pic}^d(C)$. We first want to show that there is a $d$-tuple in $C^r_{(d)}$ containing $r+\rho$ general points $q_1,...,q_{r+\rho}$.\\
Since $C_{(d-1),q_i}$ is ample for all $i=1,...,r+\rho$ by Lemma \ref{cd-1ampio}, it intersects every subvariety of dimension $h$ in a nonempty variety of dimension $\geq h-1$ by the Nakai-Moishezon criterion. Since $C$ has general moduli, $C^r_{(d)}$ has dimension $r+\rho$ and $C_{(d-1),q_1} \cap C^r_{(d)}$ has dimension at least $r+\rho-1$.
Iterating the above argument, if $r+\rho \geq 2$, one then has that
$$\dim \left(C_{(d-1),q_1} \cap C_{(d-1),q_2} \cap \dots \cap C_{(d-1),q_{r+\rho}} \cap C^r_{(d)}\right) \geq 0$$
hence there is a $d$-tuple in $C^r_{(d)}$ containing the points $q_1,...,q_{r+\rho}$.\\
Since by assumption the map $\alpha$ is dominant and the points $q_i$ are general, there is a point $w \in V$ such that $\mathscr{C}(w) \cong C$ and there is a divisor $D \in |\mathcal{O}_{\mathscr{C}(w)}(\mathscr{C}(w))|$ containing points $r_1,...,r_n$, $n \leq r+\rho$, such that $(C,q_1,...,q_n) \cong (\mathscr{C}(w),r_1,...,r_n)$ as pointed curves.\\
Since $h^1(\mathcal{O}_{\mathscr{S}(w)})=0$, the linear system $|\mathscr{C}(w)|$ cuts out on $\mathscr{C}(w)$ the complete linear series. This means that there exists a non-isotrivial linear pencil $\mathcal{P} \subset |\mathscr{C}(w)|$ whose curves cut out the divisor $D$ on $\mathscr{C}(w)$. Now blow up the base points of $\mathcal{P}$ and define $E_i$ as the exceptional divisor over $r_i$ for $i=1,...,n$. This gives a non-isotrivial fibration $(f,E_1,...,E_n)$ over $\mathbb{P}^1$ of $n$-pointed curves of genus $g$, having a curve isomorphic to $(C,q_1,...,q_n)$ among its fibres. Hence there is a rational curve in $\overline{\mathcal{M}}_{g,n}$ passing through the general point $[(C,q_1,...,q_n)]$ of $\overline{\mathcal{M}}_{g,n}$.\\
\end{proof}
\begin{oss}
\label{casorho=0}
If one does not assume the map $\alpha$ to be dominant, Theorem \ref{mgnuniruled} remains true for $n \leq r$. The proof is immediate: since $\dim(|C|)=r+1$, there is always a linear pencil of curves in $|C|$ passing through $r$ points of $C$.\\
\end{oss}
\begin{oss}
\label{casod=r+rho}
The fact that there is a $d$-tuple of points in $C^r_{(d)}$ containing the points $q_1,...,q_{r+\rho}$ is immediate if $d=r+\rho$, since $C_{(d)}$ is irreducible and $C^r_{(d)}$ and $C_{(d)}$ have the same dimension.\\
\end{oss}
\begin{oss}
\label{ggeq7regularredundant}
If $g \geq 7$, the assumption that the surface $S$ is regular is redundant and can consequently be dropped. Indeed, suppose $q=q(S)>0$. If also $p_g >0$, then $C$ cannot have general moduli by Theorem \ref{sirregolare}. If $p_g=0$, there are two possibilities. If $\text{Kod}(S) <0$, then $S$ is a non-rational ruled surface, thus the linear system $|C|$ is isotrivial by Proposition \ref{fibrazioneisotrivialesuperficierigata}. If $\text{Kod}(S) \geq 0$, then $g \leq 6$ by Theorem \ref{kod=0leq11}, a contradiction.\\
\end{oss}
For the sake of simplicity, in the sequel we will sometimes denote by the same symbol $C$ both a general abstract curve of genus $g$ and a projective model of this curve, to be specified in each situation.
\begin{prop}
\label{m125uniruled}
$\overline{\mathcal{M}}_{12,n}$ is uniruled for $n=1,...,5$.
\end{prop}
\begin{proof}
In \cite{V} the author constructs a reducible nodal curve $D_1 \cup D_2$ of arithmetic genus 12 and degree 17, contained in the complete intersection $D_1 \cup D_2 \cup D$ of five quadrics in $\mathbb{P}^6$ (itself a reducible nodal curve), and shows that $D_1 \cup D_2$ can be deformed to a general curve $C$ of genus 12. Our first purpose is to show that $C$ is contained in a nonsingular surface which is the complete intersection of four quadrics in $\mathbb{P}^6$.\\
By Lemma 7.6 of \cite{V} one has $h^1(\mathcal{I}_{D_1 \cup D_2}(2))=0$, hence $h^{1}(\mathcal{I}_{C}(2))=0$ by the upper semicontinuity of the cohomology.\\
Since $h^{0}(\mathcal{O}_{\mathbb{P}^6}(2))=28$ and $h^{0}(\mathcal{O}_C(2))=23$, the cohomology sequence associated to the exact sequence
$$0 \rightarrow \mathcal{I}_{C}(2) \rightarrow \mathcal{O}_{\mathbb{P}^6}(2) \rightarrow \mathcal{O}_C(2) \rightarrow 0$$
gives $h^{0}(\mathcal{I}_{C}(2))=5$, hence $C$ is contained in the complete intersection $C \cup C'$ of five quadrics $Q_1,Q_2,...,Q_5$, which we assume to be general quadrics containing $C$.\\
Since $D_1 \cup D_2 \cup D$ is nodal, $C \cup C'$ is nodal too.\\
Note that the base locus of $|\mathcal{I}_C(2)|$ is $C \cup C'$, so by Bertini's theorem each $Q_i$ is smooth outside of $C \cup C'$. On the other hand, since for all $p \in C \cup C'$ the tangent space $T_{C \cup C',p}=\bigcap_{i=1}^{5}T_{Q_i,p}$ has dimension 1 or 2, each $Q_i$ is smooth at $p$, hence the general quadric containing $C$ is nonsingular. For the same reason at least four quadrics among the $Q_i$, say $Q_1,...,Q_4$, have to intersect transversally at a fixed point $p \in C \cup C'$, and hence at every point of a nonempty open set of $C \cup C'$, say $(C \cup C') \smallsetminus \left\{p_1,...,p_k\right\}$.\\
Take now $k$ 4-tuples of general quadrics containing $C$, say $(Q^j_1,...,Q^j_4)$, $j=1,...,k$, intersecting transversally, respectively, at $p_j$, and let $s_i$, $i=1,...,4$ (respectively $s^j_i$, $j=1,...,k$) $\in H^{0}(\mathcal{I}_C(2))$ be a section whose zero locus is $Q_i$ (respectively $Q^j_i$). The four quadrics $\widetilde{Q}_i$ defined as the zero loci of the sections $s_i+s^1_i+...+s^k_i \in H^{0}(\mathcal{I}_C(2))$ are four general quadrics containing $C$ and intersecting transversally at every point of $C \cup C'$, hence $S \doteq \bigcap_{i=1}^{4}\widetilde{Q}_i$ is a nonsingular surface containing $C$.\\
A scheme $V$ parameterizing a deformation $\left(\mathscr{C},\mathscr{S}\right)$ as in Theorem \ref{mgnuniruled} can be easily constructed as follows. Consider the Hilbert scheme of curves of arithmetic genus $12$ and degree $17$ in $\mathbb{P}^6$ and let $\mathscr{H}$ be the open set of smooth curves in the irreducible component containing the general curve.
Let $W \rightarrow \mathscr{H}$ be the bundle whose fibre over the point $[C]$ is the fourfold product $\mathbb{P}\left(H^{0}(\mathcal{I}_C(2))\right) \times \dots \times \mathbb{P}\left(H^{0}(\mathcal{I}_C(2))\right)$. We can take $V$ to be the open set of $W$ such that the points of the fibres of the restricted projection $V \rightarrow \mathscr{H}$ correspond to nonsingular surfaces.\\
Since $S$ is canonical, the adjunction formula gives
\begin{equation}
\label{relazionecanonica}
\mathcal{O}_C(1) \cong \omega_C(-C)
\end{equation}
hence the Riemann-Roch theorem applied to $\mathcal{O}_C(1)$ and Serre duality give $h^{0}(\mathcal{O}_C(C))=1$, which equals $\dim(|C|)$. The linear pencil $|C|$ is non-isotrivial by Proposition \ref{fibrazioneisotrivialesuperficierigata}.\\
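The dimension count can be made explicit: since $C$ has general moduli and is embedded by a $g^6_{17}$, one has $h^{0}(\mathcal{O}_C(1))=7$; relation (\ref{relazionecanonica}) gives $\deg \mathcal{O}_C(C)=2g-2-17=5$ and, by Serre duality, $h^{1}(\mathcal{O}_C(C))=h^{0}(\mathcal{O}_C(1))=7$, so that Riemann-Roch yields
$$h^{0}(\mathcal{O}_C(C))=\deg \mathcal{O}_C(C)+1-g+h^{1}(\mathcal{O}_C(C))=5+1-12+7=1.$$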
Let $h$ be the $g^6_{17}$ embedding the curve $C$ in $\mathbb{P}^6$. Since, for small deformations of the pair $(C,h)$ in the Hilbert scheme, the linear series $h$ remains very ample and the Petri map remains injective, it follows that $h=|\mathcal{O}_C(1)|$ is a general $g^6_{17}$ i.e. a general point in $W^6_{17}(C)$. Since a complete linear series and its residual have the same Brill-Noether number, relation (\ref{relazionecanonica}) implies that the linear series $|\mathcal{O}_C(C)|$ is a general $g^0_{5}$
and so the map
$$\alpha: V \rightarrow \mathcal{W}^0_{12,5}$$
is dominant.\\
The Brill-Noether number of $|\mathcal{O}_C(C)|$ is $\rho=\rho(12,0,5)=5$. By Theorem \ref{mgnuniruled} the statement follows.
\end{proof}
\begin{prop}
\label{m133uniruled}
$\overline{\mathcal{M}}_{13,n}$ is uniruled for $n=1,2,3$.
\end{prop}
\begin{proof}
In \cite{CR} the authors show that the general curve $C$ of genus 13 can be embedded as a non-degenerate curve of degree 13 in $\mathbb{P}^3$.\\
From the cohomology sequence associated to the exact sequence
$$0 \rightarrow \mathcal{I}_{C}(5) \rightarrow \mathcal{O}_{\mathbb{P}^3}(5) \rightarrow \mathcal{O}_C(5) \rightarrow 0$$
one gets $h^{0}(\mathcal{I}_{C}(5))=3$ since $C$ is of maximal rank by \cite{CR}, Theorem 1. One first has to prove that the generic quintic surface containing $C$ is nonsingular. This is done by explicitly constructing a specialization $C''$ of $C$ lying on a nonsingular quintic and such that $h^{0}(\mathcal{I}_{C''}(5))=h^{0}(\mathcal{I}_{C}(5))$.\\
Let $D$ be a nonsingular rational curve of degree $8$ in $\mathbb{P}^3$ and let $F_4$ be a nonsingular quartic surface containing it.
Consider the linear system $|\mathcal{I}_D(5)|$. One has of course $\text{Bs}|\mathcal{I}_D(5)|=E \subset F_4$, where $E$ is a 1-dimensional scheme. By Bertini's theorem the general quintic containing $D$ is smooth away from $E$. Consider the family of reducible quintics $F_4 \cup A$, where $A$ is a plane. Since for each $p \in E$ the plane $A$ can always be assumed not to contain $p$, there exists a quintic containing $D$ which is smooth at $p$, hence the same is true for the general quintic containing $D$. Now fix $p$ and take such a quintic, say $G$. Since $G$ is smooth at $p$, it will be smooth along a nonempty open set of $E$, say $E \smallsetminus \left\{p_1,...,p_k\right\}$. Take $k$ general quintics $G_1,...,G_k$ containing $D$ such that $G_i$ is smooth at $p_i$, and let $s$ (respectively $s_i$, $i=1,...,k$) $\in H^{0}(\mathcal{I}_D(5))$ be a section whose zero locus is $G$ (respectively $G_i$). The quintic defined as the zero locus of the section $s+s_1+...+s_k \in H^{0}(\mathcal{I}_D(5))$ is a general quintic containing $D$ which is smooth along $E$.\\
We can then conclude that the general quintic surface containing $D$ is nonsingular, hence the general element $C'$ of the linear system
$|5H-D|$ on $F_4$, where $H$ is a hyperplane section, is
cut by a nonsingular quintic $F_5$. Since $F_4$ is a K3 surface, $C'$ is a nonsingular irreducible curve of genus $10$ and degree $12$. On $F_5$, the curve $C'$ belongs to the linear system $|4H-D|$, where $H$ is again a hyperplane section. Take a general element $C'' \in |5H-C'|=|H+D|$ on $F_5$. One has $p_a(C'')=13$ and $\deg(C'')=13$. Since the linear subsystem $|H|+D$ consists of reducible curves whose singular points are not fixed, if the curve $C''$ is irreducible, then it is also nonsingular by Bertini's theorem. Since $h^{0}(\mathcal{I}_{C'}(5)) \geq 5$, one has $\dim |H+D| > \dim\left(|H|+D\right)$ and $C''$ is irreducible.\\
By \cite{IL}, Theorem 3.1, the Hilbert scheme $\mathscr{H}$ parameterizing smooth irreducible curves of genus $13$ and degree $13$ in $\mathbb{P}^3$ is irreducible, hence $C''$ must be a specialization of $C$.
By construction and \cite{R} one has $h^1(\mathcal{I}_{C''}(6-i))=h^1(\mathcal{I}_{C'}(i))=h^1(\mathcal{I}_D(5-i))$ for all $i \in \mathbb{Z}$.
Since the sheaf $\mathcal{I}_D$ is 1-regular in the sense of Castelnuovo-Mumford, it follows that $h^{0}(\mathcal{I}_{C''}(5))=h^{0}(\mathcal{I}_{C}(5))=3$.
But then, since the general quintic surface containing $C''$ is nonsingular, the same is true for the general quintic surface containing $C$.\\
The construction of a scheme $V$ parameterizing a deformation $\left(\mathscr{C},\mathscr{S}\right)$ as in Theorem \ref{mgnuniruled} is analogous to the one done in Proposition \ref{m125uniruled}.\\
Let $S$ be a nonsingular quintic surface containing $C$. Since $S$ is canonical, using (\ref{relazionecanonica}), Riemann-Roch on $\mathcal{O}_C(1)$ and Serre duality one obtains that $|C|$ is a linear system of dimension $3$ on $S$, which is non-isotrivial by Proposition \ref{fibrazioneisotrivialesuperficierigata}.\\
Arguing as in the previous proposition one has that the linear series $|\mathcal{O}_C(1)|$ is a general $g^3_{13}$, $|\mathcal{O}_C(C)|$ is a general $g^2_{11}$ and the map
$$\alpha: V \rightarrow \mathcal{W}^2_{13,11}$$
is dominant.\\
The Brill-Noether number of $|\mathcal{O}_C(C)|$ is $\rho=\rho(13,2,11)=1$. By Theorem \ref{mgnuniruled} the statement follows.
\end{proof}
\begin{prop}
\label{m152uniruled}
$\overline{\mathcal{M}}_{15,n}$ is uniruled for $n=1,2$.
\end{prop}
\begin{proof}
In \cite{BV} the authors show that the general curve $C$ of genus 15 can be embedded as a curve of degree 19 in a nonsingular projective regular surface $S$ which is the complete intersection of four quadrics in $\mathbb{P}^6$. $C$ defines on $S$ a non-isotrivial linear system of dimension $h^{0}(\mathcal{O}_C(C))=2$, in particular the linear series $|\mathcal{O}_C(C)|$ is a $g^1_9$.\\
In the article a family of curves $\mathcal{D}$ whose general element has the above property is considered and it is shown that the morphism
$$u:\mathcal{D} \rightarrow \mathcal{W}^1_{15,9}$$
$$C \mapsto [(C,\omega_C(-1))]$$
is dominant (cf. \cite{BV}, (2.9) p. 5).\\
The construction of a scheme $V$ parameterizing a deformation $\left(\mathscr{C},\mathscr{S}\right)$ as in Theorem \ref{mgnuniruled} is analogous to the one done in Proposition \ref{m125uniruled}.\\
Since $S$ is a canonical surface, for a general $C \in \mathcal{D}$ one has $\omega_C(-1) \cong \mathcal{O}_C(C)$ by adjunction, hence the morphism defined in Theorem \ref{mgnuniruled}
$$\alpha: V \rightarrow \mathcal{W}^{1}_{15,9}$$
is dominant.\\
The Brill-Noether number of the linear series $|\mathcal{O}_C(C)|$ is $\rho=\rho(15,1,9)=1$.\\
By Theorem \ref{mgnuniruled} the statement follows.
\end{proof}
Using Theorem \ref{mgnuniruled} one is able to recover most of the known results concerning the uniruledness of the moduli spaces $\overline{\mathcal{M}}_{g,n}$ by means of suitable linear systems of curves. Adam Logan has listed a few at the end of his paper \cite{LO}. To give just one example, one can consider linear systems on K3 surfaces.
\begin{prop}
\label{mgnk3}
$\overline{\mathcal{M}}_{g,n}$ is uniruled for $3 \leq g \leq 9$ or $g=11$ and $n=1,...,g-1$.
\end{prop}
\begin{proof}
Theorem \ref{cg} states that the hyperplane linear system $|H|$ of a general primitively polarized K3 surface $(S,H)$ of genus $g \geq 3$ contains the general curve $C$ of genus $g$ for $g \leq 9$ or $g=11$. One has $h^{0}(\mathcal{O}_C(C))=g$, thus Remark \ref{casorho=0} gives the statement.
\end{proof}
In particular one obtains that $\overline{\mathcal{M}}_{11,n}$ is uniruled for all $n \leq 10$, a result which is sharp (cf. the table).\\
By Theorem \ref{cg} the hyperplane linear system $|H|$ of a general primitively polarized K3 surface $(S,H)$ of genus $10$ does not contain the general curve of genus $10$, thus Theorem \ref{mgnuniruled} cannot be applied to $|H|$.\\
Nevertheless, one can use a result contained in \cite{FKPS} to work out a linear system on which Theorem \ref{mgnuniruled} can be applied. One has the following
\begin{prop}
\label{m10n}
$\overline{\mathcal{M}}_{10,n}$ is uniruled for $n=1,...,7$.
\end{prop}
\begin{proof}
Consider a general primitively polarized K3 surface $(S,H)$ of genus 11. By \cite{FKPS}, Section 5, the normalization of the general nodal curve in $|H|$ (having arithmetic genus $p_a(H)=11$) is a general curve of genus $10$. \\
Consider the linear subsystem of curves of $|H|$ having an ordinary node at a general point $p$ of $S$. Blowing up the surface $S$ at $p$ one obtains a linear system $|C|$ whose general element is a general curve of genus $10$, having dimension $\dim|H|-3=8$. Remark \ref{casorho=0} gives the statement.
\end{proof}
\end{section}
\begin{section}{Uniruledness of some pointed hyperelliptic loci $\mathcal{H}_{g,n}$}
The proof of Theorem \ref{mgnuniruled} can in principle be adapted to study the uniruledness of various loci in $\overline{\mathcal{M}}_{g,n}$. The starting point will be to exhibit a positive-dimensional non-isotrivial linear system containing a curve corresponding to the general point of the image of the locus under the morphism to $\overline{\mathcal{M}}_g$ forgetting the marked points.\\
As an example of application of this principle, in the sequel we prove the uniruledness of the pointed hyperelliptic loci $\mathcal{H}_{g,n}$ for all $g \geq 2$ and $n \leq 4g+4$.\\
For each integer $g \geq 2$ consider the linear system $|\mathcal{O}(g+1,2)|$ on $\mathbb{P}^1 \times \mathbb{P}^1$. The general element of this system is a general hyperelliptic curve $C$ of genus $g$, whose $g^1_2$ is given by the projection onto the first factor.
Note that, since there is no nonsingular rational curve $E$ such that $E \cdot C=1$, the dimension of $|C|$ is the maximal dimension permitted by \cite{CC}, Theorem 1.1. This is consistent with the fact that linear systems of maximal dimension with respect to a fixed $g$ must be systems of hyperelliptic curves and dominate the hyperelliptic locus (see \cite{CC}, Proposition 2.3).
\begin{prop}
\label{hgn}
The $n$-pointed hyperelliptic locus $\mathcal{H}_{g,n}$ is uniruled for all $g \geq 2$ and $n \leq 4g+4$.
\end{prop}
\begin{proof}
Let $C$ be a general curve in $|\mathcal{O}(g+1,2)|$. Since $\mathbb{P}^1 \times \mathbb{P}^1$ is regular, it is sufficient to show that there exists a divisor in $|\mathcal{O}_C(C)|$ containing $n$ general points, $n \leq 4g+4$. Since the linear system is non-isotrivial, the statement will then follow from a straightforward adaptation of the proof of Theorem \ref{mgnuniruled}.\\
The cohomology sequence associated to the exact sequence
$$0 \rightarrow T_{\mathbb{P}^1 \times \mathbb{P}^1}(-C) \rightarrow T_{\mathbb{P}^1 \times \mathbb{P}^1} \rightarrow {T_{\mathbb{P}^1 \times \mathbb{P}^1}}_{|C} \rightarrow 0$$
gives $h^{0}({T_{\mathbb{P}^1 \times \mathbb{P}^1}}_{|C})=6+g$.\\
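This computation can be sketched via the Künneth formula: since $T_{\mathbb{P}^1 \times \mathbb{P}^1} \cong \mathcal{O}(2,0) \oplus \mathcal{O}(0,2)$ and $C \in |\mathcal{O}(g+1,2)|$, one has
$$T_{\mathbb{P}^1 \times \mathbb{P}^1}(-C) \cong \mathcal{O}(1-g,-2) \oplus \mathcal{O}(-g-1,0),$$
whence $h^{0}(T_{\mathbb{P}^1 \times \mathbb{P}^1}(-C))=0$ and $h^{1}(T_{\mathbb{P}^1 \times \mathbb{P}^1}(-C))=h^{1}(\mathcal{O}_{\mathbb{P}^1}(-g-1))=g$; since moreover $h^{1}(T_{\mathbb{P}^1 \times \mathbb{P}^1})=0$, the cohomology sequence gives $h^{0}({T_{\mathbb{P}^1 \times \mathbb{P}^1}}_{|C})=6+g$.\\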
Since $h^{0}(T_{\mathbb{P}^1 \times \mathbb{P}^1})=6$ and $\dim W^3_{g+3}(C)=g$,
the linear series $|\mathcal{O}_C(1,1)|$, which is cut by hyperplanes in the projective embedding of $\mathbb{P}^1 \times \mathbb{P}^1$ as a quadric in $\mathbb{P}^3$, is general as a point in $W^3_{g+3}(C)$. Since $|\mathcal{O}_{C}(1,1)|=g^1_2+g^1_{g+1}$ is the sum of the two linear series defined on $C$, respectively, by the two projections of $\mathbb{P}^1 \times \mathbb{P}^1$,
we have that the $g^1_{g+1}$ is general too (the $g^1_2$ is unique).\\
Since $|\mathcal{O}_C(C)|=(g+1)g^1_2+2g^1_{g+1}$,
one has that $|\mathcal{O}_C(C)|$ is a general $g^{3g+4}_{4g+4}$, whose Brill-Noether number is $\rho=\rho(g,3g+4,4g+4)=g$. By Remark \ref{casod=r+rho} there is a divisor in $|\mathcal{O}_C(C)|$ containing $n$ general points, $n \leq 4g+4$.
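The numerical data above may be checked directly. One has $C^2 = 2 \cdot 2(g+1) = 4g+4$; since $\deg \mathcal{O}_C(C) > 2g-2$, Riemann-Roch gives $h^{0}(\mathcal{O}_C(C)) = 4g+4+1-g = 3g+5$, i.e.\ $|\mathcal{O}_C(C)|$ is a $g^{3g+4}_{4g+4}$, and
$$\rho(g,3g+4,4g+4)=g-(3g+5)\bigl(g-(4g+4)+(3g+4)\bigr)=g-(3g+5)\cdot 0=g.$$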
\end{proof}
\end{section}
\begin{section}{Linear systems containing the general curve of genus $g$ on complete intersection surfaces}
\label{Linear systems containing the general curve of genus $g$ on complete intersection surfaces}
A natural question arises at this point. Theorem \ref{mgnuniruled} is quite general and can in principle be applied to a wide range of linear systems, whereas we have applied it only to a few. Are there linear systems which can be used to improve Theorem \ref{maintheorem}? If so, which surfaces carry them?\\
The results contained in Section \ref{background} show that, up to a technical assumption, a general curve of genus $g \geq 12$ cannot move in a positive-dimensional non-isotrivial linear system on a nonsingular projective surface $S$ with $\text{Kod}(S) \leq 0$.\\
Hence such a linear system can be realized only on a surface of Kodaira dimension $1$ or on a surface of general type.\\
Now, it is a fact that positive-dimensional linear systems containing the general curve of genus $g$ for $12 \leq g \leq 15$ have been discovered on complete intersection surfaces. In \cite{CR} this is obvious since the curves are embedded in $\mathbb{P}^3$. In \cite{V} and \cite{BV} it does not come as a surprise once one looks at the proofs: typically a reducible curve, say $D_1 \cup D_2$, is constructed on a reducible complete intersection surface $Y$ in $\mathbb{P}^r$ and it is then proved that it can be smoothed to a general curve of genus $g$.\\
The whole construction is harder to handle if $Y$ is not a complete intersection.\\
In this section an extensive discussion on linear systems on complete intersection surfaces is carried out, giving a partial answer to the questions above.
It turns out that, if $S$ is a nonsingular complete intersection surface, it is not possible to have a general curve of genus $g \geq 16$ moving on it, while, if $12 \leq g \leq 15$, the surface $S$ must be a canonical one. There are only five families of canonical complete intersection surfaces. Two of these, namely the quintic surfaces in $\mathbb{P}^3$ and the complete intersections of four quadrics in $\mathbb{P}^6$, were exactly those used in Section \ref{unirulednessofsomemodulispaces} to prove our uniruledness results.\\ \\
It is a classical well-known fact that a general curve of genus $g \geq 12$ cannot be embedded in $\mathbb{P}^2$. The other families of complete intersection surfaces of negative Kodaira dimension are the quadric and the cubic surfaces in $\mathbb{P}^3$ and the complete intersections of two quadrics in $\mathbb{P}^4$ (all these are rational surfaces). The following proposition takes care of these cases:
\begin{prop}
\label{2322}
Let $C$ be a general curve of genus $g \geq 2$ moving in a positive-dimensional linear system on a nonsingular quadric or cubic surface in $\mathbb{P}^3$, or on a nonsingular complete intersection of two quadrics in $\mathbb{P}^4$. Then $g \leq \alpha$, where $\alpha=4,9,8$, respectively.
\end{prop}
\begin{proof}
Let $S=\mathbb{P}^1 \times \mathbb{P}^1$ and consider the exact sequence $0 \rightarrow T_C \rightarrow {T_S}_{|C} \rightarrow N_{C/S} \rightarrow 0$. The surface $S$ is rigid since $h^{1}(T_S)=0$, hence it is sufficient to show that the cohomology map $H^0(N_{C/S}) \xrightarrow{\beta} H^1(T_C)$ is not surjective for $g \geq 5$. One has $C \in |\mathcal{O}(a,b)|$, $a,b \geq 1$, $C^2=2ab$ and $g(C)=(a-1)(b-1)$, thus $h^1(N_{C/S})=h^1(\mathcal{O}_C(C))=0$. It follows that $\beta$ is not surjective if and only if $h^1({T_S}_{|C}) \neq 0$.\\
The cohomology sequence associated to the exact sequence $0 \rightarrow T_{S}(-C) \rightarrow T_{S} \rightarrow {T_{S}}_{|C} \rightarrow 0$ gives $h^{1}({T_S}_{|C})=h^{2}(T_{S}(-C))$.\\
Let $C \in |\mathcal{O}(a,b)|$. Using Serre duality one shows that $h^{2}(T_{S}(-C))=h^{0}(\mathcal{O}(a-4,b-2) \oplus \mathcal{O}(a-2,b-4))$, which equals $0$ if and only if $(a \leq 3 \lor b \leq 1) \wedge (a \leq 1 \lor b \leq 3)$. This condition is never satisfied if $a,b$ are such that $(a-1)(b-1)=g$ for $g \geq 5$ (while it is satisfied if $a=b=3$ i.e. for curves of genus $4$).\\
Let $S$ be a nonsingular cubic surface in $\mathbb{P}^3$ and let $d=\deg C$. The Riemann-Roch formula gives $d \leq g+3$. Since $K_S =\mathcal{O}_S(-1)$, the genus formula gives $2g-1 \leq C^2 \leq 3g+1$. In particular $\mathcal{O}_C(C)$ is nonspecial and Riemann-Roch gives $h^{0}(\mathcal{O}_C(C))=C^2 +1-g$. Since $S$ is regular one has $h^{0}(\mathcal{O}_S(C))=C^2+2-g$. Substituting in inequality (\ref{5g-111chios}) one obtains
$$5(g-1)-11\chi(\mathcal{O}_S)+2K^2_S-C^2+2-g=4g-8-C^2 \leq 0.$$
Since $C^2 \leq 3g+1$, one has $4g-8-C^2 \geq g-9$, hence $g \leq 9$ must hold.\\
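For the reader's convenience: a nonsingular cubic surface has $\chi(\mathcal{O}_S)=1$ and $K^2_S=3$, so, written in the form of inequality (\ref{5g-111chios2}), the left-hand side computes as
$$5(g-1)-\left[11\cdot 1-2\cdot 3+2C^2\right]+(C^2+2-g)=4g-8-C^2.$$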
Let $S$ be a nonsingular complete intersection of two quadrics in $\mathbb{P}^4$. This time the Riemann-Roch formula gives $d \leq g+4$ and the genus formula gives $2g-1 \leq C^2 \leq 3g+2$. Arguing as in the previous case one obtains
$$5(g-1)-11\chi(\mathcal{O}_S)+2K^2_S-C^2+2-g=4g-6-C^2 \leq 0.$$
Since $C^2 \leq 3g+2$, one has $4g-6-C^2 \geq g-8$, hence $g \leq 8$ must hold.\\
\end{proof}
There are only three families of complete intersection surfaces having Kodaira dimension $0$, namely quartics in $\mathbb{P}^3$, complete intersections of type $(2,3)$ in $\mathbb{P}^4$ and of type $(2,2,2)$ in $\mathbb{P}^5$ (all these are K3 surfaces). These cases are covered by Theorem \ref{kod=0leq11}, which gives the bound $g \leq 11$.\\
All remaining complete intersection surfaces are of general type, having very ample canonical bundle.
\begin{prop}
\label{elencoSditipogenerale}
Let $C$ be a general curve of genus $g \geq 3$ moving in a positive-dimensional linear system on a nonsingular projective surface $S$ of general type which is a complete intersection of $r-2$ hypersurfaces in $\mathbb{P}^r$. Then $S$ is a canonical surface, that is $S$ must be one of the following types:
\begin{itemize}
\item $S=(5)$;
\item $S=(2,4)$;
\item $S=(3,3)$;
\item $S=(2,2,3)$;
\item $S=(2,2,2,2)$.
\end{itemize}
Moreover, one has $g \leq 15$ and if $g \geq 12$, then $C$ is non-degenerate in $\mathbb{P}^r$.
\end{prop}
\begin{proof}
By assumption $S=(k_1,k_2,...,k_{r-2})$ is the nonsingular complete intersection of $r-2$ projective hypersurfaces of respective degree $k_1,...,k_{r-2}$ in $\mathbb{P}^r$.\\
Let $K_S=\mathcal{O}_S(h)$ and suppose by contradiction that $h \geq 2$. By adjunction one has $\mathcal{O}_C(K_C) \cong \mathcal{O}_C(h) \otimes \mathcal{O}_C(C)$. Since $C$ moves in a positive-dimensional linear system, the normal bundle $N_{C/S} \cong \mathcal{O}_C(C)$ has a nonzero section, hence the bundle $\mathcal{O}_C(h-2+C)$ has a nonzero section too, say $s$. Consider the exact sequence
$$0 \rightarrow \mathcal{O}_C(2) \xrightarrow{\otimes s} \mathcal{O}_C(K_C) \rightarrow T \rightarrow 0$$
where $T$ is the torsion sheaf supported on the zero locus of $s$. The associated cohomology sequence gives $h^1(\mathcal{O}_C(2)) \geq h^1(\mathcal{O}_C(K_C))$, thus $\mathcal{O}_C(2)$ is special. Since $h^{0}(\mathcal{O}_C(1)) \geq 2$, Proposition \ref{proprietàcurvemoduligenerali} gives a contradiction, hence $S$ must be a canonical surface.\\
Suppose by contradiction that $g(C) \geq 12$ and $C$ is a degenerate curve in $\mathbb{P}^r$. Then $C$ also lies on a nonsingular complete
intersection surface $S'$ of the kind $(k_1,...,k_{r-3},1)$ in $\mathbb{P}^r$ i.e. of the kind $(k_1,...,k_{r-3})$ in $\mathbb{P}^{r-1}$. Up to a permutation of the $k_i$, the surface $S'$ can then be supposed to be one among $(2), (3), (2,2), (2,2,2)$, which have Kodaira dimension $\leq 0$ (the case $S'=\mathbb{P}^2$ has already been ruled out).\\
Let $C^2_{S'}$ be the self-intersection of $C$ on $S'$. The genus formula gives $C^2_{S'} \geq 2g-2$. Since $S'$ is regular, $C$ moves on it in a positive-dimensional linear system. If $S'$ is a $(2,2,2)$ the contradiction is given by Theorem \ref{kod=0leq11}, otherwise it is given by Proposition \ref{2322}.\\
By contradiction let $g \geq 16$. By Theorem \ref{disuguaglianzafibrazionefree} inequality
\begin{equation}
\label{5g-111chios2}
5(g-1)-\left[11\chi(\mathcal{O}_S)-2K^2_S+2C^2\right]+h^{0}(\mathcal{O}_S(C)) \leq 0
\end{equation}
must be satisfied and $h^{0}(\mathcal{O}_S(C)) \geq 2$. Let $C$ be embedded in $S$ by a $g^{r}_{d}$ (by the previous part $C$ is nondegenerate in $\mathbb{P}^r$). Since $C$ has general moduli, the Brill-Noether number $\rho(g,r,d)=g-(r+1)(g-d+r)$ must be nonnegative, thus one has
\begin{equation}
\label{drr+1g+r}
d \geq \frac{r}{r+1}g+r
\end{equation}
from which
\begin{equation}
\label{c2casoS=(...)}
C^2=2g-2-C \cdot K_S=2g-2-d \leq \frac{r+2}{r+1}g-r-2.
\end{equation}
Let us examine each possible case.
\begin{itemize}
\item
Let $S$ be a nonsingular quintic in $\mathbb{P}^3$.
One has $5(g-1)-45-2C^2+h^{0}(\mathcal{O}_S(C)) \geq \frac{5}{2}g-40+h^{0}(\mathcal{O}_S(C)) > 0$ if $g \geq 16$, where the first term is the left term of (\ref{5g-111chios2}) and the first inequality follows by (\ref{c2casoS=(...)}). Thus $S$ cannot carry a general curve of genus $g \geq 16$ moving in a positive-dimensional linear system. The structure of the argument is the same for all the other cases.
\item Let $S=(2,4)$.
One has $5(g-1)-50-2C^2+h^{0}(\mathcal{O}_S(C)) \geq \frac{13}{5}g-43+h^{0}(\mathcal{O}_S(C)) > 0$ if $g \geq 16$, where the first term is the left term of (\ref{5g-111chios2}) and the first inequality follows by (\ref{c2casoS=(...)}).
\item Let $S=(3,3)$ in $\mathbb{P}^4$. One has $5(g-1)-48-2C^2+h^{0}(\mathcal{O}_S(C)) \geq \frac{13}{5}g-41+h^{0}(\mathcal{O}_S(C)) > 0$ if $g \geq 16$, where the first term is the left term of (\ref{5g-111chios2}) and the first inequality follows by (\ref{c2casoS=(...)}).
\item
Let $S=(2,2,3)$.
One has $5(g-1)-53-2C^2+h^{0}(\mathcal{O}_S(C)) \geq \frac{8}{3}g-44+h^{0}(\mathcal{O}_S(C)) >0$ if $g \geq 16$, where the first term is the left term of (\ref{5g-111chios2}) and the first inequality follows by (\ref{c2casoS=(...)}).
\item
Let $S=(2,2,2,2)$.
One has $5(g-1)-56-2C^2+h^{0}(\mathcal{O}_S(C)) \geq \frac{19}{7}g-45+h^{0}(\mathcal{O}_S(C)) >0$ if $g \geq 16$, where the first term is the left term of (\ref{5g-111chios2}) and the first inequality follows by (\ref{c2casoS=(...)}).
\end{itemize}
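The constants appearing in the displayed inequalities come from the standard invariants of these canonical complete intersection surfaces ($q=0$, $p_g=h^{0}(\mathcal{O}_S(1))$, $K^2_S=\deg S$), which the reader may verify:
$$\begin{array}{c|c|c|c}
S & \chi(\mathcal{O}_S) & K^2_S & 11\chi(\mathcal{O}_S)-2K^2_S \\ \hline
(5) & 5 & 5 & 45 \\
(2,4) & 6 & 8 & 50 \\
(3,3) & 6 & 9 & 48 \\
(2,2,3) & 7 & 12 & 53 \\
(2,2,2,2) & 8 & 16 & 56
\end{array}$$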
\end{proof}
One can sum up the previous results in the following theorem:
\begin{teo}
Let $C$ be a general curve of genus $g \geq 3$ moving in a positive-dimensional linear system on a nonsingular projective surface $S$ which is a complete intersection. Then $g \leq 15$. Moreover
\begin{itemize}
\item if $g \leq 11$, then $\emph{Kod}(S) \leq 0$ or $S$ is a canonical surface;
\item if $12 \leq g \leq 15$, then $S$ is a canonical surface.
\end{itemize}
\end{teo}
This in particular implies that, if one wants to use Theorem \ref{equivalentimguniruledsistemalineare} to show uniruledness for some $\overline{\mathcal{M}}_g$, $g \geq 16$, the surface cannot be a complete intersection.
\end{section}
$\left.\right.$\\ \\
\textbf{Acknowledgements.}\\
The author wants to express his gratitude to his advisor Edoardo Sernesi, who led him into the world of moduli spaces of curves.
Special thanks to Antonio Rapagnetta for many helpful discussions and to Claudio Fontanari for useful suggestions on the preliminary draft of this paper.
\small
$\left.\right.$\\
$\left.\right.$\\
\noindent
Luca Benzo, Dipartimento di Matematica, Università di Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133 Roma, Italy. e-mail
[email protected]
\end{document} |
\begin{document}
\setcounter{tocdepth}{1}
\begin{abstract} We prove that every algebraic stack, locally of finite type over an algebraically closed field with affine stabilizers, is \'etale-locally a quotient stack in a neighborhood of a point with linearly reductive stabilizer group. The proof uses an equivariant version of Artin's algebraization theorem proved in the appendix. We provide numerous applications of the main theorems.
\end{abstract}
\title{A Luna \'etale slice theorem for algebraic stacks}
\section{Introduction}
Quotient stacks form a distinguished class of algebraic stacks which provide intuition for
the geometry of general algebraic stacks. Indeed, equivariant algebraic geometry has a
long history with a wealth of tools at its disposal. Thus, it has long been desired---and
more recently believed \cite{alper-quotient,alper-kresch}---that certain algebraic
stacks are locally quotient stacks. This is fulfilled by the main result of this article:
\begin{theorem}\label{T:field}
Let $\cX$ be a quasi-separated algebraic stack, locally
of finite type over an algebraically closed field $k$, with affine stabilizers. Let $x \in \cX(k)$ be a point and $H \subseteq G_x$ be a subgroup scheme of the stabilizer such that $H$ is linearly reductive and $G_x / H$ is smooth (resp.\ \'etale). Then there exists an affine scheme $\Spec A$ with an action of $H$, a $k$-point $w \in \Spec A$ fixed by $H$, and a smooth (resp.\ \'etale) morphism
$$f\colon \bigl([\Spec A/H],w\bigr) \to (\cX,x)$$
such that $BH \cong f^{-1}(BG_x)$; in particular, $f$ induces the given inclusion $H \to G_x$ on stabilizer group schemes at $w$. In addition, if $\cX$ has affine diagonal, then the morphism $f$ can be arranged to be affine.
\end{theorem}
This justifies the philosophy that quotient stacks of
the form $[\Spec A / G]$, where $G$ is a linearly reductive group, are the building blocks
of algebraic stacks near points with linearly reductive stabilizers.
In the case of smooth algebraic stacks, we can provide a more refined description (Theorem \ref{T:smooth}) which
resolves the algebro-geometric counterpart to the Weinstein
conjectures~\cite{weinstein_linearization}---now known as Zung's Theorem
\cite{zung_proper_grpds,MR2776372,MR3185351,MR3259039}---on the linearization of
proper Lie groupoids in differential geometry. Before we state the second theorem, we introduce the following notation:
if $\cX$ is an algebraic stack over a field $k$ and $x\in \cX(k)$ is a closed
point with stabilizer group scheme $G_x$, then we let $N_x$ denote the normal
space to $x$ viewed as a $G_x$-representation. If $\cI \subseteq \oh_{\cX}$
denotes the sheaf of ideals defining $x$, then $N_{x} = (\cI/\cI^2)^\vee$. If
$G_x$ is smooth, then $N_{x}$ is identified with the tangent space of $\cX$ at
$x$; see Section \ref{S:tangent}.
\begin{theorem}\label{T:smooth}
Let $\cX$ be a quasi-separated algebraic stack, locally
of finite type over an algebraically closed field $k$, with affine stabilizers. Let $x \in |\cX|$ be a smooth and closed point with linearly reductive stabilizer
group $G_x$. Then there exists an affine and \'etale morphism $(U,u) \to (N_x /\!\!/ G_x,0)$, where $N_{x} /\!\!/ G_x$ denotes the GIT quotient, and a cartesian diagram
$$\xymatrix{
\bigl([N_{x}/G_x],0\bigr) \ar[d] & \bigl([W / G_x],w\bigr)\ar[r]^-f \ar[d] \ar[l] & (\cX,x) \\
(N_{x} /\!\!/ G_x,0) & (U,u) \ar[l] \ar@{}[ul]|\square &
}$$
such that $W$ is affine and $f$ is \'etale and induces an isomorphism
of stabilizer groups at $w$. In addition, if $\cX$ has affine diagonal, then the morphism $f$ can be
arranged to be affine.
\end{theorem}
In particular, this theorem implies that $\cX$ and $[N_{x} /G_x]$ have a common \'etale
neighborhood of the form $[\Spec A / G_x]$.
The main techniques employed in the proof of Theorem \ref{T:smooth} are
\begin{enumerate}
\item deformation theory,
\item coherent completeness,
\item Tannaka duality, and
\item Artin approximation.
\end{enumerate}
Deformation theory produces an isomorphism between the $n$th infinitesimal neighborhood $\cN^{[n]}$ of
$0$ in $\cN = [N_x/G_x]$ and the $n$th infinitesimal neighborhood $\cX^{[n]}_x$ of $x$ in
$\cX$. It is not at all obvious, however, that the system of morphisms $\{f^{[n]} \colon \cN^{[n]} \to \cX\}$ algebraizes. We establish algebraization in two steps.
The first step is effectivization. To accomplish this, we introduce \emph{coherent completeness}, a key concept of the article. Recall that if $(A,\mathfrak{m})$ is a complete local ring, then
$\Coh(A) = \varprojlim_n \Coh(A/\mathfrak{m}^{n+1})$. Coherent completeness (Definition \ref{D:cc}) is a generalization of this,
which is more refined than the formal GAGA results of \cite[III.5.1.4]{EGA} and
\cite{geraschenko-brown} (see \S\ref{A:gms_app}). What we prove in \S\ref{S:cc} is the
following.
\begin{theorem} \label{key-theorem}
Let $G$ be a linearly reductive affine group scheme over a field $k$. Let $\Spec A$ be a noetherian affine scheme with an action of~$G$, and let $x \in \Spec A$ be a closed point fixed by $G$. Suppose that $A^{G}$ is a complete local ring. Let $\cX = [\Spec A / G]$ and let $\cX^{[n]}_x$ be the $n$th infinitesimal neighborhood of $x$. Then the natural functor
\begin{equation} \label{eqn-coh}
\Coh(\cX) \to \varprojlim_n \Coh\bigl(\cX^{[n]}_x\bigr)
\end{equation}
is an equivalence of categories.
\end{theorem}
Tannaka duality for algebraic stacks with affine stabilizers was recently established by the
second and third authors \cite[Thm.~1.1]{hallj_dary_coherent_tannakian_duality} (also see
Theorem \ref{T:tannakian}). This asserts that morphisms between algebraic stacks $\cY \to \cX$ are equivalent to symmetric monoidal functors $\Coh(\cX) \to \Coh(\cY)$. Therefore, to prove Theorem \ref{T:smooth}, we can combine Theorem \ref{key-theorem} with Tannaka duality (Corollary \ref{C:tannakian}) and the above deformation-theoretic observations to show that the morphisms $\{f^{[n]} \colon \cN^{[n]} \to \cX\}$ effectivize to $\hat{f} \colon \hat{\cN} \to \cX$, where $\hat{\cN} = \cN \times_{N_x/\!\!/ G_x} \Spec \hat{\oh}_{N_x/\!\!/ G_x,0}$.
The morphism $\hat{f}$ is then algebraized using Artin approximation \cite{artin-approx}.
The techniques employed in the proof of Theorem \ref{T:field} are similar, but the methods are more involved. Since we no longer assume that $x \in \cX(k)$ is a non-singular point, we cannot expect an \'etale or smooth morphism $\cN^{[n]}\to \cX_x^{[n]}$ where $\cN=[N_x/H]$. Using Theorem \ref{key-theorem} and Tannaka duality, however, we can produce a closed substack $\hat{\cH}$ of $\hat{\cN}$ and a formally versal morphism $\hat{f} \colon \hat{\cH} \to \cX$. To algebraize $\hat{f}$, we apply an equivariant version of Artin algebraization (Corollary \ref{C:equivariant-algebraization}), which we believe is of independent interest.
For tame stacks with finite inertia, Theorem \ref{T:field} is one of the main results of \cite{tame}. The structure of algebraic stacks with infinite stabilizers has been poorly understood until the present article.
For algebraic stacks with infinite stabilizers that are not---or are not known to be---quotient stacks, Theorems \ref{T:field} and \ref{T:smooth} were only known when $\cX = \fM_{g,n}^{\mathrm{ss}}$ is the
moduli stack of semistable curves. This is the central result of \cite{alper-kresch}, where it is also shown that $f$ can be arranged to be representable. For certain quotient stacks, Theorems \ref{T:field} and \ref{T:smooth} can be obtained
using traditional methods in equivariant algebraic geometry, see \S\ref{A:luna} for details.
\subsection{Some remarks on the hypotheses}
We mention here several examples illustrating the necessity of some of the hypotheses of Theorems \ref{T:field} and \ref{T:smooth}.
\begin{example} \label{ex1}
Some reductivity assumption of the stabilizer $G_x$ is necessary in Theorem \ref{T:field}. For instance, consider the group scheme $G= \Spec k[x,y]_{xy+1} \to \AA^1 = \Spec k[x]$ (with multiplication defined by $y \mapsto xyy' + y + y'$), where the generic fiber is $\mathbb{G}_m$ but the fiber over the origin is $\mathbb{G}_a$. Let $\cX = BG$ and $x \in |\cX|$ be the point corresponding to the origin. There does not exist an \'etale morphism $([W/ \mathbb{G}_a], w) \to (\cX, x)$, where $W$ is an algebraic space over $k$ with an action of $\mathbb{G}_a$.
\end{example}
\begin{example} \label{ex2}
It is essential to require that the stabilizer groups are affine in a neighborhood of $x \in |\cX|$. For instance, let $X$ be a smooth curve and let $\cE \to X$ be a group scheme whose generic fiber is a smooth elliptic curve but the fiber over a point $x \in X$ is isomorphic to $\mathbb{G}_m$. Let $\cX = B\cE$. There is no \'etale morphism $([W/ \mathbb{G}_m], w) \to (\cX, x)$, where $W$ is an affine $k$-scheme with an action of $\mathbb{G}_m$.
\end{example}
\begin{example} \label{ex3}
In the context of Theorem \ref{T:field}, it is not possible in general to find a Zariski-local quotient presentation of the form $[\Spec A/G_x]$. Indeed, if $C$ is the projective nodal cubic curve with $\mathbb{G}_m$-action, then there is no Zariski-open $\mathbb{G}_m$-invariant affine neighborhood of the node. If we view $C$ ($\mathbb{G}_m$-equivariantly) as the $\ZZ/2\ZZ$-quotient of the union of two $\mathbb{P}^1$'s glued along two nodes, then after removing one of the nodes, we obtain a (non-finite) \'etale morphism $[\Spec(k[x,y]/xy)/\mathbb{G}_m] \to [C/\mathbb{G}_m]$ where $x$ and $y$ have weights $1$ and $-1$. This is in fact the unique such quotient presentation (see Remark \ref{R:uniqueness}).
\end{example}
The following two examples illustrate that in Theorem \ref{T:field} it is not always possible to obtain a quotient presentation $f\colon [\Spec A/G_x]\to \cX$, such that $f$ is representable or separated without additional hypotheses; see also Question \ref{Q:represent}.
\begin{example} \label{ex5}
Consider the
non-separated affine line as a group scheme $G \to \AA^1$ whose generic fiber
is trivial but the fiber over the origin is $\ZZ/2\ZZ$. Then $BG$ admits an
\'etale neighborhood $f \co [\AA^1/(\ZZ/2\ZZ)] \to BG$ which induces an isomorphism
of stabilizer groups at $0$, but $f$ is not representable in a neighborhood.
\end{example}
\begin{example} \label{ex4}
Let $\mathcal{L}og$ (resp.\ $\mathcal{L}og^{\mathrm{al}}$) be the algebraic stack of log structures (resp.\ aligned log structures) over $\Spec k$ introduced in \cite{olssonloggeometry} (resp.\ \cite{acfw}). Let $r\geq 2$ be an integer and let $\NN^r$ be the free log structure on $\Spec k$. There is an \'etale neighborhood $[\Spec k[\NN^r] / (\mathbb{G}_m^r \rtimes S_r)] \to \mathcal{L}og$ of $\NN^r$ which is not representable. Note that $\mathcal{L}og$ does not have separated diagonal.
Similarly, there is an \'etale neighborhood $[\Spec k[\NN^r] / \mathbb{G}_m^r] \to \mathcal{L}og^{\mathrm{al}}$ of $\NN^r$ (with the standard alignment) which is representable but not separated. Because $[\Spec k[\NN^r] / \mathbb{G}_m^r] \to \mathcal{L}og^{\mathrm{al}}$ is inertia-preserving, $\mathcal{L}og^{\mathrm{al}}$ has affine inertia and hence separated diagonal; however, the diagonal is not affine.
In both cases, this is the unique such quotient presentation (see Remark \ref{R:uniqueness}).
\end{example}
\subsection{Generalizations}
Using similar arguments, one can in fact establish a generalization of Theorem \ref{T:field} to the relative and mixed characteristic setting. This requires developing some background material on deformations of linearly reductive group schemes, a more general version of Theorem \ref{key-theorem} and a generalization of the formal functions theorem for good moduli spaces. To make this article more accessible, we have decided to postpone the relative statement until the follow-up article \cite{ahr2}.
If $G_x$ is not reductive, it is possible that one could find an \'etale
neighborhood $([\Spec A/\GL_n],w)\to (\cX,x)$. However, this is not known even if
$\cX=B_{k[\epsilon]}G_\epsilon$ where $G_\epsilon$ is a deformation of
a non-reductive algebraic group~\cite{mathoverflow_groups-over-dual-numbers}.
In characteristic $p$, the linearly reductive hypothesis in Theorems \ref{T:field} and \ref{T:smooth} is quite restrictive. Indeed, a smooth affine group scheme $G$ over an algebraically closed field $k$ of characteristic $p$ is linearly reductive if and only if $G^0$ is a torus and $|G/G^0|$ is coprime to $p$ \cite{nagata}. We ask however:
\begin{question} \label{Q:geom-red} Does a variant of Theorem \ref{T:field} remain true if ``linearly reductive" is replaced with ``reductive"?
\end{question}
We remark that if $\cX$ is a Deligne--Mumford stack, then the conclusion of Theorem \ref{T:field} holds. We also ask:
\begin{question}\label{Q:represent} If $\cX$ has separated (resp.\ quasi-affine) diagonal, then can the morphism $f$ in Theorems \ref{T:field} and \ref{T:smooth} be chosen to be representable (resp.\ quasi-affine)?
\end{question}
If $\cX$ does not have separated diagonal, then the morphism $f$ cannot
necessarily be chosen to be representable; see Examples \ref{ex5} and \ref{ex4}.
We answer Question \ref{Q:represent} affirmatively when $\cX$ has affine diagonal (Proposition \ref{P:refinement}) or is a quotient stack (Corollary \ref{C:refinement:quot_stack}),
or when $H$ is
diagonalizable (Proposition \ref{P:rep_diagonalizable}).
\subsection{Applications} \label{S:intro-applications}
Theorems \ref{T:field} and \ref{T:smooth} yield a number of applications to old and new problems.
\subsubsection*{Immediate consequences}\label{A:immediate}
Let $\cX$ be a quasi-separated algebraic stack, locally of finite type over an algebraically closed field $k$ with affine stabilizers, and let $x \in \cX(k)$ be a point with linearly reductive stabilizer $G_x$.
\begin{enumerate}
\item\label{I:immediate-embedding} There is an \'etale neighborhood of $x$ with a closed embedding into a smooth algebraic stack.
\item There is an \'etale-local description of the cotangent complex $L_{\cX/k}$ of $\cX$ in terms of the cotangent complex $L_{\cW/k}$ of $\cW=[\Spec A/G_x]$. If $x \in |\cX|$ is a smooth point (so that $\cW$ can be taken to be smooth) and $G_x$ is smooth, then $L_{\cW/k}$ admits an explicit description. If $x$ is not smooth but $G_x$ is smooth, then the $[-1,1]$-truncation of $L_{\cW/k}$ can be described explicitly by appealing to \itemref{I:immediate-embedding}.
\item For any representation $V$ of $G_x$, there exists a vector bundle over an \'etale neighborhood of $x$ extending $V$.
\item If $G_x$ is smooth, then there are $G_x$-actions on the formal miniversal deformation space $\hat{\Def}(x)$ of $x$ and its versal object, and the ring of $G_x$-invariants of $\hat{\Def}(x)$ is the completion of a finite type $k$-algebra. This observation is explicitly spelled out in Remark~\ref{R:miniversal-completion-finite-type}.
\item Any specialization $y \rightsquigarrow x$ of $k$-points is realized by a morphism $[\AA^1/\mathbb{G}_m] \to \cX$. This follows by applying the Hilbert--Mumford criterion to an \'etale quotient presentation constructed by Theorem \ref{T:field}.
\end{enumerate}
\subsubsection*{Local applications} The following consequences of Theorems \ref{T:field} and \ref{T:smooth} to the local geometry of algebraic stacks will be detailed in Section~\ref{S:local_applications}:
\begin{enumerate}
\item a generalization of Sumihiro's theorem on torus actions to Deligne--Mumford stacks
(\S\ref{A:sumihiro}), confirming an expectation of Oprea \cite[\S2]{oprea};
\item a generalization of Luna's \'etale slice theorem to non-affine schemes
(\S\ref{A:luna});
\item the existence of equivariant miniversal deformation spaces for singular curves (\S\ref{A:mv_curve}), generalizing~\cite{alper-kresch};
\item the \'etale-local quotient structure of a good moduli space (\S\ref{A:gms_app}), which in particular establishes formal GAGA for good moduli space morphisms, resolving a conjecture of Geraschenko--Zureick-Brown \cite[Conj.\ 32]{geraschenko-brown};
\item the existence of coherent completions of algebraic stacks at points with linearly reductive stabilizer (\S\ref{A:coherent-completion});
\item a criterion for \'etale-local equivalence of algebraic stacks (\S\ref{A:etale}),
extending Artin's corresponding results for schemes \cite[Cor.~2.6]{artin-approx};
\item the resolution property holds \'etale-locally for algebraic stacks near a point whose stabilizer has linearly reductive connected component (\S\ref{A:resolution-property-etale}), which in particular provides a characterization of toric Artin stacks in terms of stacky fans \cite[Thm.~6.1]{GS-toric-stacks-2}.
\end{enumerate}
\subsubsection*{Global applications} In Section~\ref{S:global_applications}, we provide the following global applications:
\begin{enumerate}
\item compact generation of derived categories of algebraic stacks (\S\ref{A:compact-generation});
\item a criterion for the existence of a good moduli space (\S\ref{A:gms}), generalizing
\cite{keel-mori,afsw-good};
\item algebraicity of stacks of coherent sheaves, Quot schemes, Hilbert schemes, Hom stacks and equivariant Hom stacks (\S\ref{A:algebraicity});
\item a short proof of Drinfeld's results \cite{drinfeld} on algebraic spaces with a $\mathbb{G}_m$-action and a generalization to Deligne--Mumford stacks with $\mathbb{G}_m$-actions (\S\ref{A:drinfeld}); and
\item Bia\l ynicki-Birula decompositions for separated Deligne--Mumford stacks
(\S\ref{A:BB}).
\end{enumerate}
We also note that Theorem \ref{T:field} was recently applied by Toda \cite{toda-hallalg} to resolve a wall-crossing conjecture for higher rank DT/PT invariants.
\subsection{Leitfaden} \label{S:leitfaden}
The logical order of this article is as follows. In Section~\ref{S:cc-tannaka} we
prove the key coherent completeness result (Theorem~\ref{key-theorem}). In Appendix~\ref{A:approx} we state Artin approximation and
prove an equivariant version of Artin algebraization (Corollary \ref{C:equivariant-algebraization}). These techniques are
then used in Section~\ref{S:proof-section} to prove the main local structure theorems (Theorems~\ref{T:field}
and~\ref{T:smooth}). In Sections~\ref{S:local_applications} and \ref{S:global_applications} we give applications of the
main theorems.
\subsection{Notation} \label{S:notation}
An algebraic stack $\cX$ is quasi-separated if the diagonal and the diagonal of the diagonal are quasi-compact.
An algebraic stack $\cX$ has \emph{affine stabilizers} if for every field $K$
and point $x\colon \Spec K\to \cX$, the stabilizer group $G_x$ is affine.
If $\cX$ is an algebraic stack and $\cZ \subseteq \cX$ is a closed substack, we will denote by $\cX_{\cZ}^{[n]}$ the $n$th nilpotent thickening of $\cZ \subseteq \cX$
(i.e., if $\cI \subseteq \oh_{\cX}$ is the ideal sheaf defining $\cZ$, then $\cX_{\cZ}^{[n]} \to \cX$ is defined by $\cI^{n+1}$). If $x\in |\mathcal{X}|$ is a closed
point, then the \emph{$n$th infinitesimal neighborhood of
$x$} is the $n$th nilpotent thickening of the inclusion of the residual gerbe $\cG_x \to \cX$.
Recall from \cite{alper-good} that a quasi-separated and quasi-compact morphism $f \co \cX \to \cY$ of algebraic stacks is {\it cohomologically affine} if the push-forward functor $f_*$ on the category of quasi-coherent $\oh_{\cX}$-modules is exact.
If $\cY$ has quasi-affine diagonal and $f$ has affine diagonal, then $f$ is cohomologically affine if and only if $\DERF{R} f_* \colon \DCAT_{\QCoh}^+(\cX) \to \DCAT_{\QCoh}^+(\cY)$ is $t$-exact, cf.\ \cite[Prop.~3.10~(vii)]{alper-good} and \cite[Prop.~2.1]{hallj_neeman_dary_no_compacts}; this equivalence is false if $\cY$ does not have affine stabilizers \cite[Rem.~1.6]{hallj_dary_alg_groups_classifying}.
If $G \to \Spec k$ is an affine group scheme of finite type, then we say that $G$ is {\it linearly reductive} if $BG \to \Spec k$ is cohomologically affine.
A quasi-separated and quasi-compact morphism $f \co \cX \to Y$ of algebraic stacks is {\it a good moduli space} if $Y$ is an algebraic space, $f$ is cohomologically affine and $\oh_Y \to f_* \oh_{\cX}$ is an isomorphism.
If $G$ is an affine group scheme of finite type over a field $k$ acting on an algebraic space $X$, then we say that a $G$-invariant morphism $\pi \co X \to Y$ of algebraic spaces is {\it a good GIT quotient} if the induced map $[X/G] \to Y$ is a good moduli space; we often write $Y = X /\!\!/ G$. In the case that $G$ is linearly reductive, a $G$-equivariant morphism $\pi \co X \to Y$ is a good GIT quotient if and only if $\pi$ is affine and $\oh_Y \to (\pi_* \oh_X)^G$ is an isomorphism.
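To keep a concrete instance of these definitions in mind, the following standard example (included here only as an illustration) shows that a good GIT quotient identifies orbits whose closures meet:

```latex
% Standard example: G_m acting on A^2 with weights 1 and -1.
Let $\mathbb{G}_m$ act on $X = \AA^2 = \Spec k[x,y]$ by
$t \cdot (x,y) = (tx, t^{-1}y)$. The invariant ring is
$k[x,y]^{\mathbb{G}_m} = k[xy]$, and the induced morphism
$$\pi \colon \AA^2 \longrightarrow \AA^2 /\!\!/ \mathbb{G}_m
= \Spec k[xy] \cong \AA^1$$
is affine with $\oh_{\AA^1} \to (\pi_* \oh_{\AA^2})^{\mathbb{G}_m}$ an
isomorphism, hence a good GIT quotient. It is not an orbit space: the
two punctured axes and the origin are three distinct orbits, all mapping
to $0 \in \AA^1$, since the origin lies in the closure of each axis
orbit.
```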
An algebraic stack $\cX$ is said to have the {\it resolution property}
if every quasi-coherent $\oh_{\cX}$-module of finite type is a
quotient of a locally free $\oh_{\cX}$-module of finite type.
By the Totaro--Gross
theorem \cite{totaro,gross-resolution}, a quasi-compact and
quasi-separated algebraic stack is isomorphic to $[U/\GL_N]$, where
$U$ is a quasi-affine scheme and $N$ is a positive integer, if and
only if the closed points of $\cX$ have affine stabilizers and $\cX$
has the resolution property.
In this article, we will only use the easy implication ($\Longrightarrow$), which can
be strengthened as follows. If $G$ is a group scheme of finite type over a
field $k$ acting on a quasi-affine scheme $U$, then $[U/G]$ has the resolution
property. This is a
consequence of the following two simple observations: (1) $BG$ has
the resolution property; and (2) if $f \colon \cX \to \cY$ is
quasi-affine and $\cY$ has the resolution property, then $\cX$ has the
resolution property.
If $\cX$ is a noetherian algebraic stack, then we denote by $\Coh(\cX)$ the abelian category of coherent $\oh_{\cX}$-modules.
\section{Coherently complete stacks and Tannaka duality} \label{S:cc-tannaka}
\subsection{Coherently complete algebraic stacks} \label{S:cc}
Motivated by Theorem \ref{key-theorem}, we begin this section with the following key definition.
\begin{definition} \label{D:cc}
Let $\cX$ be a noetherian algebraic stack with affine stabilizers and let $\cZ \subseteq \cX$ be a closed substack. For a coherent $\oh_\cX$-module $\mathcal{F}$, the restriction to $\cX_{\cZ}^{[n]}$ is denoted $\mathcal{F}_n$.
We say that $\cX$ is {\it coherently complete along $\cZ$} if the natural functor
\[
\Coh(\cX) \to \varprojlim_n \Coh\bigl(\cX_{\cZ}^{[n]}\bigr), \qquad \mathcal{F}\mapsto \{\cF_n\}_{n\geq 0}
\]
is an equivalence of categories.
\end{definition}
We now recall some examples of coherently complete algebraic
stacks. Traditionally, such results have been referred to as ``formal
GAGA'' theorems.
\begin{example}\label{E:cc}
Let $A$ be a noetherian ring and let $I \subseteq A$ be an
ideal. Assume that $A$ is $I$-adically complete, that is,
$A\simeq \varprojlim_n A/I^{n+1}$. Then $\Spec A$ is coherently
complete along $\Spec A/I$. More generally if an algebraic stack
$\cX$ is proper over $\Spec A$, then $\cX$ is coherently complete
along $\cX \times_{\Spec A} \Spec A /I$. See \cite[III.5.1.4]{EGA}
for the case of schemes and \cite[Thm.~1.4]{olsson-proper},
\cite[Thm.~4.1]{conrad-gaga} for algebraic stacks.
\end{example}
\begin{example}\label{E:gzb}
Let $(R,\mathfrak{m})$ be a complete noetherian local ring and let
$\pi\colon \cX \to \Spec R$ be a good moduli space of finite
type. If $\cX$ has the resolution property (e.g.,
$\cX \simeq [\Spec B/\GL_n]$, where $B^{\GL_n}=R$), then $\cX$ is
coherently complete along $\pi^{-1}(\Spec R/\mathfrak{m})$ \cite[Thm.~1.1]{geraschenko-brown}.
\end{example}
Note that in the examples above, completeness was always along a
substack that is pulled back from an affine base. Theorem
\ref{key-theorem} is quite different, however, as the following
example highlights.
\begin{example}
Let $k$ be a field. Then the quotient stack $[\AA^1_k/\mathbb{G}_m]$ has
good moduli space $\Spec k$. Theorem \ref{key-theorem} implies that
$[\AA^1_k/\mathbb{G}_m]$ is coherently complete along the closed point
$B\mathbb{G}_m$. In this special case, one can even give a direct proof of
the coherent completeness (see Proposition \ref{P:A1-complete}).
\end{example}
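The direct argument in this special case can be sketched degreewise (with the convention that the coordinate $x$ on $\AA^1_k$ has $\mathbb{G}_m$-weight $1$):

```latex
% Sketch: coherent sheaves on [A^1/G_m] are finitely generated
% Z-graded k[x]-modules, recovered from their truncations degree
% by degree.
A coherent sheaf on $[\AA^1_k/\mathbb{G}_m]$ is a finitely generated
$\ZZ$-graded $k[x]$-module $M = \bigoplus_{d} M_d$, and the $n$th
infinitesimal neighborhood of $B\mathbb{G}_m$ is
$[\Spec(k[x]/(x^{n+1}))/\mathbb{G}_m]$, so the restriction of $M$ to it
is the graded module $M/x^{n+1}M$. Since $M$ is finitely generated, its
grading is bounded below, say $M_d = 0$ for $d < d_0$. As multiplication
by $x$ raises degrees by $1$,
$$\bigl(x^{n+1}M\bigr)_d = x^{n+1} \cdot M_{d-n-1} = 0
\qquad \text{for } n \ge d - d_0,$$
so each graded piece stabilizes: $M_d \cong (M/x^{n+1}M)_d$ for
$n \gg 0$. Hence $M$ is recovered degreewise from its truncations,
which is the assertion of coherent completeness along $B\mathbb{G}_m$
in this case.
```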
We now prove Theorem \ref{key-theorem}.
\begin{proof}[Proof of Theorem \ref{key-theorem}]
Let $\mathfrak{m} \subset A$ be the maximal ideal corresponding to $x$. A coherent $\oh_{\cX}$-module $\cF$ corresponds to a finitely generated $A$-module $M$ with an action of $G$.
Note that since $G$ is linearly reductive, $M^G$ is a finitely generated $A^G$-module \cite[Thm.\ 4.16(x)]{alper-good}. We claim that the following two sequences of $A^G$-submodules $\{(\mathfrak{m}^{n} M)^{G} \}$ and $\{ (\mathfrak{m}^G)^n M^{G} \}$ of $M^{G}$ define the same topology, or in other words that
\begin{equation} \label{E:formal}
M^{G} \to \varprojlim M^{G} / \bigl(\mathfrak{m}^{n} M\bigr)^{G}
\end{equation}
is an isomorphism of $A^G$-modules.
To this end, we first establish that
\begin{equation} \label{E:intersection}
\bigcap_{n \ge 0} \bigl(\mathfrak{m}^n M\bigr)^{G} = 0,
\end{equation}
which immediately informs us that \eqref{E:formal} is injective.
Let $N = \bigcap_{n \ge 0} \mathfrak{m}^n M$. The Artin--Rees lemma implies that $N=\mathfrak{m} N$ so $N \otimes_A A/\mathfrak{m} = 0$.
Since $A^G$ is a local ring, $\Spec A$ has a unique closed orbit $\{x\}$.
Since the support of $N$ is a closed $G$-invariant
subscheme of $\Spec A$ which does not contain $x$, it follows that $N=0$.
We next establish that \eqref{E:formal} is an isomorphism if $A^G$ is artinian. In this case, $\{(\mathfrak{m}^n M)^G\}$ automatically satisfies the Mittag-Leffler condition (it is a sequence of artinian $A^G$-modules). Therefore, taking the inverse limit of the exact sequences $0 \to (\mathfrak{m}^n M)^G \to M^G \to M^G / (\mathfrak{m}^n M)^G \to 0$ and applying \eqref{E:intersection} yields an exact sequence
$$0 \to 0 \to M^G \to \varprojlim M^G / (\mathfrak{m}^n M)^G \to 0.$$
Thus, we have established \eqref{E:formal} when $A^G$ is artinian.
To establish \eqref{E:formal} in the general case, let $J = (\mathfrak{m}^G) A \subseteq A$ and observe that
\begin{equation} \label{E:limit1}
M^G = \varprojlim M^G / \bigl(\mathfrak{m}^G\bigr)^n M^G = \varprojlim \bigl(M/J^n M\bigr)^G,
\end{equation}
since $G$ is linearly reductive.
For each $n$,
we know that
\begin{equation} \label{E:limit2}
\bigl(M/J^nM\bigr)^G = \varprojlim_l M^G / \bigl((J^n + \mathfrak{m}^l)M \bigr)^G
\end{equation}
using the artinian case proved above. Finally, combining \eqref{E:limit1} and \eqref{E:limit2} together with the observation that $J^n \subseteq \mathfrak{m}^l$ for $n \ge l$, we conclude that
$$\begin{aligned}
M^G & = \varprojlim_n \bigl(M / J^n M\bigr)^G \\
& = \varprojlim_n \varprojlim_l M^G / \bigl((J^n + \mathfrak{m}^l)M \bigr)^G \\
& = \varprojlim_l M^G / \bigl(\mathfrak{m}^l M\bigr)^G.
\end{aligned}$$
We now show that \eqref{eqn-coh} is fully faithful. Suppose that $\shv{G}$ and $\shv{F}$ are coherent
$\oh_{\cX}$-modules, and let $\shv{G}_n$ and $\shv{F}_n$ denote
the restrictions to $\cX_x^{[n]}$, respectively. We need to show
that
\begin{equation*}
\Hom(\shv{G}, \shv{F}) \to \varprojlim \Hom(\shv{G}_n, \shv{F}_n)
\end{equation*}
is bijective. Since $\cX$ has the resolution
property (see \S\ref{S:notation}), we can find locally free $\oh_{\cX}$-modules
$\cE'$ and $\cE$ and an exact sequence
\[
\cE' \to \cE \to \shv{G} \to 0.
\]
This induces a diagram
\[
\xymatrix{
0 \ar[r] & \Hom(\shv{G}, \shv{F}) \ar[r] \ar[d] & \Hom(\cE, \shv{F}) \ar[r] \ar[d] & \Hom(\cE', \shv{F}) \ar[d]\\
0 \ar[r] & \varprojlim \Hom(\shv{G}_n, \shv{F}_n) \ar[r] &
\varprojlim \Hom(\cE_n, \shv{F}_n) \ar[r] & \varprojlim
\Hom(\cE'_n, \shv{F}_n) }
\]
with exact rows. Therefore, it suffices to assume that $\shv{G}$ is
locally free. In this case,
\[
\Hom(\shv{G}, \shv{F}) = \Hom(\oh_{\cX}, \shv{G}^{\vee} \otimes
\shv{F}) \quad \text{and} \quad
\Hom(\shv{G}_n, \shv{F}_n) = \Hom\bigl(\oh_{\cX_x^{[n]}},
(\shv{G}_n^{\vee} \otimes \shv{F}_n)\bigr).
\]
Therefore, we can also assume that $\shv{G} = \oh_{\cX}$ and we need to verify that the map
\begin{equation}\label{E:coh-ff}
\Gamma(\cX,\shv{F}) \to \varprojlim
\Gamma\bigl(\cX_x^{[n]},\shv{F}_n\bigr)
\end{equation}
is an isomorphism. But
$\Gamma\bigl(\cX_x^{[n]},\shv{F}_n\bigr)=
\Gamma(\cX,\shv{F})/\Gamma(\cX,\mathfrak{m}_x^{n+1}\shv{F})$
since $G$ is linearly reductive,
so the map \eqref{E:coh-ff} is identified with the isomorphism \eqref{E:formal}, and the full faithfulness of \eqref{eqn-coh} follows.
We now prove that the functor
\eqref{eqn-coh} is essentially surjective. Let $\{\cF_n\} \in \varprojlim
\Coh(\cX_x^{[n]})$ be a compatible system of coherent sheaves.
Since $\cX$ has the
resolution property (see \S\ref{S:notation}), there is a vector bundle $\shv{E}$ on $\cX$
together with a surjection $\varphi_0\colon \shv{E} \to \shv{F}_0$. We
claim that $\varphi_0$ lifts to a compatible system of morphisms
$\varphi_n \colon \shv{E} \to \shv{F}_n$ for every $n>0$.
It suffices to show that for $n>0$, the natural map $\Hom(\cE, \cF_{n+1}) \to \Hom(\cE, \cF_n)$ is surjective. But this is clear: $\Ext_{\oh_{\cX}}^{1}(\cE, \mathfrak{m}^{n+1} \cF_{n+1}) = 0$ since $\cE$ is locally free and $G$ is linearly reductive. It follows that we obtain an induced morphism of systems
$\{\varphi_n\} \colon \{\cE_n\} \to
\{\cF_n\}$ and, by Nakayama's Lemma, each $\varphi_n$ is surjective.
The system of
morphisms $\{\varphi_n\}$ admits an adic kernel $\{\shv{K}_n\}$ (see \cite[\S3.2]{hallj_dary_coherent_tannakian_duality}, which is a generalization of \cite[Tag \spref{087X}]{stacks-project} to stacks).
Note that, in general, $\shv{K}_n \neq \ker \varphi_n$ and $\shv{K}_n$ is
actually the ``stabilization'' of $\ker \varphi_n$ (in the sense of the
Artin--Rees lemma). Applying the procedure above to $\{\shv{K}_n\}$,
there is another vector bundle $\cH$ and a morphism of systems
$\{\psi_n\} \colon \{\cH_n\} \to \{\cE_n\}$ such that
$\coker(\psi_n) \cong \shv{F}_n$. By the full faithfulness of
\eqref{eqn-coh}, the morphism $\{\psi_n\}$ arises from a unique
morphism $\psi \colon \cH \to \cE$. Letting
$\widetilde{\shv{F}} = \coker \psi$, the universal property of cokernels
proves that there is an isomorphism
$\widetilde{\shv{F}}_n \cong \shv{F}_n$; the result follows.
\end{proof}
\begin{remark} \label{R:explicit}
In this remark, we show that, under the hypotheses of Theorem \ref{key-theorem}, the coherent $\oh_{\cX}$-module $\cF$ extending a given system $\{\cF_n\} \in \varprojlim \Coh(\cX_x^{[n]})$ can in fact be constructed explicitly. Let $\Gamma$ denote the set of irreducible representations of $G$, with $0 \in \Gamma$ denoting the trivial representation.
For $\rho \in \Gamma$, we let $V_{\rho}$ be the corresponding irreducible representation.
For any $G$-representation~$V$, we set
$$V^{(\rho)} = \bigl(V \otimes V_{\rho}^{\vee}\bigr)^G \otimes V_\rho.$$
Note that $V = \bigoplus_{\rho \in \Gamma} V^{(\rho)}$ and that $V^{(0)} = V^G$ is the subspace of invariants. In particular,
there is a decomposition $A = \bigoplus_{\rho \in \Gamma} A^{(\rho)}$. The data of a coherent $\oh_{\cX}$-module $\cF$ is equivalent to a finitely generated $A$-module $M$ together with a $G$-action,
i.e., an $A$-module $M$ with a decomposition $M = \bigoplus_{\rho \in \Gamma} M^{(\rho)}$, where each $M^{(\rho)}$ is a direct sum of copies of the irreducible representation $V_\rho$, such that the $A$-module structure on $M$ is compatible with the decompositions of $A$ and $M$. Given a coherent $\oh_{\cX}$-module $\cF = \widetilde{M}$ and a representation $\rho \in \Gamma$, the module $M^{(\rho)}$ is a finitely generated $A^G$-module and
$$M^{(\rho)} \to \varprojlim \bigl(M/ \mathfrak{m}^{k} M\bigr)^{(\rho)}$$
is an isomorphism (which follows from \eqref{E:formal}).
Conversely, given a system of
$\{\cF_n = \widetilde{M}_n\} \in \varprojlim \Coh(\cX_x^{[n]})$ where each $M_n$ is a finitely generated $A / \mathfrak{m}^{n+1}$-module with a $G$-action, then the extension $\cF = \widetilde{M}$ can be constructed explicitly by defining:
$$M^{(\rho)} := \varprojlim M_n^{(\rho)} \qquad \text{ and } \qquad M := \bigoplus_{\rho \in \Gamma} M^{(\rho)}.$$
One can show directly that each $M^{(\rho)}$ is a finitely generated $A^G$-module, $M$ is a finitely generated $A$-module with a $G$-action, and $M/ \mathfrak{m}^{n+1} M = M_n$.
\end{remark}
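To make the isotypic decomposition of Remark \ref{R:explicit} concrete, consider the simplest non-trivial case (a standard computation with $G = \mathbb{G}_m$):

```latex
% Isotypic pieces for G = G_m: Gamma = Z, and V^{(d)} is the
% weight-d subspace of V.
For $G = \mathbb{G}_m$ one has $\Gamma = \ZZ$, with $V_d$ the
one-dimensional representation of weight $d$, and
$V^{(d)} = (V \otimes V_d^{\vee})^{\mathbb{G}_m} \otimes V_d$ is simply
the weight-$d$ subspace of $V$. If $A = k[x,y]$ with $x$ and $y$ of
weights $1$ and $-1$, then
$$A^{(0)} = A^{\mathbb{G}_m} = k[xy], \qquad
A^{(d)} = x^{d}\,k[xy], \qquad
A^{(-d)} = y^{d}\,k[xy] \quad (d > 0),$$
so each isotypic piece is a finitely generated (here even free of rank
one) $A^{G}$-module, as asserted in general in the remark.
```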
\begin{remark}
An argument similar to the proof of the essential surjectivity of \eqref{eqn-coh} shows that every vector bundle on $\cX$ is the pullback of a $G$-representation under the projection $\pi \co \cX \to BG$. Indeed, given a vector bundle $\cE$ on $\cX$, we obtain by restriction a vector bundle $\cE_0$ on $BG$.
The surjection $\pi^* \cE_0 \to \cE_0$ lifts to a map $\pi^* \cE_0 \to \cE$ since $\cX$ is cohomologically affine. By Nakayama's Lemma, the map $\pi^* \cE_0 \to \cE$ is a surjection of vector bundles of the same rank and hence an isomorphism.
In particular, suppose that $G$ is a diagonalizable group scheme. Then using the notation of Remark \ref{R:explicit}, every irreducible $G$-representation $\rho \in \Gamma$ is one-dimensional so that a $G$-action on $A$ corresponds to a $\Gamma$-grading $A = \bigoplus_{\rho \in \Gamma} A^{(\rho)}$, and an $A$-module with a $G$-action corresponds to a $\Gamma$-graded $A$-module.
Therefore, if $A = \bigoplus_{\rho \in \Gamma} A^{(\rho)}$ is a $\Gamma$-graded noetherian $k$-algebra with $A^{(0)}$ a complete local $k$-algebra, then every finitely generated projective $\Gamma$-graded $A$-module is free. When $G = \mathbb{G}_m$ and $A^G=k$, this is the well-known statement (e.g., \cite[Thm.~19.2]{eisenbud}) that every finitely generated projective graded module over a noetherian graded $k$-algebra $A = \bigoplus_{d \ge 0} A_d$ with $A_0 = k$ is free.
\end{remark}
\subsection{Tannaka duality}
The following Tannaka duality theorem, proved by the second and third authors, is crucial in our argument.
\begin{theorem}\cite[Thm.~1.1]{hallj_dary_coherent_tannakian_duality} \label{T:tannakian}
Let $\cX$ be an excellent stack and let $\cY$ be a noetherian algebraic stack with affine stabilizers. Then the natural functor
$$\Hom(\cX, \cY) \to \Hom_{r\otimes, \simeq}\bigl(\Coh(\cY), \Coh(\cX)\bigr)$$
is an equivalence of categories, where $\Hom_{r\otimes, \simeq}(\Coh(\cY), \Coh(\cX))$ denotes the category whose objects are right exact monoidal functors $\Coh(\cY) \to \Coh(\cX)$ and morphisms are natural isomorphisms of functors.
\end{theorem}
We will apply the following consequence of Tannaka duality:
\begin{corollary} \label{C:tannakian}
Let $\cX$ be an excellent algebraic stack with affine stabilizers and let $\cZ \subseteq \cX$ be a closed substack. Suppose that $\cX$ is coherently complete along $\cZ$.
If $\cY$ is a noetherian algebraic stack with affine stabilizers, then the natural functor
$$\Hom(\cX, \cY) \to \varprojlim_n \Hom\bigl(\cX_{\cZ}^{[n]}, \cY\bigr)$$
is an equivalence of categories.
\end{corollary}
\begin{proof}
There are natural equivalences
\begin{align*}
\Hom(\cX, \cY) & \simeq \Hom_{r\otimes, \simeq}\bigl( \Coh(\cY), \Coh(\cX)\bigr) & & \text{(Tannaka duality)}\\
& \simeq \Hom_{r\otimes, \simeq}\bigl( \Coh(\cY), \varprojlim \Coh\bigl(\cX_{\cZ}^{[n]}\bigr) \bigr) & & \text{(coherent completeness)}\\
& \simeq \varprojlim \Hom_{r\otimes, \simeq}\bigl( \Coh(\cY), \Coh\bigl(\cX_{\cZ}^{[n]}\bigr) \bigr) & & \\
& \simeq \varprojlim \Hom\bigl(\cX_{\cZ}^{[n]}, \cY\bigr) & & \text{(Tannaka duality)}.\qedhere
\end{align*}
\end{proof}
\section{Proofs of Theorems \ref{T:field} and \ref{T:smooth}} \label{S:proof-section}
\subsection{The normal and tangent space of an algebraic stack} \label{S:tangent}
Let $\cX$ be a quasi-separated algebraic stack, locally of finite type over a field $k$, with affine stabilizers. Let $x \in \cX(k)$ be a closed point. Denote by $i \co BG_x \to \cX$ the closed immersion of the residual gerbe of $x$, and by $\cI$ the corresponding ideal sheaf. The {\it normal space to $x$} is $N_x := (\cI/\cI^2)^{\vee} = (i^* \cI)^{\vee}$ viewed as a $G_x$-representation. The {\it tangent space $T_{\cX,x}$ to $\cX$ at $x$} is the $k$-vector space of equivalence classes of pairs $(\tau, \alpha)$ consisting of morphisms $\tau \co \Spec k[\epsilon]/(\epsilon^2) \to \cX$ and 2-isomorphisms $\alpha \co x \to \tau|_{\Spec k}$. The stabilizer $G_x$ acts linearly on the tangent space $T_{\cX,x}$ by precomposition on the 2-isomorphism.
If $G_x$ is smooth, then there is an identification $T_{\cX,x} \cong N_x$ of $G_x$-representations. Moreover, if $\cX = [X/G]$ is a quotient stack where $G$ is a smooth group scheme and $x \in X(k)$ (with $G_x$ not necessarily smooth), then $N_x$ is identified with the normal space $T_{X,x} / T_{G \cdot x, x}$ to the orbit $G \cdot x$ at $x$.
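\begin{example}
As a simple illustration of these notions, let $\cX = [\AA^2/\mathbb{G}_m]$ over a field $k$, where $t \cdot (a,b) = (ta, t^{-1}b)$, and let $x \in \cX(k)$ be the image of the origin. The origin is a fixed point, so $G_x = \mathbb{G}_m$ and $T_{G \cdot x, x} = 0$; hence $N_x = T_{\AA^2, 0}$ is two-dimensional, with $G_x$ acting with weights $1$ and $-1$. Since $G_x$ is smooth, $T_{\cX,x} \cong N_x$ as $G_x$-representations.
\end{example}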
\subsection{The smooth case} \label{S:smooth}
We now prove Theorem \ref{T:smooth} even though it follows directly from Theorem \ref{T:field} coupled with Luna's fundamental lemma \cite[p.~94]{luna}.
We feel that since the proof of Theorem \ref{T:smooth} is more transparent and less technical than that of Theorem \ref{T:field}, digesting it first will make the proof of Theorem \ref{T:field} more accessible.
\begin{proof}[Proof of Theorem \ref{T:smooth}]
We may replace $\cX$ with an open neighborhood of $x$ and thus assume that
$\cX$ is noetherian. Define the quotient stack $\cN= [N_x/G_x]$, where $N_x$ is viewed as an affine scheme via $\Spec(\Sym N_x^{\vee})$.
Since $G_x$ is linearly reductive, we claim that there are compatible
isomorphisms $\cX_x^{[n]} \cong \cN^{[n]}$.
To see this, first note that the identity on $\cX_x^{[0]}=BG_x$ extends to a unique morphism
$t_n\colon \cX_x^{[n]}\to BG_x$ for every $n$. Indeed, the obstruction to extending
$t_n\colon \cX_x^{[n]}\to BG_x$ to $t_{n+1}\colon \cX_x^{[n+1]}\to BG_x$ is an
element of the group $\Ext_{\oh_{BG_x}}^{1}(L_{BG_x/k}, \cI^{n+1}/\cI^{n+2})$
\cite{olsson-defn}, which is zero because $BG_x$ is cohomologically affine and
$L_{BG_x/k}$ is a perfect complex supported in degrees $[0,1]$ as $BG_x\to
\Spec k$ is smooth.
In particular, $BG_x=\cX_x^{[0]}\hookrightarrow \cX_x^{[1]}$ has a retraction. This implies
that $\cX_x^{[1]} \cong \cN^{[1]}$ since both are trivial deformations by the
same module. Since $\cN\to BG_x$ is smooth, the obstructions to lifting the
morphism $\cX_x^{[1]}\cong \cN^{[1]}\hookrightarrow \cN$ to $\cX_x^{[n]}\to \cN$ for every $n$
vanish as
$H^1(BG_x,\Omega_{\cN/BG_x}^\vee\otimes \cI^{n+1}/\cI^{n+2})=0$. We have induced
isomorphisms $\cX_x^{[n]} \cong \cN^{[n]}$ by
Proposition~\ref{P:closed/iso-cond:infinitesimal}.
Let $\cN \to N = N_x /\!\!/ G_x$ be the good moduli space and denote by $0 \in N$ the image of the origin. Set $\hat{\cN} := \Spec \hat{\oh}_{N,0} \times_N \cN$. Since $\hat{\cN}$ is coherently complete (Theorem \ref{key-theorem}), we may apply Tannaka duality (Corollary \ref{C:tannakian}) to find a morphism $\hat{\cN} \to \cX$ filling in the diagram
$$
\xymatrix{
\cX_x^{[n]} \cong \cN^{[n]} \ar[r] \ar@/^1.6pc/[rrr] & \hat{\cN} \ar[r] \ar[d] \ar@/^1pc/@{-->}[rr] & \cN \ar[d] & \cX\\
& \Spec \hat{\oh}_{N,0} \ar[r] \ar@{}[ur]|\square & N.
}
$$
Let us now consider the functor $F \co (\mathsf{Sch}/N)^\mathrm{op} \to \Sets$ which assigns to a
morphism $S \to N$ the set of morphisms $S \times_N \cN \to \cX$ modulo
2-isomorphisms. This functor is locally of finite presentation and we have an
element of $F$ over $\Spec \hat{\oh}_{N,0}$. By Artin approximation
\cite[Cor.~2.2]{artin-approx} (cf.\ Theorem~\ref{T:artin-approximation}), there
exist an \'etale morphism $(U,u) \to
(N,0)$, where $U$ is an affine scheme, and a morphism $f\colon (\cW,w):=(U
\times_N \cN, (u,0) ) \to (\cX,x)$ agreeing with $(\hat{\cN},0) \to (\cX,x)$ to
first order. Since $\cX$ is smooth at $x$, it follows by
Proposition~\ref{P:closed/iso-cond:infinitesimal} that $f$ restricts to isomorphisms
$f^{[n]}\colon \cW_w^{[n]}\to \cX_x^{[n]}$ for every $n$, hence that $f$ is \'etale at $w$.
This establishes the theorem after shrinking $U$ suitably; the final statement
follows from Proposition \ref{P:refinement}
below.
\end{proof}
\subsection{The general case} \label{S:proof}
We now prove Theorem \ref{T:field} by a similar method to the proof in the smooth case but using equivariant Artin algebraization (Corollary \ref{C:equivariant-algebraization}) in place of Artin approximation.
\begin{proof}[Proof of Theorem \ref{T:field}]
We may replace $\cX$ with an open neighborhood of $x$ and thus assume that
$\cX$ is noetherian and that $x \in |\cX|$ is a closed point.
Let $\cN := [N_x / H]$ and let $N$ be the GIT quotient $N_x /\!\!/ H$; then the induced morphism $\cN \to N$ is a good moduli space. Further, let $\hat{\cN} := \Spec \hat{\oh}_{N,0} \times_N \cN$, where $0$ denotes the image of the origin.
Let $\eta_0 \co BH \to BG_x = \cX_x^{[0]}$ be the morphism induced from the inclusion $H \subseteq G_x$; this is a smooth (resp.\ \'etale) morphism. We first prove by induction that there are compatible $2$-cartesian diagrams
\[
\xymatrix{\cH_n \ar[d]_{\eta_n} \ar@{(->}[r] & \cH_{n+1} \ar[d]^{\eta_{n+1}} \\
\cX_x^{[n]} \ar@{(->}[r] \ar@{}[ur]|\square& \cX_x^{[n+1]},}
\]
where $\cH_0 = BH$ and the vertical maps are
smooth (resp.\ \'etale). Indeed, given $\eta_n \co \cH_n \to \cX_x^{[n]}$, by \cite{olsson-defn}, the obstruction to
the existence of $\eta_{n+1}$ is an element of $
\Ext^2_{\oh_{BH}}(\Omega_{BH/BG_x},\eta_0^*(\cI^{n+1}/\cI^{n+2}))$, but this group vanishes as $H$ is linearly reductive and $\Omega_{BH/BG_x}$ is a vector bundle.
Let $\tau_0 \co \cH_0 = BH \hookrightarrow \cN$ be the inclusion of the origin. Since $H$ is linearly reductive, the
deformation $\cH_0\hookrightarrow \cH_1$ is a trivial extension with ideal $N_x^\vee$ and hence we
have
an isomorphism $\tau_1\colon \cH_1\cong \cN^{[1]}$ (see proof of smooth
case). Using linear reductivity of $H$ once again and deformation theory, we
obtain compatible morphisms $\tau_n \colon \cH_n \to \cN$ extending $\tau_0$
and $\tau_1$. These are closed immersions by
Proposition~\ref{P:closed/iso-cond:infinitesimal}~\itemref{PI:closed:infinitesimal}.
The closed immersion $\tau_n\colon \cH_n \hookrightarrow \cN$ factors through a
closed immersion $i_n\colon \cH_n \hookrightarrow \cN^{[n]}$.
Since $\hat{\cN}$ is coherently complete, the inverse system of epimorphisms $\oh_{\cN^{[n]}} \to i_{n,*}\oh_{\cH_n}$, in the category $\varprojlim_n \Coh(\cN^{[n]})$, lifts uniquely to an epimorphism $\oh_{\hat{\cN}} \to \oh_{\hat{\cH}}$ in the category $\Coh(\hat{\cN})$. This defines a closed immersion $i\colon \hat{\cH} \to \hat{\cN}$ rendering the following square $2$-cartesian for all $n$:
\[
\xymatrix{
\cH_n \ar@{(->}[d]_{i_n} \ar@{(->}[r] & \hat{\cH} \ar@{(->}[d]^i\\
\cN^{[n]} \ar@{(->}[r] & \hat{\cN}.
}
\]
Since $\hat{\cH}$ is also coherently complete, Tannaka duality (Corollary \ref{C:tannakian}) yields a morphism $\eta\co \hat{\cH} \to \cX$ such that the following square is
$2$-commutative for all $n$:
\[
\xymatrix{
\cH_n \ar[d]_{\eta_n} \ar@{(->}[r] & \hat{\cH} \ar[d]^{\eta} \\
\cX_x^{[n]} \ar[r] & \cX.
}
\]
The morphism $\eta$ is formally versal (resp.\ universal) by Proposition \ref{P:formal-versality-criterion}. We may therefore apply Corollary \ref{C:equivariant-algebraization} to obtain a stack $\cW=[\Spec A/H]$ together with a closed point $w\in |\cW|$, a morphism $f\co (\cW,w)\to (\cX,x)$
of finite type, a flat morphism $\varphi\co \hat{\cH}\to \cW$, identifying $\hat{\cH}$ with the
completion of $\cW$ at $w$, and a $2$-isomorphism $f\circ\varphi\cong \eta$. In particular, $f$ is smooth (resp.\ \'etale)
at $w$. Moreover, $(f\circ \varphi)^{-1}(\cX_x^{[0]})=\cH_0$ so we have a
flat morphism $BH=\cH_0\to f^{-1}(BG_x)$ which equals the inclusion of the
residual gerbe at $w$. It follows that $w$ is an isolated point in the fiber
$f^{-1}(BG_x)$. We can thus replace $\cW$ with an open neighborhood of $w$
such that $\cW\to \cX$ becomes smooth (resp.\ \'etale) and
$f^{-1}(BG_x) = BH$. Since $w$ is a closed point of $\cW$, we may further shrink
$\cW$ so that it remains cohomologically affine (see Lemma~\ref{L:shrink} below).
The final statement follows from Proposition \ref{P:refinement} below.
\end{proof}
\subsection{The refinement} The results in this section can be used to show
that under suitable hypotheses, the quotient presentation $f\colon \cW \to \cX$ in Theorems \ref{T:field} and \ref{T:smooth}
can be arranged to be affine (Proposition \ref{P:refinement}), quasi-affine (Corollary \ref{C:refinement:quot_stack}), and representable (Proposition \ref{P:rep_diagonalizable}).
The following trivial lemma will be frequently applied to a good moduli space
morphism $\pi \colon \cX \to X$. Note that any closed subset $\cZ\subseteq
\cX$ satisfies the assumption in the lemma in this case.
\begin{lemma}\label{L:shrink}
Let $\pi\colon \cX \to X$ be a closed morphism of topological spaces and
let $\cZ\subseteq \cX$ be a closed subset. Assume that every open
neighborhood of $\cZ$ contains $\pi^{-1}(\pi(\cZ))$. If $\cZ \subseteq \cU$
is an open neighborhood of $\cZ$, then there exists an open neighborhood
$U' \subseteq X$ of $\pi(\cZ)$ such that $\pi^{-1}(U') \subseteq \cU$.
\end{lemma}
\begin{proof}
Take $U'=X\setminus \pi(\cX\setminus \cU)$; this is open since $\pi$ is closed, and $\pi^{-1}(U') \subseteq \cU$ by construction. Moreover, $\pi(\cZ) \subseteq U'$: if some $y \in \pi(\cZ)$ lay in $\pi(\cX\setminus \cU)$, then a point of $\cX \setminus \cU$ would lie in $\pi^{-1}(\pi(\cZ)) \subseteq \cU$, a contradiction.
\end{proof}
We now come to our main refinement result.
\begin{proposition} \label{P:refinement}
Let $f \co \cW \to \cX$ be a morphism of noetherian algebraic stacks such that $\cW$ is cohomologically affine with affine diagonal. Suppose $w \in |\cW|$ is a closed point such that $f$ induces an injection of stabilizer groups at $w$.
If $\cX$ has affine diagonal, then there exists a cohomologically affine open neighborhood $\cU \subseteq \cW$ of $w$ such that $f|_{\cU}$ is affine.
\end{proposition}
\begin{proof}
By shrinking $\cW$, we may assume that $\Delta_{\cW/\cX}$ is
quasi-finite and after further shrinking, we may arrange so that
$\cW$ remains cohomologically affine (Lemma~\ref{L:shrink}). Let
$p\colon V \to \mathcal{X}$ be a smooth surjection, where $V$ is affine;
then $p$ is affine because $\mathcal{X}$ has affine diagonal. Take
$f_V \colon \cW_V \to V$ to be the pullback of $f$ along
$p$. Then $\cW_V \to \cW$ is affine, and so $\cW_V$ is
cohomologically affine. Since $\cW_V$ also has quasi-finite and
affine diagonal, $f_V$ is separated
\cite[Thm.~8.3.2]{alper-adequate}. By descent, $f$ is separated. In
particular, the relative inertia of $f$,
$I_{\cW/\cX} \to \cW$, is finite. By Nakayama's Lemma, there
is an open substack $\cU$ of $\cW$, containing $w$, with trivial
inertia relative to $\cX$. Thus $\cU\to \cX$ is quasi-compact,
representable and separated. Shrinking $\cU$ further, $\cU$
also becomes cohomologically affine. Since
$\cX$ has affine diagonal, it follows that $f$ is also
cohomologically affine. By Serre's Criterion
\cite[Prop.~3.3]{alper-good}, $f$ is affine.
\end{proof}
\begin{corollary} \label{C:refinement:quot_stack}
Let $S$ be a noetherian scheme. Let
$f \co \cW \to \cX$ be a morphism of noetherian algebraic stacks over $S$
such that $\cW$ is cohomologically affine with affine
diagonal. Suppose $w \in |\cW|$ is a closed point such that $f$
induces an injection of stabilizer groups at $w$. If $\cX=[X/G]$
where $X$ is an algebraic space and $G$ is an affine flat group
scheme of finite type over $S$, then there exists a cohomologically affine open neighborhood
$\cU \subseteq \cW$ of $w$ such that $f|_{\cU}$ is quasi-affine.
\end{corollary}
\begin{proof}
Consider the composition $\cW \to [X/G] \to BG$. By Proposition
\ref{P:refinement}, we may suitably shrink $\cW$ so that the
composition $\cW \to [X/G] \to BG$ becomes affine. Since $X$ is a
noetherian algebraic space, it has quasi-affine diagonal; in
particular $[X/G] \to BG$ has quasi-affine diagonal. It follows
immediately that $\cW \to [X/G]$ is quasi-affine.
\end{proof}
\begin{proposition}\label{P:rep_diagonalizable}
Let $S$ be a noetherian scheme. Let $f\colon \cW \to \cX$ be a morphism of
locally noetherian algebraic stacks over $S$. Assume that $\cX$ has separated
diagonal and that $\cW = [W/H]$, where $W$ is affine over $S$ and $H$ is
of multiplicative type over $S$. If $w\in W$ is fixed by $H$ and $f$ induces an injection
of stabilizer groups at $w$, then there exists an $H$-invariant affine open $U$ of $w$ in
$W$ such that $[U/H] \to \cX$ is representable.
\end{proposition}
\begin{remark}
The separatedness of the diagonal is essential; see Examples \ref{ex5} and \ref{ex4}.
\end{remark}
\begin{proof}
There is an exact sequence of groups over $\cW$:
\[
\xymatrix{1 \ar[r] & I_{\cW/\cX} \ar[r] & I_{\cW/S} \ar[r] & I_{\cX/S}\times_{\cX} \cW.}
\]
Since $f$ induces an injection of stabilizer groups at $w$, it follows that
$(I_{\cW/\cX})_w$ is trivial. Also, since $I_{\cX/S} \to \cX$ is separated,
$I_{\cW/\cX} \to I_{\cW/S}$ is a closed immersion.
Let $I=I_{\cW/\cX}\times_{\cW} W$ and pull the inclusion
$I_{\cW/\cX} \to I_{\cW/S}$ back to $W$. Since $I_{\cW/S}\times_{\cW} W \to H\times_S W$
is a closed immersion, it follows that $I \to H\times_S W$ is a closed immersion. Since $H\to S$ is of multiplicative type and $I_w$ is trivial,
it follows that $I\to W$ is trivial in a neighborhood of $w$~\cite[Exp.~IX, Cor.~6.5]{sga3ii}. By shrinking
this open subset using Lemma \ref{L:shrink}, we obtain the result.
\end{proof}
\section{Local applications} \label{S:local_applications}
\subsection{Generalization of Sumihiro's theorem on torus actions} \label{A:sumihiro}
In \cite[\S 2]{oprea}, Oprea speculates that every quasi-compact
Deligne--Mumford stack $\cX$ with a torus action has an equivariant \'etale
atlas $\Spec A \to \cX$. He proves this when
$\cX=\overline{\cM}_{0,n}(\mathbb{P}^r,d)$ is the moduli space of stable maps and the action is induced by any
action of $\mathbb{G}_m$ on $\mathbb{P}^r$ and obtains some nice applications. We show that Oprea's speculation holds in general.
For group actions on stacks, we follow the conventions of \cite{romagny}.
Let $T$ be a torus acting on an algebraic stack $\cX$, locally of finite type over a field $k$, via $\sigma \co T \times \cX \to \cX$.
Let $\cY = [\cX /T]$. Let $x \in \cX(k)$ be a point with image $y \in \cY(k)$. There is an exact sequence
\begin{equation} \label{E:stab}
\xymatrix{1 \ar[r] & G_x \ar[r] & G_y \ar[r] & T_x \ar[r] & 1},
\end{equation}
where the stabilizer $T_x \subseteq T$ is defined by the fiber product
\begin{equation} \label{D:stab}
\begin{split}
\xymatrix{
T_x\times B G_x \ar[r]^-{\sigma_x} \ar[d] & B G_x \ar[d] \\
T\times B G_x \ar[r]^-{\sigma|_x} \ar@{}[ur]|\square & \cX
} \end{split}
\end{equation}
and $\sigma|_x \co T\times B G_x \xrightarrow{\id\times\iota_x} T \times \cX \xrightarrow{\sigma} \cX$.
Observe that $G_y= \Spec k \times_{BG_x} T_x$. The exact sequence \eqref{E:stab} is trivially split if and only if the induced action $\sigma_x$ of $T_x$ on $BG_x$ is trivial. The sequence is split if and only if the action $\sigma_x$ comes from a group homomorphism $T \to \Aut(G_x)$.
\begin{theorem} \label{T:sumi1}
Let $\cX$ be a quasi-separated algebraic (resp.\ Deligne--Mumford) stack with affine stabilizers, locally of finite type over an algebraically closed field $k$. Let $T$ be a torus acting on $\cX$. Let $x \in \cX(k)$ be a point such that $G_x$ is smooth and the exact sequence \eqref{E:stab} is split (e.g., $\cX$ is an algebraic space). There exists a $T$-equivariant smooth (resp.\ \'etale) neighborhood $(\Spec A,u) \to (\cX,x)$ that induces an isomorphism of stabilizers at $u$.
\end{theorem}
\begin{proof}
Let $\cY = [\cX/T]$ and $y \in \cY(k)$ be the image of $x$.
As the sequence \eqref{E:stab} splits, we can consider $T_x$ as a subgroup of $G_y$.
By applying Theorem \ref{T:field} to $\cY$ at $y$ with respect to the subgroup $T_x \subseteq G_y$, we obtain a smooth (resp.\ \'etale) morphism $f \co [W/T_x] \to \cY$,
where $W$ is an affine scheme with an action of $T_x$, which induces the given inclusion $T_x \subseteq G_y$ at stabilizer groups at a preimage $w \in [W/T_x]$ of $y$. Consider the cartesian diagram
$$\xymatrix{
[W/T_x] \times_{\cY} \cX \ar[r] \ar[d] & \cX \ar[d]\ar[r] & \Spec k\ar[d] \\
[W/T_x] \ar[r] & \cY\ar[r] & BT
}$$
The map $[W/T_x]\to \cY\to BT$ induces the injection $T_x\hookrightarrow T$ on stabilizer
groups at $w$. Thus, by
Proposition~\ref{P:refinement},
there is an
open neighborhood $\cU\subseteq [W/T_x]$ of $w$ such that $\cU$ is
cohomologically affine and $\cU\to BT$ is affine. The fiber product
$\cX\times_\cY \cU$ is thus an affine scheme $\Spec A$ and the induced map
$\Spec A\to \cX$ is $T$-equivariant. If $u\in \Spec A$ is a closed point above
$w$ and $x$, then the map $\Spec A\to \cX$ induces an isomorphism $T_x\to T_x$
of stabilizer groups at $u$.
\end{proof}
In the case that $\cX$ is a normal scheme, Theorem \ref{T:sumi1} was proved by
Sumihiro~\cite[Cor.~2]{sumihiro}, \cite[Cor.~3.11]{sumihiro2}; then $\Spec A
\to \cX$ can be taken to be an open neighborhood. The nodal cubic with a
$\mathbb{G}_m$-action provides an example where an \'etale neighborhood is needed:
there does not exist a
$\mathbb{G}_m$-invariant affine open neighborhood of the node. Theorem~\ref{T:sumi1} was also known if
$\cX$ is a quasi-projective scheme \cite[Thm.~1.1(iii)]{brion-linearization} or
if $\cX$ is a smooth, proper, tame and irreducible Deligne--Mumford stack,
whose generic stabilizer is trivial and whose coarse moduli space is a scheme
\cite[Prop.~3.2]{skowera}.
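\begin{example}
To spell out why no $\mathbb{G}_m$-invariant affine open exists in the nodal example, write $C$ for the nodal cubic and $c$ for its node. The smooth locus $C_{\mathrm{sm}} \cong \mathbb{G}_m$ is a dense free orbit, so the only $\mathbb{G}_m$-invariant subsets of $C_{\mathrm{sm}}$ are $\emptyset$ and $C_{\mathrm{sm}}$ itself, and the latter is not closed in $C$. Hence if $U \subseteq C$ is an invariant open neighborhood of $c$, the closed invariant subset $C \setminus U \subseteq C_{\mathrm{sm}}$ must be empty, so $U = C$, which is proper of dimension one and therefore not affine.
\end{example}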
\begin{remark} \label{R:not-split}
The theorem above fails when \eqref{E:stab} does not split because an equivariant, stabilizer-preserving, affine, and \'etale neighborhood induces a splitting. For a simple example when \eqref{E:stab} does not split, consider the Kummer exact sequence $1 \to \pmb{\mu}_n \to \mathbb{G}_m \xrightarrow{n} \mathbb{G}_m \to 1$ for some invertible $n$. This gives rise to a $T=\mathbb{G}_m$ action on the Deligne--Mumford stack $\cX=B\pmb{\mu}_n$ with stack quotient $\cY=[\cX/T]=B\mathbb{G}_m$ such that \eqref{E:stab} becomes the Kummer sequence and hence does not split. The action of $\mathbb{G}_m$ on $B\pmb{\mu}_n$ has the following explicit description: for $t \in \mathbb{G}_m(S) = \Gamma(S, \oh_S)^\times$ and $(\cL, \alpha) \in B\pmb{\mu}_n(S)$ (where $\cL$ is a line bundle on $S$ and $\alpha \co \cL^{\otimes n} \to \oh_S$ is an isomorphism), we have $t \cdot (\cL, \alpha) = (\cL, t \circ \alpha)$.
There is nevertheless an \'etale presentation $\Spec k \to B \pmb{\mu}_n$ which is equivariant under $\mathbb{G}_m \xrightarrow{n} \mathbb{G}_m$. The following theorem shows that such \'etale presentations exist more generally.\end{remark}
\begin{theorem} \label{T:sumi2}
Let $\cX$ be a quasi-separated Deligne--Mumford stack, locally of finite type over an algebraically closed field $k$. Let $T$ be a torus acting on $\cX$. If $x\in \cX(k)$, then there exist a reparameterization $\alpha \co T \to T$ and an \'etale neighborhood $(\Spec A, u) \to (\cX,x)$ that is equivariant with respect to $\alpha$.
\end{theorem}
\begin{proof}
In the exact sequence \eqref{E:stab}, $G_x$ is \'etale and $T_x$ is diagonalizable. This implies that
$(G_y)^0$ is diagonalizable. Indeed, first note that we have exact sequences:
\[
\xymatrix{1 \ar[r] & G_x\cap (G_y)^0 \ar[r] & (G_y)^0 \ar[r] & (T_x)^0 \ar[r] & 1\phantom{.}\\
1 \ar[r] & G_x\cap (G_y)^0 \ar[r] & (G_y)^0_\mathrm{red} \ar[r] & (T_x)^0_\mathrm{red} \ar[r] & 1.}
\]
The second sequence shows that $(G_y)^0_\mathrm{red}$ is a torus
(as it is connected, reduced and surjects onto a torus with finite kernel) and, consequently,
that $G_x\cap (G_y)^0$ is diagonalizable. It then follows that $(G_y)^0$ is
diagonalizable from the first sequence~\cite[Exp.~XVII, Prop.~7.1.1~b)]{sga3ii}.
Theorem \ref{T:field} produces an \'etale neighborhood $f\colon ([\Spec
A/(G_y)^0],w) \to (\cY,y)$ such that the induced morphism on stabilizer
groups is $(G_y)^0 \to G_y$. Replacing $\cX\to \cY$ with the pull-back along $f$,
we may thus assume that $G_y$ is connected and diagonalizable.
If we let $G_y=D(N)$, $T_x=D(M)$ and $T=D(\ZZ^r)$, then we have a surjective
map $q\colon \ZZ^r\to M$ and an injective map $\varphi\colon M\to N$. The
quotient $N/M$ is torsion but without $p$-torsion, where $p$ is the
characteristic of $k$. Since all torsion of $M$ and $N$ is $p$-torsion, we
have that $\varphi$ induces an isomorphism of torsion subgroups. We can thus
find splittings of $\varphi$ and $q$ as in the diagram
\[
\xymatrix@C+10mm{
\mathllap{\ZZ^r=}\ZZ^s\oplus M/M_\mathrm{tor}\ar@{(->}[r]^{\alpha=\id\oplus\varphi_2}\ar@{->>}[d]^{q=q_1\oplus\id}
& \ZZ^s\oplus N/N_\mathrm{tor}\mathrlap{=\ZZ^r}\ar@{->>}[d]^{q'=\varphi_1 q_1\oplus\id} \\
\mathllap{M=}M_\mathrm{tor}\oplus M/M_\mathrm{tor}\ar@{(->}[r]^{\varphi=\varphi_1\oplus \varphi_2}
& N_\mathrm{tor}\oplus N/N_\mathrm{tor}\mathrlap{=N.}
}
\]
The map $q'$ corresponds to an embedding $G_y\hookrightarrow T$ and the map
$\alpha$ to a reparameterization $T\to T$. After reparameterizing the
action of $T$ on $\cX$ via $\alpha$, the surjection $G_y\twoheadrightarrow T_x$
becomes split. The result now follows from Theorem \ref{T:sumi1}.
\end{proof}
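\begin{example}
In the simplest case, the lattice argument above recovers the Kummer example of Remark \ref{R:not-split}: there $\cX = B\pmb{\mu}_n$, $\cY = B\mathbb{G}_m$ and $G_y = T_x = T = \mathbb{G}_m$, so $N = M = \ZZ^r = \ZZ$, $q = \id$, and $\varphi \co M \to N$ is multiplication by $n$, being dual to the surjection $G_y \twoheadrightarrow T_x$, $t \mapsto t^n$. All torsion subgroups vanish, so $\alpha = \varphi$ is the reparameterization $T \to T$, $t \mapsto t^n$, and $q' = \id$ embeds $G_y$ in $T$; after this reparameterization the \'etale presentation $\Spec k \to B\pmb{\mu}_n$ becomes equivariant, as noted in Remark \ref{R:not-split}.
\end{example}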
We can also prove:
\begin{theorem} \label{T:sumi3}
Let $X$ be a quasi-separated algebraic space, locally of finite type over an algebraically closed field $k$. Let $G$ be an affine group scheme of finite type over $k$ acting on $X$. Let $x \in X(k)$ be a point with linearly reductive stabilizer $G_x$. Then there exists an affine scheme $W$ with an action of $G$ and a $G$-equivariant \'etale neighborhood $(W,w) \to (X,x)$ that induces an isomorphism of stabilizers at $w$.
\end{theorem}
\begin{proof}
By Theorem \ref{T:field}, there exists an \'etale neighborhood $f\colon (\cW,w) \to
([X/G],x)$ such that $\cW$ is cohomologically affine, $f$ induces an isomorphism of
stabilizers at $w$, and $w$ is a closed point. By
Proposition~\ref{P:refinement}, we can assume after shrinking $\cW$ that
the composition $\cW \to [X/G] \to BG$ is affine. It follows that $W = \cW \times_{[X/G]} X$ is affine and that
$W \to X$ is a $G$-equivariant \'etale neighborhood of $x$. If we also let
$w\in W$ denote the unique preimage of $x$, then $G_w=G_x$.
\end{proof}
Theorem \ref{T:sumi3} is a partial generalization of another result of
Sumihiro~\cite[Lem.~8]{sumihiro}, \cite[Thm.~3.8]{sumihiro2}. He proves
the existence of an open $G$-equivariant covering by quasi-projective
subschemes when $X$ is a normal scheme and $G$ is connected.
\subsection{Generalization of Luna's \'etale slice theorem} \label{A:luna}
We now provide a refinement of Theorem \ref{T:field} in the case that $\cX = [X/G]$ is a quotient stack, generalizing Luna's \'etale slice theorem.
\begin{theorem}\label{T:luna}
Let $X$ be a quasi-separated algebraic space, locally of finite type over an algebraically closed field $k$. Let $G$ be an affine smooth group scheme acting on $X$. Let $x \in X(k)$ be a point with linearly reductive stabilizer $G_x$. Then there exists an affine scheme $W$ with an action of $G_x$ which fixes a point $w$, and an unramified $G_x$-equivariant morphism $(W,w) \to (X,x)$ such that $\widetilde{f} \co W \times^{G_x} G \to X$ is \'etale.\footnote{Here, $W \times^{G_x} G$ denotes the quotient $(W \times G) / G_x$. Note that there is an identification of GIT quotients $(W \times^{G_x} G) /\!\!/ G \cong W /\!\!/ G_x$.}
If $X$ admits a good GIT quotient $X \to X /\!\!/ G$, then it is possible to arrange that the induced morphism $W /\!\!/ G_x \to X /\!\!/ G$ is \'etale and $W \times^{G_x} G \cong W /\!\!/ G_x \times_{X /\!\!/ G} X$.
Let $N_x = T_{X,x} / T_{G \cdot x, x}$ be the normal space to the orbit at $x$; this inherits a natural linear action of $G_x$. If $x \in X$ is smooth, then it can be arranged that there is an \'etale $G_x$-equivariant morphism $W \to N_x$ such that $W /\!\!/ G_x \to N_x /\!\!/ G_x$ is \'etale and
$$\xymatrix{
N_x \times^{G_x} G \ar[d] & W \times^{G_x} G\ar[r]^-{\widetilde f} \ar[d] \ar[l] & X \\
N_x /\!\!/ G_x & W /\!\!/ G_x \ar[l] \ar@{}[ul]|\square &
}$$
is cartesian.
\end{theorem}
\begin{proof}
By applying Theorem \ref{T:sumi3}, we can find an affine scheme $X'$ with an action of $G$ and a $G$-equivariant, \'etale morphism $X' \to X$. This reduces the theorem to the case when $X$ is affine, which was established in \cite[p.~97]{luna}, cf.\ Remark~\ref{R:luna}.
\end{proof}
\begin{remark}\label{R:luna}
The theorem above follows from Luna's \'etale slice theorem \cite{luna} if $X$ is affine. In this case, Luna's \'etale slice theorem is stronger than Theorem \ref{T:luna} as it asserts additionally that $W \to X$ can be arranged to be a locally closed immersion (which is obtained by choosing a $G_x$-equivariant section of $T_{X,x} \to N_x$ and then restricting to an open subscheme of the inverse image of $N_x$ under a $G_x$-equivariant \'etale morphism $X \to T_{X,x}$).
Note that while \cite{luna} assumes that $\mathrm{char}(k) = 0$ and $G$ is reductive, the argument goes through unchanged in arbitrary characteristic if $G$ is smooth, and $G_x$ is smooth and linearly reductive. Moreover, with minor modifications, the argument in \cite{luna} is also valid if $G_x$ is not necessarily smooth.
\end{remark}
\begin{remark}\label{R:luna-alper-kresch}
More generally, if $X$ is a normal scheme, it is shown in \cite[\S 2.1]{alper-kresch} that $W \to X$ can be arranged to be a locally closed immersion. However, when $X$ is not normal or is not a scheme, one cannot always arrange $W \to X$ to be a locally closed immersion and therefore we must allow unramified ``slices" in the theorem above.
\end{remark}
\subsection{Existence of equivariant versal deformations for curves} \label{A:mv_curve}
By a \emph{curve}, we mean a proper scheme over $k$ of pure dimension one.
An $n$-pointed curve is a curve $C$ together with $n$ points $p_1,\dots,p_n\in
C(k)$. The points are not required to be smooth or distinct. We introduce
the following condition on $(C,\{p_j\})$:
\begin{enumerate}[label=($\dagger$),ref=$\dagger$]
\item\label{cond:g1-marked} every connected component of $C$ of arithmetic
genus $1$ contains a point $p_j$.
\end{enumerate}
\begin{theorem} \label{T:curves}
Let $k$ be an algebraically closed field and let $(C,\{p_j\})$ be an
$n$-pointed reduced curve over $k$ satisfying \itemref{cond:g1-marked}.
Suppose that a
linearly reductive group scheme $H$ acts on $C$.
If $\Aut(C,\{p_j\})$ is smooth, then there exist an affine scheme $W$ of finite type
over $k$ with an action of $H$ fixing a point $w \in W$
and a miniversal deformation
$$\xymatrix{
\cC \ar[d] & C \ar[l] \ar[d]\\
W\ar@/^/[u]^{s_j} & \Spec k\ar@/_/[u]_{p_j} \ar[l]_{w}\ar@{}[ul]|\square
}$$
of $C \cong \cC_w$ such that there exists an
action of $H$ on the total family $(\cC,\{s_j\})$ compatible with the actions of $H$ on $W$
and $C$.
\end{theorem}
The theorem above was proven for Deligne--Mumford semistable curves in \cite{alper-kresch}.
\begin{proof}
The stack $\mathcal{U}_n$ parameterizing all $n$-pointed proper schemes of
dimension $1$ is algebraic and quasi-separated~\cite[App.~B]{smyth_towards-a-classification-modular-compactifications}.
The substack $\mathfrak{M}_n\subset \mathcal{U}_n$ parameterizing reduced
$n$-pointed curves is open, and the substack $\mathfrak{M}^\dagger_n\subset
\mathfrak{M}_n$, parameterizing reduced $n$-pointed curves satisfying
\itemref{cond:g1-marked}, is open and closed.
We claim that $\mathfrak{M}^\dagger_n$ has affine stabilizers. To see this, let
$(C,\{p_j\})$ be an $n$-pointed curve satisfying \itemref{cond:g1-marked}.
The stabilizer of $(C,\{p_j\})$ is a closed subgroup $\Aut(C,
\{p_j\})\subseteq \Aut(\widetilde{C}, Z)$ where $\eta\colon \widetilde{C}\to C$ is the normalization, $Z=\eta^{-1}(\Sing C\cup
\{p_1,p_2,\dots,p_n\})$ with the reduced structure and $\Aut(\widetilde{C},Z)$ denotes the automorphisms of $\widetilde{C}$ that map $Z$ onto $Z$. Since $\Aut(\pi_0(\widetilde{C}))$ is finite, it is
enough to prove that $\Aut(\widetilde{C}_i,Z\cap \widetilde{C}_i)$ is affine for every component $\widetilde{C}_i$ of $\widetilde{C}$. This holds
since \itemref{cond:g1-marked} guarantees that either $g(\widetilde{C}_i)\neq 1$ or
$Z\cap \widetilde{C}_i\neq \emptyset$.
Since $H$ is linearly reductive and $\Aut(C,\{p_j\})/H$ is smooth, Theorem \ref{T:field} provides an affine scheme $W$ with an action of $H$, a $k$-point $w \in W$ fixed by $H$ and a smooth map $[W/H] \to \mathfrak{M}^\dagger_n$ with $w$ mapping to $(C, \{p_j\})$. This yields a family of $n$-pointed curves $\cC \to W$ with an action of $H$ on $\cC$ compatible with the action on $W$ and $C \cong \cC_w$. The map $W \to \mathfrak{M}^\dagger_n$ is smooth and adic at $w$. Indeed, it is flat by construction and the fiber at $(C,\{p_j\})$ is $\Spec k\to B\Aut(C,\{p_j\})$ which is smooth. In particular, the tangent space of $W$ at $w$ coincides with the tangent space of $\mathfrak{M}^\dagger_n$ at $(C,\{p_j\})$; that is, $\cC \to W$ is a miniversal deformation of $C$ (see Remark~\ref{R:miniversal}).
\end{proof}
\begin{remark}
From the proof, it is clear that Theorem \ref{T:curves} is valid for pointed curves such that every deformation has an affine automorphism group. It was pointed out to us by Bjorn Poonen that if $(C, \{p_j\})$ is an $n$-pointed curve and no connected component of $C_{\mathrm{red}}$ is a smooth unpointed curve of genus 1, then $\Aut(C, \{p_j\})$ is an affine group scheme over $k$. It follows that Theorem \ref{T:curves} is valid for pointed curves $(C, \{p_j\})$ satisfying the property that for every deformation $(C', \{p'_j\})$ of $(C, \{p_j\})$, there is no connected component of $C'_{\mathrm{red}}$ which is a smooth unpointed curve of genus 1.
\end{remark}
In a previous version of this article, we erroneously claimed that if no
connected component of $C_{\mathrm{red}}$ is a smooth unpointed curve of genus 1, then
this also holds for every deformation of $C$. The following example shows that
this is not the case.
\begin{example}
Let $S=\AA^1_\CC=\Spec \CC[t]$, let $S'=\Spec \CC[t,x]/(x^2-t^2)$ and let
$\cC=E\times_\CC S'$ where $E$ is a smooth genus $1$ curve and $\cC\to S'\to S$
is the natural map. Then the fiber over $t=0$ is $E\times_\CC \Spec
\CC[x]/(x^2)$ and the fiber over $t=1$ is $E\amalg E$. Choosing a point
$p\in E$ and a section of $S'\to S$, e.g., $x=t$, gives a section $s$ of $\cC$
which only passes through one of the two components over $t=1$. In particular,
the fiber of $(\cC,s)$ over $t=1$ does not have affine automorphism
group. Alternatively, one could in addition pick a pointed curve $(C,c)$ of
genus at least $1$ and glue $\cC$ with $C\times S$ along $s$ and $c$. This
gives an unpointed counterexample.
\end{example}
\begin{remark} If $\cC \to S$ is a family of pointed curves such that there is no connected component of any fiber whose reduction is a smooth unpointed curve of genus 1, then the automorphism group scheme $\Aut(\cC/S) \to S$ of $\cC$ over $S$ has affine fibers but need not be affine (or even quasi-affine). This even fails for families of Deligne--Mumford semistable curves; see \cite[\S4.1]{alper-kresch}.
\end{remark}
\subsection{Good moduli spaces} \label{A:gms_app}
In the following result, we determine the \'etale-local structure of good moduli space morphisms.
\begin{theorem} \label{T:consequences-gms} Let $\cX$ be a noetherian algebraic stack over an algebraically closed field $k$. Let $\cX \to X$ be a good moduli space with affine diagonal.
If $x\in \cX(k)$ is a
closed point, then there exists an affine scheme $\Spec A$ with an action of $G_x$ and a cartesian diagram
$$\xymatrix{
[\Spec A / G_x] \ar[r] \ar[d] & \cX \ar[d]^{\pi} \\
\Spec A /\!\!/ G_x \ar[r]\ar@{}[ur]|\square & X
}$$ such that $\Spec A /\!\!/ G_x \to X$ is an \'etale neighborhood of $\pi(x)$.
\end{theorem}
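To illustrate the statement in the simplest case, take $\cX=[\AA^1/\mathbb{G}_m]$ with $\mathbb{G}_m$ acting by scaling. Then
\[
\Gamma(\cX,\oh_{\cX}) = k[x]^{\mathbb{G}_m} = k,
\]
so the good moduli space is $X=\Spec k$, and the origin is the unique closed point, with stabilizer $G_x=\mathbb{G}_m$. Here one may take $A=k[x]$, with $\Spec A /\!\!/ \mathbb{G}_m = \Spec k \to X$ the identity, and the cartesian diagram above is trivial; the content of the theorem is that an arbitrary good moduli space morphism with affine diagonal is, \'etale-locally on $X$, of this quotient form $[\Spec A/G_x] \to \Spec A /\!\!/ G_x$.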
In the proof of Theorem \ref{T:consequences-gms} we will use the following minor variation of \cite[Thm.~6.10]{alper-quotient}. We provide a direct proof here for the convenience of the reader.
\begin{proposition}[Luna's fundamental lemma] \label{P:luna}
Let $f \co \cX \to \cY$ be an \'etale, separated and representable morphism of noetherian algebraic stacks such that there is a commutative diagram
$$\xymatrix{
\cX \ar[r]^f \ar[d]_{\pi_{\cX}} & \cY \ar[d]^{\pi_{\cY}} \\
X \ar[r] & Y
}$$
where $\pi_{\cX}$ and $\pi_{\cY}$ are good moduli spaces. Let $x \in |\cX|$ be a closed point. If $f(x) \in |\cY|$ is closed and $f$ induces an isomorphism of stabilizer groups at $x$, then there exists an open neighborhood $U \subset X$ of $\pi_{\cX}(x)$ such that $U \to X \to Y$ is \'etale and $\pi_{\cX}^{-1}(U) \cong U \times_Y \cY$.
\end{proposition}
\begin{proof}
By Zariski's main theorem \cite[Cor.~16.4(ii)]{MR1771927}, there is a factorization $\cX \hookrightarrow \widetilde{\cX} \to \cY$, where $\cX \hookrightarrow \widetilde{\cX}$ is an open immersion and $\widetilde{\cX} \to \cY$ is finite. There is a good moduli space $\pi_{\widetilde{\cX}} \co \widetilde{\cX} \to \widetilde{X}$ such that the induced morphism $\widetilde{X}\to Y$ is finite \cite[Thm.\ 4.16(x)]{alper-good}. Note that $x\in |\widetilde{\cX}|$ is closed. By Lemma~\ref{L:shrink} we may thus replace $\cX$ with an open neighborhood $\cU \subset \cX$ of $x$ that is saturated with respect to $\pi_{\widetilde{\cX}}$ (i.e., $\cU = \pi_{\widetilde{\cX}}^{-1}(\pi_{\widetilde{\cX}}(\cU))$). Then $X\to \widetilde{X}$ becomes an open immersion so that $X\to Y$ is quasi-finite.
Further, the question is \'etale local on $Y$. Hence, we may assume that $Y$ is the spectrum of a strictly henselian local ring with closed point $\pi_{\cY}(f(x))$. Since $Y$ is henselian, after possibly shrinking $X$ further, we can arrange that $X \to Y$ is finite with $X$ the spectrum of a local ring with closed point $\pi_{\cX}(x)$. Then $X\to \widetilde{X}$ is a closed and open immersion, hence so is $\cX\to \widetilde{\cX}$. It follows that $\cX\to \cY$ is finite.
As $f$ is stabilizer-preserving at $x$ and $Y$ is strictly henselian, $f$ induces an isomorphism of residual gerbes at $x$. We conclude that $f$ is a finite, \'etale morphism of degree $1$, hence an isomorphism.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:consequences-gms}]
By Theorem \ref{T:good-finite-type}, $\cX \to X$ is of finite type. We may assume that $X=\Spec R$, where $R$ is a noetherian $k$-algebra. By
noetherian approximation along $k \to R$, there is a finite type $k$-algebra $R_0$ and an
algebraic stack $\cX_0$ of finite type over $\Spec R_0$ with affine diagonal such that $\cX \simeq \cX_0 \times_{\Spec R_0} \Spec R$. We may
also arrange that the image $x_0$ of $x$ in $\cX_0$ is closed with linearly reductive
stabilizer $G_x$. We now apply Theorem \ref{T:field} to find a pointed affine \'etale
$k$-morphism $f_0 \colon ([\Spec A_0/G_x],w_0) \to (\cX_0,x_0)$ that induces an
isomorphism of stabilizers at~$w_0$. Pulling this back along $\Spec R \to \Spec R_0$, we
obtain an affine \'etale morphism $f \colon [\Spec A/G_x] \to \cX$ inducing an
isomorphism of stabilizers at all points lying over $w_0$. The result now
follows from Luna's fundamental lemma for stacks (Proposition \ref{P:luna}).
\end{proof}
The following corollary answers a question of Geraschenko--Zureick-Brown in the negative~\cite[Qstn.\ 32]{geraschenko-brown}: does there exist an algebraic stack, with affine diagonal and good moduli space a field, that is not a quotient stack? In the equicharacteristic setting, this result also settles a conjecture of theirs: formal GAGA holds for good moduli spaces with affine diagonal~\cite[Conj.\ 28]{geraschenko-brown}. The general case will be treated in forthcoming work \cite{ahr2}.
\begin{corollary}\label{C:gb-c28}
Let $\cX$ be a noetherian algebraic stack over a field $k$ (not assumed to be algebraically closed) with affine diagonal. Suppose that there exists a good moduli space $\pi \colon \cX \to \Spec R$, where $(R,\mathfrak{m})$ is a complete local ring.
\begin{enumerate}
\item \label{C:gb-c28:res} Then $\cX\cong[\Spec B/\GL_n]$; in particular, $\cX$ has the resolution property; and
\item \label{C:gb-c28:fGAGA} the natural functor
\[
\Coh(\cX) \to \varprojlim \Coh\bigl( \cX \times_{\Spec R} \Spec
R/\mathfrak{m}^{n+1}\bigr)
\]
is an equivalence of categories.
\end{enumerate}
\end{corollary}
\begin{proof}
By \cite[Thm.~1]{geraschenko-brown}, we have \itemref{C:gb-c28:res}$\implies$\itemref{C:gb-c28:fGAGA}; thus, it suffices to prove \itemref{C:gb-c28:res}.
If $R/\mathfrak{m}
=k$ and $k$ is algebraically closed, then $\cX=[\Spec A/G_x]$ by Theorem
\ref{T:consequences-gms}. Embed $G_x \subseteq \GL_{N,k}$ for some $N$.
Then $\cX=[U/\GL_{N,k}]$ where $U=(\Spec A \times \GL_{N,k})/G_x$ is an
algebraic space. Since $U$ is affine over $\cX$ it is cohomologically affine,
hence affine by Serre's criterion \cite[Prop.~3.3]{alper-good}.
In this case, \itemref{C:gb-c28:res} holds even if $R$ is not
complete but merely henselian.
If $R/\mathfrak{m}=k$ and $k$ is not algebraically closed, then we proceed as follows. Let
$\overline{k}$ be an algebraic closure of $k$. By \cite[$0_{\mathrm{III}}$.10.3.1.3]{EGA},
$\overline{R}=R\otimes_k \overline{k}=\varinjlim_{k \subseteq k' \subseteq \overline{k}} R\otimes_k k'$ is a noetherian local ring with maximal ideal $\overline{\mathfrak{m}} =
\mathfrak{m}\overline{R}$ and residue field $\overline{R}/\overline{\mathfrak{m}}\cong \overline{k}$, and the induced
map $R/\mathfrak{m} \to \overline{R}/\overline{\mathfrak{m}}$ coincides with $k \to \overline{k}$. Since each $R\otimes_k k'$ is henselian, $\overline{R}$ is henselian.
By the case considered above, there is a vector bundle $\overline{\shv{E}}$ on
$\cX_{\overline{k}}$ such that the associated frame bundle is an algebraic space (even
an affine scheme). Equivalently, for every geometric point $y$ of $\cX$,
the stabilizer $G_y$ acts faithfully on $\overline{\shv{E}}_y$, cf.\
\cite[Lem.~2.12]{ehkv}.
We can find a finite extension
$k \subseteq k' \subseteq \overline{k}$ and a vector bundle $\shv{E}$ on
$\cX_{k'}$ that pulls back to $\overline{\shv{E}}$. If $p\colon \cX_{k'}\to \cX$
denotes the natural map, then $p_*\shv{E}$ is a vector bundle and the counit map
$p^*p_*\shv{E}\to \shv{E}$ is surjective. In particular, the stabilizer
actions on $p_*\shv{E}$ are also faithful so the frame bundle $U$ of
$p_*\shv{E}$ is an algebraic space
and $\cX=[U/\GL_{N'}]$ is a quotient stack.
Since $\cX$ is cohomologically affine
and $U \to [U/\GL_{N'}]$ is affine, $U$ is affine by Serre's criterion
\cite[Prop.~3.3]{alper-good}.
In general, let $K=R/\mathfrak{m}$. Since $R$ is a complete $k$-algebra, it admits a coefficient
field; thus, it is also a $K$-algebra. We are now free to replace $k$ with
$K$ and the result follows.
\end{proof}
\begin{remark} If $k$ is algebraically closed, then in Corollary \ref{C:gb-c28}\itemref{C:gb-c28:res} above, $\cX$ is in fact isomorphic to a quotient stack $[\Spec A / G_x]$ where $G_x$ is the stabilizer of the unique closed point. If in addition $\cX$ is smooth, then $\cX \cong [N_x/G_x]$ where $N_x$ is the normal space to $x$ (or equivalently the tangent space of $\cX$ at $x$ if $G_x$ is smooth).
\end{remark}
\subsection{Existence of coherent completions} \label{A:coherent-completion}
Recall that a \emph{complete local stack} is an excellent local stack $(\cX,x)$
with affine stabilizers such that $\cX$ is coherently complete along the
residual gerbe $\cG_x$ (Definition~\ref{D:complete-local-stack}).
The \emph{coherent completion} of a noetherian stack $\cX$ at a point $x$
is a complete local stack $(\hat{\cX}_x,\hat{x})$ together with a morphism $\eta\colon (\hat{\cX}_x,\hat{x}) \to (\cX,x)$ inducing isomorphisms of $n$th infinitesimal neighborhoods of $\hat{x}$ and $x$. If $\cX$ has affine stabilizers, then the pair $(\hat{\cX}_x,\eta)$ is unique up to unique $2$-isomorphism by Tannaka duality (Corollary~\ref{C:tannakian}).
The next
result asserts that the coherent completion always exists under very mild hypotheses.
\begin{theorem} \label{T:complete}
Let $\cX$ be a quasi-separated algebraic stack with affine stabilizers, locally of finite type over an algebraically closed field $k$. For any point $x \in \cX(k)$ with linearly reductive stabilizer $G_x$, the coherent completion $\hat{\cX}_x$ exists.
\begin{enumerate}
\item \label{T:complete:excellent}
The coherent completion is an excellent quotient stack
$\hat{\cX}_x=[\Spec B/G_x]$, and is
unique up to unique $2$-isomorphism. The invariant ring $B^{G_x}$ is
the completion of an algebra of finite type over $k$ and $B^{G_x}\to B$
is of finite type.
\item \label{T:complete:etale-pres}
If $f \co (\cW, w) \to (\cX,x)$ is an \'etale morphism such that $\cW=[\Spec A/G_x]$, the point $w\in |\cW|$ is closed and $f$ induces an isomorphism of stabilizer groups at $w$, then $\hat{\cX}_x = \cW \times_W \Spec \hat{\oh}_{W,\pi(w)}$, where $\pi \co \cW \to W = \Spec A^{G_x}$ is the morphism to the GIT quotient.
\item \label{T:complete:gms}
If $\pi \co \cX \to X$ is a good moduli space with affine diagonal, then $\hat{\cX}_x = \cX \times_X \Spec \hat{\oh}_{X,\pi(x)}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Theorem \ref{T:field} gives an \'etale morphism $f \co (\cW, w) \to (\cX,x)$, where $\cW=[\Spec A/G_x]$ and $f$ induces an isomorphism of stabilizer groups at the closed point $w$. The main statement and Parts \eqref{T:complete:excellent} and \eqref{T:complete:etale-pres} follow by taking $\hat{\cX}_x = \cW \times_W \Spec \hat{\oh}_{W,\pi(w)}$ and $B=A\otimes_{A^{G_x}} \widehat{A^{G_x}}$.
Indeed, $\hat{\cX}_x=[\Spec B/G_x]$ is coherently complete by Theorem~\ref{key-theorem} and $B$ is excellent since it is of finite type over the complete local ring $B^{G_x}=\widehat{A^{G_x}}$. Part \eqref{T:complete:gms} follows from \eqref{T:complete:etale-pres} after applying Theorem~\ref{T:consequences-gms}.
\end{proof}
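Two basic cases may help to orient the reader. If $\cX=\AA^1$ and $x$ is the origin, so that $G_x$ is trivial, then Theorem~\ref{T:complete}\itemref{T:complete:etale-pres} recovers the usual completion:
\[
\hat{\cX}_x = \Spec k\llbracket x\rrbracket.
\]
If instead $\cX=[\AA^1/\mathbb{G}_m]$ with the scaling action and $x$ is the image of the origin, then $A=k[x]$ and $A^{G_x}=k$ is already complete, so
\[
\hat{\cX}_x = [\AA^1/\mathbb{G}_m] = \cX;
\]
that is, the quotient stack $[\AA^1/\mathbb{G}_m]$ is coherently complete along the residual gerbe $B\mathbb{G}_m$ at $x$ by Theorem~\ref{key-theorem}.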
\begin{remark} \label{R:miniversal-completion-finite-type}
With the notation of Theorem~\ref{T:complete}~\itemref{T:complete:etale-pres}, observe that if $G_x$ is smooth, then $(\Spec A,w)\to (\cX,x)$ is smooth and adic so the formal miniversal deformation space of $x$ is $\hat{\Def}(x) = \Spf \hat{A}$ where $\hat{A}$ denotes the completion of $A$ at the $G_x$-fixed point $w$ (see Remark~\ref{R:miniversal}). The stabilizer $G_x$ acts on $\Spf \hat{A}$ and its versal object, and it follows from Theorem \ref{key-theorem} that there is an identification $\hat{A}^{G_x} = \widehat{A^{G_x}}$. In particular, $\hat{A}^{G_x}$ is the completion of a $k$-algebra of finite type.
\end{remark}
\begin{remark} \label{R:uniqueness}
If there exists an \'etale neighborhood $f \co \cW=[\Spec A/G_x] \to \cX$ of $x$ such that $A^{G_x} = k$, then the pair $(\cW, f)$ is unique up to unique 2-isomorphism. This follows from Theorem \ref{T:complete} as $\cW$ is the coherent completion of $\cX$ at $x$.
\end{remark}
The \emph{henselization of $\cX$ at $x$} is the stack
$\cX^h_x=\cW\times_W \Spec (A^{G_x})^h$. This stack also satisfies a
universal property (initial among pro-\'etale neighborhoods of the residual
gerbe at $x$) and will be treated in forthcoming work \cite{ahr2}.
\subsection{\'Etale-local equivalences} \label{A:etale}
Before we state the next result, let us recall that if $\cX$ is an algebraic stack and $x \in \cX(k)$ is a point, then a formal miniversal deformation space of $x$ is a one-point affine formal scheme $\hat{\Def}(x)$, together with a formally smooth morphism $\hat{\Def}(x) \to \cX$ that is an isomorphism on tangent spaces at $x$; see Remark~\ref{R:miniversal-completion-finite-type}. If the stabilizer group scheme $G_x$ is smooth and linearly reductive, then $\hat{\Def}(x)$ inherits an action of $G_x$.
\begin{theorem} \label{T:etale}
Let $\cX$ and $\cY$ be quasi-separated algebraic stacks with affine stabilizers, locally of finite type over an algebraically closed field $k$. Suppose $x \in \cX(k)$ and $y \in \cY(k)$ are points with smooth linearly reductive stabilizer group schemes $G_x$ and $G_y$, respectively. Then the following are equivalent:
\begin{enumerate}
\item\label{TI:etale:miniversal}
There exist an isomorphism $G_x \to G_y$ of group schemes and an isomorphism $\hat{\Def}(x) \to \hat{\Def}(y)$ of formal miniversal deformation spaces which is equivariant with respect to $G_x \to G_y$.
\item\label{TI:etale:completion}
There exists an isomorphism $\hat{\cX}_x \to \hat{\cY}_y$.
\item\label{TI:etale:etale}
There exist an affine scheme $\Spec A$ with an action of $G_x$, a point $w \in \Spec A$ fixed by $G_x$, and a diagram of
\'etale morphisms
$$\xymatrix{
& [\Spec A /G_x] \ar[ld]_f \ar[rd]^g \\
\cX & & \cY
}$$
such that $f(w) = x$ and $g(w) = y$, and both $f$ and $g$ induce isomorphisms of stabilizer groups at $w$.
\end{enumerate}
If additionally $x \in |\cX|$ and $y \in |\cY|$ are smooth, then the conditions above are equivalent to the existence of an isomorphism $G_x \to G_y$ of group schemes and an isomorphism $T_{\cX,x} \to T_{\cY,y}$ of tangent spaces which is equivariant under $G_x \to G_y$.
\end{theorem}
\begin{remark} If the stabilizers $G_x$ and $G_y$ are not smooth, then the theorem above remains true (with the same argument) if the formal miniversal deformation spaces are replaced with formal completions of equivariant flat adic presentations (Definition \ref{D:adic}) and the tangent spaces are replaced with normal spaces. Note that the composition $\Spec A \to [\Spec A/ G_x] \to \cX$ produced by Theorem \ref{T:field} is a $G_x$-equivariant flat adic presentation.
\end{remark}
\begin{proof}[Proof of Theorem \ref{T:etale}]
The implications \itemref{TI:etale:etale}$\implies$\itemref{TI:etale:completion}$\implies$\itemref{TI:etale:miniversal} are immediate. We also have \itemref{TI:etale:miniversal}$\implies$\itemref{TI:etale:completion} as $\cX_x^{[n]} = [\hat{\Def}(x)^{[n]} / G_x]$ and $\cY_y^{[n]} = [\hat{\Def}(y)^{[n]} / G_y]$. We now show that \itemref{TI:etale:completion}$\implies$\itemref{TI:etale:etale}. We are given an isomorphism $\alpha \co \hat{\cX}_x \iso \hat{\cY}_y$. Let
$f \co (\cW,w)\to (\cX,x)$ be an \'etale neighborhood as in
Theorem~\ref{T:field}, that is, $\cW = [\Spec A/G_x]$ and $f$ induces an isomorphism of stabilizer groups at the closed point $w$. Let $W=\Spec A^{G_x}$ denote the good moduli space of
$\cW$ and let $w_0$ be the image of $w$. Then $\hat{\cX}_x=\cW\times_W \Spec
\hat{\oh}_{W,w_0}$. The functor $F\co (T\to W)\mapsto \Hom(\cW\times_W T,\cY)$
is locally of finite presentation. Artin approximation applied to $F$ and
$\alpha\in F(\Spec \hat{\oh}_{W,w_0})$ thus gives an \'etale morphism
$(W',w')\to (W,w_0)$ and a morphism $\varphi\co \cW':=\cW\times_W W'\to \cY$ such
that $\varphi^{[1]}\co \cW'^{[1]}_{w'}\to \cY_y^{[1]}$ is an isomorphism.
Since $\hat{\cW'}_{w'}\cong \hat{\cX}_x\cong \hat{\cY}_y$, it
follows that $\varphi$ induces an isomorphism $\hat{\cW'}_{w'}\to \hat{\cY}_y$ by
Proposition~\ref{P:closed/iso-cond:complete}~\itemref{PI:iso:complete}. After replacing $W'$ with an open
neighborhood we thus obtain an \'etale morphism $(\cW',w')\to (\cY,y)$.
The final statement is clear from Theorem \ref{T:smooth}.
\end{proof}
\subsection{The resolution property holds \'etale-locally}\label{A:resolution-property-etale}
In~\cite[Def.~2.1]{rydh-noetherian}, an algebraic stack $\cX$
is said to be of \emph{global type} (resp.\ \emph{s-global type}) if there
is a representable (resp.\ representable and separated) \'etale surjective
morphism $p\colon [V/\GL_n]\to \cX$ of finite presentation where $V$ is
quasi-affine. That is, the resolution property holds for $\cX$
\'etale-locally. We will show that if $\cX$ has linearly reductive stabilizers
at closed points and affine diagonal, then $\cX$ is of s-global type. We begin
with a more precise statement.
\begin{theorem}\label{T:global-type}
Let $\cX$ be a quasi-separated algebraic stack, of finite type over
a perfect (resp.\ arbitrary) field $k$, with
affine stabilizers. Assume that for every closed point $x\in |\cX|$, the unit
component $G_x^0$ of the stabilizer group scheme $G_x$ is linearly reductive.
Then there exists
\begin{enumerate}
\item a finite field extension $k'/k$;
\item a linearly reductive group scheme $G$ over $k'$;
\item a finitely generated $k'$-algebra $A$ with an action of $G$; and
\item an \'etale (resp.\ quasi-finite flat) surjection
$p\colon [\Spec A/G] \to \cX$.
\end{enumerate}
Moreover, if $\cX$ has affine diagonal, then $p$ can be arranged to be affine.
\end{theorem}
\begin{proof}
First assume that $k$ is algebraically closed. Since $\cX$ is quasi-compact,
Theorem~\ref{T:field} gives an \'etale surjective morphism
$q\co [U_1/G_1]\amalg\dots\amalg [U_n/G_n]\to \cX$ where $G_i$ is a linearly
reductive group scheme over $k$ acting on an affine scheme $U_i$. If we let
$G=G_1\times G_2\times\dots\times G_n$ and let $U$ be the disjoint union of the
$U_i\times G/G_i$, we obtain an \'etale surjective morphism $p\co [U/G]\to \cX$.
If $\cX$ has affine diagonal, then we can assume that $q$, and hence $p$, are
affine.
For general $k$, write the algebraic closure $\overline{k}$ as a union of its
finite subextensions $k'/k$. A standard limit argument gives a solution over
some $k'$ and we compose this with the \'etale (resp.\ flat) morphism
$\cX_{k'}\to \cX$.
\end{proof}
\begin{corollary}\label{C:s-global-type}
Let $\cX$ be an algebraic stack with affine diagonal and of finite type over a
field $k$ (not necessarily algebraically closed). Assume that for every closed
point $x\in |\cX|$, the unit component $G_x^0$ of the stabilizer group scheme
$G_x$ is linearly reductive. Then $\cX$ is of s-global type.
\end{corollary}
\begin{proof}
By Theorem~\ref{T:global-type} there is an affine,
quasi-finite and faithfully flat morphism $\cW\to \cX$ of finite
presentation where $\cW=[\Spec A/G]$ for a linearly reductive group scheme
$G$ over $k'$. If we choose an embedding $G\hookrightarrow \GL_{n,k'}$, then we can
write $\cW=[\Spec B/\GL_n]$; see the proof of Corollary~\ref{C:gb-c28}.
By~\cite[Prop.~2.8~(iii)]{rydh-noetherian}, it follows that $\cX$ is of
s-global type.
\end{proof}
Geraschenko and Satriano define generalized toric Artin stacks in terms of generalized stacky fans.
They establish that over an algebraically closed field of characteristic $0$, an Artin stack $\cX$ with finite quotient singularities is toric if and only if it has affine
diagonal, has an open dense torus $T$ acting on the stack, has linearly
reductive stabilizers, and $[\cX/T]$ is of global
type~\cite[Thm.~6.1]{GS-toric-stacks-2,GS-toric-stacks-2-erratum}. If $\cX$ has linearly reductive
stabilizers at closed points, then so does
$[\cX/T]$. Corollary~\ref{C:s-global-type} thus shows that the last condition
is superfluous.
\section{Global applications}\label{S:global_applications}
\subsection{Compact generation of derived categories} \label{A:compact-generation}
For results involving derived categories of quasi-coherent sheaves, perfect (or compact) generation of the unbounded derived category $\DCAT_{\QCoh}(\cX)$ continues to be an indispensable tool at one's disposal \cite{neeman_duality,BZFN}. We prove:
\begin{theorem}\label{T:compact-generation}
Let $\cX$ be an algebraic stack of finite type over a field $k$ (not assumed to be algebraically closed) with
affine diagonal. If the stabilizer group $G_x$ has linearly reductive
identity component $G_x^0$ for every closed point of $\cX$, then $\cX$ has the Thomason condition; that is,
\begin{enumerate}
\item $\DCAT_{\QCoh}(\cX)$ is compactly generated by a countable set of
perfect complexes; and
\item for every open immersion $\cU\subseteq \cX$, there exists a compact and perfect complex $P \in \DCAT_{\QCoh}(\cX)$ with support precisely $\cX\setminus \cU$.
\end{enumerate}
\end{theorem}
\begin{proof} This follows from
Corollary~\ref{C:s-global-type}
together with \cite[Thm.~B]{perfect_complexes_stacks} (characteristic $0$) and
Theorem~\ref{T:global-type} together with
\cite[Thm.~D]{hallj_dary_alg_groups_classifying} (positive characteristic).
\end{proof}
Theorem \ref{T:compact-generation} was previously only known for stacks with finite stabilizers~\cite[Thm.~A]{perfect_complexes_stacks} or quotients of quasi-projective schemes by a linear action of an affine algebraic group in characteristic $0$ \cite[Cor.~3.22]{BZFN}.
In positive characteristic, the theorem is almost sharp: if the reduced
identity component $(G_x)^0_\mathrm{red}$ is not linearly reductive, i.e., not a torus,
at some point $x$, then $\DCAT_{\QCoh}(\cX)$ is not compactly
generated~\cite[Thm.~1.1]{hallj_neeman_dary_no_compacts}.
If $\cX$ is an algebraic stack of finite type over $k$ with affine stabilizers such that either
\begin{enumerate}
\item the characteristic of $k$ is $0$; or
\item \emph{every} stabilizer is linearly reductive;
\end{enumerate}
then $\cX$ is concentrated, that is, a complex of $\oh_{\cX}$-modules with quasi-coherent cohomology is perfect if and only if it is a compact object of $\DCAT_{\QCoh}(\cX)$ \cite[Thm.~C]{hallj_dary_alg_groups_classifying}.
If $\cX$ admits a good moduli space $\pi\colon \cX\to X$ with affine diagonal,
then one of the two conditions holds by Theorem~\ref{T:consequences-gms}.
If $\cX$ does not admit a good moduli space and is of positive characteristic,
then it is not sufficient that closed points have linearly reductive
stabilizers as the following example shows.
\begin{example}
Let $\cX=[X/(\mathbb{G}_m\times \ZZ/2\ZZ)]$ be the quotient of the non-separated affine
line $X$ by the natural $\mathbb{G}_m$-action and the $\ZZ/2\ZZ$-action that swaps the
origins. Then $\cX$ has two points: one closed point with stabilizer group $\mathbb{G}_m$
and one open point with stabilizer group $\ZZ/2\ZZ$. Thus if $k$ has
characteristic two, then not every stabilizer group is linearly reductive and
there are non-compact perfect
complexes~\cite[Thm.~C]{hallj_dary_alg_groups_classifying}.
\end{example}
\subsection{Characterization of when $\cX$ admits a good moduli space} \label{A:gms}
Using the existence of completions, we can give an intrinsic characterization of those algebraic stacks that admit a good moduli space.
We will need one preliminary definition. We say that a geometric point $y \co \Spec K \to \cX$ is \emph{geometrically closed} if the image of $(y, \mathrm{id}) \co \Spec K \to \cX \otimes_k K$ is a closed point of $|\cX \otimes_k K|$.
\begin{theorem} \label{T:gms} Let $\cX$ be an algebraic stack with affine diagonal, locally of finite type over an algebraically closed field $k$. Then $\cX$ admits a good moduli space if and only if
\begin{enumerate}
\item\label{TI:gms:unique-closed}
For every point $y \in \cX(k)$, there exists a unique closed point in the closure $\overline{ \{ y \}}$.
\item\label{TI:gms:cc}
For every closed point $x \in \cX(k)$, the stabilizer group scheme $G_x$ is linearly reductive and the morphism $\hat{\cX}_x \to \cX$ from the coherent completion of $\cX$ at $x$ satisfies:
\begin{enumerate}
\item\label{TI:gms:stab-pres}
The morphism $\hat{\cX}_x \to \cX$ is stabilizer preserving at every point; that is, $\hat{\cX}_x \to \cX$ induces an isomorphism of stabilizer groups for every point $\xi \in |\hat{\cX}_x|$.
\item\label{TI:gms:geom-closed}
The morphism $\hat{\cX}_x \to \cX$ maps geometrically closed points to geometrically closed points.
\item\label{TI:gms:injective-on-k}
The map $\hat{\cX}_x(k) \to \cX(k)$ is injective.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{remark}
The quotient $[\mathbb{P}^1 / \mathbb{G}_m]$ (where $\mathbb{G}_m$ acts on $\mathbb{P}^1$ via multiplication) does not satisfy \itemref{TI:gms:unique-closed}.
If $\cX=[X/(\ZZ/2\ZZ)]$ is the quotient of the non-separated affine line $X$ by the $\ZZ/2\ZZ$-action which swaps the origins (and acts trivially elsewhere), then the map $\Spec k\llbracket x \rrbracket = \hat{\cX}_0 \to \cX$ from the completion at the origin does not satisfy \itemref{TI:gms:stab-pres}.
If $\cX=[(\AA^2 \setminus 0) / \mathbb{G}_m]$ where $\mathbb{G}_m$ acts via $t \cdot (x,y) = (x,ty)$ and $p=(0,1) \in |\cX|$, then the map $\Spec k\llbracket x \rrbracket = \hat{\cX}_{p} \to \cX$ does not satisfy \itemref{TI:gms:geom-closed}.
If $\cX=[C/\mathbb{G}_m]$ where $C$ is the nodal cubic curve with a $\mathbb{G}_m$-action and $p \in |\cX|$ denotes the image of the node, then $[\Spec(k[x,y]/xy) / \mathbb{G}_m] = \hat{\cX}_{p} \to \cX$
does not satisfy \itemref{TI:gms:injective-on-k}. (Here $\mathbb{G}_m$ acts on coordinate axes via $t \cdot (x,y) = (tx, t^{-1}y)$.) These pathological examples in fact appear in many natural moduli stacks; see \cite[App.~A]{afsw-good}.
\end{remark}
\begin{remark}
Consider the non-separated affine line as a group scheme $G \to \AA^1$ whose
generic fiber is trivial but the fiber over the origin is $\ZZ/2\ZZ$ and let
$\cX=[\AA^1/G]$. In this case
\itemref{TI:gms:stab-pres} is not satisfied. Nevertheless, the stack
$\cX$ does have a good moduli space $X=\AA^1$ but $\cX\to X$ has
non-separated diagonal.
\end{remark}
\begin{remark}
When $\cX$ has finite stabilizers, then
conditions~\itemref{TI:gms:unique-closed}, \itemref{TI:gms:geom-closed} and
\itemref{TI:gms:injective-on-k} are always
satisfied. Condition~\itemref{TI:gms:stab-pres} is satisfied if and only if the
inertia stack is finite over $\cX$. In this case, the good moduli space of $\cX$
coincides with the coarse space of $\cX$, which exists by~\cite{keel-mori}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{T:gms}]
For the necessity of the conditions, properties of good moduli spaces imply \itemref{TI:gms:unique-closed} \cite[Thm.\ 4.16(ix)]{alper-good} and that $G_x$ is linearly reductive for a closed point $x \in |\cX|$ \cite[Prop.\ 12.14]{alper-good}. The rest of \itemref{TI:gms:cc} follows from the explicit description of the coherent completion in Theorem~\ref{T:complete}~\itemref{T:complete:gms}.
For the sufficiency, we claim it is enough to verify:
\begin{enumerate}
\item[(I)] For every closed point $x \in |\cX|$, there exists an affine \'etale morphism
$$f \co (\cX_1, w) \to (\cX, x),$$ such that $\cX_1 = [\Spec A / G_x]$ and for each closed point $w' \in \cX_1$,
\begin{enumerate}
\item $f$ induces an isomorphism of stabilizer groups at $w'$; and
\item $f(w')$ is closed.
\end{enumerate}
\item[(II)] For every $y \in \cX(k)$, the closed substack $\overline{ \{y\}}$ admits a good moduli space.
\end{enumerate}
This is proven in \cite[Thm.~1.2]{afsw-good} but we provide a quick proof here for the convenience of the reader. For a closed point $x \in |\cX|$, consider the \v{C}ech nerve of an affine \'etale morphism $f$ satisfying (I)
\[
\xymatrix{ \cdots \cX_3 \ar@<-1ex>[r] \ar[r] \ar@<1ex>[r] & \cX_2 \ar@<.5ex>[r]^{\smash{p_1}} \ar@<-.5ex>[r]_{p_2} & \cX_1 \ar[r]^-{\smash{f}} & \cX_0 = \im(f).
}
\]
Since $f$ is affine, there are good moduli spaces $\cX_i \to X_i$ for each $i \ge 1$, and morphisms $\xymatrix{\cdots X_3 \ar@<1ex>[r] \ar@<-1ex>[r] \ar[r] & X_2 \ar@<.5ex>[r] \ar@<-.5ex>[r] & X_1.}$
We claim that both projections $p_1, p_2 \co \cX_2 \to \cX_1$ send closed points to closed points. If $x_2 \in |\cX_2|$ is a closed point, to check that $p_j(x_2) \in |\cX_1|$ is also closed for either $j=1$ or $j=2$, we may replace the $\cX_i$ with the base changes along $\overline{ \{f(p_j(x_2))\}}\hookrightarrow \cX_0$. In this case, there is a good moduli space $\cX_0\to X_0$ by (II), and $\cX_1\to \cX_0$ sends closed points to closed points and is stabilizer preserving at closed points by (I).
Luna's fundamental lemma (Proposition \ref{P:luna}) then implies that $\cX_1 \cong \cX_0 \times_{X_0} X_1$ and the claim follows.
Luna's fundamental lemma (Proposition \ref{P:luna}) now applies to $p_j\co \cX_2\to \cX_1$ and says that $X_2 \to X_1$ is \'etale and that $p_j$ is the base change of this map along $\cX_1 \to X_1$. The analogous fact holds for the maps $X_3 \to X_2$. The universality of good moduli spaces induces an \'etale groupoid structure $X_2 \rightrightarrows X_1$. To check that this is an \'etale equivalence relation, it suffices to check that $X_2 \to X_1 \times X_1$ is injective on $k$-points, but this follows from the observation that $|\cX_2| \to |\cX_1| \times |\cX_1|$ is injective on closed points. It follows that there is an algebraic space quotient $X_0 := X_1/X_2$ and a commutative cube
\[
\xymatrix@=15pt{
&\cX_2 \ar[rr] \ar[dd] \ar[dl] & & \cX_1 \ar[dd] \ar[dl] \\
\cX_1 \ar[rr] \ar[dd] & & \cX_0 \ar[dd] & \\
& X_2 \ar[rr] \ar[dl] & & X_1 \ar[dl] \\
X_1 \ar[rr] & & X_0. &
}
\]
Since the top, left, and bottom faces are cartesian, it follows from \'etale descent that the right face is also cartesian and that $\cX_0 \to X_0$ is a good moduli space. Moreover, each closed point of $\cX_0$ remains closed in $\cX$ by (Ib). Therefore, if we apply the construction above to find such an open neighborhood $\cX_0$ around each closed point of $\cX$, the good moduli spaces of these open neighborhoods glue to form a good moduli space of~$\cX$.
We now verify condition (I). Let $x \in \cX(k)$ be a closed point. By Theorem \ref{T:field}, there
exist a quotient stack $\cW = [\Spec A / G_x]$ with a closed point $w \in |\cW|$ and an affine \'etale morphism $f \co (\cW, w) \to (\cX, x)$ such that $f$ is stabilizer preserving at $w$.
As the coherent completion of $\cW$ at $w$ is identified with $\hat{\cX}_x$, we have a 2-commutative diagram
\begin{equation} \label{E:completion}
\begin{split}
\xymatrix{
\hat{\cX}_x \ar[d] \ar[rd] & \\
\cW \ar[r]^(.4){f} & \cX.
} \end{split}
\end{equation}
The subset $Q_a\subseteq |\cW|$ consisting of points $\xi \in |\cW|$ such that
$f$ is stabilizer preserving at $\xi$ is constructible.
Since $Q_a$ contains every point in the image of $\hat{\cX}_x \to \cW$ by
hypothesis \itemref{TI:gms:stab-pres}, it follows that $Q_a$ contains a
neighborhood of $w$. Thus after replacing $\cW$ with an open saturated
neighborhood containing $w$ (Lemma~\ref{L:shrink}), we may assume that $f \co
\cW \to \cX$ satisfies condition (Ia).
Let $\pi\co \cW\to W$ be the good moduli space of $\cW$ and consider the
morphism $g=(f,\pi)\co \cW\to \cX\times W$. For a point $\xi\in |W|$, let
$\xi^0\in |\cW|$ denote the unique point that is closed in the fiber $\cW_\xi$.
Let $Q_b\subseteq |W|$ be the locus of points $\xi\in |W|$ such that $g(\xi^0)$
is closed in $|(\cX\times W)_\xi|=|\cX_{\kappa(\xi)}|$. This locus is
constructible. Indeed, the subset $\cW^0=\{\xi^0\;:\;\xi\in |W|\}\subseteq
|\cW|$ is easily seen to be constructible; hence so is $g(\cW^0)$ by
Chevalley's theorem. The locus $Q_b$ equals the set of points $\xi\in |W|$ such
that $g(\cW^0)_\xi$ is closed, which is constructible by~\cite[IV.9.5.4]{EGA}. The
locus $Q_b$ contains $\Spec \oh_{W,\pi(w)}$ by hypothesis
\itemref{TI:gms:geom-closed} (recall that $\hat{\cX}_x=\cW\times_W \Spec
\hat{\oh}_{W,\pi(w)}$). Therefore, after replacing $\cW$ with an open saturated
neighborhood of $w$, we may assume that $f \co \cW \to \cX$ satisfies condition
(Ib).
For condition (II), we may replace $\cX$ by $\overline{ \{y\} }$. By \itemref{TI:gms:unique-closed}, there is a unique closed point $x \in \overline{ \{y\} }$ and we can find a commutative diagram as in \eqref{E:completion} for $x$. By \itemref{TI:gms:geom-closed} we can, since $f$ is \'etale, also assume that $\cW$ has a unique closed point. Now let $B=\Gamma(\cW,\oh_{\cW})$; then $B$ is a domain of finite type over $k$ \cite[Thm.~4.16(viii),(xi)]{alper-good}; in particular, $B$ is a Jacobson domain. Since $\cW$ has a unique closed point, so too does $\Spec B$ \cite[Thm.~4.16(iii)]{alper-good}. Hence, $B$ is local; as $B$ is also Jacobson, it is a field, and so $\Gamma(\cW, \oh_{\cW}) = k$. By Theorem \ref{T:complete}\itemref{T:complete:etale-pres}, $\hat{\cX}_x = \cW$. By hypothesis \itemref{TI:gms:injective-on-k}, $f \co \cW \to \cX$ is an \'etale monomorphism, which is also surjective by hypothesis \itemref{TI:gms:unique-closed}. We conclude that $f \co \cW \to \cX$ is an isomorphism, establishing condition (II).
\end{proof}
\subsection{Algebraicity results} \label{A:algebraicity}
In this subsection, we fix a field $k$ (not necessarily algebraically closed), an algebraic space $X$ locally of finite type over $k$, and an algebraic stack $\cX$ of finite type over $X$ with affine diagonal over $X$ such that $\cX \to X$ is a good moduli space. We prove the following algebraicity results.
\begin{theorem}[Stacks of coherent sheaves]\label{T:coh}
The $X$-stack $\underline{\Coh}_{\cX/X}$, whose objects over $T \to X$ are finitely presented quasi-coherent sheaves on $\cX \times_X T$ flat over $T$, is an algebraic stack, locally of finite type over $X$, with affine diagonal over $X$.
\end{theorem}
\begin{corollary}[Quot schemes]\label{C:quot}
Let $\cF$ be a quasi-coherent $\oh_{\cX}$-module. The $X$-sheaf $\underline{\Quot}_{\cX/X}(\cF)$, whose objects over $T \to X$ are quotients $p_1^* \cF \to \cG$, where $p_1 \co \cX \times_X T \to \cX$ is the projection and $\cG$ is a finitely presented quasi-coherent $\oh_{\cX \times_X T}$-module that is flat over $T$, is a separated algebraic space over $X$. In addition, if $\cF$ is coherent, then $\underline{\Quot}_{\cX/X}(\cF)$ is locally of finite type over $X$.
\end{corollary}
\begin{corollary}[Hilbert schemes]\label{C:hilb}
The $X$-sheaf $\underline{\mathrm{Hilb}}_{\cX/X}$, whose objects over $T \to X$ are closed substacks $\cZ \subseteq \cX \times_X T$ such that $\cZ$ is flat and locally of finite presentation over $T$, is a separated algebraic space locally of finite type over $X$.
\end{corollary}
\begin{theorem}[Hom stacks] \label{T:hom}
Let $\cY$ be a quasi-separated algebraic stack, locally of finite type over $X$ with affine stabilizers.
If $\cX \to X$ is flat, then the $X$-stack $\underline{\Hom}_X(\cX, \cY)$, whose objects are pairs consisting of a morphism $T \to X$ of algebraic spaces and a morphism $\cX \times_X T \to \cY$ of algebraic stacks over $X$, is an algebraic stack, locally of finite type over $X$, with quasi-separated diagonal. If $\cY \to X$ has affine (resp.\ quasi-affine, resp.\ separated) diagonal, then the same is true for $\underline{\Hom}_X(\cX, \cY) \to X$.
\end{theorem}
A general algebraicity theorem for Hom stacks was also considered in \cite{hlp}. In the setting of Theorem \ref{T:hom}---without the assumption that $\cX \to X$ is a good moduli space---\cite[Thm.~1.6]{hlp} establishes the algebraicity of $\underline{\Hom}_X(\cX, \cY)$ under the additional hypotheses that $\cY$ has affine diagonal and that $\cX \to X$ is cohomologically projective \cite[Def.~1.12]{hlp}. We note that as a consequence of Theorem \ref{T:consequences-gms}, if $\cX \to X$ is a good moduli space, then $\cX \to X$ is necessarily \'etale-locally cohomologically projective.
We also prove the following, which we have not seen in the literature before.
\begin{corollary}[$G$-equivariant Hom stacks] \label{C:homG}
Let $Z$ and $S$ be quasi-separated algebraic spaces, locally of finite type over $k$. Let $\cX$ be a quasi-separated Deligne--Mumford stack, locally of finite type over $k$.
Let $G$ be a linearly reductive affine group scheme acting on $Z$ and $\cX$.
Let $Z \to S$ and $\cX \to S$ be
$G$-invariant morphisms. Suppose that $Z \to S$ is flat and a good GIT quotient. Then the $S$-groupoid
$\underline{\Hom}_S^{G}(Z, \cX)$, whose objects over $T \to S$ are $G$-equivariant $S$-morphisms $Z \times_S T \to \cX$,
is a Deligne--Mumford stack, locally of finite type over $S$. In addition, if $\cX$ is an algebraic space, then so is $\underline{\Hom}_S^{G}(Z, \cX)$, and
if $\cX$ has quasi-compact and separated diagonal, then so does $\underline{\Hom}_S^{G}(Z, \cX)$.
\end{corollary}
The results of this section will largely be established using Artin's criterion, as formulated in \cite[Thm.~A]{hallj_openness_coh}. This uses the notion of coherence, in the sense of Auslander \cite{auslander}, which we now briefly recall.
Let $A$ be a ring. An additive functor $F \colon \Mod(A) \to \Ab$ is \emph{coherent} if there is a morphism of $A$-modules $\varphi\colon M \to N$ and a functorial isomorphism:
\[
F(-) \simeq \coker(\Hom_{A}(N,-) \xrightarrow{\varphi^*} \Hom_{A}(M,-)).
\]
Coherent functors are a remarkable collection of functors: they form an
abelian subcategory of the category of additive functors
$\Mod(A) \to \Ab$ that is closed under limits \cite{auslander}. Coherent functors evidently preserve small (i.e., possibly infinite) products, and work
of Krause \cite[Prop.~3.2]{MR2026723} implies that this essentially characterizes them. They frequently arise in algebraic geometry as cohomology and
$\Ext$-functors \cite{hartshorne-coherent}. Fundamental results, such
as cohomology and base change, are very simple consequences of the
coherence of cohomology functors \cite{hallj_coho_bc}.
\begin{example}\label{E:coherent-functor1}
Let $A$ be a ring and $M$ an $A$-module. Then the functor
$I \mapsto \Hom_A(M,I)$ is coherent. Taking $M=A$, we see that the
functor $I \mapsto I$ is coherent.
\end{example}
\begin{example}\label{E:coherent-functor2}
Let $R$ be a noetherian ring and $Q$ a finitely generated
$R$-module. Then the functor
$I \mapsto {Q}\otimes_{R} I$ is
coherent. Indeed, there is a presentation
$R^{\oplus r} \to R^{\oplus s} \to {Q} \to 0$ and an
induced functorial isomorphism
\[
{Q}\otimes_{R} I = \coker(I^{\oplus r} \to I^{\oplus s}).
\]
Since the category of coherent functors is abelian, the claim
follows from Example \ref{E:coherent-functor1}. More generally, if
$Q^\bullet$ is a bounded above complex of coherent
$R$-modules, then for each $j\in \ZZ$ the functor:
\[
I \mapsto H^j({Q}^\bullet\otimes^{\DERF{L}}_{R} I)
\]
is coherent \cite[Ex.~2.3]{hartshorne-coherent}. Such a coherent
functor is said to \emph{arise from a complex}. An additive functor
$F \colon \Mod(A) \to \Ab$ is \emph{half-exact} if for every short
exact sequence $0 \to I'' \to I \to I' \to 0$ the sequence
$F(I'') \to F(I) \to F(I')$ is exact. Functors arising from
complexes are half-exact. Moreover, if $F \colon \Mod(A) \to \Ab$ is
coherent, half-exact and preserves direct limits of $A$-modules,
then it is the direct summand of a functor that arises from a complex
\cite[Prop.~4.6]{hartshorne-coherent}.
\end{example}
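As a small concrete illustration (our addition, combining the two examples above): torsion functors are coherent. For a ring $A$ and an element $f \in A$, the functor sending a module to its $f$-torsion is representable in the sense of Example \ref{E:coherent-functor1}:
```latex
% Expository sketch (not from the original text): the f-torsion functor
%   I |--> I[f] := ker( f : I -> I )
% is functorially isomorphic to Hom_A(A/fA, I), hence coherent.
\[
  I \;\longmapsto\; I[f] := \ker\bigl(I \xrightarrow{\,f\,} I\bigr)
  \;\simeq\; \Hom_A(A/fA,\, I).
\]
```
Coherence then follows by taking $M = A/fA$ in Example \ref{E:coherent-functor1}.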
The following proposition is a variant of \cite[Thm.~C]{hallj_coho_bc} and \cite[Thm.~D]{perfect_complexes_stacks}.
\begin{proposition}\label{P:coh_gms}
Let $\cplx{F} \in \DCAT_{\QCoh}(\cX)$ and $\cplx{G} \in \mathsf{D}_{\Coh}^b(\cX)$. If $X=\Spec R$ is affine, then the
functor
\[
H_{\cplx{F},\cplx{G}}(-):=\Hom_{\oh_{\cX}}\bigl(\cplx{F},\cplx{G} \otimes_{\oh_\cX}^{\DERF{L}} \QCPBK{\pi}(-)\bigr) \colon \Mod(R) \to \Ab
\]
is coherent.
\end{proposition}
\begin{proof}
This follows immediately from Theorem \ref{T:compact-generation},
\cite[Thm.~4.16(x)]{alper-good} and
\cite[Cor.~4.19]{perfect_complexes_stacks}. The argument is
reasonably straightforward, however, so we sketch it here. To this end, if
$\cplx{F} = \oh_{\cX}$, then the projection formula
\cite[Prop.~4.11]{perfect_complexes_stacks} implies that
\begin{align*}
\Hom_{\oh_{\cX}}\bigl(\oh_{\cX},\cplx{G} \otimes_{\oh_\cX}^{\DERF{L}} \QCPBK{\pi}(-)\bigr) &= H^0\bigl(\DERF{R} \Gamma(\cX,\cplx{G} \otimes_{\oh_\cX}^{\DERF{L}} \QCPBK{\pi}(-))\bigr)\\
&\simeq H^0\bigl(\DERF{R} \Gamma(\cX,\cplx{G}) \otimes_{R}^{\DERF{L}} - \bigr).
\end{align*}
Since $\cplx{G} \in \mathsf{D}_{\Coh}^b(\cX)$ and $\cX \to X$ is a good
moduli space, it follows that
$\DERF{R} \Gamma(\cX,\cplx{G}) \in \mathsf{D}^b_{\Coh}(R)$. Indeed
$H^j(\DERF{R} \Gamma(\cX,\cplx{G})) \simeq
\Gamma(\cX,\cH^j(\cplx{G}))$, because $\DERF{R}\Gamma(\cX,-)$ is $t$-exact on
quasi-coherent sheaves (see \S\ref{S:notation}); now apply \cite[Thm.~4.16(x)]{alper-good}. By Example \ref{E:coherent-functor2}, the
functor $H_{\oh_{\cX},\cplx{G}}$ is coherent. If $\cplx{F}$ is
perfect, then
$H_{\cplx{F},\cplx{G}} \simeq H_{\oh_{\cX},{\cplx{F}}^\vee
\otimes^{\DERF{L}}_{\oh_{\cX}} \cplx{G}}$ is also coherent by the
case just established. Now let $\mathcal{T} \subseteq \DCAT_{\QCoh}(\cX)$
be the full subcategory consisting of those $\cplx{F}$ such that
$H_{\cplx{F},\cplx{G}}$ is coherent for every
$\cplx{G} \in \mathsf{D}^b_{\Coh}(\cX)$. Certainly, $\mathcal{T}$ is a
triangulated subcategory that contains the perfect complexes. If
$\{\cplx{F}_\lambda\}_{\lambda\in \Lambda} \subseteq \DCAT_{\QCoh}(\cX)$,
then
$\prod_{\lambda\in \Lambda} H_{\cplx{F}_\lambda,\cplx{G}} \simeq
H_{\oplus_\lambda \cplx{F}_\lambda,\cplx{G}}$. Since small products
of coherent functors are coherent \cite[Ex.~4.9]{hallj_coho_bc}, it
follows that $\mathcal{T}$ is closed under small coproducts. By
Thomason's Theorem (e.g.,
\cite[Cor.~3.14]{perfect_complexes_stacks}),
$\mathcal{T} = \DCAT_{\QCoh}(\cX)$. The result follows.
\end{proof}
The following corollary is a variant of \cite[Thm.~D]{hallj_coho_bc}.
\begin{corollary}\label{C:aff_hom_fund}
Let $\cF$ be a quasi-coherent $\oh_{\cX}$-module and let $\cG$ be a coherent $\oh_{\cX}$-module. If $\cG$ is flat over $X$, then the $X$-presheaf $\underline{\Hom}_{\oh_{\cX}/X}(\cF,\cG)$ whose objects over $T \xrightarrow{\tau} X$ are homomorphisms $\tau_{\cX}^*\cF \to \tau_{\cX}^*\cG$ of $\oh_{\cX\times_X T}$-modules (where $\tau_{\cX} \co \cX \times_X T \to \cX$ is the projection)
is representable by an affine $X$-scheme.
\end{corollary}
\begin{proof}
We argue exactly as in the proof of \cite[Thm.~D]{hallj_coho_bc}, but
using Proposition \ref{P:coh_gms} in place of
\cite[Thm.~C]{hallj_coho_bc}. Again, the argument is quite short, so
we sketch it here for completeness. First, we may obviously reduce
to the situation where $X=\Spec R$. Next, since $\shv{G}$ is flat
over $X$, it follows that
$\shv{G} \otimes^{\DERF{L}}_{\oh_{\cX}} \DERF{L} \pi^*_{\mathsf{QCoh}}(-)
\simeq \shv{G} \otimes_{\oh_{\cX}} \pi^*(-)\simeq \shv{G}
\otimes_R (-)$. The functor
$H(-)= \Hom_{\oh_{\cX}}(\shv{F},\shv{G} \otimes_{\oh_{\cX}}
\pi^*(-))$ is also coherent and left-exact (Proposition \ref{P:coh_gms}). But coherent functors
preserve small products \cite[Ex.~4.8]{hallj_coho_bc}, so the
functor above preserves all limits. It follows from the
Eilenberg--Watts Theorem \cite[Thm.~6]{MR0118757} (also see
\cite[Ex.~4.10]{hallj_coho_bc} for further discussion) that there is
an $R$-module $Q$ and an isomorphism of functors
$H(-) \simeq \Hom_{R}(Q,-)$. Finally, consider an $R$-algebra $C$
and let $T=\Spec C$; then there are functorial isomorphisms:
\begin{align*}
\underline{\Hom}_{\oh_{\cX}/X}(\shv{F},\shv{G})(T \to X) &= \Hom_{\oh_{\cX\times_X T}}(\tau_{\cX}^*\shv{F},\tau_{\cX}^*\shv{G})\\
&\simeq \Hom_{\oh_{\cX}}(\shv{F},(\tau_{\cX})_*\tau_{\cX}^*\shv{G})\\
&\simeq \Hom_{\oh_{\cX}}(\shv{F},\shv{G} \otimes_R C)\\
&\simeq \Hom_{R}(Q,C)\\
&\simeq \Hom_{R\mathrm{-Alg}}(\Sym^{\bullet}_RQ,C)\\
&\simeq \Hom_X(T,\Spec (\Sym^{\bullet}_RQ)).
\end{align*}
Hence,
$\underline{\Hom}_{\oh_{\cX}/X}(\cF,\cG)$ is represented by the
affine $X$-scheme $\Spec (\Sym_R^{\bullet} Q)$.
\end{proof}
\begin{proof} [Proof of Theorem \ref{T:coh}]
The argument is very similar to the proof of
\cite[Thm.~8.1]{hallj_openness_coh}. We may assume that $X=\Spec R$
is affine. If $C$ is an $R$-algebra, it will be convenient to let
$\cX_C = \cX\times_X \Spec C$. We also let
$\Coh^{\mathrm{flb}}(\cX_C)=\underline{\Coh}_{\cX/X}(\Spec C)$;
that is, it denotes the full subcategory of $\Coh(\cX_C)$ with
objects those coherent sheaves that are $C$-flat.
Note that $R$ is of finite type over a field $k$, so $X$ is an
excellent scheme. We may now use the criterion of
\cite[Thm.~A]{hallj_openness_coh}. There are six conditions to
check.
\begin{enumerate}
\item {[Stack]} $\underline{\Coh}_{\cX/X}$ is a stack for the \'etale
topology. This is immediate from \'etale descent of quasi-coherent
sheaves.
\item {[Limit preservation]} If $\{A_j\}_{j\in J}$ is a direct system
of $R$-algebras with limit $A$, then the natural functor
$\varinjlim_j \Coh^{\mathrm{flb}}(\cX_{A_j}) \to
\Coh^{\mathrm{flb}}(\cX_A)$ is an equivalence of
categories. This is immediate from standard limit results \cite[\S\S IV.8, IV.11]{EGA}.
\item {[Homogeneity]} Given a diagram of $R$-algebras
$[B \to A \leftarrow A']$, where $A' \to A$ is surjective with
nilpotent kernel, then the natural functor:
\[
\Coh^{\mathrm{flb}}(\cX_{B \times_A A'}) \to {\Coh}^{\mathrm{flb}}(\cX_B) \times_{{\Coh}^{\mathrm{flb}}(\cX_A)}\Coh^{\mathrm{flb}}(\cX_{A'})
\]
induces an equivalence of categories. This is just a strong version of Schlessinger's
conditions, which is proved in \cite[Lem.~8.3]{hallj_openness_coh}
(see \cite[p.~166]{hallj_openness_coh} for further discussion).
\item {[Effectivity]} If $(A,\mathfrak{m})$ is an
$\mathfrak{m}$-adically complete noetherian local ring, then the natural functor:
\[
\Coh^{\mathrm{flb}}(\cX_A) \to \varprojlim_n \Coh^{\mathrm{flb}}(\cX_{A/\mathfrak{m}^{n+1}})
\]
is an equivalence of categories. This is immediate from Corollary \ref{C:gb-c28}\itemref{C:gb-c28:fGAGA} and the local criterion of flatness.
\item {[Conditions on automorphisms and deformations]} If $A$ is a
finite type $R$-algebra and
$\shv{F} \in \Coh^{\mathrm{flb}}(\cX_A)$, then the infinitesimal
automorphism and deformation functors associated to $\shv{F}$ are
coherent. It is established in
\cite[\S8]{hallj_openness_coh} that as additive functors from $\Mod(A) \to \Ab$:
\begin{align*}
\Aut_{\underline{\Coh}_{\cX/X}}(\shv{F},-) &= \Hom_{\oh_{\cX_A}}(\shv{F},\shv{F} \otimes_{\oh_{\cX_A}} \pi_A^*(-))\quad \mbox{and}\\
\Def_{\underline{\Coh}_{\cX/X}}(\shv{F},-) &= \Ext^1_{\oh_{\cX_A}}(\shv{F},\shv{F} \otimes_{\oh_{\cX_A}} \pi_A^*(-)).
\end{align*}
By Proposition \ref{P:coh_gms}, the functors above are coherent.
\item {[Conditions on obstructions]} If $A$ is a finite type $R$-algebra and $\shv{F} \in \Coh^{\mathrm{flb}}(\cX_A)$, then there is an integer $n$ and a coherent $n$-step obstruction theory for $\shv{F}$. The obstruction theory is described in \cite[\S8]{hallj_openness_coh}. If $\cX \to X$ is flat, then there is the usual $1$-step obstruction theory
\[
\mathrm{O}^2(\shv{F},-) = \Ext^2_{\oh_{\cX_A}}(\shv{F},\shv{F} \otimes_{\oh_{\cX_A}} \pi_A^*(-)).
\]
If $\cX \to X$ is not flat, then $\mathrm{O}^2(\shv{F},-)$ is the second step in a $2$-step obstruction theory, whose primary obstruction lies in:
\[
\mathrm{O}^1(\shv{F},-) = \Hom_{\oh_{\cX_A}}(\shv{F} \otimes_{\oh_{\cX_A}} \shv{T}or_1^{X}(A,\oh_{\cX}),\shv{F} \otimes_{\oh_{\cX_A}} \pi_A^*(-)).
\]
By Proposition \ref{P:coh_gms}, these functors are coherent.
\end{enumerate}
Hence, $\underline{\Coh}_{\cX/X}$ is an algebraic stack that is
locally of finite presentation over $X$. Corollary
\ref{C:aff_hom_fund} now implies that the diagonal is affine. This completes the proof.
\end{proof}
Corollaries \ref{C:quot} and \ref{C:hilb} follow immediately from Theorem \ref{T:coh}. Indeed, the natural morphism
$\underline{\Quot}_{\cX/X}(\cF)\to \underline{\Coh}_{\cX/X}$ is quasi-affine by Corollary \ref{C:aff_hom_fund} and Nakayama's Lemma (see \cite[Lem.~2.6]{lieblich-coherent} for details).
\begin{proof}[Proof of Theorem \ref{T:hom}]
This only requires small modifications to the proof of
\cite[Thm.~1.2]{hallj_dary_coherent_tannakian_duality}, which again
uses Artin's criterion as formulated in
\cite[Thm.~A]{hallj_openness_coh}. In more detail: We may assume
that $X=\Spec R$ is affine; in particular, $X$ is excellent so we
may apply \cite[Thm.~A]{hallj_openness_coh}. As in the proof of
Theorem \ref{T:coh}, there are six conditions to check. The
conditions (1) [Stack], (2) [Limit preservation] and (3)
[Homogeneity] are largely routine, and ultimately rely upon
\cite[\S9]{hallj_openness_coh}. Condition (4) [Effectivity] follows
from an idea due to Lurie, and makes use of Tannaka
duality. Indeed, if $(A,\mathfrak{m})$ is a noetherian and
$\mathfrak{m}$-adically complete local $R$-algebra, then the
effectivity condition corresponds to the natural functor:
\[
\Hom_{\Spec A}(\cX_A,\cY_A) \to \varprojlim_n \Hom_{\Spec
A}(\cX_{A/\mathfrak{m}^{n+1}}, \cY_{A})
\]
being an equivalence. The effectivity thus
follows from coherent completeness (Corollary \ref{C:gb-c28}) and
Tannaka duality (Corollary \ref{C:tannakian}). Conditions (5)
and (6) on the coherence of the automorphisms, deformations, and
obstructions follow from \cite{olsson-defn}, Proposition
\ref{P:coh_gms}, and the discussion in
\cite[\S9]{hallj_openness_coh} describing the $2$-term obstruction
theory. A somewhat subtle point is that we do not deform the
morphisms directly, but their graph, because \cite{olsson-defn} is
only valid for representable morphisms. This proves that the stack
$\underline{\Hom}_{X}(\cX,\cY)$ is algebraic and locally of finite
presentation over $X$. The conditions on the diagonal follow from
Corollary \ref{C:aff_hom_fund}, together with some standard
manipulations of Weil restrictions.
\end{proof}
\begin{proof} [Proof of Corollary \ref{C:homG}]
A $G$-equivariant morphism $Z\to \cX$ is equivalent to a morphism
of stacks $[Z/G]\to [\cX/G]$ over $BG$. This gives the $2$-cartesian diagram
\begin{equation*}
\xymatrix{
\underline{\Hom}_S^G(Z,\cX)\ar[r]\ar[d] & \underline{\Hom}_S\bigl([Z/G],[\cX/G]\bigr)\ar[d] \\
S\ar[r] & \underline{\Hom}_S\bigl([Z/G],BG\bigr)\ar@{}[ul]|\square
}
\end{equation*}
where the bottom map is given by the structure map $[Z/G]\to BG$
and the right map is given by postcomposition with the structure map
$[\cX/G]\to BG$. By Theorem \ref{T:hom}, the stacks
$\underline{\Hom}_S\bigl([Z/G],[\cX/G]\bigr)$ and
$\underline{\Hom}_S\bigl([Z/G],BG\bigr)$ are algebraic and locally
of finite type over $S$. The latter always has quasi-affine diagonal and the former has quasi-affine diagonal when $\cX$ has separated diagonal. In particular,
the bottom map is always quasi-affine. It follows that
$\underline{\Hom}_S^G(Z,\cX)$ is always an algebraic stack locally
of finite type over $S$ and has quasi-affine (in particular, quasi-compact and separated) diagonal whenever $\cX$ has quasi-affine diagonal; since $\cX$ is Deligne--Mumford and quasi-separated, this is equivalent to it having separated diagonal. Clearly, $\underline{\Hom}_S^G(Z,\cX)$ has no
non-trivial infinitesimal automorphisms, hence is a Deligne--Mumford stack. Similarly, if $\cX$ is an algebraic space, then $\underline{\Hom}_S^G(Z,\cX)$ has no
non-trivial automorphisms, hence is an algebraic space.
\end{proof}
\subsection{Deligne--Mumford stacks with $\mathbb{G}_m$-actions} \label{A:drinfeld}
Let $\cX$ be a quasi-separated Deligne--Mumford stack, locally of finite type over a field $k$ (not assumed to be algebraically closed), with an action of $\mathbb{G}_m$.
Define the following stacks on $\mathsf{Sch}/k$:
\[
\begin{aligned}
\cX^0 & := \underline{\Hom}^{\mathbb{G}_m}(\Spec k, \cX) & \quad & \text{(the `fixed' locus)\footnotemark} \\
\cX^+ & := \underline{\Hom}^{\mathbb{G}_m}(\AA^1,\cX) & \quad & \text{(the attractor)}
\end{aligned}
\]
where $\mathbb{G}_m$ acts on $\AA^1$ by multiplication, and define the stack $\widetilde{\cX}$ on $\mathsf{Sch}/\AA^1$ by
\[
\widetilde{\cX} := \underline{\Hom}_{\AA^1}^{\mathbb{G}_m}(\AA^2 , \cX \times \AA^1),
\]
where $\mathbb{G}_m$ acts on $\AA^2$ via $t \cdot (x,y) = (tx, t^{-1} y)$ and acts on $\AA^1$ trivially, and the morphism $\AA^2 \to \AA^1$ is defined by $(x,y) \mapsto xy$.
\footnotetext{If $\cX$ is an algebraic space, this is the fixed locus. If $\cX$ is a Deligne--Mumford stack, we will define the {\it fixed locus} $\cX^{\mathbb{G}_m}$ after allowing reparameterizations of the action; see Definition \ref{D:fixed-locus}.}
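For orientation, consider the standard example (our addition, not taken from the original text) of $\cX = \AA^n$ with $\mathbb{G}_m$ acting linearly with weights $a_1, \dots, a_n$ on the coordinates. A $\mathbb{G}_m$-equivariant map $\Spec k \to \AA^n$ must vanish in every coordinate of non-zero weight, while an equivariant map $\AA^1 \to \AA^n$, $t \mapsto (c_i t^{a_i})$, forces $c_i = 0$ whenever $a_i < 0$. Hence:
```latex
% With t . (x_1,...,x_n) = (t^{a_1} x_1, ..., t^{a_n} x_n) on A^n:
\[
  \cX^0 \;=\; \{\, x_i = 0 \;:\; a_i \neq 0 \,\},
  \qquad
  \cX^+ \;=\; \{\, x_i = 0 \;:\; a_i < 0 \,\},
\]
% and ev_0 : X^+ -> X^0 is the linear projection annihilating the
% coordinates of positive weight.
```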
\begin{theorem} \label{T:drinfeld}
With the hypotheses above, $\cX^0$ and $\cX^+$ are quasi-separated Deligne--Mumford stacks, locally of finite type over $k$. Moreover, the natural morphism $\cX^0 \to \cX$ is a closed immersion, and the natural morphism $\ev_0\co \cX^+ \to \cX^0$ obtained by restricting to the origin is affine. In addition, $\widetilde{\cX}$ is a Deligne--Mumford stack, locally of finite type over $k$, which is quasi-separated whenever $\cX$ has quasi-compact and separated diagonal (e.g., an algebraic space).
\end{theorem}
\begin{remark} When $\cX$ is an algebraic space, then $\cX^0$, $\cX^+$ and $\widetilde{\cX}$ are algebraic spaces and the above result is due to Drinfeld \cite[Prop.~1.2.2, Thm.~1.4.2 and Thm.~2.2.2]{drinfeld}.
\end{remark}
The algebraicity of $\cX^0$, $\cX^+$ and $\widetilde{\cX}$ follows directly from Corollary \ref{C:homG}. To establish the final statements, we will need to establish several preliminary results.
\begin{proposition} \label{P:A1-complete} If $S$ is a noetherian affine scheme, then $[\AA^1_S / \mathbb{G}_m]$ is coherently complete along $[S / \mathbb{G}_m]$, where $S\hookrightarrow \AA^1_S$ is the zero section.
\end{proposition}
\begin{proof}
Let $A=\Gamma(S,\oh_S)$; then
$\AA^1_S = \Spec A[t]$ and $V(t) = [S/\mathbb{G}_m]$. If $\cF \in\Coh([\AA^1_S/\mathbb{G}_m])$, then
we claim that there exists an integer $n\gg 0$ such that the natural surjection
$\Gamma(\cF) \to \Gamma(\cF/t^n\cF)$ is bijective. Now every coherent sheaf on
$[\AA^1_S/\mathbb{G}_m]$ is a quotient of a finite direct sum of coherent sheaves of the form
$p^*\cE_l$, where $\cE_l$ is the weight $l$ representation of $\mathbb{G}_m$ and $p\co
[\AA^1_S/\mathbb{G}_m] \to [S/\mathbb{G}_m]$ is the natural map. It is enough to prove that
$\Gamma(p^*\cE_l) \to \Gamma(p^*\cE_l/t^n p^*\cE_l)$ is bijective, or equivalently,
that $\Gamma((t^n) \otimes p^*\cE_l) = 0$. But $(t^n) = p^*\cE_n$ and
$\Gamma(p^*\cE_{n+l})=0$ if $n+l>0$, hence for all $n\gg 0$.
We conclude that $\Gamma(\cF) \to \varprojlim_n \Gamma(\cF/t^n\cF)$ is bijective.
The remainder of the proof proceeds analogously to that of Theorem \ref{key-theorem}.
\end{proof}
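To unwind the weight computation in the proof above (an expository sketch, under the sign convention that $t$ has weight $1$, so that $(t^n) \simeq p^*\cE_n$): global sections over the quotient stack are the $\mathbb{G}_m$-invariants of the corresponding graded module, namely
```latex
% Sections of p^* E_l on [A^1_S / G_m] are the invariants of A[t] (x) E_l,
% i.e., the weight (-l) part of A[t]:
\[
  \Gamma\bigl([\AA^1_S/\mathbb{G}_m],\, p^*\cE_l\bigr)
  \;\simeq\; \bigl(A[t] \otimes \cE_l\bigr)^{\mathbb{G}_m}
  \;=\;
  \begin{cases}
    A\, t^{-l} & \text{if } l \le 0,\\
    0 & \text{if } l > 0,
  \end{cases}
\]
% recovering the vanishing Gamma(p^* E_{n+l}) = 0 for n + l > 0 used above.
```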
\begin{proposition} \label{P:tannakian2}
Let $W$ be an excellent algebraic space over a field $k$ and let $G$ be an algebraic group acting on $W$. Let $Z \subseteq W$ be a $G$-invariant closed subspace. Suppose that $[W/G]$ is coherently complete along $[Z/G]$. Let $\cX$ be a noetherian algebraic stack over $k$ with affine stabilizers with an action of $G$.
Then the natural map
$$
\Hom^{G}(W, \cX) \to \varprojlim_n \Hom^{G}\bigl(W_Z^{[n]}, \cX\bigr)
$$
is an equivalence of groupoids.
\end{proposition}
\begin{proof}
As in the proof of Corollary~\ref{C:homG}, we have a cartesian diagram of groupoids
\[
\xymatrix{
\Hom^G(W,\cX)\ar[r]\ar[d] & \Hom\bigl([W/G],[\cX/G]\bigr)\ar[d] \\
{*}\ar[r] & \Hom\bigl([W/G],BG\bigr)
}
\]
and a similar cartesian diagram with $W$ replaced by $W_Z^{[n]}$ for each $n$, which gives the cartesian diagram
\[
\xymatrix{
\varprojlim_n \Hom^G\bigl(W_Z^{[n]},\cX\bigr)\ar[r]\ar[d] & \varprojlim_n \Hom\bigl([W_Z^{[n]}/G],[\cX/G]\bigr)\ar[d] \\
{*}\ar[r] & \varprojlim_n \Hom\bigl([W_Z^{[n]}/G],BG\bigr).
}
\]
Since $[W/G]$ is coherently complete along $[Z/G]$, it follows by Tannaka duality that the natural maps from the first square to the
second square are isomorphisms.
\end{proof}
\begin{proposition} \label{P:drinfeld-base-change}
If $f \co \cX \to \cY$ is an \'etale and representable $\mathbb{G}_m$-equivariant morphism of quasi-separated Deligne--Mumford stacks of finite type over a field $k$, then
$\cX^0 = \cY^0 \times_{\cY} \cX$ and $\cX^+ = \cY^+ \times_{\cY^0} \cX^0$.
\end{proposition}
\begin{proof}
For the first statement, let $x \co S \to \cX$ be a morphism from a scheme $S$ such that the composition $f \circ x \co S \to \cY$ is $\mathbb{G}_m$-equivariant. To see that $x$ is $\mathbb{G}_m$-equivariant, it suffices to base change $f$ by $S \to \cY$ and check that a section $S \to \cX_S$ of $\cX_S \to S$ is necessarily equivariant. As $\cX_S \to S$ is \'etale and representable, $S \to \cX_S$ is an open immersion, and since any $\mathbb{G}_m$-orbit in $\cX_S$ is necessarily connected, $S$ is an invariant open of $\cX_S$.
For the second statement,
we need to show that there exists a unique $\mathbb{G}_m$-equivariant morphism filling in the $\mathbb{G}_m$-equivariant
diagram
\begin{equation} \label{D:drinfeld}
\begin{split}
\xymatrix{
\Spec k \times S \ar[r] \ar[d] & \cX \ar[d]^f \\
\AA^1 \times S \ar[r] \ar@{-->}[ur] & \cY
} \end{split} \end{equation}
where $S$ is an affine scheme of finite type over $k$, and the vertical left arrow is the inclusion of the origin. For each $n \ge 1$, the formal lifting property of \'etaleness yields a unique $\mathbb{G}_m$-equivariant map $\Spec(k[x]/x^n) \times S \to \cX$ such that
$$\xymatrix{
\Spec k \times S \ar[r] \ar[d] & \cX \ar[d]^f \\
\Spec (k[x]/x^n) \times S \ar[r] \ar@{-->}[ur] & \cY
}$$
commutes. By Propositions \ref{P:A1-complete} and \ref{P:tannakian2}, there exists a unique $\mathbb{G}_m$-equivariant morphism $\AA^1 \times S \to \cX$ such that \eqref{D:drinfeld} commutes.
\end{proof}
\begin{remark} If $f \co \cX \to \cY$ is not representable, then it is not true in general that $\cX^0 = \cY^0 \times_{\cY} \cX$, e.g., let $f \co B\pmb{\mu}_n\to \Spec k$ where $\mathbb{G}_m$ acts on $B \pmb{\mu}_n$ as in Remark \ref{R:not-split}.
\end{remark}
\begin{remark} It is not true in general that $\cX^+$ is the fiber product of $f \co \cX \to \cY$ along the morphism $\ev_1 \co \cY^+ \to \cY$ defined by $\lambda \mapsto \lambda(1)$. Indeed, consider the $\mathbb{G}_m$-equivariant open immersion $\cX=\mathbb{G}_m \hookrightarrow \AA^1=\cY$, where $\mathbb{G}_m$ acts by scaling positively. Then $\cY^+ = \cY$ but $\cX^+$ is empty. \end{remark}
\begin{proof}[Proof of Theorem \ref{T:drinfeld}] The algebraicity of $\cX^0$, $\cX^+$ and $\widetilde{\cX}$ follows directly from Corollary \ref{C:homG}. To verify the final statements, we may assume that $k$ is algebraically closed. For any $\mathbb{G}_m$-equivariant map $x \co \Spec k \to \cX$, the stabilizer $T_x$ of $x$ (as defined in \eqref{D:stab}) is $\mathbb{G}_m$ and the map on quotients $B\mathbb{G}_m \to \cY := [\cX/\mathbb{G}_m]$ induces a map $\mathbb{G}_m \to G_y$ on stabilizers providing a splitting of \eqref{E:stab}. Our generalization of Sumihiro's theorem (Theorem \ref{T:sumi1}) provides an \'etale $\mathbb{G}_m$-equivariant neighborhood $(\Spec A,u) \to (\cX,x)$. Proposition \ref{P:drinfeld-base-change} therefore reduces the statements to the case of an affine scheme, which can be established directly; see \cite[\S 1.3.4]{drinfeld}.
\end{proof}
We will now investigate how $\cX^0$ and $\cX^+$ change if we reparameterize the
torus. We denote by $\cX_{\langle d \rangle}$ the Deligne--Mumford stack $\cX$ with $\mathbb{G}_m$-action induced by the reparameterization $\mathbb{G}_m \xrightarrow{d} \mathbb{G}_m$. For integers $d \, | \, d'$, there are maps $\cX^0_{\langle d \rangle} \to \cX^0_{\langle d' \rangle}$ and $\cX^+_{\langle d \rangle} \to \cX^+_{\langle d' \rangle}$ (defined by precomposing with $\AA^1 \to \AA^1, x \mapsto x^{d'/d}$) that are compatible with the natural maps to $\cX$.
Recall from Theorem~\ref{T:drinfeld} that $\cX^0_{\langle d \rangle}\to \cX$ is a closed immersion. Also recall that for $x \in \cX(k)$, there is an exact sequence
\begin{equation} \label{E:stab2}
\xymatrix{1 \ar[r] & G_x \ar[r] & G_y \ar[r] & T_x \ar[r] & 1,}
\end{equation}
where $y$ is the image of $x$ in $\cY = [\cX/\mathbb{G}_m]$; and $T_x := \mathbb{G}_m \times_{\cX} BG_x \subset \mathbb{G}_m$ is the stabilizer, where $\mathbb{G}_m \xrightarrow{\id \times x} \mathbb{G}_m \times \cX \xrightarrow{\sigma_{\cX}} \cX$ is the restriction of the action map; see also \eqref{D:stab}.
\begin{proposition} \label{P:fixed-point}
Let $\cX$ be a quasi-separated Deligne--Mumford stack,
locally of finite type over a field $k$, with an action of $\mathbb{G}_m$.
Let $x\in \cX(k)$ and let $T_x \subset \mathbb{G}_m$ be its stabilizer.
\begin{enumerate}
\item \label{P:fixed-point1}
$x \in \cX^0$ if and only if $T_x = \mathbb{G}_m$ and \eqref{E:stab2} splits.
\item \label{P:fixed-point2}
The following conditions are equivalent:
\begin{enumerate*}
\item \label{P:fixed-point2:reparam2a}
$x \in \cX^0_{\langle d \rangle}$ for sufficiently
divisible integers $d$;
\item \label{P:fixed-point2:reparam2b}
$T_x = \mathbb{G}_m$; and
\item \label{P:fixed-point2:reparam2c}
$\dim G_y = 1$.
\end{enumerate*}
\end{enumerate}
\end{proposition}
\begin{proof}
For \eqref{P:fixed-point1}, if $x \co \Spec k \to \cX$ is $\mathbb{G}_m$-equivariant, then clearly $T_x = \mathbb{G}_m$ and the map on quotients $B\mathbb{G}_m \to \cY := [\cX/\mathbb{G}_m]$ induces a map $\mathbb{G}_m \to G_y$ on stabilizers providing a splitting of \eqref{E:stab2}. Conversely, a section $\mathbb{G}_m \to G_y$ providing a splitting of \eqref{E:stab2} induces a section $B \mathbb{G}_m \to BG_y$ of $BG_y \to B \mathbb{G}_m$, and taking the base change of the composition $B \mathbb{G}_m \to BG_y \to \cY \to B\mathbb{G}_m$ along $\Spec k \to B \mathbb{G}_m$ induces a unique $\mathbb{G}_m$-equivariant map $\Spec k \to \cX$.
For \eqref{P:fixed-point2}, it is clear that \itemref{P:fixed-point2:reparam2b} and \itemref{P:fixed-point2:reparam2c} are equivalent, and that they are implied by \itemref{P:fixed-point2:reparam2a}. On the other hand, if \itemref{P:fixed-point2:reparam2b} holds, then the sequence \eqref{E:stab2} splits after reparameterizing the action by $\mathbb{G}_m \xrightarrow{d} \mathbb{G}_m$ for sufficiently divisible integers $d$ (see proof of Theorem~\ref{T:sumi2}). It now follows from \eqref{P:fixed-point1} that $x \in \cX^0_{\langle d \rangle}$.
\end{proof}
\begin{proposition} \label{P:reparam}
Let $\cX$ be a quasi-separated Deligne--Mumford stack,
locally of finite type over a field $k$, with an action of $\mathbb{G}_m$.
\begin{enumerate}
\item \label{P:reparam1}
For $d \,| \, d'$, the map $\cX^0_{\langle d \rangle} \to \cX^0_{\langle d' \rangle}$ is an open and closed immersion, and $\cX^+_{\langle d \rangle} = \cX^+_{\langle d' \rangle} \times_{\cX^0_{\langle d' \rangle} } \cX^0_{\langle d \rangle}$.
\item \label{P:reparam3}
If $\cX$ is quasi-compact, then for sufficiently divisible integers $d$ and $d'$, $\cX^0_{\langle d \rangle} = \cX^0_{\langle d' \rangle}$ and $\cX^+_{\langle d \rangle} = \cX^+_{\langle d' \rangle}$.
\end{enumerate}
\end{proposition}
\begin{proof} We may assume $k$ is algebraically closed.
For \eqref{P:reparam1}, since $\cX^0_{\langle d \rangle} \to \cX$ is a closed immersion for all $d$ (Theorem \ref{T:drinfeld}), we see that $\cX^0_{\langle d \rangle} \to \cX^0_{\langle d' \rangle}$ is a closed immersion. For any $x \in \cX^0(k)$, Theorem \ref{T:sumi1} provides an \'etale $\mathbb{G}_m$-equivariant morphism $(U, u) \to (\cX, x)$ where $U$ is an affine scheme. By Proposition \ref{P:drinfeld-base-change}, the maps $\cX^0 \to \cX^0_{\langle d \rangle}$ and $\cX^+ \to \cX^+_{\langle d \rangle}$ pull back to the isomorphisms $U^0 \to U^0_{\langle d \rangle}$ and $U^+ \to U^+_{\langle d \rangle}$. This shows both that $\cX^0 \to \cX^0_{\langle d \rangle}$ is an open immersion and that $\cX^+ = \cX^+_{\langle d \rangle} \times_{\cX^0_{\langle d \rangle} } \cX^0$, which implies \eqref{P:reparam1}.
For \eqref{P:reparam3}, the locus of points in $\cY=[\cX/\mathbb{G}_m]$ with a positive-dimensional stabilizer is a closed substack. It follows from Proposition \ref{P:fixed-point}\eqref{P:fixed-point2} that the locus of points in $\cX$ contained in $\cX^0_{\langle d \rangle}$ for some $d$ is also closed. In particular, the substacks $\cX^0_{\langle d \rangle}$ stabilize for sufficiently divisible integers $d$ and by \eqref{P:reparam1} the stacks $\cX^+_{\langle d \rangle}$ also stabilize.
\end{proof}
Proposition \ref{P:reparam}\eqref{P:reparam3} justifies the following definition.
\begin{definition} \label{D:fixed-locus}
Let $\cX$ be a quasi-separated Deligne--Mumford stack,
locally of finite type over a field $k$, with an action of $\mathbb{G}_m$. The {\it fixed locus} is the closed substack of $\cX$ defined as
$$\cX^{\mathbb{G}_m} := \bigcup_d \cX_{\langle d \rangle}^0.$$
\end{definition}
\begin{remark}
Consider the action of $\mathbb{G}_m$ on $\cX=B \pmb{\mu}_n$ as in Remark \ref{R:not-split}. Then $\cX^0$ is empty but $\cX_{\langle d \rangle}^0 = \cX$ for all integers $d$ divisible by $n$. Thus, $\cX^{\mathbb{G}_m} = \cX$.
\end{remark}
\subsection{Bia\l ynicki-Birula decompositions for Deligne--Mumford stacks} \label{A:BB}
The following theorem establishes the existence of Bia\l ynicki-Birula decompositions for a Deligne--Mumford stack $\cX$. Our proof relies on the algebraicity of the stacks $\cX^0 = \underline{\Hom}^{\mathbb{G}_m}(\Spec k, \cX)$ and $\cX^+ = \underline{\Hom}^{\mathbb{G}_m}(\AA^1,\cX)$ (Theorem \ref{T:drinfeld}) and the existence of $\mathbb{G}_m$-equivariant \'etale affine neighborhoods (Theorem \ref{T:sumi1}). In particular, our argument recovers the classical result from \cite[Thm.~4.1]{bb}. Due to subtleties arising from group actions on stacks, the proof is substantially simpler in the case that $\cX$ is an algebraic space, and the reader may want to consider this special case on a first reading.
\begin{theorem} \label{T:bb}
Let $\cX$ be a separated Deligne--Mumford stack, of finite type over an arbitrary field $k$, with an action of $\mathbb{G}_m$. Let $\cX^{\mathbb{G}_m} = \coprod_i \cF_i$ be the fixed locus (see Definition \ref{D:fixed-locus}) with connected components $\cF_i$. There exists an affine morphism $\cX_i \to \cF_i$ for each $i$ and a monomorphism $\coprod_i \cX_i \to \cX$. Moreover,
\begin{enumerate}
\item \label{T:bb-proper}
If $\cX$ is proper, then $\coprod_i \cX_i \to \cX$ is surjective.
\item \label{T:bb-smooth}
If $\cX$ is smooth, then $\cF_i$ is smooth and $\cX_i \to \cF_i$ is an affine fibration (i.e., $\cX_i$ is affine space \'etale locally over $\cF_i$).
\item \label{T:bb-locally-closed}
Let $\cX \to X$ be the coarse moduli space.
\begin{enumerate}
\item \label{T:bb-locally-closed-affine}
If $X$ is affine, then $\cX_i \hookrightarrow \cX$ is a closed immersion.
\item \label{T:bb-locally-affine}
If $X$ has a $\mathbb{G}_m$-equivariant affine open cover (e.g., $X$ is a normal scheme), then
\begin{enumerate}[label=(\roman*),ref=\roman*]
\item \label{T:bb-locally-affine1}
$\cX_i \hookrightarrow \cX$ is a local immersion
(i.e., a locally closed immersion Zariski-locally on the source) and $\cX_i \to \cX \times \cF_i$ is a locally closed immersion; and
\item \label{T:bb-locally-affine2}
if $\cZ \subset \cX_i$ is an irreducible component, then $\cZ \hookrightarrow \cX$ is a locally closed immersion.
\end{enumerate}
\item \label{T:bb-locally-closed-smooth-stack}
If $\cX$ is smooth and $X$ is a scheme, then $\cX_i \hookrightarrow \cX$ is a locally closed immersion.
\item \label{T:bb-locally-closed-quasi-projective}
If there exists a $\mathbb{G}_m$-equivariant locally closed immersion $X \hookrightarrow \mathbb{P}(V)$ where $V$ is a $\mathbb{G}_m$-representation (e.g., $\cX$ normal and $X$ is quasi-projective), then $\cX_i \hookrightarrow \cX$ is a locally closed immersion.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{remark} If $\cX$ is a smooth scheme and $k$ is algebraically closed, then this statement (except Case \eqref{T:bb-locally-affine}) is the classical Bia\l{}ynicki-Birula decomposition theorem \cite[Thm.~4.1]{bb} (using Sumihiro's theorem \cite[Cor.~2]{sumihiro} ensuring that $\cX$ has a $\mathbb{G}_m$-equivariant affine open cover).
If $\cX$ is an algebraic space, then this
was established in \cite[Thm.~B.0.3]{drinfeld} (except Case \eqref{T:bb-locally-affine}). Our formulation of Case \eqref{T:bb-locally-affine}\eqref{T:bb-locally-affine1} was motivated by \cite[Thm.~4.5, p.~69]{hesselink} and \cite[Prop.~13.58]{milne} and
Case \eqref{T:bb-locally-affine}\eqref{T:bb-locally-affine2} was motivated by \cite[Prop.~7.6]{js-bb}.
Using Drinfeld's results and our Theorem \ref{T:sumi1}, Jelisiejew and Sienkiewicz establish the theorem above when $\cX$ is an algebraic space as a special case of \cite[Thm.~1.5]{js-bb} and their proof in particular recovers the main result of \cite{bb}. Our proof follows a similar strategy by relying on results of the previous section and Theorem \ref{T:sumi1} to reduce to the affine case.
\end{remark}
\begin{remark} It is not true in general that $\cX_i \hookrightarrow \cX$ is a locally closed immersion.
\begin{enumerate}
\item The condition in \eqref{T:bb-locally-closed-smooth-stack} that $X$ is a scheme is necessary. Sommese has given an example of a smooth algebraic space $X$ such that $X_i \hookrightarrow X$ is not a locally closed immersion \cite{sommese}. This is based on Hironaka's example of a proper, non-projective, smooth 3-fold.
\item The condition in \eqref{T:bb-locally-closed-quasi-projective} that $X$ is quasi-projective is necessary. Konarski has provided an example of a normal proper scheme $X$ (a toric variety) such that $X_i \hookrightarrow X$ is not a locally closed immersion \cite{konarski}.
\end{enumerate}
For a smooth Deligne--Mumford stack $\cX$ with a $\mathbb{G}_m$-action, \cite[Prop.~5]{oprea} states that the existence of a Bia\l ynicki-Birula decomposition with each $\cX_i \hookrightarrow \cX$ locally closed follows from the existence of a $\mathbb{G}_m$-equivariant, \'etale atlas $\Spec A \to \cX$ (as provided by Theorem \ref{T:sumi2}). The counterexamples above show that \cite[Prop.~5]{oprea} is incorrect. Nevertheless, the main result \cite[Thm.\ 2]{oprea} still holds as a consequence of Theorem~\ref{T:bb} since the Deligne--Mumford stack $\overline{\cM}_{0,n}(\mathbb{P}^r,d)$ of stable maps is smooth and its coarse moduli space is a scheme (Case \eqref{T:bb-locally-closed-smooth-stack}).
Moreover, \cite[Thm.~3.5]{skowera} states the above theorem in the case that $\cX$ is a smooth, proper and tame Deligne--Mumford stack with $X$ a scheme but the proof is not valid as it relies on \cite[Prop.~5]{oprea}. A similar error appeared in a previous version of our article where it was claimed incorrectly that $\cX_i \hookrightarrow \cX$ is a locally closed immersion for any smooth, proper Deligne--Mumford stack.
\end{remark}
\begin{remark} Let $X$ be a separated scheme of finite type over $k$ with finite quotient singularities and with a $\mathbb{G}_m$-action. There is a canonical smooth Deligne--Mumford stack $\cX$ whose coarse moduli space is $X$ (see \cite[\S 4.1]{fantechi-mann-nironi}). The $\mathbb{G}_m$-action lifts canonically to $\cX$. Applying
Theorem \ref{T:bb}\eqref{T:bb-locally-closed-smooth-stack} to $\cX$ and appealing to Proposition \ref{P:cms}\eqref{P:cms-climmersion-nilimmersion-both}, we can conclude that the components $X_i$ of $X^+$ are locally closed in $X$.
\end{remark}
The following proposition establishes properties of the evaluation map $\ev_1 \co \cX^+ \to \cX, \lambda \mapsto \lambda(1)$ in terms of properties of $\cX$. We find it prudent to state a relative version that for a given morphism $f \co \cX \to \cY$ between Deligne--Mumford stacks
establishes properties of the relative evaluation map
$$\ev_f \co \cX^+ \to \cY^+ \times_{\cY} \cX, \lambda \mapsto (f \circ \lambda, \lambda(1))$$
in terms of properties of $f$. In the proof of Theorem~\ref{T:bb} we will only use the absolute case where $\cY=\Spec k$.
\begin{proposition} \label{P:ev1}
Let $f \co \cX \to \cY$ be a $\mathbb{G}_m$-equivariant morphism of quasi-separated Deligne--Mumford stacks that are
locally of finite type over an arbitrary field $k$.
\begin{enumerate}
\item \label{P:ev1-unramified}
$\cX^+ \to \cY^+ \times_{\cY} \cX$ is unramified.
\item \label{P:ev1-representable}
If $f \co \cX \to \cY$ has separated diagonal, then $\cX^+ \to \cY^+ \times_{\cY} \cX$ is representable.
\item \label{P:ev1-monomorphism}
If $f \co \cX \to \cY$ is separated, then $\cX^+ \to \cY^+ \times_{\cY} \cX$ is a monomorphism.
\item \label{P:ev1-surjective}
If $f \co \cX \to \cY$ is proper and $\cY$ is quasi-compact, then $\cX_{\langle d \rangle}^+ \to \cY_{\langle d \rangle}^+ \times_{\cY} \cX$ is surjective for sufficiently divisible integers $d$.
\end{enumerate}
\end{proposition}
\begin{proof}
We may assume that $k$ is algebraically closed.
For \eqref{P:ev1-unramified}, it suffices to show that $\cX^+ \to \cX$ is unramified. We follow the argument of \cite[Prop.~1.4.11(1)]{drinfeld}. We need to check that for any $(\AA^1 \xrightarrow{\lambda} \cX) \in \cX^+(k)$, the induced map $T_{\lambda} \cX^+ \to T_{\lambda(1)} \cX$ on tangent spaces is injective. This map can be identified with the restriction map
\[
\Hom_{\oh_{\AA^1}}^{\mathbb{G}_m}(\lambda^* \Omega_{\cX}^1, \oh_{\AA^1}) \to \Hom_{\oh_{\AA^1 \setminus 0}}^{\mathbb{G}_m}((\lambda^* \Omega_{\cX}^1)|_{\AA^1 \setminus 0}, \oh_{\AA^1 \setminus 0}),
\]
which is clearly injective.
For \eqref{P:ev1-representable}, let $(\lambda \co \AA^1 \to \cX) \in \cX^+(k)$. Automorphisms $\tau_1, \tau_2 \in \Aut_{\cX^+}(\lambda)$ mapping to the same automorphism of $\ev_f(\lambda)$ induce two sections $\widetilde{\tau}_1, \widetilde{\tau}_2$ of $\Isom_{\cX/\cY}(\lambda) \to \AA^1$ agreeing over $\AA^1 \setminus 0$. The valuative criterion for separatedness implies that $\widetilde{\tau}_1 = \widetilde{\tau}_2$ and thus $\tau_1 = \tau_2$.
For \eqref{P:ev1-monomorphism}, since $\cX^+ \to \cY^+ \times_{\cY} \cX$ is unramified and representable, it is enough to prove that it is universally injective. A $k$-point of $\cY^+ \times_{\cY} \cX$ with two preimages in $\cX^+$ corresponds to a $\mathbb{G}_m$-equivariant $2$-commutative square
$$\xymatrix{
\AA^1 \setminus 0 \ar@{^(->}[d] \ar[r] & \cX \ar[d]^f\\
\AA^1 \ar[r] \ar@<0.5ex>[ur]^{h_1} \ar@<-0.5ex>[ur]_{h_2} & \cY
}$$
with two $\mathbb{G}_m$-equivariant lifts $h_1, h_2 \co \AA^1 \to \cX$. We need to produce a $\mathbb{G}_m$-equivariant $2$-isomorphism $h_1\iso h_2$. As $f \co \cX \to \cY$ is separated, $I:=\Isom_{\cX/\cY}(h_1, h_2) \to \AA^1$ is proper. The 2-isomorphism $h_1|_{\AA^1 \setminus 0} \iso h_2|_{\AA^1 \setminus 0}$ gives a $\mathbb{G}_m$-equivariant section of $I\to \AA^1$ over $\AA^1 \setminus 0$ and the closure of its graph gives a $\mathbb{G}_m$-equivariant section of $I\to \AA^1$, i.e., a $\mathbb{G}_m$-equivariant 2-isomorphism $h_1 \iso h_2$.
For \eqref{P:ev1-surjective}, by Proposition \ref{P:reparam}\eqref{P:reparam3}, it suffices to show that a $\mathbb{G}_m$-equivariant commutative diagram
$$\xymatrix{
\AA^1 \setminus 0 \ar[r] \ar@{^(->}[d] & \cX \ar[d]^f \\
\AA^1 \ar[r]^{\lambda} \ar@{-->}[ur]^{\lambda'} & \cY
}$$
of solid arrows admits a $\mathbb{G}_m$-equivariant lift $\lambda' \co \AA^1 \to \cX$ after
reparameterizing the action. Let $x \in \cX(k)$ be the image of $1$ under $\AA^1 \setminus 0 \to \cX$. After replacing $\cX$ with the closure of $\im(\AA^1 \setminus 0 \to \cX)$ we may assume that $\cX$ is integral of dimension $\le 1$. Let $\cX' \to \cX$ be the normalization and choose a preimage $x'$ of $x$. Since $\cX' \to \cY$ is proper, the induced map $\AA^1 \setminus 0 \to \cX'$, defined by $t \mapsto t \cdot x'$, admits a unique lift $h \co C \to \cX'$ compatible with $\lambda$ after a ramified extension $(C,c) \to (\AA^1,0)$. Let $x'_0 = h(c) \in \cX'(k)$.
By Theorem \ref{T:sumi2}, there exists $d > 0$ and a $\mathbb{G}_m$-equivariant map $(\Spec A,w_0) \to (\cX'_{\langle d \rangle}, x'_0)$ with $w_0$ fixed by $\mathbb{G}_m$. If $\dim \cX = 0$, then we may assume $A=k$ and so in this case the composition $\AA^1 \to \Spec k \xrightarrow{x} \cX_{\langle d \rangle}$ gives the desired map. If $\dim \cX = 1$, then we may assume that $\Spec(A)$ is a smooth and irreducible affine curve with two orbits---one open and one closed. It follows that $\Spec(A)$ is $\mathbb{G}_m$-equivariantly isomorphic to $\AA^1$ and the composition $\AA^1 \to \cX'_{\langle d \rangle} \to \cX_{\langle d \rangle}$ gives the desired map.
\end{proof}
\begin{proposition} \label{P:cms}
Let $\pi \co \cX \to \cY$ be a $\mathbb{G}_m$-equivariant morphism
of quasi-separated
Deligne--Mumford stacks,
of finite type over an arbitrary field $k$. If $\pi$ is proper and quasi-finite (e.g., $\pi$ is a coarse moduli space), then
\begin{enumerate}
\item \label{P:cms-proper}
$\cX^+ \to \cY^+$ is proper and
\item \label{P:cms-climmersion-nilimmersion-both}
the maps $\cX^0_{\langle d \rangle} \to \cY^0_{\langle d \rangle} \times_{\cY} \cX$ and $\cX^+_{\langle d\rangle} \to \cY^+_{\langle d \rangle} \times_{\cY} \cX$ are closed immersions for all $d>0$ and nilimmersions for $d$ sufficiently divisible. In particular, $\cX^{\mathbb{G}_m} \to \cY^{\mathbb{G}_m} \times_{\cY} \cX$ is a nilimmersion.
\end{enumerate}
\end{proposition}
\begin{proof}
For \eqref{P:cms-proper} it is enough to prove that $\ev_\pi\colon
\cX^+ \to \cY^+\times_\cY \cX$ is proper. First observe that $\cX^+$ and $\cY^+$ are quasi-compact: via $\ev_0$, they are affine over $\cX$ and $\cY$, respectively (Theorem \ref{T:drinfeld}). Since $\cX$ and $\cY$ are quasi-separated, it follows that $\ev_\pi \colon \cX^+ \to \cY^+\times_{\cY} \cX$ is quasi-compact. We may now use the valuative criteria to verify that $\ev_\pi$ is proper. Let
\begin{equation} \label{E:cms-proper}
\begin{split}
\xymatrix{
\Spec K \ar[d] \ar[r] & \cX^+ \ar[d]^{\ev_\pi} \\
\Spec R \ar[r] \ar@{-->}[ur] & \cY^+\times_\cY \cX
}
\end{split}
\end{equation}
be a diagram of solid arrows where $R$ is a DVR with fraction field $K$. This corresponds to a $\mathbb{G}_m$-equivariant diagram
\begin{equation} \label{E:cms-proper2}
\begin{split}
\xymatrix{
\AA^1_R \setminus 0 \ar[r] \ar@{^(->}[d] & \cX \ar[d]^{\pi} \\
\AA^1_R \ar[r] \ar@{-->}[ur] & \cY
}
\end{split}
\end{equation}
of solid arrows, where $0 \in \AA^1_R$ denotes the unique closed point fixed by $\mathbb{G}_m$. Equivalently, we have a diagram
\begin{equation} \label{E:cms-proper3}
\begin{split}
\xymatrix{
[(\AA^1_R \setminus 0)/\mathbb{G}_m] \ar[r] \ar@{^(->}[d] & [\cX/\mathbb{G}_m] \ar[d]^{\pi} \\
[\AA^1_R/\mathbb{G}_m] \ar[r] \ar@{-->}[ur] & [\cY/\mathbb{G}_m]
}
\end{split}
\end{equation}
of solid arrows. A dotted arrow providing a lift of \eqref{E:cms-proper3} is the same as a $\mathbb{G}_m$-equivariant dotted arrow providing a lift of \eqref{E:cms-proper2} or a lift in \eqref{E:cms-proper}. Now let $U=[(\AA_R^1\setminus 0)/\mathbb{G}_m]$ and $S= [\AA^1_R/\mathbb{G}_m]$. By base change, a dotted arrow providing a lift of \eqref{E:cms-proper3} is the same as a section to the projection $S\times_{[\cY/\mathbb{G}_m]}[\cX/\mathbb{G}_m] \to S$ extending the induced section over $U$. Since $S$ is regular and $U$ contains all points of $S$ of codimension $1$, we may apply Lemma \ref{L:extension}\eqref{L:extension-section} to deduce the claim. (It is worthwhile to point out that the existence of lifts in \eqref{E:cms-proper3} is equivalent to the map $[\cX/\mathbb{G}_m] \to [\cY/\mathbb{G}_m]$ being $\Theta$-reductive, as introduced in \cite{dhl-instability}, and that Lemma \ref{L:extension}\eqref{L:extension-section} implies that $[\cX/\mathbb{G}_m] \to [\cY/\mathbb{G}_m]$ is $\Theta$-reductive.)
For \itemref{P:cms-climmersion-nilimmersion-both}, Theorem
\ref{T:drinfeld} implies that $\cX^0_{\langle d \rangle} \to \cX$ and
$\cY^0_{\langle d \rangle} \to \cY$ are closed immersions for all $d>0$;
thus,
$\cX^0_{\langle d \rangle} \to \cY^0_{\langle d \rangle} \times_{\cY}
\cX$ is a closed immersion. For $d$ sufficiently divisible, it is now
easily checked using the quasi-finiteness of $\pi$ that the morphisms in question are also surjective. Also, for all $d>0$, the map $\cX_{\langle d \rangle}^+ \to \cY_{\langle d \rangle}^+ \times_{\cY} \cX$ is proper by
\eqref{P:cms-proper} and a monomorphism by Proposition \ref{P:ev1}\eqref{P:ev1-monomorphism}, and thus a closed immersion. The surjectivity of $\cX_{\langle d \rangle}^+ \to \cY_{\langle d \rangle}^+ \times_{\cY} \cX$ for sufficiently divisible $d$ follows from Proposition \ref{P:ev1}\eqref{P:ev1-surjective}.
\end{proof}
\begin{lemma} \label{L:extension}
Let $S$ be a regular algebraic stack and let $U \subset S$ be an open substack containing all points of codimension $1$. Let $f \co \cX \to S$ be a quasi-finite morphism that is relatively Deligne--Mumford.
\begin{enumerate}
\item \label{L:extension-etale}
If $f|_U \co f^{-1}(U) \to U$ is \'etale, then $f \co \normin{\cX}{U} \to S$ is \'etale, where $\normin{\cX}{U}$ denotes the normalization of $\cX$ in $f^{-1}(U)$.
\item \label{L:extension-section}
If $f \co \cX \to S$ is proper and $f|_U$ has a section, then $f \co \cX \to S$ has a section.
\end{enumerate}
\end{lemma}
\begin{proof}
For \eqref{L:extension-etale}, as the question is smooth-local on $S$ and \'etale-local on $\cX$, we may assume that $\cX$ and $S$ are irreducible schemes. Now the statement follows from Zariski--Nagata purity \cite[Exp.~X, Cor.~3.3]{sga1}. For \eqref{L:extension-section}, by Zariski's Main Theorem \cite[Thm.~16.5(ii)]{MR1771927}, we may factor a section $U \to \cX$ as $U \hookrightarrow \cV \to \cX$ where $U \hookrightarrow {\cV}$ is a dense open immersion and ${\cV} \to \cX$ is a finite morphism. Since $\cV\to S$ is proper, quasi-finite and an isomorphism over $U$, it follows that $\normin{{\cV}}{U} \to S$ is proper and \'etale by \eqref{L:extension-etale}. As $I_{\normin{{\cV}}{U}/S} \to \normin{{\cV}}{U}$ is finite, \'etale and generically an isomorphism, it is an isomorphism and we conclude that $\normin{{\cV}}{U}\to S$ is representable. Then $\normin{{\cV}}{U} \to S$ is finite, \'etale and generically an isomorphism, thus an isomorphism.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:bb}]
After reparameterizing the action by $\mathbb{G}_m \xrightarrow{d} \mathbb{G}_m$ for $d$ sufficiently divisible, we may assume that $\cX^0 = \cX^{\mathbb{G}_m}$ (Proposition \ref{P:reparam}). Theorem \ref{T:drinfeld} yields quasi-separated Deligne--Mumford stacks $\cX^0 = \underline{\Hom}^{\mathbb{G}_m}(\Spec k, \cX)$ and $\cX^+ = \underline{\Hom}^{\mathbb{G}_m}(\AA^1,\cX)$, locally of finite type over $k$, such that the morphism $\ev_0 \co \cX^+ \to \cX^0$ is affine. Let $\cX^0 = \coprod_i \cF_i$ be the decomposition into connected components and set $\cX_i := \ev_0^{-1}(\cF_i)$. Since $\cX$ is separated, $\coprod_i \cX_i \to \cX$ is a monomorphism (Proposition \ref{P:ev1}\eqref{P:ev1-monomorphism}). This establishes the main part of the theorem. Part \eqref{T:bb-proper} follows from Proposition \ref{P:ev1}\eqref{P:ev1-surjective}.
We now establish \eqref{T:bb-smooth} and \eqref{T:bb-locally-closed} in stages of increasing generality. If $\cX$ is an affine space with a linear $\mathbb{G}_m$-action, then it is easy to see that $\cX^+ \to \cX^0$ is a projection of linear subspaces. If $\cX = \Spec(A)$ is affine and $A = \bigoplus_{d} A_d$ denotes the induced $\ZZ$-grading, then a direct calculation shows that $\cX^+ = V(\sum_{d < 0} A_d)$ and $\cX^0 = V(\sum_{d \neq 0} A_d)$ are closed subschemes; see \cite[\S 1.3.4]{drinfeld}.
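To illustrate these formulas concretely (this is our own illustrative example, with the grading on $A$ induced by the action as above), consider the hyperbolic action on the plane:

```latex
% Illustrative example (ours): the hyperbolic action on the plane.
% Take $\cX = \AA^2 = \Spec k[x,y]$ with $t \cdot (a,b) = (ta, t^{-1}b)$, so that
% $A = k[x,y]$ carries the $\ZZ$-grading with $\deg x = 1$ and $\deg y = -1$.
% The ideal generated by $\sum_{d < 0} A_d$ is $(y)$, and the ideal generated by
% $\sum_{d \neq 0} A_d$ is $(x,y)$, so
\[
  \cX^+ = V(y) \quad \text{(the $x$-axis, where $\lim_{t \to 0} t \cdot (a,b)$ exists)},
  \qquad
  \cX^0 = V(x,y) \quad \text{(the origin)}.
\]
```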
To see \eqref{T:bb-smooth}, we may assume that $k$ is algebraically closed. When $\cX=\Spec A$ is affine, let $x \in \cX^0(k)$ be a fixed point defined by a maximal ideal $\mathfrak{m} \subset A$. The surjection $\mathfrak{m} \to \mathfrak{m}/\mathfrak{m}^2$ admits a $\mathbb{G}_m$-equivariant section which in turn induces a morphism $\cX \to T_{\cX,x} = \Spec(\Sym \mathfrak{m}/\mathfrak{m}^2)$ which is \'etale at $x$, and \eqref{T:bb-smooth} follows from \'etale descent using the case above of affine space and Proposition \ref{P:drinfeld-base-change}.
In general, Proposition \ref{P:fixed-point} and Sumihiro's theorem (Theorem \ref{T:sumi1}) imply that any point of $\cX^0$ has an equivariant affine \'etale neighborhood and thus Proposition \ref{P:drinfeld-base-change} reduces \eqref{T:bb-smooth} to the case of an affine scheme.
For \eqref{T:bb-locally-closed}, let $X^+$ and $X_i$ be the coarse moduli spaces of $\cX^+$ and $\cX_i$. For \eqref{T:bb-locally-closed-affine}, the above discussion shows that since $X$ is affine, $X^+ \to X$ is a closed immersion. Since $\cX^+ \to X^+ \times_{X} \cX$ is a nilimmersion (Proposition \ref{P:cms}\eqref{P:cms-climmersion-nilimmersion-both}), $\cX^+ \to \cX$ is also a closed immersion.
For \eqref{T:bb-locally-affine}, by Proposition \ref{P:cms}\eqref{P:cms-climmersion-nilimmersion-both}, we may assume that $\cX=X$. For any point $x \in X^+$, let $x_0$ be the image of $x$ under $\ev_0 \co X^+ \to X^0$, and choose a $\mathbb{G}_m$-invariant affine open neighborhood $U \subset X$ of $x_0$.
This induces a diagram
\begin{equation} \begin{split} \label{E:locally-affine}
\xymatrix{
U^+ \ar@{^(->}[r] \ar@{^(->}[rd] & \ev_1^{-1}(U) \ar[r] \ar@{^(->}[d] & U \ar@{^(->}[d] \\
& X^+ \ar[r]^{\ev_1} & X .
}
\end{split} \end{equation}
Since $U^+ \to U$ is a closed immersion (as $U$ is affine) and $X^+ \to X$ is separated (it is a monomorphism), $U^+ \to \ev_1^{-1}(U)$ is a closed immersion. Since $U^+ = X^+ \times_{X^0} U^0$ (Proposition \ref{P:drinfeld-base-change}), $x \in U^+$ and $U^+ \to X^+$ is an open immersion. In particular, $U^+ \subset \ev_1^{-1}(U)$ is an open and closed subscheme containing $x$.
For \eqref{T:bb-locally-affine1}, for any $x \in X^+$, we observe from Diagram \eqref{E:locally-affine} that $U^+ \to X^+ \to X$ is a locally closed immersion. Moreover, $U \times U^0$ is an open neighborhood of $(\ev_1(x),x_0)$. Since the restriction of $(\ev_1, \ev_0) \co X^+ \to X \times X^0$ over $U \times U^0$ is the closed immersion $U^+ \to U \times U^0$, it follows that $X^+ \to X \times X^0$ is a locally closed immersion.
For \eqref{T:bb-locally-affine2}, let $Z \subset X^+$ be an irreducible component and $x \in Z$. Then $Z \cap U^+$ is a nonempty open and closed subscheme of the irreducible scheme $Z \cap \ev_1^{-1}(U)$. This shows that $Z \cap U^+ = Z \cap \ev_1^{-1}(U)$ and that $Z \cap \ev_1^{-1}(U) \to U$ is a closed immersion. It follows that $Z \hookrightarrow X^+ \xrightarrow{\ev_1} X$ is a locally closed immersion.
For \eqref{T:bb-locally-closed-smooth-stack}, observe that \eqref{T:bb-smooth} implies that $\cX_i$ is smooth and connected, thus irreducible. Since the coarse moduli space $X$ is necessarily normal and thus admits a $\mathbb{G}_m$-equivariant open affine cover by Sumihiro's theorem \cite[Cor.~2]{sumihiro}, the conclusion follows from \eqref{T:bb-locally-affine}\eqref{T:bb-locally-affine2}.
For \eqref{T:bb-locally-closed-quasi-projective}, it suffices to show that $X_i \hookrightarrow X$ is a locally closed immersion by Proposition \ref{P:cms}\eqref{P:cms-climmersion-nilimmersion-both}. This statement is easily reduced to the case of $X=\mathbb{P}(V)$, using a special case of Proposition \ref{P:cms}\eqref{P:cms-climmersion-nilimmersion-both}. For $X=\mathbb{P}(V)$, a direct calculation shows that each $X_i$ is of the form $\mathbb{P}(W) \setminus \mathbb{P}(W')$ for linear subspaces $W' \subset W \subset V$.
\end{proof}
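To make the final step of the proof of Theorem \ref{T:bb} concrete, here is the simplest instance of the claimed description of the cells of $\mathbb{P}(V)$ (our own worked example, with the attractor recording limits as $t \to 0$):

```latex
% Worked example (ours): let $V = k^2$ with $t \cdot (a_0, a_1) = (a_0, t a_1)$,
% so that $X = \mathbb{P}(V) = \mathbb{P}^1$ with $t \cdot [a_0 : a_1] = [a_0 : t a_1]$
% and fixed points $[1:0]$ and $[0:1]$. As $t \to 0$, every point with $a_0 \neq 0$
% flows to $[1:0]$, while $[0:1]$ is fixed. Writing $e_1$ for the weight-one basis
% vector, the two cells are
\[
  X_1 = \mathbb{P}(V) \setminus \mathbb{P}(\langle e_1 \rangle) \cong \AA^1
  \qquad \text{and} \qquad
  X_2 = \mathbb{P}(\langle e_1 \rangle) = \{[0:1]\},
\]
% each of the form $\mathbb{P}(W) \setminus \mathbb{P}(W')$ for linear subspaces
% $W' \subset W \subset V$ (with $W' = 0$ in the second case).
```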
\appendix
\section{Equivariant Artin algebraization} \label{A:algebraization} \label{A:approx}
In this appendix, we give an equivariant generalization of Artin's algebraization theorem \cite[Thm.~1.6]{artin-algebraization}. We follow the approach of \cite{conrad-dejong} using Artin approximation and an effective version of the Artin--Rees lemma.
The main results of this appendix (Theorems \ref{T:approximate-algebraization} and \ref{T:algebraization}) are formulated in greater generality than necessary to prove Theorem \ref{T:field}. We feel that these results are of independent interest and will have further applications. In particular, in the subsequent article \cite{ahr2} we will apply the results of this appendix to prove a relative version of Theorem \ref{T:field}.
\subsection{Good moduli space morphisms are of finite type}
Let $G$ be a group acting on a noetherian ring~$A$.
Goto--Yamagishi~\cite{MR706507} and Gabber~\cite[Exp.~IV,
Prop.~2.2.3]{MR3309086} have proven that $A$ is finitely generated over $A^G$
when $G$ is either diagonalizable (Goto--Yamagishi) or finite and tame
(Gabber). Equivalently, the good moduli space morphism $\mathcal{X}=[\Spec A/G]\to
\Spec A^G$ is of finite type. The following theorem generalizes this result to
every noetherian stack with a good moduli space.
\begin{theorem} \label{T:good-finite-type}
Let $\cX$ be a noetherian algebraic stack. If $\pi \co \cX \to X$ is a good moduli space with affine diagonal, then $\pi$ is of finite type.
\end{theorem}
\begin{proof}
We may assume that $X = \Spec A$ is affine.
Let $p \co U=\Spec B \to \cX$ be an affine presentation. Then $\pi_* (p_* \oh_{U}) = \widetilde{B}$. We need to show that $B$ is a finitely generated $A$-algebra. This follows from the following lemma.
\end{proof}
\begin{lemma}
If $\cX$ is a noetherian algebraic stack and $\pi \co \cX \to X$ is a good moduli space, then $\pi_*$ preserves finitely generated algebras.
\end{lemma}
\begin{proof}
Let $\cA$ be a finitely generated $\oh_{\cX}$-algebra. Write $\cA = \varinjlim_{\lambda} \cF_{\lambda}$ as a union of its finitely generated $\oh_{\cX}$-submodules. Then $\cA$ is generated as an $\oh_{\cX}$-algebra by $\cF_\lambda$ for sufficiently large $\lambda$; that is, we have a surjection $\Sym(\cF_\lambda) \to \cA$.
Since $\pi_*$ is exact, it is enough to prove that $\pi_* \Sym(\cF_\lambda)$ is finitely generated.
But $C:= \Gamma(\cX, \Sym(\cF_\lambda))$ is a $\ZZ$-graded ring which is noetherian by \cite[Thm.~4.16(x)]{alper-good} since $\underline{\Spec}_{\cX}(\Sym(\cF_\lambda))$ is noetherian and $\Spec(C)$ is its good moduli space. It is well-known that $C$ is then finitely generated over $C_0=\Gamma(\cX, \oh_{\cX})=A$.
\end{proof}
\subsection{Artinian stacks and adic morphisms}
\begin{definition}
We say that an algebraic stack $\mathcal{X}$ is \emph{artinian} if it is noetherian
and $|\mathcal{X}|$ is discrete. We say that a quasi-compact and quasi-separated
algebraic stack $\mathcal{X}$ is \emph{local} if there exists a unique closed point
$x\in |\mathcal{X}|$.
\end{definition}
Let $\mathcal{X}$ be a noetherian algebraic stack and let $x\in |\mathcal{X}|$ be a closed
point with maximal ideal $\mathfrak{m}_x \subset \oh_{\cX}$. The \emph{$n$th infinitesimal neighborhood of
$x$} is the closed algebraic stack $\mathcal{X}_x^{[n]}\hookrightarrow \mathcal{X}$ defined by
$\mathfrak{m}_x^{n+1}$. Note that $\mathcal{X}_x^{[n]}$ is artinian and that $\mathcal{X}_x^{[0]}=\mathcal{G}_x$ is
the residual gerbe. A local artinian stack $\mathcal{X}$ is a local artinian scheme
if and only if $\mathcal{X}_x^{[0]}$ is the spectrum of a field.
\begin{definition} \label{D:adic}
Let $\mathcal{X}$ and $\mathcal{Y}$ be algebraic stacks, and let $x\in |\mathcal{X}|$ and $y\in
|\mathcal{Y}|$ be closed points. If $f\colon (\mathcal{X},x)\to (\mathcal{Y},y)$ is a pointed
morphism, then $\mathcal{X}_x^{[n]}\subseteq f^{-1}(\mathcal{Y}_y^{[n]})$ and we let
$f^{[n]}\colon \mathcal{X}_x^{[n]}\to \mathcal{Y}_y^{[n]}$ denote the induced morphism.
We say that $f$ is \emph{adic} if $f^{-1}(\mathcal{Y}_y^{[0]})=\mathcal{X}_x^{[0]}$.
\end{definition}
Note that $f$ is adic precisely when $f^*\mathfrak{m}_y\to \mathfrak{m}_x$ is surjective.
When $f$ is adic, we thus have that $f^{-1}(\mathcal{Y}_y^{[n]})=\mathcal{X}_x^{[n]}$ for all
$n\geq 0$. Every closed immersion is adic.
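A minimal example of the adic condition failing (our own, phrased in the notation of Definition \ref{D:adic}):

```latex
% Non-example (ours): the pointed morphism
% $f \colon (\mathcal{X}, x) = (\Spec k[\epsilon]/(\epsilon^2), x) \to (\Spec k, y) = (\mathcal{Y}, y)$
% has $f^* \mathfrak{m}_y = 0$ while $\mathfrak{m}_x = (\epsilon) \neq 0$, so
% $f^* \mathfrak{m}_y \to \mathfrak{m}_x$ is not surjective. Equivalently,
\[
  f^{-1}\bigl(\mathcal{Y}_y^{[0]}\bigr) = \Spec k[\epsilon]/(\epsilon^2)
  \neq \Spec k = \mathcal{X}_x^{[0]},
\]
% so $f$ is not adic.
```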
\begin{proposition}
Let $\mathcal{X}$ be a quasi-separated algebraic stack and let $x\in |\mathcal{X}|$ be a
closed point. Then there exists an adic flat presentation; that is, there
exists an adic flat morphism of finite presentation $p\colon (\Spec A,v)\to
(\mathcal{X},x)$. If the stabilizer group of $x$ is smooth, then there exists an adic
smooth presentation.
\end{proposition}
\begin{proof}
The question is local on $\mathcal{X}$ so we can assume that $\mathcal{X}$ is quasi-compact.
Start with any smooth presentation $q\colon V=\Spec A\to \mathcal{X}$. The fiber
$V_x=q^{-1}(\mathcal{G}_x)=\Spec A/I$ is smooth over the residue field $\kappa(x)$ of
the residual gerbe. Pick a closed point $v\in V_x$ such that
$\kappa(v)/\kappa(x)$ is separable. After replacing $V$ with an open
neighborhood of $v$, we may pick a regular sequence
$\overline{f}_1,\overline{f}_2,\dots,\overline{f}_n\in A/I$ that generates
$\mathfrak{m}_v$. Lift this to a sequence $f_1,f_2,\dots,f_n\in A$ and let $Z\hookrightarrow V$ be
the closed subscheme defined by this sequence. The sequence is transversely
regular over $\mathcal{X}$ in a neighborhood $W\subseteq V$ of $v$. In particular,
$U=W\cap Z\to V\to \mathcal{X}$ is flat. By construction $U_x=Z_x=\Spec \kappa(v)$ so
$(U,v)\to (\mathcal{X},x)$ is an adic flat presentation. Moreover, $U_x=\Spec \kappa(v)\to
\mathcal{G}_x\to \Spec \kappa(x)$ is \'etale so if the stabilizer group is smooth, then
$U_x\to \mathcal{G}_x$ is smooth and $U\to \mathcal{X}$ is smooth at $v$.
\end{proof}
\begin{corollary}\label{C:adic-artin}
Let $\mathcal{X}$ be a noetherian algebraic stack. The following statements
are equivalent.
\begin{enumerate}
\item\label{CI:adic-artin-pres}
There exists an artinian ring $A$ and a flat presentation
$p\colon \Spec A\to \mathcal{X}$ which is adic at every point of $\mathcal{X}$.
\item\label{CI:artin-pres}
There exists an artinian ring $A$ and a flat presentation
$p\colon \Spec A\to \mathcal{X}$.
\item\label{CI:artin}
$\mathcal{X}$ is artinian.
\end{enumerate}
\end{corollary}
\begin{proof}
The implications
\itemref{CI:adic-artin-pres}$\implies$\itemref{CI:artin-pres}$\implies$\itemref{CI:artin}
are trivial. The implication
\itemref{CI:artin}$\implies$\itemref{CI:adic-artin-pres} follows from the
proposition.
\end{proof}
\begin{remark}\label{R:miniversal}
Let $p\colon (U,u)\to (\mathcal{X},x)$ be a smooth morphism. We say that $p$ is
\emph{miniversal} at $u$ if the induced morphism $T_{U,u}\to T_{\mathcal{X},x}$ on
tangent spaces is an isomorphism. Equivalently, $\Spec \hat{\oh}_{U,u}\to
\mathcal{X}$ is a formal miniversal deformation space.
If the stabilizer at $x$ is smooth, then
$T_{\mathcal{X},x}$ identifies with the normal space $N_x$. Hence, $p$ is miniversal at $u$
if and only if $u$ is a connected component of $p^{-1}(\cG_x)$, that is, if and
only if $p$ is adic after restricting $U$ to a neighborhood of $u$. If the stabilizer at $x$ is
not smooth, then there do not exist smooth adic presentations, but there
exist smooth miniversal presentations as well as flat adic presentations.
\end{remark}
If $\cX$ is an algebraic stack, $\cI \subseteq \oh_{\cX}$ is a sheaf of ideals and $\cF$ is a quasi-coherent sheaf, we set $\Gr_{\cI}(\cF) := \bigoplus_{n \ge 0} \cI^n \cF / \cI^{n+1} \cF$, which is a quasi-coherent sheaf of graded modules on the
closed substack defined by $\cI$.
\begin{proposition}\label{P:closed/iso-cond:infinitesimal}
Let $f\colon (\mathcal{X},x)\to (\mathcal{Y},y)$ be a morphism of noetherian local stacks.
\begin{enumerate}
\item\label{PI:closed:infinitesimal}
If $f^{[1]}$ is a closed immersion, then $f$ is adic and $f^{[n]}$ is
a closed immersion for all $n\geq 0$.
\item\label{PI:iso:infinitesimal}
If $f^{[1]}$ is a closed immersion and there
exists an isomorphism $\varphi\colon \Gr_{\mathfrak{m}_x}(\oh_{\mathcal{X}})\to
(f^{[0]})^*\Gr_{\mathfrak{m}_y}(\oh_{\mathcal{Y}})$ of graded $\oh_{\mathcal{X}_x^{[0]}}$-modules,
then $f^{[n]}$ is an isomorphism for all $n\geq 0$.
\end{enumerate}
\end{proposition}
\begin{proof}
Pick an adic flat
presentation $p\colon \Spec A\to \mathcal{Y}$. After pulling back $f$ along $p$, we
may assume that $\mathcal{Y}=\Spec A$ is a scheme. If $f^{[0]}$ is a closed immersion,
then $\mathcal{X}_x^{[0]}$ is also a scheme, hence so is $\mathcal{X}_x^{[n]}$ for all $n\geq 0$.
After replacing $f$ with $f^{[n]}$ for some $n$ we may thus assume that
$\mathcal{X}=\Spec B$ and $\mathcal{Y}=\Spec A$ are affine and local artinian.
If
in addition $f^{[1]}$ is a closed immersion, then $\mathfrak{m}_A\to
\mathfrak{m}_B/\mathfrak{m}_B^2$ is surjective; hence so is $\mathfrak{m}_AB\to \mathfrak{m}_B$ by Nakayama's
Lemma. We conclude that $f$ is adic and that $A\to B$ is surjective (Nakayama's
Lemma again).
Assume that in addition we have an isomorphism $\varphi\colon
\Gr_{\mathfrak{m}_A}A\cong \Gr_{\mathfrak{m}_B} B$ of graded $k$-vector spaces where
$k=A/\mathfrak{m}_A=B/\mathfrak{m}_B$. Then $\dim_k \mathfrak{m}_A^n/\mathfrak{m}_A^{n+1}=\dim_k
\mathfrak{m}_B^n/\mathfrak{m}_B^{n+1}$. It follows that the surjections
$\mathfrak{m}_A^n/\mathfrak{m}_A^{n+1}\to \mathfrak{m}_B^n/\mathfrak{m}_B^{n+1}$ induced by $f$ are
isomorphisms. It follows that $f$ is an isomorphism.
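The last step can be spelled out (a routine count, included here for convenience): since $A$ and $B$ are artinian local rings, $\mathfrak{m}_A^n=\mathfrak{m}_B^n=0$ for $n\gg 0$, so

```latex
\[
\dim_k A \;=\; \sum_{n\geq 0} \dim_k \mathfrak{m}_A^n/\mathfrak{m}_A^{n+1}
\;=\; \sum_{n\geq 0} \dim_k \mathfrak{m}_B^n/\mathfrak{m}_B^{n+1}
\;=\; \dim_k B \;<\; \infty,
\]
```

and a surjection of $k$-vector spaces of equal finite dimension is bijective.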
\end{proof}
\begin{definition}\label{D:complete-local-stack}
Let $\mathcal{X}$ be an algebraic stack. We say that $\mathcal{X}$ is a
\emph{complete local stack} if
\begin{enumerate}
\item $\mathcal{X}$ is local with closed point $x$,
\item $\mathcal{X}$ is excellent with affine stabilizers, and
\item $\mathcal{X}$ is coherently complete along the residual gerbe $\cG_x$.
\end{enumerate}
Recall
from Definition \ref{D:cc} that (3) means that the natural functor
\[
\Coh(\cX) \to \varprojlim_n \Coh\bigl(\cX_x^{[n]}\bigr)
\]
is an equivalence of categories.
\end{definition}
\begin{proposition}\label{P:closed/iso-cond:complete}
Let $f\colon (\mathcal{X},x)\to (\mathcal{Y},y)$ be a morphism of complete local stacks.
\begin{enumerate}
\item\label{PI:closed:complete}
$f$ is a closed immersion if and only if $f^{[1]}$ is a closed immersion.
\item\label{PI:iso:complete}
$f$ is an isomorphism if and only if $f^{[1]}$ is a closed immersion and there
exists an isomorphism $\varphi\colon \Gr_{\mathfrak{m}_x}(\oh_X)\to
(f^{[0]})^*\Gr_{\mathfrak{m}_y}(\oh_Y)$ of graded $\oh_{\mathcal{X}_x^{[0]}}$-modules.
\end{enumerate}
\end{proposition}
\begin{proof}
The conditions are clearly necessary. Conversely, if $f^{[1]}$ is a closed
immersion, then $f$ is adic and $f^{[n]}$ is a closed immersion for all $n\geq
0$ by
Proposition~\ref{P:closed/iso-cond:infinitesimal}~\itemref{PI:closed:infinitesimal}. We thus obtain a system of closed immersions $f^{[n]}\colon \mathcal{X}_x^{[n]}\hookrightarrow
\mathcal{Y}_y^{[n]}$ which is compatible in the sense that $f^{[m]}$ is the pull-back of $f^{[n]}$ for every $m\leq n$. Since $\mathcal{Y}$ is coherently complete, we obtain a unique closed
substack $\mathcal{Z}\hookrightarrow \mathcal{Y}$ such that $\mathcal{X}_x^{[n]}=\mathcal{Z}\times_\mathcal{Y} \mathcal{Y}_y^{[n]}$ for all $n$.
If there exists an isomorphism
$\varphi$ as in the second statement, then $f^{[n]}$ is an isomorphism for all
$n\geq 0$ by Proposition~\ref{P:closed/iso-cond:infinitesimal}~\itemref{PI:iso:infinitesimal} and $\mathcal{Z}=\mathcal{Y}$.
Finally, since
$\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$ are complete local stacks, it follows by Tannaka duality (Theorem \ref{T:tannakian})
that we have an isomorphism $\mathcal{X}\to \mathcal{Z}$ over $\mathcal{Y}$ and the result follows.
\end{proof}
\subsection{Artin approximation}
Artin's original approximation theorem applies to the henselization of an
algebra of finite type over a field or an excellent Dedekind
domain~\cite[Thm.~1.12]{artin-approx}. This is sufficient for the main body of
this article but for the generality of this appendix we need Artin approximation
over arbitrary excellent schemes. It is well-known that this follows from
Popescu's theorem (general N\'eron desingularization), see
e.g.\ \cite[Thm.~1.3]{popescu-general}, \cite[Thm.~11.3]{spivakovsky_popescus_theorem} and \cite[Tag
\spref{07QY}]{stacks-project}. We include a proof here for completeness.
\begin{theorem}[Popescu]
A regular homomorphism $A\to B$ between noetherian rings is a
filtered colimit of smooth homomorphisms.
\end{theorem}
Here regular means flat with geometrically regular fibers.
See~\cite[Thm.~1.8]{popescu-general} for the original proof and~\cite{swan_popescus_theorem,spivakovsky_popescus_theorem} or \cite[Tag \spref{07BW}]{stacks-project} for more recent proofs.
\begin{theorem}[Artin approximation]\label{T:artin-approximation}
Let $S=\Spec A$ be the spectrum of a G-ring (e.g., excellent), let $s\in S$ be
a point and let $\widehat{S}=\Spec \widehat{A}$ be the completion at $s$. Let
$F\colon (\mathsf{Sch}/S)^\mathrm{op}\to \Sets$ be a functor locally of finite
presentation. Let $\overline{\xi}\in F(\widehat{S})$ and let $N\geq 0$ be an
integer. Then there exists an \'etale neighborhood $(S',s')\to (S,s)$ and an
element $\xi'\in F(S')$ such that $\xi'$ and $\overline{\xi}$ coincide in
$F(S^{[N]}_s)$.
\end{theorem}
\begin{proof}
We may replace $A$ by the localization at the prime ideal $\mathfrak{p}$ corresponding
to the point $s$. Since $A$ is a G-ring, the morphism $A\to \widehat{A}$ is
regular and hence a filtered colimit of smooth homomorphisms $A\to A_\lambda$
(Popescu's theorem). Since $F$ is locally of finite presentation, we can thus
find a factorization $A\to A_1\to \widehat{A}$, where $A\to A_1$ is smooth, and
an element $\xi_1\in F(A_1)$ lifting $\overline{\xi}$. After replacing $A_1$ with a
localization $(A_1)_f$ there is a factorization $A\to A[x_1,x_2,\dots,x_n]\to
A_1$ where the second map is \'etale~\cite[IV.17.11.4]{EGA}. Choose a lift
$\varphi\colon A[x_1,x_2,\dots,x_n]\to A$ of
\[
\varphi_N\colon A[x_1,x_2,\dots,x_n]\to A_1\to \widehat{A}\to A/\mathfrak{p}^{N+1}.
\]
Let $A'=A_1\otimes_{A[x_1,x_2,\dots,x_n]} A$ and
let $\xi'\in F(A')$ be the image of $\xi_1$. By construction we have an
$A$-algebra homomorphism $\varphi'_N\colon A'\to A/\mathfrak{p}^{N+1}$ such that the
images of $\xi'$ and $\overline{\xi}$ are equal in $F(A/\mathfrak{p}^{N+1})$. Since $A\to
A'$ is \'etale the result follows with $S'=\Spec A'$.
\end{proof}
\subsection{Formal versality}
\begin{definition}
Let $\mathcal{W}$ be a noetherian algebraic stack, let $w\in |\mathcal{W}|$ be a closed point and let
$\mathcal{W}_w^{[n]}$ denote the $n$th infinitesimal neighborhood of $w$. Let $\mathcal{X}$ be
a category fibered in groupoids and let $\eta\colon \mathcal{W}\to \mathcal{X}$ be a
morphism. We say that $\eta$ is \emph{formally versal} (resp.\ \emph{formally
universal}) at $w$ if the following lifting condition holds. Given a
$2$-commutative diagram of solid arrows
\[
\xymatrix{
\mathcal{W}_w^{[0]}\ar@{(->}[r]^{\iota}
& \mathcal{Z}\ar[r]^{f}\ar@{(->}[d]^{g}
& \mathcal{W}\ar[d]^{\eta} \\
& \mathcal{Z}'\ar[r]\ar@{-->}[ur]^{f\mathrlap{'}} & \mathcal{X}
}
\]
where $\mathcal{Z}$ and $\mathcal{Z}'$ are local artinian stacks and $\iota$ and $g$ are
closed immersions, there exists a morphism (resp.\ a unique morphism) $f'$
and $2$-isomorphisms such that the whole diagram is $2$-commutative.
\end{definition}
\begin{proposition}\label{P:formal-versality-criterion}
Let $\eta\colon (\mathcal{W},w)\to (\mathcal{X},x)$ be a morphism of noetherian algebraic
stacks. Assume that $w$ and $x$ are closed points.
\begin{enumerate}
\item If $\eta^{[n]}$ is \'etale for every $n$, then
$\eta$ is formally universal at $w$.
\item If $\eta^{[n]}$ is smooth for every $n$ and the stabilizer $G_w$ is
linearly reductive, then $\eta$ is formally versal at $w$.
\end{enumerate}
\end{proposition}
\begin{proof}
We begin with the following observation: if $(\cZ,z)$ is a local artinian stack
and $h\colon (\cZ,z) \to (\mathcal{Q},q)$ is a morphism of algebraic stacks,
where $q$ is a closed point, then there exists an $n$ such that $h$ factors
through $\mathcal{Q}_q^{[n]}$. Now, if we are given a lifting problem, then
the previous observation shows that we may assume that $\cZ$ and $\cZ'$ factor
through some $\cW_w^{[n]} \to \cX_x^{[n]}$.
The first part is now clear from descent.
For the second part, the obstruction to the existence of a lift belongs to the group $\Ext^1_{\oh_{\cZ}}(f^*L_{\cW_w^{[n]}/\cX_x^{[n]}},I)$, where $I$ is the square zero ideal defining the closed immersion $g$. When $\eta^{[n]}$
is representable, this follows directly from \cite[Thm.~1.5]{olsson-defn}. In
general, this follows from the fundamental exact triangle of the cotangent
complex for $\cZ\to \cW_w^{[n]}\times_{\cX_x^{[n]}} \cZ'\to \cZ'$ and two
applications of \cite[Thm.~1.1]{olsson-defn}.
But $\cZ$ is cohomologically affine and $L_{\cW_w^{[n]}/\cX_x^{[n]}}$ is a
perfect complex of Tor-amplitude $[0,1]$, so the $\Ext$-group vanishes. The
result follows.
\end{proof}
\subsection{Refined Artin--Rees for algebraic stacks}
The results in this section are a generalization of~\cite[\S3]{conrad-dejong} (also
see \cite[Tag~\spref{07VD}]{stacks-project}) from rings to algebraic stacks.
\begin{definition}\label{D:AR}
Let $\mathcal{X}$ be a noetherian algebraic stack and let $\mathcal{Z}\hookrightarrow \mathcal{X}$ be a closed
substack defined by the ideal $\mathcal{I}\subseteq \oh_\mathcal{X}$. Let $\varphi\colon
\mathcal{E}\to \mathcal{F}$ be a homomorphism of coherent sheaves on $\mathcal{X}$. Let $c\geq 0$ be
an integer. We say that $\AR{c}$ holds for $\varphi$ along $\mathcal{Z}$ if
\[
\varphi(\mathcal{E})\cap \mathcal{I}^n\mathcal{F} \subseteq \varphi(\mathcal{I}^{n-c}\mathcal{E}),\quad \forall n\geq c .
\]
\end{definition}
When $\mathcal{X}$ is a scheme, $\AR{c}$ holds for all sufficiently large $c$ by the
Artin--Rees lemma. If $\pi\colon U\to \mathcal{X}$ is a flat presentation, then
$\AR{c}$ holds for $\varphi$ along $\mathcal{Z}$ if and only if $\AR{c}$ holds for
$\pi^*\varphi\colon\pi^*\mathcal{E}\to \pi^*\mathcal{F}$ along $\pi^{-1}(\mathcal{Z})$. In particular
$\AR{c}$ holds for $\varphi$ along $\mathcal{Z}$ for all sufficiently large $c$.
If $f\colon \mathcal{E}'\twoheadrightarrow \mathcal{E}$ is a surjective homomorphism, then $\AR{c}$ for
$\varphi$ holds if and only if $\AR{c}$ for $\varphi\circ f$ holds.
In the following section, we will only use the case when $|\mathcal{Z}|$ is a closed
point.
\begin{theorem}\label{T:Artin-Rees}
Let $\mathcal{E}_2\xrightarrow{\alpha} \mathcal{E}_1\xrightarrow{\beta} \mathcal{E}_0$
and $\mathcal{E}'_2\xrightarrow{\alpha'} \mathcal{E}'_1\xrightarrow{\beta'} \mathcal{E}'_0$
be two complexes of coherent sheaves on a noetherian algebraic stack $\cX$. Let $\mathcal{Z}\hookrightarrow \mathcal{X}$ be a closed
substack defined by the ideal $\mathcal{I}\subseteq \oh_\mathcal{X}$. Let $c$ be a positive integer. Assume
that
\begin{enumerate}
\item $\mathcal{E}_0,\mathcal{E}'_0,\mathcal{E}_1,\mathcal{E}'_1$ are vector bundles,
\item the sequences are isomorphic after tensoring with $\oh_{\cX}/\mathcal{I}^{c+1}$,
\item the first sequence is exact, and
\item $\AR{c}$ holds for $\alpha$ and $\beta$ along $\mathcal{Z}$.
\end{enumerate}
Then
\begin{enumerate}[label=(\alph*)]
\item the second sequence is exact in a neighborhood of $\mathcal{Z}$;
\item $\AR{c}$ holds for $\beta'$ along $\mathcal{Z}$; and
\item given an isomorphism $\varphi\colon \mathcal{E}_0\to \mathcal{E}'_0$, there exists a
unique isomorphism $\psi$ of $\Gr_\mathcal{I}(\oh_X)$-modules in the diagram
\[
\xymatrix@C+2mm{
\Gr_\mathcal{I}(\mathcal{E}_0)\ar@{->>}[r]^-{\Gr(\gamma)}\ar[d]^{\Gr(\varphi)}_{\cong}
& \Gr_\mathcal{I}(\coker \beta)\ar[d]^{\psi}_{\cong} \\
\Gr_\mathcal{I}(\mathcal{E}'_0)\ar@{->>}[r]^-{\Gr(\gamma')}
& \Gr_\mathcal{I}(\coker \beta')
}
\]
where $\gamma\colon \mathcal{E}_0\to \coker \beta$ and
$\gamma'\colon \mathcal{E}'_0\to \coker \beta'$ denote the induced maps.
\end{enumerate}
\end{theorem}
\begin{proof}
Note that there exists an isomorphism $\psi$ if and only if $\ker
\Gr(\gamma)=\ker \Gr(\gamma')$. All three statements can thus be checked after
pulling back to a presentation $U\to \mathcal{X}$. We may also localize and assume
that $\mathcal{X}=U=\Spec A$ where $A$ is a local ring. Then all vector bundles are
free and we may choose isomorphisms $\mathcal{E}_i\cong\mathcal{E}'_i$ for $i=0,1$ such that
$\beta=\beta'$ modulo $\mathcal{I}^{c+1}$. We can also choose a surjection $\epsilon'\colon \oh_U^n\twoheadrightarrow
\mathcal{E}'_2$ and a lift $\epsilon\colon \oh_U^n\twoheadrightarrow \mathcal{E}_2$ modulo $\mathcal{I}^{c+1}$, so that
$\alpha\circ\epsilon=\alpha'\circ\epsilon'$ modulo $\mathcal{I}^{c+1}$. Thus, we may assume that $\mathcal{E}_i=\mathcal{E}'_i$ for
$i=0,1,2$ are free. The result then follows from~\cite[Lem.~3.1 and
Thm.~3.2]{conrad-dejong} or \cite[Tags~\spref{07VE} and
\spref{07VF}]{stacks-project}.
\end{proof}
\subsection{Equivariant algebraization}
We now consider the equivariant generalization of Artin's algebraization
theorem, see~\cite[Thm.~1.6]{artin-algebraization} and
\cite[Thm.~1.5, Rem.~1.7]{conrad-dejong}. In fact, we give a general
algebraization theorem for algebraic stacks.
\begin{theorem}\label{T:approximate-algebraization}
Let $S$ be an excellent scheme and let $T$ be a noetherian algebraic space over
$S$. Let $\cZ$ be an algebraic stack of finite presentation over $T$ and let
$z\in |\cZ|$ be a closed point such that $\cG_z\to S$ is of finite type. Let $t \in T$ be the image of $z$. Let
$\cX_1,\dots,\cX_n$ be categories fibered in groupoids over $S$, locally of finite presentation. Let $\eta\co \cZ \to
\cX=\cX_1\times_S\dots \times_S \cX_n$ be a morphism. Fix an integer $N\geq 0$. Then there exists
\begin{enumerate}
\item\label{TI:approx-first}
an affine scheme $S'$ of finite type over $S$ and a closed point $s' \in S'$ mapping to the same point in $S$ as $t \in T$;
\item an algebraic stack $\cW \to S'$ of finite type;
\item a closed point $w\in |\cW|$ over $s'$;
\item a morphism $\xi\co \cW \to \cX$;
\item\label{TI:approx-last}
an isomorphism $\cZ \times_T T^{[N]}_t \cong \cW \times_{S'} S'^{[N]}_{s'}$ over $\cX$ mapping $z$ to $w$;
in particular, there is an isomorphism $\cZ^{[N]}_z\cong \cW^{[N]}_w$ over $\cX$; and
\item\label{TI:approx-graded-iso}
an isomorphism $\Gr_{\mathfrak{m}_z}\oh_{\cZ}\cong \Gr_{\mathfrak{m}_w}\oh_{\cW}$ of
graded algebras over $\cZ^{[0]}_z\cong \cW^{[0]}_w$.
\end{enumerate}
Moreover, if $\cX_i$ is a quasi-compact algebraic stack and $\eta_i\co \cZ \to
\cX_i$ is affine for some $i$, then it can be arranged so that $\xi_i \co
\cW \to \cX_i$ is affine.
\end{theorem}
\begin{proof}
We may assume that $S=\Spec A$ is affine. Let $t\in T$ be the image of $z$. By replacing $T$ with the completion $\hat{T}=\Spec \hat{\oh}_{T,t}$
and $\cZ$ with $\cZ \times_T \hat{T}$, we may assume that $T=\hat{T}=\Spec
B$ where $B$ is a complete local ring. By standard limit methods, we have an
affine scheme $S_0=\Spec B_0$ and an algebraic stack $\cZ_0\to S_0$ of finite
presentation and a commutative diagram
\[
\xymatrix{
\cZ\ar[r]\ar[d]\ar@/^2ex/[rr] & \cZ_0\ar[r]\ar[d] & \mathcal{X}\ar[d] \\
T\ar[r] & S_0\ar[r]\ar@{}[ul]|\square & S
}
\]
If $\mathcal{X}_i$ is algebraic and quasi-compact and $\cZ\to \mathcal{X}_i$ is
affine for some $i$, we may also arrange so that $\cZ_0\to \mathcal{X}_i$ is
affine~\cite[Thm.~C]{rydh-noetherian}.
Since $\mathcal{G}_z\to S$ is of finite type, so is $\Spec \kappa(t)\to S$. We may
thus choose a factorization $T\to S_1=\AA^n_{S_0}\to S_0$, such that
$T\to \hat{S}_1=\Spec \hat{\oh}_{S_1,s_1}$ is a
closed immersion; here $s_1\in S_1$ denotes the image of $t\in T$.
After replacing $S_1$ with an open neighborhood, we may assume that $s_1$ is a
closed point. Let
$\mathcal{Z}_1=\mathcal{Z}_0\times_{S_0} S_1$ and
$\hat{\mathcal{Z}}_1=\mathcal{Z}_1\times_{S_1}\hat{S}_1$. Consider the functor
$F\colon (\mathsf{Sch}/S_1)^\mathrm{op}\to \Sets$ where $F(U \to S_1)$ is the set of isomorphism
classes of complexes
$$ \mathcal{E}_2\xrightarrow{\alpha} \mathcal{E}_1\xrightarrow{\beta}
\oh_{\mathcal{Z}_1 \times_{S_1}U}$$
of finitely presented quasi-coherent $\oh_{\mathcal{Z}_1 \times_{S_1} U} $-modules with $\cE_1$ locally free. By standard limit arguments, $F$ is locally of finite presentation.
We have an element $(\alpha,\beta)\in F(\hat{S}_1)$ such that $\im(\beta)$
defines $\mathcal{Z} \hookrightarrow \hat{\mathcal{Z}}_1$. Indeed, choose a resolution
\[
\hat{\oh}_{S_1,s_1}^{\mathrlap{\oplus r}}\xrightarrow{\quad\widetilde{\beta}\;\;}\hat{\oh}_{S_1,s_1}\twoheadrightarrow B.
\]
After pulling back $\widetilde{\beta}$ to $\hat{\mathcal{Z}}_1$, we obtain a resolution
\[
\ker(\beta)\xhookrightarrow{\;\;\alpha\;\;} \oh_{\hat{\mathcal{Z}}_1}^{\oplus r}
\xrightarrow{\;\;\beta\;\;} \oh_{\hat{\mathcal{Z}}_1}\twoheadrightarrow \oh_{\mathcal{Z}}.
\]
After increasing $N$, we may assume that $\AR{N}$ holds for $\alpha$
and $\beta$ at $z$.
Artin approximation (Theorem~\ref{T:artin-approximation}) gives
an \'etale neighborhood $(S',s') \to (S_1,s_1)$ and an
element $(\alpha',\beta')\in F(S')$ such that $(\alpha,\beta)=
(\alpha',\beta')$ in $F(S_{1,s_1}^{[N]})$. We let $\mathcal{W}\hookrightarrow \mathcal{Z}_1\times_{S_1}
S'$ be the closed substack defined by $\im(\beta')$. Then $\mathcal{Z} \times_T T_t^{[N]}$ and
$\mathcal{W} \times_{S'} S'^{[N]}_{s'}$ are equal as closed substacks of $\mathcal{Z}_1\times_{S_1} S_{1,s_1}^{[N]}$, and \itemref{TI:approx-first}--\itemref{TI:approx-last}
follow. Finally, \itemref{TI:approx-graded-iso} follows from Theorem~\ref{T:Artin-Rees}.
\end{proof}
\begin{theorem}\label{T:algebraization}
Let $S$, $T$, $\mathcal{Z}$, $\eta$, $N$, $\mathcal{W}$ and $\xi$ be as in
Theorem~\ref{T:approximate-algebraization}. If $\eta_1\colon \mathcal{Z}\to \mathcal{X}_1$ is formally
versal, then there are compatible isomorphisms $\varphi_n\colon \mathcal{Z}_z^{[n]} \iso
\mathcal{W}_w^{[n]}$ over $\mathcal{X}_1$ for all $n\geq 0$. For $n\leq N$, the isomorphism
$\varphi_n$ is also compatible with $\eta$ and $\xi$.
\end{theorem}
\begin{proof}
We can assume that $N\geq 1$. By Theorem~\ref{T:approximate-algebraization}, we have an
isomorphism $\varphi_N\co \cZ_z^{[N]} \to \cW_w^{[N]}$ over
$\mathcal{X}$. By formal versality and induction on $n \geq N$, we can extend $\psi_N=\varphi^{-1}_N$ to compatible morphisms
$\psi_n\colon \cW_w^{[n]}\to \cZ$ over $\mathcal{X}_1$. Indeed, formal versality allows us to find a dotted arrow such that the diagram
$$\xymatrix@C+7mm{
\cW_w^{[n]} \ar[r]^-{\psi_n} \ar[d] & \cZ \ar[d]^{\eta} \\
\cW_w^{[n+1]} \ar[r]_-{\xi_1|_{\cW_w^{[n+1]}}} \ar@{-->}[ur]^{\psi_{n+1}} & \cX_1
}$$
is 2-commutative. By
Proposition~\ref{P:closed/iso-cond:infinitesimal}~\itemref{PI:iso:infinitesimal}, $\psi_n$ induces an
isomorphism $\varphi_n\colon \mathcal{Z}_z^{[n]}\to \mathcal{W}_w^{[n]}$.
\end{proof}
We now formulate the theorem above in a manner which is transparently an equivariant analogue of Artin algebraization \cite[Thm.~1.6]{artin-algebraization}. It is this formulation that is directly applied to prove Theorem \ref{T:field}.
\begin{corollary}\label{C:equivariant-algebraization}
Let $H$ be a linearly reductive affine group scheme over an algebraically closed field $k$.
Let $\cX$ be a noetherian algebraic stack of
finite type over $k$ with affine stabilizers.
Let $\hat{\cH} = [\Spec C / H]$ be a noetherian algebraic stack over $k$. Suppose that $C^{H}$ is a complete local $k$-algebra.
Let $\eta\colon \hat{\cH} \to \cX$ be a morphism
that is formally versal at a closed point $z\in |\hat{\cH}|$. Let $N \ge 0$. Then there exists
\begin{enumerate}
\item\label{CI:equialg:first}
an algebraic stack $\cW = [\Spec A / H]$ of finite type over $k$;
\item a closed point $w \in |\cW|$;
\item\label{CI:equialg:third}
a morphism
$f \co \cW \to \cX$;
\item
a morphism
$\varphi \co (\hat{\cH},z) \to (\cW,w)$;
\item a $2$-isomorphism $\tau\co \eta\Rightarrow f\circ \varphi$; and
\item\label{CI:equialg:H-torsors-equal-over-N-truncation}
a $2$-isomorphism $\nu_N\co \alpha^{[N]}\Rightarrow \beta^{[N]}\circ
\varphi^{[N]}$ where $\alpha\co \hat{\cH}\to BH$ and $\beta\co \cW\to BH$
denote the structure morphisms.
\end{enumerate}
such that for all $n$, the induced morphism $\varphi^{[n]} \co \hat{\cH}^{[n]}_z \to \cW^{[n]}_w$ is an isomorphism.
In particular, $\varphi$ induces an isomorphism $\hat{\varphi} \co \hat{\cH} \to \hat{\cW}$ where $\hat{\cW}$ is the coherent completion of $\cW$ at $w$ (i.e., $\hat{\cW}= \cW \times_{W} \Spec \hat{\oh}_{W,w_0}$ where $W = \Spec A^H$ and $w_0 \in W$ is the image of $w$ under $\cW \to W$).
\end{corollary}
\begin{proof}
By Theorem \ref{T:good-finite-type}, the good moduli space
$\hat{\cH}\to \Spec C^H$ is of finite type.
If we apply Theorem~\ref{T:algebraization} with $S = \Spec k$, $T=\Spec C^H$,
$\cZ=\hat{\cH}$,
$\cX_1 = \cX$
and $\cX_2 = BH$, then we immediately
obtain~\itemref{CI:equialg:first}--\itemref{CI:equialg:third} together with
isomorphisms $\varphi_n \co \hat{\cH}_z^{[n]}\to \cW_w^{[n]}$, a compatible
system of $2$-isomorphisms $\{\tau_n \co \eta^{[n]}\Rightarrow f^{[n]}\circ
\varphi^{[n]}\}_{n\geq 0}$ for all $n$, and a $2$-isomorphism $\nu_N$ as in
\itemref{CI:equialg:H-torsors-equal-over-N-truncation}. Since $\hat{\cH}$ and
$\hat{\cW}$ are coherently complete (Theorem \ref{key-theorem}), the
isomorphisms $\varphi_n$ yield an isomorphism
$\hat{\varphi} \co \hat{\cH}\to \hat{\cW}$ and an induced morphism
$\varphi \co \hat{\cH}\to \cW$ by
Tannaka duality (Corollary \ref{C:tannakian}).
Likewise, the system $\{\tau_n\}$ induces a $2$-isomorphism $\tau\co \eta
\Rightarrow f\circ \varphi$ by Tannaka duality (full faithfulness in
Corollary \ref{C:tannakian}).
\end{proof}
\begin{remark}
If $\cX$ is merely a category fibered in groupoids over $k$ that is locally of
finite presentation (analogously to the situation in
\cite[Thm.~1.6]{artin-algebraization}), then
Corollary~\ref{C:equivariant-algebraization} and its proof remain valid except
that instead of the $2$-isomorphism $\tau$ we only have the system
$\{\tau_n\}$.
\end{remark}
\end{document} |
\begin{document}
\title{Curves of equiharmonic solutions, and problems at resonance}
\begin{abstract}
We consider the semilinear Dirichlet problem
\[
\Delta u+kg(u)=\mu _1 \varphi _1+\cdots +\mu _n \varphi _n+e(x) \; \; \mbox{for $x \in \Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$},
\]
where
$\varphi _k$ is the $k$-th eigenfunction of the Laplacian on $\Omega$ and $e(x) \perp \varphi _k$, $k=1, \ldots,n$. Write the solution in the form $u(x)= \Sigma _{i=1}^n \xi _i \varphi _i+U(x)$, with $ U \perp \varphi _k$, $k=1, \ldots,n$. Starting with $k=0$, when the problem is linear, we continue the solution in $k$ by keeping $\xi =(\xi _1, \ldots,\xi _n)$ fixed, but allowing for $\mu =(\mu _1, \ldots,\mu _n)$ to vary. Studying the map $\xi \rightarrow \mu$ provides us with the existence and multiplicity results for the above problem. We apply our results to problems at resonance, at both the principal and higher eigenvalues. Our approach is suitable for numerical calculations, which we implement, illustrating our results.
\end{abstract}
\begin{flushleft}
Key words: Curves of equiharmonic solutions, problems at resonance.
\end{flushleft}
\begin{flushleft}
AMS subject classification: 35J60.
\end{flushleft}
\section{Introduction}
\setcounter{equation}{0}
\setcounter{thm}{0}
\setcounter{lma}{0}
We study existence and multiplicity of solutions for a semilinear problem
\begin{eqnarray}
\label{0}
& \Delta u+kg(u)=f(x) \; \; \mbox{for $x \in \Omega$} \,, \\ \nonumber
& u=0 \; \; \mbox{on $\partial \Omega$}
\end{eqnarray}
on a smooth bounded domain $\Omega \subset R^m$. Here the functions $f(x) \in L^2(\Omega)$ and $g(u) \in C^1(R)$ are given, and $k$ is a parameter. We approach this problem by continuation in $k$. When $k=0$ the problem is linear. It has a unique solution, as can be seen by using a Fourier series of the form $u(x)=\Sigma _{j=1}^{\infty} u_j \varphi _j$, where
$\varphi _j$ is the $j$-th eigenfunction of the Dirichlet Laplacian on $\Omega$, with $\int_\Omega \varphi _j ^2 \, dx=1$, and $\lambda _j$ is the corresponding eigenvalue. We now continue the solution in $k$, looking for a solution pair $(k,u)$, or $u=u(x,k)$. At a generic point $(k,u)$ the implicit function theorem applies, allowing the continuation in $k$. These are the {\em regular} points, where the corresponding linearized problem has only the trivial solution. So until a {\em singular} point is encountered, we have a solution curve $u=u(x,k)$. At a singular point practically anything imaginable might happen. At some singular points the M.G. Crandall and P.H. Rabinowitz bifurcation theorem \cite{CR} applies, giving us a curve of solutions through a singular point. But even in this favorable situation there is a possibility that the solution curve will ``turn back'' in $k$.
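For instance, at $k=0$ the problem (\ref{0}) reads $\Delta u=f(x)$, and the Fourier series solution just mentioned can be written down explicitly (a routine computation, included here for convenience): expanding $f(x)=\Sigma _{j=1}^{\infty} f_j \varphi _j$ and using $\Delta \varphi _j=-\lambda _j \varphi _j$ gives

```latex
\[
\Delta u \;=\; -\sum_{j=1}^{\infty} \lambda _j u_j \varphi _j
\;=\; \sum_{j=1}^{\infty} f_j \varphi _j \,,
\qquad \mbox{so that} \quad
u_j \;=\; -\frac{f_j}{\lambda _j}\,, \quad f_j=\int _\Omega f \varphi _j \, dx \,.
\]
```

Since $\lambda _j>0$ for all $j$, this determines the solution uniquely.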
In \cite{K1} we presented a way to continue solutions forward in $k$, which can take us through any singular point. We describe it next. If a solution $u(x)$ is given by its Fourier series $u(x)=\Sigma _{j=1}^{\infty} \xi _j \varphi _j$, we call $U_n=(\xi _1,\xi _2,\ldots, \xi _n)$ the {\em $n$-signature} of the solution, or just the {\em signature} for short. We also represent $f(x)$ by its Fourier series, and rewrite the problem (\ref{0}) as
\begin{eqnarray}
\label{0.1}
& \Delta u+kg(u)=\mu^0 _1 \varphi _1+\cdots +\mu^0 _n \varphi _n+e(x) \; \; \mbox{for $x \in \Omega$}, \\ \nonumber
& u=0 \; \; \mbox{on $\partial \Omega$}
\end{eqnarray}
where $\mu^0 _j=\int _\Omega f \varphi _j \, dx$, and $e(x)$ is the projection of $f(x)$ onto the orthogonal complement of $\varphi _1, \ldots, \varphi_n$. Let us now constrain ourselves to hold the signature $U_n$ fixed (when continuing in $k$), and in return allow $\mu _1, \ldots, \mu_n$ to vary. That is, we look for $(u, \mu _1, \ldots, \mu_n)$ as a function of $k$, with $U_n$ fixed, solving
\begin{equation}
\label{0.1a}
\;\;\; \Delta u+kg(u)=\mu _1 \varphi _1+\cdots +\mu _n \varphi _n+e(x) \; \; \mbox{for $x \in \Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$} \,,
\end{equation}
\[
\int _\Omega u \varphi _i \, dx=\xi _i, \; \; i=1, \dots, n \,.
\]
It turned out that we can continue forward in $k$ this way, so long as
\begin{equation}
\label{0.2}
k \max _{u \in R} g'(u)<\lambda _{n+1} \,.
\end{equation}
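For orientation, here is the simplest setting of this condition (an illustrative example, not from the text): on the interval $\Omega =(0,\pi) \subset R^1$ one has

```latex
\[
\lambda _j=j^2\,, \qquad \varphi _j(x)=\sqrt{\frac{2}{\pi}}\, \sin jx \,,
\]
```

so that with $n=1$ the condition (\ref{0.2}) reads $k \max _{u \in R} g'(u)<\lambda _2=4$.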
In the present paper we give a much simplified proof of this result, and generalize it to the case of $(i,n)$ signatures (defined below). We then present two new applications.
So suppose the condition (\ref{0.2}) holds, and we wish to solve the problem (\ref{0.1}) at some $k=k_0$. We travel in $k$, from $k=0$ to $k=k_0$, on a curve of fixed signature $U_n=(\xi _1,\xi _2,\ldots, \xi _n)$, obtaining a solution $(u, \mu _1, \ldots, \mu_n)$ of (\ref{0.1a}). The right-hand side of (\ref{0.1a}) has the first $n$ harmonics different (in general) from the ones we want in (\ref{0.1}). We now vary $U_n$. The question is: can we choose $U_n$ to obtain the desired $\mu _1=\mu^0 _1, \ldots, \mu_n=\mu^0 _n$, and if so, in how many ways? This corresponds to the existence and multiplicity questions for the original problem (\ref{0}).
In \cite{K1} we obtained this way a unified approach
to the well known results of E.M. Landesman and A.C. Lazer \cite{L}, A. Ambrosetti and G. Prodi \cite{AP}, M. S. Berger and E. Podolak \cite{BP}, H. Amann and P. Hess \cite{AH} and D.G. de Figueiredo and W.-M. Ni \cite{FN}. We also provided some new results on ``jumping nonlinearities", and on symmetry breaking.
Our main new application in the present paper is to unbounded perturbations at resonance, which we describe next. For the problem
\[
\Delta u +\lambda _1 u+g(u)=e(x) \; \; \mbox{on $\Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$} \,,
\]
with a {\em bounded} $g(u)$, satisfying $ug(u) \geq 0$ for all $u \in R$, and $e(x) \in L^{\infty} (\Omega)$ satisfying $\int _\Omega e(x) \varphi _1(x) \, dx=0$, D.G. de Figueiredo and W.-M. Ni \cite{FN} have proved the existence of solutions. R. Iannacci, M.N. Nkashama and J.R. Ward \cite{I} generalized this result
to unbounded $g(u)$ satisfying $g'(u) \leq \gamma <\lambda _2-\lambda _1$ (they can also treat the case $\gamma =\lambda _2-\lambda _1$ under an additional condition).
We consider a more general problem
\[
\Delta u +\lambda _1 u+g(u)=\mu _1 \varphi_1 +e(x) \; \; \mbox{on $\Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$} \,,
\]
with $g(u)$ and $e(x)$ satisfying the same conditions. Writing $u=\xi _1 \varphi _1+U$, we show that there exists a continuous curve of solutions $(u,\mu _1)(\xi _1)$, and that all solutions lie on this curve. Moreover, $\mu _1(\xi _1)> 0$ (respectively $<0$) for $\xi _1>0$ (respectively $<0$) and large. By continuity, $\mu _1(\xi ^0_1)=0$ at some $ \xi ^0_1$. We see that the existence result of R. Iannacci et al.\ \cite{I} corresponds to just one point on this solution curve.
Our second application is to resonance at higher eigenvalues, where we operate with multiple harmonics. We obtain an extension of D.G. de Figueiredo and W.-M. Ni's \cite{FN} result to any simple $\lambda _k$.
Our approach in the present paper is well suited for numerical computations. We describe our implementation, and use it to give numerical examples illustrating our results.
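To make the scheme concrete, the following is a minimal numerical sketch of one continuation step (our illustration, assuming the model setting $\Omega =(0,\pi)$ and $n=1$; it is not the authors' implementation, which is described later in the paper). It discretizes (\ref{0.1a}) by finite differences and solves for the pair $(u,\mu _1)$, with the signature $\xi _1$ held fixed, by Newton's method on the bordered system.

```python
import numpy as np

def solve_fixed_signature(k, xi1, g, gp, e_func, m=200, tol=1e-10):
    """Illustrative sketch (not the paper's implementation).
    Solve u'' + k g(u) = mu1*phi1 + e(x) on (0, pi), u(0) = u(pi) = 0,
    together with the signature constraint <u, phi1> = xi1, for the pair
    (u, mu1), by Newton's method on a uniform finite-difference grid."""
    h = np.pi / (m + 1)
    x = np.linspace(h, np.pi - h, m)            # interior grid points
    phi1 = np.sqrt(2.0 / np.pi) * np.sin(x)     # first eigenfunction, ||phi1||_{L^2} = 1
    e = e_func(x)
    # tridiagonal second-difference approximation of u'' with Dirichlet b.c.
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h**2
    u, mu1 = xi1 * phi1, 0.0                    # initial guess
    for _ in range(50):
        F = A @ u + k * g(u) - mu1 * phi1 - e   # PDE residual at the grid points
        c = h * np.dot(u, phi1) - xi1           # signature constraint residual
        if max(np.max(np.abs(F)), abs(c)) < tol:
            break
        # bordered Jacobian in the unknowns (u, mu1)
        J = np.zeros((m + 1, m + 1))
        J[:m, :m] = A + k * np.diag(gp(u))
        J[:m, m] = -phi1
        J[m, :m] = h * phi1
        step = np.linalg.solve(J, -np.concatenate([F, [c]]))
        u, mu1 = u + step[:m], mu1 + step[m]
    return x, u, mu1
```

With $g \equiv 0$ and $e \equiv 0$ the computed pair is $u=\xi _1 \varphi _1$ and $\mu _1 \approx -\lambda _1 \xi _1$, as expected from the linear case; the role of the condition (\ref{0.2}) (here $k \max _{u \in R} g'(u)<\lambda _2=4$) is to keep the linearized bordered system nonsingular.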
\section{Preliminary results}
\setcounter{equation}{0}
\setcounter{thm}{0}
\setcounter{lma}{0}
Recall that on a smooth bounded domain $\Omega \subset R^m$ the eigenvalue problem
\[
\Delta u +\lambda u=0 \; \; \mbox{on $\Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$}
\]
has an infinite sequence of eigenvalues $0<\lambda _1<\lambda _2 \leq \lambda _3\leq \ldots \rightarrow \infty$, where each eigenvalue is repeated according to its multiplicity, and we denote by $\varphi _k$ the corresponding eigenfunctions. These eigenfunctions $\varphi _k$ form an orthogonal basis of $L^2(\Omega)$, i.e., any $f(x) \in L^2(\Omega)$ can be written as $f(x)=\Sigma _{k=1}^{\infty} a_k \varphi _k$, with the series convergent in $L^2(\Omega)$; see e.g.\ L. Evans \cite{E}. We normalize $||\varphi _k||_{L^2(\Omega)}=1$ for all $k$.
\begin{lma}\label{lma:1}
Assume that $u(x) \in H^1_0(\Omega)$, and $u(x)=\Sigma _{k=n+1}^{\infty} \xi _k \varphi _k$. Then
\[
\int_\Omega |\nabla u|^2 \, dx \geq \lambda _{n+1} \int _\Omega u^2 \, dx.
\]
\end{lma}
\noindent {\bf Proof:} $\; \;$
Since $u(x)$ is orthogonal to $\varphi _1, \, \ldots, \varphi _n$, the inequality follows from the variational characterization of $\lambda _{n+1}$.
$\diamondsuit$
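In terms of the Fourier coefficients, the inequality can also be verified directly (with $u \in H^1_0(\Omega)$, so that the left-hand side is finite): using $\int _\Omega \nabla \varphi _j \cdot \nabla \varphi _k \, dx=\lambda _k \delta _{jk}$, one has

```latex
\[
\int_\Omega |\nabla u|^2 \, dx
\;=\; \sum_{k=n+1}^{\infty} \lambda _k \, \xi _k^2
\;\geq\; \lambda _{n+1} \sum_{k=n+1}^{\infty} \xi _k^2
\;=\; \lambda _{n+1} \int _\Omega u^2 \, dx \,.
\]
```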
In the following linear problem the function $a(x)$ is given, while $\mu _1, \ldots, \mu _n$, and $w(x)$ are unknown.
\begin{lma}\label{lma:3}
Consider the problem
\begin{eqnarray}
\label{9}
& \Delta w+a(x)w=\mu_1 \varphi _1+ \cdots +\mu _n \varphi _n, \; \; \, \mbox{for $x \in \Omega$}, \\ \nonumber
& w=0 \; \; \mbox{on $\partial \Omega$}, \\ \nonumber
& \int _\Omega w \varphi _1 \, dx= \ldots = \int _\Omega w \varphi _n \, dx=0.
\end{eqnarray}
Assume that
\begin{equation}
\label{10}
a(x) < \lambda _{n+1}, \, \; \; \mbox{for all $x \in \Omega$}.
\end{equation}
Then the only solution of (\ref{9}) is $\mu _1 = \ldots =\mu _n=0$, and $w(x) \equiv 0$.
\end{lma}
\noindent {\bf Proof:} $\; \;$
Multiply the equation in (\ref{9}) by $w(x)$, a solution of the problem (\ref{9}), and integrate. Using Lemma \ref{lma:1} and the assumption (\ref{10}), we have
\[
\lambda _{n+1} \int _\Omega w^2 \, dx \leq \int_\Omega |\nabla w|^2 \, dx =\int_\Omega a(x) w^2 \, dx < \lambda _{n+1} \int _\Omega w^2 \, dx.
\]
It follows that $w(x) \equiv 0$, and then
\[
0=\mu_1 \varphi _1+ \cdots +\mu _n \varphi _n \; \; \mbox{for $x \in \Omega$},
\]
which implies that $\mu _1 = \ldots =\mu _n=0$.
$\diamondsuit$
\begin{cor}\label{cor:1}
If one considers the problem (\ref{9}) with $\mu _1 = \ldots = \mu _n =0$, then $w(x) \equiv 0$ is the only solution of that problem.
\end{cor}
\begin{cor}\label{cor:2}
With $f(x) \in L^2(\Omega)$, consider the problem
\begin{eqnarray} \nonumber
& \Delta w+a(x)w=f(x) \; \;\; \; \mbox{for $x \in \Omega$}\,, \\ \nonumber
& w=0 \; \;\; \; \mbox{on $\partial \Omega$}, \\ \nonumber
& \int _\Omega w \varphi _1 \, dx= \ldots = \int _\Omega w \varphi _n \, dx=0.
\end{eqnarray}
Then there is a constant $c$, so that the following a priori estimate holds
\[
||w||_{H^2(\Omega)} \leq c ||f||_{L^2(\Omega)} \,.
\]
\end{cor}
\noindent {\bf Proof:} $\; \;$
An elliptic estimate gives
\[
||w||_{H^2(\Omega)} \leq c \left(||w||_{L^2(\Omega)}+||f||_{L^2(\Omega)} \right) \,.
\]
Since the corresponding homogeneous problem has only the trivial solution, the extra term on the right is removed in a standard way.
$\diamondsuit$
We shall also need a variation of the above lemma.
\begin{lma}\label{lma:4}
Consider the problem ($2 \leq i <n$)
\begin{eqnarray}
\label{11}
& \Delta w+a(x)w=\mu_i \varphi _i+\mu_{i+1} \varphi _{i+1}+ \cdots +\mu _n \varphi _n \; \; \mbox{for $x \in \Omega$}, \\ \nonumber
& w=0 \; \; \mbox{on $\partial \Omega$}, \\ \nonumber
& \int _\Omega w \varphi _i \, dx= \int _\Omega w \varphi _{i+1} \, dx=\ldots = \int _\Omega w \varphi _n \, dx=0.
\end{eqnarray}
Assume that
\begin{equation}
\label{12}
\lambda _{i-1} <a(x) < \lambda _{n+1}, \, \; \; \mbox{for all $x \in \Omega$} \,.
\end{equation}
Then the only solution of (\ref{11}) is $\mu _i = \ldots =\mu _n=0$, and $w(x) \equiv 0$.
\end{lma}
\noindent {\bf Proof:} $\; \;$
Since the harmonics from $i$-th to $n$-th are missing in the solution, we may represent $\displaystyle w=w_1+w_2$, with $\displaystyle w_1 \in Span \{ \varphi_1, \ldots, \varphi _{i-1} \}$, and $\displaystyle w_2 \in Span \{ \varphi_{n+1}, \varphi _{n+2}, \ldots \}$. Multiply the equation (\ref{11}) by $w_1$, and integrate
\[
-\int _\Omega |\nabla w _1|^2 \, dx+\int _\Omega a(x) w _1^2 \, dx+\int _\Omega a(x) w _1 w_2 \, dx=0 \,.
\]
Similarly
\[
-\int _\Omega |\nabla w _2|^2 \, dx+\int _\Omega a(x) w _2^2 \, dx+\int _\Omega a(x) w _1 w_2 \, dx=0 \,.
\]
Subtracting
\begin{equation}
\label{14}
\int _\Omega |\nabla w _2|^2 \, dx-\int _\Omega |\nabla w _1|^2 \, dx=\int _\Omega a(x) w _2^2 \, dx-\int _\Omega a(x) w _1^2 \, dx \,.
\end{equation}
By the variational characterization of eigenvalues, the quantity on the left in (\ref{14}) is greater or equal to
\[
\lambda _{n+1} \int _\Omega w _2^2 \, dx-\lambda _{i-1} \int _\Omega w _1^2 \, dx \,,
\]
while the quantity on the right is strictly less than the above number, by our condition (\ref{12}). We have a contradiction, unless $w_1=w_2 \equiv 0$. Then $\mu _i = \ldots =\mu _n=0$.
$\diamondsuit$
\begin{cor}\label{cor:3}
If one considers the problem (\ref{11}) with $\mu _i = \ldots = \mu _n =0$, then $w(x) \equiv 0$ is the only solution of that problem. Consequently, for the problem
\begin{eqnarray} \nonumber
& \Delta w+a(x)w=f(x) \; \;\; \; \mbox{for $x \in \Omega$}\,, \\ \nonumber
& w=0 \; \;\; \; \mbox{on $\partial \Omega$}, \\ \nonumber
& \int _\Omega w \varphi _i \, dx= \ldots = \int _\Omega w \varphi _n \, dx=0.
\end{eqnarray}
there is a constant $c$, so that the following a priori estimate holds
\[
||w||_{H^2(\Omega)} \leq c ||f||_{L^2(\Omega)} \,.
\]
\end{cor}
\section{Continuation of solutions}
\setcounter{equation}{0}
\setcounter{thm}{0}
\setcounter{lma}{0}
Any $f(x) \in L^2(\Omega)$ can be decomposed as $f(x)=\mu_1 \varphi _1+ \ldots +\mu _n \varphi _n+e(x)$, with $e(x)=\Sigma _{j=n+1}^{\infty} e _j \varphi _j$ orthogonal to $\varphi _1, \ldots, \varphi _n$. We consider a boundary value problem
\begin{eqnarray}
\label{2}
& \Delta u+kg(u)=\mu_1 \varphi _1+ \ldots +\mu _n \varphi _n+e(x) \; \; \mbox{for $x \in \Omega$}, \\ \nonumber
& u=0 \; \; \mbox{on $\partial \Omega$}.
\end{eqnarray}
Here $k \geq 0$ is a constant, and $g(u) \in C^1(R) $ is assumed to satisfy
\begin{equation}
\label{4}
g(u)=
\gamma u+b(u) \,,
\end{equation}
with a real constant $\gamma $, and $b(u)$ bounded for all $u \in R$, and also
\begin{equation}
\label{3}
g'(u)= \gamma +b'(u)\leq M, \; \; \; \; \mbox{for all $u \in R \,,$ }
\end{equation}
where $M>0$ is a constant.
If $u(x) \in H^2(\Omega) \cap H^1_0(\Omega)$ is a solution of (\ref{2}), we decompose it as
\begin{equation}
\label{5}
u(x)= \Sigma _{i=1}^n \xi _i \varphi _i+U (x),
\end{equation}
where $U (x)$ is orthogonal to $\varphi _1, \ldots, \varphi _n$ in $L^2(\Omega)$.
For the problem (\ref{2}) we pose an inverse problem: keeping $e(x)$ fixed, find $\mu=\left( \mu _1, \ldots, \mu _n \right)$ so that the problem (\ref{2}) has a solution of any prescribed $n$-signature $\xi=\left( \xi _1, \ldots, \xi _n \right)$.
\begin{thm}\label{thm:1}
For the problem (\ref{2}) assume that the conditions (\ref{4}), (\ref{3}) hold, and
\[
kM<\lambda _{n+1} \,.
\]
Then given any $\xi=\left( \xi _1, \ldots, \xi _n \right)$, one can find a unique $\mu=\left( \mu _1, \ldots, \mu _n \right)$ for which the problem (\ref{2}) has a solution $u(x) \in H^2(\Omega) \cap H^1_0(\Omega)$ of $n$-signature $\xi$. This solution is unique. Moreover, we have a continuous curve of solutions $(u(k),\mu(k))$, such that $u(k)$ has a fixed $n$-signature $\xi$, for all $0 \leq k \leq 1$.
\end{thm}
\noindent {\bf Proof:} $\; \;$
Let $e(x)=\Sigma _{j=n+1}^{\infty} e_j \varphi _j$. When $k=0$, the unique solution of (\ref{2}) of signature $\xi$ is $u(x)=\Sigma _{j=1}^{n} \xi _j \varphi _j-\Sigma _{j=n+1}^{\infty} \frac{e_j}{\lambda _j} \varphi _j$, corresponding to $\mu _j =-\lambda _j \xi _j$, $j=1, \ldots, n$. We shall use the implicit function theorem to continue this solution in $k$. With $u(x)= \Sigma _{i=1}^n \xi _i \varphi _i+U (x)$, we multiply the equation (\ref{2}) by $\varphi _i$, and integrate
\begin{equation}
\label{16}
\mu _i=-\lambda _i \xi _i+k \int _\Omega g \left( \Sigma _{j=1}^n \xi _j \varphi _j+U \right) \varphi _i \, dx, \; \; i=1, \ldots, n \,.
\end{equation}
Using these expressions in (\ref{2}), we have
\begin{equation}
\label{17}
\; \; \; \; \; \; \Delta U+kg\left( \sum _{j=1}^n \xi _j \varphi _j+U \right)-k \Sigma _{i=1}^n \int _\Omega g \left( \sum _{j=1}^n \xi _j \varphi _j+U \right) \varphi _i \, dx \, \varphi _i=e(x),
\end{equation}
\[
U=0 \; \; \mbox{on $\partial \Omega$} \,.
\]
The equations (\ref{16}) and (\ref{17}) constitute the classical Lyapunov-Schmidt decomposition of our problem (\ref{2}).
Define $H^2_{{\bf 0}}$ to be the subspace of $H^2(\Omega) \cap H^1_0(\Omega)$, consisting of functions with zero $n$-signature:
\[
H^2_{{\bf 0}} = \left\{ u \in H^2(\Omega) \cap H^1_0(\Omega) \; | \; \int _\Omega u \varphi _i \, dx =0, \; i=1, \ldots, n \right\}.
\]
We recast the problem (\ref{17}) in the operator form as
\[
F(U, k) =e(x),
\]
where $ F(U, k) : H^2_{{\bf 0}} \times R \rightarrow L^2(\Omega)$ is given by the left hand side of (\ref{17}). Compute the Frechet derivative
\[
F_{U}(U, k)w=\Delta w+kg' \left( \Sigma _{i=1}^n \xi _i \varphi _i+U \right)w-\mu^*_1 \varphi _1-\ldots -\mu^* _n \varphi _n \,,
\]
where $\mu^*_i=k \int _\Omega g' \left( \Sigma _{j=1}^n \xi _j \varphi _j+U \right) w \varphi _i \, dx$. By Lemma \ref{lma:3} the map $F_{U}(U, k)$ is injective. Since this map is Fredholm of index zero, it is also surjective. The implicit function theorem applies, giving us locally a curve of solutions $U=U(k)$. Then we compute $\mu=\mu (k)$ from (\ref{16}).
To show that this curve can be continued for all $k$, we only need to show that this curve $(u(k),\mu (k))$ cannot go to infinity at some $k$, i.e., we need an a priori estimate. Since the $n$-signature of the solution is fixed, we only need to estimate $U$. We claim that there is a constant $c>0$, so that
\begin{equation}
\label{18}
||U||_{H^2(\Omega)} \leq c \,.
\end{equation}
We rewrite the equation in (\ref{17}) as
\[
\Delta U+k\gamma U= -kb\left( \sum _{j=1}^n \xi _j \varphi _j+U \right)+k \Sigma _{i=1}^n \int _\Omega b \left( \sum _{j=1}^n \xi _j \varphi _j+U \right) \varphi _i \, dx \, \varphi _i+e(x) \,.
\]
By the Corollary \ref{cor:2} to Lemma \ref{lma:3}, the estimate (\ref{18}) follows, since $b(u)$ is bounded.
Finally, if the problem (\ref{2}) had a different solution $(\bar u(k),\bar \mu (k))$ with the same signature $\xi$, we would continue it back in $k$, obtaining at $k=0$ a different solution of the linear problem of signature $\xi$ (since solution curves do not intersect by the implicit function theorem), which is impossible.
$\diamondsuit$
The Theorem \ref{thm:1} implies that the value of $\xi =(\xi_1, \ldots, \xi_n)$ uniquely identifies the solution pair $(\mu, u(x))$, where $\mu =(\mu_1, \ldots, \mu _n)$. Hence, the solution set of (\ref{2}) can be faithfully described by the map: $\xi \in R^n \rightarrow \mu \in R^n$, which we call the {\em solution manifold}. In case $n=1$, we have the {\em solution curve} $\mu=\mu(\xi)$, which faithfully depicts the solution set. We show next that the solution manifold is connected.
\begin{thm}\label{thm:2}
In the conditions of Theorem \ref{thm:1}, the solution $(u,\mu_1,\dots,\mu _n)$ of (\ref{2}) is a continuous function of $\xi=(\xi_ 1, \dots ,\xi _n)$. Moreover, we can continue solutions of any signature $\bar \xi$ to solution of arbitrary signature $\hat \xi $ by following any continuous curve in $R^n$ joining $\bar \xi$ and $\hat \xi$.
\end{thm}
\noindent {\bf Proof:} $\; \;$
We use the implicit function theorem to show that any solution of (\ref{2}) can be continued in $\xi$. The proof is essentially the same as for continuation in $k$ above. After performing the same Lyapunov-Schmidt decomposition,
we recast the problem (\ref{17}) in the operator form
\[
F(U,\xi)=e(x) \,,
\]
where $F \, : \, H^2_{{\bf 0}} \times R^n \rightarrow L^2$ is defined by the left hand side of (\ref{17}). The Frechet derivative $F_{U}(U, \xi)w$ is the same as before, and by the implicit function theorem we have locally $U=U(\xi)$. Then we compute $\mu=\mu (\xi)$ from (\ref{16}). We use the same a priori bound (\ref{18}) to continue the curve for all $\xi \in R^n$. (The bound (\ref{18}) is uniform in $\xi$.)
$\diamondsuit$
Given a Fourier series $u(x)=\Sigma _{j=1}^{\infty} \xi _j \varphi _j$, we call the vector $(\xi _i, \ldots, \xi _n)$ the $(i,n)$-{\em signature} of $u(x)$.
Using Lemma \ref{lma:4} instead of Lemma \ref{lma:3}, we have the following variation of the above result.
\begin{thm}\label{thm:3}
For the problem (\ref{2}) assume that the conditions (\ref{4}), (\ref{3}) hold, and
\[
\lambda _{i-1} <k\gamma + kg'(u)< \lambda _{n+1}, \, \; \; \mbox{for all $u \in R$} \,.
\]
Then given any $\xi=\left( \xi _i, \ldots, \xi _n \right)$, one can find a unique $\mu=\left( \mu _i, \ldots, \mu _n \right)$ for which the problem
\begin{eqnarray}
\label{20}
& \Delta u+kg(u)=\mu_i \varphi _i+ \cdots +\mu _n \varphi _n+e(x), \, \; \; \mbox{for $x \in \Omega$}, \\ \nonumber
& u=0 \; \; \mbox{on $\partial \Omega$}
\end{eqnarray}
has a solution $u(x) \in H^2(\Omega) \cap H^1_0(\Omega)$ of the $(i,n)$-signature $\xi$. This solution is unique. Moreover, we have a continuous curve of solutions $(u(k),\mu(k))$, such that $u(k)$ has a fixed $(i,n)$-signature $\xi$, for all $0 \leq k \leq 1$. In addition, we can continue solutions of any $(i,n)$-signature $\bar \xi$ to solution of arbitrary $(i,n)$-signature $\hat \xi $ by following any continuous curve in $R^{n-i+1}$ joining $\bar \xi$ and $\hat \xi$.
\end{thm}
\section{Unbounded perturbations at resonance}
\setcounter{equation}{0}
\setcounter{thm}{0}
\setcounter{lma}{0}
We use an idea from \cite{I} to get the following a priori estimate.
\begin{lma}\label{lma:6}
Let $u(x)$ be a solution of the problem
\begin{equation}
\label{22}
\Delta u +\lambda _1 u+a(x)u=\mu _1 \varphi _1+e(x) \; \; \mbox{on $\Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$},
\end{equation}
with $e(x) \in \varphi _1 ^\perp$, and $a(x) \in C(\Omega)$. Assume there is a constant $\gamma$, so that
\[
0 \leq a(x)\leq \gamma <\lambda_2 -\lambda _1, \; \;\; \; \mbox{for all $x \in \Omega$} \,.
\]
Write the solution of (\ref{22}) in the form $u(x)=\xi _1 \varphi _1+U$, with $U \in \varphi _1 ^\perp$, and assume that
\begin{equation}
\label{22a}
\xi _1 \mu _1 \leq 0 \,.
\end{equation}
Then there exists a constant $c_0$, so that
\begin{equation}
\label{22.2}
\int_\Omega |\nabla U|^2 \, dx \leq c_0 \,, \; \;\; \; \mbox{uniformly in $\xi _1 $ satisfying (\ref{22a})}\,.
\end{equation}
\end{lma}
\noindent {\bf Proof:} $\; \;$
We have
\begin{equation}
\label{22.1}
\; \; \; \; \Delta U +\lambda _1 U+a(x)\left(\xi _1 \varphi _1+U \right)=\mu _1 \varphi _1+e(x) \; \; \mbox{on $\Omega$}, \; \; U=0 \; \; \mbox{on $\partial \Omega$} \,.
\end{equation}
Multiply this by $\xi _1 \varphi _1-U $, and integrate
\[
\int_\Omega \left(|\nabla U|^2- \lambda _1 U^2 \right)\, dx+\int_\Omega a(x) \left(\xi _1^2 \varphi _1^2-U^2 \right) \, dx-\xi _1 \mu _1=-\int_\Omega eU \, dx \,.
\]
Dropping the two non-negative terms $\int_\Omega a(x) \xi _1^2 \varphi _1^2 \, dx$ and $-\xi _1 \mu _1$ on the left (the latter is non-negative by the assumption (\ref{22a})), we have
\[
\left(\lambda _2-\lambda _1-\gamma \right)\int_\Omega U^2 \, dx \leq \int_\Omega \left(|\nabla U|^2- \lambda _1 U^2 \right)\, dx-\int_\Omega a(x) U^2 \, dx \leq -\int_\Omega eU \, dx \,.
\]
From this we get an estimate on $\int_\Omega U^2 \, dx$, and then on $\int_\Omega |\nabla U|^2 \, dx$.
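Spelling out this last step: by the Cauchy-Schwarz inequality, the preceding chain gives
\[
\left(\lambda _2-\lambda _1-\gamma \right) ||U||^2_{L^2(\Omega)} \leq ||e||_{L^2(\Omega)} \, ||U||_{L^2(\Omega)} \,,
\]
so that $||U||_{L^2(\Omega)} \leq ||e||_{L^2(\Omega)}/\left(\lambda _2-\lambda _1-\gamma \right)$, uniformly in $\xi _1$, and then the same chain bounds $\int_\Omega |\nabla U|^2 \, dx$ as well.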
$\diamondsuit$
\begin{cor}\label{cor:4}
If, in addition, $\mu _1=0$ and $e(x) \equiv 0$, then $U \equiv 0$.
\end{cor}
We now consider the problem
\begin{equation}
\label{23}
\Delta u +\lambda _1 u+g(u)=\mu _1 \varphi _1+e(x) \; \; \mbox{on $\Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$} \,,
\end{equation}
with $e(x) \in \varphi _1 ^\perp$. We wish to find a solution pair $(u, \mu _1)$. We have the following extension of the result of R. Iannacci et al \cite{I}.
\begin{thm}\label{thm4}
Assume that $g(u) \in C^1(R)$ satisfies
\begin{equation}
\label{24}
u g(u) >0 \; \;\; \; \mbox{for all $u \in R$} \,,
\end{equation}
\begin{equation}
\label{25}
g'(u) \leq \gamma< \lambda _2-\lambda _1 \; \;\; \; \mbox{for all $u \in R$} \,.
\end{equation}
Then there is a continuous curve of solutions of (\ref{23}): $(u(\xi _1),\mu _1(\xi _1))$, $u \in H^2(\Omega) \cap H^1_0(\Omega)$, with $-\infty<\xi _1<\infty$, and $\int _\Omega u(\xi _1) \varphi _1 \, dx=\xi _1$. This curve exhausts the solution set of (\ref{23}). The continuous function $\mu _1(\xi _1)$ is positive for $\xi _1 >0$ and large, and $ \mu _1(\xi _1)<0$ for $\xi _1 <0$ and $|\xi _1|$ large. In particular, $\mu _1(\xi^0 _1)=0$ at some $\xi^0 _1$, i.e., we have a solution of
\[
\Delta u +\lambda _1 u+g(u)=e(x) \; \; \mbox{on $\Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$} \,.
\]
\end{thm}
\noindent {\bf Proof:} $\; \;$
By the Theorem \ref{thm:1} there exists a curve of solutions of (\ref{23}) $(u(\xi _1),\mu _1(\xi _1))$, which
exhausts the solution set of (\ref{23}). The condition (\ref{24}) implies that $g(0)=0$, and then integrating (\ref{25}), we conclude that
\begin{equation}
\label{27}
0 \leq \frac{g(u)}{u} \leq \gamma< \lambda _2-\lambda _1, \; \;\; \; \mbox{for all $u \in R$} \,.
\end{equation}
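This integration step can be made explicit: since (\ref{24}) forces $g(0)=0$, for every $u \neq 0$ we have
\[
\frac{g(u)}{u}=\frac{1}{u}\int_0^u g'(t) \, dt \leq \gamma \,,
\]
by (\ref{25}), while the positivity in (\ref{27}) is just (\ref{24}).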
Writing $u(x)=\xi _1 \varphi _1+U$, with $U \in \varphi _1 ^\perp$, we see that $U$ satisfies
\[
\Delta U +\lambda _1 U+g(\xi _1 \varphi _1+U)=\mu _1 \varphi _1+e(x) \; \; \mbox{on $\Omega$}, \; \; U=0 \; \; \mbox{on $\partial \Omega$} \,.
\]
We rewrite this equation in the form (\ref{22}), by letting $a(x)=\frac{ g(\xi _1 \varphi _1+U)}{\xi _1 \varphi _1+U}$. By (\ref{27}), the Lemma \ref{lma:6} applies, giving us the estimate (\ref{22.2}).
We claim next that $|\mu _1(\xi _1)|$ is bounded uniformly in $\xi _1$, provided that $\xi _1 \mu _1 \leq 0$. Indeed, let us assume first that $\xi _1 \geq 0$ and $\mu _1 \leq 0$. Then
\[
\mu_1=\int_\Omega g(u) \varphi _1 \, dx = \int_\Omega \frac{g(u)}{u} \xi _1 \varphi _1^2 \, dx+\int_\Omega \frac{g(u)}{u} U \varphi _1 \, dx \geq \int_\Omega \frac{g(u)}{u} U \varphi _1 \, dx\,,
\]
\[
|\mu _1|=-\mu _1 \leq -\int_\Omega \frac{g(u)}{u} U \varphi _1 \, dx \leq \gamma \int_\Omega | U \varphi _1 | \, dx\leq c_1 \,,
\]
for some $c_1>0$, in view of (\ref{27}) and the estimate (\ref{22.2}). The case when $\xi _1 \leq 0$ and $\mu _1 \geq 0$ is similar.
We now rewrite (\ref{23}) in the form
\begin{equation}
\label{28}
\Delta u+a(x)u=f(x) \; \; \mbox{on $\Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$}\,
\end{equation}
with $a(x)=\lambda _1 +\frac{g(u)}{u}$, and $f(x)=\mu _1 \varphi _1+e(x)$. By above, we have a uniform in $\xi _1$ bound on $||f||_{L^2(\Omega)}$, and by the Corollary \ref{cor:4} we have uniqueness for (\ref{28}). It follows that
\[
||u||_{H^2(\Omega)} \leq c||f||_{L^2(\Omega)} \leq c_2 \,,
\]
for some $c_2>0$.
Assume, contrary to what we wish to prove, that there is a sequence $\{\xi _1^n \} \rightarrow \infty$, such that $\mu _1 (\xi _1^n) \leq 0$. We have
\[
u=\xi^n _1 \varphi _1+U \,,
\]
with both $u$ and $U$ bounded in $L^2(\Omega)$, uniformly in $\xi _1^n$,
which results in a contradiction for $n$ large. We prove similarly that $ \mu _1(\xi _1)<0$ for $\xi _1 <0$ and $|\xi _1|$ large.
$\diamondsuit$
\noindent
{\bf Example} We have solved numerically the problem
\begin{eqnarray} \nonumber
& u''+u+0.2 \, \frac{u^3}{u^2+3u+3}+\sin \frac12 u=\mu \sin x+ 5 \left(x-\pi/2 \right), \; \; 0<x<\pi, \\ \nonumber
& u(0)=u(\pi)=0 \,. \nonumber
\end{eqnarray}
The Theorem \ref{thm4} applies. Write the solution as $u(x)=\xi \sin x +U(x)$, with $\int_0^{\pi} U(x) \sin x \, dx=0$.
Then the solution curve $\mu=\mu (\xi)$ is given in Figure $1$. The picture suggests that the problem has at least one solution for all $\mu$.
\begin{figure}
\caption{An example for the Theorem \ref{thm4}}
\end{figure}
We have the following extension of the results of D.G. de Figueiredo and W.-M. Ni \cite{FN} and R. Iannacci et al \cite{I}, which does not require that $\mu =0$.
\begin{thm}\label{thm:8}
In addition to the conditions of the Theorem \ref{thm4}, assume that for some constants $c_0>0$ and $p> \frac32$
\begin{equation}
\label{28.1}
ug(u) > c_0 |u|^p, \; \; \mbox{for all $u >0$ ($u <0$)} \,.
\end{equation}
Then for the problem (\ref{23}) we have $\lim _{\xi _1 \rightarrow \infty} \mu (\xi _1)=\infty$ ($\, \lim _{\xi _1 \rightarrow -\infty} \mu (\xi _1)=-\infty$).
\end{thm}
\noindent {\bf Proof:} $\; \;$
Assume that (\ref{28.1}) holds for $u >0$. By the Theorem \ref{thm4}, $ \mu (\xi _1)>0$ for $\xi _1$ large. Assume, on the contrary, that $\mu (\xi _1)$ is bounded along some sequence of $\xi _1$'s, which tends to $\infty$. Writing $u=\xi _1 \varphi _1+U$, we conclude from the line following (\ref{22.1}) that
\begin{equation}
\label{28.2}
\int_\Omega U^2 \, dx \leq c_1 \xi _1+c_2, \; \; \mbox{for some constants $c_1>0$ and $c_2>0$} \,.
\end{equation}
We have
\[
\mu_1=\int_\Omega g(\xi _1 \varphi _1+U) \varphi _1 \, dx =\int_\Omega \left( g(\xi _1 \varphi _1+U)-g(\xi _1 \varphi _1) \right) \varphi _1 \, dx+\int_\Omega g(\xi _1 \varphi _1) \varphi _1 \, dx \,.
\]
Using the mean value theorem, the estimate (\ref{28.2}), and the condition (\ref{28.1}), we estimate
\[
\mu_1 >c_3 {\xi _1}^{p-1}-c_4 {\xi _1}^{1/2}-c_5 \,,
\]
with some positive constants $c_3$, $c_4$ and $c_5$. It follows that $ \mu (\xi _1)$ gets large along our sequence, a contradiction.
$\diamondsuit$
Bounded perturbations at resonance are much easier to handle. For example, we have the following result.
\begin{thm}\label{thm:5}
Assume that $g(u) \in C^1(R)$ is a bounded function, which satisfies the condition (\ref{24}), and in addition,
\[
\lim _{u \rightarrow \pm \infty} g(u)=0 \,.
\]
There is a continuous curve of solutions of (\ref{23}): $(u(\xi _1),\mu _1(\xi _1))$, $u \in H^2(\Omega) \cap H^1_0(\Omega)$, with $-\infty<\xi _1<\infty$, and $\int _\Omega u(\xi _1) \varphi _1 \, dx=\xi _1$. This curve exhausts the solution set of (\ref{23}).
Moreover, there are constants $\mu _- <0< \mu _+$ so that the problem (\ref{23}) has at least two solutions for $\mu \in (\mu _-,\mu _+) \setminus \{0\}$, it has at least one solution for $\mu=\mu _- $, $\mu=0$ and $\mu=\mu _+ $, and no solutions for $\mu$ lying outside of $(\mu _-,\mu _+)$.
\end{thm}
\noindent {\bf Proof:} $\; \;$
Follow the proof of the Theorem \ref{thm4}. Since $g(u)$ is bounded, we have a uniform in $\xi _1$ bound on $||U||_{C^1}$, see \cite{FN}. Since $\mu_1=\int_\Omega g(\xi _1 \varphi _1+U) \varphi _1 \, dx$, we conclude that for $\xi _1$ positive (negative) and large, $\mu _1$ is positive (negative) and it tends to zero as $\xi _1 \rightarrow \infty$ ($\xi _1 \rightarrow -\infty$).
$\diamondsuit$
\noindent
{\bf Example} We have solved numerically the problem
\[
u''+u+\frac{u}{2u^2+u+1}=\mu \sin x+ \sin 2x, \; \; 0<x<\pi,
\; \; u(0)=u(\pi)=0 \,.
\]
The Theorem \ref{thm:5} applies. Write the solution as $u(x)=\xi \sin x +U(x)$, with $\int_0^{\pi} U(x) \sin x \, dx=0$.
Then the solution curve $\mu=\mu (\xi)$ is given in Figure $2$. The picture shows that, say, for $\mu=-0.4$, the problem has exactly two solutions, while for $\mu=1$ there are no solutions.
\begin{figure}
\caption{An example for the Theorem \ref{thm:5}}
\end{figure}
We also have a result of Landesman-Lazer type, which provides some additional information on the solution curve.
\begin{thm}\label{thm:12}
Assume that the function $g(u) \in C^1(R)$ is bounded, it satisfies (\ref{25}), and in addition, $g(u)$ has finite limits at $\pm \infty$, and
\[
g(-\infty)<g(u)<g(\infty), \, \; \; \mbox{for all $u \in R$} \,.
\]
Then there is a continuous curve of solutions of (\ref{23}): $(u(\xi _1),\mu _1(\xi _1))$, $u \in H^2(\Omega) \cap H^1_0(\Omega)$, with $-\infty<\xi _1<\infty$, and $\int _\Omega u(\xi _1) \varphi _1 \, dx=\xi _1$. This curve exhausts the solution set of (\ref{23}), and $\lim _{\xi _1 \rightarrow \pm \infty}\mu _1(\xi _1) = g(\pm \infty) \int _\Omega \varphi _1 \, dx$. I.e., the problem (\ref{23}) has a solution if and only if
\[
g(- \infty) \int _\Omega \varphi _1 \, dx<\mu<g( \infty) \int _\Omega \varphi _1 \, dx \,.
\]
\end{thm}
\noindent {\bf Proof:} $\; \;$
Follow the proof of the Theorem \ref{thm4}. Since $g(u)$ is bounded, we have a uniform bound on $U$, when we do the continuation in $\xi _1$. Hence $\mu _1 \rightarrow g(\pm \infty) \int _\Omega \varphi _1 \, dx$, as $\xi _1 \rightarrow \pm \infty$, and by continuity of $\mu _1 (\xi _1)$, the problem (\ref{23}) is solvable for all $\mu _1$'s lying between these limits.
$\diamondsuit$
\noindent
{\bf Example} We have solved numerically the problem
\[
u''+u+\frac{u}{\sqrt{u^2+1}}=\mu \sin x+ 5\sin 2x-\sin 10 x, \; \; 0<x<\pi,
\; \; u(0)=u(\pi)=0 \,.
\]
The Theorem \ref{thm:12} applies. Write the solution as $u(x)=\xi \sin x +U(x)$, with $\int_0^{\pi} U(x) \sin x \, dx=0$. Then the solution curve $\mu=\mu (\xi)$ is given in Figure $3$. It confirms that $\lim _{\xi _1 \rightarrow \pm \infty}\mu _1(\xi _1) =\pm \frac{4}{\pi}$ ($\frac{4}{\pi}=\int _0^{\pi} \frac{2}{\pi} \, \sin x \, dx$).
\begin{figure}
\caption{An example for the Theorem \ref{thm:12}}
\end{figure}
One can append the following uniqueness condition (\ref{28.50}) to all of the above results. For example, we have the following result.
\begin{thm}
Assume that the conditions of the Theorem \ref{thm4} hold, and in addition
\begin{equation}
\label{28.50}
g'(u)>0, \, \; \; \mbox{for all $u \in R$} \,.
\end{equation}
Then
\begin{equation}
\label{28.5}
\mu _1'(\xi _1)>0, \; \; \mbox{ for all $\xi _1 \in R$} \,.
\end{equation}
\end{thm}
\noindent {\bf Proof:} $\; \;$
Clearly, $\mu' _1(\xi _1)>0$ at least for some values of $\xi _1$. If (\ref{28.5}) is not true, then $\mu' _1(\xi^0 _1)=0$ at some $\xi^0 _1$.
Differentiate the equation (\ref{23}) in $\xi _1$, set $\xi _1=\xi^0 _1$, and denote $w=u_{\xi _1} |_{\xi _1=\xi^0 _1}$, obtaining
\begin{eqnarray} \nonumber
& \Delta w+\left(\lambda _1 +g'(u) \right)w=0 \; \; \mbox{for $x \in \Omega$}, \\ \nonumber
& w=0 \; \; \mbox{on $\partial \Omega$}.
\end{eqnarray}
Clearly, $w$ is not zero, since it has a non-zero projection on $\varphi _1$ ($U_{\xi _1} \in \varphi _1^{\perp}$). On the other hand, $w \equiv 0$, since by the assumption (\ref{25}) we have $\lambda _1<\lambda _1 +g'(u)<\lambda _2$, a contradiction.
$\diamondsuit$
\begin{cor}
In addition to the conditions of this theorem, assume that the condition (\ref{28.1}) holds, for all $u \in R$. Then for any $f(x) \in L^2(\Omega)$, the problem
\[
\Delta u+\lambda _1 u +g(u) =f(x) \,, \; \; \mbox{for $x \in \Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$}
\]
has a unique solution $u(x) \in H^2(\Omega) \cap H^1_0(\Omega)$.
\end{cor}
\section{Resonance at higher eigenvalues}
\setcounter{equation}{0}
\setcounter{thm}{0}
\setcounter{lma}{0}
We consider the problem
\begin{equation}
\label{43}
\Delta u +\lambda _k u+g(u)=f(x) \; \; \mbox{on $\Omega$}, \; \; u=0 \; \; \mbox{on $\partial \Omega$} \,,
\end{equation}
where $\lambda _k$ is assumed to be a {\em simple} eigenvalue of $-\Delta$. We have the following extension of the result of D.G. de Figueiredo and W.-M. Ni \cite{FN} to the case of resonance at a non-principal eigenvalue.
\begin{thm}\label{thm7}
Assume that $g(u) \in C^1(R)$ is bounded, it satisfies (\ref{24}), and
\begin{equation}
\label{44}
g'(u) \leq c_0, \; \; \mbox{for all $u \in R$, and some $c_0>0$} \,,
\end{equation}
\begin{equation}
\label{45}
\liminf _{u \rightarrow \infty} g(u)>0, \; \; \limsup _{u \rightarrow -\infty} g(u)<0 \,.
\end{equation}
Assume that $f(x) \in L^2(\Omega)$ satisfies
\begin{equation}
\label{46}
\int _\Omega f(x) \varphi _k (x) \, dx=0 \,.
\end{equation}
Then the problem (\ref{43}) has a solution $u(x) \in H^2(\Omega) \cap H^1_0(\Omega)$.
\end{thm}
\noindent {\bf Proof:} $\; \;$
By (\ref{44}) we may assume that $\lambda _k+ g'(u) < \lambda _{n+1}$ for some $n>k$. Expand $f(x)=\mu _1^0 \varphi _1+\mu _2^0 \varphi _2+ \cdots +\mu _n^0 \varphi _n+e(x)$, with
$e(x) \in Span \{ \varphi_1, \ldots, \varphi _{n} \}^{\perp}$, and $u(x)=\xi _1 \varphi _1+\xi _2 \varphi _2+ \cdots +\xi _n \varphi _n+U(x)$, with
$U(x) \in Span \{ \varphi_1, \ldots, \varphi _{n} \}^{\perp}$. By (\ref{46}), $\mu ^0 _k=0$. By the Theorem \ref{thm:1} for any $\xi=\left( \xi _1, \ldots, \xi _n \right)$, one can find a unique $\mu=\left( \mu _1, \ldots, \mu _n \right)$ for which the problem (\ref{2}) has a solution of $n$-signature $\xi$, and we need to find a $\xi ^0=\left( \xi^0 _1, \ldots, \xi^0 _n \right)$, for which $\mu (\xi ^0)=\left( \mu^0 _1, \ldots, \mu^0 _{k-1},0,\mu^0 _{k+1}, \ldots, \mu^0 _n \right)$.
Multiplying the equation (\ref{43}) by $\varphi _i$, and integrating we get
\[
(\lambda _k-\lambda _i) \xi _i+ \int _\Omega g \left(\sum _{j=1}^n \xi _j \varphi _j +U\right) \varphi _i \, dx=\mu ^0 _i, \; \; i=1, \ldots, k-1,k+1, \ldots, n
\]
\[
\int _\Omega g \left(\sum _{i=1}^n \xi _i \varphi _i +U\right) \varphi _k\, dx=0 \,.
\]
We need to solve this system of equations for $\left( \xi _1, \ldots, \xi _n \right)$. For that we set up a map $T : \left( \eta _1, \ldots, \eta _n \right) \rightarrow \left( \xi _1, \ldots, \xi _n \right)$, by calculating $\xi _i$ from
\[
(\lambda _k-\lambda _i) \xi _i= \mu ^0 _i -\int _\Omega g \left(\sum _{j=1}^n \eta _j \varphi _j +U\right) \varphi _i\, dx, \; \; i=1, \ldots, k-1,k+1, \ldots, n
\]
followed by
\[
\xi _k=\eta _k-\int _\Omega g \left(\xi _1 \varphi _1+ \cdots +\xi _{k-1} \varphi _{k-1}+\eta _k \varphi _k +\xi _{k+1} \varphi _{k+1}+ \cdots +\xi _n \varphi _n+U \right)\varphi _k \, dx \,.
\]
Fixed points of this map provide solutions to our system of equations. By the Theorem \ref{thm:2}, the map $T$ is continuous. Since $g(u)$ is bounded, $\left( \xi _1, \ldots,\xi _{k-1},\xi_{k+1},\ldots, \xi _n \right)$ belongs to a bounded set. By (\ref{24}) and (\ref{45}), $\xi _k <\eta _k$ for $\eta _k>0$ and large, while $\xi _k >\eta _k$ for $\eta _k<0$ and $|\eta _k|$ large. Hence, the map $T$ maps a sufficiently large ball around the origin in $R^n$ into itself, and Brouwer's fixed point theorem applies, giving us a fixed point of $T$.
$\diamondsuit$
\section{Numerical computation of solutions}
\setcounter{equation}{0}
\setcounter{thm}{0}
\setcounter{lma}{0}
We describe numerical computation of solutions for the problem
\begin{equation}
\lambdabel{n1}
u''+u+g(u)=\mu \sin x+e(x), \; \; 0<x<\pi, \; \; u(0)=u(\pi)=0 \,,
\end{equation}
whose linear part is at resonance. We assume that $\int _0^{\pi} e(x) \sin x \, dx=0$. Writing $u(x)=\xi \sin x +U(x)$, with $\int _0^{\pi} U(x) \sin x \, dx=0$, we shall compute the solution curve of (\ref{n1}): $(u(\xi),\mu (\xi))$. (I.e., we write $\xi$, $\mu$ instead of $\xi _1$, $\mu _1$.) We shall use Newton's method to perform continuation in $\xi$.
Our first task is to implement the ``linear solver'', i.e., the numerical solution of the following problem: given any $\xi \in R$, and any functions $a(x)$ and $f(x)$, find $u(x)$ and $\mu$ solving
\begin{eqnarray}
\label{n2}
& u''+a(x)u=\mu \sin x+f(x), \; \; 0<x<\pi \,, \\\nonumber
& u(0)=u(\pi)=0 \,,\\ \nonumber
& \int _0^{\pi} u(x) \sin x \, dx=\xi \,.\nonumber
\end{eqnarray}
The general solution of the equation (\ref{n2}) is of course
\[
u(x)=Y(x)+c_1 u_1(x)+c_2 u_2(x) \,,
\]
where $Y(x)$ is any particular solution, and $u_1$,$u_2$ are two solutions of the corresponding homogeneous equation
\begin{equation}
\label{n3}
u''+a(x)u=0, \; \; 0<x<\pi \,.
\end{equation}
We shall use $Y=\mu Y_1+ Y_2$, where $Y_1$ solves
\[
u''+a(x)u=\sin x, \; \; u(0)=0, \; \; u'(0)=1 \,,
\]
and $Y_2$ solves
\[
u''+a(x)u=f(x), \; \; u(0)=0, \; \; u'(0)=1 \,.
\]
Let $u_1(x)$ be the solution of (\ref{n3}) with $u(0)=0$, $u'(0)=1$, and let $u_2(x)$ be any solution of (\ref{n3}) with $u_2(0) \ne 0$.
The condition $u(0)=0$ implies that $c_2=0$, i.e., there is no need to compute $u_2(x)$, and we have
\begin{equation}
\label{n4}
u(x)=\mu Y_1(x)+Y_2(x)+c_1 u_1(x) \,.
\end{equation}
We used the NDSolve command in {\em Mathematica} to calculate $u_1$, $Y_1$ and $Y_2$. {\em Mathematica} not only solves differential equations numerically, but it returns the solution as an interpolated function of $x$, practically indistinguishable from an explicitly defined function. The condition $u(\pi)=0$ and the last line in (\ref{n2}) imply that
\[
\mu Y_1(\pi)+c_1 u_1(\pi)=-Y_2(\pi) \,,
\]
\[
\mu \int _0^{\pi} Y_1(x)\sin x \, dx+c_1 \int _0^{\pi} u_1(x)\sin x \, dx=\xi -\int _0^{\pi} Y_2(x)\sin x \, dx \,.
\]
Solving this system for $\mu$ and $c_1$, and using them in (\ref{n4}), we obtain the solution of (\ref{n2}).
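The linear solver admits a compact implementation in any language. The following Python sketch is our own illustration (the paper used {\em Mathematica}'s NDSolve; the classical RK4 integrator, the mesh size, and the constant-coefficient test case below are our choices): it integrates $Y_1$, $Y_2$ and $u_1$ with the stated initial conditions, then solves the $2\times 2$ system for $\mu$ and $c_1$ and forms the superposition (\ref{n4}).

```python
import math

def rk4(rhs, n=2000):
    """Integrate u'' = rhs(x, u) on [0, pi], u(0)=0, u'(0)=1, by classical
    RK4 applied to the system (u, u')' = (u', rhs).  Returns u on x_j = j*h."""
    h = math.pi / n
    u, v = 0.0, 1.0
    us = [u]
    for j in range(n):
        x = j * h
        k1u, k1v = v, rhs(x, u)
        k2u, k2v = v + h/2*k1v, rhs(x + h/2, u + h/2*k1u)
        k3u, k3v = v + h/2*k2v, rhs(x + h/2, u + h/2*k2u)
        k4u, k4v = v + h*k3v, rhs(x + h, u + h*k3u)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        us.append(u)
    return us

def proj(us):
    """Trapezoid rule for int_0^pi u(x) sin x dx on the uniform mesh."""
    n = len(us) - 1
    h = math.pi / n
    vals = [u * math.sin(j * h) for j, u in enumerate(us)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def linear_solver(a, f, xi):
    """Solve u'' + a(x)u = mu sin x + f(x), u(0)=u(pi)=0, proj(u)=xi,
    returning (u, mu) via the superposition u = mu*Y1 + Y2 + c1*u1."""
    Y1 = rk4(lambda x, u: -a(x)*u + math.sin(x))  # particular, forcing sin x
    Y2 = rk4(lambda x, u: -a(x)*u + f(x))         # particular, forcing f(x)
    u1 = rk4(lambda x, u: -a(x)*u)                # homogeneous solution
    # 2x2 system from u(pi) = 0 and proj(u) = xi:
    a11, a12, b1 = Y1[-1], u1[-1], -Y2[-1]
    a21, a22, b2 = proj(Y1), proj(u1), xi - proj(Y2)
    det = a11*a22 - a12*a21
    mu = (b1*a22 - b2*a12) / det
    c1 = (a11*b2 - a21*b1) / det
    u = [mu*y1 + y2 + c1*w for y1, y2, w in zip(Y1, Y2, u1)]
    return u, mu

# Check against a case solvable by hand: for a(x) = 2, f(x) = sin 2x the
# boundary value problem has the exact solution u = mu sin x - (1/2) sin 2x,
# so the projection condition forces mu = 2*xi/pi.
u, mu = linear_solver(lambda x: 2.0, lambda x: math.sin(2*x), xi=1.0)
print(abs(mu - 2/math.pi) < 1e-3)   # True
```

Note that the $2\times 2$ system is solved exactly (up to rounding), so $u(\pi)=0$ and the projection constraint hold to machine precision; the discretization error enters only through the integrator and the quadrature.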
Turning to the problem (\ref{n1}), we begin with an initial $\xi _0$, and using a step size $\Delta \xi$, on a mesh $\xi _i=\xi _0 +i \Delta \xi$, $i=1,2, \ldots, \mbox{nsteps}$, we compute the solution of (\ref{n1}), satisfying $\int _0^{\pi} u (x) \sin x \, dx=\xi _i$, by using Newton's method. Namely, assuming that the iterate $u_n(x)$ is already computed, we linearize the equation (\ref{n1}) at it, i.e., we solve the problem (\ref{n2}) with
$a(x)=1+g'(u_n(x))$, $f(x)=-g(u_n(x))+g'(u_n(x)) u_n(x)+e(x)$, and $\xi =\xi _i$. After several iterations, we compute $(u(\xi _i), \mu (\xi _i))$. We found that two iterations of Newton's method, coupled with $\Delta \xi$ not too large (e.g., $\Delta \xi=0.5$), were sufficient for accurate computation of the solution curves. To start Newton's iterations, we used $u(x)$ computed at the preceding step, i.e., $u(\xi _{i-1})$.
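The choice of $a(x)$ and $f(x)$ here is the standard Newton linearization: replacing $g(u)$ in (\ref{n1}) by its tangent-line approximation $g(u_n)+g'(u_n)(u-u_n)$ at the current iterate turns (\ref{n1}) into
\[
u''+\left(1+g'(u_n(x))\right)u=\mu \sin x-g(u_n(x))+g'(u_n(x)) u_n(x)+e(x) \,,
\]
which is precisely the linear problem (\ref{n2}) with the above $a(x)$ and $f(x)$.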
We have verified our numerical results by an independent calculation. Once a solution of (\ref{n1}) was computed at some $\xi _i$, we took its initial data $u(0)=0$ and $u'(0)$, and computed numerically the solution of the equation in (\ref{n1}) with this initial data, let us call it $v(x)$ (using the NDSolve command). We always had $v(\pi)=0$ and $\int _0^{\pi} v(x) \sin x \, dx=\xi _i$.
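For readers without access to {\em Mathematica}, the superposition step described above can be sketched in Python with SciPy. This is a minimal illustration of the same reduction to a $2\times 2$ linear system; the function name and the sample data $a(x)\equiv 1$, $f(x)\equiv 0$ used below are our own illustrative choices, not part of the original computation.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

def solve_linearized_problem(a, f, xi):
    """Solve u'' + a(x) u = mu*sin(x) + f(x),  u(0) = u(pi) = 0,
    subject to  int_0^pi u(x) sin(x) dx = xi,
    via the superposition u = mu*Y1 + Y2 + c1*u1 described in the text."""
    def ivp(force):
        # integrate the first-order system with u(0)=0, u'(0)=1
        rhs = lambda x, y: [y[1], force(x) - a(x) * y[0]]
        return solve_ivp(rhs, (0.0, np.pi), [0.0, 1.0],
                         dense_output=True, rtol=1e-10, atol=1e-12).sol
    Y1 = ivp(np.sin)                # responds to the mu*sin(x) term
    Y2 = ivp(f)                     # responds to the inhomogeneity f
    u1 = ivp(lambda x: 0.0)         # homogeneous solution
    I = lambda s: quad(lambda x: s(x)[0] * np.sin(x), 0.0, np.pi)[0]
    # u(pi)=0 and the integral constraint give a 2x2 system for (mu, c1)
    A = np.array([[Y1(np.pi)[0], u1(np.pi)[0]],
                  [I(Y1),        I(u1)]])
    b = np.array([-Y2(np.pi)[0], xi - I(Y2)])
    mu, c1 = np.linalg.solve(A, b)
    u = lambda x: mu * Y1(x)[0] + Y2(x)[0] + c1 * u1(x)[0]
    return mu, c1, u
```

For $a\equiv 1$, $f\equiv 0$ and $\xi =\pi /2$, the constrained problem has the exact solution $u=(2\xi /\pi )\sin x=\sin x$ with $\mu =0$, which the routine reproduces to high accuracy.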
\end{document} |
\begin{document}
\title[Steenrod-\v{C}ech homology-cohomology theories]{Steenrod-\v{C}ech homology-cohomology theories associated with bivariant functors}
\author{Kohei Yoshida}
\email{[email protected]}
\date{\today}
\begin{abstract}
Let $\Bbb{N}Gc_0$ denote the category of all pointed
numerically generated spaces and
continuous maps preserving base-points.
In \cite{numerical}, we described a passage from bivariant functors to
generalized homology and cohomology theories.
In this paper, we construct a bivariant functor
such that the associated cohomology is the \v{C}ech
cohomology and the homology is the Steenrod homology
(at least for compact metric spaces).
\end{abstract}
\maketitle
\section{introduction}
We call a topological space $X$ numerically generated
if it has the final topology with respect to its singular simplexes.
CW-complexes are typical examples of such numerically generated spaces.
Let $\Bbb{N}Gc_0$ be the
category of pointed numerically
generated spaces and
pointed continuous maps.
In \cite{numerical} we showed that $\Bbb{N}Gc_0$
is a symmetric monoidal closed category with respect to the smash
product,
and that every bilinear enriched functor
$F:\Bbb{N}Gc_0^{op}\times \Bbb{N}Gc_0 \to \Bbb{N}Gc_0$
gives rise to a pair of generalized homology and cohomology theories,
denoted by $h_{\bullet}(-,F)$ and $h^{\bullet}(-,F)$ respectively,
such that
\[
h_n(X,F) \cong \pi_0 F(S^{n+k},\Sigma^k X), \quad
h^n(X,F) \cong \pi_0 F(\Sigma^k X,S^{n+k})
\]
hold whenever $k$ and $n+k$ are non-negative.
As an example, consider the bilinear enriched functor $F$
which assigns to $(X,Y)$
the mapping space from $X$ to the topological free abelian group $AG(Y)$
generated by the points of $Y$ modulo the relation $*\sim 0$.
The Dold-Thom theorem says that if $X$ is a CW-complex then
the groups $h_n(X,F)$ and $h^n(X,F)$ are, respectively, isomorphic to
the singular homology and cohomology groups of $X$.
But this is not the case for general $X$;
there exists a space $X$ such that $h_n(X,F)$ (resp.\ $h^n(X,F)$)
is not isomorphic to the singular homology (resp.\ cohomology) group of $X$.
The aim of this paper is to construct a bilinear enriched functor
such that for any space $X$ the associated cohomology groups are
isomorphic to the \v{C}ech cohomology groups of $X$.
Interestingly, it turns out that the corresponding homology groups
are isomorphic to the Steenrod homology groups for
any compact metrizable space $X$.
Thus we obtain a bivariant theory which ties together
the \v{C}ech cohomology and the Steenrod homology theories.
Recall that the \v{C}ech cohomology group of $X$
with coefficient group $G$ is defined to be the colimit
of the singular cohomology groups
\[
\check{H}^n(X,G) = {\underrightarrow{{\rm lim}}}_\lambda H^n(X^{\bf CF}_\lambda,G),
\]
where $\lambda$ runs through coverings of $X$ and
$X^{\bf CF}_\lambda$ is the \v{C}ech nerve corresponding to $\lambda$; i.e.,
a vertex $v\in X_\lambda^{\bf CF}$
corresponds to an open set $V\in \lambda$.
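As a standard illustration of this definition (our own example, not taken from the references), cover $S^1$ by three open arcs, any two of which meet while the triple intersection is empty. The corresponding \v{C}ech nerve is the boundary of a $2$-simplex, hence homotopy equivalent to $S^1$; refining by shorter arcs changes nothing up to homotopy, so the colimit gives

```latex
\[
\check{H}^n(S^1,G)\cong H^n(S^1,G)\cong
\begin{cases}
G, & n=0,1,\\
0, & n\geq 2.
\end{cases}
\]
```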
On the other hand, the Steenrod homology group of a compact metric space
$X$ is defined as follows.
As $X$ is a compact metric space, there is a sequence
$\{\lambda_i \}_{i\geq 0}$ of finite open covers of $X$ such that
$\lambda_0 = \{X\}$, $\lambda_i$ is a refinement of $\lambda_{i-1}$, and
$X$ is the inverse limit ${\underleftarrow{{\rm lim}}}_i X^{\bf CF}_{\lambda_i}$.
According to \cite{F}, the Steenrod homology group of $X$ with
coefficients in the spectrum $ \Bbb{S} $ is defined to be the group
\[
H^{st}_{n}(X,\Bbb{S}) =
\pi_{n}\underleftarrow{{\rm holim}}_{\lambda_i}(X^{\bf CF}_{\lambda_i}\wedge \Bbb{S} )
\]
where $\underleftarrow{{\rm holim}}$ denotes the homotopy inverse limit.
Let $\Bbb{N}Gcc_0$ be the subcategory of pointed numerically
generated compact metric spaces and pointed continuous maps.
For given a linear enriched functor $T:\Bbb{N}Gc_0\to\Bbb{N}Gc_0$,
let
\[
\mbox{\v{F}}:\Bbb{N}Gc^{op}_0 \times \Bbb{N}Gcc_0 \to \Bbb{N}Gc_0
\]
be a bifunctor which maps $(X,Y)$ to the space ${\underrightarrow{{\rm lim}}}_\lambda \map_0(X_\lambda,{\underleftarrow{{\rm holim}}}_{\mu_i} T(Y_{\mu_i}^{\bf CF})).$
Here $\lambda$ runs through coverings of $X$, and
$X_\lambda$ is the Vietoris nerve corresponding to $\lambda$ (\cite{P}).
The main results of the paper can be stated as follows.
\begin{Theorem}\label{TH2}
The functor $\FF$ is a bilinear enriched functor.
\end{Theorem}
\begin{Theorem}\label{Sp}
Let $X$ be a compact metrizable space.
Then $h_{n}(X,\FF )\cong H^{st}_n(X,\Bbb{S} )$,
the Steenrod homology group with coefficients in the spectrum $\Bbb{S} =\{ T(S^k) \}$.
\end{Theorem}
In particular, let us take $AG$ as $T$, and let
\[
{\bf CF}:\Bbb{N}Gc_0^{op}\times \Bbb{N}Gcc_0 \to \Bbb{N}Gc_0
\]
be the corresponding bifunctor \v{F}.
\begin{Theorem}\label{Cor1}
For any pointed space $X$,
$h^n(X,{\bf CF})$ is the \v{C}ech cohomology
group of $X$, and $h_n(X,{\bf CF})$ is the Steenrod homology group
of $X$ if $X$ is a compact metrizable space.
\end{Theorem}
Recall that the Steenrod homology group
is related to the \v{C}ech homology group of $X$ by the exact sequence
\[
\xymatrix{
0 \ar[r] &{\underleftarrow{{\rm lim}}}^1_{\lambda_i} \tilde{H}_{n+1}(X^{\bf CF}_{\lambda_i}) \ar[r] &
H^{st}_n(X) \ar[r]
& \tilde{H}_{n}(X) \ar[r] & 0 .\\
}
\]
If $X$ is a movable compactum then we have
${\underleftarrow{{\rm lim}}}^1_{\lambda_i} \tilde{H}_{n+1}(X^{\bf CF}_{\lambda_i})=0$, and hence the following corollary follows.
\begin{Corollary}\label{Cor}
Let $X$ be a movable compactum.
Then $h_n(X,{\bf CF} )$ is the \v{C}ech homology group of $X$.
\end{Corollary}
The paper is organized as follows. In Section 2 we
recall from \cite{numerical} the category $\Bbb{N}Gc_0$ and the passage
from bilinear enriched functors to generalized homology
and cohomology theories, and recall the Vietoris and
\v{C}ech nerves.
In Section 3 we prove Theorem \ref{TH2};
Finally, in Section 4 we prove Theorems \ref{Sp} and \ref{Cor1}.
\paragraph{Acknowledgement}
We thank Professor K.~Shimakawa for calling our attention to the subject
and for
useful conversations while preparing the manuscript.
\section{Preliminaries}
\subsection{Homology and cohomology theories via bifunctors}
Let $\Bbb{N}Gc_0$ be the category of pointed numerically
generated topological spaces and
pointed continuous maps.
In \cite{numerical} we showed that $\Bbb{N}Gc_0$ satisfies the following properties:
\begin{enumerate}
\item It contains pointed CW-complexes;
\item It is complete and cocomplete;
\item It is monoidally closed in the sense that there is an internal hom
$Z^Y$ satisfying a natural bijection
$\hom_{\Bbb{N}Gc_0}(X \wedge Y,Z) \cong \hom_{\Bbb{N}Gc_0}(X,Z^Y)$;
\item There is a coreflector $\nu \colon \Top_0 \to \Bbb{N}Gc_0$ such that
the coreflection arrow $\nu{X} \to X$ is a weak equivalence;
\item The internal hom $Z^Y$ is weakly equivalent to the space
of pointed maps from $Y$ to $Z$ equipped with the compact-open topology.
\end{enumerate}
Throughout the paper, we write $\map_0(Y,Z)=Z^Y $ for any
$Y,\ Z \in \Bbb{N}Gc_0$.
A map $f \colon X \to Y$ between topological spaces is said to be
numerically continuous if the composite
$f\circ \sigma \colon \Delta^n \to Y$ is continuous
for every singular simplex $\sigma \colon \Delta^n \to X$.
We have the following.
\begin{Proposition}{$($\cite{numerical}$)$}\label{5}
Let $f \colon X \to Y$ be a map between numerically generated spaces.
Then $f$ is numerically continuous
if and only if $f$ is continuous.
\end{Proposition}
From now on, we assume that $\bf C_0$ satisfies the following conditions:
(i) $\bf C_0$ contains all finite CW-complexes.
(ii) $\bf C_0$ is closed under finite wedge sum.
(iii) If $A \subset X$ is an inclusion of objects in $\bf C_0$ then
its cofiber $X \cup CA$ belongs to $\bf C_0$;
in particular, $\bf C_0$ is closed under the suspension functor
$X \mapsto \Sigma X$.
\begin{Definition}
Let $\bf C_0$ be a full subcategory of $\Bbb{N}Gc_0$. A functor $T \colon \bf C_0 \to \Bbb{N}Gc_0$ is called
{\em enriched $($or continuous$)$ \/} if the map
\[
T:\map_0(X,X')\to \map_0(T(X),T(X')),
\]
which assigns $T(f)$ to every $f$,
is a pointed continuous map.
\end{Definition}
Note that if $f$ is constant,
then so is $T(f)$.
\begin{Definition}
An enriched functor $T$ is called {\em linear\/} if
for every pair of pointed spaces $(X,A)$ the
sequence
\[
T(A)\to T(X)\to T(X\cup CA)
\]
induced by the cofibration sequence
$A\to X \to X\cup CA$ is a homotopy fibration sequence.
\end{Definition}
\begin{Example}
Let $AG:\mbox{CW}_0\to \Bbb{N}Gc_0$ be the functor which assigns to a pointed
CW-complex
$(X,x_0)$ the topological abelian group $AG(X)$ generated by
the points of $X$ modulo the relation $x_0 \sim 0$.
Then $AG$ is a linear enriched functor (see \cite{numerical}).
\end{Example}
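By the Dold--Thom theorem (a classical fact, recalled here for orientation and already alluded to in the introduction), the homotopy groups of $AG(X)$ compute reduced singular homology:

```latex
\[
\pi_n AG(X)\cong \tilde{H}_n(X;\Bbb{Z}), \qquad n\geq 0,
\]
```

for every pointed CW-complex $X$; combined with the next theorem, this recovers the identification of $h_n(X,AG)$ with singular homology.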
\begin{Theorem}{$($\cite[Th 6.4]{numerical}$)$}
A linear enriched functor
$T$ defines a generalized homology $\{ h_n(X,T) \}$ satisfying
\begin{equation*}
h_n(X,T)= \begin{cases}
\pi_nT(X), & n\ge 0\\
\pi_0T(\Sigma^{-n}X), & n<0.
\end{cases}\\
\end{equation*}
\end{Theorem}
Next we introduce the notion of a bilinear enriched functor, and describe a passage
from a bilinear enriched functor to generalized cohomology and generalized homology theories.
We assume that $\bf C_0'$ satisfies the same conditions of
$\bf C_0$.
\begin{Definition}\label{dual}
Let $\bf C_0$ and $\bf C_0'$ be full subcategories of $\Bbb{N}Gc_0$.
A {\em bifunctor} $F \colon \bf C_0^{op}\times \bf C_0' \to
\Bbb{N}Gc_0$ is a function which
\begin{enumerate}
\item to each pair of objects $X\in \bf C_0$ and $Y\in \bf C_0'$
assigns an object $F(X,Y)\in \Bbb{N}Gc_0$;
\item to each $f\in \map_0(X,X')$, $g\in \map_0(Y,Y')$ assigns a continuous map $F(f,g)\in \map_0(F(X',Y),~F(X,Y'))$.
$F$ is required to satisfy the following equalities:
\begin{enumerate}
\item $F(1_X,1_Y)=1_{F(X,Y)}$;
\item $F(f,g)=F(1_X,g)\circ F(f,1_Y)=F(f,1_{Y'})\circ F(1_{X'},g)$;
\item $F(f'\circ f,1_Y)=F(f,1_Y)\circ F(f',1_Y)$,
$F(1_X,g'\circ g)=F(1_X,g')\circ F(1_X,g)$.
\end{enumerate}
\end{enumerate}
\end{Definition}
\begin{Definition}
A bifunctor $F \colon \bf C_{0}^{op}\times \bf C_0 \to \Bbb{N}Gc_0$ is called
{\em enriched\/} if the map
\[
F:\map_0(X,X')\times \map_0(Y,Y')\to \map_0(F(X',Y),F(X,Y')),
\]
which assigns $F(f,g)$ to every pair $(f,g)$,
is a pointed continuous map.
\end{Definition}
Note that if either $f$ or $g$ is constant,
then so is $F(f,g)$.
\begin{Definition}\label{biexact}
A bifunctor $F$ is called {\em bilinear\/} if, for any pairs of pointed spaces $(X,A)$ and $(Y,B)$, the
sequences
\begin{enumerate}
\item $F(X\cup CA,Y)\to F(X,Y)\to F(A,Y)$
\item $F(X,B)\to F(X,Y)\to F(X,Y\cup CB)$,
\end{enumerate}
induced by the cofibration sequences
$A\to X \to X\cup CA$ and
$B\to Y \to Y\cup CB$, are homotopy fibration
sequences.
\end{Definition}
\begin{Example}
Let $T:\Bbb{N}Gc_0\to \Bbb{N}Gc_0$ be a linear enriched functor,
and let $F(X,Y)=\map_0(X,T(Y))$ for $X,Y\in \Bbb{N}Gc_0$.
Then $F:\Bbb{N}Gc_0^{op}\times \Bbb{N}Gc_0 \to \Bbb{N}Gc_0$ is a bilinear enriched functor.
\end{Example}
\begin{Theorem}{$($\cite[Th 7.4]{numerical}$)$}
A bilinear enriched functor
$F$ defines a generalized cohomology $\{ h^n(-,F) \}$ and
a generalized homology $\{ h_n(-,F) \}$ such that
\begin{equation*}
h_n(Y,F)= \begin{cases}
\pi_0F(S^n,Y) & n\ge 0\\
\pi_0F(S^0,\Sigma^{-n}Y) & n<0,
\end{cases}\\
~~~~h^n(X,F)= \begin{cases}
\pi_0F(X,S^n) & n\ge 0\\
\pi_{-n}F(X,S^0) & n<0,
\end{cases}
\end{equation*}
hold for any $X\in \bf C_0$ and $Y\in \bf C_0'$.
\end{Theorem}
\begin{Proposition}\label{numerical}{$($\cite{numerical}$)$}
If $X$ is a CW-complex, we have
$h_n(X,F) =H_n(X,\Bbb{S} )$ and $h^n(X,F)=H^n(X,\Bbb{S})$,
the generalized homology and cohomology groups
with coefficients in the spectrum $\Bbb{S} =\{ F(S^0,S^n)~|~n\geq 0 \}$.
\end{Proposition}
\subsection{Vietoris and \v{C}ech nerves}
For each $X \in \Bbb{N}Gc_{0}$,~let $\lambda$ be an open covering of $X$.
According to \cite{P}, the Vietoris nerve of $\lambda$
is a simplicial set in which an
$n$-simplex is an ordered $(n+1)$-tuple
$(x_0,x_1,\cdots ,x_n)$ of points contained in
an open set $U\in \lambda$.
Face and degeneracy operators are respectively given by
\[
d_i(x_0,\cdots ,x_n)=(x_0,x_1,\cdots ,x_{i-1},x_{i+1},
\cdots ,x_n)
\]
and
\[
s_i(x_0,x_1,\cdots ,x_n)=(x_0,x_1,\cdots ,x_{i-1},x_i,
x_i,x_{i+1},\cdots ,x_n),~~0\le i \le n.
\]
We denote the realization of
the Vietoris nerve of $\lambda$ by $X_{\lambda}$.
If $\lambda$ is a refinement of $\mu $,
then there is a canonical map $\pi_{{\mu}}^{{\lambda}}:X_{\lambda}\to X_{\mu}$
induced by the identity map of $X$.
The relation between the Vietoris and the \v{C}ech nerves
is given by the following Proposition due to Dowker.
\begin{Proposition}\label{D}$ ($\cite{D}$) $
The \v{C}ech nerve $X_\lambda^{\bf CF}$ and the Vietoris nerve $X_\lambda$ have the same homotopy type.
\end{Proposition}
According to \cite{D}, for an arbitrary topological space,
the Vietoris and \v{C}ech homology groups are isomorphic,
and the Alexander-Spanier and \v{C}ech cohomology groups
are isomorphic.
\section{Proof of Theorem \ref{TH2}}
Let $T$ be a linear enriched functor.
We define a bifunctor $\FF:\Bbb{N}Gc_0^{op}\times \Bbb{N}Gcc_0 \to \Bbb{N}Gc_0$ as follows.
For $X \in \Bbb{N}Gc_0$ and $Y\in \Bbb{N}Gcc_0$, we put
\[
\displaystyle{\FF(X,Y)={\underrightarrow{{\rm lim}}}_\lambda \map_0(X_\lambda,
~{\underleftarrow{{\rm holim}}}_{\mu_i} T(Y_{\mu_i}^{\bf CF}))},
\]
where $\lambda$ is an open covering of $X$ and
$\{\mu_i \}_{i\geq 0}$ is a set of finite open covers of $Y$
such that
$\mu_0 = \{Y\}$, $\mu_i$ is a refinement of $\mu_{i-1}$,
and
$Y$ is the inverse limit ${\underleftarrow{{\rm lim}}}_i Y^{\bf CF}_{\mu_i}$.
Given based maps $f:X\to X'$ and $g:Y\to Y'$,
we define a map
\[
\FF(f,g) \in
\map_0(\FF(X',Y),\FF(X,Y'))
\]
as follows.
Let $\nu$ and $\gamma$ be open covering
of $X'$ and $Y'$ respectively, and
let $f^{\#}\nu =\{f^{-1}(U)~|~U\in \nu \}$ and $g^{\#}\gamma=\{g^{-1}(V)~|~V\in \gamma \}$.
Then $f^{\#}\nu$ and $g^{\#}\gamma$ are
open coverings of $X$ and $Y$ respectively.
By the definition of the nerve,
there are natural maps
$
f_\nu :X_{f^{\#}\nu}\to X^{'}_\nu
$
and
$
g_\gamma :Y^{\bf CF}_{g^{\#}\gamma}\to (Y')^{\bf CF}_\gamma .
$
Hence we have the map
\[
T(g_\gamma)^{f_\nu}:T(Y^{\bf CF}_{g^{\#}\gamma})^{X'_\nu}\to T((Y')^{\bf CF}_{\gamma})^{X_{f^{\#}\nu}}
\]
induced by $f_\nu$
and
$g_\gamma$.
Thus we can define
\[
\FF(f,g)={\underrightarrow{{\rm lim}}}_\nu {\underleftarrow{{\rm holim}}}_\gamma T(g_\gamma)^{f_\nu}
:\FF(X',Y)\to \FF(X,Y').
\]
~\\
\textbf{Theorem \ref{TH2}.}
The functor \v{F} is a bilinear enriched functor.
First we prove the bilinearity of \v{F}.
For any pointed space $Z$,
we prove that
the sequence
\[
\FF (X\cup CA,Z)\to \FF (X,Z)\to \FF (A,Z)\]
is a homotopy fibration sequence.
Let $A\to X \to X\cup CA$ be a cofibration sequence.
Let $\lambda$ be an open covering of $X\cup CA$, and
let
$\lambda_X$, $\lambda_{CA}$ and $\lambda_A$
consist of those $U\in \lambda$ that
intersect $X$, $CA$, and $A$, respectively.
We need the following lemma.
\begin{Lemma}\label{heq}
We have a homotopy equivalence
\[
(X\cup CA)_\lambda^{\bf CF}
\simeq X_{\lambda_X}^{\bf CF}\cup C(A_{\lambda_{A}}^{\bf CF}).
\]
\end{Lemma}
\begin{proof}
By the definition of the \v{C}ech nerve, we have
$ (X\cup CA)_\lambda^{\bf CF} = X_{\lambda_X}^{\bf CF}\cup (CA)_{\lambda_{CA}}^{\bf CF}$.
By the homotopy equivalence
\[
A_{\lambda_{A}}^{\bf CF}=A_{\lambda_{A}}^{\bf CF} \times \{0\} \simeq A_{\lambda_{A}}^{\bf CF}\times I
\]
where $I$ is the unit interval,
we have
\[
X_{\lambda_X}^{\bf CF}\cup (CA)_{\lambda_{CA}}^{\bf CF}
\simeq X_{\lambda_X}^{\bf CF}\cup A_{\lambda_A}^{\bf CF}\times I\cup (CA)_{\lambda_{CA}}^{\bf CF}.
\]
Since $(CA)_{\lambda_{CA}}^{\bf CF}\simeq *$, we have
\[
X_{\lambda_X}^{\bf CF}\cup A_{\lambda_A}^{\bf CF}\times I\cup (CA)_{\lambda_{CA}}^{\bf CF}
\simeq X_{\lambda_X}^{\bf CF}\cup C(A_{\lambda_{A}}^{\bf CF}).
\]
Hence we have $(X\cup CA)_\lambda^{\bf CF}
\simeq X_{\lambda_X}^{\bf CF}\cup C(A_{\lambda_{A}}^{\bf CF}).$
\end{proof}
By Proposition \ref{D} and Lemma \ref{heq},
the sequence
\[ A_{\lambda_A}\to X_{\lambda_X} \to (X\cup CA)_\lambda
\]
is a homotopy cofibration sequence.
Hence the sequence \[
[(X\cup CA)_\lambda ,Z]\to [X_{\lambda_X},Z]\to [A_{\lambda_A},Z]
\]
is an exact sequence for any $\lambda$.
Since the coverings of the form $\lambda_X$ (resp.\ $\lambda_A$) are cofinal in the set of coverings
of $X$ (resp.\ $A$),
we conclude that
the sequence
\[
\FF (X\cup CA,Z)\to \FF (X,Z)\to \FF (A,Z)\]
is a homotopy fibration sequence.
Now we show that the sequence
$\FF (Z,A)\to \FF (Z,X)\to \FF (Z,X\cup CA)$
is a homotopy fibration sequence.
By the linearity of $T$, the sequence
\[
T(A^{\bf CF}_{\lambda_A})\to T(X^{\bf CF}_{\lambda_X})\to T((X\cup CA)^{\bf CF}_\lambda)
\]
is a homotopy fibration sequence.
Since the fibre $T(A^{\bf CF}_{\lambda_{A}})$ is homeomorphic to the inverse limit
\[
{\underleftarrow{{\rm lim}}} (*\to T((X\cup CA)^{\bf CF}_\lambda)
\leftarrow T(X^{\bf CF}_{\lambda_X})),
\]
we have
\[
\displaystyle \begin{array}{l}
{\underleftarrow{{\rm lim}}} (*\to {\underleftarrow{{\rm holim}}}_{\lambda}T((X\cup CA)^{\bf CF}_\lambda)
\leftarrow {\underleftarrow{{\rm holim}}}_{\lambda_{X}}T(X^{\bf CF}_{\lambda_X}))
\\
~~ \simeq {\underleftarrow{{\rm lim}}} ~{\underleftarrow{{\rm holim}}}_{\lambda}(*\to T((X\cup CA)^{\bf CF}_\lambda)
\leftarrow T(X^{\bf CF}_{\lambda_X}))\\
~~ \simeq {\underleftarrow{{\rm holim}}}_{\lambda}{\underleftarrow{{\rm lim}}} (*\to T((X\cup CA)^{\bf CF}_\lambda)
~~ \leftarrow T(X^{\bf CF}_{\lambda_X}))\\
~~ \simeq {\underleftarrow{{\rm holim}}}_{\lambda}T(A^{\bf CF}_{\lambda_A}).\\
\end{array}
\]
This implies that
the sequence
\[
{\underleftarrow{{\rm holim}}}_{\lambda_A} T(A_{\lambda_A}^{\bf CF})\to
{\underleftarrow{{\rm holim}}}_{\lambda_X} T(X_{\lambda_X}^{\bf CF})\to
{\underleftarrow{{\rm holim}}}_{\lambda}T((X\cup CA)_\lambda^{\bf CF})
\]
is a homotopy fibration sequence,
hence so is $\FF (Z,A)\to \FF (Z,X)\to \FF (Z,X\cup CA)$.
Next we prove the continuity of $\FF$.
Let $F(X,Y)=\map_0(X,{\underleftarrow{{\rm holim}}}_{\mu_i} T(Y^{\bf CF}_{\mu_i}))$,
so that we have $\FF(X,Y)={\underrightarrow{{\rm lim}}}_\lambda F(X_\lambda, Y)$.
We need the following lemma.
\begin{Lemma}\label{TH3}
The functor $F$ is an enriched bifunctor.
\end{Lemma}
\begin{proof}
Let $F_1(Y)={\underleftarrow{{\rm holim}}}_{\mu_i} T(Y_{\mu_i}^{\bf CF})$ and
$F_2(X,Z)=\map_0(X,Z)$,
so that we have $F(X,Y) = F_2(X,F_1(Y))$.
Clearly $F_2$ is continuous.
Let $G_1$ be the functor which maps $Y$ to ${\underleftarrow{{\rm holim}}}_{\mu_i} Y_{\mu_i}^{\bf CF}$.
Since $T$ is enriched,
$F_1$ is continuous if and only if
so is $G_1$.
It suffices to show that
the map $G'_1 \colon \map_0(Y,Y') \times {\underleftarrow{{\rm holim}}}_{\mu_i} Y_{\mu_i}^{\bf CF} \to {\underleftarrow{{\rm holim}}}_{\lambda_j} (Y')_{\lambda_j}^{\bf CF}$, adjoint to $G_1$,
is continuous for any $Y$ and $Y'$.
Given an open covering $\lambda$ of $Y^{'}$,
let $p^n_{\lambda}$ be the natural map
${\underleftarrow{{\rm holim}}} _\lambda (Y')^{\bf CF}_\lambda \to \map_0(\Delta^n,(Y')^{\bf CF}_{{\lambda}})$.
Then $G'_1$ is continuous if so is the composite
\[
p^n_{\lambda}\circ G'_1 \colon \map_0(Y,Y') \times {\underleftarrow{{\rm holim}}}_{\mu_i} Y_{\mu_i}^{\bf CF} \to
\map_0(\Delta^n,(Y')^{\bf CF}_{\lambda})
\]
for every $\lambda \in \mbox{Cov}(Y')$ and every $n$.
Let $(g,\alpha) \in \map_0(Y,Y') \times {\underleftarrow{{\rm holim}}}_{\mu_i} Y^{\bf CF}_{\mu_i}$, and
let $W_{K,U} \subset \map_0(\Delta^n,(Y')^{\bf CF}_{{\lambda}})$ be an open neighborhood of
$p^n_{\lambda}(G'_1(g,\alpha))
$,
where $K$ is a compact set of $\Delta^n$ and $U$ is an
open set of $(Y')^{\bf CF}_\lambda$.
Let us choose simplices $\sigma$ of $Y_{g^{\sharp}\lambda}^{\bf CF}$
with vertices $g^{-1}(U(\sigma,k)),$
where $U(\sigma,k)\in \lambda$ for
$0 \leq k \leq \dim\sigma$.
Let
\[
\textstyle
O(\sigma) = \bigcap_{0 \leq k \leq \dim\sigma} U(\sigma,k)
\subset Y'.
\]
Let us choose a point $y_\sigma \in \bigcap_{0 \leq k \leq \dim\sigma} g^{-1}(U(\sigma,k))$,
then $g(y_\sigma )\in O(\sigma)$.
Let $W_1$ be the intersection of all $W_{y_\sigma,O(\sigma)}$.
There is an integer $l$ such that
\[
\mu_l>\overline{\mu}_l>g^{\#}\lambda
\]
where $\overline{\mu}_l$ is the closed covering $\{ \overline{V}\mid V\in \mu_l \}$
of $Y$.
Thus for any $U\in \mu_l$, there is an open set
$V_U\in g^{\#}\lambda$ such that $\overline{U}\subset g^{-1}(V_U)$.
Since $Y$ is a compact set, $\overline{U}$ is compact.
Let $W_2$ be the intersection of $W_{\overline{U},V_U}$,
and let $W=W_1\cap W_2$.
Since $\mu_l>g^{\#}\lambda$, we have
\[
p^n_\lambda (G'_1(g,\alpha))=(g_\lambda)_* (\pi^{\mu_l}_{
g^{\#}\lambda})_*p^n_{\mu_l}\alpha.
\]
where $(g_\lambda)_*$ and $(\pi^{\mu_l}_{
g^{\#}\lambda})_*$ are induced by $g_\lambda :Y_{g^{\#}\lambda}^{\bf CF}\to (Y')_{
\lambda}^{\bf CF}$ and
$\pi^{\mu_l}_{
g^{\#}\lambda}: Y_{\mu_l}^{\bf CF}\to Y_{
g^{\#}\lambda}^{\bf CF}$, respectively.
Let
\[
W'=(p^n_{\mu_l})^{-1}(W_{K,(\pi^{\mu_l}_{
g^{\#}\lambda})^{-1} (g_\lambda)^{-1}(U) }).
\]
Then $W \times W'$ is a neighborhood of $(g,\alpha)$ in
$\map_0(Y,Y') \times {\underleftarrow{{\rm holim}}}_{\mu_i} Y_{\mu_i}$.
To see that $p_{\lambda}\circ G'_1$ is continuous at $(g,\alpha)$,
we need only show that $W \times W'$ is contained in
$(p_{\lambda} \circ G'_1)^{-1}(U)$.
Suppose $(h,\beta)$ belongs to $W \times W'$.
Since $W$ is contained in $W_1$, we have
\[
\textstyle y_\sigma \in h^{-1}(O(\sigma)) \subset
\bigcap_{0 \leq k \leq \dim\sigma} h^{-1}(U(\sigma,k)).
\]
This means that the vertices $h^{-1}(U(\sigma,k)) \in h^{\sharp}\lambda$,
$0 \leq k \leq \dim\sigma$,
determine simplices $\sigma'$ of $Y_{h^{\sharp}\lambda}$
each corresponding to each
$\sigma \subset Y_{g^{\sharp}\lambda}$.
Thus we have an isomorphism
\[
s:Y_{h^{\sharp}\lambda}^{\bf CF}\to Y_{g^{\sharp}\lambda}^{\bf CF},
\]
\[
h^{-1}(U(\sigma,k))\mapsto g^{-1}(U(\sigma,k)).
\]
Moreover since $W$ is contained in $W_2$,
we have $\overline{\mu_l}>h^{\#}\lambda$.
Since the diagram
\[
\xymatrix{
Y_{\mu_l}^{\bf CF}\ar[r]\ar[rd]&Y_{g^{\#}\lambda}^{\bf CF} \ar[r]^{g_\lambda}&(Y')^{\bf CF}_\lambda\\
&Y_{h^{\#}\lambda}^{\bf CF}\ar[ru]_{h_\lambda}\ar[u]^s&
}\]
commutes,
we have the equation
\[
p^n_{\lambda}\circ G'_1(h,\beta)(K)
=
h_{\lambda}\pi^{\mu_l}_{h^{\#}\lambda}(\beta)(K)
=
g_{\lambda}\pi^{\mu_l}_{g^{\#}\lambda}(\beta)(K)
\]
Since $g_{\lambda}\pi^{\mu_l}_{g^{\#}\lambda}(\beta)(K)$
is contained in $U$, so is
$p^n_{\lambda}\circ G'_1(h,\beta)(K)$.
Thus $p^n_{\lambda}\circ G'_1$ is continuous for all
$\lambda \in \mbox{Cov}(Y')$, and hence so is
$G'_1 \colon \map_0(Y,Y') \times {\underleftarrow{{\rm holim}}}_{\mu_i} Y_{\mu_i}^{\bf CF} \to {\underleftarrow{{\rm holim}}}_{\lambda_j} (Y')_{\lambda_j}^{\bf CF}$.
\end{proof}
We are now ready to complete the proof of the continuity of $\FF$.
For given pointed spaces $X$, $Y$ and a covering $\mu$ of $X$,
let $i_{\mu}$ denote the natural map
$F(X_{\mu},Y) \to {\underrightarrow{{\rm lim}}}_{\mu}F(X_{\mu},Y)$.
To prove the theorem, it suffices to show that the map
\[
\begin{array}{ccl}
\textstyle
\FF' \circ~(1\times i_\lambda) \colon \map_0(X,X') \times F(X'_{\lambda},Y)
&\to& \map_0(X,X') \times {\underrightarrow{{\rm lim}}}_{\lambda}F(X'_{\lambda},Y)\\
& \to& {\underrightarrow{{\rm lim}}}_{\mu}F(X_{\mu},Y)
\end{array}
\]
which maps $(f,\alpha)$ to
$i_{f^{\sharp}\lambda}(F(f_\lambda ,1_Y)(\alpha))$,
is continuous for every covering $\lambda$ of $X$.
Let $\textstyle
R \colon \map_0(X,X')
\to {\underrightarrow{{\rm lim}}}_{\mu}\map_0(X_\mu,X'_\lambda)
$ be the map which assigns to $f:X\to X'$
the image of $f_\lambda \in \map_0(X_{f^\sharp\lambda}, X'_\lambda)$ in
${\underrightarrow{{\rm lim}}}_{\mu}\map_0(X_\mu,X'_\lambda)$,
and let $Q$ be the map \[
{\underrightarrow{{\rm lim}}}_{\mu}\map_0(X_\mu,X'_\lambda) \times F(X'_{\lambda},Y)
\to {\underrightarrow{{\rm lim}}}_\mu F(X_{\mu},Y),\] \[[f ,\alpha ] \mapsto i_{f^{\sharp}\lambda}f_\lambda \circ \alpha =i_{f^{\sharp}\lambda}(F(f_\lambda ,1_Y)(\alpha) ).
\]
Since we have $\FF'=Q\circ (R\times 1)$, we need only
show the continuity of $Q$ and $R$.
Since $Q$ acts by composing elements of ${\rm Im}\,R$ with elements of
$F(X'_{\lambda},Y)$, $Q$ is continuous.
To see that $R$ is continuous, let
$W_{K^f,U}$ be a neighborhood of $f_{\lambda}$ in
$\map_0(X_{f^{\sharp}\lambda},X'_\lambda)$, where
$K^f$ is a compact subset of $X_{f^{\sharp}\lambda}$ and
$U$ is an open subset of $X'_\lambda$.
Since $K^f$ is compact, there is a finite subcomplex $S^f$ of $X_{f^{\sharp}\lambda}$ such that $K^f\subset S^f$.
Let $\tau^f_i,~0\le i\le m,$ be simplexes of $S^f$.
By taking a suitable subdivision of $X_{f^{\sharp}\lambda}$,
we may assume that
there is a simplicial neighborhood $N_{\tau^f_i}$ of each
$\tau^f_i$, $1\le i \le m$, such that $K^f\subset S^f\subset \cup_i N_{\tau^f_i}\subset f_\lambda^{-1}(U)$.
Let $\{ x_k^i \}$ be the set of vertices of $\tau^f_i$ and let
$W$ be the intersection of all $W_{\{ x^i_k \},U_{(\tau^f_i)'}}$
where $U_{{\tau^f_i}'}$ is an open set of $X'_\lambda$ containing the set $\{ f(x^i_k) \}$.
Then $W$ is a neighborhood of $f$.
We need only show that $R(W)\subset i_{f^{\sharp}\lambda}(W_{{K^f},U})$.
Suppose that $g$ belongs to $W$.
Since $\{ x^i_{k}\}$ is contained in $g^{-1}(U_{({\tau^f_i})'})$ for any $i$, a simplex $\tau_i^{g}$
spanned by the vertices is contained in $X_{g^\sharp \lambda}$.
Let $S^g$ be the finite subcomplex of $X_{g^\sharp \lambda}$ consisting of the simplexes $\tau^g_i$.
By the construction, $S^f$ and $S^g$ are isomorphic.
Moreover there is a compact subset $K^g$ of $X_{g^\sharp \lambda}$ such that
$K^{g}$ and $K^f$ are homeomorphic.
On the other hand, since $g(\{ x^i_{k}\})\subset U_{({\tau^f_i})'}$, there is a simplex of $X_\lambda'$ having $g_\lambda (\tau_i^{g})$ and $({\tau^f_i})'$ as its faces.
This means that $g_\lambda (\tau_i^{g})\subset f_\lambda (\cup_i N_{\tau^f_i})$.
Thus we have $g_\lambda (K^{g})=\cup_i g_\lambda (\tau_i^{g})\subset f_\lambda (\cup_i N_{\tau^f_i})$.
Let $f^\sharp \lambda \cap g^\sharp \lambda$ be an open covering
\[
\{ f^{-1}(U)\cap g^{-1}(V) ~|~U,V\in \lambda \}
\] of $X$.
We regard $X_{f^\sharp \lambda}$ and $X_{g^\sharp \lambda}$ as a subcomplex of
$X_{f^\sharp \lambda \cap g^\sharp \lambda}$.
Since $g_\lambda |X_{f^\sharp \lambda \cap g^\sharp \lambda}$ is contiguous to $f_\lambda |X_{f^\sharp \lambda \cap g^\sharp \lambda}$,
we have a homotopy equivalence
$g_\lambda |X_{f^\sharp \lambda \cap g^\sharp \lambda} \simeq f_\lambda |X_{f^\sharp \lambda \cap g^\sharp \lambda}$.
By the homotopy extension property of $g_\lambda|X_{f^\sharp \lambda \cap g^\sharp \lambda}:X_{f^\sharp \lambda \cap g^\sharp \lambda}\to X_\lambda'$ and $f_\lambda :X_{f^\sharp \lambda} \to X_\lambda'$,
$g_\lambda|X_{f^\sharp \lambda \cap g^\sharp \lambda}$ extends to a map $G:X_{f^\sharp \lambda}\to X'_\lambda$.
We have the relation $G\sim
\pi^{f^\sharp \lambda \cap g^\sharp \lambda}_{f^\sharp \lambda}G=g_\lambda|X_{f^\sharp \lambda \cap g^\sharp \lambda}=\pi^{f^\sharp \lambda \cap g^\sharp \lambda}_{g^\sharp \lambda}g_\lambda \sim g_\lambda$, where
$\sim$ is the relation of the direct limit.
Moreover by $G(K^f)\subset f_\lambda (\cup_i N_{\tau^f_i}) \subset U$,
we have $[g_\lambda]=[G] \in i_{f^{\sharp} \lambda}(W_{K^f,U})$.
Hence $R$ is continuous, and so is $\FF'$.
\section{Proofs of Theorems \ref{Sp} and \ref{Cor1}}
To prove Theorems \ref{Sp} and \ref{Cor1}, we need several lemmas.
\begin{Lemma}\label{open}
There exists a sequence $\lambda^n_1<\lambda^n_2 <\cdots
<\lambda^n_m<\cdots$ of open coverings of $S^n$ such that :
\begin{enumerate}
\item For each open covering $\mu$ of $S^n$,
there is an $m\in \Bbb{N}$ such that $\lambda^n_m$ is a refinement of $\mu$;
\item For any $m$, $S^n_{\lambda^n_m}$ is homotopy equivalent to $S^n$.
\end{enumerate}
\end{Lemma}
\begin{proof}
We prove by induction on $n$.
For $n=1$, we define an open covering $\lambda^1_m$
of $S^1$ as follows.
For any $i$ with $0\leq i <4m$, we put
\[
U(i,m)=\{ (\cos \theta ,~\sin \theta )\mid \frac{i-1}{4m}\times 2\pi +
\frac{1}{16m}\times 2\pi <\theta < \frac{i+1}{4m}\times 2\pi +\frac{1}{16m}
\times 2\pi \}.
\]
Let $\lambda^1_m=\{U(i,m)|~0\leq i <4m \}$.
Then the set $\lambda^1_m$ is an open covering of $S^1$ and
is a refinement of $\lambda^1_{m-1}$.
Clearly $(S^1)^{\bf CF}_{\lambda^1_m}$ is homeomorphic to $S^1$, hence $S^1_{\lambda^1_m}$ is homotopy equivalent to $S^1$.
Moreover for any open covering $\mu$ of $S^1$,
there exists an $m$ such that $\lambda^{1}_{m}$ is a refinement of $\mu$.
Hence the lemma is true for $n= 1$.
Assume now that the lemma is true for $1\le k\le n-1$. Let
${\lambda'}^n_m$ be the open covering $\lambda^{n-1}_m\times
\lambda^{1}_m $
of $S^{n-1}\times S^1$ and
let $\lambda_m^{n}$ be the open covering of
$S^n$ induced by the natural map
$p:S^{n-1}\times S^1 \to S^{n-1}
\times S^1/S^{n-1}\vee S^1$.
Since
$S^{n-1}_{\lambda^{n-1}_m}$ is homotopy equivalent to $S^{n-1}$,
we have
\[
S^n_{\lambda^n_{m} }\simeq (S^{n-1}\times S^1/S^{n-1}\vee S^1)_{\lambda^n_m}
\simeq (S^{n-1}_{\lambda_m^{n-1}}
\times S^1_{\lambda^1_m})/(S^{n-1}_{\lambda^{n-1}_{m}}\vee S^1_{\lambda^1_m})
\simeq S^n.
\]
Thus the sequence $\lambda_1^n <\lambda_2^n <\cdots <
\lambda_m^n <\cdots $ satisfies the required conditions.
\end{proof}
\begin{Lemma}
$h_n(X, \FF) \cong \pi_n{\underleftarrow{{\rm holim}}}_\mu T(X_\mu^{\bf CF}) $ for $n\geq 0$.
\end{Lemma}
\begin{proof}
By Lemma \ref{open}, we have an isomorphism
\[{\underrightarrow{{\rm lim}}}_\lambda [S^n_{\lambda},{\underleftarrow{{\rm holim}}}_\mu T(X_{\mu}^{\bf CF})]\\
\cong [S^n,{\underleftarrow{{\rm holim}}}_\mu T(X_{\mu}^{\bf CF})].\]
Thus we have
\[
\displaystyle \begin{array}{ccl}
h_n(X,\FF)&=&\pi_0\FF (S^n,X)\\
&=&\pi_0{\underrightarrow{{\rm lim}}}_\lambda \map_0( S^n_{\lambda},{\underleftarrow{{\rm holim}}}_\mu T(X_{\mu}^{\bf CF}))\\
&\cong& {\underrightarrow{{\rm lim}}}_\lambda [S^0,\map_{0}(S^n_{\lambda},{\underleftarrow{{\rm holim}}}_\mu T(X_{\mu}^{\bf CF})]\\
&\cong& {\underrightarrow{{\rm lim}}}_\lambda [S^n_{\lambda},{\underleftarrow{{\rm holim}}}_\mu T(X_{\mu}^{\bf CF})]\\
&\cong& [S^n,{\underleftarrow{{\rm holim}}}_\mu T(X_{\mu}^{\bf CF})]\\
&\cong& \pi_n{\underleftarrow{{\rm holim}}}_\mu T(X_{\mu}^{\bf CF}).
\end{array}
\]
\end{proof}
Now we are ready to prove Theorem \ref{Sp}.
Let $X$ be a compact metric space and let $\Bbb{S}=\{ T(S^k)~|~k\geq 0\}$.
Since $X$ is a compact metric space,
there is a sequence
$\{\mu_i \}_{i\geq 0}$ of finite open covers of $X$ with $\mu_0=\{X\}$
and $\mu_i$ refining $\mu_{i-1}$ such that $X={\underleftarrow{{\rm lim}}}_iX^{\bf CF}_{\mu_i}$ holds.
Let us denote $X_{\mu_i}^{\bf CF}$ by $X_i^{\bf CF}$
and $X_{\mu_i}$ by $X_i$
if there is no possibility of
confusion.
According to \cite{F}, there is a short exact sequence
\[
\xymatrix{
0 \ar[r] &{\underleftarrow{{\rm lim}}}^1_{i} {H}_{n+1}(X_i^{\bf CF},\Bbb{S}) \ar[r] &
H^{st}_n(X,\Bbb{S}) \ar[r]
& {\underleftarrow{{\rm lim}}}_i{H}_{n}(X_i^{\bf CF},\Bbb{S}) \ar[r] & 0 \\
}
\]
where $H_n(X,\Bbb{S})$ is the homology group of $X$ with coefficients in the spectrum $\Bbb{S}$.
(This is a special case of the Milnor exact sequence \cite{milnor}.)
On the other hand, by \cite{bous}, we have the following.
\begin{Lemma}\label{bous}$ ($\cite{bous}$) $
There is a natural short exact sequence
\[
\xymatrix{
0 \ar[r] &{\underleftarrow{{\rm lim}}}^1_{i}~ \pi_{n+1}T(X_i^{\bf CF}) \ar[r] & \pi_n{\underleftarrow{{\rm holim}}}_i T(X_i^{\bf CF}) \ar[r]
&{\underleftarrow{{\rm lim}}}_{i} ~\pi_{n}T(X_i^{\bf CF}) \ar[r] & 0. \\
}
\]
\end{Lemma}
By Proposition \ref{numerical},
we have a diagram\\
\begin{equation}
\small{\xymatrix{
0 \ar[r] &{\underleftarrow{{\rm lim}}}^1_{i}~ H_{n+1}(X_i^{\bf CF},\Bbb{S}) \ar[d]^\cong \ar[r] &
H^{st}_n(X,\Bbb{S}) \ar[r]
&{\underleftarrow{{\rm lim}}}_{i} ~H_{n}(X_i^{\bf CF},\Bbb{S}) \ar[r] \ar[d]^\cong & 0\\
0 \ar[r] &{\underleftarrow{{\rm lim}}}^1_{i}~ \pi_{n+1}(T(X_i^{\bf CF}))
\ar[r] & \pi_n({\underleftarrow{{\rm holim}}}_{i}T(X_i^{\bf CF})) \ar[r]
&{\underleftarrow{{\rm lim}}}_{i} ~\pi_{n}(T(X_i^{\bf CF})) \ar[r] & 0. \\
}}
\end{equation}
Hence it suffices to construct a natural homomorphism
\[
H^{st}_n(X,\Bbb{S})
\to \pi_{n}({\underleftarrow{{\rm holim}}}_iT(X_i^{\bf CF}))
\]
making the diagram (4.1) commutative.
Since $T$ is continuous, the identity map $X\wedge S^k\to X\wedge S^k$
induces a continuous map $i':X\wedge T(S^k) \to T(X\wedge S^k)$.
Hence we have the composite homomorphism
\[
\displaystyle \begin{array}{ccl}
H^{st}_n(X,\Bbb{S}) &=& \pi_{n}\underleftarrow{{\rm holim}}_i(X_{i}^{\bf CF}\wedge \Bbb{S})\\
&\cong& {\underrightarrow{{\rm lim}}}_k \pi_{n+k}({\underleftarrow{{\rm holim}}}_i(X_i^{\bf CF}\wedge T(S^k))) \\
&\xrightarrow{I}&{\underrightarrow{{\rm lim}}}_k \pi_{n+k}({\underleftarrow{{\rm holim}}}_iT(X_i^{\bf CF}\wedge S^k))\\
&\cong& \pi_{n}({\underleftarrow{{\rm holim}}}_iT(X_i^{\bf CF}))\\
\end{array}
\]
in which $I={\underrightarrow{{\rm lim}}}_k{i'}^k_*$ is induced by the homomorphisms
\[{i'}^k_*:\pi_{n+k}({\underleftarrow{{\rm holim}}}_i(X_i^{\bf CF}\wedge T(S^k)))\to \pi_{n+k}({\underleftarrow{{\rm holim}}}_iT(X_i^{\bf CF}\wedge S^k)).\]
Clearly the homomorphism
$H^{st}_n(X,\Bbb{S})\to \pi_{n}({\underleftarrow{{\rm holim}}}_iT(X_i^{\bf CF}))
$
makes the diagram (4.1) commutative.
Thus $h_n(X,\FF)$ is isomorphic to the Steenrod homology group with coefficients
in the spectrum $\Bbb{S}$.
Finally, to prove Theorem 3 it suffices to show that $h^n(X,\check{C})$ is isomorphic to the \v{C}ech cohomology group
of $X$.
By Lemma \ref{open},
we have a homotopy commutative diagram
\[
\xymatrix{
\cdots \ar[r]^= & AG(S^n) \ar[r]^= \ar[d]^\simeq & AG(S^n)
\ar[r]^= \ar[d]^\simeq&\cdots \\
\cdots \ar[r] & AG(S^n_{\lambda^n_{m-1}}) \ar[r]^\simeq & AG(S^n_{\lambda^n_{m}})
\ar[r] &\cdots .\\
}
\]
Hence we have $AG(S^n)\simeq {\underleftarrow{{\rm holim}}}_i AG(S^n_{\lambda_i^n})$.
Thus we have
\[
\displaystyle \begin{array}{ccl}
h^n(X,\check{C})&=&\pi_0\check{C}(X,S^n)\\
&=&\pi_0{\underrightarrow{{\rm lim}}}_\lambda \map_0(X_{\lambda},{\underleftarrow{{\rm holim}}}_\mu AG((S^n)^{\bf CF}_{\mu}))\\
&\cong& [S^0,{\underrightarrow{{\rm lim}}}_\lambda \map_0(X_{\lambda},AG(S^n))]\\
&\cong& {\underrightarrow{{\rm lim}}}_\lambda [S^0,\map_{0}(X_{\lambda},AG(S^n))]\\
&\cong& {\underrightarrow{{\rm lim}}}_\lambda [S^0\wedge X_{\lambda},AG(S^n)]\\
&\cong& {\underrightarrow{{\rm lim}}}_{\lambda} [X_{\lambda},AG(S^n)].
\end{array}
\]
Hence $h^n(X,\check{C})$ is isomorphic to the \v{C}ech cohomology group.
\end{document} |
\begin{document}
\title{Ap\'ery Limits: Experiments and Proofs}
\dedicatory{Dedicated to the memory of Jon Borwein}
\author{
Marc Chamberland
\thanks{Grinnell College, \email{[email protected]}}
\and
Armin Straub
\thanks{University of South Alabama, \email{[email protected]}}
}
\date{November 6, 2020}
\maketitle
\begin{abstract}
An important component of Ap\'ery's proof that $\zeta (3)$ is irrational
involves representing $\zeta (3)$ as the limit of the quotient of two
rational solutions to a three-term recurrence. We present various approaches
to such Ap\'ery limits and highlight connections to continued fractions as
well as the famous theorems of Poincar\'e and Perron on difference
equations. In the spirit of Jon Borwein, we advertise an
experimental-mathematics approach by first exploring in detail a simple but
instructive motivating example. We conclude with various open problems.
\end{abstract}
\section{Introduction}
A fundamental ingredient of Ap\'ery's groundbreaking proof \cite{apery} of
the irrationality of $\zeta (3)$ is the binomial sum
\begin{equation}
A (n) = \sum_{k = 0}^n \binom{n}{k}^2 \binom{n + k}{k}^2 \label{eq:apery3}
\end{equation}
and the fact that it satisfies the three-term recurrence
\begin{equation}
(n + 1)^3 u_{n + 1} = (2 n + 1) (17 n^2 + 17 n + 5) u_n - n^3 u_{n - 1}
\label{eq:apery3:rec}
\end{equation}
with initial conditions $A (0) = 1$, $A (1) = 5$ --- or, equivalently but more
naturally, $A (- 1) = 0$, $A (0) = 1$. Now let $B (n)$ be the solution to
{\eqref{eq:apery3:rec}} with $B (0) = 0$ and $B (1) = 1$. Ap\'ery showed
that
\begin{equation}
\lim_{n \rightarrow \infty} \frac{B (n)}{A (n)} = \frac{\zeta (3)}{6}
\label{eq:apery3:lim}
\end{equation}
and that the rational approximations resulting from the left-hand side
converge too rapidly to $\zeta (3)$ for $\zeta (3)$ itself to be rational. For
details, we recommend the engaging account \cite{alf} of Ap\'ery's proof.
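The recurrence and the limit above are easy to explore computationally. The following Python sketch (the helper name and the use of exact rational arithmetic are our choices, not part of Ap\'ery's argument) iterates the recurrence for both solutions and evaluates $6 B(n)/A(n)$:

```python
from fractions import Fraction

def apery_solutions(N):
    """Iterate (n+1)^3 u_{n+1} = (2n+1)(17n^2+17n+5) u_n - n^3 u_{n-1}
    for the primary solution A (A(0)=1, A(1)=5) and the secondary
    solution B (B(0)=0, B(1)=1), using exact rational arithmetic."""
    A = [Fraction(1), Fraction(5)]
    B = [Fraction(0), Fraction(1)]
    for n in range(1, N):
        c = (2*n + 1) * (17*n*n + 17*n + 5)
        A.append((c*A[n] - n**3 * A[n-1]) / (n + 1)**3)
        B.append((c*B[n] - n**3 * B[n-1]) / (n + 1)**3)
    return A, B

A, B = apery_solutions(20)
print(float(6 * B[20] / A[20]))  # ≈ 1.2020569031595942 ≈ ζ(3)
```

Already at $n = 20$ the quotient agrees with $\zeta(3)$ far beyond double precision, reflecting the rapid convergence underlying the irrationality proof.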
In the sequel, we will not pursue questions of irrationality further. Instead,
our focus will be on limits, like {\eqref{eq:apery3:lim}}, of quotients of
solutions to linear recurrences. Such limits are often called {\emph{Ap\'ery
limits}} \cite{avz-apery-limits}, \cite{yang-apery-limits}.
Jon Borwein was a tireless advocate and champion of experimental mathematics
and applied it with fantastic success. Jon was also a pioneer of teaching
experimental mathematics, whether through numerous books, such as
\cite{bb-exp1}, or in the classroom (the second author is grateful for the
opportunity to benefit from both). Before collecting known results on
Ap\'ery limits and general principles, we therefore find it fitting to
explore in detail, in Section~\ref{sec:delannoy}, a simple but instructive
example using an experimental approach. We demonstrate how to discover the
desired Ap\'ery limit; and we show, even more importantly, how the
exploratory process naturally leads us to discover additional structure that
is helpful in understanding this and other such limits. We hope that the
detailed discussion in Section~\ref{sec:delannoy} may be particularly useful
to those seeking to integrate experimental mathematics into their own
teaching.
After suggesting further examples in Section~\ref{sec:search}, we explain the
observations made in Section~\ref{sec:delannoy} by giving in
Section~\ref{sec:diffeq} some background on difference equations, introducing
the Casoratian and the theorems of Poincar\'e and Perron. In
Section~\ref{sec:cf}, we connect with continued fractions and observe that,
accordingly translated, many of the simpler examples are instances of
classical results in the theory of continued fractions. We then outline in
Section~\ref{sec:pf} several methods used in the literature to establish
Ap\'ery limits. To illustrate the limitations of these approaches, we
conclude with several open problems in Sections~\ref{sec:franel:d} and
\ref{sec:open}.
Creative telescoping --- including, for instance, Zeilberger's algorithm and
the Wilf--Zeilberger (WZ) method --- refers to a powerful set of tools that,
among other applications, allow us to algorithmically derive the recurrence
equations, like {\eqref{eq:apery3:rec}}, that are satisfied by a given sum,
like {\eqref{eq:apery3}}. In fact, as described in \cite{alf}, Zagier's
proof of Ap\'ery's claim that the sums {\eqref{eq:apery3}} and
{\eqref{eq:apery3:2}} both satisfy the recurrence {\eqref{eq:apery3:rec}} may
be viewed as giving birth to the modern method of creative telescoping. For an
excellent introduction we refer to \cite{aeqb}. In the sequel, all claims
that certain sums satisfy a recurrence can be established using creative
telescoping.
\section{A motivating example}\label{sec:delannoy}
At the end of van der Poorten's account \cite{alf} of Ap\'ery's proof, the
reader is tasked with the exercise to consider the sequence
\begin{equation}
A (n) = \sum_{k = 0}^n \binom{n}{k} \binom{n + k}{k} \label{eq:delannoy}
\end{equation}
and to apply to it Ap\'ery's ideas to conclude the irrationality of $\ln
(2)$. In this section, we will explore this exercise with an experimental
mindset but without using the general tools and connections described later in
the paper. In particular, we hope that the outline below could be handed out
in an undergraduate experimental-math class and that the students could (with
some help, depending on their familiarity with computer algebra systems)
reproduce the steps, feel intrigued by the observations along the way, and
then apply by themselves a similar approach to explore variations or
extensions of this exercise. Readers familiar with the topic may want to skip
ahead.
The numbers {\eqref{eq:delannoy}} are known as the central Delannoy numbers
and count lattice paths from $(0, 0)$ to $(n, n)$ using the steps $(0, 1)$,
$(1, 0)$ and $(1, 1)$. They satisfy the recurrence
\begin{equation}
(n + 1) u_{n + 1} = 3 (2 n + 1) u_n - n u_{n - 1} \label{eq:delannoy:rec}
\end{equation}
with initial conditions $A (- 1) = 0$, $A (0) = 1$. Now let $B (n)$ be the
sequence satisfying the same recurrence with initial conditions $B (0) = 0$,
$B (1) = 1$. Numerically, we observe that the quotients $Q (n) = B (n) / A
(n)$,
\begin{equation*}
(Q (n))_{n \geq 0} = \left(0, \frac{1}{3}, \frac{9}{26},
\frac{131}{378}, \frac{445}{1284}, \frac{34997}{100980},
\frac{62307}{179780}, \frac{2359979}{6809460}, \ldots \right),
\end{equation*}
appear to converge rather quickly to a limit
\begin{equation*}
L := \lim_{n \rightarrow \infty} Q (n) = 0.34657359 \ldots
\end{equation*}
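These data are easily reproduced; the short Python sketch below (function name ours) iterates the recurrence with exact rational arithmetic:

```python
from fractions import Fraction

def delannoy_quotients(N):
    """Quotients Q(n) = B(n)/A(n) for (n+1) u_{n+1} = 3(2n+1) u_n - n u_{n-1}
    with A(0) = 1, A(1) = 3 and B(0) = 0, B(1) = 1."""
    A = [Fraction(1), Fraction(3)]
    B = [Fraction(0), Fraction(1)]
    for n in range(1, N):
        A.append((3*(2*n + 1)*A[n] - n*A[n-1]) / (n + 1))
        B.append((3*(2*n + 1)*B[n] - n*B[n-1]) / (n + 1))
    return [b / a for a, b in zip(A, B)]

Q = delannoy_quotients(7)
print(Q)  # matches the quotients listed above
```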
When we try to estimate the speed of convergence by computing the difference
$Q (n) - Q (n - 1)$ of consecutive terms, we find
\begin{equation*}
(Q (n) - Q (n - 1))_{n \geq 1} = \left(\frac{1}{3}, \frac{1}{78},
\frac{1}{2457}, \frac{1}{80892}, \frac{1}{2701215}, \frac{1}{90770922},
\ldots \right) .
\end{equation*}
This suggests the probably-unexpected fact that these are all reciprocals of
integers. Something interesting must be going on here! However, a cursory
look-up of the denominators in the {\emph{On-Line Encyclopedia of
Integer Sequences}} (OEIS) \cite{oeis} does not result in a match. (Were we
to investigate the factorizations of these integers, we might at this point
discover the case $x = 1$ of {\eqref{eq:delannoy:x:Qdiff}}. But we hold off on
exploring that observation and continue to focus on the speed of convergence.)
By, say, plotting the logarithm of $Q (n) - Q (n - 1)$ versus $n$, we are led
to realize that the number of digits to which $Q (n - 1)$ and $Q (n)$ agree
appears to increase (almost perfectly) linearly. This means that $Q (n)$
converges to $L$ exponentially.
\begin{exercise*}
For a computational challenge, quantify the exponential convergence by
conjecturing an exact value for the limit of $(Q (n + 1) - Q (n)) / (Q (n) -
Q (n - 1))$ as $n \rightarrow \infty$. Then connect that value to the
recurrence {\eqref{eq:delannoy:rec}}.
\end{exercise*}
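For readers wanting to generate data for this exercise, the following sketch (exact arithmetic, so no precision issues distort the ratios) computes the ratios of successive differences without giving away the answer:

```python
from fractions import Fraction

# Generate data for the exercise: ratios of successive differences Q(n) - Q(n-1)
# for the Delannoy recurrence (n+1) u_{n+1} = 3(2n+1) u_n - n u_{n-1}.
A = [Fraction(1), Fraction(3)]
B = [Fraction(0), Fraction(1)]
for n in range(1, 30):
    A.append((3*(2*n + 1)*A[n] - n*A[n-1]) / (n + 1))
    B.append((3*(2*n + 1)*B[n] - n*B[n-1]) / (n + 1))
Q = [b / a for a, b in zip(A, B)]
d = [Q[n] - Q[n-1] for n in range(1, 30)]
ratios = [float(d[n+1] / d[n]) for n in range(28)]
print(ratios[-5:])  # the ratios appear to converge; what is the limit?
```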
At this point, we are confident that, say,
\begin{equation}
Q (50) = 0.34657359027997265470861606072908828403775006718 \ldots
\label{eq:delannoy:Q50}
\end{equation}
agrees with the limit $L$ to more than $75$ digits. The ability to recognize
constants from numerical data is a powerful asset in an experimental
mathematician's toolbox. Several approaches to constant recognition are
lucidly described in \cite[Section~6.3]{bb-exp1}. The crucial ingredients
are integer relation algorithms such as PSLQ or those based on lattice
reduction algorithms like LLL. Readers new to constant recognition may find
the {\emph{Inverse Symbolic Calculator}} of particular value. This web
service, created by Jon Borwein, Peter Borwein and Simon Plouffe, automates
the constant-recognition process: it asks for a numerical approximation as
input and determines, if successful, a suggested exact value. For instance,
given {\eqref{eq:delannoy:Q50}}, it suggests that
\begin{equation*}
L = \frac{1}{2} \ln (2),
\end{equation*}
which one can then easily confirm further to any desired precision. Of course,
while this provides overwhelming evidence, it does not constitute a proof.
Given the success of our exploration, a natural next step would be to repeat
this inquiry for the sequence of polynomials
\begin{equation}
A_x (n) = \sum_{k = 0}^n \binom{n}{k} \binom{n + k}{k} x^k,
\label{eq:delannoy:x}
\end{equation}
which satisfies the recurrence {\eqref{eq:delannoy:rec}} with the term $3 (2 n
+ 1)$ replaced by $(2 x + 1) (2 n + 1)$. An important principle to keep in
mind here is that introducing an additional parameter, like the $x$ in
{\eqref{eq:delannoy:x}}, can make the underlying structure more apparent; and
this may be crucial both for guessing patterns and for proving our assertions.
Now define the secondary solution $B_x (n)$ satisfying the recurrence with
$B_x (0) = 0$, $B_x (1) = 1$. Then, if we compute the difference of quotients
$Q_x (n) = B_x (n) / A_x (n)$ as before, we find that
\begin{equation*}
(Q_x (n) - Q_x (n - 1))_{n \geq 1} = \left(\frac{1}{1 + 2 x},
\frac{1}{2 (1 + 2 x) (1 + 6 x + 6 x^2)}, \ldots \right) .
\end{equation*}
Extending our earlier observation, these now appear to be the reciprocals of
polynomials with integer coefficients. Moreover, in factored form, we are
immediately led to conjecture that
\begin{equation}
Q_x (n) - Q_x (n - 1) = \frac{1}{n A_x (n) A_x (n - 1)} .
\label{eq:delannoy:x:Qdiff}
\end{equation}
Note that, since $Q_x (0) = 0$, this implies
\begin{equation}
Q_x (N) = \sum_{n = 1}^N (Q_x (n) - Q_x (n - 1)) = \sum_{n = 1}^N \frac{1}{n
A_x (n) A_x (n - 1)} \label{eq:delannoy:x:Qsum}
\end{equation}
and hence provides another way to compute the limit $L_x = \lim_{n \rightarrow
\infty} Q_x (n)$ as
\begin{equation*}
L_x = \sum_{n = 1}^{\infty} \frac{1}{n A_x (n) A_x (n - 1)},
\end{equation*}
which avoids reference to the secondary solution $B_x (n)$.
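The conjectured identity {\eqref{eq:delannoy:x:Qdiff}} is easy to test empirically for many values of $x$ and $n$ at once; the sketch below (an empirical check only, not a proof) uses exact rational arithmetic:

```python
from fractions import Fraction

def check_Qdiff(x, N):
    """Empirically check Q_x(n) - Q_x(n-1) == 1/(n A_x(n) A_x(n-1)) for the
    recurrence (n+1) u_{n+1} = (2x+1)(2n+1) u_n - n u_{n-1}."""
    x = Fraction(x)
    A = [Fraction(1), 2*x + 1]      # A_x(0) = 1, A_x(1) = 1 + 2x
    B = [Fraction(0), Fraction(1)]  # B_x(0) = 0, B_x(1) = 1
    for n in range(1, N):
        A.append(((2*x + 1)*(2*n + 1)*A[n] - n*A[n-1]) / (n + 1))
        B.append(((2*x + 1)*(2*n + 1)*B[n] - n*B[n-1]) / (n + 1))
    return all(B[n]/A[n] - B[n-1]/A[n-1] == Fraction(1) / (n * A[n] * A[n-1])
               for n in range(1, N))

print(all(check_Qdiff(x, 12) for x in (1, 2, 3, Fraction(5, 7))))  # True
```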
Can we experimentally identify the limit $L_x$? One approach could be to
select special values for $x$ and then proceed as we did for $x = 1$. For
instance, we might numerically compute and then identify the following values:
\begin{equation*}
\renewcommand{\arraystretch}{1.3}
\begin{array}{|l|l|l|l|}
\hline
& x = 1 & x = 2 & x = 3\\
\hline
L_x & \tfrac{1}{2} \ln (2) & \tfrac{1}{2} \ln \left(\tfrac{3}{2} \right)
& \tfrac{1}{2} \ln \left(\tfrac{4}{3} \right)\\
\hline
\end{array}
\end{equation*}
We are lucky and the emerging pattern is transparent, suggesting that
\begin{equation}
L_x = \frac{1}{2} \ln \left(1 + \frac{1}{x} \right) .
\label{eq:delannoy:x:L}
\end{equation}
A possibly more robust approach to identifying $L_x$ empirically is to fix
some values of $n$ and then expand the $Q_x (n)$, which are rational functions
in $x$, into power series. If the initial terms of these power series appear
to converge as $n \rightarrow \infty$ to identifiable values, then it is
reasonable to expect that these values are the initial terms of the power
series for the limit $L_x$. However, expanding around $x = 0$, we quickly
realize that the power series
\begin{equation*}
Q_x (n) = \sum_{k = 0}^{\infty} q_k^{(n)} x^k
\end{equation*}
do not stabilize as $n \rightarrow \infty$, but that the coefficients increase
in size: for instance, we find empirically that
\begin{equation*}
q_1^{(n)} = - n (n + 1), \quad q_2^{(n)} = \frac{1}{8} n (n + 1) (5 n^2 + 5
n + 6),
\end{equation*}
and it appears that, for $k \geq 1$, $q_k^{(n)}$ is a polynomial in $n$
of degree $2 k$. Expanding the $Q_x (n)$ instead around some nonzero value of
$x$ --- say, $x = 1$ --- is more promising. Writing
\begin{equation*}
Q_x (n) = \sum_{k = 0}^{\infty} r_k^{(n)} (x - 1)^k,
\end{equation*}
we observe empirically that
\begin{equation*}
\left( \lim_{n \rightarrow \infty} r_k^{(n)} \right)_{k \geq 1} = \left(-
\frac{1}{4}, \frac{3}{16}, - \frac{7}{48}, \frac{15}{128}, \ldots \right) .
\end{equation*}
Once we realize that the denominators are multiples of $k$, it is not
difficult to conjecture that
\begin{equation}
\lim_{n \rightarrow \infty} r_k^{(n)} = (- 1)^k \frac{2^k - 1}{k \cdot 2^{k
+ 1}} \label{eq:delannoy:x:Q:c1}
\end{equation}
for $k \geq 1$. From our initial exploration, we already know that
$\lim_{n \rightarrow \infty} r_0^{(n)} = \frac{1}{2} \ln (2)$ but we could
also have (re)guessed this value as the formal limit of the right-hand side of
{\eqref{eq:delannoy:x:Q:c1}} as $k \rightarrow 0$ (forgetting that $k$ is
really an integer). Anyway, {\eqref{eq:delannoy:x:Q:c1}} suggests that
\begin{equation*}
L_x = L_1 + \sum_{k = 1}^{\infty} (- 1)^k \frac{2^k - 1}{k \cdot 2^{k + 1}}
\, (x - 1)^k = \frac{1}{2} \ln (2) + \frac{1}{2} \ln \left(\frac{x + 1}{2
x} \right),
\end{equation*}
leading again to {\eqref{eq:delannoy:x:L}}. Finally, our life is easiest if we
look at the power series of $Q_x (n)$ expanded around $x = \infty$. In that
case, we find that the power series of $Q_x (n)$ and $Q_x (n + 1)$ actually
agree through order $x^{- 2 n}$. In hindsight --- and to expand our
experimental reach, it is always a good idea to reflect on the new data in
front of us --- this is a consequence of {\eqref{eq:delannoy:x:Qdiff}} and the
fact that $A_x (n)$ has degree $n$ in $x$ (so that $A_x (n) A_x (n - 1)$ has
degree $2 n - 1$). Therefore, from just the case $n = 3$ we are confident that
\begin{equation*}
L_x = Q_x (3) + O (x^{- 7}) = \frac{1}{2 x} - \frac{1}{4 x^2} + \frac{1}{6
x^3} - \frac{1}{8 x^4} + \frac{1}{10 x^5} - \frac{1}{12 x^6} + O (x^{- 7})
.
\end{equation*}
At this point the pattern is evident, and we arrive, once more, at the
conjectured formula {\eqref{eq:delannoy:x:L}} for $L_x$.
\section{Searching for Ap\'ery limits}\label{sec:search}
Inspired by the approach laid out in the previous section, one can search for
other Ap\'ery limits as follows:
\begin{enumerate}
\item Pick a binomial sum $A (n)$ and, using creative telescoping, compute a
recurrence satisfied by $A (n)$.
\item Compute the initial terms of a secondary solution $B (n)$ to the
recurrence.
\item Try to identify $\lim_{n \rightarrow \infty} B (n) / A (n)$ (either
numerically or as a power series in an additional parameter).
\end{enumerate}
It is important to realize, as will be indicated in Section~\ref{sec:cf}, that
if the binomial sum $A (n)$ satisfies a three-term recurrence, then the
Ap\'ery limit can be expressed as a continued fraction and compared to the
(rather staggering) body of known results \cite{wall-contfrac},
\cite{jt-contfrac}, \cite{handbook-contfrac}, \cite{bvsz-cf}.
Of course, the final step is to prove and/or generalize those discovered
results that are sufficiently appealing. One benefit of an experimental
approach is that we can discover results, connections and generalizations, as
well as discard less-fruitful avenues, before (or while!) working out a
complete proof. Ideally, the processes of discovery and proof inform each
other at every stage. For instance, experimentally finding a generalization
may well lead to a simplified proof, while understanding a small piece of a
puzzle can help set the direction of follow-up experiments. Jon Borwein's
extensive legacy is filled with such delightful examples.
Of course, one could just start with a recurrence; however, selecting a
binomial sum increases the odds that the recurrence has desirable properties
(it is a difficult open problem to ``invert creative telescoping'' in the
sense of producing a binomial sum satisfying a given recurrence). Some simple
suggestions for binomial sums, as well as the corresponding Ap\'ery limits,
are as follows (in each case, we choose the secondary solution with initial
conditions $B (0) = 0$, $B (1) = 1$).
\begin{equation*}
\setlength{\extrarowheight}{7pt}
\renewcommand{\arraystretch}{1.3}
\begin{array}{|>{\displaystyle}l|>{\displaystyle}l|}
\hline
\sum_{k = 0}^n \binom{n}{2 k} x^k & \frac{1}{\sqrt{x}} \quad
\text{(around $x = 1$)}\\
\hline
\sum_{k = 0}^n \binom{n - k}{k} x^k & \frac{2}{1 + \sqrt{1 + 4 x}} \quad
\text{(around $x = 0$)}\\
\hline
\sum_{k = 0}^n \binom{n}{k} \binom{n - k}{k} x^k & \frac{\arctan \left(\sqrt{4 x - 1} \right)}{\sqrt{4 x - 1}} \quad \text{(around $x =
\tfrac{1}{4}$)}\\
\hline
\end{array}
\end{equation*}
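As a quick sanity check, the second row of the table can be verified numerically. The sum $\sum_k \binom{n-k}{k} x^k$ satisfies the Fibonacci-type recurrence $u_{n+1} = u_n + x\, u_{n-1}$, and a floating-point sketch (function name ours) compares the quotients against the claimed limit:

```python
import math

def fib_apery_limit(x, N=80):
    """B(N)/A(N) for u_{n+1} = u_n + x u_{n-1} with A(0) = A(1) = 1 and
    B(0) = 0, B(1) = 1; expected limit: 2/(1 + sqrt(1 + 4x))."""
    a0, a1, b0, b1 = 1.0, 1.0, 0.0, 1.0
    for _ in range(1, N):
        a0, a1 = a1, a1 + x*a0
        b0, b1 = b1, b1 + x*b0
    return b1 / a1

for x in (1, 2, 3):
    print(x, fib_apery_limit(x), 2 / (1 + math.sqrt(1 + 4*x)))
```

For $x = 1$ this recovers the familiar limit $F(n)/F(n+1) \to 2/(1+\sqrt{5})$ of quotients of Fibonacci numbers.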
\begin{example}
\label{eg:arctan}Setting $x = 1 / 2$ in the last instance leads to the limit
being $\arctan (1) = \pi / 4$ and therefore to a way of computing $\pi$ as
\begin{equation*}
\pi = \lim_{n \rightarrow \infty} \frac{4 B (n)}{A (n)},
\end{equation*}
where $A (n)$ and $B (n)$ both solve the recurrence $(n + 1) u_{n + 1} = (2
n + 1) u_n + n u_{n - 1}$ with $A (- 1) = 0$, $A (0) = 1$ and $B (0) = 0$,
$B (1) = 1$. In an experimental-math class, this could be used to segue into
the fascinating world of computing $\pi$, a topic to which Jon Borwein,
sometimes admiringly referred to as Dr.~Pi, has contributed so much --- one
example being the groundbreaking work in \cite{borwein-piagm} with his
brother Peter. Let us note that this is not a new result. Indeed, with the
substitution $z = \sqrt{4 x - 1}$, it follows from the discussion in
Section~\ref{sec:cf} that the Ap\'ery limit in question is equivalent to
the well-known continued fraction
\begin{equation*}
\arctan (z) = \frac{z}{1 +} \, \frac{1^2 z^2}{3 +} \, \frac{2^2 z^2}{5 +}
\, \cdots \, \frac{n^2 z^2}{(2 n + 1) +} \, \cdots
\end{equation*}
\cite[p.~343, eq.~(90.3)]{wall-contfrac}. The reader finds, for instance,
in \cite[Theorem~2]{bcp-cf-tails} that this continued fraction, as well as
corresponding ones for the tails of $\arctan (z)$, is a special case of
Gauss' famous continued fraction for quotients of hypergeometric functions
${}_2 F_1$. We hope that some readers and students, in particular, enjoy the
fact that they are able to rediscover such results themselves.
\end{example}
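The limit in Example~\ref{eg:arctan} is easy to test computationally; a minimal Python sketch with exact rational arithmetic:

```python
from fractions import Fraction

def pi_approx(N):
    """4 B(N)/A(N) for (n+1) u_{n+1} = (2n+1) u_n + n u_{n-1} with
    A(0) = 1, A(1) = 1 and B(0) = 0, B(1) = 1; converges to pi."""
    A = [Fraction(1), Fraction(1)]
    B = [Fraction(0), Fraction(1)]
    for n in range(1, N):
        A.append(((2*n + 1)*A[n] + n*A[n-1]) / (n + 1))
        B.append(((2*n + 1)*B[n] + n*B[n-1]) / (n + 1))
    return 4 * B[N] / A[N]

print(float(pi_approx(40)))  # ≈ 3.141592653589793
```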
\begin{example}
For more challenging explorations, the reader is invited to consider the
binomial sums
\begin{equation*}
A_x (n) = \sum_{k = 0}^n \binom{n}{k} \binom{n + k}{k}^2 x^k, \quad
\sum_{k = 0}^n \binom{n}{k} \binom{n + k}{k}^3 x^k,
\end{equation*}
and to compare the findings with those by Zudilin
\cite{zudilin-appr-polylog} who obtains simultaneous approximations to the
logarithm, dilogarithm and trilogarithm.
\end{example}
\begin{example}
\label{eg:cy}Increasing the level of difficulty further, one may consider,
for instance, the binomial sum
\begin{equation*}
A (n) = \sum_{k = 0}^n \binom{n}{k}^2 \binom{3 k}{n},
\end{equation*}
which is an appealing instance, randomly selected from many others, for
which Almkvist, van Straten and Zudilin \cite[Section~4,
\#219]{avz-apery-limits} have numerically identified an Ap\'ery limit (in
this case, depending on the initial conditions of the secondary solution,
the Ap\'ery limit can be empirically expressed as a rational multiple of
$\pi^2$ or of the $L$-function evaluation $L_{- 3} (2)$, or, in general, a
linear combination of those). To our knowledge, proving the conjectured
Ap\'ery limits for most cases in \cite[Section~4]{avz-apery-limits},
including the one above, remains open. While the techniques discussed in
Section~\ref{sec:pf} can likely be used to prove some individual limits, it
would be of particular interest to establish all these Ap\'ery limits in a
uniform fashion.
\end{example}
Choosing an appropriate binomial sum as a starting point, the present approach
could be used to construct challenges for students in an experimental-math
class, with varying levels of difficulty (or that students could explore
themselves with minimal guidance). As illustrated by Example~\ref{eg:arctan},
simple cases can be connected with existing classical results, and
opportunities abound to connect with other topics such as hypergeometric
functions, computer algebra, orthogonal polynomials, or Pad\'e
approximation, which we couldn't properly discuss here. However, much about
Ap\'ery limits is not well understood and we believe that more serious
investigations, possibly along the lines outlined here, can help improve our
understanding. To highlight this point, we present in
Sections~\ref{sec:franel:d} and \ref{sec:open} several specific open problems
and challenges.
\section{Difference equations}\label{sec:diffeq}
In our initial motivating example, we started with a solution $A (n)$ to the
three-term recurrence {\eqref{eq:delannoy:rec}} and considered a second,
linearly independent solution $B (n)$ of that same recurrence. We then
discovered in {\eqref{eq:delannoy:x:Qsum}} that
\begin{equation*}
B (n) = A (n) \sum_{k = 1}^n \frac{1}{k A (k) A (k - 1)} .
\end{equation*}
That the secondary solution is expressible in terms of the primary solution is
a consequence of a general principle in the theory of difference equations,
which we outline in this section. For a gentle introduction to difference
equations, we refer to \cite{kelley-peterson-diff}.
Consider the general homogeneous linear difference equation
\begin{equation}
u (n + d) + p_{d - 1} (n) u (n + d - 1) + \cdots + p_1 (n) u (n + 1) + p_0
(n) u (n) = 0 \label{eq:de}
\end{equation}
of order $d$, where we normalize the leading coefficient to $1$. If $u_1 (n),
\ldots, u_d (n)$ are solutions to {\eqref{eq:de}}, then their
{\emph{Casoratian}} $w (n)$ is defined as
\begin{equation*}
w (n) = \det \begin{bmatrix}
u_1 (n) & u_2 (n) & \cdots & u_d (n)\\
u_1 (n + 1) & u_2 (n + 1) & \cdots & u_d (n + 1)\\
\vdots & \vdots & \ddots & \vdots\\
u_1 (n + d - 1) & u_2 (n + d - 1) & \cdots & u_d (n + d - 1)
\end{bmatrix} .
\end{equation*}
This is the discrete analog of the Wronskian that is discussed in most
introductory courses on differential equations. By applying the difference
equation {\eqref{eq:de}} to the last row in $w (n + 1)$ and then subtracting
off multiples of earlier rows, one finds that the Casoratian satisfies
\cite[Lemma~3.1]{kelley-peterson-diff}
\begin{equation*}
w (n + 1) = (- 1)^d p_0 (n) w (n)
\end{equation*}
and hence
\begin{equation}
w (n) = (- 1)^{d n} p_0 (0) p_0 (1) \cdots p_0 (n - 1) w (0) .
\label{eq:casoratian:rec}
\end{equation}
In the case of second order difference equations ($d = 2$), we have
\begin{equation*}
\frac{u_2 (n + 1)}{u_1 (n + 1)} - \frac{u_2 (n)}{u_1 (n)} = \frac{u_1 (n)
u_2 (n + 1) - u_1 (n + 1) u_2 (n)}{u_1 (n) u_1 (n + 1)} = \frac{w (n)}{u_1
(n) u_1 (n + 1)},
\end{equation*}
which implies that we can construct a second solution from a given solution as
follows.
\begin{lemma}
Let $d = 2$ and suppose that $u_1 (n)$ solves {\eqref{eq:de}} and that $u_1
(n) \neq 0$ for all $n \geq 0$. Then a second solution of
{\eqref{eq:de}} is given by
\begin{equation}
u_2 (n) = u_1 (n) \sum_{k = 0}^{n - 1} \frac{w (k)}{u_1 (k) u_1 (k + 1)},
\label{eq:u2:casoratian}
\end{equation}
where $w (k) = p_0 (0) p_0 (1) \cdots p_0 (k - 1)$.
\end{lemma}
Here we have normalized the solution $u_2$ by choosing $w (0) = 1$: this
entails $u_2 (0) = 0$ and $u_2 (1) = 1 / u_1 (0)$. Note also that if $p_0 (0)
\neq 0$, then $w (1) \neq 0$, which implies that the solution $u_2$ is
linearly independent from $u_1$.
\begin{example}
For the Delannoy difference equation {\eqref{eq:delannoy:rec}} and the
solutions $A (n)$, $B (n)$ with initial conditions as before, we have $d =
2$ and $p_0 (n) = (n + 1) / (n + 2)$, hence $w (n) = 1 / (n + 1)$. In
particular, equation {\eqref{eq:u2:casoratian}} is equivalent to
{\eqref{eq:delannoy:x:Qsum}}.
\end{example}
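The claim $w(n) = 1/(n+1)$ in the example above is easy to confirm empirically by computing the Casoratian $w(n) = A(n) B(n+1) - A(n+1) B(n)$ directly (a check, not a proof; helper name ours):

```python
from fractions import Fraction

def delannoy_casoratian(N):
    """Casoratian w(n) = A(n) B(n+1) - A(n+1) B(n) for the Delannoy
    recurrence; by the example above it should equal 1/(n+1)."""
    A = [Fraction(1), Fraction(3)]
    B = [Fraction(0), Fraction(1)]
    for n in range(1, N + 1):
        A.append((3*(2*n + 1)*A[n] - n*A[n-1]) / (n + 1))
        B.append((3*(2*n + 1)*B[n] - n*B[n-1]) / (n + 1))
    return [A[n]*B[n+1] - A[n+1]*B[n] for n in range(N)]

print(delannoy_casoratian(6) == [Fraction(1, n + 1) for n in range(6)])  # True
```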
Now suppose that $p_k (n) \rightarrow c_k$ as $n \rightarrow \infty$, for each
$k \in \{ 0, 1, \ldots, d - 1 \}$. Then the characteristic polynomial of the
recurrence {\eqref{eq:de}} is, by definition,
\begin{equation*}
\lambda^d + c_{d - 1} \lambda^{d - 1} + \cdots + c_1 \lambda + c_0 =
\prod_{k = 1}^d (\lambda - \lambda_k)
\end{equation*}
with characteristic roots $\lambda_1, \ldots, \lambda_d$. Poincar\'e's famous
theorem \cite[Theorem~5.1]{kelley-peterson-diff} states, under a modest
additional hypothesis, that each nontrivial solution to {\eqref{eq:de}} has
asymptotic growth according to one of the characteristic roots.
\begin{theorem}[Poincar\'e]
\label{thm:poincare}Suppose further that the characteristic roots have
distinct moduli. If $u (n)$ solves the recurrence {\eqref{eq:de}}, then
either $u (n) = 0$ for all sufficiently large $n$, or
\begin{equation}
\lim_{n \rightarrow \infty} \frac{u (n + 1)}{u (n)} = \lambda_k
\label{eq:poincare}
\end{equation}
for some $k \in \{ 1, \ldots, d \}$.
\end{theorem}
If, in addition, $p_0 (n) \neq 0$ for all $n \geq 0$ (so that, by
{\eqref{eq:casoratian:rec}}, the Casoratian $w (n)$ is either zero for all $n$
or nonzero for all $n$), then Perron's theorem guarantees that, for each $k$,
there exists a solution such that {\eqref{eq:poincare}} holds.
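Poincar\'e's theorem is readily visible in the Delannoy example: there the characteristic polynomial is $\lambda^2 - 6\lambda + 1$ with roots $3 \pm 2\sqrt{2}$, and the ratio $A(n+1)/A(n)$ approaches the dominant root. The sketch below iterates the ratio itself rather than $A(n)$, which avoids floating-point overflow:

```python
import math

def delannoy_ratio(N):
    """Iterate r(n) = A(n+1)/A(n) for the Delannoy recurrence
    (n+1) u_{n+1} = 3(2n+1) u_n - n u_{n-1}; by Poincaré's theorem
    r(N) tends to the dominant characteristic root 3 + 2*sqrt(2)."""
    r = 3.0  # A(1)/A(0)
    for n in range(1, N + 1):
        r = (3*(2*n + 1) - n/r) / (n + 1)
    return r

print(delannoy_ratio(10**4), 3 + 2*math.sqrt(2))  # both ≈ 5.828
```

The convergence here is only polynomially fast (the error behaves like $O(1/n)$), in contrast to the exponentially fast convergence of the quotients $B(n)/A(n)$.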
\section{Continued Fractions}\label{sec:cf}
In this section, we briefly connect with (irregular) continued fractions
\begin{equation*}
C = b_0 + \frac{a_1}{b_1 +} \, \frac{a_2}{b_2 +} \, \frac{a_3}{b_3 +}
\ldots := b_0 + \frac{a_1}{b_1 + \frac{a_2}{b_2 + \frac{a_3}{b_3 +
\ldots}}},
\end{equation*}
as introduced, for instance, in \cite{jt-contfrac},
\cite[Entry~12.1]{berndtII} or \cite[Chapter~9]{bvsz-cf}. The $n$-th
convergent of $C$ is
\begin{equation*}
C_n = b_0 + \frac{a_1}{b_1 +} \, \frac{a_2}{b_2 +} \, \ldots \,
\frac{a_n}{b_n} .
\end{equation*}
It is well known, and readily proved by induction, that $C_n = B (n) / A (n)$
where both $A (n)$ and $B (n)$ solve the second-order recurrence
\begin{equation}
u_n = b_n u_{n - 1} + a_n u_{n - 2} \label{eq:cf:rec}
\end{equation}
with $A (- 1) = 0$, $A (0) = 1$ and $B (- 1) = 1$, $B (0) = b_0$. (Note that,
if $b_0 = 0$, then the initial conditions for $B (n)$ are equivalent to $B (0)
= 0$, $B (1) = a_1$.)
Conversely, see \cite[Theorem~9.4]{bvsz-cf}, two such solutions to a
second-order difference equation with non-vanishing Casoratian correspond to a
unique continued fraction. In particular, Ap\'ery limits $\lim_{n
\rightarrow \infty} B (n) / A (n)$ arising from second-order difference
equations can be equivalently expressed as continued fractions.
\begin{example}
The Ap\'ery limit {\eqref{eq:delannoy:x:L}} is equivalent to the continued
fraction
\begin{equation*}
\frac{1}{2} \ln \left(1 + \frac{1}{x} \right) = \frac{1}{(2 x + 1) -}
\, \frac{1^2}{3 (2 x + 1) -} \, \frac{2^2}{5 (2 x +
1) -} \cdots \frac{n^2}{(2 n + 1) (2 x + 1) -} \cdots
\end{equation*}
\cite[p.~343, eq.~(90.4)]{wall-contfrac}. To see this, it suffices to note
that, if $A_x (n)$ and $B_x (n)$ are as in Section~\ref{sec:delannoy}, then
$n!A_x (n)$ and $n!B_x (n)$ solve the adjusted recurrence
\begin{equation*}
u_{n + 1} = (2 x + 1) (2 n + 1) u_n - n^2 u_{n - 1}
\end{equation*}
of the form {\eqref{eq:cf:rec}}.
\end{example}
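The equivalence in the example above is straightforward to check numerically; the sketch below iterates the adjusted recurrence in floating point (since $n! A_x(n)$ and $n! B_x(n)$ solve it, the quotient is again $Q_x(n)$):

```python
import math

def log_cf(x, N):
    """B(N)/A(N) for u_{n+1} = (2x+1)(2n+1) u_n - n^2 u_{n-1} with
    A(0) = 1, A(1) = 2x+1 and B(0) = 0, B(1) = 1; these are n! A_x(n)
    and n! B_x(n), so the quotient tends to (1/2) log(1 + 1/x)."""
    a0, a1 = 1.0, 2.0*x + 1.0
    b0, b1 = 0.0, 1.0
    for n in range(1, N):
        a0, a1 = a1, (2*x + 1)*(2*n + 1)*a1 - n*n*a0
        b0, b1 = b1, (2*x + 1)*(2*n + 1)*b1 - n*n*b0
    return b1 / a1

print(log_cf(1, 25), 0.5 * math.log(2))  # both ≈ 0.34657359...
```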
The interested reader can find a detailed discussion of the continued
fractions corresponding to Ap\'ery's limits for $\zeta (2)$ and $\zeta (3)$
in \cite[Section~9.5]{bvsz-cf}, which then proceeds to proving the
respective irrationality results.
\section{Proving Ap\'ery limits}\label{sec:pf}
In the sequel, we briefly indicate several approaches towards proving
Ap\'ery limits. In case of the Ap\'ery numbers {\eqref{eq:apery3}},
Ap\'ery established the limit {\eqref{eq:apery3:lim}} by finding the
explicit expression
\begin{equation}
B (n) = \frac{1}{6} \sum_{k = 0}^n \binom{n}{k}^2 \binom{n + k}{k}^2 \left(\sum_{j = 1}^n \frac{1}{j^3} + \sum_{m = 1}^k \frac{(- 1)^{m - 1}}{2 m^3
\binom{n}{m} \binom{n + m}{m}} \right) \label{eq:apery3:2}
\end{equation}
for the secondary solution $B (n)$. Observe how, indeed, the presence of the
truncated sum for $\zeta (3)$ in {\eqref{eq:apery3:2}} makes the limit
{\eqref{eq:apery3:lim}} transparent. While, nowadays, it is routine
\cite{schneider-apery} to verify that the sum {\eqref{eq:apery3:2}}
satisfies the recurrence {\eqref{eq:apery3:rec}}, it is much less clear how to
discover sums like {\eqref{eq:apery3:2}} that are suitable for proving a
desired Ap\'ery limit.
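Verifying that Ap\'ery's explicit expression {\eqref{eq:apery3:2}} agrees with the secondary solution of {\eqref{eq:apery3:rec}} is, at least empirically, routine; a Python sketch using exact arithmetic (function names ours):

```python
from fractions import Fraction
from math import comb

def B_closed(n):
    """Apéry's closed form for the secondary solution B(n)."""
    zeta3_trunc = sum(Fraction(1, j**3) for j in range(1, n + 1))
    total = Fraction(0)
    for k in range(n + 1):
        inner = zeta3_trunc + sum(
            Fraction((-1)**(m - 1), 2 * m**3 * comb(n, m) * comb(n + m, m))
            for m in range(1, k + 1))
        total += comb(n, k)**2 * comb(n + k, k)**2 * inner
    return total / 6

def B_recurrence(N):
    """Secondary solution of Apéry's recurrence with B(0) = 0, B(1) = 1."""
    B = [Fraction(0), Fraction(1)]
    for n in range(1, N):
        B.append(((2*n + 1)*(17*n*n + 17*n + 5)*B[n] - n**3*B[n-1]) / (n + 1)**3)
    return B

print(all(B_closed(n) == B_recurrence(8)[n] for n in range(9)))  # True
```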
Shortly after, and inspired by, Ap\'ery's proof, Beukers
\cite{beukers-irr-leg} gave shorter and more elegant proofs of the
irrationality of $\zeta (2)$ and $\zeta (3)$ by considering double and triple
integrals that result in small linear forms in the zeta value and $1$. For
instance, for $\zeta (3)$, Beukers establishes that
\begin{align}
& (- 1)^n \int_0^1 \int_0^1 \int_0^1 \frac{x^n (1 - x)^n y^n (1 - y)^n z^n (1
- z)^n}{(1 - (1 - x y) z)^{n + 1}} \mathrm{d} x \mathrm{d} y \mathrm{d} z \nonumber\\ ={} & A (n) \zeta
(3) - 6 B (n), \label{eq:linearform:beukers:zeta3}
\end{align}
where $A (n)$ and $B (n)$ are precisely the Ap\'ery numbers
{\eqref{eq:apery3}} and the corresponding secondary solution
{\eqref{eq:apery3:2}}. By bounding the integrand, it is straightforward to
show that the triple integral approaches $0$ as $n \rightarrow \infty$. From
this we directly obtain the Ap\'ery limit {\eqref{eq:apery3:lim}}, namely,
$\lim_{n \rightarrow \infty} B (n) / A (n) = \zeta (3) / 6$.
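The two explicit sums {\eqref{eq:apery3}} (for $A(n)$) and {\eqref{eq:apery3:2}} (for $B(n)$) are easy to evaluate in exact rational arithmetic, which makes the limit $B(n)/A(n) \to \zeta(3)/6$ directly visible; the following sketch hard-codes a floating-point value of $\zeta(3)$ for comparison only:

```python
from fractions import Fraction
from math import comb

def apery_A(n):
    # A(n) from eq. (eq:apery3): sum of C(n,k)^2 * C(n+k,k)^2
    return sum(comb(n, k) ** 2 * comb(n + k, k) ** 2 for k in range(n + 1))

def apery_B(n):
    # B(n) from eq. (eq:apery3:2), evaluated exactly with Fractions
    zeta3_trunc = sum(Fraction(1, j ** 3) for j in range(1, n + 1))
    total = Fraction(0)
    for k in range(n + 1):
        inner = zeta3_trunc + sum(
            Fraction((-1) ** (m - 1), 2 * m ** 3 * comb(n, m) * comb(n + m, m))
            for m in range(1, k + 1))
        total += comb(n, k) ** 2 * comb(n + k, k) ** 2 * inner
    return total / 6

ZETA3 = 1.2020569031595942854  # zeta(3), hard-coded for comparison only
```

Since $A(n)$ grows like $(1 + \sqrt{2})^{4n}$, the ratio $B(n)/A(n)$ already matches $\zeta(3)/6$ far beyond double precision for modest $n$.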
\begin{example}
In the same spirit, the Ap\'ery limit {\eqref{eq:delannoy:x:L}} can be
deduced from
\begin{equation*}
\int_0^1 \frac{t^n (1 - t)^n}{(x + t)^{n + 1}} \mathrm{d} t = A_n (x) \ln
\left(1 + \frac{1}{x} \right) - 2 B_n (x),
\end{equation*}
which holds for $x > 0$ with $A_n (x)$ and $B_n (x)$ as in
Section~\ref{sec:delannoy}. We note that this integral is a variation of the
integral that Alladi and Robinson \cite{ar-irr} have used to prove
explicit irrationality measures for numbers of the form $\ln (1 + \lambda)$
for certain algebraic $\lambda$.
\end{example}
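The example can be probed numerically without the omitted Section on Delannoy numbers, under the assumption (consistent with the adjusted recurrence quoted earlier) that $A_n(x)$ and $B_n(x)$ satisfy the Legendre-type recurrence $(n+1) u_{n+1} = (2x+1)(2n+1) u_n - n\, u_{n-1}$ with $A_0 = 1$, $A_1 = 2x+1$ and $B_0 = 0$, $B_1 = 1$; multiplying through by $n!$ recovers the adjusted recurrence. This is a hedged sketch, not the paper's construction:

```python
import math

def delannoy_AB(n, x):
    """A_n(x), B_n(x) via the assumed recurrence
    (k+1) u_{k+1} = (2x+1)(2k+1) u_k - k u_{k-1}."""
    if n == 0:
        return 1.0, 0.0
    A_prev, A = 1.0, 2 * x + 1
    B_prev, B = 0.0, 1.0
    for k in range(1, n):
        A_prev, A = A, ((2 * x + 1) * (2 * k + 1) * A - k * A_prev) / (k + 1)
        B_prev, B = B, ((2 * x + 1) * (2 * k + 1) * B - k * B_prev) / (k + 1)
    return A, B

def beukers_integral(n, x, steps=2000):
    """Composite Simpson approximation of int_0^1 t^n (1-t)^n / (x+t)^(n+1) dt."""
    f = lambda t: t ** n * (1 - t) ** n / (x + t) ** (n + 1)
    h = 1.0 / steps
    s = f(0.0) + f(1.0)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0
```

With these conventions one finds, for instance, $A_4(1) = 321$ (a central Delannoy number), and the integral agrees with $A_n(x) \ln(1 + 1/x) - 2 B_n(x)$ to roughly ten digits.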
As a powerful variation of this approach, the same kind of linear forms can be
constructed through hypergeometric sums obtained from rational functions. For
instance, Zudilin \cite{zudilin-arithmetic-odd} studies a general
construction, a special case of which is the relation, originally due to
Gutnik,
\begin{equation*}
- \frac{1}{2} \sum_{t = 1}^{\infty} R_n' (t) = A (n) \zeta (3) - 6 B (n),
\quad \text{where } R_n (t) = \left(\frac{(t - 1) \cdots (t - n)}{t (t +
1) \cdots (t + n)} \right)^2,
\end{equation*}
which once more equals {\eqref{eq:linearform:beukers:zeta3}}. We refer to
\cite{zudilin-arithmetic-odd}, \cite{zudilin-appr-polylog} and
\cite[Section~2.3]{avz-apery-limits} for further details and references. A
detailed discussion of the case of $\zeta (2)$ is included in
\cite[Sections~9.5 and 9.6]{bvsz-cf}.
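Gutnik's identity can also be checked numerically. For $n = 0$ one has $R_0(t) = 1/t^2$, so the left-hand side reduces to $\sum_{t \geq 1} t^{-3} = \zeta(3) = A(0)\zeta(3) - 6B(0)$; for $n = 1$ the values $A(1) = 5$ and $B(1) = 1$ follow from {\eqref{eq:apery3}} and {\eqref{eq:apery3:2}}. The sketch below truncates the sum and hard-codes $\zeta(3)$ for comparison:

```python
def R_prime(n, t):
    """d/dt of R_n(t) = (prod_{j=1}^n (t-j) / prod_{j=0}^n (t+j))^2,
    via the logarithmic derivative.  R_n has double zeros at
    t = 1, ..., n, so R_n'(t) = 0 there."""
    if t <= n:
        return 0.0
    ratio = 1.0
    for j in range(1, n + 1):
        ratio *= (t - j) / (t + j)
    R = (ratio / t) ** 2
    logderiv = -2.0 / t + 2.0 * sum(
        1.0 / (t - j) - 1.0 / (t + j) for j in range(1, n + 1))
    return R * logderiv

def gutnik_linear_form(n, terms=100000):
    # truncation of -(1/2) * sum_{t>=1} R_n'(t); the tail is O(1/terms^2)
    return -0.5 * sum(R_prime(n, t) for t in range(1, terms + 1))

ZETA3 = 1.2020569031595942854  # zeta(3), hard-coded for comparison only
```

The truncated sums reproduce $A(n)\zeta(3) - 6B(n)$ to the accuracy permitted by the $O(1/\mathrm{terms}^2)$ tail.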
Beukers \cite{beukers-irr} further realized that, in Ap\'ery's cases, the
differential equations associated to the recurrences have a description in
terms of modular forms. Zagier \cite{zagier4} has used such modular
parametrizations to prove Ap\'ery limits in several cases, including for the
Franel numbers, the case $d = 3$ in {\eqref{eq:franel:d}}. The limits occurring
in his cases are rational multiples of
\begin{equation*}
\zeta (2), \quad L_{- 3} (2) = \sum_{n = 1}^{\infty} \frac{\left(\frac{-
3}{n} \right)}{n^2}, \quad L_{- 4} (2) = \sum_{n = 1}^{\infty} \frac{\left(\frac{- 4}{n} \right)}{n^2} = \sum_{n = 0}^{\infty} \frac{(- 1)^n}{(2 n +
1)^2},
\end{equation*}
where $\left(\frac{a}{n} \right)$ is a Legendre symbol and $L_{- 4} (2)$ is
Catalan's constant (whose irrationality remains famously unproven). A general
method for obtaining Ap\'ery limits in cases of modular origin has been
described by Yang \cite{yang-apery-limits}, who proves various Ap\'ery
limits in terms of special values of $L$-functions.
\section{Sums of powers of binomials}\label{sec:franel:d}
Let us consider the family
\begin{equation}
A^{(d)} (n) = \sum_{k = 0}^n \binom{n}{k}^d \label{eq:franel:d}
\end{equation}
of sums of powers of binomial coefficients. It is easy to see that $A^{(1)}
(n) = 2^n$ and $A^{(2)} (n) = \binom{2 n}{n}$. The numbers $A^{(3)} (n)$ are
known as {\emph{Franel numbers}} \cite[A000172]{oeis}. Almost a century
before the availability of computer-algebra approaches like creative
telescoping, Franel \cite{franel94} obtained explicit recurrences for
$A^{(3)} (n)$ as well as, in a second paper, $A^{(4)} (n)$, and he conjectured
that $A^{(d)} (n)$ satisfies a linear recurrence of order $\lfloor (d + 1) / 2
\rfloor$ with polynomial coefficients. This conjecture was proved by Stoll in
\cite{stoll-rec-bounds}, to which we refer for details and references. It
remains an open problem to show that, in general, no recurrence of lower order
exists.
Van der Poorten \cite[p.~202]{alf} reports that the following Ap\'ery
limits in the cases $d = 3$ and $d = 4$ (in which case the binomial sums
satisfy second-order recurrences like Ap\'ery's sequences) have been
numerically observed by Tom Cusick:
\begin{equation}
\lim_{n \rightarrow \infty} \frac{B^{(3)} (n)}{A^{(3)} (n)} =
\frac{\pi^2}{24}, \quad \lim_{n \rightarrow \infty} \frac{B^{(4)}
(n)}{A^{(4)} (n)} = \frac{\pi^2}{30} . \label{eq:franel:L34}
\end{equation}
In each case, $B^{(d)} (n)$ is the (unique) secondary solution with initial
conditions $B^{(d)} (0) = 0$, $B^{(d)} (1) = 1$. The case $d = 3$ was proved
by Zagier \cite{zagier4} using modular forms. Since the case $d = 4$ is
similarly connected to modular forms \cite{cooper-level10}, we expect that
it can be established using the methods in \cite{yang-apery-limits},
\cite{zagier4} as well. Based on numerical evidence, following the approach
in Section~\ref{sec:search}, we make the following general conjecture
extending {\eqref{eq:franel:L34}}:
\begin{conjecture}
\label{conj:franel:2}For $d \geq 3$, the minimal-order recurrence
satisfied by $A^{(d)} (n)$ has a unique solution $B^{(d)} (n)$ with $B^{(d)}
(0) = 0$ and $B^{(d)} (1) = 1$ that also satisfies
\begin{equation}
\lim_{n \rightarrow \infty} \frac{B^{(d)} (n)}{A^{(d)} (n)} = \frac{\zeta
(2)}{d + 1} . \label{eq:conj:franel:2}
\end{equation}
\end{conjecture}
Note that for $d \geq 5$, the recurrence is of order $\geq 3$, and
so the solution $B^{(d)} (n)$ is not uniquely determined by the two initial
conditions $B^{(d)} (0) = 0$ and $B^{(d)} (1) = 1$.
Conjecture~\ref{conj:franel:2} asserts that precisely one of these solutions
satisfies {\eqref{eq:conj:franel:2}}.
Subsequent to making this conjecture, we realized that the case $d = 5$ was
already conjectured in \cite[Section~4.1]{avz-apery-limits} as sequence
\#22. We are not aware of previous conjectures for the cases $d \geq 6$.
We have numerically confirmed each of the cases $d \leq 10$ to more than
$100$ digits.
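The case $d = 3$ of the conjecture (equivalently, the first limit in {\eqref{eq:franel:L34}}, since $\zeta(2)/4 = \pi^2/24$) is easy to probe numerically. The sketch below assumes Franel's classical second-order recurrence $(n+1)^2 u(n+1) = (7n^2 + 7n + 2)\, u(n) + 8n^2\, u(n-1)$, which is quoted here from the literature on Franel numbers rather than stated in the text above:

```python
import math
from fractions import Fraction
from math import comb

def franel_pair(N):
    """A^(3)(N) and the secondary solution B^(3)(N), generated by
    Franel's (assumed) recurrence with A(0)=1, A(1)=2, B(0)=0, B(1)=1."""
    if N == 0:
        return Fraction(1), Fraction(0)
    A_prev, A = Fraction(1), Fraction(2)
    B_prev, B = Fraction(0), Fraction(1)
    for n in range(1, N):
        c1, c2 = 7 * n * n + 7 * n + 2, 8 * n * n
        A_prev, A = A, (c1 * A + c2 * A_prev) / (n + 1) ** 2
        B_prev, B = B, (c1 * B + c2 * B_prev) / (n + 1) ** 2
    return A, B
```

The characteristic roots of the recurrence are $8$ and $-1$, so $B^{(3)}(n)/A^{(3)}(n)$ approaches $\pi^2/24$ at a geometric rate and the limit is visible to full double precision already for moderate $n$.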
\begin{example}
\label{eg:franel:5}For $d = 5$, the sum $A^{(5)} (n)$ satisfies a recurrence
of order $3$, spelled out in \cite{perlstadt-franel}, of the form
\begin{equation}
(n + 3)^4 p (n + 1) u (n + 3) + \ldots + 32 (n + 1)^4 p (n + 2) u (n) = 0
\label{eq:franel:5:rec}
\end{equation}
where $p (n) = 55 n^2 + 33 n + 6$. The solution $B^{(5)} (n)$ from
Conjecture~\ref{conj:franel:2} is characterized by $B^{(5)} (0) = 0$ and
$B^{(5)} (1) = 1$ and insisting that the recurrence
{\eqref{eq:franel:5:rec}} also holds for $n = - 1$ (note that this does not
require a value for $B^{(5)} (- 1)$ because of the term $(n + 1)^4$).
Similarly, for $d = 6, 7, 8, 9$ the sequence $B^{(d)} (n)$ in
Conjecture~\ref{conj:franel:2} can be characterized by enforcing the
recurrence for small negative $n$ and by setting $B^{(d)} (n) = 0$ for $n <
0$. By contrast, for $d = 10$, there is a two-dimensional space of sequences
$u (n)$ solving the corresponding recurrence for all integers $n$ with the
constraint that $u (n) = 0$ for $n \leq 0$. Among these, $B^{(10)} (n)$
is characterized by $B^{(10)} (1) = 1$ and $B^{(10)} (2) = 381 / 4$.
\end{example}
Let us now return to the case $d = 5$ and let $C^{(5)} (n)$ be the third solution to
the same recurrence with $C^{(5)} (0) = 0$, $C^{(5)} (1) = 1$, $C^{(5)} (2) =
\frac{48}{7}$. Numerical evidence suggests that we have the Ap\'ery limits
\begin{equation*}
\lim_{n \rightarrow \infty} \frac{B^{(5)} (n)}{A^{(5)} (n)} = \frac{1}{6}
\zeta (2), \quad \lim_{n \rightarrow \infty} \frac{C^{(5)} (n)}{A^{(5)}
(n)} = \frac{3 \pi^4}{1120} = \frac{27}{112} \zeta (4) .
\end{equation*}
Extending this idea to $d = 5, 6, \ldots, 10$, we numerically find Ap\'ery
limits $C^{(d)} (n) / A^{(d)} (n) \rightarrow \lambda \zeta (4)$ with the
following rational values for $\lambda$:
\begin{equation*}
\frac{27}{112}, \frac{4}{21}, \frac{37}{240}, \frac{7}{55}, \frac{47}{440},
\frac{1}{11} .
\end{equation*}
These values suggest that $\lambda$ can be expressed as a simple rational
function of $d$:
\begin{conjecture}
For $d \geq 5$, the minimal-order recurrence satisfied by $A^{(d)} (n)$
has a unique solution $C^{(d)} (n)$ with $C^{(d)} (0) = 0$ and $C^{(d)} (1)
= 1$ that also satisfies
\begin{equation*}
\lim_{n \rightarrow \infty} \frac{C^{(d)} (n)}{A^{(d)} (n)} = \frac{3 (5
d + 2)}{(d + 1) (d + 2) (d + 3)} \zeta (4) .
\end{equation*}
\end{conjecture}
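The closed form in the conjecture can be checked against the six numerically observed values listed above, and against the value $3\pi^4/1120 = \frac{27}{112}\zeta(4)$ found for $d = 5$ (using $\zeta(4) = \pi^4/90$):

```python
from fractions import Fraction

def zeta4_coefficient(d):
    # conjectured coefficient lambda in C^(d)(n)/A^(d)(n) -> lambda * zeta(4)
    return Fraction(3 * (5 * d + 2), (d + 1) * (d + 2) * (d + 3))

# the six observed values for d = 5, 6, ..., 10, copied from the text
observed = [Fraction(27, 112), Fraction(4, 21), Fraction(37, 240),
            Fraction(7, 55), Fraction(47, 440), Fraction(1, 11)]
```

Exact rational arithmetic confirms the match in all six cases, which is precisely the evidence behind the conjectured formula.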
More generally, we expect that, for $d \geq 2 m + 1$, there exist such
Ap\'ery limits for rational multiples of $\zeta (2), \zeta (4), \ldots,
\zeta (2 m)$. It is part of the challenge presented here to explicitly
describe all of these limits. As communicated to us by Zudilin, one could
approach the challenge, uniformly in $d$, by considering the rational
functions
\begin{equation*}
R_n^{(d)} (t) = \left(\frac{(- 1)^t n!}{t (t + 1) \cdots (t + n)}
\right)^d
\end{equation*}
in the spirit of \cite{zudilin-arithmetic-odd},
\cite{zudilin-appr-polylog} and \cite[Section~2.3]{avz-apery-limits}, as
indicated in Section~\ref{sec:pf}.
\section{Further challenges and open problems}\label{sec:open}
Although many things are known about Ap\'ery limits, much deserves to be
better understood. The explicit conjectures we offer in the previous section
can be supplemented with similar ones for other families of binomial sums. In
addition, many conjectural Ap\'ery limits that were discovered numerically
are listed in \cite[Section~4]{avz-apery-limits} for sequences that arise
from fourth- and fifth-order differential equations of Calabi--Yau type. As
mentioned in Example~\ref{eg:cy}, it would be of particular interest to
establish all these Ap\'ery limits in a uniform fashion.
It is natural to wonder whether a better understanding of Ap\'ery limits can
lead to new irrationality results. Despite considerable efforts and progress
(we refer the reader to \cite{zudilin-arithmetic-odd} and
\cite{brown-apery} as well as the references therein), it remains a
wide-open challenge to prove the irrationality of, say, $\zeta (5)$ or
Catalan's constant. As a recent promising construction in this direction, we
mention Brown's cellular integrals \cite{brown-apery} which are linear forms
in (multiple) zeta values that are constructed to have certain vanishing
properties that make them amenable to irrationality proofs. In particular,
Brown's general construction includes Ap\'ery's results as (unique) special
cases occurring as initial instances.
In another direction, it would be of interest to systematically study
$q$-analogs and, in particular, to generalize from difference equations to
$q$-difference equations. For instance, Amdeberhan and Zeilberger
\cite{az-qapery} offer an Ap\'ery-style proof of the irrationality of the
$q$-analog of $\ln (2)$ based on a $q$-version of the Delannoy numbers
{\eqref{eq:delannoy}}.
Perron's theorem, which we have mentioned after Poincar\'e's
Theorem~\ref{thm:poincare}, guarantees that, for each characteristic root
$\lambda$ of an appropriate difference equation, there exists a solution $u
(n)$ such that $u (n + 1) / u (n)$ approaches $\lambda$. We note that, for
instance, Ap\'ery's linear form {\eqref{eq:linearform:beukers:zeta3}} is
precisely the unique (up to a constant multiple) solution corresponding to the
$\lambda$ of smallest modulus. General tools to explicitly construct such
Perron solutions from the difference equation would be of great utility.
\begin{acknowledgment}
We are grateful to Alan Sokal for improving the
exposition by kindly sharing lots of careful suggestions and comments. We also
thank Wadim Zudilin for helpful comments, including his suggestion at the end
of Section~\ref{sec:franel:d}, and references.
\end{acknowledgment}
\end{document} |
\begin{document}
\title[Cloaking via mapping for the heat equation]{Cloaking via mapping for the heat equation}
\author{R. V. Craster}
\address{R.V.C.: Department of Mathematics, Imperial College London, London, SW7 2AZ, United Kingdom.}
\email{[email protected]}
\author{S. R. L. Guenneau}
\address{S.R.L.G.: Aix-Marseille Universite, CNRS, Centrale Marseille, Institut Fresnel, Avenue Escadrille Normandie-Ni\'emen, 13013, Marseille, France.}
\email{[email protected]}
\author{H. R. Hutridurga}
\address{H.R.H.: Department of Mathematics, Imperial College London, London, SW7 2AZ, United Kingdom.}
\email{[email protected]}
\author{G. A. Pavliotis}
\address{G.A.P.: Department of Mathematics, Imperial College London, London, SW7 2AZ, United Kingdom.}
\email{[email protected]}
\date{\today}
\maketitle
\setcounter{tocdepth}{1}
\tableofcontents
\thispagestyle{empty}
\begin{abstract}
This paper explores the concept of near-cloaking in the context of time-dependent heat propagation. We show that, after a certain threshold time, the boundary measurements for the homogeneous heat equation are close to those for the cloaked heat problem in a certain Sobolev space norm, irrespective of the density-conductivity pair in the cloaked region. A regularised transformation media theory is employed to arrive at our results. Our proof relies on the study of the long time behaviour of solutions to parabolic problems with high contrast in density and conductivity coefficients. It further relies on boundary measurement estimates in the presence of small defects in the context of the steady conduction problem. We then present some numerical examples to illustrate our theoretical results.
\end{abstract}
\section{Introduction}
\subsection{Physical motivation}
This work addresses the concept of near-cloaking in the context of the
time-dependent heat propagation. Our study is motivated by experiments
in electrostatics \cite{Liu_2012} and thermodynamics
\cite{Schittny_2013}, which have demonstrated the markedly different
behaviour of structured cloaks in static and dynamic regimes. It has
been observed, in the physics literature, that the thermal field in
\cite{Schittny_2013} reaches an equilibrium state after a certain time
interval, after which it looks nearly identical to the electrostatic field
measured in \cite{Liu_2012}. There are already some rigorous results for the
electrostatic case \cite{Greenleaf_2003}. However, during the transient
regime the thermal field differs markedly from the static one, and a
natural question that arises is whether one can give a mathematically
rigorous definition of cloaking for diffusion processes in the time
domain. This has important practical applications as cloaking for
diffusion processes has been applied to mass transport problems in
life sciences \cite{Guenneau_2013} and chemistry \cite{Zeng_2013} as
well as to multiple light scattering whereby light is governed by
ballistic laws \cite{Schittny_2014}. Interestingly, the control of wave
trajectories first proposed in the context of electromagnetism
\cite{pendry_2006} can also be extended to matter waves that are
solutions of a Schr\"{o}dinger equation \cite{Zhang_2008} that is akin to the
heat equation, and so the near-cloaking we investigate has broad
implications and applications.
\subsection{Analytical motivation}
Change-of-variables based cloaking schemes have been inspired by the work of Greenleaf and co-authors \cite{Greenleaf_2003} in the context of electric impedance tomography and by the work of Pendry and co-authors \cite{pendry_2006} in the context of time-harmonic Maxwell's equations. The transformation employed in both those works is singular and any mathematical analysis involving them becomes quite involved. The transformation in \cite{Greenleaf_2003, pendry_2006} essentially blows up a point to a region in space which needs to be cloaked. These works yield perfect-cloaks, i.e., they render the target region completely invisible to boundary measurements. Regularised versions of this singular approach have been proposed in the literature. In \cite{Kohn_2008}, Kohn and co-authors proposed a regularised approximation of this map by blowing up a small ball to the cloaked region and studied the asymptotic behaviour as the radius of the small ball vanishes, thus recovering the singular transform of \cite{Greenleaf_2003, pendry_2006}. An alternate approach involving the truncation of singularities was employed by Greenleaf and co-authors in \cite{Greenleaf_2008} to provide an approximation scheme for the singular transform in \cite{Greenleaf_2003, pendry_2006}. It is to be noted that the constructions in \cite{Kohn_2008} and \cite{Greenleaf_2008} are shown to be equivalent in \cite{Kocyigit_2013}. We refer the interested reader to the review papers \cite{Greenleaf_2009A, Greenleaf_2009B} for further details on cloaking via change of variables approach with emphasis on the aforementioned singular transform.
Rather than employing the singular transformation, we follow the lead of Kohn and co-authors \cite{Kohn_2008}. This is in contrast with some works in the literature on time-dependent thermal cloaking strategies where singular schemes are used, e.g., \cite{Guenneau_2012, Petiteau_2014, Petiteau_2015}. Note that the evolution equation which we consider is a good model for \cite{Schittny_2013}, which designs and fabricates a microstructured thermal cloak that molds the heat flow around an object in a metal plate. We refer the interested reader to the review paper \cite{Raza_2016} for further details on transformation thermodynamics. The work of Kohn and co-authors \cite{Kohn_2008} shows that the near-cloaking they propose for the steady conduction problem is $\varepsilon^d$-close to the perfect cloak, where $d$ is the space dimension. Our construction of the cloaking structure is exactly that of \cite{Kohn_2008}. In the present time-dependent setting, we allow the solution to evolve in time until it gets close to the associated thermal equilibrium state, which solves a steady conduction problem. This closeness is studied in the Sobolev space $\mathrm H^1(\Omega)$. We then employ the $\varepsilon^d$-closeness result of \cite{Kohn_2008} to deduce our near-cloaking theorem in the time-dependent setting. To the best of our knowledge, this is the first work to consider near-cloaking strategies for the time-dependent heat conduction problem.
In the literature, there have been numerous works on approximate cloaking strategies for the Helmholtz equation \cite{Kohn_2010, Nguyen_2010, Nguyen_2011, Nguyen_2012} (see also \cite{Nguyen-Vogelius_2012} for the treatment of the full wave equation). The strategy in \cite{Nguyen-Vogelius_2012} to treat the time-dependent wave problem is to take the Fourier transform in the time variable. This yields a family of Helmholtz problems (indexed by the frequency). The essential idea there is to obtain invisibility estimates of an appropriate degree for the Helmholtz equation -- the estimates being frequency-dependent. More importantly, these estimates blow up in the high-frequency regime, but they do so in an integrable fashion. Equipped with these approximate cloaking results for the Helmholtz equations, the authors in \cite{Nguyen-Vogelius_2012} simply invert the Fourier transform to read off the near-cloaking result for the time-dependent wave equation. Inspired by \cite{Nguyen-Vogelius_2012}, one could apply the Laplace transform in the time variable for the heat equation and try to mimic the analysis in \cite{Nguyen-Vogelius_2012} for the family of elliptic problems thus obtained. Note that this approach does not require, unlike ours, the solution to the heat conduction problem to reach an equilibrium state to obtain approximate cloaking. We have not explored this approach in detail and we leave it for future investigations.
Gralak and co-authors \cite{Gralak_2016} recently developed invisible layered structures in transformation optics in the temporal regime. Inspired by that work, we have also developed a transformation media theory for thermal layered cloaks, which are of practical importance in thin-film solar cells for energy harvesting in photovoltaic industry. These layered cloaks might be of importance in thermal imaging. We direct the interested readers to \cite{Ammari_2005} and references therein.
In the applied mathematics community working on meta-materials, enhancement of near-cloaking is another topic which has been addressed in the literature. Loosely speaking, these enhancement techniques involve covering a small ball of radius $\varepsilon$ by multiple coatings and then applying the push-forward maps of \cite{Kohn_2008}. These multiple coatings which result in the vanishing of certain polarization tensors help us improve the $\varepsilon^d$-closeness of \cite{Kohn_2008} to $\varepsilon^{dN}$-closeness where $N$ denotes the number of coatings in the above construction. For further details, we direct the readers to \cite{Ammari_2012a, Ammari_2012b} in the mathematics literature and \cite{Alu_2005, Alu_2007} in the physics literature (the works \cite{Alu_2005, Alu_2007} employ negative index materials). One could employ the constructions of \cite{Ammari_2012a, Ammari_2012b} in the time-independent setting to our temporal setting to obtain enhanced near-cloaking structures. This again is left for future investigations.
In the present work, we are able to treat time-independent sources for the heat equation. As our approach involves the study of thermal equilibration, we could extend our result to time-dependent sources which result in equilibration. This, however, leaves open the question of near-cloaking for the heat equation with genuinely time-dependent sources which do not result in thermal equilibration. For example, sources which are time-harmonic cannot be treated by the approach of this paper. The approach involving the Laplace transform mentioned in a previous paragraph might be of help here. There are plenty of numerical works published by physicists on these aspects, but so far with no mathematical foundation. The authors plan to return to these questions in the near future.
\subsection{Paper structure} The paper is organized as follows. In section \ref{sec:math-set}, we briefly recall the change-of-variable-principle for the heat equation. This section also makes precise the notion of near-cloaking and its connection to perfect cloaking followed by the construction of cloaking density and conductivity coefficients. Our main result (Theorem \ref{thm:near-cloak}) is stated in that section. Section \ref{sec:spec} deals with the long time behaviour of solutions to parabolic problems; the effect of high contrast in density and conduction on the long time behaviour of solutions is also treated in that section. The proof of Theorem \ref{thm:near-cloak} is given in section \ref{sec:near-cloak-proof}. In this section, we also develop upon an idea of layered cloak inspired by the construction in \cite{Gralak_2016}. Finally, in section \ref{sec:numerics}, we present some numerical examples to illustrate our theoretical results.
\section{Mathematical setting}\label{sec:math-set}
Let $\Omega\subset{\mathbb R}^d$ $(d=2,3)$ be a smooth bounded domain such that $B_2\subset \Omega$. Throughout, we use the notation $B_r$ to denote a Euclidean ball of radius $r$ centred at the origin.
\subsection{Change-of-variable principle}
The following result recalls the principle behind the change-of-variables based cloaking strategies. This is the typical and essential ingredient of any cloaking strategies in transformation media theory.
\begin{prop}\label{prop:change-variable-princ}
Let the coefficients $A\in\mathrm L^\infty(\Omega;{\mathbb R}^{d\times d})$ and $\rho\in\mathrm L^\infty(\Omega;{\mathbb R})$ be given. Suppose that the source term $f\in\mathrm L^2(\Omega;{\mathbb R})$. Consider a smooth invertible map $\mathbb{F}:\Omega\mapsto\Omega$ such that $\mathbb{F}(x) = x$ for each $x\in \Omega\setminus B_2$. Furthermore, assume that the associated Jacobians satisfy ${\rm det}(D\mathbb{F})(x), {\rm det}(D\mathbb{F}^{-1})(x) \ge C >0$ for a.e. $x\in\Omega$. Then $u(t,x)$ is a solution to
\begin{align*}
\rho(x) \frac{\partial u}{\partial t} = \nabla \cdot \Big( A(x) \nabla u \Big) + f(x) \qquad \mbox{ for }(t,x)\in(0,\infty)\times\Omega
\end{align*}
if and only if $v= u\circ \mathbb{F}^{-1}$ is a solution to
\begin{align*}
\mathbb{F}^*\rho(y) \frac{\partial v}{\partial t} = \nabla \cdot \Big( \mathbb{F}^*A(y) \nabla v \Big) + \mathbb{F}^*f(y) \qquad \mbox{ for }(t,y)\in(0,\infty)\times\Omega
\end{align*}
where the coefficients are given as
\begin{equation}\label{eq:push-forward-formulae}
\begin{aligned}
& \mathbb{F}^*\rho(y) = \frac{\rho(x)}{{\rm det }(D\mathbb{F})(x)};
\qquad \qquad \qquad \qquad
& \mathbb{F}^*f(y) = \frac{f(x)}{{\rm det }(D\mathbb{F})(x)};
\\[0.2 cm]
& \mathbb{F}^*A(y) = \frac{D\mathbb{F}(x) A(x) D\mathbb{F}^\top(x)}{{\rm det }(D\mathbb{F})(x)}
& ~
\end{aligned}
\end{equation}
with the understanding that the right hand sides in \eqref{eq:push-forward-formulae} are computed at $x=\mathbb{F}^{-1}(y)$. Moreover we have for all $t>0$,
\begin{align}\label{eq:prop:change-variable-assertion}
u(t,\cdot) = v(t,\cdot) \qquad \mbox{ in }\Omega\setminus B_2.
\end{align}
\end{prop}
The proof of the above proposition has appeared in the literature in most of the papers on ``cloaking via mapping'' techniques. It essentially involves performing a change of variables in the weak formulation associated with the differential equation. We will skip the proof and refer the reader to \cite[subsection 2.2]{Kohn_2008}, \cite[subsection 2.2, page 976]{Kohn_2010}, \cite[section 2, page 8209]{Guenneau_2012} for essential details.\\
Following Kohn and co-authors \cite{Kohn_2008}, we fix a regularising parameter $\varepsilon>0$ and consider a Lipschitz map $\mathcal{F}_\varepsilon:\Omega\mapsto\Omega$ defined below
\begin{equation}\label{eq:std-near-cloak}
\mathcal{F}_\varepsilon(x) :=
\left\{
\begin{array}{cl}
x & \mbox{ for } x\in \Omega\setminus B_2
\\[0.2 cm]
\left( \frac{2-2\varepsilon}{2-\varepsilon} + \frac{\vert x \vert}{2-\varepsilon}\right) \frac{x}{\vert x\vert} & \mbox{ for } x\in B_2 \setminus B_\varepsilon
\\[0.2 cm]
\frac{x}{\varepsilon} & \mbox{ for }x\in B_\varepsilon.
\end{array}\right.
\end{equation}
Note that $\mathcal{F}_\varepsilon$ maps $B_\varepsilon$ to $B_1$ and the annulus
$B_2\setminus B_\varepsilon$ to $B_2\setminus B_1$. The cloaking strategy with the above map corresponds to having $B_1$ as the cloaked region and the annulus $B_2\setminus B_1$ as the cloaking annulus. The Lipschitz map given above is borrowed from \cite[page 5]{Kohn_2008}. Remark that taking $\varepsilon=0$ in \eqref{eq:std-near-cloak} yields the map
\begin{equation}\label{eq:std-singular-cloak}
\mathcal{F}_0(x) :=
\left\{
\begin{array}{cl}
x & \mbox{ for } x\in \Omega\setminus B_2
\\[0.2 cm]
\left( 1 + \frac12 \left\vert x \right\vert \right) \frac{x}{\vert x\vert} & \mbox{ for } x\in B_2 \setminus \{0\}
\end{array}\right.
\end{equation}
which is the singular transform of \cite{Greenleaf_2003, pendry_2006}. The map $\mathcal{F}_0$ is smooth except at the point $0$. It maps $0$ to $B_1$ and $B_2\setminus\{0\}$ to $B_2\setminus B_1$.
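The basic geometric properties of $\mathcal{F}_\varepsilon$ are easy to verify numerically; the following sketch (ours, not the paper's) implements the radial formula \eqref{eq:std-near-cloak} and is used below to check the interface values $r = \varepsilon \mapsto 1$ and $r = 2 \mapsto 2$, i.e.\ continuity across $\partial B_\varepsilon$ and $\partial B_2$:

```python
import math

def F_eps(x, eps):
    """The regularised cloaking map (eq:std-near-cloak), acting radially
    on a point x in R^d (here x is any tuple of coordinates)."""
    r = math.sqrt(sum(c * c for c in x))
    if r >= 2:
        return tuple(x)                                 # identity outside B_2
    if r >= eps:
        s = (2 - 2 * eps) / (2 - eps) + r / (2 - eps)   # new radius in [1, 2]
        return tuple(s * c / r for c in x)
    return tuple(c / eps for c in x)                    # dilation of B_eps onto B_1
```

One checks that the annulus branch sends $r = \varepsilon$ to $s = 1$ and $r = 2$ to $s = 2$, matching the inner dilation and the outer identity, so the map is continuous and piecewise smooth.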
\subsection{Essential idea of the paper}
Let us make precise the notion of near-cloaking we will use throughout this paper. Let $f\in\mathrm L^2(\Omega)$ denote a source term such that ${\rm supp}\, f\subset \Omega\setminus B_2$. Let $g\in\mathrm L^2(\partial\Omega)$ denote a Neumann boundary datum. Suppose the initial datum $u^{\rm in}\in \mathrm H^1(\Omega)$ is such that ${\rm supp}\, u^{\rm in}\subset \Omega\setminus B_2$. Consider the homogeneous (conductivity being unity) heat equation for the unknown $u_{\rm hom}(t,x)$ with the aforementioned data.
\begin{equation}\label{eq:intro-u_hom}
\begin{aligned}
\partial_t u_{\rm hom} (t,x) & = \Delta u_{\rm hom}(t,x) + f(x) \qquad \mbox{ in }(0,\infty)\times\Omega,
\\[0.2 cm]
\nabla u_{\rm hom} \cdot {\bf n}(x) & = g(x) \qquad \qquad\qquad \qquad\mbox{ on }(0,\infty)\times\partial\Omega,
\\[0.2 cm]
u_{\rm hom}(0,x) & = u^{\rm in}(x) \qquad \qquad\qquad \qquad\qquad \mbox{ in }\Omega.
\end{aligned}
\end{equation}
Here ${\bf n}(x)$ is the unit exterior normal to $\Omega$ at $x\in\partial\Omega$.\\
Our objective is to construct coefficients $\rho_{\rm cl}(x)$ and $A_{\rm cl}(x)$ such that
\[
\rho_{\rm cl}(x) = \eta(x); \qquad
A_{\rm cl}(x) = \beta(x) \qquad \mbox{ in }B_1
\]
for some arbitrary bounded positive density $\eta$ and for some arbitrary bounded positive definite conductivity $\beta$. This construction should further imply that the evolution for the unknown $u_{\rm cl}(t,x)$ given by
\begin{equation}\label{eq:intro-u_cloak}
\begin{aligned}
\rho_{\rm cl}(x) \partial_t u_{\rm cl} & = \nabla \cdot \Big( A_{\rm cl}(x) \nabla u_{\rm cl} \Big) + f(x) \qquad \mbox{ in }(0,\infty)\times\Omega,
\\[0.2 cm]
\nabla u_{\rm cl} \cdot {\bf n}(x) & = g(x) \qquad \qquad\qquad \qquad \qquad \mbox{ on }(0,\infty)\times\partial\Omega,
\\[0.2 cm]
u_{\rm cl}(0,x) & = u^{\rm in}(x) \qquad \qquad\qquad \qquad\qquad \mbox{ in }\Omega,
\end{aligned}
\end{equation}
is such that there exists a time instant $\mathit{T}<\infty$ so that for all $t\ge \mathit{T}$, we have
\begin{align*}
u_{\rm cl}(t,x) \approx u_{\rm hom}(t,x)
\qquad \mbox{ for }x\in \Omega\setminus B_2.
\end{align*}
The above closeness will be measured in some appropriate function space norm. Most importantly, this approximation should be independent of the density-conductivity pair $\eta,\beta$ in $B_1$. Note, in particular, that the source terms $f(x), g(x), u^{\rm in}(x)$ in \eqref{eq:intro-u_hom} and \eqref{eq:intro-u_cloak} are the same.
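The equilibration mechanism behind this strategy can be illustrated by a minimal one-dimensional sketch (not taken from the paper): an explicit finite-difference scheme for $u_t = u_{xx}$ on $(0,1)$ with homogeneous Neumann data drives any initial datum to its spatial mean, with the lowest non-constant mode decaying like $e^{-\pi^2 t}$.

```python
import math

def heat_neumann(u0, t_final, dx):
    """Explicit FTCS scheme for u_t = u_xx on (0,1) with zero Neumann
    flux, enforced by reflecting ghost values at both ends."""
    u = list(u0)
    dt = 0.4 * dx * dx                      # CFL-stable: dt/dx^2 <= 1/2
    for _ in range(int(t_final / dt)):
        v = u[:]
        for i in range(len(u)):
            left = u[i - 1] if i > 0 else u[1]
            right = u[i + 1] if i < len(u) - 1 else u[-2]
            v[i] = u[i] + (dt / dx ** 2) * (left - 2 * u[i] + right)
        u = v
    return u

# start from the lowest non-constant Neumann mode (zero mean)
n, dx = 51, 1.0 / 50
u_final = heat_neumann([math.cos(math.pi * i * dx) for i in range(n)], 1.0, dx)
```

At $t = 1$ the mode has decayed by roughly $e^{-\pi^2} \approx 5 \cdot 10^{-5}$, which is the sort of exponential closeness to equilibrium exploited in the analysis below.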
\subsection{Cloaking coefficients \& the defect problem}
The following construct using the push-forward maps is now classical
in transformation optics and we use it here for thermodynamics.
\begin{equation}\label{eq:rho-cloak-choice}
\rho_{\rm cl}(x) =
\left\{
\begin{array}{ll}
1 & \quad \mbox{ for }x\in\Omega\setminus B_2,\\[0.2 cm]
\mathcal{F}^*_\varepsilon 1 & \quad \mbox{ for }x\in B_2\setminus B_1,\\[0.2 cm]
\eta(x) & \quad \mbox{ for }x\in B_1
\end{array}\right.
\end{equation}
and
\begin{equation}\label{eq:A-cloak-choice}
A_{\rm cl}(x) =
\left\{
\begin{array}{ll}
{\rm Id} & \quad \mbox{ for }x\in\Omega\setminus B_2,\\[0.2 cm]
\mathcal{F}^*_\varepsilon {\rm Id} & \quad \mbox{ for }x\in B_2\setminus B_1,\\[0.2 cm]
\beta(x) & \quad \mbox{ for }x\in B_1.
\end{array}\right.
\end{equation}
The density coefficient $\eta(x)$ in \eqref{eq:rho-cloak-choice} is any arbitrary real coefficient such that
\[
0 < \eta(x) < \infty \qquad \mbox{ for }x\in B_1.
\]
The conductivity coefficient $\beta(x)$ in \eqref{eq:A-cloak-choice} is any arbitrary bounded positive definite matrix, i.e., there exist positive constants $\kappa_1$ and $\kappa_2$ such that
\[
\kappa_1 \left\vert \xi \right\vert^2 \le \beta(x) \xi \cdot \xi \le \kappa_2 \left\vert \xi \right\vert^2
\qquad \forall (x,\xi)\in B_1\times{\mathbb R}^d.
\]
The following observation is crucial for the analysis to follow. Consider the density-conductivity pair
\begin{equation}\label{eq:rho-small-inclusion}
\rho^\varepsilon(x) =
\left\{
\begin{array}{ll}
1 & \quad \mbox{ for }x\in\Omega\setminus B_\varepsilon,\\[0.2 cm]
\frac{1}{\varepsilon^{d}}\eta\left(\frac{x}{\varepsilon}\right) & \quad \mbox{ for }x\in B_\varepsilon
\end{array}\right.
\end{equation}
and
\begin{equation}\label{eq:A-small-inclusion}
A^\varepsilon(x) =
\left\{
\begin{array}{ll}
{\rm Id} & \quad \mbox{ for }x\in\Omega\setminus B_\varepsilon,\\[0.2 cm]
\frac{1}{\varepsilon^{d-2}}\beta\left(\frac{x}{\varepsilon}\right) & \quad \mbox{ for }x\in B_\varepsilon.
\end{array}\right.
\end{equation}
Next let us compute their push-forwards using the Lipschitz map $\mathcal{F}_\varepsilon$ -- as given by the formulae \eqref{eq:push-forward-formulae} -- yielding
\begin{align*}
\rho_{\rm cl}(y) = \mathcal{F}^*_\varepsilon\rho^\varepsilon(y);
\qquad
A_{\rm cl}(y) = \mathcal{F}^*_\varepsilon A^\varepsilon(y).
\end{align*}
Then, the assertion of Proposition \ref{prop:change-variable-princ} (see in particular the equality \eqref{eq:prop:change-variable-assertion}) implies that the solution $u_{\rm cl}(t,x)$ to \eqref{eq:intro-u_cloak} satisfies for all $t>0$,
\begin{align*}
u_{\rm cl}(t,x) = u^\varepsilon(t,x)
\qquad \mbox{ for all }x\in \Omega\setminus B_2
\end{align*}
with $u^\varepsilon(t,x)$ being the solution to
\begin{align}\label{eq:intro:small-inclusion}
\begin{aligned}
\rho^\varepsilon(x) \partial_t u^\varepsilon & = \nabla \cdot \Big( A^\varepsilon(x) \nabla u^\varepsilon \Big) + f(x) \qquad \mbox{ in }(0,\infty)\times\Omega,
\\[0.2 cm]
\nabla u^\varepsilon \cdot {\bf n}(x) & = g(x) \qquad \qquad\qquad \qquad \qquad \mbox{ on }(0,\infty)\times\partial\Omega,
\\[0.2 cm]
u^\varepsilon(0,x) & = u^{\rm in}(x) \qquad \qquad\qquad \qquad\qquad \mbox{ in }\Omega,
\end{aligned}
\end{align}
with the coefficients in the above evolution being given by \eqref{eq:rho-small-inclusion}-\eqref{eq:A-small-inclusion}. The coefficients $\rho^\varepsilon$ and $A^\varepsilon$ are homogeneous (unit density and identity conductivity) except for their values in $B_\varepsilon$. Hence we treat $B_\varepsilon$ as a defect in which the coefficients are in high contrast with their values elsewhere in the domain. Due to the nature of these coefficients, we call the evolution problem \eqref{eq:intro:small-inclusion} the \emph{defect problem with high contrast coefficients} or \emph{defect problem} for short. The change-of-variable principle (see Proposition \ref{prop:change-variable-princ}) essentially says that, to study cloaking for the transient heat transfer problem, we need to compare the solution $u^\varepsilon(t,x)$ to the defect problem \eqref{eq:intro:small-inclusion} with the solution $u_{\rm hom}(t,x)$ to the homogeneous problem \eqref{eq:intro-u_hom} for $x\in\Omega\setminus B_2$.
\subsection{Main result}
We are now ready to state the main result of this work.
\begin{thm}\label{thm:near-cloak}
Let the dimension $d\ge2$. Let $u^\varepsilon(t,x)$ be the solution to the defect problem \eqref{eq:intro:small-inclusion} and let $u_{\rm hom}(t,x)$ be the solution to the homogeneous conductivity problem \eqref{eq:intro-u_hom}. Suppose the data in \eqref{eq:intro-u_hom} and \eqref{eq:intro:small-inclusion} are such that
\begin{align*}
f\in\mathrm L^2(\Omega), \quad {\rm supp}\,f\subset \Omega\setminus B_2, \quad g\in\mathrm L^2(\partial\Omega), \quad u^{\rm in}\in\mathrm H^1(\Omega), \quad {\rm supp}\, u^{\rm in}\subset \Omega\setminus B_2.
\end{align*}
Let us further suppose that the source terms satisfy
\begin{align}
\int_\Omega f(x)\, {\rm d}x + \int_{\partial\Omega} g(x)\, {\rm d}\sigma(x) & = 0, \label{eq:thm-near-cloak-compatible}
\\
\int_\Omega u^{\rm in}(x)\, {\rm d}x & = 0. \label{eq:thm-near-cloak-compatible-initial}
\end{align}
Then, there exists a time $\mathit{T} <\infty$ such that for all $t\ge \mathit{T}$ we have
\begin{align}\label{eq:thm:H12-estimate}
\left\Vert u^\varepsilon(t,\cdot) - u_{\rm hom}(t,\cdot) \right\Vert_{\mathrm H^{\frac12}(\partial\Omega)} \le C \left( \left\Vert u^{\rm in} \right\Vert_{\mathrm H^1(\Omega)} + \left\Vert f \right\Vert_{\mathrm L^2(\Omega)} + \left\Vert g \right\Vert_{\mathrm L^2(\partial\Omega)} \right)\varepsilon^d,
\end{align}
where the positive constant $C$ depends on the domain $\Omega$ and the $\mathrm L^\infty$ bounds on the density-conductivity pair $(\eta, \beta)$ in $B_1$.
\end{thm}
Thanks to the change-of-variable principle (Proposition \ref{prop:change-variable-princ}), we deduce the following corollary.
\begin{cor}\label{cor:diff-cloak-hom}
Let $u_{\rm cl}(t,x)$ be the solution to the thermal cloak problem \eqref{eq:intro-u_cloak} with the cloaking coefficients $\rho_{\rm cl}(x), A_{\rm cl}(x)$ given by \eqref{eq:rho-cloak-choice}-\eqref{eq:A-cloak-choice}. Let $u_{\rm hom}(t,x)$ be the solution to the homogeneous conductivity problem \eqref{eq:intro-u_hom}. Suppose the data in \eqref{eq:intro-u_hom} and \eqref{eq:intro-u_cloak} are such that
\begin{align*}
f\in\mathrm L^2(\Omega), \quad {\rm supp}\,f\subset \Omega\setminus B_2, \quad g\in\mathrm L^2(\partial\Omega), \quad u^{\rm in}\in\mathrm H^1(\Omega), \quad {\rm supp}\, u^{\rm in}\subset \Omega\setminus B_2.
\end{align*}
Let us further suppose that the source terms satisfy \eqref{eq:thm-near-cloak-compatible} and \eqref{eq:thm-near-cloak-compatible-initial}. Then, there exists a time $\mathit{T} <\infty$ such that for all $t\ge \mathit{T}$ we have
\begin{align}\label{eq:cor:H12-estimate}
\left\Vert u_{\rm cl}(t,\cdot) - u_{\rm hom}(t,\cdot) \right\Vert_{\mathrm H^{\frac12}(\partial\Omega)} \le C \left( \left\Vert u^{\rm in} \right\Vert_{\mathrm H^1(\Omega)} + \left\Vert f \right\Vert_{\mathrm L^2(\Omega)} + \left\Vert g \right\Vert_{\mathrm L^2(\partial\Omega)} \right) \varepsilon^d,
\end{align}
where the positive constant $C$ depends on the domain $\Omega$ and the $\mathrm L^\infty$ bounds on the density-conductivity pair $(\eta, \beta)$ in $B_1$.
\end{cor}
Whenever the source terms $f(x), g(x)$ and the initial datum $u^{\rm in}(x)$ satisfy the compatibility conditions \eqref{eq:thm-near-cloak-compatible}-\eqref{eq:thm-near-cloak-compatible-initial}, we shall call them \emph{admissible}. Our proof of Theorem \ref{thm:near-cloak} relies upon two ideas:
\begin{enumerate}
\item[$(i)$] the long time behaviour of solutions to parabolic problems;
\item[$(ii)$] boundary measurement estimates in the presence of small inhomogeneities.
\end{enumerate}
\begin{rem}
Our strategy of proof goes via the study of the steady state problems associated with the evolutions \eqref{eq:intro-u_hom} and \eqref{eq:intro:small-inclusion}. Those elliptic boundary value problems demand that the source terms be compatible in order to guarantee existence and uniqueness of solutions. These compatibility conditions on the bulk and Neumann boundary sources $f,g$ translate to the zero mean assumption \eqref{eq:thm-near-cloak-compatible}. Note that the compatibility condition \eqref{eq:thm-near-cloak-compatible} is not necessary for the well-posedness of the transient problems.
\end{rem}
A remark about the assumption \eqref{eq:thm-near-cloak-compatible-initial} on the initial datum $u^{\rm in}$ in Theorem \ref{thm:near-cloak} is postponed until after the proof of the theorem, where its role will be clearer to the reader.
\section{Study of the long time behaviour}\label{sec:spec}
In this section, we deal with the long time asymptotic analysis of the parabolic problems. Consider the initial-boundary value problem for the unknown $v(t,x)$:
\begin{equation}\label{eq:spec-ibvp-v}
\begin{aligned}
\partial_t v & = \Delta v \qquad \mbox{ in }(0,\infty)\times\Omega,
\\[0.2 cm]
\nabla v \cdot {\bf n}(x) & = 0 \quad \qquad \qquad\mbox{ on }(0,\infty)\times\partial\Omega,
\\[0.2 cm]
v(0,x) & = v^{\rm in}(x) \qquad \qquad\qquad \mbox{ in }\Omega.
\end{aligned}
\end{equation}
We give an asymptotic result for the solution to \eqref{eq:spec-ibvp-v} in the $t\to\infty$ limit.
\begin{prop}\label{prop:v-H1-t-infty}
Let $v(t,x)$ be the solution to the initial-boundary value problem \eqref{eq:spec-ibvp-v}. Suppose the initial datum $v^{\rm in}\in\mathrm H^1(\Omega)$. Then, there exists a constant $\gamma>0$ such that
\begin{align}\label{eq:asy-limit-v}
\left\Vert v(t,\cdot) - \langle v^{\rm in} \rangle \right\Vert_{\mathrm H^1(\Omega)} \le e^{-\gamma t} \left\Vert v^{\rm in} \right\Vert_{\mathrm H^1(\Omega)}
\qquad \mbox{ for all }\, t>0
\end{align}
where $\langle v^{\rm in} \rangle$ denotes the initial average, i.e.,
\begin{align*}
\langle v^{\rm in} \rangle := \frac{1}{\left\vert \Omega \right\vert} \int_\Omega v^{\rm in}(x)\, {\rm d}x.
\end{align*}
\end{prop}
Proof of the above proposition is standard and is based on the spectral study of the corresponding elliptic problem, see e.g. \cite{Pazy_1983}. Alternatively, one could also use energy methods based on a priori estimates for solutions of parabolic partial differential equations, see e.g. \cite[Chapter 13]{Chipot_2000}. As the argument is quite standard, we omit it.\\
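Though we do not reproduce the proof, the exponential relaxation \eqref{eq:asy-limit-v} is easy to observe numerically. The following sketch (an illustrative aside, not part of the analysis; the grid size and sampling times are arbitrary choices) integrates the one-dimensional Neumann heat equation on $(0,1)$ by explicit finite differences and recovers a decay rate close to the first non-zero Neumann eigenvalue $\mu_2 = \pi^2$.

```python
import math

def neumann_heat_decay_rate(n=50, t1=0.05, t2=0.10):
    """Explicit finite differences for v_t = v_xx on (0,1) with zero
    Neumann data; the initial datum cos(pi*x) decays at the exact rate
    mu_2 = pi^2, the first non-zero Neumann eigenvalue."""
    dx = 1.0 / n
    dt = 0.4 * dx * dx                       # stability: dt <= dx^2 / 2
    v = [math.cos(math.pi * i * dx) for i in range(n + 1)]
    mean = sum(v) / len(v)                   # = 0 for this datum

    def dist_to_mean(u):
        return math.sqrt(dx * sum((x - mean) ** 2 for x in u))

    steps1, steps2 = round(t1 / dt), round(t2 / dt)
    n1 = dist_to_mean(v)
    for k in range(steps2):
        w = v[:]
        for i in range(n + 1):
            # mirrored ghost values encode the Neumann condition v' = 0
            left = v[i - 1] if i > 0 else v[1]
            right = v[i + 1] if i < n else v[n - 1]
            w[i] = v[i] + (dt / dx ** 2) * (left - 2.0 * v[i] + right)
        v = w
        if k + 1 == steps1:
            n1 = dist_to_mean(v)
    n2 = dist_to_mean(v)
    # ||v(t) - <v_in>|| ~ C exp(-mu_2 t): read off mu_2 from two samples
    return math.log(n1 / n2) / ((steps2 - steps1) * dt)

rate = neumann_heat_decay_rate()
```

For this pure-mode initial datum the measured rate matches $\pi^2$ up to the $\mathcal{O}({\rm d}x^2)$ consistency error of the scheme.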
Let us recall certain notions necessary for our proof. Let $\{\varphi_k\}_{k=1}^\infty\subset \mathrm H^1(\Omega)$ denote the collection of Neumann Laplacian eigenfunctions on $\Omega$, i.e., for each $k\in\mathbb{N}$,
\begin{equation}\label{eq:asy-spectral-pb-Neumann-Laplacian}
\begin{aligned}
-\Delta \varphi_k & = \mu_k \varphi_k \qquad \mbox{ in }\Omega,
\\[0.2 cm]
\nabla \varphi_k \cdot {\bf n}(x) & = 0 \qquad \quad \mbox{ on }\partial\Omega,
\end{aligned}
\end{equation}
with $\{\mu_k\}$ denoting the eigenvalues. Recall that the spectrum is discrete, non-negative and has no finite accumulation point, i.e.,
\begin{align*}
0=\mu_1 \le \mu_2 \le \cdots \to \infty.
\end{align*}
The next proposition is a result similar in flavour to Proposition \ref{prop:v-H1-t-infty}, but in a more general parabolic setting with high contrast in density and conductivity coefficients. More precisely, let us consider the heat equation with uniform conductivity in the presence of a defect with high contrast coefficients. We particularly choose the conductivity matrix to be \eqref{eq:A-small-inclusion} and the density coefficient to be \eqref{eq:rho-small-inclusion}. For an unknown $v^\varepsilon(t,x)$, consider the initial-boundary value problem
\begin{equation}\label{eq:spec:small-inclusion}
\begin{aligned}
\rho^\varepsilon(x) \partial_t v^\varepsilon & = \nabla \cdot \Big( A^\varepsilon(x) \nabla v^\varepsilon \Big) \qquad \mbox{ in }(0,\infty)\times\Omega,
\\[0.2 cm]
\nabla v^\varepsilon \cdot {\bf n}(x) & = 0 \quad \qquad\qquad\qquad \qquad\mbox{ on }(0,\infty)\times\partial\Omega,
\\[0.2 cm]
v^\varepsilon(0,x) & = v^{\rm in}(x) \qquad \qquad\qquad \qquad\mbox{ in }\Omega.
\end{aligned}
\end{equation}
We give an asymptotic result for the solution $v^\varepsilon(t,x)$ to \eqref{eq:spec:small-inclusion} in the $t\to\infty$ limit. As before, we are interested in the $\left\Vert \cdot \right\Vert_{\mathrm H^1}$-norm rather than the $\left\Vert \cdot \right\Vert_{\mathrm L^2}$-norm.
\begin{prop}\label{prop:v-eps-H1-t-infty}
Let $v^\varepsilon(t,x)$ be the solution to the initial-boundary value problem \eqref{eq:spec:small-inclusion}. Suppose the initial datum $v^{\rm in}\in\mathrm H^1(\Omega)\cap \mathrm L^\infty(\Omega)$. Then, there exists a constant $\gamma_\varepsilon>0$ such that for all $t>0$, we have
\begin{align}\label{eq:asy-limit-v-eps}
\left\Vert v^\varepsilon(t,\cdot) - \mathfrak{m}^\varepsilon \right\Vert_{\mathrm H^1(\Omega)} \le e^{-\gamma_\varepsilon t} \left( \frac{1}{\varepsilon^{d}} \left\Vert v^{\rm in} \right\Vert_{\mathrm L^2(\Omega)} + \frac{1}{\varepsilon^{\frac{d-2}{2}}} \left\Vert \nabla v^{\rm in} \right\Vert_{\mathrm L^2(\Omega)} \right)
\end{align}
where $\mathfrak{m}^\varepsilon$ denotes the following weighted initial average
\[
\mathfrak{m}^\varepsilon := \frac{1}{\left\vert \Omega \right\vert \langle \rho^\varepsilon \rangle} \int_\Omega \rho^\varepsilon(x) v^{\rm in}(x)\, {\rm d}x
\qquad
\mbox{ with }
\quad
\langle \rho^\varepsilon \rangle = \frac{1}{\left\vert \Omega \right\vert}\int_\Omega \rho^\varepsilon(x)\, {\rm d}x.
\]
\end{prop}
\begin{rem}
From estimate \eqref{eq:asy-limit-v-eps} we see that the right hand side is a product of an exponentially decaying factor and a factor of $\mathcal{O}(\varepsilon^{-d})$. So, if $\gamma_\varepsilon\gtrsim 1$, uniformly in $\varepsilon$, then the solution converges to the weighted initial average in the long time regime. We will demonstrate that the decay rate $\gamma_\varepsilon$ is bounded away from zero uniformly (with respect to $\varepsilon$) in subsection \ref{ssec:schrodinger}.
If the density coefficient $\rho^\varepsilon(x)$ is given by \eqref{eq:rho-small-inclusion}, then
\[
\langle \rho^\varepsilon \rangle = \frac{1}{\left\vert \Omega \right\vert} \left\{ \int_{\Omega\setminus B_\varepsilon} 1\, {\rm d}x + \frac{1}{\varepsilon^d} \int_{B_\varepsilon} 1\, {\rm d}x \right\} = \frac{1}{\left\vert \Omega \right\vert} \left( \left\vert \Omega\setminus B_\varepsilon\right\vert + \frac{\pi^\frac{d}{2}}{\Gamma\left(\frac{d}{2} + 1\right)} \right),
\]
where $\Gamma(\cdot)$ denotes the gamma function; here we used $\frac{1}{\varepsilon^d}\left\vert B_\varepsilon \right\vert = \left\vert B_1 \right\vert = \frac{\pi^{d/2}}{\Gamma\left(\frac{d}{2} + 1\right)}$. Substituting for $\langle \rho^\varepsilon \rangle$ in the expression for the weighted initial average yields
\[
\mathfrak{m}^\varepsilon = \left( \left\vert \Omega\setminus B_\varepsilon\right\vert + \frac{\pi^\frac{d}{2}}{\Gamma\left(\frac{d}{2} + 1\right)} \right)^{-1} \left\{ \int_{\Omega\setminus B_\varepsilon} v^{\rm in}(x)\, {\rm d}x + \frac{1}{\varepsilon^d} \int_{B_\varepsilon} v^{\rm in}(x)\, {\rm d}x \right\}.
\]
Using the assumption that the initial datum $v^{\rm in}$ belongs to $\mathrm H^1(\Omega)\cap \mathrm L^\infty(\Omega)$, and observing that $\mathfrak{m}^\varepsilon$ is an average of $v^{\rm in}$ against the probability measure $\left( \left\vert \Omega \right\vert \langle \rho^\varepsilon \rangle \right)^{-1} \rho^\varepsilon(x)\, {\rm d}x$, we get the following uniform bound (uniform with respect to $\varepsilon$) on the weighted initial average
\[
\left\vert \mathfrak{m}^\varepsilon \right\vert
\le \left\Vert v^{\rm in} \right\Vert_{\mathrm L^\infty(\Omega)} < \infty.
\]
\end{rem}
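As a quick numerical sanity check on the computation above (an illustrative aside; the domain volume $\left\vert\Omega\right\vert = 10$ and dimension $d=3$ are arbitrary choices), one can verify that the $\varepsilon^{-d}$ blow-up of the density exactly offsets the shrinking volume $\left\vert B_\varepsilon \right\vert = \varepsilon^d \left\vert B_1 \right\vert$, so that $\langle\rho^\varepsilon\rangle$ remains bounded as $\varepsilon\to0$:

```python
import math

def unit_ball_volume(d):
    # |B_1| = pi^(d/2) / Gamma(d/2 + 1)
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def mean_density(omega_volume, eps, d):
    # <rho^eps> = ( |Omega \ B_eps| + eps^(-d) |B_eps| ) / |Omega|
    ball_eps = unit_ball_volume(d) * eps ** d
    return (omega_volume - ball_eps + ball_eps / eps ** d) / omega_volume

# as eps -> 0 the mean tends to (|Omega| + |B_1|) / |Omega|, not to infinity
values = [mean_density(omega_volume=10.0, eps=e, d=3) for e in (0.5, 0.1, 0.01)]
```

The limiting value $(\left\vert\Omega\right\vert + \left\vert B_1\right\vert)/\left\vert\Omega\right\vert$ is approached monotonically, in line with the boundedness of $\mathfrak{m}^\varepsilon$ discussed in the remark.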
\begin{proof}[Proof of Proposition \ref{prop:v-eps-H1-t-infty}]
Let $\mu^\varepsilon_k$ and $\varphi^\varepsilon_k$ be the Neumann eigenvalues and eigenfunctions defined as
\begin{equation}\label{eq:spec:eigenpair-small-inclusion}
\begin{aligned}
-\nabla \cdot \Big( A^\varepsilon \nabla \varphi^\varepsilon_k \Big) & = \mu^\varepsilon_k \rho^\varepsilon \varphi^\varepsilon_k \qquad \mbox{ in }\Omega,
\\[0.2 cm]
\nabla \varphi^\varepsilon_k \cdot {\bf n}(x) & = 0 \qquad\qquad \quad \mbox{ on }\partial\Omega,
\end{aligned}
\end{equation}
where the conductivity-density pair $A^\varepsilon(x)$, $\rho^\varepsilon(x)$ is given by \eqref{eq:A-small-inclusion}-\eqref{eq:rho-small-inclusion}. Here again the spectrum is discrete, non-negative and has no finite accumulation point, i.e.,
\begin{align*}
0=\mu^\varepsilon_1 \le \mu^\varepsilon_2 \le \cdots \to \infty.
\end{align*}
The solution $v^\varepsilon(t,x)$ to \eqref{eq:spec:small-inclusion} can be represented in terms of the basis functions $\{\varphi^\varepsilon_k\}_{k=1}^\infty$ as
\begin{align*}
v^\varepsilon(t,x) = \sum_{k=1}^\infty \mathfrak{b}^\varepsilon_k(t) \varphi^\varepsilon_k(x)
\qquad
\mbox{ with }
\quad
\mathfrak{b}^\varepsilon_k(t) = \int_\Omega v^\varepsilon(t,x)\rho^\varepsilon(x)\varphi^\varepsilon_k(x)\, {\rm d}x.
\end{align*}
The general representation formula for the solution to \eqref{eq:spec:small-inclusion} becomes
\begin{align}\label{eq:spec:represent-v-eps}
v^\varepsilon(t,x) = \sum_{k=1}^\infty \mathfrak{b}^\varepsilon_k(0) e^{-\mu^\varepsilon_kt} \varphi^\varepsilon_k(x) = \mathfrak{b}^\varepsilon_1(0) \varphi^\varepsilon_1(x) + \sum_{k=2}^\infty \mathfrak{b}^\varepsilon_k(0) e^{-\mu^\varepsilon_kt} \varphi^\varepsilon_k(x).
\end{align}
Since the first eigenfunction is constant, namely $\varphi^\varepsilon_1 \equiv \left( \left\vert \Omega \right\vert \langle \rho^\varepsilon \rangle \right)^{-\frac12}$, the first term in the above representation is nothing but the weighted initial average:
\begin{align*}
\varphi^\varepsilon_1\mathfrak{b}^\varepsilon_1(0) = \left\vert\varphi^\varepsilon_1\right\vert^2 \int_\Omega v^{\rm in}(x) \rho^\varepsilon(x) \, {\rm d}x = \frac{1}{\left\vert \Omega \right\vert \langle \rho^\varepsilon\rangle} \int_\Omega v^{\rm in}(x) \rho^\varepsilon(x) \, {\rm d}x =: \mathfrak{m}^\varepsilon.
\end{align*}
The representation \eqref{eq:spec:represent-v-eps} says that the weighted $\mathrm L^2(\Omega)$-norm of $v^\varepsilon(t,x)$ is
\begin{align*}
\left\Vert v^\varepsilon(t,\cdot) \right\Vert^2_{\mathrm L^2(\Omega;\rho^\varepsilon)} = \left\vert \mathfrak{b}^\varepsilon_1(0) \right\vert^2 + \sum_{k=2}^\infty \left\vert \mathfrak{b}^\varepsilon_k(0) \right\vert^2 e^{-2\mu^\varepsilon_kt}
\end{align*}
where we have used the following notation for the weighted Lebesgue space norm:
\[
\left\Vert w \right\Vert^2_{\mathrm L^2(\Omega;\rho^\varepsilon)} := \int_\Omega \left\vert w(x) \right\vert^2 \rho^\varepsilon(x)\, {\rm d}x.
\]
The density coefficient $\rho^\varepsilon(x)$ defined in \eqref{eq:rho-small-inclusion} appears as the weight function in the above Lebesgue space. It follows that
\[
\left\Vert w \right\Vert_{\mathrm L^2(\Omega)} \lesssim \left\Vert w \right\Vert_{\mathrm L^2(\Omega;\rho^\varepsilon)} \lesssim \frac{1}{\varepsilon^{d}} \left\Vert w \right\Vert_{\mathrm L^2(\Omega)}
\]
thanks to the definition of $\rho^\varepsilon(x)$ in \eqref{eq:rho-small-inclusion}. Computing the weighted norm of the difference $v^\varepsilon(t,x) - \mathfrak{m}^\varepsilon$, we get
\begin{equation*}
\begin{aligned}
\left\Vert v^\varepsilon(t,\cdot) - \mathfrak{m}^\varepsilon \right\Vert^2_{\mathrm L^2(\Omega;\rho^\varepsilon)}
= \sum_{k=2}^\infty \left\vert \mathfrak{b}^\varepsilon_k(0) \right\vert^2 e^{-2\mu^\varepsilon_kt}
\le e^{-2\mu^\varepsilon_2t} \left\Vert v^{\rm in} \right\Vert^2_{\mathrm L^2(\Omega;\rho^\varepsilon)}
\end{aligned}
\end{equation*}
Hence we deduce
\begin{align}\label{eq:v-eps-L2-t-infty}
\left\Vert v^\varepsilon(t,\cdot) - \mathfrak{m}^\varepsilon \right\Vert^2_{\mathrm L^2(\Omega)}
\le
e^{-2\mu^\varepsilon_2t} \frac{1}{\varepsilon^{2d}} \left\Vert v^{\rm in} \right\Vert^2_{\mathrm L^2(\Omega)}
\end{align}
Next, it follows from the representation formula \eqref{eq:spec:represent-v-eps} that
\begin{align*}
\sqrt{A^\varepsilon(x)} \nabla v^\varepsilon(t,x) = \sum_{k=2}^\infty \mathfrak{b}^\varepsilon_k(0) e^{-\mu^\varepsilon_kt} \sqrt{A^\varepsilon(x)} \nabla \varphi^\varepsilon_k(x)
\end{align*}
which in turn implies
\begin{align*}
\left\Vert \sqrt{A^\varepsilon} \nabla v^\varepsilon \right\Vert^2_{\mathrm L^2(\Omega)}
& = \sum_{k=2}^\infty \left\vert \mathfrak{b}^\varepsilon_k(0) \right\vert^2 \mu^\varepsilon_k\, e^{-2\mu^\varepsilon_kt}
\le e^{-2\mu^\varepsilon_2t} \sum_{k=2}^\infty \left\vert \mathfrak{b}^\varepsilon_k(0) \right\vert^2 \mu^\varepsilon_k
\\[0.2 cm]
&
= e^{-2\mu^\varepsilon_2t} \left\Vert \sqrt{A^\varepsilon} \nabla v^{\rm in} \right\Vert^2_{\mathrm L^2(\Omega)}
\le e^{-2\mu^\varepsilon_2t}\frac{1}{\varepsilon^{d-2}} \left\Vert \nabla v^{\rm in} \right\Vert^2_{\mathrm L^2(\Omega)}
\end{align*}
Furthermore, as $\left\vert \xi \right\vert^2 \lesssim A^\varepsilon(x)\xi\cdot\xi$ uniformly in $x$ and in $\varepsilon\le1$ (recall $d\ge2$), it follows that
\begin{align*}
\left\Vert \nabla v^\varepsilon \right\Vert^2_{\mathrm L^2(\Omega)}
\le e^{-2\mu^\varepsilon_2t} \frac{1}{\varepsilon^{d-2}} \left\Vert \nabla v^{\rm in} \right\Vert^2_{\mathrm L^2(\Omega)}
\end{align*}
Repeating the above computations for the difference $v^\varepsilon(t,x) - \mathfrak{m}^\varepsilon$, we obtain
\begin{align}\label{eq:nabla-v-eps-L2-t-infty}
\left\Vert \nabla \left( v^\varepsilon - \mathfrak{m}^\varepsilon \right) \right\Vert^2_{\mathrm L^2(\Omega)}
\le e^{-2\mu^\varepsilon_2t} \frac{1}{\varepsilon^{d-2}} \left\Vert \nabla v^{\rm in} \right\Vert^2_{\mathrm L^2(\Omega)}
\end{align}
Gathering the inequalities \eqref{eq:v-eps-L2-t-infty}-\eqref{eq:nabla-v-eps-L2-t-infty} together proves the proposition with the constant $\gamma_\varepsilon = \mu^\varepsilon_2$, i.e., the first non-zero Neumann eigenvalue in the spectral problem \eqref{eq:spec:eigenpair-small-inclusion} on $\Omega$ with the high contrast density-conductivity pair $(\rho^\varepsilon(x), A^\varepsilon(x))$.
\end{proof}
\subsection{Reduction to a Schr\"odinger operator}\label{ssec:schrodinger}
The constants $\gamma$ and $\gamma_\varepsilon$ in Propositions \ref{prop:v-H1-t-infty} and \ref{prop:v-eps-H1-t-infty} respectively give the rate of convergence to equilibrium. As the proofs in the previous subsection suggest, these rates are nothing but the first non-zero eigenvalues of the associated Neumann eigenvalue problems.
The decay rate $\gamma_\varepsilon$ in Proposition \ref{prop:v-eps-H1-t-infty} may depend on the regularisation parameter $\varepsilon$. In our setting, $\varepsilon$ is nothing but the radius of the inclusion. Hence, to understand the behaviour of $\gamma_\varepsilon$ in terms of $\varepsilon$, we need to study the perturbations in the eigenvalues caused by the presence of inhomogeneities whose conductivities and densities differ from those of the background. More importantly, we need to understand the spectrum of transmission problems with high contrast conductivities and densities. The spectral analysis of elliptic operators with such conductivity matrices has been carried out extensively in the literature in the context of electric impedance tomography -- see \cite{Ammari_2003, Ammari_2009} and references therein for further details. In \cite{Ammari_2003}, for example, the authors give an asymptotic expansion for the eigenvalues $\mu^\varepsilon_k$ in terms of the regularising parameter $\varepsilon$ (even in the case of multiplicities). We refer the readers to \cite[Eq.(23), page 74]{Ammari_2003} for the precise expansion. For high contrast conductivities such as $A^\varepsilon$, we refer to the concluding remarks in \cite[pages 74-75]{Ammari_2003} and references therein. Another important point to note is that the above-mentioned works of Ammari and co-authors do not treat high contrast densities in their spectral problems. For our setting -- more specifically for the spectral problem \eqref{eq:spec:eigenpair-small-inclusion} -- the readers may consult the review paper of Chechkin \cite{Chechkin_2006}, which treats in detail a spectral problem in a related setting and gives exhaustive references to the literature where similar spectral problems are addressed.
Rather than deducing the behaviour of the first non-zero eigenvalue of the spectral problem \eqref{eq:spec:eigenpair-small-inclusion} from \cite{Chechkin_2006}, we propose an alternative approach. Studying \eqref{eq:spec:eigenpair-small-inclusion} is equivalent to studying the spectral problem for the operator
\begin{align}\label{eq:schrod:operator-L}
\mathcal{L}\, h := \frac{1}{\rho^\varepsilon(x)} \nabla \cdot \Big( A^\varepsilon(x) \nabla h(x) \Big).
\end{align}
The idea is to show that the study of the spectral problem for $\mathcal{L}$ is analogous to the study of the spectral problem for a Schr\"odinger-type operator. The essential calculations to follow are inspired by the calculations in \cite[section 4.9, page 125]{Pavliotis_2014}. Note that the operator $\mathcal{L}$ defined by \eqref{eq:schrod:operator-L} with zero Neumann boundary condition is a symmetric operator in $\mathrm L^2(\Omega;\rho^\varepsilon)$, i.e.,
\[
\int_{\Omega} \mathcal{L}\, h_1(x)\, h_2(x)\, \rho^\varepsilon(x)\, {\rm d}x = \int_{\Omega} \mathcal{L}\, h_2(x)\, h_1(x)\, \rho^\varepsilon(x)\, {\rm d}x
\]
for all $h_1, h_2$ in the domain of $\mathcal{L}$.\\
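The weighted symmetry of $\mathcal{L}$ survives discretisation, which offers a simple way to test it. The sketch below (an illustrative aside, not part of the argument; the grid size and coefficient values are arbitrary) assembles a one-dimensional zero-flux analogue of $\mathcal{L}h = \rho^{-1}(a\,h')'$ and checks the identity above: the weight $\rho$ cancels the $\rho^{-1}$ in $\mathcal{L}$, leaving a symmetric divergence form.

```python
def apply_L(h, a, rho, dx):
    """Discrete L h = (1/rho) d/dx( a dh/dx ) on a uniform grid of cells,
    with vanishing fluxes at the two boundaries (Neumann condition)."""
    n = len(h)
    # fluxes at the interior cell interfaces; boundary fluxes are zero
    flux = [a[i] * (h[i + 1] - h[i]) / dx for i in range(n - 1)]
    flux = [0.0] + flux + [0.0]
    return [(flux[i + 1] - flux[i]) / (dx * rho[i]) for i in range(n)]

def weighted_inner(u, v, rho, dx):
    # discrete analogue of the rho-weighted L^2 inner product
    return dx * sum(ui * vi * ri for ui, vi, ri in zip(u, v, rho))

n, dx = 40, 1.0 / 40
# rough mimic of high contrast: large density / conductivity in a few cells
rho = [100.0 if 18 <= i < 22 else 1.0 for i in range(n)]
a = [50.0 if 18 <= i < 22 else 1.0 for i in range(n)]
h1 = [(i * dx) ** 2 for i in range(n)]
h2 = [1.0 / (1.0 + i * dx) for i in range(n)]

lhs = weighted_inner(apply_L(h1, a, rho, dx), h2, rho, dx)
rhs = weighted_inner(apply_L(h2, a, rho, dx), h1, rho, dx)
```

Summation by parts with vanishing boundary fluxes shows the two sides agree exactly (up to rounding), for any positive choice of $\rho$ and $a$.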
Let us now define the operator
\begin{align}\label{eq:schrod:operator-H}
\mathcal{H}\, h := \sqrt{\rho^\varepsilon(x)} \, \mathcal{L}\left(\frac{h}{\sqrt{\rho^\varepsilon(x)}} \right) = \frac{1}{\sqrt{\rho^\varepsilon(x)}} \nabla\cdot \left( \rho^\varepsilon(x) \Sigma^\varepsilon(x) \nabla \left(\frac{h}{\sqrt{\rho^\varepsilon(x)}}\right) \right),
\end{align}
with the coefficient
\[
\Sigma^\varepsilon(x) := \frac{A^\varepsilon(x)}{\rho^\varepsilon(x)}.
\]
An algebraic manipulation yields
\begin{align*}
\mathcal{H}\, h = \nabla\cdot \Big( \Sigma^\varepsilon(x) \nabla h \Big) + W^\varepsilon(x)\, h
\end{align*}
with
\begin{align*}
W^\varepsilon(x) := \frac{1}{\sqrt{\rho^\varepsilon(x)}} \nabla \cdot \left( A^\varepsilon(x) \nabla \left( \frac{1}{\sqrt{\rho^\varepsilon(x)}} \right) \right).
\end{align*}
In our setting, with the high contrast coefficients $A^\varepsilon$ and $\rho^\varepsilon$ from \eqref{eq:A-small-inclusion}-\eqref{eq:rho-small-inclusion}, the coefficients $\Sigma^\varepsilon$ and $W^\varepsilon$ become
\begin{equation*}
\Sigma^\varepsilon(x) =
\left\{
\begin{array}{ll}
{\rm Id} & \quad \mbox{ for }x\in\Omega\setminus B_\varepsilon,\\[0.2 cm]
\varepsilon^2\, \frac{\beta}{\eta}\left(\frac{x}{\varepsilon}\right) & \quad \mbox{ for }x\in B_\varepsilon.
\end{array}\right.
\end{equation*}
and
\begin{equation*}
W^\varepsilon(x) =
\left\{
\begin{array}{ll}
0 & \quad \mbox{ for }x\in\Omega\setminus B_\varepsilon,\\[0.2 cm]
\varepsilon^2\, \frac{1}{\sqrt{\eta}}\left(\frac{x}{\varepsilon}\right) \nabla \cdot \left(\beta\left(\frac{x}{\varepsilon}\right) \nabla \left(\frac{1}{\sqrt{\eta}} \left(\frac{x}{\varepsilon}\right) \right) \right) & \quad \mbox{ for }x\in B_\varepsilon.
\end{array}\right.
\end{equation*}
By definition \eqref{eq:schrod:operator-H}, the operators $\mathcal{L}$ and $\mathcal{H}$ are unitarily equivalent. Hence, they have the same eigenvalues. Note that the operator $\mathcal{H}$ is a Schr\"odinger-type operator where the coefficients are of high contrast.\\
By the Rayleigh-Ritz criterion, we have the characterisation
\begin{equation*}
\displaystyle \mu_k^\varepsilon \le \frac{\int_\Omega \Sigma^\varepsilon(x) \nabla \varphi(x)\cdot \nabla \varphi(x)\, {\rm d}x + \int_\Omega W^\varepsilon(x) \left\vert \varphi(x) \right\vert^2\, {\rm d}x}{\int_\Omega \left\vert \varphi(x) \right\vert^2\, {\rm d}x}
\end{equation*}
where $\varphi\not\equiv0$ is orthogonal to the first $k-1$ Neumann eigenfunctions $\left\{ \varphi^\varepsilon_1, \dots, \varphi^\varepsilon_{k-1}\right\}$. In particular, for $k=2$ (i.e. the first non-zero eigenvalue) we have
\begin{align}\label{eq:spec:Rayleight-Ritz-high-contrast}
\left\Vert \varphi \right\Vert^2_{\mathrm L^2(\Omega)}\, \mu_2^\varepsilon \le \int_{\Omega} \Sigma^\varepsilon(x)\nabla \varphi(x) \cdot \nabla \varphi(x) \, {\rm d}x + \int_{B_\varepsilon} W^\varepsilon(x)\left\vert \varphi(x) \right\vert^2 \, {\rm d}x
\end{align}
with $\varphi\in\mathrm H^1(\Omega)$ such that
\[
\int_\Omega \varphi(x)\, {\rm d}x = 0.
\]
Note that we have used the fact that $W^\varepsilon(x)$ is supported on $B_\varepsilon$. Note further that
\begin{align}\label{eq:spec:W-eps-O1}
W^\varepsilon(x) = \varepsilon^2\, \frac{1}{\sqrt{\eta}}\left(\frac{x}{\varepsilon}\right) \left\{ \frac{1}{\varepsilon^2} \beta\left(\frac{x}{\varepsilon}\right) : \left[\nabla^2\left(\frac{1}{\sqrt{\eta}}\right)\right]\left(\frac{x}{\varepsilon}\right) + \frac{1}{\varepsilon^2} \left[\nabla\beta \right]\left(\frac{x}{\varepsilon}\right) \cdot \left[\nabla\left(\frac{1}{\sqrt{\eta}}\right)\right]\left(\frac{x}{\varepsilon}\right)\right\} = \mathcal{O}(1)
\end{align}
if we assume $\beta\in\mathrm W^{1,\infty}(B_1;{\mathbb R}^{d\times d})$ and $\eta\in\mathrm W^{2,\infty}(B_1)$.\\
Note further that the spectral problem \eqref{eq:asy-spectral-pb-Neumann-Laplacian} comes with the following characterisation of the first non-zero eigenvalue (again by the Rayleigh-Ritz criterion)
\begin{align}\label{eq:spec:Rayleight-Ritz}
\left\Vert \varphi \right\Vert^2_{\mathrm L^2(\Omega)}\, \mu_2 \le \int_{\Omega} \nabla \varphi(x) \cdot \nabla \varphi(x) \, {\rm d}x
\end{align}
with $\varphi\in\mathrm H^1(\Omega)$ such that
\[
\int_\Omega \varphi(x)\, {\rm d}x = 0.
\]
Subtracting \eqref{eq:spec:Rayleight-Ritz-high-contrast} from \eqref{eq:spec:Rayleight-Ritz} (evaluated at a common test function $\varphi$) formally yields
\begin{align*}
\left\Vert \varphi \right\Vert^2_{\mathrm L^2(\Omega)}\, \left( \mu_2 - \mu^\varepsilon_2 \right) \le \int_{B_\varepsilon} \left( {\rm Id} - \varepsilon^2 \frac{1}{\eta} \left(\frac{x}{\varepsilon}\right) \beta \left(\frac{x}{\varepsilon}\right) \right)\nabla \varphi(x) \cdot \nabla\varphi(x) \, {\rm d}x + \int_{B_\varepsilon} W^\varepsilon(x)\left\vert \varphi(x) \right\vert^2 \, {\rm d}x.
\end{align*}
Let us now take the test function $\varphi$ to be the normalised eigenfunction $\varphi_2(x)$ associated with the first non-zero eigenvalue $\mu_2$ for the Neumann Laplacian:
\begin{align*}
\left\vert \mu_2 - \mu^\varepsilon_2 \right\vert \le \left\vert \int_{B_\varepsilon} \left( {\rm Id} - \varepsilon^2 \frac{1}{\eta} \left(\frac{x}{\varepsilon}\right) \beta \left(\frac{x}{\varepsilon}\right) \right)\nabla \varphi_2(x) \cdot \nabla\varphi_2(x) \, {\rm d}x + \int_{B_\varepsilon} W^\varepsilon(x)\left\vert \varphi_2(x) \right\vert^2 \, {\rm d}x\right\vert.
\end{align*}
Using the observation \eqref{eq:spec:W-eps-O1} that the potential $W^\varepsilon$ is of $\mathcal{O}(1)$ and that the Neumann eigenfunctions are bounded in $\mathrm W^{p,\infty}$ for any $p<\infty$ \cite{Grieser_2002}, we have proved that
\[
\left\vert \mu_2 - \mu^\varepsilon_2 \right\vert \lesssim \varepsilon^d.
\]
In this subsection, we have essentially proved the following result.
\begin{prop}\label{prop:first-non-zero-eigenvalue}
Suppose that the high contrast conductivity-density pair $A^\varepsilon, \rho^\varepsilon$ is given by \eqref{eq:A-small-inclusion}-\eqref{eq:rho-small-inclusion} with $\beta\in\mathrm W^{1,\infty}(B_1;{\mathbb R}^{d\times d})$ and $\eta\in\mathrm W^{2,\infty}(B_1)$. Let $\mu^\varepsilon_2$ and $\mu_2$ be the first non-zero eigenvalues associated with the Neumann spectral problems \eqref{eq:spec:eigenpair-small-inclusion} and \eqref{eq:asy-spectral-pb-Neumann-Laplacian} respectively. Then we have
\begin{align}\label{eq:prop-eigenvalues-close}
\left\vert \mu_2 - \mu^\varepsilon_2 \right\vert \lesssim \varepsilon^d.
\end{align}
\end{prop}
Hence, as a corollary to the above result, we can deduce that the decay rate for the homogeneous transient problem \eqref{eq:spec-ibvp-v} and that for the high contrast transient problem \eqref{eq:spec:small-inclusion} are close to each other in the $\varepsilon\ll1$ regime.
\section{Near-cloaking result}\label{sec:near-cloak-proof}
In this section, we will prove the main result of this paper, i.e., Theorem \ref{thm:near-cloak}. To that end, we first consider the steady-state problem associated with the homogeneous heat equation \eqref{eq:intro-u_hom}. More precisely, for the unknown $u_{\rm hom}^{\rm eq}(x)$, consider
\begin{equation}\label{eq:ibvp-u_hom-equil}
\begin{aligned}
-\Delta u_{\rm hom}^{\rm eq}(x) & = f(x) \qquad \mbox{ in }\Omega,
\\[0.2 cm]
\nabla u_{\rm hom}^{\rm eq} \cdot {\bf n}(x) & = g(x) \qquad\mbox{ on }\partial\Omega,
\\[0.2 cm]
\int_\Omega u_{\rm hom}^{\rm eq}(x)\, {\rm d}x & = 0.
\end{aligned}
\end{equation}
Note that the last line of \eqref{eq:ibvp-u_hom-equil} ensures that $u_{\rm hom}^{\rm eq}(x)$ is determined uniquely. Here we assume that the source terms are admissible in the sense of \eqref{eq:thm-near-cloak-compatible}; this guarantees the solvability of the above elliptic boundary value problem.\\
Next we record a corollary to Proposition \ref{prop:v-H1-t-infty} which says how quickly the solution $u_{\rm hom}(t,x)$ to the homogeneous problem \eqref{eq:intro-u_hom} tends to its equilibrium state.
\begin{cor}\label{cor:uhom-time-decay}
Let $u_{\rm hom}(t,x)$ be the solution to \eqref{eq:intro-u_hom} and let $u_{\rm hom}^{\rm eq}(x)$ be the solution to the steady-state problem \eqref{eq:ibvp-u_hom-equil}. Suppose the source terms $f(x)$, $g(x)$ and the initial datum $u^{\rm in}(x)$ are admissible in the sense of \eqref{eq:thm-near-cloak-compatible}-\eqref{eq:thm-near-cloak-compatible-initial}. Then
\begin{align}\label{eq:cor:u-hom-asy-limit}
\left\Vert u_{\rm hom}(t,\cdot) - u_{\rm hom}^{\rm eq}(\cdot) \right\Vert_{\mathrm H^1(\Omega)} \le e^{-\gamma t} \left\Vert u^{\rm in} - u_{\rm hom}^{\rm eq} \right\Vert_{\mathrm H^1(\Omega)}
\end{align}
for some positive constant $\gamma$.
\end{cor}
\begin{proof}
Define $w(t,x):= u_{\rm hom}(t,x) - u_{\rm hom}^{\rm eq}(x)$. This function satisfies the evolution equation
\begin{equation*}
\begin{aligned}
\partial_t w & = \Delta w \qquad \qquad \qquad\qquad \mbox{ in }(0,\infty)\times\Omega,
\\[0.2 cm]
\nabla w \cdot {\bf n}(x) & = 0 \quad \qquad \qquad \qquad\qquad \mbox{ on }(0,\infty)\times\partial\Omega,
\\[0.2 cm]
w(0,x) & = u^{\rm in}(x) - u_{\rm hom}^{\rm eq}(x) \qquad\qquad \mbox{ in }\Omega,
\end{aligned}
\end{equation*}
which is the same as \eqref{eq:spec-ibvp-v}. The estimate \eqref{eq:cor:u-hom-asy-limit} is simply deduced from Proposition \ref{prop:v-H1-t-infty} (see in particular \eqref{eq:asy-limit-v}).
\end{proof}
Now we record a result, as a corollary to Proposition \ref{prop:v-eps-H1-t-infty}, demonstrating how quickly the solution $u^\varepsilon(t,x)$ to the defect problem with high contrast coefficients \eqref{eq:intro:small-inclusion} tends to its equilibrium state. Consider the steady-state problem associated with the defect problem \eqref{eq:intro:small-inclusion}. More precisely, for the unknown $u^\varepsilon_{\rm eq}(x)$, consider the elliptic boundary value problem
\begin{equation}\label{eq:intro-u_eps-equil}
\begin{aligned}
-\nabla \cdot \Big( A^\varepsilon(x) \nabla u^\varepsilon_{\rm eq} \Big) & = f(x) \qquad \mbox{ in }\Omega,
\\[0.2 cm]
\nabla u^\varepsilon_{\rm eq} \cdot {\bf n}(x) & = g(x) \qquad\mbox{ on }\partial\Omega,
\\[0.2 cm]
\int_\Omega \rho^\varepsilon(x) u^\varepsilon_{\rm eq}(x)\, {\rm d}x & = 0.
\end{aligned}
\end{equation}
Note that the normalisation condition in \eqref{eq:intro-u_eps-equil} makes use of the density coefficient $\rho^\varepsilon(x)$ defined by \eqref{eq:rho-small-inclusion}.
\begin{cor}\label{cor:ueps-time-decay}
Let $u^\varepsilon(t,x)$ be the solution to \eqref{eq:intro:small-inclusion} and let $u^\varepsilon_{\rm eq}(x)$ be the solution to the steady-state problem \eqref{eq:intro-u_eps-equil}. Suppose the source terms $f(x)$, $g(x)$ and the initial datum $u^{\rm in}(x)$ are admissible in the sense of \eqref{eq:thm-near-cloak-compatible}-\eqref{eq:thm-near-cloak-compatible-initial}. Suppose further that ${\rm supp}\, u^{\rm in}\subset \Omega \setminus B_2$. Then
\begin{equation}
\begin{aligned}\label{eq:cor:u-eps-asy-limit}
\left\Vert u^\varepsilon(t,\cdot) - u^\varepsilon_{\rm eq}(\cdot) \right\Vert_{\mathrm H^1(\Omega)} &
\\
& \le e^{-\gamma_\varepsilon t} \left( \frac{1}{\varepsilon^{d}} \left\Vert u^{\rm in} - u^\varepsilon_{\rm eq} \right\Vert_{\mathrm L^2(\Omega)} + \frac{1}{\varepsilon^{\frac{d-2}{2}}} \left\Vert \nabla \left( u^{\rm in} - u^\varepsilon_{\rm eq} \right) \right\Vert_{\mathrm L^2(\Omega)} \right).
\end{aligned}
\end{equation}
\end{cor}
\begin{proof}
Define a function $w^\varepsilon(t,x):= u^\varepsilon(t,x) - u^\varepsilon_{\rm eq}(x)$. We have that the function $w^\varepsilon(t,x)$ satisfies the evolution equation
\begin{equation*}
\begin{aligned}
\rho^\varepsilon(x)\partial_t w^\varepsilon & = \nabla \cdot \Big( A^\varepsilon(x) \nabla w^\varepsilon \Big) \qquad \mbox{ in }(0,\infty)\times\Omega,
\\[0.2 cm]
\nabla w^\varepsilon \cdot {\bf n}(x) & = 0 \quad \qquad\qquad\qquad \qquad\mbox{ on }(0,\infty)\times\partial\Omega,
\\[0.2 cm]
w^\varepsilon(0,x) & = u^{\rm in}(x) - u^\varepsilon_{\rm eq}(x) \qquad \qquad\qquad \qquad\mbox{ in }\Omega,
\end{aligned}
\end{equation*}
which is the same as \eqref{eq:spec:small-inclusion}. By assumption, the initial datum $u^{\rm in}$ is supported away from $B_2$. This along with the admissibility assumption \eqref{eq:thm-near-cloak-compatible-initial} implies that the weighted average $\mathfrak{m}^\varepsilon$ defined in Proposition \ref{prop:v-eps-H1-t-infty} vanishes. Then the estimate \eqref{eq:cor:u-eps-asy-limit} is simply deduced from Proposition \ref{prop:v-eps-H1-t-infty} (see in particular \eqref{eq:asy-limit-v-eps}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:near-cloak}]
From the triangle inequality, we have
\begin{align*}
\left\Vert u^\varepsilon(t,\cdot) - u_{\rm hom}(t,\cdot) \right\Vert_{\mathrm H^{\frac12}(\partial\Omega)}
& \le \left\Vert u^\varepsilon(t,\cdot) - u^\varepsilon_{\rm eq}(\cdot) \right\Vert_{\mathrm H^{\frac12}(\partial\Omega)}
+ \left\Vert u^\varepsilon_{\rm eq} - u^{\rm eq}_{\rm hom} \right\Vert_{\mathrm H^{\frac12}(\partial\Omega)}
\\[0.2 cm]
& \quad + \left\Vert u^{\rm eq}_{\rm hom}(\cdot) - u_{\rm hom}(t,\cdot) \right\Vert_{\mathrm H^{\frac12}(\partial\Omega)}.
\end{align*}
The boundary trace inequality gives the existence of a constant $\mathfrak{c}_{\Omega}$ -- depending only on the domain $\Omega$ -- such that
\begin{align*}
\left\Vert u^\varepsilon(t,\cdot) - u_{\rm hom}(t,\cdot) \right\Vert_{\mathrm H^{\frac12}(\partial\Omega)}
& \le \mathfrak{c}_\Omega \left\Vert u^\varepsilon(t,\cdot) - u^\varepsilon_{\rm eq}(\cdot) \right\Vert_{\mathrm H^1(\Omega)}
+ \left\Vert u^\varepsilon_{\rm eq} - u^{\rm eq}_{\rm hom} \right\Vert_{\mathrm H^{\frac12}(\partial\Omega)}
\\[0.2 cm]
& \quad + \mathfrak{c}_\Omega \left\Vert u^{\rm eq}_{\rm hom}(\cdot) - u_{\rm hom}(t,\cdot) \right\Vert_{\mathrm H^1(\Omega)}.
\end{align*}
Using the results of Corollaries \ref{cor:uhom-time-decay} and \ref{cor:ueps-time-decay}, together with the DtN estimate from \cite[Lemma 2.2, page 305]{Friedman_1989} (see also \cite[Proposition 1]{Kohn_2008}), we get that the right-hand side of the above inequality is bounded from above by
\begin{align*}
& \mathfrak{c}_\Omega
e^{-\gamma_\varepsilon t} \left( \frac{1}{\varepsilon^{d}} \left\Vert u^{\rm in} - u^\varepsilon_{\rm eq} \right\Vert_{\mathrm L^2(\Omega)} + \frac{1}{\varepsilon^{\frac{d-2}{2}}} \left\Vert \nabla \left( u^{\rm in} - u^\varepsilon_{\rm eq} \right) \right\Vert_{\mathrm L^2(\Omega)} \right)
\\[0.2 cm]
&+ \varepsilon^d \left( \left\Vert f\right\Vert_{\mathrm L^2(\Omega)} + \left\Vert g\right\Vert_{\mathrm L^2(\partial\Omega)} \right)
+ \mathfrak{c}_\Omega e^{-\gamma t} \left\Vert u^{\rm in} - u_{\rm hom}^{\rm eq} \right\Vert_{\mathrm H^1(\Omega)}.
\end{align*}
Hence there exists a time instant $\mathit{T}$ such that for all $t\ge\mathit{T}$ we indeed have the estimate \eqref{eq:thm:H12-estimate}.
\end{proof}
\begin{rem}\label{rem:near-cloak-m-eps-avg}
We make some observations on why the admissibility assumption \eqref{eq:thm-near-cloak-compatible-initial} and the assumption on the support of the initial datum in Corollary \ref{cor:ueps-time-decay} were essential to our proof. In the absence of these assumptions, in the proof of Theorem \ref{thm:near-cloak}, we will have to show that
\begin{align}\label{eq:near-cloak-remark-m-eps-avg}
\lim_{\varepsilon\to0} \left\vert \mathfrak{m}^\varepsilon - \langle u^{\rm in} \rangle \right\vert = 0.
\end{align}
Let us compute the difference
\begin{align*}
\mathfrak{m}^\varepsilon - \langle u^{\rm in} \rangle
& = \left( \left\vert \Omega\setminus B_\varepsilon\right\vert + \frac{\pi^\frac{d}{2}}{\Gamma\left(\frac{d}{2} + 1\right)} \right)^{-1} \left\{ \int_{\Omega\setminus B_\varepsilon} u^{\rm in}(x)\, {\rm d}x + \frac{1}{\varepsilon^d} \int_{B_\varepsilon} u^{\rm in}(x)\, {\rm d}x \right\}
\\
& \quad - \frac{1}{\left\vert \Omega \right\vert} \int_{\Omega} u^{\rm in}(x)\, {\rm d}x.
\end{align*}
We have not managed to characterise all initial data that guarantee the asymptotic behaviour \eqref{eq:near-cloak-remark-m-eps-avg}. The difficulty of this task becomes apparent if one takes initial data supported away from $B_1$ but not satisfying the zero-mean assumption \eqref{eq:thm-near-cloak-compatible-initial}. Note that our assumption on the initial data in Corollary \ref{cor:ueps-time-decay} guarantees that the above difference is always zero, irrespective of the value of the parameter $\varepsilon$.
\end{rem}
\begin{rem}\label{rem:near-cloak-full-space}
Our choice of the domain $\Omega$ containing $B_2$ is arbitrary and Corollary \ref{cor:diff-cloak-hom} asserts that for any such arbitrary choice, the distance between the solutions $u_{\rm hom}$ and $u_{\rm cl}$ (measured in the $\mathrm H^\frac12(\partial\Omega)$-norm) can be made as small as we wish provided we engineer appropriate cloaking coefficients (for instance via a homogenization approach) -- see \eqref{eq:rho-cloak-choice} and \eqref{eq:A-cloak-choice} -- in the annulus
$B_2\setminus B_1$. This is the notion of \emph{near-cloak}. Unlike the \emph{perfect cloaking} strategies which demand equality between $u_{\rm hom}$ and $u_{\rm cl}$ everywhere outside $B_2$, near-cloak strategies only ask for them to be close in certain norm topologies. Near-cloaking strategies are what matters in practice.\\
We can still pose the thermal cloaking question in full space ${\mathbb R}^d$. In this scenario, the norm of choice becomes the one in $\mathrm H^1_{\rm loc}({\mathbb R}^d\setminus B_2)$. More precisely, for any compact set $K\subset {\mathbb R}^d\setminus B_2$, we can prove
\[
\left\Vert u^\varepsilon(t,\cdot) - u_{\rm hom}(t,\cdot) \right\Vert_{\mathrm H^1(K)} \lesssim \varepsilon^{\frac{d}{2}} \qquad \mbox{ for }t\gg 1.
\]
\end{rem}
\subsection{Layered cloaks}\label{ssec:layer-cloak}
Advancing an idea from \cite{Gralak_2016}, we develop a transformation media theory for thermal layered cloaks, which are of practical importance in thin-film solar cells for energy harvesting in the photovoltaic industry. The basic principle behind this construction is the following observation.
\begin{prop}\label{prop:layer-cloak}
Let the spatial domain be $\Omega:=(-3,3)^2$. Let the density-conductivity pair $\rho\in\mathrm L^\infty(\Omega;{\mathbb R})$, $A\in\mathrm L^\infty(\Omega;{\mathbb R}^{2\times 2})$ be $(-3,3)$-periodic in the $x_1$ variable. Consider a smooth invertible map $\mathtt{f}:{\mathbb R}\to{\mathbb R}$ such that $\mathtt{f}(x_2) = x_2$ for $\vert x_2\vert>2$. Assume further that $\mathtt{f}'(x_2)\ge C>0$ for a.e. $x_2\in(-3,3)$. Take the mapping $\mathbb{F}:\Omega\to\Omega$ defined by $\mathbb{F}(x_1,x_2) = (x_1,\mathtt{f}(x_2))$. Then $u(t,x_1,x_2)$ is a $(-3,3)$-periodic solution (in the $x_1$ variable) to
\begin{align*}
\rho(x) \partial_t u = \nabla_x \cdot \Big( A(x) \nabla_x u \Big) + h(x)\qquad \mbox{ for }(t,x)\in(0,\infty)\times\Omega
\end{align*}
if and only if $v= u\circ \mathbb{F}^{-1}$ is a $(-3,3)$-periodic solution (in the $y_1=x_1$ variable) to
\begin{align*}
\mathbb{F}^*\rho(y)\, \partial_t v = \nabla \cdot \Big( \mathbb{F}^*A(y) \nabla v \Big) + \mathbb{F}^*h(y) \qquad \mbox{ for }(t,y)\in(0,\infty)\times\Omega
\end{align*}
where the push-forward coefficients are given by
\begin{equation}\label{eq:push-forward-layered-cloak-formulas}
\begin{aligned}
\mathbb{F}^*\rho(y_1,y_2) & = \frac{1}{\mathtt{f}'(x_2)}\rho(x_1,x_2);
\\[0.2 cm]
\mathbb{F}^*h(y_1,y_2) & = \frac{1}{\mathtt{f}'(x_2)} h(x_1,x_2);
\\[0.2 cm]
\mathbb{F}^*A(y_1,y_2) & =
\left(
\begin{matrix}
\frac{1}{\mathtt{f}'(x_2)} A_{11}(x_1,x_2) & A_{12}(x_1,x_2)
\\[0.3 cm]
A_{21}(x_1,x_2) & \mathtt{f}'(x_2)A_{22}(x_1,x_2)
\end{matrix}
\right)
\end{aligned}
\end{equation}
with the understanding that the right hand sides in \eqref{eq:push-forward-layered-cloak-formulas} are computed at $(x_1,x_2)=(y_1,\mathtt{f}^{-1}(y_2))$. Furthermore, we have
\[
u(t,x_1,x_2) = v(t,x_1,x_2) \qquad \mbox{ for }\vert x_2\vert\ge2.
\]
\end{prop}
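The conductivity formula in \eqref{eq:push-forward-layered-cloak-formulas} can be sanity-checked against the general push-forward rule $\mathbb{F}^*A = (D\mathbb{F}\, A\, D\mathbb{F}^\top)/\det D\mathbb{F}$, which for $\mathbb{F}(x_1,x_2)=(x_1,\mathtt{f}(x_2))$ has $D\mathbb{F}={\rm diag}(1,\mathtt{f}')$. A minimal numerical sketch (the matrix $A$ and the value of $\mathtt{f}'$ below are illustrative assumptions):

```python
# Minimal check of the push-forward rule F*A = (DF A DF^T)/det(DF)
# for the map F(x1,x2) = (x1, f(x2)), where DF = diag(1, f'(x2)).
# A and fp below are sample values chosen for illustration only.
def push_forward(A, fp):
    DF = [[1.0, 0.0], [0.0, fp]]
    # matrix products DF*A and (DF*A)*DF^T, written out for 2x2 matrices
    DFA = [[sum(DF[i][k] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    M = [[sum(DFA[i][k] * DF[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
    det = fp  # det(DF) = f'(x2)
    return [[M[i][j] / det for j in range(2)] for i in range(2)]

A, fp = [[2.0, 0.3], [0.3, 1.5]], 0.25
P = push_forward(A, fp)
# resulting pattern: [[A11/fp, A12], [A21, fp*A22]]
```

The computed pattern is $\bigl[\,[A_{11}/\mathtt{f}',\,A_{12}],\,[A_{21},\,\mathtt{f}'A_{22}]\,\bigr]$; in particular, the off-diagonal entries carry no $1/\mathtt{f}'$ factor, so the push-forward of a symmetric conductivity stays symmetric.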
Next, we prove a near-cloaking result in the present setting of layered cloaks. It concerns the following evolution problems: the homogeneous problem
\begin{equation}\label{eq:layer-cloak:u_hom}
\begin{aligned}
\partial_t u_{\rm hom} (t,x) & = \Delta u_{\rm hom}(t,x) + f(x) \qquad \mbox{ in }(0,\infty)\times\Omega,
\\[0.2 cm]
\nabla u_{\rm hom}(x_1,\pm3) \cdot {\bf n}(x_1,\pm3) & = g(x_1) \qquad \qquad\qquad \qquad\mbox{ on }(0,\infty)\times(-3,3),
\\[0.2 cm]
u_{\rm hom}(-3,x_2) & = u_{\rm hom}(3,x_2) \qquad \qquad\qquad \qquad\qquad \mbox{ for }x_2\in(-3,3),
\\[0.2 cm]
u_{\rm hom}(0,x) & = u^{\rm in}(x) \qquad \qquad\qquad \qquad\qquad \mbox{ in }\Omega
\end{aligned}
\end{equation}
and the layered cloak problem
\begin{equation}\label{eq:layer-cloak:u_cloak}
\begin{aligned}
\rho_{\rm cl}(x) \partial_t u_{\rm cl} & = \nabla \cdot \Big( A_{\rm cl}(x) \nabla u_{\rm cl} \Big) + f(x) \qquad \mbox{ in }(0,\infty)\times\Omega,
\\[0.2 cm]
\nabla u_{\rm cl}(x_1,\pm3) \cdot {\bf n}(x_1,\pm3) & = g(x_1) \qquad \qquad\qquad \qquad\mbox{ on }(0,\infty)\times(-3,3),
\\[0.2 cm]
u_{\rm cl}(-3,x_2) & = u_{\rm cl}(3,x_2) \qquad \qquad\qquad \qquad\qquad \mbox{ for }x_2\in(-3,3),
\\[0.2 cm]
u_{\rm cl}(0,x) & = u^{\rm in}(x) \qquad \qquad\qquad \qquad\qquad \mbox{ in }\Omega
\end{aligned}
\end{equation}
where the coefficients $\rho_{\rm cl}$ and $A_{\rm cl}$ in \eqref{eq:layer-cloak:u_cloak} are defined using the Lipschitz mapping $(x_1,x_2)\mapsto (x_1,\mathtt{f}_\varepsilon(x_2))$ with
\begin{equation}\label{eq:layered-cloak-map}
\mathtt{f}_\varepsilon(x_2) :=
\left\{
\begin{array}{cl}
x_2 & \mbox{ for } \left\vert x_2 \right\vert>2
\\[0.2 cm]
\left( \frac{2-2\varepsilon}{2-\varepsilon} + \frac{\vert x_2 \vert}{2-\varepsilon}\right) \frac{x_2}{\vert x_2\vert} & \mbox{ for } \varepsilon \le \vert x_2\vert\le 2
\\[0.2 cm]
\frac{x_2}{\varepsilon} & \mbox{ for }\vert x_2\vert <\varepsilon.
\end{array}\right.
\end{equation}
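The three branches of $\mathtt{f}_\varepsilon$ match continuously: the middle branch sends $[\varepsilon,2]$ affinely onto $[1,2]$, agreeing with the inner branch at $\vert x_2\vert=\varepsilon$ and with the identity at $\vert x_2\vert=2$. A small numerical sketch of these matching conditions (the values of $\varepsilon$ are illustrative):

```python
# Sketch of the regularised map f_eps: identity for |x2| > 2, an affine
# stretch of [eps, 2] onto [1, 2] in between, and x2/eps inside the defect.
def f_eps(x2, eps):
    a = abs(x2)
    if a > 2:
        return x2
    if a >= eps:
        s = 1.0 if x2 > 0 else -1.0
        return s * ((2 - 2 * eps) / (2 - eps) + a / (2 - eps))
    return x2 / eps

for eps in [0.5, 0.1, 0.01]:
    # the branches match at |x2| = eps and at |x2| = 2
    assert abs(f_eps(eps, eps) - 1.0) < 1e-12
    assert abs(f_eps(2.0, eps) - 2.0) < 1e-12
    # the defect (-eps, eps) is blown up onto (-1, 1)
    assert abs(f_eps(eps / 2, eps) - 0.5) < 1e-12
```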
The precise construction of the layered cloaks is as follows
\begin{equation}\label{eq:layer:rho-cloak-choice}
\rho_{\rm cl}(x_1,x_2) =
\left\{
\begin{array}{ll}
1 & \quad \mbox{ for }\vert x_2\vert >2,\\[0.2 cm]
\mathcal{F}^*_\varepsilon 1 & \quad \mbox{ for }1<\vert x_2\vert <2,\\[0.2 cm]
\eta(x_1,x_2) & \quad \mbox{ for }\vert x_2 \vert <1
\end{array}\right.
\end{equation}
and
\begin{equation}\label{eq:layer:A-cloak-choice}
A_{\rm cl}(x_1,x_2) =
\left\{
\begin{array}{ll}
{\rm Id} & \quad \mbox{ for }\vert x_2\vert >2,\\[0.2 cm]
\mathcal{F}^*_\varepsilon {\rm Id} & \quad \mbox{ for }1<\vert x_2\vert <2,\\[0.2 cm]
\beta(x_1,x_2) & \quad \mbox{ for }\vert x_2 \vert <1,
\end{array}\right.
\end{equation}
where the push-forward maps are defined in \eqref{eq:push-forward-layered-cloak-formulas}. The density coefficient $\eta(x)$ in \eqref{eq:layer:rho-cloak-choice} is an arbitrary positive real function. The conductivity coefficient $\beta(x)$ in \eqref{eq:layer:A-cloak-choice} is an arbitrary bounded positive-definite matrix field.
Let us make the observation that the cloaking coefficients $\rho_{\rm cl}$ and $A_{\rm cl}$ given by \eqref{eq:layer:rho-cloak-choice} and \eqref{eq:layer:A-cloak-choice} respectively can be treated as push-forward outcomes (via the push-forward maps \eqref{eq:push-forward-layered-cloak-formulas}) of the following defect coefficients
\begin{equation}\label{eq:layer:rho-small-inclusion}
\rho^\varepsilon(x) =
\left\{
\begin{array}{ll}
1 & \quad \mbox{ for }\varepsilon < \vert x_2\vert < 2,\\[0.2 cm]
\frac{1}{\varepsilon}\eta\left(x_1, \frac{x_2}{\varepsilon}\right) & \quad \mbox{ for }\vert x_2\vert < \varepsilon
\end{array}\right.
\end{equation}
and
\begin{equation}\label{eq:layer:A-small-inclusion}
A^\varepsilon(x) =
\left\{
\begin{array}{ll}
{\rm Id} & \quad \mbox{ for }\varepsilon < \vert x_2\vert < 2,\\[0.2 cm]
\left(
\begin{matrix}
\frac{1}{\varepsilon}\beta_{11}\left(x_1,\frac{x_2}{\varepsilon}\right) & \beta_{12}\left(x_1,\frac{x_2}{\varepsilon}\right)
\\[0.2 cm]
\beta_{21}\left(x_1,\frac{x_2}{\varepsilon}\right) & \varepsilon \beta_{22}\left(x_1,\frac{x_2}{\varepsilon}\right)
\end{matrix}
\right) & \quad \mbox{ for }\vert x_2\vert < \varepsilon.
\end{array}\right.
\end{equation}
It then follows from Proposition \ref{prop:layer-cloak} that comparing $u_{\rm hom}$ and $u_{\rm cl}$ is equivalent to comparing $u_{\rm hom}$ and $u^\varepsilon$ where $u^\varepsilon(t,x)$ solves the following defect problem with the aforementioned $\rho^\varepsilon$ and $A^\varepsilon$ as coefficients:
\begin{equation}\label{eq:layer-cloak:u_defect}
\begin{aligned}
\rho^\varepsilon(x) \partial_t u^\varepsilon & = \nabla \cdot \Big( A^\varepsilon(x) \nabla u^\varepsilon \Big) + f(x) \qquad \mbox{ in }(0,\infty)\times\Omega,
\\[0.2 cm]
\nabla u^\varepsilon(x_1,\pm3) \cdot {\bf n}(x_1,\pm3) & = g(x_1) \qquad \qquad\qquad \qquad\mbox{ on }(0,\infty)\times(-3,3),
\\[0.2 cm]
u^\varepsilon(-3,x_2) & = u^\varepsilon(3,x_2) \qquad \qquad\qquad \qquad\qquad \mbox{ for }x_2\in(-3,3),
\\[0.2 cm]
u^\varepsilon(0,x) & = u^{\rm in}(x) \qquad \qquad\qquad \qquad\qquad \mbox{ in }\Omega
\end{aligned}
\end{equation}
\begin{thm}\label{thm:one-d:near-cloak}
Let $u^\varepsilon(t,x)$ be the solution to the defect problem \eqref{eq:layer-cloak:u_defect} with high contrast coefficients \eqref{eq:layer:rho-small-inclusion}-\eqref{eq:layer:A-small-inclusion} and let $u_{\rm hom}(t,x)$ be the solution to the homogeneous conductivity problem \eqref{eq:layer-cloak:u_hom}. Then, there exists a time instant $\mathit{T} <\infty$ such that for all $t\ge \mathit{T}$ we have
\begin{align}\label{eq:thm:layer:H12-norm}
\left\Vert u^\varepsilon(t,\cdot) - u_{\rm hom}(t,\cdot) \right\Vert_{\mathrm H^\frac12(\Gamma)} \lesssim \varepsilon^2.
\end{align}
\end{thm}
The proof is similar to that of Theorem \ref{thm:near-cloak}. More specifically, we first show that the solutions to the transient problems \eqref{eq:layer-cloak:u_defect} and \eqref{eq:layer-cloak:u_hom} converge exponentially fast to their corresponding equilibrium states. We can then adapt the energy approach of \cite{Friedman_1989} to show that the equilibrium states are $\varepsilon^2$-close in the $\mathrm H^\frac12(\Gamma)$-norm. We note that our analysis of near-cloaking for thermal layered cloaks can easily be adapted to the electrostatic and electromagnetic settings. It might also find interesting applications in Earth science, in seismic tomography, in which case one could utilise the results in \cite{Ammari_2013} to prove near-cloaking results related to imaging the subsurface of the Earth with seismic waves produced by earthquakes.
\section{Numerical results}\label{sec:numerics}
This section presents the numerical tests carried out in support of the theoretical results of the paper. The tests in the subsections to follow illustrate various aspects of the near-cloaking scheme designed in the previous sections. The numerical simulations are done in one, two and three spatial dimensions. It seems natural to start with the one-dimensional case but, as it turns out (see subsection \ref{ssec:layer-cloak}), the physical problem of interest for a one-dimensional (so-called layered) cloak requires a two-dimensional computational domain, and thus we start with the 2D case. We refer the reader to \cite{Gralak_2016} for the precise physical setup and the importance of the layered cloak, described there in the context of Maxwell's equations (this is easily translated into the language of the conductivity equation). These numerical simulations were performed with the finite element software COMSOL MULTIPHYSICS.
We choose the spatial domain to be a square $\Omega:=(-3,3)^2$. We take the bulk source $f(x)$, the Neumann datum $g(x)$ and the initial datum $u^{\rm in}(x)$ to be smooth and such that ${\rm supp}\, f\subset \Omega\setminus B_2$, ${\rm supp}\, u^{\rm in}\subset \Omega\setminus B_2$. We further assume that
\[
\int_\Omega f(x)\, {\rm d}x = 0,
\qquad
\int_{\partial\Omega} g(x)\, {\rm d}\sigma(x) = 0,
\qquad
\int_\Omega u^{\rm in}(x)\, {\rm d}x = 0.
\]
This guarantees that the data is admissible in the sense of \eqref{eq:thm-near-cloak-compatible}-\eqref{eq:thm-near-cloak-compatible-initial}.
\subsection{Near-cloaking}\label{ssec:num:near-cloak}
The numerical experiments in this subsection analyse the sharpness of the near-cloaking result (Theorem \ref{thm:near-cloak}) in this paper. We solve the initial-boundary value problem for the unknown $u^\varepsilon(t,x)$:
\begin{equation}\label{eq:num:defect-problem}
\begin{aligned}
\rho^\varepsilon(x)\frac{\partial u^\varepsilon}{\partial t} & = \nabla \cdot \Big( A^\varepsilon(x) \nabla u^\varepsilon \Big) + f(x) \qquad \mbox{ in }(0,\infty)\times\Omega,
\end{aligned}
\end{equation}
with the density-conductivity coefficients:
\begin{equation*}
\rho^\varepsilon(x), \, A^\varepsilon(x) =
\left\{
\begin{array}{ll}
1, \qquad \qquad \quad {\rm Id} & \quad \mbox{ for }x\in\Omega\setminus B_\varepsilon,\\[0.2 cm]
\frac{1}{\varepsilon^2}\eta\left(\frac{x}{\varepsilon}\right), \qquad \beta\left(\frac{x}{\varepsilon}\right) & \quad \mbox{ for }x\in B_\varepsilon
\end{array}\right.
\end{equation*}
Note that in two dimensions, there is no high contrast in the conductivity coefficient $A^\varepsilon(x)$. There is, however, contrast in the density coefficient $\rho^\varepsilon(x)$. The evolution \eqref{eq:num:defect-problem} is supplemented with the Neumann datum
\begin{equation*}
g(x_1,x_2) =
\left\{
\begin{array}{ll}
-3 & \quad \mbox{ for }x_1=\pm 3,\\[0.2 cm]
0 & \quad \mbox{ for }x_2=\pm 3
\end{array}\right.
\end{equation*}
and the initial datum
\begin{equation*}
u^{\rm in}(x_1,x_2) =
\left\{
\begin{array}{ll}
x_1 x_2 & \quad \mbox{ for }x\in\Omega\setminus B_2,\\[0.2 cm]
0 & \quad \mbox{ for }x\in B_2
\end{array}\right.
\end{equation*}
The bulk force in \eqref{eq:num:defect-problem} is taken to be
\begin{equation*}
f(x_1,x_2) =
\left\{
\begin{array}{ll}
\sqrt{x_1^2+x_2^2}\, \sin(x_1)\sin(x_2)-2 & \quad \mbox{ for }x\in\Omega\setminus B_2,\\[0.2 cm]
0 & \quad \mbox{ for }x\in B_2
\end{array}\right.
\end{equation*}
We next solve the initial-boundary value problem (Neumann) for $u_{\rm hom}(t,x)$ with the above data:
\begin{equation*}
\begin{aligned}
\partial_t u_{\rm hom} (t,x) & = \Delta u_{\rm hom}(t,x) + f(x) \qquad \mbox{ in }(0,\infty)\times\Omega.
\end{aligned}
\end{equation*}
Let us define
\begin{align}\label{eq:num:G-epsilon}
\mathcal{G}_\varepsilon(t) := \frac{\left\Vert u^\varepsilon(t,\cdot) - u_{\rm hom}(t,\cdot) \right\Vert_{\mathrm L^2(\partial\Omega)}}{\varepsilon^d \left( \left\Vert u^{\rm in} \right\Vert_{\mathrm H^1(\Omega)} + \left\Vert f \right\Vert_{\mathrm L^2(\Omega)} + \left\Vert g \right\Vert_{\mathrm L^2(\partial\Omega)} \right)}.
\end{align}
We compute the function $\mathcal{G}_\varepsilon(t)$ defined by \eqref{eq:num:G-epsilon} as a function of time for various values of $\varepsilon$, see Figure \ref{fig:num:G-epsilon}, and numerically observe that after $110\,$s, $\mathcal{G}_\varepsilon(t)$ reaches an asymptote that tends towards a numerical value close to $8.6$ as $\varepsilon$ gets smaller, in agreement with the theoretical predictions of Theorem \ref{thm:near-cloak} for space dimension $d=2$. The simulations were performed using an adaptive mesh of the domain $\Omega$ consisting of $2\times10^5$ nodal elements (with at least $10^2$ elements in the small defect when $\varepsilon=10^{-4}$), and a numerical solver based on the backward differentiation formula (BDF solver) with initial time step $10^{-5}\,$s, maximum time step $10^{-1}\,$s, and minimum and maximum BDF orders of $1$ and $5$, respectively. Many test cases were run for various values of $\beta$ (including anisotropic conductivity) and $\eta$ in the small inclusion, and we report some representative curves in Figure \ref{fig:num:G-epsilon}.
\begin{figure}
\caption{Numerical results for ${\mathcal G}_\varepsilon(t)$ defined by \eqref{eq:num:G-epsilon}, plotted as a function of time for various values of $\varepsilon$.}
\label{fig:num:G-epsilon}
\end{figure}
\subsection{Contour plots}
Let us start with the contour plots for the layered cloak developed in subsection \ref{ssec:layer-cloak}, see Figure \ref{fig:layered-cloak}. More precisely, we compare solutions $u_{\rm hom}$ and $u_{\rm cl}$ to the evolution problems \eqref{eq:layer-cloak:u_hom} and \eqref{eq:layer-cloak:u_cloak} respectively, where periodic boundary conditions are imposed at the boundary $x_1=\pm3$ and homogeneous Neumann datum at the boundary $x_2=\pm3$. Here, we illustrate the layered cloak by numerically solving an equation for the unknown $u(t,x_1,x_2)$, but the conductivity and density only depend upon the $x_2$ variable. We take the source
\begin{equation*}
f(x_1,x_2) =
\left\{
\begin{array}{ll}
x_2\, \sin(x_2) & \quad \mbox{ for }x_2\in(-3,-2)\cup(2,3),\\[0.2 cm]
0 & \quad \mbox{ for }x_2\in (-2,2)
\end{array}\right.
\end{equation*}
and the initial datum is taken to be
\begin{equation*}
u^{\rm in}(x_1,x_2) =
\left\{
\begin{array}{ll}
x_2 & \quad \mbox{ for }x_2\in(-3,-2)\cup(2,3),\\[0.2 cm]
0 & \quad \mbox{ for }x_2\in (-2,2)
\end{array}\right.
\end{equation*}
Following the layered cloak construction in \eqref{eq:layer:A-cloak-choice}, we take the cloaking conductivity to be the push-forward of identity in the cloaking strip:
\[
A_{\rm cl}(x_1,x_2) = {\rm diag} \left(2-\varepsilon, \frac{1}{2-\varepsilon}\right) \qquad \mbox{ for }1< \vert x_2\vert <2.
\]
We report in Figure \ref{fig:layered-cloak} some numerical results that exemplify the high level of control of the heat flux with a layered cloak: in the upper panels (a), (b), (c), one can see snapshots at representative times ($t=0$, $1$ and $4\,$s) of a typical one-dimensional diffusion process in a homogeneous medium for a given source supported outside $x_2\in(-2,2)$. When we compare the temperature field at the initial time in (a) with that obtained when the homogeneous medium is replaced by a layered cloak in $1< \vert x_2\vert <2$ in (d), we note no difference. However, some noticeable differences between the temperature fields in the homogeneous medium and in the cloak appear when comparing (b) with (e) and (c) with (f). The gradient of the temperature field is dramatically decreased in the invisibility region $x_2\in(-1,1)$, leading to an almost uniform (but non-zero) temperature field therein, and this is compensated by an enhanced temperature gradient within the cloak in $1< \vert x_2\vert <2$. One notes that the increased flux in $1< \vert x_2\vert <2$ might be useful to improve the efficiency of solar cells in photovoltaics.
Our next experiment compares the homogeneous solution and the cloaked solution, where the cloaking coefficients are constructed using the push-forward maps as in \eqref{eq:rho-cloak-choice}-\eqref{eq:A-cloak-choice}. The data ($f$, $g$, $u^{\rm in}$) are chosen as in subsection \ref{ssec:num:near-cloak}. We report in Figure \ref{fig:contour:2D} some numerical computations performed in COMSOL that illustrate the strong similarity between the temperature fields in the homogeneous (a-d), small defect (e-h) and cloaked (i-l) problems. Obviously, the fields are identical outside $B_2$ at the initial time; they then differ most outside $B_2$ at the small time $t=1\,$s, see (b), (f), (j), and become more and more similar with increasing time, see (c), (g), (k) and (d), (h), (l). These qualitative observations are consistent with the near-cloaking result, Theorem \ref{thm:near-cloak}, of this paper.
We have also performed a similar experiment in three dimensions, see Figure \ref{fig:contour:3D}. Refer to the caption of Figure \ref{fig:contour:3D} for the parameters considered in the three-dimensional problem. Note that for these 3D computations, we mesh the cubical domain with $40{,}000$ nodal elements, take time steps of $0.1\,$s and use a BDF solver with initial time step $0.01\,$s, maximum time step $0.1\,$s, and minimum and maximum BDF orders of $1$ and $3$, respectively. We use a desktop with $32\,$GB of RAM, and a computation run takes around one hour for a time interval between $0.1$ and $10\,$s. We are not able to study long-time behaviour, nor to solve the high contrast small defect problem in 3D, as this would require more computational resources. Nevertheless, our 3D computations suggest that there is a strong similarity between the temperature fields of the homogeneous and cloaked problems outside $B_2$. Note also that we consider a source that does not vanish inside $B_2$, which motivates further theoretical analysis for sources supported in the whole domain $\Omega$.
\begin{figure}
\caption{Contour plots of a layered cloak (with a source outside $x_2\in(-2,2)$): $f(x)=x_2\sin(x_2)$ for $x\in\{(x_1,x_2):x_2\in(-3,-2)\cup(2,3)\}$ and $f(x)=0$ otherwise.}
\label{fig:layered-cloak}
\end{figure}
\begin{figure}
\caption{Contour plots of a 2D cloak (with a source outside $B_2$): $f(x)=\sqrt{x_1^2+x_2^2}\,\sin(x_1)\sin(x_2)-2$ for $x\in\Omega\setminus B_2$ and $f(x)=0$ otherwise.}
\label{fig:contour:2D}
\end{figure}
\begin{figure}
\caption{Numerical results for isosurface plots (source outside and inside $B_2$).}
\label{fig:contour:3D}
\end{figure}
\subsection{Cloaking coefficients}
The cloaking coefficients $\rho_{\rm cl}$ and $A_{\rm cl}$ in the annulus $B_2\setminus B_1$ play the essential role in the thermal cloaking phenomenon. It is therefore important in practice -- to gain some physical intuition and to guide the engineering and manufacturing of a meta-material cloak -- to analyse the coefficients defined by \eqref{eq:rho-cloak-choice}-\eqref{eq:A-cloak-choice}. For the reader's convenience we recall them below (only on $\Omega\setminus B_1$, as the coefficients inside $B_1$ can be arbitrary):
\begin{equation*}
\rho_{\rm cl}(y) =
\left\{
\begin{array}{ll}
1 & \quad \mbox{ for }y\in\Omega\setminus B_2,\\[0.2 cm]
\frac{1}{{\rm det}(D\mathcal{F}_\varepsilon)(x)}\Big|_{x=\mathcal{F}_\varepsilon^{-1}(y)} & \quad \mbox{ for }y\in B_2\setminus B_1
\end{array}\right.
\end{equation*}
\begin{equation*}
A_{\rm cl}(y) =
\left\{
\begin{array}{ll}
{\rm Id} & \quad \mbox{ for }y\in\Omega\setminus B_2,\\[0.2 cm]
\frac{D\mathcal{F}_\varepsilon(x) D\mathcal{F}_\varepsilon^\top(x)}{{\rm det}(D\mathcal{F}_\varepsilon)(x)}\Big|_{x=\mathcal{F}_\varepsilon^{-1}(y)} & \quad \mbox{ for }y\in B_2\setminus B_1
\end{array}\right.
\end{equation*}
Note that both coefficients $\rho_{\rm cl}$ and $A_{\rm cl}$ depend on the regularising parameter $\varepsilon$ via the Lipschitz map $\mathcal{F}_\varepsilon$. In this numerical test, we plot the cloaking coefficients expressed in terms of polar coordinates. Consider the Lipschitz map $\mathcal{F}_\varepsilon:\Omega \to \Omega$
\[
x := (x_1,x_2) \mapsto \left( \mathcal{F}^{(1)}_\varepsilon(x), \mathcal{F}^{(2)}_\varepsilon(x) \right) =: (y_1, y_2) = y
\]
If the Cartesian coordinates $(x_1,x_2)$ are expressed in terms of the polar coordinates as $(r\cos\theta, r\sin\theta)$, then the new coordinates $(\mathcal{F}^{(1)}_\varepsilon(x), \mathcal{F}^{(2)}_\varepsilon(x))$ become $(r'\cos\theta, r'\sin\theta)$ with
\begin{equation}\label{eq:r-prime-defn}
r' :=
\left\{
\begin{array}{cl}
r & \mbox{ for } r \ge 2
\\[0.2 cm]
\frac{2-2\varepsilon}{2-\varepsilon} + \frac{r}{2-\varepsilon} & \mbox{ for } \varepsilon < r < 2
\\[0.2 cm]
\frac{r}{\varepsilon} & \mbox{ for } r \le \varepsilon
\end{array}\right.
\end{equation}
Bear in mind that only the radial coordinate $r$ gets transformed by $\mathcal{F}_\varepsilon$; the angular coordinate $\theta$ remains unchanged. Reformulating the push-forward maps in terms of the polar coordinates yields the following expressions for the cloaking coefficients in the annulus:
\begin{equation}\label{eq:num:push-forward-polar}
\left.
\begin{aligned}
A_{\rm cl}^{\rm 2D} = \mathcal{F}^\ast\, {\rm Id} & = \mathtt{R}(\theta) {\rm diag}\left(\mathbb{A}_{11}(r'), \frac{1}{\mathbb{A}_{11}(r')} \right) \left[ \mathtt{R}(\theta) \right]^\top
\\
\rho_{\rm cl}^{\rm 2D} = \mathcal{F}^\ast 1 & = \frac{r' -1}{r'}(2-\varepsilon)^2 + \frac{\varepsilon}{r'}(2-\varepsilon)
\end{aligned}
\right\}
\quad \mbox{ for }r'\in(1,2)
\end{equation}
where $\mathbb{A}_{11}(r')$ is given by
\begin{align}\label{eq:mathbb-A11}
\mathbb{A}_{11}(r') = \frac{r' -1}{r'} + \frac{\varepsilon}{r'(2-\varepsilon)}
\qquad \mbox{ for }r'\in[1,2]
\end{align}
and the rotation matrix
\[
\mathtt{R}(\theta) =
\left(
\begin{matrix}
\cos\theta & -\sin\theta
\\
\sin\theta & \cos\theta
\end{matrix}
\right).
\]
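As a consistency check between \eqref{eq:r-prime-defn} and \eqref{eq:num:push-forward-polar}-\eqref{eq:mathbb-A11}: inverting the middle branch of \eqref{eq:r-prime-defn} gives $r = (2-\varepsilon)r' - (2-2\varepsilon)$, and the polar formulas should then reduce to $\mathbb{A}_{11} = r/((2-\varepsilon)r')$ and $\rho^{\rm 2D}_{\rm cl} = (2-\varepsilon)\,r/r'$, i.e. the radial pull-back of the identity conductivity and the reciprocal Jacobian determinant. A sketch verifying this numerically (sample values of $\varepsilon$ and $r'$ only):

```python
# Internal consistency of the 2D polar cloaking formulas: inverting the
# middle branch of the radial map gives r = (2-eps)*r' - (2-2eps); the
# closed-form expressions should then equal r/((2-eps)*r') (radial entry)
# and (2-eps)*r/r' (density). Sample values of eps and r' are assumptions.
def A11(rp, eps):
    return (rp - 1.0) / rp + eps / (rp * (2.0 - eps))

def rho2d(rp, eps):
    return (rp - 1.0) / rp * (2.0 - eps) ** 2 + eps / rp * (2.0 - eps)

eps = 0.05
for rp in [1.0, 1.3, 1.7, 2.0]:
    r = (2.0 - eps) * rp - (2.0 - 2.0 * eps)   # inverse of the radial map
    assert abs(A11(rp, eps) - r / ((2.0 - eps) * rp)) < 1e-12
    assert abs(rho2d(rp, eps) - (2.0 - eps) * r / rp) < 1e-12
```

In particular, at the inner boundary $r'=1$ one gets $r=\varepsilon$, so both quantities are of order $\varepsilon$ there, in line with the near-degenerate values discussed below.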
We plot in Figure \ref{fig:num:polar-cloaks} the matrix entry $\mathbb{A}_{11}(r')$ given above and the push-forward density $\rho^{\rm 2D}_{\rm cl}$ given in \eqref{eq:num:push-forward-polar}. We observe that as $\varepsilon$ gets smaller, the radial conductivity and the density take values very close to zero near the inner boundary of the cloak, which is unachievable in practice (bear in mind that, the azimuthal conductivity being the inverse of the radial conductivity, the conductivity matrix becomes extremely anisotropic). Therefore, manufacturing a meta-material cloak would require a small enough value of $\varepsilon$ so that homogenization techniques can be applied to approximate the anisotropic conductivity with concentric layers of isotropic phases, e.g. as was done for the 2D meta-material cloak manufactured at the Karlsruher Institut f\"ur Technologie \cite{Schittny_2013}.
\begin{figure}
\caption{Plots of $\rho^{\rm 2D}_{\rm cl}$ and $\mathbb{A}_{11}(r')$, given in \eqref{eq:num:push-forward-polar} and \eqref{eq:mathbb-A11}, for various values of $\varepsilon$.}
\label{fig:num:polar-cloaks}
\end{figure}
In three spatial dimensions, we can recast the cloaking coefficients in the spherical coordinates $(r,\theta,\varphi)$. As in the polar coordinate setting, only the radial variable gets modified by Kohn's transformation $\mathcal{F}_\varepsilon$; the variables $\theta$ and $\varphi$ remain unchanged. The transformed radial coordinate $r'$ is given by \eqref{eq:r-prime-defn}. The push-forward maps of interest for cloaking are
\begin{equation}\label{eq:num:push-forward-spherical}
\left.
\begin{aligned}
A_{\rm cl}^{\rm 3D} = \mathcal{F}^\ast\, {\rm Id} & = \mathtt{R}(\theta) \mathtt{M}(\varphi) {\rm diag}\Big(\mathbb{B}(r'), 2-\varepsilon, 2-\varepsilon \Big) \mathtt{M}(\varphi) \left[ \mathtt{R}(\theta) \right]^\top
\\
\rho_{\rm cl}^{\rm 3D} = \mathcal{F}^\ast 1 & = \mathbb{B}(r') := (2-\varepsilon) \left( 2-\varepsilon - \frac{(2-2\varepsilon)}{r'}\right)^2
\end{aligned}
\right\}
\quad \mbox{ for }r'\in(1,2)
\end{equation}
with the rotation matrix $\mathtt{R}(\theta)$ in three dimensions and the matrix $\mathtt{M}(\varphi)$ given by
\[
\mathtt{R}(\theta) =
\left(
\begin{matrix}
\cos\theta & -\sin\theta & 0
\\
\sin\theta & \cos\theta & 0
\\
0 & 0 & 1
\end{matrix}
\right)
\quad
\mathtt{M}(\varphi) =
\left(
\begin{matrix}
\sin\varphi & 0 & \cos\varphi
\\
0 & 1 & 0
\\
\cos\varphi & 0 & -\sin\varphi
\end{matrix}
\right)
\]
Note that the matrix entry $\mathbb{B}(r')$ in the push-forward conductivity coincides with the push-forward density $\rho_{\rm cl}^{\rm 3D}$. We plot in Figure \ref{fig:num:spherical-cloaks} the radial conductivity and density in the cloaking annulus given in \eqref{eq:num:push-forward-spherical}. One should recall that the 3D spherical cloak only has a varying radial conductivity, the other two polar and azimuthal diagonal entries of the conductivity matrix being constant (i.e. independent of the radial position). Besides, the radial conductivity has the same value as the density within the cloak. All this makes the 3D spherical cloak easier to approximate with homogenization techniques. However, no thermal cloak has been manufactured and experimentally characterised thus far, perhaps due to current limitations in 3D manufacturing techniques; a possible route towards construction of a spherical cloak is 3D printing. Similar computations in spherical coordinates, but for Pendry's singular transformation are found in \cite{Nicolet_2008}, where the matrix $\mathtt{M}$ was introduced to facilitate implementation of perfectly matched layers, cloaks and other transformation based electromagnetic media with spherical symmetry in finite element packages, see also \cite{Nicolet_1994} that predates the field of transformational optics.
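As a sanity check (a Python sketch of ours, not from the paper), one can verify numerically that $\mathtt{R}(\theta)$ and $\mathtt{M}(\varphi)$ are orthogonal, so the conjugation in \eqref{eq:num:push-forward-spherical} preserves the eigenvalues $\{\mathbb{B}(r'),\, 2-\varepsilon,\, 2-\varepsilon\}$ of the diagonal factor:

```python
import numpy as np

def Rm(t):      # rotation matrix R(theta) in three dimensions
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

def Mm(p):      # the (symmetric, orthogonal) matrix M(phi)
    return np.array([[np.sin(p), 0,  np.cos(p)],
                     [0,         1,  0        ],
                     [np.cos(p), 0, -np.sin(p)]])

def B(r, eps):  # radial entry, equal to the push-forward density rho_cl^3D
    return (2 - eps) * (2 - eps - (2 - 2 * eps) / r) ** 2

theta, phi, r, eps = 0.7, 1.1, 1.5, 0.1
A = Rm(theta) @ Mm(phi) @ np.diag([B(r, eps), 2 - eps, 2 - eps]) @ Mm(phi) @ Rm(theta).T
# orthogonal conjugation: the eigenvalues of A are exactly {B(r'), 2-eps, 2-eps}
print(np.linalg.eigvalsh(A))
```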
\begin{figure}
\caption{Plots of $\rho^{\rm 3D}_{\rm cl} = \mathbb{B}(r')$ in the cloaking annulus.}
\label{fig:num:spherical-cloaks}
\end{figure}
\subsection{Spectral problems} Here we perform some numerical tests in support of the spectral result (Proposition \ref{prop:first-non-zero-eigenvalue}) proved in this paper. The two Neumann spectral problems that we study are
\[
-\Delta \phi = \mu\, \phi \quad \mbox{ in }\Omega:=(-3,3)^d,
\quad
\nabla\phi\cdot{\bf n} = 0 \quad \mbox{ on }\partial\Omega.
\]
and
\[
-{\rm div}\Big(A^\varepsilon(x) \nabla \phi^\varepsilon \Big) = \mu\, \rho^\varepsilon(x)\, \phi^\varepsilon \quad \mbox{ in }\Omega:=(-3,3)^d,
\quad
\nabla\phi^\varepsilon\cdot{\bf n} = 0 \quad \mbox{ on }\partial\Omega.
\]
The coefficients are of high contrast and take the form
\begin{equation*}
\rho^\varepsilon(x), \, A^\varepsilon(x) =
\left\{
\begin{array}{ll}
1, \qquad \quad {\rm Id} & \quad \mbox{ for }x\in\Omega\setminus B_\varepsilon,\\[0.2 cm]
\frac{1}{\varepsilon^d}, \qquad \frac{1}{\varepsilon^{d-2}}{\rm Id} & \quad \mbox{ for }x\in B_\varepsilon
\end{array}\right.
\end{equation*}
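A minimal sketch of these coefficients (ours; we assume $B_\varepsilon$ is the ball of radius $\varepsilon$ centred at the origin, as the centre is not fixed above):

```python
import numpy as np

def coefficients(x, eps, d):
    """High-contrast density rho^eps and diffusivity A^eps at a point x.
    B_eps is taken to be the ball of radius eps at the origin (assumption)."""
    in_defect = np.linalg.norm(x) < eps
    rho = eps ** (-d) if in_defect else 1.0
    A = eps ** (-(d - 2)) * np.eye(d) if in_defect else np.eye(d)
    return rho, A
```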
The first non-zero eigenvalue for the Neumann Laplacian is $\mu_{2}={(\pi/6)}^{d}$. We illustrate the result of Proposition \ref{prop:first-non-zero-eigenvalue} by showing that the first non-zero eigenvalue $\mu^\varepsilon_2$ for the defect spectral problem is $\varepsilon^d$-close to $\mu_2$ for various values of $\varepsilon$ and in dimensions one, two and three. The results are tabulated in Table \ref{table:eigenprob}. The associated eigenfields are also plotted in Figure \ref{fig:eigenfields}. Note that the spectral problems in 1D, 2D and 3D are solved with direct UMFPACK and PARDISO solvers using adaptive meshes with 250, 150 and 50 thousand elements, respectively. We made sure that there are at least $10^2$ elements in the small defect for every spectral problem solved.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
& \multicolumn{4}{|l|}{Numerical illustration of Proposition \ref{prop:first-non-zero-eigenvalue}} &\\
\hline
& dim. $d$ & $\mu_{2}={(\pi/6)}^{d}$ & $\mu_{2}^\varepsilon$
& $\mid\mu_{2}-\mu_{2}^\varepsilon\mid$ & Parameter $\varepsilon$\\
\hline
& 1 & 0.52359877559 & 0.5342584732 & 0.0106596976 &$\varepsilon =10 ^{-1}$
\\
Numerical
& 1 & 0.52359877559 & 0.5327534635 & 0.0091546879 & $\varepsilon =10 ^{-2}$
\\
validation of
& 1 & 0.52359877559 & 0.5245077454 & 0.0009896980 & $\varepsilon =10 ^{-3}$
\\
& 1 & 0.52359877559 & 0.5236795221 & 0.0000807465 & $\varepsilon =10 ^{-4}$
\\
$\mid\mu_{2}-\mu_{2}^\varepsilon\mid \leq \varepsilon^d$
& 1 & 0.52359877559 & 0.5236081655 & 0.0000093899 & $\varepsilon =10 ^{-5}$
\\
& 1 & 0.52359877559 & 0.52359897453 & 0.0000001989 & $\varepsilon =10 ^{-6}$
\\
& 1 & 0.52359877559 & 0.52359888455 & 0.0000001089 & $\varepsilon =10 ^{-7}$
\\
\hline
& 2 & 0.27415567781 & 0.2741626732 & 0.0000069954 & $\varepsilon =10 ^{-1}$
\\
& 2 & 0.27415567781 & 0.2741546789 & 0.0000009989 & $\varepsilon =10 ^{-2}$
\\
& 2 & 0.27415567781 & 0.2741556795 & 0.0000000017 & $\varepsilon =10 ^{-3}$
\\
\hline
& 3 & 0.14354757722 & 0.1437347845 & 0.0001872072 &$\varepsilon =10 ^{-1}$
\\
\hline
\end{tabular}
\caption{Numerical estimate of the difference $\mid\mu_{2}-\mu_{2}^\varepsilon\mid$ versus the parameter $\varepsilon =10^{-m}$, with $m=1,\cdots, 7$ for dimension $d=1$, with $m=1,\cdots, 3$ for $d=2$ and with $m=1$ for $d=3$. Same source, Neumann data, diffusivity and density parameters as in Figure \ref{fig:eigenfields}.}
\label{table:eigenprob}
\end{table}
\begin{figure}
\caption{Numerical results for contour plots of eigenfields $\phi$ and $\phi^\varepsilon$ associated with $\mu_{2}$ and $\mu_{2}^\varepsilon$.}
\label{fig:eigenfields}
\end{figure}
\section{Concluding remarks}
This work addressed the question of near-cloaking for the time-dependent heat conduction problem. The main inspiration is derived from the work of Kohn and co-authors \cite{Kohn_2008}, which quantified near-cloaking in terms of certain boundary measurements. Accordingly, the main result of this paper (see Theorem \ref{thm:near-cloak}) asserts that the difference between the solution to the cloak problem \eqref{eq:intro-u_cloak} and that to the homogeneous problem \eqref{eq:intro-u_hom}, measured in the $\mathrm H^\frac12$-norm on the boundary, can be made as small as one wishes by fine-tuning a certain regularisation parameter. To the best of our knowledge, this is the first work to consider near-cloaking strategies for the time-dependent heat conduction problem. This work supports the idea of thermal cloaking, albeit at the price of having to wait a certain time to see the effect of cloaking. We also illustrate our theoretical results by numerical simulations. We leave the study of finer properties of the thermal cloak problem for future investigations:
\begin{itemize}
\item Behaviour of the temperature field inside the cloaked region.
\item Designing certain multi-scale structures (\`a la reiterated homogenization) to achieve effective properties close to the characteristics of $\rho_{\rm cl}$ and $A_{\rm cl}$.
\item The study of thermal cloaking for time-harmonic sources.
\end{itemize}
\noindent{\bf Acknowledgments.}
The authors would like to thank Yves Capdeboscq for helpful discussions regarding the Calder\'on inverse problem and for bringing to our attention the work of Kohn and co-authors \cite{Kohn_2008}.
H.H. and R.V.C. acknowledge the support of the EPSRC programme grant ``Mathematical fundamentals of Metamaterials for multiscale Physics and Mechanics'' (EP/L024926/1).
R.V.C. also thanks the Leverhulme Trust for their support.
S.G. acknowledges support from EPSRC as a named collaborator on grant EP/L024926/1.
G.P. acknowledges support from the EPSRC via grants EP/L025159/1, EP/L020564/1, EP/L024926/1, EP/P031587/1.
\end{document} |
\begin{document}
\flushbottom
\title{Quantum Zeno Repeaters}
\thispagestyle{empty}
\noindent
Due to inevitable attenuation in the channel, communication between two stations becomes impossible over long distances.
In classical communications, this problem is solved by repeaters based on simple signal amplification.
However, because measuring the state of a quantum system alters that state, and due to the no-cloning theorem~\cite{NC}, this idea is not applicable in quantum communications~\cite{cacciapuoti2019quantum,cacciapuoti2020entanglement}.
The solution in quantum domain, i.e., the idea of quantum repeaters is based on the so-called entanglement swapping process~\cite{Azuma2015IEEE}.
As illustrated in Fig.~\ref{fig:ShortRepeater}, the idea can be summarized as follows.
Consider that the distance between two parties, Alice and Bob, is beyond the limits of sharing entanglement reliably, while half of the distance is within the limits.
Placing a repeater station in the middle, Alice prepares a pair of entangled particles and sends one particle to the station.
Bob repeats the same procedure.
Then the repeater station applies local controlled-operations on the two particles it possesses, and the other two particles possessed by Alice and Bob become entangled.
In detail, let a system of four qubits in the state $|\Psi_{A_1 A_2 B_2 B_1 }\rangle$ be initially shared among Alice, the Repeater, and Bob, who hold the qubits $A_1$; $A_2$ and $B_2$; and $B_1$, respectively, written in the computational basis as
\begin{equation}\label{Intro:eq:initialstate}
|\Psi_{A_1 A_2 B_2 B_1 }\rangle = \frac{|0_{A_1} 0_{A_2} \rangle + |1_{A_1} 1_{A_2} \rangle }{\sqrt{2}} \otimes \frac{|0_{B_2} 0_{B_1} \rangle + |1_{B_2} 1_{B_1} \rangle }{\sqrt{2}}.
\end{equation}
\noindent
A controlled-NOT (CNOT) gate is applied to qubits $A_2$ and $B_2$ as the control and target qubits, respectively, followed by a Hadamard gate on $A_2$.
Then qubits $A_2$ and $B_2$ are measured in $z$-basis, yielding results $\{|i\rangle \}_{i=0,1}$.
Measurement results are communicated to Alice and Bob through classical channels.
Applying one of the single-qubit operations $\{ I,\sigma_x,\sigma_z\}$ accordingly, Alice and Bob are left with an entangled pair of qubits,
\begin{equation}\label{Intro:eq:finalstate}
|\Psi_{A_1 B_1 }\rangle = \frac{|0_{A_1} 0_{B_1} \rangle + |1_{A_1} 1_{B_1} \rangle }{\sqrt{2}},
\end{equation}
\noindent
where $I$ is the two-dimensional identity operator, and $\sigma_x$ and $\sigma_z$ are the Pauli operators.
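The swapping steps above can be verified directly on state vectors; a minimal Python sketch (ours) for the measurement outcome $|0\rangle|0\rangle$, which requires no correction:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
psi = np.kron(bell, bell)                    # qubit order A1 A2 B2 B1

# CNOT acting on the middle pair (A2, B2), with control A2
CN = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
psi = np.kron(I2, np.kron(CN, I2)) @ psi         # CNOT on (A2, B2)
psi = np.kron(I2, np.kron(H, np.eye(4))) @ psi   # Hadamard on A2

# post-select the z-measurement outcome |0>|0> on A2, B2
cond = psi.reshape(2, 2, 2, 2)[:, 0, 0, :].reshape(4)
cond = cond / np.linalg.norm(cond)
print(np.round(cond, 4))   # (|00> + |11>)/sqrt(2) shared by Alice and Bob
```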
Extending the entanglement swapping process over a commensurate number of repeaters, Alice and Bob can share an entangled state, as shown in Fig.~\ref{fig:LongRepeater}, regardless of how long the distance is between them.
This makes quantum repeaters essential for long distance quantum communication and the quantum Internet, attracting intense attention from both theoretical and experimental points of view.
In addition to the photon loss, various types of noise pose a challenge.
Through a nested purification protocol, Briegel et al.\ designed a quantum repeater mechanism to overcome the exponential scaling of the error probability with respect to the distance~\cite{briegel1998quantum}, and
enabling reliable communication despite the noise in the channel allows quantum key distribution (QKD) over long distances with unconditional security~\cite{doi:10.1126/science.283.5410.2050}.
Childress et al. considered active purification protocol at each repeater station for fault tolerant long distance quantum communication and proposed a physical realization of the protocol based on nitrogen-vacancy (NV) centers~\cite{childress2006fault}.
It was predicted that the hybrid design of van Loock et al. based on light-spin interaction can achieve 100 Hz over 1280 km with almost unit fidelity~\cite{PhysRevLett.96.240501}.
Generating encoded Bell pairs throughout the communication backbone, the protocol of Jiang et al. applies classical error correcting during simultaneous entanglement swapping and can extend the distance significantly~\cite{jiang2009quantum}.
Yang et al.\ have proposed a cavity-QED system which does not require the joint measurement~\cite{PhysRevA.71.034312}, and showed that entanglement swapping can enable entanglement concentration for unknown entangled states~\cite{PhysRevA.71.044302}.
Light-matter interaction at repeater stations, mainly for storing the quantum information in matter quantum memories, was believed to be necessary, which makes the physical realization challenging.
However, by designing an all-photonic quantum repeater based on all flying qubits, Azuma et al.\ have shown that this is not the case, making a breakthrough in bringing the concept of quantum repeaters to reality~\cite{azuma2015all}.
Though requiring quantum memories at repeater stations, using spontaneous parametric downconversion sources, the nested purification~\cite{Chen2017} and fault-tolerant two-hierarchy entanglement swapping~\cite{PhysRevLett.119.170502} have been experimentally demonstrated.
Entangling electrons and nuclear spins through interactions with single photons, Kalb et al. have generated copies of remote entangled states towards quantum repeaters~\cite{doi:10.1126/science.aan0070}.
The idea of building quantum repeaters without quantum memory was recently demonstrated experimentally by Li et al.\ using linear optics~\cite{li2019experimental}.
For a thorough review of recent advances in quantum repeaters, please refer to ref.~\cite{Yan_2021}.
Implementing the entanglement swapping procedure at each repeater station requires the realization of controlled two-qubit operations in the usual circuit model.
Regardless of the technology and type of physical particles used as qubits, realizing two-qubit logic operations is a bigger challenge than single-qubit operations.
Hence, in this work, we ask whether entanglement swapping can be implemented without controlled two-qubit operations, which could bring the quantum repeaters closer to reality.
We consider quantum Zeno dynamics for serving this purpose.
Beyond practical concerns towards long distance quantum communication and the quantum Internet, building quantum repeaters based on quantum Zeno dynamics has the potential to contribute to the fundamentals of quantum entanglement.
The quantum Zeno effect (QZE) can be described as follows~\cite{doi:10.1063/1.523304,Kofman2000}.
If a quantum system initially (at $t=0$) in state $|e\rangle$ evolves under a Hamiltonian $\hat{H}$,
the probability of finding it in the same state, i.e.\ the \textit{survival probability}, at a later time $t>0$ is given as
\begin{equation}
p(t) = \left| \left\langle e \left|\exp \left( -{i\over \hslash} \hat{H} t\right) \right| e\right\rangle \right|^2.
\end{equation}
\noindent
Assuming that the Hamiltonian $\hat{H}$ has a finite variance $\langle V^2 \rangle$ in the initial state, and considering short times, the survival probability is found to be
\begin{equation}
p(t) \approx 1- \frac{1}{\hslash^2} \langle V^2 \rangle t^2.
\end{equation}
\noindent
Now, let us assume ideal projective measurements on the system at intervals $\tau$.
For $1/\tau \gg \langle V^2 \rangle ^{1 \over 2} / \hslash$, the survival probability is
\begin{equation}
p^n(\tau) = p(t = n \tau) \approx \exp \left[ - \frac{1}{\hslash^2}( \langle V^2 \rangle \tau ) t \right].
\end{equation}
\noindent
In other words, the evolution of the system from the initial state slows down with $\tau$.
What is more, for $\tau\rightarrow 0$, the survival probability $p(t)$ approaches $1$, which is widely considered as freezing the evolution of the system, such as in freezing the optical rogue waves~\cite{BAYINDIR2018141} and quantum chirps~\cite{BAYINDIR2021127096}.
It was also shown that the frequent measurements can be designed for accelerating the decay of the system, which is also known as the anti-Zeno effect~\cite{Kofman2000}.
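As a toy illustration of the slowdown (ours, not from the paper): for a qubit evolving under $\hat H=\sigma_x$ with $\hslash=1$, each projective measurement onto the initial state $|0\rangle$ succeeds with probability $\cos^2\tau$, so the survival probability after $n=t/\tau$ measurements can be sketched as

```python
import numpy as np

def survival(t, tau):
    """Survival probability of |0> under H = sigma_x (hbar = 1) with a
    projective measurement onto |0> every tau; success per step is cos^2(tau)."""
    n = int(round(t / tau))
    return np.cos(tau) ** (2 * n)

for tau in (0.5, 0.1, 0.01):
    print(tau, survival(1.0, tau))
# more frequent measurements keep p(t) closer to 1 (Zeno freezing)
```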
Introducing a carefully designed set of quantum operations between measurements, QZE can be used to drive a quantum system towards a target state, which is known as quantum Zeno dynamics (QZD).
One of the early experimental evidences of QZE was observed in the RF transition between two $^9\text{Be}^+$ ground-state hyperfine levels, where collapse to the initial state occurs if frequent measurements are performed~\cite{PhysRevA.41.2295}.
QZE has been studied for slowing down the system's evolution in Bose-Einstein condensates~\cite{Schäfer2014}, ion traps~\cite{PhysRevA.69.012303}, nuclear magnetic resonance~\cite{PhysRevA.87.032112}, cold atoms~\cite{PhysRevLett.87.040402}, cavity-QED~\cite{PhysRevLett.101.180402,PhysRevLett.105.213601,PhysRevA.86.032120} and large atomic systems~\cite{Signoles2014}.
QZE is also being considered in various fundamental contexts.
For example, it has been demonstrated in $PT$-symmetric systems in symmetric and broken-symmetric regions~\cite{Chen2021NPJ}.
Quantum heat engines have been attracting increasing attention in quantum thermodynamics~\cite{tuncer2019work,dag2019temperature}, and Mukherjee et al.\ have recently discovered the advantages of the anti-Zeno effect in fast-driven heat engines~\cite{Mukherjee2020}.
An interesting application of QZD in quantum information and computation is creating entanglement between two initially separated qubits by applying single-qubit operations and performing simple threshold measurements in an iterative way, without requiring a CNOT gate~\cite{PhysRevA.77.062339}.
Reducing the quantum circuit complexity by removing the controlled operations is promising for physical realizations.
In a similar vein, the activation of bound entanglement was recently shown to be enabled via QZD based on single-particle rotations and threshold measurements~\cite{PhysRevA.105.022439}, whereas the original activation proposal by Horodecki et al.~\cite{PhysRevLett.82.1056} requires several three-level controlled operations, bound entangled states and classical communications.
Quantum Zeno effect has been studied for generating multi-partite entanglement as well~\cite{CHEN2016140,doi:10.1126/science.aaa0754}, which is one of the most important problems attracting serious efforts in quantum science and technologies~\cite{DetExp,CavityRefs2,CavityRefs21,SRep_2020_10_3481,PRA_2021_103_052421}.
\section*{Results}
Our QZD proposal for entanglement swapping (ES) starts with the joint system of two Bell states as in Eq.(\ref{Intro:eq:initialstate}), described by the density matrix $\rho_{A_1 A_2 B_2 B_1}$.
We set $\theta=\pi/180$ for simplicity, and through numerical studies we find the threshold measurement operators to be $J_1=|1\rangle\langle1|\otimes|1\rangle\langle1|$ with $J_0 = I-J_1$ in accordance with Ref.~\cite{PhysRevA.77.062339}.
First, let us examine the case of a single iteration, i.e., $n=1$ in the procedure illustrated in Fig.~\ref{fig:CircuitVsZeno}b.
After a single rotate-measure iteration on qubits $A_2$ and $B_2$, we proceed with the final measurement in $z-$basis. Finding $|0\rangle\otimes|0\rangle$, for example, the system of two qubits $A_1$ and $B_1$ is found approximately in the state
\begin{equation}
\displaystyle \rho^1_{A_1 B_1} \!=\!\!
\left(
\begin{array}{cccc}
\ \ \ 0.9993 & -0.0174 & -0.0174 & 0.0003 \\
-0.0174 & \ \ \ 0.0003 & \ \ \ 0.0003 & 0 \\
-0.0174 & \ \ \ 0.0003 & \ \ \ 0.0003 & 0 \\
\ \ \ 0.0003 & 0 & 0 & 0 \\
\end{array}
\right)
\label{eq:rho_{A_1 B_1}},
\end{equation}
\noindent where the subscript denotes the number of iterations performed.
To find after how many iterations we should end the QZD procedure, we run the simulation one hundred times and end the procedure at the $n$th run (consisting of $n$ iterations) with $n=1,2,...,100$.
After each, we calculate the negativity of the resulting state $\rho_{A_1 B_1}$, which we plot in Fig.~\ref{fig:figplot}.
Our simulation shows that within this setting, the resulting state after $n=65$ iterations (and after a $\sigma_x$ by Alice following the $z$-basis measurement result) is approximately
\begin{equation}
\displaystyle \rho^{65}_{A_1 B_1} \!=\!\!
\left(
\begin{array}{cccc}
\ \ \ 0.4993 & \ \ \ -0.0174 & \ \ \ 0.0193 & \ \ \ 0.4993 \\
-0.0174 & \ \ \ 0.0006 & -0.0006 & -0.0174 \\
\ \ \ 0.0193 & -0.0006 & \ \ \ 0.0007 & \ \ \ 0.0193 \\
\ \ \ 0.4993 & -0.0174 & \ \ \ 0.0193 & \ \ \ 0.4993 \\
\end{array}
\right)
\label{eq:rho65_{A_1 B_1}},
\end{equation}
\noindent with negativity $N(\rho^{65}_{A_1 B_1})=0.4999$.
The fidelity of this state to the maximally entangled Bell state in Eq.(\ref{Intro:eq:finalstate}) is found to be
$F(\rho^{65}_{A_1 B_1}, |\Psi_{A_1 B_1 }\rangle) := \langle \Psi_{A_1 B_1}| \ \rho^{65}_{A_1 B_1} \ |\Psi_{A_1 B_1} \rangle=0.9986$.
This result shows that the entanglement swapping can be implemented with almost a unit fidelity by QZD, i.e., only through single qubit rotations and simple threshold measurements, without requiring any controlled operations, reducing the complexity of quantum repeaters significantly in terms of controlled two-qubit operations.
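Both figures can be checked directly from the rounded matrix above; a short Python sketch (ours, not part of the original work):

```python
import numpy as np

rho65 = np.array([
    [ 0.4993, -0.0174,  0.0193,  0.4993],
    [-0.0174,  0.0006, -0.0006, -0.0174],
    [ 0.0193, -0.0006,  0.0007,  0.0193],
    [ 0.4993, -0.0174,  0.0193,  0.4993]])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
fidelity = bell @ rho65 @ bell               # <Psi| rho |Psi> for the pure target
print(fidelity)                              # ~0.9986

# negativity: partial transpose over the first qubit, then trace norm
pt = rho65.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
negativity = (np.abs(np.linalg.eigvalsh(pt)).sum() - 1) / 2
print(negativity)                            # ~0.4999
```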
Next, we extend the QZD-based ES to a series of repeater stations.
We consider the state $\rho^{65}_{A_1 B_1}$ obtained from the first ES to be one of the two states of the second ES and the other being a maximally entangled state equivalent to $|\Psi_{A_1 B_1}\rangle$.
The obtained non-maximally entangled two-qubit state in the second ES will then be considered for the third ES with a maximally entangled state, and so on for enabling long-distance quantum communication via quantum Zeno repeaters (QZR).
At first glance, one might expect the negativity of the new state to decay at each ES, vanishing with increasing distance, i.e., with the number of repeater stations.
However, this is not the case, demonstrating the strength of our proposed QZD.
The negativity of the state obtained from each ES exhibits an oscillation. For example, after it decreases to $0.499938$ in the fifth ES, it increases to $0.499942$ in the sixth.
We plot the negativity values of the states obtained over $100$ repeater stations in Fig.~\ref{fig:PlotRepeater}.
To provide a clearer presentation of the turning point of the negativity, we also plot the results for the first $9$ states in Fig.~\ref{fig:PlotRepeaterFirst10}, and provide the corresponding density matrices in the Appendix.
\section*{Discussion}
A major contribution of the proposed quantum Zeno repeaters (QZR) is to reduce the quantum circuit complexity of repeaters in terms of controlled multi-particle operations as illustrated in Fig.~\ref{fig:CircuitVsZeno}a, which is more challenging than single particle operations in any technology in principle.
Because our QZR protocol can be extended to multi-level particles, this reduction would be even more significant than in the case of qubits.
However, beyond practical concerns for reducing the quantum circuit complexity, we believe showing that quantum repeaters can be realized via quantum Zeno dynamics contributes to our understanding of quantum entanglement and measurements.
One of the drawbacks of our protocol is that, under ideal conditions (except for the attenuation in the channel, which requires the repeaters in the first place), it achieves almost, but not exactly, unit fidelity.
However, a fidelity above $0.998$ can be tolerated in physical realizations, especially given that the fidelities will decrease in both approaches.
A more serious drawback could be the increased latency. Repeaters based on the standard circuit model require the implementation of only two logic operations, though one of them is a controlled multi-particle operation.
Our protocol requires the implementation of several single-particle operations and simple threshold measurements, instead.
This would take a longer time depending on the physical realization, introducing a higher latency, which might not be desired especially considering on-demand systems and designs without quantum memory.
The slight increase in the negativity does not violate the monotonicity of entanglement measure since a single entangled state with negativity $\approx 0.5$ is obtained out of two entangled states with total negativity $\approx 0.5$.
The reason we prefer the negativity entanglement measure as the key performance indicator over the fidelity is as follows.
In each ES, the resulting state is close to one of the four Bell states, which are equivalent under local operations and classical communications~\cite{NC}.
Hence, rather than finding which Bell state it is the closest to and then calculating the fidelity each time, for simplicity, we chose to calculate the negativity which is invariant under Pauli operators that the parties can apply to convert one Bell state to another.
Note that while our QZD-based ES protocol requires $65$ iterations in the first repeater, next repeaters might require a different number of iterations.
Our simulation picks the best number for each repeater station, and the presented results are based on the best outcomes.
For the physical realization of our QZD protocol, we consider the superconducting circuit proposed by Wang et al.~\cite{PhysRevA.77.062339} where the threshold measurements can be implemented by Josephson-junction circuit with flux qubits.
In the same work, physical imperfections were also analyzed by considering a possible deviation from the rotation angle $\theta$ in each iteration.
It was found that in the case of several iterations, the impact of the deviations is minimized, implying the robustness of the protocol.
Because our protocol follows a similar rotate-measure procedure with many iterations, we expect a similar inherent robustness.
Apart from the attenuation in the channel, we have studied our protocol under ideal conditions.
However, because QZE has been mostly considered for protecting the system from noise induced by interactions with the environment, it would be interesting as a future research to design a QZD protocol with an inherent error-correction mechanism.
\section*{Methods}
In each iteration of QZD, a set of two basic operations are performed.
First, the following rotation operation is applied on each of the two qubits at the repeater station,
\begin{equation}\label{Methods:eq:singlerotation}
\displaystyle R(\theta) =
\left(
\begin{array}{cc}
\cos \theta & - \sin \theta \\
\sin \theta & \ \ \ \cos \theta \\
\end{array}
\right),
\end{equation}
\noindent which is followed by the threshold measurements on each qubit in concern, defined by the measurement operators
\begin{equation} \label{Methods:eq:Js}
J_1= |i\rangle\langle i| \otimes |j\rangle\langle j|, \ \ \ \ \ J_0 = I^{\otimes 2} -J_1
\end{equation}
\noindent with $i, j \in \{0, 1\}$ and $I$ being the two-dimensional identity operator.
The system will be found in the $|i\rangle|j\rangle$ state with a small probability $\epsilon$, and with probability $1-\epsilon$ it is projected onto the $J_0$ subspace.
Hence, in each iteration, the state of the system evolves in the rotate-measure procedure as $\rho \rightarrow \rho^{r} \rightarrow \rho^{rm}$ where
\begin{equation}\label{Methods:eq:rotate}
\rho^{r} = (I \otimes R(\theta)^{\otimes 2} \otimes I ) \ \rho \ (I \otimes R(\theta)^{\otimes 2} \otimes I )^{\dagger},
\end{equation}
\noindent and
\begin{equation} \label{Methods:eq:measure}
\rho^{rm} = {(I \otimes J_0 \otimes I) \ \rho^{r} \ (I \otimes J_0 \otimes I)^{\dagger} \over \text{Tr}[(I \otimes J_0 \otimes I) \ \rho^{r} \ (I \otimes J_0 \otimes I)^{\dagger}]}.
\end{equation}
After $n$ iterations, the QZD process is over and similar to the circuit model computation, two qubits at the repeater are measured in $z-$basis, and according to the results of this final measurement communicated over a classical channel, one of the Pauli operators $\{I,\sigma_x, \sigma_z\}$ is applied to the qubits of Alice and Bob, leaving them not exactly in the Bell state but in the state $\rho '$ with almost a unit fidelity to a Bell state.
Here, $\{i, j \}$ of $J_1$, the rotation angle $\theta$ and the number of iterations $n$ are to be determined by numerical simulations for achieving the closest $\rho '$ to a maximally entangled state.
Note that considering a different $\theta$ for each qubit in each iteration could improve the performance, with the drawback of expanding the search space significantly.
For simplicity, we fix $\theta$ for both qubits throughout the process.
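The rotate-measure iteration is straightforward to simulate on state vectors. The following sketch (ours, not from the paper) reproduces the single-iteration state $\rho^1_{A_1 B_1}$ reported in the Results section for the outcome $|0\rangle\otimes|0\rangle$:

```python
import numpy as np

theta = np.pi / 180
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
I2, p1 = np.eye(2), np.diag([0.0, 1.0])        # p1 = |1><1|

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
psi = np.kron(bell, bell)                      # qubit order A1 A2 B2 B1
U = np.kron(I2, np.kron(R, np.kron(R, I2)))    # rotate A2 and B2
J0 = np.eye(16) - np.kron(I2, np.kron(p1, np.kron(p1, I2)))

def iterate(psi, n):
    for _ in range(n):
        psi = J0 @ (U @ psi)                   # rotate, then threshold-measure
        psi = psi / np.linalg.norm(psi)        # renormalize within J0 subspace
    return psi

# one iteration, then the z-measurement outcome |0>|0> on the repeater qubits
cond = iterate(psi, 1).reshape(2, 2, 2, 2)[:, 0, 0, :].reshape(4)
cond = cond / np.linalg.norm(cond)
rho1 = np.outer(cond, cond)
print(np.round(rho1, 4))   # close to rho^1_{A1 B1} in the Results section
```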
For extending the above entanglement swapping procedure to a series of repeater stations, we can assume that the entanglement swapping (ES) starts from both ends and continues towards the repeater station in the middle, as in Fig.~\ref{fig:LongRepeater}.
Therefore, although the first ES starts with maximally entangled states, it yields the non-maximally entangled state $\rho '$, which is then used in the next ES, creating a state $\rho ''$ with a smaller fidelity to the maximally entangled state than $\rho '$.
Our numerical simulation takes into account the generated non-maximally entangled state being the output of each ES as the input to the next ES.
The negativity of a two-qubit state $\rho$ is given by the absolute value of the sum of the negative eigenvalues $\mu_i$ of the partial transpose $\rho^{\Gamma_A}$ with respect to subsystem $A$; equivalently,
\begin{equation}
N(\rho) \equiv { ||\rho^{\Gamma_A}||_1 - 1 \over 2},
\end{equation}
\noindent
where $||A||_1$ is the trace norm of the operator $A$~\cite{PhysRevA.65.032314}.
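A direct implementation of this measure (a sketch of ours, with the partial transpose taken over the first qubit in NumPy's row-major ordering):

```python
import numpy as np

def negativity(rho):
    """N(rho) = (||rho^{Gamma_A}||_1 - 1)/2 for a two-qubit density matrix."""
    pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    return (np.abs(np.linalg.eigvalsh(pt)).sum() - 1) / 2

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(negativity(np.outer(bell, bell)))     # 0.5 for a maximally entangled state
print(negativity(np.diag([1.0, 0, 0, 0])))  # 0.0 for a product state
```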
\section*{Appendix}
In the Appendix, we provide the density matrices of the states obtained through ES over nine repeater stations, where $\rho_j$ denotes the state after $j$th ES.
\begin{equation}
\displaystyle \rho_{1} =
\left(
\begin{array}{cccc}
0.000753558 & 0.0193976 & 0.0193976 & -0.000677401 \\
0.0193976 & 0.499319 & 0.499319 & -0.0174372 \\
0.0193976 & 0.499319 & 0.499319 & -0.0174372 \\
-0.000677401 & -0.0174372 & -0.0174372 & 0.000608941 \\
\end{array}
\right),
\end{equation}
\begin{equation}
\displaystyle \rho_{2} =
\left(
\begin{array}{cccc}
0.500137 & 0.00196063 & 0.00196063 & 0.499992 \\
0.00196063 & 7.68598\times10^{-6} & 7.68601\times10^{-6} & 0.00196006 \\
0.00196063 & 7.68601\times10^{-6} & 7.68601\times10^{-6} & 0.00196006 \\
0.499992 & 0.00196006 & 0.00196006 & 0.499848 \\
\end{array}
\right),
\end{equation}
\begin{equation}
\displaystyle \rho_{3} =
\left(
\begin{array}{cccc}
0.000913537 & 0.0213573 & 0.0213573 & -0.000661988 \\
0.0213573 & 0.499303 & 0.499303 & -0.0154764 \\
0.0213573 & 0.499303 & 0.499303 & -0.0154764 \\
-0.000661988 & -0.0154764 & -0.0154764 & 0.000479705 \\
\end{array}
\right),
\end{equation}
\begin{equation}
\displaystyle \rho_{4} =
\left(
\begin{array}{cccc}
0.500258 & 0.00392158 & 0.00392158 & 0.499969 \\
0.00392158 & 0.0000307416 & 0.0000307417 & 0.00391931 \\
0.00392158 & 0.0000307417 & 0.0000307417 & 0.00391931 \\
0.499969 & 0.00391931 & 0.00391931 & 0.49968 \\
\end{array}
\right),
\end{equation}
\begin{equation}
\displaystyle \rho_{5} =
\left(
\begin{array}{cccc}
0.0000692054 & 0.00588032 & 0.00588193 & -0.000159162 \\
0.00588032 & 0.499645 & 0.499782 & -0.0135238 \\
0.00588193 & 0.499782 & 0.499919 & -0.0135275 \\
-0.000159162 & -0.0135238 & -0.0135275 & 0.000366047 \\
\end{array}
\right),
\end{equation}
\begin{equation}
\displaystyle \rho_{6} =
\left(
\begin{array}{cccc}
0.499725 & 0.00588306 & -0.0115664 & 0.499832 \\
0.00588306 & 0.0000692589 & -0.000136167 & 0.00588432 \\
-0.0115664 & -0.000136167 & 0.000267713 & -0.0115689 \\
0.499832 & 0.00588432 & -0.0115689 & 0.499938 \\
\end{array}
\right),
\end{equation}
\begin{equation}
\displaystyle \rho_{7} =
\left(
\begin{array}{cccc}
0.000123053 & 0.00784181 & 0.00784289 & -0.00018147 \\
0.00784181 & 0.499736 & 0.499805 & -0.0115645 \\
0.00784289 & 0.499805 & 0.499873 & -0.0115661 \\
-0.00018147 & -0.0115645 & -0.0115661 & 0.000267619 \\
\end{array}
\right),
\end{equation}
\begin{equation}
\displaystyle \rho_{8} =
\left(
\begin{array}{cccc}
0.499815 & 0.00784462 & -0.0096067 & 0.499846 \\
0.00784462 & 0.000123122 & -0.000150778 & 0.00784511 \\
-0.0096067 & -0.000150778 & 0.000184646 & -0.00960729 \\
0.499846 & 0.00784511 & -0.00960729 & 0.499877 \\
\end{array}
\right),
\end{equation}
\begin{equation}
\displaystyle \rho_{9} =
\left(
\begin{array}{cccc}
0.00019229 & 0.0098035 & 0.0098035 & -0.000188386 \\
0.0098035 & 0.499812 & 0.499812 & -0.00960447 \\
0.0098035 & 0.499812 & 0.499812 & -0.00960447 \\
-0.000188386 & -0.00960447 & -0.00960447 & 0.000184561 \\
\end{array}
\right).
\end{equation}
\section*{Availability of Data and Materials}
All data generated or analysed during this study are included in this published article.
\section*{Author contributions statement}
V.B. designed the scheme and carried out the theoretical analysis under the guidance of F.O. V.B., F.O. reviewed the manuscript and contributed to the interpretation of the work and the writing of the manuscript.
\section*{Competing interests}
The authors declare no competing interests.
\begin{figure}
\caption{Illustrating the entanglement swapping procedure. Possessing two qubits, $A_2$ entangled with Alice's qubit $A_1$, and $B_2$ entangled with Bob's qubit $B_1$, Repeater performs operations and measurements on $A_2$ and $B_2$, leaving Alice's and Bob's qubits entangled.}
\label{fig:ShortRepeater}
\end{figure}
\begin{figure}
\caption{Extending the entanglement swapping procedure in Fig.~\ref{fig:ShortRepeater}.}
\label{fig:LongRepeater}
\end{figure}
\begin{figure}
\caption{Entanglement swapping procedure.}
\label{fig:CircuitVsZeno}
\end{figure}
\begin{figure}
\caption{Negativity of the state obtained after $n$ rotate-measure iterations of QZD for realizing a single entanglement swapping as in Fig.~\ref{fig:CircuitVsZeno}.}
\label{fig:figplot}
\end{figure}
\begin{figure}
\caption{Negativity of the obtained state after ES over a hundred repeater stations.}
\label{fig:PlotRepeater}
\end{figure}
\begin{figure}
\caption{Negativity of the obtained state after ES over a few repeater stations for demonstrating one of the periodic turning points.}
\label{fig:PlotRepeaterFirst10}
\end{figure}
\end{document} |
\begin{document}
\title{Some infinite matrix analysis, a Trotter product formula for dissipative operators, and an algorithm for the incompressible Navier-Stokes equation}
\author{J\"org Kampen }
\maketitle
\begin{abstract}
We introduce a global scheme on the $n$-torus of a controlled auto-controlled incompressible Navier-Stokes equation in terms of a coupled controlled infinite ODE-system of Fourier-modes with smooth data. We construct a scheme of global approximations related to linear partial integro-differential equations in dual space which are uniformly bounded in dual Sobolev spaces with polynomially decaying modes. Global boundedness of the solution can be proved using an auto-controlled form of the scheme with damping via a time dilatation transformation. The scheme is based on some infinite matrix algebra related to weakly singular integrals, and a Trotter-product formula for dissipative operators which leads to rigorous existence results and uniform global upper bounds. For data with polynomial decay of the modes (smooth data) global existence follows from the preservation of upper bounds at each time step, and a compactness property in strong dual Sobolev spaces (of polynomially decaying Fourier modes). The main difference to schemes considered in \cite{KAC,KNS,K3,KB1,KB2, KB3, KHyp} is that we have a priori estimates for solutions of the iterated global linear partial integro-differential equations. Furthermore, the external control function is optional, controls only the non-dissipative zero modes and is, hence, much simpler. The analysis of the scheme leads to an algorithmic scheme of the Navier-Stokes equation on the torus for regular data based on the Trotter-type product formula for specific dissipative operators. The infinite systems are also written in a real sine and cosine basis for numerical purposes.
The main intention here is to define an algorithm which uses the damping via dissipative modes and which converges in strong norms for arbitrary viscosity constants $\nu>0$. Finally, we discuss the concept of 'turbulence', which may be a concept best described by dynamical properties, as has been suggested by other authors for a long time (cf. \cite{CFNT, FJRT,FJKT}). An appendix contains a detailed analysis of Euler type Trotter product schemes and criteria of convergence. In this latest version of the paper convergence is reconsidered in detail in the appendix.
\end{abstract}
2010 Mathematics Subject Classification. 35Q30, 76D03.
\section{Introduction}
The formal transformation of partial differential equations to infinite systems of ordinary differential equations via representations in a Fourier basis may be useful if the problem is posed on a torus. Formal representations of solutions are simplified at least if the equations are linear. Well, even nonlinear infinite ODE-representations may be useful in order to define solution schemes. However, in order to make sense of an exponential function of an infinite matrix applied to an infinite vector the sequence spaces involved have to be measured in rather strong norms. Schur \cite{S} may have been the first who provided necessary and sufficient conditions such that infinite linear transformations make sense. His criteria were in the sense of $l^1$-sequence spaces and are far too weak for our purposes here. However, dealing with the incompressible Navier-Stokes equation we shall assume that the initial vector-valued data function $\mathbf{h}=\left(h_1,\cdots ,h_n\right)^T$ satisfies
\begin{equation}
h_i\in C^{\infty}\left({\mathbb T}^n\right),
\end{equation}
where ${\mathbb T}^n$ denotes the $n$-torus (maybe the only flat compact manifold of practical interest when studying the Navier-Stokes equation).
In the following let ${\mathbb Z}$ be the set of integers, let ${\mathbb N}$ be the set of natural numbers including zero, let ${\mathbb R}$ denote the field of real numbers, and let $\alpha,\beta\in {\mathbb Z}^n$ denote multiindices of the set ${\mathbb Z}^n$ of $n$-tuples of integers. Differences of multiindices $\alpha-\beta\in {\mathbb Z}^n$ are built componentwise, and as usual for a multiindex $\gamma$ with nonnegative entries we denote $|\gamma|=\sum_{i=1}^n|\gamma_i|$. Consider a smooth function $\mathbf{h}=\left(h_1,\cdots ,h_n \right)^T$ with $h_i\in C^{\infty}\left({\mathbb T}^n_l\right)$, where ${\mathbb T}^n_l={\mathbb R}^n/l{\mathbb Z}^n$ is the torus of dimension $n$ and size $l>0$. Note that the smoothness of the functions $h_i$ means that there is a polynomial decay of the Fourier modes, i.e., for a torus of size $l=1$ for the modes of multivariate differentials of the function $h_i$ we have
\begin{equation}\label{alphamodes}
\begin{array}{ll}
D^{\gamma}h_{i\alpha}:=\int_{{\mathbb T}^n}D^{\gamma}_xh_i(x)\exp\left(-2\pi i\alpha x \right)dx\\
\\
=(-1)^{|\gamma|}\int_{{\mathbb T}^n}h_i(x)\Pi_{i=1}^n(-2\pi i\alpha_i)^{\gamma_i}\exp\left(-2\pi i\alpha x \right)dx\\
\\
=\Pi_{i=1}^n(2\pi i\alpha_i)^{\gamma_i}h_{i\alpha},
\end{array}
\end{equation}
where as usual for a multiindex $\gamma$ with nonnegative entries $\gamma_i\geq 0$ the symbol $D^{\gamma}_x$ denotes the multivariate derivative with respect to $x$ and of order $\gamma_i$ with respect to the $i$th component of $x$.
Here $h_{i\alpha}$ is the $\alpha$th Fourier mode of the function $h_i$, and $D^{\gamma}h_{i\alpha}$ is the $\alpha$th Fourier mode of the function $D^{\gamma}_xh_i$.
Since $D^{\gamma}_xh_i$ is smooth this function has a Fourier decomposition on ${\mathbb T}^n$.
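The differentiation rule (\ref{alphamodes}) can be checked numerically. The following sketch is our own illustration (the test function and grid size are arbitrary choices, not part of the text): in dimension one with $l=1$ it verifies via a discrete Fourier transform that the modes of $h'$ equal $2\pi i\alpha$ times the modes of $h$.

```python
import numpy as np

# Check (1-D, torus size l = 1): the alpha-th Fourier mode of h' equals
# (2*pi*i*alpha) times the alpha-th mode of h.
N = 64
x = np.arange(N) / N                      # grid on the 1-torus [0, 1)
h = np.exp(np.sin(2 * np.pi * x))         # a smooth periodic test function

# Discrete modes h_alpha ~ (1/N) * FFT; numpy's FFT uses exp(-2*pi*i*alpha*x).
modes = np.fft.fft(h) / N
alphas = np.fft.fftfreq(N, d=1.0 / N)     # integer frequencies alpha

# Modes of the derivative, computed two ways.
dmodes_spectral = (2j * np.pi * alphas) * modes
dh = 2 * np.pi * np.cos(2 * np.pi * x) * h   # exact derivative of h
dmodes_direct = np.fft.fft(dh) / N

assert np.allclose(dmodes_spectral, dmodes_direct, atol=1e-10)
```

Since $h$ is smooth, its modes decay super-polynomially, so the truncation to $N$ grid points contributes only a negligible aliasing error.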
The sequence $\left( D^{\gamma}h_{i\alpha} \right)_{\alpha\in {\mathbb Z}^n}$ of the associated $\alpha$-modes exists in the space of square integrable sequences $l^2\left({\mathbb Z}^n\right)$. Hence, we have polynomial decay of the modes $h_{i\alpha}$ as $|\alpha|\uparrow \infty$. This polynomial decay is important in the following in order to make sense of certain matrix operations with certain multiindexed infinite matrices and multiplications of these matrices with infinite vectors, because we are interested in regular solutions. In the following we have in mind that a multiindexed vector
\begin{equation}
\mathbf{u}^F:=\left( u_{\alpha} \right)^T_{\alpha\in {\mathbb Z}^n}
\end{equation}
(the superscript $T$ meaning 'transposed') with (possibly complex) constants $u_{\alpha}$ (although we consider only real solutions in this paper) which decay fast enough as $|\alpha|\uparrow \infty$ corresponds to a function
\begin{equation}\label{uclass}
u\in C^{\infty}\left({\mathbb T}^n_l\right),
\end{equation}
where
\begin{equation}
u(x):=\sum_{\alpha\in {\mathbb Z}^n}u_{\alpha}\exp{\left( \frac{2\pi i\alpha x}{l}\right) }.
\end{equation}
We are interested in real functions, and real Fourier systems will be considered along with complex Fourier systems. The advantage of complex Fourier systems is that the notation is more succinct, and they lead to real solutions as well. However, from the numerical point of view it may be an advantage to control the error on real spaces. Therefore it makes sense to deal with complex Fourier systems and compare them to real Fourier systems on an algorithmic or numerical level at decisive steps, in order to ensure that the approximative solution constructed is real or that the error of the approximative solution is real. More precisely, if the latter representation can be rewritten with real functions $\cos\left( \frac{2\pi \alpha x}{l}\right)$ and $\sin\left( \frac{2\pi \alpha x}{l}\right)$ and with real coefficients of these functions, then the function in (\ref{uclass}) has values in ${\mathbb R}$, and this is the case we have in mind in this paper. Later we shall write a system in the basis
\begin{equation}\label{realbasis*}
\left\lbrace \sin\left(\frac{\pi \alpha x}{l}\right),~ \cos\left(\frac{\pi \alpha x}{l}\right) \right\rbrace_{\alpha\in{\mathbb N}^n},
\end{equation}
where ${\mathbb N}$ denotes the set of natural numbers including zero, i.e., we have $0\in {\mathbb N}$. We shall see that the analysis obtained in the complex notation can be transferred to real analysis with respect to the basis (\ref{realbasis*}).
We say that the infinite vector $\mathbf{u}^F$ is in the dual Sobolev space of order $s\in {\mathbb R}$, i.e.,
\begin{equation}\label{hs}
\mathbf{u}^F\in h^s\left({\mathbb Z}^n\right)
\end{equation}
in symbols, if the corresponding function $u$ defined in (\ref{uclass}) is in the Sobolev space $H^s\left({\mathbb T}^n_l\right)$ of order $s\in {\mathbb R}$. We may also define the dual space $h^s$ directly defining
\begin{equation}
\mathbf{u}^F\in h^s\left({\mathbb Z}^n\right)\leftrightarrow \sum_{\alpha \in{\mathbb Z}^n}|u_{\alpha}|^2\left\langle \alpha\right\rangle^{2s}< \infty,
\end{equation}
where
\begin{equation}
\left\langle \alpha\right\rangle :=\left(1+|\alpha|^2 \right)^{1/2}.
\end{equation}
The two definitions are clearly equivalent.
Note that being an element of the function space in (\ref{hs}) for nonnegative $s$ means that we have some decay of the $\alpha$-modes. The slower the decay, the less regularity we have. For example, for constant $\alpha$-modes the corresponding formal series
\begin{equation}
\sum_{\alpha\in {\mathbb Z}^n}C\exp(\frac{2\pi i\alpha x}{l})
\end{equation}
is a formal expression of $C$ times the $\delta$ distribution and we have
\begin{equation}
\delta\in H^s~~\mbox{if}~~s<-\frac{n}{2}.
\end{equation}
Now we may rephrase the ideas in \cite{KAC}, \cite{KNS}, \cite{K3}, \cite{KB2}, and \cite{KB3} in this context. We shall deviate from these schemes as we simplify the control function and define an iterated scheme of global equations corresponding to linear partial integro-differential equations approximating the (controlled) incompressible Navier-Stokes equation. We also apply the auto-control transformation in order to sharpen analytical results and get global smooth solutions which are uniformly bounded in time. Note that the latter idea can be made consistent with efficiency if we consider stepsizes of size $0.5$ in the time dilatation transformation considered below, which then stretches a time interval $\left[0,0.5 \right]$ to a time interval $\left[0,\frac{1}{\sqrt{3}} \right]$; this is a small increase in time compared to the advantage of a strong potential damping term stabilizing the scheme. Formally, the modes $(v_{i\alpha})_{\alpha\in {\mathbb Z}^n},~1\leq i\leq n$, of the velocity function $v_i,~1\leq i\leq n,$ of the incompressible Navier-Stokes equation satisfy the infinite ODE-system (derivation below)
\begin{equation}\label{navode200first}
\begin{array}{ll}
\frac{d v_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v_{j(\alpha-\gamma)}v_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{j\gamma}v_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2},
\end{array}
\end{equation}
where the modes $v_{i\alpha}$ depend on time $t$ and such that for all $1\leq i\leq n$ and all $\alpha\in {\mathbb Z}^n$ we have
\begin{equation}
v_{i\alpha}(0)=h_{i\alpha}.
\end{equation}
In the infinite system and for fixed $\alpha$ we call the equation in (\ref{navode200first}) with left side $\frac{d v_{i\alpha}}{dt}$ the $\alpha$-mode equation.
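For concreteness, the right-hand side of (\ref{navode200first}) can be evaluated on a finite truncation of the mode set by direct convolution. The following sketch is our own illustration for $n=2$ and $l=1$ (the truncation level $M$ and all names are ours, not an implementation from the text):

```python
import numpy as np

M = 4
rng = range(-M, M + 1)   # truncated mode range |alpha_i| <= M

def rhs(v, nu=0.1):
    """v: dict mapping (i, alpha) -> complex mode v_{i alpha}, i in {0, 1}."""
    out = {}
    for i in (0, 1):
        for a1 in rng:
            for a2 in rng:
                alpha = (a1, a2)
                # dissipative term: -4 pi^2 nu |alpha|^2 v_{i alpha}
                val = -nu * 4 * np.pi**2 * (a1**2 + a2**2) * v[(i, alpha)]
                conv = 0.0
                press = 0.0
                for g1 in rng:
                    for g2 in rng:
                        gamma = (g1, g2)
                        d = (a1 - g1, a2 - g2)           # alpha - gamma
                        if abs(d[0]) > M or abs(d[1]) > M:
                            continue                      # outside truncation
                        for j in (0, 1):
                            # convection: sum_j 2 pi i gamma_j v_{j(alpha-gamma)} v_{i gamma}
                            conv += 2j * np.pi * gamma[j] * v[(j, d)] * v[(i, gamma)]
                            for k in (0, 1):
                                # Leray numerator: 4 pi^2 gamma_j (alpha_k - gamma_k) v_{j gamma} v_{k(alpha-gamma)}
                                press += (4 * np.pi**2 * gamma[j] * d[k]
                                          * v[(j, gamma)] * v[(k, d)])
                val -= conv
                if alpha != (0, 0):
                    val += 2j * np.pi * alpha[i] * press / (4 * np.pi**2 * (a1**2 + a2**2))
                out[(i, alpha)] = val
    return out
```

A useful consistency check is that the right-hand side preserves the Hermitian symmetry $v_{i(-\alpha)}=\overline{v_{i\alpha}}$ of the modes of a real velocity field, which holds exactly on the symmetric truncation above.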
Some simple but important observations are in order here. First note that the damping
\begin{equation}
\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)
\end{equation}
is not equal to zero unless $|\alpha|=\sum_{i=1}^n|\alpha_i|=0$. Hence we have damping except for the zero-modes $v_{i0}$ (where the subscript $0$ denotes the $n$-tuple of zeros). Second, note that the zero modes $v_{i0}$ contribute to the $\alpha$-mode equation for $\alpha\neq 0$ only via the convection terms. This is because for the second term on the right side of (\ref{navode200first}) only the term with $\gamma=\alpha$, i.e., the summand
\begin{equation}
-\sum_{j=1}^n\frac{2\pi i \alpha_j}{l}v_{j0}v_{i\alpha}
\end{equation}
contains a zero mode. Third, note that the pressure term in (\ref{navode200first}) has no zero mode summands since
\begin{equation}
\begin{array}{ll}
2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{j\gamma}v_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}=\\
\\
2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n\setminus \left\lbrace \alpha,0\right\rbrace} 4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{j\gamma}v_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}.
\end{array}
\end{equation}
More precisely, note that the $0$-mode equation consists only of terms corresponding to convection terms, i.e., we have
\begin{equation}\label{zeromode}
\begin{array}{ll}
\frac{d v_{i0}}{dt}=
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace }\frac{2\pi i \gamma_j}{l}v_{j(-\gamma)}v_{i\gamma}.
\end{array}
\end{equation}
Furthermore we observe that in (\ref{zeromode}) we have no zero modes on the right side but only non-zero modes of coupled equations. Furthermore, all non-zero modes involve a damping term, because $\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)<0$ for $\nu>0$ and $|\alpha|\neq 0$. Note that this damping becomes stronger as the order of the modes $|\alpha|\neq 0$ increases - an important difference to the incompressible Euler equation, where we have no viscosity damping at all. This is also a first hint that it may be useful to define a controlled incompressible Navier-Stokes equation on the torus where a simple control function controls just the zero modes. In order to sharpen analytical results we shall consider extended auto-controlled schemes where we introduce a damping term via time dilatation similarly as in \cite{KAC}. This is a time transformation at each time step, and as at each local time step we deal with time-dependent operators, this additional idea fits quite well with the scheme on a local level. On a global time-level it guarantees the preservation of global upper bounds.
This idea is implemented in local time where we may consider the coordinate transformation
\begin{equation}\label{timedil}
(\tau (t) ,x)=\left(\frac{t}{\sqrt{1-t^2}},x\right),
\end{equation}
which is a time dilatation effectively and leaves the spatial coordinates untouched.
Then on a time local level, i.e., for some parameter $\lambda >0$ and $t\in [0,1)$ the function $u_i,~1\leq i\leq n$ with
\begin{equation}\label{uvlin}
\lambda(1+t)u_i(\tau,x)=v_i(t ,x),
\end{equation}
carries all information of the velocity function on this interval, and satisfies
\begin{equation}
\frac{\partial}{\partial t}v_i(t,x)=\lambda u_i(\tau,x)+\lambda(1+t)\frac{\partial}{\partial \tau}u_i(\tau,x)\frac{d \tau}{d t},
\end{equation}
where
\begin{equation}
\frac{d\tau}{dt}=\frac{1}{\sqrt{1-t^2}^3}.
\end{equation}
We shall choose $0<\lambda<1$ in order to make the nonlinear terms smaller on the time interval compared to the damping term (they acquire an additional factor $\lambda$). The price to pay is larger initial data, but we shall observe that we can compensate for this in an appropriate scheme.
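Both the derivative formula for $\tau$ above and the interval stretching $\left[0,0.5\right]\to\left[0,\frac{1}{\sqrt{3}}\right]$ mentioned earlier can be verified symbolically; this is a quick consistency check of ours, not part of the scheme.

```python
import sympy as sp

t = sp.symbols('t')
tau = t / sp.sqrt(1 - t**2)   # the time dilatation tau(t)

# d tau / dt = 1 / sqrt(1 - t^2)^3, as used in the chain rule above
assert sp.simplify(sp.diff(tau, t) - 1 / sp.sqrt(1 - t**2)**3) == 0

# a half step: tau maps [0, 1/2] onto [0, 1/sqrt(3)]
assert sp.simplify(tau.subs(t, sp.Rational(1, 2)) - 1 / sp.sqrt(3)) == 0
```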
\begin{rem}
Alternatively, we can use localized transformations of the form
\begin{equation}\label{uvloc}
\lambda(1+(t-t_0))u^{t_0}_i(\tau,x)=v_i(t ,x),~\lambda>0
\end{equation}
for $t_0\geq 0$ and where
\begin{equation}\label{timedilloc}
(\tau (t) ,x)=\left(\frac{t-t_0}{\sqrt{1-(t-t_0)^2}},x\right),
\end{equation}
\begin{equation}
\frac{\partial}{\partial t}v_i(t,x)=\lambda u^{t_0}_i(\tau,x)+\lambda(1+(t-t_0))\frac{\partial}{\partial \tau}u^{t_0}_i(\tau,x)\frac{1}{\sqrt{1-(t-t_0)^2}^3}.
\end{equation}
Using this localized transformation we can avoid a weaker damping as time increases. In the following we get the equations for the transformations of the form (\ref{uvloc}) if we replace $t$ by $t-t_0$ in the coefficients.
\end{rem}
We denote the inverse of $\tau(t)$ by $t(\tau)$. For the modes of $u_i,~1\leq i\leq n$ we get the equation
\begin{equation}\label{navode200firsttimedil}
\begin{array}{ll}
\frac{d u_{i\alpha}}{d\tau}=\sqrt{1-t(\tau)^2}^3
\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)u_{i\alpha}-\\
\\
\lambda(1+t(\tau))\sqrt{1-t(\tau)^2}^3\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}u_{j(\alpha-\gamma)}u_{i\gamma}+\\
\\
\lambda(1+t(\tau))\sqrt{1-t(\tau)^2}^3\frac{2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)u_{j\gamma}u_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}\\
\\
-\sqrt{1-t^2(\tau)}^3(1+t(\tau))^{-1}u_{i\alpha}.
\end{array}
\end{equation}
Note that in the latter equation the nonlinear terms have the factor $\lambda$ (which may be chosen to be small) while the damping potential term has no factor $\lambda$ and may dominate the nonlinear terms. Especially, in the transformation above the Laplacian has no coefficient $\lambda>0$. Using a time dilatation transformation at each time step it becomes easier to prove the existence of global upper bounds. On the other hand, used in the present form, at each time step we blow up a time step interval of size $1$ to size infinity, and this is certainly more interesting from an analytical than from a numerical or computational point of view. In general it may be useful to have stability of computations, and also in this situation it may be interesting to use time dilatation transformations in order to obtain stability via damping and pay the price of additional computation costs for this stability. So it seems prima facie. However, on closer inspection we observe that we may use a time dilatation algorithm on a smaller interval (not on unit time intervals $[l-1,l]$ but on half unit time intervals $\left[l-1,l-\frac{1}{2}\right]$, for example) and repeat the subscheme twice. The time step size does not increase very much then (by the nature of the time transformation) but we have a damping term now. We shall discuss this more closely in the section about algorithms.
Next we make this idea of our schemes more precise by defining the schemes for computing the modes. We forget the time dilatation for a moment, since this is a local operation, and consider the global equation in time coordinates $t\geq 0$ again. For purposes of global existence it can make sense to start with the multivariate Burgers equation, because we know that we have a unique global regular solution for this equation on the $n$-torus. As we shall see below in more detail, starting with an assumed solution of the multivariate Burgers equation is also advantageous from an analytical point of view, as we know the existence of regular solutions (polynomial decay of modes) and have to take care only of the additional Leray projection term. On the other hand, from a numerical point of view we have no explicit formula for solutions of the multivariate Burgers equation (except in special cases) and, hence, in a numerical scheme we had better construct the solution. The existence of global solutions $u^{b}$ can be derived from the a priori estimates
\begin{equation}\label{aprioriestburg}
\frac{\partial}{\partial t}\|u^b(t,.)\|_{H^s}\leq \|u^b(t,.)\|_{H^{s+1}}\sum_{i,j}\sum_{|\alpha|+|\beta|\leq s}\|D^{\alpha}u^b_iD^{\beta}u^b_j\|_{L^2}-2\|\nabla u^b\|^2_{H^s},
\end{equation}
which hold on the $n$-torus, or by the arguments which we discussed in \cite{KB1} and \cite{KB2}. In our context this means that for positive viscosity $\nu>0$ and smooth data (at least formally) we have a global regular solution for the system for $1\leq i\leq n$ and $\alpha \in {\mathbb Z}^n$ of the form
\begin{equation}\label{navode200b}
\begin{array}{ll}
\frac{d u^b_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)u^b_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}u^b_{j(\alpha-\gamma)}u^b_{i\gamma},
\end{array}
\end{equation}
where $u^b_{i\alpha}$ depend on time $t$ and such that for all $1\leq i\leq n$ and all $\alpha\in {\mathbb Z}^n$ we have
\begin{equation}
u^b_{i\alpha}(0)=h_{i\alpha}.
\end{equation}
This suggests the following first approach concerning a (formal) scheme for the purpose of global existence.
We compute first $v^0_{i\alpha}=u^b_{i\alpha}$ for $1\leq i\leq n$ and $\alpha\in {\mathbb Z}^n$ and then iteratively
\begin{equation}
v^k_{i\alpha}:=v^0_{i\alpha}+\sum_{p=1}^k\delta v^p_{i\alpha},
\end{equation}
where $\delta v^p_{i\alpha}:=v^p_{i\alpha}-v^{p-1}_{i\alpha}$ for all $1\leq i\leq n$ and $\alpha\in {\mathbb Z}^n$ and for $p\geq 1$ we have
\begin{equation}\label{navode200a}
\begin{array}{ll}
\frac{d v^p_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v^p_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{p}_{j(\alpha-\gamma)}v^p_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v^{p}_{j\gamma}v^{p-1}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2},
\end{array}
\end{equation}
where $v_{i\alpha}$ depend on time $t$ and such that for all $1\leq i\leq n$ and all $\alpha\in {\mathbb Z}^n$ we have
\begin{equation}
v_{i\alpha}(0)=h_{i\alpha}.
\end{equation}
Alternatively, we may start with the initial data $h_i$ and their modes $h_{i\alpha}$ as first order coefficients of the first approximating equation. In this case we have for an iteration number $p\geq 0$ the approximation at the $p$th stage via the linear equation
\begin{equation}\label{navode200lin}
\begin{array}{ll}
\frac{d v^p_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v^p_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{p-1}_{j(\alpha-\gamma)}v^{p}_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v^{p}_{j\gamma}v^{p-1}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2},
\end{array}
\end{equation}
where again for all $1\leq i\leq n$ and all $\alpha\in {\mathbb Z}^n$ we have
\begin{equation}
v_{i\alpha}(0)=h_{i\alpha}.
\end{equation}
For $p=0$ we then have $v^{p-1}_{j(\alpha-\gamma)}=v^{-1}_{j(\alpha-\gamma)}:=h_{j(\alpha-\gamma)}$.
The latter equation in (\ref{navode200lin}) still corresponds to a partial integro-differential equation. However, this scheme has the advantage that we can build an algorithm on it via a Trotter product formula for infinite matrices (because we know the data $h_{i\alpha}$ and for $p\geq 1$ the data $v^{p-1}_{i\alpha}$ from the previous iteration step).
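The Trotter product idea just mentioned can be sketched on a finite truncation: split the generator into a dissipative diagonal part (the Laplacian acting on the modes) and a coupling part, and iterate the product of the two matrix exponentials. The matrices below are small random stand-ins of our own, not the operators of the scheme; the sketch only exhibits the first-order Lie-Trotter convergence as the number of steps grows.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = np.diag(-4 * np.pi**2 * np.arange(1, 6)**2 * 0.1)  # dissipative diagonal, nu = 0.1
B = 0.5 * rng.standard_normal((5, 5))                  # mode-coupling stand-in

t = 0.2
exact = expm(t * (A + B))
errs = []
for m in (1, 2, 4, 8):
    step = expm(t * A / m) @ expm(t * B / m)
    errs.append(np.linalg.norm(np.linalg.matrix_power(step, m) - exact))

# First-order Lie-Trotter splitting: the error shrinks roughly like 1/m.
assert errs[-1] < errs[0]
```

Note the strictly negative diagonal of $A$ mirrors the role of $\nu>0$ in the text: it is the dissipativity that makes the split propagators uniformly bounded.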
\begin{rem}
The equation in (\ref{navode200lin}) corresponds to a linear integro-differential equation in classical space and differs in this respect from the approximations we considered in \cite{KB2} and \cite{KB3} and also in \cite{KHyp,K3,KNS} (although we mentioned this type of global scheme in \cite{KNS}). The analysis in all these papers was based on local equations because a priori estimates are easier at hand - as is the trick with the adjoint. In dual spaces of Fourier basis representation considered in this paper spatially global equations are easier to handle. We could also use the corresponding local equations. The corresponding system is
\begin{equation}\label{navode200loc}
\begin{array}{ll}
\frac{d v^p_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v^p_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{p-1}_{j(\alpha-\gamma)}v^p_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v^{p-1}_{j\gamma}v^{p-1}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2},
\end{array}
\end{equation}
but there is no real advantage in analyzing (\ref{navode200loc}) instead of (\ref{navode200lin}). Note that we may even 'linearize' the Burgers term, substituting the Burgers term in (\ref{navode200loc}) by the term $-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{p-1}_{j(\alpha-\gamma)}v^{p-1}_{i\gamma}$. We considered this in \cite{KB3} in classical spaces in order to avoid the use of the adjoint of fundamental solutions and work with estimates of convolutions directly. Again, there is no great advantage in doing the same in dual spaces, and it is certainly not preferable from an algorithmic point of view.
\end{rem}
Note that the 'global' term in (\ref{navode200a}) and (\ref{navode200lin}) is of the form
\begin{equation}
2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v^{p}_{j\gamma}v^{p-1}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2},
\end{equation}
and it looks more like its 'local' companions (especially the Burgers term) in this discrete Fourier-based representation than is the case for representations in classical spaces.
Formally,
for all $1\leq i\leq n$ and $\alpha\in {\mathbb Z}^n$ and for $p\geq 1$ for $\delta v^p_{i\alpha}=v^p_{i\alpha}-v^{p-1}_{i\alpha},~\alpha\in {\mathbb Z}^n$ we have
\begin{equation}\label{navode2000}
\begin{array}{ll}
\frac{d \delta v^p_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)\delta v^p_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{p-1}_{j(\alpha-\gamma)}\delta v^p_{i\gamma}\\
\\
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}\delta v^{p-1}_{j(\alpha-\gamma)} v^{p-1}_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v^{p}_{j\gamma}v^{p-1}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}\\
\\
-2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v^{p-1}_{j\gamma}v^{p-2}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2},
\end{array}
\end{equation}
where $v_{i\alpha}$ depend on time $t$ and such that for all $1\leq i\leq n$ and all $\alpha\in {\mathbb Z}^n$ we have
\begin{equation}
\delta v_{i\alpha}(0)=0.
\end{equation}
It is well-known that the viscosity parameter may be chosen arbitrarily. This fact facilitates the analysis a bit, but for the proof of global existence it is not essential. It is important that the viscosity is strictly positive ($\nu>0$). We need this for the Trotter product formula below, and without it the argument breaks down.
\begin{rem}
In order to have a certain contraction property of iterated weakly singular elliptic integrals for all modes, and in order to have a stronger diffusion damping, we
consider parameter transformations of the form
\begin{equation}
v_i(t,x)=r^{\lambda}v^*_i(r^{\nu_0}t,r^{\mu}x),~p(t,x)=r^{\delta}p^*(r^{\nu'}t,r^{\mu}x)
\end{equation}
along with $\tau=r^{\nu_0}t$ and $y=r^{\mu}x$ for some positive real number $r>0$ and some positive real parameters $\lambda,\mu,\nu_0,\nu'$. We have (using Einstein notation for spatial variables)
\begin{equation}
\begin{array}{ll}
\frac{\partial}{\partial t}v_i(t,x)=r^{\lambda+\nu_0}\frac{\partial}{\partial \tau}v^*_i(\tau,y),
~v_{i,j}(t,x)=r^{\lambda+\mu}v^*_{i,j}(\tau,y),\\
\\
v_{i,j,k}(t,x)=r^{\lambda+2\mu}\frac{\partial^2}{\partial x_j\partial x_k}v^*_i(\tau,y),~p_{,i}(t,x)=r^{\delta+\mu}p^*_{,i}(\tau,y).
\end{array}
\end{equation}
Hence
\begin{equation}
r^{\lambda+\nu_0}\frac{\partial}{\partial \tau}v^*_{i}(\tau,y)=\nu r^{\lambda+2\mu}\Delta v^*_i(\tau,y)-\sum_{j=1}^nr^{2\lambda +\mu}v^*_j(\tau,y)\frac{\partial v^*_i}{\partial x_j}(\tau,y)-r^{\delta+\mu}p^*_{,i},
\end{equation}
or
\begin{equation}
\begin{array}{ll}
\frac{\partial}{\partial \tau}v^*_{i}(\tau,y)=\nu r^{\lambda+2\mu-\lambda -\nu_0}\Delta v^*_i(\tau,y)\\
\\
-\sum_{j=1}^nr^{2\lambda +\mu-\lambda -\nu_0}v^*_j(\tau,y)\frac{\partial v^*_i}{\partial x_j}(\tau,y)-r^{\delta+\mu-\lambda -\nu_0}p^*_{,i},
\end{array}
\end{equation}
which (for example) for the parameter constellations
\begin{equation}\label{paracon}
\mu,\nu,\lambda,\delta \mbox{ with }\lambda +\mu-\nu_0=0 \mbox{ and } \delta+\mu-\lambda-\nu_0=0
\end{equation}
becomes
\begin{equation}\label{v*eq}
\begin{array}{ll}
\frac{\partial}{\partial \tau}v^*_{i}(\tau,y)=\\
\\
\nu r^{2\mu-\nu_0}\Delta v^*_i(\tau,y)-\sum_{j=1}^nr^{\lambda +\mu-\nu_0}v^*_j(\tau,y)\frac{\partial v^*_i}{\partial x_j}(\tau,y)-r^{\delta+\mu-\lambda-\nu_0}p^*_{,i}\\
\\
=\nu r^{2\mu-\nu_0}\Delta v^*_i(\tau,y)-\sum_{j=1}^nv^*_j(\tau,y)\frac{\partial v^*_i}{\partial x_j}(\tau,y)-p^*_{,i}.
\end{array}
\end{equation}
Note that we may choose (for example) $2\mu-\nu'\in {\mathbb R}_+$ (${\mathbb R}_+$ being the set of strictly positive real numbers) freely and still satisfy the conditions (\ref{paracon}). Note that we can take $\nu'=\nu_0=\nu$. Especially, we are free to assume any viscosity $\nu>0$. Note that the Leray projection term is determined via the relation $\Delta p^*=\sum_{j,m}(v^*_{j,m}v^*_{m,j})$, and for some parameter constellations it gets the same parameter coefficient as the Burgers term (cf. Appendix). Furthermore, choosing $\lambda=\delta=0$ and $\mu<\nu_0<2\mu$ with $\mu$ large, the viscosity damping coefficient becomes large compared to the parameter coefficients of the nonlinear terms, as we get the equation
\begin{equation}\label{v*eq2}
\begin{array}{ll}
\frac{\partial}{\partial \tau}v^*_{i}(\tau,y)=\\
\\
\nu r^{2\mu-\nu_0}\Delta v^*_i(\tau,y)-\sum_{j=1}^nr^{\mu-\nu_0}v^*_j(\tau,y)\frac{\partial v^*_i}{\partial x_j}(\tau,y)-r^{\mu-\nu_0}p^*_{,i}.
\end{array}
\end{equation}
Indeed for large $\mu$ we have a strong viscosity damping and small coefficients of the Burgers and the Leray projection term. Especially for large $\mu$ in this parameter constellation we have a contraction property of elliptic integrals which serve as natural upper bounds of the nonlinear growth terms. For example, in natural iteration schemes we may use the relation
\begin{equation}\label{lerayell}
\sum_{\beta\in {\mathbb Z}^n\setminus \left\lbrace 0,\alpha\right\rbrace }\frac{C}{|\alpha-\beta|^{s}}\frac{C}{\beta^r}\leq \frac{cC^2}{1+|\alpha|^{r+s-n}}
\end{equation}
in order to estimate the growth contribution of the Leray projection term at one time step, where for $r\geq n$ and $s\geq n+1$ we have $r+s-n\geq n+2$. The constant $c$ in (\ref{lerayell}) depends only on the dimension $n$, and with $\mu$ large enough we obtain a contraction property for all modes $\alpha$ different from $0$.
For the numerical and computational analysis it is useful to refine this a bit and to consider in addition certain bi-parameter transformations with respect to the viscosity constant $\nu >0$ and with respect to the diameter $l$ of the $n$-torus.
From an analytic perspective we may say that it is sufficient to prove global existence for specific parameters $\nu>0$ and $l>0$. Let us consider coordinate transformations with parameter $l>0$ first.
If $\mathbf{v}^1, p^1$ is a solution on the domain $[0,\infty)\times {\mathbb T}^n_1$, then the pair of functions $(t,x)\rightarrow \mathbf{v}^l(t,x)$ and $(t,x)\rightarrow p^l(t,x)$ on $[0,\infty)\times {\mathbb T}^n_l$, defined by
\begin{equation}
\mathbf{v}^l(t,x):=\frac{1}{l}\mathbf{v}^1\left(\frac{t}{l^2},\frac{x}{l} \right),
\end{equation}
and
\begin{equation}
p^l(t,x):=\frac{1}{l^2}p^1\left(\frac{t}{l^2},\frac{x}{l} \right)
\end{equation}
is a solution pair on the $n$-torus of size $l>0$ with initial data
\begin{equation}\label{factorl}
\mathbf{h}^l(x):=\frac{1}{l}\mathbf{h}^1(\frac{x}{l})~\mbox{on}~{\mathbb T}^n_l.
\end{equation}
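This $l$-scaling can be checked symbolically in the simplest nontrivial setting, the viscous Burgers equation in one space dimension (where the pressure term is absent). The following sketch uses a Cole-Hopf solution; the viscosity, wave number and scale $l$ are illustrative sample values:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
nu, k, l = sp.Rational(1, 2), 2, 3        # sample viscosity, wave number, torus scale (assumptions)

# Cole-Hopf: if phi solves the heat equation phi_t = nu*phi_xx, then
# v = -2*nu*phi_x/phi solves the viscous Burgers equation v_t + v*v_x = nu*v_xx.
phi = 1 + sp.Rational(1, 2) * sp.exp(-nu * k**2 * t) * sp.cos(k * x)
v = -2 * nu * sp.diff(phi, x) / phi

# parabolic rescaling as in the text: w(t,x) = (1/l) * v(t/l^2, x/l)
w = v.subs({t: t / l**2, x: x / l}) / l

# residual of the Burgers equation for the rescaled field
res = sp.diff(w, t) + w * sp.diff(w, x) - nu * sp.diff(w, x, 2)
assert sp.simplify(res) == 0              # the rescaled field solves the same equation
```

The same chain-rule computation, applied componentwise and combined with the $1/l^2$-scaling of the pressure, gives the statement for the full system.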
Hence, if we can construct solutions to the Navier-Stokes equation for $\mathbf{h}^l\in \left[C^{\infty}\left({\mathbb T}^n_l\right)\right]^n$ without further restrictions on the data $\mathbf{h}^l$, then we can construct global solutions for the problem in ${\mathbb T}^n_1$ with $\mathbf{h}=\mathbf{h}^1\in \left[C^{\infty}\left({\mathbb T}^n_1\right)\right]^n$ with factor $l$ as in (\ref{factorl}). We only need to produce a fixed number $l>0$ such that arbitrary data are allowed. Second, if $v^{\nu}_i,~1\leq i\leq n$ is a solution to the incompressible Navier-Stokes equation with parameter $\nu>0$, then via the time transformation
\begin{equation}
v_i(t,x):=(r\nu )^{-1}v^{\nu}_i((r\nu)^{-1}t,x),~p(t,x):=(r\nu)^{-2}p^{\nu}((r\nu)^{-1}t,x)
\end{equation}
(for all $t\geq 0$) we observe that $v_i$ is a solution of the incompressible Navier-Stokes equation with viscosity parameter $\frac{1}{r}$, which may be chosen freely. Concatenation of the previous transformations shows that we can indeed choose specific values for $\nu>0$ and $l>0$, and this may be useful for designing algorithms. For example, it is useful to observe that in (\ref{navode2000}) the modulus of the diagonal coefficients
\begin{equation}\label{dampterm}
\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)
\end{equation}
becomes large for $|\alpha|\neq 0$ if $\nu$ is large or $l$ is small in comparison to the other terms of the iteration in (\ref{navode2000}), but it is even more interesting that we may choose $\nu$ large and $l>0$ large on a scale such that the convection terms are small in comparison to the diagonal damping terms in (\ref{dampterm}).
\end{rem}
As we observed in other articles, for the time-local solution of the Navier-Stokes equation we may set up a scheme involving simple scalar linear parabolic equations of the form (\ref{parascalar}) below (cf. \cite{KAC, KNS, K3,KB1,KB2,KB3, KHyp}). We maintained that in classical space a global controlled scheme for an equivalent equation can be obtained if, in a time-local scheme, the approximate functions in a functional series (evaluated at arbitrary time $t$) inherit the property of being in $C^k\cap H^k$ for $k\geq 2$ together with polynomial decay of order $k$. This is remarkable since one would not expect this from standard local a priori estimates of the Gaussian. For the equivalent controlled scheme in \cite{KB3} we only needed to prove this for the higher order correction terms of the time-local scheme. There it is true for some order of decay because the representations involve convolution integrals with products of approximative value functions, where each factor is inductively of polynomial decay. The growth of the Leray projection term for the scheme in \cite{KB3} is linearly bounded on a certain time scale. In \cite{KAC, KNS, K3, KHyp} we proposed auto-controlled schemes and more complicated equivalent controlled schemes with a bounded regular control function which controls the growth of the controlled solution function such that solutions turn out to be uniformly bounded for all time.
Actually, the simple schemes may be reconsidered in terms of the Trotter product formula representations of solutions to scalar parabolic equations of the form
\begin{equation}\label{parascalar}
\frac{\partial u}{\partial t}-\nu \Delta u+Wu=g,
\end{equation}
with initial data $u(0,x)=f\in \bigcap_{s\in {\mathbb R}}H^s$ and some dynamically generated source terms $g$, and where $W=\sum_{i=1}^nw_i(x)\frac{\partial}{\partial x_i}$ is a vector field with bounded and uniformly Lipschitz continuous coefficients. At each iteration step the $w_i$ are then replaced by approximative value function components.
Local application of the Trotter product formula leads to representations of the form
\begin{equation}\label{trottappl}
\exp\left(t\left(\nu \Delta +W \right) \right)f=\lim_{k\uparrow \infty}\left(\exp\left(\frac{t}{k}W\right)\exp\left(\frac{t}{k}\nu\Delta\right)\right)^kf.
\end{equation}
Then in a second step having obtained time-local representations of solutions to the incompressible Navier-Stokes equation one can use this reduction from a system to a scalar level and apply the Trotter product formula on a global time scale. Time dilatation transformations as in \cite{KAC} may be used to prove the existence of upper bounds. From this point of view the local contraction results considered in \cite{KB3} or in \cite{KB1, KB2, KHyp} (with the use of adjoint equations) are essential.
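In finite dimensions the Trotter approximation can be checked directly. In the following sketch, two arbitrary small matrices stand in for the (projected) operators $W$ and $\nu\Delta$; all sizes and scalings are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = 0.15 * rng.standard_normal((4, 4))   # stands in for the (projected) diffusion part
B = 0.15 * rng.standard_normal((4, 4))   # stands in for the (projected) convection part
t = 1.0

exact = expm(t * (A + B))

def trotter(k):
    # k-fold product (exp(t/k * B) exp(t/k * A))^k as in the Trotter formula
    step = expm(t / k * B) @ expm(t / k * A)
    return np.linalg.matrix_power(step, k)

err = lambda k: np.linalg.norm(trotter(k) - exact)
# the splitting error is first order in 1/k
assert err(200) < err(10)
assert err(200) < 1e-2
```
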
In dual spaces the use of a Trotter product formula seems to be not only useful but mandatory in two respects. First, consider the infinite matrix
\begin{equation}\label{damptermmatrix}
D:=\left(d_{\alpha\beta} \right)_{\alpha,\beta\in {\mathbb Z}^n}
:=\left( \delta_{\alpha\beta}\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)\right)_{\alpha,\beta\in {\mathbb Z}^n}
\end{equation}
with the infinite Kronecker delta function $\delta_{\alpha\beta}$, i.e.,
\begin{equation}
\delta_{\alpha\beta}=\left\lbrace \begin{array}{ll}
1\mbox{ if }\alpha=\beta\\
\\
0 \mbox{ if }\alpha\neq \beta.
\end{array}\right.
\end{equation}
If we measure regularity with respect to the degree of decay of the entries as the order of the modes increases, then this matrix lives in a space of rather weak regularity. Worse than this, iterations of the matrix, i.e., matrices of the form
\begin{equation}
\begin{array}{ll}
D^2:=DD=:\left( d^{(2)}_{\alpha\beta}\right)_{\alpha,\beta\in {\mathbb Z}^n}:=\left(\sum_{\gamma\in {\mathbb Z}^n}d_{\alpha\gamma}d_{\gamma\beta} \right)_{\alpha,\beta\in {\mathbb Z}^n}\\
\\
D^m:=DD^{m-1}=:\left( d^{(m)}_{\alpha\beta}\right)_{\alpha,\beta\in {\mathbb Z}^n}:=\left(\sum_{\gamma\in {\mathbb Z}^n}d_{\alpha\gamma}d^{(m-1)}_{\gamma\beta} \right)_{\alpha,\beta\in {\mathbb Z}^n}
\end{array}
\end{equation}
live in matrix spaces of lower and lower regularity as $m$ increases (if we measure regularity in relation to decay with respect to the order of the modes). However, since the diagonal entries of $D$ are nonpositive, we may make sense of the matrix
\begin{equation}\label{expD}
\exp(D)=\left(\delta_{\alpha\beta}\exp\left(d_{\alpha\alpha}\right)\right)_{\alpha,\beta\in {\mathbb Z}^n}=\left(\delta_{\alpha\beta}\exp\left(\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)\right)\right)_{\alpha,\beta\in {\mathbb Z}^n},
\end{equation}
and this matrix makes perfect sense in terms of the type of regularity mentioned, i.e., the type of regularity expressed by dual Sobolev spaces which measure the order of polynomial decay of the modes. Since we consider infinite ODEs involving a Leray projection term, we shall have correlations between the components $1\leq i\leq n$ even at each approximation step. Hence matrices which correspond to linear approximations of the Navier-Stokes equations will involve big diagonal matrices of the form
\begin{equation}
\left(\delta_{ij}\exp(D)\right)_{1\leq i,j\leq n},
\end{equation}
where $\delta_{ij}$ denotes the usual Kronecker $\delta$ for $1\leq i,j\leq n$ and $\exp(D)$ is as in (\ref{expD}) above. However, the idea that such a dissipative matrix lives in a regular ($\equiv$'polynomially decaying modes as the order of modes increases') infinite matrix space is the same. So much for the diffusion term. Note that we have some linear growth with respect to the modes of a matrix related to the convection term, and there may be some concern that iteration of this matrix leads to divergences. However, the initial data at each time step have sufficient decay such that they can serve as analytic vectors, and weakly singular elliptic integrals will show that these decay properties are preserved by the scheme. If the data at each substep are sufficiently regular (where 'sufficiency' may be determined by upper bounds obtained via weakly singular elliptic integrals, cf. below), then on a time-local level algorithms via the Trotter product formula approximate limits of alternating local solutions on $[t_0,t_1]\times h^s\left({\mathbb Z}^n\right)$ of subproblems of the form
\begin{equation}\label{navode200locsub1}
\left\lbrace \begin{array}{ll}
\frac{d v^p_{0i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)v^p_{0i\alpha},\\
\\
v^p_{0i\alpha}(t_0)=v^p_{i\alpha}(t_0),~\alpha\in {\mathbb Z}^n
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{navode200locsub2}
\left\lbrace \begin{array}{ll}
\frac{d v^p_{1i\alpha}}{dt}=
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{p-1}_{1j(\alpha-\gamma)}v^p_{1i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)v^{p}_{1j\gamma}v^{p-1}_{1k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2},\\
\\
v^p_{1i\alpha}(t_0)=v_{i\alpha}(t_0),~\alpha\in {\mathbb Z}^n,
\end{array}\right.
\end{equation}
and where for $p=1$ we may have $v^{p-1}_{i\alpha}=v^0_{i\alpha}=v_{i\alpha}(t_0)$. Here, for $\alpha\in {\mathbb Z}^n$ and $1\leq i\leq n$ the time-dependent functions $v_{i\alpha}$ are the modes of the vector $v_{i}$, which is assumed to be known at time $t_0$ (by a previous time step or by the initial data at time $t_0=0$).
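The first subproblem (\ref{navode200locsub1}) decouples into independently damped modes, so its solution is explicit, and polynomial decay of the modes (i.e., $h^s$-regularity) is preserved. A one-dimensional numerical sketch, keeping the document's damping coefficient $4\pi\alpha^2/l^2$ and using illustrative values for $\nu$, $l$, $t$ and $s$:

```python
import numpy as np

nu, l, t, s, N = 0.5, 1.0, 0.1, 4.0, 64            # illustrative parameters (assumptions)
alpha = np.arange(-N, N + 1)                       # truncated set of modes
v0 = 1.0 / (1.0 + np.abs(alpha)) ** s              # data with polynomial decay of order s
damp = np.exp(-4 * np.pi * nu * alpha**2 * t / l**2)
v_t = damp * v0                                    # exact solution of the diagonal subproblem

# discrete Sobolev norm h^s: weighted l^2 norm of the modes
h_s = lambda v: np.sqrt(np.sum((1.0 + np.abs(alpha)) ** (2 * s) * np.abs(v) ** 2))
assert h_s(v_t) <= h_s(v0)                         # regularity is preserved by the damping
assert abs(v_t[N] - v0[N]) < 1e-12                 # the zero mode (alpha = 0) is untouched
```
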
If we can ensure a certain regularity of the data $v^p_{i\alpha}(t_0)$, say $ v_{i}(t_0) \in h^s\left({\mathbb Z}^n\right) $ for $s>n+2$, then we shall observe that $v^p_{i}(t)\in h^s\left({\mathbb Z}^n\right) $ for some $s>n+2$ and time $t\in [t_0,t_1]$. The inheritance of this regularity then leads to a sequence of solutions $v^p_{i}\in h^s\left({\mathbb Z}^n\right)$ with $s>n+2$, where the limit solves the local incompressible Navier-Stokes equation. Writing (\ref{navode200locsub2}) in the form
\begin{equation}\label{navode200locsub2mat}
\left\lbrace \begin{array}{ll}
\frac{d \mathbf{v}^p_{1}}{dt}=B\mathbf{v}^p_1\\
\\
\mathbf{v}^p_{1}(t_0)=\mathbf{v}(t_0),
\end{array}\right.
\end{equation}
where $\mathbf{v}^p_1=\left(v^p_{11},\cdots, v^p_{1n} \right)^T$ and $\mathbf{v}(t_0)=(v_1(t_0),\cdots,v_n(t_0))^T$ are lists of infinite vectors and $B$ is an $n{\mathbb Z}^n\times n{\mathbb Z}^n$ matrix such that the system (\ref{navode200locsub2mat}) is equivalent to the system in (\ref{navode200locsub2}). Note that the matrix $B$ depends only on data which are known from the previous iteration step $p-1$.
For this subproblem the local solution can be represented in the form
\begin{equation}\label{dysonobs}
\begin{array}{ll}
\mathbf{v}^p_1=\mathbf{v}(t_0)+\\
\\
\sum_{m=1}^{\infty}\frac{1}{m!}\int_{t_0}^tds_1\int_{t_0}^tds_2\cdots \int_{t_0}^tds_m T_m\left(B(s_1)B(s_2)\cdot \cdots \cdot B(s_m) \right)\mathbf{v}(t_0),
\end{array}
\end{equation}
where for $m=2$ we define
\begin{equation}
T_2\left(B(t_1)B(t_2)\right)=\left\lbrace \begin{array}{ll}
B(t_1)B(t_2)~~\mbox{ if }~t_1\geq t_2\\
\\
B(t_2)B(t_1)~~\mbox{ if }~t_2>t_1,
\end{array}\right.
\end{equation}
and for $m>2$ the time-order operator $T_m$ may be defined recursively. Let ${\cal T}_{m+1}:=\left\lbrace t_1,\cdots,t_{m+1}~|~t_i\geq 0~\mbox{for}~1\leq i\leq m+1\right\rbrace$.
Having defined $T_m$ for some $m\geq 2$, for $m+1\geq 3$ we first define
\begin{equation}
T_{m+1,\leq t_{m+1}}\left(B(t_1)B(t_2)\cdot \cdots \cdot B(t_{m+1})\right)=T_j\left(B(t_{k_1})\cdots B(t_{k_j}) \right),
\end{equation}
where $\left\lbrace t_{k_1},\cdots ,t_{k_j}\right\rbrace:=\left\lbrace t_i \in {\cal T}_{m+1}|t_i\leq t_{m+1}~\mbox{and}~i\neq m+1\right\rbrace $, and
\begin{equation}
T_{m+1,>t_{m+1}}\left(B(t_1)B(t_2)\cdot \cdots \cdot B(t_{m+1})\right)=T_i\left(B(t_{l_1})\cdots B(t_{l_i}) \right),
\end{equation}
where $\left\lbrace t_{l_1},\cdots ,t_{l_i}\right\rbrace:=\left\lbrace t_p \in {\cal T}_{m+1}|t_p> t_{m+1}\right\rbrace $. Then
\begin{equation}
\begin{array}{ll}
T_{m+1}\left(B(t_1)B(t_2)\cdot \cdots\cdot B(t_{m+1}) \right)\\
\\
=T_{m+1,> t_{m+1}}\left(B(t_1)B(t_2)\cdot \cdots \cdot B(t_{m+1})\right)\times\\
\\
\times B(t_{m+1}) T_{m+1,\leq t_{m+1}}\left(B(t_1)B(t_2)\cdot \cdots \cdot B(t_{m+1})\right),
\end{array}
\end{equation}
consistent with the convention for $T_2$ that factors with larger time arguments stand to the left.
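The content of the time ordering can be illustrated in finite dimensions: an ordered product of short-time propagators, with later times acting from the left as in the convention for $T_2$, approximates the solution of a linear system with time-dependent coefficient matrix. The $2\times 2$ matrix below is an arbitrary stand-in for $B(t)$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# toy time-dependent generator (assumption: a 2x2 stand-in for the matrix B(t))
def B(t):
    return np.array([[0.0, 1.0 + 0.5 * np.sin(t)],
                     [-1.0, -0.2 * t]])

v0 = np.array([1.0, 0.0])
T = 1.0

# reference solution of dv/dt = B(t) v
ref = solve_ivp(lambda t, v: B(t) @ v, (0.0, T), v0,
                rtol=1e-10, atol=1e-12).y[:, -1]

# time-ordered product of short-time propagators
k = 2000
dt = T / k
P = np.eye(2)
for m in range(k):
    P = expm(B((m + 0.5) * dt) * dt) @ P   # left multiplication realizes the time ordering
approx = P @ v0
assert np.linalg.norm(approx - ref) < 1e-4
```
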
We shall observe that for data $v_i(t_0)\in h^s\left({\mathbb Z}^n\right)$ for $s>n+2$ and $1\leq i\leq n$ a natural iteration scheme preserves regularity in the sense that $v^p_{1i}\in h^s\left({\mathbb Z}^n\right)$ for all $1\leq i\leq n$ and $p\geq 1$. Next we define some more basic parts of a natural time-local solution scheme.
Consider the first step in our scheme for the vectors $\left( v^p_{i\alpha}\right)_{\alpha\in {\mathbb Z}^n}$. For $p=0$ the modes $\left( v^0_{i\alpha}\right)_{\alpha\in {\mathbb Z}^n}$ are equal to the modes of the (auto-)controlled scheme we describe below and the corresponding equation can be written in the form
\begin{equation}\label{navode200linstage0}
\begin{array}{ll}
\frac{d v^0_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)v^0_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}h_{j(\alpha-\gamma)}v^0_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)v^{0}_{j\gamma}h_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2}.
\end{array}
\end{equation}
If we consider the $n$-tuple $\mathbf{v}^F=\left( \mathbf{v}^F_1,\cdots,\mathbf{v}^F_n\right)^T$ as an infinite vector with $\mathbf{v}^F_i:=\left(v_{i\alpha}\right)_{\alpha\in {\mathbb Z}^n}$, then with the usual identifications
the equation (\ref{navode200linstage0}) is equivalent to an infinite linear ODE
\begin{equation}
\frac{d \mathbf{v}^{0,F}}{dt}=A_0\mathbf{v}^{0,F},
\end{equation}
where the matrix $A_0$ is implicitly defined by (\ref{navode200linstage0}) and will be given explicitly in our more detailed description below. Together with the initial data
\begin{equation}
\mathbf{v}^{0,F}(0)=\mathbf{h}^F(0)
\end{equation}
this is an equivalent formulation of the equation for the first step of our scheme. We shall define a dissipative diagonal matrix $D_0$ and a matrix $B_0$ related to the convection and Leray projection terms such that $A_0=D_0+B_0$, and prove a Trotter product formula which allows us to make sense of the formal solution
\begin{equation}
\begin{array}{ll}
\mathbf{v}^{0,F}(t)=\exp(A_0t)\mathbf{h}^F\\
\\
:=\lim_{l\uparrow \infty}\lim_{k\uparrow \infty}\left( \exp\left(P_{M^l}D_0\frac{t}{k}\right)\exp\left(P_{M^l}B_0\frac{t}{k} \right)\right)^k\mathbf{h}^F.
\end{array}
\end{equation}
Here $P_{M^l}$ denotes a projection to the finite modes of order less than or equal to $l>0$. Here and in the following we may consider some order of the multiindices and assume that this order is preserved by the projection operators.
Note that at this first stage the modes $h_{i\alpha}$ are not time-dependent. Hence $A_0$ is defined as a matrix which is independent of time, and we have no need of a Dyson formalism at this stage.
For the higher stages of approximation we have to deal with time dependence of the related infinite matrices $A_p$ which define the infinite ODEs at iteration step $p\geq 0$ for the modes $v^{p}_{i\alpha}$ and $v^{r,p}_{i\alpha}$ in the presence of a control function $r$.
The formula (\ref{trottappl}) makes clear why strong contraction estimates of the time-local expressions
\begin{equation}\label{timeloc}
\exp\left(\frac{t}{k}W\right)\exp\left(\frac{t}{k}\nu\Delta\right)f
\end{equation}
are important. As we said, we pointed this out in \cite{KB2} and \cite{KB3} from a different point of view. In dual spaces we may approximate such expressions via matrix equations of finite modes (projections of expressions of the form (\ref{timeloc}) to approximating equations of finite modes).
For the finite mode approximation of the first approximation sequence $v^0_{i\alpha},~\alpha\in {\mathbb Z}^n$ the first order coefficients are time independent, and we may use the Baker-Campbell-Hausdorff formula for finite matrices $A$ and $B$ of the form
\begin{equation}\label{BCH}
\exp(A)\exp(B)=\exp(C),
\end{equation}
where
\begin{equation}\label{Ceq}
\begin{array}{ll}
C=A+B+\frac{1}{2}\left[A,B\right]+\frac{1}{12}\left[A,\left[A,B\right]\right]+\frac{1}{12}\left[\left[A,B\right],B\right]\\
\\
+\mbox{ higher order terms}.
\end{array}
\end{equation}
Here $\left[.,. \right]$ denotes the Lie bracket and the expression 'higher order terms' refers to all terms with multiple commutators of $A$ and $B$. We shall see that for dissipative operators, as in the case of the incompressible Navier-Stokes equation and its linear approximations, we can prove extensions of the formula in (\ref{BCH}), or we can apply the formula in (\ref{BCH}) considering limits to infinite matrices applied to infinite vectors with polynomial decay of some order, where we restrict our investigation to the special cases which fit the analysis of some infinite linear ODEs approximating the incompressible Navier-Stokes equation written in dual space. Note that for higher order approximations $v^p_{i\alpha},~\alpha\in {\mathbb Z}^n,~1\leq i\leq n$ with $p\geq 1$ we have time dependence of the coefficients, and this means that we have to apply a time-order operator as in Dyson's formalism in order to solve the related linear approximating equations formally. Using our observations on well-defined infinite matrix operations for function spaces with appropriate polynomial decay, these formal Dyson formalism solutions can be justified rigorously. Note that (\ref{Ceq}) is closely connected to the H\"{o}rmander condition, and this was one of the indicators which led to the expectation in \cite{K3} that global smooth existence holds if the H\"{o}rmander condition is satisfied. We considered this in \cite{KHyp}.
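For small matrices the displayed truncation of the series can be tested numerically: the matrix logarithm of $\exp(A)\exp(B)$ agrees with $C$ up to higher-order commutator terms. The matrix size and scaling below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
A = 0.01 * rng.standard_normal((3, 3))    # small norms keep the series rapidly convergent
B = 0.01 * rng.standard_normal((3, 3))
br = lambda X, Y: X @ Y - Y @ X           # Lie bracket [X, Y]

# truncation of the Baker-Campbell-Hausdorff series up to the displayed terms
C = A + B + br(A, B) / 2 + br(A, br(A, B)) / 12 + br(br(A, B), B) / 12
err = np.linalg.norm(logm(expm(A) @ expm(B)) - C)
assert err < 1e-6                         # the remainder consists of higher-order commutators
```
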
In this paper we shall see that we can simplify the schemes formulated in classical spaces in our formulation on dual spaces in some respects. The first simplification is that we may define an iteration scheme on a global time scale, where the growth of the scheme may be estimated via an autocontrolled subscheme, i.e., a scheme where damping potential terms are introduced via time dilatation. The second simplification is that the estimates in dual spaces become estimates of discrete infinite sums which can be done on a very elementary level. Furthermore, it is useful to have a control function which ensures that the scheme for the controlled equation is a scheme for non-zero modes (ensuring that the damping factors associated with strictly positive viscosity or dissipative effects are active for all modes of the controlled scheme). For numerical and analytical purposes we may choose specific $\nu$ and $l$ and define a controlled scheme for a controlled incompressible Navier-Stokes equation such that the diagonal matrix elements corresponding to the Laplacian terms become dominant and such that there exists a global iteration in an appropriate function space. For analytical purposes it is useful to estimate the growth via comparison with time-dilated systems iteratively and locally in time. The control function is a scalar univariate function and is much simpler than the control functions considered in \cite{KNS} and \cite{K3}. Indeed, the control function in the present paper will only control the zero modes $v_{i0}$ of the value function.
Next we define a controlled scheme and an extended controlled scheme with an autocontrol. We may start with one of the two possibilities mentioned above. We may start with the solution scheme for the multivariate Burgers equation, which is given in dual representation by the infinite ODE
\begin{equation}\label{navode200**}
\begin{array}{ll}
\frac{d u_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)u_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}u_{j(\alpha-\gamma)}u_{i\gamma},
\end{array}
\end{equation}
where the modes $u_{i\alpha}$ depend on time $t$ and such that for all $1\leq i\leq n$ and all $\alpha\in {\mathbb Z}^n$ we have
\begin{equation}
u_{i\alpha}(0)=h_{i\alpha}.
\end{equation}
This is a slight variation of the linearized scheme mentioned above. Indeed, for algorithmic purposes we may start with a linearization of (\ref{navode200**}) and call this $u_{i\alpha}$ as well; it does not really matter for analytical purposes such as existence and regularity. Anyway, we may define $v^{0}_{i\alpha}=u_{i\alpha}$ for $1\leq i\leq n$ and $\alpha\in {\mathbb Z}^n$ and then iteratively
\begin{equation}
v^{k}_{i\alpha}:=v^{0}_{i\alpha}+\sum_{p=1}^k\delta v^{p}_{i\alpha},
\end{equation}
where for $p\geq 1$ we define $\delta v^{p}_{i\alpha}:=v^{p}_{i\alpha}-v^{(p-1)}_{i\alpha}$ for all $1\leq i\leq n$ and $\alpha\in {\mathbb Z}^n$ such that
\begin{equation}\label{navode20001}
\begin{array}{ll}
\frac{d \delta v^p_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)\delta v^p_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n\setminus \left\lbrace \alpha\right\rbrace }\frac{2\pi i \gamma_j}{l}v^{p-1}_{j(\alpha-\gamma)}\delta v^p_{i\gamma}\\
\\
-\sum_{j=1}^n\frac{2\pi i \alpha_j}{l}v^{p-1}_{j0}\delta v^p_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n\setminus \left\lbrace \alpha\right\rbrace}\frac{2\pi i \gamma_j}{l}\delta v^{p-1}_{j(\alpha-\gamma)} v^{p-1}_{i\gamma}\\
\\
-\sum_{j=1}^n\frac{2\pi i \alpha_j}{l}\delta v^{p-1}_{j0} v^{p-1}_{i\alpha}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k) v^{p-1}_{k(\alpha-\gamma)}\delta v^{p}_{j\gamma}}{\sum_{i=1}^n4\pi\alpha_i^2}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)\delta v^{p-1}_{k(\alpha-\gamma)} v^{p-1}_{j\gamma}}{\sum_{i=1}^n4\pi\alpha_i^2}
\end{array}
\end{equation}
(where for $p=0$ and $p=-1$ we define $\delta v^{p}_{i\alpha}=0$). Note that we extracted the zero modes from the sums in (\ref{navode20001}). The reason is that we want to extend this scheme to a controlled scheme which is equivalent but has no zero modes. The present local scheme suggests this because the Leray projection terms do not contain zero modes. Note that for all $1\leq i\leq n$ and all $\alpha\in {\mathbb Z}^n$ we have
\begin{equation}
\delta v^{p}_{i\alpha}(0)=0.
\end{equation}
Global smooth existence at stage $p$ of the construction means that the $v^p_{i\alpha}$ are defined for all times $t\in{\mathbb R}_+=\left\lbrace t\in {\mathbb R}|t\geq 0\right\rbrace$, and such that polynomial decay of the modes is preserved throughout time.
Next we introduce a simplified version of the control function idea which was introduced in \cite{KNS} and \cite{K3}, and in \cite{KB3} in a simplified but still complicated form.
Here we introduce a control function for the zero modes. We define
\begin{equation}\label{controlstart}
v^{r,p}_{i\alpha}(t)=v^{p}_{i\alpha }(t)+\delta_{0\alpha}r^p_0(t),
\end{equation}
where
\begin{equation}
r^p_0:[0,\infty)\rightarrow {\mathbb R}
\end{equation}
is defined by
\begin{equation}
r^p_0(t)=-v^{p}_{i0 }(t).
\end{equation}
Then we have to show that $r_0:[0,\infty)\rightarrow {\mathbb R}$ is well-defined (especially bounded), i.e., we have to show that there is a limit
\begin{equation}
r_0(t):=\lim_{p\uparrow \infty} r^p_0(t)=r^0_0(t)+\sum_{p=1}^{\infty}\delta r^p_0(t)
\end{equation}
along with $\delta r^p_0(t)=r^p_0(t)-r^{p-1}_0(t)$.
This leads to the following scheme.
We start with the solution for the multivariate Burgers equation, or a linearized version of the scheme with first order coefficients related to the initial data. Then we annihilate the zero modes, i.e. we define
\begin{equation}
v^{r,0}_{i\alpha}=v^0_{i\alpha}=u_{i\alpha}
\end{equation}
for $\alpha\neq 0$ and
\begin{equation}
v^{r,0}_{i0}=v^{0}_{i0}+r^0_0=0,
\end{equation}
where for all $t\geq 0$
\begin{equation}
r^0_0(t)=-u_{i0}(t).
\end{equation}
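A minimal numerical sketch of this zero-mode annihilation on a truncated mode vector (sizes and data are illustrative assumptions): adding the spatially constant control changes only the zero mode, so all non-zero modes, and hence all spatial derivatives of the field, are unchanged.

```python
import numpy as np

N = 16
rng = np.random.default_rng(2)
idx = np.arange(-N, N + 1)
# truncated mode vector with polynomial decay (illustrative data)
u_hat = (rng.standard_normal(2 * N + 1)
         + 1j * rng.standard_normal(2 * N + 1)) / (1 + np.abs(idx)) ** 3

# control for the zero mode: r0 = -u_0, so the controlled modes have zero mean
r0 = -u_hat[N]                 # index N corresponds to alpha = 0
v_hat = u_hat.copy()
v_hat[N] += r0
assert v_hat[N] == 0           # the zero mode is annihilated exactly

# the controlled field differs from the original only by the spatial constant r0,
# hence all non-zero modes are unchanged
assert np.allclose(np.delete(v_hat, N), np.delete(u_hat, N))
```
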
For $p\geq 1$ and for $1\leq i\leq n$ and $\alpha\in {\mathbb Z}^n\setminus \left\lbrace 0 \right\rbrace$ we define
\begin{equation}
v^{r,k}_{i\alpha}:=v^{r,0}_{i\alpha}+\sum_{p=1}^k\delta v^{r,p}_{i\alpha},
\end{equation}
where $\delta v^{r,p}_{i\alpha}:=\delta v^{p}_{i\alpha}+r^p_{i\alpha}$
for all $1\leq i\leq n$ and $\alpha\in {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace $, and
\begin{equation}\label{navode20001r}
\begin{array}{ll}
\frac{d \delta v^{r,p}_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)\delta v^{r,p}_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n\setminus \left\lbrace \alpha\right\rbrace }\frac{2\pi i \gamma_j}{l}v^{r,p-1}_{j(\alpha-\gamma)}\delta v^{r,p}_{i\gamma}\\
\\
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n\setminus \left\lbrace \alpha\right\rbrace}\frac{2\pi i \gamma_j}{l}\delta v^{r,p-1}_{j(\alpha-\gamma)} v^{r,p-1}_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)v^{r,p-1}_{j\gamma}\delta v^{r,p}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)\delta v^{r,p-1}_{j\gamma}v^{r,p-1}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2}.
\end{array}
\end{equation}
Furthermore, for all $1\leq i\leq n$ and $\alpha=0$ we shall ensure that
\begin{equation}
\delta v^{r,p}_{i0}(0)=0.
\end{equation}
Note that in the equation (\ref{navode20001r}) the terms
\begin{equation}
\begin{array}{ll}
-\sum_{j=1}^n\frac{2\pi i \alpha_j}{l}v^{p-1}_{j0}\delta v^p_{i\alpha}-\sum_{j=1}^n\frac{2\pi i \alpha_j}{l}\delta v^{p-1}_{j0} v^{p-1}_{i\alpha}
\end{array}
\end{equation}
on the right side of (\ref{navode20001}) are cancelled. This is because we define the control function $r_0$ such that the zero modes become zero. This is done as follows.
First we note that for $p=0$ and $p=-1$ we may define $\delta v^{r,p}_{i\alpha}=0$ for all $\alpha \in {\mathbb Z}^n$.
For $\alpha=0$ we define first the increment $\delta v^{*,r,p}_{i0}$ for $p\geq 1$ via the equation
\begin{equation}\label{navode200011}
\begin{array}{ll}
\frac{d \delta v^{*,r,p}_{i0}}{dt}=
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace }\frac{2\pi i \gamma_j}{l}v^{*,r,p-1}_{j(-\gamma)}\delta v^{*,r,p}_{i\gamma}\\
\\
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace}\frac{2\pi i \gamma_j}{l}\delta v^{*,r,p-1}_{j(-\gamma)} v^{*,r,p-1}_{i\gamma},
\end{array}
\end{equation}
Then we define
\begin{equation}\label{controlend}
\delta r^p_0(t)=-\delta v^{*,r,p}_{i0},
\end{equation}
closing the recursion. Note that this ensures
\begin{equation}
\delta v^{r,p}_{i0}=0
\end{equation}
at all stages $p\geq 1$ for the zero modes (provided that we can ensure that $r^p_0$ is globally well-defined).
The introduction of the control function for the zero modes makes the system 'autonomous' for the modes $\alpha\in {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace $. This means that all modes have damping factors (involving $\nu>0$), i.e., we have the stabilizing diagonal factor terms
\begin{equation}
\delta_{\alpha\beta}\nu\left(-\sum_{j=1}^n\frac{4\pi \alpha_j^2}{l^2} \right),
\end{equation}
which contribute for
\begin{equation}
\sum_{j=1}^n\alpha_j^2\neq 0,~\mbox{i.e.,}~\alpha_j\neq 0~\mbox{for at least one}~1\leq j\leq n.
\end{equation}
We observed that the parameter $\nu>0$ may be assumed to be large without loss of generality. This alone may indicate that the matter of global smooth existence should be detached from the subject of turbulence, as simulations indicate turbulent phenomena for high Reynolds numbers.
Solutions of each approximation step of the scheme can be represented in terms of fundamental solutions of scalar equations, where each iteration step of higher order involves fundamental solutions of linear scalar parabolic equations. This is an idea which we considered earlier in \cite{KB1}, \cite{KB2}, \cite{K3}, and \cite{KNS}. In this paper we consider 'fundamental solutions' in a dual space of Fourier modes. They may be called 'generalized $\theta$-functions', but in general they live only in a distributional space of negative Sobolev norm. However, we shall see that for certain initial data we can make sense of the related analytic vectors, and show that their spatial convergence radius is global (holds for the full size of the $n$-torus).
In this paper we observe that for all data $\mathbf{h}=\left(h_1,\cdots ,h_n \right)$ with $h_i\in C^{\infty}$ and all positive real numbers $\nu >0$ the global extended iteration scheme converges globally to a solution of an infinite ODE system equivalent to a controlled Navier-Stokes equation in a strong norm, i.e., for all $t\geq 0$ the sequences $v^p_{i\alpha},~\alpha\in {\mathbb Z}^n$ converge in the dual Sobolev space $h^s\left( {\mathbb Z}^n\right)$ for $s\geq n+2$, where the choice $n+2$ corresponds naturally to certain requirements of infinite matrix operations. Clearly and a fortiori, the sequences converge also in $h^{s'}$ for $s'\leq s$ (as we noted in a former version of this paper). Moreover, the control function is a globally well-defined univariate bounded differentiable function such that the controlled incompressible Navier-Stokes equation is equivalent to the usual incompressible Navier-Stokes equation system.
More precisely, we have
\begin{thm}\label{mainthm1}
Given real numbers $l>0$ and $\nu>0$, for all $1\leq i\leq n$, data
$\mathbf{h}\in \left[C^{\infty}\left({\mathbb T}^n_l\right)\right]^n$, and all $s\in {\mathbb R}$
we have
\begin{equation}
\left( v^{r}_{i\alpha}(t)\right)_{\alpha\in {\mathbb Z}^n}=\lim_{m\uparrow \infty}\left( v^{r,m}_{i\alpha}(t)\right)_{\alpha\in {\mathbb Z}^n}\in h^s_l\left({\mathbb Z}^n\right)
\end{equation}
for the scheme described in the introduction. Moreover, $t\rightarrow \left( v^{r}_{i\alpha}(t)\right)_{\alpha\in {\mathbb Z}^n}$ satisfies the infinite ODE system in (\ref{navode20001}).
This implies that $v_{i\alpha}(t)=v^r_{i\alpha}(t)-\delta_{0\alpha}r_i(t)$ satisfies the infinite ODE-system corresponding to the incompressible Navier-Stokes equation, where $\delta_{0\alpha}=0$ if $\alpha\neq 0$ and $\delta_{0\alpha}=1$ if $\alpha=0$, i.e., the function
\begin{equation}\label{vi}
v_j:=\sum_{\alpha\in {\mathbb Z}^n}v_{j\alpha}\exp{(2\pi i\alpha x)},~1\leq j\leq n,
\end{equation}
is a global classical solution of the Navier-Stokes equation system (\ref{nav}) below. Note that $v_{j\alpha},~\alpha\in {\mathbb Z}^n$ are functions of time, as is $v_j$ in (\ref{vi}). Moreover, the modes $v_{j\alpha}$ in (\ref{vi}) are real, i.e., the solution in (\ref{vi}) has a representation
\begin{equation}
v_j:=\sum_{\alpha\in {\mathbb N}^n}\left( v_{sj\alpha}\sin{(2\pi\alpha x)}+v_{cj\alpha}\cos{(2\pi\alpha x)}\right) ,~1\leq j\leq n,
\end{equation}
where $v_{sj\alpha}(t)\in {\mathbb R}$ and $v_{cj\alpha}(t)\in {\mathbb R}$ for all $t\geq 0$. Furthermore, the solution scheme implies that for regular data $h_i\in H^s$ with $s>n+2+r$ for all $1\leq i\leq n$ we have global regular solutions $v_i\in H^s,~1\leq i\leq n$, of the incompressible Navier-Stokes equation with the same $s>n+2+r$.
\end{thm}
Note that in the latter statement this degree of regularity is optimal if we consider the data as part of the solution.
Furthermore, we say that $\mathbf{v}=(v_1,\cdots ,v_n)^T$ is a global classical solution of the incompressible Navier-Stokes equation system
\begin{equation}\label{nav}
\left\lbrace \begin{array}{ll}
\frac{\partial\mathbf{v}}{\partial t}-\nu \Delta \mathbf{v}+ (\mathbf{v} \cdot \nabla) \mathbf{v} = - \nabla p~~~~t\geq 0,\\
\\
\nabla \cdot \mathbf{v} = 0,~~~~t\geq 0,\\
\\
\mathbf{v}(0,x)=\mathbf{h}(x),
\end{array}\right.
\end{equation}
on some domain $\Omega$ (which is the $n$-torus ${\mathbb T}^n$ in this paper) if $\mathbf{v}$ solves the equivalent Navier-Stokes equation system in its Leray projection form,
where $p$ is eliminated by the Poisson equation
\begin{equation}\label{press}
-\Delta p=\sum_{j,k=1}^nv_{j,k}v_{k,j}.
\end{equation}
Note that $v_{j,k}$ denotes the first partial derivative of $v_j$ with respect to the component $x_k$ etc. (Einstein notation). This means that we construct a solution to an equation of the form
\begin{equation}\label{navleraya}
\left\lbrace \begin{array}{ll}
\frac{\partial\mathbf{v}}{\partial t}-\nu \Delta \mathbf{v}+ (\mathbf{v} \cdot \nabla) \mathbf{v} =\sum_{j,k=1}^n\int \nabla K_{{\mathbb T}^n}(x-y)v_{j,k}v_{k,j}(t,y)dy,\\
\\
\mathbf{v}(0,x)=\mathbf{h}(x),
\end{array}\right.
\end{equation}
along with (\ref{press}). Here $K_{{\mathbb T}^n}$ is a kernel on the torus which can be determined via Fourier transformation.
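As a small illustration (a numerical sketch in one space dimension; the sample right-hand side is an arbitrary choice, while the paper's actual setting is the $n$-torus), the Fourier inversion behind the kernel $K_{{\mathbb T}^n}$ amounts to dividing the nonzero modes of the right-hand side of (\ref{press}) by $|2\pi\alpha|^2$, fixing the zero mode of $p$ to zero since the pressure is determined only up to a constant:

```python
import numpy as np

# Pressure from -Delta p = f on the 1-torus (l = 1) via Fourier transform:
# divide each nonzero mode by (2 pi alpha)^2; the zero mode of p is set to 0.
N = 64
x = np.linspace(0.0, 1.0, N, endpoint=False)
f = np.cos(2.0 * np.pi * x)               # a mean-zero sample right-hand side

f_hat = np.fft.fft(f)
alpha = np.fft.fftfreq(N, d=1.0 / N)      # integer wave numbers 0,1,...,-1
denom = (2.0 * np.pi * alpha) ** 2
p_hat = np.zeros_like(f_hat)
p_hat[1:] = f_hat[1:] / denom[1:]         # invert -Delta mode by mode
p = np.real(np.fft.ifft(p_hat))

# For f = cos(2 pi x) the solution is p = cos(2 pi x) / (2 pi)^2.
err = np.max(np.abs(p - np.cos(2.0 * np.pi * x) / (2.0 * np.pi) ** 2))
```

The same mode-by-mode division, applied componentwise in $\alpha\in{\mathbb Z}^n$, is what the Leray projection term in (\ref{navleraya}) encodes.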
For a fixed function $\mathbf{v}$ which solves (\ref{navleraya}) with (\ref{press}) the Cauchy problem for divergence
\begin{equation}
\begin{array}{ll}
\frac{\partial}{\partial t}\operatorname{div} \mathbf{v}-\nu \Delta \operatorname{div} \mathbf{v}+\sum_{j=1}^nv_j\frac{\partial}{\partial x_j}\operatorname{div} \mathbf{v}+\sum_{j,k=1}^nv_{j,k}v_{k,j}=-\Delta p\\
\\
\operatorname{div} \mathbf{v}(0,.)=0
\end{array}
\end{equation}
simplifies to
\begin{equation}\label{cauchydiv}
\begin{array}{ll}
\frac{\partial}{\partial t}\operatorname{div} \mathbf{v}-\nu \Delta \operatorname{div} \mathbf{v}+\sum_{j=1}^nv_j\frac{\partial}{\partial x_j}\operatorname{div} \mathbf{v}=0\\
\\
\operatorname{div} \mathbf{v}(0,.)=0,
\end{array}
\end{equation}
yielding $\operatorname{div} \mathbf{v}=0$ as the unique solution of (\ref{cauchydiv}). For this reason it suffices to solve the Navier-Stokes equation in its Leray projection form (as is well-known).
In the next section we describe the structure of the proof of the main theorems. Then in Section 3 we prove theorem \ref{mainthm1}.
\begin{cor}\label{mainthm2}
The statements of theorem \ref{mainthm1} hold for all numbers $l>0$ and $\nu>0$. Furthermore, the solution is unique in the function space
\begin{equation}
F_{\mbox{ns}}=\left\lbrace \mathbf{g}:\left[0,\infty\right)\times{\mathbb R}^n\rightarrow {\mathbb R}^n|g_i\in C^{1}\left( \left[0,\infty\right),H^s\left( {\mathbb R}^n\right) \right)~\&~g_i(0,.)=h_i \right\rbrace ,
\end{equation}
where $\mathbf{g}=(g_1,\cdots ,g_n)^T$ and $h_i\in H^s$ for $s>n+2\geq 3$.
\end{cor}
\begin{rem}
Under the assumptions of corollary \ref{mainthm2} the methods of this paper lead directly to contraction results. Concerning uniqueness, the regularity assumptions on the initial data may be weakened, of course.
\end{rem}
If the viscosity converges to zero, then we get the incompressible Euler equation. A Trotter product formula does not hold in this case, but we can apply the Dyson formalism for infinite regular matrix operations (in the sense defined in this paper) directly. However, we lose uniqueness for $n\geq 3$, and for $n\geq 3$ there exist singular solutions as well.
\begin{cor}
For any dimension $n$ there exists a global regular solution branch of the incompressible Euler equation for data $h_i\in h^s\left({\mathbb Z}^n\right), 1\leq i\leq n$ along with $s>n+2$.
\end{cor}
Here, by a global regular solution branch we mean a global solution where valuations of the $i$th component for $1\leq i\leq n$ at time $t\geq 0$ exist in $h^{s}\left({\mathbb Z}^n\right) $ for all time $t\geq 0$.
Note that an analogous result holds on the whole domain with spatial part ${\mathbb R}^n$, where viscosity limits for contraction results of the Navier-Stokes equation can be used. However, here we consider a different direct approach via a Dyson formalism. A viscosity limit approach via a Trotter product formula does not seem appropriate in this case.
\begin{cor}
For dimension $n\geq 3$ there exist singular solutions of the incompressible Euler equation for regular data $0\neq h_i\in h^s\left({\mathbb Z}^n\right), 1\leq i\leq n$ along with $s>n+2$.
\end{cor}
The lower bound $n=3$ for singular solutions is related to the fact that the class of singular solutions which can be constructed has no analogue in dimension $n\leq 2$: there the constructive ansatz of the solution collapses to the trivial solution.
\section{Structure and ideas of the proof of theorem \ref{mainthm1} and corollary \ref{mainthm2}}
In the proof of theorem \ref{mainthm1} in section 3 below we first recall that the incompressible Navier-Stokes equation is formally equivalent to a system of $n$ infinite ODE-systems
\begin{equation}\label{navode2*}
\begin{array}{ll}
\frac{d v_{i\alpha}}{dt}=\sum_{j=1}^n \nu\left( -\frac{4\pi \alpha_j^2}{l^2}\right)v_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v_{j(\alpha-\gamma)}v_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)v_{j\gamma}v_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2},
\end{array}
\end{equation}
where $\alpha\in {\mathbb Z}^n$, and where the initial data $\mathbf{v}^F_i(0)=\left( v_{i\alpha}(0)\right)_{\alpha \in {\mathbb Z}^n}$ for $1\leq i\leq n$ and $\alpha\in {\mathbb Z}^n$ are given by $n$ infinite vectors
\begin{equation}\label{initdatamod}
\mathbf{v}^F_i(0)=\mathbf{h}^F_i=\left(h_{i\alpha}\right)^T.
\end{equation}
Operations such as $\sum_{\gamma\in {\mathbb Z}^n}v_{k(\alpha-\gamma)}v_{j\gamma}$ are interpreted as infinite matrix-vector operations.
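As an illustrative aside (a numerical sketch under simplifying assumptions, not part of the proof), the one-dimensional analogue of such an operation is a discrete convolution, and one can check numerically that it preserves polynomial decay of the modes; the decay order $s=4$ and the mode window below are arbitrary choices for illustration:

```python
import numpy as np

# 1-d toy analogue of sum_gamma v_{k(alpha-gamma)} v_{j gamma}: a discrete
# convolution of two sequences with polynomial decay of order s. The point
# illustrated is that the result again decays like (1 + |alpha|)^{-s}.
s = 4.0
alpha = np.arange(-200, 201)
v = 1.0 / (1.0 + np.abs(alpha)) ** s          # modes with decay of order s
conv = np.convolve(v, v, mode="same")         # (v * v)_alpha on the window

# If decay of order s is preserved, this weighted sequence stays bounded.
ratio = conv * (1.0 + np.abs(alpha)) ** s
```

The uniform bound on `ratio` mirrors the Young-type estimates via weakly singular elliptic integrals used later in the paper.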
We also consider the equivalent representation with respect to the real basis
\begin{equation}\label{realbasisa}
\left\lbrace \sin\left(\frac{2\pi\alpha x}{l}\right),\cos\left(\frac{2\pi\alpha x}{l}\right) \right\rbrace_{\alpha \in {\mathbb N}^n}.
\end{equation}
In this case we use the notation
\begin{equation}\label{initdatamodr}
\mathbf{v}^{re,F}_i(0)=\mathbf{h}^{re,F}_i=\left(h^{re}_{i\alpha}\right)^T
\end{equation}
for the initial data in order to indicate that we are referring to data in the real basis
(\ref{realbasisa}). The structure is quite similar, but for each multiindex $\alpha$ we have one equation for the cosine modes and one equation for the sine modes (cf.\ (\ref{aa}) below). For both types of modes we have operations of the form $\sum_{\gamma\in {\mathbb Z}^n}v_{k(\alpha-\gamma)}v_{j\gamma}$ and operations of the form $\sum_{\gamma\in {\mathbb Z}^n}v_{k(\alpha+\gamma)}v_{j\gamma}$, which can both be considered as equivalent infinite matrix-vector operations. The estimates of these operations via weakly singular elliptic integrals are the same, and the transition from the complex system to the real system and vice versa is straightforward. Comparison with the real system makes sure that the procedure written in the more succinct complex notation leads to real solutions. This is clear from an analytical point of view anyway, but from a computational point of view errors of computation should be compared to the real system in order to extract the optimal real approximative solutions of a solution with a possibly complex error.
For numerical purposes considered later it is also useful to observe the dependence of the different terms on the viscosity $\nu$ and the size of the torus.
The representation in (\ref{navode2*}) is more convenient than a representation with respect to the real data but we shall see at every step of the argument that an analogous argument also holds for the representation of the infinite ODE systems with respect to the real basis in (\ref{realbasisa}).
The entries $h_{i\alpha}$ of the infinite vector in (\ref{initdatamod}) denote the Fourier modes of the functions $h_i$ along with $\mathbf{h}\in \left[ C^{\infty}\left({\mathbb T}^n_l\right) \right]^n$. Smoothness of $\mathbf{h}$ translates to polynomial decay of the modes $h_{i\alpha}$. Here we may say that the modes $h_{i\alpha}$ have polynomial decay of order $m>0$ if
\begin{equation}
|\alpha|^mh_{i\alpha}\downarrow 0 \mbox{ as }|\alpha|=\sum_{i=1}^n|\alpha_i|\uparrow \infty
\end{equation}
for a fixed positive integer $m$, and the modes $h_{i\alpha}$ have polynomial decay if they have polynomial decay of any order $m>0$. We may use the terms 'modes of solution have polynomial decay' and 'solution is smooth' interchangeably in our context.
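For illustration (a numerical sketch, not part of the argument; the sample function and the tested order $m$ are arbitrary choices), the polynomial decay of the Fourier modes of a smooth periodic function can be observed directly:

```python
import numpy as np

# Fourier modes of a smooth function on the 1-torus (l = 1): for any fixed
# order m the weighted modes |alpha|^m |h_alpha| tend to zero as |alpha|
# grows, the discrete counterpart of smoothness of h.
N = 256
x = np.linspace(0.0, 1.0, N, endpoint=False)
h = np.exp(np.sin(2.0 * np.pi * x))        # a smooth periodic sample function
modes = np.fft.rfft(h) / N                 # one-sided modes h_alpha
alpha = np.arange(1, len(modes))

m = 6                                      # polynomial order to test
weighted = np.abs(modes[1:]) * alpha.astype(float) ** m
```

The tail of `weighted` lies far below its head, i.e., $|\alpha|^m h_{i\alpha}\downarrow 0$ in the sense of the definition above.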
We may rewrite the equation (\ref{navode2*}) in the form
\begin{equation}\label{navode2*rewr}
\begin{array}{ll}
\frac{d \mathbf{v}^F}{dt}=A^{NS}\left(\mathbf{v}\right) \mathbf{v}^F,
\end{array}
\end{equation}
where $\mathbf{v}^F=\left(\mathbf{v}^F_1,\cdots ,\mathbf{v}^F_n\right)^T$. Furthermore, $A^{NS}\left(\mathbf{v}\right) $ is an $n{\mathbb Z}^n\times n{\mathbb Z}^n$-matrix
\begin{equation}
A^{NS}\left(\mathbf{v}\right) =\left(A^{NS}_{ij}\left(\mathbf{v}\right)\right)_{1\leq i,j\leq n},
\end{equation}
where for $1\leq i,j\leq n$ the entry $A^{NS}_{ij}\left(\mathbf{v}\right) $ is a ${\mathbb Z}^n\times {\mathbb Z}^n$-matrix. We define
\begin{equation}
A^{NS}\left( \mathbf{v}\right)\mathbf{v}^F =\left(\sum_{j=1}^nA^{NS}_{1j}\left(\mathbf{v}\right) \mathbf{v}^F_j ,\cdots,\sum_{j=1}^nA^{NS}_{nj}\left( \mathbf{v}\right) \mathbf{v}^F_j \right)^T,
\end{equation}
where for all $1\leq i\leq n$
\begin{equation}
\sum_{j=1}^nA^{NS}_{ij}\left(\mathbf{v}\right) \mathbf{v}^F_j=\left( \sum_{j=1}^n \sum_{\beta\in {\mathbb Z}^n}A^{NS}_{i\alpha j\beta}\left(\mathbf{v}\right) v_{j\beta}\right)_{\alpha\in {\mathbb Z}^n} .
\end{equation}
The entries $A^{NS}_{i\alpha j\beta}\left(\mathbf{v}\right)$ of $A^{NS}\left( \mathbf{v}\right)$ are determined by the equation in (\ref{navode2*}) of course. On the diagonal, i.e., for $i=j$ we have the entries
\begin{equation}
\begin{array}{ll}
\delta_{ij}A^{NS}_{i\alpha j\beta}\left(\mathbf{v}\right)=\delta_{ij\alpha\beta}\sum_{j=1}^n \nu\left( -\frac{4\pi \alpha_j^2}{l^2}\right)
-\delta_{ij}\frac{2\pi i (\alpha_j-\beta_j)}{l}v_{i(\alpha-\beta)}\\
\\+\delta_{ij}2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)v_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi\alpha_i^2},
\end{array}
\end{equation}
where $\delta_{ij}=1$ iff $i=j$, $\delta_{ij\alpha\beta}=1$ iff $i=j$ and $\alpha=\beta$, and both are zero otherwise (Kronecker $\delta$-functions), and off-diagonal we have for $i\neq j$ the entries
\begin{equation}
\begin{array}{ll}
(1-\delta_{ij})A^{NS}_{i\alpha j\beta}\left(\mathbf{v}\right)=\frac{2\pi i (\alpha_j-\beta_j)}{l}v_{i(\alpha-\beta)}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)v_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi\alpha_i^2}.
\end{array}
\end{equation}
Here, we remark again that the $i$ in the context $2\pi i$ denotes the complex number $i=\sqrt{-1}$, while the other indices $i$ are integer indices. The next step is to define a controlled system according to the ideas described in the introduction. The additional idea of an auto-control is useful in order to get better upper bounds of the solution. We postpone this and consider first the idea of a simple external control in dual space (which is simple compared to the control functions in classical spaces we considered in \cite{KB3} and elsewhere). Recall the main idea: the control function cancels the zero modes in an iterative scheme. If this is possible and a regular limit exists, then it cancels the zero modes in the limit as well.
In the real basis (\ref{realbasisa}) we may rewrite the equation (\ref{navode2*}) in the form
\begin{equation}\label{navode2*rewrre}
\begin{array}{ll}
\frac{d \mathbf{v}^{re,F}}{dt}=A^{re,NS}\left(\mathbf{v}^{re}\right) \mathbf{v}^{re,F},
\end{array}
\end{equation}
where $\mathbf{v}^{re,F}=\left(\mathbf{v}^{re,F}_1,\cdots ,\mathbf{v}^{re,F}_n\right)^T$. Furthermore, $A^{re,NS}\left(\mathbf{v}^{re}\right) $ is a $2n{\mathbb N}^n\times 2n{\mathbb N}^n$-matrix
\begin{equation}
A^{re,NS}\left(\mathbf{v}\right) =\left(A^{re,NS}_{ij}\left(\mathbf{v}^{re}\right)\right)_{1\leq i,j\leq n}
\end{equation}
where for $1\leq i,j\leq n$ the entries $A^{re,NS}_{ij}\left(\mathbf{v}^{re}\right) $ can be obtained easily from $A^{NS}_{ij}\left(\mathbf{v}\right) $ or derived independently; we shall do this in the proof below.
Consider the representation in (\ref{navode2*}). The zero modes do not appear in the dissipative term related to the Laplacian, and we have observed that we may represent the Leray projection term without zero mode terms as well. Hence, for the zero modes $v_{i0}$ in the equation in (\ref{navode2*}) we have for $\alpha=0$
\begin{equation}\label{navode2*alpha0}
\begin{array}{ll}
\frac{d v_{i0}}{dt}=
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v_{j(-\gamma)}v_{i\gamma}.
\end{array}
\end{equation}
Assuming that the function $t\rightarrow v_{i0}(t)$ is bounded, it is natural to define
\begin{equation}
r^0_{i0}(t):=-v_{i0}(t).
\end{equation}
Formally, this leads to the controlled equation for $v^{r}_{i\alpha},~\alpha\in {\mathbb Z}^n\setminus \{0\}={\mathbb Z}^{n,0}$ of the form
\begin{equation}\label{navode2*r}
\begin{array}{ll}
\frac{d v^r_{i\alpha}}{dt}=\sum_{j=1}^n \nu\left( -\frac{4\pi \alpha_j^2}{l^2}\right)v^r_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n\setminus \left\lbrace 0,\alpha\right\rbrace }\frac{2\pi i \gamma_j}{l}v^r_{j(\alpha-\gamma)}v^r_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n\setminus \left\lbrace 0,\alpha\right\rbrace}4\pi \gamma_j(\alpha_k-\gamma_k)v^r_{j\gamma}v^r_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2},
\end{array}
\end{equation}
where the initial data $\mathbf{v}^{r,F}_i(0)=\left( v^r_{i\alpha}(0)\right)_{\alpha \in {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace }$ for $1\leq i\leq n$ and $\alpha\in {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace$ are given by $n$ infinite vectors
\begin{equation}
\mathbf{v}^{r,F}_i(0)=\mathbf{h}^{r,F}_i=\left(h_{i\alpha}\right)^T_{\alpha\in {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace}.
\end{equation}
Note that
\begin{equation}
v_{i0}+r_{i0}=0,
\end{equation}
such that we may cancel the zero modes. The justification for this is obtained for each iteration step $m$ below.
The idea for a global scheme is to determine $\mathbf{v}^{r,F}=\lim_{m\uparrow \infty}\mathbf{v}^{r,m,F}$ for a simple control function and a certain iteration
\begin{equation}\label{navode2*rewrlina}
\begin{array}{ll}
\frac{d \mathbf{v}^{r,m,F}}{dt}=A^{NS}\left(\mathbf{v}^{r,m-1}\right) \mathbf{v}^{r,m,F},
\end{array}
\end{equation}
starting with $\mathbf{v}^{r,0}:=\mathbf{h}$ or with the information of a global solution of the multivariate Burgers equation. In the real basis we write (\ref{navode2*rewrlina}) in the form
\begin{equation}\label{navode2*rewrlinareal}
\begin{array}{ll}
\frac{d \mathbf{v}^{re,r,m,F}}{dt}=A^{re,NS}\left(\mathbf{v}^{re,r,m-1}\right) \mathbf{v}^{re,r,m,F},
\end{array}
\end{equation}
where we shall be more explicit below. The equations in (\ref{navode2*rewrlina}) and in (\ref{navode2*rewrlinareal}) are linear equations, which we can solve by Banach contraction principles in exponentially time-weighted norms of time-dependent functions with values in strong Sobolev spaces, where the evaluation of each function component at time $t\geq 0$, i.e., $\mathbf{v}^{r,m,F}_i(t)$, lives in a Sobolev space $h^s\left( {\mathbb Z}^n\right)$ with $s>n+2$. Spatial regularity of this order is observed if we have data of this order of regularity, where comparison with weakly singular elliptic integrals ensures that certain infinite matrix operations of solution representations in terms of the data preserve this order of regularity. An exponential time weight of the norm ensures that the representations are globally valid in time. Note that the equation in (\ref{navode2*rewrlina}) and its real form in (\ref{navode2*rewrlinareal}) correspond to linear partial integro-differential equations. This linearity can be used. In order to find the solution of the incompressible Navier-Stokes equation, iterations of this type of linear partial integro-differential equations are considered. We observe that for the functional increments
\begin{equation}
\delta\mathbf{v}^{re,r,m+1,F}:=\mathbf{v}^{re,r,m+1,F}-\mathbf{v}^{re,r,m,F}
\end{equation}
for all $m\geq 0$ we have
\begin{equation}\label{navode2*rewrlinareal2}
\begin{array}{ll}
\frac{d \delta\mathbf{v}^{re,r,m+1,F}}{dt}=A^{re,NS}\left(\mathbf{v}^{re,r,m}\right) \delta\mathbf{v}^{re,r,m+1,F}+A^{re,NS}\left(\delta\mathbf{v}^{re,r,m}\right) \mathbf{v}^{re,r,m,F}.
\end{array}
\end{equation}
Time discretization of the equations then leads to a sequence of time-homogeneous equations which can be solved by Trotter product formulas. This is the most elementary method, where we use Euler schemes. We also consider more sophisticated methods, where the Euler part of the equation is locally solved in time by a Dyson formalism. This leads straightforwardly to higher-order schemes with respect to time. In order to derive Trotter product formulas we consider some multiplicative behavior of infinite matrix multiplications with polynomial decay (related to weakly singular integrals). Limits, where the time step size goes to zero, lead to Dyson type formulas for time-dependent approximating linearized infinite ODEs. The simplest method in order to implement this program is the Euler time discretization above. There are refinements, higher-order schemes, and generalisations of the Trotter product formula. Note that the dissipative terms (corresponding to the Laplacian) are not time-dependent even for generalized models with spatially dependent viscosity, and we can integrate the 'Euler part' locally via a Dyson formula. More precisely, if $E^m\equiv E^m(t)$ denotes the matrix of the Euler part of a linearized approximating equation at iteration stage $m\geq 1$ (an equation such as in (\ref{navode2*rewrlina}) with viscosity $\nu=0$), then for given regular data $ \mathbf{v}^{m,E}(t_0),\mathbf{v}^{m-1,E}(t_0)$ at initial time $t_0\geq 0$ we have a well-defined time-local solution
\begin{equation}\label{dysonobs}
\begin{array}{ll}
\mathbf{v}^{m,E}(t)=\mathbf{v}^{m,E}(t_0)+\\
\\
\sum_{p=1}^{\infty}\frac{1}{p!}\int_{t_0}^tds_1\int_{t_0}^tds_2\cdots \int_{t_0}^tds_p T_p\left(E^{m-1}(s_1)E^{m-1}(s_2)\cdots E^{m-1}(s_p) \right)\mathbf{v}^{m,E}(t_0),
\end{array}
\end{equation}
where the data $\mathbf{v}^{m-1,E}(t_0)$ appear in the matrices $E^{m-1}(s_i)$. For linear equations with globally regular matrices $E^{m-1}(s_p)$ this representation is even globally valid in time, but without viscosity we lose uniqueness for the nonlinear Euler solution limit in dimension $n \geq 3$ (we shall come back to the reasons for this). This formula for the Euler part can be used for higher-order numerical schemes. Note, however, that the Euler scheme which we used in a former incarnation of this work is sufficient in order to prove global existence of solutions with polynomial mode decay if the data have polynomial decay.
The series of Dyson type solutions leads to contraction results in appropriate time-weighted function spaces of functions dependent of time and with values in some function spaces of infinite sequences of modes with a certain appropriate order of polynomial decay.
Indeed, in the simplest version of a global existence proof we define Trotter product formulas for certain linear infinite equations with time-independent coefficients, and then we define a time discretization in order to apply these Trotter product formulas. Even the time-dependent linear approximations are then solved by double limits. For each finite approximating system of modes of order less than or equal to some $l>0$, the Trotter product limit considered is first a time discretization limit where the order of modes remains fixed. This leads to a sequence of Dyson type formula limits where we have polynomial decay of some order in the limit. We give some more details.
In order to define a global solution scheme for this system of coupled infinite nonlinear ODEs we may start with the solution of the (viscous) multivariate Burgers equation (where we know that a unique global smooth solution exists), i.e., the solution of
\begin{equation}\label{navode2*burg}
\begin{array}{ll}
\frac{d v^{B}_{i\alpha}}{dt}=\nu\sum_{j=1}^n \left( -\frac{4\pi \alpha_j^2}{l^2}\right)v^{B}_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{B}_{j(\alpha-\gamma)}v^B_{i\gamma},
\end{array}
\end{equation}
where $\alpha\in {\mathbb Z}^n$, and where the initial data $\mathbf{v}^{BF}_i(0)=\left( v^B_{i\alpha}(0)\right)_{\alpha \in {\mathbb Z}^n}$ for $1\leq i\leq n$ and $\alpha\in {\mathbb Z}^n$ are given by $n$ infinite vectors
\begin{equation}
\mathbf{v}^{BF}_i(0)=\mathbf{h}^F_i=\left(h_{i\alpha}\right)^T.
\end{equation}
We know that the solution $\left( v^B_{i\alpha}\right)_{\alpha\in {\mathbb Z}^n,~1\leq i\leq n}$ is in $h^s\left({\mathbb Z}^n\right)$ for all $s\in {\mathbb R}$ for all time $t\geq 0$. However, without assuming knowledge about the multivariate Burgers equation we may also start with
\begin{equation}\label{navodeh}
\begin{array}{ll}
\frac{d v^{0}_{i\alpha}}{dt}=\nu\sum_{j=1}^n \left( -\frac{4\pi \alpha_j^2}{l^2}\right)v^{0}_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}h_{j(\alpha-\gamma)}v^0_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)v^0_{j\gamma}h_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2},
\end{array}
\end{equation}
where $\alpha\in {\mathbb Z}^n$, and where the initial data $\mathbf{v}^{0,F}(0)=\left( \mathbf{v}^{0,F}_1(0),\cdots,\mathbf{v}^{0,F}_n(0)\right)$ are given by $\mathbf{v}^{0,F}_i(0)=\left( v^0_{i\alpha}(0)\right)_{\alpha \in {\mathbb Z}^n}=\left( h_{i\alpha}\right)_{\alpha \in {\mathbb Z}^n}$ for $1\leq i\leq n$.
In the equation (\ref{navodeh}) and for $\nu >0$ the zero modes ($|\alpha|=0$) are the only modes where the damping (viscous) term
\begin{equation}
\nu\sum_{j=1}^n \left( -\frac{4\pi \alpha_j^2}{l^2}\right)v^{0}_{i\alpha}
\end{equation}
cancels. The same holds for the equation in (\ref{navode2*}), of course. Therefore we introduce the first approximation of a control function $r^0_{i0}:[0,\infty)\rightarrow {\mathbb R}$ for each $1\leq i\leq n$ such that the scheme for $\mathbf{v}^{r,0,F}_i=\mathbf{v}^{0,F}_i+\mathbf{e}_0r^0_{i0}, 1\leq i\leq n$ becomes a system of nonzero modes. At each stage we have to prove that this is possible, of course, i.e., that the solution of the linear approximative problem exists. Here, $\mathbf{v}^{r,0,F}_i=\mathbf{v}^{0,F}_i+\mathbf{e}_0r^0_{i0}=\left(v^{r,0}_{i\alpha} \right)_{\alpha\in {\mathbb Z}^n}$ is a vector with
\begin{equation}
v^{r,0}_{i\alpha}:=\left\lbrace \begin{array}{ll}
v^0_{i\alpha} \mbox{ if }\alpha\neq 0,\\
\\
v^0_{i0}+r^0_{i0}=0 \mbox{ else.}
\end{array}\right.
\end{equation}
Hence, the control function is constructed such that it cancels the zero modes of the original system at each stage of the construction. More precisely, it is constructed as a series $(r^m_0)_m$ where the convergence as $m\uparrow \infty$ follows from properties of the controlled approximative solution functions $\mathbf{v}^{r,m,F}$. Note that it has to be shown that the control function is a) finite at each stage, and b) finite in the limit of all stages. Note that the Leray projection term does not contribute to the zero modes in (\ref{navodeh}), and this may already indicate this finiteness.
The next step is to analyze the scheme formally defined in the introduction starting from (\ref{controlstart}) to (\ref{controlend}). The effect of this formal scheme is that it is autonomous with respect to the nonzero modes, i.e., only the modes with $|\alpha|\neq 0$ are dynamically active. The fact that the suppression of the zero modes via a control function is possible, i.e., that the controlled system is well-defined, is shown within the proof. Then we get into the heart of the proof. The iteration leads to a sequence $\left( \mathbf{v}^{r,m,F}(t)\right)_{ m\in {\mathbb N}}=\left( \mathbf{v}^{r,m,F}_i(t)\right)_{1\leq i\leq n, m\in {\mathbb N}}$ with
\begin{equation}\label{approxstagem}
\mathbf{v}^{r,m,F}(t):=T\exp\left(A^{r}_{m}t\right)\mathbf{h}^{r,F},
\end{equation}
where $T$ is a time order operator (as in Dyson's formalism), and the restrictive dual function spaces together with the dissipative features of the operator $A^{r}_m$ will make sure that at each iteration step $m$ the approximation (\ref{approxstagem}) (corresponding to a linear equation) really makes sense. In the real basis we have an analogous sequence
$\left( \mathbf{v}^{re,r,m,F}(t)\right)_{ m\in {\mathbb N}}=\left( \mathbf{v}^{re,r,m,F}_i(t)\right)_{1\leq i\leq n, m\in {\mathbb N}}$ with
\begin{equation}\label{approxstagemre}
\mathbf{v}^{re,r,m,F}(t):=T\exp\left(A^{re,r}_{m}t\right)\mathbf{h}^{re,r,F}.
\end{equation}
In the following we use the complex notation. In the proof below, we shall supplement the related notation in the real basis and show why the argument holds also in the real basis.
Concerning dissipation features, note that we may choose $\nu>0$ arbitrarily, as is well-known and as we pointed out in the introduction. Fortunately, we do not need this, which indicates that the algorithm can be extended to models with variable positive viscosity, which is desirable from a physical point of view. However, from an algorithmic perspective for the model with constant viscosity it may be useful to consider appropriate constellations of viscosity $\nu >0$ and torus size $l>0$. Note that we have a time-order operator $T$ for all stages $m\geq 1$ since the infinite matrices $A^{r}_{m}$ depend on time $t$ for $m\geq 1$. In order to make sense of matrix multiplication of infinite unbounded matrices we consider the matrices involved in the computation of the first approximation $\mathbf{v}^{r,0,F}_i(t)=P_0\mathbf{v}^{0,F}_i(t)$ (where $P_0$ is the related projection operator which eliminates the zero mode) as an instructive example. We may rewrite (\ref{navodeh}) in the form
\begin{equation}\label{navodeh2}
\begin{array}{ll}
\frac{d v^{0}_{i\alpha}}{dt}=\nu\sum_{j=1}^n \left(-\frac{4\pi \alpha_j^2}{l^2}\right)v^{0}_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}h_{j(\alpha-\gamma)}v^0_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_i(\alpha_k-\gamma_k)h_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2}v^0_{i\gamma}\\
\\
+\sum_{j=1,j\neq i}^n2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)h_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2}v^0_{j\gamma}.
\end{array}
\end{equation}
Hence this infinite linear system can be written in terms of a matrix $A^r_0=A_0$ with $n\times n$ infinite matrix entries. More precisely, we have
\begin{equation}
A_{0}=\left(A^{ij}_0\right)_{1\leq i,j\leq n},
\end{equation}
where for $1\leq i,j \leq n$ we have
\begin{equation}
A^{ij}_0=\delta_{ij}D^0+\delta_{ij}C^0+L^0_{ij}
\end{equation}
along with the Kronecker $\delta$-function $\delta_{ij}$, and with the infinite matrices
\begin{equation}\label{mat}
\begin{array}{ll}
D^0:=\left( -\nu\delta_{\alpha\beta}\sum_{j=1}^n\frac{4\pi \alpha_j^2}{l^2}\right)_{\alpha,\beta\in {\mathbb Z}^n},\\
\\
C^0_{ij}:=\left( -\sum_{j=1}^n\frac{2\pi i (\alpha_j-\beta_j)}{l}h_{i(\alpha-\beta)}\right)_{\alpha,\beta \in {\mathbb Z}^n},\\
\\
L^0_{ij}=\left( (2\pi i)\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)h_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi\alpha_i^2}\right)_{\alpha,\beta \in {\mathbb Z}^n},
\end{array}
\end{equation}
corresponding to the Laplacian, the convection term, and the Leray projection terms, respectively. Strictly speaking, we have defined an $n\times n$ matrix of infinite matrices where the Burgers equation terms and the linearized Leray projection terms exist also off-diagonal. It is clear how the matrix multiplication may be defined in order to reformulate (\ref{navodeh2}), and we do not dwell on these trivial formalities. We note that
\begin{equation}
\begin{array}{ll}
\frac{\partial \mathbf{v}^{0,F}}{\partial t}=A_0\mathbf{v}^{0,F},\\
\\
\mathbf{v}^{0,F}(0)=\mathbf{h}^F
\end{array}
\end{equation}
along with the vectors $\mathbf{v}^F=\left( \mathbf{v}^F_1,\cdots ,\mathbf{v}^F_n\right)^T$ and $\mathbf{h}^F=\left( \mathbf{h}^F_1,\cdots ,\mathbf{h}^F_n\right)^T$. Note that we have off-diagonal terms only because we consider global equations, i.e., equations which correspond to linear partial integro-differential equations.
Note that the matrices $D^0, C^0_{ij}, L^0_{ij}$ are unbounded even for regular data $h_{i}$ with fast decreasing modes (for the matrix $C^0_{ij}$ consider constant $\alpha-\beta$ and let $\beta_j$ go to infinity for some $j$). The difficulty of handling unbounded infinite matrices in dual space becomes apparent. However, multiplications of these matrices with the data $\mathbf{h}$ are regular, and because of the special structure of the matrices involved the matrix-data multiplication inherits polynomial decay and hence leads to a well-defined iteration of matrices (even in the Euler case).
In the case of the incompressible Navier-Stokes equation we can at this point take additional advantage of the dissipative nature of the operator, which is indeed the difference to the Euler equation. This is a major motivation for the dissipative Trotter product formula. We describe it here in the complex notation. We shall supplement this with similar observations in the case of a real basis below. For two matrices
$M=\left( m_{\alpha\beta}\right)_{\alpha,\beta\in {\mathbb Z}^n}$ and $N=\left( n_{\alpha\beta}\right)_{\alpha,\beta\in {\mathbb Z}^n}$ we may formally define the product $P=\left( p_{\alpha\gamma}\right)_{\alpha,\gamma\in {\mathbb Z}^n}=MN$ via
\begin{equation}
p_{\alpha\gamma}=\sum_{\beta\in {\mathbb Z}^n}m_{\alpha\beta}n_{\beta\gamma}
\end{equation}
for all $\alpha,\gamma\in {\mathbb Z}^n$. There are natural spaces for which this definition makes sense (and which we introduce below). In order to apply this natural theory of infinite matrices, developed below, the next step is to observe that for $k\geq 0$ matrices such as
\begin{equation}\label{mat2}
\begin{array}{ll}
\exp(D^0)\left(C^0_{ij}\right)^k=\left( \exp\left( -\nu\sum_{j=1}^n\frac{4\pi \alpha_j^2}{l^2}\right)\left( -\sum_{j=1}^n\frac{2\pi i (\alpha_j-\beta_j)}{l}h_{i(\alpha-\beta)}\right)_{\alpha,\beta \in {\mathbb Z}^n}\right)^k,\\
\\
\exp(D^0)\left( L^0_{ij}\right)^k=
{\Bigg (}\exp\left( -\nu\sum_{j=1}^n\frac{4\pi \alpha_j^2}{l^2}\right)\times\\
\\
\left( 2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)h_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi\alpha_i^2}\right)_{\alpha,\beta \in {\mathbb Z}^n}^k{\Bigg )}
\end{array}
\end{equation}
have indeed bounded entries if the modes $h_{i\beta}$ decrease sufficiently fast. More importantly, we shall see that for regular data the related iterated infinite matrix multiplications applied to the data $h_i,~1\leq i\leq n,~h_i\in h^s\left( {\mathbb Z}^n\right),~s>n+2$, lead to well-defined sequences of vectors whose evaluations at time $t\geq 0$ live in strong Sobolev spaces $h^s\left( {\mathbb Z}^n\right),~s>n+2$. Note that for the reasons mentioned it is difficult to define a formal fundamental solution
\begin{equation}
\exp\left( (\delta_{ij}D^0+C^0_{ij}+L^0_{ij})t\right)
\end{equation}
even at stage zero of the approximation (where we do not need a time-order operator).
However, for regular data $\mathbf{h}^F$ the solution $\mathbf{v}^{0,F}$ of the first approximation, i.e., the expression
\begin{equation}
\exp\left( (\delta_{ij}D^0+C^0_{ij}+L^0_{ij})t\right)\mathbf{h}^F
\end{equation}
can be defined directly. We do not even need this. We may consider projections of infinite vectors to finite vectors with modes of order less than $l>0$, i.e., projections $P_{v^l}$ which are defined on infinite vectors $\left( g_{\alpha} \right)^T_{\alpha\in {\mathbb Z}^n}$ by
\begin{equation}
P_{v^l}\left( g_{\alpha} \right)^T_{\alpha\in {\mathbb Z}^n}:=\left( g_{\alpha} \right)^T_{|\alpha|\leq l}
\end{equation}
and projections $P_{M^l}$ of infinite matrices $\left(m_{\alpha\beta}\right)_{\alpha,\beta \in {\mathbb Z}^n}$ to finite matrices with
\begin{equation}
P_{M^l}\left( m_{\alpha\beta} \right)_{\alpha,\beta\in {\mathbb Z}^n}
:=\left( m_{\alpha\beta} \right)_{|\alpha|,|\beta|\leq l}
\end{equation}
and prove Trotter product formulas for finite dissipative systems, and then consider limits.
This will lead us to the crucial observation then of a product formula of Trotter-type of the form
\begin{equation}\label{trotternav}
\begin{array}{ll}
\lim_{l\uparrow \infty}\exp\left( (\delta_{ij}P_{M^l}D^0+P_{M^l}C^0_{ij}+P_{M^l}L^0_{ij})t\right)P_{v^l}\mathbf{h}^F\\
\\
=\lim_{l\uparrow \infty}\lim_{k\uparrow \infty}\left( \exp\left( \delta_{ij}P_{M^l}D^0\frac{t}{k}\right)\exp\left(\left( P_{M^l}C^0_{ij}+P_{M^l}L^0_{ij}\right) \frac{t}{k}\right)\right)^kP_{v^l}\mathbf{h}^F,
\end{array}
\end{equation}
where $(\delta_{ij}D^0+C^0_{ij}+L^0_{ij})$, i.e., the argument of the projection operator $P_{M^l}$, denotes a 'quadratic' matrix with $\left( n\times {\mathbb Z}^n\right) $ rows and $\left( n\times {\mathbb Z}^n\right) $ columns, and $\mathbf{h}^F$ is the mode vector of some regular function. This means that for finite $l>0$ we can prove a Trotter-type relation, and in the limit the left side equals the solution $\mathbf{v}^{0,F}$ of the equation above and is defined by the right side of (\ref{trotternav}). In the last step the regularity of the data $\mathbf{h}^F$ comes into play.
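As a plain numerical illustration of the mechanism behind (\ref{trotternav}) (not part of the proof), the following sketch checks the Lie-Trotter product formula for a small matrix pair; the $2\times 2$ matrices \texttt{D} and \texttt{C} are hypothetical stand-ins for the projected diffusion and convection blocks, and all helpers are plain Taylor-series routines.

```python
# Toy check of the Lie-Trotter formula
#   exp((D+C)t) = lim_k (exp(D t/k) exp(C t/k))^k,
# mirroring the finite projected systems P_{M^l} in the text.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

def matexp(A, terms=60):
    # Taylor series exp(A) = sum_k A^k / k!  (adequate for small norms)
    n = len(A)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in out]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in matmul(term, A)]
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out

def scaled(A, s):
    return [[x * s for x in row] for row in A]

def trotter(D, C, t, k):
    # (exp(D t/k) exp(C t/k))^k applied by repeated multiplication
    step = matmul(matexp(scaled(D, t / k)), matexp(scaled(C, t / k)))
    out = [[float(i == j) for j in range(len(D))] for i in range(len(D))]
    for _ in range(k):
        out = matmul(out, step)
    return out

def maxdiff(A, B):
    return max(abs(A[i][j] - B[i][j])
               for i in range(len(A)) for j in range(len(A)))

D = [[-1.0, 0.0], [0.0, -2.0]]   # dissipative diagonal part
C = [[0.0, 1.0], [-1.0, 0.0]]    # non-commuting lower-order part
S = [[D[i][j] + C[i][j] for j in range(2)] for i in range(2)]
exact = matexp(S)                # exp((D+C)t) at t = 1
err4 = maxdiff(trotter(D, C, 1.0, 4), exact)
err64 = maxdiff(trotter(D, C, 1.0, 64), exact)
```

The error decays like $O(1/k)$, so with $k=64$ substeps the defect is roughly a sixteenth of the $k=4$ defect.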
In this form the formula is useful only at stage $0$, where the coefficient functions do not depend on time. For stages $m>0$ of the construction we have to take time dependence into account. However, we may define an Euler-type scheme with substages which produce the approximations of stage $m$ in the limit. As we indicated above, we may even set up higher order schemes using a Dyson formalism, or use the Dyson formalism in order to compute an Euler equation limit for short time (in this case we have no fundamental solution at all and need the application to the data).
The formula (\ref{trotternav}) depends on an observation which uses the diagonal structure of $D^0$ (and therefore of $\exp\left(D^0\right)$). Here the matrix $C^0_{ij}+L^0_{ij}-\left(C^0_{ij}+L^0_{ij}\right)^T$, i.e., the deficiency of symmetry in the matrices $C^0_{ij}+L^0_{ij}$, factorizes with the correction term of an infinite analog of a special form of a CBH-type formula. It is an interesting fact that this deficiency lies in a natural matrix space, which we are going to define next.
Concerning the natural matrix spaces, in order to make sense of formulas as in (\ref{approxstagem}) for $s>n+ 2$ for a matrix $M=\left( m_{\alpha\beta}\right)_{\alpha,\beta\in {\mathbb Z}^n}$ we say that
\begin{equation}
M\in M^s_n
\end{equation}
if for all $\alpha,\beta \in {\mathbb Z}^n$
\begin{equation}
|m_{\alpha\beta}|\leq \frac{C}{1+|\alpha-\beta|^{2s}}
\end{equation}
for some $C>0$.
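The defining decay bound is stable under matrix multiplication, which is what makes the class $M^s_n$ useful. The following toy computation (an illustration with $n=1$, $s=2$, and truncated summation windows; not part of the argument) checks that a product of two matrices with entries bounded by $1/(1+|\alpha-\beta|^{2s})$ again obeys a bound of the same form with a moderate constant.

```python
# Toy check (n = 1, s = 2): entries decay like 1/(1+|a-b|^(2s)),
# and the matrix product keeps this polynomial off-diagonal decay.
s = 2

def entry(a, b):
    return 1.0 / (1.0 + abs(a - b) ** (2 * s))

BETA = range(-40, 41)            # truncated summation index beta
ratios = []
for a in range(-10, 11):
    for g in range(-10, 11):
        # product entry p_{a g} = sum_b m_{a b} n_{b g}
        p_ag = sum(entry(a, b) * entry(b, g) for b in BETA)
        # ratio of the product entry to the model decay profile
        ratios.append(p_ag * (1.0 + abs(a - g) ** (2 * s)))
worst = max(ratios)
```

The quantity `worst` stays bounded by a small constant uniformly over the index window, which is the discrete analog of the convolution estimate used below.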
For $s>n$ and a vector $w=(w_{\alpha})_{\alpha\in {\mathbb Z}^n}\in h^s\left({\mathbb Z}^n\right)$
we define the multiplication of the infinite matrix $M$ with $w$ by $Mw=\left((Mw)_{\alpha} \right)_{\alpha\in {\mathbb Z}^n}$ along with
\begin{equation}
(Mw)_{\alpha}:=\sum_{\beta\in {\mathbb Z}^n}m_{\alpha\beta}w_{\beta}.
\end{equation}
Indeed we observe that for $r,s>n\geq 2$ we have
\begin{equation}
\sum_{\beta\in {\mathbb Z}^n\setminus \left\lbrace 0,\alpha\right\rbrace }\frac{1}{|\alpha-\beta|^{s}|\beta|^{r}}\leq \frac{C}{1+|\alpha|^{r+s-n}}
\end{equation}
such that for $s>n$ and $M\in M^s_n$ we have indeed $Mw\in h^r\left({\mathbb Z}^n\right)$ if $w\in h^r\left({\mathbb Z}^n\right)$. This implies that
for $s>n$, $M\in M^s_n$ and $w\in h^r\left({\mathbb Z}^n\right)$
\begin{equation}
M^1w:=Mw,~M^{k+1}w:=M\left(M^kw\right)
\end{equation}
is a well-defined recursion for $k\geq 1$. Hence, for a matrix $M$ which is not time-dependent, the analytic vector
\begin{equation}
\exp\left(Mt\right)w:=\sum_{k\geq 0}\frac{M^kt^kw}{k!}
\end{equation}
is well defined (even globally). The reason that we use the stronger assumption $s>n+2$ is that additional mode factors appear in the matrices, so that we need slightly more regularity in order to inherit a certain order of polynomial decay. For a time-dependent matrix $A=A(t)$ we formally define the time-ordered exponential '\`{a} la Dyson' to be
\begin{equation}
\begin{array}{ll}
T\exp(At):=\sum_{m=0}^{\infty}\frac{1}{m!}\int_{[0,t]^m}TA(t_1)\cdots A(t_m)\,dt_1\cdots dt_m\\
\\
:=\sum_{m=0}^{\infty}\int_0^tdt_1\int_0^{t_1}dt_2\cdots \int_0^{t_{m-1}}dt_m\,A(t_1)\cdots A(t_m).
\end{array}
\end{equation}
However, in the situation above of the Navier-Stokes operator this is just a formal definition which lives in an extremely weak function space (which we are not going to define), while especially for the Euler part $E(t):=A(t)|_{\nu =0}$ the expression
\begin{equation}
\begin{array}{ll}
T\exp(Et)f:=\sum_{m=0}^{\infty}\frac{1}{m!}\int_{[0,t]^m}TE(t_1)\cdots E(t_m)f\,dt_1\cdots dt_m\\
\\
:=\sum_{m=0}^{\infty}\int_0^tdt_1\int_0^{t_1}dt_2\cdots \int_0^{t_{m-1}}dt_m\,E(t_1)\cdots E(t_m)f.
\end{array}
\end{equation}
makes perfect sense for regular data $f$, i.e., data of polynomial decay of order $s>n+2$. At least this is true for local time, while for global time we need the viscosity term in order to establish uniqueness. Combining such a representation for the linearized equations at each stage of approximation with a Trotter product formula, which adds another damping term via a negative exponential weight, we get a natural scheme for the incompressible Navier-Stokes equation.
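The Euler-type substage idea behind the time-ordered exponential can be made concrete in a toy setting (all matrices here are hypothetical illustrations): approximate $T\exp(At)v_0$ for a time-dependent $2\times 2$ generator by a product of short-time exponentials evaluated at left endpoints, and compare with a fine explicit Euler integration of $v'=A(t)v$.

```python
import math

def mat_at(t):
    # toy time-dependent generator; A(t1) and A(t2) do not commute
    return [[0.0, math.cos(t)], [math.sin(t), 0.0]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def expm_vec(A, h, v, terms=25):
    # apply exp(A h) to v via the Taylor series
    out, term = v[:], v[:]
    for k in range(1, terms):
        term = [x * h / k for x in matvec(A, term)]
        out = [out[i] + term[i] for i in range(2)]
    return out

def product_scheme(t, k, v0):
    # Euler-type substages: v <- exp(A(t_j) t/k) v at left endpoints t_j
    v, h = v0[:], t / k
    for j in range(k):
        v = expm_vec(mat_at(j * h), h, v)
    return v

def euler_reference(t, steps, v0):
    v, h = v0[:], t / steps
    for j in range(steps):
        Av = matvec(mat_at(j * h), v)
        v = [v[i] + h * Av[i] for i in range(2)]
    return v

v0 = [1.0, 0.0]
v_prod = product_scheme(1.0, 500, v0)
v_ref = euler_reference(1.0, 20000, v0)
gap = max(abs(v_prod[i] - v_ref[i]) for i in range(2))
```

Both approximations converge to the time-ordered solution; the product scheme is the discrete counterpart of the substage construction described above.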
Note that the operator $T$ is the usual Dyson time-order operator, which may be defined recursively using the Heaviside function. Note that in this paper the symbol $T$ will sometimes also denote the time horizon, but it will be clear from the context which meaning is intended. Note furthermore that schemes with explicit use of Heaviside functions may make matters a little delicate if stronger regularity with respect to time derivatives is considered; this is another advantage of the simple Euler time discretization. One may check that the matrices are smooth on the time diagonals, such that these complications of higher order schemes using a Dyson formalism are more of a technical nature. However, one has to be careful at this point, and the Euler scheme simplifies the matter a bit.
In any case, at each stage $m>0$ of the construction we shall define a scheme based on the Trotter product formula such that in the limit the linear approximation expressed formally by
\begin{equation}
\mathbf{v}^{r,m,F}(t):=T\exp\left(A^{r}_{m}t\right)\mathbf{h}^{F},
\end{equation}
acquires a strict sense. This holds also for the uncontrolled linear approximation
\begin{equation}
\mathbf{v}^{m,F}(t):=T\exp\left(A_{m}t\right)\mathbf{h}^{F},
\end{equation}
where we shall see that these expressions can be well-defined for data with $h_i\in h^s\left({\mathbb Z}^n\right)$ for $s>n+2$ and for $t\geq 0$. Note that proving the existence of a global limit of the iteration with respect to $m$ in a regular space also requires $\nu>0$. In order to prove the existence of a limit there are basically three possibilities then. One is to prove the existence of a uniform bound
\begin{equation}\label{uniformbound}
\sup_{t\geq 0}{\big |}\mathbf{v}^{r,m,F}(t){\big |}_{h^s}+\sup_{t\geq 0}{\Big |}\frac{\partial}{\partial t}\mathbf{v}^{r,m,F}(t){\Big |}_{h^s}\leq C
\end{equation}
for some $C>0$ independent of $m$, and then proceed with a compactness argument \`{a} la Rellich. A weaker form of (\ref{uniformbound}) without the time derivative and with a strong spatial norm (large $s$) is another variation of this alternative, since product formulas for Sobolev norms with $s>\frac{1}{2}n$ and a priori estimates of Schauder type lead to an independent proof of the existence of a regular time derivative in the limit $m\uparrow \infty$. Maybe the latter variation is the simplest one. An alternative is a contraction argument on a ball of an appropriate function space with exponential time weight. Clearly, the radius of the ball will depend on the initial data, the dimension, and the viscosity. This dependence can be encoded in the weight of a time-weighted norm, as is known from ODE theory for finite equations. Contraction arguments have the advantage that they lead naturally to uniqueness results. In order to strengthen results concerning the time dependence of regular upper bounds we shall consider auto-controlled schemes, where another time discretization is used and a damping term is introduced via a time dilatation transformation. This latter procedure has the advantage that linear upper bounds and even global uniform upper bounds can be obtained, and it can be combined with both of the former possibilities.
In any case, and especially concerning the second possibility of contraction, for $\left( t\rightarrow w(t)\right) \in C\left(\left[0,\infty\right)\times h^s\left({\mathbb Z}^n\right)\right)$ we may define for some $C>0$ (depending only on $\nu>0$, the dimension $n>0$, and the initial data components $h_i\in C^{\infty}\left({\mathbb T}^n\right)$) the norm
\begin{equation}
|w|^{\mbox{exp}}_{h^s ,C}:=\sup_{t\in [0,\infty)}\exp(-Ct)|w(t)|_{h^s},
\end{equation}
and the norm
\begin{equation}
|w|^{\mbox{exp},1}_{h^s, C}:=\sup_{t\in [0,\infty)}\exp(-Ct)\left( |w(t)|_{h^s}+|D_tw(t)|_{h^s}\right) ,
\end{equation}
where $D_tw(t):=\left(\frac{d}{dt}w(t)_{\alpha} \right)^T_{\alpha\in {\mathbb Z}^n}$ denotes the vector of componentwise derivatives with respect to time $t$.
Then, for the differences
\begin{equation}
\left( \delta\mathbf{v}^{m,F}_i\right)_{1\leq i\leq n} :=\left( \mathbf{v}^{r,m,F}_i-\mathbf{v}^{r,m-1,F}_i\right)_{1\leq i\leq n},
\end{equation}
a contraction property with respect to both norms can be proved for $s>n+2$ and $1\leq i\leq n$.
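The role of the exponential time weight can already be seen for a single hypothetical mode equation $v'=-\nu v+v^2$ (a toy substitute, not the Navier-Stokes hierarchy): linearizing at the previous iterate and measuring successive differences in the weighted sup-norm $\sup_t e^{-Ct}|\cdot|$ exhibits the contraction numerically.

```python
import math

nu, C, h0 = 1.0, 1.0, 0.2        # toy viscosity, weight, initial datum
dt, steps = 0.001, 2000          # Euler grid on [0, 2]

def solve_linearized(prev):
    # v' = -nu v + prev(t) v, v(0) = h0  (coefficient frozen at prev)
    v, traj = h0, []
    for k in range(steps):
        traj.append(v)
        v += dt * (-nu * v + prev[k] * v)
    return traj

def weighted_gap(a, b):
    # sup_t exp(-C t) |a(t) - b(t)| on the grid
    return max(math.exp(-C * k * dt) * abs(a[k] - b[k]) for k in range(steps))

iterates = [[0.0] * steps]       # start the iteration from zero
for m in range(4):
    iterates.append(solve_linearized(iterates[-1]))
gaps = [weighted_gap(iterates[m + 1], iterates[m]) for m in range(1, 4)]
```

The successive weighted gaps shrink geometrically, which is the discrete shadow of the contraction property stated above.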
Summarizing we have the following steps:
\begin{itemize}
\item[i)] In the first step we do some matrix analysis. First, the multiplication of the infinite matrices $A_0$ and $A_m$ with the infinite vector $\mathbf{h}^F$ at approximation stage $m\geq 0$ is well defined. Second, matrix multiplication in the matrix space $M^s_n$ is well-defined for $s>n\geq 2$, as is $\exp(M)$ for $M\in M^s_n$. Then we prove a dissipative Trotter product formula for finite systems and apply the result to the first infinite approximation system with solution $\mathbf{v}^{0,F}$ at stage $m=0$, which is related to a linear integro-partial differential equation.
\item[ii)] In the second step we set up an Euler-type scheme based on the Trotter-product formula which shows that
for some $\nu>0$ and $s>n+ 2$ the exponential
\begin{equation}
\mathbf{v}^{r,m,F}_i(t):=\left( T\exp\left(A^{r}_{m}t\right)\mathbf{h}^{F}\right)_i
\end{equation}
is well-defined for all $1\leq i\leq n$, where $\mathbf{v}^{r,m,F}_i(t)\in h^s_l\left({\mathbb Z}^n\right)$ for $t\geq 0$ and $1\leq i\leq n$ for all $m$, i.e., that the linearized equations for $\mathbf{v}^{r,m,F}_i$ which are equivalent to a linear partial integro-differential equation have a global solution. Here $(.)_i$ indicates that we project to the $i$th component of the $n$ infinite entries of the solution vector at the first stage of construction.
\item[iii)] For $\nu>0$ as in step ii) the limit
\begin{equation}
\mathbf{v}^{r,F}_i(t):=\left( T\exp\left(A^{r}_{\infty}t\right)\mathbf{h}^{F}\right)_i
\end{equation}
exists, where
\begin{equation}
\begin{array}{ll}
\mathbf{v}^{r,F}_i(t):=\lim_{m\uparrow \infty}\mathbf{v}^{m,F}_i(t)\\
\\
=\left( T\exp\left(A^{r}_{\infty}t\right)\mathbf{h}^{F}\right) _i:=\lim_{m\uparrow \infty}\left( T\exp\left(A^{r}_{m}t\right)\mathbf{h}^{F}\right) _i\in h^s_l\left({\mathbb Z}^n\right).
\end{array}
\end{equation}
This limit can be obtained by compactness arguments and by contraction results in time-weighted regular function spaces. The latter results lead to uniqueness of solutions in the time-weighted function space. In the third step we also consider stronger results concerning global upper bounds for auto-controlled schemes.
\end{itemize}
Concerning the third step (iii) we note that for all $t\geq 0$ and $x\in {\mathbb T}^n_l$ we have
\begin{equation}\label{vrm111}
v^{r,m}_j(t,x):=\sum_{\alpha\in {\mathbb Z}^n}v^{r,m}_{j\alpha}(t)\exp{(2\pi i\alpha x)}
\end{equation}
for all $1\leq j\leq n$. Hence, for fixed $t\geq 0$, the sequence $\left( v^{r,m,F}_j(t,.)\right)_{m\in {\mathbb N}}$ is bounded in a higher order Sobolev space $H^q$ with $q>s$ and therefore precompact in $H^s$ by the Rellich embedding theorem on compact manifolds. In this context recall that
\begin{thm}
For any $q>s$, $q,s\in {\mathbb R}$ and any compact manifold $M$ the embedding
\begin{equation}
j:H^q\left(M\right)\rightarrow H^s\left(M\right)
\end{equation}
is compact.
\end{thm}
For any $t\geq 0$ we have a limit $v^r_j(t,.)\in H^{s}\left({\mathbb T}^n\right)\subset C^m$ for $s>m+\frac{n}{2}$ and the fact that the control function $r$ is well-defined and continuous implies
\begin{equation}
v_j(t,.)=v^r_j(t,.)-r(t)\in C^m
\end{equation}
for fixed $t\geq 0$ and all $m\in {\mathbb N}$. Finally we verify that $v_j$ is indeed a classical solution of the original Navier-Stokes equation using the properties we proved for the vector $v^{r,F}$ in the dual formulation. One possibility to do this is to plug in the approximations of the solutions and estimate the remainder pointwise observing that it goes to zero in the strong norm $|w|^{\mbox{exp},1}_{h^s,C}$ for an appropriate constant $C>0$. Using an auto-controlled scheme we can strengthen the results and prove global uniform upper bounds without exponential weights with respect to time.
Note that for any of the alternative schemes the modes $v^{r}_{i\alpha},~1\leq i\leq n,~\alpha \in {\mathbb Z}^n$ determine classical (controlled) velocity functions $v^{m}_i,~1\leq i\leq n$ (respectively $v^{r,m}_i,~1\leq i\leq n$) which can be plugged into the incompressible Navier-Stokes equation in order to verify that the limit $m\uparrow \infty$ is indeed the solution ('the', as we have strictly positive viscosity). If we plug in the approximation of stage $m\geq 0$ we may use the fact that this approximation satisfies a linear partial integro-differential equation with coefficients determined in terms of the velocity function of the previous step of approximation. This implies that we have for $1\leq i\leq n$
\begin{equation}\label{navlerayco}
\begin{array}{ll}
\frac{\partial v^{m}_i}{\partial t}-\nu \Delta v^m_i+ \sum_{j=1}^nv^m_j \frac{\partial v^m_i}{\partial x_j} \\
\\
-\sum_{j,k=1}^n\int K_{,i}(x-y)v^m_{j,k}v^m_{k,j}(t,y)dy\\
\\
=-\sum_{j=1}^n\left( v^m_j-v^{m-1}_j\right) \frac{\partial v^m_i}{\partial x_j} \\
\\
-\sum_{j,k=1}^n\int K_{,i}(x-y)\left( v^m_{j,k}-v^{m-1}_{j,k}\right) v^m_{k,j}(t,y)dy,
\end{array}
\end{equation}
such that the construction of a Cauchy sequence $v^{m}_i,~1\leq i\leq n,~m\geq 0$ with $v^m_i(t,.)\in H^{s}$ for $s>n+2$ and with a uniform global upper bound $C$ in $H^s$ leads to global existence. Here Sobolev estimates for products may be used.
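The Cauchy-sequence mechanism behind (\ref{navlerayco}) can be imitated in a drastically reduced setting. The sketch below is an illustration only: a 1D viscous Burgers mode system with a small mode cutoff and no Leray term (all parameters are toy choices). It freezes the convection coefficient at the previous iterate, integrates the resulting linear mode ODEs, and checks that successive differences shrink.

```python
import math

nu, N = 0.2, 6                   # toy viscosity, mode cutoff |alpha| <= N
dt, steps = 0.001, 300           # Euler grid on [0, 0.3]
modes = range(-N, N + 1)
h = {a: 0.1 / (1.0 + a * a) for a in modes}   # toy initial modes

def rhs(u, w, a):
    # d u_a/dt = -nu (2 pi a)^2 u_a - sum_g 2 pi i g w_{a-g} u_g
    val = -nu * (2 * math.pi * a) ** 2 * u[a]
    for g in modes:
        if a - g in w:
            val -= 2 * math.pi * 1j * g * w[a - g] * u[g]
    return val

def solve_linearized(prev):
    # prev: list of mode dicts per time step (frozen coefficients)
    u, traj = dict(h), []
    for k in range(steps):
        traj.append(dict(u))
        u = {a: u[a] + dt * rhs(u, prev[k], a) for a in modes}
    return traj

zero = [{a: 0.0 for a in modes}] * steps
iterates = [zero]
for m in range(3):
    iterates.append(solve_linearized(iterates[-1]))
diffs = [max(abs(iterates[m + 1][k][a] - iterates[m][k][a])
             for k in range(steps) for a in modes) for m in range(1, 3)]
```

The sup-norm differences between consecutive iterates decrease, mirroring the construction of a Cauchy sequence of approximations in $H^s$.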
\section{Proof of theorem \ref{mainthm1}}
We consider data
$h_i\in C^{\infty}\left( {\mathbb T}^n_l\right) $ for $1\leq i\leq n$. The special domain of an $n$-torus has the advantage that we may represent approximations of a solution evaluated at a given time $t\geq 0$ in the form of Fourier mode coefficients with respect to a Fourier basis
\begin{equation}
\left\lbrace \exp\left( \frac{2\pi i\alpha x}{l}\right) \right\rbrace_{\alpha \in {\mathbb Z}^n},
\end{equation}
i.e., with respect to an orthonormal basis of $L^2\left({\mathbb T}^n_l \right)$. We consider a real basis below, but let us stick to the complex notation first. We shall see that the considerations of this proof can always be translated to analogous considerations in the real basis. It is natural to start with an expansion of the solution $\mathbf{v}=(v_1,\cdots ,v_n)^T$ in the form
\begin{equation}
\begin{array}{ll}
v_i(t,x)=\sum_{\alpha \in {\mathbb Z}^n}v_{i\alpha}\exp\left(\frac{2\pi i\alpha x}{l}\right),\\
\\
p(t,x)=\sum_{\alpha \in {\mathbb Z}^n}p_{\alpha}\exp\left(\frac{2\pi i\alpha x}{l}\right),
\end{array}
\end{equation}
where the modes $v_{i\alpha},~\alpha\in {\mathbb Z}^n$ and $p_{\alpha},~\alpha\in {\mathbb Z}^n$ depend on time.
Here we have an index $1\leq i\leq n$ which gives rise to some ambiguity with the complex unit $i$ in $2\pi i$, but the latter always occurs in the context of $\pi$ such that no confusion should arise.
Plugging this ansatz into the Navier-Stokes equation system (\ref{nav}) we formally get $n$ coupled infinite differential equations of ordinary type for the modes, i.e., for all $1\leq i\leq n$ we have for all $\alpha \in {\mathbb Z}^n$
\begin{equation}\label{navode}
\begin{array}{ll}
\frac{d v_{i\alpha}}{dt}=\nu\sum_{j=1}^n \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v_{j(\alpha-\gamma)}v_{i\gamma}\\
\\
-2\pi i\alpha_ip_{\alpha},
\end{array}
\end{equation}
where for each $1\leq i\leq n$ and each $\alpha\in {\mathbb Z}^n$
\begin{equation}
v_{i\alpha}:[0,\infty)\rightarrow {\mathbb R}
\end{equation}
are some time-dependent $\alpha$-modes of velocity, and
\begin{equation}
p_{\alpha}:[0,\infty)\rightarrow {\mathbb R}
\end{equation}
are some time-dependent $\alpha$-modes of pressure to be determined in terms of the velocity modes $v_{i\alpha}$.
Here, $(\alpha -\gamma)$ denotes the subtraction between multiindices, i.e.,
\begin{equation}
(\alpha -\gamma ) =(\alpha_1-\gamma_1,\cdots,\alpha_n-\gamma_n),
\end{equation}
where brackets are added for notational reasons in order to mark separate multiindices.
Next we may eliminate the pressure modes $p_{\alpha}$ using Leray projection, which shows that the pressure $p$ satisfies the Poisson equation
\begin{equation}\label{poisson1}
\Delta p=-\sum_{j,k}v_{j,k}v_{k,j},
\end{equation}
where we use the Einstein abbreviation for differentiation, i.e.,
\begin{equation}
v_{j,k}=\frac{\partial v_j}{\partial x_k}~\mbox{etc.}.
\end{equation}
The Poisson equation in (\ref{poisson1}) is re-expressed by an infinite equation system for the $\alpha$-modes of the form
\begin{equation}
p_{\alpha}\sum_{i=1}^n\frac{-4\pi^2\alpha_i^2}{l^2}=\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}\frac{4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{j\gamma}v_{k(\alpha-\gamma)}}{l^2}
\end{equation}
with the formal solution (w.l.o.g. $\alpha\neq 0$ -see below)
\begin{equation}\label{palpha}
p_{\alpha}=-1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{j\gamma}v_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2},
\end{equation}
for $\alpha \neq 0$; the expression is indeed independent of the size of the torus $l$. This means that the pressure term becomes large compared to the second order terms and the convection term for large tori ($l>0$ large while the viscosity $\nu >0$ stays fixed). This may happen for approximations of Cauchy problems by equations on large tori for the purpose of simulation. Note that
\begin{equation}
1_{\left\lbrace \alpha\neq 0\right\rbrace}:= \left\lbrace \begin{array}{ll}
1 \mbox{ if }\alpha\neq 0,\\
\\
0 \mbox{ else}.
\end{array}\right.
\end{equation}
Note that we put $p_{0}=0$ for $\alpha=0$ in (\ref{palpha}). We are free to do so since $(\mathbf{v},p+C)$ is a solution of (\ref{nav}) if $(\mathbf{v},p)$ is.
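To illustrate the claim that the pressure modes do not depend on the torus size, the following sketch (hypothetical finite mode data for $n=2$, not required to be divergence-free) evaluates the $l$-dependent version of the Poisson mode relation and confirms that $l$ cancels in (\ref{palpha}).

```python
import math

# toy finite velocity mode set for n = 2: v[j][(g1, g2)]
G = [(g1, g2) for g1 in range(-2, 3) for g2 in range(-2, 3)]
v = [{g: 0.1 / (1 + g[0] ** 2 + 2 * g[1] ** 2) for g in G} for _ in range(2)]

def pressure_mode(alpha, l):
    # p_alpha from the Poisson mode system written for torus size l;
    # numerator and denominator each carry a factor 1/l^2, so l cancels
    num = 0.0
    for j in range(2):
        for k in range(2):
            for g in G:
                rest = (alpha[0] - g[0], alpha[1] - g[1])
                if rest in G:
                    num += (4 * math.pi ** 2 / l ** 2) * g[j] \
                        * (alpha[k] - g[k]) * v[j][g] * v[k][rest]
    den = sum((4 * math.pi ** 2 / l ** 2) * a * a for a in alpha)
    return -num / den

alpha = (1, 2)
p1 = pressure_mode(alpha, 1.0)
p3 = pressure_mode(alpha, 3.0)
```

Both evaluations agree up to floating-point error, matching the remark that $p_{\alpha}$ is independent of $l$.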
Plugging into (\ref{navode}) we get
for all $1\leq i\leq n$, and all $\alpha \in {\mathbb Z}^n$
\begin{equation}\label{navode2**}
\begin{array}{ll}
\frac{d v_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v_{j(\alpha-\gamma)}v_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{j\gamma}v_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}.
\end{array}
\end{equation}
For $1\leq i\leq n$, this is a system of nonlinear ordinary differential equations of 'infinite dimension', i.e., for the infinite families of modes $v_{i\alpha}$ and $p_{\alpha}$ along with $\alpha \in {\mathbb Z}^n$. Next we rewrite the system in the real basis for torus size $l=1$, i.e., in the basis
\begin{equation}\label{realbasis}
\cos\left(2\pi\alpha x\right),~\sin\left(2\pi\alpha x\right),~\alpha\in {\mathbb N}^n,
\end{equation}
where ${\mathbb N}$ denotes the set of nonnegative integers, i.e., $0\in {\mathbb N}$.
In this variation of representation on $[0,l]^n$ with periodic boundary condition we start with an expansion of the solution $\mathbf{v}^{re}=(v^{re}_1,\cdots ,v^{re}_n)^T$ in the form
\begin{equation}
\begin{array}{ll}
v^{re}_i(t,x)=\sum_{\alpha \in {\mathbb N}^n}v^{re}_{ci\alpha}\cos\left(2\pi\alpha x\right)+\sum_{\alpha \in {\mathbb N}^n}v^{re}_{si\alpha}\sin\left(2\pi\alpha x\right),\\
\\
p^{re}(t,x)=\sum_{\alpha \in {\mathbb N}^n}p^{re}_{c\alpha}\cos\left(2\pi\alpha x\right)+\sum_{\alpha \in {\mathbb N}^n}p^{re}_{s\alpha}\sin\left(2\pi\alpha x\right),
\end{array}
\end{equation}
where the modes $v^{re}_{ci\alpha},~v^{re}_{si\alpha},~\alpha\in {\mathbb N}^n$ and $p^{re}_{c\alpha},~p^{re}_{s\alpha},~\alpha\in {\mathbb N}^n$ depend on time. The lower indices $c$ and $s$ indicate that the mode is a coefficient in a cosine series and a sine series respectively. The size $l=1$ is chosen just in order to simplify the notation; for numerical purposes we can of course write the real system with respect to an arbitrary size of the torus as well.
Plugging this ansatz into the Navier-Stokes equation system (\ref{nav}) we formally get again $n$ coupled infinite differential equations of ordinary type for the modes, but this time the nonlinear terms produce products of sine and cosine functions in a first step, which we then re-express via the product-to-sum identities below. For all $1\leq i\leq n$ we have for all $\alpha \in {\mathbb N}^n$
\begin{equation}\label{aa}
\begin{array}{ll}
\frac{d v^{re}_{ci\alpha}}{dt}\cos(2\pi \alpha x)+\frac{d v^{re}_{si\alpha}}{dt}\sin(2\pi \alpha x)=\nu\sum_{j=1}^n \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v^{re}_{ci\alpha}\cos(2\pi\alpha x)\\
\\
+\nu\sum_{j=1}^n \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v^{re}_{si\alpha}\sin(2\pi\alpha x)\\
\\
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb N}^n,\gamma\leq \alpha}2\pi \gamma_jv^{re}_{cj(\alpha-\gamma)}v^{re}_{ci\gamma}\frac{1}{2}\sin(2\pi\alpha x)\\
\\
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb N}^n,\gamma\geq 0}2\pi \gamma_jv^{re}_{cj(\alpha+\gamma)}v^{re}_{ci\gamma}\frac{1}{2}\sin(2\pi\alpha x)\\
\\
+\sum_{j=1}^n\sum_{\gamma \in {\mathbb N}^n,\gamma\leq \alpha}
2\pi \gamma_jv^{re}_{sj(\alpha-\gamma)}v^{re}_{ci\gamma}\frac{1}{2}\cos(2\pi\alpha x)\\
\\
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb N}^n,\gamma\geq 0}
2\pi \gamma_jv^{re}_{sj(\alpha+\gamma)}v^{re}_{ci\gamma}\frac{1}{2}\cos(2\pi\alpha x)\\
\\
+\sum_{j=1}^n\sum_{\gamma \in {\mathbb N}^n,\gamma\leq \alpha}
2\pi \gamma_jv^{re}_{cj(\alpha-\gamma)}v^{re}_{si\gamma}\frac{1}{2}\sin(2\pi\alpha x)\\
\\
+\sum_{j=1}^n\sum_{\gamma \in {\mathbb N}^n,\gamma\geq 0}
2\pi \gamma_jv^{re}_{cj(\alpha+\gamma)}v^{re}_{si\gamma}\frac{1}{2}\sin(2\pi\alpha x)\\
\\
+\sum_{j=1}^n\sum_{\gamma \in {\mathbb N}^n,\gamma\leq \alpha}
2\pi \gamma_jv^{re}_{sj(\alpha-\gamma)}v^{re}_{si\gamma}\frac{1}{2}\sin(2\pi\alpha x)\\
\\
+\sum_{j=1}^n\sum_{\gamma \in {\mathbb N}^n,\gamma\geq 0}
2\pi \gamma_jv^{re}_{sj(\alpha +\gamma)}v^{re}_{si\gamma}\frac{1}{2}\sin(2\pi\alpha x)\\
\\
+2\pi \alpha_ip^{re}_{c\alpha}\sin(2\pi\alpha x)-2\pi \alpha_ip^{re}_{s\alpha}\cos(2\pi\alpha x),
\end{array}
\end{equation}
where we assume that in the sums '$\leq$' denotes an appropriate ordering (for example the lexicographic one) of ${\mathbb N}^n$, and where we use
\begin{equation}\label{relsc}
\sin\left( 2\pi \beta x\right) \cos\left( 2\pi \gamma x\right)=\frac{1}{2}\sin\left( 2\pi (\beta+\gamma) x\right)+\frac{1}{2}\sin\left( 2\pi (\beta -\gamma)x\right),
\end{equation}
\begin{equation}\label{relss}
\sin\left( 2\pi \beta x\right) \sin\left( 2\pi \gamma x\right)=-\frac{1}{2}\cos\left( 2\pi (\beta+\gamma) x\right)+\frac{1}{2}\cos\left( 2\pi (\beta-\gamma)x\right) ,
\end{equation}
and
\begin{equation}\label{relcc}
\cos\left( 2\pi \beta x\right) \cos\left( 2\pi \gamma x\right)=\frac{1}{2}\cos\left( 2\pi (\beta+\gamma) x\right)+\frac{1}{2}\cos\left( 2\pi (\beta -\gamma)x\right).
\end{equation}
These identities re-express all products so that we obtain a system with respect to the full real basis.
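Since the identities (\ref{relsc})-(\ref{relcc}) drive the whole real-basis bookkeeping, it is worth checking them once; the following snippet (illustration only) verifies all three at a few sample frequencies and points.

```python
import math

def close(x, y):
    return abs(x - y) < 1e-12

ok = True
for beta in (0.0, 1.0, 3.0):
    for gamma in (0.0, 2.0, 5.0):
        for x in (0.0, 0.17, 0.43, 0.9):
            s, c = math.sin, math.cos
            tp = 2 * math.pi
            # sin * cos = (sin(sum) + sin(difference)) / 2
            ok &= close(s(tp * beta * x) * c(tp * gamma * x),
                        0.5 * s(tp * (beta + gamma) * x)
                        + 0.5 * s(tp * (beta - gamma) * x))
            # sin * sin = (-cos(sum) + cos(difference)) / 2
            ok &= close(s(tp * beta * x) * s(tp * gamma * x),
                        -0.5 * c(tp * (beta + gamma) * x)
                        + 0.5 * c(tp * (beta - gamma) * x))
            # cos * cos = (cos(sum) + cos(difference)) / 2
            ok &= close(c(tp * beta * x) * c(tp * gamma * x),
                        0.5 * c(tp * (beta + gamma) * x)
                        + 0.5 * c(tp * (beta - gamma) * x))
```

All checks pass within floating-point tolerance, confirming the three identities as used in (\ref{aa}).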
Note that for each $1\leq i\leq n$ and each $\alpha\in {\mathbb Z}^n$
\begin{equation}
v^{re}_{ci\alpha}:[0,\infty)\rightarrow {\mathbb R},~v^{re}_{si\alpha}:[0,\infty)\rightarrow {\mathbb R}
\end{equation}
are some time-dependent $\alpha$-modes of velocity, and
\begin{equation}
p^{re}_{c\alpha},p^{re}_{s\alpha}:[0,\infty)\rightarrow {\mathbb R}
\end{equation}
are some time-dependent $\alpha$-modes of pressure to be determined in terms of the velocity modes $v^{re}_{ci\alpha}$ and $v^{re}_{si\alpha}$.
Next we determine these real pressure modes $p^{re}_{c\alpha}$ and $p^{re}_{s\alpha}$. The Poisson equation in (\ref{poisson1}) is re-expressed by an infinite equation system for the real $\alpha$-modes of the form
\begin{equation}
\begin{array}{ll}
p^{re}_{c\alpha}\sum_{i=1}^n(-4\pi^2\alpha_i^2)\cos(2\pi \alpha x)+p^{re}_{s\alpha}\sum_{i=1}^n(-4\pi^2\alpha_i^2)\sin(2\pi \alpha x)\\
\\
=-\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\leq \alpha}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{sj\gamma}v_{sk(\alpha-\gamma)}\frac{1}{2}\cos(2\pi \alpha x)\\
\\
-\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\geq 0}4\pi^2 \gamma_j(\alpha_k+\gamma_k)v_{sj\gamma}v_{sk(\alpha+\gamma)}\frac{1}{2}\cos(2\pi \alpha x)\\
\\
+2\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\leq \alpha}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{sj\gamma}v_{ck(\alpha-\gamma)}\frac{1}{2}\sin(2\pi \alpha x)\\
\\
+2\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\geq 0}4\pi^2 \gamma_j(\alpha_k+\gamma_k)v_{sj\gamma}v_{ck(\alpha+\gamma)}\frac{1}{2}\sin(2\pi \alpha x)\\
\\
-\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\leq \alpha}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{cj\gamma}v_{ck(\alpha-\gamma)}\frac{1}{2}\sin(2\pi \alpha x)\\
\\
-\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\geq 0}4\pi^2 \gamma_j(\alpha_k+\gamma_k)v_{cj\gamma}v_{ck(\alpha +\gamma)}\frac{1}{2}\sin(2\pi \alpha x).
\end{array}
\end{equation}
Hence (w.l.o.g. $\alpha\neq 0$ -see below)
\begin{equation}\label{pcalpha}
\begin{array}{ll}
p^{re}_{c\alpha}=\frac{1}{2}1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\leq \alpha}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{sj\gamma}v_{sk(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}\\
\\
+\frac{1}{2}1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\geq 0}4\pi^2 \gamma_j(\alpha_k+\gamma_k)v_{sj\gamma}v_{sk(\alpha+\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}
\end{array}
\end{equation}
for the cosine modes, and
\begin{equation}\label{psalpha}
\begin{array}{ll}
p^{re}_{s\alpha}=-1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\leq \alpha}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{sj\gamma}v_{ck(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}\\
\\
-1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\geq 0}4\pi^2 \gamma_j(\alpha_k+\gamma_k)v_{sj\gamma}v_{ck(\alpha+\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}\\
\\
+\frac{1}{2}1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\leq \alpha}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{cj\gamma}v_{ck(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}\\
\\
+\frac{1}{2}1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\geq 0}4\pi^2 \gamma_j(\alpha_k+\gamma_k)v_{cj\gamma}v_{ck(\alpha+\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}
\end{array}
\end{equation}
for the sine modes, for all $\alpha \neq 0$. Note that $p^{re}_{s\alpha}$ and $p^{re}_{c\alpha}$ are indeed independent of the size of the torus $l$. The equations in (\ref{psalpha}), (\ref{pcalpha}), and (\ref{aa}) determine an equation for the infinite vector of real modes
\begin{equation}
\left(v^{re}_{si\alpha},v^{re}_{ci\alpha} \right)_{\alpha\in {\mathbb N}^n,1\leq i\leq n}
\end{equation}
with alternating sine (subscript $s$) and cosine (subscript $c$) entries.
Let us consider the relation of this real representation to the complex representation, which we use in the following because of its succinct form. This relation is one to one, of course, as both representations of functions and equations are dual representations of classical functions and equations. In particular we can rewrite every equation, function, and matrix operation in complex notation in the real notation. This ensures that the operations produce real solutions. There is one issue here which should be emphasized: the estimates for matrix operations in the complex notation transfer directly to analogous estimates for the corresponding matrix operations related to the real system above, although the matrix operations look partly different. Consider the last two terms in (\ref{psalpha}) for example. We can interpret these terms as finite and infinite matrix operations
\begin{equation}\label{pmult1}
\sum_{\gamma\in {\mathbb N}^n,\gamma\leq \alpha}m^-_{ck\alpha\gamma}v_{cj\gamma}
\end{equation}
and
\begin{equation}\label{pmult2}
\sum_{\gamma\in {\mathbb N}^n,\gamma\geq 0}m^+_{ck\alpha\gamma}v_{cj\gamma},
\end{equation}
where the matrix elements in (\ref{pmult1}) and (\ref{pmult2}) are defined by equivalence of the matrix-vector multiplication with the pressure terms in
\begin{equation}\label{psalpha1}
\begin{array}{ll}
+\frac{1}{2}1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\leq \alpha}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v_{cj\gamma}v_{ck(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}\\
\\
+\frac{1}{2}1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb N}^n,\gamma\geq 0}4\pi^2 \gamma_j(\alpha_k+\gamma_k)v_{cj\gamma}v_{ck(\alpha+\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}
\end{array}
\end{equation}
respectively. The upper bound estimates for these infinite matrix operations are similar to those for the infinite matrix operations in complex notation. Note that the matrix operations can indeed be made equivalent by recounting. Furthermore, the finite matrix operations do not alter any convergence properties of the modes for a single matrix operation. For iterated matrix operations they can be incorporated naturally into the Trotter product formulas below.
We may take (\ref{navode2**}) as a shorthand notation for an equivalent equation with respect to the full real basis. The equations written in the real basis have structural features which allow us to transfer certain estimates observed for the equations in the complex notation to the equations in the real basis notation. Each step below done in the complex notation is easily transferred to the real notation. However, the complex notation is more convenient, because we do not have two equations for each mode. Therefore, we use the complex notation, and remark that the real system has the same relevant features for the argument below. Note that the equation (\ref{navode2**}) has the advantage that this approach leads to more general observations concerning the relations of nonlinear PDEs and infinite nonlinear ODEs. However, it is quite obvious that a similar argument holds for the real system above as well, such that we can ensure that we are constructing real solutions, although our notation is complex. This means that we construct real solutions $v_{i\alpha}$ with $v_{i\alpha}(t)\in {\mathbb R}$. Note that it suffices to prove that there is a regular solution to (\ref{navode2**}) and to the corresponding system explicitly written in the real basis, because this translates into a classical regular solution of the incompressible Navier-Stokes equation on the $n$-torus, and this implies the existence of a classical solution of the incompressible Navier-Stokes equation in its original formulation, i.e., without elimination of the pressure. We provided an argument for this well-known implication at the end of section 1. This is not the controlled scheme which we considered in the first section. However, this scheme is identical with the controlled scheme at the first stage $m=0$, so we start with some general considerations which apply to this first stage and introduce the control function later.
So let us look at the simple scheme in more detail now.
The first approximation
is the system
\begin{equation}\label{navode3}
\begin{array}{ll}
\frac{d v^0_{i\alpha}}{dt}=\sum_{j=1}^n \left( -\nu\frac{4\pi \alpha_j^2}{l^2}\right)v^0_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}h_{j(\alpha-\gamma)}v^0_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v^0_{j\gamma}h_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}.
\end{array}
\end{equation}
Let
\begin{equation}
\mathbf{v}^{0,F}=(\mathbf{v}^{0,F}_1,\cdots,\mathbf{v}^{0,F}_n),~\mathbf{v}^{0,F}_i:=\left(v^0_{i\alpha}\right)^T_{\alpha \in {\mathbb Z}^n}.
\end{equation}
Formally, we consider $\mathbf{v}^{0,F}_i$ as an infinite vector, where the upper script $T$ in the componentwise description indicates transposition.
Then equation (\ref{navode3}) may be formally rewritten in the form
\begin{equation}\label{navode4}
\frac{d \mathbf{v}^{0,F}_i}{dt}=A^i_0\mathbf{v}^{0,F}_i+\sum_{j\neq i}L^0_{ij}\mathbf{v}^{0,F}_j,
\end{equation}
with the infinite matrix $A^i_0=\left(a^{i0}_{\alpha\beta} \right)_{\alpha,\beta \in {\mathbb Z}^n}$ and
the entries
\begin{equation}\label{matrix0}
\begin{array}{ll}
a^{i0}_{\alpha\beta}=\delta_{\alpha\beta}\nu\left(-\sum_{j=1}^n\frac{4\pi \alpha_j^2}{l^2} \right)-\sum_{j=1}^n\frac{2\pi i\beta_jh_{j(\alpha-\beta)}}{l}+L^0_{ii\alpha\beta},
\end{array}
\end{equation}
and where
\begin{equation}
L^0_{ij\alpha\beta}=1_{\left\lbrace \alpha\neq 0\right\rbrace }2\pi i\alpha_i\frac{\sum_{k=1}^n 4\pi^2 \beta_j(\alpha_k-\beta_k)h_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi^2 \alpha_i^2}
\end{equation}
for all $1\leq i,j\leq n$
denote coupling terms related to the Leray projection term.
As we observed in the preceding section we may write (\ref{navode4}) in the form
\begin{equation}\label{navode4big}
\frac{d \mathbf{v}^{0,F}}{dt}=A_0\mathbf{v}^{0,F},
\end{equation}
where
\begin{equation}\label{matrix0*}
A_0\mathbf{v}^{0,F}:=
\left( \left( \sum_{j\in \left\lbrace 1,\cdots ,n\right\rbrace ,\beta\in {\mathbb Z}^n}
\left( \left( \delta_{ij} a^{i0}_{\alpha\beta}\right)
+\overline{\delta}_{ij}L^0_{ij\alpha\beta}\right)v^0_{j\beta}\right)^T_{\alpha \in {\mathbb Z}^n}\right)^T_{ 1\leq i\leq n},
\end{equation}
where we have to insert
\begin{equation}
\overline{\delta}_{ij}=(1-\delta_{ij}),
\end{equation}
since the diagonal terms $L^{0}_{ii}$ are in $a^{i0}$ already,
and where $A_0=\left(A^{ij}_0\right) $ can be considered as a quadratic matrix with $n^2$ infinite matrix entries
\begin{equation}
A^{ij}_0=A^i_0 \mbox{ for }~i=j,
\end{equation}
and
\begin{equation}
A^{ij}_0=L^0_{ij} \mbox{ for }~i\neq j.
\end{equation}
Note that these matrices which determine the first linear approximation matrix $A_0$ are not time-dependent.
The modes $a^{i0}_{\alpha\beta},v^0_{i\beta}$, or at least one of these sets of modes, have to decay appropriately as $|\alpha|,|\beta|\uparrow \infty$ in order for the definition in (\ref{matrix0*}) to make sense, i.e., to lead to finite results in appropriate norms which show that the infinite set of modes in $A_0\mathbf{v}^{0,F}$ belongs to a regular function in classical space.
Since the matrix $A_0$ has constant entries, the formal solution of (\ref{navode4}) is
\begin{equation}\label{sol01}
\mathbf{v}^{0,F}=\exp\left(A_0t\right)\mathbf{h}^{F}.
\end{equation}
As we have positive viscosity $\nu >0$ for the Navier-Stokes equation, even the fundamental solution (fundamental matrix)
\begin{equation}
\exp\left(A_0t\right)
\end{equation}
can be defined rigorously by a Trotter product formula for regular input $\mathbf{h}^{F}$ in the matrix $A_0$. Note that the rows of a finite Trotter approximation of the fundamental matrix are as in (\ref{typent}) below, so that every row of this approximation lives in a regular space $h^s\left({\mathbb Z}^n\right)$ for $s\geq n+2$ if the data $\mathbf{h}^{F}$ in the matrix $A_0$ are in that space. This behavior is preserved as we go to the Trotter product limit. However, for the sake of global existence the weaker observation that we can make sense of (\ref{sol01}) is essential. Note here that for fixed $\alpha$ and regular $h_j$ (i.e., $h_j$ has polynomially decaying modes) expressions are proportional to
\begin{equation}\label{typent}
\exp\left( -\nu\sum_{i=1}^n\alpha^2_it\right)\exp\left(\gamma_j h_{j(\alpha-\gamma)}\right) \rightarrow \exp\left( -\nu\sum_{i=1}^n\alpha^2_it\right) ~\mbox{as}~|\gamma|\uparrow \infty,
\end{equation}
such that the matrix multiplication with regular data in $h^s\left({\mathbb Z}^n\right)$ for $s>n+2$ inherits regularity of this order.
Hence it makes sense to use the dissipative feature of the diagonal terms and a Trotter-type product formula (otherwise we are in trouble, because the modulus of the diagonal entries increases with the order of the modes, as we remarked before).
Well, in this paper we stress the constructive point of view and the algorithmic perspective, and by this we mean that we approximate infinite mode systems by finite systems on the computer, where analysis shows how the limits behave. This gives global existence and a computation scheme at the same time. We have seen various ingredients of analysis which allow us to observe how close we are to the 'real' limit solution. One aspect is the polynomial decay of modes, and it is clear that finite approximating ODE systems approximate the better the stronger the polynomial decay is. Infinite matrix multiplication is related to weakly singular elliptic integrals and controls the quality of this approximation. Another aspect is the time approximation. Here we may use the Dyson formalism for the Euler part of the equation (in the Trotter product) in order to obtain higher order schemes.
For the first approximation we may solve the equation (\ref{sol01}) by approximations via systems of finite modes, i.e., via
\begin{equation}
P_{v^l}\mathbf{v}^{0,F;*}_i=\exp\left(P_{M^l}A_0t\right)P_{v^l}\mathbf{h}^{F}
\end{equation}
where $P_{v^l}$ and $P_{M^l}$ denote projections of vectors and matrices to finite cut-off vectors and finite cut-off matrices of modes of order less than or equal to $l$, i.e. modes with multiindices $\alpha=(\alpha_1,\cdots ,\alpha_n)$ which satisfy $|\alpha|=\sum_{i=1}^n|\alpha_i|\leq l$.
Note that
we used the notation
\begin{equation}
P_{v^l}\mathbf{v}^{0,F;*}_i
\end{equation}
with a star superscript on the velocity approximation $\mathbf{v}^{0,F;*}_i$, since the projection of the solution of the infinite system is not equal to the solution of the finite projected system in general. The latter is an approximation of the former, and they become equal in the limit as the projections of the infinite system become the identity.
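This approximation behavior can be illustrated numerically in a toy one-dimensional setting. The following Python sketch is purely illustrative and not the paper's scheme: the cut-off parameter, decay exponents, viscosity, and the helper `expm_ss` are our own choices. It solves the projected linear systems for growing cut-off $l$ and checks that the low modes of the solutions stabilise.

```python
import numpy as np

def expm_ss(M, terms=25):
    # Taylor-series matrix exponential with scaling and squaring;
    # adequate for this small illustrative example, not a production expm.
    nrm = np.linalg.norm(M, np.inf)
    s = max(0, int(np.ceil(np.log2(nrm))) + 1) if nrm > 0 else 0
    E = np.eye(M.shape[0]); T = np.eye(M.shape[0]); Ms = M / 2.0 ** s
    for k in range(1, terms):
        T = T @ Ms / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def cutoff_solution(l, t=0.5, nu=1.0):
    # cut-off system: dissipative diagonal plus polynomially decaying coupling
    idx = np.arange(-l, l + 1)
    dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
    A = -nu * np.diag(idx.astype(float) ** 2) + 0.5 / (1.0 + dist ** 3)
    h = 1.0 / (1.0 + np.abs(idx).astype(float) ** 3)   # regular (decaying) data
    return idx, expm_ss(A * t) @ h

idx_ref, v_ref = cutoff_solution(60)                   # fine cut-off as reference
errs = []
for l in (10, 20, 40):
    idx, v = cutoff_solution(l)
    sel = np.isin(idx, np.arange(-5, 6))               # compare shared low modes
    sel_ref = np.isin(idx_ref, np.arange(-5, 6))
    errs.append(np.max(np.abs(v[sel] - v_ref[sel_ref])))
# the low modes of the projected solutions converge as the cut-off l grows
```

The decay of the coupling entries is what makes the influence of the discarded high modes on the retained low modes small, in line with the remarks on weakly singular integrals below.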
Indeed, it is the order of polynomial decay of the modes, together with some properties of infinite matrix-vector multiplications related to the growth behavior of weakly singular integrals, which makes it possible to show that
\begin{equation}\label{v0finf}
\begin{array}{ll}
\mathbf{v}^{0,F}_i=\lim_{l\uparrow \infty}P_{v^l}\mathbf{v}^{0,F;*}_i=\lim_{l\uparrow \infty}\lim_{k\uparrow \infty}{\Bigg(} \exp\left(P_{M^l}\left( \delta_{ij}D^0\right) \frac{t}{k}\right)\times\\
\\
\times \exp\left(\left( P_{M^l}\left( \delta_{ij}C^0+L^0_{ij}\right) \right) \frac{t}{k}\right){\Bigg )}^kP_{v^l}\mathbf{h}^{F}_i.
\end{array}
\end{equation}
Note that we even have $\exp\left(\left( \delta_{ij}D^0\right) \frac{t}{k}\right) \exp\left(\left( \left( \delta_{ij}C^0+L^0_{ij}\right) \right)\frac{t}{k}\right)_{ij}\in M^s_n$ (although we do not need this strong behavior). Here the symbol $\left( .\right)_{ij}$ indicates projection onto the infinite ${\mathbb Z}^n\times {\mathbb Z}^n$-block at the $i$th row and the $j$th column of the matrix $A_0$. The right side of (\ref{v0finf}) is well-defined anyway, due to the matrix $\exp\left(\left( \delta_{ij}D^0\right) \frac{t}{k}\right)$, which has the effect of multiplying each row by a function which is exponentially decreasing with respect to time and with respect to the order of the modes, and due to the regularity (order of polynomial decay) of the vector $\mathbf{h}^{F}_i$. The more delicate point is to prove that the Trotter-type approximation converges to the (approximative) solution at the higher order stages $m\geq 1$ of the iterative construction, where we have time-dependent convection and Leray projection terms. We come back to this later in the proof of the dissipative Trotter product formula. Let us start with the description of the other stages $m>0$ of the construction first.
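The inner (Trotter) limit can also be probed numerically on a toy cut-off system. The sketch below is an illustration under assumed parameters, not the paper's construction: a dissipative diagonal block $D$ stands in for $\delta_{ij}D^0$, a bounded block with polynomially decaying entries stands in for the convection and Leray coupling, and `expm_ss` is our own helper.

```python
import numpy as np

def expm_ss(M, terms=30):
    # Taylor-series matrix exponential with scaling and squaring;
    # adequate for this small toy example, not a production expm.
    nrm = np.linalg.norm(M, np.inf)
    s = max(0, int(np.ceil(np.log2(nrm))) + 1) if nrm > 0 else 0
    E = np.eye(M.shape[0]); T = np.eye(M.shape[0]); Ms = M / 2.0 ** s
    for k in range(1, terms):
        T = T @ Ms / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

# toy cut-off: modes alpha = -l..l in dimension one
l, nu, t = 8, 1.0, 0.5
alphas = np.arange(-l, l + 1)
D = -nu * np.diag(alphas.astype(float) ** 2)            # dissipative diagonal block
dist = np.abs(alphas[:, None] - alphas[None, :]).astype(float)
C = 1.0 / (1.0 + dist ** 3)                             # bounded decaying coupling block
h = 1.0 / (1.0 + np.abs(alphas).astype(float) ** 3)     # regular data (decaying modes)

exact = expm_ss((D + C) * t) @ h
errs = []
for k in (4, 16, 64):
    step = expm_ss(D * t / k) @ expm_ss(C * t / k)
    errs.append(np.max(np.abs(np.linalg.matrix_power(step, k) @ h - exact)))
# errs shrinks roughly like 1/k, as expected for first-order Lie-Trotter splitting
```

Note that only the split factor $\exp(Dt/k)$ is ever evaluated on the unbounded diagonal part, which is the point of the dissipative Trotter approach.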
At stage $m\geq 1$ having computed $\mathbf{v}^{m-1,F}_i=\left( v^{m-1}_{i\alpha}\right)^T_{\alpha\in {\mathbb Z}^n}$ the $m$th approximation
is computed via the system
\begin{equation}\label{navode3m**}
\begin{array}{ll}
\frac{d v^m_{i\alpha}}{dt}=\sum_{j=1}^n \left( -\nu\frac{4\pi \alpha_j^2}{l^2}\right)v^m_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{m-1}_{j(\alpha-\gamma)}v^m_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v^{m}_{j\gamma}v^{m-1}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}
\end{array}
\end{equation}
with the same initial data, of course.
We have
\begin{equation}\label{proofodem}
\mathbf{v}^{m,F}_i:=\left(v^m_{i\alpha}\right)^T_{\alpha \in {\mathbb Z}^n},
\end{equation}
and the equation (\ref{navode3m**}) may be formally rewritten in the form
\begin{equation}\label{navode4m}
\frac{d \mathbf{v}^{m,F}}{dt}=A_m\mathbf{v}^{m,F},
\end{equation}
with the infinite matrix $A_m=\left(A^{ij}_m\right)=\left( a^{ijm}_{\alpha\beta}\right)$ with (time-dependent!) entries, where $A^{ii}_m=D^0+C^m_{ii}+L^m_{ii}$ for all $1\leq i\leq n$, and $A^{ij}_m=L^m_{ij}$ for $i\neq j,~1\leq i,j\leq n$, where for all $i\neq j$ we have
\begin{equation}
\begin{array}{ll}
L^m_{ij\alpha\beta}=2\pi i\alpha_i\frac{\sum_{k=1}^n 4\pi^2 \beta_j(\alpha_k-\beta_k)v^{m-1}_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi^2 \alpha_i^2}.
\end{array}
\end{equation}
The matrix $C^m_{ii}$ related to the convection term $-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{m-1}_{j(\alpha-\gamma)}v^m_{i\gamma}$ is defined analogously as in the first stage but with coefficients $v^{m-1}_{i\alpha}$ instead of $h_{i\alpha}$.
Again, the equation (\ref{navode3m**}) makes sense for regular data and inductively assumed regular coefficient functions $v^{m-1}_i$, because the polynomial decay of the modes $h_{i\alpha}$ and $v_{i\alpha}$ compensates the quadratic growth encoded by the derivatives. For a solution, however, we have to make sense of exponential functions with exponents $a^{ijm}_{\alpha\beta}$ defined in terms of modes $v^{m-1}_{i\beta}$ and applied to data $\mathbf{h}^F$. Again we shall use a dissipative Trotter product formula in order to have appropriate decay as $|\alpha|,|\beta|\uparrow \infty$ for the solution modes $v^m_{i\alpha}$. However, this time we have time-dependent coefficients, and we need a subscheme at each stage $m$ of the construction in order to deal with them; or, at least, a subscheme is one solution to this problem. An alternative is a direct time-local application of the Dyson formalism to the Euler part (the equation without viscosity), including the regular data. Note that we need the regular data then, because there is no fundamental matrix for the Euler equation. Furthermore, it is better to work time-locally, since solutions of the Euler equation are globally not unique (and can even be singular). These remarks remind us of the advantages of a simple Euler scheme, where we do not meet these problems.
The formal solution of (\ref{navode4m}) is
\begin{equation}\label{solm}
\mathbf{v}^{m,F}=T\exp\left(A_mt\right)\mathbf{h}^{F},
\end{equation}
where $T\exp(.)$ involves the Dyson-type time-ordering operator $T$ defined above.
Note that for all $m\geq 1$ the functions $\mathbf{v}^{m,F}_i,~1\leq i\leq n$ represent formal solutions of partial integro-differential equations if we rewrite them in the original space. Again, in order to make sense of them we shall use the dissipative feature and a Trotter-type product formula at each substage, which is a natural time discretization. It seems that even at this stage we need viscosity $\nu >0$ in order to obtain solutions of these linear equations in this dual context. This assumption is also needed when we consider the limit $m\uparrow \infty$. We shall estimate at each stage $m$ of the construction
\begin{equation}\label{solmh}
\mathbf{v}^{m,F}=T\exp\left(A_mt\right)\mathbf{h}^{F}
\end{equation}
based on the Trotter product formula where we set up an Euler-type scheme in order to deal with time-dependent infinite matrices in the limit of substages at each stage $m$.
Then we shall also discuss the alternative of a direct application of the Dyson formalism to the Euler part. Higher order time discretization schemes can be based on this, but this is postponed to the next section on algorithms.
Next we go into the details of this plan. Let us consider some linear algebra of time-independent infinite matrices with fast decaying entries (which can be applied directly at the stage $m=0$). We consider the complex situation first; the relevant structural conditions for the real systems are analogous and are stated as corollaries. The matrices of the 'complex' scheme considered above are $\left( n\times {\mathbb Z}^n\right) \times \left( n\times {\mathbb Z}^n\right)$-matrices, but - for the sake of simplicity of notation - we shall consider some matrix algebra for ${\mathbb Z}^n \times {\mathbb Z}^n$-matrices. The considerations can easily be adapted to the formally more complicated case.
First let $D=\left(d_{\alpha\beta}\right)_{\alpha\beta\in {\mathbb Z}^n}$ and $E=\left(e_{\alpha\beta}\right)_{\alpha\beta\in {\mathbb Z}^n}$ be two infinite matrices, and define (formally)
\begin{equation}\label{matrixdot}
D\cdot E=\left( f_{\alpha\beta}\right) _{\alpha\beta\in {\mathbb Z}^n},
\end{equation}
where
\begin{equation}\label{matrixdot2}
f_{\alpha\beta}=\sum_{\gamma\in{\mathbb Z}^n}d_{\alpha\gamma}e_{\gamma\beta}.
\end{equation}
Next we define a space of matrices such that (\ref{matrixdot}) makes sense. For $s\in {\mathbb R}$ we define
\begin{equation}
M_n^s:=\left\lbrace D=\left(d_{\alpha\beta}\right)_{\alpha\beta\in {\mathbb Z}^n}{\Big |}~d_{\alpha\beta}\in {\mathbb C}~\&~\exists C>0: |d_{\alpha\beta}|\leq \frac{C}{1+|\alpha-\beta|^s}\right\rbrace.
\end{equation}
In the following we consider rather regular spaces where $s\geq 2+n$. Some results can be optimized with respect to regularity, but our purpose here is full regularity in the end.
Next we have
\begin{lem}
Let $D\in M^s_n$ and $E\in M^r_n$ for some $s,r\geq n+2$. Then
\begin{equation}
D\cdot E\in M^{r+s-n}_n
\end{equation}
\end{lem}
\begin{proof}
For some $c>0$ we have for $\alpha\neq \beta$
\begin{equation}
\begin{array}{ll}
|f_{\alpha\beta}|=|\sum_{\gamma\in{\mathbb Z}^n}d_{\alpha\gamma}e_{\gamma\beta}|\\
\\
\leq c+\sum_{\gamma\not\in \left\lbrace \alpha,\beta\right\rbrace }\frac{C}{|\alpha-\gamma|^s}\frac{C}{|\gamma-\beta|^r}\\
\\
\leq c+\frac{cC}{|\alpha-\beta|^{r+s-n}}
\end{array}
\end{equation}
The latter inequality is easily obtained by comparison with the integral
\begin{equation}
\int_{{\mathbb R}^n\setminus B_{\alpha\beta}}\frac{dy}{|\alpha-y|^{r}|y-\beta|^{s}}
\end{equation}
where $B_{\alpha\beta}$ is the union of the balls of radius $1/2$ around $\alpha$ and around $\beta$. Partial integration of these integrals in polar coordinate form in the different cases leads to the conclusion. We observe that there is some advantage in the analysis here compared to the analysis in classical space, because we can avoid the analysis of singularities in discrete space. This advantage can be used to get related estimates via
\begin{equation}
\begin{array}{ll}
\sum_{\gamma\not\in \left\lbrace \alpha,\beta\right\rbrace,~\alpha\neq \beta }\frac{C}{|\alpha-\gamma|^s}\frac{C}{|\gamma-\beta|^r}=\sum_{\gamma'\neq 0,~\alpha\neq \beta }\frac{C}{|\gamma'|^{s+r}}\frac{C|\gamma'|}{|\alpha-\beta|^r}\\
\\
=\sum_{\gamma'\neq 0,~\alpha\neq \beta }\frac{C}{|\gamma'|^{s+r}}\frac{C|\gamma'|}{|\alpha-\beta|^s}
\end{array}
\end{equation}
which leads to upper bounds of the form
\begin{equation}
c+\min\left\lbrace \frac{cC}{|\alpha-\beta|^{r}},\frac{cC}{|\alpha-\beta|^{s}}\right\rbrace
\end{equation}
for some generic constants $c,C$. This line of argument is an alternative.
\end{proof}
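The alternative estimate at the end of the proof, which bounds $|f_{\alpha\beta}|$ by $c+\min\left\lbrace cC|\alpha-\beta|^{-r},cC|\alpha-\beta|^{-s}\right\rbrace$, is easy to probe numerically. The following Python sketch (one space dimension, assumed exponents $r=s=3$, finite cut-offs, helper name `decay_matrix` is ours) checks that the product entries, weighted by $1+|\alpha-\beta|^{\min\{r,s\}}$, remain bounded as the cut-off grows.

```python
import numpy as np

def decay_matrix(L, s):
    # matrix with entries 1/(1 + |alpha - beta|^s) on the cut-off index set
    idx = np.arange(-L, L + 1)
    dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
    return 1.0 / (1.0 + dist ** s)

r = s = 3                       # n = 1 here, so both exponents are >= n + 2
ratios = []
for L in (20, 40, 80):
    F = decay_matrix(L, s) @ decay_matrix(L, r)
    idx = np.arange(-L, L + 1)
    dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
    ratios.append(np.max(F * (1.0 + dist ** min(r, s))))
# the weighted maxima stabilise as L grows: the product entries decay
# at least like |alpha - beta|^{-min(r, s)}
```

This is the discrete analogue of the weakly singular convolution estimate: the dominant contributions to $f_{\alpha\beta}$ come from summation indices near $\alpha$ and near $\beta$.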
In the real case the matrices of the scheme are $\left( 2n\times {\mathbb N}^n\right) \times \left( 2n\times {\mathbb N}^n\right)$-matrices. Again the essential multiplication rules may be defined for ${\mathbb N}^n\times {\mathbb N}^n$-matrices, where the considerations can easily be adapted to the formally more complicated case by renumbering. Hence it is certainly sufficient to consider $\left( n\times {\mathbb N}^n\right) \times \left( n\times {\mathbb N}^n\right)$-matrices instead of $\left( 2n\times {\mathbb N}^n\right) \times \left( 2n\times {\mathbb N}^n\right)$-matrices.
First let $D=\left(d_{\alpha\beta}\right)_{\alpha\beta\in {\mathbb N}^n}$ and $E=\left(e_{\alpha\beta}\right)_{\alpha\beta\in {\mathbb N}^n}$ be two infinite matrices, and define (formally)
\begin{equation}\label{matrixdotr}
D\cdot E=\left( f_{\alpha\beta}\right) _{\alpha\beta\in {\mathbb N}^n},
\end{equation}
where
\begin{equation}\label{matrixdot2r}
f_{\alpha\beta}=\sum_{\gamma\in{\mathbb N}^n}d_{\alpha\gamma}e_{\gamma\beta}.
\end{equation}
The space of matrices for which (\ref{matrixdotr}) makes sense is defined analogously. For $s\in {\mathbb R}$ we define
\begin{equation}
M^{re,s}_n:=\left\lbrace D=\left(d_{\alpha\beta}\right)_{\alpha\beta\in {\mathbb N}^n}{\Big |}~d_{\alpha\beta}\in {\mathbb R}~\&~\exists C>0: |d_{\alpha\beta}|\leq \frac{C}{1+|\alpha-\beta|^s}\right\rbrace.
\end{equation}
In the following we consider rather regular spaces where $s\geq 2+n$. Some results can be optimized with respect to regularity, but our purpose here is full regularity in the end.
Next we have
\begin{cor}
Let $D\in M^{re,s}_n$ and $E\in M^{re,r}_n$ for some $s,r\geq n+2$. Then
\begin{equation}
D\cdot E\in M^{re,r+s-n}_n
\end{equation}
\end{cor}
For $r=s\geq n+2$ this behavior allows us to define iterations of matrix multiplications recursively according to the matrix rules defined in (\ref{matrixdot}) and (\ref{matrixdot2}), i.e., we may define by recursion
\begin{equation}
\begin{array}{ll}
A^0=I,~\mbox{where}~I=\left(\delta_{\alpha\beta}\right)_{\alpha\beta\in{\mathbb Z}^n}, \\
\\
A^1=A,\\
\\
A^{k+1}=A\cdot A^k.
\end{array}
\end{equation}
Similar definitions can be considered in the real case, of course.
In the matrix space $M^s_n$ (resp. $M^{re,s}_n$) we may define exponentials. For our problem this space is too 'narrow' to be applied directly. However, it is useful to note that we have exponentials in this space.
\begin{cor}
Let $D\in M^s_n$ (resp. $D\in M^{re,s}_n$) for some $s\geq n+2$. Then
\begin{equation}
\exp(D)=\sum_{k=0}^{\infty}\frac{D^k}{k!}\in M^s_n~(\mbox{resp.}\in M^{re,s}_n).
\end{equation}
\end{cor}
Well what we need is an estimate for the approximative solutions. In this context we note
\begin{cor}
Let $D\in M^s_n$ (resp. $D\in M^{re,s}_n$) for some $s\geq n+2$ and let $\mathbf{h}^F\in h^s\left({\mathbb Z}^n\right) $ (resp. $\mathbf{h}^F\in h^s\left({\mathbb N}^n\right) $). Then
\begin{equation}
\exp(D)\mathbf{h}^F\in h^s\left({\mathbb Z}^n\right)~\left( \mbox{resp}~\in h^s\left({\mathbb N}^n\right)\right) .
\end{equation}
\end{cor}
\begin{proof}
Let
\begin{equation}
F=\exp(D),~\mbox{where}~F=(f_{\alpha\beta})_{\alpha\beta\in{\mathbb Z}^n},
\end{equation}
and let
\begin{equation}
g_{\alpha}=\sum_{\beta\in {\mathbb Z}^n}f_{\alpha\beta}h_{\beta}.
\end{equation}
Then for some $C,c>0$
\begin{equation}
|g_{\alpha}|=|\sum_{\beta \in {\mathbb Z}^n}f_{\alpha\beta}h_{\beta}|\leq \sum_{\beta\in {\mathbb Z}^n\setminus \left\lbrace \alpha,0\right\rbrace }\frac{C}{|\alpha-\beta|^s|\beta|^s}\leq \frac{cC}{1+|\alpha|^s}.
\end{equation}
Analogous observations hold in the real case.
\end{proof}
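The content of the corollary can be illustrated with a small numerical sketch (one space dimension, an assumed decay exponent, and a deliberately small scaling of $D$ so that a plain Taylor series for the exponential suffices; none of these choices come from the paper).

```python
import numpy as np

# a matrix in M^s_n for n = 1 (entries bounded by C/(1 + |alpha - beta|^s)),
# scaled small so that the Taylor exponential below converges quickly
s, L = 3, 60
idx = np.arange(-L, L + 1)
dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
D = 0.2 / (1.0 + dist ** s)
h = 1.0 / (1.0 + np.abs(idx).astype(float) ** s)      # data in h^s

E = np.eye(D.shape[0]); T = np.eye(D.shape[0])
for k in range(1, 25):
    T = T @ D / k
    E = E + T                                         # E approximates exp(D)

g = E @ h
weighted = np.abs(g) * (1.0 + np.abs(idx).astype(float) ** s)
# weighted stays bounded: exp(D) maps h^s data to h^s data
```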
We cannot apply the preceding lemmas directly, due to the fact that the diagonal matrix with entries $-\nu\delta_{ij}\sum_{k=1}^n\frac{4\pi^2}{l^2}\alpha_k^2$ is not bounded. Neither is the matrix related to the convection term. However, the multiplication of the dissipative exponential with iterated multiplications of the convection term matrix stays in a regular matrix space, so that a multiplication with regular data of polynomial decay leads to regular results. If we want to have a fundamental matrix, or uniqueness, then the dissipative term - whose smoothing effect is obvious in classical space - makes the difference. At first glance, iterations of the matrix lead to matrices which live in weaker and weaker spaces, and if we do not take the application to the data into account we really need the dissipative feature, i.e., in mathematical terms, the minus signs on the diagonal, in order to detect the smoothing effect in the exponential form. The irregularity related to a positive sign reminds us of the fact that heat equations cannot be solved backwards in general. However, due to its diagonal structure and its negative sign we can prove a BCH-type formula for infinite matrices. The following has some similarity with Kato's results for semigroups of dissipative operators (cf. \cite{Kat}). However, Kato's results usually require that the domain of the dissipative operator includes the domain of the second operator summand, and this is not true in our formulation for the incompressible Navier-Stokes equation, because the diagonal entries related to the Laplacian, i.e., $\left(-\delta_{\alpha\beta}\nu\sum_{i=1}^n\alpha_i^2\right)_{\alpha ,\beta\in {\mathbb Z}^n\setminus \left\lbrace 0 \right\rbrace }$, increase quadratically with the order of the modes $\alpha$. So it seems that the result cannot be applied directly.
Anyway, we continue with the constructive (algorithmic) view, which means that we consider the behavior of finite mode approximations and then go to the limit of infinite mode systems later. We have
\begin{lem}\label{lembound}
Let $g^F_l$ be a finite vector of modes of order less than or equal to $l>0$ (not to be confused with the torus size, which we consider to be equal to $1$ w.l.o.g.). Then for any finite vector $f^F_l=\left(f^l_{\alpha}\right)_{|\alpha|\leq l}$ with finite entries $f^l_{\alpha}$, and in the situation of Lemma \ref{productform} below, we have
\begin{equation}
{\big |}\left( \exp\left( C^lt\right)\mathbf{g}^F_l -\exp\left( (A^l+B^l)t\right)\mathbf{g}^F_l\right)_{\alpha}{\big |}\leq {\big |}f^l_{\alpha}t^2{\big |},
\end{equation}
where $\left(.\right)_{\alpha}$ denotes the projection to the $\alpha$th component of an infinite vector, and $\exp\left( C^lt\right)=\exp\left( A^lt\right)\exp\left( B^lt\right)$.
\end{lem}
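Lemma \ref{lembound} asserts a second-order local defect for the splitting. A quick numerical sketch (toy matrices, not the cut-off Navier-Stokes blocks; the helper `expm_taylor` is ours) confirms the $t^2$ scaling:

```python
import numpy as np

def expm_taylor(M, terms=40):
    # plain Taylor exponential; fine for the moderate-norm toy matrices here
    E = np.eye(M.shape[0]); T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

rng = np.random.default_rng(1)
m = 6
A = np.diag(-np.arange(1.0, m + 1.0))      # dissipative diagonal block
B = 0.3 * rng.standard_normal((m, m))      # bounded off-diagonal block
g = rng.standard_normal(m)

errs = []
for t in (0.2, 0.1, 0.05):
    d = expm_taylor(A * t) @ expm_taylor(B * t) @ g - expm_taylor((A + B) * t) @ g
    errs.append(np.max(np.abs(d)))
# the defect shrinks like t^2: halving t divides it by roughly four
```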
This is what we need in the following, but let us have a closer look at the Trotter product formula. The reason is that we represent solutions in a double limit where the inner limit is a Trotter product time limit. The polynomial decay behavior of the data and of the matrix entries of the problems then ensures that the spatial limits are well-defined (as the upper bound $l$ of the order of the modes goes to infinity). In the following we use the 'complex' notation and formulate results with respect to ${\mathbb Z}^n$. The considerations can be transferred immediately to real systems with multiindices in ${\mathbb N}^n$. First we define
\begin{defi}
A diagonal matrix $\left(\delta_{\alpha \beta}d_{\alpha\beta} \right)_{\alpha,\beta \in {\mathbb Z}^n}$ is called strictly dissipative of order $m>0$ if $d_{\alpha\alpha}<-c|\alpha| ^m$ for all $\alpha\in {\mathbb Z}^n$ and for some constant $c>0$. It is called strictly dissipative if it is strictly dissipative of some order $m$.
\end{defi}
This is a rather narrow definition, which we use since we consider constant viscosity; it is sufficient for the purposes of this paper, while the generalisation to variable viscosity is rather straightforward.
In order to state the Trotter-product type result we introduce some notation.
\begin{defi}
For all $m\geq 1$ let $\gamma^m=(\gamma^m_1,\cdots ,\gamma^m_m)$ denote multiindices with $m$ components and with nonnegative integer entries $\gamma^m_i$ for $1\leq i\leq m$. For each $m\geq 1$ we denote the set of $m$-tuples with nonnegative entries by ${\mathbb N}_{0}^m$. Let $B=\left(b_{\alpha\beta}\right)_{\alpha\beta\in {\mathbb Z}^n}$ denote a quadratic matrix, let $B^T=\left(b_{\beta\alpha}\right)_{\alpha\beta\in {\mathbb Z}^n}$ be its transpose, and let $E$ be some other infinite matrix of the same type. For $m\geq 1$ and $m$-tuples $\gamma^m$ with nonnegative entries we introduce some abbreviations for certain iterated Lie bracket operations of matrices. These are iterations involving the matrix $\Delta B:=B-B^T$ and either the matrix $B$ or the matrix $B^T$ in arbitrary order. This gives different expressions depending on the matrix with which we start, and we define $I_{\gamma^m}$ (starting with $\left[\Delta B,B \right]$) and $I^T_{\gamma^m}$ (starting with $\left[\Delta B,B^T\right]$) accordingly. First we define $I_{(\gamma^1_1)}\left[\Delta B, B\right]$ for $\gamma^1_1\geq 0$ recursively. Let
\begin{equation}
\left[E,B\right]_T=EB-B^TE,
\end{equation}
which is a Lie-bracket type operation with the transposed.
For $\gamma^1_1=0$ define
\begin{equation}
I_{(\gamma^1_1)}\left[\Delta B,B\right]:=\Delta B,
\end{equation}
and for $\gamma^1_1>0$ define
\begin{equation}
I_{(\gamma^1_1)}\left[\Delta B,B\right]:=\left[I_{(\gamma^1_1-1)}\left[\Delta B,B \right] ,B\right]_T.
\end{equation}
Having defined $I_{\gamma^{m-1}}$, for $\gamma^m_m=0$ we define
\begin{equation}
I_{\gamma^m}\left[\Delta B,B\right]=I_{\gamma^{m-1}}\left[\Delta B,B\right]+I_{\gamma^{m-1}}\left[\Delta B,B^T\right]
\end{equation}
Finally, if $\gamma^m_m>0$, then define
\begin{equation}
I_{\gamma^m}\left[\Delta B,B\right]=\left[ I_{\gamma^{m-1}}\left[\Delta B,B\right],B\right]_T.
\end{equation}
Similarly, for $\gamma^1_1=0$ define
\begin{equation}
I^T_{(\gamma^1_1)}\left[\Delta B,B\right]:=\Delta B,
\end{equation}
and for $\gamma^1_1>0$ define
\begin{equation}
I^T_{(\gamma^1_1)}\left[\Delta B,B\right]:=\left[I^T_{(\gamma^1_1-1)}\left[\Delta B,B \right] ,B^T\right]_T.
\end{equation}
Having defined $I^T_{\gamma^{m-1}}$, for $\gamma^m_m=0$ we define
\begin{equation}
I^T_{\gamma^m}\left[\Delta B,B\right]=I^T_{\gamma^{m-1}}\left[\Delta B,B\right]+I^T_{\gamma^{m-1}}\left[\Delta B,B^T\right]
\end{equation}
Finally, if $\gamma^m_m>0$, then define
\begin{equation}
I^T_{\gamma^m}\left[\Delta B,B\right]=\left[ I^T_{\gamma^{m-1}}\left[\Delta B,B\right],B^T\right]_T.
\end{equation}
\end{defi}
Next we prove a special BCH-formula for finite matrices. We do not need the full force of Lemma \ref{productform} below for our purpose, but it has some interest of its own. The reader who is interested only in the global existence proof may skip it and consider the simplified alternative considerations in order to see how a Trotter product result can be applied. The results may be generalized, but our main purpose in this article is to define a converging algorithm for the incompressible Navier-Stokes equation which also provides a constructive approach to global existence.
We show
\begin{lem}\label{productform}
Define the set of finite modes
\begin{equation}
{\mathbb Z}^n_l:=
\left\lbrace \alpha\in {\mathbb Z}^n||\alpha|\leq l\right\rbrace .
\end{equation}
Let $A^l$ be the cut-off of order $l$ of the dissipative diagonal matrix of order $2$ related to the Laplacian and let $B^l=\left(b_{\alpha\beta}\right)_{\alpha\beta\in {\mathbb Z}^n_l}$ be the cut-off of some other matrix. Next for an arbitrary finite quadratic matrix $N^l=\left( n_{\alpha\beta}\right)_{|\alpha|,|\beta|\leq l}$ let
\begin{equation}
\exp_m\left(N\right)=\exp\left(N\right)-\left(\sum_{k=0}^{m-1}\frac{N^k}{k!}\right).
\end{equation}
Then the relation
\begin{equation}\label{BCH*}
\exp(A^l)\exp(B^l)=\exp(C^l),
\end{equation}
holds where $C^l$ is of the form
\begin{equation}\label{Ceq*}
\begin{array}{ll}
C^l=A^l+B^l+\frac{1}{2}\exp(2A^l)\Delta B^l+\sum_{m\geq 1}\exp_m\left( A^l\right)\times\\
\\
\times \left( c_{\beta^m}I_{\beta^m}\left[\Delta B^l ,B^l \right]_T+c^T_{\beta^m}I^T_{\beta^m}\left[\Delta B^l ,B^l \right]_T\right)
\end{array}
\end{equation}
for some constants $c_{\beta^m},c^T_{\beta^m}$ such that the series
\begin{equation}
\sum_{m\geq 1}c_{\beta^m}t^m
\end{equation}
and the series
\begin{equation}
\sum_{m\geq 1}c^T_{\beta^m}t^m
\end{equation}
converge absolutely for all $t\geq 0$.
\end{lem}
\begin{proof}
For simplicity of notation we suppress the superscript $l$ in the following. All matrices $A,B,C,\cdots $ are finite multiindexed matrices of modes of order less than or equal to $l$.
The matrix $C=\sum_{i=1}^{\infty}C_i$ is formally determined via power series
\begin{equation}
C(x)=\sum_{i=1}^{\infty}C_ix^i,~C'(x)=\sum_{i=1}^{\infty}iC_ix^{i-1}
\end{equation}
by the relation
\begin{equation}
\sum_{k=0}^{\infty}R^k\left[C'(x),C(x) \right]=A+B+\sum_{k\geq 1}R^k\left[A,B\right]\frac{x^k}{k!},
\end{equation}
where for matrices $E,F$ $\left[E,F\right]=EF-FE$ denotes the Lie bracket and
\begin{equation}
R^1[E,F]=[E,F],~R^{k+1}[E,F]=\left[ R^k[E,F],F\right]
\end{equation}
recursively (Lie bracket iteration on the right). Comparing the terms of different orders one gets successively for the first $C$-terms
\begin{equation}
\begin{array}{ll}
C_1=A+B,~C_2=\frac{1}{2}\left[A,B\right],\\
\\
C_3=\frac{1}{12}\left[A,\left[A,B\right] \right]+\frac{1}{12}\left[\left[A,B\right],B\right],\\
\\
C_4=\frac{1}{48}\left[A,\left[\left[A,B\right],B\right]\right]+
\frac{1}{48} \left[ \left[A,\left[A,B\right]\right] ,B\right]\\
\\
C_5=\frac{1}{120}\left[ A,\left[ \left[A,\left[A,B\right]\right] ,B\right]\right]
+\frac{1}{120}\left[ A,\left[ \left[\left[A,B\right],B\right] ,B\right]\right]\\
\\
-\frac{1}{360}\left[ A,\left[ \left[ \left[A,B\right],B\right] ,B\right]\right]
-\frac{1}{360}\left[ \left[ A,\left[A, \left[A,B\right]\right] \right] ,B\right]\\
\\
-\frac{1}{720}\left[ A,\left[ A,\left[A, \left[A,B\right]\right]\right]\right]
-\frac{1}{720}\left[ \left[\left[\left[A,B\right],B\right],B \right] ,B\right],\cdots .
\end{array}
\end{equation}
Iterated Lie brackets simplify if $A$ is a diagonal matrix. First we
define left Lie-bracket iterations, i.e.,
\begin{equation}
L^1[E,F]=[E,F],~L^{k+1}[E,F]=\left[ E, L^k[E,F]\right]
\end{equation}
recursively. Next we study the effect of alternating applications of the left and the right Lie-bracket iterations for this specific $A$.
We have
\begin{equation}
\left[ A,B\right]=A\Delta B,
\end{equation}
with $\Delta B=B-B^T$, and
\begin{equation}
L^k\left[ A,B\right]=A^k2^{k-1}\Delta B,
\end{equation}
Next we have
\begin{equation}
R^k\left[ A,B\right]=AR^{k-1}\left[ \Delta B,B\right]_T
\end{equation}
Next
\begin{equation}
LR^k\left[ A,B\right]=A^2R^{k-1}\left( \left[ \Delta B,B\right]_T+\left[ \Delta B,B^T\right]_T\right)
\end{equation}
Induction leads to the observation that the summands other than $A+B$ of the series for $C$ can be written in terms of expressions of the form
\begin{equation}
\left( A\right) ^k I_{\beta^m}\left[\Delta B,B \right]_T
\end{equation}
and in terms of expressions of the form
\begin{equation}
\left( A\right) ^k I^T_{\beta^m}\left[\Delta B,B \right]_T
\end{equation}
For each order $p=k+m$ we have a factor $\frac{1}{p!}$ with $\leq 2^p$ summands. This leads to global convergence of the coefficients $c_{\beta^m},c^T_{\beta^m}$.
\end{proof}
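The low-order terms of the series for $C$ can be checked numerically: for small $s$, the matrix $\log\left(\exp(As)\exp(Bs)\right)$ agrees with $As+Bs+\tfrac12\left[A,B\right]s^2$ up to an $O(s^3)$ defect coming from the third- and higher-order commutators. The Python sketch below is illustrative only (toy matrices; the helper names `expm_taylor` and `logm_series` are ours):

```python
import numpy as np

def expm_taylor(M, terms=30):
    # plain Taylor exponential; fine for the small-norm matrices used here
    E = np.eye(M.shape[0]); T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def logm_series(M, terms=40):
    # matrix logarithm near the identity via the Mercator series log(I + X)
    X = M - np.eye(M.shape[0])
    L = np.zeros_like(M); P = np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ X
        L = L + ((-1.0) ** (k + 1)) * P / k
    return L

rng = np.random.default_rng(2)
m = 4
A = np.diag(-np.arange(1.0, m + 1.0))        # diagonal (dissipative-type) part
B = 0.5 * rng.standard_normal((m, m))        # generic bounded part

defects = []
for s_ in (0.1, 0.05):
    C = logm_series(expm_taylor(A * s_) @ expm_taylor(B * s_))
    bch2 = (A + B) * s_ + 0.5 * (A @ B - B @ A) * s_ ** 2
    defects.append(np.max(np.abs(C - bch2)))
# halving s shrinks the defect by roughly 2^3: the remainder is O(s^3)
```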
We continue to describe consequences in a framework which is very close to the requirements of the systems related to the incompressible Navier-Stokes equation. As a simple consequence of the preceding lemma we have
\begin{lem}
Let $g^F_l$ be a finite vector of modes of order less than or equal to $l>0$.
In the situation of Lemma \ref{productform} we have
\begin{equation}
\lim_{k\uparrow \infty}\left( \exp\left(A^l\frac{t}{k}\right)\exp\left(B^l\frac{t}{k}\right)\right) ^kg^F_l=\exp\left( (A^l+B^l)t\right)g^F_l.
\end{equation}
\end{lem}
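The statement can be checked numerically for small matrices. The following sketch is our own illustration of the Trotter product formula, using a plain Taylor-series matrix exponential rather than a library routine:

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for small-norm matrices)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        result = result + term
    return result

def trotter(A, B, t, k):
    """k-fold Trotter product (exp(A t/k) exp(B t/k))^k."""
    step = expm_taylor(A * t / k) @ expm_taylor(B * t / k)
    return np.linalg.matrix_power(step, k)
```

For non-commuting $A,B$ the deviation from $\exp((A+B)t)$ decays like $O(1/k)$, consistent with a first-order splitting.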
It is rather straightforward to reformulate the latter result for equations with matrices of type $P_{M^l}A_0$, $P_{M^l}A^{r}_0$, i.e. finite approximations of order $l$ of the $n{\mathbb Z}^n\times n{\mathbb Z}^n$-matrices $A_0$, $A^{r}_0$. Strictly speaking, the matrix of the controlled system is in ${\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace\times {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace$, but we add zeros and treat it formally in the same matrix space. We have
\begin{cor}
Let $B^{0,l}_b=\left( P_{M^l}\left( C^0_i+L^0_{ij}\right)_{ij} \right) $ and \newline
$B^{r,l,0}_b=\left( P_{M^l}\left( C^{r,0}_i+L^{r,0}_{ij}\right)_{ij} \right)$ such that
$A^l_0=D^{0,l}_b+B^{0,l}_b$, $A^{r,l}_0=D^{0,l}_b+B^{r,l}_b$. Then the Trotter product formula
\begin{equation}
\lim_{k\uparrow \infty}\left( \exp\left(D^{0,l}_b\frac{t}{k}\right)\exp\left(B^{0,l}_b\frac{t}{k}\right)\right) ^k=\exp\left( A^l_0t\right),
\end{equation}
and the Trotter product formula
\begin{equation}
\lim_{k\uparrow \infty}\left( \exp\left(D^{r,l,0}_b\frac{t}{k}\right)\exp\left(B^{r,l,0}_b\frac{t}{k}\right)\right) ^k=\exp\left( A^{r,l}_0t\right)
\end{equation}
hold.
\end{cor}
Next note that iterations of $B^{0,l}_b$ of order $k\geq 1$ have at most linear growth for constellations of multiindices where $\alpha-\gamma$ is constant (with the order of the modes as $l\uparrow \infty$). Let us have a closer look at this. Note that $B^{0,l}_b$ is a $n{\mathbb Z}^n\times n{\mathbb Z}^n$ matrix. However, in order to estimate the growth behavior of iterations of $B^{0,l}_b$ it is sufficient to consider the subblocks $B^{0,l}_{b,ij}$ for fixed $1\leq i,j\leq n$, which are matrices in ${\mathbb Z}^n\times {\mathbb Z}^n$. We have
\begin{equation}\label{navode3cit}
\begin{array}{ll}
B^{0,l}_{b,ij\alpha\beta}=
-\sum_{j=1}^n\frac{2\pi i \beta_j}{l}h_{j(\alpha-\beta)}+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{k=1}^n 4\pi \beta_j(\alpha_k-\beta_k)h_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}.
\end{array}
\end{equation}
We consider the entries for the square. We have
\begin{equation}\label{navode3cit2}
\begin{array}{ll}
\sum_{\beta \in {\mathbb Z}^n}B^{0,l}_{b,ij\alpha\beta}B^{0,l}_{b,ij\beta\gamma}=\\
\\
{\Big (}-\sum_{j=1}^n\frac{2\pi i \beta_j}{l}h_{j(\alpha-\beta)}+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{k=1}^n 4\pi \beta_j(\alpha_k-\beta_k)h_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}{\Big )}\\
\\
{\Big (}-\sum_{j=1}^n\frac{2\pi i \gamma_j}{l}h_{j(\beta-\gamma)}+2\pi i\beta_i1_{\left\lbrace \beta\neq 0\right\rbrace}\frac{\sum_{k=1}^n 4\pi \gamma_j(\beta_k-\gamma_k)h_{k(\beta-\gamma)}}{\sum_{i=1}^n4\pi^2\beta_i^2}{\Big )}.
\end{array}
\end{equation}
Expanding the latter product (\ref{navode3cit2}) and inspecting the four terms of the expansion we observe that two of these four terms are entries of matrices in the regular matrix space. Only the mixed products of convection terms entries and Leray projection terms entries lead to expressions which are a little less regular. We define an appropriate matrix space
\begin{equation}
\begin{array}{ll}
M_n^{s,\mbox{lin}}:=\\
\\
\left\lbrace D=\left(d_{\alpha\beta}\right)_{\alpha\beta\in {\mathbb Z}^n}{\Big |}~d_{\alpha\beta}\in {\mathbb C}~\&~\exists C>0: |d_{\alpha\beta}|\leq \frac{C+C\left(|\alpha|+|\beta| \right) }{1+|\alpha-\beta|^s}\right\rbrace.
\end{array}
\end{equation}
By induction we have
\begin{lem}\label{bit}
Given $1\leq i,j\leq n$ for all $k\geq 1$ we have
\begin{equation}
\left( B^{0,l}_{b,ij}\right)^k\in M_n^{s,\mbox{lin}}.
\end{equation}
\end{lem}
Hence the statement of Lemma \ref{bit} holds also for the exponential of $B^{0,l}_{b,ij}$. We note
\begin{lem}\label{dbexp}
If $D$ is a strictly dissipative diagonal matrix of order $2$, then for given $1\leq i,j\leq n$ we have
\begin{equation}
\exp(D)\exp\left( B^{0,l}_{b,ij}\right)\in M_n^{s}.
\end{equation}
\end{lem}
In order to establish Trotter product representations for our iterative linear approximations $v^{r,m,F}$ of a controlled Navier-Stokes equation we consider a matrix space for sequences of finite matrices $\left( D^l\right)_{l}$ where each $D^l$ has modes of order less than or equal to $l$. We have already observed that the systems related to the approximations of the incompressible Navier-Stokes equation operator are matrices in a $n{\mathbb Z}^n\times n{\mathbb Z}^n$ space. In order to prove convergence of Trotter product formulas for these equations it is useful to define an appropriate matrix space:
\begin{equation}
\begin{array}{ll}
M_{n\times n}^s:={\Big \{} D=(D_{ij})_{1\leq i,j\leq n}=\left(d_{ij\alpha\beta}\right)_{\alpha\beta\in {\mathbb Z}^n,1\leq i,j\leq n}
{\Big |}\\
\\
~d_{ij\alpha\beta}\in {\mathbb C}~\&~\exists C>0: \max_{1\leq i,j\leq n}|d_{ij\alpha\beta}|\leq \frac{C}{1+|\alpha-\beta|^s}{\Big \}}.
\end{array}
\end{equation}
Next we define a space of sequences $\left( \left( D^l_{ij}\right)_{1\leq i,j\leq n}\right)_l$ of finite matrices which approximate matrices $M_{n\times n}^s$. We define
\begin{equation}
\begin{array}{ll}
M_{n\times n}^{s,\mbox{fin}}:={\Big \{} \left( \left( D^l_{ij}\right)_{1\leq i,j\leq n}\right)_l =\left(d^l_{ij\alpha\beta}\right)_{\alpha\beta\in {\mathbb Z}^n_l,1\leq i,j\leq n}{\Big |}\\
\\
~d^l_{ij\alpha\beta}\in {\mathbb C}~\&~\exists C>0~\forall l: \max_{1\leq i,j\leq n}|d^l_{ij\alpha\beta}|\leq \frac{C}{1+|\alpha-\beta|^s}{\Big \}}.
\end{array}
\end{equation}
Here for each $l\geq 1$ the set ${\mathbb Z}^n_l$ denotes the set of modes of order less than or equal to $l$.
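The membership conditions defining these matrix spaces can be checked mechanically on a finite cut-off. Here is a minimal sketch for dimension one (the names \texttt{modes}, \texttt{cutoff} and \texttt{satisfies\_decay} are ours, introduced only for illustration):

```python
import numpy as np

def modes(l):
    """Modes of order <= l in dimension one: {-l, ..., l}."""
    return np.arange(-l, l + 1)

def cutoff(entry, l):
    """Finite matrix P_{M^l} D with entries d_{alpha beta} for |alpha|, |beta| <= l."""
    ms = modes(l)
    return np.array([[entry(a, b) for b in ms] for a in ms])

def satisfies_decay(D, l, C, s):
    """Check the bound |d_{alpha beta}| <= C / (1 + |alpha - beta|**s) for all modes."""
    ms = modes(l)
    for i, a in enumerate(ms):
        for j, b in enumerate(ms):
            if abs(D[i, j]) > C / (1.0 + abs(a - b) ** s):
                return False
    return True
```

A matrix whose entries decay polynomially in $|\alpha-\beta|$ passes the check with the matching constant, and fails with any smaller one.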
For our dissipative matrix $D^{0,l}_b$ it follows that for all $t,k$ we have
\begin{equation}
\left( \exp\left(D^{0,l}_b\frac{t}{k}\right)\exp\left(B^{0,l}_b\frac{t}{k}\right)\right)^k_l\in M^{s,\mbox{fin}}_{n\times n}
\end{equation}
for $s\geq n$, i.e., the finite approximations of the Trotter product formula are regular matrices indeed. Hence
\begin{equation}
\lim_{l\uparrow\infty}\lim_{k\uparrow \infty}\left( \exp\left(D^{0,l}_b\frac{t}{k}\right)\exp\left(B^{0,l}_b\frac{t}{k}\right)\right)^k\in M^s_{n\times n}.
\end{equation}
Now consider finite approximations of the problem (\ref{navode4big}), i.e., cut-offs of this problem: a set of problems of finitely many modes of order less than or equal to $l$ with
\begin{equation}\label{navode4bigl}
\frac{d \mathbf{v}^{0,F}_l}{dt}=A^l_0\mathbf{v}^{0,F}_l.
\end{equation}
For each $l$ the solution
\begin{equation}\label{soll}
\mathbf{v}^{0,F}_l=\exp\left(A^l_0t\right)\mathbf{h}^{F}_l
\end{equation}
is globally well-defined via the dissipative Trotter product formula. Our infinite linear algebra lemmas above imply that the sequence $\left( \mathbf{v}^{0,F}_l\right)_{l}$ is a Cauchy sequence for $s>n$ if $\mathbf{h}^{F}_i\in h^s\left({\mathbb Z}^n\right)$ for all $s\in {\mathbb R}$ and $1\leq i\leq n$. Hence we have
\begin{lem}
We consider the torus of length $l=1$ w.l.o.g.
Let $\mathbf{h}^{F}\in h^s\left({\mathbb Z}^n\right)$ for all $s\in {\mathbb R}$. Then
\begin{equation}\label{sol0}
\begin{array}{ll}
\mathbf{v}^{0,F}_i=\left( \exp\left(A_0t\right)\mathbf{h}^{F}\right)_i\\
\\
=\lim_{l\uparrow \infty}\left( \lim_{k\uparrow \infty}\left( \exp\left(D^{0,l}_b\frac{t}{k}\right)\exp\left(B^{0,l}_b\frac{t}{k}\right)\right) ^k\mathbf{h}^{F}_l\right)_i\in h^s\left({\mathbb Z}^n\right)
\end{array}
\end{equation}
whenever $\mathbf{h}^{F}_i\in h^s\left({\mathbb Z}^n\right)$ for $s>n$ for $1\leq i\leq n$.
\end{lem}
For the same reason
\begin{cor}
Consider the same situation as in the preceding lemma.
\begin{equation}\label{sol0r}
\begin{array}{ll}
\mathbf{v}^{r,0,F}_i=\left( \exp\left(A^r_0t\right)\mathbf{h}^{F}\right)_i\\
\\
=\lim_{l\uparrow \infty}\left( \lim_{k\uparrow \infty}\left( \exp\left(D^{r,0,l}_b\frac{t}{k}\right)\exp\left(B^{r,0,l}_b\frac{t}{k}\right)\right) ^k\mathbf{h}^{F}_l\right)_i\in h^s\left({\mathbb Z}^n\right)
\end{array}
\end{equation}
whenever $\mathbf{h}^{F}_i\in h^s\left({\mathbb Z}^n\right)$ for $s>n$ for $1\leq i\leq n$.
\end{cor}
\begin{proof}
Let $A^{r,l}_0=P_{M^l}A^r_0$ denote the projection of the matrix $A^r_0$ to the finite matrix of modes of order less than or equal to $l$. For finite vectors $\mathbf{v}^{r,0,F}_{i,l}$ with
\begin{equation}\label{sol00002}
\mathbf{v}^{r,0,F}_{i}=\lim_{l\uparrow \infty}\mathbf{v}^{r,0,F}_{i,l}
\end{equation}
we have
\begin{equation}\label{sol0000l}
\begin{array}{ll}
\mathbf{v}^{r,0,F}_{i,l}=\left( \exp\left(A^{r,l}_0t\right)\mathbf{h}^{F}\right)_i\\
\\
=\left( \lim_{k\uparrow \infty}\left( \exp\left(D^{r,0,l}_b\frac{t}{k}\right)\exp\left(B^{r,0,l}_b\frac{t}{k}\right)\right) ^k\mathbf{h}^{F}_l\right)_i.
\end{array}
\end{equation}
Note that
\begin{equation}
\lim_{k\uparrow \infty}\left( \exp\left(D^{r,0,l}_b\frac{t}{k}\right)\exp\left(B^{r,0,l}_b\frac{t}{k}\right)\right) ^k\mathbf{h}^{F}_l
\end{equation}
is a Cauchy sequence in $M^{s,\mbox{fin}}_{n\times n}$.
Hence, the left side of (\ref{sol00002}) is a limit of a Cauchy sequence in $M^s_n$.
\end{proof}
The stage $m=0$ is special as the matrix $B^{r,0,l}_b$ does not depend on time. For this reason we get similar Trotter formulas for time derivatives using the damping of the dissipative factor $\exp\left(D^{r,0,l}_b\right) $. We shall use first-order time-derivative formulas for linear problems with time-independent coefficients later when we approximate time-dependent linear problems via time discretizations. We have
\begin{cor}
Recall that $A^{r,l}_0=P_{M^l}A^r_0$ denotes the projection of the matrix $A^r_0$ to the finite matrix of modes of order less than or equal to $l$.
\begin{equation}\label{sol02}
\begin{array}{ll}
\frac{d}{d t}\mathbf{v}^{r,0,F}_i=\frac{d}{d t}\left( \exp\left(A^r_0t\right)\mathbf{h}^{F}\right)_i\\
\\
=\lim_{l\uparrow \infty}\left( A^{r,l}_0\lim_{k\uparrow \infty}\left( \exp\left(D^{r,0,l}_b\frac{t}{k}\right)\exp\left(B^{r,0,l}_b\frac{t}{k}\right)\right) ^k\mathbf{h}^{F}_l\right)_i\in h^s\left({\mathbb Z}^n\right)
\end{array}
\end{equation}
whenever $\mathbf{h}^{F}_i\in h^s\left({\mathbb Z}^n\right)$ for $s>n$ for $1\leq i\leq n$.
\end{cor}
Next at stage $m\geq 1$ we cannot apply the results above directly in order to define
\begin{equation}\label{Dysonlimit}
\left( \mathbf{v}^{m,F}\right)=T\exp\left(A_mt\right)\mathbf{h}^{F},
\end{equation}
or in order to define
\begin{equation}\label{Dysonlimitr}
\left( \mathbf{v}^{r,m,F}\right)=T\exp\left(A^r_mt\right)\mathbf{h}^{F}.
\end{equation}
The main difference of the stages $m\geq 1$ to the stage $m=0$ is the time dependence of the coefficients, formally expressed by the operators $T$ in (\ref{Dysonlimit}) and (\ref{Dysonlimitr}) above. We have already mentioned various methods in order to deal with this matter. In principle there are two alternatives: either we may solve the Euler part of the equation separately using a Dyson formalism, or we may set up an Euler scheme and prove a Trotter product formula for a subscheme with time-homogeneous subproblems. There are variations of each alternative, of course, but these are the basic possibilities.
We first consider the simpler Euler scheme (second alternative) and postpone the treatment of the Dyson formalism for the Euler part and related higher order schemes to the end of this section and to the next section on algorithms respectively. The second difference is that we need to control the zero modes if we want to establish a global scheme.
For the latter reason, next we consider the controlled scheme (the considerations apply to the uncontrolled scheme as well for one iteration step). In each iteration step we construct the solution of an infinite linear ODE in dual space which corresponds to a linear partial integro-differential equation in the original space. For each iteration step we need some subiterations in order to deal with the time-dependence of the coefficients. In order to apply our observations concerning linear algebra of infinite systems and the Trotter product formula, we consider time-discretizations. The time-dependent formulas in (\ref{Dysonlimit}) and (\ref{Dysonlimitr}) have a rigorous definition via double limits (with respect to time and with respect to modes) of Trotter product formulas for finite systems.
There are basically two possibilities to define a time-discretized scheme based on this form of a Trotter product formula. One possibility is to consider the successive linearized global problems at each stage $m$ of the construction, where the matrices $A^r_m$ are known in terms of the entries of the infinite vector $\mathbf{v}^{r,m-1,F}$ which contains information known from the previous step. According to the dissipative Trotter product formula we expect a time-discretization error of order $O(h)$ where $h$ is an upper bound of the time step sizes.
The other possibility is to apply the dissipative Trotter product formula locally and establish a time local limit $\lim_{m\uparrow \infty}\mathbf{v}^{r,m,F}$ at each time step proceeding by the semi-group property.
Next we observe that for each stage of approximation the solution
\begin{equation}\label{Dysonlimit*}
\left( \mathbf{v}^{r,m,F}\right)=T\exp\left(A^r_mt\right)\mathbf{h}^{F},
\end{equation}
of the linear equation
\begin{equation}\label{Dysonlimitr*}
\frac{d \mathbf{v}^{r,m,F}}{dt}=A^r_m\mathbf{v}^{r,m,F},~\mathbf{v}^{r,m,F}(0)=\mathbf{h}^{F},
\end{equation}
can be computed rather explicitly if we assume $\mathbf{v}^{r,m,F}_i\in h^{s}\left({\mathbb Z}^n \right)$ for $s>n+2$ inductively. Define the approximative Euler matrix of order $p$ corresponding to the approximative Euler part of the equation of order $p$, i.e., on the diagonal $i=j$ define
\begin{equation}
\begin{array}{ll}
\delta_{ij}E^{r,NS}_{i\alpha j\beta}\left(\mathbf{v}^{r,p-1,E}\right)=
-\delta_{ij}\sum_{j=1}^n\frac{2\pi i \beta_j}{l}v^{r,p-1,E}_{j(\alpha-\beta)}\\
\\+\delta_{ij}2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)v^{r,p-1,E}_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi^2\alpha_i^2},
\end{array}
\end{equation}
and off-diagonal, i.e. for $i\neq j$, define the entries
\begin{equation}
(1-\delta_{ij})E^{r,p-1,NS}_{i\alpha j\beta}\left(\mathbf{v}^{r,p-1,E}\right)=2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)v^{r,p-1,E}_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi^2\alpha_i^2}.
\end{equation}
Similarly for the uncontrolled Euler system, where we have to include the zero modes. For the Euler system the control makes no difference as we have no viscosity terms. For this reason we consider the uncontrolled Euler system in the following and denote the corresponding uncontrolled Euler matrix, i.e., the matrix corresponding to the incompressible Euler equation, just by $E(t)$ for the sake of brevity. Similarly, the matrix of the iterative approximation of stage $p$ is denoted by $E^p(t)$ in the following. The Euler system is more difficult to solve since we have no fundamental matrix which lives in a space of reasonable regularity. We are interested in two issues: a) whether the approximative Euler system can be solved rather explicitly such that this solution can serve for proving a Trotter product formula solution for approximative Navier-Stokes equation systems (and hence for the definition of higher order schemes), and b) whether global solution of the Euler equation is possible for $n\geq 3$ or whether solutions of this equation are generically singular (have singularities).
First we observe that the approximative uncontrolled Euler system with data
\begin{equation}
\begin{array}{ll}
\mathbf{v}^{p-1,E}(t_0)=\left( \mathbf{v}^{p-1,E}(t_0)_1,\cdots ,\mathbf{v}^{p-1,E}(t_0)_n\right)^T,~\\
\\
\mathbf{v}^{p-1,E}_i(t_0)\in h^{s}\left({\mathbb Z}^n\right),~\mbox{ for }1\leq i\leq n,~s>n+2
\end{array}
\end{equation}
for some initial time $t_0\geq 0$ has a solution for some time $t>0$ of Dyson form
\begin{equation}\label{dysonobseulerrp}
\begin{array}{ll}
\mathbf{v}^{p,E}=\mathbf{v}^{p-1,E}(t_0)+\sum_{m=1}^{\infty}\frac{1}{m!}\times\\
\\
\times\int_0^tds_1\int_0^tds_2\cdots \int_0^tds_m T_m\left(E^{p-1}(s_1)E^{p-1}(s_2)\cdot \cdots \cdot E^{p-1}(s_m) \right)\mathbf{v}^{p-1,E}(t_0),
\end{array}
\end{equation}
where the time ordering operator is defined as above. The matrices $E^{p-1}(t_m)$ depend on $\mathbf{v}^{p-1,E}$ at higher stages $p$ of approximation and on $\mathbf{v}^{p-1,F}(t_0)$ at the first stage of approximation. Having $\mathbf{v}^{p-1,E}_i(t_0)\in h^s$ for $1\leq i\leq n$ and $s>n+2$, and assuming $\mathbf{v}^{p-1,E}_i\in h^s$ for $1\leq i\leq n$ and $s>n+2$, we get
\begin{equation}\label{Tm}
T_m\left(E^{p-1}(t_1)E^{p-1}(t_2)\cdot \cdots \cdot E^{p-1}(t_m) \right)\mathbf{v}^{p-1,E}(t_0)\in \left[ h^s\left({\mathbb Z}^n\right)\right]^n
\end{equation}
for $s>n+2$ by the upper bounds for matrix vector multiplications considered above. Furthermore, we get the upper bounds $C^mt^m$ for the term in (\ref{Tm}) for some constant $C>0$, and it follows that the solution to the approximative equation in (\ref{dysonobseulerrp}) is well-defined for $t\geq 0$. We denote
\begin{equation}\label{dysonobseulerrp2}
\begin{array}{ll}
T\exp\left(E^{p}(t) \right)v(t_0):=v(t_0)+\sum_{m=1}^{\infty}\frac{1}{m!}\times\\
\\
\times\int_0^tds_1\int_0^tds_2\cdots \int_0^tds_m T_m\left(E^{p-1}(s_1)E^{p-1}(s_2)\cdot \cdots \cdot E^{p-1}(s_m) \right)\mathbf{v}^{p-1,E}(t_0).
\end{array}
\end{equation}
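Algorithmically, such a time-ordered exponential is approximated by an ordered product of short-time propagators. The sketch below is our own construction (names ours); it reduces to the ordinary exponential when the generator is time-independent, which serves as a sanity check:

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for small-norm matrices)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        result = result + term
    return result

def time_ordered_exp(E_of_t, t, steps):
    """Approximate T exp(int_0^t E(s) ds) by the ordered product of short-time
    propagators, with later times applied to the left."""
    dt = t / steps
    P = np.eye(E_of_t(0.0).shape[0])
    for j in range(steps):
        P = expm_taylor(E_of_t(j * dt) * dt) @ P
    return P
```

For a constant generator the product collapses exactly to the ordinary matrix exponential; for a time-dependent but commuting family it converges to the exponential of the time integral of the generator.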
It is then straightforward to prove the following Trotter product formula for the approximation of order $p$ of the solution of the Navier-Stokes equation system.
\begin{cor}
The solution $\mathbf{v}^{p,F}$ of the linear approximative incompressible Navier-Stokes equation at stage $p\geq 1$ has the representation
\begin{equation}
\begin{array}{ll}
\mathbf{v}^{p,F}(t)=\\
\\
\lim_{k\uparrow \infty}
\left( \exp\left( -\left(\delta_{ij}\left(\delta_{\alpha\beta}\nu\sum_{j=1}^n\alpha_j^2 \right)_{\alpha\beta\in {\mathbb Z}^n} \right)_{1\leq i,j\leq n}\frac{t}{k}\right)
T\exp\left( E^{p}\left(\frac{t}{k}\right) \right)\right) ^k\mathbf{h}^{F}.
\end{array}
\end{equation}
\end{cor}
A similar formula holds for the controlled Navier Stokes equation system.
We come back to this formula below in the description of an algorithm.
Next concerning issue b) we first observe that we get a contraction result for the approximations of order $p$ for the Euler equation. We write
\begin{equation}
E^{p-1}(t)=:E(\mathbf{v}^{p-1,E}(t))
\end{equation}
in order to emphasize the dependence of the Euler matrix at time $t\geq 0$ on the approximative Euler velocity $\mathbf{v}^{p-1,E}(t)$ at stage $p-1$.
First we consider the approximative Euler initial data problem of order $p$, i.e., the problem
\begin{equation}
\frac{d\mathbf{v}^{p,E}}{dt}=E(\mathbf{v}^{p-1,E}(t))\mathbf{v}^{p,E},~\mathbf{v}^{p,E}(0)=\mathbf{h}^F
\end{equation}
for data $\mathbf{h}^F$ with components $\mathbf{h}^F_i\in h^s\left({\mathbb Z}^n\right)$ for $1\leq i\leq n$.
We get
\begin{equation}\label{diffp}
\begin{array}{ll}
\mathbf{v}^{p,E}(t)-\mathbf{v}^{p-1,E}(t)=\int_0^t{\Big (} E(\mathbf{v}^{p-1,E}(s))\delta \mathbf{v}^{p,E}(s)\\
\\
+E(\delta\mathbf{v}^{p-1,E}(s))\mathbf{v}^{p-1,E}(s){\Big )}ds,
\end{array}
\end{equation}
where $\delta \mathbf{v}^{p,E}(s)=
\mathbf{v}^{p,E}(s)-\mathbf{v}^{p-1,E}(s)$.
We denote
\begin{equation}
E(\delta\mathbf{v}^{p-1,E}(s))=:\left( E(\delta\mathbf{v}^{p-1,E}(s))_{i\alpha j\beta}\right)_{1\leq i,j\leq n,\alpha,\beta\in {\mathbb Z}^n}.
\end{equation}
We have a Lipschitz property for the terms on the right side of (\ref{diffp}), i.e., for a Sobolev exponent $s\geq n+2$ we have for $0\leq t\leq T$ (some horizon $T>0$)
\begin{equation}
\begin{array}{ll}
\sum_{j,\beta}E(\mathbf{v}^{p-1,E}(t))_{i\alpha j\beta}\delta v^{p,E}_{j\beta}(t)\\
\\
\leq \sum_{\beta}\frac{C|\beta||\alpha-\beta|}{1+|\alpha-\beta|^{n+2}}\frac{\sup_{0\leq u\leq T}{\big |}\delta \mathbf{v}^{p,E}(u){\big |}_{h^{s}}}{1+|\beta|^{n+2}}\\
\\
\leq \frac{C}{1+|\alpha|^{n+2}}\sup_{0\leq u\leq T}{\big |}\delta \mathbf{v}^{p,E}(u){\big |}_{h^{s}}.
\end{array}
\end{equation}
The estimate for the $\alpha$-modes of the second term on the right side of (\ref{diffp}) is similar such that we can sum over $\alpha$ and get for some generic $C>0$ the estimate
\begin{equation}\label{diffpest}
\begin{array}{ll}
\sup_{t\in [0,T]}{\Big |} \mathbf{v}^{p,E}(t)-\mathbf{v}^{p-1,E}(t){\Big |}_{h^s}\leq C T\sup_{0\leq u\leq T}{\big |}\delta \mathbf{v}^{p,E}(u){\big |}_{h^{s}}\\
\\
+C T\sup_{0\leq u\leq T}{\big |}\delta \mathbf{v}^{p-1,E}(u){\big |}_{h^{s}}.
\end{array}
\end{equation}
For $T>0$ small enough such that $CT\leq \frac{1}{3}$ we get contraction with contraction constant $c=\frac{1}{2}$. Hence we have local time contraction, and this implies that we have local existence on the interval $\left[0,\frac{1}{3C}\right]$.
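The contraction mechanism behind this local existence argument can be illustrated on a scalar caricature of the linearized iteration. The following toy sketch (ours, not the system above) iterates $\frac{d}{dt}v^{p} = v^{p-1}v^{p}$ with explicit Euler steps and exhibits the shrinking successive differences:

```python
def picard_iterates(h, T, steps, iterations):
    """Toy linearized iteration v_p' = v_{p-1} v_p, v_p(0) = h, solved by
    explicit Euler on a uniform grid; returns the list of iterates."""
    dt = T / steps
    iterates = [[h] * (steps + 1)]          # v_0: constant initial guess
    for _ in range(iterations):
        prev = iterates[-1]
        v = [h]
        for j in range(steps):
            v.append(v[-1] + dt * prev[j] * v[-1])
        iterates.append(v)
    return iterates

def sup_diff(u, v):
    """Supremum norm of the difference of two grid functions."""
    return max(abs(a - b) for a, b in zip(u, v))
```

For small enough $hT$ the supremum differences of successive iterates decrease geometrically, mirroring the contraction estimate above.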
Why is it not possible to extend this argument and iterate with respect to time using the semi-group property in order to obtain a global solution of the Euler equation? Well, it is possible to obtain global solutions of the Euler equation, i.e., of the nonlinear equation
\begin{equation}
\frac{d\mathbf{v}^{E}}{dt}=E(\mathbf{v}^{E}(t))\mathbf{v}^{E},~\mathbf{v}^{E}(0)=\mathbf{h}^F
\end{equation}
but uniqueness is lost for dimension $n\geq 3$. This seems paradoxical, as contraction results naturally lead to uniqueness. However, the problem is that we cannot control the coefficient $\mathbf{v}^{E}(t)$ in the Euler matrix $E(\mathbf{v}^{E}(t))$ a priori globally; we can control it locally in time but not globally. To see this, consider the ansatz
\begin{equation}
\mathbf{v}^E(t):=\frac{\mathbf{u}^E(\tau)}{1-t}
\end{equation}
along with
\begin{equation}
\tau =-\ln\left(1-t \right)
\end{equation}
for the Euler equation. This leads to
\begin{equation}
\frac{d \mathbf{v}^E(t)}{dt}=\frac{\mathbf{u}^E(\tau)}{(1-t)^2}
+\frac{\frac{ d \mathbf{u}^E(\tau)}{d\tau}}{(1-t)}\frac{d\tau}{dt},
\end{equation}
where $\frac{d\tau}{dt}=\frac{1}{1-t}$, such that
\begin{equation}\label{eulerinit}
\begin{array}{ll}
\frac{d\mathbf{u}^{E}}{d\tau}=E(\mathbf{u}^{E}(\tau))\mathbf{u}^{E}-\mathbf{u}^{E},\\
\\
\mathbf{u}^{E}(0)=\mathbf{h}^F,
\end{array}
\end{equation}
where the equation for $t\in [0,1)$ is transformed to an equation for $\tau\in [0,\infty)$. Note that for this ansatz we have used
\begin{equation}
\frac{E(\mathbf{u}^{E}(\tau))\mathbf{u}^{E}(\tau)}{(1-t)^2}=E(\mathbf{v}^{E}(t))\mathbf{v}^{E}(t). \end{equation}
Note that the equation for $\mathbf{u}^E$ has a damping term which helps in order to prove global solutions. Note that any solution of (\ref{eulerinit}) which is well defined on the whole domain (corresponding to the domain $t\in [0,1]$ in original coordinates), and which satisfies
\begin{equation}
\mathbf{u}^{E}(1)\neq 0
\end{equation}
corresponds to a solution of the Euler equation which becomes singular at time $t=1$. Can such a solution be obtained? It seems difficult to observe this from the equation for $\mathbf{u}^E$ as it stands, but consider the related transformation
\begin{equation}
\mathbf{v}^E(t):=\frac{\mathbf{u}^{\mu,E}(\tau^{\mu})}{\mu-t}
\end{equation}
along with
\begin{equation}
\tau^{\mu} =-\ln\left(1-\frac{t}{\mu} \right)
\end{equation}
for the Euler equation. This leads to
\begin{equation}
\frac{d \mathbf{v}^E(t)}{dt}=\frac{\mathbf{u}^{\mu,E}(\tau^{\mu})}{(\mu-t)^2}
+\frac{\frac{ d \mathbf{u}^{\mu,E}(\tau^{\mu})}{d\tau^{\mu}}}{(\mu-t)}\frac{d\tau^{\mu}}{dt},
\end{equation}
where $\frac{d\tau^{\mu}}{dt}=\frac{1}{1-\frac{t}{\mu}}\frac{1}{\mu}=\frac{1}{\mu-t}$, such that
we get a formally identical equation
\begin{equation}\label{eulerinitmu}
\begin{array}{ll}
\frac{d\mathbf{u}^{\mu,E}}{d\tau^{\mu}}=E(\mathbf{u}^{\mu,E}(\tau^{\mu}))\mathbf{u}^{\mu,E}-\mathbf{u}^{\mu,E},\\
\\
\mathbf{u}^{\mu,E}(0)=\mathbf{h}^F,
\end{array}
\end{equation}
but now the equation for $t\in [0,\mu)$ is transformed to an equation for $\tau^{\mu}\in [0,\infty)$. Now if the data at time $t=0$ are different from zero, we can always choose $\mu$ small enough such that any regular velocity function solution evaluated at time $t=\mu$ is different from zero, i.e., $\mathbf{v}^E(\mu)\neq 0$ (use a Taylor formula and the equation for the original velocity vector). This shows that singular solutions are generic and proves the Corollary about singular equations. On the other hand we can obtain a local time contraction result for the Euler equation in exponentially time-weighted norms and repeat this argument. This leads to a global solution branch, i.e., a global solution which is not unique.
Next, let us be a little bit more explicit about the Euler scheme (the lowest time order scheme we propose). In any case, we define a time-discretized subscheme at each stage $m\geq 1$ and use the Trotter product formula for dissipative operators above locally in time in order to show that we get a converging subscheme. The convergence can be based on compactness arguments, and it can be based on global contraction with respect to a time-weighted norm introduced above. In the latter case we need more regularity (stronger polynomial decay) as we shall see. Another possibility is to establish a time-local contraction at each stage $m\geq 1$, i.e. in order to establish a solution $\mathbf{v}^{r,m,F}$ at stage $m$. The time-local contraction can be iterated due to the semi-group property of the operator such that the subscheme becomes global in time. At each iteration step $m$ the time-discretized scheme for each global linear equation works for the uncontrolled scheme and the controlled scheme if we know that the data $\mathbf{v}^{m-1,F}$ (resp. $\mathbf{v}^{r, m-1,F}$) computed at the preceding step are bounded and regular, i.e., $\mathbf{v}^{r,m-1,F}(t)\in h^s\left({\mathbb Z}^n\right)$ for $s\geq n+2$ with $n\geq 3$. Both subschemes have a limit with and without control function. It is in the limit with respect to the stage $m$ that the control function $r$ becomes useful. It also becomes useful for designing algorithms. Next we write down the subscheme for the uncontrolled subscheme and the controlled subschemes. Later we observe that the control function is globally bounded in order to prove that there is a limit as $m\uparrow \infty$. For each step $m$ of the construction, however, the arguments for the controlled and the uncontrolled scheme are on the same footing. We describe the global linearized scheme, i.e., the scheme based on global linear equations which are equivalent to partial integro-differential equations in the original space.
The procedure is defined recursively. For $T>0$ arbitrary we define a scheme which converges on $[0,T]$ to the approximative solution $\mathbf{v}^{r,m,F}$. Having defined $\mathbf{v}^{m-1,F}$ at stage $m\geq 1$ we define a series $\mathbf{v}^{r,m,F,p},~p\geq 1$, where $p$ is a natural number index which refers to a global time discretization with points $t^p_q$, where $q\in \left\lbrace 1,\cdots ,2^p\right\rbrace$
along with $t^p_q-t^{p}_{q-1}=T2^{-p}$ for all $p\geq 1$, and $t_0:=0$, such that we get a contractive scheme with respect to a certain time-weighted Sobolev norm.
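The dyadic time grid $t^p_q=qT2^{-p}$ underlying the subscheme is straightforward to generate; a trivial sketch (function name ours):

```python
def dyadic_grid(T, p):
    """Time points t^p_q = q * T * 2**(-p) for q = 0, ..., 2**p."""
    return [q * T / 2 ** p for q in range(2 ** p + 1)]
```

Each refinement level $p$ halves the step size $T2^{-p}$, which is the quantity entering the contraction estimate for the time-weighted norm.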
Next for given $p\geq 1$ and $q\in \left\lbrace 1,\cdots ,2^p\right\rbrace$ we define the equation for $\mathbf{v}^{r,m,F,p,q}$ on the interval $\left[ t^p_{q-1},t^{p}_{q}\right]$, and where the initial data are given by
\begin{equation}
\mathbf{v}^{r,m,F,p,q-1}\left(t^p_{q-1}\right).
\end{equation}
Here for $q=1$ we have the initial data
\begin{equation}
\mathbf{v}^{r,m,F,p,0}\left(t^p_{0}\right)=\mathbf{h}^{r,F}.
\end{equation}
Note that the vector $\mathbf{h}^{r,F}$ equals the initial data $\mathbf{h}^F$, but without the zero modes, i.e.,
\begin{equation}
\mathbf{h}^{r,F}=\left(h_1^{r,F},\cdots,h^{r,F}_n\right)^T,
\end{equation}
where for $1\leq i\leq n$ we have
\begin{equation}
h^{r,F}_i=\left(h_{i\alpha} \right)_{\alpha\in {\mathbb Z}^n\setminus \left\lbrace 0 \right\rbrace }.
\end{equation}
Next we define for each $p\geq 1$ the sequence of local equations for $\mathbf{v}^{r,m,F,p,q}$. This leads to a global sequence $\left(\mathbf{v}^{r,m,F,p}\right)_{p\geq 1}$ defined on $[0,T]$ for arbitrary $T>0$. The next goal is to obtain a contraction result with respect to the iteration number $p$ in order to establish the existence of global regular solutions $\mathbf{v}^{r,m,F}$ for the approximative problems
\begin{equation}\label{navode2*rewrm}
\begin{array}{ll}
\frac{d \mathbf{v}^{r,m,F}}{dt}=A^{r,NS}\left(\mathbf{v}^{r,m-1}\right) \mathbf{v}^{r,m,F},
\end{array}
\end{equation}
where $\mathbf{v}^{r,m,F}=\left(\mathbf{v}^{r,m,F}_1,\cdots ,\mathbf{v}^{r,m,F}_n\right)^T$ and where for each $m\geq 1$ the initial data are given by $\mathbf{h}^{r,F}$. Here we look at $A^{r,NS}$ defined above as an operator such that $A^{r,NS}\left(\mathbf{v}^{r,m-1}\right)$ is obtained by applying this operator to the argument $\mathbf{v}^{r,m-1}$ instead of $\mathbf{v}^{r}$. We shall give the details for the iteration steps $p,q\geq 1$. First we have to obtain $\mathbf{v}^{r,m,F,p,q}$ for each $p\geq 1$ and $q\in \left\lbrace 1,\cdots ,2^p\right\rbrace$. On the time interval $\left[ t^p_{q-1},t^{p}_{q}\right]=\left[ \frac{q-1}{2^p}T,\frac{q}{2^p}T\right]$ we have the local Cauchy problem
\begin{equation}\label{navode2*rewrmpq}
\begin{array}{ll}
\frac{d \mathbf{v}^{r,m,F,p,q}}{dt}=A^{r,NS}\left(\mathbf{v}^{r,m-1}\right) \mathbf{v}^{r,m,F,p,q},
\end{array}
\end{equation}
where $\mathbf{v}^{r,m,F,p,q}=\left(\mathbf{v}^{r,m,F,p,q}_1,\cdots ,\mathbf{v}^{r,m,F,p,q}_n\right)^T$ and where for each $m\geq 1$ the initial data are given by $\mathbf{v}^{r,m,F,p,q-1}\left( t^p_{q-1}\right) $.
Next we explicitly describe the matrix $A^{r,NS}\left(\mathbf{v}^{r,m-1}\right) $, which is a $n\left( {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace \right) \times n\left({\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace\right) $-matrix with
\begin{equation}
A^{r,NS}\left(\mathbf{v}^{r,m-1}\right) =\left(A^{r,NS}_{ij}\left(\mathbf{v}^{r,m-1}\right)\right)_{1\leq i,j\leq n}
\end{equation}
where for $1\leq i,j\leq n$ the entry $A^{r,NS}_{ij}\left(\mathbf{v}^{r,m-1}\right) $ is a ${\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace \times {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace $-matrix. We have
\begin{equation}
\begin{array}{ll}
A^{r,NS}\left( \mathbf{v}^{r,m-1}\right)\mathbf{v}^{r,m,F,p,q}=\\
\\
\left(\sum_{j=1}^nA^{r,NS}_{1j}\left(\mathbf{v}^{r,m-1}\right) \mathbf{v}^{r,m,F,p,q}_j ,\cdots,\sum_{j=1}^nA^{r,NS}_{nj}\left( \mathbf{v}^{r,m-1}\right) \mathbf{v}^{r,m,F,p,q}_j \right)^T,
\end{array}
\end{equation}
where for all $1\leq i\leq n$
\begin{equation}
\begin{array}{ll}
\sum_{j=1}^nA^{r,NS}_{ij}\left(\mathbf{v}^{r,m-1}\right) \mathbf{v}^{r,m,F,p,q}_j=\\
\\
\left(\left( \sum_{j=1}^n \sum_{\beta\in {\mathbb Z}^n}A^{r,NS}_{i\alpha j\beta}\left(\mathbf{v}^{r,m-1}\right) v^{r,m,F,p,q}_{j\beta}\right)_{\alpha\in {\mathbb Z}^n} \right)^T_{1\leq i\leq n}.
\end{array}
\end{equation}
The entries $A^{r,NS}_{i\alpha j\beta}\left(\mathbf{v}^{r,m-1}\right)$ are determined by the equation as follows. On the diagonal, i.e., for $i=j$ and for $\alpha,\beta\neq 0$ we have the entries
\begin{equation}\label{controlmatrix}
\begin{array}{ll}
\delta_{ij}A^{r,NS}_{i\alpha j\beta}\left(\mathbf{v}^{r,m-1}\right)=\delta_{ij}\sum_{j=1}^n \nu\left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)
-\delta_{ij}\sum_{j=1}^n\frac{2\pi i \beta_j}{l}v^{r,m-1}_{j(\alpha-\beta)}\\
\\+\delta_{ij}2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)v^{r,m-1}_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi^2\alpha_i^2},
\end{array}
\end{equation}
where for $\alpha=\beta$ the terms of the form $v^r_{k(\alpha-\beta)}$ are zero (such that we do not need to exclude these terms explicitly). Furthermore, off-diagonal we have for $i\neq j$ the entries
\begin{equation}
(1-\delta_{ij})A^{r,NS}_{i\alpha j\beta}\left(\mathbf{v}^{r,m-1}\right)=2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)v^{r,m-1}_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi\alpha_i^2}.
\end{equation}
The idea for a global scheme is to determine $\mathbf{v}^{r,F}=\lim_{m\uparrow \infty}\mathbf{v}^{r,m,F}$ for a simple control function (the one which cancels the zero modes at stage $m$ of the iteration) and a certain iteration
\begin{equation}\label{navode2*rewrlin}
\begin{array}{ll}
\frac{d \mathbf{v}^{r,m,F}}{dt}=A^{NS}\left(\mathbf{v}^{r,m-1}\right) \mathbf{v}^{r,m,F},
\end{array}
\end{equation}
starting with a time-independent matrix $A^{NS}\left(\mathbf{v}^{r,-1}\right) :=A^{NS}\left(\mathbf{h}\right)$ for $m=0$. We shall use the abbreviation
\begin{equation}
\begin{array}{ll}
A^r_m:=A^{NS}\left(\mathbf{v}^{r,m-1}\right),
\end{array}
\end{equation}
which is time-dependent for $m\geq 1$.
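The structure of this global iteration can be illustrated by a Python sketch for a scalar toy analogue (all names and the model equation $v'=-v^2$ are our illustrative choices, not the mode system itself): the coefficient of the linear problem at stage $m$ is frozen at the iterate of stage $m-1$, starting from the time-independent datum.

```python
# Toy scalar analogue of the global linearization scheme: at stage m we solve
# the *linear* problem dv^m/dt = -v^{m-1}(t) * v^m(t) with v^m(0) = h, where
# v^{-1} is the constant initial datum. The fixed point solves dv/dt = -v^2.
def solve_linearized(prev, h0, dt):
    """Euler-integrate dv/dt = -prev[k] * v with v(0) = h0."""
    v, traj = h0, [h0]
    for k in range(len(prev) - 1):
        v += dt * (-prev[k] * v)
        traj.append(v)
    return traj

def iterate(h0=1.0, T=1.0, steps=1000, m_max=8):
    dt = T / steps
    prev = [h0] * (steps + 1)          # stage m = -1: frozen constant datum
    for _ in range(m_max):
        prev = solve_linearized(prev, h0, dt)
    return prev

traj = iterate()
# exact solution of v' = -v^2, v(0) = 1 is v(t) = 1/(1+t), so v(1) = 0.5
assert abs(traj[-1] - 0.5) < 1e-2
```

The analogue of $A^{NS}\left(\mathbf{v}^{r,m-1}\right)$ here is simply multiplication by $-v^{m-1}(t)$; the only point is that each stage is a linear problem with coefficients frozen from the previous stage.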
The proof of global regular existence for the incompressible Navier--Stokes equation can be obtained by showing that the sequence $\left( \mathbf{v}^{r,m,F}\right)_{m\geq 1}$
is a Cauchy sequence in $C^1\left((0,T),\left(h^s\left({\mathbb Z}^n\right),\cdots ,h^s\left({\mathbb Z}^n\right) \right)\right) $ with the natural norm (supremum with respect to time).
An alternative is a contraction result. Contraction results have the advantage that they lead to uniqueness results with respect to the related Banach space. The disadvantage is that we need more regularity (order of polynomial decay) in general if we want to have contraction.
However, in a first step we have to ensure the existence of the solutions $\mathbf{v}^{r,m,F}$ of the linearized problems in appropriate Banach spaces.
Next we define a Banach space in order to get a contraction result for the subscheme $\left( \mathbf{v}^{r,m,F,p}\right)_{p\geq 1}$, where the index $p$ is related to the time discretization of size $T\frac{1}{2^p}$ we mentioned above. Note that for each $p\geq 1$ the problem for $\mathbf{v}^{r,m,F,p}$ on $[0,T]$ is defined by $2^p$ recursively defined subproblems for $\mathbf{v}^{r,m,F,p,q}$ for $1\leq q\leq 2^p$, which are defined on the intervals $\left[t^p_{q-1},t^p_q\right]$, where the data for the problem for $q\geq 2$ are given by the final data of the subproblem for $\mathbf{v}^{r,m,F,p,q-1}$ evaluated at $t^p_{q-1}$.
Next consider the function space
\begin{equation}
\begin{array}{ll}
B^{n,s}:=\left\lbrace t\rightarrow \mathbf{u}^F(t)=\left(\mathbf{u}^F_1,\cdots ,\mathbf{u}^F_n\right)^T|\forall t\geq 0:~\mathbf{u}^F_i(t)\in h^s\left({\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace \right) \right\rbrace
\end{array}
\end{equation}
Note that we excluded the zero modes because this Banach space is designed for the controlled equations.
For $\mathbf{u}^F\in B^{n,s}$ define
\begin{equation}
{\Big|}\mathbf{u}^F
{\Big |}^{T,\mbox{exp}}_{h^s,C}:=
\sup_{t\in \left[0,T\right] }\sum_{i=1}^n\exp\left( -Ct\right) {\Big |}\mathbf{u}^F_i(t){\Big |}_{h^s}.
\end{equation}
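A discrete sketch of this time-weighted norm (the $h^s$-norm is replaced by a plain absolute value on sampled values; the function and variable names are our own):

```python
import math

# Discrete sketch of the time-weighted norm sup_{t in [0,T]} exp(-C t)|u(t)|,
# with |.|_{h^s} replaced by a plain absolute value on sampled values.
def weighted_sup_norm(samples, dt, C):
    return max(math.exp(-C * k * dt) * abs(u) for k, u in enumerate(samples))

dt = 0.01
samples = [math.exp(k * dt) for k in range(101)]      # u(t) = e^t on [0, 1]
# with C = 1 the exponential growth is exactly compensated:
assert abs(weighted_sup_norm(samples, dt, C=1.0) - 1.0) < 1e-9
# without the weight, the norm sees the full growth e^1 > 2:
assert weighted_sup_norm(samples, dt, C=0.0) > 2.0
```

The weight $\exp(-Ct)$ is what later turns an integral bound with Lipschitz constant $L$ into a contraction factor $L/C$.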
In the following we abbreviate
\begin{equation}\label{ans}
A^r_m=A^{NS}\left( \mathbf{v}^{r,m-1,F}\right).
\end{equation}
In particular, the evaluation of the right side of (\ref{ans}) at time $t$ is denoted by $A^r_m(t)$. Recall that
\begin{equation}
{\mathbb Z}^{n,0}:={\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace
\end{equation}
At each substep $p$ we apply the Trotter product representation of the subproblems for $\mathbf{v}^{r,m,F,p,q}$, $1\leq q\leq 2^p$, i.e., $2^p$ times. Note that at each stage of the construction we know that the Trotter product formula is valid in regular function spaces, as the coefficients are not time-dependent for each of these $2^p$ subproblems.
For $s>0$ let us assume that
\begin{equation}\label{mminus1ass}
t\rightarrow \mathbf{v}^{r,m-1,F}_i(t) \in C^k\left([0,T],h^{s}\left({\mathbb Z}^{n,0}\right)\right)~\mbox{ for }1\leq i\leq n.
\end{equation}
Here $C^k\left([0,T],h^{s}\left( {\mathbb Z}^{n,0}\right)\right) $
is the function space of time-dependent function vectors with values in $h^{s}\left({\mathbb Z}^{n,0}\right)$ which have $k$-times differentiable component functions $t\rightarrow v^{r,m-1}_{i\alpha}(t)$ for
$1\leq i\leq n$ and $\alpha \in {\mathbb Z}^{n,0}$. Note that there is no need here to be very restrictive with respect to the degree of the Sobolev norm $s$. We should have $s>n$ in order to obtain objects which exist in more than a distributional sense, of course, and we stick to our assumption $s>n+2$.
Assuming sufficient regularity at the previous stage $m-1$ of the function $\mathbf{v}^{r,m-1,F}$, we have the Taylor formula
\begin{equation}
\begin{array}{ll}
\mathbf{v}^{r,m-1,F}(t+h)=\sum_{0\leq p\leq k-1}D^p_t\mathbf{v}^{r,m-1,F}(t)\frac{h^p}{p!}\\
\\
+\frac{h^k}{(k-1)!}\int_0^1(1-\theta)^{k-1}D^k_t\mathbf{v}^{r,m-1,F}(t+\theta h)d\theta
\end{array}
\end{equation}
for $t\in [0,T-h]$ and $h>0$. Here,
\begin{equation}
D^p_t\mathbf{v}^{r,m-1,F}(t):=\left(\frac{d^p}{dt^p}v^{r,m-1}_{i\alpha}\right)^T_{1\leq i\leq n,\alpha\in {\mathbb Z}^{n,0}}.
\end{equation}
In the following we assume that $\mathbf{h}^{r,F}_i,~\mathbf{v}^{r,m-1,F}_i\in h^s\left( {\mathbb Z}^{n,0}\right)$ for $s\geq n+2$ and $1\leq i\leq n$, because this property is inherited by $\mathbf{v}^{r,m,F}_i$ as we go from stage $m-1$ to stage $m$. Note that the matrix-valued function $t\rightarrow A^r_m(t+h)-A^r_m(t)=A^{r,NS}\left( \mathbf{v}^{r,m-1,F}(t+h)\right)-A^{r,NS}\left( \mathbf{v}^{r,m-1,F}(t)\right)$ applied to the data $\mathbf{h}^{r,F}$ is Lipschitz with respect to the $|.|_{h^{s}}$-norm, i.e., for some finite Lipschitz constant $L>0$ we have
\begin{equation}
\begin{array}{ll}
{\Big|}\left( A^{r,NS}\left( \mathbf{v}^{r,m-1,F}(t+h)\right)-A^{r,NS}\left( \mathbf{v}^{r,m-1,F}(t)\right) \right) \mathbf{h}^{r,F}{\Big |}
_{h^{s}}\\
\\
\leq L |\mathbf{v}^{r,m-1,F}(t+h)-\mathbf{v}^{r,m-1,F}(t)|_{h^{s}}.
\end{array}
\end{equation}
Here, we use the regularity (polynomial decay) of the initial data $\mathbf{h}^{r,F}$ and the assumed regularity (polynomial decay) of $\mathbf{v}^{r,m-1,F}(t)$ in order to compensate for the quadratic terms in the Leray projection term approximation above, i.e., the quadratic multiindex terms related to the multiindices $\beta$ in the Leray projection term, and for the linear multiindex terms related to the convection term. Here the initial data for the subproblems for $\mathbf{v}^{r,m,F,p,q}$ inherit sufficient regularity in order to preserve the Lipschitz property. Let us have a closer look at this.
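As a toy illustration of this Lipschitz property, the following sketch applies a truncated one-dimensional convection-type operation $w\mapsto \big(\sum_\beta (2\pi i\beta/l)\,w_{\alpha-\beta}\,h_\beta\big)_\alpha$ to fixed data $h$. Since the operation is linear in the generating modes $w$, differences scale exactly linearly (the truncation, names, and decay profiles are our assumptions, not the full Leray projection term):

```python
import math

# Truncated 1D convection-type operation applied to fixed decaying data h.
def apply_convection(w, h, N, l=1.0):
    out = {}
    for a in range(-N, N + 1):
        s = 0j
        for b in range(-N, N + 1):
            s += (2.0 * math.pi * 1j * b / l) * w.get(a - b, 0j) * h.get(b, 0j)
        out[a] = s
    return out

def l2_norm(u):
    return math.sqrt(sum(abs(x) ** 2 for x in u.values()))

N = 8
# asymmetric decaying data, so the perturbation is not annihilated by symmetry
h = {b: 1.0 / (1 + (b - 1) ** 2) ** 2 for b in range(-N, N + 1)}
w1 = {b: 1.0 / (1 + abs(b)) ** 3 for b in range(-N, N + 1)}
base = apply_convection(w1, h, N)

def diff_norm(eps):
    w = {b: w1[b] + eps for b in w1}
    pert = apply_convection(w, h, N)
    return l2_norm({a: pert[a] - base[a] for a in base})

# halving the perturbation of the generating modes halves the output difference
d1, d2 = diff_norm(1e-3), diff_norm(5e-4)
assert d1 > 0 and abs(d1 - 2.0 * d2) < 1e-10
```

The polynomial decay of $h$ plays the role of the decay of $\mathbf{h}^{r,F}$, which keeps the weighted sums finite despite the linear factor $\beta$.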
At stage $m$ of the global iteration we solve an equation of the form (\ref{navode2*rewrm}). Let us describe this for a unit time interval $[0,1]$, i.e., with time horizon $T=1$ for a moment without loss of generality. The generalization of the description for any finite time horizon $T>0$ is straightforward (we do this below). We do this by a series of time discretizations at time points
\begin{equation}
t_i\in \left\lbrace \frac{k}{2^p}=t^p_k{\big |}0\leq k\leq 2^{p}-1\right\rbrace
\end{equation}
in order to apply Trotter product formulas at each substep. The approximation at stage $p$ is denoted by $\mathbf{v}^{r,m,F,p}$ and is determined recursively by $2^p$ time-homogeneous problems on time intervals $\left[ t^p_{k},t^p_{k+1}\right] $. At substep $1\leq q\leq 2^p$ we have computed $\mathbf{v}^{r,m,F,p}(t^p_{q-1})$. We then evaluate the matrix in (\ref{navode2*rewrm}) at time $t^p_{q-1}$ and obtain the problem
\begin{equation}\label{navode2*rewrm2}
\begin{array}{ll}
\frac{d \mathbf{v}^{r,m,F,p,q}}{dt}=A^{r}_m\left(t^p_{q-1}\right) \mathbf{v}^{r,m,F,p,q},
\end{array}
\end{equation}
on $\left[ t^p_{q-1},t^p_{q}\right] $, where we take the data from the previous time substep $t^p_{q-1}$ at stage $m$, i.e.,
\begin{equation}\label{navode2*rewrm2data}
\mathbf{v}^{r,m,F,p,q}(t^p_{q-1})=\mathbf{v}^{r,m,F,p,q-1}(t^p_{q-1}).
\end{equation}
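The following sketch illustrates these frozen-coefficient subproblems on a scalar toy problem $v'=a(t)v$ (the names and the choice $a=\sin$ are ours): on each dyadic subinterval the coefficient is evaluated at the left endpoint and the resulting autonomous problem is solved exactly, producing the expected first-order convergence in the step size $2^{-p}$:

```python
import math

# Frozen-coefficient scheme: on each dyadic interval [t^p_{q-1}, t^p_q] of
# length T/2^p the coefficient is frozen at the left endpoint and the
# autonomous subproblem is solved exactly by an exponential.
def frozen_scheme(a, v0, T, p):
    n = 2 ** p
    dt = T / n
    v = v0
    for q in range(n):
        v *= math.exp(a(q * dt) * dt)   # exact solve of v' = a(t^p_q) v
    return v

exact = math.exp(1.0 - math.cos(1.0))   # v' = sin(t) v, v(0) = 1, at t = 1
errs = [abs(frozen_scheme(math.sin, 1.0, 1.0, p) - exact) for p in (4, 5, 6)]
assert errs[2] < errs[1] < errs[0]      # error decreases like O(2^{-p})
```

In the full scheme the scalar exponential is replaced by the Trotter product representation of $\exp\left(A^r_m(t^p_{q-1})\,t\right)$ on the mode system.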
The problem described in (\ref{navode2*rewrm2}) and (\ref{navode2*rewrm2data}) can then be solved by a Trotter product formula. Now compare this with the set of problems at the next stage $p+1$. On the time interval $\left[t^{p}_{q-1},t^p_q\right]$ we have two subproblems at stage $p+1$. Note that $t^{p+1}_{2(q-1)}=t^p_{q-1}$. On the time interval $\left[t^{p}_{q-1},t^{p+1}_{2q-1}\right]$ we have to solve for
\begin{equation}\label{navode2*rewrm21}
\begin{array}{ll}
\frac{d \mathbf{v}^{r,m,F,p+1,2q-1}}{dt}=A^{r}_m\left(t^p_{q-1}\right) \mathbf{v}^{r,m,F,p+1,2q-1},
\end{array}
\end{equation}
with the initial data from the previous time step, i.e.,
\begin{equation}\label{navode2*rewrm2data21}
\mathbf{v}^{r,m,F,p+1,2q-1}(t^{p+1}_{2(q-1)})=\mathbf{v}^{r,m,F,p+1,2q-2}(t^{p+1}_{2(q-1)}),
\end{equation}
and where we use
\begin{equation}
A^{r}_m\left(t^p_{q-1}\right)=A^{r}_m\left(t^{p+1}_{2(q-1)}\right).
\end{equation}
We then have a second subproblem on the time interval $\left[t^{p+1}_{2q-1},t^{p+1}_{2q}\right]$, where we have to solve for
\begin{equation}\label{navode2*rewrm213}
\begin{array}{ll}
\frac{d \mathbf{v}^{r,m,F,p+1,2q}}{dt}=A^{r}_m\left(t^{p+1}_{2q-1}\right) \mathbf{v}^{r,m,F,p+1,2q},
\end{array}
\end{equation}
with the initial data
\begin{equation}\label{navode2*rewrm2data213}
\mathbf{v}^{r,m,F,p+1,2q}(t^{p+1}_{2q-1})=\mathbf{v}^{r,m,F,p+1,2q-1}(t^{p+1}_{2q-1}).
\end{equation}
The regular spaces with polynomially decaying modes make it possible to generalize observations well known for the Euler scheme for finite-dimensional systems quite straightforwardly. In particular, a global $O(h)$ error (for time step size $h$) is a straightforward consequence of the Taylor formula considered above. This may also be used to estimate the difference of the solutions $\mathbf{v}^{r,m,F,p+1,2q-1}$ together with $\mathbf{v}^{r,m,F,p+1,2q-2}$ compared to $\mathbf{v}^{r,m,F,p,q}$.
Indeed using the infinite linear algebra lemmas above we observe
\begin{lem}
Let $s>n+ 2$ and $T>0$ be given. For $k=2$ assume that $\mathbf{v}^{r,m-1,F}$ is regular as in (\ref{mminus1ass}).
Then for some finite $C>0$
\begin{equation}
\begin{array}{ll}
\sup_{u\in [0,T]}\exp(-Cu){\Big|}\mathbf{v}^{r,m,F,p}(u)-\mathbf{v}^{r,m,F,p-1}(u){\Big |}_{h^s}
\leq \frac{L}{2^{p-1}}.
\end{array}
\end{equation}
\end{lem}
\begin{proof}
For notational reasons the size of the torus is assumed to be one. We remark that for general time horizon $T>0$
\begin{equation}
t^p_{2(q-1)}=2(q-1)2^{-p}T=(q-1)2^{-(p-1)}T=t^{p-1}_{q-1},
\end{equation}
and accordingly
\begin{equation}\label{matdiff}
A^r_{m}(t^p_{2(q-1)})=A^r_{m}(t^{p-1}_{q-1}).
\end{equation}
For some vector $\mathbf{C}^{h,p}_{q-1}\in h^s\left({\mathbb Z}^n\right)$ with $s\geq n+2$ we write the difference as
\begin{equation}\label{chp}
\mathbf{v}^{r,m,F,p,q-1}\left( t^p_{2(q-1)}\right)- \mathbf{v}^{r,m,F,p-1,q-1}\left( t^{p-1}_{q-1}\right)=\mathbf{C}^{h,p}_{q-1},
\end{equation}
and we consider some properties which the vector $\mathbf{C}^{h,p}_{q}$ inherits from $\mathbf{C}^{h,p}_{q-1}$.
Next at stage $p\geq 1$ consider the initial data $\mathbf{v}^{r,m,F}\left( t^{p-1}_{q-1}\right) $ of the problem at substep $1\leq q\leq 2^{p-1}$.
We have $t\in \left[t^p_{2(q-1)},t^{p}_{2q}\right]= \left[t^{p-1}_{q-1},t^{p-1}_q\right]$, where it makes sense to consider the subintervals $\left[t^p_{2(q-1)},t^{p}_{2q-1}\right]$ and $\left[t^p_{2q-1},t^{p}_{2q}\right]$. We may consider $t\in \left[t^p_{2q-1},t^{p}_{2q}\right]$ w.l.o.g., because the following estimate simplifies for $t\in \left[t^p_{2q-2},t^{p}_{2q-1}\right]$. For $t\in \left[t^p_{2q-1},t^{p}_{2q}\right]$ we have
\begin{equation}\label{solm11}
\begin{array}{ll}
{\Big |}\mathbf{v}^{r,m,F,p,2q}_i(t)-\mathbf{v}^{r,m,F,p-1,q}_i(t){\Big |}_{h^s}\\
\\
\leq {\Big |}\mathbf{v}^{r,m,F,p,2q}_i(t)-\mathbf{v}^{r,m,F,p,2q}_i(t^p_{2q-1})-\left( \mathbf{v}^{r,m,F,p-1,q}_i(t)-\mathbf{v}^{r,m,F,p-1,q}_i(t^p_{2q-1})\right) {\Big |}_{h^s}\\
\\
+{\Big |}\mathbf{v}^{r,m,F,p,2q-1}_i(t^p_{2q-1})-\mathbf{v}^{r,m,F,p-1,q}_i(t^{p}_{2q-1}){\Big |}_{h^s},
\end{array}
\end{equation}
where we use $\mathbf{v}^{r,m,F,p,2q}_i(t^p_{2q-1})=\mathbf{v}^{r,m,F,p,2q-1}_i(t^p_{2q-1})$.
Since $t^p_{2(q-1)}=t^{p-1}_{q-1}$ and with (\ref{chp}) above for the last term in (\ref{solm11}) we have
\begin{equation}
\begin{array}{ll}
{\Big |}\mathbf{v}^{r,m,F,p,2q-1}_i(t^p_{2q-1})-\mathbf{v}^{r,m,F,p-1,q}_i(t^{p}_{2q-1}){\Big |}_{h^s}\\
\\
={\Big |}\left( \exp\left(A^r_m(t^p_{2(q-1)})\left( t^p_{2q-1}-t^p_{2(q-1)}\right) \right)\mathbf{v}^{r,m,F,p,2q-1}(t^p_{2(q-1)})\right)_i
\\
\\
-\left( \exp\left(A^r_m(t^{p-1}_{q-1})\left( t^p_{2q-1}-t^p_{2(q-1)}\right)\right)\mathbf{v}^{r,m,F,p-1,q}(t^{p-1}_{q-1})\right)_i{\Big |}_{h^s}\\
\\
= {\Big |}\exp\left(A^r_m(t^p_{2(q-1)})\left( t^p_{2q-1}-t^p_{2(q-1)}\right)\right)\mathbf{C}^{h,p}_{q-1}{\Big |}_{h^s}\\
\\
\leq
{\Big |}\exp\left(\frac{C}{4^p}\right)\mathbf{C}^{h,p}_{q-1}{\Big |}_{h^s}
\end{array}
\end{equation}
for some finite $C>0$, which depends only on data known at stage $m-1$. Furthermore, for the first term on the right side of (\ref{solm11}) we may use the rough estimate
\begin{equation}\label{solm1122}
\begin{array}{ll}
{\Big |}\mathbf{v}^{r,m,F,p,2q}_i(t)-\mathbf{v}^{r,m,F,p,2q}_i(t^p_{2q-1})-\left( \mathbf{v}^{r,m,F,p-1,q}_i(t)-\mathbf{v}^{r,m,F,p-1,q}_i(t^p_{2q-1})\right) {\Big |}_{h^s}\\
\\
\leq {\Big |}\exp\left(A^r_m(t^p_{2q-1})\left( t-t^p_{2q-1}\right) \right)\mathbf{v}^{r,m,F,p,2q}_i(t^p_{2q-1})-\mathbf{v}^{r,m,F,p,2q}_i(t^p_{2q-1}) \\
\\
-\left( \exp\left(A^r_m(t^{p}_{2q-1})
\left( t-t^p_{2q-1}\right)\right)
\mathbf{v}^{r,m,F,p-1,q}_i(t^p_{2q-1})
-\mathbf{v}^{r,m,F,p-1,q}_i(t^p_{2q-1})\right) {\Big |}_{h^s}.
\end{array}
\end{equation}
For the right side of (\ref{solm1122}) we observe with (\ref{solm11}) that
\begin{equation}\label{solm112233}
\begin{array}{ll}
{\Big |}\exp\left(A^r_m(t^p_{2q-1})\left( t-t^p_{2q-1}\right) \right)\mathbf{v}^{r,m,F,p,2q}_i(t^p_{2q-1})-\mathbf{v}^{r,m,F,p-1,q}_i(t^p_{2q-1}) \\
\\
+\mathbf{v}^{r,m,F,p,2q}_i(t^p_{2q-1})-\mathbf{v}^{r,m,F,p-1,q}_i(t^p_{2q-1}) {\Big |}_{h^s}\\
\\
\leq 2{\Big |}\exp\left(\frac{2C}{4^p}\right)\mathbf{C}^{h,p}_{q-1}{\Big |}_{h^s}.
\end{array}
\end{equation}
Since for each $p\geq 1$ the entries of the sequence $\mathbf{C}^{h,p}_{q}$ are in $O\left(h^2 \right)$, where $h$ denotes the maximal time step size (which is $2^{-p}$ with our choice), the difference to be estimated is in $O(h)$, and we are done.
\end{proof}
The preceding lemma shows that we have a Cauchy sequence
\begin{equation}
\left( \mathbf{v}^{r,m,F,p}\right)_{p\geq 1}
\end{equation}
with respect to a regular (time weighted) norm and with a limit $\mathbf{v}^{r,m,F}$ with
\begin{equation}
\mathbf{v}^{r,m,F}_i(t)\in h^s\left({\mathbb Z}^{n,0}\right)
\end{equation}
for all $t\in [0,T]$. Since we have a dissipative term (damping exponential) in the Trotter product formula, similar observations can be made for the time derivative sequence
\begin{equation}
\left( \frac{d}{dt}\mathbf{v}^{r,m,F,p}\right)_{p\geq 1}.
\end{equation}
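The Cauchy property in $p$ can be checked on the same kind of scalar toy problem (names and the choice $a=\cos$ are our illustrative assumptions): successive dyadic refinements of the frozen-coefficient scheme differ by roughly a factor $2^{-p}$, so the stage-$p$ approximations form a Cauchy sequence:

```python
import math

# Successive dyadic refinements of the frozen-coefficient scheme for the toy
# problem v' = a(t) v differ by O(2^{-p}), mirroring the Cauchy property.
def frozen_scheme(a, v0, T, p):
    n, v = 2 ** p, v0
    dt = T / n
    for q in range(n):
        v *= math.exp(a(q * dt) * dt)
    return v

vals = [frozen_scheme(math.cos, 1.0, 1.0, p) for p in range(3, 9)]
diffs = [abs(x - y) for x, y in zip(vals[1:], vals)]
# each refinement roughly halves the difference to the previous stage
assert all(d2 < 0.7 * d1 for d1, d2 in zip(diffs, diffs[1:]))
```

The geometric decay of the stage differences is the scalar counterpart of the bound $L/2^{p-1}$ in the lemma above.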
Note, however, that the order of regularity $s\geq n+2\geq 5$ can be chosen to be as large as we want, and this can be exploited in order to prove regularity with respect to time $t$ for each $\mathbf{v}^{r,m,F}$ via the defining equation of the latter function. We do not even need estimates for products of functions in Sobolev spaces, which may be borrowed from classical Sobolev space analysis. Using the lemma above for large $s>0$ we may instead use the regularity implied by infinite matrix products as pointed out above.
As a consequence of the preceding lemma we note
\begin{lem} For all $m\geq 1$ and $s>n+ 2\geq 5$ the function
\begin{equation}\label{solm}
\left( \mathbf{v}^{r,m,F}\right) _{i}=\left( T\exp\left(A^r_mt\right)\mathbf{h}^{r,F}\right)_i\in h^s_l\left({\mathbb Z}^{n,0}\right),
\end{equation}
is well-defined, whenever $\mathbf{h}^{r,F}_i\in h^s_l\left({\mathbb Z}^n\right)$.
\end{lem}
The same holds for the uncontrolled approximations, of course. We note
\begin{cor} For all $m\geq 1$ and $s>n+2$ the function
\begin{equation}\label{solm}
\left( \mathbf{v}^{m,F}\right) _{i}=\left( T\exp\left(A_mt\right)\mathbf{h}^{F}\right)_i\in h^s_l\left({\mathbb Z}^n\right)
\end{equation}
is well-defined, whenever $\mathbf{h}^{F}_i\in h^s_l\left({\mathbb Z}^n\right)$.
\end{cor}
However, it is essential to get a uniformly bounded sequence
\begin{equation}\label{sol0}
\left( \mathbf{v}^{r,m,F}\right)_{m\in {\mathbb N}}=
\left( T\exp\left(A^r_mt\right)\mathbf{h}^{r,F}\right)_{m\in {\mathbb N}}
\end{equation}
for some $\nu>0$ (one $\nu$ is sufficient, but we get the bounded sequence for all $\nu$, which is useful for rather straightforward generalized models with spatially dependent viscosity). We remarked in the introduction that we can even choose $\nu$ arbitrarily large (as is also well known); however, this was not needed so far and we shall not need it later on. It is just a useful observation in order to check algorithms in the most simple situation via equivalent formulations with rigorous damping. At this point it is useful to consider the controlled sequence $\left( \mathbf{v}^{r,m,F}_i\right)_{m\in {\mathbb N},~1\leq i\leq n}$. Recall that the functions $\mathbf{v}^{r,m,F}$ are designed such that the zero modes vanish, i.e., $v^{r,m}_{i0}=0$; the control function is defined just this way. Next we show that a uniformly bounded controlled sequence $\left( \mathbf{v}^{r,m,F}_i\right)_{m\in {\mathbb N},~1\leq i\leq n}$ implies uniform boundedness of the uncontrolled sequence $\left( \mathbf{v}^{m,F}_i\right)_{m\in {\mathbb N},~1\leq i\leq n}$. In order to observe this we go back to (\ref{navode200a}).
At stage $m\geq 1$ it is assumed that $v^{r,m-1}_{i0}=0$. The controlled approximating equation at stage $m$ is obtained from (\ref{navode200a}) by elimination of the zero modes. We have for $1\leq i\leq n$ and $\alpha\neq 0$ the equation
\begin{equation}\label{navode200acontr}
\begin{array}{ll}
\frac{d v^{r,m}_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)v^{r,m}_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^{n}\setminus \left\lbrace 0,\alpha\right\rbrace }\frac{2\pi i \gamma_j}{l}v^{r,m-1}_{j(\alpha-\gamma)}v^{r,m}_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n\setminus \left\lbrace 0,\alpha\right\rbrace}4\pi \gamma_j(\alpha_k-\gamma_k)v^{r,m-1}_{j\gamma}v^{r,m}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2}.
\end{array}
\end{equation}
We considered the controlled equation systems as autonomous systems of non-zero modes. However, in order to compare the controlled system with the original one we may define
\begin{equation}\label{zeromcon}
\begin{array}{ll}
\frac{d v^{r,m}_{i0}}{dt}=
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^{n}\setminus \left\lbrace 0\right\rbrace }\frac{2\pi i \gamma_j}{l}v^{r,m-1}_{j(-\gamma)}v^{r,m}_{i\gamma}.
\end{array}
\end{equation}
Note that $\mathbf{v}^{r,m-1,F}_i,\mathbf{v}^{r,m,F}_i\in h^s\left({\mathbb Z}^{n,0}\right) $ for $s>n\geq 3$ implies that the right side of (\ref{zeromcon}) is bounded by a finite constant $C>0$.
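A sketch of this a-posteriori recovery of the zero mode (with a toy scalar right side standing in for the bounded right side of the zero-mode equation above; all names are ours): integrating a right side bounded by $C$ yields at most linear growth in time, hence certainly a bound of the form $\exp(Ct)$:

```python
import math

# The zero mode is recovered by integrating a bounded right side, so it can
# grow at most linearly in t: |v_0(t)| <= C * t.
def integrate_zero_mode(rhs, T, steps):
    dt = T / steps
    v, out = 0.0, [0.0]
    for k in range(steps):
        v += dt * rhs(k * dt)
        out.append(v)
    return out

C, T, steps = 2.0, 5.0, 5000
dt = T / steps
traj = integrate_zero_mode(lambda t: C * math.cos(3.0 * t), T, steps)
# linear-growth bound at every time step of the integration
assert all(abs(v) <= C * k * dt + 1e-9 for k, v in enumerate(traj))
```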
Hence we have
\begin{lem}\label{control}
If the sequence $\left( \mathbf{v}^{r,m,F}\right)_{m\geq 1}$ has an upper bound $c>0$ with respect to the $|.|_s=\sum_{i=1}^n|.|_{h^s}$-norm, we have for some finite $C>0$
\begin{equation}
|r_0(t)|\leq \exp(Ct)
\end{equation}
for all $t\geq 0$.
\end{lem}
It remains to show that there is a finite $C>0$ such that for all $1\leq i\leq n$ and $m\geq 0$ we have
\begin{equation}\label{est0vmF}
|\mathbf{v}^{r,m,F}_{i}(t,.)|_{h^s_l}\leq C.
\end{equation}
Based on the arguments so far there are several ways to get a uniform bound for the sequence $\left( \mathbf{v}^{r,m,F}\right)_{m\geq 1}$. One possibility is to observe that we have upper bounds
\begin{equation}\label{polynomialgrowth}
\sup_{t\in [0,T]}{\big |}\mathbf{v}^{r,m,F}_i(t){\big |}_{h^s}\leq C^m
\end{equation}
for some $s>n$ and all $1\leq i\leq n$. This implies that we have contraction on the interval $[0,T]$ with respect to the norm ${\big |}.{\big |}^{\exp,T}_{h^s,C}$ (with an appropriate generic constant $C>0$). The result in (\ref{polynomialgrowth}) is obtained by time discretization of (\ref{navode2*rewf}). Then approximations $\left( \mathbf{v}^{r,m,F,p,q}(.)\right)_{1\leq q\leq 2^p } $ as in the construction of the solutions $\mathbf{v}^{r,m,F}(t)$ above can be considered. If for $1\leq i\leq n$ $\mathbf{v}^{r,m,F,p}_i(.):[0,T]\rightarrow h^s({\mathbb Z}^n)$ denotes the function which equals the function $\mathbf{v}^{r,m,F,p,q}_i(.)$ on the intervals $\left[ t^p_q,t^p_{q+1}\right]$, then we get
\begin{equation}\label{polynomialgrowth2}
\sup_{t\in [0,T]}{\big |}\mathbf{v}^{r,m,F,p}_i(t){\big |}_{h^s}\leq C^m
\end{equation}
for a constant $C>0$ independent of the stage $p$ such that (\ref{polynomialgrowth}) is satisfied.
Another way is via contraction results on certain balls in appropriate function spaces. The radius of such a ball clearly depends on the size of the initial data and on the size of the horizon. However it is sufficient that for each dual Sobolev norm index $s>0$ and for each data size ${\big |}\mathbf{h}^{F}{\big |}_s$ and each horizon size $T>0$ we find a contraction result on an appropriate ball for a related time weighted function space. Let's look at the details. For arbitrary $T>0$ consider two smooth vector-valued functions on the $n$-torus, i.e., functions of the form
\begin{equation}
\mathbf{f},\mathbf{g}\in
\left[ C^{\infty}\left( [0,T],{\mathbb T}^n\right)\right] ^n.
\end{equation}
Consider the equations
\begin{equation}\label{navode2*rewf}
\begin{array}{ll}
\frac{d \mathbf{v}^{r,f,F}}{dt}=A^{r,NS}\left(\mathbf{f}\right) \mathbf{v}^{r,f,F},
\end{array}
\end{equation}
along with $\mathbf{v}^{r,f,F}(0)=\mathbf{h}^{r,F}$, and
\begin{equation}\label{navode2*rewg}
\begin{array}{ll}
\frac{d \mathbf{v}^{r,g,F}}{dt}=A^{r,NS}\left(\mathbf{g}\right) \mathbf{v}^{r,g,F},
\end{array}
\end{equation}
along with $\mathbf{v}^{r,g,F}(0)=\mathbf{h}^{r,F}$.
Here we denote $\mathbf{v}^{r,f,F}=\left(\mathbf{v}^{r,f,F}_1,\cdots ,\mathbf{v}^{r,f,F}_n\right)^T$ and similarly for the function $\mathbf{v}^{r,g,F}$. As in our notation above, the matrix $A^{r,NS}\left(\mathbf{f}\right) $ is a $n{\mathbb Z}^n_0\times n{\mathbb Z}^n_0$-matrix, where we abbreviate ${\mathbb Z}^n_0={\mathbb Z}^n \setminus \left\lbrace 0\right\rbrace$, and where
\begin{equation}
A^{r,NS}\left(\mathbf{f}\right) =\left(A^{r,NS}_{ij}\left(\mathbf{f}\right)\right)_{1\leq i,j\leq n}
\end{equation}
where for $1\leq i,j\leq n$ the entry $A^{r,NS}_{ij}\left(\mathbf{f}\right) $ is a ${\mathbb Z}^n_0\times {\mathbb Z}^n_0$-matrix. We define
\begin{equation}
A^{r,NS}\left( \mathbf{f}\right)\mathbf{v}^{r,f,F} =\left(\sum_{j=1}^nA^{r,NS}_{1j}\left(\mathbf{f}\right) \mathbf{v}^{r,f,F}_j ,\cdots,\sum_{j=1}^nA^{r,NS}_{nj}\left( \mathbf{f}\right) \mathbf{v}^{r,f,F}_j \right)^T,
\end{equation}
where for all $1\leq i\leq n$
\begin{equation}
\sum_{j=1}^nA^{r,NS}_{ij}\left(\mathbf{f}\right) \mathbf{v}^{r,f,F}_j=\left(\left( \sum_{j=1}^n \sum_{\beta\in {\mathbb Z}^n}A^{r,NS}_{i\alpha j\beta}\left(\mathbf{f}\right) v^{r,f,F}_{j\beta}\right)_{\alpha\in {\mathbb Z}^n} \right)^T_{1\leq i\leq n}.
\end{equation}
The entries $A^{r,NS}_{i\alpha j\beta}\left(\mathbf{f}\right)$ of $A^{r,NS}\left( \mathbf{f}\right)$ are determined as follows. On the diagonal, i.e., for $i=j$ and $\alpha,\beta\neq 0$, we have the entries
\begin{equation}
\begin{array}{ll}
\delta_{ij}A^{r,NS}_{i\alpha j\beta}\left(\mathbf{f}\right)=\delta_{ij}\sum_{j=1}^n \nu\left( -\frac{4\pi \alpha_j^2}{l^2}\right)
-\delta_{ij}\sum_{j=1}^n\frac{2\pi i \beta_j}{l}f_{j(\alpha-\beta)}\\
\\+\delta_{ij}2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)f_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi\alpha_i^2},
\end{array}
\end{equation}
where for $\alpha=\beta$ the terms of the form $f_{k(\alpha-\beta)}$ are zero (such that we do not need to exclude these terms explicitly). Furthermore, off-diagonal we have for $i\neq j$ the entries
\begin{equation}
(1-\delta_{ij})A^{r,NS}_{i\alpha j\beta}\left(\mathbf{f}\right)=2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)f_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi\alpha_i^2}.
\end{equation}
The definition of $A^{r,NS}\left(\mathbf{g}\right) $ is analogous. Next for functions $$\mathbf{u}^{r,F}=\left(\mathbf{u}^{r,F}_1,\cdots ,\mathbf{u}^{r,F}_n\right)$$ and for $s\geq n+2$ consider the norm
\begin{equation}
\begin{array}{ll}
{\big |}\mathbf{u}^{r,F}{\big |}^{T,\exp}_{s,C}:=\sum_{i=1}^n{\big |}\mathbf{u}^{r,F}_i{\big |}^{T,\exp}_{h^s,C}.
\end{array}
\end{equation}
Consider a ball of radius $2{\big |}\mathbf{h}^{r,F}{\big |}^{T,\exp}_{s,C}$ around the origin, i.e., consider the ball
\begin{equation}
B_{2{\big |}\mathbf{h}^{r,F}{\big |}^{T,\exp}_{s,C}}:=\left\lbrace \mathbf{u}^{r,F}{\big |}{\big |}\mathbf{u}^{r,F}{\big |}^{T,\exp}_{s,C}\leq 2{\big |}\mathbf{h}^{r,F}{\big |}^{T,\exp}_{s,C}\right\rbrace .
\end{equation}
For $s\geq n+2$ and for data $\mathbf{h}^{r,F}\in h^s\left({\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace \right) $ the considerations above show that the Cauchy problem
\begin{equation}
\begin{array}{ll}
\frac{d \mathbf{v}^{r,f,F}}{dt}=A^{r,NS}\left(\mathbf{f}\right) \mathbf{v}^{r,f,F},
\end{array}
\end{equation}
along with $\mathbf{v}^{r,f,F}(0)=\mathbf{h}^{r,F}$ has a regular solution $\mathbf{v}^{r,f,F}$ with $\mathbf{v}^{r,f,F}(t)\in h^s\left({\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace \right) $.
Next we observe that for fixed $\mathbf{u}^{r,F}$ in this ball the linear operator
\begin{equation}
\left( \mathbf{f}-\mathbf{g}\right) \rightarrow \left( A^{r,NS}_{i\alpha j\beta}\left(\mathbf{f}\right)\right) \mathbf{u}^{r,F}- \left( A^{r,NS}_{i\alpha j\beta}\left(\mathbf{g}\right)\right) \mathbf{u}^{r,F}
\end{equation}
is Lipschitz with some Lipschitz constant $L$. Note that we have
\begin{equation}
\begin{array}{ll}
A^{r,NS}_{i\alpha j\beta}\left(\mathbf{f}\right)-A^{r,NS}_{i\alpha j\beta}\left(\mathbf{g}\right)\\
\\
=-\delta_{ij}\sum_{j=1}^n\frac{2\pi i \beta_j}{l}\left( f_{j(\alpha-\beta)}-g_{j(\alpha-\beta)}\right) \\
\\+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)\left( f_{k(\alpha-\beta)}-g_{k(\alpha-\beta)}\right) }{\sum_{i=1}^n4\pi\alpha_i^2}.
\end{array}
\end{equation}
Furthermore, for $\mathbf{g}^{r,F}\in B_{2{\big |}\mathbf{h}^{r,F}{\big |}^{T,\exp}_{s,C}}$ we may assume w.l.o.g. that the Lipschitz constant $L$ is chosen such that
\begin{equation}\label{agu}
\begin{array}{ll}
\sup_{\mathbf{g}^{r,F}\in B_{2{\big |}\mathbf{h}^{r,F}{\big |}^{T,\exp}_{s,C}}}
{\big |}\left( \delta_{ij}A^{r,NS}_{i\alpha j\beta}\left(\mathbf{g}\right)\right)\mathbf{u}^{r,F}{\big |}^{T,\exp}_{h^{s},C}\leq L{\Big |}\mathbf{u}^{r,F}{\big |}^{T,\exp}_{h^{s-2},C}
\end{array}
\end{equation}
where the weaker norm on the right side of (\ref{agu}) is due to the fact that we have to compensate for the first term on the right side of
\begin{equation}
\begin{array}{ll}
\delta_{ij}A^{r,NS}_{i\alpha j\beta}\left(\mathbf{g}\right)=\delta_{ij}\sum_{j=1}^n \nu\left( -\frac{4\pi \alpha_j^2}{l^2}\right)
-\delta_{ij}\sum_{j=1}^n\frac{2\pi i \beta_j}{l}g_{j(\alpha-\beta)}\\
\\+\delta_{ij}2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)g_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi\alpha_i^2}.
\end{array}
\end{equation}
\begin{lem}
Let $T>0$ be arbitrary, and let $\mathbf{f}^{r,F},\mathbf{g}^{r,F}\in B_{2{\big |}\mathbf{h}^{r,F}{\big |}^{T,\exp}_{s,C}}$ for $s\geq n+3$.
For $C\geq 3L$ we have
\begin{equation}
{\Big|}\mathbf{v}^{r,f,F}-\mathbf{v}^{r,g,F}
{\Big |}^{T,\mbox{exp}}_{h^s,C}\leq \frac{1}{2}{\Big|}\mathbf{f}^F-\mathbf{g}^F
{\Big |}^{T,\mbox{exp}}_{h^s,C}.
\end{equation}
For $C\geq 6L$ we have
\begin{equation}
{\Big|}\mathbf{v}^{r,f,F}-\mathbf{v}^{r,g,F}
{\Big |}^{T,\mbox{exp},1}_{h^s,C}\leq \frac{1}{2}{\Big|}\mathbf{f}^F-\mathbf{g}^F
{\Big |}^{T,\mbox{exp},1}_{h^s,C}.
\end{equation}
\end{lem}
\begin{proof}
For each $t\in [0,T]$ we have
\begin{equation}\label{navode2*rewcontw1}
\begin{array}{ll}
{\Big|}\mathbf{v}^{r,f,F}(t)-\mathbf{v}^{r,g,F}(t){\Big |}_{h^s}\\
\\
\leq {\Big |}\int_{0}^{t}A^{r,NS}\left(\mathbf{f}\right)(u) \mathbf{v}^{r,f,F}(u)du
-\int_{0}^{t}A^{r,NS}\left(\mathbf{g}\right)(u) \mathbf{v}^{r,g,F}(u)du{\Big |}_{h^s}\\
\\
\leq {\Big |}\left( \int_{0}^{t}A^{r,NS}\left(\mathbf{f}\right)(u)
-\int_{0}^{t}A^{r,NS}\left(\mathbf{g}\right)(u) \right) \mathbf{v}^{r,f,F}(u)du{\Big |}_{h^s}\\
\\
+ {\Big |}\int_{0}^{t}A^{r,NS}\left(\mathbf{g}\right)(u) \left( \mathbf{v}^{r,f,F}(u)
-\mathbf{v}^{r,g,F}(u)\right) du{\Big |}_{h^s}\\
\\
\leq LT\sup_{u\in [0,T]}{\Big |}\mathbf{f}^F(u)
-\mathbf{g}^F(u){\Big |}_{h^s}\\
\\
+ LT\sup_{u\in [0,T]}{\Big |} \mathbf{v}^{r,f,F}(u)
-\mathbf{v}^{r,g,F}(u){\Big |}_{h^{s-2}}.
\end{array}
\end{equation}
It follows that
\begin{equation}\label{navode2*rewcontw2}
\begin{array}{ll}
{\Big|}\mathbf{v}^{r,f,F}(t)-\mathbf{v}^{r,g,F}(t){\Big |}_{h^s}\\
\\
\leq L{\Big |}\mathbf{f}
-\mathbf{g} {\Big |}^{T,\mbox{exp}}_{h^s,C}\int_0^{t}\exp(Cu)du\\
\\
+ L{\Big |} \mathbf{v}^{r,f,F}
-\mathbf{v}^{r,g,F}{\Big |}^{T,\mbox{exp}}_{h^s,C}\int_0^{t}\exp(Cu)du\\
\\
\leq \frac{L\exp(Ct)}{C}{\Big |}\mathbf{f}^F
-\mathbf{g}^F{\Big |}^{T,\mbox{exp}}_{h^s,C}\\
\\
+ \frac{L\exp(Ct)}{C}{\Big |} \mathbf{v}^{r,f,F}
-\mathbf{v}^{r,g,F}{\Big |}^{T,\mbox{exp}}_{h^{s-2},C}.
\end{array}
\end{equation}
Since
\begin{equation}
{\Big |} \mathbf{v}^{r,f,F}
-\mathbf{v}^{r,g,F}{\Big |}^{T,\mbox{exp}}_{h^{s-2},C}\leq {\Big |} \mathbf{v}^{r,f,F}
-\mathbf{v}^{r,g,F}{\Big |}^{T,\mbox{exp}}_{h^{s},C}
\end{equation}
it follows that
\begin{equation}
\begin{array}{ll}
{\Big|}\mathbf{v}^{r,f,F}(.)-\mathbf{v}^{r,g,F}(.){\Big |}^{T,\mbox{exp}}_{h^s,C}\\
\\
\leq \left(\frac{1}{\left(1-\frac{L}{C} \right) }\frac{L}{C} \right){\Big |}\mathbf{f}^F(u)
-\mathbf{g}^F(u){\Big |}^{T,\mbox{exp}}_{h^s,C}.
\end{array}
\end{equation}
For $C=3L$ the result follows. The reasoning for the stronger norm is similar.
\end{proof}
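The contraction mechanism behind this proof, $\int_0^t e^{Cu}\,du\leq e^{Ct}/C$, can be checked numerically on a discretized version of the weighted norm (a sketch; the integral operator $K$ with Lipschitz constant $L$ and all names are our simplifications of the estimate above):

```python
import math

# Contraction mechanism in the time-weighted norm ||f||_C = sup_t e^{-Ct}|f(t)|:
# if |(Kf)(t)| <= L * int_0^t |f(u)| du, then ||Kf||_C <= (L/C) ||f||_C.
def weighted(fvals, dt, C):
    return max(math.exp(-C * k * dt) * abs(x) for k, x in enumerate(fvals))

def K(fvals, dt, L):
    out, acc = [0.0], 0.0
    for x in fvals[:-1]:               # left Riemann sum of L * |f|
        acc += L * abs(x) * dt
        out.append(acc)
    return out

L_, C_, dt = 1.0, 3.0, 0.001
f = [math.exp(C_ * k * dt) for k in range(1001)]   # extremal growth profile
assert weighted(K(f, dt, L_), dt, C_) <= (L_ / C_) * weighted(f, dt, C_) + 1e-6
```

Choosing $C\geq 3L$ makes the factor $L/C$ small enough to absorb the geometric series in the fixed-point argument, as in the lemma.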
It is clear that this contraction result leads to global existence and uniqueness. Note that for global smooth existence it is sufficient that for each $s\geq n+2$ and $T>0$ we find a constant $C>0$ such that
\begin{equation}
{\Big|}\mathbf{v}^{r,m,F}
{\Big |}^{T,\mbox{exp}}_{h^s,C}\leq C.
\end{equation}
Uniform upper bounds for the approximative (controlled) solutions $\mathbf{v}^ {r,m,F}$ lead to existence via compactness as well. Note that the infinite vectors $\mathbf{v}^{r,m,F}_{i}(t)=\left( v^{r,m}_{i\alpha}(t)\right)_{\alpha\in {\mathbb Z}^{n,0}}$
are in $1$-$1$ correspondence with classical functions
\begin{equation}
v^{r,m}_i(t,x)=\sum_{\alpha\in {\mathbb Z}^{n,0}}v^{r,m}_{i\alpha}\exp\left(\frac{2\pi i \alpha x}{l}\right),
\end{equation}
where $v^{r,m}_i\in H^s\left({\mathbb T}^n\right)$ for $s>n+ 2$. Recall that
\begin{thm}
For $r>s$ and for any compact Riemannian manifold $M$ (and especially for $M={\mathbb T}^n_l$) we have a compact embedding
\begin{equation}
e:H^r\left(M\right)\rightarrow H^s\left(M\right)
\end{equation}
\end{thm}
This means that $\left( v^{r,m}_i\right)_{m\in {\mathbb N}}$ has a convergent subsequence in $H^r\left({\mathbb T}^n_l\right)$ for $r<s$, which corresponds to a converging subsequence in the corresponding Sobolev space of infinite vectors of modes. Hence, passing to an appropriate subsequence $\mathbf{v}^{r,m',F}_{i}(t)$ of $\mathbf{v}^{r,m,F}_{i}(t)$ we have a limit
\begin{equation}
\mathbf{v}^{r,F}_{i}(t)=\lim_{m'\uparrow \infty}\mathbf{v}^{r,m',F}_{i}(t)\in h^r\left({\mathbb Z}^n\right)
\end{equation}
for $r<s$ (Rellich embedding). Since $s$ is arbitrary this limit exists in $h^r\left({\mathbb Z}^n\right)$ for all $r\in {\mathbb R}$. Hence, for all $1\leq i\leq n$ we have a family $\left( \mathbf{v}^{r,m',F}_i\right)_{m'\in {\mathbb N}}$ which satisfies
\begin{equation}\label{navodermproof}
\begin{array}{ll}
\frac{d v^{r,m'}_{i\alpha}}{dt}=\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right) v^{r,m'}_{i\alpha}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n\setminus \left\lbrace 0,\alpha\right\rbrace }\frac{2\pi i \gamma_j}{l}v^{r,m'-1}_{j(\alpha-\gamma)}v^{r,m'}_{i\gamma}\\
\\
+2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n\setminus \left\lbrace 0,\alpha\right\rbrace }4\pi \gamma_j(\alpha_k-\gamma_k)v^{r,m'-1}_{j\gamma} v^{r,m'}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2}.
\end{array}
\end{equation}
If we can prove that the limit of infinite vectors $\left( \frac{d v^{r,m'}_{i\alpha}}{dt}(t)\right)_{\alpha \in {\mathbb Z}^n}$ is continuous in the sense that
\begin{equation}
\begin{array}{ll}
\lim_{m'\uparrow \infty}\left( \frac{d v^{r,m'}_{i\alpha}}{dt}(t)\right)_{\alpha \in {\mathbb Z}^n}\in C\left({\mathbb Z}^n\right)\\
\\
:=\left\lbrace \left(g_{\alpha} \right){\Big |}\sum_{\alpha\in {\mathbb Z}^n}g_{\alpha}\exp\left(\frac{2\pi i\alpha x}{l}\right)\in C\left({\mathbb T}^n_l\right) \right\rbrace,
\end{array}
\end{equation}
then we obtain a classical solution.
Inspecting the terms on the right-hand side, the assumption that $s> 2+n$ and $n\geq 2$ is more than sufficient in order to get
\begin{equation}
\left(\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right) v^{r,m'}_{i\alpha}\right)_{\alpha\in {\mathbb Z}^n}\in h^{s-2}\left({\mathbb Z}^n\right)\subset h^{n}\left({\mathbb Z}^n\right),
\end{equation}
such that with this assumption we may ensure that $\left( \frac{d v^{r,m'}_{i\alpha}}{dt}\right)_{m'\in {\mathbb N}}(t)$ converges in $h^r\left({\mathbb Z}^n\right) \subset C\left({\mathbb Z}^n\right)$ with $r>\frac{1}{2}n$ if we can control the expressions for the convection term and for the Leray projection term appropriately. However, this is easily done with the help of the infinite linear algebra results above. We stick to $s>n+2$ and $n\geq 2$. First, for the convection term, for each $1\leq i\leq n$ and all $\alpha\in {\mathbb Z}^n$ we consider
\begin{equation}\label{convec}
-\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n\setminus \left\lbrace \alpha\right\rbrace }\frac{2\pi i \gamma_j}{l}v^{r,m'-1}_{j(\alpha-\gamma)}v^{r,m'}_{i\gamma}.
\end{equation}
We observe that ${\big |}\left( \gamma_jv^{r,m'}_{i\gamma}\right)_{\gamma\in {\mathbb Z}^n}{\big |}_{h^{s-1}\left({\mathbb Z}^n\right)}\leq C$
for some constant $C>0$ independent of $m'$, hence
\begin{equation}
{\Big |}\left( \sum_{\gamma \in {\mathbb Z}^n}
v^{r,m'-1}_{j(\alpha-\gamma)}\gamma_jv^{r,m'}_{i\gamma}\right)_{\alpha\in {\mathbb Z}^n}
{\Big |}_{h^{2(s-1)-n}\left({\mathbb Z}^n\right)}\leq C
\end{equation}
for some $C>0$ independent of $m'$, such that the limit is in $h^{2}\left({\mathbb Z}^n\right)$. Hence (\ref{convec}) and its limit for $m'\uparrow \infty$ are in $h^{2}\left({\mathbb Z}^n\right)$. Similarly, the Leray projection term
\begin{equation}
2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)v^{r,m'-1}_{j\gamma} v^{r,m'}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2}
\end{equation}
is bounded by a product of two infinite vectors which have some uniform bound $C>0$ in $h^{s-1}\left({\mathbb Z}^n\right)$, such that the Leray projection term is safely in $h^{2(s-1)-n}\left({\mathbb Z}^n\right)\subset h^{2}\left({\mathbb Z}^n\right)$ where we did not even take the $|\alpha|^2$ in the denominator corresponding to the Laplacian kernel into account.
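The mechanism behind these product estimates, namely that polynomial decay of the mode vectors is essentially inherited by their discrete convolution, can be illustrated in a one-dimensional toy computation. The decay order $s=4$, the cutoff $N=50$ and the bound $20$ below are illustrative choices, not constants from the text.

```python
# Toy 1D illustration: for a_alpha = 1/(1 + |alpha|^s) the convolution
# (a * a)_alpha again decays polynomially, so weighted sup-norms stay bounded.
s, N = 4, 50
a = {alpha: 1.0 / (1.0 + abs(alpha) ** s) for alpha in range(-N, N + 1)}

def conv(a, b, alpha):
    # discrete convolution over the common finite support
    return sum(a[alpha - g] * b[g] for g in b if -N <= alpha - g <= N)

# weighted sup-norm of the convolution stays of moderate size
weighted = max((1.0 + abs(alpha) ** s) * conv(a, a, alpha)
               for alpha in range(-N, N + 1))
assert 1.0 <= weighted <= 20.0
```

The uniformity of such bounds in the cutoff is what allows the limit $m'\uparrow \infty$ to be taken above.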
We have shown
\begin{lem}
For $s>n+2$ and $n\geq 2$, and for the same $\nu>0$ as above, there is a $C>0$ such that for all $1\leq i\leq n$ and $m'\geq 0$ we have
\begin{equation}\label{est0vmF}
{\Big |}\frac{d}{dt}\mathbf{v}^{r,m',F}_{i}(t){\Big |}_{h^s_l}\leq C
\end{equation}
uniformly for $t>0$, and
\begin{equation}
\frac{d}{dt}\mathbf{v}^{r,F}_{i}(t)=\lim_{m'\uparrow \infty}\frac{d}{dt}\mathbf{v}^{r,m',F}_{i}(t)\in h^r\left({\mathbb Z}^n\right) \subset C\left({\mathbb Z}^n\right)\subset h^{2}\left({\mathbb Z}^n\right).
\end{equation}
\end{lem}
We conclude
\begin{thm}
The function
\begin{equation}
\mathbf{v}^{r,F}_{i}(t)=\lim_{m'\uparrow \infty}\mathbf{v}^{r,m',F}_{i}(t),~1\leq i\leq n
\end{equation}
satisfies the infinite nonlinear ODE equivalent to the controlled incompressible Navier-Stokes equation on the $n$-torus in a classical sense. Moreover, since the argument above can be repeated with arbitrarily large $s>0$, we have that for all $1\leq i\leq n$ the infinite vector $\mathbf{v}^{r,F}_{i}(t)$ and its time derivative are in $h^s\left({\mathbb Z}^n\right)$. Higher order time derivatives also exist in a classical sense by an analogous argument for derivatives of the Navier-Stokes equation.
\end{thm}
Finally we have
\begin{thm}\label{lemma0}
Let $h_i\in C^{\infty}\left({\mathbb T}^n\right)$.
For each $\nu>0$ and $l>0$ and for all $1\leq i\leq n$ and all $t\geq 0$
\begin{equation}
v_i(t,.)\in C^{\infty}\left( {\mathbb T}^n_l \right)
\end{equation}
and
\begin{equation}
\mathbf{v}^{F}_i(t)\in h^s\left({\mathbb Z}^n\right)
\end{equation}
for arbitrary $s\in {\mathbb R}$.
\end{thm}
\begin{proof}
We have $v^F_i(t)=v^{r,F}_i(t)-r(t)\in h^s\left({\mathbb Z}^n\right)$ for all $s\in {\mathbb R}$, because $v^{r,F}_i(t)\in h^s\left({\mathbb Z}^n\right)$ for all $s\in {\mathbb R}$, and $r(t)$ is a constant.
The second part follows from the corollary above. Given $s>0$ we can differentiate
\begin{equation}
v_i(t,.)=\sum_{\gamma\in {\mathbb Z}^n}v_{i\gamma}\exp\left( \frac{2\pi i\gamma x}{l}\right),
\end{equation}
up to order $m$, where $m$ is the largest integer less than $s$, and get a Fourier series which converges in $L^2\left({\mathbb T}^n_l\right)$.
Hence,
\begin{equation}
v_i(t,.) \in H^{s}\left( {\mathbb T}^n_l \right),
\end{equation}
for all $t\geq 0$. Since this is true for all $s>0$, the first statement of the theorem follows.
\end{proof}
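The term-by-term differentiation of the Fourier series used in this proof can be checked numerically for a truncated series; the coefficients $c_\gamma=1/(1+\gamma^4)$, the cutoff $N=20$ and $l=1$ are illustrative choices, not data from the text.

```python
import cmath

N = 20
c = {g: 1.0 / (1.0 + g ** 4) for g in range(-N, N + 1)}

def f(x):
    # truncated Fourier series (real by symmetry of the coefficients)
    return sum(c[g] * cmath.exp(2j * cmath.pi * g * x) for g in c).real

def df(x):
    # term-by-term derivative: each mode picks up a factor 2 pi i gamma / l
    return sum(2j * cmath.pi * g * c[g] * cmath.exp(2j * cmath.pi * g * x)
               for g in c).real

x, h = 0.3, 1e-5
assert abs((f(x + h) - f(x - h)) / (2 * h) - df(x)) < 1e-6
```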
Now for a fixed $l>0$ and any $\mathbf{h}^F_1\in h^s\left({\mathbb Z}^n\right)$ for $s\geq pn+2$ we have
\begin{equation}
\frac{d^p}{dt^p}\mathbf{v}^{m,F}_i(t)\in h^{s-pn}_{l}\left({\mathbb Z}^n\right)
\end{equation}
for all $m\geq 0$ from the uniform bound
\begin{equation}
|\mathbf{v}^{m,F}_i(t)|_{h^s_l}\leq C.
\end{equation}
Next let us sharpen the results. The arguments above can be improved in two directions. First, we can derive upper bounds by considering a time dilatation transformation which leads to damping terms that serve as an auto-control of the system. This leads to global upper bounds in time. The second improvement is that we can solve the Euler part of the equation at each time step locally by a Dyson formalism, which leads to an extension of the Trotter product formula on a local time level. This approach also leads naturally to higher order schemes in time, as discussed in the next section.
First we consider the auto-control mechanism. Instead of considering a fixed time horizon $T>0$ we consider a time discretization $t_i,i\geq 1$ of the interval $\left[0,\infty \right)$, where we may consider $t_i=i\in {\mathbb N}$ for all $i\geq 1$ and $t_{0}:=0$.
We sketched this idea, starting from the equation in (\ref{timedil}), in the introduction. In order to get uniform global upper bounds in time we shall consider a variation of this idea. Assume that
\begin{equation}
\mathbf{v}^{F}_i(l-1)\in h^s\left({\mathbb Z}^n\right) , 1\leq i\leq n,~s>n+4
\end{equation}
has been computed (or is given) for $l\geq 1$. We use the regularity order $s>n+4$ because our upper bounds for matrix multiplication and the contraction result with exponentially weighted norms show that
\begin{equation}
\mathbf{v}^{F}_i(t)\in h^s\left({\mathbb Z}^n\right) , 1\leq i\leq n,~s>n+4
\end{equation}
for $t\in [l-1,l]$, or more precisely, that we have
\begin{equation}
|v_{i\alpha}(t)|\leq \frac{C\exp(Ct)}{1+|\alpha|^{n+6}}
\end{equation}
for some generic constant $C>0$ if
\begin{equation}
|v_{i\alpha}(0)|\leq \frac{C}{1+|\alpha|^{n+4}}.
\end{equation}
This appears to be a stronger assumption than necessary, but we intend to simplify the growth estimate and therefore we use the stronger assumption.
\begin{rem}
For a more sophisticated growth estimate the assumption $s>n+2$ would be enough. However, if we use the infinite ODE directly in order to estimate the growth, then we need $s>n+4$, as the matrix rules for the Leray projection term imply that we get regularity $s>2r-n-2$ if we start with regularity of order $r>n+2$. Hence if we start with regularity of order $r=n+3$, then we end up with regularity of order $2n+6-n-2=n+4$. Now, if we use the infinite mode equation directly, then we end up with regularity of order $n+2$ (or slightly above), as we lose two orders of regularity by a brute-force estimate of the Laplacian term. If we do a refined estimate with the fundamental matrix, then we can weaken this assumption, of course.
\end{rem}
Then consider the transformation of the local time interval $[l-1,l)$ with time coordinate $t$ to the infinite time interval $\left[0,\infty\right)$ with time coordinated $\tau$, where
\begin{equation}\label{timedil}
(\tau (t) ,x)=\left(\frac{t-(l-1)}{\sqrt{1-(t-(l-1))^2}},x\right),
\end{equation}
which is a time dilatation effectively and leaves the spatial coordinates untouched.
Then on a time local level, i.e., for $t\in [l-1,l)$ the function $u_i,~1\leq i\leq n$ with
\begin{equation}\label{uvlin}
\rho(1+(t-(l-1)))u_i(\tau,x)=v_i(t-(l-1) ,x),
\end{equation}
carries all information of the velocity function on this interval.
In the following we think of time $t$ in the form
\begin{equation}
t=t^{l-1,-}:=t-(l-1),
\end{equation}
and then suppress the superscript for simplicity of notation.
The following equations are formally identical with the equations in the introduction but note that we have $t=t^{l-1,-}=t-(l-1)$. Having this in mind we note
\begin{equation}\label{eq1}
\frac{\partial}{\partial t}v_i(t,x)=\rho u_i(\tau,x)+\rho(1+t)\frac{\partial}{\partial \tau}u_i(\tau,x)\frac{d \tau}{d t},
\end{equation}
where
\begin{equation}\label{eq2}
\frac{d\tau}{dt}=\frac{1}{\sqrt{1-t^2}^3}.
\end{equation}
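As a quick numerical sanity check of (\ref{timedil}) and (\ref{eq2}), a toy verification, not part of the argument; the evaluation point $t=0.3$ is an arbitrary illustrative choice:

```python
import math

def tau(t):
    # time dilatation tau(t) = t / sqrt(1 - t^2) on [0, 1)
    return t / math.sqrt(1.0 - t * t)

def dtau_dt(t):
    # claimed derivative (1 - t^2)^(-3/2)
    return (1.0 - t * t) ** (-1.5)

t, h = 0.3, 1e-6
fd = (tau(t + h) - tau(t - h)) / (2.0 * h)  # central difference
assert abs(fd - dtau_dt(t)) < 1e-6

# the substep endpoint t^{l-1,-} = 1/2 used below is mapped to T = 1/sqrt(3)
assert abs(tau(0.5) - 1.0 / math.sqrt(3.0)) < 1e-12
```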
Note that the factor $0<\rho <1$ appears quadratically in the nonlinear and only once in the linear term such that these linear terms become smaller compared to the damping term (they get an additional factor $\rho$).
We denote the inverse of $\tau(t)$ by $t(\tau)$. For the modes of $u_i,~1\leq i\leq n$ we get the equation
\begin{equation}\label{navode200firsttimedil*}
\begin{array}{ll}
\frac{d u_{i\alpha}}{d\tau}=\sqrt{1-t(\tau)^2}^3
\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)u_{i\alpha}-\\
\\
\rho(1+t(\tau))\sqrt{1-t(\tau)^2}^3\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}u_{j(\alpha-\gamma)}u_{i\gamma}+\\
\\
\rho(1+t(\tau))\sqrt{1-t(\tau)^2}^3\frac{2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)u_{j\gamma}u_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi\alpha_i^2}\\
\\
-\sqrt{1-t^2(\tau)}^3(1+t(\tau))^{-1}u_{i\alpha},~t\mbox{ short for }~t^{l-1,-}=t-(l-1).
\end{array}
\end{equation}
We may consider this equation on the domain $\left[ l-1,l-\frac{1}{2}\right]$ in $t$-coordinates corresponding to a finite time horizon $T$ which is not large, actually, i.e., it is of size $T=\frac{0.5}{\sqrt{0.75}}=\frac{1}{\sqrt{3}}$, which is only slightly larger than $0.5$. Note that we may use any $t^{l-1,-}\in (0,1)$ giving rise to any finite $T>0$ in the following argument, but it is convenient that $T$ can be chosen so small and we still need only two steps to get to the next time step $l$ in original time coordinates $t$. This makes the auto-control attractive also from an algorithmic perspective. We are interested in global upper bounds for the modes at integer times $t=l$, i.e., we observe the growth of the modes from $v_{i\alpha}(l-1)$ to $v_{i\alpha}(l)$, corresponding to the transition of modes from $u_{i\alpha}(0)$ to $u_{i\alpha}(T)$ for all $1\leq i\leq n$ and all $\alpha\in {\mathbb Z}^n$. Note that we have
\begin{equation}
t\in \left[ l-1,l-\frac{1}{2}\right],~\mbox{ i.e. }~t^{l-1,-}\in \left[ 0,\frac{1}{2}\right],~\mbox{ corresponds to }~\tau\in \left[0,T\right]=\left[0,\frac{1}{\sqrt{3}}\right]
\end{equation}
at each time step where we consider (\ref{navode200firsttimedil*}) on the latter interval. We integrate the equation in (\ref{navode200firsttimedil*}) from $0$ to $T=\frac{1}{\sqrt{3}}$ and have for each mode $\alpha$
\begin{equation}\label{intualg}
\begin{array}{ll}
u_{i\alpha}(T)=u_{i\alpha}(0)+\int_0^{T}{\Bigg (}\sqrt{1-t(\sigma)^2}^3
\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)u_{i\alpha}(\sigma)-\\
\\
\rho(1+t(\sigma))\sqrt{1-t(\sigma)^2}^3\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}u_{j(\alpha-\gamma)}(\sigma)u_{i\gamma}(\sigma)+\\
\\
\rho(1+t(\sigma))\sqrt{1-t(\sigma)^2}^3\frac{2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)u_{j\gamma}(\sigma)u_{k(\alpha-\gamma)}(\sigma)}{\sum_{i=1}^n4\pi\alpha_i^2}\\
\\
-\sqrt{1-t^2(\sigma)}^3(1+t(\sigma))^{-1}u_{i\alpha}(\sigma){\Bigg)}d\sigma,~t\mbox{ short for }~t^{l-1,-}=t-(l-1).
\end{array}
\end{equation}
The relation in (\ref{uvlin}) translates into a relation of modes
\begin{equation}\label{uvlin*}
\rho(1+(t-(l-1)))u_{i\alpha}(\tau)=v_{i\alpha}(t-(l-1)),
\end{equation}
such that the assumption
\begin{equation}
|v_{i\alpha}(l-1)|\leq \frac{C}{1+|\alpha|^{n+4}}
\end{equation}
implies that with $C^{\rho}:=\frac{C}{\rho}$ we have
\begin{equation}
|u_{i\alpha}(0)|\leq \frac{C^{\rho}}{1+|\alpha|^{n+4}}.
\end{equation}
Now, by the contraction argument above, with generic time-independent constants $C,C^{\rho}>0$ (independent of local time $t\in [l-1,l]$), the bound
\begin{equation}
|v_{i\alpha}(t)|\leq \frac{C\exp(Ct)}{1+|\alpha|^{n+6}}
\end{equation}
implies that with $C^{\rho}:=\frac{C}{\rho}$ we have for $\tau\in [0,T]=\left[0,\frac{1}{\sqrt{3}} \right] $
\begin{equation}
|u_{i\alpha}(\tau)|\leq \frac{C^{\rho}\exp(C^{\rho}\tau)}{1+|\alpha|^{n+6}}.
\end{equation}
Next we look at each term on the right side of (\ref{intualg}). The gain of two orders of regularity is useful if we want to apply simplified estimates. For the Laplacian term on the right side of (\ref{intualg}) we have
\begin{equation}
\begin{array}{ll}
{\Big |}\int_0^{T}{\Bigg (}\sqrt{1-t(\sigma)^2}^3
\sum_{j=1}^n\nu \left( -\frac{4\pi \alpha_j^2}{l^2}\right)u_{i\alpha}(\sigma)d\sigma{\Big |}\\
\\
\leq T\nu \sup_{\sigma\in[0,T]}{\Big|}\frac{4\pi |\alpha |^2}{l^2}u_{i\alpha}(\sigma){\Big |}\leq T\nu {\Big|}\frac{4\pi }{l^2}\frac{C^{\rho}\exp(C^{\rho}T)}{1+|\alpha|^{n+4}}{\Big |}.
\end{array}
\end{equation}
Note that this is the only term where we need the stronger assumption of regularity order $s>n+4$. This indicates that estimates with the fundamental matrix show that the weaker assumption of regularity order $s>n+2$ is sufficient if we refine our estimates.
Next consider the convection term.
\begin{equation}\label{intualglap}
\begin{array}{ll}
{\Big |}\int_{0}^T\rho(1+t(\sigma))\sqrt{1-t(\sigma)^2}^3\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}u_{j(\alpha-\gamma)}(\sigma)u_{i\gamma}(\sigma)d\sigma{\Big |}\\
\\
\leq T\rho\frac{3}{2}\sup_{\sigma\in [0,T]}{\Big |}\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}u_{j(\alpha-\gamma)}(\sigma)u_{i\gamma}(\sigma){\Big |}\\
\\
\leq T\rho\frac{3}{2}n\frac{2\pi}{l}{\Big |}\frac{\left( C^{\rho}\right)^2\exp(2C^{\rho}T)}{1+|\alpha|^{n+4}}{\Big |},
\end{array}
\end{equation}
by the contraction argument. We shall choose $\rho>0$ such that a lower bound of the damping term dominates the latter upper bound plus the upper bound of the Leray projection term, which we estimate next. However, the situation is a little more involved here, as we need a {\it lower} bound of the damping term. Hence we postpone the choice of $\rho$ and consider the Leray projection term. We have the upper bound
\begin{equation}\label{intualgleray}
\begin{array}{ll}
{\Big |}\rho(1+t(\sigma))\sqrt{1-t(\sigma)^2}^3\frac{2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)u_{j\gamma}(\sigma)u_{k(\alpha-\gamma)}(\sigma)}{\sum_{i=1}^n4\pi\alpha_i^2}{\Big |}\\
\\
\leq {\Big |}\frac{3}{2}\rho2\pi \sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi \gamma_j(\alpha_k-\gamma_k)u_{j\gamma}(\sigma)u_{k(\alpha-\gamma)}(\sigma){\Big |}\\
\\
\leq \frac{3}{2}\rho8\pi^2 n^2{\Big |}\frac{\left( C^{\rho}\right)^2\exp(2C^{\rho}T)}{1+|\alpha|^{n+4}}{\Big |}
\end{array}
\end{equation}
for the integrand, hence the upper bound
\begin{equation}
T\frac{3}{2}\rho8\pi^2 n^2{\Big |}\frac{\left( C^{\rho}\right)^2\exp(2C^{\rho}T)}{1+|\alpha|^{n+4}}{\Big |}
\end{equation}
for the integral of the integrand from $0$ to $T$.
Next we need a lower bound of the damping term. We have
\begin{equation}\label{intualgdamp}
\begin{array}{ll}
{\Big |}\int_0^{T}
\sqrt{1-t^2(\sigma)}^3(1+t(\sigma))^{-1}u_{i\alpha}(\sigma)d\sigma{\Big |}\geq
{\big |}\frac{1}{2}\int_0^Tu_{i\alpha}(\sigma)d\sigma{\big |}.
\end{array}
\end{equation}
According to these estimates we get from
(\ref{intualg})
\begin{equation}\label{intualgest}
\begin{array}{ll}
|u_{i\alpha}(T)|\leq |u_{i\alpha}(0)|+T\nu {\Big|}\frac{4\pi }{l^2}\frac{C^{\rho}\exp(C^{\rho}T)}{1+|\alpha|^{n+4}}{\Big |}
+T\rho\frac{3}{2}n\frac{2\pi}{l}{\Big |}\frac{\left( C^{\rho}\right)^2\exp(2C^{\rho}T)}{1+|\alpha|^{n+4}}{\Big |}+\\
\\
T\frac{3}{2}\rho 8\pi^2 n^2{\Big |}\frac{\left( C^{\rho}\right)^2\exp(2C^{\rho}T)}{1+|\alpha|^{n+4}}{\Big |}
-
\frac{1}{2}\int_0^Tu_{i\alpha}(\sigma)d\sigma,
\end{array}
\end{equation}
where we can use a lower bound of the modes in the damping term because of the minus sign. Note that we start with an upper bound
\begin{equation}
|u_{i\alpha}(0)|\leq \frac{C^{\rho}}{1+|\alpha|^{n+4}}.
\end{equation}
for all $1\leq i\leq n$ and all modes $\alpha$, and it is sufficient to preserve this bound, i.e., to get
\begin{equation}
|u_{i\alpha}(T)|\leq \frac{C^{\rho}}{1+|\alpha|^{n+4}}.
\end{equation}
Now for time $t$ consider all modes $\alpha \in M^t_{\leq 1/2}$ with
\begin{equation}
M^t_{\leq 1/2}:=\left\lbrace \alpha ~{\Big |}~|u_{i\alpha}(t)|\leq\frac{\frac{1}{2}C^{\rho}}{1+|\alpha|^{n+4}}\right\rbrace ,
\end{equation}
and all modes $\alpha \in M^t_{> 1/2}$ with
\begin{equation}
M^t_{>1/2}:=\left\lbrace \alpha ~{\Big |}~|u_{i\alpha}(t)|> \frac{\frac{1}{2}C^{\rho}}{1+|\alpha|^{n+4}}\right\rbrace .
\end{equation}
Now the estimates in (\ref{intualgest}) hold for a time discretization $0=T_0<T_1<T_2< \cdots <T_m=T$, i.e., we have for $0\leq l\leq m-1$
\begin{equation}\label{intualgesttimedis}
\begin{array}{ll}
|u_{i\alpha}(T_{l+1})|\leq |u_{i\alpha}(T_l)|+(T_{l+1}-T_l)\nu {\Big|}\frac{4\pi }{l^2}\frac{C^{\rho}\exp(C^{\rho}(T_{l+1}-T_{l}))}{1+|\alpha|^{n+4}}{\Big |}\\
\\
+(T_{l+1}-T_l)\rho\frac{3}{2}n\frac{2\pi}{l}{\Big |}\frac{\left( C^{\rho}\right)^2\exp(2C^{\rho}(T_{l+1}-T_l))}{1+|\alpha|^{n+4}}{\Big |}\\
\\
+(T_{l+1}-T_l)\frac{3}{2}\rho 8\pi^2 n^2{\Big |}\frac{\left( C^{\rho}\right)^2\exp(2C^{\rho}(T_{l+1}-T_l))}{1+|\alpha|^{n+4}}{\Big |}
-
\frac{1}{2}\int_{T_{l}}^{T_{l+1}}u_{i\alpha}(\sigma)d\sigma.
\end{array}
\end{equation}
We may use a time scale $T_{l+1}-T_l$ which is fine enough such that
\begin{equation}
\exp(C^{\rho}(T_{l+1}-T_{l}))\leq 2,
\end{equation}
and have
\begin{equation}\label{intualgesttimed2}
\begin{array}{ll}
|u_{i\alpha}(T_{l+1})|\leq |u_{i\alpha}(T_l)|+(T_{l+1}-T_l)\nu {\Big|}\frac{8\pi }{l^2}\frac{C^{\rho}}{1+|\alpha|^{n+4}}{\Big |}\\
\\
+(T_{l+1}-T_l)\rho 6n\frac{2\pi}{l}{\Big |}\frac{\left( C^{\rho}\right)^2}{1+|\alpha|^{n+4}}{\Big |}+(T_{l+1}-T_l)6\rho 8\pi^2 n^2{\Big |}\frac{\left( C^{\rho}\right)^2}{1+|\alpha|^{n+4}}{\Big |}\\
\\
-
\frac{1}{2}\int_{T_{l}}^{T_{l+1}}u_{i\alpha}(\sigma)d\sigma.
\end{array}
\end{equation}
Note that on a small time scale the contraction argument implies that the modes do not change much depending on the time step size. Especially, as we have Lipschitz continuity for the modes in local time we have
\begin{equation}
{\big |}u_{i\alpha}(T_{l+1})-u_{i\alpha}(T_{l}){\big |}\leq L(T_{l+1}-T_l)
\end{equation}
for some Lipschitz constant $L>0$, where this Lipschitz constant can be preserved in the scheme along with the upper bound. For $\alpha \in M^{T_{l}}_{> 1/2}$,
\begin{equation}
\rho \leq \frac{1}{C^{\rho}\left( 24n\frac{2\pi}{l}+96\pi^2 n^2\right) },\mbox{ and }
T_{l+1}-T_l \leq \frac{1}{8L},~\nu\frac{8\pi }{l^2}\leq \frac{1}{8}
\end{equation}
(we comment on the latter restriction below), we have (assuming $C^{\rho}\geq 2$ w.l.o.g.)
\begin{equation}\label{intualgesttimed3}
\begin{array}{ll}
|u_{i\alpha}(T_{l+1})|\leq |u_{i\alpha}(T_l)|+(T_{l+1}-T_l)\nu {\Big|}\frac{8\pi }{l^2}\frac{C^{\rho}}{1+|\alpha|^{n+4}}{\Big |}\\
\\
+(T_{l+1}-T_l)\rho 6n\frac{2\pi}{l}{\Big |}\frac{\left( C^{\rho}\right)^2}{1+|\alpha|^{n+4}}{\Big |}+(T_{l+1}-T_l)6\rho 8\pi^2 n^2{\Big |}\frac{\left( C^{\rho}\right)^2}{1+|\alpha|^{n+4}}{\Big |}\\
\\
-
\frac{1}{2}{\big (}(T_{l+1}-T_l)u_{i\alpha}(T_l)-L(T_{l+1}-T_l)^2{\big)}\leq \frac{C^{\rho}}{1+|\alpha|^{n+4}}
\end{array}
\end{equation}
where we use
\begin{equation}
\frac{\frac{1}{2}C^{\rho}}{1+|\alpha|^{n+4}}\leq |u_{i\alpha}(T_l)|\leq \frac{C^{\rho}}{1+|\alpha|^{n+4}}.
\end{equation}
A similar estimate with an upper bound for the damping term of the form
\begin{equation}
{\big (}(T_{l+1}-T_l)u_{i\alpha}(T_l)+L(T_{l+1}-T_l)^2{\big)}
\end{equation}
holds for the modes with $\alpha \in M^{T_{l}}_{\leq 1/2}$.
We mentioned the restriction $\nu\frac{8\pi }{l^2}\leq \frac{1}{8}$, which is a restriction due to the Laplacian and seems to be a restriction on the size $l$ (lower bound) or on the viscosity $\nu$ (upper bound, i.e., high Reynolds numbers are allowed). This is an artificial restriction, as it comes from the linear term. As we have remarked, we could have avoided it by representations of the modes $u_{i\alpha}(T_l)$ in terms of fundamental matrices, or even in terms of fundamental matrices for the Laplacian (dual to the simple heat kernel), at the price of a slightly more involved equation. However, we can even work directly with the equation for the modes, as we have done, and still avoid this restriction due to the Laplacian: if we start with the velocity modes $\mathbf{v}^{F}_i$, and then use
\begin{equation}
\mathbf{v}^{\kappa,F}_i(t^{\kappa})=\mathbf{v}^F_i(t)
\end{equation}
with $\kappa t^{\kappa}=t$, then the 'spatial terms' of the related equation (analogous time transformation) for $\mathbf{u}^{\kappa,F}_i(\tau)=\kappa\mathbf{v}^F_i(t)$ get a small coefficient $\kappa$ while the damping term does not. This implies that the damping term can dominate the Laplacian in the estimates above without further restrictions. The subscheme described for $t\in \left[l-1,l-\frac{1}{2}\right]$ is then repeated twice in transformed time coordinates. Note that all the estimates above are inherited, where we can use the semigroup property in order to get global estimates straightforwardly.
The second improvement we mentioned concerns the direct calculation of the Euler part via a Dyson formalism in local time. This is also related to the corollaries of the main statements of this paper concerning the Euler equation. As our considerations are closely linked to higher order schemes for the Navier-Stokes equation (and for the Euler equation) with respect to time, we consider the related properties of the Dyson formalism in the next section.
\section{A converging algorithm}
The detailed description of an algorithm with error estimates, the extension of the scheme to initial-value boundary problems and free boundary problems, and the extension to more realistic models with variable viscosity and to compressible models will be treated elsewhere. We only sketch some features of the algorithm which may be interesting to practitioners of simulation and from a numerical point of view.
The constructive global existence proof above leads to algorithmic schemes which converge in strong norms. From an algorithmic point of view we observe that errors of solutions of finite cut-off systems of the infinite nonlinear ODE representation of the incompressible Navier-Stokes equation are controlled
\begin{itemize}
\item[i)] spatially, i.e., with respect to the order of the modes, by convergence of each Fourier representation of a velocity component in the dual Sobolev space $h^s\left({\mathbb Z}^n\right)$ for $s>n+2$ (if the initial data are in this space). This convergence (the error) can be estimated by weakly singular elliptic integrals.
\item[ii)] An auto-control time dilatation transformation at time $t\in [l-1,l)$ of the form
\begin{equation}\label{timetransalg}
t\rightarrow \tau= \frac{t-(l-1)}{\sqrt{1-(t-(l-1))^2}}
\end{equation}
combined with a relation for $t\in [l-1,l]$ at each time step $l\geq 1$ of the form
\begin{equation}
\mathbf{v}^F(t)=(1+(t-(l-1)))\mathbf{u}^F(\tau)
\end{equation}
produces a damping term at each time step $l$ for a locally equivalent equation for the modes of the infinite vector $\mathbf{u}^F$. At first glance it seems that the price to pay for this stabilization is that the time interval $[l-1,l)$ is transformed to an infinite time interval $[0,\infty)$, such that the price for stabilization is an infinite computation cost. However, this is not the case. At each time step we may use (\ref{timetransalg}) on a substep interval $\left[l-1,l-\frac{1}{2} \right]$, and this stretches the local time step size only from the length of
\begin{equation}
\left[l-1,l-\frac{1}{2} \right] \mbox{ to the length of } \left[0,\frac{1}{\sqrt{3}}\right],
\end{equation}
which is a small increase of computation costs of a factor $\frac{2}{\sqrt{3}}$. Having computed the modes $\mathbf{v}^F(t),\mathbf{u}^F(t)$ (via controlled vectors $\mathbf{v}^{r,F}(t),\mathbf{u}^{r,F}(t)$), at time step $t=l-\frac{1}{2}$ we can consider a second transformation
\begin{equation}\label{timetransalg2}
t\rightarrow \tau= \frac{t-\left( l-\frac{1}{2}\right) }{\sqrt{1-\left( t-\left( l-\frac{1}{2}\right)\right)^2}}
\end{equation}
combined with a relation for $t\in \left[ l-\frac{1}{2},l\right] $ at each time step $l\geq 1$ of the form
\begin{equation}
\mathbf{v}^F(t)=\left( 1+\left( t-\left(l-\frac{1}{2}\right)\right) \right) \mathbf{u}^F(\tau)
\end{equation}
\end{itemize}
In the following we describe algorithmic schemes without auto-control (damping). The damping mechanism is a feature related exclusively to time, and can be treated separately, as its main objective is to get stronger global upper bounds with respect to time. This does not touch the local description essentially.
First, we remark that higher order schemes in time may be based on higher order approximations of the Euler part of the equation. It is interesting that we can solve the Euler part by an iterated Dyson formalism for regular data, although we cannot do this uniquely for dimension $n\geq 3$. We denote the recursively defined approximations of the Euler part of the controlled scheme by $\mathbf{v}^{E,r,p},~p\geq 0$, where $\mathbf{v}^{E,r,0}=\mathbf{h}^{r}=\left(h_{i\alpha}\right)_{1\leq i\leq n,\alpha\in {\mathbb Z}^n\setminus \left\lbrace 0\right\rbrace }$ denotes the initial data of the controlled scheme.
Consider the Euler part of the controlled matrix for the approximative system of order $p\geq 1$ in (\ref{controlmatrix}) above, i.e., the matrix with viscosity $\nu =0$, which is defined on the diagonal by
\begin{equation}
\begin{array}{ll}
\delta_{ij}E^{r,NS}_{i\alpha j\beta}\left(\mathbf{v}^{E,r,p-1}\right)=
-\delta_{ij}\sum_{k=1}^n\frac{2\pi i \beta_k}{l}v^{E,r,p-1}_{k(\alpha-\beta)}\\
\\+\delta_{ij}2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)v^{E,r,p-1}_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi\alpha_i^2},
\end{array}
\end{equation}
and where off-diagonal, i.e., for $i\neq j$, we have the entries
\begin{equation}
(1-\delta_{ij})E^{r,NS}_{i\alpha j\beta}\left(\mathbf{v}^E\right)=2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace }\frac{\sum_{k=1}^n4\pi \beta_j(\alpha_k-\beta_k)v^{E,r,p-1}_{k(\alpha-\beta)}}{\sum_{i=1}^n4\pi\alpha_i^2}.
\end{equation}
Then the approximative Euler system with data
\begin{equation}
\mathbf{v}^{r,E}(t_0)=\left( \mathbf{v}^{r,E}(t_0)_1,\cdots ,\mathbf{v}^{r,E}(t_0)_n\right)^T,~
\mathbf{v}^{r,E}(t_0)\in h^{s}\left({\mathbb Z}^n\right),~s>n+2
\end{equation}
for some initial time $t_0\geq 0$ has a solution for some time $t>0$ of Dyson form
\begin{equation}\label{dysonobseuler}
\begin{array}{ll}
\mathbf{v}^{r,E,p}(t)=\mathbf{v}^{r,E}(t_0)+\sum_{m=1}^{\infty}\frac{1}{m!}\int_{t_0}^tds_1\int_{t_0}^tds_2\cdots \int_{t_0}^tds_m \\
\\
T_m\left(E^{r,p-1}(s_1)E^{r,p-1}(s_2)\cdot \cdots \cdot E^{r,p-1}(s_m) \right)\mathbf{v}^{r,E}(t_0).
\end{array}
\end{equation}
In an algorithmic scheme we carry out this integration up to a certain order $p$, and then use this formula in the Trotter product formula developed above. In algorithms we naturally deal with a finite cut-off of modes. Let us be a bit more specific concerning these finite mode approximations.
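Although the operators $E^{r,p-1}$ do not commute in general, the effect of truncating the Dyson series at a finite order can be observed in a commuting toy case, where the time-ordered exponential collapses to an ordinary exponential. The scalar coefficient $a(t)=\cos t$ and the truncation orders below are illustrative choices; this is a sanity check of the series mechanism only, not of the Navier-Stokes matrix formalism.

```python
import math

def dyson_truncation(t, u0, order):
    # for commuting (scalar) coefficients the time-ordered products collapse,
    # leaving sum_m (1/m!) (int_0^t a(s) ds)^m u0, here with a(s) = cos(s)
    integral = math.sin(t)  # int_0^t cos(s) ds
    return u0 * sum(integral ** m / math.factorial(m) for m in range(order + 1))

t, u0 = 1.0, 1.0
exact = u0 * math.exp(math.sin(t))  # closed form of the full series
err5 = abs(dyson_truncation(t, u0, 5) - exact)
err10 = abs(dyson_truncation(t, u0, 10) - exact)
# the truncation error decays rapidly (factorially) in the order
assert err10 < err5 < 1e-2
assert err10 < 1e-7
```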
The infinite ODE-system can be approximated by finite ODE systems with modes of order less than or equal to some positive integer $k$, in the sense of a projection on the modes of order $|\alpha|\leq k$. At a fixed time $t\geq 0$ this is a projection from the intersection $\cap_{s\in {\mathbb R}}h^s\left({\mathbb Z}^n\right)$ of dual Sobolev spaces into the space of (even globally) analytic functions (finite Fourier series). The Trotter product formula for dissipative operators considered above holds for the system of finite modes, of course. More importantly, the resulting schemes based on this Trotter product formula approximate the corresponding scheme of infinite modes if the limit $k\uparrow \infty$ is considered. In this section we define this approximating scheme of finite modes. It is not difficult to show that the error of this scheme converges to zero as $k\uparrow \infty$. Detailed error estimates which relate the maximal mode of the finite system, the dimension $n$, and the viscosity $\nu$ to the error in $h^s$-norm are of interest in order to design algorithms. This deserves a closer investigation which will be done elsewhere. In this section we define algorithms of different approximation order in time via finite-mode approximations of the non-linear infinite ODEs which are equivalent to the incompressible Navier-Stokes equation. Furthermore, some observations for the choice of the parameters of the size of the domain $l>0$ and the viscosity $\nu >0$ are made, and we observe reductions to a cosine basis (symmetric data).
We consider the controlled scheme which solves an equivalent equation for the sake of simplicity. From our description in the introduction it is clear how to compute an approximate solution of the incompressible Navier-Stokes equation from these data. Recall the representation of the controlled Navier-Stokes equation in terms of infinite matrices.
It makes sense to formulate an 'algorithm' first for the infinite nonlinear ODE system. Accordingly, in the following the use of the word 'compute' related to substeps in an infinite scheme is not meant in a strict sense. It may be defined in some stricter sense by use of transfinite Turing machines, but what we have in mind are finite approximations of an infinite object, such that schemes can be implemented eventually. This is not an algorithm in a strict sense (even the description is not finite), but the projection to a set of finite modes leads to the algorithm immediately once it is described in the infinite set-up.
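To make the projection to finitely many modes concrete, the following sketch performs a few explicit Euler steps for the cut-off version of the mode system (\ref{navodermproof}) with $n=2$ and $l=1$. The cutoff $K$, the viscosity, the step size and the small test data are illustrative choices, and the test data are not assumed to be divergence-free or conjugate-symmetric; this is a structural sketch of the finite scheme, not a validated solver.

```python
import itertools

n, K, nu, dt, steps = 2, 2, 1.0, 1e-3, 10
PI = 3.141592653589793
modes = list(itertools.product(range(-K, K + 1), repeat=n))

# small illustrative (hypothetical) initial data for the two components
v = {i: {a: (0.01 if a != (0, 0) else 0.0) for a in modes} for i in range(n)}

def rhs(v):
    """Right-hand side of the cut-off mode system (l = 1)."""
    dv = {i: {} for i in range(n)}
    for i in range(n):
        for a in modes:
            # viscous term: sum_j nu * (-4 pi alpha_j^2) v_{i alpha}
            acc = nu * sum(-4.0 * PI * aj ** 2 for aj in a) * v[i][a]
            for g in modes:
                if g == (0, 0) or g == a:
                    continue  # gamma runs over Z^n \ {0, alpha} in the text
                ag = (a[0] - g[0], a[1] - g[1])
                if ag not in v[i]:
                    continue  # truncation: drop modes outside the cut-off
                # convection term
                acc -= sum(2.0 * PI * 1j * g[j] * v[j][ag] for j in range(n)) * v[i][g]
                # Leray projection term (only for alpha != 0)
                if a != (0, 0):
                    num = sum(4.0 * PI * g[j] * (a[k] - g[k]) * v[j][g] * v[k][ag]
                              for j in range(n) for k in range(n))
                    acc += 2.0 * PI * 1j * a[i] * num / sum(4.0 * PI * ai ** 2 for ai in a)
            dv[i][a] = acc
    return dv

for _ in range(steps):
    dv = rhs(v)
    v = {i: {a: v[i][a] + dt * dv[i][a] for a in modes} for i in range(n)}

# with small data and strong viscosity the cut-off solution stays bounded
assert max(abs(v[i][a]) for i in range(n) for a in modes) < 0.02
```

A production scheme would combine such a cut-off right-hand side with the damping transformation and the Trotter product structure described above.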
Consider the limit $\mathbf{v}^{r,F}=\left(v^r_{i\alpha} \right)_{\alpha \in {\mathbb Z}^n}$ of the construction of the last section, i.e., the global solution of the controlled infinite ODE system, as given. Then we may write this function formally in terms of the Dyson formalism as
\begin{equation}
v^{r,F}_i(t)=T\exp\left(A^{r}t \right) \mathbf{h}^{F}_i\in h^s_l\left({\mathbb Z}^n\right),
\end{equation}
where
\begin{equation}
\begin{array}{ll}
T\exp(A^{r}t):=\sum_{k=0}^{\infty}\frac{1}{k!}\int_{[0,t]^k}T\,A^{r}(t_1)\cdots A^r(t_k)\,dt_1\cdots dt_k\\
\\
:=\sum_{k=0}^{\infty}\int_0^tdt_1\int_0^{t_1}dt_2\cdots \int_0^{t_{k-1}}dt_k\,A^{r}(t_1)\cdots A^{r}(t_k),
\end{array}
\end{equation}
where
\begin{equation}
A^r(t_j):=\left(A^{r,ij}\right)_{1\leq i,j\leq n}
\end{equation}
and for all $1\leq i,j\leq n$ we have
\begin{equation}
A^{r,ij}=\left(a^{r,ij}_{\alpha\beta} \right)_{\alpha,\beta\in {\mathbb Z}^n},
\end{equation}
where
\begin{equation}\label{matrixsol}
\begin{array}{ll}
a^{r,ij}_{\alpha\beta}=\delta_{ij}\left( \delta_{\alpha\beta}\nu\left(-\sum_{k=1}^n\frac{4\pi^2 \alpha_k^2}{l^2} \right)-\sum_{k=1}^n\frac{2\pi i\beta_kv_{k(\alpha-\beta)}}{l}\right) +L^v_{ij},
\end{array}
\end{equation}
along with
\begin{equation}
L^v_{ij}=2\pi i\alpha_i\frac{\sum_{m=1}^n 4\pi^2 \beta_j(\alpha_m-\beta_m)v_{m(\alpha-\beta)}}{\sum_{m=1}^n4\pi^2 \alpha_m^2}.
\end{equation}
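As a toy illustration of the time-ordered (Dyson) exponential used above, for a finite matrix-valued function $t\mapsto A(t)$ the series can be approximated by a product of short-time propagators, with later times acting to the left. The following sketch is a minimal illustration with an assumed $2\times 2$ example, not part of the scheme itself; for a constant generator the product reduces to the ordinary matrix exponential.

```python
import numpy as np

def expm(M, terms=40):
    # Taylor-series matrix exponential; adequate for the small norms used here
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def time_ordered_exp(A, t, m):
    # T exp(int_0^t A(s) ds) approximated by exp(A(t_m) dt) ... exp(A(t_1) dt),
    # with later times acting to the left (the time-ordering operator T)
    dt = t / m
    P = np.eye(2)
    for j in range(m):
        P = expm(A((j + 0.5) * dt) * dt) @ P
    return P

# assumed example: a constant generator, where the product reduces to expm(A0 t)
A0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
P = time_ordered_exp(lambda s: A0, 1.0, 50)
print(np.max(np.abs(P - expm(A0))))  # ~ 0
```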
Note that the modes $v_{i\alpha}$ are time-dependent; for this reason the Dyson time-order operator appears in this formal representation. In order to construct a computable approximation of this formula we need a stable dissipation term for the second order part of the operator. We can achieve this using the Trotter product formula above, because the factor $\exp\left( \left( \delta_{ij}D^r\right) \right)$ is indeed in the regular matrix space $M^s_n$ and it is a damping factor for the controlled system. In order to apply the Trotter product formula to the controlled Navier-Stokes system we may consider a time discretization and apply the formula time step by time step. Starting with $v^{r,t_0}_i:=h_i$, let us assume that we have defined the scheme for $0=t_0<t_1<\cdots<t_k$ and computed the modes $v^{r,t_l}_{i\alpha}$ for $0\leq l\leq k$.
Then in order to define the scheme recursively on the interval $[t_k,t_{k+1}]$
we first consider the natural splitting of the operator and define
\begin{equation}
A^r(t_k)=\left( \delta_{ij}D\right)+\left( B^{r,ij}(t_k)\right),
\end{equation}
where $B^{r,ij}(t_k)=\left( b^{r,t_k,ij}_{\alpha\beta}\right)$ along with
\begin{equation}\label{matrixscheme}
\begin{array}{ll}
b^{r,t_k,ij}_{\alpha\beta}=
\left( -\sum_{j=1}^n\frac{2\pi i\beta_jv^{r,t_k}_{j(\alpha-\beta)}}{l}\right) +L^{t_k}_{ij},
\end{array}
\end{equation}
and
\begin{equation}
L^{t_k}_{ij}=2\pi i\alpha_i\frac{\sum_{m=1}^n 4\pi^2 \beta_j(\alpha_m-\beta_m)v^{r,t_k}_{m(\alpha-\beta)}}{\sum_{m=1}^n4\pi^2 \alpha_m^2}.
\end{equation}
The simplest infinite scheme we could have in mind, approximating $\mathbf{v}^{r,F}(t)=\left(v^r_{i\alpha}(t)\right)_{\alpha\in {\mathbb Z}^n}$ for arbitrary given $t>0$ in $m$ steps, is then
\begin{equation}
\begin{array}{ll}
\left( \exp\left(\frac{t}{m}\left( \delta_{ij}D\right) \right) \exp\left(\frac{t}{m}\left( B^{r,ij}(t_m)\right) \right)\right)\times \\
\\
\cdots \left( \exp\left(\frac{t}{m}\left( \delta_{ij}D\right) \right) \exp\left(\frac{t}{m}\left( B^{r,ij}(t_0)\right) \right)\right)\mathbf{h}.
\end{array}
\end{equation}
This is indeed a time-dependent Trotter-product type approximation with an error of order $O\left(\frac{1}{m}\right)$ as $m\uparrow \infty$. Higher order approximations can be achieved by taking account of some correction terms in the Trotter product formula. Indeed, consider 'truncations of order $q$' of the term defined in (\ref{Ceq*}) above. For each $t_r,~0\leq r\leq m$ consider
\begin{equation}\label{Ceqorderk}
\begin{array}{ll}
C^q(t_r)=\left( \delta_{ij}D\right) +\left( B^{r,ij}(t_r)\right)+\\
\\
\sum_{p=1}^{q}\frac{1}{p!}\sum_{l\geq 1,~\beta^l\in {\mathbb N}_{10}^l,~ m'+|\beta^l|=p,~l\leq m'+1}\left( \delta_{ij}D\right)^{m'}\times\\
\\
\times I_{\beta^{m'}}\left[\Delta \left(B^{r,ij}(t_r)\right),\left( B^{r,ij}(t_r)\right) \right]_T.
\end{array}
\end{equation}
According to the Trotter product formula, higher order schemes can be obtained if there is a correction term $E$ such that the replacement of $\left( B^{r,ij}(t_r)\right)$ by $\left( B^{r,ij}(t_r)\right)+E$ cancels the correction of order $q$ at each time step, which may be defined by
\begin{equation}\label{Ceqorderkcorr}
\begin{array}{ll}
C^q(t_r)_{\mbox{corr}}=C^q(t_r)-\left( \delta_{ij}D\right) -\left( B^{r,ij}(t_r)\right)\\
\\
=\sum_{p=1}^{q}\frac{1}{p!}\sum_{l\geq 1,~\beta^l\in {\mathbb N}_{10}^l,~ m'+|\beta^l|=p,~l\leq m'+1}\left( \delta_{ij}D\right)^{m'}\times\\
\\
\times I_{\beta^{m'}}\left[\Delta \left( B^{r,ij}(t_r)\right),\left( B^{r,ij}(t_r)\right) \right]_T.
\end{array}
\end{equation}
The explicit analysis of these schemes is of interest in its own right and will be considered elsewhere.
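To make the first order behavior of the splitting concrete, the following sketch compares the Trotter product with the exact propagator for an assumed two-dimensional linear toy model, with a diagonal damping part playing the role of $\delta_{ij}D$ and a skew part playing the role of $B^{r,ij}$; it is an illustration only, not the Navier-Stokes system itself.

```python
import numpy as np

def expm(M, terms=40):
    # Taylor-series matrix exponential for small matrices
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

A = np.diag([-1.0, -2.0])                 # damping part (role of delta_ij D)
B = np.array([[0.0, 0.5], [-0.5, 0.0]])   # non-diagonal part (role of B^{r,ij})
t, h = 1.0, np.array([1.0, 1.0])
exact = expm((A + B) * t) @ h

errs = []
for m in (4, 16, 64):
    step = expm(A * t / m) @ expm(B * t / m)
    errs.append(np.linalg.norm(np.linalg.matrix_power(step, m) @ h - exact))
print(errs)  # decreasing roughly like 1/m
```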
For simulations we have to make a cut-off leaving only finitely many modes, of course. The formulas established remain true if we consider natural projections to systems of finite modes. We define
\begin{equation}\label{finmo}
\begin{array}{ll}
\left( \exp\left(\frac{t}{m}P_{M^l}\left( \delta_{ij}D\right) \right) \exp\left(\frac{t}{m}P_{M^l}\left( B^{r,ij}(t_m)\right) \right)\right)\times \\
\\
\cdots \left( \exp\left(\frac{t}{m}P_{M^l}\left( \delta_{ij}D\right) \right) \exp\left(\frac{t}{m}P_{M^l}\left( B^{r,ij}(t_0)\right) \right)\right)P_{v^l}\mathbf{h}^F,
\end{array}
\end{equation}
where the operator $P_{v^l}$ is defined by
\begin{equation}
P_{v^l}\mathbf{h}^F=\left(P_{1v^l}\mathbf{h}^F_1,\cdots,P_{nv^l}\mathbf{h}^F_n\right)
\end{equation}
such that for $1\leq i\leq n$ and $h_i=\left(h_{i\alpha}\right)_{\alpha\in {\mathbb Z}^n}$
we define
\begin{equation}
P_{iv^l}(v_{\alpha})_{\alpha\in {\mathbb Z}^n}=\left(v^l_{i\alpha}\right)_{|\alpha|\leq l},
\end{equation}
and for infinite matrices $M=(M^{ij})=\left( \left(m^{ij}_{\alpha\beta}\right)_{\alpha,\beta\in {\mathbb Z}^n}\right)$ we define
\begin{equation}
P_{M^l}M=\left(P_{ij,M^l}M^{ij}\right),
\end{equation}
where
\begin{equation}
P_{ij,M^l}M^{ij}=\left(m^{ij}_{\alpha\beta}\right)_{|\alpha|,|\beta|\leq l}.
\end{equation}
In both cases (vector- and matrix-projection) we understand that the order of the modes is preserved, of course.
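The mode projections can be realized concretely as follows, in a minimal sketch in terms of dictionaries of modes; here the cut-off is taken in the Euclidean norm of the multi-index, which is one possible reading of $|\alpha|$, and the sample data are assumed for illustration.

```python
import numpy as np

def project_vector(modes, cutoff):
    # keep only the modes alpha (tuples of integers) with |alpha| <= cutoff
    return {a: c for a, c in modes.items()
            if np.linalg.norm(a) <= cutoff}

def project_matrix(entries, cutoff):
    # analogous projection for infinite matrices indexed by pairs (alpha, beta)
    return {(a, b): m for (a, b), m in entries.items()
            if np.linalg.norm(a) <= cutoff and np.linalg.norm(b) <= cutoff}

v = {(0, 0): 1.0, (1, 0): 0.5, (2, 2): 0.1}
print(project_vector(v, 1))  # the (2, 2) mode is dropped
```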
Next, it is a consequence of our constructive existence proof that
\begin{thm}
Let $h_i\in C^{\infty}({\mathbb T}_l)$, and let $T>0$ be a time horizon for the incompressible Navier-Stokes equation problem on the $n$-torus. Let $s\geq n+2$. Then the finite mode scheme defined in (\ref{finmo}) converges for each $0\leq t\leq T$ to the solution of the incompressible Navier-Stokes equation in its infinite nonlinear ODE representation for the Fourier modes as $m,l\uparrow \infty$.
\end{thm}
Next recall that the parameter $\nu>0$ can be chosen arbitrarily. A large parameter $\nu>0$ increases damping and makes the computation more stable. However, in order to approximate a Cauchy problem (the initial value problem on the whole space) we need large $l$ (length of the torus). Note that the damping terms in the computation scheme are of order $\frac{1}{l^2}$, the convection terms are of order $\frac{1}{l}$ and the Leray projection terms are independent of the length of the torus. They become dominant if the size $l$ of the $n$-torus is large. From our constructive existence proof we have
\begin{cor}
The Cauchy problem for the incompressible Navier Stokes equation may be approximated by a computational scheme of order $k$ on the $n$-torus ${\mathbb T}_l$ for $l$ large enough, where in a bi-parameter transformation of the original problem $\nu>0$ should be chosen
\begin{equation}
\nu\gtrsim l^2
\end{equation}
such that the damping term is not dominated by the Leray projection terms. Furthermore, due to the quadratic growth of the moduli of the diagonal (damping) terms with the order of the modes, compared to the linear growth of the convection term and the Leray projection term with the order of the modes, the scheme remains stable for larger maximal order $k$ if it is stable for lower maximal order $k$. Furthermore, the choice
\begin{equation}
\nu \geq 2|h|^2_s
\end{equation}
for $s\geq 2+n$ leads to solutions which are uniformly bounded with respect to time.
\end{cor}
For numerical and computational purposes it is useful to represent initial data in $\cos$ and $\sin$ terms (otherwise computational errors may lead to nonzero imaginary parts of the computational approximations). We have
\begin{lem}\label{lemeven}
Functions on the $n$-torus ${\mathbb T}^n_l$ can be represented by symmetric data, i.e. with the basis of even functions
\begin{equation}
\left\lbrace \cos\left( \frac{2\pi\alpha x}{l}\right) \right\rbrace_{\alpha\in {\mathbb Z}^n}.
\end{equation}
\end{lem}
The reason for the statement of Lemma \ref{lemeven} is simply that one may consider an $L^2$-function $f$ on the cube $[-l,l]^n$ (arbitrarily prescribed on $[0,l]^n$) with periodic boundary conditions with respect to a general basis for $L^2\left({\mathbb T}^n_l\right)$ such that
\begin{equation}\label{exp}
f(x)=\sum_{\alpha\in {\mathbb Z}^n} f_{\alpha}\exp(\frac{2\pi i\alpha x}{l}),
\end{equation}
and then one observes that $f(\cdots,x_i,\cdots)=f(\cdots,-x_i,\cdots)$ implies that the $\sin$-terms implicit in the representation (\ref{exp}) cancel.
Hence on the $n$-torus which is built from the cube $\left[0,l\right]^n$ we may consider symmetric data without loss of generality.
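The cancellation of the $\sin$-terms for symmetric data can be checked numerically; in the following sketch (with an assumed even sample function on one period $[-l,l)$ of the torus, built from the cosine basis) the imaginary parts of the discrete Fourier coefficients vanish up to rounding.

```python
import numpy as np

l, N = 1.0, 64
x = np.linspace(-l, l, N, endpoint=False)   # one period [-l, l) of the torus
# an even (symmetric) sample function built from the cosine basis
f = np.cos(np.pi * x / l) + 0.3 * np.cos(3 * np.pi * x / l)
coeffs = np.fft.fft(f) / N
print(np.max(np.abs(coeffs.imag)))  # ~ 0: no sin-terms for symmetric data
```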
\section{The concept of turbulence}
In the context of the results above let us make some final remarks concerning the concept of turbulence.
There are several authors (cf. \cite{CFNT} and \cite{FJRT} for example) who emphasized that the dynamical properties of the Navier-Stokes equation, such as the existence of strange attractors, bifurcation behavior etc., rather than the question of global smooth existence, may be important for the concept of turbulence. We share this view, and we want to emphasize from our point of view why we consider the infinite dynamical systems of velocity modes to be the interesting object in order to start the study of turbulence. This concept is indeed difficult, and in a perspective of modelling it is always worthwhile to follow this difficulty up to its origins on a logical or at least very elementary level. According to classical (pre-Fregean) logic, a concept is a list of notions, each of which is a list of notions, and so on. This is not a definition but something to start with (the definition has some circularity, as there is no sharp distinction between a concept and a notion, but this may be part of an inherent difficulty of defining the concept of concept). According to Leibniz, a concept is clear (German: 'klar') if enough notions are known in order to decide the subsumption of a given object under the concept, and a concept is conspicuous (German: 'deutlich') if there is a complete list of notions of that concept. All this is relative to a 'cognoscens', of course. Since Leibniz expected that even for empirical concepts like 'gold' the list of notions may be infinite, he expected a conspicuous cognition to be accessible only to an infinite mind and not to human beings. Anyway, according to this concept of concept we may say that our concept of the concept of turbulence may not even be clear. However, there is some agreement that some specific notions belong to the notions of 'turbulence'. Some aspects of fluid dynamics may be better discussed with respect to more advanced modelling.
For example, for a realistic discussion of a 'whirl' we may better define a Navier-Stokes equation with a free boundary in one (upper) direction and a gravitational force in the opposite direction (pointing downwards). This free boundary may be analyzed by front fixing on a fixed half space, as we did in the case of an American derivative free boundary problem. Additional effects are created by boundary conditions (the shape of a river bed, for example). Indeed, without boundary conditions dissipative features will dominate in the end.
First we observe that it is indeed essential to study the dynamics of the modes.
The Navier-Stokes equation is a model of classical physics ($|\mathbf{v}|\leq c$, where $c>0$ denotes the speed of light). Therefore the physics should be invariant with respect to the Galilei transformation. If $\mathbf{v}=(v_i)^T_{1\leq i\leq n}$ is a solution of the Navier Stokes equation, then for a particle which is at $\mathbf{x}_0=(x_{01},\cdots,x_{0n})$ at time $t_0$ we have the trajectory $\mathbf{x}(t),~t\geq t_0$ determined by the equation
\begin{equation}
\stackrel{.}{\mathbf{x}}(t)=\mathbf{v}, \mbox{ i.e., }x_i(t)=x_{0i}+\int_{t_0}^tv_i(s)ds,~1\leq i\leq n,
\end{equation}
which means that for the modes we have
\begin{equation}
x_{i\alpha}(t)=x_{i\alpha}(t_0)+\int_{t_0}^t v_{i\alpha}(s)ds,~1\leq i\leq n.
\end{equation}
Hence, rotationality, periodic behavior, or irregular dynamic behaviour due to a superposition of different modes $v_{i\alpha}$ which are 'out of phase' translates into an analogous dynamic behavior for the trajectories, superposed with a uniform translative movement (which disappears in a moved laboratory and is therefore physically irrelevant). In other words, up to Galilei transformation the dynamic behavior of the modes translates into equivalent dynamic behavior of the trajectories.
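The trajectory formula above can be illustrated numerically; in the following sketch (with two assumed oscillatory mode contributions of incommensurate frequencies, so that they are 'out of phase') the particle path is recovered by a simple quadrature of the velocity.

```python
import numpy as np

t0, T, steps = 0.0, 10.0, 4000
ts = np.linspace(t0, T, steps)
dt = ts[1] - ts[0]

# assumed superposition of two modes which are 'out of phase'
v = np.cos(ts) + 0.7 * np.cos(np.sqrt(2.0) * ts)
# x(t) = x_0 + int_{t_0}^t v(s) ds, via a Riemann sum
x = 0.0 + np.cumsum(v) * dt
print(x[-1])
```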
The dynamics of the modes is an infinite ODE which may be studied via bifurcation theory. Since we proved the polynomial decay of modes, a natural question is whether a center manifold theorem and the existence of Hopf bifurcations may be proved. The quadratic terms of the modes in the ODE of the incompressible Navier Stokes equation in the dual formulation seem to imply this. Irregular behavior may also be due to several Hopf bifurcations with different periodicity. Bifurcation analysis may then lead to a proof of structural instability, sensitive behaviour and chaos (cf. \cite{A}). Global sensitive behavior and chaos may also be proved via generalisations of Sarkovskii's theorem or via intersection theory of algebraic varieties as proposed in \cite{KD1} and \cite{KD2}. Studying the effects of boundary conditions and free boundary conditions for turbulent behavior may also be crucial. For the Cauchy problem dissipative effects will take over in the long run. For a 'river', modelled by fixed boundary conditions for the second and third velocity components $v_2$ and $v_3$ (river bank), a free boundary condition (surface) and a fixed boundary condition (bottom) for the first velocity component $v_1$, and a gravitational force in the direction related to the first velocity component, complex dynamical behavior can be due to the boundary effects. It is natural to study such a boundary model by
the front fixing method considered in \cite{K2}. It is likely that the scheme considered in \cite{KNS} and \cite{K3} can be applied in order to obtain global existence results.
Finally, we discuss some relations between the concept of turbulence, singular solutions of the Euler equation, and bifurcation theory from the point of view of the Dyson formalism developed in this article. We refer to an update of \cite{KSV} for a deeper discussion which will appear soon.
The qualitative description of turbulence in paragraph 31 of \cite{LL} gives a list of notions of the concept of turbulence associated to high Reynolds numbers, which consists of $\alpha)$ an extraordinarily irregular and disordered change of velocity at every point of a space-time area, $\beta)$ the velocity fluctuates around a mean value, $\gamma)$ the amplitudes of the fluctuations are not small compared to the magnitude of the velocity itself in general, $\delta)$ the fluctuations of the velocity can be found at a fixed point of time, $\epsilon)$ the trajectories of fluid particles are very complicated, causing a strong mixture of the fluid. All these notions can be satisfied by proving the existence of Hopf bifurcations and Chenciner bifurcations for the infinite ODE equations
of modes. Proving this seems not to be out of reach, except for the notion in $\gamma)$, as the usual methods of bifurcation theory prove the existence of such bifurcations via local topological equivalence to normal forms, where some lower order terms are sufficient to describe some dynamical properties.
\begin{center}
{\bf Appendix}
\end{center}
\appendix
\section*{Convergence criteria of Euler type Trotter product schemes for Navier Stokes equations}
In this appendix nested Euler type Trotter product schemes for the Navier Stokes equation are defined, where limits of the schemes define regular solutions of the Navier Stokes equation depending on the regularity of the data. The critical regularity for convergence of the scheme is data in a Sobolev space of order $n/2+1$, where the algorithms do not converge for lower regularity, indicating possible singularities. We support the conjecture that the equations have singular solutions for data which are only in a Sobolev space of order less than $n/2+1$. Observations of the sketch in \cite{KT} are simplified and sharpened, where explicit upper bound constants are found. This scheme has a first order error with respect to the time discretization size, where higher order schemes can be defined using the Dyson formalism above.
\section{Definition of an Euler-type Trotter-product scheme and statement of results}
We consider global Euler type Trotter product schemes of the incompressible Navier Stokes equation on the $n$-dimensional Torus ${\mathbb T}^n$, where we start with an observation about function spaces.
A function $g$ in the Sobolev space $H^s\left({\mathbb T}^n \right)$ for $s\in {\mathbb R}$ can be considered in the analytic basis $\left\lbrace \exp\left(2\pi i\alpha x\right) \right\rbrace_{\alpha\in {\mathbb Z}^n}$, where ${\mathbb Z}$ is the ring of integers as usual and ${\mathbb Z}^n$ is the set of $n$-tuples of integers. The information of the function $g$ is completely encoded in the list of Fourier modes $(g_{\alpha})_{\alpha \in {\mathbb Z}^n}$.
We say that the infinite vector $(g_{\alpha})_{\alpha\in {\mathbb Z}^n}$ is in the dual Sobolev space $h^s=h^s\left({\mathbb Z}^n\right) $ of order $s\in {\mathbb R}$, if
\begin{equation}
\sum_{\alpha \in{\mathbb Z}^n}|g_{\alpha}|^2\left\langle \alpha\right\rangle^{2s}< \infty,
\end{equation}
where
\begin{equation}
\left\langle \alpha\right\rangle :=\left(1+|\alpha|^2 \right)^{1/2}.
\end{equation}
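Membership in $h^s$ can be probed numerically by truncated weighted sums; the following sketch (for $n=1$ and an assumed decay of the modes, chosen for illustration) shows how the partial sums stabilize below the critical order and grow with the cut-off above it.

```python
import numpy as np

def h_s_partial_norm_sq(decay, s, K):
    # truncated sum_{|alpha| <= K} |g_alpha|^2 <alpha>^{2s} for n = 1
    alphas = np.arange(-K, K + 1)
    return float(np.sum(decay(alphas) ** 2 * (1.0 + alphas**2) ** s))

# modes decaying like 1/(1 + |alpha|^2) lie in h^s exactly for s < 3/2 (n = 1)
g = lambda a: 1.0 / (1.0 + np.abs(a) ** 2)
print(h_s_partial_norm_sq(g, 1.0, 2000))  # stabilizes: g is in h^1
print(h_s_partial_norm_sq(g, 2.0, 2000))  # grows linearly in K: g is not in h^2
```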
As we work on the torus for the whole time we suppress the reference to ${\mathbb T}^n$ and, respectively, to ${\mathbb Z}^n$ in the denotation of dual Sobolev spaces in the following.
It is well-known that $g\in H^s$ iff $(g_{\alpha})_{\alpha\in {\mathbb Z}^n}\in h^s$ for $s\in {\mathbb R}$. Next we relate this categorization of regularity by orders $s$ of dual Sobolev spaces to the regularity criteria of initial data for the Navier Stokes equation. For the data $h_i,~1\leq i\leq n$, of an incompressible Navier Stokes equation the regularity condition of the scheme is
\begin{equation}\label{regdata1}
\exists C>0~\forall \alpha\in {\mathbb Z}^n:~{\big |}h_{i\alpha}{\big |}\leq \frac{C}{1+|\alpha|^{n+s}} \mbox{ for some fixed }s>1.
\end{equation}
Furthermore, for $r<\frac{n}{2}+1$ there are data $h_i\in h^{r}({\mathbb Z}^n),~1\leq i\leq n$, such that the scheme diverges. This does not prove that there are singular solutions (although this may be so) but that the scheme does not work, i.e. has no finite limit. The case $s=1$ is critical: a certain contractive property of iterated elliptic integrals is lost. It seems that the algorithm defined below is still convergent, but this critical case will be considered elsewhere. Note that for data as in (\ref{regdata1}) we have
\begin{equation}
{\big |}h_{i\alpha}{\big |}^2\leq \frac{C^2}{1+|\alpha|^{2n+2s}}\Rightarrow h_i\in h^{\frac{1}{2}n+s}\left({\mathbb Z}^n\right).
\end{equation}
On the other hand, if $h_i\in h^{\frac{1}{2}n+s}\left({\mathbb Z}^n\right)$ then
\begin{equation}
\sum_{\alpha \in{\mathbb Z}^n}|h_{i\alpha}|^2\left(1+|\alpha|^2 \right)^{\frac{n}{2}+s}< \infty,
\end{equation}
which means that there must be a $C>0$ such that the condition in (\ref{regdata1}) holds. Here note that the sum over ${\mathbb Z}^n$ is essentially equivalent to an $n$-dimensional integral, such that we need a decay of the squared modes $|h_{i\alpha}|^2$ with an exponent which exceeds $n+2s$ by $n$.
Especially note that for $n=3$ the critical regularity of the data for convergence is $H^{2.5}$ (resp. $h^{2.5}$ for the dual space).
Concerning the reason for this specific threshold value $\frac{n}{2}+s$ with $s>1$ for convergence of Euler-type Trotter product schemes we first observe that for some constant $c=c(n)$ depending only on the dimension $n$ we certainly have
\begin{equation}\label{cC}
\sum_{\beta\in {\mathbb Z}^n}\frac{C}{1+|\alpha-\beta|^{m}}\frac{C}{1+|\beta|^{l}}\leq \frac{cC^2}{1+|\alpha|^{m+l-n}}.
\end{equation}
For factors as the Burgers term applied to initial data (which satisfy (\ref{regdata1})) we have orders $m> n+1$ (one spatial derivative) and $l>n$, such that the right side has an order of decay $>n+1$. Hence, the relation in (\ref{cC}) represents a contractive property for higher order modes $|\alpha|\geq 2$. It becomes contractive for all modes by some spatial scaling (cf. below). Similarly for the Leray projection term (which has two spatial derivatives but behaves similarly to the Burgers term concerning the decay with respect to modes, due to the integration with the dual of first derivatives of the Laplacian kernel).
We shall use the relation in (\ref{cC}) (among other spatial features of the operator), and shall observe that at any stage $N\geq 1$ of an Euler type product scheme $v^N_{i\alpha}(m\delta t^{(N)})_{1\leq i\leq n,~\alpha\in {\mathbb Z}^n,~0\leq m\leq 2^N}$ we have an upper bound of the form
\begin{equation}\label{bb}
{\big |}v^N_{i\alpha}(m\delta t^{(N)}){\big |}\leq \frac{C_0}{1+|\alpha|^{n+s}}
\end{equation}
for all $m\geq 1$ for some finite $C_0>0$ depending only on the dimension $n$, the viscosity $\nu >0$, the initial data $h_i,~1\leq i\leq n$, and on the time horizon $T>0$. We also provide sharper arguments where the upper bound constant $C_0$ is independent of the time horizon $T>0$. In general we shall have an upper bound as in (\ref{bb}) with $C_0>C$ if $|h_{i\alpha}|\leq \frac{C}{1+|\alpha|^{\frac{n}{2}+s}}$ for some $s>1$, since we cannot prove a strong contraction property for the Navier Stokes operator. Another spatial effect of the operator that we use is that all nonlinear terms have a spatial derivative. For the spatial transformation
\begin{equation}
v^{r}_i(t,y)=v_i(t,x),~y_i=rx_i,~1\leq i\leq n
\end{equation}
we get $v_{i,j}(t,x)=v^{r}_{i,j}(t,y)r$ and $v_{i,j,j}(t,x)=v^{r}_{i,j,j}(t,y)r^2$ such that the Navier Stokes equation becomes
\begin{equation}\label{Navlerayr}
\left\lbrace \begin{array}{ll}
\frac{\partial v^r_i}{\partial t}-\nu r^2\sum_{j=1}^n v^r_{i,j,j}
+r\sum_{j=1}^n v^r_jv^r_{i,j}=f_i\\
\\ \hspace{1cm}+rL_{{\mathbb T}^n}\left( \sum_{j,m=1}^n\left( v^r_{m,j} v^r_{j,m} \right) (\tau,.)\right) ,\\
\\
\mathbf{v}^r(0,.)=\mathbf{h},
\end{array}\right.
\end{equation}
where $L_{{\mathbb T}^n}$ denotes the Leray projection on the torus.
These are natural upper bounds in the scheme related to the Burgers terms, and similar relations hold for the Leray projection terms. Since $2s-1>s$ we have a strong contraction property for higher order expansions of approximative solutions, and this can be used (either directly via weighted norms or in a refined scheme with an auto-control function).
In order to define a Trotter product scheme for the incompressible Navier Stokes equation models it is convenient to have a time-scaling as well. Note that for a time scale $\rho>0$ with $t=\rho\tau$, where $\rho$ depends only on the initial data, the viscosity and the dimension, the resulting Navier Stokes equation is equivalent. For a spatial scaling with parameter $r$ and a time scaling with parameter $\rho$, i.e., for the transformation $v^{\rho,r}_i(\tau,y)=v_i(t,x)$ with $t=\rho \tau$ and $y_i=rx_i$, we get
\begin{equation}\label{Navleray}
\left\lbrace \begin{array}{ll}
\frac{\partial v^{\rho,r}_i}{\partial \tau}-\rho r^2\nu\sum_{j=1}^n v^{\rho,r}_{i,j,j}
+\rho r\sum_{j=1}^n v^{\rho,r}_jv^{\rho,r}_{i,j}=f_i\\
\\ \hspace{1cm}+\rho rL_{{\mathbb T}^n}\left( \sum_{j,m=1}^n\left( v^{\rho,r}_{m,j}v^{\rho,r}_{j,m}\right) (\tau,.)\right) ,\\
\mathbf{v}^{\rho,r}(0,.)=\mathbf{h},
\end{array}\right.
\end{equation}
to be solved for $\mathbf{v}^{\rho,r}=\left(v^{\rho,r}_1,\cdots ,v^{\rho,r}_n \right)^T$ on the domain $\left[0,\infty \right)\times {\mathbb T}^n$; this is completely equivalent to the usual Navier Stokes equation in its Leray projection form. As
\begin{equation}
v_{i,j}=\frac{\partial v^{\rho,r}_i}{\partial y_j}\frac{d y_j}{d x_j}=rv^{\rho,r}_{i,j}
\end{equation}
it is clear that the Burgers term gets a factor $r$ by the scaling, and the relation $v_{i,j,j}=r^2v^{\rho,r}_{i,j,j}$ implies that the viscosity term gets a factor $r^2$ upon the simple spatial scaling transformation. The scaling of the Leray projection term is less obvious. For $\rho=1$ we write $v^{1,r}_i=v^{\rho,r}_i$. Under the pressure scaling $p^{1,r}(t,y)=p(t,x)$ the related Poisson equation (on the torus ${\mathbb T}^n$) in original coordinates
\begin{equation}
\Delta p=\sum_{j,k=1}^nv_{j,k}v_{k,j}
\end{equation}
transforms to
\begin{equation}
r^2\Delta p^{1,r}=\sum_{j,k=1}^nr v^{1,r}_{j,k}r v^{1,r}_{k,j}\Leftrightarrow \Delta p^{1,r}=\sum_{j,k=1}^nv^{1,r}_{j,k}v^{1,r}_{k,j}
\end{equation}
such that from
\begin{equation}
p_{,i}=rp^{1,r}_{,i}
\end{equation}
we know that the Leray projection term gets a scaling factor $r$.
\begin{rem}
Note that the spatial scaling factor behaves differently for some typical Cauchy problems with known singular solutions such as
\begin{equation}\label{w01}
\frac{\partial w_0}{\partial t}-\nu\Delta w_0 +w^2_0=0,~w_0(0,.)=g.
\end{equation}
Here, a transformation $w^{r}_0(t,y)=w_0(t,x),~y=rx$ leads to
\begin{equation}\label{w02}
\frac{\partial w^r_0}{\partial t}-\nu r^2\Delta w^r_0 +(w^r_0)^2=0,~w^r_0(0,.)=g.
\end{equation}
A singular point $(t_s,x_s)$ of a solution $w_0$ of (\ref{w01}) is transformed to a singular point $(t_s,rx_s)$ of a solution $w^r_0$ of (\ref{w02}). A strong viscosity damping coefficient $r^2\nu$ caused by large $r$ pushes the singular point $(t_s,rx_s)$ towards spatial infinity, but it is there nevertheless. The example in (\ref{w02}) is a scalar equation, but similar considerations hold for systems of equations as well, of course. Similarly, if the function $w_1$ satisfies
\begin{equation}\label{w11}
\frac{\partial w_1}{\partial t}-\nu\Delta w_1 +\left\langle \nabla w_1,\nabla w_1\right\rangle=0 ,~w_1(0,.)=g,
\end{equation}
then the function $w^r_1(t,y)=w_1(t,x),~y=rx$, satisfies
\begin{equation}\label{w12}
\frac{\partial w^r_1}{\partial t}-\nu r^2\Delta w^r_1 +r^2\left\langle \nabla w^r_1,\nabla w^r_1\right\rangle=0 ,~w^r_1(0,.)=g.
\end{equation}
Here the viscosity term and the nonlinear term have the same scaling coefficient $r^2$. Again, a singular point $(t_s,x_s)$ of a solution $w_1$ of (\ref{w11}) is transformed to a singular point $(t_s,rx_s)$ of a solution $w^r_1$ of (\ref{w12}).
\end{rem}
We shall observe that iterated elliptic integral upper bounds as in (\ref{cC}) related to nonlinear terms of the Navier Stokes equation become contractive for each mode, i.e., the relation in (\ref{cC*}) below becomes a contractive upper bound for nonlinear terms in the scheme in the sense that for $s>1$
\begin{equation}\label{cC*}
\sum_{\beta\in {\mathbb Z}^n}\rho r\frac{(n+n^2)C}{1+|\alpha-\beta|^{n+s}}\frac{|\beta|C}{1+|\beta|^{n+s}}\leq \frac{\rho rc_0C^2}{1+|\alpha|^{n+2s-1}},\mbox{where}~n+2s-1>n+s
\end{equation}
for some finite constant $c_0>1 $ which depends on the dimension.
\begin{rem}
Note that the estimate in (\ref{cC*}) is an estimate for the Burgers term prima facie. However, the Leray projection term has a similar estimate, since the quadratic terms $(\alpha_k-\gamma_k)\gamma_j$ get a factor $\frac{\alpha_i}{\sum_{i=1}^n\alpha_i^2}$.
\end{rem}
We remark that $C>1$ in (\ref{cC*}) is an upper bound constant for the velocity modes which is observed to be inherited by the scheme. Note that the assumption $C>1$ is without loss of generality. There are several parameter choices which make the viscosity damping dominant.
Note that for the choice
\begin{equation}\label{rpara}
r=\frac{c_0^2C^2}{\nu},~\rho=\frac{\nu}{2c_0^2C^2}
\end{equation}
the viscosity coefficient $\rho r^2\nu$, i.e. the coefficient of the Laplacian in (\ref{Navleray}), becomes
\begin{equation}\label{viscpar}
\rho r^2\nu =\frac{c_0^2C^2}{2}
\end{equation}
and the parameter coefficient $\rho r$ of the nonlinear terms in (\ref{Navleray}) satisfies
\begin{equation}\label{nlinpara}
\rho r= \frac{1}{2}.
\end{equation}
Note that for $c_0C>1$ the viscosity damping coefficient $\rho r^2\nu$ in (\ref{viscpar}) becomes dominant compared to the coefficient of the nonlinear term $\rho r= \frac{1}{2}$.
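The effect of the parameter choice in (\ref{rpara}) can be checked directly; the following sketch uses assumed sample values for $\nu$, $c_0$ and $C$, and any positive values yield the same coefficients.

```python
# scaling parameters r = c0^2 C^2 / nu and rho = nu / (2 c0^2 C^2), as in the
# text; nu, c0, C below are assumed sample values for illustration
nu, c0, C = 0.8, 2.0, 1.5

r = c0**2 * C**2 / nu
rho = nu / (2.0 * c0**2 * C**2)

visc_coeff = rho * r**2 * nu   # coefficient of the Laplacian in the scaled equation
nonlin_coeff = rho * r         # coefficient of the nonlinear terms

print(visc_coeff)    # equals c0^2 C^2 / 2
print(nonlin_coeff)  # equals 1/2
```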
We mention that the scheme can be applied to models with external force terms where the functions $f_i: {\mathbb T}^n\rightarrow {\mathbb R},~1\leq i\leq n$, are in $H^1$ (at least). For time-dependent forces the situation is more complicated, because force terms may cancel the viscosity term. In this paper we consider the case $f_i\equiv 0$ for all $1\leq i\leq n$.
First we define a solution scheme, and then we state some results concerning convergence and divergence. The main result about convergence is proved in section 2.
For $1\leq i\leq n$ we write the velocity component $v^{\rho,r}_i=v^{\rho,r}_i(\tau,x)$ for fixed $\tau\geq 0$ in the analytic basis $\left\lbrace \exp\left( \frac{2\pi i\alpha x}{l}\right),~\alpha \in {\mathbb Z}^n\right\rbrace $ such that
\begin{equation}
v^{\rho,r}_i(\tau,x):=\sum_{\alpha\in {\mathbb Z}^n}v^{\rho,r}_{i\alpha}(\tau)\exp{\left( \frac{2\pi i\alpha x}{l}\right) },
\end{equation}
where $l>0$ measures the size of the torus (sometimes we choose $l=1$ without loss of generality).
Then the initial value problem in (\ref{Navleray}) is equivalent to an infinite ODE initial value problem for the infinite time dependent vector function of velocity modes $v^{\rho,r}_{i\alpha},~\alpha\in {\mathbb Z}^n,~1\leq i\leq n$, where
\begin{equation}\label{navode200first}
\begin{array}{ll}
\frac{d v^{\rho,r}_{i\alpha}}{d\tau}=\rho r^2\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v^{\rho,r}_{i\alpha}
-\rho r\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{\rho,r}_{j(\alpha-\gamma)}v^{\rho,r}_{i\gamma}\\
\\
+\rho r2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v^{\rho,r}_{j\gamma}v^{\rho,r}_{k(\alpha-\gamma)}}{\sum_{i=1}^n4\pi^2\alpha_i^2},
\end{array}
\end{equation}
for all $1\leq i\leq n$, and where for all $\alpha\in {\mathbb Z}^n$ we have $v^{\rho,r}_{i\alpha}(0)=h_{i\alpha}$. We denote $\mathbf{v}^{\rho,r,F}=(v^{\rho,r,F}_1,\cdots ,v^{\rho,r,F}_n)^T$ with $n$ infinite vectors $v^{\rho,r,F}_i=(v^{\rho,r}_{i\alpha})^T_{1\leq i\leq n,~\alpha \in {\mathbb Z}^n}$. The superscript $T$ denotes 'transposed' in accordance with the usual convention for vectors and should not be confused with the time horizon, which never appears as a superscript.
As remarked, in this paper we consider the case without force terms, i.e., the case where $f_i=0$ for all $1\leq i\leq n$.
For an arbitrary time horizon $T_0>0$, at stage $N\geq 1$ and with time steps of size $\delta t^{(N)}$ with $2^N\delta t^{(N)}=T_0$ we define an Euler-type Trotter product scheme
with $2^N$ time steps
$$m\delta t^{(N)}\in \left\lbrace 0,\delta t^{(N)},2\delta t^{(N)},\cdots , 2^N\delta t^{(N)}=T_0\right\rbrace.$$
We put the stage number $N$ in brackets in the superscript in order to avoid confusion with an exponent, as we intend to study time errors of the scheme later.
First, at each stage $N\geq 1$ and at each time step number $m\in \left\lbrace 0,\cdots , 2^N\right\rbrace$ we consider the Euler step
\begin{equation}\label{eulerstep}
\begin{array}{ll}
v^{\rho,r,N}_{i\alpha}((m+1)\delta t^{(N)})=v^{\rho,r,N}_{i\alpha}(m\delta t^{(N)})+\\
\\
\rho r^2\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v^{\rho,r,N}_{i\alpha}(m\delta t^{(N)})\delta t^{(N)}-\\
\\
\rho r\sum_{j=1}^n\sum_{\gamma \in {\mathbb Z}^n}\frac{2\pi i \gamma_j}{l}v^{\rho,r,N}_{j(\alpha-\gamma)}(m\delta t^{(N)})v^{\rho,r,N}_{i\gamma}(m\delta t^{(N)})\delta t^{(N)}+\\
\\
2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}\frac{\rho r\sum_{j,k=1}^n\sum_{\gamma\in {\mathbb Z}^n}4\pi^2 \gamma_j(\alpha_k-\gamma_k)v^{\rho,r,N}_{j\gamma}(m\delta t^{(N)})v^{\rho,r,N}_{k(\alpha-\gamma)}(m\delta t^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}\delta t^{(N)}.
\end{array}
\end{equation}
At each stage $N\geq 1$ and for all $1\leq i\leq n$ and all modes $\alpha\in {\mathbb Z}^n$ this defines a list of values
\begin{equation}
v^{\rho,r,N}_{i\alpha}(m\delta t^{(N)}),~m\in \left\lbrace 0,1,\cdots,2^N\right\rbrace,
\end{equation}
such that the whole list of $2^N$ $n{\mathbb Z}^n$-vectors defines an Euler-type approximation of a possible solution of the Navier Stokes equation above (along with zero source term, i.e., $f_i\equiv 0$ for all $1\leq i\leq n$).
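As an illustration of this Euler step, the following sketch implements one step of the mode scheme truncated to the finite lattice $|\alpha_i|\leq M$ (the scheme in the text works on all of ${\mathbb Z}^n$); the names `M`, `nu`, `rho`, `r`, `l`, `dt` are illustrative parameters, not notation from the text.

```python
import itertools
import numpy as np

# Illustrative sketch (not from the paper): one Euler step of the mode
# scheme, truncated to the finite lattice |alpha_i| <= M instead of Z^n.
def euler_step(v, M, nu, rho, r, l, dt):
    """v has shape (n,) + (2*M+1,)*n; v[i] stores the modes v_{i,alpha}
    with lattice point alpha at array index alpha + M."""
    n = v.shape[0]
    idx = lambda a: tuple(ai + M for ai in a)
    lattice = list(itertools.product(range(-M, M + 1), repeat=n))
    v_new = v.copy()
    for i in range(n):
        for alpha in lattice:
            # viscosity term: rho r^2 sum_j nu (-4 pi^2 alpha_j^2 / l^2) v_{i,alpha}
            incr = (-rho * r**2 * nu * 4 * np.pi**2
                    * sum(a * a for a in alpha) / l**2) * v[(i,) + idx(alpha)]
            for gamma in lattice:
                ag = tuple(a - g for a, g in zip(alpha, gamma))
                if any(abs(x) > M for x in ag):
                    continue  # truncated convolution
                # transport term: -rho r sum_j (2 pi i gamma_j / l)
                #                 v_{j,alpha-gamma} v_{i,gamma}
                for j in range(n):
                    incr -= (rho * r * 2j * np.pi * gamma[j] / l
                             * v[(j,) + idx(ag)] * v[(i,) + idx(gamma)])
                # pressure (Leray projection) term for alpha != 0; the factor
                # 4 pi^2 cancels between numerator and denominator
                if any(alpha):
                    num = sum(gamma[j] * (alpha[k] - gamma[k])
                              * v[(j,) + idx(gamma)] * v[(k,) + idx(ag)]
                              for j in range(n) for k in range(n))
                    incr += (rho * r * 2j * np.pi * alpha[i]
                             * num / sum(a * a for a in alpha))
            v_new[(i,) + idx(alpha)] = v[(i,) + idx(alpha)] + incr * dt
    return v_new
```

For a single excited mode the convolution only hits the zero mode, so the nonlinear terms vanish and one step reduces to pure viscous damping of that mode.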
For an arbitrary finite time horizon $T^*>0$, and at each stage $N$ we denote the (tentative) approximative solutions by
\begin{equation}
\mathbf{v}^{\rho,r,N,F}(t):=\left( \left(v^{\rho,r,N}_{i\alpha}(t)\right) _{\alpha\in {\mathbb Z}^n} \right)^T_{1\leq i\leq n}
\end{equation}
for $0\leq t\leq T^*$ (where the superscript $T$ always refers to `transposed', as the usual vector notation is vertical, and not to the time horizon).
The last two terms on the right side of (\ref{navode200first}) correspond to the spatial part of the incompressible Euler equation, where for the sake of simplicity of notation we abbreviate (after some renaming)
\begin{equation}
\begin{array}{ll}
e^{\rho,r,N}_{ij\alpha\gamma}(m\delta t^{(N)})=-\rho r\frac{2\pi i (\alpha_j-\gamma_j)}{l}v^{\rho,r,N}_{i(\alpha-\gamma)}(m\delta t^{(N)})\\
\\
+\rho r2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}
4\pi^2 \frac{\sum_{k=1}^n\gamma_j(\alpha_k-\gamma_k)v^{\rho,r,N}_{k(\alpha-\gamma)}(m\delta t^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}.
\end{array}
\end{equation}
Note that with this abbreviation (\ref{navode200first}) becomes
\begin{equation}\label{navode200second}
\begin{array}{ll}
v^{\rho,r,N}_{i\alpha}((m+1)\delta t^{(N)})=v^{\rho,r,N}_{i\alpha}(m\delta t^{(N)})+\\
\\
\rho r^2\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)v^{\rho,r,N}_{i\alpha}(m\delta t^{(N)})\delta t^{(N)}+\\
\\
\sum_{j=1}^n\sum_{\gamma\in {\mathbb Z}^n}e^{\rho,r,N}_{ij\alpha\gamma}(m\delta t^{(N)})v^{\rho,r,N}_{j\gamma}(m\delta t^{(N)})\delta t^{(N)}.
\end{array}
\end{equation}
As in \cite{KT} we consider Trotter product formulas in order to make use of the damping effect of the diffusion term, which is not obvious in (\ref{navode200second}), where a term with factor $|\alpha|^2$ appears, corresponding to an unbounded Laplacian. At each stage $N$ we get the Trotter product formula stating that for $T=2^N\delta t^{(N)}$ we have
\begin{equation}\label{aa}
\begin{array}{ll}
\mathbf{v}^{\rho,r,N,F}(T)\doteq \Pi_{m=0}^{2^N-1}\left( \delta_{ij\alpha\beta}\exp\left(-\rho r^2\nu 4\pi^2 \sum_{i=1}^n\alpha_i^2 \delta t^{(N)} \right)\right)\\
\\
\left( \exp\left( \left( \left( e^{\rho,r,N}_{ij\alpha\beta}\right)_{ij\alpha\beta}(m\delta t^{(N)})\right)\delta t^{(N)} \right) \right) \mathbf{h}^F,
\end{array}
\end{equation}
and where $\doteq$ means that the identity holds up to an error $O(\delta t^{(N)})$, i.e., we have a linear error in $\delta t^{(N)}$ (corresponding to a quadratic error in $\delta t^{(N)}$ at each time step).
Note that the equation in (\ref{aa}) still makes sense as the viscosity converges to zero, since the first factor on the right side of (\ref{aa}) then becomes an infinite identity matrix. This leads to an approximation scheme which we denote by
\begin{equation}\label{aazz}
\begin{array}{ll}
\mathbf{e}^{\rho,r,N,F}(T)\doteq \Pi_{m=0}^{2^N-1}
\left( \exp\left( \left( \left( e^{\rho,r,N}_{ij\alpha\beta}\right)_{ij\alpha\beta}(m\delta t^{(N)})\right)\delta t^{(N)} \right) \right) \mathbf{h}^F.
\end{array}
\end{equation}
We investigate whether the linear error for the Navier Stokes Trotter product scheme goes to zero as $N\uparrow \infty$ or if we have a blow-up. Note that in (\ref{aa}) the entries in $(\delta_{ij\alpha\beta})$ are Kronecker-$\delta$s which describe the unit $n{\mathbb Z}^n\times n{\mathbb Z}^n$-matrix. The formula in (\ref{aa}) is easily verified by showing that at each time step $m$
\begin{equation}
\begin{array}{ll}
\left( \delta_{ij\alpha\beta}\exp\left(-\rho r^2\nu 4\pi^2 \sum_{i=1}^n\alpha_i^2 \delta t^{(N)} \right)\right)\\
\\
\left( \exp\left( \left( \left( e^{\rho,r,N}_{ij\alpha\beta}(m\delta t^{(N)})\right)_{ij\alpha\beta}\right)\delta t^{(N)}\right) \right)\mathbf{v}^{\rho,r,N,F}(m\delta t^{(N)})
\end{array}
\end{equation}
(as an approximation of $\mathbf{v}^{\rho,r,N,F}((m+1)\delta t^{(N)})$) solves the equation (\ref{navode200first}) with an error of order $O\left( \left( \delta t^{(N)}\right)^2\right)$.
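The quadratic per-step error behind $\doteq$ can be checked on a finite-dimensional toy system: with a diagonal matrix $A$ standing in for the viscosity part and a matrix $B$ standing in for the Euler-term matrix, one Trotter factor $\exp(A\,\delta t)\exp(B\,\delta t)$ agrees with the Euler step $I+(A+B)\delta t$ up to $O((\delta t)^2)$. This is an illustrative stand-in, not the infinite mode system; all names below are placeholders.

```python
import numpy as np

# Toy, finite-dimensional stand-in (not the infinite mode system) for the
# splitting error: one Trotter factor exp(A dt) exp(B dt) agrees with the
# Euler step I + (A + B) dt up to O(dt^2).
def expm_series(A, terms=30):
    """Matrix exponential via truncated power series (adequate for small A)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def splitting_error(A, B, x, dt):
    trotter = expm_series(A * dt) @ expm_series(B * dt) @ x
    euler = x + (A + B) @ x * dt
    return np.linalg.norm(trotter - euler)

rng = np.random.default_rng(0)
A = np.diag(-np.arange(1.0, 5.0))   # diagonal "viscosity" part
B = rng.standard_normal((4, 4))     # stand-in for the Euler-term matrix
x = rng.standard_normal(4)
e1 = splitting_error(A, B, x, 1e-2)
e2 = splitting_error(A, B, x, 5e-3)  # halving dt roughly quarters the error
```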
Before we continue to describe the scheme we introduce the dual Sobolev spaces which we use in order to measure the lists of modes at each stage $N\geq 1$ and in the limit as $N\uparrow \infty$.
Even the scheme above, considered on the usual time scale $\rho=1$ and for a small spatial scaling parameter $r>0$, leads to a contraction result with exponentially weighted norms; but from the numerical or algorithmic point of view it is interesting to stabilize the scheme and prove time-independent regular upper bounds.
Therefore, here we show that for parameters $\rho, r$ as in (\ref{rpara})
and (\ref{viscpar}), for all stages $N$ and all $m\geq 1$ there is an upper bound constant $C$ (depending only on the initial data $\mathbf{h}$, the dimension $n$, and the viscosity $\nu>0$) such that
\begin{equation}\label{bbc}
{\big |}v^{\rho,r,N}_{i\alpha}(m\delta t^{(N)}){\big |}\leq \frac{C}{1+|\alpha|^{n+s}}.
\end{equation}
An explicit upper bound constant is given in the statement of Theorem \ref{linearboundthm} below.
This is shown directly for this scheme using viscosity damping, but we mention that it can also be shown via comparison functions at each time step. This alternative scheme may be used in order to prove weaker results concerning global solution branches for inviscid limits. It is also interesting from a numerical point of view, since it introduces an additional damping term.
At each time step, i.e., on the time interval $[t_0,t_0+a]$ for some $a\in (0,1)$ we compare the value function $v^{\rho,r}_i,\ 1\leq i\leq n$ with a time dilated function $u^{l,\rho,r,t_0}_i:\left[0,\frac{a}{\sqrt{1-a^2}} \right] \times {\mathbb T}^n\rightarrow {\mathbb R},~1\leq i\leq n$ along with
\begin{equation}\label{transformvul}
\lambda (1+\mu (\tau-t_0))u^{l,\rho,r,t_0}_i(s,.)=v^{\rho,r}_i(\tau,.),~s=\frac{\tau-t_0}{\sqrt{1-(\tau-t_0)^2}}.
\end{equation}
Note that in this definition of $u^{l,\rho,r,t_0}_i$ the function $v^{\rho,r}_i$ is considered on the interval $\left[t_0,t_0+a\right]$ for each $1\leq i\leq n$. The superscript $l$ indicates that the transformation in (\ref{transformvul}) is local in time. Alternatively, we may define a global time transformation
\begin{equation}\label{transformvug}
\lambda (1+\mu \tau)u^{g,\rho,r,t_0}_i(s,.)=v^{\rho,r}_i(\tau,.),~s=\frac{\tau-t_0}{\sqrt{1-(\tau-t_0)^2}}.
\end{equation}
Note that $\tau$ becomes $t$ for $\rho=1$. We write
\begin{equation}
u^{*,\rho,r,t_0}_i,~*\in \left\lbrace l,g\right\rbrace,
\end{equation}
if we want to refer to both transformations at the same time.
Similar comparison functions can of course be introduced for the incompressible Euler equation as well. The incompressible Euler equation has multiple solutions in general, and even singular solutions. Comparison functions can be used to prove the existence of global solution branches, and also, with a slight modification, to prove the existence of singular solutions. Uniqueness arguments have to be added to these techniques in order to show that some evolution equation is deterministic. As comparisons to the Euler equation are useful in any case, we introduce some related notation. The viscosity limit $\nu\downarrow 0$ of the velocity component functions $v^{\rho,r,\nu}_i\equiv v^{\rho,r}_i$ is denoted by $e^{\rho,r}_i:=\lim_{\nu\downarrow 0}v^{\rho,r,\nu}_i$ whenever this limit exists pointwise, and where we use the same parameter transformation $e^{\rho,r}_i(\tau,y)=e_i(t,x)$ with $t=t(\tau)$ and $y=y(x)$ as above. Here $e_i,~1\leq i\leq n$, is a solution of the original Euler equation. We then may use the comparison
\begin{equation}
\lambda (1+\mu (\tau-t_0))u^{l,\rho,r,e,t_0}_i(s,.)=e^{\rho,r}_i(\tau,.),~s=\frac{\tau-t_0}{\sqrt{1-(\tau-t_0)^2}},
\end{equation}
or
\begin{equation}
\lambda (1+\mu \tau)u^{g,\rho,r,e,t_0}_i(s,.)=e^{\rho,r}_i(\tau,.),~s=\frac{\tau-t_0}{\sqrt{1-(\tau-t_0)^2}},
\end{equation}
in order to argue for global solution branches of the Euler equation for strong data. In general, it is possible to obtain global solution branches for the incompressible Navier Stokes equation from global solution branches of the incompressible Euler equation using the Trotter product approximations inductively, as there is an additional viscosity damping at each time step in the Trotter product formula for the Navier Stokes equation. The time step size of the corresponding Euler scheme transforms as
\begin{equation}
\delta s^{(N)}=\frac{\delta t^{(N)}}{\sqrt{1-(\delta t^{(N)})^2}}.
\end{equation}
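The factor $\sqrt{1-(\tau-t_0)^2}^{\,3}$ that appears in the damping coefficients of the transformed scheme below is the chain-rule factor $d\tau/ds$ of this time dilation; a quick numeric check (illustrative only, with placeholder values for $t_0$, $\tau$, and the difference step):

```python
import numpy as np

# Illustrative numeric check that the chain-rule factor behind the damping
# coefficients, sqrt(1-(tau-t_0)^2)^3, equals dtau/ds for the time dilation
# s = (tau - t_0)/sqrt(1 - (tau - t_0)^2).
def s_of(tau, t0=0.0):
    return (tau - t0) / np.sqrt(1 - (tau - t0)**2)

t0, tau, h = 0.0, 0.3, 1e-6
ds_dtau = (s_of(tau + h, t0) - s_of(tau - h, t0)) / (2 * h)  # central difference
dtau_ds = 1.0 / ds_dtau
analytic = np.sqrt(1 - (tau - t0)**2)**3
```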
The scheme for $u^{l,\rho,r,t_0}_i,~1\leq i\leq n$ becomes at each stage $N\geq 1$
\begin{equation}\label{navode200secondu}
\begin{array}{ll}
u^{l,\rho,r,N,t_0}_{i\alpha}((m+1)\delta s^{(N)})=u^{l,\rho,r,N,t_0}_{i\alpha}(m\delta s^{(N)})\\
\\
+\rho r^2\mu^{t_0}\sum_{j=1}^n\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)u^{l,\rho,r,N,t_0}_{i\alpha}(m\delta s^{(N)})\delta s^{(N)}\\
\\
+\sum_{j=1}^n\sum_{\gamma\in {\mathbb Z}^n}e^{l,\rho,r,N,u,\lambda,t_0}_{ij\alpha\gamma}(m\delta s^{(N)})u^{l,\rho,r,N,t_0}_{j\gamma}(m\delta s^{(N)})\delta s^{(N)},
\end{array}
\end{equation}
where $\mu^{t_0}=\sqrt{1-(\cdot-t_0)^2}^{\,3}$ is evaluated at $t_0+m\delta t^{(N)}$, and
\begin{equation}\label{eujk}
\begin{array}{ll}
e^{l,\rho,r,N,u,\lambda ,t_0}_{ij\alpha\gamma}(m \delta s^{(N)})=-\rho r\lambda\mu^{l,1,t_0}\frac{2\pi i (\alpha_j-\gamma_j)}{l}u^{l,\rho,r,t_0}_{i(\alpha-\gamma)}(m\delta s^{(N)})\\
\\
+\rho r\lambda\mu^{l,1,t_0}\frac{2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}
\sum_{k=1}^n4\pi^2\gamma_j(\alpha_k-\gamma_k)u^{l,\rho,r,t_0}_{k(\alpha-\gamma)}(m\delta s^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}-\mu^{l,0,t_0}(m\delta s^{(N)})\delta_{ij\alpha\gamma}
\end{array}
\end{equation}
along with $\mu^{l,1,t_0}(\tau):=(1+\mu (\tau-t_0))\sqrt{1-(\tau-t_0)^2}^3$ and $\mu^{l,0,t_0}(\tau):=\frac{\mu\sqrt{1-(\tau-t_0)^2}^3}{1+\mu (\tau-t_0)}$. Again we note that for $\rho=1$ the transformed time coordinates $\tau=\frac{t}{\rho}$ become equal to the original time coordinates $t$, a case we shall mainly consider in the following.
The last term in (\ref{eujk}) is related to the damping term of the equation for the functions $u^{t_0}_i,~1\leq i\leq n$. For times $t_e>0$ and with an analogous time discretization we get the Trotter product formula
\begin{equation}\label{trotterlambda}
\begin{array}{ll}
\mathbf{u}^{l,\rho,r,N,F,t_0}(t_e)\doteq \Pi_{m=0}^{2^N-1}\left( \delta_{ij\alpha\beta}\exp\left(-\rho r^2\nu \sum_{i=1}^n\alpha_i^2 \delta s^{(N)} \right)\right)\times\\
\\
\times \left( \exp\left( \left( \left( e^{l,\rho,r,N,u,\lambda,t_0}_{ij\alpha\beta}\right)_{ij\alpha\beta}(m\delta s^{(N)})\right)\delta s^{(N)}\right) \right) \mathbf{u}^{l,\rho,r,N,F,t_0}(0).
\end{array}
\end{equation}
In the following remark we describe the role of the parameters.
\begin{rem}
Even if $\rho=1$ and the factor $r$ is small compared to the parameter $\mu$ of the damping term $-\mu^{l,0,t_0}(m\delta s)\delta_{ij\alpha\gamma}$ in (\ref{eujk}), the damping term can dominate the growth of the nonlinear terms if a mode of the value function exceeds a certain level. For higher modes and $\nu >0$ the viscosity damping becomes dominant anyway. Consider the transformation in (\ref{transformvul}) above in the case $\rho=1$. Then the transformation is with respect to original time coordinates. We consider this transformation
\begin{equation}
\lambda (1+\mu (t-t_0))u^{1,r,t_0}_i(s,.)=v^{1,r}_i(t,.),~s=\frac{t-t_0}{\sqrt{1-(t-t_0)^2}},
\end{equation}
on a time interval $t\in [t_0,t_0+a]$ corresponding to $s\in \left[0,\frac{a}{\sqrt{1-a^2}} \right] $. Assume that $\mu=1$. We have
\begin{equation}
\frac{1}{\lambda}{\big |}v^{1,r}_i(t_0,.){\big |}_{H^p}={\big |}u^{1,r,t_0}_i(0,.){\big |}_{H^p}.
\end{equation}
The nonlinear growth is described by the nonlinear Euler terms. For each mode $\alpha$ and at time step number $m$ of stage $N$ the nonlinear Euler term growth minus the potential damping is described by (recall that we have chosen $\rho=\mu=1$)
\begin{equation}\label{remeujk}
\begin{array}{ll}
\sum_{j=1}^n\sum_{\gamma\in {\mathbb Z}^n}e^{1,r,N,u,\lambda,t_0}_{ij\alpha\gamma}(m\delta s^{(N)})u^{1,r,N,t_0}_{j\gamma}(m\delta s^{(N)})\delta s^{(N)}=\\
\\
-\sum_{j=1}^n\sum_{\gamma\in {\mathbb Z}^n} r\lambda\mu^{1,t_0}\frac{2\pi i (\alpha_j-\gamma_j)}{l}u^{1,r,t_0}_{i(\alpha-\gamma)}(m\delta s^{(N)})u^{1,r,N,t_0}_{j\gamma}(m\delta s^{(N)})\delta s^{(N)}\\
\\
+\sum_{j=1}^n\sum_{\gamma\in {\mathbb Z}^n} r\lambda\mu^{1,t_0}\frac{2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}
\sum_{k=1}^n4\pi^2\gamma_j(\alpha_k-\gamma_k)u^{1,r,t_0}_{k(\alpha-\gamma)}(m\delta s^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}u^{1,r,N,t_0}_{j\gamma}(m\delta s^{(N)})\delta s^{(N)}\\
\\
-\mu^{0,t_0}(m\delta s^{(N)})u^{1,r,N,t_0}_{i\alpha}(m\delta s^{(N)})\delta s^{(N)},
\end{array}
\end{equation}
where for $\mu=1$ and $\rho=1$ we have $\mu^{0,t_0}(t-t_0):=\frac{\sqrt{1-(t-t_0)^2}^{\,3}}{1+ (t-t_0)}$ (with $t=t(s)$ the inverse of $s=s(t)$). The potential damping in (\ref{remeujk}) becomes relatively strong, for example, for small $\lambda =r^2 >0$ with small $r$. A small parameter $r>0$ (keeping $\rho=1$) is sufficient for a linear upper bound with respect to the time horizon $T$, via the global time transformation
\begin{equation}
\lambda (1+\mu t)u^{g,1,r,t_0}_i(s,.)=v^{1,r}_i(t,.),~s=\frac{t-t_0}{\sqrt{1-(t-t_0)^2}}.
\end{equation}
\end{rem}
Note that the scheme for the Euler equation comparison $u^{*,\rho,r,e,t_0}_i,~1\leq i\leq n$ with $*\in \left\lbrace l,g \right\rbrace$ becomes at each stage $N\geq 1$
\begin{equation}\label{navode200secondue}
\begin{array}{ll}
u^{*,\rho,r,e,N,t_0}_{i\alpha}((m+1)\delta s^{(N)})=u^{*,\rho,r,e,N,t_0}_{i\alpha}(m\delta s^{(N)})\\
\\
+\sum_{j=1}^n\sum_{\gamma\in {\mathbb Z}^n}e^{*,\rho,r,N,u,\lambda,t_0}_{ij\alpha\gamma}(m\delta s^{(N)})u^{*,\rho,r,e,N,t_0}_{j\gamma}(m\delta s^{(N)})\delta s^{(N)},
\end{array}
\end{equation}
and that we have an approximation
\begin{equation}\label{trotterlambdae}
\begin{array}{ll}
\mathbf{u}^{*,\rho,r,e,N,F,t_0}(t_e)\doteq \\
\\
\Pi_{m=0}^{2^N-1}\left( \exp\left( \left( \left( e^{*,\rho,r,N,u,\lambda,t_0}_{ij\alpha\beta}\right)_{ij\alpha\beta}(m\delta s^{(N)})\right)\delta s^{(N)}\right) \right) \mathbf{u}^{*,\rho,r,e,N,F,t_0}(0).
\end{array}
\end{equation}
Note that for the original velocity component functions we have $v_i=v^{1,1}_i$, i.e., $\rho=1$ and $r=1$, and we drop the parameter superscripts in this case. The following theorems hold for all dimensions $n\geq 1$. In the next section we prove
\begin{thm}\label{linearboundthm}
If for some constant $C_0>0$ (depending only on the dimension $n$ and the viscosity $\nu >0$) we have
\begin{equation}\label{regdata}
\forall \alpha\in {\mathbb Z}^n:~{\big |}h_{i\alpha}{\big |}\leq
\frac{C_0}{1+|\alpha|^{n+s}} \mbox{ for some }s>1,
\end{equation}
then the limit $\mathbf{v}^F(t)=\lim_{N\uparrow \infty}\mathbf{v}^{N,F}(t)$ of the scheme $\mathbf{v}^{N,F}$ described above exists and describes a global regular solution of the incompressible Navier Stokes equation on the torus, where a time-independent constant $C>0$ exists such that
\begin{equation}\label{eq3}
\sup_{t\geq 0}{\big |}v_i(t,.){\big |}_{H^{\frac{n}{2}+1}}\leq C.
\end{equation}
Furthermore, an explicit choice of the constant $C$ is given in (\ref{C}) below.
\end{thm}
From the proof of Theorem \ref{linearboundthm} we shall observe that the value $s=1$ is critical. We have
\begin{thm}\label{linearboundthm2}
If for some constant $C>0$
\begin{equation}\label{regdata2}
\forall \alpha\in {\mathbb Z}^n:~{\big |}h_{i\alpha}{\big |}\leq \frac{C}{1+|\alpha|^{n+s}} \mbox{ for }s=1,
\end{equation}
then the limit $\mathbf{v}^F(t)=\lim_{N\uparrow \infty}\mathbf{v}^{N,F}(t)$ of the scheme $\mathbf{v}^{N,F}$ described above exists and describes a global regular solution of the incompressible Navier Stokes equation on the torus, where for any $T>0$ there exists a constant $C\equiv C(T)>0$ such that
\begin{equation}\label{eq3b}
\sup_{t\leq T}{\big |}v_i(t,.){\big |}_{H^{\frac{n}{2}+s}}\leq C.
\end{equation}
\end{thm}
Finally, for $s< 1$ we have an indication of divergence. We have
\begin{thm}\label{linearboundthm3}
If for some constant $C>0$
\begin{equation}\label{singdata}
\forall \alpha\in {\mathbb Z}^n:~{\big |}h_{i\alpha}{\big |}\geq \frac{C}{1+|\alpha|^{n+s}} \mbox{ for }s<1,
\end{equation}
then for some data in this class the scheme $\left( \mathbf{v}^{N,F}(t)\right)_{N\geq 1}$ diverges, indicating that singularities may occur.
\end{thm}
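The criticality of $s=1$ in the three theorems can be illustrated numerically for $n=1$: for mode decay $|v_\alpha|=1/(1+|\alpha|^{n+s})$, the $H^{\frac{n}{2}+1}$-type sum $\sum_\alpha (1+|\alpha|^2)^{\frac{n}{2}+1}|v_\alpha|^2$ stays bounded as the cutoff grows precisely when $s>1$. A sketch (the cutoffs and tolerances below are illustrative choices):

```python
import numpy as np

# Numeric illustration (n = 1) of why s = 1 is critical: for mode decay
# |v_alpha| = 1/(1+|alpha|^{n+s}) the H^{n/2+1}-type sum
#   sum_alpha (1+|alpha|^2)^{n/2+1} |v_alpha|^2
# stays bounded as the cutoff grows iff s > 1.
def sobolev_tail(s, n=1, cutoff=10**5):
    alpha = np.arange(1, cutoff + 1, dtype=float)
    return 2 * np.sum((1 + alpha**2)**(n / 2 + 1) / (1 + alpha**(n + s))**2)

t_super = [sobolev_tail(1.5, cutoff=c) for c in (10**3, 10**5)]  # s > 1: saturates
t_sub = [sobolev_tail(0.5, cutoff=c) for c in (10**3, 10**5)]    # s < 1: grows
```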
\begin{rem}
According to the theorems the critical regularity for global regular existence is $h_i\in H^{\frac{1}{2}n+1}$ for all $1\leq i\leq n$, where we have a uniform upper bound if $h_i\in H^{\frac{1}{2}n+s}$ for $s>1$ for all $1\leq i\leq n$. Theorem \ref{linearboundthm3} only states the divergence of the scheme proposed, which does not strictly imply that there are singular solutions for some data $h_i\in H^{\frac{1}{2}n+s},~1\leq i\leq n$, for $s<1$. However, the methods in \cite{KT} may be used in order to construct such singular solutions. We shall consider this elsewhere. Note that $H^{\frac{1}{2}n+1}$ is a much stronger space than the hypothetical solution space $H^1$ which is proved to imply smoothness in \cite{T}. It is also not identical with the data space $H^2\cap C^2$ on the whole space ${\mathbb R}^n$, where it can be argued that this regularity is sufficient in order to have global regular solutions on the whole space.
\end{rem}
This paper is a classical interpretation of the scheme considered in \cite{KT}, where different arguments are used; here the focus is more on the algorithmic perspective. The strong data space with $h_i\in H^s$ for $s>\frac{n}{2}+1$ for all $1\leq i\leq n$ needed for convergence indicates that the conjectures and proofs in \cite{L} and \cite{KP}
concerning the existence of singular solutions may concern weaker initial data spaces. In the next section we prove Theorem \ref{linearboundthm}. It is a remarkable fact that $H^1$-regularity of the solution is sufficient for global smooth existence (cf. \cite{T}), although it seems that we need stronger data to arrive at this conclusion. This may be related to the fact that many weak schemes fail to converge in dimension $n=3$. Theorem \ref{linearboundthm2} and Theorem \ref{linearboundthm3} follow from analogous observations.
\section{Proof of Theorem \ref{linearboundthm}}
In the following we mainly work with a direct scheme (without a time-delay transformation) in order to prove the main result described in Theorem \ref{linearboundthm}. Later we shall make some remarks concerning alternative arguments using time delay transformations.
Assume that $\rho,r>0$ are positive numbers.
We mention that at each stage $N\geq 1$ we have
\begin{equation}
\forall 1\leq i\leq n~\forall m\geq 1~\forall x:~v^{\rho,r,N}_{i}(m\delta t^{(N)},x)\in {\mathbb R}.
\end{equation}
For the sake of simplicity concerning the torus size $l$ (and without loss of generality) we may consider the case $l=1$ in the following. For any time $T>0$ the limit in (\ref{aa}), i.e., the function
\begin{equation}\label{aaproof}
\begin{array}{ll}
\lim_{N\uparrow \infty}\mathbf{v}^{\rho,r,N,F}(T)=\lim_{N\uparrow \infty} \Pi_{m=0}^{2^N-1}\left( \delta_{ij\alpha\beta}\exp\left(-\rho r^2\nu 4\pi^2 \sum_{i=1}^n\alpha_i^2 \delta t^{(N)} \right)\right)\\
\\
\left( \exp\left( \left( \left( e^{\rho,r,N}_{ij\alpha\beta}\right)_{ij\alpha\beta}(m\delta t^{(N)})\right)\delta t^{(N)} \right) \right) \mathbf{h}^F,
\end{array}
\end{equation}
is a candidate for a regular solution at time $T>0$ (for time $t\in (0,T)$ a similar formula with the number $\lfloor\frac{t}{T}2^N\rfloor$ of time steps describes the solution at time $t$, of course; here $\lfloor .\rfloor$ denotes the Gaussian floor).
We consider functions
\begin{equation}
\mathbf{v}^{\rho,r,F}:=\left(v^{\rho,r}_{i\alpha}\right)^T_{\alpha\in {\mathbb Z}^n,~1\leq i\leq n}.
\end{equation}
Here, the list of velocity component modes is denoted by $v^{\rho,r}_i=\left(v^{\rho,r}_{i\alpha} \right)_{\alpha\in {\mathbb Z}^n}$ for all $1\leq i\leq n$, where we identify $n$-tuples of infinite ${\mathbb Z}^n$-tuples of modes with $n{\mathbb Z}^n$-tuples in the obvious way. First, for an arbitrary fixed time horizon $T>0$ and $0\leq t\leq T$, parameters $\rho,r>0$, $C>0$, and order $p$ of the dual Sobolev space, we define
\begin{equation}
{\big |}\mathbf{v}^{\rho,r, F}{\big |}^{C,\exp}_{h^p}:=\max_{1\leq i\leq n}\sup_{0\leq t\leq T}{\big |}v^{\rho,r}_{i}(t){\big |}_{h^p}\exp(-Ct).
\end{equation}
Accordingly, for the discrete time scheme we define
\begin{equation}\label{ncexp}
{\big |}\mathbf{v}^{\rho,r,N,F}{\big |}^{n,C,\exp}_{h^p}:=\max_{1\leq i\leq n}\max_{m\in \left\lbrace 0,\cdots, 2^N\right\rbrace }{\big |}v^{\rho,r,N}_{i}(m\delta t^{(N)}){\big |}_{h^p}\exp(-Cm\delta t^{(N)}).
\end{equation}
For $C=0$ we write
\begin{equation}
{\big |}\mathbf{v}^{\rho,r,N,F}{\big |}^{n}_{h^p}:=\max_{1\leq i\leq n}\max_{m\in \left\lbrace 0,\cdots, 2^N\right\rbrace }{\big |}v^{\rho,r,N}_{i}(m\delta t^{(N)}){\big |}_{h^p},
\end{equation}
and, analogously, for continuous time
\begin{equation}
{\big |}\mathbf{v}^{\rho,r,F}{\big |}^{n}_{h^p}:=\max_{1\leq i\leq n}\sup_{t\in [ 0,T] }{\big |}v^{\rho,r}_{i}(t){\big |}_{h^p}.
\end{equation}
For the increment
\begin{equation}
\delta \mathbf{v}^{\rho,r,N,F}=\mathbf{v}^{\rho,r,N,F}-\mathbf{v}^{\rho,r,N-1,F}
\end{equation}
we define the corresponding norm on the next coarser time scale, i.e.,
\begin{equation}\label{incrementnorm}
{\big |}\delta \mathbf{v}^{\rho,r,N,F}{\big |}^{n}_{h^p}:=\max_{1\leq i\leq n}\max_{m\in \left\lbrace 0,\cdots, 2^{N-1}\right\rbrace }{\big |}\delta v^{\rho,r,N}_{i}(m\delta t^{(N-1)}){\big |}_{h^p}.
\end{equation}
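The discrete weighted norms above can be computed directly from truncated mode lists. In the sketch below the $h^p$ norm is taken to be the $\ell^2$-type expression $|v|_{h^p}^2=\sum_\alpha (1+|\alpha|^{2p})|v_\alpha|^2$; this precise form is an assumption of the sketch, since the dual Sobolev spaces themselves are introduced elsewhere in the paper. The data layout (`history` as a list of dicts, one per time step) is likewise illustrative.

```python
import numpy as np

# Sketch of the weighted scheme norms; the h^p norm used here,
# |v|_{h^p}^2 = sum_alpha (1 + |alpha|^{2p}) |v_alpha|^2, is an assumed
# ell^2-type definition.  Each dict maps a mode tuple alpha to v_alpha.
def hp_norm(modes_at_t, p):
    return np.sqrt(sum((1 + np.linalg.norm(alpha)**(2 * p)) * abs(v)**2
                       for alpha, v in modes_at_t.items()))

def weighted_scheme_norm(history, p, C, dt):
    """max over time steps m of |v(m dt)|_{h^p} exp(-C m dt), as in the
    discrete norm with exponential time weight (C = 0 gives the unweighted
    discrete norm)."""
    return max(hp_norm(modes, p) * np.exp(-C * m * dt)
               for m, modes in enumerate(history))
```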
The norms ${\big |}\delta \mathbf{v}^{\rho,r,N,F}{\big |}^{n,C,\exp}_{h^p}$ etc. are defined analogously with a time weight as in (\ref{ncexp}). The increment norm in (\ref{incrementnorm}) measures the maximal deviation of the scheme from a fixed point solution in a given arbitrary large interval $[0,T]$, and the weighted norm may be used in order to show that the scheme is contractive. However, convergence of the scheme follows from the weaker statement that there is a series of error upper bounds $E^{r,N}_p$ of the values ${\big |}\delta \mathbf{v}^{\rho,r,N,F}{\big |}^{n}_{h^p}$ which converges to zero as $N\uparrow \infty$. The argument is simplified by the observation that there is a uniform time-independent global regular upper bound at each stage $N$ of the scheme. We summarize these two facts in Lemma \ref{lemN} and Lemma \ref{lemerr} below which we prove next.
First we have
\begin{lem}\label{lemN}
Let $T>0$ be an arbitrary given time horizon. Let $p>\frac{n}{2}+1$ and $\max_{1\leq i\leq n}{\big |}h_i{\big |}_{h^p}=: C^p_h<\infty$. Then for all $N\geq 1$ there exists a $C_N\geq C^p_h$ such that
\begin{equation}\label{CN}
\sup_{t\in [0,T]}{\big |}\mathbf{v}^{\rho,r,N,F}(t,.){\big |}^{n}_{h^p}\leq C_N,
\end{equation}
where $C_N$ depends only on the dimension and the viscosity $\nu$. In particular, for all $N$ the constant $C_N$ is independent of the time horizon $T>0$.
\end{lem}
\begin{rem}
It is remarkable that at each stage $N$ of the scheme a constant $C_N$ can be chosen which is independent of the time horizon $T>0$, as there is no viscosity damping for the zero modes. Indeed, we shall first show that there is a linear time upper bound and then use special features of the scheme in order to construct an upper bound which is independent of the time horizon $T>0$.
\end{rem}
\begin{proof}
Since $h_i\in H^p$ for $p>\frac{n}{2}+s$ with $s>1$ we have for all $\alpha \in {\mathbb Z}^n$
\begin{equation}
{\big |}h_{i\alpha}{\big |}\leq \frac{C^{(p)}_{h0}}{1+|\alpha|^{n+s}}
\end{equation}
for some finite $C^{(p)}_{h0} >0$, where we assume
\begin{equation}
C^{(p)}_{h0} \geq 1~~\mbox{w.l.o.g.}
\end{equation}
From finite iterative application of upper bound estimates of the form (\ref{cC}) we have
\begin{equation}
{\big |}v^{\rho,r,N,F}_{i\alpha}(m\delta t^{(N)}){\big |}\leq \frac{C^N_m}{1+|\alpha|^{n+s}}
\end{equation}
for some finite constants $C^N_m$ and for all $0\leq m\leq 2^N$. We observe the growth (as $N\uparrow \infty$) of the finite constants
\begin{equation}
C^N:=\max_{0\leq m\leq 2^N}C^N_m.
\end{equation}
At time step $m+1$ one Euler-type Trotter product step is described by
\begin{equation}\label{vtrotter}
\begin{array}{ll}
\mathbf{v}^{\rho,r,N,F}((m+1)\delta t^{(N)}):=\left( \delta_{ij\alpha\beta}\exp\left(- \rho r^2\nu 4\pi^2 \sum_{i=1}^n\alpha_i^2 \delta t^{(N)} \right)\right)\\
\\
\left( \exp\left( \left( \left( e^{\rho,r,N}_{ij\alpha\beta}(m\delta t)^{(N)}\right)_{ij\alpha\beta}\right)\delta t^{(N)}\right) \right)\mathbf{v}^{\rho,r,N,F}(m\delta t^{(N)}),
\end{array}
\end{equation}
where
\begin{equation}
\begin{array}{ll}
e^{\rho,r,N}_{ij\alpha\gamma}(m dt^{(N)})=-\rho r\frac{2\pi i (\alpha_j-\gamma_j)}{l}v^{\rho,r,N}_{i(\alpha-\gamma)}(m\delta t^{(N)})\\
\\
+\rho r2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}
4\pi^2 \frac{\sum_{k=1}^n\gamma_j(\alpha_k-\gamma_k)v^{\rho,r,N}_{k(\alpha-\gamma)}(m\delta t^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}.
\end{array}
\end{equation}
First observe: if we have an upper bound for a modified system with a weakened viscosity factor $\nu_{vf}(\alpha)$ replacing the original factor $\nu 4\pi^2 \sum_{i=1}^n\alpha_i^2$ in the viscosity damping term $$\exp\left(-\rho r^2\nu 4\pi^2 \sum_{i=1}^n\alpha_i^2 \delta t^{(N)} \right)$$ for each mode $\alpha$, where
\begin{equation}\label{nufunction}
\forall \alpha\in {\mathbb Z}^n~~0\leq \nu_{vf}(\alpha)\leq \nu 4\pi^2\sum_{i=1}^n\alpha_i^2
\end{equation}
then this implies the existence of an upper bound for the original system.
Second, we observe that the statement of Lemma \ref{lemN} can be proved under the assumption that we have an upper bound for the zero modes, i.e., that we have a finite constant $C_0>0$ such that
\begin{equation}\label{zeromodes}
|v^{\rho,r,N}_{i0}(m\delta t^{(N)})|\leq C_{0}\mbox{ for all }0\leq m\leq 2^N.
\end{equation}
\begin{rem}
The statement in (\ref{zeromodes}) can be verified, i.e., the assumption can be eliminated, by introduction of an external control function for the zero modes which is the negative of the Burgers increment $$- \sum_{j,\gamma\neq 0}r\frac{2\pi i (-\gamma_j)}{l}v^{\rho,r,N}_{i(-\gamma)}(m\delta t^{(N)})v^{\rho,r,N}_{j\gamma}(m\delta t^{(N)})\delta t^{(N)}$$ of the zero mode at each time step. This leads to a linear time upper bound. However, we shall eliminate the assumption in (\ref{zeromodes}) in a more sophisticated way below, such that we get an upper bound which is independent of time.
\end{rem}
For the choices of $\rho,r$ in (\ref{rpara}) and (\ref{viscpar}) with $c=C^{(p)}_{h0}$, i.e., for
\begin{equation}\label{rpara}
r=\frac{c_0^2\left( C^{(p)}_{h0}\right) ^2}{\nu},~\rho=\frac{\nu}{2c_0^2\left( C^{(p)}_{h0}\right) ^2}
\end{equation}
we have $\rho r=\frac{1}{2}$. Then for
\begin{equation}\label{timesize}
\delta t^{(N)}\leq \frac{1}{c(n)\left( C^{(p)}_{h0}\right)^2},~c(n)=4\pi^2(n+n^2)c_0
\end{equation}
along with $c_0$ the constant in the product rule (\ref{cC}) (which is essentially the finite upper bound constant of a weakly singular elliptic integral) we get an upper bound estimate.
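The product rule (\ref{cC}) and its constant $c_0$ are not restated in this section; what can be checked numerically is a convolution inequality of the kind such constants come from, namely, in an assumed form for $n=1$ and $s>1$, $\sum_\beta \frac{1}{(1+|\alpha-\beta|^{n+s})(1+|\beta|^{n+s})}\leq \frac{c_0}{1+|\alpha|^{n+s}}$ uniformly in $\alpha$. The sketch below verifies boundedness of the corresponding ratio over a range of $\alpha$ (cutoffs are illustrative):

```python
import numpy as np

# Numeric check, for n = 1, of an assumed form of the weakly singular
# convolution bound behind constants like c_0:
#   sum_beta 1/((1+|a-b|^{n+s})(1+|b|^{n+s})) <= c0 / (1+|a|^{n+s}).
def conv_ratio(alpha, s, n=1, cutoff=2000):
    betas = np.arange(-cutoff, cutoff + 1)
    w = lambda k: 1.0 / (1 + np.abs(k)**(n + s))
    lhs = np.sum(w(alpha - betas) * w(betas))
    return lhs * (1 + abs(alpha)**(n + s))  # should stay bounded in alpha

s = 2.0
ratios = [conv_ratio(a, s) for a in (1, 10, 100, 1000)]
```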
Indeed, assuming inductively with respect to time that $C^N_m\leq C^{(p)}_{h0}$ at time step $m$, for the $\alpha$-modes of the Euler term we have the estimate
\begin{equation}\label{vtrottere}
\begin{array}{ll}
{\Bigg |}\left( \left( \exp\left( \left( \left( e^{\rho,r,N}_{ij\alpha\beta}(m\delta t^{(N)})\right)_{ij\alpha\beta}\right)\delta t^{(N)}\right) \right)\mathbf{v}^{\rho,r,N,F}(m\delta t^{(N)})\right)_{i\alpha}{\Bigg |}\\
\\
\leq 1+\rho r\frac{c(n)\left( C^{(p)}_{h0}\right)^2}{1+|\alpha|^{n+s}}\delta t^{(N)}+\left( \delta t^{(N)} \right)^2,
\end{array}
\end{equation}
where $\left(. \right)_{i\alpha}$ denotes the projection to the $\alpha$-mode of the $i$th component of the velocity mode vector. Next consider the inductive assumption
\begin{equation}
{\big |}v^{1,r,N,F}_{i\alpha}((m+1)\delta t^{(N)}){\big |}\leq \frac{C^{(p)}_{h0}}{(1+|\alpha|^{n+s})}.
\end{equation}
For the parameter choices of $r$ and $\rho$ in (\ref{rpara}) and the time size in (\ref{timesize}) we compute
\begin{equation}\label{vtrotterialpha}
\begin{array}{ll}
{\big |}v^{1,r,N,F}_{i\alpha}((m+1)\delta t^{(N)}){\big |}\leq \\
\\
{\Bigg|}\exp\left(- \rho r^2\nu 4\pi^2 \sum_{i=1}^n\alpha_i^2 \delta t^{(N)} \right)v^{\rho,r,N,F}(m\delta t^{(N)})_{i\alpha}\times\\
\\
\left(1+\rho r\frac{c(n)\left( C^{(p)}_{h0}\right)^2}{1+|\alpha|^{n+s}}\delta t^{(N)}+\left( \delta t^{(N)} \right)^2\right) {\Bigg |}
\leq \frac{C^{(p)}_{h0}}{(1+|\alpha|^{n+s})},
\end{array}
\end{equation}
where in the last step we use the alternative that either
\begin{equation}
{\big |}v^{\rho,r,N,F}(m\delta t^{(N)})_{i\alpha}{\big |}\leq \frac{C^{(p)}_{h0}}{2(1+|\alpha|^{n+s})},
\end{equation}
or
\begin{equation}
v^{\rho,r,N,F}(m\delta t^{(N)})_{i\alpha}\in \left[\frac{C^{(p)}_{h0}}{2(1+|\alpha|^{n+s})},\frac{C^{(p)}_{h0}}{(1+|\alpha|^{n+s})}\right].
\end{equation}
In order to eliminate the additional assumption concerning the zero modes, i.e., the assumption
\begin{equation}\label{0modes}
\forall 0\leq m\leq 2^N~~v^{\rho,r,N,F}(m\delta t^{(N)})_{i0}=0,
\end{equation}
we consider a related Trotter product scheme and show that the argument above can be applied to this extended scheme. We mention that the assumption in (\ref{0modes}) is satisfied for a controlled system with an external control function which forces the zero modes to be zero. It is then not difficult to show that the control function itself is bounded on an arbitrary finite time interval $[0,T]$, and, hence, that a global regular upper bound exists for the uncontrolled system. However, the upper bound constant is then dependent on the time horizon $T>0$. The extended Euler Trotter product scheme allows us to obtain sharper regular upper bounds which are independent of the time horizon $T>0$.
We define
\begin{equation}
\forall 1\leq j\leq n:~c_{j(0)}=h_{j0}.
\end{equation}
Having defined $c_{j(m)}$ and $\mathbf{v}^{\rho,r,N,F,ext}(m\delta t^{(N)})$ for $m\geq 0$ and $1\leq j\leq n$, an extended Euler Trotter product step at time step number $m+1$ is defined in two steps. First we define
\begin{equation}\label{vtrotterext0}
\begin{array}{ll}
\mathbf{v}^{\rho,r,N,F,ext,0}((m+1)\delta t^{(N)}):=\\
\\
\left( \delta_{ij\alpha\beta}\exp\left(- \rho r^2\nu 4\pi^2 \sum_{i=1}^n\alpha_i^2 \delta t^{(N)} \right)\right)\exp\left(- \rho r 2\pi i \sum_{j=1}^n\alpha_j c_{j(m)}\delta t^{(N)} \right) \\
\\
\left( \exp\left( \left( \left( e^{\rho,r,N,ext}_{ij\alpha\beta}(m\delta t^{(N)})\right)_{ij\alpha\beta}\right)\delta t^{(N)}\right) \right)\mathbf{v}^{\rho,r,N,F,ext}(m\delta t^{(N)}),
\end{array}
\end{equation}
where
\begin{equation}
\begin{array}{ll}
e^{\rho,r,N,ext}_{ij\alpha\gamma}(m\delta t^{(N)})=-\rho r\frac{2\pi i (\alpha_j-\gamma_j)}{l}v^{\rho,r,N,ext}_{i(\alpha-\gamma)}(m\delta t^{(N)})\\
\\
+\rho r2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}
4\pi^2 \frac{\sum_{k=1}^n\gamma_j(\alpha_k-\gamma_k)v^{\rho,r,N,ext}_{k(\alpha-\gamma)}(m\delta t^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}.
\end{array}
\end{equation}
Then we define
\begin{equation}
c_{m+1}= \sum_{j,\gamma\neq 0}r\frac{2\pi i (-\gamma_j)}{l}v^{\rho,r,N,ext}_{i(-\gamma)}(m\delta t^{(N)})v^{\rho,r,N,ext}_{j\gamma}(m\delta t^{(N)})\delta t^{(N)}
\end{equation}
and
\begin{equation}
\begin{array}{ll}
v^{\rho,r,N,F,ext}((m+1)\delta t^{(N)})_{i\alpha}=v^{\rho,r,N,F,ext,0}((m+1)\delta t^{(N)})_{i\alpha},~\mbox{if~$\alpha\neq 0$}\\
\\
0,~\mbox{if~$\alpha = 0$}.
\end{array}
\end{equation}
The argument above can then be repeated for the extended scheme.
\end{proof}
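As an illustration only (not the scheme of the paper itself), the structure of one extended step — diagonal viscous damping, a nonlinear substep, and the projection of the zero mode required by (\ref{0modes}) — can be sketched numerically. All concrete values below (the mode cutoff $K$, the toy nonlinear factor) are invented for the sketch.

```python
import numpy as np

# Toy sketch of one "extended" Trotter step for a spectral vector v indexed by
# modes alpha in {-K,...,K}: viscous damping of each mode, a made-up bounded
# nonlinear substep, and forcing the zero mode to 0 as in the extended scheme.
def extended_trotter_step(v, nu=1.0, dt=1e-3, K=4):
    alphas = np.arange(-K, K + 1)
    viscous = np.exp(-nu * 4 * np.pi**2 * alphas**2 * dt)  # diagonal damping substep
    v = viscous * v
    v = v + dt * 0.1 * np.roll(v, 1) * np.roll(v, -1)      # toy nonlinear substep
    v[K] = 0.0                                             # project out the zero mode (alpha = 0)
    return v

v = np.ones(9, dtype=complex)
for _ in range(100):
    v = extended_trotter_step(v)
# the zero mode stays at 0 and the damping keeps the modes bounded
```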
Note that the relation
\begin{equation}
D^{\alpha}_xv_i(t,.)=r^{|\alpha|}D^{\alpha}_xv^{\rho,r}_i(\tau,.)
\end{equation}
implies that an upper bound $C_{(m)}$ of $\max_{1\leq i\leq n}\sup_{\tau\in [0,T_{\rho}]}
{\big |}v^{\rho,r}_i(\tau,.){\big |}_{H^m}$
yields the upper bound $\frac{C_{(m)}}{r^m}$ for the norm $\max_{1\leq i\leq n}
\sup_{t\in [0,T]}{\big |}v_i(t,.){\big |}_{H^m}$ of the original velocity components.
Converging error upper bounds stated in the next lemma imply that for the constants $C_N$ in (\ref{CN}) we have for $m\geq \frac{n}{2}+1$
\begin{equation}\label{C}
\forall~N~C_N\leq C=C^{(p)}_{h0}\left( \frac{c(n)^2\left( C^{(p)}_{h0}\right)^2}{\nu}\right)^{m},
\end{equation}
where
\begin{equation}
c(n)=(n^2+n)\sum_{\beta\in {\mathbb Z}^n,~\beta\neq \alpha}\frac{2\pi}{(\alpha-\beta)\beta}.
\end{equation}
In order to analyze the error we consider the difference
\begin{equation}
\delta \mathbf{v}^{\rho,r,N+1,F}( 2m\delta t^{(N)})=\mathbf{v}^{\rho,r,N+1,F}( 2m\delta t^{(N)})- \mathbf{v}^{\rho,r,N,F}(m\delta t^{(N)})
\end{equation}
on the coarser time grid of stage $N$ of the scheme.
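The dyadic comparison behind this difference can be illustrated on a toy ODE: explicit Euler at stage $N$ uses $2^N$ steps, stage $N+1$ uses $2^{N+1}$ steps, and the sup-difference on the shared coarse grid shrinks roughly by half per stage, so the stage values form a Cauchy sequence. The right-hand side $-v+0.1v^2$ is an invented stand-in for the quadratic nonlinearity, not the scheme of the paper.

```python
import numpy as np

# Toy illustration of the dyadic error comparison: explicit Euler for
# dv/dt = -v + 0.1 v^2 at stage N (2^N steps on [0, T]).
def euler_stage(N, T=1.0, v0=1.0):
    dt = T / 2**N
    v = np.empty(2**N + 1)
    v[0] = v0
    for m in range(2**N):
        v[m + 1] = v[m] + dt * (-v[m] + 0.1 * v[m]**2)
    return v

errs = []
for N in range(3, 8):
    coarse, fine = euler_stage(N), euler_stage(N + 1)
    errs.append(np.max(np.abs(fine[::2] - coarse)))  # compare on the shared coarse grid
# consecutive-stage differences roughly halve with N
```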
\begin{lem}\label{lemerr}
Let $p>\frac{n}{2}+1$ and $\max_{1\leq i\leq n}{\big |}h_i{\big |}_{h^p}=: C_h<\infty$. Given $T_{\rho}>0$ and the parameter choice $r=\frac{c(n)^2\left( C^{(p)}_{h0}\right) ^2}{\nu},~\rho=\frac{\nu}{2c(n)^2\left( C^{(p)}_{h0}\right) ^2} $
let $N\geq 1$ be large enough such that the time step size $\delta t^{(N)}$ satisfies
\begin{equation}\label{timesize}
\delta t^{(N)}=\frac{T_{\rho}}{2^N}\leq \frac{1}{T_{\rho}C},
~\mbox{where $C$ is defined in (\ref{C})}.
\end{equation}
Then we have a decreasing series of error upper bounds $E^{\rho,r,N}_p$
of the values ${\big |}\delta\mathbf{v}^{\rho,r,N,F}{\big |}^{n}_{h^p}$ such that
\begin{equation}
\lim_{N\uparrow \infty}{\big |}\sup_{m\in \left\lbrace 0,\cdots,2^N \right\rbrace }\delta\mathbf{v}^{\rho,r,N,F}(m\delta t^{(N)}){\big |}^{n}_{h^p}\leq \lim_{N\uparrow \infty}E^{\rho,r,N}_p=0.
\end{equation}
\end{lem}
\begin{proof}
For given time $T_{\rho}>0$ and
at time $2(m+1)\delta t^{(N+1)}=(m+1)\delta t^{(N)}$ we compare
\begin{equation}
\mathbf{v}^{1,r,N+1,F}(2(m+1)\delta t^{(N+1)}) \mbox{ and }\mathbf{v}^{1,r,N,F}((m+1)\delta t^{(N)}),
\end{equation}
where we assume that the error at time $m\delta t^{(N)}$ has an upper bound of the form
\begin{equation}
{\big |}v^{1,r,N+1,F}_{i\alpha}(2m\delta t^{(N+1)})-v^{1,r,N,F}_{i\alpha}(m\delta t^{(N)}){\big |}\leq \frac{C^N_{(e)m}\delta t^{(N)}}{1+|\alpha|^{n+s}}
\end{equation}
for $s>1$ and for some finite Euler error constant $0\leq C^N_{(e)m}$ which will be determined inductively with respect to $m\geq 0$ and $N\geq 0$. We have $C^N_{(e)0}=0$.
For the time step number $2m+1$ we get
\begin{equation}\label{vtrotter}
\begin{array}{ll}
\mathbf{v}^{\rho,r,N+1,F}((2m+1)\delta t^{(N+1)}):=
\left( \delta_{ij\alpha\beta}\exp\left(- c(n)^2\left( C^{(p)}_{h0}\right)^24\pi^2 \sum_{i=1}^n\alpha_i^2 \delta t^{(N+1)} \right)\right)\\
\\
\left( \exp\left( \left( \left( e^{\rho,r,N}_{ij\alpha\beta}(2m\delta t^{(N+1)})\right)_{ij\alpha\beta}\right)\delta t^{(N+1)}\right) \right)\mathbf{v}^{\rho,r,N+1,F}(2m\delta t^{(N+1)}),
\end{array}
\end{equation}
where
\begin{equation}
\begin{array}{ll}
e^{\rho,r,N}_{ij\alpha\gamma}(2m\delta t^{(N+1)})=-\frac{1}{2}\frac{2\pi i (\alpha_j-\gamma_j)}{l}v^{\rho,r,N}_{i(\alpha-\gamma)}(2m\delta t^{(N+1)})\\
\\
+\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}
4\pi^2 \frac{\sum_{k=1}^n\gamma_j(\alpha_k-\gamma_k)v^{\rho,r,N+1}_{k(\alpha-\gamma)}(2m\delta t^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}.
\end{array}
\end{equation}
Here we used the specific parameter choice for $r$ and $\rho$ above.
The size of one step on level $N$ equals the size of two steps on level $N+1$. Using upper bound estimates of the form (\ref{cC}) we consider two steps and estimate
\begin{equation}
\begin{array}{ll}
{\big |}v^{\rho,r,N+1,F}_{i\alpha}((2m+2)\delta t^{(N+1)})-v^{\rho,r,N,F}_{i\alpha}(m\delta t^{(N)}){\big |} \\
\\
\leq \frac{c(n)C^{(p)}_{h0}C_{(e)m}\delta t^{(N)}}{1+|\alpha|^{n+s}},
\end{array}
\end{equation}
where $C$ is the constant in (\ref{C}).
It follows that for all $m\in \left\lbrace 0,\cdots,2^N\right\rbrace $
\begin{equation}\label{errorv}
\begin{array}{ll}
\left( v^{1,r,N,F}_{i\alpha}(m\delta t^{(N)})\right)_{N\geq 1}
\end{array}
\end{equation}
is a Cauchy sequence, and hence has a limit as $N\uparrow \infty$.
\end{proof}
The argument above proves Theorem \ref{linearboundthm}, where the constant $C>0$ depends on the viscosity constant $\nu$. Next we consider variations of the argument which lead to related results not covered by the previous argument, and which use local and global time delay transformations with a potential damping term. In the case of a time global transformation we get dependence of the upper bound constant $C$ on the time horizon, but independence of the upper bound of the viscosity constant. We also combine viscosity damping with potential damping via local time transformation. Using time horizons $\Delta$ of the local subschemes which are correlated to the viscosity $\nu$ we have another method in order to obtain global regular upper bounds. This method will be considered in more detail elsewhere.
Note the role of the spatial parameter $r>0$ in the estimates above. For large $r>0$ the parameter coefficient $\rho r^2$ of the viscosity damping term becomes dominant compared to the parameter coefficient $\rho r$ of the nonlinear term. This implies that the viscosity damping becomes dominant compared to possible growth caused by the nonlinear terms whenever a velocity mode exceeds a certain level. However, in order to make potential damping dominant this parameter $r$ may be chosen to be small. Since the potential damping term does not depend on the parameter $r$, potential damping becomes dominant if $r$ is chosen small enough compared to a given time horizon $T>0$. In this simple case of a global time delay transformation we have dependence of the global upper bound on the time horizon $T>0$, but the scheme is still global.
Although Euler equations have singular solutions in general, they may have global solution branches in strong spaces as well, as suggested by (\ref{aaproof}). A detailed analysis is beyond the scope of this paper. Note that the viscosity limit $\nu \downarrow 0$ of (\ref{aaproof}) leads to an Euler scheme for the incompressible Euler equation with a solution candidate of the form
\begin{equation}\label{eeproof}
\begin{array}{ll}
\lim_{N\uparrow \infty}\mathbf{e}^{\rho,r,N,F}(T)=\lim_{N\uparrow \infty} \lim_{\nu\downarrow 0}\Pi_{m=0}^{2^N}\left( \delta_{ij\alpha\beta}\exp\left(-\rho r^2\nu \sum_{i=1}^n\alpha_i^2 \delta t^{(N)} \right)\right)\\
\\
\left( \exp\left( \left( \left( e^{\rho,r,N}_{ij\alpha\beta}\right)_{ij\alpha\beta}(m\delta t^{(N)})\right)\delta t^{(N)} \right) \right) \mathbf{h}^F\\
\\
=\lim_{N\uparrow \infty} \Pi_{m=0}^{2^N}
\left( \exp\left( \left( \left( e^{\rho ,r,N}_{ij\alpha\beta}\right)_{ij\alpha\beta}(m\delta t^{(N)})\right)\delta t^{(N)} \right) \right) \mathbf{h}^F,
\end{array}
\end{equation}
where upper bound estimates in a strong norm for the limit $\lim_{N\uparrow \infty}\mathbf{e}^{\rho,r,N,F}(T)$ in (\ref{eeproof}) imply the existence of regular upper bounds for the limit $\lim_{N\uparrow \infty}\mathbf{v}^{N,F}(T)$ of the Navier-Stokes equation scheme in the same strong norm.
A global regular upper bound for a global regular solution branch of the incompressible Navier Stokes equation then leads to a unique solution via a well-known uniqueness result which holds on the torus as well, i.e.,
for the incompressible Navier Stokes equation a global regular solution branch $v_i,~1\leq i\leq n$ with $v_i\in C^0\left([0,T],H^s\left({\mathbb T}^n\right)\right) $ for $s\geq 2.5$ leads - via Gronwall's inequality - to uniqueness in this space of regularity. More precisely, if $\tilde{v}_i,~1\leq i\leq n$ is another solution of the incompressible Navier Stokes equation, then for some constant $C>0$ depending only on the dimension $n$ and the viscosity $\nu>0$ and some integer $p\geq 4$ we have
\begin{equation}\label{unique}
{\big |}\tilde{v}(t)-v(t){\big |}^2_{L^2}\leq {\big |}\tilde{v}(0)-v(0){\big |}^2_{L^2}
\exp\left(C\int_0^t\left( {\big |}v(s){\big |}^p_{L^4}+{\big |}v(s){\big |}^2_{L^4}\right)ds \right).
\end{equation}
Here, the choice $p=8$ is sufficient in case of dimension $n=3$. Hence, there is no other solution branch with the same $H^q$ data for $q\geq 2.5$ which is the critical level of regularity. Note that (\ref{unique}) does not hold for the Euler equation.
Next we use viscosity damping in cooperation with potential damping via local and global time dilatation in order to prove related global upper bound results. The existence of a global regular upper bound for time delay transformation schemes follows from three facts: a) for some short time interval $\Delta >0$ and some $0<\rho,r,\lambda<1$ and for stages $N\geq 1$ such that time steps are smaller than $\Delta >0$ we have a preservation of upper bounds in strong norms of the comparison functions $u^{*,1,r,N,t_0}_i,~ 1\leq i\leq n$ for $*\in \left\lbrace l,g\right\rbrace$ at each time step which falls into the time interval $\left[0,\Delta\right]$ with respect to transformed time $s$-coordinates, and b) we have a preservation of the upper bound in the limit $N\uparrow \infty$, i.e., a preservation of an upper bound of the $H^p$-norm for $p>\frac{n}{2}+1$ of the functions $u^{*,1,r,t_0}_i(s,.)=\lim_{N\uparrow \infty}u^{*,\rho,r,N,t_0}_i(s,.),~1\leq i\leq n$ for $s\in \left[0,\Delta\right]$ for $*\in \left\lbrace l,g\right\rbrace$, and c) in the case of a time local time delay transformation, i.e., $*=l$, an upper bound preservation for strong norms can be transferred to the original velocity function components for some $\rho, r,\lambda>0$, where the time size $\Delta >0$ of the subscheme is related to the viscosity $\nu$. Here for $t\in [t_0,t_0+\Delta]$ we have $v^{\rho,r}_i(t,.)=\lim_{N\uparrow \infty}v^{1,r,N,t_0}_i(t,.)=\lim_{N\uparrow \infty}\lambda (1+\mu(t-t_0)) u^{l,1,r,N,t_0}_i(s,.),~1\leq i\leq n$, and the latter limit function satisfies the incompressible Navier Stokes equation on the time interval $[t_0,t_0+a]$ (recall that $t\in [t_0,t_0+a]$ for $a\in (0,1)$ corresponds to $s\in [0,\Delta]$ with $\Delta=\frac{a}{\sqrt{1-a^2}}$). Here, the potential damping causes a growth term of size $O(\Delta^2)$ which can be offset by the viscosity damping by a choice of a time horizon $\Delta$ of the subscheme which is small compared to $\nu r^2$.
First we formalize the statements in a) and b) in
\begin{lem}\label{mainlem}
Let a time horizon $T>0$ be given and consider a subscheme with local or global time delay transformation, i.e., let $*\in \left\lbrace l,g\right\rbrace$. Given a time interval length $\Delta \in (0,1)$ and stages $N\geq 1$ large enough such that $\delta t^{(N)}\leq \Delta$, for $m> \frac{n}{2}+1$ there is a constant $C'>0$ (depending only on the viscosity and the size of the data) such that
\begin{equation}
{\big |}\mathbf{u}^{*,1,r,N,F,t_0}(0){\big |}^n_{h^m}\leq C'\Rightarrow {\big |}\mathbf{u}^{*,1,r,N,F,t_0}(s){\big |}^n_{h^m}\leq C'
\end{equation}
for
$$s\in \left\lbrace p\delta t^{(N)}|0\leq p\leq 2^{N},~s\leq \Delta\right\rbrace,$$
where transformed time $s\in \left[0,\Delta \right]$ corresponds to original time $t\in \left[t_0,t_0+a\right]$ with $\Delta =\frac{a}{\sqrt{1-a^2}}$. Here, we
define
\begin{equation}
{\big |}\mathbf{u}^{*,1,r,N,F,t_0}(s){\big |}^n_{h^m}:=\max_{1\leq i\leq n}{\big |}\mathbf{u}^{*,1,r,N,F,t_0}_i(s){\big |}_{h^m}
\end{equation}
for $*\in \left\lbrace l,g\right\rbrace$.
The statement holds in the limit $N\uparrow \infty$ as well, i.e.,
for $m> \frac{n}{2}+1$ there exists a constant $C'>0$ (depending only on the viscosity and the size of the data) such that for $s\in \left[0,\frac{\Delta}{\sqrt{1-\Delta^2}} \right]$
\begin{equation}
{\big |}\mathbf{u}^{*,1,r,F,t_0}(0){\big |}^n_{h^m}\leq C'\Rightarrow {\big |}\mathbf{u}^{*,1,r,F,t_0}(s){\big |}^n_{h^m}\leq C'
\end{equation}
for $*\in \left\lbrace l,g\right\rbrace$.
\end{lem}
\begin{rem}
Note that for $s>0$ and $*\in \left\lbrace l,g\right\rbrace$ the inequality
\begin{equation}
|u^{*,1,r}_{i\alpha}|\leq \frac{c}{1+|\alpha|^{n+s}}<\infty
\end{equation}
implies polynomial decay of order $2n+2s$ of the quadratic modes, where the equivalence
\begin{equation}
u^{*,1,r}_i\in H^m\equiv H^m\left({\mathbb Z}^n\right) \mbox{ iff } \sum_{\alpha\in {\mathbb Z}^n}|u^{*,1,r}_{i\alpha}|^{2}(1+|\alpha|^{2m})<\infty
\end{equation}
implies that $u^{*,1,r}_i\in H^{\frac{n}{2}+s}$ and vice versa. According to the theorems stated above, $H^{\frac{n}{2}+1}$ is the critical space for regularity. This means that for Theorem \ref{linearboundthm} we assume that $s>\frac{n}{2}+1$.
\end{rem}
Next we sketch a proof for Lemma \ref{mainlem} in the case $*=l$. The proof in the case of a global time delay transformation is similar, with the difference that the spatial transformation parameter $r$ may depend on the time horizon.
The scheme for $u^{l,\rho,r,t_0}_i,~1\leq i\leq n$ becomes at each stage $N\geq 1$
\begin{equation}\label{navode200secondu}
\begin{array}{ll}
u^{l,\rho,r,N,t_0}_{i\alpha}((m+1)\delta s^{(N)})=u^{l,\rho,r,N,t_0}_{i\alpha}(m\delta s^{(N)})\\
\\
+\mu^{t_0}\sum_{j=1}^n\rho r^2\nu \left( -\frac{4\pi^2 \alpha_j^2}{l^2}\right)u^{l,\rho,r,N,t_0}_{i\alpha}(m\delta s^{(N)})\delta s^{(N)}\\
\\
+\sum_{j=1}^n\sum_{\gamma\in {\mathbb Z}^n}e^{l,\rho,r,N,u,\lambda,t_0}_{ij\alpha\gamma}(m\delta s)u^{l,\rho,r,N,t_0}_{j\gamma}(m\delta s^{(N)})\delta s^{(N)}.
\end{array}
\end{equation}
where $\mu^{t_0}=\sqrt{1-(.-t_0)^2}^3$ is evaluated at $t_0+m\delta t^{(N)}$, and
\begin{equation}\label{eujk*}
\begin{array}{ll}
e^{l,\rho,r,N,u,\lambda ,t_0}_{ij\alpha\gamma}(m \delta s^{(N)})=-\lambda \rho r\mu^{1,t_0}\frac{2\pi i (\alpha_j-\gamma_j)}{l}u^{l,\rho,r,N,t_0}_{i(\alpha-\gamma)}(m\delta s^{(N)})\\
\\
+ \lambda \rho r\mu^{l,1,t_0}\frac{2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}
\sum_{k=1}^n4\pi^2\gamma_j(\alpha_k-\gamma_k)u^{l,\rho,r,N,t_0}_{k(\alpha-\gamma)}(m\delta s^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}-\mu^{l,0,t_0}(m\delta s^{(N)})\delta_{ij\alpha\gamma}
\end{array}
\end{equation}
along with $\mu^{l,1,t_0}(t):=(1+\mu(t-t_0))\sqrt{1-(t-t_0)^2}^3$ and $\mu^{l,0,t_0}(t):=\frac{\mu\sqrt{1-(t-t_0)^2}^3}{1+\mu (t-t_0)}$.
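The elementary bounds on these coefficients used below can be checked numerically; the values $\mu=2$ and $a=0.5$ in the following sketch are arbitrary.

```python
import numpy as np

# Numerical check of the elementary bounds: on s = t - t0 in [0, a], a in (0,1),
#   mu_l1(s) = (1 + mu*s) * sqrt(1 - s^2)^3        satisfies |mu_l1| <= 1 + mu*a,
#   mu_l0(s) = mu * sqrt(1 - s^2)^3 / (1 + mu*s)   satisfies mu_l0 >= mu*sqrt(1-a^2)^3/(1+mu*a).
mu, a = 2.0, 0.5                      # made-up values for the sketch
s = np.linspace(0.0, a, 1001)
mu_l1 = (1 + mu * s) * np.sqrt(1 - s**2)**3
mu_l0 = mu * np.sqrt(1 - s**2)**3 / (1 + mu * s)
assert np.all(np.abs(mu_l1) <= 1 + mu * a + 1e-12)
assert np.all(mu_l0 >= mu * np.sqrt(1 - a**2)**3 / (1 + mu * a) - 1e-12)
```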
The last term in (\ref{eujk*}) is related to the damping term of the equation for the function $u^{l,1,r,t_0}_i,~1\leq i\leq n$. For times $0\leq t_e$ and with an analogous time discretization we get the Trotter product formula
\begin{equation}\label{trotterlambda2}
\begin{array}{ll}
\mathbf{u}^{l,\rho,r,N,F,t_0}(t_e)\doteq \Pi_{m=0}^{2^N}\left( \delta_{ij\alpha\beta}\exp\left(-\rho r^2\nu \sum_{i=1}^n\alpha_i^2 \delta s^{(N)} \right)\right)\times\\
\\
\times \left( \exp\left( \left( \left( e^{l,\rho,r,N,u,\lambda,t_0}_{ij\alpha\beta}\right)_{ij\alpha\beta}(m\delta s^{(N)})\right)\delta s^{(N)}\right) \right) \mathbf{u}^{N,F,t_0}(0).
\end{array}
\end{equation}
The related comparison function for the Euler equation velocity satisfies
\begin{equation}\label{trotterlambdae2}
\begin{array}{ll}
\mathbf{u}^{l,\rho,r,e,N,F,t_0}(t_e)\doteq \\
\\
\Pi_{m=0}^{2^N}\left( \exp\left( \left( \left( e^{l,\rho,r,N,u,\lambda,t_0}_{ij\alpha\beta}\right)_{ij\alpha\beta}(m\delta s^{(N)})\right)\delta s^{(N)}\right) \right) \mathbf{u}^{l,\rho,r,e,N,F,t_0}(0).
\end{array}
\end{equation}
The scheme for the Navier Stokes equation comparison function $\mathbf{u}^{l,\rho,r,N,F,t_0}(t_e)$ has an additional diagonal factor $\left( \delta_{ij\alpha\beta}\exp\left(-\rho r^2\nu \sum_{i=1}^n\alpha_i^2 \delta s^{(N)} \right)\right)$ with entries of modulus ${\big |}\exp\left(-\rho r^2\nu \sum_{i=1}^n\alpha_i^2 \delta s^{(N)} \right){\big |}\leq 1$, which is effective at each time step of the Trotter scheme. Using this observation and induction with respect to the time step number we get
\begin{lem}
Let $\lambda=1$ and $\rho=1$ for simplicity. For all stages $N\geq 1$, and given $0\leq t_e<1$ there exists $r>0$ such that
\begin{equation}
{\Big |}\mathbf{u}^{l,\rho,r,e,N,F,t_0}(0){\Big |}^n_{h^p}\leq C^p_h~\Rightarrow ~{\Big |}\mathbf{u}^{l,\rho,r,N,F,t_0}(t_e){\Big |}^n_{h^p}\leq C^p_h
\end{equation}
\end{lem}
We observe that the comparison function $\mathbf{u}^{\rho,r,e,N,F,t_0}(t_e)$ for the Euler equation scheme preserves certain upper bounds at one step within the time interval $[0,\Delta]$ for appropriately chosen parameters $\lambda>0$ and $\mu>\lambda$ depending on $\Delta=\frac{a}{\sqrt{1-a^2}}>0$, where $a$ is the size of the corresponding time interval in original coordinates. We mention here that the parameter $\lambda$ can be useful in order to strengthen the relative strength of potential damping: for $r=\lambda^2$ the nonlinear terms have a coefficient of order $\sim \lambda^3$, such that the nonlinear growth factor $\sim\frac{1}{\lambda^2}$ of the scaled value functions is absorbed and the growth of the nonlinear terms is of order $\lambda$, while the potential damping term gets a factor $\frac{1}{\lambda}$ via the scaled value function, which is relatively strong. However, this is not really needed in order to prove the existence of a global solution branch of the Euler equation.
Assume inductively that for some finite constant $C>0$ and for $p>\frac{n}{2}+1$ and a given stage $N\geq 1$ with $\delta s^{(N)}\leq \Delta$ and $m\geq 0$ we have
\begin{equation}
{\Big |}\mathbf{u}^{\rho,r,e,N,F,t_0}(m\delta s^{(N)}){\Big |}^n_{h^p}\leq C^p_{h0}
\end{equation}
The Euler equation scheme for the next time step is
\begin{equation}\label{navode200secondue}
\begin{array}{ll}
u^{l,\rho,r,e,N,t_0}_{i\alpha}((m+1)\delta s^{(N)})=u^{l,\rho,r,N,t_0}_{i\alpha}(m\delta s^{(N)})\\
\\
+\sum_{j=1}^n\sum_{\gamma\in {\mathbb Z}^n}{\Big (}-\lambda r\mu^{l,1,t_0}\frac{2\pi i (\alpha_j-\gamma_j)}{l}u^{l,\rho,r,e,N,t_0}_{i(\alpha-\gamma)}(m\delta s^{(N)})\\
\\
+ \lambda r\mu^{l,1,t_0}\frac{2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}
\sum_{k=1}^n4\pi^2\gamma_j(\alpha_k-\gamma_k)u^{\rho,r,e,N,t_0}_{k(\alpha-\gamma)}(m\delta s^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}{\Big )}u^{l,\rho,r,e,N,t_0}_{j\gamma}(m\delta s^{(N)})\delta s^{(N)}\\
\\-\mu^{l,0,t_0}(m\delta s^{(N)})u^{l,\rho,r,e,N,t_0}_{i\alpha}(m\delta s^{(N)})\delta s^{(N)},
\end{array}
\end{equation}
where we note that $|\mu^{l,1,t_0}(t)|\leq(1+\mu \Delta)$ and $|\mu^{l,0,t_0}(t)|\geq \frac{\mu\sqrt{1-\Delta^2}^3}{1+\mu \Delta}$ for $t=t(s)\in [t_0,t_0+\Delta]$ and $t\in [t_0,t_0+a]$. We get
\begin{equation}\label{navode200secondue**}
\begin{array}{ll}
{\Big |}u^{l,\rho,r,e,N,F,t_0}((m+1)\delta s^{(N)}){\Big |}^n_{h^p}\leq {\big |}u^{l,\rho,r,N,t_0}(m\delta s^{(N)}){\big |}_{h^p}^n\\
\\
+{\Big |}{\Big (}\sum_{j=1}^n\sum_{\gamma\in {\mathbb Z}^n}{\Big (}-\lambda r\mu^{1,t_0}\frac{2\pi i (\alpha_j-\gamma_j)}{l}u^{l,\rho,r,e,N,F,t_0}_{i(\alpha-\gamma)}(m\delta s^{(N)})\\
\\
+ \lambda r\mu^{1,t_0}\frac{2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}
\sum_{k=1}^n4\pi^2\gamma_j(\alpha_k-\gamma_k)u^{l,\rho,r,e,N,t_0}_{k(\alpha-\gamma)}(m\delta s^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}{\Big )}\times\\
\\
\times u^{l,\rho,r,e,N,t_0}_{j\gamma}(m\delta s^{(N)})\delta s^{(N)}{\Big )}_{1\leq i\leq n,\alpha \in {\mathbb Z}^n}+O\left(\left( \delta s^{(N)}\right)^2\right) \\
\\-\frac{\mu\sqrt{1-\Delta^2}^3}{1+\mu \Delta}{\Big |}u^{l,\rho,r,e,N,F,t_0}(m\delta s^{(N)})\delta s^{(N)}{\Big |}_{h^p}^n.
\end{array}
\end{equation}
Hence, the preservation of an upper bound at step $m+1$ follows from
\begin{equation}\label{left}
\begin{array}{ll}
\lambda (1+\mu\Delta){\Big |}{\Big (}\sum_{j=1}^n\sum_{\gamma\in {\mathbb Z}^n}{\Big (}-\frac{2\pi i (\alpha_j-\gamma_j)}{l}u^{l,\rho,r,e,N,F,t_0}_{i(\alpha-\gamma)}(m\delta s^{(N)})\\
\\
+ \frac{2\pi i\alpha_i1_{\left\lbrace \alpha\neq 0\right\rbrace}
\sum_{k=1}^n4\pi^2\gamma_j(\alpha_k-\gamma_k)u^{l,\rho,r,e,N,t_0}_{k(\alpha-\gamma)}(m\delta s^{(N)})}{\sum_{i=1}^n4\pi^2\alpha_i^2}{\Big )} u^{l,\rho,r,e,N,t_0}_{j\gamma}(m\delta s^{(N)})\delta s^{(N)}{\Big )}_{1\leq i\leq n,\alpha \in {\mathbb Z}^n}{\Big |}_{h^p}^n\\
\\
\leq \frac{\mu\sqrt{1-\Delta^2}^3}{1+\mu \Delta}{\Big |}u^{l,\rho,r,e,N,F,t_0}(m\delta s^{(N)})\delta s^{(N)}{\Big |}_{h^p}^n.
\end{array}
\end{equation}
According to the regularity (polynomial decay of the modes) of $u^{e,N,F,t_0}_{i}(m\delta s^{(N)})$ and simple estimates as in (18), the left side of (\ref{left}) has an upper bound
\begin{equation}
\lambda r(1+\mu\Delta)c(n)C^2
\end{equation}
for some constant $c(n)$ which depends only on dimension $n$ such that the inequality in (\ref{left}) follows from
\begin{equation}\label{left2}
\begin{array}{ll}
\lambda r(1+\mu\Delta)c(n)C^2\leq \frac{\mu\sqrt{1-\Delta^2}^3}{1+\mu \Delta}C.
\end{array}
\end{equation}
Here, the number $C>0$ is determined by the data at time $t_0$, and $r,\lambda >0$ can be chosen such that the $|.|_{H^p}$ norms of the comparison functions $u^{1,r,t_0}_i,~1\leq i\leq n$ are preserved. This does not imply preservation of the norm for the velocities $v^{1,r}_i,~1\leq i\leq n$, of course.
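The feasibility of (\ref{left2}) for small $\lambda r$ can be made explicit in a short sketch with invented values for $c(n)$, $C$, $\mu$ and $\Delta$: since the right side is a fixed positive number, shrinking $\lambda r$ always satisfies the inequality.

```python
import math

# Sketch: check lam*r*(1+mu*Delta)*c_n*C^2 <= mu*sqrt(1-Delta^2)^3/(1+mu*Delta)*C
# for made-up values of c(n), C, mu, Delta, choosing lam*r small enough.
c_n, C, mu, Delta = 10.0, 5.0, 1.0, 0.5       # all values invented for the sketch
rhs = mu * math.sqrt(1 - Delta**2)**3 / (1 + mu * Delta) * C
lam_r = rhs / (2 * (1 + mu * Delta) * c_n * C**2)   # choose lam*r with a factor-2 margin
lhs = lam_r * (1 + mu * Delta) * c_n * C**2
assert lhs <= rhs
```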
The Euler equation scheme has a limit as $N\uparrow \infty$ by standard arguments. This finishes items a) and b) of the argument, and we get
\begin{lem}\label{rholemma2}
For all $0\leq t_e<1$ and $p>\frac{n}{2}+1$ and for some $r,\lambda >0$ and $*\in \left\lbrace l,g\right\rbrace$ we get
\begin{equation}
{\Big |}\mathbf{u}^{*,\rho,r,F,t_0}(0){\Big |}^n_{h^p}\leq C\Rightarrow {\Big |}\mathbf{u}^{*,\rho,r,F,t_0}(t_e){\Big |}^n_{h^p}\leq C,
\end{equation}
where $\mathbf{u}^{*,\rho,r,F,t_0}(t_e)=\lim_{N\uparrow \infty} \mathbf{u}^{*,\rho,r,N,F,t_0}(t_e)$.
\end{lem}
Pure potential damping leads to a growth estimate for the original velocity component functions of order $\Delta^2$ on the time interval $\left[t_0,t_0+a\right]$. This growth can be offset even by a small viscosity damping for nonzero modes with the choice $\Delta =\nu r^2$. Again, an extended scheme leads to global regular upper bounds.
\footnotetext[1]{\texttt{{[email protected]}, [email protected]}.}
\end{document} |
\begin{document}
\title[Monte Carlo Continual Resolving in Imperfect Information Games]{Monte Carlo Continual Resolving for Online Strategy Computation in Imperfect Information Games}
\author{Michal \v Sustr}
\orcid{0000-0002-3154-4727}
\affiliation{}
\email{[email protected]}
\author{Vojt\v ech Kova\v r\'ik}
\orcid{0000-0002-7954-9420}
\email{[email protected]}
\affiliation{
\institution{Artificial Intelligence Center, FEE Czech Technical University}
\city{Prague}
\state{Czech Republic}
}
\author{Viliam Lis\'y}
\orcid{0000-0002-1647-1507}
\email{[email protected]}
\affiliation{}
\begin{abstract}
Online game playing algorithms produce high-quality strategies with a fraction of memory and computation required by their offline alternatives.
Continual Resolving (CR) is a recent theoretically sound approach to online game playing that has been used to outperform human professionals in poker.
However, parts of the algorithm were specific to poker, which enjoys many properties not shared by other imperfect information games.
We present a domain-independent formulation of CR applicable to any two-player zero-sum extensive-form games (EFGs).
It works with an abstract resolving algorithm, which can be instantiated by various EFG solvers.
We further describe and implement its Monte Carlo variant (\acs{MCCR}) which uses Monte Carlo Counterfactual Regret Minimization (\acs{MCCFR}) as a resolver.
We prove the correctness of CR and show an $O(T^{-1/2})$-dependence of \acs{MCCR}'s exploitability on the computation time.
Furthermore, we present an empirical comparison of \acs{MCCR} with incremental tree building to Online Outcome Sampling and Information-set \acs{MCTS} on several domains.
\end{abstract}
\keywords{counterfactual regret minimization; resolving; imperfect information; Monte Carlo; online play; extensive-form games; Nash equilibrium}
\maketitle
\section{Introduction}
Strategies for playing games can be pre-computed \emph{offline} for all possible situations, or computed \emph{online} only for the situations that occur in a particular match. The advantage of the offline computation is stronger bounds on the quality of the computed strategy. Therefore, it is preferable if we want to solve a game optimally. On the other hand, online algorithms can produce strong strategies with a~fraction of the memory and time requirements of the offline approaches. Online game playing algorithms have outperformed humans in Chess~\cite{DeepBlue}, Go~\cite{Silver16:AlphaGo}, and no-limit Poker~\cite{brown2017safe,DeepStack}.
While online approaches have always been the method of choice for strong play in perfect information games, it is less clear how to apply them in imperfect information games (\acs{IIG}s). To find the optimal strategy for a specific situation in an \ac{IIG}, a player has to reason about the unknown parts of the game state. They depend on the (possibly unobservable) actions of the opponent prior to the situation, which in turn depends on what the optimal decisions are for both players in many other parts of the game. This makes the optimal strategies in distinct parts of the game closely interdependent and makes correct reasoning about the current situation difficult without solving the game as a whole.
Existing online game playing algorithms for imperfect information games either do not provide any guarantees on the quality of the strategy they produce~\cite{long2010understanding,Ciancarini2010,cowling2012information}, or require the existence of a compact heuristic evaluation function and a significant amount of computation to construct it~\cite{DeepStack,brown2018depth}. Moreover, the algorithms that are theoretically sound were developed primarily for Texas hold'em poker, which has a very particular information structure. After the initial cards are dealt, all of the actions and chance outcomes that follow are perfectly observable. Furthermore, since the players' moves alternate, the number of actions taken by each player is always known.
None of this holds in general for games that can be represented as two-player zero-sum extensive-form games (\acs{EFG}s). In blind chess~\cite{Ciancarini2010}, we may learn we have lost a piece, but not necessarily which of the opponent's pieces took it. In visibility-based pursuit-evasion~\cite{raboin2010strategy}, we may know the opponent remained hidden, but not in which direction she moved. In phantom games~\cite{teytaud2011lemmas}, we may learn it is our turn to play, but not how many illegal moves the opponent has attempted. Because of these complications, the previous theoretically sound algorithms for imperfect-information games are no longer directly applicable.
The sole exception is Online Outcome Sampling (\acs{OOS})~\cite{OOS}. It is theoretically sound, completely domain independent, and it does not use any pre-computed evaluation function. However, it starts all its samples from the beginning of the game, and it has to keep sampling actions that cannot occur in the match anymore. As a result, its memory requirements grow as more and more actions are taken in the match, and the high variance in its importance sampling corrections slows down the convergence.
We revisit the Continual Resolving algorithm (CR) introduced in~\cite{DeepStack} for poker and show how it can be generalized in a way that can handle the complications of general two-player zero-sum \ac{EFG}s.
Based on this generic algorithm, we introduce Monte Carlo Continual Resolving (\ac{MCCR}), which combines \ac{MCCFR}~\cite{MCCFR} with incremental construction of the game tree, similarly to \ac{OOS}, but replaces its targeted sampling scheme by Continual Resolving. This leads to faster sampling since \ac{MCCR} starts its samples not from the root, but from the current point in the game. It also decreases the memory requirements by not having to maintain statistics about parts of the game no longer relevant to the current match.
Furthermore, it allows evaluating continual resolving in various domains, without the need to construct expensive evaluation functions.
We prove that \ac{MCCR}'s exploitability approaches zero with increasing computational resources and verify this property empirically in multiple domains.
We present an extensive experimental comparison of \ac{MCCR} with \ac{OOS}, Information-set Monte Carlo Tree Search (\acs{IS-MCTS})~\cite{cowling2012information} and \ac{MCCFR}.
We show that \ac{MCCR}'s performance heavily depends on its ability to quickly estimate key statistics close to the root, which is good in some domains, but insufficient in others.
\section{Background}
We now describe the standard notation for \ac{IIG}s and \ac{MCCFR}.
\subsection{Imperfect Information Games}
We focus on two-player zero-sum extensive-form games with imperfect information.
Based on~\cite{osborne1994course}, game $G$ can be described by
\begin{itemize}
\item $\mathcal{H}$ -- the set of \emph{histories}, representing sequences of actions.
\item $\mathcal{Z}$ -- the set of terminal histories (those $z\in \mathcal H$ which are not a prefix of any other history). We use $g \sqsubset h$ to denote the fact that $g$ is equal to or a prefix of $h$.
\item $\mathcal A(h) := \{ a \, | \ ha \in \mathcal H \}$ denotes the set of actions available at a \emph{non-terminal history} $h\in \mathcal H\setminus \mathcal Z$. The term $ha$ refers to the child history of $h$ reached by playing $a$.
\item $\mathcal P : \mathcal H \setminus \mathcal Z \rightarrow \{1,2,c\}$ is the \emph{player function} partitioning non-terminal histories into $\mathcal H_1$, $\mathcal H_2$ and $\mathcal H_c$ depending on which player chooses an action at $h$. Player $c$ is a special player, called ``chance'' or ``nature''.
\item \emph{The strategy of chance} is a fixed probability distribution $\sigma_c$ over actions in chance player's histories.
\item The \emph{utility function} $u=(u_1,u_2)$ assigns to each terminal history $z$ the rewards $u_1(z), u_2(z)\in \mathbb{R}$ received by players 1 and 2 upon reaching $z$. We assume that $u_2 = - u_1$.
\item The \emph{information-partition} $\mathcal I = (\mathcal I_1, \mathcal I_2)$ captures the imperfect information of $G$. For each player $i\in \{1,2\}$, $\mathcal I_i$ is a partition of $\mathcal H_i$. If $g,h\in \mathcal H_i$ belong to the same $I\in \mathcal I_i$ then $i$ cannot distinguish between them. Actions available at infoset $I$ are the same as in each history $h$ of $I$, therefore we overload $\mathcal A(I) := \mathcal A(h)$. We only consider games with \emph{perfect recall}, where the players always remember their past actions and the information sets visited so far.
\end{itemize}
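As a minimal sketch of this notation (a hypothetical encoding, not one used in the paper), matching pennies can be written down directly: histories are action strings, and player 2's two histories form a single information set because she cannot observe player 1's move.

```python
# Minimal sketch of the EFG notation for matching pennies. Player 1 picks
# H or T, player 2 (not observing the pick) answers h or t; player 1 wins
# on a match. There is no chance player in this game.
ACTIONS = {"": ["H", "T"], "H": ["h", "t"], "T": ["h", "t"]}   # A(h)
PLAYER = {"": 1, "H": 2, "T": 2}                               # P(h)
TERMINALS = {"Hh": 1, "Ht": -1, "Th": -1, "Tt": 1}             # u_1(z); u_2(z) = -u_1(z)
INFOSETS_2 = [{"H", "T"}]                                      # the partition I_2 of H_2

# A(I) is well defined: every history in an infoset offers the same actions.
I = INFOSETS_2[0]
assert all(ACTIONS[h] == ACTIONS["H"] for h in I)
```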
A \emph{behavioral strategy} $\sigma_i \in \Sigma_i$ of player $i$ assigns to each $I\in \mathcal I_i$ a probability distribution $\sigma(I)$ over available actions $a\in \mathcal A(I)$.
A \emph{strategy profile} (or simply \emph{strategy}) $\sigma = (\sigma_1,\sigma_2) \in \Sigma_1 \times \Sigma_2$ consists of strategies of players~1~and~2. For a~player $i \in \{1,2\}$, $-i$ will be used to denote the other two actors $\{1,2,c\}\setminus \{i\}$ in $G$ (for example $\mathcal H_{-1} := \mathcal H_{2} \cup \mathcal H_c$) and ${\textnormal{opp}_i}$ denotes $i$'s opponent ($\textrm{opp}_1 := 2$).
\subsection{Nash Equilibria and Counterfactual Values}
The~\emph{reach probability} of a history $h\in \mathcal H$ under $\sigma$ is defined as $\pi^{\sigma}(h)=\pi^{\sigma}_{1}(h)\pi^{\sigma}_{2}(h)\pi^\sigma_c(h)$, where each $\pi^{\sigma}_{i}(h)$ is a~product of probabilities of the~actions taken by player $i$ between the root and $h$.
The reach probabilities $\pi_i^\sigma(h|g)$ and $\pi^\sigma(h|g)$ conditional on being in some $g\sqsubset h$ are defined analogously, except that the products are only taken over the~actions on the path between $g$ and $h$.
Finally, $\pi^{\sigma}_{-i}(\cdot)$ is defined like $\pi^\sigma(\cdot)$, except that in the product $\pi^\sigma_1(\cdot)\pi^\sigma_2(\cdot)\pi^\sigma_c(\cdot)$ the term $\pi^\sigma_i(\cdot)$ is replaced by 1.
The~\emph{expected utility} for player $i$ of a strategy profile $\sigma$ is $u_i(\sigma) = \sum_{z\in \mathcal Z} \pi^\sigma(z)u_i(z)$.
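As a concrete illustration (a toy construction of our own, not from the paper), the factored reach probabilities and the expected utility can be sketched in a few lines of Python; representing each terminal history as a list of (actor, probability) factors is an assumption made purely for this example:

```python
# Toy illustration: each terminal history z is a list of (actor, probability)
# factors along its path (actor is 1, 2 or "c"), plus the utility u_1(z).
# Here: matching pennies under uniform strategies, no chance moves.
terminals = [
    ([(1, 0.5), (2, 0.5)], +1.0),   # both play Heads
    ([(1, 0.5), (2, 0.5)], -1.0),   # Heads vs Tails
    ([(1, 0.5), (2, 0.5)], -1.0),   # Tails vs Heads
    ([(1, 0.5), (2, 0.5)], +1.0),   # both play Tails
]

def reach(factors, exclude=None):
    """pi^sigma(z); with exclude == i it is pi^sigma_{-i}(z)."""
    p = 1.0
    for actor, prob in factors:
        if actor != exclude:
            p *= prob
    return p

def expected_utility(terminals, i=1):
    """u_i(sigma) = sum_z pi^sigma(z) u_i(z); zero-sum, so u_2 = -u_1."""
    sign = 1.0 if i == 1 else -1.0
    return sum(reach(f) * sign * u1 for f, u1 in terminals)
```

Under the uniform profile the value of matching pennies is zero for both players, which the sketch reproduces.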
The~profile $\sigma$ is an \emph{$\epsilon$-Nash equilibrium} ($\epsilon$-\acs{NE}) if
\begin{align*}
\left( \forall i\in\{1,2\} \right) \ : \ u_i(\sigma) \geq \max_{\sigma_i' \in \Sigma_i} u_i(\sigma_i',\sigma_{\textnormal{opp}_i}) -\epsilon .
\end{align*}
A Nash equilibrium (\ac{NE}) is an $\epsilon$-\ac{NE} with $\epsilon=0$. It is a standard result that in two-player zero-sum games, all $\sigma^*\in \textrm{\ac{NE}}$ have the same $u_i(\sigma^*)$~\cite{osborne1994course}. The \emph{exploitability} $\textnormal{expl}(\sigma)$ of $\sigma \in \Sigma$ is the average of exploitabilities $\textnormal{expl}_i(\sigma)$, $i \in \{1,2\}$, where
\begin{equation*}
\textnormal{expl}_i(\sigma) := u_i(\sigma^*) - \min_{\sigma'_{\textnormal{opp}_i} \in \Sigma_{\textnormal{opp}_i}} u_i(\sigma_i,\sigma'_{\textnormal{opp}_i}).
\end{equation*}
The expected utility conditional on reaching $h\in\mathcal H$ is
\begin{equation*}
u^\sigma_i(h) = \sum_{h\sqsubset z\in \mathcal Z}\pi^{\sigma}(z|h)u_i(z).
\end{equation*}
An ingenious variant of this concept is the~\emph{counterfactual value} (\acs{CFV}) of a history, defined as $v_i^\sigma(h) := \pi^\sigma_{-i}(h) u_i^\sigma(h)$, and the coun\-terfactual value of taking an~action $a$ at $h$, defined as $v_i^\sigma(h,a) := \pi^\sigma_{-i}(h) u_i^\sigma(ha) $.
We set $v_i^\sigma(I) := \sum_{h\in I} v_i^\sigma(h)$ for $I\in \mathcal I_i$ and define $v_i^\sigma(I,a)$ analogously.
A strategy $\sigma^\star_2\in \Sigma_2$ is a \emph{counterfactual best response} $\textrm{CBR}(\sigma_1)$ to $\sigma_1 \in \Sigma_1$ if $v^{(\sigma_1,\sigma^\star_2)}_2(I) = \max_{a\in\mathcal A(I)} v^{(\sigma_1,\sigma^\star_2)}_2(I,a)$ holds for each $I \in \mathcal I_2$~\cite{CFR-D}.
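The definitions of counterfactual values above can be condensed into a small sketch; the inputs (a history's $\pi_{-i}$ and its terminal continuations) are toy numbers of our own choosing, not taken from the paper:

```python
def cfv(pi_minus_i_h, continuations):
    """Counterfactual value v_i^sigma(h) = pi_{-i}^sigma(h) * u_i^sigma(h),
    where u_i^sigma(h) = sum over terminals z of pi^sigma(z|h) * u_i(z).
    continuations: list of (pi(z|h), u_i(z)) pairs for the z above h."""
    u_h = sum(p * u for p, u in continuations)
    return pi_minus_i_h * u_h

def cfv_infoset(histories):
    """v_i^sigma(I) = sum over h in I of v_i^sigma(h).
    histories: list of (pi_{-i}(h), continuations) pairs."""
    return sum(cfv(p, cont) for p, cont in histories)
```

Note that $\pi_i^\sigma(h)$, the player's own reach, deliberately does not appear: this is what makes the value ``counterfactual''.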
\subsection{Monte Carlo \acs{CFR}}
For a strategy $\sigma\in \Sigma$, $I\in \mathcal I_i$ and $a\in \mathcal A(I)$, we set the counterfactual regret for not playing $a$ in $I$ under strategy $\sigma$ to
\begin{equation}
r_i^\sigma(I,a) := v_i^{\sigma} (I,a) - v_i^\sigma (I).
\end{equation}
The Counterfactual Regret minimization (CFR) algorithm~\cite{CFR} generates a sequence of strategies $\sigma^0, \sigma^1,\,\dots,\,\sigma^T$ in such a way that the \emph{immediate counterfactual regret}
\[ \bar R^t_{i,\textnormal{imm}} (I) := \!
\max_{a\in \mathcal A(I)} \bar R^t_{i,\textnormal{imm}} (I,a) := \!
\max_{a\in \mathcal A(I)} \frac{1}{t} \sum_{t'=1}^t r_i^{\sigma^{t'}}(I,a) \]
is minimized for each $I\in \mathcal I_i$, $i \in \{1,2\}$.
It does this by using the Regret Matching update rule~\cite{hart2000simple,blackwell}:
\begin{equation}\label{eq:rm_update}
\sigma^{t+1}(I, a) := \frac{\max \{ \bar R^{t}_{i,\textnormal{imm}} (I,a) ,0\} }{ \sum_{a' \in \mathcal A(I)} \max \{ \bar R^{t}_{i,\textnormal{imm}} (I,a'), 0 \} }.
\end{equation}
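The update rule above can be sketched as a short function (the dictionary representation of cumulative regrets is our own choice for this example):

```python
def regret_matching(cum_regret):
    """Regret Matching update: play each action proportionally to its
    positive cumulative regret, falling back to uniform when no regret
    is positive (matching the convention stated below)."""
    pos = {a: max(r, 0.0) for a, r in cum_regret.items()}
    total = sum(pos.values())
    if total == 0.0:
        return {a: 1.0 / len(cum_regret) for a in cum_regret}
    return {a: p / total for a, p in pos.items()}
```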
Since the overall regret is bounded by the sum of immediate counterfactual regrets~\cite[Theorem 3]{CFR}, this causes the average strategy $\bar{\sigma}^T$ (defined by \eqref{eq:avg_strat1}) to converge to a \ac{NE}~\cite[Theorem 1]{MCCFR}:
\begin{align}\label{eq:avg_strat1}
\bar{\sigma}^T(I, a) := \frac{
\sum_{t=1}^T \pi^{\sigma^t}_i(I) \sigma^t(I,a)
}{
\sum_{t=1}^T \pi^{\sigma^t}_i(I)
} && \textnormal{(where $I\in\mathcal I_i$)} .
\end{align}
In other words, by accumulating the immediate counterfactual regrets of the strategies $\sigma^0, \dots, \sigma^t$ at each information set, we can produce a new strategy $\sigma^{t+1}$. However, only the average strategy is guaranteed to converge to a \ac{NE} at the rate $\mathcal O(1/\sqrt{T})$ -- the individual regret-matching strategies can oscillate.
The initial strategy $\sigma^0$ is uniform, although any strategy would work. If the sum in the denominator of the update rule~(\ref{eq:rm_update}) is zero, $\sigma^{t+1}(I,a)$ is also set to be uniform.
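The contrast between oscillating current strategies and a converging average can be seen in a self-contained sketch of regret-matching self-play on one-shot matching pennies; the game, payoffs and non-equilibrium starting strategies are toy choices of ours, not from the paper:

```python
def rm(cum):
    """Regret matching: positive regrets normalized, else uniform."""
    pos = [max(r, 0.0) for r in cum]
    s = sum(pos)
    return [p / s for p in pos] if s > 0 else [1.0 / len(cum)] * len(cum)

# Matching pennies payoff for player 1; player 2 receives the negative.
U = [[1.0, -1.0], [-1.0, 1.0]]

R1, R2 = [0.0, 0.0], [0.0, 0.0]            # cumulative regrets
S1, S2 = [0.0, 0.0], [0.0, 0.0]            # cumulative strategies
sigma1, sigma2 = [0.9, 0.1], [0.2, 0.8]    # start away from equilibrium

T = 100_000
for _ in range(T):
    # utility of each action against the opponent's current strategy
    v1 = [U[a][0] * sigma2[0] + U[a][1] * sigma2[1] for a in range(2)]
    v2 = [-(U[0][b] * sigma1[0] + U[1][b] * sigma1[1]) for b in range(2)]
    ev1 = sigma1[0] * v1[0] + sigma1[1] * v1[1]
    ev2 = sigma2[0] * v2[0] + sigma2[1] * v2[1]
    for a in range(2):
        R1[a] += v1[a] - ev1               # accumulate immediate regrets
        R2[a] += v2[a] - ev2
        S1[a] += sigma1[a]
        S2[a] += sigma2[a]
    sigma1, sigma2 = rm(R1), rm(R2)        # regret-matching update

avg1 = [s / T for s in S1]
avg2 = [s / T for s in S2]
```

The current strategies cycle around the equilibrium, while the averages approach the unique \ac{NE} $(0.5, 0.5)$ at the $\mathcal O(1/\sqrt{T})$ rate.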
The disadvantage of CFR is the costly need to traverse the whole game tree during each iteration.
Monte Carlo CFR~\cite{MCCFR} works similarly, but only samples a small portion of the game tree each iteration. It calculates sampled variants of CFR's variables, each of which is an unbiased estimate of the original~\cite[Lemma~1]{MCCFR}.
We use a particular variant of \ac{MCCFR} called Outcome Sampling (\acs{OS})~\cite{MCCFR}.
\ac{OS} only samples a single terminal history $z$ at each iteration, using the sampling strategy $\sigma^{t,\epsilon} := (1-\epsilon)\sigma^t + \epsilon\cdot \textrm{rnd}$, where $\epsilon \in (0,1]$ controls the exploration and $\textrm{rnd}(I, a) := \frac{1}{|\mathcal A(I)|}$.
This $z$ is then traversed forward (to compute each player's probability $\pi_i^{\sigma^t}(h)$ of playing to reach each prefix of $z$) and backward (to compute each player's probability $\pi_i^{\sigma^t}(z|h)$ of playing the remaining actions of the history).
During the backward traversal, the sampled counterfactual regrets at each visited $I\in \mathcal I$ are computed according to \eqref{eq:sampled_regret} and added to $\tilde R^T_{i,\textnormal{imm}}(I)$:
\begin{align}\label{eq:sampled_regret}
\tilde{r}^{\sigma^t}_i(I,a) :=
\begin{cases}
w_I\cdot ( \pi^{\sigma^t}(z|ha) - \pi^{\sigma^t}(z|h) ) & \textnormal{if } h a \sqsubset z \\
w_I \cdot (0 - \pi^{\sigma^t}(z|h) ) & \textnormal{otherwise}
\end{cases},
\end{align}
where $h$ denotes the prefix of $z$ which is in $I$ and $w_I$ stands for
$\frac{1}{\pi^{\sigma^{t,\epsilon}}(z)} \pi_{-i}^{\sigma^t} (z|h) u_i(z)$~\cite{lanctot_thesis}.
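The role of the importance weight $1/\pi^{\sigma^{t,\epsilon}}(z)$ can be checked on a small toy example of our own (not from the paper): sampling terminals from an exploratory distribution $q$ and reweighting by $\pi(z)/q(z)$ recovers the exact expectation, which is why the sampled quantities are unbiased estimates of the originals.

```python
import random

random.seed(0)

# Toy terminal histories: true reach probabilities pi(z) and utilities u(z).
pi = [0.2, 0.3, 0.5]
u = [1.0, -2.0, 0.5]
exact = sum(p * x for p, x in zip(pi, u))      # the exact expected utility

# Sample z from an exploratory distribution q, correct by pi(z)/q(z),
# mirroring the 1/pi^{sigma,eps}(z) importance weight above.
q = [1 / 3, 1 / 3, 1 / 3]
N = 200_000
zs = random.choices(range(3), weights=q, k=N)
estimate = sum(pi[z] / q[z] * u[z] for z in zs) / N
```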
\section{Domain-Independent Formulation of Continual Resolving}
The only domain for which continual resolving has been previously defined and implemented is poker. Poker has several special properties:
a) all information sets have a fixed number of histories of the same length,
b) public states have the same size and
c) only a single player is active in any public state.
There are several complications that occur in more general EFGs:
(1) We might be asked to take several turns within a single public state, for example in phantom games.
(2) When we are not the acting player, we might be unsure whether it is the opponent's or chance's turn.
(3) Finally, both players might be acting within the same public state, for example when a secret chance roll determines whether we get to act or not.
In this section, we present an~abstract formulation of continual resolving robust enough to handle the complexities of general EFGs.
However, we first need to define concepts like the public tree and the resolving gadget more carefully.
\subsection{Subgames and the Public Tree}
To speak about the information available to player $i$ in histories where he doesn't act, we use \emph{augmented information sets}.
For player $i\in \{1,2\}$ and history $h\in \mathcal H\setminus \mathcal Z$, player $i$'s \emph{observation history} $\vec O_i(h)$ in $h$ is the sequence $(I_1,a_1,I_2,a_2,~\dots)$ of the information sets visited and actions taken by $i$ on the path to $h$ (incl. $I\ni h$ if $h\in \mathcal H_i$). Two histories $g,h\in \mathcal H\setminus \mathcal Z$ belong to the same \emph{augmented information set} $I\in \mathcal I_i^{\textnormal{aug}}$ if $\vec O_i(g) = \vec O_i(h)$.
This is equivalent to the definition from~\citep{CFR-D}, except that our definition makes it clear that $\mathcal I_i^{\textnormal{aug}}$ is also defined on $\mathcal H_i$ (and coincides there with $\mathcal I_i$ because of perfect recall).
\begin{remark}[Alternatives to $\mathcal I^{\textnormal{aug}}$]
$\mathcal I^{\textnormal{aug}}$ isn't the only viable way of generalizing information sets. One could alternatively consider some further-unrefineable perfect-recall partition $\mathcal I^*_i$ of $\mathcal H$ which coincides with $\mathcal I_i$ on $\mathcal H_i$, and many other variants between the two extremes. We focus only on $\mathcal I^{\textnormal{aug}}$, since an in-depth discussion of the general topic would be outside of the scope of this paper.
\end{remark}
We use $\sim$ to denote histories indistinguishable by some player:
\[ g\! \sim \! h \iff \vec O_1(g)=\vec O_1(h) \lor \vec O_2(g)=\vec O_2(h) .\]
By $\approx$ we denote the~transitive closure of $\sim$. Formally, $g \approx h$ iff
\[
\left( \exists n \right) \left( \exists h_1, \dots, h_n \right) :
g\!\sim\! h_1, \ h_1 \!\sim\! h_2, \ \dots,\ h_{n-1} \!\sim\! h_n, \ h_n \!\sim\! h .
\]
\noindent If two states do \emph{not} satisfy $g \approx h$, then it is common knowledge that both players can tell them apart.
\begin{definition}[Public state] \label{def:publ_state}
A \emph{public partition} is any partition $\mathcal S$ of $\mathcal H\setminus \mathcal Z$ whose elements are closed under $\sim$ and form a tree.
An~element $S$ of such $\mathcal S$ is called a \emph{public state}.
The \emph{common knowledge partition} $\mathcal S_\textrm{ck}$ is the one consisting of the equivalence classes of $\approx$.
\end{definition}
\noindent Our definition of $\mathcal S$ is a~reformulation of the definition of~\cite{accelerated_BR} in terms of augmented information sets (which aren't used in~\cite{accelerated_BR}). The~definition of $\mathcal S_\textnormal{ck}$ is novel.
We endow any $\mathcal S$ with the tree structure inherited from $\mathcal H$. Clearly, $\mathcal S_\textrm{ck}$ is the finest public partition.
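A sketch of computing $\mathcal S_\textrm{ck}$: treat $\sim$ as edges and take connected components (i.e.\ the transitive closure $\approx$) with a union-find. The observation histories below are invented purely for illustration:

```python
from collections import defaultdict

# Invented observation histories (O_1(h), O_2(h)) for four histories;
# g ~ h iff either player's observations coincide.
obs = {
    "h1": ("A", "X"),
    "h2": ("A", "Y"),   # ~ h1 (player 1 cannot tell them apart)
    "h3": ("B", "Y"),   # ~ h2 (player 2 cannot tell them apart)
    "h4": ("C", "Z"),   # ~-related to nothing else
}

parent = {h: h for h in obs}

def find(h):
    while parent[h] != h:             # path halving
        parent[h] = parent[parent[h]]
        h = parent[h]
    return h

def union(g, h):
    parent[find(g)] = find(h)

hs = list(obs)
for i, g in enumerate(hs):
    for h in hs[i + 1:]:
        if obs[g][0] == obs[h][0] or obs[g][1] == obs[h][1]:
            union(g, h)               # g ~ h

classes = defaultdict(set)
for h in obs:
    classes[find(h)].add(h)
# equivalence classes of the transitive closure = the partition S_ck
partition = sorted(sorted(c) for c in classes.values())
```

Note that h1 and h3 end up in the same public state although no player confuses them directly; they are linked through h2, which is exactly what the transitive closure captures.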
The~concept of a~public state is helpful for introducing imperfect-information subgames (which aren't defined in~\cite{accelerated_BR}).
\begin{definition}[Subgame]\label{def:subgame}
A \emph{subgame} rooted at a~public state $S$ is the set $G(S) := \{ h\in \mathcal H | \ \exists g \in S: g\sqsubset h \}$.
\end{definition}
For comparison,~\cite{CFR-D} defines a subgame as ``a forest of trees, closed under both the descendant relation and membership within $\mathcal I^\textnormal{aug}_i$ for any player''.
For any $h\in S \in \mathcal S_\textrm{ck}$, the subgame rooted at $S$ is the smallest~\citep{CFR-D}-subgame containing $h$.
As a result,~\cite{CFR-D}-subgames are ``forests of subgames rooted at common-knowledge public states''.
We can see that finer public partitions lead to smaller subgames, which are easier to solve. In this sense, the common-knowledge partition is the ``best one''. However, finding $\mathcal S_\textrm{ck}$ is sometimes non-trivial, which makes the~definition of general public states from~\cite{accelerated_BR} important.
The drawback of this definition is its ambiguity --- indeed, it allows for extremes such as grouping the whole $\mathcal H$ into a single public state, without giving a practical recipe for arriving at the ``intuitively correct'' public partition.
\subsection{Aggregation and the Upper Frontier}
Often, it is useful to aggregate reach probabilities and counterfactual values over (augmented) information sets or public states. In general EFGs, an augmented information set $I\in \mathcal I_i^\textnormal{aug}$ can be ``thick'', i.e.\ it can contain both some $ha\in \mathcal H$ and its parent $h$.
This necessarily happens when we are unsure how many actions were taken by other players between our two successive actions.
For such $I$, we only aggregate over the ``upper frontier'' $\hat I := \{ h \in I | \, \nexists g \in I: g \sqsubset h \, \& \, g\neq h \}$ of $I$~\cite{halpern2016upperFrontier,halpern1997upperFrontier}:
We overload $\pi^\sigma(\cdot)$ as $\pi^\sigma(I) := \sum_{h\in\hat I} \pi^\sigma(h)$ and $v_i^\sigma(\cdot)$ as $v_i^\sigma(I) := \sum_{h\in\hat I} v_i^\sigma(h)$. We define $\hat S$ for $S\in \mathcal S$, $\pi_i^\sigma(I)$, $\pi_{-i}^\sigma(I)$ and $v^\sigma_i(I,a)$ analogously.
By $\hat S(i) := \{ I \in \mathcal I_i^\textnormal{aug}\, | \ \hat I \subseteq \hat S \}$ we denote the topmost (augmented) information sets of player $i$ in $S$.
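The upper frontier can be computed with a one-liner; modeling histories as action strings, so that $g \sqsubset h$ corresponds to a string prefix, is an assumption made for this sketch:

```python
def upper_frontier(I):
    """hat I: the histories in I with no strict prefix also in I.
    Histories are modeled as action strings, so g is a prefix of h
    exactly when h.startswith(g) -- an assumption of this sketch."""
    return {h for h in I if not any(g != h and h.startswith(g) for g in I)}
```

For a thick set such as {"a", "ab", "abc", "c"}, only "a" and "c" survive, so aggregating over the frontier avoids double-counting a history together with its descendants.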
To the best of our knowledge, the issue of ``thick'' information sets has only been discussed in the context of non-augmented information sets in games with imperfect recall~\cite{halpern2016upperFrontier}.
One scenario where thick augmented information sets cause problems is the resolving gadget game, which we discuss next.
\subsection{Resolving Gadget Game}\label{sec:gadget}
We describe a generalization of the resolving gadget game from~\cite{CFR-D} (cf.~\cite{MoravcikGadget,brown2017safe}) for resolving Player 1's strategy (see Figure~\ref{fig:gadget}).
Let $S\in \mathcal S$ be a public state to resolve from, $\sigma\in \Sigma$, and let $\tilde v(I) \in \mathbb{R}$ for $I\in \hat S(i)$ be the required counterfactual values.
First, the upper frontier of $S$ is duplicated as $\{ \tilde h | \, h\in \hat S \} =: \tilde S$.
Player 2 is the acting player in $\tilde S$, and from his point of view, nodes $\tilde h$ are partitioned according to $\{\tilde I := \{ \tilde h| \, h\in \hat I\}\ |\ I\in \hat S(2)\}$.
In $\tilde h \in \tilde I$ corresponding to $h\in I$, he can choose between ``following'' (F) into $h$ and ``terminating'' (T), which ends the game with utility $\tilde u_2(\tilde h T) := \tilde v(I) \pi_{-2}^\sigma(S) / \pi^\sigma_{-2}(I)$.
From any $h\in \hat S$ onward, the game is identical to $G(S)$, except that the utilities are multiplied by a constant: $\tilde u_i(z) := u_i(z) \pi_{-2}^\sigma(S)$.
To turn this into a well-defined game, a root chance node is added and connected to each $h\in \hat S$, with transition probabilities $\pi_{-2}^\sigma(h) / \pi_{-2}^\sigma(S)$.
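The construction above can be sketched as follows; the function name and the plain-dictionary inputs for $\pi_{-2}^\sigma$, $\tilde v$ and the infoset membership are our own simplifications, not the paper's notation:

```python
def build_gadget_root(pi_minus2, v_tilde, infoset_of):
    """Sketch of the resolving-gadget quantities over the upper frontier.

    pi_minus2[h]  ... pi_{-2}^sigma(h) for h in hat S
    v_tilde[I]    ... required counterfactual value of infoset I
    infoset_of[h] ... the I in hat S(2) containing h
    Returns the root chance probabilities and the terminate-utilities
    u_2(h~T) = v_tilde(I) * pi_{-2}(S) / pi_{-2}(I).
    """
    pi_S = sum(pi_minus2.values())                      # pi_{-2}^sigma(S)
    chance = {h: p / pi_S for h, p in pi_minus2.items()}
    pi_I = {}
    for h, p in pi_minus2.items():
        pi_I[infoset_of[h]] = pi_I.get(infoset_of[h], 0.0) + p
    terminate = {h: v_tilde[infoset_of[h]] * pi_S / pi_I[infoset_of[h]]
                 for h in pi_minus2}
    return chance, terminate
```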
\begin{figure}
\caption{Resolving game $\widetilde G\left( S, \sigma, \tilde v \right)$ constructed for player $\triangle$ in a~public state $S$. Players' (augmented) information sets are drawn with solid (resp. dashed) lines of the respective color. The chance node $\bigcirc$ chooses one of $\triangledown$'s histories $\tilde h \in \tilde S$, in which $\triangledown$ decides between following (F) into the subgame and terminating (T).}
\label{fig:gadget}
\end{figure}
This game is called the \emph{resolving gadget game} $\widetilde G\left( S, \sigma, \tilde v \right)$, or simply $\widetilde G\left(S\right)$ when there is no risk of confusion, and the variables related to it are denoted by tilde.
If $\tilde \rho \in \widetilde \Sigma$ is a ``resolved'' strategy in $\widetilde G(S)$, we denote the new combined strategy in $G$ as $\sigma^{\textnormal{new}} := \sigma|_{G(S)\leftarrow \tilde \rho}$, i.e. play according to strategy $\tilde \rho$ in the subgame $G(S)$ and according to $\sigma$ everywhere else.
The difference between $\widetilde G\left( S, \sigma, \tilde v \right)$ and the original version of~\cite{CFR-D} is that our modification only duplicates the upper frontier $\hat S$ and uses normalization constant $\sum_{\hat S} \pi_{-2}^\sigma (h)$ (rather than $\sum_{S} \pi_{-2}^\sigma (h)$) and estimates $\tilde v(I)=\sum_{\hat I} \tilde v(h)$ (rather than $\sum_{I} \tilde v(h)$).
This enables $\widetilde G\left( S, \sigma, \tilde v \right)$ to handle domains with thick information sets and public states.
While tedious, it is straightforward to check that $\widetilde G\left( S, \sigma, \tilde v \right)$ has all the properties proved in~\cite{CFR-D,DeepStack,Neil_thesis}.
Without our modification, the resolving games would either sometimes be ill-defined, or wouldn't have the~desired properties.
The following properties are helpful to get an intuitive understanding of gadget games. Their more general versions and proofs (resp. references for proofs) are listed in the appendix.
\begin{lemma}[Gadget game preserves opponent's values]
For each $I \in \mathcal I_2^\textnormal{aug}$ with $I\subset G(S)$, we have
$v_2^{\sigma^\textnormal{new}}(I) = \tilde v_2^{\tilde \rho}(I)$.
\end{lemma}
\noindent Note that the conclusion does \emph{not} hold for counterfactual values of the (resolving) player 1! (This can be easily verified on a simple specific example such as Matching Pennies.)
\begin{lemma}[Optimal resolving]
If $\sigma$ and $\tilde \rho$ are both Nash equilibria and $\tilde v(I) = v_2^\sigma(I)$ for each $I\in \hat S(2)$, then $\sigma^{\textnormal{new}}_1$ is not exploitable.
\end{lemma}
\subsection{Continual Resolving}
Domain-independent continual resolving closely follows the structure of continual resolving for poker~\cite{DeepStack}, but uses a~generalized resolving gadget and handles situations which do not arise in poker, such as multiple moves in one public state. We explain it from the perspective of Player 1. The abstract CR keeps track of strategy $\sigma_1$ it has computed in previous moves. Whenever it gets to a public state $S$, where $\sigma_1$ has not been computed, it resolves the subgame $G(S)$.
As a by-product of this resolving, it estimates opponent's counterfactual values $v_2^{\sigma_1,\textrm{CBR}(\sigma_1)}$ for all public states that might come next, allowing it to keep resolving as the game progresses.
CR repetitively calls a \texttt{Play} function which takes the current information set $I\in \mathcal I_1$ as the input and returns an action $a\in \mathcal A(I)$ for Player 1 to play. It maintains the following variables:
\begin{itemize}
\item $S\in \mathcal S$ \dots the current public state,
\item $\textrm{KPS} \subset \mathcal S$ \dots the public states where strategy is known,
\item $\sigma_1$ \dots a strategy defined for every $I\in \mathcal I_1$ in \textrm{KPS},
\item $\textrm{NPS} \subset \mathcal S$ \dots the public states where \texttt{CR} may resolve next,
\item $D(S')$ for $S'\in \textrm{NPS}$ \dots data allowing resolving at $S'$, such as the estimates of opponent's counterfactual values.
\end{itemize}
\begin{algorithm}[t]
\LinesNumbered
\SetKwData{knownStrategy}{KPS}
\SetKwData{knownValues}{NPS}
\SetKwData{state}{S}
\SetKwData{data}{D}
\SetKwData{action}{a}
\SetKwFunction{BuildGadget}{BuildResolvingGame}
\SetKwFunction{GetNextStrategy}{ExtendKPS}
\SetKwFunction{Resolve}{Resolve}
\SetKwFunction{assert}{assert}
\SetKwFunction{return}{return}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Information set $I\in \mathcal I_1$}
\Output{An action $\action\in \mathcal A(I)$}
\BlankLine
\state $\leftarrow$ the public state which contains $I$\;
\If{\state $\notin$ \knownStrategy}{
$\widetilde G(S) \leftarrow$ \BuildGadget{\state,\data{\state}}\;
\knownStrategy $\leftarrow$ \knownStrategy $\cup$ \state\;
\knownValues $\leftarrow$ all $S'\in \mathcal S$ where CR acts for the first time after leaving $\knownStrategy$\;\label{line:NPS}
$\tilde \rho, \tilde \data \leftarrow$ \Resolve{$\widetilde G(S)$,\knownValues}\;
$\sigma_1 |_{S} \leftarrow \tilde \rho |_{S}$\;
\data $\leftarrow$ calculate data for \knownValues based on \data, $\sigma_1$ and $\tilde \data$\;
}
\textbf{return} $\action\sim \sigma_1(I)$
\caption{Function \texttt{Play} of Continual Resolving}\label{alg:CR}
\end{algorithm}
The pseudo-code for CR is described in Algorithm~\ref{alg:CR}.
If the current public state belongs to \textrm{KPS}, then the strategy $\sigma_1(I)$ is defined, and we sample action $a$ from it.
Otherwise, we should have the data necessary to build some resolving game $\widetilde G(S)$ (line 3).
We then determine the public states $\textrm{NPS}$ where we might need to resolve next (line 5).
We solve $\widetilde G(S)$ via some resolving method which also computes the data necessary to resolve from any $S'\in \textrm{NPS}$ (line 6).
Finally, we save the resolved strategy in $S$ and update the data needed for resolving (lines 7--9).
To initialize the variables before the first resolving, we
set \textrm{KPS} and $\sigma_1$ to $\emptyset$, find appropriate $\textrm{NPS}$, and start solving the game from the root using the same solver as \texttt{Play}, i.e. $\_\, , D \leftarrow \texttt{Resolve}(G, \textrm{NPS})$.
We now consider CR variants that use the gadget game from Section~\ref{sec:gadget} and data of the form $D=(r_1,\tilde v)$, where $r_1(S') = ( \pi_1^{\sigma_1}(h) )_{S'}$ is CR's range and $\tilde v(S') = (\tilde v(J))_J$ estimates the opponent's counterfactual value at each $J\in \hat S'(2)$.
We shall use the following notation: $S_n$ is the $n$-th public state from which CR resolves; $\tilde \rho_n$ is the corresponding strategy in $\widetilde G(S_n)$; $\sigma_1^n$ is CR's strategy after $n$-th resolving, defined on $\textrm{KPS}_n$; the optimal extension of $\sigma_1^n$ is
$$\sigma_1^{*n} := {\arg \! \min}_{\nu_1 \in \Sigma_1} \ \textnormal{expl}_1 \left( \sigma_1^n|_{\textrm{KPS}_n} \cup \nu_1|_{\mathcal S \setminus \textrm{KPS}_n} \right) .$$
Lemmata 24 and 25 of~\cite{DeepStack} (summarized into Lemma~\ref{lem:resolving_lemma} in our Appendix~\ref{sec:proofs}) give the following generalization of~\cite[Theorem\,S1]{DeepStack}:
\begin{theorem}[Continual resolving bound]\label{thm:cr}
Suppose that CR uses $D=(r_1,\tilde v)$ and $\widetilde G(S,\sigma_1,\tilde v)$.
Then the exploitability of its strategy is bounded by
$\textnormal{expl}_1 (\sigma_1) \leq \epsilon_0^{\tilde v} + \epsilon_1^R + \epsilon_1^{\tilde v} + \dots + \epsilon_{N-1}^{\tilde v} + \epsilon_N^R$,
where $N$ is the number of resolving steps and
$\epsilon_n^R := \widetilde{\textnormal{expl}}_1(\tilde \rho_n)$,
$\epsilon_n^{\tilde v} := \sum_{J\in \hat S_{n+1}(2)} \left| \tilde v(J) - v_2^{\sigma_1^{*n},CBR}(J) \right|$
are the exploitability (in $\widetilde G(S_n)$) and value estimation error made by the $n$-th resolver (resp. initialization for $n=0$).
\end{theorem}
The DeepStack algorithm from~\cite{DeepStack} is a~poker-specific instance of the~CR variant described in the above paragraph. Its resolver is a modification of CFR with neural network heuristics and sparse look-ahead trees. We make CR domain-independent and allow for different resolving games (\texttt{BuildResolvingGame}), algorithms (\texttt{Resolve}), and resolving schemes (by changing line~\ref{line:NPS}).
\section{Monte Carlo Continual Resolving}\label{sec:mccr}
Monte Carlo Continual Resolving is a specific instance of CR which uses Outcome Sampling \ac{MCCFR} for game (re)solving. Its data are of the form $D=(r_1,\tilde v)$ described above and it resolves using the gadget game from Section~\ref{sec:gadget}. We first present an abstract version of the algorithms that we formally analyze, and then add improvements that make it practical.
To simplify the theoretical analysis, we assume \ac{MCCFR} computes the exact counterfactual value of resulting average strategy $\bar \sigma^T$ for further resolving. (We later discuss more realistic alternatives.)
The following theorem shows that \ac{MCCR}'s exploitability converges to 0 at the rate of $O(T^{-1/2})$.
\begin{restatable}[MCCR bound]{theorem}{MCCRbound}\label{thm:mccr}
With probability at least $(1-p)^{N+1}$, the exploitability of strategy $\sigma$ computed by \ac{MCCR} satisfies
\begin{align*}
\textnormal{expl}_i(\sigma) \leq
\left( \sqrt{2} / \sqrt{p} + 1 \right)|\mathcal I_i|
\frac{\Delta_{u,i}\sqrt{A_i}}{\delta}
\left( \frac{2}{\sqrt{T_0}} + \frac{2N-1}{\sqrt{T_R}} \right),
\end{align*}
where $T_0$ and $T_R$ are the numbers of \ac{MCCR}'s iterations in pre-play and each resolving, $N$ is the required number of resolvings, $\delta = \min_{z, t} q_t(z)$ where $q_t(z)$ is the probability of sampling $z\in \mathcal Z$ at iteration $t$, $\Delta_{u,i} = \max_{z,z'} | u_i(z) - u_i(z')|$ and $A_i = \max_{I\in \mathcal I_i} |\mathcal A(I)|$.
\end{restatable}
The proof is presented in the appendix. Essentially, it inductively combines the \ac{OS} bound (Lemma~\ref{lem:MCCFR_expl}) with the guarantees available for resolving games in order to compute the overall exploitability bound.\footnote{Note that Theorem~\ref{thm:mccr} isn't a straightforward corollary of Theorem~\ref{thm:cr}, since calculating the numbers $\epsilon_n^{\tilde v}$ does require non-trivial work. In particular, $\bar \sigma^T$ from the $n$-th resolving isn't the same as $\sigma_1^{n*}, CBR(\sigma_1^{n*})$ and the simplifying assumption about $\tilde v$ is \emph{not} equivalent to assuming that $\epsilon_n^{\tilde v} = 0$.}
For specific domains, a much tighter bound can be obtained by going through our proof in more detail and noting that the size of subgames decreases exponentially as the game progresses (whereas the proof assumes that it remains constant). In effect, this would replace the $N$ in the bound above by a small constant.
\subsection{Practical Modifications}\label{sec:hacks}
Above, we describe an abstract version of \ac{MCCR} optimized for clarity and ease of subsequent theoretical analysis.
We now describe the version of \ac{MCCR} that we implemented in practice. The code used for the experiments is available online at \url{https://github.com/aicenter/gtlibrary-java/tree/mccr}.
\subsubsection{Incremental Tree-Building} A massive reduction in the memory requirements can be achieved by building the game tree incrementally, similarly to Monte Carlo Tree Search (\acs{MCTS})~\cite{browne2012survey}. We start with a tree that only contains the root.
When an information set is reached that is not in memory, it is added to it and a playout policy (e.g., uniformly random) is used for the remainder of the sample.
In playout, information sets are not added to memory. Only the regrets in information sets stored in the memory are updated.
\subsubsection{Counterfactual Value Estimation}\label{sec:cfv-estimation}
Since the computation of the~exact counterfactual values of the average strategy needed by $\widetilde G(S,\sigma,\cdot)$ requires the traversal of the whole game tree, we have to work with their estimates instead.
To this end, our \ac{MCCFR} additionally computes the opponent's \emph{sampled counterfactual values}
\[ \tilde v_2^{\sigma^t}(I) :=
\frac{1}{\pi^{\sigma^{t,\epsilon}}(z)} \pi^{\sigma^t}_{-2}(h) \pi^{\sigma^t} \! (z|h) u_2(z)
.\]
It is not possible to compute the exact counterfactual value of the average strategy just from the values of the current strategies. Once the $T$ iterations are complete, the standard way of estimating the counterfactual values of $\bar \sigma^T$ is using \emph{arithmetic averages}
\begin{equation}\label{eq:ord-avg-sampl-vals}
\tilde v(I) := \frac{1}{T}\sum_{t=1}^{T} \tilde v_2^{\sigma^t}(I).
\end{equation}
However, we have observed better results with \emph{weighted averages}
\begin{equation}\label{eq:weigh-avg-sampl-vals}
\tilde v(h) := \sum_t \tilde \pi^{\sigma^t}\!\!(h) \, v_2^{\sigma^t}\!(h) \ / \ \sum_t \tilde \pi^{\sigma^t}\!\!(h).
\end{equation}
The stability and accuracy of these estimates is experimentally evaluated in Section~\ref{sec:experiments} and further analyzed in Appendix~\ref{sec:cf_values}. We also propose an unbiased estimate of the exact values computed from the already executed samples, but its variance makes it impractical.
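The two averaging schemes can be contrasted on toy numbers (invented for illustration; note that the arithmetic average aggregates sampled infoset values, while the weighted average weights per-history values by sampled reach):

```python
# Toy per-iteration sampled values and sampled-reach weights (invented data).
vals = [1.0, 3.0, 2.0, 0.0]      # sampled counterfactual values, one per t
weights = [0.1, 0.4, 0.4, 0.1]   # sampled reach tilde-pi^{sigma^t}(h), one per t

T = len(vals)
arithmetic = sum(vals) / T
weighted = sum(w * v for w, v in zip(weights, vals)) / sum(weights)
```

The weighted scheme discounts iterations in which the history was barely reached (here the first and last), which is the intuition behind its better observed accuracy.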
\subsubsection{Root Distribution of Gadgets}
As in~\cite{DeepStack}, we use the information about opponent's strategy from previous resolving when constructing the gadget game. Rather than being proportional to $\pi_{-2}(h)$, the root probabilities are proportional to $\pi_{-2}(h)(\pi_2(h)+\epsilon)$. This modification is sound as long as $\epsilon>0$.
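A minimal sketch of this reweighting (the function name and dictionary inputs are ours); the $\epsilon > 0$ term keeps every history reachable even where the resolving player's own reach $\pi_2(h)$ is zero:

```python
def gadget_root_probs(pi_minus2, pi_2, eps=0.1):
    """Root chance probabilities proportional to pi_{-2}(h) * (pi_2(h) + eps).
    With eps > 0 no history of the upper frontier is assigned zero mass."""
    w = {h: pi_minus2[h] * (pi_2[h] + eps) for h in pi_minus2}
    total = sum(w.values())
    return {h: p / total for h, p in w.items()}
```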
\subsubsection{Custom Sampling Scheme}\label{sec:custom_sampling}
To improve the efficiency of resolving by \ac{MCCFR}, we use a custom sampling scheme which differs from \ac{OS} in two aspects.
First, we modify the above sampling scheme such that with probability 90\% we sample a history that belongs to the current information set $I$. This allows us to focus on the most relevant part of the game.
Second, whenever $\tilde h \in \tilde S$ is visited by \ac{MCCFR}, we sample both actions (T and F). This increases the transparency of the algorithm, since all iterations now ``do a similar amount of work'' (rather than some terminating immediately).
These modifications are theoretically sound, since the resulting sampling scheme still satisfies the assumptions of the general \ac{MCCFR} bound from~\cite{lanctot_thesis}.
\subsubsection{Keeping the Data between Successive Resolvings}
Both in pre-play and subsequent resolvings, \ac{MCCFR} operates on successively smaller and smaller subsets of the game tree. In particular, we don't need to start each resolving from scratch, but we can re-use the previous computations.
To do this, we initialize each resolving \ac{MCCFR} with the \ac{MCCFR} variables (regrets, average strategy and the corresponding value estimates) from the previous resolving (resp. pre-play). In practice this is accomplished by simply not resetting the data from the previous \ac{MCCFR}.
While not backed by theory, this approach worked better in most practical scenarios, and we believe it can be made correct with the use of warm-starting~\cite{warm_start} of the resolving gadget.
\section{Experimental evaluation} \label{sec:experiments}
After a brief introduction of the competing methods and the evaluation methodology, we focus on the alternative ways of estimating the counterfactual values required for resolving during \ac{MCCFR}.
Next, we evaluate how quickly and reliably these values can be estimated in different domains, since they are crucial for good performance of \ac{MCCR}. Finally, we compare exploitability and head-to-head performance against the competing methods.
\subsection{Competing Methods}
\noindent \textbf{Information-Set Monte Carlo Tree Search.\ }
\acs{IS-MCTS}~\cite{ISMCTS} runs \ac{MCTS} samples as in a perfect information game, but computes statistics for the whole information set and not individual states.
When initiated from a non-empty match history, it starts samples uniformly
from the states in the current information set.
We use two selection functions: Upper Confidence bound applied to Trees (\acs{UCT})~\cite{kocsis2006bandit} and Regret Matching (\acs{RM})~\cite{hart2001reinforcement}. We use the same settings as in~\cite{OOS}: a UCT constant of twice the maximal game outcome, and RM with exploration 0.2.
In the following, we refer to \ac{IS-MCTS} with the corresponding selection function by only \texttt{UCT} or \texttt{RM}.
\noindent \textbf{Online Outcome Sampling.\ }
\ac{OOS}~\cite{OOS} is an online search variant of \ac{MCCFR}.
\ac{MCCFR} samples from the root of the tree and needs to pre-build the whole game tree.
\ac{OOS} has two primary distinctions from \ac{MCCFR}: it builds its search tree incrementally and
it can bias samples with some probability to any specific parts of the game tree.
This is used to target the information sets (\acs{OOS-IST}) or the public states (\acs{OOS-PST})
where the players act during a match.
We do not run OOS-PST on the IIGS domain, due to the non-trivial biasing of sampling towards the current public state.
We further compare to \ac{MCCFR} with incremental tree building and the random player denoted RND.
\subsection{Computing Exploitability}\label{sec:perf-eval}
Since the online game playing algorithms do not compute the strategy for the whole game, evaluating exploitability of the strategies they play is more complicated.
One approach, called brute-force in~\cite{OOS}, suggests ensuring that the online algorithm is executed in each information set in the game and combining the computed strategies. If the thinking time of the algorithm per move is $t$, it requires $O(t \cdot |\mathcal I|)$ time to compute one combined strategy, and multiple runs are required to achieve statistical significance for randomized algorithms. This is prohibitively expensive even for the smaller games used in our evaluation, but computing the strategy for each public state, instead of each information set, is already feasible. We use this approach; however, it means we have to disable the additional targeting of the current information set in the resolving gadget proposed in Section~\ref{sec:custom_sampling}.
There are two options for dealing with the variance in the combined strategies in different runs of the algorithm in order to compute the exploitability of the real strategy realized by the algorithm. The pessimistic option is to compute the exploitability of each combined strategy and average the results. This assumes the opponent knows the random numbers used by the algorithm for sampling in each resolving. A more realistic option is to average the combined strategies from different runs into an \emph{expected strategy} $\dbbar{\sigma}$ and compute its exploitability. We use the latter.
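The latter option amounts to a convex combination of the combined strategies from the $N$ runs. A minimal Python sketch follows; the dictionary layout and the helper name \texttt{expected\_strategy} are our own illustration, not part of the evaluated implementation, and for brevity we average the local action distributions uniformly over runs rather than weighting each run by the player's reach probability.

```python
def expected_strategy(runs):
    """Average N combined strategies (one per run) into one expected strategy.

    Each element of `runs` is a dict mapping an information-set key to a
    dict of action probabilities. Returns the uniform average over runs.
    """
    out = {}
    n = len(runs)
    for strat in runs:
        for infoset, dist in strat.items():
            acc = out.setdefault(infoset, {})
            for action, prob in dist.items():
                # accumulate each run's contribution with weight 1/n
                acc[action] = acc.get(action, 0.0) + prob / n
    return out
```

The exploitability of the resulting strategy is then computed by a standard best-response traversal.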
\iffalse
Imagine a repeated play between two players, where one player plays using a strategy found by a fixed sampling algorithm (let's call him ``algorithm'' player). The other player knows exactly how this algorithm works and plays best response against it (``adversary'' player). However, adversary doesn't know the seeds for algorithm's random number generator. If he would know them, then his uncertainty of the position in the game is only due to chance nodes\footnote{If there is no chance player in the game, with this seed knowledge the game would collapse to perfect information game.} and he can take advantage of the algorithm player. All he needs to do is to find best response to a specific realization of a strategy (a pure strategy) and therefore he makes algorithm's strategy very exploitable. \msnote{revise - expected values with chance nodes!}
Therefore we consider two variants of what seeds adversary might not know:
\begin{enumerate}[label=(\alph*)]
\item \label{itm:action-seed} Adversary doesn't have knowledge of the \emph{action seed}, i.e.~the seed that he uses to select action that he will play.
\item \label{itm:sampling-seed} In addition to action seed, adversary doesn't know the \emph{sampling seed}, i.e.~the seed that the algorithm uses to sample terminal states and calculate his strategy.
\end{enumerate}
Evaluation used in prior work~\cite{CFR}\msnote{what else?} is based on the~scenario~\ref{itm:action-seed}: exploitability of the average strategy $\text{expl}_i(\bar{\sigma})$ is calculated.
In sampling variants of \ac{CFR}~\cite{MCCFR,OOS}, the average strategy depends on sampling seed and we can study what the expected value $\mathbb{E}_{\text{seed}} \left[ \bar{\sigma}^{\text{seed}} \right]$ and variance $\mathbb{V}_{\text{seed}} \left[ \bar{\sigma}^{\text{seed}} \right]$ is with various seeds (or rather estimate these values).
We introduce a concept of \emph{mean seed strategy} $\dbbar{\sigma}$ as the strategy for which we measure exploitability in the experiments, based~on~the scenario~\ref{itm:sampling-seed}.
If we consider changing sample seed between game rounds, then the adversary has to assume any kind of seed is possible and play against $\dbbar{\sigma}$, which is the average strategy of average strategies for different seeds.
Since average strategy is a convex combination of $N$ individual strategies\footnote{N is also the number of sampling seeds we use.}, $\mathbb{E}_{\text{seed}} \left[ \text{\textnormal{expl}}_i(\bar{\sigma}^{\text{seed}}) \right]$ is an upper bound on $\text{\textnormal{expl}}_i( \dbbar{\sigma} )$.
In experiments, we calculate both $\text{expl}_i(\dbbar{\sigma})$ and $$\overline{\text{expl}_i(\bar{\sigma})} := \frac{1}{N} \sum_{\text{seed}} \text{expl}_i(\bar{\sigma}^\text{seed}) .$$
The former is better estimate of actual exploitability for sampling algorithms, but we present also the latter to compare with prior work. Note that we don't calculate $\dbbar{\sigma}$ exactly, as it would require computing it for all possible seeds; by $\dbbar{\sigma}$ we will mean only it's estimate based on $N$ seeds.
We should stress that computing exploitability for online algorithms is significantly harder than for offline algorithms.\footnote{In our experiments, it took around one day to compute in some settings, compared to couple of seconds for offline algorithms.}
We should calculate the expected strategy $\dbbar{\sigma}$ that each algorithm will use in any play situation.
We have to transform the online strategy to offline strategy, which is called ``brute-force strategy'' in~\cite[Section 4.2]{OOS}. It is still practical to estimate $\dbbar{\sigma}$ for public states with $O(t |\mathcal S| N)$ but not for information sets due to complexity $O(t|\mathcal I| N)$.
\fi
\subsection{Domains}
For direct comparison with prior work, we use the same domains as~in~\cite{OOS} with parametrizations noted in parentheses: Imperfect Information Goofspiel \texttt{IIGS(N)}, Liar's Dice \texttt{LD(D1,D2,F)} and Generic Poker \texttt{GP(T,C,R,B)}. We add Phantom Tic-Tac-Toe \texttt{PTTT} to also have a domain with thick public states, and use Biased Rock Paper Scissors \texttt{B-RPS}~for small experiments. The detailed rules are in Appendix~\ref{sec:rules} with the sizes of the domains in Table~\ref{tab:sizes}.
We use small and large variants of the domains based on their parametrization. Note that the large variants are about $10^{4}$ to $10^{15}$ times larger than the small ones.
\subsection{Results}
\setcounter{figure}{1}
\begin{figure*}
\caption{Left -- estimation error of the arithmetic (dashed lines) and weighted (solid lines) averages of action values in \texttt{B-RPS}. Right -- exploitability of the compared algorithms as a function of the time per move.}
\label{fig:expl}
\end{figure*}
\setcounter{figure}{2}
\begin{figure}
\caption{A comparison of exploitability (top) with ``CFV instability'' (bottom) in different domains. For $t=10^7$, the differences are 0 by definition.}
\label{fig:cfv_stability}
\end{figure}
\setcounter{figure}{3}
\subsubsection{Averaging of Sampled \ac{CFV}s}\label{sec:experiment-cfv-averaging}
As mentioned in Section~\ref{sec:cfv-estimation}, computing the exact counterfactual values of the average strategy $\bar\sigma^T$ is often prohibitive, and we replace them by arithmetic or weighted averages of the sampled values.
To compare the two approaches, we run \ac{MCCFR} on the B-RPS domain (which only has a single \ac{NE} $\sigma^*$ to which $\bar\sigma^T$ converges) and measure the distance $\Delta v(t)$ between the estimates and the correct action-values $v_1^{\sigma^*}(\textnormal{root},a)$.
In~Figure~\ref{fig:expl} (left), the weighted averages converge to the correct values with increasing number of samples, while the arithmetic averages eventually start diverging.
The weighted averages are more accurate (see Appendix, Figure~\ref{fig:cfv_comparison_domains} for a comparison on each domain) and we will use them exclusively in the following experiments.
\subsubsection{Stability of \ac{CFV}s}\label{sec:experiment-cfv-convergence}
To find a nearly optimal strategy when resolving, \ac{MCCR} first needs \ac{CFV}s of such a strategy (Lemma~\ref{lem:resolving_lemma}).
However, the \ac{MCCFR} resolver typically won't have enough time to find such $\bar \sigma^T$.
If \ac{MCCR} is to work, the \ac{CFV}s computed by \ac{MCCFR} need to be close to the \ac{CFV}s of an approximate equilibrium, even though they correspond to an exploitable strategy.
We run \ac{MCCFR} in the root and focus on \ac{CFV}s in the public states where player 1 acts for the 2nd time (the gadget isn't needed for the first action, and thus neither are CFVs):
\begin{align*}\label{eq:omega}
\Omega := \!\{ J \in \mathcal I_2^\textnormal{aug} \mid \exists S' \!\in \mathcal S, h \in S'\colon J \subset S' \textnormal{ and pl.~1 played once in } h \}.
\end{align*}
Since there are multiple equilibria \ac{MCCFR} might converge to, we cannot measure the convergence exactly.
Instead, we measure the ``instability'' of \ac{CFV} estimates by saving $\tilde v_2^t(J)$ for $t\leq T$, tracking how $\Delta_t(J) := | \tilde v_2^t(J) - \tilde v_2^T(J) | $ changes over time,
and aggregating it into $\frac{1}{|\Omega|}\sum_J \Delta_t(J)$.
We repeat this experiment with 100 different \ac{MCCFR} seeds and measure the expectation of the aggregates and, for the small domains, the expected exploitability of $\bar \sigma^t$.
If the resulting ``instability'' is close to zero after $10^5$ samples (our typical time budget), we consider CFVs to be sufficiently stable.
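Given the saved checkpoints, the instability aggregate above is a one-liner in NumPy. This is a sketch under the assumption that the estimates are stored as a checkpoints-by-infosets array; the function name is our own.

```python
import numpy as np

def instability_curve(cfv_checkpoints):
    """Aggregate CFV 'instability' over time.

    cfv_checkpoints: array of shape (num_checkpoints, |Omega|), where row t
    holds the estimates v~_2^t(J) for all J in Omega (the last row is t = T).
    Returns, for each checkpoint t, the aggregate
    (1/|Omega|) * sum_J |v~_2^t(J) - v~_2^T(J)|.
    """
    final = cfv_checkpoints[-1]                      # v~_2^T(J)
    return np.abs(cfv_checkpoints - final).mean(axis=1)
```

The expectation over \ac{MCCFR} seeds is then estimated by averaging these curves across runs.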
Figure~\ref{fig:cfv_stability} confirms that in small domains (LD, GP), \ac{CFV}s stabilize long before the exploitability of the strategy gets low.
The error still decreases in larger games (GS, PTTT), but at a slow rate.
\subsubsection{Comparison of Exploitability with Other Algorithms} \label{sec:expl-other-algs}
We compare the exploitability of \ac{MCCR} to \ac{OOS-PST} and \ac{MCCFR}, and include random player for reference.
We do not include \ac{OOS-IST}, whose exploitability is comparable to that of \ac{OOS-PST}~\cite{OOS}.
For an evaluation of \ac{IS-MCTS}'s exploitability (which is very high, with the exception of poker) we refer the reader to~\cite{OOS,ISMCTS}.
Figure~\ref{fig:expl} (right) confirms that for all algorithms, the exploitability decreases with increased time per move. \ac{MCCR} is better than \ac{MCCFR} on LD and worse on IIGS. The keep variant of \ac{MCCR} is initially less exploitable than the reset variant, but improves more slowly. This suggests the keep variant could be improved.
\subsubsection{Influence of the Exploration Rate}
One of \ac{MCCR}'s parameters is the exploration rate $\epsilon$ of its \ac{MCCFR} resolver. When measuring the exploitability of \ac{MCCR} we observed no noteworthy influence of $\epsilon$ (for $\epsilon = 0.2,\,0.4,\,0.6,\,0.8$, across all of the evaluated domains).
\subsubsection{Head-to-head Performance}
For each pair of algorithms, thousands of matches have been played on each domain, alternating positions of player 1 and 2.
In the smaller (larger) domains, players have $0.3$\,s ($5$\,s) of pre-play computation and $0.1$\,s ($1$\,s) per move. Table~\ref{tab:matches} summarizes the~results as percentages of the maximum domain payoff.
Note that the results of the matches are not necessarily transitive, since they are not representative of the algorithms' exploitability. When computationally tractable, the previous experiment (Section~\ref{sec:expl-other-algs}) is therefore a~much better comparison of an algorithm's performance.
Both variants of MCCR significantly outperform the random opponent in all games. With the exception of PTTT, they are also at least as good as the MCCFR ``baseline''.
This is because public states in PTTT represent the number of moves made, which results in a non-branching public tree, so each resolving game spans an entire level of the tree, much as in the original game.
\ac{MCCR} is better than \ac{OOS-PST} in LD and GP, and better than \ac{OOS-IST} in large IIGS.
MCCR is worse than IS-MCTS on all games with the exception of small LD.
However, this does not necessarily mean that MCCR's exploitability is higher than that of IS-MCTS in the larger domains; MCCR merely fails to find the strategy that would exploit IS-MCTS.
\begin{table*}[t!]
\scriptsize
\setlength\tabcolsep{1.5pt}
\begin{subtable}[t]{.5\linewidth}
\centering
\scalebox{1.02}{\input{matches/IIGS-5.tex}}
\end{subtable}
\begin{subtable}[t]{.5\linewidth}
\centering
\scalebox{1.02}{\input{matches/IIGS-13.tex}}
\end{subtable}\par
\begin{subtable}[t]{.5\linewidth}
\centering
\scalebox{1.02}{\input{matches/LD-116.tex}}
\end{subtable}
\begin{subtable}[t]{.5\linewidth}
\centering
\scalebox{1.02}{\input{matches/LD-226.tex}}
\end{subtable}\par
\begin{subtable}[t]{.5\linewidth}
\centering
\scalebox{1.02}{\input{matches/GP-3322.tex}}
\end{subtable}
\begin{subtable}[t]{.5\linewidth}
\centering
\scalebox{1.02}{\input{matches/GP-4644.tex}}
\end{subtable}\par
\begin{subtable}[t]{.5\linewidth}
\centering
\begin{tabular}{l}
\end{tabular}
\end{subtable}
\begin{subtable}[t]{.5\linewidth}
\centering
\scalebox{1.02}{\input{matches/PTTT.tex}}
\end{subtable}
\rule{0pt}{3mm}
\caption{Head-to-head performance. Positive numbers mean that the row algorithm is winning against the column algorithm by the given percentage of the maximum payoff in the domain. Gray numbers indicate results that are not statistically significant.}
\label{tab:matches}
\end{table*}
\normalsize
\section{Conclusion}
We propose a~generalization of Continual Resolving from poker~\cite{DeepStack} to other extensive-form games. We show that the structure of the public tree may be more complex in general, and propose an extended version of the resolving gadget necessary to handle this complexity.
Furthermore, both players may play in the same public state (possibly multiple times), and we extend the definition of Continual Resolving to allow this case as well. We present a completely domain-independent version of the algorithm that can be applied to any EFG, is sufficiently robust to use variable resolving schemes, and can be coupled with different resolving games and algorithms (including classical CFR, depth-limited search utilizing neural networks, or other domain-specific heuristics). We show that the existing theory naturally translates to this generalized case.
We further introduce Monte Carlo CR as a specific instance of this abstract algorithm that uses \ac{MCCFR} as a resolver. It allows deploying continual resolving on any domain, without the need for expensive construction of evaluation functions.
\ac{MCCR} is theoretically sound as demonstrated by Theorem~\ref{thm:mccr}, constitutes an
improvement over \ac{MCCFR} in the online setting in terms of head-to-head performance,
and doesn't have the restrictive memory requirements of \ac{OOS}. The experimental evaluation shows that \ac{MCCR} is very sensitive to the quality of its counterfactual value estimates. With good estimates, its worst-case performance (i.e. exploitability) improves faster than that of \ac{OOS}. In head-to-head matches \ac{MCCR} plays similarly to \ac{OOS}, but it only outperforms \ac{IS-MCTS} in one of the smaller tested domains. Note however that the lack of theoretical guarantees of \ac{IS-MCTS} often translates into severe exploitability in practice~\cite{OOS}, and this cannot be alleviated by increasing \ac{IS-MCTS}'s computational resources~\cite{ISMCTS}.
In domains where \ac{MCCR}'s counterfactual value estimates are less precise, its exploitability still converges to zero, but at a slower rate than \ac{OOS}, and its head-to-head performance is noticeably weaker than that of both \ac{OOS} and \ac{IS-MCTS}.
In future work, the quality of \ac{MCCR}'s estimates might be improved by variance reduction~\cite{var_reduction}, by exploring ways of improving these estimates over the course of the game, or by finding an alternative source from which they can be obtained.
We also plan to test the hypothesis that there are classes of games where MCCR performs much better than the~competing algorithms (in particular, we suspect this might be true for small variants of turn-based computer games such as Heroes of Might \& Magic or Civilization).
\section*{Acknowledgments}
Access to computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum, provided under the programme ``Projects of Large Research, Development, and Innovations Infrastructures'' (CESNET LM2015042), is greatly appreciated.
This work was supported by Czech science foundation grant no. 18-27483Y.
\balance
\section*{Appendix}
\appendix
\printacronyms[include-classes=abbrev,name=Abbreviations]
\section{The Proof of Theorem~\ref{thm:mccr}}\label{sec:proofs}
We now formalize the guarantees available for different ingredients of our continual resolving algorithm, and put them together to prove Theorem~\ref{thm:mccr}.
\subsection{Monte Carlo CFR}
The basic tool in our algorithm is the outcome sampling (\ac{OS}) variant of \ac{MCCFR}.
In the following text, $p \in (0,1]$ and $\delta >0$ will be fixed,
and we shall assume that the \ac{OS}'s sampling scheme is such that for each $z\in \mathcal Z$, the probability of sampling $z$ is either 0 or higher than $\delta$.
In~\cite[Theorem 4]{lanctot_thesis}, it is proven that the \ac{OS}'s average regret decreases with increasing number of iterations. This translates to the following exploitability bound, where
$\Delta_{u,i} := \max_{z_1,z_2} |u_i(z_1) - u_i(z_2)|$ is the maximum difference between utilities and
$A_i := \max_{h \in \mathcal H_i} |\mathcal A(h)|$ is the player $i$ branching factor.
\begin{lemma}[\ac{MCCFR} exploitability bound]\label{lem:MCCFR_expl}
Let $\bar \sigma^T$ be the average strategy produced by applying $T$ iterations of \ac{OS} to some game $G$. Then with probability at least $1-p$, we have
\begin{equation} \label{eq:MCCFR_regret}
\bar R^T_i \leq
\left( \sqrt{\frac{2}{p}} + 1 \right)|\mathcal I_i|
\frac{\Delta_{u,i}\sqrt{A_i}}{\delta} \cdot \frac{1}{\sqrt{T}} .
\end{equation}
\end{lemma}
\begin{proof}
The exploitability $\textnormal{expl}_i(\bar \sigma^T)$ of the average strategy $\bar \sigma^T$ is equal to the average regret $\bar R^T_i$.
By Theorem 4 from~\cite{lanctot_thesis}, after $T$ iterations of \ac{OS} we have
\[ \bar R^T_i \leq
\left( \frac{\sqrt{2|\mathcal I_i||\mathcal B_i|}}{\sqrt{p}} + M_i \right)
\frac{\Delta_{u,i}\sqrt{A_i}}{\delta} \cdot \frac{1}{\sqrt{T}}
\]
with probability at least $1-p$, where
$M_i, |\mathcal B_i| \leq |\mathcal I_i|$ are some domain specific constants.
The regret can then be bounded as
\[ \bar R^T_i \leq
\left( \sqrt{\frac{2}{p}} + 1 \right)|\mathcal I_i|
\frac{\Delta_{u,i}\sqrt{A_i}}{\delta} \cdot \frac{1}{\sqrt{T}},
\]
which concludes the proof.
\end{proof}
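To get a feel for the bound \eqref{eq:MCCFR_regret}, it can be evaluated numerically. The following helper (our own, with illustrative parameter values) makes the characteristic $1/\sqrt{T}$ decay explicit:

```python
import math

def mccfr_expl_bound(p, num_infosets, delta_u, branching, delta, T):
    """Right-hand side of the MCCFR exploitability bound:
    (sqrt(2/p) + 1) * |I_i| * Delta_{u,i} * sqrt(A_i) / (delta * sqrt(T)).
    """
    return ((math.sqrt(2.0 / p) + 1.0) * num_infosets
            * delta_u * math.sqrt(branching) / (delta * math.sqrt(T)))
```

Quadrupling the number of iterations $T$ halves the bound, as expected from the $1/\sqrt{T}$ factor.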
\begin{lemma}[\ac{MCCFR} value approximation bound]\label{lem:MCCFR_values}
Let $S\in \mathcal S$. Under the assumptions of Lemma~\ref{lem:MCCFR_expl}, we further have
\begin{align*}
\sum_{I\in S(2)} \left|
v_2^{\bar \sigma^T}(I) - v_2^{\bar\sigma^T_1 , \textrm{CBR}(\bar\sigma^T_1) }(I)
\right| \leq \\
\left( \sqrt{\frac{2}{p}} + 1 \right)|\mathcal I_i|
\frac{\Delta_{u,i}\sqrt{A_i}}{\delta} \cdot \frac{1}{\sqrt{T}}.
\end{align*}
\end{lemma}
\begin{proof}
Consider the \emph{full counterfactual regret} of player $2$'s average strategy, defined in~\cite[Appendix A.1]{CFR}:
\begin{equation}\label{eq:full_regret}
R^T_{2,\textnormal{full}}(I) :=
\frac{1}{T} \max_{\sigma'_2 \in \Sigma_2} \sum_{t=1}^T \left(
v_2^{\sigma^t|_{D(I) \leftarrow \sigma'_2}}(I) - v_2^{\sigma^t}(I)
\right) ,
\end{equation}
where $D(I)\subset \mathcal I_2$ contains $I$ and all its descendants.
Since $\textrm{CBR}_2(\bar \sigma^T)$ is one of the strategies $\sigma'_2$ which maximize the sum in \eqref{eq:full_regret}, we have
\begin{equation} \label{eq:regret_and_value_approx}
R^T_{2,\textnormal{full}}(I) = v_2^{\bar\sigma^T_1 , \textrm{CBR}(\bar\sigma^T_1) }(I) - v_2^{\bar \sigma^T}(I) .
\end{equation}
Consider now any $S \in \mathcal S$. By~\cite[Lemma\,6]{CFR}, we have
\begin{align*}
& \sum_{I\in S(2)} \left|
v_2^{\bar \sigma^T}(I) - v_2^{\bar\sigma^T_1, \textrm{CBR}(\bar\sigma^T_1) }(I)
\right| \overset{\eqref{eq:regret_and_value_approx} }{\leq} \!\!\!
\sum_{I\in S(2)} \!\! R^T_{2,\textnormal{full}}(I) \\
& \leq \sum_{I\in S(2)} \sum_{J\in D(I)}
R^{T,+}_{2,\textnormal{imm}}(J) \leq
\sum_{J \in \mathcal I_2} R^{T,+}_{2,\textnormal{imm}}(J) .
\end{align*}
The proof is now complete, because the proof of~\cite[Theorem 4]{lanctot_thesis}, which we used to prove Lemma~\ref{lem:MCCFR_expl}, actually shows that the sum $\sum_{J \in \mathcal I_2} R^{T,+}_{2,\textnormal{imm}}(J)$ is bounded by the right-hand side of \eqref{eq:MCCFR_regret}.
\end{proof}
\subsection{Gadget Game Properties}
The following result is a part of why resolving gadget games are useful, and the reason behind multiplying all utilities in $G\left< S, \sigma, v_2^\sigma \right>$ by $\pi_{-2}^\sigma(S)$.
\begin{lemma}[Gadget game preserves opponent's values]\label{lem:gadget_values}
Let $S\in \mathcal S$, $\sigma \in \Sigma$ and let $\tilde \rho$ be any strategy in the resolving game $G\left< S, \sigma, \tilde v\right>$ (where $\tilde v$ is arbitrary).
Denote $\sigma^\textnormal{new} := \sigma |_{G(S) \leftarrow \tilde \rho}$.
Then for each $I \in \mathcal I_2^{\textnormal{aug}}(G(S))$, we have
$v_2^{\sigma^\textnormal{new}}(I) = \tilde v_2^{\tilde \rho}(I)$.
\end{lemma}
\noindent Note that the conclusion does \emph{not} hold for the counterfactual values of the (resolving) player 1! (This can easily be verified by hand in any simple game such as matching pennies.)
\begin{proof}
In the setting of the lemma, it suffices to show that $\tilde{v}_2^{\tilde\rho}(h)$ is equal to $v_2^{\sigma^{\textnormal{new}}}(h)$ for every $h\in G(S)$.
Let $h\in G(S)$ and denote by $g$ the prefix of $h$ that belongs to $\hat S$.
Recall that $\tilde g$ is the parent of $g$ in the resolving game, and that the reach probability $\tilde \pi_{-2}^{\tilde \mu}(\tilde g)$ of $\tilde g$ in the resolving game (for any strategy $\tilde \mu$) is equal to $\pi^\sigma_{-2}(g) / \pi_{-2}^\sigma (S)$.
Since $\tilde u_2 (z) = u_2(z) \pi^\sigma_{-2}(S)$ for any $h\sqsubset z \in \mathcal Z$, the definition of $\sigma^{\textnormal{new}}$ gives $\tilde u_2^{\tilde\rho}(h) = u_2^{\sigma^{\textnormal{new}}}(h) \pi_{-2}^\sigma(S)$.
The following series of identities then completes the proof:
\begin{align*}
\tilde{v}_2^{\tilde\rho}(h) &
= \sum_{h\sqsubset z \in \mathcal Z} \tilde \pi_{-2}^{\tilde\rho}(h) {\tilde\pi}^{\tilde\rho}(z|h) \tilde{u}_2(z) \\
& = \tilde \pi_{-2}^{\tilde\rho}(h) \tilde u_2^{\tilde\rho}(h) \\
& = \tilde \pi^{\tilde\rho}_{-2}(\tilde g) \cdot \tilde \pi_{-2}(g | \tilde g) \cdot \tilde \pi_{-2}^{\tilde\rho}(h|g) \cdot \tilde u_2^{\tilde\rho}(h) \\
& = \tilde \pi_c^{\tilde\rho}(\tilde g) \cdot 1 \cdot \tilde \pi_{-2}^{\tilde\rho}(h|g) \cdot \tilde u_2^{\tilde\rho}(h) \\
& = \frac{ \pi^\sigma_{-2}(g) }{ \pi_{-2}^\sigma (S) } \cdot 1 \cdot \pi_{-2}(h|g) \cdot u_2^{\sigma^{\textnormal{new}}}(h) \pi_{-2}^\sigma(S) \\
& = \pi^\sigma_{-2}(h) u_2^{\sigma^{\textnormal{new}}}(h)
= \pi^{\sigma^{\textnormal{new}}}_{-2}(h) u_2^{\sigma^{\textnormal{new}}}(h)
= v_2^{\sigma^{\textnormal{new}}}(h) .
\end{align*}
\end{proof}
\begin{corollary}[Gadget game preserves value approximation]\label{lem:gadget_BR}
Under the assumptions of Lemma~\ref{lem:gadget_values}, we have the following for each $I \in \mathcal I_2^{\textnormal{aug}}(G(S))$:
\begin{align*}
& \left|
v_2^{\sigma^\textnormal{new}}(I) -
v_2^{\sigma^\textnormal{new}_1, \textrm{CBR}(\sigma^\textnormal{new}_1)}(I)
\right| = \\
& \left|
\tilde{v}_2^{\tilde \rho}(I) -
\tilde{v}_2^{\tilde\rho_1, \widetilde{\textrm{CBR}}(\tilde\rho_1)}(I)
\right| .
\end{align*}
\end{corollary}
\begin{proof}
By Lemma~\ref{lem:gadget_values}, we have $v_2^{\sigma^\textnormal{new}}(I) = \tilde{v}_2^{\tilde \rho}(I)$.
We claim that $\widetilde{\textrm{CBR}}(\tilde\rho_1)$ coincides with $\textrm{CBR}(\sigma^\textnormal{new}_1)$ on $G(S)$, which (again by Lemma~\ref{lem:gadget_values}) implies the equality of the corresponding counterfactual values, and hence the conclusion of the corollary.
To see that the claim is true, recall that the counterfactual best-response is constructed by the standard backwards-induction in the corresponding game tree (player 2 always picks the action with highest $v^{(\cdot)}_2(I,a)$).
We need this backwards induction to return the same strategy independently of whether $G(S)$ is viewed as a subgame of $G$ or $G\left< S, \sigma, v_2^\sigma \right>$.
But this is trivial, since both the structure of $G(S)$ and counterfactual values of player 2 are the same in both games.
This concludes the proof.
\end{proof}
\subsection{Resolving}
The second useful property of resolving games is that if we start with an approximately optimal strategy, and find an approximately optimal strategy in the resolving game, then the new strategy will again be approximately optimal.
This is formalized by the following immediate corollary of Lemma 24 and 25 from~\cite{DeepStack}:
\begin{lemma}[Resolved strategy performs well]\label{lem:resolving_lemma}
Let $S\in \mathcal S$, $\sigma \in \Sigma$ and denote by $\tilde \rho$ a strategy in the resolving game $G\left< S, \sigma, v_2^\sigma\right>$.
Then the exploitability $\textnormal{expl}_1(\sigma^\textnormal{new})$ of the strategy
$\sigma^\textnormal{new} := \sigma |_{G(S) \leftarrow \tilde \rho}$ is no greater than
\[ \textnormal{expl}_1(\sigma) + \sum_{I \in S(2)}
\left| v_2^{\sigma_1,\textrm{CBR}(\sigma_1)}(I) - v_2^\sigma(I) \right| + \widetilde{\textnormal{expl}}_1( \tilde \rho)
.\]
\end{lemma}
\subsection{Monte Carlo Continual Resolving}
Suppose that \ac{MCCR} is run with parameters $T_0$ and $T_R$. For a game that requires at most $N$ resolving steps, we then have the following theoretical guarantee:
\MCCRbound*
\begin{proof}
Without loss of generality, we assume that \ac{MCCR} acts as player 1.
We prove the theorem by induction. The initial step of the proof is different from the induction steps, and goes as follows.
Let $S_0$ be the root of the game and denote by $\sigma^0$ the strategy obtained by applying $T_0$ iterations of \ac{MCCFR} to $G$.
Denote by $\epsilon^E_0$ the upper bound on $\textnormal{expl}_1(\sigma^0)$ obtained from Lemma~\ref{lem:MCCFR_expl}.
If $S_1$ is the first encountered public state where player 1 acts, then by Lemma~\ref{lem:MCCFR_values}, the sum $\sum_{I\in S_1(2)} \left|v_2^{\sigma^0}(I) - v_2^{\sigma^0_1 , \textrm{CBR}(\sigma_1^0) }(I)\right|$ is bounded by some $\epsilon^A_0$.
This concludes the initial step.
For the induction step, suppose that $n\geq 1$ and player 1 has already acted $(n-1)$ times according to some strategy $\sigma^{n-1}$ with $\textnormal{expl}_1(\sigma^{n-1})\leq \epsilon^E_{n-1}$, and is now in a public state $S_n$ where he needs to act again. Moreover, suppose that there is some $\epsilon^A_{n-1}\geq 0$ such that
\[ \sum_{S_n(2)} \left|
v_2^{\sigma^{n-1}}(I) - v_2^{\sigma^{n-1}_1 , \textrm{CBR}(\sigma^{n-1}_1) }(I)
\right| \leq \epsilon^A_{n-1} .\]
We then obtain some strategy $\tilde \rho^n$ by resolving the game $\widetilde G_{n} := G\left<S_{n}, \sigma^{n-1},v_2^{\sigma^{n-1}}\right>$ by $T_R$ iterations of \ac{MCCFR}.
By Lemma~\ref{lem:MCCFR_expl}, the exploitability $\widetilde{\textnormal{expl}}_1(\tilde \rho^n)$ in $\widetilde G_{n}$ is bounded by some $\epsilon^R_n$.
We choose our next action according to the strategy
\[ \sigma^n := \sigma^{n-1} |_{G(S_n) \leftarrow \tilde \rho^n} .\]
By Lemma~\ref{lem:resolving_lemma}, the exploitability $\textnormal{expl}_1(\sigma^n)$ is bounded by $\epsilon^E_{n-1} + \epsilon^A_{n-1} + \epsilon^R_n =: \epsilon^E_n$.
The game then progresses until it either ends without player 1 acting again, or reaches a new public state $S_{n+1}$ where player 1 acts.
If such $S_{n+1}$ is reached, then by Lemma~\ref{lem:MCCFR_values}, the value approximation error
$\sum_{S_{n+1}(2)} \left|
\tilde v_2^{\tilde \rho^n}(I) - \tilde v_2^{\tilde \rho^n_1 , \textrm{CBR}(\tilde \rho^n_1) }(I)
\right|$
\emph{in the resolving gadget game} is bounded by some $\epsilon^A_{n}$. By Corollary~\ref{lem:gadget_BR}, this sum is equal to the value approximation error
$$\sum_{S_{n+1}(2)} \left|
v_2^{\sigma^{n}}(I) - v_2^{\sigma^{n}_1 , \textrm{CBR}(\sigma^{n}_1) }(I)
\right|$$
in the original game.
This concludes the inductive step.
Eventually, the game reaches a terminal state after visiting some sequence $S_1,\dots,S_{N}$ of public states where player 1 acted by using the strategy $\sigma := \sigma^N$. We now calculate the exploitability of $\sigma$.
It follows from the induction that
\[ \textnormal{expl}_1(\sigma) \leq \epsilon^E_N = \epsilon^E_0 + \epsilon^A_0 + \epsilon^R_1 + \epsilon^A_1 + \epsilon^R_2 + \dots + \epsilon^A_{N-1} + \epsilon^R_N .\]
To emphasize which variables come from using $T_0$ iterations of \ac{MCCFR} in the original game and which come from applying $T_R$ iterations of \ac{MCCFR} to the resolving game, we set $\tilde \epsilon^R_n := \epsilon^R_n$ and $\tilde \epsilon^A_n := \epsilon^A_n$ for $n\geq 1$. We can then write
\begin{equation}\label{eq:main_thm_precise}
\textnormal{expl}_1(\sigma) \leq \epsilon^E_N = \epsilon^E_0 + \epsilon^A_0 + \sum_{n=1}^{N-1} (\tilde \epsilon^R_n + \tilde \epsilon^A_n ) + \tilde \epsilon^R_N.
\end{equation}
Since the bound from Lemma~\ref{lem:MCCFR_expl} is strictly higher than the one from Lemma~\ref{lem:MCCFR_values}, we have $\epsilon^A_0 \leq \epsilon^E_0$ and $\tilde \epsilon^A_n \leq \tilde \epsilon^R_n$.
Moreover, we have $G(S_1) \supset G(S_2) \supset \dots \supset G(S_N)$, which means that $\tilde \epsilon^R_1 \geq \tilde \epsilon^R_2 \geq \dots \geq \tilde \epsilon^R_N$.
It follows that $\textnormal{expl}_1(\sigma) \leq 2 \epsilon^E_0 + (2N-1)\tilde \epsilon^R_1$. Finally, we clearly have $N\leq D_1$, where $D_1$ is the ``player 1 depth of the public tree of $G$''. This implies that
\begin{equation}\label{eq:main_thm_nice}
\textnormal{expl}_1(\sigma) \leq 2 \epsilon^E_0 + (2D_1 - 1) \tilde \epsilon^R_1 .
\end{equation}
Plugging in the specific numbers from Lemma~\ref{lem:MCCFR_expl} for $\epsilon^E_0$ and $\tilde \epsilon^R_1$ gives the exact bound, and noticing that we have used the lemma $(D_1+1)$ times implies that the result holds with probability $(1-p)^{D_1+1}$.
\end{proof}
Note that a tighter bound could be obtained if we were more careful and plugged in the specific bounds from Lemma~\ref{lem:MCCFR_expl} and \ref{lem:MCCFR_values} into \eqref{eq:main_thm_precise}, as opposed to using \eqref{eq:main_thm_nice}. Depending on the specific domain, this would yield something smaller than the current bound, but higher than $\epsilon^E_0 + \tilde \epsilon^R_1$.
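The aggregated bound \eqref{eq:main_thm_nice} is trivial to evaluate once $\epsilon^E_0$ and $\tilde\epsilon^R_1$ are known; a one-line sketch (our own helper name, illustrative inputs):

```python
def mccr_total_bound(eps_e0, eps_r1, d1):
    """expl_1(sigma) <= 2*eps_e0 + (2*D_1 - 1)*eps_r1, where D_1 is the
    player 1 depth of the public tree."""
    return 2.0 * eps_e0 + (2 * d1 - 1) * eps_r1
```

In practice $\epsilon^E_0$ and $\tilde\epsilon^R_1$ would come from evaluating the bound of Lemma~\ref{lem:MCCFR_expl} with the root-game and resolving-game parameters, respectively.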
\section{Computing Counterfactual Values Online}\label{sec:cf_values}
For the purposes of \ac{MCCR}, we require that our solver (\ac{MCCFR}) also returns the counterfactual values of the average strategy.
The straightforward way of ensuring this is to simply calculate the counterfactual values once the algorithm has finished running. However, this might be computationally intractable in larger games, since it potentially requires traversing the whole game tree.
This can be avoided by replacing the exact computation with a sampling-based evaluation of $\bar \sigma^T$. With a sufficient number of samples, the estimate will be reasonably close to the actual values.
In practice, this is most often solved as follows.
During the normal run of \ac{MCCFR}, we additionally compute the opponent's \emph{sampled counterfactual values}
\[ \tilde v_2^{\sigma^t}(I) :=
\frac{1}{\pi^{\sigma'}(z)} \pi^{\sigma^t}_{-2}(h) \pi^{\sigma^t} \! (z|h) u_2(z)
,\]
where $z$ is the sampled terminal history, $h$ is its prefix in $I$, and $\sigma'$ is the sampling strategy.
Once the $T$ iterations are complete, the counterfactual values of $\bar \sigma^T$ are estimated by $\tilde v(I) := \frac{1}{T}\sum_{t=1}^T \tilde v_2^{\sigma^t}(I)$.
While this arithmetical average is the standard way of estimating $v_2^{\bar \sigma^T}(I)$, it is also natural to consider alternatives where the uniform weights are replaced by either $\pi^{\sigma^t}(I)$ or $\pi_2^{\sigma^t}(I)$.
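The uniform and the reach-weighted schemes differ only in the weight attached to each sampled value, so both can be maintained with the same running-average bookkeeping. The class below is our own illustration of this bookkeeping, not code from the evaluated implementation:

```python
class CFVEstimate:
    """Running estimate of a counterfactual value from sampled CFVs.

    With weight=1.0 for every update this gives the arithmetic average;
    passing e.g. pi_2^{sigma^t}(I) as the weight gives a weighted scheme.
    """
    def __init__(self):
        self.num = 0.0   # weighted sum of sampled CFVs
        self.den = 0.0   # sum of weights

    def update(self, sampled_cfv, weight=1.0):
        self.num += weight * sampled_cfv
        self.den += weight

    def value(self):
        return self.num / self.den if self.den > 0 else 0.0
```

In \ac{MCCFR} proper, the update would be applied at each iteration that touches $I$, with the weight of the chosen scheme.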
In principle, it is possible for all of these weighting schemes to fail (see the counterexample in Section~\ref{sec:cfv_counter_ex}).
We experimentally show that all of these possibilities produce good estimates of $v_2^{\bar \sigma^T}$ in many practical scenarios, see~Figure~\ref{fig:cfv_comparison_domains}.
Even when this isn't the case, one straightforward way to fix this issue is to designate a part of the computation budget to a sampling-based evaluation of $\bar \sigma^T$.
Alternatively, in Lemma~\ref{lem:u_of_avg} we present a method inspired by lazy-weighted averaging from~\cite{lanctot_thesis} that allows for computing unbiased estimates of $v_2^{\bar \sigma^T}$ on the fly during \ac{MCCFR}.
In the main text, we have assumed that the exact counterfactual values are calculated, and thus that $\tilde v(I) = v_2^{\bar \sigma^T}(I)$.
Note that this assumption is made to simplify the theoretical analysis -- in practice, the difference between the two terms can be incorporated into Theorem~\ref{thm:mccr} (by adding the corresponding term into Lemma~\ref{lem:resolving_lemma}).
\begin{figure}
\caption{An extension of the experiment from Section~\ref{sec:experiment-cfv-averaging}.}
\label{fig:cfv_comparison_domains}
\end{figure}
\subsection{The Counterexample}\label{sec:cfv_counter_ex}
In this section we show that no weighting scheme can be used as a universal method for computing the counterfactual values of the average strategy on the fly.
We then derive an alternative formula for $v_i^{\bar \sigma^T}$ and show that it can be used to calculate unbiased estimates of $v_i^{\bar \sigma^T}$ in \ac{MCCFR}.
Note that this problem is not specific to \ac{MCCFR}, but also occurs in CFR (although it is not so pressing there, since CFR's iterations are already so costly that the computation of exact counterfactual values of $\bar \sigma^T$ is not a major expense).
Since these issues already arise in \ac{CFR}, we will work in this simpler setting.
Suppose we have a history $h\in \mathcal H$, strategies $\sigma^1, \dots, \sigma^T$ and the average strategy $\bar \sigma^T$ defined as
\begin{equation}\label{eq:avg_strat}
\bar \sigma^T (I) := \sum_t \pi_i^{\sigma^t}\!\!(I) \, \sigma^t(I) \, / \, \sum_t \pi_i^{\sigma^t}\!\!(I)
\end{equation}
for $I\in \mathcal I_i$.
First, note that we can easily calculate $\pi^{\bar \sigma^T}_{-i}(h)$. Since $v_i^{\bar \sigma^T}(h) = \pi^{\bar \sigma^T}_{-i}(h) u_i^{\bar \sigma^T}(h)$, an equivalent problem is that of calculating the expected utility of the average strategy at $h$ on the fly, i.e. by using $u_i^{\sigma^t}(h)$ and possibly some extra variables, but without having to traverse the whole tree below $h$.
Looking at the definition of the average strategy, the most natural candidates for an estimate of $u_i^{\bar \sigma^T}(h)$ are the following weighted averages of $u_i^{\sigma^t}(h)$:
\begin{align*}
\tilde u^1(h) & := \sum_t u_i^{\sigma^t}\!\!(h) \, / \, T \\
\tilde u^2(h) & := \sum_t \pi_i^{\sigma^t}\!\!(h) \, u_i^{\sigma^t}\!\!(h) \, / \, \sum_t \pi_i^{\sigma^t}\!\!(h) \\
\tilde u^3(h) & := \sum_t \pi^{\sigma^t}\!\!(h) \, u_i^{\sigma^t}\!\!(h) \, / \, \sum_t \pi^{\sigma^t}\!\!(h) .
\end{align*}
\def\averageStrategyFail{
\tikzset{
level 1/.style = {level distance = 1.5\nodesize, sibling distance=1.5\nodesize},
level 2/.style = {level distance = 1.5\nodesize, sibling distance=1.5\nodesize},
level 3/.style = {level distance = 1.5\nodesize, sibling distance=1.5\nodesize},
level 4/.style = {level distance = 1.5\nodesize, sibling distance=1.5\nodesize},
level 5/.style = {level distance = 1.5\nodesize, sibling distance=1.5\nodesize},
}
\node(1)[pl1]{$h_0$}
child[grow=up]{node[terminal]{0}}
child[grow=right]{node(2)[pl2]{$h_1$}
child[grow=up]{node[terminal]{0}}
child[grow=right]{node(3)[pl1]{$h_2$}
child[grow=up]{node[terminal]{0}}
child[grow=right]{node(4)[pl2]{$h_3$}
child[grow=up]{node[terminal]{0}}
child[grow=right]{node[terminal,label=above:{z}]{1}}
}
}
}
;
}
\begin{figure}
\caption{A domain where weighting schemes for $v_2^{\bar \sigma^T}$ fail.}
\label{fig:avg_str_fail}
\end{figure}
\begin{example}[No weighting scheme works]
Each of the estimates $\tilde u^j(h)$, $j=1,2,3$, can fail to be equal to $u_i^{\bar \sigma^T}$. Worse yet, no similar weighting scheme works reliably for every sequence $(\sigma^t)_t$.
\end{example}
\noindent Consider the game from Figure~\ref{fig:avg_str_fail} with $T=2$, where under $\sigma^1$, each player always goes right (R) and under $\sigma^2$ the probabilities of going right are $\frac{1}{2}, \frac{1}{3}, \frac{1}{5}$ and $\frac{1}{7}$ at $h_0$, $h_1$, $h_2$ and $h_3$ respectively.
A straightforward application of \eqref{eq:avg_strat} shows that the probabilities of R under $\bar \sigma^2$ are $\frac 3 4$, $\frac 2 3$, $\frac{11}{15}$ and $\frac{11}{14}$ and hence $u_1^{\bar \sigma^2}(h_2) = \frac{11}{15}\cdot \frac{11}{14} = \frac{121}{210}$. On the other hand, we have $\tilde u^1(h_2)=\frac{108}{210}$, $\tilde u^2(h_2)=\frac{142}{210}$ and $\tilde u^3(h_2)\doteq\frac{181}{210}$.
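These values can be verified mechanically. The sketch below (exact rational arithmetic; the encoding of the chain game is our own) recomputes the average strategy via \eqref{eq:avg_strat} together with the estimates $\tilde u^1(h_2)$ and $\tilde u^2(h_2)$:

```python
from fractions import Fraction as F

# Chain game from Figure avg_str_fail: players alternate at h0..h3
# (player 1 at h0, h2; player 2 at h1, h3); going left ends with utility 0,
# and four R moves reach the terminal z with u_1(z) = 1.
sigma = [
    [F(1), F(1), F(1), F(1)],              # sigma^1: always right
    [F(1, 2), F(1, 3), F(1, 5), F(1, 7)],  # sigma^2
]

def prod(xs):
    out = F(1)
    for x in xs:
        out *= x
    return out

def own_reach(s, k):
    # pi_i^{sigma^t}(h_k): product of the acting player's own earlier R-probs
    own = [0, 2] if k % 2 == 0 else [1, 3]
    return prod(s[j] for j in own if j < k)

# Average strategy via eq. (avg_strat): reach-weighted average of sigma^t(h_k)
avg = [sum(own_reach(s, k) * s[k] for s in sigma)
       / sum(own_reach(s, k) for s in sigma) for k in range(4)]
assert avg == [F(3, 4), F(2, 3), F(11, 15), F(11, 14)]
assert avg[2] * avg[3] == F(121, 210)      # u_1^{bar sigma^2}(h_2)

# The weighted estimates at h_2 disagree with the true value:
u = [s[2] * s[3] for s in sigma]           # u_1^{sigma^t}(h_2)
u1 = sum(u) / 2
u2 = sum(own_reach(s, 2) * u[t] for t, s in enumerate(sigma)) \
     / sum(own_reach(s, 2) for s in sigma)
assert (u1, u2) == (F(108, 210), F(142, 210))
```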
To prove the ``yet worse'' part, consider also the sequence of strategies $\nu^1$, $\nu^2$, where $\nu^1 = \sigma^1$ and under $\nu^2$, the probabilities of going right are $\frac{1}{2}, \frac{1}{3}, 1$ and $\frac{1}{5}\cdot\frac{1}{7}$ at $h_0$, $h_1$, $h_2$ and $h_3$ respectively.
The probabilities of R under $\bar \nu^2$ are $\frac 3 4$, $\frac 2 3$, $1$ and $\frac{53}{70}$ and hence $u_1^{\bar \nu^2}(h_2) = \frac{53}{70} = \frac{159}{210} \neq \frac{121}{210} = u_1^{\bar \sigma^2}(h_2)$.
However, the strategies $\sigma^t$ and $\nu^t$ coincide between the root and $h_2$ for each $t$, and so do the utilities $u^{\sigma^t}(h_0)$ and $u^{\nu^t}(h_0)$.
Necessarily, any weighting scheme of the form
\[ \tilde u(h) := \sum_t w_t(h) u_i^{\sigma^t}\!\!(h) \, / \, \sum_t w_t(h) \]
where $w_t(h) \in \mathbb{R}$ only depends on the strategy $\sigma^t$ between the root and $h$, yields the same estimate for $(\sigma^t)_{t=1,2}$ and for $(\nu^t)_{t=1,2}$.
As a consequence, any such weighting scheme will be wrong in at least one of these cases.
\subsection{An Alternative Formula for the Utility of \texorpdfstring{$\bar \sigma^T$}{the average strategy}}
We proceed in three steps. First, we derive an alternative formula for $u^{\bar \sigma^T}(h)$ that uses the cumulative reach probabilities
\[ \textnormal{crp}_i^t(z) := \pi_i^{\sigma^1}(z) + \dots + \pi_i^{\sigma^t}(z) \]
for $z\in \mathcal Z$.
Then we remark that during \ac{MCCFR} it suffices to keep track of $\textnormal{crp}_i^t(ha)$ where $a\in \mathcal A(h)$ and $h$ is in the tree built by \ac{MCCFR}, and show how these values can be calculated similarly to the lazy-weighted averaging of~\cite{lanctot_thesis}.
Lastly, we note that a sampled variant can be used in order to get an unbiased estimate of $u_i^{\bar \sigma^T}$.
Recall the standard fact that the average strategy satisfies\begin{equation}\label{eq:avg_util}
u^{\bar \sigma^T_1, \nu_2}_i (h)
= \frac{\sum_t \pi^{\sigma^t_1}_1(h) u^{\sigma^t_1, \nu_2}_i (h)}{\sum_t \pi^{\sigma^t_1}_1(h)}
\end{equation}
for every $h\in \mathcal H$, and that the analogous property holds for $\bar \sigma^T_2$.
Indeed, this follows from the formula
\begin{equation}
\pi_i^{\bar \sigma^T} (h)
= \frac{1}{T} \sum_t \pi^{\sigma^t}_i(h) ,
\end{equation}
which can be proven by induction over the length of $h$ using \eqref{eq:avg_strat}.
\begin{lemma}\label{lem:u_of_avg}
For any $h\in \mathcal H$, $i$ and $\sigma^1,\dots, \sigma^T$, we have $u_i^{\bar \sigma^T}(h) = $
\[
\frac{\sum_t \sum_{z\sqsupset h}
\left( \pi_1^{\sigma^t}\!\!(z)\textnormal{crp}_2^t(z) + \textnormal{crp}_1^t(z) \pi_2^{\sigma^t}\!\!(z) - \pi^{\sigma^t}_{1,2}(z) \right)
\pi_c(z|h) u_i(z)
}
{ \textnormal{crp}_1^T(h) \textnormal{crp}_2^T(h) }
. \]
\end{lemma}
\begin{proof}
For $h\in \mathcal H$, we can rewrite $u_i^{\bar \sigma^T}(h)$ as\begin{align*}
u_i^{\bar \sigma^T}(h)
& = u_i^{\bar \sigma^T_1, \bar \sigma^T_2}(h)
\overset{\eqref{eq:avg_util}}{=}
\frac{\sum_t \pi^{\sigma^t_1}_1(h) u^{\sigma^t_1, \bar \sigma^T_2}_i (h)}
{\textnormal{crp}_1^T(h)} \\
& \overset{\eqref{eq:avg_util}}{=} \frac{
\sum_t \pi^{\sigma^t_1}_1(h)
\frac{\sum_s \pi^{\sigma^s_2}_2(h) u^{\sigma^t_1, \sigma^s_2}_i (h)}
{\textnormal{crp}_2^T(h)}
}
{\textnormal{crp}_1^T(h)} \\
& = \frac{
\sum_t \sum_s \pi^{\sigma^t_1}_1(h) \ \pi^{\sigma^s_2}_2(h)
\ u^{\sigma^t_1, \sigma^s_2}_i (h)
}
{\textnormal{crp}_1^T(h)\textnormal{crp}_2^T(h)} = \frac{N}{D}.
\end{align*}
Using the definition of expected utility, we can rewrite the numerator $N$ as
\begin{align*}
N & = \sum_{s,t} \pi^{\sigma^t_1}_1(h) \ \pi^{\sigma^s_2}_2(h)
\sum_{z \sqsupset h} \pi^{\sigma^t}_1(z|h) \ \pi^{\sigma^s}_2(z|h)
\pi_c(z|h) u_i(z) \\
& = \sum_{s,t} \sum_{z \sqsupset h}
\pi^{\sigma^t_1}_1(z) \ \pi^{\sigma^s_2}_2(z) \
\pi_c(z|h) u_i(z) \\
& = \sum_{z \sqsupset h} \left( \pi_c(z|h) u_i(z)
\sum_{s,t} \pi^{\sigma^t_1}_1(z) \ \pi^{\sigma^s_2}_2(z) \right) .
\end{align*}
The double sum over $s$ and $t$ can be rewritten using the formula
\[ \sum_t \sum_s x_t y_s = \sum_t \left[ x_t(y_1+\dots+y_t) + (x_1+\dots+x_t)y_t - x_t y_t \right] , \]
which yields $\sum_{s,t} \pi^{\sigma^t}_1(z) \ \pi^{\sigma^s}_2(z)=$
\begin{align*}
& = & \sum_t \Big[ & \pi^{\sigma^t}_1\!(z) \left(\pi^{\sigma^1}_2\!(z)+\dots+\pi^{\sigma^t}_2\!(z) \right) \ + \\
&& & + \ \left( \pi^{\sigma^1}_1\!(z)+\dots+\pi^{\sigma^t}_1\!(z) \right)\pi^{\sigma^t}_2\!(z) - \pi^{\sigma^t}_1\!(z)\pi^{\sigma^t}_2\!(z) \Big] \\
& = & \sum_t \Big[ & \pi^{\sigma^t}_1\!(z) \ \textnormal{crp}^t_2(z) +
\textnormal{crp}^t_1 (z) \ \pi^{\sigma^t}_2\!(z) -
\pi^{\sigma^t}_1\!(z)\pi^{\sigma^t}_2\!(z) \Big] .
\end{align*}
Substituting this back into the formula for $N$, and the result into $\frac{N}{D}$, concludes the proof.
\end{proof}
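As a sanity check of Lemma~\ref{lem:u_of_avg}, the formula can be evaluated with exact arithmetic on the chain game from the counterexample of Section~\ref{sec:cfv_counter_ex} (the encoding below is our own):

```python
from fractions import Fraction as F

# Chain game from the counterexample: only one terminal z has nonzero
# utility (u_1(z) = 1, no chance), reached by playing R at h0..h3;
# sig[t][k] is the probability of R at h_k under sigma^{t+1}.
sig = [[F(1)] * 4, [F(1, 2), F(1, 3), F(1, 5), F(1, 7)]]

pi1 = [s[0] * s[2] for s in sig]  # pi_1^{sigma^t}(z): player 1 acts at h0, h2
pi2 = [s[1] * s[3] for s in sig]  # pi_2^{sigma^t}(z): player 2 acts at h1, h3
crp1 = [sum(pi1[:t + 1]) for t in range(2)]
crp2 = [sum(pi2[:t + 1]) for t in range(2)]

# Numerator of the lemma's formula for h = h_2 (here pi_c(z|h) = 1):
num = sum(pi1[t] * crp2[t] + crp1[t] * pi2[t] - pi1[t] * pi2[t]
          for t in range(2))
# crp_1^T(h_2) = sum_t sigma^t(h0) and crp_2^T(h_2) = sum_t sigma^t(h1):
den = sum(s[0] for s in sig) * sum(s[1] for s in sig)

# Matches u_1^{bar sigma^2}(h_2) = 121/210 computed from the average strategy.
assert num / den == F(121, 210)
```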
\subsection{Computing Cumulative Reach Probabilities}
While it is intractable to store $\textnormal{crp}_i^t(z)$ in memory for every $z\in \mathcal Z$, we \emph{can} store the cumulative reach probabilities for nodes in the tree $\mathcal T_t$ built by \ac{MCCFR} at time $t$.
We can translate these into $\textnormal{crp}_i^t(z)$ with the help of the uniformly random strategy $\textnormal{rnd}$:
\begin{lemma}
Let $z\in \mathcal Z$ be such that $z \sqsupset ha$, where $h$ is a leaf of $\mathcal T_t$ and $a\in \mathcal A(h)$. Then we have $\textnormal{crp}_i^t(z) = \textnormal{crp}_i^t(ha) \pi^\textnormal{rnd}_i(z|ha)$.
\end{lemma}
\begin{proof}
This immediately follows from the fact that for any $g \notin \mathcal T_t$, $\sigma^s(g) = \textnormal{rnd}(g)$ for every $s=1,2,\dots,t$.
\end{proof}
To keep track of $\textnormal{crp}_i^t(ha)$ for $h\in \mathcal T_t$, we store at each node $h$ a variable $\textnormal{crp}_i(h)$ and auxiliary variables $w_i(ha)$, $a\in \mathcal A(h)$, which measure the increase in the cumulative reach probability since the previous visit of $ha$.
All these variables are initially set to $0$ except for $w_i(\emptyset)$ which is always assumed to be equal to 1.
Whenever \ac{MCCFR} visits some $h\in \mathcal T_t$, $\textnormal{crp}_i(h)$ is increased by $w_i(h)$ (stored in $h$'s parent), each $w_i(ha)$ is increased by $w_i(h) \pi_i^{\sigma^t}(ha|h)$, and $w_i(h)$ (in the parent) is reset to $0$.
This ensures that whenever a value $w_i(ha)$ gets updated without being reset, it contains the value $\textnormal{crp}_i^t(ha) - \textnormal{crp}_i^{t_{ha}}(ha)$, where $t_{ha}$ is the previous time when $ha$ got visited. As a consequence, the variables $\textnormal{crp}_i(ha)$ that do get updated are equal to $\textnormal{crp}^t_i(ha)$.
Note that this method is very similar to the lazy-weighted averaging of~\cite{lanctot_thesis}.
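The update just described can be sketched as follows (the data structures are our own illustration, not an implementation detail of the original algorithm):

```python
class Node:
    def __init__(self, actions):
        self.crp = 0.0                      # crp_i(h), exact at visit times
        self.w = {a: 0.0 for a in actions}  # w_i(ha): increase since last visit

def traverse(path, policy):
    """Update the lazy counters along one MCCFR trajectory.

    path   -- [(node, taken_action), ...] from the root down
    policy -- policy(node, a) = pi_i^{sigma^t}(ha | h) at this iteration
    """
    w_in = 1.0                              # w_i(root) is always taken to be 1
    for node, taken in path:
        node.crp += w_in                    # crp_i(h) += w_i(h)
        for a in node.w:                    # w_i(ha) += w_i(h) * pi_i(ha|h)
            node.w[a] += w_in * policy(node, a)
        w_in, node.w[taken] = node.w[taken], 0.0  # descend and reset w_i(h)

# Two iterations along the same branch with pi_i(ha|h) = 0.5, then 0.25;
# the child's counter then equals crp_i^2(ha) = 0.5 + 0.25.
root, child = Node(["a"]), Node(["a"])
traverse([(root, "a"), (child, "a")], lambda n, a: 0.5)
traverse([(root, "a"), (child, "a")], lambda n, a: 0.25)
assert root.crp == 2.0 and child.crp == 0.75
```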
Finally, we observe that the formula from Lemma~\ref{lem:u_of_avg} can be used for on-the-fly calculation of an unbiased estimate of $u_i^{\bar \sigma^T}(h)$.
Indeed, it suffices to replace the sum over $z$ by its sampled variant $\hat s^t_i(h) :=$
\begin{equation}\label{eq:sampled_u_of_avg}
\frac{1}{q^t(z)} \left(
\pi_1^{\sigma^t}\!\!(z)\textnormal{crp}_2^t(z)
+ \textnormal{crp}_1^t(z) \pi_2^{\sigma^t}\!\!(z)
- \pi^{\sigma^t}_{1,2}(z)
\right) \pi_c(z|h) u_i(z)
,
\end{equation}
where $z$ is the terminal state sampled at time $t$ and $q^t(z)$ is the probability with which it was sampled.
We keep track of the cumulative sum $\sum_t \hat s_i^t(h)$ and, once we reach iteration $T$, we do one last update of $h$ and set
\begin{align*}
\tilde u_i(h) := \frac{ \sum_t \hat s^t_i(h)}{ \textnormal{crp}_1^T(h) \textnormal{crp}_2^T(h) } & \ \ \ \ \textnormal{ and } &
\tilde v_i(h) := \pi_{-i}^{\bar \sigma^T}(h) \tilde u_i(h) .
\end{align*}
By \eqref{eq:sampled_u_of_avg} and Lemma~\ref{lem:u_of_avg}, we have $\mathbf E \tilde u_i(h) = u_i^{\bar \sigma^T}(h)$ and thus $\mathbf E \tilde v_i(h) = v_i^{\bar \sigma^T}(h)$.
Note that $\tilde v_i(h)$ might suffer from a very high variance and devising its low-variance modification (or alternative) would be desirable.
\section{Game Rules}\label{sec:rules}
\textbf{Biased Rock Paper Scissors} \texttt{B-RPS}
is a version of the standard game of Rock-Paper-Scissors with a modified payoff matrix:
\begin{table}[H]
\centering
\begin{tabular}{c|ccc}
~ & R & P & S \\
\hline
R & 0 & -1 & 100 \\
P & 1 & 0 & -1 \\
S & -1 & 1 & 0
\end{tabular}
\end{table}
This variant gives the first player an advantage and breaks the action symmetry of the game.
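The advantage can be quantified. Assuming a fully mixed equilibrium (the check below confirms the resulting strategy is valid), equalizing the first player's expected payoff across the three opposing actions yields the game value $11/34$; a short sketch in exact arithmetic:

```python
from fractions import Fraction as F

A = [[0, -1, 100], [1, 0, -1], [-1, 1, 0]]  # row player's payoffs for (R, P, S)

# Solving x2 - x3 = x3 - x1 = 100*x1 - x2 = v with x1 + x2 + x3 = 1 gives:
v = F(11, 34)
x = [F(1, 33) * v, F(67, 33) * v, F(34, 33) * v]

assert sum(x) == 1 and min(x) > 0
# x guarantees the row player exactly v against every pure counter-strategy:
assert all(sum(x[i] * A[i][j] for i in range(3)) == v for j in range(3))
assert v > 0  # strictly positive value: the first player has the advantage
```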
\textbf{Phantom Tic-Tac-Toe} \texttt{PTTT}
Phantom Tic-Tac-Toe is a partially observable variant of Tic-Tac-Toe. It is played by two players on a $3\times 3$ board, and in every turn one player attempts to mark one cell. The goal is the same as in perfect-information Tic-Tac-Toe: to place three consecutive marks in a horizontal, vertical, or diagonal row.
A player can see only his own marked cells, or those cells of the opponent that have been revealed to him by his attempts to place a mark in an occupied cell.
If the player is successful in marking the selected cell, the opponent takes an action in the next round. Otherwise, the player has to choose a cell again, until he makes a successful move.
The opponent receives no information about the player's attempts at moves.
\textbf{Imperfect Information Goofspiel} In \texttt{II-GS(N)}, each player is given a private hand of bid cards with values $0$ to $N-1$. A different deck of $N$ point cards is placed face up in a stack. On their turn, each player bids for the top point card by secretly choosing a single card in their hand. The highest bidder gets the point card and adds the point total to their score, discarding the points in the case of a tie. This is repeated $N$ times and the player with the highest score wins.
In \textit{II-Goofspiel}, the players only discover who won or lost a bid, but not which bid cards were played. We also assume the point stack is strictly increasing: $0, 1, \dots, N-1$. This way the game has no chance nodes, all actions are private, and the information sets have various sizes.
\textbf{Liar's Dice} \texttt{LD(D1,D2,F)}, also known as Dudo, Perudo, and Bluff, is a dice-bidding game. Each die has faces $1$ to $F-1$ and a star $\star$. Each player $i$ rolls $D_i$ of these dice without showing them to their opponent. Each round, players alternate by bidding on the outcome of all dice in play until one player ``calls liar'', i.e. claims that their opponent's latest bid does not hold. If the bid holds, the calling player loses; otherwise, she wins. A bid consists of a quantity of dice and a face value. A face of $\star$ is considered wild and counts as matching any other face. To bid, the player must increase either the quantity or face value of the current bid (or both).
All actions in this game are public. The only hidden information is caused by chance at the beginning of the game. Therefore, the size of all information sets is identical.
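The bidding rule admits a compact formal statement. The sketch below encodes one literal reading of ``increase either the quantity or face value (or both)''; the $(quantity, face)$ pair representation is our own, and any special ordering rules involving $\star$ bids are ignored:

```python
def is_legal_bid(current, proposed):
    """Bids are (quantity, face) pairs; under this reading, a raise must not
    decrease either component and must strictly increase at least one."""
    q0, f0 = current
    q1, f1 = proposed
    return q1 >= q0 and f1 >= f0 and (q1, f1) != (q0, f0)

assert is_legal_bid((2, 3), (3, 3))      # raise the quantity
assert is_legal_bid((2, 3), (2, 4))      # raise the face value
assert not is_legal_bid((2, 3), (2, 3))  # repeating the bid is not a raise
```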
\textbf{Generic Poker} \texttt{GP(T, C, R, B)} is a simplified poker game inspired by Leduc Hold'em. First, both players are required to put one chip in the pot. Next, chance deals a single private card to each player, and the betting round begins. A player can either \textit{fold} (the opponent wins the pot), \textit{check} (let the opponent make the next move), \textit{bet} (add some amount of chips, as first in the round), \textit{call} (add the amount of chips equal to the last bet of the opponent into the pot), or \textit{raise} (match and increase the bet of the opponent).
If no further raise is made by any of the players, the betting round ends, chance deals one public card on the table, and a second betting round with the same rules begins. After the second betting round ends, the outcome of the game is determined -- a player wins if: (1) her private card matches the table card and the opponent's card does not match, or (2) none of the players' cards matches the table card and her private card is higher than the private card of the opponent. If no player wins, the game is a draw and the pot is split.
The parameters of the game are the number of types of the cards $T$, the number of cards of each type $C$, the maximum length of sequence of raises in a betting round $R$, and the number of different sizes of bets $B$ (i.e., amount of chips added to the pot) for bet/raise actions.
This game is similar to Liar's Dice in having only public actions. However, it includes additional chance nodes later in the game, which reveal part of the information not available before. Moreover, it has integer results and not just win/draw/loss.
No Limit Leduc Hold'em poker with a maximum pot size of $N$ and integer bets is \texttt{GP(3,2,N,N)}.
\begin{table}[t]
\centering
\begin{small}
\begin{tabular}{rr}
Game & $|\mathcal H|$ \\
\hline
IIGS(5) & 41331 \\
IIGS(13) & $\approx 4\cdot 10^{19}$ \\
LD(1,1,6) & 147456 \\
LD(2,2,6) & $\approx 2\cdot 10^{10}$ \\
GP(3,3,2,2) & 23760 \\
GP(4,6,4,4) & $\approx 8 \cdot 10^{8}$ \\
PTTT & $\approx 10^{10}$ \\
\end{tabular}
\end{small}
\caption{Sizes of the evaluated games.}\label{tab:sizes}
\end{table}
\section{Extended results}
\begin{table*}[p]
\begin{tabular}{ccc}
\texttt{II-GS(5)} & \texttt{LD(1,1,6)} & \texttt{GP(3,3,2,2)} \\
\includegraphics[width=0.27\linewidth]{cfv_convergence/IIGS-5_summary_weighted_large_abs.pdf} &
\includegraphics[width=0.27\linewidth]{cfv_convergence/LD-116_summary_weighted_large_abs.pdf} &
\includegraphics[width=0.27\linewidth]{cfv_convergence/GP-3322_summary_weighted_large_abs.pdf} \\
\texttt{II-GS(13)} & \texttt{LD(2,2,6)} & \texttt{GP(4,6,4,4)} \\
\includegraphics[width=0.27\linewidth]{cfv_convergence/IIGS-13_summary_weighted_large_abs.pdf} &
\includegraphics[width=0.27\linewidth]{cfv_convergence/LD-226_summary_weighted_large_abs.pdf} &
\includegraphics[width=0.27\linewidth]{cfv_convergence/GP-4644_summary_weighted_large_abs.pdf} \\
\end{tabular}
\caption{Comparison of counterfactual values for different domains by tracking how absolute differences $\Delta_t(J) = | \tilde v_2^t(J) - \tilde v_2^T(J) | $ change over time.}
\end{table*}
\begin{table*}[p]
\begin{tabular}{ccc}
\texttt{II-GS(5)} & \texttt{LD(1,1,6)} & \texttt{GP(3,3,2,2)} \\
\includegraphics[width=0.27\linewidth]{cfv_convergence/IIGS-5_summary_weighted_large_rel.pdf} &
\includegraphics[width=0.27\linewidth]{cfv_convergence/LD-116_summary_weighted_large_rel.pdf} &
\includegraphics[width=0.27\linewidth]{cfv_convergence/GP-3322_summary_weighted_large_rel.pdf} \\
\texttt{II-GS(13)} & \texttt{LD(2,2,6)} & \texttt{GP(4,6,4,4)} \\
\includegraphics[width=0.27\linewidth]{cfv_convergence/IIGS-13_summary_weighted_large_rel.pdf} &
\includegraphics[width=0.27\linewidth]{cfv_convergence/LD-226_summary_weighted_large_rel.pdf} &
\includegraphics[width=0.27\linewidth]{cfv_convergence/GP-4644_summary_weighted_large_rel.pdf} \\
\end{tabular}
\caption{Comparison of counterfactual values for different domains by tracking how relative differences $\Delta_t(J) = \tilde v_2^t(J) - \tilde v_2^T(J) $ change over time.}
\end{table*}
\begin{table*}[p]
\begin{tabular}{ccc}
\texttt{II-GS(5)} & \texttt{LD(1,1,6)} & \texttt{GP(3,3,2,2)} \\
\includegraphics[width=0.27\linewidth]{exploration/IIGS-5_reset_large.pdf} &
\includegraphics[width=0.27\linewidth]{exploration/LD-116_reset_large.pdf} &
\includegraphics[width=0.27\linewidth]{exploration/GP-3322_reset_large.pdf} \\
\includegraphics[width=0.27\linewidth]{exploration/IIGS-5_keep_large.pdf} &
\includegraphics[width=0.27\linewidth]{exploration/LD-116_keep_large.pdf} &
\includegraphics[width=0.27\linewidth]{exploration/GP-3322_keep_large.pdf}
\end{tabular}
\caption{Sensitivity to exploration parameter. Top row is "reset" variant, bottom row is "keep" variant. }
\end{table*}
\end{document} |
\begin{document}
\title{Maximal Violation of a Broad Class of Bell Inequalities and Its Implications on Self-Testing}
\author{C. Jebarathinam}
\affiliation{Department of Physics and Center for Quantum Frontiers of Research \& Technology (QFort), National Cheng Kung University, Tainan 701, Taiwan}
\author{Jui-Chen Hung}
\affiliation{Department of Physics, National Cheng Kung University, Tainan 701, Taiwan}
\author{Shin-Liang Chen}
\affiliation{Department of Physics and Center for Quantum Frontiers of Research \& Technology (QFort), National Cheng Kung University, Tainan 701, Taiwan}
\affiliation{Dahlem Center for Complex Quantum Systems, Freie Universit\"at Berlin, 14195 Berlin, Germany}
\author{Yeong-Cherng Liang}
\email{[email protected]}
\affiliation{Department of Physics and Center for Quantum Frontiers of Research \& Technology (QFort), National Cheng Kung University, Tainan 701, Taiwan}
\date{\today}
\begin{abstract}
In quantum information, lifting is a systematic procedure that can be used to derive---when provided with a seed Bell inequality---other Bell inequalities applicable in more complicated Bell scenarios. It is known that the procedure of lifting introduced by Pironio [J. Math. Phys. A 46, 062112 (2005)] preserves the facet-defining property of a Bell inequality. Lifted Bell inequalities therefore represent a broad class of Bell inequalities that can be found in {\em all} Bell scenarios. Here, we show that the maximal value of {\em any} lifted Bell inequality is preserved for both the set of nonsignaling correlations and quantum correlations. Despite the degeneracy in the maximizers of such inequalities, we show that the ability to self-test a quantum state is preserved under these lifting operations. In addition, except for outcome-lifting, local measurements that are self-testable using the original Bell inequality---despite the degeneracy---can also be self-tested using {\em any} lifted Bell inequality derived therefrom. While it is not possible to self-test {\em all} the positive-operator-valued measure elements using an outcome-lifted Bell inequality, we show that partial, but robust self-testing statements on the underlying measurements can nonetheless be made from the quantum violation of these lifted inequalities. We also highlight the implication of our observations on the usefulness of lifted Bell-like inequalities as device-independent witnesses for entanglement depth. The impact of the aforementioned degeneracy on the geometry of the quantum set of correlations is briefly discussed.
\end{abstract}
\maketitle
\section{Introduction}
Inspired by the thought-provoking paper of Einstein, Podolsky, and Rosen~\cite{EPR35}, Bell derived~\cite{Bell64}---based on well-accepted classical intuitions---an inequality constraining the correlations between local measurement outcomes on two distant systems. He further showed that the so-called Bell inequality can be violated by quantum theory using local, but incompatible measurements on entangled states. Since then, various Bell inequalities, such as the one due to Clauser, Horne, Shimony and Holt (CHSH)~\cite{Clauser69} have been derived to investigate the intriguing nature of quantum theory and also the information-processing power enabled by these nonclassical, Bell-nonlocal~\cite{Brunner14} correlations.
For example, Ekert~\cite{Eke91} showed in 1991 that the quantum violation of Bell inequalities offers an unprecedented level of security for quantum key distribution protocols. Independently, Mayers and Yao~\cite{Mayers:1998aa, MY04} showed that certain extremal quantum correlation enables the possibility to {\em self-test} quantum devices. These discoveries prompted the paradigm of device-independent quantum information~\cite{Scarani12,Brunner14} in which the physical properties of quantum devices are certified without making any assumption on the internal working of the devices, but rather through the observation of a Bell-nonlocal correlation.
Interestingly, an observation of the {\em maximal} quantum violation of certain Bell inequalities, such as the CHSH inequality, is already sufficient to self-test the underlying quantum state and the measurements that give rise to the observed violation~\cite{PR92}. To date, numerous Bell inequalities have been derived (see, e.g.,~\cite{Braunstein1990,WW01,Collins2002,Kaszlikowski2002,Zukowski2002,Sliwa2003,Collins04,Buhrman2005,Guhne2005,Avis2006,Liang2009,Gisin:2009aa,Acin2012,Bancal12,Grandjean2012,Brunner14,Mancinska:2014aa,Liang:PRL:2015,Schwarz16,SAT+17,Enky1810.05636,Cope18,BAS19,Zambrini19} and references therein). However, beyond the CHSH Bell inequality, only a handful of them~\cite{Miller2013,YN13,Mancinska:2014aa,Bamps2015,SAS+16,Jed16,Andersson2017,SAT+17,KST18,BAS19} have been identified as relevant for the purpose of self-testing (see~\cite{SB19} for a recent review on the topic of self-testing). Is it possible to make some general statements regarding the self-testing property of Bell inequalities defined for an {\em arbitrary} Bell scenario?
To answer the above question, we consider, in this work, Bell inequalities that may be obtained from the procedure of Pironio's {\em lifting}~\cite{PIR05}. Importantly, such inequalities exist in {\em all} Bell scenarios beyond the simplest one for two parties, two settings and two outcomes. If a Bell inequality is facet-defining~\cite{Collins04}, the same holds for its liftings~\cite{PIR05}. What about their quantum violation? In \cite{Rosset2014} (see also~\cite{Curchod2015}), it was shown that in addition to the bound satisfied by Bell-local correlations, the maximal quantum (nonsignaling~\cite{BLM+05}) value of Bell inequalities is preserved for party-lifting. In this work, we give an alternative proof of this fact and show, in addition, that the maximal quantum (non-signaling) value of a Bell inequality is also preserved for {\em other} types of Pironio's lifting.
As a corollary of our results, we further show that the self-testing properties of a Bell inequality is largely preserved through the procedure of lifting. In other words, if a Bell inequality can be used to self-test some quantum state $\ket{\psi}$, so can its liftings. Moreover, except for the case of outcome-lifting, the possibility to self-test the underlying measurement operators using a Bell inequality remains intact upon the application of Pironio's lifting. As we illustrate in this work, the maximizers of lifted Bell inequalities are not unique. There is thus no hope (see, e.g.,~\cite{GKW+18}) of providing a {\em complete} self-testing of the employed quantum device using a lifted Bell inequality. Nonetheless, we provide numerical evidence suggesting that lifted Bell inequalities provide the same level of robustness for self-testing the relevant parts of the devices.
The rest of this paper is organized as follows. In Sec. \ref{prl}, we introduce basic notions of a Bell scenario and recall the definitions of self-testing.
After that, we investigate and compare the maximal violation of lifted Bell inequalities against that of the original Bell inequalities, assuming quantum, or general nonsignaling correlations \cite{BLM+05}. In the same section, we also discuss the self-testing property of lifted Bell inequalities, and the usefulness of party-lifted Bell-like inequalities as device-independent witnesses for entanglement depth~\cite{Liang:PRL:2015}. In Sec.~\ref{con}, we present some concluding remarks and possibilities for future research. Examples illustrating the non-uniqueness of the maximizers of lifted Bell inequalities, as well as their implications on the geometry of the quantum set of correlations are provided in the Appendices.
\section{Preliminaries}\label{prl}
\subsection{Bell scenario}
Consider a Bell scenario involving $n$ spatially separated parties labeled by $i \in \{1,2,\cdots,n\}$ and
let the $i$-th party perform a measurement labeled by $j_i$, with outcomes labeled by $k_i$.
In any given Bell scenario, we may appreciate the strength of correlation between measurement outcomes via a collection of joint conditional probabilities.
Following the literature (see, e.g.,~\cite{Brunner14}), we represent these conditional probabilities---dubbed a {\em correlation}---of getting the outcome combination $\vec{k}=(k_1k_2\cdots k_n)$ conditioned on performing the measurements $\vec{j}=(j_1j_2\cdots j_n)$ by the vector
$\vec{P}:=\{P(\vec{k}|\vec{j})\}$. In addition to the normalization condition and the positivity constraint $P(\vec{k}|\vec{j})\ge 0$ for all $\vec{k},\vec{j}$,
each correlation is required to satisfy the nonsignaling constraints~\cite{BLM+05} (see also~\cite{PR94})
\begin{align}\label{Eq:NS}
&\sum_{k_i} P(k_1 \cdots k_i \cdots k_n|j_1 \cdots j_i \cdots j_n) \nonumber \\
&=\sum_{k_i} P(k_1 \cdots k_i \cdots k_n|j_1 \cdots j'_i \cdots j_n)
\end{align}
for all $i, j_1, \cdots j_{i-1},j_i,j'_i,j_{i+1} \cdots j_n$, and $k_\ell$ (with $\ell\neq i$).
In any given Bell scenario, the set of correlations satisfying these constraints forms the so-called nonsignaling
polytope $\mathcal{N}$ \cite{BLM+05}.
A correlation is called Bell-local~\cite{Brunner14} if it can be explained by a local-hidden-variable model~\cite{Bell64},
\begin{equation*}
P(k_1 \cdots k_n|j_1 \cdots j_n)=\sum_\lambda p_\lambda P(k_1|j_1, \lambda)\cdots P(k_n|j_n, \lambda)
\end{equation*}
for all $k_1 \cdots k_n,j_1 \cdots j_n$, where $\lambda$ is the hidden variable which occurs with probability $p_\lambda$,
$\sum_\lambda p_\lambda=1$ and $P(k_i|j_i, \lambda)$ is the probability of obtaining the measurement outcome
$k_i$ given the setting $j_i$ and the hidden variable $\lambda$. In a given Bell scenario, the set of Bell-local
correlations forms a polytope called a Bell polytope, or more frequently a local polytope $\mathcal{L}$, which is a subset of the nonsignaling polytope.
A correlation which cannot be explained by a local-hidden-variable model is said to be Bell-nonlocal
and must necessarily violate a Bell inequality~\cite{Bell64} --- a constraint satisfied by all $\vec{P}\in\mathcal{L}$.
A linear Bell inequality has the generic form:
\begin{align}\label{Eq:npartiteBI}
I_n(\vec{P}):=\vec{B}\cdot \vec{P}=\sum_{k_1\cdots k_n,j_1\cdots j_n} B_{\vec{k},\vec{j}}P(\vec{k}|\vec{j})
\stackrel{\mathcal{L}}{\le} \beta_{\mathcal{L}},
\end{align}
where $\vec{B}:=\{B_{\vec{k},\vec{j}}\}$ denotes the vector of Bell coefficients, $\vec{B}\cdot \vec{P}$ and $\beta_{\mathcal{L}}$ are, respectively, the Bell expression and the local bound of a Bell inequality.
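For a concrete instance of \eqref{Eq:npartiteBI}, consider the CHSH expression written in terms of the correlators $E_{xy} := \sum_{a,b} (-1)^{a+b} P(ab|xy)$, namely $E_{00}+E_{01}+E_{10}-E_{11} \stackrel{\mathcal{L}}{\le} 2$. Since $\mathcal L$ is a polytope, its local bound $\beta_{\mathcal{L}}=2$ can be recovered by maximizing over the deterministic local strategies (the vertices of $\mathcal L$); a short sketch:

```python
from itertools import product

# Deterministic local strategies assign a fixed outcome in {0, 1} to each
# setting: a = (a0, a1) for Alice and b = (b0, b1) for Bob.
def chsh(a, b):
    E = lambda x, y: (-1) ** (a[x] + b[y])   # correlator E_xy, deterministic
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

beta_local = max(chsh(a, b)
                 for a in product((0, 1), repeat=2)
                 for b in product((0, 1), repeat=2))
assert beta_local == 2  # the CHSH local bound beta_L
```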
A correlation $\vec{P}$ is called quantum if the joint probabilities can be written as
\begin{equation}\label{Eq:Born1}
P(k_1 \cdots k_n|j_1 \cdots j_n)=\operatorname{tr} \left( \bigotimes^n_{i=1} M^{(i)}_{k_i|j_i} \, \rho_{12\cdots n} \right)
\end{equation}
where $\rho_{12\cdots n}$ is an $n$-partite density matrix and $\{M^{(i)}_{k_i|j_i}\}_{k_i}$ is the positive-operator-valued measure (POVM) describing the $j_i$-th measurement of the $i$-th party. By definition, POVM elements satisfy the constraints of being positive semidefinite, $M^{(i)}_{k_i|j_i}\succeq 0$ for all $k_i$ and $j_i$, as well as the normalization requirement $\sum_{k_i} M^{(i)}_{k_i|j_i}= {\bf{1}}$ for all $j_i$. Thus, a correlation is quantum if and only if the joint probabilities of such a correlation can be realized experimentally by performing local measurements on an $n$-partite quantum system. The set of quantum correlations $\mathcal{Q}$ forms a convex set satisfying $\mathcal{L} \subset \mathcal{Q}\subset \mathcal{N}$. It is, however, not a polytope \cite{WW01} (see also~\cite{GKW+18}). When necessary, we will use $\mathcal{Q}_n$ to denote the set of quantum correlations arising in an $n$-partite Bell scenario.
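As a concrete illustration of Eq.~\eqref{Eq:Born1} in the two-party scenario, the following numerical sketch (assuming NumPy) builds the correlation produced by the maximally entangled state $(\ket{00}+\ket{11})/\sqrt{2}$ and the standard optimal qubit measurements, verifies the nonsignaling constraints \eqref{Eq:NS}, and recovers the maximal quantum CHSH value $2\sqrt{2}$:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def povm(theta):
    """Projective measurement of cos(theta) Z + sin(theta) X, outcomes 0/1."""
    A = np.cos(theta) * Z + np.sin(theta) * X
    return [(I2 + A) / 2, (I2 - A) / 2]

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # maximally entangled state
rho = np.outer(phi, phi)
Ax = [povm(0.0), povm(np.pi / 2)]                  # Alice's settings x = 0, 1
By = [povm(np.pi / 4), povm(-np.pi / 4)]           # Bob's settings y = 0, 1

def P(a, b, x, y):                                 # Born rule, Eq. (Born1)
    return np.trace(np.kron(Ax[x][a], By[y][b]) @ rho).real

def E(x, y):                                       # correlator <A_x B_y>
    return sum((-1) ** (a + b) * P(a, b, x, y) for a in (0, 1) for b in (0, 1))

# Nonsignaling, Eq. (NS): Alice's marginals do not depend on Bob's setting y.
for x in (0, 1):
    m = [[sum(P(a, b, x, y) for b in (0, 1)) for a in (0, 1)] for y in (0, 1)]
    assert np.allclose(m[0], m[1])

S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
assert abs(S - 2 * np.sqrt(2)) < 1e-12             # maximal quantum violation
```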
\subsection{Self-testing}
\label{Sec:SelfTest}
Certain nonlocal correlations $\vec{P}\in\mathcal{Q}$ have the appealing feature of being able to reveal (essentially unambiguously) the quantum strategy, i.e., the underlying state and/or the measurement operators leading to these correlations~\cite{PR92,WW01,MY04,SB19}. Following \cite{MY04}, we say that such a $\vec{P}\in\mathcal{Q}$ self-tests the underlying quantum strategy. To this end, it is worth noting that all pure bipartite entangled states can be self-tested~\cite{Coladangelo:2017aa}.
To facilitate subsequent discussions, we recall here the formal definition of self-testing in a bipartite Bell scenario, following the approach of \cite{GKW+18} (see also~\cite{SB19}). Specifically, consider two spatially separated parties, Alice and Bob, who perform measurements labeled by $x$ and $y$ and observe the outcomes $a$ and $b$, respectively.
We say that a bipartite correlation $\vec{P}:=\{P(ab|xy)\}$ satisfying
\begin{equation}\label{Eq:Born2}
P(ab|xy)=\operatorname{tr}\left[ M^{(1)}_{a|x} \otimes M^{(2)}_{b|y} \rho_{12}\right],
\end{equation}
for all $a$, $b$, $x$ and $y$ self-tests the reference (entangled) state $\ket{\tilde{\psi}_{12}}$ if there exists a local isometry $\Phi=\Phi_1 \otimes \Phi_2$ such that
\begin{equation} \label{Eq:StateTransformation}
\Phi\, \rho_{12}\, \Phi^\dag= \proj{\tilde{\psi}_{12}} \otimes \rho_{aux}
\end{equation}
where $\rho_{12}$ is the measured quantum state [acting on $\mathcal{H}_A \otimes \mathcal{H}_B$], $\rho_{aux}$ is an auxiliary state acting
on $\mathcal{H}_{A'} \otimes \mathcal{H}_{B'}$, and $\mathcal{H}_{A'}$ and $\mathcal{H}_{B'}$ are the Hilbert spaces associated with the other degrees of freedom of Alice and Bob's subsystem respectively~\cite{CKS19}.
Often, a $\vec{P}\in\mathcal{Q}$ that self-tests some reference quantum state can also be used to certify the measurements as well. In such cases,
we say that a bipartite correlation $\vec{P}$ obtained from Eq.~\eqref{Eq:Born2} self-tests the reference quantum state $\ket{\tilde{\psi}_{12}}$ and the reference POVM
$\{\tilde{M}^{(1)}_{a|x}\}_{a}$, $\{\tilde{M}^{(2)}_{b|y}\}_{b}$ if there exists a local isometry
$\Phi=\Phi_1 \otimes \Phi_2$ such that Eq.~\eqref{Eq:StateTransformation} holds and
\begin{align}\label{Eq:POVMTransformation}
&\Phi [ M^{(1)}_{a|x} \otimes M^{(2)}_{b|y} \rho_{12} ] \Phi^\dag
= [ \tilde{M}^{(1)}_{a|x} \otimes \tilde{M}^{(2)}_{b|y} \proj{\tilde{\psi}_{12}} ] \otimes \rho_{aux}
\end{align}
for all $a$, $b$, $x$ and $y$. By summing over $a,b$, and using the normalization of POVM, one recovers Eq.~\eqref{Eq:StateTransformation} from Eq.~\eqref{Eq:POVMTransformation}.
Interestingly, there are Bell inequalities whose maximal quantum violation alone is sufficient to self-test the
quantum state (and the measurement operators) \cite{MYS12,YN13,PVN14,SAS+16,Jed16,SAT+17,Natarajan:2017,Coladangelo2018}. Since then, identifying Bell
inequalities which can be used for the task of self-testing has received considerable attention.
To this end, note that if the maximal quantum violation of a Bell inequality self-tests some quantum state as well as the underlying measurements, then this maximal quantum violation must be achieved by a unique correlation \cite{GKW+18}.
More formally, consider a bipartite Bell inequality,
\begin{equation} \label{2partiteBI}
I_2(\vec{P}):=\sum_{a, b,x,y} B_{a, b,x,y}P(ab|xy) \stackrel{\mathcal{L}}{\le} \beta_\mathcal{L},
\end{equation}
with a quantum bound (maximal quantum violation):
\begin{equation}
\beta_\mathcal{Q}=\max_{\vec{P}\in\mathcal{Q}} I_2(\vec{P})>\beta_\mathcal{L}.
\end{equation}
We say that an observation of the quantum violation $I_2(\vec{P})=\beta_\mathcal{Q}$ self-tests the reference (entangled) state $\ket{\tilde{\psi}_{12}}$ and the reference POVM $\{\tilde{M}^{(1)}_{a|x}\}_{a}$, $\{\tilde{M}^{(2)}_{b|y}\}_{b}$ if there exists a local isometry
$\Phi=\Phi_1 \otimes \Phi_2$ such that Eqs.~\eqref{Eq:StateTransformation} and~\eqref{Eq:POVMTransformation} hold for all $\vec{P}\in\mathcal{Q}$ satisfying Eq.~\eqref{Eq:Born2} and $I_2(\vec{P})=\beta_\mathcal{Q}$.
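To make the notions of local and quantum bounds concrete, the following sketch (our own minimal illustration; the function names are not from any library) evaluates the CHSH Bell expression with coefficients $B_{a,b,x,y}=(-1)^{xy+a+b}$ on the Tsirelson-point correlation of Eq.~\eqref{Eq:TsirelsonPoint} and on all deterministic local strategies:

```python
import numpy as np

def chsh_value(P):
    """Evaluate the CHSH Bell expression sum_{abxy} (-1)^{xy+a+b} P(ab|xy)."""
    return sum((-1) ** (x * y + a + b) * P(a, b, x, y)
               for a in (0, 1) for b in (0, 1)
               for x in (0, 1) for y in (0, 1))

def tsirelson(a, b, x, y):
    """The quantum correlation attaining the quantum bound 2*sqrt(2)."""
    return 0.25 + (-1) ** (a + b + x * y) * np.sqrt(2) / 8

def deterministic(fa, fb):
    """Local deterministic strategy: Alice outputs fa[x], Bob outputs fb[y]."""
    return lambda a, b, x, y: float(a == fa[x] and b == fb[y])
```

A brute-force maximization of `chsh_value(deterministic(fa, fb))` over all sixteen deterministic strategies returns the local bound $\beta_\mathcal{L}=2$, whereas the Tsirelson-point correlation gives $\beta_\mathcal{Q}=2\sqrt{2}$.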
\section{Maximal violation of lifted Bell inequalities and its implications}
\label{maxlift}
In this section, we show that the maximal quantum (nonsignaling) violation of a lifted Bell inequality must be the same as that of the Bell inequality from which the lifting is applied. We then discuss the implication of these observations in the context of self-testing, on the geometry of the quantum set of correlations, as well as on the device-independent certification of entanglement depth. For ease of presentation, our discussion will often be carried out assuming a bipartite Bell scenario (for the original Bell inequality). However, it should be obvious from the presentation that our results also hold for any Bell scenario with more parties, and also for a Bell inequality that is not necessarily facet-defining.
\subsection{More inputs}
Let us begin with the simplest kind of lifting, namely, one that allows additional measurement settings. Applying Pironio's input-lifting~\cite{PIR05} to a Bell inequality means to consider the very same Bell inequality in a Bell scenario with more measurement settings for at least one of the parties. At first glance, it may seem rather unusual to make use of only the data collected for a subset of the input combinations, but in certain cases (see, e.g.,~\cite{Shadbolt2012}), the consideration of all input-lifted facets is already sufficient to identify the non-Bell-local nature of the observed correlations.
Since an input-lifted Bell inequality is exactly the same as the original Bell inequality, its maximal quantum (nonsignaling) violation is obviously the same as that of the original Bell inequality. Similarly, it is evident that if the maximal quantum violation of the original Bell inequality self-tests some reference quantum state $\ket{\psi}$ and POVMs $\{M^{(1)}_{a|x}\}_{a,x}$, $\{M^{(2)}_{b|y}\}_{b,y}$, \ldots, so does the maximal quantum violation of the input-lifted Bell inequality.
However, since no constraint is imposed on the additional inputs that do not appear in the Bell expression, it is clear that even if we impose the constraint that the maximal quantum violation of an input-lifted Bell inequality is attained, these other local POVMs can be completely arbitrary. Thus, the subset of quantum correlations attaining the maximal quantum violation of {\em any} input-lifted Bell inequality is {\em not} unique, and has a degeneracy that increases with the number of these ``free" inputs. In other words, the set of quantum maximizers of any input-lifted Bell inequality defines a flat region of the boundary of the quantum set of correlations, cf.~\cite{GKW+18}. In particular, it could lead to completely flat boundaries of $\mathcal{Q}$ on specific two-dimensional slices in the correlation space (see Fig.~\ref{Fig1}). For some explicit examples illustrating the aforementioned non-uniqueness, see Appendix~\ref{Examples}.
\subsection{More outcomes}
Instead of the trivial input-lifting, one may also lift a Bell inequality to a scenario with more measurement outcomes. Specifically, consider a bipartite Bell scenario where
the $y'$-th measurement of Bob has $v\ge2$ possible outcomes.
The simplest outcome-lifting \`a la Pironio~\cite{PIR05} then consists of two steps: (1) choose an outcome, say, $b=b'$ from Bob's $y'$-th measurement, and (2) replace in the sum of Eq.~\eqref{2partiteBI} every term of the form $P(ab'|xy')$ by $P(ab'|xy')+P(au|xy')$, where $u$ labels the outcome added in the lifted scenario.
The resulting outcome-lifted Bell inequality reads as:
\begin{align}\label{LO2partiteBI}
I^\text{\tiny LO}_2=&\!\!\sum_{a, b,x,y\ne y'} B_{a, b,x,y}P(ab|xy) +\!\!\sum_{a,x,b\ne b', u} B_{a, b,x,y'}P(ab|xy') \nonumber \\
&+\sum_{a,x} B_{a, b',x,y'}\left[ P(ab'|xy') + P(au|xy')\right]\stackrel{\mathcal{L}}{\le} \beta_{\mathcal{L}}
\end{align}
where the local bound $\beta_\mathcal{L}$ is provably the same~\cite{PIR05} as that of the original Bell inequality, Eq.~\eqref{2partiteBI}. It is worth noting that outcome-lifted Bell inequalities arise naturally in the study of detection loopholes in Bell experiments, see, e.g.,~\cite{Branciard2011,Cope18}.
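The claim that outcome-lifting leaves the local bound unchanged can be checked by brute force in a small example. The sketch below (our own illustration for the outcome-lifted CHSH inequality with $y'=1$, $b'=0$ and added outcome $u=2$) enumerates all deterministic strategies of the lifted scenario:

```python
def B(a, b, x, y):
    """CHSH coefficients (-1)^{xy+a+b}; the local bound is 2."""
    return (-1) ** (x * y + a + b)

def lifted_value(fa, fb):
    """Deterministic strategy: Alice outputs fa[x]; Bob outputs fb[y],
    where fb[1] ranges over the three outcomes {0, 1, 2} of the lifted
    y'=1 measurement."""
    total = 0
    for x in (0, 1):
        for y in (0, 1):
            a, b = fa[x], fb[y]
            if y == 1 and b == 2:   # added outcome u carries the b'=0 coefficient
                b = 0
            total += B(a, b, x, y)
    return total

# Brute force over all deterministic strategies of the lifted scenario
lifted_bound = max(lifted_value((f0, f1), (g0, g1))
                   for f0 in (0, 1) for f1 in (0, 1)
                   for g0 in (0, 1) for g1 in (0, 1, 2))
```

The enumeration returns `lifted_bound == 2`, the local bound of the original CHSH inequality, in line with Pironio's general result~\cite{PIR05}.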
\subsubsection{Preservation of quantum and nonsignaling violation}
As with input-lifting, we now proceed to demonstrate the invariance of maximal Bell violation with outcome-lifting.
\begin{proposition}\label{Obs:MaxValuePreserved-LO}
Lifting of outcomes preserves the quantum bound and the nonsignaling bound of any Bell inequality, i.e., $\beta^\text{\tiny LO}_{\mathcal{Q}}=\beta_{\mathcal{Q}}$ and $\beta^\text{\tiny LO}_{\mathcal{N}}=\beta_{\mathcal{N}}$, where $\beta_{\mathcal{Q}}$ ($\beta^\text{\tiny LO}_\mathcal{Q}$) and $\beta_{\mathcal{N}}$ ($\beta^\text{\tiny LO}_{\mathcal{N}}$) are, respectively, the quantum and the nonsignaling bounds of the original (outcome-lifted) Bell inequality.
\end{proposition}
\begin{proof}
From Eq.~\eqref{LO2partiteBI}, one clearly sees that the $b'$-th outcome and the $u$-th outcome of Bob's $y'$-th measurement are treated on equal footing. So, we may as well consider Bob's $y'$-th measurement as an {\em effective} $v$-outcome measurement by considering its $b'$-th outcome and its $u$-th outcome together as one outcome. Hence, if we define
\begin{equation} \label{coarse-grain}
\begin{split}
\tilde{P}(ab|xy)&=P(ab|xy),\quad y\neq y',\\
\tilde{P}(ab|xy')&=P(ab|xy'),\quad b\not\in\{b',u\},\\
\tilde{P}(ab'|xy')&=P(ab'|xy') + P(au|xy')
\end{split}
\end{equation}
and substitute it back into Eq.~\eqref{LO2partiteBI}, we recover the Bell expression of the original Bell inequality [left-hand-side of Eq.~\eqref{2partiteBI}] by identifying $\tilde{P}(ab|xy)$ in $I^\text{\tiny LO}_2$ as ${P}(ab|xy)$ in $I_2$. Moreover, if $\vec{P}$ defined for this more-outcome Bell scenario is quantum (nonsignaling), the resulting correlation obtained with the coarse-graining procedure of Eq.~\eqref{coarse-grain} is still quantum (nonsignaling). A proof of this for quantum correlations is provided in Appendix~\ref{QRpre} (see, e.g., \cite{Jul14} for the case of nonsignaling correlation).
This implies that for any violation of the outcome-lifted Bell inequality (\ref{LO2partiteBI}) by a quantum (nonsignaling)
correlation, there always exists another quantum (nonsignaling) correlation that gives the same amount of violation for
the original Bell inequality (\ref{2partiteBI}). In particular, the maximal quantum (nonsignaling) violation of these inequalities must satisfy
\begin{equation} \label{OtoL}
\beta_{\mathcal{N}} \ge \beta^\text{\tiny LO}_{\mathcal{N}}; \quad \beta_{\mathcal{Q}} \ge \beta^\text{\tiny LO}_{\mathcal{Q}}.
\end{equation}
On the other hand, instead of grouping the outcomes in the outcome-lifted Bell scenario, one could also start from the original Bell scenario and (arbitrarily) {\em split} the $b'$-th outcome of Bob's $y'$-th measurement into two outcomes labeled by $b=b'$ and $b=u$. Hence, if we define $\widehat{P}(ab'|xy')$, $\widehat{P}(au|xy')$ in the outcome-lifted Bell scenario such that
\begin{equation} \label{Eq:fine_grain}
\begin{split}
\widehat{P}(ab|xy) = P(ab|xy),\quad &y\neq y'\text{\ or\ } y=y',b\neq b',u\\
0\le \widehat{P}(ab'|xy')&,\,\, \widehat{P}(au|xy')\le 1,\\
\widehat{P}(ab'|xy') + \widehat{P}&(au|xy')=P(ab'|xy')
\end{split}
\end{equation}
and substitute it into Eq.~\eqref{2partiteBI}, we recover the outcome-lifted Bell expression [Eq.~\eqref{LO2partiteBI}] by identifying $\widehat{P}(ab'|xy')$ and $\widehat{P}(au|xy')$, respectively, as $P(ab'|xy')$ and $P(au|xy')$ in $I^\text{\tiny LO}_2$.
Moreover, the correlation obtained by locally splitting the outcomes, as required in Eq.~\eqref{Eq:fine_grain}, is realizable quantum-mechanically (see Appendix~\ref{QRpre}) or in a general nonsignaling theory (see \cite{Jul14}) if the original correlation is, respectively, quantum or nonsignaling. Hence, for any violation of the original Bell inequality [Eq.~\eqref{2partiteBI}] by a quantum (nonsignaling) correlation, there always exists a quantum (nonsignaling) correlation giving the same amount of violation for the outcome-lifted Bell inequality (\ref{LO2partiteBI}), i.e.,
\begin{equation} \label{LtoO}
\beta_{\mathcal{Q}} \le \beta^\text{\tiny LO}_{\mathcal{Q}}; \quad \beta_{\mathcal{N}} \le \beta^\text{\tiny LO}_{\mathcal{N}}.
\end{equation}
Combining Eqs. (\ref{OtoL}) and (\ref{LtoO}), it then follows that the maximal quantum (nonsignaling) violation of any Bell inequality is {\em preserved} through the procedure of outcome-lifting, i.e.,
\begin{equation}
\beta_{\mathcal{Q}} = \beta^\text{\tiny LO}_{\mathcal{Q}}; \quad \beta_{\mathcal{N}} = \beta^\text{\tiny LO}_{\mathcal{N}}.
\end{equation}
This completes the proof when only one outcome ($b=u$) of one measurement ($y=y'$) of one party (Bob) is lifted. Since more complicated outcome-liftings can be achieved by concatenating the simplest outcome-lifting presented above, the proof for the general scenario follows by concatenating the argument given above.
\end{proof}
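The outcome-splitting step of the proof can also be illustrated numerically. In the sketch below (our own illustration for the outcome-lifted CHSH inequality with $y'=1$, $b'=0$, $u=2$; the helper names are not from any library), Bob splits his optimal $b'=0$ effect with an arbitrary weight $\lambda$, and the lifted Bell value remains at $2\sqrt{2}$ for every $\lambda$, hinting at the non-uniqueness of the quantum maximizers discussed below:

```python
import numpy as np

# Two-qubit CHSH-optimal strategy
I2 = np.eye(2)
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi, phi)

proj = lambda obs: [(I2 + obs) / 2, (I2 - obs) / 2]
A = [proj(sz), proj(sx)]
B0 = proj((sz + sx) / np.sqrt(2))   # Bob's y = 0 effects
B1 = proj((sz - sx) / np.sqrt(2))   # Bob's y' = 1 effects (to be split)

def lifted_chsh(lam):
    """Outcome-lifted CHSH value when the b'=0 effect of Bob's y'=1
    measurement is split into lam*B1[0] (outcome 0) and (1-lam)*B1[0]
    (added outcome u=2); both split outcomes carry the b'=0 coefficient."""
    bob1 = [lam * B1[0], B1[1], (1 - lam) * B1[0]]
    val = 0.0
    for x in (0, 1):
        for a in (0, 1):
            for y in (0, 1):
                effects = B0 if y == 0 else bob1
                for b, M in enumerate(effects):
                    b_eff = 0 if b == 2 else b   # merged coefficient label
                    coeff = (-1) ** (x * y + a + b_eff)
                    val += coeff * np.trace(np.kron(A[x][a], M) @ rho).real
    return val
```

Since the two split effects carry the same coefficient, their contributions recombine linearly and the value is independent of $\lambda$.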
\subsubsection{Implications on self-testing}
As an implication of the above Proposition, we obtain the following result in the context of quantum theory.
\begin{cor}\label{Res:SelfTestState}
If the maximal quantum violation of a Bell inequality self-tests a quantum state $\ket{\tilde{\psi}}$, then
any Bell inequality obtained therefrom by outcome-lifting also self-tests $\ket{\tilde{\psi}}$.
\end{cor}
\begin{proof}
For definiteness, we prove this for the specific case of $n=2$; the general proof is completely analogous. To this end, let $\rho^*_{12}$ denote an optimal quantum state that maximally violates the outcome-lifted Bell inequality (\ref{LO2partiteBI}) with an appropriate choice of POVMs $\{M^{(1)}_{a|x}\}_{a,x}$, $\{M^{(2)}_{b|y}\}_{b,y}$.
As shown in Appendix \ref{QRpre}, this quantum state $\rho^*_{12}$ can also be used to realize an effective $v$-outcome distribution for Bob's $y'$-th measurement: combining Bob's relevant POVM elements for this measurement into a single POVM element implements the local coarse-graining given in Eq.~(\ref{coarse-grain}) and hence yields the maximal quantum violation of the original Bell inequality, Eq.~\eqref{2partiteBI}. Suppose that the maximal quantum violation of inequality~\eqref{LO2partiteBI} {\em does not} self-test the reference state $\ket{\tilde{\psi}_{12}}$, i.e., there {\em does not} exist {\em any} local isometry $\Phi=\Phi_1 \otimes \Phi_2$ such that
\begin{equation}
\Phi \rho^*_{12} \Phi^\dag= \proj{\tilde{\psi}_{12}} \otimes \rho_{aux}
\end{equation}
for some $\rho_{aux}$. Then, we see that the maximal quantum violation of inequality~\eqref{2partiteBI} (attainable using $\rho^*_{12}$) also cannot self-test the reference state $\ket{\tilde{\psi}_{12}}$. The desired conclusion follows by taking the contrapositive of the above implication.
\end{proof}
A few remarks are now in order. As with any other Bell inequality, in examining the quantum violation of an outcome-lifted Bell inequality, one may consider {\em arbitrary} local POVMs having the right number of outcomes (acting on some given Hilbert space). {\em A priori}, they do not have to be related to the optimal POVM of the original Bell inequality. However, from the proof of Proposition~\ref{Obs:MaxValuePreserved-LO}, one notices that POVMs arising from splitting the outcomes of the original optimal POVM do play an important role in attaining the maximal quantum violation of the outcome-lifted Bell inequality.
The arbitrariness in this splitting, nonetheless, implies that the $\vec{P}\in\mathcal{Q}$ maximally violating an outcome-lifted Bell inequality are {\em not unique} (see Appendix~\ref{App:LO} for some explicit examples). Since this invalidates a necessary requirement to self-test both the state and {\em all} the local POVMs (see Proposition~C.1 of Ref.~\cite{GKW+18}), we must thus conclude---given that such an inequality preserves the ability to self-test the underlying state---that its maximal violation cannot be used to {\em completely} self-test the underlying measurements. Using the swap method of~\cite{YVB+14}, we nevertheless show in Appendix~\ref{App:Self-test} that the quantum violation of an outcome-lifted Bell inequality may still provide robust self-testing of some of the underlying POVM elements, as well as the nature of the merged POVM elements.
\subsection{More parties}
Finally, let us consider the party-lifting of~\cite{PIR05}. Again, for simplicity, we provide hereafter explicit constructions and proofs only for the bipartite scenario, with the multipartite generalizations proceeding analogously. To this end, it is expedient to write a generic bipartite Bell inequality such that
\begin{equation}
I_2:=\sum_{a,b,x,y} B_{a,b,x,y}P(ab|xy)
\stackrel{\mathcal{L}}{\le} 0, \label{OnPBE}
\end{equation}
i.e., with its local bound being zero.\footnote{This can always be achieved by (repeatedly) applying an identity of the form given in Eq.~\eqref{Eq:NS} to both sides of Eq.~\eqref{2partiteBI}.} For any {\em fixed} but {\em arbitrary} input-output pair $c',z'$ of the additional party (Charlie), applying the party-lifting of~\cite{PIR05} to inequality~\eqref{OnPBE} gives rise to the tripartite Bell inequality:
\begin{equation}
I^\text{\tiny LP}_2:=\sum_{a,b,x,y} B_{a,b,x,y}P(abc'|xyz')
\stackrel{\mathcal{L}}{\le} 0. \label{LPBE}
\end{equation}
It is worth noting that such Bell inequalities have found applications in the foundations of quantum theory~\cite{Bancal:NatPhys:2012,Barnea2013}, as well as in the systematic generation~\cite{Curchod2015} of device-independent witnesses for entanglement depth~\cite{Liang:PRL:2015}.
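To make the construction of Eqs.~\eqref{OnPBE} and \eqref{LPBE} concrete, the following brute-force sketch (our own illustration using CHSH shifted to have local bound zero; the function names are ours) verifies that the party-lifted inequality still has local bound $0$ over all deterministic tripartite strategies:

```python
def shifted_chsh_coeff(a, b, x, y):
    """CHSH coefficients rewritten so that the local bound is zero:
    subtracting 1/2 from each coefficient removes the constant 2, since
    sum_{a,b,x,y} P(ab|xy) = 4 for normalized correlations."""
    return (-1) ** (x * y + a + b) - 0.5

def party_lifted_value(fa, fb, fc):
    """Deterministic tripartite strategy evaluated on the party-lifted
    inequality with Charlie's fixed pair (z', c') = (0, 0): only events
    in which Charlie outputs 0 contribute to the Bell expression."""
    if fc != 0:
        return 0.0
    return sum(shifted_chsh_coeff(fa[x], fb[y], x, y)
               for x in (0, 1) for y in (0, 1))

# Brute force over all deterministic strategies of the tripartite scenario
lp_bound = max(party_lifted_value((f0, f1), (g0, g1), fc)
               for f0 in (0, 1) for f1 in (0, 1)
               for g0 in (0, 1) for g1 in (0, 1) for fc in (0, 1))
```

If Charlie outputs $c\ne c'$, every term vanishes; otherwise, the value reduces to the (shifted) CHSH value of Alice and Bob's strategy, which is at most $0$.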
\subsubsection{Preservation of quantum and nonsignaling violation}
\label{Sec:PartyLiftingPreservation}
That the maximal quantum and nonsignaling violation remain unchanged under Pironio's party-lifting operation~\cite{PIR05} follows directly from the results shown in Section 2.4 of~\cite{Rosset2014}, as well as a special case (with $n=k$) of Theorem 2 of~\cite{Curchod2015}. For the convenience of subsequent discussions, however, we provide below an alternative proof of this observation.
\begin{observation}\label{PartyLiftingPreservation}
Lifting of parties preserves the quantum bound and the nonsignaling bound of any Bell inequality, i.e., $\beta^\text{\tiny LP}_{\mathcal{Q}}=\beta_{\mathcal{Q}}$ and $\beta^\text{\tiny LP}_{\mathcal{N}}=\beta_{\mathcal{N}}$, where $\beta_{\mathcal{Q}}$ ($\beta^\text{\tiny LP}_\mathcal{Q}$) and $\beta_{\mathcal{N}}$ ($\beta^\text{\tiny LP}_{\mathcal{N}}$) are, respectively, the quantum and the nonsignaling bounds of the original (party-lifted) Bell inequality.
\end{observation}
\begin{proof}
For a tripartite Bell scenario relevant to inequality~\eqref{LPBE}, the marginal probability of Charlie getting the outcome $c'$ conditioned on him performing the measurement labeled by $z'$ is:
\begin{equation}
P(c'|z')=\sum_{a,b}P(abc'|xyz').
\end{equation}
Since the party-lifted inequality of Eq.~\eqref{LPBE} is saturated with the choice of $P(c'|z')=0$, thereby making $P(abc'|xyz')=0$ for all $a,b,x,y$, the observation holds trivially if inequality~\eqref{LPBE} {\em cannot} be violated by general nonsignaling correlations.
Conversely, if inequality~\eqref{LPBE} can be violated by some quantum, or general nonsignaling correlation, the corresponding
$P(c'|z')$ must be nonvanishing. Henceforth, we thus assume that $P(c'|z')>0$.
To this end, note that
\begin{equation}\label{Eq:Dfn:ConditionalBipartite}
{P}_{c'|z'}(ab|xy):=P(abc'|xyz')/P(c'|z'),
\end{equation}
gives the probabilities of Alice and Bob obtaining the outcomes $a$ and $b$ conditioned on her (him) choosing measurement $x$ ($y$), Charlie
measuring $z'$ and obtaining the outcome $c'$.
Note that the vector of probabilities $ \vec{P}_{c'|z'}:=\{{P}_{c'|z'}(ab|xy)\}$
is a legitimate correlation in the (original) Bell scenario corresponding to inequality~(\ref{OnPBE}).
To prove the observation, we now focus on the case of finding the quantum bound, i.e., the maximum value of the left-hand-side of Eq.~\eqref{LPBE} for quantum correlations --- the proof for the nonsignaling case is completely analogous. To this end, note that the quantum bound of inequality~\eqref{LPBE}---given the above remarks---satisfies:
\begin{align}\label{Eq:BoundPartyLifting}
\beta^\text{\tiny LP}_{\mathcal{Q}}&=\max_{\{P(abc'|xyz')\} \in \mathcal{Q}_3} \sum_{a,b,x,y} B_{a,b,x,y}P(abc'|xyz')\nonumber\\
&=\max_{\{P(abc'|xyz')\} \in \mathcal{Q}_3} \sum_{a,b,x,y} B_{a,b,x,y}{P}_{c'|z'}(ab|xy)P(c'|z')\nonumber\\
&\le\max_{\vec{P}_{c'|z'} \in \mathcal{Q}_2} \sum_{a,b,x,y} B_{a,b,x,y}{P}_{c'|z'}(ab|xy)\max P(c'|z')\nonumber\\
&=\max_{\vec{P}_{c'|z'} \in \mathcal{Q}_2} \sum_{a,b,x,y} B_{a,b,x,y}{P}_{c'|z'}(ab|xy)= \beta_\mathcal{Q},
\end{align}
where the first inequality follows from the fact that an independent maximization over $\vec{P}_{c'|z'} \in \mathcal{Q}_2$ and $P(c'|z')$ is, in principle, less constraining than a maximization over all tripartite quantum distributions $\{P(abc'|xyz')\}$, the second-last equality follows from the fact that $P(c'|z')\le1$ for legitimate marginal probability distributions, and the last equality follows from the fact that any bipartite quantum correlation can be seen as the marginalization of a tripartite one.
To complete the proof, note that the inequality $\beta^\text{\tiny LP}_{\mathcal{Q}}\le \beta_{\mathcal{Q}}$ can indeed be saturated if
\begin{subequations}
\begin{gather}
P(abc'|xyz') = P^*(ab|xy)P(c'|z')\quad \forall\,\, a,b,x,y,\label{Eq:Factorization}\\
P(c'|z') = 1,\label{Eq:Deterministic}\\
\sum_{a,b,x,y} B_{a,b,x,y}P^*(ab|xy) = \beta_\mathcal{Q}.\label{Eq:Saturation}
\end{gather}
\end{subequations}
Moreover, these equations can be simultaneously satisfied with the three parties sharing a state of the form $\ket{\psi_{123}} = \ket{\psi^*_{12}}\otimes\ket{\psi_3}$ while
employing the local measurements:
\begin{equation}
M^{(1)}_{a|x} = M^{(1*)}_{a|x},\quad M^{(2)}_{b|y} = M^{(2*)}_{b|y},\quad M^{(3)}_{c|z'} = {\bf{1}}\delta_{c,c'},
\end{equation}
where $\ket{\psi^*_{12}}, \{M^{(1*)}_{a|x}\}_{a,x}, \{M^{(2*)}_{b|y}\}_{b,y}$ constitute a maximizer for the (original) Bell inequality of Eq.~\eqref{OnPBE}, i.e.,
\begin{equation}
\begin{split}
\beta_\mathcal{Q}&= \max_{\vec{P} \in \mathcal{Q}_2} \sum_{a,b,x,y} B_{a,b,x,y}{P}(ab|xy)\\
&= \sum_{a,b,x,y} B_{a,b,x,y}{P^*}(ab|xy) \\
&= \sum_{a,b,x,y} B_{a,b,x,y}\bra{\psi^*_{12}} M^{(1*)}_{a|x}\otimes M^{(2*)}_{b|y}\ket{\psi^*_{12}}.
\end{split}
\end{equation}
\end{proof}
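The saturation conditions, Eqs.~\eqref{Eq:Factorization}--\eqref{Eq:Saturation}, can be checked numerically in a small example. The sketch below (our own illustration with CHSH shifted to have local bound zero as one way to realize Eq.~\eqref{OnPBE}; the helper names are ours) lets Charlie hold an uncorrelated qubit and output $c'=0$ deterministically, so that the party-lifted value reaches the quantum bound of the original inequality:

```python
import numpy as np

I2 = np.eye(2)
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

proj = lambda obs: [(I2 + obs) / 2, (I2 - obs) / 2]
A = [proj(sz), proj(sx)]
Bm = [proj((sz + sx) / np.sqrt(2)), proj((sz - sx) / np.sqrt(2))]

# Charlie holds an uncorrelated qubit |0> and uses the trivial POVM
# M_{c|z'} = 1 * delta_{c,c'}, so that P(c'=0|z'=0) = 1 deterministically
psi123 = np.kron(phi, np.array([1.0, 0.0]))
rho123 = np.outer(psi123, psi123)
M_charlie = np.eye(2)                  # the c = c' = 0 effect

def party_lifted_quantum_value():
    """Party-lifted, shifted-CHSH value for the product strategy above."""
    val = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for x in (0, 1):
                for y in (0, 1):
                    M = np.kron(np.kron(A[x][a], Bm[y][b]), M_charlie)
                    p = np.trace(M @ rho123).real   # P(a b c' | x y z')
                    val += ((-1) ** (x * y + a + b) - 0.5) * p
    return val
```

The returned value is $2\sqrt{2}-2$, i.e., the quantum bound of the shifted CHSH inequality, confirming that a product extension with a deterministic Charlie saturates $\beta^\text{\tiny LP}_\mathcal{Q}=\beta_\mathcal{Q}$.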
\subsubsection{Implications on self-testing}
As an implication of the above observation, we obtain the following result in the context of quantum theory.
\begin{cor}\label{Res:PartyLifting}
If a Bell inequality self-tests $\ket{\tilde{\psi}}$ and some reference POVMs $\{\tilde{M}^{(1)}_{a|x}\}_{a,x}, \{\tilde{M}^{(2)}_{b|y}\}_{b,y}$ etc., then the maximal quantum violation of any Bell inequality obtained therefrom via party-lifting also self-tests the same state and the same local POVMs for an appropriate subset of parties.
\end{cor}
\begin{proof}
In the following, we use inequality~\eqref{LPBE} to illustrate how the proof works in the tripartite case. Note from Eq.~\eqref{Eq:BoundPartyLifting} that when the party-lifted Bell inequality of Eq.~(\ref{LPBE}) is violated to its quantum maximum $\beta_\mathcal{Q}$, the marginal distribution of Charlie necessarily satisfies $P(c'|z')=1$. It then follows from Eq.~\eqref{Eq:Dfn:ConditionalBipartite} that
\begin{equation}\label{Eq:Factorization2}
P(abc'|xyz')=P(ab|xy)P(c'|z') \quad \forall\,\, a,b,x,y.
\end{equation}
Furthermore, from Eq.~\eqref{Eq:BoundPartyLifting}, this tripartite distribution gives the quantum bound of inequality~\eqref{LPBE} if and only if the marginal distribution $P(ab|xy)$ of Eq.~\eqref{Eq:Factorization2} violates the original Bell inequality of Eq.~\eqref{OnPBE} to its quantum bound.
Therefore, if the original Bell inequality self-tests the $2$-partite entangled state $\ket{\tilde{\psi}_{12}}$, then for any tripartite density matrix $\rho_{123}$ leading to the quantum maximum of inequality Eq.~\eqref{LPBE}, there must exist a local isometry $\Phi=\Phi_1 \otimes \Phi_2 $ such that
\begin{equation}
\begin{split}
&\Phi \operatorname{tr}_{3} \left(\rho_{123}\right) \Phi^\dag=\proj{\tilde{\psi}}\otimes \rho_{aux},\\
&\Phi \left[ M^{(1)}_{a|x} \otimes M^{(2)}_{b|y} \operatorname{tr}_{3} \left(\rho_{123}\right) \right] \Phi^\dag \\
= &\left( \tilde{M}^{(1)}_{a|x} \otimes \tilde{M}^{(2)}_{b|y} \proj{\tilde{\psi}} \right) \otimes \rho_{aux},
\end{split}
\end{equation}
where $\rho_{aux}$ is some auxiliary density matrix acting on other degrees of freedom of Alice and Bob's subsystem. In other words, if the quantum maximum of the original Bell inequality can be used to self-test $\ket{\tilde{\psi}}$ and reference POVMs $\{\tilde{M}^{(1)}_{a|x}\}_{a,x}$, $\{\tilde{M}^{(2)}_{b|y}\}_{b,y}$, so does the quantum maximum of the party-lifted Bell inequality.
\end{proof}
\subsubsection{Implications on device-independent certification of entanglement depth}
In Theorem 2 of~\cite{Curchod2015}, it was shown that if
\begin{align}\label{Eq:size-k}
\sum_{k_1\cdots k_n,j_1\cdots j_n} B_{\vec{k},\vec{j}}P(\vec{k}|\vec{j})\stackrel{\mathcal{R}_{n,\ell}}{\le} 0,
\end{align}
is satisfied by an $n$-partite resource $\mathcal{R}$ (quantum or nonsignaling) that has a group size of $\ell$, its lifting to $(n+h)$ parties also holds for the same kind of resource of group size $\ell$:
\begin{align}\label{Eq:size-k:n+h}
\sum_{k_1\cdots k_n,j_1\cdots j_n} B_{\vec{k},\vec{j}}P(\vec{k},\vec{o}|\vec{j},\vec{s})\stackrel{\mathcal{R}_{n+h,\ell}}{\le} 0,
\end{align}
where $\vec{o}$ ($\vec{s}$) is any {\em fixed}, but {\em arbitrary} string of outputs (inputs) for the $h$ additional parties. For the case of $\ell=n$, the above result reduces to Observation~\ref{PartyLiftingPreservation} discussed in Sec.~\ref{Sec:PartyLiftingPreservation}.
When the considered resource is restricted to shared quantum correlations, inequality~\eqref{Eq:size-k} and inequality~\eqref{Eq:size-k:n+h} are instances of so-called
device-independent witnesses for entanglement depth~\cite{Liang:PRL:2015}, i.e., Bell-like inequalities capable of certifying---directly from the observed correlation---a lower bound on the entanglement depth~\cite{Sorensen:PRL:2001} of the measured system. More specifically, if the observed quantum value of the left-hand-side of Eq.~\eqref{Eq:size-k} or Eq.~\eqref{Eq:size-k:n+h} is greater than 0, then one can certify that the locally measured quantum state must have an entanglement depth of at least $\ell+1$.
Although the above result of~\cite{Curchod2015} can be applied to an arbitrary number $(n+h)$ of parties, Observation~\ref{PartyLiftingPreservation} implies that if the seed inequality is one applicable to an $n$-partite Bell scenario, the extended scenario can never be used to certify an entanglement depth larger than $n$. This follows from the fact that the maximal quantum value of these party-lifted Bell-like inequalities is the same as that of the original Bell-like inequality [cf. Eq.~\eqref{Eq:size-k}], and is already attainable using a quantum state of entanglement depth $n$.
\section{Concluding Remarks}\label{con}
Lifting, as introduced by Pironio \cite{PIR05}, is a procedure that allows one to systematically construct Bell inequalities for {\em all} Bell scenarios starting from a Bell inequality applicable to a simpler scenario. It is known that Pironio's lifting preserves the facet-defining property of Bell inequalities, and thus lifted Bell inequalities (in particular, those lifted from the CHSH Bell inequality) can be found in {\em all} nontrivial Bell scenarios. In this work, we show that lifting leaves the maximal quantum and nonsignaling value of Bell inequalities unchanged.
Naturally, one may ask whether the quantum state and local measurements maximally violating a lifted Bell inequality are related to that of the original Bell inequality. Indeed, we show that Pironio's lifting also preserves the self-testability of a quantum state. Hence, the quantum state maximally violating a lifted Bell inequality is---modulo irrelevant local degrees of freedom---the same as that of the original inequality. Likewise, the self-testability of given local measurements is preserved using {\em any} but the outcome-lifting procedure.
The maximizers of lifted Bell inequalities are, as we show, generally not unique. Consequently, it is impossible to use the observed quantum value of such an inequality to self-test both the underlying state and {\em all} the local measurements: in the case of an input-lifted Bell inequality, no conclusions can be drawn regarding the additional measurements that do not appear in the inequality; in the case of a party-lifted Bell inequality, nothing can be said about the measurements of the additional party; in the case of an output-lifted Bell inequality, the self-testing of all the local POVM elements is impossible, but the self-testing of their combined effect seems possible. In fact, our numerical results (see Appendix~\ref{App:Self-test}) suggest that such a self-testing is just as robust as that based on the original Bell inequality. Thus, Bell inequalities lifted from CHSH serve as generic examples whose maximal quantum violation can be used to self-test a state, but not the underlying measurements in their entirety.
Notice also that the non-uniqueness mentioned above evidently becomes more and more pronounced as the number of ``irrelevant" degrees of freedom increases, for example, by repeatedly applying lifting to a given Bell inequality. Since {\em only} correlations $\vec{P}$ belonging to the boundary of $\mathcal{Q}$ could violate a linear Bell inequality maximally, as the complexity of the Bell scenario (say, in terms of the number of measurements, outcomes, or parties) increases, it is conceivable that one can always find a flat boundary of $\mathcal{Q}$ (corresponding to those of the lifted Bell inequality) with increasing dimension. Proving this statement rigorously and finding the exact scaling of the dimension of these flat boundaries would be an interesting direction to pursue for future research.
Besides, it will be interesting to see---in comparison with the original Bell inequality---whether the robustness in self-testing that we have observed for a particular version of the outcome-lifted CHSH Bell inequality is generic. From our example for the outcome-lifted CHSH inequality, it becomes clear that self-testing of the combined POVM elements is (sometimes) possible even if the self-testing of all individual POVM elements is not. This possibility opens another direction of research in the context of self-testing. In addition, our results also prompt the following question: does there exist a physical situation (say, the observation of the maximal quantum value of some Bell inequality) where the underlying measurements can be self-tested, but not the underlying state? Since the self-testing of measurements is seemingly more demanding than that of a quantum state, it is conceivable that no such examples can be constructed. Proving that this is indeed the case, however, clearly lies beyond the scope of the present paper.
{\em Note added:} While completing this manuscript, we became aware of the work of~\cite{Rosset2019private} which also discusses, among others, extensively the properties of liftings, as well as the work of~\cite{JedPrivate}, which exhibits examples of quantum correlations that can only be used to self-test the measured quantum state but not the underlying measurements.
\begin{acknowledgements}
We thank Jean-Daniel Bancal, J{\k{e}}drzej Kaniewski, Denis Rosset, and Ivan \v{S}upi\'c for useful discussions. This work is supported by the Foundation for the Advancement of Outstanding Scholarship, Taiwan as well as the Ministry of Science and Technology, Taiwan (Grants No.~104-2112-M-006-021-MY3, No.~107-2112-M-006-005-MY2, No.~107-2627-E-006-001, No.~108-2811-M-006-501, and No.~108-2811-M-006-515).
\end{acknowledgements}
\appendix
\section{Examples of $\vec{P}\in\mathcal{Q}$ violating lifted Bell inequalities maximally }
\label{Examples}
To illustrate the non-unique nature of the maximizers of lifted Bell inequalities, consider the CHSH Bell inequality~\cite{Clauser69}:
\begin{align}\label{chshineq}
\sum_{x,y,a,b=0,1} (-1)^{xy+a+b} P(ab|xy)\stackrel{\mathcal{L}}{\le} 2
\end{align}
as our seed inequality. Since this is a Bell inequality defined in the simplest Bell scenario (with two binary-outcome measurements per party), its liftings can be found in all nontrivial Bell scenarios.
The quantum bound and nonsignaling bound of the above Bell inequality
are given, respectively, by $\beta_{\mathcal{Q}}=2\sqrt{2}$ and $\beta_{\mathcal{N}}=4$.
It is known~\cite{PR92} that the maximal quantum violation of the CHSH inequality
can be used to self-test (up to local isometry) the two-qubit maximally
entangled state $\ket{\phi^+}=\left(\ket{00}+\ket{11}\right)/\sqrt{2}$ and
the Pauli observables $\{\sigma_z,\sigma_x\}$
on one side and the Pauli observables $\{(\sigma_x+\sigma_z)/\sqrt{2},(\sigma_x-\sigma_z)/\sqrt{2}\}$ on the other.
Thus, the correlation that gives the quantum maximum of the inequality~\eqref{chshineq} is {\em unique}
and is given by
\begin{equation}\label{Eq:TsirelsonPoint}
P_Q(ab|xy)=\tfrac{1}{4}+(-1)^{a+ b + x y}\tfrac{\sqrt{2}}{8}, \,\,\, a,b,x,y\in\{0,1\},
\end{equation}
where the $+1$-outcome of the observables is identified with the 0-th outcome in the conditional outcome probability distributions.
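For the interested reader, this claim can be checked numerically. The short Python sketch below (ours, purely illustrative and not part of the original analysis) verifies that the correlation of Eq.~\eqref{Eq:TsirelsonPoint} is a valid, normalized distribution attaining the CHSH value $2\sqrt{2}$:

```python
import itertools
from math import sqrt, isclose

def P_Q(a, b, x, y):
    """Tsirelson-point correlation: 1/4 + (-1)^(a+b+xy) * sqrt(2)/8."""
    return 0.25 + (-1) ** (a + b + x * y) * sqrt(2) / 8

# CHSH Bell expression: sum_{x,y,a,b} (-1)^(xy+a+b) P(ab|xy)
chsh = sum((-1) ** (x * y + a + b) * P_Q(a, b, x, y)
           for a, b, x, y in itertools.product((0, 1), repeat=4))
assert isclose(chsh, 2 * sqrt(2))  # the quantum bound

# each conditional distribution is nonnegative and normalized
for x, y in itertools.product((0, 1), repeat=2):
    assert isclose(sum(P_Q(a, b, x, y) for a in (0, 1) for b in (0, 1)), 1.0)
    assert all(P_Q(a, b, x, y) >= 0 for a in (0, 1) for b in (0, 1))
```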
\subsection{Lifting of Inputs}
\label{App:LI}
For input-lifting, consider now a bipartite Bell scenario where Bob instead has three inputs $(y=0,1,2)$, each with binary outcomes. In this new Bell scenario, the following Bell inequality:
\begin{align}\label{liftinpchshineq}
I^\text{\tiny LI-CHSH}_2:=\sum_{x,y,a,b=0,1} (-1)^{x y + a + b} \tilde{P}(ab|xy) \stackrel{\mathcal{L}}{\le} 2,
\end{align}
in the variables $\{\tilde{P}(ab|xy)\}_{a,b,x=0,1;\,y=0,1,2}$, can be obtained by applying input-lifting to inequality~\eqref{chshineq}.
To illustrate the non-uniqueness of its maximizers, one may employ, e.g., either of the two trivial measurements for the third measurement $(y=2)$:
\begin{equation}
M^{(2)}_{b|2}={\bf{1}}\delta_{b,0}\quad\text{or}\quad M^{(2)}_{b|2}={\bf{1}}\delta_{b,1}.
\end{equation}
Correspondingly, one obtains, in addition to [cf. Eq.~\eqref{Eq:TsirelsonPoint}]
\begin{subequations}\label{Eq:PTilde}
\begin{equation}
\tilde{P}^\text{\tiny LI}_{Q_1}(ab|xy)=\tilde{P}^\text{\tiny LI}_{Q_2}(ab|xy)=P_Q(ab|xy)
\end{equation}
for $a,b,x,y=0,1$, the distributions
\begin{equation}
\tilde{P}^\text{\tiny LI}_{Q_1}(ab|xy)=\frac{1}{2}\delta_{b,0}\quad\text{and}\quad \tilde{P}^\text{\tiny LI}_{Q_2}(ab|xy)=\frac{1}{2}\delta_{b,1},
\end{equation}
\end{subequations}
for $a,b,x=0,1$ but $y=2$.\footnote{More abstractly, these correlations can also be obtained from Eq.~\eqref{Eq:TsirelsonPoint} by
applying input operations as given by Eq. (10) in \cite{Jul14}.}
It is then easily verified that both these correlations violate inequality~\eqref{liftinpchshineq} to its quantum maximum of $2\sqrt{2}$. In fact, since the Bell expression of Eq.~\eqref{liftinpchshineq} is linear in $\tilde{P}$, it follows that an arbitrary convex combination of $\tilde{P}^\text{\tiny LI}_{Q_1}$ and $\tilde{P}^\text{\tiny LI}_{Q_2}$
\begin{equation}\label{QBLCHSH}
\tilde{P}(ab|xy)=p\tilde{P}^\text{\tiny LI}_{Q_1}(ab|xy) +(1-p)\tilde{P}^\text{\tiny LI}_{Q_2}(ab|xy),
\end{equation}
where $0\le p \le 1$, must also give the maximal quantum value of Bell inequality~\eqref{liftinpchshineq}. Moreover, from the convexity of the set of quantum correlations $\mathcal{Q}$, we know that an arbitrary convex combination of the two correlations given above is also quantum realizable. Geometrically, this means that the set of $\tilde{P}\in\mathcal{Q}$ defined by Eq.~\eqref{QBLCHSH} forms a one-dimensional flat region of the quantum boundary. In Fig.~\ref{Fig1}, we show a two-dimensional slice of the space of correlations spanned by $\tilde{P}^\text{\tiny LI}_{Q_1}$, $\tilde{P}^\text{\tiny LI}_{Q_2}$, and the uniform distribution $\tilde{P}_0$. Note that on this peculiar slice, even $\mathcal{Q}$ appears to be a polytope.
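As a numerical sanity check (again an illustrative sketch of ours, not part of the derivation), one can verify that every convex mixture of the two input-lifted maximizers of Eq.~\eqref{Eq:PTilde} attains the quantum maximum $2\sqrt{2}$ of inequality~\eqref{liftinpchshineq}:

```python
import itertools
from math import sqrt, isclose

def P_Q(a, b, x, y):
    # Tsirelson-point correlation of Eq. (Eq:TsirelsonPoint)
    return 0.25 + (-1) ** (a + b + x * y) * sqrt(2) / 8

def P_mix(p):
    """Convex mixture p*P_LI_Q1 + (1-p)*P_LI_Q2 of the two input-lifted maximizers."""
    def P(a, b, x, y):
        if y < 2:                 # first two inputs: Tsirelson-point statistics
            return P_Q(a, b, x, y)
        # third input: mixture of the two deterministic (trivial) measurements
        return 0.5 * (p * (b == 0) + (1 - p) * (b == 1))
    return P

for p in (0.0, 0.25, 0.5, 1.0):
    P = P_mix(p)
    # the input-lifted Bell expression only involves y = 0, 1
    val = sum((-1) ** (x * y + a + b) * P(a, b, x, y)
              for a, b, x, y in itertools.product((0, 1), repeat=4))
    assert isclose(val, 2 * sqrt(2))   # every mixture attains the quantum maximum
    # normalization of the third input's distribution
    for x in (0, 1):
        assert isclose(sum(P(a, b, x, 2) for a in (0, 1) for b in (0, 1)), 1.0)
```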
\subsection{Lifting of Outcomes}
\label{App:LO}
For output-lifting, consider a bipartite Bell scenario where Bob's measurements have instead three outcomes $(b=0,1,2)$. In this Bell scenario, the following Bell inequality:
\begin{align}\label{liftchshineq}
I^\text{\tiny LO-CHSH}_2:=\sum_{x,y,a=0,1}\sum_{b=0,1,2}\!\!\! (-1)^{x y + a+b} P(ab|xy) \stackrel{\mathcal{L}}{\le} 2
\end{align}
can be obtained by applying outcome-lifting to the $b=0$ outcome of Bob's measurements in inequality~\eqref{chshineq}.
Here, $b=2$ is the new outcome.
Indeed, it is readily seen that the two correlations:
\begin{gather}
\label{EQCL1}
P^\text{\tiny LO}_{Q_1}(ab|xy)
= \left[\tfrac{1}{4}+(-1)^{a+ b + x y}\tfrac{\sqrt{2}}{8}\right](1-\delta_{b,2}),\\
\label{EQCL2}
P^\text{\tiny LO}_{Q_2}(ab|xy)
=\left[\tfrac{1}{4}+(-1)^{a+ b + x y}\tfrac{\sqrt{2}}{8}\right](1-\delta_{b,0})
\end{gather}
as well as
\begin{equation}\label{Eq:OL34}
\begin{split}
P^\text{\tiny LO}_{Q_3}(ab|xy)&=P^\text{\tiny LO}_{Q_1}(ab|xy)\delta_{y,0}+P^\text{\tiny LO}_{Q_2}(ab|xy)\delta_{y,1},\\
P^\text{\tiny LO}_{Q_4}(ab|xy)&=P^\text{\tiny LO}_{Q_1}(ab|xy)\delta_{y,1}+P^\text{\tiny LO}_{Q_2}(ab|xy)\delta_{y,0}
\end{split}
\end{equation}
all give rise to the quantum maximum of $2\sqrt{2}$ for the Bell inequality of Eq.~\eqref{liftchshineq}.
To see that they are indeed quantum realizable, we note that the correlation given in Eq. (\ref{EQCL1}) can be produced by employing the quantum strategy used to produce the correlation given in Eq.~\eqref{Eq:TsirelsonPoint}. By construction, Bob's measurement only produces two outcomes labeled by $b=0$ and $b=1$, and thus the $b=2$ outcome never appears, as required in Eq.~\eqref{EQCL1}.
To obtain the correlation given in Eq. (\ref{EQCL2}), one may start from the correlation of Eq.~\eqref{EQCL1} and apply the classical relabeling of $b=0\leftrightarrow b=2$.
Similarly, the two correlations of Eq.~\eqref{Eq:OL34} can be realized by first implementing the quantum strategy that realizes $\vec{P}^\text{\tiny LO}_{Q_1}$, followed by applying the classical relabeling of $b=0\leftrightarrow b=2$ depending on whether $y=0$ or $y=1$ \footnote{This belongs to the class of outcome operations given by Eq. (9) in \cite{Jul14}.}. Since $\{\vec{P}^\text{\tiny LO}_{Q_i}\}_{i=1}^4$ forms a linearly independent set, and an arbitrary convex combination of them also gives the quantum bound, we thus see that the quantum face\footnote{The set of quantum correlations that gives the quantum bound of a Bell inequality is called a quantum face, see~\cite{GKW+18}.} of the outcome-lifted inequality of Eq.~\eqref{liftchshineq} is (at least) three-dimensional.
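The maximal violation by all four outcome-lifted correlations can also be checked directly. The Python sketch below (ours, illustrative) evaluates the Bell expression of Eq.~\eqref{liftchshineq} on the correlations of Eqs.~\eqref{EQCL1}--\eqref{Eq:OL34}:

```python
import itertools
from math import sqrt, isclose

def P_chsh(a, b, x, y):
    # Tsirelson-point statistics of Eq. (Eq:TsirelsonPoint)
    return 0.25 + (-1) ** (a + b + x * y) * sqrt(2) / 8

def P1(a, b, x, y): return P_chsh(a, b, x, y) * (b != 2)   # outcome b = 2 never occurs
def P2(a, b, x, y): return P_chsh(a, b, x, y) * (b != 0)   # relabeling b: 0 <-> 2
def P3(a, b, x, y): return P1(a, b, x, y) if y == 0 else P2(a, b, x, y)
def P4(a, b, x, y): return P2(a, b, x, y) if y == 0 else P1(a, b, x, y)

def I_LO(P):
    """Outcome-lifted CHSH expression of Eq. (liftchshineq)."""
    return sum((-1) ** (x * y + a + b) * P(a, b, x, y)
               for x, y, a in itertools.product((0, 1), repeat=3)
               for b in (0, 1, 2))

# all four correlations attain the quantum maximum 2*sqrt(2)
for P in (P1, P2, P3, P4):
    assert isclose(I_LO(P), 2 * sqrt(2))
```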
\begin{figure}[t!]
\includegraphics[width=7.5cm]{2DSliceSub_25-Jul-2019}
\caption{A two-dimensional slice in the input-lifted space of correlations spanned by $\tilde{P}^\text{\tiny LI}_{Q_1}$, $\tilde{P}^\text{\tiny LI}_{Q_2}$ [see Eq.~\eqref{Eq:PTilde}] and the uniform distribution $\tilde{P}_0$. From the innermost to the outermost, we have, respectively, the set of Bell-local correlations $\mathcal{L}$ (green), the set of quantum correlations $\mathcal{Q}$ (red, dashed boundary), and the set of nonsignaling correlations $\mathcal{N}$ (mauve). To illustrate the degeneracy in the maximally-violating correlations, we have chosen the input-lifted Bell inequality of Eq.~\eqref{liftinpchshineq} and the marginal correlator for $y=2$, i.e., $E^B_2=\tilde{P}(b=0|y=2)-\tilde{P}(b=1|y=2)$ as, respectively, the vertical and horizontal axes of this plot. As opposed to the two-dimensional slices shown in~\cite{GKW+18}, the set of quantum correlations $\mathcal{Q}$ appears to be a rectangle on this slice.
\label{Fig1}}
\end{figure}
\subsection{Lifting of Party}
Geometrically, party-lifting also introduces degeneracy in the maximizers of a Bell inequality. For example, a possible party-lifting of the CHSH Bell inequality of Eq.~\eqref{chshineq} to the three-party, two-input, two-output Bell scenario reads as:
\begin{equation}\label{Eq:CHSH:PartyLifted}
\sum_{x,y,a,b=0,1} \left\{\left[(-1)^{xy+a+b}-\tfrac{1}{2}\right] P(ab0|xy0)\right\}\stackrel{\mathcal{L}}{\le} 0
\end{equation}
As mentioned in the proof of Corollary~\ref{Res:PartyLifting}, maximizers of this Bell inequality must be such that the bipartite marginal distribution $P(ab|xy)$ violates the CHSH Bell inequality maximally while the marginal distribution for the third party satisfies $P(0|0)=1$. However, these conditions do not impose any constraint on the other marginal distribution $P(c|z)$ for $z\neq 0$. In particular, both $P(c|1)=\delta_{c,0}$ and $P(c|1)=\delta_{c,1}$ fulfill the above requirement. As such, although the quantum face of the CHSH Bell inequality is a point in the correlation space, the quantum face of the party-lifted Bell inequality of Eq.~\eqref{Eq:CHSH:PartyLifted} becomes one-dimensional.
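The degeneracy just described can be illustrated numerically. In the sketch below (ours, illustrative), Alice and Bob follow the Tsirelson-point statistics while the third party's marginal for $z=1$ is a free deterministic choice; the Bell expression of Eq.~\eqref{Eq:CHSH:PartyLifted} only probes $z=0$, so both choices attain the same value $2\sqrt{2}-2$:

```python
import itertools
from math import sqrt, isclose

def P_chsh(a, b, x, y):
    return 0.25 + (-1) ** (a + b + x * y) * sqrt(2) / 8

def P3party(c1):
    """Tripartite maximizer: Tsirelson statistics for Alice/Bob, deterministic
    third party with P(0|0) = 1 and P(c|1) = delta_{c,c1}; c1 is unconstrained."""
    def PC(c, z):
        return float(c == 0) if z == 0 else float(c == c1)
    def P(a, b, c, x, y, z):
        return P_chsh(a, b, x, y) * PC(c, z)
    return P

for c1 in (0, 1):           # both deterministic choices for z = 1 are maximizers
    P = P3party(c1)
    val = sum(((-1) ** (x * y + a + b) - 0.5) * P(a, b, 0, x, y, 0)
              for a, b, x, y in itertools.product((0, 1), repeat=4))
    assert isclose(val, 2 * sqrt(2) - 2)   # value independent of c1
```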
\section{Quantum realizability of distributions obtained by grouping and splitting outcomes}\label{QRpre}
In this Appendix, we provide the details showing how one can realize quantum-mechanically a fewer-outcome (more-outcome) correlation, given that the original more-outcome (fewer-outcome) correlation is quantum, while preserving the quantum violation of a Bell inequality and that of its outcome-lifted version.
\subsection{Grouping of outcomes}
Suppose that the joint probabilities $P(ab'|xy')$ and $P(au|xy')$ on the right-hand-side of Eq. (\ref{coarse-grain}) are realized by
a quantum state $\rho_{12}$ with the POVM $\{M^{(1)}_{a|x}\}_{a,x}$ on Alice's side and $\{M^{(2)}_{b|y}\}_{b,y}$
on Bob's side, cf. Eq.~\eqref{Eq:Born2}. Then the joint probabilities $\tilde{P}(ab'|xy')$ appearing on the left-hand-side of Eq.~\eqref{coarse-grain}, which correspond to an effective $v$-outcome distribution, are realizable by the same quantum state $\rho_{12}$ with the same POVM $\{M^{(1)}_{a|x}\}_{a,x}$ on Alice's side
and the following POVM $\tilde{M}^{(2)}_{b|y}$ on Bob's side:\footnote{That $\tilde{M}^{(2)}_{b|y}$ satisfies both the positivity constraints and the normalization constraints is evident from Eq.~\eqref{Eq:POVM:Grouping}.}
\begin{equation}\label{Eq:POVM:Grouping}
\begin{split}
\tilde{M}^{(2)}_{b|y} &= M^{(2)}_{b|y}\quad y\neq y',\\
\tilde{M}^{(2)}_{b|y'} &= M^{(2)}_{b|y'}\quad b\not\in\{b',u\},\\
\tilde{M}^{(2)}_{b'|y'} &= M^{(2)}_{b'|y'}+ M^{(2)}_{u|y'}.
\end{split}
\end{equation}
With this choice, it follows from
\begin{equation}\label{Eq:Born3}
\tilde{P}(ab'|xy')=\operatorname{tr} (M^{(1)}_{a|x} \otimes \tilde{M}^{(2)}_{b'|y'} \rho_{12}),
\end{equation}
Eq.~\eqref{Eq:POVM:Grouping}, and
\begin{align}\label{Eq:BornRule:Combined}
\operatorname{tr} (M^{(1)}_{a|x} \otimes \tilde{M}^{(2)}_{b'|y'} \rho_{12})&=\operatorname{tr} (M^{(1)}_{a|x} \otimes {M}^{(2)}_{b'|y'} \rho_{12}) \nonumber \\
&+\operatorname{tr} ( M^{(1)}_{a|x} \otimes {M}^{(2)}_{u|y'} \rho_{12})
\end{align}
that Eq.~\eqref{coarse-grain} is satisfied, and hence that the violation-preserving fewer-outcome correlation is indeed attainable by coarse graining, i.e., the grouping of outcomes.
\subsection{Splitting of outcomes}
\label{App:Split}
On the other hand, if we instead start from the original (fewer-outcome) Bell scenario, then the joint probabilities appearing on the right-hand-side of Eq.~\eqref{Eq:fine_grain}, which correspond to a $(v+1)$-outcome distribution, can be realized by, e.g., employing the same quantum state $\rho_{12}$ with the same POVM $\{M^{(1)}_{a|x}\}_{a,x}$ on Alice's side and the following POVM $M^{(2)}_{b|y}$ on Bob's side:
\begin{equation}\label{Eq:POVM:Splitting}
\begin{split}
M^{(2)}_{b|y} &= \tilde{M}^{(2)}_{b|y}\quad y\neq y',\\
M^{(2)}_{b|y'} &= \tilde{M}^{(2)}_{b|y'}\quad b\not\in\{b',u\},\\
M^{(2)}_{b'|y'} = p\tilde{M}^{(2)}_{b'|y'},&\quad M^{(2)}_{u|y'} = (1-p)\tilde{M}^{(2)}_{b'|y'},
\end{split}
\end{equation}
for arbitrary $0\le p \le 1$. The positivity of the left-hand-side of Eq.~\eqref{Eq:POVM:Splitting} and their normalization are evident from their definition. Moreover, using
Eq.~\eqref{Eq:BornRule:Combined} and
\begin{equation}\label{Eq:Born4}
\widehat{P}(ab|xy)=\operatorname{tr} (M^{(1)}_{a|x} \otimes {M}^{(2)}_{b|y} \rho_{12}),
\end{equation}
it is easy to see that Eq.~\eqref{Eq:fine_grain} holds with the assignment given in Eq.~\eqref{Eq:POVM:Splitting}. Hence, the POVM given by the left-hand-side of Eq.~\eqref{Eq:POVM:Splitting} indeed realizes the required violation-preserving more-outcome correlation by splitting the $b'$-outcome of Bob's $y'$-th measurement.
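At the level of the resulting correlations, the splitting and grouping operations above form an exact round trip. The Python sketch below (ours, illustrative) splits the $b'=0$ outcome of Bob's $y'=0$ measurement of the Tsirelson-point correlation with weight $p$, checks that the outcome-lifted CHSH value of Eq.~\eqref{liftchshineq} is preserved (outcomes $0$ and $2$ carry the same coefficient), and verifies that grouping the two outcomes recovers the original correlation:

```python
import itertools
from math import sqrt, isclose

def P_Q(a, b, x, y):
    # Tsirelson-point correlation of Eq. (Eq:TsirelsonPoint)
    return 0.25 + (-1) ** (a + b + x * y) * sqrt(2) / 8

def split(P, p, yp=0):
    """Split outcome b' = 0 of Bob's measurement y' = yp into outcomes 0
    (weight p) and 2 (weight 1 - p), mirroring the POVM assignment above."""
    def Phat(a, b, x, y):
        if y != yp:
            return 0.0 if b == 2 else P(a, b, x, y)
        if b == 0:
            return p * P(a, 0, x, y)
        if b == 2:
            return (1 - p) * P(a, 0, x, y)
        return P(a, b, x, y)
    return Phat

for p in (0.0, 0.4, 1.0):
    Phat = split(P_Q, p)
    # the outcome-lifted CHSH value is preserved under splitting
    val = sum((-1) ** (x * y + a + b) * Phat(a, b, x, y)
              for x, y, a in itertools.product((0, 1), repeat=3) for b in (0, 1, 2))
    assert isclose(val, 2 * sqrt(2))
    # grouping outcomes 0 and 2 back recovers the original two-outcome correlation
    for a, b, x, y in itertools.product((0, 1), repeat=4):
        grouped = Phat(a, b, x, y) + (Phat(a, 2, x, y) if b == 0 else 0.0)
        assert isclose(grouped, P_Q(a, b, x, y))
```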
\section{Robust self-testing based on the quantum violation of the outcome-lifted CHSH inequality}
\label{App:Self-test}
We show in this Appendix that self-testing via the quantum violation of the outcome-lifted inequality of Eq.~\eqref{liftchshineq} is robust. In this regard, note that the maximal quantum violation of inequality~\eqref{liftchshineq} can also be achieved by Alice and Bob sharing the following two-qubit maximally entangled state
\begin{equation}\label{Eq:TargetState}
\ket{\tilde{\psi}} = \cos\frac{\pi}{8}\frac{\ket{00}-\ket{11}}{\sqrt{2}} + \sin\frac{\pi}{8}\frac{\ket{01}+\ket{10}}{\sqrt{2}},
\end{equation}
while performing the optimal qubit measurements for Alice:
\begin{equation}\label{Eq:POVMA}
\begin{aligned}
\tilde{M}_{0|0}^{(1)} = \frac{1}{2}(\openone + \sigma_z),\quad \tilde{M}_{1|0}^{(1)} = \frac{1}{2}(\openone - \sigma_z),\\
\tilde{M}_{0|1}^{(1)} = \frac{1}{2}(\openone + \sigma_x),\quad \tilde{M}_{1|1}^{(1)} = \frac{1}{2}(\openone - \sigma_x),\\
\end{aligned}
\end{equation}
and for Bob:
\begin{equation}\label{Eq:POVMB}
\begin{aligned}
\tilde{M}_{0|0}^{(2)} = E_{0|0},\quad \tilde{M}_{1|0}^{(2)} = \frac{1}{2}(\openone - \sigma_z),\quad \tilde{M}_{2|0}^{(2)} = E_{2|0},\\
\tilde{M}_{0|1}^{(2)} = E_{0|1},\quad \tilde{M}_{1|1}^{(2)} = \frac{1}{2}(\openone - \sigma_x),\quad \tilde{M}_{2|1}^{(2)} = E_{2|1},\\
\end{aligned}
\end{equation}
where $\{E_{b|y}\}_{b=0,2}$ are any valid POVM elements satisfying $\sum_{b=0,2} E_{b|y} = \openone - \tilde{M}_{1|y}^{(2)}$ for all $y$.
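This realization can be verified with exact numerics. In the sketch below (ours, illustrative), we pick one valid completion of Bob's POVMs, namely $E_{0|y}=\openone-\tilde{M}^{(2)}_{1|y}$ and $E_{2|y}=0$, and evaluate the Bell expression of Eq.~\eqref{liftchshineq} via the Born rule on the state of Eq.~\eqref{Eq:TargetState}:

```python
from itertools import product
from math import cos, sin, pi, sqrt, isclose

I2 = [[1, 0], [0, 1]]
Z = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]

def proj(op, sign):          # (1 + sign*op)/2 for a 2x2 observable op
    return [[(I2[i][j] + sign * op[i][j]) / 2 for j in range(2)] for i in range(2)]

def kron(A, B):              # 2x2 tensor 2x2 -> 4x4
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)] for i in range(4)]

def expval(M, v):            # <v|M|v> for a real state vector v
    return sum(v[i] * M[i][j] * v[j] for i in range(4) for j in range(4))

# reference state |psi~> = cos(pi/8)|phi-> + sin(pi/8)|psi+>
c, s = cos(pi / 8) / sqrt(2), sin(pi / 8) / sqrt(2)
psi = [c, s, s, -c]          # components on |00>, |01>, |10>, |11>

ops = {0: Z, 1: X}
MA = {(x, a): proj(ops[x], 1 - 2 * a) for x in (0, 1) for a in (0, 1)}

zero = [[0, 0], [0, 0]]
MB = {}
for y in (0, 1):
    MB[(y, 0)] = proj(ops[y], +1)   # E_{0|y}: our choice of completion
    MB[(y, 1)] = proj(ops[y], -1)   # reference element M~_{1|y}
    MB[(y, 2)] = zero               # E_{2|y}: our choice of completion

val = sum((-1) ** (x * y + a + b) * expval(kron(MA[(x, a)], MB[(y, b)]), psi)
          for x, y, a in product((0, 1), repeat=3) for b in (0, 1, 2))
assert isclose(val, 2 * sqrt(2))    # the quantum maximum is attained
```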
Notice that Eq.~\eqref{Eq:StateTransformation} and Eq.~\eqref{Eq:POVMTransformation} only hold for the case of perfect self-testing of the reference state and reference measurements. To demonstrate robust self-testing for the above reference state $\ket{\tilde{\psi}}$ and reference measurements $\{\tilde{M}^{(1)}_{a|x}\}_{a,x}$, $\{\tilde{M}^{(2)}_{b|y}\}_{b,y}$, we follow the approach of~\cite{YVB+14} to arrive at statements saying that if the observed quantum violation of inequality~\eqref{liftchshineq} is close to its maximal value, then (1) the measured system contains some degrees of freedom that have a high fidelity with respect to the reference state $\ket{\tilde{\psi}}$, and (2) with high probability, the uncharacterized measurement devices function like $\{\tilde{M}^{(1)}_{a|x}\}_{a,x}$, $\{\tilde{M}^{(2)}_{b|y}\}_{b,y}$ acting on the same degrees of freedom.
\subsection{Robust self-testing of the reference state}
To this end, we shall make use of the swap method proposed in~\cite{YVB+14}. The key idea is to introduce local swap operators $\Phi_1,\Phi_2$ so that the state acting on Alice's and Bob's Hilbert space (of unknown dimension) gets swapped locally with some auxiliary states of trusted Hilbert space dimension (qubit in our case). To better understand how this works, let us first consider an example with characterized devices before proceeding to the case where the devices are uncharacterized.
For this purpose, let us concatenate the following controlled-not (CNOT) gates
\begin{equation}\label{Eq:UV}
\begin{aligned}
&U_1 = \openone\otimes \proj{0} + \sigma_x\otimes\proj{1},\\
&V_1 = \proj{0}\otimes\openone + \proj{1}\otimes\sigma_x,
\end{aligned}
\end{equation}
to obtain the (two-qubit) swap gate $\Phi_1 = U_1V_1U_1$ acting on $\mathcal{H}_A\otimes\mathcal{H}_{A'}$ (see Sec.~\ref{Sec:SelfTest}). We may define a swap operator $\Phi_2=U_2V_2U_2$ acting on Bob's systems in exactly the same way. Importantly, one notices from Eq.~\eqref{Eq:POVMA} and Eq.~\eqref{Eq:POVMB} that it is possible to express the individual unitaries in terms of the POVM elements leading to the maximal quantum violation of inequality~\eqref{liftchshineq}. For example, one may take
\begin{equation}
\sigma_z = \openone - 2 \tilde{M}_{1|0}^{(i)},\quad \sigma_x = \openone - 2 \tilde{M}_{1|1}^{(i)}\quad\forall\,\, i=1,2.
\end{equation}
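The identity $\Phi_1=U_1V_1U_1$ being the two-qubit swap gate can be checked directly with exact integer arithmetic (a short illustrative sketch of ours):

```python
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
P0 = [[1, 0], [0, 0]]   # |0><0|
P1 = [[0, 0], [0, 1]]   # |1><1|

def kron(A, B):
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)] for i in range(4)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(4)] for i in range(4)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

U1 = add(kron(I2, P0), kron(X, P1))   # CNOT controlled on the second factor
V1 = add(kron(P0, I2), kron(P1, X))   # CNOT controlled on the first factor
Phi1 = mul(U1, mul(V1, U1))

SWAP = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
assert Phi1 == SWAP                    # U1 V1 U1 is indeed the two-qubit swap gate
```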
Moreover, if we define the global swap gate by $\Phi=\Phi_1\otimes\Phi_2$ and denote the state acting on $\mathcal{H}_A\otimes\mathcal{H}_B$ by $\rho_{12}$, then the ``swapped'' state is:
\begin{equation}\label{Eq:SwappedState}
\rho^{\text{\tiny SWAP}}:=\operatorname{tr}_{12}[\Phi(\proj{0}\otimes \rho_{12} \otimes\proj{0})\Phi^\dag]
\end{equation}
where $\operatorname{tr}_{12}$ represents a partial trace over the Hilbert space of $\mathcal{H}_A\otimes\mathcal{H}_B$. When $\Phi$ is exactly the swap gate defined above via Eq.~\eqref{Eq:UV}, $\rho^{\text{\tiny SWAP}}$ is exactly $\rho_{12}$. Thus, the fidelity between $\rho^{\text{\tiny SWAP}}$ and the reference state $\ket{\tilde{\psi}}$: $F=\langle \tilde{\psi}|\rho^{\text{\tiny SWAP}}|\tilde{\psi}\rangle$ provides a figure of merit on the similarity between (some relevant parts of) the shared state $\rho_{12}$ and the reference state $\ket{\tilde{\psi}}$.
To perform a device-independent characterization, the assumption of $\tilde{M}_{1|x}^{(1)}$ and $\tilde{M}_{1|y}^{(2)}$ is relaxed to unknown projectors $M_{1|x}^{(1)}$ and $M_{1|y}^{(2)}$ (acting on Hilbert spaces of arbitrary dimension), and the corresponding ``CNOT'' gates become
\begin{equation}
\begin{aligned}
&U_i = \openone\otimes \proj{0} + (\openone - 2 M_{1|1}^{(i)})\otimes\proj{1},\\
&V_i = (\openone - M_{1|0}^{(i)}) \otimes\openone + M_{1|0}^{(i)}\otimes\sigma_x,
\end{aligned}
\label{EqApp:_CNOTs_DI}
\end{equation}
for $i=1,2$. One can verify that the fidelity $F=\langle \tilde{\psi}|\rho^{\text{\tiny SWAP}}|\tilde{\psi}\rangle$ is then a linear function of the moments such as $\langle M_{a|x}^{(1)}\otimes M_{b|y}^{(2)}\rangle$, $\langle M_{a|x}^{(1)}\otimes M_{b|y}^{(2)}M_{b'|y'}^{(2)}\rangle$ etc., where $\langle\cdot\rangle:=\operatorname{tr}(\cdot\rho_{12})$.
Thus, a lower bound on $F$ for any observed value of Bell inequality violation (without assuming the shared state or the measurements performed) can be obtained by solving the following semidefinite program:
\begin{equation}
\begin{aligned}
\min\quad &F\\
\text{such that} \quad& \Gamma^S\succeq 0,\\
&I_2^{\text{\tiny LO-CHSH}} = I_{2,\text{\tiny obs}}^{\text{\tiny LO-CHSH}},
\end{aligned}
\label{EqApp:min_fidelity_SDP}
\end{equation}
where $\Gamma^S$ is any Navascu{\'e}s-Pironio-Ac{\'i}n-type~\cite{NPA07} moment matrix that contains all the moments appearing in $F$. In our computation, we employed a moment matrix that is built from a sequence of operators $S$ that contains all operators from level 1+AB (or equivalently, level 1 from the hierarchy of Ref.~\cite{MBL+13}) and some additional operators from level 3.
Our results (see Fig.~\ref{fig_min_fidelity_LOCHSH}) clearly show that the self-testing property of $I_2^{\text{\tiny LO-CHSH}}$ with respect to the reference maximally entangled state $\ket{\tilde{\psi}}$ of Eq.~\eqref{Eq:TargetState} is indeed robust. In other words, as long as the observed violation of $I_2^{\text{\tiny LO-CHSH}}$ is greater than $\approx2.4$, one can still obtain a non-trivial lower bound on the fidelity ($>1/2$) with respect to $\ket{\tilde{\psi}}$. Moreover, a separate computation using the original CHSH Bell inequality of Eq.~\eqref{chshineq} (and the same level of approximation of $\mathcal{Q}$) gives---within the numerical precision of the solver---the same curve, thereby suggesting that the outcome-lifted Bell inequality of Eq.~\eqref{liftchshineq} offers the same level of robustness as its seed inequality.
\begin{figure}
\includegraphics[width=0.9\linewidth]{fig_min_fidelity_LOCHSH}
\caption{Lower bounds on the fidelity as a function of the value of the outcome-lifted CHSH inequality $I_2^{\text{\tiny LO-CHSH}}$. The results are obtained by solving the semidefinite program described in Eq.~\eqref{EqApp:min_fidelity_SDP}.
}
\label{fig_min_fidelity_LOCHSH}
\end{figure}
\subsection{Robust self-testing of Alice's POVM}
Even though it is impossible to completely self-test all local measurements, robust self-testing of Alice's POVM---as one would intuitively expect---can still be achieved. In particular, when the observed violation of $I_2^{\text{\tiny LO-CHSH}}$ is close to the quantum bound of $2\sqrt{2}$, it must be the case that Alice's measurements (on the relevant degrees of freedom) indeed behave like measurements in the $\sigma_z$ and $\sigma_x$ bases, respectively, for $x=0,1$.
To that end, we again make use of the swap method proposed in Ref.~\cite{YVB+14}. The idea is that if these measurements behave as expected, then measuring the auxiliary states swapped into the uncharacterized device, i.e., $\Phi_1(\ket{\varphi})$---with $\ket{\varphi}$ being eigenstates of $\sigma_z$ and $\sigma_x$---should produce outcomes $a$ with statistics $\{P(a|x,\ket{\varphi})\}$ satisfying
\begin{equation}
P(0|0,|0\rangle) = P(1|0,|1\rangle) = P(0|1,|+\rangle) = P(1|1,|-\rangle) = 1.
\label{EqApp:Pax_ideal}
\end{equation}
\begin{figure}
\includegraphics[width=0.9\linewidth]{fig_min_tau_LOCHSH}
\caption{Lower bounds on the figure of merit defined in Eq.~\eqref{EqApp:tau} as a function of the value of the outcome-lifted CHSH inequality $I_2^{\text{\tiny LO-CHSH}}$. The bounds are obtained by solving the semidefinite program described in Eq.~\eqref{EqApp:min_tau_SDP}.
}
\label{fig_min_tau_LOCHSH}
\end{figure}
Using the same swap operator defined via Eq.~\eqref{EqApp:_CNOTs_DI}, we get
\begin{equation}\label{Eq:ProbOutcome}
P(a|x,|\varphi\rangle) = \operatorname{tr}\left\{M_{a|x}^{(1)}\left[\Phi_1\big( \rho_{12}\otimes\proj{\varphi}\big) \Phi_1^\dag\right]\right\},
\end{equation}
where we have, for simplicity, omitted the identity operator acting on Bob's system. Notice that the left-hand-side of Eq.~\eqref{Eq:ProbOutcome} is again some linear combination of moments. Likewise for the following figure of merit~\cite{YVB+14}:
\begin{equation}
\begin{aligned}
\tau = \frac{1}{2}\big[ &P(0|0,|0\rangle) + P(1|0,|1\rangle)\\
+ &P(0|1,|+\rangle) + P(1|1,|-\rangle) \big] - 1,
\end{aligned}
\label{EqApp:tau}
\end{equation}
which takes values between $-1$ and $+1$. The maximum of $+1$, in particular, occurs {\em only} when Alice's POVM elements $M^{(1)}_{a|x}$ correspond to measurements in the $\sigma_z$ and $\sigma_x$ bases, respectively, for $x=0,1$. $\tau$ therefore quantifies the extent to which the measurement devices function like the reference measurements. Given a violation of $I_2^{\text{\tiny LO-CHSH}}$, a lower bound on $\tau$ can thus be obtained by solving the following semidefinite program:
\begin{equation}
\begin{aligned}
\min\quad &\tau\\
\text{such that} \quad& \Gamma^S\succeq 0,\\
&I_2^{\text{\tiny LO-CHSH}} = I_{2,\text{\tiny obs}}^{\text{\tiny LO-CHSH}}.
\end{aligned}
\label{EqApp:min_tau_SDP}
\end{equation}
The resulting lower bounds on $\tau$ are shown in Figure~\ref{fig_min_tau_LOCHSH}. We see that the value of $1$ is obtained when the maximal quantum value of $I_2^{\text{\tiny LO-CHSH}}$ is observed, which means that the reference measurements of Eq.~\eqref{Eq:POVMA} are correctly certified in this case. For nonmaximal values of $I_2^{\text{\tiny LO-CHSH}}$, we see that $\tau$ decreases accordingly. Importantly, as pointed out in Ref.~\cite{YVB+14}, the procedure of sending the prepared eigenstates into the swap gate is a virtual process that allows us to interpret the figure of merit operationally, but the result still holds without any assumption on the devices of interest.
\subsection{Partial but robust self-testing of Bob's POVMs}
Finally, we would like to show that the outcome-lifted CHSH inequality of Eq.~\eqref{liftchshineq} can also be used for a ``partial'' self-testing of Bob's optimal measurements. The steps are the same as those described in the self-testing of Alice's measurements. That is, the eigenstates of $\sigma_z$ and $\sigma_x$ are sent to the swap gate before Bob performs his measurements $\{M_{b|y}^{(2)}\}$.
To this end, we define the analog of Eq.~\eqref{EqApp:tau} as:
\begin{equation}
\begin{aligned}
\tau_3 = \frac{1}{2}\big[ &P(0|0,|0\rangle) + P(1|0,|1\rangle) + P(2|0,|0\rangle)\\
+ &P(0|1,|+\rangle) + P(1|1,|-\rangle) + P(2|1,|+\rangle) \big] - 1,
\end{aligned}
\label{EqApp:tau3}
\end{equation}
and introduce a further figure of merit
\begin{equation}
\tau_1 = P(1|0,|0\rangle) + P(1|1,|+\rangle) -1
\label{EqApp:tau1}
\end{equation}
to self-test only the POVM element corresponding to Bob's outcome 1 for both measurements.
Thus, $\tau_3$ takes into account all of Bob's measurement outcomes while $\tau_1$ only involves the second measurement outcome. Both figures of merit range from $-1$ to $+1$, and the value $+1$ is recovered for
\begin{itemize}
\item $\tau_3$ if Bob's measurement device acts on the swapped eigenstate according to Eq.~\eqref{Eq:POVMB};
\item $\tau_1$ if Bob's measurement device acts on the swapped eigenstate in such a way that the second POVM element of each measurement functions according to Eq.~\eqref{Eq:POVMB}.
\end{itemize}
In other words, the value of $\tau_1$ measures the extent to which $M_{1|y}^{(2)}$ behaves according to that prescribed in Eq.~\eqref{Eq:POVMB}, while the value of $\tau_3$ further indicates if the combined effect of $M_{0|y}^{(2)}+M_{2|y}^{(2)}$
also behaves according to that prescribed in Eq.~\eqref{Eq:POVMB}.
By solving the semidefinite program of Eq.~\eqref{EqApp:min_tau_SDP} using the appropriate objective functions, we obtained lower bounds on each figure of merit as a function of the quantum violation of $I_2^{\text{\tiny LO-CHSH}}$. As shown in Fig.~\ref{fig_min_tau_Bob_LOCHSH}, the bounds on $\tau_3$ and $\tau_1$ when $I_2^{\text{\tiny LO-CHSH}}$ takes its maximal value successfully self-test, respectively, the combined effect of $M_{0|y}^{(2)}+M_{2|y}^{(2)}$ as well as that of $M_{1|y}^{(2)}$. In summary, for the outcome-lifted CHSH inequality of Eq.~\eqref{liftchshineq}, where the first outcome is lifted in each of Bob's measurements, it is still possible to self-test Alice's optimal measurements and the overall behavior of Bob's measurements.
\begin{figure}
\includegraphics[width=0.9\linewidth]{fig_min_tau_Bob_LOCHSH}
\caption{
Lower bounds on the figures of merit defined in Eqs.~\eqref{EqApp:tau3}-\eqref{EqApp:tau1} as a function of the value of the outcome-lifted CHSH inequality $I_2^{\text{\tiny LO-CHSH}}$. The bounds are obtained by solving the semidefinite program described in Eq.~\eqref{EqApp:min_tau_SDP} with the appropriate figure of merit. The two figures of merit, as explained in the text, reflect different aspects of the self-testability of Bob's measurements.
}
\label{fig_min_tau_Bob_LOCHSH}
\end{figure}
\end{document} |
\begin{document}
\begin{titlepage}
\title{Prior-independent Auctions for Risk-averse Agents}
We study simple and approximately optimal auctions for agents with a
particular form of risk-averse preferences. We show that, for
symmetric agents, the optimal revenue (given a prior distribution over
the agent preferences) can be approximated by the first-price auction
(which is prior independent), and, for asymmetric agents, the optimal
revenue can be approximated by an auction with simple form. These
results are based on two technical methods. The first is for
upper-bounding the revenue from a risk-averse agent. The second gives
a payment identity for mechanisms with pay-your-bid semantics.
\thispagestyle{empty}
\end{titlepage}
\section{Introduction}
\label{sec:intro}
We study optimal and approximately optimal auctions for agents with
risk-averse preferences. The economics literature on this subject is
largely focused either on comparative statics, i.e., whether the
first-price or the second-price auction is better when agents are risk
averse, or on deriving the optimal auction, e.g., using techniques from
optimal control, for specific distributions of agent preferences. The
former says nothing about optimality but considers realistic
prior-independent auctions; the latter says nothing about realistic
and prior-independent auctions. Our goal is to study approximately
optimal auctions for risk-averse agents that are realistic and not
dependent on assumptions on the specific form of the distribution of
agent preferences. One of our main conclusions is that, while the
second-price auction can be very far from optimal for risk-averse
agents, the first-price auction is approximately optimal for an
interesting class of risk-averse preferences.
The microeconomic treatment of risk aversion in auction theory
suggests that the form of the optimal auction is very dependent on
precise modeling details of the preferences of agents, see, e.g.,
\citet{MR84} and \citet{M84}. The resulting auctions are unrealistic
because of their reliance on the prior assumption and because they are
complex \citep[cf.][]{wil-87}. Approximation can address both issues.
There may be a class of mechanisms that is simple, natural, and much
less dependent on exact properties of the distribution. As an example
of this agenda for risk neutral agents, \citet{HR09} showed that for a
large class of distributional assumptions the second-price auction
with a reserve is a constant approximation to the optimal single-item
auction. This implies that the only information about the
distribution of preferences that is necessary for a good approximation
is a single number, i.e., a good reserve price. Often from this sort
of ``simple versus optimal'' result it is possible to do away with the
reserve price entirely. \citet{DRY10} and \citet{RTY12} show that
simple and natural mechanisms are approximately optimal quite broadly.
We extend this agenda to auction theory for risk-averse agents.
The least controversial approach for modeling risk-averse agent
preferences is to assume agents are endowed with a concave function
that maps their wealth to a utility. This introduces a non-linearity
into the incentive constraints of the agents which in most cases makes
auction design analytically intractable. We therefore restrict
attention to a very specific form of risk aversion that is both
computationally and analytically tractable: utility functions that are
linear up to a given capacity and then flat. Importantly, an agent
with such a utility function will not trade off a higher probability
of winning for a lower price when the utility from such a lower price
is greater than her capacity. While capacitated utility functions are
unrealistic, they form a basis for general concave utility functions.
In our analyses we will endow the benchmark optimal auction with
knowledge of the agents' value distribution and capacity; however,
some of the mechanisms we design to approximate this benchmark will be
oblivious to them.
As an illustrative example, consider the problem of maximizing welfare
by a single-item auction when agents have known capacitated utility
functions (but unknown values). Recall that for risk-neutral agents
the second-price auction is welfare-optimal as the payments are
transfers from the agents to the mechanism and cancel from the
welfare objective, which is thus equal to the value of the winner.
(The auctioneer is assumed to have linear utility.)
For
agents with capacitated utility, the second-price auction can be far
from optimal. For instance, when the difference between the highest
and second highest bid is much larger than the capacity then the
excess value (beyond the capacity) that is received by the winner does
not translate to extra utility because it is truncated at the
capacity. Instead, a variant of the second-price auction, where the
highest bidder wins and is charged the maximum of
the second highest bid and her bid less her capacity,
obtains the optimal welfare.
Unfortunately, this auction is parameterized by the form of the
utility function of the agents.
There is, however, an auction,
not dependent on specific knowledge of the utility functions or prior distribution,
that is also welfare optimal: If the agents' values are
drawn i.i.d.\@ from a common prior distribution then the first-price
auction is welfare-optimal. To see this: (a) standard analyses show
that at equilibrium the highest-valued agent wins, and (b) no agent
will shade her bid more than her capacity as she receives no
increased utility from such a lower payment but her probability of
winning strictly decreases.
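The welfare gap just described can be made concrete. The following is a minimal sketch (not from the paper; the values and capacity are arbitrary) comparing the welfare of the second-price auction with the variant above, assuming truthful bids and a common known capacity:

```python
def capped_utility(wealth, C):
    # capacitated utility: linear up to the capacity C, then flat
    return min(wealth, C)

def welfare_spa(values, C):
    # second-price auction: highest bidder wins, pays second-highest bid
    v = sorted(values, reverse=True)
    winner, price = v[0], v[1]
    # welfare = auctioneer revenue + winner's capped utility
    return price + capped_utility(winner - price, C)

def welfare_csp(values, C):
    # variant: winner pays max(second-highest bid, own value minus capacity)
    v = sorted(values, reverse=True)
    winner = v[0]
    price = max(v[1], winner - C)
    return price + capped_utility(winner - price, C)

# a large gap between the top two bids wastes value under the second price
print(welfare_spa([100, 1], C=2))  # 1 + min(99, 2) = 3
print(welfare_csp([100, 1], C=2))  # 98 + min(2, 2) = 100, the winner's value
```

The variant recovers the full value of the winner as welfare because the truncated excess value is converted into payment.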
Our main goal is to duplicate the above observation for the objective
of revenue. It is easy to see that the gap between the optimal
revenues for risk-neutral and capacitated agents can be of the same
order as the gap between the optimal welfare and the optimal revenue
(which can be unbounded). When the capacities are small the revenue
of the welfare-optimal auction for capacitated utilities is close to
its welfare (the winner's utility is at most her capacity). Of course,
when capacities are infinite or very large then the risk-neutral
optimal revenue is close to the capacitated optimal revenue (the
capacities are not binding). One of our main technical results shows
that even for mid-range capacities one of these two mechanisms that
are optimal at the extremes is close to optimal.
As a first step towards understanding profit maximization for
capacitated agents, we characterize the optimal auction for agents
with capacitated utility functions. We then give a ``simple versus
optimal'' result showing that either the revenue-optimal auction
for risk-neutral agents or the above welfare-optimal auction for
capacitated agents is a good approximation to the revenue-optimal auction for
capacitated agents. The Bulow-Klemperer \citeyearpar{BK96} Theorem
implies that with enough competition (and mild distributional
assumptions) welfare-optimal auctions are approximately
revenue-optimal. Of course, the first-price auction is
welfare-optimal and prior-independent; therefore we conclude that it
is approximately revenue-optimal for capacitated agents.
Our ``simple versus optimal'' result comes from an upper bound on the
expected payment of an agent in terms of her allocation rule
\citep[cf.][]{M81}. This upper bound is the most technical result in
the paper; the difficulties that must be overcome by our analysis are
exemplified by the following observations. First, unlike in
risk-neutral mechanism design, Bayes-Nash equilibrium does not imply
monotonicity of allocation rules. There are mechanisms where an agent
with a high value would prefer less overall probability of service
than she would have obtained if she had a lower value
(\autoref{ex:nonmono} in \autoref{sec:optimal}). Second, even in the
case where the capacity is higher than the maximum value of any agent,
the optimal mechanism for risk-averse agents can generally obtain more
revenue than the optimal mechanism for risk-neutral agents
(\autoref{ex:v<C} in \autoref{sec:pricebound}). This may be
surprising because, in such a case, the revenue-optimal mechanism for
risk-neutral agents would give any agent a wealth that is within the
linear part of her utility function. Finally, while our upper bound
on risk-averse payments implies that this relative improvement is
bounded by a factor of two for large capacities, it can be arbitrarily
large for small capacities (\autoref{ex:er-gap} in
\autoref{sec:pricebound}).
It is natural to conjecture that the first-price auction will continue
to perform nearly optimally well beyond our simple model (capacitated
utility) of risk-averse preferences. It is a relatively
straightforward calculation to see that for a large class of
risk-averse utility functions from the literature \citep[e.g.,][]{M84}
the first-price auction is approximately optimal at extremal risk
parameters (risk-neutral or extremely risk-averse).
We leave to future work the
extension of our analysis to mid-range risk parameters for these other
families of risk-averse utility functions.
It is significant and deliberate that our main theorem is about the
first-price auction which is well known to not have a truthtelling
equilibrium. Our goal is a prior-independent mechanism. In
particular, we would like our mechanism to be parameterized neither by the
distribution on agent preferences nor by the capacity that governs the
agents' utility functions. While it is standard in mechanism design and
analysis to invoke the {\em revelation principle} \citep[cf.][]{M81}
and restrict attention to auctions with truthtelling as equilibrium,
this principle cannot be applied in prior-independent auction design.
An auction with good equilibrium can be implemented by one with
truthtelling as an equilibrium if the agent strategies can be
simulated by the auction. In a Bayesian environment, agent strategies
are parameterized by the prior distribution and therefore the
suggested revelation mechanism is not generally prior independent.
\paragraph{Risk Aversion, Universal Truthfulness, and Truthfulness in Expectation\stoccom{.}}
Our results have an important implication for a prevailing and
questionable perspective that is explicit and implicit broadly in the
field of algorithmic mechanism design. Two standard solution concepts
from algorithmic mechanism design are ``universal truthfulness'' and
``truthfulness in expectation.'' A mechanism is universally truthful
if an agent's optimal (and dominant) strategy is to reveal her values
for the various outcomes of the mechanism regardless of the reports of
other agents or random coins flipped by the mechanism. In contrast,
in a truthful-in-expectation mechanism, truthfully revealing her values
only maximizes the agent's utility in expectation over the random
coins tossed by the mechanism. Therefore, a risk-averse agent modeled
by a non-linear utility function may not bid truthfully in a
truthful-in-expectation mechanism designed for risk-neutral agents,
whereas in a universally truthful mechanism an agent behaves the same
regardless of her risk attitude. For this reason, the above-mentioned
perspective sees universally truthful mechanisms as superior because the
performance guarantees shown for risk-neutral agents seem to apply to
risk-averse agents as well.
This perspective is incorrect because the optimal performance possible
by a mechanism is different for risk-neutral and risk-averse agents.
In some cases, a mechanism may exploit the risk attitude of the agents
to achieve objectives better than the optimal possible for
risk-neutral agents;
in other cases, the objective itself relies on the utility functions
(e.g.\@ social welfare maximization), and therefore the same outcome
has a different objective value. In all these situations, the
performance guarantee of universally truthful mechanisms measured
against the risk-neutral optimum loses its meaning. We have already
discussed above two examples for capacitated agents that illustrate
this point: for welfare maximization the second-price auction is not
optimal; for revenue maximization the risk-neutral revenue-optimal
auction can be far from optimal.
The conclusion of the discussion above is that the universally
truthful mechanisms from the literature are not generally good when
agents are risk averse; therefore, the solution concept of universal
truthfulness buys no additional guarantees over truthfulness in
expectation. Nonetheless, our results suggest that it may be possible
to develop a general theory for prior-independent mechanisms for
risk-averse agents. By necessity, though, this theory will look
different from the existing theory of algorithmic mechanism design.
\paragraph{Summary of Results\stoccom{.}}
Our main theorem is that the first-price auction is a
prior-independent $5$-approximation for revenue for two or more
agents with i.i.d.\@ values and risk-averse preferences (given by a
common capacity). The technical results that enable this theorem are
as follows:
\begin{itemize}
\item The optimal auction for agents with capacitated utilities is a
two-priced mechanism where a winning agent either pays her full
value or her value less her capacity.
\item The expected revenue of an agent with capacitated utility and
regular value distribution can be bounded in terms of an expected
(risk-averse) virtual surplus, where the (risk-averse) virtual value
is twice the risk-neutral virtual value plus the value minus
capacity (if positive).
\item Either the mechanism that optimizes value minus capacity (and
charges the Clarke payments or value minus capacity, whichever is
higher) or the risk-neutral revenue optimal mechanism is a
3-approximation to the revenue optimal auction for capacitated
utilities.
\item We characterize the Bayes-Nash equilibria of auctions with
capacitated agents where each bidder's payment when served is a
deterministic function of her value. An example of this is the
first-price auction.
The BNE strategies of the capacitated agents can be calculated
formulaically from the BNE strategies of risk-neutral agents.
\end{itemize}
Some of these results extend beyond single-item auctions. In
particular, the characterization of equilibrium in the first-price
auction holds for position auction environments (i.e., where agents
are assigned greedily by bid to positions with decreasing
probabilities of service and charged their bid if served). Our
simple-versus-optimal 3-approximation holds generally for
downward-closed environments, non-identical distributions, and
non-identical capacities.
\paragraph{Related Work\stoccom{.}}
The comparative performance of first- and second-price auctions
in the presence of risk aversion has been well studied in the
Economics literature. From a revenue perspective,
first-price auctions are shown to outperform second-price auctions very
broadly. \citet{RS81} and \citet{H80} show this for symmetric settings where bidders have the same concave utility
function. \citet{MR84} show this for more general preferences.
\citet{M87} shows that in addition to the revenue dominance, bidders whose
risk attitudes exhibit \emph{constant absolute risk aversion (CARA)} are indifferent
between first- and second-price auctions, even though they pay more in
expectation in the first-price auction. \citet{HMZ10} considers the optimal
reserve prices to set in each, and shows that the optimal reserve in the
first-price auction is less than that in the second-price auction. Interestingly, under
light conditions on the utility functions, as risk aversion increases, the
optimal first-price reserve price decreases.
\citet{Matthews83} and \citet{MR84} have considered optimal mechanisms for a single item, with symmetric bidders
(i.i.d.\@ values and identical utility function), for CARA and more general preferences.
Recently, \citet{DP12} have shown that by insuring bidders against
uncertainty, any truthful-in-expectation mechanism for risk-neutral
agents can be converted into a dominant-strategy incentive compatible
mechanism for risk-averse buyers with no loss of revenue. However,
there is potentially much to gain---mechanisms for risk-averse buyers
can achieve unboundedly more welfare and revenue than mechanisms for
risk-neutral bidders, as we show in \autoref{ex:er-gap} of
\autoref{sec:pricebound}.
\section{Preliminaries}
\label{sec:prelim}
\paragraph{Risk-averse Agents\stoccom{.}}
Consider selling an item to an agent who has a private
valuation~$v$ drawn from a known distribution~$F$. Denote the
outcome by $(x, p)$, where $x \in \{0, 1\}$ indicates
whether the agent gets the item, and $p$ is the payment made.
The agent obtains a {\em wealth} of $vx - p$ for such an
outcome and the agent's utility is given by a concave utility function
$u(\cdot)$ that maps her wealth to utility, i.e., her utility for
outcome $(x,p)$ is $u(vx - p)$. Concave
utility functions are a standard approach for modeling
risk aversion.\footnote{There are other definitions of risk aversion;
this one is the least controversial. See \citet{MWG95} for a thorough exposition of expected utility theory.}
A {\em capacitated} utility function is $\util_\capa(z) = \min (z,C)$
for a given $C$ which we refer to as the {\em capacity}.
Intuitively, small $C$ corresponds to severe risk aversion; large
$C$ corresponds to mild risk aversion; and $C = \infty$
corresponds to risk neutrality. An agent views an auction as a
deterministic rule that maps a random source and the (possibly random)
reports of other agents which we summarize by $\pi$, and the report
$b$ of the agent, to an allocation and payment. We denote these
coupled allocation and payment rules as $\ralloc(b)$ and
$\rprice(b)$, respectively. The agent wishes to maximize her
expected utility which is given by $\Ex[\pi]{\util_\capa(v
\ralloc(b) - \rprice(b))}$, i.e., she is a von
Neumann-Morgenstern utility maximizer.
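The key behavioral consequence of the capacitated model is that a price cut below $v - C$ buys no utility. The hypothetical numbers below (our own, for illustration only) show an agent preferring a higher winning probability at a higher price over a lower price whose extra wealth would be truncated:

```python
def u_cap(z, C):
    # capacitated utility: linear up to the capacity C, then flat
    return min(z, C)

def expected_utility(prob, price, v, C):
    # expected utility of winning with probability `prob` at a fixed `price`
    return prob * u_cap(v - price, C)

v, C = 10.0, 2.0
# a price of 8 already leaves wealth v - 8 = 2 = C, the cap
hi = expected_utility(0.9, 8.0, v, C)  # 0.9 * min(2, 2) = 1.8
lo = expected_utility(0.8, 5.0, v, C)  # 0.8 * min(5, 2) = 1.6
# the lower price buys no extra utility, so the agent prefers
# the offer with the higher winning probability
print(hi, lo)
```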
\paragraph{Incentives\stoccom{.}}
A strategy profile of agents is $s = (s_1,\ldots,s_n)$
mapping values to reports. Such a strategy profile is in {\em
Bayes-Nash equilibrium} (BNE) if each
agent~$i$ maximizes her utility by reporting $s_i(v_i)$.
I.e., for all $i$, $v_i$, and~$z$:
$$
\Ex[\pi]{u(v_i \ralloc_i(s_i(v_i)) - \rprice_i(s_i(v_i)))}
\geq
\Ex[\pi]{u(v_i \ralloc_i(z) - \rprice_i(z))}
$$ where $\pi$ denotes the random bits accessed by the mechanism as
well as the random inputs $s_j(v_j)$ for $j \neq i$ and
$v_j \sim F_j$. A mechanism is {\em Bayesian incentive
compatible} (BIC) if truthtelling is a Bayes-Nash equilibrium:
for all $i$, $v_i$, and $z$
\begin{align}
\label{eq:IC}
\Ex[\pi]{u(v_i \ralloc_i(v_i) - \rprice_i(v_i))}
\geq
\Ex[\pi]{u(v_i \ralloc_i(z) - \rprice_i(z))}
\tag{IC}
\end{align} where $\pi$ denotes the random bits accessed by the mechanism as
well as the random inputs $v_j \sim F_j$ for $j \neq i$.
We will consider only mechanisms where losers have no payments, and
winners pay at most their bids. These constraints imply ex post {\em
individual rationality} (IR). Formulaically, for all $i$, $v_i$, and
$\pi$, $\rprice_i(v_i) \leq v_i$ when $\ralloc_i(v_i) = 1$ and
$\rprice_i(v_i) = 0$ when $\ralloc_i(v_i) = 0$.
\paragraph{Auctions and Objectives\stoccom{.}}
The revenue of an auction $\mathcal M$ is the total payment of all agents;
its expected revenue for implicit distribution $F$ and Bayes-Nash
equilibrium is denoted $\REV(\mathcal M) = \Ex[\pi,\vals]{\sum_i
\rprice_i(v_i)}$.
The welfare
of an auction $\mathcal M$ is the total utility of all participants
including the auctioneer; its expected welfare is denoted $\WEL(\mathcal M)
= \REV(\mathcal M) + \Ex[\pi,\vals]{\sum_i u(v_i
\ralloc_i(v_i) - \rprice_i(v_i))}$.
Some examples of auctions are:
the {\em first-price auction} (FPA) serves the agent with the highest bid and
charges her her bid; the {\em second-price auction} (SPA) serves the
agent with the highest bid and charges her the second-highest bid.
The second price auction is incentive compatible regardless of agents'
risk attitudes. The {\em capacitated second-price auction} (CSP)
serves the agent with the highest bid and charges her the maximum of
her value less her capacity and the second highest bid. The
second-price auction for capacitated agents is incentive compatible
for capacitated agents because, relative to the second-price auction,
the utility an agent receives for truthtelling is unaffected and the
utility she receives for any misreport is only (weakly) lower.
\paragraph{Two-Priced Auctions\stoccom{.}}
The following class of auctions will be relevant for agents with
capacitated utility functions.
\begin{definition}
\label{def:two-price}
A mechanism~$\mathcal M$ is \emph{two-priced} if, whenever $\mathcal M$ serves
an agent with capacity $C$ and value $v$, the agent's payment
is either $v$ or $v-C$; and otherwise (when not served) her
payment is zero. Denote by $\qq_{\mathrm{val}}(v)$ and $\qq_{\capa}(v)$ the probabilities of
paying $v$ and $v - C$, respectively.
\end{definition}
\noindent
Note that from an agent's perspective the outcome of a two-priced
mechanism is fully described by the pair $\qq_{\capa}$ and $\qq_{\mathrm{val}}$.
\paragraph{Auction Theory for Risk-neutral Agents\stoccom{.}}
For risk neutral agents, i.e., with $u(\cdot)$ equal to the
identity function, only the probability of winning and expected
payment are relevant. The {\em interim allocation rule} and {\em
interim payment rule} are given by the expectation of $\ralloc$ and
$\rprice$ over~$\pi$ and denoted as $x(b) =
\Ex[\pi]{\ralloc(b)}$ and $p(b) =
\Ex[\pi]{\rprice(b)}$, respectively (recall that $\pi$ encodes
the randomization of the mechanism and the reports of other agents).
For risk-neutral agents, \citet{M81} characterized interim allocation
and payment rules that arise in BNE and solved for the revenue optimal
auction.
These results are summarized in the following theorem.
\begin{theorem}[\citealp{M81}]
\label{thm:myerson}
For risk neutral bidders with valuations drawn independently and
identically from $F$,
\begin{enumerate}
\item (monotonicity)
\label{thmpart:monotone}
The allocation rule $x(v)$ for each agent is monotone
non-decreasing in $v$.
\item
\label{thmpart:payment}
(payment identity) The payment rule satisfies $p(v) = v x(v) - \int_0^v x(z) \, \mathrm{d} z$.
\item
\label{thmpart:virt}
(virtual value) The ex ante expected payment of an agent is
$\Ex[v]{p(v)} = \Ex[v]{\varphi(v)x(v)}$ where
$\varphi(v) = v - \frac{1-F(v)}{f(v)}$ is the {\em virtual value} for value $v$.
\item
\label{thmpart:opt}
(optimality) When the distribution $F$ is {\em regular}, i.e., $\varphi(v)$ is
monotone, the second-price auction with reserve $\varphi^{-1}(0)$ is
revenue-optimal.
\end{enumerate}
\end{theorem}
\noindent The payment identity in \autoref{thmpart:payment} implies
the {\em revenue equivalence} between any two auctions with the same
BNE allocation rule.
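As a sanity check of the optimality statement above, one can compute the reserve $\varphi^{-1}(0)$ for a concrete distribution. The sketch below does so for the uniform distribution on $[0,1]$ (our choice for illustration; the theorem applies to any regular distribution):

```python
# virtual value for the uniform distribution on [0,1]: F(v) = v, f(v) = 1,
# so phi(v) = v - (1 - F(v)) / f(v) = 2v - 1
def phi(v):
    return v - (1.0 - v) / 1.0

# solve phi(reserve) = 0 by bisection; phi is increasing, so F is regular
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if phi(mid) < 0.0:
        lo = mid
    else:
        hi = mid
reserve = 0.5 * (lo + hi)
print(reserve)  # 0.5: the familiar monopoly reserve for uniform values
```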
A well-known result by \citeauthor{BK96} shows that, in \autoref{thmpart:opt} of \autoref{thm:myerson}, instead of
having a reserve price to make the second-price auction optimal, one may as well add in another identical bidder to get at least as much revenue.
\begin{theorem}[\citealp{BK96}]
\label{thm:bk}
For risk neutral bidders with valuations drawn i.i.d.\@ from a regular distribution, the revenue from the second-price auction with $n+1$
bidders is at least that of the optimal auction for $n$~bidders.
\end{theorem}
\section{The Optimal Auctions}
\label{sec:optimal}
In this section we study the form of optimal mechanisms for
capacitated agents. In \autoref{sec:optimal-two-price}, we show that
it is without loss of generality to consider two-priced auctions, and
in \autoref{sec:two-price-BIC} we characterize the incentive
constraints of two-priced auctions. In \autoref{sec:polytime} we use
this characterization to show that the optimal auction (in discrete
type spaces) can be computed in polynomial time in the number of
types.
\subsection{Two-priced Auctions Are Optimal}
\label{sec:optimal-two-price}
Recall that a two-priced auction is one in which, whenever an agent is served, she is
either charged her value or her value minus her capacity. We show
below that restricting our attention to two-priced auctions is without
loss for the objective of revenue.
\begin{theorem}
\label{thm:optimal-two-price}
For any auction on capacitated agents there is a two-priced auction
with no lower revenue.
\end{theorem}
\begin{proof}
We prove this theorem in two steps. In the first step we show, quite
simply, that if an agent with a particular value received more wealth
than $C$ then we can truncate her wealth to $C$ (by charging
her more). With her given value she is indifferent to this change,
and for all other values this change makes misreporting this value
(weakly) less desirable. Therefore, such a change would not induce
misreporting and only (weakly) increases revenue. This first step
gives a mechanism wherein every agent's wealth is in the linear part
of her utility function. The second step is to show that we can
transform the distribution of wealth into a two point distribution.
Whenever an agent with value~$v$ is offered a price that results in a wealth $w \in [0, C]$, we instead offer her a price
of $v - C$ with probability $w / C$, and a price of $v$ with the remaining probability. Both the expected
revenue and the utility of a truthful bidder are unchanged. The expected utility of other types from misreporting~$v$, however,
weakly decreases by the concavity of $\util_\capa$, because mixing over endpoints of an interval on a concave function gives
less value than mixing over internal points with the same expectation.
\end{proof}
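The second step of the proof can be checked numerically. The sketch below, with arbitrary numbers of our own choosing, verifies that the two-point mixing preserves truthful utility and expected revenue while weakly lowering the utility of a deviating type:

```python
def u_cap(z, C):
    # capacitated utility: linear up to capacity C, then flat
    return min(z, C)

v, C, w = 10.0, 4.0, 1.5      # truthful wealth w lies in [0, C]
p_det = v - w                 # deterministic price that yields wealth w
q = w / C                     # mix: price v - C w.p. q, price v w.p. 1 - q

# truthful utility and expected revenue are unchanged by the mixing
truthful_det = u_cap(v - p_det, C)
truthful_mix = q * u_cap(C, C) + (1 - q) * u_cap(0.0, C)
revenue_det = p_det
revenue_mix = q * (v - C) + (1 - q) * v

# a deviating type vp weakly loses under the mixing (concavity of u_cap)
vp = 12.0
dev_det = u_cap(vp - p_det, C)
dev_mix = q * u_cap(vp - (v - C), C) + (1 - q) * u_cap(vp - v, C)
print(truthful_det, truthful_mix, revenue_det, revenue_mix, dev_det, dev_mix)
```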
\subsection{Characterization of Two-Priced Auctions}
\label{sec:two-price-BIC}
In this section we characterize the incentive constraints of
two-priced auctions. We focus on the induced two-priced mechanism for a single agent given
the randomization $\pi$ of other agent values and the mechanism.
The interim two-priced allocation rule of this agent is denoted by
$\alloc(v) = \qq_{\mathrm{val}}(v) + \qq_{\capa}(v)$.
\begin{lemma}
\label{lem:two-price-BIC}
A mechanism with two-price allocation rule $\alloc = \qq_{\mathrm{val}} + \qq_{\capa}$ is BIC if
and only if for all $v$ and $\val^{+}$ such that $v < \val^{+}
\leq v + C$,
\begin{equation}
\label{eq:near-bic}
\frac{\qq_{\mathrm{val}}(v)}{C} \leq \frac{ \qq_{\capa}(\val^{+}) - \qq_{\capa}(v)}{\val^{+} - v} \leq \frac{\alloc(\val^{+})}{C}.
\end{equation}
\end{lemma}
Equation \eqref{eq:near-bic} can be equivalently written as the
following two linear constraints on $\qq_{\capa}$, for all $\val^{-}$ and
$\val^{+}$ in $[v - C, v + C]$ with $\val^{-} \leq v \leq \val^{+}$:
\begin{align}
\qq_{\capa}(\val^{+}) &\geq \qq_{\capa}(v) + \frac{\val^{+}-v}{C}\cdot \qq_{\mathrm{val}}(v), \label{eq:bic-underbidding}\\
\qq_{\capa}(\val^{-}) &\geq \qq_{\capa}(v) - \frac{v-\val^{-}}{C}\cdot \alloc(v).\label{eq:bic-overbidding}
\end{align}
Equations \eqref{eq:bic-underbidding} and \eqref{eq:bic-overbidding}
are illustrated in \autoref{fig:opt-bic}. For a fixed $v$,
\eqref{eq:bic-underbidding} with $\val^{+}=v+C$ yields a lower
bounding line segment from $(v, \qq_{\capa}(v))$ to $(v + C,
\qq_{\capa}(v) + \qq_{\mathrm{val}}(v))$, and \eqref{eq:bic-overbidding} with $\val^{-}
= v - C$ gives a lower bounding line segment from $(v,
\qq_{\capa}(v))$ to $(v - C, \qq_{\capa}(v)-\alloc(v))$. Note that
\eqref{eq:bic-underbidding} implies that $\qq_{\capa}$ is monotone.
In the special case when $\qq_{\capa}$ is differentiable, by taking $\val^{+}$
approaching $v$ in \eqref{eq:near-bic}, we have
$\tfrac{\qq_{\mathrm{val}}(v)}{C} \leq \qq_{\capa}'(v) \leq
\tfrac{\alloc(v)}{C}$ for all~$v$. In general, we have the
following condition in the integral form (see \autoref{sec:opt-app}
for a proof).
\begin{corollary}
\label{cor:int-bic}
The allocation rule $\alloc = \qq_{\mathrm{val}} + \qq_{\capa}$ of a BIC two-priced mechanism
for all $v < \val^{+}$ satisfies:
\begin{align}
\label{eq:int-bic}
\int_{v}^{\val^{+}} \frac{\qq_{\mathrm{val}}(z)}{C} \: \mathrm{d} z \leq \qq_{\capa}(\val^{+}) - \qq_{\capa}(v) \leq \int_{v}^{\val^{+}}
\frac{\alloc(z)}{C} \: \mathrm{d} z.
\end{align}
\end{corollary}
\begin{figure}
\caption{Fixing $\alloc(v) = \qq_{\mathrm{val}}(v) + \qq_{\capa}(v)$, the incentive constraints \eqref{eq:bic-underbidding} and \eqref{eq:bic-overbidding} lower bound $\qq_{\capa}$ at $v + C$ and $v - C$, respectively.}
\label{fig:opt-bic}
\end{figure}
Importantly, the equilibrium characterization of two-priced mechanisms
does not imply monotonicity of the allocation rule $x$. This is
in contrast with mechanisms for risk-neutral agents, where incentive
compatibility requires a monotone allocation rule
(\autoref{thm:myerson}, \autoref{thmpart:monotone}). This
non-monotonicity is exhibited in the following example.
\begin{example}
\label{ex:nonmono}
There is a single-agent two-priced mechanism with a non-monotone
allocation rule. Our agent has two possible values $v = 3$ and
$v = 4$, and capacity $C$ of $2$. We give a two price
mechanism. Recall that $\qq_{\capa}(v)$ is the probability with which the
mechanism sells the item and charges $v - C$; $\qq_{\mathrm{val}}(v)$ is
the probability with which the mechanism sells the item and charges
$v$; and $\alloc(v) = \qq_{\capa}(v) + \qq_{\mathrm{val}}(v)$. The mechanism and its outcome are summarized in the following table.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$v$ & $\alloc$ & $\qq_{\capa}$ & $\qq_{\mathrm{val}}$ & \shortstack{utility from \\truthful reporting} & \shortstack{utility from\\ misreporting} \\ \hline
3 & 5/6 & 1/2 & 1/3 & 1 & 2/3 \\ \hline
4 & 2/3 & 2/3 & 0 & 4/3 & 4/3 \\ \hline
\end{tabular}
\end{center}
\end{example}
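The table's entries can be verified mechanically. The sketch below recomputes the truthful and deviation utilities with exact rational arithmetic and confirms that the allocation rule is non-monotone:

```python
from fractions import Fraction as F

C = 2
# the example's two-priced rule: value -> (q_val, q_cap)
rule = {3: (F(1, 3), F(1, 2)), 4: (F(0), F(2, 3))}

def u_cap(z):
    # capacitated utility with capacity C = 2
    return min(z, C)

def eu(v, report):
    # pay `report` (wealth v - report) w.p. q_val,
    # pay `report - C` (wealth v - report + C) w.p. q_cap
    q_val, q_cap = rule[report]
    return q_val * u_cap(v - report) + q_cap * u_cap(v - report + C)

# truthful and deviation utilities match the table, so the rule is BIC
assert eu(3, 3) == 1 and eu(3, 4) == F(2, 3)
assert eu(4, 4) == F(4, 3) and eu(4, 3) == F(4, 3)
# the allocation rule x = q_val + q_cap is non-monotone: 5/6 > 2/3
x = {v: sum(rule[v]) for v in rule}
print(x[3] > x[4])
```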
\subsection{Optimal Auction Computation}
\label{sec:polytime}
Solving for the optimal mechanism is computationally tractable for any
discrete (explicitly given) type space $T$. Given a discrete
valuation distribution on support~$T$, one can use $2\left|T\right|$ variables to
represent the allocation rule of any two-priced mechanism, and the
expected revenue is a linear sum of these variables.
\autoref{lem:two-price-BIC} shows that one can use $O(|T|^2)$ linear
constraints to express all BIC allocations, and hence the revenue
optimization for a single bidder can be solved by an $O(|T|^2)$-sized
linear program. Furthermore, using techniques developed by
\citet{CDW12} and \citet{AFHHM12}, in particular the ``token-passing''
characterization of single-item auctions by \citet{AFHHM12}, we
obtain:
\begin{theorem}
\label{thm:polytime}
For $n$~bidders with independent valuations with type spaces $T_1, \cdots, T_n$ and capacities $C_1, \cdots,
C_n$, one can solve for the optimal single-item auction with a linear program of size $O\left(\left(\sum_i |T_i| \right)^2\right)$.
\end{theorem}
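To illustrate the single-bidder optimization on a tiny hypothetical instance of our own choosing, the sketch below maximizes expected revenue over the feasible set of \autoref{lem:two-price-BIC} (a brute-force grid search standing in for the linear program, not the paper's algorithm):

```python
import itertools

# hypothetical instance: types {3, 4}, each with probability 1/2, capacity 2
types, f, C = [3, 4], {3: 0.5, 4: 0.5}, 2

def feasible(qv, qc):
    # lemma constraints for the only pair v = 3 < v+ = 4 (and 4 - 3 <= C):
    #   qval(v)/C <= (qc(v+) - qc(v))/(v+ - v) <= (qval(v+) + qc(v+))/C
    slope = (qc[4] - qc[3]) / (4 - 3)
    return qv[3] / C <= slope + 1e-12 and slope <= (qv[4] + qc[4]) / C + 1e-12

best, grid = 0.0, [i / 20 for i in range(21)]
for a, b, c, d in itertools.product(grid, repeat=4):
    qv, qc = {3: a, 4: c}, {3: b, 4: d}
    if a + b <= 1 and c + d <= 1 and feasible(qv, qc):
        rev = sum(f[t] * (t * qv[t] + (t - C) * qc[t]) for t in types)
        best = max(best, rev)
print(best)
```

The optimum found sells to both types (charging type 3 its full value and mixing the two prices for type 4) for an expected revenue of 3.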
\section{An Upper Bound on Two-Priced Expected Payment}
\label{sec:pricebound}
In this section we will prove an upper bound on the expected payment
from any capacitated agent in a two-priced mechanism. This upper bound
is analogous in purpose to the identity between expected risk-neutral
payments and expected virtual surplus of \citet{M81} from which
optimal auctions for risk-neutral agents are derived. We use this
bound in \autoref{sec:firstprice} and \autoref{sec:one-v-two} to
derive approximately optimal mechanisms.
As before, we focus on the induced two-priced mechanism for a single agent given
the randomization $\pi$ of other agent values and the mechanism.
The expected payment of a bidder of value~$v$ under allocation rule $\alloc(v) = \qq_{\capa}(v) + \qq_{\mathrm{val}}(v)$
is $p(v) = v\cdot \qq_{\mathrm{val}}(v) + (v-C)\cdot \qq_{\capa}(v) =
v \cdot \alloc(v) - C \cdot \qq_{\capa}(v)$.
Recall from \autoref{thm:myerson} that the (risk-neutral) virtual
value for an agent with value drawn from distribution $F$ is
$\varphi(v) = v - \frac{1-F(v)}{f(v)}$ and that the
expected risk-neutral payment for allocation rule $x(\cdot)$ is
$\Ex[v]{\varphi(v)x(v)}$. Denote $\max(0,\varphi(v))$ by
$\varphiplus(v)$ and $\max(v - C, 0)$ by $(v - C)^{+}$.
\begin{theorem}
\label{thm:pricebound}
For any agent with value $v \sim F$, capacity~$C$, and two-priced allocation rule $\alloc(v) = \qq_{\mathrm{val}}(v) + \qq_{\capa}(v)$,
\begin{align*}
\Ex[v]{p(v)} \leq &
\Ex[v]{\varphiplus(v) \cdot \alloc(v)}
+ \Ex[v]{\varphiplus(v) \cdot \qq_{\capa}(v)} \\
& + \Ex[v]{(v - C)^{+} \cdot \qq_{\capa}(v)}.
\end{align*}
\end{theorem}
\begin{corollary}
\label{cor:revbound}
When bidders have regular distributions and a common capacity, either
the risk-neutral optimal auction or the capacitated second price
auction (whichever has higher revenue) gives a 3-approximation to the
optimal revenue for capacitated agents.
\end{corollary}
\begin{proof}
For each of the three parts of the revenue upper bound of
\autoref{thm:pricebound}, there is a simple auction that optimizes the
expectation of the part across all agents. For the first two parts,
the allocation rules across agents (both for $\alloc(\cdot)$ and
$\qq_{\capa}(\cdot)$) are feasible. When the distributions of agent values
are regular (i.e., the virtual value functions are monotone), the
risk-neutral revenue-optimal auction optimizes virtual surplus across
all feasible allocations (i.e., expected virtual value of the agent
served); therefore, its expected revenue upper bounds the first and
second parts of the bound in \autoref{thm:pricebound}. The revenue of
the third part is again the expectation of a monotone function (in
this case $(v - C)^{+}$) times the service probability. The auction that
serves the agent with the highest (positive) ``value minus capacity''
(and charges the winner the maximum of her ``minimum winning bid,'' i.e., the second-price payment rule, and her ``value minus
capacity'') optimizes such an expression over all feasible
allocations; therefore, its revenue upper bounds this third part of
the bound in \autoref{thm:pricebound}. When capacities are identical,
this auction is the capacitated second price auction.
\end{proof}
Before proving \autoref{thm:pricebound}, we give two examples. The
first shows that the gap between the revenue of the capacitated
second-price auction and the risk-neutral revenue-optimal auction
(i.e., the two auctions from \autoref{cor:revbound}) can be
arbitrarily large. This means that there is no hope that an auction
for risk-neutral agents always obtains a good revenue for risk-averse
agents. The second example shows that even when all values are
bounded from above by the capacity (and therefore, capacities are
never binding in a risk-neutral auction) an auction for risk-averse
agents can still take advantage of risk aversion to generate higher
revenue. Consequently, the fact that we have two risk-neutral revenue
terms in the bound of \autoref{thm:pricebound} is necessary (as the
``value minus capacity'' term is zero in this case).
\begin{example}
\label{ex:er-gap}
The {\em equal revenue} distribution on interval $[1,h]$ has
distribution function $F(z) = 1-1/z$ (with a point mass at $h$).
The distribution gets its name because such an agent would accept
any offer price of $p$ with probability $1/p$ and generate
an expected revenue of one. With one such agent the optimal
risk-neutral revenue is one. Of course, an agent with capacity $C
= 1$ would happily pay her value minus her capacity to win all the
time (i.e., $\alloc(v) = \qq_{\capa}(v) = 1$). The revenue of this auction
is $\Ex{v} - 1 = \ln h$. For large~$h$, this is unboundedly larger
than the revenue we can obtain from a risk-neutral agent with the
same distribution.
\end{example}
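The two revenue claims in this example can be checked by simulation (inverse-transform sampling; the sample size and tolerances below are arbitrary):

```python
import math, random

random.seed(1)
h, T = 1000.0, 400_000

def draw():
    # inverse-transform sample from F(z) = 1 - 1/z on [1, h], point mass at h
    return min(1.0 / (1.0 - random.random()), h)

samples = [draw() for _ in range(T)]
# any posted price p in [1, h] sells with probability 1/p and earns ~1
rev_posted = {p: sum(p for v in samples if v >= p) / T
              for p in (2.0, 10.0, 100.0)}
# always selling at price v - C with C = 1 earns E[v] - 1 = ln(h)
rev_cap = sum(samples) / T - 1.0
print(rev_posted, rev_cap, math.log(h))
```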
\begin{example}
\label{ex:v<C}
The revenue from a two-priced mechanism can be better than the optimal
risk-neutral revenue even when all values are no more than the
capacity. Consider selling to an agent with capacity of $C = 1000$ and
value drawn from the equal revenue distribution from
\autoref{ex:er-gap} with $h=1000$.
The following two-priced rule is BIC and generates revenue of
approximately $1.55$ when selling to such a bidder. Let $\qq_{\capa}(v) =
\frac{0.6}{1000}(v-1)$, $\alloc(v) = \min(\qq_{\capa}(v) + 0.6, 1)$, and
$\qq_{\mathrm{val}}(v) = \alloc(v) - \qq_{\capa}(v)$ (shown in
\autoref{fig:ex-eqrevunderC}). Recall that the expected payment from
an agent with value $v$ can be written as $v\alloc(v) - C
\qq_{\capa}(v)$; for small values, this will be approximately $0.6$; for
large values this will increase to $400$. The expected revenue is
$\int_1^{1000} \left( z \cdot \alloc(z) -1000\ \qq_{\capa}(z) \right) f (z)
\mathrm{d} z + \frac{1}{1000}(1000 \cdot \qq_{\mathrm{val}}(1000)) \approx 1.15 + 0.4
\approx 1.55$, an improvement over the optimal risk-neutral revenue of
$1$.
\end{example}
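The stated revenue of approximately $1.55$ can be checked directly. The sketch below (our naming, not part of the text) integrates the expected payment $v\alloc(v) - C\,\qq_{\capa}(v)$ against the equal revenue density and adds the point-mass term at $v = 1000$.

```python
def two_price_example_revenue(n=400_000):
    """Expected revenue of the two-priced rule of the example: C = 1000,
    values from the equal revenue distribution on [1, 1000],
    q_cap(v) = 0.6/1000 * (v - 1), x(v) = min(q_cap(v) + 0.6, 1)."""
    C, h = 1000.0, 1000.0
    q_cap = lambda v: 0.6 / 1000.0 * (v - 1.0)
    x = lambda v: min(q_cap(v) + 0.6, 1.0)
    payment = lambda v: v * x(v) - C * q_cap(v)   # expected payment at value v
    density = lambda z: 1.0 / z ** 2              # f(z) on [1, h)
    step = (h - 1.0) / n
    zs = [1.0 + step * i for i in range(n + 1)]
    g = [payment(z) * density(z) for z in zs]
    integral = step * (sum(g) - 0.5 * (g[0] + g[-1]))  # trapezoid rule
    atom = (1.0 / h) * payment(h)                 # point mass 1/h at v = h
    return integral + atom
```

The computed revenue lands between $1.5$ and $1.6$, beating the optimal risk-neutral revenue of one.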
\begin{figure}
\caption{With $C=1000$ and values from the equal revenue
  distribution on $[1,1000]$, this two-priced mechanism is BIC
  and achieves $1.55$ times the revenue of the optimal
  risk-neutral mechanism.}
\label{fig:ex-eqrevunderC}
\end{figure}
In the remainder of this section we instantiate the following outline
for the proof of \autoref{thm:pricebound}. First, we transform any
given two-priced allocation rule $\alloc = \qq_{\mathrm{val}} + \qq_{\capa}$ into a new two-priced rule
$\allocbar(v) = \qq_{\capa}bar(v) + \qq_{\mathrm{val}}bar(v)$ (for which the
expected payment is $pbar(v) = v \allocbar(v) - C
\qq_{\capa}bar(v)$). While this transformation may violate some incentive
constraints (from \autoref{lem:two-price-BIC}), it enforces convexity
of $\qq_{\capa}bar(v)$ on $v \in [0,C]$ and (weakly) improves
revenue. Second, we derive a simple upper bound on the payment rule
$pbar(\cdot)$. Finally, we use the enforced convexity property
of $\qq_{\capa}bar(\cdot)$ and the payment upper bound to decompose the
expected payment $\Ex[v]{pbar(v)}$ into three terms, each of
which can be attained by a simple mechanism.
\subsection{Two-Priced Allocation Construction}
\label{sec:mechbar}
We now construct a two-priced allocation rule $\allocbar = \qq_{\mathrm{val}}bar + \qq_{\capa}bar$
from $\alloc = \qq_{\mathrm{val}} + \qq_{\capa}$ for which (a) revenue is improved, i.e.,
$pbar(v) \geq p(v)$, and (b) the probability the agent
pays her value minus capacity, $\qq_{\capa}bar(v)$, is convex for $v \in
[0,C]$. In fact, given $\qq_{\mathrm{val}}$, $\qq_{\capa}bar$ is the smallest function
for which IC constraint \eqref{eq:bic-underbidding} holds; and in the
special case when $\qq_{\mathrm{val}}$ is monotone, the left-hand side of
\eqref{eq:int-bic} is tight for $\qq_{\capa}bar$ on $[0, C]$. Other
incentive constraints may be violated by $\allocbar$, but we use it only
as an upper bound for revenue.
\begin{definition}[$\allocbar$]
\label{def:mechbar}
We define $\allocbar = \qq_{\capa}bar + \qq_{\mathrm{val}}bar$ as follows:
\begin{enumerate}
\item $\qq_{\mathrm{val}}bar(v) = \qq_{\mathrm{val}}(v)$;
\item Let $r(v)$ be $\tfrac{1}{C} \sup_{z \leq v} \qq_{\mathrm{val}}(z)$, and let
\begin{align}
\label{eq:qcbar}
\qq_{\capa}bar(v) = \left\{ \begin{array}{ll}
\int_0^{v} r(y) \: \mathrm{d} y, & v \in [0, C]; \\
\qq_{\capa}(v), & v > C.
\end{array}
\right.
\end{align}
\end{enumerate}
\end{definition}
\begin{lemma}[Properties of $\allocbar$]
\label{thm:mechbar}
\ \\
\begin{enumerate}
\item \label{prop:qcbar-conv} On $v \in [0,C]$, $\qq_{\capa}bar(\cdot)$
is a convex, monotone increasing function.
\item \label{lem:qcbar}
On all $v$, $\qq_{\capa}bar(v) \leq \qq_{\capa}(v)$.
\item
\label{thmpart:qcbar-bic}
The incentive
constraint from the left-hand side of \eqref{eq:int-bic} holds for
$\qq_{\capa}bar$:
$\frac{1}{C}\int_v^{\val^{+}} \qq_{\mathrm{val}}bar(z)\: \mathrm{d} z \leq \qq_{\capa}bar(\val^{+}) -
\qq_{\capa}bar(v)$ for all $v < \val^{+}$.
\item \label{cor:qcbar} On all $v$, $\qq_{\capa}bar(v) \leq \qq_{\capa}(v)$,
$\allocbar(v) \leq \alloc(v)$, and $pbar(v) \geq
p(v)$.
\end{enumerate}
\end{lemma}
The proof of \autoref{lem:qcbar} is technical, and we give a sketch
here. Recall that, for each $v$, the IC constraint
\eqref{eq:bic-underbidding} gives a linear constraint lower
bounding~$\qq_{\capa}(\val^{+})$ for every $\val^{+} > v$. If one
decreases $\qq_{\capa}(v)$, the lower bound it imposes on~$\qq_{\capa}(\val^{+})$
is simply ``pulled down'' and is less binding. The definition
of~$\qq_{\capa}bar$ simply lands $\qq_{\capa}bar(v)$ on the most binding lower
bound, and therefore not only makes $\qq_{\capa}bar(v)$ at most
$\qq_{\capa}(v)$, but also lowers the linear constraint that $v$ imposes
on larger values. If the set of possible values is countable or if
$\qq_{\mathrm{val}}$ is piecewise constant, the lemma is easy to see by induction. A full
proof for the general case of \autoref{lem:qcbar}, along with the
proofs of the other more direct parts of \autoref{thm:mechbar}, is
given in \autoref{sec:pricebound-app}.
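The convexity and monotonicity claimed in \autoref{thm:mechbar} can be observed directly on a discretization of \autoref{def:mechbar}. The following sketch (grid-based, with our naming) builds the discretized $\qq_{\capa}bar$ from an arbitrary nonnegative $\qq_{\mathrm{val}}$ on $[0, C]$.

```python
def qcap_bar(q_val, C, n=10_000):
    """Grid version of the construction of qcbar on [0, C]:
    r(v) = sup_{z <= v} q_val(z) / C and qcap_bar(v) = integral_0^v r(y) dy."""
    step = C / n
    vs = [step * i for i in range(n + 1)]
    out = [0.0]
    running_sup = q_val(vs[0])   # sup of q_val over the grid points seen so far
    for i in range(1, n + 1):
        out.append(out[-1] + (running_sup / C) * step)  # left-endpoint rule
        running_sup = max(running_sup, q_val(vs[i]))
    return vs, out
```

Since $r$ is a running supremum, the increments of the discretized $\qq_{\capa}bar$ are non-decreasing, which exhibits the convexity and monotone increase asserted in \autoref{prop:qcbar-conv} of \autoref{thm:mechbar}.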
\subsection{Payment Upper Bound}
\label{sec:pricebar}
\begin{figure}
\caption{Geometric illustration of the payment upper bound
  \eqref{eq:pricebound}.}
\label{fig:opt-geometry}
\label{fig:paymentblockregion}
\label{fig:paymentupperbound}
\end{figure}
Recall that $pbar(v)$ is the expected payment corresponding
with two-priced allocation rule~$\allocbar(v)$. We now give an upper
bound on~$pbar(v)$.
\begin{lemma}
\label{lem:pricebar}
The payment $pbar(v)$ for $v$ and two-priced rule
$\allocbar(v)$ satisfies
\begin{align}
\label{eq:pricebound}
pbar(v) &\leq v \allocbar(v) -
\int_0^{v} \allocbar(z) \: \mathrm{d} z + \int_0^{v} \qq_{\capa}bar(z) \: \mathrm{d} z.
\end{align}
\end{lemma}
\begin{proof}
View a two-priced mechanism $\allocbar = \qq_{\mathrm{val}}bar + \qq_{\capa}bar$ as charging
$v$ with probability $\allocbar(v)$ and giving a rebate of $C$
with probability $\qq_{\capa}bar(v)$. We bound this rebate as follows
(which proves the lemma):
\begin{align*}
C \cdot \qq_{\capa}bar(v) &\geq C \cdot \qq_{\capa}bar(0) + \int_0^v \qq_{\mathrm{val}}bar(z) \: \mathrm{d} z \\
&\geq \int_0^{v} \allocbar(z) \: \mathrm{d} z - \int_0^{v} \qq_{\capa}bar(z) \: \mathrm{d} z.
\end{align*}
The first inequality is from \autoref{thmpart:qcbar-bic} of \autoref{thm:mechbar}. The second inequality
is from the definition of $\qq_{\capa}bar(0) = 0$ in \eqref{eq:qcbar} and
$\qq_{\mathrm{val}}bar(v) = \allocbar(v) - \qq_{\capa}bar(v)$. See \autoref{fig:opt-geometry} for an illustration.
\end{proof}
\subsection{Three-part Payment Decomposition}
\label{sec:revbar}
\begin{figure}
\caption{Breakdown of the expected payment upper bound in a
  two-priced auction.}
\label{fig:payment-breakdown}
\end{figure}
Below, we bound $pbar(\cdot)$ (and hence $p(\cdot)$) in
terms of the expected payment of three natural mechanisms. As seen
geometrically in \autoref{fig:payment-breakdown}, the bound given in
\autoref{lem:pricebar} can be broken into two parts: the area above
$\allocbar(\cdot)$, and the area below $\qq_{\capa}bar(\cdot)$. We refer to the
former as $\pricebar^{\mathrm{I}}(\cdot)$; we further split the latter quantity into two
parts: $\pricebar^{\mathrm{II}}(\cdot)$, the area corresponding to $z \in [0, C]$, and
$\pricebar^{\mathrm{III}}(\cdot)$, the area corresponding to $z \in [C, v]$. We
define these quantities formally below:
\begin{align}
\label{eq:pI}
\pricebar^{\mathrm{I}}(v) &= \allocbar(v) v - \int_0^{v} \allocbar(z) \: \mathrm{d} z,\\
\label{eq:pII}
\pricebar^{\mathrm{II}}(v) &= \int_0^{\min\{v, C\}} \qq_{\capa}bar(z) \: \mathrm{d} z, \\
\label{eq:pIII}
\pricebar^{\mathrm{III}}(v) &= \begin{cases}
0, & v \leq C; \\
\int_{C}^{v} \qq_{\capa}bar(z) \: \mathrm{d} z, & v > C.
\end{cases}
\end{align}
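Note that \eqref{eq:pII} and \eqref{eq:pIII} together account for
$\int_0^{v} \qq_{\capa}bar(z) \: \mathrm{d} z$, so the three parts sum
exactly to the upper bound of \autoref{lem:pricebar}:
\begin{align*}
\pricebar^{\mathrm{I}}(v) + \pricebar^{\mathrm{II}}(v) + \pricebar^{\mathrm{III}}(v)
= v \allocbar(v) - \int_0^{v} \allocbar(z) \: \mathrm{d} z
+ \int_0^{v} \qq_{\capa}bar(z) \: \mathrm{d} z \geq pbar(v).
\end{align*}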
\begin{proof}[\stoccom{Proof} of \autoref{thm:pricebound}\stoccom{.}]
We now bound the revenue from each of the three parts of the payment
decomposition. These bounds, combined with \autoref{cor:qcbar} of
\autoref{thm:mechbar} and \autoref{lem:pricebar}, immediately give
\autoref{thm:pricebound}.
\begin{description}
\item[Part~1.]
$\Ex[v]{\pricebar^{\mathrm{I}}(v)} = \Ex[v]{\varphi(v) \cdot \allocbar(v)} \leq \Ex[v]{\varphiplus(v) \cdot \alloc(v)}$.
Formulaically, $\pricebar^{\mathrm{I}}(\cdot)$ corresponds to the risk-neutral payment
identity for $\allocbar(\cdot)$ as specified by
\autoref{thmpart:payment} of \autoref{thm:myerson}; by
\autoref{thmpart:virt} of \autoref{thm:myerson}, in expectation over
$v$, this payment is equal to the expected virtual surplus
$\Ex[v]{\varphi(v) \cdot \allocbar(v)}$.\footnote{Note: This
equality does not require monotonicity of the allocation rule
$\allocbar(\cdot)$; as long as \autoref{thmpart:payment} of
\autoref{thm:myerson} formulaically holds, \autoref{thmpart:virt}
follows from integration by parts.} The inequality follows as terms
$\varphi(v)$ and $\allocbar(v)$ in this expectation are point-wise
upper bounded by $\varphiplus(v) = \max(\varphi(v),0)$ and
$\alloc(v)$, respectively, the latter by \autoref{cor:qcbar} of \autoref{thm:mechbar}.
\item[Part~2.]
$\Ex[v]{\pricebar^{\mathrm{II}}(v)} \leq \Ex[v]{\varphi(v) \cdot \qq_{\capa}bar(v)} \leq
\Ex[v]{\varphiplus(v) \cdot \qq_{\capa}(v)}$.
By definition of $\pricebar^{\mathrm{II}}(\cdot)$ in \eqref{eq:pII}, if the statement of
the lemma holds for $v = C$ it holds for $v > C$; so we
argue it only for $v \in [0,C]$. Formulaically, with respect
to a risk-neutral agent with allocation rule $\qq_{\capa}bar(\cdot)$, the
risk-neutral payment is $v \cdot \qq_{\capa}bar(v) - \int_0^v
\qq_{\capa}bar(z) \: \mathrm{d} z$, the surplus is $v \cdot \qq_{\capa}bar(v)$, and the
risk-neutral agent's utility (the difference between the surplus and
payment) is $\int_0^v \qq_{\capa}bar(z) \: \mathrm{d} z = \pricebar^{\mathrm{II}}(v)$. Convexity
of $\qq_{\capa}bar(\cdot)$, from \autoref{prop:qcbar-conv} of \autoref{thm:mechbar}, implies that the
risk-neutral payment is at least half the surplus, and so is at least
the risk-neutral utility. The lemma follows, then, by the same
argument as in the previous part.
\item[Part~3.]
$\Ex[v]{\pricebar^{\mathrm{III}}(v)} \leq \Ex[v]{pvc \cdot \qq_{\capa}bar(v)}
= \Ex[v]{pvc \cdot \qq_{\capa}(v)}$.
The statement is trivial for $v \leq C$ so assume $v \geq
C$. By definition $\qq_{\capa}bar(v) = \qq_{\capa}(v)$ for $v > C$.
By \eqref{eq:bic-underbidding}, $\qq_{\capa}(\cdot)$ is monotone
non-decreasing. Hence, for $v > C$, $\pricebar^{\mathrm{III}}(v) =
\int_{C}^{v} \qq_{\capa}(z) \: \mathrm{d} z\leq\int_{C}^{v} \qq_{\capa}(v)
\: \mathrm{d} z = (v-C) \cdot \qq_{\capa}(v)$.
Plugging in $pvc = \max(v - C,0)$ and taking expectation over~$v$,
we obtain the bound.
\end{description}
\end{proof}
\section{Approximation Mechanisms and a Payment Identity}
\label{sec:payid}
In this section we first give a payment identity for Bayes-Nash
equilibria in mechanisms that charge agents a deterministic amount
upon winning (and zero upon losing). Such one-priced payment schemes
are not optimal for capacitated agents; however, we will show that
they are approximately optimal. When agents are symmetric (with
identical distribution and capacity) we use this payment identity to
prove that the first-price auction is approximately optimal. When
agents are asymmetric we give a simple direct-revelation one-priced
mechanism that is BIC and approximately optimal.
\subsection{A One-price Payment Identity}
For risk-neutral agents, the Bayes-Nash equilibrium conditions entail
a payment identity: given an interim allocation rule, the payment rule
is fixed (\autoref{thm:myerson}, \autoref{thmpart:payment}). For
risk-averse agents there is no such payment identity: there are
mechanisms with identical BNE allocation rules but distinct BNE
payment rules. We restrict attention to auctions wherein an agent's
payment is a deterministic function of her value (if she wins) and
zero if she loses.
We call these {\em one-priced} mechanisms; for these mechanisms there
is a (partial) payment identity.
Payment identities are an interim phenomenon. We consider a single
agent and the induced allocation rule she faces from a Bayesian
incentive compatible auction (or, by the revelation principle, any BNE of any mechanism).
This allocation rule internalizes randomization in the environment and
the auction, and specifies the agent's probability of winning,
$x(v)$, as a function of her value. Given allocation
rule~$x(v)$, the risk-neutral expected payment is
$prn(v) = v \cdot x(v) - \int_0^{v} x(z) \:
\mathrm{d} z$ (\autoref{thm:myerson}, \autoref{thmpart:payment}).
Given an allocation rule $x(v)$, a one-priced mechanism with
payment rule $p(v)$ would charge the agent
$p(v)/x(v)$ upon winning and zero otherwise (for an
expected payment of $p(v)$). Define $\price^{\mathrm{VC}}(v) = (v -
C) \cdot x(v)$ which, intuitively, gives a lower bound on a
capacitated agent's willingness to trade-off decreased probability of
winning for a cheaper price.
\begin{theorem}
\label{thm:payid}
An allocation rule $x$ and payment rule $p$ are the BNE of a
one-priced mechanism if and only if (a) $x$ is monotone
non-decreasing and (b) if $p(v) \geq \price^{\mathrm{VC}}(v)$ for all $v$,
then $p = pcap$, where $pcap$ is defined by
\begin{align}
pcap(0) & = 0, \label{eq:payid-zero}\\
pcap(v) & = \max
\left( \price^{\mathrm{VC}}(v),\
\sup_{\val^{-} < v} \left\{ pcap(\val^{-}) +
(prn(v) - prn(\val^{-}))\right\}
\right). \label{eq:payid-max}
\end{align}
Moreover, if $x$ is strictly increasing then $p(v) \geq
\price^{\mathrm{VC}}(v)$ for all $v$ and $p = pcap$ is the unique
equilibrium payment rule.
\end{theorem}
The payment rule should be thought of in terms of two ``regimes'':
when $pcap = \price^{\mathrm{VC}}$, and when $pcap > \price^{\mathrm{VC}}$, corresponding
to the first and second terms in the $\max$ argument of
\eqref{eq:payid-max} respectively. In the latter regime,
\eqref{eq:payid-max} necessitates that
$\dpricecap(v) = \dpricern(v)$; for nearby such points $v$
and $v+\epsilon$, the $\val^{-}$ involved in the supremum will
be the same, and thus $pcap(v+\epsilon) - pcap(v) = prn(v+\epsilon) - prn(v)$.
The proof is relegated to \autoref{sec:payid-app}. The main intuition for this characterization is that risk-neutral
payments are ``memoryless'' in the following sense. Suppose we fix
$prn(v)$ for a~$v$ and ignore the incentive of an agent with value
$\val^{+} > v$ to prefer reporting $\val^{-} < v$, then the risk-neutral
payment for all $\val^{+} > v$ is $prn(\val^{+}) = prn(v) + \val^{+}
x(\val^{+}) - v\, x(v) - \int_v^{\val^{+}} x(z) \, \mathrm{d} z$. This memorylessness is
simply the manifestation of the fact that the risk-neutral payment
identity imposes local constraints on the derivatives of the payment, i.e., $\dpricern(v) = v \cdot
\dalloc(v)$.
There is a simple algorithm for constructing the risk-averse payment
rule $pcap$ from the risk-neutral payment rule $prn$ (for
the same allocation rule $x$).
\begin{enumerate}
\item[0.] For $v < C$,
$pcap(v) = prn(v)$.
\item \label{step:p^C=>p^VC} The identity $pcap(v) =
prn(v)$ continues until the value $v'$ where
$pcap(v') = \price^{\mathrm{VC}}(v')$, at which point
$pcap(v)$ switches to follow $\price^{\mathrm{VC}}(v)$.
\item When $v$
increases to the value $v''$ where $\dpricern(v'') =
\dpvc(v'')$ then $pcap(v)$ switches to follow
$prn(v)$ shifted up by the difference $\price^{\mathrm{VC}}(v'') - prn(v'')$ (i.e., its derivative $\dpricecap(v)$ follows
$\dpricern(v)$).
\item Repeat this process from Step~\ref{step:p^C=>p^VC}.
\end{enumerate}
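The recursion \eqref{eq:payid-max} and the algorithm above admit a direct grid implementation. The following sketch (our naming, for a discretized value space with $vs[0] = 0$) computes $pcap$ from a monotone allocation rule; its output can be checked against \autoref{remark:payid}.

```python
def pcap_from_alloc(vs, xs, C):
    """Capacitated payment rule on a grid: pcap(v) is the max of
    p_vc(v) = (v - C) x(v) and sup over lower values v- of
    pcap(v-) + (p_rn(v) - p_rn(v-)), with pcap(0) = 0."""
    # Risk-neutral payments p_rn(v) = v x(v) - integral_0^v x (trapezoid rule).
    p_rn, area = [], 0.0
    for i, (v, x) in enumerate(zip(vs, xs)):
        if i > 0:
            area += (xs[i - 1] + x) / 2.0 * (v - vs[i - 1])
        p_rn.append(v * x - area)
    pcap = [0.0]
    best = pcap[0] - p_rn[0]        # running sup of pcap(v-) - p_rn(v-)
    for i in range(1, len(vs)):
        p_vc = (vs[i] - C) * xs[i]
        pcap.append(max(p_vc, p_rn[i] + best))
        best = max(best, pcap[-1] - p_rn[i])
    return p_rn, pcap
```

For instance, with $x(v) = v$ on $[0,1]$ (the allocation rule faced by one of two i.i.d.\@ uniform bidders when the highest bid wins) and $C = 0.1$, the computed $pcap$ follows $prn(v) = v^2/2$ up to $v = 0.2$ and $\price^{\mathrm{VC}}(v) = (v - 0.1)\,v$ thereafter, matching the two regimes described above.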
\begin{lemma}
\label{remark:payid}
The one-priced BIC allocation rule $x$ and payment rule
$pcap$ satisfy the following
\begin{enumerate}
\item
\label{thmpart:lbpc}
For all $v$, $pcap(v) \geq \max(prn(v),\price^{\mathrm{VC}}(v))$.
\item
\label{thmpart:px-mon}
Both $pcap(v)$ and $pcap(v) / x(v)$ are monotone non-decreasing.
\end{enumerate}
\end{lemma}
The proof of \autoref{thmpart:px-mon} is contained in the proof of \autoref{lem:mono+payid=>bne} in \autoref{sec:payid-app}, and
\autoref{thmpart:lbpc} follows directly from equations \eqref{eq:payid-zero} and \eqref{eq:payid-max}.
\subsection{Approximate Optimality of First-price Auction}
\label{sec:firstprice}
We show herein that for agents with a common capacity and values drawn
i.i.d.\@ from a continuous, regular distribution $F$ with strictly
positive density the first-price auction is approximately optimal.
It is easy to solve for a symmetric equilibrium in the first-price
auction with identical agents. First, guess that in BNE the agent
with the highest value wins. When the agents are i.i.d.\@ draws from
distribution $F$, the implied allocation rule is $x(v) =
F^{n-1}(v)$. \autoref{thm:payid} then gives the necessary
equilibrium payment rule $pcap(v)$ from which the bid function
${\bid^{\capa}}(v) = pcap(v) / x(v)$ can be calculated. We
verify that the initial guess is correct as \autoref{remark:payid}
implies that the bid function is symmetric and monotone. There is no
other symmetric equilibrium.\footnote{Any other symmetric equilibrium
  must have an allocation rule that is increasing but not always
  strictly so. For this to occur the bid function must not be
  strictly increasing, implying a point mass in the distribution of
  bids. Of course, a point mass in a symmetric equilibrium bid
  function implies that a tie is not a measure-zero event. Any agent
  has a best response to such an equilibrium of bidding just above
  this point mass: at essentially the same payment, she always
  ``wins'' the tie.}
\begin{proposition}
\label{prop:fpa-unique}
The first-price auction for identical (capacity and value
distribution) agents has a unique symmetric BNE wherein the highest
valued agent wins.
\end{proposition}
The expected revenue at this equilibrium is $n
\Ex[v]{pcap(v)}$. \autoref{remark:payid} implies that
$pcap$ is at least $prn$ and $\price^{\mathrm{VC}}$.
\begin{corollary}
\label{cor:fpa-rev}
The expected revenue of the first-price auction for identical
(capacity and value distribution) agents is at least that of the
capacitated second-price auction and at least that of the second-price
auction.
\end{corollary}
Our main theorem then follows by combining \autoref{cor:fpa-rev} with
the revenue bound in \autoref{thm:pricebound} and \autoref{thm:bk} by
\citet{BK96}.
\begin{theorem}
\label{thm:fpa-approx}
For $n\geq 2$ agents with common capacity and values drawn i.i.d.\@ from a
regular distribution, the revenue in the first-price auction (FPA) in
the symmetric Bayes-Nash equilibrium is a 5-approximation to the
optimal revenue.
\end{theorem}
\begin{proof}
An immediate consequence of \autoref{thm:bk} is that for~$n\geq 2$
risk-neutral, regular, i.i.d.\@ bidders, the second-price auction
extracts a revenue that is at least half the optimal revenue; hence,
by \autoref{cor:revbound}, the optimal revenue for capacitated bidders
by any BIC mechanism is at most four times the second-price revenue
plus the capacitated second-price revenue. Since the first-price
auction revenue in BNE for capacitated agents is at least the
capacitated second-price revenue and the second-price revenue, the
first-price revenue is a 5-approximation to the optimal
revenue.\footnote{In fact, the \citet{BK96} result shows that the
second-price auction is asymptotically optimal so for large $n$ this
bound can be asymptotically improved to three.}
\end{proof}
\subsection{Approximate Optimality of One-Price Auctions}
\label{sec:one-v-two}
We now consider the case of asymmetric value distributions and
capacities. In such settings the highest-bid-wins first-price auction
does not have a symmetric equilibrium, and arguing revenue bounds for it
is difficult. Nonetheless, we can give asymmetric one-priced
revelation mechanisms that are BIC and approximately optimal.
Recall \autoref{ex:v<C} in \autoref{sec:pricebound}, which shows that
the option to charge a given type either of two possible prices may be
necessary for optimal revenue extraction; the following result shows
that charging two prices rather than one does not confer a
significant advantage.
\begin{theorem}
\label{thm:dc-3apx}
For $n$ (non-identical) agents, their capacities
$C_1,\ldots,C_n$, and regular value distributions
$F_1,\ldots,F_n$, there is a one-priced BIC mechanism whose revenue is at least one third of the optimal
(two-priced) revenue.
\end{theorem}
\begin{proof}
Recall from \autoref{thm:pricebound} that either the risk-neutral optimal revenue or $\Ex[v_1, \ldots, v_n]{\max
\{(v_i - C_i)_+\}}$ is at least one third of the optimal revenue. We apply \autoref{thm:payid} to two monotone
allocation rules:
\begin{enumerate}
\item the interim allocation rule of the risk-neutral optimal auction, and
\item the interim allocation rule specified by: serve agent~$i$ that
maximizes $v_i - C_i$, if positive; otherwise, serve nobody.
\end{enumerate}
As both allocation rules are monotone, we apply \autoref{thm:payid} to obtain two one-priced BIC mechanisms.
By \autoref{remark:payid}, the expected revenue of the first mechanism is at least the risk-neutral optimal revenue, and the expected
revenue of the second mechanism is at least $\Ex[v_1, \ldots, v_n]{\max \{(v_i - C_i)_+\}}$. The theorem
immediately follows for the auction with the higher expected revenue.
\end{proof}
Although \autoref{thm:dc-3apx} is stated as an existential result, the
two one-priced mechanisms in the proof can be described analytically
using the algorithm following \autoref{thm:payid} for calculating the capacitated BIC payment rule. The interim allocation rules are straightforward
(the first: $x_i(v_i) = \prod_{j \neq i}
F_j(\varphi_j^{-1}(\varphi_i(v_i)))$, and the second:
$x_i(v_i) = \prod_{j \neq i} F_j(v_i - C_i +
C_j)$), and from these we can solve for $p^{C_i}_i(v)$.
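For concreteness, the second interim allocation rule can be evaluated directly. The sketch below is ours (the function name is hypothetical, and a uniform distribution stands in for the $F_j$); it computes $x_i(v_i) = \prod_{j \neq i} F_j(v_i - C_i + C_j)$ for the rule that serves the agent maximizing $v_j - C_j$, if positive.

```python
def interim_alloc_vc(i, v, Cs, Fs):
    """Interim allocation of 'serve the agent maximizing v_j - C_j, if
    positive': for v > C_i, agent i wins iff v - C_i exceeds v_j - C_j for
    all j != i, so x_i(v) = prod_{j != i} F_j(v - C_i + C_j).
    Fs[j] is the value distribution function of agent j."""
    if v <= Cs[i]:
        return 0.0   # v - C_i is not positive, so agent i is never served
    prob = 1.0
    for j, (Cj, Fj) in enumerate(zip(Cs, Fs)):
        if j != i:
            prob *= Fj(v - Cs[i] + Cj)
    return prob

uniform01 = lambda z: min(max(z, 0.0), 1.0)  # CDF of U[0, 1], as a stand-in
```

With two agents, values uniform on $[0,1]$, and capacities $0.2$ and $0.3$, agent 1 with value $0.5$ wins with probability $F_2(0.5 - 0.2 + 0.3) = 0.6$.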
\section{Conclusions}
\label{s:conc}
For the purpose of keeping the exposition simple, we have applied our
analysis only to single-item auctions. Our techniques, however, as
they focus on analyzing and bounding revenue of a single agent for a
given allocation rule, generalize easily to structurally rich
environments. Notice that the main theorems of
Sections~\ref{sec:optimal}, \ref{sec:pricebound}, and the first part
of \autoref{sec:payid} do not rely on any assumptions on the
feasibility constraint except for downward closure, i.e., that it is
feasible to change an allocation by withholding service from an agent
who was formerly being served.
For example, our prior-independent 5-approximation result generalizes
to symmetric feasibility constraints such as position auctions. A
position auction environment is given by a decreasing sequence of
weights $\alpha_1,\ldots,\alpha_n$ and the first-price position
auction assigns the agents to these positions greedily by bid. With
probability $\alpha_i$ the agent in position $i$ receives an item and
is charged her bid; otherwise she is not charged. (These position
auctions have been used to model pay-per-click auctions for selling
advertisements on search engines where $\alpha_i$ is the probability
that an advertiser gets clicked on when her ad is shown in the $i$th
position on the search results page.) For agents with identical
capacities and value distributions, the first-price position auction
where the bottom half of the agents are always rejected is a
5-approximation to the revenue-optimal position auction (that may potentially
match all the agents to slots).
Our one- versus two-price result generalizes to asymmetric
capacities, asymmetric distributions, and asymmetric downward-closed
feasibility constraints. A downward-closed feasibility constraint is
given by a set system which specifies which subset of agents can be
simultaneously served. Downward-closure requires that any subset of a
feasible set is feasible. A simple one-priced mechanism is a
3-approximation to the optimal mechanism in such an environment. The
mechanism is whichever has higher revenue of the standard (risk
neutral) revenue-optimal mechanism (which serves the subset of agents
with the highest virtual surplus, i.e., sum of virtual values) and the
one-priced revelation mechanism that serves the set of agents $S$ that
maximizes $\sum_{i \in S} (v_i - C_i)^+$ subject to feasibility.
A main direction for future work is to relax some of the assumptions
of our model. Our approach to optimizing over mechanisms for
risk-averse agents relies on (a) the simple model of risk aversion
given by capacitated utilities and (b) that losers neither make (i.e.,
ex post individual rationality) nor receive payments (i.e., no
bribes). These restrictions are fundamental for obtaining linear
incentive compatibility constraints. Of great interest in future
study is relaxation of these assumptions.
There is a relatively well-behaved class of risk attitudes known as
{\em constant absolute risk aversion} where the utility function is
parameterized by risk parameter $R$ as $u_R(w) = \frac{1}{R} (1 -
e^{-Rw})$. These model the setting in which a bidder's risk aversion
is independent of wealth, and hence bidders view a lottery over
payments for an item the same no matter their valuations. \citet{M84}
exploits this and derives the optimal auction for such risk
attitudes. A first step in extending our results to more interesting
risk attitudes would be to consider such risk preferences.
Our analytical (and computational) solution to the optimal auction
problem for agents with capacitated utilities requires an {\em ex post
individual rationality} constraint on the mechanism that is standard
in algorithmic mechanism design. This constraint requires that an
agent who loses the auction cannot be charged. While such a
constraint is natural in many settings, it is with loss and, in fact,
ill motivated for settings with risk-averse agents. One of the most
standard mechanisms for agents with risk-averse preferences is the
``insurance mechanism'' where an agent who may face some large
liability with small probability will prefer to pay a constant
insurance fee so that the insurance agency will cover the large
liability in the event that it is incurred. This mechanism is not ex
post individually rational. Does the first-price
auction (which is ex post individually rational) approximate the
optimal interim individually rational mechanism?
\begin{thebibliography}{}
\bibitem[Alaei et~al., 2012]{AFHHM12}
Alaei, S., Fu, H., Haghpanah, N., Hartline, J., and Malekian, A. (2012).
\newblock Bayesian optimal auctions via multi- to single-agent reduction.
\newblock In {\em ACM Conference on Electronic Commerce}.
\bibitem[Bulow and Klemperer, 1996]{BK96}
Bulow, J. and Klemperer, P. (1996).
\newblock {Auctions Versus Negotiations}.
\newblock {\em The American Economic Review}, 86(1):180--194.
\bibitem[Cai et~al., 2012]{CDW12}
Cai, Y., Daskalakis, C., and Weinberg, S.~M. (2012).
\newblock An algorithmic characterization of multi-dimensional mechanisms.
\newblock In {\em STOC}, pages 459--478.
\bibitem[Dhangwatnotai et~al., 2010]{DRY10}
Dhangwatnotai, P., Roughgarden, T., and Yan, Q. (2010).
\newblock Revenue maximization with a single sample.
\newblock In {\em ACM Conference on Electronic Commerce}, pages 129--138.
\bibitem[Dughmi and Peres, 2012]{DP12}
Dughmi, S. and Peres, Y. (2012).
\newblock {Mechanisms for Risk Averse Agents, Without Loss}.
\newblock {\em arXiv preprint arXiv:1206.2957}, pages 1--9.
\bibitem[Hartline and Roughgarden, 2009]{HR09}
Hartline, J.~D. and Roughgarden, T. (2009).
\newblock Simple versus optimal mechanisms.
\newblock In {\em ACM Conference on Electronic Commerce}, pages 225--234.
\bibitem[Holt, 1980]{H80}
Holt, C.~A. (1980).
\newblock {Competitive bidding for contracts under alternative auction
procedures}.
\newblock {\em The Journal of Political Economy}, 88(3):433--445.
\bibitem[Hu et~al., 2010]{HMZ10}
Hu, A., Matthews, S.~A., and Zou, L. (2010).
\newblock {Risk aversion and optimal reserve prices in first- and second-price
auctions}.
\newblock {\em Journal of Economic Theory}, 145(3):1188--1202.
\bibitem[Mas-Colell et~al., 1995]{MWG95}
Mas-Colell, A., Whinston, M.~D., and Green, J.~R. (1995).
\newblock {\em {Microeconomic Theory}}.
\newblock Oxford University Press.
\bibitem[Maskin and Riley, 1984]{MR84}
Maskin, E. and Riley, J. (1984).
\newblock Optimal auctions with risk averse buyers.
\newblock {\em Econometrica}, 52(6):1473--1518.
\bibitem[Matthews, 1983]{Matthews83}
Matthews, S. (1983).
\newblock Selling to risk averse buyers with unobservable tastes.
\newblock {\em Journal of Economic Theory}, 30(2):370--400.
\bibitem[Matthews, 1984]{M84}
Matthews, S.~A. (1984).
\newblock On the implementability of reduced form auctions.
\newblock {\em Econometrica}, 52(6):1519--1522.
\bibitem[Matthews, 1987]{M87}
Matthews, S.~A. (1987).
\newblock {Comparing auctions for risk averse buyers: A buyer's point of view}.
\newblock {\em Econometrica}, 55(3):633--646.
\bibitem[Myerson, 1981]{M81}
Myerson, R. (1981).
\newblock Optimal auction design.
\newblock {\em Mathematics of Operations Research}, 6(1):58--73.
\bibitem[Riley and Samuelson, 1981]{RS81}
Riley, J. and Samuelson, W. (1981).
\newblock {Optimal Auctions}.
\newblock {\em The American Economic Review}, 71(3):381--392.
\bibitem[Roughgarden et~al., 2012]{RTY12}
Roughgarden, T., Talgam-Cohen, I., and Yan, Q. (2012).
\newblock Supply-limiting mechanisms.
\newblock In {\em ACM Conference on Electronic Commerce}, pages 844--861.
\bibitem[Wilson, 1987]{wil-87}
Wilson, R. (1987).
\newblock Game theoretic analysis of trading processes.
\newblock {\em Advances in Economic Theory}.
\end{thebibliography}
\appendix
\section{Proofs from Section\IFSTOCELSE{ 3}{~\ref{sec:optimal}}}
\label{sec:opt-app}
\Xcomment{
\autoref{thm:optimal-two-price} (Restatement): For any auction for
capacitated agents there is a two-priced auction with no lower
revenue.
\begin{proof}[\NOTSTOC{Proof }of \autoref{thm:optimal-two-price}\NOTSTOC{.}]
As explained in the text, the proof consists of two steps. We first
formalize the first step, then give the second in full.
\paragraph{Step~1\NOTSTOC{.}} \emph{There is a BIC optimal mechanism in which the wealth of any outcome is no more
than~$C$.}
Given any BIC mechanism~$\mathcal M_0$, we construct the following
mechanism~$\mathcal M_1$: $\mathcal M_1$ does everything exactly the same way
as~$\mathcal M_0$, except that whenever $\mathcal M_0$ makes a truthful bidder's
wealth more than~$C$, $\mathcal M_1$ makes it equal to~$C$. To be
more precise, whenever $\mathcal M_0$ sells the item and charges a bidder
reporting~$v$ less than~$v - C$, $\mathcal M_1$ charges her $v
- C$ instead. $\mathcal M_1$ has obviously at least as much revenue
as~$\mathcal M_0$, since the payment in any outcome weakly increases.
$\mathcal M_1$ is also BIC as long as~$\mathcal M_0$ is: the utility of truthful
reporting is kept the same, but the utility for any type~$v$ to
misreport another type~$v'$ weakly decreases compared with
in~$\mathcal M_0$.
\paragraph{Step~2\NOTSTOC{.}} \emph{There is a BIC optimal mechanism in which the utility of any bidder is either
$C$ or~$0$ in any outcome.}
We take a BIC mechanism~$\mathcal M_1$ in which a bidder in every outcome
has a nonnegative wealth no larger than~$C$. We convert $\mathcal M_1$
to a BIC mechanism~$\mathcal M_2$ which has no less revenue but a bidder in
any outcome has wealth either $0$ or~$C$. It is important that
in~$\mathcal M_1$, only the linear part of the utility function is binding for
any truthful bidder. Let $\ralloc$ and $\rprice_{\mathcal M_1}$ be the
allocation and payment rule in~$\mathcal M_1$. We first define a parameter
in~$\mathcal M_2$. Fixing a type~$v$, conditioning on $\mathcal M_1$ selling
the item, the bidder's expected utility
is $\Ex[\pi]{\util_\capa(v - \rprice_{\mathcal M_1}(v)) \mid \ralloc(v) = 1} \leq C$. Define
\begin{align*}
\beta(v) = \frac{\Ex{\util_\capa(v - \rprice_{\mathcal M_1}(v)) \mid \ralloc(v) = 1}}{C} \leq 1.
\end{align*}
By the property of~$\mathcal M_1$, $v - \rprice_{\mathcal M_1}(v) \leq C$ for any $\pi$, and
$\util_\capa(v - \rprice_{\mathcal M_1}(v)) = v - \rprice_{\mathcal M_1}(v)$. We therefore have
\begin{equation}
\label{eq:beta}
\Ex[\pi]{\rprice_{\mathcal M_1}(v) \;\mid\; \ralloc(v) = 1} = v - \beta(v) C.
\end{equation}
Now we construct mechanism~$\mathcal M_2$. On any reported value~$v$,
$\mathcal M_2$ sells the item if and only if~$\mathcal M_1$ does; in addition,
conditioning on selling, $\mathcal M_2$ charges the bidder $v - C$
with probability~$\beta(v)$, and with the remaining probability
charges her~$v$.
The allocation rules of $\mathcal M_1$ and~$\mathcal M_2$ are the same, and we
denote both as $\ralloc$. We use $\rprice_{\mathcal M_2}$ to denote the
payment rules of~$\mathcal M_2$. Note that $\mathcal M_2$ is so constructed
that any truthful bidder has the same expected utility in~$\mathcal M_2$ as
in~$\mathcal M_1$:
\begin{align*}
\Ex[\pi]{\util_\capa(v \ralloc(v) - \rprice_{\mathcal M_1}(v))} = \Ex[\pi]{\util_\capa(v \ralloc(v) -
\rprice_{\mathcal M_2}(v))}, \STOC{\\
&} \forall v.
\end{align*}
We now consider:
\NOTSTOC{\begin{description}}
\IFSTOCELSE{\textbf{Revenue:}}{\item[Revenue:]}
By definition, the expected revenues of the two mechanisms are the same.
\IFSTOCELSE{\textbf{Incentives:}}{\item[Incentives:]}
It suffices to show that any agent's utility from misreporting (weakly) decreases. I.e.,
$$\Ex[\pi]{\util_\capa(v \ralloc(v') - \rprice_{\mathcal M_2}(v'))} \leq
\Ex[\pi]{\util_\capa(v \ralloc(v') - \rprice_{\mathcal M_1}(v'))},$$
because then, combined with the \eqref{eq:IC} condition
of~$\mathcal M_1$, the \eqref{eq:IC} condition of~$\mathcal M_2$ follows.
Conditioning on not selling, the expected utility of type~$v$ deviating to~$v'$ is~$0$ in both mechanisms. We therefore only need to consider the expected utility conditioning on selling.
In both mechanisms, the wealth of type~$v$ misreporting~$v'$ lies in $[v - v', v - v' + C]$
in any outcome. Let $\mu$ be the linear function connecting
$(v-v', \util_\capa(v -v'))$ and $(v - v' + C, \util_\capa(v - v' + C))$, i.e.,
\begin{align*}
\mu(z) &= \util_\capa(v - v') \STOC{\\
& \qquad} + \frac{\util_\capa(v - v' + C) - \util_\capa(v - v')}{C} \cdot (z - v + v').
\end{align*}
We have
\begin{align*}
&\Ex[\pi]{\util_\capa(v - \rprice_{\mathcal M_2}(v')) \;\mid\; \ralloc(v') = 1} \\
&\qquad= \beta(v') \util_\capa(v - v' + C) + (1 - \beta(v')) \util_\capa(v - v') \\
&\qquad= \beta(v') \mu(v - v' + C) + (1 - \beta(v')) \util_\capa(v - v') \\
&\qquad= \mu(v - v' + \beta(v') C) \\
&\qquad= \Ex[\pi]{\mu(v - \rprice_{\mathcal M_1}(v')) \;\mid\; \ralloc(v') = 1} \\
&\qquad\leq \Ex[\pi]{\util_\capa(v - \rprice_{\mathcal M_1}(v')) \;\mid\; \ralloc(v') = 1}.
\end{align*}
The first equality is by construction of~$\mathcal M_2$; the second from the definition of $\util_\capa$ and~$\mu$; the third by
the linearity of~$\mu$; the fourth by~\eqref{eq:beta}. The last inequality is because, on the interval $[v - v', v
- v' + C]$, $\mu$ is pointwise weakly dominated by~$\util_\capa$ by the concavity of~$\util_\capa$.
This shows that in~$\mathcal M_2$, any bidder of any type has weakly less incentive to misreport his valuation than in~$\mathcal M_1$.
\NOTSTOC{\end{description}}
This finishes the proof of \autoref{thm:optimal-two-price}.
\end{proof}
}
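The Step-2 construction above can be sanity-checked numerically. The following sketch (Python; the capacity $C = 1$, the payment distribution, and the helper names are illustrative choices of ours, not from the text) verifies \eqref{eq:beta}, the preservation of truthful utility, and the weak decrease of misreport utility under a capacitated (concave) utility:

```python
import random

C = 1.0

def u(w):
    # Capacitated utility: linear in wealth up to the cap C, flat above.
    return min(w, C)

random.seed(0)
v = 2.0
# M1's payments conditional on a sale, chosen so wealth v - p lies in [0, C].
pays1 = [random.uniform(v - C, v) for _ in range(1000)]

# beta(v) = E[u(v - p) | sale] / C, as in the construction of M2.
beta = sum(u(v - p) for p in pays1) / len(pays1) / C

# eq. (beta): expected payment conditional on a sale is v - beta * C.
ep1 = sum(pays1) / len(pays1)
assert abs(ep1 - (v - beta * C)) < 1e-9

# Truthful expected utility is unchanged: beta * u(C) + (1 - beta) * u(0) = beta * C.
eu1 = sum(u(v - p) for p in pays1) / len(pays1)
assert abs(eu1 - beta * C) < 1e-9

# Misreport utility weakly decreases: M2's two-point payment is a mean-preserving
# spread of M1's payment, and u is concave, so the expectation can only go down.
for vtrue in (1.5, 2.5):
    mis1 = sum(u(vtrue - p) for p in pays1) / len(pays1)
    mis2 = beta * u(vtrue - v + C) + (1 - beta) * u(vtrue - v)
    assert mis2 <= mis1 + 1e-9
```

The last loop is exactly the chord argument in the display above: the two-point payment evaluates the chord $\mu$ at the mean wealth, which lies weakly below the concave $\util_\capa$.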
\noindent{\bf \autoref{lem:two-price-BIC} (Restatement).} {\em A mechanism
with two-price allocation rule $\alloc = \qq_{\mathrm{val}} + \qq_{\capa}$ is BIC if and only if
for all $v$ and $\val^{+}$ such that $v < \val^{+} \leq v +
C$,
\begin{equation}
\frac{\qq_{\mathrm{val}}(v)}{C} \leq \frac{ \qq_{\capa}(\val^{+}) - \qq_{\capa}(v)}{\val^{+} - v} \leq \frac{\alloc(\val^{+})}{C}.\tag{\ref{eq:near-bic}}
\end{equation}}
\begin{proof}[\NOTSTOC{Proof }of \autoref{lem:two-price-BIC}\NOTSTOC{.}]
Consider an agent and fix two possible values of the agent $v <
\val^{+} \leq v + C$. The utility for truthtelling with
value~$v$ is $C \cdot \qq_{\capa}(v)$ in a two-price auction. The
utility for misreporting $\val^{+}$ from value~$v$ is
$\qq_{\mathrm{val}}(\val^{+}) \cdot (v - \val^{+}) + \qq_{\capa}(\val^{+}) \cdot (C +
v - \val^{+})$: when the mechanism sells and charges $\val^{+}$,
the agent's utility is~$v - \val^{+}$; when the mechanism sells and
charges $\val^{+} - C$, her utility is $\util_\capa(C + v -
\val^{+}) = C + v - \val^{+}$ (since $v < \val^{+}$).
Likewise, the utility for misreporting $v$ from true value
$\val^{+}$ is $\qq_{\mathrm{val}}(v) \cdot (\val^{+} - v) + \qq_{\capa}(v) \cdot
C$. Note that here when the mechanism charges $v - C$, the
utility of the agent is~$C$ because the wealth $C - v +
\val^{+}$ is more than~$C$; when the mechanism charges~$v$, her
utility is $\val^{+} - v$ because we assumed $\val^{+} \leq v +
C$.
An agent with valuation~$v$ (or $\val^{+}$) would not misreport
$\val^{+}$ (or $v$) if and only if
\begin{align}
\qq_{\capa}(v) \cdot C \geq &\ \qq_{\mathrm{val}}(\val^{+}) \cdot (v - \val^{+}) \STOC{\nonumber\\
& \quad} + \qq_{\capa}(\val^{+}) \cdot (C + v - \val^{+}); \label{eq:v-dev-v'}\\
\qq_{\capa}(\val^{+}) \cdot C \geq &\ \qq_{\mathrm{val}}(v) \cdot (\val^{+} - v) + \qq_{\capa}(v) \cdot C. \label{eq:v'-dev-v}
\end{align}
Now the right side of \eqref{eq:near-bic} follows
from~\eqref{eq:v-dev-v'} and the left side follows
from~\eqref{eq:v'-dev-v}.
When $\val^{+} > v + C$, the agent with value~$v$ certainly
has no incentive to misreport~$\val^{+}$, since any outcome results in
non-positive utility. Alternatively, the agent with value $\val^{+}$
will derive utility $C \cdot \alloc(v)$ from misreporting $v$
and thus will misreport if and only if $\alloc(v) >
\qq_{\capa}(\val^{+})$. Substituting $v+C$ for $\val^{+}$ in
equation~\eqref{eq:near-bic} gives $\alloc(v)\leq \qq_{\capa}(v + C)$,
and applying the left side of \eqref{eq:near-bic} to intermediate points
shows that $\qq_{\capa}$ is non-decreasing on $[v + C,
\val^{+}]$. Combining these gives $\alloc(v)\leq \qq_{\capa}(v + C)
\leq \qq_{\capa}(\val^{+})$ and hence $\val^{+}$ will not misreport $v$.
\end{proof}
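The algebra in this proof can be spot-checked: the two no-deviation constraints \eqref{eq:v-dev-v'} and \eqref{eq:v'-dev-v} should coincide with the sandwich bound \eqref{eq:near-bic} for every instance. A minimal sketch (Python; $C = 1$ and the random ranges are illustrative, and the inequalities are written in product form to avoid division):

```python
import random

C = 1.0

def no_deviation(qv_v, qc_v, qv_p, qc_p, v, vp):
    lo = qc_v * C >= qv_p * (v - vp) + qc_p * (C + v - vp)  # v does not report v+
    hi = qc_p * C >= qv_v * (vp - v) + qc_v * C             # v+ does not report v
    return lo and hi

def near_bic(qv_v, qc_v, qv_p, qc_p, v, vp):
    left = qv_v * (vp - v) <= (qc_p - qc_v) * C
    right = (qc_p - qc_v) * C <= (qv_p + qc_p) * (vp - v)
    return left and right

random.seed(1)
for _ in range(2000):
    v = 1.0
    vp = v + random.uniform(0.01, C)          # v < v+ <= v + C
    qs = [random.uniform(0.0, 0.5) for _ in range(4)]
    assert no_deviation(*qs, v, vp) == near_bic(*qs, v, vp)
```

Each `near_bic` line is a term-by-term rearrangement of the corresponding `no_deviation` line, which is the content of the proof.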
\noindent{\bf \autoref{cor:int-bic} (Restatement):}
{\em The allocation rules $\qq_{\capa}$ and $\qq_{\mathrm{val}}$ of a BIC two-price mechanism satisfy that for all $v < \val^{+}$,
\begin{align}
\int_{v}^{\val^{+}} \frac{\qq_{\mathrm{val}}(z)}{C} \: \mathrm{d} z \leq \qq_{\capa}(\val^{+}) - \qq_{\capa}(v) \leq \int_{v}^{\val^{+}}
\frac{\alloc(z)}{C} \: \mathrm{d} z.\tag{\ref{eq:int-bic}}
\end{align}}
\begin{proof}[\NOTSTOC{Proof }of \autoref{cor:int-bic}\NOTSTOC{.}]
Without loss of generality, suppose $\val^{+} \leq v + C$ (the
statement then follows for higher $\val^{+}$ by induction). Define
function
\begin{align*}
\qq_{\capa}bar(z) = \qq_{\capa}(v) + \int_{v}^{z} \frac{\sup_{y' \in [v, y]} \qq_{\mathrm{val}}(y')}{C} \: \mathrm{d} y, \quad \forall z \in
[v, \val^{+}],
\end{align*}
then $\qq_{\capa}bar(z) \geq \qq_{\capa}(v) + \int_{v}^z \frac{\qq_{\mathrm{val}}(y)}{C} \: \mathrm{d} y$ and hence
\begin{align*}
\int_{v}^z \frac{\qq_{\mathrm{val}}(y)}{C} \: \mathrm{d} y \leq \qq_{\capa}bar(z) - \qq_{\capa}(v).
\end{align*}
By the argument in the proof of \autoref{thm:mechbar}, part~\autoref{lem:qcbar}, we have $\qq_{\capa}bar(z) \leq \qq_{\capa}(z)$, for all~$z$. This gives the left
side of \eqref{eq:int-bic}. The other side is proven similarly.
\end{proof}
\section{Proofs from \IFSTOCELSE{Section~4}{\autoref{sec:pricebound}}}
\label{sec:pricebound-app}
\noindent{\bf \autoref{def:mechbar} (Restatement).}
We define $\allocbar = \qq_{\capa}bar + \qq_{\mathrm{val}}bar$ as follows:
\begin{enumerate}
\item $\qq_{\mathrm{val}}bar(v) = \qq_{\mathrm{val}}(v)$;
\item Let $r(v)$ be $\tfrac{1}{C} \sup_{z \leq v} \qq_{\mathrm{val}}(z)$, and let
\begin{align}
\tag{\ref{eq:qcbar}}
\qq_{\capa}bar(v) = \left\{ \begin{array}{ll}
\int_0^{v} r(y) \: \mathrm{d} y, & v \in [0, C]; \\
\qq_{\capa}(v), & v > C.
\end{array}
\right.
\end{align}
\end{enumerate}
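The construction in \eqref{eq:qcbar} discretizes directly, since $r$ is a running supremum. A minimal sketch (Python; the grid and the illustrative $\qq_{\mathrm{val}}$ are choices of ours) that also checks part~1 of the theorem below, convexity and monotonicity of $\qq_{\capa}bar$ on $[0, C]$:

```python
import numpy as np

C = 1.0
N = 1000
v = np.linspace(0.0, C, N + 1)
rng = np.random.default_rng(0)
# An arbitrary illustrative qq_val taking values in [0, 1].
qval = np.clip(np.cumsum(rng.normal(0.0, 0.02, N + 1)), 0.0, 1.0)

r = np.maximum.accumulate(qval) / C          # r(v) = sup_{z <= v} qq_val(z) / C
dv = v[1] - v[0]
qcbar = np.concatenate(([0.0], np.cumsum(r[:-1] * dv)))  # left Riemann sum of r

assert np.all(np.diff(qcbar) >= -1e-12)      # monotone: increments are r * dv >= 0
assert np.all(np.diff(qcbar, 2) >= -1e-12)   # convex: r is non-decreasing
```

Monotonicity holds because the integrand $r$ is non-negative, and convexity because $r$ is non-decreasing, which is the whole content of part~1.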
\noindent{\bf \autoref{thm:mechbar} (Restatement).} {\em \begin{enumerate}
\item
On $v \in [0,C]$, $\qq_{\capa}bar(\cdot)$ is a convex, monotone
increasing function.
\item
On all $v$, $\qq_{\capa}bar(v) \leq \qq_{\capa}(v)$.
\item
The incentive constraint from the left-hand side of \eqref{eq:int-bic} holds for $\qq_{\capa}bar$:
$\int_v^{\val^{+}} \qq_{\mathrm{val}}bar(z)\: \mathrm{d} z \leq \qq_{\capa}bar(\val^{+}) - \qq_{\capa}bar(v)$ for all $v < \val^{+}$.
\item
On all $v$, $\qq_{\capa}bar(v) \leq \qq_{\capa}(v)$, $\allocbar(v) \leq \alloc(v)$, and $pbar(v) \geq p(v)$.
\end{enumerate}}
\begin{proof}[\stoccom{Proof} of \autoref{thm:mechbar}\stoccom{.}]
\
\begin{enumerate}
\item On $[0,C]$, $\qq_{\capa}bar(v)$ is the integral of a monotone,
non-negative function.
\item The statement holds directly from the definition for $v >
C$; therefore, fix $v \leq C$ in the argument below.
Since $r(v)$ is an increasing function of~$v$, it is
Riemann integrable (and not only Lebesgue integrable).
Fixing $v$, we show that, given any $\epsilon > 0$, $\qq_{\capa}bar(v) \leq \qq_{\capa}(v) + \epsilon$. Fix an integer $N >
v / \epsilon$, and let $\Delta$ be $v / N < \epsilon$. Consider Riemann sum $S = \sum_{j = 1}^{N} \Delta \cdot r(\xi_j)$,
where each $\xi_j$ is an arbitrary point in $[(j-1)\Delta, j \Delta]$.\footnote{Obviously $S$ depends both on
$\Delta$ and the choice of $\xi_j$'s. For cleanness of notation we omit this dependence and do not write
$S_{\Delta, \xi}$.} We will also denote by $S(k) = \sum_{j = 1}^k \Delta \cdot
r(\xi_j)$, $k \leq N$, the partial sum of the first $k$ terms. Since $\qq_{\capa}bar(v) = \lim_{\Delta \to 0}
S$, it suffices to show that for all $\Delta < \epsilon$, $S \leq \qq_{\capa}(v)
+ \epsilon$. In order to show this, we define a piecewise linear function $y$. On $[0, \Delta]$, $y$ is~$0$, and
then on each interval $[j \Delta, (j + 1)\Delta]$ with $j \geq 1$, $y$ grows at rate $r(j\Delta)$. Intuitively, $y$
``lags behind'' $\qq_{\capa}bar$ by an interval $\Delta$ and we will show it lower bounds $\qq_{\capa}$ and upper bounds $S$ up to an additive~$\epsilon$.
Note that since $r$ is an increasing function, $y$ is convex.
We first show $y(v) \leq \qq_{\capa}(v)$. We will show by induction on~$j$ that $y(z) \leq \qq_{\capa}(z)$ for
all $z \in [0, j\Delta]$. Since $y$ is $0$ on $[0, \Delta]$, the base case $j = 1$ is trivial. Suppose we have
shown $y(z) \leq \qq_{\capa}(z)$ for all $z \in [0, (j-1)\Delta]$, let us consider the interval $[(j-1)\Delta, j\Delta]$.
Let $z^*$ be $\operatorname{arg\,max}_{z \leq (j-1)\Delta} \qq_{\mathrm{val}}(z)$.\footnote{Here we assumed that $\sup_{z < (j-1)\Delta} \qq_{\mathrm{val}}(z)$
can be attained by~$z^*$, which is certainly the case when $\qq_{\mathrm{val}}$ is continuous. It is straightforward to see though
that we do not need such an assumption. It suffices to choose $z^*$ such that $\qq_{\mathrm{val}}(z^*)$ is close enough to $r((j-1)\Delta)$. The proof goes almost without change,
except with an even smaller choice of~$\Delta$.} By the induction hypothesis, $y(z^*) \leq \qq_{\capa}(z^*)$. Recall that
$z^* \leq z \leq C$. By the BIC
condition \eqref{eq:bic-underbidding}, for all $z \geq z^*$,
\begin{align*}
\qq_{\capa}(z) \geq \qq_{\capa}(z^*) + \frac{\qq_{\mathrm{val}}(z^*)}{C} (z - z^*).
\end{align*}
On the other hand,
by definition, $r$ is constant on $[z^*, z]$, and the derivative of $y$ is no larger than $r(z^*)$ on
$[z^*, z]$. Hence for all $z \leq j\Delta$,
\begin{align*}
y(z) & \leq y(z^*) + \frac{\qq_{\mathrm{val}}(z^*)}{C}(z - z^*) \\
& \leq \qq_{\capa}(z^*) + \frac{\qq_{\mathrm{val}}(z^*)}{C} (z - z^*)
\leq \qq_{\capa}(z).
\end{align*}
This completes the induction and shows $y(z) \leq \qq_{\capa}(z)$ for all $z \in [0, v]$.
Now we show $S\leq y(v) + \epsilon$. Note that since $r(z) \leq 1$ for all~$z$, $S
\leq S(N - 1) + \Delta < S(N - 1) + \epsilon$. We will show by induction that $S(N -
1) \leq y(v)$. Our induction hypothesis is $S(j - 1) \leq y(j \Delta)$. The base case
for $j = 1$ is obvious as $S(0) = y(\Delta) = 0$.
\begin{align*}
S(j) & = S(j-1) + \Delta \cdot r(\xi_j) \\
& \leq y(j \Delta) + \Delta \cdot r(j \Delta) \\
& = y((j+1) \Delta).
\end{align*}
In the inequality we used the induction hypothesis and the monotonicity of~$r$. The last equality is by definition
of~$y$.
This completes the proof of \autoref{lem:qcbar}.
\item
For $v\leq\val^{+} \leq C$, by definition of $\qq_{\capa}bar$,
\begin{align*}
\qq_{\capa}bar(\val^{+}) - \qq_{\capa}bar(v) = \int_{v}^{\val^{+}} r(z) \: \mathrm{d} z \geq \int_{v}^{\val^{+}}
\frac{\qq_{\mathrm{val}}(z)}{C} \: \mathrm{d} z.
\end{align*}
For $C \leq v\leq \val^{+}$, $\qq_{\capa}bar$ and $\qq_{\mathrm{val}}bar$ are equal to $\qq_{\capa}$ and $\qq_{\mathrm{val}}$ on $[v, \val^{+}]$, and the
inequality follows from \autoref{cor:int-bic}.
For $v \leq C$ and $\val^{+} \geq C$, we have
\begin{align*}
\qq_{\capa}bar(\val^{+}) - \qq_{\capa}bar(v) & = [\qq_{\capa}bar(\val^{+}) - \qq_{\capa}bar(C)] + [\qq_{\capa}bar(C) - \qq_{\capa}bar(v)]\\
& \geq \int_{C}^{\val^{+}} \frac{\qq_{\mathrm{val}}(z)}{C} \: \mathrm{d} z + \int_{v}^{C} \frac{\qq_{\mathrm{val}}(z)}{C} \: \mathrm{d} z \\
& = \int_{v}^{\val^{+}} \frac{\qq_{\mathrm{val}}(z)}{C} \: \mathrm{d} z.
\end{align*}
\item The first part, $\qq_{\capa}bar(v) \leq \qq_{\capa}(v)$, is from
part~\autoref{lem:qcbar} of the theorem and the definition of $\qq_{\capa}bar(v)
= \qq_{\capa}(v)$ on $v > C$. The second part, $\allocbar(v) \leq
\alloc(v)$, follows from the definition of $\qq_{\mathrm{val}}bar(v) =
\qq_{\mathrm{val}}(v)$, the first part, and the definition of $\alloc(v) =
\qq_{\mathrm{val}}(v) + \qq_{\capa}(v)$. The third part, $pbar(v) \geq
p(v)$, follows because lowering $\qq_{\capa}(v)$ to
$\qq_{\capa}bar(v)$ on $v \in [0,C]$ foregoes payment of
$v-C$ which is non-positive (for $v \in [0,C]$). \qedhere
\end{enumerate}
\end{proof}
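The Riemann-sum bound $S(N-1) \leq y(v)$ used in part~2 above can be checked numerically. A minimal sketch (Python; the grid, the illustrative $\qq_{\mathrm{val}}$, and the choice of sample points $\xi_j$ are ours; the code encodes $y$ as flat on $[0, \Delta]$ and growing at rate $r(j\Delta)$ on $[j\Delta, (j+1)\Delta]$):

```python
import random

random.seed(3)
C, v, N = 1.0, 1.0, 400
delta = v / N
qval = [random.random() for _ in range(N + 1)]  # illustrative qq_val on the grid

# r(j * delta) = sup_{z <= j * delta} qq_val(z) / C, a running maximum.
r, best = [], 0.0
for q in qval:
    best = max(best, q)
    r.append(best / C)

y_v = sum(delta * r[j] for j in range(1, N))    # y(v) = y(N * delta)
# S(N-1) with sample points xi_j in [(j-1)*delta, j*delta]; since r is monotone,
# sampling r at either cell endpoint covers the extreme choices of xi_j.
S = sum(delta * r[random.choice((j - 1, j))] for j in range(1, N))
assert S <= y_v + 1e-12
```

The inequality holds term by term because $r(\xi_j) \leq r(j\Delta)$ for every sample point in the $j$-th cell, which is the induction step of the proof.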
\section{Proofs from \IFSTOCELSE{Section 5}{\autoref{sec:payid}}}
\label{sec:payid-app}
\label{sec:first-app}
\noindent{\bf \autoref{thm:payid} (Restatement).}
{\em An allocation rule $x$ and payment rule $p$ are the BNE of a one-priced mechanism if and only if
(a) $x$ is
monotone non-decreasing and (b) if $p(v) \geq \price^{\mathrm{VC}}(v)$ for all $v$ then $p = pcap$, where $pcap$ is defined by
\begin{align}
\tag{\ref{eq:payid-zero}}
pcap(0) & = 0,\\
\tag{\ref{eq:payid-max}}
pcap(v) & = \max
\left( \price^{\mathrm{VC}}(v),\
\sup_{\val^{-} < v} \left\{ pcap(\val^{-}) +
(prn(v) - prn(\val^{-}))\right\}
\right).
\end{align}
Moreover, if $x$ is strictly increasing then $p(v) \geq
\price^{\mathrm{VC}}(v)$ for all $v$ and $p = pcap$ is the unique
equilibrium payment rule.}
The proof follows from a few basic conditions. First, with strictly
monotone allocation rule $x$, the payment upon winning must be at
least $v-C$; otherwise, a bidder would wish to overbid and see
a higher chance of winning, with no decrease in utility on
winning. Second, when the payment on winning is strictly greater than
$v-C$, the bidder is effectively risk-neutral and the
risk-neutral payment identity must hold locally. Third, when an agent
is paying exactly $v-C$ on winning, they are capacitated when
considering underbidding, but risk-neutral when considering
overbidding. As a result, at such a point, $pcap$ must be at
least as steep as $prn$, i.e., if $\dpricern(v) >
\dpvc(v)$, $pcap$ will increase above $\price^{\mathrm{VC}}$, at which point
it must follow the behavior of $prn$.
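The recursion \eqref{eq:payid-max} is computable on a grid because $\sup_{\val^{-} < v}\{pcap(\val^{-}) + prn(v) - prn(\val^{-})\} = prn(v) + \sup_{\val^{-} < v}\{pcap(\val^{-}) - prn(\val^{-})\}$, and the inner supremum is a running maximum. A sketch (Python; the allocation rule $x(v) = \min(1, v/2)$, $C = 1$, and the grid are illustrative choices of ours):

```python
import numpy as np

C = 1.0
v = np.linspace(0.0, 3.0, 3001)
x = np.minimum(1.0, v / 2.0)                   # a monotone allocation rule
# Risk-neutral payment identity: prn(v) = v * x(v) - integral_0^v x(z) dz.
X = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) / 2.0 * np.diff(v))))
prn = v * x - X
pvc = (v - C) * x                              # expected payment of v - C on winning

# eq. (payid-max), with the sup maintained as a running maximum of pcap - prn.
pcap = np.zeros_like(v)
m = 0.0                                        # sup_{v- < v} pcap(v-) - prn(v-)
for i in range(1, len(v)):
    pcap[i] = max(pvc[i], prn[i] + m)
    m = max(m, pcap[i] - prn[i])

assert np.all(pcap >= pvc - 1e-12)             # never charge less than v - C
ratio = pcap[1:] / x[1:]                       # expected price on winning (x > 0)
assert np.all(np.diff(ratio) >= -1e-9)         # pcap / x is monotone
```

For this $x$, the computed $pcap$ tracks $prn$ on the region where the winner is uncapacitated and switches to $\price^{\mathrm{VC}}$ once paying $v - C$ binds, matching the intuition above; the final assertion is the monotonicity of $pcap/x$ used in the proofs below.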
\autoref{thm:payid} follows from the following three lemmas which show
the necessity of monotonicity, the (partial) necessity of the payment
identity, and then the sufficiency of monotonicity and the payment
identity.
\begin{lemma}
\label{lem:bne=>mono}
If $x$ and $p$ are the BNE of a one-priced mechanism, then
$x$ is monotone non-decreasing.
\end{lemma}
\autoref{lem:bne=>mono} shows that monotonicity of the allocation rule
is necessary for BNE in a one-priced mechanism. Compare this to
\autoref{ex:nonmono} where we exhibited a non-one-priced mechanism
that was not monotone. Because the utilities may be capacitated, the
standard risk-neutral monotonicity argument (which involves writing
the IC constraints for a high-valued agent reporting low and a
low-valued agent reporting high, adding, and canceling payments) does
not work.
\begin{lemma}
\label{lem:bne=>payid}
If $x$ and $p$ are the BNE of a one-priced mechanism and
$p(v) \geq \price^{\mathrm{VC}}(v)$ for all $v$, then $p =
pcap$ (as defined in \autoref{thm:payid}); moreover, if $x$ is
strictly monotone then $p(v) \geq \price^{\mathrm{VC}}(v)$ for all $v$.
\end{lemma}
From \autoref{lem:bne=>payid} we see that one-priced mechanisms almost
have a payment identity. It is obvious that a payment identity does
not generically hold as a capacitated agent with value $v$ is
indifferent between payments less than $v - C$; therefore, the
agent's incentives do not pin down the payment rule if the payment
rule ever results in a wealth for the agent of more than $C$.
Nonetheless, the lemma shows that this is the only thing that could
lead to a multiplicity of payment rules. Additionally, the lemma
shows that if $x$ is strictly monotone, then these sorts of
payment rules cannot arise.
\begin{lemma}
\label{lem:mono+payid=>bne}
If allocation rule $x$ is monotone non-decreasing and payment
rule $p = pcap$ (as defined in \autoref{thm:payid}), then
they are the Bayes-Nash equilibrium of a one-priced mechanism.
\end{lemma}
The following claim and notational definition will be used throughout the proofs below.
\begin{claim}
\label{c:misreports}
Compared to the wealth of type $v$ on truthtelling, when type
$\val^{+} > v$ misreports $v$ she obtains strictly more wealth
(and is more capacity constrained); when type $\val^{-} < v$
misreports $v$ she obtains strictly less wealth (and is less
capacity constrained); and if $p(v) \geq \price^{\mathrm{VC}}(v)$ then
type $\val^{-}$ is strictly risk neutral on reporting $v$.
\end{claim}
\begin{definition}
Denote the utility for type $v$ misreporting $v'$ for the same implicit
allocation and payment rules by $\util_\capareport(v, v')$ and
$\utilreport^{\mathrm{RN}}(v, v')$ for risk-averse and risk-neutral agents,
respectively.
\end{definition}
\begin{proof}[\stoccom{Proof} of \autoref{lem:bne=>mono}\stoccom{.}]
We prove via contradiction. Assume that $x$ is not monotone, and
hence there is a pair of values, $\val^{-} < \val^{+}$, for which $x(\val^{-})> x(\val^{+})$. We will
consider this in three cases: (1) type $\val^{-}$ is
capacitated upon truthfully reporting and winning; type
$\val^{-}$ is strictly in the risk-neutral section of her utility upon
winning and type $\val^{+}$ is either in the (2) capacitated or
(3) strictly risk-neutral section of her utility upon winning.
\begin{enumerate}
\item ($\val^{-}$ capacitated). If $\val^{-}$ is capacitated upon
winning, then $\val^{+}$ will also be capacitated upon winning and
misreporting $\val^{-}$ (\autoref{c:misreports}). A capacitated agent is already receiving
the highest utility possible upon winning. Therefore, $\val^{+}$
strictly prefers misreporting $\val^{-}$ as such a report (strictly)
increases probability of winning and (weakly) increases utility from
winning.
\item ($\val^{-}$ risk-neutral, $\val^{+}$ capacitated). We split this case into two subcases depending on whether the agent with type $\val^{-}$ is capacitated with misreport $\val^{+}$.
\begin{enumerate}
\item ($\val^{-}$ capacitated when misreporting $\val^{+}$). As the
truthtelling $\val^{+}$ type is also capacitated (by assumption of
this case), the utilities of these two scenarios are the same,
i.e.,
\begin{align}
\label{eq:mono1}
\util_\capareport(\val^{-}, \val^{+}) &=
\util_\capareport(\val^{+}, \val^{+}).\\
\intertext{Since type $\val^{-}$ truthfully reporting
$\val^{-}$ is strictly uncapacitated, if her value was increased she
would feel a change in utility (for the same report); therefore,
type $\val^{+}$ reporting $\val^{-}$ has strictly more utility (\autoref{c:misreports}), i.e.,}
\label{eq:mono2}
\util_\capareport(\val^{+}, \val^{-}) &> \util_\capareport(\val^{-}, \val^{-}).\\
\intertext{Combining \eqref{eq:mono1} and \eqref{eq:mono2} we arrive at the contradiction that type $\val^{+}$ strictly prefers to report $\val^{-}$, i.e.,}
\notag
\util_\capareport(\val^{+}, \val^{-}) &> \util_\capareport(\val^{+}, \val^{+}).
\end{align}
\item ($\val^{-}$ risk-neutral when misreporting $\val^{+}$). First,
it cannot be that the bidder of type $\val^{+}$ is capacitated for
both reports $\val^{+}$ and $\val^{-}$ as, otherwise, misreporting
$\val^{-}$ gives the same utility upon winning but strictly higher
probability of winning. Therefore, both types are risk neutral when
reporting $\val^{-}$. Type $\val^{-}$ is risk-neutral for both reports
so she feels the discount in payment from reporting $\val^{+}$
instead of $\val^{-}$ linearly; type
$\val^{+}$ feels the discount less as she is capacitated at
$\val^{+}$. On the other hand, $\val^{+}$ has a higher value for
service and therefore feels the higher service probability from
reporting $\val^{-}$ over $\val^{+}$ more than $\val^{-}$.
Consequently, if $\val^{-}$ prefers reporting $\val^{-}$ to
$\val^{+}$, then so must $\val^{+}$ (strictly).
\end{enumerate}
\item ($\val^{-}$ risk-neutral, $\val^{+}$ risk-neutral). First, note
that the price upon winning must be higher when reporting $\val^{-}$
than $\val^{+}$, i.e., $p(\val^{-})/x(\val^{-}) >
p(\val^{+})/x(\val^{+})$; otherwise a bidder of type
$\val^{+}$ would always prefer to report $\val^{-}$ for the higher
utility upon winning and higher chance of winning. Thus, a bidder
of type $\val^{+}$ must be risk-neutral upon underreporting
$\val^{-}$ and winning; furthermore, risk-neutrality of $\val^{+}$
for reporting $\val^{+}$ implies the risk-neutrality of $\val^{-}$
for reporting $\val^{+}$ (\autoref{c:misreports}). As both
$\val^{+}$ and $\val^{-}$ are risk-neutral for reporting either of
$\val^{-}$ or $\val^{+}$, the standard monotonicity argument for
risk-neutral agents applies.
\end{enumerate}
Thus, for $x$ to be in BNE it must be monotone non-decreasing.
\end{proof}
\begin{proof}[\stoccom{Proof} of \autoref{lem:bne=>payid}\stoccom{.}]
First we show that if $x$ is strictly monotone then
$p(v) \geq \price^{\mathrm{VC}}(v)$ for all $v$. If $p(v) <
\price^{\mathrm{VC}}(v)$ then type $v$ on truthtelling obtains a wealth
$w$ strictly larger than $C$. Type $\val^{-} = v -
\epsilon$, for $\epsilon \in (0,w-C)$, would also be capacitated
when reporting $v$; therefore, by strict monotonicity of $x$
such an overreport strictly increases her utility and BIC is violated.
The following two claims give the necessary condition.
\begin{align}
\label{eq:payid-low}
pcap(v) & \geq pcap(\val^{-}) + (prn(v) - prn(\val^{-})), \quad \forall \val^{-} < v \\
\label{eq:payid-high}
pcap(v) & \leq \sup_{\val^{-} < v} \left\{ pcap(\val^{-}) + (prn(v) - prn(\val^{-})) \right\}, \quad \forall v \text{ s.t.\ }
pcap(v) > \price^{\mathrm{VC}}(v).
\end{align}
Equation \eqref{eq:payid-low} is easy to show. Since $pcap(v)
\geq \price^{\mathrm{VC}}(v)$, the wealth of any type~$\val^{-}$ when winning is at
most~$C$, and strictly smaller than $C$ if overbidding. In
other words, when overbidding, a bidder only uses the linear part of
her utility function and therefore can be seen as risk neutral.
Equation \eqref{eq:payid-low} then follows directly from the standard
argument for risk neutral agents.\footnote{For a risk neutral agent,
the risk neutral payment maintains the least difference in payment
to prevent all types from overbidding.}
Equation \eqref{eq:payid-high} would be easy to show if $pcap$ is
continuous: for all $v$ where $pcap(v) > \price^{\mathrm{VC}}(v)$, there
is a neighborhood $(v - \epsilon, v]$ such that deviating on this
interval only incurs the linear part of the utility function and the
agent is effectively risk neutral. We give the following general
proof that deals with discontinuity and includes continuous cases as
well.
To show \eqref{eq:payid-high}, it suffices to show that, for each $v$ where $pcap(v)
> \price^{\mathrm{VC}}(v)$, for any $\epsilon > 0$, $pcap(v) < pcap(\val^{-}) + (prn(v) - prn(\val^{-})) + \epsilon$
for some $\val^{-} < v$. Consider any $\val^{-} > v - \tfrac \epsilon 2$. Since $pcap(\val^{-}) \geq \price^{\mathrm{VC}}(\val^{-}) =
(\val^{-} - C) x(\val^{-}) > (v - \tfrac \epsilon 2 - C) x(\val^{-})$, the utility for $v$ to misreport
$\val^{-}$, i.e., $\util_\capareport(v, \val^{-})$ is not much smaller than if the agent is risk neutral:
\begin{align*}
\utilreport^{\mathrm{RN}}(v, \val^{-}) - \util_\capareport(v, \val^{-}) < \frac \epsilon 2 x(\val^{-}).
\end{align*}
The following derivation, starting with the BIC condition, gives the desired bound:
\begin{align*}
0 \leq \util_\capareport(v, v) - \util_\capareport(v, \val^{-}) & < \util_\capareport(v, v) - \utilreport^{\mathrm{RN}}(v, \val^{-}) + \frac \epsilon
2 x(\val^{-}) \\
& = (x(v) v - pcap(v)) - (x(\val^{-}) v - pcap(\val^{-})) + \frac \epsilon 2 x(\val^{-}) \\
& = (x(v) - x(\val^{-}))v - (pcap(v) - pcap(\val^{-})) + \frac \epsilon 2 x(\val^{-}) \\
& \leq prn(v) - prn(\val^{-}) + (v - \val^{-}) x(v) - (pcap(v) - pcap(\val^{-})) +
\frac \epsilon 2 x(\val^{-}) \\
& \leq prn(v) - prn(\val^{-}) - (pcap(v) - pcap(\val^{-})) + \epsilon.
\end{align*}
The first equality holds because $pcap(v) > \price^{\mathrm{VC}}(v)$; the second to last inequality uses the definition of risk
neutral payments (\autoref{thm:myerson}, \autoref{thmpart:payment}), and the last holds because $x(\val^{-})
< x(v) \leq 1$.
\end{proof}
\begin{proof}[\stoccom{Proof} of \autoref{lem:mono+payid=>bne}\stoccom{.}]
The proof proceeds in three steps. First, we show that an agent with
value $v$ does not want to misreport a higher value $\val^{+}$.
Second, we show that the expected payment on winning, i.e.,
$pcap(v)/x(v)$ is monotone in $v$. Finally, we
show that the agent with value $v$ does not want to misreport a
lower value $\val^{-}$. Recall in the subsequent discussion that
$prn$ is the risk-neutral expected payment for allocation
rule~$x$ (from \autoref{thm:myerson}, \autoref{thmpart:payment}).
\begin{enumerate}
\item \label{step:misreporting-highval} (Type $v$ misreporting $\val^{+}$.) This argument pieces
together two simple observations. First, \autoref{c:misreports}
and the fact that $pcap \geq \price^{\mathrm{VC}}$ imply that $v$ is
risk-neutral upon reporting $\val^{+}$.
Second, by definition of $pcap$, the difference in a capacitated agent's
payments given by $pcap(\val^{+}) - pcap(v)$ is at least that for a risk neutral agent given by
$prn(\val^{+}) - prn(v)$. The risk-neutral agent's
utility is linear and she prefers reporting $v$ to $\val^{+}$.
As the risk-averse agent's utility is also linear for payments in
the given range and because the difference in payments is only
increased, then the risk-averse agent must also prefer reporting
$v$ to $\val^{+}$.
\item (Monotonicity of $pcap / x$.) The monotonicity of
$\tfrac{pcap}{x}$, which is \autoref{thmpart:px-mon} of
\autoref{remark:payid}, will be used in the next case (and some
applications of \autoref{thm:payid}). We consider $v$ and
$\val^{+}$ and argue that $\tfrac{pcap(v)}{x(v)}
\leq \tfrac{pcap(\val^{+})}{x(\val^{+})}$. First, suppose
that the wealth upon winning of an agent with value $v$ is
$C$, i.e., $pcap(v) = \price^{\mathrm{VC}}(v)$. If
$pcap(\val^{+}) = \price^{\mathrm{VC}}(\val^{+})$ as well, then by definition of $\price^{\mathrm{VC}}$
(by $\tfrac{\price^{\mathrm{VC}}(v)}{x(v)} = v - C$) monotonicity
of $pcap/x$ holds for these points. If $pcap$ is
higher than $\price^{\mathrm{VC}}$ at $\val^{+}$ then this only improves
$pcap/x$ at $\val^{+}$. Second, suppose that the wealth
of an agent with value $v$ is strictly larger than $C$,
meaning this agent's utility increases with wealth. The allocation
rule $x(\cdot)$ is weakly monotone (\autoref{lem:bne=>mono}); suppose for a contradiction
that $\tfrac{pcap(v)}{x(v)} >
\tfrac{pcap(\val^{+})}{x(\val^{+})}$ on $v < \val^{+}$.
Then the agent with value $v$ can pretend to have value
$\val^{+}$, obtain at least the same probability of winning, and
obtain strictly lower payment. This increase in wealth is strictly
desired, and therefore, this agent strictly prefers misreporting
$\val^{+}$. Combined with \autoref{step:misreporting-highval},
above, which argued that a low valued agent would not prefer to
pretend to have a higher value, this is a contradiction.
\item (Type $v$ misreporting $\val^{-}$.) If $pcap(v) =
\price^{\mathrm{VC}}(v)$, then paying less on winning does not translate into
extra utility, and hence by the monotonicity of $pcap/x$,
the agent would never misreport.
We thus focus then on the case that $pcap(v) > \price^{\mathrm{VC}}(v)$. By
the monotonicity of $pcap / x$, there is a point $vzero <
v$ such that for every value $\val^{-}$ between $vzero$ and $v$, if
an agent with value $v$ reported $\val^{-}$, she would still be in
the risk-neutral section of her utility function. Specifically, this
entails that $\forall \val^{-}$ such that $vzero < \val^{-} < v$,
$pcap(\val^{-})/x(\val^{-}) \geq v - C$. Consider such
a $vzero$ and any such $\val^{-}$. For any such point,
$pcap(\val^{-})/x(\val^{-}) > \val^{-} - C$, and hence a
bidder with value $\val^{-}$ would also be strictly in the risk-neutral
part of her utility function upon winning.
For every such point, by our formulation in \eqref{eq:payid-max},
$pcap(v) - pcap(\val^{-}) = prn(v) -
prn(\val^{-})$. As a result, since she is effectively risk-neutral in
this situation, she cannot wish to misreport $\val^{-}$; otherwise, the
combination of $x$ and $prn$ would not be BIC for
risk-neutral agents.
For any $\val^{-} \leq vzero$, the wealth on winning for a bidder
with value $v$ would increase, but only into the capacitated
section of her utility function, hence gaining no utility on winning,
but losing out on a chance of winning due to the weak monotonicity
of $x$. Hence, she would never prefer to bid $\val^{-}$ over
bidding $vzero$. Combining this argument with the above argument,
our agent with value $v$ does not prefer to misreport any $\val^{-}
< v$. \qedhere
\end{enumerate}
\end{proof}
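The sufficiency direction can also be verified by brute force on a grid: build $pcap$ for a concrete monotone allocation rule and check that no type gains by misreporting. A sketch (Python; $x(v) = \min(1, v/2)$, $C = 1$, and the grid are illustrative choices of ours):

```python
import numpy as np

C = 1.0
v = np.linspace(0.0, 3.0, 601)
x = np.minimum(1.0, v / 2.0)                   # monotone allocation rule
X = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) / 2.0 * np.diff(v))))
prn = v * x - X                                # risk-neutral payment identity
pvc = (v - C) * x
pcap = np.zeros_like(v)
m = 0.0                                        # running sup of pcap - prn
for i in range(1, len(v)):
    pcap[i] = max(pvc[i], prn[i] + m)
    m = max(m, pcap[i] - prn[i])

# One-priced mechanism: report v', win w.p. x(v'), pay pcap(v')/x(v') on winning.
price = np.zeros_like(v)
price[1:] = pcap[1:] / x[1:]                   # x(0) = 0: never wins, pays 0
# Capacitated utility of true type v[i] reporting v[j].
U = x[None, :] * np.minimum(v[:, None] - price[None, :], C)
truthful = np.diag(U)
assert np.all(U <= truthful[:, None] + 1e-9)   # truthtelling is a best response
```

The final assertion checks every (type, report) pair at once; within numerical tolerance, no deviation is profitable, as \autoref{lem:mono+payid=>bne} predicts.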
\Xcomment{
We then argue that $pcap(v) \geq pcap(\val^{-}) +
(prn(v) - prn(\val^{-}))\ \ \forall \val^{-} < v$. These
together give $pcap(v) \geq \max\bigl(\price^{\mathrm{VC}}(v), \sup \left(
pcap(\val^{-}) + (prn(v) - prn(\val^{-}))\ |\ \val^{-}
< v \right)\bigr)$.
We assume this for the rest of the proof, that $pcap \geq
\price^{\mathrm{VC}}$. As a result, every bidder when truthfully reporting is in the
risk-neutral part of their utility function upon winning, either at
the cusp or strictly below the cusp. Given this, any such bidder is
risk-neutral when considering overbidding and paying a potentially
higher price upon winning. And as a result, if $pcap$ is ever
less steep than $prn$ at a point $v$ - specifically, if ever
$\dpricecapright(v) < \dpricern(v)$ - a bidder with value $v$
would prefer to overreport, violating BIC.
As $pcap$ must then always be as steep as $prn$, we have
$pcap(v) - pcap(\val^{-}) \geq (prn(v) -
prn(\val^{-}))\quad \forall \val^{-} \leq v$ and hence
$$pcap(v) \geq \max\bigl(\price^{\mathrm{VC}}(v), \sup \left( pcap(\val^{-}) + (prn(v) - prn(\val^{-}))\ |\ \val^{-} < v \right)\bigr).$$
\paragraph{$\left(pcap(v) \leq \max(\price^{\mathrm{VC}}(v), \ldots)\right)$.}
We consider here the case that $pcap(v) > \price^{\mathrm{VC}}(v)$; if
$pcap(v) = \price^{\mathrm{VC}}(v)$, then clearly the inequality we desire
holds. Begin by assuming for contradiction's sake that
$pcap(v) > \sup \left(pcap(\val^{-}) + (prn(v) -
prn(\val^{-})) | \val^{-} < v \right)$. We will show that this
must entail that the marginal cost that a bidder with value $v$ is
paying for allocation is strictly above $v$, and hence such a
bidder would prefer to underbid.
So, in this case, we have $pcap(v) > \lim_{\val^{-} \to v}
pcap(\val^{-}) + (prn(v) - prn(\val^{-}))$, and as
$x$ is strictly monotone, we have
\begin{align*}
\lim_{\val^{-} \to v} \frac{pcap(v) - pcap(\val^{-})}{x(v) - x(\val^{-})} > \lim_{\val^{-} \to v} \frac{prn(v) - prn(\val^{-})}{x(v) - x(\val^{-})}.
\end{align*}
By the risk-neutral payment identity, we know that $\lim_{\val^{-} \to v} \frac{prn(v) - prn(\val^{-})}{x(v) - x(\val^{-})} = v$, that the marginal cost of an increase in allocation probability at $v$ is $v$. Hence,
$\lim_{\val^{-} \to v} \frac{pcap(v) - pcap(\val^{-})}{x(v) - x(\val^{-})} > v$ and there exists a $\delta>0$ such that there exist arbitrarily small values $\epsilonilon>0$ s.t.
\begin{align*}
\frac{pcap(v) - pcap(v-\epsilonilon)}{x(v) - x(v-\epsilonilon)} &\geq v + \delta.
\end{align*}
Then, for any such $\epsilon$, we have $x(v-\epsilon)v - pcap(v-\epsilon) \geq x(v)v - pcap(v) + \delta(x(v) - x(v-\epsilon))$, and since $x$ is strictly monotonic,
\begin{align*}
x(v-\epsilon)v - pcap(v-\epsilon) &> x(v)v - pcap(v).
\end{align*}
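The step from the slope bound to the underbidding incentive can be checked numerically. The sketch below uses illustrative values for $v$, $\epsilon$, $\delta$ and for the allocation and capacitated price (none of them from the paper), and confirms that a marginal price of $v+\delta$ per unit of allocation makes underreporting strictly better, by exactly the guaranteed gap $\delta(x(v)-x(v-\epsilon))$.

```python
# Illustrative numeric check of the underbidding step. All values are
# hypothetical: v is the bidder's value, eps the underreport amount,
# delta the excess of the marginal price over v.
v, eps, delta = 1.0, 0.1, 0.2

x_v, x_lo = 1.0, 0.8             # x(v) and x(v - eps); x strictly monotone

# Choose pcap so that (pcap(v) - pcap(v-eps)) / (x(v) - x(v-eps)) = v + delta.
pcap_v = 0.5
pcap_lo = pcap_v - (v + delta) * (x_v - x_lo)

u_truthful = x_v * v - pcap_v    # risk-neutral utility from reporting v
u_underbid = x_lo * v - pcap_lo  # utility from reporting v - eps

# Underbidding wins by the guaranteed margin delta * (x(v) - x(v-eps)).
gap = u_underbid - u_truthful
print(u_truthful, u_underbid, gap)
```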
If the bidder with value $v$ is in the risk-neutral part of their
utility function upon winning and reporting $v-\epsilon$, this
states that they will strictly prefer to underreport $v-\epsilon$
rather than truthfully reporting $v$, violating BIC.
If $x$ is continuous immediately below $v$, then $pcap$
must also be continuous, and hence given that $pcap(v) >
\price^{\mathrm{VC}}(v)$ by assumption, there is a $\gamma >0$ s.t. for all points
$z\in [v-\gamma, v]$, $pcap(z) > (v-C)x(z)$ -
hence at every such point, the bidder with value $v$ is in the
risk-neutral part of their utility function upon underbidding
$z$. Then choose an $\epsilon$ smaller than $\gamma$, and the
above conditions hold, giving that the bidder with value $v$
desires to report $v-\epsilon$, violating BIC.
If $x$ is not continuous immediately below $v$, then
$pcap$ can also be discontinuous immediately below $v$. In
such a case though, if the marginal cost of the increased allocation
is higher than $v$, then the bidder will still want to misreport
lower. In particular, for any lower point $v-\epsilon$, we know
from above that $pcap(v-\epsilon) \geq
(v-\epsilon-C)x(v)$, so on misreporting, a bidder with
value $v$ will be arbitrarily close to being risk-neutral. Thus, we can
choose an $\epsilon$ such that the utility loss from being capacitated,
$x(v-\epsilon)\left(v-\frac{pcap(v-\epsilon)}{x(v-\epsilon)}
- C\right)$, is less than $\delta(x(v) - x(v-\epsilon))$,
the guaranteed difference between misreporting and truthfully
reporting. By our assumption here that $x$ is not continuous
immediately below $v$, this is feasible. Hence a bidder with value
$v$ would still wish to misreport $v-\epsilon$, violating BIC.
\end{proof}
}
\end{document}
\begin{document}
\title{Fields of Quantum Reference Frames based on Different
Representations of Rational Numbers as States of Qubit Strings}
\author{Paul Benioff\\
Physics Division, Argonne National Laboratory \\
Argonne, IL 60439 \\
e-mail: [email protected]}
\date{\today}
\maketitle
\begin{abstract}In this paper fields of quantum
reference frames based on gauge transformations of
rational string states are described in a way that,
hopefully, makes them more understandable than their
description in an earlier paper. The approach taken
here is based on three main points: (1) There are a
large number of different quantum theory representations
of natural numbers, integers, and rational numbers as
states of qubit strings. (2) For each representation,
Cauchy sequences of rational string states give a
representation of the real (and complex) numbers. A
reference frame is associated to each representation.
(3) Each frame contains a representation of all
mathematical and physical theories that have the real
and complex numbers as a scalar base for the theories.
These points and other aspects of the resulting fields are
then discussed and justified in some detail. Also two
different methods of relating the frame field to physics
are discussed.
\end{abstract}
\section{Introduction}
In other work \cite{BenFIQRF} two dimensional fields of
quantum reference frames were described that were based on
different quantum theory representations of the real
numbers. Because the description of the fields does not
seem to have been widely understood, it is worthwhile to approach their description
in a way that will, hopefully, make the fields
better understood. This is the goal of this contribution to the
third Feynman conference proceedings.
The approach taken here is based on three main points:
\begin{itemize} \item There are a large number of
different quantum theory representations of
natural numbers, integers, and rational numbers as
states of qubit strings. These arise from gauge
transformations of the qubit string states. \item
For each representation, Cauchy sequences of rational
string states give a representation of the real (and
complex) numbers. A reference frame is
associated to each representation. \item Each frame
contains a representation of all mathematical and physical
theories. Each of these is a mathematical structure that
is based on the real and complex number
representation base of the frame. \end{itemize} This approach
is summarized in the next section with more details given
in the following sections.
\section{Summary Discussion of the Three Points}\label{SDTP}
As is well known, the large amount of work and interest in quantum
computing and quantum information theory is based on quantum
mechanical representations of numbers as states of strings of
qubits and linear superpositions of these states. The
numbers represented by these states are usually the nonnegative
integers or the natural numbers. Examples of integer
representations are states of the form $|\gamma,s\rangle$ and
their linear superpositions $\psi =\sum_{s,\gamma}c(\gamma,s)|\gamma,s\rangle.$
Here $|\gamma,s\rangle =|\gamma,0\rangle\otimes_{j=1}^{j=n}
|s(j),j\rangle$ is a state of a string
of $n+1$ qubits where the qubit at location $0$ denotes
the sign (as $\gamma =+,-$) and $s:[1,\ldots,n]\rightarrow
\{0,1\}$ is a $0-1$ valued function on the integer
interval $[1,n]$ where $s(n)=1.$ This last condition
removes the redundancy of leading $0s.$
This description can be extended to give quantum mechanical
representations of rational numbers as states of qubit strings
\cite{BenRCRNQM} and of real and complex numbers as Cauchy sequences
of rational number states of qubit strings \cite{BenRRCNQT}.
As will be seen, string rational numbers can be
represented by qubit string states $|\gamma,s\rangle$ where $s$
is a $0-1$ valued function on an integer interval $[l,u]$ with
$l\leq 0$ and $u\geq 0$, and the sign qubit $\gamma$ is at position
$0$.
A basic point to note is that there are a great many
different representations of rational numbers (and of
integers and natural numbers) as states of qubit strings.
Besides the arbitrariness of the location of the qubit
string on an integer lattice there is the arbitrariness of
the choice of quantization axis for each qubit in the
string. This latter arbitrariness is equivalent to a gauge
freedom for the choice of which system states correspond to
the qubit states $|0\rangle$ and $|1\rangle$.
This arbitrariness of gauge choice for each qubit is
discussed in Section \ref{GT} in terms of global and
local gauge transformations of rational string states of
qubit strings. A different representation of string rational
numbers as states of qubit strings is associated with each
gauge transformation.
As will be seen in the next section, there is a quantum
representation of real numbers associated with each
representation of the rational string states. It is clear
that there are a large number of different representations
of real numbers as each representation is associated with a
different gauge representation of the rational string states.
The gauge freedom in the choice of quantization axis for
each qubit plays an important role in quantum cryptography
and the transfer of quantum information between a sender
and receiver. The choice of axis is often referred to as
a reference frame chosen by sender and receiver for
transmission and receipt of quantum information
\cite{Bagan,Rudolph,Bartlett,vanEnk}.
Here this idea of reference frames is taken over in
that a reference frame $F_{R_{U}}$ is associated with
each quantum representation $R_{U}$ of real numbers.
Since each real number representation is associated
with a gauge transformation $U$, one can also associate
frames directly with gauge transformations, as in
$U\rightarrow F_{U}$ instead of $F_{R_{U}}.$
It should be noted that complex numbers are also included
in this description since they can be represented as an
ordered pair of real numbers. Or they can be built up
directly from complex rational string states. From now
on real numbers and their representations will be assumed
to also include complex numbers and their representations.
An important point for this paper is that any physical or
mathematical theory, for which the real numbers form a base
or scalar set, has a representation in each frame as a
mathematical structure based on the real number representation
in the frame. Since this is the case for all physical theories
considered to date, it follows that they all have representations
in each frame as mathematical structures based on the real number
representation in the frame. It follows that theories such
as quantum mechanics, quantum field theory, QED, QCD,
string theory, and special and general relativity all have
representations in each frame as mathematical structures
based on the real number representation associated with
the frame. It is also the case that, if the space time
manifold is considered to be a $4$-tuple of the real
numbers, then each frame contains a representation of
the space time manifold as a $4$-tuple of the real
number representation.
To understand these observations better it is useful to
briefly describe theories. Here the usual
mathematical logic \cite{Shoenfield} characterization of
theories as being defined by a set of axioms is used. All
theories have in common the logical axioms and logical
rules of deduction; they are distinguished by having
different sets of nonlogical axioms. This is the case
whether the axioms are explicitly stated or not.\footnote{
The importance of the axiomatic
characterization of physical or mathematical theories is
to be emphasized. All properties of physical or mathematical
systems are described in the theory as theorems which are
obtained from the axioms by use of the logical rules of
deduction. Without axioms a theory is empty as it
can not make any predictions or have any meaning.}
Each theory described by a consistent set of axioms has
many representations (called models in mathematical
logic)\footnote{In physics models have a different meaning
in that they are also theories. However they are simpler
theories as they are based on simplifying model assumptions
which serve as axioms for the simpler theory.}
as mathematical structures in which the theory axioms are
true. Depending on the axiom set the representations may
or may not be isomorphic.
The real numbers axiomatized as a complete ordered field
are an example of a simple theory. For this
axiomatization all representations of the real numbers are
isomorphic. However they are not the same. All theories
based on the real (or complex) numbers include
the real (and complex) number axioms in their axiom
sets.
These well known aspects of theories are quite familiar
in the application of group theory to physics. Each abstract
group is defined by a set of axioms that consist of the general
group axioms and additional ones to describe the particular
group considered. Each group has many different representations
as different mathematical systems. These can be matrices or
operators in quantum theory. These are further classified by
the dimensionality of the representation and whether they are
reducible or irreducible. The importance of different
irreducible representations to describe physical systems and
their states is well known.
As sets of axioms and derived theorems, physical theories,
unlike mathematical theories, also have representations as
descriptions of physical systems and their properties. It
is immaterial here whether these representations are
considered to be maps from theory statements to physical
systems and their properties or from different mathematical
representations of the theory to physical systems and their
properties. It is, however, relevant to note that theoretical
predictions to be tested by experiment are or can be
represented by equations whose solutions are real numbers.
Spectra of excited states of nuclei, atoms, and molecules
are examples. The same holds for observables with discrete
spectra such as spin, isospin, and angular momentum. Here
the eigenvalues are the real number equivalents of integers
or rational numbers.
\section{Real Numbers and their Representation in Quantum Theory}
\label{RNRQT}It is useful to begin with a question: ``What are the
real numbers?'' The most general answer is that
they are elements of any set of mathematical or physical
systems that satisfies the real
number axioms. The axioms express the requirements
that the set must be a complete, ordered field.
This means that the set must be closed under
addition, multiplication, and their inverses, a linear
order must exist for the collection, and any Cauchy
sequence of elements must converge to an element of the set.
It follows that all sets of real numbers must at least
have these properties. However they can have other
properties as well. A study of most any mathematical
analysis textbook will show that real numbers are
defined as equivalence classes of either Dedekind
cuts or of Cauchy sequences of rational numbers.\footnote{
Rational numbers are defined as equivalence classes of
ordered pairs of integers which are in turn defined as
equivalence classes of ordered pairs of the natural
numbers $0,1,2,\ldots$.} A sequence $t_{n}$ of rational
numbers is a Cauchy sequence if \begin{equation}\label{cauchynos}
\begin{array}{c}\mbox{For all $l$ there is an $h$ such
that} \\ |t_{j}-t_{k}|\leq 2^{-l} \\ \mbox{for all
$j,k>h$.} \end{array}\end{equation} The proof that the
set of equivalence classes of Cauchy sequences are real
numbers requires proving that it is a complete ordered field.
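A concrete instance of Eq. \ref{cauchynos}: the $n$-bit binary truncations of a non-dyadic rational form a Cauchy sequence. The sketch below (the choice of $1/3$ and the truncation rule are illustrative, not from the paper) verifies $|t_{j}-t_{k}|\leq 2^{-l}$ for all $j,k>l$ over a finite range.

```python
from fractions import Fraction

def truncate(x, n):
    """n-bit binary truncation of a nonnegative rational: floor(x * 2^n) / 2^n."""
    return Fraction(int(x * 2**n), 2**n)

# Truncations of 1/3 approach it from below with |t_n - 1/3| < 2^{-n},
# so the sequence satisfies the displayed Cauchy condition.
x = Fraction(1, 3)
t = [truncate(x, n) for n in range(1, 30)]

for l in range(1, 20):
    for j in range(l + 1, 25):
        for k in range(l + 1, 25):
            assert abs(t[j - 1] - t[k - 1]) <= Fraction(1, 2**l)
print("Cauchy condition holds on the sampled range")
```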
A similar situation holds for rational numbers (and
integers and natural numbers). Rational numbers are
elements of any set that satisfies the axioms for an
ordered field. However the field is not complete.
The representation of rational numbers used here will be
based on finite strings of digits or kits in
some base $k$, along with a sign and ``$k$-al'' point.
Use of the string representation is based on the fact that
physical representations of rational numbers (and integers and
natural numbers) are given as digits or states of strings
of kits or qukits (with a sign and ``$k$-al'' point for rational
numbers) in some base $k\geq 2$. Such numbers are also the base
of all computations, mainly because of the efficiency in
representing large numbers and in carrying out arithmetic
operations. The usefulness of this representation is based
on the fact that for any base $k\geq 2,$ these string numbers are
dense in the set of all rational numbers.
Here, to keep things simple, representations will be
limited to binary ones with $k=2.$ The representations will
be further restricted here to states of
finite strings of qubits. This is based on the fact that
quantum mechanics is the
basic mechanics applicable to all physical systems. The
Cauchy condition will be applied to sequences of these
states to give quantum theory representations of real
numbers.
It should be noted that there are also quantum theory
representations of real numbers that are not based on
Cauchy sequences of states of qukit strings. Besides
those described in \cite{Litvinov,Corbett,Tokuo} there
are representations as Hermitian operators in Boolean
valued models of ZF set theory \cite{Takeuti,Davis,Gordon}
and in a category theory context \cite{Krol}. These
representations will not be considered further here because
of the limitation here of representations to those based on finite
strings of qubits.
\subsection{Rational Number States}
Here a compact representation of rational string states is used
that combines the location of the ``binal'' point and the sign.
For example, the state $|1001-0111\rangle$ is a
state of eight $0-1$ qubits and one $\pm$ qubit representing the
rational string number $-9.4375$ in the ordinary decimal
form.
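The compact notation can be decoded mechanically. In the sketch below (the helper name and parsing conventions are illustrative assumptions, not the paper's), the sign qubit at position $0$ also marks the binal point: bits to its left sit at positions $0,\ldots,u$ and bits to its right at positions $-1,\ldots,l$.

```python
from fractions import Fraction

def rational_string_value(state):
    """Value of a compact rational string state such as '1001-0111':
    the sign qubit ('+' or '-') sits at position 0 and doubles as the
    binal point."""
    for i, ch in enumerate(state):
        if ch in '+-':
            sign = 1 if ch == '+' else -1
            left, right = state[:i], state[i + 1:]
            break
    value = Fraction(0)
    for pos, bit in enumerate(reversed(left)):   # positions 0, 1, ..., u
        value += int(bit) * Fraction(2) ** pos
    for pos, bit in enumerate(right, start=1):   # positions -1, ..., l
        value += Fraction(int(bit), 2 ** pos)
    return sign * value

print(rational_string_value('1001-0111'))  # -151/16, i.e. -9.4375
```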
Qubit strings and their states can be described by locating
qubits on an integer lattice\footnote{Note that
the only relevant property of the integer locations is their
ordering, a discrete linear ordering. Nothing is assumed
about the spacing between adjacent locations on the lattice.}.
Rational string states correspond
to states of qubits occupying an integer interval $[m+l,m+u].$
Here $l\leq 0\leq u$, $m$ is the location of the $\pm$ qubit,
and the $0-1$ qubits occupy all positions in the interval
$[m+l,m+u].$ Note that two qubits, a sign one and a
$0-1$ one, occupy position $m.$ For fermionic qubits this
can be accounted for by including extra variables to
distinguish the qubit types.
One way to describe rational string states is by strings
of qubit annihilation creation (AC) operators $a_{\alpha,j},a^{\dag}_{\alpha,j}$
acting on a qubit vacuum state $|0\rangle.$ Also present
is another qubit type represented by the AC operators
$c_{\gamma,m},c^{\dag}_{\gamma,m}$. Here $\alpha =0,1;\gamma=+,-$, and $j,m$ are
integers.
For this work it is immaterial whether the AC operators satisfy
commutation relations or anticommutation relations:
\begin{equation}\label{acomm} \begin{array}{c}[a_{\alpha,j},a^{\dag}_{\alpha^{\prime},j^{\prime}}]=
\delta_{j,j^{\prime}}\delta_{\alpha,\alpha^{\prime}} \\
\mbox{$[a^{\dag}_{\alpha,j},a^{\dag}_{\alpha^{\prime},j^{\prime}}]=[a_{\alpha,j},a_{\alpha^{\prime},j^{\prime}}]=0$}\end{array}\end{equation}or
\begin{equation}\label{aacomm} \begin{array}{c}\{a_{\alpha,j},a^{\dag}_{\alpha^{\prime},j^{\prime}}\}=
\delta_{j,j^{\prime}} \delta_{\alpha,\alpha^{\prime}} \\
\{a^{\dag}_{\alpha,j},a^{\dag}_{\alpha^{\prime},j^{\prime}}\}=\{a_{\alpha,j},a_{\alpha^{\prime},j^{\prime}}\}=0.\end{array}\end{equation}
with similar relations for the $c$ operators. The $c$
operators commute with the $a$ operators.
Rational number states are represented by strings of $a$
creation operators and one $c$ creation operator acting
on the qubit vacuum state $|0\rangle$ as \begin{equation}
\label{rastrst}|\gamma,m,s,l,u\rangle=c^{\dag}_{\gamma,m}
a^{\dag}_{s(u),u}\cdots a^{\dag}_{s(l),
l}|0\rangle.\end{equation} Here $l\leq m\leq
u$ and $l<u$ with $l,m,u$ integers, and $s:[l,u]\rightarrow
\{0,1\}$ is a $\{0,1\}$ valued function on the integer
interval $[l,u]$. Alternatively $s$ can
be considered as a restriction to $[l,u]$ of a function
defined on all the integers.
An operator $\tilde{N}$ can be defined whose eigenvalues
correspond to the values of the rational numbers one associates
with the string states. $\tilde{N}$ is the product of two
commuting operators, a sign scale operator $\tilde{N}_{ss}$,
and a value operator $\tilde{N}_{v}.$ One has\begin{equation}
\label{defN} \begin{array}{c}\tilde{N}=\tilde{N}_{ss}
\tilde{N}_{v} \\ \mbox{where } \tilde{N}_{ss} =\sum_{\gamma,m}\gamma
2^{-m}c^{\dag}_{\gamma,m}c_{\gamma,m} \\ \tilde{N}_{v}=\sum_{i,j}i2^{j}
a^{\dag}_{i,j}a_{i,j}.\end{array}\end{equation} The operator is
given for reference only as it is not used to define
arithmetic properties of the rational string states.
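On a basis state the eigenvalue of $\tilde{N}$ factors just as the operator does: $\tilde{N}_{ss}$ contributes $\gamma 2^{-m}$ and $\tilde{N}_{v}$ contributes $\sum_{j}s(j)2^{j}$. A small sketch (the function name and the dictionary encoding of $s$ are illustrative conventions, not from the paper) computes this product for the earlier example state, and also checks the translation invariance discussed next.

```python
def n_tilde_eigenvalue(gamma, m, s):
    """Eigenvalue of N~ = N_ss * N_v on a rational string state:
    gamma is +1 or -1, m is the position of the sign qubit, and
    s maps integer positions j to bits s(j)."""
    n_ss = gamma * 2.0 ** (-m)                         # sign-scale factor
    n_v = sum(bit * 2.0 ** j for j, bit in s.items())  # value factor
    return n_ss * n_v

# The example state |1001-0111> has gamma = -1, m = 0, and 1s at
# positions 3, 0, -2, -3, -4.
s = {3: 1, 2: 0, 1: 0, 0: 1, -1: 0, -2: 1, -3: 1, -4: 1}
print(n_tilde_eigenvalue(-1, 0, s))  # -9.4375

# Translating the whole state (bit positions and sign position together)
# along the lattice leaves the eigenvalue unchanged.
shifted = {j + 5: bit for j, bit in s.items()}
print(n_tilde_eigenvalue(-1, 5, shifted))  # -9.4375
```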
There is a large amount of arithmetical redundancy
in the states. For instance the arithmetic properties of a
rational string state are invariant under a translation
along the integer axis. This is a consequence of the fact
that these properties of the state are determined
by the distribution of $1s$ relative to the position $m$
of the sign and not on the value of $m$. The other
redundancy arises from the fact that states that differ
in the number of leading or trailing $0s$ are all
arithmetically equal.
These redundancies can be used to define equivalence classes
of states or select one member as a representative of each
class. Here the latter choice will be made in that rational number
states will be restricted to those with $m=0$ for the sign
location and those with $s$ restricted so that $s(l)=1$ if
$l<0$ and $s(u)=1$ if $u>0.$ This last condition removes
leading and trailing $0s.$ The state
$a^{\dag}_{0,0}c^{\dag}_{+,0}|0\rangle$ is the number $0$. For ease
in notation from now on the variables $m,l,u$ will be
dropped from states. Thus states $|\gamma,0,s,l,u\rangle$ will
be represented as $|\gamma,s\rangle$ with the values of $l,u$
included in the definition of $s$.
There are two basic arithmetic properties: equality, $=_{A},$
and ordering, $\leq_{A}.$ Arithmetic equality is defined by
\begin{equation}\label{equalA}\begin{array}{c}
|\gamma,s\rangle =_{A}|\gamma^{\prime},s^{\prime}\rangle,
\\ \mbox{ if } l^{\prime}=l,\; u^{\prime}=u,\;
\gamma^{\prime}=\gamma \ \mbox{ and } 1_{s^{\prime}}=1_{s}.\end{array}
\end{equation} Here $1_{s}=\{j:s(j)=1\}$ the set
of integers $j$ for which $s(j)=1$ and similarly for $1_{s^{\prime}}.$
That is, two states are arithmetically equal if one has the same
distribution of $1s$ relative to the location
of the sign as the other.
Arithmetic ordering on positive rational string states is
defined by \begin{equation}\label{deforderA}
|+,s\rangle \leq_{A}|+,s^{\prime}\rangle,
\mbox{ if } 1_{s}\leq 1_{s^{\prime}} \end{equation}
where $1_{s}\leq 1_{s^{\prime}}$ means $1_{s}=1_{s^{\prime}}$ or $1_{s}<1_{s^{\prime}}$,
with\begin{equation}\label{deforderAone}
\begin{array}{c}1_{s}< 1_{s^{\prime}}\mbox{ if there is a $j$
where $j$ is in $1_{s^{\prime}}$ and not in
$1_{s}$} \\ \mbox{and for all $m>j$, $m\in 1_{s}$
iff $m\in 1_{s^{\prime}}$}.\end{array}
\end{equation} The extension to zero and negative
states is given by\begin{equation}\label{0negordr}
\begin{array}{c}|+,\underline{0}\rangle \leq_{A}|+,s\rangle
\mbox{ for all $s$}\\ |+,s\rangle \leq_{A}|+,s^{\prime}\rangle
\rightarrow |-,s^{\prime} \rangle\leq_{A}|-,s\rangle.
\end{array}\end{equation}
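The strict ordering on the sets $1_{s}$ compares positions from the most significant disagreement downward. A sketch (the name `less_a` is illustrative, not the paper's notation) implements the displayed rule and agrees with the numeric values of the corresponding states.

```python
def less_a(ones_s, ones_sp):
    """1_s <_A 1_s': some position j lies in 1_s' but not in 1_s,
    and at every position above j the two sets agree."""
    for j in sorted(ones_s | ones_sp, reverse=True):
        in_s, in_sp = j in ones_s, j in ones_sp
        if in_s != in_sp:
            return in_sp      # highest disagreement decides the order
    return False              # identical sets: not strictly less

# {2, 0} encodes 5 and {2, 1} encodes 6; {-1} encodes 1/2 and {0} encodes 1.
print(less_a({2, 0}, {2, 1}))  # True
print(less_a({2, 1}, {2, 0}))  # False
print(less_a({-1}, {0}))       # True
```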
The definitions of $=_{A},\leq_{A}$ can be extended to
linear superpositions of rational string states in a
straightforward manner to give probabilities that two states are
arithmetically equal or that one state is less than another
state.
Operators for the basic arithmetic operations of addition,
subtraction, multiplication, and division to
any accuracy $|+,-\ell\rangle$ are represented by
$\tilde{+}_{A},\tilde{-}_{A},\tilde{\times}_{A},
\tilde{\div}_{A,\ell}.$ The state
\begin{equation}\label{accur}|+,-\ell\rangle
=c^{\dag}_{+,0}a^{\dag}_{\underline{0}_{[0,-\ell+1]}}a^{\dag}_{1,-\ell}|0\rangle,
\end{equation} where $a^{\dag}_{\underline{0}_{[0,-\ell+1]}}
=a^{\dag}_{0,0}a^{\dag}_{0,-1}\cdots a^{\dag}_{0,-\ell+1},$ is an eigenstate
of $\tilde{N}$ with eigenvalue $2^{-\ell}.$
As an example of the explicit action of the arithmetic
operators, the unitary addition operator $\tilde{+}_{A}$
satisfies \begin{equation}\label{addn}
\tilde{+}_{A}|\gamma, s\rangle|\gamma^{\prime},s^{\prime}\rangle=|\gamma,
s\rangle|\gamma^{\prime\prime},s^{\prime\prime}\rangle\end{equation} where
$|\gamma^{\prime\prime},s^{\prime\prime}\rangle$ is the resulting addend state.
It is often useful to write the addend state as
\begin{equation}\label{addnplA} |\gamma^{\prime\prime},s^{\prime\prime}\rangle=
|\gamma^{\prime},s^{\prime}+_{A}\gamma, s\rangle =_{A}|\gamma^{\prime},s^{\prime}\rangle +_{A}|\gamma,s\rangle.
\end{equation} More details on the arithmetic operations
are given elsewhere \cite{BenRRCNQT}. Note that these
operations are quite different from the usual quantum theory
superposition, product, etc. operations. This is the reason
for the presence of the subscript A.
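Since the arithmetic operations mirror ordinary rational arithmetic on the values of the states, $\tilde{+}_{A}$ can be sketched at the level of values. The following illustration (the encoding of a state as a sign and a set of $1$ positions, and all function names, are assumptions for illustration; it is not the paper's operator construction) adds two states by mapping to rationals and back.

```python
from fractions import Fraction

def to_value(gamma, ones):
    """Rational value of a string state: sign gamma (+1/-1), 1s at the
    integer positions in `ones` (sign qubit at position 0)."""
    return gamma * sum(Fraction(2) ** j for j in ones)

def from_value(v):
    """Inverse map: binary expansion of a dyadic rational back to
    (gamma, set of 1-positions)."""
    gamma, v = (1, v) if v >= 0 else (-1, -v)
    ones, j = set(), 0
    while Fraction(2) ** j <= v:        # find the highest needed position
        j += 1
    for k in range(j, -60, -1):         # enough fractional bits for dyadics
        if Fraction(2) ** k <= v:
            ones.add(k)
            v -= Fraction(2) ** k
    return gamma, ones

def add_a(state1, state2):
    """Value-level sketch of the addition operator +_A."""
    return from_value(to_value(*state1) + to_value(*state2))

print(add_a((1, {1, -1}), (1, {0, -1})))  # 2.5 + 1.5 = 4 -> (1, {2})
```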
\subsection{The Cauchy Condition}
The arithmetic operators can be used to define rational
number properties of rational string states and their
superpositions. They can also be used to define the
Cauchy condition for a sequence of rational string states.
Let $\{|\gamma_{n},s_{n}\rangle :n=1,2,\ldots\}$ be any sequence
of rational string states. Here for each $n$, $\gamma_{n}\in
\{+,-\}$ and $s_{n}$ is a $0-1$ valued function from a finite
integer interval that includes $0.$
The sequence $\{|\gamma_{n}s_{n}\rangle\}$
satisfies the Cauchy condition if \begin{equation}\label{cauchy}
\begin{array}{c}\mbox{ For each $\ell$ there is an $h$
where for all $j,k>h$} \\
|(|\gamma_{j}s_{j}-_{A}\gamma_{k}s_{k}|_{A})\rangle
\leq_{A}|+,-\ell\rangle.\end{array} \end{equation} In this
definition $|(|\gamma_{j}s_{j}-_{A}\gamma_{k}
s_{k}|_{A})\rangle$ is the state that is
the arithmetic absolute value of the arithmetic difference
between the states $|\gamma_{j},s_{j}\rangle$ and
$|\gamma_{k},s_{k}\rangle.$ The Cauchy condition says that
this state is arithmetically less than or equal to the
state $|+,-\ell\rangle$ for all $j,k$ greater than some $h$.
It must be emphasized that this Cauchy condition statement
is a direct translation of Eq. \ref{cauchynos} to apply to
rational string states. It has nothing to do with the
usual convergence of sequences of states in a Hilbert or
Fock space. It is easy to see that state sequences which
converge arithmetically do not converge quantum
mechanically.
It was also seen in \cite{BenRRCNQT} that the Cauchy condition
can be extended to sequences of linear superpositions of
rational states. Let $\psi_{n}
=\sum_{\gamma,s}|\gamma, s\rangle\langle\gamma, s|\psi_{n}\rangle.$
Here $\sum_{s}=\sum_{l\leq 0}\sum_{u\geq 0}\sum_{s_{[l,u]}}$
is a sum over all integer intervals $[l,u]$ and
over all $0-1$ valued functions from $[l,u].$ From this
one can define the probability that the arithmetic absolute value
of the arithmetic difference between $\psi_{j}$ and
$\psi_{k}$ is arithmetically less than or equal to
$|+,-\ell\rangle$ by\begin{equation}\label{Pjml}
\begin{array}{l}P_{j,k,\ell}= \sum_{\gamma,s}
\sum_{\gamma^{\prime},s^{\prime}} |\langle\gamma, s|\psi_{j}\rangle
\langle\gamma^{\prime}, s^{\prime}|\psi_{k}\rangle|^{2} \\
\hspace{2cm}\mbox{where } |(|\gamma,s-_{A}\gamma^{\prime},s^{\prime}|_{A})\rangle \leq_{A}
|+,-\ell\rangle. \end{array}\end{equation}
The sequence $\{\psi_{n}\}$ satisfies the Cauchy
condition if $P_{\{\psi_{n}\}} =1$
where\begin{equation}\label{limPjkl}
P_{\{\psi_{n}\}} =\liminf_{\ell\rightarrow\infty}
\limsup_{h\rightarrow\infty}\liminf_{j,k>h}P_{j,k,\ell}.
\end{equation} Here $P_{\{\psi_{n}\}}$ is the probability
that the sequence $\{\psi_{n}\}$ satisfies the Cauchy condition.
Cauchy sequences can be collected into equivalence classes
by defining $\{|\gamma_{n},s_{n}\rangle\} \equiv
\{|\gamma^{\prime}_{n},
s^{\prime}_{n}\rangle\}$ if the Cauchy condition holds with
$\gamma^{\prime}_{k}$ replacing $\gamma_{k}$ and
$s^{\prime}_{k}$ replacing $s_{k}$ in
Eq. \ref{cauchy}. To this end let $[\{|\gamma_{n},
s_{n}\rangle\}]$ denote the equivalence class containing
the Cauchy sequence $\{|\gamma_{n}, s_{n}\rangle\}.$ Similarly
$\{\psi_{n}\}\equiv\{\psi^{\prime}_{n}\}$ if
$P_{\{\psi_{n}\}\equiv\{\psi^{\prime}_{n}\}}=1$ where
$P_{\{\psi_{n}\}\equiv\{\psi^{\prime}_{n}\}}$ is given by
Eqs. \ref{Pjml} and \ref{limPjkl} with $\psi^{\prime}_{k}$
replacing $\psi_{k}$ in Eq. \ref{Pjml}.
The definitions of $=_{A},\leq_{A},\tilde{+}_{A},\tilde{-}_{A},
\tilde{\times}_{A},\tilde{\div}_{A,\ell}$ can be lifted to
definitions of $=_{R},\leq_{R},\tilde{+}_{R},\tilde{-}_{R},
\tilde{\times}_{R},\tilde{\div}_{R}$ on the set $[\{|\gamma_{n},
s_{n}\rangle\}]$ of all equivalence classes. It can be
shown \cite{BenRRCNQT} that $[\{|\gamma_{n},
s_{n}\rangle\}]$ with these operations and relations
is a representation or model of the real number axioms. In
this sense it is entirely equivalent to $R$ which is the
real number component of the complex number base for the
Hilbert space containing the rational string number states that
were used to define $[\{|\gamma_{n},s_{n}\rangle\}].$
Another representation of real numbers can be obtained by
replacing the sequences $\{|\gamma_{n},s_{n}\rangle\}$ by
operators. This can be achieved by replacing each index
$n$ by the rational string state that corresponds to
the natural number $n$. These are defined by
$|\gamma,s\rangle$ where $\gamma=+$ and $l=0$ where $l$ is the
lower interval bound, for the domain of $s$ as a $0-1$
function over the integer interval $[l,u].$
In this case each sequence $\{|\gamma_{n},
s_{n}\rangle\}$ corresponds to an operator $\tilde{O}$
defined on the domain of natural number states. One has
\begin{equation}\label{defO} \tilde{O}|+,s\rangle =
|\gamma_{n},s_{n}\rangle \end{equation} where $n$
is the $\tilde{N}$ (defined in Eq. \ref{defN})
eigenvalue of the state $|+,s\rangle.$ $\tilde{O}$ is
defined to be Cauchy if the right-hand sequence in Eq.
\ref{defO} is Cauchy. One can also give a Cauchy
condition for $\tilde{O}$ by replacing the natural
numbers in the definition quantifiers by natural number
states.
One can repeat the description of equivalence classes of
Cauchy sequences of states for the operators to obtain
another representation of real numbers as equivalence
classes of Cauchy operators. The two definitions are
closely related as Eq. \ref{defO} shows, and should
be equivalent as representations of real number axioms.
This should follow from the use of Eq. \ref{defO} to
substitute the left-hand expression for the right-hand
expression in all steps of the proofs that Cauchy
sequences of rational string states satisfy the real
number axioms.
\section{Gauge Transformations}\label{GT}
The representation of rational string states as states
of strings of qubits as in Eq. \ref{rastrst} implies a
choice of quantization axes for each qubit. Usually one
assumes the same axis for each qubit where the axis is
fixed by some external field. However this is not
necessary, and in some cases, such as quantum cryptography
\cite{QCrypt}, rotation of the axis plays an important
role.
In general there is no reason why the axes cannot be
arbitrarily chosen for each qubit. This freedom of
arbitrary directions for the axes of each qubit
corresponds to the set of possible local and global
gauge transformations of the qubit states. Each gauge
transformation corresponds to a particular choice in
that it defines the axis direction of a qubit relative
to that of its neighbors.
Here a gauge transformation $U$ can be defined as an
$SU(2)$ valued function on the integers, $U:\{\ldots,
-1,0,1,\ldots\}\rightarrow SU(2).$ $U$ is global if
$U_{j}$ is independent of $j$, local if it depends on
$j$. The effect of $U$ on a rational string state
$|\gamma,s\rangle$ is given by \begin{equation}\label{Urast}
U|\gamma,s\rangle =U_{0}c^{\dag}_{\gamma,0}U_{u}a^{\dag}_{s(u),u}\cdots
U_{l}a^{\dag}_{s(l),l}|0\rangle =(c^{\dag}_{U_{0}})_{\gamma,0}
(a^{\dag}_{U_{u}})_{s(u),u}\cdots(a^{\dag}_{U_{l}})_{s(l),l}|0\rangle
\end{equation}
where\begin{equation}\label{adU}\begin{array}{c}
(a^{\dag}_{U_{j}})_{i,j}=U_{j}a^{\dag}_{i,j}=\sum_{k}(U_{j})_{i,k}
a^{\dag}_{k,j}\\ (a_{U_{j}})_{h,j}=a_{h,j}U_{j}^{\dag}=
\sum_{i}(U_{j})^{*}_{i,h}a_{i,j}.\end{array}\end{equation}
These results are based on the representation of $U_{j}$
as \begin{equation}\label{Uexpand} U_{j}=\sum_{i,h}
(U_{j})_{i,h}a^{\dag}_{i,j}a_{h,j}.\end{equation}
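The action of a local gauge transformation on a product state can be sketched numerically. The code below (real rotation matrices are used as a simple one-parameter family inside $SU(2)$, and all names are illustrative assumptions) applies a position-dependent rotation to each qubit of the string $|1001\rangle$ and checks that unitarity preserves each qubit's norm, even though which states count as $|0\rangle$ and $|1\rangle$ changes.

```python
import math

def rotate(theta, amp):
    """Apply the real SU(2) rotation [[cos, -sin], [sin, cos]] (det = 1)
    to the amplitude pair amp = (a0, a1) of a single qubit."""
    c, s = math.cos(theta), math.sin(theta)
    a0, a1 = amp
    return (c * a0 - s * a1, s * a0 + c * a1)

positions = [2, 1, 0, -1]
bits = [1, 0, 0, 1]                # the qubit string |1001>
state = [(1.0, 0.0) if b == 0 else (0.0, 1.0) for b in bits]

# Position-dependent gauge choice: rotate the qubit at position j by j/10.
transformed = [rotate(j / 10, amp) for j, amp in zip(positions, state)]

# Unitarity: each qubit's amplitude vector keeps norm 1 (up to rounding).
norms = [math.hypot(a0, a1) for a0, a1 in transformed]
print(norms)
```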
Arithmetic relations and operators transform in the
expected way. For the relations one defines
$=_{A,U}$ and $\leq_{A,U}$ by\begin{equation}\label{=AU}
\begin{array}{c}=_{A,U}:= (U=_{A}U^{\dag}) \\
\leq_{A,U}:= U\leq_{A} U^{\dag}.
\end{array}\end{equation} These relations express the fact
that $U|\gamma,s\rangle =_{A,U}U|\gamma^{\prime},s^{\prime}\rangle$ if and only if
$|\gamma,s\rangle =_{A}|\gamma^{\prime},s^{\prime}\rangle$ and $U|\gamma,s\rangle
\leq_{A,U}U|\gamma^{\prime},s^{\prime}\rangle$ if and only if
$|\gamma,s\rangle\leq_{A}|\gamma^{\prime},s^{\prime}\rangle.$
For the operation $\tilde{+}_{A}$ one defines $\tilde{+}_{A,U}$
by \begin{equation}\label{addAU}\tilde{+}_{A,U}:=(U\times U)
\tilde{+}_{A}(U^{\dag}\times U^{\dag}).\end{equation} Then
\begin{equation}\label{addAUA}\begin{array}{l}
\tilde{+}_{A,U}(U|\gamma,s\rangle\times U|\gamma^{\prime},s^{\prime}\rangle)\\ \hspace{1cm}
=(U\times U) \tilde{+}_{A}(|\gamma,s\rangle\times |\gamma^{\prime},s^{\prime}\rangle)
\end{array}\end{equation} as expected. This is consistent with the
definition of $\tilde{+}_{A}$ in Eq. \ref{addn} as a binary relation.
Similar relations hold for $\tilde{\times}_{A},\tilde{-}_{A},
\tilde{\deltaiv}_{A,l}.$
It follows from these properties that the Cauchy condition
is preserved under gauge transformations. If a sequence of
states $\{\psi_{n}\}$ is Cauchy then the sequence
$\{U\psi_{n}\}$ is U-Cauchy, which means that it is Cauchy
relative to the transformed arithmetic relations and
operations. For example, if a sequence $\{|\gamma_{n},
s_{n}\rangle\}$ satisfies the Cauchy condition,
then the transformed sequence $\{U|\gamma_{n},
s_{n}\rangle\}$ satisfies the U-Cauchy condition:
\begin{equation}\label{cauchyU}
\begin{array}{c}\mbox{ For each $\ell$ there is an $h$
where for all $j,k>h$} \\|U|\gamma_{j},s_{j}\rangle-_{AU}
U|\gamma_{k},s_{k}\rangle|_{AU}
<_{AU}U|+,-\ell\rangle.\end{array} \end{equation}
These definitions and considerations extend to the Cauchy
operators. If $\tilde{O}$ is Cauchy then the above shows
that \begin{equation}\label{defOU}\tilde{O}_{U}=
U\tilde{O}U^{\dag} \end{equation} is U-Cauchy. However
$\tilde{O}_{U}$ is not a Cauchy operator in the original
frame and $\tilde{O}$ is not Cauchy in the transformed frame.
To see that $\tilde{O}_{U}$ is not Cauchy in the original frame
it is instructive to consider a simple example.
First one works with the Cauchy
property for sequences of states. Let
$f:(-\infty,n]\rightarrow \{0,1\}$ be a $0-1$ function from
the set of all integers $\leq n$ where $f(n)=1.$ Define a
sequence of states \begin{equation}\label{fseq}
|f\rangle_{m} =c^{\dag}_{+,0}a^{\dag}_{f(n),n}a^{\dag}_{f(n-1),n-1}\cdots
a^{\dag}_{f(-m),-m}|0\rangle\end{equation} for $m=1,2,\cdots.$
The sequence is Cauchy as $||f\rangle_{j}
-_{A}|f\rangle_{k}|_{A}\leq_{A}|+,-\ell\rangle$ for all
$j,k>\ell.$ However for any gauge transformation $U$
the sequence \begin{equation}\label{fseqU}
U|f\rangle_{m} =c^{\dag}_{+,0}(a^{\dag}_{U})_{f(n),n}\cdots
(a^{\dag}_{U})_{f(-m),-m}|0\rangle\end{equation} is not Cauchy
as expansion of the $a^{\dag}_{U}$ in terms of the $a^{\dag}$ by Eq.
\ref{adU} gives $U|f\rangle_{m}$ as a sum of terms whose
arithmetic divergence is independent of $m$.
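The Cauchy property of the sequence $|f\rangle_{m}$ can be checked at the level of the binary rational values the states encode. The following sketch is our own illustration, with an arbitrary choice of $f$ subject only to $f(n)=1$:

```python
from fractions import Fraction

# A minimal sketch (assumed names, not from the paper's formalism): each
# state |f>_m encodes the binary rational x_m = sum_{j=-m}^{n} f(j) 2^j.
# These values satisfy the Cauchy bound |x_j - x_k| <= 2^{-l} for j,k > l,
# since x_j and x_k differ only in bits below position -l.
def x(f, n, m):
    return sum(Fraction(f(j)) * Fraction(2)**j for j in range(-m, n + 1))

f = lambda j: 1 if j % 2 == 0 else 0   # arbitrary 0-1 function, f(2) = 1
n = 2
xs = [x(f, n, m) for m in range(1, 30)]   # xs[m-1] is the value for |f>_m
l = 6
cauchy = all(abs(xs[j - 1] - xs[k - 1]) <= Fraction(1, 2**l)
             for j in range(l + 1, 30) for k in range(l + 1, 30))
print(cauchy)   # True: the value sequence converges to a real number
```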
To show that $\tilde{O}_{U}$ is Cauchy in the
transformed frame if and only if $\tilde{O}$ is Cauchy in
the original frame one can start with the expression for
the Cauchy condition in the transformed frame:
\begin{equation}\label{cauchtOU}
\begin{array}{c}|\tilde{O}_{U}U|s_{j}\rangle
-_{AU}\tilde{O}_{U}U|s_{k}\rangle|_{AU}\leq_{AU}U|+,-\ell\rangle \\
\mbox{for all $U|s_{j}\rangle,$ $U|s_{k}\rangle \geq_{AU}$ some
$U|s_{h}\rangle$}\end{array}\end{equation} From Eq. \ref{defOU}
one gets $$\tilde{O}_{U}U|s_{j}\rangle
-_{AU}\tilde{O}_{U}U|s_{k}\rangle=U\tilde{O}|s_{j}\rangle
-_{AU}U\tilde{O}|s_{k}\rangle.$$ From Eqs. \ref{addn} and
\ref{addnplA} applied to $-_{AU}$ and Eqs. \ref{addAU} and
\ref{addAUA} one obtains $$U\tilde{O}|s_{j}\rangle
-_{AU}U\tilde{O}|s_{k}\rangle =U(\tilde{O}|s_{j}\rangle
-_{A}\tilde{O}|s_{k}\rangle).$$ Use of \begin{equation}\label{absUA}
|-|_{AU}=U|-|_{A}U^{\dag} \end{equation}for the absolute
value operator gives $$|U(\tilde{O}|s_{j}\rangle
-_{A}\tilde{O}|s_{k}\rangle)|_{AU}=U(|\tilde{O}|s_{j}\rangle
-_{A}\tilde{O}|s_{k}\rangle|_{A}).$$ Finally from Eq.
\ref{=AU} one obtains $$\begin{array}{l}U(|\tilde{O}|s_{j}\rangle
-_{A}\tilde{O}|s_{k}\rangle|_{A})\leq_{AU}U|+,-\ell\rangle
\\ \hspace{1cm} \leftrightarrow |\tilde{O}|s_{j}\rangle
-_{A}\tilde{O}|s_{k}\rangle|_{A}\leq_{A}|+,-\ell\rangle
\end{array}$$ which is the desired result. Thus one sees
that the Cauchy property is preserved under unitary transformations
from one reference frame to another.
As was done with the Cauchy sequences and operators, the
U-Cauchy sequences or their equivalents, U-Cauchy operators,
can be collected into a set ${\mathcal R}_{U}$ of
equivalence classes that represent the real numbers.
This involves lifting up the basic arithmetic relations
$=_{AU},\leq_{AU}$ and operations $\tilde{+}_{AU},
\tilde{\times}_{AU},\tilde{-}_{AU}, \tilde{\div}_{AU,l}$
to real number relations $=_{RU},\leq_{RU}$ and operations
$\tilde{+}_{RU}, \tilde{\times}_{RU},\tilde{-}_{RU},
\tilde{\div}_{RU},$ and showing that ${\mathcal R}_{U}$
is a complete ordered field.
It is also the case that for almost all gauge $U$ the real
numbers in $\mathcal{R}_{U}$ are orthogonal to those in
$\mathcal R$ in the following sense. One can see that each
equivalence class in $\mathcal R$ contains a state sequence
$|\gamma_{n},s_{[u,-n]}\rangle$ where $s$ is a $0-1$ valued function
on the interval of all integers $\leq u.$ Let $U$ be a gauge
transformation with associated state sequence
$|U_{0}\gamma_{n},Us_{[u,-n]}\rangle.$ Both sequences satisfy
their respective Cauchy conditions. However the overlap
$\langle\gamma_{n},s_{[u,-n]}|U_{0}\gamma_{n},Us_{[u,-n]}\rangle
\rightarrow 0$ as $n\rightarrow\infty.$ This expresses the
sense in which $\mathcal R$ and $\mathcal{R}_{U}$ are
orthogonal.
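For a global gauge transformation this vanishing overlap can be made concrete: the overlap of a basis string state with its transform factorizes into a product over sites and so decays geometrically with string length. The following is a small numerical sketch; the rotation angle is an arbitrary choice of ours.

```python
import numpy as np

# Illustrative sketch: for a global gauge transformation U applied to every
# qubit, the overlap between an n-qubit basis string state and its transform
# factorizes as (<q|U|q>)^n, which -> 0 with n whenever |<q|U|q>| < 1.
theta = 0.4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a sample SU(2) element

overlaps = []
for n in (1, 5, 20, 80):
    # overlap of |00...0> (n qubits) with U applied to each of its qubits
    overlaps.append(U[0, 0] ** n)
print(overlaps)   # decays geometrically toward 0
```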
\section{Fields of Quantum Frames}
As has been seen, one can
define many quantum theory representations
of real numbers as Cauchy sequences of states of qubit
strings or as Cauchy operators on the qubit string
states. The large number of representations stems from the
gauge (global and local) freedom in the choice of a
quantization axis for the qubit strings. Complex number
representations are included either as ordered pairs of
the real number representations or as extensions of the
description of Cauchy sequences or Cauchy operators to
complex rational string states \cite{BenRRCNQT}.
It was also seen that for each gauge transformation $U$
the real and complex number representations ${\mathcal
R}_{U},\; \mathcal{C}_{U}$ are the base of a frame $F_{U}$.
The frame $F_{U}$ also contains representations of
physical theories as mathematical structures based on
${\mathcal R}_{U},\; \mathcal{C}_{U}.$
The work in the last two sections shows that the description
of rational string states as states of finite strings of
qubits is a description in a Fock space. (A Fock space
is used because of the need to describe states
of variable numbers of qubits and their linear
superpositions in one space.) The arithmetic operations
$+_{A},-_{A},\times_{A},\div_{A,\ell}$ on states of
these strings are represented by Fock space operators.
The properties of these operators acting on the qubit
string states are used to show that the states represent
binary rational numbers. Finally equivalence classes of
sequences of these states or of operators that satisfy the
Cauchy condition are proved to be real numbers
\cite{BenRRCNQT}.
The essential point here is that the Fock space, $\mathcal F$,
and any additional mathematics used to obtain these results
are based on a set $R,C$ of real and complex numbers. For
example, superposition coefficients of basis states are
in $C$, the inner product is a map from pairs of states to
$C$, operator spectra are elements of $C$, etc. The space
time manifold used to describe the dynamics of any
physical representations of the qubit strings is given by
$R^{4}$.
It follows that one can assign a reference frame $F$ to
$R$ and $C.$ Here $F$ contains all physical and mathematical
theories that are represented as mathematical structures
based on $R$ and $C$. However, unlike the case for the frames
$F_{U},$ the only properties that $R$ and $C$ have are those
based on the relevant axioms (complete ordered field for $R$).
Other than that, nothing is known about how they are
represented.
This can be expressed by saying that $R$ and $C$ are
external, absolute, and given. This seems to be the usual
position taken by physics in using theories based on $R$
and $C$. Physical theories are silent on what properties $R$
and $C$ may have other than those based on the relevant
axioms. However, as has been seen, one can use these
theories to describe many representations $R_{U}$ and $C_{U}$
and associated frames $F_{U}$ based on $SU(2)$ gauge
transformations of the qubit strings. As noted, for each
$U,$ $F_{U}$ contains representations of all physical
theories as mathematical structures over $R_{U},C_{U}$.
For these frames one can see that $R_{U}$ and
$C_{U}$ have additional properties besides those given by
the relevant axioms. They are also equivalence classes of
Cauchy sequences $\{U|\gamma_{n},s_{n}\rangle \}$ or Cauchy
operators $\tilde{O}_{U}.$
Fig. \ref{RCST1} is a schematic illustration of the
relation between frame $F$ and the frames $F_{U}$. Only
three of the infinitely many $F_{U}$ frames are shown.
The arrows indicate the derivation direction in that $R,C$
based theory in $F$ is used to describe, for each $U$
$R_{U}$ and $C_{U}$ that are the base of all theories in
$F_{U}$. Note that the frame $F_{ID}$ with the
identity gauge transformation is also included as one of
the $F_{U}.$ It is \emph{not} the same as $F$ because
$R_{ID}$ is not the same as $R$.
\begin{figure}[h]\begin{center}
\resizebox{120pt}{120pt}{\includegraphics[230pt,200pt]
[490pt,460pt]{RCST1.eps}}\end{center}
\caption{Relation between a Base Frame $F$ and Gauge
Transformation Frames. A frame $F_{U}$ is associated
with each gauge transformation $U$ of the rational string
states in $F$. The three frame connections shown are
illustrative of the infinitely many connections,
shown by the two headed vertical arrow. Each $F_{U}$
is based on real and complex numbers $\mathcal{R}_{U},
\mathcal{C}_{U}$ and a space time manifold $\mathcal{R}^{4}_{U}$.}
\label{RCST1} \end{figure}
The above relations between the frames $F$ and $F_{U}$
shows that one can repeat the description of real and
complex numbers as Cauchy sequences of (or Cauchy
operators on) rational string states in each frame
$F_{U}$. In this case the Fock space representation,
${\mathcal F}_{U}$, used to describe the qubit string
states in $F_{U}$, is different from $\mathcal F$ in
that it is based on $R_{U},C_{U}$ instead of on $R,C.$
However the two space representations are related by an
isomorphism that is based on the isomorphism between
$R$ and $R_{U}.$
It is useful to examine this more. First consider the
states of a qubit at site $j$. For an observer
in frame $F,$ these states have the general form
$\alpha|0\rangle+\beta|1\rangle$ where $\alpha$ and
$\beta$ are complex numbers in $C$. Let $U$ be
an $SU(2)$ gauge transformation where $U(j)$ is defined by
$$\begin{array}{c}U(j)|0\rangle =|+\rangle \\ U(j)|1\rangle
=|-\rangle\end{array}$$ where $|\pm\rangle= (1/\sqrt{2})
(|0\rangle \pm |1\rangle).$ Then the states
$|+\rangle$ and $|-\rangle$ in frame $F$ are seen by an
observer in $F_{U}$ as the states $|0\rangle$ and
$|1\rangle$ respectively as the quantization axis is
different.
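The matrix realizing this map on amplitudes is the Hadamard matrix (an $SU(2)$ element up to a global phase). A short numerical check of the change of quantization axis, illustrative only:

```python
import numpy as np

# The map |0> -> |+>, |1> -> |-> above is implemented by the Hadamard
# matrix; applying it twice returns each basis state to itself, so the
# F_U observer's |0>,|1> axes are the F observer's |+>,|-> axes.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = H @ ket0, H @ ket1
print(np.allclose(plus, (ket0 + ket1) / np.sqrt(2)))   # True
print(np.allclose(H @ plus, ket0))                     # True: H is its own inverse
```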
A similar situation holds for states of qubit strings.
To an observer in $F$ the state $U|\gamma,s\rangle$ is
different from $|\gamma,s\rangle.$ However an observer in
frame $F_{U},$ with a different set of quantization
axes for each qubit, would represent the state
$U|\gamma,s\rangle$ as $|\gamma,s\rangle$ as to him it is the
same state relative to his axis set as it is to the
observer in $F$ for his axis set.
The situation is slightly different for linear
superpositions of basis states. To an observer in $F$,
the coefficients $\alpha,\beta$ in the state $\alpha U|
\gamma,s\rangle +\beta U|\gamma^{\prime},s^{\prime}\rangle$ represent abstract
elements of $C$. The same observer sees that this state
in $F_{U}$ is represented by $\alpha_{U}|\gamma,s\rangle
+\beta_{U}|\gamma^{\prime},s^{\prime}\rangle$ where $\alpha_{U}, \beta_{U}$,
as elements of $C_{U}$, represent the same abstract
complex numbers as do $\alpha,\beta.$ However an observer
in $F_{U}$ sees this same state as $\alpha|\gamma,s\rangle
+\beta|\gamma^{\prime},s^{\prime}\rangle.$ To him the real and complex
number base of his frame is abstract and is represented by $R,C.$
In general an observer in any frame sees the real
and complex number base of his own frame as abstract and given with no
particular representation evident. However the observer
in frame $F$ also sees that what to him is the abstract
number $r,$ is the number $r_{U}$ as an element of
$R_{U}$ in frame $F_{U}.$
These considerations also extend to group representations as
matrices of complex numbers. If the element $g,$ as an
abstract element of $SU(2),$ is represented in frame $F$
by the matrix $\left| \begin{array}{cc} a & b\\c & d
\end{array}\right|$ where $a,b,c,d$ are elements of $C$,
then, to an observer in $F,$ $g$ is represented in frame
$F_{U}$ by $\left| \begin{array}{cc} a_{U} & b_{U}\\c_{U} &
d_{U}\end{array}\right|.$ Here $a_{U},b_{U},c_{U},
d_{U}$, as elements of $C_{U}$, correspond to the same
abstract complex numbers as do $a,b,c,d.$ However an
observer in $F_{U}$ sees this representation as
$\left| \begin{array}{cc} a & b\\c & d
\end{array}\right|$ which is the same as the $F$ observer
sees in his own frame.
Following this line of argument one can now describe
another generation of frames with each frame $F_{U}$ in
the role of a parent to progeny frames just as $F$ is a
parent to the frames $F_{U}$ as in Fig. \ref{RCST1}.
This is shown schematically in Fig. \ref{RCST2}. Again,
only three of the infinitely many stage 2 frames
emanating from each stage 1 frame are shown.
\begin{figure}[h]\begin{center}
\resizebox{130pt}{130pt}{\includegraphics[250pt,160pt]
[540pt,450pt]{RCST2.eps}}\end{center}
\caption{Three Iteration Stages of Frames coming from
Frames. Only three frames of the infinitely many, one for
each gauge $U,$ are shown at stages $1$ and $2$. The
arrows connecting the frames show the iteration direction
of frames emanating from frames.} \label{RCST2}
\end{figure}
Here something new appears in that there are many
different paths to a stage 2 frame. For each path, such as
$F\rightarrow F_{U_{1}}\rightarrow F_{U_{2}},$ $U_{2}$ is
the product $U^{\prime\prime}U_{1}$ of two gauge transformations
where $U^{\prime\prime}=U_{2}U^{\deltaag}_{1}.$ An observer in this
frame $F_{U_{2}}$ sees the real and complex number frame
base as abstract, and given. To him they can be
represented as $R,C.$ An observer in $F_{U_{1}}$ sees
the real and complex number base of $F_{U_{2}}$ as
$R_{U^{\prime\prime}},C_{U^{\prime\prime}}.$
However an observer in $F$ sees the
real and complex number base of $F_{U_{2}}$ as
$R_{U^{\prime\prime}|U_{1}},C_{U^{\prime\prime}|U_{1}}.$ The subscript
$U^{\prime\prime}|U_{1}$ denotes the fact that relative to $F$
the number base of $F_{U_{2}}$ is constructed in two
stages. First the Fock space $\mathcal F$ is used to
construct representations $R_{U_{1}},C_{U_{1}}$ of
$R$ and $C$ as $U_{1}$ Cauchy
sequences of states $\{U_{1}|\gamma_{n},s_{n}\rangle\}.$
Then in frame $F_{U_{1}}$ the Fock space ${\mathcal
F}_{U_{1}},$ based on $R_{U_{1}},C_{U_{1}},$ is used to
construct the number representation base of $F_{U_{2}}$
as $U^{\prime\prime}$ Cauchy sequences $\{U^{\prime\prime}|\gamma_{n},s_{n}
\rangle\}$ of qubit string states in ${\mathcal F}_{U_{1}}.$
One sees, then, that, for each path leading to a specific
stage 2 frame, there is a different two stage construction
of the number base of the frame. This view from the
parent frame $F$ is the same view that we have as observers
outside the whole frame structure. That is, our
external view coincides with that for an observer inside
the parent frame $F$.
The above description of frames emanating from frames for
2 stages suggests that the process can be iterated.
There are several possibilities besides a finite number
of iterations exemplified by Fig. \ref{RCST2} for 2
iterations. Fig. \ref{RCST3} shows the field structure
for a one way infinite number of iterations.
\begin{figure}[h]\begin{center}
\resizebox{130pt}{130pt}{\includegraphics[230pt,120pt]
[560pt,490pt]{RCST3.eps}}\end{center}
\caption{One way Infinite Iteration of Frames coming from
Frames. Only three of the infinitely many frames, one for
each gauge $U$ are shown for stages $1,2,\cdots,j,j+1,\cdots.$
The arrows connecting the frames show the iteration or emanation
direction. The center arrows labeled ID denote iteration of the
identity gauge transformation.} \label{RCST3}
\end{figure} Here one sees that each frame has an
infinite number of descendent frame generations and, except for
frame $F,$ a finite number of ancestor
generations. The structure of the frame field seen by an
observer in $F$ is the same as that viewed from the
outside. For both observers the base real and complex numbers
for $F$ are seen as abstract and given with no structure
other than that given by the axioms for the real and
complex numbers.
There are two other possible stage or generation structures
for the frame fields, two way infinite and finite cyclic
structures. These are shown schematically in Figs.
\ref{RCST4} and \ref{RCST5}. The direction of
iteration in the cyclic field is shown by the arrows on
the circle rather than example arrows connecting frames
as in the other figures. For both these frame fields each
frame has infinitely many parent frames and infinitely
many daughter frames. There is no single ancestor frame
and no final descendent frames. The cyclic field is
unique in that each frame is both its own descendent and
its own ancestor. The distance between these connections
is equal to the number of iterations in the cyclic field.
\begin{figure}[h!]\begin{center}
\resizebox{130pt}{130pt}{\includegraphics[230pt,120pt]
[560pt,490pt]{RCST4.eps}}\end{center}
\caption{Two way Infinite Iteration of Frames coming from
Frames. Only three of the infinitely many frames, one for
each gauge $U$ are shown at each stage $\cdots ,-1,0,1,
\cdots,j,j+1,\cdots$. The solid arrows connecting the
frames show the iteration or emanation direction. The wavy
arrows denote iterations connecting stage $1$ frames to
those at stage $j$. The straight dashed arrows denote
infinite iterations from the left and to the right of the
identity gauge transformation.} \label{RCST4} \end{figure}
\begin{figure}[h!]\begin{center}
\resizebox{130pt}{130pt}{\includegraphics[230pt,120pt]
[560pt,490pt]{RCST5.eps}}\end{center} \caption{Schematic
Representation of Cyclic Iteration of Frames coming from
Frames. The vertical two headed arrows represent the gauge
transformations at each stage and the arrows along both
ellipses show the direction of iteration. To avoid a very
complex and messy figure no arrows connecting frames to
frames are shown.} \label{RCST5} \end{figure}
These two frame fields differ from the others in that the
structure seen from the outside is different from that for an
observer in any frame. There is no frame $F$ from which the view is
the same as from the outside. Viewed from the outside there are no
abstract, given real and complex number sets for the field as a
whole. All of these are internal in that an observer in
frame $F_{U_{j}}$ at generation $j$ sees the base
$R_{U_{j}},C_{U_{j}}$ as abstract with no properties
other than those given by the relevant axioms.
The same holds for the representations of the space time
manifold. Viewed from the outside there is no fixed
abstract space time representation as a 4-tuple of real numbers
associated with the field as a whole. All space time
representations of this form are internal and associated
with each frame. This is based on the observation that
the points of a continuum space time as a 4-tuple of
representations of the real numbers are different in
each frame because the real number representations are
different in each frame. Also, contrary to the
situation for the fields in Figs. \ref{RCST1}-\ref{RCST3},
there is no representation of the space time points
that can be considered to be as fixed and external to
the frame field.
The lack of a fixed abstract, external space time manifold
representation for the two-way infinite and cyclic frame
fields is in some ways similar to the lack of a
background space time in loop quantum gravity
\cite{Smolin}. There are differences in that in
loop quantum gravity space is discrete on the Planck
scale and not continuous \cite{Ashtekar}. It should be
noted though that the representation of space time as
continuous is not a necessary consequence of the frame
fields and their properties. The important part is the
real and complex number representation base of each
frame, not whether space time is represented as discrete
or continuous.
It is useful to summarize the views of observers inside
frames and outside of frames for the different field
types. For all the fields except the cyclic ones an
observer in any frame $F_{U_{j}}$ at stage $j$ sees the real
number base $R_{U_{j}}$ of his frame as abstract and external
with no properties other than those given by the axioms for a
complete ordered field. The observer also cannot see any
ancestor frames. He/she can see the whole frame
field structure for all descendent frames at stages
$k>j.$ Except as noted below, the view of an outside
observer is different in that he/she can see the whole
frame field structure. This includes the point that, to
internal observers in a frame, the real and complex
number base of the frame is abstract and external.
For frame fields with a fixed ancestor frame $F$, Figs.
\ref{RCST1}, \ref{RCST2}, \ref{RCST3}, the view of an outside
observer is almost the same as that of an observer in frame
$F$. Both see the real and complex number base of $F$ as
abstract and external. Both can also see the field structure
for all frames in the fields. However the outside observer
can see that frame $F$ has no ancestors. This is not
available to an observer in $F$ as he/she cannot see the
whole frame field.
The cyclic frame field is different in that for an
observer in any frame at stage $j,$ frames at other stages
are both descendent and ancestor frames. This suggests
that the requirement that a frame observer cannot see
the field structure for ancestor frames, but can see it for
descendent frames, may have to be changed, at least for
this type of frame field. How and what one learns from
such changes are a subject for future work.
\section{Relation between the Frame Field and Physics}
\label{RFFP} So far, frame fields based on different quantum
mechanical representations of real and complex numbers
have been described. Each frame contains a representation
of physical theories as mathematical structures based
on the real number representation base of the frame. The
representations of the physical theories in the different
frames are different because the real (and complex) number
representations are different. They are also
isomorphic because the real (and complex) number
representations are isomorphic.
The description of the frame field given so far is incomplete
because nothing has been said about the relation of the
frame field to physics. So far the only use of physics has
been to limit the description of rational number representations
to quantum mechanical states of strings of qubits.
The main problem here is that to date all physical theories
make no use of this frame field. This is evidenced by the
observation that the only properties of real numbers
relevant to physical theories are those derivable from the
real number axioms for a complete, ordered field. So far
physical theories are completely insensitive to details of
different representations of the real numbers.
This problem is also shown by the observation that there is no
evidence of this frame structure and the multiplicity of
real number representations in our view of the existence
of just one physical universe with its space time manifold,
and with real and complex numbers that can be treated as
absolute and given. There is no hint, so far, either in
physical theories or in properties of physical systems and
the physical universe, of the different representations and
the frame fields.
This shows that the main problem is to reconcile the
great number of different representations of the real and
complex numbers and the $R^{4}$ space time manifold as
bases for different representations of physical
theories with the lack of dependence of physical theories
on these representations and no evidence for them in our
view of the physical universe.
One possible way to do this might be to collapse the frame
field to a smaller field, ideally with just one frame. As
a step in this direction one could limit quantum theory
representations of rational string numbers to those that
are gauge invariant. This would have the effect of collapsing
all frames $F_{U_{j}}$ at any stage $j$ into one stage
$j$ frame\footnote{Note that $U_{j}$ is a gauge
transformation. It is not the $j$th element of one.}.
The resulting frame field would then be one
dimensional with one frame at each stage.
The idea of constructing representations that are gauge
invariant for some gauge transformations has already been
used in another context. This is the use of the decoherence
free subspace (DFS) approach to quantum error correction.
This approach \cite{Lidar,LidarII} is based on quantum error
avoidance in quantum computations. This method identifies
quantum errors with gauge transformations $U$. In this case
the goal is to find subspaces of qubit string states that are
invariant under at least some gauge $U$ and are preserved by
the Hamiltonian dynamics for the system.
One way to achieve this is based on irreducible representations
of direct products of $SU(2)$ as the irreducible subspaces are
invariant under the action of some $U$. As an example, one can
show that \cite{Enk} the subspaces defined by the irreducible
4 dimensional representation of $SU(2)\times SU(2)$ are invariant
under the action of any global $U$. The subspaces are the three
dimensional subspace with isospin $I=1$, spanned by the states
$|00\rangle,|11\rangle,1/\sqrt{2}(|01\rangle +|10\rangle)$
and the $I=0$ subspace containing $1/\sqrt{2}(|01\rangle
-|10\rangle).$ The action of any global $U$ on
states in the $I=1$ subspace can change one of the
$I_{z}$ states into linear superpositions of all states in
the subspace. But it does not connect the states in the $I=1$
subspace with that in the $I=0$ subspace.
It follows that one can replace a string of $2n$ qubits
with a string of $n$ new qubits where the
$|0\rangle$ and $|1\rangle$ states of the $j$th
new qubit correspond to any state in the respective
$I=1$ and $I=0$ subspaces of the 4 dimensional
representation of $SU(2)_{2j-1}\times SU(2)_{2j}.$ Any
state of the $n$ new qubits is invariant under all
global gauge transformations and all local gauge
transformations where \begin{equation}\label{Ueq}
U_{2j-1}=U_{2j}.\end{equation}
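The invariance of the $I=0$ subspace under such transformations can be verified numerically. In the sketch below (the random $SU(2)$ construction is our own choice) the singlet picks up at most the phase $\det U$ under $U\otimes U$, which is one for $SU(2)$:

```python
import numpy as np

# Illustrative check: the singlet (1/sqrt2)(|01> - |10>) is invariant up to
# the phase det(U) under U (x) U for any unitary U, so for SU(2) (det = 1)
# the I=0 subspace never mixes with the I=1 (triplet) subspace.
def random_su2(rng):
    # random unitary via QR of a complex Gaussian matrix, scaled to det 1
    g = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    q, r = np.linalg.qr(g)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))   # fix column phases
    return q / np.sqrt(np.linalg.det(q))               # scale det to 1

rng = np.random.default_rng(7)
U = random_su2(rng)
UU = np.kron(U, U)

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # basis |00>,|01>,|10>,|11>
out = UU @ singlet
# invariance up to a phase: |<singlet| U(x)U |singlet>| = 1
print(np.isclose(abs(np.vdot(singlet, out)), 1.0))      # True
```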
This replacement of states of strings of $2n$ qubits
by states of strings of $n$ new qubits gives the result
that, for all $U$ satisfying Eq. \ref{Ueq}, the $F_{U}$ frames at
any stage $j$ all become just one frame at stage $j$.
However this still leaves many gauge $U$ for which the new
qubit string state representation is not gauge invariant.
Another method of arriving at a gauge invariant
description of rational string states is based on the
description of the kinematics of a quantum system by
states in a Hilbert space $\mathcal H,$ based on the $SU(2)$ group
manifold. Details of this, generalized to all compact Lie
groups, are given in \cite{Mukunda} and \cite{Ashtekar}.
In essence this extends the well known situation for
angular momentum representations of states based on the
manifold of the group $SO(3)$ to all compact Lie groups.
For the angular momentum case the action of any
rotation on the states $|l,m\rangle$ gives linear
combinations of states with different $m$ values but with
the same $l$ value. The Hilbert space spanned by all
angular momentum eigenstates can be expanded as a
direct sum \begin{equation}\label{Hl}
\mathcal H =\bigoplus_{l}\mathcal{H}_{l} \end{equation}
where $l=0,1,2,\cdots$ labels the irreducible
representations of $SO(3).$ Qubits can be associated with
this representation by choosing two different $l$ values,
say $l_{0}$ and $l_{1}.$ Then any states in the subspaces
$\mathcal{H}_{l_{0}}$ and $\mathcal{H}_{l_{1}}$ correspond
to the $|0\rangle$ and $|1\rangle$ qubit states
respectively. These states are invariant under all
rotations. Extension of this construction to all finite qubit
strings gives a representation of natural numbers,
integers and rational numbers that is invariant under all
$SO(3)$ gauge transformations.
This development can also be carried out for any compact
Lie group where the quantum kinematics of a system is
based on the group manifold \cite{Mukunda,Ashtekar}.
In the case of $SU(2)$ Eq. \ref{Hl} holds with
$l=0,1/2,1,3/2,\cdots$. The angular momentum
subspaces $\mathcal{H}_{l}$ are invariant under all
$SU(2)$ transformations. As in the angular momentum
case one can use this to describe states of qubits as
\begin{equation}\label{invqub}\begin{array}{c}
|0\rangle\rightarrow \mathcal{H}_{l_{0}} \\
|1\rangle\rightarrow \mathcal{H}_{l_{1}}\end{array}
\end{equation} that are $SU(2)$ invariant.
This construction can be extended to states of finite
strings of qubits. Details of the mathematics needed for
this, applied to graphs on a compact space manifold,
are given in \cite{Ashtekar}. In this way one can
describe representations of rational string numbers that
are $SU(2)$ gauge invariant for all gauge $U$.
There is a possible connection between these representations
of numbers and the Ashtekar approach to loop quantum gravity.
The Ashtekar approach \cite{Ashtekar} describes $G$ valued
connections on graphs defined on a $3D$ space manifold where
$G$ is a compact Lie group. The Hilbert space of states on all
graphs can be represented as a direct sum of spaces for each graph.
The space for each graph can be further decomposed into a sum
of gauge invariant subspaces. This is similar to the spin
network decomposition first described by Penrose
\cite{Penrose}.
The connection to qubit strings is made by noting that
strings correspond to simple one dimensional graphs.
States of qubit strings are defined as above by choosing
two $l$ values for the space of invariant subspaces as in
Eq. \ref{invqub}. It is hoped to describe more details
of this connection in future work.
Implementation of this approach to reduction of the frame
field still leaves a one dimensional line of
iterated frames. The line is finite, Fig. \ref{RCST2},
one way infinite, Fig. \ref{RCST3}, two way infinite,
Fig. \ref{RCST4}, or closed, Fig. \ref{RCST5}. Because the
two way infinite and cyclic fields have no abstract
external sets of real and complex numbers and no abstract
external space time, it seems appropriate to limit
consideration to them. Here the cyclic field may be the
most interesting because the number of the iterations in
the cycle is undetermined. If it were possible to reduce
the number to $0$, then one would end up with a picture
like Fig. \ref{RCST1} except that the $R$ and $C$ base of
the frame would be identified in some sense with the gauge
invariant representations described in the frame. Whether
this is possible, or even desirable, or not, is a problem left to
future work.
Another approach to connecting the frame field to physics
is based on noting that the independence of physical
theories from the properties of different real and complex
number representations can be expressed using notions of
symmetry and invariance. This is that
$$\begin{array}{l}\mbox{\emph{All physical theories to date
are invariant}} \\\mbox{\emph{under all $SU(2)$ gauge
transformations of the qubit based}} \\ \mbox{\emph{representations
of real and complex numbers.}}\end{array}$$ Note that the
gauge transformations apply not only to the qubit string
states but also to the arithmetic relations and operations
on the string states, Eqs. \ref{=AU} and \ref{addAU}, to sequences
of states, to the Cauchy condition Eq. \ref{cauchyU}, and
to the definition of the basic field operations on the real
numbers.
The symmetry of physical theories under these gauge
transformations suggests that it may be useful to
drop the invariance and consider at the outset candidate
theories that break this symmetry. These would be
theories in which some basic aspect of physics is
representation dependent. One approach might be to look
for some type of action whose minimization, coupled with
the above requirement of gauge invariance, leads to some
restriction on candidate theories. This gauge theory
approach is yet another aspect to investigate in the
future.
\section{Discussion}
There are some points of the material presented here that
should be noted. The gauge transformations described here
apply to finite strings of qubits and their states.
These are the basic objects. Since these can be used to
represent natural numbers, integers, and rational numbers
in quantum mechanics, one can, for each type of number,
describe different representations related by $SU(2)$ gauge
transformations on the qubit string states. Here this
description was extended to sequences of qubit string
states that satisfied the Cauchy condition to give
different representations of the real numbers.
A reference frame was associated to each real and complex
number representation. Each frame contains a
representation of all physical theories as mathematical
structures based on the real and complex number
representation base of the frame. If the space time manifold is
considered to be a $4-tuple$ of the real numbers, then each
frame includes a representation of space time as a
$4-tuple$ of the real number representation.
If one takes this view regarding space time, it follows
that for all frames with an ancestor frame, an observer
outside the frame field or an observer in an ancestor
frame sees that the points of space time in each descendent
frame have structure as each point is an equivalence class of
Cauchy sequences of (or a Cauchy operator on) states of qubit
strings. It is somewhat disconcerting to regard space
time points as having structure. However this structure
is seen only by the observers noted above. An observer in
a frame $F$ does not see his or her own space time points
as having structure because the real numbers that are
the base of his frame do not have structure. Relative
to an observer in frame $F$, the space time base of the frame
is a manifold of featureless points.
It should also be noted that even if one takes the view
that the space time manifold is some noncompact, smooth
manifold that is independent of $R^{4},$ one still has the
fact that functions from the manifold to the real numbers
are frame dependent in that the range set of the
functions is the real number representation base of the
frame. Space time metrics are good examples of this. As is
well known, they play an essential role in physics.
In quite general terms, this work is motivated by the need
to develop a theory that treats mathematics and physics
together as a coherent whole \cite{BenTCTPM}. It
may be hoped that the approach taken here that describes
fields of frames based on different representations of
real and complex numbers will shed light on such a theory.
The point that these representations are based on
different representations of states of qubit strings shows
the basic importance of these strings to this endeavor.
Finally it should be noted that the structure of
frames emanating from frames has nothing to do with the
Everett Wheeler view of multiple universes \cite{Everett}.
If these multiple universes exist, then they would exist
within each frame in the field.
\end{document} |
\begin{document}
\mainmatter
\title{An infinitary model of linear logic}
\author{Charles Grellois \and Paul-Andr\'e Melli\`es}
\authorrunning{Charles Grellois \and Paul-Andr\'e Melli\`es}
\institute{Universit\'e Paris Diderot, Sorbonne Paris Cit\'e\\
Laboratoire Preuves, Programmes, Syst\`emes\\
\mailsa\\
\mailsb}
\maketitle
\begin{abstract}
In this paper, we construct an infinitary variant of the relational model of linear logic,
where the exponential modality is interpreted as the set of finite or countable multisets.
We explain how to interpret in this model the fixpoint operator~$\fixpoint{}$ as a Conway operator
alternatively defined in an inductive or a coinductive way. We then extend the relational semantics
with a notion of color or priority in the sense of parity games. This extension enables us
to define a new fixpoint operator~$\fixpoint{}$ combining both inductive and coinductive policies.
We conclude the paper by mentioning a connection between the resulting model of
$\lambda$-calculus with recursion and higher-order model-checking.
\keywords{Linear logic, relational semantics, fixpoint operators, induction and coinduction,
parity conditions, higher-order model-checking.}
\end{abstract}
\abovedisplayskip = 5pt
\belowdisplayskip = 5pt
\abovedisplayshortskip = 5pt
\belowdisplayshortskip = 5pt
\section{Introduction}
In many respects, denotational semantics started in the late 1960's with Dana Scott's introduction of domains and the fundamental intuition that
$\lambda$-terms should be interpreted as \emph{continuous} rather than general functions between domains.
This seminal insight has been so influential in the history of our discipline that it remains
deeply rooted in the foundations of denotational semantics more than forty-five years later.
In the case of linear logic, this inclination for continuity means that
the interpretation of the exponential modality
$$
A \quad \mapsto \quad !\, A
$$
is \emph{finitary} in most denotational semantics of linear logic.
This finitary nature of the exponential modality is tightly connected to continuity
because this modality regulates the linear decomposition of the intuitionistic implication:
$$A \, \Rightarrow \, B \quad = \quad !\, A \, \multimap \, B.$$
Typically, in the qualitative and quantitative
coherence space semantics of linear logic,
the coherence space $!\,A$ is defined either as the coherence space
of \emph{finite} cliques (in the qualitative semantics) or of \emph{finite} multi-cliques (in the quantitative semantics)
of the original coherence space~$A$.
This finiteness condition on the cliques $\{a_1,\dots,a_n\}$ or multi-cliques $[a_1,\dots,a_n]$ of the coherence space $!A$
captures the computational intuition that, in order to reach a given position~$b$ of the coherence space~$B$,
every proof or program
$$
f \quad : \quad !\, A \, \multimap \, B
$$
will only explore a \emph{finite} number of copies of the hypothesis~$A$,
and reach at the end of the computation a specific position~$a_i$ in each copy of the coherence space~$A$.
In other words, the finitary nature of the interpretation of $!A$ is just an alternative
and very concrete way to express in these traditional models of linear logic
the continuity of proofs and programs.
In this paper, we would like to revisit this well-established semantic tradition and accommodate another equally well-established tradition,
coming this time from verification and model-checking.
We find it especially important to address and clarify an apparent antagonism between the two traditions.
Model-checking is generally interested in infinitary (typically $\omega$-regular) inductive and coinductive behaviours
of programs which lie obviously far beyond the scope of Scott continuity.
For that reason, we introduce a variant of the relational semantics of linear logic
where the exponential modality, noted in this context
$$
A \quad \mapsto \quad \lightning \, A
$$
is defined as the set of \emph{finite} or \emph{countable} multisets of the set~$A$.
It follows that a proof or a program
$$
A \, \Rightarrow \, B \quad = \quad \lightning\, A \, \multimap \, B.
$$
is allowed in the resulting infinitary semantics to explore
a possibly countable number of copies of its hypothesis~$A$
in order to reach a position in~$B$.
By relaxing the continuity principle, this mild alteration of the original relational semantics
paves the way to a fruitful interaction between linear logic and model-checking.
This link between linear logic and model-checking is supported by the somewhat unexpected observation
that the binary relation
$$\fixpoint{}(f) \quad : \quad ! X \quad \morph{} \quad A$$
defining the fixpoint $\fixpoint{}(f)$ associated to a morphism
$$f \quad : \quad ! X \, \otimes \, !A \quad \morph{} \quad A$$
in the familiar (and thus finitary) relational semantics of linear logic
is defined by performing a series of explorations of the infinite binary tree
$$
\begin{array}{ccc}
\textbf{comb}
&
\quad = \quad &
\vcenter{\xymatrix @-1.6pc {
&\bullet \ar@{-}[rd]\ar@{-}[ld]
\\
\circ && \bullet \ar@{-}[rd]\ar@{-}[ld]
\\
& \circ && \bullet \ar@{-}[rd]\ar@{-}[ld]
\\
&& \circ && \bullet \ar@{.}[rd]\ar@{-}[ld]
\\
&&& \circ &&
}}
\end{array}
$$
by an alternating tree automaton $\langle \, \Sigma \, , \, Q \, , \, \delta_f \, \rangle$
on the alphabet $\Sigma=\{\bullet,\circ\}$ defined by the binary relation~$f$.
The key idea is to define the set of states of the automaton as $Q=A\uplus X$ and to associate a transition
$$
\delta_f(\bullet,a) \quad = \quad (\, x_1\wedge \dots \wedge x_k \, , \, a_1\wedge \dots \wedge a_n \, )
$$
\noindent
of the automaton to any element $(([x_1,\dots,x_k],[a_1,\dots, a_n]),a)$ of the binary relation $f$,
where the $x_i$'s are elements of~$X$ and the $a_i$'s are elements of~$A$~;
and to let the symbol~$\circ$ accept any state $x\in X$.
Then, it appears that the traditional definition of the fixpoint operator $\fixpoint{}(f)$ as a binary relation $!X\to A$
may be derived from the construction of run-trees of the tree-automaton $\langle \, \Sigma \, , \, Q \, , \, \delta_f \, \rangle$
on the infinitary tree $\textbf{comb}$.
More precisely, the binary relation $\fixpoint{}(f)$ contains all the elements $([x_1,\dots,x_k],a)$
such that there exists a finite run-tree (called $\textit{witness}$) of the tree automaton $\langle \, \Sigma \, , \, Q \, , \, \delta_f \, \rangle$
accepting the state~$a$ with the multi-set of states $[x_1,\dots,x_k]$ collected at the leaves $\circ$.
As far as we know, this automata-theoretic account of the traditional construction of the fixpoint operator $\fixpoint{}(f)$
in the relational semantics of linear logic is a new insight of the present paper, which we carefully develop in~\S\ref{section/finitary-fixpoint}.
\medbreak
Once this bridge between linear logic and tree automata theory has been identified,
it makes sense to study variations of the relational semantics inspired by verification.
This is precisely the path we follow here by replacing the finitary interpretation $!A$
of the exponential modality by the finite-or-countable one $\lightning A$.
This alteration enables us to define an inductive as well as a coinductive fixpoint operator $\fixpoint{}$
in the resulting infinitary relational semantics.
The two fixpoint operators only differ in the acceptance condition applied to the run-tree $\textit{witness}$.
We carry on in this direction, and introduce a \emph{coloured} variant of the relational semantics,
designed in such a way that the tree automaton $\langle \, \Sigma \, , \, Q \, , \, \delta_f \, \rangle$
associated to a morphism $f: {!X}\otimes{!A}\to A$ defines a parity tree automaton.
This leads us to the definition of an inductive-coinductive fixpoint operator~$\fixpoint{}$
tightly connected to the current investigations on higher-order model-checking.
\paragraph*{Related works.}
The present paper is part of a wider research project devoted to the relationship
between linear logic, denotational semantics and higher-order model-checking.
The idea developed here of shifting from the traditional finitary relational semantics
of linear logic to infinitary variants is far from new.
The closest to our work in this respect is probably the work by Miquel \cite{these-miquel}
where stable but non-continuous functions between coherence spaces are considered.
However, our motivations are different, since we focus here on the case
of a modality $!A$ defined by finite-or-countable multisets in~$A$,
which is indeed crucial for higher-order model-checking, but is not considered by Miquel.
In another closely related line of work, Carraro, Ehrhard and Salibra \cite{carraro-ehrhard-salibra}
formulate a general and possibly infinitary construction of the exponential modality~$A\mapsto {!A}$
in the relational model of linear logic.
However, the authors make the extra finiteness assumption in~\cite{carraro-ehrhard-salibra}
that the support of a possibly infinite multiset in $!A$ is necessarily finite.
Seen from that perspective, one purpose of our work is precisely to relax this finiteness condition, which
appears to be too restrictive for our semantic account of higher-order model-checking
based on linear logic.
In a series of recent works,
Salvati and Walukiewicz \cite{salvati-walukiewicz2} \cite{salvati-walukiewicz3} have exhibited
a nice and promising connection between higher-order model checking
and finite models of the simply-typed $\lambda$-calculus.
In particular, they establish the decidability of weak MSO properties of higher-order recursion schemes
by using purely semantic methods.
In comparison, we construct here a cartesian-closed category of sets and coloured relations
(rather than finite domains) where $\omega$-regular properties of higher-order recursion schemes
(and more generally of $\lambda\,Y$-terms) may be interpreted semantically thanks to a colour modality.
In a similar direction, Ong and Tsukada \cite{ong-tsukada} have recently constructed
a cartesian-closed category of infinitary games and strategies with similar connections
to higher-order model-checking.
Coming back to linear logic,
we would like to mention the works by Baelde \cite{baelde} and Montelatici \cite{montelatici}
who developed infinitary variants (either inductive-coinductive or recursive) of linear logic,
with an emphasis on the syntactic rather than semantic side.
In a recent paper working like we do here at the converging point of linear logic and automata theory,
Terui \cite{terui} uses a qualitative variant of the relational semantics of linear logic
where formulas are interpreted as partial orders and proofs as downward sets in order to
establish a series of striking results on the complexity of normalization of simply-typed $\lambda$-terms.
Finally, an important related question which we leave untouched here is the comparison
between our work
and the categorical reconstruction of parity games achieved by Santocanale
\cite{santocanale2,santocanale} using the notion of bicomplete category,
see also his more recent work with Fortier \cite{santocanale3}.
\paragraph*{Plan of the paper.}
We start by recalling in \S\ref{section/rel} the traditional relational model of linear logic.
Then, after recalling in \S\ref{section/fixpoints} the definition of a Conway fixpoint operator in a Seely category,
we construct in \S\ref{section/finitary-fixpoint} such a Conway operator for the relational semantics.
We then introduce in \S\ref{section/infinitary-rel} our infinitary variant of the relational semantics,
and illustrate its expressive power in \S\ref{section/inductive-and-coinductive} by defining two
different Conway fixpoint operators.
Then, we define in \S\ref{section/coloured-modality} a coloured modality for the relational semantics,
and construct in \S\ref{section/y-colore} a Conway fixpoint operator in that framework.
We finally conclude in \S\ref{section/conclusion}.
\section{The relational model of linear logic}
\label{section/rel}
In order to be reasonably self-contained, we briefly recall the relational model of linear logic.
The category $Rel$ is defined as the category with finite or countable sets as objects, and with binary relations between $A$ and $B$
as morphisms~$A\to B$.
The category $Rel$ is symmetric monoidal closed, with tensor product defined as (set-theoretic) cartesian product,
and tensorial unit defined as singleton:
$$
\begin{tabular}{rclcrcl}
$A \otimes B$ &$\, = \,$ & $A \times B$
& \quad\quad\quad\quad\quad\quad &
$1$ &$\, = \,$ & $\{\star\}$.
\end{tabular}
$$
Its internal hom (also called linear implication) $X \multimap Y$ is simply defined as $X \otimes Y$.
Since the object $\bot\,=\,1\,=\,\{\star\}$ is dualizing,
the category $Rel$ is moreover $\ast$-autonomous.
The category $Rel$ has also finite products defined as
$$
\begin{tabular}{rcl}
$A \& B$ &$\quad = \quad$ & $\{(1,a) \ \vert \ a \in A\} \cup \{(2,b) \ \vert \ b \in B\} $
\end{tabular}
$$
with the empty set as terminal object $\top$.
As in any category with finite products, there is a diagonal morphism $\diag{A}:A \rightarrow A\, \&\, A$
for every object~$A$, defined
as
$$
\diag{A} \quad = \quad \{(a,\,(i,\,a))\ \vert\ i \in \{1,\,2\} \mbox{ and } a \in A\}
$$
Note that the category $Rel$ has finite sums as well, since the negation $\negation{A}\ =\ A \multimap \bot$
of any object $A$ is isomorphic to the object $A$ itself.
All this makes $Rel$ a model of multiplicative additive linear logic.
In order to establish that it defines a model of propositional linear logic,
we find it convenient to check that it satisfies the axioms
of a Seely category, as originally axiomatized by Seely \cite{seely}
and then revisited by Bierman \cite{bierman}, see the survey \cite{models-of-linear-logic} for details.
To that purpose, recall that a \emph{finite multiset} over a set $A$
is a (set-theoretic) function $w\,:\,A \rightarrow \mathbb{N}$ with finite support,
where the support of $w$ is the set of elements of $A$ whose image is not equal to $0$.
The functor $!: Rel\to Rel$ is defined as
$$
\begin{tabular}{rcl}
$!\,A$ & $\quad = \quad$ & $\mathcal{M}_{fin} (A)$\\
$!\,f$ & $=$ & $\{([a_1, \cdots,\, a_n],\,[b_1,\cdots ,\, b_n]) \ \vert\
\forall i,\,(a_i,\,b_i) \in f \}$\\
\end{tabular}
$$
\noindent
The comultiplication and counit of the comonad are defined as the digging and dereliction morphisms below:
$$
\begin{tabular}{rcl}
$\dig{A}$ & $\quad = \quad$ & $\{(w_1 + \cdots + w_k,\,[w_1, \cdots,\,w_k])\ \vert\ \forall i, \, w_i \in\ !\,A\} \ \in\ Rel(!A,\,!!A)$\\
$\der{A}$ & $=$ & $\{([a],\,a) \ \vert\ a \in A \} \ \in\ Rel(!A,\,A)$\\
\end{tabular}
$$
\noindent
In order to define a Seely category, one also needs the family of isomorphisms
$$
\begin{array}{ccccc}
mzero & \quad : \quad & 1 & \quad \longrightarrow \quad & ! \, \top
\\
mdeux{A}{B} & \quad : \quad & ! \, A \, \otimes \, ! \, B & \quad \longrightarrow \quad & ! \, (\, A \, \& \, B \, )
\end{array}
$$
\noindent
which are defined as $mzero = \{(\star,\,[])\}$ and
$$
\begin{tabular}{rcl}
$mdeux{A}{B}$ & $ = $ & $\{(([a_1, \cdots, a_m],[b_1,\cdots,b_n]),[(1,a_1), \cdots,\, (1,a_m),\,(2,b_1),\,\cdots,\,(2,b_n)]) \}$\\
\end{tabular}
$$
\noindent
One then carefully checks that the coherence diagrams expected of a Seely category commute.
From this it follows that:
\begin{property}
The category $Rel$ together with the finite multiset interpretation
of the exponential modality~$!$ defines a model of propositional linear logic.
\end{property}
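As a concrete (and purely illustrative) aside, the constructions above can be sketched in a few lines of Python, representing a morphism of $Rel$ as a set of pairs and a finite multiset as a sorted tuple. The function names and the truncation bound \texttt{n\_max} are ours, not part of the model:

```python
from itertools import product

def compose(f, g):
    """Relational composition g o f : A -> C of f : A -> B and g : B -> C."""
    return {(a, c) for (a, b1) in f for (b2, c) in g if b1 == b2}

def mset(*xs):
    """A finite multiset, represented canonically as a sorted tuple."""
    return tuple(sorted(xs))

def bang(f, n_max):
    """A finite fragment of !f: multisets of size <= n_max related pointwise."""
    rel = set()
    for n in range(n_max + 1):
        for pairs in product(sorted(f), repeat=n):
            rel.add((mset(*(a for a, _ in pairs)),
                     mset(*(b for _, b in pairs))))
    return rel

def der(A):
    """Dereliction !A -> A: relates the singleton multiset [a] to a."""
    return {(mset(a), a) for a in A}

f = {(1, 'a'), (2, 'b')}
assert (mset(1, 2), mset('a', 'b')) in bang(f, 2)
assert compose(der({1, 2}), f) == {(mset(1), 'a'), (mset(2), 'b')}
```

The last assertion illustrates the comonad structure: precomposing $f$ with dereliction relates singleton multisets $[a]$ to the images of $a$ under $f$.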
\section{Fixpoint operators in models of linear logic}\label{section/fixpoints}
We want to extend linear logic with a fixpoint rule:
$$
\AxiomC{$!\,X \otimes \, !\, A \vdash A$}
\RightLabel{\quad $fix$}
\UnaryInfC{$!\,X \vdash A$}
\DisplayProof
$$
\noindent
In order to interpret it in a Seely category, we need a parametrized fixpoint operator,
defined below as a family of functions
$$
\fixpoint{X,A}\ :\ \mathscr{C}(!\,X \, \otimes \, !A \, , \, A \,) \quad \morph{} \quad \mathscr{C}(!\,X,A)
$$
\noindent
parametrized by $X, A$ and satisfying two elementary conditions,
mentioned for instance by Simpson and Plotkin in~\cite{simpson-plotkin}.
\begin{itemize}
\item \textbf{Naturality:} for any $g:{!\,X}\multimap Z$ and $f: {!\,Z} \, \otimes \, ! \, A \multimap A$, the diagram:
$$
\xymatrix@R=1em{
!\,X \ar[dd]_{\dig{X}} \ar[rrrr]^{\fixpoint{X,A}(k)} & & & & A\\
\\
!\,!\,X \ar[rrrr]_{!\,g} & & & & !\,Z \ar[uu]_{\ \ \fixpoint{Z,A}(f)}}
$$
\noindent
commutes, where the morphism $k:{!\,X} \otimes {!\,A} \multimap A$ in the upper part of the diagram is defined as the composite
$$
\xymatrix@R=1em{
!\,X \, \otimes \, ! \, A \ar[rrrr]^{k} \ar[dd]_{\dig{X} \,\otimes \, !A} & & & & A\\
\\
! \, ! \, X \, \otimes \, ! \, A \ar[rrrr]_{!\,g\,\otimes \, ! A} & & & & !\, Z\, \otimes \, ! \,A \ar[uu]_{f}\\
}
$$
\item \textbf{Parametrized fixpoint property:} for any $f: {!\,X} \,\otimes \, {!\,A} \multimap A$, the following diagram commutes:
$$
\xymatrix {
!\,X \ar[d]_{!\,\diag{X}} \ar[rrrr]^{\fixpoint{X,A}(f)} & & & & A\\
!\,(\,X\,\&\,X\,) \ar[d]_{(mdeux{X}{X})^{-1}} & & & & !\,X \, \otimes \, ! \, A \ar[u]_{f}\\
!\,X \, \otimes \, !\,X \ar[rrrr]_{!\,X\, \otimes\, \dig{X}} & & & & !\,X \, \otimes \,!\,!\,X \ar[u]_{!\,X \, \otimes \, ! \, \fixpoint{X,A}(f)}\\
}
$$
\end{itemize}
These two equations are fundamental but they do not reflect all the equational properties
of the fixpoint operator in domain theory.
For that reason, Bloom and Esik introduced the notion of \emph{Conway theory}
in their seminal work on iteration theories~\cite{bloom-esik,Bloom19961}.
This notion was then rediscovered and adapted to cartesian categories
by Hasegawa \cite{hasegawa}, by Hyland and by Simpson and Plotkin \cite{simpson-plotkin}.
Hasegawa and Hyland moreover independently established a nice correspondence
between the resulting notion of \emph{Conway fixpoint operator} and
the notion of trace operator introduced a few years earlier
by Joyal, Street and Verity \cite{joyal-street-verity}.
Here, we adapt in the most straightforward way this notion of Conway fixpoint operator
to the specific setting of Seely categories.
Before going any further, we find it useful to introduce the following notation: for every pair of morphisms
$$f\,:\,!\,X \,\otimes \,! \, B \multimap A \quad \mbox{ and } \quad g\,:\,!\,X \,\otimes \, ! \, A \multimap B$$
\noindent
we write $f \star g\,:\,!\,X \, \otimes \, ! \, A \multimap A$ for the composite:
$$
\makebox[10cm]{\xymatrix @R=2em{
!\,X \, \otimes \, ! \, A \ar[d]_{!\,\diag{X} \,\otimes\,!\,A} \ar[rrrr]^{f \star g} & & & & A\\
!\,(\,X \, \& \, X\,)\,\otimes\,!\,A \ar[d]_{(mdeux{X}{X})^{-1}\,\otimes\,!\,A} & & & & !\,X\, \otimes \, ! \, B \ar[u]_f\\
!\,X \, \otimes \, !\,X\,\otimes\,!\,A \ar[d]_{!\,X\,\otimes\,mdeux{X}{A}} &&&& !\,X \, \otimes \, ! \, (\,!\,X\,\otimes\,!\,A\,) \ar[u]_{!\,X\,\otimes\,!\,g} \\
!\,X \, \otimes \, !\,(\,X\,\&\,A\,) \ar[rrrr]_{!\,X\,\otimes\,\dig{X\& A}} && && !\,X \, \otimes \, ! \,!\,(\,X\,\&\,A\,) \ar[u]_{!\,X\,\otimes\,!\,(mdeux{X}{A})^{-1}} \\
}}
$$
\noindent
A Conway operator is then defined as a parametrized fixpoint operator satisfying the two additional properties below:
\begin{itemize}
\item \textbf{Parametrized dinaturality:} for any $f\,:\,!\,X \,\otimes \,! \, B \multimap A$ and $g\,:\,!\,X \,\otimes \, ! \, A \multimap B$, the following diagram commutes:
$$
\xymatrix @R=2em {
!\,X \ar[d]_{!\,\diag{X}} \ar[rrrr]^{\fixpoint{X,A}(f \star g)} & & & & A\\
!\,(\,X\,\&\,X\,) \ar[d]_{(mdeux{X}{X})^{-1}} & & & & !\,X\,\otimes\,!\, B \ar[u]_f\\
!\,X\,\otimes\,!\,X \ar[rrrr]_{!\,X\,\otimes\,\dig{X}} &&&& !\,X \, \otimes \, !\,!\,X \ar[u]_{!\,X\,\otimes\, !\,\fixpoint{X,B}(g \star f)}\\
}
$$
\item \textbf{Diagonal property:} for every morphism $f\,:\,!\,X\,\otimes\,!\,A\,\otimes\,!\,A\,\multimap A$,
\begin{equation}
\label{eq/Y-diag}
\fixpoint{X,A}\,(\,(mdeux{X}{A})^{-1} \, \circ \,\fixpoint{X \& A,A}\,(\,f\, \circ\,(\,(mdeux{X}{A})^{-1} \, \otimes \, !\,A\,)\,)\,)
\end{equation}
belongs to $!\,X \multimap A$, since
$$
\xymatrix {
!\,(\,X\,\&\,A \,) \,\otimes\,!\,A \ar[rr]^{(mdeux{X}{A})^{-1} \, \otimes \, !\,A} & & !\,X\,\otimes\,!\,A\,\otimes\,!\,A \ar[rr]^{f} & & A\\
}
$$
is sent by $\fixpoint{X \& A,A}$ to a morphism of $!\,(\,X \,\&\,A\,) \multimap A$, so that
$$
(mdeux{X}{A})^{-1} \, \circ \,\fixpoint{X \& A,A}\,(\,f\, \circ\,(\,(mdeux{X}{A})^{-1} \, \otimes \, !\,A\,)\,) \ : \ !\,X\,\otimes\,!\, A \multimap A
$$
to which the fixpoint operator $\fixpoint{X,A}$ can be applied, giving the morphism (\ref{eq/Y-diag}) of $!\,X \multimap A$. This morphism is required to
coincide with the morphism $\fixpoint{X,A}(k)$, where the morphism $k:{!X}\,\otimes\,{!A}\to A$ is defined as the composite
$$
\xymatrix@R=1em {
!\,X \, \otimes \,!\, A \ar[dd]_{!\,X\,\otimes\,!\,\diag{A}} \ar[rrrr]^{k} & & & & A \\
\\
!\,X\,\otimes\,!\,(\,A\,\&\,A\,) \ar[rrrr]_{!\,X\,\otimes\,(mdeux{A}{A})^{-1}} && & &!\,X \, \otimes \,!\, A \, \otimes \,!\, A \ar[uu]_f\\
}
$$
\end{itemize}
\noindent
Just as expected, we recover in that way the familiar notion of Conway fixpoint operator
as formulated in any cartesian category by Hasegawa, Hyland, Simpson and Plotkin:
\begin{property}
A Conway operator in a Seely category is the same thing as a Conway operator
(in the sense of \cite{hasegawa,simpson-plotkin}) in the cartesian closed category
associated to the exponential modality by the Kleisli construction.
\end{property}
\section{A fixpoint operator in the relational semantics}
\label{section/finitary-fixpoint}
The relational model of linear logic can be equipped
with a natural parametrized fixpoint operator $\fixpoint{}$ which transports any binary relation
$$
f\quad : \quad !\, X\,\otimes \, !\,A \quad \multimap \quad A
$$
to the binary relation
$$
\fixpoint{X,A}(f)\quad : \quad !\,X \multimap A
$$
defined in the following way:
\begin{equation}\label{master-equation}
\begin{tabular}{rclr}
\hspace{-.8em}$\fixpoint{X,A}\,(f)$ & $\, = \,$ & $\{ \, (w,a) \, | \,$ & $\exists \textit{witness} \in \runtree{f}{a} \,\, \mbox{with} \,\, w = \leaves{\textit{witness}} $\\
&&& $ \mbox{and } \textit{witness} \mbox{ is accepting} \, \}$\\
\end{tabular}
\end{equation}
where $\runtree{f}{a}$ is the set of ``run-trees'' defined as trees
with nodes labelled by elements of the set $X \uplus A$ and such that:
\begin{itemize}
\item the root of the tree is labelled by $a$,
\item the inner nodes are labelled by elements of the set $A$,
\item the leaves are labelled by elements of the set $X \uplus A$,
\item and for every node labelled by an element $b \in A$:
\begin{itemize}
\item if $b$ is an inner node, and letting $a_1, \cdots, a_n$ denote the labels
of its children belonging to $A$ and $x_1, \cdots,\, x_m$ the labels belonging to $X$:
$$
\begin{tikzpicture}
\tikzset{level distance=25pt,sibling distance=15pt}
\Tree [.$b$ $x_1$ $\cdots$ $x_m$ $a_1$ $\cdots$ $a_n$ ]
\end{tikzpicture}
$$
then
$
(([x_1,\cdots,\,x_m],[a_1,\cdots ,\,a_n]),b) \in f
$
\item if $b$ is a leaf, then $([],b) \in f$.
\end{itemize}
\end{itemize}
and where $\leaves{\textit{witness}}$ is the multiset obtained by enumerating the labels of the leaves of the run-tree $\textit{witness}$.
Recall that multisets account for the number of occurrences of an element,
so that $\leaves{\textit{witness}}$ has the same number of elements as there are leaves
in the run-tree $\textit{witness}$.
Moreover, $\leaves{\textit{witness}}$ is independent of the enumeration of the leaves,
since multisets can be understood as abelian versions of lists.
Finally, we declare that a run-tree is \emph{accepting} when it is a finite tree.
\begin{property}
The fixpoint operator $\fixpoint{}$ is a Conway operator on Rel.
\end{property}
\begin{example}
\label{example/witness}
Suppose that
$$f\quad =\quad \{([],a)\} \cup \{([a,x],a)\}$$
where $A\,=\,\{a\}$ and $X\,=\,\{x\}$. Denote by $\mathcal{M}_n$ the finite multiset containing the element $x$ with multiplicity $n$.
Then, for every $n \in \mathbb{N}$, we have that $(\mathcal{M}_n,a) \in \fixpoint{X,A}(f)$
since $(\mathcal{M}_n,a)$ can be obtained from the $\{a,\,x\}$-labelled witness run-tree
of Figure \ref{figure/witness1}, which has $n+1$ internal occurrences of the element $a$,
and $n$ occurrences of the element $x$ at the leaves. The witness tree is finite, and therefore accepting.
Now, consider the relation
$$g\quad =\quad \{([a],a)\} \cup \{([a,x],a)\}$$
In that case, $(\mathcal{M}_n,a)$ is not an element of $\fixpoint{X,A}(g)$ for any $n \in \mathbb{N}$
because all run-trees are necessarily infinite, as depicted in Figure \ref{figure/badwitness},
and thus, none is accepting.
As a consequence, $\fixpoint{X,A}(g)$ is the empty relation.
\begin{figure}
\caption{A finite, hence accepting, run-tree witnessing $(\mathcal{M}_n,a) \in \fixpoint{X,A}(f)$, with $n+1$ internal occurrences of $a$ and $n$ leaves labelled $x$ (left), and the necessarily infinite, non-accepting run-trees associated to the relation $g$ (right).}
\label{figure/witness1}
\label{figure/badwitness}
\end{figure}
\end{example}
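The claims of this example can be replayed mechanically. The sketch below (ours, not part of the paper) performs a bounded search for finite run-trees: it computes the leaf multisets of all run-trees of height at most \texttt{depth} rooted at a given element, collecting only the $X$-labelled leaves, as in the definition of $\fixpoint{X,A}$:

```python
from itertools import product

def run_leaves(f, b, depth):
    """Leaf multisets (sorted tuples over X) of finite run-trees of height
    at most `depth` rooted at b. Here f is a set of triples (xs, as_, c)
    encoding the elements (([x_1,...], [a_1,...]), c) of the relation."""
    if depth < 0:
        return set()
    results = set()
    for xs, as_, c in f:
        if c != b:
            continue
        # each child labelled in A must itself root a smaller run-tree
        options = [run_leaves(f, a, depth - 1) for a in as_]
        for combo in product(*options):
            leaves = xs + tuple(x for m in combo for x in m)
            results.add(tuple(sorted(leaves)))
    return results

# f = {([],a)} u {([a,x],a)}: one x-leaf per unfolding, then a base case
f = {((), (), 'a'), (('x',), ('a',), 'a')}
assert ('x',) * 3 in run_leaves(f, 'a', 3)  # (M_3, a) in Y(f)

# g = {([a],a)} u {([a,x],a)}: no base case, so no finite run-tree exists
g = {((), ('a',), 'a'), (('x',), ('a',), 'a')}
assert run_leaves(g, 'a', 10) == set()
```

Since every finite run-tree is accepting here, the first assertion confirms $(\mathcal{M}_3,a) \in \fixpoint{X,A}(f)$, while the empty result for $g$ reflects the fact that all its run-trees are infinite.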
The terminology which we have chosen for the definition of $\fixpoint{}$ is obviously automata-theoretic.
In fact, as we already mentioned in the introduction, this definition may be formulated
as an exploration of the infinitary tree $\textbf{comb}$ on the ranked alphabet $\Sigma\,=\,\{\,\bullet\,:\,2,\,\circ\,:\,0 \,\}$
by an alternating tree automaton associated to the binary relation $f\,:\,!\, X\,\otimes \, !\,A\, \multimap A$.
Indeed, given an element $a \in A$,
consider the alternating tree automaton $\mathcal{A}_{f,a}\,=\,\langle \Sigma,\, X \uplus A,\,\delta,\,a \rangle$ where, for $b \in A$ and $x \in X$:
$$
\delta(b,\,\bullet)= \bigvee_{(([x_1,\cdots,\,x_n],[a_1,\cdots,\,a_m]),b) \in f} \ \left(\,(1,x_1) \wedge \cdots \wedge (1,x_n) \wedge (2,a_1) \wedge \cdots \wedge (2,a_m) \right)
$$
\begin{center}
\begin{tabular}{rcl}
$\delta(x,\,\bullet)\ =\ \bot$ &
$\quad \quad \delta(x,\,\circ)\ =\ \top \quad \quad $ &
$\delta(b,\,\circ)\ =\ \begin{cases} \top &\mbox{if } ([],b) \in f \\
\bot & \mbox{else} \end{cases}$\\
\end{tabular}
\end{center}
Note that we allow here the use of an infinite non-deterministic choice operator $\bigvee$ in formulas describing transitions, but only with \emph{finite} alternation.
Now, our point is that $\runtree{f}{a}$ coincides with the set of run-trees of the alternating automaton $\mathcal{A}_{f,a}$
over the infinite tree \textbf{comb} depicted in the Introduction.
Notice that only finite run-trees are accepting: this requires that for some $b \in A$ the transition $\delta(b,\,\bullet)$ contains the alternating choice $\top$, in which case the exploration of the infinite branch of \textbf{comb} stops and produces an accepting run-tree. This requires in particular the existence of some $b \in A$ such that $([],b) \in f$.
\section{Infinitary exponentials}
\label{section/infinitary-rel}
Now that we have established a link with tree automata theory, it is tempting to relax the finiteness acceptance
condition imposed on run-trees in the previous section.
To that purpose, however, we need to relax the usual assumption that the formulas of linear logic
are interpreted as finite or countable sets.
Suppose indeed that we want to interpret the exponential modality
$$\lightning A$$
as the set of finite or countable multisets, where a countable multiset
of elements of~$A$ is defined as a function
$$
A \quad \longrightarrow \quad \overline{\mathbb{N}}
$$
with finite or countable support.
Quite obviously, the set
$$
\lightning \, \mathbb{N}
$$
has the cardinality of the reals $2^{\aleph_0}$.
We thus need to go beyond the traditionally countable relational interpretations of linear logic.
However, we may suppose that every set $A$ interpreting a formula has cardinality
at most $2^{\aleph_0}$.
In order to understand why, it is useful to reformulate the elements of $\lightning A$
as finite or infinite words of elements of~$A$ modulo an appropriate notion of equivalence
of finite or infinite words up to permutation of letters.
Given a finite word $u$ and a finite or infinite word $w$, we write
$$
u \sqsubseteq w
$$
when there exists a finite prefix $v$ of $w$ such that $u$ is a prefix of $v$ modulo permutation of letters.
We write
$$
w_1 \simeq w_2 \stackrel{def}{\iff} \forall u \in A^{*}, \quad u\sqsubseteq w_1 \iff u\sqsubseteq w_2
$$
where $A^{*}$ denotes the set of finite words on the alphabet~$A$.
\begin{proposition}
There is a one-to-one correspondence between the elements of $\lightning A$
and the finite or infinite words on the alphabet $A$ modulo the equivalence relation~$\simeq$.
\end{proposition}
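For finite words, this correspondence can be tested directly. In the sketch below (ours), $u \sqsubseteq w$ is checked by embedding the multiset of $u$ into $w$ (for a finite word, the full word subsumes all of its prefixes), and $\simeq$ is checked by brute force over all test words $u$ up to the larger length:

```python
from collections import Counter
from itertools import product

def below(u, w):
    """u ⊑ w for a finite word w: the multiset of letters of u embeds
    into some finite prefix of w (hence into w itself)."""
    cu, cw = Counter(u), Counter(w)
    return all(cw[c] >= n for c, n in cu.items())

def simeq(w1, w2, alphabet):
    """w1 ≃ w2, checked on all test words u up to the larger length
    (sound for finite words only)."""
    n = max(len(w1), len(w2))
    return all(below(u, w1) == below(u, w2)
               for k in range(n + 1)
               for u in product(alphabet, repeat=k))

# two enumerations of the same multiset are equivalent
assert simeq("aab", "aba", "ab")
# different multisets are separated by some test word u
assert not simeq("aab", "abb", "ab")
```

As the assertions illustrate, two finite words are $\simeq$-equivalent exactly when they enumerate the same multiset, in accordance with the proposition.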
This means in particular that for every set~$A$,
there is a surjection from the set~$A^{\infty}=A^{\ast}\uplus A^{\omega}$
of finite or infinite words on the alphabet~$A$ to the set $\lightning A$
of finite or countable multisets.
An element of the equivalence class associated to a multiset is called a \emph{representation} of this multiset.
Notice that if a set~$A$ has cardinality at most $2^{\aleph_0}$, the set~$A^{\infty}$
is itself bounded by $2^{\aleph_0}$, since $(2^{\aleph_0})^{\aleph_0}\ =\ 2^{\aleph_0 \times \aleph_0}\ =\ 2^{\aleph_0}$.
This property leads us to define the following extension of $Rel$:
\begin{definition}
The category $Relinfinitary$ has the sets $A,B$ of cardinality at most
$2^{\aleph_0}$ as objects, and binary relations $f\subseteq A\times B$
between $A$ and $B$ as morphisms $A\to B$.
\end{definition}
Since a binary relation between two sets $A$ and $B$ is a subset of $A\times B$,
the cardinality of a binary relation in $Relinfinitary$ is also bounded by $2^{\aleph_0}$.
Note that the hom-set $Relinfinitary(A,B)$ is in general of higher cardinality than $2^{\aleph_0}$,
yet it is bounded by the cardinality of the powerset of the reals.
It is immediate to establish that:
\begin{property}
The category $Relinfinitary$ is $\ast$-autonomous and has finite products.
As such, it provides a model of multiplicative additive linear logic.
\end{property}
It remains to show that the finite-or-countable multiset construction $\lightning$
defines a categorical interpretation of the exponential modality of linear logic.
Again, just as in the finitary case, we find it convenient to check that $Relinfinitary$ together
with the finite-or-countable multiset interpretation $\lightning$ satisfies the axioms of a Seely category.
In that specific formulation of a model of linear logic, the first property to check is that:
\begin{property}
The finite-or-countable multiset construction $\lightning$ defines
a comonad on the category $Relinfinitary$.
\end{property}
The counit of the comonad is defined as the binary relation
$$
\der{A} \quad : \quad \lightning \, A \quad \longrightarrow \quad A
$$
which relates $[a]$ to $a$ for every element~$a$ of the set~$A$.
In order to define its comultiplication, we first need to extend the notion of sum
of multisets to the infinitary case, which we do in the obvious way, by extending
the binary sum of~$\mathbb{N}$ to possibly infinite sums in its completion $\overline{\mathbb{N}}$.
In order to unify the notation for finite-or-countable multisets with the one for finite multisets used in Section \ref{section/rel},
we find it convenient to denote by $[a_1,\,a_2,\,\cdots]$ the countable multiset admitting the representation $a_1 a_2 \cdots$.
We are now ready to describe the comultiplication
$$\dig{A}\quad : \quad \lightning\, A \quad \rightarrow \quad \lightning\,\lightning\,A$$
of the comonad $\lightning$ as a straightforward generalization of the finite case:
$$
\begin{tabular}{rcl}
$\dig{A}$&$\quad = \quad $ &$\{(w_1 + \cdots + w_k,\,[w_1, \cdots,\,w_k])\ \vert\ \forall i \in \{1, \cdots, k\}, \, w_i \in\ \lightning\,A\}$\\
&& $ \! \! \! \! \! \! \cup\ \{ (w_1 + \cdots + w_k + \cdots,\,[w_1, \cdots,\,w_k, \cdots])\ \vert\ \forall i \in \mathbb{N}, \, w_i \in\ \lightning\,A\}$\\
\end{tabular}
$$
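The finite part of this comultiplication can be sketched with multisets represented as \texttt{Counter}s (an illustrative fragment only; the countable case requires sums valued in $\overline{\mathbb{N}}$):

```python
from collections import Counter

def msum(parts):
    """Multiset sum w1 + ... + wk, the finite case of the possibly
    infinite sum used in the definition of dig_A."""
    total = Counter()
    for w in parts:
        total += w
    return total

# One element of dig_A for A = {'a', 'b'}: the pair
# (w1 + w2 + w3, [w1, w2, w3]) with each wi in !A.
w1, w2, w3 = Counter("a"), Counter("ab"), Counter()
element = (msum([w1, w2, w3]), [w1, w2, w3])
```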
One then defines the isomorphism
\begin{equation}\label{equation/iso-seely-0}
mzero \ = \ \{(\star,[])\} \quad : \quad 1 \quad \longrightarrow \quad \lightning \, \top
\end{equation}
and the family of isomorphisms
\begin{equation}\label{equation/iso-seely-2}
mdeux{A}{B} \quad : \quad
\lightning \, A \, \otimes \,
\lightning \, B
\quad \longrightarrow \quad
\lightning \, (\, A \, \& \, B \, )
\end{equation}
indexed by the objects $A,B$ of the category~$Relinfinitary$ which relates
every pair $(w_A,w_B)$ of the set $\lightning \, A \, \otimes \, \lightning \, B$ with the finite-or-countable multiset
$$
(\{1\}\times w_A) + (\{2\} \times w_B) \quad \in \quad \lightning \, (\, A \, \& \, B \, )
$$
where the operation $\{1\} \times w_A$ maps the finite-or-countable multiset $w_A\,=\,[a_1,\,a_2,\ldots]$ of elements of
$A$ to the finite-or-countable multiset $[(1,\,a_1),\,(1,\,a_2),\ldots]$ of $\lightning (A \& B)$. We define
$\{2\} \times w_B$ similarly.
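On representatives given by finite multisets, this action can be sketched as follows (\texttt{m2} is our illustrative name for the underlying map; only the finite fragment is shown):

```python
from collections import Counter

def m2(wA, wB):
    """Finite fragment of the Seely map: tag the elements of wA
    with 1 and those of wB with 2, then sum in !(A & B)."""
    tagged = Counter({(1, a): n for a, n in wA.items()})
    tagged.update({(2, b): n for b, n in wB.items()})
    return tagged
```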
We check carefully that
\begin{property}
The comonad $\lightning$ on the category $Relinfinitary$
together with the isomorphisms (\ref{equation/iso-seely-0}) and (\ref{equation/iso-seely-2})
satisfy the coherence axioms of a Seely category -- see \cite{models-of-linear-logic}.
\end{property}
\noindent
In other words, this comonad $\lightning$ over the category $Relinfinitary$ induces a new and infinitary model of propositional linear logic.
The next section is devoted to the definition of two different fixpoint operators living inside this new model.
\section{Inductive and coinductive fixpoint operators}\label{section/inductive-and-coinductive}
In the infinitary relational semantics, a binary relation
$$f\,\,:\,\,\lightning A \, \, \multimap \, \, B$$
may require a countable multiset~$w$ of elements (or positions) of the input set~$A$
in order to reach a position~$b$ of the output set~$B$.
For that reason, we need to generalize the notion of alternating tree automata to \emph{finite-or-countable alternating tree automata},
a variant in which the formulas defining transitions make use of a possibly countable alternation operator $\bigwedge$
and of a possibly countable non-deterministic choice operator $\bigvee$.
The generalization of the family of automata $\mathcal{A}_{f,a}$ of \S\ref{section/finitary-fixpoint} leads to a new definition of the set $\runtree{f}{a}$, in which witness trees may have internal nodes of countable arity.
A first important observation is the following result:
\begin{property}
Given $f\,:\,\lightning\, A \, \otimes \, \lightning \,X \multimap A$, $a \in A$, and $\textit{witness} \in \runtree{f}{a}$, the multiset $\leaves{\textit{witness}}$ is finite or countable.
\end{property}
An important consequence of this observation is that the definition of the Conway operator $\fixpoint{}$ given in Equation~(\ref{master-equation})
can be very simply adapted to the finite-or-countable interpretation of the exponential modality~$\lightning$ in the Seely category $Relinfinitary$.
Moreover, in this infinitary model of linear logic, we can give more elaborate acceptance conditions, among which two are canonical:
\begin{itemize}
\item considering that any run-tree is accepting, one defines the \emph{coinductive} fixpoint on the model, which is the greatest fixpoint over $Relinfinitary$.
\item on the other hand, by accepting only trees without infinite branches, we obtain the \emph{inductive} interpretation of the fixpoint,
which is the least fixpoint operator over $Relinfinitary$.
\end{itemize}
It is easy to see that the two fixpoint operators are different: recall Example~\ref{example/witness},
and observe that the binary relation~$g$ is also a relation in the infinitary semantics.
It turns out that its inductive fixpoint is the empty relation, while its coinductive fixpoint coincides with the relation
$$
\{(\mathcal{M}_n,\,a) \ \vert \ n \in \mathbb{N}\}\ \cup\ \{([x,\,x,\,\cdots],a)\}
$$
In this coinductive interpretation, the run-tree obtained by using $([x,a],a)$ infinitely often and never using $([a],a)$ is accepting and is the witness tree generating $\{([x,\,x,\,\cdots],a)\}$.
\begin{property}
The inductive and coinductive fixpoint operators over the infinitary relational model of linear logic are Conway operators on this Seely category.
\end{property}
\section{The coloured exponential modality}\label{section/coloured-modality}
In their semantic study of the parity conditions used in higher-order model-checking,
and more specifically of the work by Kobayashi and Ong \cite{kobayashi-ong},
the authors recently discovered~\cite{coloured-tensorial-logic} that these parity conditions are secretly regulated
by the existence of a comonad $\Box$ which can be interpreted in the relational semantics of linear logic
as
$$
\Box\ A \quad = \quad Col \times A
$$
where $Col=\{1,\dots,N\}$ is a finite set of integers called \emph{colours}.
The colours (or priorities) are introduced in order to regulate the fixpoint discipline:
in the immediate scope of an even colour, fixpoints should be interpreted coinductively,
and inductively in the immediate scope of an odd colour.
It is worth mentioning that the comonad~$\Box$ has its comultiplication defined by the maximum operator
in order to track the maximum colour encountered during a computation:
$$
\begin{array}{ccccccc}
\delta_A &\, = \, &\{ ((\max(c_1,c_2), a) , (c_1, (c_2, a))) \, | \, c_1,c_2\in Col, a\in A\} &\hspace{.5em} : \hspace{.5em}&
\Box A & \hspace{.5em} \multimap \hspace{.5em} & \Box \, \Box A\\
\varepsilon_A & = & \{ ((1,a) , a) \, | \, a\in A\} & : & \Box A & \multimap & A
\end{array}
$$
whereas the counit is defined using the minimum colour $1$.
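Concretely, for a small set of colours the two structure maps are the following finite relations (a sketch; the choice $N=3$ and the two-element alphabet are illustrative assumptions):

```python
# Colour comonad on Box A = Col x A: the comultiplication splits a
# colour as a maximum of two colours, the counit uses the minimum colour 1.
Col = range(1, 4)            # Col = {1, 2, 3}, illustrative
A = {'a', 'b'}

delta = {((max(c1, c2), a), (c1, (c2, a)))
         for c1 in Col for c2 in Col for a in A}
epsilon = {((1, a), a) for a in A}
```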
The resulting comonad is symmetric monoidal and also satisfies the following key property:
\begin{property}
There exists a distributive law $\lambda \, : \, \lightning \ \Box \,\, \rightarrow \,\, \Box \ \lightning$ between comonads.
\end{property}
\noindent
A fundamental consequence is that the two comonads can be composed
into a single comonad $\lightning\hspace{-.5765em}\lightning\hspace{-.5765em}\lightning$ defined as follows:
$$
\lightning\hspace{-.5765em}\lightning\hspace{-.5765em}\lightning \quad = \quad \lightning \ \circ \ \Box
$$
The resulting \emph{infinitary} and \emph{coloured} relational semantics of linear logic
is obtained from the category $Relinfinitary$ equipped with the composite comonad $\lightning\hspace{-.5765em}\lightning\hspace{-.5765em}\lightning$.
\begin{theorem}
The category $Relinfinitary$ together with the comonad $\lightning\hspace{-.5765em}\lightning\hspace{-.5765em}\lightning$
defines a Seely category and thus a model of propositional linear logic.
\end{theorem}
\section{The inductive-coinductive fixpoint operator~$\fixpoint{}$}
\label{section/y-colore}
We combine the results of the previous sections
in order to define a fixpoint operator $\fixpoint{}$ over the infinitary coloured relational model,
which generalizes both the inductive and the coinductive fixpoint operators.
Note that in this infinitary and coloured framework,
we wish to define a fixpoint operator $\fixpoint{}$ which transports a binary relation
$$
f\quad : \quad\lightning\hspace{-.5765em}\lightning\hspace{-.5765em}\lightning X \, \otimes \, \lightning\hspace{-.5765em}\lightning\hspace{-.5765em}\lightning A \,\,\, \multimap \,\,\, A
$$
into a binary relation
$$
\fixpoint{X,A}\,(f) \quad : \quad \lightning\hspace{-.5765em}\lightning\hspace{-.5765em}\lightning X \,\,\, \multimap \,\,\, A.
$$
To that purpose, notice that the definition given in \S\ref{section/finitary-fixpoint}
of the set $\runtree{f}{a}$ of run-trees
extends immediately to this new coloured setting, since the only change is in the set of labellings.
Again, accepting all run-trees would lead to the coinductive fixpoint, while accepting only run-trees whose branches are finite
would lead to the inductive fixpoint.
We now define our acceptance condition for run-trees in the expected way,
directly inspired by the notion of alternating parity tree automaton.
Consider a run-tree $\textit{witness}$, and remark that its nodes are labelled with elements
of $(\,Col \times A \,) \cup (\,Col \times X\,)$.
We call the colour of a node the first element of its label. Coloured acceptance is then defined as follows:
\begin{itemize}
\item a finite branch is accepting,
\item an infinite branch is accepting precisely when the greatest colour appearing infinitely often in the labels of its nodes is even.
\item a run-tree is accepting precisely when all its branches are accepting.
\end{itemize}
Note that a run-tree whose nodes are all of an even colour will be accepted independently of its depth,
as in the coinductive interpretation, while a run-tree labelled only with odd colours will be accepted
precisely when it is finite, just as in the inductive interpretation.
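For ultimately periodic branches, the coloured acceptance condition can be sketched as follows (a hypothetical helper; we represent an infinite branch by a finite prefix of colours followed by a cycle repeated forever, and a finite branch by an empty cycle):

```python
def branch_accepting(prefix, cycle):
    """Parity acceptance: a finite branch (empty cycle) is accepting;
    an infinite branch is accepting iff the greatest colour occurring
    infinitely often -- here, the greatest colour of the cycle -- is even."""
    if not cycle:
        return True
    return max(cycle) % 2 == 0
```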
We call the fixpoint operator associated with the notion of coloured acceptance
the inductive-coinductive fixpoint operator over the infinitary coloured relational model.
\begin{theorem}
The inductive-coinductive fixpoint operator $\fixpoint{}$ defined over the infinitary coloured relational semantics of linear logic
is a Conway operator.
\end{theorem}
\section{Conclusion}\label{section/conclusion}
In this article, we introduced an infinitary variant of the familiar relational semantics of linear logic.
We then established that this infinitary model accommodates an inductive as well as a coinductive Conway operator $\fixpoint{}$.
This propelled us to define a coloured relational semantics and to define an inductive-coinductive fixpoint operator
based on a parity acceptance condition.
The authors proved recently \cite{coloured-tensorial-logic} that a recursion scheme can be interpreted in this model in such a way
that its denotation contains the initial state of an alternating parity automaton if and only if the tree
it produces satisfies the MSO property associated with the automaton.
A crucial point related to the work by Salvati and Walukiewicz \cite{salvati-walukiewicz1}
is the fact that a tree satisfies a given MSO property if and only if any suitable representation
as an infinite tree of a $\lambda Y$-term generating it also does.
We are thus convinced that this infinitary and coloured variant of the relational semantics of linear logic
will play an important and clarifying role in the denotational and compositional study of higher-order model-checking.
\section*{Appendix: Seely categories}
A \emph{Seely category} is defined as a symmetric monoidal closed category
$(\mathscr{L},\otimes,1)$ with binary products $A\& B$, a terminal object~$\top$, and:
\begin{enumerate}
\item a comonad $(!,\dig{},\der{})$,
\item two natural isomorphisms
\begin{center}
\begin{tabular}{ccc}
$mdeux{A}{B}
\hspace{1em}
:
\hspace{1em}
!A\otimes !B
\hspace{.3em}
\cong
\hspace{.3em}
!(A\& B)$
&
\hspace{4em}
&
$mzero
\hspace{1em}
:
\hspace{1em}
1
\hspace{.3em}
\cong
\hspace{.3em}
!\top$
\end{tabular}
\end{center}
making
$$(!,m) \quad : \quad (\mathscr{L},\&,\top) \quad \morph{} \quad (\mathscr{L},\otimes,1)$$
a symmetric monoidal functor.
\end{enumerate}
One also asks that the coherence diagram
\begin{equation}
\label{equation/diagram-of-seely-one}
\vcenter{\xymatrix @-.5pc {
!A\otimes !B
\ar[rrrr]^{m}
\ar[dd]_{\dig{A}\otimes \dig{B}}
&&&&
!(A\& B)
\ar[d]^{\dig{A\& B}}
\\
&&&&
!!(A\& B)
\ar[d]^{!\paire{!\pi_1}{!\pi_2}}
\\
!!A\otimes !!B
\ar[rrrr]^{m}
&&&&
!(!A\& !B)}}
\end{equation}
commutes in the category~$\mathscr{L}$ for all objects~$A$ and~$B$, and that the following four diagrams expressing the fact that the functor~$(!,m)$
is symmetric monoidal:
\begin{equation}\label{equation/diagram-of-seely-two}
\vcenter{\xymatrix @+.2pc {
(!A\otimes !B)\otimes !C
\ar[rrr]^{\alpha}
\ar[d]_{m\otimes !C}
&&&
!A\otimes(!B\otimes !C)
\ar[d]^{!A\otimes m}
\\
!(A\& B)\otimes !C\ar[d]_{m}
&&&
!A\otimes !(B\& C)\ar[d]^{m}
\\
!((A\& B)\& C)\ar[rrr]^{!\alpha}
&&&
!(A\& (B\& C))}}
\end{equation}
\begin{equation}\label{equation/diagram-of-seely-three}
\begin{array}{ccc}
\vcenter{\xymatrix @-.1pc {
!A\otimes 1
\ar[rr]^{\rho}
\ar[d]_{!A\otimes m}
&&
!A\\
!A\otimes !\top\ar[rr]^{m}
&&
!(A\&\top)\ar[u]_{!\rho}}}
&&
\vcenter{\xymatrix @-.1pc {
1\otimes !B\ar[rr]^{\lambda}\ar[d]_{m\otimes !B} && !B\\
!\top\otimes !B\ar[rr]^{m}&&!(\top\& B)\ar[u]_{!\lambda}}}
\end{array}
\end{equation}
\begin{equation}\label{equation/diagram-of-seely-four}
\vcenter{
\xymatrix @+.6pc {
!A\otimes !B\ar[rr]^{\gamma} \ar[d]_{m}
&&
!B\otimes !A\ar[d]^{m}
\\
!(A\& B)\ar[rr]^{!\gamma}
&&
!(B\& A)
}}
\end{equation}
commute in the category~$\mathscr{L}$ for all objects~$A,B$ and $C$.
\end{document} |
\begin{document}
\title{The minimal communication cost for simulating entangled qubits
}
\author{Martin J. Renner,}
\email{[email protected]}
\affiliation{University of Vienna, Faculty of Physics, Vienna Center for Quantum Science and Technology (VCQ), Boltzmanngasse 5, 1090 Vienna, Austria}
\affiliation{Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria}
\author{Marco Túlio Quintino}
\email{[email protected]}
\affiliation{Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria}
\affiliation{University of Vienna, Faculty of Physics, Vienna Center for Quantum Science and Technology (VCQ), Boltzmanngasse 5, 1090 Vienna, Austria}
\date{\today}
\begin{abstract}
We analyse the amount of classical communication required to reproduce the statistics of local projective measurements on a general pair of entangled qubits, ${\ket{\Psi_{AB}}=\sqrt{p}\ket{00}+\sqrt{1-p}\ket{11}}$ ($1/2\leq p \leq 1$). We construct a classical protocol that perfectly simulates local projective measurements on all entangled qubit pairs by communicating one classical trit. Additionally, when $\frac{2p(1-p)}{2p-1} \log{\left(\frac{p}{1-p}\right)}+2(1-p)\leq1$,
approximately $0.835 \leq p \leq 1$, we present a classical protocol which requires only a single bit of communication. The latter model even allows a perfect classical simulation with an average communication cost that approaches zero in the limit where the degree of entanglement approaches zero ($p \to 1$). This proves that the communication cost for simulating weakly entangled qubit pairs is strictly smaller than for the maximally entangled one.
\end{abstract}
\maketitle
Bell's nonlocality theorem \cite{Bell} shows that quantum correlations cannot be reproduced by local hidden variables. This discovery has significantly changed our understanding of quantum theory and correlations allowed by nature. Additionally, Bell nonlocality found application in cryptography~\cite{Ekert91} and opened the possibility for protocols in which security does not rely on quantum theory but on the non-signalling principle~\cite{Acin07,pironio10,Vazirani14,supic19}.
A crucial hypothesis for all nontrivial results of Bell nonlocality is that the parties cannot communicate their choice of measurements to each other. In other words, the applications and foundational value of Bell nonlocality break down when communication is allowed.
However, since measurements are described by continuous parameters, one might expect that the communication cost to reproduce nonlocal quantum correlations is infinite \cite{Maudlin1992}. After a sequence of improved protocols for entangled qubits \cite{Brassard1999, Steiner2000, cerf2000, Pati2000, massar2001}, a breakthrough was made by Toner and Bacon in 2003 \cite{tonerbacon2003}.
They showed that a single classical bit of communication is sufficient to simulate the statistics of all local projective measurements on a maximally entangled qubit pair. Classical communication has then been established as a natural measure of Bell nonlocality~\cite{degorre2005, Degorre2007, Regev2010, Branciard2011, Branciard2012, Brassard2015, Brassard2019} and found applications in constructing local hidden variable models~\cite{degorre2005}.
For non-maximally entangled qubit pairs, somewhat counterintuitively, all known protocols require strictly more resources. In terms of communication, the best known result is also due to Toner and Bacon~\cite{tonerbacon2003}. There, they present a protocol for non-maximally entangled qubits, which requires two bits of communication (see Ref.~\cite{Renner2022} for a two-bit protocol which considers general POVM measurements). The asymmetry of partially entangled states and other evidence suggested that simulating weakly entangled states may be harder than simulating maximally entangled ones. For instance, in Ref.~\cite{brunner2005} the authors prove that at least two uses of a PR-box are required for simulating weakly entangled qubit pairs, while a single use of a PR-box is sufficient for maximally entangled qubits~\cite{cerfprbox}. Additionally, weakly entangled states are strictly more robust than maximally entangled ones when the detection loophole is considered~\cite{Eberhard92,Cabello2007detection,Brunner2007detection,araujo12}.
In this work, we present a protocol that simulates the statistics of arbitrary local projective measurements on weakly entangled qubit pairs with only a single bit of communication. Then, we construct another protocol to simulate local projective measurements on any entangled qubit pair at the cost of a classical trit.
\begin{figure}
\caption{\textbf{a)}}
\label{fig1}
\end{figure}
\textit{The task and introduction of our notation. ---}
Up to local unitaries, a general entangled qubit pair can be written as
\begin{align}
\ket{\Psi_{AB}}=\sqrt{p}\ket{00}+\sqrt{1-p}\ket{11} \, ,
\end{align}
with $1/2 \leq p \leq 1$.
At the same time, projective qubit measurements can be identified with a normalized three-dimensional real vector $\vec{x}\in\mathbb{R}^3$, the Bloch vector, $\vec{x}=[x_x,x_y,x_z]$ (with $|\vec{x}|= 1$) via the equation $\ketbra{\vec{x}}=\big(\mathds{1} + \vec{x}\cdot\vec{\sigma}\big)/2$. Here, $\vec{\sigma}=(\sigma_X,\sigma_Y,\sigma_Z)$ are the standard Pauli matrices. In this way, we denote Alice's and Bob's measurement projectors as
$\ketbra*{\pm\vec{x}}$ and $\ketbra*{\pm \vec{y}}$, which satisfy $\ketbra*{+\vec{x}}+\ketbra*{-\vec{x}}=\ketbra*{+\vec{y}}+\ketbra*{-\vec{y}}=\mathds{1}$.
According to Born's rule, when Alice and Bob apply their measurements on the entangled state $\ket{\Psi_{AB}}$, they output $a,b\in\{-1,+1\}$ according to the statistics:
\begin{align} \small
p_Q(a,b|\vec{x},\vec{y})=\Tr[\ketbra*{a\vec{x}}\otimes \ketbra*{b \vec{y}}\ \ketbra*{\Psi_{AB}}] . \label{prob}
\end{align} \normalsize
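The Born-rule statistics of Eq.~\eqref{prob} can be evaluated numerically with a short script (a sketch; the helper names \texttt{proj} and \texttt{p\_Q} are ours):

```python
import numpy as np

I2 = np.eye(2)
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def proj(x):
    """Projector |x><x| = (1 + x . sigma)/2 onto the Bloch vector x."""
    return (I2 + sum(xi * s for xi, s in zip(x, PAULI))) / 2

def p_Q(a, b, x, y, p):
    """Born-rule probability of outcomes a, b in {-1, +1} for local
    projective measurements x, y on sqrt(p)|00> + sqrt(1-p)|11>."""
    psi = np.array([np.sqrt(p), 0, 0, np.sqrt(1 - p)])
    M = np.kron(proj([a * c for c in x]), proj([b * c for c in y]))
    return float(np.real(psi.conj() @ M @ psi))
```

For example, measuring both qubits along $\vec{z}$ on the state with $p=0.8$ yields perfectly correlated outcomes with weights $0.8$ and $0.2$.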
In this work, we consider the task of simulating the statistics of Eq.~\eqref{prob} with purely classical resources. More precisely, instead of Alice and Bob performing measurements on the actual quantum state, Alice prepares an output $a$ and a message $c \in \{1,2,...,d\}$ that may depend on her measurement setting $\vec{x}$ and a shared classical variable $\lambda$ that follows a certain probability function $\rho(\lambda)$ (see Fig.~\ref{fig1}~b)). Therefore, we can denote Alice's strategy as $p_A(a,c|\vec{x},\lambda)$. Afterwards, Alice sends the message $c$ to Bob, who produces an outcome $b$ depending on the message $c$ he received from Alice, his measurement setting $\vec{y}$ and the shared variable $\lambda$. In total, we denote his strategy as $p_B(b|c,\vec{y},\lambda)$. All together, the total probability that Alice and Bob output $a,b\in\{-1,+1\}$ becomes:
\begin{align} \small
p_C(a,b|\vec{x},\vec{y})=
\int_\lambda \text{d}\lambda \; \rho(\lambda) \sum_{c=1}^{d} p_A(a,c|\vec{x},\lambda) p_B (b|\vec{y},c,\lambda) \, .
\end{align}
\normalsize
The simulation is successful if, for any choice of projective measurements and any outcome, the classical statistics matches the quantum predictions:
\begin{equation}
p_C(a,b|\vec{x},\vec{y})=p_Q(a,b|\vec{x},\vec{y}) \, .
\end{equation}
For what follows, we also introduce the Heaviside function, defined by $H(z)=1$ if $z\geq 0$ and $H(z)=0$ if $z<0$, as well as the related functions $\Theta(z):=H(z)\cdot z$ and the sign function $\sgn(z):=H(z)-H(-z)$.
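In code, these three auxiliary functions are a direct transcription of the definitions (note that, with the convention $H(0)=1$, one gets $\sgn(0)=H(0)-H(-0)=0$):

```python
def H(z):
    """Heaviside function: H(z) = 1 if z >= 0, else 0."""
    return 1.0 if z >= 0 else 0.0

def Theta(z):
    """Theta(z) = H(z) * z, the positive part of z."""
    return H(z) * z

def sgn(z):
    """Sign function: sgn(z) = H(z) - H(-z)."""
    return H(z) - H(-z)
```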
\textit{Revisiting known protocols. ---}
Our methods are inspired by the best previously known protocol to simulate general entangled qubit pairs, the so-called ``classical teleportation'' protocol \cite{cerf2000, tonerbacon2003}. To understand the idea, we first rewrite the quantum probabilities in Eq.~\eqref{prob} by using the rule of conditional probabilities $p(a,b|\vec{x},\vec{y})=p(a|\vec{x},\vec{y})\cdot p(b|\vec{x},\vec{y},a)$. More precisely, we denote by $p_\pm:=
\sum_{b} p(a=\pm 1, b|\vec{x},\vec{y})$ the marginal probabilities of Alice's output that read as follows:
\begin{align}
p_\pm =\Tr[\ketbra*{\pm\vec{x}}{\pm\vec{x}}\otimes \mathds{1}\ \ketbra*{\Psi_{AB}}{\Psi_{AB}}]\, . \label{pplusminus}
\end{align}
Note that, due to non-signalling, the marginals $p_\pm$ do not depend on $\vec{y}$. At the same time, given Alice's outcome $a=\pm 1$, Bob's qubit collapses into a pure post-measurement state, that we denote here as:
\begin{align}
\ketbra*{\vec{v}_\pm}&:=\Tr_A[\ketbra*{\pm \vec{x}}\otimes \mathds{1}\ \ketbra*{\Psi_{AB}}]/p_\pm \, . \label{vplusminus}
\end{align}
If Bob now measures his qubit with the projectors $\ketbra{\pm \vec{y}}$, he outputs $b$ according to Born's rule: \begin{align}
p(b|\vec{x},\vec{y},a)=\Tr [\ketbra{b \vec{y}} \ketbra{\vec{v}_a}]= |\braket{\vec{v}_a}{b\vec{y}}|^2 \, .
\end{align}
With the introduced notation, we can rewrite the quantum probabilities from Eq.~\eqref{prob} into:
\begin{align}
p_Q(a,b|\vec{x},\vec{y})=p_{a}\cdot |\braket{\vec{v}_a}{b\vec{y}}|^2 \, . \label{rewrite}
\end{align}
This directly implies a strategy to simulate entangled qubit pairs. Alice outputs $a=\pm 1$ according to her marginals $p_\pm$. Then, given her outcome $a$, she prepares a qubit in the correct post-measurement state $\ketbra{\vec{v}_a}$ and sends it to Bob. Finally, he measures the qubit with his projectors $\ketbra{\pm \vec{y}}$.
However, in a classical simulation, Alice cannot send a physical qubit to Bob. Nevertheless, it is possible to simulate a qubit in that prepare-and-measure (PM) scenario with only two classical bits of communication \cite{tonerbacon2003}. In order to do so, Alice and Bob share four normalized three-dimensional vectors $\vec{\lambda}_1,\vec{\lambda}_2, \vec{\lambda}_3, \vec{\lambda}_4\in S_2$. The first two $\vec{\lambda}_1$ and $\vec{\lambda}_2$ are uniformly and independently distributed on the sphere, whereas $\vec{\lambda}_3=-\vec{\lambda}_1$ and $\vec{\lambda}_4=-\vec{\lambda}_2$. From these four vectors, Alice chooses the one that maximizes $\vec{\lambda}_i\cdot \vec{v}_a$ and communicates the result to Bob. This requires a message with four different symbols ($d=4$), hence, two bits of communication. It turns out that the distribution of the chosen vector becomes $\Theta(\vec{v}_a \cdot \vec{\lambda})/\pi$ (see Appendix~\ref{appendixb} for an independent proof). Finally, Bob takes the chosen vector $\vec{\lambda}$ and outputs $b=\sgn(\vec{y}\cdot \vec{\lambda})$. It was realized several times in the literature \cite{gisingisin1999, cerf2000, degorre2005} that such a distribution $\Theta(\vec{v}_a \cdot \vec{\lambda})/\pi$ serves as a classical description of the state $\ketbra{\vec{v}_a}$. This is specified by the following Lemma (see Appendix~\ref{prooflemma} for a proof):
\begin{lemma}\label{lemma1}
Bob receives a vector $\vec{\lambda}\in S_2$ distributed as $\rho(\vec{\lambda})=\Theta(\vec{v}\cdot \vec{\lambda})/\pi$ and outputs $b=\sgn(\vec{y}\cdot \vec{\lambda})$. For every qubit state $\vec{v}\in S_2$ and measurement $\vec{y}\in S_2$ this reproduces quantum correlations:
\begin{align}
p(b=\pm 1|\vec{y},\vec{v})&=(1 \pm \vec{y}\cdot \vec{v})/2=|\braket{\pm \vec{y}}{\vec{v}}|^2\, .
\end{align}
\end{lemma}
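The selection step among the four shared vectors, combined with Lemma~\ref{lemma1}, can be checked by Monte Carlo simulation (a sketch; the function name and the sample size are ours, and the frequency of $b=+1$ should approach $(1+\vec{y}\cdot\vec{v})/2$):

```python
import numpy as np

rng = np.random.default_rng(0)

def freq_b_plus(v, y, n=200_000):
    """Among lambda1, lambda2, -lambda1, -lambda2 (lambda1, lambda2
    uniform on the sphere), pick the vector maximizing lambda . v;
    Bob outputs b = sgn(y . lambda).  Returns the frequency of b = +1."""
    l1 = rng.normal(size=(n, 3))
    l1 /= np.linalg.norm(l1, axis=1, keepdims=True)
    l2 = rng.normal(size=(n, 3))
    l2 /= np.linalg.norm(l2, axis=1, keepdims=True)
    cand = np.stack([l1, l2, -l1, -l2], axis=1)        # shape (n, 4, 3)
    best = cand[np.arange(n), np.argmax(cand @ np.asarray(v), axis=1)]
    return float(np.mean(best @ np.asarray(y) >= 0))
```

In agreement with the claim that the chosen vector is distributed as $\Theta(\vec{v}\cdot \vec{\lambda})/\pi$, the estimated frequencies match $(1+\vec{y}\cdot\vec{v})/2$ up to statistical noise.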
We remark that simulating a qubit in a PM scenario requires at least two bits of communication~\cite{Renner2022}, which implies that this approach~\cite{cerf2000, tonerbacon2003} cannot be improved directly.
In this work, however, we introduce a method that circumvents this constraint.
\textit{Our approach. ---}
The goal for Alice is still to prepare the distribution ${\rho(\vec{\lambda})=\Theta(\vec{v}_a\cdot \vec{\lambda})/\pi}$ for Bob. The improvement here comes from the way this is achieved. In the previous approach, Alice chooses her output first (according to the probabilities $p_\pm$) and then samples the corresponding distribution $\Theta(\vec{\lambda}\cdot \vec{v}_\pm)/\pi$. In our approach, Alice first samples the weighted sum $p_+\ \Theta(\vec{\lambda}\cdot \vec{v}_+)/\pi + p_-\ \Theta(\vec{\lambda}\cdot \vec{v}_-)/\pi$ of these two distributions. Afterwards (Step 3), she chooses her output $a=\pm 1$ in such a way that, conditioned on her output $a$, the resulting distribution of $\vec{\lambda}$ becomes exactly $\Theta(\vec{v}_a \cdot \vec{\lambda})/\pi$. At the same time, the weights $p_\pm$ ensure that Alice outputs according to the correct marginals. More formally, all our simulation protocols fit into the following general framework:
\setcounter{protocol}{-1}
\begin{protocol}
General framework: \label{protocolgenfram}
\begin{enumerate}
\item Alice chooses her basis $\vec{x}$ and calculates $p_\pm , \vec{v}_\pm$.
\item Alice and Bob share two (or three) vectors $\vec{\lambda}_i \in S_2$ according to a certain distribution (specified later). Alice informs Bob to choose one of these vectors such that the resulting distribution of the chosen vector $\vec{\lambda}$ becomes:
\begin{align}
\rho_{\vec{x}}(\vec{\lambda}):= p_+\ \Theta(\vec{v}_+ \cdot \vec{\lambda})/\pi +p_-\ \Theta(\vec{v}_- \cdot \vec{\lambda})/\pi \, . \label{defrho}
\end{align}
\item Given that $\vec{\lambda}$, Alice outputs $a=\pm 1$ with probability
\begin{align}
p_A(a|\vec{x},\vec{\lambda})=\frac{p_a\ \Theta(\vec{\lambda}\cdot \vec{v}_a)/\pi}{\rho_{\vec{x}}(\vec{\lambda})} \, . \label{aliceoutput}
\end{align}
\item Bob chooses his basis $\vec{y}$ and outputs $b=\sgn(\vec{y}\cdot \vec{\lambda})$.
\end{enumerate}
\end{protocol}
\begin{proof}
To see that this is sufficient to simulate the correct statistics, we first calculate the total probability that Alice outputs $a=\pm 1$ in Step 3:
\begin{align}
p_A(a|\vec{x})&=\int_{S_2} p_A(a|\vec{x},\vec{\lambda})\cdot \rho_{\vec{x}}(\vec{\lambda}) \, \mathrm{d}\vec{\lambda}\\
&=\int_{S_2} p_a\ \Theta(\vec{\lambda}\cdot \vec{v}_a)/\pi \, \mathrm{d}\vec{\lambda} =p_a \, . \label{marginals}
\end{align}
Now we can show that, given Alice outputs $a=\pm 1$, the conditional distribution of the resulting vector $\vec{\lambda}$ is:
\begin{align}
\rho_{\vec{x}}(\vec{\lambda}|a)=\frac{p_A(a|\vec{x},\vec{\lambda})\cdot \rho_{\vec{x}}(\vec{\lambda})}{p_A(a|\vec{x})}=\frac{1}{\pi}\ \Theta(\vec{\lambda}\cdot \vec{v}_a) \, .
\end{align}
As in the previous approach, Lemma~\ref{lemma1} ensures that Bob outputs $b$ in Step 4 according to $p(b|\vec{x},\vec{y},a)= |\braket{\vec{v}_a}{b\vec{y}}|^2$. All together, the total probability of this procedure becomes $p_C(a,b|\vec{x},\vec{y})=p_a\cdot p(b|\vec{x},\vec{y},a)=p_a\cdot |\braket{\vec{v}_a}{b\vec{y}}|^2$. Again, this equals $p_Q(a,b|\vec{x},\vec{y})$ as given in Eq.~\eqref{rewrite}.
\end{proof}
Hence, the amount of communication needed to simulate a qubit pair reduces to finding an efficient way to sample the distributions $\rho_{\vec{x}}(\vec{\lambda})$. Clearly, the ability to sample each term $\Theta(\vec{\lambda}\cdot \vec{v}_\pm)/\pi$ individually implies the possibility to sample the weighted sum of these two terms $\rho_{\vec{x}}(\vec{\lambda})$. However, in general this is not necessary, and we find more efficient ways to do so. The improvement comes from the fact that the two post-measurement states are not independent of each other but satisfy the following relation:
\begin{align}
p_+\ketbra*{\vec{v}_+}+p_-\ketbra*{\vec{v}_-}=\Tr_A[\ketbra*{\Psi_{AB}}] \, .
\end{align}
This follows directly from Eq.~\eqref{vplusminus} and $\ketbra*{+\vec{x}}+\ketbra*{-\vec{x}}=\mathds{1}$. In the Bloch vector representation, this equation becomes:
\begin{align}
p_+\ \vec{v}_++p_-\ \vec{v}_-=(2p-1)\ \vec{z} \, , \label{nonsignalling}
\end{align}
where we define $\vec{z}:=(0,0,1)^T$. For instance, if the state is local ($p=1$), the two post-measurement states are always $\vec{v}_\pm=\vec{z}$, independently of Alice's measurement $\vec{x}$. In that case, the distributions $\rho_{\vec{x}}(\vec{\lambda})\equiv \Theta(\vec{\lambda}\cdot \vec{z})/\pi$ in Eq.~\eqref{defrho} are independent of $\vec{x}$ and do not require any communication to be implemented. If the state is weakly entangled ($p\lesssim 1$), one post-measurement state $\vec{v}_a$ is still very close to the vector $\vec{z}$. In this way, it turns out that, for every $\vec{x}$, the distribution $\rho_{\vec{x}}(\vec{\lambda})$ is dominated by a constant part proportional to $\Theta(\vec{\lambda}\cdot \vec{z})/\pi$. More formally, we can define
\begin{align}
\tilde{\rho}_{\vec{x}}(\vec{\lambda}):=&\rho_{\vec{x}}(\vec{\lambda})-\frac{(2p-1)}{\pi}\ \Theta(\vec{\lambda}\cdot \vec{z}) \, ,
\end{align}
and in Appendix~\ref{rhotilde} we prove the following properties of this distribution; Fig.~\ref{fig2} in the same appendix illustrates the behaviour of the most relevant distributions used in our protocols.
\begin{lemma}\label{lemma2}
The distribution $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$ defined above is positive, $\tilde{\rho}_{\vec{x}}(\vec{\lambda})\geq 0$ and sub-normalized, $\int_{S_2} \tilde{\rho}_{\vec{x}}(\vec{\lambda}) \ \mathrm{d}\vec{\lambda}=2(1-p)$. Additionally, it respects the two upper bounds, $\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \frac{\sqrt{p(1-p)}}{\pi}$ and $\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \frac{p_\pm}{\pi} |\vec{\lambda}\cdot \vec{v}_\pm|$.
\end{lemma}
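Both Eq.~\eqref{nonsignalling} and the statements of Lemma~\ref{lemma2} lend themselves to a direct numerical cross-check. The following Python sketch is our own illustration and not part of the derivation; it assumes the Schmidt form $\ket{\Psi_{AB}}=\sqrt{p}\,\ket{00}+\sqrt{1-p}\,\ket{11}$ (consistent with the reduced Bloch vector $(2p-1)\vec{z}$ used above), computes $p_\pm$ and $\vec{v}_\pm$ from it, and tests positivity, the total area, and the upper bounds on a uniform sample of the sphere.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
z = np.array([0.0, 0.0, 1.0])

def bloch_ket(n):
    """Qubit state with Bloch vector n: eigenvector of n.sigma for eigenvalue +1."""
    w, vec = np.linalg.eigh(n[0]*X + n[1]*Y + n[2]*Z)
    return vec[:, np.argmax(w)]

p = 0.9                                   # assumed Schmidt coefficient of the state
x = np.array([0.6, 0.0, 0.8])             # some measurement direction for Alice
psi = np.zeros(4, dtype=complex)
psi[0], psi[3] = np.sqrt(p), np.sqrt(1 - p)

pa, va = {}, {}
for a in (+1, -1):
    phi = bloch_ket(a*x).conj() @ psi.reshape(2, 2)   # Bob's unnormalized conditional state
    pa[a] = np.vdot(phi, phi).real                    # outcome probability p_a
    va[a] = np.array([np.vdot(phi, M @ phi).real for M in (X, Y, Z)]) / pa[a]

# Eq. (nonsignalling): p_+ v_+ + p_- v_- = (2p-1) z
assert np.allclose(pa[1]*va[1] + pa[-1]*va[-1], (2*p - 1)*z, atol=1e-10)

# Monte-Carlo check of Lemma 2 on uniformly sampled points of the sphere
Theta = lambda t: np.maximum(t, 0.0)
lams = rng.normal(size=(200_000, 3))
lams /= np.linalg.norm(lams, axis=1, keepdims=True)
rho_t = (pa[1]*Theta(lams @ va[1]) + pa[-1]*Theta(lams @ va[-1])
         - (2*p - 1)*Theta(lams @ z)) / np.pi
assert rho_t.min() >= -1e-12                                      # positivity
assert abs(4*np.pi*rho_t.mean() - 2*(1 - p)) < 0.01               # area 2(1-p)
assert rho_t.max() <= np.sqrt(p*(1 - p))/np.pi + 1e-12            # 3rd bound
assert np.all(rho_t <= pa[1]*np.abs(lams @ va[1])/np.pi + 1e-12)  # 1st bound
```

The area is estimated as $4\pi$ times the uniform average over the sphere, which converges to the exact value $2(1-p)$.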
\textit{Single bit protocol for weakly entangled states. ---}
In particular, when the state is weakly entangled, the extra term $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$ remains small. This allows us to find the
following protocol for the range $p\geq 1/2+\sqrt{3}/4 \approx 0.933$.
\begin{protocol}[$1/2+\sqrt{3}/4\leq p\leq 1$, 1 bit] \label{protocol1}
Same as Protocol~\ref{protocolgenfram} with the following Step 2:\\
Alice and Bob share two normalized three-dimensional vectors $\vec{\lambda}_1, \vec{\lambda}_2 \in S_2$ according to the distribution:
\begin{align}
\rho(\vec{\lambda}_1)=\frac{1}{4\pi}\, ,&&
\rho(\vec{\lambda}_2)=\frac{1}{\pi}\ \Theta(\vec{\lambda}_2 \cdot \vec{z}) \, .
\end{align}
Alice sets $c=1$ with probability:
\begin{align}
\begin{split}
p_A(c=1|\vec{x},\vec{\lambda}_1)=(4\pi)\cdot \tilde{\rho}_{\vec{x}} (\vec{\lambda}_1)
\end{split}
\end{align}
and otherwise she sets $c=2$. She communicates the bit $c$ to Bob. Both set $\vec{\lambda}:=\vec{\lambda}_c$ and reject the other vector.
\end{protocol}
\begin{proof}
Whenever Alice chooses the first vector, the resulting distribution of the chosen vector is precisely $p(c=1|\vec{x},\vec{\lambda}_1)\cdot \rho(\vec{\lambda}_1)=\tilde{\rho}_{\vec{x}}(\vec{\lambda}_1)$. The total probability that she chooses the first vector is $\int_{S_2} p(c=1|\vec{x},\vec{\lambda}_1)\cdot \rho(\vec{\lambda}_1) \, \mathrm{d}\vec{\lambda}_1=\int_{S_2} \tilde{\rho}_{\vec{x}}(\vec{\lambda}_1) \, \mathrm{d}\vec{\lambda}_1 = 2(1-p)$ (see Lemma~\ref{lemma2}). In all the remaining cases, with total probability $2p-1$, she chooses vector $\vec{\lambda}_2$, distributed as $\Theta(\vec{\lambda}_2 \cdot \vec{z})/\pi$. Therefore, the total distribution of the chosen vector $\vec{\lambda}:=\vec{\lambda}_c$ becomes the desired distribution
\begin{align}
\tilde{\rho}_{\vec{x}}(\vec{\lambda})+\frac{(2p-1)}{\pi}\ \Theta(\vec{\lambda}\cdot \vec{z})=\rho_{\vec{x}}(\vec{\lambda}) \, .
\end{align}
In order for the protocol to be well defined, it has to hold that $0\leq p(c=1|\vec{x},\vec{\lambda}_1)\leq 1$, hence $0\leq \tilde{\rho}_{\vec{x}} (\vec{\lambda}_1)\leq 1/(4\pi)$. As a consequence of Lemma~\ref{lemma2}, this is true whenever $1/2+\sqrt{3}/4 \leq p\leq 1$.
\end{proof}
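To illustrate the proof above, Protocol~\ref{protocol1} can be simulated end-to-end and compared against the quantum prediction $p_Q(a,b|\vec{x},\vec{y})=p_a\,(1+b\,\vec{y}\cdot\vec{v}_a)/2$. The Python sketch below is our own illustration; it assumes the Schmidt form $\ket{\Psi_{AB}}=\sqrt{p}\,\ket{00}+\sqrt{1-p}\,\ket{11}$, and all helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
Theta = lambda t: max(t, 0.0)
z = np.array([0.0, 0.0, 1.0])

def bloch_ket(n):
    """Qubit state with Bloch vector n (eigenvector of n.sigma for eigenvalue +1)."""
    w, vec = np.linalg.eigh(n[0]*X + n[1]*Y + n[2]*Z)
    return vec[:, np.argmax(w)]

def post_measurement(p, x):
    """p_a and Bob's conditional Bloch vector v_a for |Psi> = sqrt(p)|00> + sqrt(1-p)|11>."""
    psi = np.zeros(4, dtype=complex)
    psi[0], psi[3] = np.sqrt(p), np.sqrt(1 - p)
    res = {}
    for a in (+1, -1):
        phi = bloch_ket(a*x).conj() @ psi.reshape(2, 2)
        prob = np.vdot(phi, phi).real
        res[a] = (prob, np.array([np.vdot(phi, M @ phi).real for M in (X, Y, Z)]) / prob)
    return res

def sample_uniform():
    lam = rng.normal(size=3)
    return lam / np.linalg.norm(lam)

def sample_cosine(v):
    """lambda ~ Theta(lambda.v)/pi: cosine-weighted hemisphere around v (cos(theta)=sqrt(u))."""
    u1, u2 = rng.random(2)
    ct, st, ph = np.sqrt(u1), np.sqrt(1 - u1), 2*np.pi*u2
    e1 = np.cross(v, [1.0, 0.0, 0.0] if abs(v[0]) < 0.9 else [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(v, e1)
    return st*np.cos(ph)*e1 + st*np.sin(ph)*e2 + ct*v

p, x, y = 0.95, np.array([0.8, 0.0, 0.6]), np.array([0.0, 0.6, -0.8])
pm = post_measurement(p, x)
(pp, vp), (pmin, vm) = pm[+1], pm[-1]
C = 2*p - 1
counts = {(a, b): 0 for a in (1, -1) for b in (1, -1)}
rounds = 100_000
for _ in range(rounds):
    lam1, lam2 = sample_uniform(), sample_cosine(z)
    rho_t = (pp*Theta(lam1 @ vp) + pmin*Theta(lam1 @ vm) - C*Theta(lam1 @ z)) / np.pi
    lam = lam1 if rng.random() < 4*np.pi*rho_t else lam2           # the communicated bit c
    rho = (pp*Theta(lam @ vp) + pmin*Theta(lam @ vm)) / np.pi      # rho_x(lam)
    a = 1 if rng.random() < pp*Theta(lam @ vp)/np.pi/rho else -1   # Alice's output
    b = 1 if y @ lam >= 0 else -1                                  # Bob's output
    counts[(a, b)] += 1

for a in (1, -1):
    prob_a, v_a = pm[a]
    for b in (1, -1):
        assert abs(counts[(a, b)]/rounds - prob_a*(1 + b*(y @ v_a))/2) < 0.012
```

Since $p=0.95\geq 1/2+\sqrt{3}/4$, the acceptance probability $4\pi\tilde{\rho}_{\vec{x}}(\vec{\lambda}_1)$ stays below one, and the empirical statistics agree with the quantum prediction up to Monte-Carlo noise.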
\begin{figure}
\caption{Length of the classical message $d$ required to simulate a qubit pair $\ket{\Psi_{AB}}$.}
\label{figcomparison}
\end{figure}
In Appendix~\ref{improvedprot}, we show how to improve Protocol~\ref{protocol1} to simulate every weakly entangled state with $0.835 \leq p \leq 1$ by communicating only a single bit. Moreover, it turns out that the bit is not necessary in every round. More precisely, Alice sends the bit in only a fraction $N(p)$ of the rounds, where
\begin{align}
N(p):=\frac{2p(1-p)}{2p-1} \log{\left(\frac{p}{1-p}\right)}+2(1-p) \, .
\end{align}
In the remaining rounds, with probability $1-N(p)$, they do not communicate at all. It is known that simulating a maximally entangled state without communication in some fraction of the rounds is impossible, since this would contradict the fact that the singlet has no local part \cite{elitzur1992, barrett2006}. Hence, our result shows that simulating weakly entangled states requires strictly less communication resources than simulating a maximally entangled one.
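The displayed formula leaves the base of the logarithm implicit; the natural logarithm is the choice for which $N(p)\approx 1$ exactly at the threshold $p\approx 0.835$, so we adopt it in the following small numerical check (our own illustration, not part of the paper):

```python
import numpy as np

def N(p):
    """Fraction of rounds in which Alice sends the bit (natural logarithm assumed)."""
    return 2*p*(1 - p)/(2*p - 1) * np.log(p/(1 - p)) + 2*(1 - p)

assert abs(N(0.835) - 1.0) < 0.01   # at the threshold, the bit is needed in (almost) every round
assert N(0.999) < 0.02              # near a product state, communication is almost never needed
assert N(0.95) < N(0.9) < N(0.85)   # less entanglement, less communication
```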
Independently of this, we can also use our approach to quantify the local content of any pure entangled two-qubit state, which provides an independent proof of the result by Portmann et al.~\cite{Portmann2012localpart} (see Appendix~\ref{applocalcontent}).
\textit{Trit protocol for arbitrary entangled pairs. ---}
It is worth mentioning that we also recover a one-bit protocol for simulating the maximally entangled state ($p=1/2$) in our framework. In that case, another geometric argument allows the distributions $\rho_{\vec{x}}(\vec{\lambda})$ to be sampled efficiently. More precisely, the two post-measurement states are always opposite to each other, $\vec{v}_-=-\vec{v}_+$, and it holds that $p_+=p_-=1/2$. In this way, the distribution $\rho_{\vec{x}}(\vec{\lambda})$ turns out to be $\rho_{\vec{x}}(\vec{\lambda})=|\vec{\lambda}\cdot \vec{v}_+|/(2\pi)$. It was already observed by Degorre et al.~\cite{degorre2005} that this distribution can be sampled by communicating only a single bit (see Appendix~\ref{appendixb} for details and an independent proof).
Here, we connect this with the techniques developed for Protocol~\ref{protocol1} to present a protocol that simulates all entangled qubit pairs by communicating a classical trit.
\begin{protocol}[$1/2\leq p\leq 1$, 1 trit]
Same as Protocol~\ref{protocolgenfram} with the following Step 2:\\
Alice and Bob share three normalized three-dimensional vectors $\vec{\lambda}_1,\vec{\lambda}_2,\vec{\lambda}_3 \in S_2$ according to the following distribution:
\begin{align}
\rho(\vec{\lambda}_1)=\frac{1}{4\pi}\, ,&&
\rho(\vec{\lambda}_2)=\frac{1}{4\pi}\, ,&&
\rho(\vec{\lambda}_3)=\frac{1}{\pi}\ \Theta(\vec{\lambda}_3 \cdot \vec{z})\, .
\end{align}
If $p_+\leq 0.5$ she sets $\vec{v}:=\vec{v}_+$, otherwise she sets $\vec{v}:=\vec{v}_-$. Afterwards, she sets $c=1$ if $|\vec{v}\cdot \vec{\lambda}_1|\geq |\vec{v}\cdot \vec{\lambda}_2|$ and $c=2$ otherwise. Finally, with probability
\begin{align}
\begin{split}
p(t=c|\vec{x},\vec{\lambda}_c)=\frac{\tilde{\rho}_{\vec{x}}(\vec{\lambda}_c)}{\frac{1}{2\pi} |\vec{\lambda}_c \cdot \vec{v}|}
\end{split}
\end{align}
she sets $t=c$ and otherwise, she sets $t=3$. She communicates the trit $t$ to Bob. Both set $\vec{\lambda}:=\vec{\lambda}_t$ and reject the other two vectors.
\end{protocol}
While we give the formal proof in Appendix~\ref{prooftrit}, the idea is the following. As a result of Ref.~\cite{degorre2005}, the distribution of the vector $\vec{\lambda}_c$ is precisely $\rho(\vec{\lambda}_c)=\frac{1}{2\pi}|\vec{v}\cdot \vec{\lambda}_c|$. Now, the function $p(t=c|\vec{x},\vec{\lambda}_c)$ is constructed such that Alice samples $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$ whenever she sets $t=c$ and a term proportional to $\Theta(\vec{\lambda} \cdot \vec{z})/\pi$ whenever she sets $t=3$. Summing both terms together, the chosen vector $\vec{\lambda}:=\vec{\lambda}_t$ has the required distribution $\rho_{\vec{x}}(\vec{\lambda})$. In the special case of a maximally entangled state ($p=1/2$), Alice never chooses $\vec{\lambda}_3$ and only needs to send the bit $c$.
The main open question now is whether a single bit is sufficient to simulate every entangled qubit pair, see Fig.~\ref{figcomparison}.
We remark that our framework is in principle capable of providing such a model. The challenge becomes to find, for each qubit pair, a distribution of two shared random vectors,
such that Alice can sample $\rho_{\vec{x}}(\vec{\lambda})$ for every measurement basis $\vec{x}$.
In all protocols considered here, the shared vectors are independent of each other, i.e., $\rho(\vec{\lambda}_1,\vec{\lambda}_2)=\rho(\vec{\lambda}_1)\cdot \rho(\vec{\lambda}_2)$. Dropping this constraint may be a way to extend our approach.
\textit{Discussion. ---}
To conclude, we showed that a classical trit is enough for simulating the outcomes of local projective measurements on any entangled qubit pair. For weakly entangled states, we proved that already a single bit is sufficient. In the latter case, Alice does not need to send the bit in all the rounds, which is impossible for a maximally entangled state \cite{elitzur1992, barrett2006}. In this way, we show that simulating weakly entangled states is strictly simpler than simulating maximally entangled ones, solving a longstanding question \cite{Gisinreview2018,Brassard2003, Brunner_2014}.
\begin{acknowledgments}
We thank \v{C}aslav Brukner, Valerio Scarani, Peter Sidajaya, Armin Tavakoli, Isadora Veeren, and Bai Chu Yu for fruitful discussions and inspiration. Furthermore, we thank Nicolas Brunner and Nicolas Gisin for pointing out Ref.~\cite{Portmann2012localpart} to us.
M.J.R. and M.T.Q. acknowledge financial support from the Austrian Science Fund (FWF) through BeyondC (F7103-N38), the Project No. I-2906, as well as support by the John Templeton Foundation through Grant 61466, The Quantum Information Structure of Spacetime (qiss.fr), the Foundational Questions Institute (FQXi) and the research platform TURIS. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801110. It reflects only the authors' view, the EU Agency is not responsible for any use that may be made of the information it contains. ESQ has received funding from the Austrian Federal Ministry of Education, Science and Research (BMBWF).
\end{acknowledgments}
\nocite{apsrev42Control}
\onecolumngrid
\appendix
\section{Proof of Lemma~\ref{lemma1}} \label{prooflemma}
\setcounter{lemma}{0}
\begin{lemma}
Bob receives a vector $\vec{\lambda}\in S_2$ distributed as $\rho(\vec{\lambda})=\Theta(\vec{v}\cdot \vec{\lambda})/\pi$ and outputs $b=\sgn(\vec{y}\cdot \vec{\lambda})$. For every qubit state $\vec{v}\in S_2$ and measurement $\vec{y}\in S_2$ this reproduces quantum correlations:
\begin{align}
p(b=\pm 1|\vec{y},\vec{v})&=(1 \pm \vec{y}\cdot \vec{v})/2=|\braket{\pm \vec{y}}{\vec{v}}|^2\, .
\end{align}
\end{lemma}
\begin{proof}
Bob outputs $b=+1$ if and only if $\vec{y}\cdot \vec{\lambda}\geq 0$. Therefore, the total probability that Bob outputs $b=+1$ becomes:
\begin{align}
p(b=+1| \vec{y}, \vec{v})=\int_{S_2} H(\vec{y}\cdot \vec{\lambda})\cdot \ \rho(\vec{\lambda})\ \mathrm{d}\vec{\lambda}=\frac{1}{\pi}\int_{S_2} H(\vec{y}\cdot \vec{\lambda})\cdot \ \Theta(\vec{v}\cdot \vec{\lambda})\ \mathrm{d}\vec{\lambda} \, .
\end{align}
Here, $H(z)$ is the Heaviside function ($H(z)=1$ if $z\geq 0$ and $H(z)=0$ if $z< 0$) and $\Theta(z):=H(z)\cdot z$. The evaluation of the exact same integral is done in Ref.~\cite{Renner2022} (Lemma~1) and in similar forms also in Ref.~\cite{gisingisin1999, cerf2000, degorre2005}. For the sake of completeness, we restate the same proof as in Ref.~\cite{Renner2022} here:\\
"Note that both functions in the integral $H(\vec{y}\cdot \vec{\lambda})$ and $\Theta(\vec{v}\cdot \vec{\lambda})$ have support in only one half of the total sphere (the hemisphere centred around $\vec{y}$ and $\vec{v}$, respectively). For example, if $\vec{v}=-\vec{y}$ these two hemispheres are exactly opposite of each other and the integral becomes zero. For all other cases, we can observe that the value of the integral depends only on the angle between $\vec{y}$ and $\vec{v}$, because the whole expression is spherically symmetric. Therefore, it is enough to evaluate the integral for $\vec{y}=(0,1,0)^T$ and $\vec{v}=(-\sin{\beta},\cos{\beta},0)^T$, where we can choose without loss of generality $0\leq\beta\leq \pi$. Furthermore, we can use spherical coordinates for $\vec{\lambda}=(\sin{\theta}\cdot \cos{\phi}, \sin{\theta}\cdot \sin{\phi}, \cos{\theta})$ (note that $\vert\vec{\lambda}\vert =1$). With this choice of coordinates, the region in which both factors have non-zero support becomes exactly $\beta\leq \phi \leq \pi$ (at the same time, $\theta$ is unrestricted, $0 \leq \theta \leq \pi$). More precisely, $0\leq \phi \leq \pi$ is the support for $H(\vec{y}\cdot \vec{\lambda})$ and $\beta\leq \phi \leq \pi + \beta$ is the support for $\Theta(\vec{v}\cdot \vec{\lambda})$. In this way, the integral becomes:
\begin{align}
\frac{1}{\pi}\int^{2\pi}_{0} \int^{\pi}_{0} H(\vec{y}\cdot \vec{\lambda})\cdot \ \Theta(\vec{v}\cdot \vec{\lambda}) \cdot \sin{\theta}\ \mathrm{d}\theta \ \mathrm{d}\phi =\frac{1}{\pi}\int^{\pi}_{\beta} \int^{\pi}_{0} \ \sin{(\phi-\beta)} \cdot \sin^2{\theta}\ \mathrm{d}\theta \ \mathrm{d}\phi =\frac{1}{2}(1+\cos{\beta})=\frac{1}{2}(1+\vec{y}\cdot \vec{v}) \, ."
\end{align}
Hence, $p(b=+1|\vec{y},\vec{v})=(1+\vec{y}\cdot \vec{v})/2$. Clearly, $p(b=-1| \vec{y}, \vec{v})=1-p(b=+1| \vec{y}, \vec{v})=(1-\vec{y}\cdot \vec{v})/2$.
\end{proof}
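The value of the integral above can also be cross-checked by uniform Monte-Carlo sampling of the sphere, writing $\frac{1}{\pi}\int_{S_2} H(\vec{y}\cdot\vec{\lambda})\,\Theta(\vec{v}\cdot\vec{\lambda})\,\mathrm{d}\vec{\lambda}=4\,\mathbb{E}_{\vec{\lambda}\sim\mathrm{unif}}\!\left[H(\vec{y}\cdot\vec{\lambda})\,\Theta(\vec{v}\cdot\vec{\lambda})\right]$. A short sketch of ours, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.8
y = np.array([0.0, 1.0, 0.0])
v = np.array([-np.sin(beta), np.cos(beta), 0.0])   # same parametrization as in the proof

# uniform points on the sphere via normalized Gaussians
lams = rng.normal(size=(400_000, 3))
lams /= np.linalg.norm(lams, axis=1, keepdims=True)

# (1/pi) Int H(y.lam) Theta(v.lam) dlam  =  4 * E_uniform[ H(y.lam) Theta(v.lam) ]
vals = (lams @ y >= 0) * np.maximum(lams @ v, 0.0)
est = 4.0 * vals.mean()
assert abs(est - (1 + y @ v) / 2) < 0.015
```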
We use several times in this work that $\int_{S_2} \Theta(\vec{\lambda}\cdot \vec{v})/\pi \ \mathrm{d}\vec{\lambda}=1$ for every normalized vector $\vec{v}\in S_2$. The proof follows by a calculation similar to the one in the lemma above:
\begin{align}
\frac{1}{\pi}\int_{S_2} \ \Theta(\vec{\lambda}\cdot \vec{v}) \ \mathrm{d}\vec{\lambda}=\frac{1}{\pi}\int_{S_2} \ H(\vec{\lambda}\cdot \vec{v})\cdot \Theta(\vec{\lambda}\cdot \vec{v}) \ \mathrm{d}\vec{\lambda}=\frac{1}{2}(1+\vec{v}\cdot \vec{v})=1 \, . \label{normalization}
\end{align}
The introduction of the Heaviside function in the second step clearly does not change the integral since $H(\vec{\lambda}\cdot \vec{v})$ has the same support as $\Theta(\vec{\lambda}\cdot \vec{v})$ or, more formally, $\Theta(\vec{\lambda}\cdot \vec{v}):=H(\vec{\lambda}\cdot \vec{v})\cdot (\vec{\lambda}\cdot \vec{v})=H(\vec{\lambda}\cdot \vec{v})^2\cdot (\vec{\lambda}\cdot \vec{v})=H(\vec{\lambda}\cdot \vec{v})\cdot \Theta(\vec{\lambda}\cdot \vec{v})$, where we used that $H(z)=H(z)^2$ for every $z\in \mathbb{R}$.
\section{Protocol for the maximally entangled qubit pair}\label{appendixb}
As mentioned in the main text, in the case of a maximally entangled state ($p=1/2$), a similar geometric argument allows one to sample the distributions $\rho_{\vec{x}}(\vec{\lambda})$ efficiently. More precisely, the two post-measurement states are always opposite to each other, $\vec{v}_-=-\vec{v}_+$, and it holds that $p_+=p_-=1/2$. Therefore, the distribution $\rho_{\vec{x}}(\vec{\lambda})$ has, for every choice of Alice's measurement $\vec{x}$, the form (note that $\Theta(z)+\Theta(-z)=|z|$ for every $z\in \mathbb{R}$)
\begin{align}
\rho_{\vec{x}}(\vec{\lambda})=\frac{1}{2\pi}\left(\Theta(\vec{\lambda}\cdot \vec{v}_+)+\ \Theta(\vec{\lambda}\cdot (-\vec{v}_+))\right)=\frac{1}{2\pi}|\vec{\lambda}\cdot \vec{v}_+| \, . \label{distribsinglet}
\end{align}
It was already observed by Degorre et al.~\cite{degorre2005} ("Theorem~6 (The “choice” method)"), that this distribution can be sampled by communicating only a single bit. This leads directly to a simulation of the maximally entangled state with one bit of communication. This is exactly the version of Degorre et al.~\cite{degorre2005} for the protocol of Toner and Bacon for the singlet (see also "Theorem 10 (Communication)" of Ref.~\cite{degorre2005}):
\begin{protocol}[$p=1/2$, 1 bit, from Ref.~\cite{degorre2005}] \label{protocoldegorre}
Same as Protocol~\ref{protocolgenfram} with the following Step 2:\\
Alice and Bob share two normalized three-dimensional vectors $\vec{\lambda}_1,\vec{\lambda}_2 \in \mathbb{R}^3$ that are independent and uniformly distributed on the unit sphere, $\rho(\vec{\lambda}_1)=\rho(\vec{\lambda}_2)=1/4\pi$. Alice sets $c=1$ if $|\vec{v}_+\cdot \vec{\lambda}_1|\geq |\vec{v}_+\cdot \vec{\lambda}_2|$ and $c=2$ otherwise. She communicates the bit $c$ to Bob and both set $\vec{\lambda}:=\vec{\lambda}_c$.
\end{protocol}
\begin{proof}
The proof can be found in Ref.~\cite{degorre2005} ("Theorem~6 (The “choice” method)"). However, we want to give an independent proof here. We can focus first on the case where $\vec{v}_+=\vec{z}$. All other cases are analogous due to the spherical symmetry of the problem. In that case, we can write $\vec{\lambda}_1$ and $\vec{\lambda}_2$ in spherical coordinates, $\vec{\lambda}_i=(\sin{\theta_i}\cdot \cos{\phi_i}, \sin{\theta_i}\cdot \sin{\phi_i}, \cos{\theta_i})$. In this notation, Alice picks $\vec{\lambda}_1$ if and only if $|\vec{\lambda}_1\cdot \vec{z}|=|\cos{\theta_1}|\geq |\vec{\lambda}_2\cdot \vec{z}|=|\cos{\theta_2}|$. For a given $\vec{\lambda}_1$, this happens with probability
\begin{align}
\int_{S_2}\ H(|\vec{\lambda}_1\cdot \vec{z}|-|\vec{\lambda}_2\cdot \vec{z}|)\cdot \rho(\vec{\lambda}_2) \ \mathrm{d}\vec{\lambda}_2=&\frac{1}{4\pi}\ \int^{2\pi}_{0} \int^{\pi}_{0} H(|\cos{\theta_1}|-|\cos{\theta_2}|) \cdot \sin{\theta_2}\ \mathrm{d}\theta_2 \ \mathrm{d}\phi_2\\
=&\frac{1}{2}\ \int^{\pi}_{0} H(|\cos{\theta_1}|-|\cos{\theta_2}|) \cdot \sin{\theta_2}\ \mathrm{d}\theta_2
\end{align}
If $0\leq \theta_1\leq \pi/2$ and hence $\cos{\theta_1}\geq 0$, the region where $H(|\cos{\theta_1}|-|\cos{\theta_2}|)=1$ becomes exactly $\theta_1\leq \theta_2 \leq \pi- \theta_1$ and hence the above integral becomes:
\begin{align}
\frac{1}{2}\ \int^{\pi-\theta_1}_{\theta_1} \sin{\theta_2}\ \mathrm{d}\theta_2
=\frac{1}{2}\ \left[ -\cos{\theta_2} \right]^{\pi-\theta_1}_{\theta_1}=\left( -\cos{(\pi-\theta_1)}+\cos{(\theta_1)} \right)/2=\cos{\theta_1}=|\vec{z}\cdot \vec{\lambda}_1|
\end{align}
For $\pi/2\leq \theta_1\leq \pi$ and hence $\cos{\theta_1}\leq 0$, a similar calculation leads to $-\cos{\theta_1}=|\cos{\theta_1}|=|\vec{z}\cdot \vec{\lambda}_1|$. (One can also observe that the integral depends only on $|\cos{\theta_1}|$, which leads to the same statement.) Hence, whenever Alice chooses $\vec{\lambda}:=\vec{\lambda}_1$, the distribution of that vector becomes $\rho(\vec{\lambda}_1)\cdot|\vec{z}\cdot \vec{\lambda}_1|=|\vec{z}\cdot \vec{\lambda}_1|/(4\pi)$. Analogously, whenever Alice chooses $\vec{\lambda}:=\vec{\lambda}_2$, the distribution of that chosen vector is again $|\vec{z}\cdot \vec{\lambda}_2|/(4\pi)$, due to the symmetric roles of $\vec{\lambda}_1$ and $\vec{\lambda}_2$. Hence, the distribution of the chosen vector $\vec{\lambda}$ becomes, in total, the sum of these two terms, $\rho(\vec{\lambda})=|\vec{z}\cdot \vec{\lambda}|/(2\pi)$. For a general vector $\vec{v}_+$, the analogous expression $\rho(\vec{\lambda})=|\vec{v}_+\cdot \vec{\lambda}|/(2\pi)$ holds, because of the spherical symmetry of the protocol.
\end{proof}
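The output distribution of the choice method can be checked numerically as well: under $\rho(\vec{\lambda})=|\vec{v}_+\cdot\vec{\lambda}|/(2\pi)$, the marginal of $c=\vec{v}_+\cdot\vec{\lambda}$ has density $|c|$ on $[-1,1]$, so $\mathbb{E}[|c|]=2/3$ and $\mathbb{E}[c^2]=1/2$. A short sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([0.36, 0.48, 0.8])                       # arbitrary unit vector v_+
n = 300_000
l1 = rng.normal(size=(n, 3)); l1 /= np.linalg.norm(l1, axis=1, keepdims=True)
l2 = rng.normal(size=(n, 3)); l2 /= np.linalg.norm(l2, axis=1, keepdims=True)
pick1 = np.abs(l1 @ v) >= np.abs(l2 @ v)              # Alice's bit c
lam = np.where(pick1[:, None], l1, l2)
c = lam @ v
assert abs(np.mean(np.abs(c)) - 2/3) < 0.005          # E|c| for density |c| on [-1,1]
assert abs(np.mean(c**2) - 1/2) < 0.005               # second moment
assert abs(np.mean(c)) < 0.01                         # symmetry of the density
```

The first assertion also has an elementary explanation: for uniform vectors, $|c_i|$ is uniform on $[0,1]$, and the maximum of two independent uniforms has mean $2/3$.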
We also want to remark here, that in the case of a maximally entangled state, Alice's response function in the third step Eq.~\eqref{aliceoutput} can be, due to Eq.~\eqref{distribsinglet}, rewritten into $p_A(a=\pm 1|\vec{x},\vec{\lambda})=H(\vec{\lambda}\cdot \vec{v}_+)$, or, equivalently, $a=\sgn(\vec{\lambda}\cdot \vec{v}_+)$.
\subsection{"Classical teleportation" protocol}
With this observation, we can also understand the classical teleportation protocol from the main text. To avoid confusion, this protocol is not of the form given in Protocol~\ref{protocolgenfram}:\\
\begin{protocol}\label{protocol4}
The following protocol simulates a qubit in a prepare-and-measure scenario:
\begin{enumerate}
\item Alice chooses the quantum state $\ketbra{v}=(\mathds{1}+\vec{v}\cdot \vec{\sigma})/2$ she wants to send.
\item Alice and Bob share two normalized three-dimensional vectors $\vec{\lambda}_1,\vec{\lambda}_2 \in \mathbb{R}^3$ that are independent and uniformly distributed on the unit sphere, $\rho(\vec{\lambda}_1)=\rho(\vec{\lambda}_2)=1/4\pi$. Alice sets $c_1=1$ if $|\vec{v}\cdot \vec{\lambda}_1|\geq |\vec{v}\cdot \vec{\lambda}_2|$ and $c_1=2$ otherwise. In addition, Alice defines a second bit $c_2=\sgn(\vec{\lambda}_{c_1}\cdot \vec{v})$. She communicates the two bits $c_1$ and $c_2$ to Bob and both set $\vec{\lambda}:=c_2\ \vec{\lambda}_{c_1}$.
\item Bob outputs $b=\sgn(\vec{y}\cdot \vec{\lambda})$.
\end{enumerate}
\end{protocol}
\begin{proof}
As a result of the above, the distribution of the vector $\vec{\lambda}_{c_1}$ is $\rho(\vec{\lambda}_{c_1})=|\vec{\lambda}_{c_1}\cdot \vec{v}|/(2\pi)$. When Bob defines $\vec{\lambda}:=c_2\ \vec{\lambda}_{c_1}$, he flips the vector $\vec{\lambda}_{c_1}$ exactly when $\vec{\lambda}_{c_1}\cdot \vec{v}< 0$. With this additional flip, he obtains the distribution:
\begin{align}
\frac{1}{2\pi}|\vec{\lambda}_{c_1}\cdot \vec{v}|=\frac{1}{2\pi}\left(\Theta(\vec{\lambda}_{c_1}\cdot \vec{v})+\ \Theta(\vec{\lambda}_{c_1}\cdot (-\vec{v}))\right)
\xlongrightarrow{\text{flip}}\frac{1}{2\pi}\left(\Theta(\vec{\lambda}\cdot \vec{v})+\ \Theta((-\vec{\lambda})\cdot (-\vec{v}))\right)=\frac{1}{\pi}\Theta(\vec{\lambda}\cdot \vec{v}) \, .
\end{align}
Therefore, the distribution of the vector $\vec{\lambda}$ becomes $\Theta(\vec{v} \cdot \vec{\lambda})/\pi$. In this way, Alice managed to send exactly a classical description of the state $\ketbra*{\vec{v}}$ to Bob. More precisely, Lemma~\ref{lemma1} ensures that Bob outputs according to $p(b=\pm 1|\vec{v},\vec{y})=(1\pm \vec{y}\cdot \vec{v})/2$, as required by quantum mechanics.
\end{proof}
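As an illustration (ours, not part of the paper), the two-bit teleportation protocol can be simulated directly; the empirical frequency of $b=+1$ should match $(1+\vec{y}\cdot\vec{v})/2$ for arbitrary unit vectors $\vec{v}$ and $\vec{y}$:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([0.0, 0.6, 0.8])        # Bloch vector of the state Alice wants to send
y = np.array([0.48, 0.6, 0.64])      # Bob's measurement direction (unit vector)
n = 200_000
l1 = rng.normal(size=(n, 3)); l1 /= np.linalg.norm(l1, axis=1, keepdims=True)
l2 = rng.normal(size=(n, 3)); l2 /= np.linalg.norm(l2, axis=1, keepdims=True)
c1 = np.abs(l1 @ v) >= np.abs(l2 @ v)                 # first bit: which vector
lam = np.where(c1[:, None], l1, l2)
c2 = np.where(lam @ v >= 0, 1.0, -1.0)                # second bit: possible flip
lam = c2[:, None] * lam                               # now lam ~ Theta(v.lam)/pi
b = np.where(lam @ y >= 0, 1, -1)                     # Bob's output
p_plus = np.mean(b == 1)
assert abs(p_plus - (1 + y @ v) / 2) < 0.01
```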
Note that, in the main text, the second step is formulated as follows: "Alice and Bob share four normalized three-dimensional vectors $\vec{\lambda}_1,\vec{\lambda}_2, \vec{\lambda}_3, \vec{\lambda}_4\in S_2$. The first two $\vec{\lambda}_1$ and $\vec{\lambda}_2$ are uniformly and independently distributed on the sphere, whereas $\vec{\lambda}_3=-\vec{\lambda}_1$ and $\vec{\lambda}_4=-\vec{\lambda}_2$. From these four vectors, Alice chooses the one that maximizes $\vec{\lambda}_i\cdot \vec{v}$ and communicates the result to Bob and both set $\vec{\lambda}:=\vec{\lambda}_i$."
This is just a reformulation of the second step in Protocol~\ref{protocol4} and both versions are equivalent. To see this, fix $\vec{\lambda}_1$ and $\vec{\lambda}_2$. If $|\vec{v}\cdot \vec{\lambda}_1|\geq |\vec{v}\cdot \vec{\lambda}_2|$ and $\vec{v}\cdot \vec{\lambda}_1\geq 0$, Alice will send $c_1=1$ and $c_2=+1$ and both set $\vec{\lambda}:=c_2\vec{\lambda}_{c_1}=\vec{\lambda}_1$ in step two of Protocol~\ref{protocol4}. In the reformulation, it turns out that the vector that maximizes $\vec{v}\cdot \vec{\lambda}_i$ is precisely $\vec{\lambda}_1$ since $|\vec{v}\cdot \vec{\lambda}_1|\geq |\vec{v}\cdot \vec{\lambda}_2|$ and $\vec{v}\cdot \vec{\lambda}_1\geq 0$ imply that $\vec{v}\cdot \vec{\lambda}_1\geq \vec{v}\cdot \vec{\lambda}_2$; $\vec{v}\cdot \vec{\lambda}_1\geq \vec{v}\cdot \vec{\lambda}_3=-\vec{v}\cdot \vec{\lambda}_1$ as well as $\vec{v}\cdot \vec{\lambda}_1\geq \vec{v}\cdot \vec{\lambda}_4=-\vec{v}\cdot \vec{\lambda}_2$. With similar arguments, one can check that, for a fixed $\vec{\lambda}_1$ and $\vec{\lambda}_2$, they always choose the same vector $\vec{\lambda}$ in both versions.
\section{Proof of the trit protocol}\label{prooftrit}
\setcounter{protocol}{1}
\begin{protocol}[$1/2\leq p\leq 1$, 1 trit]
Same as Protocol~\ref{protocolgenfram} with the following Step 2:\\
Alice and Bob share three normalized three-dimensional vectors $\vec{\lambda}_1,\vec{\lambda}_2,\vec{\lambda}_3 \in S_2$ according to the following distribution:
\begin{align}
\rho(\vec{\lambda}_1)=\frac{1}{4\pi}\, ,&&
\rho(\vec{\lambda}_2)=\frac{1}{4\pi}\, ,&&
\rho(\vec{\lambda}_3)=\frac{1}{\pi}\ \Theta(\vec{\lambda}_3 \cdot \vec{z})\, .
\end{align}
If $p_+\leq 0.5$ she sets $\vec{v}:=\vec{v}_+$, otherwise she sets $\vec{v}:=\vec{v}_-$. Afterwards, she sets $c=1$ if $|\vec{v}\cdot \vec{\lambda}_1|\geq |\vec{v}\cdot \vec{\lambda}_2|$ and $c=2$ otherwise. Finally, with probability
\begin{align}
\begin{split}
p(t=c|\vec{x},\vec{\lambda}_c)=\frac{\tilde{\rho}_{\vec{x}}(\vec{\lambda}_c)}{\frac{1}{2\pi} |\vec{\lambda}_c \cdot \vec{v}|}
\end{split}
\end{align}
she sets $t=c$ and otherwise, she sets $t=3$. She communicates the trit $t$ to Bob. Both set $\vec{\lambda}:=\vec{\lambda}_t$ and reject the other two vectors.
\end{protocol}
\begin{proof}
We show that the distribution of the shared vector $\vec{\lambda}$ becomes exactly the required $\rho_{\vec{x}}(\vec{\lambda})$. Consider the step before Alice sets $t=c$ or $t=3$. As a result of Protocol~\ref{protocoldegorre} from Ref.~\cite{degorre2005}, the distribution of the vector $\vec{\lambda}_c$ is $\rho(\vec{\lambda}_c)=\frac{1}{2\pi}|\vec{v}\cdot \vec{\lambda}_c|$. Therefore, using a similar idea as in the protocol for weakly entangled states, whenever she sets $t=c$, the resulting distribution of the chosen vector is precisely $p(t=c|\vec{x},\vec{\lambda}_c)\cdot \frac{1}{2\pi}|\vec{v}\cdot \vec{\lambda}_c|=\tilde{\rho}_{\vec{x}}(\vec{\lambda}_c)$. The total probability that she sets $t=c$ is $\int_{S_2} p(t=c|\vec{x},\vec{\lambda}_c)\cdot \frac{1}{2\pi}|\vec{v}\cdot \vec{\lambda}_c| \, \mathrm{d}\vec{\lambda}_c=\int_{S_2} \tilde{\rho}_{\vec{x}}(\vec{\lambda}_c) \, \mathrm{d}\vec{\lambda}_c = 2(1-p)$ (see Lemma~\ref{lemma2}). In all the remaining cases, with total probability $2p-1$, she chooses vector $\vec{\lambda}_3$, distributed as $\Theta(\vec{\lambda}_3 \cdot \vec{z})/\pi$. Therefore, the total distribution of the chosen vector $\vec{\lambda}:=\vec{\lambda}_t$ becomes the desired distribution
\begin{align}
\tilde{\rho}_{\vec{x}}(\vec{\lambda})+\frac{(2p-1)}{\pi}\ \Theta(\vec{\lambda}\cdot \vec{z})=\rho_{\vec{x}}(\vec{\lambda}) \, .
\end{align}
The fact that $0\leq p(t=c|\vec{x},\vec{\lambda}_c) \leq 1$ follows from $\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \frac{p_\pm}{\pi} |\vec{\lambda}\cdot \vec{v}_\pm|$ in Lemma~\ref{lemma2}.
\end{proof}
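The well-definedness condition $0\leq p(t=c|\vec{x},\vec{\lambda}_c)\leq 1$ can also be probed numerically. The sketch below (our own) uses an explicit parametrization of $p_\pm$ and $\vec{v}_\pm$ derived from the Schmidt form with $\vec{x}$ in the $x$--$z$ plane; the code first verifies that this parametrization satisfies Eq.~\eqref{nonsignalling}:

```python
import numpy as np

rng = np.random.default_rng(0)
Theta = lambda t: np.maximum(t, 0.0)
z = np.array([0.0, 0.0, 1.0])

p, beta = 0.6, 2.0                       # strongly entangled state, in-plane angle of x
C, c, s = 2*p - 1, np.cos(beta), np.sin(beta)
pp, pm = (1 + C*c)/2, (1 - C*c)/2        # outcome probabilities p_+, p_-
vp = np.array([np.sqrt(p*(1 - p))*s, 0.0, (C + c)/2]) / pp
vm = np.array([-np.sqrt(p*(1 - p))*s, 0.0, (C - c)/2]) / pm
assert np.allclose(pp*vp + pm*vm, C*z)   # Eq. (nonsignalling) holds by construction
assert abs(vp @ vp - 1) < 1e-12 and abs(vm @ vm - 1) < 1e-12

p_v, v = (pp, vp) if pp <= 0.5 else (pm, vm)   # Alice's choice of v in the protocol
lams = rng.normal(size=(200_000, 3))
lams /= np.linalg.norm(lams, axis=1, keepdims=True)
rho_t = (pp*Theta(lams @ vp) + pm*Theta(lams @ vm) - C*Theta(lams @ z)) / np.pi
den = np.abs(lams @ v) / (2*np.pi)
mask = den > 1e-12                       # avoid 0/0 on the (measure-zero) equator of v
accept = rho_t[mask] / den[mask]
assert accept.max() <= 1 + 1e-9          # p(t=c|x,lam_c) is a valid probability
```

The observed maximum is consistent with the exact bound $2p_v\leq 1$, which follows from the 1st bound of Lemma~\ref{lemma2} combined with Alice's choice of the outcome with probability at most $1/2$.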
\setcounter{protocol}{4}
\section{Properties of $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$} \label{rhotilde}
\begin{figure}
\caption{A sketch of the relevant distributions for the state $\ket{\Psi_{AB}}$.}
\label{fig2}
\end{figure}
In this section, we prove several properties of the distribution $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$ that are crucial for our protocols. Let us recall that
\begin{align}
\tilde{\rho}_{\vec{x}}(\vec{\lambda}):=&\frac{1}{\pi}\left( p_+\ \Theta(\vec{\lambda}\cdot \vec{v}_+)+p_- \ \Theta(\vec{\lambda}\cdot \vec{v}_-)-(2p-1)\ \Theta(\vec{\lambda}\cdot \vec{z})\right)
\end{align}
where $0\leq p_\pm \leq 1$, $p_++p_-=1$, and $p\geq0.5$ (therefore $0\leq (2p-1)\leq 1$), and all vectors are normalized vectors on the Bloch sphere, $\vec{\lambda},\vec{v}_+,\vec{v}_-,\vec{z}\in S_2$. The only relevant equation that we need for the proof is Eq.~\eqref{nonsignalling}, which reads as
\begin{equation}
p_+\ \vec{v}_++p_-\ \vec{v}_-=(2p-1)\ \vec{z},
\end{equation} and the definition of $\Theta(z)$:
\begin{align}
\Theta(z):=\biggl\lbrace \begin{array}{cc}
z & \text{if } z\geq 0 \\
0 & \text{if } z<0
\end{array}\, .
\end{align}
\begin{lemma}
The distribution $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$ defined above satisfies the following properties:
\begin{enumerate}[label=(\roman*)]
\item positive: $\tilde{\rho}_{\vec{x}}(\vec{\lambda})\geq 0$
\item symmetric: $\tilde{\rho}_{\vec{x}}(\vec{\lambda})=\tilde{\rho}_{\vec{x}}(-\vec{\lambda})$
\item area: $\int_{S_2} \tilde{\rho}_{\vec{x}}(\vec{\lambda}) \ \mathrm{d}\vec{\lambda}=2(1-p)$
\item 1st bound: $\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \frac{p_\pm}{\pi} |\vec{\lambda}\cdot \vec{v}_\pm|$
\item 2nd bound: $\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \frac{1}{2\pi}\frac{1-C^2}{\sqrt{1-C^2\sin^2{(\theta)}}+C|\cos{(\theta)}|}$ with $C:=2p-1$ and $\cos{(\theta)}=\vec{\lambda}\cdot \vec{z}$
\item 3rd bound: $\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \frac{\sqrt{p(1-p)}}{\pi}$
\end{enumerate}
\end{lemma}
\begin{proof}
Most of these properties follow directly from the fact that the function $\Theta(a)$ is convex and positively homogeneous, and therefore subadditive:
\begin{align}
\forall a,b\in \mathbb{R}:\ \Theta(a)+\Theta(b)\geq \Theta(a+b) \, .
\end{align}
Furthermore, we use the following property frequently:
\begin{align}
\forall a,b\in \mathbb{R} \text{ with } a\geq 0:\ \Theta(a\cdot b)= a\cdot \Theta(b) \, .
\end{align}
Furthermore, $\Theta(a)+\Theta(-a)=|a|$ as well as $\Theta(a)-\Theta(-a)=a$ for all $ a\in \mathbb{R}$. All of these properties follow directly from the definition of $\Theta(a)$.\\
\textit{(I) positive:}\\
We can use $\Theta(a)+\Theta(b)\geq \Theta(a+b)$ with $a= p_+\ \vec{\lambda}\cdot \vec{v}_+$ and $b= p_-\ \vec{\lambda}\cdot \vec{v}_-$. As a consequence of $p_+\ \vec{v}_++p_-\ \vec{v}_-=(2p-1)\ \vec{z}$ we obtain $a+b=p_+\ \vec{\lambda}\cdot \vec{v}_++p_-\ \vec{\lambda}\cdot \vec{v}_-=\vec{\lambda}\cdot(p_+\ \vec{v}_++p_-\ \vec{v}_-)=(2p-1)\ \vec{\lambda}\cdot\vec{z}$ and therefore:
\begin{align}
\Theta(p_+\ \vec{\lambda}\cdot \vec{v}_+)+\Theta(p_-\ \vec{\lambda}\cdot \vec{v}_-)&\geq \Theta((2p-1)\ \vec{\lambda}\cdot\vec{z})\\
p_+\ \Theta(\vec{\lambda}\cdot \vec{v}_+)+p_-\ \Theta(\vec{\lambda}\cdot \vec{v}_-)&\geq (2p-1)\ \Theta(\vec{\lambda}\cdot\vec{z})\\
\tilde{\rho}_{\vec{x}}(\vec{\lambda})&\geq 0 \, .
\end{align}
Note that we have used $p_+,p_-\geq 0$ and $(2p-1)\geq 0$.\\
\textit{(II) symmetric:}\\
From $\Theta(a)-\Theta(-a)=a$ we conclude:
\begin{align}
a+b&=(a+b)\\
\Theta(a)-\Theta(-a)+\Theta(b)-\Theta(-b)&=\Theta(a+b)-\Theta(-a-b)\\
\Theta(a)+\Theta(b)-\Theta(a+b)&=\Theta(-a)+\Theta(-b)-\Theta(-a-b) \, .
\end{align}\\
Now we can choose $a= p_+\ \vec{\lambda}\cdot \vec{v}_+$ and $b= p_-\ \vec{\lambda}\cdot \vec{v}_-$ such that $a+b=p_+\ \vec{\lambda}\cdot \vec{v}_++p_-\ \vec{\lambda}\cdot \vec{v}_-=(2p-1)\ \vec{\lambda}\cdot\vec{z}$. We obtain directly:
\begin{align}
\tilde{\rho}_{\vec{x}}(\vec{\lambda})=\frac{1}{\pi}(\Theta(a)+\Theta(b)-\Theta(a+b))=\frac{1}{\pi}(\Theta(-a)+\Theta(-b)-\Theta(-a-b))=\tilde{\rho}_{\vec{x}}(-\vec{\lambda}) \, .
\end{align}\\
\textit{(III) area:}\\
Since $\int_{S_2} \Theta(\vec{\lambda}\cdot \vec{v})/\pi \ \mathrm{d}\vec{\lambda}=1$ (see Eq.~\eqref{normalization}) we obtain by linearity:
\begin{align}
\int_{S_2} \tilde{\rho}_{\vec{x}}(\vec{\lambda}) \ \mathrm{d}\vec{\lambda}=&\int_{S_2} \frac{p_+}{\pi}\ \Theta(\vec{\lambda}\cdot \vec{v}_+) \ \mathrm{d}\vec{\lambda}+\int_{S_2} \frac{p_-}{\pi}\ \Theta(\vec{\lambda}\cdot \vec{v}_-) \ \mathrm{d}\vec{\lambda}-\int_{S_2} \frac{(2p-1)}{\pi}\ \Theta(\vec{\lambda}\cdot \vec{z}) \ \mathrm{d}\vec{\lambda}\\
=&p_++p_--(2p-1)=1-(2p-1)=2(1-p) \, .
\end{align}\\
\textit{(IV) 1st bound:}\\
We can use $\Theta(a)+\Theta(b)\geq \Theta(a+b)$ with $a= p_+\ \vec{\lambda}\cdot \vec{v}_++p_-\ \vec{\lambda}\cdot \vec{v}_-=(2p-1)\ \vec{\lambda}\cdot\vec{z}$ and $b=- p_-\ \vec{\lambda}\cdot \vec{v}_-$. We obtain $a+b=p_+\ \vec{\lambda}\cdot \vec{v}_+$ and therefore:
\begin{align}
\Theta((2p-1)\ \vec{\lambda}\cdot\vec{z})+\Theta(- p_-\ \vec{\lambda}\cdot \vec{v}_-)&\geq \Theta(p_+\ \vec{\lambda}\cdot \vec{v}_+)\\
\Theta((2p-1)\ \vec{\lambda}\cdot\vec{z})+\Theta(- p_-\ \vec{\lambda}\cdot \vec{v}_-)+\Theta(p_-\ \vec{\lambda}\cdot \vec{v}_-)&\geq \Theta(p_+\ \vec{\lambda}\cdot \vec{v}_+)+\Theta(p_-\ \vec{\lambda}\cdot \vec{v}_-)\\
p_-\ \Theta(- \vec{\lambda}\cdot \vec{v}_-)+p_-\ \Theta(\vec{\lambda}\cdot \vec{v}_-)&\geq p_+\ \Theta(\vec{\lambda}\cdot \vec{v}_+)+p_-\ \Theta(\vec{\lambda}\cdot \vec{v}_-)-(2p-1)\ \Theta(\vec{\lambda}\cdot\vec{z})\\
p_-\ |\vec{\lambda}\cdot \vec{v}_-|&\geq \pi \cdot \tilde{\rho}_{\vec{x}}(\vec{\lambda}) \, .
\end{align}
If we choose $b=- p_+\ \vec{\lambda}\cdot \vec{v}_+$ instead, we obtain $p_+\ |\vec{\lambda}\cdot \vec{v}_+|\geq \pi \cdot \tilde{\rho}_{\vec{x}}(\vec{\lambda})$.\\
\textit{(V) 2nd bound:}\\
Here we prove:
\begin{align}
\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \frac{1}{2\pi}\frac{1-C^2}{\sqrt{1-C^2\sin^2{(\theta)}}+C|\cos{(\theta)}|}
\end{align}
where $C:=2p-1$ and $\cos{(\theta)}=\vec{\lambda}\cdot \vec{z}$.
We prove it in the following way: for a given vector $\vec{\lambda} \in S_2$, we want to find the distribution $\tilde{\rho}_{\vec{x}}$ for which $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$ is maximal. First, we focus on a vector $\vec{\lambda}$ in the lower hemisphere ($\vec{\lambda}\cdot \vec{z}\leq 0$). In that region, it turns out that $\tilde{\rho}_{\vec{x}}(\vec{\lambda})=\frac{1}{\pi}(p_+ \Theta(\vec{v}_+\cdot \vec{\lambda})+p_- \Theta(\vec{v}_-\cdot \vec{\lambda}))=\rho_{\vec{x}}(\vec{\lambda})$, and furthermore at most one of the two terms ($p_+ \vec{v}_+\cdot \vec{\lambda}$ or $p_- \vec{v}_-\cdot \vec{\lambda}$) is positive (if both were positive, we would have $p_+ \vec{v}_+\cdot \vec{\lambda}+p_- \vec{v}_-\cdot \vec{\lambda}=C\ \vec{z}\cdot \vec{\lambda}>0$, a contradiction). Therefore, for a given vector $\vec{\lambda}$, we want to find $\vec{v}_+$ and $p_+$ such that $\rho_{\vec{x}}(\vec{\lambda})=\frac{1}{\pi}(p_+ \vec{v}_+\cdot \vec{\lambda})$ is maximal (we choose ``$+$'' w.l.o.g.).
For what follows, we choose the parametrization $\vec{v}_\pm=(\sin{(\alpha_\pm)}, 0, \cos{(\alpha_\pm)})^T$ and $\vec{\lambda}=(\sin{(\theta)}, 0, \cos{(\theta)})^T$. Note that for the derivation of the bound, we can assume w.l.o.g. that the $y$-component of $\vec{\lambda}$ is zero (the bound depends only on the $z$-component of $\vec{\lambda}$ because of the symmetry around the $z$-axis). Furthermore, we can assume w.l.o.g. that $\vec{v}_+$ also has zero $y$-component in order to achieve the maximum. Solving the equation $p_+ \vec{v}_++p_- \vec{v}_-=C\ \vec{z}$ together with $p_++p_-=1$ leads to:
\begin{align}
p_+=\frac{1-C^2}{2-2C\cos{(\alpha_+)}}\, , &&\sin{(\alpha_-)}=\frac{(1-C^2)\sin{(\alpha_+)}}{2C\cos{(\alpha_+)}-(1+C^2)}\, , &&\cos{(\alpha_-)}=\frac{(1+C^2)\cos{(\alpha_+)}-2C}{2C\cos{(\alpha_+)}-(1+C^2)} \, .
\end{align}
Here, $\sin{(\alpha_-)}$ and $\cos{(\alpha_-)}$ are stated merely for completeness and are not needed in what follows. In order to maximize $\frac{1}{\pi}(p_+ \vec{v}_+\cdot \vec{\lambda})$, we have to find the value of $\alpha_+$ that maximizes the function:
\begin{align}
\frac{1}{\pi}(p_+ \vec{v}_+\cdot \vec{\lambda})=\frac{(1-C^2)\cos{(\theta-\alpha_+)}}{2\pi(1-C\cos{(\alpha_+)})} \, .
\end{align}
Optimizing over $\alpha_+$ gives the condition $C\sin{(\theta)}=\sin{(\theta -\alpha_+)}$. Rewriting the optimal $\alpha_+$ in terms of $\theta$ and $C$ leads to the following bound (note that $\cos{(\theta)}\leq 0$ and therefore $\cos{(\theta)}=-|\cos{(\theta)}|$):
\begin{align}
\frac{1}{\pi}(p_+ \vec{v}_+\cdot \vec{\lambda}) \leq \frac{1}{2\pi}\frac{1-C^2}{\sqrt{1-C^2\sin^2{(\theta)}}-C\cos{(\theta)}}=\frac{1}{2\pi}\frac{1-C^2}{\sqrt{1-C^2\sin^2{(\theta)}}+C|\cos{(\theta)}|} \, .
\end{align}
For a vector in the upper hemisphere, we simply use the symmetry $\tilde{\rho}_{\vec{x}}(\vec{\lambda})=\tilde{\rho}_{\vec{x}}(-\vec{\lambda})$ established in (II). This leads directly to
\begin{align}
\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \frac{1}{2\pi}\frac{1-C^2}{\sqrt{1-C^2\sin^2{(\theta)}}+C|\cos{(\theta)}|}\, ,
\end{align}
since the bound is invariant under simultaneously changing the sign of $\sin{(\theta)}$ and $\cos{(\theta)}$ (note that $-\vec{\lambda}=(-\sin{(\theta)}, 0, -\cos{(\theta)})^T$).\\
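This optimization can also be checked numerically. The following sketch (an illustrative check, not part of the derivation; the values $p=0.9$, i.e.\ $C=0.8$, and $\theta=2.2$ are arbitrary test choices) scans $\alpha_+$ on a fine grid and confirms that the maximum of $\frac{(1-C^2)\cos(\theta-\alpha_+)}{2\pi(1-C\cos\alpha_+)}$ coincides with the stated closed-form bound:

```python
import math

# Illustrative test values (not from the text): p = 0.9, and a direction
# in the lower hemisphere (cos(theta) <= 0).
C = 2 * 0.9 - 1          # C = 2p - 1 = 0.8
theta = 2.2              # radians, cos(theta) < 0

# Scan alpha_+ on a fine grid and maximize
# (1 - C^2) cos(theta - alpha) / (2 pi (1 - C cos(alpha))).
best = max(
    (1 - C**2) * math.cos(theta - a) / (2 * math.pi * (1 - C * math.cos(a)))
    for a in (2 * math.pi * k / 200000 for k in range(200000))
)

# Closed-form bound obtained from the condition C sin(theta) = sin(theta - alpha_+).
s, c = math.sin(theta), math.cos(theta)
bound = (1 - C**2) / (2 * math.pi * (math.sqrt(1 - C**2 * s**2) + C * abs(c)))

assert best <= bound + 1e-12   # no grid point exceeds the bound
assert bound - best < 1e-6     # and the bound is attained at the optimum
```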
\textit{(VI) 3rd bound:}\\
We maximize the 2nd bound over $\theta$. It turns out that the maximum is reached when $\cos{(\theta)}=0$, which leads to:
\begin{align}
\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \frac{\sqrt{1-C^2}}{2\pi}=\frac{\sqrt{p(1-p)}}{\pi} \, .
\end{align}
Note that this bound is strictly weaker than the second bound, but it is easier to state and useful for pedagogical reasons.
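The properties proved above can also be verified numerically. The sketch below (an illustration with the arbitrarily chosen value $p=0.9$; not a substitute for the proof) builds valid pairs $(p_\pm,\vec{v}_\pm)$ with $p_+\vec{v}_++p_-\vec{v}_-=(2p-1)\vec{z}$ from a random $\alpha_+$ via the parametrization above, and checks positivity (I), symmetry (II) and the 3rd bound (VI) at random points on the sphere:

```python
import math, random

random.seed(1)
p = 0.9                       # arbitrary test value
C = 2 * p - 1
z = (0.0, 0.0, 1.0)

def relu(t):                  # Theta(t) = t for t >= 0 and 0 otherwise
    return max(t, 0.0)

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

max_rho = 0.0
for _ in range(2000):
    # Build a valid pair (p_+, v_+), (p_-, v_-) with p_+ v_+ + p_- v_- = C z
    # from a random angle alpha_+, using the parametrization of the proof.
    a = random.uniform(0.0, math.pi)
    vp = (math.sin(a), 0.0, math.cos(a))
    pp = (1 - C**2) / (2 - 2 * C * math.cos(a))
    pm = 1 - pp
    vm = tuple((C * z[i] - pp * vp[i]) / pm for i in range(3))
    assert abs(dot(vm, vm) - 1.0) < 1e-9       # v_- is again a unit vector

    # Uniformly random lambda on the sphere.
    u, phi = random.uniform(-1.0, 1.0), random.uniform(0.0, 2 * math.pi)
    s = math.sqrt(1.0 - u * u)
    lam = (s * math.cos(phi), s * math.sin(phi), u)

    def rho(l):  # tilde rho_x(lambda) = rho_x(lambda) - (2p-1) Theta(lambda.z)/pi
        return (pp * relu(dot(l, vp)) + pm * relu(dot(l, vm))
                - C * relu(dot(l, z))) / math.pi

    r = rho(lam)
    assert r >= -1e-12                                     # (I) positivity
    assert abs(r - rho(tuple(-t for t in lam))) < 1e-12    # (II) symmetry
    max_rho = max(max_rho, r)

assert max_rho <= math.sqrt(p * (1 - p)) / math.pi + 1e-12  # (VI) 3rd bound
```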
\end{proof}
\section{Improved one bit protocol}\label{improvedprot}
We can improve the protocol from the main text in two independent ways. The first improvement comes from the fact that $p_A(c=1|\vec{x},\vec{\lambda}_1)=(4\pi)\cdot \tilde{\rho}_{\vec{x}} (\vec{\lambda}_1)\leq 4 \sqrt{p(1-p)}$, where we used the bound $\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \sqrt{p(1-p)}/\pi$. Hence, if the state is very weakly entangled ($p\lesssim 1$), the probability that Alice sends the bit $c=1$ is always small. Intuitively speaking, this allows us to rewrite the protocol into a form where Alice and Bob communicate only in a fraction of the rounds, but in those rounds with a higher (rescaled) probability $p_A(c=1|\vec{x},\vec{\lambda}_1)\propto \tilde{\rho}_{\vec{x}} (\vec{\lambda}_1)$. In the limit where $p$ approaches one (the separable state $\ketbra{00}$), the fraction of rounds in which they have to communicate at all approaches zero. The second improvement comes from using a better bound on the function $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$. Indeed, the bound $\tilde{\rho}_{\vec{x}}(\vec{\lambda})\leq \sqrt{p(1-p)}/\pi$ is easy to state but not optimal. More precisely, we have proven in Appendix~\ref{rhotilde}:
\begin{align}
0\leq \tilde{\rho}_{\vec{x}}(\vec{\lambda}) \leq \tilde{\rho}_{max}(\vec{\lambda}):=\frac{1}{2\pi}\frac{1-C^2}{\sqrt{1-C^2\sin^2{(\theta)}}+C|\cos{(\theta)}|} \, .
\end{align}
Here, $C:=2p-1$ and $\cos{(\theta)}=\vec{\lambda}\cdot \vec{z}$ is the $z$-component of $\vec{\lambda}$ in spherical coordinates. Note that neither $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$ nor $\tilde{\rho}_{max}(\vec{\lambda})$ is normalized, and we define the function $N(p)$ as the normalization of $\tilde{\rho}_{max}(\vec{\lambda})$:
\begin{align}
N(p):=&\int_{S_2} \tilde{\rho}_{max}(\vec{\lambda}) \ \mathrm{d}\vec{\lambda} \, .
\end{align}
So as not to scare off the reader, we evaluate this integral only after presenting the protocol.
\begin{protocol}[$0.835\leq p\leq 1$, 1 bit in the worst case, $N(p)$ bits on average]
Same as Protocol~\ref{protocolgenfram} with the following second step:\\
Alice and Bob share two random vectors $\vec{\lambda}_1, \vec{\lambda}_2 \in S_2$ according to the distribution:
\begin{align}
\rho(\vec{\lambda}_1)=\frac{1}{N(p)} \cdot \tilde{\rho}_{max}(\vec{\lambda}_1)\, ,&&
\rho(\vec{\lambda}_2)=\frac{1}{\pi}\ \Theta(\vec{\lambda}_2 \cdot \vec{z}) \, .
\end{align}
In addition, Alice and Bob share a random bit $r$ distributed according to $p(r=0)=1-N(p)$ and $p(r=1)=N(p)$. If $r=0$, Alice and Bob do not communicate and both set $\vec{\lambda}:=\vec{\lambda}_2$. If $r=1$, Alice sets $c=1$ with probability:
\begin{align}
\begin{split}
p(c=1|\vec{x},\vec{\lambda}_1)=\tilde{\rho}_{\vec{x}} (\vec{\lambda}_1)/\tilde{\rho}_{max}(\vec{\lambda}_1)
\end{split}
\end{align}
and otherwise she sets $c=2$. She communicates the bit $c$ to Bob. Both set $\vec{\lambda}:=\vec{\lambda}_c$ and discard the other vector.
\end{protocol}
\begin{proof}
We show that the distribution of the shared vector $\vec{\lambda}$ becomes exactly the required $\rho_{\vec{x}}(\vec{\lambda})$. To see this, consider all the cases where Alice chooses the first vector. This happens only when $r=1$ and she sets the bit $c$ to 1, hence with total probability:
\begin{align}
p(r=1)\cdot p(c=1|\vec{x},\vec{\lambda}_1)\cdot \rho(\vec{\lambda}_1)=\tilde{\rho}_{\vec{x}}(\vec{\lambda}_1) \, .
\end{align}
Integrating over the sphere, the total probability that she chooses the first vector is $2(1-p)$. In all the remaining cases, with total probability $2p-1$, she chooses the vector $\vec{\lambda}:=\vec{\lambda}_2$. Therefore, the total distribution of the chosen vector $\vec{\lambda}$ becomes the desired distribution
\begin{align}
\tilde{\rho}_{\vec{x}}(\vec{\lambda})+\frac{(2p-1)}{\pi}\ \Theta(\vec{\lambda}\cdot \vec{z})=\rho_{\vec{x}}(\vec{\lambda}) \, .
\end{align}
Here, the first term $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$ in the sum corresponds to all the instances where Alice chooses $\vec{\lambda}:=\vec{\lambda}_1$, and the second term to all the instances where she chooses $\vec{\lambda}_2$. In order for the protocol to be well defined, it has to hold that $0\leq p(c=1|\vec{x},\vec{\lambda}_1)\leq 1$ and $0\leq N(p)\leq 1$. The first condition holds since $0\leq \tilde{\rho}_{\vec{x}} (\vec{\lambda}_1)\leq \tilde{\rho}_{max} (\vec{\lambda}_1)$, and the second condition $0\leq N(p)\leq 1$ holds whenever $0.835\leq p\leq 1$.
\end{proof}
We can explicitly solve that integral for $N(p)$ by using spherical coordinates $\vec{\lambda}=(\sin{\theta}\cdot \cos{\phi}, \sin{\theta}\cdot \sin{\phi}, \cos{\theta})$:
\begin{align}
\begin{split}
N(p)=&\int_{S_2} \tilde{\rho}_{max}(\vec{\lambda}) \ \mathrm{d}\vec{\lambda}
= \int^{2\pi}_{0} \int^{\pi}_{0} \tilde{\rho}_{max}(\theta,\phi) \cdot \sin{\theta}\ \mathrm{d}\theta \ \mathrm{d}\phi
= \int^{2\pi}_{0} \int^{\pi}_{0} \frac{1}{2\pi}\frac{1-C^2}{\sqrt{1-C^2\sin^2{(\theta)}}+C|\cos{(\theta)}|} \cdot \sin{\theta}\ \mathrm{d}\theta \ \mathrm{d}\phi \\
=& \int^{\pi}_{0} \frac{1-C^2}{\sqrt{1-C^2\sin^2{(\theta)}}+C|\cos{(\theta)}|} \cdot \sin{\theta}\ \mathrm{d}\theta
= 2 \int^{\pi/2}_{0} \frac{1-C^2}{\sqrt{1-C^2\sin^2{(\theta)}}+C \cos{(\theta)}} \cdot \sin{\theta}\ \mathrm{d}\theta \, .
\end{split}
\end{align}
Substituting $x=\cos{\theta}\cdot C$ leads to:
\begin{align}
\begin{split}
N(p)=& \frac{2(1-C^2)}{C} \int^{C}_{0} \frac{1}{\sqrt{(1-C^2)+x^2}+x}\ \mathrm{d}x \\
=& \frac{2(1-C^2)}{C} \left[\frac{1}{2}\log{\left(\sqrt{(1-C^2)+x^2}+x\right)}-\frac{1-C^2}{4\left(\sqrt{(1-C^2)+x^2}+x\right)^2} \right]^{C}_{0} \\
=& \frac{1-C^2}{2C} \left(\log{\left(\frac{1+C}{1-C}\right)}+\frac{2C}{1+C} \right)\\
=& \frac{1-C^2}{2C} \log{\left(\frac{1+C}{1-C}\right)}+(1-C) \\
=& \frac{2p(1-p)}{2p-1} \log{\left(\frac{p}{1-p}\right)}+2(1-p) \, .
\end{split}
\end{align}
Here we used $C=2p-1$ in the last step.
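The closed form can be cross-checked against a direct numerical quadrature of the surface integral. The sketch below (an illustrative check in Python, with arbitrarily chosen test values) also confirms that $N(p)\leq 1$ at $p=0.835$ but not just below it, consistent with the stated range of validity of the protocol:

```python
import math

def N_closed(p):
    """Closed form derived above, with C = 2p - 1."""
    C = 2 * p - 1
    return (1 - C**2) / (2 * C) * math.log((1 + C) / (1 - C)) + (1 - C)

def N_numeric(p, steps=200000):
    """Midpoint-rule quadrature of the surface integral of rho_max over theta;
    the azimuthal 2*pi cancels the 1/(2*pi) prefactor."""
    C = 2 * p - 1
    h = math.pi / steps
    total = 0.0
    for k in range(steps):
        th = (k + 0.5) * h
        total += (1 - C**2) * math.sin(th) / (
            math.sqrt(1 - C**2 * math.sin(th)**2) + C * abs(math.cos(th)))
    return total * h

assert abs(N_closed(0.9) - N_numeric(0.9)) < 1e-6
# N(p) <= 1 on the protocol's range of validity, but not just below it:
assert N_closed(0.835) < 1.0 < N_closed(0.83)
```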
\section{Maximal local content}\label{applocalcontent}
In Appendix~\ref{improvedprot}, we showed that it is possible to simulate weakly entangled states without communication in some fraction of the rounds. One can even go one step further and maximize the fraction of rounds in which no communication is required. This maximal fraction is called the local content of the state $\ket{\Psi_{AB}}$. More formally, a simulation of an entangled qubit pair can be decomposed into a local part $p_L(a,b|\vec{x},\vec{y})$ that can be implemented by Alice and Bob without communication and the remaining non-local content, denoted as $p_{NL}(a,b|\vec{x},\vec{y})$:
\begin{align}
p_Q(a,b|\vec{x},\vec{y})=p_L \cdot p_L(a,b|\vec{x},\vec{y})+(1-p_L)\cdot p_{NL}(a,b|\vec{x},\vec{y}) \, . \label{localcontent}
\end{align}
The problem consists in finding the maximal value of $p_L$ for a given state $\ket{\Psi_{AB}}=\sqrt{p}\ket{00}+\sqrt{1-p}\ket{11}$, which we denote here as $p^{max}_L(p)$.
For a maximally entangled state, Elitzur, Popescu and Rohrlich showed that the local content is necessarily zero (hence $p^{max}_L(p=1/2)=0$), a result also known as the EPR2 decomposition \cite{elitzur1992} (see also Barrett et al.~\cite{barrett2006}). For general entangled qubit pairs, it was shown by Scarani~\cite{Scarani2008localpart} that the local content is upper bounded by $2p-1$, hence $p^{max}_L(p)\leq 2p-1$. At the same time, successively better lower bounds were found \cite{elitzur1992, Scarani2008localpart, Branciard2010localpart, Portmann2012localpart}. Finally, Portmann et al.~\cite{Portmann2012localpart} found an explicit decomposition with a local content of $p_L(p)=2p-1$, proving that the upper and lower bounds coincide and, therefore, $p^{max}_L(p)=2p-1$. Here, we give an independent proof of the result by Portmann et al.~\cite{Portmann2012localpart}. More precisely, we provide a protocol that simulates any pure entangled two-qubit state of the general form $\ket{\Psi_{AB}}=\sqrt{p}\ket{00}+\sqrt{1-p}\ket{11}$ with a local content of $p_L=2p-1$.\\
We remark that, often in the literature, the state $\ket{\Psi_{AB}}$ is written as $\ket{\Psi_{AB}}=\cos{\theta}\ket{00}+\sin{\theta}\ket{11}$ with $\cos{\theta}\geq \sin{\theta}$. The two notations are related via $\cos{2\theta}=2p-1$, which follows from $\cos{\theta}=\sqrt{p}$ and $\sin{\theta}=\sqrt{1-p}$ together with $\cos{2\theta}=\cos^2{\theta}-\sin^2{\theta}=p-(1-p)=2p-1$.
\begin{protocol}[$1/2\leq p\leq 1$, maximal local content of $p_L=2p-1$]
Same as Protocol~\ref{protocolgenfram} with the following second step:\\
Alice and Bob share a random vector $\vec{\lambda}_1 \in S_2$ according to the distribution ($\vec{z}:=(0,0,1)^T$):
\begin{align}
\rho(\vec{\lambda}_1)=\frac{1}{\pi}\ \Theta(\vec{\lambda}_1 \cdot \vec{z}) \, .
\end{align}
In addition, Alice and Bob share a random bit $r$ distributed according to $p(r=0)=2p-1$ and $p(r=1)=2(1-p)$. If $r=0$, Alice and Bob do not communicate and both set $\vec{\lambda}:=\vec{\lambda}_1$. If $r=1$, Alice samples $\vec{\lambda}\in S_2$ according to the distribution $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$ and communicates that $\vec{\lambda}$ to Bob. (Alice can for example encode the three coordinates of $\vec{\lambda}$ and send this information to Bob.)
\end{protocol}
\begin{proof}
Whenever $r=1$, the resulting distribution of $\vec{\lambda}$ is, by construction, $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$. Whenever $r=0$ (with probability $2p-1$), the distribution of $\vec{\lambda}$ is $\Theta(\vec{\lambda} \cdot \vec{z})/\pi$. Therefore, the total distribution of the chosen vector $\vec{\lambda}$ becomes the desired distribution
\begin{align}
\tilde{\rho}_{\vec{x}}(\vec{\lambda})+\frac{(2p-1)}{\pi}\ \Theta(\vec{\lambda}\cdot \vec{z})=\rho_{\vec{x}}(\vec{\lambda}) \, .
\end{align}
Hence, by the proof of Protocol~\ref{protocolgenfram}, this exactly reproduces quantum correlations.
\end{proof}
This directly provides a decomposition of the above form (Eq.~\eqref{localcontent}) with $p_L=2p-1$. More precisely, whenever $r=0$ (with total probability $2p-1$), they do not communicate and, hence, implement a local strategy $p_L(a,b|\vec{x},\vec{y})$. On the other hand, whenever $r=1$ they communicate and, therefore, implement a non-local behaviour $p_{NL}(a,b|\vec{x},\vec{y})$. This protocol requires an unbounded amount of communication in the rounds where $r=1$, and there are more efficient ways to sample the distribution $\tilde{\rho}_{\vec{x}}(\vec{\lambda})$. However, this is not an issue for determining $p_L^{max}(p)$, since we only maximize the local content, i.e.\ the fraction of rounds in which Alice and Bob do not have to communicate; we are not concerned with the amount of communication in the remaining rounds. One can also ask what the maximal local content is under the restriction that Alice and Bob communicate only a single bit in the remaining rounds. We found such a decomposition in Appendix~\ref{improvedprot}. However, in general, it seems unlikely that the maximal local content of $p^{max}_L(p)=2p-1$ can be attained if the two parties communicate only a single bit in the remaining rounds.
\end{document} |
\begin{document}
\baselineskip=20pt
\textheight=600pt
\title{Entanglement entropy bound in quantum mechanics}
\author{Maurizio Melis}
\email{E-mail address: [email protected]\\
Corresponding address: via Basilicata 31, 09127 Cagliari}
\begin{abstract}
We compute, in a quantum mechanical framework, the entanglement entropy of a spherically symmetric quantum system
composed of two separate regions. In particular, we consider quantum states described by a wave function scale invariant and vanishing at infinity, with an asymptotic behaviour analogous to the case of a Coulomb potential or an harmonic oscillator.
\noindent
We find that the entanglement entropy bound is proportional to the area of the boundary between the two
separate regions, in accordance with the holographic bound on entropy. This study shows that
the dependence of the maximum entanglement entropy on the boundary area can be derived in the context of quantum mechanics and does not necessarily require a quantum field theory approach.
PACS: 03.65.Ud (entanglement and quantum nonlocality)
Keywords: quantum mechanics, entanglement entropy, area law, holographic bound, nonlocality
\end{abstract}
\maketitle
\section{Introduction}
\label{intro}
Einstein, Podolsky and Rosen (EPR) proposed a thought experiment
\cite{EPR}
to prove that quantum mechanics predicts the existence of ``spooky''
nonlocal correlations between
spatially separate parts of a quantum system, a phenomenon that Schr\"odinger
\cite{schrodinger} called {\it entanglement}. Afterward,
Bell \cite{bell} derived some inequalities that
can be violated in quantum mechanics but must be satisfied
by any local hidden variable model.
It was Aspect \cite{aspect} who first verified in the laboratory that the EPR experiment,
in the version proposed by Bohm \cite{bohm}, violates Bell inequalities,
showing therefore
that quantum entanglement and nonlocality are correct predictions of quantum mechanics.
A renewed interest in entanglement came from black hole physics: as suggested in
\cite{sorkin,srednicki}, black hole entropy can be interpreted
in terms of quantum entanglement, since the horizon of a black hole divides
spacetime into two subsystems such that
observers outside cannot communicate the results of their
measurements to observers inside, and vice versa.
Black hole entanglement entropy turns out to scale with the area
${\cal A}$ of the event horizon, as supported by
a simple argument proposed by Srednicki in \cite{srednicki}.
If we trace over the field degrees of freedom located outside the black hole,
the resulting entanglement entropy $S_A$ depends only on the degrees of freedom
inside the black hole (region $A$).
If then we trace over the
degrees of freedom inside the black hole, we obtain an entropy $S_B$
which depends only on the degrees of freedom outside (region $B$).
It is straightforward to
show that $S_A = S_B=S$, therefore the entropy $S$ should depend only
on properties shared by the two regions inside and outside the black hole. The
only feature they have in common is their boundary, so it is reasonable to expect
that $S$ depends only on the area ${\cal A}$ of the event horizon.
\noindent
This result is in accordance with the renowned Bekenstein-Hawking
formula \cite{bekenstein1,bekenstein2,hawking1,hawking2}:
$S_{BH}=\frac{\cal A}{4\ell_P^2}$,
where $S_{BH}$ is the black hole entropy and $\ell_P$ is the Planck length.
Some reviews and recent results on entanglement entropy in conformal field theory and
black hole physics can be found in \cite{melis1,melis2,takayanagi,das}.
It is also a well-known property of many-body systems that the entanglement entropy obeys an ``area law''
with a logarithmic correction, as discussed e.g. in \cite{wolf,plenio,wolf2,cardy}.
The area scaling of entanglement entropy has been investigated much more in the
context of quantum field theory than in quantum mechanics. In order to bridge
the gap, we study the entanglement entropy of a quantum system composed
of two separate parts, described by a
wave function $\psi$, which we assume invariant under scale transformations and
vanishing exponentially at infinity. We will show
that the entropy $S$ of both parts of our system is bounded by
$S \lesssim \eta \frac{\cal A}{4\ell_P^2}$, where $\ell_P$ is the Planck length and
$\eta$ is a numerical constant related to the dimensionless parameter $\lambda$
appearing in the wave function $\psi$.
This result, obtained at the leading order in $\lambda$,
is in accordance with the so-called {\it holographic bound}
on the entropy $S$ of an arbitrary system \cite{bousso,thooft,susskind}:
$S \leq \frac{\cal A}{4\ell_P^2}$, where
$\cal A$ is the area of the surface enclosing the system.
The holographic bound is an extension of the Bekenstein-Hawking formula for black hole entropy to all
matter in the universe.
In Section \ref{general} we present the main features of our approach, focusing in particular on the properties of
entanglement entropy and on the form
of the wave function describing the system. In Section \ref{results} we calculate analytically the bound on
entanglement entropy. In Section \ref{conclusions} we summarize both the limits and the goals of our approach.
\section{Main features of the model}
\label{general}
Let us consider a spherically symmetric quantum system,
composed of two regions $A$ and $B$ separated by a
spherical surface of radius $R$ (Fig. \ref{regions}).
\begin{figure}
\caption{A spherically symmetric quantum system composed of two regions $A$ and $B$, separated by a spherical surface of radius $R$.}
\label{regions}
\end{figure}
\noindent
The variables $\varrho$, $r$ describing the system
are subject to the following constraints (see Fig. \ref{regions}):
\begin{displaymath}
\left \{\begin{array}{l}
0\leq \varrho \leq R \quad\quad\, \mbox{region A}\\
r\geq R \quad\quad\quad\quad \mbox{region B}\, ,
\end{array}
\right .
\end{displaymath}
where $R$ is the radius of the spherical surface separating the two regions.
\noindent
It is convenient to introduce two dimensionless variables
\begin{equation}
\label{coordinates}
x = \frac{\varrho}{R}, \quad \quad y = \frac{r}{R},
\end{equation}
subject to the constraints $0\leq x \leq 1$ and $y\geq 1$.
\noindent
In the following we will assume that the dynamics of the system is spherically symmetric,
in order to treat the problem as one-dimensional in each region $A$ and $B$, with all
physical properties depending only on the radial distance from the origin.
\subsection{Entanglement entropy}
\label{general1}
Let $\psi(x,\, y)$ be a generic wave function describing
the system in Fig. \ref{regions}, composed of two parts $A$ and $B$.
\noindent
As discussed e.g. in \cite{susskindbook,landau}, we can provide a
description of all measurements
in region $A$ through the so-called {\it density matrix} $\rho_{_A}(x,\, x')$:
\begin{equation}
\label{matrix1}
\rho_{_A}(x,\, x') = \int_{_B} d^3y\, \psi^*(x,\, y)\psi(x',\, y)\, ,
\end{equation}
where $d^3y$ is related to the volume element $d^3r$ in $B$ through the relation
$d^3r = R^3d^3y$, with $d^3y = y^2\sin\theta d\theta\,d\phi\,dy$
in spherical coordinates.
\noindent
Similarly, experiments performed in $B$ are described by
the density matrix $\rho_{_B}(y,\, y')$:
\begin{equation}
\label{matrix2}
\rho_{_B}(y,\, y') = \int_{_A} d^3x\, \psi^*(x,\, y)\psi(x,\, y')\, ,
\end{equation}
where $d^3x$ is related to the volume element $d^3\varrho$ in $A$ through the relation
$d^3\varrho = R^3d^3x$, with $d^3x = x^2\sin\theta d\theta\,d\phi\,dx$
in spherical coordinates.
\noindent
Notice that $\rho_{_A}$ is calculated tracing out the exterior variables $y$, whereas
$\rho_{_B}$ is evaluated tracing out the interior variables $x$.
\noindent
Density matrices have the following properties:
\begin{enumerate}
\item Tr$\, \rho = 1$ (total probability equal to 1)
\item $\rho = \rho^{\dagger}$ (hermiticity)
\item $\rho_j \geq 0$ (all eigenvalues are positive or zero).
\end{enumerate}
\noindent
When only one eigenvalue of $\rho$ is nonzero, the nonvanishing
eigenvalue is equal to 1 by virtue of the trace condition on $\rho$.
This case occurs only if the wave function can be factorized into an
uncorrelated product
\begin{equation}
\label{uncorrelated}
\psi(x,\, y) = \psi_{_A}(x)\cdot\psi_{_B}(y)\, .
\end{equation}
\noindent
A quantitative measure of the degree of entanglement between
the two parts $A$ and $B$ of the system is provided by the {\it von Neumann entropy}
\begin{equation}
\label{ee}
S = -\mbox{Tr}(\rho\ln\rho)\, ,
\end{equation}
which is also called {\it entanglement entropy}.
\noindent
When the two subsystems $A$ and $B$ are each the complement of the other,
entanglement entropy can be calculated by tracing out the variables associated to
region $A$
or equivalently to region $B$, since it turns out $S_{_A}=S_{_B}$.
\subsection{Wave function}
\label{general2}
The spherical region $A$ in Fig. \ref{regions} is part of a larger
closed system $A\cup B$, described by a wave function $\psi (x,\, y)$,
where $x$ denotes the set of coordinates in $A$ and
$y$ the coordinates in $B$.
We will adopt for $\psi$ the following analytic form:
\begin{equation}
\label{wave}
\psi (x,\, y) = C_n\, e^{-\lambda y^n/x^n} \quad\quad \mbox{(with}\;\; n=1 \;\; \mbox{or}\;\; n=2\,\mbox{)}\, ,
\end{equation}
where $C_n$ is the normalization constant and $\lambda$ is a dimensionless parameter,
which we assume much greater than unity ($\lambda \gg 1$).
\noindent
From the point of view of an observer in $A$, the asymptotic behaviour of $\psi (x,\, y)$ for $n=1$ is an exponential decay, as in the case of a Coulomb potential, while for $n=2$ the wave function $\psi$ has an asymptotic Gaussian slope, as in the case of a harmonic oscillator.
\noindent
If the system is in a quantum state subjected to a central potential,
the complete wave function $\Phi$
should contain an angular part expressed by spherical harmonics
$Y_{lm}(\theta,\, \phi)$:
\begin{equation}
\Phi (x,\,y;\,\theta,\,\phi) = Y_{lm}(\theta,\, \phi)\,\psi (x,\, y)\, .
\end{equation}
If the angular momentum is zero, the angular component of the wave function reduces to a constant
$Y_{00}(\theta,\, \phi) = 1/\sqrt{4\pi}$ and can be included in the constant $C_n$ [Eq. (\ref{wave})], which
hence represents the overall normalization constant of the complete wave function $\Phi$.
\noindent
In order to justify the form (\ref{wave}) of the wave function $\psi$, let us list
the main properties it satisfies:
\noindent
{\bf 1)} $\psi$ depends on both sets of variables $x,\, y$
defined in the two separate regions
$A$ and $B$, but it is not factorizable in the product (\ref{uncorrelated})
of two distinct parts
depending only on one variable:
\begin{equation}
\psi (x,\, y) \neq \psi_{_A}(x)\cdot \psi_{_B}(y)\, .
\end{equation}
This assumption guarantees that the entanglement entropy of the system
is not identically zero.
\noindent
{\bf 2)} $\psi$ depends on the variables $x,\, y$ through their ratio $y/x$, hence the wave function
is invariant under scale transformations:
\begin{equation}
x\to\mu x \;\; \mbox{and} \;\; y\to\mu y, \;\; \mbox{with} \;\; \mu \;\;
\mbox{constant}\, .
\end{equation}
\noindent
{\bf 3)} $\psi$ has the asymptotic form of the wave function for a quantum state in a Coulomb potential (for $n=1$)
or for a harmonic oscillator (for $n=2$).
\noindent
The asymptotic form of the radial part $f(r)$ of the wave functions describing quantum states in
a Coulomb potential or in a harmonic oscillator is
\begin{equation}
\label{erre}
f(r) \approx e^{-\kappa r^n} \approx e^{-\kappa R^n y^n}\, , \quad \mbox{with}\; n=1\; \mbox{or}\; n=2\, .
\end{equation}
where $r$ is the radial distance from the origin (the asymptotic regime being $r\to\infty$), $y$ is defined by $y=r/R$, and the parameter $\kappa$ is
\begin{equation}
\label{kappa}
\kappa = \frac{\sqrt{2m|E|}}{n\hbar R^{n-1}}
\end{equation}
for a particle with mass $m$ and energy $E$ ($\hbar$ is the Planck constant divided by $2\pi$).
\noindent
The Coulomb case ($n=1$) corresponds to a bound state with negative energy ($E<0$), while the harmonic oscillator
case ($n=2$) has positive energy ($E>0$).
\noindent
The radial part $f(r)$ of the wave function coincides with
the restriction $\psi_{_B}(y)$ of the wave function $\psi(x,\,y)$ to the
exterior region $B$, as seen by
an inner observer localized for instance at $x=1$, i.e. on
the boundary between the two regions:
\begin{equation}
\label{restriction}
\left. \psi_{_B}(y)=\psi(x,\,y)\right|_{x=1} \approx e^{-\lambda y^n}\, .
\end{equation}
By comparing the asymptotic behaviour of the wave functions
(\ref{erre}) and (\ref{restriction}) as $y\to\infty$, we find:
\begin{equation}
\label{lambdalast}
\lambda = \gamma\,R\, , \quad \mbox{with} \quad
\gamma = \frac{\sqrt{2m|E|}}{n\hbar}\, .
\end{equation}
In Section \ref{results} we will assume $\lambda\gg 1$, which is always true in a
system with $R$ sufficiently large, as it easily follows from
Eq. (\ref{lambdalast}).
\noindent
If the inner observer is not localized on the boundary
but in a fixed point $x_0$ inside region $A$ (with $0<x_0<1$), then the
expression (\ref{lambdalast}) of $\lambda$ has to be multiplied by $x_0^n$.
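To illustrate the magnitudes involved (an illustrative computation with standard SI values; the choice $R=1\,\mu\mathrm{m}$ is arbitrary): for the hydrogen ground state ($n=1$), $\gamma=\sqrt{2m|E|}/\hbar$ coincides with the inverse Bohr radius, so $\lambda=\gamma R\gg 1$ already for micrometre-sized regions.

```python
import math

# Approximate SI values (assumed here for illustration).
m = 9.109e-31           # electron mass, kg
E = 13.6 * 1.602e-19    # hydrogen ground-state |E|, J
hbar = 1.0546e-34       # reduced Planck constant, J s

gamma = math.sqrt(2 * m * E) / hbar   # Eq. (lambdalast) with n = 1
lam = gamma * 1e-6                    # lambda = gamma * R for R = 1 micrometre

# gamma coincides with the inverse Bohr radius 1/a_0 ~ 1.9e10 m^-1.
assert 1.8e10 < gamma < 2.0e10
assert lam > 1e4                      # lambda >> 1 already at R = 1 micrometre
```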
\noindent
Notice that the dependence of $\lambda$ on the radius $R$ of the boundary
has been derived by imposing an asymptotic form
on the wave function $\psi(x,\,y)$ as $y\to\infty$, with respect to
a fixed point $x\sim 1$ inside the spherical region of the system in Fig. \ref{regions}.
In Section \ref{results} we will show that the entropy of both parts of our system
depends on $\lambda^2$, which in turn is proportional to the area ${\cal A}$ of the spherical boundary.
The area scaling of the entanglement entropy
is, essentially, a consequence of the nonlocality of the wavefunction $\psi(x,\,y)$,
which establishes a correlation
between points inside ($x\sim 1$) and outside ($y\to\infty$) the boundary.
\section{Analytic results}
\label{results}
We normalize the wave function (\ref{wave}) by imposing the condition
\begin{equation}
\int_{_A} d^3x\int_{_B} d^3y\, \psi^*(x,\, y)\psi (x,\, y) = 1\, ,
\end{equation}
with $d^3x = x^2\sin\theta d\theta\,d\phi\,dx$ and
$d^3y = y^2\sin\theta d\theta\,d\phi\,dy$ in spherical coordinates.
Under the assumption
$\lambda\gg 1$, the normalization constant $C_n$ in Eq. (\ref{wave}) turns out to be
\begin{equation}
\label{normalization}
C_n \approx 2^{n-1}\lambda\frac{e^{\lambda}}{\sqrt{\pi}}\, ,
\end{equation}
as easily obtained by means of Wolfram Mathematica 11.1 \cite{math}.
\noindent
Let us focus on the interior region $A$ represented in Fig. \ref{regions}.
We calculate the density matrix $\rho_{_A}(x,\, x')$ by tracing out the variables $y$
related to the subsystem $B$, as expressed in Eq. (\ref{matrix1}):
\begin{eqnarray}
\label{density}
\rho_{_A}(x,\, x') & = & \int_{_B} d^3y\, \psi^*(x,\, y)\psi (x',\, y) \nonumber \\
& \approx & \frac{C_n^2}{2^{n-1}\lambda}\frac{x^nx'^n}{x^n+x'^n}e^{-\lambda\frac{x^n+x'^n}{(xx')^n}}\, ,
\end{eqnarray}
where we assumed $\lambda\gg 1$.
\noindent
It is easy to verify that the density matrix $\rho_{_A}(x,\, x')$ satisfies all the
properties listed in Section \ref{general1}:
\begin{enumerate}
\item Total probability equal to 1:
\begin{displaymath}
\int_{_A} d^3x\, \rho_{_A}(x,\, x) = 1 \Longleftrightarrow \mbox{Tr}(\rho_{_A}) = 1\, .
\end{displaymath}
\item Hermiticity:
\begin{displaymath}
\rho_{_A}(x,\, x') = \rho^*_{_A}(x',\, x) \Longleftrightarrow \rho_{_A} =
\rho_{_A}^{\dagger}\, .
\end{displaymath}
\item All eigenvalues are positive or zero:
\begin{displaymath}
\rho_{_A}(x,\, x') \geq 0 \quad \forall\; x,\, x'\in (0,\, 1) \Longrightarrow
\big(\rho_{_A}\big)_j \geq 0 \, .
\end{displaymath}
\end{enumerate}
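Property 1 can be cross-checked numerically at leading order: combining the diagonal of Eq. (\ref{density}) with the normalization (\ref{normalization}) gives $\mbox{Tr}\,\rho_{_A}=2^n\lambda\int_0^1 x^{n+2}\,e^{2\lambda(1-1/x^n)}\,dx$, which should approach $1$ for $\lambda\gg 1$. The sketch below (an illustrative check; the value $\lambda=200$ is an arbitrary test choice) confirms this for both $n=1$ and $n=2$:

```python
import math

def trace_rho_A(lam, n, steps=200000):
    """Tr(rho_A) = 2^n * lam * int_0^1 x^(n+2) exp(2 lam (1 - 1/x^n)) dx,
    with the factor exp(2 lam) absorbed into the exponent to avoid overflow."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h   # midpoint rule; the integrand vanishes as x -> 0
        total += x**(n + 2) * math.exp(2 * lam * (1 - 1 / x**n))
    return 2**n * lam * total * h

# The trace approaches 1 with O(1/lam) corrections for both asymptotic forms.
for n in (1, 2):
    assert abs(trace_rho_A(200.0, n) - 1.0) < 0.02
```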
The entanglement entropy (\ref{ee}) can be expressed in the form
\begin{equation}
\label{entropy}
S_A = -\int_{_A} d^3x\, \rho_{_A}(x,\, x)\ln[\rho_{_A}(x,\, x)]\, .
\end{equation}
Substituting the expression (\ref{density}) of $\rho_{_A}(x,\, x')$,
with $x'=x$, we find:
\begin{equation}
S_A \approx -\frac{C_n^2}{2^n\lambda}
\int_{_A} d^3x\, e^{-2\lambda/x^n}x^n\ln\left(\frac{C_n^2}{2^n\lambda}\,e^{-2\lambda/x^n}x^n\right )\, .
\end{equation}
We can bound the previous integral by means of the condition
\begin{equation}
e^{-2\lambda/x^n} \leq e^{-2\lambda} \quad \forall\; x \in (0,\, 1)\, .
\end{equation}
By neglecting the subleading terms in $\lambda\gg 1$ and inserting the expression (\ref{normalization}) of the normalization constant $C_n$,
the entanglement entropy $S_A$ turns out to be bounded by
\begin{equation}
S_A \lesssim \left(\frac{16}{5}\right)^{n-1}\frac{\lambda^2}{3}\, .
\end{equation}
If we substitute $\lambda=\gamma\,R$, as given in Eq. (\ref{lambdalast}), we finally find:
\begin{equation}
\label{final}
S_A \lesssim \eta\,\frac{\cal A}{4\ell^2_P}\, , \quad \mbox{with} \quad
\eta = \left(\frac{16}{5}\right)^{n-1}\frac{\ell^2_P}{3\pi}\,\gamma^2\, ,
\end{equation}
where ${\cal A}=4\pi R^2$ is the area of the spherical boundary in Fig. \ref{regions},
$\ell_P = \left(\hbar G/c^3\right)^{1/2}$ is the Planck length and $\gamma$ is given in Eq. (\ref{lambdalast}).
The result (\ref{final}) is in accordance with the holographic bound on entropy
$S\leq\frac{\cal A}{4\ell^2_P}$,
discussed in \cite{bousso,thooft,susskind}, and
shows that the entanglement entropy of both parts of our
composite system obeys an ``area law''.
\noindent
For a particle with energy $E$ and mass $m$ we can express the parameter $\eta$ in the form
\begin{equation}
\label{eta}
\eta = \left(\frac{16}{5}\right)^{n-1}\frac{2}{3\pi}\,\frac{m|E|}{n\, m_P^2 c^2}\, ,
\end{equation}
where we combined Eqs. (\ref{lambdalast}), (\ref{final}) and introduced the Planck mass
$m_{_{P}} = (\hbar c/G)^{1/2}$.
Under the assumptions $|E|\ll m_Pc^2$ and $m\lesssim m_{_{P}}$, we obtain
$\eta\ll 1$, therefore in this case the bound (\ref{final}) on entropy is
much stronger than the holographic bound $S\leq\frac{\cal A}{4\ell^2_P}$.
For instance, if we consider the electron in the ground state of the hydrogen atom, we find
$\eta = \frac{2}{3\pi}\,\frac{mc^2|E|}{(m_P c^2)^2} \approx 10^{-50}$, where $mc^2 = 0.511$ MeV,
$|E| = 13.6$ eV and $m_{_{P}}c^2 \approx 1.2\cdot 10^{19}$ GeV.
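This order-of-magnitude estimate is easy to reproduce. The sketch below evaluates $\eta$ with the quoted values (all energies in eV) and checks that it comes out near $10^{-50}$.

```python
import math

# Back-of-the-envelope evaluation of eta for the hydrogen ground state (n = 1):
# eta = (2 / (3*pi)) * m c^2 |E| / (m_P c^2)^2, with all energies in eV.
mc2 = 0.511e6          # electron rest energy, eV
E = 13.6               # hydrogen ground-state binding energy, eV
mP_c2 = 1.22e28        # Planck energy, eV (about 1.2e19 GeV)

eta = (2.0 / (3.0 * math.pi)) * mc2 * E / mP_c2**2
print(f"eta = {eta:.2e}")
assert 1e-51 < eta < 1e-49   # consistent with the quoted eta ~ 1e-50
```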
All calculations performed in this Section can be repeated by focusing on the
exterior region $B$ represented in Fig. \ref{regions}. If we trace out the interior
variables $x$, as in Eq. (\ref{matrix2}), the density matrix
$\rho_{_B}(y,\, y')$ turns out to be
\begin{eqnarray}
\rho_{_B}(y,\, y') & = & \int_{_A} d^3x\, \psi^*(x,\, y)\psi (x,\, y')\nonumber \\
& \approx & \frac{C_n^2}{2^{n-1}\lambda}\frac{e^{-\lambda(y^n+y'^n)}}{y^n+y'^n}\quad \mbox{(with}\; n=1\;
\mbox{or}\; n=2\mbox{)}\, ,
\end{eqnarray}
where we substituted the expression (\ref{wave}) of the wave function $\psi$
and applied the usual assumption $\lambda\gg 1$.
\section{Conclusions}
\label{conclusions}
In this study we proposed a simple approach to the calculation of the entanglement entropy of
a spherically symmetric quantum system.
The result obtained in Eq. (\ref{final}), $S_A \lesssim \eta\,\frac{\cal A}{4\ell^2_P}$,
is in accordance with the holographic
bound on entropy and with the ``area law'' discussed e.g. in \cite{wolf,plenio,wolf2}.
Our result, derived in the context of quantum mechanics, shows that the maximum entanglement entropy of a system is proportional to the area of its boundary rather than to its volume, as one might have expected.
Therefore, it turns out that the information content
of a system is stored on its boundary rather than in the bulk \cite{susskindbook}.
\noindent
The area scaling of the entanglement entropy
is a consequence of the nonlocality of the
wave function, which relates the points inside the boundary
with those outside. In particular, we derived the area law for entropy by
imposing an asymptotic behaviour on the wave function $\psi(x,\,y)$ as $y\to\infty$,
with respect to a fixed point $x\sim 1$ inside the interior region.
\noindent
The main limitation of our model is that we considered only two particular forms of
the wave function $\psi$. However, more general forms of $\psi$
might be considered in future developments of the model.
Let us finally stress
that our results are valid
at the leading order in the dimensionless parameter $\lambda\gg 1$ appearing in
the wave function $\psi$ of the system.
\noindent
The treatment presented in this study
for the entanglement entropy of a quantum system is an extremely
simplified model,
but the accordance of our result with the holographic bound
and with the area scaling of the entanglement entropy indicates that we
isolated the essential physics of the problem in spite of all simplifications.
\noindent
A remarkable finding of this study is also that the area scaling of the entanglement entropy bound is intrinsic to quantum mechanics and does not necessarily depend on quantum field theory.
\begin{acknowledgments}
\noindent
Most of the work in this study was carried out during the author's Ph.D. in Nuclear and Subnuclear Physics and Astrophysics at the Physics Department of the University of Cagliari.
\noindent
The author is very grateful to his supervisor, Prof. M. Cadoni, for helpful discussions and criticism.
\end{acknowledgments}
\appendix*
\section{Nonlocal potentials}
\noindent
In order to calculate the nonlocal potential $V(x,\,y)$,
corresponding to the wave function $\psi (x,\, y)$ in Eq. (\ref{wave}),
let us consider the time-independent Schr\"odinger equation
\begin{displaymath}
\left(-\frac{\hbar^2}{2m}\nabla^2+V\right)\psi(x,\,y) = E\psi(x,\,y)\, ,
\end{displaymath}
where $E$ is the ground state energy of the system while
the Laplace operator $\nabla^2$ is expressed by the identity
\begin{displaymath}
\nabla^2 \equiv \frac{1}{r^2}\frac{\partial}{\partial r}\left
(r^2\frac{\partial}{\partial r}\right ) \equiv
\frac{2}{r}\frac{\partial}{\partial r} + \frac{\partial^2}{\partial r^2}\, ,
\end{displaymath}
valid for spherically symmetric systems with radial coordinate $r$.
\noindent
The restrictions of the potential $V(x,\,y)$ to the regions $A$ and $B$ are defined,
respectively, as
\begin{displaymath}
V_{_{A}}(x) = \left . V_{_{A}}(x,\, y)\right |_{y=1} \quad \mbox{and} \quad
V_{_{B}}(y) = \left . V_{_{B}}(x,\, y)\right |_{x=1}\, .
\end{displaymath}
\noindent
Analogously, the wave function $\psi (x,\, y)$ in the regions $A$ and $B$ of our system can be plotted by fixing
one of the two variables $x$, $y$ and representing $\psi$ with respect
to the other variable.
\subsection{Asymptotic Coulomb potential}
By setting $y=1$ or $x=1$ in Eq. (\ref{wave}) with $n=1$ (in the case of an asymptotic
Coulomb potential), we obtain
the following restrictions of $\psi (x,\, y)$ to the regions $A$ or $B$ of our system (Fig. 1):
\begin{eqnarray}
\psi_{_A}(x) & = & \left .\psi(x,\, y)\right |_{y=1} = Ce^{-\frac{\lambda}{x}}\, ,
\quad \mbox{with} \; x\in (0,\, 1) \nonumber \\
\psi_{_B}(y) & = & \left .\psi(x,\, y)\right |_{x=1} = Ce^{-\lambda y}\, , \quad
\mbox{with} \; y\in (1,\, +\infty)\, .\nonumber
\end{eqnarray}
The shape of $\psi_{_{A}}(x)$ and $\psi_{_{B}}(y)$ is represented in the following figure:
\begin{displaymath}
\resizebox{0.52\width}{0.52\height}{\includegraphics{wf1.png}}
\end{displaymath}
\noindent
In region $A$ we obtain
\begin{displaymath}
V_{_{A}}(x,\, y) = E_{_{A}} + \frac{\hbar^2\lambda^2}{2mR^2}\frac{y^2}{x^4}\, ,
\end{displaymath}
while in region $B$ we find
\begin{displaymath}
V_{_{B}}(x,\, y) = E_{_{B}} + \frac{\hbar^2\lambda^2}{2mR^2}\frac{1}{x^2}
\left(1-\frac{2x}{\lambda y}\right)\, .
\end{displaymath}
By setting the zero of the potential $V(x,\,y)$ at infinity (for $y\to\infty$),
we obtain
\begin{displaymath}
\lim_{y\to\infty}V_{_{B}}(y) = 0 \Longrightarrow
E_{_{B}} = -\frac{\hbar^2\lambda^2}{2mR^2}\, .
\end{displaymath}
By imposing the continuity of the potential $V(x,\,y)$ on the boundary
(for $x=y=1$), we find
\begin{displaymath}
\left . V_{_{A}}(x)\right |_{x=1} = \left . V_{_{B}}(y)\right |_{y=1}
\Longrightarrow E_{_{A}} = -\frac{\hbar^2\lambda^2}{2mR^2}
\left(1+\frac{2}{\lambda}\right)\, .
\end{displaymath}
The restriction of the potential $V(x,\,y)$ to the region $A$ is
\begin{displaymath}
V_{_{A}}(x) = \frac{\hbar^2\lambda^2}{2mR^2}\left(\frac{1}{x^4}-1-
\frac{2}{\lambda}\right)\quad \mbox{with} \; x\in (0,\, 1)\, ,
\end{displaymath}
while the restriction of $V(x,\,y)$ to the region $B$ is
\begin{displaymath}
V_{_{B}}(y) = -\frac{\hbar^2\lambda}{mR^2}\frac{1}{y}
\quad \mbox{with} \; y\in (1,\, +\infty)\, .
\end{displaymath}
The behaviour of $V_{_{A}}(x)$ and $V_{_{B}}(y)$ is represented in the following figure:
\begin{displaymath}
\resizebox{0.42\width}{0.42\height}{\includegraphics{V1.png}}
\end{displaymath}
Finally, the nonlocal potential $V(x,\,y)$,
corresponding to the nonlocal wave function $\psi (x,\, y)$ [Eq. (\ref{wave}) with $n=1$] is given by
\begin{displaymath}
V(x,\,y) = \left\{\begin{array}{ll}
\frac{\hbar^2\lambda^2}{2mR^2}\left(\frac{y^2}{x^4}-1-\frac{2}{\lambda}\right)
\quad & \mbox{region}\; A\\
\frac{\hbar^2\lambda^2}{2mR^2}\left(\frac{1}{x^2}-\frac{2}{\lambda xy}-1\right)
\quad & \mbox{region}\; B\, .
\end{array}\right .
\end{displaymath}
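The computation above can be verified symbolically. The sketch below works in units $\hbar^2/(2mR^2)=1$ and assumes that the full wave function of Eq. (\ref{wave}) with $n=1$ is $\psi\propto e^{-\lambda y/x}$, as the restrictions $\psi_{_A}$, $\psi_{_B}$ suggest; it then checks the piecewise potential just displayed.

```python
import sympy as sp

# Symbolic check of the nonlocal potential for n = 1, in units hbar^2/(2 m R^2) = 1.
# Assumption: psi(x, y) = C * exp(-lam * y / x), consistent with psi_A and psi_B.
x, y, lam = sp.symbols('x y lam', positive=True)
psi = sp.exp(-lam * y / x)

def radial_laplacian(f, r):
    # Radial part of the Laplacian for a spherically symmetric system.
    return sp.diff(f, r, 2) + 2 * sp.diff(f, r) / r

# Region A: the kinetic operator acts on the interior variable x.
VA = sp.simplify(radial_laplacian(psi, x) / psi)   # = lam^2 * y^2 / x^4
# Region B: the kinetic operator acts on the exterior variable y.
VB = sp.simplify(radial_laplacian(psi, y) / psi)   # = (lam^2/x^2)*(1 - 2x/(lam*y))

EA = -lam**2 * (1 + 2 / lam)   # from continuity of V at x = y = 1
EB = -lam**2                   # from V_B -> 0 as y -> infinity

# Compare with the piecewise potential quoted in the text (same units).
VA_text = lam**2 * (y**2 / x**4 - 1 - 2 / lam)
VB_text = lam**2 * (1 / x**2 - 2 / (lam * x * y) - 1)
assert sp.simplify((EA + VA) - VA_text) == 0
assert sp.simplify((EB + VB) - VB_text) == 0
```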
\subsection{Asymptotic harmonic oscillator potential}
By setting $y=1$ or $x=1$ in Eq. (\ref{wave}) with $n=2$ (in the case of an asymptotic harmonic oscillator potential), we obtain
the following restrictions of $\psi (x,\, y)$ to the regions $A$ or $B$ of our system (Fig. 1):
\begin{eqnarray}
\psi_{_A}(x) & = & \left .\psi(x,\, y)\right |_{y=1} = Ce^{-\frac{\lambda}{x^2}}\, ,
\quad \mbox{with} \; x\in (0,\, 1) \nonumber \\
\psi_{_B}(y) & = & \left .\psi(x,\, y)\right |_{x=1} = Ce^{-\lambda y^2}\, , \quad
\mbox{with} \; y\in (1,\, +\infty)\, .\nonumber
\end{eqnarray}
The shape of $\psi_{_{A}}(x)$ and $\psi_{_{B}}(y)$ is represented in the following figure:
\begin{displaymath}
\resizebox{0.42\width}{0.42\height}{\includegraphics{wf2.png}}
\end{displaymath}
\noindent
In region $A$ we obtain
\begin{displaymath}
V_{_{A}}(x,\, y) = E_{_{A}} + \frac{\hbar^2\lambda}{mR^2}\frac{1}{x^4}\left(\frac{2\lambda}{x^2}-1\right)\, ,
\end{displaymath}
while in region $B$ we find
\begin{displaymath}
V_{_{B}}(x,\, y) = E_{_{B}} + \frac{\hbar^2\lambda}{mR^2}\left(2\lambda y^2-3\right)\, .
\end{displaymath}
By setting the zero of the potential $V(x,\,y)$ on the boundary (for $x=y=1$),
we obtain
\begin{displaymath}
\left . V_{_{A}}(x)\right |_{x=1} = 0 \Longrightarrow E_{_{A}} = \frac{\hbar^2\lambda}{mR^2}\left(1-2\lambda\right)
\end{displaymath}
\begin{displaymath}
\left . V_{_{B}}(y)\right |_{y=1} = 0 \Longrightarrow
E_{_{B}} = \frac{\hbar^2\lambda}{mR^2}\left(3-2\lambda\right)\, .
\end{displaymath}
The restriction of the potential $V(x,\,y)$ to the region $A$ is
\begin{displaymath}
V_{_{A}}(x) = \frac{\hbar^2\lambda}{mR^2}\left[\frac{1}{x^4}\left(\frac{2\lambda}{x^2}-1\right)+1-2\lambda\right]
\quad \mbox{with} \; x\in (0,\, 1)\, ,
\end{displaymath}
while the restriction of $V(x,\,y)$ to the region $B$ is
\begin{displaymath}
V_{_{B}}(y) = \frac{2\hbar^2\lambda^2}{mR^2}\left(y^2-1\right)\quad \mbox{with} \; y\in (1,\, +\infty)\, .
\end{displaymath}
The behaviour of $V_{_{A}}(x)$ and $V_{_{B}}(y)$ is represented in the following figure:
\begin{displaymath}
\resizebox{0.42\width}{0.42\height}{\includegraphics{V2.png}}
\end{displaymath}
\noindent
Finally, the nonlocal potential $V(x,\,y)$,
corresponding to the nonlocal wave function $\psi (x,\, y)$ [Eq. (\ref{wave}) with $n=2$] is given by
\begin{displaymath}
V(x,\,y) = \left\{\begin{array}{ll}
\frac{\hbar^2\lambda}{mR^2}\left[\frac{y^2}{x^4}\left(\frac{2\lambda y^2}{x^2}-1\right)+1-2\lambda\right]
\quad & \mbox{region}\; A\\
\frac{\hbar^2\lambda}{mR^2}\left[\frac{1}{x^2}\left(\frac{2\lambda y^2}{x^2}-3\right)+3-2\lambda\right]
\quad & \mbox{region}\; B\, .
\end{array}\right .
\end{displaymath}
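The same symbolic check goes through for the harmonic-oscillator case. In units $\hbar^2/(2mR^2)=1$, and assuming the full wave function of Eq. (\ref{wave}) with $n=2$ is $\psi\propto e^{-\lambda y^2/x^2}$, the piecewise potential above is reproduced:

```python
import sympy as sp

# Symbolic check for n = 2, in units hbar^2/(2 m R^2) = 1; the overall
# prefactor hbar^2*lam/(m R^2) of the text becomes 2*lam in these units.
# Assumption: psi(x, y) = C * exp(-lam * y**2 / x**2).
x, y, lam = sp.symbols('x y lam', positive=True)
psi = sp.exp(-lam * y**2 / x**2)
lap = lambda f, r: sp.diff(f, r, 2) + 2 * sp.diff(f, r) / r

EA = 2 * lam * (1 - 2 * lam)   # from V_A = 0 at x = y = 1
EB = 2 * lam * (3 - 2 * lam)   # from V_B = 0 at x = y = 1

VA_text = 2 * lam * ((y**2 / x**4) * (2 * lam * y**2 / x**2 - 1) + 1 - 2 * lam)
VB_text = 2 * lam * ((1 / x**2) * (2 * lam * y**2 / x**2 - 3) + 3 - 2 * lam)
assert sp.simplify(EA + lap(psi, x) / psi - VA_text) == 0
assert sp.simplify(EB + lap(psi, y) / psi - VB_text) == 0
```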
\end{document} |
\begin{document}
\title[Patterns of conjunctive forks]{\large Patterns of conjunctive forks\\[0.5ex]~~}
\author{Va\v{s}ek Chv\'{a}tal, Franti\v sek Mat\' u\v s and Yori Zw\'ol\v{s}}
\address{Department of Computer Science and Software Engineering\\
Concordia University, Montreal, Quebec H3G 1M8\\ Canada}
\email[V.\ Chv\'{a}tal]{[email protected]}
\thanks{Research of V.\ Chv\'{a}tal and Y.\ Zw\'ol\v{s} was supported by the Canada Research
Chairs program and by the Natural Sciences and
Engineering Research Council of Canada}
\address{Institute of Information Theory and Automation\\
Academy of Sciences of the Czech Republic\\
Pod vod\' arenskou v\v e\v z\'{\i} 4, 182 08 Prague~8\\
Czech Republic}
\email[F.\ Mat\' u\v s]{[email protected]}
\thanks{Research of F.\ Mat\' u\v s was supported by
Grant Agency of the Czech Republic under Grant 13-20012S}
\address{Google DeepMind, London, United Kingdom}
\email[Y.\ Zw\'ol\v{s}]{[email protected]}
\keywords{Conjunctive fork, conditional independence, covariance, correlation,
binary random variables, causal betweenness, causality.}
\subjclass[2010]{Primary 62H20,
secondary 62H05.
}
\begin{abstract}
Three events in a probability space form a conjunctive fork if
they satisfy specific constraints on conditional independence
and covariances. Patterns of conjunctive forks within collections
of events are characterized by means of
systems of linear equations that have positive solutions. This characterization allows
patterns of conjunctive forks to be recognized in polynomial time.
Relations
to previous work on causal betweenness and
on patterns of conditional independence among random variables
are discussed.
\end{abstract}
\maketitle
\newcommand{\zlo}[2]{\ensuremath{\mbox{\large$\frac{#1}{#2}$}}}
\newcommand{\abs}[1]{\lvert#1\rvert}
\newcommand{\peq}{\mathrel{\smash{\overset{\scriptscriptstyle\ensuremath{P}\xspace}{=}}}}
\newcommand{\eee}{\ensuremath{\mathrel{\smash{\overset{\scriptscriptstyle\ensuremath{\mathfrak r}\xspace}{\thicksim}}}}\xspace}
\newcommand{\eeq}{\ensuremath{\mathrel{\smash{\overset{\scriptscriptstyle\ensuremath{{\mathfrak q}}\xspace}{\thicksim}}}}\xspace}
\newcommand{\cvr}[2]{\ensuremath{\mathrm{cov}({#1},{#2})}\xspace}
\section{Motivation}
Hans Reichenbach \cite[Chapter 19]{Rei56} defined a \emph{conjunctive fork}
as an ordered triple $(A,B,C)$ of events $A$, $B$ and $C$ in a probability
space $(\ensuremath{\mathit\Omega}\xspace,\mathcal F,\ensuremath{P}\xspace)$ that~satisfies
\begin{align}
\ensuremath{P}\xspace(AC|B) &= \ensuremath{P}\xspace(A|B) \ensuremath{P}\xspace(C|B)\,,\label{reich1}\\
\ensuremath{P}\xspace(AC|\overline{B}) &= \ensuremath{P}\xspace(A|\overline{B}) \ensuremath{P}\xspace(C|\overline{B})\,,\label{reich2}\\
\ensuremath{P}\xspace(A|B) &> \ensuremath{P}\xspace(A|\overline{B})\,,\label{reich3}\\
\ensuremath{P}\xspace(C|B) &> \ensuremath{P}\xspace(C|\overline{B})\,\label{reich4}
\end{align}
where, as usual, $AC$ is a shorthand for $A\cap C$ and $\overline{B}$ denotes the
complementary event $\ensuremath{\mathit\Omega}\xspace\setminus B$. (Readers comparing this definition with Reichenbach's
original beware: his notation is modified here by switching the role of
$B$ and $C$. To denote the middle event in the fork, he used $C$, perhaps
as mnemonic for `common cause'.)
Implicit in this definition is the assumption
\begin{equation}
0<\ensuremath{P}\xspace(B)<1\, ,\label{reich5}
\end{equation}
which is needed to define the conditional probabilities in~\eqref{reich1}--\eqref{reich4}.
A similar notion
was introduced earlier in the context of sociology \cite[Part~I,
Section 2]{KenLaz50}, but the context of Reichenbach's discourse was philosophy of science: conjunctive
forks play a central role in his causal theory of time. In this
role, they have attracted considerable attention: over one hundred
publications, such as \cite{Bre77,Sal80,Sal84,EllEri86,CarJon91,
Dow92,Spo94,HofRedSza99,Kor99}, refer to them. Yet for all this interest, no one seems to have asked
a fundamental question:
\begin{center}
\fbox{\emph{What do ternary relations defined by conjunctive forks look like?}}
\end{center}
The purpose of our paper is to answer this question.
An additional stimulus to our work was a previous answer~\cite{ChvWu12} to a similar question,
\begin{center}
\fbox{\emph{What do ternary relations defined by causal betweenness look like?}}
\end{center}
Here, \emph{causal betweenness} is another ternary relation on sets of events in probability spaces, also introduced by Reichenbach~\cite[p.~190]{Rei56} in the context of his causal theory of time.
(This relation is reviewed in Section~\ref{S:btw}.) From this perspective, our paper may be seen as a companion to~\cite{ChvWu12}.
\section{The main result}
Let us write $(A,B,C)_{\ensuremath{P}\xspace}$ to signify that
$(A,B,C)$ is a conjunctive fork in a probability
space $(\ensuremath{\mathit\Omega}\xspace,\mathcal F,\ensuremath{P}\xspace)$ and let us say that
events $A_i$ in this space, indexed by elements $i$ of a set $N$,
\emph{fork-represent} a ternary relation $\ensuremath{\mathfrak r}\xspace$ on $N$ if and only if
\[
\ensuremath{\mathfrak r}\xspace=\{(i,j,k)\in N^3\colon (A_i,A_j,A_k)_{\ensuremath{P}\xspace}\,\}\,.
\]
In order to characterize ternary relations on a finite ground set that are fork representable, we need a few definitions.
To begin, call a ternary relation \ensuremath{\mathfrak r}\xspace
on a ground set~$N$ a \emph{forkness} if and only if it satisfies
\begin{align}
(i,j,i)\in\ensuremath{\mathfrak r}\xspace\;\;&\Rightarrow\;\;(j,i,j)\in\ensuremath{\mathfrak r}\xspace\label{sym}\\
(i,j,i), (j,k,j)\in\ensuremath{\mathfrak r}\xspace
\;\;&\Rightarrow\;\;(i,k,i)\in\ensuremath{\mathfrak r}\xspace\label{E:trans}\\
(i,k,j)\in\ensuremath{\mathfrak r}\xspace\;\;&\Rightarrow\;\;(j,k,i)\in\ensuremath{\mathfrak r}\xspace\label{flip}\\
(i,j,k)\in\ensuremath{\mathfrak r}\xspace\;\;&\Rightarrow\;\;(i,j,j), (j,k,k),(k,i,i)\in \ensuremath{\mathfrak r}\xspace \label{lower}\\
(i,k,j), (i,j,k)\in\ensuremath{\mathfrak r}\xspace\;\;&\Rightarrow\;\; (j,k,j)\in\ensuremath{\mathfrak r}\xspace \label{btw}
\end{align}
for all choices $i,j,k$ in $N$.
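Conditions \eqref{sym}--\eqref{btw} are directly machine-checkable. The sketch below transcribes them into a Python predicate and tests it on a hypothetical relation, the closure of the single fork $(1,2,3)$ under the axioms.

```python
from itertools import product

# A direct transcription of the forkness axioms: decide whether a ternary
# relation r on a finite ground set N satisfies (sym)-(btw).
def is_forkness(r, N):
    r = set(r)
    for i, j, k in product(N, repeat=3):
        if (i, j, i) in r and (j, i, j) not in r:                      # (sym)
            return False
        if (i, j, i) in r and (j, k, j) in r and (i, k, i) not in r:   # (trans)
            return False
        if (i, k, j) in r and (j, k, i) not in r:                      # (flip)
            return False
        if (i, j, k) in r and not {(i, j, j), (j, k, k), (k, i, i)} <= r:  # (lower)
            return False
        if (i, k, j) in r and (i, j, k) in r and (j, k, j) not in r:   # (btw)
            return False
    return True

# Hypothetical example: the closure of the fork (1, 2, 3) under the axioms.
N = [1, 2, 3]
r = {(1, 2, 3), (3, 2, 1),
     (1, 2, 2), (2, 3, 3), (3, 1, 1), (3, 2, 2), (2, 1, 1), (1, 3, 3),
     (2, 2, 1), (3, 3, 2), (1, 1, 3), (2, 2, 3), (1, 1, 2), (3, 3, 1),
     (1, 1, 1), (2, 2, 2), (3, 3, 3)}
assert is_forkness(r, N)
assert not is_forkness({(1, 2, 3)}, N)   # (lower), among others, fails
```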
Given a forkness $\ensuremath{\mathfrak r}\xspace$, write
\[
V_\ensuremath{\mathfrak r}\xspace =\{i\in N\colon(i,i,i)\in\ensuremath{\mathfrak r}\xspace\}
\]
and let \eee denote the binary relation defined on $V_\ensuremath{\mathfrak r}\xspace$ by
\[
i\eee j\;\;\Leftrightarrow\;\;(i,j,i)\in\ensuremath{\mathfrak r}\xspace\,.
\]
This binary relation is reflexive by definition of $V_\ensuremath{\mathfrak r}\xspace$, it
is symmetric by~\eqref{sym}, and it is transitive by~\eqref{E:trans}. In short, \eee is
an equivalence relation. Call a forkness \ensuremath{\mathfrak r}\xspace \emph{regular} if, and only if,
\begin{equation}\label{E:regular}
(i,j,k)\in\ensuremath{\mathfrak r}\xspace, \; i\eee i', \; j\eee j', \; k\eee k'
\;\Rightarrow\; (i',j',k')\in \ensuremath{\mathfrak r}\xspace\,.
\end{equation}
The \emph{quotient} of a regular forkness \ensuremath{\mathfrak r}\xspace is the ternary relation whose ground set is the set of equivalence classes of \eee
and which
consists of all triples $(I,J,K)$ such that $(i,j,k)\in\ensuremath{\mathfrak r}\xspace$
for at least one $i$ in $I$, at least one $j$ in $J$, and at least one $k$ in $K$. (Equivalently, since \ensuremath{\mathfrak r}\xspace is regular, $(I,J,K)$ belongs to its quotient if and only if
$(i,j,k)\in\ensuremath{\mathfrak r}\xspace$ for all $i$ in $I$, for all $j$ in $J$,~and~for all~$k$~in~$K$.)
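This construction is straightforward to carry out mechanically. The sketch below computes the equivalence classes of \eee and the quotient relation for a small, hypothetical regular forkness in which two elements are equivalent.

```python
# A sketch of the quotient construction for a regular forkness r on N:
# the equivalence classes of ~ (i ~ j iff (i, j, i) in r) and the induced
# ternary relation on them.
def quotient(r, N):
    r = set(r)
    V = [i for i in N if (i, i, i) in r]
    classes = []
    for i in V:
        for cls in classes:
            if (i, cls[0], i) in r:      # i ~ chosen representative of cls
                cls.append(i)
                break
        else:
            classes.append([i])
    classes = [frozenset(c) for c in classes]
    cls_of = {i: c for c in classes for i in c}
    q = {(cls_of[i], cls_of[j], cls_of[k]) for (i, j, k) in r}
    return classes, q

# Hypothetical example: 1 ~ 2, so the quotient has one class C and one triple.
N = [1, 2]
r = {(1, 1, 1), (2, 2, 2), (1, 2, 1), (2, 1, 2),
     (1, 2, 2), (2, 1, 1), (2, 2, 1), (1, 1, 2)}
classes, q = quotient(r, N)
assert len(classes) == 1
assert q == {(classes[0], classes[0], classes[0])}
```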
Call a ternary relation \ensuremath{{\mathfrak q}}\xspace \emph{solvable} if and only if the linear system
\begin{multline}\label{system}
x_{\{I,K\}}\!=\!x_{\{I,J\}}+x_{\{J,K\}}\\
\text{ for all $(I,J,K)$ in $\ensuremath{{\mathfrak q}}\xspace$ with pairwise distinct $I,J,K$}
\end{multline}
has a solution with each $x_{\{I,J\}}$ positive.
\begin{thm}\label{T:ChMaZw}
A ternary relation on a finite ground set is fork representable if and only
if it is a regular forkness and its quotient is solvable.
\end{thm}
Theorem~\ref{T:ChMaZw} implies that fork-representability of a ternary
relation~$\ensuremath{\mathfrak r}\xspace$ on a finite ground set $N$ can be tested in time polynomial
in~$\abs{N}$. More precisely, polynomial time suffices to test whether
\ensuremath{\mathfrak r}\xspace is a forkness, to test its regularity, and to construct its quotient \ensuremath{{\mathfrak q}}\xspace. Solvability of \ensuremath{{\mathfrak q}}\xspace means solvability of a system
of linear equations and linear inequalities, which can be tested in polynomial
time by the breakthrough result of~\cite{Kha79}.
We prove the easier `only if' part of Theorem~\ref{T:ChMaZw} in Section~\ref{S:onlyif} and we prove the `if' part in Section~\ref{S:mainproof}.
In Section~\ref{S:btw}, we comment on causal betweenness and its relationship to causal forks.
In the final Section~\ref{S:disc}, we discuss connections to previous work on
patterns of conditional independence.
\section{Proof of the `only if' part}\label{S:onlyif}
\subsection{Reichenbach's definition restated}
Reichenbach's definition of a conjunctive fork has a neat paraphrase in terms of random variables. To present it, let us first review a few standard definitions.
The \emph{indicator function\/} $\ensuremath{\mathds{1}}_E$ of an event $E$ in a probability space is the random variable defined by $\ensuremath{\mathds{1}}_E(\omega)=1$ if $\omega\in E$
and $\ensuremath{\mathds{1}}_E(\omega)=0$ if $\omega\in \overline{E}$. Indicator functions $\ensuremath{\mathds{1}}_A$ and $\ensuremath{\mathds{1}}_C$ are said to be \emph{conditionally independent given $\ensuremath{\mathds{1}}_B$,\/} in symbols $\ensuremath{\mathds{1}}_A\ensuremath{\!\perp\!\!\!\!\perp\!}\ensuremath{\mathds{1}}_C|\ensuremath{\mathds{1}}_B$, if and only if events $A$, $B$, $C$ satisfy~\eqref{reich1} and~\eqref{reich2}.
The \emph{covariance\/} of $\ensuremath{\mathds{1}}_A$ and $\ensuremath{\mathds{1}}_B$, denoted here as $\cvr{A}{B}$, is defined by
$$\cvr{A}{B}=\ensuremath{P}\xspace(AB)-\ensuremath{P}\xspace(A)\ensuremath{P}\xspace(B).$$
Since~\eqref{reich3} means that $\cvr{A}{B}>0$ and~\eqref{reich4} means that $\cvr{B}{C}>0$, we conclude that
\begin{multline}
(A,B,C)_\ensuremath{P}\xspace \Leftrightarrow
\ensuremath{\mathds{1}}_A\ensuremath{\!\perp\!\!\!\!\perp\!}\ensuremath{\mathds{1}}_C|\ensuremath{\mathds{1}}_B \;\&\; \cvr{A}{B}\!\!>\!\!0 \;\&\; \cvr{B}{C}\!\!>\!\!0. \label{fork}
\end{multline}
\subsection{A couple of Reichenbach's results}
Reichenbach~\cite[p.~160, equation~(12)]{Rei56} noted that
\begin{multline}\label{rr1}
\ensuremath{\mathds{1}}_A\ensuremath{\!\perp\!\!\!\!\perp\!}\ensuremath{\mathds{1}}_C|\ensuremath{\mathds{1}}_B \;\Rightarrow\;\\
\cvr{A}{C}=\cvr{B}{B}\cdot(\ensuremath{P}\xspace(A|B)-\ensuremath{P}\xspace(A|\overline{B}))\cdot(\ensuremath{P}\xspace(C|B)-\ensuremath{P}\xspace(C|\overline{B}))
\end{multline}
and so~\cite[p.~158, inequality (1)]{Rei56}
\begin{equation}\label{covac}
(A,B,C)_\ensuremath{P}\xspace \Rightarrow \cvr{A}{C}>0.
\end{equation}
Implication~\eqref{covac} was his reason for calling the fork
`conjunctive': it is ``a fork which makes the conjunction of the two
events more frequent than it would be for independent events''
\cite[p.~159]{Rei56}.
Since
\begin{multline*}
\cvr{B}{B}(\ensuremath{P}\xspace(A|B)-\ensuremath{P}\xspace(A|\overline{B}))=\\ \ensuremath{P}\xspace(AB)(1-\ensuremath{P}\xspace(B))-\ensuremath{P}\xspace(A\overline{B})\ensuremath{P}\xspace(B)=\cvr{A}{B}
\end{multline*}
and, similarly, $\cvr{B}{B}(\ensuremath{P}\xspace(C|B)-\ensuremath{P}\xspace(C|\overline{B}))=\cvr{C}{B}$,
Reichenbach's implication~\eqref{rr1} can be stated as
\begin{equation}\label{fero}
\ensuremath{\mathds{1}}_A\ensuremath{\!\perp\!\!\!\!\perp\!}\ensuremath{\mathds{1}}_C|\ensuremath{\mathds{1}}_B \;\Rightarrow\;\cvr{A}{C}\cdot\cvr{B}{B}=\cvr{A}{B}\cdot\cvr{B}{C}.
\end{equation}
\subsection{The idea of the proof}
An event $E$ is called \emph{\ensuremath{P}\xspace-nontrivial\/} if and only if $0<\ensuremath{P}\xspace(E)<1$, which is equivalent to $\cvr{E}{E}>0$.
When $E,F$ are \ensuremath{P}\xspace-nontrivial events, the \emph{correlation\/} of their indicator functions, denoted here as $\ensuremath{\mathrm{corr}}\xspace(E,F)$, is defined
by
\[
\ensuremath{\mathrm{corr}}\xspace(E,F)\;=\; \frac {\cvr{E}{F}}{\cvr{E}{E}^{1/2}\cvr{F}{F}^{1/2}}\, .
\]
In these terms, \eqref{fero} reads
\begin{equation}\label{corident}
\ensuremath{\mathds{1}}_A\ensuremath{\!\perp\!\!\!\!\perp\!}\ensuremath{\mathds{1}}_C|\ensuremath{\mathds{1}}_B \;\Rightarrow\;\ensuremath{\mathrm{corr}}\xspace(A,C)=\ensuremath{\mathrm{corr}}\xspace(A,B)\cdot \ensuremath{\mathrm{corr}}\xspace(B,C).
\end{equation}
The strict inequalities \eqref{reich3}, \eqref{reich4}, \eqref{reich5} imply that
\begin{multline}\label{nontriv}
\text{in every conjunctive fork $(A,B,C)$,}\\ \text{all three events $A$, $B$, $C$ are \ensuremath{P}\xspace-nontrivial,}
\end{multline}
and so $\ensuremath{\mathrm{corr}}\xspace(A,B)$, $\ensuremath{\mathrm{corr}}\xspace(A,C)$, $\ensuremath{\mathrm{corr}}\xspace(B,C)$ are well defined;
\eqref{fork} guarantees that $\ensuremath{\mathrm{corr}}\xspace(A,B)>0$, $\ensuremath{\mathrm{corr}}\xspace(B, C)>0$ and \eqref{covac} guarantees that $\ensuremath{\mathrm{corr}}\xspace(A,C)>0$.
Fact~\eqref{corident} guarantees that the system
\[
x_{\{A,C\}}\!=\!x_{\{A,B\}}+x_{\{B,C\}}
\text{ for all conjunctive forks $(A,B,C)$}
\]
can be solved by setting $x_{\{E,F\}} = -\ln \ensuremath{\mathrm{corr}}\xspace(E,F)$.
This observation goes a long way toward proving the `only if' part of Theorem~\ref{T:ChMaZw}, but it does not quite get there:
For instance, if $(A,B,C)$ is a conjunctive fork and $A\peq B\,$, then $\ensuremath{\mathrm{corr}}\xspace(A,B)=1$, and so $x_{\{A,B\}}=0$, but the `only if'
part of the theorem requires $x_{\{A,B\}}>0$. To get around such obstacles, we deal with
the quotient of the ternary relation made from conjunctive forks.
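The observation above can be illustrated on a concrete fork. The sketch below builds a hypothetical common-cause distribution on $\{0,1\}^3$ in which $A$ and $C$ are conditionally independent given $\ensuremath{\mathds{1}}_B$, and checks both the positivity of the three correlations and the product identity \eqref{corident}.

```python
from itertools import product

# A finite check of corr(A,C) = corr(A,B) * corr(B,C) on a concrete
# conjunctive fork: B is a "common cause" with P(B) = 1/2, and A, C are
# conditionally independent given B, with P(A|B) = 0.8 > 0.2 = P(A|~B)
# and P(C|B) = 0.7 > 0.3 = P(C|~B).  (All numbers are hypothetical.)
pB = 0.5
pA = {1: 0.8, 0: 0.2}
pC = {1: 0.7, 0: 0.3}

# Joint distribution over outcomes (a, b, c) in {0,1}^3.
P = {}
for a, b, c in product((0, 1), repeat=3):
    pb = pB if b == 1 else 1 - pB
    pa = pA[b] if a == 1 else 1 - pA[b]
    pc = pC[b] if c == 1 else 1 - pC[b]
    P[a, b, c] = pb * pa * pc

def E(f):  # expectation of a function of the outcome
    return sum(p * f(w) for w, p in P.items())

def cov(f, g):
    return E(lambda w: f(w) * g(w)) - E(f) * E(g)

def corr(f, g):
    return cov(f, g) / (cov(f, f) * cov(g, g)) ** 0.5

A = lambda w: w[0]
B = lambda w: w[1]
C = lambda w: w[2]

# (A, B, C) is a conjunctive fork: all three correlations are positive ...
assert corr(A, B) > 0 and corr(B, C) > 0 and corr(A, C) > 0
# ... and the product identity holds, so x_{E,F} = -ln corr(E,F) is additive.
assert abs(corr(A, C) - corr(A, B) * corr(B, C)) < 1e-12
```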
\subsection{Other preliminaries}
Events $E$ and $F$ are said to be \ensuremath{P}\xspace-equal, in symbols $E\peq F$, if and only
if $\ensuremath{P}\xspace(E{\vartriangle}F)=0$. We claim that
\begin{align}
& (A,B,A)_{\ensuremath{P}\xspace} \text{ if and only if $A$ is \ensuremath{P}\xspace-nontrivial and $A\peq B\,$,}\label{iji} \\
& \text{if $(A,B,C)_{\ensuremath{P}\xspace}$ and $(A,C,B)_{\ensuremath{P}\xspace}$, then $B\peq C$.}\label{fbtw}
\end{align}
To justify claim~\eqref{iji}, note that $(A,B,A)_{\ensuremath{P}\xspace}$ means the conjunction of $\cvr{A}{B}>0$ and at least one of
$\ensuremath{P}\xspace(A)=0$, $\ensuremath{P}\xspace(A)=1$, $A\peq \overline{B}$, $A\peq B$; of the four equalities, only the last one is compatible with $\cvr{A}{B}>0$.
To justify claim~\eqref{fbtw}, note that $\cvr{B}{B}-\cvr{B}{C}=\ensuremath{P}\xspace (B)\ensuremath{P}\xspace (\overline{B} C) +\ensuremath{P}\xspace (\overline{B})\ensuremath{P}\xspace (B\overline{C})$; when $B$ is
$\ensuremath{P}\xspace$-nontrivial, the right-hand side vanishes if and only if $\ensuremath{P}\xspace(B{\vartriangle}C)=0$; it follows that
\begin{align*}
& \text{if $(A,B,C)_{\ensuremath{P}\xspace}$, then $\cvr{B}{B}\geqslant\cvr{B}{C}$}\\
& \text{with equality if and only if $B\peq C\,$;}
\end{align*}
by \eqref{fero}, this implies that
\begin{align*}
& \text{if $(A,B,C)_{\ensuremath{P}\xspace}$, then $\cvr{A}{B}\geqslant\cvr{A}{C}$}\\
& \text{with equality if and only if $B\peq C\,$,}
\end{align*}
which in turn implies~\eqref{fbtw}.
\subsection{The proof}
\begin{lem}\label{V1}
A ternary relation is fork representable only
if it is a regular forkness.
\end{lem}
\begin{proof}
Consider a fork-representable ternary relation \ensuremath{\mathfrak r}\xspace on a ground set $N$. Proving the lemma means verifying that \ensuremath{\mathfrak r}\xspace
has properties \eqref{sym}, \eqref{E:trans}, \eqref{flip}, \eqref{lower}, \eqref{btw}, \eqref{E:regular}.
Since \ensuremath{\mathfrak r}\xspace is fork representable, there are events $A_i$ in some probability
space $(\ensuremath{\mathit\Omega}\xspace,\mathcal F,\ensuremath{P}\xspace)$, with $i$ ranging over $N$, such that
\[
(i,j,k)\in \ensuremath{\mathfrak r}\xspace \;\Leftrightarrow\;
(A_i,A_j,A_k)_{\ensuremath{P}\xspace}\, .
\]
Properties \eqref{sym} and \eqref{E:trans},
\begin{align*}
& (i,j,i)\in\ensuremath{\mathfrak r}\xspace\;\;\Rightarrow\;\;(j,i,j)\in\ensuremath{\mathfrak r}\xspace\, ,\\
& (i,j,i), (j,k,j)\in\ensuremath{\mathfrak r}\xspace\;\;\Rightarrow\;\;(i,k,i)\in\ensuremath{\mathfrak r}\xspace\, ,
\end{align*}
follow from \eqref{iji}. Property~\eqref{flip},
\begin{align*}
& (i,k,j)\in\ensuremath{\mathfrak r}\xspace\;\;\Rightarrow\;\;(j,k,i)\in\ensuremath{\mathfrak r}\xspace\, ,
\end{align*}
is implicit in the definition of $(A_i,A_j,A_k)_{\ensuremath{P}\xspace}$. Property~\eqref{lower},
\begin{align*}
& (i,j,k)\in\ensuremath{\mathfrak r}\xspace\;\;\Rightarrow\;\;(i,j,j), (j,k,k),(k,i,i)\in \ensuremath{\mathfrak r}\xspace\, ,
\end{align*}
follows from~\eqref{covac}. Property \eqref{btw},
\begin{align*}
& (i,k,j), (i,j,k)\in\ensuremath{\mathfrak r}\xspace\;\;\Rightarrow\;\; (j,k,j)\in\ensuremath{\mathfrak r}\xspace\, ,
\end{align*}
follows from \eqref{fbtw} and \eqref{iji}. Property~\eqref{E:regular},
\[
(i,j,k)\in\ensuremath{\mathfrak r}\xspace, \; i\eee i', \; j\eee j', \; k\eee k'
\;\Rightarrow\; (i',j',k')\in \ensuremath{\mathfrak r}\xspace\, ,
\]
follows from~\eqref{iji} alone.
\end{proof}
\begin{lem}\label{V2}
A regular forkness is fork representable only if its quotient is solvable.
\end{lem}
\begin{proof}
Consider a fork-representable regular forkness \ensuremath{\mathfrak r}\xspace on a ground set $N$.
Since \ensuremath{\mathfrak r}\xspace is fork representable, there are events $A_i$ in some probability
space $(\ensuremath{\mathit\Omega}\xspace,\mathcal F,\ensuremath{P}\xspace)$, with $i$ ranging over $N$, such that
\[
(i,j,k)\in \ensuremath{\mathfrak r}\xspace \;\Leftrightarrow\;
(A_i,A_j,A_k)_{\ensuremath{P}\xspace}\, .
\]
With \ensuremath{{\mathfrak q}}\xspace standing for the quotient of \ensuremath{\mathfrak r}\xspace, proving the lemma means finding
positive numbers $x_{\{I,J\}}$ such that
\begin{multline}\label{solution}
x_{\{I,K\}}\!=\!x_{\{I,J\}}+x_{\{J,K\}}\\
\text{ for all $(I,J,K)$ in $\ensuremath{{\mathfrak q}}\xspace$ with pairwise distinct $I,J,K$.}
\end{multline}
We claim that this can be done by first choosing an element $r(I)$ from each equivalence class $I$ of \eee and then setting $$x_{\{I,J\}} = -\ln \ensuremath{\mathrm{corr}}\xspace({A_{r(I)}},{A_{r(J)}})$$
for every pair $I,J$ of distinct equivalence classes that appear together in a triple in \ensuremath{{\mathfrak q}}\xspace.
(By \eqref{iji}, the right hand side depends only on $I$ and $J$ rather than the choice of $r(I)$ and $r(J)$.)
To justify this claim, note first that~\eqref{solution} is satisfied by virtue of~\eqref{corident}.
A special case of the {\em covariance inequality\/} guarantees that every pair $A,B$ of events satisfies
\[
(\cvr{A}{B})^2 \leqslant \cvr{A}{A}\cdot \cvr{B}{B}
\]
and that the two sides are equal if and only if
$A\peq B$ or $A\peq \overline{B}$ or $\ensuremath{P}\xspace(A)=0$ or $\ensuremath{P}\xspace(A)=1$ or $\ensuremath{P}\xspace(B)=0$ or $\ensuremath{P}\xspace(B)=1$.
If equivalence classes $I,J$ of \eee are distinct, then $A_{r(I)}\not\peq A_{r(J)}$; if, in addition, $I$ and $J$ appear together
in a triple in \ensuremath{{\mathfrak q}}\xspace, then $A_{r(I)}\not\peq \overline{A}_{r(J)}$ (since $\ensuremath{\mathrm{corr}}\xspace({A_{r(I)}},{A_{r(J)}})>0$ by~\eqref{fork} and~\eqref{covac}) and
$0<\ensuremath{P}\xspace(A_{r(I)})<1$, $0<\ensuremath{P}\xspace(A_{r(J)})<1$ (by~\eqref{nontriv}). In this case, $$\cvr{A_{r(I)}}{A_{r(J)}}^2<\cvr{A_{r(I)}}{A_{r(I)}}\cdot\cvr{A_{r(J)}}{A_{r(J)}},$$
and so $x_{\{I,J\}}>0$.
\end{proof}
\section{Proof of the `if' part}\label{S:mainproof}
\begin{lem}\label{V3}
If the quotient of a regular forkness on a finite ground set is solvable, then it is fork representable.
\end{lem}
\begin{proof}
Given the quotient \ensuremath{{\mathfrak q}}\xspace of a regular forkness on a finite ground set along with
positive numbers $x_{\{I,J\}}$ such that
\begin{multline}
x_{\{I,K\}}\!=\!x_{\{I,J\}}+x_{\{J,K\}}\\
\text{ for all $(I,J,K)$ in $\ensuremath{{\mathfrak q}}\xspace$ with pairwise distinct $I,J,K$,}\label{solution++}
\end{multline}
we have to construct a probability
space $(\ensuremath{\mathit\Omega}\xspace,\mathcal F,\ensuremath{P}\xspace)$ and
events $A_I$ in this space, indexed by elements $I$ of the ground set $C$ of \ensuremath{{\mathfrak q}}\xspace, such that
\begin{equation}\label{qeq}
\ensuremath{{\mathfrak q}}\xspace=\{(I,J,K)\in C^3\colon (A_I,A_J,A_K)_{\ensuremath{P}\xspace}\,\}\,.
\end{equation}
For this purpose, we let $\ensuremath{\mathit\Omega}\xspace$ be the power set $2^C$ of $C$, we let $\mathcal F$ be the power set $2^{\ensuremath{\mathit\Omega}\xspace}$ of $\ensuremath{\mathit\Omega}\xspace$, and we set
\[
A_I=\{\omega\in\ensuremath{\mathit\Omega}\xspace: I\in\omega\}.
\]
Now let us construct the probability measure $\ensuremath{P}\xspace$. Set $n=\abs{C}$.
Given a subset $L$ of $C$, consider the function $\chi_L\colon \ensuremath{\mathit\Omega}\xspace\rightarrow \{+1, -1\}$ defined by
\[
\chi_L(\omega)= (-1)^{\abs{\,\omega\,\cap\, L\,}}\,.
\]
Let $E_\ensuremath{{\mathfrak q}}\xspace$ stand for the family of all two-element subsets
$\{I,J\}$ of $C$ such that $I$ and $J$ appear together in a triple in \ensuremath{{\mathfrak q}}\xspace.
Let $M_\ensuremath{{\mathfrak q}}\xspace$ stand for the family of all three-element subsets
$\{I,J,K\}$ of $C$ such that $\{I,J\}$, $\{J,K\}$, and $\{K,I\}$ belong to $E_\ensuremath{{\mathfrak q}}\xspace$
and no triple in $\ensuremath{{\mathfrak q}}\xspace$ is formed by all three $I,J,K$.
Finally, for positive numbers $\gamma$ and $\ensuremath{\varepsilon}\xspace$ that are sufficiently small in a sense to be specified shortly,
define $\ensuremath{P}\xspace\colon \ensuremath{\mathit\Omega}\xspace\rightarrow {\bf R}$ by
\[
\ensuremath{P}\xspace(\omega)=2^{-n}\Big[1+
\textstyle{\sum_{\{I,J\}\in E_\ensuremath{{\mathfrak q}}\xspace}\;\chi_{\{I,J\}}(\omega)\,\gamma^{x\{I,J\}}
+\ensuremath{\varepsilon}\xspace\sum_{\{I,J,K\}\in M_\ensuremath{{\mathfrak q}}\xspace}}\;\chi_{\{I,J,K\}}(\omega)\Big].
\]
(In exponents, we write $x\{I,J\}$ in place of $x_{\{I,J\}}$.)
Here, readers with background in harmonic analysis will have recognized
the Fourier-Stieltjes transform on the group $\mathbb{Z}_2^n$ and the characters
$\chi_L$.
When $L$ is nonempty, $\chi_L$ takes each of the values $\pm1$ on the same
number of its arguments $\omega$, and so $\sum_{\omega\in{\mathit\Omega}}\chi_L(\omega)=0$, which implies that
$\sum_{\omega\in{\mathit\Omega}}P(\omega)=1$. If $\gamma<n^{-2/x\{I,J\}}$
for all $\{I,J\}\in E_\ensuremath{{\mathfrak q}}\xspace$ and $\ensuremath{\varepsilon}\xspace<n^{-3}$, then
\[
2^{n}\ensuremath{P}\xspace(\omega)\geqslant
1-\abs{E_\ensuremath{{\mathfrak q}}\xspace}\,n^{-2}-\abs{M_\ensuremath{{\mathfrak q}}\xspace}\,n^{-3}>0\,,
\]
so that \ensuremath{P}\xspace is a probability measure, positive on the elementary events.
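This construction of $\ensuremath{P}\xspace$ can be replayed numerically. The following sketch is a toy instance of our own choosing (not one from the text): it takes $C=\{0,1,2\}$ with all three pairs in $E_\ensuremath{{\mathfrak q}}\xspace$, an empty $M_\ensuremath{{\mathfrak q}}\xspace$, and solvable values $x_{\{0,1\}}=x_{\{1,2\}}=1$, $x_{\{0,2\}}=2$, and checks that $P$ is a positive probability measure whose covariances equal $\gamma^{x\{I,J\}}/4$.

```python
from itertools import combinations
import math

C = [0, 1, 2]
n = len(C)
# A solvable system: x{0,2} = x{0,1} + x{1,2}.
x = {frozenset({0, 1}): 1.0, frozenset({1, 2}): 1.0, frozenset({0, 2}): 2.0}
E_q = list(x)   # every pair appears together in the triple (0,1,2)
M_q = []        # (0,1,2) itself lies in q, so M_q is empty
gamma, eps = 0.1, 0.01

Omega = [frozenset(s) for r in range(n + 1) for s in combinations(C, r)]

def chi(L, omega):
    """Character chi_L(omega) = (-1)^{|omega intersect L|}."""
    return (-1) ** len(L & omega)

def P(omega):
    val = 1.0 + sum(chi(e, omega) * gamma ** x[e] for e in E_q)
    val += eps * sum(chi(m, omega) for m in M_q)
    return val / 2 ** n

def prob(S):
    """P(A_S) = probability that every index in S occurs."""
    return sum(P(w) for w in Omega if S <= w)

assert math.isclose(sum(P(w) for w in Omega), 1.0)   # probability measure
assert all(P(w) > 0 for w in Omega)                  # positive everywhere
for e in E_q:                                        # covariance formula
    I, J = tuple(e)
    cov = prob(e) - prob(frozenset({I})) * prob(frozenset({J}))
    assert math.isclose(cov, gamma ** x[e] / 4)
```

The assertions mirror the identities derived in the text: $P(A_I)=\tfrac12$ for every $I$, and each covariance collapses to a single power of $\gamma$.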
The bulk of the proof consists of verifying that this construction satisfies~\eqref{qeq}.
We may assume that $C\ne\emptyset$ (else $\ensuremath{\mathit\Omega}\xspace$ is a singleton, and so \eqref{qeq} holds).
Given a subset $S$ of $C$, write
\[
A_S \;=\; \{\omega\in\ensuremath{\mathit\Omega}\xspace: S\subseteq\omega\}.
\]
Since
\[
\textstyle{\sum_{\omega\in A_S}\chi_L(\omega)}=
\begin{cases}
\;(-1)^{|L|}\,2^{n-\abs{S}}& \quad\text{if $L\subseteq S$},\\
\;0 & \quad\text{otherwise},
\end{cases}
\]
we have
\begin{multline}\label{master}
2^{\abs{S}}\ensuremath{P}\xspace(A_S)= \\
1+ \textstyle{\sum_{\{I,J\}\in E_\ensuremath{{\mathfrak q}}\xspace,\:I,J\in S}\,\gamma^{x\{I,J\}}}
-\ensuremath{\varepsilon}\xspace\,\big|\{\{I,J,K\}\in M_\ensuremath{{\mathfrak q}}\xspace\colon I,J,K\in S\}\big|\,.
\end{multline}
In particular, formula \eqref{master} yields for all $I$ in $C$
\[
\ensuremath{P}\xspace(A_I)=\frac12
\]
and it yields for all choices of distinct $I,J$ in $C$
\begin{equation}\label{f2}
\ensuremath{P}\xspace(A_{\{I,J\}})= \frac14+
\begin{cases}
\;\frac14\gamma^{\,x\{I,J\}} & \quad\text{if $\{I,J\}\in E_\ensuremath{{\mathfrak q}}\xspace$},\\
\;0\, & \quad\text{otherwise.}
\end{cases}
\end{equation}
It follows that
\begin{equation}\label{cvrgamma}
\cvr{A_I}{A_J}=
\begin{cases}
\;\frac14\;\gamma^{\,x(\{I,J\})} & \text{if } \{I,J\}\in E_\ensuremath{{\mathfrak q}}\xspace,\\
\;0 & \text{otherwise.}
\end{cases}
\end{equation}
For future reference, note also that, by definition,
\begin{align}\label{qfork}
&\text{\ensuremath{{\mathfrak q}}\xspace is a forkness such that $(I,I,I)\in\ensuremath{{\mathfrak q}}\xspace$ for all $I$ in $C$}\\
&\text{and such that $(I,J,I)\not\in\ensuremath{{\mathfrak q}}\xspace$ whenever $I\ne J$.}\nonumber
\end{align}
and that
\begin{align}\label{ijj}
& \{I,J\}\in E_\ensuremath{{\mathfrak q}}\xspace \;\Leftrightarrow\; (I,J,J) \in\ensuremath{{\mathfrak q}}\xspace.
\end{align}
(Here, implication $\Rightarrow$ follows from properties~\eqref{flip} and \eqref{lower} of forkness,
and implication $\Leftarrow$ follows straight from the definition of $E_\ensuremath{{\mathfrak q}}\xspace$.)
Now we are ready to verify~\eqref{qeq}. Given a triple $(I,J,K)$ in $C^3$, we have to show that
\begin{equation}\label{verq}
(A_I,A_J,A_K)_{\ensuremath{P}\xspace} \Leftrightarrow (I,J,K)\in\ensuremath{{\mathfrak q}}\xspace\, .
\end{equation}
{\sc Case 1:} $I=J=K$.\\
Since $C\ne\emptyset$, all events $A_I$ with $I$ in $C$ are $\ensuremath{P}\xspace$-nontrivial, and so
we have $(A_I,A_I,A_I)_{\ensuremath{P}\xspace}$. By~\eqref{qfork}, we have $(I,I,I)\in\ensuremath{{\mathfrak q}}\xspace$.
{\sc Case 2:} $I\ne J$, $K=I$.\\
Here, $A_I\not\peq A_J$, and so~\eqref{iji} implies that
$(A_I,A_J,A_I)$ is not a conjunctive fork. By \eqref{qfork}, we have $(I,J,I)\not\in\ensuremath{{\mathfrak q}}\xspace$.
{\sc Case 3:} $I\ne J$, $K=J$.\\
If $\{I,J\}\in E_\ensuremath{{\mathfrak q}}\xspace$, then~\eqref{cvrgamma} guarantees $\cvr{A_I}{A_J}>0$, which implies $(A_I,A_J,A_J)_{\ensuremath{P}\xspace}$. By~\eqref{ijj}, we have $(I,J,J)\in\ensuremath{{\mathfrak q}}\xspace$.
If $\{I,J\}\not\in E_\ensuremath{{\mathfrak q}}\xspace$, then~\eqref{cvrgamma} guarantees $\cvr{A_I}{A_J}=0$, and so $(A_I,A_J,A_J)$ is not a conjunctive fork. By~\eqref{ijj}, we have $(I,J,J)\not\in\ensuremath{{\mathfrak q}}\xspace$.
{\sc Case 4:} $I= J$, $K\ne J$.\\
This case is reduced to {\sc Case 3} by the flip $I\leftrightarrow K$, which preserves both sides of~\eqref{verq}.
{\sc Case 5:} $I,J,K$ {\em are pairwise distinct and at least one of
$\{I,J\}$, $\{J,K\}$, $\{K,I\}$ does not belong to $E_\ensuremath{{\mathfrak q}}\xspace$.\/}\\
By~\eqref{cvrgamma}, at least one of the covariances $\cvr{A_I}{A_J}$, $\cvr{A_J}{A_K}$, $\cvr{A_K}{A_I}$ vanishes,
and so~\eqref{fork}, \eqref{covac} guarantee that $(A_I,A_J,A_K)$ is not a conjunctive fork. By definition of
$E_\ensuremath{{\mathfrak q}}\xspace$, we have $(I,J,K)\not\in\ensuremath{{\mathfrak q}}\xspace$.
{\sc Case 6:} $I,J,K$ {\em are pairwise distinct and all of $\{I,J\}$, $\{J,K\}$, $\{K,I\}$ belong to $E_\ensuremath{{\mathfrak q}}\xspace$.\/}\\
By~\eqref{cvrgamma}, all of $\cvr{A_I}{A_J}$, $\cvr{A_J}{A_K}$, $\cvr{A_K}{A_I}$ are positive.
Now~\eqref{fork} implies that $(A_I,A_J,A_K)_{\ensuremath{P}\xspace}$ is equivalent to
$\ensuremath{\mathds{1}}_{A_I}\ensuremath{\!\perp\!\!\!\!\perp\!}\ensuremath{\mathds{1}}_{A_K}|\ensuremath{\mathds{1}}_{A_J}$, which means the conjunction of
\begin{align*}
\tfrac12\ensuremath{P}\xspace(A_{\{I,J,K\}})&=\ensuremath{P}\xspace(A_{\{I,J\}})\ensuremath{P}\xspace(A_{\{J,K\}})\,,\\
\tfrac12\big[\ensuremath{P}\xspace(A_{\{I,K\}})-\ensuremath{P}\xspace(A_{\{I,J,K\}})\big]&=
\big[\tfrac12-\ensuremath{P}\xspace(A_{\{I,J\}})\big]\cdot
\big[\tfrac12-\ensuremath{P}\xspace(A_{\{J,K\}})\big]\,.
\end{align*}
Substitution from \eqref{f2} converts these two equalities to
\begin{align}
\label{E:vind}
8\ensuremath{P}\xspace(A_{\{I,J,K\}})=&
1\!+\!\gamma^{\,x\{I,J\}}\!+\!\gamma^{\,x\{J,K\}}
\!+\!\gamma^{\,x\{I,J\}+x\{J,K\}}\\
8\ensuremath{P}\xspace(A_{\{I,J,K\}})=&
1\!+\!\gamma^{\,x\{I,J\}}\!+\!\gamma^{\,x\{J,K\}}
\!-\!\gamma^{\,x\{I,J\}+x\{J,K\}}\!+\!2\gamma^{\,x\{I,K\}}.
\label{E:vind2}
\end{align}
Conjunction of~\eqref{E:vind} and~\eqref{E:vind2} is equivalent to the conjunction of~\eqref{E:vind} and
\begin{equation}\label{E:vind3}
x_{\{I,J\}}+x_{\{J,K\}}=x_{\{I,K\}}\, .
\end{equation}
To summarize, $(A_I,A_J,A_K)_{\ensuremath{P}\xspace}$ is equivalent to the conjunction of~\eqref{E:vind} and~\eqref{E:vind3}.
{\sc Subcase 6.1:} $\{I,J,K\}\in M_\ensuremath{{\mathfrak q}}\xspace$.\\
In this subcase, formula \eqref{master} yields
\[
8\,\ensuremath{P}\xspace(A_{\{I,J,K\}})=1+ \gamma^{\,x\{I,J\}}+ \gamma^{\,x\{J,K\}}+ \gamma^{\,x\{I,K\}}-\ensuremath{\varepsilon}\xspace\, ,
\]
which reduces \eqref{E:vind} to
$\gamma^{\,x\{I,K\}}-\ensuremath{\varepsilon}\xspace =\gamma^{\,x\{I,J\}+x\{J,K\}}$.
This is inconsistent with~\eqref{E:vind3}, and so $(A_I,A_J,A_K)$ is not a conjunctive fork.
By definition of $M_\ensuremath{{\mathfrak q}}\xspace$, we have $(I,J,K)\not\in\ensuremath{{\mathfrak q}}\xspace$.
{\sc Subcase 6.2:} $\{I,J,K\}\not\in M_\ensuremath{{\mathfrak q}}\xspace$.\\
In this subcase, formula \eqref{master} yields
\[
8\,\ensuremath{P}\xspace(A_{\{I,J,K\}})=1+ \gamma^{\,x\{I,J\}}+ \gamma^{\,x\{J,K\}}+ \gamma^{\,x\{I,K\}} ,
\]
which reduces \eqref{E:vind} to~\eqref{E:vind3}, and so
$(A_I,A_J,A_K)_{\ensuremath{P}\xspace}$ is equivalent to~\eqref{E:vind3} alone.
Now completing the proof means verifying that
\[
x_{\{I,J\}}+x_{\{J,K\}}=x_{\{I,K\}} \;\;\Leftrightarrow\;\; (I,J,K)\in\ensuremath{{\mathfrak q}}\xspace .
\]
Implication $\Leftarrow$ is~\eqref{solution++}.
To prove the reverse implication, note first that by~\eqref{solution++} along with $x_{\{I,J\}}>0$ and $x_{\{J,K\}}>0$, we have
\begin{align}
x_{\{I,J\}}+x_{\{J,K\}}=x_{\{I,K\}} \;\;\Rightarrow\;\; x_{\{J,K\}}\!<\!x_{\{I,K\}} \;\;\Rightarrow\;\; (J,I,K)\not\in \ensuremath{{\mathfrak q}}\xspace\, ,\label{jik}\\
x_{\{I,J\}}+x_{\{J,K\}}=x_{\{I,K\}} \;\;\Rightarrow\;\; x_{\{I,J\}}\!<\!x_{\{I,K\}} \;\;\Rightarrow\;\; (I,K,J)\not\in \ensuremath{{\mathfrak q}}\xspace\, .\label{ikj}
\end{align}
By assumptions of this case and subcase, some triple in $\ensuremath{{\mathfrak q}}\xspace$ is formed by all three $I,J,K$
and so, since \ensuremath{{\mathfrak q}}\xspace is a forkness, ~\eqref{flip} with \ensuremath{{\mathfrak q}}\xspace in place of \ensuremath{\mathfrak r}\xspace guarantees that at least one of $(J,I,K)$, $(I,K,J)$, $(I,J,K)$
belongs to~\ensuremath{{\mathfrak q}}\xspace. If $x_{\{I,J\}}+x_{\{J,K\}}=x_{\{I,K\}}$, then \eqref{jik} and \eqref{ikj} exclude the first two options, and so we have
$(I,J,K)\in\ensuremath{{\mathfrak q}}\xspace$.
\end{proof}
\begin{lem}\label{V4}
If a regular forkness has a fork-representable quotient, then it is fork representable.
\end{lem}
\begin{proof}
Given a regular forkness \ensuremath{\mathfrak r}\xspace, a probability space $(\ensuremath{\mathit\Omega}\xspace^0,\mathcal F^0,\ensuremath{P}\xspace^0)$, and
events $A_I^0$ in this space, indexed by elements $I$ of the ground set $C$ of the quotient \ensuremath{{\mathfrak q}}\xspace of \ensuremath{\mathfrak r}\xspace, such that
\[
\ensuremath{{\mathfrak q}}\xspace=\{(I,J,K)\in C^3\colon (A_I^0,A_J^0,A_K^0)_{\ensuremath{P}\xspace^0}\,\},
\]
we have to construct a probability
space $(\ensuremath{\mathit\Omega}\xspace,\mathcal F,\ensuremath{P}\xspace)$ and
events $A_i$ in this space, indexed by elements $i$ of the ground set $N$ of \ensuremath{\mathfrak r}\xspace, such that
\begin{equation}\label{req}
\ensuremath{\mathfrak r}\xspace=\{(i,j,k)\in N^3\colon (A_i,A_j,A_k)_{\ensuremath{P}\xspace}\,\}\,.
\end{equation}
For this purpose, we let $\ensuremath{\mathit\Omega}\xspace$ be the power set $2^N$ of $N$, we let $\mathcal F$ be the power set $2^\ensuremath{\mathit\Omega}\xspace$ of $\ensuremath{\mathit\Omega}\xspace$,
and we set
\[
A_i=\{\omega\in\ensuremath{\mathit\Omega}\xspace: i\in\omega\}.
\]
For each element $\omega$ of $\ensuremath{\mathit\Omega}\xspace$ such that every equivalence class $I$ of \eee satisfies $I\subseteq\omega$ or $I\cap\omega=\emptyset$, define $\ensuremath{P}\xspace(\omega)=\ensuremath{P}\xspace^0(\omega^0)$, where $\omega^0$ is the set of equivalence classes of \eee contained in $\omega$. For all other elements $\omega$ of $\ensuremath{\mathit\Omega}\xspace$, define $\ensuremath{P}\xspace(\omega)=0$. Now verifying~\eqref{req} is a routine matter.
\end{proof}
\section{Causal betweenness}\label{S:btw}
Reichenbach~\cite[p.~190]{Rei56} defined an event $B$ to be
\emph{causally between} events $A$ and $C$ if
\begin{align*}
1 > \ensuremath{P}\xspace(A|B) > \ensuremath{P}\xspace(A|C)> \ensuremath{P}\xspace(A) > 0\,,\\
1 > \ensuremath{P}\xspace(C|B) > \ensuremath{P}\xspace(C|A)> \ensuremath{P}\xspace(C) > 0\,,\\
\ensuremath{P}\xspace(C|AB) = \ensuremath{P}\xspace(C|B)\,.\qquad\qquad
\end{align*}
Implicit in this definition is the assumption $\ensuremath{P}\xspace(B)>0$
that makes $\ensuremath{P}\xspace(A|B)$ and $\ensuremath{P}\xspace(C|B)$ meaningful. In turn,
$\ensuremath{P}\xspace(A|B)>0$ means $\ensuremath{P}\xspace(AB)>0$, which makes $\ensuremath{P}\xspace(C|AB)$ meaningful. If $B$
is causally between $A$ and $C$, then all three events are
$\ensuremath{P}\xspace$-nontrivial and no two of them are $\ensuremath{P}\xspace$-equal.
If $(A,B,C)$ is a conjunctive fork, then (contrary to the claim in~\cite[p.~179]{Bre77}) $B$ need not be causally
between $A$ and $C$ even if no two of $A,B,C$ are $\ensuremath{P}\xspace$-equal: for example, if
\[
\begin{array}{llll}
\ensuremath{P}\xspace(A BC)=1/5, &\ensuremath{P}\xspace(A B\overline{C})=1/5,\\
\ensuremath{P}\xspace(\overline{A} BC)=1/5, &\ensuremath{P}\xspace(\overline{A} B\overline{C})=1/5,\\
\ensuremath{P}\xspace(A\overline{B} C)=0, &\ensuremath{P}\xspace(A\overline{B}\overline{C})=0,\\
\ensuremath{P}\xspace(\overline{A}\overline{B} C)=0, &\ensuremath{P}\xspace(\overline{A}\overline{B}\overline{C})=1/5,
\end{array}
\]
then $(A,B,C)$ is a conjunctive fork and $\ensuremath{P}\xspace(A|B) = \ensuremath{P}\xspace(A|C)$.
If an event $B$ is causally between $A$ and $C$, then $(A,B,C)$ need not
be a conjunctive fork: for example, if
\[
\begin{array}{llll}
\ensuremath{P}\xspace(A BC)=1/20, &\ensuremath{P}\xspace(A B\overline{C})=2/20,\\
\ensuremath{P}\xspace(\overline{A} BC)=2/20, &\ensuremath{P}\xspace(\overline{A} B\overline{C})=4/20,\\
\ensuremath{P}\xspace(A\overline{B} C)=0, &\ensuremath{P}\xspace(A\overline{B}\overline{C})=1/20,\\
\ensuremath{P}\xspace(\overline{A}\overline{B} C)=1/20, &\ensuremath{P}\xspace(\overline{A}\overline{B}\overline{C})=9/20,
\end{array}
\]
then $B$ is causally between $A$ and $C$ and $\ensuremath{P}\xspace(AC|\overline{B}) \ne \ensuremath{P}\xspace(A|\overline{B}) \ensuremath{P}\xspace(C|\overline{B})$.
Following~\cite{ChvWu12}, we call a ternary relation $\ensuremath{\mathfrak b}\xspace$ on a finite ground set $N$
an {\em abstract causal betweenness\/} if, and only if,
there are events $A_i$ with $i$ ranging over $N$ such that
\begin{equation*}\label{cbtw}
\ensuremath{\mathfrak b}\xspace=\{(i,j,k)\in N^3\colon\;\text{$A_j$ is causally between $A_i$ and $A_k$}\}\,.
\end{equation*}
A natural question is which ternary~relations $\ensuremath{\mathfrak b}\xspace$ form an abstract causal betweenness.
This question was answered in~\cite[Theorem~1]{ChvWu12} in terms of
the directed graph~$G(\ensuremath{\mathfrak b}\xspace)$ whose vertices are all two-element
subsets of $N$ and whose edges are all ordered pairs
$(\{i,j\},\{i,k\})$ such that $(i,j,k)\in\ensuremath{\mathfrak b}\xspace$ with $i,j,k$ pairwise
distinct:
\begin{align}
&\text{a ternary relation \ensuremath{\mathfrak b}\xspace on a finite ground set} \label{ChvWu}\\
&\text{is an abstract causal betweenness if and only if}\nonumber \\
&\quad \bullet\; (i,j,k)\in\ensuremath{\mathfrak b}\xspace\;\Rightarrow\; i,j,k \text{ are pairwise distinct,}\nonumber\\
&\quad \bullet\; (i,j,k)\in\ensuremath{\mathfrak b}\xspace\;\Rightarrow\; (k,j,i)\in\ensuremath{\mathfrak b}\xspace,\nonumber \\
&\quad \bullet\; \text{$G(\ensuremath{\mathfrak b}\xspace)$ contains no directed~cycle.}\nonumber
\end{align}
(The third requirement implies that $(i,j,k)\in\ensuremath{\mathfrak b}\xspace\;\Rightarrow\; (i,k,j)\not\in\ensuremath{\mathfrak b}\xspace$: else $G(\ensuremath{\mathfrak b}\xspace)$ would contain the directed cycle
$\{i,j\}\rightarrow\{i,k\}\rightarrow\{i,j\}$.)
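The three conditions of \eqref{ChvWu} are easy to test mechanically. The following sketch (illustrative code of our own; the function name is hypothetical) checks pairwise distinctness, closure under reversal, and acyclicity of $G(\ensuremath{\mathfrak b}\xspace)$ by depth-first search.

```python
from collections import defaultdict

def is_abstract_causal_betweenness(b):
    """Test the three conditions of the characterization: pairwise
    distinct entries, closure under reversal, and acyclicity of G(b)."""
    if any(len({i, j, k}) < 3 for (i, j, k) in b):
        return False
    if any((k, j, i) not in b for (i, j, k) in b):
        return False
    # Build G(b): vertices are 2-element subsets, with an edge
    # {i,j} -> {i,k} for every (i,j,k) in b; then look for a cycle.
    succ = defaultdict(set)
    for (i, j, k) in b:
        succ[frozenset({i, j})].add(frozenset({i, k}))
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)          # default WHITE
    def acyclic_from(v):
        color[v] = GRAY
        for w in succ[v]:
            if color[w] == GRAY:      # back edge: directed cycle
                return False
            if color[w] == WHITE and not acyclic_from(w):
                return False
        color[v] = BLACK
        return True
    return all(color[v] != WHITE or acyclic_from(v) for v in list(succ))

# A_2 causally between A_1 and A_3, plus the mandatory reversal:
assert is_abstract_causal_betweenness({(1, 2, 3), (3, 2, 1)})
# Adding (1,3,2) and its reversal creates the cycle {1,2}->{1,3}->{1,2}:
assert not is_abstract_causal_betweenness(
    {(1, 2, 3), (3, 2, 1), (1, 3, 2), (2, 3, 1)})
```

The second assertion is exactly the parenthetical remark above: a relation containing both $(i,j,k)$ and $(i,k,j)$ cannot be an abstract causal betweenness.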
An essential difference between abstract causal betweenness and fork-representable relations is that, on the one hand, every triple in an abstract causal betweenness consists of pairwise distinct elements and, on the other hand, a forkness includes with every triple $(i,j,k)$ most of the triples formed by at most two of $i,j,k$. This difference notwithstanding, the two can be compared. The trick is to introduce, for every ternary relation \ensuremath{\mathfrak r}\xspace, the ternary relation $\ensuremath{\mathfrak r}\xspace^\sharp$ consisting of all triples in \ensuremath{\mathfrak r}\xspace that have pairwise distinct elements.
We claim that
\begin{align}\label{comp}
&\text{if \ensuremath{\mathfrak r}\xspace is a fork-representable relation on a finite ground set}\\
&\text{such that $(i,j,i)\not\in\ensuremath{\mathfrak r}\xspace$ whenever $i\ne j$,}\nonumber\\
&\text{then $\ensuremath{\mathfrak r}\xspace^\sharp$ is an abstract causal betweenness.}\nonumber
\end{align}
To justify this claim, consider a fork-representable relation \ensuremath{\mathfrak r}\xspace on a finite set $N$ such that $(i,j,i)\not\in\ensuremath{\mathfrak r}\xspace$ whenever $i\ne j$. By Lemma~\ref{V1}, \ensuremath{\mathfrak r}\xspace is a forkness; assumption
$i\ne j\Rightarrow (i,j,i)\not\in\ensuremath{\mathfrak r}\xspace$ implies that \eee is the identity relation, and so the quotient of \ensuremath{\mathfrak r}\xspace is isomorphic to \ensuremath{\mathfrak r}\xspace. Now Lemma~\ref{V2} guarantees that \ensuremath{\mathfrak r}\xspace is solvable: there are positive numbers $x_{\{i,j\}}$ such that
$x_{\{i,k\}}=x_{\{i,j\}}+x_{\{j,k\}}$ for all $(i,j,k)$ in $\ensuremath{\mathfrak r}\xspace$ with pairwise distinct $i,j,k$. Since
$x_{\{i,k\}}>x_{\{i,j\}}$ for every edge $(\{i,j\},\{i,k\})$ of~$G(\ensuremath{\mathfrak r}\xspace^\sharp)$, this directed graph is acyclic, and so~\eqref{ChvWu} guarantees that $\ensuremath{\mathfrak r}\xspace^\sharp$ is an abstract causal betweenness.
Assumption $i\ne j\Rightarrow (i,j,i)\not\in\ensuremath{\mathfrak r}\xspace$ cannot be dropped from~\eqref{comp}: consider $\ensuremath{\mathfrak r}\xspace = N^3$. This \ensuremath{\mathfrak r}\xspace is fork representable (for instance, by $\ensuremath{\mathit\Omega}\xspace=\{x,y\}$, $\ensuremath{P}\xspace(x)=\ensuremath{P}\xspace(y)=1/2$, and $A_i=\{x\}$ for all $i$ in $N$). Nevertheless, if $\abs{N}\geqslant 3$, then $G(\ensuremath{\mathfrak r}\xspace^\sharp)$ contains cycles, and so $\ensuremath{\mathfrak r}\xspace^\sharp$ is not an abstract causal betweenness.
The converse of~\eqref{comp},
\begin{align*}
&\text{if $\ensuremath{\mathfrak r}\xspace^\sharp$ is an abstract causal betweenness}\phantom{xxxxxxxxxxxxxxxxxx}\\
&\text{then \ensuremath{\mathfrak r}\xspace is a fork-representable relation}\\
&\text{such that $(i,j,i)\not\in\ensuremath{\mathfrak r}\xspace$ whenever $i\ne j$,}
\end{align*}
is false. Even its weaker version,
\begin{align*}
&\text{if \ensuremath{\mathfrak r}\xspace is a regular forkness}\phantom{xxxxxxxxxxxxxxxxxxxxxxxxxxxxi}\\
&\text{such that $\ensuremath{\mathfrak r}\xspace^\sharp$ is an abstract causal betweenness,}\\
&\text{then \ensuremath{\mathfrak r}\xspace is a fork-representable relation,}
\end{align*}
is false: consider the smallest forkness \ensuremath{\mathfrak r}\xspace on $\{1,2,3,4\}$
that contains the relation
\begin{align*}
\{&(1,3,2), (2,3,4), (3,1,4), (1,4,2), \label{example}\\
&(2,3,1), (4,3,2), (4,1,3), (2,4,1)\}. \nonumber
\end{align*}
Minimality of \ensuremath{\mathfrak r}\xspace implies that
$(i,j,i)\not\in\ensuremath{\mathfrak r}\xspace$ whenever $i\ne j$; it follows that \eee is the identity relation, and so \ensuremath{\mathfrak r}\xspace
is a regular forkness. Graph $G(\ensuremath{\mathfrak r}\xspace^\sharp)$ is acyclic
\begin{center}\setlength{\unitlength}{1.3mm}\setlength{\fboxsep}{0.5mm}
\begin{picture}(40,29)(0,7)
\put(10,30){\vector(0,-1){6}}\put(10,30){\line(0,-1){10}}
\put(20,30){\vector(-1,-1){7}} \put(20,30){\line(-1,-1){10}}
\put(10,30){\vector(1,-1){14}}\put(10,30){\line(1,-1){20}}
\put(30,30){\vector(-1,-1){14}} \put(30,30){\line(-1,-1){20}}
\put(10,20){\vector(0,-1){6}}\put(10,20){\line(0,-1){10}}
\put(20,30){\vector(1,-2){6}} \put(20,30){\line(1,-2){10}}
\put(10,10){\vector(1,0){11}}\put(10,10){\line(1,0){20}}
\put(30,30){\vector(0,-1){12}} \put(30,30){\line(0,-1){20}}
\put(10,20){\makebox(0,0){\fcolorbox{black}{white}{\tiny$\{3,4\}$}}}
\put(10,10){\makebox(0,0){\fcolorbox{black}{white}{\tiny$\{2,4\}$}}}
\put(30,10){\makebox(0,0){\fcolorbox{black}{white}{\tiny$\{1,2\}$}}}
\put(10,30){\makebox(0,0){\fcolorbox{black}{white}{\tiny$\{1,4\}$}}}
\put(20,30){\makebox(0,0){\fcolorbox{black}{white}{\tiny$\{1,3\}$}}}
\put(30,30){\makebox(0,0){\fcolorbox{black}{white}{\tiny$\{2,3\}$}}}
\end{picture}
\end{center}
and so \eqref{ChvWu} guarantees that $\ensuremath{\mathfrak r}\xspace^\sharp$ is an abstract causal betweenness.
By Lemma~\ref{V2}, \ensuremath{\mathfrak r}\xspace is not fork representable: here, system~\eqref{solution} is isomorphic to
\begin{eqnarray*}
x_{\{1,2\}}=x_{\{1,3\}}+x_{\{2,3\}} \\
x_{\{2,4\}}=x_{\{2,3\}}+x_{\{3,4\}} \\
x_{\{3,4\}}=x_{\{1,3\}}+x_{\{1,4\}} \\
x_{\{1,2\}}=x_{\{1,4\}}+x_{\{2,4\}}
\end{eqnarray*}
and this system has no solution with $x_{\{1,4\}}>0$ as the linear combination of its four equations
with multipliers $-1$, $+1$, $+1$, $+1$ reads $0=2x_{\{1,4\}}$.
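The multiplier argument can be checked mechanically. The sketch below encodes each equation as its left-hand side minus its right-hand side and forms the stated linear combination.

```python
# Variables of the system, one per two-element subset that occurs.
variables = ["x12", "x13", "x14", "x23", "x24", "x34"]
# Each row encodes (left-hand side) - (right-hand side) of one equation.
rows = [
    {"x12": 1, "x13": -1, "x23": -1},   # x12 = x13 + x23
    {"x24": 1, "x23": -1, "x34": -1},   # x24 = x23 + x34
    {"x34": 1, "x13": -1, "x14": -1},   # x34 = x13 + x14
    {"x12": 1, "x14": -1, "x24": -1},   # x12 = x14 + x24
]
multipliers = [-1, 1, 1, 1]
combo = {v: sum(m * row.get(v, 0) for m, row in zip(multipliers, rows))
         for v in variables}
# Every variable cancels except x14, which survives with coefficient -2,
# so any solution satisfies 2*x14 = 0 -- impossible with x14 > 0.
assert combo == {"x12": 0, "x13": 0, "x14": -2,
                 "x23": 0, "x24": 0, "x34": 0}
```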
\section{Concluding remarks}\label{S:disc}
1. The patterns studied in this work are based on combinations of conditional
independence and covariance constraints for events. In recent decades,
patterns of conditional independence among random variables have been studied
in statistics and in probability theory since they provide insight into decompositions
of multidimensional distributions, which are much sought after in applications. A framework
for this activity was developed in the graphical
models community~\cite{Lauri}.
A general formulation of the problem considers random variables
$\xi_i$ indexed by $i$ in $N$ and patterns consisting of the conditional
independences $\xi_i\ensuremath{\!\perp\!\!\!\!\perp\!}\xi_j|\xi_K$ where $\xi_K=(\xi_k)_{k\in K}$,
$i,j\in N$, and $i,j\not\in K$. The case $i=j$ means functional
dependence of $\xi_i$ on $\xi_K$, a.s. The problem is highly nontrivial
even for four variables~\cite{M.4var.III}.
First treatments go back to~\cite{Pearl,Spo80}. The variant of the problem
excluding the functional dependence is most frequent~\cite{Studeny}.
Restrictions to Gaussian~\cite{M.Lnenicka,sul} or binary variables,
positivity of the distribution of $\xi_N$, etc., have been studied
as well~\cite{Dawid}. The idea to employ the Fourier-Stieltjes transform,
as in Section~\ref{S:mainproof}, appeared in~\cite{M.indep}, characterizing
patterns of unconditional independence.
2. For patterns of conditional independence, the role of forkness is played
by graphoids~\cite{Pearl}, semigraphoids~\cite{semigr},
imsets~\cite{Studeny}, semimatroids~\cite{M.4var.III}, etc. Notable are
connections to matroid representation theory, see~\cite{M.matroid}.
3. All possible patterns of conjunctive forks on events $A_i$ indexed by $i$ in $N$
arise by varying a probability measure on \ensuremath{\mathit\Omega}\xspace, the power set of $N$. For a ternary relation \ensuremath{\mathfrak r}\xspace on
a finite set $N$, the set
$\mathcal P_\ensuremath{\mathfrak r}\xspace$ of probability measures $\ensuremath{P}\xspace$ on \ensuremath{\mathit\Omega}\xspace that satisfy $(i,j,k)\in \ensuremath{\mathfrak r}\xspace \;\Leftrightarrow\;
(A_i,A_j,A_k)_{\ensuremath{P}\xspace}$ is described by finitely many constraints that require quadratic
polynomials in in\-deter\-minates $z_{\omega}$ indexed by $\omega$ in $\ensuremath{\mathit\Omega}\xspace$
to be positive or zero. For fork-representability
of~$\ensuremath{\mathfrak r}\xspace$, it matters only whether $\mathcal P_\ensuremath{\mathfrak r}\xspace$
is empty or not, which can be found out in polynomial time by the main result of the present paper.
The shape of~$\mathcal P_\ensuremath{\mathfrak r}\xspace$, which is a semialgebraic subset of the probability simplex,
might be difficult to understand; to reveal it, finer algebraic
techniques are needed, as in algebraic statistics~\cite{DoSS09,Zwi}.
4. One of the two {\em Discrete Mathematics\/} reviewers asked: ``Is there some interesting algebraic/combinatorial structure
in admissible forknesses? Can they be partially ordered for a fixed $N$?" We leave these questions open.
\section*{Acknowledgments}
This work began in March 2011 in the weekly meetings of the seminar
ConCoCO (Concordia Computational Combinatorial Optimization). We wish
to thank its participants, in particular, Luc Devroye and Peter Sloan,
for stimulating and helpful interactions. We also thank the two {\em Discrete Mathematics\/}
reviewers for their thoughtful comments that
made us improve the presentation considerably.
\end{document} |
\begin{document}
\title{On singularities of lattice varieties}
\author{
Himadri Mukherjee\\
Department of Mathematics and Statistics\\
Indian Institute of Science Education and Research, Kolkata\\
[email protected]
}
\date{\today}
\maketitle
\begin{abstract}
Toric varieties associated with distributive lattices arise as fibres of a flat degeneration of Schubert varieties in a minuscule $G/P$. The singular locus of these varieties has been studied by various authors. In this article we prove that the number of diamonds incident on a lattice point $\alpha$ in a product of chain lattices is at least the codimension of the lattice variety. Using this we also show that the lattice varieties associated with products of chain lattices are smooth.
\end{abstract}
\section{Introduction}
\label{introduction}
The toric varieties associated to distributive lattices have been studied by various authors over the last few decades. In \cite{Hibi} Hibi shows that the $k$-algebra $k[\mathcal{L}]$ associated to a lattice $\mathcal{L}$ is an integral domain if and only if $\mathcal{L}$ is a distributive lattice. Furthermore, using a standard monomial basis, it was shown that this $k$-algebra is normal. Therefore the toric variety associated to the binomial ideal $I(\mathcal{L})= \langle x_\alpha x_\beta - x_{\alpha \vee \beta} x_{\alpha \wedge \beta} \mid \alpha, \beta \in \mathcal{L} \rangle$ of a distributive lattice $\mathcal{L}$ is a normal toric variety. In \cite{g-l} Lakshmibai and Gonciulea show that the cone $\widehat{X(\omega)}$ over the Schubert variety $X(\omega)$ associated to $\omega \in \mathrm{I}_{(d,n)}$ degenerates to the lattice variety $X(\mathcal{L}_\omega)$ for the Bruhat poset $\mathcal{L}_\omega$. They also find the orbit decomposition of these lattice varieties and propose conjectures related to their singularities \cite{GLnext}. Wagner \cite{wagner} finds the singular locus of these varieties in terms of conditions on {\em contractions\/} of the poset of join-irreducibles $J$. In \cite{HL} the singular locus of these varieties was revisited to find a standard monomial basis for the cotangent space of the variety associated to such a lattice. The singular locus and the multiplicities of the varieties associated to Bruhat lattices were discussed in \cite{b-l,GH}. In the same article interesting formulas for the multiplicities of these varieties at the distinguished points of the $T$-orbits were found. The authors of \cite{GH} also propose conjectures regarding the singularities in the general case $I_{(d,n)}$.
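For concreteness, the binomial generators of $I(\mathcal{L})$ can be enumerated for a small lattice. The sketch below is a hypothetical example of our own (not taken from the cited sources): it lists the generators for the product of chains with $2$ and $3$ elements, where only incomparable pairs contribute nonzero binomials.

```python
from itertools import product, combinations

# Illustrative running example: the distributive lattice C_2 x C_3,
# a product of two chains with componentwise join and meet.
L = list(product(range(2), range(3)))
join = lambda a, b: (max(a[0], b[0]), max(a[1], b[1]))
meet = lambda a, b: (min(a[0], b[0]), min(a[1], b[1]))

# A pair (a, b) contributes the binomial x_a x_b - x_{a v b} x_{a ^ b};
# it vanishes identically exactly when a and b are comparable.
gens = [(a, b, join(a, b), meet(a, b))
        for a, b in combinations(L, 2)
        if join(a, b) not in (a, b)]

assert len(gens) == 3   # the three incomparable pairs of C_2 x C_3
```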
In \cite{HL} the notion of a $\tau$-diagonal is introduced. The $\tau$-diagonals are a particular class of diamonds that are incident on an embedded sublattice $D_\tau$ at only one point. In the present article we simplify that concept and introduce the set of diamonds $E_\alpha$. Based on a lower bound on the size of this set we hope to find the singular locus of the space $X(\mathcal{L})$. As a combinatorial object $E_\alpha$ is interesting in its own right, since counting the number of sublattices of a given distributive lattice is a long-standing and complicated problem \cite{Dedikind} with various degrees of generalization.
This article investigates the distributive lattice varieties whose poset of join-irreducibles is a tree (see \cite{GG} for the definition). We provide a necessary and sufficient criterion for a distributive lattice to be a tree lattice. We also give an example of a tree lattice for which the singular locus of $X(\mathcal{L})$ is non-empty. We define a particular type of tree lattice, called a square lattice, for which the set of join-irreducibles is a union of chains except at the root. We show that these lattices are products of chain lattices. We give an interesting tight bound on the number of diamonds containing a given element $\alpha$. Using elementary combinatorial arguments and the bound described before, we give a different proof of the fact that the singular locus of the varieties associated to square lattices is empty. We show that the affine cone $\widehat{X(\mathcal{L})}$ over the variety $X(\mathcal{L})$ has no singular points except at the vertex of the cone.
\section{Main Results}
\begin{thm}\label{a} Let $\mathcal{L}$ be a distributive lattice and $\alpha \in \mathcal{L}$. Let $Y(\mathcal{L})=V(I(\mathcal{L})) \subset \mathbb{A}^{|\mathcal{L}|}$ and let the point $p_{\alpha}$ be defined by \[ (p_{\alpha})_{\beta} = \left\{ \begin{array}{ll}
0 & \mbox{if $\beta \neq \alpha$};\\
c \neq 0 & \mbox{otherwise},\end{array} \right. \] where $(p_{\alpha})_{\beta}$ denotes the $\beta$th coordinate of the point $p_{\alpha}$. The point $p_\alpha$ is smooth if \[|E_{\alpha}| \geq |\mathcal{L}| - |J(\mathcal{L})|.\]
\end{thm}
The above is a direct application of the Jacobian criterion of smoothness for affine varieties. We use the inequality to prove that the Jacobian matrix restricted to the point $p_{\alpha}$ has rank at least $\card{E_\alpha}$; therefore the variety is smooth at those points where $\card{E_\alpha}$ is at least the codimension of the variety in the full affine space, by the Jacobian criterion. See \cite{eis}.
\begin{thm}\label{b} For $\alpha \in \cal{L}$, where $\cal{L}$ is a square lattice, we have $|E_\alpha| \geq |\mathcal{L}| - |J|$.
\end{thm}
We define an operation on distributive lattices which we call pruning. Given a maximal join irreducible $\beta$ we pass to a sublattice $\cal{L}_\beta$ which is without that join irreducible. For square lattices the poset $\cal{L} \setminus \cal{L}_\beta$ has a simple structure, which enables us to bound the number of diamond relations with a corner at $\alpha$ that are not in the sublattice $\cal{L}_\beta$. The bound is an equality for a square lattice which is a product of two chain lattices.
\begin{thm}\label{c} For a square lattice $\mathcal{L}$, the variety $X(\mathcal{L})$ is non-singular at all its points.
\end{thm}
As a consequence of Theorems \ref{a} and \ref{b} we have the smoothness of the affine variety $\widehat{X(\cal{L})}$ at all points except the point $p=(0,0,\ldots , 0)$; hence the projective variety $X(\cal{L})$ is smooth at all its points.
\section{Definitions and Lemmas}
In this section we recall known results and basic definitions regarding distributive lattices that will be used in the present article; thorough treatments can be found in \cite{HL,GG,b-l,GLdef}. A partially ordered set $(P, \leq)$ is called a lattice if it is non-empty and the two binary operations $x \vee y = \inf\{ z \in P \mid z \geq x,y\}$, called the ``join'' of $x$ and $y$, and $x \wedge y = \sup \{z \in P \mid z \leq x,y\}$, called the ``meet'' of $x$ and $y$, exist and are idempotent, associative and commutative \cite{GG}, and for all choices of $x,y \in P$ they satisfy the absorption laws: \[x \wedge (x \vee y)=x\] \[x \vee (x \wedge y)=x\]
Further, a lattice will be called a distributive lattice if it satisfies the distributive identity defined below:
\begin{defn}[Distributive identity]
$(x\vee y) \wedge z = (x \wedge z) \vee (y \wedge z)$.
\end{defn}
An element $x$ of a distributive lattice $\cal{D}$ is called join irreducible if it is not the join of two non-comparable lattice elements; equivalently, if $x = y \vee z$ then either $x=y$ or $x=z$. The set of join irreducibles of the lattice $\cal{D}$ plays an important role; let us denote it by $J(\cal{D})$, or simply $J$ when there is no scope for confusion. A subset $S$ of the poset $J$ is called hereditary if for all $x \in S$ and for all $y \leq x$ we have $y \in S$.
\begin{defn} Given two elements $\alpha,\beta \in \cal{D}$, we say that $\alpha$ covers $\beta$, written $\alpha \gtrdot \beta$, if $\alpha > \beta $ and whenever $\alpha \geq x > \beta$ we have $ \alpha =x$.
\end{defn}
So a maximal chain $\mathcal{M}$ in $\cal{D}$ can be written as $\mathcal{M}= \{\underline{1}, a_1,a_2, \ldots , a_n , \underline{0} \mid \underline{1} \gtrdot a_1 \gtrdot a_2 \gtrdot \ldots \gtrdot a_n \gtrdot \underline{0}, \, a_i \in \cal{D} \}$, where $\underline{1}, \underline{0}$ are the maximal and the minimal elements of $\cal{D}$ respectively. We quote here the fundamental theorem of Birkhoff \cite{GG}.
\begin{thm}[G.~Birkhoff \cite{GG}] The distributive lattice $\cal{D}$ and the lattice of hereditary sets of $J(\cal{D})$ are isomorphic.
\end{thm}
Let us denote the lattice of hereditary subsets of $J$ by $\mathfrak{I}(J)$. The isomorphism in the above theorem is a basic tool of lattice theory and we record it here: $\phi : \cal{D} \rightarrow \mathfrak{I}(J)$ is defined by $\phi(\alpha)=\{\beta \in J \mid \beta \leq \alpha\}$. We will use the shorter notation $\phi(\alpha)=I_\alpha$. Observe that the subset $I_\alpha$ is indeed hereditary. Note that the join and meet operations of the lattice $\mathfrak{I}(J)$ are just union and intersection respectively. \label{lattice ideals}
To a distributive lattice $\cal{D}$=$\{ a_1, a_2, \ldots, a_m\}$ (henceforth all lattices will be distributive unless mentioned otherwise) we can attach the polynomial ring over a field $k$ given by $k[\cal{D}]$ = $k [ x_{a_1},x_{a_2},x_{a_3} ,\ldots , x_{a_m}]$. In this polynomial ring we have the ideal $I(\cal{D})$ generated by the set of binomials $\{x_\alpha x_\beta -x_{\alpha \vee \beta} x_{\alpha \wedge \beta} \mid \alpha,\beta \in \cal{D} \, \mathrm{and} \, \alpha \not\sim \beta\}$, where $\alpha \not\sim \beta$ denotes that $\alpha$ and $\beta$ are non-comparable elements of $\cal{D}$. This ideal is of interest to both geometers and lattice theorists: in \cite{GLdef,Hibi,HL} these ideals attached to distributive lattices are discussed in various contexts. The vanishing locus of the ideal $I(\cal{D})$ in the affine space $\mathbb{A}^{|\cal{D}|}$ is discussed in \cite{HL}. The singular loci of these algebraic varieties are of considerable interest \cite{HL,b-l}. In the present article we will define a class of distributive lattices for which the vanishing locus of the ideal $I(\cal{D})$ in the projective space $\mathbb{P}^{\card{\cal{D}}-1}$ is non-singular. For an introduction to toric varieties the reader may consult \cite{F}, \cite{toroidal} and \cite{ES}.
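These binomial generators are easy to list by brute force on a toy example. The following sketch is our own illustration (not from the text): it realises the product of chains $c(3)\times c(2)$ as pairs ordered componentwise and prints one binomial per non-comparable pair.

```python
from itertools import combinations

# Toy distributive lattice: the product of chains c(3) x c(2), realised as
# pairs (i, j) ordered componentwise; join = max, meet = min componentwise.
L = [(i, j) for i in range(3) for j in range(2)]

def leq(x, y):
    return x[0] <= y[0] and x[1] <= y[1]

def join(x, y):
    return (max(x[0], y[0]), max(x[1], y[1]))

def meet(x, y):
    return (min(x[0], y[0]), min(x[1], y[1]))

# One binomial generator x_a x_b - x_{a v b} x_{a ^ b} per non-comparable
# pair {a, b}: these are the generators of the ideal I(L).
relations = [(a, b, join(a, b), meet(a, b))
             for a, b in combinations(L, 2)
             if not leq(a, b) and not leq(b, a)]

for a, b, j, m in relations:
    print(f"x_{a} * x_{b}  -  x_{j} * x_{m}")
```

On this six-element lattice there are exactly three non-comparable pairs, hence three generators.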
In \cite{HL} we have seen that the dimension of the variety $\widehat{X(\cal{D})}=V(I(\cal{D}))\subset \mathbb{A}^{\card{\cal{D}}}$ is given by the number of join irreducible elements in the lattice $\cal{D}$, which is equal to the length of a maximal chain of $\cal{D}$ \cite{HL}. A chain in a lattice $\cal{D}$ is a totally ordered subset of the lattice $\cal{D}$, and a ``chain lattice'' is a lattice which is totally ordered. Note that given a natural number $n$ there is a unique chain lattice with $n$ elements up to lattice isomorphism; let us call that lattice $c(n)$. Let us also write down the following definition at this juncture.
\begin{defn}
For a distributive lattice $\cal{D}$ we define $\dim(\cal{D})= \card{\mathcal{M}}$, where $\mathcal{M}$ is a maximal chain of $\cal{D}$. The natural number $\card{\mathcal{M}}$ is also called the length of the maximal chain $\mathcal{M} \subset \cal{D}$.
\end{defn}
We now introduce a few definitions regarding distributive lattices that will be used in proving the results of the present article.
\begin{defn}\label{square lattice}
A distributive lattice $\mathcal{L}$ is called a \emph{tree lattice} if the poset of join irreducible elements of the lattice $\mathcal{L}$ is a tree.
\end{defn}
The motivation behind the previous definition is the observation that distributive lattices whose poset of join irreducibles is a tree are easier to handle.
\begin{defn}
A distributive lattice $\cal{L}$ will be called an \emph{honest lattice} if for every $\alpha, \beta \in J(\mathcal{L})$ such that $\alpha$ covers $\beta$ in $J(\cal{L})$, $\alpha$ also covers $\beta$ in $\cal{L}$.
\end{defn}
The above definition is motivated by the fact that for such a lattice the poset of join irreducibles is nicely embedded. The following theorem gives a clear picture of the relation between the above two definitions.
\begin{thm}\label{tree honest equivalence}
A distributive lattice is a tree lattice if and only if it is an honest lattice.
\end{thm}
\begin{proof}
Let $\cal{L}$ be an honest lattice and let $J=J(\cal{L})$ denote its set of join irreducibles. If $J$ is not a tree then there exist $\alpha, \beta, \gamma, \delta \in J$ such that $\gamma$ covers both $\alpha$ and $\beta$ in $J$, and $\delta$ is covered by both $\alpha$ and $\beta$ in $J$. In that case $\delta \leq \alpha \vee \beta \leq \gamma$, which means that in $\cal{L}$ the element $\alpha$ is covered by $\alpha \vee \beta$, which in turn is covered by $\gamma$; since $\alpha \vee \beta$ is not a join irreducible, this contradicts our assumption that $\cal{L}$ is honest.
For the reverse direction let us assume that $\cal{L}$ is a tree lattice. If it is not honest there are elements $\alpha, \beta \in J$ such that $\alpha \gtrdot \beta$ in $J$ but not in $\cal{L}$, which means there is an element $\theta \in \cal{L}$ such that $\alpha > \theta > \beta$ in $\cal{L}$. Since $\theta \notin J$ there are elements $x,y \in \cal{L}$ such that $\theta = x \vee y$. Now since $x \vee y < \alpha$, both $x$ and $y$ are less than $\alpha$, so $x \wedge y < \alpha$. Since $x \vee y > \beta$, either $x$ or $y$ is larger than $\beta$ by Lemma \ref{check}.
Without loss of generality, let us assume that $x$ is larger than $\beta$. If $x \in J$ we are done; otherwise there are two elements $x_1, y_1 \in \cal{L}$ such that $x_1 \vee y_1 = x > \beta$. Using Lemma \ref{check} again we may assume $x_1 > \beta$, and hence we can continue the whole process; but $\cal{L}$ being a finite lattice, the process ends after finitely many iterations. This means we obtain $z \in J$ such that $\alpha > z >\beta $, contradicting our assumption that $\alpha$ covers $\beta$ in $J$.
\end{proof}
Observe that in a tree lattice every join irreducible other than the root has a unique join irreducible immediately below it, since the poset of join irreducibles is a tree. This is a very important property that will be exploited in this article; because of its importance we record it as a lemma.
\begin{lem}\label{unique}
Let $\beta $ be a join-irreducible, other than the root, in a tree lattice $\cal{L}$; then there is a unique join irreducible $\gamma \in J(\cal{L})$ such that $\beta \geq \gamma$ and $\beta \gtrdot \gamma$ in $J$.
\end{lem}
\begin{proof}
If there were more than one such join irreducible, say $\lambda_1,\lambda_2$, then there would be two paths from the root $\rho$ to $\beta$, one via each of $\lambda_1,\lambda_2$, contradicting the hypothesis that the lattice is a tree lattice.
\end{proof}
\begin{lem}\label{check}
If $x, y \in \cal{L}$ are such that $x \vee y > \beta$ where $\beta \in J$, then either $x > \beta $ or $y > \beta $.
\end{lem}
\begin{proof}
Since $I_{x \vee y} = I_x \cup I_y $ contains $I_{\beta}$, the hereditary set corresponding to the element $\beta$, we have $\beta \in I_x$ or $\beta \in I_y$.
\end{proof}
\begin{defn}\label{non singular lattice}
A distributive lattice $\cal{L}$ will be called a non-singular lattice if the projective variety $X(\cal{L})$ is smooth.
\end{defn}
\begin{defn} \label{pruning}
The pruning of a lattice $\cal{L}$ with respect to a maximal join irreducible $\beta$ is the lattice $\mathfrak{I}(J\setminus \{\beta\})$, where $\mathfrak{I}(A)$ denotes the lattice of hereditary subsets of a poset $A$.
\end{defn}
Let us also fix a notation for the pruning: $\cal{L}_\beta$=$\mathfrak{I}( J \setminus \{\beta \})$.
At this juncture we record an observation related to the pruning operation that we defined above.
\begin{lem} \label{chain}
Let $\beta$ be a maximal join-irreducible element in the square lattice $\cal{L}$; then $B_\beta=\cal{L} \setminus \cal{L}_\beta$ is the set $\{ \gamma \in \cal{L} \mid \gamma \geq \beta\}$.
\end{lem}
\begin{proof}
By pruning we delete the maximal join irreducible $\beta$ from every hereditary set that contains it, so $\cal{L}_\beta$ consists of the hereditary sets that do not contain the element $\beta$. This means that the elements of $B_\beta$ are the hereditary sets of $J(\cal{L})$ that contain the element $\beta$, or equivalently these are precisely the elements $\{\gamma \in \cal{L} \mid \gamma \geq \beta\}$.
\end{proof}
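The pruning operation and the identity $B_\beta = \{\gamma \geq \beta\}$ can be checked by brute force. The following sketch is our own toy illustration (the poset and names are invented for the example, and we omit the common root, on which the identity being checked does not depend): it builds the hereditary sets of a small join-irreducible poset, prunes a maximal join irreducible, and compares.

```python
from itertools import chain, combinations

# Toy join-irreducible poset J: a chain a1 < a2 together with an
# incomparable element b1. Hereditary (downward-closed) subsets of J
# form a distributive lattice (Birkhoff's theorem).
J = ['a1', 'a2', 'b1']
covers = [('a1', 'a2')]            # a1 < a2; b1 is incomparable to both

def hereditary_sets(ground):
    subsets = chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1))
    return {frozenset(s) for s in subsets
            if all(lo in s for lo, hi in covers if hi in s)}

L = hereditary_sets(J)
beta = 'a2'                                            # a maximal join irreducible
L_beta = hereditary_sets([x for x in J if x != beta])  # the pruned lattice
B_beta = L - L_beta

# Lemma: the removed elements are exactly the hereditary sets containing
# beta, i.e. the up-set { gamma in L : gamma >= beta }.
assert B_beta == {I for I in L if beta in I}
print(sorted(sorted(I) for I in B_beta))
```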
\begin{lem} \label{bijection}
Let $\beta \in \cal{L}$ be a maximal join irreducible and let $\beta_1 < \beta$ be the unique (see Lemma \ref{unique}) join irreducible below $\beta$; then the posets $B_\beta=\cal{L} \setminus \cal{L}_\beta$ and $B_{\beta_1}= \mathcal{L}_\beta \setminus (\mathcal{L}_\beta)_{\beta_1}$ are isomorphic as posets.
\end{lem}
\begin{proof}
Let us define a poset map $\phi: B_{\beta_1} \longrightarrow B_\beta$ by $\phi(x)=x \vee \beta$. It is clear that this is a poset map, so let us see why it is a bijection. If $\phi(x)=\phi(y)$ then $x \vee \beta = y \vee \beta$, which means $I_x \cup \{\beta\}=I_y \cup \{\beta\}$. But since $I_x$ and $I_y$ do not contain $\beta$ we have $I_x=I_y$, or by Birkhoff's theorem $x=y$. For the surjectivity part of the claim, note that if $\lambda \in B_\beta$ then $\lambda \geq \beta$. Now the set $I_\lambda \setminus \{\beta\}$ is hereditary: if $s \in I_\lambda \setminus \{\beta\}$ and $t \leq s$, then since $I_\lambda$ is hereditary we have $ t \in I_\lambda$, and $t \neq \beta$ by the maximality of $\beta$, hence $t \in I_\lambda \setminus \{\beta\}$. Also note that $\beta_1 \in I_\lambda$, since $\beta_1 \leq \beta$, and hence $\beta_1 \in I_\lambda \setminus \{\beta\}$. Putting all this together we get an element $\gamma \in B_{\beta_1}$ such that $\gamma \vee \beta = \lambda$, completing the proof of surjectivity.
\end{proof}
The algebraic variety associated to a tree lattice need not always be smooth. Let us illustrate the situation with an example: for the lattice in the figure below, the point $p=(0,0,1,0,0,0,0,0,0,0)$ is not a smooth point, even though the lattice is a tree lattice. \label{example}
\begin{center}
\begin{tikzpicture}
\node (max) at (7,4) {10};
\node (a) at (6,3) {9};
\node (b) at (7,3) {8};
\node (c) at (8,3) {7};
\node (d) at (6,2) {6};
\node (e) at (7,2) {5};
\node (f) at (8,2) {4};
\node (g) at (5,1) {3};
\node (h) at (7,1) {2};
\node (i) at (6,0) {1};
\draw (max) -- (a) -- (d) -- (g) -- (i) -- (h) -- (f) -- (c) -- (max) ;
\draw (h) --(d) ;
\draw (h) -- (e) -- (a);
\draw (d) -- (b) -- (max);
\draw (e) -- (c);
\draw (f) -- (b);
\end{tikzpicture}
and its poset of join irreducibles
\begin{tikzpicture}
\node (max) at (0,1) {$3$};
\node (a) at (1,0) {$1$};
\node (b) at (2,1) {$2$};
\node (c) at (1,2) {$5$};
\node (d) at (3,2) {$4$};
\draw (max) -- (a) -- (b) -- (c) ;
\draw (b) -- (d);
\end{tikzpicture}
\end{center}
Note that in the above picture the lattice $\mathcal{L}$ is not non-singular, as the point given by the coordinates $p=(0,0,1,0,0,0,0,0,0,0)$ is not smooth. This means the projective variety \mbox{Proj$\,k[\cal{L}]$} is not smooth.
This example motivates us to restrict our lattices further, to what we will call square lattices.
\begin{defn} A tree lattice $\cal{L}$ will be called a square lattice if the graph of the Hasse diagram of its poset of join-irreducibles has the following property: the degree of every vertex except possibly the root is at most two.
\end{defn}
From the above definition we derive the following property of square lattices: for a square lattice $\cal{L}$ the poset of join irreducibles is a union of chains which are almost disjoint.
\begin{lem}
The join irreducible poset $J$ of a square lattice $\cal{L}$ is a union of chains, disjoint except at the root.
\end{lem}
\begin{proof}
If $\alpha \in J$ is not the root $\rho$ then it has degree at most two in the Hasse graph. In particular, if $\alpha \in J$ is a maximal element different from the root then it has a unique predecessor $\alpha_1$; the element $\alpha_1$ in turn has a unique predecessor, or it is the root. Continuing the argument, and observing that we are dealing with finite lattices, we get a chain $\{ \alpha,\alpha_1,\alpha_2 , \ldots , \alpha_t = \rho\}$; call this chain $c(t+1)_\alpha$. Next let us pick another maximal element $\beta \in J$; if there is none then we have successfully written the join irreducible set as a union of chains. Otherwise with the maximal element $\beta$ we can associate another chain $c(t_1)_\beta$, and this chain is disjoint from $c(t+1)_\alpha$ except at $\rho$; in other words $c(t+1)_\alpha \cap c(t_1)_\beta =\{\rho \}$. Continuing the process we get the desired result.
\end{proof}
\begin{thm} \label{square} There are natural numbers $n_1,n_2,\ldots , n_t$ such that the square lattice $\cal{L}$ is isomorphic to the product of chain lattices $c(n_1),c(n_2), \ldots, c(n_t)$.
\end{thm}
\begin{proof}
Since we have a square lattice, the poset of join-irreducibles $J$ can be written as a union of chains $c(n_1),c(n_2), \ldots , c(n_t)$. We will view the lattice $\cal{L}$ as the lattice of hereditary sets of $J$, and define a map \[ \phi : \mathcal{L} \longrightarrow \prod_i c(n_i) \] by \[ \phi ( I_\alpha)= (\beta_1,\beta_2,\beta_3,\ldots , \beta_t ),\]
where $\beta_i$ is the maximal element of the chain $ I_\alpha \cap c(n_i)$. Let us see that this map is surjective. If we choose a collection of elements $\beta_1, \beta_2, \ldots, \beta_t$, we look at the hereditary set $I=\{ z \in J \mid \exists \, i \, \mathrm{such \; that} \; z \leq \beta_i \}$; surely by construction the elements $\beta_i$, $i \leq t$, are the maximal elements of the sets $I \cap c(n_i)$. Hence the map defined above is surjective.
Now for the injectivity: if we have two elements $\alpha,\gamma \in \cal{L}$ such that $\phi(\alpha)=\phi(\gamma)=(\beta_1,\beta_2, \ldots, \beta_t)$, then $\beta_i$ is the maximal element of both $I_\alpha \cap c(n_i)$ and $I_\gamma \cap c(n_i)$; but since $I_\alpha$ and $I_\gamma$ are hereditary we have $I_\alpha \cap c(n_i) =I_\gamma \cap c(n_i) \, \forall i$, or equivalently $I_\alpha=I_\gamma$, that is $\alpha=\gamma$.
\end{proof}
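The theorem can be sanity-checked by brute force on a small example. The sketch below is our own illustration (poset and names invented): it enumerates the nonempty hereditary sets of a square join-irreducible poset with root $r$ and chains $r<a_1<a_2$ and $r<b_1$, and checks that the map $\phi$ above is a bijection onto $c(3)\times c(2)$.

```python
from itertools import chain, combinations

# Toy "square" join-irreducible poset: a root r with two chains above it,
# r < a1 < a2 and r < b1. The nonempty hereditary sets should form a lattice
# isomorphic to the product of chains c(3) x c(2).
J = ['r', 'a1', 'a2', 'b1']
below = {'r': [], 'a1': ['r'], 'a2': ['r', 'a1'], 'b1': ['r']}

def hereditary(S):
    return all(set(below[x]) <= set(S) for x in S)

subsets = chain.from_iterable(combinations(J, k) for k in range(1, len(J) + 1))
L = [frozenset(S) for S in subsets if hereditary(S)]

chain_a = ['r', 'a1', 'a2']          # the chain c(3)
chain_b = ['r', 'b1']                # the chain c(2)

def phi(I):
    # Send a hereditary set to the maximal element of its trace on each chain.
    return (max(i for i, x in enumerate(chain_a) if x in I),
            max(i for i, x in enumerate(chain_b) if x in I))

images = {phi(I) for I in L}
print(len(L), sorted(images))
```

Every nonempty hereditary set contains the root, so both coordinates of $\phi$ are well defined, and the six sets map bijectively onto the six pairs.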
We now prove some general lemmas about the singularity of lattice toric varieties. First let us formulate the notions of a diamond and a diamond relation precisely.
\begin{defn}\label{diamond}
A diamond in a distributive lattice $\mathcal{L}$ is a subset $D=\{\alpha,\beta,\gamma,\delta\}$ such that there are two non-comparable elements $x,y \in D$ with $x \vee y , x \wedge y \in D$.
\end{defn}
Note that every diamond $D=\{\alpha, \beta ,\alpha \vee \beta , \alpha \wedge \beta \}$ gives rise to a generator of the ideal $I(\cal{L})$, namely the relation (henceforth called a diamond relation) $f_{D}=x_{\alpha}x_{\beta}-x_{\alpha \vee \beta }x_{\alpha \wedge \beta}$. So we can write the ideal $I(\cal{L})$ as the ideal in $k[\cal{L}]$ generated by the relations $f_{D}$, where $D$ ranges over all the diamonds of the distributive lattice $\cal{L}$. Let us denote the set of all the diamonds of the lattice $\cal{L}$ by $\mathfrak{D}$. Given $\alpha \in \cal{L}$, let us define a class of pairs arising from the diamond relations that involve $\alpha$.
\begin{defn}
For $\alpha \in \cal{L}$, let $E_{\alpha}=\{(\alpha,\gamma) \mid x_\alpha x_\gamma \mbox{ is a monomial of } f_D \mbox{ for some } D \in \mathfrak{D}\}$; that is, $\gamma$ is the element paired with $\alpha$ in a monomial of a diamond relation.
\end{defn}
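Theorems \ref{a} and \ref{b} can be checked by brute force on a toy square lattice. The sketch below is our own illustration, assuming the reading of $E_\alpha$ as the set of pairs $(\alpha,\gamma)$ for which $x_\alpha x_\gamma$ is a monomial of some diamond relation; on the product $c(3)\times c(2)$ it finds $|E_\alpha| = |\mathcal{L}| - |J|$ for every $\alpha$, the equality promised above for products of two chains.

```python
from itertools import combinations

# Toy square lattice: the product of chains c(3) x c(2), as pairs ordered
# componentwise. Join and meet are componentwise max and min.
L = [(i, j) for i in range(3) for j in range(2)]
leq  = lambda x, y: x[0] <= y[0] and x[1] <= y[1]
join = lambda x, y: (max(x[0], y[0]), max(x[1], y[1]))
meet = lambda x, y: (min(x[0], y[0]), min(x[1], y[1]))

nc_pairs = [(x, y) for x, y in combinations(L, 2)
            if not leq(x, y) and not leq(y, x)]

def E(alpha):
    """Pairs (alpha, gamma) with x_alpha x_gamma a monomial of some f_D."""
    pairs = set()
    for x, y in nc_pairs:
        j, m = join(x, y), meet(x, y)
        for s, t in [(x, y), (y, x), (j, m), (m, j)]:
            if s == alpha:
                pairs.add((alpha, t))
    return pairs

# Join irreducibles: elements that are not the join of a non-comparable pair.
# Note the minimal element is join irreducible under this definition.
J = [a for a in L if all(join(x, y) != a for x, y in nc_pairs)]

for alpha in L:
    assert len(E(alpha)) >= len(L) - len(J)
print("|L| - |J| =", len(L) - len(J),
      "and |E_alpha| =", sorted(len(E(a)) for a in L))
```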
\begin{lem}\label{inequality} For a square lattice $\cal{L}$ and $\beta$ a maximal join-irreducible in $J(\cal{L})$, we have $$ \card{E_\alpha \setminus E_\alpha(\cal{L}_\beta)}= \card{E_\alpha} -\card{E_\alpha(\cal{L}_\beta)} \geq \card{\cal{L}} - \card{\cal{L}_{\beta}} - 1, $$ where $E_\alpha(\cal{L}_\beta)$ denotes the set $E_\alpha$ computed in the pruned lattice $\cal{L}_\beta$.
\end{lem}
\begin{proof}
Let $n=\card{\cal{L}}-\card{\cal{L}_\beta}$, let $\{b_1,b_2,\ldots , b_n\}=\cal{L} \setminus \cal{L}_\beta$, and let $\{D_1,D_2, \ldots , D_r\}$ be the diamonds giving the elements of $E_\alpha \setminus E_\alpha(\cal{L}_\beta)$. Let us write $D_i=\{\alpha, x_i,y_i,z_i\}$. Since these diamonds do not contribute to $E_\alpha(\cal{L}_\beta)$, at least one of $x_i,y_i,z_i$ lies in $\cal{L} \setminus \cal{L}_\beta$ for each $i \leq r$; without loss of generality let us assume that it is $x_i$. Now we know from Lemma \ref{chain} that $\cal{L} \setminus \cal{L}_\beta$ is given by $\{\gamma \in \cal{L} \mid \gamma \geq \beta\}$. And since $x_i \in \cal{L} \setminus \cal{L}_\beta$ we have $x_i \geq \beta$, which leads us to the following two cases:
\begin{itemize}
\item \emph{\underline{Case one}}: $x_i$ is the maximal element of the diamond $D_i$. In this case, since $x_i$ is the maximal element, it is the join of two other elements $\delta,\varepsilon \in D_i$, which means at least one of $\delta,\varepsilon \geq \beta$; see Lemma \ref{check}. But it cannot be both, since that would imply $\alpha \geq \beta$, contrary to our assumption that $\alpha \in \cal{L}_\beta$. So we have exactly two elements of $D_i$ in $\cal{L} \setminus \cal{L}_\beta$.
\item \emph{\underline{Case two}}: $x_i$ is not the maximal element. In this case we have another element $\delta \in D_i$ such that $\delta \geq x_i \geq \beta$. But only these two elements are larger than $\beta$, since otherwise we would have $\alpha \geq \beta$, contradicting the assumption that $\alpha \notin \cal{L} \setminus \cal{L}_\beta$.
\end{itemize}
To sum up the above two cases: apart from $x_i$, which is by our assumption in $\cal{L} \setminus \cal{L}_\beta$, exactly one of $y_i,z_i$ lies in $\cal{L} \setminus \cal{L}_\beta$. In other words there exist $b_{i_1} , b_{i_2}$ such that $x_i =b_{i_1}$ and one of $y_i,z_i$, without loss of generality $y_i$, equals $b_{i_2}$. Let us rewrite $D_i$ in light of this new information: $D_i=\{\alpha, b_{i_1},b_{i_2},z_i\}$. Let us also see that $b_{i_1}$ and $ b_{i_2}$ are comparable: otherwise, if these were the non-comparable elements of the diamond $D_i$, then both being larger than $\beta$ would imply that both $b_{i_1} \vee b_{i_2}$ and $b_{i_1} \wedge b_{i_2}$ are larger than $\beta$; but one of these two must be $\alpha$, which is not larger than $\beta$, a contradiction. So without loss of generality let us assume that $b_{i_1} > b_{i_2}$. Note that with these assumptions in place, $b_{i_1}$ is the maximal element of the diamond $D_i$. So the Hasse diagram of the diamond looks like one of the following two:
\begin{center}
\begin{tikzpicture}
\node (max) at (1,2) {$b_{i_1}$};
\node (a) at (2,1) {$b_{i_2}$};
\node (b) at (0,1) {$z_i$};
\node (c) at (1,0) {$\alpha$};
\draw (a) -- (max) -- (b) -- (c) -- (a) ;
\node (max) at (8,2) {$b_{i_1}$};
\node (a) at (9,1) {$b_{i_2}$};
\node (b) at (7,1) {$\alpha$};
\node (c) at (8,0) {$z_i$};
\draw (a) -- (max) -- (b) -- (c) -- (a) ;
\end{tikzpicture}
\end{center}
Now by Lemma \ref{greater} we have $\card{E_\alpha}-\card{E_\alpha(\cal{L}_\beta)} \geq \card{\cal{L}}-\card{\cal{L}_\beta}-1=n-1$.
\end{proof}
For a square lattice $\mathcal{L} = \prod_{i \leq t } c(n_i)$, we know that the set of join irreducibles can be identified with the poset $\cup_i c(n_i)$. Let us take a maximal join irreducible $[\beta]$ in $\cal{L}$, given by $[\beta]=(\rho,\rho,\rho,\ldots, \beta,\rho, \ldots, \rho)$, where $\beta$ is the maximal element of $c(n_i)$ and $\rho$ is the minimal element of $\cal{L}$. Since the join irreducibles of $\cal{L}$ are identified with the union of the $c(n_j)$, we will denote the join irreducible $[\beta]$ simply by $\beta$; this reduces the burden of notation without hampering the generality of the treatment. Let us prove the lemma below with these notations in mind.
\begin{lem} \label{greater} For $\cal{L}$, $\beta$, $c(n_j)$, $B_\beta$ as above, let $\alpha \in \cal{L}_\beta$; then $\card{E_\alpha} - \card{E_\alpha(\cal{L}_\beta)} \geq \card{B_\beta} -1$.
\end{lem}
\begin{proof}
Let us write $\alpha$ as the tuple $(a_1,a_2,a_3, \ldots, a_t)$, and without loss of generality let us assume that $[\beta]=(\beta,\rho,\rho, \ldots , \rho)$. Let us set $b = \min \{ \gamma \in B_\beta \mid \gamma \geq \alpha \}$; with this information in place we can write $b$ explicitly as $b= ( \beta , a_2 ,\ldots , a_t)$. Given $b_1=(\beta,\gamma_2,\gamma_3,\ldots,\gamma_t) \in B_\beta$ with $b_1 \neq b$ (continuing with the same notations as in the previous lemma), we will show that there is a diamond $D_{b_1}$ containing both $\alpha$ and $b_1$, such that $D_{b_1}$ and $D_{{b_1}'}$ are different for different elements $b_1$ and ${b_1}'$. We prove this by considering the following cases:
\begin{itemize}
\item \emph{\underline{Case one}}: $b_1 >b$. In this case note that $\gamma_i \geq a_i \, , \forall i \geq 2$. Let us take $z=(a_1,\gamma_2,\gamma_3,\ldots , \gamma_t)$. We have $z \vee b = b_1 $ and $z \wedge b = \alpha$.
\begin{center}
\begin{tikzpicture}
\node (max) at (1,2) {$b_1$};
\node (a) at (0,1) {$z$};
\node (b) at (2,1) {$b$};
\node (c) at (1,0) {$\alpha$};
\draw (a) -- (max) -- (b) -- (c) -- (a) ;
\end{tikzpicture}
\end{center}
\item \emph{\underline{Case two}}: $b_1 < b$. In this case we have $\gamma_i \leq a_i \, , \forall i \geq 2$. Note that $\alpha$ and $b_1$ are non-comparable: if $b_1 >\alpha$ then, since $b_1 < b$, the minimality of $b$ is contradicted, and if $b_1 < \alpha $ then $\alpha \in B_\beta$, contradicting our assumption about $\alpha$. Since they are non-comparable we take $z= \alpha \wedge b_1$. Note that we also have $b_1 \vee \alpha = b$.
\begin{center}
\begin{tikzpicture}
\node (max) at (1,2) {$b$};
\node (a) at (0,1) {$\alpha$};
\node (b) at (2,1) {$b_1$};
\node (c) at (1,0) {$z$};
\draw (a) -- (max) -- (b) -- (c) -- (a) ;
\end{tikzpicture}
\end{center}
\item \emph{\underline{Case three}}: $b_1$ is non-comparable to $b$. In this case we see that $b_1$ is non-comparable to $\alpha$: if $b_1 > \alpha$ then $b_1 \wedge b$ is an element of $B_\beta$ lying above $\alpha$ and strictly below $b$, contradicting the minimality of $b$; and if $b_1 < \alpha$ then $\alpha \in B_\beta$, contradicting our assumption about $\alpha$. So $b_1 $ and $\alpha$ are non-comparable, and we take $z = b_1 \wedge \alpha$ and $b_2 = b_1 \vee \alpha$.
\begin{center}
\begin{tikzpicture}
\node (max) at (1,2) {$b_2$};
\node (a) at (0,1) {$\alpha$};
\node (b) at (2,1) {$b_1$};
\node (c) at (1,0) {$z$};
\draw (a) -- (max) -- (b) -- (c) -- (a) ;
\end{tikzpicture}
\end{center}
\end{itemize}
The diamonds that we have assigned to the elements of $B_\beta \setminus \{b\}$ are all distinct, since for a given $b_1$ we obtain the pair $(\alpha,b_1) \in E_\alpha$; this means that the set $E_\alpha \setminus E_\alpha(\cal{L}_\beta)$ has cardinality at least $\card{B_\beta}-1$. This completes the proof of the lemma.
\end{proof}
\section{Proof of the Main Results} \label{main}
Now we are ready to prove the main theorems of the article.
\subsection{Proof of Theorem \ref{a}}
\begin{proof}
Let us write the set of diamonds in the lattice $\cal{L}$ as $D_1,D_2 , \ldots, D_m$, of which we choose $D_1, D_2, \ldots, D_r$ to be diamonds realising the pairs $(\alpha,\beta_i) \in E_\alpha$, one diamond for each pair, as there can be more than one diamond containing the same pair. Let us also enumerate the diamond relations that generate the ideal $I(\cal{L})$ as $f_1,f_2, \ldots, f_m$, where $f_i = f_{D_i}$. So for each $f_i$, $i \leq r$, we have $f_i= x_{\alpha}x_{\beta_i} - x_{\delta_i}x_{\gamma_i}$ for some $\delta_i , \gamma_i \in \cal{L}$.
So we have $\frac{\partial f_i }{\partial x_{\beta_i}}\big|_{p_\alpha}=x_\alpha|_{p_\alpha}=c \neq 0 \, , \forall i \leq r $, and $\frac{\partial f_i}{\partial x_\beta}\big|_{p_\alpha}=0$ for all $\beta$ whenever $i > r$. The last equality follows from the fact that $x_\alpha$ is the only nonzero coordinate and it occurs only in the diamond relations $f_i$ for $i \leq r$. So the Jacobian matrix \cite[p.~31]{Ha}\cite[p.~404]{eis} has the following shape:
\[ \left( \begin{array}{c}
\frac{\partial f_i}{\partial x_{\beta_j}}
\end{array} \right)= \left(
\begin{array}{cc}
cI_{r \times r} & 0 \\
A & B
\end{array}\right)\]
for some matrices $A$ and $B$ of appropriate sizes. So the rank of the Jacobian matrix is greater than or equal to $r=|E_\alpha|$, and since the dimension of the variety at the point $p_\alpha$ is $|J(\cal{L})|$, we have the result \cite[p.~32]{Ha}.
\end{proof}
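The shape of this Jacobian can be inspected directly on a toy example. The sketch below is our own illustration: it builds the diamond relations of $c(3)\times c(2)$, evaluates the Jacobian at $p_\alpha$ with $c=1$, and computes its rank over the rationals. In this example the rank equals $|\mathcal{L}|-|J|=2$ at every $p_\alpha$, which is the codimension of the affine variety.

```python
from itertools import combinations
from fractions import Fraction

# Toy lattice: c(3) x c(2) as componentwise-ordered pairs.
L = [(i, j) for i in range(3) for j in range(2)]
leq  = lambda x, y: x[0] <= y[0] and x[1] <= y[1]
join = lambda x, y: (max(x[0], y[0]), max(x[1], y[1]))
meet = lambda x, y: (min(x[0], y[0]), min(x[1], y[1]))

# Each relation f_D = x_a x_b - x_u x_w, stored as its two monomial pairs.
rels = [((x, y), (join(x, y), meet(x, y)))
        for x, y in combinations(L, 2) if not leq(x, y) and not leq(y, x)]

def jacobian_rank_at(alpha, c=Fraction(1)):
    """Rank of the Jacobian of the diamond relations at the point p_alpha."""
    idx = {v: k for k, v in enumerate(L)}
    rows = []
    for (a, b), (u, w) in rels:
        row = [Fraction(0)] * len(L)
        # d f / d x_s at p_alpha is (+/-) x_t, nonzero only when t == alpha.
        for s, t, sign in [(a, b, 1), (b, a, 1), (u, w, -1), (w, u, -1)]:
            if t == alpha:
                row[idx[s]] += sign * c
        rows.append(row)
    # Gaussian elimination over the rationals to get the rank.
    rank = 0
    for col in range(len(L)):
        piv = next((k for k in range(rank, len(rows)) if rows[k][col] != 0), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for k in range(len(rows)):
            if k != rank and rows[k][col] != 0:
                f = rows[k][col] / rows[rank][col]
                rows[k] = [p - f * q for p, q in zip(rows[k], rows[rank])]
        rank += 1
    return rank

for alpha in L:
    print(alpha, "rank =", jacobian_rank_at(alpha))
```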
\subsection{Proof of Theorem \ref{b}}
\begin{proof}
To prove the statement we will make use of the pruning operation.
We will prove the theorem by induction on $|J|$. For the base case $|J|=1$ the lattice $\cal{L}$ is just a chain; hence $E_\alpha=\emptyset $, and since $J=\cal{L}$ in this case, the inequality holds. For the general case we distinguish two cases.
\begin{itemize}
\item \emph{\underline{Case One}}:
Suppose first that $\alpha$ is not the maximal element of the lattice $\cal{L}$; then we can always find a maximal join irreducible $\beta$ such that $\beta \not\leq \alpha$. We prune the lattice $\cal{L}$ with respect to the maximal join-irreducible $\beta$; let the pruned lattice be called $\cal{L}_\beta$, and let the set of join irreducibles of this sublattice be $J_\beta$.
By the induction hypothesis we know that \[ \card{E_\alpha(\cal{L}_\beta)} \geq \card{\cal{L}_\beta} - \card{ J_\beta}. \]
Now we also have \[ \card{J_\beta} = \card{ J} - 1. \] Putting these two equations together we have \[\card{E_\alpha (\cal{L}_{\beta})} \geq \card{\cal{L}_{\beta}} - \card{ J} + 1. \]
Note that if we prove \[ \card{E_\alpha} -\card{E_\alpha(\cal{L}_{\beta})} \geq \card{\cal{L}} - \card{\cal{L}_{\beta}} - 1, \] then
\[ \card{E_\alpha} \geq \card{\cal{L}}-\card{\cal{L}_\beta} + \card{E_\alpha(\cal{L}_\beta)} -1 \]
\[ \geq \card{\cal{L}} - \card{\cal{L}_\beta} + \card{\cal{L}_\beta} - \card{J} +1 -1 \]
\[ = \card{\cal{L}} - \card{J},\]
which is what we want. The required inequality is exactly Lemma \ref{inequality}, so this case is done.
\item \emph{\underline{Case Two}}: In this case we have $\alpha = \mathrm{max}(\cal{L})$. We reduce this case to the previous one by replacing the lattice $(\cal{L}, \leq)$ with the lattice $(\cal{L} , \triangleleft)$, where the order $\triangleleft$ is given by the following rule: $x \triangleleft y \Leftrightarrow y \leq x $. Observe that a diamond in the lattice $(\cal{L},\leq)$ is still a diamond in $(\cal{L}, \triangleleft)$ and vice versa, and the set $E_\alpha$ remains the same in both lattices for a given element $\alpha \in \cal{L}$. But since we have reversed the order, the maximal element $\alpha$ is now the minimal element of the lattice $(\cal{L}, \triangleleft)$. Hence by the previous case we have the required inequality.
\end{itemize}
\end{proof}
\sigmaubsection{Proof of the Theorem \ref{c}}
\betaegin{proof}
We will prove that the affine cone over the variety $X(\cal{L})$, namely $\wedgeidehat{X(\muathcal{L})} =\muathrm{Spec}( k[\cal{L}])$ is smooth at all points except at the vertex. Hence the projective variety $X(\cal{L})$= $\muathrm{Proj} ( k[\cal{L}])$ is smooth at all points. Let $p \in \wedgeidehat{X(\cal{L})}$ which is not the origin, so we have at the least one $\alpha \in \cal{L}$ such that the \mubox{$\alpha$th} coordinate of $p$ namely $(p)_\alpha = x_\alpha$ is nonzero. Now by theorem \ref{a} we know that the point $p$ is smooth if $\card{E_\alpha} \gammaeq \card{\cal{L}} - \card{J}$ which is true for any $\alpha$ for a square lattice $\cal{L}$ by theorem \ref{b}.
\end{proof}
\end{document}
\begin{document}
\articletype{Manuscript}
\title{A feasible adaptive refinement algorithm for linear semi-infinite optimization}
\author{
\name{Shuxiong Wang\textsuperscript{1}\thanks{Email: [email protected]}}
\affil{\textsuperscript{1} Department of Mathematics, University of California, Irvine, CA, USA}
}
\maketitle
\begin{abstract}
A numerical method is developed to solve the linear semi-infinite programming problem (LSIP) in which
the iterates produced by the algorithm are feasible for the original problem.
This is achieved by constructing a sequence of standard linear programming problems
with respect to the successive discretization of the index set such that the approximate
regions are included in the original feasible region.
The convergence of the approximate solutions to the solution of the original problem is proved
and the associated optimal objective function values of the approximate problems are
monotonically decreasing and converge to the optimal value of LSIP.
An adaptive refinement procedure is designed to discretize the index set and update the constraints
for the approximate problem.
Numerical experiments demonstrate the performance of the proposed algorithm.
\end{abstract}
\begin{keywords}
Linear semi-infinite optimization, feasible iteration, concavification, adaptive refinement
\end{keywords}
\section{Introduction}
Linear semi-infinite programming problem (LSIP) refers to the optimization problem with finitely many
decision variables and infinitely many linear constraints associated with some parameters,
which can be formulated as
\begin{gather}\label{LSIP}
\begin{split}
\min\limits_{x\in \mathbb{R}^n}\quad & c^{\top}x \\
\textrm{s.t.}\quad & a(y)^{\top}x + a_0(y)\geq 0\ \forall y\in Y, \\
& x_i \geq 0, i = 1,2,...,n,
\end{split}\tag*{(LSIP)}
\end{gather}
where $c\in \mathbb{R}^n$, $a(y) = [a_1(y),...,a_n(y)]^{\top}$ and
$a_i: \mathbb{R}^m \mapsto \mathbb{R}$, for $i = 0,1,...,n$,
are real-valued coefficient functions, and $Y \subseteq \mathbb{R}^m$ is the index set.
In this paper, we assume that $Y = [a,b]$ is an interval with $a<b$.
Denote by $F$ the feasible set of (\ref{LSIP}):
\begin{equation*}
F = \{x\in \mathbb{R}^n_{+}\ |\ a(y)^{\top}x + a_0(y)\geq 0, \forall y\in Y\},
\end{equation*}
where $\mathbb{R}^n_{+} = \{x\in \mathbb{R}^n\ |\ x_i \geq 0, i = 1,2,...,n\}$.
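To make the setup concrete, the following sketch encodes a hypothetical toy instance (not taken from the paper): $n = 2$, $Y = [0,1]$, $a_1(y) = y$, $a_2(y) = 1-y$, $a_0(y) = -1/4$, and checks the semi-infinite constraint on a dense grid of index values.

```python
import numpy as np

# Hypothetical toy LSIP instance (illustration only):
#   min  x1 + x2
#   s.t. y*x1 + (1 - y)*x2 - 0.25 >= 0  for all y in Y = [0, 1],  x >= 0.
c = np.array([1.0, 1.0])
a = lambda y: np.array([y, 1.0 - y])      # a(y) = [a_1(y), a_2(y)]
a0 = lambda y: -0.25                      # a_0(y)

def is_feasible(x, grid):
    """Check the semi-infinite constraint on a finite grid of index values y."""
    return bool(np.all(x >= 0) and all(a(y) @ x + a0(y) >= 0 for y in grid))

grid = np.linspace(0.0, 1.0, 101)
print(is_feasible(np.array([0.25, 0.25]), grid))  # True: the constraint holds for all y
print(is_feasible(np.array([0.10, 0.50]), grid))  # False: violated near y = 1
```

Grid sampling only certifies feasibility approximately; the restriction techniques developed below produce points that are provably feasible.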
Linear semi-infinite programming has wide applications in economics, robust optimization and numerous
engineering problems, etc.
More details can be found in \cite{yy1,yy2,yy3} and references therein.
Numerical methods have been proposed for solving linear semi-infinite programming problems
such as discretization methods, local reduction methods and descent direction methods
(See \cite{mma,ma,hkko,rg} for an overview of these methods).
The main idea of discretization methods is to solve the following
linear program
\begin{align*}
\min_{x\in \mathbb{R}^n_{+}} &\quad c^{\top}x \\
\textrm{s.t.} &\quad a(y)^{\top}x + a_0(y)\geq 0\ \forall y\in T,
\end{align*}
in which the original index set $Y$ in \ref{LSIP} is replaced by its finite subset $T$.
The iterates generated by the discretization methods converge to a solution of the original
problem as the distance between $T$ and $Y$ tends to zero (see \cite{BB,yy2,mma}).
The local reduction methods solve nonlinear equations by a quasi-Newton method,
which requires smoothness conditions on the functions defining the constraints \cite{sa}.
The feasible descent direction methods generate a feasible direction based on the
current iterate and achieve the next iterate by such a direction \cite{tse}.
The purification methods proposed in \cite{ea,em} generate a finite feasible sequence where the
objective function value of each iterate is reduced.
The method proposed in \cite{ea} requires that the feasible set of \ref{LSIP} is locally polyhedral,
and the method proposed in \cite{em}
requires that the coefficient functions $a_i, i=0,1,...,n,$ are analytic.
Feasible iterative methods for nonlinear semi-infinite optimization problems have been developed
via techniques of convexification or concavification etc \cite{floudas,wang,Amit}.
These methods might be applicable to solve \ref{LSIP} directly; however, they are not
developed specifically for \ref{LSIP}, and the computational time can be reduced if the
algorithm is adapted to the linear case effectively.
In this paper, we develop a feasible iterative algorithm to solve \ref{LSIP}. The basic idea is to
construct a sequence of standard linear optimization problems with respect to
the discretized subsets of the index set such that the feasible region of each linear optimization
problem is included in the feasible region of \ref{LSIP}. The
proposed method consists of two stages. The first stage is based on the restriction of the
semi-infinite constraint.
The second stage is based on estimating lower bounds of the coefficient functions using
concavification or an interval method.
The rest of the paper is organized as follows. In section 2, we propose methods to construct
inner approximate regions for the feasible region of \ref{LSIP}. A numerical method to solve the
original linear semi-infinite programming problem is proposed in section 3. In section 4, we
apply our algorithm to numerical examples to show the performance of the method. Finally,
we conclude the paper in section 5.
\section{Restriction of the lower level problem}
The restriction of the lower level problem leads to an inner approximation of the feasible region of
\ref{LSIP}, and thus to feasible iterates. A two-stage procedure is performed to achieve the restriction for \ref{LSIP}.
At the first stage, we construct a uniform lower-bound function, with respect to the decision variables,
for the function defining the constraint in \ref{LSIP}. This step requires solving global optimization
problems associated with the coefficient functions over the index set.
The second stage is to estimate lower bounds of the coefficient functions over the index set rather
than solving the optimization problems globally, which significantly reduces the computational cost.
\subsection{Construction of the lower-bound function}
The semi-infinite constraint of \ref{LSIP} can be reformulated as
\begin{equation}\label{fs1}
\min_{y\in Y} \{a(y)^{\top} x + a_0(y)\} \geq 0.
\end{equation}
Since $a(y)^{\top} x = \sum_{i=1}^{n} a_i(y) x_i$, (\ref{fs1}) is equivalent to
\[\min_{y\in Y}\{ \sum_{i=1}^{n} a_i(y) x_i + a_0(y)\}\geq 0.\]
By exchanging the minimization and summation on the left side of the inequality, we
obtain a new linear inequality
\begin{equation}
\sum_{i=1}^{n} \{\min_{y\in Y} a_i(y)\} x_i + \min_{y\in Y} a_0(y)\geq 0.
\label{fs2}
\end{equation}
Since the decision variables $x_i \geq 0$, $i = 1,2,...,n$, we have
\[\sum_{i=1}^{n} \{\min_{y\in Y} a_i(y)\} x_i + \min_{y\in Y} a_0(y) \leq \min_{y\in Y}\{ \sum_{i=1}^{n} a_i(y) x_i + a_0(y)\}.\]
Thus, we obtain a uniform lower-bound function for $\min_{y\in Y} \{a(y)^{\top} x + a_0(y)\}$,
and any point $x$ satisfying (\ref{fs2}) is a feasible point for \ref{LSIP}.
Let $\bar{F}$ be the feasible region defined by the inequality (\ref{fs2}), i.e.,
\[\bar{F} = \{x\in \mathbb{R}^n_{+}\ |\ \sum_{i=1}^{n} \{\min_{y\in Y} a_i(y)\} x_i + \min_{y\in Y} a_0(y)\geq 0\}.\]
From the above analysis, we conclude that $\bar{F} \subseteq F$.
The main difference between the original constraint (\ref{fs1}) and the restriction constraint (\ref{fs2}) is that
the minimization is independent of the decision variable $x$ in the latter case.
In order to compute $\bar{F}$, we need to solve a series of problems as follows:
\begin{equation}
\label{gloa}
\min_{y} \ a_i(y)\quad \textrm{s.t.}\quad y\in Y
\end{equation}
for $i = 0,1,...,n$.
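As a numerical sanity check of the restriction (\ref{fs2}), the sketch below uses the hypothetical constraint $y\,x_1 + (1-y)\,x_2 - 1/4 \geq 0$ on $Y = [0,1]$ and approximates the minima in (\ref{gloa}) by dense sampling, a stand-in for exact global optimization:

```python
import numpy as np

# Hypothetical constraint y*x1 + (1-y)*x2 - 0.25 >= 0 on Y = [0, 1].
Y = np.linspace(0.0, 1.0, 1001)
a_funcs = [lambda y: -0.25 + 0.0*y,   # a_0(y)
           lambda y: y,               # a_1(y)
           lambda y: 1.0 - y]         # a_2(y)
mins = [f(Y).min() for f in a_funcs]  # min_y a_i(y), approximated by sampling

x = np.array([0.4, 0.4])
g_bar = mins[0] + mins[1]*x[0] + mins[2]*x[1]                        # left side of (fs2)
g = np.min(a_funcs[0](Y) + a_funcs[1](Y)*x[0] + a_funcs[2](Y)*x[1])  # true constraint value
print(g_bar, g)   # g_bar <= g: the restriction underestimates the constraint
```

Here $x = (0.4, 0.4)$ satisfies the original constraint for every $y$ but violates (\ref{fs2}), illustrating how conservative the single-interval restriction can be and motivating the subdivisions of section 3.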
Based on $\bar{F}$, we can construct a linear program associated with one linear
inequality constraint such that it has the same objective
function as LSIP and any feasible point of the constructed problem is feasible for LSIP.
Such a problem is defined as
\begin{equation}
\min_{x\in \mathbb{R}^n} \quad c^{\top} x \quad
\textrm{s.t.} \quad x \in \bar{F}.
\tag*{(R-LSIP)}\label{rlsip}
\end{equation}
To characterize how well R-LSIP approximates LSIP, we can estimate the distance between
$g(x) = \min_{y\in Y} \{a(y)^{\top} x + a_0(y)\}$ and
$\bar{g}(x) = \sum_{i=1}^{n} \{\min_{y\in Y} a_i(y)\} x_i +
\min_{y\in Y} a_0(y)$ which have been used to define the constraints of LSIP and R-LSIP.
Assume that each function $a_i(y)$ is Lipschitz continuous on $Y$, i.e., there exist
constants $L_i \geq 0$ such that $|a_i(y) - a_i(z)| \leq L_i |y - z|$ holds for any
$y,z\in Y$, $i = 0,1,2,...,n$. By direct computation, we have
\[|g(x) - \bar{g}(x)| \leq (\sum_{i=1}^{n} L_i x_i + L_0) (b-a).\]
It turns out that for any fixed $x$, the error between $g(x)$ and $\bar{g}(x)$ is bounded
linearly with respect to $(b-a)$.
Furthermore, if we assume that the decision variables are upper bounded
(e.g., $0\leq x_i \leq U_i$ for some constants $U_i > 0$, $i = 0,1,2,...,n$), we have
\[|g(x) - \bar{g}(x)| \leq (\sum_{i=1}^{n} L_i U_i + L_0) (b-a).\]
This indicates that the error between $g(x)$ and $\bar{g}(x)$ goes to zero uniformly
as $|b-a|$ tends to zero. By dividing the index set $Y = [a,b]$ into subintervals,
one can construct a sequence of linear programs that approximate LSIP exhaustively
as the size of the subdivision (formally defined in section {\bf 3.1}) tends to zero.
Given a subdivision, constructing \ref{rlsip} on each subinterval requires solving
problem (\ref{gloa}) globally, which becomes computationally expensive due to the increasing number of
subintervals and the non-convexity of the coefficient functions in general. In fact, it is not
necessary to solve (\ref{gloa}) exactly. In the next section, we discuss how to
estimate a good lower bound for (\ref{gloa}) and use it to construct feasible approximation
problems for \ref{LSIP}.
\subsection{Construction of the inner approximation region}
In order to guarantee that the feasible region $\bar{F}$ derived from inequality (\ref{fs2}) is an
inner approximation of the feasible region of \ref{LSIP}, optimization problem (\ref{gloa})
needs to be solved globally. However, computing a lower bound for (\ref{gloa}) is enough to generate
a restriction problem of \ref{LSIP}. In this section, we present two alternative approaches to
approximate problem (\ref{gloa}).
The idea of the first approach comes from the techniques of interval methods \cite{IT,IT1}.
Given an interval $Y = [a,b]$, the range of $a_i(y)$ on $Y$ is defined as $R(a_i,Y) = [R_i^l,R_i^u] = \{a_i(y)\ |\ y\in Y\}$. An interval function $A_i(Y) = [A_i^l, A_i^u]$ is called an inclusion function for $a_i(y)$ on $Y$ if $R(a_i,Y) \subseteq A_i(Y)$.
A natural inclusion function can be obtained by replacing the decision variable $y$ in $a_i(y)$ with the corresponding
interval and computing the resulting expression using the rules of interval arithmetic \cite{IT1}.
In some special cases, the natural inclusion function is tight (i.e., $R(a_i,Y) = A_i(Y)$). However,
in more general cases, the natural inclusion function overestimates the range of
$a_i(y)$ on $Y$, which implies that $A_i^l < \min_{y\in Y} a_i(y)$. In such cases, the tightness
of the inclusion can be measured by
\begin{equation}\label{haus}
\max\{|R_i^l - A_i^l|,|R_i^u - A_i^u|\} \leq \gamma |b-a|^p\quad \textrm{and}\quad |A_i^l - A_i^u| \leq \delta |b-a|^p,
\end{equation}
where $p \geq 1$ is the convergence order, $\gamma \geq0$ and $\delta\geq 0$ are constants which
depend on the expression of $a_i(y)$ and the interval $[a,b]$.
By replacing $\min_{y\in Y} a_i(y)$ in (\ref{fs2}) with $A_i^l$ for $i = 0,1,...,n$, we have a new
linear inequality as follows
\begin{equation}\label{app1}
\sum_{i = 1}^{n}A_i^l x_i + A_0^l \geq 0.
\end{equation}
It is obvious that any $x$ satisfying (\ref{app1}) is a feasible point for \ref{LSIP}.
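A minimal sketch of a natural inclusion function for a hypothetical coefficient $a_i(y) = y^2 - y$ follows; production interval libraries additionally handle outward rounding, division, and standard functions.

```python
# Minimal interval arithmetic for a natural inclusion function (illustration only).
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(ps), max(ps))

# a_i(y) = y^2 - y on Y = [0, 1]; the true range is [-1/4, 0].
Y = Interval(0.0, 1.0)
A = Y*Y - Y          # natural inclusion: evaluate the expression on intervals
print(A.lo, A.hi)    # [-1, 1] encloses (and overestimates) the true range
```

Here the natural inclusion $[-1,1]$ strictly contains the true range $[-1/4,0]$, so $A_i^l = -1$ is a valid, though loose, lower bound for $\min_{y\in Y} a_i(y)$.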
The second approach to estimating the lower bound in (\ref{gloa}) is to construct a uniform
lower bound function $\bar{a}_i(y)$ such that $\bar{a}_i(y) \leq a_i(y)$ holds for all
$y\in Y$. In addition, we require that the optimal solution of
\[\min_{y}\ \bar{a}_i(y)\quad \textrm{s.t.}\quad y\in Y\]
is easy to identify. Here, we construct a concave lower bound function for $a_i(y)$ by adding a
negative quadratic term to it, i.e.,
\[\bar{a}_i(y) = a_i(y) - \frac{\alpha_i}{2}(y-\frac{a+b}{2})^2,\]
where $\alpha_i \geq 0$ is a parameter. It follows that $\bar{a}_i(y) \leq a_i(y)$ for all $y\in Y$.
Furthermore, $\bar{a}_i(y)$ is twice continuously differentiable if and only if
$a_i(y)$ is twice continuously differentiable and the second derivative of $\bar{a}_i(y)$ is
$\bar{a}''_i(y) = a''_i(y) - \alpha_i$.
Thus $\bar{a}_i(y)$ is concave on $Y$ if the parameter $\alpha_i$ satisfies
$\alpha_i \geq \max_{y\in Y}\ a''_i(y)$. To sum up, we select the parameter $\alpha_i$ such that
\begin{equation}\label{alpha}
\alpha_i \geq \max \{0, \max_{y\in Y} a''_i(y)\}.
\end{equation}
This guarantees that $\bar{a}_i(y)$ is a concave lower bound function for $a_i(y)$ on the index set
$Y$. The computation of $\alpha_i$ in (\ref{alpha}) involves a global optimization problem. However, we can use
any upper bound of the right-hand side of (\ref{alpha}); such an upper bound can be obtained by the interval method described above.
On the other hand, the distance between $\bar{a}_i(y)$ and $a_i(y)$ on $[a,b]$ is
\begin{equation*}\label{dist}
\max_{y\in Y} |a_i(y) - \bar{a}_i(y)| = \frac{\alpha_i}{8}(b-a)^2.
\end{equation*}
Since $\bar{a}_i(y)$ is concave on $Y$, the minimum of $\bar{a}_i(y)$ on $Y$ is attained on the boundary of $Y$ (see \cite{convex}), i.e.,
$\min_{y\in Y} \bar{a}_i(y) = \min\{\bar{a}_i(a),\bar{a}_i(b)\}$.
By replacing $\min_{y\in Y} a_i(y)$ in (\ref{fs2}) with $\min_{y\in Y} \bar{a}_i(y)$, we get the
second type of restriction constraint as follows
\begin{equation}\label{app2}
\sum_{i = 1}^{n} \min\{\bar{a}_i(a),\bar{a}_i(b)\} x_i + \min\{\bar{a}_0(a),\bar{a}_0(b)\} \geq 0.
\end{equation}
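The construction can be sketched numerically for a hypothetical coefficient $a_i(y) = y^2$ on $Y = [0,1]$, for which $a_i'' \equiv 2$ and hence $\alpha_i = 2$ satisfies (\ref{alpha}):

```python
import numpy as np

# Concavification of a(y) = y^2 on [a, b] = [0, 1] (hypothetical example).
lo, hi = 0.0, 1.0
alpha = 2.0                                    # >= max{0, max_y a''(y)} = 2
f = lambda y: y**2
fbar = lambda y: f(y) - 0.5*alpha*(y - 0.5*(lo + hi))**2   # concave lower bound

ys = np.linspace(lo, hi, 201)
assert np.all(fbar(ys) <= f(ys) + 1e-12)       # fbar underestimates f on Y
# Maximum gap matches the stated distance alpha/8 * (b-a)^2:
assert np.isclose(np.max(f(ys) - fbar(ys)), alpha/8.0 * (hi - lo)**2)

# Concavity puts the minimum at an endpoint, giving the bound used in (app2):
lower_bound = min(fbar(lo), fbar(hi))
assert lower_bound <= f(ys).min()
print(lower_bound)    # -0.25, a valid lower bound for min_y a(y) = 0
```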
The two approaches are distinct in the sense that the interval method requires milder assumptions on
the coefficient functions, while the concave-function-based method admits a better approximation rate.
\section{Numerical method}
Based on the restriction approaches developed in the previous section, we are able to construct a
sequence of approximations for \ref{LSIP} by dividing the original index set into subsets
successively and constructing linear optimization problems associated with restricted constraints on
the subsets.
\begin{definition}
We call $T = \{\tau_0,...,\tau_N\}$ a subdivision of the interval $[a,b]$ if
\[a=\tau_0 \leq \tau_1\leq ... \leq \tau_N = b.\]
\end{definition}
Let $Y_k = [\tau_{k-1},\tau_{k}]$ for $k = 1,2,...,N$. The length of $Y_k$ is defined by $|Y_k| = |\tau_k - \tau_{k-1}|$ and the
length of the subdivision $T$ is defined by $|T| = \max_{1\leq k \leq N}|Y_k|$.
It follows that $Y = \cup_{k = 1}^N Y_k$.
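In code, a uniform subdivision and its length can be generated as follows (an illustrative helper, not part of the paper's algorithm):

```python
import numpy as np

def uniform_subdivision(a, b, N):
    """Points tau_0 <= ... <= tau_N of a uniform subdivision of [a, b]."""
    return np.linspace(a, b, N + 1)

T = uniform_subdivision(0.0, 1.0, 4)
lengths = np.diff(T)          # the subinterval lengths |Y_k|
print(lengths.max())          # |T| = 0.25 for this uniform subdivision
```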
The intuition behind the approximation of \ref{LSIP} through subdivision comes from an observation
that the original semi-infinite constraints in \ref{LSIP}
\[a(y)^{\top}x + a_0(y) \geq 0, \ \forall y\in Y\]
can be reformulated equivalently as finitely many semi-infinite constraints
\[a(y)^{\top}x + a_0(y) \geq 0, \ \forall y\in Y_k, k = 1,2,...,N.\]
Given a subdivision, we can construct the approximate constraint on each subinterval and combine
them to formulate an inner approximation of the original feasible region. The corresponding
optimization problem provides a restriction of \ref{LSIP}. The solutions of the approximate problems
approach the optimal solution of \ref{LSIP} as the size of the subdivision tends to zero.
Two different approaches (the interval method and the concavification method)
were introduced in section 2 to construct approximate regions that lie inside the original
feasible region. This induces two different types of approximation problems when applied to a
particular subdivision. We only describe the main results for the first type (the interval method)
and focus on the convergence and algorithm for the second one.
We introduce the Slater condition and a lemma derived from it which will be used in what
follows. We say that the Slater condition holds for \ref{LSIP} if there exists a point $\bar{x}\in \mathbb{R}^n_{+}$ such that
\[a(y)^{\top}\bar{x}+a_0(y) > 0, \ \forall y\in Y.\]
Let $F^o = \{x\in F\ |\ a(y)^{\top}x+a_0(y) > 0, \ \forall y\in Y\}$ be the set of all Slater points in $F$.
It is shown that the feasible region $F$ is exactly the closure of $F^o$
under the Slater condition \cite{mma}. We present this result as a lemma and give a direct proof in
the appendix.
\begin{lemma}\label{lm32}
Assume that the Slater condition holds for \ref{LSIP} and the index set $Y$ is compact,
then we have
\[ F = cl(F^o),\]
where $cl(F^o)$ represents the closure of the set $F^o$.
\end{lemma}
\subsection{Restriction based on interval method}
Let $A_i(Y_k) = [A_{i,k}^l,A_{i,k}^u]$ be the inclusion function of $a_i(y)$ on $Y_k$. By estimating
the lower bound for $\min_{y\in Y_k} a_i(y)$ via interval method, we can construct the following
linear constraints
\[ \sum_{i=1}^n A_{i,k}^l x_i + A_{0,k}^l \geq 0, k = 1,2,...,N,\]
corresponding to the original constraints
$a(y)^{\top}x + a_0(y) \geq 0, \ \forall y\in Y_k, k = 1,2,...,N$.
For simplicity, we reformulate the inequalities as
\begin{equation}
A_T^{\top}x+b_T \geq 0,
\label{31}
\end{equation}
where $A_T(i,k) = A_{i,k}^l$ and $b_T(k) = A_{0,k}^l$ for $i = 1,2,...,n, k = 1,2,...,N$.
The approximation problem for \ref{LSIP} in such case is formulated as
\begin{align}
\begin{split}
\min_{x\in \mathbb{R}^n_{+}}\ c^{\top}x \quad \textrm{s.t.}\quad A_T^{\top}x+b_T \geq 0.
\end{split}\tag*{R1-LSIP(T)}\label{r1lsip}
\end{align}
Following the analysis in section 2, we know that
$\{x\in \mathbb{R}^n_{+}\ |\ A_T^{\top}x+b_T \geq 0\} \subseteq F$.
Therefore, any feasible point of \ref{r1lsip} is feasible for \ref{LSIP} provided that the feasible
region of \ref{r1lsip} is non-empty. By solving \ref{r1lsip}, we can obtain a feasible approximate solution for \ref{LSIP} and the corresponding optimal value
of \ref{r1lsip} provides an upper bound for the optimal value of \ref{LSIP}.
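To illustrate, the sketch below assembles and solves \ref{r1lsip} for a hypothetical instance $\min x_1 + x_2$ subject to $y\,x_1 + (1-y)\,x_2 - 1/4 \geq 0$ on $Y = [0,1]$ (true optimal value $0.5$). Since both coefficient functions are monotone, the endpoint values give exact interval lower bounds $A^l_{i,k}$ here, and a brute-force vertex-enumeration routine stands in for a real LP solver.

```python
import numpy as np
from itertools import combinations

def solve_2d_lp(A, b, c):
    """min c^T x  s.t.  A x >= b, x >= 0, for two variables, by
    brute-force vertex enumeration (adequate only for tiny examples)."""
    rows = np.vstack([A, np.eye(2)])
    rhs = np.concatenate([b, np.zeros(2)])
    best = np.inf
    for i, j in combinations(range(len(rows)), 2):
        M = rows[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        x = np.linalg.solve(M, rhs[[i, j]])
        if np.all(rows @ x >= rhs - 1e-9):       # keep only feasible vertices
            best = min(best, float(c @ x))
    return best

def r1_lsip(N):
    """Solve R1-LSIP(T) on the uniform subdivision of [0, 1] with N subintervals."""
    c = np.array([1.0, 1.0])
    taus = np.linspace(0.0, 1.0, N + 1)
    # Interval lower bounds A_{i,k}^l; exact at the endpoints since a_1, a_2 are monotone.
    A = np.array([[min(taus[k], taus[k + 1]),
                   min(1 - taus[k], 1 - taus[k + 1])] for k in range(N)])
    b = np.full(N, 0.25)     # from  sum_i A_{i,k}^l x_i - 0.25 >= 0
    return solve_2d_lp(A, b, c)

print(r1_lsip(2))    # 1.0: coarse restriction, feasible but conservative
print(r1_lsip(64))   # ~0.508: decreases toward the true optimal value 0.5
```

The computed values are feasible upper bounds on the optimal value of the semi-infinite problem and decrease as the subdivision is refined.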
Let $F(T) = \{x\in \mathbb{R}^n_{+}\ |\ A_T^{\top}x + b_T \geq 0\}$ be
the feasible region of \ref{r1lsip}. We say that $F(T)$ is consistent if
$F(T) \neq \emptyset$. In this case, the corresponding problem \ref{r1lsip} is called consistent.
The following lemma shows that the approximate problem \ref{r1lsip} is consistent for all $|T|$ small enough if the Slater condition holds for \ref{LSIP}.
\begin{lemma}\label{lm33}
Assume that the Slater condition holds for \ref{LSIP} and the coefficient functions
$a_i(y)$, $i = 0,1,...n$, are
Lipschitz continuous on $Y$, then $F(T)$ is nonempty for all $|T|$ small enough.
\end{lemma}
In the following theorem, we show that any accumulation point of the solutions of the approximate
problems \ref{r1lsip} is a solution to \ref{LSIP} as the size of the subdivision tends to zero.
\begin{theorem}\label{t34}
Assume the Slater condition holds for \ref{LSIP} and the level set
$L(\bar{x}) = \{x\in F\ |\ c^{\top}x\leq c^{\top}\bar{x}\}$ is bounded ($\bar{x}$ is a Slater
point). Let $\{T_k\}$ be a sequence of subdivisions of $Y$ such that
$T_0$ is consistent and $\lim_{k\to \infty} |T_k| = 0$ with $T_k\subseteq T_{k+1}$.
Let $x_k^*$ be a solution of R1-LSIP($T_k$). Then any accumulation point of the sequence $\{x_k^* \}$ is an optimal solution to \ref{LSIP}.
\end{theorem}
\subsection{Restriction based on concavification}
Given a subdivision $T = \{\tau_0,...,\tau_N\}$ and $Y_k = [\tau_{k-1},\tau_k]$, $k = 1,2,...,N$,
by applying concavification method in section 2 to each of the finitely many semi-infinite constraints
\[a(y)^{\top}x + a_0(y)\geq 0, \ \forall y\in Y_k, k = 1,2,...,N,\]
we can construct the linear constraints as follows
\[\sum_{i=1}^n \min\{\bar{a}_i(\tau_{k-1}),\bar{a}_i(\tau_k)\} x_i + \min\{\bar{a}_0(\tau_{k-1}),\bar{a}_0(\tau_k)\} \geq 0, \ k = 1,2,...,N,\]
where $\bar{a}_i(\cdot)$ is the concavification function defined on $Y_k$ when we calculate
$\bar{a}_i(\tau_{k-1})$ or $\bar{a}_i(\tau_k)$ (i.e.,
$\bar{a}_i(y) = a_i(y) - \frac{\alpha_{i,k}}{2} (y- \frac{\tau_{k-1} + \tau_k}{2})^2$).
We rewrite the above inequalities as
\[\bar{A}^{\top}_{T} x + \bar{b}_T \geq 0,\]
where $\bar{A}_T(i,k) = \min\{\bar{a}_i(\tau_{k-1}),\bar{a}_i(\tau_k)\}$ and
$\bar{b}_T(k) = \min\{\bar{a}_0(\tau_{k-1}),\bar{a}_0(\tau_k)\}$. The corresponding approximate
problem for \ref{LSIP} is defined by
\begin{align}
\begin{split}
\min_{x\in \mathbb{R}^n_{+}}\ c^{\top}x \quad \textrm{s.t.}\quad \bar{A}_T^{\top}x+\bar{b}_T \geq 0.
\end{split}\tag*{R2-LSIP(T)}\label{r2lsip}
\end{align}
Let $\bar{F}(T) = \{x\in \mathbb{R}^n_{+} \ |\ \bar{A}^{\top}_{T} x + \bar{b}_T \geq 0\}$ be the
feasible set of the problem \ref{r2lsip}, we can conclude that $\bar{F}(T)\subseteq F$.
The approximate problem \ref{r2lsip} is similar to \ref{r1lsip} in the sense that both problems
induce restrictions of \ref{LSIP}. Therefore, any feasible solution of \ref{r2lsip} is feasible
for \ref{LSIP} and the corresponding optimal value provides an upper bound for the optimal value
of the problem \ref{LSIP}.
The following lemma shows that if the Slater condition holds for \ref{LSIP},
then \ref{r2lsip} is consistent for all $|T|$ small enough (i.e., $\bar{F}(T) \neq \emptyset$).
The proof can be found in the appendix.
\begin{lemma}\label{l35}
Assume the Slater condition holds for \ref{LSIP} and $a_i(y)$, $i = 0,1,...,n$, are twice continuously differentiable. Then \ref{r2lsip} is consistent for all $|T|$ small enough.
\end{lemma}
In order to find a good approximate solution for \ref{LSIP}, \ref{r2lsip} needs to be solved
iteratively while the subdivision is refined. We present a particular refinement strategy
here such that the approximate regions of \ref{r2lsip} enlarge monotonically
from the inside of the feasible region $F$. Consequently, the corresponding optimal
values of the approximation problems are monotonically decreasing and converge to the optimal
value of the original linear semi-infinite problem. Note that such a refinement procedure
cannot guarantee this monotonicity when applied to \ref{r1lsip}.
Let $T = \{\tau_k\ |\ k = 0,1,...,N\}$ be a subdivision of $Y$ and assume
$Y_k = [\tau_{k-1},\tau_k]$ is the subinterval to be refined.
Denote by $\tau_{k,1}$ and $\tau_{k,2}$ the trisection points of $Y_k$:
\[\tau_{k,1} = \tau_{k-1} + \frac{1}{3}(\tau_k - \tau_{k-1}), \
\tau_{k,2} = \tau_{k-1} + \frac{2}{3}(\tau_k -
\tau_{k-1}).\]
The constraint in \ref{r2lsip} on the subset $Y_k$ is
\begin{equation}\label{3.3}
\sum_{i=1}^n \min [\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)]\, x_i + \min [\bar{a}_0(\tau_{k-1}), \bar{a}_0(\tau_k)] \geq 0,
\end{equation}
where $\bar{a}_i(y) = a_i(y) - \frac{\alpha_{i,k}}{2}(y - \frac{\tau_{k-1} + \tau_k}{2})^2$ and
parameter $\alpha_{i,k}$ is calculated in the manner of (\ref{alpha}).
The lower bounding functions on each subset after refinement are defined by
\begin{align*}
\bar{a}_i^1(y) &= a_i(y) - \frac{\alpha^1_{i,k}}{2}(y-\frac{\tau_{k-1}+\tau_{k,1}}{2})^2,\ y\in Y_{k,1}=[\tau_{k-1},\tau_{k,1}], \\
\bar{a}_i^2(y) &= a_i(y) - \frac{\alpha^2_{i,k}}{2}(y-\frac{\tau_{k,1}+\tau_{k,2}}{2})^2,\ y\in Y_{k,2}=[\tau_{k,1},\tau_{k,2}], \\
\bar{a}_i^3(y) &= a_i(y) - \frac{\alpha^3_{i,k}}{2}(y-\frac{\tau_{k,2}+\tau_{k}}{2})^2,\ y\in Y_{k,3}=[\tau_{k,2},\tau_{k}],
\end{align*}
where $\alpha_{i,k}^j, j=1,2,3$ are selected such that
$\alpha_{i,k}^j \geq \max \{0, \max_{y\in Y_{k,j}} a''_i(y)\}$ and $\alpha_{i,k}^j \leq \alpha_{i,k}$ for $j = 1,2,3$.
The refined approximate region $\bar{F}(T\cup\{\tau_{k,1},\tau_{k,2}\})$ is obtained by replacing
the constraint (\ref{3.3}) in $\bar{F}(T)$ with
\begin{align*}
& \sum_{i=1}^n \min [\bar{a}_i^1(\tau_{k-1}), \bar{a}_i^1(\tau_{k,1})]\, x_i + \min [\bar{a}_0^1(\tau_{k-1}), \bar{a}_0^1(\tau_{k,1})] \geq 0, \\
& \sum_{i=1}^n \min [\bar{a}_i^2(\tau_{k,1}), \bar{a}_i^2(\tau_{k,2})]\, x_i + \min [\bar{a}_0^2(\tau_{k,1}), \bar{a}_0^2(\tau_{k,2})] \geq 0, \\
& \sum_{i=1}^n \min [\bar{a}_i^3(\tau_{k,2}), \bar{a}_i^3(\tau_k)]\, x_i + \min [\bar{a}_0^3(\tau_{k,2}), \bar{a}_0^3(\tau_k)] \geq 0.
\end{align*}
\begin{lemma}\label{3.6}
Let $T$ be a consistent subdivision of $Y$. Assume that $\bar{F}(T\cup\{\tau_{k,1},\tau_{k,2}\})$
is obtained by the trisection refinement procedure above, then we have
\[\bar{F}(T) \subseteq \bar{F}(T\cup\{\tau_{k,1},\tau_{k,2}\}) \subseteq F.\]
\end{lemma}
\begin{proof}
Since $x\in \mathbb{R}^n_{+}$, it suffices to prove that for $i = 0,1,2,...,n,$
\[\min[\bar{a}_i^1(\tau_{k-1}), \bar{a}_i^1(\tau_{k,1}),\bar{a}_i^2(\tau_{k,1}), \bar{a}_i^2(\tau_{k,2}),\bar{a}_i^3(\tau_{k,2}), \bar{a}_i^3(\tau_k)] \geq \min [\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)].\]
By direct computation, we know $\bar{a}_i^1(\tau_{k-1}) \geq \bar{a}_i(\tau_{k-1})$ and
$\bar{a}_i^3(\tau_k) \geq \bar{a}_i(\tau_k)$. Since $\bar{a}_i(y)$ is concave
on $Y_k = [\tau_{k-1},\tau_k]$, we have $\bar{a}_i(\tau_{k,j}) \geq \min [\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)]$ for $j = 1,2$.
In addition, direct calculation implies
\begin{equation*}
\min[\bar{a}_i^1(\tau_{k,1}),\bar{a}_i^2(\tau_{k,1})] \geq \bar{a}_i(\tau_{k,1}), \quad
\min[\bar{a}_i^2(\tau_{k,2}),\bar{a}_i^3(\tau_{k,2})] \geq \bar{a}_i(\tau_{k,2}).
\end{equation*}
The last two statements indicate that
$ \min[\bar{a}_i^1(\tau_{k,1}),\bar{a}_i^2(\tau_{k,1})] \geq \min [\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)]$ and
$ \min[\bar{a}_i^2(\tau_{k,2}),\bar{a}_i^3(\tau_{k,2})] \geq \min [\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)]$.
This proves our statement.
\qquad\end{proof}
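The endpoint comparisons in this proof can be verified numerically. The sketch below uses a hypothetical coefficient $a(y) = \cos(3y)$ on $Y_k = [0,1]$ with the common curvature bound $\alpha = 9 \geq \max\{0, \max_y a''(y)\}$ for the parent interval and all three children (so $\alpha^j_{i,k} \leq \alpha_{i,k}$ holds trivially):

```python
import numpy as np

# Hypothetical coefficient a(y) = cos(3y) on Y_k = [0, 1]; a''(y) = -9 cos(3y),
# so alpha = 9 is a valid (upper-bound) choice for the parent and all children.
a = lambda y: np.cos(3.0 * y)
alpha = 9.0

def bar(y, lo, hi):
    """Concavified lower bound of a, built for the subinterval [lo, hi]."""
    return a(y) - 0.5 * alpha * (y - 0.5 * (lo + hi))**2

t0, t3 = 0.0, 1.0
t1, t2 = t0 + (t3 - t0)/3.0, t0 + 2.0*(t3 - t0)/3.0   # trisection points

# Endpoint comparisons used in the proof of the lemma:
assert bar(t0, t0, t1) >= bar(t0, t0, t3)             # child 1 at tau_{k-1}
assert bar(t3, t2, t3) >= bar(t3, t0, t3)             # child 3 at tau_k
assert min(bar(t1, t0, t1), bar(t1, t1, t2)) >= bar(t1, t0, t3) - 1e-12
assert min(bar(t2, t1, t2), bar(t2, t2, t3)) >= bar(t2, t0, t3) - 1e-12

# Hence the refined lower bound dominates the coarse one:
coarse = min(bar(t0, t0, t3), bar(t3, t0, t3))
refined = min(bar(t0, t0, t1), bar(t1, t0, t1), bar(t1, t1, t2),
              bar(t2, t1, t2), bar(t2, t2, t3), bar(t3, t2, t3))
assert refined >= coarse
print(coarse, refined)
```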
We present in the following theorem the general convergence results for approximating \ref{LSIP} via a
sequence of restriction problems.
\begin{theorem}\label{3.7}
Assume that the assumptions in Theorem \ref{t34} hold.
Let $\{T_k\}$ be a sequence of subdivisions of the index set $Y$, which is obtained by trisection refinement recursively, such that $T_0$ is consistent and $\lim_{k\to\infty} |T_k| = 0$.
Denote by $x^*_k$ the optimal solution to R2-LSIP($T_k$). Then we have:\\
(1) $x^*_k$ is feasible for \ref{LSIP} and any accumulation point of the sequence $\{x^*_k\}$
is a feasible solution to \ref{LSIP}.\\
(2) $\{f(x^*_k)\}$, where $f(x^*_k) = c^{\top}x^*_k$, is a decreasing sequence and
$v^* = \lim_{k\to\infty} f(x^*_k)$ is the optimal value of \ref{LSIP}.
\end{theorem}
\begin{proof}
The proof of the first statement is similar to the proof in Theorem \ref{t34}.
From Lemma \ref{3.6}, we know that $\bar{F}(T_{k-1}) \subseteq \bar{F}(T_{k})$ holds
for $k\in \mathbb{N}$ which implies that the sequence $\{f(x^*_k)\}$ is decreasing.
Since the level set $L(\bar{x})$ is bounded, the sequence $\{f(x^*_k)\}$ is bounded. Therefore,
the limit of the sequence exists; denote it by $v^*$. From (1), we know that
$v^*$ is the optimal value of \ref{LSIP}.
This completes our proof.
\qquad\end{proof}
\subsection{Adaptive refinement algorithm}
In this section, we present a specific algorithm to solve \ref{LSIP}. The algorithm is based on
solving the approximate linear problems \ref{r2lsip} (or \ref{r1lsip}) for a given subdivision
$T$ and then refine the subdivision to improve the solution. The key idea of the algorithm is to
select the candidate subsets in $T$ to be refined in an adaptive manner rather than making the
refinement exhaustively.
We introduce the optimality condition for \ref{LSIP} as follows before presenting the details of
the algorithm.
Given a point $x\in F$, let $A(x) = \{y\in Y\ |\ a(y)^{\top}x + a_0(y) = 0\}$ be the active index set
for \ref{LSIP} at $x$. If some constraint qualification (e.g., the Slater condition) holds for \ref{LSIP},
a feasible point $x^* \in F$ is an optimal solution if and only if $x^*$ satisfies the KKT system
(\cite{mma}), i.e.,
\[c - \sum_{y\in A(x^*)}\lambda_y a(y) = 0\]
for some $\lambda_y \geq 0$, $y\in A(x^*)$.
\begin{definition}
We say that $x^*\in F$ is an $(\epsilon,\delta)$ optimal solution to \ref{LSIP} if
there exist multipliers $\lambda_y \geq 0$, $y\in A(x^*,\delta)$, such that
\[||c - \sum_{y\in A(x^*,\delta)}\lambda_y a(y)||\leq \epsilon,\]
where $A(x^*,\delta) = \{y\in Y\ |\ 0\leq a(y)^{\top}x^* + a_0(y) \leq \delta\}$.
\end{definition}
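For a hypothetical instance $\min x_1 + x_2$ subject to $y\,x_1 + (1-y)\,x_2 - 1/4 \geq 0$ on $Y = [0,1]$, the check can be carried out directly: at $x^* = (1/4, 1/4)$ the constraint vanishes for every $y$, so $A(x^*,\delta) = Y$, and multipliers supported on $y = 0$ and $y = 1$ reproduce $c$ exactly.

```python
import numpy as np

# Hypothetical instance: c = (1, 1), a(y) = (y, 1-y), a_0(y) = -0.25 on Y = [0, 1].
c = np.array([1.0, 1.0])
a = lambda y: np.array([y, 1.0 - y])
x_star = np.array([0.25, 0.25])

# Every y is active at x_star: y*0.25 + (1-y)*0.25 - 0.25 == 0 for all y.
assert all(abs(a(y) @ x_star - 0.25) < 1e-12 for y in np.linspace(0, 1, 11))

lam = {0.0: 1.0, 1.0: 1.0}              # a hypothetical multiplier choice
residual = np.linalg.norm(c - sum(l * a(y) for y, l in lam.items()))
print(residual)   # 0.0: the KKT residual vanishes, so x_star is (eps, delta) optimal
```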
\begin{table*}
\indent {\bf ------------------------------------------------------------------------------------------------}\\
\indent\textbf{Algorithm 1} (Adaptive Refinement Algorithm for LSIP) \\
\indent {\bf ------------------------------------------------------------------------------------------------}
\begin{itemize}
\item[\textbf{S1.}] Find an initial subdivision $T_0$ such that R2-LSIP($T_0$) is consistent.
Choose an initial point $x_0$ and tolerances $\epsilon$ and $\delta$. Set $k = 0$.
\item[\textbf{S2.}] Solve R2-LSIP($T_k$) to obtain a solution $x^*_k$
and the active index set $A(x^*_k)$.
\item[\textbf{S3.}] Terminate if $x^*_k$ is an $(\epsilon,\delta)$ optimal solution to
\ref{LSIP}.
Otherwise update $T_{k+1}$ and $F(T_{k+1})$ by trisection refinement procedure for subintervals
in $T_k$ that correspond to $A(x^*_k)$.
\item[\textbf{S4.}] Let $k = k+1$ and go to step 2.
\end{itemize}
\indent {\bf ------------------------------------------------------------------------------------------------}
\end{table*}
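An end-to-end sketch of Algorithm 1 on a hypothetical instance $\min x_1 + x_2$ subject to $y^2 x_1 + (1-y)^2 x_2 - 0.1 \geq 0$ on $Y = [0,1]$ (true optimal value $0.4$). Two simplifications are made for brevity: a brute-force vertex-enumeration routine replaces a real LP solver, and a fixed iteration budget replaces the $(\epsilon,\delta)$ stopping test.

```python
import numpy as np
from itertools import combinations

# Hypothetical instance: min x1 + x2  s.t.  y^2*x1 + (1-y)^2*x2 - 0.1 >= 0, y in [0, 1].
c = np.array([1.0, 1.0])
a_funcs = [lambda y: y**2, lambda y: (1.0 - y)**2]
a0, alpha = -0.1, 2.0            # a_i'' = 2, so alpha = 2 is a valid curvature bound

def lower(f, lo, hi):
    """Endpoint minimum of the concavified lower bound of f on [lo, hi]."""
    bar = lambda y: f(y) - 0.5*alpha*(y - 0.5*(lo + hi))**2
    return min(bar(lo), bar(hi))

def solve_2d_lp(A, b, c):
    """min c^T x s.t. A x >= b, x >= 0 (2 vars) by vertex enumeration (toy solver)."""
    rows, rhs = np.vstack([A, np.eye(2)]), np.concatenate([b, np.zeros(2)])
    best, xbest = np.inf, None
    for i, j in combinations(range(len(rows)), 2):
        M = rows[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        x = np.linalg.solve(M, rhs[[i, j]])
        if np.all(rows @ x >= rhs - 1e-9) and c @ x < best:
            best, xbest = float(c @ x), x
    return best, xbest

T = list(np.linspace(0.0, 1.0, 5))   # S1: an initial consistent subdivision
vals = []
for _ in range(8):                   # S2-S4 with a fixed iteration budget
    A = np.array([[lower(f, T[k], T[k+1]) for f in a_funcs] for k in range(len(T)-1)])
    b = np.full(len(T) - 1, -a0)
    val, x = solve_2d_lp(A, b, c)
    vals.append(val)
    active = [k for k in range(len(T)-1) if A[k] @ x - b[k] < 1e-7]   # S3
    newT = [T[0]]
    for k in range(len(T) - 1):      # trisect only the active subintervals
        if k in active:
            h = T[k+1] - T[k]
            newT += [T[k] + h/3.0, T[k] + 2.0*h/3.0]
        newT.append(T[k+1])
    T = newT

print(vals[0], vals[-1])   # decreasing feasible upper bounds approaching 0.4
```

Consistent with the monotonicity lemma of the previous subsection, the recorded values are non-increasing, and each one is an upper bound on the optimal value since every iterate is feasible for the original problem.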
To obtain a consistent subdivision in the first step of Algorithm 1, we apply the adaptive
refinement algorithm to the following problem
\begin{align}\label{initial}
\begin{split}
\min_{(x,z)\in \mathbb{R}^n_{+}\times \mathbb{R}}\ z \quad \textrm{s.t.}\quad a(y)^{\top}x + a_0(y) \geq z\ \forall y\in Y
\end{split}\tag*{LSIP$_0$}
\end{align}
until a feasible solution $(x_0,z_0)$, with $z_0 \geq 0$, of the problem LSIP$_0$($T_0$) is found
for some subdivision $T_0$. The current subdivision $T_0$ is consistent and chosen as the initial
subdivision of Algorithm 1. In addition, $x_0$ is feasible for the original problem and selected
as the initial point for the algorithm.
The refinement procedure in the third step of the algorithm proceeds as follows.
In the $k$th iteration, each subinterval $[\tau^k_{i-1},\tau^k_i]$ with $i\in A(x^*_k)$ is divided into
three subintervals of equal length. New constraints are constructed on these subintervals and used to
replace the constraint corresponding to $[\tau^k_{i-1},\tau^k_i]$ for each index $i\in A(x^*_k)$.
Then we have $\bar{F}(T_{k+1})$ and the associated approximation problem R2-LSIP($T_{k+1}$).
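The subdivision update in this trisection step can be sketched in a few lines; the function below is our own schematic (it refines only the breakpoints and omits the construction of the new restricted constraints), with the paper's index convention that subinterval $i$ is $[\tau_{i-1},\tau_i]$.

```python
def trisect(tau, active):
    """Refine a subdivision tau = [tau_0, ..., tau_N]: each subinterval
    [tau[i-1], tau[i]] with index i in `active` is split into three equal
    parts; inactive subintervals are kept unchanged."""
    new = [tau[0]]
    for i in range(1, len(tau)):
        if i in active:
            h = (tau[i] - tau[i - 1]) / 3.0
            new.append(tau[i - 1] + h)
            new.append(tau[i - 1] + 2.0 * h)
        new.append(tau[i])
    return new
```

For example, `trisect([0.0, 0.5, 1.0], {2})` refines only the second subinterval, inserting the breakpoints $2/3$ and $5/6$; note how each refined subinterval has length at most one third of its parent, which is what drives the termination argument in Theorem 1 below.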
\begin{theorem}[Convergence of Algorithm 1]
\label{conapp2}
Assume the Slater condition holds for \ref{LSIP} and the coefficient functions
$a_i(y)$, $i = 0,1,...,n$, are twice continuously differentiable. Then Algorithm 1
terminates in finitely many iterations for any positive tolerances $\epsilon$ and $\delta$.
\end{theorem}
\begin{proof}
Let $x^*_k$ be a solution to the approximate subproblem R2-LSIP($T_k$) with
$T_k = \{\tau^k_j\ |\ j = 0,1,...,N_k\}$. Then there exist some
$\lambda_j^k \geq 0$ for $j\in A(x^*_k)$ such that
\begin{equation}\label{kkt}
c - \sum_{j\in A(x^*_k)} \lambda_j^k \min[\bar{a}(\tau^k_{j-1}),\bar{a}(\tau_j^k)] = 0,
\end{equation}
where $A(x^*_k) = \{j\ |\ \min[\bar{a}(\tau^k_{j-1}),\bar{a}(\tau_j^k)]x^*_k +
\min[\bar{a}_0(\tau^k_{j-1}),\bar{a}_0(\tau_j^k)]= 0\}$ is the active index set for R2-LSIP($T_k$)
at $x^*_k$ and $\min[\bar{a}(\tau^k_{j-1}),\bar{a}(\tau_j^k)]$ represents a vector in $\mathbb{R}^n$
such that the $i$th element is defined by $\min[\bar{a}_i(\tau^k_{j-1}),\bar{a}_i(\tau_j^k)]$.
Since $\bar{a}_i(y) = a_i(y)-\frac{\alpha_{i,k}}{2}(y-\frac{\tau_{j-1}^k + \tau_j^k}{2})^2$ for
$y\in [\tau^k_{j-1},\tau^k_j]$, we have
\[\min[\bar{a}(\tau^k_{j-1}),\bar{a}(\tau_j^k)] = \min[a(\tau^k_{j-1}),a(\tau_j^k)] - \frac{1}{8}(\tau^k_j -
\tau^k_{j-1})^2 \alpha^k,\]
where $\alpha^k = (\alpha^k_{1,j},\alpha^k_{2,j},...,\alpha^k_{n,j})^{\top}$ is the parameter vector on the subinterval $[\tau^k_{j-1},\tau^k_j]$, whose elements are uniformly bounded. On the other hand, since $a_i(y)$ is twice continuously differentiable, there exists $\bar{\tau}_{j-1}^k \in [\tau^k_{j-1},\tau^k_j]$
such that
\[a_i(\tau^k_j) = a_i(\tau^k_{j-1}) + a_i^{'}(\bar{\tau}^k_{j-1})(\tau^k_j - \tau^k_{j-1}), \quad 1\leq i\leq n,\]
which implies that $\min[a(\tau^k_{j-1}),a(\tau_j^k)] = a(\tau_{j-1}^k) +
(\tau^k_j - \tau^k_{j-1}) \beta^k$ where $\beta^k \in \mathbb{R}^n$ is a constant vector
(e.g., $\beta^k_i = a_i^{'}(\bar{\tau}^k_{j-1})$ if $\min[a(\tau^k_{j-1}),a(\tau_j^k)]
= a(\tau_j^k)$ and $\beta^k_i = 0$ otherwise).
It follows that
\begin{equation}
\min[\bar{a}(\tau^k_{j-1}),\bar{a}(\tau_j^k)] = a(\tau^k_{j-1}) +
(\tau^k_j - \tau^k_{j-1}) \beta^k - \frac{1}{8}(\tau^k_j - \tau^k_{j-1})^2 \alpha^k.
\label{minabar}
\end{equation}
Substituting $\min[\bar{a}(\tau^k_{j-1}),\bar{a}(\tau_j^k)]$ from (\ref{minabar}) into (\ref{kkt}) and
into $A(x^*_k)$, we see that it suffices to prove that the lengths of all the subintervals
$[\tau^k_{j-1},\tau^k_j]$, $j\in A(x^*_k)$, converge to zero as the iteration index $k$ tends to infinity.
From the algorithm, we know that in each iteration at least one subset $[\tau^k_{j-1},\tau^k_j]$ is
divided into three equal subintervals where the length of each subinterval is bounded above by
$\frac{1}{3}(\tau^k_j - \tau^k_{j-1}) \leq \frac{1}{3}(b-a)$.
For each integer $p\in \mathbb{N}$, at least one subinterval whose length is bounded by
$\frac{1}{3^p}(b-a)$ is generated. Furthermore, the subintervals
$[\tau^k_{j-1},\tau^k_j]$, $j\in A(x^*_k)$, are different for all $k\in \mathbb{N}$, and for each
$p\in \mathbb{N}$ only finitely many subintervals with length greater than $\frac{1}{3^p}(b-a)$ exist.
This implies that the lengths of the subintervals
$[\tau^k_{j-1},\tau^k_j]$ for $j\in A(x^*_k)$, $k\in \mathbb{N}$, must tend to zero.
\qquad\end{proof}
We can conclude from Theorem \ref{conapp2} that if the tolerances $\epsilon$ and $\delta$
decrease to zero, then any accumulation point of the sequence generated by Algorithm 1 is a solution
to the original linear semi-infinite programming problem.
\begin{corollary}
Let the assumptions in Theorem \ref{conapp2} be satisfied and let the tolerances $(\epsilon_k,\delta_k)$
be chosen such that $(\epsilon_k,\delta_k)\searrow (0,0)$. If $x^*_k$ is an $(\epsilon_k,\delta_k)$ KKT point for \ref{LSIP} generated by Algorithm 1, then any accumulation point $x^*$ of the sequence $\{x^*_k\}$ is a solution to \ref{LSIP}.\label{corollary310}
\end{corollary}
It follows from Corollary \ref{corollary310} that the sequence $\{c^{\top}x^*_k\}$ is monotonically
decreasing to the optimal value of \ref{LSIP} as $k$ tends to infinity. In the implementation of our
algorithm, the termination criterion is set as
\[|c^{\top}x^*_k - c^{\top}x^*_{k-1}| \leq \epsilon.\]
The convergence of Algorithm 1 also holds when the approximate problem
R1-LSIP($T_k$) is used in the second step. The proof is similar to that of Theorem \ref{conapp2},
as explained in the Appendix. However, in this case we cannot guarantee that the sequence
$\{c^{\top}x^*_k\}$ is monotonically decreasing.
\subsection{Remarks}
The proposed algorithm can be applied to solve linear semi-infinite optimization problems with finitely
many semi-infinite constraints and some extra linear constraints, i.e.,
\[\min_{x\in X} c^{\top}x \quad \textrm{s.t.} \quad a^j(y)^{\top}x + a^j_0(y) \geq 0,
\ \forall y\in Y, j = 1,2,...,m,\]
where $X = \{x\in \mathbb{R}^n\ |\ Dx\geq d\}$ and $a^j(\cdot): \mathbb{R}
\mapsto \mathbb{R}^n$.
In such a case, we split each decision variable $x_i$ into two non-negative variables $y_i \geq 0$
and $z_i \geq 0$ such that $x_i = y_i - z_i$, and then substitute $x_i$ into the above problem.
The problem is thus reformulated as a linear semi-infinite programming problem with non-negative
decision variables, to which Algorithm 1 can be applied.
This technique is applied in the numerical experiments.
In the case that $X = [X_l, X_u]$ is a box in $\mathbb{R}^n$, we can instead use the variable
transformation $x = z + X_l$ with $z\geq 0$.
The advantage of this translation is that the
dimension of the new variables is the same as that of the original decision variables.
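Both reformulations above are purely mechanical data transformations; the hedged sketch below shows them for a generic constraint row $a^{\top}x \geq b$ (the function names and the row/tuple layout are our own, not from the paper).

```python
def split_free_variables(c, rows):
    """Rewrite min c.x s.t. a.x >= b (x free) in nonnegative variables:
    x_i = y_i - z_i with y_i, z_i >= 0, so the cost becomes (c, -c) and
    each row a becomes (a, -a); right-hand sides are unchanged."""
    c2 = list(c) + [-ci for ci in c]
    rows2 = [(list(a) + [-ai for ai in a], b) for a, b in rows]
    return c2, rows2

def shift_box_variables(c, rows, x_lower):
    """Alternative for X = [X_l, X_u]: substitute x = z + X_l with z >= 0.
    Each row (a, b) becomes (a, b - a.X_l); the objective gains the
    constant offset c.X_l and the variable dimension is unchanged."""
    off = sum(ci * li for ci, li in zip(c, x_lower))
    rows2 = [(list(a), b - sum(ai * li for ai, li in zip(a, x_lower)))
             for a, b in rows]
    return rows2, off
```

The second transformation keeps the problem size fixed, which is why it is preferable whenever box bounds are available.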
\section{Numerical experiments}
We present numerical experiments for a couple of optimization problems selected from the
literature. The algorithm is implemented
in $Matlab$ 8.1 and the subproblems are solved using $linprog$ of the $Optimization$ $Toolbox$ 6.3 with
default tolerances and the active-set algorithm. All the following experiments were run on a 3.2 GHz Intel(R)
Core(TM) processor.
The bounds for the coefficient functions and the parameter $\alpha$ in the
second approach are computed directly whenever closed-form bounds exist. Otherwise, we use the $Matlab$
toolbox $Intlab$ 6.0 \cite{intlab} to obtain the corresponding
bounding values.
The problems in the literature are listed as follows. \\
\textbf{Problem\ 1.}
\begin{align*}
\label{eg1}
\min \quad & \sum_{i=1}^n i^{-1}x_i \\
\textrm{s.t.} \quad & \sum_{i=1}^n y^{i-1}x_i\geq \tan(y), \ \forall y\in [0,1].
\end{align*}
This problem is taken from \cite{mma} and was also tested in \cite{BB} for $n=8$. For $1\leq n\leq 7$, the problem
has a unique optimal solution and the strong Slater condition holds. The problem for $n=8$ is hard to solve
and is thus a good test of the performance of our algorithm. \\
\textbf{Problem\ 2.} This problem has the same formulation as Problem 1 with $n = 9$, and was also
tested in \cite{BB}. \\
\textbf{Problem\ 3.}
\begin{align*}
\min \quad & \sum_{i=1}^8 i^{-1}x_i \\
\textrm{s.t.} \quad & \sum_{i=1}^8 y^{i-1}x_i\geq \frac{1}{2-y}, \ \forall y\in [0,1].
\end{align*}
This problem is taken from \cite{KS} and also tested in \cite{BB}. \\
\textbf{Problem\ 4.}
\begin{align*}
\min \quad & \sum_{i=1}^7 i^{-1}x_i \\
\textrm{s.t.} \quad & \sum_{i=1}^7 y^{i-1}x_i\geq -\sum_{i=0}^4 y^{2i}, \ \forall y\in [0,1].
\end{align*}
\textbf{Problem\ 5.}
\begin{align*}
\min \quad & \sum_{i=1}^9 i^{-1}x_i \\
\textrm{s.t.} \quad & \sum_{i=1}^9 y^{i-1}x_i\geq \frac{1}{1+y^2}, \ \forall y\in [0,1].
\end{align*}
Problems 4 and 5 are taken from \cite{TE} and were also tested in \cite{BB}.
The following problems, as noted in \cite{BB}, arise in the design of finite impulse response (FIR) filters
and are more computationally demanding than the previous ones (see, e.g., \cite{rg,BB}).
\textbf{Problem\ 6.}
\begin{align*}
\min \quad & -\sum_{i=1}^{10} r_{2i-1}x_i \\
\textrm{s.t.} \quad & 2\sum_{i=1}^{10} \cos((2i-1)2\pi y)x_i \geq -1, \ \forall y\in [0,0.5],
\end{align*}
where $r_i = 0.95^i$.
\textbf{Problem\ 7.} This problem is formulated as Problem 6 where $r_i = 2\rho
\cos(\theta)r_{i-1} - \rho^2
r_{i-2}$ with $\rho = 0.975$, $\theta = \pi/3$, $r_0 = 1$, $r_1 = 2\rho \cos(\theta)/(1+\rho^2)$.
\textbf{Problem\ 8.} This problem is also formulated as Problem 6 where $r_i = \frac{\sin(2\pi f_s i)}{2\pi f_s i}$ with $f_s = 0.225$.
The numerical results are summarized in Table \ref{table1}, where CPU Time is the time cost when
the algorithm terminates, Objective Value is the objective function value at the final iteration
point, No. of Iterations is the number of
iterations performed for each particular problem, and Violation measures the
feasibility of the solution $x^*$ obtained by the algorithm, defined by $\min_{y\in \bar{Y}}
g(x^*,y)$ with $\bar{Y} = a:10^{-6}:b$.
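The Violation measure can be reproduced by a direct grid scan; the sketch below is our own illustration of the definition, using a coarser step than the $10^{-6}$ grid of the table and toy problem data of our own choosing.

```python
def violation(x, a, a0, lo, hi, step=1e-3):
    """min over the grid lo:step:hi of g(x, y) = a(y).x + a0(y).
    A nonnegative value certifies feasibility of x on the grid."""
    best = float('inf')
    n = int(round((hi - lo) / step))
    for i in range(n + 1):
        y = lo + i * step
        g = sum(ai * xi for ai, xi in zip(a(y), x)) + a0(y)
        best = min(best, g)
    return best

# Constraint x1 + y*x2 - (1 + y) >= 0 on [0, 1]; x = (1, 1) satisfies it
# with equality for every y, so its violation measure is 0.
v = violation((1.0, 1.0), lambda y: (1.0, y), lambda y: -(1.0 + y), 0.0, 1.0)
```

A strictly negative value of this measure would indicate that the returned point violates the semi-infinite constraint at some grid point, which is precisely what the table reports for the discretization-based solver.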
We also list the numerical results for these problems obtained by the MATLAB solver $fseminf$
as a reference.
We can see that the algorithm proposed in this paper generates feasible solutions
for all the problems tested, which coincides with the theoretical results.
Furthermore, Algorithm 1 works well for the
computationally demanding Problems 6-8. The solver $fseminf$ is faster than our method; however,
feasibility is not guaranteed for that kind of method.
\renewcommand{\arraystretch}{1.3}
\begin{table}
\tbl{Summary of numerical results for the proposed algorithm in this paper}
{\begin{tabular}{lccccc} \toprule
& Algorithm & CPU Time(sec) & Objective Value & No. of Iterations & Violation \\ \midrule
\multirow{3}*{\bf Problem 1.}
& Approach 1 & 1.8382 & 0.6174 & 169 & 2.3558e-04 \\
& Approach 2 & 1.8910 & 0.6174 & 172 & 1.5835e-04 \\
& fseminf & 0.2109 & 0.6163 & 33 & -1.2710e-04 \\ \hline
\multirow{3}*{\bf Problem 2.}
& Approach 1 & 5.2691 & 0.6163 & 273 & 4.1441e-04 \\
& Approach 2 & 4.0928 & 0.6166 & 266 & 1.8372e-04 \\
& fseminf & 0.3188 & 0.6157 & 46 & -7.6194e-04 \\ \hline
\multirow{3}*{\bf Problem 3.}
& Approach 1 & 0.1646 & 0.6988 & 12 & 2.7969e-03 \\
& Approach 2 & 0.1538 & 0.6988 & 13 & 2.8014e-03 \\
& fseminf & 0.2387 & 0.6932 & 35 & -5.8802e-07 \\ \hline
\multirow{3}*{\bf Problem 4.}
& Approach 1 & 4.1606 & -1.7841 & 354 & 1.9689e-05 \\
& Approach 2 & 4.1928 & -1.7841 & 356 & 1.9646e-05 \\
& fseminf & 0.4794 & -1.7869 & 70 & -3.4649e-09 \\ \hline
\multirow{3}*{\bf Problem 5.}
& Approach 1 & 4.2124 & 0.7861 & 300 & 1.9829e-05 \\
& Approach 2 & 4.7892 & 0.7861 & 302 & 1.9243e-05 \\
& fseminf & 0.3642 & 0.7855 & 32 & -8.5507e-07 \\ \hline\hline
\multirow{3}*{\bf Problem 6.}
& Approach 1 & 1.7290 & -0.4832 & 137 & 5.0697e-06 \\
& Approach 2 & 1.5302 & -0.4832 & 132 & 5.0914e-06 \\
& fseminf & 1.1476 & -0.4754 & 86 & -1.2219e-04 \\ \hline
\multirow{3}*{\bf Problem 7.}
& Approach 1 & 2.5183 & -0.4889 & 170 & 2.8510e-04 \\
& Approach 2 & 3.2521 & -0.4890 & 219 & 2.8861e-04 \\
& fseminf & 1.0480 & -0.4883 & 86 & -1.5211e-03 \\ \hline
\multirow{3}*{\bf Problem 8.}
& Approach 1 & 4.4262 & -0.4972 & 252 & 4.5808e-05 \\
& Approach 2 & 4.0216 & -0.4972 & 252 & 5.0055e-05 \\
& fseminf & 0.4324 & -0.4973 & 45 & -4.3322e-07 \\ \hline\hline
\end{tabular}}
\tabnote{\textsuperscript{*}
Approach 1 represents {\bf Algorithm 1} with R1-LSIP and Approach 2 represents {\bf Algorithm 1} with
R2-LSIP.}
\label{table1}
\end{table}
\section{Conclusion}
A new numerical method for solving linear semi-infinite programming problems
is proposed which guarantees that each iteration point is feasible for the original problem.
The approach is based on a two-stage restriction of the original semi-infinite constraint.
The first-stage restriction allows us to consider the semi-infinite constraint independently of the
decision variables on the subintervals of the index set. In the second stage,
the lower bounds for the optimal values of the optimization problems associated with the coefficient
functions are estimated using two different approaches. The approximation error goes to zero
as the size of the subdivisions tends to zero.
Approximate problems with finitely many linear constraints are constructed such that the
corresponding feasible regions are included in the feasible region of \ref{LSIP}. It follows that
any feasible solution of an approximate problem is feasible for \ref{LSIP} and the corresponding
objective function value provides an upper bound for the optimal value of \ref{LSIP}. It is proved
that the solutions of the approximate problems converge to that of the original problem.
Also, the sequence of optimal values of the approximate problems converges to the optimal
value of \ref{LSIP} in a monotonic manner.
An adaptive refinement algorithm is developed to obtain an approximate solution to \ref{LSIP},
which is proved to terminate in finitely many iterations for arbitrarily given tolerances.
Numerical results show that the algorithm works well in finding feasible solutions for
\ref{LSIP}.
\section{Appendices}
\noindent\textbf{Proof of Lemma \ref{lm32}}
Since the Slater condition holds, there exists a point $\bar{x}\in \mathbb{R}^n$ such that
\[a(y)^{\top}\bar{x}+a_0(y) > 0, \ \forall y\in Y.\]
It has been shown in \cite{ma} that the boundary of $F$ is
\[\partial F = \{x\in F\ |\ \min_{y\in Y}\{ a(y)^{\top}x+a_0(y)\} = 0 \}.\]
It follows that $F = F^o \cup \partial F$. The compactness of the index set $Y$ implies that the function
$g(x) = \min_{y\in Y}\{ a(y)^{\top}x+a_0(y)\}$ is continuous. Thus, $F$ is closed.
It suffices to prove that
\[\partial F \subseteq cl(F^o).\]
For any $\tilde{x}\in \partial F$, we have $a(y)^{\top}\tilde{x}+a_0(y) = 0$ for all $y\in A(\tilde{x})$ with
$A(\tilde{x}) = \{y\in Y\ |\ a(y)^{\top}\tilde{x}+a_0(y) = 0\}$.
Then
\[a(y)^{\top}(\bar{x}-\tilde{x}) > 0, \ \forall y\in A(\tilde{x}).\]
This indicates that for any $\tau > 0$, we have
\[a(y)^{\top}(\tilde{x} + \tau(\bar{x}-\tilde{x})) + a_0(y) > 0,\ \forall y\in A(\tilde{x}).\]
For a point $y\in Y$ with $y\notin A(\tilde{x})$, there holds $a(y)^{\top} \tilde{x} + a_0(y) > 0$.
Therefore, $a(y)^{\top}(\tilde{x} + \tau(\bar{x}-\tilde{x})) + a_0(y) > 0$ for $\tau$ small enough.
Since $Y$ is compact, we can choose a uniform $\tau$ small enough such that
$a(y)^{\top}(\tilde{x} + \tau(\bar{x}-\tilde{x})) + a_0(y) > 0, \ \forall y\in Y$.
It follows that we can choose a sequence
$\tau_k >0$ with $\lim_{k\to\infty} \tau_k = 0$ such that
\[a(y)^{\top}(\tilde{x} + \tau_k(\bar{x}-\tilde{x})) + a_0(y) > 0, \forall y\in Y, k\in \mathbb{N}.\]
Hence $x_k = \tilde{x} + \tau_k(\bar{x}-\tilde{x})\in F^o$ and $\lim_{k\to \infty} x_k = \tilde{x}$ which implies
that $\tilde{x}\in cl(F^o)$.
This completes our proof.
\newline
\newline
\noindent\textbf{Proof of Lemma \ref{lm33}}
The Slater condition implies that there exists a point $\bar{x}\in \mathbb{R}^n_{+}$ such that
\[a(y)^{\top}\bar{x} + a_0(y) > 0, \ \forall y\in Y.\]
Let $T = \{\tau_k\ |\ k = 0,1,...,N\}$, from (\ref{haus}) we know that for each $Y_k = [\tau_{k-1},\tau_k]$,
$k = 1,2,...,N$, there holds that
\[\min_{y\in Y_k} a_i(y) - A_{i,k}^l \leq \gamma_i |Y_k|^p \leq \gamma_i |T|^p, \quad i = 0,1,...,n,\ k = 1,2,...,N,\]
with $p\geq1$.
By direct computation, we have
\begin{equation*}
\sum_{i = 1}^n [\min_{y\in Y_k} a_i(y)]\bar{x}_i + \min_{y\in Y_k} a_0(y) - [\sum_{i = 1}^n A_{i,k}^l \bar{x}_i
+ A_{0,k}^l] \leq [\sum_{i = 1}^n \gamma_i \bar{x}_i + \gamma_0] |Y_k|^p.
\end{equation*}
The Lipschitz continuity of $a_i(y)$, $i = 0,1,...,n$, implies that
\[\min_{y\in Y_k} [\sum_{i = 1}^n a_i(y)\bar{x}_i + a_0(y)]-[\sum_{i = 1}^n [\min_{y\in Y_k} a_i(y)]\bar{x}_i +
\min_{y\in Y_k} a_0(y)] \leq [\sum_{i = 1}^n L_i\bar{x}_i + L_0]|Y_k|.\]
It follows from the last two inequalities that
\begin{equation*}
\sum_{i = 1}^n A_{i,k}^l \bar{x}_i + A_{0,k}^l \geq \min_{y\in Y_k} [\sum_{i = 1}^n a_i(y)\bar{x}_i + a_0(y)]
- \{[\sum_{i = 1}^n \gamma_i \bar{x}_i + \gamma_0] |Y_k|^p + [\sum_{i = 1}^n L_i\bar{x}_i + L_0]|Y_k|\},
\end{equation*}
which implies that $\sum_{i = 1}^n A_{i,k}^l \bar{x}_i + A_{0,k}^l \geq 0$, $k = 1,2,...,N$, for $|T|$ small
enough. This implies that $\bar{x}$ is a feasible point for the approximate region $F(T)$.
This completes our proof.
\newline
\newline
\noindent\textbf{Proof of Theorem \ref{t34}}
By the construction of the approximate regions, we know that $F(T_k)$ is included in the original feasible
set, i.e., $F(T_k)\subseteq F$ for all $k\in \mathbb{N}$. Hence, we have
$\{x_k^* \} \subseteq F$.
Let $\bar{x}$ be any Slater point. We can conclude from Lemma \ref{lm33} that $\bar{x}$ is contained
in $F(T_k)$ for $k$ large enough. Thus $c^{\top}x_k^* \leq c^{\top}\bar{x}$, which indicates that
$x_k^*\in L(\bar{x})$ for sufficiently large $k$.
Since the level set $L(\bar{x})$ is compact, there exists at least one accumulation point $x^*$ of
the sequence $\{x_k^*\}$. Assume without loss of generality that the sequence $\{x_k^*\}$ itself
converges to $x^*$, i.e., $\lim_{k\to \infty} x_k^* = x^*$.
It suffices to prove that $x^*$ is an optimal solution to \ref{LSIP}. It is obvious that $x^*$ is feasible for \ref{LSIP}.
Let $x_{opt}$ be an optimal solution to \ref{LSIP}. If $x_{opt}\in F^o$,
then $x_{opt}\in F(T_k)$ for all $k$ large enough. This indicates that $f(x_k^*) \leq f(x_{opt})$ for $k$
large enough and thus
\[f(x^*) = \lim_{k\to \infty} f(x_k^*) \leq f(x_{opt}),\]
where $f(x) = c^{\top}x$.
If $x_{opt}$ lies on the boundary of the feasible set $F$, there exists a sequence of the Slater points
$\{\bar{x}_j\ |\ \bar{x}_j\in F^o\}$ such that $\lim_{j\to \infty} \bar{x}_j = x_{opt}$. For each
$\bar{x}_j\in F^o$ there exists at least one index $k = k(j)$ such that $\bar{x}_j\in F(T_{k(j)})$
which implies that $f(x_{k(j)}^*) \leq f(\bar{x}_j)$ for $j\in \mathbb{N}$.
Since $\{x_k^*\}$ converges to $x^*$ and $\{x_{k(j)}^*\}$ is a subsequence of $\{x_k^*\}$, the sequence
$\{x_{k(j)}^*\}$ is convergent and $\lim_{j\to \infty} f(x_{k(j)}^*) = f(x^*)$.
By the continuity of $f$ we have
\[f(x^*) = \lim_{j\to \infty} f(x_{k(j)}^*) \leq \lim_{j\to \infty} f(\bar{x}_j) = f(x_{opt}).\]
To sum up, we have $x^*\in F$ and $f(x^*) \leq f(x_{opt})$.
This completes our proof.
\newline
\newline
\noindent\textbf{Proof of Lemma \ref{l35}}
Let $\bar{x}\in F $ be a Slater point, then we have
\[a(y)^{\top}\bar{x} + a_0(y) > 0, \ \forall y\in Y_k, k = 1,2,...,N.\]
Since $a_i(\cdot)$, $i = 0,1,...,n$, are twice continuously differentiable, they are Lipschitz continuous,
i.e., there exists a constant $L$ such that
\[|a_i(y) - a_i(z)| \leq L|y-z|,\ \forall y,z \in Y.\]
Let $\bar{g}_k(x) = \sum_{i=1}^n \min\{\bar{a}_i(\tau_{k-1}),\bar{a}_i(\tau_k)\} x_i + \min\{\bar{a}_0(\tau_{k-1}),
\bar{a}_0(\tau_k)\}$, then we have
\begin{align*}
& a(y)^{\top}\bar{x} + a_0(y) - \bar{g}_k(\bar{x}) \\
& = \sum_{i=1}^n a_i(y) \bar{x}_i + a_0(y) -
\sum_{i=1}^n \min\{\bar{a}_i(\tau_{k-1}),\bar{a}_i(\tau_k)\} \bar{x}_i - \min\{\bar{a}_0(\tau_{k-1}),
\bar{a}_0(\tau_k)\} \\
& \leq (L\sum_{i=1}^n (\bar{x}_i +1)) |Y_k|, \ \forall y\in Y_k, k = 1,2,...,N.
\end{align*}
It follows that $\bar{g}_k(\bar{x}) \geq 0$ if $|T|$ is sufficiently small, which implies
$\bar{x} \in \bar{F}(T)$.
This completes our proof.
\newline
\newline
\noindent\textbf{Convergence of Algorithm 1 for R1-LSIP}
Since $x^*_k$ is a solution of R1-LSIP($T_k$) for a consistent subdivision $T_k = \{\tau^k_j\ | \ j = 0,1,...,N_k\}$
in the $k$th iteration, it must satisfy the following KKT condition:
\begin{equation}\label{kktapp1}
c - \sum_{j\in A(x^*_k)} \lambda^k_j A_{T_k}(:,j) = 0
\end{equation}
where $A(x^*_k) = \{j\ |\ A_{T_k}(:,j)^{\top} x^*_k + b_{T_k}(j) = 0\}$, and
$A_{T_k}(i,j) = A^l_{i,j}$ and $b_{T_k}(j) =
A^l_{0,j}$ are the corresponding lower bounds for $a_i(y)$ and $a_0(y)$ on $[\tau^k_{j-1},\tau^k_j]$.
By (\ref{haus}) we know that for any $\bar{\tau}^k_j\in [\tau^k_{j-1},\tau^k_j]$ there holds that
\[|a_i(\bar{\tau}^k_j) - A_{T_k}(i,j)| \leq \gamma^k_i |\tau^k_j - \tau^k_{j-1}|^p, 0\leq i\leq n, j\in A(x^*_k)\]
where $\gamma^k_i$, $0\leq i\leq n$, and $p\geq 1$ are constants. Since $A_{T_k}(i,j)$ is a lower bound
for $a_i(y)$ on $[\tau^k_{j-1},\tau^k_j]$, there exist some constants
$0 \leq \beta^k_i \leq \gamma_i^k$, $i = 0,1,2,...,n$, such that
$A_{T_k}(i,j) = a_i(\bar{\tau}^k_j) - \beta^k_i |\tau^k_j - \tau^k_{j-1}|^p$. Substituting this into
(\ref{kktapp1}) and $A(x^*_k)$, we obtain
\begin{align*}
& c - \sum_{j\in A(x^*_k)} \lambda^k_j [a(\bar{\tau}^k_j) - (|\tau^k_j - \tau^k_{j-1}|^p)\beta^k] = 0, \\
& A(x^*_k) = \{j\ |\ [a(\bar{\tau}^k_j) - (|\tau^k_j - \tau^k_{j-1}|^p)\beta^k]^{\top} x^*_k +
a_0(\bar{\tau}^k_j) - (|\tau^k_j - \tau^k_{j-1}|^p)\beta_0^k = 0\},
\end{align*}
where $a(\bar{\tau}^k_j) = (a_1(\bar{\tau}^k_j), a_2(\bar{\tau}^k_j),...,a_n(\bar{\tau}^k_j))^{\top}$.
It follows that $x^*_k$ is an $(\epsilon,\delta)$ KKT point of \ref{LSIP} if
the lengths of the subintervals $[\tau^k_{j-1},\tau^k_j]$ for $j\in A(x^*_k)$ converge to zero as $k$
goes to infinity. This is true by an argument similar to that in the proof of Theorem \ref{conapp2}.
\end{document} |
\begin{document}
\newtheorem{axiom}{Axiom}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\pagestyle{plain}
\title{The Foundations of Mathematics\\in the Physical Reality}
\author{Doeko H. Homan}
\date{May 30, 2022}
\maketitle
\pagenumbering{arabic}
It is well known that the concept {\em set} can be used as the foundation for
mathematics. And the Peano axioms for the set of all natural numbers should be
`considered as the fountainhead of all mathematical knowledge' (Halmos [1974]
page 47). However, natural numbers should be {\em defined}, thus `what is a
natural number', not `what is the set of all natural numbers'.\par
The basic properties of a set are the members of that set, and the sets that
set is a member of. Thus there is no set without members. And set theory has to
include objects existing in physical reality, called `individuals'.
Example: A single shoe is not a singleton set but an individual, a pair of
shoes (often a left one and a right one) is a set of individuals.\par
In this article we present an axiomatic definition of sets with individuals.
Natural numbers and ordinals are defined. Limit ordinals are `first numbers',
that is, a first number of the Peano axioms. For every natural number $m$ we
define `first $\omega^m$-numbers'. Every ordinal is an ordinal with a first
$\omega^0$-number, every `limit ordinal' is an ordinal with a first
$\omega^m$-number. Ordinals with a first number satisfy the Peano axioms.
\section{What is a set?}
At an early age you develop the idea of `what is a set'. You belong to a family
or a clan, you belong to the inhabitants of a village or a particular region.
Experience shows there are objects that constitute a set. A constituent of a
set is called `a member' of that set. A set can be a member of a set.\par
Experience shows an object `is equal to' or `is not equal to' another object.
The relations `is equal to' and `is not equal to' are defined in set theory.
For individuals the relations `is equal to' and `is not equal to' are given in
physical reality but are not known within set theory. Therefore we define: The
only member of an individual is the individual oneself.
\section{Very basic set theory}
Sets are denoted by $s$, $t$, $u$, $\ldots$, by capital letters or by lowercase
Greek letters. Symbol $\in$ is the `membership relation symbol', to read as `is
a member of' or as `belongs to'. For every $s$ and every $t$ applies $s\in t$
or the negation of $s\in t$, denoted by $s\notin t$. Thus for every $s$ applies
$s\in s$ or $s\notin s$.\par
Formulas are built up from formulas like $s\in t$ by means of $\wedge$ (and),
$\vee$ (or), $\rightarrow$ (material implication), $\leftrightarrow$
(biimplication), $\neg$ (negation) and by means of $\forall s$ (for every set
$s$) and $\exists s$ (exists set $s$). The parentheses $($ and $)$ preclude
ambiguity and improve legibility. Then $s\notin t$ is short for $\neg(s\in t$),
$(s\in t\vee s\notin t)$ is a tautology and $\exists s\exists t(s\in t\wedge s
\notin t)$ is a contradiction.
\begin{definition}
Set $s$ is an `individual' if $s\in s$.
\boldmath$\Box$\end{definition}
If $s$ is an individual then $s$ `is a member of oneself'. Sets being members
of themselves seem problematic. Section 4.6.5 in (Mendelson [2015]) states:\par
`The words ``individual'' and ``atom'' are sometimes used as synonyms for\par
``urelement''. (...) Thus, urelements have no members.'\\
Thus an atom or an urelement is certainly not what we call an individual. The
membership relation for individuals seems awkward. But there are no logical
obstacles for `being a member of oneself'. To make a distinction between
individuals and other sets, we stipulate that in the phrases `an individual or a set', `a set
of individuals', and `a transitive set', the word `set' refers to sets that are not individuals.\\
\mbox{ }\par
Then $\forall s(s\in s\vee s\notin s)$, thus an individual (namely $s$ itself) is a member of $s$,
or a set (again $s$ itself) is not a member of $s$. Therefore $\forall s\exists u((u\in s\wedge u
\in u)\vee(u\notin s\wedge u\notin u))$.\\
The same result also follows from Russell's paradox.\\
\mbox{ }\\
Russell's paradox (Jech [2002] page 4).\\
Assume $s$ is a set whose members are all those (and only those) sets that are
not members of themselves\par
$\exists s\forall u(u\in s\leftrightarrow u\notin u)$.\\
The formula applies to every $u$ thus we can choose $u=s$. Then the formula
reads\par
$\exists s(s\in s\leftrightarrow s\notin s)$ thus $\exists s(s\in s\wedge s
\notin s)$, a clear contradiction. Therefore\par
$\forall s\exists u((u\in s\wedge u\in u)\vee(u\notin s\wedge u\notin u))$.\\
Thus Russell's paradox shows: For every $s$, an individual is a member of $s$,
or a set is not a member of $s$. Logic allows for individuals.\\
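The conclusion drawn here from Russell's paradox, $\forall s\exists u((u\in s\wedge u\in u)\vee(u\notin s\wedge u\notin u))$, can be sanity-checked by brute force over all membership relations on a small finite universe. The check below is our own addition, not part of the article; it uses the observation that the matrix of the quantified formula is equivalent to $u\in s\leftrightarrow u\in u$.

```python
from itertools import product

U = range(3)
PAIRS = [(u, s) for u in U for s in U]

def formula_holds(rel):
    """forall s exists u ((u in s and u in u) or (u not in s and u not in u)),
    where rel is the membership relation: (u, s) in rel means u is in s.
    The matrix is equivalent to (u in s) <-> (u in u)."""
    return all(
        any(((u, s) in rel) == ((u, u) in rel) for u in U)
        for s in U
    )

# Check every one of the 2^9 membership relations on a 3-element universe.
all_ok = all(
    formula_holds({p for p, keep in zip(PAIRS, bits) if keep})
    for bits in product((False, True), repeat=len(PAIRS))
)
```

All $512$ relations satisfy the formula; indeed $u=s$ always witnesses it, since $s\in s\leftrightarrow s\in s$ holds trivially.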
\mbox{ }\\
We try to prove $\exists v(v\notin s)$ by the same kind of reasoning.\\
Assume $v$ is a set whose members are all those (and only those) members of $s$
that are not members of themselves (thus are not individuals)\par
$\exists v\forall u(u\in v\leftrightarrow(u\in s\wedge u\notin u))$.\\
The formula applies to every $u$ thus we can choose $u=v$\par
$\exists v(v\in v\leftrightarrow(v\in s\wedge v\notin v))$.\\
Then $\exists v(v\notin v\wedge v\notin s)$ thus $v$ is not an individual and
not a member of $s$. Thus (Jech [2002] page 4): `The set of all sets does not
exist.'\\
\mbox{ }\\
Again we try the same kind of reasoning. Assume $v$ is a set whose members are
all those (and only those) members of $s$ that are individuals\par
$\exists v\forall u(u\in v\leftrightarrow(u\in s\wedge u\in u))$.\\
The formula applies to every $u$ thus we can choose $u=v$\par
$\exists v(v\in v\leftrightarrow(v\in s\wedge v\in v))$.\\
Then $\exists v(v\notin v\vee(v\in v\wedge v\in s))$. Thus $v$ is a set of
individuals, or $v$ is an individual and a member of $s$. Again, individuals
are on par with other sets. And `the set of all individuals' is not excluded.
\section{The axioms of set theory}
\begin{definition} $s$ `is equal to' $t$, denoted by $s=t$ if $\forall u(u\in s
\leftrightarrow u\in t)$.\par
$s\neq t$ is short for $\neg(s=t)$.
\boldmath$\Box$\end{definition}
\begin{axiom}(equality)
$\forall s\forall t(s=t\rightarrow\forall u(s\in u\leftrightarrow t\in u))$.
\boldmath$\Box$\end{axiom}
If $s=t$ then $s$ and $t$ have the same basic properties: $s$ and $t$ have the
same members, and $s$ and $t$ are members of the same sets.
\begin{axiom}(individuals)
$\forall s(s\in s\rightarrow\forall u(u\in s\rightarrow u=s))$.
\boldmath$\Box$\end{axiom}
Thus the only member of an individual is the individual oneself. Then\par
$\forall s\forall t(s\in s\wedge t\notin t\rightarrow s\neq t\wedge t\notin
s)$.
\begin{axiom}(pairs)
$\forall s\forall t\exists v\forall u(u\in v\leftrightarrow(u=s\vee u=t))$.
\boldmath$\Box$\end{axiom}
Then $v$ is denoted by $\{s,t\}$. The `singleton' $\{s\}$ of $s$ is short for
$\{s,s\}$.\par
$s\notin s\rightarrow\{s\}\neq s$ otherwise $s\in s$. Therefore $s\notin s
\rightarrow\{s\}\notin\{s\}$.\par
$s\in s\rightarrow\{s\}=s$, the singleton of an individual is the individual
oneself.\par
$s\neq t\rightarrow\{s,t\}\notin\{s,t\}$. Thus a pair of different sets is not
an individual.\\
\mbox{ }\\
The `empty set' is usually postulated by $\exists s\forall u (u\notin s)$. The
formula applies to every $u$ thus we can choose $u=s$. Therefore $\exists s(s
\notin s)$ thus the empty set, denoted by $\emptyset$, is not an individual.
The axiom of regularity is usually\par
$\forall s(s\neq\emptyset\rightarrow\exists v(v\in s\wedge\forall u(u\in v
\rightarrow u\notin s)))$.\\
Thus every individual contradicts the axiom of regularity. Therefore we have to
abandon the empty set and reformulate the axiom of regularity.
\begin{axiom} $\forall s\exists v(v\in s\wedge v\in v\wedge\forall u(u\in s
\rightarrow u\in u)\vee$\\
(regularity) $v\in s\wedge v\notin v\wedge\forall u(u\in v\wedge u\notin u
\rightarrow u\notin s))$.
\boldmath$\Box$\end{axiom}
Then to every set belongs an individual or a set, and regularity does not apply
to $s$ if $s$ is an individual or a set of individuals.
\begin{axiom}(union)
$\forall s\exists v\forall u(u\in v\leftrightarrow\exists w(w\in s\wedge u\in
w))$.
\boldmath$\Box$\end{axiom}
The union $v$ of $s$ is denoted by $\bigcup s$, and $\forall s\exists u(u\in
\bigcup s)$. The `union $s\cup t$' of $s$ and $t$ is defined by $s\cup t=
\bigcup\{s,t\}$. Thus $s\in s\rightarrow\bigcup s=s\wedge s\cup\{s\}=s$.
\begin{axiom}(power)
$\forall s\exists v\forall u(u\in v\leftrightarrow\forall w(w\in u\rightarrow
w\in s))$.
\boldmath$\Box$\end{axiom}
The power $v$ of $s$ is denoted by ${\cal{P}}s$, and $\forall s(s\in
{\cal{P}}s)$.\\
We reformulate the axiom `separation'.
\begin{axiom}(separation)\par
If $\Phi(u)$ denotes a well-formed formula $\Phi$ with $u$ free in $\Phi$
then\par
$\forall s(\exists u(u\in s\wedge\Phi(u))\rightarrow\exists v\forall u(u\in v
\leftrightarrow u\in s\wedge\Phi(u)))$.
\boldmath$\Box$\end{axiom}
In Section 7 we postulate an `axiom of infinity'.
\section{Transitive sets}
\begin{definition}
$s$ is `included in' $t$, denoted by $s\subseteq t$ if $\forall u(u\in s
\rightarrow u\in t)$.\par
$s$ `is transitive' if $\forall u(u\in s\rightarrow u\subseteq s)$, or
equivalently $\bigcup s\subseteq s$.
\boldmath$\Box$\end{definition}
Every individual and every set of individuals is transitive. If $s$ is
transitive then $\bigcup s$ and $s\cup\{s\}$ are transitive. If $v\notin v$
then $\{v\}$ is not transitive.
\begin{theorem} If $s$ is transitive and $\exists u(u\in s\wedge u\notin u)$
then a set of individuals\par
is a member of $s$: $\exists v(v\in s\wedge v\notin v\wedge\forall u(u\in v
\rightarrow u\in u))$.
\end{theorem}
{\it Proof.} Apply regularity to $s$ to find $v\in s\wedge v\notin v$ such
that\par
$\forall u(u\in v\wedge u\notin u\rightarrow u\notin s)$. Then $\forall u(u\in
v\wedge u\in s\rightarrow u\in u)$.\par
Set $s$ is transitive and $v\in s$ thus $\forall u(u\in v\rightarrow u\in
s)$.\par
Therefore $\exists v(v\in s\wedge v\notin v\wedge\forall u(u\in v\rightarrow u
\in u))$.
\begin{flushright}\boldmath$\Box$\end{flushright}
If $u$ and $v$ are the only individuals belonging to a transitive set $s$ then
$\{u,v\}$ is the only possible set of individuals belonging to $s$. Therefore
$\{u,v\}\in s\cup\{s\}$.\par
In the remainder we fix the pair of different individuals $o$ and $e$ and
define $0=\{o,e\}$. Then $0\notin0$, $\bigcup0=0$ and $\forall u(u\notin u
\rightarrow u\notin0)$.
\begin{theorem}
If $T$ is a transitive set with transitive members and\par
$0\in T\wedge\forall u(u\in T\wedge u\in u\rightarrow u\in0)$ then\par
$\forall s\forall t(s\in T\wedge s\notin s\wedge t\in T\wedge t\notin t
\rightarrow(s\in t\vee s=t\vee t\in s))$.
\end{theorem}
{\it Proof.} $T$ is a transitive set with transitive members and $0\in T$
therefore\par
$\forall u(u\in T\rightarrow((u\notin0\leftrightarrow u\notin u)\wedge(u\notin
u\rightarrow0\in u\cup\{u\})))$. Thus\par
$0\in T\wedge0\notin0\wedge\forall u(u\in T\wedge u\notin u\rightarrow(0\in u
\vee0=u\vee u\in0))$.\\
Assume there exists $v$ such that\par
$v\in T\wedge v\notin v\wedge\exists w(w\in T\wedge w\notin w\wedge(v\notin w
\wedge v\neq w\wedge w\notin v))$.\par
Then $v\neq0$ and $v\neq w$.\\
Use separation to define $V$ by\\
$\forall u(u\in V\leftrightarrow u\in T\wedge u\notin u\wedge\exists x(x\in T
\wedge x\notin x\wedge(u\notin x\wedge u\neq x\wedge x\notin u)))$.\par
Then $v\in V\wedge v\notin v$, $0\notin V$ and $\forall u(u\in0\rightarrow u
\notin V)$.\\
Apply regularity to $V$ to find $t\in V$ such that $t\in T\wedge t\notin t
\wedge$\par
$\exists x(x\in T\wedge x\notin x\wedge(x\notin t\wedge x\neq t\wedge t\notin
x))\wedge\forall u(u\in t\wedge u\notin u\rightarrow u\notin V)$.\par
$0\notin V$ thus $t\neq0$. And $t\in T\wedge t\notin t$ therefore $0\in t$.\par
Set $t\in V$ thus $\exists x(x\in T\wedge x\notin x\wedge(x\notin t\wedge x\neq
t\wedge t\notin x))$.\\
Use separation to define $W$ by\par
$\forall u(u\in W\leftrightarrow u\in T\wedge u\notin u\wedge(u\notin t\wedge u
\neq t\wedge t\notin u))$.\par
Then $x\in W\wedge x\notin x$ and $0\notin W$.\\
Apply regularity to $W$ to find $y\in W$ such that\par
$y\in T\wedge y\notin y\wedge(y\notin t\wedge y\neq t\wedge t\notin y)\wedge
\forall u(u\in y\wedge u\notin u\rightarrow u\notin W)$.\par
Then $y\neq0\wedge y\in T\wedge y\notin y$ thus $y$ is transitive and $0\in
y$.\par
Set $T$ is transitive thus $\forall u(u\in y\wedge y\in T\rightarrow u\in T)$
therefore\\
$\forall u(u\in y\wedge u\notin u\rightarrow u\notin W)\rightarrow\forall u(u
\in y\wedge u\notin u\rightarrow(u\in t\vee u=t\vee t\in u))$.\par
Then $(t\in u\vee t=u)\wedge u\in y\rightarrow t\in y$ contradictory to $t
\notin y$.\par
Thus $\forall u(u\in y\wedge u\notin u\rightarrow u\notin W)\rightarrow
\forall u(u\in y\wedge u\notin u\rightarrow u\in t)$.\par
Sets $y$ and $t$ are transitive, $0\in y$ and $0\in t$ thus $y\subseteq t$.\par
Then $y\subseteq t\wedge y\neq t\rightarrow\exists s(s\in t\wedge s\notin s
\wedge s\notin y)$ otherwise $y=t$.\par
Set $s\neq0$, $s\in t$ and $\forall u(u\in t\wedge u\notin u\rightarrow u\notin
V)$ therefore\par
$\forall u(u\in T\wedge u\notin u\rightarrow(u\in s\vee u=s\vee s\in u))$.\par
Set $y\in T$ and $s\notin y$ thus $y\in s\vee y=s$.\par
But $(y\in s\vee y=s)\wedge s\in t\rightarrow y\in t$ contradictory to $y\notin
t$.\\
Therefore the assumption `there exists $v$' is not correct, thus\par
$\forall s\forall t(s\in T\wedge s\notin s\wedge t\in T\wedge t\notin t
\rightarrow(s\in t\vee s=t\vee t\in s))$.
\begin{flushright}\boldmath$\Box$\end{flushright}
If both $s$ and $t$ are transitive sets with transitive members then $(s\cup t)
\cup\{s,t\}$ is a transitive set with transitive members and $s$ and $t$ belong
to $(s\cup t)\cup\{s,t\}$. Then the `law of trichotomy' reads\par
$s\notin s\wedge t\notin t\wedge\forall u(u\in s\cup t\wedge u\in u\rightarrow
u\in0)\rightarrow s\in t\vee s=t\vee t\in s$.\\
Then $\bigcup s=s$ or $\bigcup s\neq s\wedge s=\bigcup s\cup\{\bigcup s\}$.
\section{Natural numbers}
\begin{definition}
$s$ is a `natural number' if $s$ is a transitive set with transitive\par
members and $s\notin0\wedge\forall u(u\in s\cup\{s\}\rightarrow u\in0\cup\{0\}
\vee\bigcup u\in u)$.\par
The `successor' of natural number $s$ is $s\cup\{s\}$.
\boldmath$\Box$\end{definition}
Then $0$ is the first natural number. In fact, the set formed by any pair of
different individuals can serve as a first natural number. In this section $0$ is denoted by
$\alpha$. If $s$ is a natural number then $s\notin\alpha\leftrightarrow s\notin
s$. And
$$\bigcup\alpha=\alpha\wedge s\notin\alpha\wedge\forall u(u\in s\cup\{s\}
\rightarrow u\in\alpha\cup\{\alpha\}\vee\bigcup u\in u).$$
Then $\alpha$ is the first natural number. And\\
\mbox{ }\par
$\forall u(u\in s\cup\{s\}\wedge u\in u\rightarrow u\in\alpha)$,\par
$\forall u(u\in s\cup\{s\}\rightarrow u\in\alpha\cup\{\alpha\}\vee u=\bigcup u
\cup\{\bigcup u\})$,\par
$\forall u(u\in s\cup\{s\}\wedge u\notin\alpha\rightarrow\bigcup u\notin
\alpha)$.
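The defining clauses can be tested on small cases in a concrete model (an illustration only; individuals are modeled here as opaque strings with no members, a simplification of the self-membered individuals of the text that does not affect the clauses checked below, since $0$ itself falls under the $u\in0\cup\{0\}$ case).

```python
def members(s):
    # individuals (modeled as opaque strings) have no members here
    return s if isinstance(s, frozenset) else frozenset()

def big_union(s):
    # the union of s: all members of members of s
    out = set()
    for u in members(s):
        out |= members(u)
    return frozenset(out)

def is_transitive(s):
    return big_union(s) <= members(s)

ZERO = frozenset({"o", "e"})   # 0 = {o, e}

def succ(s):
    # the successor of s is s union {s}
    return s | frozenset({s})

def is_natural(s):
    """Check the definition: s is a transitive set with transitive
    members, s not in 0, and every u in s union {s} satisfies
    u in 0 union {0}, or union(u) in u."""
    if not isinstance(s, frozenset):   # an individual is not a natural number
        return False
    if not is_transitive(s):
        return False
    if not all(is_transitive(u) for u in s):
        return False
    for u in s | frozenset({s}):
        if u in ZERO or u == ZERO:     # u in 0 union {0}
            continue
        if isinstance(u, frozenset) and big_union(u) in u:   # union(u) in u
            continue
        return False
    return True
```

For example, `is_natural` accepts $0$, $1=0\cup\{0\}$ and $2=1\cup\{1\}$, and rejects the singleton $\{0\}$, which is not transitive.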
\begin{theorem}
The successor $s\cup\{s\}$ of natural number $s$ is a natural number.
\end{theorem}
{\it Proof.} $s\cup\{s\}$ is a transitive set with transitive members.\par
$s$ is a natural number thus $\forall u(u\in s\cup\{s\}\rightarrow u\in\alpha
\cup\{\alpha\}\vee\bigcup u\in u)$.\par
And $s\notin\alpha$ implies $s\cup\{s\}\notin\alpha$ and $s\cup\{s\}\neq
\alpha$. Thus $s\cup\{s\}\notin\alpha\cup\{\alpha\}$.\par
Then $\bigcup(s\cup\{s\})=s\wedge s\in s\cup\{s\}$ thus $\bigcup(s\cup\{s\})\in
s\cup\{s\}$.\par
Therefore $s\cup\{s\}$ is a natural number.
\begin{flushright}\boldmath$\Box$\end{flushright}
\begin{theorem}
If $s$ is a natural number and $\alpha\in s$ then\par
$\forall u(u\notin\alpha\wedge u\in s\rightarrow(u\mbox{ is a natural
number}))$.
\end{theorem}
{\it Proof.} If $u\in s$ then $u$ is a transitive set with transitive
members.\par
$s$ is transitive thus $\forall v(v\in u\cup\{u\}\wedge u\in s\rightarrow v\in
s)$. Then\par
$\bigcup\alpha=\alpha\wedge u\notin\alpha\wedge\forall v(v\in u\cup\{u\}
\rightarrow v\in\alpha\cup\{\alpha\}\vee\bigcup v\in v)$.\par
Therefore $\forall u(u\notin\alpha\wedge u\in s\rightarrow(u\mbox{ is a natural
number}))$.
\begin{flushright}\boldmath$\Box$\end{flushright}
The next theorem is the `principle of mathematical induction'.
\begin{theorem}
If $s$ is a natural number and $\Phi(u)$ is a well formed formula $\Phi$\par
with natural number $u$ free in $\Phi$ then\par
$\Phi(\alpha)\wedge\forall u(u\notin\alpha\wedge u\in s\wedge\Phi(u)\rightarrow
\Phi(u\cup\{u\}))\rightarrow\Phi(s)$.
\end{theorem}
{\it Proof.} If $s$ is first number $\alpha$ then $\Phi(\alpha)$ thus
$\Phi(s)$.\par
If $s\neq\alpha$ then $\alpha\in s$. Use separation to define set $v$ by\par
$\forall u(u\in v\leftrightarrow u\notin\alpha\wedge u\in s\wedge\Phi(u))$.
Then $\alpha\in v$ and $s\notin v$.\par
Use separation to define set $w$ by\par
$\forall u(u\in w\leftrightarrow u\notin\alpha\wedge u\in s\cup\{s\}\wedge
u\notin v)$. Then $s\in w$ and $\alpha\notin w$.\par
Apply regularity to set $w$ to find natural number $z\in w$ such that\par
$z\in s\cup\{s\}\wedge z\notin\alpha\wedge z\notin v\wedge\forall u(u\in z
\wedge u\notin u\rightarrow u\notin w)$.\par
Then $\alpha\in v$ thus $z\neq\alpha$. And $z\notin\alpha$ therefore $\alpha\in
z$. Thus $z$ is not first\par
number $\alpha$ therefore $z=\bigcup z\cup\{\bigcup z\}$. Then $\bigcup z\in z$
and $\bigcup z\notin\alpha$.\par
And $\bigcup z\in z\wedge z\in s\cup\{s\}\rightarrow\bigcup z\in s$ thus
$\bigcup z\in s\wedge\bigcup z\notin w\rightarrow\bigcup z\in v$.\par
Therefore $\Phi(\bigcup z)$ thus $\Phi(z)$. And $z\in s\cup\{s\}$.\par
If $z\in s$ then $z\notin\alpha\wedge\Phi(z)\rightarrow z\in v$ contradictory
to $z\notin v$.\par
Therefore $z=s$ thus $\Phi(s)$.
\begin{flushright}\boldmath$\Box$\end{flushright}
The axioms formulated by G. Peano are satisfied:
\begin{itemize}
\item there is first natural number $\alpha$,
\item every natural number has as successor a natural number,
\item if $s$ is a natural number and $s\cup\{s\}=\alpha$ then $s\in\alpha$
contradictory to $s\notin\alpha$ thus first number $\alpha$ is not the
successor of a natural number,
\item if $s$ and $t$ are natural numbers and $s\cup\{s\}=t\cup\{t\}$ then
$s\in t\cup\{t\}$ and $t\in s\cup\{s\}$ therefore $s=t$ thus if both
successors of a pair of natural numbers are equal then both natural
numbers are equal,
\item the principle of mathematical induction.
\end{itemize}
In the remainder natural numbers are denoted by $l$, $m$, $n$, $x$ or $y$.
The first natural number is $0$, and $1$ denotes the successor of $0$ thus $1=0
\cup\{0\}$.
\begin{definition}
$m$ is `less than' $n$, denoted by $m<n$ if $m\in n$,\par
$m$ is `less than or equal to' $n$, denoted by $m\leq n$ if $n\notin m$.
\boldmath$\Box$\end{definition}
Then $0$ is less than every other natural number. We define addition $m+n$ and
multiplication $m\cdot n$. As usual, multiplication takes precedence over
addition.
\begin{definition}
$m+0=m\mbox{, }\mbox{ }m+(n\cup\{n\})=(m+n)\cup\{m+n\}$,\par
$m\cdot0=0\mbox{, }\mbox{ }m\cdot(n\cup\{n\})=m\cdot n+m$.
\boldmath$\Box$\end{definition}
Then $m+1=(m+0)\cup\{m+0\}$ thus $m+1=m\cup\{m\}$ and $0+1=1$. With the
principle of mathematical induction one can prove that $m+n$ and $m\cdot n$ are
natural numbers, that $1+n=n+1$, and the basic properties of Peano Arithmetic
(addition and multiplication).
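With $0$ and successor encoded concretely, the recursive definitions of addition and multiplication can be run on small numbers (an illustrative sketch; the predecessor of a non-first natural number $s$ is recovered as $\bigcup s$, using the fact noted earlier that $\bigcup s\neq s$ implies $s=\bigcup s\cup\{\bigcup s\}$).

```python
ZERO = frozenset({"o", "e"})   # 0 = {o, e}, with individuals o and e

def succ(s):
    # the successor of s is s union {s}
    return s | frozenset({s})

def pred(s):
    # for a natural number s != 0 we have s = union(s) cup {union(s)},
    # so the big union recovers the predecessor; individuals (modeled
    # as strings) contribute no members
    out = set()
    for u in s:
        if isinstance(u, frozenset):
            out |= u
    return frozenset(out)

def add(m, n):
    # m + 0 = m,  m + (n cup {n}) = (m + n) cup {m + n}
    return m if n == ZERO else succ(add(m, pred(n)))

def mul(m, n):
    # m . 0 = 0,  m . (n cup {n}) = m . n + m
    return ZERO if n == ZERO else add(mul(m, pred(n)), m)
```

In particular `add(m, succ(ZERO))` reduces to `succ(m)`, confirming $m+1=m\cup\{m\}$ in the model.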
\section{Ordinals}
\begin{definition}
$\beta$ is an `ordinal' if $\beta$ is a transitive set with transitive\par
members and $\beta\notin0\wedge\forall u(u\in\beta\wedge u\in u\rightarrow u
\in0)$.\par
Ordinal $\alpha$ is a `first number' if $\bigcup\alpha=\alpha$.
\boldmath$\Box$\end{definition}
Thus $0\in\beta\cup\{\beta\}$ and $0$ is a first number. If $\alpha$ is a first
number and $u\in\alpha\wedge u\notin u$ then $u$ and $u\cup\{u\}$ are ordinals
and $u\cup\{u\}\in\alpha$. And $\beta\notin0\leftrightarrow\beta\notin\beta$.
We define addition of ordinals and natural numbers.
\begin{definition}
If $\beta$ is an ordinal then\par
$\beta+0=\beta\mbox{, }\mbox{ }\beta+(n+1)=(\beta+n)\cup\{\beta+n\}$.
\boldmath$\Box$\end{definition}
Then $\beta\cup\{\beta\}$ and $\beta+1$ are ordinals. With the principle of
mathematical induction one can prove $\forall m\forall n((\beta+m)+n=\beta+
(m+n))$.
\begin{theorem}
If $\beta$ is an ordinal then $\exists\alpha\exists n(\bigcup\alpha=\alpha
\wedge\beta=\alpha+n)$.
\end{theorem}
{\it Proof.} If $\beta$ is a first number then $\beta=\beta+0$ thus $\alpha=
\beta$ and $n=0$.\par
If $\bigcup\beta\neq\beta$ then $\beta=\bigcup\beta+1$. Use separation to
define set $v$ by\par
$\forall u(u\in v\leftrightarrow u\in\beta\wedge u\notin u\wedge\exists n
(\beta=u+n))$. Then $\bigcup\beta\in v$.\par
Apply regularity to $v$ to find ordinal $\alpha\in v\wedge\alpha\notin\alpha$
such that\par
$\alpha\in\beta\wedge\exists n(\beta=\alpha+n)\wedge\forall u(u\in\alpha\wedge
u\notin u\rightarrow u\notin v)$. Then $\bigcup\alpha\in\alpha\cup
\{\alpha\}$.\par
If $\bigcup\alpha\in\alpha$ then $\alpha=\bigcup\alpha+1$ therefore $\beta=
\bigcup\alpha+(1+n)$.\par
Then $\bigcup\alpha\in\alpha\wedge\alpha\in\beta$ thus $\bigcup\alpha\in\beta$
therefore $\bigcup\alpha\in v$.\par
Then $\bigcup\alpha\in\alpha\wedge\bigcup\alpha\in v$ contradictory to $\forall
u(u\in\alpha\wedge u\notin u\rightarrow u\notin v)$.\par
Therefore $\exists\alpha\exists n(\bigcup\alpha=\alpha\wedge\beta=\alpha+n)$.
\begin{flushright}\boldmath$\Box$\end{flushright}
\begin{definition}
Ordinal $\beta$ is an `ordinal with first number $\alpha$' if\par
$\bigcup\alpha=\alpha\wedge\beta\notin\alpha\wedge\forall u(u\in\beta\cup
\{\beta\}\rightarrow u\in\alpha\cup\{\alpha\}\vee\bigcup u\in u)$.\par
The `successor' of ordinal $\beta$ is $\beta\cup\{\beta\}$.
\boldmath$\Box$\end{definition}
Then $\forall\beta(\beta\notin\alpha\rightarrow\beta\notin\beta)$.
First number $\alpha$ satisfies the definition of an ordinal with first number
$\alpha$. Natural numbers (the `finite' numbers) are ordinals with first number
$0$.\par
The theorems in section 5 about natural numbers apply analogously to ordinals
with first number $\alpha$. Thus the successor of an ordinal with first number
$\alpha$ is an ordinal with first number $\alpha$ and the ordinals with first
number $\alpha$ satisfy the Peano axioms.\par
If $\beta$ is an ordinal with first number $\alpha$ and $\beta\neq\alpha$ then
$\bigcup\beta\in\beta$ thus $\bigcup\beta$ is the `greatest' ordinal with first
number $\alpha$ belonging to $\beta$.
\begin{theorem}
If $\beta$ is an ordinal with first number $\alpha$ then for every $n$ $(\beta+
n)$\par is an ordinal with first number $\alpha$.
\end{theorem}
{\it Proof.} $\beta+0=\beta$ thus $\beta+0$ is an ordinal with first number
$\alpha$.\par
Assume if $x<n$ then $\beta+x$ is an ordinal with first number $\alpha$. Then
the\par successor $(\beta+x)+1$ of ordinal $\beta+x$ is an ordinal with first
number $\alpha$\par
therefore $\beta+(x+1)$ is an ordinal with first number $\alpha$.\par
Apply the principle of mathematical induction to $0\leq x\wedge x<n$ and\par
conclude for every $n$ $(\beta+n)$ is an ordinal with first number $\alpha$.
\begin{flushright}\boldmath$\Box$\end{flushright}
Every ordinal is an ordinal with a first number $\alpha$. We define $\omega^0=
1$. Then\\\mbox{ }\par
$\forall n(\alpha+\omega^0\cdot n$ is an ordinal with first number
$\alpha)$,\par
$\forall n(\alpha+\omega^0\cdot(n+1)=(\alpha+\omega^0\cdot n)+\omega^0)$,\par
$\forall n(\alpha+\omega^0\cdot n\in(\alpha+\omega^0\cdot n)+\omega^0)$.
\section{More first numbers}
\begin{definition}
If $\gamma$ is an ordinal then\par
$F^0_\gamma=\gamma\mbox{, }\mbox{ }\forall u(u\in F^{m+1}_\gamma\leftrightarrow
u\in F^m_\gamma\wedge\bigcup F^m_u=u)$.\par
Ordinal $\alpha$ is a `first $\omega^m$-number' if $\bigcup F^m_\alpha=\alpha$.
\boldmath$\Box$\end{definition}
Then $\forall m(F^m_0=0\wedge\bigcup F^m_0=0\wedge F^m_1=1\wedge\bigcup F^m_1=0
\wedge(0\in F^m_\gamma\cup\{F^m_\gamma\}))$.\\
If $0<m$ then $F^m_\gamma$ is a set of individuals and first numbers belonging
to $\gamma$.\\
And $\bigcup\bigcup F^m_\gamma\subseteq\bigcup F^m_\gamma$. If $u\in\bigcup
F^m_\gamma$ then $u$ is transitive and $u\cup\{u\}\in\bigcup F^m_\gamma$\par
thus $u\in\bigcup\bigcup F^m_\gamma$. Therefore if $0<m$ then $\bigcup
F^m_\gamma$ is a first number.\\
Then $\forall u(u\in F^m_\gamma\rightarrow\bigcup F^m_\gamma\notin u)$.\\
Apply the law of trichotomy and conclude $0<m\rightarrow F^m_\gamma\subseteq
\bigcup F^m_\gamma\cup\{\bigcup F^m_\gamma\}$. And $F^0_\gamma\subseteq
\bigcup F^0_\gamma\cup\{\bigcup F^0_\gamma\}$ therefore $\forall m(F^m_\gamma
\subseteq\bigcup F^m_\gamma\cup\{\bigcup F^m_\gamma\})$.\par
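As a small worked example (using the convention that individuals are self-membered, so $\bigcup o=o$ and $\bigcup e=e$): for the natural number $3=\{o,e,0,1,2\}$ the definition gives

```latex
$$F^0_3=3,\qquad
F^1_3=\{u\in3:\bigcup F^0_u=u\}=\{o,e,0\},\qquad
\bigcup F^1_3=o\cup e\cup0=0,$$
```

since $\bigcup1=0\neq1$ and $\bigcup2=1\neq2$; thus $\bigcup F^1_3=0$ is indeed a first number, consistent with the remark above.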
\begin{theorem}
If $\gamma$ is an ordinal then $\forall l(l\leq m\rightarrow F^m_\gamma
\subseteq F^l_\gamma)$.
\end{theorem}
{\it Proof.} $F^0_\gamma=\gamma$ and $\forall l(l\leq0\rightarrow l=0)$ thus
$\forall l(l\leq0\rightarrow F^0_\gamma\subseteq F^l_\gamma)$.\par
Assume if $x<m$ then $\forall l(l\leq x\rightarrow F^x_\gamma\subseteq
F^l_\gamma)$.\par
Then $F^{x+1}_\gamma\subseteq F^x_\gamma\wedge\forall l(l\leq x\rightarrow
F^x_\gamma\subseteq F^l_\gamma)$ thus $\forall l
(l\leq x\rightarrow F^{x+1}_\gamma\subseteq F^l_\gamma)$.\par
And $l=x+1\rightarrow F^{x+1}_\gamma\subseteq F^l_\gamma$ therefore $\forall l
(l\leq x+1\rightarrow F^{x+1}_\gamma\subseteq F^l_\gamma)$.\par
Apply the principle of mathematical induction to $0\leq x\wedge x<m$ and\par
conclude $\forall l(l\leq m\rightarrow F^m_\gamma\subseteq F^l_\gamma)$.
\begin{flushright}\boldmath$\Box$\end{flushright}
Then $F^m_\gamma\subseteq F^0_\gamma$ thus $\gamma\notin F^m_\gamma$. And
$\forall l(l\leq m\rightarrow\bigcup F^m_\gamma\subseteq\bigcup F^l_\gamma)$.\\
Apply the law of trichotomy and conclude $\bigcup F^m_\gamma\in\gamma
\leftrightarrow\bigcup F^m_\gamma\neq\gamma$.\\
Thus every first $\omega^m$-number is a first number, every first number is a
first $\omega^0$-number and $0$ is for every $m$ a first $\omega^m$-number. If
$\bigcup F^{m+1}_\alpha\neq\alpha$ then $\bigcup F^{m+1}_\alpha$ is the
greatest first $\omega^m$-number belonging to $\alpha$.\par
If $\bigcup F^m_\alpha=\alpha$ then $\forall l(l\leq m\rightarrow\alpha
\subseteq\bigcup F^l_\alpha\wedge\bigcup F^l_\alpha\subseteq\bigcup\alpha)$
thus $\bigcup\alpha=\alpha$ therefore $\forall l(l\leq m\rightarrow
\bigcup F^l_\alpha=\alpha)$. And $\forall u(u\in F^{m+1}_\alpha\leftrightarrow
u\in\alpha\wedge\bigcup F^m_u=u)$.\par
If $\bigcup F^m_\gamma\in F^m_\gamma$ then $\forall l(\bigcup F^{m+l}_\gamma
\neq\gamma)$ otherwise $\bigcup F^m_\gamma=\gamma$ contradictory to $\gamma
\notin F^m_\gamma$. Thus $\bigcup F^m_\gamma\in F^m_\gamma\rightarrow\forall l
(\bigcup F^{m+l}_\gamma\in\gamma)$.
\begin{axiom}
$\forall\alpha\forall m(\bigcup F^m_\alpha=\alpha\rightarrow\exists\gamma
\forall u(u\in\gamma\leftrightarrow\exists n(u\in\alpha+\omega^m\cdot n)))$.
\boldmath$\Box$\end{axiom}
$\gamma$ is denoted by $\alpha+\omega^{m+1}$. Thus $0+\omega^{m+1}$ is a set.\\
If $\bigcup\alpha=\alpha\wedge\beta\notin\alpha\wedge\beta\in\alpha+\omega^1$
then $\beta$ is an ordinal with first number $\alpha$. Thus every natural
number is a member of $0+\omega^1$ and every member of $0+\omega^1$ is a
natural number. This is the `axiom of infinity'.
\begin{definition}
If $\alpha$ is a first $\omega^m$-number then\par
$\alpha+\omega^{m+1}\cdot0=\alpha$,\par
$\alpha+\omega^{m+1}\cdot(n+1)=(\alpha+\omega^{m+1}\cdot n)+\omega^{m+1}$.
\boldmath$\Box$\end{definition}
If $\alpha$ is a first $\omega^{m+1}$-number then $\alpha$ is a first
$\omega^m$-number therefore\\
$\alpha+\omega^{m+1}=\alpha+\omega^{m+1}\cdot1$. And $\alpha+\omega^0=
\alpha+\omega^0\cdot1$ thus\par
$\forall m(\bigcup F^m_\alpha=\alpha\rightarrow\alpha+\omega^m=\alpha+\omega^m
\cdot1)$.
\begin{theorem}
$\bigcup F^m_\alpha=\alpha\rightarrow\alpha\in\alpha+\omega^m$.
\end{theorem}
{\it Proof.} If $\bigcup F^0_\alpha=\alpha$ then $\alpha+\omega^0=\alpha\cup
\{\alpha\}$ therefore $\alpha\in\alpha+\omega^0$.\par
Assume if $x<m$ then $\bigcup F^x_\alpha=\alpha\rightarrow\alpha\in\alpha+
\omega^x$.\par
If $\bigcup F^{x+1}_\alpha=\alpha$ then $\bigcup F^x_\alpha=\alpha$ thus
$\alpha\in\alpha+\omega^x$.\par
Then $\alpha+\omega^x=\alpha+\omega^x\cdot1$ thus $\alpha\in\alpha+\omega^x
\cdot1$ therefore $\alpha\in\alpha+\omega^{x+1}$.\par
Apply the principle of mathematical induction to $0\leq x\wedge x<m$ and\par
conclude $\bigcup F^m_\alpha=\alpha\rightarrow\alpha\in\alpha+\omega^m$.
\begin{flushright}\boldmath$\Box$\end{flushright}
\begin{theorem}
$\bigcup F^m_\alpha=\alpha\rightarrow\forall n(\bigcup F^m_{\alpha+\omega^{m+1}
\cdot n}=\alpha+\omega^{m+1}\cdot n)$.
\end{theorem}
{\it Proof.} If $\bigcup F^0_\alpha=\alpha$ then $\alpha+\omega^1$ exists
and\par
$\forall u(u\in\alpha+\omega^1\leftrightarrow\exists l(u\in\alpha+\omega^0\cdot
l))$. Then $\alpha+\omega^0\cdot l$ is an ordinal with\par
first number $\alpha$ thus a transitive set with transitive members.\par
Then $\bigcup(\alpha+\omega^1)\subseteq\alpha+\omega^1$ and
$\alpha+\omega^1\subseteq\bigcup(\alpha+\omega^1)$ thus $\alpha+\omega^1$ is a
first\par
$\omega^0$-number. Therefore $\bigcup F^0_\alpha=\alpha\rightarrow
\bigcup F^0_{\alpha+\omega^1}=\alpha+\omega^1$.\\
Assume if $y<l$ then $\alpha+\omega^1\cdot y$ is a first $\omega^0$-number.\par
Then $\alpha+\omega^1\cdot(y+1)=(\alpha+\omega^1\cdot y)+\omega^1$.\par
According to the induction assumption $\alpha+\omega^1\cdot y$ is a first
$\omega^0$-number.\par
Therefore $(\alpha+\omega^1\cdot y)+\omega^1$ is a first $\omega^0$-number thus
$\alpha+\omega^1\cdot(y+1)$ is\par
a first $\omega^0$-number.\\
Apply the principle of mathematical induction to $0\leq y\wedge y<l$ and\par
conclude $\bigcup F^0_\alpha=\alpha\rightarrow\forall l(\bigcup F^0_{\alpha+
\omega^{0+1}\cdot l}=\alpha+\omega^{0+1}\cdot l)$.\\
Assume if $x<m$ then $\bigcup F^x_\alpha=\alpha\rightarrow\forall n
(\bigcup F^x_{\alpha+\omega^{x+1}\cdot n}=\alpha+\omega^{x+1}\cdot n)$.\par
If $\alpha$ is a first $\omega^{x+1}$-number then apply the axiom of infinity
thus\par
$\exists\alpha+\omega^{(x+1)+1}\forall u(u\in\alpha+\omega^{(x+1)+1}
\leftrightarrow\exists l(u\in\alpha+\omega^{x+1}\cdot l))$.\par
$\alpha$ is a first $\omega^{x+1}$-number thus $\alpha$ is a first
$\omega^x$-number. Then according\par
to the induction assumption $\alpha+\omega^{x+1}\cdot l$ is a first
$\omega^x$-number.\par
If $v=\alpha+\omega^{x+1}\cdot l$ then $\bigcup F^x_v=v\wedge v\in v+
\omega^{x+1}$ therefore\par
$v\in\alpha+\omega^{x+1}\cdot(l+1)$ thus $v\in\alpha+\omega^{(x+1)+1}$.
Then\par
$\forall u(\exists v(u\in v\wedge v\in\alpha+\omega^{(x+1)+1}\wedge
\bigcup F^x_v=v)\rightarrow u\in\bigcup F^{x+1}_{\alpha+\omega^{(x+1)+1}})$.
\par
Then $\alpha+\omega^{(x+1)+1}\subseteq\bigcup F^{x+1}_{\alpha+
\omega^{(x+1)+1}}$. And $\bigcup F^{x+1}_{\alpha+\omega^{(x+1)+1}}\subseteq
\alpha+\omega^{(x+1)+1}$.\par
Therefore $\bigcup F^{x+1}_{\alpha+\omega^{(x+1)+1}}=\alpha+
\omega^{(x+1)+1}$.\par
Thus $\bigcup F^{x+1}_\alpha=\alpha\rightarrow\bigcup F^{x+1}_{\alpha+
\omega^{(x+1)+1}}=\alpha+\omega^{(x+1)+1}$.\\
Assume if $y<l$ then $\alpha+\omega^{(x+1)+1}\cdot y$ is a first
$\omega^{x+1}$-number.\par
Then $\alpha+\omega^{(x+1)+1}\cdot(y+1)=(\alpha+\omega^{(x+1)+1}\cdot y)+
\omega^{(x+1)+1}$.\par
According to the induction assumption $\alpha+\omega^{(x+1)+1}\cdot y$ is a
first\par
$\omega^{x+1}$-number. Therefore $(\alpha+\omega^{(x+1)+1}\cdot y)+
\omega^{(x+1)+1}$ is a first $\omega^{x+1}$-number\par
thus $\alpha+\omega^{(x+1)+1}\cdot(y+1)$ is a first $\omega^{x+1}$-number.\\
Apply the principle of mathematical induction to $0\leq y\wedge y<l$ and\par
conclude $\bigcup F^{x+1}_\alpha=\alpha\rightarrow\forall l
(\bigcup F^{x+1}_{\alpha+\omega^{(x+1)+1}\cdot l}=\alpha+\omega^{(x+1)+1}\cdot
l)$.\\
Apply the principle of mathematical induction to $0\leq x\wedge x<m$ and\par
conclude $\bigcup F^m_\alpha=\alpha\rightarrow\forall n
(\bigcup F^m_{\alpha+\omega^{m+1}\cdot n}=\alpha+\omega^{m+1}\cdot n)$.
\begin{flushright}\boldmath$\Box$\end{flushright}
With the principle of mathematical induction one can prove\par
$\forall n\forall l(\bigcup F^m_\alpha=\alpha\rightarrow(\alpha+\omega^{m+1}
\cdot n)+\omega^{m+1}\cdot l=\alpha+\omega^{m+1}\cdot(n+l))$.
\begin{theorem}
$\bigcup F^m_\alpha=\alpha\rightarrow F^{m+1}_{\alpha+\omega^{m+1}}=
F^{m+1}_\alpha\cup\{\alpha\}$.
\end{theorem}
{\it Proof.} If $\alpha$ is a first $\omega^0$-number then $\bigcup\alpha=
\alpha$ and\par
$\forall u(u\in F^1_{\alpha+\omega^1}\leftrightarrow u\in\alpha+\omega^1\wedge
\bigcup u=u)$.\par
And $\forall u(u\in\alpha+\omega^1\leftrightarrow\exists n(u\in\alpha+\omega^0
\cdot n))$.\par
Then $\alpha+\omega^0\cdot n$ is an ordinal with first number $\alpha$
therefore\par
$\forall u(u\in\alpha+\omega^0\cdot n\cup\{\alpha+\omega^0\cdot n\}\rightarrow
u\in\alpha\cup\{\alpha\}\vee\bigcup u\in u)$.\par
Thus $\forall u(u\in F^1_{\alpha+\omega^1}\rightarrow(u\in\alpha\cup\{\alpha\}
\vee\bigcup u\in u)\wedge\bigcup u=u)$.\par
Then $F^1_{\alpha+\omega^1}\subseteq F^1_\alpha\cup\{\alpha\}$. And
$F^1_\alpha\cup\{\alpha\}\subseteq F^1_{\alpha+\omega^1}$ thus
$F^1_{\alpha+\omega^1}=F^1_\alpha\cup\{\alpha\}$.\par
Therefore $\bigcup F^0_\alpha=\alpha\rightarrow F^{0+1}_{\alpha+\omega^{0+1}}=
F^{0+1}_\alpha\cup\{\alpha\}$. Then $\bigcup F^{0+1}_{\alpha+\omega^{0+1}}=
\alpha$.\par
Thus $\alpha$ is the greatest first $\omega^0$-number belonging to $\alpha+
\omega^{0+1}$.\\
Assume if $x<m$ then $\bigcup F^x_\alpha=\alpha\rightarrow F^{x+1}_{\alpha+
\omega^{x+1}}=F^{x+1}_\alpha\cup\{\alpha\}$.\par
Then $\alpha$ is the greatest first $\omega^x$-number belonging to $\alpha+
\omega^{x+1}$.\par
If $\alpha$ is a first $\omega^{x+1}$-number then apply the axiom of infinity
thus\par
$\exists\alpha+\omega^{(x+1)+1}\forall u(u\in\alpha+\omega^{(x+1)+1}
\leftrightarrow\exists l(u\in\alpha+\omega^{x+1}\cdot l))$.\par
Then $\forall u(u\in F^{(x+1)+1}_{\alpha+\omega^{(x+1)+1}}\leftrightarrow
u\in\alpha+\omega^{(x+1)+1}\wedge\bigcup F^{x+1}_u=u)$.\par
And $\forall u(u\in\alpha+\omega^{(x+1)+1}\leftrightarrow\exists l(u\in\alpha
\cup\{\alpha\}\vee u\in\alpha+\omega^{x+1}\cdot(l+1)))$.\par
Then $\alpha$ is a first $\omega^{x+1}$-number thus $\alpha$ is a first
$\omega^x$-number and $\alpha+\omega^{x+1}\cdot l$\par
is a first $\omega^x$-number. Then according to the induction assumption\par
$F^{x+1}_{(\alpha+\omega^{x+1}\cdot l)+\omega^{x+1}}=F^{x+1}_{\alpha+
\omega^{x+1}\cdot l}\cup\{\alpha+\omega^{x+1}\cdot l\}$. Thus $\alpha+
\omega^{x+1}\cdot l$ is the\par
greatest first $\omega^x$-number belonging to $\alpha+\omega^{x+1}\cdot
(l+1)$.\par
Then $\forall u(u\in\alpha+\omega^{(x+1)+1}\wedge\bigcup F^{x+1}_u=u\rightarrow
u\in\alpha\cup\{\alpha\}\wedge\bigcup F^{x+1}_u=u)$.\par
Thus $F^{(x+1)+1}_{\alpha+\omega^{(x+1)+1}}\subseteq F^{(x+1)+1}_\alpha\cup
\{\alpha\}$. And $F^{(x+1)+1}_\alpha\cup\{\alpha\}\subseteq
F^{(x+1)+1}_{\alpha+\omega^{(x+1)+1}}$.\par
Therefore $F^{(x+1)+1}_{\alpha+\omega^{(x+1)+1}}=F^{(x+1)+1}_\alpha\cup
\{\alpha\}$.\\
Apply the principle of mathematical induction to $0\leq x\wedge x<m$ and\par
conclude $\bigcup F^m_\alpha=\alpha\rightarrow F^{m+1}_{\alpha+\omega^{m+1}}=
F^{m+1}_\alpha\cup\{\alpha\}$. Then $\bigcup F^{m+1}_{\alpha+\omega^{m+1}}=
\alpha$.\par
Therefore $\alpha$ is the greatest first $\omega^m$-number belonging to
$\alpha+\omega^{m+1}$.
\begin{flushright}\boldmath$\Box$\end{flushright}
\begin{theorem}
$\bigcup F^m_\gamma=\gamma\wedge\bigcup F^{m+1}_\gamma\in\gamma\rightarrow
\gamma=\bigcup F^{m+1}_\gamma+\omega^{m+1}$.
\end{theorem}
{\it Proof.} $\bigcup F^{m+1}_\gamma\in\gamma$ thus $\bigcup F^{m+1}_\gamma$ is
the greatest first $\omega^m$-number belonging to\par
$\gamma$. And $\bigcup F^{m+1}_\gamma+\omega^{m+1}$ is a first
$\omega^m$-number. Thus $\bigcup F^{m+1}_\gamma$ is the greatest\par
first $\omega^m$-number belonging to $\bigcup F^{m+1}_\gamma+\omega^{m+1}$.\par
Then $\bigcup F^{m+1}_\gamma+\omega^{m+1}\notin\gamma\wedge\gamma\notin
\bigcup F^{m+1}_\gamma+\omega^{m+1}$.\par
Apply the law of trichotomy and conclude $\gamma=\bigcup F^{m+1}_\gamma+
\omega^{m+1}$.
\begin{flushright}\boldmath$\Box$\end{flushright}
\begin{theorem}
$\bigcup F^m_\gamma=\gamma\rightarrow\exists\alpha\exists n
(\bigcup F^{m+1}_\alpha=\alpha\wedge\gamma=\alpha+\omega^{m+1}\cdot n)$.
\end{theorem}
{\it Proof.} $\bigcup F^{m+1}_\gamma=\gamma\rightarrow\bigcup F^m_\gamma=
\gamma\wedge\gamma=\gamma+\omega^{m+1}\cdot0$ thus $\alpha=\gamma$ and
$n=0$.\par
If $\bigcup F^{m+1}_\gamma\neq\gamma$ then $\bigcup F^{m+1}_\gamma\in
\gamma$ thus $\gamma=\bigcup F^{m+1}_\gamma+\omega^{m+1}\cdot1$.\par
Use separation to define set $v$ of first $\omega^m$-numbers by\par
$\forall u(u\in v\leftrightarrow u\in F^{m+1}_\gamma\wedge u\notin u\wedge
\exists n(\gamma=u+\omega^{m+1}\cdot n))$. Then $\bigcup F^{m+1}_\gamma\in
v$.\par
Apply regularity to $v$ to find first $\omega^m$-number $\alpha\in v$ such
that\par
$\alpha\in F^{m+1}_\gamma\wedge\exists n(\gamma=\alpha+\omega^{m+1}\cdot n)
\wedge\forall u(u\in\alpha\wedge u\notin u\rightarrow u\notin v)$.\par
Then $\bigcup F^{m+1}_\alpha\in\alpha\cup\{\alpha\}$.\par
If $\bigcup F^{m+1}_\alpha\in\alpha$ then $\bigcup F^m_\alpha=\alpha\wedge
\bigcup F^{m+1}_\alpha\in\alpha\rightarrow\alpha=\bigcup F^{m+1}_\alpha+
\omega^{m+1}\cdot1$.\par
Thus $\bigcup F^{m+1}_\alpha$ is the greatest first $\omega^m$-number belonging
to $\alpha$ and\par
$\gamma=\bigcup F^{m+1}_\alpha+\omega^{m+1}\cdot(1+n)$.\par
Then $\bigcup F^{m+1}_\alpha\in\alpha\wedge\alpha\in F^{m+1}_\gamma$ thus
$\alpha\in\gamma$ therefore $\bigcup F^{m+1}_\alpha\in F^{m+1}_\gamma$.\par
Then $\bigcup F^{m+1}_\alpha\in v$ contradictory to $\forall u(u\in\alpha\wedge
u\notin u\rightarrow u\notin v)$.\par
Therefore $\bigcup F^m_\gamma=\gamma\rightarrow\exists\alpha\exists n(\bigcup
F^{m+1}_\alpha=\alpha\wedge\gamma=\alpha+\omega^{m+1}\cdot n)$.
\begin{flushright}\boldmath$\Box$\end{flushright}
If $\bigcup F^m_\alpha=\alpha$ then $\forall n(\bigcup F^m_{\alpha+\omega^{m+1}
\cdot n}=\alpha+\omega^{m+1}\cdot n)$ thus $\alpha+\omega^{m+1}\cdot n$ is the
greatest first $\omega^m$-number belonging to $\alpha+\omega^{m+1}\cdot(n+1)$.
If $\bigcup F^{m+1}_\alpha=\alpha$ then for every $n\neq0$, $\alpha$ is the
greatest first $\omega^{m+1}$-number belonging to $\alpha+\omega^{m+1}
\cdot n$.\par
Every first number is a first $\omega^0$-number thus every ordinal is an
ordinal with a first $\omega^0$-number. We define `$\omega^{m+1}$-numbers with
first number $\alpha$'.
\begin{definition}
Ordinal $\gamma$ is an `$\omega^{m+1}$-number with first number $\alpha$'
if\par
$\bigcup F^m_\gamma=\gamma\wedge\bigcup F^{m+1}_\alpha=\alpha\wedge$\par
$\mbox{ }\mbox{ }\mbox{ }\gamma\notin\alpha\wedge\forall u(u\in\gamma\cup
\{\gamma\}\rightarrow u\in\alpha\cup\{\alpha\}\vee\bigcup F^{m+1}_u\in u)$.\par
The successor of $\omega^{m+1}$-number $\gamma$ with first number $\alpha$ is
$\gamma+\omega^{m+1}$.
\boldmath$\Box$\end{definition}
If $\gamma\neq\alpha$ then $\gamma=\bigcup F^{m+1}_\gamma+\omega^{m+1}$ thus
$\exists\beta\exists n(\bigcup F^{m+1}_\beta=\beta\wedge\gamma=\beta+\omega^
{m+1}\cdot n)$. Then $\gamma\neq\beta$ thus $n\neq0$ and $\beta$ is the
greatest first $\omega^{m+1}$-number belonging to $\gamma$. Thus $\alpha\in
\beta\cup\{\beta\}$. And $\beta\in\gamma\wedge\bigcup F^{m+1}_\beta=\beta
\rightarrow\beta\in\alpha\cup\{\alpha\}$ thus $\beta=\alpha$. If $\gamma=
\alpha$ then $\gamma=\alpha+\omega^{m+1}\cdot0$. Thus $\exists n(\gamma=\alpha+
\omega^{m+1}\cdot n)$.\par
Ordinals with a first $\omega^0$-number satisfy the Peano axioms. By the same
kind of reasoning used in section 5 about natural numbers one can prove that if
$\gamma$ is an $\omega^{m+1}$-number with first number $\alpha$ then
\begin{itemize}
\item there is first $\omega^{m+1}$-number $\alpha$,
\item the successor $\gamma+\omega^{m+1}$ of $\gamma$ is an
$\omega^{m+1}$-number with first number $\alpha$,
\item $\alpha$ is not the successor of an $\omega^{m+1}$-number with first
number $\alpha$,
\item if $\beta\notin\alpha\wedge\beta\in\gamma\wedge\bigcup F^m_\beta=\beta$
then $\beta$ is an $\omega^{m+1}$-number with first number $\alpha$,
\item if $\beta$ is an $\omega^{m+1}$-number with first number $\alpha$ and
$\beta+\omega^{m+1}=\gamma+\omega^{m+1}$ then $\beta\in\gamma\cup\{\gamma\}
\wedge\gamma\in\beta\cup\{\beta\}$ thus $\beta=\gamma$,
\item if $\Phi(\beta)$ is a well formed formula with first $\omega^m$-number
$\beta$ free in $\Phi$ then\\
$\Phi(\alpha)\wedge\forall\beta(\beta\notin\alpha\wedge\beta\in\gamma\wedge
\bigcup F^m_\beta=\beta\wedge\Phi(\beta)\rightarrow\Phi(\beta+\omega^{m+1}))
\rightarrow\Phi(\gamma)$.
\end{itemize}
Thus $\omega^m$-numbers with first number $\alpha$ satisfy the Peano axioms.
Therefore\\
\mbox{ }\par
if $\beta$ is an ordinal then $\exists\alpha\exists n(\bigcup\alpha=\alpha
\wedge\beta=\alpha+\omega^0\cdot n)$,\par
if $\gamma$ is a first $\omega^m$-number then $\exists\alpha\exists n(\bigcup
F^{m+1}_\alpha=\alpha\wedge\gamma=\alpha+\omega^{m+1}\cdot n)$.
\section{Counting down ordinals}
Ordinal $\alpha$ is a `first $\omega^\omega$-number' if $\forall m
(\bigcup F^m_\alpha=\alpha)$. $0$ is a first $\omega^\omega$-number.\\
If $\alpha$ is a first $\omega^\omega$-number then for every $m$ and for every $n$ $(\alpha+\omega^{m+1}\cdot
n)$ is an $\omega^{m+1}$-number with first number $\alpha$.
\begin{theorem}
If $\gamma$ is an ordinal then $\exists\alpha(\forall m(\bigcup F^m_\alpha=
\alpha)\wedge\exists l(\bigcup F^{l+1}_\gamma=\alpha))$.
\end{theorem}
{\it Proof.} If ordinal $\gamma$ is a first $\omega^\omega$-number then
$\alpha=\gamma$ thus $\bigcup F^{0+1}_\gamma=\alpha$.\\
If $\gamma$ is not a first $\omega^\omega$-number then $\exists n
(\bigcup F^n_\gamma\in\gamma)$ thus\par
$\bigcup F^0_\gamma\in\gamma$ or $\exists m(m<n\wedge\bigcup F^m_\gamma=\gamma
\wedge\bigcup F^{m+1}_\gamma\in\gamma)$.\par
If $\bigcup F^0_\gamma\in\gamma$ then $\gamma$ is an ordinal with a first
number.\par
Therefore $\bigcup F^0_\gamma\in\gamma\wedge\exists\beta\exists n
(\bigcup F^0_\beta=\beta\wedge n\neq0\wedge\gamma=\beta+\omega^0\cdot n\wedge
\bigcup F^{0+1}_\gamma=\beta)$.\par
If $\exists m(\bigcup F^m_\gamma=\gamma\wedge\bigcup F^{m+1}_\gamma\in\gamma)$
then\par
$\exists\beta\exists n(\bigcup F^{m+1}_\beta=\beta\wedge n\neq0\wedge
\gamma=\beta+\omega^{m+1}\cdot n\wedge\bigcup F^{(m+1)+1}_\gamma=\beta)$.\\
Thus in both cases there exist $\beta$ and $n$ such that we can `count down
$\gamma$ to $\beta$ in $n$ steps'.\par
If $\forall l(\bigcup F^l_\beta=\beta)$ then $\beta$ is a first
$\omega^\omega$-number and $\beta\in\gamma$.\par
If $\exists m(\bigcup F^m_\beta=\beta\wedge\bigcup F^{m+1}_\beta\in\beta)$ then
$\beta=\bigcup F^{m+1}_\beta+\omega^{m+1}$. Therefore\par
$\exists\beta_1(\bigcup F^{m+1}_{\beta_1}=\beta_1\wedge\exists n_1(n_1\neq0
\wedge\beta=\beta_1+\omega^{m+1}\cdot n_1))$. Thus\par
$\mbox{ }\gamma=(\beta_1+\omega^{m+1}\cdot n_1)+\omega^0\cdot n$. Then
$\beta_1$ is the greatest first $\omega^{m+1}$-number\par
belonging to $\beta$ and $\beta\in\gamma$ therefore $\bigcup F^{(m+1)+1}_\gamma
=\beta_1$.\\
Thus we can count down $\gamma$ to $\beta_1$ in $n+n_1$ steps.\par
If $\forall l(\bigcup F^l_{\beta_1}=\beta_1)$ then $\beta_1$ is a first
$\omega^\omega$-number and $\beta_1\in\gamma$.\par
If $\exists m_1(\bigcup F^{m_1}_{\beta_1}=\beta_1\wedge
\bigcup F^{m_1+1}_{\beta_1}\in\beta_1)$ then $\beta_1=
\bigcup F^{m_1+1}_{\beta_1}+\omega^{m_1+1}$. Therefore\par
$\exists\beta_2(\bigcup F^{m_1+1}_{\beta_2}=\beta_2\wedge\exists n_2(n_2\neq0
\wedge\beta_1=\beta_2+\omega^{m_1+1}\cdot n_2))$. Thus\par
$\mbox{ }\gamma=((\beta_2+\omega^{m_1+1}\cdot n_2)+\omega^{m+1}\cdot n_1)+
\omega^0\cdot n$. Then $\beta_2$ is the greatest first\par
$\omega^{m_1+1}$-number belonging to $\beta_1$ and $\beta_1\in\gamma$ therefore
$\bigcup F^{(m_1+1)+1}_\gamma=\beta_2$.\\
Thus we can count down $\gamma$ to $\beta_2$ in $n+n_1+n_2$ steps.\par
If $\forall l(\bigcup F^l_{\beta_2}=\beta_2)$ then $\beta_2$ is a first
$\omega^\omega$-number and $\beta_2\in\gamma$.\\
Thus there is a rule to construct a descending sequence\\
$\mbox{ }\mbox{ }\mbox{ }\mbox{ }\ldots,\mbox{ }\beta_2\in\beta_1,\mbox{ }
\beta_1\in\beta,\mbox{ }\beta\in\gamma$.\\
If $\beta$ is a first $\omega^\omega$-number then we define $\alpha=\beta$
thus\par
$\exists\alpha(\forall m(\bigcup F^m_\alpha=\alpha)\wedge\exists l
(\bigcup F^{l+1}_\gamma=\alpha))$.\\
If $\beta$ is not a first $\omega^\omega$-number then we define $\beta_0=
\beta$.\par
Assume $\forall n\exists\beta_{n+1}(\beta_{n+1}\in\beta_n)$. Thus $\beta_{n+1}
\in\beta\wedge\beta_{n+1}\in\gamma$.\par
Use separation to define set $s$ by\par
$\forall u(u\in s\leftrightarrow u\in\gamma\wedge\exists n(u=\beta_n))$ thus
$s\subseteq\gamma$ and set $\beta_0\in s$. Then\par
$\forall v(v\in s\rightarrow\exists n(v=\beta_n\wedge\beta_{n+1}\in v\wedge
\beta_{n+1}\in s))$, contradicting regularity\par
$\exists v(v\in s\wedge v\notin v\wedge\forall u(u\in v\wedge u\notin u
\rightarrow u\notin s))$.\\
Therefore $\exists\alpha(\forall m(\bigcup F^m_\alpha=\alpha)\wedge\exists l
(\bigcup F^{l+1}_\gamma=\alpha))$.
\begin{flushright}\boldmath$\Box$\end{flushright}
If $\gamma\neq\alpha$ then $\alpha$ is the greatest first $\omega^
\omega$-number belonging to $\gamma$, and counting down $\gamma$ terminates at
$\alpha$ in a finite number of steps. In this way, within the Peano axioms, if
$\alpha\neq0$ then $\alpha$ is an impassable barrier: $\gamma$ cannot be
counted down to $0$ in a finite number of steps.\par
If $\gamma=0$ or $0$ is the greatest first $\omega^\omega$-number belonging to
$\gamma$ then counting down $\gamma$ terminates at $0$ in a finite number of
steps. And if $\gamma\neq0$ then $\exists l(\bigcup F^{l+1}_\gamma=0\wedge
F^{l+1}_\gamma=1)$.
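As an illustrative aside (not part of the formal development), the countdown can be simulated for ordinals below $\omega^\omega$, assuming a hypothetical encoding by Cantor normal form coefficients: each jump of the theorem strips the lowest nonzero term $\omega^m\cdot n$ in $n$ steps, so the total number of steps is the sum of the coefficients. A Python sketch:

```python
def countdown_steps(coeffs):
    # Hypothetical encoding (ours, not from the formal development):
    # an ordinal below omega^omega is represented by its Cantor normal
    # form coefficients, coeffs[m] = coefficient of omega^m.  Each
    # round of the countdown removes the lowest nonzero term
    # omega^m * n in n steps, landing on the greatest first
    # omega^(m+1)-number below the current ordinal.
    steps = 0
    alpha = list(coeffs)
    while any(alpha):
        m = next(i for i, c in enumerate(alpha) if c)  # lowest nonzero term
        steps += alpha[m]                              # n steps for omega^m * n
        alpha[m] = 0
    return steps
```

For instance, $\gamma=\omega^2\cdot 2+3$ (coefficients `[3, 0, 2]`) is counted down to $0$ in $3+2=5$ steps.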
\end{document} |
\begin{document}
\title{When Should You Wait Before Updating?}
\begin{abstract}
Consider a dynamic network and a given distributed problem.
At any point in time, there might exist several solutions that are equally good with respect to the problem specification, but that are different from an algorithmic perspective, because some could be easier to update than others when the network changes.
In other words, one would prefer to have a solution that is more robust to topological changes in the network; and in this direction the best scenario would be that the solution remains correct despite the dynamic of the network.
In~\cite{CasteigtsDPR20}, the authors introduced a very strong robustness criterion: they required that for any removal of edges that maintain the network connected, the solution remains valid.
They focus on the maximal independent set problem, and their approach consists in characterizing the graphs in which there exists a robust solution (the existential problem), or even stronger, where any solution is robust (the universal problem).
As the robustness criterion is very demanding, few graphs have a robust solution, and even fewer are such that all of their solutions are robust.
In this paper, we ask the following question: \textit{Can we have robustness for a larger class of networks, if we bound the number $k$ of edge removals allowed}?
To answer this question, we consider three classic problems: maximal independent set, minimal dominating set and maximal matching.
For the universal problem, the answers for the three cases are surprisingly different.
For minimal dominating set, the class does not depend on the number of edges removed. For maximal matching, removing only one edge defines a robust class related to perfect matchings, but for all other bounds $k$, the class is the same as for an arbitrary number of edge removals.
Finally, for maximal independent set, there is a strict hierarchy of classes: the class for the bound $k$ is strictly larger than the class for bound $k+1$.
For the robustness notion of \cite{CasteigtsDPR20}, no characterization of the class for the existential problem is
known, only a polynomial-time recognition algorithm. We show that the situation is even worse for bounded $k$:
even for $k=1$, it is NP-hard to decide whether a graph has a robust maximal independent set.
\end{abstract}
\section{Introduction}
In the field of computer networks, the phrase ``\textit{dynamic networks}'' refers to many different realities, ranging
from static wired networks in which links can be unstable, up to wireless ad hoc networks in which entities directly communicate with each other by radio.
In the latter case, entities may join, leave, or even move inside the network at any time in completely unpredictable ways.
A common feature of all these networks is that communication links keep changing over time.
Because of this aspect, algorithmic engineering is far more difficult than in fixed static networks.
Indeed, solutions must be able to
adapt to incessant topological changes. This becomes particularly challenging when it comes to maintaining a single
leader~\cite{CF13r} or a (supposed to be) ``static'' covering data structure, for instance, a spanning tree, a node
coloring, a Maximal Independent Set (MIS), a Minimal Dominating Set (MDS), or a Maximal Matching (MM).
Most of the time, to overcome such topological changes, algorithms compute and recompute their solution to try to be as close as possible to a correct solution in all circumstances.
Of course, when the network dynamics is high, meaning that topological changes are extremely frequent, it sometimes becomes impossible to obtain an acceptable solution.
In practice, the correctness requirements of the algorithm are most often relaxed in order to approach the desired behavior, while amortizing the recomputation cost.
Actually, this sometimes leads to reconsidering the very nature of the
problems, for example: looking for a ``moving leader'', a leader or a spanning tree per connected component, a temporal
dominating set, an evolving MIS, a best-effort broadcast, \textit{etc.}; we refer
to~\cite{CF13r,CFQS12} for more examples.
In this paper, we address the problem of network dynamics under an approach similar to the one introduced
in~\cite{BDKP15,CasteigtsDPR20}: \textit{To what extent of network dynamics can a computation be performed without relaxing its specification?}
Before going any further into our motivation, let us review related work on which our study relies.
Numerous models for dynamic networks have been proposed during the last decades (refer to~\cite{CF13r} for a
comprehensive list of models), some of them aiming at unifying previous modeling approaches, mainly~\cite{CFQS12,XFJ03}.
As is often the case, in this work, the network is modeled as a graph, where the set of vertices (also called nodes) is fixed, while the communication
links are represented by a set of edges appearing and disappearing unexpectedly over time.
Without extra assumptions, this modeling includes every behavior that can occur over time; for example, the
network topology may contain no edges at some instant, or an edge present at some time may disappear definitively afterwards.
According to different assumptions on the appearances and disappearances (frequency, synchrony, duration, etc.), the dynamics of temporal networks can be classified into many classes~\cite{CFQS12}.
One of these classes, Class $\mathcal{TC^R}$, is particularly
important.
In this class, a temporal path between any two vertices appears infinitely often.
This class is arguably the most natural and versatile generalization of the notion of connectivity from static networks to dynamic networks: every vertex is able to send (not necessarily directly) a message to any other vertex at any time.
For a dynamic network of the class $\mathcal{TC^R}$ on a vertex set $V$, one can partition $V\times V$ into three sets: the edges that are present infinitely often --called \emph{recurrent} edges--, the edges that are present only a finite number of times --called \emph{eventually absent} edges--, and the edges that are never present.
The union of the first two sets defines a graph called the {\em footprint} of the network~\cite{CFQS12},
while its restriction to the edges that are infinitely often present is called the {\em eventual footprint}~\cite{BDKP16}.
In~\cite{BDKP16}, the authors prove that Class~$\mathcal{TC^R}$ is actually the set of dynamic networks whose {\em eventual footprint} is connected.
In conclusion, from a distributed computing point of view, it is more than reasonable to consider only dynamic networks such that some of their edges are recurrent and their union does form a {\em connected} spanning subgraph of their footprint.
Unfortunately, it is impossible for a node to distinguish between a recurrent and an eventually absent edge. Therefore, the best the nodes can do is to compute a solution relative to the footprint, hoping that this solution
still makes sense in the eventual footprint, whatever it is.
In \cite{CasteigtsDPR20}, the authors introduce the concept of \textit{robustness} to capture this intuition, defined as follows:
\begin{definition}[Robustness]
A property $P$
is robust over a graph $G$ if and only if $P$ is satisfied in every connected spanning subgraph of $G$ (including $G$
itself).
\end{definition}
Another way to phrase this definition is to say that \emph{a property $P$ is robust if it is still satisfied when we remove any number of edges, as long as the graph stays connected}.
In~\cite{CasteigtsDPR20}, the authors focus on the problem of maximal independent set (MIS). That is, they study the cases where a set of vertices can keep being an MIS even if we remove edges. They structure their results around two questions:
\noindent\textbf{Universal question:} For which networks are \emph{all the solutions} robust against any edge removals that do not disconnect the graph?
\noindent\textbf{Existential question:} For which networks does \emph{there exist a solution} that is robust against any edge removals that do not disconnect the graph?
The authors in~\cite{CasteigtsDPR20} establish a characterization of the networks that answer the first question for
the MIS problem. Still for the same problem, they provide a polynomial-time algorithm to decide whether a network answers the second question.
Note that the study of robustness was also very recently addressed for the case of metric properties in~\cite{CasteigtsCHL22}.
In that paper, the authors show that deciding whether the distance between two given vertices is robust can be done in
linear time. However, they also show that deciding whether the diameter is robust or not is coNP-complete.
\subsection{Our approach}
Our goal is to go beyond \cite{CasteigtsDPR20}, and to get both a more fine-grained and a broader understanding of the notion of robustness.
Let us start with the fine-grained dimension.
In~\cite{CasteigtsDPR20}, a solution had to be robust against any number of edge removals as long as the graph remains connected.
In this paper, we want to understand what are the structures that are robust against $k$ edge removals while keeping the connectivity constraint, for any specific $k$, adding granularity to the notion.
We call this concept $k$-robustness (see formal definition below) and we focus on the universal and the existential question of \cite{CasteigtsDPR20} for this fine-grained version of robustness.
Now for the broader dimension, let us discuss the problems studied. In~\cite{CasteigtsDPR20}, the problem studied is MIS, which is a good choice in the sense that it leads to
an interesting landscape.
Indeed, robustness being a very demanding property, one has to find problems to which it can
apply without leading to trivial answers.
In this direction, one wants to look at local problems, because a modification will only have consequences in some
neighborhood and not on the whole graph, which leaves the hope that it actually does not affect the correctness at
all.
Among the classic local problems, as studied in the LOCAL model (see~\cite{Peleg00} for the original definition and~\cite{HirvonenS20} for a recent book), there are mainly coloring problems and packing/covering problems.
The coloring problems (with a fixed number of colors) are not meaningful in our context: an edge removal can only help.
But the packing/covering problems are all interesting, thus we widen the scope to cover three classic problems in this paper:
maximal independent set (MIS) as before, but also maximal matching (MM) and minimal dominating set (MDS).
To help the reader grasp some intuition on our approach, let us illustrate $1$-robustness for maximal
matching, \emph{i.e.} a set of edges that pairwise share no vertex
and that is maximal in the sense that no edge can be added.
To be $1$-robust, a matching must still be maximal after the removal of \emph{one arbitrary} edge that does not disconnect the graph.
Let us go over various configurations illustrated in Figure~\ref{fig:MM} (the matched edges are bold ones).
\begin{figure}
\caption{Examples of maximal matchings (matched edges in bold) and their behavior under one edge removal.\label{fig:MM}\label{fig:mm-a}\label{fig:mm-b}\label{fig:mm-c}}
\end{figure}
For the two graphs in Figure~\ref{fig:mm-a}, that are cycles of 6 vertices, we can observe that two instances of maximal matching can have different behaviors.
Indeed, in the top one, if we remove one matched edge, we are left
with a matching that is not maximal in the new graph: the two edges adjacent to the removed one could be added. By
contrast, in the bottom graph, after any edge removal the matching is still maximal.
Now, in the graph of Figure~\ref{fig:mm-b}, a complete balanced bipartite graph, all the maximal matchings are identical up to isomorphism.
After one arbitrary edge removal, we are left with a graph where no new edge can be matched. Therefore in this graph, any matching is robust to one edge removal. Note that this is not true for any number of edge removals, illustrating the fact that $k$-robustness and robustness are not equivalent.
Finally, in Figure~\ref{fig:mm-c}, all the maximal matchings consist of only one edge, and they are not robust to an
edge removal. Indeed, after the matched edge is removed, one can choose any of the two remaining ones.
To summarize, Figure~\ref{fig:MM} illustrates the effect of $1$-robustness in three different cases: one where
\textit{some} matchings are $1$-robust, one where \textit{all} matchings are $1$-robust, and one where \textit{no} matching is
$1$-robust.
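The checks above can be mechanized. The following Python sketch (illustrative only; graphs are given by a vertex count and a set of ordered edge tuples, and all names are ours) tests whether a maximal matching of the $6$-cycle is $1$-robust:

```python
def connected(n, edges):
    # BFS connectivity test on vertices 0..n-1 (illustrative helper)
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

def is_maximal_matching(edges, matching):
    matched = [v for e in matching for v in e]
    return (len(matched) == len(set(matched))      # no two edges share a vertex
            and matching <= edges                  # only surviving edges
            and all(u in matched or v in matched   # maximal: no edge can be added
                    for u, v in edges))

def one_robust(n, edges, matching):
    # 1-robust: removing any single edge either disconnects the graph
    # or leaves a matching that is still maximal in the remaining graph
    return all(not connected(n, edges - {e})
               or is_maximal_matching(edges - {e}, matching - {e})
               for e in edges)

# The 6-cycle discussed above
C6 = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)}
```

On `C6`, the perfect matching `{(0,1), (2,3), (4,5)}` is found $1$-robust, while the two-edge maximal matching `{(0,1), (3,4)}` is not, matching the discussion of Figure~\ref{fig:mm-a}.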
\subsection{Our results}
Our first contribution is to introduce the fine-grained version of robustness in Section~\ref{sec:model}.
After that, every technical section of this paper is devoted to providing an answer to the fine-grained version of one of the two questions highlighted above (existential \textit{vs.} universal) for one of the problems we study.
Our focus is on understanding how the different settings compare, in terms of both problems and number of removable edges.
Let us start with the universal question. Here, we prove that the three problems have three different behaviors.
For minimal dominating set, the class of the graphs for which any solution is $k$-robust is exactly the same for every $k$ (a class that already appeared in \cite{CasteigtsDPR20} under the name of \emph{sputnik graphs}) as proved in Section~\ref{sec:MDS}.
For maximal matching, the case of $k=1$, which we used previously as an example, is special and draws an interesting connection with perfect matchings, but then the class is identical for every $k\geq 2$. These results are presented in Section~\ref{sec:MM}.
Finally, for maximal independent set, we show in Section~\ref{sec:MIS} that there is a strict hierarchy: the class for $k$ edge removals is strictly smaller than the one for $k-1$. For this case, we do not pinpoint the exact characterization, but give some additional structural results on the classes.
The existential question is much more challenging. Section~\ref{sec:NPH} presents some preliminary results on the study of this question.
For maximal independent set, we show that for any~$k$, deciding whether a graph has a maximal independent set that is robust to $k$ edge removals is NP-hard. This is the first NP-hardness result for this type of question.
\section{Model, definitions, and basic properties}
\label{sec:model}
In this paper, except when stated otherwise, the graph is denoted $G$, its vertex set $V$, and its edge set $E$.
\subsection{Robustness and graph problems}
The key notion of this paper is the one of $k$-robustness.
\begin{definition}
Given a graph problem and a graph, a solution is \emph{$k$-robust} if after the removal of \emph{at most $k$} edges, either the graph is disconnected, or the solution is still valid.
\end{definition}
Note that $k$-robustness is about removing at most $k$ edges, not exactly $k$ edges.
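A brute-force sketch of this definition (illustrative Python, exponential in $k$ and intended only for tiny graphs; the MIS validity check `mis_valid` is a hypothetical example of ours, not a construction from the paper):

```python
from itertools import combinations

def connected(n, edges):
    # BFS connectivity test on vertices 0..n-1
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

def k_robust(n, edges, k, valid):
    # After removing *at most* k edges: either the graph is
    # disconnected, or the solution (checked by `valid`) survives.
    edges = set(edges)
    return all(not connected(n, edges - set(gone)) or valid(edges - set(gone))
               for r in range(1, k + 1)
               for gone in combinations(edges, r))

def mis_valid(S, n):
    # Hypothetical example: validity of a fixed vertex set S as an MIS
    def check(rest):
        adj = {v: {w for e in rest for w in e if v in e and w != v}
               for v in range(n)}
        independent = all(adj[v].isdisjoint(S) for v in S)
        maximal = all(v in S or adj[v] & S for v in range(n))
        return independent and maximal
    return check
```

For instance, $\{0\}$ is an MIS of the triangle but is not $1$-robust (removing one edge keeps the graph connected and leaves a vertex that can be added), while on a path every removal disconnects the graph, so any MIS is vacuously $1$-robust.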
We will abuse notation and write $\infty$-robust when mentioning the notion of robustness from \cite{CasteigtsDPR20}, with an unbounded number of removals. Hence $k$ is a parameter in $\mathbb{N}\cup\{\infty\}$.
\begin{notation}
We define $\mathcal{U}^k_{P}$ and $\mathcal{E}^k_{P}$ the following way:
\begin{itemize}
\item Let $\mathcal{U}^k_{P}$ be the class of graphs such that any solution to the problem $P$ is $k$-robust.
\item Let $\mathcal{E}^k_{P}$ be the class of graphs such that there exists a solution to the problem $P$ that is $k$-robust.
\end{notation}
Note that to easily incorporate the parameter $k$, we decided to not follow the exact same notations as in~\cite{CasteigtsDPR20}.
\paragraph*{Graph problems.}
We consider three graph problems:
\begin{enumerate}
\item Minimal dominating set (MDS): Select a minimal set of vertices such that every vertex of the graph is either in the set or has a neighbor in the set.
\item Maximal matching (MM): Select a maximal set of edges such that no two selected edges share an endpoint.
\item Maximal independent set (MIS): Select a maximal set of vertices such that no two selected vertices share an edge.
\end{enumerate}
A \emph{perfect matching} is a matching where every vertex is matched.
We will also use the notion of \emph{$k$-dominating set}, which is a set of selected vertices such that every vertex is either selected or adjacent to at least $k$ selected vertices.
Note that $k$-dominating set sometimes refers to another notion, related to the distance to the selected vertices, but this is not our definition.
\paragraph*{The case of robust maximal matching.}
For maximal matching, the definition of robustness may vary. The definition we take is the following.
A maximal matching $M$ of a graph $G$ is $k$-robust if after removing any set of at most $k$ edges such that the graph $G$ is still connected, what remains of $M$ is a maximal matching of what remains of $G$.
\subsection{Graph notions}
\label{subsec:graph-notions}
We list a few graph theory definitions that we will need.
\begin{definition}
The \emph{neighborhood} of a node $v$, denoted $N(v)$, is the set of nodes that are adjacent to $v$.
The \emph{closed neighborhood} of a node $v$, denoted $N[v]$, is the neighborhood of $v$, plus $v$ itself.
\end{definition}
\begin{definition}
A graph is $t$-(edge)-connected if, after the removal of any set of $(t-1)$ edges, the graph is still connected. A $t$-(edge)-connected component is a subgraph that is $t$-(edge)-connected.
\end{definition}
In the following we are only interested in edge connectivity thus we will simply write \emph{$t$-connectivity} to refer to \emph{$t$-edge-connectivity}.
In our proofs, we will use the following easy observation multiple times: in a 2-connected graph, every vertex belongs to a cycle.
\begin{definition}
In a connected graph, a \emph{bridge} is an edge whose removal disconnects the graph.
\end{definition}
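For small instances, bridges can be found naively by testing each edge of a connected graph (an illustrative quadratic sketch of ours; Tarjan's lowpoint algorithm does this in linear time):

```python
def connected(n, edges):
    # BFS connectivity test on vertices 0..n-1
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

def bridges(n, edges):
    # In a connected graph, an edge is a bridge iff its removal
    # disconnects the graph.  Quadratic sketch for small graphs.
    return {e for e in edges if not connected(n, edges - {e})}
```

On a triangle with one pendant vertex, the only bridge is the pendant edge.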
\begin{definition}\label{def:join}
Given two graphs $G$ and $H$, the \emph{join} of these two graphs, $join(G,H)$, is the graph made by taking the union of $G$ and $H$, and adding all the possible edges $(u,v)$, with $u\in G$ and $v\in H$. See Figure~\ref{fig:join}.
\end{definition}
\begin{definition}
A \emph{sputnik graph} (\cite{CasteigtsDPR20}) is a graph where every node that is part of a cycle has an \emph{antenna}, that is, a neighbor of degree~1. See Figure~\ref{fig:sputnik}.
\end{definition}
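Since an edge lies on a cycle if and only if it is not a bridge, a vertex is part of a cycle exactly when it is incident to a non-bridge edge. This gives a simple recognition sketch for sputnik graphs (illustrative Python of ours, for connected graphs, using a naive quadratic bridge test):

```python
def connected(n, edges):
    # BFS connectivity test on vertices 0..n-1
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

def is_sputnik(n, edges):
    # A vertex lies on a cycle iff it is incident to a non-bridge edge;
    # every such vertex must have an antenna (a degree-1 neighbor).
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    non_bridges = {e for e in edges if connected(n, edges - {e})}
    on_cycle = {v for e in non_bridges for v in e}
    return all(any(len(adj[w]) == 1 for w in adj[v]) for v in on_cycle)
```

A triangle with a pendant vertex attached to each of its three corners is a sputnik graph; the plain triangle is not. Note that trees are vacuously sputnik graphs, since no vertex lies on a cycle.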
\begin{figure}
\caption{Illustration of the definitions of Subsection~\ref{subsec:graph-notions}.\label{fig:join}\label{fig:sputnik}}
\end{figure}
\subsection{Basic properties}
The following properties follow from the definitions.
\begin{property}\label{prop:basic-inclusions}
For any problem $P$, for any $k$, $\mathcal{U}^{k+1}_{P}\subseteq \mathcal{U}^{k}_{P}$ and $\mathcal{E}^{k+1}_{P}\subseteq \mathcal{E}^{k}_{P}$.
\end{property}
In particular, $\mathcal{U}^{\infty}_{P}\subseteq \mathcal{U}^{k}_{P} \subseteq \mathcal{U}^{1}_{P}$ and $\mathcal{E}^{\infty}_{P}\subseteq \mathcal{E}^{k}_{P} \subseteq \mathcal{E}^{1}_{P}$, for all $k$.
\begin{property}\label{prop:connectivity}
If a graph is \emph{$(k+1)$-connected} then a solution is $k$-robust if and only if after the removal of any set of $k$ edges the solution is still correct.
\end{property}
\section{Minimal dominating set}
\label{sec:MDS}
\begin{theorem}
\label{thm:MDS}
For all $k$ in $\mathbb{N}\cup\{\infty\}$, $\mathcal{U}_{MDS}^k$ is the set of sputnik graphs.
\end{theorem}
\begin{proof}
We know from~\cite{CasteigtsDPR20} that the theorem holds for $k=\infty$.
Hence, thanks to Property~\ref{prop:basic-inclusions}, it is sufficient to prove that the theorem is true for $k=1$.
For the sake of contradiction, consider a graph~$G$ in $\mathcal{U}_{MDS}^1$ that is not a sputnik graph.
Then there is a node $u$ that belongs to a cycle, and that has no antenna. Let $S$ be the closed neighborhood of $u$, $S=N[u]$.
We say that a node of $S$, different from $u$, is an \emph{inside node} if it is only connected to nodes in~$S$.
We now consider two cases depending on whether there is an inside node or not. See Figure~\ref{fig:proof-MDS}.
\begin{enumerate}
\item Suppose there exists an inside node $v$. Note that $v$ has at least one neighbor different from $u$ because otherwise it would be an antenna.
Let the set $W$ be the closed neighborhood of $v$, except~$u$. The set $D=V\setminus W$ is a dominating set of the graph, because all the nodes either belong to $D$ or are neighbors of $u$ (which belongs to $D$).
Now, we transform $D$ into a \emph{minimal} dominating set greedily: we remove nodes from $D$ in an arbitrary order, until no more nodes can be removed without making $D$ non-dominating.
We claim that this minimal dominating set is not 1-robust.
Indeed, if we remove the edge $(u,v)$, $v$ is not covered any more (none of its current neighbors belongs to $D$), and the graph is still connected (because $v$ has a neighbor different from $u$).
\item Suppose there is no inside vertex.
Let $a$ be a neighbor of $u$ in the cycle, and let $W$ be the set $S\setminus\{a\}$.
Again we claim that $V \setminus W$ is a dominating set.
Indeed, because there is no inside node, every node in $S$ different from $u$ is covered by a node outside $W$, and $u$ is covered by~$a$, which belongs to $V\setminus W$.
As before we can make this set an MDS by removing nodes greedily, and again we claim it is not 1-robust.
Indeed, if we remove the edge $(u,a)$, we do not disconnect the graph (because of the cycle containing $u$), and $u$ is left uncovered.
\end{enumerate}
\end{proof}
\begin{figure}
\caption{Illustration of the two cases in the proof of Theorem~\ref{thm:MDS}.\label{fig:proof-MDS}}
\end{figure}
\section{Maximal matching}
\label{sec:MM}
We now turn our attention to the problem of maximal matching, and get the following theorem.
\begin{theorem}\label{thm:max-matching}
The class $\mathcal{U}^1_{MM}$ is composed of the set of trees, of balanced complete bipartite graphs, and of cliques with an even number of nodes.
For any $k\geq 2$, the class $\mathcal{U}^{k}_{MM}$ is composed of the cycle on four nodes and of the set of trees.
\end{theorem}
The core of this part is the study of the case where only one edge is removed. At the end of the section, we consider the more general, but technically less interesting, case of multiple edge removals.
\subsection{One edge removal}
In this subsection we characterize the class of graphs where every maximal matching is 1-robust.
\begin{lemma}\label{lem:universel-MM-1}
$\mathcal{U}^1_{MM}$ is composed of the set of trees, of balanced complete bipartite graphs, and of cliques with an even number of nodes.
\end{lemma}
The rest of this subsection is devoted to the proof of Lemma~\ref{lem:universel-MM-1}.
\paragraph*{A result about perfect matchings}
The core of the proof is to show a connection to perfect matchings. Once this is done, we can use the following theorem from~\cite{Summer79}.
\begin{theorem}[\cite{Summer79}]\label{thm:summer79}
The class of graphs such that any maximal matching is perfect is the union of the balanced complete bipartite graphs and of the cliques of even size.
\end{theorem}
\paragraph*{First inclusion}
We start with the easy direction of the theorem, which is to prove that the graphs we mentioned are in $\mathcal{U}^1_{MM}$.
In trees, any property is robust, since no edge can be removed without disconnecting the graph.
For the two other types, we will use the following claim.
\begin{claim}
Perfect matchings are 1-robust maximal matchings.
\end{claim}
Consider a perfect matching in a graph, and remove an arbitrary edge (that does not disconnect the graph).
If this edge was not in the matching, then we still have a perfect matching, thus a maximal matching. If this edge was in the matching, then there are only two unmatched nodes in the graph (the two endpoints of the removed edge), and all their neighbors are matched, thus the matching is still maximal. This proves the claim. $\lhd$
In balanced complete bipartite graphs and cliques of even size, any maximal matching is perfect (Theorem~\ref{thm:summer79}), and since perfect matchings are 1-robust maximal matchings, we get the first direction of Lemma~\ref{lem:universel-MM-1}.
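Theorem~\ref{thm:summer79} can be sanity-checked by brute force on small instances (illustrative Python of ours; `K4` and `C6` denote the $4$-clique and the $6$-cycle):

```python
from itertools import combinations

def maximal_matchings(edges):
    # Brute-force enumeration of all maximal matchings (small graphs only)
    found = []
    E = sorted(edges)
    for r in range(1, len(E) + 1):
        for sub in combinations(E, r):
            vs = [v for e in sub for v in e]
            if len(vs) != len(set(vs)):
                continue                      # two chosen edges share a vertex
            if all(u in vs or v in vs for u, v in edges):
                found.append(set(sub))        # maximal: no edge can be added
    return found

K4 = {(u, v) for u in range(4) for v in range(u + 1, 4)}   # 4-clique
C6 = {(i, (i + 1) % 6) for i in range(6)}                  # 6-cycle
```

Every maximal matching of `K4` has two edges and thus matches all four vertices (it is perfect), whereas `C6` admits both perfect maximal matchings (three edges) and non-perfect ones (two edges), consistent with Theorem~\ref{thm:summer79}.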
\paragraph*{Second inclusion: three useful claims}
We now tackle the other direction. The following lemma establishes a local condition that 1-robust matchings must satisfy. See Figure~\ref{fig:u-not-matched} for an illustration.
\begin{claim}\label{clm:u-not-matched}
In a 1-robust maximal matching $M$, if a node $u$ is not matched, then all the nodes of $N(u)$ are matched, and their matched edges are bridges of the graph.
\end{claim}
\begin{figure}
\caption{Illustration of Claim~\ref{clm:u-not-matched}.\label{fig:u-not-matched}}
\end{figure}
The fact that all the nodes in $N(u)$ are matched follows from $M$ being a maximal matching.
Now, suppose that there exists $(v,w)\in M$, such that $v\in N(u)$ and $(v,w)$ is not a bridge.
In other words, the removal of $(v,w)$ does not disconnect the graph.
After this removal, both $u$ and $v$ are unmatched, and since $(u,v)$ is an edge of the graph, the matching in the new graph cannot be maximal. This contradicts the 1-robustness of $M$, and proves the claim. $\lhd$
The following claim follows directly from Claim~\ref{clm:u-not-matched}.
\begin{claim}\label{clm:triangle}
In a 1-robust maximal matching $M$, if there is an unmatched node $u$, two nodes $a,b\in N(u)$ with $(a,b)\in E$, then $(a,b)\notin M$.
\end{claim}
We now study the shape of 1-robust maximal matchings in cycles.
\begin{claim}\label{clm:cycle}
In every maximal matching of a graph in $\mathcal{U}^1_{MM}$, if a node belongs to a cycle, then it is matched.
\end{claim}
Our proof of Claim~\ref{clm:cycle} consists in proving that if a maximal matching does not satisfy the condition, then either it is not 1-robust, or we can use it to build another maximal matching that is not 1-robust. In both cases this means the graph was not in $\mathcal{U}^1_{MM}$.
Consider a node $u$ in a cycle. Let $a$ and $b$ be its direct neighbors in the cycle, and let its other neighbors be $(c_i)_i$. There can be several configurations, with $a$ adjacent to $b$ or not, etc. The proof is generic to all these cases, but Figure~\ref{fig:cycle} illustrates different cases.
Consider a 1-robust maximal matching $M$ where $u$ is unmatched.
Because of Claim~\ref{clm:u-not-matched}, we know that there exist nodes $a'$, $b'$, and $c_i'$ for all $i$, such that $(a,a')$, $(b,b')$ and $(c_i,c_i')$ (for all $i$) are bridges of the graph. Because of the bridge condition, the nodes $a'$, $b'$ and $c_i'$ (for all $i$) are pairwise distinct, and distinct from $a$, $b$, $u$ and the $c_i$'s.
Let us also denote $d$ the neighbor of $a$ in the cycle that is not $u$.
Note that $d$ can be a $c_i$ or $b$, but no other named node. (See Figure~\ref{fig:cycle} for an illustration.)
Now we create a new matching $M'$ from $M$ in the following way.
First remove all the edges of the matching that are not adjacent to one of the nodes above.
Then, remove $(a,a')$ and any edge matching $d$ (if it exists). Note that this last edge matching $d$ could be a $(c_j,c_j')$ or $(b,b')$.
Add $(a,d)$ to the matching (note that both nodes are unmatched before this operation). In this matching, all the neighbors of $u$ are matched. We complete this matching into a maximal matching $M'$. The edge $(a,d)$ is in $M'$ and $u$ is unmatched, which contradicts Claim~\ref{clm:u-not-matched}; thus $M'$ cannot be 1-robust, and this proves the claim. $\lhd$
\begin{figure}
\caption{Illustration of configurations in the proof of Claim~\ref{clm:cycle}.\label{fig:cycle}}
\end{figure}
\paragraph*{Second inclusion: putting pieces together}
\begin{claim}\label{clm:no-hybrid}
A graph in the class $\mathcal{U}^1_{MM}$ is either a tree or is 2-connected.
\end{claim}
Consider a graph that is neither a tree nor a 2-connected graph.
There necessarily exists a bridge $(u,v)$ such that $u$ belongs to a cycle.
We distinguish two cases.
\begin{enumerate}
\item Node $v$ is linked only to $u$, that is, $v$ is a pendant node. Then we build a maximal matching $M$ by first forcing $u$ to be matched to a node that is not $v$, and then completing it greedily. Now, if we remove the edge that matches $u$, we do not disconnect the graph, since $u$ was part of a cycle, but neither $u$ nor $v$ is matched, thus the matching is not maximal ($(u,v)$ could be added).
Thus the matching $M$ was not 1-robust.
\item Node $v$ is linked to another node $w$. Let us consider the set $(v_i)_i$ of nodes such that $v_i\ne v$ and $(u,v_i)$ is a bridge. By the previous point, we know that there exists some $w_i\ne u$ in $N(v_i)$. Moreover, the $(w_i)$ must be pairwise distinct and distinct from all the other named nodes, otherwise $(u,v_i)$ would not be a bridge.
The node $w$ and the nodes $(w_i)_i$ cannot be part of the 2-connected component of $u$, otherwise $(u,v)$ and the $(u,v_i)$ would not be bridges.
We build a maximal matching~$M$ by first forcing $(u,v)$ and $(v_i,w_i)$ for all $i$, and then completing it greedily.
As observed earlier, in the 2-connected component of $u$ every node must belong to a cycle, thus
by Claim~\ref{clm:cycle}, we get that every node of this component must be matched.
We now build a second matching $M'$. We start from $M$ and remove from the matching $(u,v)$ and every edge that is in $v$'s side of the bridge.
Then we force $(v,w)$ in the matching, and complete it greedily. The matching $M'$ is maximal and $u$ is unmatched, since all of its neighbors are matched; hence by Claim~\ref{clm:cycle} it is not 1-robust, since $u$ belongs to a 2-connected component and thus to a cycle.
\end{enumerate}
This concludes the proof of the claim.
$\lhd$
To conclude, a graph in the class is either a tree, or is 2-connected, and in the latter case, because of Claim~\ref{clm:cycle}, every node must be matched in every maximal matching. Lemma~\ref{lem:universel-MM-1} then follows from Theorem~\ref{thm:summer79}.
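The robustness notions used throughout this section can be checked mechanically on small graphs. The following brute-force sketch (hypothetical helper names, exponential-time, for illustration only) enumerates all maximal matchings and tests the membership condition for $\mathcal{U}^k_{MM}$ directly; it confirms for instance that $C_4$ and trees are in $\mathcal{U}^1_{MM}$, while a triangle with a pendant node is not.

```python
from itertools import combinations

def is_connected(nodes, edges):
    # DFS connectivity test on an undirected edge list.
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return seen == set(nodes)

def is_maximal_matching(edges, M):
    matched = {v for e in M for v in e}
    if len(matched) != 2 * len(M):
        return False                       # two edges of M share a node
    # Maximal: no edge of the graph has both endpoints unmatched.
    return all(u in matched or v in matched for u, v in edges)

def maximal_matchings(edges):
    # Exponential-time enumeration; fine for tiny examples.
    return [set(M) for r in range(len(edges) + 1)
            for M in combinations(edges, r)
            if is_maximal_matching(edges, set(M))]

def in_universal_class_mm(nodes, edges, k):
    # G is in U^k_MM iff every maximal matching stays maximal after
    # every removal of at most k edges that keeps the graph connected.
    for M in maximal_matchings(edges):
        for r in range(1, k + 1):
            for R in combinations(edges, r):
                rest = [e for e in edges if e not in R]
                if not is_connected(nodes, rest):
                    continue               # disconnecting removals do not count
                if not is_maximal_matching(rest, {e for e in M if e not in R}):
                    return False
    return True
```

Being exponential in the number of edges, this is only a sanity-check tool for the small examples appearing in the proofs.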
\subsection{More than one edge removal}
\begin{lemma}\label{lem:max-matching-k-more-than-2}
For any $k\geq 2$, $\mathcal{U}^{k}_{MM}$ is composed of the cycle on four nodes and of the set of trees.
\end{lemma}
\begin{proof}
We first prove the reverse inclusion.
As before, trees are in $\mathcal{U}^k_{MM}$ for any $k$ because any edge removal disconnects the graph. Then for $C_4$, note that it belongs to $\mathcal{U}^1_{MM}$, and that the removal of more than one edge disconnects the graph.
For the other direction, it suffices to consider $\mathcal{U}^2_{MM}$, since $\mathcal{U}^{k}_{MM} \subseteq \mathcal{U}^{2}_{MM}$ for all $k\geq 2$, and by definition $\mathcal{U}^2_{MM}$ is included in $\mathcal{U}^1_{MM}$. Thus we can simply study the case of the balanced complete bipartite graphs and of the cliques on an even number of nodes.
Consider first a complete bipartite graph $B_{k,k}$ with $k>2$ (that is any $B_{k,k}$ larger than $C_4$), and a maximal matching $M$.
Take two arbitrary edges $(a_1,b_1)$ and $(a_2,b_2)$ from the matching and remove them from the graph.
The graph is still connected.
Now the nodes $a_1$ and $b_2$ are unmatched and there is an edge between them, thus the resulting matching is not maximal and $M$ is not 2-robust.
Thus the only $B_{k,k}$ left in the class $\mathcal{U}^2_{MM}$ is $C_4$.
For the cliques on an even number of nodes, consider one that has strictly more than two vertices. A maximal matching $M$ contains at least two edges $(u_1,v_1)$ and $(u_2,v_2)$. When we remove these edges from the graph, we still have a connected graph, $u_1$ and $u_2$ are unmatched, but $(u_1,u_2)$ still exists, thus the resulting matching is not maximal and $M$ was not 2-robust.
\end{proof}
\section{Maximal independent set}
\label{sec:MIS}
Maximal independent set illustrates yet another behavior for the classes $(\mathcal{U}_{MIS}^k)_k$: they form an infinite strict hierarchy.
\subsection{An infinite hierarchy}
\begin{theorem}\label{thm:MIS}
For every $k\geq 1$, $\mathcal{U}^{k+1}_{MIS}$ is strictly included in $\mathcal{U}^{k}_{MIS}$.
\end{theorem}
\begin{proof}
Let $k\geq 1$.
We will define a graph $G_k$, and prove that it belongs to $\mathcal{U}^{k}_{MIS}$ but not to $\mathcal{U}^{k+1}_{MIS}$.
To build $G_k$, consider a complete bipartite graph with $k+2$ nodes on each of the sides $A$ and $B$, and add a pendant neighbor $v$ to a node $u$ on the side $A$. See Figure~\ref{fig:MIS}.
This graph has only three MIS: $A$, $v \cup B$, and $v\cup (A\setminus u)$.
Indeed: (1) if the MIS contains $u$, then it cannot contain vertices outside of $A$, and to be maximal it contains all of $A$, (2) if it contains a vertex of $B$, it cannot contain a vertex of $A$, and by maximality it contains all of $B$ and $v$, and (3) if it contains $v$, and no vertex of $B$, then by maximality it is $v \cup (A \setminus u) $.
\begin{figure}
\caption{The graph $G_k$: a complete bipartite graph with $k+2$ nodes on each of the sides $A$ and $B$, with a pendant neighbor $v$ attached to a node $u$ on the side $A$.}
\label{fig:MIS}
\end{figure}
We claim that these three MIS are $k$-robust, therefore $G_k$ is in $\mathcal{U}^{k}_{MIS}$.
Suppose an MIS is not $k$-robust. Then there exists a vertex $w$ that is not part of the MIS, such that after at most $k$ edge removals, it has no neighbor in the MIS anymore.
Let us make a quick case analysis depending on which vertex $w$ is.
It cannot be $v$, since removing the edge $(u,v)$ would disconnect the graph.
It cannot be a vertex of $A$ nor of $B$, because in all the MIS mentioned, all non-selected nodes (except $v$) have at least $k+1$ selected neighbors.
Now we claim that $v \cup (A \setminus u) $ is not $(k+1)$-robust, thus $G_k$ does not belong to $\mathcal{U}^{k+1}_{MIS}$.
We choose a vertex $b$ on the $B$ side, and remove all the edges $(a,b)$ for $a\in A\setminus u$.
This is a set of $k+1$ edges whose removal does not disconnect the graph, but leaves $b$ without selected neighbors. Thus $v \cup (A \setminus u)$ is not $(k+1)$-robust.
\end{proof}
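The construction of $G_k$ and its three MIS can be verified by exhaustive search on the smallest instance $G_1$; the sketch below (hypothetical helper names, brute force, for illustration only) enumerates the MIS and checks the robustness claims of the proof.

```python
from itertools import combinations

def _adj(nodes, edges):
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    return adj

def is_connected(nodes, edges):
    adj, start = _adj(nodes, edges), next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return seen == set(nodes)

def all_mis(nodes, edges):
    # Brute-force enumeration of all maximal independent sets.
    adj, out = _adj(nodes, edges), []
    for r in range(len(nodes) + 1):
        for S in map(set, combinations(nodes, r)):
            if (all(adj[v].isdisjoint(S) for v in S)              # independent
                    and all(v in S or adj[v] & S for v in nodes)):  # maximal
                out.append(S)
    return out

def mis_is_k_robust(nodes, edges, S, k):
    # S must stay maximal after any removal of at most k edges
    # that keeps the graph connected.
    for r in range(1, k + 1):
        for R in combinations(edges, r):
            rest = [e for e in edges if e not in R]
            if not is_connected(nodes, rest):
                continue
            adj = _adj(nodes, rest)
            if any(v not in S and not (adj[v] & S) for v in nodes):
                return False
    return True
```

On $G_1$ (a $3\times 3$ complete bipartite graph with a pendant $v$ on $a_0$), this confirms that there are exactly three MIS, all 1-robust, and that $v \cup (A\setminus u)$ is not 2-robust.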
\subsection{A structure theorem for $\mathcal{U}_{MIS}^k$}
The construction used in the proof of Theorem~\ref{thm:MIS} is very specific and does not really inform about the nature of the graphs in $\mathcal{U}_{MIS}^k$.
It can be generalized, with antennas on both sides and arbitrarily large (unbalanced) bipartite graphs with an arbitrary number of antennas per node, but it remains specific.
Moreover, these constructions heavily rely on pendant nodes, which in some sense abuse the fact that we do not worry about the correctness of the solution if the graph gets disconnected.
In order to better understand these classes, and to give a more flexible way to build such graphs, we prove a theorem about how the class behaves with respect to the join operation (Definition~\ref{def:join}).
We denote by $\mathcal{G}_p$ the class of graphs where every maximal independent set has size \emph{at least} $p$.
We say that a graph class is \emph{stable by an operation} if, by applying this operation to any (set of) graph(s) from the class, the resulting graph is also in the class.
\begin{theorem}
For all $k$, the class $\mathcal{U}_{MIS}^k \cap \mathcal{G}_{k+1}$ is stable by join operation. Also, if either $G$ or $H$ is not in $\mathcal{U}^{k+1}_{MIS}$, then $join(G,H)$ is not in $\mathcal{U}^{k+1}_{MIS}$ either.
\end{theorem}
\begin{proof}
Let us start with the first statement of the theorem.
Consider two graphs $G$ and $H$ in $\mathcal{U}_{MIS}^k \cap \mathcal{G}_{k+1}$.
We prove that $J=join(G,H)$ is also in $\mathcal{U}_{MIS}^k \cap \mathcal{G}_{k+1}$.
\begin{claim}
Any MIS of $J$ is either completely contained in the vertex set of $G$, and is an MIS of $G$, or contained in the vertex set of $H$, and is an MIS of $H$.
\end{claim}
Consider an independent set in $J$.
If it has a node $u$ in $G$, then it has no node in $H$, as by construction, all nodes of $H$ are linked to $u$.
The analogue holds if the independent set has a node in $H$.
Thus any independent set is either completely contained in $G$ or completely contained in $H$.
Now, a set is maximal independent in $G$ (resp. $H$) alone if and only if it is maximal independent in $G$ (resp. $H$) inside $J$. Indeed the only edges that we have added are between nodes of $G$ and nodes of $H$. This proves the claim.
$\lhd$
Therefore, the resulting graph is in $\mathcal{G}_{k+1}$. Now for the $k$-robustness, consider without loss of generality an MIS of $J$ that is in part~$G$, and suppose it is not $k$-robust.
In this case there must exist a non-selected vertex $v$ that has no more selected neighbors after the removal of $k$ edges (while the graph stays connected).
This node cannot be in the part $G$, otherwise the same independent set in the graph $G$ would not be $k$-robust.
And it cannot be in the part $H$, since every node of $H$ is linked to all the vertices of the MIS, and this set has size at least $k+1$ since $G\in \mathcal{G}_{k+1}$.
Now, let us move on to the second statement of the theorem. Assume that $G$ has an MIS $S$ and $k+1$ edges whose removal makes $S$ no longer maximal (i.e. there exists some $u$ that can be added to the set). Then $S$ is also an MIS of $join(G,H)$, and the removal of the same edges still allows $u$ to be added to the set, as the only new neighbors of $u$ are in $H$, which contains no node of the chosen MIS.
\end{proof}
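The structural claim about the MIS of a join can likewise be checked by brute force on small graphs; the following sketch (hypothetical helper names) builds $join(G,H)$ and verifies that every MIS lives entirely on one side.

```python
from itertools import combinations

def all_mis(nodes, edges):
    # Brute-force enumeration of all maximal independent sets.
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    out = []
    for r in range(len(nodes) + 1):
        for S in map(set, combinations(nodes, r)):
            if (all(adj[v].isdisjoint(S) for v in S)
                    and all(v in S or adj[v] & S for v in nodes)):
                out.append(S)
    return out

def join(G, H):
    # Disjoint union of G and H, plus every edge between the two sides.
    (nG, eG), (nH, eH) = G, H
    nodes = [('G', v) for v in nG] + [('H', v) for v in nH]
    edges = ([(('G', a), ('G', b)) for a, b in eG]
             + [(('H', a), ('H', b)) for a, b in eH]
             + [(('G', a), ('H', b)) for a in nG for b in nH])
    return nodes, edges
```

For example, the MIS of $join(C_4, P_3)$ are exactly the (tagged) MIS of $C_4$ together with the (tagged) MIS of $P_3$.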
\section{The existence of a robust MIS is NP-hard}
\label{sec:NPH}
Remember that we have defined two types of graph classes related to robustness. For a given problem, and a parameter $k$, the universal class is the class where every solution is $k$-robust. This is the version we have explored so far. For this version, recognizing the graphs of the class is easy since these have simple explicit characterizations.
The second type of class is the existential type, where we only require that there exists a solution that is $k$-robust. Here the landscape is much more complex.
Indeed, in \cite{CasteigtsDPR20}, in the simpler case of robustness without parameter, there is no explicit characterization of the existential class, only a rather involved algorithm.
In this section we show that, when we add the parameter $k$, the situation becomes even more challenging: the algorithm of \cite{CasteigtsDPR20} runs in polynomial time, whereas here we show that the recognition of $\mathcal{E}_{MIS}^1$ is NP-hard.
\begin{theorem}\label{thm:MIS-NP-hard}
For every odd integer $k$, it is NP-hard to decide whether a graph belongs to $\mathcal{E}_{MIS}^k$.
\end{theorem}
The rest of this section is devoted to the proof of this theorem. It is based on the NP-completeness of the following problem.\\
\noindent\textsc{Perfect stable}\\
Input: A graph $G=(V,E)$.\\
Question: Does there exist a subset of vertices $S \subset V$ that is independent 2-dominating?\\
Remember that a set is independent 2-dominating if no two neighbors can be selected, and every non-selected vertex should have at least two selected neighbors. Just to get some intuition about why we are interested in this problem, note that with an independent 2-dominating set, after removing an edge between a selected and a non-selected vertex, the non-selected vertex is still dominated.
It was proved in~\cite{CroitoruS83} that \textsc{Perfect stable} is NP-hard in general. We will need the following strengthening of this hardness result.
\begin{lemma}\label{lem:NP-perfect-stable}
Deciding whether a 2-connected graph has an independent 2-dominating set is NP-complete.
\end{lemma}
Note that this lemma does not follow directly from~\cite{CroitoruS83} because the reduction there does use some non-2-connected graphs.
\begin{proof}
Let $G$ be an arbitrary connected graph with at least one edge.
Consider $G'$ to be the same as $G$ but with a universal vertex, that is, $G$ with an additional vertex $u$ that is adjacent to all the vertices of $G$.
This graph is 2-edge-connected.
Indeed, since $G$ is connected and has at least two vertices, removing any edge $(u,v)$ with $v\in V(G)$ cannot disconnect the graph, and removing an edge from $G$ does not disconnect the graph because all nodes are linked through~$u$.
We claim that $G'$ has an independent 2-dominating set if and only if $G$ has one.
First, suppose that $G$ has such a set $S$. Note that the set $S$ has at least two selected vertices. Indeed, $G$ has at least one edge, which implies that at least one vertex is not selected (by independence), and such a vertex should be dominated by at least two selected vertices.
Now we claim that $S$ is also a solution for $G'$. Indeed, the addition of $u$ to the graph does not impact the independence of $S$, nor the 2-domination of the nodes of $G$, and $u$ is covered at least twice, since there are at least two selected vertices in $G$.
Second, if $G'$ has an independent 2-dominating set $S'$, it cannot contain $u$.
Indeed, because of the independence condition, if $u$ is selected, then no other node can be selected, and then the 2-domination condition is not satisfied.
Then $S'$ is contained in $G$ and is
an independent 2-dominating set of $G$.
\end{proof}
Now, let us formalise the connection between robustness and independent 2-domination.
\begin{lemma}\label{lem:perfect-stable-1-robust}
In a 2-connected graph, the 1-robust maximal independent sets are exactly the independent 2-dominating sets.
\end{lemma}
\begin{proof}
As a consequence of Property~\ref{prop:connectivity}, in a 2-connected graph, a 1-robust MIS is an MIS that is robust against the removal of any edge (that is, we can forget about the preserved connectivity in the robustness definition).
This means that every node not in the MIS is covered twice, otherwise one could break the maximality by removing the edge between the node covered only once and the node that covers it. In other words, the independent set must be 2-dominating.
For the other direction it suffices to note that any independent dominating set is a maximal independent set.
\end{proof}
At this point, combining Lemma~\ref{lem:NP-perfect-stable} and Lemma~\ref{lem:perfect-stable-1-robust}, we get that deciding whether there exists a 1-robust MIS in a graph is NP-hard, even if we assume 2-connectivity. The following lemma is the final step to prove Theorem~\ref{thm:MIS-NP-hard}.
\begin{lemma}
For any 2-connected graph $G$
and any integer $k>1$, we can build in polynomial time a graph $G'$ such that $G$ has a 1-robust MIS if and only if $G'$ has a $(2k-1)$-robust MIS.
\end{lemma}
\begin{proof}
We build $G'$ in the following way. Take $k$ copies of $G$, denoted $G_1,\dots, G_k$, with the notation that $u_x$ is the copy of vertex $u$ in the $x$-th copy.
For every edge $(u,v)$ of $G$, we add the edge $(u_x,v_y)$ for every pair $x,y \in \{1,\dots,k\}$.
Let us first establish the following claim. An MIS in the graph $G'$ necessarily has the following form: it is the union of the exact same set repeated on each copy.
Indeed, let $u_i$ be in the MIS. For any $j\ne i$, all the neighbors of $u_j$ in the copy $G_j$ are also neighbors of $u_i$, which implies that they are not in the MIS.
Hence, no neighbor of $u$ in any copy can be in the MIS. As those nodes are the only neighbors of $u_j$, this implies that $u_j$ is also in the MIS.
Now suppose that $G$ has a 1-robust MIS. We can select the clones of this MIS in each copy, and build an MIS for $G'$ (the independence and maximality are easy to check).
In this MIS of $G'$, every non-selected vertex has at least $2k$ selected neighbors, therefore this MIS is $(2k-1)$-robust.
Finally, suppose that $G'$ has a $(2k-1)$-robust MIS. Thanks to the claim above, we know that this MIS is the same set of vertices repeated on each copy. We claim that, when restricted to a given copy, this MIS is 1-robust. Indeed, if it were not, then there would be one non-selected vertex with at most one selected neighbor, and this would mean that in $G'$ this vertex would have only $k$ selected neighbors, which contradicts the $(2k-1)$-robustness (given the connectivity).
\end{proof}
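The $k$-copy construction is straightforward to implement; the sketch below (hypothetical function name) builds $G'$ and can be used to verify on small examples the structural claim that every MIS of $G'$ is the same base set repeated in each copy.

```python
def blow_up(nodes, edges, k):
    # k copies of every vertex; an edge (u, v) of G becomes the edges
    # (u_x, v_y) for every pair of copies x, y.
    new_nodes = [(v, x) for v in nodes for x in range(k)]
    new_edges = [((u, x), (v, y)) for u, v in edges
                 for x in range(k) for y in range(k)]
    return new_nodes, new_edges
```

For instance, with $G = P_3$ and $k=2$, the only MIS of $G'$ are the two copies of an endpoint pair and the two copies of the middle vertex.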
\section{Conclusions}
\label{sec:concl}
In this paper we have developed the theory of robustness in several ways: adding granularity and studying new natural problems to explore its diversity.
The next step is to fill in the gaps in our set of results: characterizing exactly the classes $\mathcal{U}^k_{MIS}$, and understanding the complexity of answering the existential question for maximal matching and minimum dominating set.
We believe that a polynomial-time algorithm can be designed to answer the existential question in the case of maximal matching with $k=1$, with an approach similar to the one of~\cite{CasteigtsDPR20} for MDS (that is, via a careful dynamic programming work on a tree-like decomposition of the graphs).
A more long-term goal is to reuse the insights gathered by studying robustness to help the design of dynamic algorithms.
\paragraph*{Acknowledgements.}
We thank Nicolas El Maalouly for fruitful discussions about perfect matchings and Dennis Olivetti for pointing out reference \cite{Summer79}.
\DeclareUrlCommand{\Doi}{\urlstyle{same}}
\renewcommand{\doi}[1]{\href{https://doi.org/#1}{\footnotesize\sf doi:\Doi{#1}}}
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Rejoinder}
\runtitle{Rejoinder}
\pdftitle{Rejoinder of Discussion of Statistical Inference: The Big Picture by R. E. Kass}
\begin{aug}
\author{\fnms{Robert E.} \snm{Kass}\corref{}\ead[label=e1]{[email protected]}}
\runauthor{R. E. Kass}
\affiliation{Carnegie Mellon University}
\address{Robert E. Kass is Professor,
Department of Statistics, Center
for the Neural Basis of
Cognition, and Machine Learning Department, Carnegie Mellon
University, Pittsburgh, Pennsylvania 15213, USA \printead{e1}.}
\end{aug}
\end{frontmatter}
In writing my essay I
presumed I was voicing, with a few novel nuances,
a nearly universal attitude among contemporary
statistical practitioners---at least among those who had
wrestled with the incompatibility of Bayesian and frequentist logic.
Then David Madigan collected commentaries from several
thoughtful and accomplished statisticians.
Not only do I know
Andrew Gelman, Steve Goodman, Hal Stern and Rob McCulloch,
and respect them
deeply, but I would have been inclined to imagine I had been speaking
for them successfully. Their remarks shook me from my complacency.
While they generally agreed with much of what I had to say, there
were several points that would clearly benefit from additional
clarification and discussion, including the role of subjectivity in
Bayesian inference, the approximate
alignment of our theoretical and real worlds, and the utility of
$p$-values. Here I will ignore these specific
disagreements and comment further
only on the highest-level issues.
We care about our philosophy of statistics, first and foremost, because
statistical inference sheds light on
an important part of human existence, inductive reasoning,
and we want to understand it.
Philosophical perspectives are also
supposed to guide behavior, in research and in teaching.
My polemics focused on teaching, highlighting my discomfort with
the use of Figure~3 as the ``big picture'' of statistical inference.
My sense had been that as a principal
description of statistical thinking, Figure~3 was widely considered
bothersome, but no one had complained publicly. McCulloch agreed zealously.
Gelman and Stern, however, dissented; both find much
continuing use for the notion that statistics is largely about
reasoning from samples to populations. As a matter of classroom
effectiveness, I am sure that many instructors can do a great job
of conveying essential ideas of statistics using Figure~3. My main
point, though, was that introductory courses benefit
from emphasizing the abstraction of statistical models---their
hypothetical, contingent nature---along with the great utility of this
kind of abstraction. As we remarked in Brown and Kass (\citeyear{BroKas09}), when Box (\citeyear{Box})
said, ``All models are wrong, but some are useful,'' he was expressing
a quintessentially statistical attitude. Figure~1 seeks
to make Box's sentiment central to statistical pedagogy, and I tried
to indicate the way the main idea may be illustrated repeatedly
throughout an elementary course.
Recognizing Box's apparent influence here,
Goodman then asked whether I was simply restating Box's philosophy,
and he further prodded me to show how my own statement of
statistical pragmatism could be consequential.
In his 1976 Fisher Lecture, cited by Goodman,
Box railed against what he called ``mathematicity,'' meaning
theory developed in isolation from practice, and he stressed
the iterative nature of model building. The
fundamental role of
model criticism based on Fisherian logic was emphasized not only
by Box but also, in several roughly contemporaneous
discussions, by Dempster and by Rubin, and these presumably influenced
Gelman and Stern, who, together with their colleague Xiao-Li Meng,
developed and studied Bayesian model checking procedures.
Importantly, model criticism plays a prominent role
in Gelman et al. (\citeyear{Geletal04}).
The aim of my discussion, however, was somewhat different than what I
take Box to have intended.
I~understand Box to have said
that estimation should be Bayesian but criticism frequentist,
or inspired by frequentist logic. Statistical pragmatism
asserts, more simply and more generally,
that both forms of logic have merit, and either can be used
for any aspect of scientific inference.
In addition, I suggested the commonality of subjunctive statements
to help us acknowledge
that the big issues, in practice, are not Bayes
versus frequentist but rather the effects of various modeling
assumptions, and the likely behavior of procedures.
Stern noted that the pragmatism I described ``seems to be a fairly
evolved state for a statistician; it seems to require a clear
understanding of the various competing foundational arguments that
have preceded it historically.'' I agree. Along with Goodman, Stern
wondered whether such an eclectic philosophy could influence statistical
behavior, especially when tackling unsolved problems. I would claim
that it does. I admit, however, that
I have not done the substantial work it would take
to provide a satisfactory argument, with compelling examples. Lacking
this, I
will try to make do with a brief illustration.
Many experiments in neuroscience apply a fixed stimulus repeatedly to
some neural network and observe the consequences.
A typical example, discussed by Vu, Yu and Kass (\citeyear{VuYuKas09}), involved the
audio replay of a short snippet of a natural birdsong while a single
auditory neuron was recorded from a zebra finch. In such contexts,
mutual information is often used to
quantify the strength of the relationship between stimulus and response.
Mutual information requires the joint time series of
stimulus and response to be stationary and ergodic, but bird songs
contain many bursts of rapidly varying intensities with long pauses in
between. Thus, a snippet of natural song appears highly
nonstationary. In other experiments, the stimulus is
deterministic. Vu et al. asked whether, in such contexts,
estimates of mutual information become meaningless.
If we demand that there be a well-defined chance mechanism behind
every stochastic assumption, as the literal interpretation of
Figure~3 suggests, then clearly mutual information becomes void
for deterministic stimuli; but so too would any kind
of statistical inference involving the joint distribution.
The broader notion emphasized by Figure~1 is
that the mathematical formalism in the stochastic model is an
abstraction whose primary purpose is to represent, in all relevant respects,
the variability displayed by the data. Under this interpretation,
stochastic models can be of use even with deterministic stimuli.
Thus, dismissal of mutual information on the grounds of inadequate
chance mechanism is too crude. Instead, the constraint on
time series variability imposed by stationarity must be
considered carefully.
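For concreteness, the usual plug-in estimate of mutual information between two discretized series can be sketched in a few lines (this is an illustration of the quantity under discussion, not the estimator used by Vu et al.):

```python
import numpy as np

def plug_in_mutual_information(x, y, bins=4):
    # Discretize the paired samples and compute the plug-in
    # mutual information estimate, in bits.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()                     # joint probabilities
    px, py = p.sum(axis=1), p.sum(axis=0)       # marginals
    nz = p > 0
    return float((p[nz] *
                  np.log2(p[nz] / (px[:, None] * py[None, :])[nz])).sum())
```

For independent series the estimate is near zero (up to the well-known positive bias), while a deterministic relationship drives it toward the entropy of the discretized signal.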
Vu et al. provided more pointed criticism,
some new mathematical
analysis, and a way to salvage the usual quantitative measures in such
settings. Was the philosophy behind Figure~1 necessary to obtain the
results of Vu et al.? No. But as I hope to have indicated, it
was helpful in supporting a path we could follow, and that is all one
should ask of foundations.\looseness=1
\end{document} |
\begin{document}
\title{Model Order Reduction by means of Active Subspaces and Dynamic Mode Decomposition for Parametric Hull Shape Design Hydrodynamics}
\author[1]{Marco~Tezzele\footnote{[email protected]}}
\author[1,2]{Nicola~Demo\footnote{[email protected]}}
\author[1]{Mahmoud~Gadalla\footnote{[email protected]}}
\author[1]{Andrea~Mola\footnote{[email protected]}}
\author[1]{Gianluigi~Rozza\footnote{[email protected]}}
\affil[1]{Mathematics Area, mathLab, SISSA, International School of Advanced Studies, via Bonomea 265, I-34136 Trieste, Italy}
\affil[2]{Fincantieri - Divisione Navi Mercantili e Passeggeri, Cantieri Navali Italiani SpA, Trieste, Italy}
\maketitle
\begin{abstract}
We present the results of the application of a parameter space
reduction methodology based on active subspaces (AS) to the hull
hydrodynamic design problem. Several parametric deformations of an
initial hull shape are considered to assess the influence of the shape
parameters on the hull wave resistance. Such a problem is
relevant at the preliminary stages of ship design, when
several flow simulations are carried out by the engineers to
establish a certain sensitivity with respect to the parameters, which
might result in a high number of time-consuming hydrodynamic
simulations. The main idea of this work is to employ the AS to
identify possible lower dimensional structures in the parameter
space. The complete pipeline involves the use of free form deformation
to parametrize and deform the hull shape, the high fidelity solver
based on unsteady potential flow theory with fully nonlinear free
surface treatment directly interfaced with CAD, the use of dynamic
mode decomposition to reconstruct the final steady state given only
few snapshots of the simulation, and the reduction of the parameter
space by AS and a shared subspace. A response surface method is then used
to minimize the total drag.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Nowadays, simulation-based design has naturally evolved into
simulation-based design optimization thanks to new computational
infrastructures and new mathematical methods. In this work we present
an innovative pipeline that combines geometrical parametrization, different model
reduction techniques, and constrained optimization.
The objective is to minimize the total resistance of a hull advancing in calm water
subject to a constraint on the volume of the hull. We employ free form
deformation (FFD)~\cite{sederbergparry1986,rozza2013free} to parametrize and deform the bottom part of the
stern of the DTMB~5415. For a simulation-based design optimization of
that hull see for example~\cite{serani2016ship}. We select the displacement of some FFD control
points as our parameters, and we sample this parameter space in order
to reduce its dimension by finding an active subspace~\cite{constantine2015active}. In particular we seek a shared
subspace~\cite{ji2018shared} between the target function to minimize
and the constraint function. This subspace allows us to easily perform
the minimization without violating the constraint. As fluid dynamic
model we use a fully nonlinear potential flow one, implemented in
the software WaveBEM
(see~\cite{molaEtAl2013,MolaHeltaiDeSimone2017}). It is interfaced
with CAD data structures, and automatically generates the
computational grids and carries out the simulation with no need for
human intervention. We further accelerate the unsteady flow
simulations through dynamic mode decomposition
presented in~\cite{schmid2010dynamic,kutz2016dynamic} and implemented using
PyDMD~\cite{demo18pydmd}. It allows us to reconstruct and predict
all the fields of interest given only few snapshots of the
simulation.
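To fix ideas, exact DMD can be sketched in a few lines of NumPy (a bare-bones illustration, not the PyDMD implementation): given snapshot columns $x_0,\dots,x_m$, one computes a low-rank approximation of the linear operator mapping each snapshot to the next, and uses its eigendecomposition to extrapolate in time.

```python
import numpy as np

def dmd_predict(snapshots, rank, n_future):
    # snapshots: columns x_0 ... x_m; exact DMD of order `rank`.
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.conj().T @ Y @ Vh.conj().T / s        # low-rank operator
    evals, W = np.linalg.eig(Atilde)
    Phi = Y @ Vh.conj().T / s @ W                    # DMD modes
    b = np.linalg.lstsq(Phi, snapshots[:, 0], rcond=None)[0]
    # Advance in time: x_t ~ Phi diag(evals^t) b, including n_future steps.
    t = np.arange(snapshots.shape[1] + n_future)
    dynamics = (evals[:, None] ** t) * b[:, None]
    return (Phi @ dynamics).real
```

On snapshots generated by a linear system, this both reconstructs the data and predicts future states; for the nonlinear flow fields of this work, the prediction is of course only an approximation of the asymptotic regime.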
The particular choice of target function and
constraint does not represent a limitation since the methodology
we present does not rely on those particular
functions. Also the specific part of the domain to be deformed has been
chosen to present the pipeline and does not represent a limitation in
the application of the method.
\section{A Benchmark Problem: Estimation of the Total Resistance}
The hull we consider is
the DTMB~5415, since it is a benchmark for naval hydrodynamics
simulation tools due to the vast experimental data available in the
literature~\cite{olivieri2001towing}. A side view of the
complete hull (used as reference domain $\Omega$) is depicted in Figure~\ref{fig:ffd_hull}.
Given a set of geometrical parameters $\ensuremath{\boldsymbol{\mu}} \in \mathbb{D} \subset \mathbb{R}^m$ with
$m \in \mathbb{N}$, we can define a shape morphing function
$\mathcal{M}(\boldsymbol{x}; \boldsymbol{\mu}): \mathbb{R}^3 \to
\mathbb{R}^3$ that maps the reference domain $\Omega$ to the deformed
one $\Omega(\ensuremath{\boldsymbol{\mu}})$, as $\Omega(\ensuremath{\boldsymbol{\mu}}) =
\mathcal{M}(\Omega; \ensuremath{\boldsymbol{\mu}})$.
A detailed description of the specific $\mathcal{M}$ and $\ensuremath{\boldsymbol{\mu}}$ used
is in Section~\ref{sec:ffd}.
In the estimation of the total resistance, the simulated flow
field past a surging ship depends on the specific parametric hull shape considered. Thus, the output
of each simulation depends on the parameters defining the deformed shape. To investigate the effect of the shape
parameters on the total drag, we identify a suitable
set of sampling points in $\mathbb{D}$, which, through
the use of free form deformation, define a corresponding set of hull
shapes. Each geometry in such set is used to run an unsteady fluid dynamic simulation based on a fully nonlinear potential fluid model.
As a single serial simulation requires approximately 24h to converge to a steady state solution,
DMD is employed to reduce such cost to roughly 10h.
The relationship between each point in $\mathbb{D}$ and the estimate for the resistance
is then analyzed by means of AS in order to verify if a further reduction in the parameter space
is feasible.
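The core of the AS computation can be sketched as follows (an illustration under the standard active subspace assumptions, not the exact procedure used in this work): one averages outer products of sampled gradients of the output with respect to the parameters, and keeps the dominant eigenvectors of the resulting matrix.

```python
import numpy as np

def active_subspace(grads, n_active):
    # grads: (N, m) array of gradient samples of the output over the
    # parameter space; returns eigenvalues and the active directions.
    C = grads.T @ grads / len(grads)     # uncentered covariance of gradients
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1]      # sort eigenvalues in descending order
    return evals[order], evecs[:, order[:n_active]]
```

A quick sanity check: if the output is a ridge function $f(\boldsymbol{x}) = g(\boldsymbol{w}^T\boldsymbol{x})$, the matrix $C$ has rank one and its dominant eigenvector recovers $\boldsymbol{w}$ up to sign.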
\section{Shape Parametrization and Morphing through Free Form Deformation}
\label{sec:ffd}
The free form deformation (FFD) is a widely used technique to deform in a
smooth way a geometry of interest. This section presents a summary of
the method. For a deeper insight on the formulation and more
recent works the reader can refer
to~\cite{sederbergparry1986,lombardi2012numerical,rozza2013free,sieger2015shape,
forti2014efficient,salmoiraghi2016isogeometric,tezzele2017dimension}.
\begin{figure}
\caption{Reference domain $\Omega$, the original DTMB 5415 hull, and the FFD points.}
\label{fig:ffd_hull}
\end{figure}
Basically the FFD needs a lattice of points, called FFD control points, surrounding the object to
morph. Then some of these control points are moved and all the points
of the geometry are deformed by a trivariate tensor-product of
B\'ezier or B-spline functions. The displacements of the control
points are the design parameters $\ensuremath{\boldsymbol{\mu}}$ mentioned above. The transformation is composed of three
maps. First we map the physical domain $\Omega$ onto
the reference one $\widehat{\Omega}$ using the map~$\psi$. Then we move some FFD control points $\boldsymbol{P}$ through the map
$\widehat{T}$, which deforms all the points inside the lattice. Finally
we map the deformed reference
domain $\widehat{\Omega}(\ensuremath{\boldsymbol{\mu}})$ back to the physical space, obtaining $\Omega(\ensuremath{\boldsymbol{\mu}})$, with $\psi^{-1}$. The composition of these three maps is
$T(\cdot, \ensuremath{\boldsymbol{\mu}}) = (\psi^{-1} \circ \widehat{T} \circ \psi) (\cdot, \ensuremath{\boldsymbol{\mu}})$.
In Figure~\ref{fig:ffd_hull} we see the lattice of
points around the bottom part of the stern of the
DTMB~5415.
In particular
we are going to move 7 of them in the vertical direction and 3 along the span
of the boat, so $\ensuremath{\boldsymbol{\mu}} \in \mathbb{D} \subset \mathbb{R}^{10}$,
where $\mathbb{D} = [-0.6, 0.5]^{10}$. The original hull corresponds
to $\ensuremath{\boldsymbol{\mu}} = 0$.
We implemented all the algorithms in a Python
package called PyGeM~\cite{pygem}.
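The trivariate tensor-product construction above can be sketched in a few lines of Python. The following is a minimal, self-contained Bézier FFD on the unit cube and not the PyGeM implementation; the function name and lattice layout are choices of this example.

```python
import numpy as np
from math import comb

def ffd(points, control_disp):
    """Deform points in the reference cube [0,1]^3 with a trivariate
    Bezier free form deformation (illustrative sketch, not PyGeM).

    points:        (N, 3) array of points already mapped to the
                   reference domain (the role of the map psi).
    control_disp:  (L+1, M+1, N+1, 3) displacements of the FFD
                   lattice control points (the design parameters).
    """
    l, m, n = (s - 1 for s in control_disp.shape[:3])

    def bernstein(deg, i, t):
        # Bernstein polynomial B_i^deg(t)
        return comb(deg, i) * t**i * (1.0 - t)**(deg - i)

    deformed = points.copy()
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                # tensor-product weight of control point (i, j, k)
                w = (bernstein(l, i, points[:, 0])
                     * bernstein(m, j, points[:, 1])
                     * bernstein(n, k, points[:, 2]))
                deformed += w[:, None] * control_disp[i, j, k]
    return deformed
```

Because the Bernstein weights form a partition of unity, zero control-point displacements leave the geometry unchanged, and a uniform displacement of all control points rigidly translates it.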
\section{High Fidelity Solver based on Fully Nonlinear Potential Flow Model}
\label{sec:solver}
The mathematical model adopted for the simulations is based on potential flow theory. Under the assumptions
of irrotational flow and inviscid fluid, the velocity field admits a scalar potential in
the simply connected flow domain representing the volume of
water surrounding the ship hull. In addition, the Navier-Stokes equations of
fluid mechanics simplify to the Laplace equation, which is solved
to evaluate the velocity potential, and to the Bernoulli equation, which allows
the computation of the pressure field. The Laplace equation is
complemented by a non-penetration boundary condition on the hull, and by fully nonlinear
and unsteady boundary conditions on the water free
surface, written in semi-Lagrangian form~\cite{beck1994}.
In the resulting nonlinear time dependent boundary value problem, the hull is assumed to be
a rigid body moving under the action of gravity and of the hydrodynamic forces
obtained from the pressures resulting from the solution of Bernoulli equation.
The equations governing the hull motions are the 3D
rigid body equations in which the angular displacements are expressed by
unit quaternions.
At each time instant,
the unknown potential and node displacement fields are obtained by solving a
nonlinear problem, which results from the spatial and temporal
discretization of the continuous boundary value problem. The spatial discretization of the Laplace
problem is based upon a boundary element method (BEM) described
in~\cite{molaEtAl2013,Giuliani2015}. The domain boundary is
discretized into quadrilateral cells and bilinear shape functions are used to approximate the surface geometry, the velocity
potential values, and the normal component of its surface gradient. The iso-parametric
BEM developed is based on collocating a boundary integral equation~\cite{brebbia}
in correspondence with each node of the computational grid, and on
computing a numerical approximation of the integrals appearing in such
equations. The resulting linear algebraic equations are then combined
with the ODEs derived from the finite element spatial
discretization of the unsteady fully nonlinear free surface boundary
conditions.
The final FSI problem is obtained by complementing the
described system with the equations
of the rigid hull dynamics.
The fully coupled system solution is integrated over time by an arbitrary
order and arbitrary time step implicit backward difference formula scheme.
The potential
flow model is implemented in a C++ software~\cite{molaEtAl2013}. It is equipped with a mesh module directly interfaced
with CAD data structures based on the methodologies for surface mesh generation~\cite{dassi2014}. Thus, for each IGES geometry tested, the
computational grid is generated in a fully automated fashion at the
start of each simulation.
At each time step the wave resistance is computed as $R^w =
\int_{\Gamma^b} p\,\boldsymbol{n}\,d\Gamma \cdot \boldsymbol{e}_X$, making use of the pressure
$p$ obtained by plugging the computed potential into the Bernoulli equation. The inviscid drag prediction
is complemented by an estimate of the viscous drag obtained with the
ITTC-57 formula~\cite{MorallITTC1957}.
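The ITTC-57 correlation line is the explicit formula $C_F = 0.075/(\log_{10}\mathrm{Re} - 2)^2$, so the viscous contribution can be sketched directly. The function name and the water-property defaults below are assumptions of this example, not values taken from the paper.

```python
import math

def ittc57_friction_drag(speed, length, wetted_surface,
                         rho=1025.0, nu=1.19e-6):
    """Viscous drag estimate via the ITTC-57 correlation line,
    C_F = 0.075 / (log10(Re) - 2)^2, complementing an inviscid
    wave-resistance prediction. Density and kinematic viscosity
    defaults are illustrative sea-water values."""
    Re = speed * length / nu                  # Reynolds number
    cf = 0.075 / (math.log10(Re) - 2.0) ** 2  # friction coefficient
    return 0.5 * rho * wetted_surface * cf * speed ** 2
```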
Results shown in~\cite{MolaHeltaiDesimone2016} indicate that for
Froude numbers in $[0.2, 0.4]$ the total
drag computed for the DTMB~5415 hull differs by less than 6\% from
the measurements in~\cite{olivieri2001towing}. For $\text{Fr}=0.28$, at which the present
campaign is carried out, the predicted drag is 46.389~N, which is off by 2.7\% from the corresponding
experimental value of 45.175~N. It is reasonable to infer that for
each parametric deformation of the hull the accuracy of the full order model
prediction will be similar to that of the results discussed.
\section{Dynamic Mode Decomposition for Fields Reconstruction}
\label{sec:dmd}
The dynamic mode decomposition (DMD) is a technique for the
analysis of complex data systems, initially developed in~\cite{schmid2010dynamic} for
fluid dynamics applications. DMD provides an approximation of the
Koopman operator capable of describing the system evolution as a
linear combination of a few linearly evolving structures, without requiring any
information about the underlying system. The future evolution of these
structures can be estimated in order to reconstruct the system dynamics at
later times. In this work, we shorten the temporal window over which the full order
solutions are computed and reconstruct the subsequent system evolution, applying
DMD to the output of the full-order model, to gain a significant
reduction of the computational cost.
We define the operator $\mathbf{A}$ such that $x_{k+1} = \mathbf{A} x_{k}$,
where $x_k$ and $x_{k+1}$ refer respectively to the system state at two
sequential instants.
To build this operator, we collect several vectors
$\{\mathbf{x}_i\}_{i=1}^m$ that contain the system states equispaced in time,
called \textit{snapshots}. We assume all the snapshots have the
same dimension $n$ and that the number of snapshots satisfies $m<n$. We can arrange the snapshots in two matrices
$\mathbf{S} =
\begin{bmatrix}
\mathbf{x}_1 & \mathbf{x}_2 & \dotsc & \mathbf{x}_{m-1}
\end{bmatrix}$ and
$\mathbf{\dot{S}} =
\begin{bmatrix}
\mathbf{x}_2 & \mathbf{x}_3 & \dotsc & \mathbf{x}_{m}
\end{bmatrix}$,
with $\mathbf{x}_i = \begin{bmatrix} x_i^1 & x_i^2 & \cdots & x_i^n
\end{bmatrix}^\intercal$.
The best-fit $\mathbf{A}$ matrix is given by $\mathbf{A} = \mathbf{\dot{S}}
\mathbf{S}^\dagger$, where $^\dagger$ denotes the Moore-Penrose pseudo-inverse. The biggest issue is
related to the dimension of the snapshots: usually in a complex system the
number of degrees of freedom is high, so the operator $\mathbf{A}$ is very
large. To avoid this, the DMD technique projects the snapshots onto the low-rank
subspace defined by the proper orthogonal decomposition modes.
We decompose the matrix $\mathbf{S}$ using the truncated SVD, that is
$\mathbf{S} \approx \mathbf{U}_r \bm{\Sigma}_r \mathbf{V}^*_r$, and we call
$\mathbf{U}_r$ the matrix whose columns are the first $r$ modes.
Hence, the reduced operator is computed as:
$\mathbf{\tilde{A}} = \mathbf{U}_r^* \mathbf{A} \mathbf{U}_r =
\mathbf{U}_r^* \mathbf{\dot{S}} \mathbf{V}_r \bm{\Sigma}_r^{-1}$.
We can compute the eigenvectors and eigenvalues of $\mathbf{A}$ through the
eigendecomposition of $\mathbf{\tilde{A}}$, to simulate the system
dynamics. Defining $\mathbf{W}$ and $\bm{\Lambda}$ such that
$\mathbf{\tilde{A}}\mathbf{W} = \mathbf{W}\bm{\Lambda}$, the elements in
$\bm{\Lambda}$ correspond to the nonzero eigenvalues of $\mathbf{A}$ and the
eigenvectors $\bm{\Theta}$ of matrix $\mathbf{A}$, also called DMD
\textit{modes}, can be computed as $\bm{\Theta} = \mathbf{\dot{S}}\mathbf{V}_r
\bm{\Sigma}_r^{-1} \mathbf{W}$.
We implement the algorithm described above, and its most popular
variants, in an open source Python package called PyDMD~\cite{demo18pydmd}. We use
it to reconstruct the evolution of the fluid dynamics system
presented above.
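The algorithm described above — snapshot matrices, truncated SVD, reduced operator, eigendecomposition, and modes — condenses into a short NumPy sketch. This is illustrative only; PyDMD implements this procedure and its variants in full generality, and the function name and return convention are choices of the example.

```python
import numpy as np

def dmd(snapshots, r):
    """Exact DMD of equispaced snapshots (one per column), truncated
    at rank r. Returns modes Theta, eigenvalues lam and amplitudes b,
    so that x_k is approximated by Theta @ (b * lam**k).
    Sketch only; see PyDMD for a full implementation."""
    S, Sdot = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(S, full_matrices=False)   # truncated SVD
    Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T
    # reduced operator  Atilde = Ur^* Sdot Vr Sigma_r^{-1}
    Atilde = Ur.conj().T @ Sdot @ Vr / sr
    lam, W = np.linalg.eig(Atilde)
    # DMD modes  Theta = Sdot Vr Sigma_r^{-1} W
    Theta = (Sdot @ Vr / sr) @ W
    # amplitudes fitted on the first snapshot
    b = np.linalg.lstsq(Theta, snapshots[:, 0], rcond=None)[0]
    return Theta, lam, b
```

Given the modes, the state at any (also future) instant $k$ is reconstructed as `Theta @ (b * lam**k)`, which is how the truncated simulation window is extended to the regime solution.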
\section{Parameter Space Reduction by means of Active Subspaces}
\label{sec:active}
The active subspaces (AS) property~\cite{constantine2015active} is an
emerging technique for dimension reduction in parameter
studies. AS has been exploited in several parametrized engineering
models~\cite{grey2017active, constantine2017time, demo2018efficient,
tezzele2017combined}. Considering a multivariate scalar function $f$
depending on the parameters $\ensuremath{\boldsymbol{\mu}}$, AS seeks a set of
important directions in the parameter space along which $f$ varies the most. Such directions are linear
combinations of the parameters, and span a lower dimensional
subspace of the input space. This corresponds to a rotation of the
input space that unveils a low dimensional structure of $f$. In the following
we review the AS theory (see~\cite{lukaczyk2014active, constantine2015active}).
Consider a differentiable, square-integrable scalar function $f
(\ensuremath{\boldsymbol{\mu}}): \mathbb{D} \subset \mathbb{R}^m \rightarrow \mathbb{R}$, and a uniform probability density function $\rho: \mathbb{D}
\rightarrow \mathbb{R}^+$. First, we
scale and translate the inputs to be centered at 0 with equal
ranges.
To determine the important directions that most effectively
describe the variability of the function, we compute the eigenspaces of the
uncentered covariance matrix $\mathbf{C} = \int_{\mathbb{D}} (\nabla_{\ensuremath{\boldsymbol{\mu}}} f) ( \nabla_{\ensuremath{\boldsymbol{\mu}}} f )^T
\rho \, d \ensuremath{\boldsymbol{\mu}}$.
$\mathbf{C}$ is symmetric and positive
semi-definite, so it admits the real eigenvalue decomposition $\mathbf{C} = \mathbf{W} \bm{\Lambda} \mathbf{W}^T,$
where $\mathbf{W}$ is an $m \times m$ orthogonal matrix whose columns are the eigenvectors, and
$\bm{\Lambda}$ is the diagonal matrix of non-negative eigenvalues
arranged in descending order. Low eigenvalues suggest that
the corresponding vectors are in the null space of the covariance
matrix, and we can discard those vectors to form an
approximation. The lower dimensional parameter subspace is formed
by the first $M < m$ eigenvectors that correspond to the
relatively large eigenvalues.
We can partition
$\mathbf{W}$ into $\mathbf{W}_1$ containing the first $M$ eigenvectors which span the
active subspace, and $\mathbf{W}_2$ containing the
eigenvectors spanning the inactive subspace. The dimension reduction
is performed by projecting the full parameter space onto the
active subspace obtaining the active variables
$\ensuremath{\boldsymbol{\mu}}_M = \mathbf{W}_1^T\ensuremath{\boldsymbol{\mu}} \in \mathbb{R}^M$. The inactive variables are $\ensuremath{\boldsymbol{\eta}} = \mathbf{W}_2^T \ensuremath{\boldsymbol{\mu}} \in \mathbb{R}^{m - M}$.
Hence, $\ensuremath{\boldsymbol{\mu}} \in
\mathbb{D}$ can be expressed as $
\ensuremath{\boldsymbol{\mu}} = \mathbf{W}_1\mathbf{W}_1^T\ensuremath{\boldsymbol{\mu}} +
\mathbf{W}_2\mathbf{W}_2^T\ensuremath{\boldsymbol{\mu}} = \mathbf{W}_1 \ensuremath{\boldsymbol{\mu}}_M +
\mathbf{W}_2 \ensuremath{\boldsymbol{\eta}}$. The function $f (\ensuremath{\boldsymbol{\mu}})$ can then be
approximated with $g (\ensuremath{\boldsymbol{\mu}}_M)$, and the evaluations of some chosen samples $g_i$ for $i = 1, \dots, p \leq N_s$ can be exploited to
construct a response surface $\mathcal{R}$.
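In practice the matrix $\mathbf{C}$ is estimated by a Monte Carlo average of gradient outer products at sampled parameters. The following is a minimal sketch of the construction of the active/inactive split under the paper's uniform-density assumption; it presumes gradient samples are available and its function name is illustrative.

```python
import numpy as np

def active_subspace(grads, M):
    """Estimate the active subspace from gradient samples.

    grads: (N, m) array, row i = gradient of f at the i-th
           uniformly drawn parameter sample; the Monte Carlo
           average of outer products approximates C.
    M:     dimension of the active subspace.
    Returns (W1, W2, eigenvalues in descending order)."""
    C = grads.T @ grads / grads.shape[0]   # uncentered covariance estimate
    eigval, W = np.linalg.eigh(C)          # eigh: ascending order
    idx = np.argsort(eigval)[::-1]         # reorder descending
    eigval, W = eigval[idx], W[:, idx]
    return W[:, :M], W[:, M:], eigval
```

The active variables are then `mu_M = W1.T @ mu` and the inactive ones `eta = W2.T @ mu`, exactly as in the projection formulas above.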
We use the concept of shared subspace~\cite{ji2018shared}. It links the AS of
different functions that share the same parameter space. Expressing
both the objective function and a constraint using the same
reduced variables leads to an easy constrained optimization via the
response surfaces.
The shared subspace $\mathbf{Q}$ between some
$f_i$, $i \in \mathbb{N}$, having an active
subspace of dimension $M$, is defined as follows.
Let us assume that the functions are exactly represented by their AS
approximations,
then for all $\mathbf{Q} \in \mathbb{R}^{m \times M}$ such that
$\mathbf{W}_{1, \,i}^T \mathbf{Q} = \mathbf{Id}_M$, we have $f_i (\ensuremath{\boldsymbol{\mu}}) =f_i (\mathbf{Q}\mathbf{W}_{1, \,i}^T \, \ensuremath{\boldsymbol{\mu}}).$
A system of equations needs to be solved for $\mathbf{Q}$, and it can
be proven that $\mathbf{Q}$ will be a linear combination of the active
subspaces of $f_i$.
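One plausible numerical realization of the defining condition $\mathbf{W}_{1,\,i}^T \mathbf{Q} = \mathbf{Id}_M$ is to stack the equations for all functions and take the minimum-norm least-squares solution, which automatically lies in the span of the active subspaces. This is only a sketch of one possible construction, not necessarily the procedure of~\cite{ji2018shared}.

```python
import numpy as np

def shared_subspace(W1_list):
    """Candidate shared subspace Q for active subspaces W1_i
    (each m x M with orthonormal columns): solve the stacked
    system W1_i^T Q = Id_M, i = 1..k, in the least-squares,
    minimum-norm sense. Sketch under stated assumptions."""
    M = W1_list[0].shape[1]
    A = np.vstack([W1.T for W1 in W1_list])          # (k*M, m)
    B = np.vstack([np.eye(M)] * len(W1_list))        # (k*M, M)
    Q, *_ = np.linalg.lstsq(A, B, rcond=None)        # min-norm solution
    return Q
```

The minimum-norm solution lies in the row space of the stacked matrix, i.e. in the span of the columns of the $\mathbf{W}_{1,\,i}$, consistent with the statement that $\mathbf{Q}$ is a linear combination of the active subspaces.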
\section{Numerical Results}
\label{sec:results}
In this section we present the numerical results obtained with the
application of the complete pipeline, presented above, to the DTMB 5415 model hull.
Using the FFD algorithm implemented in PyGeM~\cite{pygem} we create
200 different deformations of the original hull, sampling uniformly the
parameter space $\mathbb{D} \subset \mathbb{R}^{10}$.
Each IGES geometry produced is the
input of the full order simulation, in which the hull has been set to advance in calm
water at a constant speed corresponding to Fr $= 0.28$. The full
order computations simulate only 15s of the flow past the
hull after it has been impulsively started from rest. For each
simulation we save the snapshots of the full flow field every 0.1s
between the 7th and the 15th second. The DMD algorithm
implemented in PyDMD~\cite{demo18pydmd} uses these snapshots to
complete the fluid dynamic simulations until convergence to the regime
solution. The reconstructed flow field is then used to calculate the hull
total resistance, that is the quantity of interest we want to
minimize. For each geometry we also compute the volume of the
hull below a certain height $z$ equal for all the hulls. This
is intended as the load volume.
With all the input/output pairs for both the total resistance and the
load volume we can extract the active subspaces for each target
function and compute the shared subspace. Using the shared subspace
has the advantage of allowing the representation of both target functions
with respect to the same reduced parameters. The drawback is losing
the optimality of AS, since the rotation of the parameter space is
shifted away from the optimal one given by
AS. This is clear in Figure~\ref{fig:active_shared_vol},
where the load volume is expressed versus the 1D active variable
$\ensuremath{\boldsymbol{\mu}}_{\text{vol}} = \mathbf{W}_{1, \,\text{vol}}^T \,
\ensuremath{\boldsymbol{\mu}}$ (on the left), and versus the shared variable
$\ensuremath{\boldsymbol{\mu}}_Q = \mathbf{Q}^T\ensuremath{\boldsymbol{\mu}}$ in 1D and 2D. The
values of the target function are not perfectly aligned anymore along
the shared variable.
\begin{figure}
\caption{On the left, the load volume with respect to the active
variable $\ensuremath{\boldsymbol{\mu}}_{\text{vol}}$; in the center and on the right, with respect to the shared variable $\ensuremath{\boldsymbol{\mu}}_Q$ in 1D and 2D.}
\label{fig:active_shared_vol}
\end{figure}
To capture the most information while having the possibility
to plot the results, we select the active subspace to be of dimension
2 for both quantities of interest and we compute the
corresponding shared subspace, that is a linear combination of the
active subspaces of the two functions.
Then we select a lower and an upper bound between which we constrain the
load volume. Next, we compute the subset of the shared subspace that satisfies
this constraint, in order to impose it on the total resistance. In
Figure~\ref{fig:resistance_surf} we plot on the left the volume against the
bidimensional shared variable (seen from above) and in red the
realizations of the volume between the imposed bounds.
In the center we plot the sufficient summary plot for the total resistance
with respect to the shared variables and highlight the simulations
that satisfy the volume constraint in red, together with the response
surface. We notice that here the data are more
scattered than for the volume. This is due to the high
nonlinearity of the problem and to the inactive directions we
discarded. On the right we construct a polynomial response
surface of order 2 using only the red dots and in green we
highlight the minimum of such response surface. That represents the
minimum of the total resistance in the reduced parameter space
subject to the volume constraint. The root mean square error
of the response surface is around 0.7~N,
depending on the run. We can map this reduced point back to the full parameter space and
deform the original hull accordingly. This represents a good starting
point for a more sophisticated optimization.
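The order-2 response surface over the two shared variables amounts to an ordinary least-squares fit of six monomial coefficients. A minimal sketch follows; the function name and the constant-linear-quadratic basis ordering are choices of this example, not the exact tooling used for the paper's figures.

```python
import numpy as np

def quadratic_response_surface(mu_red, f_vals):
    """Fit an order-2 polynomial response surface
    f ~ c0 + c1 x + c2 y + c3 xy + c4 x^2 + c5 y^2
    on 2D reduced coordinates by least squares (sketch)."""
    x, y = mu_red[:, 0], mu_red[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A, f_vals, rcond=None)

    def surf(pts):
        # evaluate the fitted surface at new reduced coordinates
        px, py = pts[:, 0], pts[:, 1]
        B = np.column_stack([np.ones_like(px), px, py,
                             px * py, px**2, py**2])
        return B @ coef

    return surf, coef
```

In the pipeline one would fit `surf` on the constrained (red) samples only and then minimize it over the admissible region of the shared variables.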
\begin{figure}
\caption{On the left and in the center the sufficient summary plot of the resistance
along with a polynomial response surface. In
red the points that satisfy the volume constraint. On the right the
response surface of order two constructed with the red points. In
green the minimum.}
\label{fig:resistance_surf}
\end{figure}
\section{Conclusions and Perspectives}
\label{sec:the_end}
In this work we presented a complete pipeline composed of shape
parametrization, hydrodynamic simulations, and model reduction
combining DMD and AS. We applied it to the problem of minimizing the
total resistance of a hull advancing in calm water, subject to a load
volume constraint, varying the shape of the DTMB~5415. We expressed the
two functions in the same reduced space and we constructed a response
surface to find the minimum of the total drag that satisfies the
volume constraint. The error with respect to
the full order solver is around 0.7~N. This minimum can be used as a starting
point of a more sophisticated optimization algorithm. The proposed pipeline is
independent of the specific deformed part of the hull, of the
solver used, and of the function to minimize.
Future work will focus on several different areas: improving the physical and
mathematical aspects of the algorithms presented, better integrating the
different parts of the pipeline, and automating the simulation campaign
and its post-processing.
\end{document} |
\begin{document}
\title{Bargmann-Fock realization of the noncommutative torus}
\begin{abstract}
We give an interpretation of the Bargmann transform as a correspondence
between state spaces, analogous to the intertwiners commonly considered in the representation theory of finite groups.
We observe that the non-commutative torus is nothing other than the range of the star-exponential for the Heisenberg group
within the context of Kirillov's orbit method. We deduce from this a realization of the non-commutative torus as acting
on a Fock space of entire functions.
\end{abstract}
\section{Introduction}
For systems with finitely many degrees of freedom, there are two main quantization procedures, other approaches being variants of them.
The first consists in pseudo-differential calculus (see e.g. \cite{Ho}); the second relies on geometric quantization (see e.g. \cite{Wo}). The main difference
between the two lies in the types of polarizations on which they are based. Pseudo-differential calculus
is based on the existence of a ``real polarization", while geometric quantization often uses ``complex polarizations".
There is no systematic way to compare the two, although in some specific situations the comparison is possible.
This is what is investigated in the present work.
The aim of this short note is threefold. First, we give an interpretation of the Bargmann transform as a correspondence
between state spaces, analogous to the intertwiners commonly considered in the representation theory of finite groups.
Second, we observe that the non-commutative torus is nothing other than the range of the star-exponential for the Heisenberg group
within the context of Kirillov's orbit method. Third, we deduce from this a realization of the non-commutative torus as acting
on a Fock space of entire functions. The latter relates the classical approach to the non-commutative torus in the context
of Weyl quantization to its realization, frequent in the physics literature, in terms of canonical quantization.
\noindent{\bf Acknowledgment} The authors thank the Belgian Scientific Policy (BELSPO) for support under IAP grant
P7/18 DYGEST.
\section{Remarks on the geometric quantization of co-adjoint orbits} We let $\mathbb CO$ be a co-adjoint orbit of a connected Lie group $G$ in the dual $\g^\star$ of its Lie algebra $\g$.
We fix a base point $\xi_o$ in $\mathbb CO$ and denote by $K:=:G_{\xi_o}$ its stabilizer in $G$ (w.r.t. the co-adjoint action).
We assume $K$ to be connected.
Denote by $\k$ the Lie sub-algebra of $K$ in $\g$. Consider the $\mathbb R$-linear map
$\xi_o:\k\to\fu(1)=\mathbb R:Z\mapsto<\xi_o,Z>\;.$
Since the two-form $\delta\xi_o:\g\times\g\to\mathbb R:(X,Y)\mapsto<\xi_o,[X,Y]>$ identically vanishes on $\k\times\k$,
the above mapping is a character of $\k$.
Assume the above character exponentiates to $K$ as a unitary character (Kostant's condition):
$\chi:K\to U(1)\quad(\chi_{\star e}=i\xi_o)\;.$
One then has an action of $K$ on $U(1)$ by group automorphisms:
$K\times U(1)\to U(1):(k,z)\mapsto\chi(k)z\;.$
The associated circle bundle $Y\;:=\;G\times_KU(1)$ is then naturally a $U(1)$-principal bundle over the orbit $\mathbb CO$:
$\pi:Y\to\mathbb CO:[g,z]\mapsto\mbox{$\mathtt{Ad}$}^\flat_g(\xi_o)$
(indeed, one has the well-defined $U(1)$-right action $[g,z].z_0:=[g,zz_0]$).
\noindent The data of the character yields a connection one-form $\varpi$ in $Y$. Indeed,
the following formula defines a left-action of $G$ on $Y$:
$g_0.[g,z]\;:=\;[g_0g,z]\;.$
\noindent When $\xi_o|_\k$ is non-trivial, the latter is transitive: ${\bf C}_g(k).[g,z]=[gk,z]=[g,\chi(k)z]$ and $\pi(g_0.[g,z])=g_0.\pi(g)$. We then set
\begin{equation}\label{1FORM}
\varpi_{[g,z]}(X^\star_{[g,z]})\;:=\;-\,<\mbox{$\mathtt{Ad}$}^\flat_g\xi_o,X>
\end{equation}
with, for every $X\in\g$: $X^\star_{[g,z]}\;:=\;\ddto\exp(-tX).[g,z]$. The above formula (\ref{1FORM}) defines a 1-form. Indeed, an element
$X\in\g$ is such that $X^\star_{[g,z]}=0$ if and only if $\mbox{$\mathtt{Ad}$}_{g^{-1}}X\in\ker(\xi_0)\cap\k$. It is a connection form because for every $z_0\in U(1)$, one has
$(z_0^\star\varpi)_{[g,z]}(X^\star)=\varpi_{[g,zz_0]}(z_{0\star_{[g,z]}}X^\star)=\varpi_{[g,zz_0]}(X^\star)=-<\mbox{$\mathtt{Ad}$}^\flat_g\xi_o,X>=\varpi_{[g,z]}(X^\star)$.
At last, denoting by $\iota_y\;:=\;\ddto y.e^{it}$ the infinitesimal generator of the circle action on $Y$, one has
$\iota_{[g,z]}\;=\;-(\mbox{$\mathtt{Ad}$}_gE)^\star_{[g,z]}$
where $E\in\k$ is such that $<\xi_0,E>=1$. Indeed,
$\ddto [g,z].e^{it}=\ddto [g,e^{it}z]=\ddto [g\exp(tE),z]=\ddto [\exp(t\mbox{$\mathtt{Ad}$}_gE)g,z]\;.$
Therefore: $\varpi_{[g,z]}(\iota)=<\mbox{$\mathtt{Ad}$}_g^\flat\xi_0,\mbox{$\mathtt{Ad}$}_gE>=1$.
\noindent The curvature $\Omega^{\varpi}:={\rm d}\varpi+[\varpi,\varpi]={\rm d}\varpi$ of that connection equals the lift $\pi^\star(\omega^\mathbb CO)$ to $Y$
of the KKS-symplectic structure $\omega^\mathbb CO$ on $\mathbb CO$ because $X^\star_{[g,z]}.\varpi(Y^\star)=<\mbox{$\mathtt{Ad}$}_g^\flat\xi_o,[X,Y]>$.
\subsection{Real polarizations} We now relate Kirillov's polarizations to Souriau's Planck condition. We consider a {\bf partial polarization} affiliated to $\xi_o$ i.e. a sub-algebra $\b$ of $\g$ that is normalized by $\k$ and maximal for the property of being isotropic w.r.t. the two-form $\delta\xi_o$ on $\g$ defined as
$\delta\xi_o(X,Y)\;:=\;<\xi_o,[X,Y]>$. We assume that the analytic (i.e. connected) Lie sub-group ${\bf B}$ of $G$ whose Lie algebra is $\b$ is closed. We denote by $Q\;:=\;G/{\bf B}$ the corresponding quotient manifold. Note that one necessarily has $K\subset{\bf B}$, hence the fibration
$p:\mathbb CO\to Q:\mbox{$\mathtt{Ad}$}^\flat_g(\xi_o)\mapsto g{\bf B}$. The distribution $\fL$ in $T\mathbb CO$ tangent to the fibers is isotropic w.r.t. the KKS form. Its $\varpi$-horizontal lift $\overline{\fL}$ in $T(Y)$, being integrable, induces a {\bf Planck foliation} of $Y$ (cf. p. 337 in Souriau's book \cite{S}).
\noindent Usually, Kirllov's representation space consists in a space of sections of the associated complex line bundle $\mathbb E_\chi:=G\times_\chi\mathbb C\to Q$ where $\chi$ is viewed as a unitary character of ${\bf B}$. While Kostant-Souriau representation space rather
consists in a space of sections of the line bundle $Y\times_{U(1)}\mathbb C\to\mathbb CO$. One therefore looks for a morphism between these spaces.
To this end we first observe that the circle bundle $G\times_\chi U(1)\to Q$ is a principal $U(1)$-bundle similarly as $Y$ is over $\mathbb CO$. Second, we note the complex line bundle isomorphism over $Q$:
\begin{equation}\label{U(1)BUNDLEISO}
(G\times_\chi U(1))\times_{U(1)}\mathbb C\to G\times_\chi\mathbb C:[[g_0,z_0],z]\mapsto[g_0,z_0z]\;.
\end{equation}
Third, we have the morphism of $U(1)$-bundle:
$$
\begin{array}{ccc}
Y&\stackrel{\tilde{p}}{\longrightarrow}&G\times_\chi U(1)\\
\downarrow&&\downarrow\\
\mathbb CO&\stackrel{{p}}{\longrightarrow}&Q
\end{array}
$$
where $\tilde{p}([gk,\chi(k^{-1})z_0])\;:=\;[gb,\chi(b^{-1})z_0]$.
This induces a linear map between equivariant functions:
$\tilde{p}^\star: C^\infty(G\times_\chi U(1),\mathbb C)^{U(1)}\to C^\infty(Y,\mathbb C)^{U(1)}$
which, red through the isomorphism (\ref{U(1)BUNDLEISO}), yields a natural $G$-equivariant linear embedding
$\Gamma^\infty(Q,\mathbb E_\chi)\to\Gamma^\infty(\mathbb CO,Y\times_{U(1)}\mathbb C)$
whose image coincides with the Planck space.
\subsection{Complex polarizations} We first note that for every element $X\in\g$, the $\varpi$-horizontal lift of $X^\star_\xi$ at $y=[g,z]\in Y$
is given by
$h_y(X^\star_\xi)\;:=\;X^\star_y+<\xi,X>\iota_y\;.$
Indeed, $\pi_{\star y}(X^\star_y)=X^\star_\xi$ and $\varpi_y(X^\star_y+<\xi,X>\iota_y)=-<\xi,X>+<\xi,X>=0$.
\noindent Therefore, for every smooth section $\varphi$ of the associated complex line bundle $\mathbb F\;:=\;Y\times_{U(1)}\mathbb C\to\mathbb CO$,
denoting by $\nabla$ the covariant derivative in $\mathbb F$ associated to $\varpi$ and by $\hat{\varphi}$ the $U(1)$-equivariant function on $Y$ representing $\varphi$, one has
$\widehat{\nabla_{X^\star}\varphi}(y)=X^\star_y\hat{\varphi}\,-\,i\,<\xi,X>\hat{\varphi}(y)\;.$
Now let us assume that our orbit $\mathbb CO$ is \emph{pseudo-K\"ahler} in the sense that it is equipped with a $G$-invariant
$\omega^\mathbb CO$-compatible (i.e. $J\in\mbox{\rm Sp}(\omega^\mathbb CO)$) almost complex structure $J$. Let us denote by
$T_\xi(\mathbb CO)^{\mathbb C}\;=\;T_\xi^{0}(\mathbb CO)\,\oplus\,T^1_\xi(\mathbb CO)$
the $(-1)^{0,1} i$-eigenspace decomposition of the complexified tangent space $T_\xi(\mathbb CO)$ w.r.t. $J_\xi$.
One observes the following description:
$T_\xi^{0,1}(\mathbb CO)\;=\;\langle X+(-1)^{0,1}iJX\rangle_{X\in T_\xi(\mathbb CO)}$
where the linear span is taken over the complex numbers. We note also that $\dim_\mathbb C T_\xi^{0,1}(\mathbb CO)=\frac{1}{2}\dim_\mathbb R\mathbb CO$.
\noindent In that context, a smooth section $\varphi$ of $\mathbb F$ is called {\bf polarized} when $\nabla_Z\varphi=0$
for every $Z\in T^0(\mathbb CO)$. The set $\mbox{\rm hol}(\mathbb F)$ of polarized sections is a complex sub-space of
$\Gamma^\infty(\mathbb F)$. Moreover, it carries a natural linear action of $G$. Indeed, the group $G$ acts on
$\Gamma^\infty(\mathbb F)$ via $\widehat{U(g)\varphi}:=(g^{-1})^\star\hat{\varphi}$. The fact that both $\varpi$ and $J$
are $G$-invariant then implies that $\mbox{\rm hol}(\mathbb F)$ is a $U$-invariant sub-space of $\Gamma^\infty(\mathbb F)$. The linear representation
$U:G\to\mbox{$\mathtt{End}$}(\mbox{\rm hol}(\mathbb F))$
is called the {\bf Bargman-Fock representation}.
\section{The Heisenberg group} We consider a symplectic vector space $(V,\Omega)$ of dimension $2n$ and its associated
Heisenberg Lie algebra: $\h_n:=V\oplus\mathbb R E$ whose Lie bracket is given by $[v,v']:=\Omega(v,v')E$ for all $v,v'\in V$, the element $E$
being central. The corresponding connected simply connected Lie group ${\bf H}_n$ is modeled on $\h_n$ with group
law given by $g.g':=g+g'+\frac{1}{2}[g,g']$.
\noindent Within this setting, one observes that the exponential mapping is the identity map on $\h_n$. The symplectic structure defines an isomorphism ${}^\flat:V\to V^\star$ by
${}^\flat v(v'):=\Omega(v,v')$. The latter extends to a linear isomorphism ${}^\flat:\h_n\to\h_n^\star$ where we set
${}^\flat E(v+zE):=z$.
\noindent Now one has \begin{equation}\label{ADFLAT}\mbox{$\mathtt{Ad}$}^\flat_{v+zE}({}^\flat v_0+\mu{}^\flat E)={}^\flat(v_0-\mu v)+\mu{}^\flat E\;.{\mathfrak{e}}nd{equation}
Indeed, in the case of the Heisenberg group, the exponential mapping coincides with the identity mapping. Hence, for every
$v_1+z_1E\in\h_n$:
\begin{eqnarray*}
&& <\mbox{$\mathtt{Ad}$}^\flat_{v+zE}({}^\flat v_0+\mu{}^\flat E),v_1+z_1E>=<{}^\flat v_0+\mu{}^\flat E,\mbox{$\mathtt{Ad}$}_{-v-zE}(v_1+z_1E)>\\
&=&\ddto<{}^\flat v_0+\mu{}^\flat E,(-v-zE).(tv_1+tz_1E).(v+zE)>\;,
\end{eqnarray*}
which in view of the above expression for the group law immediately yields (\ref{ADFLAT}).
\noindent Generic orbits $\mathbb CO$ are therefore affine hyperplanes parametrized by $\mu\in\mathbb R_0$. Setting $\xi_0:=\mu{}^\flat E$, every real
polarization $\b$ corresponds to a choice of a Lagrangian sub-space $\fL$ in $V$: $\b=\fL\oplus\mathbb R E$. Note in particular, that
the polarization is an ideal in $\h_n$. Choosing a Lagrangian sub-space
$\q$ in duality with $\fL$ in $V$ determines an Abelian sub-group $Q=\exp(\q)$ in ${\bf H}_n$ which splits the exact sequence
${\bf B}\to{\bf H}_n\to{\bf H}_n/{\bf B}=:Q$ i.e. ${\bf H}_n=Q.{\bf B}$. The stabilizers all coincide (in the generic case) with the center $K:=\mathbb R E$
of ${\bf H}_n$ and one has the global trivialization $\mathbb CO\to{\bf H}_n:{}^\flat v+\xi_0\mapsto v$.
\subsection{Representations from real and complex polarizations}
This yields the linear isomorphism
$\Gamma^\infty(\mathbb E_\chi)\to C^\infty(Q):u\mapsto\hat{u}|_Q=:\tilde{u}$ under which the ${\bf H}_n$-action reads
$U_{\mbox{\rm \tiny{KW}}}(qb)\tilde{u}(q_0)=e^{i\mu(z+\Omega(q-q_0,p))}\tilde{u}(q_0-q)$ with $b=p+zE\,,\,p\in\exp(\fL)$. The latter induces a unitary
representation on $L^2(Q)$.
\noindent The isomorphism $\mathbb C^n=\q\oplus i\fL\to V:Z=q+ip\mapsto q+p$ yields an ${\bf H}_n$-equivariant global complex
coordinate system on the orbit $\mathbb CO$ through $Z\mapsto\mbox{$\mathtt{Ad}$}^\flat_{q+p}\xi_0$. The map $V\times U(1)\to Y:(v_0,z_0)\mapsto[v_0,z_0]$
consists in a global trivialization of the bundle $Y\to\mathbb CO$ under the isomorphism $V\simeq\mathbb CO$ described above. Hence the linear isomorphism $\Gamma^\infty(\mathbb F)\to C^\infty(V):\varphi\mapsto\tilde{\varphi}:=\hat{\varphi}(\,.\,,\,1)$. At the level of the trivialization the left-action
of ${\bf H}_n$ on $Y$ reads: $g.(v_0,z_0)=(v_0+q+p,e^{i\mu(z+\frac{1}{2}\Omega(q,p)+\frac{1}{2}\Omega(q+p,v_0))}z_0)$. Also,
the representation is given by $g.\tilde{\varphi}(v_0)=e^{i\mu(z+\frac{1}{2}\Omega(q,p)+\frac{1}{2}\Omega(q+p,v_0))}\tilde{\varphi}(v_0-p-q)$.
Choosing a basis $\{f_j\}$ of $\q$ and setting $\{e_j\}$ for the corresponding dual basis of $\fL$, one has
$\partial_{\overline{Z}^j}=-\frac{1}{2}(f^\star_j+ie^\star_j)$. Within the trivialization, the connection form corresponds to $\varpi=\frac{\mu}{2}(p^j{\rm d}q_j-q^j{\rm d}p_j)+\iota_\star$. A simple computation then yields:
$\mbox{\rm hol}(\mathbb F)\;=\;\{\varphi_f\;:\;\mathbb C^n\to\mathbb C:z\mapsto e^{-\frac{\mu}{4}|z|^2}\,f(z)\;\quad (\,f\;\mbox{\rm entire}\,)\;\}\;.$
Note that provided $\mu>0$, the space $\mbox{\rm hol}(\mathbb F)$ naturally contains the pre-Hilbert space:
$\mathbb CL^2_{\mbox{\rm hol}}(\mathbb F)\;:=\;\{\varphi_f\;:\;<\varphi_f,\varphi_f>\;:=\;\int_{\mathbb C^n}e^{-\frac{\mu}{2}|z|^2}\,\left| f(z)\right|^2\,{\rm d}q\,{\rm d}p\,<\,\infty\;\}\;.$
The above sub-space turns out to be invariant under the representation $U$ of ${\bf H}_n$. The latter is seen to be
unitary and irreducible. For negative $\mu$, one gets a unitary representation by considering anti-polarized sections corresponding to
anti-holomorphic functions. Note that within complex coordinates, the action reads $U_{\mbox{\rm \tiny{BF}}}(g)\tilde{\varphi}(Z_0):=g.\tilde{\varphi}(Z_0)=e^{i\mu(z+\frac{1}{2}\mbox{\rm Im}(\frac{1}{2}{Z}^2+\overline{Z}Z_0))}\tilde{\varphi}(Z_0-Z)$.
\subsection{Intertwiners and the Bargmann transform}
\noindent We know (cf. Stone-von Neumann's theorem) that in the case of the Heisenberg group, representations constructed either
via complex or via real polarizations are equivalent. In order to exhibit intertwiners, we make the following observation.
\begin{prop}
Let $G$ be a Lie group with left-invariant Haar measure ${\rm d}g$ and let $(\mathbb CH,\rho)$ and $(\mathbb CH',\rho')$ be square-integrable unitary representations. Assume furthermore the continuity of the associated bilinear forms $\mathbb CH\times\mathbb CH\to L^2(G):(\varphi_1,\varphi_2)\to[g\mapsto<\varphi_1|\rho(g)\varphi_2>]$ and $\mathbb CH'\times\mathbb CH'\to L^2(G):(\varphi'_1,\varphi'_2)\to[g\mapsto<\varphi_1'|\rho'(g)\varphi'_2>]$. Fix ``mother states" $|\eta>\in\mathbb CH$ and $|\eta'>\in\mathbb CH'$.
For every element $g\in G$, set $|\eta_g>:=\rho(g)|\eta>$ and $|\eta'_g>:=\rho'(g)|\eta'>$.
Then the following formula
\begin{equation}\label{INTERTWINER}
T:=\int_G\,|\eta'_g><\eta_g|\,{\rm d}g
\end{equation}
formally defines an intertwiner from $(\mathbb CH,\rho)$ to $(\mathbb CH',\rho')$.
\end{prop}
{\em Proof}.
First observe that square-integrability and the Cauchy-Schwarz inequality on $L^2(G)$ imply that for all $\varphi\in\mathcal{H}$ and $\varphi'\in\mathcal{H}'$ the function $[g\mapsto<\varphi'|\eta'_g><\eta_g|\varphi>]$ is well defined as an element of $L^1(G)$. Moreover, the continuity of the above-mentioned bilinear forms ensures the continuity of the bilinear map $\mathcal{H}\times\mathcal{H}'\to\mathbb C$ defined by integrating the latter. It is in this sense that we understand formula (\ref{INTERTWINER}). Now for all $|\varphi>\in\mathcal{H}$ and $g_0\in G$, one has
$T|\varphi_{g_0}>=\int|\eta'_g><\eta_g|\varphi_{g_0}>\,{\rm d}g=\int|\eta'_g><\eta_{g_0^{-1}g}|\varphi>\,{\rm d}g=
\int|\eta'_{g_0g}><\eta_{g}|\varphi>\,{\rm d}g=\int\rho'(g_0)|\eta'_{g}><\eta_{g}|\varphi>\,{\rm d}g=\rho'(g_0)T|\varphi>$.
\hfill$\square$
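The finite-group analogue of this construction is easy to test numerically: replacing the Haar integral by a sum over a finite group, the operator $T=\sum_g|\eta'_g><\eta_g|$ intertwines any two equivalent unitary representations. The following Python sketch checks this for two toy representations of $\mathbb{Z}_4$; the representations and mother states are illustrative choices of ours, not taken from the text.

```python
import numpy as np

d = 4  # the cyclic group Z_4

# A (toy, hypothetical) unitary representation of Z_4 on C^2
def rho(g):
    return np.diag([1.0, 1j**g]).astype(complex)

# An equivalent representation rho' = A rho A^{-1}, conjugated by a fixed unitary A
t = 0.7
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]], dtype=complex)
def rho_p(g):
    return A @ rho(g) @ A.conj().T

# "mother states"
eta   = np.array([1.0, 2.0], dtype=complex)
eta_p = np.array([1.0, -1.0], dtype=complex)

# Discrete analogue of T = \int |eta'_g><eta_g| dg : a sum over the group
T = sum(np.outer(rho_p(g) @ eta_p, (rho(g) @ eta).conj()) for g in range(d))
assert not np.allclose(T, 0)  # the check is non-trivial

# Intertwining property: T rho(g0) = rho'(g0) T for every group element g0
for g0 in range(d):
    assert np.allclose(T @ rho(g0), rho_p(g0) @ T)
```

The same change of variables $g\mapsto g_0^{-1}g$ used in the proof is what makes the assertions hold exactly.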
\noindent In our present context of the Heisenberg group, the integration over $G$ should rather be replaced by an integration
over the orbit (which does not correspond to a subgroup of ${\bf H}_n$). The above argument nevertheless carries over essentially unchanged, up to a modification by a pure phase. Namely,
\begin{prop}
Fix $\tilde{\varphi}^0\in L^2_{\mbox{\rm hol}}(\mathbb F)$ and $\tilde{u}^0\in L^2(Q)$.
For every $v\in V$, set $\tilde{\varphi}^0_v:=U_{\mbox{\rm \tiny{BF}}}(v)\tilde{\varphi}^0$ and $\tilde{u}^0_v:=U_{\mbox{\rm \tiny{KW}}}(v)\tilde{u}^0$. Then, setting
$$
T_{\varphi^0,u^0}\;:=\;\int_V\,|\tilde{\varphi}^0_v><\tilde{u}^0_v|\,{\rm d}v
$$
formally defines a $V$-intertwiner between $U_{\mbox{\rm \tiny{KW}}}$ and $U_{\mbox{\rm \tiny{BF}}}$.
\end{prop}
{\em Proof}. Observe first that for every
$w\in V$, one has $\tilde{\varphi}^0_{wv}=e^{\frac{i\mu}{2}\Omega(w,v)}\tilde{\varphi}^0_{w+v}$ and similarly for $\tilde{u}^0$. Also for every $\tilde{u}\in L^2(Q)$, one has
\begin{eqnarray*}
T_{\varphi^0,u^0}U_{\mbox{\rm \tiny{KW}}}(w)\tilde{u}&=&\int_V\,|\tilde{\varphi}^0_v><\tilde{u}^0_v|\tilde{u}_w>\,{\rm d}v=
\int_V\,|\tilde{\varphi}^0_v><\tilde{u}^0_{w^{-1}v}|\tilde{u}>\,{\rm d}v\\
&=&
\int_V\,|\tilde{\varphi}^0_v><e^{-\frac{i\mu}{2}\Omega(w,v)}\tilde{u}^0_{v-w}|\tilde{u}>\,{\rm d}v=
\int_V\,e^{\frac{i\mu}{2}\Omega(w,v)}|\tilde{\varphi}^0_v><\tilde{u}^0_{v-w}|\tilde{u}>\,{\rm d}v\\
&=&
\int_V\,e^{\frac{i\mu}{2}\Omega(w,v+w)}|\tilde{\varphi}^0_{v+w}><\tilde{u}^0_{v}|\tilde{u}>\,{\rm d}v=
\int_V\,e^{\frac{i\mu}{2}\Omega(w,v)}|\tilde{\varphi}^0_{v+w}><\tilde{u}^0_{v}|\tilde{u}>\,{\rm d}v\\
&=&
\int_V\,|\tilde{\varphi}^0_{wv}><\tilde{u}^0_{v}|\tilde{u}>\,{\rm d}v=U_{\mbox{\rm \tiny{BF}}}(w)T_{\varphi^0,u^0}\tilde{u}\;.
\end{eqnarray*}
Now, one needs to check that the above definition actually makes sense. The special choices $\tilde{\varphi}^0(Z):=\varphi_1(Z)=e^{-\frac{\mu}{4}|Z|^2}$ and
$\tilde{u}^0(q):=e^{-\alpha q^2}$ reproduce the usual Bargmann transform. Indeed, first observe that
$<\tilde{u}^0_v,\tilde{u}>=e^{\frac{-i\mu}{2}qp}\int_Qe^{i\mu q_0p-\alpha(q_0-q)^2}\tilde{u}(q_0){\rm d}q_0$ and
$\tilde{\varphi}^0_v(v_1)=e^{i\frac{\mu}{2}(qp_1-q_1p)-\frac{\mu}{4}((q_1-q)^2+(p_1-p)^2)}$. This leads to
$T\tilde{u}(v_1)=\int{\rm d}q_0{\rm d}q{\rm d}p \;e^{\frac{i\mu}{2}((p-p_1)(2q_0-q_1-q)+(2q_0-q_1)p_1)}e^{-\alpha(q_0-q)^2-\frac{\mu}{4}((q_1-q)^2+(p_1-p)^2)}\tilde{u}(q_0)$. From the fact that $\int e^{-ixy}e^{-\frac{x^2}{2\sigma^2}}{\rm d}x=e^{-\frac{\sigma^2}{2}y^2}$, we get: $$T\tilde{u}(v_1)=\left(2\sqrt{\frac{\pi}{\mu}}\right)^n\int{\rm d}q_0{\rm d}q \;e^{\frac{i\mu}{2}(2q_0-q_1)p_1}e^{-\alpha(q_0-q)^2-\frac{\mu}{4}(q_1-q)^2}e^{-\frac{\mu}{4}(2q_0-q_1-q)^2}\tilde{u}(q_0)\;.$$
The formula $\int e^{-\frac{2}{\sigma^2}(q-q_0)^2}{\rm d}q=(\frac{\sqrt{2\pi}}{\sigma})^n$ yields
$$T\tilde{u}(v_1)=\left(2\pi\sqrt{\frac{\alpha+\frac{\mu}{4}}{\mu}}\right)^n\int{\rm d}q_0 \;e^{\frac{i\mu}{2}(2q_0-q_1)p_1}e^{-\frac{\mu}{2}((q_1-q_0)^2+\frac{1}{2}q_0^2)}\tilde{u}(q_0)\;.$$
Setting $Z_1:=q_1+ip_1$ leads to
$$T\tilde{u}(v_1)=e^{-\frac{\mu}{4}|Z_1|^2}\left(2\pi\sqrt{\frac{\alpha+\frac{\mu}{4}}{\mu}}\right)^n\int{\rm d}q_0 \;e^{-\frac{\mu}{4}(Z_1-q_0)(Z_1-3q_0)}\tilde{u}(q_0)\;.$$
\hfill$\square$
\begin{rmk}The usual Bargmann transform (see Folland \cite{F}, page 40) equals the latter when $\frac{\alpha}{3}=\frac{\pi}{2}=\frac{\mu}{4}$.\end{rmk}
\section{Star-exponentials and noncommutative tori}
\subsection{Recalls on Weyl calculus}
It is well known that the Weyl-Moyal algebra can be seen as a by-product of the Kirillov-Weyl representation. In \cite{BGT},
this fact is realized in terms of the natural symmetric space structure on the coadjoint orbits of the Heisenberg group.
This construction is based on the fact that the Euclidean centered symmetries on $V=\mathbb R^{2n}\simeq\mathcal{O}$ naturally ``quantize as phase
symmetries". More precisely, at the level of the Heisenberg group the flat symmetric space structure on $V$ is encoded
by the involutive automorphism $$\sigma:{\bf H}_n\to{\bf H}_n:v+zE\mapsto-v+zE\;.$$ The latter yields an involution
of the equivariant function space $$\sigma^\star:C^\infty({\bf H}_n,\mathbb C)^B\to C^\infty({\bf H}_n,\mathbb C)^B$$ which induces by restriction to
$Q$ the unitary phase involution: $$\Sigma:L^2(Q)\to L^2(Q):\tilde{u}\mapsto[q\mapsto\tilde{u}(-q)]\;.$$ Observing that the associated map
$${\bf H}_n\to U(L^2(Q)):g\mapsto U_{\mbox{\rm \tiny{KW}}}(g)\,\Sigma\,U_{\mbox{\rm \tiny{KW}}}(g^{-1})$$ is constant on the
lateral classes of the stabilizer group $\mathbb R E$, one gets an ${\bf H}_n$-equivariant mapping: $$\Xi_\mu:V\simeq{\bf H}_n/\mathbb R E\to U(L^2(Q)):
v\mapsto U_{\mbox{\rm \tiny{KW}}}(v)\,\Sigma\,U_{\mbox{\rm \tiny{KW}}}(v^{-1})$$ which at the level of functions yields the
so-called ``Weyl correspondence'' valued in the $C^\star$-algebra of bounded operators on $L^2(Q)$:
$$
\Xi_\mu:L^1(V)\to\mathcal{B}(L^2(Q)):F\mapsto\int_VF(v)\,\Xi_\mu(v)\,{\rm d}v\;.
$$
The above mapping uniquely extends from the space of compactly supported functions $C^\infty_c(V)$ to an injective continuous
map defined on the Laurent Schwartz B-space $\mathcal{B}(V)$ of complex-valued smooth functions on $V$ whose partial derivatives of every order are bounded:
$$
\Xi_\mu:\mathcal{B}(V)\to\mathcal{B}(L^2(Q))\;,
$$
$$
expressing in particular that the quantum operators associated to classical observables in the B-space are $L^2(Q)$-bounded
in accordance with the classical Calder\'on-Vaillancourt theorem.
\noindent It turns out that the range of the above map is a sub-algebra of $\mathcal{B}(L^2(Q))$. The Weyl product $\star_\mu$ on $\mathcal{B}(V)$ is then defined by structure transportation:
$$
F_1\star_\mu F_2\;:=\;\Xi_\mu^{-1}\left(\Xi_\mu(F_1)\circ\Xi_\mu(F_2)\right)
$$
whose asymptotics in powers of $\theta\;:=\;\frac{1}{\mu}$ is the usual formal Moyal star product:
$$
F_1\star_\mu F_2\;\sim\;F_1\star^0_\theta F_2\;:=\;\sum_{k=0}^\infty\frac{1}{k!}\left(\frac{\theta}{2i}\right)^k\Omega^{i_1j_1}...
\Omega^{i_kj_k}\partial^k_{i_1...i_k}F_1\,\partial^k_{j_1...j_k}F_2\;.
$$
The resulting associative
algebra $(\mathcal{B}(V),\star_\mu)$ is then both Fr\'echet (w.r.t. the natural Fr\'echet topology on $\mathcal{B}(V)$) and pre-$C^\star$
(by transporting the operator norm from $\mathcal{B}(L^2(Q))$).
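The truncated Moyal expansion can be implemented directly for polynomial symbols. The following self-contained Python sketch (our own illustration, assuming a $2$-dimensional phase space with the convention $\Omega^{qp}=+1=-\Omega^{pq}$) represents polynomials in $(q,p)$ as coefficient dictionaries and checks that the star commutator of $q$ and $p$ equals $\theta/i$, the first-order Moyal bracket.

```python
import math
from collections import defaultdict

# Phase-space polynomials in (q, p) stored as {(m, n): coeff} for q^m p^n.
def dq(f):
    return {(m - 1, n): m * c for (m, n), c in f.items() if m > 0}

def dp(f):
    return {(m, n - 1): n * c for (m, n), c in f.items() if n > 0}

def mul(f, g):
    out = defaultdict(complex)
    for (m1, n1), c1 in f.items():
        for (m2, n2), c2 in g.items():
            out[(m1 + m2, n1 + n2)] += c1 * c2
    return dict(out)

def add(f, g, s=1):
    out = defaultdict(complex)
    for k, c in f.items():
        out[k] += c
    for k, c in g.items():
        out[k] += s * c
    return {k: c for k, c in out.items() if c != 0}

def star(f, g, theta, kmax=4):
    # Truncated formal Moyal product.  At order k, the 2^k index assignments
    # reduce to a binomial sum over how many Omega factors are of type (q,p)
    # (sign +1) versus (p,q) (sign -1), since partial derivatives commute.
    total = {}
    for k in range(kmax + 1):
        coeff = (theta / 2j) ** k / math.factorial(k)
        for j in range(k + 1):
            fj, gj = dict(f), dict(g)
            for _ in range(j):          # j factors Omega^{qp}: dq on f, dp on g
                fj, gj = dq(fj), dp(gj)
            for _ in range(k - j):      # k-j factors Omega^{pq}: dp on f, dq on g
                fj, gj = dp(fj), dq(gj)
            sgn = (-1) ** (k - j) * math.comb(k, j)
            total = add(total, {key: coeff * sgn * c for key, c in mul(fj, gj).items()})
    return total

# Check: q*p - p*q = theta/i, the first-order (here exact) Moyal commutator
theta = 0.3
q, p = {(1, 0): 1.0}, {(0, 1): 1.0}
comm = add(star(q, p, theta), star(p, q, theta), s=-1)
assert set(comm) == {(0, 0)} and abs(comm[(0, 0)] - theta / 1j) < 1e-12
```

For polynomial symbols the series terminates, so the truncation at `kmax` is exact once `kmax` exceeds the degrees involved.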
\subsection{Star-exponentials}
Combining the above mentioned results of \cite{BGT} and well-known results on star-exponentials (see e.g. \cite{A}),
we observe that the heuristic consideration of the series:
$$
\mbox{Exp}_{\theta}(F)\;:=\;\sum_{k=0}^\infty\frac{1}{k!}\left(\frac{i}{\theta}F\right)^{\star_\mu k}
$$
yields a well-defined group homomorphism:
$$
\mathcal{E}_\theta:{\bf H}_n\to(\mathcal{B}(V),\star_\mu):g\mapsto\mbox{Exp}_{\theta}(\lambda_{\log g})
$$
where $\lambda$ denotes the classical linear moment:
$$
\mathfrak{h}_{n}\to C^\infty(V):X\mapsto[v\mapsto<\mbox{$\mathtt{Ad}$}_v^\flat\xi_0,X>]\;.
$$
Indeed, in this case, if $F$ depends only either on the $q$-variable or on the $p$-variable then
the above star-exponential just coincides with the usual exponential:
$\mbox{Exp}_{\theta}(F)=\exp\left(\frac{i}{\theta}F\right)$. In particular, for $x$ either in $Q$ or in $\exp\mathfrak{l}$ we find:
$$
\left(\mathcal{E}_\theta(x)\right)(v)\;=\;e^{\frac{i}{\theta^2}\Omega(x,v)}\;;
$$
while for $x=zE$, we find the constant function:
$$
\left(\mathcal{E}_\theta(zE)\right)(v)\;=\;e^{\frac{zi}{\theta^2}}\;.
$$
From this we conclude that $\mathcal{E}_\theta$ is indeed valued in $\mathcal{B}(V)$.
\subsection{An approach to the non-commutative torus}
For simplicity, restrict to the case $n=1$ and consider $\Omega$-dual basis elements $e_q$ of $\mathfrak{q}$ and $e_p$ of $\mathfrak{l}$.
From what precedes we observe that the group elements
$$
U\;:=\;\mathcal{E}_\theta(\exp(\theta^2e_q))\;=\;e^{ip}\quad\mbox{ and }\quad V\;:=\;\mathcal{E}_\theta(\exp(\theta^2e_p))\;=\;e^{-iq}
$$
behave inside the image group $\mathcal{E}_\theta({\bf H}_1)\;\subset\;\mathcal{B}(\mathbb R^2)$ as
$$
U\,V\;=\;e^{i\theta}V\,U
$$
(where we omitted to write $\star_\mu$).
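When $e^{i\theta}$ is a root of unity, the relation $UV=e^{i\theta}VU$ also admits a familiar finite-dimensional model given by clock and shift matrices. The following numerical check (a standard toy model of ours, not the star-product realization used in the text) illustrates the commutation relation for $\theta=2\pi/d$:

```python
import numpy as np

d = 5
theta = 2 * np.pi / d          # rational angle: a d-dimensional model exists
omega = np.exp(1j * theta)

U = np.diag(omega ** np.arange(d))                  # clock matrix
V = np.roll(np.eye(d), 1, axis=0).astype(complex)   # shift matrix

# The noncommutative-torus commutation relation U V = e^{i theta} V U
assert np.allclose(U @ V, omega * (V @ U))
# Both generators are unitary of order d
assert np.allclose(np.linalg.matrix_power(V, d), np.eye(d))
```

For irrational $\theta/2\pi$ no such finite-dimensional model exists, which is one motivation for the infinite-dimensional realizations discussed here.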
\noindent In particular, we may make the following
\begin{dfn} Endowing $(\mathcal{B}(\mathbb R^2),\star_\mu)$ with its pre-$C^\star$-algebra structure (coming from $\Xi_\mu$),
the complex linear span inside $\mathcal{B}(\mathbb R^2)$ of the sub-group of $\mathcal{E}_\theta({\bf H}_1)$
generated by the elements $U$ and $V$ underlies a pre-$C^\star$-algebra $\mathbb T^\circ_\theta$ that completes as the non-commutative 2-torus
$\mathbb T_\theta$.
\end{dfn}
\subsection{Bargmann-Fock realization of the non-commutative torus}
Identifying elements of $\mathbb R^2=V\subset{\bf H}_1$ with complex numbers as before, we compute that
$$
T\circ\Sigma\;=\;-\mbox{$\mathtt{id}$}^\star\circ T\quad\mbox{and }\quad \mbox{\rm BF}_\mu(Z)\tilde{\varphi}(Z_0)\;:=\;T\Xi_\mu(Z)T^{-1}\tilde{\varphi}(Z_0)\;=\;e^{i\mu\,\mbox{\rm Im}(\overline{Z}Z_0)}
\tilde{\varphi}(2Z-Z_0)\;.
$$
By structure transportation, we define the following correspondence:
$$
\mbox{\rm BF}_\mu:\mathcal{B}(\mathbb C)\to\mathcal{B}(L^2_{\mbox{\rm hol}}(\mathbb F))
$$
as the unique continuous linear extension of
$$
C^\infty_c(\mathbb C)\to\mathcal{B}(L^2_{\mbox{\rm hol}}(\mathbb F)):F\mapsto\int_\mathbb C F(Z)\,\mbox{\rm BF}_\mu(Z)\,{\rm d}Z\;.
$$
\begin{prop}
Applied to an element of $\mathcal{E}_\theta({\bf H}_1)$ of the form $F_X(Z)\;=\;e^{i\alpha \, \mbox{\rm Im}(\overline{Z}X)}$ with $X\in\mathbb C$ and $\alpha\in\mathbb R$, one has:
$$
\mbox{\rm BF}_\mu(F_X)
\tilde{\varphi}(Z_0)\;=\;e^{\frac{i\alpha}{2}\mbox{\rm Im}(\overline{Z_0}X)}\tilde{\varphi}(\mu Z_0+\alpha X)\;.
$$
\end{prop}
{\em Proof}.
A small computation leads to the formula:
$$
\mbox{\rm BF}_\mu(F)\tilde{\varphi}(Z_0)\;=\;\frac{1}{4}\int_\mathbb C F\left(\frac{1}{2}(Z+Z_0)\right)\,e^{\frac{i}{2}\mu\,\mbox{\rm Im}(\overline{Z}Z_0)}\,
\tilde{\varphi}(Z)\,{\rm d}Z\;.
$$
Applied to an element of $\mathcal{E}_\theta({\bf H}_1)$ of the form $F_X(Z)\;=\;e^{i\alpha \, \mbox{\rm Im}(\overline{Z}X)}$ with $X\in\mathbb C$ and $\alpha\in\mathbb R$, the above
formula yields:
$\mbox{\rm BF}_\mu(F_X)
\tilde{\varphi}(Z_0)\;=\;e^{\frac{i\alpha}{2}\mbox{\rm Im}(\overline{Z_0}X)} \mathcal{F}_\mathbb C
(\tilde{\varphi})(\mu Z_0+\alpha X)$
where $\mathcal{F}_\mathbb C(\tilde{\varphi})(Z_0)\;:=\;C\int_\mathbb C e^{\frac{i}{2}\,\mbox{\rm Im}(\overline{Z}Z_0)} \tilde{\varphi}(Z)\,{\rm d}Z$.
The limit $X\to0$ yields $\tilde{\varphi}=\mathcal{F}_\mathbb C(\tilde{\varphi})$, hence:
$$
\mbox{\rm BF}_\mu(F_X)
\tilde{\varphi}(Z_0)\;=\;e^{\frac{i\alpha}{2}\mbox{\rm Im}(\overline{Z_0}X)}\tilde{\varphi}(\mu Z_0+\alpha X)\;.
$$
\hfill$\square$
\section{Conclusions}
We now summarize what has been done in the present work. First, we established a way to systematically produce explicit formulae for
intertwiners of unitary group representations. Second, applying this intertwiner to the Bargmann-Fock and Kirillov-Weyl realizations of the unitary dual of the Heisenberg groups, we realized the non-commutative torus as the range of the star-exponential for the Heisenberg group. Third, we deduced from this a realization of the non-commutative torus acting
on a Fock space of entire functions.
\begin{thebibliography}{MM65}
\bibitem[A]{A} Arnal, D. \emph{The $\star$-exponential}. Quantum theories and geometry (Les Treilles, 1987), 23--51, Math. Phys. Stud., 10, Kluwer Acad. Publ., Dordrecht, 1988.
\bibitem[BDS]{BDS} Bieliavsky, P., Detournay, S. and Spindel, Ph.; \emph{The Deformation Quantizations of the Hyperbolic Plane}; Comm. Math. Phys., Volume 289, Number 2, 2009, 529--559.
\bibitem[BG2]{BG2} Bieliavsky, P., Gayral, V.; \emph{Deformation quantization for actions of K\"ahlerian Lie Groups}, arXiv:1109.3419.
\bibitem[BGT]{BGT} Bieliavsky, Pierre; de Goursac, Axel; Tuynman, Gijs;
\emph{Deformation quantization for Heisenberg supergroup}.
J. Funct. Anal. 263 (2012), no. 3, 549--603.
\bibitem[F]{F} Folland, Gerald B. \emph{Harmonic analysis in phase space}. Annals of Mathematics Studies, 122. Princeton University Press, Princeton, NJ, 1989.
\bibitem[H]{Ho} H\"ormander, Lars. \emph{The analysis of linear partial differential operators. III. Pseudo-differential operators.} Classics in Mathematics. Springer, Berlin, 2007.
\bibitem[S]{S} Souriau, J.-M. \emph{Structure des syst\`emes dynamiques}. Ma\^\i trises de math\'ematiques, Dunod, Paris, 1970.
\bibitem[U]{U} Unterberger, A., Unterberger, J.; \emph{La s\'erie discr\`ete de $SL_2(\mathbb R)$ et les op\'erateurs pseudo-diff\'erentiels sur une demi-droite}; Ann. Sci. ENS, tome 17, no. 1, 1984, 83--116.
\bibitem[W]{Wo} Woodhouse, N. M. J. \emph{Geometric quantization}. Second edition. Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1992.
\end{thebibliography}
\end{document}
\begin{document}
\title{Cavity linewidth narrowing with dark-state polaritons }
\author{G. W. Lin$^{1}$}
\email{[email protected]}
\author{J. Yang$^{1}$}
\author{X. M. Lin$^{2}$}
\author{Y. P. Niu$^{1}$}
\email{[email protected]}
\author{S. Q. Gong$^{1}$}
\email{[email protected]}
\affiliation{$^{1}$Department of Physics, East China University of Science and Technology,
Shanghai 200237, China}
\affiliation{$^{2}$School of Physics and Optoelectronics Technology, Fujian Normal
University, Fuzhou 350007, China}
\begin{abstract}
We perform a quantum-theoretical treatment of cavity linewidth narrowing with
intracavity electromagnetically induced transparency (EIT). By means of
intracavity EIT, the photons in the cavity are in the form of cavity
polaritons: bright-state polariton and dark-state polariton. Strong coupling
of the bright-state polariton to the excited state induces an effect known as
vacuum Rabi splitting, whereas the dark-state polariton decoupled from the
excited state induces a narrow cavity transmission window. Our analysis would
provide a quantum theory of linewidth narrowing with a quantum field pulse at
the single-photon level.
\end{abstract}
\pacs{(270.1670) Coherent optical effects; (020.1670) Coherent optical effects; (140.3945) Microcavities}
\maketitle
\section{\textbf{Introduction}}
Electromagnetically induced transparency (EIT) can be used to make a resonant,
opaque medium transparent by means of a strong coupling field acting on the
linked transition \cite{Harris,Fleischhauer}. The EIT medium in an optical
cavity known as intracavity EIT was first discussed by Lukin et al.
\cite{Lukin}. They show that the cavity response is drastically modified by
intracavity EIT, resulting in frequency pulling and a substantial narrowing of
spectral features. By following this seminal work, significant experimental
advance has been made in narrowing the cavity linewidth
\cite{Wang,Wu,Hernandez,Wu1} and enhancing the cavity lifetime \cite{Laupr}.
The previous semi-classical treatments of cavity linewidth narrowing and
lifetime enhancing with intracavity EIT were based on the solution of the
susceptibility of EIT system. Here we present a quantum-theoretical treatment
of cavity linewidth narrowing with intracavity EIT. By means of intracavity
EIT, the photons in the cavity are in the form of cavity polaritons:
bright-state polariton and dark-state polariton. Strong coupling of the
bright-state polariton to the excited state induces an effect known as vacuum
Rabi splitting, whereas the dark-state polariton decoupled from the excited
state induces a narrow cavity transmission window $\upsilon=\cos^{2}
\theta\upsilon_{0}$, with $\upsilon_{0}$ the empty-cavity linewidth and
$\theta$ the mixing angle of the dark-state polariton \cite{Fleischhauer1}. We
discuss the condition required for cavity linewidth narrowing and find that
when the atom-cavity system is in the collective strong coupling regime, a
weak control field is sufficient for avoiding the absorption owing to
spontaneous emission of the excited state, and then the dark-state polariton
induces a very narrow cavity linewidth. This result is different from that
based on previous semi-classical treatments of intracavity EIT
\cite{Lukin,Wang}, where the strong control field is required for avoiding the
absorption of a probe field pulse if all atoms are prepared in one of their
ground states. \begin{figure}
\caption{ (Color online) (a)
Schematic setup for cavity linewidth narrowing with dark-state polaritons. (b)
The relevant atomic level structure and transitions.}
\end{figure}
\section{Semi-classical theory}
We first review the intracavity EIT with the semi-classical theory based on
the solution of the susceptibility of EIT system. An EIT medium of length $l$
is trapped in an optical cavity of length $L$. The EIT medium response of a
probe classical field is characterized by the real $\chi^{^{\prime}}$ and the
imaginary $\chi^{^{\prime\prime}}$ parts of the susceptibility. The real part
gives the dispersion $\frac{\partial\chi^{^{\prime}}}{\partial\omega_{p}}$ and
the imaginary part gives the absorption coefficient $\alpha=2\pi\omega_{p}
\chi^{^{\prime\prime}}/c$, with probe frequency $\omega_{p}$. Then the ratio
of the linewidth $\upsilon$ of the cavity with the EIT medium to that of the
empty cavity is \cite{Lukin,Wang}
\begin{equation}
\frac{\upsilon}{\upsilon_{0}}=\frac{1-r\tau}{\sqrt{\tau}(1-r)}\frac{1}{1+\eta
},
\end{equation}
where $\tau=\exp(-\alpha l)$, $\eta=\omega_{r}(l/2L)\frac{\partial
\chi^{^{\prime}}}{\partial\omega_{p}}$, with $\omega_{r}$ the cavity resonant
frequency, and $r$ is the intensity reflectivity of the cavity mirror. As
shown in Ref \cite{Lukin,Wang}, when the EIT medium is driven by a strong
control field, the absorption of the probe field can be negligible
($\chi^{^{\prime\prime}}\rightarrow0$), whereas the dispersion is large,
resulting in a substantial narrowing of the cavity linewidth.
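For a quick numerical illustration of Eq.~(1), the following Python sketch (with hypothetical parameter values of our own choosing) confirms that the ratio reduces to $1$ in the empty-cavity limit and becomes very small for a transparent medium with large dispersion $\eta$:

```python
import numpy as np

def linewidth_ratio(alpha, l, eta, r):
    # Eq. (1): v/v0 = (1 - r*tau) / (sqrt(tau)*(1 - r)) * 1/(1 + eta),
    # with tau = exp(-alpha*l) the single-pass intensity transmission.
    tau = np.exp(-alpha * l)
    return (1 - r * tau) / (np.sqrt(tau) * (1 - r)) / (1 + eta)

r = 0.99   # hypothetical mirror intensity reflectivity

# Empty-cavity limit (no absorption, no extra dispersion): the ratio is 1
assert abs(linewidth_ratio(0.0, 1.0, 0.0, r) - 1.0) < 1e-12

# Transparent EIT medium with large dispersion eta: strong line narrowing
assert linewidth_ratio(0.0, 1.0, 1e3, r) < 1e-2

# Residual absorption broadens the line relative to the lossless case
assert linewidth_ratio(0.1, 1.0, 1e3, r) > linewidth_ratio(0.0, 1.0, 1e3, r)
```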
\section{\textbf{Quantum-theoretical treatment}}
Now we try to solve the quantum dynamics of EIT in an optical standing-wave
cavity [Fig.~1(a)]. We consider an atomic system with three levels, two ground
states $\left\vert g\right\rangle $, $\left\vert s\right\rangle $, and an
excited $\left\vert e\right\rangle $, forming a $\Lambda$-configuration [see
Fig, 1(b)]. The two transitions $\left\vert g\right\rangle \leftrightarrow
\left\vert e\right\rangle $ and $\left\vert s\right\rangle \leftrightarrow
\left\vert e\right\rangle $ are resonantly coupled by a cavity mode and a
laser field, respectively. The interaction Hamiltonian for the coherent
processes is described by $H_{I}=
{\displaystyle\sum\nolimits_{j=1}^{N}}
(g\left\vert e\right\rangle _{j}\left\langle g\right\vert a+\Omega\left\vert
s\right\rangle _{j}\left\langle e\right\vert +H.c.)$, where $a$ is the
annihilation operator of the cavity mode, $g$ ($\Omega$) is the coupling
strength of quantized cavity mode (external field) to the corresponding
transition. We assume that almost all atoms are in one of their ground states,
e.g. $\left\vert G\right\rangle =\prod_{j=1}^{N}\left\vert g_{j}\right\rangle
$, at all times, and define the collective atomic operators $C_{\mu}^{\dagger
}=\frac{1}{\sqrt{N}}
{\displaystyle\sum\nolimits_{j=1}^{N}}
\left\vert \mu\right\rangle _{j}\left\langle g\right\vert $ with $\mu=e,s$,
then $H_{I}$ can be rewritten as
\begin{equation}
H_{I}^{^{\prime}}=\sqrt{N}gC_{e}^{\dagger}a+C_{s}^{\dagger}C_{e}\Omega+H.c.,
\end{equation}
where the coupling constant $g$ between the atoms and the quantized cavity
mode is collectively enhanced by a factor $\sqrt{N}$. In analogy to the
dark-state polariton for a travelling light pulse \cite{Fleischhauer1}, one
can define two standing-wave cavity polaritons: a dark-state polariton
$m_{D}=\cos\theta a-\sin\theta C_{s}$, and a bright-state polariton
$m_{B}=\sin\theta a+\cos\theta C_{s}$, with $\cos\theta=\Omega
/\sqrt{Ng^{2}+\Omega^{2}}$ and $\sin\theta=\sqrt{N}g/\sqrt{Ng^{2}
+\Omega^{2}}$. In terms of these cavity polaritons the Hamiltonian
$H_{I}^{^{\prime}}$ can be represented by
\begin{equation}
H_{I}^{^{^{\prime\prime}}}=\sqrt{Ng^{2}+\Omega^{2}}(C_{e}^{\dagger}m_{B}
+C_{e}m_{B}^{\dagger}).
\end{equation}
The cavity dark-state polariton is decoupled from the collective excited state
$C_{e}^{\dagger}\left\vert G\right\rangle $.
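The decoupling of the dark-state polariton can be verified with a small matrix computation in the single-excitation sector. In the basis $(a^{\dagger}\left\vert G,0\right\rangle ,C_{s}^{\dagger}\left\vert G,0\right\rangle ,C_{e}^{\dagger}\left\vert G,0\right\rangle )$ the Hamiltonian $H_{I}^{^{\prime}}$ is a $3\times3$ matrix; the parameter values below are hypothetical and chosen only for illustration.

```python
import numpy as np

# Hypothetical illustrative parameters
N, g, Omega = 1000, 0.5, 2.0
gN = np.sqrt(N) * g

# H_I' in the single-excitation basis (photon a, collective spin C_s, excited C_e)
H = np.array([[0.0, 0.0, gN],
              [0.0, 0.0, Omega],
              [gN, Omega, 0.0]])

D = np.sqrt(N * g**2 + Omega**2)
cos_t, sin_t = Omega / D, gN / D

dark   = np.array([cos_t, -sin_t, 0.0])   # m_D^dagger |G,0>
bright = np.array([sin_t,  cos_t, 0.0])   # m_B^dagger |G,0>

# The dark-state polariton is annihilated by H_I' (decoupled from |e>)
assert np.allclose(H @ dark, 0.0)

# The bright-state polariton couples to |e> with strength sqrt(N g^2 + Omega^2)
e = np.array([0.0, 0.0, 1.0])
assert abs(e @ (H @ bright) - D) < 1e-10
```

The vanishing of $\sqrt{N}g\cos\theta-\Omega\sin\theta$ is exactly the algebraic identity behind the decoupling, and the bright-state coupling reproduces the collective Rabi frequency of Eq.~(3).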
The external fields interact with cavity mode $a$ through two input ports
$\alpha_{in}$, $\beta_{in}$, and two output ports $\alpha_{out}$, $\beta
_{out}$.\ The Hamiltonian for the cavity input--output processes is described
by \cite{Walls} $H_{in-out}=
{\displaystyle\sum\nolimits_{\Theta=\alpha,\beta}}
{\displaystyle\int\nolimits_{-\infty}^{+\infty}}
\omega d\omega\Theta^{\dagger}(\omega)\Theta(\omega)+
{\displaystyle\sum\nolimits_{\Theta=\alpha,\beta}}
[i
{\displaystyle\int\nolimits_{-\infty}^{+\infty}}
d\omega\sqrt{\frac{\kappa}{2\pi}}\Theta^{\dagger}(\omega)a+H.c.],$ where
$\omega$ is the frequency of the external field, $\kappa$ is the bare cavity
decay rate without EIT medium, and $\Theta(\omega)$ with the standard relation
$[\Theta(\omega),\Theta^{\dagger}(\omega^{^{\prime}})]=\delta(\omega
-\omega^{^{\prime}})$ denotes the one-dimensional free-space mode. We express
the Hamiltonian $H_{in-out}$ in the polariton bases:
\begin{align}
H_{in-out}^{^{\prime}} & =
{\displaystyle\sum\limits_{\Theta=\alpha,\beta}}
{\displaystyle\int\nolimits_{-\infty}^{+\infty}}
\omega d\omega\Theta^{\dagger}(\omega)\Theta(\omega)\nonumber\\
& +
{\displaystyle\sum\limits_{\Theta=\alpha,\beta}}
[i
{\displaystyle\int\nolimits_{-\infty}^{+\infty}}
d\omega\Theta^{\dagger}(\omega)(\sqrt{\frac{\kappa_{_{D}}}{2\pi}}
m_{D}\nonumber\\
& +\sqrt{\frac{\kappa_{_{B}}}{2\pi}}m_{B})+H.c.],
\end{align}
with $\kappa_{_{D}}=\cos^{2}\theta\kappa$ and $\kappa_{_{B}}=\sin^{2}
\theta\kappa$.
In the intracavity EIT system, only the bright-state polariton $m_{B}$
resonantly couples to the excited state. Under the condition
\begin{equation}
\sqrt{Ng^{2}+\Omega^{2}}\gg\kappa_{_{B}},\gamma_{e},
\end{equation}
with $\gamma_{e}$ the spontaneous-emission rate of the excited state
$\left\vert e\right\rangle $, the resonant interaction in Hamiltonian
$H_{I}^{^{^{\prime\prime}}}$ in the so-called strong coupling regime will
induce an effect known as vacuum Rabi splitting \cite{Boca}, i.e., the
splitting of the cavity transmission peak for the bright-state polariton
$m_{B}$ into a pair of resolvable peaks at $\omega=\omega_{0}\pm\sqrt
{Ng^{2}+\Omega^{2}}$ (here $\omega_{0}$ is the resonant frequency of cavity
mode). Thus one can neglect bright-state polariton $m_{B}$ to calculate the
cavity transmission spectrum near the cavity resonant frequency $\omega_{0}$.
According to the quantum Langevin equation, the evolution equation of the cavity
dark-state polariton $m_{D}$ is given by \cite{Walls}\begin{figure}
\caption{(Color online) The
transmission spectrum $T$ as a function of $\Delta(\omega)$ for $\Omega
=\{5g,0.5g\}$.}
\end{figure}
\begin{equation}
\dot{m}_{D}=-i\omega_{0}m_{D}-\kappa_{_{D}}m_{D}+\sqrt{\kappa_{_{D}}}
\alpha_{in}+\sqrt{\kappa_{_{D}}}\beta_{in}.
\end{equation}
Using the relationships between the input and output modes at each mirror
\cite{Walls}
\begin{equation}
\alpha_{out}\left( t\right) +\alpha_{in}\left( t\right) =\sqrt
{\kappa_{_{D}}}m_{D},
\end{equation}
and
\begin{equation}
\beta_{out}\left( t\right) +\beta_{in}\left( t\right) =\sqrt{\kappa_{_{D}
}}m_{D},
\end{equation}
and the Fourier transformations: $\Lambda=\sqrt{\frac{1}{2\pi}}
{\displaystyle\int\nolimits_{-\infty}^{+\infty}}
d\omega\Lambda(\omega)e^{-i\omega t}$, with $\Lambda=m_{D},$ $\alpha
_{in},\alpha_{out},\beta_{in},\beta_{out}$, we can find
\begin{equation}
\alpha_{out}\left( \omega\right) =\frac{\kappa_{_{D}}\beta_{in}\left(
\omega\right) }{\kappa_{_{D}}-i\Delta(\omega)},
\end{equation}
where $\Delta(\omega)=\omega-\omega_{0}$, and we have assumed that the photons
enter into the cavity from the input port $\beta_{in}$ ($\alpha_{in}=0$). Then
the transmission spectrum for intracavity EIT is described by
\begin{equation}
T(\omega)=\frac{\left\vert \alpha_{out}\left( \omega\right) \right\vert
^{2}}{\left\vert \beta_{in}\left( \omega\right) \right\vert ^{2}}
=\frac{\kappa_{_{D}}^{2}}{\kappa_{_{D}}^{2}+\Delta^{2}(\omega)}.
\end{equation}
As depicted in Fig.~2, the transmission spectrum $T$ can be controlled by the
external coherent field. Then the calculation of the cavity linewidth
$\upsilon$, i.e., the full width at half maximum of $T(\omega)$, gives
\begin{equation}
\upsilon=2\kappa_{_{D}}=2\kappa\cos^{2}\theta=\cos^{2}\theta\upsilon_{0},
\end{equation}
where $\upsilon_{0}=2\kappa$ is the empty-cavity linewidth \cite{Walls}.
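As a numerical sanity check (with hypothetical values of $\kappa$ and $\theta$ of our own choosing), the transmission spectrum derived above is a Lorentzian whose full width at half maximum is indeed $2\kappa_{D}=\cos^{2}\theta\,\upsilon_{0}$:

```python
import numpy as np

kappa, theta = 1.0, 0.4               # bare cavity decay rate, mixing angle (hypothetical)
kD = np.cos(theta)**2 * kappa         # kappa_D = cos^2(theta) kappa

def T(delta):                         # Lorentzian transmission spectrum
    return kD**2 / (kD**2 + delta**2)

# T(+/- kappa_D) = 1/2, so the full width at half maximum is 2 kappa_D
delta = np.linspace(-10 * kD, 10 * kD, 200_001)
above = delta[T(delta) >= 0.5]
fwhm = above[-1] - above[0]
assert abs(fwhm - 2 * kD) < 1e-3

# v = 2 kappa_D = cos^2(theta) * v0 with v0 = 2 kappa
assert np.isclose(fwhm, np.cos(theta)**2 * (2 * kappa), atol=1e-3)
```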
\section{\textbf{Brief discussion }}
Next we briefly discuss the results of the quantum-theoretical treatment of
intracavity EIT. First, the polariton $m_{D}$ corresponds to the well-known
\textquotedblleft dark-state polariton\textquotedblright\ for a travelling
light pulse \cite{Fleischhauer1}. In Ref \cite{Fleischhauer1}, the mixing
angle $\theta$ determines the group velocity of dark-state polariton, whereas
the mixing angle $\theta$ of cavity dark-state polariton $m_{D}$ here
determines the effective cavity decay rate $\kappa_{_{D}}$. Second, the atomic
ground states have long coherence time and very small decay rate, thus the
main source of absorption by EIT system is spontaneous emission of the excited
state. To avoid the absorption of the probe field by the coupling of
bright-state polariton $m_{B}$ to excited state, we require the condition in
Eq.~(5). If the atom-cavity system is in the weak coupling regime $\sqrt
{N}g<\kappa,\gamma_{e}$, we need a strong control field, $\Omega\gg g$, to
satisfy the required condition. However, if the atom-cavity system is in the
collective strong coupling regime $\sqrt{N}g\gg\kappa,\gamma_{e}$, even when
the control field is so weak that $\Omega\ll g$, the required condition is
still satisfied and the absorption owing to spontaneous emission of the
excited state can be neglected, then the dark-state polariton will induce a
very narrow cavity linewidth $\upsilon=\cos^{2}\theta\upsilon_{0}
\approx\upsilon_{0}\Omega^{2}/(Ng^{2})$. We note that this result is different from
that based on previous semi-classical treatments of intracavity EIT
\cite{Lukin,Wang}, where a strong control field is required for avoiding the
absorption of a probe field pulse if initially (i.e., before the probe field
arrives) all atoms are in their ground states $\left\vert G\right\rangle
=\prod_{j=1}^{N}\left\vert g_{j}\right\rangle $.
\section{\textbf{Conclusion }}
In summary, we have performed a theoretical investigation of intracavity EIT
quantum mechanically. In intracavity EIT system the cavity photons are in the
form of cavity polaritons: bright-state polariton and dark-state polariton.
Strong coupling of the bright-state polariton to the excited state leads to an
effect known as vacuum Rabi splitting, whereas the dark-state polariton
decoupled from the excited state induces a narrow transmission window. If the
atom-cavity system is in the weak coupling regime, a strong control field is
required for avoiding the absorption owing to spontaneous emission of the
excited state. However, if the atom-cavity system is in the collective strong
coupling regime, a weak control field is sufficient for avoiding the
absorption, and then the dark-state polariton induces a very narrow cavity
linewidth. This result is different from that based on previous semi-classical
treatments of intracavity EIT \cite{Lukin,Wang}, where the strong control
field is required for avoiding the absorption of a probe field pulse if all
atoms are prepared in one of their ground states. Our quantum-theoretical
treatment of intracavity EIT would provide a quantum theory of linewidth
narrowing with quantum field pulses at the single-photon level.
\textbf{Acknowledgments: }This work was supported by the National Natural
Science Foundation of China (Grants No. 11204080, No. 11274112, No. 91321101,
and No. 61275215) and the Fundamental Research Funds for the Central Universities
(Grant No. WM1313003).
\end{document} |
\begin{document}
\title[Tug-of-war and Krylov-Safonov]
{H\"older estimate for a tug-of-war game with $1<p<2$ from Krylov-Safonov regularity theory}
\author[Arroyo]{\'Angel Arroyo}
\address{MOMAT Research Group, Interdisciplinary Mathematics Institute, Department of Applied Mathematics and Mathematical Analysis, Universidad Complutense de Madrid, 28040 Madrid, Spain}
\email{[email protected]}
\author[Parviainen]{Mikko Parviainen}
\address{Department of Mathematics and Statistics, University of Jyv\"askyl\"a, PO~Box~35, FI-40014 Jyv\"askyl\"a, Finland}
\email{[email protected]}
\date{\today}
\keywords{ABP-estimate, elliptic non-divergence form partial differential equation with bounded and measurable coefficients, dynamic programming principle, local H\"older estimate, p-Laplacian, Pucci extremal operator, tug-of-war with noise}
\subjclass[2020]{35B65, 35J15, 35J92, 91A50}
\maketitle
\begin{abstract}
We propose a new version of the tug-of-war game and a corresponding dynamic programming principle related to the $p$-Laplacian with $1<p<2$. For this version, the asymptotic H\"older continuity of solutions can be directly derived from recent Krylov-Safonov type regularity results in the singular case. Moreover, existence of a measurable solution can be obtained without using boundary corrections. We also establish a comparison principle.
\end{abstract}
\section{Introduction}
In Section 7 of \cite{arroyobp} we show that a solution to the dynamic programming principle
\begin{equation}\label{usual-DPP}
u(x)
=
\frac{\alpha}{2}\bigg(\sup_{h\in B_1}u(x+\varepsilon h)+\inf_{h\in B_1}u(x+\varepsilon h)\bigg)
+
\beta\vint_{B_1}u(x+\varepsilon h)\,dh
+
\varepsilon^2f(x)
\end{equation}
with $\beta=1-\alpha\in(0,1]$ and $x\in\Omega\subset\mathbb{R}^N$ satisfies certain extremal inequalities, and thus has asymptotic H\"older regularity directly by the Krylov-Safonov theory developed in that paper. This dynamic programming principle corresponds to a version of the tug-of-war game with noise as explained in \cite{manfredipr12}, and it is linked to the $p$-Laplacian when $p\geq 2$ with a suitable choice of probabilities $\alpha=\alpha(N,p)$ and $\beta=\beta(N,p)$. Interested readers can consult the references \cite{peress08}, \cite{blancr19}, \cite{lewicka20} and \cite{parviainenb} for more information about tug-of-war games.
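The structure of the dynamic programming principle \eqref{usual-DPP} is easy to probe numerically. In the following Python sketch (our own discretization, with illustrative parameters) the operator is applied on a finite symmetric sample of $B_1$; affine functions with $f=0$ are exact fixed points, since the symmetric sample makes $\sup+\inf=2u(x)$ and the average equal to $u(x)$.

```python
import numpy as np

# Symmetric finite sample of the unit ball B_1 in R^2 (closed under h -> -h)
ticks = np.linspace(-0.9, 0.9, 19)
H = np.array([(h1, h2) for h1 in ticks for h2 in ticks
              if h1 * h1 + h2 * h2 < 1.0])

alpha, beta, eps = 0.6, 0.4, 0.1      # beta = 1 - alpha, illustrative values

def T(u, f, x):
    # One application of the DPP operator at the point x, with sup/inf and
    # the mean value integral replaced by max/min/mean over the sample H.
    vals = np.array([u(x + eps * h) for h in H])
    return (alpha / 2) * (vals.max() + vals.min()) + beta * vals.mean() + eps**2 * f(x)

# Affine functions with f = 0 are exact fixed points
a = np.array([1.3, -0.7])
u = lambda x: a @ x + 2.0
x0 = np.array([0.25, -0.5])
assert abs(T(u, lambda x: 0.0, x0) - u(x0)) < 1e-12
```

This mirrors the fact that the DPP is a discrete analogue of a mean value property: harmonic-type functions are preserved, while curvature of $u$ or a nonzero source $f$ produces an $O(\varepsilon^2)$ correction.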
In the case $1<p<2$, one usually considers a variant of the game known as the tug-of-war game with orthogonal noise as in \cite{peress08}, although there is also other recent variant covering this range and not using orthogonal noise but measures absolutely continuous with respect to the $N$-dimensional Lebesgue measure \cite{lewicka21}. An inconvenience of the ``orthogonal noise'' approach of \cite{peress08} is that the uniform part of the measure is supported in $(N-1)$-dimensional balls. Thus we do not expect that solutions to the corresponding dynamic programming principle satisfy the extremal inequalities required for the Krylov-Safonov type regularity estimates obtained in \cite{arroyobp,arroyobp2}.
Second, there are some deep measurability issues related to the corresponding dynamic programming principles, and this introduces some difficulties in the existence and measurability proofs in the case $1<p<2$, as explained at the beginning of Section~\ref{sec:exist}. As a matter of fact, in \cite{hartikainen16} and \cite{arroyohp17} a modification near the boundary was necessary in order to guarantee the measurability.
For these reasons, and with the purpose of covering the case $1<p<2$, we propose a different variant of the tug-of-war game that can be described by a dynamic programming principle having a uniform part in an $N$-dimensional ball in (\ref{eq:dpp}) below. For this variant, no boundary modifications are needed in the existence proof (Theorem~\ref{thm:existence}) because of better continuity properties that are addressed in Section~\ref{sec:existence}. We also establish a comparison principle and thus uniqueness of solutions (Theorem~\ref{thm:comparison}). Moreover, the solutions to this dynamic programming principle are asymptotically H\"older continuous (Corollary~\ref{cor:holder}) and converge, passing to a subsequence if necessary, to a solution of the normalized $p$-Laplace equation (Theorem~\ref{thm:convergence}).
\section{Preliminaries}
We denote by $B_1$ the open unit ball of $\mathbb{R}^N$ centered at the origin. For $|z|=1$ we introduce the following notation,
\begin{equation}\label{operator-I}
\mathcal{I}^z_\varepsilon u(x)
:=
\frac{1}{\gamma_{N,p}}\vint_{B_1}u(x+\varepsilon h)(z\cdot h)_+^{p-2}\,dh,
\end{equation}
for each Borel measurable bounded function $u$, where $\gamma_{N,p}$ is the normalization constant
\begin{equation}\label{gamma-ctt}
\gamma_{N,p}
:=
\vint_{B_1}(z\cdot h)_+^{p-2}\,dh
=
\frac{1}{2}\vint_{B_1}|h_1|^{p-2}\,dh,
\end{equation}
which is independent of the choice of $|z|=1$. Here, we have used the following notation
\begin{equation}
\label{eq:plusfunction}
(t)_+^{p-2}
=
\begin{cases}
t^{p-2} & \text{ if } t>0,\\
0 & \text{ if } t\leq 0.
\end{cases}
\end{equation}
We remark that $t\mapsto(t)^{p-2}_+$ is a continuous function in $\mathbb{R}$ when $p>2$, but it presents a discontinuity at $t=0$ when $1<p\leq 2$.
Integrating, for example, over a cube containing $B_1$, we observe that $\gamma_{N,p}<\infty$ for any $p>1$. Later we compute the precise value of $\gamma_{N,p}$ in (\ref{eq:precise-constant}), but we immediately observe that $\gamma_{N,p}>\frac{1}{2}$ if $1<p<2$, since then $|z\cdot h|\leq|h|<1$ and $p-2<0$ yield
\begin{equation*}
2\gamma_{N,p}
=
\vint_{B_1}|z\cdot h|^{p-2}\,dh
\geq
\vint_{B_1}|h|^{p-2}\,dh
>
1.
\end{equation*}
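The bounds above lend themselves to a quick numerical sanity check. The following Monte Carlo sketch (an illustration only; the dimension $N=2$, the exponent $p=1.7$ and the sample size are arbitrary choices) estimates $\gamma_{N,p}$ for two different directions $z$, confirming both the independence of the direction and the lower bound $\gamma_{N,p}>\frac{1}{2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

N, p = 2, 1.7               # illustrative choices: dimension N and exponent 1 < p < 2
M = 400_000                 # Monte Carlo sample size

# Sample uniformly from the unit ball B_1 by rejection from the enclosing cube.
pts = rng.uniform(-1.0, 1.0, size=(2 * M, N))
h = pts[np.sum(pts**2, axis=1) < 1.0][:M]

def gamma_estimate(z):
    """Estimate gamma_{N,p}: the average over B_1 of (z.h)_+^{p-2}."""
    t = h @ z
    w = np.zeros(len(t))
    pos = t > 0
    w[pos] = t[pos] ** (p - 2.0)      # (t)_+^{p-2}, set to 0 for t <= 0
    return w.mean()

g1 = gamma_estimate(np.array([1.0, 0.0]))
g2 = gamma_estimate(np.array([0.6, 0.8]))      # another unit vector
avg_h = np.mean(np.sum(h**2, axis=1) ** ((p - 2.0) / 2.0))   # average of |h|^{p-2}
print(g1, g2, avg_h)
```

Both estimates agree to within Monte Carlo error, and the printed values exhibit the chain $2\gamma_{N,p}\geq\vint_{B_1}|h|^{p-2}\,dh>1$ from the display above.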
Throughout the paper $\Omega\subset\mathbb{R}^N$ denotes a bounded domain. For $\varepsilon>0$, we define the $\varepsilon$-neighborhood of $\Omega$ as
\begin{align*}
\Omega_\varepsilon=\{x\in\mathbb{R}^N\,:\,\operatorname{dist}(x,\Omega)<\varepsilon\}.
\end{align*}
Let $f:\Omega\to\mathbb{R}$ be a Borel measurable bounded function. We consider a dynamic programming principle (DPP)
\begin{align}
\label{eq:dpp}
u(x)
=
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)\Big)
+
\varepsilon^2f(x)
\end{align}
for $x\in\Omega$, with prescribed Borel measurable bounded boundary values $g:\Omega_\varepsilon\setminus\Omega\to\mathbb{R}$. The parameter $p$ above is linked to the $p$-Laplace operator as explained in Section~\ref{sec:p-lap}.
Next we recall the asymptotic H\"older continuity result derived in \cite{arroyobp} and \cite{arroyobp2}. The results there apply to a quite general class of discrete stochastic processes with bounded and measurable increments and their expectations, or equivalently functions satisfying the corresponding dynamic programming principles. Moreover, the results actually hold for functions merely satisfying inequalities given in terms of extremal operators, which we recall below. This can be compared with the H\"older result for PDEs given in terms of Pucci operators (see for example \cite{caffarellic95} and \cite{krylovs79,krylovs80,trudinger80}).
For $\Lambda\geq1$, let $\mathcal{M}(B_\Lambda)$, as in those papers, denote the set of symmetric unit Radon measures with support in $B_\Lambda$, and let $\nu:\mathbb{R}^N\to \mathcal{M}(B_\Lambda)$ be such that
\begin{equation*}
x\mapsto\int_{B_\Lambda} u(x+h) \,d\nu_x(h)
\end{equation*}
defines a Borel measurable function for every Borel measurable $u:\mathbb{R}^N\to \mathbb{R}$. By symmetric we mean that
\begin{align*}
\nu_x(E)=\nu_x(-E)
\end{align*}
holds for every measurable set $E\subset\mathbb{R}^N$.
\begin{definition}[Extremal operators]
\label{def:pucci}
Let $u:\mathbb{R}^N\to\mathbb{R}$ be a Borel measurable bounded function. We define the extremal Pucci type operators
\begin{equation*}
\begin{split}
\mathcal{L}_\varepsilon^+ u(x)
:\,=
~&
\frac{1}{2\varepsilon^2}\bigg(\alpha \sup_{\nu\in \mathcal{M}(B_\Lambda)} \int_{B_\Lambda}\delta u(x,\varepsilon h) \,d\nu(h)+\beta\vint_{B_1} \delta u(x,\varepsilon h)\,dh\bigg)
\\
=
~&
\frac{1}{2\varepsilon^2}\bigg(\alpha \sup_{h\in B_\Lambda} \delta u(x,\varepsilon h) +\beta\vint_{B_1} \delta u(x,\varepsilon h)\,dh\bigg)
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\mathcal{L}_\varepsilon^- u(x)
:\,=
~&
\frac{1}{2\varepsilon^2}\bigg(\alpha \inf_{\nu\in \mathcal{M}(B_\Lambda)} \int_{B_\Lambda}\delta u(x,\varepsilon h) \,d\nu(h)+\beta\vint_{B_1} \delta u(x,\varepsilon h)\,dh\bigg)
\\
=
~&
\frac{1}{2\varepsilon^2}\bigg(\alpha \inf_{h\in B_\Lambda} \delta u(x,\varepsilon h) +\beta\vint_{B_1} \delta u(x,\varepsilon h)\,dh\bigg),
\end{split}
\end{equation*}
where $\delta u(x,\varepsilon h)=u(x+\varepsilon h)+u(x-\varepsilon h)-2u(x)$ for every $h\in B_\Lambda$.
\end{definition}
Naturally, other domains of definition are also possible instead of $\mathbb{R}^N$ above.
\begin{theorem}[Asymptotic H\"older, \cite{arroyobp, arroyobp2}]
\label{Holder}
There exists $\varepsilon_0>0$ such that if $u$ satisfies $\mathcal{L}_\varepsilon^+ u\ge -\rho$ and $\mathcal{L}_\varepsilon^- u\le \rho$ in $B_{R}$ for some $\beta=1-\alpha>0$ and $\varepsilon<\varepsilon_0R$, then there exist $C,\gamma>0$ such that
\[
|u(x)-u(y)|\leq \frac{C}{R^\gamma}\left(\sup_{B_{R}}|u|+R^2\rho\right)\big(|x-y|^\gamma+\varepsilon^\gamma\big)
\]
for every $x, y\in B_{R/2}$.
\end{theorem}
It is worth remarking that the constants $C$ and $\gamma$ are independent of $\varepsilon$, and depend exclusively on $N$, $\Lambda\geq 1$ and $\beta=1-\alpha\in(0,1]$. Also a version of Harnack's inequality \cite[Theorem 5.5]{arroyobp2} holds if the extremal inequalities are satisfied for some $\beta>0$. For a different approach to regularity in the case of tug-of-war games, see \cite{luirops13} ($p>2$) and \cite{luirop18} ($p>1$).
\section{Existence of measurable solutions with $1<p<\infty$}
\label{sec:existence}
In this section we prove existence and uniqueness of solutions to the DPP (\ref{eq:dpp}). In addition, when $1<p<2$, this DPP satisfies the requirements for the asymptotic H\"older estimate of Theorem~\ref{Holder}. The regularity result and the connection of such a DPP (as well as the corresponding tug-of-war game) to the $p$-Laplacian are addressed in Sections~\ref{sec:reg} and \ref{sec:p-lap}, respectively.
\begin{remark}\label{rem-average}
Observe that the operator $\mathcal{I}^z_\varepsilon$ defined in (\ref{operator-I}) is a linear average for each $|z|=1$, in the sense that $\mathcal{I}^z_\varepsilon$ satisfies the following:
\begin{enumerate}
\item[\textit{i)}] stability: $\displaystyle\inf_{B_\varepsilon(x)}u\leq\mathcal{I}^z_\varepsilon u(x)\leq\sup_{B_\varepsilon(x)}u$;
\item[\textit{ii)}] monotonicity: $\mathcal{I}^z_\varepsilon u\leq\mathcal{I}^z_\varepsilon v$ for $u\leq v$;
\item[\textit{iii)}] linearity: $\mathcal{I}^z_\varepsilon(au+bv)=a\,\mathcal{I}^z_\varepsilon u+b\,\mathcal{I}^z_\varepsilon v$ for $a,b\in\mathbb{R}$.
\end{enumerate}
\end{remark}
\subsection{Continuity estimates for $\mathcal{I}^z_\varepsilon u$}
Given a Borel measurable bounded function $u:\Omega_\varepsilon\to\mathbb{R}$, we show that the function $\mathcal{I}^z_\varepsilon u(x)$ is continuous with respect to $x\in\overline\Omega$ and $|z|=1$. In fact, we prove that $(x,z)\mapsto\mathcal{I}^z_\varepsilon u(x)$ is uniformly continuous. As a consequence of this, the function
\begin{equation*}
x
\longmapsto
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)\Big)
\end{equation*}
is continuous in $\overline\Omega$ as shown in Lemma~\ref{lem:sup-inf-cont}.
\begin{lemma}
\label{lem:continuous-wrtz}
Let $\Omega\subset\mathbb{R}^N$. For $u:\Omega_\varepsilon\to\mathbb{R}$ a Borel measurable bounded function and $x\in\overline\Omega$, the function $$z\longmapsto\mathcal{I}^z_\varepsilon u(x)$$ is continuous on $|z|=1$. Moreover, the family $\{z\mapsto\mathcal{I}^z_\varepsilon u(x)\,:\,x\in\overline\Omega\}$ is equicontinuous on $|z|=1$.
\end{lemma}
\begin{proof} For $|z|=|w|=1$ we have
\begin{equation*}
\left|\mathcal{I}^z_\varepsilon u(x)-\mathcal{I}^w_\varepsilon u(x)\right|
\leq
\frac{\|u\|_\infty}{\gamma_{N,p}}\vint_{B_1}\big|(z\cdot h)^{p-2}_+-(w\cdot h)^{p-2}_+\big|\,dh,
\end{equation*}
uniformly for every $x\in\overline\Omega$. We claim that the limit
\begin{equation}\label{claim}
\lim_{|z-w|\to 0}\vint_{B_1}\big|(z\cdot h)^{p-2}_+-(w\cdot h)^{p-2}_+\big|\,dh
=
0
\end{equation}
holds for every $1<p<\infty$. Observe that the limit in (\ref{claim}) is independent of $x$ and $u$, so the estimate above holds uniformly for every $x\in\overline\Omega$, and the claimed equicontinuity of the family $\{z\mapsto\mathcal{I}^z_\varepsilon u(x)\,:\,x\in\overline\Omega\}$ follows.
\textit{i) Case $p=2$.} Since $(t)^0_+=\raisebox{2pt}{\rm{$\chi$}}_{(0,\infty)}(t)$, we have
\begin{equation*}
\begin{split}
\vint_{B_1}\big|(z\cdot h)^0_+-(w\cdot h)^0_+\big|\,dh
=
\frac{|B_1\cap(\{z\cdot h>0\}\triangle\{w\cdot h>0\})|}{|B_1|}
\leq
C |z-w|
\end{split}
\end{equation*}
for some explicit constant $C>0$ depending only on $N$, so (\ref{claim}) follows. Here $\triangle$ stands for the symmetric difference $A\triangle B=(A\setminus B)\cup(B\setminus A)$.
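For $N=2$ the geometry behind this constant can be checked numerically: the symmetric difference $\{z\cdot h>0\}\triangle\{w\cdot h>0\}$ consists of two opposite wedges of opening angle $\theta=\angle(z,w)$, so it covers exactly the fraction $\theta/\pi$ of $B_1$, which is indeed of order $|z-w|$. The following Monte Carlo snippet (an illustration only, with an arbitrary angle $\theta$; not part of the proof) confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample uniformly from the unit disk by rejection from the enclosing square.
pts = rng.uniform(-1.0, 1.0, size=(800_000, 2))
h = pts[np.sum(pts**2, axis=1) < 1.0]

theta = 0.4                                   # angle between the unit vectors z and w
z = np.array([1.0, 0.0])
w = np.array([np.cos(theta), np.sin(theta)])

# Fraction of B_1 lying in the symmetric difference of the two half-planes.
frac = np.mean((h @ z > 0) != (h @ w > 0))
print(frac, theta / np.pi, np.linalg.norm(z - w))
```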
\textit{ii) Case $p>2$.} We observe that the function $t\mapsto(t)^{p-2}_+$ is continuous in $\mathbb{R}$. In addition, given any $|z|=|w|=1$, it holds that
\begin{equation*}
|(z\cdot h)^{p-2}_+-(w\cdot h)^{p-2}_+|
\leq 1
\end{equation*}
for each $h\in B_1$. Then (\ref{claim}) follows by the Dominated Convergence Theorem.
\textit{iii) Case $1<p<2$.} This case requires a bit of care, since obtaining an integrable upper bound needed for the Dominated Convergence Theorem is not as straightforward.
To this end, we observe the inequality
\begin{align*}
\abs{a-b}\le (a+b)\bigg(1-\frac{\min\{a,b\}}{\max\{a,b\}}\bigg)
\end{align*}
for every $a,b>0$; indeed, assuming without loss of generality that $a\geq b>0$, the right-hand side equals $(a+b)(a-b)/a\geq a-b$. Thus
\begin{equation*}
|(z\cdot h)^{p-2}_+-(w\cdot h)^{p-2}_+|
\leq
\big((z\cdot h)^{p-2}_++(w\cdot h)^{p-2}_+\big)\bigg(1-\frac{\min\{(z\cdot h)^{p-2}_+,(w\cdot h)^{p-2}_+\}}{\max\{(z\cdot h)^{p-2}_+,(w\cdot h)^{p-2}_+\}}\bigg).
\end{equation*}
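The elementary inequality above can also be sanity-checked numerically; the following snippet (an illustration only, not part of the argument) verifies it on a large sample of positive pairs spread over many orders of magnitude:

```python
import random

random.seed(2)

# Check |a - b| <= (a + b) * (1 - min(a,b)/max(a,b)) on random positive pairs.
# For a >= b the right-hand side equals (a + b)(a - b)/a >= a - b, so the
# worst observed violation should be zero up to floating-point rounding.
worst = 0.0
for _ in range(100_000):
    a = random.uniform(1e-9, 1.0) ** random.choice([1, 7])
    b = random.uniform(1e-9, 1.0) ** random.choice([1, 7])
    lhs = abs(a - b)
    rhs = (a + b) * (1.0 - min(a, b) / max(a, b))
    worst = max(worst, lhs - rhs)
print(worst)
```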
In that way, applying H\"older inequality with $q=\frac{1}{2}\,\frac{p-3}{p-2}$ and recalling the definition of $\gamma_{N,\frac{p+1}{2}}$ we can estimate
\begin{multline*}
\vint_{B_1}|(z\cdot h)^{p-2}_+-(w\cdot h)^{p-2}_+|\,dh
\\
\leq
2\gamma_{N,\frac{p+1}{2}}^{2\frac{p-2}{p-3}}\bigg(\vint_{B_1}\bigg(1-\frac{\min\{(z\cdot h)^{p-2}_+,(w\cdot h)^{p-2}_+\}}{\max\{(z\cdot h)^{p-2}_+,(w\cdot h)^{p-2}_+\}}\bigg)^{-\frac{p-3}{p-1}}\,dh\bigg)^{-\frac{p-1}{p-3}}.
\end{multline*}
Observe that $\tfrac{p+1}{2}>1$ and thus $\gamma_{N,\frac{p+1}{2}}<\infty$. Now, the function $t\mapsto(t)^{p-2}_+$ is continuous in $(0,+\infty)$ (recall that we set it identically to $0$ for $t\leq 0$ in (\ref{eq:plusfunction})), so for any given $|z|=1$ and each $h\in B_1$ with $z\cdot h>0$ it holds that $(w\cdot h)^{p-2}_+\to(z\cdot h)^{p-2}_+$ as $w\to z$ with $|w|=1$, while for $h$ with $z\cdot h<0$ both weights vanish for $w$ close enough to $z$. Hence the integrand on the right-hand side is bounded between $0$ and $1$ and converges to $0$ as $w\to z$ for a.e. $h\in B_1$, so the assumptions of the Dominated Convergence Theorem are fulfilled: the right-hand side above converges to $0$ as $w\to z$, and (\ref{claim}) follows.
\end{proof}
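The exponent bookkeeping in the H\"older step of the preceding proof can be double-checked mechanically. The following exact-arithmetic check (a verification sketch; the sampled values of $p$ are arbitrary) confirms that $q>1$, that the first factor is integrated with the exponent $\frac{p-3}{2}$ defining $\gamma_{N,\frac{p+1}{2}}$, and that the H\"older conjugate of $q$ is $-\frac{p-3}{p-1}$, the exponent appearing in the displayed estimate:

```python
from fractions import Fraction

def check(p):
    """Verify the Hölder-exponent identities for a given rational 1 < p < 2."""
    q = Fraction(1, 2) * (p - 3) / (p - 2)
    q_conj = q / (q - 1)                      # Hölder conjugate of q
    return (q > 1                             # Hölder requires q > 1
            and q * (p - 2) == (p - 3) / 2    # exponent defining gamma_{N,(p+1)/2}
            and q_conj == -(p - 3) / (p - 1)  # exponent in the estimate
            and 1 / q + 1 / q_conj == 1)      # conjugacy

all_ok = all(check(Fraction(num, 10)) for num in range(11, 20))  # p = 1.1, ..., 1.9
print(all_ok)
```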
\begin{lemma}
\label{lem:continuous}
Let $\Omega\subset\mathbb{R}^N$. For $|z|=1$, the function
\begin{equation*}
x\longmapsto \mathcal{I}^z_\varepsilon u(x)
\end{equation*}
is continuous in $\overline\Omega$ for every Borel measurable bounded function $u:\Omega_\varepsilon\to\mathbb{R}$. Moreover, $\{\mathcal{I}^z_\varepsilon u\,:\,|z|=1\}$ is equicontinuous in $\overline\Omega$.
\end{lemma}
\begin{proof}
Let $u:\Omega_\varepsilon\to\mathbb{R}$ be a bounded Borel measurable function. Our aim is to show that
\begin{equation*}
\lim_{x,y\in\overline\Omega \text{ s.t. } |x-y|\to 0}\left|\mathcal{I}^z_\varepsilon u(x)-\mathcal{I}^z_\varepsilon u(y)\right|
=
0.
\end{equation*}
We can write
\begin{align*}
&
\mathcal{I}^z_\varepsilon u(x)
=
\frac{1}{\gamma_{N,p}\abs{B_1}}\int_{\mathbb{R}^N}u(x+\varepsilon h)\raisebox{2pt}{\rm{$\chi$}}_{B_1}(h)(z\cdot h)^{p-2}_+\,dh
,
\\
&
\mathcal{I}^z_\varepsilon u(y)
=
\frac{1}{\gamma_{N,p}\abs{B_1}}\int_{\mathbb{R}^N}u(x+\varepsilon h)\raisebox{2pt}{\rm{$\chi$}}_{B_1(-\frac{x-y}{\varepsilon})}(h)(z\cdot(h+\tfrac{x-y}{\varepsilon}))^{p-2}_+\,dh,
\end{align*}
so the following estimate follows immediately,
\begin{multline*}
\left|\mathcal{I}^z_\varepsilon u(x)-\mathcal{I}^z_\varepsilon u(y)\right|
\\
\leq
\frac{\norm{u}_{\infty}}{\gamma_{N,p}\abs{B_1}}\int_{\mathbb{R}^N}\big|\raisebox{2pt}{\rm{$\chi$}}_{B_1}(h)(z\cdot h)^{p-2}_+-\raisebox{2pt}{\rm{$\chi$}}_{B_1(-\frac{x-y}{\varepsilon})}(h)(z\cdot(h+\tfrac{x-y}{\varepsilon}))^{p-2}_+\big|\,dh.
\end{multline*}
We focus on the integral above. We can assume without loss of generality that $z=e_1$, otherwise we could perform a change of variables. In addition, and for the sake of simplicity, we denote $\xi=-\frac{x-y}{\varepsilon}$. Then the result follows from the following claim,
\begin{equation}\label{claim2}
\lim_{\xi\to 0}\int_{\mathbb{R}^N}\big|\raisebox{2pt}{\rm{$\chi$}}_{B_1}(h)(h_1)^{p-2}_+-\raisebox{2pt}{\rm{$\chi$}}_{B_1(\xi)}(h)(h_1-\xi_1)^{p-2}_+\big|\,dh
=
0.
\end{equation}
To see this we need to distinguish two cases depending on the value of $p$.
\textit{i) Case $p\geq2$.} The integrand in (\ref{claim2}) converges to zero as $\xi\to 0$ for almost every $h\in\mathbb{R}^N$. Moreover, it is bounded by $2$ and zero outside a bounded set. Then the claim follows by the Dominated Convergence Theorem as $\xi\to 0$.
\textit{ii) Case $1<p<2$.} In order to apply the Dominated Convergence Theorem when $1<p<2$, we observe similarly as in the proof of Lemma~\ref{lem:continuous-wrtz} that
\begin{multline*}
\big|\raisebox{2pt}{\rm{$\chi$}}_{B_1}(h)(h_1)^{p-2}_+-\raisebox{2pt}{\rm{$\chi$}}_{B_1(\xi)}(h)(h_1-\xi_1)^{p-2}_+\big|
\\
\begin{split}
\leq
~&
\big(\raisebox{2pt}{\rm{$\chi$}}_{B_1}(h)(h_1)^{p-2}_++\raisebox{2pt}{\rm{$\chi$}}_{B_1(\xi)}(h)(h_1-\xi_1)^{p-2}_+\big)
\\
~&
\cdot\bigg(1-\frac{\min\{\raisebox{2pt}{\rm{$\chi$}}_{B_1}(h)(h_1)^{p-2}_+,\raisebox{2pt}{\rm{$\chi$}}_{B_1(\xi)}(h)(h_1-\xi_1)^{p-2}_+\}}{\max\{\raisebox{2pt}{\rm{$\chi$}}_{B_1}(h)(h_1)^{p-2}_+,\raisebox{2pt}{\rm{$\chi$}}_{B_1(\xi)}(h)(h_1-\xi_1)^{p-2}_+\}}\bigg).
\end{split}
\end{multline*}
In that way, applying H\"older inequality with $q=\frac{1}{2}\,\frac{p-3}{p-2}$,
\begin{multline*}
\int_{\mathbb{R}^N}\big|\raisebox{2pt}{\rm{$\chi$}}_{B_1}(h)(h_1)^{p-2}_+-\raisebox{2pt}{\rm{$\chi$}}_{B_1(\xi)}(h)(h_1-\xi_1)^{p-2}_+\big|\,dh
\\
\leq
C\bigg(\int_{\mathbb{R}^N}\bigg(1-\frac{\min\{\raisebox{2pt}{\rm{$\chi$}}_{B_1}(h)(h_1)^{p-2}_+,\raisebox{2pt}{\rm{$\chi$}}_{B_1(\xi)}(h)(h_1-\xi_1)^{p-2}_+\}}{\max\{\raisebox{2pt}{\rm{$\chi$}}_{B_1}(h)(h_1)^{p-2}_+,\raisebox{2pt}{\rm{$\chi$}}_{B_1(\xi)}(h)(h_1-\xi_1)^{p-2}_+\}}\bigg)^{-\frac{p-3}{p-1}}\,dh\bigg)^{-\frac{p-1}{p-3}}
\end{multline*}
for every small enough $\xi$, where $C>0$ depends only on $N$ and $p$. Now again, the integrand on the right-hand side is bounded between $0$ and $1$. Moreover, it converges to $0$ as $\xi\to 0$ for almost every $h\in\mathbb{R}^N$, so the Dominated Convergence Theorem implies (\ref{claim2}). Finally, since the estimates obtained in this proof hold uniformly for every $x\in\overline\Omega$ and $|z|=1$, the uniform equicontinuity of the family $\{\mathcal{I}^z_\varepsilon u\,:\,|z|=1\}$ in $\overline\Omega$ follows.
\end{proof}
As a direct consequence of the continuity estimate from the previous lemma we get the following result.
\begin{lemma}\label{lem:sup-inf-cont}
Let $\Omega\subset\mathbb{R}^N$ and $u:\Omega_\varepsilon\to\mathbb{R}$ be a Borel measurable bounded function. Then the function
\begin{equation*}
x
\longmapsto
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)\Big)
\end{equation*}
is continuous in $\overline\Omega$.
\end{lemma}
\begin{proof}
The result follows directly from the equicontinuity in $\overline\Omega$ of the set of functions $\{\mathcal{I}^z_\varepsilon u\,:\,|z|=1\}$ (Lemma~\ref{lem:continuous}) and the elementary inequalities
\begin{align*}
\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)-\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(y)
\leq
~&
\sup_{|z|=1}\big\{\mathcal{I}^z_\varepsilon u(x)-\mathcal{I}^z_\varepsilon u(y)\big\},
\\
\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)-\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(y)
\leq
~&
\sup_{|z|=1}\big\{\mathcal{I}^z_\varepsilon u(x)-\mathcal{I}^z_\varepsilon u(y)\big\}.\qedhere
\end{align*}
\end{proof}
\subsection{Existence and uniqueness}
\label{sec:exist}
Next we show existence of Borel measurable solutions to the DPP (\ref{eq:dpp}). We also establish a comparison principle and thus uniqueness of solutions.
We remark that, contrary to the existence proofs in \cite{hartikainen16,arroyohp17}, no boundary correction is needed here, since Lemma~\ref{lem:continuous} guarantees that $u-\varepsilon^2f$ is continuous in $\Omega$, and the solutions to (\ref{eq:dpp}) are measurable.
Also recall that measurability of operators containing $\sup$ and $\inf$ is not completely trivial as shown by Example 2.4 in \cite{luirops14}.
The proof of existence of solutions to the DPP (\ref{eq:dpp}) with prescribed values in $$\Gamma_\varepsilon=\Omega_\varepsilon\setminus\Omega$$ is based on Perron's method. For that, given Borel measurable bounded functions $f:\Omega\to\mathbb{R}$ and $g:\Gamma_\varepsilon\to\mathbb{R}$, we consider the family $\mathcal{S}_{f,g}$ of Borel measurable functions $u:\Omega_\varepsilon\to\mathbb{R}$ such that $u-\varepsilon^2f$ is continuous in $\Omega$ and
\begin{equation}\label{sub-DPP}
\begin{cases}
\displaystyle
u
\leq
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u\Big)+\varepsilon^2f
& \text{ in } \Omega,
\\
u
\leq
g
& \text{ in }\Gamma_\varepsilon.
\end{cases}
\end{equation}
In the PDE literature, the corresponding class would be the class of subsolutions with suitable boundary conditions. In the following lemmas we prove that $\mathcal{S}_{f,g}$ is non-empty and uniformly bounded.
\begin{lemma}
\label{lem:non-empty}
Let $f$ and $g$ be Borel measurable bounded functions in $\Omega$ and $\Gamma_\varepsilon$, respectively. There exists a Borel measurable function $u:\Omega_\varepsilon\to\mathbb{R}$ such that $u-\varepsilon^2f$ is continuous in $\Omega$ and $u$ satisfies (\ref{sub-DPP}) with $u=g$ in $\Gamma_\varepsilon$.
\end{lemma}
\begin{proof}
Let $C>0$ be a constant to be fixed later, fix $R=\displaystyle\sup_{x\in\Omega_\varepsilon}|x|$ and define
\begin{equation*}
u(x)
=
\begin{cases}
C(|x|^2-R^2)+\varepsilon^2f(x) & \text{ if } x\in\Omega,
\\
g(x) & \text{ if } x\in\Gamma_\varepsilon.
\end{cases}
\end{equation*}
Then $u-\varepsilon^2f$ is clearly continuous in $\Omega$. To see that $u$ satisfies (\ref{sub-DPP}), let
\begin{equation*}
u_0(x)
=
C(|x|^2-R^2)-\varepsilon^2\|f\|_\infty-\|g\|_\infty
\end{equation*}
for every $x\in\Omega_\varepsilon$. Then $u_0\leq u$ in $\Omega_\varepsilon$. By the linearity and the monotonicity of the operator $\mathcal{I}^z_\varepsilon$ (see Remark~\ref{rem-average}),
\begin{equation*}
\begin{split}
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)\Big)
\geq
~&
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u_0(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u_0(x)\Big)
\\
\geq
~&
\frac{1}{2}\inf_{|z|=1}\big\{\mathcal{I}^z_\varepsilon u_0(x)+\mathcal{I}^{-z}_\varepsilon u_0(x)\big\}
\\
=
~&
C\inf_{|z|=1}\bigg\{\frac{1}{2\gamma_{N,p}}\vint_{B_1}|x+\varepsilon h|^2|z\cdot h|^{p-2}\,dh\bigg\}
\\
~&
-CR^2-\varepsilon^2\|f\|_\infty-\|g\|_\infty.
\end{split}
\end{equation*}
By the symmetry properties and the identity (\ref{integral-p3}), it turns out that
\begin{equation*}
\begin{split}
\frac{1}{2\gamma_{N,p}}\vint_{B_1}|x+\varepsilon h|^2|z\cdot h|^{p-2}\,dh
=
|x|^2+\varepsilon^2\,\frac{N+p-2}{N+p}
\geq
|x|^2+\frac{\varepsilon^2}{3}
\end{split}
\end{equation*}
holds for any $|z|=1$, $N\geq 2$ and $p>1$. Therefore
\begin{equation*}
\begin{split}
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)\Big)
\geq
~&
C\Big(|x|^2+\frac{\varepsilon^2}{3}\Big)-CR^2-\varepsilon^2\|f\|_\infty-\|g\|_\infty
\\
=
~&
u(x)-\varepsilon^2f(x)+\Big(\frac{C\varepsilon^2}{3}-\varepsilon^2\|f\|_\infty-\|g\|_\infty\Big).
\end{split}
\end{equation*}
Then (\ref{sub-DPP}) follows for $C=3(\|f\|_\infty+\varepsilon^{-2}\|g\|_\infty)$, since then the expression in parentheses right above equals zero.
\end{proof}
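The quadratic identity used in the proof above can be cross-checked numerically. The sketch below (an illustration only; the choices $N=2$, $p=3$, $\varepsilon=0.5$ and the vectors $x$, $z$ are arbitrary, with $p=3$ picked so that the weight $|z\cdot h|^{p-2}$ stays bounded) compares a Monte Carlo estimate of $\frac{1}{2\gamma_{N,p}}\vint_{B_1}|x+\varepsilon h|^2|z\cdot h|^{p-2}\,dh$ with the value $|x|^2+\varepsilon^2\frac{N+p-2}{N+p}$ quoted above from (\ref{integral-p3}):

```python
import numpy as np

rng = np.random.default_rng(3)

N, p, eps = 2, 3.0, 0.5          # illustrative choices; p = 3 keeps the weight bounded
pts = rng.uniform(-1.0, 1.0, size=(1_200_000, N))
h = pts[np.sum(pts**2, axis=1) < 1.0]   # uniform sample from B_1 by rejection

x = np.array([0.3, -0.7])
z = np.array([0.8, 0.6])         # a unit vector

wgt = np.abs(h @ z) ** (p - 2.0)
two_gamma = wgt.mean()           # the average of |z.h|^{p-2} over B_1 equals 2*gamma_{N,p}
lhs = np.mean(np.sum((x + eps * h) ** 2, axis=1) * wgt) / two_gamma
rhs = np.dot(x, x) + eps**2 * (N + p - 2) / (N + p)
print(lhs, rhs)
```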
\begin{lemma}
\label{lem:boundedness}
Let $f$ be a Borel measurable bounded function in $\Omega$. If $u:\Omega_\varepsilon\to\mathbb{R}$ is a Borel measurable function satisfying (\ref{sub-DPP}) in $\Omega$ then
\begin{equation*}
\sup_\Omega u
\leq
C\varepsilon^2\|f\|_\infty+\|g\|_\infty
\end{equation*}
for some constant $C>0$ depending only on $\Omega$ and $\varepsilon$.
\end{lemma}
\begin{proof}
We start by extending the function $u$ by $\|g\|_\infty$ outside $\Omega_\varepsilon$.
For each $x\in\Omega$ let
\begin{equation*}
S_x
=
\big\{h\in B_1\,:\, \tfrac{1}{2}\leq|h|<1\ \text{ and }\ x\cdot h\geq 0\big\}.
\end{equation*}
Then we define the constant
\begin{equation*}
\vartheta
=
\vartheta(N,p)
=
\frac{1}{2\gamma_{N,p}|B_1|}\int_{S_x}|z\cdot h|^{p-2}\,dh,
\end{equation*}
which is independent of $x\in\Omega$ and $|z|=1$. Indeed, since
\begin{equation*}
\int_{S_x\cap\{z\cdot h<0\}}|z\cdot h|^{p-2}\,dh
=
\int_{S_{-x}\cap\{z\cdot h>0\}}|z\cdot h|^{p-2}\,dh
\end{equation*}
we can write
\begin{align*}
\int_{S_x}|z\cdot h|^{p-2}\,dh&=\int_{S_x\cap\{z\cdot h>0\}}|z\cdot h|^{p-2}\,dh
+
\int_{S_{x}\cap\{z\cdot h<0\}}|z\cdot h|^{p-2}\,dh
\\
&
=\int_{S_x\cap\{z\cdot h>0\}}|z\cdot h|^{p-2}\,dh
+
\int_{S_{-x}\cap\{z\cdot h>0\}}|z\cdot h|^{p-2}\,dh.
\end{align*}
Using this and the fact that $S_x\cup S_{-x}=B_1\setminus B_{1/2}$, we get
\begin{equation*}
\int_{S_x}|z\cdot h|^{p-2}\,dh
=
\int_{B_1\setminus B_{1/2}}(z\cdot h)^{p-2}_+\,dh
=
\gamma_{N,p}\abs{B_1}\Big(1-\frac{1}{2^{N+p-2}}\Big).
\end{equation*}
In the last equality we used the definition of $\gamma_{N,p}$ and a change of variables as
\begin{align*}
\int_{B_1\setminus B_{1/2}}(z\cdot h)^{p-2}_+\,dh
&=\int_{B_1}(z\cdot h)^{p-2}_+\,dh-\int_{B_{1/2}}(z\cdot h)^{p-2}_+\,dh\\
&=\int_{B_1}(z\cdot h)^{p-2}_+\,dh-2^{-(N+p-2)}\int_{B_1}(z\cdot h)^{p-2}_+\,dh.
\end{align*}
Thus we obtain
\begin{equation*}
\vartheta
=
\frac{1}{2}-\frac{1}{2^{N+p-1}}
\in(\tfrac{1}{4},\tfrac{1}{2})
\end{equation*}
for any $N\geq 2$ and $1<p<\infty$.
Let $x\in\Omega$. By Lemma~\ref{lem:continuous-wrtz}, there exists $|z_0|=1$ maximizing $\mathcal{I}^z_\varepsilon u(x)$ among all $|z|=1$. Then
\begin{equation*}
\begin{split}
u(x)-\varepsilon^2f(x)
\leq
~&
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)\Big)
\\
\leq
~&
\frac{1}{2}\big(\mathcal{I}^{z_0}_\varepsilon u(x)+\mathcal{I}^{-z_0}_\varepsilon u(x)\big)
\\
=
~&
\frac{1}{2\gamma_{N,p}}\vint_{B_1}u(x+\varepsilon h)|z_0\cdot h|^{p-2}\,dh
\\
\leq
~&
\vartheta\sup_{h\in B_1\cap S_x}\{u(x+\varepsilon h)\}
+(1-\vartheta)\sup_{\mathbb{R}^N} u.
\end{split}
\end{equation*}
For each $k\in\mathbb{N}$, let $V_k=\mathbb{R}^N\setminus B_{\sqrt{k}\,\varepsilon/2}$. Since
\begin{equation*}
|x+\varepsilon h|^2
\geq
|x|^2+\frac{\varepsilon^2}{4}
\end{equation*}
for every $h\in B_1\cap S_x$, it turns out that $x+\varepsilon h\in V_{k+1}$ for every $h\in B_1\cap S_x$ and $x\in V_k$. Therefore
\begin{equation*}
\sup_{V_k}u
\leq
\vartheta\sup_{V_{k+1}}u
+(1-\vartheta)\sup_{\mathbb{R}^N}u+\varepsilon^2\|f\|_\infty.
\end{equation*}
Iterating this inequality starting from $V_0=\mathbb{R}^N$ we obtain
\begin{equation*}
\sup_{\mathbb{R}^N}u
\leq
\vartheta^k\sup_{V_k}u
+\bigg(\sum_{j=0}^{k-1}\vartheta^j\bigg)\big((1-\vartheta)\sup_{\mathbb{R}^N}u+\varepsilon^2\|f\|_\infty\big),
\end{equation*}
and rearranging terms
\begin{equation*}
\sup_{\mathbb{R}^N}u
\leq
\sup_{V_k}u
+\frac{1-\vartheta^k}{\vartheta^k(1-\vartheta)}\,\varepsilon^2\|f\|_\infty.
\end{equation*}
Since $\Omega$ is bounded, choosing $k_0=k_0(\varepsilon,\Omega)$ large enough we ensure that $\Omega\subset B_{\sqrt{k_0}\,\varepsilon/2}$, and thus $V_{k_0}\subset\mathbb{R}^N\setminus\Omega$. Hence there is necessarily a step $k\leq k_0$ such that $\displaystyle\sup_{V_k}u\leq\sup_{\mathbb{R}^N\setminus\Omega}u\leq\|g\|_\infty$. Using also that $\vartheta\in(\frac{1}{4},\frac{1}{2})$, we get
\begin{equation*}
\sup_\Omega u
\leq
\sup_{\mathbb{R}^N}u
\leq
\|g\|_\infty
+2^{2k+1}\,\varepsilon^2\|f\|_\infty.
\qedhere
\end{equation*}
\end{proof}
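The explicit value of $\vartheta$ computed in the preceding proof can also be cross-checked numerically. The sketch below (again an illustration only; $N=2$, $p=3$ and the vectors $x$, $z$ are arbitrary choices) estimates $\frac{1}{2\gamma_{N,p}|B_1|}\int_{S_x}|z\cdot h|^{p-2}\,dh$ by Monte Carlo and compares it with $\frac{1}{2}-\frac{1}{2^{N+p-1}}$:

```python
import numpy as np

rng = np.random.default_rng(4)

N, p = 2, 3.0                    # illustrative choices
pts = rng.uniform(-1.0, 1.0, size=(1_200_000, N))
h = pts[np.sum(pts**2, axis=1) < 1.0]   # uniform sample from B_1 by rejection

x = np.array([0.2, 0.9])         # vartheta should not depend on this direction
z = np.array([-0.6, 0.8])        # a unit vector

wgt = np.abs(h @ z) ** (p - 2.0)
two_gamma = wgt.mean()                                   # 2*gamma_{N,p}
in_Sx = (np.sum(h**2, axis=1) >= 0.25) & (h @ x >= 0)    # S_x: 1/2 <= |h| < 1, x.h >= 0
theta_mc = np.mean(wgt * in_Sx) / two_gamma
theta_exact = 0.5 - 2.0 ** (-(N + p - 1))
print(theta_mc, theta_exact)
```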
Now we have the necessary lemmas to work out the existence through Perron's method. The idea is to take the pointwise supremum of functions in $\mathcal{S}_{f,g}$, the family of Borel measurable functions $u$ with $u-\varepsilon^2f\in C(\Omega)$ satisfying (\ref{sub-DPP}) (these would be subsolutions in the corresponding PDE context), and to show that this is the desired solution. Here we also utilize the continuity of $u-\varepsilon^2f$ in $\Omega$ so that the supremum of functions in the uncountable set $\mathcal{S}_{f,g}$ is measurable. Indeed, otherwise, to the best of our knowledge, Perron's method does not work as such (unless $p=\infty$ \cite{lius15}); instead, one needs to construct a countable sequence of functions as in \cite{luirops14} to guarantee the measurability.
\begin{theorem}
\label{thm:existence}
Let $f$ and $g$ be Borel measurable bounded functions in $\Omega$ and $\Gamma_\varepsilon$, respectively. There exists a Borel measurable function $u:\Omega_\varepsilon\to\mathbb{R}$ satisfying
\begin{equation}\label{DPP-g}
\begin{cases}
\displaystyle
u
=
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u\Big)+\varepsilon^2f & \text{ in } \Omega,
\\[1em]
\displaystyle
u
=
g & \text{ in } \Gamma_\varepsilon.
\end{cases}
\end{equation}
\end{theorem}
\begin{proof}
In view of Lemmas~\ref{lem:non-empty} and \ref{lem:boundedness}, the set $\mathcal{S}_{f,g}$ is non-empty and uniformly bounded. Thus, we can define $\overline u$ as the pointwise supremum of functions in $\mathcal{S}_{f,g}$, that is,
\begin{equation*}
\overline u (x)
=
\sup_{u\in\mathcal{S}_{f,g}}u(x)
\end{equation*}
for each $x\in\Omega_\varepsilon$. The boundedness of $\overline u$ is immediate. Moreover, $\overline u$ is Borel measurable. Indeed, since $\overline u-\varepsilon^2f$ can be expressed as the pointwise supremum of continuous functions $u-\varepsilon^2f$ with $u\in\mathcal{S}_{f,g}$, it turns out that $\overline u-\varepsilon^2f$ is lower semicontinuous in $\Omega$, and thus measurable, so the measurability of $\overline u$ follows.
By Lemma~\ref{lem:non-empty}, there exists at least one function $u$ in $\mathcal{S}_{f,g}$ such that $u=g$ in $\Gamma_\varepsilon$, so $\overline u$ agrees with $g$ in $\Gamma_\varepsilon$. On the other hand, since $\overline u\geq u$ for every $u\in\mathcal{S}_{f,g}$, then
\begin{equation*}
u-\varepsilon^2f
\leq
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u\Big)
\leq
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon\overline u+\inf_{|z|=1}\mathcal{I}^z_\varepsilon\overline u\Big)
\end{equation*}
in $\Omega$. Taking the pointwise supremum in $\mathcal{S}_{f,g}$ we have that
\begin{align}
\label{eq:ol-u-sub}
\overline u-\varepsilon^2f
\leq
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon\overline u+\inf_{|z|=1}\mathcal{I}^z_\varepsilon\overline u\Big).
\end{align}
Hence $\overline u$ is a Borel measurable bounded subsolution to (\ref{sub-DPP}) with $\overline u=g$ in $\Gamma_\varepsilon$. Next we show that $\overline u-\varepsilon^2f$ is indeed continuous in $\Omega$. For this, let $\widetilde u:\Omega_\varepsilon\to\mathbb{R}$ be the Borel measurable function defined by
\begin{equation*}
\widetilde u
=
\begin{cases}
\displaystyle\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon\overline u+\inf_{|z|=1}\mathcal{I}^z_\varepsilon\overline u\Big)+\varepsilon^2f
& \text{ in } \Omega,
\\
g & \text{ in } \Gamma_\varepsilon.
\end{cases}
\end{equation*}
Then $\overline u\leq\widetilde u$ in $\Omega$ by (\ref{eq:ol-u-sub}), so $\widetilde u$ is a subsolution to (\ref{sub-DPP}) since the right hand side above can be estimated from above by $\displaystyle\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon\widetilde u+\inf_{|z|=1}\mathcal{I}^z_\varepsilon\widetilde u\Big)+\varepsilon^2f$. Observe also that $\widetilde u-\varepsilon^2f$ is continuous in $\Omega$ by Lemma~\ref{lem:continuous}, so $\widetilde u\in\mathcal{S}_{f,g}$. Thus $\widetilde u\leq\overline u$, and in consequence $\overline u=\widetilde u\in\mathcal{S}_{f,g}$. Moreover
\begin{equation*}
\overline u
=
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon\overline u+\inf_{|z|=1}\mathcal{I}^z_\varepsilon\overline u\Big)+\varepsilon^2f
\end{equation*}
in $\Omega$ and the proof is finished.
\end{proof}
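To see the fixed-point structure of (\ref{DPP-g}) in action, the toy computation below iterates the right-hand side of the DPP on a one-dimensional grid. This is only a heuristic illustration, not the Perron argument: for $N=1$ the directions reduce to $z=\pm1$, so $\sup_z+\inf_z=\mathcal{I}^{+1}_\varepsilon+\mathcal{I}^{-1}_\varepsilon$ and the scheme degenerates to the linear average $u(x)=\frac{1}{2\gamma_{1,p}}\vint_{-1}^{1}u(x+\varepsilon t)|t|^{p-2}\,dt+\varepsilon^2f(x)$. The choices $p=3$, $f\equiv 1$, $g\equiv 0$, $\Omega=(0,1)$ and the trapezoidal quadrature are all illustrative:

```python
import numpy as np

p, eps, f = 3.0, 0.2, 1.0        # illustrative parameters
m = 20                           # quadrature points per radius eps
dx = eps / m
x = np.arange(-eps, 1.0 + eps + dx / 2, dx)   # grid on Omega_eps = (-eps, 1 + eps)
interior = (x > 0.0) & (x < 1.0)

# Quadrature weights for (1/(2*gamma)) * (1/2) * int_{-1}^{1} |t|^{p-2} ... dt.
t = np.arange(-m, m + 1) / m
wq = np.abs(t) ** (p - 2.0) / m              # |t|^{p-2} dt on the t-grid
wq[0] *= 0.5
wq[-1] *= 0.5                                # trapezoidal end corrections
gamma = 1.0 / (2.0 * (p - 1.0))              # gamma_{1,p} = (1/2) int_0^1 t^{p-2} dt
wq /= 4.0 * gamma                            # total mass of the weights is 1

u = np.zeros_like(x)                         # start from u = 0; boundary values g = 0
res = []
for _ in range(600):
    u_new = u.copy()
    for i in np.where(interior)[0]:
        u_new[i] = wq @ u[i - m:i + m + 1] + eps**2 * f
    res.append(np.max(np.abs(u_new - u)))
    u = u_new
print(res[0], res[-1], u.max())
```

The residuals decay geometrically and the iterates converge to the unique fixed point, mirroring the existence and uniqueness statements above.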
The uniqueness of solutions to (\ref{DPP-g}) is directly deduced from the following comparison principle.
\begin{theorem}
\label{thm:comparison}
Let $f$ be a Borel measurable bounded function in $\Omega$, and let $u,v:\Omega_\varepsilon\to\mathbb{R}$ be two Borel measurable solutions to the DPP (\ref{eq:dpp}) in $\Omega$ such that $u\leq v$ in $\Gamma_\varepsilon$. Then $u\leq v$ in $\Omega$.
\end{theorem}
\begin{proof}
For simplicity, we define $w=u-v$. Then $w$ is continuous in $\Omega$ by Lemma~\ref{lem:sup-inf-cont} and $w\leq 0$ in $\Gamma_\varepsilon$. Furthermore, $w$ is uniformly continuous in $\Omega$, and thus we can define $\widetilde w:\Omega_\varepsilon\to\mathbb{R}$ by
\begin{equation*}
\widetilde w(x)
=
\begin{cases}
\displaystyle\lim_{\Omega\ni y\to x}w(y) & \text{ if } x\in\partial\Omega,
\\
w(x) & \text{ otherwise,}
\end{cases}
\end{equation*}
so that $\widetilde w\in C(\overline\Omega)$.
Let us suppose, aiming at a contradiction, that
\begin{equation*}
M
=
\sup_{\Omega_\varepsilon}w
=
\max_{\overline\Omega}\widetilde w
>
0,
\end{equation*}
where the fact that $w\leq 0$ in $\Gamma_\varepsilon$ is used above. By continuity, the set $A=\{x\in\overline\Omega\,:\,\widetilde w(x)=M\}$ is non-empty and closed. Since $\overline\Omega$ is bounded, $A$ is in fact compact.
For any fixed $y\in\Omega$, by Lemma~\ref{lem:continuous-wrtz} there exist $|z_1|=|z_2|=1$ such that
\begin{equation*}
\mathcal{I}^{z_1}_\varepsilon u(y)=\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(y)
\qquad\text{ and }\qquad
\mathcal{I}^{z_2}_\varepsilon v(y)=\inf_{|z|=1}\mathcal{I}^z_\varepsilon v(y).
\end{equation*}
Then
\begin{equation*}
\begin{split}
w(y)
=
~&
u(y)-v(y)
\\
=
~&
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(y)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(y)\Big)
-
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon v(y)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon v(y)\Big)
\\
\leq
~&
\frac{1}{2}\Big(\mathcal{I}^{z_1}_\varepsilon u(y)+\mathcal{I}^{z_2}_\varepsilon u(y)\Big)
-
\frac{1}{2}\Big(\mathcal{I}^{z_1}_\varepsilon v(y)+\mathcal{I}^{z_2}_\varepsilon v(y)\Big)
\\
=
~&
\frac{1}{2}\Big(\mathcal{I}^{z_1}_\varepsilon w(y)+\mathcal{I}^{z_2}_\varepsilon w(y)\Big)
\\
\leq
~&
\sup_{|z|=1}\mathcal{I}^z_\varepsilon w(y).
\end{split}
\end{equation*}
Using this, for any $x\in A$,
\begin{equation*}
M
=
\widetilde w(x)
=
\lim_{\Omega\ni y\to x}w(y)
\leq
\lim_{\Omega\ni y\to x}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon w(y)\Big)
=
\sup_{|z|=1}\mathcal{I}^z_\varepsilon w(x)
\leq
M,
\end{equation*}
where the continuity of $x \mapsto \displaystyle\sup_{|z|=1}\mathcal{I}^z_\varepsilon w(x)$ by Lemma~\ref{lem:continuous} has been used in the last equality. That is, $\displaystyle\sup_{|z|=1}\mathcal{I}^z_\varepsilon w(x)=M$, and again by Lemma~\ref{lem:continuous-wrtz}, there exists $|z_0|=1$ such that
\begin{equation*}
\frac{1}{\gamma_{N,p}}\vint_{B_1}w(x+\varepsilon h)(z_0\cdot h)^{p-2}_+\,dh
=
M.
\end{equation*}
By the definition of $\gamma_{N,p}$ and the fact that $w\leq M$, this implies that $w(x+\varepsilon h)=M$ for a.e. $|h|<1$ such that $z_0\cdot h>0$. Then $x+\varepsilon h\in A\subset\overline\Omega$ for a.e. $|h|<1$ such that $z_0\cdot h>0$. Moreover, by the continuity of $\widetilde w$ in $\overline\Omega$, it turns out that $x+\varepsilon h\in A$ for every $|h|\leq 1$ with $z_0\cdot h\geq 0$. In particular, picking any $|h|=1$ such that $z_0\cdot h=0$ we have that $x\pm\varepsilon h\in A$, so $x=\frac{1}{2}(x+\varepsilon h)+\frac{1}{2}(x-\varepsilon h)$. That is, any point $x\in A$ is the midpoint between two different points $x_1,x_2\in A$. The contradiction then follows by choosing $x\in A$ to be an extremal point of $A$, that is, a point which cannot be written as a convex combination $\lambda x_1+(1-\lambda)x_2$ of points $x_1,x_2\in A$ with $\lambda\in(0,1)$ (take for instance any point $x\in A$ maximizing the Euclidean norm among all points in $A$). Hence $M\leq 0$, and the proof is finished.
\end{proof}
\section{Regularity for the tug-of-war game with $1<p<2$}
\label{sec:reg}
The above DPP can also be stochastically interpreted. It is related to a two-player zero-sum game played in a bounded domain $\Omega\subset \mathbb{R}^N$. When the players are at $x\in \Omega$, they toss a fair coin and the winner of the toss may choose $z\in \mathbb{R}^N,\ \abs{z}=1$, so the next point is chosen according to the probability measure
\begin{align*}
A\mapsto \frac{1}{\gamma_{N,p}}\frac{1}{\abs{B_\varepsilon(x)}}\int_{A\cap B_\varepsilon(x)}\Big(z\cdot \frac{h-x}{\varepsilon}\Big)_+^{p-2}\,dh.
\end{align*}
Then the players play a new round starting from the current position.
When the game exits the domain and the first point outside the domain is denoted by $x_{\tau}$, Player II pays Player I the amount given by $F(x_{\tau})$, where $F:\mathbb{R}^N\setminus \Omega\to \mathbb{R}$
is a given payoff function.
Since Player I gets the payoff at the end, she tries (heuristically speaking) to maximize the outcome, and since Player II has to pay it, he tries to minimize it.
This is a variant of a so called tug-of-war game considered for example in \cite{peresssw09, peress08, manfredipr12}.
As explained in more detail in those references, $u$ denotes the value
of the game, i.e.\ the expected payoff of the game when players are optimizing over their strategies. Then for this $u$ the DPP holds and it can be heuristically interpreted by considering one round of the game and summing up the different outcomes (either Player I or Player II wins the toss) with the corresponding probabilities.
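To make the one-round heuristics concrete, here is a minimal numerical sketch (ours, not part of the paper) of value iteration for the DPP in dimension $N=1$. There $z\in\{-1,+1\}$, so the sup and the inf together pick up both directions, $\gamma_{1,p}=\frac{1}{2(p-1)}$, and the DPP becomes $u(x)=\frac{1}{2}\big(I^+u(x)+I^-u(x)\big)+\varepsilon^2f(x)$ with $I^{\pm}u(x)=(p-1)\int_0^1u(x\pm\varepsilon h)h^{p-2}\,dh$. All discretization choices below (values of $p$, $\varepsilon$, the grid, the boundary data) are ours, chosen only for illustration; with $f=0$ and linear boundary data the discrete solution is linear.

```python
import numpy as np

p, eps = 1.5, 0.3            # exponent 1 < p < 2 and step size (our choices)
f = lambda x: 0.0 * x        # source term
g = lambda x: 1.0 * x        # boundary data on the collar around (0, 1)

# Grid on [-eps - d, 1 + eps + d/2]; points outside (0, 1) form the collar
# where u is pinned to g, so every query x +- eps*h stays on the grid.
d = 0.02
grid = np.arange(-eps - d, 1 + eps + d / 2, d)
interior = (grid > 0.0) & (grid < 1.0)

# Quadrature for (p - 1) * int_0^1 (.) h^(p-2) dh: exact cell masses
# (they telescope to exactly 1), evaluated at cell midpoints.
edges = np.linspace(0.0, 1.0, 33)
weights = edges[1:] ** (p - 1) - edges[:-1] ** (p - 1)
nodes = 0.5 * (edges[1:] + edges[:-1])

u = g(grid)
u[interior] = 0.0            # start away from the exact solution
for _ in range(4000):
    Iplus = sum(w * np.interp(grid[interior] + eps * m, grid, u)
                for w, m in zip(weights, nodes))
    Iminus = sum(w * np.interp(grid[interior] - eps * m, grid, u)
                 for w, m in zip(weights, nodes))
    new = 0.5 * (Iplus + Iminus) + eps ** 2 * f(grid[interior])
    done = np.max(np.abs(new - u[interior])) < 1e-13
    u[interior] = new
    if done:
        break
```

The iteration is a convex average of neighboring values with the collar pinned to $g$, so it converges to the unique fixed point of the discretized DPP.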
Next we show that if $u$ is a solution to the DPP (\ref{eq:dpp}), then it satisfies the extremal inequalities (when $1<p<2$ and also $p=2$) needed in order to apply the H\"older result in Theorem \ref{Holder}. However, in the case $2<p<\infty$ the DPP (\ref{eq:dpp}) does not have any Pucci bounds, as we explain later in Remark~\ref{remark-p>2}.
\begin{proposition}
\label{prop:satisfies-extremal}
Let $1<p<2$ and $u$ be a bounded Borel measurable function satisfying
\begin{equation*}
u(x)
=
\frac{1}{2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)\Big)
+
\varepsilon^2f(x).
\end{equation*}
Then $\mathcal{L}_\varepsilon^+u+f\geq0$ and $\mathcal{L}_\varepsilon^-u+f\leq0$ for some $1-\alpha=\beta>0$ depending on $N$ and $p$, where $\mathcal{L}_\varepsilon^+$ and $\mathcal{L}_\varepsilon^-$ are the extremal operators as in Definition~\ref{def:pucci}.
\end{proposition}
\begin{proof}
Let
\begin{equation*}
1-\alpha
=
\beta
=
\frac{1}{2\gamma_{N,p}}
\in
(0,1)
\end{equation*}
and, for $|z|=1$, consider the measure defined as
\begin{equation*}
\nu(E)
=
\frac{1}{|B_1|}\int_{B_1\cap E}\frac{|z\cdot h|^{p-2}-1}{2\gamma_{N,p}-1}\,dh
\end{equation*}
for every Borel measurable set $E\subset B_1$. Then $\nu\in\mathcal{M}(B_1)$. Indeed, by the definition of $\gamma_{N,p}$ and the fact that $\gamma_{N,p}>\frac{1}{2}$ for $1<p<2$, we have that $\nu$ is a positive measure such that $\nu(B_1)=1$.
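Spelled out (this computation is implicit above), the normalization follows from $\displaystyle\vint_{B_1}|z\cdot h|^{p-2}\,dh=2\gamma_{N,p}$:
\begin{equation*}
\nu(B_1)
=
\frac{1}{2\gamma_{N,p}-1}\left(\vint_{B_1}|z\cdot h|^{p-2}\,dh-1\right)
=
\frac{2\gamma_{N,p}-1}{2\gamma_{N,p}-1}
=
1,
\end{equation*}
while the positivity of the density follows since $|z\cdot h|\leq 1$ and $p-2<0$ imply $|z\cdot h|^{p-2}\geq 1$ in $B_1$.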
Therefore,
\begin{equation*}
\begin{split}
\alpha\int_{B_\Lambda}u(x+\varepsilon h)\,d\nu(h)+\beta\vint_{B_1}u(x+\varepsilon h)\,dh
=
~&
\frac{1}{2}\cdot\frac{1}{\gamma_{N,p}}\vint_{B_1}u(x+\varepsilon h)|z\cdot h|^{p-2}\,dh
\\
=
~&
\frac{1}{2}\Big(\mathcal{I}^z_\varepsilon u(x)+\mathcal{I}^{-z}_\varepsilon u(x)\Big),
\end{split}
\end{equation*}
so
\begin{equation*}
\mathcal{L}^-_\varepsilon u(x)
\leq
\frac{\mathcal{I}^z_\varepsilon u(x)+\mathcal{I}^{-z}_\varepsilon u(x)-2u(x)}{2\varepsilon^2}
\leq
\mathcal{L}^+_\varepsilon u(x)
\end{equation*}
for every $|z|=1$. Now, if $u$ is a solution to the DPP, then
\begin{equation*}
\begin{split}
-f(x)
=
~&
\frac{1}{2\varepsilon^2}\Big(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)-2u(x)\Big)
\\
\leq
~&
\sup_{|z|=1}\left\{\frac{\mathcal{I}^z_\varepsilon u(x)+\mathcal{I}^{-z}_\varepsilon u(x)-2u(x)}{2\varepsilon^2}\right\}
\\
\leq
~&
\mathcal{L}^+_\varepsilon u(x),
\end{split}
\end{equation*}
and similarly for $\mathcal{L}^-_\varepsilon u(x)\leq-f(x)$.
\end{proof}
\begin{remark}\label{remark-p>2}
The extremal inequalities do not hold for $2<p<\infty$. Indeed, the map
\begin{equation*}
E\mapsto\frac{1}{2\gamma_{N,p}|B_1|}\int_{B_1\cap E}|z\cdot h|^{p-2}\,dh
\end{equation*}
defines a probability measure in $B_1$ which is absolutely continuous with respect to the Lebesgue measure, and whose density function vanishes as $h$ approaches a direction orthogonal to $z$ when $p>2$. Thus, it is not possible to decompose the measure as a convex combination of the uniform probability measure on $B_1$ and any probability measure $\nu$, which is an essential step in the proof of the H\"older estimate in \cite{arroyobp} and \cite{arroyobp2}.
\end{remark}
By Proposition \ref{prop:satisfies-extremal}, a solution $u$ to the DPP (\ref{eq:dpp}) satisfies the conditions of Theorem~\ref{Holder}. Thus it immediately follows that $u$ is asymptotically H\"older continuous.
\begin{corollary}
\label{cor:holder}
There exists $\varepsilon_0>0$ such that if $u$ is a solution to the DPP (\ref{eq:dpp}) in $B_{R}$ where $\varepsilon<\varepsilon_0R$, there exist $C,\gamma>0$ (independent of $\varepsilon$) such that
\[
|u(x)-u(y)|\leq \frac{C}{R^\gamma}\left(\sup_{B_{R}}|u|+R^2\sup_{B_R}\abs{f}\right)\big(|x-y|^\gamma+\varepsilon^\gamma\big)
\]
for every $x,y\in B_{R/2}$.
\end{corollary}
\section{A connection to the $p$-Laplacian}
\label{sec:p-lap}
In this section we consider a connection of solutions to the DPP (\ref{eq:dpp}) to the viscosity solutions to
\begin{align}
\label{eq:normpl}
\Delta_p^N u=-f,
\end{align}
where we now assume $f\in C(\overline\Omega)$.
Here $\Delta_p^Nu$ stands for the normalized $p$-Laplacian which is the non-divergence form operator
\begin{equation*}
\Delta_p^Nu
=
\Delta u+(p-2)\frac{\prodin{D^2u\nabla u}{\nabla u}}{|\nabla u|^2}.
\end{equation*}
In \cite[Section 7]{arroyobp} it was already pointed out that in the case $2 \le p<\infty$ this follows (up to a multiplicative constant) from the dynamic programming principle describing the usual tug-of-war game (\ref{usual-DPP}), so here the main interest lies in the case $1<p<2$.
First, to establish the connection to the $p$-Laplace equation, we need to derive asymptotic expansions related to the DPP (\ref{eq:dpp}) for $C^2$-functions.
The expansion below holds for the full range $1<p<\infty$.
\begin{proposition}
\label{prop:asymp-exp}
Let $u\in C^2(\Omega)$. If $1<p<\infty$, then
\begin{equation}
\label{eq:asymp}
\begin{split}
\mathcal{I}^z_\varepsilon u(x)
=
~&
u(x)+\varepsilon\,\frac{\gamma_{N,p+1}}{\gamma_{N,p}}\,\nabla u(x)\cdot z
\\
~&
+
\frac{\varepsilon^2}{2(N+p)}\left[\Delta u(x)+(p-2)\prodin{D^2u(x)z}{z}\right]+o(\varepsilon^2).
\end{split}
\end{equation}
In particular, if $\nabla u(x)\neq 0$ and $z^*=\frac{\nabla u(x)}{|\nabla u(x)|}$, then
\begin{equation*}
\lim_{\varepsilon\to0}\frac{\mathcal{I}^{z^*}_\varepsilon u(x)+\mathcal{I}^{-z^*}_\varepsilon u(x)-2u(x)}{2\varepsilon^2}
=
\frac{1}{2(N+p)}\,\Delta^N_pu(x).
\end{equation*}
\end{proposition}
\begin{proof}
For the sake of simplicity, we use the notation for the tensor product of (column) vectors in $\mathbb{R}^N$, $v\otimes w=vw^\top$, which allows to write $\prodin{Mv}{v}=\mathrm{Tr}\left\{M\,v\otimes v\right\}$. Using the second order Taylor's expansion of $u$ we obtain
\begin{align}
\label{eq:expansion}
\frac{\mathcal{I}^z_\varepsilon u(x)-u(x)}{\varepsilon}
=
~&
\frac{1}{\gamma_{N,p}}\vint_{B_1}\left(\nabla u(x)\cdot h+\frac{\varepsilon}{2}\,\mathrm{Tr}\left\{D^2u(x) h\otimes h\right\}+o(\varepsilon)\right)(z\cdot h)_+^{p-2}\,dh\nonumber
\\
=
~&
\nabla u(x)\cdot\left(\frac{1}{\gamma_{N,p}}\vint_{B_1}h\,(z\cdot h)_+^{p-2}\,dh\right)\nonumber
\\
~&
+
\frac{\varepsilon}{2}\,\mathrm{Tr}\left\{D^2u(x)\left(\frac{1}{\gamma_{N,p}}\vint_{B_1}h\otimes h\,(z\cdot h)_+^{p-2}\,dh\right)\right\}+o(\varepsilon).
\end{align}
In order to compute the first integral on the right-hand side of the previous identity, let $R$ be any orthogonal transformation such that $Re_1=z$, i.e.\ $e_1=R^\top z$. Then a change of variables $Rw=h$ yields
\begin{align*}
\vint_{B_1}h\,(z\cdot h)_+^{p-2}\,dh
=
R\vint_{B_1}w\,(w_1)_+^{p-2}\,dw
\end{align*}
using $z\cdot Rw=z^\top Rw=(R^\top z)^\top w=e_1^\top w=w_1$. Going back to the original notation and observing by symmetry that
\begin{equation*}
\vint_{B_1}h_i(h_1)_+^{p-2}\,dh
=
\begin{cases}
\displaystyle\vint_{B_1}(h_1)_+^{p-1}\,dh
=
\gamma_{N,p+1}
&
\text{ if } i=1,
\\
0
&
\text{ if } i\neq1,
\end{cases}
\end{equation*}
we get
\begin{align*}
\vint_{B_1}h\,(z\cdot h)_+^{p-2}\,dh=R\vint_{B_1}h\,(h_1)_+^{p-2}\,dh=\gamma_{N,p+1}Re_1=\gamma_{N,p+1}z.
\end{align*}
Next we repeat the change of variables in the second integral on the right hand side of (\ref{eq:expansion}) to get
\begin{equation*}
\begin{split}
\frac{1}{\gamma_{N,p}}\vint_{B_1}h\otimes h\,(z\cdot h)_+^{p-2}\,dh
=
~&
R\left(\frac{1}{\gamma_{N,p}}\vint_{B_1}h\otimes h\,(h_1)_+^{p-2}\,dh\right)R^\top.
\end{split}
\end{equation*}
Observe that the integral in parentheses above is a diagonal matrix. Indeed, for $i\neq j$, by symmetry,
\begin{equation*}
\frac{1}{\gamma_{N,p}}\vint_{B_1}h_ih_j\,(h_1)_+^{p-2}\,dh
=
0.
\end{equation*}
In order to compute the diagonal elements, we use the explicit values of the normalization constants from Lemma~\ref{integral-p} to obtain
\begin{equation*}
\frac{1}{\gamma_{N,p}}\vint_{B_1}h_i^2\,(h_1)_+^{p-2}\,dh
=
\frac{1}{2\gamma_{N,p}}\vint_{B_1}h_i^2\,|h_1|^{p-2}\,dh
=
\begin{cases}
\frac{p-1}{N+p}, & i=1,\\[5pt]
\frac{1}{N+p}, & i=2,\ldots,N.
\end{cases}
\end{equation*}
Combining, we get
\begin{align*}
\frac{1}{\gamma_{N,p}}\vint_{B_1}h\otimes h\,(z\cdot h)_+^{p-2}\,dh
=~&
R\left(\frac{1}{N+p}\,(I-e_1\otimes e_1)+\frac{p-1}{N+p}\,e_1\otimes e_1\right)R^\top
\\
=
~&
\frac{1}{N+p}\left(I+(p-2)z\otimes z\right).
\end{align*}
The proof is concluded after replacing these integrals in the expansion for $\mathcal{I}^z_\varepsilon u(x)$.
\end{proof}
Next we show that the solutions to the DPP (\ref{eq:dpp}) converge uniformly as $\varepsilon\to 0$ to a viscosity solution of
\begin{align*}
\Delta_p^Nu=-2(N+p)f.
\end{align*}
But before that, we recall the definition of viscosity solutions for the convenience of the reader. Below $\lambda_{\max} (D^2\phi(x_0))$ and $\lambda_{\min} (D^2\phi(x_0))$ refer to the largest and smallest eigenvalue, respectively, of $D^2\phi(x_0)$. This definition is equivalent to the standard way of defining viscosity solutions through convex envelopes. Different definitions of viscosity solutions in this context are analyzed for example in \cite[Section 2]{kawohlmp12}.
\begin{definition}
\label{eq:def-normalized-visc}
Let $\Omega\subset\mathbb{R}^N$ be a bounded domain and $1<p<\infty$. A lower semicontinuous function $u$ is a viscosity supersolution of (\ref{eq:normpl}) if for all $x_0\in\Omega$ and $\phi\in C^2(\Omega)$ such that $u-\phi$ attains a local minimum at $x_0$, one has
\begin{equation*}
\begin{cases}
\Delta_p^N \phi(x_0)\le -f(x_0)\ &\text{if}\quad \nabla\phi(x_0)\neq 0,\\
\Delta\phi(x_0)+(p-2)\lambda_{\max} (D^2\phi(x_0))\le -f(x_0)\ &\text{if}\quad\nabla\phi(x_0)=0\,\,\text{and}\,\, p\geq 2,\\
\Delta\phi(x_0)+(p-2)\lambda_{\min} (D^2\phi(x_0))\le -f(x_0)\ &\text{if}\quad\nabla\phi(x_0)=0\,\,\text{and}\,\, 1<p<2.
\end{cases}
\end{equation*}
An upper semicontinuous function $u$ is a viscosity subsolution of (\ref{eq:normpl}) if $-u$ is a supersolution. We say that $u$ is a viscosity solution of (\ref{eq:normpl}) in $\Omega$ if it is both a viscosity sub- and supersolution.
\end{definition}
\begin{theorem}
\label{thm:convergence}
Let $1<p<2$ and $\{u_\varepsilon\}$ be a family of uniformly bounded Borel measurable solutions to the DPP (\ref{eq:dpp}). Then there is a subsequence and a H\"older continuous function $u$ such that
\begin{align*}
u_{\varepsilon}\to u \quad \text{locally uniformly.}
\end{align*}
Moreover, $u$ is a viscosity solution to $\Delta_p^Nu=-2(N+p)f$.
\end{theorem}
\begin{proof}
First we use the asymptotic Arzel\`a--Ascoli theorem \cite[Lemma 4.2]{manfredipr12} in connection with Theorem~\ref{Holder} to find a subsequence converging locally uniformly to a H\"older continuous function.
Then it remains to verify that the limit is a viscosity solution to the $p$-Laplace equation. For $\phi\in C^2(\Omega)$, fix $x\in \Omega$. By Lemma~\ref{lem:continuous-wrtz}, there exists $|z_1^\varepsilon|=1$ such that
\[
\mathcal{I}^{z_1^\varepsilon}_\varepsilon\phi(x)=\inf_{|z|=1}\mathcal{I}^z_\varepsilon \phi(x).
\]
By Proposition~\ref{prop:asymp-exp},
\begin{multline}
\label{eq:approx-ineq}
\frac{\inf_{|z|=1}\mathcal{I}^{z}_\varepsilon \phi(x)+\sup_{|z|=1}\mathcal{I}^{z}_\varepsilon \phi(x)-2\phi(x)}{2\varepsilon^2}
\\
\begin{split}
\ge
~&
\frac{\mathcal{I}^{z_1^\varepsilon}_\varepsilon \phi(x)+\mathcal{I}^{-z_1^\varepsilon}_\varepsilon \phi(x)-2\phi(x)}{2\varepsilon^2}
\\
=
~&
\frac{1}{2(N+p)}\left[\Delta \phi(x)+(p-2)\mathrm{Tr}\left\{D^2\phi(x)\, z_1^\varepsilon\otimes z_1^\varepsilon\right\}\right]+\frac{o(\varepsilon^2)}{\varepsilon^{2}}.
\end{split}
\end{multline}
Let $u$ be the H\"older continuous limit obtained as a uniform limit of the solutions to the DPP. Choose a point $x_0\in \Omegaega$ and a $C^2$-function $\phi$ defined in a neighborhood of $x_0$ touching $u$ at $x_0$ from below.
By the uniform convergence, there exists a sequence $x_{\varepsilon}$ converging to $x_0$ such that $u_{\varepsilon} - \phi$ attains a minimum at $x_{\varepsilon}$, up to an error $\eta_{\varepsilon}>0$ (see \cite[Section 10.1.1]{evans10}); that is, there exists $x_{\varepsilon}$ such that
\begin{equation*}
u_{\varepsilon} (y) - \phi (y)
\geq
u_{\varepsilon} (x_{\varepsilon}) - \phi(x_{\varepsilon})-\eta_{\varepsilon}
\end{equation*}
in a vicinity of $x_{\varepsilon}$. The arbitrary error $\eta_{\varepsilon}$ is due to the fact that $u_{\varepsilon}$ may be discontinuous, so the infimum might not be attained. Moreover, by adding a constant, we may assume that $\phi(x_{\varepsilon}) = u_{\varepsilon} (x_{\varepsilon})$, so that $\phi$ approximately touches $u_{\varepsilon}$ from below. Recalling that $u_\varepsilon$ is a solution to the DPP (\ref{eq:dpp}) and that $\mathcal{I}^z_\varepsilon$ is monotone and linear (see Remark~\ref{rem-average}), we have that
\begin{equation*}
\mathcal{I}^z_\varepsilon u_\varepsilon(x_\varepsilon)
\geq
\mathcal{I}^z_\varepsilon\phi (x_\varepsilon)+u_\varepsilon(x_\varepsilon)-\phi(x_\varepsilon)-\eta_\varepsilon.
\end{equation*}
Thus, by choosing $\eta_{\varepsilon}=o(\varepsilon^2)$, we obtain
\[
\frac{o(\varepsilon^2)}{\varepsilon^2} \ge \frac{\inf_{|z|=1}\mathcal{I}^{z}_\varepsilon \phi(x_{\varepsilon})+\sup_{|z|=1}\mathcal{I}^{z}_\varepsilon \phi(x_{\varepsilon})-2\phi(x_{\varepsilon})+2\varepsilon^2 f(x_{\varepsilon})}{2\varepsilon^2}.
\]
Using (\ref{eq:approx-ineq}) at $x_{\varepsilon}$ and combining this with the previous estimate, we obtain
\begin{align}
\label{eq:final-expansion}
-f(x_{\varepsilon})+\frac{o(\varepsilon^2)}{\varepsilon^2} \ge\frac{1}{2(N+p)}\left[\Delta \phi(x_{\varepsilon})+(p-2)\mathrm{Tr}\left\{D^2\phi(x_{\varepsilon})\, z_1^\varepsilon\otimes z_1^\varepsilon\right\}\right].
\end{align}
Let us assume first that $\nabla\phi(x_0)\neq 0$. By (\ref{eq:asymp}), we see that
\begin{align*}
\lim_{\varepsilon\to 0}z_{1}^{\varepsilon}=-\frac{\nabla \phi(x_0)}{\abs{\nabla \phi(x_0)}},
\end{align*}
and thus we end up with
\begin{align*}
-f(x_0) \ge\frac{1}{2(N+p)}\Delta_p^N \phi(x_0).
\end{align*}
Finally we consider the case $\nabla\phi(x_0)=0$. Similarly as above, (\ref{eq:final-expansion}) follows. Even if we now have no information on the convergence of $z_1^\varepsilon$, since
\begin{equation*}
\lambda_{\min}(M)
\leq
\mathrm{Tr}\left\{M\,z\otimes z\right\}
\leq
\lambda_{\max}(M)
\end{equation*}
for every $|z|=1$, we still can deduce
\begin{align*}
\begin{cases}
\Delta\phi(x_0)+(p-2)\lambda_{\min} (D^2\phi(x_0))\le -2(N+p)f(x_0)\quad &\text{if}\quad p\geq 2,\\
\Delta\phi(x_0)+(p-2)\lambda_{\max} (D^2\phi(x_0))\le -2(N+p)f(x_0)\quad &\text{if}\quad 1<p<2.
\end{cases}
\end{align*}
Thus we have shown that $u$ is a viscosity supersolution to the $p$-Laplace equation. Similarly, starting with $\sup$ instead of $\inf$, we can show that $u$ is a subsolution, and thus a solution.
\end{proof}
\begin{remark}
For $1<p<\infty$, $u\in C^2(\Omega)$ and $x\in\Omega$ such that $\nabla u(x)\neq 0$ we could also show that
\begin{equation*}
\begin{split}
\frac{1}{2}\bigg(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)\bigg)
=
~&
u(x)+
\frac{\varepsilon^2}{2(N+p)}\,\Delta^N_pu(x)+o(\varepsilon^2).
\end{split}
\end{equation*}
Indeed, by working carefully through the estimates similarly as in \cite[Lemmas 2.1--2.2]{peress08}, we could show
\begin{equation*}
\begin{split}
\frac{1}{2}\bigg(\sup_{|z|=1}\mathcal{I}^z_\varepsilon u(x)+\inf_{|z|=1}\mathcal{I}^z_\varepsilon u(x)\bigg)
=
~&
\frac{\mathcal{I}^{z^*}_\varepsilon u(x)+\mathcal{I}^{-z^*}_\varepsilon u(x)}{2}+o(\varepsilon^2),
\end{split}
\end{equation*}
for $z^*=\frac{\nabla u(x)}{|\nabla u(x)|}$.
By Proposition~\ref{prop:asymp-exp}, we have
\begin{align*}
\frac{\mathcal{I}^{z^*}_\varepsilon u(x)+\mathcal{I}^{-z^*}_\varepsilon u(x)}{2}+o(\varepsilon^2)
=
~&
u(x)+\frac{\varepsilon^2}{2(N+p)}\,\Delta^N_pu(x)+o(\varepsilon^2),
\end{align*}
and combining these estimates we obtain the desired estimate. This would give an alternative way to write down the proof that the limit is a $p$-harmonic function. Reading this expansion in a viscosity sense gives a different characterization of $p$-harmonic functions as in \cite{manfredipr10} and \cite{kawohlmp12} by the same proof as above.
\end{remark}
\appendix
\section{Some useful integrals}
In the appendix, we record some useful integrals that, no doubt, are known to the experts but are hard to find in the literature.
\begin{lemma}
Let $\alpha_1,\ldots,\alpha_N>-1$. Then
\begin{equation}\label{integrals0}
\vint_{B_1}|h_1|^{\alpha_1}\cdots|h_N|^{\alpha_N}\,dh_1\cdots dh_N
=
\frac{1}{\pi^{N/2}}\cdot\frac{\Gamma(\frac{N}{2}+1)\Gamma(\frac{\alpha_1+1}{2})\cdots\Gamma(\frac{\alpha_N+1}{2})}{\Gamma(\frac{N+\alpha_1+\cdots+\alpha_N+2}{2})}.
\end{equation}
\end{lemma}
\begin{proof}
For convenience, we denote by $B_1^N$ the $N$-dimensional unit ball centered at the origin. We decompose the integral over $B_1^N$ by integrating in the first place with respect to $t=h_N$, that is
\begin{multline*}
\int_{B_1^N}|h_1|^{\alpha_1}\cdots|h_N|^{\alpha_N}\,dh_1\cdots dh_N
\\
=
\int_{-1}^1|t|^{\alpha_N}\left(\int_{\sqrt{1-t^2}\,B_1^{N-1}}|h_1|^{\alpha_1}\cdots|h_{N-1}|^{\alpha_{N-1}}\,dh_1\cdots dh_{N-1}\right)\,dt.
\end{multline*}
Then the change of variables $\sqrt{1-t^2}(w_1,\ldots,w_{N-1})=(h_1,\ldots,h_{N-1})$ and returning to the original notation gives
\begin{align*}
&\int_{B_1^N}|h_1|^{\alpha_1}\cdots|h_N|^{\alpha_N}\,dh_1\cdots dh_N
\\
&=
\int_{-1}^1|t|^{\alpha_N}(1-t^2)^{\frac{N+\alpha_1+\ldots+\alpha_{N-1}-1}{2}}\,dt
\cdot\int_{B_1^{N-1}}|h_1|^{\alpha_1}\cdots|h_{N-1}|^{\alpha_{N-1}}\,dh_1\cdots dh_{N-1}.
\end{align*}
Next we focus on the first integral on the right-hand side. Using the symmetry properties and performing a change of variables we get
\begin{align*}
\int_{-1}^1|t|^{\alpha_N}(1-t^2)^{\frac{N+\alpha_1+\ldots+\alpha_{N-1}-1}{2}}\,dt
=
~&
2\int_0^1t^{\alpha_N}(1-t^2)^{\frac{N+\alpha_1+\ldots+\alpha_{N-1}-1}{2}}\,dt
\\
=
~&
\int_0^1t^{\frac{\alpha_N-1}{2}}(1-t)^{\frac{N+\alpha_1+\ldots+\alpha_{N-1}-1}{2}}\,dt
\\
=
~&
\frac{\Gamma(\tfrac{\alpha_N+1}{2})\Gamma(\tfrac{N+\alpha_1+\ldots+\alpha_{N-1}+1}{2})}{\Gamma(\tfrac{N+\alpha_1+\ldots+\alpha_{N-1}+\alpha_N+2}{2})},
\end{align*}
where for the last equality we have recalled the well-known formula arising in connection to the $\beta$-function
\begin{equation*}
\int_0^1t^{x-1}(1-t)^{y-1}\,dt
=
\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}
\end{equation*}
for $x,y>0$. Repeating the argument, at a general step we obtain
\begin{multline*}
\int_{B_1^{N-k}}|h_1|^{\alpha_1}\cdots|h_{N-k}|^{\alpha_{N-k}}\,dh_1\cdots dh_{N-k}
\\
\begin{split}
=&
\int_{-1}^1|t|^{\alpha_{N-k}}(1-t^2)^{\frac{\alpha_1+\ldots+\alpha_{N-k-1}+N-k-1}{2}}\,dt
\\
&\hspace{1 em}\cdot\int_{B_1^{N-k-1}}|h_1|^{\alpha_1}\cdots|h_{N-k-1}|^{\alpha_{N-k-1}}\,dh_1\cdots dh_{N-k-1}
\\
=&\int_{0}^1|t|^{\frac{\alpha_{N-k}+1}{2}-1}(1-t)^{\frac{\alpha_1+\ldots+\alpha_{N-k-1}+N-k+1}{2}-1}\,dt
\\
&\hspace{1 em}\cdot\int_{B_1^{N-k-1}}|h_1|^{\alpha_1}\cdots|h_{N-k-1}|^{\alpha_{N-k-1}}\,dh_1\cdots dh_{N-k-1}.
\end{split}
\end{multline*}
Iterating the above formula, and dividing out the repeating terms we get
\begin{multline*}
\int_{B_1^N}|h_1|^{\alpha_1}\cdots|h_N|^{\alpha_N}\,dh_1\cdots dh_N\\
= \frac{1}{\Gamma(\tfrac{\alpha_1+\ldots+\alpha_N+N+2}{2})}\Gamma(\tfrac{\alpha_N+1}{2})\cdots \Gamma(\tfrac{\alpha_2+1}{2}) \Gamma(\tfrac{\alpha_1+3}{2})\int_{-1}^1|h_1|^{\alpha_1}\,dh_1.
\end{multline*}
Then using the definition of the Gamma function and integration by parts, we get
\begin{equation*}
\Gamma(\tfrac{\alpha_1+3}{2})\int_{-1}^1|h_1|^{\alpha_1}\,dh_1
=\Gamma(\tfrac{\alpha_1+1}{2}+1)\frac{2}{\alpha_1+1}
=\Gamma(\tfrac{\alpha_1+1}{2}).
\end{equation*}
Finally, the result follows by recalling that
\begin{equation*}
|B_1^N|
=
\frac{\pi^{N/2}}{\Gamma(\tfrac{N}{2}+1)}.\qedhere
\end{equation*}
\end{proof}
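As a quick numerical sanity check of (\ref{integrals0}) (ours, not part of the paper), one can compare the formula with a direct grid average for $N=2$ and $\alpha_1=\alpha_2=2$, where both sides equal $1/24$:

```python
import math
import numpy as np

# Average of h1^2 * h2^2 over the unit disk (N = 2), computed on a grid,
# versus the Gamma-function formula (1/pi) * Gamma(2) * Gamma(3/2)^2 / Gamma(4).
n = 1201
s = np.linspace(-1.0, 1.0, n)
h1, h2 = np.meshgrid(s, s, indexing="ij")
inside = h1 ** 2 + h2 ** 2 <= 1.0
numeric = (h1 ** 2 * h2 ** 2)[inside].mean()
formula = (1.0 / math.pi) * (math.gamma(2.0) * math.gamma(1.5) ** 2
                             / math.gamma(4.0))  # equals 1/24
```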
The next lemma follows as a special case of the previous result. This lemma is the one we actually use in the proofs.
\begin{lemma}\label{integral-p}
Let $1<p<\infty$. Then
\begin{equation}\label{integral-p0}
\frac{1}{2\gamma_{N,p}}\vint_{B_1}|h_1|^p\,dh
=
\frac{p-1}{N+p}
\end{equation}
and
\begin{equation}\label{integral-p2}
\frac{1}{2\gamma_{N,p}}\vint_{B_1}|h_1|^{p-2}|h_2|^2\,dh
=
\frac{1}{N+p}.
\end{equation}
\end{lemma}
\begin{proof}
First we recall the definition of $\gamma_{N,p}$ in (\ref{gamma-ctt}) and use (\ref{integrals0}) to obtain
\begin{equation}
\label{eq:precise-constant}
\gamma_{N,p}
=
\vint_{B_1}(z\cdot h)_+^{p-2}\,dh
=
\frac{1}{2}\vint_{B_1}|h_1|^{p-2}\,dh
=
\frac{1}{2\sqrt{\pi}}\cdot\frac{\Gamma(\frac{N}{2}+1)\Gamma(\frac{p-1}{2})}{\Gamma(\frac{N+p}{2})}.
\end{equation}
\sloppy
Since $\Gamma(s+1)=s\,\Gamma(s)$ and $\Gamma(\frac{1}{2})=\sqrt\pi$, applying the identity (\ref{integrals0}) with $\alpha_1=p$ and $\alpha_2=\ldots=\alpha_N=0$ we have
\begin{equation*}
\vint_{B_1}|h_1|^p\,dh
=
\frac{1}{\sqrt{\pi}}\cdot\frac{\Gamma(\frac{N}{2}+1)\Gamma(\frac{p+1}{2})}{\Gamma(\frac{N+p+2}{2})}
=
\frac{p-1}{N+p}\cdot\frac{1}{\sqrt{\pi}}\cdot\frac{\Gamma(\frac{N}{2}+1)\Gamma(\frac{p-1}{2})}{\Gamma(\frac{N+p}{2})},
\end{equation*}
and (\ref{integral-p0}) follows by combining the previous formulas. Similarly, since $\Gamma(\frac{3}{2})=\frac{\sqrt\pi}{2}$,
\begin{equation*}
\begin{split}
\vint_{B_1}|h_1|^{p-2}|h_2|^2\,dh
=
\frac{1}{\pi}\cdot\frac{\Gamma(\frac{N}{2}+1)\Gamma(\frac{p-1}{2})\Gamma(\frac{3}{2})}{\Gamma(\frac{N+p+2}{2})}
=
\frac{1}{N+p}\cdot\frac{1}{\sqrt{\pi}}\cdot\frac{\Gamma(\frac{N}{2}+1)\Gamma(\frac{p-1}{2})}{\Gamma(\frac{N+p}{2})},
\end{split}
\end{equation*}
and (\ref{integral-p2}) follows.
\end{proof}
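As a quick consistency check of (\ref{eq:precise-constant}) and (\ref{integral-p0}), note that for $p=2$, since $\Gamma(\frac{1}{2})=\sqrt{\pi}$ and $\Gamma(\frac{N+2}{2})=\Gamma(\frac{N}{2}+1)$,
\begin{equation*}
\gamma_{N,2}
=
\frac{1}{2\sqrt{\pi}}\cdot\frac{\Gamma(\frac{N}{2}+1)\Gamma(\frac{1}{2})}{\Gamma(\frac{N+2}{2})}
=
\frac{1}{2},
\end{equation*}
so that $\mathcal{I}^z_\varepsilon$ becomes the mean value over a half-ball, while (\ref{integral-p0}) reduces to the classical second moment $\vint_{B_1}h_1^2\,dh=\frac{1}{N+2}$.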
\begin{corollary}
Let $1<p<\infty$ and $|z|=1$. Then
\begin{equation}\label{integral-p3}
\frac{1}{2\gamma_{N,p}}\vint_{B_1}|h|^2|z\cdot h|^{p-2}\,dh
=
\frac{N+p-2}{N+p}.
\end{equation}
\end{corollary}
\begin{proof}
By symmetry, we can take $z=e_1$, so $z\cdot h=h_1$. Then
$|h|^2|h_1|^{p-2}=|h_1|^p+|h_1|^{p-2}|h_2|^2+\ldots+|h_1|^{p-2}|h_N|^2$, so
\begin{equation*}
\vint_{B_1}|h|^2|z\cdot h|^{p-2}\,dh
=
\vint_{B_1}|h_1|^p\,dh+(N-1)\vint_{B_1}|h_1|^{p-2}|h_2|^2\,dh,
\end{equation*}
so (\ref{integral-p3}) follows from (\ref{integral-p0}) and (\ref{integral-p2}).
\end{proof}
\noindent\textbf{Acknowledgements.}
\'A.~A.\ is supported by grant PID2021-123151NB-I00.
\end{document} |
\begin{document}
\setlength{\oddsidemargin}{0cm} \setlength{\evensidemargin}{0cm}
\baselineskip=20pt
\theoremstyle{plain} \makeatletter
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{coro}[theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{defi}[theorem]{Definition}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{exam}[theorem]{Example}
\newtheorem{prop}[theorem]{Proposition}
\newtheorem{conj}[theorem]{Conjecture}
\newtheorem{prob}[theorem]{Problem}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{claim}{Claim}
\newcommand{\SO}{{\mathrm S}{\mathrm O}}
\newcommand{\SU}{{\mathrm S}{\mathrm U}}
\newcommand{\Sp}{{\mathrm S}{\mathrm p}}
\newcommand{\so}{{\mathfrak s}{\mathfrak o}}
\newcommand{\Ad}{{\mathrm A}{\mathrm d}}
\newcommand{\m}{{\mathfrak m}}
\newcommand{\g}{{\mathfrak g}}
\newcommand{\p}{{\mathfrak p}}
\renewcommand{\k}{{\mathfrak k}}
\newcommand{\E}{{\mathrm E}}
\newcommand{\F}{{\mathrm F}}
\newcommand{\T}{{\mathrm T}}
\renewcommand{\b}{{\mathfrak b}}
\renewcommand{\l}{{\mathfrak l}}
\newcommand{\G}{{\mathrm G}}
\newcommand{\Ric}{{\mathrm Ric}}
\numberwithin{equation}{section}
\title{New Non-Naturally Reductive Einstein Metrics on Exceptional Simple Lie Groups}
\author{Huibin Chen, Zhiqi Chen and ShaoQiang Deng}
\address{School of Mathematical Sciences and LPMC, Nankai University,
Tianjin 300071, P.R. China}\email{[email protected], [email protected], [email protected]}
\date{}
\maketitle
\begin{abstract}
In this article, we obtain several new non-naturally reductive Einstein metrics on exceptional simple Lie groups, arising from the decompositions associated with generalized Wallach spaces. Using the decompositions corresponding to two commuting involutive automorphisms, we calculate the non-zero coefficients in the expressions for the components of the Ricci tensor with respect to the given metrics. The Einstein metrics are then obtained as solutions of systems of polynomial equations, which we manipulate by symbolic computations using Gr\"{o}bner bases.
\end{abstract}
\section{Introduction}
A Riemannian manifold $(M,g)$ is called Einstein if there exists a constant $\lambda\in\mathbb{R}$ such that the Ricci tensor $r$ with respect to $g$ satisfies $r=\lambda g$. We refer the reader to Besse's book \cite{Be} for more details and for the results obtained in this field before 1986. General existence results are difficult to obtain; as a result, many mathematicians have concentrated on special classes of examples of Einstein manifolds. Among the first important attempts, the works of G. Jensen \cite{GJ1973} and of M. Wang and W. Ziller \cite{WaZi} contributed much to the progress of this field. When the problem is restricted to Lie groups, D'Atri and Ziller \cite{JW} obtained a large number of naturally reductive left-invariant Einstein metrics in the case where $G$ is simple. In the same paper, they posed the following problem: does there exist a non-naturally reductive Einstein metric on a compact Lie group $G$?
In 1994, K. Mori \cite{Mo} discovered the first non-naturally reductive left-invariant Einstein metrics on the compact simple Lie groups $\SU(n)$ for $n{\mathfrak g}eq 6$. In 2008, Arvanitoyeorgos, Mori and Sakane proved the existence of new non-naturally reductive Einstein metrics on $\SO(n)$ $(n{\mathfrak g}eq11)$, $\Sp(n)$ $(n{\mathfrak g}eq3)$, ${\mathrm E}_6$, ${\mathrm E}_7$ and ${\mathrm E}_8$, using fibrations of a compact simple Lie group over a K\"{a}hler $C$-space with two isotropy summands (see \cite{ArMoSa}). In 2014, using methods of representation theory, Chen and Liang \cite{ChLi} found a non-naturally reductive Einstein metric on the compact simple Lie group ${\mathrm F}_4$. More recently, Chrysikos and Sakane proved that there exist non-naturally reductive Einstein metrics on exceptional Lie groups; in particular, for ${\mathrm G}_2$ they gave the first example of a non-naturally reductive Einstein metric (see \cite{ChSa}).
In this paper we consider new non-naturally reductive Einstein metrics on compact exceptional Lie groups $G$, which can be viewed as principal bundles over generalized Wallach spaces $M=G/K$. In 2014, the classification of generalized Wallach spaces arising from compact simple Lie groups was obtained by Nikonorov \cite{Ni} and by Chen, Kang and Liang \cite{ChKaLi}; in particular, Nikonorov investigated the semisimple case and gave the classification in \cite{Ni}.
As is well known, involutive automorphisms play an important role in the development of homogeneous geometry. Riemannian symmetric pairs were classified by Cartan \cite{Ca}; at the Lie algebra level, they can be described as Lie algebras equipped with an involutive automorphism satisfying certain topological properties. Later on, the more general semisimple symmetric pairs were studied by Berger \cite{Ber}, whose classification can also be obtained from the viewpoint of involutive automorphisms. Recently, Huang and Yu \cite{HuYu} classified the Klein four subgroups $\Gamma$ of $\mathrm{Aut}(\frak{u}_0)$ for each compact Lie algebra $\frak{u}_0$ up to conjugation, by calculating the symmetric subgroups $\mathrm{Aut}(\frak{u}_0)^\theta$ and their involution classes.
According to \cite{ChKaLi}, each type of generalized Wallach space arising from a simple Lie group is associated with two commuting involutive automorphisms of ${\mathfrak g}$, the Lie algebra of $G$. These two involutive automorphisms give two different decompositions of ${\mathfrak g}$, which in fact come from irreducible symmetric pairs. From these two irreducible symmetric pairs, we obtain linear equations for the non-zero coefficients in the expressions for the components of the Ricci tensor with respect to the given metrics. With the help of a computer, we then obtain the Einstein metrics as solutions of systems of polynomial equations. We mainly deal with two kinds of generalized Wallach spaces: those for which ${\mathfrak k}$ has no center, and those for which ${\mathfrak k}$ has a $1$-dimensional center. Together with the results in \cite{ChLi}, we list in the table below the numbers of non-naturally reductive left-invariant Einstein metrics on the exceptional simple Lie groups $G$ arising from generalized Wallach spaces.
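The final computational step described above, solving polynomial systems by symbolic manipulation with Gr\"obner bases, can be illustrated on a toy system (the code and the system in it are ours, purely illustrative, using SymPy; the actual Ricci systems are far larger):

```python
from sympy import groebner, symbols

# A lexicographic Groebner basis triangularizes a polynomial system:
# the last basis element is univariate, and the full solution set is
# recovered by back-substitution.  (Toy system, not a Ricci system.)
x1, x2 = symbols('x1 x2')
system = [x1**2 + x2**2 - 5, x1 - x2]
G = groebner(system, x1, x2, order='lex')
basis = list(G.exprs)
```

Here the basis contains the univariate polynomial $2x_2^2-5$ together with $x_1-x_2$, from which the solutions are read off directly.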
\begin{table}[!hbp]
\begin{tabular}{c|c|c|c|c}
\hline
$G$ & Types & $K$ & $p+q$ & $N_{non-nn}$ \\
\hline
\hline
${\mathrm F}_4$ & ${\mathrm F}_4$-I & $\SO(8)$ & $1+3$ & 1 \cite{ChLi}\\
& ${\mathrm F}_4$-II & $\SU(2)\times\SU(2)\times\SO(5)$ & $3+3$ & 3 new\\
\hline
${\mathrm E}_6$ & ${\mathrm E}_6$-III & $\SU(2)\times\mathrm{Sp}(3)$ & $2+3$ & 4 new\\
 & ${\mathrm E}_6$-II & $\mathrm{U}(1)\times\SU(2)\times\SU(2)\times\SU(4)$ & $0+3+3$ & 7 new\\
\hline
${\mathrm E}_7$ & ${\mathrm E}_7$-I & $\SU(2)\times\SU(2)\times\SU(2)\times\SO(8)$ & $4+3$ & 7 new \\
& ${\mathrm E}_7$-III & $\SO(8)$ & $1+3$ & 1 \\
& ${\mathrm E}_7$-II & $\mathrm{U}(1)\times\SU(2)\times\SU(6)$ & $0+2+3$ & 6 new \\
\hline
${\mathrm E}_8$ & ${\mathrm E}_8$-I & $\SU(2)\times\SU(2)\times\SO(12)$ & $3+3$ & 11 new\\
& ${\mathrm E}_8$-II & $\Ad(\SO(8)\times\SO(8))$ & $2+3$ & 2 new\\
\hline
\end{tabular}
\caption{Number of non-naturally reductive left-invariant Einstein metrics on exceptional simple Lie group $G$ arising from generalized Wallach spaces}
\end{table}
In this table, we use the notation of \cite{ChKaLi} for the types of generalized Wallach spaces, $N_{non-nn}$ denotes the number of non-naturally reductive Einstein metrics on $G$, and $p,q$ coincide with the indices in the decomposition ${\mathfrak g}={\mathfrak k}_1\oplus\cdots\oplus{\mathfrak k}_p\oplus{\mathfrak m}_1\oplus\cdots\oplus{\mathfrak m}_q$; in fact $q=3$ for all types.
We describe our results in the following theorem.
\begin{theorem}
\begin{enumerate}
\item The compact simple Lie group ${\mathrm F}_4$ admits at least 6 new non-naturally reductive and non-isometric left-invariant Einstein metrics. These metrics are $\Ad(\SU(2)\times\SU(2)\times\SO(5))$-invariant.
\item The compact simple Lie group ${\mathrm E}_6$ admits at least 11 new non-naturally reductive and non-isometric left-invariant Einstein metrics. Four of these metrics are $\Ad(\SU(2)\times\mathrm{Sp}(3))$-invariant and the other 7 are $\Ad(\mathrm{U}(1)\times\SU(2)\times\SU(2)\times\SU(4))$-invariant.
\item The compact simple Lie group ${\mathrm E}_7$ admits at least 13 new non-naturally reductive and non-isometric left-invariant Einstein metrics. Seven of these metrics are $\Ad(\SU(2)\times\SU(2)\times\SU(2)\times\SO(8))$-invariant and the other 6 are $\Ad(\mathrm{U}(1)\times\SU(2)\times\SU(6))$-invariant.
\item The compact simple Lie group ${\mathrm E}_8$ admits at least 13 new non-naturally reductive and non-isometric left-invariant Einstein metrics. Two of these metrics are $\Ad(\SO(8)\times\SO(8))$-invariant and the other 11 are $\Ad(\SU(2)\times\SU(2)\times\SO(12))$-invariant.
\end{enumerate}
\end{theorem}
The paper is organized as follows. In Section 2 we recall a formula for the Ricci tensor of $G$ when $G$ is viewed as a homogeneous space. In Section 3 we explain how we determine the non-zero coefficients in the expressions for the Ricci tensor, classifying the exceptional Lie groups by the number of simple ideals of ${\mathfrak k}$. Then, for each case in Section 3, we discuss the non-naturally reductive Einstein metrics via the solutions of systems of polynomial equations; this is carried out in Section 4.
\section{The Ricci tensor for reductive homogeneous spaces}
In this section we recall an expression for the Ricci tensor with respect to a class of given metrics on a compact semi-simple Lie group, together with a criterion for deciding whether a metric on $G$ is naturally reductive.
Let $G$ be a compact semi-simple Lie group with Lie algebra ${\mathfrak g}$, and let $K$ be a connected closed subgroup of $G$ with Lie algebra ${\mathfrak k}$. Throughout this paper, we denote by $B$ the negative of the Killing form of ${\mathfrak g}$; it is positive definite because $G$ is compact, and hence $B$ can be treated as an inner product on ${\mathfrak g}$. Let ${\mathfrak g}={\mathfrak k}\oplus{\mathfrak m}$ be the reductive decomposition with respect to $B$, so that $[{\mathfrak k},{\mathfrak m}]\subset{\mathfrak m}$, where ${\mathfrak m}$ is identified with the tangent space of $G/K$. We assume that ${\mathfrak m}$ can be decomposed into mutually non-equivalent irreducible $\Ad(K)$-modules as follows:
\begin{equation*}
{\mathfrak m}={\mathfrak m}_1\oplus\cdots\oplus{\mathfrak m}_q.
\end{equation*}
We will write ${\mathfrak k}={\mathfrak k}_0\oplus{\mathfrak k}_1\oplus\cdots\oplus{\mathfrak k}_p$, where ${\mathfrak k}_0=Z({\mathfrak k})$ is the center of ${\mathfrak k}$ and ${\mathfrak k}_i$ is the simple ideal for $i=1,\cdots,p$. Let $G\times K$ act on $G$ by $(g_1,g_2)g=g_1gg_2^{-1}$, then $G\times K$ acts almost effectively on $G$ with isotropy group $\Delta(K)=\{(k,k)|k\in K\}$. As a result, $G$ can be treated as the coset space $(G\times K)/\Delta(K)$ and we have ${\mathfrak g}\oplus{\mathfrak k}=\Delta({\mathfrak k})\oplus\Omega$, where $\Omega\cong T_0((G\times K)/\Delta(K))\cong{\mathfrak g}$ via the linear map $(X,Y)\rightarrow(X-Y)\in{\mathfrak g}$, $(X,Y)\in\Omega$.
As is well known, there is a one-to-one correspondence between $G$-invariant metrics on the reductive homogeneous space $G/K$ and $\Ad_G(K)$-invariant inner products on ${\mathfrak m}$. A Riemannian homogeneous space $(M=G/K,g)$ with reductive complement ${\mathfrak m}$ of ${\mathfrak k}$ in ${\mathfrak g}$ is called \emph{naturally reductive} if
\begin{equation*}
([X,Y]_{\mathfrak m},Z)+(Y,[X,Z]_{\mathfrak m})=0,
\end{equation*}
where $X,Y,Z\in{\mathfrak m}$ and $(\ ,\ )$ is the corresponding inner product on ${\mathfrak g}$.
In \cite{JW}, D'Atri and Ziller studied the naturally reductive metrics among the left-invariant metrics on compact Lie groups, and obtained a complete classification of such metrics in the case where $G$ is simple. The following theorem plays an important role in deciding whether a left-invariant metric on a Lie group is naturally reductive.
\begin{theorem}\label{nr}
For any inner product $b$ on the center ${\mathfrak k}_0$ of ${\mathfrak k}$, the following left-invariant metrics on $G$ are naturally reductive with respect to the action $(g,k)y=gyk^{-1}$ of $G\times K$:
\begin{equation*}
\langle\ ,\ \rangle=u_0b|_{{\mathfrak k}_0}+u_1B|_{{\mathfrak k}_1}+\cdots+u_pB|_{{\mathfrak k}_p}+xB|_{{\mathfrak m}},\quad(u_0,u_1,\cdots,u_p,x\in\mathbb{R}^+).
\end{equation*}
Conversely, if a left-invariant metric $\langle\ ,\ \rangle$ on a compact simple Lie group $G$ is naturally reductive, then there exists a closed subgroup $K$ of $G$ such that $\langle\ ,\ \rangle$ can be written as above.
\end{theorem}
Now we have an orthogonal decomposition of ${\mathfrak g}$ with respect to the Killing form of ${\mathfrak g}$: ${\mathfrak g}={\mathfrak k}_0\oplus{\mathfrak k}_1\oplus\cdots\oplus{\mathfrak k}_p\oplus{\mathfrak m}_1\oplus\cdots\oplus{\mathfrak m}_q=({\mathfrak k}_0\oplus{\mathfrak k}_1\oplus\cdots\oplus{\mathfrak k}_p)\oplus({\mathfrak k}_{p+1}\oplus\cdots\oplus{\mathfrak k}_{p+q})$, where we set ${\mathfrak k}_{p+i}={\mathfrak m}_i$ for $i=1,\cdots,q$. In addition, we assume that $\mathrm{dim}_{\mathbb{R}}{\mathfrak k}_0\leq1$ and that the ideals ${\mathfrak k}_i$ are mutually non-isomorphic for $i=1,\cdots,p$. Then we consider the following left-invariant metric on $G$, which is in fact $\Ad(K)$-invariant:
\begin{equation}\label{metric1}
\langle\ ,\ \rangle=x_0\cdot B|_{{\mathfrak k}_0}+x_1\cdot B|_{{\mathfrak k}_1}+\cdots+x_{p+q}\cdot B|_{{\mathfrak k}_{p+q}},
\end{equation}
where $x_i\in\mathbb{R}^+$ for $i=0,1,\cdots,p+q$.
Set from now on $d_i=\mathrm{dim}_{\mathbb{R}}{\mathfrak k}_i$, and let $\{e^{i}_{\alpha}\}_{\alpha=1}^{d_i}$ be a $B$-orthonormal basis adapted to the decomposition of ${\mathfrak g}$, which means that $e_{\alpha}^{i}\in{\mathfrak k}_i$ and $\alpha$ indexes the basis elements of ${\mathfrak k}_i$. Then we consider the numbers $A_{\alpha,\beta}^{\gamma}=B([e_{\alpha}^{i},e_{\beta}^{j}],e_{\gamma}^{k})$, so that $[e_{\alpha}^{i},e_{\beta}^{j}]=\sum_{\gamma}A_{\alpha,\beta}^{\gamma}e_{\gamma}^{k}$, and set
\begin{equation*}
(ijk):=\Big[\begin{array}{cc}
i \\
j \ k
\end{array}\Big]=\sum(A_{\alpha,\beta}^{\gamma})^2,
\end{equation*}
where the sum is taken over all indices $\alpha,\beta,\gamma$ with $e_{\alpha}^{i}\in{\mathfrak k}_i, e_{\beta}^{j}\in{\mathfrak k}_j, e_{\gamma}^{k}\in{\mathfrak k}_k$. Then $(ijk)$ is independent of the choice of $B$-orthonormal bases of ${\mathfrak k}_i,{\mathfrak k}_j,{\mathfrak k}_k$, and is symmetric in its three indices, i.e. $(ijk)=(jik)=(jki)$.
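To make the definition concrete, the following minimal numerical sketch (our own illustration, not part of the argument) computes the triple symbol for ${\mathfrak g}=\mathfrak{su}(2)$ regarded as a single summand; the value is $\mathrm{dim}\,\mathfrak{su}(2)=3$, which is also the $\mathfrak{q}=\mathfrak{r}$ case of Lemma \ref{iii} below with $\alpha=1$.

```python
import numpy as np

# Basis X_a = -i*sigma_a/2 of su(2), satisfying [X_a, X_b] = eps_{abc} X_c.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
X = [-1j * s / 2 for s in (sx, sy, sz)]

# B = negative Killing form of su(2): B(X, Y) = -4 tr(XY).
def B(a, b):
    return (-4 * np.trace(a @ b)).real

# B-orthonormal basis e_a = X_a / sqrt(B(X_a, X_a)).
e = [x / np.sqrt(B(x, x)) for x in X]

# Triple symbol (iii) for the unique summand: the sum of the squared
# structure constants B([e_a, e_b], e_c) over all basis indices.
triple = sum(B(e[a] @ e[b] - e[b] @ e[a], e[c]) ** 2
             for a in range(3) for b in range(3) for c in range(3))
print(round(triple, 10))  # 3.0 = dim su(2)
```

The same value is obtained for any other $B$-orthonormal basis, illustrating the basis independence of $(ijk)$.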
In \cite{ArMoSa} and \cite{PaSa}, the authors obtained the formulas for the components of Ricci tensor with respect to the left-invariant metric given by (\ref{metric1}), which can be described by the following lemma:
\begin{lemma}\label{formula1}
Let $G$ be a compact connected semi-simple Lie group endowed with the left-invariant metric $\langle\ ,\ \rangle$ given by (\ref{metric1}). Then the components $r_0,r_1,\cdots,r_{p+q}$ of the Ricci tensor associated to $\left<\ ,\ \right>$ are expressed as follows:
\begin{equation*}
r_k=\frac{1}{2x_k}+\frac{1}{4d_k}\sum_{j,i}\frac{x_k}{x_jx_i}\Big[\begin{array}{cc}
k \\
j \ i
\end{array}\Big]-\frac{1}{2d_k}\sum_{j,i}\frac{x_j}{x_kx_i}\Big[\begin{array}{cc}
j \\
k \ i
\end{array}\Big],\quad(k=0,1,\cdots,p+q).
\end{equation*}
Here, the sums are taken over all $i,j=0,1,\cdots,p+q$. In particular, for each $k$ it holds that
\begin{equation*}
\sum_{i,j=0}^{p+q}\Big[\begin{array}{cc}
j \\
k \ i
\end{array}\Big]=\sum_{i,j=0}^{p+q}(kij)=d_k.
\end{equation*}
\end{lemma}
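As a quick consistency check of Lemma \ref{formula1}, consider the bi-invariant metric, i.e. $x_0=x_1=\cdots=x_{p+q}=1$. By the trace identity at the end of the lemma, each component reduces to
\begin{equation*}
r_k=\frac{1}{2}+\frac{1}{4d_k}\sum_{i,j}\Big[\begin{array}{cc}
k \\
j \ i
\end{array}\Big]-\frac{1}{2d_k}\sum_{i,j}\Big[\begin{array}{cc}
j \\
k \ i
\end{array}\Big]=\frac{1}{2}+\frac{1}{4}-\frac{1}{2}=\frac{1}{4},
\end{equation*}
so the metric $B$ itself is Einstein with Einstein constant $\frac{1}{4}$, as expected for a bi-invariant metric on a compact semi-simple Lie group.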
\section{Calculations for non-zero coefficients in the expressions of Ricci tensor}
In this section, we calculate the non-zero coefficients in the expressions for the components of the Ricci tensor with respect to the given metric (\ref{metric1}). First of all, we divide the exceptional Lie groups with no center in ${\mathfrak k}$ into three types, namely $p=2$, $p=3$ and $p=4$, where $p$ denotes the number of simple ideals of ${\mathfrak k}$. For the case $p=1$, according to the classification in \cite{ChKaLi}, there are only ${\mathrm F}_4$-I and ${\mathrm E}_7$-III, and the non-naturally reductive Einstein metrics on these were studied in \cite{ChLi} and [Lei], respectively.
We recall the definition of generalized Wallach spaces. Let $G/K$ be a reductive homogeneous space, where $G$ is a semi-simple compact connected Lie group, $K$ is a connected closed subgroup of $G$, and ${\mathfrak g}$ and ${\mathfrak k}$ are the corresponding Lie algebras. If ${\mathfrak m}$, the tangent space of $G/K$ at $o=\pi(e)$, can be decomposed into three $\mathrm{ad}({\mathfrak k})$-invariant irreducible summands, pairwise orthogonal with respect to $B$, as
\begin{equation*}
{\mathfrak m}={\mathfrak m}_1\oplus{\mathfrak m}_2\oplus{\mathfrak m}_3,
\end{equation*}
satisfying $[{\mathfrak m}_i,{\mathfrak m}_i]\subset{\mathfrak k}$ for $i\in\{1,2,3\}$ and $[{\mathfrak m}_i,{\mathfrak m}_j]\subset{\mathfrak m}_k$ for $\{i,j,k\}=\{1,2,3\}$, then we call $G/K$ a generalized Wallach space.
In \cite{ChKaLi} and \cite{Ni}, the authors gave the complete classification of generalized Wallach spaces in the simple case. Here we use the notation of \cite{ChKaLi}: for $p=2$, there are ${\mathrm E}_6$-III and ${\mathrm E}_8$-II; for $p=3$, there are ${\mathrm F}_4$-II and ${\mathrm E}_8$-I; and for $p=4$ there is only ${\mathrm E}_7$-I. For later use, we introduce the following lemma given in \cite{ArDzNi}.
\begin{lemma}\label{iii}
Let $\mathfrak{q}\subset\mathfrak{r}$ be arbitrary subalgebras of ${\mathfrak g}$ with $\mathfrak{q}$ simple. Consider in $\mathfrak{q}$ an orthonormal (with respect to $B_{\mathfrak{r}}$) basis $\{f_j\}$ $(1\leq j\leq \mathrm{dim}(\mathfrak{q}))$. Then
\begin{equation*}
\sum_{i,j,k=1}^{\mathrm{dim}(\mathfrak{q})}(B_{\mathfrak{r}}([f_i,f_j],f_k))^2 =\alpha_{\mathfrak{q}}^{\mathfrak{r}}\cdot\mathrm{dim}(\mathfrak{q}),
\end{equation*}
where $\alpha_{\mathfrak{q}}^{\mathfrak{r}}$ is determined by the equation $B_{\mathfrak{q}}=\alpha_{\mathfrak{q}}^{\mathfrak{r}}\cdot B_{\mathfrak{r}}|_{\mathfrak{q}}$.
\end{lemma}
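For the specific embeddings used below, which (as the computations in the proofs suggest) have Dynkin index one and equal long-root lengths, the constant of Lemma \ref{iii} reduces to a ratio of dual Coxeter numbers, $\alpha_{\mathfrak{q}}^{\mathfrak{r}}=h^{\vee}(\mathfrak{q})/h^{\vee}(\mathfrak{r})$, so that $(kkk)=\alpha_{\mathfrak{q}}^{\mathfrak{r}}\cdot\mathrm{dim}(\mathfrak{q})$. The following sketch (our own cross-check under this assumption, not taken from the cited sources) reproduces the values appearing in the proofs below.

```python
from fractions import Fraction

# Dual Coxeter numbers h^v and dimensions of the simple algebras used below.
hv = {'A1': 2, 'C2': 3, 'C3': 4, 'A5': 6, 'D4': 6, 'D6': 10,
      'F4': 9, 'E6': 12, 'E7': 18, 'E8': 30}
dim = {'A1': 3, 'C2': 10, 'C3': 21, 'A5': 35, 'D4': 28, 'D6': 66}

def kkk(q, r):
    # (kkk) = alpha_q^r * dim(q), assuming an index-one embedding q in r
    # with equal long-root lengths (assumption stated in the text).
    return Fraction(hv[q], hv[r]) * dim[q]

print(kkk('A1', 'E6'), kkk('C3', 'E6'))   # 1/2 and 7   (E6-III)
print(kkk('D4', 'E8'))                    # 28/5        (E8-II)
print(kkk('A1', 'F4'), kkk('C2', 'F4'))   # 2/3 and 10/3 (F4-II)
print(kkk('A1', 'E8'), kkk('D6', 'E8'))   # 1/5 and 22  (E8-I)
print(kkk('A1', 'E7'), kkk('D4', 'E7'))   # 1/3 and 28/3 (E7-I)
```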
For $p=2$, we consider the following metrics on ${\mathfrak g}$:
\begin{equation}\label{metric2}
<\ ,\ >=x_1B|_{{\mathfrak k}_1}+x_2B|_{{\mathfrak k}_2}+x_3B|_{{\mathfrak m}_1}+x_4B|_{{\mathfrak m}_2}+x_5B|_{{\mathfrak m}_3}.
\end{equation}
Since ${\mathfrak k}_1,{\mathfrak k}_2$ are the simple ideals of ${\mathfrak k}$, by the definition of a generalized Wallach space we easily obtain that the possible non-zero coefficients in the expressions for the Ricci tensor are the following:
$$(111),(222),(133),(144),(155),(233),(244),(255),(345).$$
According to Lemma \ref{formula1}, we have
\begin{equation*}
\begin{split}
r_1&=\frac{1}{4d_1}\left(\frac{1}{x_1}(111)+\frac{x_1}{x_3^2}(133)+\frac{x_1}{x_4^2}(144)+\frac{x_1}{x_5^2}(155)\right),\\
r_2&=\frac{1}{4d_2}\left(\frac{1}{x_2}(222)+\frac{x_2}{x_3^2}(233)+\frac{x_2}{x_4^2}(244)+\frac{x_2}{x_5^2}(255)\right),\\
r_3&=\frac{1}{2x_3}+\frac{1}{2d_3}(345)\left(\frac{x_3}{x_4x_5}-\frac{x_4}{x_3x_5}-\frac{x_5}{x_3x_4}\right)-\frac{1}{2d_3}\left(\frac{x_1}{x_3^2}(133)+\frac{x_2}{x_3^2}(233)\right),\\
r_4&=\frac{1}{2x_4}+\frac{1}{2d_4}(345)\left(\frac{x_4}{x_3x_5}-\frac{x_3}{x_4x_5}-\frac{x_5}{x_3x_4}\right)-\frac{1}{2d_4}\left(\frac{x_1}{x_4^2}(144)+\frac{x_2}{x_4^2}(244)\right),\\
r_5&=\frac{1}{2x_5}+\frac{1}{2d_5}(345)\left(\frac{x_5}{x_4x_3}-\frac{x_4}{x_3x_5}-\frac{x_3}{x_5x_4}\right)-\frac{1}{2d_5}\left(\frac{x_1}{x_5^2}(155)+\frac{x_2}{x_5^2}(255)\right),
\end{split}
\end{equation*}
and
\begin{equation}\label{eqn2}
\begin{split}
&(111)+(133)+(144)+(155)=d_1,\\
&(222)+(233)+(244)+(255)=d_2,\\
&2(133)+2(233)+2(345)=d_3,\\
&2(144)+2(244)+2(345)=d_4,\\
&2(155)+2(255)+2(345)=d_5.
\end{split}
\end{equation}
In the above equations we used the symmetry of $(ijk)$ in its three indices.
In order to calculate the non-zero coefficients $(111)$, $(222)$, $(133)$, $(144)$, $(155)$, $(233)$, $(244)$, $(255)$, $(345)$, we need to know more about the structure of the corresponding Lie algebras. Since the structure of a generalized Wallach space arising from a simple group is determined by two commuting involutive automorphisms of ${\mathfrak g}$, we can extract the required information from these two automorphisms.
\begin{lemma}\label{p=2}
In the case of $p=2$, the possibly non-zero coefficients in the components of the Ricci tensor with respect to the metric (\ref{metric2}) take the following values:\\
for ${\mathrm E}_6$-III, the non-zero coefficients are
\begin{equation*}
\begin{split}
&(111)=\frac{1}{2}, (133)=0, (144)=\frac{7}{4}, (155)=\frac{3}{4},\\
&(222)=7, (233)=\frac{7}{2}, (244)=\frac{35}{4}, (255)=\frac{7}{4}, (345)=\frac{7}{2},
\end{split}
\end{equation*}
for ${\mathrm E}_8$-II, the non-zero coefficients are
\begin{equation*}
\begin{split}
&(111)=(222)=\frac{28}{5}, (345)=\frac{256}{15},\\
&(133)=(144)=(233)=(244)=(155)=(255)=\frac{112}{15}.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
Case of ${\mathrm E}_6$-III. For this case, we denote the two commutative involutive automorphisms by $\theta$ and $\tau$. By the conclusions in \cite{ChKaLi}, we know that each of the automorphisms corresponds to an irreducible symmetric pair. For the structure of ${\mathrm E}_6$-III, we have the following decomposition:
\begin{equation*}
{\mathfrak g}=A_1\oplus C_3\oplus{\mathfrak m}_1\oplus{\mathfrak m}_2\oplus{\mathfrak m}_3.
\end{equation*}
Let
\begin{equation}\label{E6III-1}
{\mathfrak g}=\mathfrak{b}+{\mathfrak p}, \quad \mathfrak{b}=\mathfrak{b}_1+\mathfrak{b}_2, \quad \mathfrak{b}_1=A_1, \quad \mathfrak{b}_2=C_3+{\mathfrak m}_1\cong A_5, \quad {\mathfrak p}={\mathfrak m}_2+{\mathfrak m}_3,
\end{equation}
then $({\mathfrak g},\mathfrak{b})$ is in fact the irreducible symmetric pair corresponding to $\theta$.\\
Let
\begin{equation}\label{E6III-2}
{\mathfrak g}=\mathfrak{b}'+{\mathfrak p}', \quad \mathfrak{b}'=A_1+C_3+{\mathfrak m}_2\cong{\mathrm F}_4, \quad {\mathfrak p}'={\mathfrak m}_1+{\mathfrak m}_3,
\end{equation}
then $({\mathfrak g},\mathfrak{b}')$ is in fact the irreducible symmetric pair corresponding to $\tau$.
We consider the following metric on ${\mathfrak g}$ with respect to the decomposition (\ref{E6III-1})
\begin{equation}\label{metric21}
<<\ ,\ >>_1=u_1B|_{A_1}+u_2B|_{A_5}+u_3B|_{\mathfrak p},
\end{equation}
and denote the components of the Ricci tensor with respect to the metric $<<\ ,\ >>_1$ by $\tilde{r}_1$, $\tilde{r}_2$ and $\tilde{r}_3$. If we let $x_1=u_1, x_2=x_3=u_2, x_4=x_5=u_3$, then the metrics given by (\ref{metric2}) and (\ref{metric21}) coincide, and hence so do their Ricci tensors, which means $r_1=\tilde{r}_1$, $r_2=r_3=\tilde{r}_2$, $r_4=r_5=\tilde{r}_3$. As a result, comparing the expressions of the Ricci components, we have
\begin{equation}\label{eqn21}
\begin{split}
&\frac{1}{4d_2}\left((222)+(233)\right)=\frac{1}{4}-\frac{1}{2d_3}(345),\\
&\frac{1}{4d_2}\left((244)+(255)\right)=\frac{1}{2d_3}(345),\\
&\frac{1}{2d_4}(144)=\frac{1}{2d_5}(155),\\
&(133)=0.
\end{split}
\end{equation}
With the same method, we consider the irreducible symmetric pair corresponding to the involutive automorphism $\tau$. The metric taken into consideration is as follows:
\begin{equation}\label{metric22}
<<\ ,\ >>_2=w_1B|_{{\mathrm F}_4}+w_2B|_{{\mathfrak p}'}.
\end{equation}
If we let $x_1=x_2=x_4=w_1$ and $x_3=x_5=w_2$, then the Ricci tensors with respect to (\ref{metric2}) and (\ref{metric22}) coincide; comparing the expressions of the corresponding components, we have
\begin{equation}\label{eqn22}
\begin{split}
&\frac{1}{4d_1}\left((111)+(144)\right)=\frac{1}{4d_2}\left((222)+(244)\right)=\frac{1}{4}-\frac{1}{2d_4}(345),\\
&\frac{1}{4d_1}\left((133)+(155)\right)=\frac{1}{4d_2}\left((233)+(255)\right)=\frac{1}{2d_4}(345).
\end{split}
\end{equation}
We can easily calculate $\alpha_{A_1}^{{\mathrm E}_6}=\frac{8}{48}$ and $\alpha_{C_3}^{{\mathrm E}_6}=\frac{16}{48}$, since the corresponding roots have the same length; hence $(111)=\frac{1}{2}$ and $(222)=7$ by Lemma \ref{iii}. From Table 1 in \cite{Ni} we find $(345)=\frac{7}{2}$. Together with the equations (\ref{eqn2}), (\ref{eqn21}) and (\ref{eqn22}) and $d_1=3$, $d_2=21$, $d_3=14$, $d_4=28$ and $d_5=12$, one easily obtains the values given in the lemma.
Case of ${\mathrm E}_8$-II. For this case, we have the following decomposition:
\begin{equation*}
{\mathfrak g}=D_4\oplus D_4\oplus{\mathfrak m}_1\oplus{\mathfrak m}_2\oplus{\mathfrak m}_3.
\end{equation*}
Then we study the two commuting involutive automorphisms, denoted by $\theta$ and $\tau$. In fact, the corresponding irreducible symmetric pairs $({\mathfrak g},\mathfrak{b})$ and $({\mathfrak g},\mathfrak{b}')$ of $\theta$ and $\tau$ are, respectively, as follows:
\begin{equation*}\label{E8II-1}
{\mathfrak g}=\mathfrak{b}+{\mathfrak p}, \quad \mathfrak{b}=D_4+D_4+{\mathfrak m}_1\cong D_8, \quad {\mathfrak p}={\mathfrak m}_2+{\mathfrak m}_3.
\end{equation*}
\begin{equation*}\label{E8II-2}
{\mathfrak g}=\mathfrak{b}'+{\mathfrak p}', \quad \mathfrak{b}'=D_4+D_4+{\mathfrak m}_2\cong D_8, \quad {\mathfrak p}'={\mathfrak m}_1+{\mathfrak m}_3.
\end{equation*}
By the same methods as in the case of ${\mathrm E}_6$-III, we have
\begin{equation*}
\begin{split}
&(111)=(222)=\frac{28}{5}, (345)=\frac{256}{15},\\
&(133)=(144)=(155)=(233)=(244)=(255)=\frac{112}{15}.
\end{split}
\end{equation*}
\end{proof}
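The values in Lemma \ref{p=2} can be cross-checked against the trace identities (\ref{eqn2}). A short sketch (our own verification), using $d_1=3$, $d_2=21$, $d_3=14$, $d_4=28$, $d_5=12$ for ${\mathrm E}_6$-III and, for ${\mathrm E}_8$-II, $d_1=d_2=28$ (two copies of $D_4$) and $d_3=d_4=d_5=64$ (since $\mathrm{dim}\,{\mathrm E}_8=248$):

```python
from fractions import Fraction as F

def check_eqn2(d, c):
    # Verify the five identities of (eqn2) for the tuple
    # c = ((111),(133),(144),(155),(222),(233),(244),(255),(345)).
    c111, c133, c144, c155, c222, c233, c244, c255, c345 = c
    return (c111 + c133 + c144 + c155 == d[0]
            and c222 + c233 + c244 + c255 == d[1]
            and 2 * (c133 + c233 + c345) == d[2]
            and 2 * (c144 + c244 + c345) == d[3]
            and 2 * (c155 + c255 + c345) == d[4])

# E6-III values from the lemma.
ok1 = check_eqn2([3, 21, 14, 28, 12],
                 (F(1, 2), 0, F(7, 4), F(3, 4),
                  7, F(7, 2), F(35, 4), F(7, 4), F(7, 2)))

# E8-II values from the lemma.
b = F(112, 15)
ok2 = check_eqn2([28, 28, 64, 64, 64],
                 (F(28, 5), b, b, b, F(28, 5), b, b, b, F(256, 15)))
print(ok1, ok2)  # True True
```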
For $p=3$, we have the following decomposition:
\begin{equation*}
{\mathfrak g}={\mathfrak k}_1+{\mathfrak k}_2+{\mathfrak k}_3+{\mathfrak m}_1+{\mathfrak m}_2+{\mathfrak m}_3,
\end{equation*}
and we consider the metric as follows:
\begin{equation}\label{metric3}
<\ ,\ >=x_1B|_{{\mathfrak k}_1}+x_2B|_{{\mathfrak k}_2}+x_3B|_{{\mathfrak k}_3}+x_4B|_{{\mathfrak m}_1}+x_5B|_{{\mathfrak m}_2}+x_6B|_{{\mathfrak m}_3}.
\end{equation}
Since the ${\mathfrak k}_i$ $(i=1,2,3)$ are simple ideals of ${\mathfrak k}$, according to the structure of a generalized Wallach space it is easy to see that the possible non-zero coefficients in the expressions for the components of the Ricci tensor with respect to the metric (\ref{metric3}) are the following:
\begin{equation*}
(111), (222), (333), (144), (155), (166), (244), (255), (266), (344), (355), (366), (456).
\end{equation*}
Then, according to Lemma \ref{formula1}, the components of the Ricci tensor with respect to this metric are as follows:
\begin{equation*}
\begin{split}
r_1&=\frac{1}{4d_1}\left(\frac{1}{x_1}(111)+\frac{x_1}{x_4^2}(144)+\frac{x_1}{x_5^2}(155)+\frac{x_1}{x_6^2}(166)\right),\\
r_2&=\frac{1}{4d_2}\left(\frac{1}{x_2}(222)+\frac{x_2}{x_4^2}(244)+\frac{x_2}{x_5^2}(255)+\frac{x_2}{x_6^2}(266)\right),\\
r_3&=\frac{1}{4d_3}\left(\frac{1}{x_3}(333)+\frac{x_3}{x_4^2}(344)+\frac{x_3}{x_5^2}(355)+\frac{x_3}{x_6^2}(366)\right),\\
r_4&=\frac{1}{2x_4}+\frac{1}{2d_4}(456)\left(\frac{x_4}{x_5x_6}-\frac{x_6}{x_4x_5}-\frac{x_5}{x_4x_6}\right)-\frac{1}{2d_4}\left(\frac{x_1}{x_4^2}(144)+\frac{x_2}{x_4^2}(244)+\frac{x_3}{x_4^2}(344)\right),\\
r_5&=\frac{1}{2x_5}+\frac{1}{2d_5}(456)\left(\frac{x_5}{x_4x_6}-\frac{x_6}{x_4x_5}-\frac{x_4}{x_5x_6}\right)-\frac{1}{2d_5}\left(\frac{x_1}{x_5^2}(155)+\frac{x_2}{x_5^2}(255)+\frac{x_3}{x_5^2}(355)\right),\\
r_6&=\frac{1}{2x_6}+\frac{1}{2d_6}(456)\left(\frac{x_6}{x_5x_4}-\frac{x_4}{x_6x_5}-\frac{x_5}{x_4x_6}\right)-\frac{1}{2d_6}\left(\frac{x_1}{x_6^2}(166)+\frac{x_2}{x_6^2}(266)+\frac{x_3}{x_6^2}(366)\right).
\end{split}
\end{equation*}
and the following equations:
\begin{equation}\label{eqn3}
\begin{split}
&(111)+(144)+(155)+(166)=d_1,\\
&(222)+(244)+(255)+(266)=d_2,\\
&(333)+(344)+(355)+(366)=d_3,\\
&2(144)+2(244)+2(344)+2(456)=d_4,\\
&2(155)+2(255)+2(355)+2(456)=d_5,\\
&2(166)+2(266)+2(366)+2(456)=d_6.
\end{split}
\end{equation}
\begin{lemma}\label{p=3}
The possibly non-zero coefficients in the expressions for the components of the Ricci tensor with respect to the metric (\ref{metric3}) take the following values:
for the case of ${\mathrm F}_4$-II, we have
\begin{equation*}
\begin{split}
&(111)=(222)=\frac{2}{3}, (333)=\frac{10}{3}, (144)=(244)=\frac{5}{3}, (155)=(266)=0, \\
&(166)=(255)=\frac{2}{3}, (355)=(366)=\frac{10}{9}, (344)=\frac{40}{9},(456)=\frac{20}{9}.
\end{split}
\end{equation*}
for the case of ${\mathrm E}_8$-I, we have
\begin{equation*}
\begin{split}
&(111)=(222)=\frac{1}{5},(333)=22,(144)=(244)=\frac{6}{5},(155)=(266)=0,\\
&(166)=(255)=\frac{8}{5},(344)=\frac{44}{5},(355)=(366)=\frac{88}{5},(456)=\frac{64}{5}.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
Case of ${\mathrm F}_4$-II. We have the following decomposition according to the structure of ${\mathrm F}_4$-II:
\begin{equation*}
{\mathfrak g}=A_1^1+A_1^2+C_2+{\mathfrak m}_1+{\mathfrak m}_2+{\mathfrak m}_3.
\end{equation*}
If we let
\begin{equation*}
\frak{b}=A_1^1+A_1^2+C_2+{\mathfrak m}_1,\quad {\mathfrak p}={\mathfrak m}_2+{\mathfrak m}_3,
\end{equation*}
then $({\mathfrak g},\frak{b})$ is an irreducible symmetric pair. In fact, this decomposition arises from the first involutive automorphism $\theta$, and $\frak{b}\cong B_4$ is therefore simple. Now, we consider the following metric on ${\mathfrak g}$:
\begin{equation}\label{metric31}
<<\ ,\ >>_1=u_1B|_{\frak{b}}+u_2B|_{{\mathfrak p}},
\end{equation}
this metric is the same as the one in (\ref{metric3}) if we let $x_1=x_2=x_3=x_4=u_1$ and $x_5=x_6=u_2$. As a result, if we denote the components of the Ricci tensor with respect to (\ref{metric31}) by $\tilde{r}_1$ and $\tilde{r}_2$, then $r_1=r_2=r_3=r_4=\tilde{r}_1$ and $r_5=r_6=\tilde{r}_2$. Comparing these equations, we have
\begin{equation}\label{eqn31}
\begin{split}
&\frac{1}{4d_1}((111)+(144))=\frac{1}{4d_2}((222)+(244))=\frac{1}{4d_3}((333)+(344))=\frac{1}{4}-\frac{1}{2d_4}(456),\\
&\frac{1}{4d_1}((155)+(166))=\frac{1}{4d_2}((255)+(266))=\frac{1}{4d_3}((355)+(366))=\frac{1}{2d_4}(456).
\end{split}
\end{equation}
We consider the following decomposition of ${\mathfrak g}$,
\begin{equation*}
{\mathfrak g}=\frak{b}'+{\mathfrak p}'=\frak{b}'_1+\frak{b}'_2+{\mathfrak p}',
\end{equation*}
where $\frak{b}'=A_1^1+A_1^2+C_2+{\mathfrak m}_2,\frak{b}'_1=A_1^1,\frak{b}'_2=A_1^2+C_2+{\mathfrak m}_2,{\mathfrak p}'={\mathfrak m}_1+{\mathfrak m}_3$. In fact, $\frak{b}'_2\cong C_3$, and this decomposition corresponds to the second involutive automorphism $\tau$ on ${\mathfrak g}$. Hence, $({\mathfrak g},\frak{b}')$ is an irreducible symmetric pair. Now, we consider the following metric on ${\mathfrak g}$:
\begin{equation}\label{metric32}
<<\ ,\ >>_2=w_1B|_{A_1^1}+w_2B|_{C_3}+w_3B|_{{\mathfrak p}^{'}}.
\end{equation}
If we set $x_1=w_1, x_2=x_3=x_5=w_2, x_4=x_6=w_3$ in (\ref{metric3}), then the two metrics are the same; as a result, we obtain the following equations:
\begin{equation}\label{eqn32}
\begin{split}
&\frac{1}{4d_2}((222)+(255))=\frac{1}{4d_3}((333)+(355))=\frac{1}{4}-\frac{1}{2d_5}((456)-(155)),\\
&\frac{1}{4d_2}((244)+(266))=\frac{1}{4d_3}((344)+(366))=\frac{1}{2d_5}(456),\\
&\frac{1}{2d_5}(155)=0, \frac{1}{2d_4}(144)=\frac{1}{2d_6}(166).
\end{split}
\end{equation}
We can easily calculate $\alpha_{A_1}^{{\mathrm F}_4}=\frac{8}{36}$ and $\alpha_{C_2}^{{\mathrm F}_4}=\frac{12}{36}$, both computed with the long roots of ${\mathrm F}_4$; therefore $(111)=(222)=\frac{2}{3}$ and $(333)=\frac{10}{3}$ by Lemma \ref{iii}. Besides, we get $(456)=\frac{20}{9}$ from \cite{Ni}. Thus, together with the equations (\ref{eqn3}), (\ref{eqn31}) and (\ref{eqn32}), one easily obtains the values in the lemma, where $d_1=3, d_2=3, d_3=10, d_4=20, d_5=8, d_6=8$.
Case of ${\mathrm E}_8$-I. According to \cite{ChKaLi}, ${\mathrm E}_8$-I has the following decomposition:
\begin{equation*}
{\mathfrak g}=A_{1}^{1}+A_{1}^{2}+D_6+{\mathfrak m}_1+{\mathfrak m}_2+{\mathfrak m}_3.
\end{equation*}
According to the two commuting involutive automorphisms on ${\mathrm E}_8$-I in \cite{ChKaLi}, we have the following two decompositions, which make $({\mathfrak g},{\mathfrak b})$ and $({\mathfrak g},{\mathfrak b}')$ into two different irreducible symmetric pairs:
\begin{equation*}
\begin{split}
&{\mathfrak g}={\mathfrak b}+{\mathfrak p}, {\mathfrak b}=A_{1}^{1}+A_{1}^{2}+D_6+{\mathfrak m}_1\cong D_8, {\mathfrak p}={\mathfrak m}_2+{\mathfrak m}_3,\\
&{\mathfrak g}={\mathfrak b}'+{\mathfrak p}', \quad {\mathfrak b}'={\mathfrak b}'_1+{\mathfrak b}'_2=A_1^1+A_1^2+D_6+{\mathfrak m}_2, \quad {\mathfrak b}'_1=A_{1}^{1}, \quad {\mathfrak b}'_2=A_{1}^{2}+D_6+{\mathfrak m}_2\cong {\mathrm E}_7, \quad {\mathfrak p}'={\mathfrak m}_1+{\mathfrak m}_3.
\end{split}
\end{equation*}
By using the same methods as above, we calculate $(111)=(222)=\frac{1}{5}$ and $(333)=22$ by Lemma \ref{iii} and get $(456)=\frac{64}{5}$; as a result, we have
\begin{equation*}
\begin{split}
&(111)=(222)=\frac{1}{5},(333)=22,(144)=(244)=\frac{6}{5},(155)=(266)=0,\\
&(166)=(255)=\frac{8}{5},(344)=\frac{44}{5},(355)=(366)=\frac{88}{5},(456)=\frac{64}{5}.
\end{split}
\end{equation*}
\end{proof}
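Both sets of values in Lemma \ref{p=3} can again be cross-checked against (\ref{eqn3}); a short sketch (our own verification), using $d=(3,3,10,20,8,8)$ for ${\mathrm F}_4$-II and $d=(3,3,66,48,64,64)$ for ${\mathrm E}_8$-I:

```python
from fractions import Fraction as F

def check_eqn3(d, c):
    # c is a dict of the triple symbols appearing in (eqn3).
    return (c['111'] + c['144'] + c['155'] + c['166'] == d[0]
            and c['222'] + c['244'] + c['255'] + c['266'] == d[1]
            and c['333'] + c['344'] + c['355'] + c['366'] == d[2]
            and 2 * (c['144'] + c['244'] + c['344'] + c['456']) == d[3]
            and 2 * (c['155'] + c['255'] + c['355'] + c['456']) == d[4]
            and 2 * (c['166'] + c['266'] + c['366'] + c['456']) == d[5])

f4 = {'111': F(2, 3), '222': F(2, 3), '333': F(10, 3),
      '144': F(5, 3), '244': F(5, 3), '155': 0, '266': 0,
      '166': F(2, 3), '255': F(2, 3), '355': F(10, 9),
      '366': F(10, 9), '344': F(40, 9), '456': F(20, 9)}
e8 = {'111': F(1, 5), '222': F(1, 5), '333': 22,
      '144': F(6, 5), '244': F(6, 5), '155': 0, '266': 0,
      '166': F(8, 5), '255': F(8, 5), '344': F(44, 5),
      '355': F(88, 5), '366': F(88, 5), '456': F(64, 5)}
print(check_eqn3([3, 3, 10, 20, 8, 8], f4),
      check_eqn3([3, 3, 66, 48, 64, 64], e8))  # True True
```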
For $p=4$, there is only ${\mathrm E}_7$-I in this type, and the decomposition according to \cite{ChKaLi} is
\begin{equation*}
{\mathfrak g}=A_1^1+A_1^2+A_1^3+D_4+{\mathfrak m}_1+{\mathfrak m}_2+{\mathfrak m}_3.
\end{equation*}
The considered metric is of the following form:
\begin{equation}\label{metric4}
<\ ,\ >=x_1B|_{A_1^1}+x_2B|_{A_1^2}+x_3B|_{A_1^3}+x_4B|_{D_4}+x_5B|_{{\mathfrak m}_1}+x_6B|_{{\mathfrak m}_2}+x_7B|_{{\mathfrak m}_3},
\end{equation}
It is easy to see that the possible non-zero coefficients in the expressions for the components of the Ricci tensor with respect to the metric given by (\ref{metric4}) are
\begin{equation*}
(111), (222), (333), (444), (155), (166), (177), (255), (266), (277), (355), (366), (377), (455), (466), (477), (567).
\end{equation*}
As a result, the components of Ricci tensor are:
\begin{equation*}
\begin{split}
r_1&=\frac{1}{4d_1}\left(\frac{1}{x_1}(111)+\frac{x_1}{x_5^2}(155)+\frac{x_1}{x_6^2}(166)+\frac{x_1}{x_7^2}(177)\right),\\
r_2&=\frac{1}{4d_2}\left(\frac{1}{x_2}(222)+\frac{x_2}{x_5^2}(255)+\frac{x_2}{x_6^2}(266)+\frac{x_2}{x_7^2}(277)\right),\\
r_3&=\frac{1}{4d_3}\left(\frac{1}{x_3}(333)+\frac{x_3}{x_5^2}(355)+\frac{x_3}{x_6^2}(366)+\frac{x_3}{x_7^2}(377)\right),\\
r_4&=\frac{1}{4d_4}\left(\frac{1}{x_4}(444)+\frac{x_4}{x_5^2}(455)+\frac{x_4}{x_6^2}(466)+\frac{x_4}{x_7^2}(477)\right),\\
r_5&=\frac{1}{2x_5}+\frac{1}{2d_5}(567)\left(\frac{x_5}{x_6x_7}-\frac{x_6}{x_7x_5}-\frac{x_7}{x_5x_6}\right)-\frac{1}{2d_5}\left(\frac{x_1}{x_5^2}(155)+\frac{x_2}{x_5^2}(255)+\frac{x_3}{x_5^2}(355)+\frac{x_4}{x_5^2}(455)\right),\\
r_6&=\frac{1}{2x_6}+\frac{1}{2d_6}(567)\left(\frac{x_6}{x_5x_7}-\frac{x_7}{x_6x_5}-\frac{x_5}{x_7x_6}\right)-\frac{1}{2d_6}\left(\frac{x_1}{x_6^2}(166)+\frac{x_2}{x_6^2}(266)+\frac{x_3}{x_6^2}(366)+\frac{x_4}{x_6^2}(466)\right),\\
r_7&=\frac{1}{2x_7}+\frac{1}{2d_7}(567)\left(\frac{x_7}{x_5x_6}-\frac{x_5}{x_6x_7}-\frac{x_6}{x_7x_5}\right)-\frac{1}{2d_7}\left(\frac{x_1}{x_7^2}(177)+\frac{x_2}{x_7^2}(277)+\frac{x_3}{x_7^2}(377)+\frac{x_4}{x_7^2}(477)\right).
\end{split}
\end{equation*}
\begin{lemma}\label{p=4}
The possibly non-zero coefficients in the expressions for the components of the Ricci tensor with respect to the metric given by (\ref{metric4}) take the following values:
\begin{equation*}
\begin{split}
&(111)=(222)=(333)=\frac{1}{3},(444)=\frac{28}{3},(567)=\frac{64}{9},\\
&(166)=(177)=(255)=(277)=(355)=(366)=\frac{4}{3},\\
&(155)=(266)=(377)=0,(455)=(466)=(477)=\frac{56}{9}.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
According to the two commuting involutive automorphisms on ${\mathrm E}_7$-I in \cite{ChKaLi}, we have the following two decompositions, which make $({\mathfrak g},{\mathfrak b})$ and $({\mathfrak g},{\mathfrak b}')$ into two different irreducible symmetric pairs:
\begin{equation*}
\begin{split}
&{\mathfrak g}={\mathfrak b}+{\mathfrak p}, {\mathfrak b}=A_1^1+A_1^2+A_1^3+D_4+{\mathfrak m}_1={\mathfrak b}_1+{\mathfrak b}_2,\\
& {\mathfrak b}_1=A_1^1,{\mathfrak b}_2=A_1^2+A_1^3+D_4+{\mathfrak m}_1\cong D_6, {\mathfrak p}={\mathfrak m}_2+{\mathfrak m}_3,\\
&{\mathfrak g}={\mathfrak b}'+{\mathfrak p}', {\mathfrak b}'=A_1^1+A_1^2+A_1^3+D_4+{\mathfrak m}_2={\mathfrak b}'_1+{\mathfrak b}'_2, \\
&{\mathfrak b}'_1=A_1^2,{\mathfrak b}'_2=A_1^1+A_1^3+D_4+{\mathfrak m}_2\cong D_6, {\mathfrak p}'={\mathfrak m}_1+{\mathfrak m}_3.
\end{split}
\end{equation*}
Then we can use similar methods to compute the coefficients given in the lemma.
\end{proof}
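Although the analogue of (\ref{eqn3}) is not displayed for $p=4$, Lemma \ref{formula1} still gives $\sum_{i,j}(kij)=d_k$; the following sketch (our own verification) checks Lemma \ref{p=4} against these identities, with $d=(3,3,3,28,32,32,32)$ (here $d_5=d_6=d_7=32$, since $\mathrm{dim}\,{\mathrm E}_7=133$ and $\mathrm{dim}\,{\mathfrak k}=37$):

```python
from fractions import Fraction as F

# Triple symbols from Lemma (p=4) for E7-I.
c = {'111': F(1, 3), '222': F(1, 3), '333': F(1, 3), '444': F(28, 3),
     '155': 0, '266': 0, '377': 0, '567': F(64, 9),
     '166': F(4, 3), '177': F(4, 3), '255': F(4, 3), '277': F(4, 3),
     '355': F(4, 3), '366': F(4, 3),
     '455': F(56, 9), '466': F(56, 9), '477': F(56, 9)}
d = [3, 3, 3, 28, 32, 32, 32]

# Row k of the identity sum_{i,j} (kij) = d_k, using the symmetry of (ijk).
rows = [
    c['111'] + c['155'] + c['166'] + c['177'],
    c['222'] + c['255'] + c['266'] + c['277'],
    c['333'] + c['355'] + c['366'] + c['377'],
    c['444'] + c['455'] + c['466'] + c['477'],
    2 * (c['155'] + c['255'] + c['355'] + c['455'] + c['567']),
    2 * (c['166'] + c['266'] + c['366'] + c['466'] + c['567']),
    2 * (c['177'] + c['277'] + c['377'] + c['477'] + c['567']),
]
print(rows == d)  # True
```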
\textbf{Case of ${\mathrm E}_7$-II.} In \cite{ChKaLi}, ${\mathrm E}_7$-II has the following decomposition:
\begin{equation}
{\mathrm E}_7={\mathrm T}\oplus A_1\oplus A_5\oplus{\mathfrak p}_1\oplus{\mathfrak p}_2\oplus{\mathfrak p}_3,
\end{equation}
where ${\mathrm T}$ is the 1-dimensional center of ${\mathfrak k}$.\\
We consider left-invariant metrics on ${\mathrm E}_7$-II as follows:
\begin{equation}\label{metric7}
<\ ,\ >=u_0\cdot B|_{{\mathrm T}}+x_1\cdot B|_{A_1}+x_2\cdot B|_{A_5}+x_3\cdot B|_{{\mathfrak p}_1}+x_4\cdot B|_{{\mathfrak p}_2}+x_5\cdot B|_{{\mathfrak p}_3},
\end{equation}
where $u_0, x_1, x_2, x_3, x_4, x_5\in\mathbb{R}^+$. According to the structure of a generalized Wallach space, the possible non-zero coefficients in the expressions for the components of the Ricci tensor with respect to the metric (\ref{metric7}) are
$$(033), (044), (055), (111), (133), (144), (155), (222), (233), (244), (255), (345).$$
Therefore, by Lemma \ref{formula1}, the components of Ricci tensor with respect to the metric (\ref{metric7}) can be expressed as follows:
\[\left\{\begin{aligned}
r_0&=\frac{1}{4d_0}\left(\frac{u_0}{x_3^2}(033)+\frac{u_0}{x_4^2}(044)+\frac{u_0}{x_5^2}(055)\right),\\
r_1&=\frac{1}{4d_1}\left(\frac{1}{x_1}(111)+\frac{x_1}{x_3^2}(133)+\frac{x_1}{x_4^2}(144)+\frac{x_1}{x_5^2}(155)\right),\\
r_2&=\frac{1}{4d_2}\left(\frac{1}{x_2}(222)+\frac{x_2}{x_3^2}(233)+\frac{x_2}{x_4^2}(244)+\frac{x_2}{x_5^2}(255)\right),\\
r_3&=\frac{1}{2x_3}+\frac{1}{2d_3}(345)\left(\frac{x_3}{x_4x_5}-\frac{x_4}{x_5x_3}-\frac{x_5}{x_3x_4}\right)-\frac{1}{2d_3}\left(\frac{u_0}{x_3^2}(033)+\frac{x_1}{x_3^2}(133)+\frac{x_2}{x_3^2}(233)\right),\\
r_4&=\frac{1}{2x_4}+\frac{1}{2d_4}(345)\left(\frac{x_4}{x_5x_3}-\frac{x_5}{x_3x_4}-\frac{x_3}{x_4x_5}\right)-\frac{1}{2d_4}\left(\frac{u_0}{x_4^2}(044)+\frac{x_1}{x_4^2}(144)+\frac{x_2}{x_4^2}(244)\right),\\
r_5&=\frac{1}{2x_5}+\frac{1}{2d_5}(345)\left(\frac{x_5}{x_3x_4}-\frac{x_3}{x_4x_5}-\frac{x_4}{x_5x_3}\right)-\frac{1}{2d_5}\left(\frac{u_0}{x_5^2}(055)+\frac{x_1}{x_5^2}(155)+\frac{x_2}{x_5^2}(255)\right).
\end{aligned}\right.\]
and
\begin{equation}\label{eqn777}
\begin{split}
&(033)+(044)+(055)=d_0,\\
&(111)+(133)+(144)+(155)=d_1,\\
&(222)+(233)+(244)+(255)=d_2,\\
&2(033)+2(133)+2(233)+2(345)=d_3,\\
&2(044)+2(144)+2(244)+2(345)=d_4,\\
&2(055)+2(155)+2(255)+2(345)=d_5,
\end{split}
\end{equation}
where we used the symmetric properties of the indices in $(ijk)$.
\begin{lemma}\label{coeff7}
In the case of ${\mathrm E}_7$-II, the possible non-zero coefficients in the expression for the components of Ricci tensor with respect to metric (\ref{metric7}) are as follows:
\begin{equation*}
\begin{split}
&(033)=\frac{4}{9}, (044)=\frac{5}{9}, (055)=0, (345)=\frac{20}{3},\\
&(111)=\frac{1}{3}, (133)=1, (144)=0, (155)=\frac{5}{3},\\
&(222)=\frac{35}{3}, (233)=\frac{35}{9}, (244)=\frac{70}{9}, (255)=\frac{35}{3}.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
In fact, there are two involutive automorphisms on ${\mathrm E}_7$-II \cite{ChKaLi}, denoted by $\sigma$ and $\tau$, where $\sigma$ corresponds to the irreducible symmetric pair $({\mathfrak g},{\mathfrak b})$ having the following decomposition:
\begin{equation}\label{decom71}
{\mathfrak g}={\mathfrak b}\oplus{\mathfrak p},\quad{\mathfrak b}={\mathrm T}\oplus A_1\oplus A_5\oplus{\mathfrak p}_1\cong A_7,\quad{\mathfrak p}={\mathfrak p}_2\oplus{\mathfrak p}_3,
\end{equation}
and $\tau$ corresponds to the irreducible symmetric pair $({\mathfrak g},{\mathfrak b}')$ having the following decomposition:
\begin{equation}\label{decom72}
{\mathfrak g}={\mathfrak b}'\oplus{\mathfrak p}',\quad{\mathfrak b}'={\mathfrak b}_1'\oplus{\mathfrak b}_2',\quad{\mathfrak b}_1'=A_1,\quad{\mathfrak b}_2'={\mathrm T}\oplus A_5\oplus{\mathfrak p}_2\cong D_6,\quad{\mathfrak p}'={\mathfrak p}_1\oplus{\mathfrak p}_3.
\end{equation}
We consider the following left-invariant metric on ${\mathrm E}_7$ with respect to the decomposition (\ref{decom71})
\begin{equation}\label{metric71}
(\ ,\ )_1=w_1\cdot B|_{{\mathfrak b}}+w_2\cdot B|_{{\mathfrak p}},
\end{equation}
if we let $u_0=x_1=x_2=x_3=w_1$ and $x_4=x_5=w_2$ in (\ref{metric7}), then these two metrics coincide. As a result, if we denote the components of Ricci tensor with respect to the metric (\ref{metric71}) by $\tilde{r}_1$ and $\tilde{r}_2$, then we have $r_0=r_1=r_2=r_3=\tilde{r}_1$ and $r_4=r_5=\tilde{r}_2$. An easy calculation then gives the following equations for the possible non-zero coefficients:
\begin{equation}\label{eqn71}
\begin{split}
&\frac{1}{4d_0}(033)=\frac{1}{4d_1}\left((111)+(133)\right)=\frac{1}{4d_2}\left((222)+(233)\right)=\frac{1}{4}-\frac{1}{2d_3}(345),\\
&\frac{1}{4d_0}\left((044)+(055)\right)=\frac{1}{4d_1}\left((144)+(155)\right)=\frac{1}{4d_2}\left((244)+(255)\right)=\frac{1}{2d_3}(345),
\end{split}
\end{equation}
where we used the equations in (\ref{eqn777}) for simplification.
On the other hand, we consider the left-invariant metrics on ${\mathrm E}_7$-II with respect to the decomposition (\ref{decom72}) as follows:
\begin{equation}\label{metric72}
(\ ,\ )_2=v_1\cdot B|_{A_1}+v_2\cdot B|_{D_6}+v_3\cdot B|_{{\mathfrak p}'},
\end{equation}
if we let $x_1=v_1$, $u_0=x_2=x_4=v_2$ and $x_3=x_5=v_3$ in the metric (\ref{metric7}), then these two metrics coincide, and hence so do the corresponding components of the Ricci tensor. That is, if we denote the components of Ricci tensor with respect to the metric (\ref{metric72}) by $\tilde{r}'_1$, $\tilde{r}'_2$ and $\tilde{r}'_3$, then $r_1=\tilde{r}'_1$, $r_0=r_2=r_4=\tilde{r}'_2$ and $r_3=r_5=\tilde{r}'_3$. A short calculation gives the following equations:
\begin{equation}\label{eqn72}
\begin{split}
&\frac{1}{4d_0}(044)=\frac{1}{4d_2}\left((222)+(244)\right)=\frac{1}{4}-\frac{1}{2d_4}(345),\\
&\frac{1}{4d_0}\left((033)+(055)\right)=\frac{1}{4d_2}\left((233)+(255)\right)=\frac{1}{2d_4}(345),
\end{split}
\end{equation}
where we used the equations in (\ref{eqn777}) for simplification.
By Lemma \ref{iii}, we can calculate that $(111)=\frac{1}{3}$ and $(222)=\frac{35}{3}$. From Table 1 in \cite{Ni}, we have $(345)=\frac{20}{3}$ and $d_0=1, d_1=3, d_2=35, d_3=24, d_4=30, d_5=40$. Together with the linear equations (\ref{eqn71}) and (\ref{eqn72}), these give the possible non-zero coefficients as follows:
\begin{equation}
\begin{split}
&(033)=\frac{4}{9}, (044)=\frac{5}{9}, (055)=0, (345)=\frac{20}{3},\\
&(111)=\frac{1}{3}, (133)=1, (144)=0, (155)=\frac{5}{3},\\
&(222)=\frac{35}{3}, (233)=\frac{35}{9}, (244)=\frac{70}{9}, (255)=\frac{35}{3}.
\end{split}
\end{equation}
\end{proof}
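As an independent sanity check (not part of the proof), one can verify with exact rational arithmetic that the coefficients in Lemma \ref{coeff7} satisfy the trace constraints (\ref{eqn777}); a minimal Python sketch, using the dimensions $d_0=\dim{\mathrm T}=1$, $d_1=\dim A_1=3$, $d_2=\dim A_5=35$ and $(d_3,d_4,d_5)=(24,30,40)$:

```python
from fractions import Fraction as F

# Coefficients from Lemma (E7-II); the symbol (ijk) is stored as c[(i, j, k)].
c = {
    (0,3,3): F(4,9),  (0,4,4): F(5,9),  (0,5,5): F(0),    (3,4,5): F(20,3),
    (1,1,1): F(1,3),  (1,3,3): F(1),    (1,4,4): F(0),    (1,5,5): F(5,3),
    (2,2,2): F(35,3), (2,3,3): F(35,9), (2,4,4): F(70,9), (2,5,5): F(35,3),
}
# dimensions d_0, ..., d_5 of T, A_1, A_5, p_1, p_2, p_3
d = [F(1), F(3), F(35), F(24), F(30), F(40)]

# the six linear constraints of (eqn777)
assert c[0,3,3] + c[0,4,4] + c[0,5,5] == d[0]
assert c[1,1,1] + c[1,3,3] + c[1,4,4] + c[1,5,5] == d[1]
assert c[2,2,2] + c[2,3,3] + c[2,4,4] + c[2,5,5] == d[2]
assert 2*(c[0,3,3] + c[1,3,3] + c[2,3,3] + c[3,4,5]) == d[3]
assert 2*(c[0,4,4] + c[1,4,4] + c[2,4,4] + c[3,4,5]) == d[4]
assert 2*(c[0,5,5] + c[1,5,5] + c[2,5,5] + c[3,4,5]) == d[5]
print("all trace constraints satisfied")
```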
\textbf{Case of ${\mathrm E}_6$-II.} As is shown in \cite{ChKaLi}, ${\mathrm E}_6$-II can be decomposed as follows:
\begin{equation}\label{decom6}
{\mathfrak g}={\mathrm T}\oplus A_1^1\oplus A_1^2\oplus A_3\oplus{\mathfrak p}_1\oplus{\mathfrak p}_2\oplus{\mathfrak p}_3,
\end{equation}
and we consider the following left-invariant metrics on ${\mathrm E}_6$ according to this decomposition:
\begin{equation}\label{metric6}
<\ ,\ >=u_0\cdot B|_{{\mathrm T}}+x_1\cdot B|_{A_1^1}+x_2\cdot B|_{A_1^2}+x_3\cdot B|_{A_3}+x_4\cdot B|_{{\mathfrak p}_1}+x_5\cdot B|_{{\mathfrak p}_2}+x_6\cdot B|_{{\mathfrak p}_3},
\end{equation}
where $u_0, x_1, x_2, x_3, x_4, x_5, x_6\in\mathbb{R}^+$. Because of the structure of generalized Wallach space, the possible non-zero coefficients in the expressions for the components of Ricci tensor with respect to the metric (\ref{metric6}) are as follows:
$$(044), (055), (066), (111), (144), (155), (166), (222), (244), (255), (266), (333), (344), (355), (366), (456).$$
By Lemma \ref{formula1}, the components of Ricci tensor with respect to the metric (\ref{metric6}) can be simplified as follows:
\[\left\{\begin{aligned}
r_0&=\frac{1}{4d_0}\left(\frac{u_0}{x_4^2}(044)+\frac{u_0}{x_5^2}(055)+\frac{u_0}{x_6^2}(066)\right),\\
r_1&=\frac{1}{4d_1}\left(\frac{1}{x_1}(111)+\frac{x_1}{x_4^2}(144)+\frac{x_1}{x_5^2}(155)+\frac{x_1}{x_6^2}(166)\right),\\
r_2&=\frac{1}{4d_2}\left(\frac{1}{x_2}(222)+\frac{x_2}{x_4^2}(244)+\frac{x_2}{x_5^2}(255)+\frac{x_2}{x_6^2}(266)\right),\\
r_3&=\frac{1}{4d_3}\left(\frac{1}{x_3}(333)+\frac{x_3}{x_4^2}(344)+\frac{x_3}{x_5^2}(355)+\frac{x_3}{x_6^2}(366)\right),\\
r_4&=\frac{1}{2x_4}+\frac{1}{2d_4}(456)\left(\frac{x_4}{x_5x_6}-\frac{x_5}{x_6x_4}-\frac{x_6}{x_4x_5}\right)-\frac{1}{2d_4}\left(\frac{u_0}{x_4^2}(044)+\frac{x_1}{x_4^2}(144)+\frac{x_2}{x_4^2}(244)+\frac{x_3}{x_4^2}(344)\right),\\
r_5&=\frac{1}{2x_5}+\frac{1}{2d_5}(456)\left(\frac{x_5}{x_6x_4}-\frac{x_6}{x_4x_5}-\frac{x_4}{x_5x_6}\right)-\frac{1}{2d_5}\left(\frac{u_0}{x_5^2}(055)+\frac{x_1}{x_5^2}(155)+\frac{x_2}{x_5^2}(255)+\frac{x_3}{x_5^2}(355)\right),\\
r_6&=\frac{1}{2x_6}+\frac{1}{2d_6}(456)\left(\frac{x_6}{x_4x_5}-\frac{x_4}{x_5x_6}-\frac{x_5}{x_6x_4}\right)-\frac{1}{2d_6}\left(\frac{u_0}{x_6^2}(066)+\frac{x_1}{x_6^2}(166)+\frac{x_2}{x_6^2}(266)+\frac{x_3}{x_6^2}(366)\right).
\end{aligned}\right.\]
and
\begin{equation}\label{eqn666}
\begin{split}
&(044)+(055)+(066)=d_0,\\
&(111)+(144)+(155)+(166)=d_1,\\
&(222)+(244)+(255)+(266)=d_2,\\
&(333)+(344)+(355)+(366)=d_3,\\
&2(044)+2(144)+2(244)+2(344)+2(456)=d_4,\\
&2(055)+2(155)+2(255)+2(355)+2(456)=d_5,\\
&2(066)+2(166)+2(266)+2(366)+2(456)=d_6,
\end{split}
\end{equation}
where we used the symmetric properties of the indices in $(ijk)$.
\begin{lemma}\label{coeff6}
In the case of ${\mathrm E}_6$-II, the possible non-zero coefficients in the expression for the components of Ricci tensor with respect to metric (\ref{metric6}) are as follows:
\begin{equation*}
\begin{split}
&(044)=1/2, (055)=1/2, (066)=0, (456)=4,\\
&(111)=1/2, (144)=0, (155)=1, (166)=3/2,\\
&(222)=1/2, (244)=1, (255)=0, (266)=3/2,\\
&(333)=5, (344)=5/2, (355)=5/2, (366)=5.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
In fact, according to \cite{ChKaLi}, there are two involutive automorphisms on ${\mathrm E}_6$-II, denoted by $\sigma$ and $\tau$, each of which corresponds to an irreducible symmetric pair. Further, $\sigma$ corresponds to the irreducible symmetric pair $({\mathfrak g},{\mathfrak b})$ with the following decomposition:
\begin{equation}\label{decom61}
{\mathfrak g}={\mathfrak b}\oplus{\mathfrak p},\ {\mathfrak b}={\mathfrak b}_1\oplus{\mathfrak b}_2,\ {\mathfrak b}_1=A_1^1,\ {\mathfrak b}_2={\mathrm T}\oplus A_1^2\oplus A_3\oplus{\mathfrak p}_1\cong A_5,\ {\mathfrak p}={\mathfrak p}_2\oplus{\mathfrak p}_3,
\end{equation}
while $\tau$ corresponds to the irreducible symmetric pair $({\mathfrak g},{\mathfrak b}')$ with decomposition as follows:
\begin{equation}\label{decom62}
{\mathfrak g}={\mathfrak b}'\oplus{\mathfrak p}',\ {\mathfrak b}'={\mathfrak b}'_1\oplus{\mathfrak b}'_2,\ {\mathfrak b}'_1=A_1^2,\ {\mathfrak b}'_2={\mathrm T}\oplus A_1^1\oplus A_3\oplus{\mathfrak p}_2\cong A_5,\ {\mathfrak p}'={\mathfrak p}_1\oplus{\mathfrak p}_3.
\end{equation}
Then, we consider the following left-invariant metrics on ${\mathrm E}_6$ according to the decomposition (\ref{decom61}):
\begin{equation}\label{metric61}
(\ ,\ )_1=w_1\cdot B|_{A_1^1}+w_2\cdot B|_{A_5}+w_3\cdot B|_{{\mathfrak p}}.
\end{equation}
If we let $u_0=x_2=x_3=x_4=w_2$, $x_1=w_1$ and $x_5=x_6=w_3$ in the metric (\ref{metric6}), then these two metrics coincide, and hence the corresponding components of the Ricci tensor are equal. That is, if we denote the components of Ricci tensor with respect to metric (\ref{metric61}) by $\tilde{r}_1, \tilde{r}_2$ and $\tilde{r}_3$, then $r_0=r_2=r_3=r_4=\tilde{r}_2$, $r_1=\tilde{r}_1$ and $r_5=r_6=\tilde{r}_3$. A short calculation yields the following system of equations:
\begin{equation}\label{eqn61}
\begin{split}
&\frac{1}{4d_0}(044)=\frac{1}{4d_2}\left((222)+(244)\right)=\frac{1}{4d_3}\left((333)+(344)\right)=\frac{1}{4}-\frac{1}{2d_4}(456),\\
&\frac{1}{4d_0}\left((055)+(066)\right)=\frac{1}{4d_2}\left((255)+(266)\right)=\frac{1}{4d_3}\left((355)+(366)\right)=\frac{1}{2d_4}(456),\\
&\frac{1}{d_5}(155)=\frac{1}{d_6}(166),\ (144)=0,
\end{split}
\end{equation}
where we used the equations in (\ref{eqn666}) for simplification.
On the other hand, we consider the following left-invariant metric on ${\mathrm E}_6$ with respect to the decomposition (\ref{decom62}):
\begin{equation}\label{metric62}
(\ ,\ )_2=v_1\cdot B|_{A_1^2}+v_2\cdot B|_{A_5}+v_3\cdot B|_{{\mathfrak p}'}.
\end{equation}
If we let $u_0=x_1=x_3=x_5=v_2$, $x_2=v_1$ and $x_4=x_6=v_3$ in the metric (\ref{metric6}), then these two metrics coincide, and hence the corresponding components of the Ricci tensor are equal. That is, if we denote the components of Ricci tensor with respect to metric (\ref{metric62}) by $\tilde{r}'_1, \tilde{r}'_2$ and $\tilde{r}'_3$, then $r_0=r_1=r_3=r_5=\tilde{r}'_2$, $r_2=\tilde{r}'_1$ and $r_4=r_6=\tilde{r}'_3$. A short calculation yields the following system of equations:
\begin{equation}\label{eqn62}
\begin{split}
&\frac{1}{4d_0}(055)=\frac{1}{4d_1}\left((111)+(155)\right)=\frac{1}{4d_3}\left((333)+(355)\right)=\frac{1}{4}-\frac{1}{2d_5}(456),\\
&\frac{1}{4d_0}\left((044)+(066)\right)=\frac{1}{4d_1}\left((144)+(166)\right)=\frac{1}{4d_3}\left((344)+(366)\right)=\frac{1}{2d_5}(456),\\
&\frac{1}{d_6}(266)=\frac{1}{d_4}(244),\ (255)=0,
\end{split}
\end{equation}
where we used the equations in (\ref{eqn666}) for simplification.
By Lemma \ref{iii}, we have $(111)=(222)=\frac{1}{2}$ and $(333)=5$. From Table 1 in \cite{Ni}, we get $(456)=4$ and $d_0=1, d_1=d_2=3, d_3=15, d_4=d_5=16, d_6=24$. Together with the equations (\ref{eqn61}) and (\ref{eqn62}), one gets the possible non-zero coefficients as given in the lemma.
\end{proof}
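The same kind of rational-arithmetic check (again only a sanity check, assuming the symmetric index pattern of the constraints (\ref{eqn666})) confirms the coefficients of Lemma \ref{coeff6}:

```python
from fractions import Fraction as F

# Coefficients from Lemma (E6-II), keyed by the triple (i, j, k).
c = {
    (0,4,4): F(1,2), (0,5,5): F(1,2), (0,6,6): F(0),   (4,5,6): F(4),
    (1,1,1): F(1,2), (1,4,4): F(0),   (1,5,5): F(1),   (1,6,6): F(3,2),
    (2,2,2): F(1,2), (2,4,4): F(1),   (2,5,5): F(0),   (2,6,6): F(3,2),
    (3,3,3): F(5),   (3,4,4): F(5,2), (3,5,5): F(5,2), (3,6,6): F(5),
}
d = [F(1), F(3), F(3), F(15), F(16), F(16), F(24)]  # d_0, ..., d_6

assert c[0,4,4] + c[0,5,5] + c[0,6,6] == d[0]
assert c[1,1,1] + c[1,4,4] + c[1,5,5] + c[1,6,6] == d[1]
assert c[2,2,2] + c[2,4,4] + c[2,5,5] + c[2,6,6] == d[2]
assert c[3,3,3] + c[3,4,4] + c[3,5,5] + c[3,6,6] == d[3]
assert 2*(c[0,4,4] + c[1,4,4] + c[2,4,4] + c[3,4,4] + c[4,5,6]) == d[4]
assert 2*(c[0,5,5] + c[1,5,5] + c[2,5,5] + c[3,5,5] + c[4,5,6]) == d[5]
assert 2*(c[0,6,6] + c[1,6,6] + c[2,6,6] + c[3,6,6] + c[4,5,6]) == d[6]
print("all trace constraints satisfied")
```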
\begin{remark}
Apart from ${\mathrm E}_6$-III, in the other cases there are isomorphisms between the subalgebras of ${\mathfrak k}$ in the decomposition corresponding to the structure of generalized Wallach space, but these isomorphisms do not affect the behavior of the Ricci tensor. In particular, one can verify that ${\mathrm Ric}_{<\ ,\ >}({\mathfrak k}_i,{\mathfrak k}_j)=0$ for $i\neq j$, where ${\mathfrak k}_i$ is isomorphic to ${\mathfrak k}_j$. As a result, ${\mathrm Ric}$ is still diagonal.
\end{remark}
\section{Discussions on non-naturally reductive Einstein metrics}
Having obtained the components of the Ricci tensor for each case in Lemma \ref{p=2}, Lemma \ref{p=3} and Lemma \ref{p=4}, we now search for non-naturally reductive Einstein metrics case by case in this section.
\textbf{Case of $p=2$.} We first give a criterion for deciding whether a left-invariant metric of the form (\ref{metric2}) is naturally reductive.
\begin{prop}\label{nrp=2}
If a left-invariant metric $<,>$ of the form (\ref{metric1}) on $G$ is naturally reductive with respect to $G\times L$ for some closed subgroup $L$ of $G$, then for the case of $p=2$, one of the following holds:\\
Case of ${\mathrm E}_6$-III: 1) $x_2=x_3$, $x_4=x_5$; 2) $x_1=x_2=x_4$, $x_3=x_5$; 3) $x_1=x_2=x_5$, $x_3=x_4$; 4) $x_3=x_4=x_5$. \\
Case of ${\mathrm E}_8$-II: 1) $x_1=x_2=x_3$, $x_4=x_5$; 2) $x_1=x_2=x_4$, $x_3=x_5$; 3) $x_1=x_2=x_5$, $x_3=x_4$; 4) $x_3=x_4=x_5$.
Conversely, if one of 1), 2), 3), 4) is satisfied, then the metric of the form (\ref{metric2}) is naturally reductive with respect to $G\times L$ for some closed subgroup $L$ of $G$.
\end{prop}
\begin{proof}
Let ${\mathfrak l}$ be the Lie algebra of $L$. Then we have either ${\mathfrak l}\subset {\mathfrak k}$ or ${\mathfrak l}\not\subset{\mathfrak k}$. First we consider the case ${\mathfrak l}\not\subset{\mathfrak k}$. Let $\frak{h}$ be the subalgebra of ${\mathfrak g}$ generated by ${\mathfrak l}$ and ${\mathfrak k}$. Since ${\mathfrak g}={\mathfrak k}_1\oplus{\mathfrak k}_2\oplus{\mathfrak m}_1\oplus{\mathfrak m}_2\oplus{\mathfrak m}_3$ for the case of $p=2$, $\frak{h}$ must contain only one of ${\mathfrak m}_1,{\mathfrak m}_2$ and ${\mathfrak m}_3$. If ${\mathfrak m}_1\subset\frak{h}$, then ${\mathfrak k}\oplus{\mathfrak m}_1$ is a subalgebra of ${\mathfrak g}$ according to \cite{ChKaLi}. In fact, for the case of ${\mathrm E}_6$-III, it is isomorphic to $A_1\oplus A_5$, where ${\mathfrak k}_2\oplus{\mathfrak m}_1\cong A_5$. Therefore by Theorem \ref{nr}, we have $x_2=x_3$, $x_4=x_5$. Similarly, for the case of ${\mathrm E}_8$-II, we have $x_1=x_2=x_3$, $x_4=x_5$. In a similar way, we can obtain 2) and 3) in the proposition for each case.
Now we consider the case ${\mathfrak l}\subset{\mathfrak k}$. Since the orthogonal complement ${\mathfrak l}^{\perp}$ of ${\mathfrak l}$ with respect to $B$ contains the orthogonal complement ${\mathfrak k}^{\perp}$ of ${\mathfrak k}$, we see that ${\mathfrak m}_1\oplus{\mathfrak m}_2\oplus{\mathfrak m}_3\subset{\mathfrak l}^{\perp}$. Since the invariant metric $<,>$ is naturally reductive with respect to $G\times L$, it follows that $x_3=x_4=x_5$ by Theorem \ref{nr}. The converse is a direct consequence of Theorem \ref{nr}.
\end{proof}
\textbf{Case of ${\mathrm E}_6$-III.} According to Lemma \ref{p=2}, we have
\begin{equation*}
\begin{split}
r_1&=\frac{1}{12}\left(\frac{1}{2x_1}+\frac{7x_1}{4x_4^2}+\frac{3x_1}{4x_5^2}\right),\\
r_2&=\frac{1}{84}\left(\frac{7}{x_2}+\frac{7x_2}{2x_3^2}+\frac{35x_2}{4x_4^2}+\frac{7x_2}{4x_5^2}\right),\\
r_3&=\frac{1}{2x_3}+\frac{1}{8}\left(\frac{x_3}{x_4x_5}-\frac{x_4}{x_3x_5}-\frac{x_5}{x_3x_4}\right)-\frac{1}{8}\frac{x_2}{x_3^2},\\
r_4&=\frac{1}{2x_4}+\frac{1}{16}\left(\frac{x_4}{x_3x_5}-\frac{x_3}{x_4x_5}-\frac{x_5}{x_3x_4}\right)-\frac{1}{56}\left(\frac{7x_1}{4x_4^2}+\frac{35x_2}{4x_4^2}\right),\\
r_5&=\frac{1}{2x_5}+\frac{7}{48}\left(\frac{x_5}{x_4x_3}-\frac{x_4}{x_3x_5}-\frac{x_3}{x_5x_4}\right)-\frac{1}{24}\left(\frac{3x_1}{4x_5^2}+\frac{7x_2}{4x_5^2}\right).
\end{split}
\end{equation*}
We consider the system of equations
\begin{equation}\label{eq21}
r_1-r_2=0, r_2-r_3=0, r_3-r_4=0, r_4-r_5=0.
\end{equation}
Then finding Einstein metrics of the form (\ref{metric2}) reduces to finding the positive solutions of system (\ref{eq21}), and we normalize the equations by putting $x_5 = 1$. Then we obtain the system of equations:
\begin{equation}\label{eq211}
\begin{split}
g_1=&3\,{x_{{1}}}^{2}{x_{{4}}}^{2}x_{{2}}{x_{{3}}}^{2}-x_{{1}}{x_{{2}}}^{2}{x_{{3}}}^{2}{x_{{4}}}^{2}+7\,{x_{{1}}}^{2}x_{{2}}{x_{{3}}}^{2}-5\,{x_{{2}}}^{2}x_{{1}}{x_{{3}}}^{2}\\
&-2\,{x_{{2}}}^{2}x_{{1}}{x_{{4}}}^{2}-4\,x_{{1}}{x_{{4}}}^{2}{x_{{3}}}^{2}+2\,x_{{2}}{x_{{4}}}^{2}{x_{{3}}}^{2}=0,\\
g_2=&{x_{{2}}}^{2}{x_{{3}}}^{2}{x_{{4}}}^{2}-6\,x_{{2}}{x_{{3}}}^{3}x_{{4}}+6\,x_{{2}}x_{{3}}{x_{{4}}}^{3}+5\,{x_{{2}}}^{2}{x_{{3}}}^{2}\\
&+8\,{x_{{2}}}^{2}{x_{{4}}}^{2}-24\,x_{{2}}x_{{3}}{x_{{4}}}^{2}+4\,{x_{{4}}}^{2}{x_{{3}}}^{2}+6\,x_{{2}}x_{{3}}x_{{4}}=0,\\
g_3=&6\,{x_{{3}}}^{3}x_{{4}}-6\,{x_{{4}}}^{3}x_{{3}}+x_{{1}}{x_{{3}}}^{2}+5\,x_{{2}}{x_{{3}}}^{2}-4\,x_{{2}}{x_{{4}}}^{2}\\
&-16\,x_{{4}}{x_{{3}}}^{2}+16\,{x_{{4}}}^{2}x_{{3}}-2\,x_{{3}}x_{{4}}=0,\\
g_4=&3\,x_{{1}}{x_{{4}}}^{2}x_{{3}}+7\,x_{{2}}x_{{3}}{x_{{4}}}^{2}+8\,x_{{4}}{x_{{3}}}^{2}-48\,{x_{{4}}}^{2}x_{{3}}\\
&+20\,{x_{{4}}}^{3}-3\,x_{{1}}x_{{3}}-15\,x_{{2}}x_{{3}}+48\,x_{{3}}x_{{4}}-20\,x_{{4}}=0.
\end{split}
\end{equation}
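The role of the auxiliary generator $z x_1 x_2 x_3 x_4-1$ used below can be illustrated on a toy ideal: adjoining $zx-1$ makes $x$ invertible, so the components of the variety with a zero coordinate are discarded by the elimination. A minimal sympy sketch (an illustration only, not the actual ${\mathrm E}_6$-III system):

```python
from sympy import symbols, groebner

z, x, y = symbols('z x y')

# Toy ideal (x*y - x, x**2 - x): its variety has two components,
# {x = 0} and {x = 1, y = 1}.  Adjoining z*x - 1 forces x != 0,
# so the Groebner basis cuts out only the second component.
gb = groebner([x*y - x, x**2 - x, z*x - 1], z, x, y, order='lex')
assert set(gb.exprs) == {z - 1, x - 1, y - 1}
print(gb.exprs)
```

In the actual computation the same saturation trick, applied to $\{g_1,\dots,g_4\}$ with a lexicographic order, produces a univariate polynomial in $x_4$ in the elimination ideal.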
We consider a polynomial ring $R=\mathbb{Q}[z, x_1, x_2, x_3, x_4]$ and an ideal $I$ generated by $\{g_1, g_2, g_3, g_4, z x_1 x_2 x_3 x_4-1\}$ to find non-zero solutions of equations (\ref{eq211}). We take a lexicographic order $>$ with $z>x_1 >x_2 >x_3 >x_4$ for a monomial ordering on $R$. Then, with the aid of a computer, the following polynomial is contained in the Gr\"{o}bner basis for the ideal $I$:
\begin{equation*}
\left( x_{{4}}-1 \right) \left( 5\,x_{{4}}-3 \right) \left( 5\,x_{{4}}-19 \right) h(x_4),
\end{equation*}
where $h(x_4)$ is of the form
\begin{equation}
\begin{split}
h(x_4)=&620527834748568712226625\,{x_{{4}}}^{46}-17142288459030942157682550\,{x_{{4}}}^{45}\\
&+253864478218386260238125175\,{x_{{4}}}^{44}-2655902456682476684982068196\,{x_{{4}}}^{43}\\
&+21712791139584353835485509485\,{x_{{4}}}^{42}-146403945857203174056695826906\,{x_{{4}}}^{41}\\
&+841869064160931856135565647035\,{x_{{4}}}^{40}-4221496823183250288515785683288\,{x_{{4}}}^{39}\\
&+18759663705905743422883726751607\,{x_{{4}}}^{38}-74772865457920384779255097278978\,{x_{{4}}}^{37}\\
&+269810159883362340386858437110705\,{x_{{4}}}^{36}-887921763230246899234386977810964\,{x_{{4}}}^{35}\\
&+2680934636604177050138806853307267\,{x_{{4}}}^{34}-7463374027396529763571324631419086\,{x_{{4}}}^{33}\\
&+19236417063475016928618367461353205\,{x_{{4}}}^{32}-46067840601646061767106985652544544\,{x_{{4}}}^{31}\\
&+102827687516709239196388118203922250\,{x_{{4}}}^{30}-214524790280392516306729959721167612\,{x_{{4}}}^{29}\\
&+419397032805870597215879980744285542\,{x_{{4}}}^{28}-
770243915769947385884764661220911880\,{x_{{4}}}^{27}\\&+
1332121344976070900463276148279367906\,{x_{{4}}}^{26}-
2174890268260219400375855326539637796\,{x_{{4}}}^{25}\\&+
3360427246597106223273731326789411118\,{x_{{4}}}^{24}-
4926216667916456054561642964481111312\,{x_{{4}}}^{23}\\&+
6868691169547446497713714767840930254\,{x_{{4}}}^{22}-
9130195555722867119052619469924183268\,{x_{{4}}}^{21}\\&+
11592691118354371158751244714914822658\,{x_{{4}}}^{20}-
14080199932731634104314039587318940936\,{x_{{4}}}^{19}\\&+
16371489061442382163179799619487885062\,{x_{{4}}}^{18}-
18224385478214023428533707109785905340\,{x_{{4}}}^{17}\\&+
19411619002013319176816159460886119530\,{x_{{4}}}^{16}-
19763525846081914361620786672655930400\,{x_{{4}}}^{15}\\&+
19206776730802850981024169581516423525\,{x_{{4}}}^{14}-
17785423401614562582040867258711520750\,{x_{{4}}}^{13}\\&+
15655271805998900455634624058209469875\,{x_{{4}}}^{12}-
13053707551812208998459860496371252500\,{x_{{4}}}^{11}\\&+
10256813778567948010832363355641230625\,{x_{{4}}}^{10}-
7536513577438512742744940874573881250\,{x_{{4}}}^{9}\\&+
5123727743209309471365339473434734375\,{x_{{4}}}^{8}-
3177971936471765628211397123442375000\,{x_{{4}}}^{7}\\&+
1766171205209537862901035966120796875\,{x_{{4}}}^{6}-
859452515630617218777390718317656250\,{x_{{4}}}^{5}\\&+
355236019137782072058927844356328125\,{x_{{4}}}^{4}-
119504109722879595751903187339062500\,{x_{{4}}}^{3}\\&+
30626968356732968210488474290234375\,{x_{{4}}}^{2}-
5304673599935287496386273558593750\,x_{{4}}\\&+
463705449010204912215012369140625.
\end{split}
\end{equation}
In fact, in the Gr\"{o}bner basis of the ideal $I$, $x_1, x_2$ and $x_3$ can be expressed as polynomials in $x_4$. By solving $h(x_4)=0$ numerically, we obtain four solutions, namely $x_4\approx0.6711159524$, $0.8439629969$, $0.9167404817$ and $2.171597540$, and the corresponding solutions of the system of equations $\{g_1=0,g_2=0,g_3=0,g_4=0,h(x_4)=0\}$ with $x_1x_2x_3x_4\neq0$ are as follows:
\begin{equation*}
\begin{split}
&\{x_1\approx0.1550362575,x_2\approx0.7478555199,x_3\approx1.042517676,x_4\approx0.6711159524\},\\
&\{x_1\approx1.366119112,x_2\approx0.2521439928,x_3\approx0.6761926469,x_4\approx0.8439629969\},\\
&\{x_1\approx0.09898950458,x_2\approx0.2192177752,x_3\approx0.5309066187,x_4\approx0.9167404817\},\\
&\{x_1\approx0.2432173551,x_2\approx0.4934849553,x_3\approx2.270522057,x_4\approx2.171597540\}.
\end{split}
\end{equation*}
Due to Proposition \ref{nrp=2}, we conclude that each of these four solutions induces a non-naturally reductive Einstein metric.\\
For $x_4=1$, the system $\{g_1=0,g_2=0,g_3=0,g_4=0\}$ has the following four solutions:
\begin{equation*}
\begin{split}
&\{x_1=1,x_2=1,x_3=1,x_4=1\},\\
&\{x_1\approx0.1030504001,x_2\approx0.3244706112,x_3\approx0.3244706112,x_4=1\},\\
&\{x_1\approx1.613068224,x_2\approx0.4009377358,x_3\approx0.4009377358,x_4=1\},\\
&\{x_1\approx0.1984241041,x_2\approx1.108447830,x_3\approx1.108447830,x_4=1\}.
\end{split}
\end{equation*}
For $x_4=\frac{3}{5}$, the system $\{g_1=0,g_2=0,g_3=0,g_4=0\}$ has only one solution given by
\begin{equation*}
\{x_1=x_2=x_4=\frac{3}{5},x_3=1\},
\end{equation*}
and for $x_4=\frac{19}{5}$, the system $\{g_1=0,g_2=0,g_3=0,g_4=0\}$ also has only one solution given by
\begin{equation*}
\{x_1=x_2=1,x_3=x_4=\frac{19}{5}\}.
\end{equation*}
According to Proposition \ref{nrp=2}, these corresponding left-invariant Einstein metrics are all naturally reductive.
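These exact solutions can be double-checked with rational arithmetic; a small Python sketch of the normalized system (\ref{eq211}) (a sanity check only):

```python
from fractions import Fraction as F

def g(x1, x2, x3, x4):
    # the normalized system (x5 = 1) for the E6-III case
    g1 = (3*x1**2*x4**2*x2*x3**2 - x1*x2**2*x3**2*x4**2 + 7*x1**2*x2*x3**2
          - 5*x2**2*x1*x3**2 - 2*x2**2*x1*x4**2 - 4*x1*x4**2*x3**2
          + 2*x2*x4**2*x3**2)
    g2 = (x2**2*x3**2*x4**2 - 6*x2*x3**3*x4 + 6*x2*x3*x4**3 + 5*x2**2*x3**2
          + 8*x2**2*x4**2 - 24*x2*x3*x4**2 + 4*x4**2*x3**2 + 6*x2*x3*x4)
    g3 = (6*x3**3*x4 - 6*x4**3*x3 + x1*x3**2 + 5*x2*x3**2 - 4*x2*x4**2
          - 16*x4*x3**2 + 16*x4**2*x3 - 2*x3*x4)
    g4 = (3*x1*x4**2*x3 + 7*x2*x3*x4**2 + 8*x4*x3**2 - 48*x4**2*x3
          + 20*x4**3 - 3*x1*x3 - 15*x2*x3 + 48*x3*x4 - 20*x4)
    return g1, g2, g3, g4

# the Killing metric and the two naturally reductive solutions are exact roots
assert g(F(1), F(1), F(1), F(1)) == (0, 0, 0, 0)
assert g(F(3,5), F(3,5), F(1), F(3,5)) == (0, 0, 0, 0)
assert g(F(1), F(1), F(19,5), F(19,5)) == (0, 0, 0, 0)
print("exact solutions verified")
```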
In summary, we find 4 different non-naturally reductive left-invariant Einstein metrics on ${\mathrm E}_6$-III.
\textbf{Case of ${\mathrm E}_8$-II.} According to Lemma \ref{p=2}, we have
\begin{equation*}
\begin{split}
r_1&=\frac{1}{112}\left(\frac{28}{5x_1}+\frac{112x_1}{15x_3^2}+\frac{112x_1}{15x_4^2}+\frac{112x_1}{15x_5^2}\right),\\
r_2&=\frac{1}{112}\left(\frac{28}{5x_2}+\frac{112x_2}{15x_3^2}+\frac{112x_2}{15x_4^2}+\frac{112x_2}{15x_5^2}\right),\\
r_3&=\frac{1}{2x_3}+\frac{2}{15}\left(\frac{x_3}{x_4x_5}-\frac{x_4}{x_3x_5}-\frac{x_5}{x_3x_4}\right)-\frac{1}{128}\left(\frac{112x_1}{15x_3^2}+\frac{112x_2}{15x_3^2}\right),\\
r_4&=\frac{1}{2x_4}+\frac{2}{15}\left(\frac{x_4}{x_3x_5}-\frac{x_3}{x_4x_5}-\frac{x_5}{x_3x_4}\right)-\frac{1}{128}\left(\frac{112x_1}{15x_4^2}+\frac{112x_2}{15x_4^2}\right),\\
r_5&=\frac{1}{2x_5}+\frac{2}{15}\left(\frac{x_5}{x_4x_3}-\frac{x_4}{x_3x_5}-\frac{x_3}{x_5x_4}\right)-\frac{1}{128}\left(\frac{112x_1}{15x_5^2}+\frac{112x_2}{15x_5^2}\right).
\end{split}
\end{equation*}
We consider the system of equations given by
\begin{equation}\label{eq22}
r_1-r_2=0, r_2-r_3=0, r_3-r_4=0, r_4-r_5=0,
\end{equation}
and normalize the equations by putting $x_5 = 1$, then we obtain the system of equations:
\begin{equation}\label{eq222}
\begin{split}
g_1=&4\,{x_{{1}}}^{2}{x_{{3}}}^{2}{x_{{4}}}^{2}x_{{2}}-4\,x_{{1}}{x_{{2}}}^{2}{x_{{3}}}^{2}{x_{{4}}}^{2}+4\,{x_{{1}}}^{2}{x_{{3}}}^{2}x_{{2}}+4\,{x_{{1}}}^{2}{x_{{4}}}^{2}x_{{2}}\\&-4\,{x_{{2}}}^{2}x_{{1}}{x_{{3}}}^{2}-4\,{x_{{2}}}^{2}x_{{1}}{x_{{4}}}^{2}-3x_{{1}}{x_{{3}}}^{2}{x_{{4}}}^{2}+3x_{{2}}{x_{{3}}}^{2}{x_{{4}}}^{2}=0,\\
g_2=&8\,{x_{{2}}}^{2}{x_{{3}}}^{2}{x_{{4}}}^{2}-16\,{x_{{3}}}^{3}x_{{2}}x_{{4}}+16\,{x_{{4}}}^{3}x_{{2}}x_{{3}}+7\,x_{{1}}x_{{2}}{x_{{4}}}^{2}+8\,{x_{{2}}}^{2}{x_{{3}}}^{2}\\&+15\,{x_{{2}}}^{2}{x_{{4}}}^{2}-60\,x_{{2}}x_{{3}}{x_{{4}}}^{2}+6\,{x_{{3}}}^{2}{x_{{4}}}^{2}+16\,x_{{2}}x_{{3}}x_{{4}}=0,\\
g_3=&32\,{x_{{3}}}^{3}x_{{4}}-32\,{x_{{4}}}^{3}x_{{3}}+7\,x_{{1}}{x_{{3}}}^{2}-7\,x_{{1}}{x_{{4}}}^{2}+7\,x_{{2}}{x_{{3}}}^{2}-7\,x_{{2}}{x_{{4}}}^{2}\\&-60\,x_{{4}}{x_{{3}}}^{2}+60\,x_{{3}}{x_{{4}}}^{2}=0,\\
g_4=&7\,x_{{1}}x_{{3}}{x_{{4}}}^{2}+7\,x_{{2}}x_{{3}}{x_{{4}}}^{2}-60\,x_{{3}}{x_{{4}}}^{2}+32\,{x_{{4}}}^{3}-7\,x_{{1}}x_{{3}}\\&-7\,x_{{2}}x_{{3}}+60\,x_{{3}}x_{{4}}-32\,x_{{4}}=0.
\end{split}
\end{equation}
We consider a polynomial ring $R=\mathbb{Q}[z, x_1, x_2, x_3, x_4]$ and an ideal $I$ generated by $\{g_1, g_2, g_3, g_4, z x_1 x_2 x_3 x_4-1\}$ to find non-zero solutions of equations (\ref{eq222}). We take a lexicographic order $>$ with $z>x_1 >x_2 >x_3 >x_4$ for a monomial ordering on $R$. Then, with the aid of a computer, the following polynomial is contained in the Gr\"{o}bner basis for the ideal $I$:
\begin{equation*}
\left( x_{{4}}-1 \right) \left( 23\,x_{{4}}-7 \right) \left( 7\,x_{{4}}-23 \right) h(x_4),
\end{equation*}
where
\begin{equation*}
\begin{split}
h(x_4)=&18820892214681403392\,{x_{{4}}}^{24}-106573710368905887744\,{x_{{4}}}^{23}\\
&+367021480848929587200\,{x_{{4}}}^{22}-989697149383674494976\,{x_{{4}}}^{21}\\
&+1859094664559751753728\,{x_{{4}}}^{20}-
3257511072225679640576\,{x_{{4}}}^{19}\\&+4280088309639423272992\,{x_{{4}
}}^{18}-5679995572440505667140\,{x_{{4}}}^{17}\\&+6595970829340416842428
\,{x_{{4}}}^{16}-7511322681489787363579\,{x_{{4}}}^{15}\\&+
9419511263909486275350\,{x_{{4}}}^{14}-9260548321425771133485\,{x_{{4}
}}^{13}\\&+11189718816841142104820\,{x_{{4}}}^{12}-9260548321425771133485
\,{x_{{4}}}^{11}\\&+9419511263909486275350\,{x_{{4}}}^{10}-
7511322681489787363579\,{x_{{4}}}^{9}\\&+6595970829340416842428\,{x_{{4}}
}^{8}-5679995572440505667140\,{x_{{4}}}^{7}\\&+4280088309639423272992\,{x
_{{4}}}^{6}-3257511072225679640576\,{x_{{4}}}^{5}\\&+
1859094664559751753728\,{x_{{4}}}^{4}-989697149383674494976\,{x_{{4}}}
^{3}\\&+367021480848929587200\,{x_{{4}}}^{2}-106573710368905887744\,x_{{4
}}\\&+18820892214681403392.
\end{split}
\end{equation*}
By solving $h(x_4)=0$ numerically, we find four positive solutions, given approximately by $x_4\approx0.3526915707$ (this solution makes $x_2$ negative, so we discard it), $x_4\approx0.7261283537$, $x_4\approx2.835338531$ and $x_4\approx1.377166991$, and we split the corresponding solutions of the system of equations $\{g_1=0,g_2=0,g_3=0,g_4=0,h(x_4)=0\}$ with $x_1x_2x_3x_4\neq0$ into two groups as follows:
\[\mbox{Group 1.}\left\{\begin{aligned}
&\{x_1\approx1.304885525,x_2\approx0.4602586724,x_3\approx2.835338531,x_4\approx2.835338531\},\\
&\{x_1\approx0.4602586724,x_2\approx1.304885525,x_3\approx2.835338531,x_4\approx2.835338531\}.
\end{aligned}\right.\]
\[\mbox{Group 2.}\left\{\begin{aligned}
&\{x_1\approx0.1431443064,x_2\approx0.1431443064,x_3=1,x_4\approx0.7261283537\},\\
&\{x_1\approx0.1971336881,x_2\approx0.1971336881,x_3\approx1.377166991,x_4\approx1.377166991\}.
\end{aligned}\right.\]
For $x_4=1$, the system $\{g_1=0,g_2=0,g_3=0,g_4=0\}$ has five solutions which can be split into the following four groups:
\[\mbox{Group 3.}\begin{aligned}
\{x_1=x_2=x_3=x_4=1\},
\end{aligned}\]
\[\mbox{Group 4.}\begin{aligned}
\{x_1=x_2=x_3=\frac{7}{23},x_4=1\},
\end{aligned}\]
\[\mbox{Group 5.}\left\{\begin{aligned}
&\{x_1\approx0.4602221254,x_2\approx0.1623293541,x_3\approx0.3526915707,x_4=1\},\\
&\{x_1\approx0.1623293541,x_2\approx0.4602221254,x_3\approx0.3526915707,x_4=1\}.
\end{aligned}\right.\]
\[\mbox{Group 6.}\begin{aligned}
\{x_1\approx0.1431443064,x_2\approx0.1431443064,x_3\approx0.7261283537,x_4=1\}.
\end{aligned}\]
For $x_4=\frac{7}{23}$ and $x_4=\frac{23}{7}$, the corresponding solutions of the system of equations $\{g_1=0,g_2=0,g_3=0,g_4=0\}$ are, respectively, as follows:
\[\mbox{Group 7.}\left\{\begin{aligned}
&\{x_1=x_2=x_4=\frac{7}{23},x_3=1\},\\
&\{x_1=x_2=1,x_3=x_4=\frac{23}{7}\}.
\end{aligned}\right.\]
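The exact solutions above, as well as the approximate ones, can be checked directly against the normalized system (\ref{eq222}); a short Python sketch (a sanity check only):

```python
from fractions import Fraction as F

def g(x1, x2, x3, x4):
    # the normalized system (x5 = 1) for the E8-II case
    g1 = (4*x1**2*x3**2*x4**2*x2 - 4*x1*x2**2*x3**2*x4**2 + 4*x1**2*x3**2*x2
          + 4*x1**2*x4**2*x2 - 4*x2**2*x1*x3**2 - 4*x2**2*x1*x4**2
          - 3*x1*x3**2*x4**2 + 3*x2*x3**2*x4**2)
    g2 = (8*x2**2*x3**2*x4**2 - 16*x3**3*x2*x4 + 16*x4**3*x2*x3
          + 7*x1*x2*x4**2 + 8*x2**2*x3**2 + 15*x2**2*x4**2
          - 60*x2*x3*x4**2 + 6*x3**2*x4**2 + 16*x2*x3*x4)
    g3 = (32*x3**3*x4 - 32*x4**3*x3 + 7*x1*x3**2 - 7*x1*x4**2 + 7*x2*x3**2
          - 7*x2*x4**2 - 60*x4*x3**2 + 60*x3*x4**2)
    g4 = (7*x1*x3*x4**2 + 7*x2*x3*x4**2 - 60*x3*x4**2 + 32*x4**3
          - 7*x1*x3 - 7*x2*x3 + 60*x3*x4 - 32*x4)
    return g1, g2, g3, g4

# exact solutions: the Killing metric (Group 3) and the naturally reductive ones
assert g(F(1), F(1), F(1), F(1)) == (0, 0, 0, 0)
assert g(F(7,23), F(7,23), F(7,23), F(1)) == (0, 0, 0, 0)
assert g(F(1), F(1), F(23,7), F(23,7)) == (0, 0, 0, 0)

# the first numeric solution of Group 2 satisfies the system up to roundoff
vals = g(0.1431443064, 0.1431443064, 1.0, 0.7261283537)
assert all(abs(v) < 1e-3 for v in vals)
print("solutions verified")
```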
Among these solutions, we remark that the solution in Group 3 induces the Killing metric. The solutions in Group 4 and Group 7 induce the same metrics up to isometry, which are naturally reductive by Proposition \ref{nrp=2}. The solutions in Group 1 and Group 5 induce the same metrics up to isometry, and the solutions in Group 2 and Group 6 likewise induce the same metrics up to isometry; these metrics are all non-naturally reductive by Proposition \ref{nrp=2}.
Therefore, we find 2 non-naturally reductive Einstein metrics on ${\mathrm E}_8$-II.
\textbf{Case of $p=3$}. For similar reasons, we give the following proposition to decide whether a left-invariant metric is naturally reductive.
\begin{prop}\label{nrp=3}
If a left-invariant metric $<,>$ of the form (\ref{metric1}) on $G$ is naturally reductive with respect to $G\times L$ for some closed subgroup $L$ of $G$, then for the case of $p=3$, one of the following holds: 1) $x_1=x_2=x_3=x_4$, $x_5=x_6$; 2) $x_2=x_3=x_5$, $x_4=x_6$; 3) $x_1=x_3=x_6$, $x_4=x_5$; 4) $x_4=x_5=x_6$.
Conversely, if one of 1), 2), 3), 4) is satisfied, then the metric of the form (\ref{metric3}) for the case of $p=3$ is naturally reductive with respect to $G\times L$ for some closed subgroup $L$ of $G$.
\end{prop}
\textbf{Case of ${\mathrm F}_4$-II.} According to Lemma \ref{p=3}, we have
\begin{equation*}
\begin{split}
r_1&=\frac{1}{12}\left(\frac{2}{3x_1}+\frac{5x_1}{3x_4^2}+\frac{2x_1}{3x_6^2}\right),\\
r_2&=\frac{1}{12}\left(\frac{2}{3x_2}+\frac{5x_2}{3x_4^2}+\frac{2x_2}{3x_5^2}\right),\\
r_3&=\frac{1}{40}\left(\frac{10}{3x_3}+\frac{40x_3}{9x_4^2}+\frac{10x_3}{9x_5^2}+\frac{10x_3}{9x_6^2}\right),\\
r_4&=\frac{1}{2x_4}+\frac{1}{18}\left(\frac{x_4}{x_5x_6}-\frac{x_5}{x_6x_4}-\frac{x_6}{x_4x_5}\right)-\frac{1}{40}\left(\frac{5x_1}{3x_4^2}+\frac{5x_2}{3x_4^2}+\frac{40x_3}{9x_4^2}\right),\\
r_5&=\frac{1}{2x_5}+\frac{5}{36}\left(\frac{x_5}{x_6x_4}-\frac{x_6}{x_4x_5}-\frac{x_4}{x_5x_6}\right)-\frac{1}{16}\left(\frac{2x_2}{3x_5^2}+\frac{10x_3}{9x_5^2}\right),\\
r_6&=\frac{1}{2x_6}+\frac{5}{36}\left(\frac{x_6}{x_5x_4}-\frac{x_5}{x_6x_4}-\frac{x_4}{x_5x_6}\right)-\frac{1}{16}\left(\frac{2x_1}{3x_6^2}+\frac{10x_3}{9x_6^2}\right).
\end{split}
\end{equation*}
We consider the system of equations
\begin{equation}\label{eq31}
r_1-r_2=0, r_2-r_3=0, r_3-r_4=0, r_4-r_5=0,r_5-r_6=0.
\end{equation}
Then finding Einstein metrics of the form (\ref{metric3}) reduces to finding the positive solutions of system (\ref{eq31}), and we normalize the equations by putting $x_4 = 1$. Then we obtain the system of equations:
\begin{equation}\label{eq311}
\begin{split}
g_1=&5\,{x_{{1}}}^{2}x_{{2}}{x_{{5}}}^{2}{x_{{6}}}^{2}-5\,x_{{1}}{x_{{2}}}^
{2}{x_{{5}}}^{2}{x_{{6}}}^{2}+2\,{x_{{1}}}^{2}x_{{2}}{x_{{5}}}^{2}-2\,
x_{{1}}{x_{{2}}}^{2}{x_{{6}}}^{2}-2\,x_{{1}}{x_{{5}}}^{2}{x_{{6}}}^{2}
+2\,x_{{2}}{x_{{5}}}^{2}{x_{{6}}}^{2}=0,\\
g_2=&5\,{x_{{2}}}^{2}x_{{3}}{x_{{5}}}^{2}{x_{{6}}}^{2}-4\,x_{{2}}{x_{{3}}}^
{2}{x_{{5}}}^{2}{x_{{6}}}^{2}+2\,{x_{{2}}}^{2}x_{{3}}{x_{{6}}}^{2}-x_{
{2}}{x_{{3}}}^{2}{x_{{5}}}^{2}-x_{{2}}{x_{{3}}}^{2}{x_{{6}}}^{2}-3\,x_
{{2}}{x_{{5}}}^{2}{x_{{6}}}^{2}\\&+2\,x_{{3}}{x_{{5}}}^{2}{x_{{6}}}^{2}=0,\\
g_3=&-3\,x_{{1}}{x_{{5}}}^{2}x_{{6}}-3\,x_{{2}}{x_{{5}}}^{2}x_{{6}}-8\,x_{{
3}}{x_{{5}}}^{2}x_{{6}}-14\,{x_{{5}}}^{3}+36\,{x_{{5}}}^{2}x_{{6}}+6\,
x_{{5}}{x_{{6}}}^{2}+3\,x_{{2}}x_{{6}}+5\,x_{{3}}x_{{6}}\\&-36\,x_{{5}}x_
{{6}}+14\,x_{{5}}=0,\\
g_4=&-14\,{x_{{4}}}^{3}x_{{5}}+14\,x_{{4}}{x_{{5}}}^{3}+3\,x_{{1}}{x_{{5}}}^{2}-3\,x_{{2}}{x_{{4}}}^{2}+3\,x_{{2}}{x_{{5}}}^{2}-5\,x_{{3}}{x_{{4}}}^{2}+8\,x_{{3}}{x_{{5}}}^{2}\\
&+36\,{x_{{4}}}^{2}x_{{5}}-36\,x_{{4}}{x_{{5}}}^{2}-6\,x_{{4}}x_{{5}}=0,\\
g_5=&20\,{x_{{5}}}^{3}x_{{6}}-20\,x_{{5}}{x_{{6}}}^{3}+3\,x_{{1}}{x_{{5}}}^
{2}-3\,x_{{2}}{x_{{6}}}^{2}+5\,x_{{3}}{x_{{5}}}^{2}-5\,x_{{3}}{x_{{6}}
}^{2}-36\,{x_{{5}}}^{2}x_{{6}}+36\,x_{{5}}{x_{{6}}}^{2}=0.
\end{split}
\end{equation}
We consider the polynomial ring $R=\mathbb{Q}[z, x_1, x_2, x_3, x_5, x_6]$ and the ideal $I$ generated by $\{g_1, g_2, g_3, g_4, g_5, z x_1 x_2 x_3 x_5 x_6-1\}$ to find the non-zero solutions of equations (\ref{eq311}). We take the lexicographic order $>$ with $z>x_1 >x_2 >x_3 >x_5>x_6$ as a monomial ordering on $R$. Then, with the help of a computer, we see that the Gr\"{o}bner basis for the ideal $I$ contains the following polynomial:
\begin{equation}
( x_{{6}}-1 ) ( 7\,x_{{6}}-11 ) ( 2375\,
{x_{{6}}}^{3}-4195\,{x_{{6}}}^{2}+1960\,x_{{6}}-272
)\cdot h(x_6),
\end{equation}
where $h(x_6)$ is a polynomial in $x_6$ of degree 114; we put it in Appendix I for the readers' convenience.
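The auxiliary generator $z x_1 x_2 x_3 x_5 x_6-1$ above serves to exclude solutions in which some variable vanishes. The following is a minimal sketch of this localization trick on a deliberately tiny toy system (the polynomials here are hypothetical illustrations, not the $g_i$ above), assuming the SymPy library is available:

```python
# Toy version of the localization trick: adjoining z*x*y - 1 to an ideal
# forces x and y to be invertible, so the Groebner basis describes only
# the solutions with x*y != 0.
from sympy import symbols, groebner, reduced

z, x, y = symbols('z x y')

# x**2 - x = 0 alone admits the spurious root x = 0;
# the extra generator z*x*y - 1 removes it.
gens = [x**2 - x, x*y - y, z*x*y - 1]
gb = groebner(gens, z, x, y, order='lex')  # lex order with z > x > y

# Once x is invertible, x - 1 lies in the ideal, so its normal form
# with respect to the Groebner basis is zero.
_, remainder = reduced(x - 1, list(gb.exprs), z, x, y, order='lex')
```

Dividing $x-1$ by the basis leaves remainder $0$, confirming that the saturated ideal forces $x=1$; the computations in the text apply the same device to $g_1,\dots,g_5$.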
For $x_6=1$, from the polynomials in the Gr\"{o}bner basis, we get four values of $x_5$, namely $0.2797176824$, $0.3650688296$, $1$, $1.121529277$, and the corresponding solutions of the system $\{g_1=0,g_2=0,g_3=0,g_4=0,g_5=0\}$ can be given as follows:
\[\mbox{Group 1.}\begin{aligned}
\{x_1=x_2=x_3=x_5=x_6=1\}.
\end{aligned}\]
\[\mbox{Group 2.}\left\{\begin{aligned}
&\{x_1\approx0.1355974584,x_2=x_3=x_5\approx0.2797176824,x_6=1\},\\
&\{x_1\approx1.653201132,x_2=x_3=x_5\approx0.3650688296,x_6=1\},\\
&\{x_1\approx0.2762168891,x_2=x_3=x_5\approx1.121529277,x_6=1\}.
\end{aligned}\right.\]
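As a sanity check, one can substitute a quoted approximate solution back into the normalized system (\ref{eq311}) and confirm that all residuals vanish to within the precision of the approximations. A minimal sketch in Python, using the first solution of Group 2:

```python
# Evaluate g_1,...,g_5 of the normalized system (x_4 = 1) at the first
# quoted Group 2 solution; every residual should be ~0 up to the
# 10-digit precision of the quoted approximations.
x1, x2, x3, x5, x6 = 0.1355974584, 0.2797176824, 0.2797176824, 0.2797176824, 1.0
x4 = 1.0  # normalization

g1 = (5*x1**2*x2*x5**2*x6**2 - 5*x1*x2**2*x5**2*x6**2 + 2*x1**2*x2*x5**2
      - 2*x1*x2**2*x6**2 - 2*x1*x5**2*x6**2 + 2*x2*x5**2*x6**2)
g2 = (5*x2**2*x3*x5**2*x6**2 - 4*x2*x3**2*x5**2*x6**2 + 2*x2**2*x3*x6**2
      - x2*x3**2*x5**2 - x2*x3**2*x6**2 - 3*x2*x5**2*x6**2 + 2*x3*x5**2*x6**2)
g3 = (-3*x1*x5**2*x6 - 3*x2*x5**2*x6 - 8*x3*x5**2*x6 - 14*x5**3 + 36*x5**2*x6
      + 6*x5*x6**2 + 3*x2*x6 + 5*x3*x6 - 36*x5*x6 + 14*x5)
g4 = (-14*x4**3*x5 + 14*x4*x5**3 + 3*x1*x5**2 - 3*x2*x4**2 + 3*x2*x5**2
      - 5*x3*x4**2 + 8*x3*x5**2 + 36*x4**2*x5 - 36*x4*x5**2 - 6*x4*x5)
g5 = (20*x5**3*x6 - 20*x5*x6**3 + 3*x1*x5**2 - 3*x2*x6**2 + 5*x3*x5**2
      - 5*x3*x6**2 - 36*x5**2*x6 + 36*x5*x6**2)
residual = max(abs(g) for g in (g1, g2, g3, g4, g5))
```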
For $x_6=\frac{11}{7}$, substituting it into the polynomials in the Gr\"{o}bner basis, we get the following corresponding solution of the system $\{g_1=0,g_2=0,g_3=0,g_4=0,g_5=0\}$ with $x_1x_2x_3x_5x_6\neq0$:
\[\mbox{Group 3.}\begin{aligned}
\{x_1=x_2=x_3=1,x_5=x_6=\frac{11}{7}\}.
\end{aligned}\]
By solving $2375\,{x_{{6}}}^{3}-4195\,{x_{{6}}}^{2}+1960\,x_{{6}}-272=0$ numerically, there are three positive solutions given approximately by $x_6\approx0.2797176824$, $x_6\approx0.3650688296$ and $x_6\approx1.121529277$, and the corresponding solutions of the system $\{g_1=0,g_2=0,g_3=0,g_4=0,g_5=0,2375\,{x_{{6}}}^{3}-4195\,{x_{{6}}}^{2}+1960\,x_{{6}}-272=0\}$ with $x_1x_2x_3x_5x_6\neq0$
are:
\[\mbox{Group 3.}\left\{\begin{aligned}
&\{x_2\approx0.1355974584,x_1=x_3=x_6\approx0.2797176824,x_5=1\},\\
&\{x_2\approx1.653201132,x_1=x_3=x_6\approx0.3650688296,x_5=1\},\\
&\{x_2\approx0.2762168891,x_1=x_3=x_6\approx1.121529277,x_5=1\}.
\end{aligned}\right.\]
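The three values of $x_6$ above are the positive roots of the cubic factor $2375\,x_6^3-4195\,x_6^2+1960\,x_6-272$ of the Gr\"{o}bner basis element; a quick numerical check (a sketch, assuming NumPy is available):

```python
# Roots of the cubic factor from the Groebner basis element; the three
# real roots should match the approximate values of x_6 quoted in the text.
import numpy as np

roots = np.sort(np.roots([2375, -4195, 1960, -272]).real)
quoted = [0.2797176824, 0.3650688296, 1.121529277]
```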
By solving $h(x_6)=0$ numerically, we get 6 different positive solutions, namely $0.4941864913$, $0.7403305751$, $1.068217773$, $1.160571982$, $1.345214992$, $1.422410517$, whose corresponding solutions of the system $\{g_1=0,g_2=0,g_3=0,g_4=0,g_5=0,h(x_6)=0\}$ with $x_1x_2x_3x_5x_6\neq0$ can be split into the following three groups:
\[\mbox{Group 4.}\left\{\begin{aligned}
&\{x_1\approx0.1516435461,x_2\approx0.1404443065,x_3\approx0.2282956381,x_5\approx1.068217773,x_6\approx0.4941864913\},\\
&\{x_1\approx0.1404443065,x_2\approx0.1516435461,x_3\approx0.2282956381,x_5\approx0.4941864913,x_6\approx1.068217773\}.
\end{aligned}\right.\]
\[\mbox{Group 5.}\left\{\begin{aligned}
&\{x_1\approx0.1951256737,x_2\approx1.654400436,x_3\approx0.3012253093,x_5\approx1.160571982,x_6\approx0.7403305751\},\\
&\{x_1\approx1.654400436,x_2\approx0.1951256737,x_3\approx0.3012253093,x_5\approx0.7403305751,x_6\approx1.160571982\}.
\end{aligned}\right.\]
\[\mbox{Group 6.}\left\{\begin{aligned}
&\{x_1\approx0.3075015814,x_2\approx1.094420015,x_3\approx1.138681932,x_5\approx1.422410516,x_6\approx1.345214992\},\\
&\{x_1\approx1.094420015,x_2\approx0.3075015814,x_3\approx1.138681932,x_5\approx1.345214992,x_6\approx1.422410516\}.
\end{aligned}\right.\]
Among these metrics, we remark that the left-invariant Einstein metrics induced by the solutions in Groups 1, 2 and 3 are all naturally reductive, while those induced by the solutions in Groups 4, 5 and 6 are all non-naturally reductive due to Proposition \ref{nrp=3}. In particular, the solutions in Group 2 and Group 3 induce the same metrics up to isometry respectively, and the two solutions in each of Groups 4--6 induce the same metric up to isometry.
In conclusion, we find 3 different non-naturally reductive left-invariant Einstein metrics on ${\mathrm F}_4$-II.
\textbf{Case of ${\mathrm E}_8$-I.} According to Lemma \ref{p=3}, we have
\begin{equation}\label{eqnE8I}
\begin{split}
r_1&=\frac{1}{12}\left(\frac{1}{5x_1}+\frac{6x_1}{5x_4^2}+\frac{8x_1}{5x_6^2}\right),\\
r_2&=\frac{1}{12}\left(\frac{1}{5x_2}+\frac{6x_2}{5x_4^2}+\frac{8x_2}{5x_5^2}\right),\\
r_3&=\frac{1}{264}\left(\frac{22}{x_3}+\frac{44x_3}{5x_4^2}+\frac{88x_3}{5x_5^2}+\frac{88x_3}{5x_6^2}\right),\\
r_4&=\frac{1}{2x_4}+\frac{2}{15}\left(\frac{x_4}{x_5x_6}-\frac{x_5}{x_6x_4}-\frac{x_6}{x_4x_5}\right)-\frac{1}{96}\left(\frac{6x_1}{5x_4^2}+\frac{6x_2}{5x_4^2}+\frac{44x_3}{5x_4^2}\right),\\
r_5&=\frac{1}{2x_5}+\frac{1}{10}\left(\frac{x_5}{x_6x_4}-\frac{x_6}{x_4x_5}-\frac{x_4}{x_5x_6}\right)-\frac{1}{128}\left(\frac{8x_2}{5x_5^2}+\frac{88x_3}{5x_5^2}\right),\\
r_6&=\frac{1}{2x_6}+\frac{1}{10}\left(\frac{x_6}{x_5x_4}-\frac{x_5}{x_6x_4}-\frac{x_4}{x_5x_6}\right)-\frac{1}{128}\left(\frac{8x_1}{5x_6^2}+\frac{88x_3}{5x_6^2}\right).
\end{split}
\end{equation}
We consider the system of equations
\begin{equation}\label{eq32}
r_1-r_2=0, r_2-r_3=0, r_3-r_4=0, r_4-r_5=0, r_5-r_6=0.
\end{equation}
Then finding Einstein metrics of the form (\ref{metric3}) reduces to finding the positive solutions of system (\ref{eq32}). Normalizing the equations by putting $x_4 = 1$, we obtain the system of equations:
\begin{equation}\label{eq322}
\begin{split}
g_1=&6\,{x_{{1}}}^{2}x_{{2}}{x_{{5}}}^{2}{x_{{6}}}^{2}-6\,x_{{1}}{x_{{2}}}^
{2}{x_{{5}}}^{2}{x_{{6}}}^{2}+8\,{x_{{1}}}^{2}x_{{2}}{x_{{5}}}^{2}-8\,
x_{{1}}{x_{{2}}}^{2}{x_{{6}}}^{2}-x_{{1}}{x_{{5}}}^{2}{x_{{6}}}^{2}+x_
{{2}}{x_{{5}}}^{2}{x_{{6}}}^{2}
=0,\\
g_2=&6\,{x_{{2}}}^{2}x_{{3}}{x_{{5}}}^{2}{x_{{6}}}^{2}-2\,x_{{2}}{x_{{3}}}^
{2}{x_{{5}}}^{2}{x_{{6}}}^{2}+8\,{x_{{2}}}^{2}x_{{3}}{x_{{6}}}^{2}-4\,
x_{{2}}{x_{{3}}}^{2}{x_{{5}}}^{2}-4\,x_{{2}}{x_{{3}}}^{2}{x_{{6}}}^{2}
-5\,x_{{2}}{x_{{5}}}^{2}{x_{{6}}}^{2}\\&+x_{{3}}{x_{{5}}}^{2}{x_{{6}}}^{2
}=0,\\
g_3=&3\,x_{{1}}x_{{3}}{x_{{5}}}^{2}{x_{{6}}}^{2}+3\,x_{{2}}x_{{3}}{x_{{5}}}
^{2}{x_{{6}}}^{2}+30\,{x_{{3}}}^{2}{x_{{5}}}^{2}{x_{{6}}}^{2}+32\,x_{{
3}}{x_{{5}}}^{3}x_{{6}}-120\,x_{{3}}{x_{{5}}}^{2}{x_{{6}}}^{2}+32\,x_{
{3}}x_{{5}}{x_{{6}}}^{3}\\&+16\,{x_{{3}}}^{2}{x_{{5}}}^{2}+16\,{x_{{3}}}^
{2}{x_{{6}}}^{2}+20\,{x_{{5}}}^{2}{x_{{6}}}^{2}-32\,x_{{3}}x_{{5}}x_{{
6}}=0,\\
g_4=&-3\,x_{{1}}{x_{{5}}}^{2}x_{{6}}-3\,x_{{2}}{x_{{5}}}^{2}x_{{6}}-22\,x_{
{3}}{x_{{5}}}^{2}x_{{6}}-56\,{x_{{5}}}^{3}+120\,{x_{{5}}}^{2}x_{{6}}-8
\,x_{{5}}{x_{{6}}}^{2}+3\,x_{{2}}x_{{6}}\\&+33\,x_{{3}}x_{{6}}-120\,x_{{5
}}x_{{6}}+56\,x_{{5}}=0,\\
g_5=&16\,{x_{{5}}}^{3}x_{{6}}-16\,x_{{5}}{x_{{6}}}^{3}+x_{{1}}{x_{{5}}}^{2}
-x_{{2}}{x_{{6}}}^{2}+11\,x_{{3}}{x_{{5}}}^{2}-11\,x_{{3}}{x_{{6}}}^{2
}-40\,{x_{{5}}}^{2}x_{{6}}+40\,x_{{5}}{x_{{6}}}^{2}=0.
\end{split}
\end{equation}
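Before analyzing system (\ref{eq322}) in general, note that the bi-invariant metric $x_1=\dots=x_6=1$ solves it exactly; a minimal numerical sketch confirming this:

```python
# The normalized system g_1 = ... = g_5 = 0 (with x_4 = 1) evaluated at
# x_1 = ... = x_6 = 1: every polynomial vanishes identically, i.e. the
# bi-invariant (Killing) metric is an exact solution.
x1 = x2 = x3 = x5 = x6 = 1
g1 = (6*x1**2*x2*x5**2*x6**2 - 6*x1*x2**2*x5**2*x6**2 + 8*x1**2*x2*x5**2
      - 8*x1*x2**2*x6**2 - x1*x5**2*x6**2 + x2*x5**2*x6**2)
g2 = (6*x2**2*x3*x5**2*x6**2 - 2*x2*x3**2*x5**2*x6**2 + 8*x2**2*x3*x6**2
      - 4*x2*x3**2*x5**2 - 4*x2*x3**2*x6**2 - 5*x2*x5**2*x6**2 + x3*x5**2*x6**2)
g3 = (3*x1*x3*x5**2*x6**2 + 3*x2*x3*x5**2*x6**2 + 30*x3**2*x5**2*x6**2
      + 32*x3*x5**3*x6 - 120*x3*x5**2*x6**2 + 32*x3*x5*x6**3 + 16*x3**2*x5**2
      + 16*x3**2*x6**2 + 20*x5**2*x6**2 - 32*x3*x5*x6)
g4 = (-3*x1*x5**2*x6 - 3*x2*x5**2*x6 - 22*x3*x5**2*x6 - 56*x5**3 + 120*x5**2*x6
      - 8*x5*x6**2 + 3*x2*x6 + 33*x3*x6 - 120*x5*x6 + 56*x5)
g5 = (16*x5**3*x6 - 16*x5*x6**3 + x1*x5**2 - x2*x6**2 + 11*x3*x5**2
      - 11*x3*x6**2 - 40*x5**2*x6 + 40*x5*x6**2)
values = (g1, g2, g3, g4, g5)
```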
We consider the polynomial ring $R=\mathbb{Q}[z, x_1, x_2, x_3, x_5, x_6]$ and the ideal $I$ generated by $\{g_1, g_2, g_3, g_4, g_5, z x_1 x_2 x_3 x_5 x_6-1\}$ to find the non-zero solutions of equations (\ref{eq322}). We take the lexicographic order $>$ with $z>x_1 >x_2 >x_3 >x_5>x_6$ as a monomial ordering on $R$. Then, with the aid of a computer, we see that the Gr\"{o}bner basis for the ideal $I$ contains the following polynomial:
\begin{equation*}
(x_{{6}}-1 ) ( 7\,x_{{6}}-23 ) ( 864\,{
x_{{6}}}^{3}-1676\,{x_{{6}}}^{2}+973\,x_{{6}}-177)\cdot f(x_6)\cdot h(x_6),
\end{equation*}
where $f(x_6)$ is a polynomial in $x_6$ given by
\begin{equation*}
\begin{split}
f(x_6)&=24313968\,{x_{{6}}}^{14}-271810080\,{x_{{6}}}^{13}+
1334881896\,{x_{{6}}}^{12}-4102312320\,{x_{{6}}}^{11}+9388266607\,{x_{
{6}}}^{10}\\&-17066486910\,{x_{{6}}}^{9}+25201149031\,{x_{{6}}}^{8}-
30982882320\,{x_{{6}}}^{7}+31894938304\,{x_{{6}}}^{6}-27360921600\,{x_
{{6}}}^{5}\\&+19523164352\,{x_{{6}}}^{4}-11276897280\,{x_{{6}}}^{3}+
5059512320\,{x_{{6}}}^{2}-1663672320\,x_{{6}}+301113344,
\end{split}
\end{equation*}
and $h(x_6)$ is a polynomial in $x_6$ of degree 114. For the readers' convenience, we put it in Appendix II.
For $x_6=1$, from the Gr\"{o}bner basis of the ideal $I$ and with the aid of a computer, we get four solutions of the system $\{g_1=0,g_2=0,g_3=0,g_4=0,g_5=0\}$, which can be split into the following groups:
\[\mbox{Group 1.}\begin{aligned}
\{x_1=x_2=x_3=x_5=x_6=1\},
\end{aligned}\]
\[\mbox{Group 2.}\left\{\begin{aligned}
&\{x_2=x_3=x_5\approx0.4188876552,x_1\approx0.04273408738,x_6=1\},\\
&\{x_2=x_3=x_5\approx0.4617244620,x_1\approx1.543913333,x_6=1\},\\
&\{x_2=x_3=x_5\approx1.059202697,x_1\approx0.07225088447,x_6=1\}.
\end{aligned}\right.\]
By solving $864\,{x_{{6}}}^{3}-1676\,{x_{{6}}}^{2}+973\,x_{{6}}-177=0$ numerically, there are three positive solutions, which can be given approximately by $x_6\approx0.4188876553$, $x_6\approx0.4617244621$, $x_6\approx1.059202697$, and the corresponding solutions of the system $\{g_1=0,g_2=0,g_3=0,g_4=0,g_5=0,864\,{x_{{6}}}^{3}-1676\,{x_{{6}}}^{2}+973\,x_{{6}}-177=0\}$ with $x_1x_2x_3x_5x_6\neq0$ are given by
\[\mbox{Group 3.}\left\{\begin{aligned}
&\{x_1=x_3=x_6\approx0.4188876552,x_2\approx0.04273408738,x_5=1\},\\
&\{x_1=x_3=x_6\approx0.4617244620,x_2\approx1.543913333,x_5=1\},\\
&\{x_1=x_3=x_6\approx1.059202697,x_2\approx0.07225088447,x_5=1\}.
\end{aligned}\right.\]
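Here again, the three values of $x_6$ are the positive roots of the cubic factor $864\,x_6^3-1676\,x_6^2+973\,x_6-177$ of the Gr\"{o}bner basis element; a quick numerical check (a sketch, assuming NumPy is available):

```python
# Roots of the cubic factor from the Groebner basis element; the three
# real roots should match the approximate values of x_6 quoted in the text.
import numpy as np

roots = np.sort(np.roots([864, -1676, 973, -177]).real)
quoted = [0.4188876553, 0.4617244621, 1.059202697]
```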
By solving $f(x_6)=0$ numerically, there are 6 different positive solutions, namely $0.7920673406$, $0.8040419514$, $1.075965351$, $1.681651936$, $2.596366999$, $3.419732659$, and the corresponding solutions of the system $\{g_1=0,g_2=0,g_3=0,g_4=0,g_5=0,f(x_6)=0\}$ with $x_1x_2x_3x_5x_6\neq0$ are given by
\[\mbox{Group 3.}\left\{\begin{aligned}
&\{x_1=x_2\approx1.211722573,x_3\approx0.2521819866,x_5=x_6\approx0.7920673405\},\\
&\{x_1=x_2\approx0.04116638566,x_3\approx0.2299652722,x_5=x_6\approx0.8040419514\},\\
&\{x_1=x_2\approx0.07360068971,x_3\approx1.138692978,x_5=x_6\approx1.075965351\},\\
&\{x_1=x_2\approx0.07189340238,x_3\approx0.3957889206,x_5=x_6\approx1.681651936\},\\
&\{x_1=x_2\approx1.241147181,x_3\approx0.6544562607,x_5=x_6\approx2.596366998\},\\
&\{x_1=x_2\approx0.1594378743,x_3\approx1.292216476,x_5=x_6\approx3.419732659\}.
\end{aligned}\right.\]
By solving $h(x_6)=0$ numerically, there are 10 different positive solutions, namely $0.4271280200$, $0.4742936355$, $0.7058209689$, $0.8630215200$, $1.008898001$, $1.010769751$, $2.058282527$, $2.099884282$, $3.402931725$, $3.413270469$, and the corresponding solutions of the system $\{g_1=0,g_2=0,g_3=0,g_4=0,g_5=0,h(x_6)=0\}$ with $x_1x_2x_3x_5x_6\neq0$ can be split into the following 5 groups:
\[\mbox{Group 4.}\left\{\small\begin{aligned}
&\{x_1\approx0.0468426418,x_2\approx0.0433223582,x_3\approx0.4600814315,x_5\approx1.008898001,x_6\approx0.4271280200\},\\
&\{x_1\approx0.0433223582,x_2\approx0.0468426418,x_3\approx0.4600814315,x_5\approx0.4271280200,x_6\approx1.008898001\}.
\end{aligned}\right.\]
\[\mbox{Group 5.}\left\{\small\begin{aligned}
&\{x_1\approx0.05053229100,x_2\approx1.535627653,x_3\approx0.5100903370,x_5\approx1.01076975,x_6\approx0.4742936355\},\\
&\{x_1\approx1.535627653,x_2\approx0.05053229100,x_3\approx0.5100903370,x_5\approx0.4742936355,x_6\approx1.01076975\}.
\end{aligned}\right.\]
\[\mbox{Group 6.}\left\{\small\begin{aligned}
&\{x_1\approx1.075832320,x_2\approx0.04173283859,x_3\approx0.2381776636,x_5\approx0.8630215199,x_6\approx0.7058209688\},\\
&\{x_1\approx0.04173283859,x_2\approx1.075832320,x_3\approx0.2381776636,x_5\approx0.7058209688,x_6\approx0.8630215199\}.
\end{aligned}\right.\]
\[\mbox{Group 7.}\left\{\small\begin{aligned}
&\{x_1\approx0.0887989484,x_2\approx1.442031123,x_3\approx0.4977693932,x_5\approx2.099884281,x_6\approx2.058282526\},\\
&\{x_1\approx1.442031123,x_2\approx0.0887989484,x_3\approx0.4977693932,x_5\approx2.058282526,x_6\approx2.099884281\}.
\end{aligned}\right.\]
\[\mbox{Group 8.}\left\{\small\begin{aligned}
&\{x_1\approx0.1570295299,x_2\approx0.9524941307,x_3\approx1.170481952,x_5\approx3.413270468,x_6\approx3.402931724\},\\
&\{x_1\approx0.9524941307,x_2\approx0.1570295299,x_3\approx1.170481952,x_5\approx3.402931724,x_6\approx3.413270468\}.
\end{aligned}\right.\]
Among these solutions, we remark that the solution in Group 1 is the Killing metric, and the solutions in Group 2 and Group 3 induce the same metrics up to isometry respectively, which are naturally reductive due to Proposition \ref{nrp=3}. On the other hand, the six solutions obtained from $f(x_6)=0$ induce 6 different left-invariant Einstein metrics which are non-naturally reductive, and the two solutions in each of Groups 4--8 induce the same metric up to isometry, which is non-naturally reductive due to Proposition \ref{nrp=3}.
In conclusion, we find 11 different non-naturally reductive Einstein metrics on ${\mathrm E}_8$-I.
\textbf{Case of $p=4$.} By the same discussion as in Proposition \ref{nrp=2}, we give a criterion to determine whether a left-invariant metric of the form (\ref{metric4}) is naturally reductive.
\begin{prop}\label{nrp=4}
If a left-invariant metric $<,>$ of the form (\ref{metric4}) on $G$ is naturally reductive with respect to $G\times L$ for some closed subgroup $L$ of $G$, then for the case of $p=4$, one of the following holds: 1) $x_2=x_3=x_4=x_5$, $x_6=x_7$; 2) $x_1=x_3=x_4=x_6$, $x_5=x_7$; 3) $x_1=x_2=x_4=x_7$, $x_5=x_6$; 4) $x_5=x_6=x_7$.
Conversely, if one of 1), 2), 3), 4) is satisfied, then the metric of the form (\ref{metric4}) for the case of $p=4$ is naturally reductive with respect to $G\times L$ for some closed subgroup $L$ of $G$.
\end{prop}
\textbf{Case of ${\mathrm E}_7$-I.} Due to Lemma \ref{p=4}, we have the following equations:
\begin{equation}\label{eqn4}
\begin{split}
r_1&=\frac{1}{12}\left(\frac{1}{3x_1}+\frac{4x_1}{3x_6^2}+\frac{4x_1}{3x_7^2}\right),\\
r_2&=\frac{1}{12}\left(\frac{1}{3x_2}+\frac{4x_2}{3x_5^2}+\frac{4x_2}{3x_7^2}\right),\\
r_3&=\frac{1}{12}\left(\frac{1}{3x_3}+\frac{4x_3}{3x_5^2}+\frac{4x_3}{3x_6^2}\right),\\
r_4&=\frac{1}{112}\left(\frac{28}{3x_4}+\frac{56x_4}{9x_5^2}+\frac{56x_4}{9x_6^2}+\frac{56x_4}{9x_7^2}\right),\\
r_5&=\frac{1}{2x_5}+\frac{1}{9}\left(\frac{x_5}{x_6x_7}-\frac{x_6}{x_7x_5}-\frac{x_7}{x_6x_5}\right)-\frac{1}{64}\left(\frac{4x_2}{3x_5^2}+\frac{4x_3}{3x_5^2}+\frac{56x_4}{9x_5^2}\right),\\
r_6&=\frac{1}{2x_6}+\frac{1}{9}\left(\frac{x_6}{x_7x_5}-\frac{x_7}{x_6x_5}-\frac{x_5}{x_7x_6}\right)-\frac{1}{64}\left(\frac{4x_1}{3x_6^2}+\frac{4x_3}{3x_6^2}+\frac{56x_4}{9x_6^2}\right),\\
r_7&=\frac{1}{2x_7}+\frac{1}{9}\left(\frac{x_7}{x_5x_6}-\frac{x_6}{x_7x_5}-\frac{x_5}{x_7x_6}\right)-\frac{1}{64}\left(\frac{4x_1}{3x_7^2}+\frac{4x_2}{3x_7^2}+\frac{56x_4}{9x_7^2}\right).
\end{split}
\end{equation}
We consider the system of equations given by
\begin{equation}\label{eq4}
r_1-r_2=0, r_2-r_3=0, r_3-r_4=0, r_4-r_5=0, r_5-r_6=0, r_6-r_7=0.
\end{equation}
Since the system of equations involves 7 variables, which makes the computation of Gr\"{o}bner bases quite complicated for a computer, we normalize by setting $x_6=x_7=1$. It is then easy to see from the equations in (\ref{eqn4}) that $x_2=x_3$, and as a result we obtain the following system of equations:
\begin{equation}\label{eq41}
\begin{split}
g_1=&8\,{x_{{1}}}^{2}x_{{3}}{x_{{5}}}^{2}-4\,x_{{1}}{x_{{3}}}^{2}{x_{{5}}}^{2}-4\,x_{{1}}{x_{{3}}}^{2}-x_{{1}}{x_{{5}}}^{2}+x_{{3}}{x_{{5}}}^{2}=0,\\
g_2=&4\,{x_{{3}}}^{2}x_{{4}}{x_{{5}}}^{2}-4\,x_{{3}}{x_{{4}}}^{2}{x_{{5}}}^{2}+4\,{x_{{3}}}^{2}x_{{4}}-2\,x_{{3}}{x_{{4}}}^{2}-3\,x_{{3}}{x_{{5}}}^{2}+x_{{4}}{x_{{5}}}^{2}=0,\\
g_3=&16\,{x_{{4}}}^{2}{x_{{5}}}^{2}-16\,x_{{4}}{x_{{5}}}^{3}+6\,x_{{3}}x_{{4}}+22\,{x_{{4}}}^{2}-40\,x_{{5}}x_{{4}}+12\,{x_{{5}}}^{2}=0,\\
g_4=&3\,x_{{1}}{x_{{5}}}^{2}+3\,x_{{3}}{x_{{5}}}^{2}+14\,x_{{4}}{x_{{5}}}^{2}+32\,{x_{{5}}}^{3}-72\,{x_{{5}}}^{2}-6\,x_{{3}}-14\,x_{{4}}+40\,x_{{5}}=0.
\end{split}
\end{equation}
We consider the polynomial ring $R=\mathbb{Q}[z, x_1, x_3, x_4, x_5]$ and the ideal $I$ generated by $\{g_1, g_2, g_3, g_4, z x_1 x_3 x_4 x_5-1\}$ to find the non-zero solutions of equations (\ref{eq41}). We take the lexicographic order $>$ with $z>x_1 >x_3 >x_4 >x_5$ as a monomial ordering on $R$. Then, with the aid of a computer, we see that the Gr\"{o}bner basis for the ideal $I$ contains the following polynomial:
\begin{equation}\label{p4}
\left( x_{{5}}-1 \right) \left( 4949\,{x_{{5}}}^{3}-9379\,{x_{{5}}}^{2}+5155\,x_{{5}}-875 \right) \cdot h(x_5),
\end{equation}
where $h(x_5)$ is a polynomial of degree $25$ given by
\begin{equation*}
\begin{split}
h(x_5)&=25101347481190400\,{x_{{5}}}^{25}-213612622522613760\,{x_{{5}}}^{24}+
1125174177049870336\,{x_{{5}}}^{23}\\&-4398212675755048960\,{x_{{5}}}^{22
}+13830794079039782912\,{x_{{5}}}^{21}-36611831495905378304\,{x_{{5}}}
^{20}\\&+83642128611649716224\,{x_{{5}}}^{19}-167796138043083587584\,{x_{
{5}}}^{18}+299027316357649125376\,{x_{{5}}}^{17}\\&-477090509137235365888
\,{x_{{5}}}^{16}+685232950401086713856\,{x_{{5}}}^{15}-
888981413909110722560\,{x_{{5}}}^{14}\\&+1043617887134219845504\,{x_{{5}}
}^{13}-1109092082064780894976\,{x_{{5}}}^{12}+1065873428902655206688\,
{x_{{5}}}^{11}\\&-924019385502926205728\,{x_{{5}}}^{10}+
719633332172147554621\,{x_{{5}}}^{9}-500367766771546562545\,{x_{{5}}}^
{8}\\&+307906916846444099368\,{x_{{5}}}^{7}-165616036629637255130\,{x_{{5
}}}^{6}+76466724878453429510\,{x_{{5}}}^{5}\\&-29517565522401301760\,{x_{
{5}}}^{4}+9135633690673393900\,{x_{{5}}}^{3}-2105338765392512650\,{x_{
{5}}}^{2}\\&+314822238961211625\,x_{{5}}-22360268064771875
\end{split}
\end{equation*}
From equation (\ref{p4}), we get four solutions, namely $1$, $0.3741245714$, $0.4352557643$ and $1.085749994$.\\
For $x_5=1$, we have $x_5=x_6=x_7=1$; then, by Proposition \ref{nrp=4}, the left-invariant Einstein metrics corresponding to these solutions are all naturally reductive.\\
For other three solutions, the corresponding solutions of the system of equations $\{g_1=0,g_2=0,g_3=0,g_4=0\}$ with $x_1x_3x_4x_5\neq0$ are as follows:
\begin{equation*}
\begin{split}
&\{x_1=0.06992197765, x_3=x_4=x_5=0.3741245714\},\\
&\{x_1=1.574157664, x_3=x_4=x_5=0.4352557643\},\\
&\{x_1=0.1259345024, x_3=x_4=x_5=1.085749994\}.
\end{split}
\end{equation*}
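The three non-trivial values of $x_5$ above are the positive roots of the cubic factor $4949\,x_5^3-9379\,x_5^2+5155\,x_5-875$ in (\ref{p4}); a quick numerical check (a sketch, assuming NumPy is available):

```python
# Roots of the cubic factor of the Groebner basis element (p4); the three
# real roots should match the approximate values of x_5 quoted in the text.
import numpy as np

roots = np.sort(np.roots([4949, -9379, 5155, -875]).real)
quoted = [0.3741245714, 0.4352557643, 1.085749994]
```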
Due to Proposition \ref{nrp=4}, the left-invariant Einstein metrics induced by these solutions are all naturally reductive.
The solutions of $h(x_5)=0$ can be given approximately by $\{x_5=0.3952383758, x_5=0.4800791989, x_5=0.4889224428, x_5=0.6243909850, x_5=0.8764616162, x_5=0.9877146527, x_5=1.214528817\}$ and the corresponding solutions of the system of equations $\{g_1=0,g_2=0,g_3=0,g_4=0,h(x_5)=0\}$ with $x_1x_3x_4x_5\neq0$ are as follows:
\begin{equation*}
\begin{split}
&\{x_1=0.07185376030, x_3=0.08311707788, x_4=0.5173806186, x_5=0.3952383758\},\\
&\{x_1=1.505180802, x_3=0.09335085020, x_4=0.6214188681, x_5=0.4800791989\},\\
&\{x_1=0.07131646202, x_3=0.6268846419, x_4=0.2651770624, x_5=0.4889224428\},\\
&\{x_1=1.506498452, x_3=0.8045481479, x_4=0.3009635903, x_5=0.6243909850\},\\
&\{x_1=1.119004750, x_3=1.1136471437, x_4=1.063993479, x_5=0.8764616162\},\\
&\{x_1=0.08089954767, x_3=0.08095556860, x_4=0.2627271790, x_5=0.9877146527\},\\
&\{x_1=0.1003478332, x_3=1.505404155, x_4=0.3341288081, x_5=1.214528817\}.
\end{split}
\end{equation*}
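As before, each quoted approximate solution can be substituted back into the normalized system to confirm that all residuals vanish to within the quoted precision; a minimal sketch in Python, using the first solution above:

```python
# Residual check: the first quoted solution of {g_1=...=g_4=0, h(x_5)=0}
# plugged back into the normalized system (x_6 = x_7 = 1, x_2 = x_3).
x1, x3, x4, x5 = 0.07185376030, 0.08311707788, 0.5173806186, 0.3952383758

g1 = 8*x1**2*x3*x5**2 - 4*x1*x3**2*x5**2 - 4*x1*x3**2 - x1*x5**2 + x3*x5**2
g2 = (4*x3**2*x4*x5**2 - 4*x3*x4**2*x5**2 + 4*x3**2*x4 - 2*x3*x4**2
      - 3*x3*x5**2 + x4*x5**2)
g3 = 16*x4**2*x5**2 - 16*x4*x5**3 + 6*x3*x4 + 22*x4**2 - 40*x5*x4 + 12*x5**2
g4 = (3*x1*x5**2 + 3*x3*x5**2 + 14*x4*x5**2 + 32*x5**3 - 72*x5**2
      - 6*x3 - 14*x4 + 40*x5)
residual = max(abs(g) for g in (g1, g2, g3, g4))
```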
Due to Proposition \ref{nrp=4}, the left-invariant Einstein metrics induced by these solutions are all non-naturally reductive.
In summary, we find 7 different non-naturally reductive left-invariant Einstein metrics on ${\mathrm E}_7$-I.
\textbf{Case of ${\mathrm E}_7$-II.} By Lemma \ref{coeff7}, the components of the Ricci tensor with respect to the metric (\ref{metric7}) are as follows:
\[\left\{\begin{aligned}
r_0&=\frac{1}{4}\left(\frac{4u_0}{9x_3^2}+\frac{5u_0}{9x_4^2}\right),\\
r_1&=\frac{1}{12}\left(\frac{1}{3x_1}+\frac{x_1}{x_3^2}+\frac{5x_1}{3x_5^2}\right),\\
r_2&=\frac{1}{140}\left(\frac{35}{3x_2}+\frac{35x_2}{9x_3^2}+\frac{70x_2}{9x_4^2}+\frac{35x_2}{3x_5^2}\right),\\
r_3&=\frac{1}{2x_3}+\frac{5}{36}\left(\frac{x_3}{x_4x_5}-\frac{x_4}{x_5x_3}-\frac{x_5}{x_3x_4}\right)-\frac{1}{48}\left(\frac{4u_0}{9x_3^2}+\frac{x_1}{x_3^2}+\frac{35x_2}{9x_3^2}\right),\\
r_4&=\frac{1}{2x_4}+\frac{1}{9}\left(\frac{x_4}{x_5x_3}-\frac{x_5}{x_3x_4}-\frac{x_3}{x_4x_5}\right)-\frac{1}{60}\left(\frac{5u_0}{9x_4^2}+\frac{70x_2}{9x_4^2}\right),\\
r_5&=\frac{1}{2x_5}+\frac{1}{12}\left(\frac{x_5}{x_3x_4}-\frac{x_3}{x_4x_5}-\frac{x_4}{x_5x_3}\right)-\frac{1}{80}\left(\frac{5x_1}{3x_5^2}+\frac{35x_2}{3x_5^2}\right).
\end{aligned}\right.\]
Then we will give a criterion to decide whether a metric of the form (\ref{metric7}) is naturally reductive.
\begin{prop}\label{nr7}
If a left-invariant metric $<\ ,\ >$ of the form (\ref{metric7}) on $G={\mathrm E}_7$ is naturally reductive with respect to $G\times L$ for some closed subgroup $L$ of $G$, then one of the following holds:
\begin{equation*}
1)\ u_0=x_1=x_2=x_3,\ x_4=x_5\quad 2)\ u_0=x_2=x_4,\ x_3=x_5\quad 3)\ x_1=x_2=x_5,\ x_3=x_4\quad 4)\ x_3=x_4=x_5.
\end{equation*}
Conversely, if one of the conditions 1), 2), 3), 4) holds, then the metric $<\ ,\ >$ of the form (\ref{metric7}) is naturally reductive with respect to $G\times L$ for some closed subgroup $L$ of $G$.
\end{prop}
\begin{proof}
Let $\frak{l}$ be the Lie algebra of $L$. Then we have either $\frak{l}\subset{\mathfrak k}$ or $\frak{l}\not\subset{\mathfrak k}$. Consider first the case $\frak{l}\not\subset{\mathfrak k}$, and let $\frak{h}$ be the subalgebra of ${\mathfrak g}$ generated by $\frak{l}$ and ${\mathfrak k}$. Since ${\mathfrak g}={\mathrm T}\oplus A_1\oplus A_5\oplus{\mathfrak p}_1\oplus{\mathfrak p}_2\oplus{\mathfrak p}_3$, by the structure of generalized Wallach spaces, exactly one ${\mathfrak p}_i$ must be contained in $\frak{h}$. If ${\mathfrak p}_1\subset\frak{h}$, then $\frak{h}={\mathfrak k}\oplus{\mathfrak p}_1\cong A_7$, which is in fact the set of fixed points of the involutive automorphism $\sigma$. Due to Theorem \ref{nr}, we have $u_0=x_1=x_2=x_3$, $x_4=x_5$. If ${\mathfrak p}_2\subset\frak{h}$, then $\frak{h}={\mathfrak k}\oplus{\mathfrak p}_2\cong A_1\oplus D_6$, which is in fact the set of fixed points of the involutive automorphism $\tau$; as a result of Theorem \ref{nr}, we have $u_0=x_2=x_4$, $x_3=x_5$. If ${\mathfrak p}_3\subset\frak{h}$, then $\frak{h}={\mathfrak k}\oplus{\mathfrak p}_3\cong{\mathrm T}\oplus{\mathrm E}_6$, which corresponds to the involutive automorphism $\sigma\tau=\tau\sigma$ \cite{ChKaLi}; due to Theorem \ref{nr}, we have $x_1=x_2=x_5$, $x_3=x_4$.
We proceed with the case $\frak{l}\subset\frak{k}$. Because the orthogonal complement $\frak{l}^{\perp}$ of $\frak{l}$ with respect to $B$ contains the orthogonal complement ${\mathfrak k}^{\perp}$ of ${\mathfrak k}$, it follows that ${\mathfrak p}_1\oplus{\mathfrak p}_2\oplus{\mathfrak p}_3\subset\frak{l}^{\perp}$. Since the invariant metric $<\ ,\ >$ is naturally reductive with respect to $G\times L$, we conclude that $x_3 = x_4 = x_5$ by Theorem \ref{nr}.
The converse is a direct conclusion of Theorem \ref{nr}.
\end{proof}
Recall that the homogeneous Einstein equation for the left-invariant metric $<\ ,\ >$ is given by
$$\{r_0-r_1=0, r_1-r_2=0, r_2-r_3=0, r_3-r_4=0,r_4-r_5=0\}.$$
We normalize the metric by setting $u_0=1$; then the homogeneous Einstein equation is equivalent to the following system of equations:
\[\left\{\begin{aligned}
g_0 = &-5\,{x_{{1}}}^{2}{x_{{3}}}^{2}{x_{{4}}}^{2}-3\,{x_{{1}}}^{2}{x_{{4}}}^
{2}{x_{{5}}}^{2}-{x_{{3}}}^{2}{x_{{4}}}^{2}{x_{{5}}}^{2}+5\,x_{{1}}{x_
{{3}}}^{2}{x_{{5}}}^{2}+4\,x_{{1}}{x_{{4}}}^{2}{x_{{5}}}^{2}=0,\\
g_1 = &5\,{x_{{1}}}^{2}x_{{2}}{x_{{3}}}^{2}{x_{{4}}}^{2}+3\,{x_{{1}}}^{2}x_{{
2}}{x_{{4}}}^{2}{x_{{5}}}^{2}-3\,x_{{1}}{x_{{2}}}^{2}{x_{{3}}}^{2}{x_{
{4}}}^{2}-2\,x_{{1}}{x_{{2}}}^{2}{x_{{3}}}^{2}{x_{{5}}}^{2}-x_{{1}}{x_
{{2}}}^{2}{x_{{4}}}^{2}{x_{{5}}}^{2}-3\,x_{{1}}{x_{{3}}}^{2}{x_{{4}}}^
{2}{x_{{5}}}^{2}\\&+x_{{2}}{x_{{3}}}^{2}{x_{{4}}}^{2}{x_{{5}}}^{2}=0,\\
g_2=&9\,x_{{1}}x_{{2}}{x_{{4}}}^{2}{x_{{5}}}^{2}+36\,{x_{{2}}}^{2}{x_{{3}}}
^{2}{x_{{4}}}^{2}+24\,{x_{{2}}}^{2}{x_{{3}}}^{2}{x_{{5}}}^{2}+47\,{x_{
{2}}}^{2}{x_{{4}}}^{2}{x_{{5}}}^{2}-60\,x_{{2}}{x_{{3}}}^{3}x_{{4}}x_{
{5}}+60\,x_{{2}}x_{{3}}{x_{{4}}}^{3}x_{{5}}\\&-216\,x_{{2}}x_{{3}}{x_{{4}
}}^{2}{x_{{5}}}^{2}+60\,x_{{2}}x_{{3}}x_{{4}}{x_{{5}}}^{3}+36\,{x_{{3}
}}^{2}{x_{{4}}}^{2}{x_{{5}}}^{2}+4\,x_{{2}}{x_{{4}}}^{2}{x_{{5}}}^{2}=0,\\
g_3=&-9\,x_{{1}}{x_{{4}}}^{2}x_{{5}}+56\,x_{{2}}{x_{{3}}}^{2}x_{{5}}-35\,x_
{{2}}{x_{{4}}}^{2}x_{{5}}+108\,{x_{{3}}}^{3}x_{{4}}-216\,{x_{{3}}}^{2}
x_{{4}}x_{{5}}-108\,x_{{3}}{x_{{4}}}^{3}+216\,x_{{3}}{x_{{4}}}^{2}x_{{
5}}\\&-12\,x_{{3}}x_{{4}}{x_{{5}}}^{2}+4\,{x_{{3}}}^{2}x_{{5}}-4\,{x_{{4}
}}^{2}x_{{5}}=0,\\
g_4=&9\,x_{{1}}x_{{3}}{x_{{4}}}^{2}+63\,x_{{2}}x_{{3}}{x_{{4}}}^{2}-56\,x_{
{2}}x_{{3}}{x_{{5}}}^{2}-12\,{x_{{3}}}^{2}x_{{4}}x_{{5}}-216\,x_{{3}}{
x_{{4}}}^{2}x_{{5}}+216\,x_{{3}}x_{{4}}{x_{{5}}}^{2}+84\,{x_{{4}}}^{3}
x_{{5}}\\&-84\,x_{{4}}{x_{{5}}}^{3}-4\,x_{{3}}{x_{{5}}}^{2}=0.
\end{aligned}\right.\]
Consider the polynomial ring $R=\mathbb{Q}[z, x_1, x_2, x_3, x_4, x_5]$ and the ideal $I$ generated by the polynomials $\{ z x_1 x_2 x_3 x_4 x_5-1, g_0, g_1, g_2, g_3, g_4\}$. We take the lexicographic order $>$ with $z > x_1 > x_2 > x_3 > x_4 > x_5$ as a monomial ordering on $R$. Then, with the aid of a computer, we see that a Gr\"{o}bner basis for the ideal $I$ contains a polynomial in $x_5$ given by
\begin{equation*}
( x_{{5}}-1)( 1067\,x_{{5}}-392)( 2\,x_{{5}}-7)( 875\,{x_{{5}}}^{3}-5155\,{x_{{5}}}^{2}+9379\,x_{{5}}-4949)\cdot h(x_5),
\end{equation*}
where $h(x_5)$ is a polynomial of degree 78. Owing to its length, we put it in Appendix III.
We remark that $x_1, x_2, x_3, x_4$ can each be written as a polynomial in $x_5$ with rational coefficients. By solving $h(x_5)=0$ numerically, we get 6 solutions, namely $x_5\approx0.3954420465, x_5\approx0.7869165511, x_5\approx1.022441180, x_5\approx1.525178916, x_5\approx2.907605999, x_5\approx3.996569735$. Furthermore, the corresponding solutions of the system of equations $\{g_0=0, g_1=0, g_2=0, g_3=0, g_4=0, h(x_5)=0\}$ with $x_1x_2x_3x_4x_5\neq0$ are as follows:
\begin{equation*}
\begin{split}
&\{x_1\approx0.6527831128,x_2\approx0.4342037927,x_3\approx0.7023547363,x_4\approx0.7181567785,x_5\approx0.3954420465\},\\
&\{x_1\approx1.238139339,x_2\approx0.2406838191,x_3\approx0.9516792542,x_4\approx0.6904065038,x_5\approx0.7869165511\},\\
&\{x_1\approx0.07716292844,x_2\approx0.2617256871,x_3\approx1.140229546,x_4\approx0.6923748274,x_5\approx1.022441180\},\\
&\{x_1\approx0.1125592068,x_2\approx0.3602427458,x_3\approx0.7175220814,x_4\approx1.576177679,x_5\approx1.525178916\},\\
&\{x_1\approx1.055549830,x_2\approx0.7580899024,x_3\approx0.9219014311,x_4\approx2.908338968,x_5\approx2.907605999\},\\
&\{x_1\approx0.3623080653,x_2\approx1.354302959,x_3\approx1.066348568,x_4\approx4.005454245,x_5\approx3.996569735\}.
\end{split}
\end{equation*}
Due to Proposition \ref{nr7}, we conclude that these six solutions induce six different non-naturally reductive left-invariant Einstein metrics on ${\mathrm E}_7$.
For $x_5=1, x_5=\frac{392}{1067}$ and $x_5=\frac{7}{2}$, the corresponding solutions of the system of equations $\{g_0=0, g_1=0, g_2=0, g_3=0, g_4=0\}$ with $x_1x_2x_3x_4x_5\neq0$ are as follows:
\begin{equation*}
\begin{split}
&\{x_1=x_2=x_3=x_4=x_5=1\},\\
&\{x_1=x_2=x_5=\frac{392}{1067},x_3=x_4=\frac{742}{1067}\},\\
&\{x_1=x_2=x_3=1,x_4=x_5=\frac{7}{2}\}.
\end{split}
\end{equation*}
Due to Proposition \ref{nr7}, the left invariant Einstein metrics induced by these three solutions are all naturally reductive.
For $875\,{x_{{5}}}^{3}-5155\,{x_{{5}}}^{2}+9379\,x_{{5}}-4949=0$, the solutions of the system of equations $\{g_0=0, g_1=0, g_2=0, g_3=0, g_4=0, 875\,{x_{{5}}}^{3}-5155\,{x_{{5}}}^{2}+9379\,x_{{5}}-4949=0\}$ with $x_1x_2x_3x_4x_5\neq0$ are as follows:
\begin{equation*}
\begin{split}
&\{x_1\approx0.1159884900,x_2=x_4=1,x_3=x_5\approx0.9210223401\},\\
&\{x_1\approx3.616626805,x_2=x_4=1,x_3=x_5\approx2.297499727\},\\
&\{x_1\approx0.1868949087,x_2=x_4=1,x_3=x_5\approx2.672906503\}.
\end{split}
\end{equation*}
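We note that $875\,x_5^3-5155\,x_5^2+9379\,x_5-4949$ has the coefficients of the cubic $4949\,x_5^3-9379\,x_5^2+5155\,x_5-875$ from the ${\mathrm E}_7$-I case in reverse order, so its roots are the reciprocals of the roots there; numerically they match the quoted values of $x_3=x_5$ (a quick check, assuming NumPy is available):

```python
# Roots of the cubic factor 875*x^3 - 5155*x^2 + 9379*x - 4949; the three
# real roots should match the approximate values of x_3 = x_5 quoted above.
import numpy as np

roots = np.sort(np.roots([875, -5155, 9379, -4949]).real)
quoted = [0.9210223401, 2.297499727, 2.672906503]
```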
Due to Proposition \ref{nr7}, the left invariant Einstein metrics induced by these three solutions are all naturally reductive.
In conclusion, we find six different left-invariant Einstein metrics on ${\mathrm E}_7$ which are non-naturally reductive.
\textbf{Case of ${\mathrm E}_6$-II.} By Lemma \ref{coeff6}, the components of the Ricci tensor with respect to the metric (\ref{metric6}) can be expressed as follows:
\[\left\{\begin{aligned}
r_0&=\frac{1}{4}\left(\frac{u_0}{2x_4^2}+\frac{u_0}{2x_5^2}\right),\\
r_1&=\frac{1}{12}\left(\frac{1}{2x_1}+\frac{x_1}{x_5^2}+\frac{3x_1}{2x_6^2}\right),\\
r_2&=\frac{1}{12}\left(\frac{1}{2x_2}+\frac{x_2}{x_4^2}+\frac{3x_2}{2x_6^2}\right),\\
r_3&=\frac{1}{60}\left(\frac{5}{x_3}+\frac{5x_3}{2x_4^2}+\frac{5x_3}{2x_5^2}+\frac{5x_3}{x_6^2}\right),\\
r_4&=\frac{1}{2x_4}+\frac{1}{8}\left(\frac{x_4}{x_5x_6}-\frac{x_5}{x_6x_4}-\frac{x_6}{x_4x_5}\right)-\frac{1}{32}\left(\frac{u_0}{2x_4^2}+\frac{x_2}{x_4^2}+\frac{5x_3}{2x_4^2}\right),\\
r_5&=\frac{1}{2x_5}+\frac{1}{8}\left(\frac{x_5}{x_6x_4}-\frac{x_6}{x_4x_5}-\frac{x_4}{x_5x_6}\right)-\frac{1}{32}\left(\frac{u_0}{2x_5^2}+\frac{x_1}{x_5^2}+\frac{5x_3}{2x_5^2}\right),\\
r_6&=\frac{1}{2x_6}+\frac{1}{12}\left(\frac{x_6}{x_4x_5}-\frac{x_4}{x_5x_6}-\frac{x_5}{x_6x_4}\right)-\frac{1}{48}\left(\frac{3x_1}{2x_6^2}+\frac{3x_2}{2x_6^2}+\frac{5x_3}{x_6^2}\right).
\end{aligned}\right.\]
Then we will give a criterion to decide whether a left-invariant metric of the form (\ref{metric6}) on ${\mathrm E}_6$ is naturally reductive.
\begin{prop}\label{nr6}
If a left-invariant metric $<\ ,\ >$ of the form (\ref{metric6}) on $G={\mathrm E}_6$ is naturally reductive with respect to $G\times L$ for some closed subgroup $L$ of $G$, then one of the following holds:
\begin{equation*}
1)\ u_0=x_2=x_3=x_4,\ x_5=x_6\quad 2)\ u_0=x_1=x_3=x_5,\ x_4=x_6\quad 3)\ x_1=x_2=x_3=x_6,\ x_4=x_5\quad 4)\ x_4=x_5=x_6.
\end{equation*}
Conversely, if one of the conditions 1), 2), 3), 4) holds, then the metric $<\ ,\ >$ of the form (\ref{metric6}) is naturally reductive with respect to $G\times L$ for some closed subgroup $L$ of $G$.
\end{prop}
\begin{proof}
Let $\frak{l}$ be the Lie algebra of $L$. Then we have either $\frak{l}\subset{\mathfrak k}$ or $\frak{l}\not\subset{\mathfrak k}$. Consider first the case $\frak{l}\not\subset{\mathfrak k}$, and let $\frak{h}$ be the subalgebra of ${\mathfrak g}$ generated by $\frak{l}$ and ${\mathfrak k}$. Since ${\mathfrak g}={\mathrm T}\oplus A_1^1\oplus A_1^2\oplus A_3\oplus{\mathfrak p}_1\oplus{\mathfrak p}_2\oplus{\mathfrak p}_3$, by the structure of generalized Wallach spaces, exactly one ${\mathfrak p}_i$ must be contained in $\frak{h}$. If ${\mathfrak p}_1\subset\frak{h}$, then $\frak{h}={\mathfrak k}\oplus{\mathfrak p}_1\cong A_1^1\oplus A_5$, which is in fact the set of fixed points of the involutive automorphism $\sigma$. Due to Theorem \ref{nr}, we have $u_0=x_2=x_3=x_4$, $x_5=x_6$. If ${\mathfrak p}_2\subset\frak{h}$, then $\frak{h}={\mathfrak k}\oplus{\mathfrak p}_2\cong A_1^2\oplus A_5$, which is in fact the set of fixed points of the involutive automorphism $\tau$; as a result of Theorem \ref{nr}, we have $u_0=x_1=x_3=x_5$, $x_4=x_6$. If ${\mathfrak p}_3\subset\frak{h}$, then $\frak{h}={\mathfrak k}\oplus{\mathfrak p}_3\cong{\mathrm T}\oplus D_5$, which corresponds to the involutive automorphism $\sigma\tau=\tau\sigma$ \cite{ChKaLi}; due to Theorem \ref{nr}, we have $x_1=x_2=x_3=x_6$, $x_4=x_5$.
We proceed with the case $\frak{l}\subset\frak{k}$. Because the orthogonal complement $\frak{l}^{\perp}$ of $\frak{l}$ with respect to $B$ contains the orthogonal complement ${\mathfrak k}^{\perp}$ of ${\mathfrak k}$, it follows that ${\mathfrak p}_1\oplus{\mathfrak p}_2\oplus{\mathfrak p}_3\subset\frak{l}^{\perp}$. Since the invariant metric $<\ ,\ >$ is naturally reductive with respect to $G\times L$, we conclude that $x_4 = x_5 = x_6$ by Theorem \ref{nr}.
The converse is a direct consequence of Theorem \ref{nr}.
\end{proof}
Recall that the homogeneous Einstein equation for the left-invariant metric $<\ ,\ >$ is given by
$$\{r_0-r_1=0, r_1-r_2=0, r_2-r_3=0, r_3-r_4=0,r_4-r_5=0\}.$$
Thus finding Einstein metrics of the form (\ref{metric6}) reduces to finding the positive solutions of the above system. We normalize the metric by setting $x_6=1$; then the homogeneous Einstein equation is equivalent to the following system of equations:
\[\left\{\begin{aligned}
g_0 = &-3\,{x_{{1}}}^{2}{x_{{4}}}^{2}{x_{{5}}}^{2}+3\,u_{{0}}x_{{1}}{x_{{4}}}
^{2}+3\,u_{{0}}x_{{1}}{x_{{5}}}^{2}-2\,{x_{{1}}}^{2}{x_{{4}}}^{2}-{x_{
{4}}}^{2}{x_{{5}}}^{2}=0,\\
g_1 = &3\,{x_{{1}}}^{2}x_{{2}}{x_{{4}}}^{2}{x_{{5}}}^{2}-3\,x_{{1}}{x_{{2}}}^
{2}{x_{{4}}}^{2}{x_{{5}}}^{2}+2\,{x_{{1}}}^{2}x_{{2}}{x_{{4}}}^{2}-2\,
x_{{1}}{x_{{2}}}^{2}{x_{{5}}}^{2}-x_{{1}}{x_{{4}}}^{2}{x_{{5}}}^{2}+x_
{{2}}{x_{{4}}}^{2}{x_{{5}}}^{2}=0,\\
g_2=&3\,{x_{{2}}}^{2}x_{{3}}{x_{{4}}}^{2}{x_{{5}}}^{2}-2\,x_{{2}}{x_{{3}}}^
{2}{x_{{4}}}^{2}{x_{{5}}}^{2}+2\,{x_{{2}}}^{2}x_{{3}}{x_{{5}}}^{2}-x_{
{2}}{x_{{3}}}^{2}{x_{{4}}}^{2}-x_{{2}}{x_{{3}}}^{2}{x_{{5}}}^{2}-2\,x_
{{2}}{x_{{4}}}^{2}{x_{{5}}}^{2}+x_{{3}}{x_{{4}}}^{2}{x_{{5}}}^{2}=0,\\
g_3=&16\,{x_{{3}}}^{2}{x_{{4}}}^{2}{x_{{5}}}^{2}-24\,x_{{3}}{x_{{4}}}^{3}x_
{{5}}+24\,x_{{3}}x_{{4}}{x_{{5}}}^{3}+3\,u_{{0}}x_{{3}}{x_{{5}}}^{2}+6
\,x_{{2}}x_{{3}}{x_{{5}}}^{2}+8\,{x_{{3}}}^{2}{x_{{4}}}^{2}+23\,{x_{{3
}}}^{2}{x_{{5}}}^{2}\\&-96\,x_{{3}}x_{{4}}{x_{{5}}}^{2}+16\,{x_{{4}}}^{2}
{x_{{5}}}^{2}+24\,x_{{3}}x_{{4}}x_{{5}}=0,\\
g_4=&16\,{x_{{4}}}^{3}x_{{5}}-16\,x_{{4}}{x_{{5}}}^{3}+u_{{0}}{x_{{4}}}^{2}
-u_{{0}}{x_{{5}}}^{2}+2\,x_{{1}}{x_{{4}}}^{2}-2\,x_{{2}}{x_{{5}}}^{2}+
5\,x_{{3}}{x_{{4}}}^{2}-5\,x_{{3}}{x_{{5}}}^{2}-32\,{x_{{4}}}^{2}x_{{5
}}\\&+32\,x_{{4}}{x_{{5}}}^{2}=0,\\
g_5=&6\,x_{{1}}x_{{4}}{x_{{5}}}^{2}+6\,x_{{2}}x_{{4}}{x_{{5}}}^{2}+20\,x_{{
3}}x_{{4}}{x_{{5}}}^{2}-8\,{x_{{4}}}^{2}x_{{5}}-96\,x_{{4}}{x_{{5}}}^{
2}+40\,{x_{{5}}}^{3}-3\,u_{{0}}x_{{4}}-6\,x_{{1}}x_{{4}}-15\,x_{{3}}x_
{{4}}\\&+96\,x_{{4}}x_{{5}}-40\,x_{{5}}=0.
\end{aligned}\right.\]
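As a sanity check on the transcription of this system, one can verify that $u_0=x_1=\cdots=x_5=1$ (with the normalization $x_6=1$) annihilates every $g_i$. A minimal Python sketch of our own, transcribing two representative equations ($g_0$ and $g_4$); the function names are ours, not the paper's:

```python
# Representative equations g0 and g4 of the normalized Einstein system
# (x6 = 1), transcribed from the text; variable names follow the paper.
def g0(u0, x1, x2, x3, x4, x5):
    # x2, x3 do not occur in g0; kept for a uniform signature.
    return (-3*x1**2*x4**2*x5**2 + 3*u0*x1*x4**2 + 3*u0*x1*x5**2
            - 2*x1**2*x4**2 - x4**2*x5**2)

def g4(u0, x1, x2, x3, x4, x5):
    return (16*x4**3*x5 - 16*x4*x5**3 + u0*x4**2 - u0*x5**2
            + 2*x1*x4**2 - 2*x2*x5**2 + 5*x3*x4**2 - 5*x3*x5**2
            - 32*x4**2*x5 + 32*x4*x5**2)

# The bi-invariant metric u0 = x1 = ... = x5 = 1 solves the system.
print(g0(1, 1, 1, 1, 1, 1), g4(1, 1, 1, 1, 1, 1))  # 0 0
```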
Consider the polynomial ring $R=\mathbb{Q}[z, u_0, x_1, x_2, x_3, x_4, x_5]$ and the ideal $I$ generated by the polynomials $\{ z u_0 x_1 x_2 x_3 x_4 x_5 -1, g_0, g_1, g_2, g_3, g_4, g_5\}$. We take the lexicographic order $>$ with $z > u_0 > x_1 > x_2 > x_3 > x_4 > x_5$ as a monomial order on $R$. Then, with the aid of a computer, we see that a Gr\"{o}bner basis for the ideal $I$ contains a polynomial in $x_5$ given by $( x_{{5}}-1 ) ( 17\,x_{{5}}-31 ) ( 319\,{x_{{5}}}^{3}-585\,{x_{{5}}}^{2}+298\,x_{{5}}-46)\cdot
h(x_5)$, where $h(x_5)$ is a polynomial of degree 178; owing to its length, we place it in Appendix IV. We also remark that, using the polynomials in the Gr\"{o}bner basis, each $x_i$ ($i=1,\ldots,4$) can be expressed in terms of $x_{i+1},\ldots,x_5$, while $u_0$ can be expressed in terms of $x_1,x_2,x_3,x_4,x_5$.
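The elimination just described can be illustrated in miniature; the following sympy sketch (our own toy system, not the actual ideal $I$) shows how adjoining $z\cdot(\text{product of the unknowns})-1$ excludes degenerate solutions and how a lexicographic Gr\"{o}bner basis ends in a univariate polynomial in the smallest variable:

```python
import sympy as sp

z, x, y = sp.symbols('z x y')
# Toy analogue of the computation in the text: adjoin z*x*y - 1 so that
# solutions with x*y = 0 are excluded, then eliminate via a lex Groebner basis.
F = [z*x*y - 1, x**2 + y**2 - 2, x - y**2]
G = sp.groebner(F, z, x, y, order='lex')
# For a lex order the last basis element lies in Q[y] alone.
p = G.exprs[-1]
print(p)
```

Here $y$ plays the role of $x_5$: the last basis element is a univariate polynomial whose roots include the $y$-coordinates of all solutions with $xy\neq0$.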
By solving $h(x_5)=0$ numerically, we obtain 14 positive solutions, given approximately by $x_5\approx0.3190072071$, $x_5\approx0.3565775930$, $x_5\approx0.4054489785$, $x_5\approx0.4709163886$, $x_5\approx0.5455899299$, $x_5\approx0.7832400305$, $x_5\approx1.000773211$, $x_5\approx1.002658584$, $x_5\approx1.003465783$, $x_5\approx1.006528315$, $x_5\approx1.069488872$, $x_5\approx1.155548556$, $x_5\approx1.646506483$, $x_5\approx1.695781258$. Moreover, the corresponding solutions of the system of equations $\{g_0=0,g_1=0,g_2=0,g_3=0,g_4=0,g_5=0,h(x_5)=0\}$ with $u_0x_1x_2x_3x_4x_5\neq0$ can be divided into the following 7 groups:
\[\mbox{1.}\left\{\footnotesize\begin{aligned}
&\{u_0\approx0.3120392058,x_1\approx0.1471819373,x_2\approx0.1040632043,x_3\approx0.4015791280,x_4\approx1.003465783,x_5\approx0.3190072071\},\\
&\{u_0\approx0.3120392058,x_1\approx0.1040632043,x_2\approx0.1471819373,x_3\approx0.4015791280,x_4\approx0.3190072071,x_5\approx1.003465783\},\\
\end{aligned}\right.\]
\[\mbox{2.}\left\{\footnotesize\begin{aligned}
&\{u_0\approx0.3832451893,x_1\approx0.4156187592,x_2\approx0.1033703333,x_3\approx0.2795983424,x_4\approx1.000773211,x_5\approx0.3565775930\},\\
&\{u_0\approx0.3832451893,x_1\approx0.1033703333,x_2\approx0.4156187592,x_3\approx0.2795983424,x_4\approx0.3565775930,x_5\approx1.000773211\},\\
\end{aligned}\right.\]
\[\mbox{3.}\left\{\footnotesize\begin{aligned}
&\{u_0\approx0.4028095641,x_1\approx0.1658986529,x_2\approx1.591316257,x_3\approx0.5073650261,x_4\approx1.006528315,x_5\approx0.4054489785\},\\
&\{u_0\approx0.4028095641,x_1\approx1.591316257,x_2\approx0.1658986529,x_3\approx0.5073650261,x_4\approx0.4054489785,x_5\approx1.006528315\},\\
\end{aligned}\right.\]
\[\mbox{4.}\left\{\footnotesize\begin{aligned}
&\{u_0\approx0.5209148934,x_1\approx0.5695949321,x_2\approx1.598554538,x_3\approx0.3242377273,x_4\approx1.002658584,x_5\approx0.4709163886\},\\
&\{u_0\approx0.5209148934,x_1\approx1.598554538,x_2\approx0.5695949321,x_3\approx0.3242377273,x_4\approx0.4709163886,x_5\approx1.002658584\},\\
\end{aligned}\right.\]
\[\mbox{5.}\left\{\footnotesize\begin{aligned}
&\{u_0\approx0.7642362204,x_1\approx0.1166446353,x_2\approx0.1088139526,x_3\approx0.2444045194,x_4\approx1.069488872,x_5\approx0.5455899299\},\\
&\{u_0\approx0.7642362204,x_1\approx0.1088139526,x_2\approx0.1166446353,x_3\approx0.2444045194,x_4\approx0.5455899299,x_5\approx1.069488872\},\\
\end{aligned}\right.\]
\[\mbox{6.}\left\{\footnotesize\begin{aligned}
&\{u_0\approx1.095971585,x_1\approx0.1445747761,x_2\approx1.600102216,x_3\approx0.3092235071,x_4\approx1.155548556,x_5\approx0.7832400305\},\\
&\{u_0\approx1.095971585,x_1\approx1.600102216,x_2\approx0.1445747761,x_3\approx0.3092235071,x_4\approx0.7832400305,x_5\approx1.155548556\},\\
\end{aligned}\right.\]
\[\mbox{7.}\left\{\footnotesize\begin{aligned}
&\{u_0\approx2.260526144,x_1\approx0.2562898903,x_2\approx1.059701210,x_3\approx1.147117560,x_4\approx1.695781258,x_5\approx1.646506483\},\\
&\{u_0\approx2.260526144,x_1\approx1.059701210,x_2\approx0.2562898903,x_3\approx1.147117560,x_4\approx1.646506483,x_5\approx1.695781258\}.
\end{aligned}\right.\]
By solving $319\,{x_{{5}}}^{3}-585\,{x_{{5}}}^{2}+298\,x_{{5}}-46=0$ numerically, we obtain three distinct positive solutions, given approximately by $x_5\approx0.3244770611$, $x_5\approx0.4009373579$ and $x_5\approx1.108447830$. Further, the corresponding solutions of the system of equations $\{g_0=0,g_1=0,g_2=0,g_3=0,g_4=0,g_5=0,319\,{x_{{5}}}^{3}-585\,{x_{{5}}}^{2}+298\,x_{{5}}-46=0\}$ with $u_0x_1x_2x_3x_4x_5\neq0$ are:
\[\mbox{8.}\left\{\begin{aligned}
&\{u_0=x_1=x_3=x_5\approx0.3244770611,x_2\approx0.1030504001,x_4=1\},\\
&\{u_0=x_1=x_3=x_5\approx0.4009373579,x_2\approx1.613068224,x_4=1\},\\
&\{u_0=x_1=x_3=x_5\approx1.108447830,x_2\approx0.1984241041,x_4=1\}.
\end{aligned}\right.\]
For $x_5=1$, there exist four solutions of the system $\{g_1=0,g_2=0,g_3=0,g_4=0,g_5=0\}$ with $u_0x_1x_2x_3x_4x_5\neq0$, given approximately by
\[\mbox{9.}\begin{aligned}
\{u_0=x_1=x_2=x_3=x_4=x_5=1\},
\end{aligned}\]
\[\mbox{10.}\left\{\begin{aligned}
&\{u_0=x_2=x_3=x_4\approx0.3244770611,x_1\approx0.1030504001,x_5=1\},\\
&\{u_0=x_2=x_3=x_4\approx0.4009373579,x_1\approx1.613068224,x_5=1\},\\
&\{u_0=x_2=x_3=x_4\approx1.108447830,x_1\approx0.1984241041,x_5=1\}.
\end{aligned}\right.\]
For $x_5=\frac{31}{17}$, the corresponding solution of the system of equations $\{g_1=0,g_2=0,g_3=0,g_4=0,g_5=0\}$ with $u_0x_1x_2x_3x_4x_5\neq0$ is
\[\mbox{11.}\begin{aligned}
\{u_0=\frac{737}{289},x_1=x_2=x_3=1,x_4=x_5=\frac{31}{17}\}.
\end{aligned}\]
Among these solutions, we remark that the left-invariant Einstein metrics determined by the solutions in Groups 8--11 are all naturally reductive, while those determined by the solutions in Groups 1--7 are all non-naturally reductive by Proposition \ref{nr6}. In particular, the metrics induced by the corresponding solutions in Group 8 and Group 10 agree up to isometry, and the two solutions within each of Groups 1--7 induce the same metric up to isometry.
In conclusion, we obtain 7 distinct non-naturally reductive left-invariant Einstein metrics on ${\mathrm E}_6$-II.
\section{Appendix I}
We present here the polynomial $h(x_6)$ from Section 4:\\
\tiny$h(x_6)=288548666216102485319490298372480722534400000000000000000000000000000000000000000000000
\,{x_{{6}}}^{114}-\\
7335395742115974539503349380923216834120908800000000000000000000000000000000000000000000
\,{x_{{6}}}^{113}+\\
94078381310558994585051331187125707811005556326400000000000000000000000000000000000000000
\,{x_{{6}}}^{112}-\\
815372756534232158773437000571967604719385984368640000000000000000000000000000000000000000
\,{x_{{6}}}^{111}+\\
5394020114040582693624836326072703055152460459723980800000000000000000000000000000000000000
\,{x_{{6}}}^{110}-\\
29147068861160512043456310394532638973009479245456474112000000000000000000000000000000000000
\,{x_{{6}}}^{109}+\\
134330326689565122860196509693349702085611553735614524293120000000000000000000000000000000000
\,{x_{{6}}}^{108}-\\
543950722347413709091314078263315359554321702464653838631567360000000000000000000000000000000
\,{x_{{6}}}^{107}+\\
1977156456198764081665489415283217701559164441214210196526596096000000000000000000000000000000
\,{x_{{6}}}^{106}-\\
6553932091117232855452398232625779163507500833393670938453396684800000000000000000000000000000
\,{x_{{6}}}^{105}+\\
20051771138222265162787867070150309155860454357487955338163262586880000000000000000000000000000
\,{x_{{6}}}^{104}-\\
57149190441408854243012248124815005821521968707890639680575132139520000000000000000000000000000
\,{x_{{6}}}^{103}+\\
152835776655049481507606713389342085792929023876647000321045284939366400000000000000000000000000
\,{x_{{6}}}^{102}-\\
385756462944628713137839357523266365536724194341244461787260182897623040000000000000000000000000
\,{x_{{6}}}^{101}+\\
923246441937253023544548551112293271009138083357097396744185678294679552000000000000000000000000
\,{x_{{6}}}^{100}-\\
2103412757751515961253892845131681054271482548768957772344571292651880448000000000000000000000000
\,{x_{{6}}}^{99}+\\
4576635509329089592980580819532376494698679620316682908084360030156887162880000000000000000000000
\,{x_{{6}}}^{98}-\\
9536363644246112135881661787938304131231765937926496419549356264244148961280000000000000000000000
\,{x_{{6}}}^{97}+\\
19075075896720745327516498667217050539300287648144531385033627602887114752000000000000000000000000
\,{x_{{6}}}^{96}-\\
36702573270534659785989982819949577303959575255730333060378328961655387287715840000000000000000000
\,{x_{{6}}}^{95}+\\
68056216913739631749322574839527960190346820317441692917509489044571240158724096000000000000000000
\,{x_{{6}}}^{94}-\\
121812029770982949418572416294490232060904869125587642005056843136001221680798105600000000000000000
\,{x_{{6}}}^{93}+\\
210768703644041099247097074305412845727257394478296076350620300770567300126175395840000000000000000
\,{x_{{6}}}^{92}-\\
353023485401160038105010334971232204765337446216424784610083373458822887516598697984000000000000000
\,{x_{{6}}}^{91}+\\
573094311879980091977951108679734497114275489362396555522847525055228969066686316544000000000000000
\,{x_{{6}}}^{90}-\\
902776547435297518619872204025127546436588215347707699177613095959712708113907843072000000000000000
\,{x_{{6}}}^{89}+\\
1381470179511776696238079869975770761332760056220154172745621510511802459843682998681600000000000000
\,{x_{{6}}}^{88}-\\
2055691660816228681808727467311341844197624685508666419978584648469236044884728275271680000000000000
\,{x_{{6}}}^{87}+\\
2977509595428067235101258282418760365285755126075924171615103016656820208031235928227840000000000000
\,{x_{{6}}}^{86}-\\
4201725120405361863667354990201606709573190236890861384608467887244322436946675328090112000000000000
\,{x_{{6}}}^{85}+\\
5781756483073179454917236955355314508646903884452703502012201893138820119263630187298816000000000000
\,{x_{{6}}}^{84}-\\
7764371676592897388676886890577299023889575649802583977971700446782610657379596330493542400000000000
\,{x_{{6}}}^{83}+\\
10183620496243087634800999266392091503767020139559664690681345852294086238598894375770521600000000000
\,{x_{{6}}}^{82}-\\
13054516827199874689647995931541533885437441220682700405586454195035408459346608479398789120000000000
\,{x_{{6}}}^{81}+\\
16367178143760644912462581706756749405506470367424836184653013293361886227319582250458152960000000000
\,{x_{{6}}}^{80}-\\
20082208595708058444257930092579772418036354526175667544909927092935162381791985074106793984000000000
\,{x_{{6}}}^{79}+\\
24128089109767676333163701531096577498630224048672867437805663399205390974551999451865415680000000000
\,{x_{{6}}}^{78}-\\
28401200715041387036896988893111857222372577414558286924609319958231877059210419539825708236800000000
\,{x_{{6}}}^{77}+\\
32768861687769238956167161989233205835442693551136264891648443112259550388012676313583688089600000000
\,{x_{{6}}}^{76}-\\
37075430186581614315913211547784967892352876656501255953200659995382589809183727308920499077120000000
\,{x_{{6}}}^{75}+\\
41151153785704934764308791987090636719181983801375625528153654823770872229589873780129323048960000000
\,{x_{{6}}}^{74}-\\
44823088360265402011083462355710485240467134423408550926180182695515577114948033664144657645568000000
\,{x_{{6}}}^{73}+\\
47927115385399894568622394434700182348901704209940419379929530409209547107317917236377852489728000000
\,{x_{{6}}}^{72}-\\
50319904861589947132816254744282238135435415320599883838412063668440441095639280488162700008243200000
\,{x_{{6}}}^{71}+\\
51889630250816021795835597281647403593459135837539633376498567673844740216014707791176178306022400000
\,{x_{{6}}}^{70}-\\
52564350198681855220054382985091068588471068882030190034292454885932208798043191092370630644910080000
\,{x_{{6}}}^{69}+\\
52317215639869872635966701755733856675226626488683021867912886301886830804537686698081061757064960000
\,{x_{{6}}}^{68}-\\
51168007224088062314848347765733799961772588113940585157751524190533262394091745932641046693611264000
\,{x_{{6}}}^{67}+\\
49180910182259851588727130149233817252851530615710856300448164229057873608404771272526968737850176000
\,{x_{{6}}}^{66}-\\
46458838246393871693574869345586158471422065209303270911696345268890771337201589894328021757505497600
\,{x_{{6}}}^{65}+\\
43134972355331848138944108783221020809381262458884310419495192086629751470296379376274718216741116800
\,{x_{{6}}}^{64}-\\
39362439497633534511245212818869498004709129802059023254302574691891728808858041908647936337387500800
\,{x_{{6}}}^{63}+\\
35303193191972277228439777065137656416167494386497276870790422335802930644442653095711597112409271840
\,{x_{{6}}}^{62}-\\
31117160063399164337139437688865039829295017277594793219988925986005316978924338694150335736704040128
\,{x_{{6}}}^{61}+\\
26952596800815813143356092122470908475673026431828753639697781507002683163957207516646044377535302416
\,{x_{{6}}}^{60}-\\
22938385143521621229111024324925922016048441025457404563508543696129189690123364595841333795232375488
\,{x_{{6}}}^{59}+\\
19178716787417680585229756047009699670913747539285298670733754230262805598913938949076314126676954520
\,{x_{{6}}}^{58}-\\
15750326215528480002321380215653164362612161106504843101453491079248331235708830001179564074205178096
\,{x_{{6}}}^{57}+\\
12702155582893911210516581025463366185635794710186618524034771427770882482957248082633410219471019268
\,{x_{{6}}}^{56}-\\
10057112312755464433983066058320195619101802597197233470570592201692916613350578305131786195545412768
\,{x_{{6}}}^{55}+\\
7815426696830648061914604250266266973243034097843367116915760277296648396423251065688813292827296958
\,{x_{{6}}}^{54}-\\
5959041500741504739342676230879226001046706924977493457884142602249121692721907973267744556413515708
\,{x_{{6}}}^{53}+\\
4456465144171912344654772807258183612540227615811733523998081212234586349153413960140835167900125575
\,{x_{{6}}}^{52}-\\
3267582019130579458571931040636411368414210107466090031294076856865039471544160629816891300649138914
\,{x_{{6}}}^{51}+\\
2348019298622537219931845106040325224449376841127936407087717058427605648457057613218067167306832799
\,{x_{{6}}}^{50}-\\
1652797965624984944364734501531548328098700190410203729083639848818702551406718676413560442288648794
\,{x_{{6}}}^{49}+\\
1139126533749155188323113784381862130506344272474638522684929310780995606920131814752352645015571154
\,{x_{{6}}}^{48}-\\
768312629130087932486704909567133368795476393558489294068272136432855429858675357821521262760285172
\,{x_{{6}}}^{47}+\\
506859330039629945929036123576903762986470201996591911552482105191542415407606632542148413760601644
\,{x_{{6}}}^{46}-\\
326874849254477840682384310229946957489771134083669968516994727481563164015454638090887344294345670
\,{x_{{6}}}^{45}+\\
205955910185909378300314035545530956983528846439074840056626177783231992263223908507756385078838752
\,{x_{{6}}}^{44}-\\
126710981824123666170968229538116813326554434163226695218742665671948434421611801101640777890740102
\,{x_{{6}}}^{43}+\\
76075669993045192563804968832318949836021878712164829516745508159362085161521962927802256246872538
\,{x_{{6}}}^{42}-\\
44546123774203858562737203316507694133062777619959630084625346029037382741728458814705943928706820
\,{x_{{6}}}^{41}+\\
25424079948814984629207093374323401542188106890965494590130148254438572882740924871317462129769286
\,{x_{{6}}}^{40}-\\
14134754167722643633726939221604262601223529929465692488570987952206549624337306012138781911925014
\,{x_{{6}}}^{39}+\\
7650210550322154812189001607124054641751922374805051507835273092274564010735771078329022936177926
\,{x_{{6}}}^{38}-\\
4028400549539187018992894114891089589580150128650696195170202803559075981096739802818599329458446
\,{x_{{6}}}^{37}+\\
2062504690081829900018028989594844151118352103047456363540289957813121404803502918698507464289716
\,{x_{{6}}}^{36}-\\
1026089059219399650760857906769747803720756098122009477430762995721873270212987522789252292364364
\,{x_{{6}}}^{35}+\\
495703837079474212874671053140765282382801340577813553149110407378105516432073961287160046768876
\,{x_{{6}}}^{34}-\\
232390733552737045335629185654999832651320240145225223444250737827560540662735451238646960136598
\,{x_{{6}}}^{33}+\\
105652022802101393188141512641862236798001179702090179364971313726411114144364573456998404552476
\,{x_{{6}}}^{32}-\\
46546959217278241385295256857636422335592436377929517788382230391252368928319967388969151737398
\,{x_{{6}}}^{31}+\\
19858100541803106926904408901752742130340672229494971316383032353051521167461457189672834072094
\,{x_{{6}}}^{30}-\\
8197455866919393826435150321412801803120761170805735702648847245471335334010157085431416016164
\,{x_{{6}}}^{29}+\\
3271585422148232882984609479876199064691579238439876178430988984960889820781934191606548102735
\,{x_{{6}}}^{28}-\\
1261233266374010693827434138105796490286236015474740910991540133982120592512441565670641934156
\,{x_{{6}}}^{27}+\\
469228512106326617112460403969728143919350145959862950061782396864465244470750494493757108597
\,{x_{{6}}}^{26}-\\
168301715723186009564910019276060906015115581793854255120274409266120834901127339810646645928
\,{x_{{6}}}^{25}+\\
58134776061640584899879938134874407675849646364688326565254499838031295975064452921070505454
\,{x_{{6}}}^{24}-\\
19315871932373280332127853706359356286529593585317343480616017086638110758962679878721592032
\,{x_{{6}}}^{23}+\\
6165442090907758467003182162168410728084866700934563866146171038163427792394988732447742576
\,{x_{{6}}}^{22}-\\
1887871491252749071165094213916356664798522111340622135482992091045000367356910998396504640
\,{x_{{6}}}^{21}+\\
553685436076448230582095063647153018919049767435862585449491653881668083955425859890407200
\,{x_{{6}}}^{20}-\\
155270069022930306455064658339299828485707651572993944747264551460206031752289211982987200
\,{x_{{6}}}^{19}+\\
41554020009974686461242824300388850412981599048711760342154161186385016909464516477310800
\,{x_{{6}}}^{18}-\\
10590245914774140828048927818702238404535327456947357218449153450593999656631562693776000
\,{x_{{6}}}^{17}+\\
2563987767983678891215691704653502301548996316078479440915761179635744715385150414860000
\,{x_{{6}}}^{16}-\\
588100297307408593160912180295955225449377126353066214406197496763592683855730919488000
\,{x_{{6}}}^{15}+\\
127394819920587345997715014771148502743677895787382759422010047188016607123337846352000
\,{x_{{6}}}^{14}-\\
25968706971106515156435215854797406123655066708180261542604377597361944349585098624000
\,{x_{{6}}}^{13}+\\
4960507940027602159108171414133540542870357682494160638808628474544074120268857600000
\,{x_{{6}}}^{12}-\\
883568435934032491854600142821810738907025437706278188778615895234588463202734080000
\,{x_{{6}}}^{11}+\\
145897682005731748694196345430889511740879526344443757233716282574010926547568640000
\,{x_{{6}}}^{10}-\\
22175866222857978558060838480620319985670100890081201714818580060768067723673600000
\,{x_{{6}}}^{9}+\\
3075862118928405963111852344998231570651617836342389807394973285218950438502400000
\,{x_{{6}}}^{8}-\\
385105259410167229527440401287446716975632740993559048809336040524404994867200000
\,{x_{{6}}}^{7}+\\
42917339672778567692638781664562933815548814423377227738175562693341224960000000
\,{x_{{6}}}^{6}-\\
4178590536688350662228868634718089493254270404833603948496989437288710144000000
\,{x_{{6}}}^{5}+\\
346352374181513467040566508305589632427483211407599391709775625992142848000000
\,{x_{{6}}}^{4}-\\
23523257372893535305090108878056327495383233042527291749486130479759360000000
\,{x_{{6}}}^{3}+\\
1230919766784379333820283776146657594954326996169558341670191811788800000000
\,{x_{{6}}}^{2}-\\
44258346696601099692147834894822419343469825540627109426180915200000000000
\,x_{{6}}+\\
822759441845983293555551467431826001867074646280506498917335040000000000$
\normalsize\section{Appendix II}
We present here the polynomial $h(x_6)$:\\
\tiny$h(x_6)=4483391625806314902278623053770051604405925444922144197732882524176570461483849883534229504
\,{x_{{6}}}^{114}\\-
205326408642245030427921175421329504773285549623799716231703635104414332571732690022581665792
\,{x_{{6}}}^{113}\\+
4748227343288912990296885343966373457069938283464960086266393172774393766368333307469432356864
\,{x_{{6}}}^{112}\\-
73857027765851476864907995448674285385990573072187964007497919962287095629599881653594965934080
\,{x_{{6}}}^{111}\\+
867664040149060106328280687969035749162428713361957811145995338185100965861669204561179292205056
\,{x_{{6}}}^{110}\\-
8190861944847112540343332023107832167424206756712692909839631199237350246201450512424812054315008
\,{x_{{6}}}^{109}\\+
64524201420812387655648394773450794368378133978768740061962179983314070957918412138538943926763520
\,{x_{{6}}}^{108}\\-
434712852282437200305047238826737804111974281774961322415630966710723431930513864104548900129472512
\,{x_{{6}}}^{107}\\+
2545970008909317278697022351939013486712452518283817781567560897727987929782362553710098265197772800
\,{x_{{6}}}^{106}\\-
13097028510927733556711072703012531491895193895358908087070737594767573219855012966233119733042905088
\,{x_{{6}}}^{105}\\+
59489010993776772645935266065220062725433224359410810434669261324137366389534849780109565418018963456
\,{x_{{6}}}^{104}\\-
238513405926928676903444856966833432897697263280851985483727297580965902485083469530724927053045432320
\,{x_{{6}}}^{103}\\+
837192767901400659327388305673834160162972688262551443375142899312686418524934321672790512987409481728
\,{x_{{6}}}^{102}\\-
2513099136708432936794293404949731029600341919853348088292159358568388896888677689779264842174440669184
\,{x_{{6}}}^{101}\\+
6053719990602192749861843286183348238861603167575816936543214471122284892335479278172815542584764006400
\,{x_{{6}}}^{100}\\-
9163876311642991055395998749631150596871330962512532720142436345754112557269606001994254021795015294976
\,{x_{{6}}}^{99}\\-
9024249443985225373330180204805081933528916042035915597129425510528112051248275814341201244956282322944
\,{x_{{6}}}^{98}\\+
142074934217896015771539864320501366171593067667530296183577155353072731641661116166010165465113457328128
\,{x_{{6}}}^{97}\\-
705841999687383448773452929656752250551625929461698468891233310881014725571830333537346557215306246258688
\,{x_{{6}}}^{96}\\+
2556963252800373545141667373003943578430092496177297523595625717533131530755386426887189567118496556384256
\,{x_{{6}}}^{95}\\-
7506000937205795890819942467284655011933626507544074468697398990596153361536969409902109321077199164407808
\,{x_{{6}}}^{94}\\+
17936874296373676114196887995833491364108238894730113646733427249384695776872725472056415982690935691542528
\,{x_{{6}}}^{93}\\-
31994491935711999747347405170149967952484298573433497955772482422845439648875035140492738459008304894967808
\,{x_{{6}}}^{92}\\+
23577072416713937683736594033035793565791544287767289977647307789915294750305008817333193966475822808170496
\,{x_{{6}}}^{91}\\+
118049483336789073704806907341232718697539837973168098200793109647025686330009533187642821903486004605485056
\,{x_{{6}}}^{90}\\-
734226593700496983659859704822162571968318879856660029116807065471505272263760698526703379649368421159665664
\,{x_{{6}}}^{89}\\+
2676100767811761358159438941280732258286679394686887068007192972643646754163029477957216740488747100107964416
\,{x_{{6}}}^{88}\\-
7662993760316617988424167386186942948965267517817521495920041905265592014721029928346467644366995617925300224
\,{x_{{6}}}^{87}\\+
18228878401501112954518648203563585882625088337782727970218694078395658528409378636434793400122030758366806016
\,{x_{{6}}}^{86}\\-
35497917132999416914255497739983952213960106444786351129608800669158645578268238705407671446370670634791337984
\,{x_{{6}}}^{85}\\+
50080604073204066831006607591646819608839953930614131764384353535761210999449666445853487954362644268536823808
\,{x_{{6}}}^{84}\\-
15720285635577805336432985158241452724031542073857594702537987971249459895284267203702419415647310137933168640
\,{x_{{6}}}^{83}\\-
213253922155301306434562560925404886975837336511684803944554016208255017287642490436704424404810363342977761280
\,{x_{{6}}}^{82}\\+
1007493043455866176533503224292999744184162166789427844011779822050534712435567593774778075593340183108260986880
\,{x_{{6}}}^{81}\\-
3171041860497603522808595547606604744997187422941016394874266237599677292995197145911648361293608523220490649600
\,{x_{{6}}}^{80}\\+
8192737511252059459769691589000259072047018568076663448429160778728067226324093815029594653616870160763925299200
\,{x_{{6}}}^{79}\\-
18305135177471481148456590646858825040465716298190833048619372186905924029582005984292092620693283163176802713600
\,{x_{{6}}}^{78}\\+
35624173877351729177444371747291195446773624694183712274885750085250404507310578866325771229695512146484731576320
\,{x_{{6}}}^{77}\\-
58582234594937660061939858792579634811160139147261998875044075344563326032022628102019721902715587786812839428096
\,{x_{{6}}}^{76}\\+
71819099318483048562659053709646755971169118190910895195097395111871506597473238712641461855192748334374136578048
\,{x_{{6}}}^{75}\\-
22093940243587190007287340760642189308946381991953536838792383303959476123643969362831745194258873365455764979712
\,{x_{{6}}}^{74}\\-
233025217903294298517141287464717280418226382056033072040305034274672769945242846989255002152281094724055960911872
\,{x_{{6}}}^{73}\\+
1025523298653789662058798736792859638602868572150673287005471885539852081213238769000767234977739727429349020794880
\,{x_{{6}}}^{72}\\-
3059734288811407594538221696447826114213564934841831098202348820839496834181221537195234953305884612856030091542528
\,{x_{{6}}}^{71}\\+
7724840108834128111122985253623743171066155405872104137836175939697010487564442474281477549811357135551193290637312
\,{x_{{6}}}^{70}\\-
17598555229915204052409275721605938532617305913212823977587619976905852836472667837758344972101004218032171429593088
\,{x_{{6}}}^{69}\\+
37214168808436892955139992394629913811032837847887399991546743647308501001802312702530950997073919761611388968124416
\,{x_{{6}}}^{68}\\-
74164349973524924164044748487950527176785143354532194711993428539554783092197599993287548621237068947503033156239360
\,{x_{{6}}}^{67}\\+
140599225063026401498793383882444543241421231293952449285214127461284194362568804729645646538437155823194867224264704
\,{x_{{6}}}^{66}\\-
255134929665028448462123935651676393218192518641856186992058689294677223050628809197576711116283639555716651134943232
\,{x_{{6}}}^{65}\\+
445115345117851916722473459965705479745480779085200616816510631773582234441390714046969812170649847704933366696257536
\,{x_{{6}}}^{64}\\-
749061719034523494343938879903371231292812041784789121517707667677755583798652397656826217325621436188989758776354816
\,{x_{{6}}}^{63}\\+
1219007780972759381537137388267547796560314187219082470186252125095981682570335264334237104391399268072460319401508864
\,{x_{{6}}}^{62}\\-
1922267833843986235022933568021913995489998232190319502655612143006018494863365132399299614081058546517837130018135296
\,{x_{{6}}}^{61}\\+
2942049957640812163906274015735702461296336164943877137584275801313056626469838939156398925995441529615603165326043200
\,{x_{{6}}}^{60}\\-
4376243911416590305490787122087133601237261355330019247902839015057483055317677721200907371667145695529192616110802304
\,{x_{{6}}}^{59}\\+
6333726504703618856653502310224966798566019634504342266079892781888679281304936160549551502977632695189789118671499520
\,{x_{{6}}}^{58}\\-
8927675097091887804745226828453944747104206326592990075536994371766100547562570977756777636313020791928669151806494976
\,{x_{{6}}}^{57}\\+
12265685956882439991845820057377301977701307980221174648635325190086145361805524317571531714569142062396466085421260528
\,{x_{{6}}}^{56}\\-
16436954639181339199725274838984659482882699396892670565503454008734824997698238810780224586923809935787495073755682048
\,{x_{{6}}}^{55}\\+
21497350258453638415397279374929990280857180860199552360851908786111509181565393506839861175772235405584492192924051136
\,{x_{{6}}}^{54}\\-
27453825635407692526495522227723181436782015685051421530035594447587824862989263800564974129586632784329227564794286480
\,{x_{{6}}}^{53}\\+
34250139598857372273893286430557004669085816584142659388356305799891035604666301778847885029162327546724314447540592476
\,{x_{{6}}}^{52}\\-
41756199327444679394343041648930798665430872130161334400565034986479139562256167975306892977457785457429926023677226648
\,{x_{{6}}}^{51}\\+
49763342019653154352562591967729846304052752675277957675977061260505683059276922971558559201463011741849416976582325488
\,{x_{{6}}}^{50}\\-
57987487462746366335335991718002373485838879355098968647341803895525756487993561779421879523681815240678851464094886056
\,{x_{{6}}}^{49}\\+
66081292616855572255171806158941767112015260139978995153901025159263362839587014955768679394364920959353261947428964737
\,{x_{{6}}}^{48}\\-
73655294673820815925288413980144343074212843219082144168473872042616657762760922761859179435971211674668985781534517500
\,{x_{{6}}}^{47}\\+
80306691545910150245367112937968208511672761873112402793431287741215694385854056693545710303318473848966530575545751580
\,{x_{{6}}}^{46}\\-
85653093862349827565580393514587961471802292127448490876587618595025473333644309850362532742862630521792781327819145020
\,{x_{{6}}}^{45}\\+
89367533191496865457171836824211611038413396796619272675309994300077047958693420361198462461574043052964279954786643780
\,{x_{{6}}}^{44}\\-
91210449257553934247002472035018984390462182691618369432805884278529261913185953934119479451380811117892501648130466020
\,{x_{{6}}}^{43}\\+
91054455426155996855357744213192997670275564845348405630767309072963233095896146453409671919815167821476644134852675444
\,{x_{{6}}}^{42}\\-
88898437541174711715857750586848730886368844816411220259916897384768072942558579930738004664931842901116211216035133972
\,{x_{{6}}}^{41}\\+
84868888741265046378290537973934804950595327870250375948494626877567996926661092217987732177349541474335293535257240726
\,{x_{{6}}}^{40}\\-
79208115369114766990089175674872559209294113338184568307577444706375155818086412275304344000524148250370963873485090316
\,{x_{{6}}}^{39}\\+
72250776886153494923569109186657985156939727734775421593425799130418669640659884616431522278010448820239559548579599460
\,{x_{{6}}}^{38}\\-
64391829980467870144752860336339707615215827827081427038901326665274904763149563765091447902175939632575049165759240844
\,{x_{{6}}}^{37}\\+
56050056059301403897498004796897349226648237681855325925861150312204486441739258979455207815222504548273765606248331496
\,{x_{{6}}}^{36}\\-
47631779511787796571969634019449416455725165799777979791064852380476760453638349613177170209221510076284992362837866884
\,{x_{{6}}}^{35}\\+
39499080619739491579277073599405766205548262015368623778229693050332744389406034558016435258661721158636673516939597564
\,{x_{{6}}}^{34}\\-
31945859788755370092456761074054472831651732972799157275898200584015213478472443103682886677025951103427156238146420388
\,{x_{{6}}}^{33}\\+
25183722561334978336454160054965989376608115536889702639786203461738916756192687653176293304462703554397983017958570457
\,{x_{{6}}}^{32}\\-
19338099618373722669059669684408378980599446416023939193013170475797613589593234875606362932851846780945957933674850408
\,{x_{{6}}}^{31}\\+
14453573286857473064473938716154482163802115435134318539543214024100524130195352073381460755410255125893797529219895456
\,{x_{{6}}}^{30}\\-
10506283893683950982237258985379197783560704043783843231538653100486294357045515762738566190667231697911368567221530200
\,{x_{{6}}}^{29}\\+
7420676999572068797167588141139250346102101153695723803051284950741365894561917099189383422998601170039729736052998528
\,{x_{{6}}}^{28}\\-
5087758778644159009567284077404521921647088174034583785106555486576588627972776012025436255495025920499344227899039344
\,{x_{{6}}}^{27}\\+
3382383075548043015684431304657228198049197356435193103644820049023079706343604156193866348986611530702250237289093120
\,{x_{{6}}}^{26}\\-
2177758814225798082716970231656255637946844467794189997304066903819557627554011437473728015218092776210095863631577936
\,{x_{{6}}}^{25}\\+
1356166832243486057136360018201916252457358988130700500991581357957438071388815746386879543929710277448847638477085960
\,{x_{{6}}}^{24}\\-
815645666020237828883820614975859631653770088733044655641730207118001267968093449599423029741642182490137553503053984
\,{x_{{6}}}^{23}\\+
473021446550315591942800621896765768992471722688223069494892337368139735748855607037515729677471830411607294688173696
\,{x_{{6}}}^{22}\\-
264050411498286501527450861864478799401697473834016975193519596212174085659602735695340944356689801385833734708694944
\,{x_{{6}}}^{21}\\+
141604493847024673563277897583433233231512714531396898135844096095557682072804603943045270097096431539411007607985856
\,{x_{{6}}}^{20}\\-
72798151358188646984916605951368515149104134569690822065973025532312726888482172688778782109993500493509669967860224
\,{x_{{6}}}^{19}\\+
35791915960104934870095512437794470624977306091544210630932214416509751900942559643587069631054972130794391619136832
\,{x_{{6}}}^{18}\\-
16785065526227397551430648223230953785768377099815666232468321802711730642283913098885866848845514746866586876890112
\,{x_{{6}}}^{17}\\+
7486039386167758165498756320666538440668058449690300911364834080457175291113335212717971648404758925373300286691344
\,{x_{{6}}}^{16}\\-
3164690591337089269122651792673430984332571623895791846860594990256707408956006069818488869855195323713547291274240
\,{x_{{6}}}^{15}\\+
1263371862412637336118077448284129277800934557049604208353286016322956282676072492375524618321518497537179081405440
\,{x_{{6}}}^{14}\\-
474237174459865673622950650903008236257362680808571814423376106553991218384112852650618724470295923203219808460800
\,{x_{{6}}}^{13}\\+
166567588485401531601351575658375428881889405229134541528417132083554220056821494460709982665900520238239579509760
\,{x_{{6}}}^{12}\\-
54429856659678273150275880156007449410860423905145212704433824330499298121721064388192700925034251046304478330880
\,{x_{{6}}}^{11}\\+
16436942247144086690315529389700884710622790625823197634388451443696054490105773730764452786436016477036778127360
\,{x_{{6}}}^{10}\\-
4550559903116335272190334356678067567300330339887054893274986715736952210291974416159798729836648828774753239040
\,{x_{{6}}}^{9}\\+
1143794330365551469610644486847760213393907293445506682022696752757580949711190716433894667610583913035584634880
\,{x_{{6}}}^{8}\\-
257892406115483250820149690568604951656065829658549404453378180558623971925103779553458445170103335234043904000
\,{x_{{6}}}^{7}\\+
51366626789697031994971660978007265502691340725200934436341529069242803083658647745944596999843687882240819200
\,{x_{{6}}}^{6}\\-
8857785333824662183640035325209323345883967696840724309403070568683468357320498685584956576771894509410713600
\,{x_{{6}}}^{5}\\+
1286350853431778836288662240252507434753495656746971709685224196828127797048648651729278000729076056824217600
\,{x_{{6}}}^{4}\\-
151108572289028044685257699340398716743469878173371315869891303511532560180920169534954354971901547669094400
\,{x_{{6}}}^{3}\\+
13468295516322163243823865461588689875067172048266895252627491145464137024105265628046711165122125391462400
\,{x_{{6}}}^{2}\\-
809955593679404419990999198050851090431254722130327121234710273131829159152130816806933719380294578995200
\,x_{{6}}\\+
24652795478782568705193226321305604963662010504130505462968280403875490149216360605689272167309928038400$
\section{Appendix III}
We record the polynomial $h(x_5)$ as follows:\\
\tiny$128347343962975370412693426749009849339839357016801236766964799153788520545469283287700441036952900345128917401600000000\\
\,{x_{{5}}}^{178}-\\
5530540078084901614541347163272585556710461227261360504342982180428179351592239804543551577526948963951376020275200000000\\
\,{x_{{5}}}^{177}+\\
1214368671181023056340825625079201542939242778107087349459445314331171252355905862118372161779163451531290451629834240000\\00
\,{x_{{5}}}^{176}-\\
1812431994310165286339190290550274741253258982737776277937193581998625964287755164528412074760604334773217549337021644800\\000
\,{x_{{5}}}^{175}+\\
2068707462078478021064883505615211507123991283159373507839725373497169580163620572530599699435828318654118957649735188480\\0000
\,{x_{{5}}}^{174}-\\
1925873300646058612284297034856109247401795114532998418465537834769953774453717368954658193957404620362604996521357381468\\16000
\,{x_{{5}}}^{173}+\\
1522757295376626038741802388644779881996787547117412902238876564126995898728424975593066546705697211558618302730037040565\\452800
\,{x_{{5}}}^{172}-\\
1051322757936520155495220079568528552005553685193943817063614397077959353964356464752768181592436787886207528323855554564\\8496640
\,{x_{{5}}}^{171}+\\
6466180208832624164623954167835744881120569147912311659285213246326350815257644212205523180135164250850094582806452137537\\9881984
\,{x_{{5}}}^{170}-\\
3596848876442193547475956779805286385701797301462769959601207412927544020389729830738737743522457707905475592562932467680\\57942016
\,{x_{{5}}}^{169}+\\
1830830541452091752073244673215535267393611413556147916846999545960035711373596330479080243786654533431614760172044776515\\267723264
\,{x_{{5}}}^{168}-\\
8607332481708493271697961883983485815650906558642787236387410082195867547956110322590237108086473316084344853203251259425\\888927744
\,{x_{{5}}}^{167}+\\
3765785531756162462427350264287410124363832376273812417161365892528408491688570163592369263093746241632193297632948022624\\0175538176
\,{x_{{5}}}^{166}-\\
1542758760588866064808912185086604663134560725127199843177604916390452553607595186339595386002418176472602234569399028917\\30183127040
\,{x_{{5}}}^{165}+\\
5948861302729476122393831944548721298294439852821189927023221924120621699253980927796068501984745210247055008344663587788\\42761396224
\,{x_{{5}}}^{164}-\\
2168433620503978001015246311858351279579697918033759070972940543387586412850181135391452008869571808403033665890101664503\\880721891328
\,{x_{{5}}}^{163}+\\
7499537721285873739505304287210342067791732541484579259340739857323203882008983520798041021251358693804347870758083031024\\468473413632
\,{x_{{5}}}^{162}-\\
2468704930469775775549100387314406027416273460312995966687429927495116482035439308730930902233085159700116864761193703030\\6596946706432
\,{x_{{5}}}^{161}+\\
7755911459172691169162387197427066684138277328892163727843824411761749593503084147541538475456398973104702372303567199338\\1088770654208
\,{x_{{5}}}^{160}-\\
2331058312235712666892225448509846596174222520233966816340610963617511249198797415785868029717774184443317302451929362040\\81047705485312
\,{x_{{5}}}^{159}+\\
6716277499879468660401153158232638278819814773807434406849956148142762041754503108242985523991280790795627581337968578165\\36286227333120
\,{x_{{5}}}^{158}-\\
1858444060643361072383491574826396089107694238070005703922032639993004377971311240260904533619650739293066599774730420765\\791351805575168
\,{x_{{5}}}^{157}+\\
4946684911247295558626514887387912259913504869666357827546724104367661923235136463821908209878377733679310763952445549592\\908904016248832
\,{x_{{5}}}^{156}-\\
1268362635471390146625495033502720783248432474716429078211589854255894741904562053987685615084639680167706889959649283875\\6448960051412992
\,{x_{{5}}}^{155}+\\
3136829044349927529309080246078610614695759437088711569871550101091262533047682613429105463982477859241444089535123758134\\7039301509414912
\,{x_{{5}}}^{154}-\\
7491223764029709356811637399377955598275358997168945003640574807572416180566257375258081450158359154489493889604479591633\\1762994599493632
\,{x_{{5}}}^{153}+\\
1729321274353975460068086313299385761058347135937046518707069672519313793171224006543407273167152230907451040387888013318\\93454464611385344
\,{x_{{5}}}^{152}-\\
3862448271322219100148003314179251789945179543745535319206591457036005905551041782644218932825546267455936900955711508691\\93583252475281408
\,{x_{{5}}}^{151}+\\
8353697560808298624818573838240776081557945392632408597890113984450485534542780037572958427247794539389093564782532916911\\25573957343313920
\,{x_{{5}}}^{150}-\\
1750875437025128396945875516893813752771485565823419734087812202563484252488690007146280268562438058613572858052007235138\\002290270874894336
\,{x_{{5}}}^{149}+\\
3558717741163152898610084645653818401412789520711877110687028727558579770637872151654439202485152044014730072561485319944\\228895779548413952
\,{x_{{5}}}^{148}-\\
7018883023774221747505869337866007795290076146413652181055494807400316300676735421670090898086016716547456109999916206449\\519670646700572672
\,{x_{{5}}}^{147}+\\
1344090326577454676269080536638959740286362021751705582475905543979195627608206166108067704687629566692088196824886282111\\2215871390889988096
\,{x_{{5}}}^{146}-\\
2500357340996334350493014184080272021854591086360947102851033455325624121591770297969978787420202499919740127471441716297\\0442944448491134976
\,{x_{{5}}}^{145}+\\
4520587476454335797927157969001199900667087694446069724497212679260433239135486965740591399930085092271431240280202006273\\9314054267520240640
\,{x_{{5}}}^{144}-\\
7946807071999575205546372233951826638836565949636313435627902629207612818360294911794695283214829026118001658227256208134\\5749785210716211200
\,{x_{{5}}}^{143}+\\
1358819274464904722904368996520264658037602452075211613568903004595164450690664725365245465720025382923226535487437168271\\01277137327336608768
\,{x_{{5}}}^{142}-\\
2260730501023715221402951820117237525988685309020907058903498223984445669060339191697014056761522313232661929997845934749\\92095051616500285440
\,{x_{{5}}}^{141}+\\
3660842969126465175883093795434890919005846002838643575918432039468292728514894869069401979867167460458228431903630436903\\41034489112715775488
\,{x_{{5}}}^{140}-\\
57711894360699221710021632077546747141049919132051451118883569811576518841725496626317030674140166222839953247106293261686\\6722439728606024704
\,{x_{{5}}}^{139}+\\
88590360337026478263637661218289704622147477120797995792259616642457384487164259362378208804951177351417626524836035815855\\3596241524859969856
\,{x_{{5}}}^{138}-\\
13243593542905229239104932446222722176189832466152708436166911969003720588150326322038614993840850184087510296807151247921\\96657926110337094016
\,{x_{{5}}}^{137}+\\
19282281473410535418050532777533532869464145616742040289190223888954237651154333070833464824361114582899354355893859852910\\04765281444161453376
\,{x_{{5}}}^{136}-\\
27343184778566301245697638082020449115287839566295218491265463524964789763744342140049519481489031047423991498340090089037\\38220390470222017280
\,{x_{{5}}}^{135}+\\
37761371663291914344043172602728576069421507588325510595315115650429218955393674783844925170536510050310104954088789332000\\27289534047414741248
\,{x_{{5}}}^{134}-\\
50778818993359011070181288002348728645172153207204768262401563804792898691324670301084026269897601577889867509279571581822\\80430915966281621888
\,{x_{{5}}}^{133}+\\
66470826431264506725268252441958349989067586344273044684172374564969673914111921389593200972749451069705616951995554397162\\21552032257127773000
\,{x_{{5}}}^{132}-\\
84665730446582041918171153590547170741297717467316880323800825659789885267675243367605866820091964253027116271823832882947\\75015222009735128560
\,{x_{{5}}}^{131}+\\
10486888119163061675447751156948857121859199963141219964176558780631056040684574668787571259362320276852048973761325818054\\937135826435624366351
\,{x_{{5}}}^{130}-\\
12620536013278324560228947084595780237143275014528607919005920036866493148030308250538675171805810291495028831899621383322\\298441867693259250546
\,{x_{{5}}}^{129}+\\
14739776737863001042567666839590226112058259549544710673907166304300126061157057052491061865523699228635651853581189588696\\247164325419697733428
\,{x_{{5}}}^{128}-\\
16679460369617362662719340877280409880674634391915309198415911964751127421921600749464236131604806858424593842748837891846\\739757489894904575926
\,{x_{{5}}}^{127}+\\
18246061855069356262774427930563565665891620910923173909675603823473951667167929641807805709338972897645339868438683701760\\461288425937234279024
\,{x_{{5}}}^{126}-\\
19233285570060442488386914937827877833117782052023146760898126187418807939640764053489420635252356121614705421200423646460\\481711509959509996810
\,{x_{{5}}}^{125}+\\
19443561542206869977818115693871312507881639500992954223365519335213019660010193595130336288020823745865621810816118340474\\023713736998330239034
\,{x_{{5}}}^{124}-\\
18713558278608281062081771967599945053247549774478586447692193518839999432427875633869392221142542916733641040206241694047\\577434653558675107746
\,{x_{{5}}}^{123}+\\
16940702713246375549201339934890455814313161656505614285438423919187724969115857333000689856362928884964018194309672467846\\101734015661861925761
\,{x_{{5}}}^{122}-\\
14106868018312175283411426758066900824971466460722743232145611371702530230608598928878075127752965426412640251894401792077\\955028894600122973020
\,{x_{{5}}}^{121}+\\
10295093031677265847549491836410135945032518966928003240117761989185601767384020728189302501462292684543984779278993192166\\480607950060017596026
\,{x_{{5}}}^{120}-\\
56955931757311713250542011136580134827210012000416028674940282054640428556099691079062154130308932174574948553626237631614\\79145129817542588416
\,{x_{{5}}}^{119}+\\
59845702255823773594572639594670346834536718872857896346986903258304795382857646805219789178415115069538483599936926543953\\5413981694461515732
\,{x_{{5}}}^{118}+\\
46278038460576254401890695841715450584245508027898552895811009196131523939137191042432623029599335802073326694035221787070\\30205932996838300968
\,{x_{{5}}}^{117}-\\
95705144423768811129922653417032357524282570334568308511775658040090103089348805301978632738388114618292528003582343002110\\10561199525416340128
\,{x_{{5}}}^{116}+\\
13816357541942362853812005959596217263811954198763888031918529922869020142707740777124766845337383030929312305375453019848\\859819280608263097912
\,{x_{{5}}}^{115}-\\
16998762591513613402205589295835895767820651652591273663955679899317436248352168540147170954921068343887990626294466265620\\529906129147460870106
\,{x_{{5}}}^{114}+\\
18842664725874904761582680706937296082479141662342395954790075813878725006427224089402377457362416718305196830885963529603\\974259123094715664788
\,{x_{{5}}}^{113}-\\
19200042801081826463945410967341636351725716070301634363644751842135677495525102964833404254403218149575076166154487886901\\083456510120496325024
\,{x_{{5}}}^{112}+\\
18070657052022404756902351480006894645422849368713506412749886434655518138174980842745337661934344467314412302415031328692\\627546584176909135628
\,{x_{{5}}}^{111}-\\
15604413386847734385815400120003927902497985226759843334259915008135757467031268664231870103696132891900831564223206756506\\497003192621371827344
\,{x_{{5}}}^{110}+\\
12084476395928257126009035173251416686493471710527219618463993257752279283341616182259877513798705304117531535646864445617\\965407465906487555092
\,{x_{{5}}}^{109}-\\
78931866904733395034289256132372966791572253304031661980346275203024602129850729610902464030334058964117115195632584864941\\27971601291451064660
\,{x_{{5}}}^{108}+\\
34654996331738116300513568927403911926897583255662792934426368710859590885641946318677096585786926931022969967024549321238\\38414077492176951172
\,{x_{{5}}}^{107}+\\
76342008137265855352457154173381913204276007231457903654058261392175970556076497411261325380320027940810270982640154787009\\6967821982063779234
\,{x_{{5}}}^{106}-\\
44089803863368391781962443736090686629848075360746732521233299213774457394769510576494086504194180112749229865300915479725\\31039930948862934080
\,{x_{{5}}}^{105}+\\
71792449556531971953932454872195603675644492773984566939341763623444370920314174139255900502559173080009628439424070427939\\05460470523437518140
\,{x_{{5}}}^{104}-\\
89027748353493143968044907141596602812663575433727226473159497890050299318071488925474014828449673921704912480683055317484\\90006229283334704360
\,{x_{{5}}}^{103}+\\
95390674243254028951416387101583884086507501671268109816504623055013937666607034381708557029350366091476713277207949663517\\75487499094709589528
\,{x_{{5}}}^{102}-\\
91712234227698594536947570100774813311059646124353327376104771944853102915172522736039016832906690161497750518546322055944\\02097572116070256360
\,{x_{{5}}}^{101}+\\
79831777547393387233398507707238880663607776647088597506503954257562846903175308829998405072345711476181455288410415947100\\68921223755775729952
\,{x_{{5}}}^{100}-\\
62259913093396359529440393894415639279058410216823142055147288941120570247242267720798257239989029000507597202679444889292\\84093955133745238552
\,{x_{{5}}}^{99}+\\
41789821240458939607128369026605760499680705283128064529550596100332375129726584250761953092423420196295474837565550618296\\69044944383370612043
\,{x_{{5}}}^{98}-\\
21117226671067749526925361477915031392700848141975664954737679906141142109136681558694635419241480249635576154592696844169\\41442053607155074002
\,{x_{{5}}}^{97}+\\
25218654402723218219412784265723850953865720122335202548657396853917596689033709014977411645525251066039904060353323050936\\5241977745737342108
\,{x_{{5}}}^{96}+\\
12351901037265407518282511061924740081620887845329673297102975249899282902269201828534732039482980880851177263059482460946\\46808832942615408698
\,{x_{{5}}}^{95}-\\
22597244646886550293078056370393329489404383276336677456171260270575647121965608337549367316252638863990999613113150114620\\72671019300635050048
\,{x_{{5}}}^{94}+\\
28031880378023975171918667387500994519280618075336086932158347006240582466329666264537234857543990217318762234697012190043\\18019354780247752102
\,{x_{{5}}}^{93}-\\
29089255504626968924361800462538757287633097787777324974718727706835499245661123896717969138624170884825368135175763189069\\73135074449468395158
\,{x_{{5}}}^{92}+\\
26645021033609399461262534085181854856939338699023228792812734128507042303568792307871574093207225493150390219840028345114\\74969500545023873550
\,{x_{{5}}}^{91}-\\
21814922959691964094476059682501210954703338243601457652314421767688853540495125406400582874348545913416657521694300262271\\13470519156640604723
\,{x_{{5}}}^{90}+\\
15758654837135889183460680619241121345718529494988694667242400885689975561363363561580029598485923625345885310278805771313\\29351010801960397708
\,{x_{{5}}}^{89}-\\
95172661954258507553271371937762198333761721598902982633603037568503484657512654656156066079406907401567336725746575017299\\4856172321973028678
\,{x_{{5}}}^{88}+\\
39014686092257133831443531466603091263676174943552517365091898167412001742049805163875032554485238175235423078617584040535\\8712766643471427288
\,{x_{{5}}}^{87}+\\
56306353613503845332057336721374540379515945535305732546721819714852460457836450478125330158592696380190021620866439445174\\028209651652318772
\,{x_{{5}}}^{86}-\\
36374308180934624338524000268106860653494568971058062405309950289805717416519666771402797225714893990126502337369595545627\\6055560680623137904
\,{x_{{5}}}^{85}+\\
53297956771021117315491231524854588607242419943130087864632930472382619648513793937279756303165103303724757897883570829598\\7475117960798034280
\,{x_{{5}}}^{84}-\\
58297741240687620422289275345171965040842707518430485059229706416906860879716902929719624361568531406401401132718540294171\\9154090756673793056
\,{x_{{5}}}^{83}+\\
54330903210510886919361286368047497716299260441536648053969514497720268765137053322555373482108793150393869774749051953152\\0530201751173553744
\,{x_{{5}}}^{82}-\\
44704275809797281898701588342830510390573895877345201155814289370013643210189774929323922082067974186887763600200081515869\\7080190365055345344
\,{x_{{5}}}^{81}+\\
32503608535394811085953898691740466307592868141835482024433247841500930421182775408446308972220230791884217504329273145047\\1599173728260916256
\,{x_{{5}}}^{80}-\\
20215044023687390802933425302157410839300360854477633793570033856904646215630026149855169020769412283681057493550633190564\\9518775406220896384
\,{x_{{5}}}^{79}+\\
95455511760176906919642371018800777844928981892707676741336945239143854492767770189799804927761612219179909567218538004185\\721626064469287232
\,{x_{{5}}}^{78}-\\
14147289720312394799626262849538252360861445526076119383146324714652244427564337843245262917773455576022560697589042884454\\464859890845895424
\,{x_{{5}}}^{77}-\\
39302651966794739241441801902560933280468734145339788320977576917795749884335851551263189459606607301337904666767757125978\\267514864559508352
\,{x_{{5}}}^{76}+\\
67316792605993791532762566613828344487040070952746968642533721000873868547897567812986029980401576153885902870070627992555\\092522387884253696
\,{x_{{5}}}^{75}-\\
75203875030209922332892102625703996246116196358548901487920954173893395949417607502630035981178016934328943997944060685041\\395830099475730688
\,{x_{{5}}}^{74}+\\
69356238253577045740751896975855014387640153539352455575996756589040915266505150451115111665471412061460145771548352734492\\425874846575889408
\,{x_{{5}}}^{73}-\\
55882145172451240580650159385164235785228893390588449769171108831427098245073648575921095037230474829566721300916894295493\\703778969485416960
\,{x_{{5}}}^{72}+\\
39755816674798794007370663733978784382203908845747672022768045381155025481228631577698942706829288156509263124975455285819\\579519137495484416
\,{x_{{5}}}^{71}-\\
24451718575218720970468823294677958604313573707162914153481709414216683555862120054548038297516111048610601728370244621651\\212620833740092416
\,{x_{{5}}}^{70}+\\
11956562829943089240709532476926166890966612723278822428118425819851692352915774525692323779136753502937127680835171950829\\831879231882358784
\,{x_{{5}}}^{69}-\\
30226179685352039524800895955345518361428984559404018686106510870471446895431897107265209451384703040089178242959815118647\\16436962424064000
\,{x_{{5}}}^{68}-\\
24690249453372989486548988650013722604899775366506767841752505768075950190862100587797874419081111566489917193304089340530\\75924353398038528
\,{x_{{5}}}^{67}+\\
51391162600474090628179820949437302702433716303933541947640669239446112757970577632494101706386877398251723711426473015455\\92070388948152320
\,{x_{{5}}}^{66}-\\
58016933737449570249418245581618155774558431581978900974847618045478308614747170956293182772236500164744849510928662342594\\25447096176164864
\,{x_{{5}}}^{65}+\\
52470207983394212985532467799261748523998335482054751876048963426595198925301259805545847617844690549121399379171943055954\\80992515734233088
\,{x_{{5}}}^{64}-\\
41181913824363202156024495448388860536445289957372511185158547964785391835612056853597056909058460857622368249998517670677\\79205471540314112
\,{x_{{5}}}^{63}+\\
28654171224447462196366498412975425948857786241381232214366058391961459443574050054046213103309661027283066591164906676834\\58750069620195328
\,{x_{{5}}}^{62}-\\
17539935888893859800208050537393306053478743006695073079714961776584629936856395525152189863849060518976797817425307957491\\90097221369921536
\,{x_{{5}}}^{61}+\\
90170636498893406824621487169529644598554287065722061778535708629457657462303066216592842316306489740001611819155809947780\\5445026163949568
\,{x_{{5}}}^{60}-\\
32603661531740924584844086706839266469149059970378843940642961411723253467203426415115021686700741917193295500308066940884\\1522536068415488
\,{x_{{5}}}^{59}-\\
11936558749239290836832309161032373802380504842588523102621146188184590128081771504527234555171543128611069846184140813371\\994814585241600
\,{x_{{5}}}^{58}+\\
17367788399603318397345516421677058663345724218952740860889840237862455416104920898634010668268825804035383463218018712005\\1297876981579776
\,{x_{{5}}}^{57}-\\
22116933081401017809909955289146481313870821367815606692489632240257072881381344071353955714357189678764343537608818241583\\9343764015939584
\,{x_{{5}}}^{56}+\\
20537730348353175922642973998378638386027232303528240034100887500777897088339400802178768382455131879787194540315140326183\\6715143544700928
\,{x_{{5}}}^{55}-\\
16242888037740361052528503359930824371322590988352274414172280309236828717087724837144652333925765454150079341617874649791\\8662720286097408
\,{x_{{5}}}^{54}+\\
114497696417758830197884111014698275137907535546875784137114831967225750146579289875639311614494765405835224440977096348370\\018726215417856
\,{x_{{5}}}^{53}-\\
729014545543073753790652421369886470888781883047576051376117884950795563340996027921182895994576279970678981168896388322476\\21086697488384
\,{x_{{5}}}^{52}+\\
416913257353352701963521372823394455738834133092343639285187122160809371135395447116412349334667379719063547121602154863733\\77831168114688
\,{x_{{5}}}^{51}-\\
207711649274708177123401975289823487557380477178611698044681330049965675246332266068387382905655874262667655727588838695814\\06138364067840
\,{x_{{5}}}^{50}+\\
816854024833697254047187243814337994162598294423541545890172062484427295343707105226053583362081314562481035222874332853256\\1279110348800
\,{x_{{5}}}^{49}-\\
144902380859816206608390794831745077273651088901696066845363525363734998515480962817310030216670490686027871363219912680451\\9523866116096
\,{x_{{5}}}^{48}-\\
155294518679120509418701272345361442131232284506148369824023354394210186222397898249937716080135372521447976527531641259161\\0529404944384
\,{x_{{5}}}^{47}+\\
246327392876394518418207281982112962012310972779666085586573466205858045853257705081642517716622737771701145414666936967327\\0877159948288
\,{x_{{5}}}^{46}-\\
235861637461994723114858865845752407871264144004202233092449567274236237832118393570438936089077517372625163144258722869296\\4070228230144
\,{x_{{5}}}^{45}+\\
187333592910790309558745043476603327571926520280422505218976817041003091498588933060037429602469331176392729457559868920025\\3003504812032
\,{x_{{5}}}^{44}-\\
133441330636232855439063727913384242514777704174868004324740215758834128192380876932023977918212310664367343831200293350654\\8313157009408
\,{x_{{5}}}^{43}+\\
879939050015245402885487709871665292173388006220640074169754233172423958808989610875007232500580654624855569083619929423061\\464777752576
\,{x_{{5}}}^{42}-\\
545871352085453629145805969945989221505151324342931115292760961248386185520016634954290794646935576612407195912199375452072\\008476524544
\,{x_{{5}}}^{41}+\\
321539361809710217991965283618576504821314970135012052850959973697973432905823889804387799747203933936974404799154501801235\\565214957568
\,{x_{{5}}}^{40}-\\
180886319908987016805945966138642725696390791099428116350016486118298447380423873998403451084902107355251191742163898327194\\352031367168
\,{x_{{5}}}^{39}+\\
975591535677821713968581599540664091395578426591479240443201789011632520044638702182797490480132976057131516041547334603979\\19085527040
\,{x_{{5}}}^{38}-\\
505771062475735734546421248450914914769317392653165401507004897347248621285361619386861242551398569264824052603616415068700\\15194824704
\,{x_{{5}}}^{37}+\\
252491841888224928759870248139713509284565485598208706035503639179459074390425142437761999325069058755914017519845822237389\\30798264320
\,{x_{{5}}}^{36}-\\
121531555111204348187289925549538244593267043921539071732513184654057994565662200881721531737013789979276751326227796956990\\78523387904
\,{x_{{5}}}^{35}+\\
564472593274100680600148912652239028720020979640638052759402890006138135299939335815204919186461581503379024157095331210078\\8278919168
\,{x_{{5}}}^{34}-\\
253126487273471433364948693115559920574526815925215831578482781218900380864004328804719564096780464499776632622378966261176\\4875886592
\,{x_{{5}}}^{33}+\\
109621314158756279378197806414111283700938814106296707538719647548646472860482993785928951975224278513883142785815248134972\\9463042048
\,{x_{{5}}}^{32}-\\
4585096771161110970920487751193294646122645548961171185426818258611867615855062369619815031522862463686011697365569566376356\\85646336
\,{x_{{5}}}^{31}+\\
185206322387095209162134553638724181255063099693953347166513428516243205486880448492237894644849278724477797771619439736452\\403953664
\,{x_{{5}}}^{30}-\\
722282917313616580717016568145931274352825287602271305037529260205924907179318739751751977755696269612161967084692046690788\\82484224
\,{x_{{5}}}^{29}+\\
271851440255818060509335490529765189991654424752954306581208549856161217808095244129356303624892099807183160403117357912246\\90311168
\,{x_{{5}}}^{28}-\\
986957294587225067939468327940238019061084871030747194704248509002668589811357593311246913911366435897988275414621217940493\\3562368
\,{x_{{5}}}^{27}+\\
345397762963202387374064203242664310678712047222430316372625747429612451883616916424192412939624916317705178923902677734880\\4444160
\,{x_{{5}}}^{26}-\\
116425760423306465214911023686588832585931810791078378406770158253540037018508994444858400360221965940799287682361320095740\\4348416
\,{x_{{5}}}^{25}+\\
377642767236105053773739732119515032245566246383740440081945941862204403899387217417962246137784628667728143242559640823257\\890816
\,{x_{{5}}}^{24}-\\
117746194354247061813124832871526951862062016052302572623527888730558772370864326119937274147018027728377536767742531764934\\934528
\,{x_{{5}}}^{23}+\\
352459870759017596756740697434679649855872761844905021374167118499217354362152667054855737000124806799768236468068217344218\\89024
\,{x_{{5}}}^{22}-\\
101148462211640964398258040702198264139855120333737285521450150284231522877704654697828850473172705829129258172716088307335\\82336
\,{x_{{5}}}^{21}+\\
277845776669249670026113592719534200179051070301050586342016529351199814983917459135671441046941536923534166276679528022540\\2880
\,{x_{{5}}}^{20}-\\
729222059942677239194861472602631970992826200048350708453908761070046160711682680923725375727555598800883196185895432265662\\464
\,{x_{{5}}}^{19}+\\
182490369232807069021403284819016676754281859773867649736818460685067399592662641294817600630086707838402766296441916835758\\080
\,{x_{{5}}}^{18}-\\
434447774417556252713537169449578275615658559171010875350475357432739654092695255407634470093845253375407341706207100319825\\92
\,{x_{{5}}}^{17}+\\
981312487711337377849734520545821034395980798786557655574632650995129713487289075138443547286859623905492491939765776338124\\8
\,{x_{{5}}}^{16}-\\
209672188120683804006152344764047079267202374011796846480825399560699264551760692952860839830270392859460630702219370640179\\2
\,{x_{{5}}}^{15}+\\
422314275671514886321123779568396098193816231242663172075394857732652916567647564791990662433359094935353521403228822962176\\
\,{x_{{5}}}^{14}-\\
79864922656123001672373003402242843914722460212594718228143218584471442458879920199517152848106944554516039278408268513280\\
\,{x_{{5}}}^{13}+\\
14114974194660343006125317651559136005123731524715696467962306387120914690527501028427382041548468495779517799388749496320\\
\,{x_{{5}}}^{12}-\\
2318631152616942863910854119763409708464510724161955816200337405320112889645914590041456718031593814708975738270882201600\\
\,{x_{{5}}}^{11}+\\
351711906340592001610413792726774343523694998021397689259098322027975998519200903802531516313342353703169108580342169600\\
\,{x_{{5}}}^{10}-\\
48881654482529098774644700465700353976861287780733065236104378937903373457008646072769666061615264294470604491325440000\\
\,{x_{{5}}}^{9}+\\
6165199924406916311726448672009618574787780580033514548805089543085277996422090595590674195421977333160669442211840000\\
\,{x_{{5}}}^{8}-\\
697265508611783523063822245221103038364102732880164181773707035276354526553076359731686155685276694260890153779200000\\
\,{x_{{5}}}^{7}+\\
69639850156490923537983768960854449000882112021803741111317445152925716960736160612227466714320924661507948544000000
\,{x_{{5}}}^{6}-\\
6019516033064772902099011570341461065066869308802412228392754929923286004456239967801891827762038101078179840000000
\,{x_{{5}}}^{5}+\\
437977099237740958620049330989996973827136218815395628707280026388262269517157957648785965624334795512217600000000
\,{x_{{5}}}^{4}-\\
25760391661695283164506462715706115713693855148545427268637078512865404994468648889469516554411358289920000000000
\,{x_{{5}}}^{3}+\\
1148563712556502089867697244210114379584113909091646727528636419580863895074813100054898930060623872000000000000
\,{x_{{5}}}^{2}-\\
34512293527648499324118594567227987096710626145681634865537971099433193107503302883739921127833600000000000000
\,x_{{5}}+\\
524157766562226639572627086349581115653126065201703317806174894734427971871427112480895139840000000000000000$
\normalsize\section{Appendix IV}
The polynomial $h(x_5)$ is given as follows:\\
\tiny$1283473439629753704126934267490098493398393570168012367669647991537885205454692832877004410369529003451289174016000000\\00
\,{x_{{5}}}^{178}-\\
5530540078084901614541347163272585556710461227261360504342982180428179351592239804543551577526948963951376020275200000000\\
\,{x_{{5}}}^{177}+\\
1214368671181023056340825625079201542939242778107087349459445314331171252355905862118372161779163451531290451629834240000\\00
\,{x_{{5}}}^{176}-\\
1812431994310165286339190290550274741253258982737776277937193581998625964287755164528412074760604334773217549337021644800\\000
\,{x_{{5}}}^{175}+\\
2068707462078478021064883505615211507123991283159373507839725373497169580163620572530599699435828318654118957649735188480\\0000
\,{x_{{5}}}^{174}-\\
1925873300646058612284297034856109247401795114532998418465537834769953774453717368954658193957404620362604996521357381468\\16000
\,{x_{{5}}}^{173}+\\
1522757295376626038741802388644779881996787547117412902238876564126995898728424975593066546705697211558618302730037040565\\452800
\,{x_{{5}}}^{172}-\\
1051322757936520155495220079568528552005553685193943817063614397077959353964356464752768181592436787886207528323855554564\\8496640
\,{x_{{5}}}^{171}+\\
6466180208832624164623954167835744881120569147912311659285213246326350815257644212205523180135164250850094582806452137537\\9881984
\,{x_{{5}}}^{170}-\\
3596848876442193547475956779805286385701797301462769959601207412927544020389729830738737743522457707905475592562932467680\\57942016
\,{x_{{5}}}^{169}+\\
1830830541452091752073244673215535267393611413556147916846999545960035711373596330479080243786654533431614760172044776515\\267723264
\,{x_{{5}}}^{168}-\\
8607332481708493271697961883983485815650906558642787236387410082195867547956110322590237108086473316084344853203251259425\\888927744
\,{x_{{5}}}^{167}+\\
3765785531756162462427350264287410124363832376273812417161365892528408491688570163592369263093746241632193297632948022624\\0175538176
\,{x_{{5}}}^{166}-\\
1542758760588866064808912185086604663134560725127199843177604916390452553607595186339595386002418176472602234569399028917\\30183127040
\,{x_{{5}}}^{165}+\\
5948861302729476122393831944548721298294439852821189927023221924120621699253980927796068501984745210247055008344663587788\\42761396224
\,{x_{{5}}}^{164}-\\
2168433620503978001015246311858351279579697918033759070972940543387586412850181135391452008869571808403033665890101664503\\880721891328
\,{x_{{5}}}^{163}+\\
7499537721285873739505304287210342067791732541484579259340739857323203882008983520798041021251358693804347870758083031024\\468473413632
\,{x_{{5}}}^{162}-\\
2468704930469775775549100387314406027416273460312995966687429927495116482035439308730930902233085159700116864761193703030\\6596946706432
\,{x_{{5}}}^{161}+\\
7755911459172691169162387197427066684138277328892163727843824411761749593503084147541538475456398973104702372303567199338\\1088770654208
\,{x_{{5}}}^{160}-\\
2331058312235712666892225448509846596174222520233966816340610963617511249198797415785868029717774184443317302451929362040\\81047705485312
\,{x_{{5}}}^{159}+\\
6716277499879468660401153158232638278819814773807434406849956148142762041754503108242985523991280790795627581337968578165\\36286227333120
\,{x_{{5}}}^{158}-\\
1858444060643361072383491574826396089107694238070005703922032639993004377971311240260904533619650739293066599774730420765\\791351805575168
\,{x_{{5}}}^{157}+\\
4946684911247295558626514887387912259913504869666357827546724104367661923235136463821908209878377733679310763952445549592\\908904016248832
\,{x_{{5}}}^{156}-\\
1268362635471390146625495033502720783248432474716429078211589854255894741904562053987685615084639680167706889959649283875\\6448960051412992
\,{x_{{5}}}^{155}+\\
3136829044349927529309080246078610614695759437088711569871550101091262533047682613429105463982477859241444089535123758134\\7039301509414912
\,{x_{{5}}}^{154}-\\
7491223764029709356811637399377955598275358997168945003640574807572416180566257375258081450158359154489493889604479591633\\1762994599493632
\,{x_{{5}}}^{153}+\\
1729321274353975460068086313299385761058347135937046518707069672519313793171224006543407273167152230907451040387888013318\\93454464611385344
\,{x_{{5}}}^{152}-\\
3862448271322219100148003314179251789945179543745535319206591457036005905551041782644218932825546267455936900955711508691\\93583252475281408
\,{x_{{5}}}^{151}+\\
8353697560808298624818573838240776081557945392632408597890113984450485534542780037572958427247794539389093564782532916911\\25573957343313920
\,{x_{{5}}}^{150}-\\
1750875437025128396945875516893813752771485565823419734087812202563484252488690007146280268562438058613572858052007235138\\002290270874894336
\,{x_{{5}}}^{149}+\\
3558717741163152898610084645653818401412789520711877110687028727558579770637872151654439202485152044014730072561485319944\\228895779548413952
\,{x_{{5}}}^{148}-\\
7018883023774221747505869337866007795290076146413652181055494807400316300676735421670090898086016716547456109999916206449\\519670646700572672
\,{x_{{5}}}^{147}+\\
1344090326577454676269080536638959740286362021751705582475905543979195627608206166108067704687629566692088196824886282111\\2215871390889988096
\,{x_{{5}}}^{146}-\\
2500357340996334350493014184080272021854591086360947102851033455325624121591770297969978787420202499919740127471441716297\\0442944448491134976
\,{x_{{5}}}^{145}+\\
4520587476454335797927157969001199900667087694446069724497212679260433239135486965740591399930085092271431240280202006273\\9314054267520240640
\,{x_{{5}}}^{144}-\\
7946807071999575205546372233951826638836565949636313435627902629207612818360294911794695283214829026118001658227256208134\\5749785210716211200
\,{x_{{5}}}^{143}+\\
1358819274464904722904368996520264658037602452075211613568903004595164450690664725365245465720025382923226535487437168271\\01277137327336608768
\,{x_{{5}}}^{142}-\\
2260730501023715221402951820117237525988685309020907058903498223984445669060339191697014056761522313232661929997845934749\\92095051616500285440
\,{x_{{5}}}^{141}+\\
3660842969126465175883093795434890919005846002838643575918432039468292728514894869069401979867167460458228431903630436903\\41034489112715775488
\,{x_{{5}}}^{140}-\\
57711894360699221710021632077546747141049919132051451118883569811576518841725496626317030674140166222839953247106293261686\\6722439728606024704
\,{x_{{5}}}^{139}+\\
88590360337026478263637661218289704622147477120797995792259616642457384487164259362378208804951177351417626524836035815855\\3596241524859969856
\,{x_{{5}}}^{138}-\\
13243593542905229239104932446222722176189832466152708436166911969003720588150326322038614993840850184087510296807151247921\\96657926110337094016
\,{x_{{5}}}^{137}+\\
19282281473410535418050532777533532869464145616742040289190223888954237651154333070833464824361114582899354355893859852910\\04765281444161453376
\,{x_{{5}}}^{136}-\\
27343184778566301245697638082020449115287839566295218491265463524964789763744342140049519481489031047423991498340090089037\\38220390470222017280
\,{x_{{5}}}^{135}+\\
37761371663291914344043172602728576069421507588325510595315115650429218955393674783844925170536510050310104954088789332000\\27289534047414741248
\,{x_{{5}}}^{134}-\\
50778818993359011070181288002348728645172153207204768262401563804792898691324670301084026269897601577889867509279571581822\\80430915966281621888
\,{x_{{5}}}^{133}+\\
66470826431264506725268252441958349989067586344273044684172374564969673914111921389593200972749451069705616951995554397162\\21552032257127773000
\,{x_{{5}}}^{132}-\\
84665730446582041918171153590547170741297717467316880323800825659789885267675243367605866820091964253027116271823832882947\\75015222009735128560
\,{x_{{5}}}^{131}+\\
10486888119163061675447751156948857121859199963141219964176558780631056040684574668787571259362320276852048973761325818054\\937135826435624366351
\,{x_{{5}}}^{130}-\\
12620536013278324560228947084595780237143275014528607919005920036866493148030308250538675171805810291495028831899621383322\\298441867693259250546
\,{x_{{5}}}^{129}+\\
14739776737863001042567666839590226112058259549544710673907166304300126061157057052491061865523699228635651853581189588696\\247164325419697733428
\,{x_{{5}}}^{128}-\\
16679460369617362662719340877280409880674634391915309198415911964751127421921600749464236131604806858424593842748837891846\\739757489894904575926
\,{x_{{5}}}^{127}+\\
18246061855069356262774427930563565665891620910923173909675603823473951667167929641807805709338972897645339868438683701760\\461288425937234279024
\,{x_{{5}}}^{126}-\\
19233285570060442488386914937827877833117782052023146760898126187418807939640764053489420635252356121614705421200423646460\\481711509959509996810
\,{x_{{5}}}^{125}+\\
19443561542206869977818115693871312507881639500992954223365519335213019660010193595130336288020823745865621810816118340474\\023713736998330239034
\,{x_{{5}}}^{124}-\\
18713558278608281062081771967599945053247549774478586447692193518839999432427875633869392221142542916733641040206241694047\\577434653558675107746
\,{x_{{5}}}^{123}+\\
16940702713246375549201339934890455814313161656505614285438423919187724969115857333000689856362928884964018194309672467846\\101734015661861925761
\,{x_{{5}}}^{122}-\\
14106868018312175283411426758066900824971466460722743232145611371702530230608598928878075127752965426412640251894401792077\\955028894600122973020
\,{x_{{5}}}^{121}+\\
10295093031677265847549491836410135945032518966928003240117761989185601767384020728189302501462292684543984779278993192166\\480607950060017596026
\,{x_{{5}}}^{120}-\\
56955931757311713250542011136580134827210012000416028674940282054640428556099691079062154130308932174574948553626237631614\\79145129817542588416
\,{x_{{5}}}^{119}+\\
59845702255823773594572639594670346834536718872857896346986903258304795382857646805219789178415115069538483599936926543953\\5413981694461515732
\,{x_{{5}}}^{118}+\\
46278038460576254401890695841715450584245508027898552895811009196131523939137191042432623029599335802073326694035221787070\\30205932996838300968
\,{x_{{5}}}^{117}-\\
957051444237688111299226534170323575242825703345683085117756580400901030893488053019786327383881146182925280035823430021101\\0561199525416340128
\,{x_{{5}}}^{116}+\\
13816357541942362853812005959596217263811954198763888031918529922869020142707740777124766845337383030929312305375453019848\\859819280608263097912
\,{x_{{5}}}^{115}-\\
16998762591513613402205589295835895767820651652591273663955679899317436248352168540147170954921068343887990626294466265620\\529906129147460870106
\,{x_{{5}}}^{114}+\\
18842664725874904761582680706937296082479141662342395954790075813878725006427224089402377457362416718305196830885963529603\\974259123094715664788
\,{x_{{5}}}^{113}-\\
19200042801081826463945410967341636351725716070301634363644751842135677495525102964833404254403218149575076166154487886901\\083456510120496325024
\,{x_{{5}}}^{112}+\\
18070657052022404756902351480006894645422849368713506412749886434655518138174980842745337661934344467314412302415031328692\\627546584176909135628
\,{x_{{5}}}^{111}-\\
15604413386847734385815400120003927902497985226759843334259915008135757467031268664231870103696132891900831564223206756506\\497003192621371827344
\,{x_{{5}}}^{110}+\\
12084476395928257126009035173251416686493471710527219618463993257752279283341616182259877513798705304117531535646864445617\\965407465906487555092
\,{x_{{5}}}^{109}-\\
78931866904733395034289256132372966791572253304031661980346275203024602129850729610902464030334058964117115195632584864941\\27971601291451064660
\,{x_{{5}}}^{108}+\\
34654996331738116300513568927403911926897583255662792934426368710859590885641946318677096585786926931022969967024549321238\\38414077492176951172
\,{x_{{5}}}^{107}+\\
76342008137265855352457154173381913204276007231457903654058261392175970556076497411261325380320027940810270982640154787009\\6967821982063779234
\,{x_{{5}}}^{106}-\\
44089803863368391781962443736090686629848075360746732521233299213774457394769510576494086504194180112749229865300915479725\\31039930948862934080
\,{x_{{5}}}^{105}+\\
71792449556531971953932454872195603675644492773984566939341763623444370920314174139255900502559173080009628439424070427939\\05460470523437518140
\,{x_{{5}}}^{104}-\\
89027748353493143968044907141596602812663575433727226473159497890050299318071488925474014828449673921704912480683055317484\\90006229283334704360
\,{x_{{5}}}^{103}+\\
95390674243254028951416387101583884086507501671268109816504623055013937666607034381708557029350366091476713277207949663517\\75487499094709589528
\,{x_{{5}}}^{102}-\\
91712234227698594536947570100774813311059646124353327376104771944853102915172522736039016832906690161497750518546322055944\\02097572116070256360
\,{x_{{5}}}^{101}+\\
79831777547393387233398507707238880663607776647088597506503954257562846903175308829998405072345711476181455288410415947100\\68921223755775729952
\,{x_{{5}}}^{100}-\\
62259913093396359529440393894415639279058410216823142055147288941120570247242267720798257239989029000507597202679444889292\\84093955133745238552
\,{x_{{5}}}^{99}+\\
41789821240458939607128369026605760499680705283128064529550596100332375129726584250761953092423420196295474837565550618296\\69044944383370612043
\,{x_{{5}}}^{98}-\\
21117226671067749526925361477915031392700848141975664954737679906141142109136681558694635419241480249635576154592696844169\\41442053607155074002
\,{x_{{5}}}^{97}+\\
25218654402723218219412784265723850953865720122335202548657396853917596689033709014977411645525251066039904060353323050936\\5241977745737342108
\,{x_{{5}}}^{96}+\\
12351901037265407518282511061924740081620887845329673297102975249899282902269201828534732039482980880851177263059482460946\\46808832942615408698
\,{x_{{5}}}^{95}-\\
22597244646886550293078056370393329489404383276336677456171260270575647121965608337549367316252638863990999613113150114620\\72671019300635050048
\,{x_{{5}}}^{94}+\\
28031880378023975171918667387500994519280618075336086932158347006240582466329666264537234857543990217318762234697012190043\\18019354780247752102
\,{x_{{5}}}^{93}-\\
29089255504626968924361800462538757287633097787777324974718727706835499245661123896717969138624170884825368135175763189069\\73135074449468395158
\,{x_{{5}}}^{92}+\\
26645021033609399461262534085181854856939338699023228792812734128507042303568792307871574093207225493150390219840028345114\\74969500545023873550
\,{x_{{5}}}^{91}-\\
21814922959691964094476059682501210954703338243601457652314421767688853540495125406400582874348545913416657521694300262271\\13470519156640604723
\,{x_{{5}}}^{90}+\\
15758654837135889183460680619241121345718529494988694667242400885689975561363363561580029598485923625345885310278805771313\\29351010801960397708
\,{x_{{5}}}^{89}-\\
95172661954258507553271371937762198333761721598902982633603037568503484657512654656156066079406907401567336725746575017299\\4856172321973028678
\,{x_{{5}}}^{88}+\\
39014686092257133831443531466603091263676174943552517365091898167412001742049805163875032554485238175235423078617584040535\\8712766643471427288
\,{x_{{5}}}^{87}+\\
56306353613503845332057336721374540379515945535305732546721819714852460457836450478125330158592696380190021620866439445174\\028209651652318772
\,{x_{{5}}}^{86}-\\
36374308180934624338524000268106860653494568971058062405309950289805717416519666771402797225714893990126502337369595545627\\6055560680623137904
\,{x_{{5}}}^{85}+\\
53297956771021117315491231524854588607242419943130087864632930472382619648513793937279756303165103303724757897883570829598\\7475117960798034280
\,{x_{{5}}}^{84}-\\
58297741240687620422289275345171965040842707518430485059229706416906860879716902929719624361568531406401401132718540294171\\9154090756673793056
\,{x_{{5}}}^{83}+\\
54330903210510886919361286368047497716299260441536648053969514497720268765137053322555373482108793150393869774749051953152\\0530201751173553744
\,{x_{{5}}}^{82}-\\
44704275809797281898701588342830510390573895877345201155814289370013643210189774929323922082067974186887763600200081515869\\7080190365055345344
\,{x_{{5}}}^{81}+\\
32503608535394811085953898691740466307592868141835482024433247841500930421182775408446308972220230791884217504329273145047\\1599173728260916256
\,{x_{{5}}}^{80}-\\
20215044023687390802933425302157410839300360854477633793570033856904646215630026149855169020769412283681057493550633190564\\9518775406220896384
\,{x_{{5}}}^{79}+\\
95455511760176906919642371018800777844928981892707676741336945239143854492767770189799804927761612219179909567218538004185\\721626064469287232
\,{x_{{5}}}^{78}-\\
14147289720312394799626262849538252360861445526076119383146324714652244427564337843245262917773455576022560697589042884454\\464859890845895424
\,{x_{{5}}}^{77}-\\
39302651966794739241441801902560933280468734145339788320977576917795749884335851551263189459606607301337904666767757125978\\267514864559508352
\,{x_{{5}}}^{76}+\\
67316792605993791532762566613828344487040070952746968642533721000873868547897567812986029980401576153885902870070627992555\\092522387884253696
\,{x_{{5}}}^{75}-\\
75203875030209922332892102625703996246116196358548901487920954173893395949417607502630035981178016934328943997944060685041\\395830099475730688
\,{x_{{5}}}^{74}+\\
69356238253577045740751896975855014387640153539352455575996756589040915266505150451115111665471412061460145771548352734492\\425874846575889408
\,{x_{{5}}}^{73}-\\
55882145172451240580650159385164235785228893390588449769171108831427098245073648575921095037230474829566721300916894295493\\703778969485416960
\,{x_{{5}}}^{72}+\\
39755816674798794007370663733978784382203908845747672022768045381155025481228631577698942706829288156509263124975455285819\\579519137495484416
\,{x_{{5}}}^{71}-\\
24451718575218720970468823294677958604313573707162914153481709414216683555862120054548038297516111048610601728370244621651\\212620833740092416
\,{x_{{5}}}^{70}+\\
11956562829943089240709532476926166890966612723278822428118425819851692352915774525692323779136753502937127680835171950829\\831879231882358784
\,{x_{{5}}}^{69}-\\
30226179685352039524800895955345518361428984559404018686106510870471446895431897107265209451384703040089178242959815118647\\16436962424064000
\,{x_{{5}}}^{68}-\\
24690249453372989486548988650013722604899775366506767841752505768075950190862100587797874419081111566489917193304089340530\\75924353398038528
\,{x_{{5}}}^{67}+\\
51391162600474090628179820949437302702433716303933541947640669239446112757970577632494101706386877398251723711426473015455\\92070388948152320
\,{x_{{5}}}^{66}-\\
58016933737449570249418245581618155774558431581978900974847618045478308614747170956293182772236500164744849510928662342594\\25447096176164864
\,{x_{{5}}}^{65}+\\
52470207983394212985532467799261748523998335482054751876048963426595198925301259805545847617844690549121399379171943055954\\80992515734233088
\,{x_{{5}}}^{64}-\\
41181913824363202156024495448388860536445289957372511185158547964785391835612056853597056909058460857622368249998517670677\\79205471540314112
\,{x_{{5}}}^{63}+\\
28654171224447462196366498412975425948857786241381232214366058391961459443574050054046213103309661027283066591164906676834\\58750069620195328
\,{x_{{5}}}^{62}-\\
17539935888893859800208050537393306053478743006695073079714961776584629936856395525152189863849060518976797817425307957491\\90097221369921536
\,{x_{{5}}}^{61}+\\
90170636498893406824621487169529644598554287065722061778535708629457657462303066216592842316306489740001611819155809947780\\5445026163949568
\,{x_{{5}}}^{60}-\\
32603661531740924584844086706839266469149059970378843940642961411723253467203426415115021686700741917193295500308066940884\\1522536068415488
\,{x_{{5}}}^{59}-\\
11936558749239290836832309161032373802380504842588523102621146188184590128081771504527234555171543128611069846184140813371\\994814585241600
\,{x_{{5}}}^{58}+\\
17367788399603318397345516421677058663345724218952740860889840237862455416104920898634010668268825804035383463218018712005\\1297876981579776
\,{x_{{5}}}^{57}-\\
22116933081401017809909955289146481313870821367815606692489632240257072881381344071353955714357189678764343537608818241583\\9343764015939584
\,{x_{{5}}}^{56}+\\
20537730348353175922642973998378638386027232303528240034100887500777897088339400802178768382455131879787194540315140326183\\6715143544700928
\,{x_{{5}}}^{55}-\\
16242888037740361052528503359930824371322590988352274414172280309236828717087724837144652333925765454150079341617874649791\\8662720286097408
\,{x_{{5}}}^{54}+\\
114497696417758830197884111014698275137907535546875784137114831967225750146579289875639311614494765405835224440977096348370\\018726215417856
\,{x_{{5}}}^{53}-\\
729014545543073753790652421369886470888781883047576051376117884950795563340996027921182895994576279970678981168896388322476\\21086697488384
\,{x_{{5}}}^{52}+\\
416913257353352701963521372823394455738834133092343639285187122160809371135395447116412349334667379719063547121602154863733\\77831168114688
\,{x_{{5}}}^{51}-\\
207711649274708177123401975289823487557380477178611698044681330049965675246332266068387382905655874262667655727588838695814\\06138364067840
\,{x_{{5}}}^{50}+\\
816854024833697254047187243814337994162598294423541545890172062484427295343707105226053583362081314562481035222874332853256\\1279110348800
\,{x_{{5}}}^{49}-\\
144902380859816206608390794831745077273651088901696066845363525363734998515480962817310030216670490686027871363219912680451\\9523866116096
\,{x_{{5}}}^{48}-\\
155294518679120509418701272345361442131232284506148369824023354394210186222397898249937716080135372521447976527531641259161\\0529404944384
\,{x_{{5}}}^{47}+\\
246327392876394518418207281982112962012310972779666085586573466205858045853257705081642517716622737771701145414666936967327\\0877159948288
\,{x_{{5}}}^{46}-\\
235861637461994723114858865845752407871264144004202233092449567274236237832118393570438936089077517372625163144258722869296\\4070228230144
\,{x_{{5}}}^{45}+\\
187333592910790309558745043476603327571926520280422505218976817041003091498588933060037429602469331176392729457559868920025\\3003504812032
\,{x_{{5}}}^{44}-\\
133441330636232855439063727913384242514777704174868004324740215758834128192380876932023977918212310664367343831200293350654\\8313157009408
\,{x_{{5}}}^{43}+\\
879939050015245402885487709871665292173388006220640074169754233172423958808989610875007232500580654624855569083619929423061\\464777752576
\,{x_{{5}}}^{42}-\\
545871352085453629145805969945989221505151324342931115292760961248386185520016634954290794646935576612407195912199375452072\\008476524544
\,{x_{{5}}}^{41}+\\
321539361809710217991965283618576504821314970135012052850959973697973432905823889804387799747203933936974404799154501801235\\565214957568
\,{x_{{5}}}^{40}-\\
180886319908987016805945966138642725696390791099428116350016486118298447380423873998403451084902107355251191742163898327194\\352031367168
\,{x_{{5}}}^{39}+\\
975591535677821713968581599540664091395578426591479240443201789011632520044638702182797490480132976057131516041547334603979\\19085527040
\,{x_{{5}}}^{38}-\\
505771062475735734546421248450914914769317392653165401507004897347248621285361619386861242551398569264824052603616415068700\\15194824704
\,{x_{{5}}}^{37}+\\
252491841888224928759870248139713509284565485598208706035503639179459074390425142437761999325069058755914017519845822237389\\30798264320
\,{x_{{5}}}^{36}-\\
121531555111204348187289925549538244593267043921539071732513184654057994565662200881721531737013789979276751326227796956990\\78523387904
\,{x_{{5}}}^{35}+\\
564472593274100680600148912652239028720020979640638052759402890006138135299939335815204919186461581503379024157095331210078\\8278919168
\,{x_{{5}}}^{34}-\\
253126487273471433364948693115559920574526815925215831578482781218900380864004328804719564096780464499776632622378966261176\\4875886592
\,{x_{{5}}}^{33}+\\
109621314158756279378197806414111283700938814106296707538719647548646472860482993785928951975224278513883142785815248134972\\9463042048
\,{x_{{5}}}^{32}-\\
4585096771161110970920487751193294646122645548961171185426818258611867615855062369619815031522862463686011697365569566376356\\85646336
\,{x_{{5}}}^{31}+\\
185206322387095209162134553638724181255063099693953347166513428516243205486880448492237894644849278724477797771619439736452\\403953664
\,{x_{{5}}}^{30}-\\
722282917313616580717016568145931274352825287602271305037529260205924907179318739751751977755696269612161967084692046690788\\82484224
\,{x_{{5}}}^{29}+\\
271851440255818060509335490529765189991654424752954306581208549856161217808095244129356303624892099807183160403117357912246\\90311168
\,{x_{{5}}}^{28}-\\
986957294587225067939468327940238019061084871030747194704248509002668589811357593311246913911366435897988275414621217940493\\3562368
\,{x_{{5}}}^{27}+\\
345397762963202387374064203242664310678712047222430316372625747429612451883616916424192412939624916317705178923902677734880\\4444160
\,{x_{{5}}}^{26}-\\
116425760423306465214911023686588832585931810791078378406770158253540037018508994444858400360221965940799287682361320095740\\4348416
\,{x_{{5}}}^{25}+\\
377642767236105053773739732119515032245566246383740440081945941862204403899387217417962246137784628667728143242559640823257\\890816
\,{x_{{5}}}^{24}-\\
117746194354247061813124832871526951862062016052302572623527888730558772370864326119937274147018027728377536767742531764934\\934528
\,{x_{{5}}}^{23}+\\
352459870759017596756740697434679649855872761844905021374167118499217354362152667054855737000124806799768236468068217344218\\89024
\,{x_{{5}}}^{22}-\\
101148462211640964398258040702198264139855120333737285521450150284231522877704654697828850473172705829129258172716088307335\\82336
\,{x_{{5}}}^{21}+\\
277845776669249670026113592719534200179051070301050586342016529351199814983917459135671441046941536923534166276679528022540\\2880
\,{x_{{5}}}^{20}-\\
729222059942677239194861472602631970992826200048350708453908761070046160711682680923725375727555598800883196185895432265662\\464
\,{x_{{5}}}^{19}+\\
182490369232807069021403284819016676754281859773867649736818460685067399592662641294817600630086707838402766296441916835758\\080
\,{x_{{5}}}^{18}-\\
4344477744175562527135371694495782756156585591710108753504753574327396540926952554076344700938452533754073417062071003198259\\2
\,{x_{{5}}}^{17}+\\
9813124877113373778497345205458210343959807987865576555746326509951297134872890751384435472868596239054924919397657763381248\\
\,{x_{{5}}}^{16}-\\
2096721881206838040061523447640470792672023740117968464808253995606992645517606929528608398302703928594606307022193706401792\\
\,{x_{{5}}}^{15}+\\
422314275671514886321123779568396098193816231242663172075394857732652916567647564791990662433359094935353521403228822962176\\
\,{x_{{5}}}^{14}-\\
79864922656123001672373003402242843914722460212594718228143218584471442458879920199517152848106944554516039278408268513280\\
\,{x_{{5}}}^{13}+\\
14114974194660343006125317651559136005123731524715696467962306387120914690527501028427382041548468495779517799388749496320\\
\,{x_{{5}}}^{12}-\\
2318631152616942863910854119763409708464510724161955816200337405320112889645914590041456718031593814708975738270882201600\\
\,{x_{{5}}}^{11}+\\
351711906340592001610413792726774343523694998021397689259098322027975998519200903802531516313342353703169108580342169600\\
\,{x_{{5}}}^{10}-\\
48881654482529098774644700465700353976861287780733065236104378937903373457008646072769666061615264294470604491325440000\\
\,{x_{{5}}}^{9}+\\
6165199924406916311726448672009618574787780580033514548805089543085277996422090595590674195421977333160669442211840000\\
\,{x_{{5}}}^{8}-\\
697265508611783523063822245221103038364102732880164181773707035276354526553076359731686155685276694260890153779200000\\
\,{x_{{5}}}^{7}+\\
69639850156490923537983768960854449000882112021803741111317445152925716960736160612227466714320924661507948544000000\\
\,{x_{{5}}}^{6}-\\
6019516033064772902099011570341461065066869308802412228392754929923286004456239967801891827762038101078179840000000\\
\,{x_{{5}}}^{5}+\\
437977099237740958620049330989996973827136218815395628707280026388262269517157957648785965624334795512217600000000\\
\,{x_{{5}}}^{4}-\\
25760391661695283164506462715706115713693855148545427268637078512865404994468648889469516554411358289920000000000\\
\,{x_{{5}}}^{3}+\\
1148563712556502089867697244210114379584113909091646727528636419580863895074813100054898930060623872000000000000\\
\,{x_{{5}}}^{2}-\\
34512293527648499324118594567227987096710626145681634865537971099433193107503302883739921127833600000000000000\\
\,x_{{5}}+\\
524157766562226639572627086349581115653126065201703317806174894734427971871427112480895139840000000000000000$
\normalsize
\end{document} |
\begin{document}
\title[Asymptotic Log-Harnack Inequality for Monotone SPDE]
{Asymptotic Log-Harnack Inequality for Monotone SPDE with Multiplicative Noise}
\author{Zhihui Liu}
\address{Department of Mathematics,
The Hong Kong University of Science and Technology, Kowloon, Hong Kong}
\curraddr{}
\email{[email protected] and [email protected]}
\subjclass[2010]{Primary 60H15; 60H10, 37H05}
\keywords{Asymptotic log-Harnack inequality,
monotone stochastic partial differential equations,
asymptotic strong Feller,
asymptotic irreducibility}
\date{\today}
\dedicatory{}
\begin{abstract}
We derive an asymptotic log-Harnack inequality for nonlinear monotone SPDE driven by possibly degenerate multiplicative noise.
Our main tool is asymptotic coupling by change of measure.
As an application, we show that, under certain monotone and coercive conditions on the coefficients, the corresponding Markov semigroup is asymptotically strong Feller and asymptotically irreducible, and possesses a unique, and thus ergodic, invariant measure.
The results are applied to highly degenerate finite-dimensional or infinite-dimensional diffusion processes.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec1}
Dimension-free Harnack-type inequalities have been very efficient tools for studying diffusion semigroups in recent years.
The dimension-free power-Harnack inequality and log-Harnack inequality were introduced in \cite{Wan97(PTRF)} for elliptic diffusion semigroups on noncompact Riemannian manifolds and in \cite{Wan10(JMPA)} for heat semigroups on manifolds with boundary.
Both inequalities have been investigated extensively and applied to SDEs and SPDEs via coupling by the change of measures, see, e.g., \cite{ATW09(SPA), Liu09(JEE), MRW11(PA), Wan07(AOP), Wan11(AOP), Wan17(JFA), WZ14(SPA), WZ15(PA), Zha10(PA)}, the monograph \cite{Wan13}, and references therein.
In particular, these inequalities imply gradient estimates and thus the strong Feller property, irreducibility, and the uniqueness of invariant probability measures of the associated Markov semigroups.
When the stochastic system is degenerate, these properties may fail, and the power- and log-Harnack inequalities cannot hold.
In this case, it is natural to investigate weaker versions of these properties by exploiting Harnack inequalities in a weak form.
For instance, the strong Feller property fails for stochastic 2D Navier--Stokes equations driven by degenerate additive noises, whereas a weaker version, the asymptotically strong Feller property, was proved in \cite{HM06(ANN)} and \cite{Xu11(JEE)} by making use of asymptotic couplings and a modified log-Harnack inequality, respectively.
This type of log-Harnack inequality in the weak version is of the form
\begin{align} \label{alh}
P_t \log f(x) & \le \log P_t f(y)+\Phi(x, y)
+ \Psi_t(x, y) \|\nabla \log f\|_\infty,
\end{align}
for any $t>0$, $x,y$ in the underlying Hilbert space $H$, and $f \in \mathcal B^+_b(H)$ with $\|\nabla \log f\|_\infty<\infty$, where $\Phi$ and $\Psi_t : H \times H \rightarrow (0, \infty)$ are measurable with $\Psi_t \downarrow 0$ as $t \uparrow \infty$.
It is now called an asymptotic log-Harnack inequality in \cite{BWY19(SPA)}, where the authors proved that, provided \eqref{alh} holds, the Markov semigroup $P_t$ is asymptotically strong Feller and asymptotically irreducible, and possesses at most one invariant probability measure.
They also gave some applications to stochastic systems with infinite memory driven by non-degenerate noises.
Recently, the inequality \eqref{alh} was also studied in
\cite{LLX19(IDA), HLL20(JEE), HLL20(SPL)} for the stochastic 3D fractional Leray-$\alpha$ model, semilinear SPDEs with monotone coefficients, and stochastic 2D hydrodynamical systems, respectively, all with degenerate noises.
To the best of our knowledge, there are few results in the literature concerning Harnack inequalities for nonlinear monotone SPDEs driven by possibly degenerate multiplicative noise.
We are only aware of \cite{HLL20(SPL)}, which considered a semilinear monotone SPDE driven by degenerate multiplicative noise, using the coupling method and the strong dissipativity of the unbounded linear operator.
This is the main motivation of the present study.
We also note that Harnack inequalities for nonlinear monotone SPDEs, including stochastic $p$-Laplacian equation and stochastic generalized porous media equation, both driven by non-degenerate additive noise were obtained in \cite{Liu09(JEE), LW08(JMAA), Wan07(AOP)}.
We focus on the possibly fully nonlinear SPDE \eqref{eq-z} under monotone assumptions on the coefficients; see Section \ref{sec2} for more details.
Inspired by the coupling method developed
in \cite{BKS20(AAP), BWY19(SPA), Hai02(PTRF), HMS11(PTRF), KS18(PTRF), Oda08(PTRF)}, we prove the asymptotic log-Harnack inequality \eqref{alh} and apply it to derive asymptotic properties of the corresponding Markov semigroup.
Several finite-dimensional and infinite-dimensional SDEs driven by highly degenerate noise are given to illustrate our main results.
In this setup, the strong Feller property is generally invalid, so we are in a weak situation where the standard Wang-type log-Harnack inequalities are unavailable.
The rest of the paper is organized as follows.
In Section \ref{sec2}, we give some preliminaries for the considered Eq. \eqref{eq-z}.
We present and prove the first main result, the desired asymptotic log-Harnack inequality \eqref{har} in Theorem \ref{tm-har}, for Eq. \eqref{eq-z} with non-degenerate noise in Section \ref{sec3}.
The idea is then extended, in Section \ref{sec4}, to the highly degenerate noise case (see Eq. \eqref{eq-xy}), where we obtain another main result, the asymptotic log-Harnack inequality \eqref{har+} in Theorem \ref{tm-har+}.
Finally, in the last section, we give some concrete examples either of SODEs or SPDEs.
\section{Preliminaries}
\label{sec2}
Let $V \subset H=H^* \subset V^*$ be a Gelfand triple, i.e., $(H, (\cdot, \cdot), \|\cdot\|)$ is a separable Hilbert space identified with its dual space $H^*$ via the Riesz isomorphism, $V$ is a reflexive Banach space continuously and densely embedded into $H$, and $V^*$ is the dual space of $V$ with respect to $H$.
Denote by $\langle\cdot, \cdot\rangle$ the dualization between $V$ and $V^*$; it then follows that $\langle u, v\rangle=(u, v)$ for any $u \in V$ and $v \in H$.
Denote by $\mathcal B_b(H)$ the class of bounded measurable functions on $H$ and $\mathcal B^+_b(H)$ the set of positive functions in $\mathcal B_b(H)$.
For a function $f \in \mathcal B_b(H)$, define
\begin{align*}
|\nabla f|(x)=\limsup_{y \rightarrow x} \frac{|f(x)-f(y)|}{\|x-y\|},
\quad x \in H.
\end{align*}
Denote by $\|\cdot\|_\infty$ the uniform norm:
$\|\nabla f\|_\infty=\sup_{x \in H} |\nabla f|(x)$ and define
${\rm Lip}(H)=\{ f : H \rightarrow \mathbb R, \ \|\nabla f\|_\infty<\infty\}$, the family of all Lipschitz functions on $H$.
Set ${\rm Lip}_b(H):={\rm Lip}(H) \cap \mathcal B_b(H)$.
Let $U$ be another separable Hilbert space and
$(\mathcal L_2(U, H), \|\cdot\|_{\mathcal L_2})$ be the space consisting of all Hilbert--Schmidt operators from $U$ to $H$.
Let $(W_t)_{t \ge 0}$ be a $U$-valued cylindrical Wiener process with respect to a complete filtered probability space $(\Omega, \mathscr F, (\mathscr F_t)_{t \ge 0},\mathbb P)$,
i.e., there exists an orthonormal basis $\{e_n\}_{n=1}^\infty$ of $U$ and a family of independent 1D Brownian motions $\{\beta_n\}_{n=1}^\infty $ such that
\begin{align*}
W_t=\sum_{n=1}^\infty e_n\beta_n(t),\quad t \ge 0.
\end{align*}
Let us consider the stochastic equation
\begin{align}\label{eq-z}
{\rm d}Z_t=b(Z_t) {\rm d}t+\sigma (Z_t) {\rm d}W_t,
\quad Z_0=z \in H,
\end{align}
where $b: V \rightarrow V^*$, $\sigma: V \rightarrow \mathcal L_2(U; H)$ are measurable and satisfy the following conditions.
\begin{ap} \label{ap}
There exist constants $\alpha>1$, $\eta \in \mathbb R$, and $C_i>0$ with $i=1,2,3,4$ such that for all $u, v, w \in V$,
\begin{align}
& \mathbb R \ni c \mapsto \langle b(u+c v), w\rangle
\quad \text{is continuous}, \label{ap-con} \\
& 2 \langle b(u)-b(v), u-v\rangle
+\|\sigma(u)-\sigma(v)\|^2_{\mathcal L_2} \le \eta \|u-v\|^2, \label{ap-mon} \\
& 2 \langle b(w), w\rangle
+\|\sigma(w)\|^2_{\mathcal L_2}
\le C_1+\eta \|w\|^2-C_2 \|w\|_V^\alpha, \label{ap-coe} \\
& \|b(w)\|_{V^*} \le C_3+C_4\|w\|_V^{\alpha-1}. \label{ap-gro}
\end{align}
\end{ap}
To derive an asymptotic log-Harnack inequality for Eq. \eqref{eq-z}, we need the following standard non-degenerate condition.
\begin{ap}\label{ap-ell}
$\sigma: H \rightarrow \mathcal L_2(U; H)$ is bounded
and invertible with bounded right pseudo-inverse $\sigma^{-1}: H \rightarrow \mathcal L(H; U)$, i.e., $\sigma(z)\sigma^{-1}(z)={\rm Id}_H$ (the identity operator on $H$) for all $z \in H$, with $\|\sigma^{-1}\|_\infty:=\sup_{z \in H} \|\sigma^{-1}(z)\|_{\mathcal L(H; U)}<\infty$.
\end{ap}
Under the above monotone and coercive conditions, one has the following known well-posedness result of Eq. \eqref{eq-z} and the Markov property of the solution, see, e.g., \cite[Theorems II.2.1, II.2.2]{KR79} and \cite[Theorem 4.2.4, Proposition 4.3.5]{LR15}.
\begin{lm} \label{lm-well}
Let $T>0$ and Assumption \ref{ap} hold.
For any $\mathscr F_0$-measurable $z \in L^2(\Omega, H)$, Eq. \eqref{eq-z} with initial datum $Z_0=z$ admits a unique solution $\{Z_t:\ t\in [0,T]\}$ in $L^2(\Omega; \mathcal C([0,T]; H)) \cap L^\alpha(\Omega \times (0, T); V)$, which is a Markov process such that
\begin{align}
(Z_t, v)=(z, v)+\int_0^t \langle b(Z_r), v\rangle {\rm d}r+\int_0^t (v, \sigma (Z_r) {\rm d}W_r)
\end{align}
holds a.s. for all $v \in V$ and $t \in [0, T]$.
\end{lm}
Denote by $(P_t)_{t \ge 0}$ the corresponding Markov semigroup, i.e.,
\begin{align} \label{pt}
P_t f(z)=\mathbb E [ f(Z^z_t) ], \quad
t \ge 0, \ z \in H, \ f \in \mathcal B_b(H).
\end{align}
\section{Asymptotic Log-Harnack Inequality}
\label{sec3}
Our main aim in this section is to derive an asymptotic log-Harnack inequality under Assumptions \ref{ap} and \ref{ap-ell}, and then use it to derive several asymptotic properties for $P_t$ defined in \eqref{pt}.
\subsection{Asymptotic Log-Harnack Inequality}
We first give the construction of an asymptotic coupling by the change of measure.
Let $\lambda>\eta/2$ be a constant, where $\eta$ is the constant appearing in \eqref{ap-mon} and \eqref{ap-coe}.
Consider
\begin{align} \label{eq-cou}
{\rm d} \bar Z_t &=\big(b(\bar Z_t)
+\lambda \sigma(\bar Z_t) \sigma^{-1}(Z_t) (Z_t-\bar Z_t) \big) {\rm d}t
+\sigma(\bar Z_t) {\rm d}W_t,
\end{align}
with initial datum $\bar Z_0=\bar z \in H$.
Under Assumptions \ref{ap}-\ref{ap-ell}, it is not difficult to check that the additional drift term $\lambda \sigma(\bar Z_t) \sigma^{-1}(Z_t) (Z_t-\bar Z_t)$ satisfies the hemicontinuity, local monotonicity, coercivity, and growth conditions in \cite{LR10(JFA)}; thus the asymptotic coupling between $Z$ and $\bar Z$ is well-defined.
Our first aim is to show that $\bar Z_t$ has the same transition semigroup $P_t$ under another probability measure.
To this end, we set
\begin{align} \label{v}
v_t:=\lambda \sigma^{-1}(Z_t) (Z_t-\bar Z_t), \quad
\widetilde{W}_t:=W_t +\int_0^t v_r {\rm d}r,
\end{align}
and define
\begin{align} \label{R}
R(t): & =\exp\Big( -\int_0^t \langle v_r, {\rm d}W_r\rangle_U
-\frac12 \int_0^t \|v_r\|^2_U {\rm d}r\Big), \quad t \ge 0.
\end{align}
We first check that $R$ defined by \eqref{R} is a uniformly integrable martingale such that the estimate \eqref{est-R} holds.
\begin{lm} \label{lm-R}
Under Assumption \ref{ap}, with the constant $\gamma:=2 \lambda-\eta>0$, for any $T>0$,
\begin{align} \label{est-R}
\sup_{t \in [0, T]} \mathbb E[R(t) \log R(t)]
\le \frac{\lambda^2 \|\sigma^{-1} \|_\infty^2}{2 \gamma} \|z-\bar z\|^2.
\end{align}
Consequently, there exists a unique probability measure $\mathbb Q$ on
$(\Omega, \mathscr F_\infty)$ such that
\begin{align} \label{Q}
\frac{{\rm d}\mathbb Q|\mathscr F_t}{{\rm d} \mathbb P|\mathscr F_t}=R(t), \quad t \ge 0.
\end{align}
Moreover, $(\widetilde W_t)_{t \ge 0}$ is a $U$-valued cylindrical Wiener process under $\mathbb Q$.
\end{lm}
\begin{proof}
Let $T>0$ be fixed.
For any $n$ with $\|z\|<n$, define the stopping time
\begin{align*}
\tau_n=\inf \{ t \ge 0: \|Z_t\| \ge n\}.
\end{align*}
Due to the non-explosion of Eq. \eqref{eq-z} and Eq. \eqref{eq-cou}, it is clear that $\tau_n \uparrow \infty$ as $n \uparrow \infty$ and $(Z_t)_{t \in [0, T \wedge \tau_n]}$ and $(\bar Z_t)_{t \in [0, T \wedge \tau_n]}$ are both bounded.
It follows from Assumption \ref{ap-ell} that the Novikov condition holds on $[0, T \wedge \tau_n]$, i.e.,
\begin{align*}
\mathbb E \exp \Big(\frac12 \int_0^{T \wedge \tau_n} \|v_t\|^2_U {\rm d}t \Big)
<\infty.
\end{align*}
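For the reader's convenience, here is a sketch of why the Novikov condition holds on the stopped interval; the constant $K_n$, a bound on $\|Z_t\|$ and $\|\bar Z_t\|$ over $[0, T \wedge \tau_n]$, is available by the non-explosion argument above.

```latex
\begin{align*}
\|v_t\|_U
&=\lambda \|\sigma^{-1}(Z_t) (Z_t-\bar Z_t)\|_U
\le \lambda \|\sigma^{-1}\|_\infty \|Z_t-\bar Z_t\|
\le 2 \lambda \|\sigma^{-1}\|_\infty K_n,
\quad t \le T \wedge \tau_n, \\
\mathbb E \exp \Big(\frac12 \int_0^{T \wedge \tau_n} \|v_t\|^2_U {\rm d}t \Big)
&\le \exp \big( 2 T \lambda^2 \|\sigma^{-1}\|_\infty^2 K_n^2 \big)<\infty.
\end{align*}
```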
According to the Girsanov theorem, $({\widetilde W}_t)_{t \in [0,T \wedge \tau_n]}$ is a $U$-valued cylindrical Wiener process under the probability measure ${\mathbb Q}_{T,n} := R(T \wedge \tau_n) \mathbb P$.
By the construction \eqref{v}, we can rewrite Eq. \eqref{eq-z} and Eq. \eqref{eq-cou} on $[0, T \wedge \tau_n]$ as
\begin{align} \label{eq-cou-rew}
\begin{split}
{\rm d} Z_t &=(b(Z_t)-\lambda (Z_t-\bar Z_t) ) {\rm d}t
+\sigma(Z_t) {\rm d}{\widetilde W}_t, \quad t \le T \wedge \tau_n, \\
{\rm d} \bar Z_t &=b(\bar Z_t) {\rm d}t +\sigma(\bar Z_t) {\rm d}{\widetilde W}_t, \quad t \le T \wedge \tau_n,
\end{split}
\end{align}
with initial values $Z_0=z$ and $\bar Z_0=\bar z$, respectively.
Applying the It\^o formula on $[0, T \wedge \tau_n]$ under the probability measure $\mathbb Q_{T,n}$, we have
\begin{align*}
{\rm d}\|Z_t-\bar Z_t\|^2
&=2 \langle Z_t-\bar Z_t, (\sigma(Z_t)-\sigma(\bar Z_t) ) {\rm d}{\widetilde W}_t\rangle \\
& \quad +\big[\|\sigma(Z_t)-\sigma(\bar Z_t)\|^2_{\mathcal L_2}
+2\langle Z_t-\bar Z_t, b(Z_t)-b(\bar Z_t)
-\lambda (Z_t-\bar Z_t) \rangle\big] {\rm d}t.
\end{align*}
Taking the expectation $\mathbb E_{\mathbb Q_{T,n}}$ of the above equation and using the fact that $({\widetilde W}_t)_{t \in [0,T \wedge \tau_n]}$ is a cylindrical Wiener process under ${\mathbb Q}_{T,n}$, we obtain
\begin{align*}
\mathbb E_{\mathbb Q_{T,n}} \|Z_t-\bar Z_t\|^2
\le \|z-\bar z\|^2 -(2 \lambda-\eta) \int_0^t \mathbb E_{\mathbb Q_{T,n}} \|Z_r-\bar Z_r\|^2 {\rm d}r,
\end{align*}
where we have used \eqref{ap-mon}.
The Gr\"onwall inequality leads to
\begin{align}
\mathbb E_{\mathbb Q_{T,n}} \|Z_t-\bar Z_t\|^2
\le e^{-\gamma t} \|z-\bar z\|^2, \quad 0 \le t \le T \wedge \tau_n,
\end{align}
with $\gamma=2 \lambda-\eta$.
It follows from the above estimate, Assumption \ref{ap-ell}, and the Fubini theorem that
\begin{align} \label{est-Rn}
& \sup_{t \in [0, T], n \ge 0} \mathbb E[R(t \wedge \tau_n) \log R(t \wedge \tau_n)] \nonumber \\
& =\sup_{t \in [0, T], n \ge 0} \mathbb E_{\mathbb Q_{T,n}} [\log R(t \wedge \tau_n)] \nonumber \\
& =\frac12 \sup_{n \ge 0} \mathbb E_{\mathbb Q_{T,n}} \int_0^{T \wedge \tau_n} \|v_r\|^2_U {\rm d}r
\le \frac{\lambda^2 \|\sigma^{-1} \|_\infty^2}{2 \gamma} \|z-\bar z\|^2.
\end{align}
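The final inequality in the display above can be seen as follows (a short computation using the exponential decay estimate and $\|v_r\|_U \le \lambda \|\sigma^{-1}\|_\infty \|Z_r-\bar Z_r\|$):

```latex
\begin{align*}
\frac12 \mathbb E_{\mathbb Q_{T,n}} \int_0^{T \wedge \tau_n} \|v_r\|^2_U {\rm d}r
\le \frac{\lambda^2 \|\sigma^{-1}\|_\infty^2}{2}
\int_0^\infty e^{-\gamma r} {\rm d}r \, \|z-\bar z\|^2
=\frac{\lambda^2 \|\sigma^{-1}\|_\infty^2}{2 \gamma} \|z-\bar z\|^2.
\end{align*}
```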
Let $0 \le s<t \le T$.
Using the dominated convergence theorem and the martingale property of
$(R(t \wedge \tau_n))_{t \in [0, T]}$, we have
\begin{align*}
\mathbb E[R(t)|\mathscr F_s]
=\mathbb E[\lim_{n \rightarrow \infty} R(t \wedge \tau_n)|\mathscr F_s]
=R(s).
\end{align*}
This shows that $(R(t))_{t \in [0, T]}$ is a martingale and thus
${\mathbb Q}_T(A)={\mathbb Q}_{T,n}(A)$ for all $A \in \mathscr F_{T \wedge \tau_n}$, where ${\mathbb Q}_T := R(T) \mathbb P$.
By the Girsanov theorem, for any $T > 0$, $(\widetilde W_t)_{t \in [0, T]}$ is a cylindrical Wiener process under the probability measure ${\mathbb Q}_T $.
By the Fatou lemma,
\begin{align*}
& \liminf_{n \rightarrow \infty} \mathbb E_{\mathbb Q_{T,n}} [\log R(t \wedge \tau_n)] \\
& =\frac12 \liminf_{n \rightarrow \infty} \mathbb E_{\mathbb Q_T}\int_0^{t \wedge \tau_n} \|v_r\|^2_U {\rm d}r
\ge \frac12 \mathbb E_{\mathbb Q_T} \int_0^t \|v_r\|^2_U {\rm d}r,
\end{align*}
and thus we get
\begin{align*}
& \sup_{t \in [0, T]} \mathbb E[R(t) \log R(t)]
=\sup_{t \in [0, T]} \mathbb E_{\mathbb Q_T} [\log R(t)] \\
& = \frac12 \mathbb E_{\mathbb Q_T} \int_0^T \|v_r\|^2_U {\rm d}r
\le \liminf_{n \rightarrow \infty} \mathbb E_{\mathbb Q_{T,n}} [\log R(t \wedge \tau_n)].
\end{align*}
Then \eqref{est-R} follows from the estimate \eqref{est-Rn}.
Finally, by the martingale property of $R$, the family $({\mathbb Q}_T)_{T>0}$ is consistent.
By the Kolmogorov extension theorem, there exists a unique probability measure ${\mathbb Q}$ on
$(\Omega, \mathscr F_\infty)$ such that \eqref{Q} holds, and thus $(\widetilde W_t)_{t \ge 0}$ is a $U$-valued cylindrical Wiener process under $\mathbb Q$.
\end{proof}
Next, we show that $\|Z^z_t-\bar Z^{\bar z}_t\|$ decays exponentially fast as $t \rightarrow \infty$ in the $L^2(\Omega, \mathbb Q; H)$-norm sense, where $Z_\cdot^z$ and $\bar Z^{\bar z}_\cdot$ denote the solutions of Eq. \eqref{eq-z} with $Z_0=z$ and Eq. \eqref{eq-cou} with $\bar Z_0=\bar z$, respectively.
\begin{cor} \label{cor-x-y}
Under Assumption \ref{ap},
\begin{align} \label{x-y}
\mathbb E_{\mathbb Q} \|Z^z_t-\bar Z^{\bar z}_t\|^2
\le e^{-\gamma t} \|z-\bar z\|^2,
\quad t \ge 0.
\end{align}
\end{cor}
\begin{proof}
Let $t \ge 0$ be fixed.
Similarly to the proof of Lemma \ref{lm-R},
applying the It\^o formula under the probability measure $\mathbb Q$ and using the condition \eqref{ap-mon}, we obtain
\begin{align*}
\mathbb E_{\mathbb Q} \|Z^z_t-\bar Z^{\bar z}_t\|^2
\le \|z-\bar z\|^2 - \gamma \int_0^t \mathbb E_{\mathbb Q} \|Z^z_r-\bar Z^{\bar z}_r\|^2 {\rm d}r,
\end{align*}
with $\gamma=2 \lambda-\eta$, from which we conclude \eqref{x-y} by Gr\"onwall lemma.
\end{proof}
From Lemma \ref{lm-R}, it is clear that $R$ can also be rewritten as
\begin{align} \label{R-rew}
R(t)=\exp\Big( -\int_0^t \langle v_r, {\rm d}\widetilde{W}_r\rangle_U
+\frac12 \int_0^t \|v_r\|^2_U {\rm d}r\Big), \quad t \ge 0,
\end{align}
and is a uniformly integrable martingale under the probability measure $\mathbb Q$.
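Indeed, the representation above follows by substituting ${\rm d}W_r={\rm d}\widetilde{W}_r-v_r\,{\rm d}r$ from \eqref{v} into the exponent of \eqref{R}:

```latex
\begin{align*}
-\int_0^t \langle v_r, {\rm d}W_r\rangle_U
-\frac12 \int_0^t \|v_r\|^2_U {\rm d}r
&=-\int_0^t \langle v_r, {\rm d}\widetilde{W}_r\rangle_U
+\int_0^t \|v_r\|^2_U {\rm d}r
-\frac12 \int_0^t \|v_r\|^2_U {\rm d}r \\
&=-\int_0^t \langle v_r, {\rm d}\widetilde{W}_r\rangle_U
+\frac12 \int_0^t \|v_r\|^2_U {\rm d}r.
\end{align*}
```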
Moreover, by the well-posedness of the asymptotic coupling \eqref{eq-cou}, or equivalently, of Eq. \eqref{eq-cou-rew}, $\bar Z_t$ has the same transition semigroup $P_t$ under the probability measure $\mathbb Q$ defined by \eqref{Q}:
\begin{align} \label{pt-y}
P_t f(\bar z)=\mathbb E_{\mathbb Q} [f(\bar Z_t^{\bar z})],\quad
t \ge 0, \ \bar z \in H, \ f \in \mathcal B_b(H).
\end{align}
Now we can prove the asymptotic log-Harnack inequality \eqref{har} for Eq. \eqref{eq-z}.
Combined with \cite[Theorem 2.1]{BWY19(SPA)}, it implies the asymptotic properties listed in the following theorem.
\begin{tm} \label{tm-har}
Let Assumptions \ref{ap} and \ref{ap-ell} hold.
For any $t \ge 0$, $z, \bar z \in H$, and $f \in \mathcal B^+_b(H)$ with
$\|\nabla \log f\|_\infty<\infty$,
\begin{align}\label{har}
P_t \log f(z) & \le \log P_t f(\bar z)
+\frac{\lambda^2 \|\sigma^{-1} \|_\infty^2}{2 \gamma} \|z-\bar z\|^2 \nonumber \\
& \quad + e^{-\frac{\gamma t}2}\|\nabla \log f\|_\infty \|z-\bar z\|.
\end{align}
Consequently, the following asymptotic properties hold.
\begin{enumerate}
\item (Gradient estimate)
For any $t>0$ and $f \in {\rm Lip}_b(H)$, there exists a constant $C=\gamma^{-\frac12}\lambda \|\sigma^{-1}\|_\infty$ such that
\begin{align}\label{est-gra}
|\nabla P_t f| \le C \sqrt{P_t f^2-(P_t f)^2}
+ e^{-\frac{\gamma t}2}\|\nabla f\|_\infty.
\end{align}
Consequently, $P_t$ is asymptotically strong Feller.
\item (Asymptotic irreducibility)
Let $z \in H$ and $B \subset H$ be a measurable set such that
$\delta(z, B):=\liminf_{t \rightarrow \infty} P_t(z, B)>0$.
Then $\liminf_{t \rightarrow \infty} P_t(w, B_\epsilon)>0$ for all $w \in H$ and $\epsilon>0$,
where $B_\epsilon:=\{v \in H: \inf_{u \in B} \|v-u\|<\epsilon\}$.
Moreover, for any $\epsilon_0 \in (0, \delta(z, B))$, there exists a constant $t_0>0$ such that
$P_t(w, B_\epsilon)>0$ for all $t \ge t_0$ and $w \in H$ with
$e^{-\gamma t/2} \|z-w\|< \epsilon \epsilon_0$.
\item (Ergodicity)
If $\eta<0$,
then there exists a unique and thus ergodic invariant probability measure $\mu$ for $P_t$.
\iffalse
which exponentially converges to equilibrium with respect to $z$ in balls in $H$,
i.e.,
\begin{align}\label{est-gra}
\|P_t f(z)-\mu(f)\| \le \|f\|_{\rm Lip} e^{-\frac\eta 2t} (C+\|z\|)
\end{align}
holds for all $t>0$, $z \in H$, $f \in {\rm Lip}(H)$.
\fi
\end{enumerate}
\end{tm}
\begin{proof}
It follows from \eqref{pt-y} and \eqref{Q} that
\begin{align*}
& P_t \log f(\bar z)
=\mathbb E_{\mathbb Q} [\log f(\bar Z^{\bar z}_t)] \\
& =\mathbb E_{\mathbb Q} [\log f(Z^z_t)]
+\mathbb E_{\mathbb Q} [\log f(\bar Z^{\bar z}_t)-\log f(Z^z_t)] \\
& = \mathbb E [R(t) \log f(Z^z_t)]
+\mathbb E_{\mathbb Q} [\log f(\bar Z^{\bar z}_t)-\log f(Z^z_t)].
\end{align*}
Using the Young inequality in \cite[Lemma 2.4]{ATW09(SPA)}, we get
\begin{align*}
P_t \log f(\bar z)
& \le \log P_t f(z)+\mathbb E [R(t) \log R(t)]
+ \|\nabla \log f\|_\infty \mathbb E_{\mathbb Q}
\|\bar Z^{\bar z}_t-Z^z_t\|.
\end{align*}
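For completeness, the Young inequality from \cite[Lemma 2.4]{ATW09(SPA)} used here states that, for a nonnegative random variable $R$ with $\mathbb E R=1$ and a random variable $g$ with $\mathbb E e^{g}<\infty$,

```latex
\begin{align*}
\mathbb E [R g] \le \log \mathbb E [e^{g}]+\mathbb E [R \log R];
\end{align*}
```

it is applied with $R=R(t)$ and $g=\log f(Z^z_t)$, which bounds the first term by $\log P_t f(z)+\mathbb E[R(t) \log R(t)]$.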
Taking into account \eqref{est-R} and \eqref{x-y}, and noting that $z, \bar z \in H$ are arbitrary, we obtain \eqref{har}.
The gradient estimate \eqref{est-gra} and the asymptotic irreducibility follow from \cite[Theorem 2.1 (1) and (4)]{BWY19(SPA)}, respectively.
The asymptotically strong Feller property of $P_t$ is a direct consequence of the gradient estimate \eqref{est-gra} and \cite[Proposition 3.12]{HM06(ANN)}.
Then $P_t$ possesses at most one invariant measure.
To show the ergodicity, it suffices to show the existence of an invariant measure, which is proved in \cite[Theorem 4.3.9]{LR15}, so we complete the proof.
\end{proof}
\section{Applications to Degenerate Diffusions}
\label{sec4}
In this section, our main aim is to generalize the idea and results of Section \ref{sec3} to degenerate diffusions.
Let $(H_i, (\cdot, \cdot)_i, \|\cdot\|_i)$, $i=1,2$, be two separable Hilbert spaces with two Gelfand triples $V_i \subset H_i=H_i^* \subset V_i^*$, $i=1,2$.
Define $H:=H_1 \times H_2$ with inner product
\begin{align} \label{pro-H}
\Big( \Big(\begin{array}{c} x \\ y \end{array} \Big),
\Big(\begin{array}{c} \bar x \\ \bar y \end{array} \Big) \Big)
:=(x, \bar x)_1+(y, \bar y)_2,
\quad (x, y), (\bar x, \bar y) \in H_1 \times H_2,
\end{align}
and norm
\begin{align} \label{norm-H}
\Big\|\left(\begin{array}{c} x \\ y \end{array}\right) \Big\|
:=\sqrt{\|x\|_1^2+\|y\|_2^2}, \quad (x, y) \in H_1 \times H_2.
\end{align}
Then $(H, (\cdot, \cdot), \|\cdot\|)$ is a separable Hilbert space.
Our main concern in this section is to consider
\begin{align}\label{eq-xy}
\begin{split}
dX_t &=b_1(X_t, Y_t) {\rm d}t+\sigma_1 {\rm d}W_t, \quad X_0=x \in H_1, \\
dY_t &=b_2(X_t, Y_t) {\rm d}t+\sigma_2 (X_t, Y_t) {\rm d}W_t,
\quad Y_0=y \in H_2,
\end{split}
\end{align}
where $b_i: V_1 \times V_2 \rightarrow V^*_i$, $i=1,2$, $\sigma_1 \in \mathcal L_2(U; H_1)$, and $\sigma_2: V_1 \times V_2 \rightarrow \mathcal L_2(U; H_2)$ are measurable maps, and $W$ is a $U$-valued cylindrical Wiener process.
Denote by $\|\cdot\|_{\mathcal L_2^i}$ the Hilbert--Schmidt operator norm from $U$ to $H_i$, $i=1,2$.
Eq. \eqref{eq-xy} can be rewritten as Eq. \eqref{eq-z} with initial datum
$Z_0=z=(x, y) \in H_1 \times H_2$, where
\begin{align} \label{b-s}
Z=\left(\begin{array}{c} X \\ Y \end{array}\right), \quad
b(Z)=\left(\begin{array}{c} b_1(Z) \\ b_2(Z) \end{array}\right), \quad
\sigma(Z)=\left(\begin{array}{c} \sigma_1 \\ \sigma_2(Z) \end{array}\right).
\end{align}
It follows from the definition \eqref{norm-H} of the norm in $H$ that
\begin{align*}
\|\sigma(u)\|^2_{\mathcal L_2}
& =\sum_{k=1}^\infty \Big\|\Big(\begin{array}{c} \sigma_1 e_k \\ \sigma_2(u) e_k \end{array} \Big) \Big\|^2
=\sum_{k=1}^\infty \big( \|\sigma_1 e_k \|_1^2+\|\sigma_2(u) e_k \|_2^2 \big) \\
& =\|\sigma_1\|^2_{\mathcal L_2^1}+\|\sigma_2(u)\|^2_{\mathcal L_2^2}, \quad u \in H.
\end{align*}
This shows that, if the conditions \eqref{ap-mon}-\eqref{ap-coe} in Assumption \ref{ap} hold with $\sigma$ replaced by $\sigma_2$ (and with possibly different constants), then Eq. \eqref{eq-xy} with coefficients \eqref{b-s} also satisfies Assumption \ref{ap}.
One can then apply the well-posedness result, Lemma \ref{lm-well}, to Eq. \eqref{eq-xy}.
\begin{ap}\label{ap+}
Conditions \eqref{ap-mon}-\eqref{ap-gro} hold with $\sigma$ replaced by $\sigma_2$ (the corresponding $\|\cdot\|_{\mathcal L_2}$-norms of $\sigma$ replaced by the $\|\cdot\|_{\mathcal L_2^2}$-norms of $\sigma_2$) and with $C_j$ replaced by $C_j+\|\sigma_1\|^2_{\mathcal L_2^1}$ for $j=1,3$.
\end{ap}
Similarly to Assumption \ref{ap-ell}, we give the following analogous non-degenerate condition on $\sigma_2$.
The diffusion coefficient $\sigma_1$ on $H_1$ may be taken to be extremely degenerate;
in the frequently used examples given in the last section, one can choose $\sigma_1=0$.
\begin{ap}\label{ap-ell+}
$\sigma_2: H \rightarrow \mathcal L_2(U; H_2)$ is bounded and invertible with bounded right pseudo-inverse $\sigma_2^{-1}: H \rightarrow \mathcal L(H_2; U)$, i.e., $\sigma_2(z)\sigma_2^{-1}(z)={\rm Id}_{H_2}$ (the identity operator on $H_2$) for all $z \in H$, with $\|\sigma_2^{-1}\|_\infty:=\sup_{z \in H} \|\sigma_2^{-1}(z)\|_{\mathcal L(H_2; U)}<\infty$.
\end{ap}
In the finite-dimensional case, this model has been intensively investigated; see, e.g., \cite{MSH02(SPA), Zha10(SPA)} for results on well-posedness, derivative formulas, ergodicity, Harnack inequalities, hypercontractivity, and so forth.
We also note that \cite{Wan17(JFA)} studied Eq. \eqref{eq-xy} with linear $b_1$, semilinear $b_2$, degenerate $\sigma_1=0$, and non-degenerate
$\sigma_2$ independent of the system.
They applied the coupling method in the semigroup framework to get a power-Harnack inequality and show the hypercontractivity of the Markov semigroup.
For Eq. \eqref{eq-xy} with coefficients \eqref{b-s} satisfying Assumptions \ref{ap+} and \ref{ap-ell+}, we consider the asymptotic coupling
\begin{align} \label{eq-cou-xy}
\begin{split}
d \bar X_t &=(b_1(\bar X_t, \bar Y_t)+\lambda (X_t-\bar X_t) ){\rm d}t+\sigma_1 {\rm d}W_t, \\
d \bar Y_t &=(b_2(\bar X_t, \bar Y_t)
+\sigma_2(\bar X_t, \bar Y_t) \widehat{v}_t ) {\rm d}t
+\sigma_2(\bar X_t, \bar Y_t) {\rm d}W_t,
\end{split}
\end{align}
with initial datum $(\bar X_0, \bar Y_0)=(\bar x, \bar y) \in H_1 \times H_2$, where
\begin{align} \label{v+}
\widehat{v}_t:=\lambda \sigma_2^{-1}(X_t, Y_t) (Y_t-\bar Y_t).
\end{align}
Under Assumptions \ref{ap+} and \ref{ap-ell+}, it is not difficult to check that the additional drift term $\sigma_2(\bar X_t, \bar Y_t) \widehat{v}_t$ with $\widehat{v}_t$ given by \eqref{v+} satisfies the hemicontinuity, local monotonicity, and growth conditions in \cite{LR10(JFA)}; thus the above asymptotic coupling \eqref{eq-cou-xy} is well-defined.
Similarly to the arguments in Section \ref{sec3}, we set
\begin{align*}
\widehat{W}_t:=W_t +\int_0^t\widehat{v}_r {\rm d}r,
\end{align*}
and define
\begin{align} \label{R+}
\widehat{R}(t): & =\exp\Big( -\int_0^t \langle\widehat{v}_r, {\rm d}W_r\rangle_U
-\frac12 \int_0^t \|\widehat{v}_r\|^2_U {\rm d}r\Big), \quad t \ge 0.
\end{align}
By using the stopping time technique and the dominated convergence theorem as in Lemma \ref{lm-R}, it is not difficult to show that $\widehat{R}$ defined by \eqref{R+} is a uniformly integrable martingale and that there exists a unique probability measure
$\widehat{\mathbb Q}$ on
$(\Omega, \mathscr F_\infty)$ such that
\begin{align} \label{Q+}
\frac{{\rm d}\widehat{\mathbb Q}|\mathscr F_t}{{\rm d} \mathbb P|\mathscr F_t}=\widehat{R}(t), \quad t \ge 0,
\end{align}
and $(\widehat{W}_t)_{t \ge 0}$ is a cylindrical Wiener process under $\widehat{\mathbb Q}$.
Rewrite Eq. \eqref{eq-xy} and Eq. \eqref{eq-cou-xy} as
\begin{align} \label{eq-xy+}
\begin{split}
dX_t &=(b_1(X_t, Y_t)- \sigma_1 \widehat{v}_t) {\rm d}t+\sigma_1 {\rm d}\widehat{W}_t, \\
dY_t &=(b_2(X_t, Y_t)-\lambda (Y_t-\bar Y_t) ) {\rm d}t
+\sigma_2(X_t, Y_t) {\rm d}\widehat{W}_t,
\end{split}
\end{align}
and
\begin{align} \label{eq-cou-xy+}
\begin{split}
d \bar X_t &=(b_1(\bar X_t, \bar Y_t)+\lambda (X_t-\bar X_t)
- \sigma_1 \widehat{v}_t ){\rm d}t+\sigma_1 {\rm d}\widehat{W}_t, \\
d \bar Y_t &=b_2(\bar X_t, \bar Y_t) {\rm d}t
+\sigma_2(\bar X_t, \bar Y_t) {\rm d}\widehat{W}_t,
\end{split}
\end{align}
with initial datum $(X_0, Y_0)=(x, y), (\bar X_0, \bar Y_0)=(\bar x, \bar y) \in H_1 \times H_2$, respectively.
Denote by $Z_\cdot^z=(X_\cdot^x, Y_\cdot^y)$ and $\bar Z_\cdot^{\bar z}=(\bar X_\cdot^{\bar x}, \bar Y_\cdot^{\bar y})$ the solutions of Eq. \eqref{eq-xy+} with $z=(x, y)$ and Eq. \eqref{eq-cou-xy+} with $\bar z=(\bar x, \bar y)$, respectively.
The well-posedness of Eq. \eqref{eq-cou-xy+} implies that
\begin{align} \label{pt-y+}
P_t f(\bar x, \bar y)=\mathbb E_{\widehat{\mathbb Q}} [f(\bar X_t^{\bar x}, \bar Y_t^{\bar y})],\quad t \ge 0, \ f \in \mathcal B_b(H_1 \times H_2).
\end{align}
We have the following uniform estimate of $\mathbb E[\widehat{R}(t) \log \widehat{R}(t)]$ on any finite interval $[0, T]$ and exponential decay of $\|(X_t^x, Y_t^y)-(\bar X_t^{\bar x}, \bar Y_t^{\bar y})\|$ in the $L^2(\Omega, \widehat{\mathbb Q}; H_1 \times H_2)$-norm sense.
The details can be made rigorous by using the stopping time technique, the dominated convergence theorem, and the Fatou lemma as in Lemma \ref{lm-R}.
\begin{lm} \label{lm-R+}
Under Assumptions \ref{ap+} and \ref{ap-ell+}, we have
\begin{align*}
&\mathbb E_{\widehat{\mathbb Q}}
\|(X_t^x, Y_t^y)-(\bar X_t^{\bar x}, \bar Y_t^{\bar y})\|^2
\le e^{-\gamma t} (\|x-\bar x\|^2+\|y-\bar y\|^2),
\quad t \ge 0, \\
&\sup_{t \in [0, T]} \mathbb E[\widehat{R}(t) \log \widehat{R}(t)]
\le \frac{\lambda^2 \|\sigma_2^{-1}\|^2}{2 \gamma} (\|x-\bar x\|^2+\|y-\bar y\|^2),
\quad T>0.
\end{align*}
\end{lm}
\begin{proof}
Applying the It\^o formula to $Z_t-\bar Z_t:=(X_t-\bar X_t, Y_t-\bar Y_t)$ under $\widehat{\mathbb Q}$, we have
\begin{align*}
{\rm d}\|Z_t-\bar Z_t\|^2
&=2 \langle Z_t-\bar Z_t, (\sigma_2(Z_t)-\sigma_2(\bar Z_t) ) {\rm d}\widehat{W}_t\rangle \\
& \quad +\big[\|\sigma_2(Z_t)-\sigma_2(\bar Z_t)\|^2_{\mathcal L_2^2}
+2\langle Z_t-\bar Z_t, b(Z_t)-b(\bar Z_t)
-\lambda (Z_t-\bar Z_t) \rangle\big] {\rm d}t.
\end{align*}
Taking the expectation $\mathbb E_{\widehat{\mathbb Q}}$ of the above equation, using the fact that $({\widehat W}_t)_{t \ge 0}$ is a cylindrical Wiener process under $\widehat{\mathbb Q}$, and taking into account condition \eqref{ap-mon} with $\sigma$ and the related $\|\cdot\|_{\mathcal L_2}$-norm replaced by $\sigma_2$ and the $\|\cdot\|_{\mathcal L^2_2}$-norm of $\sigma_2$, respectively, we obtain
\begin{align*}
\mathbb E_{\widehat{\mathbb Q}} \|Z_t-\bar Z_t\|^2
\le \|z-\bar z\|^2 - \gamma \int_0^t \mathbb E_{\widehat{\mathbb Q}} \|Z_r-\bar Z_r\|^2 {\rm d}r,
\end{align*}
from which we conclude the first inequality.
On the other hand, it follows from Assumption \ref{ap-ell+} that
\begin{align*}
& \sup_{t \in [0, T]} \mathbb E[\widehat{R}(t) \log \widehat{R}(t)]
=\sup_{t \in [0, T]} \mathbb E_{\widehat{\mathbb Q}} [\log \widehat{R}(t)] \\
& = \frac12 \sup_{t \in [0, T]} \mathbb E_{\widehat{\mathbb Q}} \int_0^t \|\widehat{v}_r\|^2_U {\rm d}r
\le \frac{\lambda^2 \|\sigma_2^{-1}\|^2}{2 \gamma} \|z-\bar z\|^2.
\end{align*}
This shows the last inequality and completes the proof.
\end{proof}
Finally, we derive the following asymptotic log-Harnack inequality and asymptotic properties for Eq. \eqref{eq-xy}.
The proof is similar to that of Theorem \ref{tm-har}, so we omit the details.
\begin{tm} \label{tm-har+}
Let Assumptions \ref{ap+} and \ref{ap-ell+} hold.
For any $t \ge 0$, $(x, y), (\bar x, \bar y) \in H_1 \times H_2$, and $f \in \mathcal B^+_b(H_1 \times H_2)$ with
$\|\nabla \log f\|_\infty<\infty$,
\begin{align}\label{har+}
P_t \log f(\bar x, \bar y) & \le \log P_t f(x, y)
+\frac{\lambda^2 \|\sigma_2^{-1} \|_\infty^2}{2 \gamma} (\|x-\bar x\|^2+\|y-\bar y\|^2) \nonumber \\
& \quad + e^{-\frac{\gamma t}2}\|\nabla \log f\|_\infty
\sqrt{\|x-\bar x\|^2+\|y-\bar y\|^2}.
\end{align}
Consequently, the analogues of the gradient estimate, the asymptotically strong Feller property, and the asymptotic irreducibility in Theorem \ref{tm-har} hold.
In particular, if $\eta<0$, then there exists a unique, and thus ergodic, invariant probability measure $\mu$ for $P_t$.
\end{tm}
\section{Examples}
\subsection{Asymptotic Log-Harnack Inequality for Degenerate SODEs}
For SODEs with non-degenerate multiplicative noise, Harnack inequalities were established in \cite{Wan11(AOP)}.
Here we mainly focus on the degenerate case in the framework of Section \ref{sec4}.
In the finite-dimensional case where $H_1=\mathbb R^n$ and $H_2=\mathbb R^m$ with $n, m \ge 1$, we have $H=\mathbb R^n \times \mathbb R^m$, and $V$, $H^*$, and $V^*$ all coincide with $H$.
Then $W$ is an $m$-dimensional Brownian motion.
\begin{ex}
Consider Eq. \eqref{eq-xy} in the case $n=m=1$ and
\begin{align} \label{ex-HM}
b(X,Y)=\left(\begin{array}{c} -X \\ Y-Y^3 \end{array}\right), \quad
\sigma(X,Y)=\left(\begin{array}{c} 0 \\ 1 \end{array}\right),
\quad (X, Y) \in \mathbb R^2.
\end{align}
It was shown in \cite[Example 3.14]{HM06(ANN)} that the corresponding Markov semigroup is not strong Feller.
Therefore, the standard log-Harnack inequality cannot hold.
For $w=(w_1, w_2) \in \mathbb R^2$, direct calculations yield that
\begin{align} \label{ex-HM+}
2\langleb(w), w\rangle+\|\sigma(w)\|^2_{\mathcal L_2}
=-2\|w\|^2-2(w_2^2-1)^2+3
\le -2\|w\|^2+3,
\end{align}
which shows \eqref{ap-coe} with $\eta=-2<0$.
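For completeness, the direct calculation behind \eqref{ex-HM+} reads, with $b$ and $\sigma$ as in \eqref{ex-HM}:

```latex
\begin{align*}
2\langle b(w), w\rangle+\|\sigma(w)\|^2_{\mathcal L_2}
&=-2w_1^2+2w_2^2-2w_2^4+1 \\
&=-2(w_1^2+w_2^2)-2(w_2^4-2w_2^2+1)+3
=-2\|w\|^2-2(w_2^2-1)^2+3.
\end{align*}
```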
Similarly, one can check that the other conditions in Assumptions \ref{ap} and \ref{ap-ell}, respectively Assumptions \ref{ap+} and \ref{ap-ell+}, hold.
Then by Theorem \ref{tm-har+}, the Markov semigroup associated with Eq. \eqref{eq-xy} with coefficients \eqref{ex-HM} satisfies the asymptotic log-Harnack inequality \eqref{har+}, is asymptotically strong Feller and asymptotically irreducible, and possesses a unique ergodic invariant measure.
\end{ex}
\begin{ex}
Consider Eq. \eqref{eq-xy} driven by additive noise, i.e., with $\sigma_2$ independent of the solution of \eqref{eq-xy}, and with drift function $b$ satisfying
\begin{align} \label{ex-MSH}
\langleb(w), w\rangle \le \alpha-\beta \|w\|^2, \quad w\in \mathbb R^{n+m},
\end{align}
for some constants $\alpha, \beta>0$.
This includes Eq. \eqref{eq-xy} with $b_i$ and $\sigma_i$ given by \eqref{ex-HM}.
So the strong Feller property fails, and a power- or log-Harnack inequality cannot hold.
We note that the author of \cite[Theorem 4.4]{MSH02(SPA)} proved that the corresponding Markov semigroup possesses a unique ergodic invariant measure, provided the above dissipativity condition holds for $b \in \mathcal C^\infty(\mathbb R^{m+n})$ and Eq. \eqref{eq-xy} satisfies a certain Lyapunov structure.
Using Theorem \ref{tm-har+}, we obtain that the associated Markov semigroup satisfies the asymptotic log-Harnack inequality \eqref{har} and possesses a unique ergodic invariant measure; it is moreover asymptotically strong Feller and asymptotically irreducible.
We require no differentiability of $b$ beyond the monotonicity, coercivity, and growth conditions in Assumption \ref{ap}.
\end{ex}
\subsection{Asymptotic Log-Harnack Inequality for SPDEs}
In this part, we give several examples of SPDEs to which the main results, Theorems \ref{tm-har} and \ref{tm-har+}, can be applied.
Let $q \ge 2$, let $\mathcal O$ be a bounded open subset of $\mathbb R^d$, and let $L^q$, $W_0^{1,q}$, and $W_0^{-1,q^*}$ be the usual Lebesgue and Sobolev spaces on $\mathcal O$, where $q^*=q/(q-1)$.
Then we have two Gelfand triples
$W_0^{1,q} \subset L^2 \subset W_0^{-1,q^*}$ and $L^q \subset W_0^{-1,2} \subset L^{q^*}$.
Let $U=L^2$ and $W$ be an $L^2$-valued cylindrical Wiener process.
\subsubsection{Nondegenerate case}
In this part, we impose Assumption \ref{ap-ell} and the following Lipschitz continuity and linear growth conditions on $\sigma$: there exists a positive constant $L_\sigma$ such that for all $u, v, w \in L^2$,
\begin{align} \label{lip}
\|\sigma(u)-\sigma(v)\|^2_{\mathcal L_2} \le L^2_\sigma \|u-v\|^2, \quad
\|\sigma(w)\|^2_{\mathcal L_2}
\le L^2_\sigma(1+ \|w\|^2).
\end{align}
\begin{ex}
Take $H=L^2$ and $V=W_0^{1,q}$.
Consider the stochastic generalized $p$-Laplacian equation
\begin{align}\label{p-lap}
{\rm d}Z_t={\rm div}(|\nabla Z_t|^{q-2} \nabla Z_t- c |Z_t|^{\tilde{q}-2} Z_t) {\rm d}t+\sigma (Z_t) {\rm d}W_t,
\end{align}
with $Z_0=z \in L^2$ and homogeneous Dirichlet boundary condition,
where $c \ge 0$ and $\tilde{q} \in [1, q]$.
Then Eq. \eqref{p-lap} is equivalent to Eq. \eqref{eq-z} with $b$ given by
$b(u)={\rm div}(|\nabla u|^{q-2} \nabla u)- c |u|^{\tilde{q}-2} u$,
$u \in W_0^{1,q}$.
In this case, one can check that \ref{ap-con}--\ref{ap-gro} hold with $\sigma=0$; see, e.g., \cite[Examples 4.1.5 and 4.1.9]{LR15}.
Under the condition \eqref{lip}, we obtain Assumption \ref{ap} with
$\alpha=q$ and $\eta=L^2_\sigma$.
We note that, for Eq. \eqref{p-lap} driven by non-degenerate additive noise, \cite[Theorem 1.1 and Example 3.3]{Liu09(JEE)} established a power-Harnack inequality for the associated Markov semigroup $P_t$.
Whether $P_t$ satisfies a power- or log-Harnack inequality in the multiplicative noise case is unknown.
Under Assumption \ref{ap-ell}, we use Theorem \ref{tm-har} to derive the asymptotic log-Harnack inequality \eqref{har} with certain constants for $P_t$.
\end{ex}
\begin{ex}
Take $H=W_0^{-1,2}$ and $V=L^q$.
Consider the stochastic generalized porous media equation
\begin{align}\label{p-med}
{\rm d}Z_t=( L \Psi(Z_t) +\Phi(Z_t)){\rm d}t+\sigma (Z_t) {\rm d}W_t,
\end{align}
with $Z_0=z \in W_0^{-1,2}$ and homogeneous Dirichlet boundary condition.
Here $L$ is a negative definite self-adjoint linear operator in $L^2$ whose inverse is bounded in $L^q$ (e.g., the Dirichlet Laplacian), and
$\Psi, \Phi$ are Nemytskii operators related to functions $\psi, \phi: \mathbb R \rightarrow \mathbb R$, respectively, such that the following monotonicity and growth conditions hold for some constants $C_+>0$, $C, \eta \in \mathbb R$:
\begin{align*}
&|\psi(t)|+|\phi(t)-C t| \le C_+(1+|t|^{q-1}), \\
& -2\langle\Psi(u)-\Psi(v), u-v\rangle+2 \langle\Phi(u)-\Phi(v), (-L)^{-1}(u-v)\rangle \\
& \le -C_+ \|u-v\|^q_{L^q}+\eta \|u-v\|^2,
\quad t \in \mathbb R, \ u, v \in L^q.
\end{align*}
A very simple example satisfying the above two inequalities is given by
$\psi(t)=|t|^{q-2} t$ and $\phi(t)=\eta t$, $t \in \mathbb R$.
Then one can check that $b$ defined by
$b(u)=L \Psi(u) +\Phi(u)$, $u \in W_0^{-1,2}$,
satisfies Assumption \ref{ap} with $\sigma=0$; see, e.g., \cite[Theorem A.2]{Wan07(AOP)}.
Under the condition \eqref{lip}, we obtain Assumption \ref{ap} with
$\alpha=q$ and $\eta=L^2_\sigma$.
We note that, in the non-degenerate additive noise case, \cite[Theorem 1.1]{Wan07(AOP)} established a power-Harnack inequality for the associated Markov semigroup $P_t$.
Whether a power- or log-Harnack inequality holds for $P_t$ in the multiplicative noise case remains open.
Using Theorem \ref{tm-har}, under the non-degeneracy Assumption \ref{ap-ell}, we conclude that the asymptotic log-Harnack inequality \eqref{har} holds with certain constants for $P_t$.
\end{ex}
\subsubsection{Degenerate case}
Next we give an SPDE with degenerate multiplicative noise.
\begin{ex}
Let $\mathcal O=(0,1)$, $U=H_1=H_2=L^2$, and $W$ be an $L^2$-valued cylindrical Wiener process.
Consider Eq. \eqref{eq-xy} with
\begin{align} \label{ex-Hai}
b(X,Y)=\left(\begin{array}{c} \Delta X+Y+2X-X^3 \\
\Delta Y+X+2Y-Y^3\end{array}\right),
\quad (X,Y) \in L^2 \times L^2.
\end{align}
We note that the author of \cite[Theorem 6.2]{Hai02(PTRF)} showed that Eq. \eqref{eq-xy} with drift \eqref{ex-Hai} and diffusion given by
\begin{align*}
\sigma(X,Y)=\left(\begin{array}{c} 0 \\
{\rm Id}_{L^2} \end{array}\right),
\quad (X,Y) \in L^2 \times L^2,
\end{align*}
where ${\rm Id}_{L^2}$ denotes the identity operator on $L^2$, possesses a unique ergodic invariant measure, by using the asymptotic coupling method in combination with the Lyapunov structure of this system.
One can check that Assumption \ref{ap+} holds for $b$ given in \eqref{ex-Hai} and $\sigma$ given in \eqref{b-s}, such that $\sigma \in \mathcal L_2(U; H_1)$ and $\sigma_2$ satisfies \eqref{lip} with $\sigma$ and the related $\|\cdot\|_{\mathcal L_2}$-norm replaced by $\sigma_2$ and the related $\|\cdot\|_{\mathcal L^2_2}$-norm, respectively.
Then, using Theorem \ref{tm-har+}, we conclude that the associated Markov semigroup satisfies the asymptotic log-Harnack inequality \eqref{har} and possesses a unique ergodic invariant measure.
\end{ex}
\end{document} |
\begin{document}
\title{Alternating Paths of Fully Packed Loops and Inversion Number}
\author[S. Ng]{Stephen Ng}
\address{Department of Mathematics\\
University of Rochester\\
Rochester, NY 14627, USA}
\email{[email protected]}
\date{Version: \today}
\maketitle
\begin{abstract}
We consider the set of alternating paths on a fixed fully packed loop of size $n$, which we denote by $\phi_0$. This set is in bijection with the set of fully packed loops of size $n$ and, via a well-known bijection, with the set of alternating sign matrices. Furthermore, for a special choice of $\phi_0$, we demonstrate that the alternating paths are nested osculating loops, which give rise to a modified height function representation which we call Dyck islands. Dyck islands can be constructed as unions of lattice Dyck paths, and we use this structure to give a simple graphical formula for the calculation of the inversion number of an alternating sign matrix.
\end{abstract}
\section{Introduction}
The motivation for studying alternating paths of fully packed loops began with the online note of Ayyer and Zeilberger \cite{AZ} on an attempt to prove the Razumov-Stroganov conjecture. This note introduced the notion of an alternating path of a fully packed loop and described their action on the underlying link patterns in simple example cases. Furthermore, they conjectured the existence of an algorithm for finding an alternating path that would implement the pullback of the local XXZ Hamiltonians into the space of fully packed loops in such a way that would provide a solution to the Razumov-Stroganov conjecture. The RS conjecture has since been solved by Cantini and Sportiello's detailed analysis \cite{CantiniSportiello} of Wieland's gyration operation \cite{Wieland} on fully packed loops, but the question of the existence of an algorithm with the desired properties remains open.
On another note, Striker \cite{StrikerPolytope} and Behrend and Knight \cite{BK} independently studied the notion of the alternating sign matrix polytope. In particular, Striker gave a nice characterization of the face lattice of this polytope in terms of what she called doubly directed regions of flow diagrams \cite{StrikerPolytope}.
Recast into the fully packed loop picture, this is described as follows: Given any two fully packed loops, there is an alternating path (possibly a disjoint union of alternating loops) along which they differ in color. Then given some collection of fully packed loops of size $n$, one can consider the union of all alternating paths between pairs of fully packed loops. This union represents the smallest face of the alternating sign matrix polytope which contains all of the fully packed loops in the collection.
In what follows, this paper is divided into two additional sections. In Section \ref{sec:defn}, we present all relevant definitions and develop the correspondence between alternating sign matrices, fully packed loops, and Dyck islands. Section \ref{sec:inv} of this paper then demonstrates the utility of this new representation by establishing a connection between the shape of the Dyck island and the inversion number of an alternating sign matrix. In particular, we show that the inversion number of an alternating sign matrix can be decomposed as follows:
\begin{equation*}
\mathrm{inv}(A) = \sum_{i=1}^{\ell} \mathrm{inv}(\gamma_i) - k
\end{equation*}
where $\gamma_1, \ldots, \gamma_\ell$ are boundary paths of the Dyck island corresponding to $A$, $k$ is the number of off-diagonal osculations of these paths, and $\mathrm{inv}(\gamma_i)$ is the inversion number of the alternating sign matrix corresponding to the Dyck island described by just $\gamma_i$. The quantity $\mathrm{inv}(\gamma_i)$ will be shown to be dependent only on the diameter of the loop and the number of osculations on the diagonal. See Figure \ref{fig:DIinv} below for some preliminary examples.
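As a concrete reference point for the decomposition above, the following Python sketch computes the inversion number of an alternating sign matrix using the standard definition $\mathrm{inv}(A)=\sum_{i<k,\, j>l} A_{ij}A_{kl}$, which reduces to the usual inversion count for permutation matrices (the function name is ours, and we assume this standard definition agrees with the one used below).

```python
def inv(A):
    """Inversion number of an alternating sign matrix A (a list of
    rows): sum of A[i][j] * A[k][l] over pairs of positions with
    increasing row index (i < k) and decreasing column index (j > l)."""
    n = len(A)
    return sum(A[i][j] * A[k][l]
               for i in range(n) for j in range(n)
               for k in range(i + 1, n) for l in range(j))
```

For instance, the unique $3\times 3$ alternating sign matrix containing a $-1$ has inversion number $2$ under this definition.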
\begin{figure}
\caption{Some example Dyck islands with inversion number 4 and their corresponding alternating sign matrices.}
\label{fig:DIinv}
\end{figure}
\section{Definitions and the Alternating Sign Matrix - Fully Packed Loop - Dyck Island correspondence }
\label{sec:defn}
Let us now clarify the terminology which will be used throughout the paper. We wish to emphasize the harmony of the different representations of fully packed loops. In Section \ref{sec:inv}, we plan to use several representations at once, particularly in the proof of the main theorem.
\begin{defn}
A \emph{fully packed loop} of size $n$ is a connected graph arranged in an $n\times n$ grid such that there are $n^2$ internal vertices of degree $4$, and $4n$ external vertices of degree 1. Edges of the graph are colored either light or dark such that all internal vertices are incident to two light edges and two dark edges--this is the six-vertex condition (see Figure \ref{fig:6v}). Furthermore, edges incident to the vertices of degree 1 alternate in color in the manner seen in Figure \ref{fig:phi}. These are the domain wall boundary conditions. The set of all fully packed loops of size $n$ will be denoted $\FPL{n}$.
\end{defn}
\begin{figure}
\caption{The six-vertex condition}
\label{fig:6v}
\end{figure}
\begin{defn}
An \emph{alternating sign matrix} of size $n$ is an $n\times n$ matrix with entries $0$, $1$, or $-1$ such that each row sum is equal to 1, each column sum is equal to 1, and the non-zero entries alternate in sign along both rows and columns.
\end{defn}
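The row, column, and alternation conditions in the definition above can be checked mechanically; the following Python sketch does so (the function name is ours).

```python
def is_asm(A):
    """Check the alternating sign matrix conditions: entries in
    {-1, 0, 1}, every row and column sums to 1, and the nonzero
    entries alternate in sign along each row and each column."""
    rows = [list(r) for r in A]
    cols = [list(c) for c in zip(*A)]
    for line in rows + cols:
        if any(x not in (-1, 0, 1) for x in line) or sum(line) != 1:
            return False
        signs = [x for x in line if x != 0]
        # Nonzero entries of each line must read 1, -1, 1, ..., -1, 1.
        if signs[0] != 1 or signs[-1] != 1:
            return False
        if any(s == t for s, t in zip(signs, signs[1:])):
            return False
    return True
```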
Figure \ref{fig:asm} gives two example alternating sign matrices.
\begin{defn}
Given a fixed alternating sign matrix, $A$, a \emph{diagonal one} of $A$ is an entry along the diagonal which takes the value $1$.
\end{defn}
It is well known (see \cite{ProppManyFaces} for a review) that there exists a bijection between fully packed loops of size $n$ and alternating sign matrices of size $n$. Vertices of types $1$--$4$ correspond to $0$, and vertices of types $5$ and $6$ correspond to $1$ and $-1$, subject to the alternating sign condition.
\begin{figure}
\caption{Two example alternating sign matrices}
\label{fig:asm}
\end{figure}
\begin{defn}
Let $\phi_0$ be the fully packed loop corresponding to the identity matrix.
Likewise, let $\phi_1$ be the fully packed loop corresponding to the skew-identity matrix.
\end{defn}
\begin{figure}
\caption{The fully packed loops $\phi_0$ and $\phi_1$.}
\label{fig:phi}
\end{figure}
\begin{defn}
An \emph{alternating path} is a collection of edges of a fully packed loop which form lattice path loops and for which the edge color alternates. An \emph{alternating loop} is a single loop which has alternating edge colors. Thus, an alternating path is a union of alternating loops. Figure \ref{fig:alt} gives examples of alternating paths in a $5\times5$ fully packed loop corresponding to the alternating sign matrices in Figure \ref{fig:asm}.
\end{defn}
\begin{defn}
Let $p = \cup_i \gamma_i$ be a union of one or more lattice path loops, $\gamma_i$. Define the flip of $\gamma_i$ to be the map of $\FPL{n}$ to itself which flips the colors of the edges of $\gamma_i$ from light to dark and vice versa if $\gamma_i$ is an alternating loop, and does nothing if $\gamma_i$ is not an alternating loop. We define the \emph{flip of $p$} to be the map from $\FPL{n}$ to itself which flips all $\gamma_i$ which are alternating. A \emph{plaquette flip} is a flip of a loop surrounding a $1\times 1 $ box.
\end{defn}
\begin{figure}
\caption{Alternating paths on $\phi_0$.}
\label{fig:alt}
\end{figure}
\begin{defn}
A \emph{Dyck island} of size $n+1$ is an $n\times n$ tableau filled with entries, $\delta_{ij}$ for $1 \leq i,j \leq n$, from $\{0,1,2,3, \ldots\}$ according to the following rules:
\begin{itemize}
\item $\delta_{ij} \geq \delta_{i^\prime,j^\prime}$ whenever $i \geq j, i\leq i^\prime$, and $j \geq j^\prime$
\item $\delta_{ij} \geq \delta_{i^\prime, j^\prime}$ whenever $i \leq j, i \geq i^\prime$ and $j \leq j^\prime$
\item $\delta_{ij} = 0 \textrm{ or } 1 $ if $i \in \{1,n\}$ or $j \in \{1,n\}$
\item $|\delta_{ij}-\delta_{(i+1)j}| \leq 1$ and $|\delta_{ij}-\delta_{i(j+1)}| \leq 1$.
\end{itemize}
\label{def:di}
\end{defn}
In words, the above definition tells us the following: Fix a box on the diagonal. Entries of boxes above and to the right are weakly decreasing, by increments of at most 1 per step. Likewise, entries of boxes below and to the left are weakly decreasing, by increments of at most 1 per step.
Furthermore, boxes along the boundary can only take the values 0 or 1. Superimposing a Dyck island over the fully packed loop $\phi_0$ specifies an alternating path along the boundaries of level sets where the value inside a box in a Dyck island indicates the number of alternating paths which contain the box. Figure \ref{fig:exampleDI} shows the two Dyck islands corresponding to the alternating paths of Figure \ref{fig:alt}.
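The conditions of Definition \ref{def:di} can be verified mechanically. The following Python sketch (the function name is ours; a brute-force check, suitable only for small tableaux) reads the third condition as applying to every boundary box, as described above.

```python
def is_dyck_island(d):
    """Check the defining conditions of a Dyck island for an n x n
    tableau d, given as a list of rows with 0-based indices."""
    n = len(d)
    idx = range(n)
    # Entries are nonnegative integers.
    if any(d[i][j] < 0 for i in idx for j in idx):
        return False
    # Conditions 1 and 2: entries weakly decrease moving away from the
    # diagonal (down and left below it, up and right above it).
    for i in idx:
        for j in idx:
            for k in idx:
                for l in idx:
                    if i >= j and i <= k and j >= l and d[i][j] < d[k][l]:
                        return False
                    if i <= j and i >= k and j <= l and d[i][j] < d[k][l]:
                        return False
    # Condition 3: boundary boxes carry only 0 or 1.
    if any(d[i][j] > 1 for i in idx for j in idx
           if i in (0, n - 1) or j in (0, n - 1)):
        return False
    # Condition 4: adjacent entries differ by at most 1.
    for i in idx:
        for j in idx:
            if i + 1 < n and abs(d[i][j] - d[i + 1][j]) > 1:
                return False
            if j + 1 < n and abs(d[i][j] - d[i][j + 1]) > 1:
                return False
    return True
```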
\begin{figure}
\caption{Two example Dyck islands. Boundary paths have been drawn in bold.}
\label{fig:exampleDI}
\end{figure}
It is possible to construct all Dyck islands inductively from the Dyck island of all zero entries according to the following local update rules:
\begin{itemize}
\item
$\raisebox{-1cm}{
\begin{tikzpicture}[scale=1.3]
\draw[gray, very thin] (-.75,-.25) rectangle (.75,.25);
\draw[gray, very thin] (-.25,.75) rectangle (.25,-.75);
\node at (0,0) {$i$};
\node at (-.5,0) {$i$};
\node at (.5,0) {$i$};
\node at (0,-.5) {$i$};
\node at (0,.5) {$i$};
\end{tikzpicture}
}
\leftrightarrow
\raisebox{-1cm}{
\begin{tikzpicture}[scale=1.3]
\draw[gray, very thin] (-.75,-.25) rectangle (.75,.25);
\draw[gray, very thin] (-.25,.75) rectangle (.25,-.75);
\node at (0,0) {\tiny{$i+1$}};
\node at (-.5,0) {$i$};
\node at (.5,0) {$i$};
\node at (0,-.5) {$i$};
\node at (0,.5) {$i$};
\end{tikzpicture}
}$ when the entry to be updated is on the diagonal.
\item
$\raisebox{-1cm}{
\begin{tikzpicture}[scale=1.3]
\draw[gray, very thin] (-.75,-.25) rectangle (.75,.25);
\draw[gray, very thin] (-.25,.75) rectangle (.25,-.75);
\node at (0,0) {$i$};
\node at (-.5,0) {\tiny{$i+1$}};
\node at (.5,0) {$i$};
\node at (0,-.5) {\tiny{$i+1$}};
\node at (0,.5) {$i$};
\end{tikzpicture}
}
\leftrightarrow
\raisebox{-1cm}{
\begin{tikzpicture}[scale=1.3]
\draw[gray, very thin] (-.75,-.25) rectangle (.75,.25);
\draw[gray, very thin] (-.25,.75) rectangle (.25,-.75);
\node at (0,0) {\tiny{$i+1$}};
\node at (-.5,0) {\tiny{$i+1$}};
\node at (.5,0) {$i$};
\node at (0,-.5) {\tiny{$i+1$}};
\node at (0,.5) {$i$};
\end{tikzpicture}
}$ when the entry to be updated is above the diagonal.
\item
$\raisebox{-1cm}{
\begin{tikzpicture}[scale=1.3]
\draw[gray, very thin] (-.75,-.25) rectangle (.75,.25);
\draw[gray, very thin] (-.25,.75) rectangle (.25,-.75);
\node at (0,0) {$i$};
\node at (-.5,0) {$i$};
\node at (.5,0) {\tiny{$i+1$}};
\node at (0,-.5) {$i$};
\node at (0,.5) {\tiny{$i+1$}};
\end{tikzpicture}
}
\leftrightarrow
\raisebox{-1cm}{
\begin{tikzpicture}[scale=1.3]
\draw[gray, very thin] (-.75,-.25) rectangle (.75,.25);
\draw[gray, very thin] (-.25,.75) rectangle (.25,-.75);
\node at (0,0) {\tiny{$i+1$}};
\node at (-.5,0) {$i$};
\node at (.5,0) {\tiny{$i+1$}};
\node at (0,-.5) {$i$};
\node at (0,.5) {\tiny{$i+1$}};
\end{tikzpicture}
}$ when the entry to be updated is below the diagonal.
\end{itemize}
For the purpose of making sense of the update rules along the boundary, assume that the Dyck island has an additional first and last row of 0 entries and an additional first and last column of 0 entries.
\begin{defn}
Following \cite{ProppManyFaces}, the \emph{height function} representation of an $n\times n$ alternating sign matrix, $A_{i^\prime j^\prime}$, is an $(n+1) \times (n+1)$ matrix
\begin{equation*}
h_{ij} = i+j - 2 \left(\sum_{i^\prime = 1}^i \sum_{j^\prime = 1}^j A_{i^\prime j^\prime} \right)
\end{equation*}
where $0\leq i,j \leq n$, and the sums are to be zero when $i=0$ or $j=0$, so that $h_{ij}=i+j$ whenever $i=0$ or $j=0$. See Figure \ref{fig:heightFunc} to see examples of height functions which correspond to the Dyck islands in Figure \ref{fig:exampleDI}.
\end{defn}
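For concreteness, the height function can be computed directly from the formula above; a minimal Python sketch (the function name is ours) is:

```python
def height_function(A):
    """Height function of an n x n alternating sign matrix A (a list
    of rows): the (n+1) x (n+1) matrix with entries
    h[i][j] = i + j - 2 * (sum of the top-left i x j block of A),
    for 0 <= i, j <= n."""
    n = len(A)
    return [[i + j - 2 * sum(A[a][b] for a in range(i) for b in range(j))
             for j in range(n + 1)]
            for i in range(n + 1)]
```

For the $2\times 2$ identity matrix this returns the minimal height function, with $h_{ij}=i+j$ along the first row and column.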
\begin{figure}
\caption{Two example height functions}
\label{fig:heightFunc}
\end{figure}
In \cite{LS}, Lascoux and Sch\"utzenberger showed that monotone triangles (yet another object in bijection with alternating sign matrices--see \cite{ProppManyFaces} for examples) satisfy an interesting lattice structure, which by the above bijection carries through to the height function representation. The infimum and supremum of the entire set are the identity and the skew-identity, respectively. In the height function representation, there is a particularly easy interpretation of the lattice structure: two height functions, $h^{(1)}$ and $h^{(2)}$, satisfy $h^{(1)} \leq h^{(2)}$ if and only if $h^{(1)}_{ij} \leq h^{(2)}_{ij}$ for $0\leq i,j \leq n$. What is more, entries of the height function differ by even numbers, and the border entries ($i,j \in \{0,n\}$) remain constant. This motivates the following formula, which gives a bijection between Dyck islands and height functions:
Let $h^{(0)}$ correspond to the minimal $(n+1) \times (n+1)$ height function (which corresponds to the identity matrix in the alternating sign matrix picture) and let $h$ be any given $(n+1) \times (n+1)$ height function. Then we get a corresponding Dyck island $\delta$ via
\begin{equation*}
\delta_{ij} = \frac 1 2 (h_{ij} - h^{(0)}_{ij}) \textrm{ where } 1 \leq i, j \leq n-1 .
\end{equation*}
Because the entries with $i \in \{0, n\}$ or $j \in \{0, n\}$ remain constant in the height function representation, it is clear that the above map is a bijection. We obtain the following proposition.
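The bijection can be sketched directly in Python (the function name is ours; the height function $h^{(0)}$ of the identity matrix satisfies $h^{(0)}_{ij}=|i-j|$, which we use in place of a separate computation).

```python
def dyck_island(A):
    """(n-1) x (n-1) Dyck island of an n x n alternating sign matrix A,
    via delta[i][j] = (h[i][j] - h0[i][j]) / 2 for 1 <= i, j <= n-1,
    where h is the height function of A and h0, the height function of
    the identity matrix, satisfies h0[i][j] = |i - j|."""
    n = len(A)

    def h(i, j):
        # Height function entry: i + j - 2 * (top-left i x j block sum).
        return i + j - 2 * sum(A[a][b] for a in range(i) for b in range(j))

    return [[(h(i, j) - abs(i - j)) // 2 for j in range(1, n)]
            for i in range(1, n)]
```

For example, the identity matrix maps to the all-zero tableau, while the $3\times 3$ alternating sign matrix with a $-1$ in the center maps to the tableau with a $1$ in each diagonal box.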
\begin{prop}
Dyck islands are in bijection with fully packed loops and alternating sign matrices.
\end{prop}
We now introduce terminology which is useful for describing any specified Dyck island.
\begin{defn}
A \emph{Dyck word of semilength $n$} is a word of length $2n$ over the alphabet $\{u,d\}$ such that the numbers of `$u$' and `$d$' are equal, and in every prefix consisting of the first $i$ letters, $1\leq i \leq 2n$, the number of `$u$' is at least the number of `$d$'.
\end{defn}
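The prefix condition above is easy to check mechanically; a minimal Python sketch (the function name is ours) is:

```python
def is_dyck_word(w):
    """Check that w, a string over {'u', 'd'}, is a Dyck word: every
    prefix has at least as many 'u' as 'd', and the totals are equal."""
    if set(w) - {"u", "d"}:
        return False
    height = 0
    for c in w:
        height += 1 if c == "u" else -1
        if height < 0:
            # Some prefix has more 'd' than 'u'.
            return False
    return height == 0
```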
\begin{defn}
A \emph{Dyck path of semilength $n$} is a lattice path in a finite size square lattice constructed from a Dyck word of semilength $n$ in which the path begins at some point along the diagonal and the letters $\{u,d\}$ are interpreted as $\{$right, down$\}$ for paths above the diagonal or as $\{$down, right$\}$ for paths below the diagonal. By construction, a Dyck path begins and ends on the diagonal, and never crosses the diagonal. In other words, we fix whether our path is above the diagonal or below the diagonal and interpret $u$ to be a move away from the diagonal and $d$ to be a move toward the diagonal.
\end{defn}
\begin{defn}
Given any Dyck island of size $n$, we notice that entries take values in the set $\{0, 1, \ldots , \lceil\frac{n}{2} \rceil\}$. Consider the union of all boxes labelled $i$ such that $1 \leq i \leq \lceil \frac{n}{2} \rceil$. By construction, these regions are bounded by boxes labelled $i-1$ or $i+1$. The union of all edges between such regions for all $i \in \{0,1,\ldots, \lceil \frac{n}{2} \rceil \}$ forms lattice path loops, which we call \emph{boundaries} or \emph{boundary paths}. By specifying all boundaries, one can retrieve the entries of the Dyck island by filling each box with the number of boundaries that contain it in their interior. See Figure \ref{fig:exampleDI} for examples.
\end{defn}
Boundaries are a union of lattice path loops which are allowed to touch along vertices of the underlying graph. We fix the convention that we decompose the boundaries of a Dyck island into loops which can be described by two Dyck paths of equivalent semilength: one which forms the northeastern side of the boundary, and the other which forms the southwestern side. One can check that boundaries decompose into different Dyck paths under the local update rules for Dyck islands, but the existence of the Dyck path representation is preserved.
\begin{defn}
For any given lattice path loop, $\gamma$, which is assumed to form part of the boundary of a given Dyck island, we denote the northeast boundary of $\gamma$ with $\mathrm{neb}(\gamma)$ and we denote the southwestern boundary of $\gamma$ with $\mathrm{swb}(\gamma)$. Both $\mathrm{neb}(\gamma)$ and $\mathrm{swb}(\gamma)$ are Dyck paths. See Figure \ref{fig:nebswbdemo} for an example.
\end{defn}
\begin{defn}
Let $\pi_1$ and $\pi_2$ be two distinct Dyck paths which are part of the boundary of a given Dyck island (possibly from the same loop). An \emph{osculation} is a point of the lattice which $\pi_1$ and $\pi_2$ share in common.
\end{defn}
In order to eliminate ambiguities in decomposing the boundaries, we fix the convention that loops with osculations along the diagonal cannot be broken up into smaller loops. For example, in Figure \ref{fig:DIinv}, the first Dyck island is described by two loops and the remaining three are described by one loop. Figure \ref{fig:nebswbdemo} gives another example.
\begin{figure}
\caption{An example lattice path loop of semilength $3$ in the boundary of a Dyck island. The northeast boundary of the given lattice loop is labelled with a dashed path. The southwest boundary is labelled with a solid path. Note that by our convention, we will always describe these boundaries as a single loop rather than two loops.}
\label{fig:nebswbdemo}
\end{figure}
Observe that distinct Dyck paths forming the boundary of a Dyck island cannot share an edge, because this would violate the fourth condition of Definition \ref{def:di}. Hence the Dyck paths which form the boundaries of Dyck islands touch only at isolated points.
We now wish to demonstrate how the notion of a Dyck island is related to the set of alternating paths on the fully packed loop $\phi_0$ of arbitrary size $n$. We will find that by reinterpreting alternating paths as the union of lattice path loops in the square lattice of size $n$, we recover the boundary of a Dyck island. Though Dyck islands are perhaps most simply defined via the connection to height functions, their discovery arose through the study of alternating paths.
\begin{lemma}
Any alternating loop can be constructed by some sequence of plaquette flips.
\label{lem:plaquetteConstruction}
\end{lemma}
\begin{proof}
The following proof works for any given fully packed loop. With a fixed fully packed loop and alternating path in mind, it is clear that if we can apply a single plaquette flip to every box in the interior, then each interior edge is flipped twice, while each edge of the path itself is flipped only once. Thus, such a sequence of plaquette flips implements the flip of the alternating path.
Let us call a box \emph{accessible} if it can eventually be flipped by a plaquette flip, after some sequence of plaquette flips in the interior. We wish to demonstrate that all boxes in the interior of an alternating path are accessible. The proof is by induction on the number of boxes in the interior. The case of one box is obvious, since this is simply a plaquette flip to begin with. The inductive step is demonstrated by cutting the interior of the alternating path into two parts along some alternating cutting path. Then one of the two regions is bounded by an alternating path, and the other region can be shown to be bounded by an alternating path upon a color flip operation applied to the cutting path. Each of these smaller alternating paths bounds fewer boxes than the original, so by the inductive hypothesis, all of the boxes within are accessible.
Lastly, to demonstrate that such an alternating cutting path exists, we make the observation that every edge is part of some alternating path, by the six-vertex condition. Then, if we pick any edge that is both incident to a vertex on the alternating path and in the interior of the alternating path and use this edge to find a new alternating cutting path, we see that the cutting path must be incident to our original alternating path in at least 2 points.
\end{proof}
\begin{figure}
\caption{The path colored blue illustrates one possible cutting path.}
\label{fig:cutting}
\end{figure}
Plaquette flips in the fully packed loop picture correspond to the local update rules in the Dyck island picture. This is the content of the next proposition.
\begin{prop}
Fix a box $\alpha$ in the $n\times n$ square lattice. Let $f_\alpha$ be the operator acting on $\FPL{n}$ which implements a plaquette flip on the box $\alpha$ if it is surrounded by an alternating path and which does nothing otherwise. Let $u_\alpha$ be the operator on $(n-1) \times (n-1)$ Dyck islands which implements a local update at the box $\alpha$ (adds $\pm1$ to the box $\alpha$) if it is permissible, and does nothing otherwise. Then there exists a bijection $M$ from $\FPL{n}$ to the $(n-1)\times(n-1)$ Dyck islands such that
\begin{equation*}
M\circ f_\alpha = u_\alpha \circ M.
\end{equation*}
Furthermore, $M$ is the map which forgets information about color and reinterprets alternating paths of $\phi_0$ as boundary paths in Dyck islands.
\label{prop:bijectionM}
\end{prop}
\begin{proof}
We will construct the bijection $M$.
First, we establish a bijective correspondence between alternating paths of $\phi_0$ and the set of fully packed loops. Observe that a simple consequence of the six-vertex condition and the boundary condition is that, for any fixed fully packed loop $\phi$, all alternating paths close up into a union of loops. Indeed, the six-vertex condition guarantees that if any two fully packed loops differ at a vertex, they must differ along exactly two (one light and one dark) or all four of the incident edges. It follows that the set of all edges which differ between $\phi$ and $\phi_0$ (the fully packed loop corresponding to the identity matrix) is an alternating path, which we denote $\tilde{\gamma}=\gamma_1 \cup \ldots \cup \gamma_\ell$. Thus, we see that the flip of $\tilde{\gamma}$ maps $\phi$ to $\phi_0$ and vice versa. The upshot is that it is possible to obtain every fully packed loop as the flip of some alternating path of $\phi_0$.
Next, we observe that an arbitrary alternating path loop, $\gamma$, in $\phi_0$ traverses at least two points along the diagonal, that $\gamma$ can be decomposed into a portion above the diagonal ($\mathrm{neb}(\gamma)$) and a portion below the diagonal ($\mathrm{swb}(\gamma)$), and that $\mathrm{neb}(\gamma)$ and $\mathrm{swb}(\gamma)$ are Dyck paths. This follows because all off-diagonal elements of $\phi_0$ are only of type $3$ or $4$ (see Figures \ref{fig:6v} and \ref{fig:phi}). Because alternating paths of $\phi_0$ are a union of alternating path loops $\gamma_1, \ldots, \gamma_\ell$, we see that by forgetting the coloring of a given alternating path we can reinterpret it as the boundary path of a Dyck island.
We define our bijection $M$ from $\FPL{n}$ to $(n-1)\times(n-1)$ Dyck islands to be the map obtained from the following process:
\begin{center}
\begin{enumerate}
\item Given $\phi$, find the corresponding alternating path of $\phi_0$. Call it $\tilde{\gamma}$.
\item Interpret $\tilde{\gamma}$ as a boundary path of a Dyck island, $\delta$ , and fill in the entries $\delta_{ij}$ according to the number of boundary path loops which contain the box $(i,j)$.
\end{enumerate}
\end{center}
This process may be completed in reverse, so it follows that $M$ is a bijection.
Lastly, in order to establish that
\begin{equation*}
M\circ f_\alpha = u_\alpha \circ M.
\end{equation*}
we simply observe that the local update rules for Dyck islands correspond to an application of $f_\alpha$ to some accessible box $\alpha$. This is because an accessible box $\alpha$ corresponds to a box of one of the three following types:
\begin{equation*}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin, gray] (-.9,-.9) grid (1.9,1.9);
\draw[very thick] (1,1.9) -- (1,1) -- (0,1) -- (0,0) -- (-.9,0);
\draw[line width=1mm,color=red,opacity=.4] (0,1) -- (0,0) -- (1,0);
\node at (.5,.5) {$\tiny{\alpha}$};
\end{tikzpicture}
} \leftrightarrow
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin, gray] (-.9,-.9) grid (1.9,1.9);
\draw[very thick] (1,1.9) -- (1,1) -- (0,1) -- (0,0) -- (-.9,0);
\draw[line width=1mm,color=red,opacity=.4] (0,1) -- (1,1) -- (1,0);
\node at (.5,.5) {$\tiny{\alpha}$};
\end{tikzpicture}
}
\end{equation*}
\begin{equation*}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin, gray] (-.9,-.9) grid (1.9,1.9);
\draw[very thick] (1.9,1) -- (1,1) -- (1,0) -- (0,0) -- (0,-.9);
\draw[line width=1mm,color=red,opacity=.4] (0,1) -- (0,0) -- (1,0);
\node at (.5,.5) {$\tiny{\alpha}$};
\end{tikzpicture}
} \leftrightarrow
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin, gray] (-.9,-.9) grid (1.9,1.9);
\draw[very thick] (1.9,1) -- (1,1) -- (1,0) -- (0,0) -- (0,-.9);
\draw[line width=1mm,color=red,opacity=.4] (0,1) -- (1,1) -- (1,0);
\node at (.5,.5) {$\tiny{\alpha}$};
\end{tikzpicture}
}
\end{equation*}
\begin{equation*}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin, gray] (-.9,-.9) grid (1.9,1.9);
\draw[very thick] (-.9,0) -- (0,0) -- (0,1.9);
\draw[very thick] (1,-.9) -- (1,1) -- (1.9,1);
\node at (.5,.5) {$\tiny{\alpha}$};
\end{tikzpicture}
} \leftrightarrow
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin, gray] (-.9,-.9) grid (1.9,1.9);
\draw[very thick] (-.9,0) -- (0,0) -- (0,1.9);
\draw[very thick] (1,-.9) -- (1,1) -- (1.9,1);
\draw[line width=1mm,color=red,opacity=.4] (0,0) rectangle (1,1);
\node at (.5,.5) {$\tiny{\alpha}$};
\end{tikzpicture}
}.
\end{equation*}
\end{proof}
Because it is more convenient for our analysis of the inversion number, in the remainder of the paper we use the Dyck islands picture. The following proposition explicitly establishes the correspondence between alternating sign matrices and Dyck islands.
\begin{prop}
Let a Dyck island be described by a boundary path $\tilde{\gamma}=\gamma_1\cup \ldots \cup \gamma_\ell$. Let $w$ be the Dyck word corresponding to $\mathrm{neb}(\gamma_i)$ (respectively $\mathrm{swb}(\gamma_i)$), and let $v$ be a vertex of the underlying lattice corresponding to a consecutive subword $ud$ or $du$ of $w$. Generically, $v$ corresponds to a $1$ if the subword is $ud$ and $v$ corresponds to a $-1$ if the subword is $du$. All other vertices correspond to $0$. The exceptional cases concern vertices where distinct loops touch, and vertices where loops traverse a vertex along the diagonal. They are:
\begin{enumerate}
\item If $v$ is a vertex along the diagonal which is traversed by either $\mathrm{neb}(\gamma_i)$ or $\mathrm{swb}(\gamma_i)$ but not both, then $v$ corresponds to a $0$ in the alternating sign matrix picture.
\item If $v$ is a vertex along the diagonal which is traversed by both $\mathrm{neb}(\gamma_i)$ and $\mathrm{swb}(\gamma_i)$, then $v$ corresponds to a $-1$ in the alternating sign matrix picture.
\item If $v$ is a vertex not along the diagonal which is traversed by both $\mathrm{neb}(\gamma_i)$ and $\mathrm{neb}(\gamma_j)$ (or is traversed by both $\mathrm{swb}(\gamma_i)$ and $\mathrm{swb}(\gamma_j)$), then $v$ corresponds to a $0$ in the alternating sign matrix picture.
\end{enumerate}
\end{prop}
\begin{proof}
The proof proceeds by applying the map $M^{-1}$ where $M$ is the bijection from Proposition \ref{prop:bijectionM} and carefully examining the resulting picture. In all of the diagrams below, red lines indicate the location of the action of a flip of an alternating loop.
The generic picture is established by observing four scenarios:
\begin{equation*}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (0,-.9) -- (0,0) -- (.9,0);
\draw[line width=1mm,color=red,opacity=.4] (-.9,0) -- (0,0) -- (0,-.9);
\end{tikzpicture}
}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (0,-.9) -- (0,0) -- (.9,0);
\draw[line width=1mm,color=red,opacity=.4] (.9,0) -- (0,0) -- (0,.9);
\end{tikzpicture}
}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (-.9,0) -- (0,0) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (-.9,0) -- (0,0) -- (0,-.9);
\end{tikzpicture}
}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (-.9,0) -- (0,0) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (.9,0) -- (0,0) -- (0,.9);
\end{tikzpicture}
}.
\end{equation*}
A flip of $\tilde{\gamma}$ acting on $\phi_0$ transforms these cases into vertices of type $5$ or $6$ (see Figure \ref{fig:6v}). It follows that such vertices correspond to non-zero entries in the alternating sign matrix picture.
To see that the alternating condition is satisfied in the generic case, suppose that we are in a situation with no exceptions (that is, all loops are disjoint and do not have northeast and southwest boundaries intersecting at any point on the diagonal). We first demonstrate that the alternating condition is satisfied in the case that there is only one loop and then show that it is also satisfied for nested loops. The generic case follows.
Fix a loop $\gamma_i$ and consider the Dyck island described by this single loop. Fix a column for observation such that $\gamma_i$ intersects the column in at least one vertex of type $ud$. If the column intersects the point of $\gamma_i$ furthest to the upper left or the point of $\gamma_i$ furthest to the lower right, then the column contains only one vertex of type $ud$ and no vertices of type $du$. Otherwise, $\mathrm{neb}(\gamma_i)$ must intersect the column in one vertex of type $ud$ above one vertex of type $du$, or else in neither. Likewise, still considering the same column, $\mathrm{swb}(\gamma_i)$ must intersect the column in one vertex of type $du$ above one vertex of type $ud$, or else in neither. After accounting for a possible diagonal one, we see that the alternating condition is satisfied. See Figure \ref{fig:column}.
\begin{figure}
\caption{Columns 2 and 3 illustrate how vertices of type $ud$ and $du$ come in pairs. Columns 1 and 4 illustrate what happens in extremal situations.}
\label{fig:column}
\end{figure}
Suppose we consider a new Dyck island with two loops $\gamma_i$ and $\gamma_j$ such that $\gamma_i$ is contained in the interior of $\gamma_j$. If we pick a column which intersects both $\gamma_i$ and $\gamma_j$, then intersections will be ordered in the following way (reading from top to bottom): First intersections of $\mathrm{neb}(\gamma_j)$, then intersections of $\mathrm{neb}(\gamma_i)$, then possibly a diagonal one, then intersections of $\mathrm{swb}(\gamma_i)$ and lastly, intersections of $\mathrm{swb}(\gamma_j)$. It is easy to check that the alternating condition is still satisfied. Intersections along rows are checked in exactly the same way.
Exception (1) corresponds to one of the following two pictures:
\begin{equation*}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (0,-.9) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (-.9,0) -- (0,0) -- (0,-.9);
\end{tikzpicture}
}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (0,-.9) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (.9,0) -- (0,0) -- (0,.9);
\end{tikzpicture}
}.
\end{equation*}
Likewise, exception (2) corresponds to
\begin{equation*}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (0,-.9) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (-.9,0) -- (0,0) -- (0,-.9);
\draw[line width=1mm,color=red,opacity=.4] (.9,0) -- (0,0) -- (0,.9);
\end{tikzpicture}
}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (0,-.9) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (.9,0) -- (0,0) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (-.9,0) -- (0,0) -- (0,-.9);
\end{tikzpicture}
}.
\end{equation*}
Lastly, exception (3) corresponds to
\begin{equation*}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (0,-.9) -- (0,0) -- (.9,0);
\draw[line width=1mm,color=red,opacity=.4] (-.9,0) -- (0,0) -- (0,-.9);
\draw[line width=1mm,color=red,opacity=.4] (.9,0) -- (0,0) -- (0,.9);
\end{tikzpicture}
}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (0,-.9) -- (0,0) -- (.9,0);
\draw[line width=1mm,color=red,opacity=.4] (.9,0) -- (0,0) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (-.9,0) -- (0,0) -- (0,-.9);
\end{tikzpicture}
}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (-.9,0) -- (0,0) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (-.9,0) -- (0,0) -- (0,-.9);
\draw[line width=1mm,color=red,opacity=.4] (.9,0) -- (0,0) -- (0,.9);
\end{tikzpicture}
}
\raisebox{-.5cm}{
\begin{tikzpicture}[scale=.5]
\draw[very thin,gray] (-.9,-.9) grid (.9,.9);
\draw[very thick] (-.9,0) -- (0,0) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (.9,0) -- (0,0) -- (0,.9);
\draw[line width=1mm,color=red,opacity=.4] (-.9,0) -- (0,0) -- (0,-.9);
\end{tikzpicture}
}.
\end{equation*}
Exceptions (1) and (3) are obvious and preserve the alternating condition because they may be interpreted as the merging of adjacent $1$ and $-1$ vertices into a vertex contributing $0$. Exception (2) is established by isolating $\gamma_i$. If we read the vertices in the given column (row) from top (left) to bottom (right), then there is a vertex of type $ud$ before and another vertex of type $ud$ after. $\gamma_i$ does not cross the column (row) in any other locations, so the alternating condition forces the vertex in consideration along the diagonal to be $-1$.
\end{proof}
\section{Boundary paths of Dyck islands and Inversion number}
\label{sec:inv}
\begin{defn}
The \emph{inversion number} of an alternating sign matrix $A$, denoted by $\mathrm{inv}(A)$, is
\begin{equation*}
\mathrm{inv}(A) = \sum_{\substack{1 \leq i, i^\prime, j,j^\prime \leq n\\ i>i^\prime \\j< j^\prime}} A_{ij} A_{i^\prime j^\prime}.
\end{equation*}
\end{defn}
The inversion number is an extension of the standard notion for permutation matrices to all alternating sign matrices. We shall often abuse notation and speak of the inversion number of the associated fully packed loop or Dyck island.
By only considering non-zero terms, the above sum can be reduced to a sum over pairs of integer tuples $(i,j)$ and $(i^\prime, j^\prime)$ such that $(i^\prime,j^\prime)$ is strictly above and strictly to the right of $(i,j)$ and such that $A_{ij}$ and $A_{i^\prime j^\prime}$ are non-zero. It is easy to see that reflection across the diagonal preserves such pairs as well as the value of $A_{ij}A_{i^\prime j^\prime}$. Thus, the following lemma is true.
\begin{lemma}
Let $A^\prime$ be the reflection of $A$ across the diagonal. Then $\mathrm{inv}(A) = \mathrm{inv}(A^\prime)$.
\label{lem:invDiag}
\end{lemma}
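Both the defining sum and Lemma \ref{lem:invDiag} are easy to check by direct computation. The following Python sketch (our illustration; the function name is ours, not from the paper) evaluates the defining sum, and can be used to verify agreement with the usual inversion count on permutation matrices and invariance under reflection across the diagonal.

```python
from itertools import permutations

def inversion_number(A):
    """inv(A): sum of A[i][j] * A[i2][j2] over index pairs with
    i > i2 and j < j2 (0-based here; the paper's indices are 1-based)."""
    n = len(A)
    total = 0
    for i in range(n):
        for j in range(n):
            if A[i][j] == 0:
                continue
            for i2 in range(i):              # i2 < i, i.e. strictly above
                for j2 in range(j + 1, n):   # j2 > j, i.e. strictly to the right
                    total += A[i][j] * A[i2][j2]
    return total

# The smallest alternating sign matrix that is not a permutation matrix.
A = [[0, 1, 0],
     [1, -1, 1],
     [0, 1, 0]]
```

For a permutation matrix, this reduces to the classical inversion number of the permutation, and transposing the matrix leaves the value unchanged, in accordance with Lemma \ref{lem:invDiag}.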
We now give a way to characterize the inversion number in terms of the loops of a Dyck island. Let $\gamma$ be a loop of a given Dyck island $\delta$. We write $\mathrm{inv}(\gamma)$ to mean the inversion number of a new Dyck island defined by the single boundary loop, $\gamma$.
\begin{thm}
If $\delta$ is a Dyck island consisting of $\ell$ loops (possibly nested), $\gamma_1, \ldots , \gamma_\ell$ with a total number of $k$ off-diagonal osculations, then
\begin{equation*}
\mathrm{inv}(\delta) = \sum_{n=1}^\ell \mathrm{inv}(\gamma_n) - k.
\end{equation*}
\label{thm:DIinversions}
\end{thm}
Before proving the theorem, we prove a lemma about evaluating a Dyck island consisting of only one loop $\gamma$.
Recall that the \emph{semilength} of a Dyck path of $2n$ steps ($n$ rises and $n$ falls) is $n$. If the northeast and southwest boundaries of a loop, $\gamma$, are Dyck paths of semilength $n$, then we say also that the semilength of $\gamma$ is $n$. Furthermore, we define an \emph{internal one} to be a diagonal point of the lattice which is strictly contained inside the interior of the loop. In the alternating sign matrix picture, these points correspond to entries along the diagonal with the value 1.
The analysis of inversion numbers of Dyck islands will be facilitated by the Dyck path structure of the northeast and southwest boundaries of loops. The next lemma characterizes a key property of subpaths of Dyck paths encoded as words in a two letter alphabet.
\begin{lemma}
Let $w$ be a word in the alphabet $\{u,d\}$ such that $w$ begins with the letter $u$ and ends with the letter $d$. Then the number of consecutive $ud$ subwords minus the number of consecutive $du$ subwords is exactly 1.
\label{lem:words}
\end{lemma}
\begin{proof}
Start with the word $ud$, which obviously evaluates to 1. The insertion of a single $u$ or $d$ in the middle of the word does not change this evaluation. Thus, by checking that the following four insertion scenarios do not change the evaluation, we are done.
\begin{align*}
uu & \leftrightarrow uuu \\
ud & \leftrightarrow uud \\
du & \leftrightarrow duu \\
dd & \leftrightarrow dud
\end{align*}
The case for insertion of a $d$ is exactly analogous.
\end{proof}
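Lemma \ref{lem:words} can also be confirmed by exhaustive search. The short Python sketch below (our illustration, not part of the paper) checks every word over $\{u,d\}$ up to a modest length.

```python
from itertools import product

def ud_minus_du(w):
    """Number of consecutive 'ud' subwords minus number of 'du' subwords."""
    return (sum(1 for i in range(len(w) - 1) if w[i:i + 2] == "ud")
            - sum(1 for i in range(len(w) - 1) if w[i:i + 2] == "du"))

# Every word beginning with 'u' and ending with 'd' evaluates to exactly 1.
assert all(
    ud_minus_du("u" + "".join(mid) + "d") == 1
    for n in range(9)
    for mid in product("ud", repeat=n)
)
```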
\begin{defn}
The \emph{contribution zone of vertex $v$} is the rectangular sublattice which lies strictly above and to the right of $v$. Let $CZ(v)$ denote the set of all vertices in the contribution zone of $v$ corresponding to alternating sign matrix entries $1$ or $-1$. The \emph{contribution of the vertex $v$} is the sum over products of pairs of entries given by
\begin{equation*}
\sum_{w \in CZ(v)} A_v A_w
\end{equation*}
where $A_v$ denotes the alternating sign matrix entry corresponding to vertex position $v$. By Lemma \ref{lem:invDiag}, we could also assign the contribution zone of $v$ to be below and to the left.
\end{defn}
\begin{lemma}
Fix an alternating sign matrix of size n corresponding to a Dyck island defined by one path $\gamma$. Let $v$ be a vertex on $\mathrm{swb}(\gamma)$ corresponding to a vertex of type $ud$ or $du$. Then the contribution of all pairs containing the vertex $v$ is $\pm \mathrm{height}(v)$, where the sign corresponds to whether $v$ is a $1$ or $-1$ in the alternating sign matrix picture.
\label{lem:heightv}
\end{lemma}
\begin{proof}
If $v$ is a vertex of type $ud$ or $du$ with height 0 and $v$ is not simultaneously in $\mathrm{neb}(\gamma)$, then we see that $v$ corresponds to neither $1$ nor $-1$ so that its contribution must be 0. When $\mathrm{neb}(\gamma)$ has a vertex corresponding to $du$ at $v$ as well, $v$ corresponds to the alternating sign matrix entry $-1$, but $CZ(v)$ is empty. Thus the result follows when $v$ has height 0.
Assume that $\mathrm{height}(v) \geq 1$ and suppose that $v$ corresponds to $1$ (the proof of the case of $-1$ is completely analogous). We wish to demonstrate that the contribution due to vertices of type $ud$ or $du$ in $\mathrm{neb}(\gamma)$ and diagonal ones inside $CZ(v)$ sum to $\mathrm{height}(v)$. Each diagonal one contributes 1 to the sum. The proof will follow once we give a description of the subpath of $\mathrm{neb}(\gamma)$ in $CZ(v)$.
First, we claim that the subpath starts at a vertex of type $uu$ or $ud$ and ends in a vertex of type $ud$ or $dd$. If this were not the case, we would be able to add an additional vertex to the subpath. Thus, the word corresponding to the subpath begins with $u$ and ends with $d$.
Assume that $\mathrm{neb}(\gamma)$ does not touch the diagonal in the contribution zone of $v$. Then there are $\mathrm{height}(v) - 1$ diagonal ones, and in the word description of the subpath of $\mathrm{neb}(\gamma)$ in $CZ(v)$, each $ud$ corresponds to $1$ and each $du$ corresponds to $-1$. By Lemma \ref{lem:words}, the contribution from the subpath of $\mathrm{neb}(\gamma)$ in $CZ(v)$ is 1, and we conclude that the contribution of all terms in $CZ(v)$ is $\mathrm{height}(v)$. See Figure \ref{fig:contribution1} for an illustration.
\begin{figure}
\caption{Contributions from the contribution zone of $v$ include a diagonal one.}
\label{fig:contribution1}
\end{figure}
Lastly, let us also consider the case when $\mathrm{neb}(\gamma)$ touches the diagonal in the contribution zone of $v$ a total of $k$ times. Then there are $\mathrm{height}(v)-1-k$ diagonal ones. The subpath of $\mathrm{neb}(\gamma)$ in the contribution zone of $v$ must still begin with a vertex of type $uu$ or $ud$ and end with a vertex of type $ud$ or $dd$. In constructing the word corresponding to the subpath, let us mark each vertex which touches the diagonal with $\tilde{d}\tilde{u}$. Such vertices correspond to 0, in the alternating sign matrix picture, but vertices labelled $u\tilde{d}$ or $\tilde{u}d$ still correspond to 1. Thus the contribution from the subpath in $CZ(v)$ is $1+k$ and the total contribution is $\mathrm{height}(v)$. See Figure \ref{fig:contribution2} for an illustration.
\begin{figure}
\caption{The case when $\mathrm{neb}(\gamma)$ touches the diagonal in the contribution zone of $v$.}
\label{fig:contribution2}
\end{figure}
\end{proof}
\begin{lemma}
Suppose that an $n\times n$ alternating sign matrix, $\gamma$, corresponds to a Dyck island described by a single loop, which we also call $\gamma$, and suppose also that it has $m$ diagonal ones. Then we have
\begin{equation*}
\mathrm{inv}(\gamma) = \mathrm{semilength}(\gamma) + m.
\end{equation*}
\label{lem:singleLoop}
\end{lemma}
\begin{proof}
We will pair up vertices in the following manner: First, we pair up all diagonal ones with each of the vertices in their respective contribution zones; these vertices all lie on $\mathrm{neb}(\gamma)$. Second, we pair up all vertices on $\mathrm{swb}(\gamma)$ with each of the vertices in their respective contribution zones; these vertices lie on $\mathrm{neb}(\gamma)$ or are diagonal ones. Lastly, we remark that all pairs are then accounted for, since there are no vertices above and to the right of $\mathrm{neb}(\gamma)$.
By Lemma \ref{lem:heightv}, each of the vertices on $\mathrm{swb}(\gamma)$ contributes $\mathrm{height}(v)$ for corners of type $ud$ and $-\mathrm{height}(v)$ for corners of type $du$. Diagonal ones contribute 1 to the inversion number sum, since the contribution from a diagonal one comes from pairs which lie on a subpath of $\mathrm{neb}(\gamma)$ to which Lemma \ref{lem:words} applies.
Thus, it suffices to compute the following alternating sum of the heights:
\begin{equation*}
\sum_{x, \textrm{ corners of type } ud} \mathrm{height}(x) - \sum_{y, \textrm{ corners of type } du} \mathrm{height}(y) = \mathrm{semilength}(\gamma).
\end{equation*}
This identity clearly holds when $\mathrm{swb}(\gamma)$ corresponds to the Dyck word $uu\ldots udd \ldots d$. We show that it holds for any permissible $\mathrm{swb}(\gamma)$ by showing that the sum is invariant under the interchange $ud \leftrightarrow du$ whenever the interchange yields a permissible Dyck word. Locally, there are four cases to check:
\begin{align*}
\ldots uudd \ldots &\leftrightarrow \ldots udud \ldots \\
\ldots uudu \ldots &\leftrightarrow \ldots uduu \ldots \\
\ldots dudd \ldots &\leftrightarrow \ldots ddud \ldots \\
\ldots dudu \ldots &\leftrightarrow \ldots dduu \ldots
\end{align*}
Let us check the first case and remark that the other cases are completely analogous. On the left hand side, locally, we have a single vertex of type $ud$ at height $h$. On the right hand side, this becomes two vertices of type $ud$ at height $h-1$ and one vertex of type $du$ at height $h-2$. The corresponding sum for the right hand side is $2(h-1) -(h-2) = h$.
\end{proof}
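The alternating height sum above can be verified exhaustively for small semilengths. The Python sketch below (our illustration; the function names are ours) enumerates all Dyck words of a given semilength and checks that the sum of the heights of the $ud$ corners minus the sum of the heights of the $du$ corners equals the semilength.

```python
from itertools import product

def dyck_words(n):
    """All Dyck words of semilength n: n u's and n d's, prefixes never below 0."""
    for w in product("ud", repeat=2 * n):
        h, ok = 0, True
        for c in w:
            h += 1 if c == "u" else -1
            if h < 0:
                ok = False
                break
        if ok and h == 0:
            yield "".join(w)

def alternating_height_sum(w):
    """Sum of heights of ud-corners minus sum of heights of du-corners."""
    total, h = 0, 0
    for i, c in enumerate(w):
        h += 1 if c == "u" else -1   # height after the current step
        if w[i:i + 2] == "ud":
            total += h               # ud-corner sits at the current height
        elif w[i:i + 2] == "du":
            total -= h               # du-corner sits at the current height
    return total
```

For instance, for $uudd$ the single $ud$ corner has height $2$, and for $udud$ the two $ud$ corners at height $1$ and the $du$ corner at height $0$ again sum to $2$.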
\begin{defn}
An \emph{off-diagonal osculation} is a vertex $v$ not on the diagonal which lies on $\mathrm{neb}(\gamma_1)$ and $\mathrm{swb}(\gamma_2)$ for two distinct boundary paths $\gamma_1$ and $\gamma_2$ in a Dyck island. See Figure \ref{fig:osculation} for an example.
\end{defn}
We will need the following technical notation in order to complete the proof of Theorem \ref{thm:DIinversions}.
\begin{defn}
Consider some $n\times n$ alternating sign matrix corresponding to a Dyck island with boundary paths $\gamma_1, \ldots, \gamma_\ell$. We define $\mathrm{N}(\gamma_i)$ to be the number of loops in the set $\{\gamma_1,\ldots,\gamma_\ell\}$, excluding $\gamma_i$, which contain $\gamma_i$ in their interiors.
Likewise, if this given alternating sign matrix has diagonal ones located at vertices $p_1, \ldots, p_m$, then $\mathrm{N}(p_i)$ denotes the number of loops in the set $\{\gamma_1, \ldots,\gamma_\ell\}$ which contain $p_i$ in their interiors.
\end{defn}
\begin{defn}
If a Dyck island, $\delta$, is given and is described by the boundary paths $\gamma_1,\ldots, \gamma_\ell$, consider a new Dyck island, denoted by $\delta_i$, which is described by the single path $\gamma_i$ for some $1\leq i \leq \ell$. Let $\mathrm{Diag}(\gamma_i)$ be the number of diagonal ones of $\delta_i$ contained inside $\gamma_i$.
\end{defn}
\begin{defn}
Let $\gamma$ be a boundary path in some fixed Dyck island. We define $\mathcal{O}^\gamma$ to be the set of vertices on the diagonal which are also vertices of $\gamma$. We define $\mathcal{O}^\gamma_{sw}$ to be the subset of $\mathcal{O}^\gamma$ which restricts to vertices on $\mathrm{swb}(\gamma)$. We note that by our conventions, it is possible for a vertex to be a vertex of $\mathrm{swb}(\gamma)$ and $\mathrm{neb}(\gamma)$ simultaneously.
\end{defn}
\begin{defn}
Consider some $n\times n$ alternating sign matrix corresponding to a Dyck island with boundary paths $\gamma_1, \ldots, \gamma_\ell$. The \emph{contribution of $\mathrm{neb}(\gamma_i)$ to the inversion sum}, denoted $\mathrm{C}(\mathrm{neb}(\gamma_i))$, is the sum of the contributions from all of the vertices in $\mathrm{neb}(\gamma_i)$. In exactly the same way, we define the \emph{contribution of $\mathrm{swb}(\gamma_i)$ to the inversion sum} and denote it by $\mathrm{C}(\mathrm{swb}(\gamma_i))$. If a vertex is in both $\mathrm{neb}(\gamma_i)$ and $\mathrm{swb}(\gamma_i)$, then in order to avoid double counting, we establish the convention that it contributes to $\mathrm{C}(\mathrm{neb}(\gamma_i))$ but not to $\mathrm{C}(\mathrm{swb}(\gamma_i))$.
\end{defn}
\begin{figure}
\caption{An example of a Dyck island with two nested boundary paths. The only off-diagonal osculation is labelled with a circle.}
\label{fig:osculation}
\end{figure}
\begin{proof}[Proof of Theorem \ref{thm:DIinversions}]
Fix an $n\times n$ alternating sign matrix which corresponds to a Dyck island with boundary paths $\gamma_1, \ldots , \gamma_\ell$.
If $\gamma_1, \ldots, \gamma_\ell$ are disjoint and not nested, the formula is clear, since we can consider a block diagonal decomposition of smaller alternating sign matrices, each containing a single $\gamma_i$. Then the result follows by applying Lemma \ref{lem:singleLoop}.
To deal with the case when some of the $\gamma_i$ are nested but have no off-diagonal osculations, we claim that we can evaluate the contributions of pairs containing vertices in the southwest boundaries and northeast boundaries of the $\gamma_i$ according to the formulas:
\begin{align*}
\mathrm{C}(\mathrm{swb} (\gamma)) &= \mathrm{semilength}(\gamma) + \mathrm{N}(\gamma) (1 + \#\{\mathcal{O}^\gamma_{sw}\}) \\
\mathrm{C}(\mathrm{neb} (\gamma)) &= \mathrm{N}(\gamma) (1+ \#\{\mathcal{O}^\gamma \setminus \mathcal{O}^\gamma_{sw}\}).
\end{align*}
A justification of these formulas will be provided in Lemma \ref{lem:swbneb}.
Then if $\delta$ is the Dyck island described by the boundary paths $\gamma_1 , \ldots , \gamma_\ell$ with diagonal ones labelled $p_1, \ldots, p_m$ and with no off-diagonal osculations, we have the following evaluation of $\mathrm{inv}(\delta)$:
\begin{align*}
\mathrm{inv}(\delta) &= \sum_{i=1}^\ell \mathrm{C}(\mathrm{swb}(\gamma_i)) + \mathrm{C}(\mathrm{neb}(\gamma_i)) + \sum_{j=1}^m \mathrm{N}(p_j) \\
&= \sum_{i=1}^\ell \mathrm{semilength}(\gamma_i) + \mathrm{N}(\gamma_i)(2+ \#\{\mathcal{O}^{\gamma_i}\}) + \sum_{j=1}^m \mathrm{N}(p_j) \\
&= \sum_{i=1}^\ell \mathrm{semilength}(\gamma_i) + \mathrm{Diag}(\gamma_i) \\
&= \sum_{i=1}^\ell \mathrm{inv}(\gamma_i).
\end{align*}
The second to last equality holds because the number of diagonal osculations exactly accounts for the diagonal ones that would have been there otherwise.
Also, the value $2$ accounts for the two diagonal ones which would have been in place of the left and right endpoints of the northeast and southwest boundaries of $\gamma_i$. This is equivalent to the assertion that
\begin{equation*}
\sum_{i=1}^\ell \mathrm{N}(\gamma_i)(2+\#\{\mathcal{O}^{\gamma_i}\}) + \sum_{j=1}^m \mathrm{N}(p_j) = \sum_{i=1}^\ell \mathrm{Diag}(\gamma_i).
\end{equation*}
See Figures \ref{fig:calculation1} and \ref{fig:calculation2} for specific examples of this calculation.
Lastly, let us account for off-diagonal osculations. Suppose $v_1, \ldots, v_k$ are all of the off-diagonal osculations in a given Dyck island. By definition, for each $v_i$, there are two distinct boundary paths $\gamma_{i_1}$ and $\gamma_{i_2}$ which meet at $v_i$. Without loss of generality, suppose that $\mathrm{neb}(\gamma_{i_1})$ is incident to $v_i$ in a vertex of type $ud$ and that $\mathrm{neb}(\gamma_{i_2})$ is incident to $v_i$ in a vertex of type $du$. It must follow that $\gamma_{i_1}$ is contained in the interior of $\gamma_{i_2}$.
Then suppose $v_i$ splits into two vertices $v_{i_1}$ and $v_{i_2}$ located at the same lattice point, with the formal condition that $v_{i_2}$ is considered above and to the right of $v_{i_1}$ and that $v_{i_1}$ is in $\mathrm{neb}(\gamma_{i_1})$ and $v_{i_2}$ is in $\mathrm{neb}(\gamma_{i_2})$. For the sake of computing the inversion number, we assume that $v_{i_1}$ corresponds to a $1$ and that $v_{i_2}$ corresponds to $-1$. If we carry out this procedure for each $i$ from $1$ to $k$, then we have formally reduced to the case above with no osculations. To finish, we simply need to calculate the effect of ``merging'' the two vertices $v_{i_1}$ and $v_{i_2}$ back into the vertex $v_i$. Suppose that $v_{i_2}$ contributes $-x$ to $\mathrm{C}(\mathrm{neb}(\gamma_{i_2}))$ for some non-negative integer $x$. It follows that $v_{i_1}$ contributes $x+1$ to $\mathrm{C}(\mathrm{neb}(\gamma_{i_1}))$, since it is nested one additional level beyond $v_{i_2}$, being (formally) in the interior of $\gamma_{i_2}$. The alternating sign matrix entry located at $v_i$ corresponds to a $0$, so the effect of merging $v_{i_1}$ and $v_{i_2}$ on the inversion number sum is to remove the net contribution $-x + (x+1) = 1$. Note that this splitting and merging procedure does not influence the contributions of other vertices to the inversion number sum.
Thus in the end, the sum $\sum_{i=1}^\ell \mathrm{inv}(\gamma_i)$ overestimates the inversion number by exactly the number of off-diagonal osculations, $k$, and the equation holds.
\end{proof}
\begin{figure}
\caption{We calculate the inversion number of the given Dyck island $\delta$ in two ways. First, we use the formula of Theorem \ref{thm:DIinversions}.}
\label{fig:calculation1}
\end{figure}
\begin{figure}
\caption{For comparison, we modify $\gamma_1$ so that only $\mathrm{neb}(\gamma_1)$ touches the diagonal.}
\label{fig:calculation2}
\end{figure}
\begin{lemma}
Consider an $n \times n$ alternating sign matrix corresponding to a Dyck island described by boundary paths $\gamma_1,\ldots , \gamma_\ell$. The contributions from the southwest boundaries and northeast boundaries of each $\gamma_i$ are:
\begin{align*}
\mathrm{C}(\mathrm{swb} (\gamma_i)) &= \mathrm{semilength}(\gamma_i) + \mathrm{N}(\gamma_i) (1 + \#\{\mathcal{O}^{\gamma_i}_{sw}\}) \\
\mathrm{C}(\mathrm{neb} (\gamma_i)) &= \mathrm{N}(\gamma_i) (1+ \#\{\mathcal{O}^{\gamma_i} \setminus \mathcal{O}^{\gamma_i}_{sw}\}).
\end{align*}
\label{lem:swbneb}
\end{lemma}
\begin{proof}
Suppose that $\gamma_i$ is in the interior of $\mathrm{N} = \mathrm{N}(\gamma_i)$ loops. First, let us prove the northeast boundary equation. Consider the Dyck word representing $\mathrm{neb}(\gamma_i)$. Generically, each `$ud$' contributes $+\mathrm{N}$ and each `$du$' contributes $-\mathrm{N}$. Thus by Lemma \ref{lem:words}, the total contribution is $\mathrm{N}$. The only exception to this rule is when $\mathrm{neb}(\gamma_i)$ touches the diagonal at a vertex $v$ such that $v \notin\mathcal{O}^{\gamma_i}_{sw}$. In this case, the corresponding `$du$' does not contribute a $-\mathrm{N}$ since there is a $0$ in place of a $-1$ in the alternating sign matrix picture. This establishes the equation for the contribution of the northeast boundary.
For the southwest boundary, $\mathrm{swb}(\gamma_i)$, consider the corresponding Dyck word. Generically, each instance of `$ud$' contributes ($\mathrm{N} + \mathrm{height}$) and each instance of `$du$' contributes ($-\mathrm{N} - \mathrm{height}$). By Lemma \ref{lem:words} and the calculation in Lemma \ref{lem:singleLoop}, this word evaluates to $\mathrm{semilength}(\gamma_i) + \mathrm{N}$. There are two special cases to consider: corner vertices of $\mathrm{swb}(\gamma_i)$ along the diagonal which are either in $\mathrm{neb}(\gamma_i)$ or not in $\mathrm{neb}(\gamma_i)$. When a vertex along the diagonal is a corner vertex of $\mathrm{swb}(\gamma_i)$ but not of $\mathrm{neb}(\gamma_i)$, it corresponds to a $0$ in the alternating sign matrix picture instead of the $-1$ in the generic case. Thus we compensate by adding an extra $\mathrm{N}$ to $\mathrm{C}(\mathrm{swb}(\gamma_i))$. In the other case, when a given vertex along the diagonal is simultaneously in $\mathrm{swb}(\gamma_i)$ and $\mathrm{neb}(\gamma_i)$, we observe that this vertex corresponds to $-1$ in the alternating sign matrix picture, but that it has already been accounted for in $\mathrm{C}(\mathrm{neb}(\gamma_i))$. Hence, every diagonal vertex on $\mathrm{swb}(\gamma_i)$ will contribute $0$ to $\mathrm{C}(\mathrm{swb}(\gamma_i))$ instead of $-\mathrm{N}$. Thus the equation for the contribution of the southwest boundary is established.
\end{proof}
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Cycle density in infinite Ramanujan graphs\thanksref{T1}}
\runtitle{Cycle density in infinite Ramanujan graphs}
\begin{aug}
\author[A]{\fnms{Russell}~\snm{Lyons}\corref{}\ead[label=e1]{[email protected]}\ead[label=u1,url]{http://pages.iu.edu/\textasciitilde rdlyons/}}
\and
\author[B]{\fnms{Yuval}~\snm{Peres}\ead[label=e2]{[email protected]}\ead[label=u2,url]{http://research.microsoft.com/en-us/um/people/peres/}}
\runauthor{R. Lyons and Y. Peres}
\affiliation{Indiana University and Microsoft Corporation}
\address[A]{Department of Mathematics\\
Indiana University\\
831 E 3rd St\\
Bloomington, Indiana 47405-7106\\
USA\\
\printead{e1}\\
\printead{u1}}
\address[B]{Microsoft Corporation\\
One Microsoft Way\\
Redmond, Washington 98052-6399\\
USA\\
\printead{e2}\\
\printead{u2}}
\end{aug}
\thankstext{T1}{Supported in part by Microsoft Research and
NSF Grant DMS-10-07244.}
\received{\smonth{10} \syear{2013}}
\revised{\smonth{8} \syear{2014}}
\begin{abstract}
We introduce a technique using nonbacktracking random walk for
estimating the spectral radius of simple random walk.
This technique relates the density of nontrivial cycles in simple random
walk to that in nonbacktracking random walk. We apply this to
infinite Ramanujan graphs, which are regular graphs whose spectral
radius equals that of the tree of the same degree.
Kesten showed that the only infinite Ramanujan graphs that are Cayley
graphs are trees.
This result was extended to unimodular random rooted regular graphs by
Ab\'ert, Glasner and Vir\'ag.
We show that an analogous result holds for all regular graphs: the
frequency of times spent by simple random walk in a nontrivial cycle is
a.s. 0 on every infinite Ramanujan graph.
We also give quantitative versions of that result, which we apply to answer
another question of
Ab\'ert, Glasner and Vir\'ag, showing that on an infinite Ramanujan graph,
the probability that simple random walk encounters a short cycle tends
to 0
a.s. as the time tends to infinity.
\end{abstract}
\begin{keyword}[class=AMS]
\kwd[Primary ]{60G50}
\kwd{82C41}
\kwd[; secondary ]{05C81}.
\end{keyword}
\begin{keyword}
\kwd{Nonbacktracking random walks}
\kwd{spectral radius}
\kwd{regular graphs}.
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{s.intro}
A path in a multigraph is called \textit{nonbacktracking} if no edge is
immediately followed by its reversal.
Note that a loop is its own reversal.
Nonbacktracking random walks are almost as natural as ordinary
random walks, though more difficult to analyze in most situations.
Moreover, they can be more useful than ordinary random walks when random
walks are used to search for something, as they explore more quickly, not
wasting time immediately backtracking; see \citet{MR2348845}.
Our aim, however, is to use
them to analyze the spectral radius of ordinary random walks on regular
graphs.
The \textit{spectral radius} of a (connected, locally finite)
multigraph $G$
is defined to be $\rho(G) := \limsup_{n \to\infty} p_n(o, o)^{1/n}$
for a
vertex $o \in G$, where $p_n(x, y)$ is the $n$-step transition probability
for simple random walk on $G$ from $x$ to $y$. It is well known that
$\rho(G)$ does not depend on the choice of $o$.
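For intuition, $\rho(\mathbb{T}_d)$ can be computed numerically from this definition: the distance of simple random walk from $o$ on $\mathbb{T}_d$ is a birth--death chain, so $p_n(o,o)$ is an exact dynamic program, and $p_n(o,o)^{1/n}$ approaches $2\sqrt{d-1}/d$ (convergence is only polynomial, so the tolerance below is loose). A minimal sketch in Python (all names are ours):

```python
import math

def return_prob(d, n):
    """Exact p_n(o, o) for simple random walk on the d-regular tree T_d.

    The distance from o is a birth-death chain on {0, 1, 2, ...}:
    from 0 it moves to 1 with probability 1; from k >= 1 it moves to
    k + 1 with probability (d-1)/d and to k - 1 with probability 1/d.
    """
    p = [0.0] * (n + 1)
    p[0] = 1.0
    for _ in range(n):
        q = [0.0] * (n + 1)
        for k, mass in enumerate(p):
            if mass == 0.0:
                continue
            if k == 0:
                q[1] += mass
            else:
                if k + 1 <= n:
                    q[k + 1] += mass * (d - 1) / d
                q[k - 1] += mass / d
        p = q
    return p[0]

d = 3
n = 2000  # even, since T_d is bipartite and p_n(o, o) = 0 for odd n
estimate = return_prob(d, n) ** (1.0 / n)
rho = 2 * math.sqrt(d - 1) / d  # the known value of rho(T_d)
```
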
If $G = \mathbb{T}_d$ is a regular tree of degree $d$, then $\rho(G) =
2\sqrt{d-1}/d$. Regular trees are Cayley graphs of groups. In general,
when $G$ is a Cayley graph of a group, \citet{MR22253}
proved that $\rho(G) > \rho(\mathbb{T}_d)$ when $G$ has degree $d$
and $G \ne
\mathbb{T}_d$. \citet{MR222911} also proved that for Cayley graphs,
$\rho(G) =1$ iff $G$ is amenable.
If $G$ is a $d$-regular multigraph, then its universal cover is
$\mathbb{T}_d$, whence $\rho(\mathbb{T}_d) \le\rho(G) \le1$.
Using the method of proof
due to \citet{MR536645}, various researchers related $1 - \rho(G)$ to the
expansion (or isoperimetric) constant of infinite graphs $G$, showing that
again, $G$ is amenable iff $\rho(G) = 1$; see \citet{MR85m58185}, \citet{MR88h58118}, \citet{MR87e60124}, \citet{MR973878}, \citet{MR89g60216},
\citet{MR89a05103} and \citet{MR94i31012}.
It appears considerably more difficult to understand the other inequality
for $\rho(G)$: when is $\rho(G) = \rho(\mathbb{T}_d)$?
This question will be our focus.
For finite graphs, the spectral radius is 1. Of interest instead is the
second largest eigenvalue, $\lambda_2$, of the transition matrix.
An inequality of Alon and Boppana [see \citet{MR88e05077} and \citet{MR1124768}] says that if $\langle{G_n; n \ge1}\rangle$
is a
family of
$d$-regular graphs whose size tends to infinity, then $\liminf_{n
\to\infty} \lambda_2(G_n) \ge\rho(\mathbb{T}_d)$.
Regular graphs $G$ such that all eigenvalues have absolute value either 1
or at most $\rho(\mathbb{T}_d)$ were baptized \textit{Ramanujan graphs} by
\citet{MR963118}, who, with \citet{MR939574},
were the first to exhibit explicit such families. Moreover,
their examples had better expansion properties than the random graphs that
had been constructed earlier. See \citet{MR1966527}
and \citet{MR2331350} for
surveys of finite Ramanujan graphs.
\citet{AGVmeasurable} studied the density of short
cycles in Ramanujan
graphs. One of their tools was graph limits, which led them to define and
study \textit{infinite Ramanujan graphs}, which are
$d$-regular infinite
graphs whose spectral radius equals $\rho(\mathbb{T}_d)$.
Now limits of finite graphs, taken in the appropriate sense, are
probability measures on rooted graphs; the probability measures that arise
have a property called unimodularity.
Theorem~5 of \citet{AGVmeasurable} shows that every
unimodular
random rooted infinite regular graph that is a.s. Ramanujan is a.s. a tree.
Unimodularity is a kind of stochastic homogeneity that, among other things,
ensures that simple random walk visits short
cycles with positive frequency when they exist.
\citet{AGVmeasurable} asked whether the hypothesis
of unimodularity could
be weakened to something called stationarity. We answer this affirmatively
in a very strong sense, using no extra hypotheses on the graph and
including cycles of all lengths at once.
To state our result,
call a cycle \textit{nontrivial} if it is not purely a
backtracking cycle,
that is, if when backtracks are erased iteratively from the cycle, some edge
remains.
For example, a single loop is a nontrivial 1-edge cycle, but a~loop
followed by the same loop is a trivial 2-edge cycle.
Let $X = \langle{X_n; n \ge 1}\rangle$ be simple random walk on $G$,
where $X_n$
are directed edges and the tail of $X_1$ is any fixed vertex.
Call $n$ a \textit{nontrivial cycle time} of $X$ if there
exist $1 \le s \le n \le t$ such that $(X_s, X_{s+1}, \ldots, X_t)$ is a nontrivial cycle.
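The iterative erasure of backtracks is a stack reduction, which yields a mechanical test for nontriviality of a cycle. In the sketch below (our own encoding, not from the text), an oriented edge is a pair `(id, rev_id)`, a loop having `id == rev_id`:

```python
def erase_backtracks(edges):
    """Iteratively erase backtracks from a sequence of oriented edges.

    Each edge is a pair (id, rev_id); a loop satisfies id == rev_id.
    An edge immediately followed by its reversal is erased, repeatedly;
    one left-to-right pass with a stack accomplishes this.
    """
    stack = []
    for e in edges:
        if stack and stack[-1][1] == e[0]:  # e reverses the previous edge
            stack.pop()
        else:
            stack.append(e)
    return stack

def is_nontrivial(cycle):
    """A cycle is nontrivial iff some edge survives backtrack erasure."""
    return len(erase_backtracks(cycle)) > 0

loop = (0, 0)              # a loop is its own reversal
e, f = (1, 2), (2, 1)      # a nonloop edge and its reversal
```

This reproduces the example in the text: a single loop is a nontrivial 1-edge cycle, while a loop followed by the same loop is trivial.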
\begin{theorem}\label{t.main}
If $G$ is an infinite Ramanujan graph of degree at least 3, then a.s. the
density of nontrivial cycle times of $X$ in $[1, n]$ tends to 0 as $n
\to\infty$.
\end{theorem}
Now fix $L \ge 1$. Let $q_n$ be
the probability that simple random walk at time $n$ lies on a nontrivial
cycle of length at most $L$. Then the preceding theorem implies that
$\liminf_{n \to\infty} q_n = 0$.
In their Problem~10, \citet{AGVmeasurable} ask
whether $\lim_{n \to\infty} q_n = 0$.
We answer it affirmatively.
\begin{theorem}\label{t.prob10}
Let $G$ be an infinite Ramanujan graph and $L \ge 1$.
Then $\lim_{n \to\infty} q_n = 0$.
\end{theorem}
In broad outline,
our technique to prove these results is the following:
First, we prove that when simple
random walk on $G$ has many nontrivial cycle times, then so does
nonbacktracking random walk.
Second, we deduce that under these circumstances, we may transform
nonbacktracking paths to nonbacktracking cycles with controlled length
and find that there are many nonbacktracking cycles.
The exponential growth rate of the number of nonbacktracking cycles is
called the cogrowth of $G$.
Finally, we use the cogrowth formula relating cogrowth to spectral radius
to conclude that $G$ is not Ramanujan.
Thus, of central importance to us is the notion of cogrowth. We state the
essentials here.
Let the number of nonbacktracking cycles of length $n$ starting from some
fixed $o \in\mathtt{V}(G)$ be $b_n(o)$.
Let
\[
\operatorname{\mathtt{cogr}} (G) := \limsup_{n \to\infty} b_n(o)^{1/n}
\]
be the exponential
growth rate of the number of nonbacktracking cycles containing~$o$.
This number is called the \textit{cogrowth} of $G$.
The reason for this name is that if we consider a universal covering map
$\varphi\dvtx T \to G$, then the cogrowth of $G$ equals the exponential
growth rate of $\varphi^{-1}(o)$ inside $T$ since $\varphi$
induces a bijection between simple paths in $T$ and
nonbacktracking paths in $G$.
By using this covering map, one can see that $\operatorname{\mathtt{cogr}} (G)$ does
not depend on
$o$.
Note, too, that if $\mathcal{P}$ is a finite path in $G$ that lifts to
a path in
$T$ from vertex $x$ to vertex $y$, then erasing backtracks from~$\mathcal{P}$
iteratively yields $\varphi[\mathcal{P}']$, where $\mathcal{P}'$ is
the shortest path in
$T$ from $x$ to $y$.
Let $G$ be a connected graph.
It is not hard to check the following:
If $G$ has no simple nonloop cycle and at most one loop,
then $\operatorname{\mathtt{cogr}} (G) = 0$.
If $G$ has exactly one simple nonloop cycle and no loops, or no simple
nonloop cycle and exactly
two loops, then $\operatorname{\mathtt{cogr}} (G) = 1$.
In all other cases, that is, when the fundamental group of $G$ is not
virtually abelian, $\operatorname{\mathtt{cogr}} (G) > 1$.
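These three cases can be spot-checked by brute force on small examples (our own encoding of oriented edges; the enumeration is only sensible for tiny graphs): a graph with a single simple cycle has $b_n(o) \le 2$, as does a single vertex carrying two loops, while three parallel edges already give growth rate $2 > 1$:

```python
def build(edge_list):
    """Oriented-edge arrays (tails, heads, rev) for an unoriented
    multigraph; a loop (u, u) is one orientation, its own reversal."""
    tails, heads, rev = [], [], []
    for u, v in edge_list:
        i = len(rev)
        if u == v:
            tails.append(u)
            heads.append(u)
            rev.append(i)
        else:
            tails += [u, v]
            heads += [v, u]
            rev += [i + 1, i]
    return tails, heads, rev

def nb_cycle_count(graph, o, n):
    """b_n(o): the number of nonbacktracking cycles of length n starting
    at o, by brute-force depth-first enumeration."""
    tails, heads, rev = graph
    out = {}
    for e, t in enumerate(tails):
        out.setdefault(t, []).append(e)
    def extend(last, v, length):
        if length == n:
            return 1 if v == o else 0
        total = 0
        for e in out.get(v, ()):
            if last is not None and e == rev[last]:
                continue  # an edge may not be followed by its reversal
            total += extend(e, heads[e], length + 1)
        return total
    return extend(None, o, 0)

one_cycle = build([(0, 1), (1, 2), (2, 3), (3, 0)])  # a single 4-cycle
two_loops = build([(0, 0), (0, 0)])                  # two loops at one vertex
theta = build([(0, 1), (0, 1), (0, 1)])              # three parallel edges
```
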
The central result about cogrowth is the following formula (\ref{e.cogrowth}),
due to \citet{MR599539} for Cayley graphs and extended by
\citet{MR1120509} to all regular graphs.
\begin{theorem}[(Cogrowth formula)]\label{t.cogrowth}
If $G$ is a $d$-regular connected multigraph, then
\begin{equation}
\label{e.cogr-cond} \operatorname{\mathtt{cogr}} (G) > \sqrt{d-1} \quad\mbox{iff}\quad \rho(G) >
{2\sqrt{d-1} \over d},
\end{equation}
in which case
\begin{equation}
\label{e.cogrowth} d \rho(G) = {d-1 \over\operatorname{\mathtt{cogr}} (G)} + \operatorname{\mathtt{cogr}} (G).
\end{equation}
If (\ref{e.cogr-cond}) fails, then
$\rho(G) = 2\sqrt{d-1}/d$
and $\operatorname{\mathtt{cogr}} (G) \le\sqrt{d-1}$.
\end{theorem}
See \citet{LPbook}, Section~6.3, for a proof.
Our use of Theorem~\ref{t.cogrowth} will be mainly via (\ref{e.cogr-cond}), rather
than (\ref{e.cogrowth}).
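Formula (\ref{e.cogrowth}) can be sanity-checked on a concrete example of our own choosing: the Cayley graph of $\mathbb{Z}$ with generating set $\{\pm1, \pm2\}$ is $4$-regular and amenable, so $\rho(G) = 1$ and (\ref{e.cogrowth}) predicts $\operatorname{\mathtt{cogr}} (G) = 3$, the root of $4 = 3/c + c$ exceeding $\sqrt{3}$. A dynamic program over (position, last step) counts $b_n(0)$ and confirms the growth rate:

```python
STEPS = (1, -1, 2, -2)  # generators of Z; the reversal of step s is -s

def nb_cycle_counts(N):
    """Return [b_1, ..., b_N] for the Cayley graph of Z with generating
    set {+1, -1, +2, -2}: b_n counts walks of n steps from 0 back to 0
    that never immediately undo the previous step."""
    counts = {(s, s): 1 for s in STEPS}  # (position, last step) -> count
    b = []
    for _ in range(N):
        b.append(sum(m for (pos, _), m in counts.items() if pos == 0))
        nxt = {}
        for (pos, last), m in counts.items():
            for s in STEPS:
                if s == -last:           # immediate reversal: backtracking
                    continue
                key = (pos + s, s)
                nxt[key] = nxt.get(key, 0) + m
        counts = nxt
    return b

b = nb_cycle_counts(200)
growth = b[-1] / b[-2]   # approximates cogr(G); predicted to be 3
```

The final assertion checks $d\rho \approx (d-1)/\operatorname{\mathtt{cogr}} + \operatorname{\mathtt{cogr}}$ with $d = 4$ and $\rho = 1$.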
In order to use (\ref{e.cogr-cond}), we shall prove the following
result on
density of nontrivial cycle times:
\begin{theorem}\label{t.main2}
Suppose that $G$ is a graph all of whose degrees are at least~3.
If with positive probability the limsup
density of nontrivial cycle times of simple random walk in $[1, n]$ is
positive as $n \to\infty$, then the same holds for nonbacktracking random
walk.
\end{theorem}
Here, nonbacktracking random walk is the random walk that at every
time~$n$, chooses
uniformly among all possible edges that are not the reversal of the $n$th
edge.
In terms of the universal cover $\varphi\dvtx T \to G$, if simple random
walk $X = \langle{X_n; n \ge1}\rangle$ is lifted to a random walk,
call it
$\widehat{X}$, on
$T$, then $\widehat{X}$ is simple random walk on $T$.
Backtracking on $G$ is the same as backtracking on $T$.
Since all degrees of $T$
are at least 3, $\widehat{X}$ is transient and so there is a unique
simple path
$\mathcal{P}$ in~$T$ with the same starting point as $\widehat{X}$
and having
infinite intersection with $\widehat{X}$. The law of
$\varphi[\mathcal{P}]$ is that of nonbacktracking random walk on $G$.
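This direct description of nonbacktracking random walk is easy to simulate; a minimal sketch (encoding and names ours), run here on the complete graph $K_4$:

```python
import random

def build(edge_list):
    """Oriented-edge arrays; a loop (u, u) would be its own reversal."""
    tails, heads, rev = [], [], []
    for u, v in edge_list:
        i = len(rev)
        if u == v:
            tails.append(u)
            heads.append(u)
            rev.append(i)
        else:
            tails += [u, v]
            heads += [v, u]
            rev += [i + 1, i]
    return tails, heads, rev

def nb_walk(tails, heads, rev, start, n, rng):
    """n steps of nonbacktracking random walk from `start`: each step is
    uniform among the outgoing edges other than the reversal of the
    previous edge (degrees at least 2 guarantee an available edge)."""
    out = {}
    for e, t in enumerate(tails):
        out.setdefault(t, []).append(e)
    path, last, v = [], None, start
    for _ in range(n):
        options = [e for e in out[v] if last is None or e != rev[last]]
        e = rng.choice(options)
        path.append(e)
        last, v = e, heads[e]
    return path

K4 = build([(a, b) for a in range(4) for b in range(a + 1, 4)])
walk = nb_walk(*K4, 0, 500, random.Random(0))
```
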
As it may be of separate interest, we note in passing the following
elementary bound on the number of nonbacktracking cycles.
Write $S(x) := \{ n; b_n(x) \ne0\}$.
If a nonbacktracking cycle is a loop or
has the property that its last edge is different from the
reverse of its first edge, then call the cycle \textit{fully
nonbacktracking} (usually called ``cyclically reduced'' in the case of a
Cayley graph).
Let the number of fully nonbacktracking
cycles of length $n$ starting from $x$ be $b_n^*(x)$.
\begin{proposition}\label{p.cogrlimit}
Let $G$ be a graph with $\operatorname{\mathtt{cogr}} (G) \ge1$.
For each $x \in\mathtt{V}(G)$, we have that
$\lim_{S(x) \ni n \to\infty} b_n(x)^{1/n}$ exists and
there is
a constant $c_x$ such that $b_n(x) \le c_x \operatorname{\mathtt{cogr}} (G)^n$ for all
$n \ge1$.
Furthermore, if $x$ belongs to a simple cycle of length $L$, then $c_x
\le
2 + 2L\operatorname{\mathtt{cogr}} (G)^{L-2}$.
If $G$ is $d$-regular, then $G$ is Ramanujan iff for all vertices $x$ and
all $n \ge1$, we have $b_n^*(x) \le2 (d-1)^{n/2}$.
\end{proposition}
We shall illustrate our technique first
by giving a short proof of Kesten's theorem (extended to transitive
multigraphs).
We then prove a version of Theorems \ref{t.main} and~\ref{t.main2} with
a stronger hypothesis on the
density of nontrivial cycle times, a hypothesis that holds for stationary
random rooted graphs, for example.
The proof of the full Theorems \ref{t.main} and \ref{t.main2}
requires a large number of technical lemmas,
which makes the basic idea harder to see.
The final section proves Theorem~\ref{t.prob10}.
All our graphs are undirected connected infinite
multigraphs. However, each edge comes with
two orientations, except loops, which come with only one orientation.
An edge $e$ is oriented from its tail $e^{-}$ to its head $e^{+}$.
These endpoints are the same when $e$ is a loop.
A vertex may have many loops and two vertices may be joined by many edges.
If $e$ is an oriented edge, then its reversal is the same unoriented
edge with the opposite orientation, denoted $- e$. This is equal to
$e$ iff $e$ is a loop.
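These conventions are convenient to encode directly: store each unoriented nonloop edge as two mutually reverse orientations and each loop as a single orientation that is its own reversal. A minimal sketch of such a representation (all names are ours):

```python
def make_multigraph(edge_list):
    """Oriented-edge arrays (tails, heads, rev) for an unoriented
    multigraph given as vertex pairs.  A loop (u, u) yields one oriented
    edge that is its own reversal; any other pair yields two mutually
    reverse orientations.  Parallel edges are allowed."""
    tails, heads, rev = [], [], []
    for u, v in edge_list:
        i = len(rev)
        if u == v:
            tails.append(u)
            heads.append(u)
            rev.append(i)        # a loop is its own reversal
        else:
            tails += [u, v]
            heads += [v, u]
            rev += [i + 1, i]    # the two orientations reverse each other
    return tails, heads, rev

# two vertices joined by two parallel edges, plus a loop at vertex 0
tails, heads, rev = make_multigraph([(0, 1), (0, 1), (0, 0)])
```
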
We shall have no need of unimodularity or stationarity, so we do not define
those terms.
\section{Kesten's theorem}\label{s.kesten}
It is easiest to understand the basic ideas behind our proofs in the case
of transitive multigraphs.
\citet{MR22253} proved the following result and
various extensions for Cayley graphs.
\begin{theorem}\label{t.kestenramanujan}
If $d \ge3$ and $G$ is a $d$-regular transitive multigraph that is not a
tree, then $\rho(G) > \rho(\Bbb T_d)$.
\end{theorem}
\begin{pf}
Let $L$ be the length of the shortest cycle in $G$ (which is 1 if
there is a loop).
Consider a nonbacktracking random walk $\langle{Y_n; n \ge1}\rangle
$, where each
edge $Y_{n+1}$ is chosen uniformly among the edges incident to the head
$Y_n^{+}$ of $Y_n$, other than the reversal of $Y_n$.
We are going to handle loops differently than other cycles, so it will be
convenient to let
\[
L' := \cases{L, & \quad \mbox{if }$L > 1$,
\cr
3, & \quad \mbox{if
}$L = 1$.}
\]
Let $A_n$ be the event that
$Y_{n+1}, \ldots, Y_{n+L'}$ is a nonbacktracking cycle.
Write $b := d-1$.
For $n \ge 1$,
\[
\mathbf{P}(A_n \vert Y_1, \ldots, Y_n) \ge
{1 \over d b^{L'-1}}
\]
since if $L > 1$, then there is a way to traverse a simple cycle starting
at $Y_n^{+}$ and not using the reversal of $Y_n$,
while if $L = 1$, then the walk can first choose an edge other than the
reversal of $Y_n$, then traverse a loop, and then return by the
reversal of
$Y_{n+1}$.
Let $Z_k := \mathbf{1}_{A_{k L'}} - \mathbf{P}(A_{k L'} \vert Y_1,
\ldots, Y_{k L'})$.
Then $\langle{Z_k; k \ge1}\rangle$ are uncorrelated, whence by the
Strong Law of
Large Numbers for uncorrelated random variables, we have
\[
\lim_{n \to\infty} {1 \over n} \sum
_{k=0}^{n-1} Z_k = 0 \qquad\mbox{a.s.},
\]
which implies that
\[
\liminf_{n \to\infty} {1 \over n} \sum
_{k=0}^{n-1} \mathbf{1}_{A_{k L'}} \ge
{1 \over d b^{L'-1}} \qquad\mbox{a.s.}
\]
Therefore, if we choose $\varepsilon< 1/ (d b^{L'-1} )$, then in
$n L'$
steps, at least $\varepsilon n$ events $A_{k L'}$ will occur for $0 \le k <
n$ with probability tending to 1 as $n \to\infty$.
Consider the following transformation of a path $\mathcal{P}= (Y_1,
\ldots,Y_{nL'})$ to a ``reduced'' path $\mathcal{P}'$:
For each $k$ such that $A_{kL'}$ occurs,
remove the edges $Y_{kL'+1}, \ldots, Y_{kL'+L'}$.
Next, combine $\mathcal{P}$ and $\mathcal{P}'$ to form a
nonbacktracking cycle $\mathcal{P}''$ by
appending to $\mathcal{P}$ a nonbacktracking cycle of length
$L'$ that does not begin with the reversal of $Y_{nL'}$,
and then by returning to the tail of $Y_1$ by $\mathcal{P}'$
in reverse order.
Note that the map $\mathcal{P}\mapsto\mathcal{P}''$ is 1--1.
When at least $\varepsilon n$ events $A_{k L'}$ occur,
the length of $\mathcal{P}''$ is at most $(2n + 1 - \varepsilon n)L'$.
The number of nonbacktracking paths $Y_1, \ldots, Y_n$ equals $d b^{n-1}$,
whence $\sum_{k \le(2n + 1 - \varepsilon n)L'} b_k(G) \ge d b^{nL'-1}/2$
for\vspace*{1pt} large $n$.
This gives that\break $\operatorname{\mathtt{cogr}} (G) > \sqrt b$, which implies the result by
Theorem~\ref{t.cogrowth}.
\end{pf}
An alternative way of handling loops in the above proof is to use
the following:
Consider a random walk on a graph with spectral radius $\rho$.
Suppose that we introduce a delay so that each step goes nowhere with
probability $p_{\mathrm{delay}}$, and otherwise chooses a neighbor with the same
distribution as before. Then the new spectral radius equals $p_{\mathrm{delay}}+ (1-p_{\mathrm{delay}}) \rho$.
Hence, if there is a loop at each vertex and $G$ is $d$-regular, then
$\rho(G) \ge1/(d-1) + (d-2)\rho(\mathbb{T}_d)/(d-1) > \rho(\mathbb{T}_d)$.
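The delay identity can be checked numerically for $\mathbb{T}_3$: the distance from $o$ under the delayed walk is a lazy birth--death chain, so $p_n(o,o)$ is an exact dynamic program, and $p_n(o,o)^{1/n}$ should approach $p_{\mathrm{delay}} + (1-p_{\mathrm{delay}})\rho(\mathbb{T}_3)$. A sketch (names ours; convergence is polynomial, so the tolerance is loose):

```python
import math

def lazy_return_prob(d, p_delay, n):
    """p_n(o, o) for delayed simple random walk on the d-regular tree:
    stay put with probability p_delay, otherwise step as usual, so the
    distance from o is a lazy birth-death chain on {0, 1, 2, ...}."""
    move = 1.0 - p_delay
    dist = [0.0] * (n + 1)
    dist[0] = 1.0
    for _ in range(n):
        q = [0.0] * (n + 1)
        for k, mass in enumerate(dist):
            if mass == 0.0:
                continue
            q[k] += mass * p_delay
            if k == 0:
                q[1] += mass * move
            else:
                if k + 1 <= n:
                    q[k + 1] += mass * move * (d - 1) / d
                q[k - 1] += mass * move / d
        dist = q
    return dist[0]

d, p_delay, n = 3, 0.5, 1500
estimate = lazy_return_prob(d, p_delay, n) ** (1.0 / n)
predicted = p_delay + (1 - p_delay) * 2 * math.sqrt(d - 1) / d
```
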
For a simple extension of this proof, let $G$ be a $d$-regular multigraph.
Suppose that there are some $L, M < \infty$ such that for
every vertex $x \in\mathtt{V}(G)$, there is a simple cycle of length
at most
$L$ that is at distance at most $M$ from $x$. Then $\rho(G) > \rho
(\mathbb{T}_d)$.
Theorem~3 of \citet{AGVmeasurable} gives a
quantitative strengthening of
this result.
\section{Expected frequency}\label{s.expfreq}
In the case of transitive multigraphs that are not trees,
it is clear that simple random walk
a.s. has many nontrivial cycle times.
The most difficult part of our extension to general regular graphs is to
show how this property is inherited by nonbacktracking random walk.
This actually does not depend on regularity and is an interesting fact in
itself.
Before we prove the general case, which has many complications, it may be
helpful to the reader to see how to prove Theorem~\ref{t.main} with a stronger
assumption on the density of nontrivial cycle times.
Recall that a cycle is
\textit{nontrivial} if it is not purely a backtracking cycle,
that is, when backtracks are erased iteratively from the cycle, some
edge remains.
We call such cycles \textit{NT-cycles}.
\begin{theorem}\label{t.AGV}
Suppose that $G$ is a graph all of whose degrees lie in some
interval $[3, D]$.
If the limsup expected frequency that simple random walk traverses
some nontrivial cycle of length at
most $L$ is positive, then the same is
true for nonbacktracking random walk. Hence, if $G$ is also
$d$-regular, then $\rho(G) > 2\sqrt{d-1}/d$.
\end{theorem}
\begin{pf}
We may assume that
simple cycles of length exactly $L$ are traversed with positive
expected frequency.
Let $X = \langle{X_n; n \ge1}\rangle$ be simple random walk on $G$
and $\widehat{X}=
\langle{\widehat{X}_n}\rangle$ be its lift to the universal cover
$T$ of $G$.
Now consider $X$.
It contains purely backtracking excursions that are erased when we
iteratively erase all backtracking.
Let the lengths of the successive excursions be $M_1, M_2, \ldots\,$, where
$M_i \ge0$. Define
\begin{equation}\label{e.defPhi}
\Phi(n) := n + \sum_{k = 1}^{n}
M_k.
\end{equation}
Then the edges that remain
after erasing all backtracking are $\langle{X_{\Phi(n)}; n \ge
1}\rangle$.
If we write $Y_n := X_{\Phi(n)}$, then
$Y := \langle{Y_n}\rangle=: \mathtt{NB}(X)$ is the nonbacktracking
path created from $X$.
Let $\operatorname{im}\Phi$ be the image of $\Phi$.
Thus, $t \in\operatorname{im}\Phi$ iff the edge $X_t$ is not erased from $X$
when erasing all backtracking.
Consider a time $t$ such that $X_t$ completes a
traversal of a simple cycle of length~$L$.
Because all degrees of $T$ are at least 3, the probability (given the past)
that $\widehat{X}$ will never cross the edge $-\widehat{X}_t$ after
time $t$ is at least $1/2$.
In such a case, the cycle just traversed will not be erased (even in part)
by the future.
However, erasing backtracks from $X_1, \ldots, X_t$ may erase
(at least in part) this cycle.
Let $\mathtt{Trav}(Y)$ be the set of times $n$ such that $Y_n$
completes a traversal of a cycle
and let $\mathtt{Trav}(X)$ be the set of times $t$ that $X_t$
completes a traversal of a simple cycle of length $L$.
We divide the rest of the proof into two cases, depending on whether $L >
1$ or not.
First, suppose that $L > 1$.
Define a map $\psi\dvtx\mathbb{Z}^+ \to\mathtt{Trav}(Y) \cup\{
\infty\}$ as follows:
\[
\psi(t) := \cases{\Phi^{-1}(t)+L, & \quad \mbox{if }$t \in\operatorname{im}\Phi \cap
\mathtt{Trav}(X)$\mbox{ and }$\Phi^{-1}(t)+L \in\mathtt{Trav}(Y)$,
\cr
\infty, &\quad \mbox{otherwise}.}
\]
For $t \in\mathtt{Trav}(X)$, the probability (given the past) that
the steps
$X_{t+1},
\ldots,\break X_{t+L}$ traverse the same cycle $X_{t-L+1}, X_{t-L+2},
\ldots,
X_{t}$ of length\vspace*{1pt} $L$ (in only $L$ steps
and in the same direction) and then (on the tree)
$\widehat{X}_{t+L+1}, \widehat{X}_{t+L+2}, \ldots$ never
crosses the edge $-\widehat{X}_t$ is at least $1/(2D^L)$;
similarly, for traversing the cycle in the opposite direction.
In at least one of these two cases, some part of
the cycle $X_{t+1}, \ldots, X_{t+L}$ will
be left after erasing \textit{all} backtracks in $X$, in which case
$\psi(t)
\in\mathtt{Trav}(Y)$.
Therefore, $\mathbf{P} [\psi(t) \in\mathtt{Trav}(Y) | t
\in\mathtt{Trav}(X) ] \ge
1/(2D^L)$, that is,
$\mathbf{P} [\psi(t) \in\mathtt{Trav}(Y) ] \ge\mathbf
{P}[t \in\mathtt{Trav}(X)]/(2D^L)$.
Hence,
\[
\sum_{s \le t} \mathbf{P} \bigl[\psi(s) \in
\mathtt{Trav}(Y) \bigr] \ge \sum_{s \le t} \mathbf{P}
\bigl[s \in\mathtt{Trav}(X)\bigr]/\bigl(2D^L\bigr).
\]
Note that $\psi(t_1) = \psi(t_2) \in\mathtt{Trav}(Y)$ implies that
$t_1 = t_2$.
Since $\psi(s) \le s+L$, it follows that
\[
\sum_{k \le t+L} \mathbf{1}_{[{k \in\mathtt{Trav}(Y)}]} \ge \sum
_{s \le t} \mathbf{1}_{[\psi(s) \in\mathtt{Trav}(Y)]},
\]
whence
\begin{eqnarray*}
&&\limsup_{n \to\infty} n^{-1} \sum
_{k \le n} \mathbf{P}\bigl[k \in \mathtt{Trav}(Y)\bigr]
\\
&&\qquad\ge \limsup_{t \to\infty} t^{-1} \sum
_{s \le t} \mathbf{P}\bigl[s \in \mathtt{Trav}(X)\bigr]/
\bigl(2D^L\bigr)
\\
&&\qquad= \limsup_{t \to\infty} t^{-1} \mathbf{E} \biggl[
\sum_{s \le t} \mathbf{1}_{[s \in\mathtt{Trav}(X)]} \biggr] \Big/
\bigl(2D^L\bigr) > 0
\end{eqnarray*}
by assumption.
Now the method of proof of Theorem~\ref{t.kestenramanujan} applies when
$G$ is regular.
Finally, suppose that $L = 1$.
This means that $t \in\mathtt{Trav}(X)$ iff $X_t$ is a loop, and
similarly for
$\mathtt{Trav}(Y)$.
Define a map $\psi\dvtx\mathbb{Z}^+ \to\mathbb{Z}^+ \cup\{\infty
\}$ as follows:
\[
\psi(t) := \cases{\Phi^{-1}(t), & \quad \mbox{if }$t \in\operatorname{im}\Phi \cap
\mathtt{Trav}(X)$,
\cr
\Phi^{-1}(t+1), &\quad \mbox{if }$t \in\mathtt{Trav}(X)
\setminus\operatorname{im}\Phi$ and $t+1 \in\operatorname{im}\Phi\cap\mathtt{Trav}(X)$,$\!\!$
\cr
\infty, & \quad $\mbox{otherwise}$.}
\]
Consider $t \in\mathtt{Trav}(X)$.
If erasing backtracks from $X_1, \ldots, X_t$ does not erase
$X_t$, then the probability (given the past) that
(on the tree) $\widehat{X}_{t+1}, \widehat{X}_{t+2}, \ldots$ never
crosses the edge $-\widehat{X}_t$ is at least $1/2$, in which case $t
\in\operatorname{im}\Phi$.
On the other hand,
if erasing backtracks from $X_1, \ldots, X_t$ \textit{does} erase
$X_t$,
then the probability (given the past) that $X_{t+1}$ is a loop
and (on the tree)
$\widehat{X}_{t+2}, \widehat{X}_{t+3}, \ldots$ never
crosses the edge $-\widehat{X}_{t+1}$ is at least $1/(2D)$, in which case
$t \notin\operatorname{im}\Phi$ and $t+1 \in\operatorname{im}\Phi\cap\mathtt
{Trav}(X)$.
In each of these two cases, $\psi(t) \in\mathtt{Trav}(Y)$.
Therefore, $\mathbf{P} [\psi(t) \in\mathtt{Trav}(Y) | t
\in\mathtt{Trav}(X) ] \ge1/(2D)$,
that is,\break $\mathbf{P} [\psi(t) \in\mathtt{Trav}(Y) ] \ge
\mathbf{P}[t \in\mathtt{Trav}(X)]/(2D)$.
Now the rest of the proof goes through as when $L > 1$, with the small
change that instead of injectivity, we have that
$|\psi^{-1}(n)| \le2$ for $n \in\mathtt{Trav}(Y)$.
\end{pf}
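For a finite initial segment of the walk, the surviving indices $\operatorname{im}\Phi$ used in the proof above can be computed in one stack pass (our own encoding: an oriented edge is a pair `(id, rev_id)`, a loop having `id == rev_id`; for the infinite walk itself, future excursions may of course erase further edges):

```python
def surviving_indices(edges):
    """1-based positions of the edges of a finite path that survive
    when all backtracking is iteratively erased; the survivors form the
    nonbacktracking reduction, and the positions play the role of
    Phi(1), Phi(2), ... for this finite segment.

    Each edge is a pair (id, rev_id); a loop satisfies id == rev_id.
    """
    stack = []  # entries are (position, edge)
    for t, e in enumerate(edges, start=1):
        if stack and stack[-1][1][1] == e[0]:  # e reverses previous edge
            stack.pop()
        else:
            stack.append((t, e))
    return [t for t, _ in stack]

loop = (4, 4)                      # loop at vertex a
e1, e1r = (0, 1), (1, 0)           # edge o -> a and its reversal
e2, e2r = (2, 3), (3, 2)           # edge a -> b and its reversal
path = [e1, e2, e2r, loop, e1r]    # o -> a -> b -> a, loop at a, a -> o
```

In the example, the excursion $(e_2, \bar e_2)$ of length 2 is erased, so $\Phi(1) = 1$, $\Phi(2) = 4$, $\Phi(3) = 5$, consistent with $\Phi(n) = n + \sum_{k \le n} M_k$.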
\section{Proofs of Theorems \texorpdfstring{\protect\ref{t.main}}{1.1}
and \texorpdfstring{\protect\ref{t.main2}}{1.4}}\label{s.proofmain}
Here, we remove from Theorem~\ref{t.AGV} the
upper bound on the degrees in $G$ that was assumed
and we weaken the assumption on
the nature of nontrivial cycle frequency.
Consider a finite path $\mathcal{P}= \langle{e_t; 1 \le t \le
n}\rangle$.
Say that a time $t$ is a
\textit{cycle time of} $\mathcal{P}$ if there exist $1 \le
s \le t \le u \le n$
such that $(e_s, e_{s+1}, \ldots, e_u)$ is a cycle.
If the cycle is required to be an NT-cycle, then
we will call $t$ an NT-cycle time, and likewise for other types of cycles.
Call a cycle \textit{fully nontrivial} if it is a loop or is
nontrivial and
its first edge is not the reverse of its last edge.
Such cycles will be called FNT-cycles.
For a finite or infinite path $\mathcal{P}$, we denote by $\mathcal
{P}\mathord{\upharpoonright}n$ its
initial segment of $n$ edges.
We state a slightly different version of Theorems \ref{t.main} and \ref{t.main2} here. At the end of the section,
we shall deduce the theorems as originally
stated in Section~\ref{s.intro}.
\begin{theorem}\label{t.AGV2}
Suppose that $G$ is a graph all of whose degrees are at least 3.
Let $X = \langle{X_t; t \ge1}\rangle$ be simple random walk on $G$.
If with positive probability
the limsup frequency of NT-cycle times of $X \mathord{\upharpoonright}n$
is positive as $n \to\infty$ (i.e., the expected limsup frequency is
positive), then the same is
true for nonbacktracking random walk. If $G$ is also
$d$-regular, then $\rho(G) > 2\sqrt{d-1}/d$.
\end{theorem}
For $n \in\mathbb{Z}^+$, $\alpha> 0$, and a path $\mathcal{P}$ of
length $> n$,
let $C_n(\alpha, \mathcal{P})$ be the indicator that the number of
NT-cycle times
of $\mathcal{P}\mathord{\upharpoonright}n$ is ${>}\alpha n$.
We shall prove the following finitistic version of Theorem~\ref{t.AGV2}, which will
be useful to us later.
\begin{theorem}\label{t.AGVfinite}
There exist $\zeta, \gamma> 0$ with the following property.
Suppose that $G$ is a graph all of whose degrees are at least 3.
Then for all $n$ and $\alpha$,
\[
\mathbf{E} \bigl[C_n(\alpha, X) \bigl(1 - C_n \bigl(\hat
\alpha, \mathtt{NB}(X) \bigr) \bigr) \bigr] < 3 e^{-\zeta n},
\]
where
\[
\hat\alpha := \gamma\alpha/\log^2 (10{,}368/\alpha).
\]
There exists $\zeta' > 0$ such that if $G$ is also $d$-regular,
$\operatorname{\mathtt{cogr}} (G) > 1$, and
\[
\mathbf{E} \bigl[C_n(\alpha, X) \bigr] > \frac{c_o n e^{-\zeta' n}}{\operatorname{\mathtt{cogr}} (G) - 1},
\]
where $c_o$ is as in Proposition~\ref{p.cogrlimit},
then $\rho(G) > 2\sqrt{d-1}/d$.
If $G$ is $d$-regular and
\[
\limsup_{n \to\infty} \bigl[\mathbf{E} {C_n(\alpha, X)}
\bigr]^{1/n} = 1,
\]
then
\[
\rho(G) > \frac{\sqrt{d-1}}{d} \bigl((d-1)^{\hat\alpha/24} +
(d-1)^{-\hat
\alpha
/24} \bigr).
\]
\end{theorem}
We shall use the following obvious fact.
\begin{lemma}\label{l.concat-NB}
If $(e_1, \ldots, e_k)$ and $(f_1, \ldots, f_m)$ are paths without
backtracking, the head of $e_k$ equals the tail of $f_1$, and $e_k$ is not
the reverse of $f_1$, then $(e_1, \ldots, e_k, f_1, \ldots, f_m)$ is
a path
without backtracking.
\end{lemma}
We shall apply the following well-known lemma to intervals with integer
endpoints.
\begin{lemma}[(Vitali covering)]\label{l.vitali}
Let $I$ be a finite collection of
subintervals of~$\mathbb{R}$. Write $\|I\|$ for the Lebesgue measure of
the union of the intervals in $I$. Then there exists a subcollection $J$ of
$I$ consisting of pairwise disjoint intervals such that $\|J\| \ge
\|I\|/3$.
\end{lemma}
This lemma is immediate from choosing iteratively the largest interval
disjoint from previously chosen intervals.
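The greedy selection just described is easy to implement; the sketch below (names ours) also checks the $1/3$ guarantee against the measure of the union on random input:

```python
import random

def vitali_subcollection(intervals):
    """Greedily choose pairwise disjoint intervals, longest first; the
    chosen family carries at least 1/3 of the measure of the union."""
    chosen = []
    for a, b in sorted(intervals, key=lambda iv: iv[0] - iv[1]):
        if all(b <= c or d <= a for c, d in chosen):
            chosen.append((a, b))
    return chosen

def union_measure(intervals):
    """Lebesgue measure of the union of the intervals."""
    total, end = 0.0, float("-inf")
    for a, b in sorted(intervals):
        if b > end:
            total += b - max(a, end)
            end = b
    return total

rng = random.Random(1)
I = [(a, a + rng.uniform(0.1, 3.0))
     for a in [rng.uniform(0.0, 20.0) for _ in range(40)]]
J = vitali_subcollection(I)
```
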
The following is a simple modification of a standard bound on large
deviations.
\begin{lemma}\label{l.exp-tails}
Suppose that $c > 0$.
There exist $\varepsilon\in(0, 1)$ and $\beta> 0$ such that
whenever $Z_1, \ldots, Z_n$ are random variables
satisfying the inequalities
$P[Z_k > z \vert Z_1, \ldots, Z_{k-1}] \le e^{-c z}$ for all $z > 0$,
we have
\[
\mathbf{P} \Biggl[\,\sum_{k=1}^{\varepsilon n}
Z_k \ge n \Biggr] \le e^{-\beta n}.
\]
\end{lemma}
\begin{pf}
Write $S_t := \sum_{j=1}^{\lfloor t\rfloor} Z_j$.
Given $\lambda:= c/2$ and $k \in[1, n]$, we have
\begin{eqnarray*}
\mathbf{E} \bigl[e^{\lambda Z_k} | Z_1, \ldots,
Z_{k-1} \bigr] &=& \int_0^\infty\mathbf{P}
\bigl[e^{\lambda Z_k} > z | Z_1, \ldots, Z_{k-1}
\bigr] \,dz \\
&\le & 1 + \int_1^\infty z^{-c/\lambda} \,dz
= 2,
\end{eqnarray*}
whence
\[
\mathbf{E} \bigl[e^{\lambda S_k} | Z_1, \ldots,
Z_{k-1} \bigr] \le 2 e^{\lambda S_{k-1}}.
\]
By induction, therefore, we have that $\mathbf{E} [e^{\lambda S_k} ]
\le2^k$.
It follows by Markov's inequality that
\[
\mathbf{P} \Biggl[\,\sum_{k=1}^{\varepsilon n}
Z_k \ge n \Biggr] = \mathbf{P} \bigl[e^{\lambda S_{\varepsilon n}} \ge
e^{\lambda n} \bigr] \le 2^{\varepsilon n} e^{-\lambda n}.
\]
Thus, if we choose $\varepsilon:= \min\{1/4, c/(4\log2)\}$ and $\beta:=
c/4$, the desired bound holds.
\end{pf}
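The constant $2$ in the first display of the proof is attained in the extremal case: if $Z$ has tail exactly $e^{-cz}$, that is, $Z \sim \mathrm{Exp}(c)$, then with $\lambda = c/2$ one gets $\mathbf{E}[e^{\lambda Z}] = c/(c - \lambda) = 2$. A quick numerical confirmation of this side computation (our own check, by trapezoidal integration):

```python
import math

def mgf_exp(c, lam, grid=60_000, zmax=60.0):
    """Trapezoidal estimate of E[exp(lam * Z)] for Z ~ Exp(c), i.e. the
    integral of exp(lam * z) * c * exp(-c * z) over [0, infinity),
    truncated at zmax (the tail beyond is negligible here)."""
    h = zmax / grid
    f = lambda z: math.exp(lam * z) * c * math.exp(-c * z)
    s = 0.5 * (f(0.0) + f(zmax)) + sum(f(i * h) for i in range(1, grid))
    return s * h

c = 1.7
estimate = mgf_exp(c, c / 2)
exact = c / (c - c / 2)  # equals 2, the constant used in the proof
```
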
Several lemmas now follow that will be used to handle various possible
behaviors of simple random walk on $G$.
\begin{lemma}\label{l.escape-tails}
Suppose that $G$ is a graph all of whose degrees are at least 3.
Let $X$ record the oriented edges taken by simple random walk on $G$.
Let $\Phi(n)$ index the $n$th edge of $X$ that remains in $\mathtt{NB}(X)$, so that
$\mathtt{NB}(X) = \langle{X_{\Phi(n)}}\rangle$: see~(\ref{e.defPhi}). Write $\Phi(0) := 0$.
Then there exists
$t_0 < \infty$ such that for all $n$ and all $t > t_0$, we have
\[
\mathbf{P} \bigl[\Phi(n+1) - \Phi(n) > t \bigr] < (8/9)^{t/2}.
\]
In addition, there exists $r$ such
that for every $n$ and $\lambda$,
\[
\mathbf{P} \bigl[\Phi(n) > (r+ \lambda) n \bigr] < (8/9)^{\lambda n/4}.
\]
More generally, for all $L > 0$,
there exists $r_L \le36^2 (8/9)^{L/4}$ such that
\[
\mathbf{P} \biggl[\,\sum_{k < n} \bigl(\Phi(k+1)-
\Phi(k) \bigr) \mathbf{1}_{[\Phi(k+1)-\Phi(k) >
L]} > (r_L + \lambda)n \biggr] <
(8/9)^{\lambda n/4}.
\]
\end{lemma}
\begin{pf}
Let $\widehat{X}$ be the lift of $X$ to $T$.
Then $\Phi$ also indexes the edges that remain in $\mathtt
{NB}(\widehat{X})$.
Since the distance of $\mathtt{NB}(\widehat{X})$ from $ \widehat{X}_1^-$ increases by 1
at each
step, the times $\Phi(n+1) - \Phi(n)$ are dominated by the times between
escapes for random walk on $\mathbb{N}$ that has probability $2/3$ to
move right and
$1/3$ to move left, reflecting at 0.
These in turn are dominated by the time to the first escape
for random walk~$S$ on $\mathbb{Z}$ with the same bias.
Such an escape can happen only at an odd time, $t$.
The chance of an escape at time exactly $t$ is
\begin{eqnarray*}
&&\mathbf{P}\bigl[S_{t-1} = 0, S_t = 1,
\forall t' > t \ S_{t'} > 0\bigr]
\\
&&\qquad= \binom{t-1}{(t-1)/2} \biggl(\frac{2}{3}\biggr)^{(t-1)/2}
\biggl(\frac{1}{3} \biggr)^{(t-1)/2}
\biggl(\frac{2}{3} \biggr) \biggl(\frac{1}{2} \biggr)
\\
&&\qquad \sim c \frac{(\sqrt{8/9} )^t}{\sqrt t}
\end{eqnarray*}
for some constant $c$.
This proves the first inequality.
Now $\Phi(n) = \sum_{k < n} (\Phi(k+1) - \Phi(k) )$ and these
summands are dominated by the corresponding inter-escape times for the
biased random walk on $\mathbb{Z}$. The latter are i.i.d. with some distribution
$\nu$
(which we bounded in the last paragraph),
whence if we choose $a := (8/9)^{1/4} \in
(\sqrt{8/9}, 1 )$ and put $b := \sum_{j \ge1} a^{-j} \nu
(j) \in
(1, \infty)$,
we obtain that for all $n$, we have $\mathbf{E} [a^{-\Phi(n)} ]\le b^n$.
By Markov's inequality, this implies that
\[
\mathbf{P} \bigl[\Phi(n) > (c + \lambda) n \bigr] \le a^{(c+\lambda)n}
b^n,
\]
so if we choose $c = r$ with $a^rb = 1$, then we obtain
the second inequality.
The third inequality follows similarly:
let $B_k := (\Phi(k+1) - \Phi(k) )\times\break
\mathbf{1}_{[\Phi(k+1) - \Phi(k) > L]}$.
Put $b_L := \nu[1,L] + \sum_{j > L} a^{-j} \nu(j) \in(1, 1 + 36 a^{L})$.
Then
$\mathbf{E} [a^{-\sum_{k=0}^{n-1} B_k} ]
\le b_L^n$
for all $n$.
By Markov's inequality, this implies that
\[
\mathbf{P} \Biggl[\,\sum_{k=0}^{n-1}
B_k > (c_L + \lambda) n \Biggr] \le a^{(c_L+\lambda)n}
b_L^n,
\]
so if we choose $c_L = r_L$ with $a^{r_L} b_L = 1$, then we obtain
the third inequality. We have the estimate $r_L \le36^2 a^{L}$.
\end{pf}
It follows that
\[
\mathbf{P} \bigl[\Phi\bigl(n/(r+\lambda)\bigr) > n \bigr] < a^{\lambda n}.
\]
That is, except for exponentially small probability, there are at least
$n/(r+\lambda)$ nonbacktracking edges by time $n$.
Similarly,
except for exponentially small probability, there are at most
$(r_L+\lambda)n$ edges by time $n$ that are in intervals of length${}>L$
that have no escapes.
The following is clear.
\begin{lemma}\label{l.NBin-cycle}
With notation as in Lemma~\ref{l.escape-tails}, if $s \le\Phi(n) \le
t$ satisfy
$ X_s^{-} = X_t^{+}$, then $n$ is a cycle time of $\mathtt{NB}(X)
\mathord{\upharpoonright}
t$.
\end{lemma}
We call\vspace*{1pt} a time $t$ an \textit{escape time} for $\widehat{X}$ if
$-\widehat{X}_{t+1} \notin\mathtt{NB}(\widehat{X}_1, \ldots,
\widehat{X}_t)$ and $-\widehat{X}_{t+1}
\notin\{\widehat{X}_s; s > t+1\}$.
We let $\mathtt{Esc}(\widehat{X})$ be the set of escape times for
$\widehat{X}$.
Then $\mathtt{Esc}(\widehat{X}) = \operatorname{im}(\Phi-1)$.
\begin{lemma}\label{l.predictable}
Suppose that $\langle{\tau_k; 1 \le k < K}\rangle$ is a strictly increasing
sequence of stopping times for $X$, where $K$ is random, possibly
$\infty$.
Then there exist $\eta, \delta> 0$ such that for all $n > 0$,
\[
\mathbf{P} \bigl[K > n, \bigl|\bigl\{k \le n; \tau_k \in\mathtt {Esc}(
\widehat{X})\bigr\}\bigr| < \eta n \bigr] < e^{-\delta n}.
\]
\end{lemma}
\begin{pf}
Define random variables $\sigma_j$, $\lambda_j$ recursively.
First, we describe in words what they are.
Start by setting $\lambda_1 := 1$ and by
examining what happens after time $\tau_1$. If $\widehat{X}$ escapes,
define $\sigma_1 := 1$, $\lambda_2 := 2$,
and look at time $\tau_2$. If not, then
look at the first time $\tau_j$ that occurs after the first time $\ge t+1$
(where $t := \tau_1$) at which we know that $\widehat{X}$ has not escaped,\vspace*{1pt} that is, $t+1$ if
$-\widehat{X}_{t+1} \in\mathtt{NB}(\widehat{X}_1, \ldots, \widehat
{X}_t)$ or else
$\min\{ s > t+1; -\widehat{X}_{t+1} = \widehat{X}_s\}$,
and define $\sigma_1 := \tau_j - \tau_1$, $\lambda_2 := j$.
Now repeat from time $\tau_{\lambda_2}$ to define $\sigma_2$ and
$\lambda_3$, etc.
The precise definitions are as follows.
Suppose that $K > n$ (otherwise we do not define these random
variables).
Define $A_j$ to be the event that
one of the following holds:
\[
-\widehat{X}_{\tau_j+1} \in\mathtt{NB}(\widehat{X}_1, \ldots,
\widehat{X}_{\tau_j}) \quad\mbox{or}\quad \tau_j \in
\mathtt{Esc}(\widehat{X}).
\]
Write $\lambda_1 := 1$.
To recurse, suppose that $\lambda_k$ has been defined.
Let
\[
\lambda_{k+1} := \cases{ \lambda_k + 1, & \quad \mbox{if }$A_{\lambda_k}$,\cr
\min \bigl\{ j; -\widehat{X}_{\tau_{\lambda_k}+1}
\in\{\widehat{X}_s; \tau_{\lambda_k}+1 < s <
\tau_{j}\} \bigr\}, & \quad \mbox{otherwise}}
\]
and
\[
\sigma_k := \cases{ 1, & \quad \mbox{if }$A_{\lambda_k}$,
\cr
\tau_{\lambda_{k+1}} - \tau_{\lambda_k}, &\quad \mbox{otherwise}.}
\]
Let $J := \max\{j; \lambda_j \le n\}$.
This is the number of times we have looked for escapes up to the
$n$th stopping time. Each stopping time until the $n$th is covered by one
of the intervals $[\tau_1, \tau_{\lambda_2}), \ldots, [\tau
_{\lambda_J},
\tau_{\lambda_{J+1}})$, which have lengths $\sigma_1, \ldots,
\sigma_J$.
Therefore,
we have that
\[
\sum_{j=1}^J \sigma_j \ge n.
\]
We claim that this forces $J$ to be large with high probability:
\begin{equation}\label{e.Jlarge}
\mathbf{P}[J \le\varepsilon n] \le e^{-\gamma n}
\end{equation}
for some $\varepsilon, \gamma> 0$.
Indeed, we claim that for each $k \le\varepsilon n$,
\[
\mathbf{P} \Biggl[\sum_{j=1}^k
\sigma_j \ge n \Biggr] \le e^{-\beta n},
\]
where $\varepsilon$ and $\beta$ are given by Lemma~\ref{l.exp-tails}
with $c$ (in that lemma) to be determined.
This would imply that
\[
\mathbf{P}[J \le\varepsilon n] \le \varepsilon n e^{-\beta n}.
\]
Now $\tau_{\lambda_{k+1}} - \tau_{\lambda_{k}} \ge\lambda_{k+1} -
\lambda_{k}$. Thus, it suffices to show that there is some $c > 0$ for
which
\[
\mathbf{P} [\lambda_{k+1} - \lambda_{k} \ge z |
\sigma_1, \ldots, \sigma_{k-1} ] \le e^{-c z}
\]
for all $z > 1$.
Now the event $\lambda_{k+1} - \lambda_{k} \ge z > 1$ implies the
event $B$
that $-\widehat{X}_t \notin\mathtt{NB}(\widehat{X}_1, \ldots,
\widehat{X}_{\tau_{\lambda_{k}}})$ for all $t \in(\tau_{\lambda_{k}},
\tau_{\lambda_{k}} + z)$ and that $-\widehat{X}_t =
\widehat{X}_{\tau_{\lambda_{k}}}$ for some $t \ge\tau_{\lambda
_{k}} + z$.
Because the distance from $\widehat{X}_t$ to $\widehat{X}_{\tau
_{\lambda_{k}}}$ increases\vspace*{1pt} with
probability at least $2/3$ at each step, this is
exponentially unlikely in $z$.
What we need, however, is that this is exponentially unlikely even under
the given conditioning.
For every event $A$ in the $\sigma$-field on which we are
conditioning, we
always have that $A \supseteq[\tau_{\lambda_{k}} \in\mathtt{Esc}(\widehat{X})]$.
Furthermore, $\mathbf{P}[\tau_{\lambda_{k}} \in\mathtt
{Esc}(\widehat{X})] \ge1/2$. Hence, $\mathbf{P}(B
\vert A) \le2 \mathbf{P}(B)$, so that the bound on the unconditional
probability of
$B$ also gives an exponential bound on the conditional probability of $B$.
Thus, we have proved (\ref{e.Jlarge}).
Define $E_k := [\tau_{\lambda_k} \in\mathtt{Esc}(\widehat{X})]$.
We claim that
\begin{equation}\label{e.alwaysesc}
\mathbf{P} \bigl(E_k \vert\sigma(E_1, \ldots,
E_{k-1}) \bigr) \ge 1/2.
\end{equation}
Indeed, let $Z_t$ be the distance of $\widehat{X}_t^{+}$ to
$\widehat{X}_1^-$.
Note that $t \in\mathtt{Esc}(\widehat{X})$ iff $Z_s > Z_t$ for all
$s > t$.
Write $F_t(j)$ for the event that $Z_s > j$ for all $s > t$.
We claim that
\[
\mathbf{P} \bigl(E_k \vert\sigma(E_1, \ldots,
E_{k-1}, \widehat {X}_1, \ldots, \widehat{X}_{\lambda_k},
\lambda_1, \ldots, \lambda_k) \bigr) \ge 1/2,
\]
which is stronger than (\ref{e.alwaysesc}).
By choice of $\lambda_1, \ldots, \lambda_k$, we have that for every event
$E \in\sigma(E_1, \ldots, E_{k-1}, \widehat{X}_1, \ldots,
\widehat{X}_{\lambda_k}, \lambda_1, \ldots, \lambda_k)$,
\[
\mathbf{P}(E_k \vert E) = \mathbf{P} \bigl(F_{t}(j_m)
\vert F_{t}(j_1), \ldots, F_{t}(j_{m-1})
\bigr)
\]
for some $j_1, \ldots, j_{m-1} < j_m$ and some $t$, where $m \ge1$.
Since $F_t(j_i) \supseteq F_t(j_m)$ and $\mathbf{P} (F_t(j_m)
) \ge1/2$, the
claim follows.
Therefore, by (\ref{e.alwaysesc}),
we may couple the events $E_k$ to Bernoulli trials with
probability $1/2$ each so that the $k$th successful trial implies $E_k$. This
shows that there exists $\delta> 0$ such that
\[
\mathbf{P} \bigl[J > \varepsilon n, \bigl|\{k \le J; E_k\}\bigr| < \varepsilon
n/3 \bigr] < e^{-\delta n}.
\]
Hence,
\[
\mathbf{P} \bigl[K = \infty, \bigl|\bigl\{j \le n; \tau_j \in\mathtt
{Esc}(\widehat{X})\bigr\}\bigr| < \varepsilon n/3 \bigr] < e^{-\delta n}.
\]
Thus, we may choose $\eta:= \varepsilon/3$.
\end{pf}
Call a cycle of $>L$ edges an $L^+$-\textit{cycle}.
Define $I(n, L)$ to be the set of times $t \in[1, n]$ for which there
exist $1 \le s \le t \le u \le n$ such that $(X_s, X_{s+1}, \ldots, X_u)$
is a nontrivial $L^+$-cycle.
\begin{lemma}\label{l.whenlong}
If $n, L \ge1$ and
$\beta\in(0, 1)$, then
\[
\mathbf{E} \bigl[ \mathbf{1}_{[|I(n, L)| \ge\beta n]} \bigl(1 - C_n\bigl(
\beta/(2L), \mathtt{NB}(X)\bigr) \bigr) \bigr] < (8/9)^{\beta n/16}.
\]
\end{lemma}
\begin{pf}
Let $\mathtt{Long}$ be the event $ [|I(n, L)| \ge\beta n ]$.
Let $J$ be the union of intervals in $[1, n]$ that have length $> L$ and
are disjoint from $\mathtt{Esc}(X)$.
Let $\mathtt{Bad}$ be the event that $|J| > \beta n/2$.
By Lemma~\ref{l.escape-tails}, we have $\mathbf{P}(\mathtt{Bad}) <
(8/9)^{\beta n/16}$
(use $\lambda:= \beta/4$ there).
On the event $\mathtt{Long}\setminus\mathtt{Bad}$, the set $I(n, L)$ contains
at least $\beta n/2$ times that are within distance $L/2$ of an escape.
Therefore,
on the event $\mathtt{Long}\setminus\mathtt{Bad}$, there are at
least $\beta n/(2L)$
escapes in nontrivial cycles, whence $\mathtt{NB}(X)\mathord
{\upharpoonright}n$ has ${\ge}\beta n/(2L)$ NT-cycle times.
\end{pf}
Let $I_{\circ}(n, L)$ be the (random)
set of times $t \in[1, n] \setminus I(n, L)$
such that $X_t$ is a loop.\vspace*{-3pt}
\begin{lemma}\label{l.whenloop}
There exist $\eta, \delta> 0$ such that
if $n, L \ge1$ and
$\beta\in(0, 1)$, then
\[
\mathbf{E} \bigl[ \mathbf{1}_{[|I_{\circ}(n, L)| \ge\beta n]} \bigl(1 - C_n\bigl(\eta
\beta /(L+1), \mathtt{NB}(X)\bigr) \bigr) \bigr] < e^{-\delta n}.
\]
\end{lemma}
\begin{pf}
Let $\mathtt{Loop}:= [|I_{\circ}(n, L)| \ge\beta n ]$.
Note that if there are 3 times at which a given loop in $G$ is
traversed, then necessarily the first of those times belongs to a
nontrivial cycle with at least one of the other times. In particular,
if a
given loop is traversed at least $L+2$ times, then it belongs
to a nontrivial long cycle. Therefore, on
$\mathtt{Loop}$, there are ${\ge}\beta n/(L+1)$ times spent in distinct loops.
If we take the first traversal of
a loop as a stopping time, then Lemma~\ref{l.predictable} supplies us
with $\eta,
\delta$ such that
on the event $\mathtt{Loop}$, except for probability $< e^{-\delta n}$,
the number of new loops at escape times is at least $\eta\beta n/(L+1)$.
Necessarily, all such loops remain in $\mathtt{NB}(X)$.
Therefore, $\mathtt{NB}(X)\mathord{\upharpoonright}n$ also has at
least $\eta\beta n/(L+1)$
loops
on the event $\mathtt{Loop}$ except for probability${}<e^{-\delta n}$.
\end{pf}
Let $D(n)$ denote the maximal number of disjoint FNT-cycles in $X
\mathord{\upharpoonright}n$, other than loops.\vspace*{-3pt}
\begin{lemma}\label{l.whenmany}
There exist $\eta, \delta> 0$ such that
if $n \ge1$ and
$\beta\in(0, 1)$, then
\[
\mathbf{E} \bigl[ \mathbf{1}_{[D(n) \ge\beta n]} \bigl(1 - C_n\bigl(\eta
\beta, \mathtt{NB}(X)\bigr) \bigr) \bigr] < e^{-\delta n}.
\]
\end{lemma}
\begin{pf}
Fix $n \ge1$.
Let $\mathtt{Cycs}$ be a (measurable) set of pairs of
times $1 \le s < t \le n$ such that $(X_s, X_{s+1}, \ldots, X_t)$
is an FNT-cycle other\vspace*{1pt} than a loop, chosen so that $|\mathtt{Cycs}| = D(n)$.
Let $\mathtt{Cycs\&Esc}:= \{(s, t) \in\mathtt{Cycs}; t \in
\mathtt{Esc}(\widehat{X}) \}$.
By Lemma~\ref{l.predictable}, on the event $D(n) \ge\beta n$, we have
$|\mathtt{Cycs\&Esc}| > \eta'\beta n$ for some $\eta' > 0$
except for exponentially small probability.
Let $\mathtt{Sofar}$ be the set of $(s, t) \in\mathtt{Cycs}$ such
that $X_{s-1} \ne
- X_t$ or $s = 1$.
Note that for $(s, t) \in\mathtt{Cycs}$,
the cycle from $X_s$ to $X_t$ can be traversed in either
order, both being equally likely given $X_1, \ldots, X_{s-1}$, and at least
one of them has the property that $(s, t) \in\mathtt{Sofar}$ [see
Lemma~\ref{l.concat-NB}, where we concatenate $\mathtt{NB}(X_1, \ldots,
X_{s-1})$ with either
$\mathtt{NB}(X_s, \ldots, X_t)$ or $\mathtt{NB}(X_t, X_{t-1}, \ldots
, X_s)$, as
appropriate].
In fact, the same holds even conditioned on $\mathtt{Cycs\&Esc}$.
Therefore, we may couple to Bernoulli\vadjust{\goodbreak} trials and conclude that on the event
$D(n) \ge\beta n$, we have
$|\mathtt{Cycs\&Esc}\cap\mathtt{Sofar}| > \eta'\beta n/3$
except for exponentially small probability.
Note that for $(s, t) \in\mathtt{Cycs\&Esc}\cap\mathtt{Sofar}$, some
edge in the cycle
$(X_s, \ldots, X_t)$ belongs to $\mathtt{NB}(X)$ (see Lemma~\ref{l.concat-NB}
again)---more
precisely, $u \in\operatorname{im}\Phi$ for some $u \in[s, t]$---
whence on the event $D(n) \ge\beta n$, we have
$\mathtt{NB}(X) \mathord{\upharpoonright}n$ has $> \eta\beta n$
cycle times
except for exponentially small probability, where $\eta:= \eta'/3$.
\end{pf}
\begin{lemma}\label{l.longcycles}
Suppose that $\rho:= \rho(G) < 1$, $n \ge2$, $\varepsilon> 0$ and $L
> 2e^2$.
Then
\[
\mathbf{P} \bigl[\bigl|\bigl\{\mbox{$L^+$-cycle times of } X
\mathord {\upharpoonright}n\bigr\}\bigr| \ge \varepsilon n \bigr] < e^{(6n/L)\log L}
\rho^{\varepsilon n/3}/(1 - \rho).
\]
\end{lemma}
\begin{pf}
For every $n$ and $k$,
the chance that $X_n$ begins a cycle of length $k$ is at most $\rho^k$.
Suppose that
the number of $L^+$-cycle times of
$X \mathord{\upharpoonright}n$ is at least $\varepsilon n$.
Then there are disjoint $L^+$-cycles in $X_1, \ldots, X_n$
the sum of whose lengths
is at least $\varepsilon n/3$ by Lemma~\ref{l.vitali}.
There are fewer than $n/L$ starting points and fewer than $n/L$ ending
points for those cycles since each has
length ${}>L$ and they are disjoint. The number of collections of
subsets of
$[0, n]$ of size at most $2n/L$ is ${<}e^{(n+1) h(2/L)} < e^{(6n/L)\log
L}$, where $h(\alpha) := -\alpha\log\alpha- (1-\alpha)\log
(1-\alpha)$.
This is because $h(\alpha) < -2\alpha\log\alpha$ for $\alpha< e^{-2}$.
For each such collection of starting and ending points giving
total length $k$, the chance that they do start
$L^+$-cycles is at most $\rho^k$, whence summing over collections and total
lengths that are ${\ge}\varepsilon n/3$, we get the result.
\end{pf}
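For completeness, we verify the elementary inequality $h(\alpha) < -2\alpha\log\alpha$ for $0 < \alpha < e^{-2}$ used in the preceding proof. Since $-\log(1-\alpha) \le \alpha/(1-\alpha)$, we have $-(1-\alpha)\log(1-\alpha) \le \alpha$; and since $-\log\alpha \ge 2$, we have $\alpha \le \frac12(-\alpha\log\alpha)$. Hence
\[
h(\alpha) = -\alpha\log\alpha - (1-\alpha)\log(1-\alpha)
\le -\alpha\log\alpha + \alpha
\le \tfrac32(-\alpha\log\alpha)
< -2\alpha\log\alpha.
\]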
Call a nonbacktracking cycle an \textit{NB-cycle}.
If an NB-cycle is a loop or
has the property that its last edge is different from the
reverse of its first edge, then call the cycle \textit{fully nonbacktracking},
abbreviated \textit{FNB-cycle}.
Recall that the number of NB-cycles of length $n$ starting from $x
\in\mathtt{V}(G)$ is $b_n(x)$.
We also say that a cycle starting from $x$ is ``at $x$''.
Let the number of FNB-cycles of length $n$ at $x$ be $b_n^*(x)$.
Recall that $S(x) := \{ n; b_n(x) \ne0\}$.
We shall need the following bounds on $b_n(x)$.
\begin{proposition}
Let $G$ be a graph with $\operatorname{\mathtt{cogr}} (G) \ge1$.
For each $x \in\mathtt{V}(G)$, we have that
$\lim_{S(x) \ni n \to\infty} b_n(x)^{1/n}$ exists and
there is
a constant $c_x$ such that $b_n(x) \le c_x \operatorname{\mathtt{cogr}} (G)^n$ for all
$n \ge1$.
Furthermore, if $x$ belongs to a simple cycle of length $L$, then $c_x
\le
2 + 2L\operatorname{\mathtt{cogr}} (G)^{L-2}$.
If $G$ is $d$-regular, then $G$ is Ramanujan iff for all vertices $x$ and
all $n \ge1$, we have $b_n^*(x) \le2 (d-1)^{n/2}$.
\end{proposition}
\begin{pf}
Write $S^*(x) := \{ n; b_n^*(x) \ne0\}$.
Given two FNB-cycles starting at $x$,
we may concatenate the first with either
the second or the reversal of the second to obtain an FNB-cycle at $x$,
unless both FNB-cycles are the same loop.
Therefore, if $b_n^*(x)$ is the number of FNB-cycles at $x$, we have
$b_m^*(x) b_n^*(x)/2 \le b_{m+n}^*(x)$ for $m + n \ge3$, whence
$\langle{b_n^*(x)/2; n \ge2}\rangle$ is supermultiplicative and
Fekete's lemma
implies that $\lim_{S^*(x) \ni n \to\infty} b_n^*(x)^{1/n}$ exists and
$b_n^*(x) \le2 \operatorname{\mathtt{cogr}} (G)^n$ for $n \ge2$.
It is easy to check that the same inequality holds for $n = 1$.
Together with Theorem~\ref{t.cogrowth},
this also implies that
if $G$ is $d$-regular and Ramanujan, then for all vertices $x$ and
all $n \ge1$, we have $b_n^*(x) \le2 (d-1)^{n/2}$.
Let $\widehat{b}_n(x) := b_n(x) - b_n^*(x)$ be the number of nonloop
NB-cycles at
$x$ whose last edge equals the reverse of its first edge, that is, NB-cycles
that are not FNB-cycles.
We shall bound $\widehat{b}_n(x)$ when
$x$ belongs to a simple cycle, say, $\mathcal{P}_0 = (e_1, \ldots, e_L)$
with $L$ edges.
Let $\mathcal{P}$ be a nonloop NB-cycle at $x$ whose last edge is
$e'$ and whose
first edge is $- e'$.
If $e'$ is a loop, then removing $e'$ at the end of $\mathcal{P}$
gives an FNB-cycle
$\mathcal{P}'$ at $x$.
Otherwise, decompose $\mathcal{P}$ as $\mathcal{P}_1.\mathcal{P}_2$, where
$.$ indicates concatenation, and $\mathcal{P}_2$ is maximal
containing only edges $e$ such
that $e \in\mathcal{P}_0$ or $- e \in\mathcal{P}_0$.
By reversing $\mathcal{P}_0$ if necessary, we may assume the former:
all edges of
$\mathcal{P}_2$ lie in~$\mathcal{P}_0$.
Suppose the first edge of $\mathcal{P}_2$ is $e_k$.
Then $\mathcal{P}_2$ traverses the remainder of $\mathcal{P}_0$ and
possibly the whole of
$\mathcal{P}_0$ several times.
Thus, write $\mathcal{P}_2 = \mathcal{P}_3.\mathcal{P}_4$, where
$\mathcal{P}_3 = (e_k, \ldots,
e_L)$ has length${}\le L$.
Finally, the NB-cycle $\mathcal{P}' := \mathcal{P}_1.(- e_{k-1},
\ldots,
- e_1).\overline{\mathcal{P}}_4$ is FNB, where the bar indicates
path reversal.
In addition, the length of $\mathcal{P}'$ differs from the length of
$\mathcal{P}$ by at
most $L-2$.
Since the map $\mathcal{P}\mapsto\mathcal{P}'$ is injective,
$\widehat{b}_n(x) \le\sum_{i=-1}^{L-2} b_{n+i}^*(x) \le2L \operatorname{\mathtt{cogr}} (G)^{n+L-2}$.
Combining the results of the previous two paragraphs, we obtain that if $x$
belongs to a simple cycle, then there is a constant $c_x$ such that for all
$n \ge1$, we have $b_n(x) \le c_x \operatorname{\mathtt{cogr}} (G)^n$.
We also get the bound claimed for $c_x$.
We now prove the same for
$x$ that do not belong to a simple cycle.
We claim that if $y$ is a neighbor of $x$, then $b_n(x) \le
b_{n-2}(y) + b_n(y) + b_{n+2}(y)$.
Indeed, let $\mathcal{P}$ be an NB-cycle at $x$.
Suppose the first edge of $\mathcal{P}$ goes to $y$.
If $\mathcal{P}$ is not FNB, then removing the first and last
edges of $\mathcal{P}$ yields an NB-cycle at $y$ of length $n-2$.
If $\mathcal{P}$ is FNB, then shifting the starting point from $x$ to $y$
yields an
FNB-cycle at $y$ of length $n$.
Lastly, if the first edge of $\mathcal{P}$ does not go to $y$, then we
may prepend
to $\mathcal{P}$ an edge from $y$ to $x$ and either append an edge
from $x$ to $y$
if the last edge of $\mathcal{P}$ was not from $y$, or else delete the last
edge of
$\mathcal{P}$, yielding an NB-cycle at $y$ of length $n+2$ or $n$.
This map of NB-cycles at $x$ to NB-cycles at $y$ is injective, which gives
the claimed inequality.
It follows that
$b_n(x) \le c_x b_n(z)$, where
$z$ is the nearest point to $x$ that belongs to a simple cycle and $c_x$
does not depend on $n$.
Finally, if
$\lim_{S(x) \ni n \to\infty} b_n(x)^{1/n}$ exists for one $x$, then it
exists for all $x$ by the covering-tree argument we used earlier in
Section~\ref{s.intro}.
Suppose that for all $x$ belonging to a
simple cycle, $\lim_{S^*(x) \ni n \to\infty} b^*_n(x)^{1/n} <
\operatorname{\mathtt{cogr}} (G)$.
Then the bounds in the preceding paragraphs show that
$\lim_{S^*(x) \ni n \to\infty} b_n(x)^{1/n} < \operatorname{\mathtt{cogr}} (G)$.
It is not hard to see that therefore
$\limsup_{S(x) \ni n \to\infty} b_n(x)^{1/n} < \operatorname{\mathtt{cogr}} (G)$ as
well, which
is a
contradiction to the definition of $\operatorname{\mathtt{cogr}} (G)$.
Hence for some $x$, we have
$\lim_{S^*(x) \ni n \to\infty} b^*_n(x)^{1/n} = \operatorname{\mathtt{cogr}} (G)$
and, therefore,
$\lim_{S(x) \ni n \to\infty} b_n(x)^{1/n} = \operatorname{\mathtt{cogr}} (G)$ as well.
Together with Theorem~\ref{t.cogrowth},
this also implies that
if $G$ is $d$-regular and for all vertices $x$ and
all $n \ge1$, we have $b_n^*(x) \le2 (d-1)^{n/2}$, then $G$ is Ramanujan,
which completes the proof of
the last sentence of the proposition.
\end{pf}
Let $Y := \mathtt{NB}(X)$.
Let $A^L_n(\beta)$ be the event that there are $\ge\beta n$
times $t \in[1, n]$ for which there
exist $1 \le s \le t \le u \le n$ such that $(Y_s, Y_{s+1}, \ldots, Y_u)$
is a cycle with $u - s < L$.
\begin{lemma}\label{l.boost}
Let $G$ be $d$-regular with $\operatorname{\mathtt{cogr}} (G) > 1$ and $\beta\in(0, 1)$.
For every $L < \infty$, if
\[
\mathbf{P} \bigl[A^L_n(\beta) \bigr] >
{c_o n (d-1)^{-\beta^2 n/6 + L/2} \over\operatorname{\mathtt{cogr}} (G) - 1},
\]
where $c_o$ is as in Proposition~\ref{p.cogrlimit},
then $\rho(G) > 2\sqrt{d-1}/d$.
If
\begin{equation}\label{e.cond1}
\limsup_{n \to\infty} \mathbf{P} \bigl[A^{L}_n(
\beta) \bigr]^{1/n} = 1,
\end{equation}
then
\[
\rho(G) > {\sqrt{d-1} \over d} \bigl((d-1)^{\beta/24} +
(d-1)^{-\beta/24} \bigr).
\]
\end{lemma}
\begin{pf}
We may suppose that $\rho(G) < 1$, as there is nothing to prove
otherwise.
Let $A_n(\beta, L)$ be the event that $Y \mathord{\upharpoonright}n$ has
at least $\beta n$ cycle times
and that $Y_n$ completes a cycle of length ${\le}L$.
Note that $
\mathbf{P} [A_k(\beta, L) ]
\ge
\mathbf{P} [A^L_n(\beta) ]
/n$ for
some $k \in[\beta n, n]$ by considering the last cycle completed.
Consider the following transformation $\mathcal{P}\mapsto\mathcal
{P}'$ of finite
nonbacktracking paths $\mathcal{P}$:
let $I$ be the collection of cycles in $\mathcal{P}$. Choose
(measurably) a maximal
subcollection $J$ as
in Lemma~\ref{l.vitali}. Excise the edges in $J$ from $\mathcal{P}$,
concatenate the
remainder, and remove backtracks to arrive at $\mathcal{P}'$.
Then $\mathcal{P}'$ is a nonbacktracking path without cycles and
$|\mathcal{P}| - |\mathcal{P}'|$
is at least $1/3$ the number of cycle times of $\mathcal{P}$.
Fix $n$.
Let $p_n(\beta, L) := \mathbf{P} (A_n(\beta, L) )$.
Let $q_n(\beta, L)$ be the probability
that the length of $\mathcal{P}'$ is at most $n - \beta n/3$.
By the last paragraph, we have $q_n(\beta, L) \ge p_n(\beta, L)$.
We define another transformation $\mathcal{P}\mapsto\mathcal{P}''$
as follows, where
$\mathcal{P}''$ will be a nonbacktracking cycle when $Y_n$ completes
a cycle:
Let\vspace*{1.5pt} $m := \min\{i; Y_i^{+} = Y_n^{+}\}$.
Let $s$ be minimal with $\mathcal{P}'$
ending in $(Y_s, Y_{s+1}, \ldots, Y_n)$ and define $\widehat{\mathcal{P}}$ by
$\mathcal{P}' = \widehat{\mathcal{P}}.(Y_s, Y_{s+1}, \ldots, Y_n)$, where
$.$ indicates concatenation.
Since $\mathcal{P}'$ has no cycles, if $m < n$ (which it is if $Y_n$ completes
a cycle),
then $m < s$.
Now define $\mathcal{P}'' := \mathcal{P}.\overline{\mathcal{P}'}$
if $s = n$, where the bar indicates path reversal,
or else $\mathcal{P}'' := \mathcal{P}.(Y_{m+1}, Y_{m+2}, \ldots,
Y_s).\overline{\widehat{\mathcal{P}}}$.
Write $b := d-1$.
On the event
$A_n(\beta, L)$, we have that $\mathcal{P}''$ is a nonbacktracking cycle
with length at most $2n-\beta n/3 + L$. Furthermore, the map $\mathcal{P}
\mapsto\mathcal{P}''$ is injective because the first part of
$\mathcal{P}''$ is simply
$\mathcal{P}$.
Therefore, Proposition~\ref{p.cogrlimit} provides a constant $c_o$
such that
\[
d b^{n-1} q_n(\beta, L) \le \sum
_{k \le2n-\beta n/3 + L} c_o b_k(o),
\]
whence
\[
q_n(\beta, L) \le {c_o \operatorname{\mathtt{cogr}} (G)^{2n-\beta n/3 + L} \over b^{n} (\operatorname{\mathtt{cogr}} (G) - 1)}.
\]
For some $k \ge\beta n$,
we have
\[
{\mathbf{P} [A^L_n(\beta) ] \over n} \le p_k(\beta, L) \le q_k(
\beta, L) \le {c_o \operatorname{\mathtt{cogr}} (G)^{2k-\beta k/3 + L} \over b^{k} (\operatorname{\mathtt{cogr}} (G) - 1)}.
\]
It follows that if $\operatorname{\mathtt{cogr}} (G) \le\sqrt b$, then
the last quantity above is
\[
\le {c_o b^{-\beta k/6 + L/2} \over\operatorname{\mathtt{cogr}} (G) - 1} \le {c_o b^{-\beta^2 n/6 + L/2} \over\operatorname{\mathtt{cogr}} (G) - 1},
\]
which proves the first part of the lemma.
Similarly, if
(\ref{e.cond1}) holds, then
\[
\operatorname{\mathtt{cogr}} (G) \ge b^{1/(2-\beta/3)} > b^{1/2+\beta/12},
\]
whence by Theorem~\ref{t.cogrowth},
\[
\rho(G) > { b^{1/2+\beta/12} + b^{1/2-\beta/12} \over d}.
\]
\upqed\end{pf}
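For completeness, the exponent comparison behind the bound $\operatorname{\mathtt{cogr}} (G) \ge b^{1/(2-\beta/3)} > b^{1/2+\beta/12}$ is elementary: for $\beta\in(0, 1)$,
\[
\frac{1}{2-\beta/3} = \frac{1}{2} + \frac{\beta/3}{2(2-\beta/3)} > \frac{1}{2} + \frac{\beta/3}{4} = \frac{1}{2} + \frac{\beta}{12},
\]
since $2 - \beta/3 < 2$; as $b = d-1 \ge 2$, the strict inequality between the powers of $b$ follows.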
We remark that with more work, we may let $L := \infty$ in (\ref{e.cond1}).
\begin{pf*}{Proof of Theorem~\ref{t.AGVfinite}}
Let $X = \langle{X_n}\rangle$ be simple random walk on $G$ and
$\widehat{X}=
\langle{\widehat{X}_n}\rangle$ be its lift to the universal cover
$T$ of $G$.
Fix $n$.
Let $\mathtt{Good}$ be the event that $C_n(\alpha, X) = 1$.
We may choose $L \le34\log(10{,}368/\alpha)$ so that the number $r
_L$ of
Lemma~\ref{l.escape-tails} satisfies $r_L < \alpha/8$.
Fix such an $L$.
Let $\mathtt{Long}:= [|I(n, L)| \ge\alpha n / 2 ]$.
By Lemma~\ref{l.whenlong} (using $\beta:= \alpha/2$), we have that
\[
\mathbf{E} \bigl[\mathbf{1}_{\mathtt{Long}} \bigl(1 - C_n\bigl(
\alpha /(4L), \mathtt{NB}(X)\bigr) \bigr) \bigr] < (8/9)^{\alpha n/32}.
\]
Let $\mathtt{Loop}:= [|I_{\circ}(n, L)| \ge\alpha n/(8L) ]$.
Then by Lemma~\ref{l.whenloop} (using $\beta:= \alpha/4$),
\[
\mathbf{E} \bigl[{\mathbf{1}}_{\mathtt{Loop}} \bigl(1 - C_n\bigl(
\eta _1\alpha/\bigl(8L^2+8L\bigr), \mathtt{NB}(X)\bigr)
\bigr) \bigr] < e^{-\delta_1 n}
\]
for some $\eta_1, \delta_1 > 0$.
On the event $(\mathtt{Good}\setminus\mathtt{Long}\setminus\mathtt{Loop})$,
there are $\ge\alpha n/4$
times $t \in[1, n]$ for which there
exist $1 \le s \le t \le u \le n$ such that $(X_s, X_{s+1}, \ldots, X_u)$
is an NT-cycle with $1 \le u - s < L$ and that does not contain any loops;
this is because every loop can be contained in at most $2L$ NT-cycles of
length at most $L$ in $X \mathord{\upharpoonright}n$.
By Lemma~\ref{l.vitali},
on the event $(\mathtt{Good}\setminus\mathtt{Long}\setminus\mathtt{Loop})$,
there are ${\ge}\alpha n/(12 L)$ disjoint nonloop NT-cycles in $X
\mathord{\upharpoonright}
n$.
Within every NT-cycle, there is an FNT-cycle.
Thus, on the event $(\mathtt{Good}\setminus\mathtt{Long}\setminus
\mathtt{Loop})$,
there are ${\ge}\alpha n/(12 L)$ disjoint nonloop FNT-cycles in $X
\mathord{\upharpoonright}n$, that is, $D(n) > \alpha n/(12L)$
in the notation of Lemma~\ref{l.whenmany}.
Applying that lemma with $\beta:= \alpha/(12L)$ yields
\[
\mathbf{E} \bigl[{\mathbf{1}}_{\mathtt{Good}\setminus\mathtt
{Long}\setminus\mathtt{Loop}} \bigl(1 - C_n\bigl(
\eta_2 \alpha/(12L), \mathtt{NB}(X)\bigr) \bigr) \bigr] <
e^{-\delta_2 n}
\]
for some $\eta_2, \delta_2 > 0$.
Thus, the statement of the theorem holds with $\zeta:= \min
\{(\alpha/32)\log(9/8), \delta_1, \delta_2\}$ and $\gamma:= \min
\{\eta_1/12, \eta_2\}$.
Now we prove the second part of the theorem.
Suppose that $G$ is $d$-regular.
We may also suppose that $\rho(G) < (8/9)^{1/4}$, as there is nothing
to prove
otherwise.
Choose $L$ so that $L/\log L \ge2853/\alpha$, which is ${>}84/ (\alpha\log(1/\rho) )$.
Lemma~\ref{l.longcycles} then ensures that the above event $\mathtt
{Long}$ has
exponentially small probability:
\[
\mathbf{P}(\mathtt{Long}) < \frac{\rho^{\alpha n/84}}{1 - \rho}.
\]
Let $Y := \mathtt{NB}(X)$.
Although we did not state it, our proofs of Lemmas \ref{l.whenloop}
and~\ref{l.whenmany} provide many cycle times of $Y$ that occur in
cycles of length${}\le L$, that is, they show that the event
$A^L_n(\beta)$
occurs with high probability for certain $\beta$.
Thus,
\[
\mathbf{P} \bigl[\mathtt{Good}\setminus\mathtt{Long}\setminus
A^L_n(\hat\alpha) \bigr] < {\rho^{\alpha n/84} \over1 - \rho} +
e^{-\zeta n}.
\]
It follows by Lemma~\ref{l.boost} that if
\[
\mathbf{P}(\mathtt{Good}) \ge \frac{c_o n (d-1)^{-\alpha^2 n/24 + L/2}}{\operatorname{\mathtt{cogr}} (G) - 1} + \frac{\rho^{\alpha n/84}}{1 - \rho}
+ e^{-\zeta n},
\]
then $\rho(G) > 2\sqrt{d-1}/d$.
Finally, if
$\limsup_{n \to\infty} [\mathbf{E}C_n(\alpha, X) ]^{1/n}
= 1$, then
$\limsup_{n \to\infty}
\mathbf{P} [A^L_n(\hat\alpha) ]^{1/n} = 1$, so
Lemma~\ref{l.boost}
completes the proof.
\end{pf*}
\begin{remark}\label{r.rho}
Instead of requiring all degrees in $G$ to be at least 3, one could require
that $\rho(G) < 1$. A similar result holds.
\end{remark}
\begin{pf*}{Proof of Theorem~\ref{t.main2}}
Let $\mathcal{P}$ be an infinite path. Write $\alpha_n$ for the
number of NT-cycle
times${}\le n$ in $\mathcal{P}$, divided by $n$. Since we count here
cycles that may
end after time $n$, this may be larger than the density $\beta_n$
of NT-cycle times in $\mathcal{P}\mathord{\upharpoonright}n$.
However, we claim that $\limsup_{n \to\infty} \beta_n \ge\limsup_{n
\to\infty} \alpha_n$, whence the limsups are equal.
Suppose that $\alpha_n > \beta_n$.
Then there is some NT-cycle time $t \le n$ that belongs to an NT-cycle that
ends at some time $s > n$.
Every time in $[t, s]$ then is an NT-cycle time for $\mathcal{P}$.
It follows that $\beta_s \ge\alpha_n$, and this proves the claim.
It is now clear that Theorem~\ref{t.main2} follows from Theorem~\ref{t.AGV2}.
\end{pf*}
\begin{pf*}{Proof of Theorem~\ref{t.main}}
The proof follows just as for Theorem~\ref{t.main2}.
\end{pf*}
\section{Cycle encounters}\label{s.enc}
Here, we prove Theorem~\ref{t.prob10}.
We first sketch the proof that $q_n \to0$.
Assume the random walk has a good chance of encountering a short cycle
at a large time $n$. Because of the inherent fluctuations of
random walk, the time at which it reaches such a short cycle cannot be pinned down
precisely; there must be many times around $n$ with approximately the same chance.
This means that there are actually many short cycles and if we look at
how many are encountered at times around $n$, we will have a good chance
of seeing many. This means the cycles are relatively dense (for random
walk) in that part of the graph, which boosts the cogrowth and hence the
spectral radius.
We begin by proving the following nonconcentration property of simple
random walk on regular graphs.\vspace*{-3pt}
\begin{lemma}\label{l.nonconcen}
Write $p_n(\cdot, \cdot)$ for the $n$-step transition probability of
simple random walk on a given graph.
Let $d < \infty$ and $\varepsilon> 0$. There exists $c > 0$ such that for
every $d$-regular graph $G$, every $o \in\mathtt{V}(G)$, and every $n
\ge1$, there exists $A \subseteq\mathtt{V}(G)$ that has the property\vspace*{-2pt} that
\begin{equation}\label{e.Abig}
p_n(o, A) > 1 - \varepsilon
\end{equation}
and\vspace*{-2pt}
\begin{equation}\label{e.Agood}
\forall x \in A, \forall k \in [0, \sqrt n ] \qquad p_{n+2k}(o, x) \ge
c p_n(o, x).
\end{equation}
\end{lemma}
\begin{pf}
Write $Q_n(j)$ for the probability that a binomial random variable with
parameters $\lfloor{n/2}\rfloor$ and $1/d$ takes the value $j$.
Given $\varepsilon$, define $c'$ so that\vspace*{-2pt}
\[
\sum_{|j - n/(2d)| \le c' \sqrt n} Q_n(j) > 1 -
\varepsilon^2.
\]
It has been known since the time of de Moivre\vspace*{-1pt} that
\[
Q_{n+2k}(j+k) \ge c Q_n(j)
\]
whenever $n \ge0$, $k \in[0, \sqrt n]$, and
$|j-n/(2d)| \le c' \sqrt n$.
Given $G$, $o\in\mathtt{V}(G)$, and $n \ge1$, let $X_1, \ldots,
X_n$ be
$n$ steps of simple random walk on $G$ starting with $ X_1^{-} = o$.
Define\vspace*{-2pt}
\[
Z := \bigl|\bigl\{i \in[1, n/2]; X_{2i-1} = - X_{2i}\bigr\}\bigr|.
\]
The events $ [X_{2i-1} = - X_{2i} ]$ are Bernoulli trials with
probability $1/d$ each, whence $Z$ has a binomial distribution with
parameters $\lfloor{n/2}\rfloor$ and $1/d$.
Thus,
\[
\mathbf{P} \bigl[\bigl|Z - n/(2d)\bigr| \le c' \sqrt n \bigr] > 1 -
\varepsilon^2
\]
by choice of $c'$.
Define\vspace*{-2pt}
\[
A := \bigl\{ x \in\mathtt{V}(G); \mathbf{P} \bigl[\bigl|Z - n/(2d)\bigr| \le
c' \sqrt n | X_{n}^{+} = x \bigr] > 1 -
\varepsilon \bigr\}.
\]
Since\vspace*{-6pt}
\begin{eqnarray*}
1 - \varepsilon^2 &<& \mathbf{P} \bigl[\bigl|Z - n/(2d)\bigr| \le
c' \sqrt n \bigr]
\\[-2pt]
&=& \sum_{ x \in\mathtt{V}(G)} p_n(o, x) \mathbf{P}
\bigl[\bigl|Z - n/(2d)\bigr| \le c' \sqrt n | X_{n}^{+}
= x \bigr]
\\[-2pt]
&\le& p_n(o, A) + \bigl(1 - p_n(o, A) \bigr) (1-
\varepsilon),
\end{eqnarray*}
we obtain (\ref{e.Abig}).
If we excise all even backtracking pairs $(X_{2i-1}, X_{2i})$ ($1 \le i
\le
n/2$) from the path $(X_1, \ldots, X_n)$, then we obtain simple random walk
for $n - 2Z$ steps conditioned not to have any even-time step be a backtrack.
Given $k \in[0, \sqrt n]$,
let $X'_1, \ldots, X'_{n+2k}$ be simple random walk from $o$
coupled with $X$ as follows:
Define
\[
Z' := \bigl|\bigl\{i \in\bigl[1, (n+2k)/2\bigr]; X'_{2i-1}
= - X'_{2i}\bigr\}\bigr|.
\]
By choice of $c$, we have $\mathbf{P}[Z' = j+k] \ge c \mathbf{P}[Z =
j]$ whenever $|j -
n/(2d)| \le c' \sqrt n$.
Thus, we may couple $X'$ and $X$ so that $Z' = Z+k$ with probability at
least~$c$ whenever $ X_n^{+} \in A$.
Furthermore, we may assume that the coupling is such that when $Z' = Z+k$
and we excise from each path the even backtracking pairs,
then what remains in $X'$ is the same as in $X$.
This implies that with probability at least $c$, we have ${X'}_{n+2k}^{+}
= X_n^{+}$ whenever $X_n^+ \in A$.
This gives (\ref{e.Agood}).
\end{pf}
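We sketch for completeness where the de Moivre-type estimate $Q_{n+2k}(j+k) \ge c\, Q_n(j)$ comes from, writing $m := \lfloor{n/2}\rfloor$ and $p := 1/d$. Since $Q_{n+2k}$ is the distribution of a binomial random variable with parameters $m+k$ and $p$,
\[
\frac{Q_{n+2k}(j+k)}{Q_n(j)}
= p^k\, \frac{\binom{m+k}{j+k}}{\binom{m}{j}}
= \prod_{i=1}^{k} \frac{p(m+i)}{j+i},
\]
the powers of $1-p$ cancelling. For $|j - n/(2d)| \le c'\sqrt n$ and $k \in[0, \sqrt n]$, each factor is $1 + O(1/\sqrt n)$, so the product is bounded below by a constant $c > 0$ depending only on $d$ and $c'$.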
\begin{theorem}
Let $G$ be an infinite Ramanujan graph and $L \ge1$. Let
$q_n$ be
the probability that simple random walk at time $n$ lies on a nontrivial
cycle of length at most $L$. Then $\lim_{n \to\infty} q_n =
0$.
\end{theorem}
\begin{pf}
Let $S$ be the set of vertices that lie on a \textit{simple} cycle of
length at
most~$L$, so that $q'_n := \mathbf{P} [ X_n^- \in S ] =
\Omega(q_n)$ for
$n \ge L$.
Suppose that $q'_n > 2\varepsilon$.
Choose $A$ and $c$ as in the lemma.
Then $\mathbf{P} [ X_n^- \in A \cap S ] \ge\varepsilon$, whence
$\mathbf{P} [ X_{n+2k}^- \in A \cap S ] \ge c \varepsilon$
for $k \in [0, \sqrt n ]$.
Let $I^L(n_1, n_2)$ be the number of
times $t \in[n_1, n_2]$ for which there
exist $n_1 \le s \le t \le u \le n_2$
such that $(X_s, X_{s+1}, \ldots, X_u)$
is a cycle with $u - s \le L$.
Then $\mathbf{E}_o [I^L(n, n + \sqrt n - 1), X_n^- \in S
] \ge c'
\sqrt
n$ for some constant $c' > 0$ (depending only on $c \varepsilon$).
Thus, there is some vertex $x \in S$ (a value of $ X_{n}^-$) for which
$\mathbf{E}_x [I^L(1, \sqrt n) ] \ge c' \sqrt n$.
This also means
\[
\mathbf{P}_x \bigl[I^L(1, \sqrt n) \ge c'
\sqrt n/2 \bigr] \ge c'/2.
\]
Then Theorem~\ref{t.AGVfinite} completes the argument when $n$ is sufficiently
large since $c_x \le2+2L\operatorname{\mathtt{cogr}} (G)^{L-2}$.
[The case $\operatorname{\mathtt{cogr}} (G) = 1$ is immediate.]
Alternatively, one can appeal to Lemmas \ref{l.whenmany} and \ref{l.boost} instead of Theorem~\ref{t.AGVfinite}.
\end{pf}
\printaddresses
\end{document} |
\begin{document}
\title[Widom-Rowlinson model and regular graphs]{The Widom-Rowlinson model, the hard-core model and the extremality of the complete graph}
\author[E.~Cohen]{Emma Cohen}
\address{School of Mathematics, Georgia Institute of Technology\\ Atlanta, GA 30332-0160}
\email{[email protected]}
\author[P.~Csikv\'ari]{P\'{e}ter Csikv\'{a}ri}
\address{Massachusetts Institute of Technology \\ Department of Mathematics \\
Cambridge MA 02139 \& MTA-ELTE Geometric and Algebraic Combinatorics Research Group
\\ H-1117 Budapest
\\ P\'{a}zm\'{a}ny P\'{e}ter s\'{e}t\'{a}ny 1/C \\ Hungary}
\email{[email protected]}
\author[W.~Perkins]{Will Perkins}
\address{School of Mathematics, University of Birmingham, UK}
\email{[email protected]}
\author[P.~Tetali]{Prasad Tetali}
\address{School of Mathematics and School of Computer Science, Georgia Institute of Technology\\ Atlanta, GA 30332-0160}
\email{[email protected]}
\thanks{The second author is partially supported by the National Science Foundation under grant no. DMS-1500219, by the MTA R\'enyi ``Lend\"ulet'' Groups and Graphs Research Group, by the ERC Consolidator Grant 648017, and by the Hungarian National Research, Development and Innovation Office, NKFIH grant K109684. Research of the last author is supported in part by the NSF grant DMS-1407657.}
\subjclass[2010]{Primary: 05C35. Secondary: 05C31, 05C70, 05C80}
\keywords{graph homomorphisms, Widom-Rowlinson model, hard-core model}
\begin{abstract} Let $H_{\mathrm{WR}}$ be the path on $3$ vertices with a loop at each vertex. D.~Galvin \cite{Gal1,Gal2} conjectured, and E.~Cohen, W.~Perkins and P.~Tetali \cite{CPT} proved that for any $d$-regular simple graph $G$ on $n$ vertices we have
$$\textbf{h}om(G,H_{\mathrm{WR}})\leq \textbf{h}om(K_{d+1},H_{\mathrm{WR}})^{n/(d+1)}.$$
In this paper we give a short proof of this theorem together with the proof of a conjecture of Cohen, Perkins and Tetali \cite{CPT}. Our main tool is a simple bijection between the Widom-Rowlinson model and the hard-core model on another graph. We also give a large class of graphs $H$ for which we have
$$\textbf{h}om(G,H)\leq \textbf{h}om(K_{d+1},H)^{n/(d+1)}.$$
In particular, we show that the above inequality holds if $H$ is a path or a cycle of even length at least $6$ with loops at every vertex.
\end{abstract}
\maketitle
\section{Introduction} For graphs $G$ and $H$, with vertex and edge sets $\V{G}, \E{G}, \V{H}$, and $\E{H}$ respectively, a map $\varphi:\V{G}\to \V{H}$ is a homomorphism if $(\varphi(u),\varphi(v))\in \E{H}$ whenever $(u,v)\in \E{G}$. The number of homomorphisms from $G$ to $H$ is denoted by $\textbf{h}om(G,H)$. When $H=H_{\mathrm{ind}}$, an edge with a loop at one end, homomorphisms from $G$ to $H_{\mathrm{ind}}$ correspond to independent sets in the graph $G$, and so $\textbf{h}om(G,H_{\mathrm{ind}})$ counts the number of independent sets in $G$.
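As a quick illustration (not part of the paper), homomorphism counts can be checked by brute force on small graphs. The Python sketch below, with names of our own choosing, verifies that $\textbf{h}om(G,H_{\mathrm{ind}})$ counts the independent sets of $G$ for $G=C_4$.

```python
from itertools import product

def hom(n, G_edges, H_verts, H_adj):
    """Count homomorphisms from G (on vertices 0..n-1) to H by brute force.
    H_adj is a set of ordered pairs, closed under swapping; a loop at a
    vertex a of H appears as the pair (a, a)."""
    return sum(
        all((phi[u], phi[v]) in H_adj for u, v in G_edges)
        for phi in product(H_verts, repeat=n)
    )

# H_ind: an edge 0-1 with a loop at vertex 0 ('1' means occupied),
# so hom(G, H_ind) is the number of independent sets of G.
H_ind_verts = [0, 1]
H_ind_adj = {(0, 0), (0, 1), (1, 0)}

# G = C_4: its independent sets are the empty set, 4 singletons and
# the 2 diagonal pairs, 7 in total.
C4_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(hom(4, C4_edges, H_ind_verts, H_ind_adj))  # 7
```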
For a given $H$, the set of homomorphisms from $G$ to $H$ correspond to valid configurations in a corresponding statistical physics model with \emph{hard constraints} (forbidden local configurations). The independent sets of $G$ are the valid configurations of the \emph{hard-core model} on $G$, a model of a random independent set from a graph. Another notable case is when $H=H_{\mathrm{WR}}$, a path on $3$ vertices with a loop at each vertex. In this case, we can imagine a homomorphism from $G$ to $H_{\mathrm{WR}}$ as a $3$-coloring of the vertex set of $G$ subject to the requirement that a blue and a red vertex cannot be adjacent (with white vertices considered unoccupied); such a coloring is called a \emph{Widom-Rowlinson configuration} of $G$, from the Widom-Rowlinson model of two particle types which repulse each other \cite{WR,BHW}. See Figure \ref{fig:H-examples}.
\begin{figure}
\caption{The target graphs for the Widom-Rowlinson model and the hard-core model.}
\label{fig:H-examples}
\end{figure}
For a fixed graph $H$, it is natural to study the normalized graph parameter
$$p_H(G):=\textbf{h}om(G,H)^{1/|\V{G}|},$$
where $|\V{G}|$ denotes the number of vertices of the graph $G$.
For $H=H_{\mathrm{ind}}$, J.~Kahn \cite{Kahn} proved that for any $d$-regular bipartite graph $G$,
$$p_{H_{\mathrm{ind}}}(G)\leq p_{H_{\mathrm{ind}}}(K_{d,d}),$$
where $K_{d,d}$ is the complete bipartite graph with classes of size $d$. Y.~Zhao \cite{Zhao1} showed that one could drop the condition of bipartiteness in Kahn's theorem. That is, he showed that $p_{H_{\mathrm{ind}}}(G)\leq p_{H_{\mathrm{ind}}}(K_{d,d})$, for \emph{any} $d$-regular graph $G$. Y.~Zhao proved his result by reducing the general case to the bipartite case with a clever trick. He proved that
$$p_{H_{\mathrm{ind}}}(G)\leq p_{H_{\mathrm{ind}}}(G\times K_2),$$
where $G\times K_2$ is the bipartite graph obtained by replacing every vertex $u$ of $\V{G}$ by a pair of vertices $(u,0)$ and $(u,1)$ and replacing every edge $(u,v)\in \E{G}$ by the pair of edges $((u,0),(v,1))$ and $((u,1),(v,0))$. This is clearly a bipartite graph, and if $G$ is $d$-regular then $G\times K_2$ is still $d$-regular.
D.~Galvin \cite{Gal1,Gal2} conjectured a different behavior for $H = H_{\mathrm{WR}}$: that instead of $K_{d,d}$, the complete graph $K_{d+1}$ maximizes $p_{H_{\mathrm{WR}}}(G)$ among $d$-regular graphs $G$. E.~Cohen, W.~Perkins and P.~Tetali \cite{CPT} proved that this was indeed the case:
\begin{Th} \label{WR} \cite{CPT} For any $d$-regular simple graph $G$ on $n$ vertices we have
$$p_{H_{\mathrm{WR}}}(G)\leq p_{H_{\mathrm{WR}}}(K_{d+1});$$
in other words,
$$\textbf{h}om(G,H_{\mathrm{WR}})\leq \textbf{h}om(K_{d+1},H_{\mathrm{WR}})^{n/(d+1)}.$$
\end{Th}
One of the goals of this paper is to give a very simple proof of this fact\footnote{In fact, Theorem~\ref{WR} follows from a stronger result in \cite{CPT} that the Widom-Rowlinson {\em occupancy fraction} is maximized by $K_{d+1}$. We note that this stronger result also follows from the transformation below and Theorem 1 of \cite{davies2015independent}.}, along with a slight generalization. We use a trick similar to that used by Y.~Zhao \cite{Zhao1,Zhao2}. We will need the following definition:
\begin{Def} The \emph{extended line graph} $\widetilde{H}$ of a (bipartite) graph $H$ has $\V{\widetilde{H}} = \E{H}$; two edges $e$ and $f$ of $H$ are adjacent in $\widetilde{H}$ if
\begin{enumerate}[(a)]
\item $e=f$,
\item $e$ and $f$ share a common vertex, or
\item $e$ and $f$ are opposite edges of a $4$-cycle in $H$.
\end{enumerate}
\end{Def}
Throughout, $\V{H}$ and $\E{H}$ refer to the vertex-set and edge-set, respectively, of the graph $H$. If $H$ is bipartite, we use $\A{H}$ and $\B{H}$ to refer to the parts of a fixed bipartition.
Now we can give a generalization of Theorem~\ref{WR}:
\begin{Th} \label{gen} If $\widetilde{H}$ is the extended line graph of a bipartite graph $H$, then for any $d$-regular simple graph $G$ on $n$ vertices we have
$$p_{\widetilde{H}}(G)\leq p_{\widetilde{H}}(K_{d+1}),$$
or in other words,
$$\textbf{h}om(G,\widetilde{H})\leq \textbf{h}om(K_{d+1},\widetilde{H})^{n/(d+1)}.$$
\end{Th}
To see that Theorem~\ref{gen} is a generalization of Theorem~\ref{WR} it suffices to check that $H_{\mathrm{WR}}$ is precisely the extended line graph of the path on $4$ vertices.
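This last check can also be carried out mechanically. The Python sketch below (our own, not from the paper) computes the extended line graph from the definition above and confirms that the extended line graph of the path on $4$ vertices is a path on $3$ vertices with a loop at each vertex, i.e.\ $H_{\mathrm{WR}}$.

```python
from itertools import combinations

def extended_line_graph(edges):
    """Extended line graph: vertices are the edges of H; e ~ f iff
    e = f (so every vertex gets a loop), e and f share an endpoint,
    or e and f are opposite edges of a 4-cycle of H."""
    E = [frozenset(e) for e in edges]
    Eset = set(E)
    adj = {(e, e) for e in E}          # loops at every vertex
    for e, f in combinations(E, 2):
        if e & f:
            joined = True              # shared endpoint
        else:
            x, y = sorted(e)
            z, w = sorted(f)
            # opposite edges of a 4-cycle: one of the two cross pairings
            # must consist of edges of H
            joined = ({frozenset({x, z}), frozenset({y, w})} <= Eset or
                      {frozenset({x, w}), frozenset({y, z})} <= Eset)
        if joined:
            adj.add((e, f)); adj.add((f, e))
    return E, adj

# P_4 = a1 - b1 - a2 - b2: its extended line graph is a path on the
# three edges with a loop at each vertex, i.e. exactly H_WR.
E, adj = extended_line_graph([("a1", "b1"), ("b1", "a2"), ("a2", "b2")])
print(len(E), len(adj))  # 3 vertices; 3 loops + 2 symmetric edges = 7 pairs
```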
In Section~\ref{extension} we will prove a slight generalization of Theorem~\ref{gen} which allows for weights on the vertices of $H$.
\section{Short proof of Theorem~\ref{WR}}
We are not the first to notice the following connection between the Widom-Rowlinson model and the hard-core model (see, e.g., Section 5 of \cite{BHW}):
Given a graph $G$, let $G'$ be the bipartite graph with vertex set $\V{G'} = \V{G}\times \{0,1\}$, where $(u,0)$ and $(v,1)$ are adjacent in $G'$ whenever either $(u,v)\in \E{G}$ or $u=v$. That is, $G'$ is $G\times K_2$ with the extra edges $((u,0),(u,1))$ for all $u\in \V{G}$. We will show that
$$\textbf{h}om(G,H_{\mathrm{WR}})=\textbf{h}om(G',H_{\mathrm{ind}}).$$
Indeed, consider an independent set $I$ in $G'$. Color $u\in \V{G}$ blue if $(u,1)\in I$, red if $(u,0)\in I$, and white if it is neither red nor blue. Note that since $I$ is an independent set and $((u,0),(u,1))\in \E{G'}$, the color of vertex $u$ is well-defined and this coloring is in fact a Widom-Rowlinson coloring of $G$. The same construction also works in the other direction, so
$$\textbf{h}om(G,H_{\mathrm{WR}})=\textbf{h}om(G',H_{\mathrm{ind}}).$$
If $G$ is $d$-regular then $G'$ is $(d+1)$-regular, and $K_{d+1}' = K_{d+1,d+1}$. Applying J. Kahn's result \cite{Kahn} for $(d+1)$-regular bipartite graphs, we see that if $G$ has $n$ vertices then
\begin{align*}
\textbf{h}om(G,H_{\mathrm{WR}})&=\textbf{h}om(G',H_{\mathrm{ind}})\\
&\leq \textbf{h}om(K_{d+1,d+1},H_{\mathrm{ind}})^{2n/(2(d+1))}=\textbf{h}om(K_{d+1},H_{\mathrm{WR}})^{n/(d+1)}.
\end{align*}
We remark that the transformation $G\to G'$ is also mentioned in \cite{KLL}.
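The bijection above is easy to test numerically. The following Python sketch (our own; brute force, small graphs only) checks $\textbf{h}om(G,H_{\mathrm{WR}})=\textbf{h}om(G',H_{\mathrm{ind}})$ for $G=K_3$ and $G=C_5$.

```python
from itertools import product

def hom(n, edges, H_verts, H_adj):
    """Brute-force homomorphism count from G (vertices 0..n-1) to H."""
    return sum(all((phi[u], phi[v]) in H_adj for u, v in edges)
               for phi in product(H_verts, repeat=n))

def G_prime(n, edges):
    """Edges of G': (u,0) ~ (v,1) iff u = v or uv in E(G); the side-0
    copy of u is vertex u and the side-1 copy is vertex n+u."""
    adj = set(edges) | {(v, u) for u, v in edges}
    return [(u, n + v) for u in range(n) for v in range(n)
            if u == v or (u, v) in adj]

# H_WR: path r - w - b with a loop at each vertex.
WR_verts = ["r", "w", "b"]
WR_adj = ({(x, x) for x in WR_verts} |
          {("r", "w"), ("w", "r"), ("w", "b"), ("b", "w")})
# H_ind: edge 0 - 1 with a loop at 0.
IND_verts = [0, 1]
IND_adj = {(0, 0), (0, 1), (1, 0)}

K3 = [(0, 1), (1, 2), (0, 2)]
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
for n, edges in [(3, K3), (5, C5)]:
    lhs = hom(n, edges, WR_verts, WR_adj)
    rhs = hom(2 * n, G_prime(n, edges), IND_verts, IND_adj)
    print(lhs == rhs)  # True both times; for K_3 both counts equal 15
```

Note that for $G=K_3$ the graph $G'$ is $K_{3,3}$, whose $15$ independent sets match the $15$ Widom-Rowlinson configurations of the triangle.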
\section{Extension} \label{extension}
In this section we would like to point out that for every graph $H$ there is an $\widetilde{H}$ such that
$$\textbf{h}om(G,\widetilde{H})=\textbf{h}om(G',H),$$
where $G'$ is the bipartite graph defined in the previous section. Exactly the same argument we used for $H_{\mathrm{WR}}$ will work for any graph $\widetilde{H}$ constructed in this manner. Actually, the situation is even better. To give the most general version we need a definition.
\begin{Def} Let $G$ be a bipartite graph. Let $H$ be another bipartite graph equipped with a weight function $\nu: \V{H}\to \mathbb{R}_+$. Let $\mathbb{I}_{\E{H}}: \A{H}\times \B{H}\to \{0,1\}$ denote the characteristic function of $\E{H}$. Define
$$Z_b(G,H)=\sum_{\substack{\varphi:\V{G}\to \V{H}\\\varphi(\A{G})\subseteq \A{H}\\\varphi(\B{G})\subseteq \B{H}}}\prod_{(a,b)\in \E{G}} \mathbb{I}_{\E{H}}(\varphi(a),\varphi(b))\prod_{w \in \V{G}}\nu(\varphi(w)).$$
(The subscript $b$ stands for bipartite.)
If $G$ and $H$ are not necessarily bipartite graphs, but $H$ is a weighted graph we can still define
$$Z(G,H)=\sum_{\varphi: \V{G}\to \V{H}}\prod_{(u,v)\in \E{G}}\mathbb{I}_{\E{H}}(\varphi(u),\varphi(v))\prod_{w \in \V{G}}\nu(\varphi(w)).$$
\end{Def}
In the language of statistical physics, $Z_b(G,H)$ and $Z(G,H)$ are \emph{partition functions}.
Somewhat surprisingly, J.~Kahn's result holds even in this general case, as shown by D.~Galvin and P.~Tetali \cite{GalTet}.
\begin{Th} \label{gen3} \cite{GalTet} For any bipartite graph $H$ equipped with the weight function $\nu: \V{H}\to \mathbb{R}_+$ and $\mathbb{I}_{\E{H}}: \A{H}\times \B{H}\to \{0,1\}$, and for any $d$-regular simple graph $G$ on $n$ vertices,
$$Z_b(G,H)\leq Z_b(K_{d,d},H)^{n/(2d)}.$$
\end{Th}
The key observation is that for a bipartite graph $H$ equipped with the weight function $\nu: \V{H}\to \mathbb{R}_+$ and characteristic function $\mathbb{I}_{\E{H}}: \A{H}\times \B{H}\to \{0,1\}$, we can define a weighted graph $\widetilde{H}$ with weight function $\widetilde{\nu}$ and characteristic function $\mathbb{I}_{\E{\widetilde{H}}}$ such that
\begin{align}\label{eqn:transformation}
Z(G,\widetilde{H})=Z_b(G',H)\,,
\end{align}
for any graph $G$ (where $G'$ is the modification of $G$ defined in the previous section).
Indeed, construct $\widetilde{H}$ with vertex set $\A{H}\times \B{H}$, edges
$$\mathbb{I}_{\E{\widetilde{H}}}((a_1,b_1),(a_2,b_2))=\mathbb{I}_{\E{H}}(a_1,b_2)\mathbb{I}_{\E{H}}(a_2,b_1),$$
and weight function
$$\widetilde{\nu} (a,b)=\nu(a)\nu(b)\mathbb{I}_{\E{H}}(a,b).$$
In effect, the vertex set of $\widetilde{H}$ is only the edges of $H$ (since non-edge pairs are given weight $0$).
Now, for a map $\varphi:G'\to H$, we can consider the map $\widetilde{\varphi}: G\to \widetilde{H}$ given by
$$\widetilde{\varphi}(u)=(\varphi((u,0)),\varphi((u,1))).$$
By the construction of the graphs $G'$ and $\widetilde{H}$, the contribution of $\varphi$ to $Z_b(G',H)$ is the same as the contribution of $\widetilde{\varphi}$ to $Z(G,\widetilde{H})$, and the result \eqref{eqn:transformation} follows.
Finally, applying Theorem~\ref{gen3} to the $(d+1)$-regular graph $G'$ yields
$$Z(G,\widetilde{H})=Z_b(G',H)\leq Z_b(K_{d,d},H)^{2n/(2(d+1))}=Z(K_{d+1},\widetilde{H})^{n/(d+1)}.$$
Hence we have proved the following theorem.
\begin{Th} \label{weighted-main} For a bipartite graph $H=(A,B,E)$ with vertex weight function $\nu: \V{H}\to \mathbb{R}_+$ let
$\widetilde{H}$ be the following weighted graph: its vertex set is $E(H)$, its edge set is defined by
$((a_1,b_1),(a_2,b_2))\in E(\widetilde{H})$ if and only if $(a_1,b_2)\in E(H)$ and $(a_2,b_1) \in E(H)$,
and the weight function on the vertex set is $\widetilde{\nu} (a,b)=\nu(a)\nu(b)$ for $(a,b)\in E(H)$.
Then for any $d$--regular simple graph $G$ on $n$ vertices we have
$$Z(G,\widetilde{H})\leq Z(K_{d+1},\widetilde{H})^{n/(d+1)}.$$
\end{Th}
We can obtain Conjecture 3 of \cite{CPT} as a corollary by applying this theorem in the case where $H$ is the path on $4$ vertices, $a_1 b_1 a_2 b_2$, with appropriate vertex weights. Indeed, if $\nu(a_1)=1$, $\nu(b_1)=\lambda_b$, $\nu(a_2)=\frac{\lambda_w}{\lambda_b}$, $\nu(b_2)=\frac{\lambda_r\lambda_b}{\lambda_w}$ then $\widetilde{H}$ is precisely the Widom-Rowlinson graph with vertex weights $\lambda_b,\lambda_r,\lambda_w$. This proves that even for the vertex-weighted Widom-Rowlinson graph we have
$$Z(G,H_{\mathrm{WR}})\leq Z(K_{d+1},H_{\mathrm{WR}})^{n/(d+1)}.$$
Hence we have proved the following theorem.
\begin{Th} Let $H_{\mathrm{WR}}$ be the Widom-Rowlinson graph with vertex weights $\lambda_b,\lambda_w,\lambda_r$. Then
for any $d$--regular simple graph $G$ on $n$ vertices we have
$$Z(G,H_{\mathrm{WR}})\leq Z(K_{d+1},H_{\mathrm{WR}})^{n/(d+1)}.$$
\end{Th}
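As a sanity check on this chain of identities (ours, not part of the proof), the following Python sketch verifies $Z(G,H_{\mathrm{WR}})=Z_b(G',H)$ for the weighted path $H$ above, using arbitrary test weights $\lambda_b=2$, $\lambda_r=3$, $\lambda_w=5$ and $G=K_3$, with exact rational arithmetic.

```python
from fractions import Fraction as F
from itertools import product

lb, lr, lw = F(2), F(3), F(5)   # test weights lambda_b, lambda_r, lambda_w

# Weighted Widom-Rowlinson graph: path blue - white - red, loops everywhere.
WR_verts = ["b", "w", "r"]
WR_nu = {"b": lb, "w": lw, "r": lr}
WR_adj = ({(x, x) for x in WR_verts} |
          {("b", "w"), ("w", "b"), ("w", "r"), ("r", "w")})

def Z(G_verts, G_edges, H_verts, H_adj, nu):
    """Partition function Z(G, H) by brute force."""
    total = F(0)
    for phi in product(H_verts, repeat=len(G_verts)):
        f = dict(zip(G_verts, phi))
        if all((f[u], f[v]) in H_adj for u, v in G_edges):
            w = F(1)
            for v in G_verts:
                w *= nu[f[v]]
            total += w
    return total

# H = path a1 - b1 - a2 - b2 with the vertex weights prescribed above.
H_A, H_B = ["a1", "a2"], ["b1", "b2"]
H_edges = {("a1", "b1"), ("a2", "b1"), ("a2", "b2")}
nu = {"a1": F(1), "b1": lb, "a2": lw / lb, "b2": lr * lb / lw}

def Z_b(GA, GB, G_edges):
    """Bipartite partition function Z_b(G, H) for the weighted path H;
    edges of G are given as (vertex of A-part, vertex of B-part)."""
    total = F(0)
    for fa in product(H_A, repeat=len(GA)):
        for fb in product(H_B, repeat=len(GB)):
            f = {**dict(zip(GA, fa)), **dict(zip(GB, fb))}
            if all((f[a], f[b]) in H_edges for a, b in G_edges):
                w = F(1)
                for v in f:
                    w *= nu[f[v]]
                total += w
    return total

# G = K_3, so G' = K_{3,3} with parts {(u,0)} and {(u,1)}.
G_verts, G_edges = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
GA = [(u, 0) for u in G_verts]
GB = [(u, 1) for u in G_verts]
Gp_edges = [(a, b) for a in GA for b in GB]

print(Z(G_verts, G_edges, WR_verts, WR_adj, WR_nu) == Z_b(GA, GB, Gp_edges))  # True
```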
Now let us consider the special case when $H$ is unweighted ($\nu\equiv 1$). In this case $\widetilde{\nu}$ is just $\mathbb{I}_{\E{H}}$, so we can think of $\widetilde{H}$ as an unweighted graph with vertex set $\V{\widetilde{H}} = \E{H}$. There is an edge in $\widetilde{H}$ between edges $e=(a_1, b_1)$ and $f=(a_2,b_2)$ of $H$ whenever $(a_1,b_2)$ and $(a_2,b_1)$ are both also edges of $H$. This is always the case when either $a_1=a_2$ or $b_1 = b_2$, so in particular every edge $e\in \E{H}=\V{\widetilde{H}}$ has a self-loop in $\widetilde{H}$, and every pair of incident edges in $H$ are adjacent in $\widetilde{H}$. We also get an edge $(e,f)\in \E{\widetilde{H}}$ if four vertices $a_1 b_1 a_2 b_2$ are all distinct and form a 4-cycle with $e$ and $f$ as opposite edges. In other words, $\widetilde{H}$ is precisely the extended line graph of $H$. Hence as a corollary of Theorem~\ref{weighted-main} we have proved Theorem~\ref{gen}.
If $H$ does not contain any $4$-cycle, then $\widetilde{H}$ is simply the line graph of $H$ with loops at every vertex.
In particular, if $H$ is a path (or even cycle of length at least $6$) then $\widetilde{H}$ is again a path (or even cycle of length at least $6$), but now with a loop at every vertex. Letting $H^o$ denote the graph obtained by adding a loop at every vertex of the graph $H$, we can write the corollary
\begin{Cor}
If $H = C_k^o$ (for $k\geq 6$ even) or if $H = P_k^o$ (for any $k$), then for any $d$-regular graph $G$
$$p_H(G) \leq p_H(K_{d+1}).$$
\end{Cor}
It is a natural question to characterize all the graphs $\widetilde{H}$ that can be obtained this way. Note that since $\widetilde{H}$ is always fully-looped, this class has no intersection with the class of graphs found by Galvin \cite{Gal1}: the set of graphs $H_q^{\ell}$ obtained from a complete looped graph on $q$ vertices with $\ell \geq 1$ loops deleted.
\begin{Rem} Let $S_k$ be the star on $k$ vertices. One can show (for details see \cite{Gal1}) that, for large enough $d$,
$$p_{S_k^o}(K_{d+1})<p_{S_k^o}(K_{d,d})$$
for $k\geq 6$. From this example we can see that in order to have $p_H(G) \leq p_H(K_{d+1})$ it is not sufficient merely for $H$ to have a loop at every vertex.
\end{Rem}
L.~Sernau \cite{S} introduced many ideas for extending certain inequalities to a larger class of graphs. For instance, recall that the tensor product $H_1\times H_2$ has $\V{H_1\times H_2} = \V{H_1}\times \V{H_2}$ and $((a_1,b_1),(a_2,b_2))\in \E{H_1\times H_2}$ if and only if $(a_1,a_2)\in \E{H_1}$ and $(b_1,b_2)\in \E{H_2}$. Sernau noted that if $H_1$ and $H_2$ are graphs such that
$$p_{H_i}(G)\leq p_{H_i}(K_{d+1})\,,$$
for $i=1,2$, then it is also true that
$$p_{H_1\times H_2}(G)\leq p_{H_1\times H_2}(K_{d+1}).$$
This inequality simply follows from the identity
$$\textbf{h}om(G,H_1\times H_2) = \textbf{h}om(G,H_1)\textbf{h}om(G,H_2),$$
which is explained in \cite{S}. Surprisingly, this observation does not allow us to extend our result to any new graphs, because the product of two extended line graphs is again an extended line graph:
$$\widetilde{H}_1\times \widetilde{H}_2=\widetilde{H}_{12},$$
where $H_{12}=(\A{H_1}\times \A{H_2},\B{H_1}\times \B{H_2},\E{H_1}\times \E{H_2})$.
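The multiplicativity identity above is straightforward to confirm numerically. The Python sketch below (our own) checks $\textbf{h}om(G,H_1\times H_2)=\textbf{h}om(G,H_1)\,\textbf{h}om(G,H_2)$ for $G=C_5$, $H_1=H_{\mathrm{ind}}$ and $H_2=H_{\mathrm{WR}}$.

```python
from itertools import product

def hom(n, edges, H_verts, H_adj):
    """Brute-force homomorphism count from G (vertices 0..n-1) to H."""
    return sum(all((phi[u], phi[v]) in H_adj for u, v in edges)
               for phi in product(H_verts, repeat=n))

def tensor(H1_verts, H1_adj, H2_verts, H2_adj):
    """Tensor product H1 x H2: (a1,b1) ~ (a2,b2) iff a1 ~ a2 and b1 ~ b2."""
    verts = [(a, b) for a in H1_verts for b in H2_verts]
    adj = {(p, q) for p in verts for q in verts
           if (p[0], q[0]) in H1_adj and (p[1], q[1]) in H2_adj}
    return verts, adj

H1_verts, H1_adj = [0, 1], {(0, 0), (0, 1), (1, 0)}      # H_ind
H2_verts = ["r", "w", "b"]                               # H_WR
H2_adj = ({(x, x) for x in H2_verts} |
          {("r", "w"), ("w", "r"), ("w", "b"), ("b", "w")})
Hv, Ha = tensor(H1_verts, H1_adj, H2_verts, H2_adj)

C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
lhs = hom(5, C5, Hv, Ha)
rhs = hom(5, C5, H1_verts, H1_adj) * hom(5, C5, H2_verts, H2_adj)
print(lhs == rhs)  # True
```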
\section{On a theorem of L.~Sernau}
Theorem 3 of \cite{S} also provides a class of graphs for which $K_{d+1}$ is the maximizing graph. Below we explain the relationships between our results and his theorem.
\begin{Def} Let $H$ and $A$ be graphs. Then the graph $H^A$ is defined as follows: its vertices are the maps $f:V(A)\to V(H)$, and $(f_1,f_2)\in E(H^A)$ if and only if $(f_1(u),f_2(v))\in E(H)$ whenever $(u,v)\in E(A)$.
\end{Def}
Then Sernau proved the following theorem.
\begin{Th}\cite{S} Let $G$ be a $d$--regular graph, and let $F=l(H^B)$, where $H$ is an arbitrary graph, $B$ is a bipartite graph, and $l(H^B)$ is the graph induced by the vertices of $H^B$ which have a loop.
Then
$$p_F(G)\leq p_F(K_{d+1}).$$
\end{Th}
When $H=H_{\mathrm{ind}}$ and $B=K_2$, we have $l(H^B)=H_{\mathrm{WR}}$, so this also proves the conjecture of D.~Galvin. Note that when $B=K_2$, the graph $l(H^B)$ is the extended line graph of $H\times K_2$. It is no great surprise that these results are similar; indeed, the proofs behind them are strongly related to each other.
\section{Conjectures}
Let $H$ be a simple graph, i.e., with no multiple edges or loops. Let $H^o$ denote the graph obtained by adding a loop at each vertex of $H$ (so for instance $C_n^o$ denotes the $n$-cycle with a loop at each vertex).
\begin{Conj} Let $G$ be a $d$-regular simple graph. Then for any $n\geq 4$
$$p_{C_n^o}(G)\leq p_{C_n^o}(K_{d+1}).$$
\end{Conj}
\begin{Conj} Let $G$ be a $d$-regular simple graph. Then for any $d\geq 4$
$$p_{S_4^o}(G)\leq p_{S_4^o}(K_{d+1}).$$
Furthermore, for $k\geq 6$
$$p_{S_k^o}(G)\leq p_{S_k^o}(K_{d,d}).$$
\end{Conj}
Finally, for an arbitrary graph $H$ it is not clear how to characterize the maximizers over all $d$-regular graphs $G$ of $p_H(G)$. If we restrict to bipartite $G$, however, D.~Galvin and P.~Tetali proved that $p_H(G) \le p_H(K_{d,d})$ \cite{GalTet}. We conjecture that this can be extended to the class of triangle-free graphs.
\begin{Conj} Let $G$ be a $d$--regular triangle-free graph. Then for any graph $H$ we have
$$p_H(G)\leq p_H(K_{d,d}).$$
\end{Conj}
\noindent \textbf{Acknowledgments.} We thank David Galvin and Luke Sernau for helpful conversations. We are also grateful to the anonymous referees for their careful reading and useful suggestions on the paper.
\end{document}
\begin{document}
\baselineskip=17pt
\title[Fibers and local connectedness of planar continua]{Fibers and local connectedness of planar continua}
\author[B. Loridant]{Beno\^it Loridant}
\address{Montanuniversit\"at Leoben,
Franz Josefstrasse 18\\ Leoben 8700, Austria}
\email{[email protected]}
\author[J. Luo]{Jun Luo}
\address{School of Mathematics\\
Sun Yat-Sen University\\ Guangzhou 512075, China}
\email{[email protected]}
\date{}
\begin{abstract}
We describe non-locally connected planar continua via the concepts of fiber and numerical scale.
Given a continuum $X\subset\mathbb{C}$ and $x\in\partial X$, we show that the set of points $y\in \partial X$ that cannot be separated from $x$ by any finite set $C\subset \partial X$ is a continuum. This continuum is called the {\em modified fiber} $F_x^*$ of $X$ at $x$. If $x\in X^o$, we set $F^*_x=\{x\}$. For $x\in X$, we show that $F_x^*=\{x\}$ implies that $X$ is locally connected at $x$. We also give a concrete planar continuum $X$, which is locally connected at a point $x\in X$ while the fiber $F_x^*$ is not trivial.
The scale $\ell^*(X)$ of non-local connectedness is then the least integer $p$ (or $\infty$ if such an integer does not exist) such that for each $x\in X$ there exist $k\le p+1$ subcontinua $$X=N_0\supset N_1\supset N_2\supset\cdots\supset N_{k}=\{x\}$$ such that $N_{i}$ is a fiber of $N_{i-1}$ for $1\le i\le k$. If $X\subset\mathbb{C}$ is an unshielded continuum or a continuum whose complement has finitely many components, we obtain that local connectedness of $X$ is equivalent to the statement $\ell^*(X)=0$.
We discuss the relation of our concepts to the works of Schleicher (1999) and Kiwi (2004). We further define an equivalence relation $\sim$ based on the fibers and show that the quotient space $X/\sim$ is a locally connected continuum. For connected Julia sets of polynomials and more generally for unshielded continua, we obtain that every prime end impression is contained in a fiber. Finally, we apply our results to examples from the literature and construct for each $n\ge1$ concrete examples of path connected continua $X_n$ with $\ell^*(X_n)=n$.
\end{abstract}
\subjclass[2010]{54D05, 54H20, 37F45, 37E99.}
\keywords{ Local connectedness, fibers, numerical scale, upper semi-continuous decomposition.}
\maketitle
\section{Introduction and main results}
Motivated by the construction of Yoccoz puzzles used in the study of local connectedness of quadratic Julia sets and the Mandelbrot set $\mathcal{M}$, Schleicher~\cite{Schleicher99a} introduces the notion of fiber for full continua (continua $M\subset\mathbb{C}$ having a connected complement $\mathbb{C}\setminus M$), based on ``separation lines'' chosen from particular countable dense sets of external rays that land on points of $M$. Kiwi \cite{Kiwi04} uses finite ``cutting sets'' to define a modified version of fiber for Julia sets, even when they are not connected.
Jolivet-Loridant-Luo~\cite{JolivetLoridantLuo0000} replace Schleicher's ``separation lines'' with ``good cuts'', {\em i.e.}, simple closed curves $J$ such that $J\cap \partial M$ is finite and $J\setminus M\ne\emptyset$. In this way, Schleicher's approach is generalized to continua $M\subset\mathbb{C}$ whose complement $\mathbb{C}\setminus M$ has finitely many components. For such a continuum $M$, the {\em pseudo-fiber $E_x$} (of $M$) at a point $x\in M$ is the collection of the points $y\in M$ that cannot be separated from $x$ by a good cut; the {\em fiber $F_x$ at $x$} is the component of $E_x$ containing $x$. Here, a point $y$ is separated from a point $x$ by a simple closed curve $J$ provided that $x$ and $y$ belong to different components of $\mathbb{C}\setminus J$. And $x$ may belong to the bounded or unbounded component of $\mathbb{C}\setminus J$.
Clearly, the fiber $F_x$ at $x$ always contains $x$. We say that a pseudo-fiber or a fiber is {\em trivial} if it coincides with the single point set $\{x\}$.
By \cite[Proposition 3.6]{JolivetLoridantLuo0000}, every fiber of $M$ is again a continuum with finitely many complementary components. Thus the hierarchy by ``fibers of fibers'' is well defined. Therefore, the \emph{scale} $\ell(M)$ \emph{of non-local connectedness} is defined as the least integer $k$ such that for each $x\in M$ there exist $p\le k+1$ subcontinua $M=N_0\supset N_1\supset\cdots\supset N_{p}=\{x\}$ such that $N_{i}$ is a fiber of $N_{i-1}$ for $1\le i\le p$. If such an integer $k$ does not exist we set $\ell(M)=\infty$.
In this paper, we rather follow Kiwi's approach~\cite{Kiwi04} and define ``modified fibers'' for continua on the plane. The key point is: Kiwi focuses on Julia sets and uses ``finite cutting sets'' that consist of pre-periodic points, but we consider arbitrary continua $M$ on the plane (which may have interior points) and use ``finite separating sets''.
We refer to Example~\ref{cutting-2} for the difference between separating and cutting sets. Moreover, in Jolivet-Loridant-Luo~\cite{JolivetLoridantLuo0000}, a good cut is not contained entirely in the underlying continuum $M$. In the current paper we will remove this assumption and only require that a good cut is a simple closed curve intersecting $\partial M$ at finitely many points. After this slight modification we can establish the equivalence between the two above-mentioned approaches to defining fibers, using good cuts or using finite separating sets. See Remark \ref{modified} for further details.
The notions and results will be presented in a way that focuses on the general topological aspects, rather than in the framework of complex analysis and dynamics.
\begin{defi}\label{kiwi-fiber} Let $X\subset\mathbb{C}$ be a continuum. We will say that a point $x\in \partial X$ is \emph{separated from a point} $y\in \partial X$ \emph{by a subset} $C\subset X$ if there is a \emph{separation} $\partial X\setminus C=A\cup B$ with $x\in A$ and $y\in B$. Here ``$\partial X\setminus C=A\cup B$ is a separation'' means that $\overline{A}\cap B=A\cap\overline{B}=\emptyset$.
\begin{itemize}
\item The \emph{ modified pseudo-fiber $E^*_x$ of $X$} at a point $x$ in the interior $X^o$ of $X$ is $\{x\}$; and the \emph{modified pseudo-fiber $E^*_x$ of $X$} at a point $x\in \partial X$ is the set of the points $y\in \partial X$ that cannot be separated from $x$ by any finite set $C\subset \partial X$.
\item The \emph{modified fiber} $F_x^*$ of $X$
at $x$ is the connected component of $E^*_x$ containing $x$. We say $E^*_x$ or $F_x^*$ is \emph{trivial} if it consists of the point $x$ only. ({\em We will show that $E^*_x=F^*_x$ in Theorem \ref{main-1}, so the notion of modified pseudo-fiber is only used as a formal definition.})
\item We inductively define a {\em fiber of order} $k\ge2$ as a fiber of a continuum $Y\subset X$, where $Y$ is a fiber of order $k-1$.
\item The {\em local scale of non-local connectedness} of $X$ at a point $x\in X$, denoted $\ell^*(X,x)$, is the least integer $p$ such that there exist $k\le p+1$ subcontinua $$X=N_0\supset N_1\supset N_2\supset\cdots\supset N_{k}=\{x\}$$ such that $N_{i}$ is a fiber of $N_{i-1}$ for $1\le i\le k$. If such an integer does not exist we set $\ell^*(X,x)=\infty$.
\item The (global) {\em scale of non-local connectedness} of $X$ is
$$\ell^*(X)=\sup\{\ell^*(X,x): x\in X\}.$$ We also call $\ell^*(X,x)$ the {\em local NLC-scale of $X$ at $x$}, and $\ell^*(X)$ the {\em global NLC-scale}.
\end{itemize}
\end{defi}
We firstly obtain the equality $F_x^*=E_x^*$ and relate trivial fibers to local connectedness. Here, local connectedness at a particular point does not imply trivial fiber. In particular, let $\mathcal{K}\subset[0,1]$ be Cantor's ternary set, let $X$ be the union of $\mathcal{K}\times[0,1]$ with $[0,1]\times\{1\}$. See Figure \ref{comb}. Then $X$ is locally connected at every $x=(t,1)$ with $t\in\mathcal{K}$, while the modified fiber $F_x^*$ at this point is the whole segment $\{t\}\times[0,1]$. See Example \ref{cantor-comb} for more details.
\begin{main-theorem}\label{main-1}
Let $X\subset\mathbb{C}$ be a continuum. Then $F_x^*=E_x^*$ for every $x\in X$; moreover, $F_x^*=\{x\}$ implies that $X$ is locally connected at $x$.
\end{main-theorem}
Secondly, we characterize modified fibers $F^*_x=E_x^*$ through simple closed curves $\gamma$ that separate $x$ from points $y$ in $X\setminus F^*_x$ and that intersect $\partial X$ at a finite set or an empty set.
This provides an equivalent way to develop the theory of fibers, for planar continua, and leads to a partial converse for the second part of Theorem \ref{main-1}. See Remark \ref{modified}.
\begin{main-theorem}\label{criterion}
Let $X\subset\mathbb{C}$ be a continuum. Then $F^*_x$ at any point $x\in X$ consists of the points $y\in X$ such that every simple closed curve $\gamma$ separating $x$ from $y$ must intersect $\partial X$ at infinitely many points. Or, equivalently, $X\setminus F_x^*$ consists of the points $z\in X$ which may be separated from $x$ by a simple closed curve $\gamma$ such that $\gamma\cap\partial X$ is a finite set.
\end{main-theorem}
This criterion can be related to Kiwi's characterization of fibers~\cite[Corollary 2.18]{Kiwi04}, as will be explained at the end of Section~\ref{proof-criterion}.
\begin{rema}\label{modified}
We define a simple closed curve $\gamma$ to be a \emph{ good cut of a continuum} $X\subset\mathbb{C}$ if $\gamma\cap\partial X$ is a finite set (the empty set is also allowed). We also say that two points $x,y \in X$ are separated by a good cut $\gamma$ if they lie in different components of $\mathbb{C}\setminus \gamma$. This slightly weakens the requirements on ``good cuts'' in \cite{JolivetLoridantLuo0000}. Therefore, given a continuum $X\subset\mathbb{C}$ whose complement has finitely many components, the modified pseudo-fiber $E_x^*$ at any point $x\in X$ is a subset of the pseudo-fiber $E_x$ at $x$, if $E_x$ is defined as in \cite{JolivetLoridantLuo0000}. Consequently, we can infer that local connectedness of $X$ implies triviality of all the fibers $F_x^*$, by citing two of the four equivalent statements of \cite[Theorem 2.2]{JolivetLoridantLuo0000}: (1) $X$ is locally connected; (2) every pseudo-fiber $E_x$ is trivial. The same result does not hold when the complement $\mathbb{C}\setminus X$ has infinitely many components. Sierpinski's universal curve gives a counterexample.
\end{rema}
\begin{rema}\label{two-approaches}
The two approaches, via pseudo-fibers $E_x$ and modified pseudo-fibers $E_x^*$, have their own merits. The former one follows Schleicher's approach and is more closely related to the theory of puzzles in the study of Julia sets and the Mandelbrot set; hence it may be used to analyse the structure of such continua by cultivating the dynamics of polynomials. The latter approach has a potential to be extended to the study of general compact metric spaces; and, at the same time, it is directly connected with the first approach when restricted to planar continua.
\end{rema}
Thirdly, we study the topology of $X$ by constructing an equivalence relation $\sim$ on $X$ and cultivating the quotient space $X/\!\sim$, which will be shown to be a locally connected continuum. This relation $\sim$ is based on fibers of $X$ and every fiber $F_x^*$ is contained in a single equivalence class.
\begin{defi}\label{def:eqrel} Let $X\subset\mathbb{C}$ be a continuum.
Let $X_0$ be the union of all the nontrivial fibers $F_x^*$ for $x\in X$ and let $\overline{X_0}$ denote the closure of $X_0$. We define $x\sim y$ if $x=y$ or if $x\ne y$ belong to the same component of $\overline{X_0}$. Then $\sim$ is a closed equivalence relation on $X$ such that, for all $x\in X$, the equivalence class $[x]_\sim$ always contains the modified fiber $F_x^*$ and equals $\{x\}$ whenever $x\in X\setminus \overline{X_0}$. Consequently, every equivalence class $[x]_\sim$ is a continuum, so that the natural projection $\pi(x)=[x]_\sim$ is a monotone mapping from $X$ onto its quotient $X/\!\sim$.
\end{defi}
\begin{rema} Actually, there is a more natural equivalence relation $\approx$ by defining $x \approx y$ whenever there exist points $x_1=x,$ $x_2,\ldots, x_n=y$ in $X$ such that $x_i\in F^*_{x_{i-1}}$. However, the relation $\approx$ may not be closed, as a subset of the product $X\times X$. On the other hand, if we take the closure of $\approx$ we will obtain a closed relation, which is reflexive and symmetric but may not be transitive (see Example~\ref{relations}). The above Definition~\ref{def:eqrel} solves this problem.
\end{rema}
The following theorem provides important information about the topology of $X/\!\sim$.
\begin{main-theorem}\label{main-3}
Let $X\subset\mathbb{C}$ be a continuum. Then $X/\!\sim$ is metrizable and is a locally connected continuum, possibly a single point.
\end{main-theorem}
\begin{rema}
The result of Theorem \ref{main-3} is of fundamental significance from the viewpoint of topology. It also plays a crucial role in the study of complex dynamics. In particular, if $J$ is the Julia set (assumed to be connected) of a polynomial $f(z)$ with degree $n\ge2$, the restriction $\left.f\right|_J: J\rightarrow J$ induces a continuous map $f_\sim: J/\!\sim\rightarrow J/\!\sim$ such that $\pi\circ f=f_\sim\circ\pi$. See Theorem \ref{dynamic}. Moreover, the modified fibers $F_x^*$ are closely related to impressions of prime ends. See Theorem \ref{impression}. Combining this with laminations on the unit circle $S^1\subset\mathbb{C}$, the system $f_\sim: J/\!\sim\rightarrow J/\!\sim$ is also a factor of the map $z\mapsto z^n$ on $S^1$. However, it is not yet known whether the decomposition $\{[x]_\sim: x\in X\}$ by classes of $\sim$ coincides with the finest locally connected model discussed in \cite{BCO11}. For more detailed discussions related to the dynamics of polynomials, see for instance \cite{BCO11,Kiwi04} and references therein.
\end{rema}
Finally, to conclude the introduction, we propose two problems.
\begin{ques}\label{scale-model}
Estimate the scale $\ell^*(X)$ from above for particular continua $X\subset\mathbb{C}$ such that $\mathbb{C}\setminus X$ has finitely many components, and compute the quotient space $X/\!\sim$ or the locally connected model introduced in \cite{BCO11}. The Mandelbrot set and the Julia set of an infinitely renormalizable quadratic polynomial (when this Julia set is not locally connected) are very typical choices of $X$. In particular, the scale $\ell^*(X)$ will be zero if the Mandelbrot set is locally connected, \emph{i.e.}, if \emph{MLC} holds. In such a case, the relation $\sim$ is trivial and its quotient is immediate.
\end{ques}
\begin{rema}
Section \ref{examples} gives several examples of continua $X\subset\mathbb{C}$. We obtain the decomposition $\{[x]_\sim:x\in X\}$ into sub-continua and represent the quotient space $X/\!\sim$ on the plane. For those examples, the scale $\ell^*(X)$ is easy to determine.
\end{rema}
\begin{ques}\label{finest} Given an unshielded continuum $X$ in the plane,
is it possible to construct the ``finest'' upper semi-continuous decomposition of $X$ into sub-continua that consist of fibers, so that the resulting quotient space is a locally connected continuum (or has the two properties mentioned in Theorem~\ref{main-3})? Such a finest decomposition would admit no strict refinement with the above properties. If $X$ is the Sierpinski curve, which is not unshielded, the decomposition $\left\{[x]_\sim:x\in X\right\}$ obtained in Theorem \ref{main-3} does not suffice.
\end{ques}
\begin{rema}
The main motivation for Problem \ref{finest} comes from \cite{BCO11} in which the authors, Blokh-Curry-Oversteegen, consider ``unshielded'' continua $X\subset\mathbb{C}$ which coincide with the boundary of the unbounded component of $\mathbb{C}\setminus X$. They obtain the existence of the finest monotone map $\varphi$ from $X$ onto a locally connected continuum on the plane, such that $\varphi(X)$ is the finest locally connected model of $X$ and extend $\varphi$ to a map $\hat{\varphi}: \hat{\mathbb{C}}\rightarrow\hat{\mathbb{C}}$ that maps $\infty$ to $\infty$, collapses only those components of $\mathbb{C}\setminus X$ whose boundary is collapsed by $\varphi$, and is a homeomorphism elsewhere in $\hat{\mathbb{C}}\setminus X$ \cite[Theorem 1]{BCO11}. This is of significance in the study of complex polynomials with connected Julia set, see \cite[Theorem 2]{BCO11}.
\end{rema}
\begin{rema}
The equivalence classes $[x]_\sim$ obtained in this paper give a concrete upper semi-continuous decomposition of an arbitrary continuum $X$ on the plane, with the property that the quotient space $X/\!\sim$ is a locally connected continuum. In the special case that $X$ is unshielded, the finest decomposition in~\cite[Theorem 1]{BCO11} is finer than or equal to our decomposition $\{[x]_\sim:x\in X\}$. See Theorem~\ref{impression} for details when $X$ is assumed to be unshielded. The above Problem~\ref{finest} asks whether those two decompositions actually coincide. If the answer is yes, the quotient space $X/\!\sim$ in Theorem \ref{main-3} is exactly the finest locally connected model of $X$, which would then be in some sense ``computable''. Here, an application of some interest is to study the locally connected model of an infinitely renormalizable Julia set~\cite{Jiang00} or of the Mandelbrot set, as mentioned in Problem \ref{scale-model}.
\end{rema}
We arrange our paper as follows. Section~\ref{basics} recalls some basic notions and results from topology that are closely related to local connectedness. Sections~\ref{proof-1},~\ref{proof-criterion} and~\ref{proof-3} respectively prove Theorems~\ref{main-1},~\ref{criterion} and~\ref{main-3}. Section~\ref{fiber-basics} discusses basic properties of fibers, studies fibers from a viewpoint of dynamic topology (as proposed by Whyburn \cite[pp.130-144]{Whyburn79}) and relates the theory of fibers to the theory of prime ends for unshielded continua. Finally, in Section~\ref{examples}, we illustrate our results through examples from the literature and give an explicit sequence of path connected continua $X_n$ satisfying $\ell^*(X_n)=n$.
\section{A Revisit to Local Connectedness}\label{basics}
\begin{defi}\label{lc-notion}
A topological space $X$ is \emph{locally connected at} a point $x_0\in X$ if for any neighborhood $U$ of $x_0$ there exists a connected neighborhood $V$ of $x_0$ such that $V\subset U$, or equivalently, if the component of $U$ containing $x_0$ is also a neighborhood of $x_0$. The space $X$ is then called \emph{locally connected} if it is locally connected at each of its points.
\end{defi}
We focus on metric spaces and their subspaces. The following characterization can be found as the definition of local connectedness in~\cite[Part A, Section XIV]{Whyburn79}.
\begin{lemm}\label{local-criterion}
A metric space $(X,d)$ is locally connected at $x_0\in X$ if and only if for any $\varepsilon>0$ there exists $\delta>0$ such that any point $y\in X$ with $d(x_0,y)<\delta$ is contained together with $x_0$ in a connected subset of $X$ of diameter less than $\varepsilon$.
\end{lemm}
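As an illustration of this criterion, we recall a standard example, included here only for the reader's convenience: the closed topologist's sine curve
\[T=\bigl\{(t,\sin(1/t)): 0<t\le1\bigr\}\cup\bigl(\{0\}\times[-1,1]\bigr).\]
For a point $x_0\in\{0\}\times[-1,1]$ and any $\delta>0$ there are points $y=(t,\sin(1/t))\in T$ with $d(x_0,y)<\delta$; yet every connected subset of $T$ containing both $x_0$ and $y$ must contain the whole oscillating arc $\{(s,\sin(1/s)): 0<s\le t\}$, hence has diameter at least $2$. So the criterion of Lemma \ref{local-criterion} fails for every $\varepsilon<2$, and $T$ is not locally connected at any point of the segment $\{0\}\times[-1,1]$.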
When $X$ is compact, Lemma \ref{local-criterion} is a local version of \cite[p.183, Lemma 17.13(d)]{Milnor06}. For the convenience of the readers, we give here the concrete statement as a lemma.
\begin{lemm}\label{global-criterion}
A compact metric space $X$ is locally connected if and only if for every $\varepsilon>0$ there exists $\delta>0$ so that any two points of distance less than $\delta$ are contained in a connected subset of $X$ of diameter less than $\varepsilon$.
\end{lemm}
Using Lemma \ref{local-criterion}, we obtain a fact concerning continua of the Euclidean space $\mathbb{R}^n$.
\begin{lemm}\label{fact-1}
Let $X\subset\mathbb{R}^n$ be a continuum and $U=\bigcup_{\alpha\in I}W_\alpha$ the union of any collection $\{W_\alpha: \alpha\in I\}$ of components of $\mathbb{R}^n\setminus X$. If $X$ is locally connected at $x_0\in X$, then so is $X\cup U$. Consequently, if $X$ is locally connected, then so is $X\cup U$.
\end{lemm}
\begin{proof}
Choose $\delta$ as in Lemma \ref{local-criterion} with respect to $x_0$, $X$ and $\varepsilon/2$; we may also assume $\delta\le\varepsilon/2$. For any $y\in U$ with $d(x_0,y)<\delta$ we consider the segment $[x_0,y]$ between $x_0$ and $y$. If $[x_0,y]\subset(X\cup U)$, we are done. If not, choose the point $z\in ([x_0,y]\cap X)$ that is closest to $y$. Clearly, the segment $[y,z]$ is contained in $X\cup U$. By the choice of $\delta$ and Lemma \ref{local-criterion}, we may connect $z$ and $x_0$ with a continuum $A\subset X$ of diameter less than $\varepsilon/2$. Therefore, the continuum $B:=A\cup[y,z]\subset(X\cup U)$ has diameter at most $\varepsilon$, as desired.
\end{proof}
In the present paper, we are mostly interested in continua on the plane, especially continua $X$ which lie on the boundary of a continuum $M\subset\mathbb{C}$. A typical choice of such a continuum $M$ is the filled Julia set of a rational function. Several fundamental results from Whyburn's book~\cite{Whyburn79} will be very helpful in our study.
The first result gives a fundamental fact about a continuum failing to be locally connected at one of its points. The proof can be found in~\cite[p.124, Corollary]{Whyburn79}.
\begin{lemm}\label{non-lc}
A continuum $M$ which is not locally connected at a point $p$ necessarily fails to be locally connected at all points of a nondegenerate subcontinuum of $M$.
\end{lemm}
The second result will be referred to as {\bf Torhorst Theorem} in this paper (see~\cite[p.124, Torhorst Theorem]{Whyburn79}
and~\cite[p.126, Lemma 2]{Whyburn79}).
\begin{lemm}\label{torhorst}
The boundary $B$ of each component $C$ of the complement of a locally connected continuum $M$ is itself a locally connected continuum. If further $M$ has no cut point, then $B$ is a simple closed curve.
\end{lemm}
We finally recall a {\em Plane Separation Theorem} \cite[p.120, Exercise 2]{Whyburn79}.
\begin{prop}\label{theo:sep} If $A$ is a continuum and $B$ is a closed connected set of the plane with $A\cap B=T$ being a totally disconnected set, and with $A\setminus T$ and $B\setminus T$ being connected, then there exists a simple closed curve $J$ separating $A\setminus T$ and $B\setminus T$ such that $J\cap (A\cup B)\subset A\cap B=T$.
\end{prop}
\section{Fundamental properties of fibers}\label{proof-1}
The proof for Theorem \ref{main-1} has two parts. We start from the equality $E_x^*=F_x^*$.
\begin{theo}\label{E=F}
Let $X\subset\mathbb{C}$ be a continuum. Then $E_x^*=F_x^*$ for every $x\in X$.
\end{theo}
\begin{proof}
Suppose that $E_x^*\setminus F_x^*$ contains some point $x'$. Then we can fix a separation $E_x^*=A\cup B$ with $F_x^*\subset A$ and $x'\in B$. Since $E_x^*$ is a compact set, the distance ${\rm dist}(A,B):=\min\{|y-z|: y\in A, z\in B\}$ is positive. Let
\[ A^*=\left\{z\in\mathbb{C}: {\rm dist}(z, A)<\frac{1}{3}{\rm dist}(A,B)\right\}
\]
and
\[ B^*=\left\{z\in\mathbb{C}: {\rm dist}(z, B)<\frac{1}{3}{\rm dist}(A,B)\right\}.\]
Then $A^*$ and $B^*$ are disjoint open sets in the plane, hence $K=X\setminus(A^*\cup B^*)$ is a compact subset of $X$. As $E_x^*\cap K=\emptyset$, we may find for each $z\in K$ a finite set $C_z$ and a separation
\[X\setminus C_z=U_z\cup V_z\]
such that $x\in U_z, z\in V_z$. Here, we have $U_z=X\setminus(C_z\cup V_z)=X\setminus(C_z\cup \overline{V_z})$ and $V_z=X\setminus(C_z\cup U_z)=X\setminus(C_z\cup \overline{U_z})$; so both of them are open in $X$.
Letting $z$ range over $K$, we obtain an open cover $\{V_z: z\in K\}$ of $K$, which then has a finite subcover $\left\{V_{z_1},\ldots, V_{z_n}\right\}$.
Let
\[U=U_{z_1}\cap\cdots\cap U_{z_n},\ V=V_{z_1}\cup\cdots\cup V_{z_n}.\]
Then $U,V$ are disjoint sets open in $X$ such that $C:=X\setminus(U\cup V)$ is a subset of $C_{z_1}\cup\cdots\cup C_{z_n}$, hence it is also a finite set. Now, on the one hand, we have a separation $X\setminus C= U\cup V$ with $x\in U$ and $K\subset V$; on the other hand, from the equality $K=X\setminus(A^*\cup B^*)$ we can infer $U\subset (A^*\cup B^*)$. Combining this with the fact that $x'\in B^*$, we may check that $A':=(U\cup\{x'\})\cap A^*=U\cap A^*$ and $B':=(U\cup\{x'\})\cap B^*=(U\cap B^*)\cup\{x'\}\subset B^*$ are separated in $X$. Let $C'=C\setminus\{x'\}$. Since $A'\subset U$ and $V$ are also separated in $X$, we see that \[X\setminus C'=U\cup \{x'\}\cup V=(U\cap A^*)\cup(U\cap B^*)\cup\{x'\}\cup V=A'\cup(B'\cup V)\]
is a separation with $x\in A'$ and $x'\in(B'\cup V)$. This contradicts the assumption that $x'\in E_x^*$: since $E_x^*$ is the pseudo-fiber at $x$, none of its points can be separated from $x$ by the finite set $C'$.
\end{proof}
We then recover, in fuller generality, the fact that triviality of the fiber at a point $x$ in a continuum $M\subset \mathbb{C}$ implies local connectedness of $M$ at $x$. More restricted versions of this result appeared earlier: in \cite{Schleicher99a} for continua in the plane with connected complement, in \cite{Kiwi04} for Julia sets of monic polynomials or the components of such a set, and in \cite{JolivetLoridantLuo0000} for continua in the plane whose complement has finitely many components.
\begin{theo}\label{trivial-fiber}
If $F_x^*=\{x\}$ for a point $x$ in a continuum $X\subset\mathbb{C}$ then $X$ is locally connected at $x$.
\end{theo}
\begin{proof}
We will prove that if $X$ is not locally connected at $x$ then $F_x^*$ contains a non-degenerate continuum $M\subset X$.
By definition, if $X$ is not locally connected at $x$ there exists a number $r>0$ such that the component $Q_x$ of $B(x,r)\cap X$ containing $x$ is not a neighborhood of $x$ in $X$. Here
\[B(x,r)=\{y: |x-y|\le r\}.\]
This means that there exists a sequence of points $\{x_k\}_{k=1}^\infty\subset X\setminus Q_x$ such that $\lim\limits_{k\rightarrow\infty}x_k=x$. Let $Q_k$ be the component of $B(x,r)\cap X$ containing $x_k$. Then $Q_i\cap\{x_k\}_{k=1}^\infty$ is a finite set for each $i\ge1$, and hence we may assume, by taking a subsequence, that $Q_i\cap Q_j=\emptyset$ for $i\ne j$.
Since the hyperspace of the nonempty compact subsets of $X$ is a compact metric space under the Hausdorff metric, we may further assume that there exists a continuum $M$ such that $\lim\limits_{k\rightarrow\infty}Q_k=M$ under the Hausdorff distance. Clearly, we have $x\in M\subset Q_x$. The following Lemma \ref{reach-boundary} implies that the diameter of $M$ is at least $r$. Since no point $y\in M\setminus\{x\}$ can be separated from $x$ by a finite set in $X$, $F_x^*$ cannot be trivial and our proof is readily completed.
\end{proof}
\begin{lemm}\label{reach-boundary}
In the proof for Theorem \ref{trivial-fiber}, every component of $B(x,r)\cap X$ intersects $\partial B(x,r)$. In particular, $Q_k\cap\partial B(x,r)\ne\emptyset$ for all $k\ge1$.
\end{lemm}
\begin{proof}
Otherwise, there would exist a component $Q$ of $B(x,r)\cap X$ such that $Q\cap\partial B(x,r)=\emptyset$. Then, for each point $y$ on $X\cap\partial B(x,r)$, the component $Q_y$ of $X\cap B(x,r)$ containing $y$ is disjoint from $Q$. By definition of quasi-components, we may choose a separation $X\cap B(x,r)=U_y\cup V_y$ with $Q_y\subset U_y$ and $Q\subset V_y$. Since every $U_y$ is open in $X\cap B(x,r)$, we have an open cover
\[\left\{U_y: y\in X\cap\partial B(x,r)\right\}\]
for $X\cap\partial B(x,r)$, which necessarily has a finite subcover, say $\left\{U_{y_1},\ldots, U_{y_t}\right\}$. Let
\[U=U_{y_1}\cup\cdots\cup U_{y_t},\quad V=V_{y_1}\cap\cdots\cap V_{y_t}.\]
Then $X\cap B(x,r)=U\cup V$ is a separation with $\partial B(x,r)\subset U$. Therefore,
\[X=[(X\setminus B(x,r))\cup U]\cup V\]
is a separation, which contradicts the connectedness of $X$.
\end{proof}
\section{Schleicher's and Kiwi's approaches unified}\label{proof-criterion}
Let $X$ be a topological space and $x_0$ a point in $X$. The component of $X$ containing $x_0$ is the maximal connected set $P\subset X$ with $x_0\in P$. The quasi-component of $X$ containing $x_0$ is defined to be the set
\[Q=\{y\in X: \ \text{no\ separation}\ X=A\cup B\ \text{exists\ such\ that}\ x_0\in A, y\in B\}.\]
Equivalently, the quasi-component of a point $p\in X$ may be defined as the intersection of all closed-open subsets of $X$ containing $p$. Since any component is contained in a quasi-component, and since quasi-components coincide with the components whenever $X$ is compact \cite{Kuratowski68}, we can infer an equivalent definition of pseudo-fiber as follows.
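The compactness assumption is essential for the coincidence of components and quasi-components. We recall a standard example, included here only as an illustration: let
\[K=\{(0,0),(0,1)\}\cup\bigcup_{n\ge1}\left(\bigl\{\tfrac{1}{n}\bigr\}\times[0,1]\right)\subset\mathbb{C}.\]
The sets $\{(0,0)\}$ and $\{(0,1)\}$ are two distinct components of $K$. However, every closed-open subset $A$ of $K$ containing $(0,0)$ contains the points $(\frac1n,0)$ for all large $n$, as $A$ is open; hence $A$ contains the segments $\{\frac1n\}\times[0,1]$ for all large $n$, as these segments are connected; hence $A$ contains the limit point $(0,1)$, as $A$ is closed. Therefore, $(0,0)$ and $(0,1)$ lie in the same quasi-component of the (non-compact) set $K$.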
\begin{prop}\label{seppoints} Let $X\subset\mathbb{C}$ be a continuum. Two points of $X$ are separated by a finite set $C\subset X$ iff they belong to distinct quasi-components of $X\setminus C$.
\end{prop}
The following proposition implies Theorem~\ref{criterion}. We present it in this form, since it can be seen as a modification of Whyburn's plane separation theorem (Proposition~\ref{theo:sep}). Actually, the main idea of our proof is borrowed from~\cite[p.126]{Whyburn79} and is slightly adjusted.
\begin{prop}\label{key} Let $C$ be a finite subset of a continuum $X\subset\mathbb{C}$ and $x,y$ two points on $X\setminus C$. If there is a separation $X\setminus C=P\cup Q$ with $x\in P$ and $y\in Q$ then $x$ is separated from $y$ by a simple closed curve $\gamma$ with $(\gamma\cap X)\subset C$.
\end{prop}
\begin{proof}
We first note that every component of $\overline{P}$ intersects $C$. If on the contrary a component $W$ of $\overline{P}\subset(P\cup C)$ is disjoint from $C$, then $\overline{P}$ is disconnected and a contradiction follows. Indeed, we have $\overline{P}\ne W$ since $\overline{P}\cap C\ne \emptyset$. And all the components of $\overline{P}$ intersecting $C$ are disjoint from $W$. As $C$ is a finite set, there are finitely many such components, say $W_1, \ldots, W_t$. However, since a quasi-component of a compact metric space is just a component, we can find separations $\overline{P}=A_i\cup B_i$ for $1\le i\le t$ such that $W\subset A_i, W_i\subset B_i$. Let $A=\cap_iA_i$ and $B=\cup_i B_i$. Then $\overline{P}=A\cup B$ is a separation with $A\cap Q=\emptyset$, hence $X= A\cup(B\cup Q)$ is a separation of $X$. This contradicts the connectedness of $X$.
Since every component of $\overline{P}$ intersects $C$ and since $C$ is a finite set, we know that $\overline{P}$ has finitely many components, say $P_1,\ldots, P_k$. We may assume that $x\in P_1$. Similarly, every component of $\overline{Q}\subset(Q\cup C)$ intersects $C$ and $\overline{Q}$ has finitely many components, say $Q_1,\ldots, Q_l$. We may assume that $y\in Q_1$.
Let $P_1^*=P_2\cup\cdots\cup P_k\cup Q_1\cup\cdots\cup Q_l$. Then $X=P_1\cup P_1^*$, $x\in P_1$, $y\in P_1^*$ and $(P_1\cap P_1^*)\subset C$.
Let $N_1=\{z\in P_1;\ {\rm dist}(z,P_1^*)\ge1\}$ and for each $j\ge2$, let \footnote{This idea is inspired from the proof of Whyburn's plane separation theorem, see Proposition~\ref{theo:sep}}
\[N_j=\{z\in P_1:\ 3^{-j}\le {\rm dist}(z,P_1^*)\le 3^{-j+1}\}.\]
Clearly, every $N_j$ is a compact set. Therefore, we may cover $N_j$ by finitely many open disks centered at a point in $N_j$ and with radius $r_j=3^{-j-1}$, say $B(x_{j1},r_j),\ldots, B(x_{jk(j)},r_j)$.
For $j\geqslant 1$, let us set $M_j=\bigcup_{i=1}^{k(j)}\overline{B(x_{ji},r_j)}$. Then
$M=\overline{\bigcup_{j\geqslant 1}M_j}$ is a compact set containing $P_1$. Its interior $M^o$ contains $x$. Moreover, $P_1^*\cap \left(\bigcup_{j\geqslant 1}M_j\right)=\emptyset$ by definition of $N_j$ and $M_j$, while $M\setminus\left(\bigcup_{j}M_j\right)$ is a subset of $P_1\cap P_1^*$, hence we have $M\cap P_1^*=P_1\cap P_1^*$ and $y\notin M$. Also, $\partial M\cap X$ is a subset of $P_1\cap P_1^*$, hence it is a finite set.
Now $M$ is a continuum, since $P_1$ is itself a continuum and the disks $B(x_{ji},r_j)$ are centered at $x_{ji}\in N_j$. The continuum $M$ is even locally connected at every point on $M\setminus C=\bigcup_{j}M_j$. Indeed, it is locally a finite union of disks, since $M_j\cap M_k=\emptyset$ as soon as $|j-k|>1$ and since every point of $M\setminus C$ is in one of these disks. As $C$ is finite, it follows from Lemma~\ref{non-lc} that $M$ is a locally connected continuum.
Now, let $U$ be the component of $\mathbb{C}\setminus M$ that contains $y$. By Torhorst Theorem, see Lemma \ref{torhorst}, the boundary $\partial U$ of $U$ is a locally connected continuum. Therefore, by Lemma~\ref{fact-1}, the union $U\cup\partial U$ is also a locally connected continuum. Since $U$ is a complementary component of $\partial U$, the union $U\cup\partial U$ even has no cut point. It follows from Torhorst Theorem that the boundary $\partial V$ of any component $V$ of $\mathbb{C}\setminus(U\cup\partial U)$ is a simple closed curve. Note that this curve separates every point of $U$ from any point of $V$. Choosing $V$ to be the component of $\mathbb{C}\setminus(U\cup\partial U)$ containing $x$, we obtain a simple closed curve $J=\partial V$ separating $y$ from $x$.
Finally, since $J=\partial V\subset\partial U\subset\partial M$, we see that $J\cap X$ is contained in the finite set $C$. Consequently, $J$ is a good cut of $X$ separating $x$ from $y$.
\end{proof}
This result proves Theorem~\ref{criterion} and is related to Kiwi's characterization of fibers. Restricting to connected Julia sets $J(f)$ of polynomials $f$, Kiwi~\cite{Kiwi04} had defined for $\zeta\in J(f)$ the fiber $\textrm{Fiber}(\zeta)$ as the set of $\xi\in J(f)$ such that $\xi$ and $\zeta$ lie in the same connected component of $J(f)\setminus Z$ for every finite set $Z\subset J(f)$, made of periodic or preperiodic points that are not in the grand orbit of a Cremer point. Kiwi showed in~\cite[Corollary 2.18]{Kiwi04} that these fibers can be characterized by using separating curves involving external rays.
\section{A locally connected model for the continuum $X$}\label{proof-3}
In this section, we recall a few notions and results from Kelley's {\em General Topology}~\cite{Kelley55} and give a proof of Theorem~\ref{main-3}, whose conclusion is divided into two parts:
\begin{itemize}
\item[(1)] $X/\!\sim$ is metrizable, hence is a compact connected metric space, {\em i.e.}, a continuum.
\item[(2)] $X/\!\sim$ is a locally connected continuum.
\end{itemize}
A decomposition $\mathcal{D}$ of a topological space $X$ is {\em upper semi-continuous} if for each $D\in\mathcal{D}$ and each open set $U$ containing $D$ there is an open set $V$ such that $D\subset V\subset U$ and $V$ is the union of members of $\mathcal{D}$ \cite[p.99]{Kelley55}. Given a decomposition $\mathcal{D}$, we may define a projection $\pi: X\rightarrow\mathcal{D}$ by setting $\pi(x)$ to be the unique member of $\mathcal{D}$ that contains $x$. Then, the quotient space $\mathcal{D}$ is equipped with the largest topology such that $\pi: X\rightarrow \mathcal{D}$ is continuous. We copy the result of \cite[p.148, Theorem 20]{Kelley55} as follows.
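A simple example may help fix ideas (it is standard and plays no role in the sequel): the decomposition of the square $[0,1]^2$ into the vertical segments $D_t=\{t\}\times[0,1]$, $t\in[0,1]$, is upper semi-continuous. Indeed, if an open set $U$ contains $D_{t_0}$, then by compactness of $D_{t_0}$ there exists $\delta>0$ such that $V:=\bigl((t_0-\delta,t_0+\delta)\times[0,1]\bigr)\cap[0,1]^2$ satisfies $D_{t_0}\subset V\subset U$, and $V$ is a union of the segments $D_t$. The projection $\pi$ then identifies the quotient space with the interval $[0,1]$, a locally connected continuum.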
\begin{theo}\label{kelley-p148-20}
Let $X$ be a topological space, let $\mathcal{D}$ be an upper semi-continuous decomposition of $X$ whose members are compact, and let $\mathcal{D}$ have the quotient topology. Then $\mathcal{D}$ is, respectively, Hausdorff, regular, locally compact, or has a countable base, provided $X$ has the corresponding property.
\end{theo}
Urysohn's metrization theorem \cite[p.125, Theorem 16]{Kelley55} states that a regular $T_1$-space whose topology has a countable base is metrizable. Combining this with Theorem \ref{kelley-p148-20}, we see that the first part of Theorem \ref{main-3} is implied by the following theorem, since $X$ is a continuum on the plane and has all the properties mentioned in Theorem \ref{kelley-p148-20}. One may also refer to \cite[p.40, Theorem 3.9]{Nadler92}, which states that any upper semi-continuous decomposition of a compact metric space is metrizable.
\begin{theo}\label{usc}
The decomposition $\{[x]_\sim:x\in X\}$ is upper semi-continuous.
\end{theo}
\begin{proof}
Given a set $U$ open in $X$, we need to show that the union $U_\sim\subset U$ of all the classes $[x]_\sim\subset U$ is open in $X$. In other words, we need to show that $X\setminus U_\sim$ is closed in $X$, which implies that $\pi(X\setminus U_\sim)=(X/\!\sim)\setminus\pi(U_\sim)$ is closed in the quotient $X/\!\sim$. Here, we note that $X\setminus U_\sim$ is just the union of all the classes $[x]_\sim$ that intersect $X\setminus U$.
Assume that $y_k\in X\setminus U_\sim$ is a sequence converging to $y$; we will show that $[y]_\sim\setminus U\ne\emptyset$, hence that $y\in X\setminus U_\sim$. Let $z_k$ be a point in $[y_k]_\sim\setminus U$ for each $k\ge1$. By passing to an appropriate subsequence, we may further assume that
\begin{itemize}
\item the classes $[y_k]_\sim$ are either all equal or pairwise disjoint;
\item the sequence of continua $[y_k]_\sim$ converges to a continuum $M$ under Hausdorff metric;
\item the sequence $z_k$ converges to a point $z_\infty$.
\end{itemize}
Clearly, we have $z_\infty\in M$; and, as $X\setminus U$ is compact, we also have $z_\infty\in X\setminus U$. If all the classes $[y_k]_\sim$ are equal, then $M=[y_1]_\sim=[y]_\sim$, thus $[y]_\sim\setminus U\ne\emptyset$. If the classes $[y_k]_\sim$ are pairwise disjoint, let us check that $M\subset F_y^*$. Indeed, for any point $z\in M$ and for any finite set $C\subset X$ disjoint from $\{y,z\}$, all but finitely many $[y_k]_\sim$ are connected disjoint subsets of $X\setminus C$. It follows that there exists no separation $X\setminus C=A\cup B$ such that $y\in A, z\in B$, because $y$ and $z$ are both limit points of the sequence of continua $[y_k]_\sim$. Hence $M\subset F_y^*\subset [y]_\sim$ and $z_\infty\in M\cap(X\setminus U)$, indicating that $[y]_\sim\setminus U\ne\emptyset$.
\end{proof}
\begin{theo}\label{lc}
The quotient $X/\!\sim$ is a locally connected continuum.
\end{theo}
\begin{proof}
As $X$ is a continuum, $\pi(X)=X/\!\sim$ is itself a continuum. We now prove that this quotient is locally connected. If $V$ is an open set in $X/\!\sim$ that contains $[x]_{\sim}$, as an element of $X/\!\sim$, then the pre-image $U:=\pi^{-1}(V)$ is open in $X$ and contains the class $[x]_{\sim}$ as a subset. We shall prove that $V$ contains a connected neighborhood of $[x]_\sim$. Without loss of generality, we assume that $U\ne X$. Let $Q$ be the component of $U$ that contains $[x]_{\sim}$. By the boundary bumping theorem~\cite[p.75, Theorem 5.7]{Nadler92} (see also~\cite[p.41, Exercise 2]{Whyburn79}), since $X$ is connected, we have $\overline{Q}\setminus U\ne\emptyset$. Moreover, our proof will be completed by the following claim.
\noindent
{\bf Claim.} The connected set $\pi(Q)$, hence the component of $V$ that contains $[x]_{\sim}$ as a point, is a neighborhood of $[x]_{\sim}$ in the quotient space $X/\!\sim$.
Otherwise, there would exist an infinite sequence of points $[x_k]_{\sim}$ in $V\setminus \pi(Q)$ such that $\lim\limits_{k\rightarrow\infty}[x_k]_{\sim}= [x]_{\sim}$ under the quotient topology. Since $U=\pi^{-1}(V)$, every $x_k$ belongs to $U$. Let $Q_k$ be the component of $U$ that contains $x_k$. Here we have $Q_k\cap Q=\emptyset$. And, by the above-mentioned boundary bumping theorem, we also have $\overline{Q_k}\setminus U\ne\emptyset$.
Now, choose points $y_k\in [x_k]_{\sim}$ for every $k\ge1$ such that $\{y_k\}$ has a limit point $y$. Here, we certainly have $[y_k]_{\sim}=[x_k]_{\sim}$ and $[y]_{\sim}=[x]_{\sim}$.
By passing to an appropriate subsequence, we may assume that $\lim\limits_{k\rightarrow\infty}y_k=y$ and that $\lim\limits_{k\rightarrow\infty}\overline{Q_k}=M$ under the Hausdorff metric. Note that $[y_k]_{\sim}=[x_k]_{\sim}$ is a connected subset of $U$ containing $x_k$, so that $y_k\in Q_k$. Then $M$ is a continuum with $y\in M$ and $M\setminus U\ne\emptyset$, indicating that the fiber $F_y^*$ contains $M$, hence intersects $X\setminus U$. In other words, $F_y^*\nsubseteq U$, which contradicts the inclusions $y\in Q\subset U$ and $F_y^*\subset [y]_{\sim}=[x]_\sim\subset U$.
\end{proof}
\section{How fibers are changed under continuous maps}\label{fiber-basics}
In this section, we discuss how fibers are changed under continuous maps. As a special application, we may compare the dynamics of a polynomial $f_c(z)=z^n+c$ on its Julia set $J_c$, the expansion $z\mapsto z^n$ on the unit circle, and an induced map $\tilde{f}_c$ on the quotient $J_c/\!\sim$.
Let $X, Y\subset\mathbb{C}$ be continua and $x\in X$ a point. The first primary observation is that $f(F_x^*)\subset F_{f(x)}^*$ for any finite-to-one continuous surjection $f: X\rightarrow Y$.
Indeed, for any $y\ne x$ in the fiber $F_x^*$ and any finite set $C\subset Y$ that is disjoint from $\{f(x),f(y)\}$, we can see that $f^{-1}(C)$ is a finite set disjoint from $\{x,y\}$. Since $y\in F_x^*$, there exists no separation $X\setminus f^{-1}(C)=A\cup B$ with $x\in A$, $y\in B$; as any separation $Y\setminus C=P\cup Q$ with $f(x)\in P$, $f(y)\in Q$ would pull back to the separation $X\setminus f^{-1}(C)=f^{-1}(P)\cup f^{-1}(Q)$, no such separation exists either. This certifies that $f(y)\in F_{f(x)}^*$.
By the above inclusion $f(F_x^*)\subset F_{f(x)}^*$ we further have $f(X_0)\subset Y_0$. Here $X_0$ is the union of all the nontrivial fibers $F_x^*$ in $X$, and $Y_0$ the union of those in $Y$. It follows that $f([x]_\sim)\subset[f(x)]_\sim$. Therefore, the correspondence $[x]_\sim\xrightarrow{\hspace{0.2cm}\tilde{f}\hspace{0.2cm}}[f(x)]_\sim$ gives a well defined map $\tilde{f}: X/\!\sim\ \rightarrow Y/\!\sim$ that satisfies the following commutative diagram, in which each downward arrow $\downarrow$ indicates the natural projection $\pi$ from a space onto its quotient.
\[\begin{array}{ccc} X&\xrightarrow{\hspace{1cm}f\hspace{1cm}}&Y\\ \downarrow&&\downarrow\
\\ X/\!\sim&\xrightarrow{\hspace{1cm}\tilde{f}\hspace{1cm}}&Y/\!\sim
\end{array}\]
Given an open set $U\subset Y/\!\sim$, we can use the definition of quotient topology to infer that $V:=\tilde{f}^{-1}(U)$ is open in $X/\!\sim$ whenever $\pi^{-1}(V)$ is open in $X$. On the other hand, the above diagram ensures that $\pi^{-1}(V)=f^{-1}\left(\pi^{-1}(U)\right)$, which is an open set of $X$, by continuity of $f$ and $\pi$.
The above arguments lead us to a useful result for the study of dynamics on Julia sets.
\begin{theo}\label{dynamic}
Let $X, Y\subset\mathbb{C}$ be continua. Let the relation $\sim$ be defined as in Theorem \ref{main-3}. If $f: X\rightarrow Y$ is continuous, surjective and finite-to-one then $\tilde{f}([x]_\sim):=[f(x)]_\sim$ defines a continuous map with $\pi\circ f=\tilde{f}\circ\pi$.
\end{theo}
\begin{rema}
Every polynomial $f_c(z)=z^n+c$ restricted to its Julia set $J_c$ satisfies the conditions of Theorem \ref{dynamic}, if we assume that $J_c$ is connected; so the restricted system $f_c: J_c\rightarrow J_c$ has a factor system $\tilde{f}_c: J_c/\!\sim\rightarrow J_c/\!\sim$, whose underlying space is a locally connected continuum.
\end{rema}
Let $X\subset\mathbb{C}$ be an unshielded continuum and $U_\infty$ the unbounded component of $\mathbb{C}\setminus X$. Here, $X$ is unshielded provided that $X=\partial U_\infty$. Let $\mathbb{D}:=\{z\in\hat{\mathbb{C}}: |z|\le1\}$ be the closed unit disk. By the Riemann Mapping Theorem, there exists a conformal isomorphism $\Phi: \hat{\mathbb{C}}\setminus\mathbb{D}\rightarrow U_\infty$ that fixes $\infty$ and has positive derivative at $\infty$. The prime end theory \cite{CP02, UrsellYoung51} builds a correspondence between an angle $\theta\in S^1:=\partial\mathbb{D}$ and a continuum
\[\displaystyle Imp(\theta):=\left\{w\in X: \ \exists\ z_n\in\hat{\mathbb{C}}\setminus\mathbb{D}\ \text{with}\ z_n\rightarrow e^{{\bf i}\theta}, \lim\limits_{n\rightarrow\infty}\Phi(z_n)=w\right\}.\]
We call $Imp(\theta)$ the {\em impression of $\theta$}. By \cite[p.173, Theorem 9.4]{CL66}, we may fix a simple open arc ${\mathcal R}_\theta$ in $\mathbb{C}\setminus\mathbb{D}$ landing at the point $e^{{\bf i}\theta}$ such that $\overline{\Phi({\mathcal R}_\theta)}\cap X=Imp(\theta)$.
We will connect impressions to fibers. Before that, we obtain a useful lemma concerning good cuts of an unshielded continuum $X$ on the plane. Here a good cut of $X$ is a simple closed curve that intersects $X$ at a finite subset (see Remark~\ref{modified}).
\begin{lemm}\label{optimal-cut}
Let $X\subset\mathbb{C}$ be an unshielded continuum and $U_\infty$ the unbounded component of $\mathbb{C}\setminus X$. Let $x$ and $y$ be two points on $X$ separated by a good cut of $X$. Then we can find a good cut separating $x$ from $y$ that intersects $U_\infty$ at an open arc.
\end{lemm}
\begin{proof}
Let $\gamma$ be a good cut of $X$ separating $x$ from $y$. Since each of the two components of $\mathbb{C}\setminus\gamma$ intersects $\{x,y\}$, we have $\gamma\cap U_\infty\ne\emptyset$. Since $\gamma\cap X$ is a finite set, the difference $\gamma\setminus X$ has finitely many components. Let $\gamma_1,\ldots,\gamma_k$ be the components of $\gamma\setminus X$ that lie in $U_\infty$. Let $\alpha_i=\Phi^{-1}(\gamma_i)$ be the pre-images of $\gamma_i$ under $\Phi$. Then every $\alpha_i$ is a simple open arc in $\{z: |z|>1\}$ whose end points $a_i, b_i$ are located on the unit circle; and all those open arcs $\alpha_1,\ldots,\alpha_k$ are pairwise disjoint.
If $k\geqslant 2$, rename the arcs $\alpha_2,\ldots,\alpha_k$ so that we can find an open arc $\beta\subset(\mathbb{C}\setminus\mathbb{D})$ disjoint from $\bigcup_{i=1}^k\alpha_i$ that connects a point $a$ on $\alpha_1$ to a point $b$ on $\alpha_2$. Then $\gamma\cup\Phi(\beta)$ is a $\Theta$-curve separating $x$ from $y$ (see~\cite[Part B, Section VI]{Whyburn79} for a definition of $\Theta$-curve). Let $J_1$ and $J_2$ denote the two components of $\gamma\setminus\overline{\Phi(\beta)}=\gamma\setminus\{\Phi(a),\Phi(b)\}$. Then $J_1\cup\overline{\Phi(\beta)}$ and $J_2\cup\overline{\Phi(\beta)}$ are both good cuts of $X$. One of them, denoted by $\gamma'$, separates $x$ from $y$~\cite[$\Theta$-curve theorem, p.123]{Whyburn79}. By construction, this new good cut intersects $U_\infty$ at $k'$ open arcs for some $1\leqslant k'\leqslant k-1$. For the relative locations of $J_1,J_2$ and $\Phi(\beta)$ in $\hat{\mathbb{C}}$, we refer to Figure~\ref{theta}, in which $\gamma$ is represented as a round circle, although a general good cut is usually not one.
\begin{figure}
\caption{The $\Theta$-curve together with the arcs $J_1$, $J_2$, and $\Phi(\beta)$.}
\label{theta}
\end{figure}
If $k'\geqslant 2$, we may use the same argument on $\gamma'$ and obtain a good cut $\gamma''$ that separates $x$ from $y$ and that intersects $U_\infty$ at $k''$ open arcs for some $1\leqslant k''\leqslant k-2$. Repeating this procedure at most $k-1$ times, we obtain a good cut separating $x$ from $y$ that intersects $U_\infty$ at a single open arc.
\end{proof}
\begin{theo}\label{impression}
Let $X\subset\mathbb{C}$ be an unshielded continuum. Then every impression $Imp(\theta)$ is contained in a fiber $F_w^*$ for some $w\in Imp(\theta)$.
\end{theo}
\begin{proof}
Suppose that a point $y\ne x$ on $Imp(\theta)$ is separated from $x$ in $X$ by a finite set. By Proposition \ref{key}, we can find a good cut $\gamma$ separating $x$ from $y$. By Lemma \ref{optimal-cut}, we may assume that $\gamma\cap U_\infty$ is an open arc $\gamma_1$. Let $a$ and $b$ be the two end points of $\alpha_1=\Phi^{-1}(\gamma_1)$, an open arc in $\mathbb{C}\setminus\mathbb{D}$.
Fix an open arc ${\mathcal R}_\theta$ in $\mathbb{C}\setminus\mathbb{D}$ landing at the point $e^{{\bf i}\theta}$ such that $\overline{\Phi({\mathcal R}_\theta)}\cap X=Imp(\theta)$. We note that $e^{{\bf i}\theta}\in \{a,b\}$. Otherwise, there is a number $r>1$ such that $\mathcal{R}_\theta\cap\{z: |z|<r\}$ lies in the component of
\[(\mathbb{C}\setminus\mathbb{D})\setminus(\{a,b\}\cup \alpha_1)\qquad (\text{difference\ of}\ \mathbb{C}\setminus\mathbb{D}\ \text{and}\ \{a,b\}\cup \alpha_1)\]
whose closure contains $e^{{\bf i}\theta}$. From this we see that
$\Phi(\mathcal{R}_\theta\cap\{z: |z|<r\})$
is disjoint from $\gamma$ and is entirely contained in one of the two components of $\mathbb{C}\setminus\gamma$, which contain $x$ and $y$ respectively. Therefore,
\[\overline{\Phi(\mathcal{R}_\theta\cap\{z: |z|<r\})},\]
and hence its subset $Imp(\theta)$, cannot contain both $x$ and $y$. This contradicts the assumption that $x,y\in Imp(\theta)$.
We may now assume, without loss of generality, that $e^{{\bf i}\theta}=a$. Then $\Phi({\mathcal R}_\theta)$ intersects $\gamma_1$ infinitely many times, since $\overline{\Phi({\mathcal R}_\theta)}\setminus\Phi({\mathcal R}_\theta)$ contains $\{x,y\}$. This implies that $a$ is the landing point of $\mathcal{R}_\theta\subset(\mathbb{C}\setminus\mathbb{D})$.
Let $w=\lim_{z\rightarrow a}\left.\Phi\right|_{\alpha_1}(z)$. Then $\{x,y,w\}\subset Imp(\theta)$, and the proof will be completed if we can verify that $Imp(\theta)\subset F_w^*$.
Suppose there is a point $w_1\in Imp(\theta)$ that is not in $F_w^*$. By Lemma \ref{optimal-cut} we may find a good cut $\gamma'$ separating $w$ from $w_1$ that intersects $U_\infty$ at an open arc $\gamma_1'$. Let $\alpha_1'=\Phi^{-1}(\gamma_1')$. Let $I$ be the component of $S^1\setminus\overline{\alpha_1'}$ that contains $a$. Since $w\notin\gamma'$, the closure $\overline{\alpha_1'}$ does not contain the point $a$. Therefore, $\mathcal{R}_\theta\cap\{z: |z|<r_1\}$ is disjoint from $\alpha_1'$ for some $r_1>1$. For such an $r_1$, the image $\Phi(\mathcal{R}_\theta\cap\{z: |z|<r_1\})$ is disjoint from $\gamma'$. On the other hand, the good cut $\gamma'$ separates $w$ from $w_1$. Therefore, the closure of $\Phi(\mathcal{R}_\theta\cap\{z: |z|<r_1\})$, and hence its subset $Imp(\theta)$, cannot contain both $w$ and $w_1$. This is a contradiction.
\end{proof}
\begin{rema}\label{z^d-factor}
Let $J_c$ be the connected Julia set of a polynomial.
The equivalence classes $[x]_\sim$ obtained in this paper determine an upper semi-continuous decomposition of $J_c$, such that the quotient space is a locally connected continuum. Theorem \ref{impression} says that the impression of any prime end is entirely contained in a single class $[x]_\sim$. Therefore, the finest decomposition mentioned in \cite[Theorem 1]{BCO11} is finer than $\{[x]_\sim: x\in J_c\}$. Currently it is not clear whether these two decompositions just coincide. This is proposed as an open question in Problem \ref{finest}.
\end{rema}
\section{Facts and Examples}\label{examples}
In this section, we give several examples to demonstrate the difference between (1) separating and cutting sets, (2) the fiber $F_x^*$ and the class $[x]_\sim$, (3) a continuum $X\subset\mathbb{C}$ and the quotient space $X/\!\sim$ for specific choices of $X$. We also construct an infinite sequence of continua whose scales attain every value $k\ge 2$, and even $\infty$, although the quotient of each of those continua is homeomorphic with the unit interval $[0,1]$.
\begin{exam}[{\bf Separating Sets and Cutting sets}]\label{cutting-2}
For a set $M\subset\mathbb{C}$,
a set $C\subset M$ is said to \emph{separate} or to be a \emph{separating set between} two points $a,b\in M$ if there is a separation $M\setminus C=P\cup Q$ satisfying $a\in P, b\in Q$;
and a subset $C\subset M$ is called a {\em cutting set between two points $a,b\in M$} if $\{a,b\}\subset(M\setminus C)$ and if the component of $M\setminus C$ containing $a$ does not contain $b$ \cite[p.188,$\S$47.VIII]{Kuratowski68}.
Let $L_1$ be the segment between the points $(2,1)$ and $(2,0)$ on the plane, $Q_1$ the one between $(-2,0)$ and $c=(0,\frac{1}{2})$, and $P_1$ the broken line connecting $(2,0)$ to $(-2,0)$ through $(0,-1)$, as shown in Figure \ref{cutting-separating}.
\begin{figure}
\caption{The continuum $X$ and its quotient as a Hawaiian earring minus an open rectangle.}
\label{cutting-separating}
\end{figure}
Define $(x_1,x_2)\xrightarrow{\hspace{0.2cm}f\hspace{0.2cm}}(\frac{1}{2}x_1,\frac{1}{2}x_2)$ and $(x_1,x_2)\xrightarrow{\hspace{0.2cm}g\hspace{0.2cm}}(\frac{1}{2}x_1,x_2)$.
For any $k\ge1$, let $L_{k+1}=g(L_k)$ and $Q_{k+1}=g(Q_k)$; let $P_{k+1}=f(P_k)$. Let $B_k=L_k\cup P_k\cup Q_k$. Then $\{B_k: k\ge1\}$ is a sequence of broken lines converging to the segment $B$ between $a=(0,0)$ and $b=(0,1)$. Let $N=\left(\bigcup_kB_k\right)\bigcup B$. Then $N$ is a continuum, which fails to be locally connected at each point of $B$. Moreover, the singleton $\{c\}$ is a cutting set, but not a separating set, between the points $a$ and $b$. The only nontrivial fiber is $B=\{0\}\times[0,1]=F_x^*$ for each $x\in B$. So we have $\ell^*(N)=1$. Also, it follows that $[x]_\sim=B$ for all $x\in B$ and $[x]_\sim=\{x\}$ otherwise. In particular, the broken lines $B_k$ are still arcs in the quotient space but, under the quotient metric, their diameters converge to zero. Consequently, the quotient $N/\!\sim$ is topologically a Hawaiian earring minus a full open rectangle. See the right part of Figure~\ref{cutting-separating}. In other words, the quotient space $N/\!\sim$ is homeomorphic with the quotient $X/\!\sim$ of Example~\ref{witch-broom}.
\end{exam}
\begin{exam}[{\bf The Witch's Broom}]\label{witch-broom}
Let $X$ be the witch's broom \cite[p.84, Figure 5.22]{Nadler92}. See Figure~\ref{broom}.
\begin{figure}
\caption{An intuitive depiction of the witch's broom and its quotient space.}
\label{broom}
\end{figure}
More precisely, let $A_0:=[\frac{1}{2},1]\times\{0\}$; for $k\ge1$, let $A_k$ be the segment connecting $(1,0)$ to $(\frac{1}{2},2^{-k})$. Then $\displaystyle A=\bigcup_{k\ge0}A_k$ is a continuum (an infinite broom) which is locally connected everywhere except at the points of $[\frac{1}{2},1)\times\{0\}$. Let $f(x)=\frac{1}{2}x$ be a similarity contraction on $\mathbb{R}^2$. Let
\[X=\{(0,0)\}\cup A\cup f(A)\cup f^2(A)\cup\cdots\cup f^n(A)\cup\cdots\cdots.\]
The continuum $X$ is called the {\em Witch's Broom}. Considering the fibers of $X$, we have $F_x^*=\{x\}$ for each $x$ in $X\cap\{(x_1,x_2): x_2>0\}$ and for $x=(0,0)$. The nontrivial fibers include: $F_{(1,0)}^*=[\frac{1}{2},1]\times\{0\}$, $F_{(2^{-k},0)}^*=[2^{-k-1},2^{-k+1}]\times\{0\}\quad(k\ge1)$,
and
\[ F^*_{(x_1,0)}=[2^{-k},2^{-k+1}]\times\{0\}\quad (2^{-k}<x_1<2^{-k+1}, k\ge1).\]
Consequently, $[x]_\sim=\{x\}$ for each $x$ in $X\cap\{(x_1,x_2): x_2>0\}$, while $[x]_\sim=[0,1]\times\{0\}$ for $x\in[0,1]\times\{0\}$. See the right part of Figure \ref{broom} for a depiction of the quotient $X/\!\sim$.
\end{exam}
\begin{exam}[{\bf Witch's Double Broom}]\label{relations}
Let $X$ be the witch's broom. We call the union $Y$ of $X$ with a translated copy $X+(-1,0)$ the {\em witch's double broom} (see Figure \ref{double-broom}).
\begin{figure}
\caption{Relative locations of the points $(\pm1,0)$ and $(0,0)$ in witch's double broom.}
\label{double-broom}
\end{figure}
Define $x\approx y$ if there exist points $x_1=x,$ $x_2,\ldots, x_n=y$ in $Y$ such that $x_i\in F^*_{x_{i-1}}$ for $2\le i\le n$. Then $\approx$ is an equivalence relation, but it is not closed. Its closure $\approx^*$ is not transitive, since we have $(-1,0)\approx^*(0,0)$ and $(0,0)\approx^*(1,0)$, but $(-1,0)$ is not related to $(1,0)$ under $\approx^*$.
\end{exam}
\begin{exam}[{\bf Cantor's Teepee}]\label{cantor-teepee}
Let $X$ be Cantor's Teepee~\cite[p.145]{SS-1995}. See Figure \ref{teepee}. Then the fiber at the apex point $p$ is $F_p^*=X$; and for every other point $x$, $F_x^*$ is exactly the line segment of $X$ through $x$ and $p$. Therefore, $\ell^*(X)=1$. Moreover, $[x]_\sim=X$ for every $x$, hence the quotient is a single point. In this case, we also say that $X$ is collapsed to a point.
\begin{figure}
\caption{A simple representation of Cantor's Teepee.}
\label{teepee}
\end{figure}
\end{exam}
\begin{exam}[{\bf Cantor's Comb}]\label{cantor-comb}
Let $\mathcal{K}\subset[0,1]$ be Cantor's ternary set. Let $X$ be the union of $\mathcal{K}\times[0,1]$ with $[0,1]\times\{1\}$. See Figure \ref{comb}. We call $X$ the Cantor comb. Then the fiber $F_x^*=\{x\}$ for every point on $X$ that is off $\mathcal{K}\times[0,1]$; and for every point $x$ on $\mathcal{K}\times[0,1]$, the fiber $F_x^*$ is exactly the vertical line segment on $\mathcal{K}\times[0,1]$ that contains $x$. Therefore, $\ell^*(X)=1$. Moreover, $[x]_\sim=F_x^*$ for every $x$, hence the quotient is homeomorphic to $[0,1]$. Here, we note that $X$ is locally connected at every point lying on the common part of $[0,1]\times\{1\}$ and $\mathcal{K}\times[0,1]$, although the fibers at those points are each a non-degenerate segment.
\begin{figure}
\caption{Cantor's Comb, its nontrivial fibers, and the quotient $X/\!\sim$.}
\label{comb}
\end{figure}
\end{exam}
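As a small supplementary observation (ours, not part of the original text), the quotient map for Cantor's comb can be written down explicitly:

```latex
\begin{rema}
The quotient map $q\colon X\to X/\!\sim\;\cong[0,1]$ may be realized as the
projection $q(x_1,x_2)=x_1$: for $c\in\mathcal{K}$ the preimage
$q^{-1}(c)=\{c\}\times[0,1]$ is exactly the class $[x]_\sim$ of any point
$x$ on that vertical segment, while for $a\in[0,1]\setminus\mathcal{K}$ the
preimage is the singleton $\{(a,1)\}$.
\end{rema}
```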
\begin{exam}[{\bf More Combs}]\label{more-combs}
We use Cantor's ternary set $\mathcal{K}\subset[0,1]$ to construct a sequence of continua $\{X_k: k\ge1\}$, such that the scale $\ell^*(X_k)=k$ for all $k\ge1$. We also determine the fibers and compute the quotient spaces $X_k/\!\sim$. Let $X_1$ be the union of $X_1'=(\mathcal{K}+1)\times[0,2]$ with $[1,2]\times\{2\}$. Here $\displaystyle \mathcal{K}+1:=\left\{x_1+1: x_1\in\mathcal{K}\right\}$. Then $X_1$ is homeomorphic with Cantor's Comb defined in Example \ref{cantor-comb}. We have $\ell^*(X_1)=1$, and $X_1/\!\sim$ is homeomorphic with $[0,1]$. Let $X_2$ be the union of $X_1$ with $\displaystyle [0,1]\times(\mathcal{K}+1)$. See Figure \ref{combs}.
\begin{figure}
\caption{A simple depiction of $X_2$, the largest fiber, and the quotient $X_2/\!\sim$.}
\label{combs}
\end{figure}
Then the fiber of $X_2$ at the point $(1,2)\in X_2$ is $F_{(1,2)}^*=X_2\cap\{(x_1,x_2): x_1\le 1\}$, which will be referred to as the ``largest fiber'', since it is the fiber with the largest scale in $X_2$. See the central part of Figure~\ref{combs}. The other fibers are either a single point or a segment, of the form $\{(x_1,x_2): 0\le x_2\le 2\}$ for some $x_1\in \mathcal{K}+1$. Therefore, we have $\ell^*(X_2)=2$ and can check that the quotient $X_2/\!\sim$ is homeomorphic with $[0,1]$. Let $X_3$ be the union of $X_2$ with \[
\displaystyle \frac{X_1}{2}=\left\{\left(\frac{x_1}{2},\frac{x_2}{2}\right): (x_1,x_2)\in X_1\right\}.\]
Then the largest fiber of $X_3$ is exactly $F_{(1,2)}^*=X_3\cap\{(x_1,x_2): x_1\le 1\}$, which is homeomorphic with $X_2$. Therefore, $\ell^*(X_3)=3$; moreover, $X_3/\!\sim$ is also homeomorphic with $[0,1]$. See upper part of Figure \ref{scale-4}.
\begin{figure}
\caption{A depiction of $X_3, X_4$, the largest fibers, and the quotients $X_3/\!\sim$ and $X_4/\!\sim$ .}
\label{scale-4}
\end{figure}
Let $X_4=X_2\cup\frac{X_2}{2}$. Then the largest fiber of $X_4$ is $F_{(1,2)}^*=X_4\cap\{(x_1,x_2): x_1\le 1\}$, which is homeomorphic with $X_3$. Similarly, we can infer that $\ell^*(X_4)=4$ and that $X_4/\!\sim$ is homeomorphic with $[0,1]$. See lower part of Figure \ref{scale-4}. The construction of $X_k$ for $k\ge5$ can be done inductively. The general formula
$X_{k+2}=X_2\bigcup\frac{1}{2}X_k$ defines a path-connected continuum for all $k\ge3$, for which the largest fiber is homeomorphic to $X_{k+1}$. Therefore, we have $\ell^*(X_k)=k$; moreover, the quotient space $X_k/\!\sim$ is always homeomorphic to the interval $[0,1]$. Finally, we can verify that \[X_\infty=\{(0,0)\}\cup\left(\bigcup_{k=2}^\infty X_k\right)\]
is a path-connected continuum whose largest fiber is homeomorphic to $X_\infty$ itself. Therefore, $X_\infty$ has scale $\ell^*(X_\infty)=\infty$, and its quotient is homeomorphic to $[0,1]$.
\end{exam}
\textbf{Acknowledgements}
The authors are grateful to the referee for very helpful remarks, especially those about a gap in the proof for Theorem \ref{E=F} and an improved proof for Lemma \ref{fact-1}. The first author was supported by the Agence Nationale de la Recherche (ANR) and the Austrian Science Fund
(FWF) through the project \emph{Fractals and Numeration} ANR-FWF I1136 and the FWF Project 22 855.
The second author was supported by the Chinese National Natural Science Foundation Projects 10971233 and 11171123.
\normalsize
\baselineskip=17pt
\end{document} |
\begin{document}
\title{Nested Dissection Meets IPMs: \\
Planar Min-Cost Flow in Nearly-Linear Time\footnote{A preliminary
version of this work was published at SODA 2022.}}
\date{}
\maketitle
\begin{abstract}
We present a nearly-linear time algorithm for finding a minimum-cost
flow in planar graphs with polynomially bounded integer costs and
capacities. The previous fastest algorithm for this problem is
based on interior point methods (IPMs) and works for general sparse
graphs in $O(n^{1.5}\text{poly}(\log n))$ time [Daitch-Spielman,
STOC'08].
Intuitively, $\widetilde{\Omega}(n^{1.5})$ is a natural runtime barrier for IPM-based
methods, since they require $\sqrt{n}$ iterations, each
routing a possibly-dense electrical flow.
To break this barrier, we develop a new implicit representation for
flows based on generalized nested-dissection [Lipton-Rose-Tarjan,
SINUM'79] and approximate Schur complements [Kyng-Sachdeva,
FOCS'16]. This implicit representation permits us to design a data
structure to route an electrical flow with sparse demands in roughly
$\sqrt{n}$ update time, resulting in a total running time of
$O(n\cdot\text{poly}(\log n))$.
Our results immediately extend to all families of separable graphs.
\end{abstract}
\newpage
\tableofcontents
\newpage
\section{Introduction}
The minimum cost flow problem on planar graphs is a foundational
problem in combinatorial optimization studied since the 1950's. It has
diverse applications including network design, VLSI layout, and
computer vision. The seminal paper of Ford and Fulkerson in the 1950's
\cite{ford1956maximal} presented an $O(n^{2})$ time algorithm for the
special case of max-flow on $s,t$-planar graphs, i.e., planar graphs
with both the source and sink lying on the same face.
Over the decades since, a number of nearly-linear time max-flow
algorithms have been developed for special graph classes, including
undirected planar graphs by Reif, and Hassin-Johnson
\cite{reif1983minimum, hassin1985n}, planar graphs by
Borradaile-Klein \cite{BK09:journal},
and finally bounded genus graphs by Chambers-Erickson-Nayyeri
\cite{CEN12:journal}. However, for the more general min-cost flow
problem, there is no known result specializing on planar graphs with
better guarantees than on general graphs.
In this paper, we present the first nearly-linear time algorithm for min-cost flow on planar graphs:
\begin{restatable}[Main result]{theorem}{mainthm}\label{thm:mincostflow}
Let $G=(V,E)$ be a directed planar graph with $n$ vertices and $m$ edges.
Assume that the demands $\bm{d}$, edge capacities $\bm{u}$ and costs $\bm{c}$ are all integers and bounded by $M$ in absolute value.
Then there is an algorithm that computes a minimum cost flow satisfying demand $\bm{d}$ in $\widetilde{O}(n\log M)$ \footnote{Throughout the paper, we use $\widetilde{O}(f(n))$ to denote $O(f(n) \log^{O(1)}f(n))$.} expected time.
\end{restatable}
\begin{comment}
The \emph{depth} of an algorithm is defined as the maximum length of a sequential computation in the algorithm;
it is a well-studied concept as an alternative to runtime analysis in parallel computing \cite{SV81}.
\begin{theorem}
Our min-cost flow algorithm has $\widetilde{O}(\sqrt{n})$
depth by using parallel Laplacian solvers \cite{PS14, KX16}.
\end{theorem}
As the height of the separator tree is $O(\log m)$ and different regions at the same level are independent, we may perform one step of the IPM in $\widetilde{O}(1)$ depth, given a parallel Laplacian solver. As the IPM has $\widetilde{O}(\sqrt{m})$ steps, our algorithm has $\widetilde{O}(\sqrt{m})$ depth in total. We omit the formal proof in this paper.
\end{comment}
Our algorithm is fairly general and uses the planarity assumption
minimally. It builds on a combination of interior point methods (IPMs),
approximate Schur complements, and nested-dissection,
with the latter being the only component that exploits planarity.
Specifically, we require that for any subgraph of the input graph with $k$
vertices, we can find an $O(\sqrt{k})$-sized balanced vertex separator
in nearly-linear time.
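As a concrete illustration (our own sketch, not code from the paper), the classical nested-dissection recursion on an $r\times r$ grid graph, the prototypical planar graph, uses the middle row or column as an $O(\sqrt{k})$-sized balanced separator of each $k$-vertex block:

```python
# Illustrative sketch (not the paper's implementation): nested dissection
# on an r x r grid graph.  The middle row or column of a block is a
# balanced vertex separator of size O(sqrt(k)); recursing on the two
# halves and eliminating the separator last gives the classical ordering.

def nested_dissection(r0, r1, c0, c1):
    """Elimination order for the grid block [r0, r1) x [c0, c1)."""
    h, w = r1 - r0, c1 - c0
    if h <= 2 and w <= 2:           # constant-size base case
        return [(r, c) for r in range(r0, r1) for c in range(c0, c1)]
    if w >= h:                      # separate along the middle column
        mid = (c0 + c1) // 2
        sep = [(r, mid) for r in range(r0, r1)]
        halves = nested_dissection(r0, r1, c0, mid) + \
                 nested_dissection(r0, r1, mid + 1, c1)
    else:                           # separate along the middle row
        mid = (r0 + r1) // 2
        sep = [(mid, c) for c in range(c0, c1)]
        halves = nested_dissection(r0, mid, c0, c1) + \
                 nested_dissection(mid + 1, r1, c0, c1)
    return halves + sep             # separator vertices are eliminated last

order = nested_dissection(0, 8, 0, 8)   # 8 x 8 grid: 64 vertices
```

The top-level separator here has exactly $\sqrt{64}=8$ vertices; the same recursive pattern underlies the generalized nested dissection of Lipton, Rose, and Tarjan.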
As a result, the algorithm naturally generalizes to all graphs
with small separators:
Given a class $\mathcal{C}$ of graphs closed under taking subgraphs, we say it is \emph{$\alpha$-separable} if there are constants $0< b < 1$ and $c > 0$ such that every graph in $\mathcal{C}$ with $n$ vertices and $m$ edges has a balanced vertex separator with at most $c m^\alpha$ vertices, and both components obtained after removing the separator have at most $bm$ edges.
Then, our algorithm generalizes as follows:
\begin{restatable}[Separable min-cost flow]{corollary}{separableMincostflow}\label{thm:separable_mincostflow}
Let $\mathcal{C}$ be an $\alpha$-separable graph class such that we can compute a balanced separator for any graph in $\mathcal{C}$ with $m$ edges in $s(m)$ time for some convex
function $s$.
Given a graph $G\in \mathcal{C}$ with $n$ vertices and $m$ edges, integer demands $\bm{d}$, edge capacities $\bm{u}$ and costs $\bm{c}$, all bounded by $M$ in absolute value,
there is an algorithm that computes a minimum cost flow on $G$ satisfying demand $\bm{d}$ in $\widetilde{O}((m+m^{1/2+\alpha})\log M+s(m))$ expected time.
\end{restatable}
Beyond the study of structured graphs, we believe our paper is of
broader interest. The study of efficient optimization algorithms on
geometrically structured graphs is a topic at the intersection of
computational geometry, graph theory, combinatorial optimization, and
scientific computing, that has had a profound impact on each of these
areas. Connections between planarity testing and $3$-vertex
connectivity motivated the study of depth-first search algorithms
\cite{tarjan1971efficient}, and using geometric structures to find
faster solvers for structured linear systems provided foundations of
Laplacian algorithms as well as combinatorial scientific computing
\cite{lipton1979generalized, gremban1996combinatorial}.
Several surprising insights from our nearly-linear time algorithm are:
\begin{enumerate}
\item We are able to design a data structure for maintaining a
feasible primal-dual (flow/slack) solution that allows sublinear time updates --
requiring $\widetilde{O}(\sqrt{nK})$
time for a batch update consisting of updating the flow value of $K$ edges.
This ends up not being a bottleneck for the overall performance
because the interior point method only takes roughly $\sqrt{n}$
iterations and makes $K$-sparse updates roughly $\sqrt{n / K}$
times, resulting in a total running time of $\widetilde{O}(n)$.
\item We show that the subspace constraints on the feasible primal-dual solutions
can be maintained implicitly under dynamic updates to the solutions.
This circumvents the need to track the infeasibility of primal
solutions (flows), which was required in previous works.
\end{enumerate}
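A quick back-of-the-envelope calculation (our own illustration, with a hypothetical round value of $n$) shows why the $\widetilde{O}(\sqrt{nK})$-per-batch cost totals $\widetilde{O}(n)$: at each sparsity level $K$, roughly $\sqrt{n/K}$ batches of cost $\sqrt{nK}$ together contribute about $n$, and there are $O(\log n)$ levels.

```python
import math

# Back-of-the-envelope accounting (illustrative, not from the paper):
# at each power-of-two sparsity level K, ~sqrt(n/K) batches of cost
# ~sqrt(nK) contribute ~n, so the O(log n) levels sum to O(n log n).
n = 1 << 20
total = 0.0
K = 1
while K <= n:
    num_batches = math.sqrt(n / K)     # IPM makes ~sqrt(n/K) K-sparse steps
    cost_per_batch = math.sqrt(n * K)  # data-structure cost per batch
    total += num_batches * cost_per_batch
    K *= 2
# each of the log2(n)+1 levels contributes exactly n
```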
We hope our result provides both a host of new tools for devising
algorithms for separable graphs, as well as insights on how to further
improve such algorithms for general graphs.
\subsection{Previous work}
The min-cost flow problem is well studied in both structured graphs
and general graphs.
\cref{tab:runtime} summarizes the best algorithms for different settings prior to this work.
\begin{table}[H]
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline
Min-cost flow & Time bound & Reference\tabularnewline
\hline
\hline
Strongly polytime & $O(m^{2}\log n+mn\log^2 n)$ & \cite{orlin1988faster}\tabularnewline
\hline
Weakly polytime & $\widetilde{O}((m + n^{3/2}) \log^2 M)$ & \cite{BrandLLSSSW21} \tabularnewline
\hline
Unit-capacity & $m^{\frac{4}{3}+o(1)}\log M$ & \cite{axiotis2020circulation}\tabularnewline
\hline
\textbf{Planar graph} & $\widetilde{O}(n\log M)$ & \textbf{this paper}\tabularnewline
\hline
Unit-capacity planar graph & $O(n^{4/3}\log M)$ & \cite{KS19}\tabularnewline
\hline
Graph with treewidth $\tau$ & $\widetilde{O}(n\tau^{2}\log M)$ & \cite{treeLP}\tabularnewline
\hline
Outerplanar graph & $O(n\log^{2}n)$ & \cite{kaplan2013min}\tabularnewline
\hline
Unidirectional, bidirectional cycle & $O(n)$, $O(n\log n)$ & \cite{vaidyanathan2010fast}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Fastest known exact algorithms for the min-cost flow problem, ordered by the generality of the result. Here, $n$ is the number of vertices,
$m$ is the number of edges, and $M$ is the maximum of edge capacity and cost value.
After the preliminary version of this work was published at SODA 2022, the best weakly polytime algorithm was improved to $\widetilde{O}(m^{1 + o(1)} \log^2 M)$ by \cite{almostlinearmincostflow}.
\label{tab:runtime}}
\end{table}
\paragraph{Min-cost flow / max-flow on general graphs.}
Here, we focus on recent exact max-flow and min-cost flow algorithms.
For an earlier history, we refer the reader to the monographs \cite{king1994faster,ahuja1988network}.
For the approximate max-flow problem, we refer the reader to the recent
papers \cite{christiano2011electrical,sherman2013nearly,kelner2014almost,sherman2017area,sidford2018coordinate,bernstein2021deterministic}.
To understand the recent progress, we view the max-flow problem as
finding a unit $s,t$-flow with minimum $\ell_{\infty}$-norm,
and the shortest path problem as finding a unit $s,t$-flow with minimum $\ell_{1}$-norm.
Prior to 2008, almost all max-flow algorithms reduced
this $\ell_{\infty}$ problem to a sequence of $\ell_{1}$ problems,
(shortest path) since the latter can be solved efficiently.
This changed with the celebrated work of Spielman and Teng, which
showed how to find electrical flows ($\ell_{2}$-minimizing unit
$s,t$-flow) in nearly-linear time \cite{spielman2004nearly}.
Since the $\ell_{2}$-norm is closer to
$\ell_{\infty}$ than $\ell_{1}$, this gives a more powerful primitive
for the max-flow problem.
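To make the $\ell_2$ primitive concrete, here is a minimal self-contained sketch (our own illustration, not the algorithms discussed above): the electrical flow for a unit $s$-$t$ demand with unit resistances is $f_e = \phi_u - \phi_v$, where the potentials $\phi$ solve the Laplacian system $L\phi = b$. We ground one vertex and solve by plain Gaussian elimination.

```python
# Illustrative sketch (not from the paper): electrical flow on a small
# graph with unit resistances, computed by solving the Laplacian system
# L phi = b with one vertex grounded (phi[t] = 0).

def electrical_flow(n, edges, s, t):
    # Build the graph Laplacian L and the unit s-t demand vector b.
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0; L[v][v] += 1.0
        L[u][v] -= 1.0; L[v][u] -= 1.0
    b = [0.0] * n
    b[s], b[t] = 1.0, -1.0
    # Ground vertex t and solve the reduced system by Gauss-Jordan.
    idx = [i for i in range(n) if i != t]
    A = [[L[i][j] for j in idx] + [b[i]] for i in idx]
    m = len(idx)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(m):
            if r != col and A[r][col] != 0.0:
                fac = A[r][col] / A[col][col]
                A[r] = [x - fac * y for x, y in zip(A[r], A[col])]
    phi = [0.0] * n
    for k, i in enumerate(idx):
        phi[i] = A[k][m] / A[k][k]
    # The flow on edge (u, v) is the potential drop phi[u] - phi[v].
    return [phi[u] - phi[v] for u, v in edges]

# On a 4-cycle, the unit flow from vertex 0 to vertex 2 splits evenly
# over the two paths: f == [0.5, 0.5, -0.5, -0.5] (up to rounding).
f = electrical_flow(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 0, 2)
```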
In 2008, Daitch and Spielman demonstrated that one could apply
interior point methods (IPMs) to reduce min-cost flow to
roughly $\sqrt{m}$ electrical flow computations.
This follows from the fact that IPMs take $\widetilde{O}(\sqrt{m})$
iterations and each iteration requires solving an electrical flow
problem, which can now be solved in $\widetilde{O}(m)$ time due to the
work of Spielman and Teng.
Consequently, they obtained an algorithm with a $\widetilde{O}(m^{3/2}\log M)$ runtime \cite{daitch2008faster}.
Since then, several algorithms have utilized electrical flows and
other stronger primitives for solving max-flow and min-cost flow
problems.
For graphs with unit capacities, M\k{a}dry gave a
$\widetilde{O}(m^{10/7})$-time max-flow algorithm, the first that broke the
$3/2$-exponent barrier \cite{madry2013navigating}. It was later
improved and generalized to $O(m^{4/3+o(1)}\log M)$ \cite{axiotis2020circulation} for the min-cost flow problem.
Kathuria et al. \cite{KathuriaLS20} gave a similar runtime of $O(m^{4/3 + o(1)} U^{1/3})$ where $U$ is the max capacity.
The runtime improvement comes from decreasing the number of iterations of IPM to
$\widetilde{O}(m^{1/3})$ via a more powerful primitive of
$\ell_{2}+\ell_{p}$ minimizing flows \cite{kyng2019flows}.
For general capacities, the runtime has recently been improved to
$\widetilde{O}((m+n^{3/2})\log^2 M)$ for min-cost flow on dense graphs~\cite{BrandLLSSSW21},
and $\widetilde{O}(m^{\frac{3}{2}-\frac{1}{328}}\log M)$ for max-flow on sparse graphs~\cite{GaoLP21:arxiv}.
These algorithms focus on decreasing
the per-iteration cost of IPMs by dynamically maintaining electrical
flows. After the preliminary version of this work was accepted to SODA
2022, \cite{BGJLLPS21} gave a runtime of $\widetilde{O}(m^{\frac{3}{2}-\frac{1}{58}}\log^2 M)$ for general min-cost flow following the dynamic electrical flow framework.
Most recently, \cite{almostlinearmincostflow} improved the runtime for
general min-cost flow to $\widetilde{O}(m^{1+o(1)}\log^2 M)$ by solving a sequence of approximate undirected minimum-ratio cycles.
\paragraph{Max-flow on planar graphs.}
The planar max-flow problem has an equally long history. We refer the
reader to the thesis \cite{borradaile2008exploiting} for a detailed
exposition. In the seminal work of Ford and Fulkerson that introduced the max-flow min-cut theorem,
they also gave a max-flow algorithm for $s,t$-planar graphs (planar
graphs where the source and sink lie on the same face)\cite{ford1956maximal}.
This algorithm iteratively sends flow along the top-most augmenting path.
Itai and Shiloach showed how to implement each step in
$O(\log n)$ time, thus giving an $O(n\log n)$ time algorithm for
$s,t$-planar graphs~\cite{itai1979maximum}.
In this setting, Hassin also showed that the max-flow can be computed
using shortest-path distances in the planar dual in $O(n \log n)$ time~\cite{Hassin}.
Building on Hassin's work, the current best runtime is $O(n)$
by Henzinger, Klein, Rao, and Subramanian \cite{henzinger1997faster}.
For undirected planar graphs, Reif first gave an $O(n\log^{2}n)$ time
algorithm for finding the max-flow value \cite{reif1983minimum}.
Hassin and Johnson then showed how to compute the flow in the same
runtime \cite{hassin1985n}. The current best runtime is
$O(n\log\log n)$ by Italiano, Nussbaum, Sankowski, and
Wulff-Nilsen~\cite{INSW11}.
For general planar graphs, Weihe gave the first $O(n\log n)$ time
algorithm, assuming the graph satisfies certain
connectivity conditions~\cite{weihe1997maximum}.
Later, Borradaile and Klein gave an $O(n\log n)$
time algorithm for any planar graph \cite{BK09:journal}.
The multiple-source multiple-sink version of max-flow is considered
much harder on planar graphs. The first result of $O(n^{1.5})$
time was by Miller and Naor, for the case when sources and sinks all lie on the same face
\cite{MN95:journal}. This was then improved to $O(n\log^{3}n)$
in~\cite{BKMNW17:journal}.
For generalizations of planar graphs, Chambers, Erickson and Nayyeri gave the first nearly-linear time
algorithm for max-flow on graphs embedded on bounded-genus surfaces~\cite{CEN12:journal}.
Miller and Peng gave an $\widetilde{O}(n^{6/5})$-time algorithm for approximating undirected max-flow for the class of $O(\sqrt{n})$-separable graphs~\cite{MP13},
although this is superseded by the previously mentioned works for general graphs~\cite{sherman2013nearly,kelner2014almost}.
\paragraph{Min-cost flow on planar graphs.}
Imai and Iwano gave an $O(n^{1.594}\log M)$ time algorithm for min-cost
flow for the more general class of $O(\sqrt{n})$-separable
graphs~\cite{II90}.
To the best of our knowledge, there is little else known
about min-cost flow on general planar graphs. In the special case of
unit capacities, \cite{AKLR18,LR19} give an $O(n^{6/5}\log M)$ time
algorithm for min-cost perfect matching in bipartite planar graphs,
and Karczmarz and Sankowski give an $O(n^{4/3}\log M)$ time algorithm
for min-cost flow~\cite{KS19}.
Currently, bounded-treewidth graphs are the only graph family known to admit min-cost flow algorithms that run in nearly-linear time \cite{treeLP}.
\subsection{Challenges}
Here, we discuss some of the challenges in developing faster
algorithms for the planar min-cost flow problem from a convex
optimization perspective. For a discussion on challenges in designing
combinatorial algorithms, we refer the reader to
\cite{khuller1993lattice}. Prior to our result, the fastest min-cost
flow algorithm for planar graphs is based on interior point methods (IPMs) and takes $\widetilde{O}(n^{3/2}\log M)$ time~\cite{daitch2008faster}.
Intuitively, $\widetilde{\Omega}(n^{3/2})$ is a natural runtime barrier for IPM-based methods,
since they require $\widetilde{\Omega}(\sqrt{n})$ iterations, each computing a possibly-dense electrical flow.
\paragraph*{Challenges in improving the number of iterations.}
The $\widetilde{\Omega}(\sqrt{n})$ term comes from the fact that IPM uses the
electrical flow problem ($\ell_{2}$-type problem) to approximate the
shortest path problem ($\ell_{1}$-type problem). This
$\widetilde{\Omega}(\sqrt{n})$ term is analogous to the flow decomposition
barrier: in the worst case, we need $\widetilde{\Omega}(n)$ shortest paths
($\ell_{1}$-type problem) to solve the max-flow problem
($\ell_{\infty}$-type problem). Since $\ell_{2}$ and $\ell_{\infty}$
problems can differ substantially when there are $s$-$t$ paths of drastically
different lengths, difficult instances for electrical-flow-based
max-flow methods are often series-parallel (see Figure 3 in
\cite{christiano2011electrical} for an example). Therefore, planarity
does not help to improve the $\sqrt{n}$ term.
Although more general $\ell_{2}+\ell_{p}$ primitives have been
developed \cite{AdilKPS19, kyng2019flows, AdilS20, AdilBKS21}, exploiting their
power in designing current algorithms for exact max-flow problem has
been limited to perturbing the IPM trajectory, and such a perturbation
only works when the residual flow value is large. In all previous
works tweaking IPMs for breaking the 3/2-exponent barrier
\cite{madry2013navigating,madry2016computing,cohen2017negative,KathuriaLS20,axiotis2020circulation},
an augmenting path algorithm is used to send the remaining flow at the
end. Due to the residual flow restriction, all these results assume
unit-capacities on edges, and it seems unlikely that planarity can be
utilized to design an algorithm for polynomially-large capacities with
fewer than $\sqrt{n}$ IPM iterations.
\paragraph*{Challenges in improving the cost per iteration.}
Recently, there has been much progress on utilizing data structures
for designing faster IPM algorithms for general linear programs and
flow problems on general graphs. For general linear programs, robust
interior point methods have been developed recently with running times
that essentially match the matrix multiplication cost
\cite{CohenLS21, van2020deterministic, BrandLSS20, huang2021solving,
van2021unifying}. This version of IPM ensures that the $\ell_{2}$
problem solved changes in a sparse manner from iteration to iteration.
When used to design graph algorithms, the $i$-th iteration of
a robust IPM involves computing an electrical flow on some graph
$G_{i}$. The edge support remains unchanged between iterations, though
the edge weights change. Further, if $K_{i}$ is the number of edges
with weight changes between $G_{i}$ and $G_{i+1}$, then robust IPMs
guarantee that
\[
\sum_{i}\sqrt{K_{i}}=\widetilde{O}(\sqrt{m}\log M).
\]
Roughly, this says that, on average, each edge weight changes only
poly-log many times throughout the algorithm. Unfortunately, such a
sparsity bound alone is not enough to achieve nearly-linear time. Unlike the
shortest path problem, changing \emph{any} edge in a connected graph
will result in the electrical flow changing on essentially
\emph{every} edge. Therefore, it is very difficult to implement
(robust) IPMs in sublinear time per iteration, even if the subproblem
barely changes every iteration. On moderately dense graphs with
$m=\widetilde{\Omega}(n^{1.5})$, this issue can be avoided by first approximating
the graph by sparse graphs and solving the electrical flow on the
sparse graphs. This leads to $\widetilde{O}(n) \ll \widetilde{O}(m)$ time cost per
step~\cite{BrandLSS20}. However, on sparse graphs, significant obstacles remain.
Recently, there has been a major breakthrough in this direction by
using random walks to approximate the electrical flow
\cite{GaoLP21:arxiv, BGJLLPS21}. Unfortunately, this still requires
$m^{1-\frac{1}{58}}$ time per iteration.
Finally, we note that \cite{treeLP} gives an
$\widetilde{O}(n \tau^2 \log M)$-time algorithm for linear programs with
$\tau$ treewidth. Their algorithm maintains the solution using an
implicit representation. This implicit representation involves a
$\tau \times \tau$ matrix that records the interaction between every
variable within the vertex separator set. Each step of the algorithm updates
this matrix once and it is not the bottleneck for the
$\widetilde{O}(n \tau^2 \log M)$-time budget. However, for planar graphs,
this $\tau \times \tau$ matrix corresponds to a dense graph on $\sqrt{n}$ vertices
given by the Schur complement on the separator. Hence, updating it
using their method requires $\widetilde{\Omega}(n)$ time per step.
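To make the Schur complement primitive concrete, here is an exact toy computation by Gaussian elimination over the rationals (an illustrative helper of our own naming; the algorithms discussed in this paper instead maintain sparse \emph{approximate} Schur complements):

```python
from fractions import Fraction

def schur_complement(L, keep):
    """Eliminate all indices outside `keep` by Gaussian elimination and
    return the remaining keep-by-keep block, i.e. the Schur complement
    of L onto `keep`. For a graph Laplacian, this block is the Laplacian
    of a (generally dense) graph on the kept vertices."""
    n = len(L)
    L = [[Fraction(x) for x in row] for row in L]
    for p in (i for i in range(n) if i not in keep):
        for i in range(n):
            if i != p and L[i][p] != 0:
                factor = L[i][p] / L[p][p]
                for j in range(n):
                    L[i][j] -= factor * L[p][j]
    return [[L[i][j] for j in keep] for i in keep]
```

For instance, eliminating the interior vertex of a 3-vertex unit-weight path leaves the Laplacian of a single edge of weight $1/2$, matching the series-resistor rule for electrical flows.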
Our paper follows the approach in \cite{treeLP} and shows that this
dense graph can be sparsified. This is however subtle. Each step of
the IPM makes a global update via the implicit representation, hence
checking whether the flow is feasible takes at least linear time.
Therefore, we need to ensure each step is exactly feasible despite the
approximation.
If we are unable to do that, the algorithm will need to fix the flow
by augmenting paths at the end like \cite{KathuriaLS20,
axiotis2020circulation}, resulting in super-linear time and
polynomial dependence on capacities, rather than logarithmic.
\subsection{Our approaches}
In this section, we introduce our approach and explain how we overcome the difficulties we mentioned.
The min-cost flow problem can be reformulated into a linear program
in the following primal-dual form:
\[
\mathrm{(Primal)}=\min_{\mathbf{B}^{\top}\bm{f}=\bm{0},\;\bm{l}\leq\bm{f}\leq\bm{u}}\bm{c}^{\top}\bm{f}\quad\text{and}\quad\mathrm{(Dual)}=\max_{\mathbf{B}\bm{y}+\bm{s}=\bm{c}}\sum_{i}\min(\bm{l}_{i}\bm{s}_{i},\bm{u}_{i}\bm{s}_{i}),
\]
where $\mathbf{B}\in\mathbb{R}^{m\times n}$ is an edge-vertex incidence matrix of the
graph, $\bm{f}$ is the flow and $\bm{s}$ is the slack (or adjusted cost
vector). The primal is the min-cost circulation problem and the
dual is a variant of the min-cut problem. Our algorithm for
min-cost flow is composed of a novel application of IPM (\cref{subsec:IPM}) and new data structures
(\cref{subsec:overview_representation}). The IPM method reduces
solving a linear program to applying a sequence of
$\widetilde{O}(\sqrt{m}\log M)$ projections and the data structures
implement the primal and dual projection steps roughly in
$\widetilde{O}(\sqrt{m})$ amortized time.
\paragraph{Robust IPM.}
We first explain the IPM briefly. To minimize $\bm{c}^\top \bm{f}$, each step of the IPM moves the flow vector $\bm{f}$ in the direction of $-\bm{c}$. However, such a step may push $\bm{f}$ beyond its upper or lower capacities. The IPM incorporates these capacity constraints by routing flow more slowly on edges that approach their capacity bounds. This is achieved by controlling the edge weights $\mathbf{W}$ and the direction $\bm{v}$ in each projection step.
Both $\mathbf{W}$ and $\bm{v}$ are roughly chosen from some explicit entry-wise formula
of $\bm{f}$ and $\bm{s}$, namely, $\mathbf{W}_{ii}=\psi_{1}(\bm{f}_{i},\bm{s}_{i})$
and $\bm{v}_{i}=\psi_{2}(\bm{f}_{i},\bm{s}_{i})$. Hence, the main bottleneck
is to implement the projection step (computing $\mathbf{P}_{\bm{w}}\bm{v}$).
For the min-cost flow problem, this projection step corresponds to an electrical flow computation.
Recently, it has been observed that there is a lot of freedom in
choosing the weight $\mathbf{W}$ and the direction $\bm{v}$ (see for example
\cite{CohenLS21}). Instead of computing them exactly, we maintain some
entry-wise approximation $\overline{\vf},\overline{\vs}$ of $\bm{f},\bm{s}$ and use them to
compute $\mathbf{W}$ and $\bm{v}$. By updating $\overline{\vf}_{i},\overline{\vs}_{i}$ only when
$\bm{f}_{i},\bm{s}_{i}$ have changed significantly, we can ensure that $\overline{\vf},\overline{\vs}$ receive
mostly sparse updates. Since $\mathbf{W}$ and $\bm{v}$ are given by some
entry-wise formula of $\overline{\vf}$ and $\overline{\vs}$, this ensures that $\mathbf{W},\bm{v}$ change
sparsely and in turn allows us to maintain the corresponding projection
$\mathbf{P}_{\bm{w}}$ via low-rank updates.
We refer to IPMs that use approximate $\overline{\vf}$ and $\overline{\vs}$ as robust IPMs.
In this paper, we apply the version given in \cite{treeLP} in a
black-box manner. In \cref{subsec:IPM}, we state the IPM we use. The
key challenge is implementing each step in roughly $\widetilde{O}(\sqrt{m})$
time.
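As a toy illustration of this lazy-update rule, the following sketch (a hypothetical helper with an illustrative relative threshold; the actual data structures use weighted thresholds and sketching) refreshes a coordinate of the approximation only when the underlying value drifts too far:

```python
def maintain_approximation(f, fbar, delta=0.1):
    """Refresh fbar[i] to f[i] only when f[i] has drifted from fbar[i]
    by more than a delta fraction; return the refreshed coordinates.
    Toy version of the entry-wise approximation rule for fbar, sbar."""
    changed = []
    for i in range(len(f)):
        if abs(f[i] - fbar[i]) > delta * abs(fbar[i]):
            fbar[i] = f[i]
            changed.append(i)
    return changed
```

Over many small IPM steps, most coordinates stay below the threshold, so the weights $\mathbf{W}$ and direction $\bm{v}$, which are computed entry-wise from the approximations, change sparsely.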
\paragraph{Separators and Nested Dissection.}
Our data structures rely on the separability property of the input graph,
which dates back to the nested dissection algorithms for
solving planar linear systems~\cite{lipton1979generalized,
gilbert1987}. By recursively partitioning the graph into
edge-disjoint subgraphs (i.e. regions) using balanced vertex separators, we
can construct a hierarchical decomposition of a planar graph $G$ which
is called a \textit{separator tree} \cite{fakcharoenphol2006planar}. This is
a binary search tree over the edges in $G$. Each node in the
separator tree represents a region in $G$. In planar graphs, for a
region $H$ with $|H|$ vertices, an
$O(\sqrt{|H|})$-sized vertex separator suffices to partition it into
two balanced sub-regions which are represented by the two children of
$H$ in the separator tree. The two subregions partition the edges in $H$ and
share only vertices in the separator. We call the set of vertices in a
region $H$ that appear in the separators of its ancestors the
\textit{boundary} of $H$. Any two regions can only share
vertices on their boundaries unless one of them is an ancestor of the
other.
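As a toy example of this recursive decomposition, the sketch below builds a separator tree for a path graph, where a single shared vertex is a trivial balanced separator (real planar instances use $O(\sqrt{|H|})$-sized separators, e.g. via Lipton--Tarjan; the function and field names here are ours):

```python
def build_separator_tree(edges):
    """Recursively split a region (a list of consecutive path edges) at
    its middle vertex. Each node stores its edge set, the separator used,
    and its two child regions. Toy illustration: a path graph admits a
    single-vertex balanced separator."""
    if len(edges) <= 1:
        return {"edges": edges, "separator": [], "children": []}
    mid = len(edges) // 2
    left, right = edges[:mid], edges[mid:]
    sep = [edges[mid][0]]  # the one vertex shared by the two sub-regions
    return {"edges": edges, "separator": sep,
            "children": [build_separator_tree(left),
                         build_separator_tree(right)]}
```

The two children of each node partition the region's edges and overlap only in the separator vertex, mirroring the edge-disjointness and boundary-sharing properties described above.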
Nested dissection algorithms~\cite{lipton1979generalized,gilbert1987}
essentially replace each region by a graph involving only its boundary vertices,
in a bottom-up manner.
For planar linear systems, solving the dense $\sqrt{n} \times \sqrt{n}$ submatrix corresponding
to the top level vertex separator leads to a runtime of $n^{\omega / 2}$ where $\omega$ is the
matrix multiplication exponent.
For other problems such as shortest paths, this primitive involving
dense graphs can be further accelerated using additional properties
of distance matrices~\cite{fakcharoenphol2006planar}.
\paragraph{Technique 1: Approximate Nested Dissection and Lazy Propagation.}
Our representation of the Laplacian inverse, and in turn the
projection matrix, hinges upon a sparsified version of the
nested dissection representation.
That is, instead of a dense inverse involving all pairs of boundary
vertices, we maintain a sparse approximation.
This sparsified nested dissection has been used in the approximate
undirected planar flow algorithm from~\cite{MP13}.
However, that work pre-dated (and in some sense motivated) subsequent
works on nearly-linear time approximations of Schur complements
on general graphs~\cite{KLPSS16, KyngS16, Kyng17}.
Re-incorporating these sparsified algorithms gives runtime dependencies
that are nearly-linear, instead of quadratic, in separator sizes,
with an overall error that is acceptable to the robust IPM framework.
By maintaining, at each node of the separator tree, objects of size nearly equal to the separator size, we can support updating a single edge or a batch of edges in the graph efficiently. Our data structures for maintaining the approximate Schur complements and the slack and flow projection matrices all utilize this idea.
For example, to maintain the Schur complement of a region $H$ onto
its boundary (which is required in implementing the IPM step), we
maintain (1) the Schur complements of its children onto their boundaries,
recursively, and (2) the Schur complement of the children's boundaries onto
the boundary of $H$. Thus, to update an edge, the path in the
separator tree from the leaf node containing the edge to
the root is visited. To update multiple edges in a batch, each node in
the union of the tree paths is visited. The runtime is nearly
linear in the total number of boundary vertices of all nodes (regions)
in the union.
For $K$ edges being updated, the runtime is bounded by $\widetilde{O}(\sqrt{mK})$.
Step $i$ of our IPM algorithm takes $\widetilde{O}(\sqrt{mK_i})$ time, where $K_{i}$
is the number of coordinates changed in $\mathbf{W}$ and $\bm{v}$ in the step.
Such a recursive approximate Schur complement structure was used
in~\cite{GHP18}, where the authors achieved a running time of $\widetilde{O}(\sqrt{m} K_i)$.
\paragraph{Technique 2: Batching the changes.} It is known that over
$t$ iterations of an IPM, the number of coordinate changes (by
more than a constant factor) in $\mathbf{W}$ and $\bm{v}$ is bounded by $O(t^2)$.
This directly gives $\sum_{i=1}^{\widetilde{O}(\sqrt{m})}K_{i}=O(m)$ and thus, by the
Cauchy--Schwarz inequality, a total runtime of
$\sqrt{m} \left( \sum_{i=1}^{ \widetilde{O}(\sqrt{m}) } \sqrt{K_i} \right)
= \widetilde{O}(m^{1.25}).$ In order to obtain a nearly-linear runtime, the
robust IPM carefully batches the updates in different steps. In the
$i$-th step, if the change in an edge variable has exceeded some fixed
threshold compared to its value in the $(i-2^l)$-th step for some
$l\le \ell_i$, we adjust its approximation. (Here, $\ell_i$ is the
number of trailing zeros in the binary representation of $i,$
i.e. $2^{\ell_i}$ is the largest power of $2$ that divides $i$.)
This ensures that $K_i,$ the number of coordinate changes at step $i$,
is bounded by $\widetilde{O}(2^{2\ell_i})$.
Since each value of $\ell_i$ arises once every $2^{\ell_i}$ steps,
we can prove that the sum of square roots of the number of changes over all steps is bounded
by $\widetilde{O}(\sqrt{m})$, i.e.,
$\sum_{i=1}^{\widetilde{O}(\sqrt{m})} \sqrt{K_{i}}=\widetilde{O}(\sqrt{m}).$
Combined with the runtime of the data structures, this gives an
$\widetilde{O}(m)$ overall runtime.
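The schedule's accounting can be checked numerically: with $K_i = \widetilde{O}(2^{2\ell_i})$, the per-step terms $\sqrt{K_i}$ sum to $\sum_i 2^{\ell_i} = O(T\log T)$ over $T$ steps. A small sketch (function names ours, purely illustrative):

```python
def trailing_zeros(k):
    """Number of trailing zeros in the binary representation of k >= 1,
    i.e. the exponent ell_k of the largest power of 2 dividing k."""
    return (k & -k).bit_length() - 1

def schedule_sum(T):
    """Sum of 2^{ell_k} over steps k = 1..T: the sum of sqrt(K_k),
    up to polylog factors, under the batching schedule."""
    return sum(2 ** trailing_zeros(k) for k in range(1, T + 1))
```

For example, $T=1024$ gives a sum of $6144 \approx 6T$, far below the $\Theta(T^{1.5})$ growth that the unbatched bound $\sum_i K_i = O(T^2)$ would allow.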
\paragraph{Technique 3: Maintaining feasibility via two projections.}
A major difficulty in the IPM is maintaining a flow vector $\bm{f}$ that satisfies
the demands exactly and a slack vector $\bm{s}$ that can be expressed as
$\bm{s} = \bm{c} - \mathbf{B} \bm{y}$. If we simply project $\bm{v}$ approximately in each
step, the flow we send is not exactly a circulation. Traditionally,
this can be fixed by computing the excess demand each step and sending
flow to fix this demand. Since our edge capacities can be polynomially
large, this step can take $\widetilde{\Omega}(m)$ time.
To overcome this feasibility problem, we note that distinct projection operators
$\mathbf{P}_{\bm{w}}$ can be used in IPMs for $\bm{f}$ and $\bm{s}$ as long as each
projection is close to the true projection and that the step satisfies
$\mathbf{B}^{\top}\Delta\bm{f}=\bm{0}$ and $\mathbf{B}\Delta\bm{y}+\Delta\bm{s}=\bm{0}$ for
some $\Delta\bm{y}$.
This two-operator scheme is essential to
our improvement since one can prove that any projection that gives
feasible steps for $\bm{f}$ and $\bm{s}$ simultaneously must be the exact electrical
projection, which takes linear time to compute.
\section{Overview}
In this section, we give formal statements of the main theorems proved
in the paper, along with the proof for our main result.
We provide a high-level explanation of the algorithm, sometimes using a simplified setup.
The main components of this paper are:
the IPM from \cite{treeLP} (\cref{subsec:IPM});
a data structure to maintain a collection of Schur complements via nested dissection of the graph (\cref{subsec:overview_sc});
abstract data structures to maintain the solutions $\bm{s}, \bm{f}$ implicitly, notably using an abstract tree operator (\cref{subsec:overview_representation});
a sketching-based data structure to maintain the approximations $\overline{\vs}$ and $\overline{\vf}$ needed in the IPM (\cref{subsec:overview_sketch}); and finally, the definition of the tree operators for slack and flow corresponding to the IPM projection matrices onto their respective feasible subspaces, along with the complete IPM data structure for slack and flow (\cref{subsec:dual_overview,subsec:primal_overview}).
We extend our result to $\alpha$-separable graphs in \cref{sec:separable}.
\input{overview_ipm}
A key idea in our paper involves the computation of projection
matrices required for the RIPM. Recall from the definition of
$\mathbf{P}_{\bm{w}}$ in \cref{alg:IPM_centering},
the true projection matrix is
\begin{align}
\mathbf{P}_{\bm{w}} &\stackrel{\mathrm{\scriptscriptstyle def}}{=}\mathbf{W}^{1/2}\mathbf{B}(\mathbf{B}^{\top}\mathbf{W}\mathbf{B})^{-1}\mathbf{B}^{\top}\mathbf{W}^{1/2}. \nonumber
\intertext{We let $\mathbf{L}$ denote the weighted Laplacian where $\mathbf{L}=\mathbf{B}^{\top}\mathbf{W}\mathbf{B}$, so that}
\mathbf{P}_{\bm{w}} &= \mathbf{W}^{1/2}\mathbf{B}\mathbf{L}^{-1}\mathbf{B}^{\top}\mathbf{W}^{1/2}. \label{eq:overview:mproj}
\end{align}
\begin{lemma}
\label{lem:approx-projections}
To implement \cref{line:step_user_begin} in
\cref{alg:IPM_centering}, it suffices to find an approximate
slack projection matrix $\widetilde{\mathbf{P}}_{\bm{w}}$ satisfying
$\norm{
\left(\widetilde{\mathbf{P}}_{\bm{w}} - \mathbf{P}_{\bm{w}} \right) \bm{v}}_{2} \leq
\alpha \norm{\bm{v}}_2$ and
$\mathbf{W}^{-1/2} \widetilde{\mathbf{P}}_{\bm{w}} \bm{v} \in \mathrm{Range}(\mathbf{B})$;
and an approximate flow projection matrix
$\widetilde{\mathbf{P}}'_{\bm{w}}$ satisfying
$\norm{ \left (\widetilde{\mathbf{P}}'_{\bm{w}} -\mathbf{P}_{\bm{w}} \right) \bm{v}}_{2} \leq \alpha \norm{\bm{v}}_2$ and
$\mathbf{B}^\top \mathbf{W}^{1/2} \widetilde{\mathbf{P}}'_{\bm{w}} \bm{v} = \mathbf{B}^\top \mathbf{W}^{1/2} \bm{v}$.
\end{lemma}
\begin{proof}
We simply observe that setting $\bm{v}^{\parallel} = \widetilde{\mathbf{P}}_{\bm{w}} \bm{v}$ and $\bm{v}^{\perp} = \bm{v} - \widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}$ suffices.
\end{proof}
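Concretely, using the step forms annotated in \cref{algo:IPM_impl} (up to sign and scaling conventions), the flow step $\Delta\bm{f}=h\mathbf{W}^{1/2}(\bm{v}-\widetilde{\mathbf{P}}'_{\bm{w}}\bm{v})$ satisfies
\[
\mathbf{B}^{\top}\Delta\bm{f}=h\left(\mathbf{B}^{\top}\mathbf{W}^{1/2}\bm{v}-\mathbf{B}^{\top}\mathbf{W}^{1/2}\widetilde{\mathbf{P}}'_{\bm{w}}\bm{v}\right)=\bm{0}
\]
by the second guarantee, so the flow remains an exact circulation. For the slack step $\Delta\bm{s}=\overline{t}h\mathbf{W}^{-1/2}\widetilde{\mathbf{P}}_{\bm{w}}\bm{v}$, the guarantee $\mathbf{W}^{-1/2}\widetilde{\mathbf{P}}_{\bm{w}}\bm{v}\in\mathrm{Range}(\mathbf{B})$ yields $\Delta\bm{s}=\mathbf{B}\bm{y}'$ for some $\bm{y}'$, so taking $\Delta\bm{y}=-\bm{y}'$ gives $\mathbf{B}\Delta\bm{y}+\Delta\bm{s}=\bm{0}$. The $\ell_2$ bounds on $\widetilde{\mathbf{P}}_{\bm{w}}-\mathbf{P}_{\bm{w}}$ and $\widetilde{\mathbf{P}}'_{\bm{w}}-\mathbf{P}_{\bm{w}}$ ensure that $\bm{v}^{\parallel}$ and $\bm{v}^{\perp}$ are within $\alpha\|\bm{v}\|_{2}$ of the exact projections $\mathbf{P}_{\bm{w}}\bm{v}$ and $(\mathbf{I}-\mathbf{P}_{\bm{w}})\bm{v}$, respectively.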
In finding these approximate projection matrices, we apply ideas from nested dissection and approximate Schur complements to the matrix $\mathbf{L}$.
\input{overview_sc}
\input{overview_representation}
\input{overview_sketch}
\input{overview_maintain_slack}
\input{overview_maintain_flow}
\subsection{Main proof\label{subsec:overview_proof}}
We are now ready to prove our main result.
\cref{algo:IPM_impl} presents the implementation of RIPM \cref{alg:IPM_centering} using our data structures.
\begin{algorithm}
\caption{Implementation of Robust Interior Point Method\label{algo:IPM_impl}}
\begin{algorithmic}[1]
\Procedure{$\textsc{CenteringImpl}$}{$\mathbf{B},\phi,\bm{f},\bm{s},t_{\mathrm{start}},t_{\mathrm{end}}$}
\State $G$: graph on $n$ vertices and $m$ edges with incidence matrix $\mathbf{B}$
\State $\mathcal{S}, \mathcal{F}$: data structures for slack and flow maintenance
\Comment \cref{thm:SlackMaintain,thm:FlowMaintain}
\State $\alpha \stackrel{\mathrm{\scriptscriptstyle def}}{=} \frac{1}{2^{20}\lambda}, \lambda \stackrel{\mathrm{\scriptscriptstyle def}}{=} 64\log(256m^{2})$
\State $t\leftarrow t_{\mathrm{start}}$, $\overline{\vf}\leftarrow\bm{f},\overline{\vs}\leftarrow\bm{s},\overline{t}\leftarrow t$, $\mathbf{W}\leftarrow\nabla^{2}\phi(\overline{\vf})^{-1}$
\Comment variable initialization
\State $\bm{v}_{i} \leftarrow \sinh(\lambda\gamma^{\overline{t}}(\overline{\vf},\overline{\vs})_{i})$ for all $i \in [m]$
\Comment data structure initialization
\State $\mathcal{F}.\textsc{Initialize}(G,\bm{f},\bm{v}, \mathbf{W},\varepsilon_{\mathbf{P}}=O(\alpha/\log m),\overline{\varepsilon}=\alpha)$
\Comment choose $\varepsilon_{\mathbf{P}}$ so $\eta\varepsilon_{\mathbf{P}} \leq \alpha$ in \cref{thm:SlackMaintain}
\State $\mathcal{S}.\textsc{Initialize}(G,\overline{t}^{-1}\bm{s},\bm{v}, \mathbf{W},\varepsilon_{\mathbf{P}}=O(\alpha/\log m),\overline{\varepsilon}=\alpha)$
\Comment and $O(\eta\varepsilon_{\mathbf{P}}) \leq \alpha$ in \cref{thm:FlowMaintain}
\While{$t\geq t_{\mathrm{end}}$}
\State $t\leftarrow\max\{(1-\frac{\alpha}{\sqrt{m}})t, t_{\mathrm{end}}\}$
\State Update $h=-\alpha/\|\cosh(\lambda\gamma^{\overline{t}}(\overline{\vf},\overline{\vs}))\|_{2}$
\Comment $\gamma$ as defined in \cref{eq:mu_t_def}\label{line:step_given_begin_impl}
\State Update the diagonal weight matrix $\mathbf{W}=\nabla^{2}\phi(\overline{\vf})^{-1}$\label{line:step_given_3_impl}
\State $\mathbf{A}thcal{F}.\textsc{Reweight}(\mathbf{W})$ \Comment update the implicit representation of $\bm{f}$ with new weights
\State $\mathbf{A}thcal{S}.\textsc{Reweight}(\mathbf{W})$ \Comment update the implicit representation of $\bm{s}$ with new weights
\State $\bm{v}_{i} \leftarrow \sinh(\lambda\gamma^{\overline{t}}(\overline{\vf},\overline{\vs})_{i})$ for all $i$ where $\overline{\vf}_i$ or $\overline{\vs}_i$ has changed \label{line:step_given_end_impl}
\Comment update direction $\bm{v}$
\State \Comment $\mathbf{P}_{\bm{w}}\stackrel{\mathrm{\scriptscriptstyle def}}{=}\mathbf{W}^{1/2}\mathbf{B}(\mathbf{B}^{\top}\mathbf{W}\mathbf{B})^{-1}\mathbf{B}^{\top}\mathbf{W}^{1/2}$
\State $\mathcal{F}.\textsc{Move}(h,\bm{v})$
\Comment Update $\bm{f}\leftarrow\bm{f}+h\mathbf{W}^{1/2}\bm{v} - h \mathbf{W}^{1/2} \tilde{\bm{f}}$ with $\tilde{\bm{f}} \approx \mathbf{P}_{\bm{w}} \bm{v}$
\State $\mathcal{S}.\textsc{Move}(h,\bm{v})$ \Comment Update $\bm{s}\leftarrow\bm{s}+\overline{t} h\mathbf{W}^{-1/2}\tilde{\bm{s}}$ with $\tilde{\bm{s}} \approx\mathbf{P}_{\bm{w}} \bm{v}$
\State $\overline{\vf}\leftarrow\mathcal{F}.\textsc{Approximate}()$ \Comment Maintain $\overline{\vf}$ such that $\|\mathbf{W}^{-1/2}(\overline{\vf}-\bm{f})\|_{\infty}\leq\alpha$
\State $\overline{\vs}\leftarrow\overline{t}\mathcal{S}.\textsc{Approximate}()$ \Comment Maintain $\overline{\vs}$ such that $\|\mathbf{W}^{1/2}(\overline{\vs}-\bm{s})\|_{\infty}\leq\overline{t}\alpha$
\label{line:step_user_end_impl}
\If {$|\overline{t}-t|\geq\alpha\overline{t}$}
\State $\bm{s}\leftarrow \overline{t}\mathcal{S}.\textsc{Exact}()$
\State $\overline{t}\leftarrow t$
\State $\mathcal{S}.\textsc{Initialize}(G, \overline{t}^{-1}\bm{s}, \bm{v}, \mathbf{W},\varepsilon_{\mathbf{P}}=O(\alpha/\log m),\overline{\varepsilon}=\alpha)$
\EndIf
\EndWhile
\State \Return $(\mathcal{F}.\textsc{Exact}(),\overline{t}\mathcal{S}.\textsc{Exact}())$\label{line:step_user_output_impl}
\EndProcedure
\end{algorithmic}
\end{algorithm}
We first prove a lemma about how many coordinates change in $\bm{w}$ and $\bm{v}$ in each step. This is useful for bounding the complexity of each iteration.
\begin{lemma}\label{lem:cor_change}
When updating $\bm{w}$ and $\bm{v}$ at the $(k+1)$-th step of the $\textsc{CenteringImpl}$ algorithm,
$\bm{w}$ and $\bm{v}$ change in $O(2^{2\ell_{k-1}}\log^{2}m+2^{2\ell_{k}}\log^{2}m)$ coordinates, where $\ell_{k}$
is the largest integer $\ell$ with $k \equiv 0 \pmod{2^{\ell}}$.
\end{lemma}
\begin{proof}
Since both $\bm{w}$ and $\bm{v}$ are entry-wise functions of $\overline{\vf}$, $\overline{\vs}$, and $\overline{t}$, we need to examine these variables.
First, $\overline{t}$ changes every $\widetilde{O}(\sqrt{m})$ steps, and when $\overline{t}$ changes, every coordinate of $\bm{w}$ and $\bm{v}$ changes.
Over the entire \textsc{CenteringImpl} run, $\overline{t}$ changes $\widetilde{O}(1)$ times, so we may incur an additive $\widetilde{O}(m)$ term overall and assume $\overline{t}$ does not change for the rest of the analysis.
By \cref{thm:IPM}, we have $h\|\bm{v}\|_2 = O(\frac{1}{\log m})$ at all steps. So we apply \cref{thm:SlackMaintain} and \cref{thm:FlowMaintain} both with parameters $\beta = O(\frac{1}{\log m})$ and $\overline{\vv}erline{\varepsilonilon}=\alpha=\Theta(\frac{1}{\log m})$. We use their conclusions in the following argument.
Let the superscript $^{(k)}$ denote the variable at the end of the $k$-th step.
By definition, $\bm{w}^{(k+1)}$ is an entry-wise function of $\overline{\vf}^{(k)}$, and recursively, $\overline{\vf}^{(k)}$ is an entry-wise function of $\bm{w}^{(k)}$.
We first prove inductively that at step $k$, $O(2^{2\ell_{k}}\log^{2}m)$ coordinates of $\overline{\vf}$ change to the exact values $\bm{f}^{(k)}$, and there are no other changes. This allows us to conclude that $\bm{w}^{(k+1)}$ differs from $\bm{w}^{(k)}$ on $O(2^{2\ell_{k}} \log^2 m)$ coordinates.
In the base case at step $k=1$, because $\bm{w}^{(1)}$ is equal to the initial weights $\bm{w}^{(0)}$,
only $O(2^{2\ell_{1}}\log^{2}m)$ coordinates $\overline{\vf}_e$ change to $\bm{f}^{(1)}_e$.
Suppose at step $k$, a set $S$ of $O(2^{2\ell_{k}}\log^{2}m)$ coordinates of $\overline{\vf}$ change; that is, $\overline{\vf}|_S$ is updated to $\bm{f}^{(k)}|_S$, and there are no other changes.
Then at step $k+1$, by definition, $\bm{w}^{(k+1)}$ differs from $\bm{w}^{(k)}$ exactly on $S$,
and in turn, $\overline{\vf}^{(k+1)}|_S$ is set to $\bm{f}^{(k)}|_S$ again (\cref{line:D-induced-ox-change} of \cref{algo:maintain-vector}).
In other words, there is no change from this operation.
Then, $O(2^{2\ell_{k+1}}\log^{2}m)$ additional coordinates $\overline{\vf}_e$ change to $\bm{f}^{(k+1)}_e$.
Now, we bound the change in $\overline{\vs}$: \cref{thm:SlackMaintain} guarantees that in the $k$-th step, there are $O(2^{2\ell_{k}}\log^{2}m)+D$ coordinates in $\overline{\vs}$ that change, where $D$ is the number of changes between $\bm{w}^{(k-1)}$ and $\bm{w}^{(k)}$ and is equal to $O(2^{2\ell_{k-1}} \log^2 m)$ as shown above.
Finally, $\bm{v}^{(k+1)}$ is an entry-wise function of $\overline{\vf}^{(k)}$ and $\overline{\vs}^{(k)}$, so we conclude that $\bm{v}^{(k+1)}$ and $\bm{v}^{(k)}$ differ on at most $O(2^{2\ell_{k}}\log^{2}m) + 2 \cdot O(2^{2\ell_{k-1}}\log^{2}m)$ coordinates.
\end{proof}
\mainthm*
\begin{proof}
The proof is structured as follows. We first write the minimum cost
flow problem as a linear program of the form \cref{eq:LP}. We prove
that the linear program has an interior point and is bounded, so as to satisfy the assumptions
in \cref{thm:IPM}. Then, we implement the IPM algorithm using the
data structures from \cref{subsec:overview_representation,subsec:overview_sketch,subsec:dual_overview,subsec:primal_overview}.
Finally, we bound the cost of each operation of the data structures.
To write down the min-cost flow problem as a linear program of the form \cref{eq:LP}, we
add extra vertices $s$ and $t$.
Let $\bm{d}$ be the demand vector of the min-cost flow problem.
For every vertex $v$ with $\bm{d}_{v}<0$, we add a directed edge from
$s$ to $v$ with capacity $-\bm{d}_{v}$ and cost $0$. For every vertex
$v$ with $\bm{d}_{v}>0$, we add a directed edge from $v$ to $t$
with capacity $\bm{d}_{v}$ and cost $0$.
Then, we add a directed edge
from $t$ to $s$ with capacity $4nM$ and cost $-4nM$. The modified graph is no longer planar but it has only two extra vertices $s$ and $t$.
The cost and capacity on the $t\rightarrow s$ edge is chosen such that
the minimum cost flow problem on the original graph is equivalent
to the minimum cost circulation on this new graph. Namely, if the
minimum cost circulation in this new graph satisfies all the demand
$\bm{d}_{v}$, then this circulation (ignoring the flow on the new edges)
is the minimum cost flow in the original graph.
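A sketch of this construction (a helper of our own naming; edges are encoded as (tail, head, capacity, cost) tuples and the two new vertices take the next available ids):

```python
def mincost_flow_to_circulation(n, edges, demand, M):
    """Reduce a min-cost flow instance on n vertices with demand vector
    `demand` to a min-cost circulation instance, as in the text: add a
    super-source s and super-sink t, connect them to the demand vertices
    with zero-cost edges, and add a t->s edge with capacity 4nM and cost
    -4nM. Edges are (tail, head, capacity, cost) tuples."""
    s, t = n, n + 1  # two fresh vertex ids
    new_edges = list(edges)
    for v, d in enumerate(demand):
        if d < 0:    # supply vertex: route its excess from s
            new_edges.append((s, v, -d, 0))
        elif d > 0:  # demand vertex: route its demand into t
            new_edges.append((v, t, d, 0))
    new_edges.append((t, s, 4 * n * M, -4 * n * M))
    return n + 2, new_edges
```

The heavily negative cost on the $t\rightarrow s$ edge rewards saturating it, which forces the circulation to route as much demand as possible through the original graph.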
Since \cref{thm:IPM} requires an interior point in the polytope,
we first remove all directed edges $e$ through which no flow from $s$ to $t$ can
pass. To do this, we simply check, for every directed edge
$e=(v_{1},v_{2})$, if $s$ can reach $v_{1}$ and if $v_{2}$ can
reach $t$. This can be done in $O(m)$ time by a BFS from $s$ and
a reverse BFS from $t$.
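This preprocessing is just a forward reachability search from $s$ plus a reverse one from $t$; a direct sketch (helper name ours):

```python
from collections import deque

def prune_unusable_edges(n, edges, s, t):
    """Keep only directed edges (u, v) with u reachable from s and t
    reachable from v, i.e. edges that can carry s-t flow. Two BFS
    passes, O(n + m) time. Edges are (tail, head) pairs."""
    fwd = [[] for _ in range(n)]
    rev = [[] for _ in range(n)]
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)

    def bfs(adj, src):
        seen = [False] * n
        seen[src] = True
        q = deque([src])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if not seen[y]:
                    seen[y] = True
                    q.append(y)
        return seen

    from_s = bfs(fwd, s)  # vertices reachable from s
    to_t = bfs(rev, t)    # vertices that can reach t
    return [(u, v) for u, v in edges if from_s[u] and to_t[v]]
```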
With this preprocessing, we write the minimum cost circulation problem
as the following linear program
\[
\min_{\mathbf{B}^{\top}\bm{f}=\bm{0},\; \bm{l}^{\mathrm{new}}\leq\bm{f}\leq\bm{u}^{\mathrm{new}}}(\bm{c}^{\mathrm{new}})^{\top}\bm{f}
\]
where $\mathbf{B}$ is the signed incidence matrix of the new graph, $\bm{c}^{\mathrm{new}}$
is the new cost vector (with cost on extra edges), and $\bm{l}^{\mathrm{new}},\bm{u}^{\mathrm{new}}$
are the new capacity constraints. If an edge $e$ has only one direction,
we set $\bm{l}_{e}^{\mathrm{new}}=0$ and $\bm{u}_{e}^{\mathrm{new}} = \bm{u}_{e}$; otherwise, we orient the edge arbitrarily and set $-\bm{l}_{e}^{\mathrm{new}} = \bm{u}_{e}^{\mathrm{new}} = \bm{u}_{e}$.
Now, we bound the parameters $L,R,r$ in \cref{thm:IPM}. Clearly,
$L=\|\bm{c}^{\mathrm{new}}\|_{2}=O(Mm)$ and $R=\|\bm{u}^{\mathrm{new}}-\bm{l}^{\mathrm{new}}\|_{2}=O(Mm)$.
To bound $r$, we prove that there is an ``interior'' flow $\bm{f}$
in the polytope $\mathcal{F}$. We construct this $\bm{f}$ by $\bm{f}=\sum_{e\in E}\bm{f}^{(e)}$,
where $\bm{f}^{(e)}$ is a circulation passing through edges $e$ and $(t,s)$
with flow value $1/(4m)$. All such circulations exist because of
the removal preprocessing. This $\bm{f}$ satisfies the capacity constraints
because all capacities are at least $1$. This shows $r \geq \frac{1}{4m}$.
The RIPM in \cref{thm:IPM} runs the subroutine \textsc{Centering} twice.
In the first run, the constraint matrix is the incidence matrix of a new underlying graph,
constructed by making three copies of each edge in the original graph $G$.
Since copying edges does not affect planarity, and our data structures allow for duplicate edges,
we use the implementation given in \textsc{CenteringImpl} (\cref{algo:IPM_impl}) for both runs.
By the guarantees of \cref{thm:SlackMaintain} and \cref{thm:FlowMaintain},
we correctly maintain $\bm{s}$ and $\bm{f}$ at every step in \textsc{CenteringImpl},
and the requirements on $\overline{\vf}$ and $\overline{\vs}$ for the RIPM are satisfied.
Hence, \cref{thm:IPM} shows that we can find a circulation $\bm{f}$
such that $(\bm{c}^{\mathrm{new}})^{\top}\bm{f}\leq\mathrm{OPT}-\frac{1}{2}$
by setting $\varepsilon=\frac{1}{CM^{2}m^{2}}$ for some large constant
$C$ in \cref{alg:IPM_centering}.
Note that $\bm{f}$, when restricted to the original graph, is almost
a flow routing the required demand with flow value off by at most $\frac{1}{2nM}$. This
is because sending extra $k$ units of fractional flow from $s$ to
$t$ gives extra negative cost $\leq-knM$. Now we can round $\bm{f}$
to an integral flow $\bm{f}^{\mathrm{int}}$ with the same or better flow
value using no more than $\widetilde{O}(m)$ time \cite{kang2015flow}.
Since $\bm{f}^{\mathrm{int}}$ is integral with flow value at least the total demand
minus $\frac{1}{2}$, $\bm{f}^{\mathrm{int}}$ routes the demand completely.
Again, since $\bm{f}^{\mathrm{int}}$ is integral with cost at most $\mathrm{OPT}-\frac{1}{2}$,
$\bm{f}^{\mathrm{int}}$ must have the minimum cost.
Finally, we bound the runtime of one call to \textsc{CenteringImpl}.
We initialize the data structures for flow and slack by \textsc{Initialize}.
Here, the data structures are given the first IPM step direction $\bm{v}$ for preprocessing; the actual step is taken in the first iteration of the main while-loop.
At each step of \textsc{CenteringImpl}, we perform the implicit update of $\bm{f}$ and $\bm{s}$ using $\textsc{Move}$; we update $\mathbf{W}$ in the data structures using $\textsc{Reweight}$; and we construct the explicit approximations
$\overline{\vf}$ and $\overline{\vs}$ using $\textsc{Approximate}$;
each in the respective flow and slack data structures.
We return the true $(\bm{f},\bm{s})$ by $\textsc{Exact}$.
The total cost of \textsc{CenteringImpl} is dominated by $\textsc{Move}$, $\textsc{Reweight}$, and $\textsc{Approximate}$.
Since we call \textsc{Move}, \textsc{Reweight}, and \textsc{Approximate} in order in each step, and the runtimes of \textsc{Move} and \textsc{Reweight} are both dominated by that of \textsc{Approximate}, it suffices to bound the runtime of \textsc{Approximate} only.
\cref{thm:IPM} guarantees that there are $T=O(\sqrt{m}\log n\log(nM))$ total \textsc{Approximate} calls.
\cref{lem:cor_change} shows that at the $k$-th call, the number of coordinates changed in $\bm{w}$ and $\bm{v}$ is bounded by $K \stackrel{\mathrm{\scriptscriptstyle def}}{=} O(2^{2\ell_{k-1}} \log^2 m+2^{2\ell_{k-2}} \log^2 m)$, where $\ell_k$ is the largest integer $\ell$ with $k \equiv 0 \pmod{2^{\ell}}$, or equivalently, the number of trailing zeros in the binary representation of $k$.
\cref{thm:IPM} further guarantees we can apply \cref{thm:SlackMaintain} and \cref{thm:FlowMaintain} with parameter $\beta = O(1/\log m)$,
which in turn shows the amortized time for the $k$-th call is
\[
\widetilde{O}\left(\varepsilon_{\mathbf{P}}^{-2} \sqrt{m(K + N_{k-2^{\ell_k}})}\right),
\]
where $N_{k} \stackrel{\mathrm{\scriptscriptstyle def}}{=} 2^{2\ell_k} (\beta/\alpha)^2 \log^2 m = O(2^{2\ell_k} \log^2 m)$,
with $\alpha = O(1/\log m)$ and $\varepsilon_{\mathbf{P}}= O(1/\log m)$ as defined in \textsc{CenteringImpl}.
Observe that $K + N_{k-2^{\ell_k}} = O(N_{k-2^{\ell_k}})$. Now, summing over all $T$ calls, the total time is
\begin{align*}
O(\sqrt{m} \log m) \sum_{k=1}^T \sqrt{N_{k-2^{\ell_k}}} &=
O(\sqrt{m} \log^2 m) \sum_{k=1}^T 2^{\ell_{(k - 2^{\ell_k})}} \\
&= O(\sqrt{m} \log^2 m) \sum_{k'=1}^T 2^{\ell_{k'}}\sum_{k=1}^{T}[k-2^{\ell_k}=k'],
\intertext{
where we use $[\cdot]$ for the indicator function, i.e., $[k-2^{\ell_k}=k']=1$ if $k-2^{\ell_k}=k'$ is true and $0$ otherwise. As there are only $\log T$ different powers of $2$ in $[1, T]$, the count $\sum_{1\le k\le T}[k-2^{\ell_k}=k']$ is bounded by $O(\log T)$ for any $k'\in \{1,\dots, T\}$. Then the above expression is
}
&= O(\sqrt{m} \log^2 m \log T) \sum_{k'=1}^T 2^{\ell_{k'}}.
\intertext{
Since $\ell_k$ is the number of trailing zeros on $k$,
it can be at most $\log T$ for $k \leq T$. We again rearrange the summation by possible values of $\ell_{k'}$,
and note that there are at most $T/2^{i + 1}$ numbers between 1 and $T$ with $i$ trailing zeros, so}
\sum_{k'=1}^T 2^{\ell_{k'}} &= \sum_{i=0}^{\log T} 2^{i} \cdot T/2^{i+1} = O(T \log T).
\end{align*}
So the overall runtime is $O(\sqrt{m}\, T \log^2 m \log^2 T)$. Combined with \cref{thm:IPM}'s guarantee of $T=O(\sqrt{m}\log n\log(nM))$, we conclude the overall runtime is $\widetilde{O}(m \log M)$.
\end{proof}
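The counting in the proof above can be double-checked numerically. The following minimal Python sketch (the helper names are ours, not part of the pseudocode) computes $\ell_k$ as the number of trailing zeros of $k$ and verifies that $\sum_{k'=1}^{T} 2^{\ell_{k'}} = O(T\log T)$:

```python
def trailing_zeros(k: int) -> int:
    """Number of trailing zeros of k >= 1, i.e. the largest ell with k % 2**ell == 0."""
    # k & -k isolates the lowest set bit; its position is the trailing-zero count.
    return (k & -k).bit_length() - 1

def power_sum(T: int) -> int:
    """sum over k' = 1..T of 2^{ell_{k'}}."""
    return sum(2 ** trailing_zeros(k) for k in range(1, T + 1))
```

For $T = 2^{10}$, the exact value is $10 \cdot 512 + 1024 = 6144$, consistent with the $O(T \log T)$ bound: at most $T/2^{i+1}$ integers in $[1,T]$ have exactly $i$ trailing zeros, each contributing $2^i$.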
\section{Preliminaries}
\emph{We assume all matrices and vectors in an expression have matching dimensions.} That is, we will trivially pad matrices and vectors with zeros when necessary. This abuse of notation is unfortunately unavoidable, as we will be considering many submatrices and subvectors.
\paragraph{General Notations.}
An event holds with high probability if it holds with probability at least $1-n^{-c}$ for an arbitrarily large constant $c$. The choice of $c$ affects the guarantees by constant factors.
We use boldface lowercase variables to denote vectors, and boldface uppercase variables to denote matrices.
We use $\|\bm{v}\|_2$ to denote the $2$-norm of vector $\bm{v}$ and $\|\bm{v}\|_{\mathbf{M}}$ to denote $\sqrt{\bm{v}^\top \mathbf{M}\bm{v}}$.
For any vector $\bm{v}$ and scalar $x$, we define $\bm{v}+x$ to be the vector obtained by adding $x$ to each coordinate of $\bm{v}$
and similarly $\bm{v}-x$ to be the vector obtained by subtracting $x$ from each coordinate of $\bm{v}$.
We use $\bm{0}$ for all-zero vectors and matrices where dimensions are determined by context. We use $\bm{1}_{A}$ for the vector with value $1$ on coordinates in $A$ and $0$ everywhere else. We use $\mathbf{I}$ for the identity matrix and $\mathbf{I}_{S}$ for the identity matrix in $\mathbb{R}^{S \times S}$.
For any vector $\bm{x} \in \mathbb{R}^{S}$,
$\bm{x}|_{C}$ denotes the sub-vector of $\bm{x}$ supported on $C\subseteq S$;
\emph{more specifically, $\bm{x}|_C \in \mathbb{R}^S$, where $(\bm{x}|_C)_i = 0$ for all $i \notin C$.}
For any matrix $\mathbf{M} \in \mathbb{R}^{A\times B}$,
we use the convention that $\mathbf{M}_{C, D}$ denotes the sub-matrix of $\mathbf{M}$ supported on $C\times D$ where $C\subseteq A$ and $D\subseteq B$.
When $\mathbf{M}$ is not symmetric and only one subscript is specified, as in $\mathbf{M}_D$, this denotes the sub-matrix of $\mathbf{M}$ supported on $A \times D$.
To keep notations simple, $\mathbf{M}^{-1}$ will denote the inverse of $\mathbf{M}$ if it is an invertible matrix and the Moore-Penrose pseudo-inverse otherwise.
For two positive semi-definite matrices $\mathbf{L}_1$ and $\mathbf{L}_2$, we write $\mathbf{L}_1 \approx_t \mathbf{L}_2$ if $e^{-t} \mathbf{L}_1\preceq \mathbf{L}_2 \preceq e^{t} \mathbf{L}_1$, where $\mathbf{A}\preceq \mathbf{B}$ means $\mathbf{B}-\mathbf{A}$ is positive semi-definite.
Similarly, we define $\approx_t$ for scalars, that is, $x\approx_t y$ if $e^{-t}x\le y\le e^t x$.
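To illustrate the relation $\approx_t$, here is a small Python check for symmetric $2\times 2$ matrices, encoded as triples $(a, b, c)$ for the matrix with diagonal $a, c$ and off-diagonal $b$; the function names are illustrative only:

```python
import math

def is_psd_2x2(a, b, c):
    """PSD test for the symmetric 2x2 matrix [[a, b], [b, c]]:
    nonnegative trace and determinant (with a small float tolerance)."""
    return a + c >= -1e-12 and a * c - b * b >= -1e-12

def spectral_approx(L1, L2, t):
    """L1 ~=_t L2 iff e^{-t} L1 <= L2 <= e^{t} L1 in the Loewner order."""
    def minus(X, Y):
        return [X[0] - Y[0], X[1] - Y[1], X[2] - Y[2]]
    def scale(s, X):
        return [s * X[0], s * X[1], s * X[2]]
    lo = minus(L2, scale(math.exp(-t), L1))   # L2 - e^{-t} L1 must be PSD
    hi = minus(scale(math.exp(t), L1), L2)    # e^{t} L1 - L2 must be PSD
    return is_psd_2x2(*lo) and is_psd_2x2(*hi)
```

For instance, the Laplacians of a single edge with weights $1$ and $1.2$ satisfy $\approx_t$ exactly when $t \geq \ln 1.2$.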
\paragraph{Graphs and Trees.}
We define \emph{modified planar graph} to mean a graph obtained from a planar graph by adding $2$ new vertices $s, t$ and
any number of edges incident to the new vertices. We allow distinguishable parallel edges in our graphs.
We assume the input graph is connected.
We use $n$ for the number of vertices and $m$ for the number of edges in the input graph.
We will use $\bm{w}$ for the vector of edge weights in a graph. We define $\mathbf{W}$ as the diagonal matrix $\mathrm{diag}(\bm{w})$.
We define $\mathbf{L} = \mathbf{B}^\top \mathbf{W} \mathbf{B}$ to be the Laplacian matrix associated with an undirected graph $G$ with edge-vertex incidence matrix $\mathbf{B}$ and non-negative edge weights $\bm{w}$.
We at times use a graph and its Laplacian interchangeably.
For a subgraph $H \subseteq G$, we use $\mathbf{L}[H]$ to denote the weighted Laplacian on $H$, and $\mathbf{B}[H]$ to denote the incidence matrix of $H$.
For a tree $\mathcal{T}$, we write $H \in \mathcal{T}$ to mean $H$ is a node in $\mathcal{T}$.
We write $\mathcal{T}_H$ to mean the complete subtree of $\mathcal{T}$ rooted at $H$.
We say a node $A$ is an ancestor of $H$ if $H$ is in the subtree rooted at $A$, and $H \neq A$.
The \emph{level} of a node in a tree is defined so that leaf nodes have level 0, and the root has level $\eta$, where $\eta$ is the height of the tree. For interior nodes, the level is the length of the longest path from the node to a leaf.
By this definition, note that the level of a node and its child can differ by more than 1.
For binary tree data structures, we assume there is constant time access to each node.
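The level convention above (leaves at level $0$, interior level equal to the longest downward path to a leaf) can be sketched as follows; note in the example that the level of a node and its child differ by more than $1$. The tree encoding is illustrative only:

```python
def levels(children, root):
    """Level of each node: 0 at leaves, else 1 + max level of a child."""
    lvl = {}
    def dfs(u):
        if not children.get(u):   # no child list: u is a leaf
            lvl[u] = 0
            return 0
        lvl[u] = 1 + max(dfs(c) for c in children[u])
        return lvl[u]
    dfs(root)
    return lvl

# root r has children a (a leaf) and b; b has leaf children c, d.
tree = {"r": ["a", "b"], "b": ["c", "d"]}
lv = levels(tree, "r")
```

Here `r` has level $2$ while its child `a` has level $0$, a gap of $2$.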
\paragraph{IPM data structures.}
When we discuss the data structures in the context of the IPM, step 0 means the initialization step. For $k > 0$, step $k$ means the $k$-th iteration of the while-loop in \textsc{Centering} (\cref{alg:IPM_centering,algo:IPM_impl}); that is, it is the $k$-th time we update the current solutions.
For any vector or matrix $\bm{x}$ used in the IPM, we use $\bm{x}^{(k)}$ to denote the value of $\bm{x}$ at the end of the $k$-th step.
In all procedures in these data structures, we assume inputs are given by the set of changed coordinates and their values,
\emph{compared to the previous input}.
Similarly, we output a vector by the set of changed coordinates and their values, compared to the previous output.
This can be implemented by checking memory for changes.
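A minimal Python sketch of this diff-based interface (the class and its semantics are illustrative, not one of the paper's data structures): inputs arrive as a sparse map of changed coordinates, memory is checked for actual changes, and only changed output coordinates are reported.

```python
class DiffDoubler:
    """Maintains y = 2 * x; accepts and emits sparse diffs."""
    def __init__(self, n):
        self.x = [0] * n
        self.y = [0] * n

    def update(self, delta):
        """delta: {index: new value of x_i}; returns changed output coords."""
        out = {}
        for i, v in delta.items():
            if self.x[i] != v:        # check memory for an actual change
                self.x[i] = v
                self.y[i] = 2 * v
                out[i] = self.y[i]
        return out
```

Passing an unchanged coordinate produces no output, mirroring the "compared to the previous input" convention.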
We use \textsc{smallCaps} to denote function names and data structure classes, and \texttt{typewriterFont} to denote an instantiation of a data structure.
We say a data structure B \emph{extends} A in the object-oriented sense. Inside data structure B, we directly access functions and variables of A when the context is clear, or use the keyword \texttt{super}.
In any data structure where we write $\mathbf{L}^{-1} \bm{x}$ for some Laplacian $\mathbf{L}$ and vector $\bm{x}$, we imply the use of an SDD-solver as a black box running in nearly-linear time:
\begin{theorem}[\cite{spielman2004nearly, JambulapatiS21}]
\label{thm:laplacianSolver}
There is a randomized algorithm that, given any input $n$-vertex, $m$-edge graph
and any $\varepsilon\in (0,1)$, is an $\varepsilon$-approximate Laplacian system solver with runtime
$O(m\,\mathrm{poly}(\log \log n)\log(1/\varepsilon))$.
\end{theorem}
\section{Nested dissection and approximate Schur complements} \label{sec:apxsc}
This section lays the foundation for a recursive decomposition of the input graph.
Our goal is to set up the machinery necessary for approximating $\mathbf{P}_{\bm{w}} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mathbf{W}^{1/2} \mathbf{B} (\mathbf{B}^\top \mathbf{W} \mathbf{B})^{-1} \mathbf{B}^{\top} \mathbf{W}^{1/2}$ as needed in the robust IPM.
In particular, we are interested in the weighted Laplacian matrix $\mathbf{L} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mathbf{B}^\top \mathbf{W} \mathbf{B}$.
We begin with a discussion of nested dissection and the associated Schur complements.
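As a concrete warm-up, eliminating the interior vertex of a weighted $2$-edge path $a - b - c$ yields, via the Schur complement, a single edge of weight $w_1 w_2/(w_1+w_2)$ (the series rule for effective resistance). A small exact-arithmetic Python check; the function name is ours:

```python
from fractions import Fraction

def schur_eliminate_middle(w1, w2):
    """Schur complement of the weighted 2-edge path a - b - c onto {a, c}.
    L = [[w1, -w1, 0], [-w1, w1+w2, -w2], [0, -w2, w2]]; eliminate b."""
    w1, w2 = Fraction(w1), Fraction(w2)
    L = [[w1, -w1, Fraction(0)],
         [-w1, w1 + w2, -w2],
         [Fraction(0), -w2, w2]]
    # sc(L, {a, c}) = L_FF - L_FB * L_BB^{-1} * L_BF with B = {b}
    inv_bb = 1 / L[1][1]
    return [[L[i][j] - L[i][1] * inv_bb * L[1][j] for j in (0, 2)]
            for i in (0, 2)]
```

The result is again a Laplacian (rows sum to zero), which is the key structural fact exploited by nested dissection.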
\input{sc_basic_schur}
\input{sc_separator_tree}
\input{sc_Linverse}
\input{sc_dynamicSC}
\section{Maintaining the implicit representation}
In this section, we give a general data structure \textsc{MaintainRep}.
At a high level, \textsc{MaintainRep} implicitly maintains a vector $\bm{x}$ throughout the IPM,
by explicitly maintaining a vector $\bm{y}$, and implicitly maintaining a \emph{tree operator} $\mathbf{M}$ and a vector $\bm{z}$,
with $\bm{x} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \bm{y} + \mathbf{M} \bm{z}$.
\textsc{MaintainRep} supports the IPM operations \textsc{Move} and \textsc{Reweight} as follows:
To move in step $k$ with direction $\bm{v}^{(k)}$ and step size $\alpha^{(k)}$,
the data structure computes some $\bm{z}^{(k)}$ from $\bm{v}^{(k)}$ and
updates $\bm{x} \leftarrow \bm{x} + \mathbf{M} (\alpha^{(k)} \bm{z}^{(k)})$.
To reweight with new weights $\bm{w}^{(\mathrm{new})}$ (which does not change the value of $\bm{x}$),
the data structure computes $\mathbf{M}^{{(\mathrm{new})}}$ using $\bm{w}^{(\mathrm{new})}$,
updates $\mathbf{M} \leftarrow \mathbf{M}^{{(\mathrm{new})}}$, and updates $\bm{y}$ to offset the change in $\mathbf{M} \bm{z}$.
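A toy Python model of this invariant, with the tree operator collapsed to a single scalar purely for illustration (nothing here is the paper's actual data structure), shows why \textsc{Reweight} must offset $\bm{y}$:

```python
class ToyMaintainRep:
    """Maintains x = y + M * z with M a scalar standing in for the tree operator."""
    def __init__(self, M, x_init):
        self.M = M
        self.y = x_init   # initially z = 0, so x = y = x_init
        self.z = 0.0

    def move(self, alpha, z_step):
        self.z += alpha * z_step          # x implicitly gains M * alpha * z_step

    def reweight(self, M_new):
        # x must not change: subtract the change in M*z from the offset y.
        self.y -= (M_new - self.M) * self.z
        self.M = M_new

    def exact(self):
        return self.y + self.M * self.z
```

After a reweight, `exact()` returns the same value as before, while subsequent moves use the new operator.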
In \cref{subsec:maintain_z}, we define $\bm{z}^{(k)}$ and show how to maintain $\bm{z} = \sum_{i=1}^k \bm{z}^{(i)}$ efficiently.
In \cref{subsec:tree_operator}, we define tree operators.
Finally in \cref{subsec:maintain_rep_impl}, we implement \textsc{MaintainRep} for a general tree operator $\mathbf{M}$.
Our goal is for this data structure to maintain the updates to the slack and flow solutions at every IPM step.
Recall at step $k$, we want to update the slack solution by $\bar{t} h \mathbf{W}^{1/2} \widetilde{\mathbf{P}}_{\bm{w}} \bm{v}^{(k)}$
and the partial flow solution by $ h \mathbf{W}^{-1/2} \widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}^{(k)}$.
In later sections, we define specific tree operators $\mathbf{M}^{{(\mathrm{slack})}}$ and $\mathbf{M}^{{(\mathrm{flow})}}$ so that the slack and flow updates can be written as $\mathbf{M}^{{(\mathrm{slack})}} (\bar{t} h \bm{z}^{(k)})$ and $\mathbf{M}^{{(\mathrm{flow})}} (h \bm{z}^{(k)})$ respectively.
This then allows us to use two copies of \textsc{MaintainRep} to maintain the solutions throughout the IPM.
To start, recall the information stored in the \textsc{DynamicSC} data structure: at every node $H$ we have Laplacian $\mathbf{L}^{(H)}$.
In the previous section, we defined matrices $\widetilde{\mathbf{\Gamma}}$ and $\mathbf{\Pi}^{(i)}$'s as functions of the $\mathbf{L}^{(H)}$'s, in order to approximate $\mathbf{L}^{-1}$.
\textsc{MaintainRep} will contain a copy of the \textsc{DynamicSC} data structure; therefore, the remainder of this section will freely refer to $\widetilde{\mathbf{\Gamma}}$ and $\mathbf{\Pi}^{(0)}, \cdots, \mathbf{\Pi}^{(\eta-1)}$.
\input{maintain_z}
\input{tree_operator}
\subsection{Proof of \crtcref{thm:maintain_representation}} \label{subsec:maintain_rep_impl}
Finally, we give the data structure for maintaining an implicit representation of the form $\bm{y} + \mathbf{A}thbf{M} \bm{z}$ throughout the IPM.
For an instantiation of this data structure, there is exactly one call to \textsc{Initialize} at the very beginning, and one call to \textsc{Exact} at the very end.
Otherwise, each step of the IPM consists of one call to \textsc{Reweight} followed by one call to \textsc{Move}.
Note that this data structure \emph{extends} \textsc{MaintainZ} in the object-oriented programming sense.
\MaintainRepresentation*
\begin{algorithm}
\caption{Implicit representation maintenance}\label{alg:maintain_representation}
\begin{algorithmic}[1]
\State \textbf{data structure} \textsc{MaintainRep} \textbf{extends} \textsc{MaintainZ}
\State \textbf{private: member}
\State \hspace{4mm} $\mathcal{T}$: separator tree
\State \hspace{4mm} $\bm{y} \in \mathbf{A}thbb{R}^{m}$: offset vector
\State \hspace{4mm} $\mathbf{M}$: \emph{instructions to compute the tree operator} $\mathbf{M} \in \mathbb{R}^{m \times n}$
\State \LeftComment \; $\bm{z} = c {\bm{z}^{(\mathrm{step})}} + {\bm{z}^{(\mathrm{sum})}}$ maintained by \textsc{MaintainZ}, accessible in this data structure
\State \LeftComment \; \texttt{DynamicSC}: an accessible instance of \textsc{DynamicSC} maintained by \textsc{MaintainZ}
\State
\Procedure{Initialize}{$G,\mathcal{T}, \mathbf{M}, \bm{v}\in \mathbb{R}^{m}, \bm{w}\in\mathbb{R}_{>0}^{m}, \bm{x}^{(\mathrm{init})} \in \mathbb{R}^{m}, \varepsilon_{\mathbf{sc}}>0$}
\State $\mathbf{M} \leftarrow \mathbf{M}$ \Comment{initialize the instructions to compute $\mathbf{M}$}
\State $\texttt{Super}.\textsc{Initialize}(G, \mathcal{T}, \bm{v}, \bm{w}, \varepsilon_{\mathbf{sc}})$ \Comment initialize $\bm{z}$
\State $\bm{y}\leftarrow \bm{x}^{(\mathrm{init})}$ \label{line:init_y}
\EndProcedure
\State
\Procedure{Reweight}{$\bm{w}^{(\mathrm{new})}$}
\State Let $\mathbf{M}^{(\mathrm{old})}$ represent the current tree operator $\mathbf{M}$
\State $\texttt{Super}.\textsc{Reweight}(\bm{w}^{(\mathrm{new})})$ \Comment{update representation of $\bm{z}$ and \texttt{DynamicSC}}\label{line:super_reweight}
\State \Comment $\mathbf{M}$ is updated as a result of reweight in \texttt{DynamicSC}
\State $\Delta \mathbf{M} \leftarrow \mathbf{M} - \mathbf{M}^{(\mathrm{old})}$ \Comment{$\Delta \mathbf{M}$ is represented implicitly}
\State $\bm{y}\leftarrow \bm{y} - \textsc{ComputeMz}(\Delta \mathbf{M}, c{\bm{z}^{(\mathrm{step})}}+{\bm{z}^{(\mathrm{sum})}})$ \Comment{\cref{algo:computeMz}} \label{line:update_y}
\EndProcedure
\State
\Procedure{Move}{$\alpha, \bm{v}^{(\mathrm{new})}$}
\State $\texttt{Super}.\textsc{Move}(\alpha, \bm{v}^{(\mathrm{new})})$
\EndProcedure
\State
\Procedure{Exact}{$ $}
\State \Return $\bm{y} + \textsc{ComputeMz}(\mathbf{M}, c{\bm{z}^{(\mathrm{step})}}+{\bm{z}^{(\mathrm{sum})}})$ \Comment{\cref{algo:computeMz}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{proof}
First, we discuss how $\mathbf{M}$ is stored in the data structure:
Recall $\mathbf{M}$ is represented implicitly by a collection of edge operators and leaf operators on the separator tree $\mathcal{T}$,
so that each edge operator is stored at a corresponding node of $\mathcal{T}$, and each leaf operator is stored at a corresponding leaf node of $\mathcal{T}$.
However, the data structure \emph{does not} store any edge or leaf operator matrix explicitly.
We make a key assumption that each edge and leaf operator is computable using $O(1)$ of the $\mathbf{L}^{(H)}$ matrices from \textsc{DynamicSC}. This will be true for the slack and flow operators we define.
As a result, to store an edge or leaf operator at a node, we simply store \emph{pointers to the matrices from \textsc{DynamicSC}} required in the definition, and an $O(1)$-sized instruction for how to compute the operator.
The computation time is proportional to the size of the matrices in the definitions, but crucially the instructions have only $O(1)$ size.
Now, we prove the correctness and runtime of each procedure separately.
Observe that the invariants claimed in the theorem are maintained correctly if each procedure is implemented correctly.
\paragraph{\textsc{Initialize}:}
\cref{line:init_y} sets $\bm{y}\leftarrow \bm{x}^{(\mathrm{init})}$, and $\texttt{Super}.\textsc{Initialize}$ sets $\bm{z}\leftarrow \bm{0}$.
So we have $\bm{x}=\bm{y}+\mathbf{M}\bm{z}$ at the end of initialization. Furthermore, the initialization of $\bm{z}$ correctly sets ${\bm{z}^{(\mathrm{step})}}$ in terms of $\bm{v}$.
By \cref{thm:maintain_z}, \texttt{Super}.\textsc{Initialize} takes $\widetilde{O}(\varepsilon_{\mathbf{sc}}^{-2}m)$ time.
Storing the implicit representation of $\mathbf{M}$ takes $O(m)$ time.
\paragraph{\textsc{Reweight}:}
By \cref{thm:maintain_z}, $\texttt{Super}.\textsc{Reweight}$ updates its current weight and \texttt{DynamicSC}, and updates ${\bm{z}^{(\mathrm{step})}}$ correspondingly to maintain the invariant, while not changing the value of $\bm{z}$.
Because $\mathbf{M}$ is stored by instructions, no explicit update to $\mathbf{M}$ is required. \cref{line:update_y} updates $\bm{y}$ to zero out the changes to $\mathbf{M}\bm{z}$.
The instructions for computing $\Delta \mathbf{M}$ require the Laplacians from \texttt{DynamicSC} before and after the update in \cref{line:super_reweight}.
For this, we monitor the updates of \texttt{DynamicSC} and store the old and new values.
The runtime of this is bounded by the runtime of updating \texttt{DynamicSC}, which is in turn included in the runtime for \texttt{Super}.\textsc{Reweight}.
Let $K$ upper bound the number of coordinates changed in $\bm{w}$ and the number of edge and leaf operators changed in $\mathbf{M}$. Then $\texttt{Super}.\textsc{Reweight}$ takes $\widetilde{O}(\varepsilon_{\mathbf{sc}}^{-2}\sqrt{mK})$ time,
and $\textsc{ComputeMz}(\Delta \mathbf{M}, \bm{z})$ takes $O(T(K))$ time.
Thus, the total runtime is $\widetilde{O}(\varepsilon_{\mathbf{sc}}^{-2} \sqrt{mK} +T(m))$.
\paragraph{\textsc{Move}:}
The runtime and correctness follow from \cref{thm:maintain_z}.
\paragraph{\textsc{Exact}:}
\textsc{ComputeMz} computes $\mathbf{M}\bm{z}$ correctly in $O(T(m))$ time by \cref{cor:exacttime}. Adding the result to $\bm{y}$ takes $O(m)$ time and gives the correct value of $\bm{x}=\bm{y}+\mathbf{M}\bm{z}$. Thus, \textsc{Exact} returns $\bm{x}$ in $O(T(m))$ time.
\end{proof}
\section{Maintaining vector approximation}
\label{sec:sketch}
Recall at every step of the IPM, we want to maintain approximate vectors $\overline{\bm{s}}, \overline{\bm{f}}$ so that
\begin{align*}
\norm{ \mathbf{W}^{-1/2} (\overline{\bm{f}} - \bm{f})}_\infty \leq \delta \quad \text{and} \quad
\norm{ \mathbf{W}^{1/2} (\overline{\bm{s}}- \bm{s})}_\infty \leq \delta'
\end{align*}
for some additive error tolerances $\delta$ and $\delta'$.
In the previous section, we showed how to maintain some vector $\bm{x}$ implicitly as $\bm{x} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \bm{y} + \mathbf{M} \bm{z}$ throughout the IPM, where $\bm{x}$ should represent $\bm{s}$ or part of $\bm{f}$.
In this section, we give a data structure to efficiently maintain an approximation $\overline{\bm{x}}$ to the vector $\bm{x}$ from \textsc{MaintainRep}, so that at every IPM step,
\[
\norm{\mathbf{D}^{1/2} \left(\overline{\bm{x}} - \bm{x}\right)}_\infty \leq \delta,
\]
where $\mathbf{D}$ is a dynamic diagonal scaling matrix. (It will be $\mathbf{W}^{-1}$ for the flow and $\mathbf{W}$ for the slack.)
In \cref{subsec:sketch_vector_to_change}, we reduce the problem of maintaining $\overline{\bm{x}}$ to detecting coordinates in $\bm{x}$ with large changes.
In \cref{subsec:sketch_change_to_sketch}, we detect coordinates of $\bm{x}$ with large changes using a sampling technique on a binary tree, where Johnson-Lindenstrauss sketches of subvectors of $\bm{x}$ are maintained at each node of the tree.
In \cref{subsec:sketch_maintenance}, we show how to compute and maintain the necessary collection of JL-sketches on the separator tree $\mathcal{T}$; in particular, we do this efficiently with only an implicit representation of $\bm{x}$.
Finally, we put the three parts together to prove \cref{thm:VectorTreeMaintain}.
We use the superscript $^{(k)}$ to denote the variable at the end of the $k$-th step of the IPM; that is, $\mathbf{D}^{(k)}$ and $\bm{x}^{(k)}$ are $\mathbf{D}$ and $\bm{x}$ at the end of the $k$-th step. Step 0 is the state of the data structure immediately after initialization.
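For intuition on the sketching primitive used below: a Johnson-Lindenstrauss projection with random $\pm 1/\sqrt{w}$ entries estimates $\|\bm{x}\|_2^2$ by $\|\mathbf{\Phi}\bm{x}\|_2^2$. A minimal, self-contained Python illustration of the technique (not the paper's implementation; helper names are ours):

```python
import random

def jl_sketch(x, w, rng):
    """Apply w rows of a random +-1/sqrt(w) projection to the vector x."""
    scale = w ** -0.5
    return [sum(scale * rng.choice((-1.0, 1.0)) * xi for xi in x)
            for _ in range(w)]

def norm_sq_estimate(sketch):
    """||Phi x||_2^2, an unbiased estimate of ||x||_2^2."""
    return sum(s * s for s in sketch)
```

For a vector with a single nonzero unit coordinate, every row of the sketch is exactly $\pm 1/\sqrt{w}$, so the estimate is exact; for general vectors it concentrates around $\|\bm{x}\|_2^2$ as $w$ grows.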
\input{sketch_sampling}
\input{sketch_Mz}
\subsection{Proof of \crtcref{thm:VectorTreeMaintain}}
\label{subsec:sketch_final_proof}
We combine the previous three subsections for the overall approximation procedure.
It is essentially \textsc{AbstractMaintainApprox} in \cref{algo:maintain-vector},
with the abstractions replaced by a data structure implementation;
we do not provide separate pseudocode.
\vectorTreeMaintain*
\begin{proof}
The data structure \textsc{AbstractMaintainApprox} in \cref{algo:maintain-vector}
performs the correct vector approximation maintenance; however, it is not completely implemented.
\textsc{MaintainApprox} simply replaces the abstractions with a concrete implementation using the data structure \textsc{MaintainSketch} from \cref{algo:maintain-sketch}.
First, for notation purposes, let $\bm{z} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} c {\bm{z}^{(\mathrm{step})}} + {\bm{z}^{(\mathrm{sum})}}$, and let $\bm{x} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \bm{y} + \mathbf{M} \bm{z}$, so that at step $k$, the \textsc{Approximate} procedure receives $\bm{x}^{(k)}$ (in implicit form) as input and returns $\overline{\bm{x}}$.
Let $\ell \in \{1, \dots, O(\log m)\}$. We define a new dynamic vector $\bm{x}_{\ell}$ \emph{symbolically}, which is represented at each step $k$ for $k \geq 2^\ell$ by
\[
\bm{x}^{(k)}_{\ell} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \bm{y}^{(k)}_{\ell} + \mathbf{M}^{(k)}_{\ell} \bm{z}^{(k)}_{\ell},
\]
where the new tree operator $\mathbf{M}_{\ell}$ at step $k$ is given by
\begin{itemize}
\item ${\mathbf{M}^{(k)}_{\ell}}_{(H, P)}=\mathrm{diag}\left(\mathbf{M}^{(k)}_{(H, P)}, \mathbf{M}^{(k-2^\ell)}_{(H, P)}\right)$ for each child-parent edge $(H,P)$ in $\mathcal{T}$,
\item ${\mathbf{J}^{(k)}_{\ell}}_H=\overline{\mathbf{D}}^{1/2}_{E(H), E(H)}\left[\mathbf{J}^{(k)}_H~\mathbf{J}^{(k-2^\ell)}_H\right]$ for each leaf node $H \in \mathcal{T}$,
\end{itemize}
where $\overline{\mathbf{D}}$ is the diagonal matrix defined in \textsc{FindLargeCoordinates}, with $\overline{\mathbf{D}}_{i,i} = \mathbf{D}^{(k)}_{i,i}$ at step $k$ if $\overline{\bm{x}}_i$ has not been updated after step $k-2^\ell$, and zero otherwise.
At step $k$, the vector $\bm{y}_{\ell}$ is given by $\bm{y}_{\ell}^{(k)}= \overline{\mathbf{D}}^{1/2} \left( \bm{y}^{(k)}-\bm{y}^{(k-2^\ell)}\right)$,
and $\bm{z}_{\ell}$ by $\bm{z}_{\ell}^{(k)} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \left[\bm{z}^{(k)} ~ {-\bm{z}^{(k-2^\ell)}}\right]^\top$.
Then, at each step $k$ with $k \geq 2^\ell$, we have
\begin{align} \label{eq:x_ell^k}
\bm{x}_{\ell}^{(k)} &\stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \bm{y}_{\ell}^{(k)} + \mathbf{M}_{\ell}^{(k)} \bm{z}_{\ell}^{(k)} \\
&= \left(\overline{\mathbf{D}}^{1/2}\bm{y}^{(k)} + \overline{\mathbf{D}}^{1/2}\mathbf{M}^{(k)}\bm{z}^{(k)}\right) - \left(
\overline{\mathbf{D}}^{1/2} \bm{y}^{(k-2^\ell)} +\overline{\mathbf{D}}^{1/2}\mathbf{M}^{(k-2^\ell)} \bm{z}^{(k-2^\ell)} \right) \nonumber \\
&= \overline{\mathbf{D}}^{1/2} (\bm{x}^{(k)} - \bm{x}^{(k - 2^\ell)}) \nonumber.
\end{align}
Note this is precisely the vector $\bm{q}$ for a fixed $\ell$ in \textsc{FindLargeCoordinates} in \cref{algo:maintain-vector}.
It is straightforward to see that $\mathbf{M}_{\ell}$ indeed satisfies the definition of a tree operator.
Furthermore, $\mathbf{M}_{\ell}$ has the same complexity as $\mathbf{M}$.
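As a scalar sanity check of the identity above, with every block collapsed to a single number purely for illustration, and under our reading of the construction (a $\overline{\mathbf{D}}^{1/2}$ scaling at the leaf and a minus sign on the second half of $\bm{z}_\ell$):

```python
import math

# All quantities are scalars standing in for one diagonal entry / one block.
Dbar = 0.25                      # one diagonal entry of Dbar
Mk, Mp = 2.0, 3.0                # tree operator "blocks" at steps k and k - 2^ell
zk, zp = 5.0, 7.0
yk, yp = 1.0, -2.0

xk = yk + Mk * zk                # x^(k)
xp = yp + Mp * zp                # x^(k - 2^ell)

rt = math.sqrt(Dbar)
y_ell = rt * (yk - yp)
# block-diagonal operator applied to (zk, -zp); the "leaf" sums the two
# halves after scaling by Dbar^{1/2}:
M_ell_z = rt * (Mk * zk) + rt * (Mp * (-zp))
x_ell = y_ell + M_ell_z          # should equal Dbar^{1/2} (x^(k) - x^(k-2^ell))
```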
\textsc{MaintainApprox} will contain $O(\log m)$ copies of the \textsc{MaintainSketch} data structures in total, where the $\ell$-th copy sketches $\bm{x}_{\ell}$ as it changes throughout the IPM algorithm.
We now describe each procedure in words, and then prove their correctness and runtime.
\paragraph{\textsc{Initialize}$(\mathcal{T}, \mathbf{M}, c, {\bm{z}^{(\mathrm{step})}}, {\bm{z}^{(\mathrm{sum})}}, \bm{y}, \mathbf{D}, \rho, \delta)$:}
This procedure implements the initialization of \textsc{AbstractMaintainApprox},
where the dynamic vector $\bm{x}$ to be approximated is represented by $\bm{x} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \bm{y} + \mathbf{M} (c {\bm{z}^{(\mathrm{step})}} + {\bm{z}^{(\mathrm{sum})}})$.
The initialization steps described in \cref{algo:maintain-vector} take $O(w m)$ time.
Let $\mathbf{\Phi}$ denote the JL-sketching matrix.
We initialize two copies of the \textsc{MaintainSketch} data structure, $\texttt{ox\_cur}$ and $\texttt{ox\_prev}$. At step $k$, $\texttt{ox\_cur}$ will maintain sketches of $\mathbf{\Phi} \bm{x}^{(k)}$, and $\texttt{ox\_prev}$ will maintain sketches of $\mathbf{\Phi} \bm{x}^{(k-1)}$.
(The latter is initialized at step $1$, but we consider it as part of initialization.)
In addition, for each $0 \leq \ell \leq O(\log m)$, we initialize a copy $\texttt{sketch}_{\ell}$ of \textsc{MaintainSketch}.
These are needed for the implementation of $\textsc{FindLargeCoordinates}(\ell)$ in \textsc{Approximate}.
Specifically, at step $k = 2^\ell$ of the IPM, we initialize $\texttt{sketch}_{\ell}$ by calling
$\texttt{sketch}_{\ell}.\textsc{Initialize}(\mathcal{T}, \mathbf{\Phi}, \mathbf{M}_{\ell}^{(k)}, \bm{z}_{\ell}^{(k)}, \bm{y}_{\ell}^{(k)})$.
(Although this occurs at step $k > 0$, we charge its runtime according to its function as part of initialization.)
The total initialization time is $O(wm\log m)=O(m\eta^2\log m\log (\frac{m}{\rho}))$ by \cref{lem:maintain-sketch}.
By the existing pseudocode in \cref{algo:maintain-vector}, it correctly initializes $\overline{\bm{x}} \leftarrow \bm{x}$.
\paragraph{\textsc{Approximate}$(\mathbf{M}^{(\mathrm{new})}, c^{(\mathrm{new})}, {\bm{z}^{(\mathrm{step})}}^{(\mathrm{new})}, {\bm{z}^{(\mathrm{sum})}}^{(\mathrm{new})}, \bm{y}^{(\mathrm{new})}, \mathbf{D}^{(\mathrm{new})})$:}
This procedure implements \textsc{Approximate} in \cref{algo:maintain-vector}.
We consider when the current step is $k$ below.
First, we update the sketch data structures $\texttt{sketch}_{\ell}$ for each $\ell$
by calling $\texttt{sketch}_{\ell}.\textsc{Update}$.
Recall at step $k$, $\texttt{sketch}_{\ell}$ maintains sketches for the vector $\bm{x}_{\ell}^{(k)} = \overline{\mathbf{D}}^{1/2} (\bm{x}^{(k)} - \bm{x}^{(k-2^\ell)})$,
although the actual representation in $\texttt{sketch}_{\ell}$ of the vector $\bm{x}_{\ell}$ is given by $\bm{x}_{\ell} = \bm{y}_{\ell} + \mathbf{M}_{\ell} \bm{z}_{\ell}$ as defined in \cref{eq:x_ell^k}.
Next, we execute the pseudocode given in \textsc{Approximate} in \cref{algo:maintain-vector}:
To update $\overline{\bm{x}}_e$ to $\bm{x}^{(k-1)}_e$ for a single coordinate (\cref{line:D-induced-ox-change} of \cref{algo:maintain-vector}),
we find the leaf node $H$ containing the edge $e$, and call $\texttt{ox\_prev}.\textsc{Query}(H)$.
This returns the subvector $\bm{x}^{(k-1)}|_{E(H)}$, from which we can make the assignment to $\overline{\bm{x}}_e$.
To update $\overline{\bm{x}}_e$ to $\bm{x}^{(k)}_e$ for single coordinates (\cref{line:update-ox} of \cref{algo:maintain-vector}),
we do the same as above, except using the data structure $\texttt{ox\_cur}$.
In the subroutine \textsc{FindLargeCoordinates}$(\ell)$,
the vector $\bm{q}$ defined in the pseudocode is exactly $\bm{x}_\ell^{(k)}$.
We get the value of $\mathbf{\Phi}_{E(u)} \bm{q}$ at a node $u$ by calling $\texttt{sketch}_{\ell}.\textsc{Estimate}(u)$,
and we get the value of $\bm{q}|_{E(u)}$ at a leaf node $u$ by calling
$\texttt{sketch}_{\ell}.\textsc{Query}(u)$.
\subparagraph{Number of coordinates changed in $\overline{\bm{x}}$ during \textsc{Approximate}.}
In \cref{line:D-induced-ox-change} of \textsc{Approximate} in \cref{algo:maintain-vector},
$\overline{\bm{x}}$ is updated in every coordinate $e$ where $\mathbf{D}_e$ differs compared to the previous step.
Next, the procedure collects a set of coordinates for which we update $\overline{\bm{x}}$,
by calling \textsc{FindLargeCoordinates}$(\ell)$ for each $0 \leq \ell \leq \ell_k$,
where $\ell_k$ is defined to be the number of trailing zeros in the binary representation of $k$.
(These are exactly the values of $\ell$ such that $k \equiv 0 \mod 2^\ell$.)
In each call of \textsc{FindLargeCoordinates}$(\ell)$,
there are $O(2^{2\ell} (\beta/\delta)^2 \log^2 m \log(m/\rho))$ iterations of the outer for-loop,
and $O(1)$ iterations of the inner while-loop by the assumption of $\|\bm{x}^{(k+1)}-\bm{x}^{(k)}\|_{\mathbf{D}^{(k+1)}}\leq\beta$ and
\cref{lem:change-detection}.
Each iteration of the while-loop adds an $O(1)$-sized set to the collection $I$ of candidate coordinates.
So overall, \textsc{FindLargeCoordinates}$(\ell)$ returns a set of size
$O(2^{2\ell} (\beta/\delta)^2 \log^2 m \log(m/\rho))$.
Summing up over all calls of $\textsc{FindLargeCoordinates}$, the total size of the set of coordinates to update is
\begin{equation} \label{eq:N_k-defn}
N_k \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \sum_{\ell=0}^{\ell_k} O(2^{2\ell}(\beta/\delta)^{2}\log^{2}m\log(m/\rho))= O(2^{2\ell_{k}}(\beta/\delta)^{2}\log^{2}m\log(m/\rho)).
\end{equation}
We define $\ell_0=N_0=0$ for convenience.
\subparagraph{Changes to sketching data structures.}
Let $\mathcal{S}^{(k)}$ denote the set of nodes $H$ where one of (when applicable) $\mathbf{M}_{(H,P)}$, $\mathbf{J}_H$, ${\bm{z}^{(\mathrm{step})}}|_{F_H}$, ${\bm{z}^{(\mathrm{sum})}}|_{F_H}$, $\bm{y}|_{F_H}$, $\mathbf{D}_{E(H)}$ changes during step $k$.
(They are entirely induced by changes in $\bm{v}$ and $\bm{w}$ at step $k$.)
We store $\mathcal{S}^{(k)}$ for each step.
For each $\ell$, the diagonal matrix $\overline{\mathbf{D}}$ is the same as $\mathbf{D}$,
except $\overline{\mathbf{D}}_{ii}$ is temporarily zeroed out for $2^\ell$ steps after $\overline{\bm{x}}_i$ changes at a step.
Thus, the number of coordinate changes to $\overline{\mathbf{D}}$ at step $k$ is the number of changes to $\mathbf{D}$, plus $N_{k-1}+N_{k-2^\ell}$:
$N_{k-1}$ entries are zeroed out because of updates to $\overline{\bm{x}}_i$ in step $k-1$,
and the $N_{k-2^\ell}$ entries that were zeroed out in step $k-2^\ell+1$ because of the update to $\overline{\bm{x}}_i$ in step $k-2^\ell$ are restored.
Hence, at step $k$,
the updates to $\texttt{sketch}_{\ell}$ are induced by the updates to $\overline{\mathbf{D}}$, and the updates to $\bm{x}$ at step $k$ and at step $k-2^\ell$.
The updates to the two $\bm{x}$ terms are restricted to the nodes $\mathcal{S}^{(k-2^\ell)} \cup \mathcal{S}^{(k)}$ in $\mathcal{T}$ for \cref{algo:maintain-sketch}.
Updates to $\texttt{ox\_cur}$ and $\texttt{ox\_prev}$ can be similarly analyzed.
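The bookkeeping for $\overline{\mathbf{D}}$ can be simulated directly. In the following Python sketch, we adopt the convention suggested by the accounting above (an assumption on the off-by-one detail): an entry updated at step $s$ is masked at steps $s+1,\dots,s+2^\ell-1$ and restored at step $s+2^\ell$.

```python
def dbar(D, last_update, k, ell):
    """One diagonal of Dbar at step k: D_ii, unless coordinate i was
    updated within the last 2^ell - 1 steps, in which case it is 0."""
    return [0 if k - 2 ** ell < last_update[i] < k else D[i]
            for i in range(len(D))]
```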
\subparagraph{Runtime of \textsc{Approximate}.}
First, we consider the time to update each $\texttt{sketch}_{\ell}$:
At step $k$, the analysis above combined with \cref{lem:maintain-sketch} shows that $\texttt{sketch}_{\ell}.\textsc{Update}$ with the new values of the appropriate variables runs in time
\begin{align*}
&\phantom{{}={}} O\left(w \cdot T \left(\eta \cdot (|\mathcal{S}^{(k)}|+|\mathcal{S}^{(k-2^\ell)}|+N_{k-1}+N_{k-2^\ell} ) \right) \right) \\
&\leq w \cdot O\left(T (\eta \cdot (|\mathcal{S}^{(k)}| +N_{k-1}+N_{k-2^\ell} )) \right) + w \cdot O \left( T(\eta \cdot |\mathcal{S}^{(k-2^\ell)}| )\right),
\end{align*}
where we use the concavity of $T$.
The second term can be charged to step $k-2^\ell$.
Thus, the amortized time cost for $\texttt{sketch}_{\ell}.\textsc{Update}$ at step $k$ is
\[
w \cdot O(T(\eta \cdot (|\mathcal{S}^{(k)}|+N_{k-1} + N_{k-2^{\ell_k}}))).
\]
Summing over all $0 \leq \ell \leq O(\log m)$ for the different copies of $\texttt{sketch}_{\ell}$, we get an extra $O(\log m)$ factor in the overall update time.
Similarly, we can update $\texttt{ox\_prev}$ and $\texttt{ox\_cur}$ in the same amortized time.
Next, we consider the runtime for \cref{line:D-induced-ox-change} in \cref{algo:maintain-vector}:
The number of coordinate accesses to $\bm{x}^{(k-1)}$ is $|\{i: \mathbf{D}_{ii}^{(k)}-\mathbf{D}_{ii}^{(k-1)} \neq 0\}| = O(|\mathcal{S}^{(k)}|)$. Each coordinate is computed by calling $\texttt{ox\_prev}.\textsc{Query}$, and by \cref{lem:maintain-sketch}, the total time for these updates is $w \cdot O(T(\eta \cdot |\mathcal{S}^{(k)}|))$.
Finally, we analyze the remainder of the procedure, which consists of \textsc{FindLargeCoordinates}($\ell$) for each $0 \leq \ell \leq \ell_k$ and the subsequent updates to entries of $\overline{\bm{x}}$:
For each \textsc{FindLargeCoordinates}$(\ell)$ call, by \cref{lem:change-detection}, $N_{k,\ell} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \Theta(2^{2\ell} (\beta/\delta)^2 \log^2 m \log (m/\rho))$ sampling paths are explored in the $\texttt{sketch}_{\ell}$ data structure, where each sampling path corresponds to one iteration of the while-loop.
We calculate $\|\mathbf{\Phi}_{E(H)} \bm{x}_{\ell} \|_{2}^{2}$ at a node $H$ in the sampling path
using $\texttt{sketch}_{\ell}.\textsc{Estimate}(H)$, and at a leaf node $H$
using $\texttt{sketch}_{\ell}.\textsc{Query}(H)$.
The total time is $w\cdot O(T(\eta \cdot N_{k,\ell}))$ by \cref{lem:maintain-sketch}.
To update a coordinate $i \in E(H)$ that was identified to be large, we can refer to the output of $\texttt{sketch}_{\ell}.\textsc{Query}(H)$ from the sampling step.
Summing over each $0 \leq \ell \leq \ell_k$, we see that the total time for the \textsc{FindLargeCoordinates} calls and the subsequent updates to $\overline{\vx}$ is
\[
\sum_{\ell=0}^{\ell_k} w \cdot O(T(\eta \cdot N_{k,\ell})) =
w \cdot O(T(\eta \cdot N_{k})),
\]
where $N_k$ is the number of coordinates that are updated in $\overline{\vx}$ as shown in \cref{eq:N_k-defn}.
Combined with the update times, we conclude that the total amortized cost of \textsc{Approximate} at step $k$ is
\begin{align*}
&\Theta(\eta^2\log (\frac{m}{\rho})\log m) \cdot T(\eta \cdot (|\mathbf{A}thcal{S}^{(k)}|+N_{k-1} + N_{k - 2^{\ell_k}})).
\end{align*}
Observe that $N_{k-1} = N_{k-2^0}$ and $N_{k-2^\ell}$ are both bounded by $O(N_{k-2^{\ell_k}})$:
When $\ell\neq \ell_k$, the number of trailing zeros in $k-2^\ell$ is no more than $\ell_k$. When $\ell=\ell_k$, the number of trailing zeros of $k-2^{\ell_k}$ is $\ell_{k-2^{\ell_k}}$. In both cases, $\ell_{k-2^{\ell}}\le \ell_{k-2^{\ell_k}}$.
So we have the desired overall runtime.
\end{proof}
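The trailing-zeros comparison used in the last step of the proof is easy to check mechanically. The following standalone Python snippet (illustration only, not part of any data structure) verifies that for every step $k$ that is not a power of two and every level $0 \leq \ell \leq \ell_k$, the number of trailing zeros of $k - 2^\ell$ is at most that of $k - 2^{\ell_k}$.

```python
def tz(k: int) -> int:
    """Number of trailing zeros in the binary representation of k > 0
    (written ell_k in the text)."""
    return (k & -k).bit_length() - 1

# For k not a power of two (so that k - 2^{ell_k} > 0) and any
# 0 <= ell <= ell_k, we have tz(k - 2^ell) <= tz(k - 2^{ell_k}):
# if ell < ell_k then k is divisible by 2^{ell+1}, so k - 2^ell has
# exactly ell trailing zeros, while k - 2^{ell_k} has more than ell_k.
for k in range(2, 4096):
    lk = tz(k)
    if k == 1 << lk:  # k is a power of two; k - 2^{ell_k} = 0
        continue
    for ell in range(lk + 1):
        assert tz(k - (1 << ell)) <= tz(k - (1 << lk))
```

This is exactly the inequality $\ell_{k-2^{\ell}} \le \ell_{k-2^{\ell_k}}$ invoked above to bound $N_{k-1}$ and $N_{k-2^\ell}$ by $O(N_{k-2^{\ell_k}})$.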
\begin{comment}
After writing this proof, it occurred to the authors that defining the new $\bm{x}_{\ell}$'s is more complicated than necessary. One can simply maintain $O(\log m)$ copies of \textsc{MaintainSketch}, so that we have one copy $\texttt{sketch}_{\ell, x}$ which maintains sketches of $\mathbf{\Phi} \overline{\mathbf{D}}^{1/2} \bm{x}^{(k)}$ at step $k$, and one copy $\texttt{sketch}_{\ell}$ for each $\ell \leq O(\log m)$ which maintains sketches of $\mathbf{\Phi} \overline{\mathbf{D}}^{1/2} \bm{x}^{(k - 2^\ell)}$ at step $k \geq 2^{\ell}$.
Then
\[
\texttt{sketch}_{\ell,x}.\textsc{Estimate}(u) - \texttt{sketch}_{\ell}.\textsc{Estimate}(u) = \mathbf{\Phi}_{E(u)} (\mathbf{D}^{1/2} (\bm{x}^{(k)} - \bm{x}^{(k - 2^\ell)})).
\]
\end{comment}
\section{Slack projection} \label{sec:slack_projection}
In this section, we define the slack tree operator as required to use \textsc{MaintainRep}.
We then give the full slack maintenance data structure.
\subsection{Tree operator for slack}
The full slack update at IPM step $k$ with step direction $\bm{v}^{(k)}$ and step size $\bar{t} h$ is
\[
\bm{s}\leftarrow\bm{s}+\mathbf{W}^{-1/2}\widetilde{\mathbf{P}}_{\bm{w}} (\bar{t} h \bm{v}^{(k)}),
\]
where we require $\widetilde{\mathbf{P}}_{\bm{w}} \approx \mathbf{P}_{\bm{w}}$ and $\widetilde{\mathbf{P}}_{\bm{w}}\bm{v}^{(k)} \in \range{\mathbf{W}^{1/2} \mathbf{B}}$.
Let $\widetilde{\mathbf{L}}^{-1}$ denote the approximation of $\mathbf{L}^{-1}$ from \cref{eq:overview_Linv_approx}, maintained and computable with a \textsc{DynamicSC} data structure. If we define
\[
\widetilde{\mathbf{P}}_{\bm{w}} = \mathbf{W}^{1/2} \mathbf{B} \widetilde{\mathbf{L}}^{-1} \mathbf{B}^\top \mathbf{W}^{1/2} =
\mathbf{W}^{1/2} \mathbf{B} \mathbf{\Pi}^{(0)\top} \cdots \mathbf{\Pi}^{(\eta-1)\top} \widetilde{\mathbf{\Gamma}} \mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(0)} \mathbf{B}^\top \mathbf{W}^{1/2},
\]
then $\widetilde{\mathbf{P}}_{\bm{w}} \approx_{\eta \varepsilon_{\mathrm{sc}}} \mathbf{P}_{\bm{w}}$, and $\range{\widetilde{\mathbf{P}}_{\bm{w}}}=\range{\mathbf{P}_{\bm{w}}}$ by definition, where $\eta$ and $\varepsilon_{\mathrm{sc}}$ are parameters in \textsc{DynamicSC}.
Hence, this suffices as our approximate slack projection matrix.
In order to use \textsc{MaintainRep} to maintain $\bm{s}$ throughout the IPM, it remains to define a slack tree operator $\mathbf{M}^{{(\mathrm{slack})}}$ so that
\[
\mathbf{W}^{-1/2} \widetilde{\mathbf{P}}_{\bm{w}} \bm{v}^{(k)} = \mathbf{M}^{{(\mathrm{slack})}} \bm{z}^{(k)},
\]
where $\bm{z}^{(k)} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \widetilde{\mathbf{\Gamma}} \mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(0)} \mathbf{B}^\top \mathbf{W}^{1/2} \bm{v}^{(k)}$ at IPM step $k$.
We proceed by defining a tree operator $\mathbf{M}$ satisfying $\widetilde{\mathbf{P}}_{\bm{w}} \bm{v}^{(k)} = \mathbf{M} \bm{z}^{(k)}$. Namely, we show that $\mathbf{M} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mathbf{W}^{1/2} \mathbf{B} \mathbf{\Pi}^{(0) \top} \cdots \mathbf{\Pi}^{(\eta - 1) \top}$ is indeed a tree operator.
Then we set $\mathbf{M}^{{(\mathrm{slack})}} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mathbf{W}^{-1/2} \mathbf{M}$.
For the remainder of the section, we abuse notation and use $\bm{z}$ to mean $\bm{z}^{(k)}$ for one IPM step $k$.
\begin{definition}[Slack projection tree operator] \label{defn:slack-forest-operator}
Let $\mathcal{T}$ be the separator tree from data structure \textsc{DynamicSC}, with Laplacians $\mathbf{L}^{(H)}$ and $\widetilde{\mathbf{Sc}}(\mathbf{L}^{(H)}, \partial H)$ at each node $H \in \mathcal{T}$.
We use $\mathbf{B}[H]$ to denote the incidence matrix of $G$ restricted to the region $H$.
For a node $H \in \mathcal{T}$, define $V(H)$ and $F_H$ required by the tree operator as $\bdry{H} \cup F_H$ and $F_H$ from the separator tree construction, respectively.
Note the slightly confusing fact that $V(H)$ \emph{is not} the set of vertices in region $H$ of the input graph $G$, \emph{unless} $H$ is a leaf node.
Suppose node $H$ has parent $P$; then define the tree edge operator $\mathbf{M}_{(H, P)} : \mathbb{R}^{V(P)} \mapsto \mathbb{R}^{V(H)}$ as:
\begin{equation}
\label{eq:slack-forest-op-defn}
\mathbf{M}_{(H,P)} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mathbf{I}_{\bdry{H} \cup F_H}-\left(\mathbf{L}^{(H)}_{\elim{H},\elim{H}}\right)^{-1} \mathbf{L}^{(H)}_{\elim{H}, \bdry{H}} = \mathbf{I}_{\bdry{H} \cup F_H} - \mathbf{X}^{(H) \top},
\end{equation}
where $\mathbf{X}^{(H)}$ is defined in \cref{def:mx^(H)}.
At each leaf node $H$ of $\mathcal{T}$, define the leaf operator $\mathbf{J}_H=\mathbf{W}^{1/2} \mathbf{B}[H]$.
\end{definition}
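To make the evaluation order of such a tree operator concrete, here is a minimal Python sketch of a generic two-level instance (a hypothetical toy with random matrices, not the actual separator-tree operators): a single top-down pass accumulates $\sum_{A} \mathbf{M}_{H \leftarrow A} \bm{z}|_{F_A}$ by applying edge operators while descending and the leaf operator $\mathbf{J}_H$ at each leaf, and the result agrees with the defining double sum over (leaf, ancestor) pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def node(dim, children=(), J=None):
    """A toy tree-operator node: `z` plays the role of z|_{F_A} (in the
    node's own coordinate space), `children` holds (child, edge matrix
    M_(child, node)) pairs, and leaves carry a leaf operator J."""
    return {"z": rng.standard_normal(dim), "children": list(children), "J": J}

M_OUT = 5  # output dimension (number of "edges" of the toy graph)

# A 2-level toy tree: root P with leaves H1, H2 (dimensions arbitrary).
H1 = node(3, J=rng.standard_normal((M_OUT, 3)))
H2 = node(2, J=rng.standard_normal((M_OUT, 2)))
P = node(4, children=[(H1, rng.standard_normal((3, 4))),
                      (H2, rng.standard_normal((2, 4)))])

def apply_tree_operator(v_acc, nd, out):
    """One top-down pass: v_acc accumulates sum_A M_{nd <- A} z|_{F_A}
    over ancestors A of nd; edge operators are applied while descending."""
    v_acc = v_acc + nd["z"]
    if nd["children"]:
        for child, M_edge in nd["children"]:
            apply_tree_operator(M_edge @ v_acc, child, out)
    else:
        out += nd["J"] @ v_acc  # leaf operator
    return out

fast = apply_tree_operator(np.zeros(4), P, np.zeros(M_OUT))

# Brute force: sum over (leaf H, ancestor-or-self A) of J_H M_{H<-A} z|_{F_A},
# where M_{H<-A} is the product of edge operators along the tree path.
brute = np.zeros(M_OUT)
for H, M_edge in [(H1, P["children"][0][1]), (H2, P["children"][1][1])]:
    brute += H["J"] @ H["z"]           # A = H itself (M_{H<-H} = I)
    brute += H["J"] @ M_edge @ P["z"]  # A = root P
assert np.allclose(fast, brute)
```

The single pass touches each tree edge once, which is the source of the complexity bounds for tree operators used throughout this section.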
The remainder of this section proves the correctness of the tree operator.
\begin{comment}
First, we show the following helper lemma:
\begin{lemma} \label{lem:slack_partial_project}
Let us define $\mathbf{\Pi}^{(\eta)} = \mathbf{A}thbf{I}$. For a node $H \in \mathcal{T}$, recall $H$ represents a region of the input graph $G$. Then let $\mathbf{B}[H]$ denote the incidence matrix of the region, and let $\mathbf{B} = \mathbf{B}[G]$ be the incidence matrix of $G$.
For $0 \leq i \leq \eta$, we have
\begin{align*}
\sum_{\substack{\eta(H) = i \\ A : H \in \mathcal{T}_A}} \mathbf{M}_{H \leftarrow A} \bm{z}|_{F_A} &= \mathbf{\Pi}^{(i) \top} \cdots \mathbf{\Pi}^{(\eta) \top} \sum_{\eta(H) \geq i} \bm{z}|_{F_H}, \qquad \text{and} \\
\sum_{\substack{\eta(H) = i \\ A : H \in \mathcal{T}_A}} \mathbf{B}[H] \mathbf{M}_{H \leftarrow A} \bm{z}|_{F_A} &= \mathbf{B} \mathbf{\Pi}^{(i) \top} \cdots \mathbf{\Pi}^{(\eta) \top} \sum_{\eta(H) \geq i} \bm{z}|_{F_H}.
\end{align*}
\end{lemma}
\begin{proof}
We show both inductively with descending $i$. When $i = \eta$, the left hand sides are $\bm{z}|_{F_G}$ and $\mathbf{B} \bm{z}|_{F_G}$ respectively, where $G$ is the root node. The right hand sides are respectively equal.
Inductively at level $i$, for the first identity, we have
\begin{align*}
&\phantom{{}={}} \mathbf{\Pi}^{(i) \top} \cdots \mathbf{\Pi}^{(\eta) \top} \sum_{\eta(H) \geq i} \bm{z}|_{F_H} \\
&=\mathbf{\Pi}^{(i) \top} \cdots \mathbf{\Pi}^{(\eta) \top} \left(\sum_{\eta(H) = i} \bm{z}|_{F_H} + \sum_{\eta(H) \geq i+1} \bm{z}|_{F_H} \right) \\
&= \sum_{\eta(H) = i} \bm{z}|_{F_H} +
\mathbf{\Pi}^{(i)\top} \mathbf{\Pi}^{(i+1) \top}\cdots \mathbf{\Pi}^{(\eta) \top} \sum_{\eta(H) \geq i+1} \bm{z}|_{F_H}
\tag{by the fact that $\mathbf{\Pi}^{(j)} \bm{z}|_{F_H} = \bm{z}|_{F_H}$ if $j \geq \eta(H)$} \\
&= \sum_{\eta(H) = i} \bm{z}|_{F_H}
+ \left(\prod_{\eta(H) = i} \mathbf{I} - \mathbf{X}^{(H) \top} \right)
\sum_{\substack{\eta(H) = i+1 \\ A : H \in \mathcal{T}_A}} \mathbf{M}_{H \leftarrow A} \bm{z}|_{F_A} \tag{by \cref{lem:projection_by_nodes} and induction} \\
&= \sum_{\eta(H) = i} \bm{z}|_{F_H} +
\sum_{\substack{\eta(H) = i, H' \in \mathcal{T}_A \\ \text{$H'$ parent of $H$}}} (\mathbf{I} - \mathbf{X}^{(H) \top}) \mathbf{M}_{H' \leftarrow A} \bm{z}|_{F_A}
\tag{since $\range{\mathbf{M}_{H' \leftarrow A}} \subseteq \ker{\mathbf{X}^{(H)\top}}$ if $H$ is not a child of $H'$} \\
&= \sum_{\eta(H) = i} \bm{z}|_{F_H} +
\sum_{\substack{\eta(H) = i \\ \text{$A$ ancestor of $H$}}} \mathbf{M}_{H \leftarrow A} \bm{z}|_{F_A} \tag{by definition of $\mathbf{M}_{(H,H')}$} \\
&= \sum_{\substack{\eta(H) = i \\ A : H \in \mathcal{T}_A}} \mathbf{M}_{H \leftarrow A} \bm{z}|_{F_A}.
\end{align*}
Similarly, for the second identity, we have
\begin{align*}
&\phantom{{}={}} \mathbf{B} \mathbf{\Pi}^{(i) \top} \cdots \mathbf{\Pi}^{(\eta) \top} \sum_{\eta(H) \geq i} \bm{z}|_{F_H} \\
&= \mathbf{B} \sum_{\eta(H) = i} \bm{z}|_{F_H}
+ \mathbf{B} \left(\mathbf{I} - \sum_{\eta(H) = i} \mathbf{X}^{(H) \top} \right) \mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta) \top} \sum_{\eta(H) \geq i+1} \bm{z}|_{F_H} \\
&= \sum_{\eta(H) = i} \mathbf{B}[H] \bm{z}|_{F_H}
+ \sum_{\substack{\eta(H) = i+1 \\ A : H \in \mathcal{T}_A}} \mathbf{B}[H] \mathbf{M}_{H \leftarrow A} \bm{z}|_{F_A}
- \left(\sum_{\eta(H) = i} \mathbf{B}[H] \mathbf{X}^{(H) \top}\right) \left(\sum_{\substack{\eta(H) = i+1\\ A : H \in \mathcal{T}_A}} \mathbf{M}_{H \leftarrow A} \bm{z}|_{F_A} \right)
\intertext{where we can replace $\mathbf{B}$ with $\mathbf{B}[H]$ when it is multiplied with a vector supported on $F_H$; the middle term follows from the induction hypothesis for the second identity, and the last term from the induction hypothesis for the first identity.
Finally, applying the fact that $\range{\mathbf{M}_{H' \leftarrow A}} \subseteq \ker{\mathbf{X}^{(H)\top}}$ if $H$ is not a child of $H'$ and factoring the last two terms, we get}
&= \sum_{\eta(H) = i} \mathbf{B}[H] \bm{z}|_{F_H}
+ \sum_{\substack{\eta(H') = i+1 \\ A : H' \in \mathcal{T}_A}} \left(\sum_{\text{child $H$ of $H'$}} \mathbf{B}[H] (\mathbf{I} - \mathbf{X}^{(H)\top})\right) \mathbf{M}_{H' \leftarrow A} \bm{z}|_{F_A} \\
&= \sum_{\eta(H) = i} \mathbf{B}[H] \bm{z}|_{F_H}
+ \sum_{\substack{\eta(H') = i+1 \\ A : H' \in \mathcal{T}_A}}\sum_{\text{child $H$ of $H'$}} \mathbf{B}[H] \mathbf{M}_{(H,H')} \mathbf{M}_{H' \leftarrow A} \bm{z}|_{F_A} \tag{by definition of $\mathbf{M}_{(H,H')}$, and noting that $\range{\mathbf{M}_{H' \leftarrow A}} = \partial H' \cup F_{H'} \subseteq \partial H$} \\
&=\sum_{\substack{\eta(H) = i \\ A : H \in \mathcal{T}_A}} \mathbf{B}[H] \mathbf{M}_{H \leftarrow A} \bm{z}|_{F_A}.
\end{align*}
This concludes the overall proof.
\end{proof}
\begin{proof}
Setting $i = 0$ in the second identity of \cref{lem:slack_partial_project}, we get
\[
\sum_{\text{leaf $H$, node $A$}: H \in \mathcal{T}_A} \mathbf{B}[H] \mathbf{M}_{H \leftarrow A} \bm{z}|_{F_A} = \mathbf{B} \mathbf{\Pi}^{(0)\top} \cdots \mathbf{\Pi}^{(\eta) \top} \sum_{\eta(H) \geq 0} \bm{z}|_{F_H} = \mathbf{B} \mathbf{\Pi}^{(0)\top} \cdots \mathbf{\Pi}^{(\eta) \top} \bm{z}.
\]
Multiplying by $\mathbf{W}^{1/2}$ on the left gives the desired conclusion.
\end{proof}
\end{comment}
\begin{lemma}
\label{lem:slack-operator-correctness}
Let $\mathbf{M}$ be the tree operator as defined in \cref{defn:slack-forest-operator}. We have $$\mathbf{M}\bm{z}=\mathbf{W}^{1/2}\mathbf{B} \mathbf{\Pi}^{(0) \top} \cdots \mathbf{\Pi}^{(\eta - 1) \top}\bm{z}.$$
\end{lemma}
We begin with a few observations about the $\mathbf{\Pi}^{(i)}$'s:
\begin{observation} \label{obs:img-mpi}
For any $0 \leq i < \eta$, and for any vector $\bm{x}$, we have
$\mathbf{\Pi}^{(i) \top} \bm{x} = \bm{x} + \bm{y}_i$, where $\bm{y}_i$ is a vector supported on $F_i = \cup_{H \in \mathcal{T}(i)} F_H$.
Extending this observation, for $0 \leq i < j < \eta$,
\[
\mathbf{\Pi}^{(i)\top} \cdots \mathbf{\Pi}^{(j-1)\top}\bm{x} = \bm{x} + \bm{y},
\]
where $\bm{y}$ is a vector supported on $F_i \cup \cdots \cup F_{j-1} = \cup_{H : i \leq \eta(H) < j} F_H$.
Furthermore, if $\bm{x}$ is supported on $F_A$ for $\eta(A) = j$, then $\bm{y}$ is supported on $\cup_{H \in \mathcal{T}_A} F_H$.
\end{observation}
The following helper lemma describes a sequence of edge operators from a node to a leaf.
\begin{lemma}
\label{lem:slack-edge-operator-correctness}
For any leaf node $H\in \mathcal{T}$, and a node $A$ with $H \in \mathcal{T}_A$ ($A$ is an ancestor of $H$ or $H$ itself), we have
\begin{equation} \label{eq:slack-edge-path}
\mathbf{M}_{H\leftarrow A}\bm{z}|_{\elim{A}} = \mathbf{I}_{\partial H \cup F_H} \mathbf{\Pi}^{(0) \top} \cdots \mathbf{\Pi}^{(\eta - 1) \top}\bm{z}|_{F_A}.
\end{equation}
\end{lemma}
\begin{proof}
For simplicity of notation, let $V(H) \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \partial H \cup F_H$ for a node $H$.
To start, observe that for a node $A$ at level $\eta(A)$, we have $\mathbf{\Pi}^{(i)} \bm{z}|_{F_A} = \bm{z}|_{F_A}$ for all $i \geq \eta(A)$. So it suffices to prove
\[
\mathbf{M}_{H\leftarrow A}\bm{z}|_{\elim{A}} = \mathbf{I}_{V(H)} \mathbf{\Pi}^{(0) \top} \cdots \mathbf{\Pi}^{(\eta(A)-1) \top} \bm{z}|_{F_A}.
\]
Let the path from leaf $H$ up to node $A$ in $\mathcal{T}$ be denoted $(H_0 \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} H, H_1, \ldots, H_t \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} A)$, for some $t\le \eta(A)$. We will prove by induction for $k$ decreasing from $t$ to $0$:
\begin{equation} \label{eq:slack-op-induction}
\mathbf{M}_{H_k\leftarrow A}\bm{z}|_{\elim{A}} = \mathbf{I}_{V(H_k)} \mathbf{\Pi}^{(\eta(H_k)) \top} \mathbf{\Pi}^{(\eta(H_k) + 1)\top} \cdots \mathbf{\Pi}^{(\eta(A)-1) \top}\bm{z}|_{\elim{A}}.
\end{equation}
For the base case of $H_t = A$, we have $\mathbf{M}_{H_t \leftarrow A} \bm{z}|_{\elim{A}} = \bm{z}|_{\elim{A}} = \mathbf{I}_{V(H_t)} \bm{z}|_{\elim{A}}$.
For the inductive step at $H_k$, we first apply the induction hypothesis for $H_{k+1}$ to get
\begin{align}
\mathbf{M}_{H_{k+1}\leftarrow A}\bm{z}|_{\elim{A}} &= \mathbf{I}_{V(H_{k+1})} \mathbf{\Pi}^{(\eta(H_{k+1})) \top} \cdots \mathbf{\Pi}^{(\eta(A)-1) \top}\bm{z}|_{\elim{A}}.
\intertext{
Multiplying by the edge operator $\mathbf{M}_{(H_{k}, H_{k+1})}$ on both sides gives}
\mathbf{M}_{H_{k}\leftarrow A}\bm{z}|_{\elim{A}} &= \mathbf{M}_{(H_k, H_{k+1})} \mathbf{I}_{V(H_{k+1})} \mathbf{\Pi}^{(\eta(H_{k+1})) \top} \cdots \mathbf{\Pi}^{(\eta(A)-1) \top}\bm{z}|_{\elim{A}}.
\end{align}
Recall that the edge operator $\mathbf{M}_{(H_k, H_{k+1})}$ maps vectors supported on $V(H_{k+1})$ to vectors supported on $V(H_k)$ and is zero otherwise. So we can drop the $\mathbf{I}_{V(H_{k+1})}$ term on the right hand side.
Let $\bm{x} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mathbf{\Pi}^{(\eta(H_{k+1})) \top} \cdots \mathbf{\Pi}^{(\eta(A)-1) \top}\bm{z}|_{\elim{A}}$.
Now, by the definition of the edge operator, the above equation becomes
\begin{equation} \label{eq:slack-op}
\mathbf{M}_{H_{k}\leftarrow A}\bm{z}|_{\elim{A}} = (\mathbf{I}_{V(H_k)} - \mathbf{X}^{(H_k) \top})\bm{x}.
\end{equation}
On the other hand, we have
\begin{align*}
\mathbf{I}_{V(H_k)} \mathbf{\Pi}^{(\eta(H_k))\top} \cdots \mathbf{\Pi}^{(\eta(H_{k+1})-1)\top} \bm{x} &=
\mathbf{I}_{V(H_k)} \mathbf{\Pi}^{(\eta(H_k))\top} \left(\mathbf{\Pi}^{(\eta(H_k)+1)\top} \cdots \mathbf{\Pi}^{(\eta(H_{k+1})-1)\top} \bm{x} \right)\\
&= \mathbf{I}_{V(H_k)} \mathbf{\Pi}^{(\eta(H_k))\top} (\bm{x} + \bm{y}),
\intertext{
where $\bm{y}$ is a vector supported on $\cup F_R$ for nodes $R$ at levels $\eta(H_k)+1, \ldots, \eta(H_{k+1})-1$ by \cref{obs:img-mpi}.
In particular, $\bm{y}$ is zero on $F_{H_k}$. Also, $\bm{y}$ is zero on $\partial H_k$, since by \cref{lem:bdry-containment}, $\partial H_k \subseteq \cup_{\text{ancestor $A'$ of $H_k$}} F_{A'}$, and ancestors of $H_k$ are at level $\eta(H_{k+1})$ or higher.
Then $\bm{y}$ is zero on $V(H_k) = \partial H_k \cup F_{H_k}$, and the right hand side is
}
&= (\mathbf{I}_{V(H_k)} - \mathbf{X}^{(H_k) \top}) \bm{x},
\end{align*}
where we apply the definition of $\mathbf{\Pi}^{(\eta(H_k))\top}$ and expand the left-multiplication by $\mathbf{I}_{V(H_k)}$.
Combining with \cref{eq:slack-op} and substituting back the definition of $\bm{x}$, we get
\[
\mathbf{M}_{H_{k}\leftarrow A}\bm{z}|_{\elim{A}} = \mathbf{I}_{V(H_{k})} \mathbf{\Pi}^{(\eta(H_{k})) \top} \cdots \mathbf{\Pi}^{(\eta(A)-1) \top}\bm{z}|_{\elim{A}},
\]
which completes the induction.
\begin{comment}
------------------------------------------------
We have two cases for the inductive step. Suppose $\eta(H_k)>i$. By inductive hypothesis, we have
$$(\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}})|_{\bdry{H_k}\cup \elim{H_k}}=\mathbf{M}_{H_k\leftarrow R}\bm{z}|_{\elim{R}}.$$
By $\mathbf{\Pi}^{(i)\top}=\mathbf{I}-\sum_{H\in \mathcal{T}(i)}\mathbf{X}^{(H)\top}=\mathbf{I}-\sum_{H\in \mathcal{T}(i)}\left(\mathbf{L}_{\elim{H}, \elim{H}}^{(H)}\right)^{-1}\mathbf{L}_{\elim{H}, \bdry{H}}^{(H)}$ from \cref{lem:projection_by_nodes}, multiplying $\mathbf{\Pi}^{(i)\top}$ to the left of some vector only changes the coordinates $\bigcup_{H\in \mathcal{T}(i)} \elim{H}$. Because no descendant of $H_k$ is in $\mathcal{T}(i)$, $\bigcup_{H\in \mathcal{T}(i)} \elim{H}$ is disjoint from $\bdry{H_k}\cup \elim{H_k}$. Thus, multiplying $\mathbf{\Pi}^{(i)\top}$ on the left to $(\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}})|_{\bdry{H_k}\cup \elim{H_k}}$ does not change the value.
Suppose $\eta(H_k)=i$. By inductive hypothesis, we have
$$(\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}})|_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}=\mathbf{M}_{H_{k+1}\leftarrow R}\bm{z}|_{\elim{R}}.$$
Because $\mathbf{M}_{H_{k}\leftarrow R}=\mathbf{M}_{(H_{k}, H_{k+1})} \mathbf{M}_{H_{k+1}\leftarrow R}$, it suffices to prove
$$\mathbf{M}_{(H_{k}, H_{k+1})}(\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}})|_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}=(\mathbf{\Pi}^{(i) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}})|_{\bdry{H_{k}}\cup \elim{H_{k}}},$$
or equivalently
\begin{equation}
\mathbf{M}_{(H_{k}, H_{k+1})}\mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}}=\mathbf{I}_{\bdry{H_{k}}\cup \elim{H_{k}}}\mathbf{\Pi}^{(i) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}}.
\label{eq:inductive_step}
\end{equation}
By \cref{lem:projection_by_nodes}, multiplying $\mathbf{\Pi}^{(j)\top}$ to the left of some vector only changes the coordinates $\bigcup_{H\in \mathcal{T}(j)} \elim{H}$. Thus, $\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}}$ agrees with $\bm{z}|_{\elim{R}}$ except for coordinates in $\bigcup_{j>i, H\in \mathcal{T}(j)} \elim{H}=C_i$. As $\eta(R)>i$, $\elim{R}\subseteq C_i$. We have that $\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}}$ is nonzero only on $C_i$. This further reduces proving \cref{eq:inductive_step} to proving
$$\mathbf{M}_{(H_{k}, H_{k+1})}\mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}=\mathbf{I}_{\bdry{H_{k}}\cup \elim{H_{k}}}\mathbf{\Pi}^{(i) \top}\mathbf{I}_{C_i}.$$
Because $\elim{H_k}\subseteq \bdry{H_{k+1}}\cup \elim{H_{k+1}}$, the LHS is exactly $\mathbf{M}_{(H_{k}, H_{k+1})}$. Because $\mathbf{\Pi}^{(i)\top}=\mathbf{I}-\sum_{H\in \mathcal{T}(i)}\mathbf{X}^{(H)\top}$ and $\range{\mathbf{X}^{(H)\top}}\cap(\bdry{H_{k}}\cup \elim{H_{k}})=\emptyset$ for all $H\neq H_k$ at level $i$, the RHS is equal to $\mathbf{I}_{\bdry{H_{k}}\cup \elim{H_{k}}}(\mathbf{I}-\mathbf{X}^{(H_{k})\top})\mathbf{I}_{C_i}=\mathbf{M}_{(H_{k}, H_{k+1})}.$ The last equality uses $(\bdry{H_{k}}\cup \elim{H_{k}})\cap C_i=\bdry{H_{k}}$ and $\mathbf{X}^{(H_k)\top}\in \mathbb{R}^{\elim{H_{k}}\times \bdry{H_k}}$. This completes the inductive step.
We consider the vector
$$(\mathbf{\Pi}^{(i) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}})|_{\bdry{H_k}\cup \elim{H_k}}=\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{\Pi}^{(i) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}}.$$
\begin{align*}
&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{\Pi}^{(i) \top}\mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}\\
=&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\left(\mathbf{I}-\sum_{H\in \mathcal{T}(i)}\left(\mathbf{L}_{\elim{H}, \elim{H}}^{(H)}\right)^{-1}\mathbf{L}_{\elim{H}, \bdry{H}}^{(H)}\right)\mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}\\
=&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\left(\mathbf{I}-\left(\mathbf{L}_{\elim{H_k}, \elim{H_k}}^{(H_k)}\right)^{-1}\mathbf{L}_{\elim{H_k}, \bdry{H_k}}^{(H_k)}\right)\mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}\\
=&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\left(\mathbf{I}_{\bdry{H_k}}-\left(\mathbf{L}_{\elim{H_k}, \elim{H_k}}^{(H_k)}\right)^{-1}\mathbf{L}_{\elim{H_k}, \bdry{H_k}}^{(H_k)}\right)\mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}} \tag{as $(\bdry{H_k}\cup \elim{H_k})\cap (\bdry{H_{k+1}}\cup \elim{H_{k+1}})=\bdry{H_k}$}\\
=&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{M}_{(H_k, H_{k+1})}\mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}.
\end{align*}
We may multiply this with the inductive hypothesis to get
\begin{align*}
&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{M}_{(H_k, H_{k+1})} \mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}\mathbf{M}_{H_{k+1}\leftarrow R}\bm{z}|_{\elim{R}}\\
=&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{\Pi}^{(i) \top}\mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}} \mathbf{M}_{H_{k+1}\leftarrow R}\bm{z}|_{\elim{R}}\\
=&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{\Pi}^{(i) \top}\mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}} (\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}})|_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}\\
=&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{\Pi}^{(i) \top} \mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}}\\
=&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{\Pi}^{(i) \top} \mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}}\\
=&(\mathbf{\Pi}^{(i) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}})|_{\bdry{H_k}\cup \elim{H_k}}.\\
\end{align*}
The reason for the second last equality is that for every vertex $v\not\in{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}$, either
the $v$-th column of $$\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{\Pi}^{(i) \top}$$ is $\bm{0}^\top$, or the $v$-th row of $$\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}}$$ is $\bm{0}$. We prove this in two cases. If $v\not\in \bdry{H_{k+1}}\cup \elim{H_{k+1}}\cup \elim{H_k}$, the $v$-th row of
$$\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{\Pi}^{(i) \top}=\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\left(\mathbf{I}-\left(\mathbf{L}_{\elim{H_k}, \elim{H_k}}^{(H_k)}\right)^{-1}\mathbf{L}_{\elim{H_k}, \bdry{H_k}}^{(H_k)}\right)$$ is zero. Otherwise, we may assume $v\in\elim{H_k}$. Because each of the operators $\mathbf{\Pi}^{(i+1)\top},\ldots, \mathbf{\Pi}^{(\eta(R)-1)\top}$ only modifies $\elim{H}$ for $H$ at levels $i+1,\ldots, \eta(R)-1$ when applied on the left,
$$(\mathbf{\Pi}^{(i+1) \top} \cdots \mathbf{\Pi}^{(\eta(R)-1) \top}\bm{z}|_{\elim{R}})|_{\elim{H_k}}=(\bm{z}|_{\elim{R}})|_{\elim{H_k}}=\bm{0}.$$
On the other hand, by the inductive hypothesis, $\mathbf{M}_{(H_k, H_{k+1})} \mathbf{M}_{H_{k+1}\leftarrow R}\bm{z}|_{\elim{R}}$ is nonzero only on $\bdry{H_{k+1}}\cup \elim{H_{k+1}}$. Thus,
\begin{align*}
&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{M}_{(H_k, H_{k+1})} \mathbf{I}_{\bdry{H_{k+1}}\cup \elim{H_{k+1}}}\mathbf{M}_{H_{k+1}\leftarrow R}\bm{z}|_{\elim{R}}\\
=&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{M}_{(H_k, H_{k+1})} \mathbf{M}_{H_{k+1}\leftarrow R}\bm{z}|_{\elim{R}}\\
=&\mathbf{I}_{\bdry{H_k}\cup \elim{H_k}}\mathbf{M}_{H_{k}\leftarrow R}\bm{z}|_{\elim{R}}\\
=&\mathbf{M}_{H_{k}\leftarrow R}\bm{z}|_{\elim{R}}.
\end{align*} This completes the inductive step.
\end{comment}
\end{proof}
To prove \cref{lem:slack-operator-correctness}, we apply the leaf operators to the result of the previous lemma and sum over all nodes and leaf nodes.
\begin{proof}[Proof of \cref{lem:slack-operator-correctness}]
Let $H$ be a leaf node. We sum \cref{eq:slack-edge-path} over all $A$ with $H \in \mathcal{T}_A$ to get
\begin{align*}
\sum_{A : H \in \mathcal{T}_A} \mathbf{M}_{H\leftarrow A}\bm{z}|_{\elim{A}} &= \mathbf{I}_{\partial H \cup F_H} \sum_{A : H \in \mathcal{T}_A} \mathbf{\Pi}^{(0) \top} \cdots \mathbf{\Pi}^{(\eta - 1) \top}\bm{z}|_{F_A}\\
&= \mathbf{I}_{\partial H \cup F_H} \mathbf{\Pi}^{(0) \top} \cdots \mathbf{\Pi}^{(\eta - 1) \top}\bm{z},
\end{align*}
where we relax the sum on the right hand side to be over all nodes in $\mathcal{T}$, since by \cref{obs:img-mpi}, for any $A$ with $H \notin \mathcal{T}_A$, we simply have $\mathbf{I}_{\partial H \cup F_H} \mathbf{\Pi}^{(0)\top} \cdots \mathbf{\Pi}^{(\eta-1)\top} \bm{z}|_{F_A} = \bm{0}$. Next, we apply the leaf operator $\mathbf{J}_H = \mathbf{W}^{1/2} \mathbf{B}[H]$ to both sides to get
\begin{align*}
\sum_{A : H \in \mathcal{T}_A} \mathbf{J}_H \mathbf{M}_{H\leftarrow A}\bm{z}|_{\elim{A}} &= \mathbf{W}^{1/2}\mathbf{B}[H] \mathbf{I}_{\partial H \cup F_H} \mathbf{\Pi}^{(0) \top} \cdots \mathbf{\Pi}^{(\eta - 1) \top}\bm{z}.
\end{align*}
Since $\mathbf{B}[H]$ is zero on the columns indexed by $V(G) \setminus (\partial H \cup F_H)$, we can simply drop the $\mathbf{I}_{\partial H \cup F_H}$ on the right hand side.
Finally, we sum the equation above over all leaf nodes. The left hand side is precisely the definition of $\mathbf{M} \bm{z}$. Recall that the regions of the leaf nodes partition the original graph $G$, so we have
\begin{align*}
\sum_{H \in \mathcal{T}(0)} \sum_{A : H \in \mathcal{T}_A} \mathbf{J}_H \mathbf{M}_{H\leftarrow A}\bm{z}|_{\elim{A}} &= \mathbf{W}^{1/2} \left(\sum_{H\in \mathcal{T}(0)} \mathbf{B}[H] \right) \mathbf{\Pi}^{(0) \top} \cdots \mathbf{\Pi}^{(\eta - 1) \top}\bm{z} \\
\mathbf{M} \bm{z} &= \mathbf{W}^{1/2} \mathbf{B} \mathbf{\Pi}^{(0) \top} \cdots \mathbf{\Pi}^{(\eta - 1) \top}\bm{z}.
\end{align*}
\end{proof}
We now examine the slack tree operator complexity.
\begin{lemma} \label{lem:slack_operator_complexity}
The complexity of the slack tree operator as defined in \cref{defn:slack-forest-operator} is $T(k) = \widetilde{O}(\sqrt{mk}\cdot \varepsilon_{\mathrm{sc}}^{-2})$, where $\varepsilon_{\mathrm{sc}}$ is the Schur complement approximation factor from data structure \textsc{DynamicSC}.
\end{lemma}
\begin{proof}
Let $\mathbf{M}_{(D,P)}$ be a tree edge operator.
Applying $\mathbf{M}_{(D,P)}=\mathbf{I}_{\bdry{D}}-\left(\mathbf{L}^{(D)}_{\elim{D},\elim{D}}\right)^{-1} \mathbf{L}^{(D)}_{\elim{D}, \bdry{D}}$
on the left or right consists of three steps: applying $\mathbf{I}_{\bdry{D}}$,
applying $\mathbf{L}^{(D)}_{\elim{D}, \bdry{D}}$, and solving $\mathbf{L}^{(D)}_{\elim{D},\elim{D}}\bm{v}=\bm{b}$ for some vectors $\bm{v}$ and $\bm{b}$.
Each of the three steps costs time $O(\varepsilon_{\mathrm{sc}}^{-2}|\bdry{D}\cup \elim{D}|)$ by \cref{lem:fastApproxSchur} and \cref{thm:laplacianSolver}.
For any leaf node $H$, $H$ has a constant number of edges, so it takes constant time to compute $\mathbf{J}_H \bm{u}$ for any vector $\bm{u}$. The number of vertices may be larger, but the number of nonzeros of $\mathbf{J}_H=\mathbf{W}^{1/2} \mathbf{B}[H]$ depends only on the number of edges.
To bound the total cost over $k$ distinct edges, we apply \cref{lem:planarBoundChangeCost}, which then gives the claimed complexity.
\end{proof}
\subsection{Proof of \crtcref{thm:SlackMaintain}}
Finally, we give the full data structure for maintaining the slack solution.
The tree operator $\mathbf{M}$ defined in \cref{defn:slack-forest-operator} satisfies $\mathbf{M}\bm{z}^{(k)} =\widetilde{\mathbf{P}}_{\bm{w}}\bm{v}^{(k)}$ at step $k$, by the definition of $\bm{z}^{(k)}$.
To support the proper update $\bm{s}\leftarrow\bm{s}+\bar{t} h\mathbf{W}^{-1/2}\widetilde{\mathbf{P}}_{\bm{w}} \bm{v}^{(k)}$, we define $\mathbf{M}^{{(\mathrm{slack})}} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mathbf{W}^{-1/2}\mathbf{M}$ and note that it is also a tree operator:
\begin{lemma}
\label{lem:wm}
Suppose $\mathbf{M}$ is a tree operator supported on $\mathcal{T}$ with complexity $T(K)$. Let $\mathbf{D}$ be a diagonal matrix in $\mathbb{R}^{E\times E}$ where $E=\bigcup_{\text{leaf }H\in \mathcal{T}}E(H)$. Then $\mathbf{D}\mathbf{M}$ can be represented by a tree operator with complexity $T(K)$.
\end{lemma}
\begin{proof}
Suppose $\mathbf{M} \in \mathbb{R}^{E\times V}$. For any vector $\bm{z}\in \mathbb{R}^{V}$, $\mathbf{D}\mathbf{M}\bm{z}=\mathbf{D}(\mathbf{M}\bm{z})$. Thus, to compute $\mathbf{D}\mathbf{M}\bm{z}$, we may first compute $\mathbf{M}\bm{z}$ and then multiply the $i$-th entry of $\mathbf{M}\bm{z}$ by $\mathbf{D}_{i, i}$. This can be achieved by defining a new tree operator $\mathbf{M}'$ with leaf operators $\mathbf{J}'$ such that $\mathbf{J}'_{H}=\mathbf{D}_{E(H), E(H)}\mathbf{J}_{H}$ and $\mathbf{M}'_{(H, P)}=\mathbf{M}_{(H, P)}$. The size of each leaf operator remains constant, and the edge operators are unchanged from $\mathbf{M}$. Thus, the new operator $\mathbf{M}'$ has the same complexity as $\mathbf{M}$.
\end{proof}
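The lemma can be sanity-checked on a toy instance (hypothetical dimensions and random matrices, not the real data structure): scaling each leaf operator by the corresponding diagonal block of $\mathbf{D}$ reproduces $\mathbf{D}\mathbf{M}\bm{z}$, with the edge operators untouched, because each $\mathbf{J}_H$ is supported on the rows $E(H)$ only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy tree operator: root P with two leaves H1, H2, where leaf H1 owns
# output coordinates E(H1) = {0,1,2} and H2 owns E(H2) = {3,4}, mirroring
# that the leaf regions partition the edge set E.
E1, E2 = [0, 1, 2], [3, 4]
J1 = np.zeros((5, 3)); J1[E1, :] = rng.standard_normal((3, 3))
J2 = np.zeros((5, 2)); J2[E2, :] = rng.standard_normal((2, 2))
M1 = rng.standard_normal((3, 4))   # edge operator M_(H1, P)
M2 = rng.standard_normal((2, 4))   # edge operator M_(H2, P)
zP = rng.standard_normal(4)
z1 = rng.standard_normal(3)
z2 = rng.standard_normal(2)

def tree_op(JA, JB):
    # M z = J_H1 (z|F_H1 + M_(H1,P) z|F_P) + J_H2 (z|F_H2 + M_(H2,P) z|F_P)
    return JA @ (z1 + M1 @ zP) + JB @ (z2 + M2 @ zP)

d = rng.standard_normal(5)
D = np.diag(d)

# New leaf operators J'_H = D_{E(H),E(H)} J_H; edge operators unchanged.
J1p = J1.copy(); J1p[E1, :] = np.diag(d[E1]) @ J1[E1, :]
J2p = J2.copy(); J2p[E2, :] = np.diag(d[E2]) @ J2[E2, :]

# The scaled-leaf tree operator computes exactly D (M z).
assert np.allclose(tree_op(J1p, J2p), D @ tree_op(J1, J2))
```

This is the construction used to fold the $\mathbf{W}^{-1/2}$ factor of $\mathbf{M}^{{(\mathrm{slack})}}$ into the leaf operators without changing the operator's complexity.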
With the lemma above, we can use \textsc{MaintainRep} (\cref{alg:maintain_representation}) to maintain the implicit representation of $\bm{s}$ and \cref{thm:VectorTreeMaintain} to maintain an approximate vector $\overline{\vs}$ as required in \cref{algo:IPM_impl}.
A single IPM step calls the procedures \textsc{Reweight, Move, Approximate} in this order once.
Note that we reinitialize the data structure whenever $\bar{t}$ changes, so within each instantiation we may assume $\bar{t} = 1$ by scaling; $\bar{t}$ changes only $\widetilde{O}(1)$ times in the IPM.
\begin{algorithm}
\caption{Slack Maintenance, Main Algorithm}\label{alg:slack-maintain-main}
\begin{algorithmic}[1]
\State \textbf{data structure} \textsc{MaintainSlack} \textbf{extends} \textsc{MaintainRep}
\State \textbf{private: member}
\State \hspace{4mm} \textsc{MaintainRep} $\texttt{maintainRep}$: data structure to implicitly maintain
\[
\bm{s} = \bm{y} + \mathbf{W}^{-1/2} \mathbf{M} (c {\bm{z}^{(\mathrm{step})}} + {\bm{z}^{(\mathrm{sum})}}).
\]
\Comment $\mathbf{M}$ is defined by \cref{defn:slack-forest-operator}
\State \hspace{4mm} \textsc{MaintainApprox} \texttt{bar\_s}: data structure to maintain approximation $\overline{\vs}$ to $\bm{s}$ (\cref{thm:VectorTreeMaintain})
\State
\Procedure{Initialize}{$G,\bm{s}^{(\mathrm{init})} \in\mathbb{R}^{m},\bm{v}\in \mathbb{R}^{m}, \bm{w}\in\mathbb{R}_{>0}^{m},\varepsilonsc>0,
\overline{\varepsilon}>0$}
\State Build the separator tree $\mathcal{T}$ by \cref{thm:separator_tree_construction}
\State $\texttt{maintainRep}.\textsc{Initialize}(G, \mathcal{T}, \mathbf{W}^{-1/2}\mathbf{M}, \bm{v}, \bm{w}, \bm{s}^{(\mathrm{init})}, \varepsilonsc)$
\Comment{initialize $\bm{s} \leftarrow \bm{s}^{{(\mathrm{init})}}$}
\State $\texttt{bar\_s}.\textsc{Initialize}(\mathbf{W}^{-1/2}\mathbf{M},c, {\bm{z}^{(\mathrm{step})}},{\bm{z}^{(\mathrm{sum})}}, \bm{y}, \mathbf{W}, n^{-5}, \overline{\varepsilon})$
\Comment{initialize $\overline{\vs}$ approximating $\bm{s}$}
\EndProcedure
\State
\Procedure{Reweight}{$\bm{w}^{(\mathrm{new})} \in \mathbb{R}^m_{>0}$}
\State $\texttt{maintainRep}.\textsc{Reweight}(\bm{w}^{(\mathrm{new})})$
\EndProcedure
\State
\Procedure{Move}{$\alpha,\bm{v}^{(\mathrm{new})} \in \mathbb{R}^m$}
\State $\texttt{maintainRep}.\textsc{Move}(\alpha, \bm{v}^{(\mathrm{new})})$
\EndProcedure
\State
\Procedure{Approximate}{ }
\State \Comment the variables in the argument are accessed from \texttt{maintainRep}
\State \Return $\overline{\vs}=\texttt{bar\_s}.\textsc{Approximate}(\mathbf{W}^{-1/2}\mathbf{M},c, {\bm{z}^{(\mathrm{step})}},{\bm{z}^{(\mathrm{sum})}},\bm{y}, \mathbf{W})$
\EndProcedure
\State
\Procedure{Exact}{ }
\State \Return $\texttt{maintainRep}.\textsc{Exact}()$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\SlackMaintain*
\begin{proof}[Proof of \cref{thm:SlackMaintain}]
We prove the runtime and correctness of each procedure separately.
Recall by \cref{lem:slack-edge-operator-correctness}, the tree operator $\mathbf{M}$ has complexity $T(K) = O(\varepsilonsc^{-2} \sqrt{mK})$.
\paragraph{\textsc{Initialize}:}
By the initialization of \texttt{maintainRep}~(\cref{thm:maintain_representation}), the implicit representation of $\bm{s}$ in \texttt{maintainRep}~is correct and $\bm{s} = \bm{s}^{{(\mathrm{init})}}$. By the initialization of \texttt{bar\_s}, $\overline{\vs}$ is set to $\bm{s}$ to start.
Initialization of $\texttt{maintainRep}$ takes $\widetilde{O}(m\varepsilonsc^{-2})$ time by \cref{thm:maintain_representation},
and the initialization of $\texttt{bar\_s}$ takes $\widetilde{O}(m)$ time by \cref{thm:VectorTreeMaintain}.
\paragraph{\textsc{Reweight}:}
In \textsc{Reweight}, the value of $\bm{s}$ does not change, but all the variables in \texttt{MaintainRep} are updated to depend on the new weights.
The correctness and runtime follow from \cref{thm:maintain_representation}.
\paragraph{\textsc{Move}:}
$\texttt{maintainRep}.\textsc{Move}(\alpha, \bm{v}^{(k)})$ updates the implicit representation of $\bm{s}$ by
\[
\bm{s} \leftarrow \bm{s} + \mathbf{W}^{-1/2} \mathbf{M} \alpha \bm{z}^{(k)}.
\]
By the definition of the slack projection tree operator $\mathbf{M}$ and \cref{lem:slack-operator-correctness}, this is equivalent to the update
\[
\bm{s} \leftarrow \bm{s} + \alpha\mathbf{W}^{-1/2}\widetilde{\mathbf{P}}_{\bm{w}}\bm{v}^{(k)},
\]
where $\widetilde{\mathbf{P}}_{\bm{w}} = \mathbf{W}^{1/2} \mathbf{B} \mathbf{\Pi}^{(0)}\cdots \mathbf{\Pi}^{(\eta-1)} \widetilde{\mathbf{\Gamma}} \mathbf{\Pi}^{(\eta-1)}\cdots \mathbf{\Pi}^{(0)} \mathbf{B}^{\top} \mathbf{W}^{1/2}$.
By \cref{thm:L-inv-approx}, $\|\widetilde{\mathbf{P}}_{\bm{w}}-\mathbf{P}_{\bm{w}}\|_{\mathrm{op}}\leq \eta \varepsilonsc$.
From the definition, $\range{\mathbf{W}^{-1/2} \widetilde{\mathbf{P}}_{\bm{w}}}\subseteq \range{\mathbf{B}}$.
By the guarantees of \texttt{maintainRep}, if $\bm{v}^{(k)}$ differs from $\bm{v}^{(k-1)}$ on $K$ coordinates, then the runtime is $\widetilde{O}(\varepsilonsc^{-2}\sqrt{mK})$. Furthermore, ${\bm{z}^{(\mathrm{step})}}$ and ${\bm{z}^{(\mathrm{sum})}}$ change on $\elim{H}$ for at most $\widetilde{O}(K)$ nodes in $\mathcal{T}$.
\paragraph{\textsc{Approximate}:}
The returned vector $\overline{\vs}$ satisfies $\|\mathbf{W}^{1/2}(\overline{\vs}-\bm{s})\|_{\infty}\leq\overline{\varepsilon}$ by the guarantee of \\ \texttt{bar\_s}.\textsc{Approximate} from \cref{thm:VectorTreeMaintain}.
\paragraph{\textsc{Exact}:}
The runtime and correctness directly follow from the guarantee of $\texttt{maintainRep}.\textsc{Exact}$ given in \cref{thm:maintain_representation}.
Finally, we have the following lemma about the runtime for \textsc{Approximate}. Let $\overline{\vs}^{(k)}$ denote the returned approximate vector at step $k$.
\begin{lemma}\label{lem:slack-approx}
Suppose $\alpha\|\bm{v}\|_{2}\leq\beta$ for some $\beta$ for all calls to \textsc{Move}.
Let $K$ denote the total number of coordinates changed in $\bm{v}$ and $\bm{w}$ between the $(k-1)$-th and $k$-th \textsc{Reweight} and \textsc{Move} calls. Then at the $k$-th \textsc{Approximate} call,
\begin{itemize}
\item The data structure first sets $\overline{\vs}_e\leftarrow \bm{s}^{(k-1)}_e$ for all coordinates $e$ where $\bm{w}_e$ changed in the last \textsc{Reweight}, then sets $\overline{\vs}_e\leftarrow \bm{s}^{(k)}_e$ for $O(N_k\stackrel{\mathrm{\scriptscriptstyle def}}{=} 2^{2\ell_{k}}(\frac{\beta}{\overline{\varepsilon}})^{2}\log^{2}m)$ coordinates $e$, where $\ell_{k}$ is the largest integer
$\ell$ with $k\equiv 0\pmod{2^{\ell}}$ when $k\neq 0$, and $\ell_0=0$.
\item The amortized time for the $k$-th \textsc{Approximate} call
is $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}\sqrt{m(K+N_{k-2^{\ell_k}})})$.
\end{itemize}
\end{lemma}
\begin{proof}
Since $\overline{\vs}$ is maintained by \texttt{bar\_s}, we apply \cref{thm:VectorTreeMaintain} with $\bm{x} = \bm{s}$ and diagonal matrix $\mathbf{D} = \mathbf{W}$.
We need to prove $\|\bm{x}^{(k)}-\bm{x}^{(k-1)}\|_{\mathbf{D}^{(k)}}\le O(\beta)$ for all $k$ first. The constant factor in $O(\beta)$ does not affect the guarantees in \cref{thm:VectorTreeMaintain}. The left-hand side is
\begin{align*}
\norm{\bm{s}^{(k)}-\bm{s}^{(k-1)}}_{\mathbf{W}^{(k)}} &= \norm{\alpha^{(k)} {\mathbf{W}^{(k)}}^{-1/2} \widetilde{\mathbf{P}}_{\bm{w}}\bm{v}^{(k)}}_{\mathbf{W}^{(k)}} \tag{by \textsc{Move}}\\
&= \norm{\alpha^{(k)} \widetilde{\mathbf{P}}_{\bm{w}}\bm{v}^{(k)}}_2 \\
&\le (1+\eta\varepsilonsc) \alpha^{(k)} \|\bm{v}^{(k)}\|_2 \tag{since $\|\widetilde{\mathbf{P}}_{\bm{w}}\|_{\mathrm{op}}\le 1+\eta\varepsilonsc$}\\
&\le 2 \beta. \tag{by the assumption that $\alpha\|\bm{v}\|_{2}\leq\beta$}
\end{align*}
Here the operator norm bound follows from $\|\widetilde{\mathbf{P}}_{\bm{w}}-\mathbf{P}_{\bm{w}}\|_{\mathrm{op}}\leq \eta\varepsilonsc$ and the fact that $\mathbf{P}_{\bm{w}}$ is an orthogonal projection.
Now, we can apply \cref{thm:VectorTreeMaintain} to conclude that at each step $k$,
\texttt{bar\_s}.\textsc{Approximate} first sets $\overline{\vs}_e\leftarrow \bm{s}^{(k-1)}_e$ for all coordinates $e$ where $\bm{w}_e$ changed in the last \textsc{Reweight}, then sets $\overline{\vs}_e\leftarrow \bm{s}^{(k)}_e$ for $O(N_k\stackrel{\mathrm{\scriptscriptstyle def}}{=} 2^{2\ell_{k}}(\frac{\beta}{\overline{\varepsilon}})^{2}\log^{2}m)$ coordinates $e$, where $\ell_{k}$ is the largest integer
$\ell$ with $k\equiv 0\pmod{2^{\ell}}$ when $k\neq 0$, and $\ell_0=0$.
For the second point, \textsc{Move} updates ${\bm{z}^{(\mathrm{step})}}$ and ${\bm{z}^{(\mathrm{sum})}}$ on $\elim{H}$ for $\widetilde{O}(K)$ different nodes $H \in \mathcal{T}$ by \cref{thm:maintain_representation}.
\textsc{Reweight} then updates ${\bm{z}^{(\mathrm{step})}}$ and ${\bm{z}^{(\mathrm{sum})}}$ on $F_H$ for $\widetilde{O}(K)$ different nodes, and updates the tree operator $\mathbf{W}^{-1/2}\mathbf{M}$ on $\widetilde{O}(K)$ different edge and leaf operators. In turn, it updates $\bm{y}$ on $E(H)$ for $\widetilde{O}(K)$ leaf nodes $H$. Now, we apply \cref{thm:VectorTreeMaintain} and the complexity of the tree operator to conclude the desired amortized runtime.
\end{proof}
\end{proof}
\section{Flow projection} \label{sec:flow_projection}
In this section, we define the flow tree operator as required to use \textsc{MaintainRep}.
We then give the full flow maintenance data structure.
During the IPM, we maintain $\bm{f} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \hat{\bm{f}} - \bm{f}^{\perp}$ by maintaining the two terms separately. For IPM step $k$ with direction $\bm{v}^{(k)}$ and step size $h$, we update them as follows:
\begin{align*}
\hat{\bm{f}} &\leftarrow \hat{\bm{f}} + h \mathbf{W}^{1/2} \bm{v}^{(k)}, \\
\label{eq:tf-update}
\bm{f}^{\perp} &\leftarrow \bm{f}^{\perp} + h \mathbf{W}^{1/2} \widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}^{(k)},
\end{align*}
where $\widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}^{(k)}$ satisfies $\norm{\widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}^{(k)} - \mathbf{P}_{\bm{w}} \bm{v}^{(k)}}_2 \leq \varepsilon \norm{\bm{v}^{(k)}}_2$ for some factor $\varepsilon$, and $\mathbf{B}^\top \mathbf{W}^{1/2} \widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}^{(k)} = \mathbf{B}^\top \mathbf{W}^{1/2} \bm{v}^{(k)}$. We will include the initial value of $\bm{f}$ in $\hat{\bm{f}}$.
Maintaining $\hat{\bm{f}}$ is straightforward; in the following section, we focus on $\bm{f}^{\perp}$.
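Concretely, $\hat{\bm{f}}$ can be kept as a base vector plus a scalar multiple of $\mathbf{W}^{1/2}\bm{v}$, so that a step only increments the scalar; this mirrors the pair $(\hat{\bm{f}}_0, \hat c)$ used in the data structure later in this section. A toy sketch of this bookkeeping (the class and variable names are illustrative, not the paper's code):

```python
import numpy as np

class HatF:
    """Maintain f_hat = f0 + c * sqrt(w) * v implicitly."""
    def __init__(self, f_init, v, w):
        self.f0, self.c = f_init.astype(float), 0.0
        self.v, self.w = v.astype(float), w.astype(float)

    def move(self, alpha, v_new):
        # f_hat += alpha * sqrt(w) * v_new; compensate f0 for the change in v
        dv = v_new - self.v
        self.f0 -= self.c * np.sqrt(self.w) * dv
        self.v = v_new
        self.c += alpha

    def reweight(self, w_new):
        # the value of f_hat must not change under a weight update
        self.f0 -= self.c * (np.sqrt(w_new) - np.sqrt(self.w)) * self.v
        self.w = w_new

    def value(self):
        return self.f0 + self.c * np.sqrt(self.w) * self.v

rng = np.random.default_rng(1)
m = 6
f = rng.standard_normal(m)                 # explicit reference copy
hf = HatF(f, rng.standard_normal(m), rng.uniform(1, 2, m))
for _ in range(5):
    a, v_new = rng.standard_normal(), rng.standard_normal(m)
    hf.move(a, v_new)
    f = f + a * np.sqrt(hf.w) * v_new      # explicit update
    hf.reweight(rng.uniform(1, 2, m))      # must not change the value
assert np.allclose(f, hf.value())
```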
\subsection{Tree operator for flow}
We hope to use \textsc{MaintainRep} to maintain $\bm{f}^{\perp}$ throughout the IPM. In order to do so, it remains to define a flow tree operator $\mathbf{M}^{{(\mathrm{flow})}}$ so that
\[
\mathbf{W}^{1/2} \widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}^{(k)} = \mathbf{M}^{{(\mathrm{flow})}} \bm{z}^{(k)},
\]
where $\widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}$ satisfies the constraints mentioned above,
and $\bm{z}^{(k)} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \widetilde{\mathbf{\Gamma}} \mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(0)} \mathbf{B}^\top \mathbf{W}^{1/2} \bm{v}^{(k)}$.
We will define a flow projection tree operator $\mathbf{M}$ so that $\tilde{\bm{f}} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \mathbf{M} \bm{z}^{(k)}$ satisfies
$\norm{\tilde{\bm{f}} - \mathbf{P}_{\bm{w}} \bm{v}^{(k)}}_2 \leq O(\eta \varepsilonsc) \norm{\bm{v}^{(k)}}_2$
and $\mathbf{B}^{\top} \mathbf{W}^{1/2} \tilde{\bm{f}} = \mathbf{B}^{\top} \mathbf{W}^{1/2} \bm{v}^{(k)}$.
This means it is feasible to set $\widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}^{(k)} = \tilde{\bm{f}}$.
Then, we define $\mathbf{M}^{{(\mathrm{flow})}} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \mathbf{W}^{1/2} \mathbf{M}$.
For the remainder of the section, we abuse notation and use $\bm{z}$ to mean $\bm{z}^{(k)}$ for one IPM step $k$.
\begin{definition}[Flow projection tree operator] \label{defn:flow-forest-operator}
Let $\mathcal{T}$ be the separator tree from data structure \textsc{DynamicSC}, with Laplacians $\mathbf{L}^{(H)}$ and $\widetilde{\mathbf{Sc}}(\mathbf{L}^{(H)}, \partial H)$ at each node $H \in \mathcal{T}$.
We use $\mathbf{B}[H]$ to denote the incidence matrix of $G$ restricted to the region $H$.
To define the flow projection tree operator $\mathbf{A}thbf{M}$, we proceed as follows:
The tree operator is supported on the tree $\mathcal{T}$.
For a node $H \in \mathcal{T}$ with parent $P$, define the tree edge operator $\mathbf{M}_{(H, P)}$ as:
\begin{equation}
\label{eq:flow-forest-op-defn}
\mathbf{M}_{(H, P)} \stackrel{\mathrm{\scriptscriptstyle def}}{=} (\mathbf{L}^{(H)})^{-1} \widetilde{\mathbf{Sc}}(\mathbf{L}^{(H)}, \partial H).
\end{equation}
At each node $H$, the set $F_H$ in the tree operator is the set of eliminated vertices $F_H$ defined in the separator tree.
At each leaf node $H$ of $\mathcal{T}$, we have the leaf operator $\mathbf{J}_H=\mathbf{W}^{1/2} \mathbf{B}[H]$.
\end{definition}
Before we give intuition and formally prove the correctness of the flow tree operator, we examine its complexity.
\begin{lemma}
\label{lem:flow _operator_complexity}
The complexity of the flow tree operator as defined in \cref{defn:flow-forest-operator} is $T(k) = \widetilde{O}(\sqrt{mk}\cdot \varepsilonsc^{-2})$, where $\varepsilonsc$ is the overall approximation factor from data structure \textsc{DynamicSC}.
\end{lemma}
\begin{proof}
Let $\mathbf{M}_{(H, P)}$ be a tree edge operator. Note that it is a symmetric matrix.
For any leaf node $H$, $H$ has a constant number of edges, and it takes constant time to compute $\mathbf{J}_H \bm{u}$ for any vector $\bm{u}$. The number of vertices may be larger, but the number of nonzeros of $\mathbf{J}_H=\mathbf{W}^{1/2} \mathbf{B}[H]$ depends only on the number of edges.
If $H$ is not a leaf node, then computing $\mathbf{M}_{(H, P)} \bm{u}$ consists of multiplying with $\widetilde{\mathbf{Sc}}(\mathbf{L}^{(H)}, \partial H)$ and solving a Laplacian system in $\mathbf{L}^{(H)}$.
By \cref{lem:fastApproxSchur} and \cref{thm:laplacianSolver}, this can be done in $\widetilde{O}(\varepsilonsc^{-2} \cdot |\partial H|)$ time.
To bound the total cost over $k$ distinct edges, we apply \cref{lem:planarBoundChangeCost}, which gives the claimed complexity.
\end{proof}
\begin{theorem}\label{thm:flow-forest-operator-correctness}
Let $\bm{v} \in \mathbb{R}^m$, and let $\bm{z} = \widetilde{\mathbf{\Gamma}} \mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(0)} \mathbf{B}^\top \mathbf{W}^{1/2} \bm{v}$.
Let $\mathbf{M}$ be the flow projection tree operator from \cref{defn:flow-forest-operator}.
Suppose $\varepsilonsc=O(1/ \log m)$ is the overall approximation factor from \textsc{DynamicSC}.
Then $\tilde{\bm{f}} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \mathbf{M} \bm{z}$ satisfies $\mathbf{B}^\top \mathbf{W}^{1/2} \tilde{\bm{f}} = \mathbf{B}^\top \mathbf{W}^{1/2} \bm{v}$ and $\norm{\tilde{\bm{f}} - \mathbf{P}_{\bm{w}} \bm{v}}_2 \leq O(\eta \varepsilonsc) \norm{\bm{v}}_2$.
\end{theorem}
The remainder of the section is dedicated to proving this theorem.
Fix $\bm{v}$ for the remainder of this section. Let $\bm{d} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \mathbf{B}^\top \mathbf{W}^{1/2} \bm{v} \in \mathbb{R}^n$;
since it is supported on the vertices of $G$ and its entries sum to 0, it is a \emph{demand} vector.
In the first part of the proof, we show that $\tilde{\bm{f}}$ routes the demand $\bm{d}$.
Let $\bm{f}^\star \stackrel{\mathrm{\scriptscriptstyle def}}{=} \mathbf{P}_{\bm{w}} \bm{v} = \mathbf{W}^{1/2}\mathbf{B}\mathbf{L}^{-1} \bm{d}$. In the second part of the proof, we show that $\tilde{\bm{f}}$ is close to $\bm{f}^\star$.
Finally, a remark about terminology:
\begin{remark}
If $\mathbf{B}$ is the incidence matrix of a graph, then any vector of the form $\mathbf{B} \bm{x}$ is a flow by definition. Often in this section, we have vectors of the form $\mathbf{W}^{1/2} \mathbf{B} \bm{x}$. In this case, we refer to it as a \emph{weighted flow}. We say a weighted flow $\bm{f}$ routes a demand $\bm{d}$ if $(\mathbf{W}^{1/2} \mathbf{B})^\top \bm{f} = \bm{d}$.
\end{remark}
We proceed with a series of lemmas and their intuition, before tying them together in the overall proof at the end of the section.
\begin{restatable}{lemma}{demandDecomposition}\label{thm:demand-decomposition}
Let $\bm{z} = \widetilde{\mathbf{\Gamma}} \mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(0)} \mathbf{B}^\top \mathbf{W}^{1/2} \bm{v}$ be as given in \cref{thm:flow-forest-operator-correctness}.
For each node $H \in \mathcal{T}$, let $\bm{z}|_{\elim{H}}$ be the sub-vector of $\bm{z}$ supported on the vertices $\elim{H}$, and define the demand
\[
\bm{d}^{(H)} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \mathbf{L}^{(H)} \bm{z}|_{\elim{H}}.
\]
Then $\bm{d} = \sum_{H \in \mathcal{T}} \bm{d}^{(H)}$.
\end{restatable}
\begin{proof}
In the proof, note that all $\mathbf{I}$ are $n \times n$ matrices,
and we implicitly pad all vectors with the necessary zeros to match the dimensions.
For example, $\bm{z}|_{F_H}$ below should be viewed as an $n$-dimensional vector supported on $F_H$.
Define
\[\mathbf{X}^{(i)} = \sum_{H \in \mathcal{T}(i)} \mathbf{X}^{(H)}.
\]
We have
\[
\mathbf{\Pi}^{(i)} = \mathbf{I} - \mathbf{X}^{(i)}
= \mathbf{I} - \sum_{H \in \mathcal{T}(i)} \mathbf{L}^{(H)}_{\bdry{H}, F_H} \left( \mathbf{L}^{(H)}_{F_H, F_H}\right)^{-1}.
\]
Suppose $H$ is at level $i$ of $\mathcal{T}$. We have
\begin{align}
\bm{z}|_{F_H}
& = (\mathbf{L}_{F_H,F_H}^{(H)})^{-1} \mathbf{\Pi}^{(\eta-1)}\cdots\mathbf{\Pi}^{(1)}\mathbf{\Pi}^{(0)} \bm{d} \nonumber \\
& = (\mathbf{L}_{F_H,F_H}^{(H)})^{-1} \mathbf{\Pi}^{(i-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)} \bm{d}, \label{eq:z_FH-alt}
\end{align}
where we use the fact $\text{Im}(\mathbf{X}^{(H')}) \cap F_H = \emptyset$ if $\eta(H') \geq i$.
From this expression for $\bm{z}|_{F_H}$, we have
\begin{align*}
\bm{d}^{(H)} &\stackrel{\mathrm{\scriptscriptstyle def}}{=} \mathbf{L}^{(H)} \bm{z}|_{F_H} \\
&= \mathbf{L}^{(H)}_{\partial H, F_H} \bm{z}|_{F_H} + \mathbf{L}^{(H)}_{F_H, F_H} \bm{z}|_{F_H} \\
&= \mathbf{X}^{(H)} (\mathbf{\Pi}^{(i-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)}\bm{d})_{F_H} + (\mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)}\bm{d})|_{F_H},
\end{align*}
where the last line follows from \cref{eq:z_FH-alt}. By padding zeros to $\mathbf{X}^{(H)}$, we can write the equation above as
$$\bm{d}^{(H)} = \mathbf{X}^{(H)} \mathbf{\Pi}^{(i-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)}\bm{d} + (\mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)}\bm{d})|_{F_H}.$$
Now, computing the sum, we have
\begin{align*}
\sum_{H \in \mathcal{T}} \bm{d}^{(H)}
&=\sum_{i=0}^{\eta}\sum_{H\in \mathcal{T}(i)}\mathbf{X}^{(H)}\mathbf{\Pi}^{(i-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)} \bm{d} +\sum_{i=0}^{\eta}\sum_{H\in \mathcal{T}(i)} (\mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)} \bm{d})|_{F_H}\\
&=\left(\sum_{i=0}^{\eta}\mathbf{X}^{(i)} \mathbf{\Pi}^{(i-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)} \bm{d}\right) + \mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)} \bm{d} \tag{$F_H$ partition $V(G)$}\\
&=\left(\sum_{i=0}^{\eta-1}(\mathbf{I}-\mathbf{\Pi}^{(i)})\mathbf{\Pi}^{(i-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)} \bm{d}\right) + \mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(1)} \mathbf{\Pi}^{(0)} \bm{d} \\
&= \bm{d}, \tag{telescoping sum}
\end{align*}
completing our proof.
\end{proof}
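The final telescoping step uses only the algebraic identity $\sum_{i=0}^{\eta-1}(\mathbf{I}-\mathbf{\Pi}^{(i)})\,\mathbf{\Pi}^{(i-1)}\cdots\mathbf{\Pi}^{(0)} + \mathbf{\Pi}^{(\eta-1)}\cdots\mathbf{\Pi}^{(0)} = \mathbf{I}$, which holds for arbitrary square matrices $\mathbf{\Pi}^{(i)}$. A quick numeric sanity check with random matrices (sizes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
n, eta = 5, 4
Pi = [rng.standard_normal((n, n)) for _ in range(eta)]

def prefix(i):
    """Pi^(i-1) ... Pi^(0); the identity when i == 0."""
    P = np.eye(n)
    for j in range(i):
        P = Pi[j] @ P
    return P

# sum of (I - Pi^(i)) prefix(i) telescopes to I - prefix(eta)
total = sum((np.eye(n) - Pi[i]) @ prefix(i) for i in range(eta)) + prefix(eta)
assert np.allclose(total, np.eye(n))
```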
Next, we examine the feasibility of $\tilde{\bm{f}}$. To begin, we introduce a decomposition of $\tilde{\bm{f}}$ based on the decomposition of $\bm{d}$, and prove its feasibility.
\begin{definition}
Let $\mathbf{M}^{(H)}$ be the flow tree operator supported on the tree $\mathcal{T}_H \in \mathcal{F}$ (\cref{defn:subtree-operator}). We define the flow $\tilde{\bm{f}}^{(H)} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \mathbf{M}^{(H)} \bm{z} = \mathbf{M}^{(H)} \bm{z}|_{F_H}$.
\end{definition}
\begin{lemma}\label{lem:flow-forest-operator-feasibility}
We have that $(\mathbf{W}^{1/2} \mathbf{B})^\top \tilde{\bm{f}}^{(H)} = \bm{d}^{(H)}$. In other words, the weighted flow $\tilde{\bm{f}}^{(H)}$ routes the demand $\bm{d}^{(H)}$ \emph{using the edges of the original graph $G$}.
\end{lemma}
\begin{proof}
We will first show inductively that for each $H \in \mathcal{T}$, we have $\mathbf{B}^\top \mathbf{W}^{1/2} \mathbf{M}^{(H)} = \mathbf{L}^{(H)}$.
In the base case, if $H$ is a leaf node of $\mathcal{T}$, then $\mathcal{T}_H$ is a tree with root $H$ and a single leaf node under it.
Then $\mathbf{M}^{(H)} = \mathbf{W}^{1/2} \mathbf{B}[H]$.
It follows that
\[
\mathbf{B}^\top \mathbf{W}^{1/2} \mathbf{M}^{(H)} = \mathbf{B}^\top \mathbf{W}^{1/2} \mathbf{W}^{1/2} \mathbf{B}[H] = \mathbf{L}^{(H)},
\]
by definition of $\mathbf{L}^{(H)}$ for a leaf $H$ of $\mathcal{T}$.
In the other case, $H$ is not a leaf node of $\mathcal{T}$. Let $D_1, D_2$ be the two children of $H$. Then
\begin{align*}
\mathbf{B}^\top \mathbf{W}^{1/2} \mathbf{M}^{(H)} &= \mathbf{B}^\top \mathbf{W}^{1/2}
\left( \mathbf{M}^{(D_1)} \mathbf{M}_{(D_1, H)} +
\mathbf{M}^{(D_2)} \mathbf{M}_{(D_2, H)} \right) \\
&= \mathbf{L}^{(D_1)} \mathbf{M}_{(D_1, H)} +
\mathbf{L}^{(D_2)} \mathbf{M}_{(D_2, H)} \tag{by induction} \\
&= \mathbf{L}^{(D_1)} (\mathbf{L}^{(D_1)})^{-1} \widetilde{\mathbf{Sc}}(\mathbf{L}^{(D_1)}, \partial D_1) +
\mathbf{L}^{(D_2)} (\mathbf{L}^{(D_2)})^{-1} \widetilde{\mathbf{Sc}}(\mathbf{L}^{(D_2)}, \partial D_2) \\
&= \widetilde{\mathbf{Sc}}(\mathbf{L}^{(D_1)}, \partial D_1) + \widetilde{\mathbf{Sc}}(\mathbf{L}^{(D_2)}, \partial D_2) \\
&= \mathbf{L}^{(H)}.
\end{align*}
Finally, we conclude that $\mathbf{B}^\top \mathbf{W}^{1/2} \tilde{\bm{f}}^{(H)} = \mathbf{B}^\top \mathbf{W}^{1/2} \mathbf{M}^{(H)} \bm{z}|_{F_H} = \mathbf{L}^{(H)} \bm{z}|_{F_H} = \bm{d}^{(H)}$, where the last equality follows by definition of $\bm{d}^{(H)}$.
\end{proof}
We observe an orthogonality property of the flows, which will become useful later:
\begin{lemma} \label{lem:orthogonal-flows}
For any nodes $H, H'$ at the same level in $\mathcal{T}$, $\range{\mathbf{M}^{(H)}}$ and $\range{\mathbf{M}^{(H')}}$ are supported on disjoint sets of edges.
Consequently, the flows $\tilde{\bm{f}}^{(H)}$ and $\tilde{\bm{f}}^{(H')}$ are orthogonal.
\end{lemma}
\begin{proof}
Recall that the leaves of $\mathcal{T}$ correspond to pairwise edge-disjoint, constant-sized regions of the original graph $G$.
Since $H$ and $H'$ are at the same level in $\mathcal{T}$, we know $\mathcal{T}_H$ and $\mathcal{T}_{H'}$ have disjoint sets of leaves.
The range of $\mathbf{M}^{(H)}$ is supported on edges in the regions given by leaves of $\mathcal{T}_H$, and analogously for the range of $\mathbf{M}^{(H')}$.
\end{proof}
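The lemma uses nothing beyond the fact that vectors with disjoint supports have zero inner product; a tiny check (the supports below are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 8
f_H, f_Hp = np.zeros(m), np.zeros(m)
f_H[:4] = rng.standard_normal(4)    # edges under the leaves of T_H
f_Hp[4:] = rng.standard_normal(4)   # disjoint edge set for T_{H'}
assert f_H @ f_Hp == 0.0            # disjoint supports imply orthogonality
```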
\input{flow_operator}
\subsection{Proof of \crtcref{thm:FlowMaintain}}
Finally, we present the overall flow maintenance data structure.
It is analogous to slack, except during each \textsc{Move} operation, there is an additional term of $\alpha \mathbf{W}^{1/2} \bm{v}$.
\label{subsec:flow-main-proof}
\FlowMaintain*
\begin{algorithm}
\caption{Flow Maintenance, Main Algorithm}\label{alg:flow-maintain-main}
\begin{algorithmic}[1]
\State \textbf{data structure} \textsc{MaintainFlow} \textbf{extends} \textsc{MaintainZ}
\State \textbf{private: member}
\State \hspace{4mm} $\bm{w} \in \mathbb{R}^m$: weight vector \Comment we use the diagonal matrix $\mathbf{W}$ interchangeably
\State \hspace{4mm} $\bm{v} \in \mathbb{R}^m$: direction vector
\State \hspace{4mm} \textsc{MaintainRep} $\texttt{maintainRep}$: data structure to implicitly maintain
\[
\bm{f}^{\perp} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \bm{y} + \mathbf{W}^{1/2} \mathbf{M} (c {\bm{z}^{(\mathrm{step})}} + {\bm{z}^{(\mathrm{sum})}}).
\]
\Comment $\mathbf{M}$ is defined by \cref{defn:flow-forest-operator}
\State \hspace{4mm} $\hat c \in \mathbb{R}, \hat{\bm{f}}_0 \in \mathbb{R}^m$: scalar and vector to implicitly maintain
\[
\hat{\bm{f}} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \hat{\bm{f}}_0 + \hat c \cdot \mathbf{W}^{1/2} \bm{v}.
\]
\State \hspace{4mm} \textsc{MaintainApprox} \texttt{bar\_f}: data structure to maintain approximation $\overline{\vf}$ to $\bm{f}$ (\cref{thm:VectorTreeMaintain})
\State
\Procedure{Initialize}{$G,\bm{f}^{{(\mathrm{init})}} \in\mathbb{R}^{m},\bm{v}\in \mathbb{R}^{m}, \bm{w}\in\mathbb{R}_{>0}^{m},\varepsilonsc>0,
\overline{\varepsilon}>0$}
\State Build the separator tree $\mathcal{T}$ by \cref{thm:separator_tree_construction}
\State $\texttt{maintainRep}.\textsc{Initialize}(G, \mathcal{T}, \mathbf{W}^{1/2}\mathbf{M}, \bm{v}, \bm{w}, \bm{0}, \varepsilonsc)$
\Comment{initialize $\bm{f}^{\perp} \leftarrow \bm{0}$}
\State $\bm{w} \leftarrow \bm{w}, \bm{v} \leftarrow \bm{v}$
\State $\hat{c}\leftarrow 0, \hat{\bm{f}}_0\leftarrow \bm{f}^{(\mathrm{init})}$ \Comment initialize $\hat{\bm{f}} \leftarrow \bm{f}^{{(\mathrm{init})}}$\label{line:flowinitf0}
\State $\texttt{bar\_f}.\textsc{Initialize}(-\mathbf{W}^{1/2}\mathbf{M},c, {\bm{z}^{(\mathrm{step})}},{\bm{z}^{(\mathrm{sum})}}, -\bm{y}+\hat{\bm{f}}_0+\hat{c}\cdot \mathbf{W}^{1/2} \bm{v}, \mathbf{W}^{-1}, n^{-5}, \overline{\varepsilon})$
\State \Comment{initialize $\overline{\vf} \leftarrow \bm{f}^{{(\mathrm{init})}}$}
\EndProcedure
\State
\Procedure{Reweight}{$\bm{w}^{(\mathrm{new})} \in \mathbb{R}^m_{>0}$}
\State $\texttt{maintainRep}.\textsc{Reweight}(\bm{w}^{(\mathrm{new})})$
\State $\Delta (\mathbf{W}^{1/2}) \leftarrow (\mathbf{W}^{(\mathrm{new})})^{1/2} - \mathbf{W}^{1/2}$
\State $\bm{w} \leftarrow \bm{w}^{(\mathrm{new})}$
\State $\hat{\bm{f}}_0 \leftarrow \hat{\bm{f}}_0 - \hat{c} \, \Delta (\mathbf{W}^{1/2}) \bm{v}$
\Comment{maintain the value of $\hat{\bm{f}}$ across the weight change}
\EndProcedure
\State
\Procedure{Move}{$\alpha,\bm{v}^{(\mathrm{new})} \in \mathbb{R}^m$}
\State $\texttt{maintainRep}.\textsc{Move}(\alpha, \bm{v}^{(\mathrm{new})})$
\State $\Delta \bm{v} \leftarrow \bm{v}^{(\mathrm{new})} - \bm{v}$
\State $\bm{v}\leftarrow \bm{v}^{(\mathrm{new})}$
\State $\hat{\bm{f}}_0 \leftarrow \hat{\bm{f}}_0 - \hat{c} \mathbf{W}^{1/2} \Delta \bm{v}$
\State $\hat{c}\leftarrow\hat{c}+\alpha$
\EndProcedure
\State
\Procedure{Approximate}{ }
\State \Comment the variables in the argument are accessed from \texttt{maintainRep}
\State \Return $\texttt{bar\_f}.\textsc{Approximate}({-\mathbf{W}^{1/2}\mathbf{M}},c, {\bm{z}^{(\mathrm{step})}},{\bm{z}^{(\mathrm{sum})}},-\bm{y}+\hat{\bm{f}}_0+\hat{c}\cdot \mathbf{W}^{1/2} \bm{v}, \mathbf{W}^{-1})$
\EndProcedure
\State
\Procedure{Exact}{ }
\State $\bm{f}^{\perp} \leftarrow \texttt{maintainRep}.\textsc{Exact}()$
\State \Return $(\hat{\bm{f}}_0+\hat{c}\cdot \mathbf{W}^{1/2} \bm{v})- \bm{f}^{\perp}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{proof}[Proof of \cref{thm:FlowMaintain}]
We have the additional invariant that the IPM flow solution $\bm{f}$ can be recovered in the data structure by the identity
\begin{align} \label{eq:vf-decomp}
\bm{f} &= \hat{\bm{f}} - \bm{f}^{\perp},
\end{align}
where $\bm{f}^{\perp}$ is implicitly maintained by \texttt{maintainRep}, and $\hat{\bm{f}}$ is implicitly maintained via the identity $\hat{\bm{f}} = \hat{\bm{f}}_0 + \hat c \mathbf{W}^{1/2} \bm{v}$.
We prove the runtime and correctness of each procedure separately.
Recall by \cref{lem:slack-edge-operator-correctness}, the tree operator $\mathbf{M}$ has complexity $T(K) = O(\varepsilonsc^{-2} \sqrt{mK})$.
\paragraph{\textsc{Initialize}:}
By the initialization of \texttt{maintainRep}~(\cref{thm:maintain_representation}), the implicit representation of $\bm{f}^{\perp}$ in \texttt{maintainRep}~is correct and $\bm{f}^{\perp} = \bm{0}$.
We then set $\hat{\bm{f}} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \hat{\bm{f}}_0 + \hat c \mathbf{W}^{1/2} \bm{v} = \bm{f}^{(\mathrm{init})}$. So overall, we have $\bm{f} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \hat{\bm{f}} - \bm{f}^{\perp} = \bm{f}^{(\mathrm{init})}$.
By the initialization of \texttt{bar\_f}, $\overline{\vf}$ is set to $\bm{f} = \bm{f}^{(\mathrm{init})}$ to start.
Initialization of $\texttt{maintainRep}$ takes $\widetilde{O}(m\varepsilonsc^{-2})$ time by \cref{thm:maintain_representation},
and the initialization of $\texttt{bar\_f}$ takes $\widetilde{O}(m)$ time by \cref{thm:VectorTreeMaintain}.
\paragraph{\textsc{Reweight}:}
The change to the representation in $\bm{f}^{\perp}$ is correct via \texttt{maintainRep}~in exactly the same manner as the proof for the slack solution.
For the representation of $\hat{\bm{f}}$, the change in value caused by the update to $\bm{w}$ is subtracted from the $\hat{\bm{f}}_0$ term, so that the representation is updated while the overall value remains the same.
\paragraph{\textsc{Move}:}
This is similar to the proof for the slack solution.
$\texttt{maintainRep}.\textsc{Move}(\alpha, \bm{v}^{(k)})$ updates the implicit representation of $\bm{f}^{\perp}$ by
\[
\bm{f}^{\perp} \leftarrow \bm{f}^{\perp} + \mathbf{W}^{1/2} \mathbf{M} \alpha \bm{z}^{(k)},
\]
where $\mathbf{M}$ is the flow projection tree operator defined in \cref{defn:flow-forest-operator}.
By \cref{lem:slack-operator-correctness}, this is equivalent to the update
\[
\bm{f}^{\perp} \leftarrow \bm{f}^{\perp} + \alpha \mathbf{W}^{1/2} \tilde{\bm{f}},
\]
where $\norm{\tilde{\bm{f}} - \mathbf{P}_{\bm{w}} \bm{v}^{(k)}}_2 \leq O(\eta \varepsilonsc) \norm{\bm{v}^{(k)}}_2$ and $\mathbf{B}^{\top} \mathbf{W}^{1/2} \tilde{\bm{f}} = \mathbf{B}^{\top} \mathbf{W}^{1/2} \bm{v}^{(k)}$ by \cref{thm:flow-forest-operator-correctness}.
For the $\hat{\bm{f}}$ term, let $\hat{\bm{f}}_0', \hat c', \bm{v}'$ be the state of $\hat{\bm{f}}_0, \hat c$ and $\bm{v}$ at the start of the procedure, and similarly let $\hat{\bm{f}}'$ be the state of $\hat{\bm{f}}$ at the start.
At the end of the procedure, we have
\[
\hat{\bm{f}} \stackrel{\mathrm{\scriptscriptstyle def}}{=} \hat{\bm{f}}_0 + \hat c \mathbf{W}^{1/2} \bm{v} = \hat{\bm{f}}_0' - \hat c' \mathbf{W}^{1/2} \Delta \bm{v} + (\hat c' + \alpha) \mathbf{W}^{1/2} \bm{v} = \hat{\bm{f}}_0' + \hat c' \mathbf{W}^{1/2} \bm{v}' + \alpha \mathbf{W}^{1/2} \bm{v} = \hat{\bm{f}}' + \alpha \mathbf{W}^{1/2} \bm{v},
\]
so we have the correct update $\hat{\bm{f}} \leftarrow \hat{\bm{f}} + \alpha \mathbf{W}^{1/2} \bm{v}$. Combined with $\bm{f}^{\perp}$, the update to $\bm{f}$ is
\[
\bm{f} \leftarrow \bm{f} + \alpha \mathbf{W}^{1/2} \bm{v} - \alpha \mathbf{W}^{1/2} \tilde{\bm{f}}.
\]
By \cref{thm:maintain_representation}, if $\bm{v}^{(k)}$ differs from $\bm{v}^{(k-1)}$ on $K$ coordinates, then the runtime of \texttt{maintainRep}~is $\widetilde{O}(\varepsilonsc^{-2}\sqrt{mK})$. Furthermore, ${\bm{z}^{(\mathrm{step})}}$ and ${\bm{z}^{(\mathrm{sum})}}$ change on $\elim{H}$ for at most $\widetilde{O}(K)$ nodes in $\mathcal{T}$. Updating $\hat{\bm{f}}$ takes $O(K)$ time where $K \leq O(m)$, giving us the overall claimed runtime.
\paragraph{\textsc{Approximate}:}
By the guarantee of \texttt{bar\_f}.\textsc{Approximate} from \cref{thm:VectorTreeMaintain},
the returned vector satisfies
$\|\mathbf{W}^{-1/2}\left( \overline{\vf} - (\hat{\bm{f}} - \bm{f}^{\perp})\right)\|_{\infty}\leq\overline{\varepsilon}$,
where $\hat{\bm{f}}$ and $\bm{f}^{\perp}$ are maintained in the current data structure.
\paragraph{\textsc{Exact}:}
The runtime and correctness follow from the guarantee of $\texttt{maintainRep}.\textsc{Exact}$ given in \cref{thm:maintain_representation} and the invariant that $\bm{f} = \hat{\bm{f}} - \bm{f}^{\perp}$.
Finally, we have the following lemma about the runtime for \textsc{Approximate}. Let $\overline{\vf}^{(k)}$ denote the returned approximate vector at step $k$.
\begin{lemma}
Suppose $\alpha\|\bm{v}\|_{2}\leq\beta$ for some $\beta$ for all calls to \textsc{Move}.
Let $K$ denote the total number of coordinates changed in $\bm{v}$ and $\bm{w}$ between the $(k-1)$-th and $k$-th \textsc{Reweight} and \textsc{Move} calls. Then at the $k$-th \textsc{Approximate} call,
\begin{itemize}
\item The data structure first sets $\overline{\vf}_e\leftarrow \bm{f}^{(k-1)}_e$ for all coordinates $e$ where $\bm{w}_e$ changed in the last \textsc{Reweight}, then sets $\overline{\vf}_e\leftarrow \bm{f}^{(k)}_e$ for $O(N_k\stackrel{\mathrm{\scriptscriptstyle def}}{=} 2^{2\ell_{k}}(\frac{\beta}{\overline{\varepsilon}})^{2}\log^{2}m)$ coordinates $e$, where $\ell_{k}$ is the largest integer $\ell$ with $k=0\mod2^{\ell}$ when $k\neq 0$, and $\ell_0=0$.
\item The amortized time for the $k$-th \textsc{Approximate} call
is $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}\sqrt{m(K+N_{k-2^{\ell_k}})})$.
\end{itemize}
\end{lemma}
\begin{proof}
The proof is similar to the one for slack.
Since ${\overline{\vf}}$ is maintained by \texttt{bar\_f}, we apply \cref{thm:VectorTreeMaintain} with $\bm{x} = \overline{\vf}$ and diagonal matrix $\mathbf{D} = \mathbf{W}^{-1}$.
We need to prove $\|\bm{x}^{(k)}-\bm{x}^{(k-1)}\|_{\mathbf{D}^{(k)}}\le O(\beta)$ for all $k$ first. The constant factor in $O(\beta)$ does not affect the guarantees in \cref{thm:VectorTreeMaintain}. The left-hand side is
\begin{align*}
\norm{{\bm{f}}^{(k)}-{\bm{f}}^{(k-1)}}_{{\mathbf{W}^{(k)}}^{-1}} &= \norm{-\alpha^{(k)} \mathbf{M} \bm{z}^{(k)}+\alpha^{(k)}\bm{v}^{(k)}}_2 \tag{by \textsc{Move}}\\
&\le \norm{-\alpha^{(k)} \mathbf{M} \bm{z}^{(k)}}_2+\norm{\alpha^{(k)}\bm{v}^{(k)}}_2\\
&\le (2+O(\eta\varepsilon_{\mathbf{P}})) \alpha^{(k)} \|\bm{v}^{(k)}\|_2 \tag{by the assumption that $\alpha\|\bm{v}\|_{2}\leq\beta$}\\
&\le 3 \beta.
\end{align*}
Now, we can apply the conclusions from \cref{thm:VectorTreeMaintain} to get that at the $k$-th step, the data structure first sets $\overline{\vf}_e\leftarrow \bm{f}^{(k-1)}_e$ for all coordinates $e$ where $\bm{w}_e$ changed in the last \textsc{Reweight}, then sets $\overline{\vf}_e\leftarrow \bm{f}^{(k)}_e$ for $O(N_k\stackrel{\mathrm{\scriptscriptstyle def}}{=} 2^{2\ell_{k}}(\frac{\beta}{\overline{\varepsilon}})^{2}\log^{2}m)$ coordinates $e$, where $\ell_{k}$ is the largest integer $\ell$ with $k=0\mod2^{\ell}$ when $k\neq 0$, and $\ell_0=0$.
For the second point, \textsc{Move} updates ${\bm{z}^{(\mathrm{step})}}$ and ${\bm{z}^{(\mathrm{sum})}}$ on $\elim{H}$ for $\widetilde{O}(K)$ different nodes $H \in \mathcal{T}$ by \cref{thm:maintain_representation}.
\textsc{Reweight} then updates ${\bm{z}^{(\mathrm{step})}}$ and ${\bm{z}^{(\mathrm{sum})}}$ on $F_H$ for $\widetilde{O}(K)$ different nodes, and updates the tree operator $\mathbf{W}^{-1/2}\mathbf{M}$ on $\widetilde{O}(K)$ different edge and leaf operators. In turn, it updates $\bm{y}$ on $E(H)$ for $\widetilde{O}(K)$ leaf nodes $H$. The changes of $\hat{\bm{f}}$ cause $O(K)$ changes to the vector $-\bm{y}+\hat{\bm{f}}_0+\hat{c}\cdot \mathbf{W}^{1/2} \bm{v}$, which is the parameter $\bm{y}$ of \cref{thm:VectorTreeMaintain}. Now, we apply \cref{thm:VectorTreeMaintain} and the complexity of the tree operator to conclude the desired amortized runtime.
\end{proof}
\end{proof}
\section{Min-Cost Flow for Separable Graphs}
\label{sec:separable}
In this section, we extend our result to $\alpha$-separable graphs.
\separableMincostflow*
The change in running time essentially comes from the parameters of the separator tree which we shall discuss in \cref{subsec:separable_separator_tree}. We then calculate the total running time and prove \cref{thm:separable_mincostflow} in \cref{subsec:separable_main}.
\subsection{Separator Tree for Separable Graphs}
\label{subsec:separable_separator_tree}
Since our algorithm exploits only the separability of planar graphs, it applies directly to other separable graph classes, with correspondingly different running times. As in the planar case, adding two extra vertices to an $\alpha$-separable graph keeps it $\alpha$-separable, with the constant $c$ in \cref{defn:separable-graph} increased by $2$.
Recall the definition of separable graphs:
\defSeparableGraph*
We define a separator tree $\mathcal{T}$ for an $\alpha$-separable graph $G$ in the same way as for a planar graph.
\begin{definition} [Separator tree $\mathcal{T}$ for $\alpha$-separable graph] \label{defn:separator-tree-separable}
Let $G$ be an $\alpha$-separable graph.
A separator tree $\mathcal{T}$ is a binary tree whose nodes represent subgraphs of $G$ such that the children of each node $H$ form a balanced partition of $H$.
Formally, each node of $\mathcal{T}$ is a \emph{region} (edge-induced subgraph) $H$ of $G$; we denote this by $H \in \mathcal{T}$.
At a node $H$, we store subsets of vertices $\bdry{H}, \sep{H}, \elim{H} \subseteq V(H)$,
where $\bdry{H}$ is the set of \emph{boundary vertices} that are incident to vertices outside $H$ in $G$;
$\sep{H}$ is the balanced vertex separator of $H$;
and $\elim{H}$ is the set of \emph{eliminated vertices} at $H$.
Concretely, the nodes and associated vertex sets are defined recursively in a top-down way as follows:
\begin{enumerate}
\item The root of $\mathcal{T}$ is the node $H = G$, with $\bdry{H} = \emptyset$ and $\elim{H} = \sep{H}$.
\item A non-leaf node $H \in \mathcal{T}$ has exactly two children $D_1, D_2 \in \mathcal{T}$ that form an edge-disjoint partition of $H$, and their vertex sets intersect on the balanced separator $\sep{H}$ of $H$.
Define $\bdry{D_1} = (\bdry{H} \cup \sep{H}) \cap V(D_1)$, and similarly $\bdry{D_2} = (\bdry{H} \cup \sep{H}) \cap V(D_2)$.
Define $\elim{H} = \sep{H} \setminus \bdry{H}$.\label{property: boundary-separable}
\item If a region $H$ contains a constant number of edges, then we stop the recursion and $H$ becomes a leaf node. Further, we define $\sep{H} = \emptyset$ and $\elim{H} = V(H) \setminus \bdry{H}$. Note that by construction, each edge of $G$ is contained in a unique leaf node.
\end{enumerate}
Let $\eta(H)$ denote the height of node $H$ which is defined as the maximum number of edges on a tree path from $H$ to one of its descendants. $\eta(H)=0$ if $H$ is a leaf. Note that the height difference between a parent and child node could be greater than one. Let $\eta$ denote the height of $\mathcal{T}$ which is defined as the maximum height of nodes in $\mathcal{T}$. We say $H$ is at \emph{level} $i$ if $\eta(H)=i$.
\end{definition}
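To make the recursive definition concrete, the following Python sketch builds a separator tree exactly as specified above. The balanced-separator oracle \texttt{find\_separator} is a hypothetical callback supplied by the caller (it is not part of the paper's algorithm), and the leaf threshold is fixed to one edge for simplicity.

```python
# Sketch of the top-down separator-tree construction from the definition above.
# `find_separator` is a placeholder for a balanced-separator oracle for the
# graph class at hand; everything else follows the definition literally.
LEAF_SIZE = 1  # stop once a region has O(1) edges

class Node:
    def __init__(self, edges, boundary):
        self.edges = edges          # region = edge-induced subgraph
        self.boundary = boundary    # dH: vertices incident to the outside
        self.sep = set()            # S(H): balanced separator
        self.elim = set()           # F(H): eliminated vertices
        self.children = []

def vertices(edges):
    return {u for e in edges for u in e}

def build(edges, boundary, find_separator):
    node = Node(edges, boundary)
    if len(edges) <= LEAF_SIZE:                # leaf: S(H) empty, eliminate the rest
        node.elim = vertices(edges) - boundary
        return node
    sep, part1, part2 = find_separator(edges)  # edge-disjoint split meeting on sep
    node.sep = sep
    node.elim = sep - boundary                 # F(H) = S(H) \ dH
    for part in (part1, part2):
        child_bdry = (boundary | sep) & vertices(part)
        node.children.append(build(part, child_bdry, find_separator))
    return node
```

Note that, as in the definition, the root has empty boundary, so its eliminated set coincides with its separator.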
The only two differences between the separator trees for planar and $\alpha$-separable graphs are their construction time and update time (for $k$-sparse updates). For the planar case, these are bounded by \cref{thm:separator_tree_construction} and \cref{lem:planarBoundChangeCost} respectively. We shall prove their analogs \cref{thm:separableDecompT} and \cref{lem:separableBoundChangeCost}.
\cite{GHP18} showed that the separator tree can be constructed in $O(s(n)\log n)$ time for any class of $1/2$-separable graphs where $s(n)$ is the time for computing the separator. The proof can be naturally extended to $\alpha$-separable graphs. We include the extended proofs in \cref{sec:appendix} for completeness.
\begin{restatable}{lemma}{separableDecompT} \label{thm:separableDecompT} Let $\mathcal{C}$ be an $\alpha$-separable class such that we can compute a balanced separator for any graph in $\mathcal{C}$ with $n$ vertices and $m$ edges in $s(m)$ time for some convex function $s(m)\ge m$. Given an $\alpha$-separable graph, there is an algorithm that computes a separator tree $\mathcal{T}$ in $O(s(m) \log m)$ time.
\end{restatable}
Note that $s(\cdot)$ does not depend on $n$ because we may assume the graph is connected so that $n=O(m)$.
We then prove the update time. As in the planar case, we define $\pathT{H}$ to be the set of all ancestors of $H$ in the separator tree, and $\pathT{\mathcal{H}}$ to be the union of $\pathT{H}$ over all $H \in \mathcal{H}$. Then we have the following bound:
\begin{restatable}{lemma}{separableBoundChangeCost}
\label{lem:separableBoundChangeCost}
Let $G$ be an $\alpha$-separable graph with separator tree $\mathcal{T}$. Let $\mathcal{H}$ be a set of $K$ nodes in $\mathcal{T}$. Then
\begin{align*}
\sum_{H \in \pathT{\mathcal{H}}}| \bdry{H}| +|\sep{H}| \leq \widetilde{O}(K^{1-\sepConst} m^\sepConst).
\end{align*}
\end{restatable}
Setting $\alpha = 1/2$ recovers \cref{lem:planarBoundChangeCost} for planar graphs as a corollary.
\subsection{Proof of Running time}
In this section, we prove \cref{thm:separable_mincostflow}. The data structures (except for the construction of the separator tree) will use exactly the same pseudocode as for the planar case. Thus, the correctness can be proven in the same way. We prove the runtimes only.
For the planar case, after constructing the separator tree by \cref{thm:separator_tree_construction}, \cref{lem:planarBoundChangeCost} is the lemma that interacts with other parts of the algorithm. For $\alpha$-separable graphs, we first construct the separator tree in $O(s(m)\log m)$ time by \cref{thm:separableDecompT}. Then we propagate the change in runtime
($\widetilde{O}(\sqrt{mK})$ from \cref{lem:planarBoundChangeCost} to $\widetilde{O}(m^\alpha K^{1-\alpha})$ from \cref{lem:separableBoundChangeCost})
to all the data structures and to the complexity $T(\cdot)$ of the flow and slack tree operators.
We first propagate the change to the implicit representation maintenance data structure, which is the common component for maintaining the flow and the slack vectors.
\begin{theorem} \label{thm:separable_maintain_representation}
Given an $\alpha$-separable graph $G$ with $n$ vertices and $m$ edges, and its separator tree $\mathcal{T}$ with height $\eta$,
the deterministic data structure \textsc{MaintainRep} (\cref{alg:maintain_representation})
maintains the following variables correctly at the end of every IPM step:
\begin{itemize}
\item the dynamic edge weights $\bm{w}$ and step direction $\bm{v}$ from the current IPM step,
\item a \textsc{DynamicSC} data structure on $\mathcal{T}$ based on the current edge weights $\bm{w}$,
\item an implicitly represented tree operator $\mathbf{M}$ supported on $\mathcal{T}$ with complexity $T(K)$, \emph{computable using information from \textsc{DynamicSC}},
\item scalar $c$ and vectors ${\bm{z}^{(\mathrm{step})}}, {\bm{z}^{(\mathrm{sum})}}$, which together represent $\bm{z} = c {\bm{z}^{(\mathrm{step})}} + {\bm{z}^{(\mathrm{sum})}}$,
such that at the end of step $k$,
\[
\bm{z} = \sum_{i=1}^{k} \alpha^{(i)} \bm{z}^{(i)},
\]
where $\alpha^{(i)}$ is the step size $\alpha$ given in \textsc{Move} for step $i$,
\item ${\bm{z}^{(\mathrm{step})}}$ satisfies ${\bm{z}^{(\mathrm{step})}} = \widetilde{\mathbf{\Gamma}} \mathbf{\Pi}^{(\eta-1)} \cdots \mathbf{\Pi}^{(0)} \mathbf{B}^{\top} \mathbf{W}^{1/2} \bm{v}$,
\item an offset vector $\bm{y}$ which together with $\mathbf{M}, \bm{z}$ represents $\bm{x}=\bm{y}+\mathbf{M}\bm{z}$, such that after step $k$,
\[
\bm{x} = \bm{x}^{{(\mathrm{init})}}+\sum_{i=1}^{k} \mathbf{M}^{(i)} (\alpha^{(i)} \bm{z}^{(i)}),
\]
where $\bm{x}^{{(\mathrm{init})}}$ is an initial value from \textsc{Initialize}, and $\mathbf{M}^{(i)}$ is the state of $\mathbf{M}$ after step $i$.
\end{itemize}
The data structure supports the following procedures:
\begin{itemize}
\item $\textsc{Initialize}(G, \mathcal{T}, \mathbf{M}, \bm{v}\in\mathbb{R}^{m},\bm{w}\in\mathbb{R}_{>0}^{m}, \bm{x}^{{(\mathrm{init})}} \in \mathbb{R}^m, \varepsilon_{\mathbf{P}} > 0)$:
Given a graph $G$, its separator tree $\mathcal{T}$, a tree operator $\mathbf{M}$ supported on $\mathcal{T}$ with complexity $T$,
initial step direction $\bm{v}$, initial weights $\bm{w}$, initial vector $\bm{x}^{{(\mathrm{init})}}$, and target projection matrix accuracy $\varepsilon_{\mathbf{P}}$, preprocess in $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}m+T(m))$ time and set $\bm{x} \leftarrow \bm{x}^{{(\mathrm{init})}}$.
\item $\textsc{Reweight}(\bm{w} \in\mathbb{R}_{>0}^{m}$ given implicitly as a set of changed coordinates):
Update the weights to $\bm{w}^{(\mathrm{new})}$.
Update the implicit representation of $\bm{x}$ without changing its value, so that all the variables in the data structure are based on the new weights.
The procedure runs in
$\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}K^{1-\sepConst} m^\sepConst+T(K))$ total time,
where $K$ is an upper bound on the number of coordinates changed in $\bm{w}$ and the number of leaf or edge operators changed in $\mathbf{M}$.
There are at most $\widetilde{O}(K)$ nodes $H\in \mathcal{T}$ for which ${\bm{z}^{(\mathrm{step})}}|_{F_H}$ and ${\bm{z}^{(\mathrm{sum})}}|_{F_H}$ are updated.
\item $\textsc{Move}(\alpha \in \mathbb{R}$, $\bm{v} \in \mathbb{R}^{m}$ given implicitly as a set of changed coordinates):
Update the current direction to $\bm{v}$, and then ${\bm{z}^{(\mathrm{step})}}$ to maintain the claimed invariant.
Update the implicit representation of $\bm{x}$ to reflect the following change in value:
\[
\bm{x} \leftarrow \bm{x} + \mathbf{M} (\alpha {\bm{z}^{(\mathrm{step})}}).
\]
The procedure runs in $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2} K^{1-\sepConst} m^\sepConst)$ time,
where $K$ is the number of coordinates changed in $\bm{v}$ compared to the previous IPM step.
\item $\textsc{Exact}()$:
Output the current exact value of $\bm{x}=\bm{y} + \mathbf{M} \bm{z}$ in $\widetilde{O}(T(m))$ time.
\end{itemize}
\end{theorem}
\begin{proof}
The bottleneck of \textsc{Move} is \textsc{PartialProject}. For each $H \in \mathcal{P}_{\mathcal{T}}(\mathcal{H})$, recall from \cref{thm:apxsc} that $\mathbf{L}^{(H)}$ is supported on the vertex set $\elim{H} \cup \bdry{H}$ and has $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}|\elim{H}\cup \bdry{H}|)$ edges. Hence,
$(\mathbf{L}^{(H)}_{\elim{H},\elim{H}})^{-1} \bm{u}|_{\elim{H}}$ can be computed by an exact Laplacian solver in $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}|\elim{H}\cup \bdry{H}|)$ time, and the subsequent left-multiplication by $\mathbf{L}^{(H)}_{\bdry{H}, \elim{H}}$ also takes $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}|\elim{H}\cup \bdry{H}|)$ time. By \cref{lem:separableBoundChangeCost}, \textsc{PartialProject} takes $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}K^{1-\sepConst} m^\sepConst)$ time, so \textsc{Move} also runs in $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}K^{1-\sepConst} m^\sepConst)$ time.
\textsc{Reweight} calls \textsc{PartialProject} and \textsc{ReversePartialProject} $O(1)$ times and \textsc{ComputeMz} once. \textsc{ReversePartialProject} costs the same as \textsc{PartialProject}, and the runtime of \textsc{ComputeMz} is bounded by the complexity of the tree operator, $O(T(K))$. Thus, \textsc{Reweight} runs in $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}K^{1-\sepConst} m^\sepConst+T(K))$ time.
Runtimes of other procedures and correctness follow from the same argument as in the proof for \cref{thm:maintain_representation}.
\end{proof}
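The scalar-plus-offset representation $\bm{z} = c\,{\bm{z}^{(\mathrm{step})}} + {\bm{z}^{(\mathrm{sum})}}$ that makes \textsc{Move} cheap can be sketched as follows. This is illustrative only (the real data structure stores both vectors implicitly on the separator tree): the point is that accumulating $\alpha\,{\bm{z}^{(\mathrm{step})}}$ into $\bm{z}$ touches only the scalar $c$, so explicit work is spent solely on coordinates where the step direction changed.

```python
# Sketch of the z = c * z_step + z_sum trick: Move only rewrites the changed
# coordinates of z_step (folding their old contribution into z_sum) and then
# bumps the scalar c, which adds alpha * z_step to z in O(1) extra work.
class ScaledAccumulator:
    def __init__(self, n):
        self.c = 0.0
        self.z_step = [0.0] * n
        self.z_sum = [0.0] * n

    def move(self, alpha, changed):
        """changed maps coordinate -> new value of z_step there."""
        for i, new in changed.items():
            # preserve z = c*z_step + z_sum while swapping in the new direction
            self.z_sum[i] += self.c * (self.z_step[i] - new)
            self.z_step[i] = new
        self.c += alpha  # z += alpha * z_step, without touching any vector entry

    def value(self):
        """Materialize z, as Exact() would."""
        return [self.c * s + t for s, t in zip(self.z_step, self.z_sum)]
```

After $k$ calls, \texttt{value()} equals $\sum_{i=1}^{k} \alpha^{(i)} \bm{z}^{(i)}$, matching the invariant in the theorem above.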
Then we may use \cref{thm:separable_maintain_representation} and \cref{thm:VectorTreeMaintain} to maintain vectors $\overline{\vf}, \overline{\vs}$, with the updated complexity of the operators.
\begin{lemma}
\label{lem:separable_operator_complexity}
For any $\alpha$-separable graph $G$ with separator tree $\mathcal{T}$, the flow and slack operators defined in \cref{defn:flow-forest-operator,defn:slack-forest-operator} both have complexity $T(K)=O(\varepsilon_{\mathbf{P}}^{-2}K^{1-\sepConst} m^\sepConst)$.
\end{lemma}
\begin{proof}
The leaf operators of both the flow and slack tree operators have constant size.
Let $\mathbf{M}_{(H, P)}$ be a tree edge operator. Note that it is a symmetric matrix.
For the slack operator,
applying $\mathbf{M}_{(D,P)}=\mathbf{I}_{\bdry{D}}-\left(\mathbf{L}^{(D)}_{\elim{D},\elim{D}}\right)^{-1} \mathbf{L}^{(D)}_{\elim{D}, \bdry{D}}$
on the left or right consists of three steps: applying $\mathbf{I}_{\bdry{D}}$,
applying $\mathbf{L}^{(D)}_{\elim{D}, \bdry{D}}$, and solving $\mathbf{L}^{(D)}_{\elim{D},\elim{D}}\bm{v}=\bm{b}$ for some vectors $\bm{v}$ and $\bm{b}$.
For the flow operator, $\mathbf{M}_{(H, P)} \bm{u}$ consists of multiplying with $\widetilde{\mathbf{Sc}}(\mathbf{L}^{(H)}, \partial H)$ and solving a Laplacian system in $\mathbf{L}^{(H)}$.
Each of these steps costs $O(\varepsilon_{\mathbf{P}}^{-2}|\bdry{D}\cup \elim{D}|)$ time by \cref{lem:fastApproxSchur} and \cref{thm:laplacianSolver}.
To bound the total cost over $K$ distinct edges, we apply \cref{lem:separableBoundChangeCost} instead of \cref{lem:planarBoundChangeCost}, which gives the claimed complexity.
\end{proof}
We then have the following lemmas for maintaining the flow and slack vectors:
\begin{theorem}[Slack maintenance for $\alpha$-separable graphs]
\label{thm:separableSlackMaintain}
Given an $\alpha$-separable graph $G$ with $n$ vertices and $m$ edges, and its separator tree $\mathcal{T}$ with height $\eta$,
the randomized data structure \textsc{MaintainSlack} (\cref{alg:slack-maintain-main})
implicitly maintains the slack solution $\bm{s}$ undergoing IPM changes,
and explicitly maintains its approximation $\overline{\vs}$,
and supports the following procedures with high probability against an adaptive adversary:
\begin{itemize}
\item $\textsc{Initialize}(G,\bm{s}^{{(\mathrm{init})}} \in\mathbb{R}^{m},\bm{v}\in \mathbb{R}^{m}, \bm{w}\in\mathbb{R}_{>0}^{m},\varepsilon_{\mathbf{P}}>0,\overline{\varepsilon}>0)$:
Given a graph $G$, initial solution $\bm{s}^{{(\mathrm{init})}}$, initial direction $\bm{v}$, initial weights $\bm{w}$,
target step accuracy $\varepsilon_{\mathbf{P}}$ and target approximation accuracy
$\overline{\varepsilon}$, preprocess in $\widetilde{O}(m \varepsilon_{\mathbf{P}}^{-2})$ time,
and set the representations $\bm{s} \leftarrow \bm{s}^{{(\mathrm{init})}}$ and $\overline{\vs} \leftarrow \bm{s}$.
\item $\textsc{Reweight}(\bm{w}\in\mathbb{R}_{>0}^{m},$ given implicitly as a set of changed weights):
Set the current weights to $\bm{w}$ in $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}K^{1-\sepConst} m^\sepConst)$ time,
where $K$ is the number of coordinates changed in $\bm{w}$.
\item $\textsc{Move}(t \in\mathbb{R},\bm{v}\in\mathbb{R}^{m} $ given implicitly as a set of changed coordinates):
Implicitly update $\bm{s} \leftarrow \bm{s}+t \mathbf{W}^{-1/2}\widetilde{\mathbf{P}}_{\bm{w}} \bm{v}$ for some
$\widetilde{\mathbf{P}}_{\bm{w}}$ with $\|(\widetilde{\mathbf{P}}_{\bm{w}} -\mathbf{P}_{\bm{w}}) \bm{v} \|_2 \leq\eta\varepsilon_{\mathbf{P}} \norm{\bm{v}}_2$,
and $\widetilde{\mathbf{P}}_{\bm{w}} \bm{v} \in \range{\mathbf{B}}$.
The total runtime is $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}K^{1-\sepConst} m^\sepConst)$ where $K$ is the number of coordinates changed in $\bm{v}$.
\item $\textsc{Approximate}()\rightarrow\mathbb{R}^{m}$: Return the vector $\overline{\vs}$
such that $\|\mathbf{W}^{1/2}(\overline{\vs}-\bm{s})\|_{\infty}\leq\overline{\varepsilon}$
for the current weight $\bm{w}$ and the current vector $\bm{s}$.
\item $\textsc{Exact}()\rightarrow\mathbb{R}^{m}$:
Output the current vector $\bm{s}$ in $\widetilde{O}(m \varepsilon_{\mathbf{P}}^{-2})$ time.
\end{itemize}
Suppose $t\|\bm{v}\|_{2}\leq\beta$ for some $\beta$ for all calls to \textsc{Move}.
Suppose in each step, \textsc{Reweight}, \textsc{Move} and \textsc{Approximate} are called in order. Let $K$ denote the total number of coordinates changed in $\bm{v}$ and $\bm{w}$ between the $(k-1)$-th and $k$-th \textsc{Reweight} and \textsc{Move} calls. Then at the $k$-th \textsc{Approximate} call,
\begin{itemize}
\item the data structure first sets $\overline{\vs}_e\leftarrow \bm{s}^{(k-1)}_e$ for all coordinates $e$ where $\bm{w}_e$ changed in the last \textsc{Reweight}, then sets $\overline{\vs}_e\leftarrow \bm{s}^{(k)}_e$ for $O(N_k\stackrel{\mathrm{\scriptscriptstyle def}}{=} 2^{2\ell_{k}}(\frac{\beta}{\overline{\varepsilon}})^{2}\log^{2}m)$ coordinates $e$, where $\ell_{k}$ is the largest integer $\ell$ with $k=0\mod2^{\ell}$ when $k\neq 0$, and $\ell_0=0$.
\item The amortized time for the $k$-th \textsc{Approximate} call
is $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}(m^{\alpha}(K+N_{k-2^{\ell_k}})^{1-\alpha}))$.
\end{itemize}
\end{theorem}
\begin{proof}
Because $T(m)=\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}m)$ (\cref{lem:separable_operator_complexity}), the runtime of \textsc{Initialize} is still $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}m)$ by \cref{thm:separable_maintain_representation} and \cref{thm:VectorTreeMaintain}. The runtimes of \textsc{Reweight}, \textsc{Move}, and \textsc{Exact} follow from the guarantees of \cref{thm:separable_maintain_representation}. The runtime of \textsc{Approximate} follows from \cref{thm:VectorTreeMaintain} with $T(K)=O(\varepsilon_{\mathbf{P}}^{-2}K^{1-\sepConst} m^\sepConst)$ (\cref{lem:separable_operator_complexity}).
\end{proof}
\begin{theorem}[Flow maintenance for $\alpha$-separable graphs]
\label{thm:separableFlowMaintain}
Given an $\alpha$-separable graph $G$ with $n$ vertices and $m$ edges, and its separator tree $\mathcal{T}$ with height $\eta$,
the randomized data structure \textsc{MaintainFlow} (\cref{alg:flow-maintain-main})
implicitly maintains the flow solution $\bm{f}$ undergoing IPM changes,
and explicitly maintains its approximation $\overline{\vf}$,
and supports the following procedures with high probability against an adaptive adversary:
\begin{itemize}
\item
$\textsc{Initialize}(G,\bm{f}^{{(\mathrm{init})}} \in\mathbb{R}^{m},\bm{v} \in \mathbb{R}^{m}, \bm{w}\in\mathbb{R}_{>0}^{m},\varepsilon_{\mathbf{P}}>0,
\overline{\varepsilon}>0)$: Given a graph $G$, initial solution $\bm{f}^{{(\mathrm{init})}}$, initial direction $\bm{v}$,
initial weights $\bm{w}$, target step accuracy $\varepsilon_{\mathbf{P}}$,
and target approximation accuracy $\overline{\varepsilon}$,
preprocess in $\widetilde{O}(m \varepsilon_{\mathbf{P}}^{-2})$ time and set the internal representations $\bm{f} \leftarrow \bm{f}^{{(\mathrm{init})}}$ and $\overline{\vf} \leftarrow \bm{f}$.
\item $\textsc{Reweight}(\bm{w}\in\mathbb{R}_{>0}^{m}$ given implicitly as a set of changed weights): Set the current weights to $\bm{w}$ in
$\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}K^{1-\sepConst} m^\sepConst)$ time, where $K$ is
the number of coordinates changed in $\bm{w}$.
\item $\textsc{Move}(t\in\mathbb{R},\bm{v}\in\mathbb{R}^{m}$ given
implicitly as a set of changed coordinates):
Implicitly update
$\bm{f} \leftarrow \bm{f}+ t \mathbf{W}^{1/2}\bm{v} - t \mathbf{W}^{1/2} \widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}$ for
some $\widetilde{\mathbf{P}}'_{\bm{w}} \bm{v}$,
where
$\|\widetilde{\mathbf{P}}'_{\bm{w}} \bm{v} - \mathbf{P}_{\bm{w}} \bm{v} \|_2 \leq O(\eta \varepsilon_{\mathbf{P}}) \norm{\bm{v}}_2$ and
$\mathbf{B}^\top \mathbf{W}^{1/2}\widetilde{\mathbf{P}}'_{\bm{w}}\bm{v}= \mathbf{B}^\top \mathbf{W}^{1/2} \bm{v}$.
The runtime is $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2} K^{1-\sepConst} m^\sepConst)$, where $K$ is
the number of coordinates changed in $\bm{v}$.
\item $\textsc{Approximate}()\rightarrow\mathbb{R}^{m}$: Output the vector
$\overline{\vf}$ such that $\|\mathbf{W}^{-1/2}(\overline{\vf}-\bm{f})\|_{\infty}\leq\overline{\varepsilon}$ for the
current weight $\bm{w}$ and the current vector $\bm{f}$.
\item $\textsc{Exact}()\rightarrow\mathbb{R}^{m}$:
Output the current vector $\bm{f}$ in $\widetilde{O}(m \varepsilon_{\mathbf{P}}^{-2})$ time.
\end{itemize}
Suppose $t\|\bm{v}\|_{2}\leq\beta$ for some $\beta$ for all calls to \textsc{Move}.
Suppose in each step, \textsc{Reweight}, \textsc{Move} and \textsc{Approximate} are called in order. Let $K$ denote the total number of coordinates changed in $\bm{v}$ and $\bm{w}$ between the $(k-1)$-th and $k$-th \textsc{Reweight} and \textsc{Move} calls. Then at the $k$-th \textsc{Approximate} call,
\begin{itemize}
\item the data structure first sets $\overline{\vf}_e\leftarrow \bm{f}^{(k-1)}_e$ for all coordinates $e$ where $\bm{w}_e$ changed in the last \textsc{Reweight}, then sets $\overline{\vf}_e\leftarrow \bm{f}^{(k)}_e$ for $O(N_k\stackrel{\mathrm{\scriptscriptstyle def}}{=} 2^{2\ell_{k}}(\frac{\beta}{\overline{\varepsilon}})^{2}\log^{2}m)$ coordinates $e$, where $\ell_{k}$ is the largest integer $\ell$ with $k=0\mod2^{\ell}$ when $k\neq 0$, and $\ell_0=0$.
\item The amortized time for the $k$-th \textsc{Approximate} call
is $\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2}(m^{\alpha}(K+N_{k-2^{\ell_k}})^{1-\alpha}))$.
\end{itemize}
\end{theorem}
The proof is the same as that of \cref{thm:separableSlackMaintain}.
Finally, we can prove \cref{thm:separable_mincostflow}.
\begin{proof}[Proof of \cref{thm:separable_mincostflow}]
The correctness is exactly the same as the proof for \cref{thm:mincostflow}.
For the runtime, we use the data structure runtimes given in \cref{thm:separableSlackMaintain} and \cref{thm:separableFlowMaintain}. We may assume $\alpha>1/2$ because otherwise the graph is $1/2$-separable and the runtime follows from \cref{thm:mincostflow}.
The amortized time for the $k$-th IPM step is
\[
\widetilde{O}(\varepsilon_{\mathbf{P}}^{-2} m^{\alpha}(K + N_{k-2^{\ell_k}})^{1-\alpha}),
\]
where $N_{k} \stackrel{\mathrm{\scriptscriptstyle def}}{=} 2^{2\ell_k} (\beta/\overline{\varepsilon})^2 \log^2 m = O(2^{2\ell_k} \log^2 m)$,
with $\beta = O(1/\log m)$ and $\overline{\varepsilon} = O(1/\log m)$ as defined in \textsc{CenteringImpl}.
Observe that $K + N_{k-2^{\ell_k}} = O(N_{k-2^{\ell_k}})$. Now, summing over all $T$ steps, the total time is
\begin{align}
O({m}^{\alpha} \log m) \sum_{k=1}^T (N_{k-2^{\ell_k}})^{1-\alpha} &=
O({m}^{\alpha} \log^2 m) \sum_{k=1}^T 2^{2(1-\alpha)\ell_{(k - 2^{\ell_k})}} \nonumber\\
&= O({m}^{\alpha} \log^2 m) \sum_{k'=1}^T 2^{2(1-\alpha)\ell_{k'}}\sum_{k=1}^{T}[k-2^{\ell_k}=k'] \nonumber\\
&= O(m^{\alpha} \log^2 m \log T) \sum_{k'=1}^T 2^{2(1-\alpha)\ell_{k'}}. \label{eq:separableSum}
\end{align}
Without $1-\alpha$ in the exponent, recall from the planar case that
\[
\sum_{k'=1}^T 2^{\ell_{k'}} = \sum_{i=0}^{\log T} 2^{i} \cdot T/2^{i+1} = O(T \log T).
\]
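This identity is easy to verify numerically; here $\ell_{k}$ is the number of trailing zeros in the binary representation of $k$, and for $T = 2^{t}$ the sum evaluates exactly to $tT/2 + T$. The following sketch is illustrative and not part of the proof.

```python
# Numerical check of the planar-case identity: with l(k) the number of
# trailing zeros of k, sum_{k=1}^{T} 2^{l(k)} grows as O(T log T); for
# T = 2^t the sum equals exactly t*T/2 + T.
def trailing_zeros(k):
    count = 0
    while k % 2 == 0:
        k //= 2
        count += 1
    return count

def level_sum(T):
    return sum(2 ** trailing_zeros(k) for k in range(1, T + 1))
```

For example, $T = 8$ gives $1+2+1+4+1+2+1+8 = 20 = 3 \cdot 8/2 + 8$.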
The summation from \cref{eq:separableSum} is
\begin{align*}
\sum_{k=1}^{T} 2^{2(1-\alpha) \ell_k} &= \sum_{k=1}^{T}(2^{\ell_{k}})^{2-2\alpha} \\
&\le \left(\sum_{k=1}^T 1^{1/(2\alpha-1)}\right)^{2\alpha-1}\left(\sum_{k=1}^T\left(\left(2^{\ell_{k}}\right)^{2-2\alpha}\right)^{1/(2-2\alpha)}\right)^{2-2\alpha} \tag{by H\"older's Inequality}\\
&= \widetilde{O}\left(T^{2\alpha-1} (T \log T)^{2-2\alpha}\right)\\
&= \widetilde{O}(\sqrt{m}\log M \log T),
\end{align*}
where we use $T=O(\sqrt{m}\log n\log(nM))$ from \cref{thm:IPM}.
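As a sanity check on the H\"older step (illustrative, and not part of the proof), one can verify numerically that $\sum_{k=1}^{T} 2^{2(1-\alpha)\ell_k} \le T^{2\alpha-1}\bigl(\sum_{k=1}^{T} 2^{\ell_k}\bigr)^{2-2\alpha}$ for $\alpha \in (1/2, 1)$, which is exactly the inequality applied above with exponents $p = 1/(2\alpha-1)$ and $q = 1/(2-2\alpha)$.

```python
# Check of the Hoelder step: sum 2^{2(1-a) l(k)} <= T^{2a-1} (sum 2^{l(k)})^{2-2a}
# for a in (1/2, 1), where l(k) counts trailing zeros of k.
def trailing_zeros(k):
    c = 0
    while k % 2 == 0:
        k //= 2
        c += 1
    return c

def check_holder(T, a):
    lhs = sum(2 ** (2 * (1 - a) * trailing_zeros(k)) for k in range(1, T + 1))
    rhs = T ** (2 * a - 1) * sum(2 ** trailing_zeros(k)
                                 for k in range(1, T + 1)) ** (2 - 2 * a)
    return lhs <= rhs + 1e-9  # small slack for floating-point rounding
```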
So the runtime for \textsc{CenteringImpl} is $\widetilde{O}(m^{1/2+\alpha} \log M)$.
By \cref{thm:separableDecompT}, the overall runtime is $\widetilde{O}(m^{1/2+\alpha} \log M +s(m))$.
\end{proof}
\label{subsec:separable_main}
\newpage
\appendix
\section{Appendix}
\label{sec:appendix}
\begin{comment}
\begin{proof}[Proof for \cref{thm:decompT}]
For some constant $S(G)$ be a separator of size $\lceil n^\alpha \rceil$ such that both sides of the cut has at most $cn$ vertices. When $cn+b n^\alpha \ge \frac{c+1}{2}n$, we let $S(G)$ be all vertices with degree at least $1$ to divide the edges evenly. This happens only when $n$ is no more than $\frac{2b}{1-c}=O(1)$.
First, we let $G$ be the root node of $\mathbf{A}thcal{T}(G)$. Let $G_1$ and $G_2$ be the two disjoint components of $G$ obtained after the removal of the vertices in $S(G)$. We define the children $\child{G}{1}, \child{G}{2}$ of $G$ as follows: $V(\child{G}{i}) = V(G_i) \cup S(G)$, $E(\child{G}{i}) = E(G_i)$, for $i =1, 2$. Edges connecting some vertex in $G_i$ and another vertex in $S(G)$ are added to $E(\child{G}{i})$. For each edge connecting two vertices in $S(G)$, we append it to $E(\child{G}{1})$ or $E(\child{G}{2})$, whichever has less edges. By construction, property \ref{p:2} in the definition of $\mathbf{A}thcal{T}(G)$ holds. We continue by repeatedly splitting each child $\child{G}{i}$ that has at least one edge in the same way as we did for $G$, whenever possible. There are $O(m)$ components, each containing exactly $1$ edge. The components containing exactly $1$ edge form the \emph{leaf nodes} of $\mathbf{A}thcal{T}(G)$. Note that the height of $\mathbf{A}thcal{T}(G)$ is bounded by $O(\log n + \log m)=O(\log m)$ as for any child $H'$ of a node $H$, either $|V(H')|\le \frac{c+1}{2}|V(H)|$ or $|E(H')|\le \frac{2}{3}|E(H)|$. (The second case happens when $S(H)$ is set to be all vertices with degree at least $1$.)
The running time of the algorithm is bounded by the total time to construct the seprator for all nodes in the tree. Because the tree has height $O(\log m)$ and nodes with the same depth does not share any edge, the sum of edges over all tree nodes is $O(m\log m)$. Since $s(m)$ is convex, the algorithm runs in no more than $O(s(m)\log m)$ time.
\end{proof}
\end{comment}
\planarBoundChangeCost*
\begin{proof}
Note that $\elim{H}$ is always a subset of $\sep{H}$. We will instead prove
\begin{align*}
\sum_{H \in \pathT{\mathcal{H}}}| \bdry{H}| +|\sep{H}| \leq \widetilde{O}(\sqrt{mK}).
\end{align*}
First, we decompose the quantity we want to bound by levels in $\mathcal{T}$:
\begin{equation}
\sum_{H \in \pathT{\mathcal{H}}} \left( | \bdry{H} | + |\sep{H}| \right) = \sum_{i=0}^{\eta} \sum_{H \in \pathT{\mathcal{H},i}} \left( |\bdry{H} | + |\sep{H}| \right).
\end{equation}
We first bound $\sum_{H \in \pathT{\mathcal{H}, i}} \left( | \bdry{H} | + |\sep{H}| \right)$ for a fixed $i$.
Our main observation is that we can bound the total number of boundary vertices of nodes at level $i$ by the number of boundary and separator vertices of nodes at level $(i+1)$.
Formally, our key claim is the following
\begin{equation} \label{eq: bdryNodeParent}
\sum_{H \in \pathT{\mathcal{H}, i}} | \bdry{H} | \leq \sum_{H' \in \pathT{\mathcal{H}, i+1}} \left( | \bdry{H'} | + 2 | \sep{H'} |\right).
\end{equation}
Without loss of generality, we may assume that if node $H$ is included in the left hand sum, then its sibling is included as well.
Next, recall by the definition of $\mathcal{T}$, for siblings $H_1, H_2$ with parent $H'$,
their boundaries are defined as
\[
\bdry{H_i} = \left( \sep{H'} \cup \bdry{H'} \right) \cap V(H_i) = (S(H') \cap V(H_i)) \cup ((\partial H' \setminus S(H')) \cap V(H_i)),
\]
for $i = 1,2$. Furthermore, $V(H_1) \cup V(H_2) = V(H')$.
Another crucial observation is that a vertex from $\bdry{H'}$ exists in both $H_1$ and $H_2$ if and only if that vertex belongs to the separator $S(H')$. Hence,
\begin{align}
|\bdry{H_1}| + |\bdry{H_2}| & \leq |\sep{H'}| + |(\bdry{H'} \setminus \sep{H'}) \cap V(H_1)| + |\sep{H'}| + |(\bdry{H'} \setminus \sep{H'}) \cap V(H_2)| \nonumber \\
& \leq |\bdry{H'}| + 2 |\sep{H'}|. \label{eq: bdrySiblings}
\end{align}
By summing \cref{eq: bdrySiblings} over all pairs of siblings in
$\pathT{\mathcal{H}, i}$, we get \cref{eq: bdryNodeParent}.
By repeatedly applying~\cref{eq: bdryNodeParent} until we reach the root
at height $\eta$, we have
\begin{equation} \label{eq: inductiveClaim}
\sum_{H \in \pathT{\mathcal{H}, i}} | \bdry{H}| \leq 2 \sum_{j=i+1}^{\eta} \sum_{H' \in \pathT{\mathcal{H},j}} |S(H')|.
\end{equation}
Summing over all the levels in $\mathcal{T}$, we have
\begin{align}
\sum_{i=0}^{\eta} \sum_{H \in \pathT{\mathcal{H}, i}} (|
\bdry{H}| + |S(H)| )
& \leq 2 \sum_{j=0}^{\eta} (j+1) \sum_{H' \in \pathT{\mathcal{H},j}} |S(H')|
\tag{by \text{\cref{eq: inductiveClaim}}}\\
& \leq 2 c \sum_{j=0}^{\eta} (j+1) \sum_{H' \in \pathT{\mathcal{H},j}}\sqrt{|E(H')|},
\label{eq:boundChangeCostSum}
\end{align}
where $c$ is the constant such that $|S(H')| \le c \left(|E(H')|\right)^{1/2}$ in the definition of being 1/2-separable.
Furthermore, since every $H \in \mathcal{H}$ has at most one ancestor at level $j$, the set of ancestors of $\mathcal{H}$ at level $j$ has size $| \pathT{\mathcal{H},j}| \le |\mathcal{H}| =K$.
Applying the Cauchy-Schwarz inequality, we get that
\begin{align*}
\sum_{H \in \pathT{\mathcal{H}}} \left( | \bdry{H}| + |S(H)| \right)
& \leq 2 c \sum_{j=0}^{\eta} (j+1) \sqrt{|\pathT{\mathcal{H},j}|} \cdot \left(\sum_{H' \in \pathT{\mathcal{H},j}} {|E(H')|}\right)^{1/2} \\
& \leq 2 c \sum_{j=0}^{\eta} (j+1) \sqrt{K} \cdot \left(\sum_{H' \in \pathT{\mathcal{H},j}} {|E(H')|}\right)^{1/2}\\
& \leq 2 c \eta \sqrt{K} \sum_{j=0}^{\eta}\left(\sum_{H' \in \pathT{\mathcal{H},j}} {|E(H')|}\right)^{1/2} \\
& \leq O(\eta^2 \sqrt{mK}),
\end{align*}
where the final inequality follows from the fact that nodes at the same level form an edge partition of $G$. As $\eta = O(\log m)$, the lemma follows.
\end{proof}
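The sibling-boundary inequality \cref{eq: bdrySiblings} used in the proof can be sanity-checked on a small hand-built instance; the vertex sets below are hypothetical, chosen purely for illustration, and child boundaries follow the definition $\bdry{H_i} = (\sep{H'} \cup \bdry{H'}) \cap V(H_i)$.

```python
# Sanity check of |bdry(H_1)| + |bdry(H_2)| <= |bdry(H')| + 2|S(H')|
# for child boundaries defined as (S(H') | bdry(H')) & V(H_i).
# All vertex sets here are hypothetical, for illustration only.

def child_boundary(sep_parent, bdry_parent, verts):
    """Boundary of a child node, following the definition in the tree."""
    return (sep_parent | bdry_parent) & verts

sep_parent = {3, 4}              # S(H'), copied into both children
bdry_parent = {0, 3, 7}          # bdry(H')
V1 = {0, 1, 2, 3, 4}             # V(H_1), includes S(H')
V2 = {3, 4, 5, 6, 7}             # V(H_2), includes S(H')

b1 = child_boundary(sep_parent, bdry_parent, V1)
b2 = child_boundary(sep_parent, bdry_parent, V2)

lhs = len(b1) + len(b2)
rhs = len(bdry_parent) + 2 * len(sep_parent)
assert lhs <= rhs   # vertices of bdry(H') counted twice lie in S(H')
```

The slack between the two sides comes exactly from the boundary vertices of $H'$ that are not separator vertices, since each of those lands in only one child.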
\separableDecompT*
\begin{proof}
First, we let $G$ be the root node of $\mathcal{T}(G)$. Let $G_1$ and $G_2$ be the two disjoint components of $G$ obtained after the removal of the vertices in $S(G)$. We define the children $\child{G}{1}, \child{G}{2}$ of $G$ as follows: $V(\child{G}{i}) = V(G_i) \cup S(G)$ and $E(\child{G}{i}) = E(G_i)$, for $i = 1, 2$. Edges connecting some vertex in $G_i$ and another vertex in $S(G)$ are added to $E(\child{G}{i})$. For each edge connecting two vertices in $S(G)$, we append it to $E(\child{G}{1})$ or $E(\child{G}{2})$, whichever has fewer edges. By construction, property \ref{property: boundary-separable} in the definition of $\mathcal{T}(G)$ holds. We continue by repeatedly splitting, in the same way as we did for $G$, each child $\child{G}{i}$ that has at least one edge, whenever possible. We end up with $O(m)$ components, each containing exactly $1$ edge; these components form the \emph{leaf nodes} of $\mathcal{T}(G)$. Note that the height of $\mathcal{T}(G)$ is bounded by $O(\log m)$, as for any child $H'$ of a node $H$, $|E(H')|\le b|E(H)|$.
The running time of the algorithm is bounded by the total time to construct the separator for all nodes in the tree. Because the tree has height $O(\log m)$ and nodes at the same depth do not share any edges, the sum of the numbers of edges over all tree nodes is $O(m\log m)$. Since $s(m)$ is convex, the algorithm runs in no more than $O(s(m)\log m)$ time.
\end{proof}
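The recursive construction above can be sketched as follows. Here `find_separator` is a hypothetical placeholder that naively bisects the vertex set with an empty separator, so the sketch only illustrates the control flow (split, route edges to children, recurse until every leaf holds one edge), not the separator quality a real subroutine would provide.

```python
# Illustrative sketch of building the decomposition tree T(G);
# find_separator is a stand-in for the assumed separator subroutine.

def find_separator(vertices):
    """Placeholder: empty separator plus a naive vertex bisection."""
    vs = sorted(vertices)
    mid = len(vs) // 2
    return set(), set(vs[:mid]), set(vs[mid:])

def build_tree(vertices, edges):
    """Recursively split until each leaf holds exactly one edge."""
    if len(edges) <= 1:
        return {"V": vertices, "E": edges, "children": []}
    sep, V1, V2 = find_separator(vertices)
    E1, E2 = [], []
    for u, v in edges:
        if u in V1 and v in V1:
            E1.append((u, v))
        elif u in V2 and v in V2:
            E2.append((u, v))
        else:
            # Edges touching the separator go to the child that
            # currently has fewer edges, as in the proof.
            (E1 if len(E1) <= len(E2) else E2).append((u, v))
    return {"V": vertices, "E": edges,
            "children": [build_tree(V1 | sep, E1),
                         build_tree(V2 | sep, E2)]}

# A tiny path graph 0-1-2-3; its tree has one leaf per edge.
tree = build_tree({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3)])
```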
\separableBoundChangeCost*
\begin{proof}
\begin{comment}
Note that $\elim{H}$ is always a subset of $\sep{H}$. We will instead prove
\begin{align*}
\sum_{H \in \pathT{\mathcal{H}}} \left( | \bdry{H}| +|\sep{H}| \right) \leq \widetilde{O}(K^{1-\sepConst} m^{\sepConst}).
\end{align*}
We decompose the quantity we want to bound by levels of nodes in $\mathcal{T}$:
\begin{equation} \label{eq: levelSum-sep}
\sum_{H \in \pathT{\mathcal{H}}} \left( | \bdry{H} | + |\sep{H}| \right) = \sum_{i=0}^{\eta} \sum_{H \in \pathT{\mathcal{H},i}} \left( |\bdry{H} | + |\sep{H}| \right).
\end{equation}
We first bound $\sum_{H \in \pathT{\mathcal{H}, i}} | \bdry{H} | + |\sep{H}|$. For the sake of analysis, we may assume for each node $H$ in $\pathT{\mathcal{H}, i}$, its sibling is also in $\pathT{\mathcal{H}, i}$. (This can be done by redefining the height of a node as the height of its parent minus $1$ while keeping the original height of the root.)
Our main observation is that we can bound the total number of boundary vertices of nodes at level $i$ by the number of boundary and separator vertices of nodes at level $(i+1)$ in the separator tree. Formally, our key claim is the following
\begin{equation} \label{eq: bdryNodeParent-sep}
\sum_{H \in \pathT{\mathcal{H}, i}} | \bdry{H} | \leq \sum_{H' \in \pathT{\mathcal{H}, i+1}} \left( | \bdry{H'} | + 2 | \sep{H'} |\right)
\end{equation}
To see this, recall that by~\cref{property: boundary} in the definition of $\mathcal{T}$, the boundary of a node $H$ is defined as $\bdry{H} = \left( \sep{P} \cup \bdry{P} \right) \cap V(H)$ where $P$ is the parent of $H$. Therefore, for any two siblings $H_1,H_2$ at level $i$ sharing a parent $H'$ at level $(i+1)$ in $\mathcal{T}$, we have
\begin{align}
\begin{split}
|\bdry{H_1}| + |\bdry{H_2}| & \leq |\sep{H'}| + |(\bdry{H'} \cap V(H_1))\setminus \sep{H'}| + |\sep{H'}| + |(\bdry{H'} \cap V(H_2))\setminus \sep{H'}| \\
& \leq |\bdry{H'}| + 2 |\sep{H'}| \label{eq: bdrySiblings-sep}
\end{split}
\end{align}
The last inequality crucially relies on the fact that a vertex from $\bdry{H'}$ occurs in both $H_1$ and $H_2$ iff that vertex is a separator vertex, i.e., it belongs to $S(H')$. Also, note that the set $S(H')$ needs to be counted twice since the separator vertices of $H'$ are included in both $H_1$ and $H_2$ by construction of $\mathcal{T}$.
By summing \cref{eq: bdrySiblings-sep} over all pairs of siblings in
$\pathT{\mathcal{H}, i}$, we get~\cref{eq: bdryNodeParent-sep}.
By repeatedly applying~\cref{eq: bdryNodeParent-sep} until we reach the root
at height $\eta$, one can show by induction that
\begin{equation} \label{eq: inductiveClaim-sep}
\sum_{H \in \pathT{\mathcal{H}, i}} | \bdry{H}| \leq 2 \sum_{j=i+1}^{\eta} \sum_{H' \in \pathT{\mathcal{H},j}} |S(H')|.
\end{equation}
Summing over all the levels in $\mathcal{T}$, we have
\begin{align*}
\sum_{i=0}^{\eta} \sum_{H \in \pathT{\mathcal{H}, i}} (|
\bdry{H}| + |S(H)| )
& \leq 2 \sum_{j=0}^{\eta} (j+1) \sum_{H' \in \pathT{\mathcal{H},j}} |S(H')| && \text{(\cref{eq: inductiveClaim-sep})}\\
& \leq 2 c \sum_{j=0}^{\eta} (j+1) \sum_{H' \in \pathT{\mathcal{H},j}} |E(H')|^{\alpha}. && \text{($|S(H')| \le c |E(H')|^\alpha$)}
\end{align*}
Since every
$H \in \mathcal{H}$ has at most one ancestor at level $j,$ we have
$| \pathT{\mathcal{H},j}| \le |\mathcal{H}| =K.$
\end{comment}
Using the separator tree, we obtain the analogue of \cref{eq:boundChangeCostSum}, with $\sqrt{|E(H')|}$ replaced by $|E(H')|^{\alpha}$, in exactly the same way as for the planar case.
\begin{align*}
\sum_{H \in \pathT{\mathcal{H}}} (|
\bdry{H}| + |S(H)| )
& \leq 2 c \sum_{j=0}^{\eta} (j+1) \sum_{H' \in \pathT{\mathcal{H},j}} |E(H')|^{\alpha} \\
\intertext{Applying H\"older's inequality, in place of the Cauchy-Schwarz inequality used in the planar case, we get}
& \leq 2 c \sum_{j=0}^{\eta} (j+1) |\pathT{\mathcal{H},j}|^{1-\alpha} \cdot \left(\sum_{H' \in \pathT{\mathcal{H},j}} {|E(H')|}\right)^{\alpha} \\
& \leq 2 c \sum_{j=0}^{\eta} (j+1) {K}^{1-\alpha} \cdot \left(\sum_{H' \in \pathT{\mathcal{H},j}} {|E(H')|}\right)^{\alpha}\\
& \leq 2 c \eta {K}^{1-\alpha} \sum_{j=0}^{\eta}\left(\sum_{H' \in \pathT{\mathcal{H},j}} {|E(H')|}\right)^{\alpha} \\
& \leq O(\eta^2 K^{1-\sepConst} m^\sepConst),
\end{align*}
where the final inequality follows from the fact that nodes at the same level form an edge partition of $G$. As $\eta = O(\log m)$, the lemma follows.
\end{proof}
\end{document} |
\begin{document}
\twocolumn[\icmltitle{FedBR\xspace: Improving Federated Learning on Heterogeneous Data \\ via Local Learning Bias Reduction}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist}
\icmlauthor{Yongxin Guo}{CUHKSZ,AIRS}
\icmlauthor{Xiaoying Tang}{CUHKSZ,AIRS,FNI}
\icmlauthor{Tao Lin}{WestlakeRCIF,WestlakeSoE}
\end{icmlauthorlist}
\icmlaffiliation{CUHKSZ}{School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Guangdong 518172, China.}
\icmlaffiliation{WestlakeRCIF}{Research Center for Industries of the Future, Westlake University}
\icmlaffiliation{WestlakeSoE}{School of Engineering, Westlake University}
\icmlaffiliation{AIRS}{The Shenzhen Institute of Artificial Intelligence and Robotics for Society}
\icmlaffiliation{FNI}{The Guangdong Provincial Key Laboratory of Future Networks of Intelligence, The Chinese University of Hong Kong (Shenzhen), Shenzhen 518172, China }
\icmlcorrespondingauthor{Xiaoying Tang}{[email protected]}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in]
\printAffiliationsAndNotice{}
\setlength{\parskip}{0pt plus0pt minus0pt}
\begin{abstract}
Federated Learning (FL) is a way for machines to learn from data that is kept locally, in order to protect the privacy of clients. This is typically done using local SGD, which helps to improve communication efficiency. However, such a scheme is currently constrained by slow and unstable convergence due to the heterogeneity of data across clients' devices. In this work, we identify three under-explored phenomena of biased local learning that may explain these challenges caused by local updates in supervised FL.
As a remedy, we propose FedBR\xspace, a novel unified algorithm that reduces the local learning bias on features and classifiers to tackle these challenges.
FedBR\xspace has two components. The first component helps to reduce bias in local classifiers by balancing the output of the models. The second component helps to learn local features that are similar to global features, but different from those learned from other data sources. We conducted several experiments to test FedBR\xspace and found that it consistently outperforms other SOTA FL methods. Both of its components also individually show performance gains.
Our code is available at \url{https://github.com/lins-lab/fedbr}.
\looseness=-1
\end{abstract}
\section{Introduction}
Federated Learning (FL) is an emerging privacy-preserving distributed machine learning paradigm.
The model is transmitted to the clients by the server, and when the clients have completed local training, the parameter updates are sent back to the server for integration.
Clients are not required to provide local raw data during this procedure, maintaining their privacy.
As the workhorse algorithm in FL, FedAvg \citep{mcmahan2016communication} proposes local SGD to improve communication efficiency.
However, the considerable heterogeneity between local client datasets leads to inconsistent local updates and hinders convergence.
Several studies propose variance reduction methods \citep{karimireddyscaffold2019,das2020faster}, or suggest regularizing local updates towards global models \citep{lifederated2018,li2021model} to tackle this issue.
Almost all these existing works directly regularize models by utilizing the global model collected from previous rounds to reduce the variance or minimize the distance between global and local models~\citep{lifederated2018,li2021model}.
However, it is hard to balance the trade-offs between optimization and regularization to perform well, and data heterogeneity remains an open question in the community, as justified by the limited performance gain, e.g.\ in our Table~\ref{Performance of algorithms} and experiment results in some previous works~\citep{tang2022virtual,li2021model,yoon2021federated,chen2021bridging,luo2021no}.
\looseness=-1
Apart from the existing solutions, we revisit and reinterpret the fundamental issues in federated deep learning, caused by data heterogeneity and local updates.
As our first contribution, we identify three pitfalls of FL systematically and in a unified view, termed \emph{local learning bias}, from the perspective of representation learning\footnote{Please refer to section \ref{sec:The Pitfalls of FL on Heterogeneous Data Distributions} for more justifications about the existence of our observations.}:
1) Biased local classifiers are unable to effectively classify unseen data (c.f.\ Figure~\ref{fig:draft1}), due to the shifted decision boundaries dominated by local class distributions; 2) Local features (extracted by a local model) differ significantly from global features (similarly extracted by a centralized global model), even for the same input data (c.f.\ Figure~\ref{fig:draft2}); and 3) Local features, even for data from different classes, are too close to each other to be accurately distinguished (c.f.\ Figure~\ref{fig:draft2}).
\looseness=-1
\begin{figure*}
\caption{\small
\textbf{Observation for learning bias of FL}
\label{fig:draft1}
\label{fig:draft2}
\label{learning bias in FL}
\end{figure*}
To this end, we propose FedBR\xspace, a unified method that leverages (1) a globally shared, label-distribution-agnostic pseudo-data and (2) two key algorithmic components, to simultaneously address the three difficulties outlined above. \\
The first component of FedBR\xspace alleviates the first difficulty by forcing the output distribution of the pseudo-data to be balanced.
The second component of FedBR\xspace aims to address the second and third difficulties simultaneously, where we develop a \emph{\textbf{min-max contrastive learning method}} to learn client invariant local features.
More precisely, a key two-stage algorithm is proposed to maximize the elimination of feature learning biases: the first stage learns a projection space to distinguish the features of two types, while the second stage enforces learned features on the projected feature space that are farther from local features and closer to global ones.
All these can be unified into a simple yet effective min-max procedure to alleviate the local learning bias in FL, with trivial requirements on pseudo-data while still preserving privacy.
\looseness=-1
\paragraph*{Our main contributions are:}
\begin{itemize}[leftmargin=12pt,nosep]
\item We provide a unified view to interpret the learning difficulty in FL with heterogeneous data, and identify three key pitfalls to explain the issue of local learning biases.
\item We propose FedBR\xspace, a unified algorithm that leverages pseudo-data to reduce the learning bias on local features and classifiers.
We design two orthogonal key components of FedBR\xspace to complement each other to improve the learning quality of clients with heterogeneous data.
\item FedBR\xspace outperforms other FL baselines by a large margin, as justified by extensive numerical evaluation on RotatedMNIST, CIFAR10, and CIFAR100. Besides, FedBR\xspace does not require labeled or large amounts of globally shared pseudo-data, thereby improving efficiency.
\end{itemize}
\section{Related Works}
\paragraph{Federated Learning (FL).}
As the de facto FL algorithm, \citet{mcmahan2016communication,lin2020dont} propose to use local SGD steps to alleviate the communication bottleneck.
However, the objective inconsistency caused by the local data heterogeneity considerably hinders the convergence of FL algorithms~\citep{lifederated2018,wang2020tackling,karimireddyscaffold2019,karimireddy2020mime,guo2021towards}.
To address the issue of heterogeneity in FL, a series of projects has been proposed. FedProx~\citep{lifederated2018} incorporates a proximal term into local objective functions to reduce the gap between the local and global models. SCAFFOLD~\citep{karimireddyscaffold2019} adopts the variance reduction method on local updates, and Mime~\citep{karimireddy2020mime} increases convergence speed by adding global momentum to global updates. Recently, Moon~\citep{li2021model} has proposed to employ contrastive loss to reduce the distance between global and local features.
However, their projection layer is only used as part of the feature extractor, and cannot contribute to distinguishing the local and global features---a crucial step identified by our investigation for better model performance.\looseness=-1
In this paper, we focus on improving the global model in Federated Learning (FL) by designing methods that perform well on all local distributions. The designed algorithm works on the local training stage, which aligns with previous research in this area, such as~\citet{mcmahan2016communication, lifederated2018, karimireddyscaffold2019, li2021model, tang2022virtual}.
Other topics like improving the global aggregation stages~\citep{wang2020federated, yoshida2019hybrid} or Personalized Federated Learning (PFL) methods~\citep{tan2022towards, wu2022motley,jiang2023testtime} are orthogonal to our approach and could be further combined with our method.
\paragraph{Data Augmentation in FL.}
To reduce data heterogeneity, some data-based approaches suggest sharing a global dataset among clients and combining global datasets with local datasets~\citep{tuor2021overcoming,yoshida2019hybrid}.
Some knowledge distillation-based methods also require a global dataset~\citep{tao2020ensemble,li2019fedmd}, which is used to transfer knowledge from local models (teachers) to global models (students).
Considering the impracticality of sharing global datasets in FL settings, some recent research uses proxy datasets with augmentation techniques.
Astraea~\citep{duan2019astraea} uses local augmentation to create a globally balanced distribution.
XorMixFL~\citep{shin2020xor} encodes a couple of local data and decodes it on the server using the XOR operator.
FedMix~\citep{yoon2021fedmix} creates the privacy-protected augmentation data by averaging local batches and then applying Mixup in local iterations.
VHL~\citep{tang2022virtual} relies on the created virtual data with labels and forces the local features to be close to the features of same-class virtual data.
Different from previous works, this paper designs methods that utilize label-agnostic pseudo-data, and outperform other methods using significantly less pseudo-data.
\begin{figure*}
\caption{\small
\textbf{Observation for biased local features}
\label{global_feature}
\label{local_feature_1}
\label{local_feature_2}
\label{Output of model trained on class 1-5}
\end{figure*}
\section{The Pitfalls of FL on Heterogeneous Data}
\label{sec:The Pitfalls of FL on Heterogeneous Data Distributions}
\paragraph{FL and local SGD.}
FL is an emerging learning paradigm that performs learning on various clients, where clients cannot exchange data in order to protect users' privacy.
Learning occurs locally on the clients, while the server collects and aggregates gradient updates from the clients.
The standard FL considers the following problem:
\begin{align}
\textstyle
f^* = \min_{\momega \in \mathbb{R}^d} \left[ f(\momega) = \sum_{i=1}^{N} p_i f_i(\momega) \right] \,,
\end{align}
where $f_i(\momega)$ is the local objective function of client $i$, and $p_i$ is the weight for $f_i(\momega)$. In practice, we set $p_i = \nicefrac{|D_i|}{|D|}$ by default, where $D_i$ is the local dataset of client $i$ and $D$ is the combination of all local datasets. The global objective function $f(\momega)$ aims to find $\momega$ that can perform well on all clients.
\looseness=-1
In the training process of FL, the communication cost between client and server has become an essential factor affecting the training efficiency.
Therefore, local SGD \citep{mcmahan2016communication} has been proposed to reduce the communication round. In local SGD, clients perform multiple local steps before synchronizing to the server in each communication round.
\looseness=-1
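The FedAvg scheme with local SGD described above can be sketched on a toy problem. The quadratic client objectives $f_i(\momega) = \frac{1}{2}\|\momega - c_i\|^2$ below are an illustrative assumption (chosen so the global optimum is known in closed form), not the paper's experimental setup.

```python
# Toy numpy sketch of FedAvg: each client runs K local gradient steps
# on its own objective, then the server averages the local models.
import numpy as np

def local_steps(w, c_i, K, lr):
    """K local gradient-descent steps on f_i(w) = 0.5 * ||w - c_i||^2."""
    for _ in range(K):
        w = w - lr * (w - c_i)          # grad f_i(w) = w - c_i
    return w

def fedavg(centers, p, rounds=50, K=5, lr=0.1):
    """T communication rounds of local training + weighted averaging."""
    w = np.zeros_like(centers[0])
    for _ in range(rounds):
        local_models = [local_steps(w.copy(), c, K, lr) for c in centers]
        w = sum(pi * wi for pi, wi in zip(p, local_models))  # aggregate
    return w

# Two heterogeneous clients with different local optima.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
w = fedavg(centers, p=[0.5, 0.5])
# For equal-weight quadratics, the minimizer of f is the mean of the c_i.
```

For these identical-curvature quadratics, local SGD introduces no objective inconsistency and the iterates converge to the global optimum; with heterogeneous curvatures or deep models, the averaged iterate can drift, which is the failure mode discussed in this section.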
\paragraph{Bias caused by local updates.}
In this paper, we consider improving previous works by proposing a label-agnostic method, and we first identify the pitfalls of FL on heterogeneous data in a label-agnostic way as follows.
\looseness=-1
\begin{proposition}[Local Learning Bias in FedAvg] For FedAvg, the local models after local epochs could be biased, in detail,
\begin{itemize}[leftmargin=12pt,nosep]
\item \emph{Biased local feature:} For local feature extractor $F_i(\cdot)$, and centralized trained global feature extractor $F_g(\cdot)$, we have:
1) Given the data input $X$, $F_i(X)$ could deviate largely from $F_g(X)$.
2) Given the input from different data distributions $X_1$ and $X_2$, $F_i(X_1)$ could be very similar or almost identical to $F_i(X_2)$.
\item \emph{Biased local classifier:} After a sufficient number of iterations, local models classify all samples into only the classes that appeared in the local datasets.
\end{itemize}
\label{bias caused by local updates}
\end{proposition}
To verify the correctness of Proposition~\ref{bias caused by local updates}, we can use some toy examples to show the existence of the biased local feature and classifiers. For toy examples on more complex scenarios and on the benefits of using FedBR\xspace please refer to Appendix~\ref{sec:t-SNE and classcifier output}.
\begin{example} [Observation for biased local features]
Figures~\ref{global_feature} and~\ref{local_feature_1} show that \emph{local features differ from global features for the same input}, and Figures~\ref{local_feature_1} and~\ref{local_feature_2} show that \emph{local features are similar even for different input distributions.}
We define this observation as the ``biased local feature''.
In detail, we calculate $F_1(X_1)$, $F_1(X_2)$, $F_g(X_1)$, and $F_g(X_2)$, and use t-SNE to project all the features to the same 2D space.
We can observe that the local features of data in $X_2$ are so close to local features of data in $X_1$, and it is non-trivial to tell which category the current input belongs to by merely looking at the local features.
\label{example 1}
\end{example}
\begin{example} [Observation for biased local classifiers]
Figure \ref{Class imbalance figure} shows the output of the local model on data $X_2$, where all data in $X_2$ are incorrectly categorized into classes 0 to 4 of $X_1$.
The observation, i.e.\ data from classes that are absent from local datasets cannot be correctly classified by the local classifiers, refers to the ``biased local classifiers''.
More precisely, Figure~\ref{output-class-8} shows the prediction result of one sample (class $8$) and Figure~\ref{output-all-data} shows the predicted distribution of all samples in $X_2$.
\looseness=-1
\label{example 2}
\end{example}
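The biased-classifier observation of Example~\ref{example 2} can be reproduced in a few lines: a softmax classifier trained only on two of three classes never predicts the unseen class. The Gaussian toy data and training setup below are illustrative assumptions, not the paper's experiments.

```python
# Toy demo of the "biased local classifier": train softmax regression
# on classes 0 and 1 only, then predict on unseen class-2 data.
import numpy as np

rng = np.random.default_rng(0)

# Local client sees only classes 0 and 1 of a 3-class problem.
X_train = np.vstack([rng.normal([1, 0], 0.1, (20, 2)),
                     rng.normal([0, 1], 0.1, (20, 2))])
Y_train = np.repeat([0, 1], 20)
X_unseen = rng.normal([2, 2], 0.1, (20, 2))   # class-2 data, never seen

# Plain softmax regression trained by full-batch gradient descent.
W = np.zeros((3, 2))
onehot = np.eye(3)[Y_train]
for _ in range(200):
    logits = X_train @ W.T
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    W -= 0.5 * (P - onehot).T @ X_train / len(X_train)

preds = (X_unseen @ W.T).argmax(1)
# Every unseen sample is classified into one of the seen classes.
assert 2 not in preds
```

The effect is mechanical: the gradient only pushes the unseen class's logit down (its predicted probability is never matched by a label), so the local decision boundary is dominated by the local class distribution.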
\begin{figure}
\caption{\small
\textbf{Observation for biased local classifiers}
\label{output-class-8}
\label{output-all-data}
\label{Class imbalance figure}
\end{figure}
\paragraph{Distinct from the local learning bias in previous works.}
We acknowledge the discussion of learning bias in some previous works, e.g.\ in~\citet{karimireddyscaffold2019,lifederated2018,li2021model}.
However, our work differs in several ways:
\begin{enumerate}[leftmargin=12pt,nosep]
\item FedProx~\citep{lifederated2018} defines local drifts as the differences in model weights, while SCAFFOLD~\citep{karimireddyscaffold2019} considers gradient differences as client drifts.
These methods, though effective on traditional optimization tasks, may bring only marginal improvements on deep models, as shown in~\citet{tang2022virtual,li2021model,yoon2021federated,chen2021bridging,luo2021no}. \looseness=-1
\item MOON~\citep{li2021model} minimizes the distance between global and local features, but its performance is limited because they use only the projection layer as part of the feature extractor, and the contrastive loss diminished without our designed max step (c.f.\ Table~\ref{Performance of algorithms} and Table~\ref{ablation study}).
\item VHL~\citep{tang2022virtual} defines local learning bias as the shift in features between samples of the same classes; however, this approach requires prior knowledge of local label information and results in a much larger virtual dataset, especially when increasing the number of classes.
Our method instead achieves better performance with significantly fewer pseudo-data (see Table~\ref{Comparison with VHL}).
\end{enumerate}
\begin{figure}
\caption{\small
\textbf{Optimization flow of FedBR\xspace}
\label{Overview of FedAug}
\end{figure}
\section{FedBR\xspace: Reducing Learning Bias in FL}
Addressing the local learning bias is crucial to improving FL on heterogeneous data, due to the \emph{bias} discussed in Proposition~\ref{bias caused by local updates}.
To this end, we propose FedBR\xspace as shown in Figure~\ref{Overview of FedAug}, a novel framework that leverages the globally shared pseudo-data with two key components to reduce the local training bias, namely
1) reducing the local classifier's bias by balancing the output distribution of classifiers (component 1),
and 2) an adversary contrastive scheme to learn unbiased local features (component 2).
\subsection{Overview of the FedBR\xspace}\label{sec:overview}
The learning procedure of FedBR\xspace on each client $i$ involves the construction of a global pseudo-data (c.f.\ Section~\ref{sec:construction}), followed by applying two key debias steps in a \textbf{\emph{min-max}} approach to jointly form two components (c.f.\ Section~\ref{sec:component1} and~\ref{sec:component2}) to reduce the bias in the classifier and feature, respectively.
The min-max procedure of FedBR\xspace can be interpreted as first projecting features onto spaces that best distinguish global and local features, and then 1) minimizing the distance between the global and local features of pseudo-data while maximizing the distance between the local features of pseudo-data and those of local data; and 2) minimizing the classification loss of both local data and pseudo-data:
\textbf{Max Step:} $\max_{\boldsymbol{\theta}} \cL_{adv} (D_p, D_i) $
\begin{small}
\begin{align}
:= \mathbb{E}_{\xx_p \sim D_p, \xx \sim D_i} \left[ \cL_{con} (\xx_{p}, \xx, \mphi_g, \mphi_i, \boldsymbol{\theta}) \right] \,.
\label{adversary step equation}
\end{align}
\end{small}
\textbf{Min Step:} $\min_{\mphi_i, \momega} \cL_{gen} (D_p, D_i)$
\begin{small}
\begin{align}
& := \mathbb{E}_{(\xx, \yy) \sim D_i} \left[ \cL_{cls} (\xx, \yy, \mphi_i, \momega) \right] \nonumber \\
& \qquad + \lambda \mathbb{E}_{\xx_p\sim D_p} \left[ \cL_{cls} (\xx_p, \tilde{\yy_p}, \mphi_i, \momega) \right] \nonumber \\
& \qquad + \mu \mathbb{E}_{\xx_p \sim D_p, \xx \sim D_i} \left[ \cL_{con} (\xx_{p}, \xx, \mphi_g, \mphi_i, \boldsymbol{\theta}) \right] \,. \label{classification step eqation}
\end{align}
\end{small}
$\cL_{cls}$ and $\cL_{con}$ represent the cross-entropy loss and a contrastive loss (will be detailed in Section~\ref{sec:component2}), respectively.
$D_i$ denotes the distribution of the local dataset at client $i$.
$D_p$ is that of shared pseudo-dataset, where $\tilde{\yy_p}$ is the pseudo-label of pseudo-data.
The model is composed of a feature extractor $\mphi$ and a classifier $\momega$, where the omitted subscript $i$ and $g$ correspond to the local client $i$ and global parameters, respectively (e.g.\ $\mphi_g$ denotes the feature extractors received from the server at the beginning of each communication round).
We additionally use a projection layer $\boldsymbol{\theta}$ for the max step to project features onto spaces where global and local features have the largest dissimilarity.\\
Apart from the cross-entropy loss of local data in~\eqref{classification step eqation}, the second term aims to overcome the biased local classifier while the local feature is debiased by the third term.
\looseness=-1
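The roles of $\boldsymbol{\theta}$, $\mphi_i$, and $\cL_{con}$ in the min-max procedure above can be sketched on a single feature triple. The cosine-similarity form of the contrastive loss and all dimensions below are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative numpy sketch of the FedBR contrastive term L_con on one
# feature triple; shapes and the similarity form are assumptions.
import numpy as np

def l_con(z_p_global, z_p_local, z_local, theta):
    """After projecting with theta, pull the local features of
    pseudo-data toward the global ones (positive pair) and push them
    away from the features of real local data (negative pair)."""
    def proj(z):
        z = theta @ z
        return z / (np.linalg.norm(z) + 1e-8)
    pos = proj(z_p_global) @ proj(z_p_local)   # global vs local pseudo
    neg = proj(z_local) @ proj(z_p_local)      # local data vs local pseudo
    return -np.log(np.exp(pos) / (np.exp(pos) + np.exp(neg)))

# Max step: gradient *ascent* on theta, so the projection space best
# separates global from local features; min step: gradient descent on
# the feature extractor (and classifier) on L_cls + mu * L_con.
theta = np.eye(3)
z_pg = np.array([1.0, 0.0, 0.0])   # global features of pseudo-data
z_pl = np.array([0.8, 0.2, 0.0])   # local features of pseudo-data
z_l  = np.array([0.0, 1.0, 0.0])   # local features of real data
loss = l_con(z_pg, z_pl, z_l, theta)
```

Driving this loss down moves the local pseudo-features toward their global counterparts in the adversarially chosen projection space, which is exactly the debiasing goal of the second component.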
The proposed FedBR\xspace is summarized in Algorithm \ref{Algorithm Framework of AugCA}.
The global communication part is the same as FedAvg,
and the choice of synchronizing the new pseudo-data to clients in each round is optional\footnote{
As shown in Figure~\ref{Performance when only transfer pseudo-data at the beginning of training}, the communication-efficient variant of FedBR\xspace---i.e.\ only transferring pseudo-data at the beginning of the FL training---is on par with the choice of frequent pseudo-data synchronization.
}.
\paragraph{The benefit of FedBR\xspace in requiring less prior information.} In addition to its superior performance, the design of FedBR\xspace has the following benefits:
1) Unlike previous works~\cite{tang2022virtual}, our method does not require knowledge of the local label distribution and is label-agnostic.
This means that the size of the pseudo-data will not increase as the number of classes increases.
Our results, shown in Table~\ref{Comparison with VHL} and Figure~\ref{Performance when only transfer pseudo-data at the beginning of training}, demonstrate that our method, FedBR\xspace, can achieve better performance with significantly less pseudo-data.
2) Pseudo-data can be used for various tasks.
Pseudo-data created using CIFAR10 performs well on tasks with local data from CIFAR10 and CIFAR100, as seen in Figure~\ref{Performance on different types of pseudo-data}.
\subsection{Construction of the Pseudo-Data} \label{sec:construction}
The choice of the pseudo-data in our FedBR\xspace framework is arbitrary.
For ease of presentation and taking the communication cost into account, we showcase two construction approaches below and detail their performance gain over all other existing baselines in Section~\ref{sec:experiments}:
\begin{itemize}[leftmargin=12pt,nosep]
\item \textbf{Random Sample Mean (RSM)}.
Similar to the treatment in FedMix~\citep{yoon2021fedmix}, one RSM sample of the pseudo-data is estimated through a weighted combination of a random subset of local samples, and the pseudo-label is set\footnote{
We assume that pseudo-data does not belong to any particular classes, and should not give high confidence to any of that.
\looseness=-1
} to $\tilde{\mathbf{y}_p} = \frac{1}{C} \cdot \mathbf{1}$.
It is worth noting that RSM \textit{does not require the local data to be balanced} when constructing the pseudo-data, as long as the local data is distinct from the pseudo-data.
We show in Figure~\ref{Performance on different label distribution of RSM} that our algorithm (FedBR\xspace) can achieve comparable performance using pseudo-data constructed from data with unbalanced label distribution.
For more details, see Algorithm \ref{Construct Augmentation Data} in the appendix.
\item \textbf{Mixture of local samples and the sample mean of a proxy dataset (Mixture)}.
This strategy relies on applying the procedure of RSM to irrelevant and globally shared proxy data (refer to Algorithm~\ref{Construct Augmentation Data by Proxy Data}).
To guard the distribution distance between the pseudo-data and local data, one sample of the pseudo-data at each client is constructed by
\begin{small}
\begin{align}
\textstyle
\tilde{\xx}_p \!=\! \frac{1}{K \!+\! 1} ( \xx_p \!+\! \sum_{k \!=\! 1}^{K} \xx_k ) ,
\tilde{\yy}_p \!=\! \frac{1}{K \!+\! 1} ( \frac{1}{C} \!\cdot\! \mathbf{1} \!+\! \sum_{k \!=\! 1}^{K} \yy_k ) \,,
\label{pseudo_data}
\end{align}
\end{small}
where $\xx_p$ is one RSM sample of the global proxy dataset, and $\xx_k$ and $\yy_k$ correspond to the data and label of one local sample (varying across clients).
$K$ is a constant that controls the closeness between the distribution of pseudo-data and local data.
As we will show in Section~\ref{sec:experiments}, setting $K = 1$ is data-efficient yet sufficient to achieve good results.
\end{itemize}
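The two construction rules above can be sketched in a few lines of stdlib-only Python. This is a toy illustration (samples are plain lists, and all function names are ours, not part of any released code):

```python
import random

def rsm(local_data, num_classes, subset_size=8):
    """One RSM pseudo-sample: average a random subset of local samples.
    The pseudo-label is uniform over the C classes, (1/C) * 1."""
    subset = random.sample(local_data, subset_size)
    dim = len(subset[0])
    x_p = [sum(x[j] for x in subset) / subset_size for j in range(dim)]
    y_p = [1.0 / num_classes] * num_classes
    return x_p, y_p

def mixture(x_proxy, local_samples, num_classes):
    """Mixture pseudo-sample: combine one proxy RSM sample with K local
    (x, y) pairs, following the displayed equations for (x_p~, y_p~)."""
    K = len(local_samples)
    dim = len(x_proxy)
    x_tilde = [(x_proxy[j] + sum(x[j] for x, _ in local_samples)) / (K + 1)
               for j in range(dim)]
    uniform = [1.0 / num_classes] * num_classes
    y_tilde = [(uniform[c] + sum(y[c] for _, y in local_samples)) / (K + 1)
               for c in range(num_classes)]
    return x_tilde, y_tilde
```

Note that the pseudo-label of a Mixture sample is a proper distribution (it sums to one) whenever the local labels are one-hot.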
\paragraph{Remark: preserving privacy via Mixture.}
The RSM method is similar to the data augmentation used in FedMix.
As discussed in FedMix, such a scheme may leak privacy.
To address this, we propose Mixture as a privacy-preserving alternative;
Mixture can even outperform RSM, as justified in Figure~\ref{Performance on different types of pseudo-data}.
\begin{algorithm}[!t]
\small
\begin{algorithmic}[1]
\small
\Require{Local datasets $D_1, \dots, D_N$, pseudo dataset $D_{p}$ where $|D_p| = B$, and $B$ is the batch size, number of local iterations $K$, number of communication rounds $T$, number of clients chosen in each round $M$, weights used in designed loss $\lambda, \mu$, local learning rate $\eta$.}
\Ensure{Trained model $\momega_T$, $\boldsymbol{\theta}_T$, $\mphi_T$.}
\myState{Initialize $\momega_0, \boldsymbol{\theta}_0, \mphi_0$.}
\For{$t = 0, \dots, T-1$}
\myState{Send $\momega_t, \boldsymbol{\theta}_t, \mphi_t$, $D_p$ (optional) to all clients.}
\For{chosen client $i = 1, \dots, M$}
\myState{$\momega_i^{0} = \momega_t, \boldsymbol{\theta}_i^{0} = \boldsymbol{\theta}_t, \mphi_i^{0} = \mphi_t, \mphi_g = \mphi_t$}
\For{$k = 1, \dots, K$}
\myState{\# Max Step}
\myState{$\boldsymbol{\theta}_i^{k} = \boldsymbol{\theta}_i^{k-1} + \eta \nabla_{\boldsymbol{\theta}} \cL_{adv}$.}
\myState{\# Min Step}
\myState{$\momega_i^{k} = \momega_i^{k-1} - \eta \nabla_{\momega} \cL_{k}.$}
\myState{$\mphi_i^{k} = \mphi_i^{k-1} - \eta \nabla_{\mphi} \cL_{gen}.$}
\EndFor
\myState{Send $\momega_i^{K}, \boldsymbol{\theta}_i^{K}, \mphi_i^{K}$ to server.}
\EndFor
\myState{$\momega_{t+1} = \frac{1}{M} \sum_{i=1}^{M} \momega_i^{K}$.}
\myState{$\boldsymbol{\theta}_{t+1} = \frac{1}{M} \sum_{i=1}^{M} \boldsymbol{\theta}_i^{K}$.}
\myState{$\mphi_{t+1} = \frac{1}{M} \sum_{i=1}^{M} \mphi_i^{K}$.}
\EndFor
\end{algorithmic}
\mycaptionof{algorithm}{\small Algorithm Framework of FedBR\xspace}
\label{Algorithm Framework of AugCA}
\end{algorithm}
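The alternating max/min structure of the inner loop can be illustrated on a toy saddle objective. Here `grad_max` and `grad_min` stand in for autograd on $\cL_{adv}$ and $\cL_{k}$; the quadratic objective used in the example below is purely illustrative, not the paper's actual losses:

```python
def minmax_step(omega, theta, grad_min, grad_max, eta):
    """One local iteration of the alternating scheme: gradient *ascent*
    on the projection parameters theta (max step), then gradient
    *descent* on the model parameters omega (min step)."""
    theta = theta + eta * grad_max(omega, theta)
    omega = omega - eta * grad_min(omega, theta)
    return omega, theta

# Toy saddle objective f(w, t) = (w - 1)^2 - (t - 2)^2:
# the min player drives w -> 1 while the max player drives t -> 2.
omega, theta = 5.0, -3.0
for _ in range(500):
    omega, theta = minmax_step(
        omega, theta,
        grad_min=lambda w, t: 2 * (w - 1),
        grad_max=lambda w, t: -2 * (t - 2),
        eta=0.1)
```

With a step size of 0.1 both players contract toward the saddle point $(1, 2)$.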
\subsection{Component 1: Reducing Bias in Local Classifiers} \label{sec:component1}
Due to the issue of label distribution skew or the absence of some samples for the majority/minority classes, the trained local model classifier tends to overfit the locally presented classes, and may further hinder the quality of the feature extractor (as justified in Figure \ref{Class imbalance figure} and Proposition~\ref{bias caused by local updates}).
\looseness=-1
As a remedy, here we implicitly mimic the global data distribution---by using the pseudo-data constructed in Section~\ref{sec:construction}---to regularize the outputs and thus debias the classifier (note that Component 1 is the second term of~\eqref{classification step eqation}):
\looseness=-1
\begin{align*}
\lambda \mathbb{E}_{\xx_p\sim D_i} \left[ \cL_{cls} (\xx_p, \tilde{\yy}_p, \mphi_i, \momega) \right] \,.
\end{align*}
\looseness=-1
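As a minimal sketch of this debiasing term, the cross-entropy of the classifier's softmax output on pseudo-data against the uniform pseudo-label can be written in stdlib Python (function names are ours; the real loss acts on network logits):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def debias_loss(logits_on_pseudo, num_classes):
    """Cross-entropy against the uniform pseudo-label (1/C) * 1.
    Minimized exactly when the classifier is maximally uncertain on
    the pseudo-data, i.e. not biased toward any locally seen class."""
    probs = softmax(logits_on_pseudo)
    uniform = 1.0 / num_classes
    return -sum(uniform * math.log(p) for p in probs)
```

A classifier with uniform outputs attains the minimum value $\log C$; any confident (biased) prediction on pseudo-data is penalized more heavily.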
\subsection{Component 2: Reducing Bias in Local Features}
\label{sec:component2}
In addition to alleviating the biased local classifier in Section~\ref{sec:component1}, here we introduce a crucial adversary strategy to learn unbiased local features.
\paragraph{Intuition of constructing an adversarial problem.}
As discussed in Proposition~\ref{bias caused by local updates}, effective federated learning on heterogeneous data requires learning debiased local feature extractors that 1) can extract local features that are close to global features of the same input data; 2) can extract different local features for input samples from different distributions.
However, existing methods that directly minimize the distance between global features and local features~\citep{lifederated2018,li2021model} have limited performance gain (c.f.\ Table~\ref{Performance of algorithms}) due to the diminishing optimization objective caused by the indistinguishability between the global and local features of the same input.
To this end, we propose to extend the idea of adversarial training to our FL scenarios:
\begin{enumerate}[leftmargin=12pt, nosep]
\item We construct a projection layer as the critical step to distinguish features extracted by the global and local feature extractor: such layer ensures that the projected features extracted by the local feature extractor will be close to each other (even for distinct local data distributions), but the difference between features extracted by the global and local feature extractor after projection will be considerable (even for the same input samples).
\item Constructing such a projection layer can be achieved by maximizing the local feature bias discussed in Proposition~\ref{bias caused by local updates}.
More precisely, it can be achieved by maximizing the distance between global and local features of pseudo-data and simultaneously minimizing the distance between local features of pseudo-data and local data.
\item We then minimize the local feature biases (discussed in Proposition~\ref{bias caused by local updates}) under the trained projection space, to enforce the learned local features of pseudo-data to be closer to the global features of pseudo-data but far away from the local features of real local data.
\end{enumerate}
\paragraph{On the importance of utilizing the projection layer to construct the adversary problem.}
To construct the aforementioned adversarial training strategy, we consider using an additional projection layer to map features onto the projection space\footnote{
Such a projection layer is not part of the feature extractor or used for classification, as shown in Figure \ref{Overview of FedAug}.
}.
In contrast to the existing works that similarly add a projection layer~\citep{li2021model}, we show that
1) simply adding a projection layer as part of the feature extractor has trivial performance gain (c.f.\ Figure~\ref{Performance of algorithms with/without additional projection layer});
2) our design is the key step to reducing the feature bias and boosting the federated learning on heterogeneous data (c.f.\ Table \ref{ablation study}).
\looseness=-1
\paragraph{Objective function design.}
We extend the idea of \citet{li2021model} and adapt the contrastive loss initially proposed in SimCLR~\citep{chen2020simple} to our challenging scenario.
Different from previous works, we use the projected features (global and local) on pseudo-data as the positive pairs and rely on the projected local feature of both pseudo-data and local data as the negative pairs:
\begin{small}
\begin{align}
\textstyle
& f_1 = \exp\left( \frac{\text{sim}\left( P_{\mtheta}(\mphi_i(\xx_p) ), P_{\mtheta}(\mphi_g(\xx_p)) \right)}{\tau_1} \right) \,, \\
& f_2 = \exp\left( \frac{\text{sim}\left( P_{\mtheta}(\mphi_i(\xx_p)), P_{\mtheta}(\mphi_i(\xx)) \right)}{\tau_2} \right) \,, \\
& \cL_{con}(\xx_p, \xx, \mphi_g, \mphi_i, \mtheta) = - \log \left( \frac{ f_1 }{ f_1 + f_2} \right) \,,
\label{contrastive loss}
\end{align}
\end{small}
where $P_{\mtheta}$ is the projection layer parameterized by $\mtheta$, $\tau_1$ and $\tau_2$ are temperature parameters, and $\text{sim}$ is the cos-similarity function.
Our implementation uses a tied value for $\tau_1$ and $\tau_2$ for the sake of simplicity, but an improved performance may be observed by tuning these two.
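A scalar sketch of the contrastive loss above, with projected features as plain vectors (stdlib Python; function names are ours):

```python
import math

def cos_sim(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(proj_local_pseudo, proj_global_pseudo,
                     proj_local_real, tau1=2.0, tau2=2.0):
    """Pull the local projection of pseudo-data toward its global
    projection (positive pair, f1) and push it away from the local
    projection of real local data (negative pair, f2)."""
    f1 = math.exp(cos_sim(proj_local_pseudo, proj_global_pseudo) / tau1)
    f2 = math.exp(cos_sim(proj_local_pseudo, proj_local_real) / tau2)
    return -math.log(f1 / (f1 + f2))
```

The loss is small when the positive pair is aligned and the negative pair is not, and grows when the roles are reversed.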
\begin{figure}
\caption{\small \textbf{Convergence curve of algorithms on different datasets.}}
\label{convergence curve main}
\end{figure}
\begin{table*}[!t]
\small
\centering
\caption{\small
\textbf{Performance of algorithms.}
We split RotatedMNIST, CIFAR10, and CIFAR100 to 10 clients with $\alpha = 0.1$, and ran 1000 communication rounds on RotatedMNIST and CIFAR10 for each algorithm, and 800 communication rounds on CIFAR100.
We report the mean of maximum (over rounds) 5 test accuracies and the number of communication rounds to reach the threshold accuracy.
}
\label{Performance of algorithms}
\resizebox{.9\textwidth}{!}{
\begin{tabular}{l c c c c c c c c c c c c}
\toprule
\multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{RotatedMNIST (CNN)} & \multicolumn{2}{c}{CIFAR10 (VGG11)} & \multicolumn{2}{c}{CIFAR100 (CCT)} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
& Acc (\%) & Rounds for 80\% & Acc (\%) & Rounds for 55\% & Acc (\%) & Rounds for 43\% \\
\midrule
Local & 14.67 & - & 10.00 & - & 1.31 & - \\
FedAvg & 82.47 & 828 (1.0X) & 58.99 & 736 (1.0X) & 44.00 & 550 (1.0X) \\
FedProx & 82.32 & 824 (1.0X) & 59.14 & 738 (1.0X) & 43.09 & 756 (0.7X) \\
Moon & 82.68 & 864 (0.9X) & 58.23 & 820 (0.9X) & 42.87 & 766 (0.7X) \\
DANN & 84.83 & 743 (1.1X) & 58.29 & 782 (0.9X) & 41.83 & - \\
GroupDRO & 80.23 & 910 (0.9X) & 56.57 & 835 (0.9X) & 44.34 & 444 (1.2X) \\
FedBR\xspace (Ours) & \textbf{86.58} & \textbf{628 (1.3X)} & 64.65 & 496 (1.5X) & 45.14 & 352 (1.5X) \\
\midrule
FedAvg + Mixup & 82.56 & 840 (1.0X) & 58.57 & 826 (0.9X) & 46.37 & 358 (1.6X) \\
FedMix & 81.33 & 902 (0.9X) & 57.37 & 872 (0.8X) & 42.69 & - \\
FedBR\xspace + Mixup (Ours) & 83.42 & 736 (1.1X) & \textbf{65.32} & \textbf{392 (1.9X)} & \textbf{47.75} & 294 (1.9X) \\
\bottomrule
\end{tabular}
}
\end{table*}
\begin{table}[!t]
\small
\centering
\caption{\small
\textbf{Comparison with VHL.}
We split CIFAR10 and CIFAR100 to 10 clients with $\alpha = 0.1$, and
report the mean of maximum (over rounds) 5 test accuracies and the number of communication rounds to reach the threshold accuracy. We vary the number of virtual data points to check the performance of VHL; in FedBR\xspace, the pseudo-data (32 samples) are transferred only once. For CIFAR100, we choose Mixup as the backbone.
\looseness=-1
}
\label{Comparison with VHL}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{l c c c c c c c c c c c c}
\toprule
\multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{CIFAR10 (VGG11)} & \multicolumn{2}{c}{CIFAR100 (CCT)} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& Acc (\%) & Rounds for 60\% & Acc (\%) & Rounds for 46\% \\
\midrule
VHL (2000 virtual data) & 61.23 & 886 (1.0X) & 46.80 & 630 (1.0X) \\
VHL (20000 virtual data) & 59.65 & 998 (0.9X) & 46.51 & 714 (0.9X) \\
FedBR\xspace (32 pseudo-data) & \textbf{64.61} & \textbf{530 (1.8X)} & \textbf{47.67} & \textbf{554 (1.1X)} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[!t]
\small
\centering
\caption{\small
\textbf{Combining FedBR\xspace with other baselines.}
We split CIFAR10 and CIFAR100 to 10 clients with $\alpha = 0.1$, and
report the mean of maximum (over rounds) 5 test accuracies. For FedBR\xspace, the pseudo-data (32 samples, constructed via \textbf{Mixture}) are transferred only once. For CIFAR100, we choose Mixup as the backbone.
\looseness=-1
}
\label{Combining with other baselines}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{l c c c c c c c c c c c c}
\toprule
\multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{CIFAR10 (VGG11)} & \multicolumn{2}{c}{CIFAR100 (CCT)} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& w/o FedBR\xspace & + FedBR\xspace & w/o FedBR\xspace & + FedBR\xspace \\
\midrule
FedAvg & 58.99 & 64.66 (+5.67) & 46.37 & 47.98 (+1.61) \\
FedCM & 62.63 & \textbf{65.32 (+2.69)} & 46.15 & 46.95 (+0.80) \\
FedDecorr & 54.15 & 62.70 (+8.55) & 47.18 & \textbf{48.34 (+1.16)} \\
FedNTD & 59.10 & 59.26 (+0.16) & 47.02 & 47.18 (+0.16) \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Experiments} \label{sec:experiments}
\subsection{Experiment Setting}
We elaborate on experiment settings in Appendix \ref{sec:Experiment Details}.
\looseness=-1
\paragraph{Baseline algorithms.}
We compare FedBR\xspace with both SOTA FL baselines including FedAvg~\citep{mcmahan2016communication}, Moon~\citep{li2021model}, FedProx~\citep{lifederated2018}, VHL~\citep{tang2022virtual}, FedMix~\citep{yoon2021fedmix}, FedNTD~\citep{lee2022preservation}, FedCM~\citep{xu2021fedcm}, and FedDecorr~\citep{shi2022towards} which are most relevant to our proposed algorithms.
Similar to a very recent study benchmarking FL~\citep{bai2023benchmarking}, we also include domain generalization (DG) methods as baselines and check their performance under standard FL settings.
For DG baselines, we choose GroupDRO~\citep{sagawa2019distributionally}, Mixup~\citep{yan2020improve}, and DANN~\citep{ganin2015domain}. We also discuss other DG baselines in Appendix~\ref{sec:Experiment Details}.
Unless specially mentioned, all algorithms use FedAvg as the backbone algorithm.
\looseness=-1
\paragraph{Models and datasets.} We examine all algorithms on RotatedMNIST, CIFAR10, and CIFAR100 datasets.
We use a four-layer CNN for RotatedMNIST, VGG11 for CIFAR10, and Compact Convolutional Transformer (CCT~\citep{hassani2021escaping}) for CIFAR100.
We split the datasets following the idea introduced in~\cite{yurochkin2019bayesian,hsu2019measuring,reddi2021adaptive}, where we leverage the Latent Dirichlet Allocation (LDA) to control the distribution drift with parameter $\alpha$.
The pseudo-data is chosen as \textbf{\textit{RSM}} by default, and we also provide results on other types of pseudo-data (c.f.\ Figure~\ref{Performance on different types of pseudo-data}).
By default, we generate one batch of pseudo-data (64 for MNIST and 32 for other datasets) in each round, and we also investigate only generating one batch of pseudo-data at the beginning of training to reduce the communication cost (c.f.\ Figure~\ref{Performance when only transfer pseudo-data at the beginning of training}, Figure~\ref{Performance on different types of pseudo-data}).
We use SGD optimizer and set the learning rate to $0.001$ for RotatedMNIST, and $0.01$ for other datasets.
The local batch size is set to 64 for RotatedMNIST, and 32 for other datasets (following the default setting in DomainBed~\citep{gulrajani2020in}).
Additional results regarding the impact of hyper-parameter choices and performance gain of FedBR\xspace on other datasets/settings/evaluation metrics can be found in Appendix \ref{sec:Additional Results}.
\looseness=-1
\subsection{Numerical Results}
\paragraph*{The superior performance of FedBR\xspace over existing FL and DG algorithms.\footnote{
See CIFAR10 + ResNet18 results in Table~\ref{Performance of cifar10 on resnet} of Appendix~\ref{sec:Additional Results}.}
\looseness=-1
}
In Table \ref{Performance of algorithms} and Figure~\ref{convergence curve main}, we show the performance and convergence curve of baseline methods as well as our proposed FedBR\xspace algorithm.
When comparing different FL and DG algorithms, we discovered that:
1) FedBR\xspace performs best in all settings; 2) DG baselines only slightly outperform ERM, and some are even worse; 3) regularizing local models toward global models from prior rounds, as in Moon and FedProx, does not yield positive outcomes.
\setcounter{subfigure}{0}
\begin{figure}
\caption{\small
\textbf{Ablation studies of FedBR\xspace}}
\label{Performance of algorithms with/without additional projection layer}
\label{Performance when only transfer pseudo-data at the beginning of training}
\label{Performance on different types of pseudo-data}
\label{Performance on different label distribution of RSM}
\label{Additional ablation studies}
\end{figure}
\begin{table*}[!t]
\small
\centering
\caption{\small
\textbf{Ablation studies of FedBR\xspace} on the effects of two components.
We show the performance of two components and remove the max step (Line 8 in Algorithm~\ref{Algorithm Framework of AugCA}) of component 2.
We split RotatedMNIST, CIFAR10, and CIFAR100 to 10 clients with $\alpha = 0.1$.
We run 1000 communication rounds on RotatedMNIST and CIFAR10 for each algorithm and 800 communication rounds on CIFAR100.
We report the mean of maximum (over rounds) 5 test accuracies and the number of communication rounds to reach the target accuracy.
}
\label{ablation study}
\resizebox{.9\textwidth}{!}{
\begin{tabular}{l c c c c c c c c c c c c}
\toprule
\multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{RotatedMNIST (CNN)} & \multicolumn{2}{c}{CIFAR10 (VGG11)} & \multicolumn{2}{c}{CIFAR100 (CCT)} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
& Acc (\%) & Rounds for 80\% & Acc (\%) & Rounds for 55\% & Acc (\%) & Rounds for 43\% \\
\midrule
FedAvg & 82.47 & 828 (1.0X) & 58.99 & 736 (1.0X) & 46.37 & 358 (1.0X) \\
\midrule
Component 1 & 84.40 & 770 (1.1X) & 64.32 & \textbf{442 (1.7X)} & 47.22 & 330 (1.1X) \\
\, + min step & 80.81 & 922 (0.9X) & 62.98 & 562 (1.3X) & 46.54 & 358 (1.0X) \\
Component 2 & 86.25 & 648 (1.3X) & 63.44 & 483 (1.5X) & \textbf{47.78} & 308 (1.2X) \\
\, + w/o max step & 81.24 & 926 (0.9X) & 58.84 & 584 (1.3X) & 43.50 & 512 (0.7X) \\
FedBR\xspace & \textbf{86.58} & \textbf{628 (1.3X)} & \textbf{64.65} & 496 (1.5X) & \textbf{47.75} & \textbf{294 (1.2X)} \\
\bottomrule
\end{tabular}
}
\end{table*}
\paragraph{Comparison with VHL.}
We vary the size of the virtual data in VHL and compare it with our FedBR\xspace in Table~\ref{Comparison with VHL}\footnote{The performance of FedBR\xspace differs slightly from Table~\ref{Performance of algorithms} because we only use 32 pseudo-data here for a fair comparison with VHL.}:
our communication-efficient FedBR\xspace uses only 32 pseudo-data and transfers them once, while the communication-intensive VHL~\citep{tang2022virtual} requires the size of the virtual dataset to be proportional to the number of classes and uses at least 2{,}000 virtual data points (the released official code suggests 2{,}000 for CIFAR10 and 20{,}000 for CIFAR100; we use the authors' default hyper-parameters and implementation).
We find that 1) FedBR\xspace always outperforms VHL; and 2) FedBR\xspace overcomes several shortcomings of VHL, e.g.\ the need for labeled virtual data and a large virtual dataset.
\looseness=-1
\subsection{Ablation Studies}
\label{sec:ablation study}
\paragraph{Effectiveness of the different components in FedBR\xspace.}
In Table \ref{ablation study}, we show the improvements brought by different components of FedBR\xspace.
In order to highlight the importance of our two components, especially the max-step (c.f.\ Line 8 in Algorithm~\ref{Algorithm Framework of AugCA}) in component 2, we first consider two components of FedBR\xspace individually, followed by removing the max-step.
We find that:
1) Two components of FedBR\xspace have individual improvements compared with FedAvg, but the combined solution FedBR\xspace consistently achieves the best performance.
2) The projection layer is crucial.
After removing projection layers, Component 2 of FedBR\xspace performs even worse than FedAvg; such insights may also explain the limitations of Moon~\citep{li2021model}.
\looseness=-1
\paragraph{Performance of FedBR\xspace on CIFAR10 with different number of clients.}
In Table \ref{Performance of FedAug on CIFAR10 with different number of clients}, we increase the number of clients to 100, and $10$ clients are randomly chosen in each communication round.
We find that FedBR\xspace consistently outperforms the other methods.
\begin{table}[!t]
\small
\centering
\caption{\small
\textbf{Performance of algorithms with 100 clients.}
We split CIFAR10 dataset into 100 clients with $\alpha = 0.1$.
We run 1000 communication rounds for each algorithm on the VGG11 model and report the mean of the maximal 5 accuracies (over rounds) during training on test datasets.
\looseness=-1
}
\resizebox{.48\textwidth}{!}{
\begin{tabular}{l c c c c c c c c c c}
\toprule
Methods & FedAvg & FedDecorr & FedMix & FedProx & Mixup & VHL & FedBR \\
\midrule
Acc & 38.20 & 35.53 & 34.71 & 37.90 & 36.63 & 40.93 & 41.59 \\
\bottomrule
\end{tabular}
}
\label{Performance of FedAug on CIFAR10 with different number of clients}
\end{table}
\begin{table}[!t]
\small
\centering
\caption{\small
\textbf{Performance of local model on balanced global test datasets.}
We split CIFAR10 to 10 clients with $\alpha = 0.1$, and
report the test accuracies achieved by the local/aggregated models at the end of each communication round. For FedBR\xspace, the pseudo-data (32 samples) are transferred only once.
\looseness=-1
}
\label{Performance of local model on balanced global test datasets}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{l c c c c c c c c c c c c}
\toprule
Algorithm & FedAvg & FedDecorr & VHL & FedBR\xspace \\
\midrule
Local Model Performance & 21.01 & 21.18 & 32.81 & 21.83 \\
Aggregated Model Performance & 46.37 & 47.10 & 46.80 & 47.67 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[!t]
\small
\centering
\caption{\small
\textbf{Parameter transmitted and mean simulation time in each round.}
We split CIFAR10 and CIFAR100 to 10 clients with $\alpha = 0.1$. For FedBR\xspace, the pseudo-data (32 samples) are transferred only once. The simulation time includes only the computation time per step and does not include the communication time. CIFAR100 experiments use Mixup as the backbone.
\looseness=-1
}
\label{Parameter transmitted and mean simulation time in each round}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{l c c c c c c c c c c c c}
\toprule
CIFAR10 (VGG11) & FedAvg & Moon & VHL & FedCM & FedBR\xspace \\
\midrule
Parameters (Millions) & 9.2 & 9.7 & 9.2 & 18.4 & 9.7 \\
Mean simulation time (s) & 0.29 & 0.69 & 0.43 & 0.36 & 0.60 \\
\toprule
CIFAR100 (CCT) & FedAvg & Moon & VHL & FedCM & FedBR\xspace \\
\midrule
Parameters (Millions) & 22.4 & 22.6 & 22.4 & 44.8 & 22.6 \\
Mean simulation time (s) & 0.67 & 1.97 & 1.44 & 0.85 & 1.19 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}
\caption{\small
\textbf{Convergence curve w.r.t. simulation time.}}
\label{fig:CIFAR10-time-smooth}
\label{fig:CIFAR100-time-smooth}
\label{Convergence curve w.r.t. simulation time}
\end{figure}
\paragraph{Reducing the communication cost of FedBR\xspace.}
To reduce the communication overhead, we reduce the size of the pseudo-data and transmit only one mini-batch of pseudo-data (64 for MNIST and 32 for the others), once at the beginning of training.
In Figure~\ref{Performance when only transfer pseudo-data at the beginning of training}, we show the performance of FedBR\xspace when the pseudo-data are transferred to clients only at the beginning of training (64 pseudo-data for RotatedMNIST, and 32 for CIFAR10 and CIFAR100).
Results show that only transferring pseudo-data once can achieve comparable performance gain compared with transferring pseudo-data in each round.
This indicates that the performance of FedBR\xspace will not drop even with only a small amount of pseudo-data.
\looseness=-1
\paragraph{Regarding privacy issues caused by RSM.} Because RSM may have some privacy issues, we consider using Mixture to protect privacy.
In Figure~\ref{Performance on different types of pseudo-data}, we show the performance of FedBR\xspace with different types of pseudo-data (pseudo-data are transferred only once at the beginning of training, as in Figure~\ref{Performance when only transfer pseudo-data at the beginning of training}).
Results show that:
1) FedBR\xspace consistently outperforms FedAvg on all types of pseudo-data.
2) When using \textbf{Mixture} as pseudo-data and setting $K = 0$ in \eqref{pseudo_data}, FedBR\xspace still has a performance gain over FedAvg, and a more significant gain can be observed by setting $K = 1$.
\paragraph{Constructing pseudo-data by RSM using local data with unbalanced label distribution.} In Figure~\ref{Performance on different label distribution of RSM}, we construct the pseudo-data for FedBR\xspace using data with (1) balanced and (2) unbalanced label distributions. Results show that the performance of FedBR\xspace remained the same even when the data used to create the pseudo-data had an unbalanced label distribution.
\paragraph{Combining FedBR\xspace with other FL methods enhances performance.} In Table~\ref{Combining with other baselines}, we combine FedBR\xspace with other SOTA FL algorithms, including FedNTD~\citep{lee2022preservation}, FedCM~\citep{xu2021fedcm}, and FedDecorr~\citep{shi2022towards}. Results demonstrate that FedBR\xspace significantly enhances the performance of these methods through simple integration.
\paragraph{Effectiveness of FedBR\xspace on reducing the local learning bias.} To validate FedBR\xspace's ability to train unbiased local models, we save and assess the local models at the end of each communication round using balanced global test datasets. Results in Table~\ref{Performance of local model on balanced global test datasets} show that: 1) FedBR\xspace achieves better performance than FedAvg without using labeled globally shared data, and the aggregated model matches and even surpasses VHL's performance; 2) local models of VHL perform better than those of other methods by using labeled globally shared datasets to correct classification errors. It is natural for VHL to achieve better local performance, as the local datasets of VHL are relatively balanced.
\paragraph{Performance of FedBR\xspace regarding communication and computation costs.} Using pseudo-data and an additional projection layer in FedBR\xspace increases computation and communication costs. We quantify this by reporting transmitted parameters and mean simulation time per round in Table~\ref{Parameter transmitted and mean simulation time in each round}, and display the convergence curve with respect to the simulation time in Figure~\ref{Convergence curve w.r.t. simulation time}. Results show that: 1) The computation time of FedBR\xspace is similar to that of other FL methods that add regularization terms to overcome the local learning bias, such as VHL and Moon. 2) FedBR\xspace's communication cost remains minimal, as it only introduces a small additional three-layer MLP projection layer, in contrast to the larger feature extractors found in modern deep neural networks.
\section{Conclusion and Future works}
We propose a new algorithm, FedBR\xspace, for Federated Learning that uses label-agnostic pseudo-data to improve performance on heterogeneous data. It consists of two key components, and experiments show that it significantly improves federated learning on heterogeneous data.
Unlike previous methods, FedBR\xspace does not require labeled pseudo-data or a large pseudo-dataset, therefore reducing the communication costs.
However, FedBR\xspace requires additional computation, as the algorithm needs extra forward propagation on the pseudo-data as well as additional computation for the min-max optimization procedure. It would be interesting to explore ways to reduce this extra computation in the future.
\appendix
\onecolumn
{
\hypersetup{linkcolor=black}
\parskip=0em
\renewcommand{\contentsname}{Contents of Appendix}
\tableofcontents
\addtocontents{toc}{\protect\setcounter{tocdepth}{3}}
}
\section{Experiment Details}
\label{sec:Experiment Details}
\paragraph{Framework and baseline algorithms.} In addition to traditional FL methods, we aim to see if domain generalization (DG) methods can help increase model performance during FL training.
Thus, we use the DomainBed benchmark \citep{gulrajani2020in}, which contains a series of regularly used DG algorithms and datasets. The algorithms in DomainBed can be divided into three categories:
\begin{itemize}[leftmargin=12pt,nosep]
\item \textbf{Infeasible methods:} Some algorithms cannot be applied in FL scenarios due to privacy concerns, e.g.\ MLDG \citep{li2017learning}, MMD \citep{li2018domain}, CORAL \citep{sun2016deep}, and VREx \citep{krueger2020out}, which need features or data from every domain in each iteration.
\item \textbf{Feasible methods (with limitations):} Some algorithms can be applied in FL scenarios with some limitations. For example, DANN \citep{ganin2015domain}, CDANN \citep{li2018deep} require knowing the number of domains/clients, which is impractical in the cross-device setting.
\item \textbf{Feasible methods (without limitations):} Some algorithms can be directly applied in FL settings, for example, ERM, GroupDRO \citep{sagawa2019distributionally}, Mixup \citep{yan2020improve}, and IRM \citep{martin2019invariant}.
\end{itemize}
We choose several commonly used DG algorithms that can easily be applied in FL scenarios, including ERM, GroupDRO \citep{sagawa2019distributionally}, Mixup \citep{yan2020improve}, and DANN \citep{ganin2015domain}.
For FL baselines, we choose FedAvg~\citep{mcmahan2016communication} (equal to ERM), Moon~\citep{li2021model}, FedProx~\citep{lifederated2018}, SCAFFOLD~\citep{karimireddyscaffold2019} and FedMix~\citep{yoon2021fedmix} which are most related to our proposed algorithms.
Notice that some existing works consider combining FL and domain generalization, for example, combining DRO with FL~\citep{mohri2019agnostic,deng2021distributionally}, and combining MMD or DANN with FL~\citep{peng2019federated,wang2022framework,shen2021fedmm}. The underlying idea of the former two DRO-based approaches is the same as our GroupDRO implementation, up to minor differences in the weight updates; the latter series of works combining MMD or DANN aims to train models that work well on unseen distributions, which is orthogonal to our goal of overcoming local heterogeneity. To check the performance of this series of works, we integrate DANN into our FL environments.
Notice that we carefully tune all the baseline methods. The implementation details of each algorithm are listed below:
\begin{itemize} [leftmargin=12pt,nosep]
\item GroupDRO: The weight of each client is updated by $\momega_i^{t+1} = \momega_i^{t} \exp (0.01 l_i^{t})$, where $l_i^{t}$ is the loss value of client $i$ at round $t$.
\item Mixup: Local data is mixed by $\tilde{\xx} = \lambda \xx_i + (1 - \lambda) \xx_j$, where $\lambda$ is sampled from $\mathrm{Beta}(0.2, 0.2)$.
\item DANN: We use a three-layer MLP as the domain discriminator, with an MLP width of 256. The weight of the domain discrimination loss is tuned in $\{ 0.01, 0.1, 1 \}$.
\item FedProx: The weight of the proximal term is tuned in $\{ 0.001, 0.01, 0.1 \}$.
\item Moon: The projection layer is a two-layer MLP; the MLP width is set to 256, and the output dimension is 128. We tune the weight of the contrastive loss in $\{ 0.01, 0.1, 1, 10 \}$.
\item FedMix: The mixup weight $\lambda$ used in FedMix is tuned in $\{ 0.01, 0.1, 0.2 \}$; we construct 64 augmentation samples in each local step for RotatedMNIST, and 32 for CIFAR10 and CIFAR100.
\item VHL: We use the same setting as in the original paper, with the weight of the augmentation classification loss $\alpha = 1.0$, and use the ``proxy\_align\_loss'' provided by the authors for feature alignment. Virtual data is generated by an untrained StyleGAN-v2; we sample 2000 virtual data points for CIFAR10 and RotatedMNIST, and 20000 for CIFAR100, following the default setting of the original work. To make a fair comparison, we sample 32 virtual samples in each local step for CIFAR10 and CIFAR100.
\item FedNTD: We use the official code of FedNTD, set $\tau = 1.0$ as suggested in the original paper, and tune $\beta$ in $\{1.0, 0.1\}$.
\item FedDecorr: We use the official code of FedDecorr, and set the weight of penalty term to $0.1$.
\item FedBR\xspace: We use a three-layer MLP as the projection layer, the MLP width is set to 256, and the output dimension is 128. By default, we set $\tau_1 = \tau_2 = 2.0$, the weight of contrastive loss $\mu = 0.5$, and the weight of AugMean $\lambda = 1.0$ on MNIST and CIFAR100, $\lambda = 0.1$ on CIFAR10 and PACS. We sample 64 pseudo-data in each local step for RotatedMNIST and 32 samples for CIFAR10 and CIFAR100.
\end{itemize}
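Two of the implementation notes above (the GroupDRO client reweighting and the local mixup) can be sketched in stdlib Python. The final normalization in the weight update is our assumption, common in GroupDRO implementations but not stated in the note:

```python
import math
import random

def groupdro_weight_update(weights, losses, step=0.01):
    """GroupDRO reweighting from the note above:
    w_i^{t+1} = w_i^t * exp(step * l_i^t), so clients with higher
    loss get more weight. Normalization is our assumption."""
    new = [w * math.exp(step * l) for w, l in zip(weights, losses)]
    total = sum(new)
    return [w / total for w in new]

def mixup_pair(x_i, x_j, rng=random.Random(0)):
    """Mixup of two local samples with lam ~ Beta(0.2, 0.2)."""
    lam = rng.betavariate(0.2, 0.2)
    return [lam * a + (1 - lam) * b for a, b in zip(x_i, x_j)]
```

With `step = 0.01` the weight of the higher-loss client grows relative to the others, exactly the exponential update described above.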
\paragraph{Datasets and Models.} For datasets, we choose RotatedMNIST, CIFAR10, CIFAR100, and PACS.
For RotatedMNIST, CIFAR10, and CIFAR100, we split the datasets following the idea introduced in~\cite{yurochkin2019bayesian,hsu2019measuring,reddi2021adaptive}, where we leverage the Latent Dirichlet Allocation (LDA) to control the distribution drift with parameter $\alpha$. Larger $\alpha$ indicates smaller non-iidness. We divided each environment into two clients for PACS, with the first client containing data from classes 0-3, and the second client containing data from classes 4-6.
Unless otherwise mentioned, we split RotatedMNIST, CIFAR10, and CIFAR100 into 10 clients and set $\alpha = 0.1$. For PACS, we have 8 clients instead. Notice that for each client of CIFAR10, we apply a special transformation, namely a rotation of the local data, to simulate natural distribution shift. In detail:
\begin{itemize} [leftmargin=12pt,nosep]
\item RotatedMNIST: We first split MNIST by LDA with parameter $\alpha = 0.1$ into 10 clients; each client then rotates its local data by one of the angles $\{0, 15, 30, 45, 60, 75, 90, 105, 120, 135\}$ degrees.
\item CIFAR10: We first split CIFAR10 by LDA with parameter $\alpha = 0.1$ into $N$ clients. Then, for each client, we sample $q \in \mathbb{R}^{10}$ from $\mathrm{Dir}(1.0)$. For each image in the local data, we sample an angle in $\{0, 15, 30, 45, 60, 75, 90, 105, 120, 135\}$ with probability $q$ and rotate the image by that angle.
\item Clean CIFAR10: Unlike the previous setting, we do not rotate the samples in CIFAR10 (no inner-class non-iidness).
\item CIFAR100: We split CIFAR100 by LDA with parameter $\alpha = 0.1$, and transform the training data using RandomCrop, RandomHorizontalFlip, and normalization.
\end{itemize}
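The LDA-based split described above can be sketched as follows; `lda_split` and its arguments are illustrative names, not the authors' released code:

```python
import numpy as np

def lda_split(labels, n_clients=10, alpha=0.1, seed=0):
    """Partition sample indices across clients with a Dirichlet (LDA-style)
    prior over classes; a smaller alpha gives a more non-iid split."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # share of class c assigned to each client
        p = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients
```

With $\alpha = 0.1$ most clients end up dominated by a few classes, while a large $\alpha$ approaches a uniform split.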
Each communication round includes 50 local iterations, with 1000 communication rounds for RotatedMNIST and CIFAR10, 800 communication rounds for CIFAR100, and 400 communication rounds for PACS. Notice that the number of communication rounds is carefully chosen, and the accuracy of all algorithms does not significantly improve after the given communication rounds.
The public data is chosen as RSM~\citep{yoon2021fedmix} by default, and we also provide results on other proxy datasets.
We utilize a four-layer CNN for MNIST, VGG11 for CIFAR10 and PACS, and CCT~\citep{hassani2021escaping} (Compact Convolutional Transformer, cct\_7\_3x1\_32\_c100) for CIFAR100.
For each algorithm and dataset, we employ SGD as the optimizer, and set the learning rate $lr = 0.001$ for MNIST, and $lr = 0.01$ for CIFAR10, CIFAR100, and PACS. When using CCT and ResNet, we set the momentum to $0.9$. We set the same random seeds for all algorithms.
We set local batch size to 64 for RotatedMNIST, and 32 for CIFAR10, CIFAR100, and PACS.
\section{Details of Augmentation Data}
\label{sec:details of augmentation data}
We use the same data augmentation framework as FedMix, as shown in Algorithm \ref{Construct Augmentation Data}. For each local dataset, we upload the mean of every $M$ samples to the server. The constructed augmentation data is close to random noise; Figure \ref{Augmentation data in CIFAR10} shows samples randomly chosen from the augmentation dataset of CIFAR10.
\begin{algorithm}[h]
\small
\begin{algorithmic}[1]
\Require{local Datasets $D_1, \dots, D_N$, number of augmentation data for each client $K$, number of samples to construct one augmentation sample $M$.}
\Ensure{Augmentation Dataset $D_{p}$.}
\myState{Initialize $D_{p} = \emptyset$.}
\For{$i = 1, \dots, N$}
\For{$k = 1, \dots, K$}
\myState{Randomly sample $x_1, \dots, x_M$ from $D_i$.}
\myState{$\bar{x} = \frac{1}{M} \sum_{m=1}^{M} x_m$.}
\myState{$D_p = D_p \cup \{\bar{x}\}$}
\EndFor
\EndFor
\end{algorithmic}
\mycaptionof{algorithm}{\small Construct Augmentation Data}
\label{Construct Augmentation Data}
\end{algorithm}
\begin{algorithm}[h]
\small
\begin{algorithmic}[1]
\Require{Proxy Datasets $D_{prox}$, number of augmentation data $K$, number of samples to construct one augmentation sample $M$.}
\Ensure{Augmentation Dataset $D_{p}$.}
\myState{Initialize $D_{p} = \emptyset$.}
\For{$k = 1, \dots, K$}
\myState{Randomly sample $x_1, \dots, x_M$ from $D_{prox}$.}
\myState{$\bar{x} = \frac{1}{M} \sum_{m=1}^{M} x_m$.}
\myState{$D_p = D_p \cup \{\bar{x}\}$}
\EndFor
\end{algorithmic}
\mycaptionof{algorithm}{\small Construct Augmentation Data by Proxy Data}
\label{Construct Augmentation Data by Proxy Data}
\end{algorithm}
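Both algorithms reduce to the same mean-of-$M$ operation over one or more source datasets; a minimal NumPy sketch (illustrative, not the official implementation) is:

```python
import numpy as np

def construct_augmentation(local_datasets, K, M, seed=0):
    """For each local dataset, build K pseudo-samples, each the mean of M
    randomly drawn local samples, and pool them into one dataset."""
    rng = np.random.default_rng(seed)
    pool = []
    for D_i in local_datasets:
        for _ in range(K):
            picks = rng.choice(len(D_i), size=M, replace=False)
            pool.append(D_i[picks].mean(axis=0))
    return np.stack(pool)
```

Passing a single proxy dataset instead of the list of local datasets yields Algorithm \ref{Construct Augmentation Data by Proxy Data}.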
\begin{figure*}
\caption{We show 20 augmentation samples of the CIFAR10 dataset here. Notice that the augmentation data is close to random noise and cannot be classified as any class.}
\label{Augmentation data in CIFAR10}
\end{figure*}
\section{Additional Results}
\label{sec:Additional-Results}
\begin{figure*}
\caption{\small Convergence curve of algorithms on different datasets.}
\label{convergence curve appendix}
\end{figure*}
\label{sec:Additional Results}
\subsection{Results with Error Bar}
In this section, we report the performance of our method FedBR\xspace and other baselines with error bars to verify the performance gain of our proposed method.
\begin{table*}[!ht]
\small
\centering
\caption{\small
\textbf{Performance of algorithms with error bar.}
All examined algorithms use FedAvg as the backbone. We run 1000 communication rounds on RotatedMNIST and CIFAR10 for each algorithm. For each algorithm, we run three trials with different random seeds. For each trial, we report the mean of the 5 highest test accuracies and the number of communication rounds needed to reach the threshold accuracy.
}
\label{Performance of algorithms with error bar}
\begin{tabular}{l c c c c c c c c c c}
\toprule
\multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{RotatedMNIST} & \multicolumn{2}{c}{CIFAR10} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& Acc (\%) & Rounds for 80\% & Acc (\%) & Rounds for 55\% \\
\midrule
ERM (FedAvg) & $82.78 \pm 0.38$ & 821 (1.0X) & $58.97 \pm 0.30$ & 742 (1.0X) \\
DANN & $84.67 \pm 0.46$ & 754 (1.1X) & $58.98 \pm 0.61$ & 747 (1.0X) \\
Mixup & $82.38 \pm 0.07$ & 853 (1.0X) & $58.32 \pm 0.33$ & 822 (0.9X) \\
GroupDRO & $80.65 \pm 0.53$ & 929 (0.9X) & $56.72 \pm 0.26$ & 840 (0.9X) \\
\midrule
FedBR\xspace (Ours) & $\boldsymbol{87.05 \pm 0.44}$ & \textbf{637 (1.3X)} & $\boldsymbol{64.62 \pm 0.32}$ & \textbf{374 (2.0X)} \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[!ht]
\small
\centering
\caption{\small
\textbf{Performance of algorithms on CIFAR10.} We split the CIFAR10 dataset into 10 clients with $\alpha = 0.1$, without additional rotation. For each algorithm, we run 1000 communication rounds on ResNet18 (with group normalization) and set the number of local steps to 50. Note that we set the momentum to 0.9 for ResNet18.
}
\label{Performance of cifar10 on resnet}
\begin{tabular}{l c c c c c c c c c c}
\toprule
& FedAvg & FedProx & Moon & VHL & FedBR\xspace (ours) \\
\midrule
Accuracy (ResNet18) & 45.91 & 46.28 & 43.85 & 43.7 & 47.29 \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Ablation Study of FedBR\xspace}
\paragraph{Values of $\tau_1$ and $\tau_2$ in Component 2.}
In this paragraph, we investigate how the values of $\tau_1$ and $\tau_2$ affect the performance of the second component of FedBR\xspace. In Table \ref{Performance of AugCA under different weights of AugCA-O and AugCA-S}, we show the results on the RotatedMNIST dataset for different weights $\tau_1$ and $\tau_2$. Results show that: 1) setting $\tau_2 = 0$, which only minimizes the distance between global and local features, already yields a significant performance gain compared with ERM; however, adding $\tau_2$ can further improve the performance. 2) The best weights on the RotatedMNIST dataset are $\tau_1 = 2.0$ and $\tau_2 = 0.5$.
\begin{table*}[!ht]
\small
\centering
\caption{\small
\textbf{Performance of Component 2 of FedBR\xspace under different values of $\tau_1, \tau_2$.}
We run 1000 communication rounds on the RotatedMNIST dataset. For each setting, we run three trials with different random seeds. For each trial, we report the mean of the 5 highest test accuracies and the number of communication rounds needed to reach the threshold accuracy.
}
\label{Performance of AugCA under different weights of AugCA-O and AugCA-S}
\begin{tabular}{l c c c c c c c c c c}
\toprule
$\tau_1$ & $\tau_2$ & Acc (\%) & Rounds for 80\% & Rounds for 85\% \\
\midrule
2.0 & 0.0 & $86.11 \pm 0.77$ & 746 & 933 \\
2.0 & 0.1 & $86.22 \pm 0.33$ & 753 & 920 \\
2.0 & 0.5 & $87.24 \pm 0.50$ & 647 & 851 \\
2.0 & 1.0 & $86.25 \pm 0.87$ & 705 & 922 \\
2.0 & 2.0 & $86.01 \pm 0.33$ & 680 & 932 \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[!ht]
\small
\centering
\caption{\small
\textbf{Performance of FedBR\xspace under different values of $\tau_1, \tau_2$.}
We run 1000 communication rounds on the CIFAR10 dataset. For each setting, we run three trials with different random seeds. For each trial, we report the mean of the 5 highest test accuracies and the number of communication rounds needed to reach the threshold accuracy.
}
\label{Performance of FedAug under different weights of AugCA-O and AugCA-S}
\begin{tabular}{l c c c c c c c c c c}
\toprule
$\tau_1$ & $\tau_2$ & Acc (\%) & Rounds for 55\% & Rounds for 60\% \\
\midrule
2.0 & 0.0 & $64.05 \pm 0.27$ & 390 & 563 \\
2.0 & 0.5 & $64.26 \pm 0.47$ & 382 & 585 \\
2.0 & 1.0 & $64.77 \pm 0.24$ & 374 & 533 \\
2.0 & 2.0 & $64.62 \pm 0.32$ & 374 & 541 \\
\bottomrule
\end{tabular}
\end{table*}
\paragraph{Weights of the first component of FedBR\xspace.} In this paragraph, we investigate how the weight of the first component of FedBR\xspace affects model performance in Table \ref{Performance of AugMean under different weights}.
\begin{table*}[!ht]
\small
\centering
\caption{\small
\textbf{Performance of component 1 under different weights.}
We run 1000 communication rounds on the CIFAR10 dataset. For each setting, we run three trials with different random seeds. For each trial, we report the mean of the 5 highest test accuracies and the number of communication rounds needed to reach the threshold accuracy. We use $\lambda$ as the weight of the first component of FedBR\xspace.
}
\label{Performance of AugMean under different weights}
\begin{tabular}{l c c c c c c c c c c}
\toprule
$\lambda$ & Acc (\%) & Rounds for 55\% & Rounds for 60\% \\
\midrule
0.1 & $64.12 \pm 0.27$ & 442 & 591 \\
0.5 & $64.92 \pm 0.46$ & 385 & 536 \\
1.0 & $64.50 \pm 0.34$ & 379 & 565 \\
\bottomrule
\end{tabular}
\end{table*}
\paragraph{Domain robustness of FL and DG algorithms.}
We also expect our method to increase the model's robustness, since it is designed to learn client-invariant features.
Therefore, we calculate the worst accuracy on the test datasets over all clients/domains and report the mean of each algorithm's top 5 worst accuracies in Table \ref{Worst Case Performance of algorithms} to show the domain robustness of the algorithms.
We have the following findings:
1) FedBR\xspace significantly outperforms the other approaches, and its improvement over FedAvg is more pronounced than for the mean accuracy in Table \ref{Performance of algorithms}. This finding is evidence that FedBR\xspace learns domain-invariant features and improves robustness.
2) Under these settings, the DG baselines outperform FedAvg, demonstrating that DG algorithms help to enhance domain robustness.
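For concreteness, the worst-case metric reported below can be sketched as follows (our reading of the protocol; the function name is illustrative):

```python
import numpy as np

def mean_top5_worst(acc):
    """acc[r, c] is the test accuracy of client c at round r. Take the worst
    client accuracy per round, then average the 5 largest of these minima."""
    worst_per_round = acc.min(axis=1)
    return np.sort(worst_per_round)[-5:].mean()
```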
\begin{table*}[!ht]
\small
\centering
\caption{\small
\textbf{Performance of algorithms.}
All examined algorithms use FedAvg as the backbone. We run 1000 communication rounds on RotatedMNIST and CIFAR10 for each algorithm, 800 communication rounds for CIFAR100, and 400 communication rounds for PACS. We report the mean of the 5 highest test accuracies and the number of communication rounds needed to reach the final accuracy of ERM.
}
\label{Performance of algorithms-appendix}
\begin{tabular}{l c c c c c c c c c c c c}
\toprule
\multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{RotatedMNIST} & \multicolumn{2}{c}{CIFAR10} & \multicolumn{2}{c}{PACS} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
& Acc (\%) & Rounds (Speed up) & Acc (\%) & Rounds (Speed up) & Acc (\%) & Rounds (Speed up) \\
\midrule
ERM (FedAvg) & 82.47 & 828 (1.0X) & 58.99 & 736 (1.0X) & 64.03 & 168 (1.0X) \\
FedProx & 82.32 & 824 (1.0X) & 59.14 & 738 (1.0X) & 65.10 & 168 (1.0X) \\
SCAFFOLD & 82.49 & 814 (1.0X) & 59.00 & 738 (1.0X) & 64.49 & 168 (1.0X) \\
FedMix & 81.33 & 902 (0.9X) & 57.37 & 872 (0.8X) & 62.14 & 228 (0.7X) \\
Moon & 82.68 & 864 (0.9X) & 58.23 & 820 (0.9X) & 64.86 & 122 (1.4X) \\
DANN & 84.83 & 743 (1.1X) & 58.29 & 782 (0.9X) & 64.97 & 109 (1.5X) \\
Mixup & 82.56 & 840 (1.0X) & 58.57 & 826 (0.9X) & 64.36 & 210 (0.8X) \\
GroupDRO & 80.23 & 910 (0.9X) & 56.57 & 835 (0.9X) & 64.40 & 170 (1.0X) \\
\midrule
FedBR\xspace (Ours) & \textbf{86.58} & \textbf{628 (1.3X)} & \textbf{64.65} & \textbf{496 (1.5X)} & \textbf{65.63} & \textbf{100 (1.7X)} \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[!ht]
\small
\centering
\caption{\small
\textbf{Worst Case Performance of algorithms.}
All examined algorithms use FedAvg as the backbone. We run 1000 communication rounds on RotatedMNIST and CIFAR10 for each algorithm, 800 rounds for CIFAR100, and 400 rounds for PACS. We calculate the worst accuracy over all clients in each round and report the mean of the top 5 worst accuracies for each method. We also report the number of communication rounds needed to reach the final worst accuracy of FedAvg.
}
\label{Worst Case Performance of algorithms}
\begin{tabular}{l c c c c c c c c c c c c}
\toprule
\multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{RotatedMNIST} & \multicolumn{2}{c}{CIFAR10} & \multicolumn{2}{c}{PACS} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
& Acc (\%) & Rounds (Speed up) & Acc (\%) & Rounds (Speed up) & Acc (\%) & Rounds (Speed up) \\
\midrule
ERM (FedAvg) & 66.60 & 816 (1.0X) & 41.30 & 846 (1.0X) & 42.79 & 170 (1.0X) \\
FedProx & 65.88 & 780 (1.0X) & 41.84 & 840 (1.0X) & 42.82 & 170 (1.0X) \\
SCAFFOLD & 66.72 & 804 (1.0X) & 40.88 & 840 (1.0X) & 41.63 & 170 (1.0X) \\
FedMix & 60.52 & 910 (0.9X) & 28.44 & - & 38.00 & - \\
Moon & 66.18 & 866 (0.9X) & 40.34 & 908 (0.9X) & 41.59 & 66 (2.6X) \\
DANN & 67.85 & 753 (1.1X) & 43.38 & 747 (1.1X) & 40.51 & 59 (2.9X) \\
Mixup & 66.25 & 836 (1.0X) & 40.32 & 984 (0.9X) & 41.89 & 252 (0.7X) \\
GroupDRO & 68.53 & \textbf{568 (1.4X)} & 46.90 & 656 (1.3X) & 43.18 & 246 (0.7X) \\
\midrule
FedBR\xspace (Ours) & \textbf{77.13} & 630 (1.3X) & \textbf{48.94} & \textbf{632 (1.3X)} & \textbf{43.99} & \textbf{58 (2.9X)} \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{T-SNE and Classifier Outputs of Toy Examples}
\label{sec:t-SNE and classcifier output}
As in the settings of Figure \ref{Output of model trained on class 1-5} and Figure \ref{Class imbalance figure}, we investigate whether the two components of FedBR\xspace help mitigate the proposed biases in the features and the classifier. Figure \ref{Features after AugCA} shows the features after applying the second component of FedBR\xspace, which significantly mitigates the proposed feature bias: 1) on the seen datasets, local features are close to global features; 2) on the unseen datasets, the local features are far away from those of the seen datasets.
Figure \ref{Classifier after AugMean} shows the output of the local classifier on unseen classes after applying the first component of FedBR\xspace. Notice that, compared with Figure \ref{Class imbalance figure}, the output is more balanced.
\begin{figure*}
\caption{\small Features after the second component of FedBR\xspace.}
\label{Features after AugCA}
\end{figure*}
\begin{figure*}
\caption{\small Classifier output after the first component of FedBR\xspace on unseen classes.}
\label{Classifier after AugMean}
\end{figure*}
\begin{figure*}
\caption{\small Fine-tuned local features after 10 local epochs.}
\label{Fine-tuned Features after 10 local epochs}
\end{figure*}
\begin{figure*}
\caption{\small Fine-tuned local features after 20 local epochs.}
\label{Fine-tuned Features after 20 local epochs}
\end{figure*}
In Figure~\ref{Fine-tuned Features after 10 local epochs} and Figure~\ref{Fine-tuned Features after 20 local epochs}, we show the local learning bias when the local model has a better feature initialization. We copy the feature extractor of the global model to the local models and randomly initialize the local classifiers. Results show that: 1) The drifts between global and local features remain significant even with a good feature initialization.
2) The local features of unseen data are less correlated with the local features of seen data than when training from scratch, indicating that this problem is mitigated after enough training rounds.
3) The drifts between global and local features increase as the number of local epochs increases.
We also investigate whether our observation holds at different stages of global model training. In this experiment, we use the CIFAR10 dataset and train the global model for 1, 3, and 10 epochs on the whole dataset, obtaining 29.74\%, 38.65\%, and 49.28\% global accuracy, respectively; we then copy the global models (including the classifier) directly to the clients and fine-tune them for 10 local epochs. Results are shown in Figure~\ref{Fine-tuned E1 Features after 10 local epochs}: for global models that are not well trained, both the difference between global features on the same input and the similarity between local features of different inputs are significant.
\begin{figure*}
\caption{\small First train global model on the whole dataset for 1, 3, and 10 epoch (w.r.t. each row), then report local features after 10 local epochs.}
\label{Fine-tuned E1 Features after 10 local epochs}
\end{figure*}
\begin{figure}
\caption{\textbf{Illustration of our observation under mild split conditions.}}
\label{fig:Illustration of our observation under mild split conditions}
\end{figure}
\subsection{T-SNE Results on Mild Conditions}
We introduce a parameter $\alpha \in [0, 0.5]$ to control the level of non-iidness of the clients, where a larger $\alpha$ indicates weaker non-iidness, and $\alpha = 0.5$ yields balanced local distributions. Results are shown in Figure~\ref{fig:Illustration of our observation under mild split conditions}: 1) The local feature on the unseen data (Local $X_2$) still lacks a clear decision boundary, and the local features are close to each other even for data from different classes. 2) The decision boundary of the aggregated model becomes clearer as $\alpha$ increases, supporting the necessity of reducing the local bias. 3) Our observation on the biased classifier still holds: a smaller $\alpha$ leads to a more biased classifier output.
\begin{figure}
\caption{\textbf{Convergence curve w.r.t.\ simulation time.}}
\label{fig:Convergence curve w.r.t simulation time}
\end{figure}
\end{document} |
\begin{document}
\begin{abstract}
We consider the global bifurcation problem for spatially periodic traveling waves for two-dimensional gravity-capillary vortex sheets.
The two fluids have arbitrary constant, non-negative densities (not both zero),
the gravity parameter can be positive, negative, or zero, and the surface tension
parameter is positive. Thus, included in the parameter set are
the cases of pure capillary water waves and gravity-capillary water waves.
Our choice of coordinates allows for the possibility that the fluid interface is not a graph over the horizontal.
We use a technical reformulation
which converts the traveling wave equations into a system of the form ``identity plus compact." Rabinowitz'
global bifurcation theorem
is applied and the final conclusion is the existence of either a closed loop of solutions, or
an unbounded set of nontrivial traveling wave solutions
which contains waves which may move arbitrarily fast, become arbitrarily long,
form singularities in
the vorticity or curvature, or whose interfaces self-intersect.
\end{abstract}
\maketitle
\section{Introduction}
We consider the case of two two-dimensional
fluids, of infinite vertical extent and periodic in the horizontal direction (of period $M > 0$) and separated by an interface which is free to move. Each fluid has a constant, non-negative density: $\rho_2\ge 0$ in the upper fluid and $\rho_1\ge 0$ in the lower.
Of course, we do not allow both densities to be zero, but if one of the densities is zero, then it is known as the water wave case. The velocity of each fluid satisfies the
incompressible, irrotational Euler equations. The restoring forces in the problem include non-zero surface tension (with surface tension constant $\tau > 0$)
on the interface
and a gravitational body force (with acceleration $g \in {\bf{R}}$, possibly zero)
which acts in the vertical direction.
Since the fluids are irrotational, the interface is
a vortex sheet, meaning that the vorticity in the problem is an amplitude times a Dirac mass supported on the interface.
We call this problem ``the two-dimensional gravity-capillary vortex sheet problem."
The average vortex strength on the interface is denoted by $\overline\gamma$.
In \cite{AAW}, two of the authors and Akers established a new formulation for the traveling wave problem for parameterized
curves, and applied it to the vortex sheet with surface tension (in case the two fluids have the same density).
The curves in \cite{AAW} may have multi-valued height. This is significant
since it is known that there exist traveling waves in the presence of surface tension
which do indeed have multi-valued height; the most famous such waves are the Crapper
waves \cite{Crapper}, and there are other, related waves known \cite{kinnersley},
\cite{AAW2}, \cite{deBoeck2}.
The results of \cite{AAW} were both analytical and
computational; the analytical conclusion was a local bifurcation theorem, demonstrating that there exist traveling vortex sheets with surface tension
nearby to equilibrium. In the present work, we establish a {\it global} bifurcation theorem for the problem with general densities. We now state a somewhat informal version of this theorem:
\begin{theorem} \label{main result}
{\bf (Main Theorem)}
For all choices of the constants $\tau > 0$, $M > 0$, $\overline\gamma\in{\bf{R}}$,
$\rho_1,\rho_2 \ge 0$ (not both zero) and $g \in {\bf{R}}$,
there exist countably many connected sets of smooth\footnote
{Here and below, when we say a function is ``smooth" we mean that its derivatives of all orders exist.}
non-trivial symmetric periodic traveling wave solutions, bifurcating from a quiescent equilibrium,
for the two-dimensional gravity-capillary vortex sheet problem.
If $\bar{\gamma}\neq 0$ or $\rho_{1}\neq\rho_{2},$ then each of these
connected sets has at least one of the following properties:
\begin{enumerate}[(a)]
\item it contains waves whose interfaces have lengths per period which are arbitrarily long;
\item it contains waves whose interfaces have arbitrarily large curvature;
\item it contains waves where the jump of the tangential component of the fluid velocity
across the interface
or its derivative is arbitrarily large;
\item its closure contains a wave whose interface has a point of self intersection;
\item it contains a sequence of waves whose interfaces converge to a flat configuration but whose speeds contain at least two convergent subsequences whose limits differ.
\end{enumerate}
In the case that $\bar{\gamma}=0$ and $\rho_{1}=\rho_{2}$, each connected
set has at least one of the properties (a)--(f), where (f) is the following:
\begin{enumerate}[(f)]
\item it contains waves which have speeds which are arbitrarily large.
\end{enumerate}
\end{theorem}
We mention that in the case of pure gravity waves, it has sometimes been possible to
rule out the possibility of an outcome like (e) above; one such paper, for example, is
\cite{constantinStrauss}. The argument to eliminate such an outcome is typically
a maximum principle argument, and this type of argument appears to be unavailable in the
present setting because of the larger number of derivatives stemming from the presence of
surface tension. In a forthcoming numerical work, computations will be presented which indicate
that in some cases, outcome (e) can in fact occur for gravity-capillary waves \cite{aapwPreprint}.
Following \cite{AAW}, we start from the formulation of the problem introduced by Hou, Lowengrub, and Shelley, which uses geometric dependent variables and a
normalized arclength parameterization of the free surface \cite{HLS1}, \cite{HLS2}.
This formulation follows from the observation that the tangential velocity can be
chosen arbitrarily, while only the normal velocity needs to be chosen in accordance with the physics of the problem. The tangential velocity can then
be selected in a convenient fashion which allows us to
specialize the equations of motion to the periodic traveling wave case in a way that does not require the interface to be a graph over the horizontal coordinate.
The resulting equations are nonlocal, nonlinear and involve the singular Birkhoff-Rott integral. Despite their complicated appearance, using several well-known properties of the Birkhoff-Rott integral
we are able to recast the traveling wave equations in the form of ``identity plus compact." Consequently, we are able to use an abstract version of the Rabinowitz global-bifurcation theory \cite{R} to prove our main result. An interesting feature of our formulation is that, unlike similar formulations that allow for overturning waves
by using a conformal mapping, an extension of the present method to the case of 3D waves,
using for instance ideas like those in \cite{ambroseMasmoudi2},
seems entirely possible.
The main theorem allows for both positive and negative gravity;
equivalently, we could say we allow a heavier fluid above or below a lighter fluid.
As remarked in \cite{AAW2}, this is an effect that relies strongly on the presence
of surface tension. In the case of pure gravity waves, there are some
theorems in the literature demonstrating the nonexistence of traveling waves in
the case of negative gravity \cite{hur-nonlinearity}, \cite{toland-pseudo}.
A similar problem was treated by Amick and Turner \cite{amickTurner1}.
As with the present paper they treat the global bifurcation of interfacial
waves between two fluids. However, they require the non-stagnation
condition that the horizontal velocity of the fluid is less than the
wave speed ($u<c$). Thus their global connected set stops once $u=c$ and
there cannot be any overturning waves. Their paper has some other
less important differences as well, namely it treats solitary waves and the
top and bottom are fixed ($0<y<1$). Their methodology is very different
from ours as well, since they handle the case of a smooth density first
without using the Birkhoff-Rott formulation, and only later let the density
approach a step function. Another paper \cite{amickTurner2} by the same authors
only treats small solutions.
Global bifurcation with $\rho_2\equiv 0$, that is, in the water wave case,
has been studied by a variety of authors. In particular, global
bifurcation that permits overturning waves in the case of constant
vorticity is treated in \cite{walterPreprint}. Another recent paper is \cite{deBoeck},
in which a global bifurcation theorem is proved in the case $\rho_{2}\equiv0$ for capillary-gravity
waves on finite depth, also with constant vorticity.
Both of these works allow for multi-valued waves by means
of a conformal mapping.
Walsh treats global bifurcation for capillary water waves with general non-constant vorticity in \cite{walsh},
with the requirement that the interface be a graph with respect to the horizontal coordinate.
The methodologies of all of these papers are completely different from the present work.
Our reformulation of the traveling wave problem into the form ``identity plus compact'' uses the presence
of surface tension in a fundamental way. In particular, the surface tension enters the problem through the
curvature of the interface, and the curvature involves derivatives of the free surface. By inverting these derivatives,
we gain the requisite compactness. The paper \cite{matioc} uses a similar idea to gain compactness
in order to prove
a global bifurcation theorem for capillary-gravity water waves with constant vorticity and single-valued height.
We mention that the current work finds examples of solutions for interfacial irrotational flow which exist for all time.
The relevant initial value problems are known to be well-posed at short times \cite{A}, but behavior at large times
is in general still an open question. Some works on existence or nonexistence of singularities for these problems
are \cite{splash1}, \cite{splash2}, \cite{splash3}. For small-amplitude, pure capillary water waves, global solutions
are known to exist in general \cite{globalCapillary}.
The plan of the paper is as follows: in Section 2, we describe the equations of motion for the relevant interfacial
fluid flows. In Section 3, we detail our traveling wave formulation which uses the arclength formulation and which
allows for waves with multi-valued height. In Section 4, we explore the consequences of the assumption of spatial
periodicity for our traveling wave formulation. In Section 5, we continue to work with the traveling wave formulation,
now reformulating into an equation of the form ``identity plus compact.'' This sets the stage for Section 6, in which
we state a more detailed version of our main theorem and provide the proof.
\section{The Equations of Motion}\label{eom}
If we make the canonical identification\footnote{Throughout this paper we make this identification for any vector in ${\bf{R}}^2$.} of ${\bf R}^2$ with the complex plane ${\bf C}$, we may represent the free surface at time $t$, denoted by $S (t)$, as the graph (with respect to the parameter $\alpha$) of
$$
z(\alpha,t) = x(\alpha,t) + i y(\alpha,t).
$$
The unit tangent and upward normal vectors to $S $ are, respectively:
\begin{equation}\label{tangent def}
T = {z_\alpha\over |z_\alpha|} \text{ and } N = i {z_\alpha\over |z_\alpha|}.
\end{equation}
(A derivative with respect to $\alpha$ is denoted either as a subscript or as $\partial_\alpha$.)
Thus we have uniquely defined real valued functions $U(\alpha,t)$ and $V(\alpha,t)$ such that
\begin{equation}\label{z eqn}
z_t = U N + V T
\end{equation}
for all $\alpha$ and $t$. We call $U$ the normal velocity of the interface and $V$ the tangential velocity.
The normal velocity $U$ is determined from fluid mechanical considerations and is given by:
\begin{equation}\label{this is U}
U= {\bf{R}}e(W^* N)
\end{equation}
where
\begin{equation}\label{B}
W^*(\alpha,t):=\frac{1}{2 \pi i }\mathrm{PV}\int_{{\bf{R}}} {\gamma(\alpha',t) \over z(\alpha,t) -z(\alpha',t)} d \alpha'
\end{equation}
is commonly referred to as the Birkhoff-Rott integral. (We use ``$*$" to denote complex conjugation.)
The real-valued quantity $\gamma$ is called in \cite{HLS1} ``the unnormalized vortex sheet-strength," though in this document we
will primarily refer to it as simply the ``vortex sheet-strength."
It
can be used to recover the Eulerian fluid velocity (denoted by $u$) in the bulk at time $t$ and position $w \notin S (t)$ via
\begin{equation}\label{velocity}
u(w,t):=\left[\frac{1}{2 \pi i }\int_{{\bf{R}}} {\gamma(\alpha',t) \over w -z(\alpha',t)} d \alpha' \right]^*.
\end{equation}
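For a point $w$ off the sheet the integrand in \eqref{velocity} is nonsingular, so a plain quadrature suffices. As a sanity check, for a flat sheet $z(\alpha) = \alpha$ with constant strength $\gamma \equiv 1$, the integral equals $-i\pi$, so \eqref{velocity} gives $u = -1/2$ at $w = i$. The following sketch (our illustration, using a symmetric truncation of the real line; not from the paper) reproduces this value numerically:

```python
import numpy as np

def bulk_velocity(w, z, gamma, dalpha):
    """Riemann-sum quadrature of u(w) = conj((1/(2*pi*i)) *
    int gamma(a)/(w - z(a)) da) at a point w off the sheet (no PV needed).
    z and gamma are samples at uniformly spaced parameter values."""
    integral = np.sum(gamma / (w - z)) * dalpha
    return np.conj(integral / (2j * np.pi))

# Flat sheet z(alpha) = alpha with gamma = 1, truncated to [-500, 500]:
alpha = np.linspace(-500.0, 500.0, 200001)
u = bulk_velocity(1j, alpha.astype(complex), np.ones_like(alpha),
                  alpha[1] - alpha[0])
# u is approximately -0.5, matching the analytic value above the sheet
```

The symmetric truncation works because the real part of the integrand is odd in $\alpha$ for this configuration, so only the absolutely convergent imaginary part contributes.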
The quantity $\gamma$ is also related to the jump in the tangential velocity of the fluid.
Specifically, using the Plemelj formulas, one finds that:
$$
[[ u ]] := \lim_{ w \to z(\alpha,t)^+} u(w,t) - \lim_{ w \to z(\alpha,t)^-} u(w,t) = { \gamma(\alpha,t)\over z^*_\alpha(\alpha,t)}.
$$
In the above, the ``$+$" and ``$-$" modifying $z(\alpha,t)$ mean that the limit is taken from ``above" or ``below" $S (t)$, respectively.
If we let $j(\alpha,t):={\bf{R}}e( [[u]]^* T)$ be the component of $[[u]]$ which is tangent to $S (t)$ at $z(\alpha,t)$, then
the preceding formula shows:
\begin{equation}\label{jump}
\gamma(\alpha,t) = j(\alpha,t)\ |z_\alpha(\alpha,t)|,
\end{equation}
which is to say that $\gamma(\alpha,t)$ is a scaled version of the {\it jump in the tangential velocity} of the fluid
across the interface.
As shown in \cite{A}, $\gamma$ evolves according to the equation
\begin{multline}\label{gamma eqn v1}
\gamma_{t}=\tau\frac{\theta_{\alpha\alpha}}{|z_\alpha|}+\frac{((V-
{\bf{R}}e( W^* T)
)\gamma)_{\alpha}}{|z_\alpha|}\\
-2A\left(\frac{
{\bf{R}}e(W_t^* T)
}{|z_\alpha|}+\frac{1}{8}\frac{(\gamma^{2})_{\alpha}}{|z_\alpha|^{2}}
+gy_{\alpha}-(V-
{\bf{R}}e(W^* T)
)
{\bf{R}}e(W^*_\alpha T)
\right).
\end{multline}
Here $A$ is the {\it Atwood number},
$$A:=\frac{\rho_{1}-\rho_{2}}{\rho_{1}+\rho_{2}}.$$
Note that $A$ can be taken as any value in the interval $[-1,1].$
Lastly, $\theta(\alpha,t)$ is the {\it tangent angle} to $S (t)$ at the point $z(\alpha,t)$. Specifically it is defined by the relation
$$
z_\alpha = |z_\alpha| e^{i \theta}.
$$
Observe that we have the following nice representations of the tangent and normal vectors in terms of $\theta$:
\begin{equation}\label{nice T}
T={e^{i \theta}} \text{ and } N={ie^{i \theta}}.
\end{equation}
As observed above, the tangential velocity $V$ has no impact on the geometry of $S (t)$. As such, we are free to make $V$ anything we wish.
In this way, one sees that equations \eqref{z eqn} and \eqref{gamma eqn v1} form a closed dynamical system.
In \cite{HLS1}, the authors make use of the flexibility in the choice of $V$ to design an efficient and non-stiff numerical method for the solution of the dynamical system. In the article \cite{A},
$V$ is selected in a way which is helpful in making {\it a priori} energy estimates, and in completing
a proof of local-in-time well-posedness of the initial value problem.
We leave $V$ arbitrary for now.
\section{Traveling waves}
We are interested in finding traveling wave solutions, which is to say solutions where both the interface and Eulerian fluid velocity
propagate horizontally with no change in form and at constant speed.
To be precise:
\begin{definition} \label{TW def} We say $(z(\alpha,t),\gamma(\alpha,t))$ is a traveling wave solution of \eqref{z eqn} and \eqref{gamma eqn v1}
if there exists $c \in {\bf{R}}$ such that for all $t \in {\bf{R}}$ we have
\begin{equation}\label{interface translates}
S (t) = S (0) + ct
\end{equation}
and, for all $w \notin S (t)$,
\begin{equation}\label{velocity translates}
u(w,t) = u(w-ct,0)
\end{equation}
where $u$ is determined from $(z(\alpha,t),\gamma(\alpha,t))$ by way of \eqref{velocity}.
\end{definition}
Later on the speed $c$ will serve as our bifurcation parameter.
We have the following results concerning traveling wave solutions of \eqref{z eqn}~and~\eqref{gamma eqn v1}.
\begin{proposition}\label{TW eqn lemma}{\bf (Traveling wave ansatz)}
(i) Suppose that $(z(\alpha,t),\gamma(\alpha,t))$ solves \eqref{z eqn} and \eqref{gamma eqn v1}
and, moreover, there exists $c \in {\bf{R}}$ such that
\begin{equation}\label{TW v1}
{z}_t = c \quad \text{and} \quad
{\gamma}_t =0
\end{equation}
holds for all $\alpha$ and $t$. Then $(z(\alpha,t),\gamma(\alpha,t))$ is a traveling wave solution with speed~$c$.
(ii) If $(\check{z},\check{\gamma})$ is a traveling wave solution with speed~$c$ of \eqref{z eqn} and \eqref{gamma eqn v1}
then there exists a reparameterization of $S (t)$ which maps $(\check{z},\check{\gamma}) \mapsto (z,\gamma)$ where
$(z,\gamma)$
satisfies~\eqref{TW v1}.
\end{proposition}
\begin{proof}
First we prove (i).
Since $z_t = c$, we have $z(\alpha,t) = z(\alpha,0) + c t$ which immediately gives \eqref{interface translates}.
Then, since $\gamma_t = 0$ we have $\gamma(\alpha,t) = \gamma(\alpha,0)$ and thus
\begin{equation}
u^*(w,t)=\frac{1}{2 \pi i }\int_{{\bf{R}}} {\gamma(\alpha',t) \over w -z(\alpha',t)} d \alpha'=
\frac{1}{2 \pi i }\int_{{\bf{R}}} {\gamma(\alpha',0) \over w -(z(\alpha',0) +ct)} d \alpha' = u^*(w-ct,0).
\end{equation}
And so we have \eqref{velocity translates}.
Now we prove (ii).
Suppose $(\check{z}(\beta,t),\check{\gamma}(\beta,t))$ gives a traveling wave solution. The reparameterization which
yields \eqref{TW v1} can be written explicitly. Specifically, condition \eqref{interface translates}
implies that $z(\alpha,t) := \check{z}(\alpha,0) + ct$ is a parameterization of $S (t)$. Clearly $z_t = c$,
and we have the first equation in \eqref{TW v1}.
Now let $\gamma(\alpha,t)$ be the corresponding vortex sheet-strength for the parameterization of $S (t)$
given by $z(\alpha,t)$. Since we have a traveling wave, we have \eqref{velocity translates}. Define
$$
m(w,t) := {1 \over 2 \pi i} \int_{\bf{R}} {\gamma(\alpha',t) -\gamma(\alpha',0)\over w-ct -\check{z}(\alpha',0)} d \alpha'. $$
Then for $w \notin S (t)$ we have
\begin{equation}\begin{split}
m(w,t) &
= {1 \over 2 \pi i} \int_{\bf{R}} {\gamma(\alpha',t)\over w-(\check{z}(\alpha',0)+ct)} d \alpha' - {1 \over 2 \pi i} \int_{\bf{R}} {\gamma(\alpha',0)\over (w-ct) -\check{z}(\alpha',0)} d \alpha' \\
& = u(w,t) - u(w-ct,0) = 0.
\end{split}
\end{equation}
However, for a point $w_0=\check{z}(\alpha,0) + ct \in S (t)$,
the Plemelj formulas state that
$$
\lim_{w \to w_0^{\pm}} m(w) = \textrm{PV} {1 \over 2 \pi i} \int_{\bf{R}} {\gamma(\alpha',t) -\gamma(\alpha',0)\over\check{z}(\alpha,0) -\check{z}(\alpha',0)} d \alpha' \pm {1 \over 2} {
\gamma(\alpha,t)-\gamma(\alpha,0) \over \check{z}_\alpha(\alpha,0)}
$$
where the ``$+$" and ``$-$" signs modifying $w_0$ in the limit indicate that the limit is taken from ``above" or ``below" $S (t)$, respectively. But, of course, $m$ is identically zero so that
$$
{1 \over 2} \left(
\gamma(\alpha,t)-\gamma(\alpha,0)\right)= \mp \check{z}_\alpha(\alpha,0)
\textrm{PV} {1 \over 2 \pi i}
\int_{\bf{R}} {\gamma(\alpha',t) -\gamma(\alpha',0)\over\check{z}(\alpha,0) -\check{z}(\alpha',0)} d \alpha'.
$$
Since this identity holds with both choices of sign, both sides must vanish; in particular $\gamma(\alpha,t) = \gamma(\alpha,0)$. Since this is true for any $t$ and any $\alpha$, we see that $\gamma_t = 0$, the second equation in \eqref{TW v1}.
\end{proof}
\begin{remark} We additionally assume that $S (t)$ is parameterized to be proportional to arclength, {\it i.e.}
\begin{equation}\label{arclength param}
|z_\alpha| = \sigma= \text{constant} > 0
\end{equation}
for all $(\alpha,t)$. One may worry that the enforcement of the parameterization such that $z_t = c$ in \eqref{TW v1} is at odds with this sort of
arclength parameterization. However, notice that $z_t = c$ implies that $z_{\alpha t} = 0$ which in turn implies that $z_\alpha$ (and thus $|z_\alpha|$) does not depend on time.
Then the reparameterization of $S (t)$ given by $\widetilde{z}(\beta(\alpha),t) = z(\alpha,t)$, where $d \beta/d \alpha = |z_\alpha|/\sigma$, has $|\widetilde{z}_\beta| = \sigma$.
Thus it is merely a convenience to assume \eqref{arclength param}.
We will select a convenient choice for $\sigma$ later.
Arguments parallel to the above show that $z_t = c$ implies that $\theta_t = 0$ and thus we will view $\theta$ as being a function of $\alpha$ only.
\end{remark}
Now we insert the ansatz \eqref{TW v1} and the arclength parameterization \eqref{arclength param} into the equations of motion \eqref{z eqn} and \eqref{gamma eqn v1}. First, as observed in \cite{AAW},
elementary trigonometry shows that $z_t = c$ and \eqref{z eqn} are equivalent to
\begin{equation}\label{U eqn}
U =- c \sin\theta
\end{equation}
and
\begin{equation}\label{V eqn}
V=c \cos\theta .
\end{equation}
Notice this last equation selects $V$ in terms of the tangent angle $\theta$. That is to say \eqref{V eqn} should be viewed as the definition of $V$. On the other hand \eqref{U eqn} should be viewed as one of the equations we wish to solve. Using \eqref{this is U}, we rewrite it as
\begin{equation}\label{wn eqn}{\bf{R}}e(W^*N) + c \sin\theta = 0.
\end{equation}
The above considerations transform \eqref{gamma eqn v1} to:
\begin{multline}\label{gamma eqn v2}
0=\tau\frac{\theta_{\alpha\alpha}}{\sigma}+\frac{\{(c\cos\theta-
{\bf{R}}e( W^* T))\gamma\}_{\alpha}}{\sigma} \\
-2A\left(\frac{1}{8}\frac{(\gamma^{2})_{\alpha}}{\sigma^{2}}
+g\sin\theta -(c\cos\theta -
{\bf{R}}e(W^* T)
)
{\bf{R}}e(W^*_\alpha T)
\right).
\end{multline}
The last part of this expression may be rewritten as follows.
Observe that
\begin{equation} \label{this}
-{1 \over 2}\partial_\alpha \{ (c \cos\theta -{\bf{R}}e(W^* T))^2 \}
= (c \cos\theta - {\bf{R}}e(W^*T))\left(c \sin\theta \theta_\alpha + {\bf{R}}e(W^* T_\alpha)+{\bf{R}}e(W^*_\alpha T) \right).
\end{equation}
Using \eqref{nice T}, we see that $T_\alpha = N \theta_\alpha$.
Thus since $\theta$ is real valued and by virtue of \eqref{wn eqn}, we have
$$c \sin\theta \theta_\alpha + {\bf{R}}e(W^* T_\alpha) = (c\sin\theta + {\bf{R}}e(W^*N))\theta_\alpha = 0.$$
So \eqref{this} simplifies to
$$
-{1 \over 2}\partial_\alpha (c \cos\theta -{\bf{R}}e(W^* T))^2= (c \cos\theta - {\bf{R}}e(W^*T)){\bf{R}}e(W^*_\alpha T) .
$$
Hence
\begin{multline} \label{gamma**}
0=\tau\frac{\theta_{\alpha\alpha}}{\sigma}+\frac{\{(c\cos\theta -
{\bf{R}}e( W^* T))\gamma\}_{\alpha}}{\sigma}\\
-2A\left(\frac{1}{8}\frac{(\gamma^{2})_{\alpha}}{\sigma^{2}}+
g \sin\theta +{1 \over 2}\partial_\alpha (c \cos\theta -{\bf{R}}e(W^* T))^2
\right),
\end{multline}
which we rewrite as
\begin{multline}
-\theta_{\alpha \alpha} = \Phi(\theta,\gamma;c, \sigma)
:= {1 \over \tau}{(\partial_\alpha\{(c\cos\theta - {\bf{R}}e( W^* T))\gamma\})}\\
- {A\over \tau}\left(\frac{1}{4\sigma}{\partial_\alpha(\gamma^{2})}
+ {2 g\sigma }\sin\theta +{\sigma}\partial_\alpha \{(c \cos\theta -{\bf{R}}e(W^* T))^2\} \right).
\end{multline}
Note that we have not specified $z$ as one of the dependencies of $\Phi$.
This may seem unusual, given the prominent role of $z$
in computing the Birkhoff-Rott integral $W^*$. However,
given $\sigma$ in \eqref{arclength param} one can determine $z(\alpha,t)$ solely from
the tangent angle $\theta(\alpha)$, at least up to a rigid translation.
Specifically, and without loss of generality, we have
\begin{equation}\label{reconstruct}
z(\alpha,0) = z(\alpha,t)-ct = \sigma \int_0^{\alpha} e^{i \theta(\alpha')} d\alpha'.
\end{equation}
In this way, we view $W^*$ as being a function of $\theta$, $\gamma$ and $\sigma$.
In short, we have shown the following:
\begin{lemma}\label{TWE}{\bf (Traveling wave equations, general version)}
Given functions $\theta(\alpha,t)$ and $\gamma(\alpha,t)$ and constants $c \in{\bf{R}}$ and $\sigma > 0$, compute $z(\alpha,t)$
from \eqref{reconstruct}, $W^*$ from \eqref{B} and $N$ and $T$ from \eqref{nice T}. If
\begin{equation} \label{TW v3}
{\bf{R}}e(W^* N) + c \sin\theta = 0 \quad \text{and} \quad \theta_{\alpha \alpha} + \Phi(\theta,\gamma;c,\sigma) = 0
\end{equation}
holds, then $(z(\alpha,t),\gamma(\alpha,t))$ is a traveling wave solution with speed $c$ for \eqref{z eqn} and \eqref{gamma eqn v1}.
\end{lemma}
It happens that under the assumption that the traveling waves are spatially periodic, \eqref{TW v3}
can be reformulated as ``identity plus compact," which in turn will allow us to employ powerful abstract global bifurcation results.
The next section addresses spatial periodicity.
\section{Spatial periodicity}
To be precise, by spatial periodicity we mean the following:
\begin{definition}
Suppose that $(z(\alpha,t),\gamma(\alpha,t))$ is a solution of \eqref{z eqn} and \eqref{gamma eqn v1} such that
$$
S (t) = S (t) + M
$$
and
$$
u(w + M,t) = u(w,t)
$$
for all $t$ and $w \notin S (t)$. Then the solution is said to be (horizontally) spatially periodic with period~$M$.
\end{definition}
It is clear that if one has a spatially periodic curve $S (t)$ then it can be parameterized in such a way that the
parameterization is $2\pi$-periodic in its dependence on the parameter. That is to say,
the curve can be parameterized such that
\begin{equation}
\label{periodicity}
z(\alpha+2 \pi,t) = z(\alpha,t) +M.
\end{equation}
It is here that we encounter a sticky issue. As described in Lemma \ref{TWE}, our goal is to find $\theta$ and $\gamma$ such that \eqref{TW v3} holds and additionally \eqref{periodicity} holds.
The issue is that, given a function $\theta(\alpha)$ which is $2\pi$-periodic with respect to $\alpha$,
it may not be the case that the curve $z$ reconstructed from it via \eqref{reconstruct} satisfies \eqref{periodicity}.
In fact, due to \eqref{arclength param},
the periodicity \eqref{periodicity} is valid if and only if
\begin{equation}
\label{reconstruct condition}
2\pi \overline{\cos\theta} :=
\int_0^{2\pi} \cos(\theta(\alpha')) d\alpha' ={M \over \sigma} \quad \text{and} \quad
2\pi \overline{\sin\theta} := \int_0^{2\pi} \sin(\theta(\alpha')) d\alpha' = 0.
\end{equation}
We could impose \eqref{reconstruct condition} on $\theta$.
However, we follow another strategy which leaves $\theta$ free
by modifying \eqref{TW v3} so that \eqref{reconstruct condition} holds.
Indeed, we first fix the spatial period $M>0$.
Suppose we are given a real $2\pi$-periodic function $\theta(\alpha)$ for which
\begin{equation}\label{not vertical}
\overline{\cos\theta} \ne 0
\end{equation}
so that the period $M$ of the curve will not vanish.
Then we define the ``renormalized curve" as
\begin{equation}\label{this is z}
\widetilde{Z} [\theta](\alpha) =
\frac{M}{2\pi \overline{\cos\theta}} \left\{
\int_0^\alpha e^{i\theta(\beta)} d\beta - i\alpha\ \overline{\sin\theta} \right\}.
\end{equation}
Of course, this function is one derivative smoother than $\theta$.
A direct calculation shows that
\begin{equation}\label{periodicity 2}
\widetilde{Z}[\theta](\alpha+2\pi) = \widetilde{Z}[\theta](\alpha) + M.
\end{equation}
Thus $w=\widetilde{Z}[\theta]$ is the parameterization of a curve which satisfies
\begin{equation}\label{per 2}
w(\alpha+2\pi) = w(\alpha) + M \text{ for all $\alpha$ in ${\bf{R}}$}.
\end{equation}
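This translation property can be checked numerically. The following sketch is ours, not part of the paper; the tangent angle $\theta$ and the period $M$ below are arbitrary sample choices. It builds $\widetilde{Z}[\theta]$ by trapezoidal quadrature over two periods of the parameter and confirms that shifting $\alpha$ by $2\pi$ translates the curve by exactly $M$.

```python
import numpy as np

# Sketch (not from the paper): verify Z[theta](a + 2*pi) = Z[theta](a) + M
# for a sample tangent angle theta, with Z built by trapezoidal quadrature.
M = 3.0                                              # arbitrary sample period
n = 2048
a = np.linspace(0.0, 4.0 * np.pi, 2 * n + 1)         # two periods of the parameter
h = a[1] - a[0]
theta = 0.4 * np.sin(a) + 0.1 * np.sin(2.0 * a)      # arbitrary 2*pi-periodic sample

e = np.exp(1j * theta)
C = np.concatenate(([0.0], np.cumsum((e[1:] + e[:-1]) * h / 2)))  # int_0^a e^{i theta}
cbar = C[n].real / (2.0 * np.pi)                     # mean of cos(theta) over a period
sbar = C[n].imag / (2.0 * np.pi)                     # mean of sin(theta) over a period

Z = M / (2.0 * np.pi * cbar) * (C - 1j * a * sbar)   # the renormalized curve
shift = Z[n:] - Z[:n + 1]                            # Z(a + 2*pi) - Z(a) on [0, 2*pi]
```

Up to quadrature error the shift is the constant $M$, independent of $\alpha$, since the subtraction of $i\alpha\,\overline{\sin\theta}$ cancels the vertical drift of $\int_0^\alpha e^{i\theta}$.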
Now $\displaystyle \partial_\alpha\widetilde Z[\theta] =
\frac{M}{2\pi \overline{\cos\theta}} (\exp(i\theta(\alpha)) -i\overline{\sin\theta})$, and
the tangent and normal vectors for $\widetilde{Z}[\theta]$ are given by:
\begin{equation}\label{tangent def 2}
\widetilde{T} [\theta]:= {\partial_\alpha \widetilde{Z}[\theta] / \vert \partial_\alpha \widetilde{Z}[\theta] \vert}
\quad \text{ and } \quad
\widetilde{N} [\theta]:= {i \partial_\alpha \widetilde{Z}[\theta] / \vert \partial_\alpha \widetilde{Z}[\theta] \vert}.
\end{equation}
These expressions are not equal to $e^{i\theta}$ and $i e^{i \theta}$, as was the case for $T$ and $N$ in \eqref{nice T}.
For a given real function $\gamma(\alpha)$ and parametrized curve $w(\alpha)$,
define the {\it Birkhoff-Rott integral}
\begin{equation*}\label{B2}
B[w]\gamma(\alpha):=\frac{1}{2 \pi i }\mathrm{PV}\int_{{\bf{R}}} {\gamma(\alpha') \over w(\alpha) -w(\alpha')} d \alpha' .
\end{equation*}
Thus $W^*(\cdot,t) = B[z(\cdot,t)]\gamma(\cdot,t)$.
If $w$ satisfies \eqref{per 2},
we can rewrite this integral as
\begin{equation}\label{B periodic 1}
B[{w}]\gamma(\alpha)=\frac{1}{2 i M}\mathrm{PV}\int_{0}^{2 \pi}
\gamma(\alpha') \cot\left({\pi \over M} (w(\alpha) -w(\alpha')) \right) d \alpha'
\end{equation}
by means of Mittag-Leffler's famous series expansion for the cotangent (see, e.g., Chapter 3 of
\cite{ablowitzFokas}).
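The reduction from an integral over ${\bf{R}}$ to an integral over one period rests on the expansion $\sum_{k\in{\bf Z}}(x-kM)^{-1} = (\pi/M)\cot(\pi x/M)$, understood via symmetric partial sums. The short numerical sketch below is our illustration (the test point $x$ and period $M$ are arbitrary choices) and checks this identity directly.

```python
import numpy as np

# Check the Mittag-Leffler cotangent expansion behind the periodized kernel:
#   sum_{k in Z} 1/(x - k*M) = (pi/M) * cot(pi*x/M),  symmetric partial sums.
M = 3.0                              # arbitrary period
x = 0.7 + 0.4j                       # arbitrary point off the lattice {k*M}

k = np.arange(-10**6, 10**6 + 1)     # symmetric truncation of the sum
partial_sum = np.sum(1.0 / (x - k * M))
cot_value = (np.pi / M) * np.cos(np.pi * x / M) / np.sin(np.pi * x / M)

err = abs(partial_sum - cot_value)   # paired terms decay like 1/k^2
```

Because the terms at $\pm k$ pair to $2x/(x^2-k^2M^2)$, the truncated sum converges at rate $O(1/N)$ in the cutoff $N$, which is ample for a sanity check.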
Finally,
for any real $2 \pi$-periodic functions $\theta$ and $\gamma$ and any constant $c \in {\bf{R}}$, define
\begin{multline}
\widetilde{\Phi}(\theta,\gamma;c)
:={1 \over \tau} \partial_\alpha \{(c\cos\theta -
{\bf{R}}e( B[ \widetilde{Z}[\theta] ]\gamma\ \widetilde{T}[\theta]))\gamma \} \\
-{A\over \tau}\left(\frac{\pi \overline{\cos\theta}}{2M} \partial_\alpha {(\gamma^{2})}
+ \frac {gM} {\pi \overline{\cos\theta} }
\left( \sin\theta - \overline{\sin\theta} \right)
+ \frac {M} {2\pi \overline{\cos\theta}} \partial_\alpha
\{ (c \cos\theta
-{\bf{R}}e(B[ \widetilde{Z}[\theta] ]\gamma\, \widetilde{T}[\theta])) ^2\}
\right).
\end{multline}
In terms of
these definitions the basic equations are rewritten as follows:
\begin{proposition}\label{TWE2}
{\bf (Traveling wave equations, spatially periodic version)}
If the
$2\pi$-periodic functions $\theta(\alpha)$, $\gamma(\alpha)$ and the constant $c \ne 0$
satisfy \eqref{not vertical} and
\begin{equation} \label{TW v4}
{\bf{R}}e(B[ \widetilde{Z}[\theta] ]\gamma\, \widetilde{N}[\theta]) + c \sin\theta = 0 \quad \text{and} \quad \theta_{\alpha \alpha} + \widetilde{\Phi}(\theta,\gamma;c) = 0
\end{equation}
then
$(\widetilde{Z}[\theta](\alpha)+ct,\gamma(\alpha,t))$ is a spatially periodic traveling wave solution with speed $c$ and period $M$ for \eqref{z eqn} and \eqref{gamma eqn v1}.
\end{proposition}
\begin{proof}
Putting $w = \widetilde{Z}[\theta]$, from the definitions above we have
$\widetilde{N}[{\theta}]=iw_\alpha/|w_\alpha |$.
Thus by Lemma \ref{incompressibility} below,
$$\displaystyle
\int_0^{2 \pi} {\bf{R}}e(B[ \widetilde{Z}[\theta] ]\gamma \ \widetilde{N}[\theta])\ d\alpha = 0.$$
Together with the first equation in \eqref{TW v4} and the fact $c \ne 0$, this gives
\begin{equation}\label{win}
\overline{\sin \theta} = 0.\end{equation}
Now we let $\sigma = M/(2\pi\overline{\cos\theta})$ and compute $z(\alpha,t)$
from \eqref{reconstruct}, $W^*$ from \eqref{B}, and $N$ and $T$ from \eqref{nice T}.
By \eqref{reconstruct} and \eqref{this is z}
we see that
$z(\alpha,t) = \widetilde{Z}[\theta](\alpha)+ct$. This in turn gives
$W^*=B[ \widetilde{Z}[\theta] ]\gamma$, $N=\widetilde{N}[\theta]$ and $T=\widetilde{T}[\theta]$.
Together with the fact that $\overline{\sin\theta} = 0$, this shows that
$$
\widetilde{\Phi}(\theta,\gamma;c)=\Phi(\theta,\gamma;c,\sigma).
$$
Thus both equations in \eqref{TW v4} coincide exactly with their counterparts in \eqref{TW v3}. Lemma \ref{TWE} then shows that $z(\alpha,t)$ is a traveling
wave with speed $c\in{\bf{R}}$. We know that $z(\alpha,t)$ is $M$-periodic since it was constructed from $\theta$ with $\widetilde{Z}$.
\end{proof}
\begin{lemma} \label{incompressibility}
If $w(\cdot)$ satisfies \eqref{per 2}
and $\gamma(\cdot)$ is a $2\pi$-periodic function, then
$$
\int_0^{2 \pi} {\bf{R}}e\left( B[w]\gamma(\alpha) {iw_\alpha(\alpha) \over | w_\alpha (\alpha)| } \right) d\alpha = 0.
$$
\end{lemma}
\begin{proof} This lemma says that the mean value of the normal component of $B[w](\gamma)$ is equal to zero. This follows from the fact that
$B[w](\gamma)$ extends to a divergence-free field in the interior of the fluid region, and from the Divergence Theorem.
\end{proof}
\section{Reformulation as ``identity plus compact" }
\subsection{Mapping properties}
Let
${\mathcal{H}}^s_\text{per}:={\mathcal{H}}^s_{\text{per}}[0,2\pi]$ be the usual Sobolev space of $2\pi$-periodic functions from ${\bf{R}}$ to ${\bf{C}}$ whose first $s \in {\bf N}$ weak derivatives are square integrable.
Likewise for intervals $I \subset {\bf{R}}$, let
${\mathcal{H}}^s(I)$ be the usual Sobolev space of functions from $I$ to ${\bf{C}}$ whose first $s \in {\bf N}$ weak derivatives are square integrable. Finally,
${\mathcal{H}}^s_\text{loc}$ is the set of all functions from ${\bf{R}}$ to ${\bf{C}}$ which are in ${\mathcal{H}}^s(I)$ for all bounded intervals $I \subset {\bf{R}}$.
By \eqref{periodicity 2}, $\widetilde{Z}[\theta](\alpha)-M \alpha/2\pi$ is periodic.
Let
$$
{\mathcal{H}}^s_M:=\left\{ w \in {\mathcal{H}}^s_\text{loc}: w(\alpha) - M \alpha/2 \pi \in {\mathcal{H}}^s_\text{per} \right\}.
$$
Clearly ${\mathcal{H}}^s_M$ is a complete metric space with the metric of ${\mathcal{H}}^s_\text{per}$.
We have the following lemma concerning the renormalized curve $\widetilde{Z}[\theta]$:
\begin{lemma}\label{Z props}
For $s \ge 1$ and $h\ge0$ let
$$
{\mathcal{U}}^s_h:=\left\{ \theta \in {\mathcal{H}}^s_\text{per} : \int_0^{2\pi} \cos(\theta(\alpha)) d\alpha > h \right\}.
$$
Then the map $\widetilde{Z}[\theta]$ defined in \eqref{this is z} is smooth from ${\mathcal{U}}^s_h$ into ${\mathcal{H}}^{s+1}_M$
and
the maps $\widetilde{N}[\theta]$ and $\widetilde{T}[\theta]$ given in \eqref{tangent def 2} are
smooth from ${\mathcal{U}}^s_h$ into ${\mathcal{H}}^s_\text{per}$.
Moreover, for any $h > 0$, there exists $C>0$ such that
\begin{equation}\label{Z estimate}
\| \widetilde{T} [\theta] \|_{{\mathcal{H}}^s_\text{per}} + \| \widetilde{N} [\theta] \|_{{\mathcal{H}}^s_\text{per}}
+ \| \widetilde{Z}[\theta] \|_{{\mathcal{H}}^{s+1}(0,2\pi)} +
\left \| {1 \over \partial_\alpha \widetilde{Z}[\theta]} \right \|_{{\mathcal{H}}^{s}_{\text{per}}}
\le C(1+ \| \theta \|_{{\mathcal{H}}^s_\text{per}})
\end{equation}
for all $\theta \in {\mathcal{U}}^s_h$.
\end{lemma}
\begin{proof}
As already mentioned, $\widetilde{Z}[\theta]$ is one derivative smoother than $\theta$.
A series of naive estimates leads to the bound on $ \widetilde{Z}[\theta] $ in \eqref{Z estimate}. Next, since $\theta$ belongs to ${\mathcal{U}}^s_h$,
$$
\overline{\sin\theta} + {1 \over 100 \pi^2} h^2
\le {1 \over 2\pi} \left[ \int_0^{2\pi}\sin(\theta(a))da + {1 \over 50\pi}\left(\int_0^{2 \pi} \cos(\theta(a))da \right)^2 \right].
$$
The Cauchy-Schwarz inequality on the cosine term leads to
\begin{equation*}
\overline{\sin\theta}+ {1 \over 100 \pi^2} h^2
\le {1 \over 2\pi} \int_0^{2\pi}\left( \sin(\theta(a)) + {1 \over 25} \cos^2(\theta(a)) \right) da \le 1
\end{equation*}
since $\displaystyle \sin(x) + (1 / 25) \cos^2(x) \le 1$ for all $x \in {\bf{R}}$.
Thus
$$
\overline{\sin\theta} \le 1 - {1 \over 100 \pi^2} h^2.
$$
For $h >0$, this implies that $ e^{i\theta} - i\, \overline{\sin\theta}$ cannot vanish (the same argument applied to $-\sin\theta$ gives the matching lower bound $\overline{\sin\theta} \ge -1 + {1 \over 100 \pi^2} h^2$, so that $|\overline{\sin\theta}| < 1$).
Hence $1/\partial_\alpha \widetilde{Z}[\theta] \in {\mathcal{H}}^s_{\text{per}}$ and the remaining bounds in \eqref{Z estimate}
follow by routine estimates.
The smooth dependence of $\widetilde{Z}$, $\widetilde{N}$ and $\widetilde{T}$ on $\theta$
is a consequence of standard results on compositions.
\end{proof}
The most singular part of the Birkhoff-Rott operator $B$ is essentially the periodic Hilbert transform $H$,
which is defined as
$$
H \gamma(\alpha):= {1 \over 2 \pi } \mathrm{PV} \int_0^{2 \pi} \gamma(\alpha') \cot\left( {1 \over 2} (\alpha -\alpha') \right) d \alpha'. $$
It is well-known that
for any $s \ge 0$, $H$ is a bounded linear map from ${\mathcal{H}}^s_\text{per}$ to ${\mathcal{H}}^s_{{\text{per}},0}$
(the subscript $0$ here indicates that the average over a period vanishes).
Moreover,
$H$ annihilates the constant functions and
$H^2 \gamma = -\gamma + \overline\gamma$, where $\displaystyle \overline{\gamma}:={1 \over 2\pi} \int_0^{2\pi} \gamma(a) da. $
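Both facts are transparent on the Fourier side, where $H$ multiplies the mode $e^{ik\alpha}$ by $-i\,\mathrm{sign}(k)$. The sketch below is our illustration (computing $H$ by FFT rather than by the principal-value integral) and checks the two properties on a sample function.

```python
import numpy as np

# The periodic Hilbert transform via its Fourier multiplier -i*sign(k).
def periodic_hilbert(g):
    ghat = np.fft.fft(g)
    k = np.fft.fftfreq(len(g), d=1.0 / len(g))   # integer wavenumbers
    return np.real(np.fft.ifft(-1j * np.sign(k) * ghat))

n = 256
a = 2.0 * np.pi * np.arange(n) / n
g = 1.5 + np.cos(3.0 * a) - 2.0 * np.sin(a)      # sample real 2*pi-periodic function

Hg = periodic_hilbert(g)                          # the constant 1.5 is annihilated
HHg = periodic_hilbert(Hg)                        # should equal -g + mean(g)
```

For this sample one finds $Hg = \sin 3\alpha + 2\cos\alpha$ and $H^2g = -g + \overline{g}$, matching the stated properties.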
In order to conclude that the leading singularity of the function $B[w]\gamma$ is given in terms of $H\gamma$,
we require a ``chord-arc" condition, as stated in the following lemma.
\begin{lemma}
\label{hilbert plus smooth}
For $b \geq 0$ and $s \ge 2$, let the ``chord-arc space" be
$$
\mathcal{C}_{b}^s:=\left\{ w(\alpha) \in {\mathcal{H}}^s_M :
\inf_{\alpha,\alpha' \in [0,2\pi]} \left \vert {w(\alpha')-w(\alpha) \over \alpha' - \alpha} \right \vert > b \right\}
$$
and the remainder operator $K$ be
$$
{K}[w] \gamma(\alpha):=
B[w]\gamma(\alpha) - {1 \over 2 i w_\alpha(\alpha)} H \gamma(\alpha).$$
Then
$(w,\gamma) \mapsto {K}[w]\gamma$ is a smooth map from
$\mathcal{C}^s_b \times {\mathcal{H}}^1_{\text{per}} \to {\mathcal{H}}^{s-1}_{\text{per}}.$ If $b>0,$
then there exists a constant $C>0$ such that for all $w\in\mathcal{C}^{s}_{b}$ and for all
$\gamma\in{\mathcal{H}}^{1}_{\text{per}}$,
$$
\| {K}[w] \gamma\|_{{\mathcal{H}}^{s-1}_\text{per}} \le C \| \gamma \|_{{\mathcal{H}}^1_\text{per}} \exp \left\{ C \| w \|_{{\mathcal{H}}^s(0,2\pi)}\right\}.
$$
\end{lemma}
\begin{proof} See Lemma 3.5 of \cite{A}. We mention that related lemmas can be found elsewhere in the
literature, such as in \cite{BHL}.
\end{proof}
The set $\mathcal{C}_b^s$ is the open subset of ${\mathcal{H}}^s_M$ of functions whose graphs satisfy the ``chord-arc" condition.
{\it This condition precludes self-intersection of the graph.}
Note that this is true
even in the case where $b = 0$ since we have selected the strict inequality in the definition.
Of course if $b>0$, membership of $w$ in this set $\mathcal{C}_b^s$
implies that $|w_\alpha(\alpha)| \ge b$ \ for all $\alpha$.
Note that $H\gamma$ is real because $\gamma$ is real-valued.
Also note that the definition of $\widetilde{T}[\theta]$ implies that
$\widetilde{T}[\theta]/\partial_\alpha \widetilde{Z}[\theta] = 1/|\partial_\alpha \widetilde{Z}[\theta] |$ is also real.
Thus
\begin{equation}\label{WT is smooth}\begin{split}
{\bf{R}}e \left( (B[ \widetilde{Z}[\theta] ]\gamma)\, \widetilde{T}[\theta]\right)
&={\bf{R}}e\left( \left( K[\widetilde{Z}[\theta]]\gamma\right) \widetilde{T}[\theta] \right) + {\bf{R}}e\left( \left( {1 \over 2 i \partial_\alpha \widetilde{Z}[\theta]} H \gamma \right)
\widetilde{T}[\theta] \right)\\
&= {\bf{R}}e\left( \left( K[\widetilde{Z}[\theta] ]\gamma\right) \widetilde{T}[\theta] \right)
\end{split}
\end{equation}
and similarly
\begin{equation}\label{WN is not smooth}\begin{split}
{\bf{R}}e \left( (B[ \widetilde{Z}[\theta] ]\gamma)\, \widetilde{N}[\theta]\right)
={1 \over 2 |\partial_\alpha \widetilde{Z}[\theta]|} H \gamma
+ {\bf{R}}e( K[\widetilde{Z}[\theta] ]\gamma\ \widetilde{N}[\theta] ) .
\end{split}
\end{equation}
Therefore, counting derivatives and applying Lemmas \ref{Z props} and \ref{hilbert plus smooth},
we directly obtain the following regularity.
\begin{cor}
Let $s,s_1 \ge 1$, $b > 0$, $h>0$ and
$$
{\mathcal{U}}^s_{b,h}:= \left\{ \theta \in {\mathcal{U}}^s_h : \widetilde{Z}[\theta] \in \mathcal{C}^{s+1}_b \right \}.
$$
Then the mappings
$(\theta,\gamma) \mapsto B[ \widetilde{Z}[\theta] ]\gamma$
and $(\theta,\gamma) \mapsto {\bf{R}}e( B[ \widetilde{Z}[\theta] ]\gamma\ \widetilde{N}[\theta] )$
are smooth from ${\mathcal{U}}^s_{b,h} \times {\mathcal{H}}^{s_1}_\text{per}$ into ${\mathcal{H}}^{\min\left\{s,s_1\right\}}_\text{per}$.
Furthermore,
${\bf{R}}e( B[ \widetilde{Z}[\theta] ]\gamma\ \widetilde{T}[\theta] )$
is a smooth map from ${\mathcal{U}}^s_{b,h} \times {\mathcal{H}}^1_\text{per}$
into ${\mathcal{H}}^{s}_\text{per}$.
\end{cor}
\begin{cor} \label{zero mean}
$\widetilde{\Phi}(\theta,\gamma;c)$ is a smooth map from ${\mathcal{U}}^1_{b,h} \times {\mathcal{H}}^1_\text{per} \times {\bf{R}}$ into
$L^2_\text{per,0}:={\mathcal{H}}^0_{{\text{per}},0} $.
\end{cor}
\begin{proof} The fact that $\widetilde{\Phi}(\theta,\gamma;c)$ is a smooth map from ${\mathcal{U}}^1_{b,h} \times {\mathcal{H}}^1_\text{per} \times {\bf{R}}$
into $L^2_{\text{per}}$ follows from the previous corollary and the definition of $\widetilde{\Phi}$. Examination of the terms in $\widetilde{\Phi}$ shows that all but one are perfect derivatives, and
thus have mean value zero on $[0,2\pi]$. The remaining term is a constant times
$\sin\theta - \overline{\sin\theta}$, which also has mean zero.
Thus $\widetilde{\Phi} \in L^2_{{\text{per}},0}.$
\end{proof}
We introduce the ``inverse" operator
$$
\partial_\alpha^{-2} f (\alpha):= \int_0^\alpha \int_0^a f(s) ds da - {\alpha \over 2 \pi} \int_0^{2 \pi} \int_0^a f(s) ds da, $$
which is bounded from ${\mathcal{H}}^s_{{\text{per}},0}$ to ${\mathcal{H}}^{s+2}_\text{per}$.
Indeed, it is obvious that $\partial_\alpha^2 (\partial_\alpha^{-2} f)= f$,
so we only need to demonstrate the periodicity of $\partial_\alpha^{-2} f$ for any $f\in {\mathcal{H}}^s_{{\text{per}},0}$.
To this end, we compute
\begin{equation*}
\begin{split}
\partial_\alpha^{-2} f(\alpha+2\pi)&= \int_0^{\alpha + 2\pi} \int_0^a f(s) ds da - {\alpha + 2 \pi \over 2 \pi} \int_0^{2 \pi} \int_0^a f(s) ds da
\\&=
\int_{2 \pi}^{\alpha + 2\pi} \int_0^a f(s) ds da
- {\alpha \over 2 \pi} \int_0^{2 \pi} \int_0^a f(s) ds da
\\&= \int_{0}^{ \alpha} \int_0^{b} f(s) ds db
- {\alpha \over 2 \pi} \int_0^{2 \pi} \int_0^a f(s) ds da = \partial_\alpha^{-2} f(\alpha),
\end{split}
\end{equation*}
where the last line follows from the substitution $a = b+2\pi$ together with $\int_0^{b+2\pi} f(s)\, ds = \int_0^{b} f(s)\, ds$, which holds because $f$ is $2\pi$-periodic with mean zero.
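These two properties of the inverse operator can also be confirmed numerically. In the sketch below (ours; the mean-zero sample $f$ is an arbitrary choice) the operator is implemented by composite trapezoidal quadrature on two periods, and the result is compared with the closed form obtained by integrating $f$ by hand.

```python
import numpy as np

# Numerical check of P f(a) = int_0^a int_0^s f - (a/(2*pi)) * int_0^{2*pi} int_0^s f
# for a sample mean-zero f: P f should be 2*pi-periodic with (P f)'' = f.
n = 4000
a = np.linspace(0.0, 4.0 * np.pi, 2 * n + 1)    # two periods, to test periodicity
h = a[1] - a[0]
f = np.cos(2.0 * a) + 3.0 * np.sin(a)           # mean zero over one period

F = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * h / 2)))  # int_0^a f
G = np.concatenate(([0.0], np.cumsum((F[1:] + F[:-1]) * h / 2)))  # int_0^a int_0^s f
Pf = G - a / (2.0 * np.pi) * G[n]               # G[n] = double integral up to 2*pi

exact = (1.0 - np.cos(2.0 * a)) / 4.0 - 3.0 * np.sin(a)   # P f integrated by hand
```

On the Fourier side the operator simply divides the $k$-th mode by $-k^2$ (for $k \ne 0$), which is how the closed form `exact` was obtained and why the gain of two derivatives is immediate.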
\subsection{Final reformulation}
Using \eqref{WN is not smooth} in the first equation of \eqref{TW v4} yields the equation
$$
H \gamma + 2 \left \vert \partial_\alpha \widetilde{Z}[\theta] \right \vert
{\bf{R}}e \left( ( K[ \widetilde{Z}[\theta] ] \gamma)\widetilde{N}[\theta] \right) + 2 c \left \vert \partial_\alpha \widetilde{Z}[\theta] \right \vert \sin\theta = 0. $$
It will be helpful to break $\gamma$ up into the sum of its average value and a mean zero piece, so we let
$$
\gamma_1:=\gamma - \overline\gamma . $$
Applying $H$ to both sides and using $H^2 \gamma = -\gamma + \overline\gamma = -\gamma_1$, we obtain
\begin{equation}\label{gamma 1 eqn}
\gamma_1 - H \left \{ 2 \left \vert \partial_\alpha \widetilde{Z}[\theta] \right \vert
{\bf{R}}e \left( ( K[ \widetilde{Z}[\theta] ] (\overline\gamma + \gamma_1) ) \widetilde{N}[\theta] \right) + 2 c \left \vert \partial_\alpha \widetilde{Z}[\theta] \right \vert \sin\theta \right\} = 0.
\end{equation}
It will turn out that we are free to specify $\overline\gamma $ in advance, and so henceforth we will view $\overline\gamma $ as a constant in the equations, akin to $g$, $M$, $A$ or $\tau$.
Now one of the equations we wish to solve is $\theta_{\alpha \alpha} + \widetilde{\Phi}(\theta,\gamma;c)=0$.
We use $\partial_\alpha^{-2}$ to ``solve" this equation for $\theta$.
Keeping in mind that $\gamma = \overline\gamma + \gamma_1$, we define
\begin{equation} \label{captheta}
\Theta(\theta,\gamma_1;c) := - \partial_\alpha^{-2} \widetilde{\Phi}(\theta,\overline\gamma +\gamma_1;c) .
\end{equation}
Then
the second equation in \eqref{TW v4} is equivalent to
$\displaystyle \theta - \Theta(\theta,\gamma_1;c) = 0 $.
Knowing
that $\theta = \Theta$, we are also free to rewrite \eqref{gamma 1 eqn} as
$\gamma_1- \Gamma (\theta,\gamma_1;c) = 0$,
where
\begin{multline} \label{Gamma}
\Gamma (\theta,\gamma_1;c) \\:= 2 H \left \{
\left\vert \partial_\alpha \widetilde{Z}[\Theta(\theta,\gamma_1;c)] \right \vert
{\bf{R}}e \left( \left(K\left[ \widetilde{Z}[\Theta(\theta,\gamma_1;c)] \right](\overline\gamma +\gamma_1) \right)
\widetilde{N}[\Theta(\theta,\gamma_1;c)] \right)
\right. \\ \left. + c\left \vert \partial_\alpha \widetilde{Z}[\Theta(\theta,\gamma_1;c)] \right \vert \sin(\Theta(\theta,\gamma_1;c)) \right\}.
\end{multline}
Summarizing, our equations now have the form
\begin{equation} \label{TW v5}
\displaystyle \theta - \Theta(\theta,\gamma_1;c) = 0, \quad \gamma_1- \Gamma (\theta,\gamma_1;c) = 0.
\end{equation}
The set where the solutions will be situated is ${\mathcal{U}}={\mathcal{U}}_{0,0}$ where
\begin{multline} \label{solution set}
{\mathcal{U}}_{b,h}:=
\left\{ (\theta,\gamma_1;c) \in {\mathcal{H}}^1_{\text{per}} \times {\mathcal{H}}^1_{{\text{per}},0} \times{\bf{R}}: \right. \\ \left.
\theta \text{ is odd, } \gamma_1 \text { is even, }
\overline{\cos\theta} > h,
\widetilde{Z}[\theta] \in \mathcal{C}^2_b \text{ and } \widetilde{Z}[\Theta(\theta,\gamma_1;c)] \in \mathcal{C}^{3}_b
\right\}.
\end{multline}
These sets are given the topology of ${\mathcal{H}}^1_{\text{per}} \times {\mathcal{H}}^1_{{\text{per}},0} \times{\bf{R}}$.
Note that they are defined so that the functions have one derivative. The following theorem states
that $\Theta$ and $\Gamma$ gain an extra derivative.
\begin{theorem}\label{TWE5}{\bf (``Identity plus compact" formulation)}
For all $b, h>0$, the pair $(\Theta,\Gamma )$ is a compact map from ${\mathcal{U}}_{b,h}$ into ${\mathcal{H}}^2_{{\text{per}},\text{odd}} \times {\mathcal{H}}^2_{{\text{per}},0,\text{even}}$
and is smooth from ${\mathcal{U}}$ into ${\mathcal{H}}^2_{{\text{per}},\text{odd}} \times {\mathcal{H}}^2_{{\text{per}},0,\text{even}}$.
If $(\theta,\gamma_1;c) \in {\mathcal{U}}$ solves \eqref{TW v5},
then the pair
$(\widetilde{Z}[\theta](\alpha)+ct,\overline\gamma +\gamma_1(\alpha,t))$ is a spatially periodic,
symmetric traveling wave solution of \eqref{z eqn} and \eqref{gamma eqn v1} with speed $c$ and period $M$.
\end{theorem}
\begin{proof}
Observe that from the results of the previous section,
$\Theta(\theta,\gamma_1;c)$ is a smooth map from ${\mathcal{U}}^1_{b,h} \times {\mathcal{H}}^1_{{\text{per}},0} \times {\bf{R}}$ into ${\mathcal{H}}^2_{\text{per}}$
for any $b,h > 0$.
Careful unraveling of the definitions shows that
$\Gamma (\theta,\gamma_1;c)$ is a smooth map from the set
$\{ (\theta,\gamma_1;c) \in {\mathcal{U}}^1_{b,h} \times {\mathcal{H}}^1_{{\text{per}},0} \times {\bf{R}}:
\widetilde{Z}[\Theta(\theta,\gamma_1;c)] \in \mathcal{CA}^{3}_b \}$
into ${\mathcal{H}}^2_{{\text{per}},0}$. These facts, together with the uniform bound for fixed $b>0$ on the remainder operator $K$ in Lemma \ref{hilbert plus smooth}, imply that
the mapping $(\Theta,\Gamma)$ is compact from ${\mathcal{U}}_{b,h}$ into ${\mathcal{H}}^2_{{\text{per}},\text{odd}} \times {\mathcal{H}}^2_{{\text{per}},0,\text{even}}$, since ${\mathcal{H}}^2_{{\text{per}}}$ is compactly embedded in ${\mathcal{H}}^1_{{\text{per}}}$.
We conclude that $\Theta$ and $\Gamma $ are also smooth maps, but not necessarily compact, on the
union of the previous sets over all $b,h>0$, which is to say
that $(\Theta,\Gamma)$ is smooth on ${\mathcal{U}}$.
The second statement in the theorem is obvious from the previous discussion.
Finally, the subscripts ``odd'' and ``even'' in the target space for $(\Theta,\Gamma)$ above simply denote the subspaces consisting of odd and even functions.
That $(\Theta,\Gamma)$ preserves this symmetry can be checked directly from its definition; the computation is not short, but neither is it difficult, so we omit it.
\end{proof}
\section{Global Bifurcation}
\subsection {General considerations}\label{gen con}
Our basic tool is the following global bifurcation theorem, which is based on the use of Leray-Schauder degree.
It is fundamentally due to Rabinowitz \cite{R} and was later generalized by Kielh\"ofer \cite{K}.
\begin{theorem} \label{glbif} {\bf (General bifurcation theorem)}
Let $X$ be a Banach space and $U$ be an open subset of $X\times\mathbf{R}$.
Let $F$ map $U$ continuously into $X$.
Assume that
\begin{enumerate}[(a)]
\item the Fr\'echet derivative $D_{\xi}F(0,\cdot)$ belongs to $C(\mathbf{R},L(X,X))$,
\item the mapping $(\xi,c)\to F(\xi,c)-\xi$ is compact from $X\times{\bf{R}}$ into $X$,
and \item $F(0,c_0)=0$ and $D_{\xi}F(0,c)$ has an odd crossing number at $c=c_{0}.$
\end{enumerate}
Let
$\mathcal{S}$ denote the closure of the set of nontrivial solutions of $F(\xi,c)=0$ in
$X\times\mathbf{R}.$
Let $\mathcal{C}$ denote the
connected component of $\mathcal{S}$ to which $(0,c_{0})$ belongs.
Then one of the following alternatives is valid:
\begin{enumerate}[(i)]
\item $\mathcal{C}$ is unbounded; or
\item $\mathcal{C}$ contains a point $(0,c_{1})$ where $c_{0}\neq c_{1}$; or
\item $\mathcal{C}$ contains a point on the boundary of $U.$
\end{enumerate}
\end{theorem}
{The {\it crossing number} is the number of eigenvalues, counted with multiplicity, of $D_\xi F(0,c)$ that pass through
$0$ as $c$ passes through~$c_0$.}
In his original paper \cite{R} Rabinowitz assumed that $F$ has the form
$F(\xi,c) = \xi - cG(\xi)$. Kielh\"ofer's book \cite{K} permits the general form as above.
Theorem II.3.3 of \cite{K} states Theorem \ref{glbif} in the case that $U=X\times{\bf{R}}$.
The proof of Theorem \ref{glbif} with an open set $U$ is practically identical to that in \cite{K}.
We apply this theorem to our problem by fixing $b,h>0$ and setting $U = {\mathcal{U}}_{b,h}$, $X={\mathcal{H}}^1_{{\text{per}},\text{odd}} \times {\mathcal{H}}^1_{{\text{per}},0,\text{even}}$, $\xi = (\theta,\gamma_1)$, and
$F(\xi, c) = \xi - (\Theta(\xi,c),\Gamma(\xi,c))$. Then
the problem laid out in Theorem \ref{TWE5} fits into the framework of this theorem.
All we need to do is to choose $c_0$ so that the linearization has an odd crossing number when $c = c_0$.
In fact, the simplest case with crossing number one will suffice.
\subsection{Computation of the crossing number}
This calculation is difficult primarily due to the large number of terms we must differentiate.
Thus we introduce some notation which will help to compress the calculations.
For any map $\mu(\theta,\gamma_1;c)$,
we use $(\breve{\theta}, \breve{\gamma})$ to denote the direction of differentiation. To wit, we define:
\begin{multline}
\mu_0:=\mu(0,0;c) \quad \text{and}\\
D\mu:= D_{\theta,\gamma_1} \mu(\theta,\gamma_1;c) \big\vert_{(0,0;c)} (\breve{\theta},\breve{\gamma})
:=\lim_{\epsilon \to 0} {1 \over \epsilon}\left( \mu(\epsilon \breve{\theta},\epsilon \breve{\gamma};c)-\mu(0,0;c)\right).
\end{multline}
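Although it plays no role in the argument, this directional-derivative definition is easy to approximate numerically. The following sketch (assuming NumPy; the functional $Y(\theta)=\overline{\sin\theta}$ appears just below, and its linearization at zero is the mean of $\breve\theta$) compares a finite-difference quotient with the formula:

```python
import numpy as np

# Sketch (not part of the paper's argument): numerically verify the
# directional-derivative definition D mu = lim (mu(eps*dir) - mu(0))/eps
# for the sample functional Y(theta) = mean of sin(theta); its
# linearization at theta = 0 is DY = mean of breve_theta.
N = 512
alpha = 2 * np.pi * np.arange(N) / N          # periodic grid on [0, 2*pi)

def Y(theta):
    """Y(theta) = (1/2pi) * integral of sin(theta); trapezoid = mean on a periodic grid."""
    return np.mean(np.sin(theta))

breve_theta = np.sin(alpha) + 0.5             # an arbitrary direction
eps = 1e-6
fd = (Y(eps * breve_theta) - Y(0 * breve_theta)) / eps   # finite difference
exact = np.mean(breve_theta)                  # the formula DY = mean(breve_theta)
assert abs(fd - exact) < 1e-8
```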
We let $Q(\theta,\gamma_1) := (\overline\gamma + \gamma_1)^2$,
$Y(\theta) = \overline{\sin\theta}$, $\Sigma(\theta) = M / (2\pi \overline{\cos\theta})$,
and
$\widetilde{\mathsf{W}}^*[\theta,\gamma_1]:=
B[\widetilde{Z}[\theta]](\overline\gamma + \gamma_1)$.
It is to be understood that by $\sin$ and $\cos$ we mean the maps $\theta \to \sin\theta $ and $\theta \to \cos\theta $, respectively.
We will first compute the linearizations of $\Theta$ and $\Gamma $
while ignoring the restrictions to symmetric (even/odd) functions;
of course, computation of the full linearization will restrict in a natural way to the
linearization of the symmetric problem.
The following quantities and derivatives thereof are elementary.
$$
\sin_0 = 0, \quad \cos_0 = 1,\quad Q_0= \overline\gamma^2, \quad
Y_0 = 0, \quad \Sigma_0 = {M \over 2\pi},
\quad \widetilde{Z}_0= {M\over 2\pi} \alpha,
$$
$$
\quad (\partial_\alpha \widetilde{Z})_0= \left \vert \partial_\alpha \widetilde{Z} \right \vert_0 = {M \over 2\pi},
\quad
\widetilde{T}_0= 1,
\quad
\widetilde{N}_0 = i
\quad
\text{and} \quad
\widetilde{\mathsf{W}}^*_0 = 0.$$
$$
D\sin = \breve{\theta}, \quad D \cos = 0, \quad DQ = 2 \overline\gamma \breve{\gamma},\quad
D Y = {1 \over 2\pi} \int_0^{2 \pi} \breve{\theta}(a) da, \quad D \Sigma = 0,
$$
$$
D\widetilde{Z} ={i M \over 2 \pi} \left(
\int_0^\alpha \breve{\theta}(a) da - {\alpha \over 2\pi} \int_0^{2\pi} \breve{\theta}(a) da
\right),
\quad
D\partial_\alpha \widetilde{Z}={i M \over 2 \pi} \left(
\breve{\theta} - {1 \over 2\pi} \int_0^{2\pi} \breve{\theta}(a) da
\right),
$$
$$
D\left \vert \partial_\alpha \widetilde{Z}\right \vert = 0,\quad
D\widetilde{T}= i \left( \breve{\theta} - {1 \over 2\pi} \int_0^{2 \pi} \breve{\theta}(a) da \right)
\quad \text{and} \quad
D\widetilde{N} = -\breve{\theta} + {1 \over 2\pi} \int_0^{2 \pi} \breve{\theta}(a) da.
$$
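As a quick internal consistency check (a sketch, not part of the proof), the displayed formulas satisfy $\partial_\alpha\, D\widetilde{Z} = D\partial_\alpha \widetilde{Z}$. For a mean-zero direction $\breve\theta$ the linear-in-$\alpha$ term of $D\widetilde{Z}$ vanishes, so the identity can be verified with spectral differentiation:

```python
import numpy as np

# Consistency sketch (illustrative; the value of M is arbitrary): the
# displayed formulas should satisfy d/dalpha (D Z~) = D(d/dalpha Z~).
# We check this for one mean-zero direction breve_theta, for which
# D Z~ is periodic and FFT differentiation applies.
N, M = 256, 2 * np.pi
alpha = 2 * np.pi * np.arange(N) / N
bt = np.cos(alpha) + np.sin(2 * alpha)             # direction, zero mean

# D Z~ = (iM/2pi) * integral_0^alpha bt  (the mean term vanishes here)
int_bt = np.sin(alpha) + (1 - np.cos(2 * alpha)) / 2   # exact antiderivative
DZ = 1j * M / (2 * np.pi) * int_bt
DdZ = 1j * M / (2 * np.pi) * bt                    # formula for D(d/dalpha Z~)

kk = np.fft.fftfreq(N, d=1.0 / N)                  # integer wavenumbers
dDZ = np.fft.ifft(1j * kk * np.fft.fft(DZ))        # spectral derivative of D Z~
assert np.allclose(dDZ, DdZ, atol=1e-10)
```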
The computation of
$D\widetilde{\mathsf{W}}^*$ is somewhat more complicated.
By the product and chain rules,
\begin{equation}
\begin{split}
D\widetilde{\mathsf{W}}^*&= D\left[ B[\widetilde{Z}[\theta]] (\overline\gamma + \gamma_1) \right]
(\breve{\theta}, \breve{\gamma}) \\
& ={1 \over 2 i M} \mathrm{PV} \int_0^{2 \pi}
D \left[( \overline\gamma + \gamma_1(\alpha')) \cot \left( {\pi \over M} \left( \widetilde{Z}[\theta](\alpha) - \widetilde{Z}[\theta](\alpha')\right)\right) \right]d\alpha'
\\
&={1 \over 2 i M} \mathrm{PV} \int_0^{2 \pi}
\breve{\gamma}(\alpha') \cot \left( {\pi \over M} \left( \widetilde{Z}_0(\alpha) - \widetilde{Z}_0(\alpha')\right)\right)d\alpha'
\\
&-{\pi \over 2 i M^2} \mathrm{PV} \int_0^{2 \pi}
\overline\gamma \csc^2 \left( {\pi \over M} \left( \widetilde{Z}_0(\alpha) - \widetilde{Z}_0(\alpha')\right)\right)
\left(D \widetilde{Z}(\alpha) - D \widetilde{Z}(\alpha') \right) (\breve{\theta}) \ d\alpha' .
\end{split}
\end{equation}
Now we use the fact that $\displaystyle \widetilde{Z}_0(\alpha) = (M/2\pi)\alpha$ and the definition of $H$ to
see that the first of the two terms above is exactly
$\displaystyle {(\pi / i M)} H \breve{\gamma}.$
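This identification can be illustrated numerically. The sketch below (an illustration, not a step of the proof) discretizes the periodic principal-value integral with the alternating-point trapezoid rule, a standard quadrature for the cotangent kernel, and recovers $H\cos(k\alpha)=\sin(k\alpha)$:

```python
import numpy as np

# Illustrative check: at the flat state the kernel reduces to
# cot((alpha - alpha')/2), and (1/2pi) PV-int of cot((a-a')/2) f(a') da'
# is the periodic Hilbert transform H f, so H cos(k a) = sin(k a).
# The PV integral is discretized by the alternating-point trapezoid rule
# (sum over grid points of opposite parity, spacing doubled).
N = 256                                    # even number of grid points
alpha = 2 * np.pi * np.arange(N) / N
k = 3
f = np.cos(k * alpha)

Hf = np.zeros(N)
for i in range(N):
    j = np.arange(N)
    odd = (j - i) % 2 == 1                 # skip the singular point's parity class
    Hf[i] = (2.0 / N) * np.sum(f[j[odd]] / np.tan((alpha[i] - alpha[j[odd]]) / 2))

assert np.max(np.abs(Hf - np.sin(k * alpha))) < 1e-9
```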
The second term $T_2$ is
\begin{equation}
\begin{split}
&-{ \pi \overline\gamma \over 2 i M^2} \mathrm{PV} \int_0^{2 \pi}
\csc^2 \left( {1 \over 2} \left( \alpha - \alpha'\right)\right) \left(D\widetilde{Z}(\alpha) - D\widetilde{Z}(\alpha')\right)d\alpha'
\\
=&-{ \pi\overline\gamma \over i M^2} \mathrm{PV} \int_0^{2 \pi}
\cot \left( {1 \over 2} \left( \alpha - \alpha'\right)\right) {\partial \over \partial \alpha'} D
\widetilde{Z}(\alpha')d\alpha'
=-{2 \pi^2 \overline\gamma \over i M^2} H \left( \partial_\alpha D\widetilde{Z} \right).
\end{split}
\end{equation}
But
$$\displaystyle
\partial_\alpha D\widetilde{Z}
=
{i M \over 2 \pi} \left( \breve{\theta}(\alpha') -{1 \over 2\pi} \int_0^{2\pi} \breve{\theta}(a) da \right).
$$
Since $H$ annihilates constants, the second term $T_2$ is equal to
$-\displaystyle
({\pi \overline\gamma / M}) H \breve{\theta}.
$
Thus
\begin{equation} \label{DW}
D\widetilde{\mathsf W}^*= {\pi \over i M} H \breve{\gamma} - { \pi \overline\gamma \over M} H \breve{\theta}.
\end{equation}
In order to evaluate $D\Theta$, we have $\Theta_0 = 0$ and
\begin{multline}
D \Theta = -{1 \over \tau} \partial_\alpha^{-2} \partial_\alpha
D\left[
(c\cos - \Re( \widetilde{\mathsf W}^* \widetilde{T}))(\overline\gamma + \gamma_1)\right]\\+ {A\over \tau}\partial_\alpha^{-2} D \left[\frac{\partial_\alpha Q}{4\Sigma}
+2 g \Sigma \left( \sin - Y\right)
+{\Sigma}\partial_\alpha (c \cos-\Re(\widetilde{\mathsf W}^* \widetilde{T}))^2
\right].
\end{multline}
Carrying out $D$, we have
\begin{equation*}
\begin{split}
D \Theta =& -{1 \over \tau}\partial_\alpha^{-2}\partial_\alpha
\left[
(cD \cos - \Re( (D \widetilde{\mathsf W}^*) \widetilde{T}_0) - \Re( \widetilde{\mathsf W}_0^* (D\widetilde{T}))
)\overline\gamma
+ (c\cos_0 - \Re( \widetilde{\mathsf W}_0^* \widetilde{T}_0))\breve{\gamma}\right]
\\&+ {A\over \tau}\partial_\alpha^{-2} \left[\frac{ \Sigma_0 \partial_\alpha DQ - \partial_\alpha (Q_0) D\Sigma }{4\Sigma^2_0}
+2 g \Sigma_0 \left( D \sin - D Y\right)+ 2 g D \Sigma \left( \sin_0 - Y_0 \right) \right]
\\&
+ {A\over \tau}\partial_\alpha^{-2} \left[
(D\Sigma)\partial_\alpha (c \cos_0-\Re(\widetilde{\mathsf W}_0^* \widetilde{T}_0))^2 \right] \\&+
{A\over \tau}\partial_\alpha^{-2} \left[2 \Sigma_0\partial_\alpha[ (c \cos_0-\Re(\widetilde{\mathsf W}_0^* \widetilde{T}_0))
(c D\cos-\Re((D\widetilde{\mathsf W}^*) \widetilde{T}_0)-\Re(\widetilde{\mathsf W}_0^* (D\widetilde{T})))
]
\right].
\end{split}
\end{equation*}
When we use the expressions at the start of this section, this quantity reduces to
\begin{multline}
D \Theta = -{1 \over \tau}\partial_\alpha^{-2}\partial_\alpha
\left[
- \overline\gamma \Re( D \widetilde{\mathsf W}^* )
+ c\breve{\gamma}\right]
\\+ {A\over \tau}\partial_\alpha^{-2} \left[\frac{ \overline\gamma \partial_\alpha\breve{\gamma} }{2(M/2\pi)}
+
{g M \over \pi} P\breve{\theta}
\right]
+ {A\over \tau}\partial_\alpha^{-2} \left[
{M \over \pi} \partial_\alpha[ -c
\Re(D\widetilde{\mathsf W}^* )
]
\right],
\end{multline}
where
$$
P \breve{\theta} := \breve{\theta}-{1 \over 2 \pi}\int_0^{2\pi} \breve{\theta} (a) da.
$$
Using \eqref{DW} in this expression, we get
\begin{equation} \label{DTheta}
D \Theta =-{\pi \overline\gamma \over M} \left( {\overline\gamma \over \tau}
-{ c A M \over \pi \tau}
\right)
\partial_\alpha^{-2} \partial_\alpha
H \breve{\theta}
+{A g M \over \pi \tau} \partial_\alpha^{-2} P \breve{\theta}
+ \left( {A \overline\gamma \pi \over \tau M} -{ c \over \tau}\right)\partial_\alpha^{-2} \partial_\alpha \breve{\gamma}.
\end{equation}
Lastly, to compute $D\Gamma $, we have $\Gamma_{0} = 0$ and
by \eqref{WN is not smooth} and \eqref{Gamma},
$$
\Gamma =
2 H \left( \left \vert \partial_\alpha \widetilde{Z}[\Theta] \right \vert \Re\left(
\widetilde{\mathsf{W}}^*\left[\Theta,\overline\gamma + \gamma_1\right] \widetilde{N}\left[ \Theta \right] \right)
-{1 \over 2 } H \gamma_1\right) +2 c H\left( \left \vert \partial_\alpha \widetilde{Z}[\Theta] \right \vert \sin(\Theta)
\right).
$$
Differentiating, we get
\begin{equation}
\begin{split}
D \Gamma = &
2 H
\left\{ D \left \vert \partial_\alpha \widetilde{Z} \right \vert \circ D\Theta\ \Re\left(
\widetilde{\mathsf W}^*_0 \widetilde{N}_0 \right) \right.
+\left \vert \partial_\alpha \widetilde{Z}_0 \right \vert \Re\left(
D \left(\widetilde{\mathsf{W}}^*[\Theta,\overline\gamma +\gamma_1] \right)\widetilde{N}_0\right) \\ &\left.
+\left \vert \partial_\alpha \widetilde{Z}_0\right \vert \Re\left(
\widetilde{\mathsf W}^*_0\ D \widetilde{N} \circ D \Theta \right)
-{1 \over 2 } H \breve{\gamma}\right\}
+2 c H\left\{
D \left \vert \partial_\alpha \widetilde{Z} \right \vert \circ D\Theta\ \sin_0 +
\left \vert \partial_\alpha \widetilde{Z}_0 \right \vert D\sin \circ D \Theta \right \}
\\
= & \breve{\gamma}+
{M \over \pi}H \Re \left( i D \left(\widetilde{\mathsf{W}}^*[\Theta,\overline\gamma +\gamma_1] \right)
\right) +{c M \over \pi} H D \Theta
\end{split}
\end{equation}
because $D\left\vert\partial_\alpha\widetilde Z\right\vert = 0$ and $\widetilde{\mathsf{W}}^*_0=0$.
By \eqref{DW},
we have
$\displaystyle
D \left(\widetilde{\mathsf{W}}^*[\Theta,\overline\gamma +\gamma_1] \right)
= {\pi \over iM} H \breve{\gamma} - {\pi \overline\gamma \over M} H D\Theta $,
so that
\begin{equation} \label{DGamma}
D \Gamma = \breve{\gamma} - \Re (\breve{\gamma} - i\overline{\gamma} D\Theta) + \frac{cM}{\pi} HD\Theta =
{cM \over \pi} H D\Theta.
\end{equation}
Combining
\eqref{DW}, \eqref{DTheta} and \eqref{DGamma}, we see that the linearization of the mapping
$(\theta,\gamma_1) \to (\theta-\Theta,\gamma_1 -\Gamma )$ at $(0,0;c)$ is
\begin{equation}\label{Lc}\begin{split}
L _c
\left[
\begin{array}{c} \breve{\theta} \\ \breve{\gamma}
\end{array}
\right]& :=
\left[
\begin{array}{c}
\breve{\theta} - D\Theta\\
\breve{\gamma}-D\Gamma
\end{array}
\right]\\&
= \left[
\begin{array}{cc}
1 + {\pi \overline\gamma \over M} \left( {\overline\gamma \over \tau}
-{ c A M \over \pi \tau}
\right)
\partial_\alpha^{-2}\partial_\alpha
H
- {AgM \over \pi \tau} \partial_\alpha^{-2} P
&
- \left( {A \overline\gamma \pi \over \tau M} -{ c \over \tau}\right)\partial_\alpha^{-2} \partial_\alpha \\
{\overline\gamma c } \left( {\overline\gamma \over \tau}
-{ c A M \over \pi \tau}
\right)
H \partial_\alpha^{-2} \partial_\alpha
H - {c M^2Ag \over \pi^2 \tau} H\partial_\alpha^{-2} P
& 1
-c \left( {A \overline\gamma \over \tau } -{ cM \over \pi \tau}\right)H\partial_\alpha^{-2} \partial_\alpha
\end{array}
\right]\left[
\begin{array}{c} \breve{\theta} \\ \breve{\gamma}
\end{array}
\right]
\end{split}
\end{equation}
Our goal is to find those values of $c$ such that $L_c$ has a one-dimensional nullspace.
Because we are working with $2\pi$-periodic functions, we may expand them as
$$
\breve{\theta}(\alpha) = \sum_{k \in {\bf{Z}}} \widehat{\breve{\theta}}(k) e^{ik \alpha}\quad \text{and} \quad \breve{\gamma}(\alpha) = \sum_{k \in {\bf{Z}}'} \widehat{\breve{\gamma}}(k) e^{ik \alpha}.
$$
We have denoted ${\bf{Z}}' := {\bf{Z}} \setminus \left\{ 0 \right\}$.
We can eliminate the $k = 0$ coefficient for $\breve{\gamma}$ since it has zero mean.
The operators $\partial_\alpha$, $H$, $P$ and $\partial_\alpha^{-2}$ can be represented on the Fourier side in the usual way:
$$
\widehat{\partial_\alpha \mu}(k) = ik \widehat{\mu}(k),\quad \widehat{H \mu}(k) = -i\,\mathrm{sgn}(k) \widehat{\mu}(k)
$$
$$
\quad \widehat{P \mu}(k) = (1 - \delta_0(k))\widehat{\mu}(k)
\quad \text{and} \quad \widehat{\partial_\alpha^{-2} \mu} (k) = -{1 \over k^2} \widehat{\mu}(k),
$$
where $\delta_0(k) = 1$ for $k = 0$ and is otherwise zero.
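These multiplier representations can be verified on a concrete test function; a small sketch (NumPy's FFT conventions are an assumption of the illustration):

```python
import numpy as np

# Sketch verifying the Fourier-multiplier representations of d/dalpha,
# H, P and the operator inverting d^2/dalpha^2 on mean-zero functions.
N = 128
alpha = 2 * np.pi * np.arange(N) / N
f = np.sin(2 * alpha) + np.cos(5 * alpha)     # mean-zero test function

k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers
fh = np.fft.fft(f)

d_f = np.fft.ifft(1j * k * fh).real                        # multiplier ik
H_f = np.fft.ifft(-1j * np.sign(k) * fh).real              # multiplier -i*sgn(k)
P_f = np.fft.ifft(np.where(k == 0, 0.0, 1.0) * fh).real    # multiplier 1 - delta_0

m = np.zeros(N)
nz = k != 0
m[nz] = -1.0 / k[nz] ** 2                     # multiplier -1/k^2
ii_f = np.fft.ifft(m * fh).real

assert np.allclose(d_f, 2 * np.cos(2 * alpha) - 5 * np.sin(5 * alpha), atol=1e-10)
assert np.allclose(H_f, -np.cos(2 * alpha) + np.sin(5 * alpha), atol=1e-10)   # H sin = -cos, H cos = sin
assert np.allclose(P_f, f, atol=1e-10)        # f already has zero mean
# differentiating twice undoes the -1/k^2 multiplier on mean-zero modes:
dd_ii = np.fft.ifft((1j * k) ** 2 * np.fft.fft(ii_f)).real
assert np.allclose(dd_ii, f, atol=1e-10)
```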
Thus $L_c$ is represented on the frequency side as the Fourier multiplier
\begin{equation}\begin{split}
\widehat{L _c
\left[
\begin{array}{c} \breve{\theta} \\ \breve{\gamma}
\end{array}
\right]} (k) & =
\widehat{L _c}(k)
\left[
\begin{array}{c}\widehat{ \breve{\theta} }(k)\\ \widehat{\breve{\gamma}}(k)
\end{array}
\right] \end{split}
\end{equation}
where, for $k \ne 0$,
\begin{equation}
\begin{split}
\widehat{L _c}(k)
&
= \left[
\begin{array}{cc}
1 - {\pi \overline\gamma \over M} \left( {\overline\gamma \over \tau}
-{ c A M \over \pi \tau}
\right)
|k|^{-1} + {A gM \over \pi \tau} k^{-2}
&
i\left( { A \overline\gamma \pi \over \tau M} -{ c \over \tau}\right)k^{-1} \\
i{\overline\gamma c } \left( {\overline\gamma \over \tau}
-{ c A M \over \pi \tau}
\right)
k^{-1} -i {c M^2 A g \over \pi^2 \tau}\, \mathrm{sgn}(k)\, k^{-2}
& 1
+c \left( {A \overline\gamma \over \tau } -{ cM \over \pi \tau}\right)|k|^{-1}
\end{array}
\right]\end{split}
\end{equation}
and
$ \widehat{L_c}(0) $ is the identity.
Now we can easily compute the point spectrum of $L_c$.
In particular, $\lambda \in {\bf{C}}$ is an eigenvalue of $L_c$ if and only if
$\lambda$ is an eigenvalue of $\widehat{L_c}(k)$ for some integer $k$.
For any nonzero integer $k$,
an elementary computation shows that the two eigenvalues
of $\widehat{L_c}(k)$ are $1$ and
$$
\lambda_k(c) := 1+ \frac{2\overline\gamma c A M \pi -M^2c^2 - \overline\gamma ^2 \pi^2}{M\pi \tau} \left \vert k \right \vert^{-1} +\frac{g A M}{\pi \tau} \left \vert k \right \vert^{-2}.
$$
Since this expression is even in $k$, every eigenvalue of $L_c$ has even multiplicity.
So any crossing number for $L_c$ will necessarily be even.
However, the eigenvector of $\widehat{L_c}(k)$ associated to this eigenvalue is
$\displaystyle
\left[ \begin{array}{c}
\mathrm{sgn}(k)\, {i \pi /c M} \\
1
\end{array}
\right]
$
which in turn implies that
$$
\left[ \begin{array}{c}
{i \pi /c M} \\
1
\end{array}
\right] e^{i k \alpha}
\quad \text{and} \quad
\left[ \begin{array}{c}
-{i \pi /c M} \\
1
\end{array}
\right] e^{-i k \alpha}
$$
are the corresponding eigenfunctions for $L_c$ with eigenvalue $\lambda_k(c)$.
Of course, we can break these up into real and imaginary parts to get an equivalent basis for the eigenspace,
namely,
$$
\left[ \begin{array}{c}
-{(\pi /c M)} \sin(k \alpha) \\
\cos(k \alpha)
\end{array}
\right]
\quad \text{and} \quad
\left[ \begin{array}{c}
{ (\pi /c M)} \cos(k \alpha) \\
\sin(k \alpha)
\end{array}
\right].
$$
Only the first of these satisfies the symmetry properties ($\theta$ odd, $\gamma_1$ even)
required by our function space \eqref{solution set}.
Thus when we take account of the symmetry, the dimension of the eigenspace equals one.
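Before summarizing, the entrywise claims above can be confirmed numerically. In the sketch below all parameter values are arbitrary illustrative choices (assumptions of the test, not values taken from the text):

```python
import numpy as np

# Sketch: build the 2x2 symbol hat{L}_c(k) entrywise and confirm that its
# eigenvalues are 1 and lambda_k(c), with the claimed eigenvector.
gbar, c, A, g, M, tau, k = 0.3, 1.2, 0.5, 9.8, 2 * np.pi, 1.0, 3

u = gbar / tau - c * A * M / (np.pi * tau)
v = A * gbar / tau - c * M / (np.pi * tau)
Lhat = np.array([
    [1 - (np.pi * gbar / M) * u / abs(k) + A * g * M / (np.pi * tau * k**2),
     1j * (A * gbar * np.pi / (tau * M) - c / tau) / k],
    [1j * gbar * c * u / k - 1j * c * M**2 * A * g / (np.pi**2 * tau) * np.sign(k) / k**2,
     1 + c * v / abs(k)],
])

lam_k = 1 + (2 * gbar * c * A * M * np.pi - M**2 * c**2 - gbar**2 * np.pi**2) \
        / (M * np.pi * tau) / abs(k) + g * A * M / (np.pi * tau * k**2)

eigs = np.linalg.eigvals(Lhat)
assert np.min(np.abs(eigs - 1)) < 1e-10
assert np.min(np.abs(eigs - lam_k)) < 1e-10

# the claimed eigenvector for lambda_k
vvec = np.array([np.sign(k) * 1j * np.pi / (c * M), 1.0])
assert np.allclose(Lhat @ vvec, lam_k * vvec, atol=1e-10)
```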
We summarize the spectral analysis as follows.
\begin{proposition} {\bf (Spectrum of $L_c$)}
Let $L_{c}$ be the linearization of the mapping
$(\theta,\gamma,c) \in {\mathcal{U}} \to (\theta-\Theta,\gamma-\Gamma) \in {\mathcal{H}}^2_{\text{per}} \times {\mathcal{H}}^2_{{\text{per}},0}$ at $(0,0;c)$.
The spectrum of $L_{c}$ consists of $1$ and the point spectrum
$$
\sigma_{\text{pt}}:=\left\{ \lambda_k(c): = 1+ \frac{2\overline\gamma c A M \pi -M^2c^2 - \overline\gamma ^2 \pi^2}{M\pi \tau} k^{-1} +\frac{g A M}{\pi \tau} k^{-2}
: k \in {\bf{N}}
\right\}.
$$
Moreover, each eigenvalue $\lambda$ has geometric and algebraic multiplicity $N_\lambda(c)$ where $$
N_\lambda(c):= \left \vert
\left\{ k \in {\bf{N}} \text{ such that } \lambda_k (c)= \lambda \right\}
\right \vert.
$$
The eigenspace for $\lambda$ is
$$
E_\lambda(c):=\text{span} \left\{
\left[ \begin{array}{c}
-{(\pi /c M)} \sin(k \alpha) \\
\cos(k \alpha)
\end{array}
\right]: \text{$k \in {\bf{N}}$ such that }\lambda_k(c) = \lambda
\right\}.
$$
\end{proposition}
\begin{cor}\label{crossing}
Fix $A$, $g$, $\overline\gamma \in {\bf{R}}$ and $\tau,M>0$. Let
$$
\mathcal{K}:=\left\{ k \in {\bf{Z}} :
\pi^2 \overline\gamma ^2 A^2 k ^2 + \pi \tau k^3 M - \pi^2 k^2 \overline\gamma ^2 + k A g M^2 > 0 \text{ and }
A g M/(\pi \tau k) \notin {\bf{N}} \setminus \left\{ k \right\} \right\}. $$
For $k \in {\bf{N}}$, let
$$
c_\pm(k):={\pi \overline\gamma A \over M} \pm {1 \over kM} \sqrt{\pi^2 \overline\gamma ^2 A^2 k ^2 + \pi \tau k^3 M - \pi^2 k^2 \overline\gamma ^2 + k A g M^2}. $$
Then $\left \vert \mathcal{K} \right \vert = \infty$
and $L_{c}$ has crossing number equal to one at a real value $c = c_\pm(k)$
if and only if $k \in \mathcal{K}$.
If $A=0$, the first condition reduces to $k > \pi\overline\gamma^{\,2}/(\tau M)$ and the second holds automatically, so $\mathcal K=\left\{ k \in {\bf{Z}} : k > \pi\overline\gamma^{\,2}/(\tau M)\right\}$.
\end{cor}
\begin{proof}
Fix $k \in {\bf{N}}$.
We are looking for the values $c\in{\bf{R}}$ for which $\lambda_k(c)=0$.
Since $\lambda_k$ is quadratic in $c$, there are at most two roots,
and a routine calculation shows that $\lambda_k(c) = 0$ if and only if $c = c_\pm(k)$.
The first condition in the definition of $\mathcal{K}$ shows that $c_\pm(k)$ is a real number.
Thus $\lambda =0$ is an eigenvalue.
We must compute its crossing number and the first step is to calculate its multiplicity, denoted $N_0(c_\pm(k))$.
Thus, given $c=c_\pm(k)$, we must find all $l \in {\bf{N}}$
such that $\lambda_l(c_\pm(k)) = 0.$ Clearly $l = k$ works.
A small amount of algebra shows that the only other possible solution of $\lambda_l(c_\pm(k)) = 0$
is $$l = l_k:= AgM/(\pi\tau k).$$
Another calculation shows that
$$
\lambda_k(c_\pm(k) + \epsilon) = \mp {2 \sqrt{\pi^2 \overline\gamma ^2 A^2 k ^2 + \pi \tau k^3 M - \pi^2 k^2 \overline\gamma ^2 + k A g M^2} \over k^2 \pi \tau} \epsilon - {M \over \tau k \pi} \epsilon^2
$$
and
$$
\lambda_{l_k}(c_\pm(k)+\epsilon) = \mp {2 \sqrt{\pi^2 \overline\gamma ^2 A^2 k ^2 + \pi \tau k^3 M - \pi^2 k^2 \overline\gamma ^2 + k A g M^2} \over A g M} \epsilon - {k \over A g} \epsilon^2.$$
Now assume that $k \in \mathcal{K}$. The second condition in the definition of $\mathcal{K}$ shows that $l_k \notin {\bf{N}} \setminus \left\{k\right\}$ and thus $N_0(c_\pm(k)) = 1$. The first condition guarantees that the
coefficient of $\epsilon$ in the expansion of $\lambda_k(c_\pm(k) + \epsilon)$ is non-zero. Thus $\lambda_k(c)$ changes sign as $c$ passes through $c_\pm(k)$: the crossing number is equal to one and thus is odd. This proves the ``if'' direction of the corollary.
The ``only if'' direction follows by showing that the crossing number is either two or zero when one of the conditions is not met. The details are simple, so we omit them.
To show that $\left \vert \mathcal{K} \right \vert = \infty$, observe that, since $\tau, M > 0$, we have
$$\displaystyle
\lim_{k \to \infty} \left( \pi^2 \overline\gamma ^2 A^2 k ^2 + \pi \tau k^3 M - \pi^2 k^2 \overline\gamma ^2 + k A g M^2 \right) = \infty.
$$
Therefore the first condition in the definition of $\mathcal{K}$ is met for all $k$ sufficiently large. Likewise, no matter the choices of the parameters
$A,g,M$ and $\tau$, $\displaystyle \lim_{k \to \infty} AgM/(\pi \tau k) =0 $, so the second
condition also holds for $k$ sufficiently large. Thus $\mathcal{K}$ contains all $k > k_0$ for some finite $k_0 \in {\bf{N}}$, whence $\left \vert \mathcal{K} \right \vert = \infty$.
\end{proof}
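The root and sign-change computations in this proof are easy to check numerically; again, all parameter values below are arbitrary illustrative choices:

```python
import numpy as np

# Sketch: check that c_pm(k) are the roots of lambda_k and that lambda_k
# changes sign across them (the simple, odd crossing used above).
gbar, A, g, M, tau, k = 0.3, 0.5, 9.8, 2 * np.pi, 1.0, 3

def lam(c, k):
    return 1 + (2 * gbar * c * A * M * np.pi - M**2 * c**2 - gbar**2 * np.pi**2) \
           / (M * np.pi * tau * k) + g * A * M / (np.pi * tau * k**2)

disc = np.pi**2 * gbar**2 * A**2 * k**2 + np.pi * tau * k**3 * M \
       - np.pi**2 * k**2 * gbar**2 + k * A * g * M**2
assert disc > 0                                   # first condition defining K
c_plus  = np.pi * gbar * A / M + np.sqrt(disc) / (k * M)
c_minus = np.pi * gbar * A / M - np.sqrt(disc) / (k * M)

assert abs(lam(c_plus, k)) < 1e-10 and abs(lam(c_minus, k)) < 1e-10
eps = 1e-3
assert lam(c_plus - eps, k) * lam(c_plus + eps, k) < 0   # sign change: odd crossing
```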
\subsection {Application of the abstract theorem}
An appeal to Theorem \ref{glbif} has the following consequence.
\begin{theorem} {\bf (Global bifurcation)} \label{technical version}
Let the surface tension $\tau>0$, period $M>0$,
Atwood number $A\in{\bf{R}}$, and average vortex strength $\overline\gamma\in{\bf{R}}$ be given.
Let ${\mathcal{U}}$,
$\mathcal{K}$ and $c_\pm(k)$ be defined as above.
Let $\mathcal{S} \subset {\mathcal{U}}$ be the closure (in ${\mathcal{H}}^1_{\text{per}} \times {\mathcal{H}}^1_{{\text{per}},0} \times{\bf{R}}$) of the set of all solutions of \eqref{TW v5}
for which either $\theta \ne 0$ or $\gamma_1 \ne 0$.
Given $k \in \mathcal{K}$,
let $\mathcal{C}_\pm(k)$ be the connected component of $\mathcal{S}$ which contains $(0,0;c_\pm(k))$.
Then
\begin{enumerate}[(I)]
\item \emph{either} $\mathcal{C}_\pm(k)$ is unbounded;
\item \emph{or} $\mathcal{C}_\pm(k) = \mathcal{C}_+(l)$ or $\mathcal{C}_\pm(k) = \mathcal{C}_-(l)$
for some $l \in \mathcal{K}$ with $l \ne k$;
\item \emph{or} $\mathcal{C}_\pm(k)$ contains a point on the boundary of ${\mathcal{U}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
We saw in Theorem \ref{TWE5} that the mapping $(\Theta,\Gamma)$ is compact on ${\mathcal{U}}_{b,h}$ for all $b,h>0$, and Corollary \ref{crossing} shows that there are always choices of $c$ which result
in an odd crossing number. Thus Theorem \ref{glbif} can be applied, with outcomes which coincide with the outcomes (I)--(III)
in Theorem \ref{technical version}, except with ${\mathcal{U}}$ replaced by ${\mathcal{U}}_{b,h}$ in (III). Since ${\mathcal{U}} = \bigcup_{b,h>0} {\mathcal{U}}_{b,h}$, an easy topological argument
gives (III) as above.
\end{proof}
This general statement leads in turn to our main conclusion.
\begin{proof} [Proof of Theorem \ref{main result}]
By Proposition \ref{TWE5}, a solution $(\theta,\gamma_1;c)$ of \eqref{TW v5} gives rise to
symmetric periodic traveling wave solutions of the two dimensional gravity-capillary vortex sheet problem by
taking $z(\alpha,t)=ct + \widetilde{Z}[\theta](\alpha)$ and $\gamma = \overline\gamma + \gamma_1.$ (Note that, as in the proof of Proposition \ref{TWE2},
this implies that $N=\widetilde{N}[\theta]$ and $T=\widetilde{T}[\theta].$ We will use the two equivalent notations interchangeably in what follows.)
Note that in the statement of Theorem \ref{main result} it is stated that the traveling wave solutions are smooth, whereas the solutions given in Theorem \ref{technical version}
are stated to merely belong to ${\mathcal{H}}^1_{\text{per}} \times {\mathcal{H}}^1_{{\text{per}},0} $.
However, the maps $\Theta$ and $\Gamma $ in \eqref{TW v5} are smoothing.
Therefore a simple bootstrap argument shows that $\theta$ and $\gamma_1$
are in ${\mathcal{H}}^s_{\text{per}}$ for any $s$ and thus in $C^\infty$. The corresponding traveling waves
are likewise smooth; the details are routine and omitted.
Each of the outcomes (a)-(f) in Theorem \ref{main result} corresponds to one of
the alternatives (I)-(III) of Theorem \ref{technical version}. Fix $k \in \mathcal{K}$.
It is straightforward to see that
alternative (II) in Theorem \ref{technical version} is interpreted as outcome (e) in Theorem \ref{main result}.
Now consider alternative (III).
If $(\theta,\gamma_1;c)\in \mathcal{C}_\pm(k)$ is on the boundary of ${\mathcal{U}}$,
then inspection of the definition of ${\mathcal{U}}$ shows that
\begin{equation} \label{altIII}
\text{either } \ \overline{\cos\theta} = 0\quad
\text{or} \quad\widetilde{Z}[\theta] = \widetilde{Z}[\Theta] \notin \mathcal{CA}^3_0.
\end{equation}
The reconstruction of the curve $S(t)$ from $\theta$ via $\widetilde{Z}$
(recalling that $\overline{\sin\theta} =0$ for solutions) shows that
the length of $S(t)$ per period is given by
$$
L[\theta]:=\int_0^{2\pi} \left \vert \partial_\alpha \widetilde{Z}[\theta](a)\right \vert da
= {M \over \overline{\cos\theta}}.
$$
In case $\overline{\cos\theta} = 0$,
the length of the curve reconstructed from $(\theta,\gamma_1;c)$ is formally infinite.
Since $(\theta,\gamma_1;c)$ is in the closure of the set of nontrivial traveling wave solutions,
the more precise statement is that
there is a sequence of solutions whose lengths diverge, which is outcome (a).
Now suppose we have the other alternative in \eqref{altIII},
namely, that $\widetilde{Z}[\theta] \notin \mathcal{CA}_0^3$.
Since $b=0$ in this space, it means that
$$
\inf_{\alpha,\alpha' \in [0,2\pi] } \zeta = 0, \quad \text{ where } \zeta(\alpha,\alpha') :=
\left\vert{ \widetilde{Z}[\theta](\alpha') - \widetilde{Z}[\theta](\alpha) \over \alpha' - \alpha } \right\vert.
$$
Moreover,
\begin{equation} \label{not equal}
\lim_{\alpha' \to \alpha} \zeta(\alpha,\alpha')
= \left\vert \partial_\alpha \widetilde{Z}[\theta](\alpha) \right\vert
= \frac M{2\pi\overline{\cos\theta} }
= {L[\theta]\over 2\pi} \ge 1.
\end{equation}
But clearly
$\zeta(\alpha,\alpha')$
is a continuous function of $(\alpha,\alpha')$ for $\alpha \ne \alpha'$.
Since the diagonal limit \eqref{not equal} is at least $1$, the infimum,
which vanishes, is attained (by periodicity and compactness) at some pair of values $(\alpha_\star,\alpha'_\star)$
with $\alpha_\star \ne \alpha'_\star$.
Hence $\zeta(\alpha_\star,\alpha'_\star)=0$,
which in turn implies that
$$
\left\vert \widetilde{Z}[\theta](\alpha'_\star) - \widetilde{Z}[\theta](\alpha_\star) \right\vert = 0. $$
This means that the curve reconstructed from $\theta$ intersects itself, outcome~(d).
Now consider alternative (I).
Then $\mathcal{C}_\pm(k)$ contains a sequence of solutions $\left\{ (\theta_n,\gamma_{1,n};c_n)\right\}$
for which
$$
\lim_{n \to \infty} \left( |c_n| + \|\theta_n\|_{{\mathcal{H}}^1_{\text{per}}} + \| \gamma_{1,n} \|_{{\mathcal{H}}^1_{\text{per}}} \right) = \infty,
$$
so that at least one of the three terms on the left diverges.
By combining (\ref{reconstruct condition}), (\ref{this is z}), and (\ref{win}), we see that $|\partial_{\alpha}\widetilde{Z}[\theta_{n}]|=\sigma_{n},$ and
the length of one period of the interface is thus proportional to $\sigma_{n}.$
If $\sigma_{n}\rightarrow\infty,$ then outcome (a) has occurred; thus, we may assume that
$\sigma_{n}$ remains bounded above independently of $n.$ Since the length of one period of the curve may not vanish (by periodicity),
we also see that $\sigma_n$ is bounded below (away from zero) independently of $n.$
Considering again (\ref{reconstruct condition}), we see that $\sigma_n$ being bounded above implies that $\overline{\cos\theta_n}$
is bounded away from zero.
Suppose first that $|c_n|\to\infty$
but $\|\theta_n\|_{{\mathcal{H}}^1_{\text{per}}} + \| \gamma_{1,n} \|_{{\mathcal{H}}^1_{\text{per}}}$ is bounded.
We see from \eqref{wn eqn} that
$$\|c_{n}\sin\theta_{n}\|_{H^{1}}=\|\Re(W^{*}_{n}N_{n})\|_{H^{1}}.$$
Since $W_{n}^{*}=B[\widetilde{Z}[\theta_{n}]]\gamma_{n},$
this implies
$$\|c_{n}\sin\theta_{n}\|_{H^{1}}=\|\Re(B[\widetilde{Z}[\theta_{n}]]\gamma_{n}\ N_{n})\|_{H^{1}}.$$
We then use (\ref{WN is not smooth}) to write this as
$$\|c_{n}\sin\theta_{n}\|_{H^{1}}=\left\|\frac{1}{2\sigma_n}H\gamma_n + \Re(K[\widetilde{Z}[\theta_n]]\gamma_n\ N_n)\right\|_{H^{1}}.$$
We have remarked above that $\frac{1}{\sigma_n}$ is bounded independently of $n,$ and we see that $H\gamma_n$ is uniformly bounded in $H^{1}$
since $\gamma_{n}$ is.
Applying Lemma \ref{hilbert plus smooth} gives an estimate for the operator $K,$ and we find the following:
$$\|c_{n}\sin\theta_n\|_{H^{1}}\leq C\|\gamma_{n}\|_{H^{1}}+C\exp\{C\|\widetilde{Z}[\theta_n]\|_{H^{1}}\}\|\gamma_{n}\|_{H^{1}}\|N_{n}\|_{H^{1}}.$$
Since $\theta_{n}$ is, by assumption, bounded in $H^{1},$ we see from (\ref{this is z}) and (\ref{tangent def 2}) that $\widetilde{Z}[\theta_n]$ and $N_n$ are as well.
We conclude that
$c_n \sin\theta_n$ is bounded in ${\mathcal{H}}^{1}_{{\text{per}}},$ independently of $n.$
Therefore $\|\sin\theta_n\|_{{\mathcal{H}}^1_{\text{per}}} \to 0,$ and by Sobolev embedding,
$\sin\theta_n\to 0$ uniformly.
This implies that $\theta_n$ converges to a multiple of $\pi;$
the uniform convergence and the continuity and oddness of $\theta_n$ make it straightforward to see that this multiple must be zero.
Note also, then, that $|\cos \theta_n| \to 1,$ uniformly.
Continuing, we integrate \eqref{gamma**} once, finding that the quantity
\begin{equation} \label{abc}
(c_n\cos\theta_n - \Re( W^*_n T_n))\gamma_n
-2A\left(\frac{1}{8}\frac{\gamma_n^{2}}{\sigma_{n}}+
g\sigma_{n} \int^\alpha \sin\theta_n\, d\alpha +{\sigma_{n} \over 2} \{c_n \cos\theta_n -\Re(W^*_n T_n)\}^2 \right) \end{equation}
is bounded in $L^2_{\text{per}}$.
Recalling again that $W_{n}^{*}=B[\widetilde{Z}[\theta_{n}]]\gamma_{n},$ we see that
(\ref{WT is smooth}) gives a formula for $\Re(W_{n}^*T_{n}).$
Arguing as in our previous use of Lemma \ref{hilbert plus smooth}, we see that
$\Re(W_{n}^{*}T_{n})$ is bounded in ${\mathcal{H}}^{1}_{{\text{per}}}.$
If $A=0$, we then deduce that $c_n\gamma_n\cos\theta_n $ is bounded in $L^2_{\text{per}}$
and therefore $\gamma_n \to 0$ in $L^2_{\text{per}}$.
Thus the average, which is the constant $\overline{\gamma}$, must be zero;
so if $\bar{\gamma}\neq 0,$ this is a contradiction.
Now assume that $A\ne 0$. Dividing (\ref{abc}) by $c_{n}^{2},$ we see that all
the terms go to zero as $n\to\infty$ except
$\frac{\sigma_{n}}{2}(\cos\theta_n)^2.$ Since the quotient itself tends to zero as well,
$\frac{\sigma_{n}}{2}(\cos\theta_n)^2$ must go to zero, which is a contradiction
since $|\cos\theta_n|\to 1$ uniformly.
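Explicitly, dividing each term of (\ref{abc}) by $c_{n}^{2}$ gives
$$\frac{(c_n\cos\theta_n - {\bf{R}}e( W^*_n T_n))\gamma_n}{c_{n}^{2}}
-\frac{2A}{c_{n}^{2}}\left(\frac{1}{8}\frac{\gamma_n^{2}}{\sigma_{n}}+g\sigma_{n} \int^\alpha \sin\theta_n\, d\alpha\right)
-A\sigma_{n}\left\{\cos\theta_n -\frac{{\bf{R}}e(W^*_n T_n)}{c_{n}}\right\}^{2},$$
and since $\gamma_{n},$ ${\bf{R}}e(W_{n}^{*}T_{n}),$ and $\sigma_{n}$ are bounded while $|c_{n}|\rightarrow\infty,$ only the final term can fail to vanish in the limit.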
If $\bar{\gamma}=0$ and $A=0,$ then we do not rule out $|c_{n}|\rightarrow\infty;$ this is
possibility (f) of the theorem.
If $\| \theta_n\|_{{\mathcal{H}}^1_{\text{per}}}$ diverges, then either $\theta_n$ or $\partial_\alpha \theta_n$
diverges in $L^2_{\text{per}}$.
Since $\theta$ is the tangent angle to $S (t)$, the curvature is exactly
$\kappa(\alpha) = \partial_\alpha \theta(\alpha)/\sigma. $
Inspection of (\ref{reconstruct condition}) indicates that if $\sigma_{n}\rightarrow0,$ then $\overline{\cos\theta_{n}}\rightarrow\infty.$ This clearly cannot be the case, however, and thus $\sigma_{n}$
cannot go to zero.
Recall that we assume that $\sigma_{n}$ is bounded above.
If it is $\partial_\alpha \theta_n$ that diverges, then
we see that the curvature diverges, which is to say that we have outcome (b).
On the other hand, suppose that it is $\theta_n$ that diverges in $L^2_{\text{per}}$.
Since $\theta_n$ is odd and periodic, $\theta_n(0)=\theta_n(2 \pi) = 0$.
If the $L^2_{\text{per}}$-norm of $\theta_n$ diverges then of course its $L^\infty$-norm also diverges
and so does $\partial_\alpha \theta_n$.
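These implications can be written out: since $\theta_{n}(0)=0$ and the interval of periodicity is $[0,2\pi],$ the fundamental theorem of calculus and the Cauchy--Schwarz inequality give
$$\|\theta_{n}\|_{L^{2}_{\text{per}}}\leq\sqrt{2\pi}\,\|\theta_{n}\|_{L^{\infty}}
\leq\sqrt{2\pi}\int_{0}^{2\pi}|\partial_{\alpha}\theta_{n}|\,d\alpha
\leq 2\pi\,\|\partial_{\alpha}\theta_{n}\|_{L^{2}_{\text{per}}},$$
so divergence of $\|\theta_{n}\|_{L^{2}_{\text{per}}}$ forces divergence of both $\|\theta_{n}\|_{L^{\infty}}$ and $\|\partial_{\alpha}\theta_{n}\|_{L^{2}_{\text{per}}}.$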
This means that the curvature of the reconstructed interface diverges.
Thus $\|\theta_n\|_{{\mathcal{H}}^1_{\text{per}}}$ diverging implies outcome~(b).
If $\| \gamma_{1,n}\|_{{\mathcal{H}}^1_{\text{per}}}$ diverges, then either
$\gamma_{1,n}$ or $\partial_\alpha \gamma_{1,n}$ diverges in $L^2_{\text{per}}$.
Suppose that it is the former.
From Section \ref{eom}, the jump in the tangential velocity across the interface
is related to $\gamma$ by \eqref{jump}.
By the reconstruction method and the definition of the length $L$ above, the jump is given by
$ j(\alpha,t) =2 \pi (\overline\gamma + \gamma_1(\alpha,t))/ L[\theta].$
If $L[\theta_n]$ remains bounded, then clearly the jump $j$ given above diverges in $L^2_{\text{per}}$,
which is outcome (c). If $L[\theta_n]$ diverges, we have outcome (a).
Likewise, if $\partial_\alpha \gamma_{1,n}$ diverges in $L^2_{\text{per}}$, then
either the derivative of the jump diverges, outcome (c), or else the length diverges, outcome (a).
\end{proof}
\end{document} |